Research Political Issues

I’ve avoided discussing politics here, although not for lack of interest. The problem with discussing politics is that it’s customary for people to say much based on little information. Nevertheless, politics can have a substantial impact on science (and, we might hope, vice versa). It’s primary election time in the United States, so the topic is timely, although the issues are not.

There are several policy decisions which substantially affect the development of science and technology in the US.

  1. Education The US has great contrasts in education. The top universities are very good places, yet the grade school education system produces mediocre results. For me, the contrast between a public education and Caltech was bracing. For many others attending Caltech, it clearly was not. Upgrading the K-12 education system in the US is a long-standing, chronic problem which I know relatively little about. My own experience is that a basic attitude of “no child unrealized” is better than “no child left behind”. A fair claim can also be made that the US just doesn’t invest enough.
  2. Respect Lack of respect for science and technology is routinely expressed in many ways in the US.
    1. The most blatant form of disrespect is scientific censorship. This is easily understood as a generality: you choose to spend a large fraction of your life learning to interpret some part of the world. After years, you come to some conclusion about its nature. Then someone with no particular experience or expertise tells you to alter it.
    2. A more refined form of lack of respect is simply lack of presence in decision making. This isn’t necessarily intentional: many people simply make decisions from the gut, and then come up with reasons to justify their decision. This style explicitly cuts out the deep thinking of science. Many policies could have been better informed by a serious consideration of even basic science:
      1. The oil of Iraq is fundamentally less valuable if we are going to tackle global warming.
      2. Swapping gasoline for a hydrogen-based transportable energy source is dubious because it introduces another energy-storage conversion at which to lose energy. The same goes for swapping bioethanol for gasoline. In contrast, hybrid and electric vehicles actually recover substantial energy from regenerative braking, and a plug-in hybrid could run off electricity in typical commuter usage.
      3. The Space Shuttle is a boondoggle design. The rocket equation implies that the ratio of initial to final mass for vehicles reaching earth orbit must be at least a factor of e^2.5 (it’s actually e^2.93 for the Space Shuttle). Making the system reusable means that most of this mass returns to earth, so the payload deliverable into space is only about 1.2% of the liftoff mass. A better-designed system might deliver payloads a factor of 4 larger, or be much smaller.
      4. Passenger inspection at airports is another poor policy from the perspective of science. It isn’t effective, and there is no cost-efficient way to make it effective against a motivated opponent; solid evidence for this is the continued use of mules to smuggle drugs. The basic problem, from a chemistry point of view, is that too much can be done with a small amount of mass. Deterrence and limitation (armored cockpits and active resistance, for example) are fine policies.
    3. Lack of support. The simplest form of lack of respect is simply lack of support. The case for federal versus corporate funding of basic science and technology development is very simple: the benefit to society of conducting such work dramatically exceeds the benefit any one agent within society (such as a company) could capture from it. Of late, investment in core science has been an anemic 0.05% of GDP, and visa issues hamstring broader technology development.
  3. Confidence This is primarily related to the technology side of science and technology. Many policy decisions are made without confidence in the ability of technologists to adapt. This comes in at least two flavors.
    1. The foreordained solution. Policy often comes in the form “we use approach X to solve problem Y” (some examples are above). This demonstrates overconfidence by policy makers in their ability to pick the winner, and a lack of confidence in the ability of technologists to solve problems. It also represents an opportunity for large established industries to reap huge payoffs at taxpayer expense. The X Prize represents the opposite approach, and it has been radically more effective by any reasonable standard.
    2. Confusion about the meaning of wealth. Some people believe that wealth is about what you have. For a society, however, it seems much better to measure wealth in terms of what the society can do. Policy makers often forget that science and technology are a capability when it comes time to think of a solution. For example, someone with no confidence in our ability to create affordable plug-in hybrids might think it necessary to go to war for oil.
  4. Stability People can’t program, do science, or invent new things when they are worried about more immediate events. There are several destabilizing trends in the US right now which, either now or in the future, may make it hard to focus on anything but immediate concerns.
    1. Debt and money supply. The federal debt of the US government is about 3.5 times the federal budget. This is bad for the simple reason that investors buying US treasury bonds aren’t investing in new technology. The destabilizing concern, however, is more subtle. Since World War II, the US dollar has become the standard currency for exchange around the world. Since government debt creates a temptation to (effectively) print money, the number of dollars in circulation has been growing rapidly. But a growing number of dollars means the currency is devaluing, which makes owning dollars undesirable. I don’t know of a previous world currency that has ceased to be one, but basic economics says that bad things happen to dollar-based savings if all the dollars flow back into the US. So far, the decline of the dollar has been relatively gradual, but a very disruptive cliff might exist out there somewhere. Policies which increase debt (like cutting taxes while increasing spending) exacerbate this problem. There is no fix once the dollar loses world-currency status, because confidence can be lost quickly, but not regained.
    2. Health Care. The US is running an experiment to determine how large a fraction of GDP can be devoted to health care. Currently it’s over 15%, in first place among nations, and growing. This is even worse than it sounds, because many comparable countries in Europe (and Japan) have older populations, which should generally be more expensive to care for. In the present situation, because health care is incredibly expensive, losing health insurance (which is typically tied to a job) is potentially catastrophic for any individual.
    3. Wealth Asymmetry. The US has shifted toward a substantially more asymmetric division of wealth since the 1970s. An asymmetric division of wealth is not fundamentally bad: there needs to be room for great success to imply great rewards. However, casually correlating science and technology development with a map of Gini coefficients reveals that a large Gini coefficient and substantial science and technology development do not coincide. The problem is that wealth becomes inheritable, and it’s very unlikely that the wealth is inherited by someone interested in science and technology. Wealth is now scheduled to become perfectly inheritable in 2010 in the US.
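The rocket-equation arithmetic behind the Space Shuttle point can be checked with a short sketch. The liftoff and payload masses below are approximate public figures, used only for illustration:

```python
import math

# Tsiolkovsky rocket equation: m_initial / m_final = exp(delta_v / v_exhaust).
# For the Space Shuttle this ratio works out to about e^2.93.
mass_ratio = math.exp(2.93)
orbit_mass_fraction = 1 / mass_ratio  # fraction of liftoff mass reaching orbit

# Approximate public figures (assumptions for illustration):
liftoff_mass_kg = 2_030_000  # full stack at liftoff
payload_mass_kg = 24_400     # maximum payload to low Earth orbit

payload_fraction = payload_mass_kg / liftoff_mass_kg

print(f"mass ratio:          {mass_ratio:.1f}")
print(f"mass reaching orbit: {orbit_mass_fraction:.1%} of liftoff mass")
print(f"payload delivered:   {payload_fraction:.1%} of liftoff mass")
```

Roughly 5% of the liftoff mass reaches orbit, but most of that is the reusable orbiter itself, which returns to earth; the payload left over is about the 1.2% of liftoff mass quoted above.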

I’m sure some of these issues are endemic to many other parts of the world as well, because there are fundamental conceptual difficulties with investing in the unknown instead of the known.

The Machine Learning Award goes to …

Perhaps the biggest CS research prize is the Turing Award, which has a $0.25M cash prize associated with it. It appears that none of the awards so far have been for anything like machine learning (the closest are perhaps the database awards).

In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize and, perhaps more importantly, recognition. One such award has been given for machine learning: to Robert Schapire and Yoav Freund for AdaBoost.

In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this:

  1. There is no coherent community. People drift in and out of the central conferences all the time, most of the author names from 10 years ago do not occur in the conferences of today, and the entire subject area is fairly new. On the other hand, there is at least a core group of people who have stayed around.
  2. Machine learning work doesn’t last. Almost every paper is forgotten because the goals change, there isn’t any real progress, or there are no teachable foundations. But this is true of all fields of research.
  3. There is no consensus (or clarity) on what’s important. The field is fractured between many very different viewpoints: statistical, empirical, AI, and theoretical. Prioritizing results across these viewpoints is hard. We can hope that people who have been around for a while learn to appreciate the other viewpoints; the Turing Award, after all, spans an even more diverse set of interests.

I think there are some sound reasons why we-as-a-field-of-research should want a field-wide award.

  1. Aspiration Perhaps the most valuable aspect of an award is that it gives people an incentive to aim for something in the long term. The closest approximation we have right now is the “best paper” awards at individual conferences. Best-paper awards have a role, but it’s not the same one. Ten years from now, which of today’s papers, across many different conferences, will seem most significant?
  2. Representation One function of an award is that it tells other people what we consider good work. In an academic reference frame, it gives information of the form “this person deserves tenure”. From a less cynical point of view, having a short list of “what’s important” that outsiders can peruse might be quite helpful as a starting point for understanding what’s going on.
  3. Crystallization Research is a process of discovering information and placing it carefully in context. An award has some role in furthering that process.

The worst part of any award is administering it. How do you avoid wasting time and playing favorites while keeping the higher level vision of what might be useful in the long term? I don’t have any great insight, except that it would need to be done.

The Privacy Problem

Machine Learning is rising in importance because data is being collected for all sorts of tasks where it either wasn’t previously collected, or for tasks that did not previously exist. While this is great for Machine Learning, it has a downside—the massive data collection which is so useful can also lead to substantial privacy problems.

It’s important to understand that this is a much harder problem than many people appreciate. The AOL data release is a good example. To those doing machine learning, the following strategies might be obvious:

  1. Just delete any names or other obviously personally identifiable information. The logic here seems to be “if I can’t easily find the person then no one can”. That doesn’t work as demonstrated by the people who were found circumstantially from the AOL data.
  2. … then just hash all the search terms! The logic here is “if I can’t read it, then no one can”. It’s also trivially broken by a dictionary attack—just hash all the strings that might be in the data and check to see if they are in the data.
  3. … then encrypt all the search terms and throw away the key! This prevents a dictionary attack, but it is still entirely possible to do a frequency analysis. If 10 terms appear with known relative frequencies in public data, then finding 10 encrypted terms with the same relative frequencies gives very good evidence for what those terms are.
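The dictionary attack in strategy (2) takes only a few lines to demonstrate. A minimal sketch, assuming SHA-256 hashing and a toy candidate list (both are illustrative assumptions, not details of the AOL release):

```python
import hashlib

def hash_term(term: str) -> str:
    # "Anonymize" a search term by hashing it.
    return hashlib.sha256(term.encode("utf-8")).hexdigest()

# The released log contains only hashes of the search terms.
published_log = [hash_term("cheap flights"), hash_term("back pain")]

# The attack: hash every candidate string that might be in the data
# and check whether it appears.
candidates = ["pizza near me", "cheap flights", "back pain"]
lookup = {hash_term(t): t for t in candidates}

recovered = [lookup.get(h, "<unknown>") for h in published_log]
print(recovered)  # → ['cheap flights', 'back pain']
```

With real query logs, the candidate list is just a large dictionary of common queries, which is why hashing provides essentially no protection.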

All of these strategies turn out to be broken. For those not familiar with Machine Learning, other obvious strategies also turn out not to work well.

  1. Just don’t collect the data. We are not too far off from a world where setting the “please don’t collect my information” flag in your browser means volunteering to have your searches return less relevant results, to not find friends, to not filter spam, etc… If legislation effectively set that flag for everyone, the effect would be very substantial. Many internet companies run off of advertising, so eliminating the ability to do targeted advertising would eliminate the ability of these companies to exist.
  2. …Then just keep aggregations of the data! Aggregating data is very bad for machine learning in general. When we are figuring out how to do machine learning it’s even worse because we don’t know in advance which aggregations would be most useful.
  3. …Then keep just enough data around and throw out everything else! Unfortunately, there is no such thing as “enough data”. More data is always better.

This is a particularly relevant topic right now, because it’s news and because CMU and NSF are organizing a workshop on the topic next month, which I’m planning to attend. However, this is not simply an interest burst—the long term trend of increasing data collection implies this problem will repeatedly come up over the indefinite future.

The privacy problem breaks into at least two parts.

  1. Cultural Norms. Historically, almost no monetary transactions were recorded and there was a reasonable expectation that people would forget a visitor. This is rapidly changing with the rise of credit cards and cameras. This change in what can be expected is profoundly uncomfortable.
  2. Power Balance. Data is power. The ability to collect and analyze large quantities of data which many large organizations now have or are constructing increases their power relative to ordinary people. This power can be used for good (to improve services) or for bad (to maximize monopoly status or for spying).

The cultural norm privacy problem is sometimes solvable by creating an opt-in or opt-out protocol. This is particularly helpful on the internet, because a user could simply request “please don’t record my search” or “please don’t record which news articles I read”. Needing to do this for every search or every news article would be annoying, but that is easily fixed by a system-wide setting: perhaps a special browser cookie saying “please don’t record me” that any site could check. None of this helps for cameras (where no interface exists) or monetary transactions (where the transaction itself determines whether or not some item is shipped).
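Honoring such an opt-out would require very little code on a site's side. A minimal sketch, assuming a hypothetical cookie named do-not-record (the cookie name and the logging interface are both illustrative assumptions):

```python
from http.cookies import SimpleCookie

def should_record(cookie_header: str) -> bool:
    # Honor a hypothetical system-wide "do-not-record" opt-out cookie.
    cookies = SimpleCookie(cookie_header)
    flag = cookies.get("do-not-record")
    return flag is None or flag.value != "1"

search_log = []

def handle_search(query: str, cookie_header: str) -> None:
    # Record the query only if the user has not opted out.
    if should_record(cookie_header):
        search_log.append(query)

handle_search("course schedules", "session=abc; do-not-record=1")  # not recorded
handle_search("course schedules", "session=abc")                   # recorded
print(search_log)  # → ['course schedules']
```

The hard part is not the check itself but getting every site to perform it, which is why the post suggests a standardized, system-wide setting.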

The power balance privacy problem is much more difficult. Some solutions that people attempt are:

  1. Accept the change in power balance. This is the default action. There are plenty of historical examples where large organizations have abused their power, so providing them more power to abuse may be unwise.
  2. Legislate a halt. Forbid cameras in public places. Forbid the collection or retention of data by any organization. The problem with this method is that technology simply isn’t moving in this direction. At some point, we may end up with cameras and storage devices so small, cheap, and portable that forbidding their use is essentially absurd. The other difficulty with this solution is that it keeps good things from happening. For example, a reasonable argument can be made that the British were effective at tracking bomb planters because the cameras of London helped them source attacks.
  3. Legislate an acceleration. Instead of halting the collection of data, open it up to more general use. One example of this is cameras in police cars in the US. Recordings from these cameras can often settle disputes very definitively. As technology improves, it’s reasonable to expect cameras just about anywhere people are in public. Some legislation and good engineering could make these cameras available to anyone. This would involve a substantial shift in cultural norms—essentially people would always be in potential public view when not at home. This directly collides with the “privacy as a cultural norm” privacy problem.

The hardness of the privacy problem mentioned at the beginning of the post implies difficult tradeoffs.

  1. If you have cultural norm privacy concerns, then you really don’t appreciate method (3) for power balance privacy concerns.
  2. If you value privacy greatly and the default action is taken, then you prefer monopolistic marketplaces, because the advantage of holding a large amount of private data is a prohibitive barrier to new market entrants.
  3. If you want the internet to work better, then there are limits on how little data can be collected.

All of the above is even murkier because what can be done with data is not fully known, nor is what can be done in a privacy sensitive way.

Asking questions

There are very substantial differences in how question asking is viewed culturally. For example, all of the following are common:

  1. If no one asks a question, then no one is paying attention.
  2. To ask a question is disrespectful of the speaker.
  3. Asking a question is admitting your own ignorance.

The first view seems to be the right one for research, for several reasons.

  1. Research is quite hard: it’s difficult, while preparing a presentation, to guess in advance which parts people won’t understand. Consequently, it’s very common to lose people, and no worthwhile presenter wants that.
  2. Real understanding is precious. By asking a question, you are really declaring “I want to understand”, and everyone should respect that.
  3. Asking a question wakes you up. I don’t mean from “asleep” to “awake” but from “awake” to “really awake”. It’s easy to drift through something sort-of-understanding. When you ask a question, especially because you are on the spot, you will do much better.

Some of these effects might seem minor, but they accumulate over time, and their accumulation can have a big effect in terms of knowledge and understanding by the questioner as well as how well ideas are communicated. A final piece of evidence comes from checking cultural backgrounds. People with a cultural background that accepts question asking simply tend to do better in research. If this isn’t you, it’s ok. By being conscious of the need to ask questions and working up the courage to do it, you can do fine.

A reasonable default is that the one time not to ask a question is when the speaker (or the environment) explicitly says “let me make progress in the talk”.

The View From China

I’m visiting Beijing for the Pao-Lu Hsu Statistics Conference on Machine Learning.

I had several discussions about the state of Chinese research. Given the large population and economy, you might expect substantial research, more than has been observed at international conferences. The fundamental problem seems to be the Cultural Revolution, which lobotomized higher education and the research associated with it. There has been a process of slow recovery since then, which has begun to be felt in the research world via increased participation in international conferences and (now) conferences in China.

The amount of effort going into construction in Beijing is very impressive—people are literally building a skyscraper at night outside the window of the hotel I’m staying at (and this is not unusual). If a small fraction of this effort is later focused onto supporting research, the effect could be very substantial. General growth in China’s research portfolio should be expected.