Carbon in Computer Science Research

Al Gore’s film and gradually more assertive and thorough science have managed to mostly shift the debate on climate change from “Is it happening?” to “What should be done?” In that context, it’s worthwhile to think a bit about what can be done within computer science research.

There are two things we can think about:

  1. Doing Research At a cartoon level, computer science research consists of some combination of commuting to and from work, writing programs, running them on computers, writing papers, and presenting them at conferences. A typical computer has a power draw on the order of 100 watts, which works out to 2.4 kilowatt-hours/day. Looking up David MacKay’s reference on power usage per person, it becomes clear that this is a relatively minor part of the lifestyle, although it could become substantial if many more computers are required. Much larger costs are associated with commuting (which is common to many jobs) and attending conferences. Since local commuting is shared across many people, and there are known approaches (typically public transportation) for more efficient commuting, I expect researchers can piggyback on improvements in public transportation to reduce commuting costs. In fact, the situation for researchers may be better than average, as the nature of the job can make commuting avoidable, at least on some days.
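The arithmetic above is easy to check; a quick sketch using the round, order-of-magnitude numbers from the text:

```python
# Back-of-the-envelope check of the figures above. 100 W is the
# order-of-magnitude draw quoted in the text, not a measurement.

watts = 100
hours_per_day = 24

kwh_per_day = watts * hours_per_day / 1000   # Wh -> kWh
print(kwh_per_day)   # 2.4 kWh/day for an always-on machine

# MacKay-style per-person energy budgets run on the order of 100 kWh/day,
# so one always-on computer is only a few percent of the total lifestyle.
print(kwh_per_day / 100)   # roughly 0.024, i.e. a few percent
```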

    Presenting at conferences is the remaining problem area, essentially due to travel by airplane to and from a conference. Travel by airplane has an energy cost similar to travel by car over the same distance, but we typically take airplanes for very long distances. Unlike cars, typical airplane usage requires stored energy in a dense form. For example, there are no serious proposals I’m aware of for battery-powered airplanes, because all existing rechargeable batteries have an energy density around 1/10th that of hydrocarbon fuel (which makes sense given that about 3/4 of the mass for a hydrocarbon fire is oxygen in the air). This suggests airplane transport may be particularly difficult to adapt towards low or zero carbon usage. The plausible approaches I know involve either using electricity (from where?) to inefficiently crack water for hydrogen, or the biofuel approach where hydrocarbons are made by plants, with neither of these approaches particularly far along in development. If these aren’t developed, it seems we should expect fewer conferences, more regional conferences, Europe with its extensive fast train network to be less impacted, and more serious effort towards distributed conferences. For the last, it’s easy to imagine with existing technology having simultaneous regional conferences which are mutually videoconferenced, and we aren’t far from being able to handle a fully interactive videobroadcast amongst an indefinitely large number of participants. As a corollary of fewer conferences, other interactive mechanisms (for example research blogs) seem likely to grow.

  2. Research Topics The keyword for research topics is efficiency, and it is not a trivial concern on a global scale. In computer science, a few algorithms (such as quicksort and hashing) have been developed which substantially and broadly improved real-world efficiency, but the real driver of efficiency so far has been hardware development, which has phenomenally improved efficiency for several decades.

    Many of the efficiency improvements are sure to remain hardware based, but software is becoming an essential component. One basic observation about efficient algorithms is that for problems admitting an efficient parallel solution (counting is a great example), the parallel algorithm is generally more energy-efficient, because energy use is typically superlinear in clock speed. As an extreme example, the human brain, which is deeply optimized by evolution for energy efficiency, typically runs at 100 Hz or 100 kHz.
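The superlinear-energy point can be sketched numerically. Assuming dynamic power scales roughly as the cube of clock frequency (voltage scales roughly with frequency, and P ∝ V²f), halving the clock and doubling the cores finishes in the same wall time at a quarter of the energy. All constants below are illustrative, not measured:

```python
# Sketch of why parallelism saves energy, under the assumption that
# dynamic power ~ f**3 (since P = C * V^2 * f and V scales roughly with f).

def energy(ops, freq):
    """Energy to run `ops` operations at clock `freq` (arbitrary units)."""
    power = freq ** 3          # the superlinear power-vs-clock assumption
    time = ops / freq
    return power * time        # energy = power * time, i.e. ops * f**2

ops = 1e9
serial = energy(ops, freq=2.0)             # one core at 2x clock
parallel = 2 * energy(ops / 2, freq=1.0)   # two cores at 1x, same wall time
print(serial / parallel)  # 4.0: the serial version costs 4x the energy
```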

    Although efficiency suggests parallel algorithms, this should not be done blindly. For example, in machine learning the evidence I’ve seen so far suggests that online learning (which is admittedly harder to parallelize) is substantially more efficient than batch-style learning, so I expect online approaches to be more efficient than the map-reduce-based machine learning typically seen in the Mahout project.
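For concreteness, here is a minimal sketch of the online style: one cheap update per example over a data stream, rather than repeated sweeps over the full dataset. The data, step size, and number of passes are purely illustrative:

```python
# Minimal online learning sketch: single-pass SGD on (x, y) pairs for
# least squares. All numbers here are illustrative, not a benchmark.

def online_sgd(stream, lr=0.1):
    w = 0.0
    for x, y in stream:                # one cheap update per example
        w -= lr * (w * x - y) * x      # gradient of 0.5 * (w*x - y)**2
    return w

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # roughly y = 2x
w = online_sgd(data * 10)   # simulate a stream of 30 examples
print(round(w, 1))          # close to the underlying slope of 2
```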

    A substantial difficulty with parallel algorithms is the programming itself. In this regard, there is plenty of room for programming language work as well.

8 Replies to “Carbon in Computer Science Research”

  1. Beyond the deep technical aspects of VM and algorithmic optimizations, we can look at places where algorithms can enable changes in society or business.

    One example is running an e-commerce shop and sending parcels by mail; these are far more energy efficient than buying in normal retail stores.

    For that parcel delivery, I think it was FedEx that managed to squeeze one extra delivery per truck per day by using better routing software.

    Public transit, car- and bike-sharing are areas where ML experts could contribute.

    Our power grids are old and will have to be updated to handle renewable sources. To add to the complexity, we can also help balance supply and demand for energy by selectively shutting off connected appliances (or charging your electric vehicle when the wind blows).
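As a toy illustration of that last point, a deferrable load can simply be scheduled into the hours with the most forecast renewable supply. The forecast numbers and the charging budget below are made up:

```python
# Toy demand-response sketch: defer a deferrable load (an EV charger)
# to the windiest forecast hours. All numbers are invented for illustration.

wind_supply = [0.2, 0.9, 1.1, 0.4, 1.3, 0.3]   # forecast output per hour
charge_hours = 2                                # hours of charging needed

# Greedy schedule: pick the hours with the highest forecast supply.
ranked = sorted(range(len(wind_supply)),
                key=lambda h: wind_supply[h], reverse=True)
schedule = sorted(ranked[:charge_hours])
print(schedule)   # [2, 4] for this toy forecast
```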

    There is a lot of low-hanging fruit. Some of it will merely require applying existing knowledge; the rest, developing new approaches. Either way, it’s an easier sell than saying we have to be deprived of flying 🙂

    1. While machine learning techniques can surely make some contributions to sustainability, let’s be realistic: planning and scheduling has been a classic field of operations research for almost sixty years now. It’s certainly not a problem waiting for the machine learning hammer. (And the OR guys already do a lot of environmental modeling; see http://greenor.wordpress.com/ and, for example, http://mat.tepper.cmu.edu/blog/?p=247)

      1. “Realistic”? Sounds like a mild put down. Ok, I’m not an expert in the nuances between OR and ML. The OR people I know aren’t likely to try an evolutionary algorithm to design a transit system. Maybe I don’t know enough about their field?

        Does the smart grid which I alluded to and Frank Schilder mentioned also fall under OR?

        What contributions would you envision?

        1. I think I should restate what I said before in more polite terms. What I meant was that it is easy to get excited about possible contributions of ML to sustainability, but that one should not be naive enough to ignore the state of the art in the fields one is talking about.

          For example, regarding transit systems: vehicle routing problems, delivery and pickup problems, etc. are all very well covered by OR and in fact environmental objectives such as CO2 footprint are already optimized for as of today. Many of these systems use machine learning techniques for predicting demand, congestion, etc. When one is talking about possible contributions ML can make, these should be formulated in contrast with the state-of-the-art. To do this one needs to be aware of what the state-of-the-art actually is.

          Similar things hold for many real-world resource-allocation problems: resource allocation has been important before sustainability considerations came into play. Because it has been important before, there is a large body of research already.

  2. In addition to making algorithms more efficient, another ML research area could focus on how to use energy more efficiently and hence reduce our carbon footprint. I think the so-called smart-grid offers a lot of potential for machine learning approaches (see http://en.wikipedia.org/wiki/Smart_grid).

    Here is one example where several ML approaches are tested for building better wind prediction models used by wind farms:

    A. Kusiak, H.-Y. Zheng, and Z. Song, Wind Farm Power Prediction: A Data-Mining Approach, Wind Energy, Vol. 12, No. 3, 2009, pp. 275-293
    http://www.icaen.uiowa.edu/~ankusiak/Journal-papers/Wind_Farm_Prediction.pdf

  3. What about the pedagogy of algorithmic complexity? CS students are taught to evaluate their algorithms in terms of “orders of magnitude” and big-O notation… it’s easy to lose track of a constant here and there, and the constant is the difference between 2 kW and 3 kW. I agree that it’s essential to teach these fundamental concepts, but is there a way to do it without losing track of real carbon costs?
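One way to make this point concrete in class is to count operations instead of wall time: two algorithms with identical big-O can differ by exactly the constant that shows up in the power bill. A sketch, with deliberately artificial "extra work":

```python
# Two O(n) summations with different constant factors: big-O treats them
# as identical, but the operation counts (a stand-in for energy) differ.

def sum_lean(xs):
    ops, total = 0, 0
    for x in xs:
        total += x          # one unit of work per element
        ops += 1
    return total, ops

def sum_padded(xs):
    ops, total = 0, 0
    for x in xs:
        total += x          # the useful work...
        _ = x * x           # ...plus gratuitous extra work
        _ = x + 1
        ops += 3
    return total, ops

xs = list(range(1000))
t1, ops1 = sum_lean(xs)
t2, ops2 = sum_padded(xs)
assert t1 == t2             # same answer, same asymptotics...
print(ops2 / ops1)          # ...3x the operations, hence roughly 3x the energy
```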

  4. To follow up on the Mahout reference, its JIT compilation strategy perhaps deserves some thought. In its usual form, every CPU in a grid compiles the code at run time. Given the design of today’s computers, CPU power expenditure is not the largest fraction of the power consumed by a machine, but it is still energy that could easily be amortized across all runs by compiling to machine code once. The savings may not be insignificant either, considering the number of computers the code runs on and the number of times it is run.

  5. On big, high-performance computers, more efficient algorithms just lead to more, higher-resolution, and longer jobs, up to the capacity of the cluster.

Comments are closed.