Headroom for AI development

(Dylan Foster and Alex Lamb both helped in creating this.)

In thinking about what makes a good research problem, it’s sometimes helpful to switch from what is understood to what is clearly possible. This encourages us to think beyond simply improving the existing system. Throughout the history of machine learning, researchers have argued for fixing an architecture and using it for short-term success, ignoring the potential for long-term disruption. For example, the speech recognition community spent decades focusing on Hidden Markov Models at the expense of other architectures, before eventually being disrupted by advances in deep learning. Support Vector Machines were disrupted by deep learning, and convolutional neural networks were displaced by transformers. This pattern may repeat for the current transformer/large language model (LLM) paradigm. Here are some quick calculations suggesting it may be possible to do significantly better along multiple axes. Examples include the following:

  • Language learning efficiency: A human baby can learn a good model for human language after observing 0.01% of the language tokens typically used to train a large language model.
  • Representational efficiency: A tiny Portia spider with a brain a million times smaller than a human can plan a course of action and execute it over the course of an hour to catch prey.
  • Long-term planning and memory: A squirrel caches nuts and returns to them after months of experience, which would correspond to keeping billions of visual tokens in context using current techniques.

The core of this argument is that it is manifestly viable to do better along multiple axes, including sample efficiency and the ability to perform complex tasks requiring memory. All these examples highlight advanced capabilities that can be achieved at scales well below what is required by existing transformer architectures and training methodologies (in terms of either data or compute). This is in no way meant as an attack on transformer architectures; they are a highly disruptive technology, accomplishing what other types of architectures have not, and they will likely serve as a foundation for further advances. However, there is much more to do.

Next, we delve into each of the examples above in greater detail.

Sample complexity: The language learning efficiency gap

The sample efficiency gap is perhaps best illustrated by considering the core problem of language modeling, where a transformer is trained to learn language. A human baby starts with no appreciable language but learns it well before adulthood. Reading at 300 words per minute with 1.3 tokens/word on average implies 6.5 tokens/second. Speaking is typically about half of reading speed, implying about three tokens per second. Sleeping and many other daily activities of course involve no tokens at all. Overall, one language token per second is a reasonable rough estimate of what a child observes. At this rate, 31 years must pass before they observe a billion tokens. Yet speculations about GPT-4 suggest it was trained on four orders of magnitude more tokens than a human observes in the process of learning (a back-of-envelope sketch of this arithmetic follows the list below). Closing this language learning efficiency gap (or, more generally, sample efficiency gap) can have significant impact at multiple scales:

  • Large models: Organizations have already scraped most of the internet and exhausted natural sources for high-quality tokens (e.g., arXiv, Wikipedia). To continue improving the largest models, better sample efficiency may be required.
  • Small models: In spite of significant advances, further improvements to sample efficiency may be required if we want small language models (e.g., at the 3B scale) to reach the same level of performance as frontier models like GPT-4.
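
As a back-of-envelope check of the arithmetic above, here is a minimal sketch; the reading/speaking rates and the GPT-4-class training-token count are rough assumptions taken from the discussion, not measured values:

```python
# Back-of-envelope token budget for a human language learner.
# All rates below are rough assumptions from the discussion above.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~3.15e7

reading_tokens_per_sec = 300 / 60 * 1.3       # 300 words/min * 1.3 tokens/word ~= 6.5
speaking_tokens_per_sec = reading_tokens_per_sec / 2   # ~3 tokens/sec
average_tokens_per_sec = 1.0                  # averaged over sleep and non-language activity

years_to_a_billion = 1e9 / (average_tokens_per_sec * SECONDS_PER_YEAR)
print(f"Years to observe 1B tokens at 1 token/sec: {years_to_a_billion:.1f}")   # ~31.7

llm_training_tokens = 1e13                    # speculated order of magnitude for GPT-4-class models
print(f"Sample efficiency gap: ~{llm_training_tokens / 1e9:.0e}x")               # ~4 orders of magnitude
```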

There are several common arguments against the existence of a language learning efficiency gap, but they appear unconvincing.

Maybe a better choice of tokens is all you need?

This can’t be entirely ruled out, but the Phi series was an effort in this direction, with the latest model trained on 10T tokens. That still implies a four-orders-of-magnitude efficiency gap between a human and a model which remains generally weaker than GPT-4 along most axes. It is possible that more sophisticated interactive data collection approaches could help close this gap, but this is largely unexplored.

Maybe language learning is evolutionary?

The chimpanzee-human split is estimated to have occurred between 5M and 13M years ago, resulting in a 35 million base-pair difference. The appearance of language is estimated to have occurred between 2.5M and 150K years ago. Estimating divergence at 10M years ago, language at 1M years ago, and a stable rate of evolution on both lineages (so only 1/10 of the window overlaps language, and only half of the difference accumulated on the human side) suggests a crude upper bound of 35M/10/2 = 1.75M base pairs (or around 3.5M bits) on the amount of DNA encoding language inheritance. That’s around 5 orders of magnitude less than the number of parameters in a modern LLM, so this is not a viable explanation for the language learning efficiency gap.

On the other hand, it could be the case that the evolutionary lineage of humans evolved most language precursors long before actual language. The human genome has about 3.1B base pairs, with about one-third of proteins primarily expressed in the brain. Using an estimate of 1B brain-related base pairs (around 2B bits), this is still around two orders of magnitude smaller than the LLMs in use today, so it’s not a viable explanation for the language learning efficiency gap either. It is plausible that the structure of neurons in a human brain, which strongly favors sparse connections over the dense connections favored by a GPU, is advantageous for learning purposes.
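
The two DNA-based bounds above can be written out as a short worked calculation; the divergence dates, base-pair counts, and 2-bits-per-base-pair encoding are the rough assumptions stated in the text:

```python
# Crude upper bounds on the DNA "bits" that could encode language, per the argument above.
chimp_human_bp_diff = 35e6       # base-pair difference since the split (assumption from text)
divergence_years = 10e6          # estimated divergence time
language_years = 1e6             # estimated age of language
bits_per_base_pair = 2           # 4 possible bases -> 2 bits each

# Only 1/10 of the divergence window overlaps language, and only half of the
# difference accumulated on the human lineage.
language_bp = chimp_human_bp_diff * (language_years / divergence_years) / 2
print(language_bp, language_bp * bits_per_base_pair)   # 1.75e6 base pairs, 3.5e6 bits

# Looser bound: all brain-related DNA.
brain_bp = 1e9                   # ~1/3 of the 3.1B base-pair genome (assumption from text)
print(brain_bp * bits_per_base_pair)                   # ~2e9 bits, ~2 orders below LLM parameter counts
```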

Maybe human language learning is accelerated by multimodality?

Humans benefit from a rich sensory world, notably through visual perception, which extends far beyond language. Estimating a “token count” for this additional information is difficult, however. For example, if someone is reading a book at 6.5 tokens per second, are they benefiting from all the extra sensory information? A recent paper puts the rate at which information is consciously processed in a human brain at effectively 10 bits/second, which is only modestly more than the cross entropy of a language model. More generously, we could work from the common saying that “a picture is worth a thousand words”, which is not radically different from techniques for encoding images into transformers. Using this, we could estimate that extra modalities increase the number of tokens by three orders of magnitude, resulting in 1T tokens observed by age 31. Given this, there is still an order-of-magnitude learning efficiency gap between humans and language models of the same class as GPT-4.
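
A minimal sketch of this multimodal estimate, assuming the earlier ~1 token/second language rate and the proverb's 1000-tokens-per-picture conversion:

```python
# "A picture is worth a thousand words": rough multimodal token estimate.
language_tokens_by_31 = 1e9          # from the earlier ~1 token/sec estimate
tokens_per_image_equivalent = 1000   # the proverb, roughly matching image-tokenization schemes
multimodal_tokens_by_31 = language_tokens_by_31 * tokens_per_image_equivalent
print(f"~{multimodal_tokens_by_31:.0e} tokens by age 31")   # ~1e12, i.e. ~1T

gpt4_scale_training_tokens = 1e13    # speculated order of magnitude
print(f"Remaining gap: ~{gpt4_scale_training_tokens / multimodal_tokens_by_31:.0f}x")  # ~10x
```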

Maybe the learning efficiency gap does not matter?

In some domains, it may be possible to overcome the inefficiencies of a learning architecture by simply gathering more and more data as needed. At a scientific level, this is not a compelling argument, since understanding the fundamental limits of what is possible is the core purpose of science. Hence, this is a business argument, which may indeed be valid in some cases. A business response is that learning efficiency matters in domains where it is difficult or impossible to collect sufficient data: think of robot demonstrations, personalizing models, problems with long range structure, a universal translator encountering a new language, and so on. In addition, improving learning efficiency may lead to improvement in other forms of efficiency (e.g., memory and compute) via architectural improvements.

Model size: The representational efficiency gap

A second direction in which transformer-based models can be improved lies in model size, or representational efficiency. This is perhaps best illustrated by considering the problem of designing models or agents capable of physical or animal-like intelligence. This includes capabilities like 1) understanding one’s environment (perception); 2) moving oneself around (locomotion); and 3) planning to reach goals or accomplish basic tasks (e.g., navigation and manipulation). Naturally, this is very relevant if our goal is to build foundation models for embodied decision making.

The Portia spider has a brain one million times smaller than that of a human, yet it is observed to plan a course of action and execute it successfully over durations as long as an hour. Stated another way, it is possible to engage in significant physically intelligent behavior with 100M floats representing the neural connections and a modest gigaflop CPU capable of executing them in real time. This provides a strong case that much of animal intelligence can be radically more representationally efficient than what has been observed in language domains, or yet implemented in software. A concrete question along these lines is:

Can we design a model with 100M floats that can effectively navigate and accomplish physical-intelligence tasks in the real world?

It is not clear whether there is an existing model of any size that can effectively do this. The most famous examples in this direction are game agents, which only function in relatively simple environments.

Are transformer models for language representationally inefficient?

While the discussion above concerns representational efficiency for physical intelligence, it is also interesting to consider representational efficiency for language. That is, are existing language models representationally efficient, or can similar capabilities be achieved with substantially smaller models? On the one hand, it is possible that language is an inherently complex process to both produce and understand. On the other hand, it might be possible to represent human level language in a radically more size-efficient manner, as in the case of physical intelligence.

To this end, one interesting example is given by Alex, a grey parrot that managed to learn and meaningfully use a modest vocabulary with a brain one-hundredth the size of a human brain by weight. If we accept the computational model of a neuron as a nonlinearity applied to a linear integration, Alex might have had 1B neurons operating at 1T flops. Given Alex’s limited language ability, this isn’t constraining enough to decisively argue that substantially smaller language models than today’s can be achieved. At the same time, it is plausible that most of Alex’s brain was not devoted to human language, offering some hope.

The long-term memory and planning gap

A third direction concerns developing models and agents suitable for domains that involve complex long-term interactions, which may necessitate the following capabilities:

Memory: Effectively summarizing the history of interaction into a succinct representation and using it in context.

Planning: Choosing the next actions or tokens deliberately to achieve a long range goal.

Recent advances like o1 and R1 handle relatively short-range planning, but they are significant progress in this vein. Existing applications of transformer language models largely avoid long-term interactions, since the models can deviate from instructions. To highlight why we might expect to improve this situation, note that humans manage to engage in coherent plans over years-long timescales. Human-level intelligence isn’t required for this, though, as many animals exhibit behaviors that require long-timescale memory and planning. For example, a squirrel with a brain less than one-hundredth the size of a human brain stores food and reliably comes back to it after months of experience. Restated in a transformer-relevant way, a squirrel can experience billions of intervening (and potentially distracting) visual tokens before recalling the location of a cache of food and returning to it. How can we develop competitive models and agents with this capability?
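
To see where "billions of visual tokens" comes from, here is a rough sketch; the frame rate and tokens-per-image figures are illustrative assumptions, not measurements:

```python
# Rough count of "visual tokens" a squirrel might experience between caching a nut
# and retrieving it months later.
months = 3
seconds = months * 30 * 24 * 3600            # ~7.8e6 seconds of intervening experience
images_per_second = 0.5                      # assumption: one "frame" every 2 seconds
tokens_per_image = 1000                      # assumption: same 1000-tokens-per-picture conversion as above

intervening_tokens = seconds * images_per_second * tokens_per_image
print(f"~{intervening_tokens:.0e} intervening visual tokens")   # ~4e9, i.e. billions
```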

Does it matter?

A common approach to circumventing the memory and planning limitations of existing models is to create an outer-level executor that uses the LLM as a subroutine, combined with other tools for memory or planning. These approaches tacitly acknowledge the limits of current architectures by offering an alternative solution. Historically, as in machine vision and speech recognition, it has always been more difficult to create a learning system that accomplishes the task of interest with end-to-end training, but when it was done the results were superior and worth the effort. This pattern may repeat for long-term memory and planning, yielding better solutions.

An AI Miracle Malcontent

The stark success of OpenAI’s GPT4 model surprised me, shifting my view from “really good autocomplete” (roughly in line with intuitions here) to a dialog agent exhibiting a significant scope of reasoning and intelligence. Some of the MSR folks did a fairly thorough study of capabilities which seems like a good reference. I think of GPT4 as an artificial savant: super-John capable in some language-centric tasks like style and summarization, with impressive yet more limited abilities in other domains like spatial and reasoning intelligence.

And yet, I’m unhappy with mere acceptance because there is a feeling that a miracle happened. How is this not a miracle, at least with hindsight? And given this, it’s not surprising to see folks thinking about more miracles. The difficulty with miracle thinking is that it provides no structure upon which to reason, anticipate the future, prepare for it, and act rationally. Given that, I wanted to lay out my view in some detail and attempt to understand enough to de-miracle what’s happening and what may come next.

Deconstructing The Autocomplete to Dialog Miracle
One of the ironies of the current situation is that an organization called “OpenAI” created AI and isn’t really open about how they did it. That’s an interesting statement about economic incentives and focus. Nevertheless, back when they were publishing, the InstructGPT paper suggested something interesting: that reinforcement learning on a generative model substrate was remarkably effective—good for 2 to 3 orders of magnitude improvement in the quality of response with a tiny (in comparison to language sources for next word prediction) amount of reinforcement learning. My best guess is that this was the first combination of 3 vital ingredients.

  1. Learning to predict the next word based on vast amounts of language data from the internet. I have no idea how much, but wouldn’t be surprised if it’s a million lifetimes of reading generated by a billion people. That’s a vast amount of information there with deeply intermixed details about the world and language.
    1. Why not other objectives? Well, they wanted something simple so they could maximize scaling. There may indeed be room for improvement in choice of objective.
    2. Why language? Language is fairly unique amongst information in that it’s the best expression of conscious thought. There is thought without language (yes, I believe animals think in various ways), but you can’t really do language without thought.
  2. The use of a large deep transformer model (pseudocode here) to absorb all of this information. Large here presumably implies training on many GPUs with both data and model parallelism. I’m sure there are many fine engineering tricks here. I’m unclear on the scale, but expect the answer is more than thousands and less than millions.
    1. Why transformer models? At a functional level, they embed ‘soft attention’ (=the ability to look up a value with a key in a gradient-friendly way; see the sketch after this list). At an optimization level, they are GPU-friendly.
    2. Why deep? The drive to minimize word prediction error in the context of differentiable depth creates a pressure to develop useful internal abstractions.
  3. Reinforcement learning on a small amount of data which ‘awakens’ a dialog agent. With the right prompt (=prefix language) engineering a vanilla large language model can address many tasks as the information is there, but it’s awkward and clearly not a general purpose dialog agent. At the same time, the learned substrate is an excellent representation upon which to apply RL creating a more active agent while curbing an inherited tendency to mimic internet flamebait.
    1. Why reinforcement learning? One of the oddities of language is that there is more than one way of saying things. Hence, the supervised learning view that there is one right answer and everything else is wrong sets up inherent conflicts in the optimization. Instead, “reinforcement learning from human feedback” pairs inverse reinforcement learning to discover a reward function with basic reinforcement learning to achieve better performance. What’s remarkable about this is that the two-step approach is counter to the information processing inequality.
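
To make the ‘soft attention’ point in ingredient 2 concrete, here is a minimal numpy sketch of scaled dot-product attention, the standard textbook form rather than any particular production implementation:

```python
import numpy as np

def soft_attention(Q, K, V):
    """Scaled dot-product attention: a differentiable key-value lookup.

    Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v).
    Each query returns a softmax-weighted average of the values, so the
    "lookup" is soft and gradients flow through the weights.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted average of values

# Toy usage: 2 queries attending over 3 key/value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 5))
print(soft_attention(Q, K, V).shape)                 # (2, 5)
```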

The overall impression that I’m left with is something like the “ghost of the internet”. If you ask the internet for the answer to a question on the best forum available and get an answer, it might be in about the same ballpark of usefulness and correctness as what GPT4 provides (notably, in seconds). Peter Lee’s book on the application to medicine is pretty convincing. There are pluses and minuses here—GPT4’s abstraction of language tasks like summarization and style appears super-human, or at least better than I can manage. For commonly discussed content (e.g. medicine) it’s fairly solid, but for less commonly discussed content (say, Battletech fan designs) it becomes sketchy as the internet gives out. There are obviously times when it errs (often egregiously, in a fully confident way), but that’s also true in internet forums. I specifically don’t trust GPT4 with math and often find its reasoning and abstraction abilities shaky, although it’s deeply impressive that they exist at all. And driving a car is out because it’s a task that you can’t really describe.

What about the future?
There’s been a great deal of discussion about the danger of AI recently, along with quite a mess of mistaken expectations about where we are.

  1. Are GPT4 and future variants the answer to [insert intelligence-requiring problem here]? GPT4 seems most interesting as a language intelligence. It’s clearly useful as an advisor or a brainstormer. The meaning of “GPT5” isn’t clear, but I would expect substantial shifts in core algorithms/representations are necessary for mastering other forms of intelligence like memory, skill formation, information gathering, and optimized decision making.
  2. Are generative models the end of consensual reality? Human societies seem to have a systematic weakness in that people often prefer a consistent viewpoint even at the expense of fairly extreme rationalization. That behavior in large language models is just our collective behavior viewed through a mirror. Generative model development (both language and video) does have a real potential to worsen this. I believe we should be making real efforts as a society to harden and defend objective reality in multiple ways. This is not specifically about AI, but it would address a class of AI-related concerns and improve society generally.
  3. Is AI about to kill everyone? Yudkowsky’s editorial gives the impression that a Terminator-style apocalypse is just around the corner. I’m skeptical about the short term (the next several years), but the longer term requires thought.
    1. In the short term there are so many limitations of even GPT4 (even though it’s a giant advance) that I both lack the imagination to see a path to “everyone dies” and expect it would be suicidal for an AI as well. GPT4, as an AI, is using the borrowed intelligence of the internet. Without that source it’s just an amalgamation of parameters with no interesting capabilities.
    2. For the medium term, I think there’s a credible possibility that drone warfare becomes ultralethal, in line with this imagined future. You can already see drone warfare in the Ukraine-Russia war significantly increasing the lethality of a battlefield. This requires some significant advances, but nothing seems outlandish. Counterdrone technology development and limits on usage in line with other war machines seem prudent.
    3. For the longer term, Vinge’s classic singularity essay is telling here, as he lays out the inevitability of developing intelligence for competitive reasons. Economists are often fond of pointing out how job creation has accompanied previous mechanization-induced job losses, and yet my daughter points out how we keep increasing the amount of schooling children must absorb to be capable members of society. It’s not hard to imagine a desolation of jobs in a decade or two where AIs can simply handle almost all present-day jobs and most humans can’t skill up to be economically meaningful. Our society is not prepared for this situation—it seems like a quite serious and possibly unavoidable outcome. Positive models for a nearly-fully-automated society are provided by Star Trek and Iain Banks, although science fiction is very far from a working proposal for a functioning society.
    4. I’m skeptical about a Lawnmower Man-like scenario where a superintelligence suddenly takes over the world. In essence, cryptographic barriers are plausibly real, even to a superintelligence. As long as that’s so, the thing to watch out for is excessive concentrations of power without oversight. We already have a functioning notion of super-human intelligence in organizational intelligence, and we are familiar with techniques for restraining organizational intelligence into channels useful for society. Starting with this and improving seems reasonable.

What is the Right Response to Employer Misbehavior in Research?

I enjoyed my conversations with Timnit when she was in the MSR-NYC lab, so her situation has been on my mind throughout NeurIPS.

Piecing together what happened second-hand is always tricky, but Jeff Dean’s account and Timnit’s agree on a basic outline. Timnit and others wrote a paper for FAccT which was approved for submission by the normal internal review process, then later unapproved. Timnit threatened to leave unless various details about this unapproval were clarified. Google then declared her resigned.

The definition of resign makes it clear an employee does it, not an employer. Since that apparently never happened, this is a mischaracterized firing. It also seems quite credible that the unapproval process was highly unusual based on various reactions I’ve seen and my personal expectations of what researchers would typically tolerate.

This frankly looks bad to me and quite a number of other people. Aside from the plain facts, this is also consistent with racism and/or sexism given the roles of those involved. Google itself now faces a substantial rebellion amongst employees.

However, I worry about consequences to some of these reactions.

  1. Some people suggest not reviewing papers from Google-based researchers. As a personal decision, this is making a program chair’s difficult job harder. As a communal decision, this would devastate the community since a substantial fraction are employed at Google. These people did not make this decision and many actively support Timnit there (at some risk to their job) so a mass-punishment approach seems deeply counterproductive.
  2. Others have suggested that Google should not be a sponsor at major machine learning conferences. Since all of these are run as nonprofits, the lost sponsorship will be made up either by increasing costs for everyone or by reducing grants to students and diversity sponsorship. Reduced grants in particular seem deeply counterproductive.
  3. Some have suggested that all industry research in general is bad. Industrial research varies substantially from place to place, perhaps much more so than in academia. As an example, Microsoft Research has no similar internal review process for publications. Overall, the stereotyping inherent in this view makes me uncomfortable and there are some real advantages to working in industry in terms of ability to concentrate on research or effecting real change.

It’s critical to understand that the strength of the research community is incredibly valuable to everyone in it. It’s not hard to imagine a different arrangement where all industrial research is proprietary, with only a few major companies operating competitive internal research teams. This sort of structure exists in some other fields, often to the detriment of anyone other than a major company. Researchers at those companies can’t as easily switch jobs, and researchers outside of those companies may lack the context to even contribute to the state of the art. The field itself progresses more slowly and in a more secretive way due to the lack of sharing. Anticommunal acts based on mass ostracization or abandonment could shift our structure from the current relatively happy equilibrium, where people from all over can participate, learn, and contribute, towards a much worse situation.

This is not to say that there are no consequences. The substantial natural consequences of a morally significant event will play out regardless of anything else. The marketplace for top researchers is quite competitive, so for many of them uncertainty about the feasibility of publication, the disposition and competence of senior leadership, or constraints on topics tips the balance towards other offers. That may be severe this year, since this all blew up as the recruiting season was launching, and I expect it to last over many years unless some significant action is taken. In this sense, I expect all the competitors may be looking forward to recruiting more than they were previously, and the cost of not resolving the conflict here in a better way may be much, much higher than just about any other course of action. This is not particularly hypothetical—I saw it play out over the years after the Silicon Valley lab was cut, as the brain drain of great researchers in competitive areas was severe for several years afterwards.

I don’t think a general answer to the starting question is possible, since it will always depend on circumstances. Even this instance is complex with actions that could cause unintuitive adverse impacts on unanticipated parts of our community or damage the community as a whole. I personally hope that the considerable natural consequences here form a substantial deterrent to misbehavior in the long term. Please think this through when considering your actions here.

Edits: tweaked conclusion wording a bit with advice from reshamas.

Critical issues in digital contact tracing

I spent the last month becoming a connoisseur of digital contact tracing approaches since this seems like something where I might be able to help. Many other people have been thinking along similar lines (great), but I also see several misconceptions that even smart and deeply involved people are making.

For the following, a key distinction to understand is between proximity and location approaches. In proximity approaches (such as DP3T, TCN, MIT PACT(*), Apple, or one of the UW PACT(*) protocols which I am involved in), smartphones use Bluetooth Low Energy and possibly ultrasonics to discover other smartphones nearby. Location approaches (such as MIT Safe Paths or Israel’s) instead record the absolute location of the device based on GPS, cell tower triangulation, or WiFi signals.

Location traces are both poor quality and intrinsically identifying
Many people assume that because a phone can determine where it is, it can do so with high precision. This is typically incorrect. Common healthcare guidance for possible contact is “within 2 meters for 10 minutes”, while location data is often off by 10-100 meters, with accuracy varying depending on which location methodology is in use. As an example, approximately everyone in Manhattan may be within 100 meters of someone who later tested positive for COVID-19. Given this inaccuracy, I expect users of a system based on crossing location traces to simply turn it off due to the large number of false positives.

These location traces, even though they are crude, are also highly identifying. When going about your normal pre-pandemic life, you move from location X to Y to Z. Typically no one else goes from X to Y to Z in the same timeframe (clocks are typically very accurate). If you test positive and make your trace available to help suppress the virus, a store owner with a video camera and a credit card record might de-anonymize you and accuse you of killing someone they care about. Given the stakes here, preserving as much anonymity as possible is critical for convincing people to release the information which is needed to control the virus.

Given this, approaches which upload the location data of users seem likely to have reduced adoption and many false positives. While some governments, like Israel’s, are choosing to use all location data on an involuntary basis, the lack of effectiveness compared to proximity-based approaches and the draconian compromise of civil liberties are worrisome.

Location traces can be useful in a privacy-preserving way
Understanding the above, people often conclude that location traces are subsumed by alternatives. That’s not true. Location approaches can be made very private by simply never allowing a location trace to leave the personal device. While this might feel contradictory to epidemiological success, it’s actually extremely helpful in at least two ways.

  1. People have a pretty poor memory, so when they test positive and someone calls them up to do a contact tracing interview, having a location trace on their phone can be incredibly useful in jogging their memory. Using the location trace this way allows the manual contact tracing process to be much more complete. It can also be made much faster by allowing infected people to prefill much of a contact interview form before they get a call.
  2. The virus is inherently very localized, so public health authorities often want to quickly talk to people at location X or warn people to stay away from location Y until it is cleaned. This can be strongly enabled by on-device location traces. The phone can download all the public health messages in a region and check automatically which are relevant to the phone’s location trace, surfacing those as needed to the user. This provides more power than crossing location traces. A message of “If you were at store X on April 16th, please email w@y.z” allows people to not respond if they went to store V next door.

Both of these capabilities are a part of the UW PACT protocols I worked on for this reason.

Proximity-only approaches have an x² problem

When people abandon location-based approaches, it’s in favor of proximity-based approaches. For any proximity protocol approach to work, both the infected person and the contact must be running the protocol implying there are two ways for it to fail to be useful.
[Figure: illustration of x*x]
To get a sense of what is necessary, consider the reproduction number of the coronavirus. Estimates vary, but a reproduction number of 2.5 is reasonable. That is, the virus might infect 2.5 new people per infected person on average in the absence of countermeasures. To keep an infection with a base reproduction number of 2.5 from growing exponentially, it is necessary to reduce the reproduction number to 1, which can be done when 60% of contacts are discovered, assuming (optimistically) no testing error and perfect isolation of discovered contacts before they infect anyone else.

To reach 60%, you need 77.5% of people to download and run proximity protocols (since 0.775² ≈ 0.6). This is impossible in many places where smartphones are owned by fewer than 77.5% of the population. Even in places where it’s possible, it’s difficult to imagine reaching that level of usage without it being a mandatory part of the operating system that you are forced to use. Even then, subpopulations without smartphones are not covered. The square problem gets worse at lower levels of adoption. At 10% adoption (which corresponds to a hugely popular app), only 1% of contacts can be discovered via this mechanism. Despite the smallness, informing 1% of contacts does have real value, in about the same sense that if someone loaned you money at a 1%/week interest rate you would call them a loan shark. At the same time, this is only 1/60th of a solution to getting the reproduction number below 1.
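
A minimal sketch of the adoption arithmetic above, using the reproduction number and coverage assumptions from the text:

```python
import math

R0 = 2.5
needed_contact_fraction = 1 - 1 / R0            # 0.6: fraction of contacts to discover to push R down to 1

# Both the infected person and the contact must run the app, so coverage is adoption**2.
required_adoption = math.sqrt(needed_contact_fraction)
print(f"Required adoption: {required_adoption:.1%}")          # ~77.5%

adoption = 0.10                                  # a "hugely popular" app
print(f"Contacts covered at 10% adoption: {adoption**2:.1%}") # 1.0%
```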

Hence, people advocating for proximity approaches must either hope for pervasive mandatory use (which will still miss subcommunities without smartphones) or accept that proximity approaches are only a part of the picture.

This quadratic structure also implies that the number of successful proximity tracing protocols will be either 0 or 1 in any geographic region. Given that Apple/Google are building a protocol into their OSes, that’s the candidate for the possible 1 in most of the world once it becomes available(**).

This quadratic structure is difficult to avoid. For example, if location traces are crossed with location traces, the same issue comes up. Similarly for proximity tracing, you could imagine recording “wild” Bluetooth beacons and then reporting them to avoid the quadratic structure. This, however, unavoidably reveals contacts publicly, which can then cause the positive person to be revealed as well.

Interestingly, traditional manual contact tracing does not suffer from the quadratic problem. Hence approaches (discussed above) which augment and benefit from manual contact tracing have a linear value structure, which matters enormously with lower levels of adoption.

What works?
The primary thrust of contact tracing needs to be manual, as that is what has worked in countries (like South Korea) which suppressed large outbreaks. Purely digital approaches don’t seem like a credible solution due to the issues discussed above. Hybrid approaches with smartphone-based apps can help by complementing manual contact tracing and perhaps via proximity approaches. Getting there requires high levels of adoption, which implies trust is a critical commodity. In addition to navigating the issues above, projects need to be open source, voluntary, useful, and strongly privacy-respecting (the ACLU recommendations are good here). This is what the CovidSafe project is aimed at in implementing the UW PACT protocols. Projects not navigating the above issues as well are less credible in my understanding.

An acknowledgement: many people have affected my thinking through this process, particularly those on the UW PACT paper and CovidSafe projects.

(*) I have no idea how the name collision occurred. We started using PACT here, 3 weeks ago, and circulated drafts to many people including a subset of the MIT PACT group before putting it on arxiv.

(**) The Apple protocol is a bit worrisome as development there is not particularly open and I have a concern about the crypto protocol. The Tracing Key on page 5, if acquired via hack or subpoena, allows you to prove the location of a device years after the fact. This is not epidemiologically required and there are other protocols without this weakness. Edit: The new version of their protocol addresses this issue.

What is the most effective policy response to the new coronavirus pandemic?

Disclaimer: I am not an epidemiologist, but there is an interesting potentially important pattern in the data that seems worth understanding.

World healthcare authorities appear to be primarily shifting towards Social Distancing. However, there is potential to pursue a different strategy in the medium term that exploits a vulnerability of this disease: the 5 day incubation time is much longer than a 4 hour detection time. This vulnerability is real—it has proved exploitable at scale in South Korea and in China outside of Hubei.

Exploiting this vulnerability requires:

  1. A sufficient capacity of rapid tests. Sufficient here is perhaps 30 times the number of true new cases per day, based on South Korea’s testing rate.
  2. The capacity to rapidly trace the contacts of confirmed positive cases. This is both highly labor intensive and absurdly cheap compared to shutting down the economy.
  3. Effective quarantining of positive and suspect cases. This could be in home, with the quarantine extended to the entire family. It could also be done in a hotel (… which are pretty empty these days), or in a hospital.

Where Test/Trace/Quarantine is working, the number of cases per day has declined empirically. Furthermore, this appears to be a radically superior strategy where it can be deployed. I’ll review the evidence, discuss the other strategies and their consequences, and then discuss what can be done.

Evidence for Test/Trace/Quarantine
The TTQ strategy works when it effectively catches a fraction of cases of at least 1 - 1/(reproduction number). The reproduction number is not precisely known, although discovering 90% of cases seems likely effective and 50% of cases seems likely ineffective based on public data.

How do you know what fraction of cases are detected? A crude measure can be formed by comparing detected cases / mortality across different countries. Anyone who dies from pneumonia these days should be tested for COVID-19 so the number of deaths is a relatively trustworthy statistic. If we suppose the ratio of true cases to mortality is fixed, then the ratio of observed cases to mortality allows us to estimate the fraction of detected cases. For example, if the true ratio between infections and fatalities is 100 while we observe 30, then the detection rate is 30%.
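
As a minimal sketch of this estimate, with the illustrative numbers from the text (the true infections-per-fatality ratio is an assumption):

```python
# Estimating the fraction of cases detected from the observed case/death ratio.
assumed_true_cases_per_death = 100    # assumption: true infections per fatality
observed_cases_per_death = 30         # observed (confirmed cases) / (deaths)

detection_rate = observed_cases_per_death / assumed_true_cases_per_death
print(f"Estimated detection rate: {detection_rate:.0%}")   # 30%
```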

There are many caveats to this analysis (see below). Nevertheless, this ratio seems to provide real information which is useful in thinking about the future. Drawing data from the Johns Hopkins COVID-19 time series and plotting, we see:

The arrows here represent the progression of time by days with time starting at the first recorded death. The X axis here is the ratio between cumulative observed cases and cumulative observed deaths. Countries that are able and willing to test widely have progressions on the right while those that are unable or unwilling to test widely are on the left. Note here that the X axis is on a log scale allowing us to see small variations in the ratio when the ratio is small and large variations in the ratio when the ratio is large.

The Y axis here is the number of cases/day. For a country to engage in effective Test/Trace/Quarantine, it must effectively test, which the X axis is measuring. Intuitively, we expect countries that test effectively to follow up with Trace and Quarantine, and we expect this to result in a reduced number of cases per day. This is exactly what is observed. Note that we again use a log scale for the Y axis due to the enormous differences in numbers.

There are several things you can read from this graph that make sense when you consider the dynamics.

  1. China excluding Hubei and South Korea had outbreaks which did not exceed the hospital capacity since the arrows start moving up and then loop back down around a 1% fatality rate.
  2. The United States has a growing outbreak and a growing testing capacity. Comparing with China-excluding-Hubei and South Korea’s outbreak, only a 1/4-1/10th fraction of the cases are likely detected. Can the United States expand capacity fast enough to keep up with the growth of the epidemic?
  3. Looking at Italy, you can see evidence of an overwhelmed healthcare system as the fatality rate escalates. There is also some hope here, since the effects of the Italian lockdown are possibly starting to show in the new daily cases.
  4. Germany is a strange case with an extremely large ratio. It looks like there is evidence that Germany is starting to control their outbreak, which is hopeful and aligned with our expectations.

The creation of this graph is fully automated and it’s easy to graph things for any country in the Johns Hopkins dataset. I created a github repository with the code. Feel free to make fun of me for using C++ as a scripting language 🙂

You can also understand some of the limitations of this graph by thinking through the statistics and generation process.

  1. Mortality is a delayed statistic. Apparently, it’s about a week delayed in the case of COVID-19. Given this, you expect to see the ratio generate loops when an outbreak occurs and then is controlled. South Korea and China-excluding-Hubei show this looping structure, returning to a ratio of near 100.
  2. Mortality is a small statistic, and a small statistic in the denominator can make the ratio unstable. When mortality is relatively low, we expect to see quite a variation. Checking each progression, you see wide ratio variations initially, particularly in the case of the United States.
  3. Mortality may vary from population to population. It’s almost surely dependent on the age distribution and health characteristics of the population and possibly other factors as well. Germany’s ratio is notably large here.
  4. Mortality is not a fixed variable, but rather dependent on the quality of care. A reasonable approximation of this is that every “critical” case dies without intensive care support. Hence, we definitely do not expect this statistic to hold up when/where the healthcare system is overwhelmed, as it is in Italy. This is also the reason why I excluded Hubei from the China data.

Lockdown
The only other strategy known to work is a “lockdown” where nearly everyone stays home nearly all the time, as first used in Hubei. This characterization is simplistic—in practice such a quarantine comes with many other measures as well. This can work very effectively—today the number of new cases in Hubei is in the 10s.

The lockdown approach shuts down the economy fast and hard. Most people can’t work, so they can’t make money, so they can’t buy things, so the people who make things can’t make money, so they go broke, etc… This is strongly reflected in the stock market’s reaction to the escalating pandemic. If the lockdown approach is used for long, most people and companies are destined for bankruptcy. If a lockdown approach costs 50% of GDP, then a Test/Trace/Quarantine approach costing only a few percent of GDP seems incredibly cheap in comparison.

The lockdown approach is also extremely intrusive. It’s akin to collective punishment in that it harms the welfare of everyone, regardless of their disease status. Many people’s daily lives fundamentally depend on moving around—for example, people using dialysis.

Despite this, the lockdown approach is being taken up everywhere that cases overwhelm or threaten to overwhelm hospitals, because the alternative (next) is even worse. One advantage the lockdown approach has is that it can be used now, while the Test/Trace/Quarantine approach requires more organizing. It’s the best bad option when the Test/Trace/Quarantine capacity is exceeded, or to bridge the time until that capacity becomes available.

If/when/where Test/Trace/Quarantine becomes available, I expect it to be rapidly adopted. This new study (page 11) points out that repeated lockdowns are close to permanent lockdowns in effect.

Herd Immunity
Some countries have considered skipping measures to control the virus on the theory that the population eventually acquires enough people with individual immunity after recovery so the disease dies out. This approach invites severe consequences.

A key issue here is: How bad is the virus? The mortality rate in China excluding Hubei and South Korea is only about 1%. From this, some people appear to erroneously reason that the impact of the virus is “only” having 1% of 50% of the population die, heavily weighted towards older people. This reasoning is fundamentally flawed.

The mortality rate is not a fixed number, but rather depends on the quality of care. In particular, because most countries have very few intensive care units, an uncontrolled epidemic effectively implies that all but a vanishing fraction of sick people receive only home-stay quality of care. How many people could die with home-stay quality of care? Essentially everyone who would otherwise require intensive care at a hospital. In China, that meant 6.1% (see page 12). Given this, the sound understanding is that COVID-19 generates a factor of 2-3 worse mortality than the 1918 influenza pandemic, although modern healthcare might instead make it half as bad where hospitals are not overwhelmed. Note here that the fatality rate in Hubei (4.6% of known cases, which might be 3% of total cases) does not fully express how bad this would be, due to the fraction of infected people remaining low and a surge of healthcare support from the rest of China.

The herd immunity approach also does not cause the disease to die out—instead it continues to linger in the population for a long time. This means that people traveling from such a country will be effectively ostracized by every country (like China or South Korea) which has effectively implemented a Test/Trace/Quarantine approach.

I’ve avoided discussing the ethics here since people making this kind of argument may not care about ethics. For everyone else it’s fair to say that letting part of the population die to keep the economy going is anathema. My overall expectation is that governments pursuing this approach are at serious risk of revolt.

Vaccine

Vaccines are extremely attractive because they are a very low cost way to end the pandemic. They are however uncertain and take time to develop and test, so they are not a viable strategy for the next few months.

What can be done?

Public health authorities are generally talking about Social Distancing. This is plausibly the best general-public message because everyone can do something to help here.

It’s also clear that healthcare workers, vaccines makers, and everyone supporting them have a critical role to play.

But, perhaps there’s a third group that can really help? Perhaps there are people who can help scale up the Test/Trace/Quarantine approach so it can be rapidly adopted? Natural questions here are:

  1. How can testing be scaled up rapidly—more rapidly than the disease? This question is already getting quite a bit of attention, and deservedly so.
  2. How can tracing be scaled up rapidly and efficiently? Hiring many people who are freshly out of work is the most obvious solution. That could make good sense given the situation. However, automated or partially automated approaches have the potential to greatly assist as well. I hesitate to mention cell phone tracking because of the potential for abuse, but can that be avoided while still gaining the potential public health benefits?
  3. How can quarantining be made highly precise and effective? Can you estimate the risk of infection with high precision? What support can safely be put in place to help those who are quarantined? Can we avoid the situation where the government says “you should quarantine” and “people in quarantine can’t vote”?

Some countries started this pandemic set up for relatively quick scale-up of Test/Trace/Quarantine. Others, including the United States, seem to have been unprepared. Nevertheless, I am still holding out hope that the worst-case scenarios (high mortality or months-long lockdowns) can be largely avoided, as the available evidence suggests that this is certainly possible. Can we manage to get the number of true cases down (via a short lockdown if necessary) to the point where an escalating Test/Trace/Quarantine approach can take over?

Edit: I found myself remaking the graph for myself personally so I made it update hourly and added New York (where I live).