Machine Learning (Theory)

11/6/2006

Data Linkage Problems

Data linkage is a problem that comes up in many applied machine learning settings. I have heard it mentioned in various data mining contexts, but it seems relatively understudied, perhaps for systemic reasons.

A very simple version of the data linkage problem is a cross-hospital patient record merge. Suppose a patient (John Doe) is admitted to a hospital (General Health), treated, and released. Later, John Doe is admitted to a second hospital (Health General), treated, and released. Given a large number of records of this sort, it becomes very tempting to try to predict the outcomes of treatments. This is reasonably straightforward as a machine learning problem if General Health and Health General use a shared unique identifier for John Doe, along with time stamps. We can merge the records and create examples of the form “Given symptoms and treatment, did the patient come back to a hospital within the next year?” These examples could be fed into a learning algorithm, and we could attempt to predict whether a return occurs.
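
If the shared identifier existed, the example construction above would be mechanical. Here is a minimal sketch of it; the records, field names, and `patient_id` are all hypothetical (and the whole point of the post is that `patient_id` usually does not exist across hospitals):

```python
from datetime import date

# Hypothetical merged records, keyed by a shared patient id that in
# practice the two hospitals do not actually have.
visits = [
    {"patient_id": 1, "hospital": "General Health", "date": date(2005, 1, 10),
     "symptoms": "chest pain", "treatment": "stent"},
    {"patient_id": 1, "hospital": "Health General", "date": date(2005, 6, 2),
     "symptoms": "chest pain", "treatment": "medication"},
    {"patient_id": 2, "hospital": "General Health", "date": date(2005, 3, 5),
     "symptoms": "fracture", "treatment": "cast"},
]

def make_examples(visits, horizon_days=365):
    """Turn merged visit records into (features, label) pairs, where the
    label is whether the patient returned within horizon_days."""
    by_patient = {}
    for v in sorted(visits, key=lambda v: v["date"]):
        by_patient.setdefault(v["patient_id"], []).append(v)
    examples = []
    for recs in by_patient.values():
        for i, v in enumerate(recs):
            # Did any later visit by the same patient fall in the horizon?
            returned = any(
                0 < (later["date"] - v["date"]).days <= horizon_days
                for later in recs[i + 1:]
            )
            examples.append(((v["symptoms"], v["treatment"]), returned))
    return examples
```

The sketch is only to make the target representation concrete: everything in it hinges on `patient_id` being correct, which is exactly what the linkage problem takes away.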

The problem is that General Health and Health General don’t have any shared unique identifier for John Doe. Information is often misspelled (name misspellings are very common), mistyped, changed (people move), or simply not unique (how many people share your birthday?).

Although this is just one example, data linkage problems seem to be endemic to learning applications. There seem to be several solutions:

  1. Improved recording. Sometimes minor changes to what information is recorded can strongly disambiguate. For example, there is a big difference between recording the set of pages visited at a website and recording the sequence in which they were visited. The essential thing to think about when designing the information to record is: How will I track the consequences of decisions?
  2. Two-stage learning. First predict which records should be linked, based upon a smaller dataset that is hand checked. Then, use your learned predictor to do the linkage, and then solve your real prediction problem. There are several pitfalls here.
    1. Rarity problems. Links are typically much rarer than nonlinks. The training process needs to take this into account by properly representing the scarcity of links.
    2. Information interfaces. A hard prediction of “link” or “no link” is too coarse an information source in an inherently noisy environment. Instead, a probability of link may need to be estimated.
    3. Two stage estimation. A common approach to improving performance is turning a double approximation (given x predict y, given y predict z) into a single approximation (given x predict z). Achieving a single approximation here is tricky because we have ancillary information about the intermediate prediction.
  3. Customized algorithms. The Bayesian approach of “specify a prior, then use Bayes law to get a posterior, then predict with the posterior” is attractive here because we often have strong prior beliefs about at least the linkage portion of the problem.
  4. Others?
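
The two-stage approach above, with pitfalls 1 and 2 addressed, can be sketched as follows. This is a hand-rolled weighted logistic regression (standing in for whatever learner you prefer) over string-similarity features of candidate record pairs; it weights classes inversely to their frequency so rare links are not swamped, and it outputs a link probability rather than a hard link/no-link decision. All field names and the feature choices are hypothetical:

```python
import math
from difflib import SequenceMatcher

def features(rec_a, rec_b):
    """Pairwise similarity features for two candidate records.
    Field names here are hypothetical."""
    return [
        SequenceMatcher(None, rec_a["name"], rec_b["name"]).ratio(),
        1.0 if rec_a["birthdate"] == rec_b["birthdate"] else 0.0,
        SequenceMatcher(None, rec_a["address"], rec_b["address"]).ratio(),
    ]

def train_link_model(pairs, labels, epochs=200, lr=0.5):
    """Logistic regression via SGD, with per-class weights so the rare
    positive (link) examples are not swamped by the many nonlinks."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Weight each class inversely to its frequency.
    w_pos = len(labels) / (2.0 * n_pos)
    w_neg = len(labels) / (2.0 * n_neg)
    w = [0.0] * (len(pairs[0]) + 1)  # feature weights + bias term
    for _ in range(epochs):
        for x, y in zip(pairs, labels):
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = (p - y) * (w_pos if y else w_neg)  # weighted gradient
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def link_probability(model, x):
    """Return an estimated probability of link, not a hard decision."""
    z = model[-1] + sum(wi * xi for wi, xi in zip(model, x))
    return 1.0 / (1.0 + math.exp(-z))
```

The first stage is trained on the small hand-checked dataset; the real prediction problem then consumes `link_probability` outputs (pitfall 3, collapsing the two stages into one, is not addressed by this sketch).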

The data linkage problem also makes very clear the tension between privacy and machine learning. For example, being able to cross-index hospital cases might yield a large jump in our ability to predict outcomes, which might suggest improved treatments (it is only a weak suggestion that must be verified—we must be very careful about applying a predictor to an input distribution it did not learn with respect to). And yet, linking records can result in unexpectedly large pools of information on individuals. Furthermore, explicitly sensitive information (like credit card numbers) might easily be the most useful bit of information for linkage.

4 Comments to “Data Linkage Problems”
  1. Actually, the privacy problem is one of the main obstacles to improving the existing state of the art in record linkage. Many large-scale data sets that could be used for evaluation contain sensitive information and cannot be disclosed publicly. This, of course, makes it very difficult to compare different methods objectively. If you want to examine Bayesian approaches, there is very interesting work done in the context of US Census data (see the work by Winkler) and a significant amount of research in statistics, starting with Fellegi and Sunter in 1969.

  2. Robert Langlois says:

    IBM just released an interesting product aimed at this very problem called “IBM Anonymous Resolution”. It seems to be some type of encryption on the data that prevents sensitive information from being revealed but still allows the machine learning algorithms to learn something from the dataset.
    http://www-306.ibm.com/software/info/features/dataprivacy/

  3. Ross Gayler says:

    The ANU Data Mining Group is working on record linkage. They are addressing some of the privacy issues with a subproject on blindfolded record linkage. They also have a good collection of links to record linkage resources.

  4. Burak says:

    We encountered this problem on a data warehousing project. What we did was use a “generated id”: we took the first 3 letters of the first name and 3 letters of the last name, and combined them with the state and 4 letters of the street address. This way we managed to avoid misspellings and form an ID good enough to let us connect people across various data stores.
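
Burak’s generated id is essentially a blocking key. A minimal sketch of the recipe he describes, with hypothetical field names; the trade-off is that a typo inside any of the kept prefixes still breaks the key:

```python
def blocking_key(record):
    """Approximate identifier built from name prefixes plus location,
    per the recipe in the comment above. Field names are hypothetical."""
    return "".join([
        record["first_name"][:3],   # first 3 letters of first name
        record["last_name"][:3],    # first 3 letters of last name
        record["state"],
        record["street"][:4],       # first 4 letters of street address
    ]).upper()
```

For example, “Jonathan Doe” and “Jonathon Doe” at “Main St” vs. “Main Street” in the same state collapse to the same key, so truncation absorbs some spelling variation late in a field.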
