Holy grails of machine learning?

Let me kick things off by posing this question to ML researchers:

What do you think are some important holy grails of machine learning?

For example:
– “A classifier with SVM-level performance but much more scalable”
– “Practical confidence bounds (or learning bounds) for classification”
– “A reinforcement learning algorithm that can handle the ___ problem”
– “Understanding theoretically why ___ works so well in practice”
etc.

I pose this question because I believe that when goals are stated explicitly and well, rather than left implicit, they are likely to be achieved much more quickly: explicit statements provide clarity and open the problems up to more people. I would also like to know more about the internal goals of the various machine learning sub-areas (theory, kernel methods, graphical models, reinforcement learning, etc.) as stated by people in those respective areas. This could help people cross sub-areas.

4 Replies to “Holy grails of machine learning?”

  1. I think this is a difficult question to answer because most of the time most people think about ‘what am I going to do next’ rather than the end goal.

    For myself, I am looking for all the pieces of learning theory that can be applied without a human needing to judge whether doing so is reasonable. My eventual hope is to collect these pieces into a learning engine that can solve learning problems without a human.

  2. For me, the long-term goal is to move towards a robot scientist: one that does not merely reason in isolation without interacting with the world (like theorem-proving engines that start with axioms and just think), but also goes out and determines what experiments are required to collect new data, while taking economic feasibility into account.

    As a shorter-term, more tangible effort, I aim to maximize the accuracy with which I can predict outcomes (what will happen if…?) at minimum cost for the whole decision process. To clarify, some examples of costs are those associated with:
    a) collecting training data,
    b) computational effort expended in the analysis (CPU cycles, or even the opportunity cost of not being able to do something else in that time),
    c) mistakes in predictions.
    When multiple agents must coordinate, there are also costs associated with:
    d) communication.

    I’m most interested in ML when there are tradeoffs between predictive accuracy and the cost of making predictions.
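
    A minimal sketch of this accuracy-vs-cost tradeoff, assuming entirely made-up models and cost figures (the candidate names, error rates, and cost constants below are hypothetical, not from this reply):

    ```python
    # Hypothetical sketch: choosing a predictor by total expected cost,
    # not accuracy alone. All numbers below are made up for illustration.

    candidates = [
        # (name, error_rate, data_cost, compute_cost) -- costs in one shared unit
        ("cheap_heuristic", 0.20, 10.0, 1.0),
        ("mid_model",       0.10, 50.0, 10.0),
        ("big_model",       0.08, 200.0, 80.0),
    ]

    N_PREDICTIONS = 1000    # expected number of predictions to make
    COST_PER_MISTAKE = 1.0  # cost (c) charged for each prediction error

    def total_cost(error_rate, data_cost, compute_cost):
        """Sum costs (a) data collection, (b) computation, (c) mistakes."""
        mistake_cost = error_rate * N_PREDICTIONS * COST_PER_MISTAKE
        return data_cost + compute_cost + mistake_cost

    best = min(candidates, key=lambda m: total_cost(*m[1:]))
    for name, err, dc, cc in candidates:
        print(f"{name:15s} total cost = {total_cost(err, dc, cc):7.1f}")
    print("best choice:", best[0])
    ```

    Under these invented numbers the mid-sized model wins: the most accurate predictor loses on total cost once data collection and computation are charged for.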

  3. As a beginning researcher in the field, let me offer an opinion:
    Humans can learn many useful tasks from far fewer examples than machines require. The holy grail of machine learning would be to figure out the inductive bias(es) humans use to achieve such good performance in certain areas.
