
Panel on the Future of AI


A panel on the future of AI at QCon San Francisco explored some of the issues facing machine learning today. Five areas were covered: the critical issues facing AI right now, how deep learning has changed the way companies work and hire, how companies that are not on the leading edge can make the best use of current technologies, the role of humans in relation to AI, and the exciting new breakthroughs on the immediate horizon.

The panel was moderated by Shubha Nabar, a senior director at Salesforce Einstein. The panel members were Melanie Warrick, senior developer advocate for machine learning at Google Cloud; Chris Moody, manager of the applied AI team at Stitch Fix; Miju Han, director of product at GitHub; Kevin Moore, senior data scientist at Salesforce Einstein; and Reena Philip, engineering manager at Facebook.

Critical Issues Facing AI

For Warrick, a major problem is eliminating data bias when building models because this bias affects the products and services based on these models.

Several panelists worried about the hype surrounding AI. Moody stated that classic machine learning techniques are usually good enough for most companies. He made reference to this in two blog posts on the Stitch Fix blog.

Han talked about the disconnect between investors who think that automated coding is happening tomorrow, and the reality that it probably will not happen for a long while. There is also a disconnect between the machine learning community and the software developers who support them, when it comes to talking about how data is structured, what the standards behind the data are, and how hard it is to get good quality data, especially for the security use cases that GitHub is focusing on.

Moore noted the extremely challenging problem of communicating what the model is doing to people who are not familiar with machine learning, but must use the results. One part of this problem is convincing people that there are not huge sources of bias in the algorithms.

He also commented on the irony of the democratization of data science which now makes it easier for malicious actors using machine learning to do things such as fake voices or spread misinformation. He wondered if machine learning can be deployed to fight machine learning.

Deep Learning vs. Traditional Machine Learning

How has deep learning changed the way companies approach problems? Has it changed the way leading-edge companies hire people?

Moody stated that at Stitch Fix most of the value does not come from deep learning; most of it stems from a careful understanding of the domain and simple techniques. Deep learning has been used in other areas, such as finding the shortest path for gathering items together for shipping under arbitrary constraints (such as a one-way aisle), which differs from the classic traveling salesman problem.
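Moody did not go into implementation detail. Purely as an illustrative aside, one simple way to encode a one-way aisle constraint is as a directed edge in a routing graph, so a shortest-path search can never traverse the aisle in reverse. The sketch below uses Python and networkx, which are assumptions for illustration, not the tooling described on the panel.

```python
# Illustrative only: modeling a one-way warehouse aisle as a directed edge,
# so shortest-path routing cannot traverse it in reverse.
import networkx as nx

G = nx.DiGraph()
G.add_edge("dock", "aisle_start", weight=2)
G.add_edge("aisle_start", "aisle_end", weight=5)   # one-way aisle
G.add_edge("aisle_end", "dock", weight=3)          # return corridor
G.add_edge("dock", "aisle_end", weight=3)          # corridor is two-way

# Moving down the aisle follows the arrow:
print(nx.shortest_path(G, "aisle_start", "aisle_end", weight="weight"))
# ['aisle_start', 'aisle_end']

# Going the other way is forced around through the dock, never back up the aisle:
print(nx.shortest_path(G, "aisle_end", "aisle_start", weight="weight"))
# ['aisle_end', 'dock', 'aisle_start']
```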

Python has become the primary stack, according to Warrick, and most of the tools are built on top of that platform. She also stated that teams are a mix of data science PhDs, boot-camp graduates, and traditional developers.

Stitch Fix has a different culture. Engineers do not write ETL; only data scientists build models, deploy them into production, and analyze them. Engineers build the LEGO bricks that the scientists use, such as an A/B testing module, production model monitoring, or deployment processes. Data scientists must be able to run everything end-to-end, which means they tend to gravitate to simpler, maintainable models. Deep learning has not changed that process.

Google has deep learning and machine learning throughout all its products, according to Warrick. Image analysis and search have been greatly impacted by deep learning. Reinforcement learning (from DeepMind) has been used to achieve a 40% reduction in the energy used for data center cooling.

At Facebook, Philip said, machine learning was not that pervasive two years ago. Now teams have data scientists who look at the possible insights that could be drawn from the data, and machine learning engineers who focus on building models. Facebook also does basic research on machine learning. Shubha Nabar thinks that deep learning helps to stabilize systems because it allows you to use arbitrary data to replace many different systems that would have had to be built in the past.

Deep Learning and the Enterprise

How do enterprises and small companies that are not doing state-of-the-art work deal with the machine learning hype?

Moore noted that small companies are not likely to have data scientists on staff. Nonetheless, they have well-defined problems to solve, business processes they wish to make more efficient, and predictions they would like to make. The problem is that not all their data is in one place, and that data may not be of good quality. They will either have to hire an external consulting company or leverage vendor products such as those offered by Google, Amazon, Microsoft, and Salesforce. Which vendor product you pick is usually dictated by where you store your data; in some cases you will pick the product that most closely matches what you need. Small companies may need to track data even if they do not think they will need it, because it may turn out to be useful as labels when training a machine learning algorithm.

Shubha Nabar suggested that companies instrument everything.
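Nabar did not elaborate, but as a rough sketch of what "instrument everything" can look like in practice, the snippet below logs structured events as JSON lines so that outcomes recorded today can later serve as training labels. The event names and fields are hypothetical.

```python
# Hypothetical sketch of lightweight event instrumentation: structured records
# written as JSON lines, so outcomes logged today can become labels later.
import json
import time

def log_event(path, event_type, **fields):
    """Append one structured event to a JSON-lines log file."""
    record = {"ts": time.time(), "event": event_type, **fields}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: the 'converted' flag could later label a conversion model.
log_event("events.jsonl", "quote_sent", customer_id=42, amount=199.0)
log_event("events.jsonl", "quote_outcome", customer_id=42, converted=True)
```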

At GitHub, according to Han, they are sophisticated at some things, but not at others. They are interested in how data will change the process of software development, but they have to fight investor hype. Investors believe that soon programs will write themselves through autonomous programming. Han asserted that this is unlikely to occur in our lifetime. On the other hand, if you tell developers during recruiting that this is what investors expect, they will go and work somewhere else.

The first step will be to make suggestions and optimizations to code that has already been written. Their big advantage in doing this is that GitHub is sitting on what is probably the largest software data set in the world. The difficulty is that it took a while for the data scientists to learn how to do data science with code and continuous integration logs. They are not yet at the point where they can do deep learning.

They are going to start with the detection of potential security problems based on a model built by the machine learning team. Other opportunities are to verify whether code is semantically correct, make recommendations about performance, or analyze dependencies to determine what might break if you change a line of code. The startup space for machine learning management is booming, and better tools integrated into the workflow are needed to allow someone to deploy a machine learning model, experiment against it, roll it back, and collaborate.
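The panel did not describe how GitHub's security model works. Purely as a generic illustration of the kind of baseline a team might start from, the sketch below trains a bag-of-words classifier to flag code snippets that resemble known insecure patterns; the snippets, labels, and the choice of scikit-learn are all invented for the example.

```python
# Illustrative only: a generic bag-of-words classifier over code snippets,
# not GitHub's model. The training data here is tiny and invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "query = 'SELECT * FROM users WHERE name = ' + user_input",        # string-built SQL
    "cursor.execute('SELECT * FROM users WHERE name = %s', (user_input,))",
    "subprocess.call(user_input, shell=True)",                         # shell injection risk
    "subprocess.call(['ls', '-l', safe_path])",
]
labels = [1, 0, 1, 0]  # 1 = potentially insecure pattern

model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+", ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Score a new snippet; treat the output as a toy demonstration only.
print(model.predict(["os.system('rm -rf ' + user_input)"]))
```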

Human Involvement and Model Building

Warrick argued that humans have to be part of the machine learning process. The team needs a diversity of ideas, of perspectives, of ways to think about the problem to avoid bias. You have to be clear about the data and the types of problems being solved, and what is missing from the model.

At Stitch Fix, Moody explained, the models are there to inform the stylists about body types and preferences. They are not about producing scores or feeding into another model; the goal is to build models that are interpretable, so that stylists can use the results to figure out what the client is saying. It is not the goal of Stitch Fix to replace stylists.
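Moody did not describe the models themselves. As a generic illustration of what "interpretable" can mean, the sketch below fits a shallow decision tree over invented styling features and prints its rules in plain text, the kind of output a human stylist could read directly.

```python
# Generic illustration of an interpretable model, not Stitch Fix's actual models:
# a shallow decision tree whose rules can be printed and read by a human.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["prefers_bold_prints", "petite_fit", "requested_workwear"]  # invented
# Invented client feature vectors and whether the client kept a recommended item.
X = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
y = np.array([1, 0, 1, 1, 0, 1])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))  # human-readable decision rules
```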

Han said that GitHub needs humans in the loop because the output of the models has to be reviewed, for example to see whether a malicious actor is present. The problem is that nobody wants to be the human reviewer.

Moore sees humans in the loop as a check for when things go radically wrong. It is also difficult to infer business practices from the data alone. You will probably always need humans, at least for oversight, to validate that the models are producing something of value.

A problem at Facebook is that different groups look at data from their own point of view and label it for their own needs, which is problematic when another group needs that data. Philip said they are looking at ways to centralize data annotation for text, audio, and visual content.

Exciting New Breakthroughs

At the end, the panelists gave their thoughts on the new developments they find most exciting.

Moody thought that the fusion of Bayesian and deep learning techniques would allow models to incorporate uncertainty and variation. Warrick said that reinforcement learning and generative algorithms would allow one to solve for unknowns. Philip was excited about the joint modelling of video, audio, and text where no metadata is present, for example to identify violence. She also hoped that universities and private companies would come together and share more labeled data sets, allowing for more shared research and advancing the state of the art more rapidly. Moore noted that reinforcement learning systems such as AlphaGo Zero can learn a certain class of games without any previous knowledge about them.
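Moody did not name a specific technique, but one commonly cited, lightweight way to attach uncertainty to a deep model is Monte Carlo dropout: keep dropout active at prediction time and read the spread of repeated forward passes as an uncertainty estimate. The sketch below uses PyTorch, an assumed choice of framework.

```python
# Sketch of Monte Carlo dropout as one way to attach uncertainty to a deep model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=100):
    """Run repeated stochastic forward passes with dropout left on."""
    model.train()  # keeps dropout active; in a real setup, freeze batch-norm etc.
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction and its spread

x = torch.randn(5, 10)  # a batch of 5 made-up inputs
mean, std = predict_with_uncertainty(model, x)
print(mean.squeeze(), std.squeeze())
```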
