Google AI Head Jeff Dean on Machine Learning Trends in 2020

Whatever role computers come to play in society, Jeff Dean will have a powerful hand in the outcome. As the leader of Google's artificial intelligence research, he oversees a wide range of work, contributing to everything from research on autonomous vehicles to manufacturing robots to Google's powerful online advertising business.

At NeurIPS, the world's leading artificial intelligence conference, held in Vancouver, WIRED talked to Dean about his team's latest research and how Google is trying to place ethical limits on that work.

Wired: You gave a talk about building new computers to advance machine learning. What's new here?

Jeff Dean: The first was using machine learning to place and route circuits on a chip. Once a new circuit has been designed, human experts place it on the chip in an efficient layout, optimizing for area, power consumption, and many other parameters. That process normally takes a few weeks. With a machine learning model it becomes much faster, and the results are comparable to, or better than, those of human experts. In essence, the model learns to play the game of chip placement. We have been "playing" with a bunch of different internal Google chips, such as the TPU [Google's custom machine learning chip].
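To make the "game of chip placement" framing concrete, here is a minimal, hypothetical sketch: macros are placed one per step on a coarse grid, and the score is a negative wirelength proxy. The toy netlist, grid size, and greedy argmin (which stands in for a learned policy network) are all illustrative assumptions, not Google's actual method.

```python
# Toy "chip placement as a game": place one macro per step on a grid,
# reward = negative Manhattan-wirelength proxy. Purely illustrative.
import itertools
import random

GRID = 8  # coarse 8x8 placement grid

# macro name -> macros it is wired to (kept symmetric)
NETLIST = {
    "cpu":   ["cache", "io"],
    "cache": ["cpu", "mem"],
    "mem":   ["cache"],
    "io":    ["cpu"],
}

def wirelength(placement):
    """Proxy cost: Manhattan distance summed over connected macro pairs."""
    total = 0
    for a, neighbors in NETLIST.items():
        for b in neighbors:
            if a in placement and b in placement:
                (ax, ay), (bx, by) = placement[a], placement[b]
                total += abs(ax - bx) + abs(ay - by)
    return total // 2  # each symmetric edge is counted twice

def play_episode(seed=0):
    """One 'episode': place macros sequentially, greedily minimizing the proxy.
    In the learned setting, a policy network would replace this argmin."""
    rng = random.Random(seed)
    placement = {}
    free = list(itertools.product(range(GRID), range(GRID)))
    for macro in NETLIST:
        rng.shuffle(free)  # break ties randomly
        best = min(free, key=lambda cell: wirelength({**placement, macro: cell}))
        placement[macro] = best
        free.remove(best)
    return placement, -wirelength(placement)  # reward = negative cost

placement, reward = play_episode()
print(placement, "reward:", reward)
```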

W: More powerful chips are at the core of recent advances in artificial intelligence. But Facebook's artificial intelligence chief has said that this strategy will soon hit a wall, and one of your own top researchers has urged the field to keep exploring new ideas.

JD: Building more efficient, larger-scale computing systems, especially systems tailored for machine learning, still seems to have a lot of untapped potential. I think the basic research done over the past five or six years still has plenty of room for application, and we will work with our Google product colleagues to bring much of it into the real world.

But what can and cannot be done today? What is the next research frontier? We are still exploring those questions. We want to build systems that can generalize to new tasks, and doing things with less data and less computation is becoming more and more interesting and important.

W: Another topic that has attracted attention at NeurIPS is the ethical issues raised by some artificial intelligence applications. Google announced a full set of AI ethics principles 18 months ago, after a Pentagon artificial intelligence project called Maven triggered protests. How has Google's AI work changed since then?

JD: I think Google as a whole now has a better understanding of how to put these principles into practice. We have a process through which product teams using machine learning can get an early review before they design the entire system, for example on how they should collect data to make sure it is not biased, and things like that. At the same time, we are continuing to advance the research directions embodied in the principles; we have done a lot of work on bias, fairness, and privacy in machine learning.
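As a rough, hypothetical illustration of what such an early check might look like, the sketch below scans a toy labeled dataset for skewed label base rates across a sensitive attribute before any model is trained; the data, names, and threshold are invented for illustration and do not describe Google's internal process.

```python
# Pre-training data check: compare label base rates across groups so a
# skewed collection process is caught before a model is built. Toy data.
from collections import Counter

# (group, label) pairs standing in for a real training set.
examples = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 0), ("B", 1)]

group_sizes = Counter(group for group, _ in examples)
positive_rate = {
    group: sum(1 for g, y in examples if g == group and y == 1) / n
    for group, n in group_sizes.items()
}
print("group sizes:", dict(group_sizes))
print("positive label rate per group:", positive_rate)

# Flag large gaps in base rates as a prompt to revisit data collection.
gap = max(positive_rate.values()) - min(positive_rate.values())
if gap > 0.2:  # illustrative threshold
    print(f"warning: base rates differ by {gap:.2f}; review how data was collected")
```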

W: These principles rule out work on weapons but allow government work, including defense programs. Has Google started any new military projects since Maven?

JD: We are happy to work with the military or other government agencies in ways consistent with our principles. So, for example, helping to improve the safety of Coast Guard personnel is something we would be glad to do. Our Cloud teams tend to take on that kind of work, because it is really their line of business.

W: Mustafa Suleyman, co-founder of DeepMind, the London artificial intelligence startup owned by Alphabet and a major player in machine learning research, recently moved to Google. Suleyman has said he will work with you and Google's top legal and policy director, Kent Walker. What are you doing together?

JD: Suleyman has a broad perspective on issues related to artificial intelligence policy. He is also involved in Google's AI principles and review process, so I think he will spend most of his time on work related to AI ethics and policy, but I would rather let him speak to exactly what he will be doing. One area Kent's team is working on is how we can refine the AI principles to give Google's product teams more guidance, for example on facial recognition.

W: You gave a keynote on how machine learning can help society cope with climate change. Can you talk about that? And what about the fact that machine learning projects themselves sometimes consume a lot of energy?

JD: There are many opportunities to apply machine learning to different aspects of the problem; my colleague John Platt has been focusing on these issues recently. For example, machine learning can help improve transportation efficiency, or make climate modeling more accurate, since traditional models are computationally intensive, which limits their spatial resolution. Overall, I do care about the carbon emissions of machine learning, but its share of total emissions is relatively low, and some of the papers I have seen on machine learning's energy use do not consider the source of that energy. At Google's data centers, our annual energy use for all of our computing needs is almost 100% renewable.
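One concrete pattern behind the climate-modeling point is a learned surrogate, or emulator: train a cheap model to mimic an expensive simulation step so it can be evaluated on a much denser grid. The sketch below is a toy version of that idea; slow_physics() and every parameter are stand-ins, not components of a real climate model.

```python
# Learned emulator sketch: fit a small regressor to an "expensive"
# kernel, then query the cheap copy at much higher resolution.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_physics(x):
    # Placeholder for a computationally intensive sub-grid computation.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5000, 2))  # coarse sample of the input space
y_train = slow_physics(X_train)               # run the slow kernel once, offline

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(X_train, y_train)

# The emulator now stands in for the slow kernel on a denser grid than
# the original simulation could afford.
X_dense = rng.uniform(-1, 1, size=(100_000, 2))
y_fast = emulator.predict(X_dense)
print("mean absolute emulation error:",
      np.mean(np.abs(y_fast - slow_physics(X_dense))))
```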

W: Looking ahead to 2020, what machine learning trends do you expect?

JD: One is multi-modal learning, where tasks involve multiple modalities, such as video and text, or video and audio. We haven't done much in this area yet, and it may become more important in the future. Machine learning research in healthcare is also an area where we are investing a lot of energy. Another is optimizing machine learning models to run on devices, so that we can add more interesting features to mobile phones and other kinds of hardware.
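For the on-device direction, one widely used technique is post-training quantization, which shrinks a trained model so it fits and runs efficiently on phones. This minimal sketch uses TensorFlow Lite's converter API on a tiny untrained stand-in model; the architecture and file name are assumptions for illustration.

```python
# Post-training quantization sketch with TensorFlow Lite: convert a
# Keras model to a compact on-device format with weight quantization.
import tensorflow as tf

# Tiny stand-in model; in practice this would be trained first.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"on-device model size: {len(tflite_bytes) / 1024:.1f} KiB")
```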
