Google Researchers Are Learning How Machines Learn

By CADE METZ, New York Times

SAN FRANCISCO — Machines are starting to learn tasks on their own. They are identifying faces, recognizing spoken words, reading medical scans and even carrying on their own conversations.

All this is done through so-called neural networks, which are complex computer algorithms that learn tasks by analyzing vast amounts of data. But these neural networks create a problem that scientists are trying to solve: It is not always easy to tell how the machines arrive at their conclusions.

On Tuesday, a team at Google took a small step toward addressing this issue, unveiling new research that offers a rough outline of technology for showing how the machines arrive at their decisions.

“Even seeing part of how a decision was made can give you a lot of insight into the possible ways it can fail,” said Christopher Olah, a Google researcher.

A growing number of AI researchers are now developing ways to better understand neural networks. Jeff Clune, a professor at the University of Wyoming who now works in the AI lab at the ride-hailing company Uber, called this “artificial neuroscience.”

Understanding how these systems work will become more important as they take over decisions now made by humans, like who gets a job or how a self-driving car responds to an emergency.

First proposed in the 1950s, neural networks are meant to mimic the web of neurons in the brain. But that is a rough analogy. These algorithms are really a series of mathematical operations, and each operation represents a neuron. Google’s new research aims to show — in a highly visual way — how these mathematical operations perform discrete tasks, like recognizing objects in photos.
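As an illustration of that point (this is not code from Google’s research), here is a minimal Python sketch of a single artificial “neuron”: a weighted sum of its inputs passed through a simple nonlinearity. The inputs and weights below are made up for the example; in a real network, the weights are learned from data.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs,
    passed through a nonlinearity (here, ReLU)."""
    z = np.dot(weights, inputs) + bias   # the underlying mathematical operation
    return max(0.0, z)                   # ReLU: stays silent unless its pattern appears

# Toy example with hand-picked numbers, purely for illustration.
x = np.array([0.2, 0.8, -0.5])   # inputs (e.g., values from other neurons)
w = np.array([0.5, -0.1, 0.3])   # learned weights in a real network
print(neuron(x, w, bias=0.1))
```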

Inside a neural network, each neuron works to identify a particular characteristic that might show up in a photo, like a line that curves from right to left at a certain angle or several lines that merge to form a larger shape. Google wants to provide tools that show what each neuron is trying to identify, which ones are successful, and how their efforts combine to determine what is actually in the photo — perhaps a dog or a tuxedo or a bird.
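A rough sketch of that idea, often called “feature visualization,” is shown below. It is illustrative only and is not Google’s published code: starting from random noise, it nudges the pixels of an image so that one chosen neuron (here, a feature channel in an off-the-shelf PyTorch model) fires as strongly as possible, which roughly reveals the pattern that neuron has learned to detect.

```python
import torch
import torchvision.models as models

# Simplified "feature visualization": ascend the gradient of one neuron's
# activation with respect to the input image. Real tools add regularization
# so the resulting images look less like static.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:10]   # activations up to an early convolutional layer
channel = 42                  # which "neuron" (feature channel) to visualize

image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = layer(image)[0, channel].mean()   # how strongly the channel fires
    (-activation).backward()                       # ascend by minimizing the negative
    optimizer.step()

# `image` now roughly shows the pattern this channel responds to.
```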

The kind of technology Google is discussing could also help identify why a neural network is prone to mistakes and, in some cases, explain how it learned this behavior, Olah said. Other researchers, including Clune, believe such tools could also help minimize the threat of “adversarial examples,” in which someone fools a neural network by, say, subtly doctoring an image.
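One standard way researchers construct such doctored images is the “fast gradient sign method” (Goodfellow et al., 2014). The sketch below is illustrative only and is not drawn from Google’s or Uber’s work; it perturbs each pixel slightly in whichever direction most increases the network’s error.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Fast gradient sign method: a small, targeted nudge to every pixel
# can change the network's answer while the image looks unchanged to a person.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def adversarial(image, true_label, epsilon=0.01):
    """Return a subtly doctored copy of `image` that the model tends to misread."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    return (image + epsilon * image.grad.sign()).detach()

# Usage: `image` is a normalized 1x3x224x224 tensor and `true_label` a tensor
# such as torch.tensor([281]) holding the correct class index.
```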

Researchers acknowledge that this work is still in its infancy. Jason Yosinski, who also works in Uber’s AI lab, which grew out of the company’s acquisition of a startup called Geometric Intelligence, called Google’s idea “state of the art.” But he warned that it may never be entirely easy to understand the computer mind.

“To a certain extent, as these networks get more complicated, it is going to be fundamentally difficult to understand why they make decisions,” he said. “It is kind of like trying to understand why humans make decisions.”

Copyright 2024 New York Times News Service. All rights reserved.