Venture Capitalist: AI Hype Still “Has a Ways to Go Up”
The chief executive of Google has likened artificial intelligence to fire — a powerful breakthrough that is full of risks.
Earlier this year, Google said it would not renew a contract to provide artificial intelligence technology to a Pentagon program after company employees protested. The outcry showed that tech workers — Silicon Valley’s most valuable resource — are powerful, too.
Gradient Ventures, an AI-focused venture capital firm owned by Google, is navigating these complicated ethical issues with Anna Patterson, former head of engineering in Google’s AI division, at its helm.
Created in 2017, Gradient Ventures has invested in 20 companies, ranging from a startup that makes software for autonomous vehicles to one that is applying AI to biomedical research.
Patterson spoke with The New York Times about data safety, “healthy debate” and the changing attitudes of AI entrepreneurs. The following has been edited for length and clarity.
A: I think it has a ways to go up actually. Hype is another word for attention, and so I actually think the attention is warranted because the applications are important.
But it is kind of synonymous with the ‘90s — I lived through it before — when companies would call themselves a “high-tech company” or a “dot-com company” to gain entrée with VCs. So sometimes I see companies that say they are AI companies. But one of the lines I draw is: if the math can be done in Excel, it is not an AI company.
A: I think open debate is healthy, and I think it’s actually good for startups. Having the open debate has changed the way the conversation goes with startups.
Early-stage founders used to not proactively bring up these issues and now they do. So I’m really happy for the open debate. As part of our due diligence process, we have a step called a brainstorm. We were already bringing up these issues as part of the brainstorming process and now I’m pleased that the founders are bringing up the issue.
A: I’m happy that I work at a company where people can have the internal debates and I’m happy that Google published our AI principles (in June). And those are principles that, at Gradient, we were already adhering to.
A: We have passed on companies that we felt were crossing those lines. For instance, we saw an AI camera company that integrated facial recognition with mall traffic and maybe even your purchases. If you were to brainstorm with them about where this could go in the future, it might make great sense on (return-on-investment) grounds, and they are getting contracts. But we did not invest because of ethical concerns.
A: They were successful in their raise. But for the vast majority, we’re talking 99.9 percent of companies, the only desire is to build applications that help people, and they’re all positive.
A: Yeah. Getting a contract takes a long time, and it takes a long time to build the tech that would enable that contract. And so, we haven’t advised someone to change their product, but if they had two different contracts and they said, ‘Which one should I do?’ I think we would weigh in.
A: So, I mean we’ve all seen instances where, given the wrong data, a learned algorithm can go awry. I wouldn’t call it overblown, but by reminding people to be careful to get the data right when they’re building their products, I think we can short-circuit those issues.
I think in general, building an AI product is sort of like the very early Disney movies. They look magical, but actually when you think about it, somebody had to draw those drawings, like 24 frames a second. It’s just a lot of very hard work, so it doesn’t just happen overnight, which is sometimes the impression that I think people have.
I see all the hard work that goes into it, so you can’t really be surprised. These products don’t just spring up fully formed. When you’re that deeply involved, you’re not scared of the process.
A: We’re publicly saying that we abide by the AI principles and I welcome anyone else to say that too.
Copyright 2024 New York Times News Service. All rights reserved.