Google CEO Sundar Pichai brought good tidings to investors on parent company Alphabet’s earnings call last week. Alphabet reported $39.3 billion in revenue last quarter, up 22 percent from a year earlier. Pichai gave some of the credit to Google’s machine learning technology, saying it had figured out how to match ads more closely to what consumers wanted.
One thing Pichai didn’t mention: Alphabet is now cautioning investors that the same AI technology could create ethical and legal troubles for the company’s business. The warning appeared for the first time in the “Risk Factors” section of Alphabet’s latest annual report, filed with the Securities and Exchange Commission the following day:
“New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”
Companies must use the risk factors portion of their annual filings to disclose foreseeable troubles to investors. That’s supposed to keep the free market operating. It also provides companies a way to defuse lawsuits claiming management hid potential problems.
It’s not clear why Alphabet’s securities lawyers decided it was time to warn investors of the risks of smart machines. Google declined to elaborate on its public filings. The company began testing self-driving cars on public roads in 2009 and has been publishing research on ethical questions raised by AI for several years.
Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed-down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:
“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”
Microsoft also has been investing heavily in AI for many years, and in 2016 it introduced an internal AI ethics board that has blocked some contracts seen as risking inappropriate use of the technology.
Microsoft did not respond to queries regarding the timing of its disclosure on the potential for rogue AI. Both Microsoft and Alphabet have played prominent roles in a recent flowering of concern and research about ethical challenges raised by artificial intelligence. Both have already experienced them firsthand.
Last year, researchers found Microsoft’s cloud service was much less accurate at detecting the gender of black women than that of white men in photos. The company apologized and said it had fixed the problem. Employee protests at Google forced the company out of a Pentagon contract applying AI to drone surveillance footage, and the company has blocked its own Photos service from returning search results for apes after an incident in which black people were mistaken for gorillas.
Microsoft’s and Google’s new disclosures might seem obscure. SEC filings are sprawling documents written in a special and copiously sub-claused lawyerly dialect. All the same, David Larcker, director of Stanford’s Corporate Governance Research Initiative, says the new acknowledgments of AI’s attendant risks have probably been noticed. “People do look at these things,” he says.
Investors and competitors analyze risk factors to get a sense of what’s on management’s mind, Larcker says. Many items are so commonly listed—such as the risks of an economic slowdown—as to be more or less meaningless. Differences among companies or unusual items, like ethical challenges raised by artificial intelligence, can be more informative.
Some companies that claim their futures depend heavily on AI and machine learning do not list unintended effects of those technologies in their SEC disclosures. In IBM’s most recent annual report, covering 2017, the company claimed that it “leads the burgeoning market for artificial intelligence infused software solutions” while also being a pioneer of “data responsibility, ethics and transparency.” But the filing was silent on risks posed by AI or machine learning. IBM did not respond to a request for comment. The company’s next annual filing is due in the next few weeks.
Amazon, which relies on AI in areas including its voice assistant Alexa and warehouse robots, did add a mention of artificial intelligence in the risk factors in its annual report filed earlier this month. However, unlike Google and Microsoft, the company does not invite investors to entertain thoughts of how its algorithms could be biased or unethical. Amazon’s fear is that the government will slap business-unfriendly rules on the technology.
Under the heading “Government Regulation Is Evolving and Unfavorable Changes Could Harm Our Business,” Amazon wrote: “It is not clear how existing laws governing issues such as property ownership, libel, data protection, and personal privacy apply to the Internet, e-commerce, digital content, web services, and artificial intelligence technologies and services.”
Ironically, on Thursday, Amazon itself called for some government rules on facial recognition, a technology it has pitched to law enforcement, citing the danger of misuse. Amazon didn’t respond to a request for comment about why it thinks investors need to know about regulatory but not ethical uncertainties around AI. That assessment may change in time.
Larcker says that as new business practices and technologies become more important, they tend to sprout in risk disclosures at many companies. Cybersecurity used to make a rare appearance in SEC filings; now mentioning it is pro forma. AI could be next. “I think it’s kind of the natural progression of things,” Larcker says.