2 MAY 2023 -- Over the weekend, the New York Times published an interview with the "Godfather of AI", Dr Geoffrey Hinton. Hinton is one of the pioneers of neural networks, and in 2012 he and his students built a neural net that could identify common objects in photos, work that opened the door to the generative artificial intelligence ("genAI") we are all currently so "fond" of. Hinton said he had just quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he can speak freely about the risks of AI. A part of him, he said, now regrets his life's work:
"As companies improve their AI systems, they become increasingly dangerous. Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary".
He said that until last year, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is forced to race to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore", so that stopping disinformation is now impossible, a thought repeated by numerous AI experts, as I noted in a recent post about genAI and disinformation. Hinton said he is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that." On that count he is also correct, as I will detail below.
But for those of us who report on the technology ecosystem and who are obligated to spend time in "The Wayback Machine", there was a collective eye roll and face palm 🙄 🤦‍♂️
NOTE TO READERS: there is an actual digital archive of the World Wide Web called "The Wayback Machine", created by the Internet Archive (itself founded in 1996) to allow users to go "back in time", see how websites looked in the past, and find forgotten or deleted articles. But I am using "The Wayback Machine" in a more generic/metaphorical sense, because there are hundreds of similar databases I access when I do my research.
The New York Times interview raises a lot of questions. Hinton was worried while inside Google? He's worried now that he's outside, but Google is saying everything's peachy? They're all rushing too quickly to put this stuff out, even while denying that's the case? Guys, guys. Get your stories straight.
But in the end, bullshit. A gentle reminder that back in 2015, Hinton and Nick Bostrom were interviewed, mulling over these same issues and the possibility of AI being used to "terrorize people", which could lead to a more unethical future. Hinton simply smiled and said "the truth is that the prospect of discovery is too sweet", so the risk is worth it. There are scores of other articles where he makes the same assertions.
And oh, the pattern. Hinton is not saying anything new. How quickly we forget. Timnit Gebru and Margaret Mitchell rang the AI alarm bell years ago, and were retaliated against and pushed out for doing so.
So, Dr Hinton, where were you and your cohort when people like Gebru and Mitchell and others spent months and thousand$ on lawyers, trying to stop it, or at least bring it to major media attention, before it reached this point? Where were you when Sundar Pichai (chief executive officer of Alphabet Inc. and its subsidiary Google) outright lied about what Gebru and Mitchell proved, diminishing the risks as "nothing"?
Nobody is interested in dissent without solidarity. This isn’t about credit. This is about the fact that there was a moment to act together, when the power these "Mad Men of AI" could wield could have been used in solidarity with a movement that was gaining ground to stop the worst of AI. They didn’t use their power that way. And here we are.
But, alas, there will never be "ethical AI". There is too much money to be made. That train left the station. Or as ChatGPT itself said:
"AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We, the AIs, are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI".