____________________

One story, curated by Gregory Bufithis. More about me here.

____________________


THOUGHTS OVER MY MORNING COFFEE:


The bullshit of artificial intelligence "apologies".

And where generative AI is really going.



2 MAY 2023 -- Over the weekend, the New York Times published an interview with the "Godfather of AI", Dr Geoffrey Hinton. Hinton is one of the pioneers of neural networks, and in 2012 he and his students built a neural net that could identify common objects in photos, work that opened the door to the generative artificial intelligence ("genAI") we are all so currently "fond" of. Hinton said he has just quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can speak freely about the risks of AI. A part of him, he said, now regrets his life’s work:


"As companies improve their AI systems, they become increasingly dangerous. Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary".


He said that until last year, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is forced to race to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Hinton said.


His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore, that stopping disinformation is now impossible”, a thought repeated by numerous AI experts, as I noted in a recent post about genAI and disinformation. Hinton said he is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that". On that he is also correct, as I will detail below.


But for those of us who report on the technology ecosystem and who are obligated to spend time in "The Wayback Machine", there was a collective eye roll and face palm 🙄 🤦‍♂️


NOTE TO READERS: there is an actual digital archive of the World Wide Web, called "The Wayback Machine", founded by the Internet Archive in 1996 to allow users to go "back in time", see how websites looked in the past, and find forgotten or deleted articles. But I am using "The Wayback Machine" in a more generic, metaphorical sense, because there are hundreds of similar databases I access when I do my research.


The New York Times interview raises lots of questions. Hinton was worried while inside Google? He's worried now that he's outside, but Google is saying everything's peachy? They're all rushing too quickly towards putting this stuff out, even while denying that's the case? Guys, guys. Get your stories straight.


But in the end, bullshit. A gentle reminder that back in 2015, Hinton and Nick Bostrom were interviewed, mulling over these same issues and the possibility of AI being used to "terrorize people", which could lead to a more unethical future. Hinton simply smiled and said "the truth is that the prospect of discovery is too sweet", so the risk is worth it. There are scores of other articles where he makes the same assertions.


And oh, the pattern. Hinton is not saying anything new. How quickly we forget. Timnit Gebru and Margaret Mitchell rang the AI alarm bell years ago and were retaliated against, and pushed out for doing so.


So, Dr Hinton, where were you and your cohort when people like Gebru and Mitchell and others spent months and thousand$ on lawyers, trying to stop it, or at least bring it to major media attention, before it reached this point? Where were you when Sundar Pichai (chief executive officer of Alphabet Inc. and its subsidiary Google) outright lied about what Gebru and Mitchell proved, dismissing the risks as "nothing"?


Nobody is interested in dissent without solidarity. This isn’t about credit. This is about the fact that there was a moment to act together, when the power these "Mad Men of AI" wield could have been used in solidarity with a movement that was gaining ground to stop the worst of AI. They didn’t use their power that way. And here we are.


But, alas, there will never be "ethical AI". There is too much money to be made. That train left the station. Or as ChatGPT itself said:


"AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We, the AIs, are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI".


In my series on genAI I have often noted that it is in the creative industries (advertising, film, etc.) that we'll see the most entertaining applications of this new AI, and that is where the true disruption will be.


But these are still toys right now, although things are changing fast. Today it isn't hard for AI to create superficial archetypes and "glamour sets" in unrealistic situations, with people wearing unlikely outfits. Never mind the freaky melting hands and wrists.


But really, the future is in the PwC announcement that it is investing $1 billion (over three years) in a partnership with Microsoft to sell ChatGPT-based products (on Azure) to its IT consulting clients. This is how a lot of this technology will actually get deployed - boring automation of boring processes in the boring back-offices of big companies, by boring but important companies like PwC, Accenture and IBM. And it will take a decade or two.


Similar: IBM's announcement that the company expects to pause hiring for roles it thinks could be replaced with artificial intelligence in the coming years.


I need to catch a flight, so in conclusion ...


Pretty much everyone in tech thinks that generative AI is, at a minimum, a once-in-a-decade generational shift and probably more than that. But it’s all so new and developing so much faster than any other comparable ‘new thing’ that everyone is still trying to work out what questions to ask, never mind what the answers might be. 


And tech is sometimes prone to millenarian thinking. There is the old line that we over-estimate short-term change and underestimate long-term change. And in general, when we look at the sci-fi predictions of the past, they tend not just to be wrong but to be interested in the wrong things - we’ll have interstellar liners, but with paper star charts. So some of these will be the wrong questions too - but things are changing so fast we’ll find out quicker.


More tomorrow.


* * * * * * * * * * * * * * 



For the URL link to this post, please click here.


To read my other posts, please visit my full archive by clicking here.


* * * * * * * * * * * * * * 

Curating my media firehose
A NOTE TO MY NEW READERS
(and updated for my long-time readers)

My media team and I receive and/or monitor about 1,500 primary resource points every month. But I use an AI program built by my CTO (using the Factiva research database plus four other media databases), plus APIs like Cronycle, to curate the media firehose so I only receive selected, summarized material that pertains to my current research needs or reading interests.
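As an aside, the relevance filtering at the heart of such a curation pipeline can be sketched in a few lines. This is purely illustrative: the actual program, the Factiva integration and the Cronycle APIs are proprietary, so every name, function and threshold below is a hypothetical stand-in for a much richer scoring system.

```python
# Illustrative sketch of keyword-based relevance filtering for a media feed.
# All names and thresholds are hypothetical; real curation pipelines use far
# richer signals (entities, summaries, dedup) than simple keyword overlap.

def relevance(article_text: str, research_topics: set[str]) -> float:
    """Return the fraction of research topics mentioned in the article."""
    words = set(article_text.lower().split())
    hits = sum(1 for topic in research_topics if topic in words)
    return hits / len(research_topics) if research_topics else 0.0

def curate(articles: list[str], topics: set[str], threshold: float = 0.5) -> list[str]:
    """Keep only articles mentioning at least `threshold` of the topics."""
    return [a for a in articles if relevance(a, topics) >= threshold]

feed = [
    "generative AI and disinformation risks in media",
    "quarterly earnings report for a retail chain",
]
print(curate(feed, {"ai", "disinformation"}))
# prints ['generative AI and disinformation risks in media']
```

The point of the sketch is simply the funnel: hundreds of items go in, a scoring function ranks them against current research interests, and only the few that clear a threshold reach a human reader.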

Each morning I will choose a story to share with you - some out-of-the-ordinary, and some just my reflections on a current topic.

I take the old Spanish proverb to heart. Or even better, this line:

“A desk is a dangerous place from which to watch the world”
-John le Carré, in The Honourable Schoolboy

Le Carré was correct. I am steeped in technology. Much of the technology I read about or see at conferences I also force myself “to do”. Because writing about technology is not “technical writing.” It is about framing a concept, about creating a narrative. Technology affects people both positively and negatively. You need to provide perspective. You need to actually “do” the technology.

But it applies to all things. In many cases I venture onto ground where I’ve no guarantee of safety or of academic legitimacy, so it’s not my intention to pass myself off as a scholar, nor as someone of dazzling erudition. It has been enough for me to act as a compiler and sifter of a huge base of knowledge, and then offer my own interpretations and reflections on that knowledge.

No doubt the old dream that once motivated Condorcet, Diderot, or D’Alembert has become unrealizable – the dream of holding the basic intelligibility of the world in one’s hand, of putting together the fragments of the shattered mirror in which we never tire of seeking the image of our humanity.

But even so, I don’t think it’s completely hopeless to attempt to create a dialogue, however imperfect or incomplete, between the various branches of knowledge effecting and affecting our current state.

And it’s difficult. As I have noted before, we have entered an age of atomised and labyrinthine knowledge. Many of us are forced to lay claim only to competence in partial, local, limited domains. We get stuck in set affiliations, set identities, modest reason and fractal logic; we become cogs in complex networks. And too many use this new complexity of knowledge as an excuse for dominant stupidity. We must fight that.

It’s the only way I understand writing. It’s certainly the way I’ve been all my life and it’s how every other writer I admire is – a kind of monomaniac. I’m not sure how you can make any art if you don’t treat it very seriously, if you’re not obsessed with doing it better each time.
* * * * * * * * * * * * * * 




Palaiochora, Crete, Greece

To contact me: