____________________

One story, curated by Gregory Bufithis. More about me here.

____________________


THOUGHTS OVER MY MORNING COFFEE


Sam Altman, Mob Boss.


Looks like the OpenAI board was right all along.



Even on little things, Sam is not consistently candid.


Sam Altman relaxing at home.


21 May 2024 -- A break from genocide and geopolitical megalomaniac scoundrels to talk about ... technology's megalomaniac scoundrels.


A week ago, OpenAI released an exciting new demo, featuring a character with a sexy, breathy voice that was supposed to remind you of Scarlett Johansson’s AI agent character in the fabulous film Her. Lots of people gushed over it. Yes, many worried about the sexism, as well they should, but that’s a story for another day. And of course, I daresay, the demo was just a demo, one that will never work robustly as advertised, but that too is a story for another day.


Fast forward to today; there’s been a backlash. Too many people noticed the coincidence and not everyone was happy. Some wondered whether ScarJo had been compensated. Yesterday, under pressure, OpenAI pulled the ScarJo-like voice, alleging that the resemblance was purely a coincidence:

I probably don’t have to tell you, but that’s complete bullshit. And stupid, obviously refuted bullshit at that. For one thing, Sam himself had proudly posted a reference to the film “Her” within hours of the demo:

And OpenAI staff mentioned "Her" at every opportunity they could.


I can’t tell you what happened to Sam’s caps lock, but obviously the claim that the resemblance to ScarJo was a “coincidence” was a lie. Sam knew perfectly well what the character sounded like.


And if it really was another actress’s voice, why pause the “Sky” voice? There would be nothing to prove or answer. Publish her name and all is good to go. But Altman claims the real voice actor "wants to protect her privacy". Literally incredible.


But it seems Altman is desperate to push this forward before everyone realizes AI is decades from being what they are trying to promote. And they know it, or they wouldn’t be monetizing very small spinoffs from AI. Hundreds of billions invested with no returns whatsoever, costs spiraling out of control, and investors needing some income or they pull the plug, so they are putting out cheap AI bits and pieces.


A couple hours later, ScarJo herself (via her publicist) sent a statement, even more damning, to Bobby Allyn, a journalist at NPR, telling the real story:

Yeah, Sam. "Coincidence" my eye. 


Note to readers: an important clarification about the letters from ScarJo’s lawyers: They weren’t cease and desist notices. They do not initiate a lawsuit. The letters sought clarity about what exactly was fed to the model to produce the distinctive “Sky” voice. Insiders at OpenAI have divulged across social media that OpenAI specifically chose an actress/voice-over expert who could mimic ScarJo, and that the result was then "massaged" to get even closer to ScarJo's voice.


And ScarJo’s contention that this goes back to September checks out:


They said it wasn’t "intentional", but of course it was. Sam may not want to delete his “her” tweet, but 6 million people saw it. And his "coincidence" line is a sign of consciousness of guilt.


All of this is really about consent. Artists and writers and actors don’t want their work used without their permission; if you want to use their stuff, you should get their permission and compensate them.


If they say “no”, no means no. Scarlett said “no". That didn’t stop Sam. Mob bosses never take "no" for an answer. As the filmmaker Toni Thai put it:


Altman has got away with a lot for a long time, but people are starting to see through the ruse. Here’s an (admittedly unscientific) poll Gary Marcus took yesterday:

Count me with the majority. Spin is a way of life at OpenAI; telling the truth is not. Casey Newton noticed, too (more from him in a moment):

So did Canadian MP Michelle Rempel Garner who voiced the opinion of many across Twitter and other social media outlets:

The (old, now-replaced) Board said in November they fired Sam because he was not consistently candid. But as I mentioned in a detailed post 2 years ago, we saw it in his fudges to the Senate about his ownership of OpenAI equity (he has indirect equity, which he failed to mention, and he owns an OpenAI VC firm that trades on the company name, which he also failed to mention), the board saw it in his lies about Helen Toner, and now we all see it in his embarrassing lies about Scarlett Johansson.


It . is . a . pattern.


As Casey Newton noted last night on his blog:


Johansson is one of the world’s most famous actresses, and she speaks for an entire class of creatives who are now wrestling with the fact that automated systems have begun to erode the value of their work. OpenAI’s decision to usurp her voice for its own purposes will now get wide and justified attention.


Johansson's lengthy, very successful career and her global recognition substantiate the fact that her recognizable voice is her intellectual property, worth tens of millions if not hundreds, in addition to being a valuable component of her brand.


That is the key. If Hollywood was wondering why actors held out against AI in the contract negotiations ... this is it. If someone voice-clones you, that's not just "training content", mate. That's you, as far as anyone knows. Aside from being deeply disturbing for the person cloned, that could be used for fraud, porn, advertising, supporting despotic regimes (quite a number of Eastern European vloggers are finding themselves suddenly, magically voice-cloned into Putin supporters), or acting out some lonely kid/tech billionaire's teenage fantasy.


Actors can and do license their voices for voice cloning - famously James Earl Jones, who's now fully retired. Nobody else will be Darth Vader. He sold his voice for $150 million, with all kinds of conditions on how his voice can be used.


Johansson could well make the argument that Sky's global release damaged her brand worldwide and abused her IP. Altman's "Hail Mary" shot at hiring her so close to the release date raises questions. What were the discussions within OpenAI after she apparently put them on hold while her side discussed the matter? Did Altman give them the release date along with the second offer? There is no way he could have thought she would be available on short notice.


Altman's "her" post is damning. Johansson's team is not new to this. They sued Disney for $50 million over breach of contract for failing to release one of her film's in cinemas before streaming and it (which resulted in Disney reducing their payout to her), and they settled in fairly short order. She received a huge settlement.


And, no, Johansson’s experience wasn’t even the most important thing that happened at OpenAI in the past week.


Last week, Ilya Sutskever announced he was leaving the company. Sutskever, a co-founder of the company who is renowned both for his technical ability and his concerns about the potential of AI to do harm, was a leader of the faction that briefly deposed Altman in November. Sutskever reversed his position once it became clear that the vast majority of OpenAI employees were prepared to quit if Altman did not return as CEO. But Sutskever hasn’t worked for the company since.


In an anodyne public statement, Sutskever said only that he now plans to work on “a project that is very personally meaningful to” him.


Why not say more about the circumstances of his departure? Vox reported over the weekend that OpenAI’s nondisclosure agreements forbid criticizing the company for the duration of the former employee’s lifetime, forbid them from acknowledging the existence of the NDA, and force them to give up all vested equity in the company if they refuse to sign. Altman wanted everybody tied up.


Note to readers: after the story drew wide attention, Altman apologized and said the company would remove the provision about clawing back equity for violating the NDA.


As OpenAI’s chief scientist, Sutskever also led its so-called “superalignment team,” which it formed last July to research ways of ensuring that advanced AI systems act safely and in accordance with human intent. Among other things, the company promised to dedicate 20% of its scarce and critical computing power to the project. That is now out the door. The "alignment issue" is enormous in AI and I will try to address it in another post.



Bottom line? Just typical tech behavior: break the rules to establish a precedent, create a buzz around it, act surprised by any negative public reaction, and then (and only then) discuss it while claiming you "did not do any harm or did not intend to do any harm".


Rinse and repeat.

* * * * * * * * * * * * * * 


For a URL link to share this post, please click here.


To read my other thoughts and musings,

please see my full web site by clicking here


* * * * * * * * * * * * * * 




Palaiochora, Crete, Greece

To contact me: