As Casey Newton noted last night on his blog:
Johansson is one of the world’s most famous actresses, and she speaks for an entire class of creatives who are now wrestling with the fact that automated systems have begun to erode the value of their work. OpenAI’s decision to usurp her voice for its own purposes will now get wide and justified attention.
Johansson's long, highly successful career and her global recognition make a strong case that her recognizable voice is her intellectual property, worth tens of millions of dollars if not hundreds of millions, in addition to being a valuable component of her brand.
That is the key. If Hollywood was wondering why actors held out against AI in the contract negotiations ... this is it. If someone voice-clones you, that's not just "training content", mate. That's you, as far as anyone knows. Aside from being deeply disturbing for the person cloned, a cloned voice could be used for fraud, porn, advertising, or supporting despotic regimes (quite a number of Eastern European vloggers are finding themselves suddenly voice-cloned into Putin supporters), or for acting out some lonely kid's or tech billionaire's teenage fantasy.
Actors can and do license their voices for cloning - famously James Earl Jones, who is now fully retired. Nobody else will be Darth Vader: he sold his voice for $150 million, with all kinds of conditions on how it can be used.
Johansson could well argue that Sky's global release damaged her brand worldwide and abused her IP. Altman's "Hail Mary" attempt at hiring her so close to the release date raises questions. What were the discussions within OpenAI after she apparently put the offer on hold while her side considered it? Did Altman give her team the release date along with the second offer? There is no way he could have thought she would be available on short notice.
Altman's "her" post is damning. Johansson's team is not new to this. They sued Disney for $50 million over breach of contract for failing to release one of her film's in cinemas before streaming and it (which resulted in Disney reducing their payout to her), and they settled in fairly short order. She received a huge settlement.
And, no, Johansson’s experience wasn’t even the most important thing that happened at OpenAI in the past week.
Last week, Ilya Sutskever announced he was leaving the company. Sutskever, a co-founder of the company who is renowned both for his technical ability and his concerns about the potential of AI to do harm, was a leader of the faction that briefly deposed Altman in November. Sutskever reversed his position once it became clear that the vast majority of OpenAI employees were prepared to quit if Altman did not return as CEO. But Sutskever hasn’t worked for the company since.
In an anodyne public statement, Sutskever said only that he now plans to work on “a project that is very personally meaningful to” him.
Why not say more about the circumstances of his departure? Vox reported over the weekend that OpenAI’s nondisclosure agreements forbid former employees from criticizing the company for the rest of their lives, forbid them from acknowledging the existence of the NDA itself, and force them to give up all vested equity in the company if they refuse to sign. Altman wanted everybody tied up.
Note to readers: after the story drew wide attention, Altman apologized and said the company would remove the provision about clawing back equity for violating the NDA.
As OpenAI’s chief scientist, Sutskever also led its so-called “superalignment” team, which the company formed last July to research ways of ensuring that advanced AI systems act safely and in accordance with human intent. Among other things, the company promised to dedicate 20% of its scarce and critical computing power to the project. That commitment is now out the door. The alignment issue is an enormous one in AI, and I will try to address it in another post.
Bottom line? Just typical tech behavior: break the rules to establish a precedent, create a buzz around it, act surprised by any negative public reaction, and then (and only then) discuss it, all while claiming you "did not do any harm, or did not intend to do any harm".
Rinse and repeat.