This Week in AI: Can we (and could we ever) trust OpenAI?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and drew back the curtains on its most recent efforts to stop bad actors from abusing its AI tools. There’s not much to criticize there — at least not in this writer’s opinion. But I will say that the deluge of announcements seemed timed to counter the company’s recent bad press.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson’s. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed — and that she’d refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn’t in fact seek to clone Johansson’s voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It’s a tad suspect.

Then there’s OpenAI’s trust and safety issues.

As we reported earlier in the month, OpenAI’s since-dissolved Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources — but only ever (and rarely) received a fraction of this. That (among other reasons) led to the resignation of the team’s two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI’s chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But it staffed the committee with company insiders — including Altman — rather than outside observers. This as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI’s market-disrupting technologies make the violations all the more troubling.

It doesn’t help matters that Altman himself isn’t exactly a beacon of truthfulness.

When news of OpenAI’s aggressive tactics toward former employees broke — tactics that entailed threatening employees with the loss of their vested equity, or the prevention of equity sales, if they didn’t sign restrictive nondisclosure agreements — Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman’s signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner — one of the ex-board members who attempted to remove Altman from his post late last year — is to be believed, Altman has withheld information, misrepresented things that were happening at OpenAI and in some cases outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave wrong information about OpenAI’s formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of it bodes well.

Here are some other AI stories of note from the past few days:
