Keeping up with the rapidly evolving AI industry is challenging. Until an AI can help you, here’s a useful summary of recent developments in machine learning, including noteworthy research and experiments we didn’t cover individually.
This week in AI, OpenAI introduced discounted plans for nonprofits and educational clients and unveiled its latest attempts to prevent misuse of its AI tools. While there’s not much to criticize there, one might say the timing of these announcements seems designed to counteract recent negative press about the company.
Let’s start with Scarlett Johansson. OpenAI removed a voice from its AI-powered chatbot ChatGPT after users pointed out it closely resembled Johansson’s. Johansson later issued a statement revealing she’d hired legal counsel to investigate the voice’s creation and mentioned she had repeatedly declined OpenAI’s requests to license her voice for ChatGPT.
A piece in The Washington Post suggests OpenAI didn’t clone Johansson’s voice intentionally and that the similarities were coincidental. But then it’s hard to explain why OpenAI CEO Sam Altman contacted Johansson and urged her to reconsider just two days before a demo featuring the soundalike voice.
Then there are OpenAI’s trust and safety challenges.
As reported earlier, OpenAI’s now-defunct Superalignment team, tasked with managing “superintelligent” AI systems, was promised 20% of the company’s compute resources but rarely received even a fraction of them. That, among other grievances, led to the resignation of the team’s co-leads, Jan Leike and Ilya Sutskever, previously OpenAI’s chief scientist.
Nearly a dozen safety experts have left OpenAI in the past year. Several, including Leike, have publicly expressed concerns that the company prioritizes commercial projects over safety and transparency. In response, OpenAI created a new committee to oversee safety and security decisions but filled it with company insiders, including Altman, rather than outside observers. This development comes as OpenAI is reportedly considering abandoning its nonprofit status for a traditional for-profit model.
Incidents like these make it challenging to trust OpenAI, especially as its influence grows daily through deals with news publishers. While few corporations are inherently trustworthy, OpenAI’s groundbreaking technologies make its lapses especially concerning.
It doesn’t help that Altman himself isn’t always a model of transparency.
When news broke about OpenAI’s aggressive tactics toward former employees — threatening them with the loss of vested equity, or blocking equity sales, unless they signed restrictive nondisclosure agreements — Altman apologized and claimed ignorance. But, according to Vox, Altman’s signature is on the documents that enacted these policies.
Former OpenAI board member Helen Toner asserts that Altman withheld information, misrepresented what was happening at the company, and in some cases outright lied to the board. Toner claims the board learned of ChatGPT’s release via Twitter rather than from Altman, that he gave the board incorrect details about OpenAI’s safety practices, and that he tried to pressure other board members into ousting her after she co-authored an academic paper critical of the company.
None of this is promising.
Here are other notable AI stories from the past few days:
- Voice cloning made easy: A new report from the Center for Countering Digital Hate highlights how AI-powered voice cloning services make it trivial to fabricate statements by politicians.
- Google’s AI Overviews struggle: Google’s AI-generated search results, AI Overviews, have drawn criticism for inaccurate and sometimes absurd answers. The company admits the shortcomings but claims it’s iterating quickly. (We’ll see.)
- Paul Graham on Altman: Paul Graham, co-founder of startup accelerator Y Combinator, dismissed claims on X that Altman was pressured to resign as Y Combinator president in 2019 due to potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
- xAI raises $6B: Elon Musk’s AI startup, xAI, secured $6 billion in funding as Musk prepares to compete aggressively with the likes of OpenAI, Microsoft, and Alphabet.
- Perplexity’s new AI feature: AI startup Perplexity has launched Perplexity Pages, a new feature to help users create visually appealing reports, articles, or guides, as reported by Ivan.
- AI models’ favorite numbers: Devin looks at the numbers different AI models give when asked for a random one. It turns out they have favorites, a reflection of their training data; see the sketch after this list for a way to test it yourself.
- Mistral releases Codestral: French AI startup Mistral, backed by Microsoft and valued at $6 billion, released a new generative AI model for coding named Codestral, which has a restrictive license preventing commercial use.
- Chatbots and privacy: Natasha writes about the European Union’s ChatGPT taskforce and its efforts to address the AI chatbot’s privacy compliance.
- ElevenLabs’ sound generator: Voice cloning startup ElevenLabs introduced a tool to generate sound effects through prompts, initially announced in February.
- Interconnects for AI chips: Companies including Microsoft, Google, and Intel have formed the UALink Promoter Group to develop next-generation interconnect technology for AI accelerator chips, notably excluding Arm, Nvidia, and AWS.
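On that “favorite numbers” item: the bias is easy to check empirically. Below is a minimal sketch, not taken from any of the pieces above, that repeatedly asks a chat model for a “random” number and tallies the replies. It assumes the official openai Python client, an OPENAI_API_KEY in your environment, and an illustrative model name. A truly uniform sampler over 1–100 would spread roughly one hit per number; LLMs tend to pile onto a handful of human-favorite values instead.

```python
# Hypothetical demo: tally which "random" numbers a chat model actually returns.
# Assumes the official `openai` Python client (>= 1.0) and OPENAI_API_KEY set.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def sample_random_numbers(n_trials: int = 100) -> Counter:
    """Ask the model for a 'random' 1-100 number n_trials times; count replies."""
    counts: Counter = Counter()
    for _ in range(n_trials):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice; any chat model works
            messages=[{
                "role": "user",
                "content": "Pick a random number between 1 and 100. "
                           "Reply with the number only.",
            }],
            temperature=1.0,  # the bias persists even at full temperature
        )
        reply = (resp.choices[0].message.content or "").strip()
        if reply.isdigit():
            counts[int(reply)] += 1
    return counts


if __name__ == "__main__":
    # With a uniform sampler the top counts would all hover near 1;
    # in practice a few numbers dominate, echoing the training data.
    for number, count in sample_random_numbers().most_common(10):
        print(f"{number}: {count}")
```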