
OpenAI’s GPT-4o Launch Overshadowed by Internal Safety Team Disbandment


Keeping up with the rapidly evolving AI landscape is challenging. Until an AI can do it for you, here’s a roundup of recent developments in machine learning, along with notable research and experiments we didn’t cover on their own.

Truth Voices is planning to launch an AI newsletter soon. In the meantime, our semi-regular AI column is moving to a weekly cadence, so expect more frequent updates.

This week, OpenAI was at the forefront again (despite Google’s efforts) with a product launch and significant internal changes. The company introduced GPT-4o, its most advanced generative model to date, and then disbanded a team working on controlling potential “superintelligent” AI systems.

The dismantling of the team made headlines after reporting revealed that OpenAI had deprioritized the team’s safety research in favor of launching new products like GPT-4o. The shift prompted the resignations of the team’s co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI remains largely theoretical; it’s uncertain when or if the tech industry will achieve the necessary breakthroughs for AI capable of performing any human task. However, this week’s events suggest that OpenAI’s leadership, particularly CEO Sam Altman, is prioritizing product development over safety measures.

Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference last November. He also criticized Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member, over a paper that critiqued OpenAI’s approach to safety, to the point of attempting to have her removed from the board.

Over the past year, OpenAI has let its chatbot store fill up with spam and allegedly scraped data from YouTube in violation of the platform’s terms of service, all while voicing ambitions to let its AI generate depictions of porn and gore. Safety appears to have slipped down the company’s list of priorities, leading many of OpenAI’s safety researchers to conclude that their work would be better supported elsewhere.

Here are additional notable AI stories from recent days:

  • OpenAI + Reddit: OpenAI struck a deal with Reddit to use the social site’s data for AI model training, a move welcomed by Wall Street but potentially unpopular with Reddit users.
  • Google’s AI: At its annual I/O developer conference, Google unveiled numerous AI products, including video-generating Veo, AI-enhanced Google Search results, and upgrades to the Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, Instagram co-founder and co-founder of personalized news app Artifact, is joining Anthropic as the company’s first chief product officer, overseeing consumer and enterprise efforts.
  • AI for kids: Anthropic announced it would allow developers to create kid-focused apps and tools using its AI models, provided they adhere to certain rules. Rivals like Google prohibit their AI from being built into apps aimed at younger users.
  • AI film festival: AI startup Runway hosted its second AI film festival, showcasing moments where human creativity stood out more than AI-generated content.

More machine learnings

AI safety is a prominent topic this week, especially with the OpenAI departures. However, Google DeepMind is forging ahead with its new “Frontier Safety Framework.” This strategy involves identifying potentially harmful AI capabilities, regularly evaluating models to detect critical capability levels, and applying mitigation plans to prevent misuse. More details can be found in their technical report.
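
In practice, a framework like this boils down to checking periodic capability evaluations against predefined critical levels and triggering mitigations when a model crosses one. Here’s a rough sketch of such a loop; the capability names, thresholds, and mitigations are invented for illustration and are not DeepMind’s.

```python
from dataclasses import dataclass

@dataclass
class CapabilityThreshold:
    capability: str        # evaluated dangerous-capability category (hypothetical)
    critical_level: float  # eval score at or above which mitigations apply
    mitigation: str        # response to apply when the level is reached

# Hypothetical thresholds for illustration only.
THRESHOLDS = [
    CapabilityThreshold("autonomy", 0.6, "restrict external deployment"),
    CapabilityThreshold("cyber_offense", 0.7, "tighten access to model weights"),
]

def review(eval_scores):
    """Compare the latest periodic eval scores against critical capability
    levels and return the mitigations that should be applied."""
    return [
        f"{t.capability}: {t.mitigation}"
        for t in THRESHOLDS
        if eval_scores.get(t.capability, 0.0) >= t.critical_level
    ]

print(review({"autonomy": 0.3, "cyber_offense": 0.75}))
# -> ['cyber_offense: tighten access to model weights']
```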

Cambridge researchers have identified a different risk involving chatbots that simulate deceased individuals by training on their data. Although this technology could aid in grief management, it carries significant ethical concerns. Lead researcher Katarzyna Nowaczyk-Basińska emphasizes the need for safeguards to mitigate the social and psychological risks of digital immortality.

At MIT, physicists are using AI to predict physical systems’ phases, traditionally a complex statistical task. Their machine learning model, trained on relevant data and grounded with known material characteristics, offers a more efficient method.
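
As a toy illustration of the general idea, rather than the MIT group’s actual model, a simple classifier can learn to label simulated configurations by the phase they were sampled from; the Ising-style “sampler” and labels below are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_config(ordered: bool, size: int = 16) -> np.ndarray:
    """Crude stand-in for a physics simulation: 'ordered' configurations are
    mostly aligned spins with a few thermal flips, 'disordered' ones are random."""
    if ordered:
        spins = np.ones((size, size))
        spins[rng.random((size, size)) < 0.05] = -1.0
    else:
        spins = rng.choice([-1.0, 1.0], size=(size, size))
    return spins.ravel()

# Training data: configurations labeled by the phase they were sampled from.
labels = rng.integers(0, 2, size=400)                 # 1 = ordered, 0 = disordered
configs = np.array([sample_config(bool(y)) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(configs, labels)
print(clf.predict([sample_config(ordered=True)])[0])  # expected: 1
```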

CU Boulder is exploring AI’s potential in disaster management, from predicting resource needs to mapping damage and training responders. While there’s hesitation to use AI in critical scenarios, Professor Amir Behzadan advocates for human-centered AI to enhance disaster response and recovery.

Lastly, Disney Research is working on diversifying the outputs of diffusion image generation models, which tend to produce repetitive results. Their technique anneals the conditioning signal with scheduled Gaussian noise, increasing the diversity of generated images while preserving alignment with the condition.
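
For readers who want the gist in code, here is a minimal sketch of that idea: perturb the conditioning signal with Gaussian noise whose scale is annealed over the sampling steps, so early steps explore and late steps stay faithful to the condition. The sampler interface, linear schedule, and tensor shapes below are assumptions for illustration, not Disney Research’s actual implementation.

```python
import torch

def condition_noise_scale(step: int, total_steps: int, sigma_max: float = 1.0) -> float:
    """Scheduled (annealed) noise level for the conditioning signal:
    sigma_max at the first sampling step, decaying linearly to zero."""
    return sigma_max * (1.0 - step / max(total_steps - 1, 1))

def sample_with_annealed_condition(denoise_step, cond, total_steps=50,
                                   shape=(1, 3, 64, 64), sigma_max=1.0):
    """Generic iterative denoising loop that perturbs the condition with
    scheduled Gaussian noise at every step. `denoise_step(x, step, cond)` is
    an assumed callable wrapping whatever diffusion update the model uses."""
    x = torch.randn(shape)  # start from pure noise
    for step in range(total_steps):
        sigma_c = condition_noise_scale(step, total_steps, sigma_max)
        noisy_cond = cond + sigma_c * torch.randn_like(cond)  # perturbed condition
        x = denoise_step(x, step, noisy_cond)
    return x
```

The intuition is that a heavily noised condition early in sampling nudges different runs toward different modes, while the nearly clean condition in the final steps pulls each image back into alignment with the prompt.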
