Barret Zoph, a research lead at OpenAI, recently showcased the new GPT-4o model’s ability to detect human emotions through a smartphone camera. During the demonstration, ChatGPT briefly misidentified Zoph’s face as a wooden table, a humorous glitch that was quickly corrected, after which the AI tool accurately described his facial expression and apparent emotions.
OpenAI has introduced a new model for ChatGPT that can process text, audio, and images. In a notable departure from its previous subscription-gated approach, the company is making GPT-4o available for free. Features like memory and web browsing, previously restricted to paid subscribers, are now accessible to all users.
Although GPT-4o is free, ChatGPT Plus subscribers still benefit from higher message limits and earlier access to new features. The subscription allows users to interact more extensively with the AI model and offers exclusive updates such as voice mode, which will roll out to subscribers before non-paying users.
Zoph also demonstrated the playful AI voice mode in OpenAI’s demo videos, showcasing its ability to answer questions and perform real-time speech translation. The company believes users will discover innovative use cases for the technology on their own.