Meta Forms AI Council with All-White Male Members

Meta announced on Wednesday the formation of an AI advisory council composed exclusively of white men. The move extends a longstanding pattern in which women and people of color are overlooked in the AI sector, despite their contributions and expertise.

Meta did not immediately reply to our inquiry regarding the diversity of the advisory council. 

The new advisory board is distinct from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial composition. This AI board, appointed without shareholder election and lacking fiduciary duty, is tasked with providing insights and recommendations on technological advancements, innovation, and strategic growth. According to Meta, the board will meet “periodically.”

Notably, the AI advisory council is made up entirely of businesspeople and entrepreneurs, with no ethicists or researchers with deep academic backgrounds in AI. While executives from companies like Stripe, Shopify, and Microsoft may have extensive experience bringing products to market, AI's unique risks and potential repercussions, especially for marginalized groups, call for a broader range of expertise.

In a recent interview, Sarah Myers West, managing director at the AI Now Institute, highlighted the importance of critically examining AI-producing institutions to ensure they serve the public’s needs. She emphasized, “This is error-prone technology, and we know from independent research that those errors are not distributed equally; they disproportionately harm communities that have long faced discrimination. We should be setting a much, much higher bar.”

Women, in particular, face significant risks from AI. A 2019 report from Sensity AI revealed that 96% of AI deepfake videos online were nonconsensual and sexually explicit. With the rise of generative AI, such harmful behavior continues to target women.

One notable incident in January saw nonconsensual, pornographic deepfakes of Taylor Swift going viral on X, amassing hundreds of thousands of likes and millions of views. Despite historically inadequate responses from social platforms like X in similar instances, Swift’s high-profile status prompted X to block search terms related to these deepfakes.

However, most individuals victimized by such technology do not have the same level of influence as Swift. Numerous reports and cases involve students creating explicit deepfakes of their peers, facilitated by easily accessible apps that “undress” photos or alter them. NBC’s Kat Tenbarge uncovered that platforms like Facebook and Instagram even hosted ads for Perky AI, an app promoting explicit image generation.

Meta only removed these ads for Perky AI after Tenbarge’s findings brought them to the company’s attention. Some ads featured blurred images of celebrities like Sabrina Carpenter and Jenna Ortega, including an image of Ortega from when she was just sixteen. Meta’s mishandling of these ads was not isolated; the Oversight Board is investigating Meta’s failure to address reports of sexually explicit AI-generated content.

Incorporating voices from women and people of color in AI development is essential. Historically, the exclusion of these groups from critical research and technology development has led to harmful oversights. For instance, women were only included in clinical trials starting in the 1970s. Furthermore, AI systems have displayed racial biases, such as self-driving cars’ difficulties in detecting Black individuals, a finding by a 2019 Georgia Institute of Technology study.

AI’s entrenched biases mirror and amplify existing societal inequalities in areas such as employment, housing, and criminal justice. Voice recognition technology often fails to understand diverse accents, and AI-detection tools have mistaken writing by non-native English speakers for AI-generated content, as Axios highlighted. Facial recognition technologies disproportionately flag Black individuals as suspects, underscoring systemic biases in law enforcement.

The persistence of systemic biases in AI development suggests that industry leadership has not adequately prioritized these issues. The focus on rapid innovation, especially with generative AI technologies, may exacerbate existing problems. A report by McKinsey warns that AI could automate a significant portion of jobs predominantly held by minority workers, potentially worsening economic disparities.

Given Meta’s all-white male AI advisory council, concerns arise about its ability to represent and advise on AI products for a diverse global population. Crafting inclusive and safe AI requires a nuanced approach, involving diverse perspectives to avoid reinforcing existing power structures. While Meta’s approach may fall short, there’s room for other startups to address these critical needs.

Dominic-Madori Davis
Dominic-Madori Davis is a senior venture capital and startup reporter. She is based in New York City.
