Slack Under Fire for AI Data Usage and Privacy Policy Confusion

Amid ongoing concerns over how big tech companies utilize data from individuals and businesses for AI training, Slack users are increasingly frustrated with the platform’s AI strategy under Salesforce.

Slack, like many other companies, is leveraging user data to train its new AI features. However, opting out of this data usage requires users to email the company, a detail hidden in an outdated privacy policy.

This policy went largely unnoticed until a disgruntled individual highlighted it on a popular developer community site, causing the issue to go viral.

The controversy erupted when a post on Hacker News linked directly to Slack’s privacy principles, sparking a wider discussion among Slack users. It revealed that users are automatically opted in to AI training by default and must send an email to opt out.

The Hacker News thread led to numerous conversations on other platforms. Users questioned why a new product called "Slack AI," which offers features like conversation summaries and search results, isn't mentioned in the privacy policy. They also found Slack's references to "global models" and "AI models" unclear.

The confusion and frustration around opting out, and the uncertainty about where Slack's AI principles apply, paint the company in a negative light despite its claims that customers control their data.

Though the shock is recent, the terms have been unchanged since at least September 2023, according to the Internet Archive.

Slack’s privacy policy states that customer data is used to train “global models” that enhance features such as channel and emoji recommendations. However, the company clarifies that these models do not memorize or reproduce customer data.

A Slack spokesperson emphasized, “Slack’s platform-level machine learning models do not learn or memorize customer data; they solely enhance features like channel and emoji recommendations.” However, the policy does not cover the full extent of Slack’s plans for AI training.

Customers still benefit from Slack’s “globally trained AI/ML models” even if they opt out of data training. But the necessity of using customer data to power features like emoji recommendations remains unclear.

Additionally, the company stated it doesn’t use customer data to train Slack AI.

“Slack AI, an additional purchase, employs large language models (LLMs) that do not rely on customer data for training. These LLMs run within Slack’s AWS infrastructure, ensuring customer data remains internal and exclusive to the organization,” a spokesperson noted.

Some of this confusion might be clarified soon. Reacting to a critical comment on Threads by engineer and writer Gergely Orosz, Slack engineer Aaron Maurer acknowledged the need to update the privacy page to reflect Slack AI’s usage.

Maurer explained that the terms were formulated before Slack AI existed and primarily dealt with search and recommendations. Users should watch for updates to understand Slack’s current AI usage.

Slack’s situation highlights the importance of clearly communicating data use and privacy policies in the rapidly evolving AI landscape.

Ivan Mehta
Ivan covers global consumer tech developments. He is based out of India and has previously worked at publications including Huffington Post and The Next Web.
