WhatsApp Backs 'Optional' AI Tool Despite Inability to Disable
The Meta-owned messaging platform defends an 'optional' AI tool that cannot be disabled.
In a tale of contradictions, WhatsApp says the new AI feature embedded in its messaging service is "entirely optional" - despite the fact that it cannot be removed from the app.
Permanently Featured
The Meta AI logo is an ever-present blue circle with pink and green splashes in the bottom right of your Chats screen. Tapping it opens a chatbot designed to answer your questions, but the feature has drawn frustration from users who cannot remove it from the app.
The uproar echoes the backlash over Microsoft's Recall feature, which was initially always-on before the firm relented and allowed people to disable it.
"We think giving people these options is a good thing and we're always listening to feedback from our users," WhatsApp told the BBC.
The company likens it to other fixed features in the app, such as Channels and Status.
It comes the same week Meta announced an update to its teen accounts feature on Instagram.
The firm revealed it was testing AI technology in the US designed to find accounts belonging to teenagers who have lied about their age on the platform.
The Blue Circle
If you can't see the feature, you may not be able to use it yet.
Meta says the feature is only being rolled out to some countries at the moment and advises it "might not be available to you yet, even if other users in your country have access".
As well as the blue circle, there is a search bar at the top inviting users to 'Ask Meta AI or Search'. The same feature appears on Facebook Messenger and Instagram, both of which are also owned by Meta.
The chatbot is powered by Llama 4, one of the large language models developed by Meta.
Before you ask it anything, a long message from Meta explains what Meta AI is, stating that it is "optional". On its website, WhatsApp adds that Meta AI "can answer your questions, teach you something, or help come up with new ideas".
Users' Thoughts
So far, reaction in Europe has been largely negative, with users on X, Bluesky, and Reddit airing their frustrations - and Guardian columnist Polly Hudson was among those venting their anger at not being able to turn it off.
Dr Kris Shrishak, an adviser on AI and privacy, was also highly critical, and accused Meta of "exploiting its existing market" and "using people as test subjects for AI".
"No one should be forced to use AI," he told the BBC.
"Its AI models are a privacy violation by design - Meta, through web scraping, has used personal data of people and pirated books in training them.
"Now that the legality of their approach has been challenged in courts, Meta is looking for other sources to collect data from people, and this feature could be one such source."
Dr Shrishak says users should be wary. "When you send messages to your friend, end to end encryption will not be affected," he said.
"Every time you use this feature and communicate with Meta AI, you need to remember that one of the ends is Meta, not your friend."
The tech giant also cautions that you should only share material you are comfortable being used by others.
"Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use," it says.