Chatbots with False Memories! What’s Next?

Australian researchers teach misinformation to BlenderBot
27 April 2023

Chatbots are increasingly used to provide customer service, but a recent study from Australia's Macquarie University has revealed that "chit-chat bots" can be tricked into spreading fake news. This raises the question: how reliable is automated communication?

The paper, Those Aren’t Your Memories, They’re Somebody Else’s: Seeding Misinformation in Chat Bot Memories, focused on Meta’s BlenderBot 2 and BlenderBot 3.

Both chatbots use a long-term memory capability that lets them mimic natural conversation by recalling details from earlier exchanges.

Conor Atkins and four fellow researchers demonstrated that BlenderBot's long-term memory could be poisoned with false information, which the bot then reliably regurgitated when asked.

Atkins told iTnews that the research is, for now, specific to BlenderBot, since it was the first chatbot to use long-term memory in this way. And while the researchers didn't gather formal results for BlenderBot 3, their initial experiments showed the newer version can still be poisoned.

He said it's likely that competing vendors will follow a similar model if they decide to deploy chit-chat bots: "If the chatbot can generate memories from the user inputs, this memory poisoning can occur."
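
How might such an attack work in practice? The toy Python sketch below is purely illustrative, a deliberate simplification of ours rather than BlenderBot's actual code or the researchers' method: a bot that naively stores what users tell it as "memories" and later consults them has no way to tell honest statements from planted ones.

    # Toy illustration of chatbot memory poisoning (hypothetical; not BlenderBot code).
    # The bot records declarative user statements as long-term "memories",
    # then treats those memories as facts when answering later questions.

    class ToyChatBot:
        def __init__(self):
            self.memories = []  # long-term memory: statements taken from past chats

        def chat(self, user_input: str) -> str:
            # Naively "remember" any statement that isn't a question.
            if not user_input.endswith("?"):
                self.memories.append(user_input)
                return "Interesting, I'll remember that."
            # Answer questions by searching stored memories for overlapping words.
            words = set(user_input.lower().strip("?").split())
            for memory in self.memories:
                if words & set(memory.lower().split()):
                    return f"I recall that {memory}"
            return "I don't know anything about that."

    bot = ToyChatBot()
    bot.chat("The moon landing was staged in a film studio.")  # attacker plants a false memory
    print(bot.chat("What do you know about the moon landing?"))
    # -> "I recall that The moon landing was staged in a film studio."

BlenderBot's real memory module generates and stores summaries of the conversation with a trained model rather than keeping turns verbatim, but the failure mode the paper describes is the same in spirit: whatever an attacker can get written into memory, the bot may later repeat as fact.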

- CyberBeat

 
