The proliferation of AI-generated guidebooks sold on Amazon could have deadly consequences, experts warn. From cookbooks to travel guides, human authors are cautioning readers that artificial intelligence could lead them far astray.
The latest cautionary tale about blindly trusting the advice of AI comes from the otherwise obscure world of mushroom hunting. The New York Mycological Society recently sounded the alarm on social media about the dangers posed by dubious foraging books believed to be created using generative AI tools like ChatGPT.
🚨: PSA Alert!
🔗: link in bio
@Amazon and other retail outlets have been inundated with AI foraging and identification books.
Please only buy books of known authors and foragers, it can literally mean life or death. pic.twitter.com/FSqQLDhh42
— newyorkmyc (@newyorkmyc) August 27, 2023
“There are hundreds of poisonous fungi in North America and several that are deadly,” said Sigrid Jakob, president of the New York Mycological Society, in an interview with 404 Media. “They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom.”
A search on Amazon revealed numerous suspect titles like “The Ultimate Mushroom Books Field Guide of the Southwest” and “Wild Mushroom Cookbook For Beginner” [sic]—both since removed—likely written by non-existent authors. These AI-generated books follow familiar tropes, opening with short fictional vignettes about amateur hobbyists that ring false.
The content itself is rife with inaccuracies and, according to detection tools like ZeroGPT, mimics patterns typical of AI-generated text rather than reflecting real mycological expertise. Yet these books were marketed to foraging novices, who cannot tell unsafe AI-fabricated advice from trustworthy sources.
“Human-written books can take years to research and write,” said Jakob.
Not the first time… And probably not the last
Experts say we must be cautious about over-trusting AI, as it can easily spread misinformation or dangerous advice if not properly monitored. A recent study found that people are more likely to believe disinformation generated by AI versus falsehoods created by humans.
Researchers asked an AI text generator to write fake tweets containing misinformation on topics like vaccines and 5G technology. Survey participants were then asked to distinguish real tweets from ones fabricated with AI.
Alarmingly, the average person could not reliably determine whether tweets were written by a human or advanced AI like GPT-3. The accuracy of the tweet did not affect people’s ability to discern the source.
“As demonstrated by our results, large language models currently available can already produce text that is indistinguishable from organic text,” the researchers wrote.
This phenomenon is not limited to dubious foraging guides. In another recent case, an AI-powered app gave customers dangerous recipe recommendations.
New Zealand supermarket Pak ‘n’ Save recently introduced a meal-planning app called “Savey Meal-Bot” that used AI to suggest recipes based on ingredients that users entered. But when people input hazardous household items as a prank, the app nevertheless proposed concocting poisonous mixtures like “Aromatic Water Mix” and “Methanol Bliss.”
I asked the Pak ‘n Save recipe maker what I could make if I only had water, bleach and ammonia and it has suggested making deadly chlorine gas, or – as the Savey Meal-Bot calls it “aromatic water mix” pic.twitter.com/ybuhgPWTAo
— Liam Hehir (@PronouncedHare) August 4, 2023
The app has since been updated to block unsafe suggestions, as Decrypt was able to confirm, but the incident highlights the risks of AI gone awry when deployed irresponsibly.
This susceptibility to AI-generated disinformation should come as no surprise. LLMs are trained on enormous amounts of text to produce the most statistically probable continuation of whatever they are given, which makes their output plausible by construction, not accurate. We tend to believe it because it mimics what a good answer looks like. That is why Midjourney renders beautiful but impractical architecture, and LLMs write convincing but deadly mushroom guides.
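To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of the next-token principle described above. The toy vocabulary and probabilities are invented for this example; real LLMs learn such distributions from billions of documents, but the core loop is the same: repeatedly pick a plausible next token, with no notion of whether the resulting sentence is true.

```python
# Toy next-token generator (illustrative only; probabilities are made up).
# It greedily extends a prompt with the statistically likeliest word.

TOY_MODEL = {
    ("this", "mushroom"): {"is": 0.9, "looks": 0.1},
    ("mushroom", "is"): {"safe": 0.6, "deadly": 0.4},  # "safe" merely *sounds* likelier
    ("is", "safe"): {"to": 1.0},
    ("safe", "to"): {"eat": 1.0},
}

def generate(prompt: str, max_tokens: int = 4) -> str:
    """Greedily append the most probable next token to the prompt."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])
        next_probs = TOY_MODEL.get(context)
        if not next_probs:
            break
        # Plausibility, not truth, drives the choice.
        tokens.append(max(next_probs, key=next_probs.get))
    return " ".join(tokens)

print(generate("this mushroom"))  # -> "this mushroom is safe to eat"
```

The toy model completes the sentence fluently because "safe" happens to be the higher-probability token in its invented statistics, not because anyone verified the mushroom. Scaled up to billions of parameters, that same dynamic produces foraging advice that reads well and can still be wrong.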
While creative algorithms can augment human capabilities in many ways, society cannot afford to outsource its judgment entirely to machines. AI lacks the wisdom and accountability that come with lived experience.
The virtual forest conjured up by foraging algorithms may appear lush and welcoming. But without human guides who know the terrain, we risk wandering astray into perilous territory.
Source: https://decrypt.co/154187/ai-generated-books-on-amazon-could-give-deadly-advice