Fetch AI, SingularityNET Team Up to Squash AI Hallucinations


AI developers Fetch AI and SingularityNET have announced a new partnership to curb AI hallucinations using decentralized technology.

The deal between Fetch AI and SingularityNET will focus on addressing the tendency of large language models (LLMs) to produce inaccurate or irrelevant outputs, or “hallucinations,” which the companies deem a significant obstacle to building AI reliability and adoption.

Decentralized platforms such as SingularityNET and Fetch allow developers to assemble multi-component AI systems, with different components that run on different machines, without any central coordinator, SingularityNET CEO Ben Goertzel told Decrypt. “The potential to experiment with a variety of different configurations for multi-component AI systems, such as neural-symbolic systems, gives a flexibility of a different sort than exists in standard centralized AI infrastructures,” he said.

The agreement also includes plans to launch a series of yet-to-be-named products in 2024 that they claim will leverage decentralized technology to develop more advanced and accurate AI models.

The deal would give developers access to tools like Fetch.ai’s DeltaV interface and SingularityNET’s AI APIs, which they could combine to build more decentralized, dependable, and intelligent models.

“SingularityNET has been working on a number of methods to address hallucinations in LLMs. The key theme across all of these is neural-symbolic integration. We have been focused on this since SingularityNET was founded in 2017,” SingularityNET Chief AGI Officer Alexey Potapov added. “Our view is that LLMs can only go so far in their current form and are not sufficient to take us towards artificial general intelligence but are a potential distraction from the end goal.”

Neural-symbolic integration

Neural-symbolic integration blends neural networks, which learn from data, with symbolic AI, which uses clear rules for reasoning. This combination enhances AI’s learning and decision-making, making it adaptable and logically sound, leading to more accurate and reliable AI applications.
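
To make the idea concrete, here is a minimal, hypothetical Python sketch (illustrative only, not actual Fetch.ai or SingularityNET code): a stand-in “neural” component proposes ranked answers, and a symbolic rule layer vetoes any candidate that contradicts a small fact base, abstaining rather than hallucinating.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float

# Stand-in for a neural model: returns ranked candidate answers.
def neural_propose(question: str) -> list[Candidate]:
    return [
        Candidate("Paris is the capital of France", 0.92),
        Candidate("Lyon is the capital of France", 0.41),
    ]

# Symbolic layer: hard facts as (subject, relation, object) triples.
FACTS = {("France", "capital", "Paris")}

def symbolic_check(candidate: Candidate) -> bool:
    # Accept an answer only if it is consistent with the fact base.
    for subject, relation, obj in FACTS:
        if relation == "capital" and subject in candidate.answer:
            return obj in candidate.answer
    return True  # no applicable rule: pass the answer through unchanged

def answer(question: str) -> str:
    for cand in sorted(neural_propose(question), key=lambda c: -c.confidence):
        if symbolic_check(cand):
            return cand.answer
    return "No verified answer available"  # abstain instead of hallucinating

print(answer("What is the capital of France?"))

The symbolic layer here is deliberately tiny; in a real system it could be a knowledge graph or a logic engine, which is the direction SingularityNET’s neural-symbolic work points toward.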

“We don’t expect to fix [hallucinations] completely, of course, but I think part of the fun with LLMs is to have the hallucination because that’s where the innovation comes in,” Fetch AI founder and CEO Humayun Sheikh told Decrypt. “In a way, you have to be creative without any limitations.”

While AI hallucinations are a problem on their own, Sheikh said, the coupling of hallucinations with AI-generated deepfakes is a more significant threat, one that is not only a concern now but will grow as AI becomes more advanced and hallucinations more convincing.

Sheikh pointed out the danger of an AI that “extrapolates some wrong ideas, and it reinforces itself,” calling it a “self-fulfilling” problem. “That, to me, is the biggest problem in the near-term future, rather than thinking that some AI is going to start trying to take us over,” he added.

The rise of AI hallucinations

As artificial intelligence has risen into the mainstream, AI hallucinations have also made headlines.

In April, ChatGPT wrongly accused law professor Jonathan Turley of sexually assaulting a student during a class trip that never took place.

In October, attorneys for former Fugees member Pras Michel filed a motion for a new trial, alleging that his former legal team used artificial intelligence and that the AI model hallucinated its responses, causing their client to be convicted on 10 counts, including conspiracy to commit witness tampering, falsifying documents, and serving as an unregistered foreign agent.

“Hallucination is a double-edged sword, especially for content creation,” Fetch AI CPO Kamal Ved said. “A lot of people enjoy the hallucination because they get a kick out of it and what the LLM is going to come back with.” He explained that the partners are “trying to tackle the problem of actually executing an action, and we expect some sort of determinism.”
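
One common way to get that kind of determinism, sketched below in hypothetical Python (the action names and schema are illustrative, not DeltaV’s actual interface), is to parse the model’s output into a fixed action schema and refuse to execute anything that falls outside it.

import json

# Hypothetical action schema: every executable action and its required
# parameters are declared up front, so execution stays deterministic
# even when the model's generation is creative.
ALLOWED_ACTIONS = {
    "book_flight": {"origin", "destination", "date"},
    "send_payment": {"recipient", "amount"},
}

def validate_action(raw_output: str) -> dict:
    # Parse model output as JSON and reject anything outside the schema.
    action = json.loads(raw_output)
    name, params = action.get("name"), action.get("params", {})
    if name not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {name}")
    missing = ALLOWED_ACTIONS[name] - params.keys()
    extra = params.keys() - ALLOWED_ACTIONS[name]
    if missing or extra:
        raise ValueError(f"bad parameters: missing={missing}, extra={extra}")
    return action

# A hallucinated action fails closed instead of being executed.
try:
    validate_action('{"name": "wire_everything", "params": {}}')
except ValueError as err:
    print("rejected:", err)

print(validate_action('{"name": "send_payment", "params": {"recipient": "alice", "amount": 10}}'))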

Making hallucinations harder to combat is a lack of transparency in AI model development, researchers at Stanford University said, as companies battle to dominate the market. And while generative AI developers may say they want to be transparent, an October report by Stanford University’s Center for Research on Foundation Models (CRFM) found the opposite to be true.

CRFM Society Lead Rishi Bommasani warned that companies in the foundation model space are becoming less transparent, adding, “If you don’t have transparency, regulators can’t even pose the right questions, let alone take action in these areas.”

OpenAI, Meta, and IBM have founded groups aimed at building more transparency into AI development. In July, OpenAI partnered with Anthropic, Google, and Microsoft to launch the Frontier Model Forum, which is aimed at the responsible development of AI models, including dealing with AI hallucinations.

Earlier this month, IBM partnered with Meta to launch the AI Alliance, a consortium of over 50 entities that came together to research, develop, and build AI models responsibly and transparently.

“The progress we continue to witness in AI is a testament to open innovation and collaboration across communities of creators, scientists, academics, and business leaders,” IBM Chairman and CEO Arvind Krishna said in a statement. “This is a pivotal moment in defining the future of AI.”

Source: https://decrypt.co/209790/fetch-ai-singularitynet-team-up-to-squash-ai-hallucinations
