
What Is Generative AI? The Technology That Can Teach Itself

Generative AI has taken the world by storm, with platforms like ChatGPT, Midjourney, and Stable Diffusion changing, for better or worse, how we create, view, and experience media. Once the domain of science fiction and theory, artificial intelligence can now be accessed everywhere, from desktop computers to smartphones.

Generative AI refers to tools that use prompts to generate images, text, music, and videos. This beginner’s guide will explore generative AI, the companies developing it, and the future of this emerging technology. 

In the beginning

The idea of artificial intelligence has existed in various forms for nearly a century, but it wasn’t until the 1950s that the concept started to become a reality.

Several milestones led to the artificial intelligence we know today, including the Imitation Game, better known as the Turing Test, proposed by British mathematician Alan Turing in 1950; the DENDRAL project in the 1960s; IBM’s Deep Blue defeating chess champion Garry Kasparov in 1997; and IBM’s Watson winning Jeopardy! in 2011.

In 2014, Ian Goodfellow and his colleagues introduced Generative Adversarial Networks (GANs), in which a generator network learns to produce data that a competing discriminator network cannot tell apart from real examples. With that work, the generative AI we know today began to take shape.

The next big leap in artificial intelligence came in December 2015 with the founding of OpenAI, the San Francisco-based company behind ChatGPT. OpenAI was founded by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba.

In November 2022, OpenAI launched the first iteration of ChatGPT; this was followed in March 2023 by the launch of the more powerful GPT-4.

Proponents of AI see the technology as a gateway to a utopian future where poverty, disease, and inequality are eradicated. On the other hand, detractors say artificial intelligence will replace humans in the workplace and have accused generative AI of plagiarism, copyright infringement, and stealing from human creators. The truth, as with most things, lies somewhere in the middle.

AI vs. Generative AI

AI entered the public discourse in a major way after the public launch of ChatGPT in November 2022. What we commonly call artificial intelligence actually covers two facets of the technology: artificial intelligence and machine learning.

Artificial intelligence lets computers emulate human thought and perform tasks that mimic the human brain. Machine learning refers to algorithms that allow computers to identify patterns, make decisions, and improve themselves through experience.
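
To make that “learning from experience” concrete, here is a minimal, hypothetical sketch using Python’s scikit-learn library: a model is fitted to a handful of made-up, labeled examples and then asked to predict an unseen case. The study-hours scenario and the numbers are invented purely for illustration.

```python
# Hypothetical sketch of machine learning with scikit-learn:
# the model "learns" a pattern (more study hours -> passing) from labeled data.
from sklearn.linear_model import LogisticRegression

# Toy, made-up data: hours studied -> exam outcome (0 = fail, 1 = pass)
X = [[1], [2], [3], [8], [9], [10]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)              # "experience": fit the pattern present in the data

print(model.predict([[7]]))  # predict an unseen case; here the output is likely [1] (pass)
```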

Generative AI learns from data to produce content. While traditional AI is interpretable and consistent, generative AI is flexible but can be less predictable. An example of this unpredictability is generative AI’s habit of “hallucinating” responses.
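
That unpredictability comes partly from the fact that generative models sample their output from a probability distribution rather than returning one fixed answer. The sketch below, which assumes the Hugging Face transformers library and the small open GPT-2 model, draws several different continuations of the same prompt; the prompt and settings are arbitrary choices for illustration.

```python
# Hypothetical sketch: the same prompt can yield different text on each run
# because the model samples from a probability distribution over next words.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Generative AI is",
    max_new_tokens=25,
    do_sample=True,         # random sampling instead of always picking the top word
    temperature=0.9,        # higher temperature -> more varied, less predictable text
    num_return_sequences=3, # draw three different continuations
)

for out in outputs:
    print(out["generated_text"])
```

Running this more than once typically produces different text each time, which is exactly the flexibility, and the occasional unreliability, described above.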

Hallucinations

AI hallucinations refer to instances when an AI generates unexpected, untrue results not backed by real-world data. These can take the form of fabricated content, news, or information about people, events, or facts.

In April, Jonathan Turley, a US criminal defense attorney and law professor, claimed that ChatGPT accused him of committing sexual assault. Worse, the AI made up and cited a Washington Post article to substantiate the claim.

In May, Steven A. Schwartz, a lawyer in Mata v. Avianca Airlines, admitted to “consulting” the chatbot as a source when conducting research. The problem? The case citations ChatGPT provided Schwartz were fabricated.

How Does Generative AI Work?

Generative AI learns from troves of data fed into the system, most of which comes from large language model developers scraping the internet. In response to prompts, AI programs use neural networks to produce new content that resembles the data they were trained on. Thanks to deep learning, generative AI models can now generate images, voices, music, and video games.
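
In practice, many applications reach these models through a hosted API rather than running them locally. The following minimal sketch assumes OpenAI’s Python client (version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are placeholders chosen for illustration.

```python
# Hypothetical sketch: sending a prompt to a hosted generative model
# (here, OpenAI's chat API) and printing the generated text.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain generative AI in one sentence."}],
)

print(response.choices[0].message.content)
```

The same prompt-in, content-out pattern applies to image, music, and video generators, just with different models and output formats.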

Popular platforms for using generative AI include:

   * Art and Imagery: Midjourney, Stable Diffusion, Playground, Leonardo AI

   * Music: Shutterstock’s Amper Music, OpenAI’s MuseNet, Soundful

   * Text: ChatGPT, Google Bard, Claude

   * Games: Scenario, Promethean AI, Nvidia Omniverse

Ethical Implications and Controversies

Advances in artificial intelligence have also created a cottage industry of online scams using the technology. AI-generated deepfakes have become so prevalent that the Vatican and the United Nations have warned about the technology.

A deepfake is video or audio content created with artificial intelligence that depicts events that never happened, and such fakes are increasingly hard to discern thanks to generative AI platforms like Midjourney 5.1 and OpenAI’s DALL-E 2.

In June, Meta announced the launch of Voicebox, an AI speech tool. Acknowledging the potential misuse of the platform to create audio deepfakes, Meta said Voicebox would not be released to the public.

The United Nations warned that AI-generated deepfakes on social media could fuel hate in conflict zones. The global body called on social media platform developers to invest in content moderation systems that combine human and artificial intelligence for every language used where they operate, and to make content reporting transparent.

In May, OpenAI CEO Sam Altman called for the creation of a new regulatory agency to oversee the development of artificial intelligence during a congressional hearing in Washington, D.C.

 “I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards,” Altman said, adding that the would-be agency should require independent audits of any AI technology.

Generative AI platforms have also been accused of promoting unhealthy eating habits and body image standards.

Conclusion and Future Outlook

What does the future hold for artificial intelligence? For many computer scientists and science fiction fans, the rapid advancement of artificial intelligence models is heading toward the singularity.

“We acknowledge that there are a number of research breakthroughs to happen before we get to human-level AGI (artificial general intelligence),” SingularityNET COO Janet Adams told Decrypt. “But we have built the technology stack for that AGI, and they could even emerge sooner than three to seven years.”

In computer science, singularity is achieved when artificial intelligence surpasses human intelligence, resulting in rapid, unpredictable technological advancements and societal changes. Technology theorists speculate that the singularity will happen by 2045, but thanks to developments in AI, that timetable is being moved up.




Source: https://decrypt.co/resources/what-is-generative-ai-the-technology-that-can-teach-itself
