
A Brief History of Artificial Intelligence (AI): From Turing to IoT


In brief

Artificial intelligence (AI) as a concept dates back to ancient times, but modern AI began in the mid-20th century with pioneers like Alan Turing.
In the 1950s, computers were not yet advanced enough for AI research, and they were far too costly for most (leasing fees could run up to $200,000 per month).
In the mid-1950s, Newell, Shaw, and Simon created Logic Theorist, a computer program often regarded as the first AI program; it could prove mathematical theorems using symbolic reasoning.
Funding for AI increased in the 1960s and 1970s, including government support from DARPA.
Key developments in the 1980s included the advent of “deep learning,” which allowed computers to learn from experience, and expert systems, which began to replicate human decision-making.
By the 2000s, AI had become popularized through IBM’s Deep Blue, Furby toys, Roombas, and eventually Apple’s Siri virtual assistant, among many others.
Now, developers launch new AI programs in the form of autonomous vehicles, machine learning tools, chatbots, virtual assistants, and more, leading to a growing internet of things (IoT) and countless new use cases.

The concept of artificial intelligence (AI) can be traced back to ancient times and legends about artificial beings that were endowed with consciousness, such as the golem in Jewish folklore. Golems, so the mythology goes, were beings made out of lifeless substances like dirt that were brought to life by a type of incantation.

1950s: Alan Turing

Modern-day AI has its origins in the earliest computers in the mid-20th century and with pioneers such as the British mathematician, cryptographer, and computer scientist Alan Turing. In his 1950 paper “Computing Machinery and Intelligence,” Turing posed a question: can machines use information and reason to solve problems and make decisions in the same way that humans do? In many ways, this question still guides contemporary efforts to create and advance AI technologies.

During Turing’s lifetime, technological limitations significantly impeded potential advances in AI. Computers were rare, extremely expensive (leasing for up to $200,000 per month in the 1950s), and rudimentary compared to modern hardware. A key problem for Turing’s generation was that computers at the time could only execute commands, not store them. Computers could perform functions but were not yet able to remember what they had done.

1950s: Logic Theorist

One of the first AI programs was called Logic Theorist, developed in the mid-1950s by Allen Newell, Cliff Shaw, and Herbert Simon. Logic Theorist was a computer program that could use symbolic language to prove mathematical theorems. In addition to being a groundbreaking technological advancement for AI, Logic Theorist has also had a decades-long impact on the field of cognitive psychology.

1960s through 1980s: Technology Develops and Subsequent AI Programs Launch

In the 1960s and 1970s, computing technology advanced quickly. Computers could process data more quickly and store more information. Perhaps even more importantly, they became more common, more accessible, and less expensive. Following Newell, Shaw, and Simon, other early computer scientists created new algorithms and programs that were better able to target specific tasks and problems. These included ELIZA, a program by Joseph Weizenbaum designed as an early natural language processor.

One of the reasons for AI’s success during this period was strong financial support from the Defense Advanced Research Projects Agency (DARPA) and leading academic institutions. This support and the speed of developments in AI technology led scientists like Marvin Minsky to predict in 1970 that a machine with the “general intelligence of an average human being” was only three to eight years away.

Still, there were many obstacles to overcome before this goal could be reached. Computer scientists discovered that natural language processing, self-recognition, abstract thinking, and other human-specific skills were difficult to replicate with machines. And the limited computational power of computers at the time remained a significant barrier.

The 1980s saw new developments in so-called “deep learning,” allowing computers to learn new skills from experience. Expert systems, pioneered by Edward Feigenbaum, began to replicate human decision-making processes.

1990s: Deep Blue

One of the highest-profile examples of AI to date occurred in 1997, when IBM’s Deep Blue computer program defeated chess world champion and grandmaster Garry Kasparov. The match was highly publicized, bringing AI to the public in a way that it had not been previously. At the same time, speech recognition software had advanced far enough to be integrated into Windows operating systems. In 1998, AI made another important inroad into public life when the Furby, the first “pet” toy robot, was released.

2000s: Humanoid Robots, Driverless Cars

In the early 2000s, a number of humanoid robots brought AI closer to science fiction tropes. Kismet, a robot with a human-like face that could recognize and simulate emotions, launched in 2000. Honda released a similar humanoid robot called ASIMO in the same year. In 2009, Google developed a driverless car prototype, although news of this advancement did not emerge until later.

2010s to the Present Day

The last decade or so has seen AI technologies and applications proliferate at a tremendous pace. ImageNet, originally launched in 2007, is a powerful database of annotated images used to train AI programs. Highly publicized AI applications have found their way into the popular game show Jeopardy!, video games, and even the iPhone, which gained the virtual assistant Siri in 2011. Amazon’s Alexa, Microsoft’s Cortana, and other AI programs and tools have further popularized this technology.

Now, in the age of the internet of things (IoT), one is likely to find AI in more places than ever before. Autonomous vehicles, machine learning tools, chatbots, virtual assistants, and other AI programs continue to launch, often at an accelerating pace and with increasing power. One of the most highly publicized AI programs in history, the chatbot ChatGPT, launched in late 2022 and quickly inspired legions of fans and related chatbot programs. Investors are increasingly focused on AI companies. The future of AI seems bright, though some observers remain wary, citing ethical and other concerns.

 




