Entrepreneurship

Artificial intelligence and the art of imitation

The evolution of AI - humankind's attempt to artificially imitate its own intelligence.

Date
Author
Marc Neumann, Guest author
Reading time
7 minutes

A black and white scene from the movie Metropolis shows a woman's head wearing a helmet with wires attached.
AI technologies are the defining issue for people interested in engineering, business, politics, the mind, and money. However, the question of, and search for, artificial machine intelligence is considerably older. © akg / Horst von Harbou - Stiftung Deutsche Kinemathek

Is there any part of our lives that artificial intelligence (AI) hasn't already impacted - or won't in the future? From internet searches to stock markets, legislation to production processes, this transformative technology is reshaping the world as we know it.

So what exactly is artificial intelligence? Is it the mechanical knight envisioned by Leonardo da Vinci or HAL, the calculating computer from "2001: A Space Odyssey"? Does it encompass everything from chess-playing computers and the Terminator to virtual assistants like Siri and Alexa? And where do self-driving cars and smart refrigerators fit in?

While each of these examples - real or fictional - may seem wildly different, they have one fundamental thing in common: they are all the product of humankind's enduring quest to create machines that imitate our cognitive abilities and behaviours. And it is this ambition that has shaped the evolution of AI, driving innovations from early mechanical robots to today's sophisticated self-learning systems.

Alan Turing and the dawn of AI

When John McCarthy coined the term "artificial intelligence", he had a clear purpose in mind. According to Matthew Jones, Smith Family Professor of History at Princeton University, McCarthy deliberately kept the term broad to attract funding for a summer workshop at Dartmouth College, proposed in 1955 and held the following year. McCarthy himself admitted, "I invented the term artificial intelligence…when we were trying to get money for a summer study." His actual goal at the time, which was to develop human-level machine intelligence, wasn't considered "catchy" enough.

A group of elderly men with red lanyards around their necks pose for a group photo.
John McCarthy (seated, center), who coined the term 'artificial intelligence' in 1955, selected it deliberately to boost his chances of securing funding for a summer study on human-level machine intelligence. © John Markoff/NYT/Redux/laif

But the story of AI and the quest for machine intelligence began before McCarthy gave it a name. While early innovators like Charles Babbage (1791-1871), who conceptualised the first automatic digital computer in the 19th century, and the mathematician and polymath John von Neumann (1903-1957), a pioneer in hardware and algorithms, laid the groundwork, the origins of our current understanding of AI can be traced back to Alan Turing's 1950 paper, "Computing Machinery and Intelligence".

In this seminal work, the English cryptographer credited with helping crack the Nazis' "Enigma" code during the Second World War proposed the "Imitation Game" (later called the "Turing Test") as a way to assess machine intelligence. Turing's role-playing game consisted of a computer trying to convince a human interrogator, via written responses, that it is a human and not a machine. In his paper, Turing systematically refutes the objections to the possibility that intelligent machines could one day exist. And computer scientists have been trying to create systems that mimic human behaviour and thought ever since.
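To make the set-up concrete, the game can be sketched in a few lines of Python. The sketch below is purely illustrative: the interrogator, machine and human objects and their methods are hypothetical stand-ins for the three participants Turing describes, not part of any real system.

```python
import random

def imitation_game(interrogator, machine, human, rounds=5):
    """Minimal sketch of Turing's imitation game (hypothetical interfaces):
    the interrogator exchanges only written messages with two hidden
    respondents and must then guess which of them is the machine."""
    # Randomly hide the two respondents behind the labels "A" and "B".
    labels = {"A": machine, "B": human}
    if random.random() < 0.5:
        labels = {"A": human, "B": machine}

    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)          # written question only
        answers = {name: player.reply(question) for name, player in labels.items()}
        transcript.append((question, answers))           # no voices, no faces, just text

    guess = interrogator.identify_machine(transcript)    # "A" or "B"
    return labels[guess] is not machine                  # True if the machine fooled the judge
```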

A young man in a 1930s suit with side-parted hair stands against a gray background and looks to the right.
In a famous essay from 1950, Alan Turing explains the idea of the "imitation game". © Darchivio/opale.photo/laif

Interestingly, Turing's influence extends not only to the beginnings of AI but also to the later course of its history, right up to the Transformer, which since 2017 has allowed neural networks not only to recognise data patterns but also to "remember" them, and which became the cornerstone of today's "self-learning" large language models (LLMs) such as ChatGPT (GPT stands for "Generative Pre-trained Transformer").

In the spirit of Turing's famous Imitation Game, the history of AI is a history of attempts to imitate human intelligence and, later, to map the world we perceive as accurately as possible. The Turing Test remains the paradigm of AI.

From robots to symbolic reasoning and neural networks

AI's evolution can be broken down into three key stages. The first focused on robots modelled after the human body; the second on symbolic reasoning, which used rules-based systems to imitate human logic and intellect; and the third on neural networks that analyse empirical data to recognise patterns in a way similar to the human brain.

Robots – intelligence that mirrors the human body

For centuries, humans have been captivated by the idea of robots with human-like qualities, especially consciousness. Advancements in engineering and the increasing electrification in the 19th and 20th centuries further fuelled this vision and led to the emergence of the first analogue mechanical robots such as Gakutensoku, which was developed in Japan in the late 1920s.

These were followed by digital, computer-controlled robots that could perform specialised tasks and were used in various capacities and locations ranging from factories to operating theatres. MIT researcher Rodney Brooks advocated for these sensor-driven autonomous robots in the 1980s, convinced they were the superior approach for replicating human behaviour and consciousness - and research into this form of AI continues today. At the same time, however, systems equipped with complex programmes for perceiving the world and their environment, such as evolutionary and adaptive robots, also remain popular, as we shall see.

The convergence of this autonomous hardware and software has led to military applications, like the robot dog recently unveiled by the Chinese military, which has a firearm mounted on its back. While this invention may sound like something from a sci-fi novel, it is, in fact, simply another example of humanity's enduring quest to create machines that imitate reality.

A human-sized figure with a face and moving limbs stands next to a man and a box with cables and controls.
Analogue electronic androids, the first prototypes of which appeared in the late 1920s, gave way to digital, computer-controlled machines. © akg-images/Universal Images Group/Underwood Archives

Symbolic reasoning – mimicking human logic

The second stage in the evolution of AI was born of the assumption that the world could be fully described and understood using rules-based systems. Inspired by this idea, mathematicians and logicians joined forces to develop principles of logical grammar, which laid the foundation for AI programming. These systems aimed to mimic human reasoning and perception, leading to the creation of early programming languages like John McCarthy's LISP. LISP paved the way for a number of further important breakthroughs, including chess-playing computers and the first chatterbots, like Joseph Weizenbaum's ELIZA.

ELIZA was a natural language processing programme developed in 1966 that simulated a psychotherapist in conversation with a patient, following a written question-and-answer format. This was possible thanks to advances in machine learning and so-called expert systems, which emerged in the 1960s and solved problems that would otherwise require human expertise.
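ELIZA's apparent understanding rested on simple keyword and pattern matching. The Python sketch below gives a rough idea of the mechanism; the handful of rules shown here are invented for illustration and are far simpler than Weizenbaum's original script of ranked keywords and reassembly patterns.

```python
import re

# A few hand-written, ELIZA-style rules that turn a patient's statement
# into a therapist-like question. The last rule is a catch-all fallback.
RULES = [
    (r"I need (.*)",     "Why do you need {0}?"),
    (r"I am (.*)",       "How long have you been {0}?"),
    (r"I feel (.*)",     "Why do you feel {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your mother."),
    (r"(.*)",            "Please tell me more."),
]

def eliza_reply(statement: str) -> str:
    """Return the response of the first rule whose pattern matches,
    inserting any captured text into the canned reply."""
    for pattern, response in RULES:
        match = re.match(pattern, statement, re.IGNORECASE)
        if match:
            return response.format(*match.groups())
    return "Please tell me more."

if __name__ == "__main__":
    print(eliza_reply("I feel anxious about work"))
    # -> "Why do you feel anxious about work?"
```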

These early successes in symbolic reasoning led to the founding of organisations like the American Association for Artificial Intelligence in 1979 and gave rise to a first AI boom that also saw significant government investment. Japan, for example, launched the "Fifth Generation Computer System" in 1982. At a cost of CHF 1.75 billion in today's money, the project aimed to create computers that could speak, reason, and translate naturally like humans (it was discontinued after ten years due to lack of commercial success).

A screen shows a chess game in progress; next to it, a man sits with his head buried in his hands.
On May 11, 1997, world chess champion Garry Kasparov was defeated in his final game by IBM's Deep Blue computer. But beyond a rules-based environment like chess, AI was still unconvincing. © KEYSTONE/AFP/Stan Honda

Although this momentum continued into the late 1980s, it wasn't long before an "AI winter" set in and enthusiasm for the technology dwindled, resulting in less funding and interest in artificial intelligence research. Important progress continued to be made nonetheless. One example is the world's first driverless car, unveiled by Ernst Dickmanns in 1986, which was capable of navigating roads without major obstacles at 80 kilometres per hour. However, like many AI systems of the time, it struggled in unpredictable, real-world scenarios - something that spontaneous human intelligence was still better equipped to deal with. IBM's chess computer Deep Thought, and later Deep Blue, experienced similar fates: they excelled in the structured environment of chess, which has a fixed set of rules, but fell short in handling the complexity of human-like decision-making due, among other things, to their reliance on brute-force computation and a lack of data and computing power.

Neural networks - learning like the human brain

The third stage of AI emerged with the introduction of neural networks modelled loosely on the structure of the human brain. Early models by pioneers such as Marvin Minsky, co-founder of MIT's Artificial Intelligence Laboratory, used artificial neural networks to solve problems. These artificial networks, implemented in software, were designed to learn and adapt through experience rather than to follow explicitly predefined logical rules. By repeatedly performing tasks, they were optimised until they could approximate a solution or recognise patterns. Until the 1990s, however, researchers faced a similar problem to that of the second stage: their progress was limited by a lack of computational power and data.
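The difference from the rules-based approach can be illustrated with a toy example: a single artificial neuron that is never told the rule it has to follow, but nudges its internal weights after every mistake until its answers come out right. The code below is a deliberately minimal Python sketch, not a reconstruction of any historical system.

```python
import random

# Training examples for the logical AND function: inputs and the desired output.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

for epoch in range(50):                                  # repetition is the "experience"
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output                          # how wrong was the guess?
        weights[0] += learning_rate * error * x1         # nudge the weights toward the answer
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print([(x, 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0)
       for x, _ in examples])
# After training, the neuron reproduces AND without ever being given the rule.
```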

One important breakthrough on the computational power side came in 2009, when Stanford researchers showed that GPUs (graphics processing units) could train large neural networks far faster than computers' central processing units (CPUs) alone. This made it possible to train neural networks with many layers, a concept referred to as deep learning, and enabled AI systems to process and recognise patterns in vast amounts of data, significantly accelerating their computing speed and bringing them another step closer to matching the complexity of the human brain. The technology led to applications like Siri (2011) and Alexa (2014), which brought neural networks into people's homes and thus into their everyday realities.

In 2017, almost a decade later, Google introduced its Transformer model, which successfully addressed the second issue: the lack of data. With this breakthrough, neural networks were able to handle enormous datasets, creating a memory-like ability to retain information. In essence, the model was able to piece together a representation of the world, similar to how the human brain forms an understanding of reality. In a way, the world, in all its irreducibility, was imitated, made memorable, and brought into the machine.
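At the heart of the Transformer is the self-attention mechanism, in which every position in a sequence weighs its relationship to every other position and blends in the most relevant information. The following sketch uses toy dimensions and random values and is meant only to show the basic computation; real models stack many such layers with multiple attention heads and learned weight matrices.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (sequence_length, model_dim); Wq/Wk/Wv: projection matrices.
    Each position attends to every other position in the sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                     # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])              # how strongly each token attends to each other
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                                   # weighted mix of the whole sequence

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # a "sentence" of 4 token vectors
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)               # (4, 8): each token now carries context
```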

The internet and the power of data

With the vast repository of digital data available on the internet and the advances in computational power, the abilities of machine learning models have made another massive jump over the last few years. Large language models (LLMs) and generative pre-trained transformers (the GPT in ChatGPT), for example, can now work their way through the internet's incredibly dense, almost inexhaustible treasure trove of raw data to analyse patterns and generate outputs that reflect the status quo of human knowledge and behaviour.

A metal robot sits in a library reading a book.
A recent study suggests that Turing's imitation game may finally have been won. © Freepix/bugphai

And it is here that the story of AI's evolution ends - at least for the time being. Today’s AI systems can mimic human brain functions with astonishing precision and make sense of complex realities. Proof of this is all around us: many of us have been momentarily fooled by a deceptively genuine article, study or image produced by generative AI such as ChatGPT, DALL-E or Gemini. 

AI has also woven itself into our daily life, whether we're browsing the internet, creating content or trying to work more efficiently. In fact, many of us already wonder how we ever did without it.

Does generative AI pass the Turing Test?

But AI hasn't just become ubiquitous; it appears it may also meet the benchmark definition of machine intelligence. In May 2024, researchers at the University of California, San Diego announced that GPT-4 had become the first computer to pass the Turing Test. According to the researchers, human participants could not reliably distinguish its responses from those of a human. If the study stands up to peer review, this milestone suggests that AI may have finally succeeded in playing, and winning, Turing's Imitation Game.
