Artificial intelligence (AI) has made its way into the public consciousness thanks to the advent of powerful new chatbots and image generators. But the field has a long history dating back to the dawn of computing. Given the fundamental role AI could play in transforming the way we live in the years to come, it’s essential to understand the roots of this burgeoning field. Here are 12 of the most important milestones in AI history.
1950 — Alan Turing’s seminal paper on artificial intelligence
A black and white photo of Alan Turing
(Photo credit: Images of History via Getty Images)
Renowned British computer scientist Alan Turing published a paper titled “Computing Machinery and Intelligence,” one of the first detailed investigations into the question “Can machines think?”
Answering that question requires first tackling the thorny business of defining “machine” and “thinking,” so Turing instead proposed a game: an observer would watch a text conversation between a machine and a human and try to work out which was which. If the observer couldn’t reliably tell them apart, the machine would win. While this doesn’t prove that a machine “thinks,” the Turing Test, as it came to be known, has been an important benchmark for AI progress ever since.
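To make the setup concrete, here is a toy Python sketch of the imitation game’s structure, with trivially scripted stand-ins for the human and the machine. It only illustrates the blind-judging protocol; the replies and the random “judge” are invented for the example.

```python
import random

# Toy stand-ins for the two hidden participants. In Turing's setup these
# would be a real person and a candidate machine conversing over teletype.
def human_reply(question):
    return f"Hmm, '{question}'? Give me a moment to think."

def machine_reply(question):
    return f"That is a fascinating question about '{question}'."

def imitation_game(questions):
    # The judge sees only the anonymous labels A and B, assigned at random.
    participants = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(participants)
    labels = dict(zip("AB", participants))

    # Collect each participant's answers under its anonymous label.
    transcripts = {label: [fn(q) for q in questions] for label, (_, fn) in labels.items()}

    # A real judge would study the transcripts; this toy judge guesses at
    # random, which is exactly the 50% baseline a convincing machine forces.
    guess = random.choice("AB")
    truth = next(label for label, (name, _) in labels.items() if name == "machine")
    return guess == truth  # True means the judge caught the machine

if __name__ == "__main__":
    games = 1000
    caught = sum(imitation_game(["Do you dream?", "What is a sonnet?"]) for _ in range(games))
    print(f"Machine identified in {caught}/{games} games (about half means it 'wins')")
```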
1956 — The Dartmouth Workshop
A photo of old brick buildings in the fall on the Dartmouth campus
(Photo credit: Patrick Donovan via Getty Images)
AI as a scientific discipline traces its roots to the Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in 1956. The participants were a group of influential computer scientists, including John McCarthy, Marvin Minsky, and Claude Shannon. It was the first time the term “artificial intelligence” was used, as the group spent nearly two months discussing how machines could simulate learning and intelligence. The meeting kicked off serious research into AI and laid the foundation for many of the advances that followed in the decades to come.
1966 — First AI chatbot
A screen image of the ELIZA program. It shows a conversation between ELIZA and a user.
(Image credit: public domain)
MIT researcher Joseph Weizenbaum unveiled the world’s first AI chatbot, known as ELIZA. The underlying software was rudimentary, regurgitating canned responses based on keywords it detected in the prompt. Still, when Weizenbaum programmed ELIZA to act as a psychotherapist, people were reportedly amazed at how believable the conversations were. The work spurred growing interest in natural language processing, notably from the U.S. Defense Advanced Research Projects Agency (DARPA), which provided significant funding for early AI research.
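As a rough sketch of that keyword-and-canned-response idea, the snippet below imitates ELIZA’s style in a few lines of Python. The patterns and replies are invented for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import random
import re

# Invented keyword rules in the spirit of ELIZA's therapist script;
# these are illustrative, not Weizenbaum's original rules.
RULES = [
    (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
    (r"\bI am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bI feel (.+)", ["Why do you feel {0}?"]),
]
DEFAULT_REPLIES = ["Please go on.", "How does that make you feel?"]

def eliza_reply(user_input: str) -> str:
    # Scan the rules in order and answer from the first keyword that matches.
    for pattern, responses in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT_REPLIES)

if __name__ == "__main__":
    print(eliza_reply("I am worried about my exams"))    # e.g. "Why do you say you are worried about my exams?"
    print(eliza_reply("My mother never listens to me"))  # -> "Tell me more about your family."
```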
1974-1980 — First “AI Winter”
A retro photo of an abandoned computer room
(Photo credit: sasacvetkovic33 via Getty Images)
The initial enthusiasm for AI soon faded. The 1950s and 1960s had been a fertile period for the field, but in their enthusiasm, leading experts made bold claims about what machines would be capable of in the near future. The technology’s failure to live up to these expectations led to growing discontent. A highly critical report on the field by British mathematician James Lighthill prompted the British government to cut almost all funding for AI research. DARPA also cut its funding significantly around this time, leading to what would become known as the first “AI winter.”
1980 — Flood of “expert systems”
An abstract graphic illustration of purple, pink and blue squares and dots
(Photo credit: Flavio Coelho via Getty Images)
Despite disillusionment with AI in many quarters, research continued, and by the early 1980s the technology had begun to attract attention from the private sector. In 1980, researchers at Carnegie Mellon University built an AI system called R1 for Digital Equipment Corporation. The program was an “expert system,” an approach to AI that researchers had been experimenting with since the 1960s. These systems used logical rules to analyze large databases of specialized knowledge. The program saved the company millions of dollars a year and sparked a boom in expert system deployments across industry.
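To give a flavor of what “logical rules applied to specialized knowledge” means in code, here is a minimal forward-chaining sketch in Python. The rules are hypothetical and bear no relation to R1’s actual rule base, which ran to thousands of hand-written rules.

```python
# A minimal forward-chaining rule engine in the spirit of 1980s expert
# systems. The configuration "rules" below are invented for illustration.
RULES = [
    ({"order_includes_disk"}, "needs_disk_controller"),
    ({"needs_disk_controller", "cabinet_has_free_slot"}, "add_controller_to_cabinet"),
    ({"order_includes_disk", "no_cabinet_specified"}, "add_default_cabinet"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    order = {"order_includes_disk", "cabinet_has_free_slot"}
    print(forward_chain(order))
```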
1986 — Foundations of Deep Learning
A photo of Geoffrey Hinton speaking at a conference in 2023
(Photo credit: Ramsey Cardy via Getty Images)
Until then, most research had focused on “symbolic” AI, which relies on hand-crafted logic and knowledge databases. But since the birth of the field there had also been a competing stream of research into “connectionist” approaches inspired by the brain. This work continued quietly in the background before finally coming to prominence in the 1980s. Rather than programming systems by hand, these techniques get “artificial neural networks” to learn rules by training on data. In theory, this leads to more flexible AI, unconstrained by its creators’ preconceptions, but training neural networks proved difficult. In 1986, Geoffrey Hinton, later dubbed one of the “godfathers of deep learning,” co-authored a paper that popularized “backpropagation,” the training technique that underpins most of today’s AI systems.
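As a hedged illustration of what backpropagation does, here is a deliberately tiny network trained on the XOR problem in plain NumPy. The architecture, learning rate and number of steps are arbitrary choices for the example, not Hinton’s setup.

```python
import numpy as np

# A tiny neural network trained with backpropagation on XOR, to illustrate
# learning weights from data by passing errors backwards through the network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: apply the chain rule layer by layer
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent updates
    W2 -= hidden.T @ d_output
    b2 -= d_output.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_hidden
    b1 -= d_hidden.sum(axis=0, keepdims=True)

# Predictions should drift toward [0, 1, 1, 0] as training proceeds.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```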
1987-1993 — Second winter of AI
A hand rests on an old computer keyboard
(Photo credit: Olga Kostrova via Getty Images)
Having lived through the field’s first boom and bust, Minsky and fellow AI researcher Roger Schank warned that the hype around AI had again reached unsustainable levels and that the field was in danger of another decline. They coined the term “AI winter” during a panel at the 1984 meeting of the Association for the Advancement of Artificial Intelligence. Their warning proved correct: by the late 1980s, the limitations of expert systems and the specialized AI hardware they ran on had become apparent. Industry spending on AI declined dramatically, and most AI startups went bankrupt.
1997 — Deep Blue defeats Garry Kasparov
Garry Kasparov holds his head in his hands as he sits in front of a chessboard
(Photo credit: Stan Honda/Stringer via Getty Images)
Despite repeated ups and downs, artificial intelligence research progressed steadily throughout the 1990s, largely out of the public eye. That changed in 1997, when Deep Blue, an expert system designed by IBM, beat world chess champion Garry Kasparov in a six-game series. AI researchers had long considered skill at this complex game a key indicator of progress, so beating the world’s best human player was seen as a major milestone and made headlines around the world.
2012 — AlexNet ushers in the era of deep learning
Close-up of a cryptocurrency mining rig
(Photo credit: eclipse_images via Getty Images)
Despite a rich body of academic work, neural networks were still considered impractical for real-world applications. To be useful, they needed many layers of neurons, but implementing large networks on conventional computer hardware was prohibitively inefficient. In 2012, Hinton’s doctoral student Alex Krizhevsky handily won the ImageNet computer vision competition with a deep learning model called AlexNet. The key was using specialized chips called graphics processing units (GPUs), which could efficiently train and run much deeper networks. This paved the way for the deep learning revolution that has powered most advances in AI since then.
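As a rough modern illustration of why GPUs mattered, the sketch below (assuming the PyTorch library is installed) defines a small convolutional network that runs unchanged on either a CPU or a GPU; on a GPU, the dense matrix math deep networks rely on runs far more efficiently. This is a generic example, not AlexNet itself.

```python
import torch
from torch import nn

# A small convolutional network. The layer sizes are arbitrary choices
# for illustration and do not reproduce AlexNet's architecture.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 112 * 112, 10),
)

# The same code path works on CPU; a GPU simply executes it much faster.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

images = torch.randn(8, 3, 224, 224, device=device)  # a fake batch of images
logits = model(images)
print(logits.shape, "computed on", device)
```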
2016 — AlphaGo defeats Lee Sedol
(Image credit: Handout via Getty Images)
While artificial intelligence had already left chess in its rearview mirror, the much more complex Chinese board game Go remained a challenge. But in 2016, Google DeepMind’s AlphaGo defeated Lee Sedol, one of the world’s best Go players, in a five-game series. Experts had believed such a feat was still years away, so the result sparked a surge of excitement about advances in AI. That was partly due to the versatility of the algorithms underlying AlphaGo, which relied on an approach called “reinforcement learning,” in which AI systems effectively learn through trial and error. DeepMind later extended and improved the approach to create AlphaZero, which can learn to play a wide variety of games.
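AlphaGo itself combined deep neural networks with tree search, but the trial-and-error core of reinforcement learning can be illustrated with something far simpler, such as tabular Q-learning on a made-up corridor environment. Everything below (the environment, rewards and parameters) is an arbitrary choice for the example.

```python
import random

# Tabular Q-learning on a toy 5-cell corridor: start at cell 0, reward 1 for
# reaching cell 4. A far simpler setting than Go, but the same principle:
# act, observe the reward, and update value estimates by trial and error.
N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy action in every cell should be +1 (move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```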
2017 — Invention of the transformer architecture
Artistic representation of a flowing network of lights
(Photo credit: Yuichiro Chino via Getty Images)
Despite significant advances in computer vision and game-playing, deep learning had been slower to make progress on language tasks. Then, in 2017, Google researchers unveiled a new neural network architecture called the “transformer,” which could ingest vast amounts of data and make connections between distant data points. This architecture proved particularly useful for the complex task of language modeling and made it possible to create AIs that could tackle a variety of tasks simultaneously, such as translation, text generation and document summarization. All of today’s leading AI models rely on this architecture, including image generators like OpenAI’s DALL·E, as well as Google DeepMind’s revolutionary protein-folding model AlphaFold 2.
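The transformer’s defining ingredient is “attention,” which lets every position in a sequence draw on every other position, however far apart. A bare-bones NumPy sketch of scaled dot-product attention, leaving out the learned projections, multiple heads and masking of a real transformer, looks like this:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position mixes information from all key/value positions,
    weighted by similarity, regardless of how far apart they are."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ V, weights

# A toy "sequence" of 6 tokens with 8-dimensional embeddings. In a real
# transformer, Q, K and V come from learned linear projections of the tokens.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 8))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)    # (6, 8): one updated vector per position
print(weights.shape)   # (6, 6): how much each position attends to every other
```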
2022 — Launch of ChatGPT
The ChatGPT logo displayed on a phone screen
(Photo credit: SOPA Images via Getty Images)
On November 30, 2022, OpenAI released a chatbot powered by its GPT-3 large language model. Known as ChatGPT, the tool became a global sensation, amassing more than a million users in less than a week and 100 million the following month. It was the first time members of the public could easily interact with the latest AI models, and most were blown away. The service is credited with kicking off an AI boom that has seen billions of dollars invested in the field and spawned many imitators from big tech companies and startups. It has also fueled growing unease about the pace of AI progress, prompting an open letter from prominent technology leaders calling for a pause in AI research to allow time to assess the technology’s implications.