- I'm here in the Helix in the Gates Hillman Centers of Carnegie Mellon University. In this lecture, I will offer you a definition of artificial intelligence, or AI, and give you a brief overview of its history from its inception in the 1950s. First, the definition. Let's start by saying what AI isn't. AI is not machines that think, or even computers that work the way the brain works. AI is what machines do, not how they do it. The authors of a leading textbook on AI have offered eight possible definitions of the term.
A founder of the field says there's no precise definition. I'll offer you one definition of artificial intelligence that I think is useful for our purposes. AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. This includes cognitive tasks, like planning, reasoning, and learning, and also perceptual tasks, like recognizing speech, understanding text, and recognizing faces. The reality is the definition of AI is subject to change over time. "As soon as it works, no one calls it AI anymore," lamented one of the founders of the field.
"AI is whatever hasn't been done yet," said another. This means the definition of AI is a moving target. Once people get used to its amazing capabilities, they'll no longer think of them as examples of artificial intelligence. Sometimes you'll hear the term artificial general intelligence. This refers to the intelligence of a hypothetical machine that could successfully perform any intellectual task that a human being can. Artificial general intelligence has not been created, but many researchers aspire to do so. There's lots of speculation about when, if ever, it will be achieved.
Today, instead, the state of the art of artificial intelligence is concerned with performing relatively narrow tasks, some of which are really impressive, like recommending cancer treatments or driving a car. The current and near-future state of AI is the focus of our course. Another definition that will be useful in this course is agent. In everyday English, an agent is something that acts or does something. All computer programs do something, but AI agents are expected to do more: operate autonomously, perceive their environment, persist over time, adapt to change, and create and pursue goals.
A rational agent is one that acts to achieve the best outcome, or if that's uncertain, the best expected outcome. You may hear the phrase an AI, as in, we're building an AI that will schedule your meetings for you automatically. Consider that a synonym for agent. Artificial intelligence is in the news a lot these days, but did you know that the field started over 60 years ago? I'd like to walk you through a very brief history of the field to give you a better perspective of where we are today. The field dates from the 1940s and '50s when researchers thought to use computers to understand how the brain works by trying to mimic human intelligence.
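The idea of acting for the best expected outcome can be made concrete as expected-utility maximization: the agent weighs each possible outcome of an action by its probability and picks the action with the highest weighted sum. Here is a minimal sketch in Python; the actions, probabilities, and utilities are all hypothetical, invented purely for illustration.

```python
# Minimal sketch of a rational agent under uncertainty: choose the
# action with the highest expected utility. All actions, probabilities,
# and utilities below are hypothetical, invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """Pick the action name whose expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# A toy decision for a meeting-scheduling agent:
# schedule now (probably fine, small risk of a clash) vs. wait.
actions = {
    "schedule_now": [(0.7, 10), (0.3, -5)],  # expected utility: 5.5
    "wait":         [(1.0, 4)],              # expected utility: 4.0
}

print(choose_action(actions))  # -> schedule_now
```

The agent here doesn't pick the action with the best possible outcome; it picks the one that does best on average given what it believes about the world, which is exactly the "best expected outcome" in the definition above.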
The pioneers of the field were born in the 1910s and '20s. Among the most well-known to a general audience is Alan Turing. He was a logician and mathematician. He proposed a theory of logical computing machines that could be reprogrammed to solve an infinite number of problems. He also advanced the belief that machines could mimic human intelligence, and he proposed a test for this that has come to be known as the Turing test. He was also the subject of a recent Hollywood film starring Benedict Cumberbatch and Keira Knightley. Other pioneers of the field you will encounter if you read more on the history of AI include John McCarthy, who coined the term artificial intelligence.
McCarthy convened a seminal workshop on the topic in 1956. The proposal seeking funds for the conference was audacious, or optimistic. It read, in part, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The proposal continued, "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." Amazing.
Other important early players in AI were Allen Newell and Herbert Simon, who created foundational AI tools and demonstrations, including, with a colleague, something they called a general problem solver. There's a building here at Carnegie Mellon named after them. The 1960s and '70s were the era of demonstration programs. Researchers built basic autonomous robots and wrote programs that could prove theorems, solve calculus problems, even impersonate a psychotherapist. They achieved impressive results on a range of narrow problems, but harder or more ill-defined problems were out of reach due to simplistic algorithms, poor methods for handling uncertainty (a surprisingly ubiquitous fact of life), and limitations on computing power.
Amid disappointment with a lack of continued progress, AI fell out of fashion by the mid-1970s. In the 1980s, AI got a boost because of the threat from Japan. Japan had launched the Fifth Generation Project to develop a massively parallel computing architecture that would be the fifth generation of computing, after vacuum tubes, transistors, integrated circuits, and whole processors on a chip. Fear of losing ground to Japan unleashed new investments in AI in the US. Computer vendors pooled funding for research.
The government provided research funds. A number of commercial vendors of AI technology were founded, and some offered stock to the public, including IntelliCorp, Teknowledge, and Symbolics, where I worked. By the end of the 1980s, maybe half of the Fortune 500 were developing or maintaining AI systems called expert systems. High hopes for expert systems eventually cooled as their limitations became recognized. They suffered a glaring lack of common sense. It was difficult to capture experts' tacit knowledge in the form of rules, and it was expensive and difficult to build and maintain large expert systems.
By the end of the '80s, AI ran out of steam again. In the '90s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention, in part because they avoided some of the limitations of expert systems, and partly because newer algorithms made them more effective. By the 2000s, a number of factors helped renew progress in AI. We'll talk more about that in the next lecture. Wrapping up, AI is the field that aims to make machines perform tasks that only humans used to be able to do.
The field got started in the 1950s and experienced several periods of high expectations, and then disappointment. Accumulated technological progress over the last 10 years or so has ushered in a new era of AI.