Learn the definition of AI. Get examples of common implementations, and estimates of current and projected workforce and economic impact.
- [Instructor] I want to begin our discussion of artificial intelligence by talking about some of the extraordinary promise of AI, especially in applied fields at work. So for example, there's been amazing progress in using AI to help diagnose diseases, by, for example, looking at brain scans or listening for heart murmurs, and that allows healthcare providers to deliver better, more efficient service. AI is also being used for research synthesis, where decades of scientific research, thousands, even hundreds of thousands of articles, are combed through for new insights and applications.
AI is also used productively for streamlining logistics, be it air travel, commercial shipping, or even ride sharing. And this is in addition to being able to provide immediate responses to credit card and loan applications, identifying potential fraud in real time, classifying photos, and scanning email. There are so many potential applications for this. Now, as we go through this, it's helpful to remember that AI has several related fields, and sometimes people use one term or another when they're referring to basically the same thing. AI proper is artificial intelligence, and the idea there is that you have thinking machines, or computers, that can learn from experience like humans can and operate without specific instructions, and that can do things like visual perception, logical reasoning, and learning.
There's a closely related field of machine learning; this is an entire collection of computer algorithms that are able to learn from data to predict patterns and outcomes. One particularly useful approach in machine learning and AI has been the use of neural networks. These are algorithms with hidden layers of nodes that come in between the input and the output and offer an intermediate stage of processing. And a particularly well-developed form of neural network is called deep learning; it's a specific kind of neural network with many hidden layers in between that allow for a lot of intermediate processing, and these have been responsible for some of the most exciting developments in AI.
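The hidden-layer idea can be sketched in a few lines of plain Python. This is not code from the course, just an illustration: the layer sizes and random weight values are invented, and a real network would also be trained rather than using fixed random weights.

```python
import random

random.seed(0)

def relu(values):
    # a common activation function: negative sums become zero
    return [max(0.0, v) for v in values]

def layer(inputs, weights):
    # each node's value is a weighted sum of all the inputs it receives
    return [sum(i * w for i, w in zip(inputs, node_weights))
            for node_weights in weights]

# invented weights: 4 input features -> 8 hidden nodes -> 1 output
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
w_out = [[random.uniform(-1, 1) for _ in range(8)]]

def forward(x):
    hidden = relu(layer(x, w_hidden))  # intermediate processing in the hidden layer
    return layer(hidden, w_out)        # output computed from the hidden activations

y = forward([0.5, -1.0, 2.0, 0.0])  # one example with 4 input features -> 1 output value
```

Deep learning stacks many such hidden layers between the input and the output, which is what allows the extra intermediate processing the instructor describes.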
And then finally, there's predictive modeling and analytics. This is the general practice of building statistical models to predict specific outcomes, like whether a person will have to check into a hospital, or whether a person will default on a loan. Predictive modeling can be done with AI or machine learning, or it can be done with more standard approaches, but AI has made an enormous amount of progress, especially on complex problems of predictive modeling. Now let's take a very, very short look at the timeline of artificial intelligence.
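As a toy sketch of that "more standard" kind of predictive modeling (not from the course), here is a tiny logistic regression fit by gradient descent on made-up data. The single feature (a debt-to-income-style ratio) and all data values are invented purely for illustration:

```python
import math
import random

random.seed(1)

def predict(x, w, b):
    # logistic function: maps a weighted score to a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# invented training data: (debt-to-income ratio, defaulted? 1 = yes)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.6, 1), (0.7, 1), (0.9, 1)]

w, b = 0.0, 0.0
for _ in range(5000):           # gradient descent on the logistic loss
    for x, y in data:
        p = predict(x, w, b)
        w -= 0.5 * (p - y) * x  # 0.5 is the learning rate
        b -= 0.5 * (p - y)

low = predict(0.15, w, b)   # low ratio: predicted unlikely to default
high = predict(0.85, w, b)  # high ratio: predicted likely to default
```

The same predict-an-outcome-from-features pattern underlies both this simple statistical model and the machine-learning approaches the instructor mentions.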
The general concept has been around since the 1950s; that's when work began to create machines that could reason, reach conclusions, make decisions, and learn from mistakes. That was the goal, but obviously, the technology was nowhere close to where it is right now, and while it sparked some immediate interest and some interesting developments, it didn't go very far. There was a quiet period during the '60s when there wasn't nearly as much progress, and then in the '70s, researchers decided to take a different approach and made progress by drawing from game theory, mathematics, and the methods of experimental psychology.
That was a second major phase in AI. Then in the 1990s, more progress was made when IBM's Deep Blue beat the world chess champion Garry Kasparov, something that people thought a machine would never be able to do. Of course, since then, AIs have developed the ability to beat world champions at other games, each step representing an amazing accomplishment in creativity and, really, human problem-solving. And then most recently, in the 2010s, and this is the important part, deep learning, this one particular approach to AI, has become economically feasible.
The computer technology has caught up to make it doable, and the amount of data has caught up to give the algorithms the raw data they need for processing. So this is a very short timeline from the 1950s up till now, where we now have an explosion of development and applications of artificial intelligence, and those will form the basis for the rest of our discussion.
- Bias in AI
- Navigating the social challenges of AI
- Moral reasoning and relational ethics
- General Data Protection Regulation (GDPR) and AI
- Discrimination in data
- Liability and AI
- AI in life-and-death situations
- Confronting the challenges of AI as a developer, executive, and consumer