- In this lecture we're going to talk about knowledge representation and reasoning. Professor Eric Nyberg of Carnegie Mellon University is here to help us explore the topic. We cover this somewhat abstract concept to demystify the idea that computers can think. They can't, but for computers to simulate thinking they need a way of representing and manipulating knowledge, and this is what the technologies of knowledge representation and reasoning are for. One way of representing knowledge is in the form of rules, if-then rules. For instance, if the gas gauge reads E, then the tank is empty. Or, if the patient has a fever, and the patient has a sore throat, and the patient has fatigue, then the patient has the flu.
Each of these represents a piece of knowledge. Rules-based systems are a cognitive technology created to reason about a domain when we have expert knowledge encoded in a rules base. Now Eric, rules-based systems go back to the early days of AI, don't they? - Yes, they do, David. In fact, since the late '70s and early '80s rules-based systems have developed as one area of AI. A great example would be the MYCIN system, which reasoned about conditions and diseases using rules like the ones that you mentioned, but also assigned a probability or likelihood to the diagnosis, in your case, that the patient had the flu. With multiple rules, the system could reason about multiple possible diseases that one patient might have, given a particular set of symptoms, each with a different likelihood or probability.
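MYCIN's way of attaching a likelihood to a conclusion used what it called certainty factors. As a minimal sketch, here is the classic formula for combining two positive certainty factors that support the same conclusion; the specific rules and numbers below are invented for illustration, not taken from MYCIN itself:

```python
# Sketch: MYCIN-style certainty factors. The rules and numbers are
# illustrative assumptions, not real MYCIN content or medical data.

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors supporting the same conclusion."""
    return cf1 + cf2 * (1 - cf1)

# Two hypothetical rules, each lending some confidence to a flu diagnosis.
cf_from_fever_rule = 0.6    # "if fever and sore throat then flu (0.6)"
cf_from_fatigue_rule = 0.4  # "if fatigue then flu (0.4)"

combined = combine_cf(cf_from_fever_rule, cf_from_fatigue_rule)
print(round(combined, 2))  # 0.6 + 0.4 * (1 - 0.6) = 0.76
```

Note how the combined confidence is higher than either rule's alone but never exceeds 1, which matches the intuition that independent pieces of supporting evidence should reinforce each other.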
- Got it. So, your classic rules-based system has three main components. It has a rules base, it has an inference engine, which tries to apply those rules, and it has a working memory, which contains everything the system believes to be true at the time, and the inference engine looks through the rules base and tries to match the if section of the rules to the working memory to figure out which rules to apply. Is that basically it? - That's right, David. A rules-based system will begin by loading into the working memory all of the facts that it currently knows about the situation. Then it will iterate through all of the rules trying to find a rule that matches.
So, if the 'if' part of the rule matches something in the working memory, then the 'then' part of the rule is fired, which usually adds more information to the working memory. The system will keep iterating through the rule set until no rules fire based on the result of the prior iteration, and hopefully by the time it runs out of knowledge it has been able to reach a conclusion. - And these kinds of systems are best suited to domains where knowledge can be expressed as a limited number of rules. Once you get to thousands of rules, they can be tough to maintain.
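The loop just described, matching the 'if' parts of rules against working memory, firing any that match, and repeating until a full pass adds nothing new, can be sketched in a few lines. The rules and facts here are made up for illustration:

```python
# Minimal forward-chaining inference engine (a sketch; rules and facts
# are illustrative, echoing the examples from the lecture).
# Each rule is (set_of_if_conditions, then_fact).
rules = [
    ({"fever", "sore throat", "fatigue"}, "flu"),
    ({"gas gauge reads E"}, "tank is empty"),
    ({"flu"}, "recommend rest"),
]

working_memory = {"fever", "sore throat", "fatigue"}  # facts known at the start

fired = True
while fired:  # keep iterating until a full pass fires no new rules
    fired = False
    for conditions, conclusion in rules:
        # Fire the rule if its 'if' part matches and its conclusion is new.
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)
            fired = True

print(sorted(working_memory))
```

Notice the chaining: the first pass derives "flu", and only the next pass can then derive "recommend rest" from it, while the gas-gauge rule never fires because its condition is absent from working memory.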
- Absolutely. In fact, rules-based systems are really good for situations where there are a small number of variables. It's fairly transparent for a human to think about what to write down in the 'if' and 'then' parts of the rules, but if the 'if' condition involves many different considerations, it's very hard to write a single rule that captures them all, and that's where rules-based systems tend to be less effective. - Got it. Another way of representing knowledge is familiar to us from high school biology: taxonomies.
We all learned about taxonomy, a hierarchical structure that's used to classify things into animals, plants, or minerals, and animals into mammals, birds, and amphibians, and mammals in turn into primates and other categories. We can build a computer model of a taxonomy to represent knowledge in a domain and answer questions about that domain. So Eric, when are taxonomies a good choice? - So, I think taxonomies are very effective for organizing knowledge. Let's say I have a set of rules about children versus adults because I want to reason differently about them.
I can use a taxonomy of categories to sort of organize that knowledge. Taxonomies are also a really good way to organize data. So, let's say I want to take all of the news stories being published on the web and then categorize them into different categories so that I can look at the ones in the category I'm interested in. Then usually a taxonomy or a hierarchy is used to cluster the information. - Got it. Now, sometimes you need to make a decision without knowing all of the relevant facts, or when you're not sure about some of the facts.
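A computer model of a taxonomy can be as simple as child-to-parent links, with category membership checked by walking up the hierarchy. Here is a minimal sketch using the biology categories from the example (the "human" leaf is an illustrative addition):

```python
# A tiny taxonomy as child -> parent links, using categories from the lecture.
parent = {
    "mammal": "animal",
    "bird": "animal",
    "amphibian": "animal",
    "primate": "mammal",
    "human": "primate",  # illustrative extra leaf, not from the lecture
}

def is_a(category, ancestor):
    """True if 'category' falls under 'ancestor' in the taxonomy."""
    while category is not None:
        if category == ancestor:
            return True
        category = parent.get(category)  # walk one level up the hierarchy
    return False

print(is_a("human", "animal"))  # walks human -> primate -> mammal -> animal
print(is_a("bird", "mammal"))
```

The same walk-up-the-tree lookup is what lets rules written about a broad category, such as "animal", automatically apply to everything underneath it.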
A knowledge representation and reasoning model called Bayesian networks, or Bayes nets, is good for modeling a situation where your opinion or your confidence about a belief may change as your knowledge changes. Take the example of deciding whether to see a new movie. I may start out uncertain about this. Then I acquire new information. Say it stars an actor I like, or it gets a good review. Then I see the trailer and that looks good, and a friend finally recommends it to me. With each new piece of knowledge, my certainty increases.
Bayesian networks can represent assertions, but in addition they're good at representing degrees of certainty, which can change over time. They can also represent cause and effect, which we'll get into in a second. In this diagram, assertions have some probability of being true and are represented as nodes depicted by ovals in the picture. Cause and effect are represented as arcs that connect the nodes to each other. This is a simple Bayes net depicting the causes and symptoms of lung cancer.
It represents the idea that both pollution and being a smoker can cause cancer. In other words, they have an effect on the probability that a person will have cancer. It also shows that having cancer can cause an irregular X-ray and dyspnoea, or difficulty breathing. Bayes nets also represent the probabilities of all of their assertions and causal connections, and if I learn something new about the patient, say the patient tells us she doesn't smoke, this probability changes. This Bayes net could be used for diagnosis by reasoning from symptoms to cause, from the bottom to the top, or for prediction, from the top to the bottom.
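That diagnosis-by-updating can be sketched with inference by enumeration over the network's upper layer. All of the probabilities below are invented for illustration, not real medical data; the structure (pollution and smoking as causes of cancer) follows the example:

```python
# Sketch of the lung-cancer Bayes net from the lecture. Every number here
# is an invented illustration, not real medical data.
P_pollution = 0.3  # P(high pollution)
P_smoker = 0.3     # P(smoker)
# P(cancer | pollution, smoker), indexed by (pollution, smoker):
P_cancer = {
    (True, True): 0.05,
    (True, False): 0.02,
    (False, True): 0.03,
    (False, False): 0.001,
}

def p_cancer_given(smoker=None):
    """P(cancer), optionally conditioned on smoker, by enumeration."""
    num = 0.0  # accumulates P(cancer, evidence)
    den = 0.0  # accumulates P(evidence)
    for pol in (True, False):
        for smk in (True, False):
            if smoker is not None and smk != smoker:
                continue  # inconsistent with the observed evidence
            p_pol = P_pollution if pol else 1 - P_pollution
            p_smk = P_smoker if smk else 1 - P_smoker
            joint = p_pol * p_smk
            num += joint * P_cancer[(pol, smk)]
            den += joint
    return num / den

prior = p_cancer_given()           # before we know anything about smoking
nonsmoker = p_cancer_given(False)  # after the patient says she doesn't smoke
print(prior > nonsmoker)           # new evidence lowered the probability
```

This is the "reasoning from symptoms to cause" direction in miniature: observing evidence (here, non-smoking) shifts the probability of the cause, exactly the belief update described above.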
So, Eric, question for you. When are Bayes nets a good choice for knowledge representation and reasoning? - So, I think Bayes nets are very good for the situations that you mentioned, David, where there may be different factors that play into a decision you want to make, or into the likelihood that something is true, because they can work with whatever partial evidence is available. I think Bayes nets become more difficult to work with when there's a very large number of variables, or when you want to change the network and then recalculate all of the probabilities or likelihoods.
So, it can be a little bit more difficult to use Bayes nets in a domain where you don't have a small, fixed number of variables to work with. - Got it. Now, it turns out that the real world is full of uncertainty, and Bayes nets have emerged as an important technology for building systems that can cope effectively with uncertainty. - Yes. - So, just to summarize this lecture: there are a number of methods for representing knowledge and reasoning automatically about that knowledge, and we looked at three of them. Rules-based systems are good when we're able to capture expertise in the form of rules; taxonomies are good for representing hierarchical information; and Bayes nets are good for reasoning under uncertainty.