Join Yash Patel for an in-depth discussion in this video, Interview: Adam Geitgey, part of The Psychology of Living In A Data-Driven World.
- So I'm on a call right now with Adam Geitgey. Adam is an expert in AI and machine learning, and Adam, thank you so much for joining us. - Thanks for having me. - Humans are notoriously bad at making predictions, and, you know, here come AI and machine learning, our saviors. So what role does AI play in prediction making, and how can it help us? - Yeah, that's a great question. I think AI and machine learning have a role in cases where we just have too much data to make predictions by hand. I think more and more now we have cases where we're building models that depend on 10,000, 20,000, 50,000 data points about each thing we're looking at, and having AI and machine learning is a way to make sense of all that data and extract patterns from it.
It's super helpful, but I don't think it necessarily makes us make better predictions. It just kind of expands the range of things we can make predictions about. - So, would you say that maybe it's not necessarily better, it's just kind of faster? - Yeah, I think it's a tool. I like to think of AI as an automation tool specifically. When people think of AI, they think of artificial intelligence, but what we have as AI is not really even intelligence in the same way that people think about intelligence.
The AI that we have now is really large-scale statistics. All right, let's say that you run a restaurant review website and people upload reviews to it. One of the things that you're going to have to deal with is people posting inaccurate reviews or posting mean things or inappropriate content. You can show a computer millions of restaurant reviews and it can learn really well to tell apart good and bad reviews, but in that entire process, the computer doesn't really understand English in the same way that a person does. It's really using statistics to look at the examples of good things and bad things.
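The restaurant-review example is essentially text classification. As a rough illustration of what "statistics, not understanding" means in practice (this sketch and its tiny made-up dataset are the editor's, not something from the interview), a model like the one below only learns word frequencies from labeled examples:

```python
# A minimal sketch of statistical text classification (hypothetical toy data).
# The model never "understands" English; it learns word statistics from examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = appropriate review, 0 = inappropriate/spam.
reviews = [
    "Great food and friendly staff, would come back",
    "The pasta was cold but the service made up for it",
    "BUY CHEAP WATCHES AT www.example.com",
    "The owner is an idiot and I hope this place burns down",
]
labels = [1, 1, 0, 0]

# Bag-of-words statistics (TF-IDF) feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# The output is a probability based on word patterns, not a judgment the
# model "understands" the way a human moderator would.
new_review = "Terrible spam link, do not trust this post"
print(model.predict_proba([new_review]))
```

With millions of real reviews instead of four toy ones, the same statistical machinery is what does the heavy lifting.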
So I think the real power of AI is that you can scale up these kinds of boring jobs with computers really quickly and really efficiently. But they don't necessarily do it better than humans. They just let the humans focus on things that are more interesting and more creative, and kind of replace some of those lower-level tasks with automation. - It's really interesting that you say that. So now we're able to make predictions at a more rapid pace, maybe a little more objectively, if you will. You know, when these machines, these algorithms, make their predictions, they're not necessarily reporting a confidence interval to the average Joe, right? And so, without that information, things become kind of like, oh, well, the machine spit this out or the machine spit that out.
Like, have you had to deal with that before, where people are like, well, no, the algorithm says this, and you had to correct them for whatever reason? - Yeah, that comes up all the time in this kind of work. I mean, especially when people see the machine say something, but you don't know how good the data was that it's saying that from, or you don't know how much data it had to make that decision from. Probably the biggest challenge, when you're actually building this kind of stuff, is getting good data that's accurate and actually helps you predict the problem. It's so easy to have the garbage in, garbage out problem.
So when you build these systems, not only do you want to expose some kind of confidence indication to the user, even if it's just a green color or a red color, something simple, but you also have to really test your models in a systematic way, so that you're not letting nonsense slip through even though it looks official, which can happen really easily. - Gotcha, and I think that maybe gets to answering my next question, which is that sometimes the outcomes aren't so great, right? And that kind of elicits fear from the people that are looking at the results, 'cause you're saying, the machine has given me this result. So what you're saying is that one of the ways to alleviate, kind of attenuate, that fear is to consistently, systematically test it.
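To make the green/red confidence idea and the "systematic testing" idea concrete, here is a rough sketch (the editor's illustration, not Adam's own method; the thresholds, synthetic data, and model choice are all assumptions) showing how a predicted probability might be translated into a simple indicator and how a model can be checked against data it never saw during training:

```python
# A rough sketch: map a model's predicted probability to a simple indicator,
# and evaluate the model on held-out data it never saw during training.
# All data, thresholds, and the model choice here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def confidence_indicator(probability):
    """Translate a raw probability into something a non-expert can read."""
    if probability >= 0.8:
        return "green (high confidence)"
    if probability >= 0.5:
        return "yellow (uncertain)"
    return "red (likely negative)"

# Synthetic data stands in for real examples; holding out a test set is the
# "systematic testing" part: never judge the model on its own training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("one prediction:", confidence_indicator(model.predict_proba(X_test[:1])[0, 1]))
```

The point of the indicator is exactly what Adam describes: surfacing how sure the model is, rather than presenting every output as an official-looking fact.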
Are there other ways or other techniques that you use? - Yeah, I think you should think of any prediction the machine gives you as sort of like a weather report. You know, it lets people make predictions on a large scale, across a lot of people, but it doesn't mean that every single prediction is right all of the time. We're used to thinking about computers as these machines that are deterministic, and they give you an answer and it's the same answer every time. In this world it's really about, what's the most likely thing that I think will happen? But that's not necessarily what will happen. So if you get some kind of indication that's scary, use that as the starting point to research and see if that's really true, or to learn more about the problem that you're looking at.
It doesn't necessarily mean that it's right. - And so how can we come to terms with, and cope with, the black box issue, where we really have no idea what the machine is doing or thinking? - Yeah, and that's a really great question and an important one, especially when that black box might be deciding something really important about your future, like whether you get a home loan or whether you get a certain job. And I think the answer is both on the technology side and on the government side. On the technology side, we're inventing better ways to explore and explain these models, and new techniques that help us figure out why they said certain things, but in some cases that's not always possible.
So in the European Union, they've put in place laws now that say if a black box makes a decision about a person, you have to go back and explain to the person why that decision came to be. You can't just say the model said this, which I think is an important thing to think about, an important direction for the future, because the problem with these models is, let's say you build a model that works 98 percent of the time. That means two percent of the time it doesn't work, and two percent of seven billion people is a lot of people to make mistakes about. So it's really important to have an escape hatch in these models, a way for people to get a solution if it goes wrong.
- Yeah, and you know what's interesting is we'll start using AI in healthcare, no doubt about it, right? We'll have machine learning in healthcare, and it's going to be making predictions about maybe how long people have to live, or making predictions about a potential diagnosis. And so, you know, how do we build that safety net so that people don't lose their minds when they get something that's not favorable? - Yeah, and I work in this field myself. I'm doing work right now in healthcare, literally looking at records and making predictions of outcomes, and it is scary. I think the first time people may have ever run into that is if they've ever done one of those DNA tests, where it kind of gives you some indication of what diseases you might be at risk of. But in the same way, those kinds of things are just statistical indications, they're just likelihoods, like saying, hey, look at this, this is an important thing to pay attention to.
As those models get better and more accurate, I think society will kind of have to adapt to that, because it is something different that we've never really had before. It's kind of like that question of, do you want to know your own future if you have the option? And I think the answer is different for different people. - Yeah, exactly. Hey Adam, thank you so much for joining us. Adam Geitgey, everyone. He has a ton of courses on LinkedIn regarding machine learning, algorithms, artificial intelligence. So thank you so much for your time. - Yeah, thanks a lot. Thanks for having me.