From the course: Cognitive Technologies: The Real Opportunities for Business

Robotics

- In this lecture, we're going to talk about robotics, in which multiple cognitive technologies are combined to enable agents to interact with their physical environment. Eric Nyberg joins me here in the Newell-Simon high bay, in the Robotics Institute at Carnegie Mellon University, to help us explore this topic.

There are three main categories of robots. First are manipulators, or robot arms, which are physically anchored to their workplace. Next are mobile robots, including unmanned ground vehicles and unmanned air vehicles, such as drones. Finally, there are mobile manipulators, which combine mobility with manipulation; think of the humanoid robots popular in movies. Robots have been in industrial use for decades, and a newer generation of autonomous robots can sense and respond to their environment, plan their actions, and in some cases interact and work alongside people.

Let's talk about how robots work. Robotic systems involve multiple technologies and disciplines, including mechanical and electrical engineering, machine learning, computer vision, planning, and even speech recognition. One of the things that makes robotics hard is uncertainty. If a robot cannot observe all of its environment, say because something is blocking its view, it has to make decisions based on partial information. A robot's action can also produce an unforeseen result, say if the robot loses its balance on a loose rock. Technologies that support reasoning under uncertainty, and that can give a robot detailed information about its current state, are crucial. (A simple illustration of this idea appears after this exchange.)

The essential building blocks of robots are sensors and effectors. Sensors enable robots to perceive their environment. They include passive sensors, like cameras, and active sensors, which send energy into the environment and measure what is reflected back, like sonar or lidar. The main uses of sensors in robotics include range finders, used to measure the distance to nearby objects; location sensors, which determine the location of the robot; proprioceptive sensors, which inform the robot of its own motion, such as the position of its joints; and force and torque sensors, which measure how hard a robot is gripping or turning. Robots use effectors, such as legs, wheels, joints, and grippers, to exert physical force on the environment.

Computer vision and machine learning play an important role in robot perception, and planning plays an important role in robot action. Eric, what about the intersection of language technologies and robotics?

- Well, David, you mentioned mobile robots. In the future, we're looking forward to applications where robots and humans interact in the environment, maybe collaborating to reach a goal or solve a task. Natural language is increasingly important in that context, because humans would prefer to interact with robots through a normal dialogue, rather than having to stop to punch in commands or use some kind of dedicated controller. The environments where we'd like to use robots collaboratively are often stressful, risky environments, so we have to let humans communicate as naturally as possible; we all know that when we're under stress, it can be hard to remember passwords or special codes to get something done.

- [David] Language technologies will be essential in the emerging field of collaborative robotics.

- [Eric] Absolutely.
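To make the reasoning-under-uncertainty point concrete: a common pattern is for a robot to maintain a probability distribution (a "belief") over possible states and sharpen it as noisy sensor readings arrive, rather than trusting any single observation. The Python sketch below is a minimal discrete Bayes filter; the states, sensor model, and readings are invented for illustration and are not from the course.

```python
# Minimal discrete Bayes filter: a robot is unsure which of three
# doorways it is facing, and its range sensor is noisy and partial.
# All states, probabilities, and readings here are illustrative.

states = ["door_A", "door_B", "door_C"]

# Start with no information: a uniform belief over the states.
belief = {s: 1.0 / len(states) for s in states}

# Assumed sensor model: P(reading | state). A "near" reading is
# most likely at door_A, a "far" reading at door_C.
sensor_model = {
    "near": {"door_A": 0.7, "door_B": 0.2, "door_C": 0.1},
    "far":  {"door_A": 0.1, "door_B": 0.3, "door_C": 0.6},
}

def update(belief, reading):
    """Bayes update: weight each state by how well it explains the
    reading, then renormalize so the belief sums to 1 again."""
    unnormalized = {s: belief[s] * sensor_model[reading][s] for s in belief}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Two noisy readings gradually sharpen the belief, without the robot
# ever needing to observe its environment completely.
for reading in ["near", "near"]:
    belief = update(belief, reading)
    print(reading, {s: round(p, 3) for s, p in belief.items()})
```

After two "near" readings the belief concentrates on door_A but never collapses to certainty. Real robots apply the same weigh-and-renormalize pattern over far richer state spaces, using particle filters or Kalman filters.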
- One thing that interests me is the notion of commanding a robot, or asking a robot about something in its environment, which it then has to perceive and recognize. How hard is that?

- [Eric] Well, David, what you're referring to is the situation where a human might tell a robot to go pick up an object or move to a certain location. The robot needs to understand the description the human has given, and then decide which of the things it can perceive in its environment the human is actually referring to. So if I say, go stand by that tree over there, the robot may have to scan the visual environment. Maybe it's obvious, because there's only one tree; but maybe there are four or five, and then the robot has to clarify exactly which direction, or which distance, we're talking about with respect to the tree that's the actual goal. (A sketch of this perceive-match-clarify loop appears at the end of this lecture.)

- Still an area of research and development. So the applications of robotics are expanding as many of the underlying technologies, including sensors, machine learning, and computer vision, continue to improve. Applications include manufacturing; agriculture, where we're seeing autonomous tractors; transportation, where we see autonomous cars and trucks; healthcare, where surgical robots are in wide use, as are mobile robots that deliver supplies in hospitals; hazardous environments, where robots can be used for toxic waste clean-up; personal services, like vacuum cleaners, lawn mowers, and golf caddies; entertainment, where we're seeing robotic toys come into the market; and human augmentation, exoskeletons worn on the body to help disabled people move or to help workers manage heavy loads.

To sum up, there are three main categories of robots: manipulators, mobile robots, and mobile manipulators. The building blocks of robots are sensors and effectors. Applications of robots are numerous and growing, driven by improvements in the underlying technologies.
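Returning to Eric's "go stand by that tree" example: one simple way to ground a spoken description is to match it against the objects the robot currently perceives and ask a clarifying question when more than one candidate fits. The sketch below is hypothetical, not code from the course; the object labels, positions, and fields are invented for illustration.

```python
# Hypothetical perceived objects, e.g. output of an object detector:
# each has a label and a robot-relative position in meters.
perceived = [
    {"label": "tree", "x": 4.0, "y": 1.5},
    {"label": "tree", "x": 9.0, "y": -2.0},
    {"label": "rock", "x": 2.0, "y": 0.5},
]

def ground_reference(label, objects):
    """Resolve a description like 'that tree' against perception.
    Returns (object, None) if unambiguous, else (None, question)."""
    candidates = [o for o in objects if o["label"] == label]
    if not candidates:
        return None, f"I don't see a {label}."
    if len(candidates) == 1:
        return candidates[0], None
    # Several matches: describe them (here, by distance from the
    # robot) so the human can disambiguate.
    options = ", ".join(
        f"{label} about {round((o['x']**2 + o['y']**2) ** 0.5)} m away"
        for o in candidates
    )
    return None, f"I see {len(candidates)} {label}s: {options}. Which one?"

target, question = ground_reference("tree", perceived)
if question:
    print(question)            # the robot asks a clarifying question
else:
    print("Navigating to", target)
```

With two trees in view, the robot replies with a clarifying question instead of guessing. Real systems add spatial language ("the one on the left") and dialogue state, but this perceive-match-clarify loop is the core of the interaction Eric describes.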
