Robotics: Crash Course AI #11


  • John-Green-bot: Hi, I'm John-Green-bot, and welcome to Crash Course AI!

  • Today we're learning about meeee!!!!

  • Jabril: Hey!

  • This is my show!

  • John-Green-bot: Uh oh.

  • Jabril: It's okay, John-Green-bot, we can do this intro together.

  • Robotics is a broad topic, because it's the science of building a computer that moves

  • and interacts with the world (or even beyond the world, in space).

  • John-Green-bot: So today we're going to talk about robots, like me, and what makes us tick!

  • INTRO

  • Some of the most exciting AIs are robots that move through the world with us, gathering

  • data and taking actions!

  • Robots can have wings to fly, fins to swim, wheels to drive, or legs to walk.

  • And they can explore environments that humans can't even survive in.

  • But, unlike humans, who can do many different things,

  • robots are built to perform specific tasks, with different requirements for hardware and

  • for learning.

  • Curiosity is a pretty amazing robot that spent 7 years exploring Mars for us, but it wouldn't

  • be able to build cars like industrial robots or clean your apartment like a Roomba.

  • Robotics is such a huge topic that it spans Computer Science, Engineering,

  • and other fields.

  • In fact, this is the third Crash Course video we've made about robots!

  • In the field of AI, robotics is full of huge challenges.

  • In some cases, what's easy for computers (like doing millions of computations per second)

  • is hard for humans.

  • But with robotics, what's easy for humans, like making sense of a bunch of diverse data

  • and complex environments, is really hard for computers.

  • Like, for example, in the reinforcement learning episode, we talked about walking, and how

  • hard it would be to precisely describe all the joints and small movements involved in

  • a single step.

  • But if we're going to build robots to explore the stars or get me a snack, we have to figure

  • out all those details, from how to build an arm to how to use it to grab things.

  • So we're going to focus on three core problems in robotics: Localization, Planning and Manipulation.

  • The most basic feature of a robot is that it interacts with the world.

  • To do that, it needs to know where it is (which is localization) and how to get somewhere

  • else (which is planning).

  • So localization and planning go hand-in-hand.

  • We humans do localization and planning all the time.

  • Let's say you go to a new mall and you want to find some shoes.

  • What do you do?

  • You start to build a map of the mall in your head by looking around at all the walls, escalators,

  • shops, and doors.

  • As you move around, you can update your mental map and keep track of how you got there -- that's

  • localization.

  • And once you have a mental map and know the way to the shoe store, you can get there more

  • quickly next time.

  • For example, you can plan a faster route, like taking the escalator instead of the elevator.

  • The most common way we take in that data is through perception with our eyes.

  • Our two slightly different views of the world allow us to see how far away objects are

  • in space.

  • This is called stereoscopic vision.
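
A quick sketch of the arithmetic behind stereoscopic depth: the shift of a point between the two images (its disparity) is inversely proportional to its distance. This is a minimal illustration with made-up numbers, not anything from the show or a real camera calibration.

```python
# Depth from stereo disparity: a point that lands far apart in the two camera
# images (large disparity) is close; a small disparity means it's far away.
# All numbers below are assumed for illustration.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen disparity_px apart in the two images."""
    return focal_px * baseline_m / disparity_px

# With a hypothetical 700 px focal length and a 6 cm baseline (about eye
# spacing), a point that shifts 40 px between the views is ~1.05 m away.
print(depth_from_disparity(700, 0.06, 40))  # ~1.05
```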

  • And this mental map is the key to what many robots do too, if they need to move around

  • the world.

  • As they explore, they need to simultaneously track their position and update their mental

  • map of what they see.

  • This process is called Simultaneous Localization And Mapping, which goes by the cool nickname

  • SLAM.
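
Real SLAM systems use probabilistic filters or graph optimization to handle noise, but a bare-bones sketch can show the two interleaved jobs: update where you think you are, then update the map with what you see. Everything here (the names, the simple dead-reckoning) is invented for illustration.

```python
import math

# A deliberately tiny SLAM-flavored loop; real systems also model uncertainty.
pose = {"x": 0.0, "y": 0.0, "heading": 0.0}   # where the robot thinks it is
landmark_map = {}                             # landmark id -> estimated (x, y)

def slam_step(distance, turn, observations):
    """One iteration: dead-reckon the pose, then place observed landmarks.

    observations is a list of (landmark_id, range_m, bearing_rad) readings.
    """
    # 1) Localization: update the pose estimate from wheel odometry.
    pose["heading"] += turn
    pose["x"] += distance * math.cos(pose["heading"])
    pose["y"] += distance * math.sin(pose["heading"])

    # 2) Mapping: convert each sensor reading into world coordinates.
    for lid, rng, bearing in observations:
        lx = pose["x"] + rng * math.cos(pose["heading"] + bearing)
        ly = pose["y"] + rng * math.sin(pose["heading"] + bearing)
        landmark_map[lid] = (lx, ly)  # real systems fuse estimates rather than overwrite

# Drive 1 m straight ahead while ranging a "door" landmark 2 m in front.
slam_step(1.0, 0.0, [("door", 2.0, 0.0)])
print(pose, landmark_map)  # the door lands about 3 m from where we started
```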

  • But instead of eyes, robots use all kinds of different cameras.

  • Many robots use RGB cameras for perception, which gather color images of the world.

  • Some robots, like John-Green-bot, use two cameras to achieve stereoscopic vision like

  • us!

  • But robots can also have sensors to help them see the world in ways that humans can't.

  • One example is infrared depth cameras.

  • These cameras measure distances by shooting out infrared light (which is invisible to

  • our eyes) and then seeing how long it takes to bounce back.

  • Infrared depth cameras are how some video game motion sensors work, like how the Microsoft

  • Kinect could figure out where a player is and how they're gesturing.

  • This is also a part of how many self-driving cars work, using a technology called LiDAR,

  • which emits over 100,000 laser pulses a second and measures when they bounce back.

  • This generates a map of the world that marks out flat surfaces and the rough placement

  • of 3D objects, like a streetlamp, a mailbox, or a tree on the side of the road.
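
Both infrared depth cameras and LiDAR rest on the same time-of-flight arithmetic: distance is the speed of light times the round-trip time, divided by two. Here's that back-of-the-envelope calculation (not any sensor vendor's actual API):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_s: float) -> float:
    """Distance to whatever bounced the pulse back, given the round-trip time."""
    return SPEED_OF_LIGHT * round_trip_s / 2  # halved: the light travels out AND back

# A pulse that returns after ~66.7 nanoseconds hit something about 10 m away.
print(distance_from_echo(66.7e-9))  # ~10.0
```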

  • Once robots know how close or far away things are, they can build maps of what they think

  • the world looks like, and navigate around objects more safely.

  • With each observation and by keeping track of its own path, a robot can update its mental

  • map.

  • But just keep in mind, most environments change, and no sensor is perfect.

  • So a lot goes into localization, but after a robot learns about the world, it can plan

  • paths to navigate through it.

  • Planning is when an AI strings together a sequence of events to achieve some goal, and

  • this is where robotics can tie into Symbolic AI from last episode.

  • For example, let's say John-Green-bot had been trained to learn a map of this office,

  • and I wanted him to grab me a snack from the kitchen.

  • He has localization covered, and now it's time to plan.

  • To plan, we need to define actions, or things that John-Green-bot can do.

  • Actions require preconditions, which describe how objects currently exist in the world.

  • And actions have effects, which change how those objects exist in the world.

  • So if John-Green-bot's mental map has a door between his current location and the

  • kitchen, he might want to use an "open door" action to go through it.

  • This action requires a precondition of the door being closed, and the effect is that

  • the door will be open so that John-Green-bot can go through it.

  • John-Green-bot's AI would need to consider different possible sequences of actions (including

  • their preconditions and effects) to reason through all the routes to the kitchen in this

  • building and choose one to take.

  • Searching through all these possibilities can be really challenging, and there are lots

  • of different approaches we can use to help AIs plan, but that would deserve a video of

  • its own.
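
To make preconditions and effects concrete, here is a toy STRIPS-style planner over an invented snack-fetching domain. It breadth-first searches through sequences of actions, the kind of reasoning described above, but it's a sketch of the idea, not the show's code or a production planner.

```python
from collections import deque

# Invented domain: a state is a set of true facts, and each action lists the
# preconditions it needs plus the facts it adds and deletes.
ACTIONS = {
    "open door":     {"pre": {"door closed"}, "add": {"door open"},  "del": {"door closed"}},
    "go to kitchen": {"pre": {"door open"},   "add": {"in kitchen"}, "del": set()},
    "grab snack":    {"pre": {"in kitchen"},  "add": {"has snack"},  "del": set()},
}

def plan(start: set, goal: set):
    """Breadth-first search for an action sequence that makes all goal facts true."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                   # every goal fact holds in this state
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state:         # preconditions satisfied?
                nxt = frozenset((state - act["del"]) | act["add"])  # apply effects
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                             # no sequence reaches the goal

print(plan({"door closed"}, {"has snack"}))
# -> ['open door', 'go to kitchen', 'grab snack']
```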

  • Anyway, during planning we run into the third core problem of robotics: manipulation.

  • What can John-Green-bot's mechanical parts actually do?

  • Can he actually reach out his arms and interact with objects in the world?

  • Many humans can become great at manipulating things (and I'm talking about objects, not

  • that Force-powers stuff).

  • Like, for example, I can do this but it took me a while to get good at it.

  • Just look at babies, they're really clumsy by comparison.

  • Two traits that help us with manipulation, and can help our robots, are proprioception

  • and closed loop control.

  • Proprioception is how we know where our body is and how it's moving, even if we can't

  • see our limbs.

  • Let's try an experiment: I'm going to close my eyes, stretch my arms out wide, and

  • point with both hands.

  • Now, I'm going to try to touch my index fingers without looking.

  • Almost perfect!

  • And I wasn't way off, thanks to proprioception.

  • Our nervous system and muscles give our bodies their sense of proprioception.

  • But most robots have motors and need sensors to figure out if their machine parts are moving

  • and how quickly.

  • The second piece of the puzzle is closed-loop control, or control with feedback.

  • The "loop" we're talking about involves the sensors that perceive what's going on,

  • and whatever mechanical pieces control what's going on.

  • If I tried that experiment again with my eyes open instead of closed, it would go even better.

  • As my fingers get closer to each other, I can see their positions and make tiny adjustments.

  • I use my eyes to perceive, and I control my arms and fingers with my muscles, and there's

  • a closed loop between them -- they're all part of my body and connected to my brain.
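
One classic way to write down closed-loop control is a proportional controller: sense the error, correct a fraction of it, and repeat. This toy sketch (with made-up numbers) shows why feedback converges on the target, while an open-loop move just commits to its initial guess:

```python
def close_the_gap(position: float, target: float, gain: float = 0.5, steps: int = 8) -> float:
    """Closed-loop control: each step senses the remaining error and corrects part of it."""
    for _ in range(steps):
        error = target - position   # perceive: how far off are we?
        position += gain * error    # act: adjust in proportion to the error
    return position

print(close_the_gap(0.0, 10.0))  # ~9.96: the error shrinks every step (5.0, 7.5, 8.75, ...)
# Open-loop control, by contrast, picks one motion up front, gets no feedback,
# and never corrects its initial error.
```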

  • It'd be a totally different problem if there were an open loop, or control without feedback,

  • like, for example, if I closed my eyes and tried to touch my finger to someone else's.

  • My brain can't perceive with their eyes or control their muscles, so I don't get

  • any feedback and basically have to keep doing whatever I start doing.

  • We use closed loop control in lots of situations without even thinking about it.

  • If a box we're picking up is heavier than expected, we feel it pull the skin on our

  • fingers or arms so we tighten our grip.

  • If it's even heavier than expected, we might try to involve our other hand, and if it's

  • too heavy, well, we'll call over my open-loop-example buddy.

  • But this process has to be programmed when it comes to building robots.

  • Manipulation can look different depending on a robot's hardware and programming.

  • But with enough work we can get robots to perform specific tasks like removing the cream

  • from an Oreo cookie.

  • Beyond building capable robots that work on their own, we also have to consider how robots

  • interact and coordinate with other robots and even humans.

  • In fact, there's a whole field of Human-Robot Interaction that studies how to have robots

  • work with or learn from humans.

  • This means they have to understand our body language and spoken commands.

  • What's so exciting about Robotics is that it brings together every area of AI into one

  • machine.

  • And in the future, it could bring us super powers, help with disabilities, and even make

  • the world a little more convenient by delivering snacks.

  • John-Green-bot: Here you go, Jabril!

  • Jabril: Thanks, John-Green-bot... go get me a spoon.

  • But we're still a long way from household robots that can do all these things.

  • And when we're building and training robots, we're working in test spaces rather than

  • the real world.

  • For instance, a LOT of work gets done on self-driving car AI before it even gets close to a real

  • road.

  • We don't want a flawed system to accidentally hurt humans.

  • These test spaces for AI can be anything from warehouses, where robots can practice walking,

  • to virtual mazes that can help an AI model learn to navigate.

  • In fact, some of the common virtual test spaces are programmed for human entertainment: games.

  • So next week, we'll see how teaching AI to play games (even games like chess) can

  • help us solve real-world problems. See ya then.

  • John-Green-bot: Crash Course AI is produced in association with PBS Digital Studios.

  • If you want to help keep Crash Course free for everyone, forever, you can join our community on Patreon.

  • And if you want to learn more about engineering robots, check out this video.

