Self-driving cars have some of the most advanced AI systems today. They decide where to steer and when to brake by taking constant radar and laser readings and feeding them into algorithms. But much of driving is anticipating other drivers' maneuvers and responding defensively - functions that are associated with consciousness. "Self-driving cars will have to read the minds of what other self-driving cars want to do," says Paul Verschure, a neuroscientist at Universitat Pompeu Fabra in Barcelona.

Lipson's team also built a robot that can develop an understanding of its own body. The four-legged, spidery machine is about the size of a large tarantula. When switched on, its internal computer has no prior information about itself. "It doesn't know how its motors are arranged, what its body plan is," Lipson says. But it has the capacity to learn. It performs every action it is capable of to see what happens: how, for example, turning on a motor bends a leg joint. "Very much like a baby, it babbles," Lipson says. "It moves its motors in a random way." After four days of flailing, it realizes it has four legs and figures out how to coordinate and move them so it can slither across the floor. When Lipson unplugs one of the motors, the robot realizes it now has only three legs and that its actions no longer produce the intended effects. "I would argue this robot is self-aware in a very primitive way," Lipson says.

Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed them to solve it. They can't define problems on their own, and defining problems is usually the hard part. In a forthcoming paper for the journal Trends in Cognitive Sciences, Ryota Kanai, a neuroscientist and founder of the Tokyo-based startup Araya, discusses how to give machines intrinsic motivation.
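The self-modeling loop described above - issue random motor commands, fit a model of how the body responds, then flag a leg whose motor no longer behaves as predicted - can be sketched in a few lines. Everything below is a hypothetical toy (a linear "physics" for each joint, a least-squares gain estimate, an arbitrary mismatch threshold), not Lipson's actual system:

```python
import random

class SimulatedBody:
    """Toy stand-in for the robot's unknown physics: each motor command
    moves its joint by a hidden gain (zero once the motor is unplugged)."""
    def __init__(self, gains):
        self.gains = list(gains)

    def unplug(self, motor):
        self.gains[motor] = 0.0

    def actuate(self, commands):
        return [g * c for g, c in zip(self.gains, commands)]

def babble(body, n_trials, n_motors=4, rng=random.Random(0)):
    """Motor babbling: send random commands, record joint responses,
    and estimate each motor's gain by least squares."""
    num = [0.0] * n_motors
    den = [0.0] * n_motors
    for _ in range(n_trials):
        cmds = [rng.uniform(-1, 1) for _ in range(n_motors)]
        obs = body.actuate(cmds)
        for i in range(n_motors):
            num[i] += cmds[i] * obs[i]
            den[i] += cmds[i] ** 2
    return [num[i] / den[i] if den[i] else 0.0 for i in range(n_motors)]

def working_legs(body, model, threshold=0.1, rng=random.Random(1)):
    """Compare the learned self-model's predictions against reality;
    a leg whose motor stops responding as predicted is flagged False."""
    cmds = [rng.uniform(0.5, 1.0) for _ in model]
    obs = body.actuate(cmds)
    return [abs(m * c - o) < threshold for m, c, o in zip(model, cmds, obs)]
```

Used on a body with four nonzero gains, `babble` recovers the gains and `working_legs` reports all four legs healthy; after `body.unplug(2)`, the same self-model's predictions for that joint stop matching observations, mirroring the robot's discovery that it has only three working legs.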
Robot, Know Thyself

[Photo: Children interact with the programmable humanoid robot "Pepper," developed by French robotics company Aldebaran Robotics, at the Global Robot Expo in Madrid in 2016. Gerard Julien / AFP-Getty Images]

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they're trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column "A" with items in column "B." The AI systems basically memorize these associations; there's no deeper logic behind the answers they give. We spend an inordinate amount of time analyzing ourselves and others, and arguably that's the main role of our conscious minds. If machines had minds, they might not be so inscrutable: we could simply ask them why they did what they did. "If we could capture some of the structure of consciousness, it's a good bet that we'd be producing some interesting capacity," says Selmer Bringsjord, an AI researcher at Rensselaer Polytechnic Institute in Troy, N.Y.

Although science fiction may have us worried about sentient robots, it's really the mindless robots we need to be cautious of. Conscious machines may actually be our allies.