MIT's new AI can teach itself to control robots by watching the world through their eyes — it only needs a single camera

www.livescience.com

Woah, this is awesome. I imagine this could be used to make "shitty robots" with imperfect joints, backlash, or other flaws more precise, which could help with 3D printers as well as 3D-printed robots, or with robot arms driven by internal strings and pulleys. Stiffness becomes a problem especially for larger robot arms.
Is there an open-source, easy-to-use software framework so people can start playing and experimenting with it?
yup, here's the repo for it https://github.com/sizhe-li/neural-jacobian-field
They really went all out, haha. They also have a linked website with a video and a tutorial, and the tutorial gives an easier-to-understand explanation of what they are doing.
It seems they use the optical flow of the video to train the neural net, but they also use a kind of volume rendering to train it to predict and reconstruct the 3D scene containing the robot. They use cheap depth cameras, but apparently it also works without depth. And it works for basically any robot you can imagine, which is really brilliant.
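To make that concrete, here is a minimal sketch (not the authors' code, and the names, shapes, and the `project` callable are all my assumptions) of the core idea as I understand it from the tutorial: a neural field that, for any 3D point on the robot, predicts a Jacobian mapping changes in actuation commands to that point's 3D motion, supervised by comparing the predicted image-space motion against observed optical flow.

```python
import torch
import torch.nn as nn

class JacobianField(nn.Module):
    """Maps a 3D point to a (3 x num_actuators) Jacobian of its motion."""
    def __init__(self, num_actuators: int, hidden: int = 256):
        super().__init__()
        self.num_actuators = num_actuators
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * num_actuators),  # one 3-vector per actuator
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) world-space samples -> (N, 3, num_actuators)
        return self.mlp(points).view(-1, 3, self.num_actuators)

def flow_loss(model, points, d_command, flow_2d, project):
    """Compare predicted image-space motion against measured optical flow.

    points:    (N, 3) sampled 3D points on/near the robot
    d_command: (num_actuators,) change in actuation between two frames
    flow_2d:   (N, 2) optical flow measured at the points' pixel locations
    project:   callable turning 3D displacements into 2D pixel displacements
    """
    jac = model(points)                       # (N, 3, A)
    d_points_3d = jac @ d_command             # (N, 3) predicted 3D motion
    d_pixels = project(points, d_points_3d)   # (N, 2) predicted 2D motion
    return torch.mean((d_pixels - flow_2d) ** 2)
```

The nice part of this formulation is that nothing in it assumes rigid links or known kinematics, which is presumably why it generalizes to soft and printed robots. The actual project also adds the volume-rendering reconstruction loss mentioned above, which this sketch leaves out.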
Looking at all those pneumatic soft robots, I now wonder if you could invert this to build a 3D input device: a kind of 3D-printed pneumatic joystick that simply measures the resulting air pressure at the end of internal channels when you tilt, twist, or move the stick. No wiring or assembly, just 3D print the joystick and glue it to a board.
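The sensing side of that idea could be as simple as a calibrated regression from channel pressures to pose. Purely a hypothetical sketch of the commenter's idea, with the channel count, model choice, and placeholder calibration data all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Calibration: record channel pressures while sweeping the stick through
# known poses (e.g. tracked with a camera or a jig). Placeholder data here.
pressures = np.random.rand(500, 4)   # 4 internal channels, 500 samples
poses = np.random.rand(500, 3)       # tilt_x, tilt_y, twist

# Fit a simple multi-output linear map from pressures to pose.
model = Ridge(alpha=1.0).fit(pressures, poses)

# Runtime: read the current pressures and estimate the stick's pose.
current = np.array([[0.3, 0.5, 0.2, 0.4]])
tilt_x, tilt_y, twist = model.predict(current)[0]
```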