Controlling a Robot Is Now As Simple As Point and Click

New interface offers a faster, more efficient way to remotely operate robots

Published: Wednesday, May 10, 2017 - 12:00

(Georgia Tech: Atlanta) -- The traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.

But for someone who isn’t an expert, the ring-and-arrow system is cumbersome and error-prone. It’s not ideal, for example, for older people trying to control assistive robots at home.

A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn’t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.

“Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we’ve shortened the process to just two clicks,” said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort.

Her team tested college students on both systems and found that the point-and-click method produced significantly fewer errors, allowing participants to perform tasks more quickly and reliably than with the traditional method.

“Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them,” said David Kent, the Georgia Tech Ph.D. robotics student who led the project. “Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That’s much easier.”

The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.

The point-and-click format doesn’t include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot’s perception algorithm analyzes the object’s 3-D surface geometry to determine where the gripper should be placed. It’s similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.

“The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can’t see, such as the back of a bottle,” said Chernova. “Our brains do this on their own — we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot’s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.”
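The released Georgia Tech software isn't reproduced here, but the two-click workflow it describes—click a region, analyze the local 3-D geometry, propose a few grasps, let the user pick—can be sketched in a few lines. The following Python sketch is purely illustrative: the point cloud is synthetic, the function names (points_near_click, propose_grasps) are hypothetical, and a simple PCA plane fit stands in for the actual perception algorithm.

```python
# Minimal sketch (not the released Georgia Tech software) of the two-click idea:
# 1) the user clicks a point, 2) the nearby 3-D points are analyzed to propose
# a handful of candidate grasps, 3) the user picks one. The point cloud,
# neighborhood radius, and grasp scoring are all simplified placeholders.
import numpy as np

def points_near_click(cloud: np.ndarray, clicked_point: np.ndarray,
                      radius: float = 0.05) -> np.ndarray:
    """Return the 3-D points within `radius` meters of the clicked point."""
    dists = np.linalg.norm(cloud - clicked_point, axis=1)
    return cloud[dists < radius]

def estimate_surface_normal(patch: np.ndarray) -> np.ndarray:
    """Fit a plane to the local patch via PCA; the direction of least variance
    approximates the surface normal the gripper should approach along."""
    centered = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def propose_grasps(patch: np.ndarray, n_candidates: int = 3) -> list[dict]:
    """Produce a few candidate grasp poses around the clicked region.
    Here we only vary the gripper roll about the estimated surface normal;
    a real planner would also check reachability and collisions."""
    center = patch.mean(axis=0)
    normal = estimate_surface_normal(patch)
    return [{"position": center,
             "approach": -normal,                 # move toward the surface
             "roll_deg": k * 180.0 / n_candidates}
            for k in range(n_candidates)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "depth camera" cloud: a flat tabletop patch with sensor noise.
    cloud = np.column_stack([rng.uniform(0, 0.5, 2000),
                             rng.uniform(0, 0.5, 2000),
                             0.8 + rng.normal(0, 0.002, 2000)])
    click = np.array([0.25, 0.25, 0.8])           # where the user clicked
    patch = points_near_click(cloud, click)
    for i, g in enumerate(propose_grasps(patch)):
        print(f"grasp {i}: pos={np.round(g['position'], 3)}, "
              f"approach={np.round(g['approach'], 3)}, roll={g['roll_deg']:.0f} deg")
```

In this sketch the user's only decisions are the initial click and the choice among the printed candidates, which mirrors the shift of burden from operator to algorithm described above.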

By analyzing data and recommending where to place the gripper, the burden shifts from the user to the algorithm, which reduces mistakes. During a study, college students completed a task about two minutes faster with the new method than with the traditional interface. The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.

In addition to assistive robots in homes, the researchers see applications in search-and-rescue operations and space exploration. The interface has been released as open-source software and was presented in Vienna, Austria, March 6–9, 2017, at the 2017 Conference on Human-Robot Interaction (HRI2017).

The study was partially supported by a National Science Foundation Fellowship (IIS 13-17775) and the Office of Naval Research (N000141410795). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.


About The Author

Georgia Tech News Center

Georgia Tech News Center features articles, photographs, and video about research, innovation, and current events prepared by the Institute Communications staff. Sign up for its news RSS feeds or follow its Twitter feed for the latest news. Located in Atlanta, Georgia, the Georgia Institute of Technology is a leading research university committed to improving the human condition through advanced science and technology.