Your robot might beat you up.
It sounds like a tired sci-fi cliché, but it’s true. Early last month, security firm IOActive released research that documented the sorry state of robotic security. The firm found lots of problems – and did so without looking very hard.
The research covered home, business and industrial robots from six companies: SoftBank Robotics, UBTECH Robotics, ROBOTIS, Universal Robots, Rethink Robotics and Asratec Corp. IOActive said that almost 50 vulnerabilities were found just during the initial research. The problems included weak cryptography, insecure communications, and privacy and authentication flaws.
The company said that hackers could use the vulnerabilities to conduct cyber espionage and “perform unwanted actions.” The scariest part: “In the most extreme cases, robots could be used to cause serious physical damage and harm to people and property.”
People are already being injured by robots. IOActive points to examples: a robot knocked over a toddler at the Stanford Shopping Center in Palo Alto, California; a robot smashed a window at a trade fair in Shenzhen, China; and nine soldiers were killed and 14 injured during a 2007 shooting exercise. None of these incidents was likely caused by hacking, but they illustrate the dangers posed by even slightly impaired robots.
We live in an era in which connected devices are deeply embedded in our lives. Everything – from pacemakers in people’s chests to traffic light coordination to the growing number of tasks performed by robots – is networked.
This makes great things possible. Now, an ambulance picking up a heart attack victim can get green lights all the way to the hospital while vital signs and other important information are sent to the team waiting in the ER. Problems with automobiles can be detected long before they cause a breakdown.
The other side of the coin, of course, is that bad people – and there are plenty of them – will attack these functions for profit and sometimes even for fun. Robots present a particularly dangerous element because they are strong, often mobile and generally used in close proximity to people. It’s a recipe for disaster.
Hacking would take the normal dangers, which can never be completely eliminated, to a much higher level. IOActive security researcher Lucas Apa suggested that buyers must demand that security be layered in. “First, companies and end-users should be aware of the possible risks and threats robots can introduce if they are insecure,” Apa wrote in response to questions from Technically Speaking. “[E]ducation on security comes second, to be integrated at early design stages by vendors. Developers, engineers and product managers should learn at least the foundations of security best practices, and adapt them to their development life cycle.”
The real-world challenge is that security increases cost without increasing revenue or end-user functionality. It therefore doesn’t always rise to the top of the priority list at companies looking to maximize every dime of investment. “Every new technology that has not been adopted mainstream yet, suffers from the same time-to-market pressures and limited budgets,” Apa wrote. “It is common for those products to focus on marketing and logistics, as vendors consider security as an intangible asset on the first production cycles.”
The sense, however, is that this should and must change: End users of robots – whether they are on factory floors or in consumers’ homes – must push manufacturers to design in security from the earliest stages. “Companies should always think twice when deploying any new technology, especially if security is not even a feature promoted by vendors,” Apa wrote. “If buyers' requirements and expectations rise, vendors would have to adapt to this demand and provide more secure robots. This is the only way vendors will start adopting more effective strategies against these common security concerns.”