The age of the robots is well entrenched. Artificial intelligence is effectively everywhere. It has permeated our society so successfully that we are barely aware of its existence. It’s the key driver of an enhanced and personalised user experience at online casinos. It is also the ‘cognitive tool’ of modern machines, from virtual personal assistants to self-driving cars.
Futurists warn of a world where slaughterbots, insect cyborgs and highly intelligent autonomous murder-machines wreak havoc on the human race. Other commentators predict alarming levels of unemployment, where the 24/7 efficiency of robots supplants human beings… in the warehouses and on the farms, highways and byways and the factory floor.
In reality, both of these scenarios are still a few decades away. However, there’s no way of knowing whether one or both of these doomsday predictions will actually play out for real… or is there?
The Future of AI on the Battlefield
The concept of killer robots is not a new one. Isaac Asimov, the celebrated sci-fi writer who popularised robotics in the 1950s, is arguably best remembered for a collection of short stories known as I, Robot. He also devised the Three Laws of Robotics, the first of which states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”.
This is indeed a noble premise that many in the world would like to embrace. The fear of the militarisation of AI is clearly very real, as is evident from the much-talked-about Campaign to Stop Killer Robots. To date, about 26 countries have pledged their support for a future devoid of cyborg soldiers and AI-enabled weaponry.
The problem is that ‘rogue nations’ like North Korea, Iran and even Russia appear hell-bent on developing autonomous warfare systems: weapons powered by machine learning algorithms and designed to kill. If successful, these combat bots, missiles and even small-arms systems will be able not only to ‘learn’ but to choose their own targets.
Even the mere suggestion of giving a machine the autonomy to decide whether a human is a target, without the input of a real person, is enough to instil fear in everyone… including the most battle-hardened hawks.
Without buy-in from every nation on earth, the campaign, however well intentioned, is destined to fail. A balance of power skewed in favour of what most of the world regards as the bad guys – Kim Jong-un, Bashar al-Assad, Hassan Rouhani and Vladimir Putin – will create fertile conditions for a world in jeopardy.
As with the Space Race of the sixties, there will undoubtedly be a race to militarise AI… and in the words of Russian President Vladimir Putin: “whoever leads in AI will rule the world”. With that in mind, are slaughterbots and autonomous weapons the future of global warfare? It appears increasingly so.
Unemployment in the Age of the Robots
The second scenario paints a picture of jobless masses whose livelihoods have been stolen by intelligent automated machines. Multinational investment bank Goldman Sachs has come up with a few alarming predictions to back up this rather bleak assessment.
According to the bank’s predictions, up to 25,000 truckers in the USA will lose their jobs every month once self-driving delivery vehicles are officially rolled out. It also foresees more than one million warehouse pickers and packers being systematically replaced by robots. E-commerce giant Amazon has already deployed learning robots, capable of finding and transporting items, across its vast network of warehouses. Extrapolate these figures worldwide and the picture is gloomy indeed.
The second scenario is as much a threat to global stability as the first. Immigration and globalisation are no longer the primary drivers of unemployment, no matter how loudly they’re trumpeted as such by US President Donald Trump and his cohorts in the west. Automation and AI have moved up the scale and are now among the main catalysts of job losses in the mining, agriculture, manufacturing and retail sectors.
Felonious Use of AI Technology a Major Red Flag
Despite these two rather depressing scenarios, killer robots and mass unemployment are not currently deemed the biggest risks associated with AI. It’s the irregular and felonious use of the technology that most concerns lawmakers and law enforcers.
Analysts have raised a red flag over the probability of criminals using machine learning technologies to enable more sophisticated hacking attempts. It’s envisaged that hackers will hijack AI techniques to automate payment processing and deploy chatbots to negotiate with multiple ransomware victims simultaneously, raking in millions upon millions of dollars in a matter of hours.
People with nefarious intent can also use AI to create believable, though fabricated, audio and video clips of prominent politicians and business leaders to support personalised disinformation campaigns. Furthermore, one shudders to think of the consequences should highly sophisticated open-source technologies like drone navigation and facial recognition fall into the wrong hands!
Robots have the capacity to be anything from slaughterbots, worker bees and drivers of cybercrime to intelligent, well-equipped little helpers designed to rescue victims of natural disasters, disarm bombs and tackle tasks that humans can’t or won’t do. It’s clearly up to us to decide whether to use the technology for good or ill.