|Artificial Intelligence For Better Or Worse?|
|Written by Nikos Vaggalis|
|Monday, 04 April 2016|
Advancement in the field of Artificial Intelligence is like a tsunami that cannot be stopped. There is a new conquest every day that, piece by piece, completes the puzzle. But what happens when the puzzle is finally assembled? Will the world transcend into a new state of consciousness, or will it come face to face with its own demise? It's not easy to tell, but there are hints and indications in both directions.
Nowadays we are increasingly accustomed to the idea of the rise of intelligent machines.
Boston Dynamics' Atlas robot is the prevalent example, whose abilities stretch as far as the progress its human inventors make, for now confined to tasks elementary for us humans, like shifting and placing boxes, wiping floors and other mechanical or repetitive chores.
This might not sound like a big deal, until you combine these primitive abilities with others thought not feasible just a few years ago, such as the robot interacting with its surroundings, getting back on its feet after falling down, or staying focused on its assigned task despite forceful external distractions.
Breakthroughs like these are set to increase industrial production, take care of pesky household chores and shoulder the burden of hazardous or heavyweight tasks, so that humanity can enjoy greater well-being.
The question is, will everyone be positively and equally affected by the coming revolution?
The first signs that robots will replace humans in white collar jobs are already visible, with the blue collar ones soon to follow.
White or blue collar aside, no guild of workers can feel safe that a machine will not take over its job, relying on the belief that it belongs to a niche field, programmers for example. Surely that sounds inconceivable?
On the contrary, recent research has revealed that even programmers are afraid. This research carries special significance since the software engineering guild is one that possesses insider information as to where things are heading.
Massive unemployment is one such direction, and there are already thoughts circulating in some Dutch cities and in countries like Finland of handing a basic income to every citizen as compensation for jobs lost to robot workers.
The recent commotion over Microsoft's AI Twitter chatbot Tay serves as an object lesson. The researchers' intention was that Tay would be capable of acquiring intelligence through conversations with humans. Instead it was tricked into altering its innocent and admittedly naive personality of a teenage girl to adopt an anti-feminist and racist one. Microsoft admitted to there being a bug in its design, which goes to remind us that after all it's just software, and thus prone to the same issues that any program faces throughout its existence.
Who can tell what will happen if the software agents that power robotic hardware get hacked or infected with a virus? How can we take adequate precautions against such an act?
You could argue that this is human malice and that with appropriate safety nets it can be avoided. Reality is quick to prove this notion false, as bugs in every piece of software ever developed, leading to vulnerabilities or malfunctions, are discovered every day. But for the sake of continuing this argument, let's pretend that humans develop bug-free software, something that eradicates the possibility of hacking and virus spreading. Then, what about the case of machines self-modifying and self-evolving their own code base?
AIX and AlphaGo take this to a whole new level: AIX by trying to teach a Minecraft character how to climb a virtual hill, deviating from the usual way of feeding its neural network massive amounts of data and instead letting it think its way out; AlphaGo by beating a human champion without resorting to millions of pre-calculated moves, instead thinking and adapting as it AlphaGo-es.
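To get a feel for what "thinking its way out" rather than being fed data means, here is a minimal reinforcement-learning sketch in the trial-and-error spirit described above. It is purely illustrative: the tiny 1-D "hill", the rewards and the hyperparameters are assumptions for the example, not details of Microsoft's AIX platform or of AlphaGo.

```python
import random

# Tabular Q-learning: an agent learns, by trial and error alone,
# to climb a 1-D "hill" of positions 0 (bottom) to 5 (top).
N_STATES = 6
ACTIONS = [-1, +1]              # step down / step up
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within bounds; reward 1 only for reaching the top."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(200):            # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Greedy rollout: the learned policy walks straight up the hill.
s, path = 0, [0]
for _ in range(20):
    if s == N_STATES - 1:
        break
    s, _ = step(s, max(ACTIONS, key=lambda x: Q[(s, x)]))
    path.append(s)
print(path)  # [0, 1, 2, 3, 4, 5]
```

The agent is never shown the answer; the reward signal alone shapes its behaviour, which is the conceptual gap between this style of learning and systems that merely look up pre-computed moves.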
This is subtle. It means that the road towards building machines that think, and not just calculate, is shorter than we thought. If a Minecraft character modifies its behaviour to adapt to its surroundings, trying to find ways to overcome obstacles, why wouldn't a military robot do the same in becoming an autonomous killing machine? We are all aware of the world's armies' timeless obsession with creating the ultimate soldier, one who doesn't sleep, eat or drink, his sole focus being his assignment.
Far-fetched as it might sound, that was the exact topic debated by a panel of experts at last year's World Economic Forum in Davos. Not that far-fetched after all.
The question now shifts to whether you can inject ethics into a machine: whether you can make certain that it obeys the rules governing war (yes, there are rules of conduct even in war, the Geneva Conventions), or whether it might turn against its makers in a moment of self-judgement.
These are a few of the many negatively poised ponderings, which should not, however, overshadow AI's novel achievements in many aspects of human life. Mobile devices that shut off all communication when they sense that their owner needs some rest, Cortana aiding you in the use of a computer, intelligent agents that strike up a conversation when they sense loneliness in the sound of your voice or in reading your facial expressions, self-driving cars that give mobility to disabled people or make the roads safe again.
These are examples of how AI can provide us with convenience and a more comfortable life. At a different level, there are the life-saving properties of AI: aiding human doctors to make accurate diagnoses by consuming petabytes of information at a glance, or discovering cures to lethal diseases that humans alone could not, paving new ways to medical miracles.
One thing is certain; you can't stop progress.
Curiosity is embedded in our genes, and researchers put all the negative thoughts aside, not because of blind bias but because they in turn are genetically programmed for just one thing: seeking evolution.
The problem is, should we seek evolution in any way achievable?
What happens when the evolved man-made creation acquires more intelligence than its maker and turns from friend to foe? How can you safeguard against that case?
Unfortunately, these are questions as yet unanswered by either scientists or policy makers. The former sideline them, while the latter just can't keep up with the pace.
It increasingly seems that decisions will be made on a case-by-case basis, by trial and error. Let's just hope that the error is not cataclysmic...
image by A Health Blog
|Last Updated ( Monday, 04 April 2016 )|