Tuesday, January 12, 2016

I'm Not A Conspiracy Theorist, I'm a Robot Realist

by Eileen Neary
Assistant Project Manager

Ever since the creation of the first automated machines, fears of an artificial intelligence (AI) takeover (or cybernetic revolt, as futurists prefer to call it) have been growing. From the minds of science fiction greats like Isaac Asimov and Aldous Huxley to 2015’s blockbuster films Avengers: Age of Ultron and Ex Machina, our culture shows that people are fascinated by what machines could someday become. This future has long seemed purely fictional, but the (small) possibility of artificial intelligence destroying the human race has been on the minds of more than just storytellers and conspiracy theorists.

The phrase “artificial intelligence” was coined by computer scientist John McCarthy in 1955. By the early ’70s, one of the first successful QA, or question-answering, programs was up and running. It was named SHRDLU, and it could follow a user’s typed instructions to pick up and place blocks and pyramids in a virtual toy box. Today, “intelligent personal assistants,” otherwise known as AI software agents, are standard features on the biggest brand-name products. Perhaps what started with SHRDLU has continued with agents like Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa . . . but where does it end?
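If you’re curious what talking to SHRDLU was roughly like, here is a toy sketch in Python. It is purely illustrative: the BlockWorld class and the two commands it understands are invented for this post, and the real SHRDLU parsed far richer English and actually reasoned about its scene.

```python
# Toy block-world sketch, loosely inspired by SHRDLU-style commands.
# Illustration only: the class and command names here are invented,
# and the real SHRDLU understood much more of natural language.

class BlockWorld:
    def __init__(self):
        self.positions = {}   # block name -> what it rests on ("table" or another block)
        self.holding = None   # block currently held by the virtual arm

    def add(self, name, on="table"):
        self.positions[name] = on

    def pick_up(self, name):
        if self.holding is not None:
            return "I'm already holding the {}.".format(self.holding)
        if any(on == name for on in self.positions.values()):
            return "Something is on top of the {}; I need to clear it first.".format(name)
        self.holding = name
        del self.positions[name]
        return "OK, picked up the {}.".format(name)

    def put_on(self, target):
        if self.holding is None:
            return "I'm not holding anything."
        moved, self.holding = self.holding, None
        self.positions[moved] = target
        return "OK, the {} is now on the {}.".format(moved, target)


world = BlockWorld()
world.add("red block")
world.add("green pyramid", on="red block")

print(world.pick_up("red block"))      # refuses: the pyramid is on it
print(world.pick_up("green pyramid"))  # OK
print(world.put_on("table"))           # OK
print(world.pick_up("red block"))      # now it works
```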

Maybe with doom. An informal survey of participants at the Global Catastrophic Risk Conference, published by the University of Oxford’s Future of Humanity Institute (FHI) in 2008 [PDF link], put the chance of superintelligent AI wiping out the human race before the year 2100 at 5 percent (tied with molecular nanotechnology weapons). That beats out the respondents’ estimates for wars (4 percent), engineered pandemics (2 percent), nuclear war specifically (1 percent) and other possibilities, like nanotechnology accidents and natural pandemics.

High-profile scientists and technologists like Stephen Hawking, Elon Musk of SpaceX and Steve Wozniak of Apple have signed an open letter from the Future of Life Institute (an organization whose goal is to reduce existential risks to the human race) that addresses how to make AI safe and helpful to society, rather than more powerful and potentially deadly. The letter advocates a ban on autonomous weapons to prevent a “global AI arms race,” arguing on ethical grounds that such weapons could prove tragic for the human race.

There is also the longstanding fear that machines will keep putting people out of work. But as it turns out, there are some tasks that robots just cannot do. Amazon Mechanical Turk is just one host for HITs, or human intelligence tasks (tasks that require a person’s mind). These jobs involve taking surveys, describing and labeling the content of images or videos, matching data, and conducting research.
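If you’ve never seen the requester side of Mechanical Turk, here is a rough sketch of what posting a simple image-labeling HIT can look like using Amazon’s boto3 library for Python. Treat it as an illustration only: the title, reward, image URL and HTML form below are placeholder values, it targets the requester sandbox rather than the live marketplace, and actually running it requires an AWS account with a registered requester profile.

```python
# Sketch: posting an image-labeling HIT to the Mechanical Turk *sandbox* with boto3.
# All values below (title, reward, image URL, HTML form) are placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A minimal HTMLQuestion: the worker sees one image and types a one-word label.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" name="assignmentId" value="" />
        <img src="https://example.com/photo-to-label.jpg" />
        <p>What animal is in this picture?</p>
        <input type="text" name="label" />
        <input type="submit" />
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Label the animal in a photo",
    Description="Look at one image and type the animal you see.",
    Keywords="image, labeling, quick",
    Reward="0.05",                    # USD, passed as a string
    MaxAssignments=3,                 # ask three different workers
    LifetimeInSeconds=60 * 60 * 24,   # HIT stays available for one day
    AssignmentDurationInSeconds=300,  # each worker gets five minutes
    Question=question_xml,
)
print("Created HIT:", response["HIT"]["HITId"])
```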

If you’re still feeling a little uneasy about all of this, here’s a list of 10 jobs that have a less than 1 percent chance of ever being replaced by robots, according to researchers at the University of Oxford (yes, the same university with an institute that surveys estimates of the probability of human demise).

And if you’re still feeling uneasy, well, here’s a cute puppy.

Did You Know?
Besides Mechanical Turk, other sites to perform HITs include ShortTask, CrowdFlower and Clickworker. Certain apps are available too, like EasyShift and TaskRabbit. 
