2017-02-16



Artificial intelligence has advanced considerably over the last decade. Unfortunately, as with any new technology, setbacks are inevitable. Google recently encountered serious concerns with one of its AI systems after it started acting aggressively.

Any business that depends on new AI technology should be aware of this problem. It underscores the need for caution before adopting new AI solutions.

Google Disaster a Setback for AI Evolution

In 2016, developers reached a number of milestones in AI innovation. According to TechRepublic, here are a few of the most promising:

A computer was programmed to beat the world’s Go champion.

Tesla Autopilot brought a man with a blood clot to the hospital.

An AI simulation was able to predict the outcome of the Kentucky Derby.

Microsoft’s speech recognition software was able to recognize speech better than humans.

MogIA, an Indian AI technology, was able to predict the surprising outcome of the United States election, which most pundits never saw coming.

AI was able to help improve cancer treatments.

While these are very encouraging developments, we must always remember that AI has its limits. Researchers at DeepMind, Google’s AI research division, encountered one of these shortcomings earlier this month.

The team of researchers had their most sophisticated AI application play a series of games against itself. They observed the simulations and were surprised by how they turned out.

The two agents began firing lasers at each other almost immediately and behaved with striking aggression. The researchers attributed this outcome to the environment the agents were placed in.

“These results show that agents learn aggressive policies in environments that combine a scarcity of resources with the possibility of costly action,” said the researchers.

However, there may be deeper reasons for concern. The behavior could point to a serious problem with the AI itself.

One possibility is that these machines simply aren’t programmed to understand game theory. Human beings don’t always take the most aggressive action against their counterparts: they observe, often pursue more diplomatic approaches, and may hold back to conserve resources in competitive environments.

This could be a serious problem if it isn’t addressed, and it may also point to fundamental structural problems with other AI applications. Developers would need to learn how to build systems that can be controlled more safely.

Other Experts Worry About Aggressive Nature of AI

The outcome at DeepMind confirms the fears other brilliant minds have raised in recent months. Elon Musk and Stephen Hawking have warned that AI could create serious peril for the human race. Bart Selman, a Cornell professor and AI ethicist, shares their concerns.

Do these fears mean that we need to abandon all efforts to develop new AI technology? No. However, they do indicate that AI systems need to be constrained until we have time to understand their potential. It’s perfectly reasonable to use AI technology for photo-editing tools and other consumer applications, but it would be very risky to allow an unfettered AI system to run a military application, manage offshore drilling, or control other operations where countless lives are at risk.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” Selman said at a recent event. “And that’s making it more urgent to look at this issue.”

Experts will continue to raise concerns about the future of AI until these issues are addressed. Automotive startups and other new tech companies must proceed with caution. If nothing else, the recent experiments at DeepMind show that AI can be dangerous if given too much autonomy.

“What does this mean for the future of AI?” asks Ryan Kh. “Unfortunately, we won’t know until we have had time to do more research.”

