
HAL 9000 in the film 2001.

The concept of inhuman intelligence goes back to the deep prehistory of mankind. At first the province of gods, demons, and spirits, it transferred seamlessly into the interlinked worlds of magic and technology. Ancient Greek myths had numerous robots, made variously by gods or human inventors, while extant artefacts like the Antikythera calendrical computer show that even in 200 BCE we could build machinery that usefully mimicked human intellectual abilities.
There has been no age or civilisation without a popular concept of artificial intelligence (AI). Ours, however, is the first where the genuine article—machinery that comfortably exceeds our own thinking skills—is not only possible but achievable. It should not be a surprise, then, that our ideas of what that actually means and what will actually happen are hopelessly coloured by cultural assumptions ancient and modern.
We rarely get it right: Kubrick’s 2001 saw HAL 9000 out-thinking highly trained astronauts to murderous effect; Bill Gates’ 2001 gave us Clippy, which was more easily dealt with.
Now, with AI a multi-billion dollar industry seeping into our phones, businesses, cars, and homes, it’s time to bust some of the most important AI myths and dip into some reality.
Myth: AI is all about making machines that can think
When digital computing first became practical in the middle of the last century, there were high hopes that AI would follow in short order. Alan Turing was famously comfortable with the concept in his 1948 “Intelligent Machinery” paper, seeing no objections to a working, thinking machine by the end of the century. Sci-fi author Isaac Asimov created Multivac, a larger, brighter version of actual computers such as the 1951 UNIVAC I, first used by the US Census Bureau. (The favour was later returned in computer chess: IBM's Deep Thought was named after Douglas Adams' hyperintelligent machine from The Hitchhiker's Guide to the Galaxy, and passed part of that name on to its successor Deep Blue, the first chess computer to outrank all humans.)
There are many projects and much research aimed at replicating human-like thought, mostly through hardware and software simulations of human brain structures and functions as new techniques reveal them. One of the higher-profile efforts is the Blue Brain project at the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, which started in 2005 and aims to produce a working model roughly equivalent to some human brain functions by 2023.
There are two main problems for any brain simulator. The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital; it depends on electrochemical signalling with inter-related timing and analogue components, the sort of molecular and biological machinery that we are only just starting to understand.
An image from the Blue Brain project, showing the complexity of the mammalian neocortical column. Shown here are "just" 10,000 neurons and 30 million interconnections. A human brain is millions of times more complex than this. (Image credit: EPFL)

Even much simpler brains remain mysterious. The landmark success to date for Blue Brain, reported this year, has been a small 30,000-neuron section of a rat brain that replicates signals seen in living rodents. That's just a tiny fraction of a complete mammalian brain, and as the number of neurons and interconnecting synapses increases, so the simulation becomes exponentially more complex—and exponentially beyond our current technological reach.
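To get a feel for the scale problem, consider a deliberately naive back-of-the-envelope calculation. The bytes-per-synapse figure is our own illustrative assumption (real simulations model timing and chemistry, not just a table of connection weights), but even this lower bound is sobering:

```python
# Back-of-envelope sketch (illustrative assumptions only, not Blue Brain's
# actual data model): how much storage would a bare table of synaptic
# connections need, ignoring the far harder electrochemical dynamics?

NEURONS = 100e9              # ~100 billion neurons in a human brain
SYNAPSES = 1_000e12          # ~1,000 trillion synaptic interconnections
BYTES_PER_SYNAPSE = 8        # assumed: a weight plus addressing overhead

RAT_SLICE_NEURONS = 30_000   # the Blue Brain rat-cortex section mentioned above

storage_pb = SYNAPSES * BYTES_PER_SYNAPSE / 1e15
print(f"Synapse table alone: ~{storage_pb:.0f} petabytes")
print(f"Neurons to simulate, versus the rat slice: {NEURONS / RAT_SLICE_NEURONS:,.0f}x as many")
```

Eight petabytes just to record who connects to whom, before a single electrochemical signal is modelled, gives some sense of why "millions of times more complex" is not hyperbole.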
This yawning chasm of understanding leads to the second big problem: there is no accepted theory of mind that describes what “thought” actually is.
This underlying quandary—attempting to define “thought”—is sometimes referred to as the hard problem, and a machine that genuinely solved it would be what's called strong AI. People engaged in commercial AI remain sceptical that it will be resolved any time soon, or that resolving it is necessary, or even desirable, for any practical benefit. There is no doubt that artificial intelligences are beginning to do very meaningful work, and that the pace of technological change will continue to shunt things along, but full-blown sentience still seems far-fetched.
IBM Watson, one of the highest-profile successes in AI to date, started its life as an artificial contender on the American TV game show Jeopardy. It combines natural language processing with a large number of expert processes that try different strategies to match an internal knowledge database with potential answers. It then checks the confidence levels of its internal experts and chooses to answer the question only if those levels are high enough (see the Jeopardy image below).
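The broad shape of that answer-or-stay-silent logic is easy to sketch. The following Python is purely illustrative (the "experts", their scores, and the pooling rule are toy assumptions of ours, not IBM's DeepQA pipeline), but it captures the idea of answering only when confidence clears a threshold:

```python
# Minimal sketch of confidence-thresholded answering, loosely modelled on the
# description above. The "experts" and their scores are entirely hypothetical;
# Watson's real pipeline is far more elaborate.

def answer_question(question, experts, threshold=0.7):
    """Ask each expert for (candidate, confidence) pairs, pool the scores,
    and answer only if the best pooled confidence clears the threshold."""
    pooled = {}
    for expert in experts:
        for candidate, confidence in expert(question):
            # Simple pooling rule: keep the highest confidence seen per candidate.
            pooled[candidate] = max(pooled.get(candidate, 0.0), confidence)

    if not pooled:
        return None  # no expert produced anything

    best, confidence = max(pooled.items(), key=lambda item: item[1])
    return best if confidence >= threshold else None  # stay silent below threshold

# Hypothetical toy experts, for illustration only.
def keyword_expert(question):
    return [("HAL 9000", 0.82)] if "2001" in question else []

def fallback_expert(question):
    return [("Deep Thought", 0.35)]

print(answer_question("Which computer menaced the crew in 2001?",
                      [keyword_expert, fallback_expert]))
```

The hard engineering in the real system lies in making those confidence estimates trustworthy; the thresholding itself is trivial.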
The first serious application of Watson that might actually improve the quality of human life has been as a diagnostic aid in cancer medicine. Since 2011, Watson has been assisting oncologists by delving through patient medical records and trying to correlate that data with clinical expertise, academic research, or other sources of data in its memory banks. The end result is that Watson might offer up treatment options that the human doctor may not have previously considered.
“[It’s like] having a capable and knowledgeable ‘colleague’ who can review the current information that relates to my patient,” said Dr. James Miser, the chief medical information officer at Bumrungrad international hospital in Thailand. “It is fast, thorough, and has the uncanny ability to understand how the available evidence applies to the unique individual I am treating.”
Watson, competing on the game show Jeopardy. The bars at the bottom show its confidence in each answer. If no answer passes the confidence threshold (the white line), Watson doesn't respond.

As marvellous as this sounds, it mostly serves to highlight the similarities and differences between current, narrow, practical AI and its strong, as-yet-mythical cousin. One basic engine of both is the neural network, a system based on basic biological concepts that takes a set of inputs and attempts to match them to things the network has previously seen. The key concept is that the system isn't told how to do this analysis; instead, it learns by being given both the inputs and the outputs of a correct solution, then adjusting its own computational pathways to build internal knowledge it can apply to later, unknown inputs.
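That training loop (show the network inputs and the outputs you wanted, then nudge its internal weights until the two agree) can be sketched in a few lines. This is a toy example under our own assumptions, a tiny two-layer network learning XOR with plain NumPy, nothing like the scale Watson or DeepFace operate at:

```python
# Minimal supervised learning sketch: a tiny neural network adjusts its own
# weights ("computational pathways") until its outputs match the examples
# it was given. Toy problem (XOR), illustrative only.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    hidden = sigmoid(X @ W1 + b1)          # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                     # how wrong were we?
    # Backpropagation: apportion the error to each weight and nudge it.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach [0, 1, 1, 0]
```

Nobody tells the network what XOR means; it is only ever shown inputs and correct outputs, which is precisely the point.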
We are now at the point where Watson and other AI systems such as Facebook’s DeepFace facial recognition system can do this with narrow, constrained data sets, but they are generally incapable by themselves of extending beyond the very specific tasks they’ve been programmed to do.
Google, for its part, seems more interested in narrow AI—searching pictures by content, crunching environmental and scientific data, and machine translation—than in predicting the emergence of general strong AI. The human brain can find, utilise, and link together vastly more complicated and ill-defined data, performing feats of recognition and transformation that can model entire universes. Google projects like DeepMind are experimenting with combining different techniques—in one case, pairing neural networks with reinforcement learning, where the machine tries essentially random actions until it happens to hit on a rewarding strategy, which it then refines—to try to close the gap, but they still act on very specific, narrow tasks.
A video showing DeepMind learning how to play the Atari game Breakout.
Most recently, the DeepMind project used this combination of techniques to “master a diverse range of Atari 2600 games.” Speaking to Wired, Google researcher Koray Kavukcuoglu said his team has built “a general-learning algorithm that should be applicable to many other tasks”—but learning how to perform a task is a long way away from consciously thinking about those tasks, what the repercussions of those tasks might be, or having the wherewithal to opt out of doing those tasks in the first place.
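The trial-and-error loop described above is easiest to see in its simplest form: tabular Q-learning, a distant and far smaller relative of DeepMind's deep Q-networks. Everything here (the five-cell corridor, the reward, the parameters) is our own toy construction, not anything from the Atari work:

```python
# Tabular Q-learning on a toy five-cell corridor: the agent acts at random at
# first, then gradually refines whichever moves turn out to earn a reward.
# Entirely illustrative; DeepMind's Atari agents pair this idea with a deep
# neural network that reads raw screen pixels.
import random

N_STATES, GOAL = 5, 4                  # corridor cells 0..4, reward at cell 4
MOVES = (-1, +1)                       # action 0 = left, action 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def best_action(state):
    return 0 if Q[state][0] >= Q[state][1] else 1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly exploit what we know; occasionally explore at random.
        action = random.randrange(2) if random.random() < epsilon else best_action(state)
        nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Nudge the value estimate towards reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print([best_action(s) for s in range(N_STATES)])  # learned policy: 1 means "move right"
```

The agent starts out moving at random; once a wander happens to reach the rewarding cell, the update rule propagates that reward back along the path, and the flailing hardens into a policy. Strapping a deep neural network onto the same idea, so that it can read raw screen pixels rather than a five-entry table, is roughly what DeepMind did.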

Myth: AI won’t be bound by human ethics
The myriad dangers of artificial intelligences acting independently from humans are easy to imagine in the case of a rogue robot warrior, or a self-driving car that doesn’t correctly identify a life-threatening situation. The dangers are less obvious in the case of a smart search engine that has been quietly biased to give answers that, in the humble opinion of the megacorp that owns the search engine, aren’t in your best interest.
These are real worries with immediate importance to how we use, and are used by, the current and plausible future of AI technology. If a doctor uses Watson (or Siri or Google Now or Cortana) as part of what proves to be a misdiagnosis, who or what is ethically responsible for the consequences? And might we one day face the issues of sentient machines demanding rights?
The good news is that these worries are being taken seriously. Trying to define ethics, even between humans, is notoriously difficult. Society’s generally accepted ground rules are codified in a practical way by law and the legal system—and it’s here that practical answers to AI ethics are being developed.
The first question is whether robots and AI are genuinely new things in human experience requiring new ways of thinking, or whether they can be corralled by tweaks to existing principles.
“Both,” Ryan Calo, assistant professor of law at the University of Washington and a leading light of cyberlaw, told Ars Technica. “Some rather visible people focus on the notion that robots will ‘wake up’ and demand rights or try to harm us. I don't think this will happen, at least not in the foreseeable future. But robots and AI even now present novel and interesting challenges for law and policy, just as the Internet did in the 1990s.”
So what happens if an AI learns or exhibits harmful behaviour? Who carries the can?
We have options, said Calo, including making people strictly liable if they deploy learning systems where they could cause trouble. “This could limit self-learning systems to those where they are really needed or less dangerous,” he said. But that can’t cover everything, according to Calo. “Risk management will play an even greater role in technology policy.”
The Internet itself, a new technology that brought new legal challenges, has a lot of lessons for AI law. “Some of those lessons are readily applicable to robots—for example, the idea that architecture or ‘code’ can be a kind of regulatory force, or that disciplines like computer science and law should talk to each other,” Calo said.
But other lessons don't translate, especially when it’s not just information that can be damaged. “Courts won't be so comfortable when bones instead of bits are on the line. I call this the problem of embodiment," he said.
“We may need a new model entirely. We may need a Federal Robotics Commission to help other agencies, courts, and state and federal lawmakers understand the technology well enough to make policy.”
Such a move would ensure that AI and robotics get the attention that they need as a new technology, while still hewing to familiar legislative approaches.
Boston Dynamics' "Petman" robot. Petman is ostensibly being developed to test military clothing and other equipment. Google acquired Boston Dynamics in 2013.
Make law, not war
There are less sanguine lessons for places where ethics have always been harder to enforce, though. In March 2015, the US Army sponsored a workshop that imagined what the battlefield will look like in 2050. Among its conclusions, it saw a huge increase in the role of artificial intelligence, not just in processing data but in prosecuting warfare, putting human soldiers “on the loop” rather than in it.
The workshop also predicted automated decision making, misinformation as a weapon, micro-targeting, large-scale self-organisation, and swarms of robots that would act independently or collaboratively. Even with humans in control, modern warfare is exceptionally prone to civilian collateral damage. With machines calling the shots in an environment filled with automated deception, what happens?
With so much AI development happening through open-source collaboration—Elon Musk and Sam Altman recently announced a billion-dollar investment in OpenAI, a research company devoted to keeping AI developments generally available—one ethical decision is immediately important. If you are developing AI techniques, do you want them used in war? If not, how can that be stopped?

Myth: AI will spin out of control
It’s hard not to notice when intellectual and business celebrities of the calibre of Stephen Hawking and Elon Musk characterise AI as enough of a threat to imperil the very existence of humanity.
"The development of full artificial intelligence could spell the end of the human race,” Hawking has said. "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.” Musk is equally cheerless, saying back in 2014 that strong AI is “potentially more dangerous than nukes, and more recently that AI is “our biggest existential threat.”
According to these technological luminaries, a sufficiently capable AI will not only be able to outthink us humans, but will necessarily evolve its own motivations and plans while being able to disguise and protect them, and itself, from us. And then we’ll be in trouble.
Gordon Moore's original graph, plotting a predicted trend in transistor density that would later become Moore's law. (Image credit: Intel)

Exactly how this scenario will come about has not been made clear, though. The leading theorist and cheerleader for mankind's imminent disappearance into insignificance or worse is Ray Kurzweil, who extrapolates the exponential growth in technological capability characterised by Moore's law to a point in the mid-2040s—the Singularity—where AI will be self-perpetuating and no longer reliant on human intellect.
Counter-arguments are plentiful, not least from the observation that exponential growth is frequently limited by outside factors that become more important as that growth continues. Moore’s law itself, which states that every couple of years or so the number of transistors on a given area of silicon will double, has held good for fifty years but is deeply tied to aspects of basic physics that place hard limits on its future.
As transistors get smaller they are capable of switching at higher speeds, but they also suffer from exponential increases in leakage due to quantum tunnelling. This is a complex subject, but in essence: as the various layers inside a transistor get thinner and thinner, it becomes easier for electrons to tunnel through. At the very least, this tunnelling effect significantly increases power consumption; at worst, it can cause catastrophic failure.
Moore’s law is only one half of the problem. The clock speed of processors regularly doubled from the mid-70s to the mid noughties, when it ran into another problem: an unmanageable increase in electrical power required, plus the corollary requirement of keeping these mega-power-dense chips from frying themselves.
Intel LGA 1155 pinout diagram. Note that the vast majority of the 1,155 pins are used to deliver power to the chip, rather than for communications.

While chips have continued to shrink, the maximum power consumption of a high-end computer chip has mostly stayed put. The end result is that we're now trying to shift about 100 watts of thermal energy from a chip that might be only 10 millimetres on each side, which is rather difficult. We'll soon need a novel cooling solution to go any further, lest we butt up against the laws of thermodynamics.
Ultimately, the biggest limit is that transistors are made of atoms, and we're approaching the point where we can't make a transistor any smaller or remove more atoms and still have a working device. Industry roadmaps point to the mid-2020s at the latest, but even today we're starting to feel the squeeze of the laws of physics. Intel said this year that 2016's switch from 14 nanometre transistors—where the smallest component is around 27 atoms across—to 10 nanometres was on hold, stretching Moore's two-year cadence to at least three years.
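The arithmetic behind that "atoms across" figure is straightforward if we make the simplifying assumption that a process node's headline number is a literal feature size, measured in silicon lattice constants of about 0.54 nm. The real definitions are murkier, but the trend is the point:

```python
# Rough sketch: how many silicon lattice constants (~0.54 nm) fit across a
# feature at each process node? Treating node names as literal feature sizes
# is a simplification, but it shows how quickly the atomic limit approaches.
LATTICE_NM = 0.543   # silicon lattice constant, in nanometres

for node_nm in (22, 14, 10, 7, 5):
    print(f"{node_nm:>2} nm node: ~{node_nm / LATTICE_NM:.0f} lattice constants across")
```

By these rough numbers, the 14 nm generation is already down to a couple of dozen atomic spacings, and a few more shrinks leave nothing left to remove.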
Waiting for the next big break
For the time being, then, most efforts have been focussed on multiple cores, arguing that two cores at 2 GHz are as good as one at 4 GHz—but for the most part they aren’t, as relatively few computing tasks can be efficiently split up to run across multiple cores.
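The reason the trade-off rarely works out is captured by Amdahl's law: if only a fraction of a task can run in parallel, the serial remainder caps the speedup no matter how many cores you add. A quick sketch (the parallel fractions below are illustrative, not measurements):

```python
# Amdahl's law: with a fraction p of the work parallelisable across n cores,
# the best possible speedup is 1 / ((1 - p) + p / n). Illustrative figures only.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"parallel fraction {p:.0%}: 2 cores -> {speedup(p, 2):.2f}x, "
          f"16 cores -> {speedup(p, 16):.2f}x")
```

A task that is only half parallelisable gains just 1.33x from a second core, which is why two cores at 2 GHz usually aren't as good as one at 4 GHz.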
The other big change in the last few years has been the rampant growth of large, centralised computing installations in data centres, public and hybrid clouds, and supercomputers. Performance gains have been hard to come by at a micro scale, and so companies and institutions have been going macro, where efficiencies of processing data at scale can be realised.
Siri doesn’t live on your iPhone: she lives in Apple’s data centres; the Xbox One can’t handle the physics of a destructible environment in real time, and so it’s off-loaded to Microsoft Azure instead.
Even in the data centre or supercomputer, though, other factors limit expansion—most notably power consumption and heat dissipation once again, but also the speed of light. The speed of light governs just about every digital communications interconnect, from copper wires to optical fibre to Wi-Fi, and it sets a hard limit on how quickly information can flow into and out of computer chips for processing. It already shapes how some specialised AI, most notably real-time financial analysis and high-frequency trading, can work.
A map of some of the financial microwave-link networks in southern England and continental Europe. Many of the transatlantic cables land at Land's End, Cornwall—by running a microwave network down to Cornwall from London, a couple of milliseconds can be gained. (Image credit: The Trading Mesh)
For example, three years ago a networking company built an above-ground microwave network between London and Frankfurt, halving the round-trip latency of the existing fibre network from 8.35ms to 4.6ms. The network was used in secret for high-frequency trading for a full year before it became public knowledge. It cost only about £10 million ($15 million) to build the connection between the two cities, but the traders using it may have made profits of hundreds of millions of pounds.
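Those latency numbers are, in essence, the speed of light doing geography. A back-of-the-envelope check (the straight-line distance, the fibre route overhead, and the refractive index below are our own assumptions, not figures from the network's builders):

```python
# Rough sanity check on the London-Frankfurt latency figures. Assumptions:
# ~640 km great-circle distance, microwave links running near-straight at
# roughly the vacuum speed of light, and fibre routes ~35% longer with light
# in glass travelling ~1.47x slower.
C_KM_PER_S = 299_792.458
DISTANCE_KM = 640

rtt_microwave_ms = 2 * DISTANCE_KM / C_KM_PER_S * 1000
rtt_fibre_ms = 2 * (DISTANCE_KM * 1.35) / (C_KM_PER_S / 1.47) * 1000

print(f"Ideal microwave round trip: {rtt_microwave_ms:.1f} ms")  # ~4.3 ms vs the 4.6 ms achieved
print(f"Fibre with a longer route:  {rtt_fibre_ms:.1f} ms")      # close to the 8.35 ms quoted
```

The microwave network's 4.6 ms is already within a few hundred microseconds of what physics allows, which is the point: no amount of intelligence, artificial or otherwise, buys its way past that limit.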
Nobody knows how strong AI will work, but it must involve processing vast amounts of information. Unless it gets smart enough to find a way around physical laws that appear to be hard-coded into the structure of spacetime, it will always be limited by how fast it can compare information held in different places.
Quantum physics, meanwhile, is rapidly developing the tools to treat information as something as fundamental to the functioning of the universe, and as circumscribed by physical law, as energy. Those tools promise a real answer to how smart AI can get, long before it gets there.

Myth: AI will be a series of sudden breakthroughs
In his 1964 short story Dial F For Frankenstein, Arthur C. Clarke described all the phones in the world simultaneously sounding a single ring as the global telephone system achieved sentience. Clarke later claimed that Tim Berners-Lee acknowledged this as one inspiration behind the invention of the Web—well, perhaps. But the image of a system “waking up” and becoming aware is central to many future mythologies of AI.
Reality seems disinclined to follow. The development and advancement of AI is happening in a slow and deliberate fashion. Only now, after some fifty years of development, is AI starting to make inroads into advanced applications such as healthcare, education, and finance. And again, these are still very narrow applications; you won't find an AI financial adviser that can also help diagnose your rare tropical disease.
These images were automatically annotated by one of Google's AI projects. You can imagine what Watson or other AIs might be able to do in a medical setting.
The myth of a “big bang” AI breakthrough has damaged the field many times in the past, with heightened expectations and associated investments leading to a wholesale withdrawal from research when predictions weren’t met.
These “AI winters” have occurred around the world and on a regular basis. In the 1980s, the Japanese government funded a half-billion-dollar “Fifth Generation” project designed to leapfrog Western technology with massively parallel supercomputers that effectively programmed themselves when presented with logically defined problems. By the time the project finished, it had produced nothing commercially useful, while Western computing systems had overtaken it by evolving conventional techniques. Funding for AI stopped.
Much the same had happened in the UK in the early 1970s, where most government investment in AI was cancelled after the Lighthill Report to Parliament concluded that none of the promised benefits of AI showed any sign of being useful in the real world. The report criticized AI’s “grandiose objectives” compared to its production of “toy” systems unable to cope with the complexities of actual data. Once again, the point was made that conventional approaches outperformed, and seemed likely to continue to outperform, anything that AI could realistically deliver.
Ironically, many failed AI projects—machine translation in the early 1960s, early neural networks in the late 1960s, speech recognition in the 1970s, “expert systems” that codified business knowledge in the 1980s—have become realities through the development of cloud computing, which couples very large amounts of computation with very large data sets. This commercially driven infrastructure, built for prosaic business reasons rather than for the ostensible advancement of AI, argues for gradual development in sync with utility.
What it really boils down to, then, is money. Commercialism is a forcing factor: it pushes businesses to continually improve their products, to adapt and develop their AI software as they go along. Until, of course, they create an AI that can adapt and develop itself, without human intervention. But that's still a long way off. Probably.
Rupert Goodwins started out as an engineer working for Clive Sinclair, Alan Sugar, and some other 1980s startups. He is now a London-based technology journalist who's written and broadcast about the digital world for more than thirty years. You can follow him on Twitter at @rupertg.

This post originated on Ars Technica UK
