Questions that were once firmly in the domain of philosophy are now being mulled over by the front-running thinkers of the 21st century. This post from OZY delves into the relevance of old philosophy to the future and wonders if that B.A. in philosophy just might be useful in 2059.
Enter a bookstore, while they still exist. Walk toward the philosophy section, toward shelves of fat books by Plato, Nietzsche, Spinoza. Some postmodernists with funny names — Zizek, Baudrillard. Perhaps you browse through their pages before putting them back in their place, respectfully but with a bit of a yawn.
More appealing, perhaps: the books at the front of the store, the best sellers, the ones that portend crises (Rise of the Robots: Technology and the Threat of a Jobless Future); others advise on surviving one (Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence). The business best sellers are more gung-ho about the changes to come: Zero to One: Notes on Startups, or How to Build the Future. Or Elon Musk: Tesla, SpaceX and the Quest for a Fantastic Future.
These almost-sci-fi books offer the Silicon Valley zeitgeist, distilled and ready to drink: This is the applied philosophy of doers and thought leaders, sculpted by the kings and fewer queens of the Bay Area, this gleaming, ambitious place that churns out innovation at an exponential rate and is quietly building your future. These thinkers have smartened up talking thermostats, installed robots on the manufacturing floor and begun to unschool us all. They will rewrite labor laws and colonize Mars. And whether you like their thinking or not, today’s techno-philosophers are incarnating the next generation of big ideas, intentionally tackling fundamental questions about the nature of consciousness and what constitutes the good life, questions that once lived mainly in philosophy departments.
Yet the grand ideas driving the technological age seem to move so fast as to be positively ephemeral for the rest of us. Many live in rapid, self-referential conversations and retweets and late-night ideating between designers. They take form in a language that could frighten away those who deem themselves technologically illiterate. Parse the insider chatter and you will discover the philosophies of the people who will direct flows of money through this boom and its potential busts and those who own your many screens. But perhaps the most seemingly dreamy-eyed of the bunch are using their epistemological, ontological and ethical muscles on giant, sometimes scary, positively cinematic issues of artificial intelligence. Meet the futurists.
***
One such group of futurists toils far from the marbled halls of a stately library; the day I arrive, they are operating out of a humble office in a missable building in downtown Berkeley. The Machine Intelligence Research Institute, a tiny collective with six researchers on staff, shares a hallway (tellingly) with the nonprofit Center for Applied Rationality. In the collective’s cramped space, where one must step over a mattress on the floor (available for emergency naps), there are rapidly drawn figures and terms of art spelled out on whiteboards, scrawled with all the intensity of a coach prepping football plays. Advised by PayPal billionaire Peter Thiel and founding Skype engineer Jaan Tallinn, MIRI-ers are math geeks, high-level types who might otherwise be in academia or pulling in six-figure salaries as Wall Street quants or, of course, as software engineers.
They’re here instead because of a problem whose exigency they, and a whole generation of technologically minded folks, feel acutely: that of artificial intelligence. Here in the Valley, AI is one of the two final frontiers, right up there with the cosmos. Big money is flowing in its direction; last month saw the launch of a star-studded nonprofit called OpenAI (an organization spokesperson said the team was too busy to comment right now) to study “friendly” artificial intelligence. With some $1 billion in hand, it’s backed by donors like Elon Musk, Reid Hoffman, Peter Thiel and more. The dream: obedient, human-enhancing robots. The threat: unfriendly AI, self-explanatory.
Nate Soares is executive director of the Machine Intelligence Research Institute, which is advised by PayPal billionaire Peter Thiel and Skype founder Jaan Tallinn.
Source: Alex Washburn/OZY
Quick definition check: AI doesn’t necessarily mean humanoid robots, and it’s not just headline-making stuff like IBM’s Jeopardy!-winning computer Watson or self-driving cars. Much of AI is being built by various unsexy algorithms that drive image recognition, natural language processing and machine learning. So far, such algorithms have made our computers immensely powerful and ace at learning a few clearly defined tasks, like, say, beating us at chess or finding a nearby restaurant. But they haven’t yet brought us artificial general intelligence, aka a close-to-human ability to reason and learn from experience. And we’re nowhere close to machines being conscious, if you believe that sort of thing. Today’s AI is a kind of “Swiss Army knife,” says Jerry Kaplan, self-proclaimed Silicon Valley “fossil,” futurist and author of a new book on AI, Humans Need Not Apply: It’s a bunch of helpful, individual tools in our pocket, but we don’t yet have one giant master tool.
Despite the hazy horizon, the good people at MIRI have taken it upon themselves to build the foundational mathematics to help us design AI to, one day, “align with our values,” as Nate Soares, MIRI’s executive director, puts it. Almost every super-sexy question I ask for MIRI’s take on has Soares chastising me with a “patience, young grasshopper.” Will he teach machines to think? Stave off murderous bots? He reiterates that MIRI’s work is seriously bedrock stuff. Like, they’re still inventing the calculus that might one day help someone build the tools to build a rocket ship. So don’t call them NASA.
But in the popular consciousness, AI has taken on epic proportions to rival the space-race craze. Soares traces the surge of lay interest to a few recent events: the publication of Oxford philosopher and futurist Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, a dense tome that zoomed surprisingly high on the New York Times best-seller list; the entry of Stephen Hawking and Musk into the fray. Not to mention the intensifying conversation about automation and what it means for jobs. Add in ubiquitous references in pop culture, like the blockbuster Ex Machina, and it all made 2015 the year AI went mainstream.
Some see the fuss as somewhat paranoid. As machine-learning expert and Coursera cofounder Andrew Ng told Fusion’s editor in chief, Alexis Madrigal, recently: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars. … It’s just not productive to work on that right now.” Plus, many find it hard to stomach the futurists themselves, seeing their obsession with the future as a disregard for the present. These are “great big moonshot, optimistic ideas about the future,” says Shannon Vallor, professor of philosophy at Santa Clara University in Silicon Valley. What about the troubles today?
Still others would prefer the philosophy talk be coupled with action. Take the goings-on at another haven for AI geeks who feel let down by the academy and industry alike — the Bay Area–based startup Vicarious, which, with more than $70 million in hand, is like a real-world test lab. Two years ago, by building a “brainlike vision system,” the Vicarious team beat those Captcha tests that ask you to prove you’re not a robot — they taught computers to recognize a complex pattern of images, much as a person would. Vicarious cofounder Scott Phoenix, a bounding, tall guy who handshakes me into the office past a treadmill desk (“We have a bunch of them”), describes MIRI as more like a “political think tank” looking to “build safety nets everywhere, even if most will probably end up unused.” For Phoenix, though, the safety nets need to be accompanied by some hand-dirtying. “The best way to build safe AI,” he says, is to, well, start building some AI even as we figure out how to save our own necks.
Shannon Vallor is a professor at Santa Clara University in Silicon Valley and president of the international Society for Philosophy and Technology.
Source: Alex Washburn/OZY
“If you think it’s too early to think about something, it’s probably slightly too late,” Soares says. Forebodingly, he adds, citing global warming and nuclear weapons: “Humanity does not have a good track record for preparing for threats … until it is too late.” For the extreme view, there’s Bostrom, who runs Oxford’s Future of Humanity Institute and has made a career by talking about the existential threats facing humanity. AI is foremost on the quaking-in-your-boots list. “You’re designing for a problem that doesn’t yet exist,” he admits. But in two or three decades, once we are face-to-face with the problem, well, “we’re kind of out of luck at that point,” he says. “Better to design ‘an insurance policy.’ ”
***
The three laws of robotics are: “One, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” These laws are cited almost as often as Moore’s law in some circles in Silicon Valley, never mind that they’re fictional, dreamed up by sci-fi writer Isaac Asimov.
If Asimov’s are the Ten Commandments, then everyone else is busy trying to write the common law to channel those core principles. And not by crudely hard-coding morality or plugging ethics into a machine. Rather, many researchers want to teach computers to teach themselves good behavior. One of them is Francesca Rossi, professor of computer science at the University of Padova in Italy and a leading AI researcher, who thinks machines should learn what we believe. Which might mean letting a robot learn our bad habits, like “some sort of irrationality that’s hopefully not dangerous.” If computers don’t think like us, Rossi tells me, we won’t accept them as coworkers and friends and will consign them instead to the weirdo lunch table.
But how do we think? Rossi likens human ethics to preferences; we have a few core principles we hold dear, but more often, we decide what’s right based on context. Even the saintliest among us are necessarily relativistic. If Rossi can figure out the logical patterns we follow when we make decisions and why we prefer various ethical principles, she might be able to teach a computer to learn those ethics.
Others in her field, like Carnegie Mellon professor Manuela Veloso, are testing the preference stuff in more immediate circumstances. Veloso is part of a team of CMU researchers who regularly play with a friendly little guy called the CoBot (Collaborative Robot), which winds its way through the university’s lecture halls and labs, handling sometimes conflicting requests from professors and even guiding visitors to meetings with academics. CoBot receives and parses multiple directions — go grab me a coffee, but go to the fourth floor first and come back to Dr. So-and-So’s office between now and 12 o’clock, etc. It can pull off plenty of tasks, but Veloso tells me one big challenge still lies ahead: figuring out how CoBot makes its decisions. It’s not as though we can ask it, straight up, why it did what it did.
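The article doesn’t detail how CoBot actually plans, but the shape of the problem Veloso describes can be sketched in a few lines: several requests, each with its own deadline, not all of them necessarily satisfiable at once. The snippet below is purely a toy illustration; the Task structure, the earliest-deadline-first ordering and the example errands are assumptions of mine, not CMU’s code. Notice, too, that a scheduler like this can produce an ordering without ever producing the human-readable “why” that Veloso says is still missing.

```python
# Hypothetical illustration only -- not CoBot's real planner.
# Shows one simple way to order time-constrained, possibly conflicting requests.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_min: float   # minutes from now by which the task must be done
    duration_min: float   # estimated minutes the task will take

def schedule(tasks):
    """Greedy earliest-deadline-first pass; requests that can't fit are flagged."""
    plan, conflicts, clock = [], [], 0.0
    for task in sorted(tasks, key=lambda t: t.deadline_min):
        if clock + task.duration_min <= task.deadline_min:
            clock += task.duration_min
            plan.append(task.name)
        else:
            conflicts.append(task.name)  # a request the robot would have to renegotiate
    return plan, conflicts

if __name__ == "__main__":
    requests = [
        Task("drop off a package on the fourth floor", deadline_min=20, duration_min=10),
        Task("grab a coffee", deadline_min=30, duration_min=15),
        Task("guide a visitor to Dr. So-and-So's office by noon", deadline_min=45, duration_min=25),
    ]
    plan, conflicts = schedule(requests)
    print("plan:", plan)
    print("needs renegotiation:", conflicts)
```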
Scott Phoenix’s startup, Bay Area–based Vicarious, built a “brainlike vision system” that enables computers to beat those Captcha “Are you a robot?” tests.
Source: Alex Washburn/OZY
The thing is, says Kaplan, “everyone quotes Asimov’s laws of robotics, but everyone forgets that they never work. That’s the whole point: They never work.”
Indeed, Asimov wrote a mountain of stories about every way those laws could and did go wrong, and we’ve barely begun to imagine all the possibilities. There is still more to consider, scientifically, legally, societally. Bostrom, who recently sat down with OZY at the Creative Destruction Lab’s machine-learning conference in Toronto, tells me we might want to ask the hard, weird questions now, like how to treat “digital minds in a morally responsible way.” In other words, are human rights robot rights? We’ll never be able to truly understand how machines “think” — even the grandest psychotherapist couldn’t draw thoughts out of them. “But that might just be a reason to start earlier, to make this a non-silly topic.”
The most curious challenge of all could be the law, that bastion of semantics and signification. Someday, someone will want to sue a robot. Say that robot is a driverless car and has an accident. Or — Kaplan’s example — say your assistant bot goes out to get you a coffee and ends up beating up a man who it thinks is stealing a woman’s purse on the sidewalk. In fact, that guy was just the boyfriend, helping his lady out. The bot, unable to distinguish theft from an ordinary social interaction, acted with the best of intentions. But who is culpable for the mix-up? You? The company? The robot itself? Kaplan guesses courts will eventually decide that robots — like corporations — are “artificial persons.” So sue that damn robot. Talk about defining the future of humanity.
***
And that’s what this is all about, really — few “professional” philosophers are studying the future with urgency, says Clark Glymour, alumni professor of philosophy at Carnegie Mellon. Philosophers, he tells me, are “a community with sinecures. We don’t really have to pay much attention. Nor are philosophers trained, with some exceptions, to deal with contemporary issues.” If the professionals won’t do the thinking, it’s inevitable that technologists will pull double duty as ethicists and developers alike. I heard once from an acquaintance that her boss at Google regularly quipped, “We spend so much time studying the past, and yet we don’t study the future with the same intensity.”
The buck has been passed, from the Glymours of the world to Google’s ethics of AI committee. Technological revolutions have always been accompanied by tectonic intellectual shifts. Those swings brought us Adam Smith, Thomas Malthus, the Romantics, Karl Marx. Now? It’s brought us some unlikely philosophers, but philosophers nonetheless, like Musk, Thiel — a former philosophy major in college whose favorite light reading is the French Catholic literary critic René Girard — and Hoffman (another philosophy major!).
It was in an old-fashioned bookstore a few months ago, Kepler’s Books in Menlo Park, that Jerry Kaplan sat discussing his new release. He had been painting the future for a small, rapt audience — it looked thrilling, occasionally gloomy and surprisingly near. A man in the audience raised his hand. With all his foresight, he asked, what did Kaplan recommend the next generation do to prepare? White-haired, grinning, the man with a Ph.D. in computer science said, “Get a broad, liberal arts education.” He referenced his B.A. in the history and philosophy of science. “Ph.D.s should really have an expiration date,” he said. Philosophy, though? That somehow remained both ancient and urgent.
Source: The 21st Century Philosophers | Fast Forward | OZY