The Happiness Code
A new approach to self-improvement is taking
off in Silicon Valley: cold, hard rationality.

By JENNIFER KAHN
JAN. 14, 2016

Last summer, three dozen people, mostly programmers in their 20s, gathered in a rented house in San Leandro, Calif., a sleepy suburb of San Francisco, for a lesson in ‘‘comfort-zone expansion.’’ An instructor, Michael Smith, opened the session with a brief lecture on identity, which, he observed, can seem immutable. ‘‘We think we behave in certain ways because of who we are,’’ he began. ‘‘But the opposite is also true. Experience can edit identity.’’

The goal of the ‘‘CoZE’’ exercise, Smith explained, was to ‘‘peek over the fence’’ to a new self by doing something that makes you uncomfortable and then observing the result. There was an anticipatory hush, and then the room erupted. One person gave a toast. A product manager at Dropbox broke into song. In a corner, a programmer named Brent took off his shirt, revealing a milky chest and back, then sat with his head bowed. (He would later walk around wearing a handwritten sign that read, ‘‘Please touch me.’’)

The exercise went on for an hour, and afterward, participants giddily shared their stories. One person described going onto the patio and watching everyone else through the window, in order to experience a feeling of exclusion. Another submerged his hand in a pan of leftover chicken curry, to challenge his natural fastidiousness. Unexpectedly, he enjoyed the experience. ‘‘It felt playful,’’ he said.

At the end, Smith led everyone in a group cheer. The CoZE exercise was part of a four-day workshop offered by the Center for Applied Rationality (CFAR) in Berkeley, and each of the workshop’s sessions invariably finished with participants chanting, ‘‘3-2-1 Victory!’’ — a ritual I assumed would quickly turn halfhearted. Instead, as the weekend progressed, it was performed with increasing enthusiasm. By the time CoZE rolled around, late on the second day, the group was nearly vibrating. When Smith gave the cue, everyone cheered wildly, some ecstatically thrusting both fists in the air.

As self-help workshops go, Applied Rationality’s is not especially accessible. The center’s three founders — Julia Galef, Anna Salamon and Smith — all have backgrounds in science or math or both, and their curriculum draws heavily from behavioral economics. Over the course of the weekend, I heard instructors invoke both hyperbolic discounting (a mathematical model of how people undervalue long-term rewards) and prospect theory (developed by the psychologists Daniel Kahneman and Amos Tversky to capture how people inaccurately weigh risky probabilities). But the premise of the workshop is simple: Our minds, cobbled together over millenniums by that lazy craftsman, evolution, are riddled with bad mental habits. We routinely procrastinate, make poor investments, waste time, fumble important decisions, avoid problems and rationalize our unproductive behaviors, like checking Facebook instead of working. These ‘‘cognitive errors’’ ripple through our lives, CFAR argues, and underpin much of our modern malaise: Because we waste time on Facebook, we end up feeling harried; when we want to eat better or get to the gym more, we don’t, but then feel frustrated and guilty.

Some of these problems are byproducts of our brain’s reward system. We cash checks quickly but drag our feet paying credit-card bills, no matter the financial cost, because cashing a check generates a surge of dopamine but paying a bill makes us stressed. Other mistakes are glitchier. A person who owes back taxes might avoid talking to the I.R.S. because of a lingering monkey-brain belief that avoiding bad news keeps it from being true. While such logical errors may be easy to spot in others, the group says, they’re often harder to see in ourselves. The workshop promised to give participants the tools to address these flaws, which, it hinted, are almost certainly worse than we realize. As the center’s website warns, ‘‘Careful thinking just isn’t enough to understand our minds’ hidden failures.’’

Most self-help appeals to us because it promises real change without much real effort, a sort of fad diet for the psyche. (‘‘The Four-Hour Workweek,’’ ‘‘The Life-Changing Magic of Tidying Up.’’) By the magical-thinking standards of the industry, then, CFAR’s focus on science and on tiresome levels of practice can seem almost radical. It has also generated a rare level of interest among data-driven tech people and entrepreneurs who see personal development as just another optimization problem, if a uniquely central one. Yet, while CFAR’s methods are unusual, its aspirational promise — that a better version of ourselves is within reach — is distinctly familiar. The center may emphasize the benefits that will come to those who master the techniques of rational thought, like improved motivation and a more organized inbox, but it also suggests that the real reward will be far greater, enabling users to be more intellectually dynamic and nimble. Or as Smith put it, ‘‘We’re trying to invent parkour for the mind.’’

CFAR has been offering workshops since 2012, but it doesn’t typically advertise its classes. People tend to hear about the group from co-workers (usually at tech companies) or through a blog called LessWrong, associated with the artificial-intelligence researcher Eliezer Yudkowsky, who is also the author of the popular fan-fiction novel ‘‘Harry Potter and the Methods of Rationality.’’ (Yudkowsky founded the Machine Intelligence Research Institute (MIRI), which provided the original funding for CFAR; the two groups share an office space in Berkeley.) Yudkowsky is a controversial figure. Mostly self-taught — he left school after eighth grade — he has written openly about polyamory and blogged at length about the threat of a civilization-ending A.I. Despite this, CFAR’s sessions have become popular. According to Galef, Facebook hired the group to teach a workshop, and the Thiel Fellowship invited CFAR to teach several classes at its annual meeting. Jaan Tallinn, who helped create Skype, recently began paying for math and science students to attend CFAR meetings.

This is all the more surprising given that the workshops, which cost $3,900 per person, are run like a college-dorm cram session. Participants stay on-site for the entire time (typically four days and nights), often in bargain-basement conditions. In San Leandro, the organizers packed 48 people (36 participants, plus six staff members and six volunteers) into a single house, using twin mattresses scattered on the floor as extra beds. In the kitchen, I asked Matt O’Brien, a 30-year-old product manager who develops brain-training software for Lumosity, whether he minded the close quarters. He looked briefly puzzled, then explained that he already lives with 20 housemates in a shared house in San Francisco. Looking around the chaotic kitchen, he shrugged and said, ‘‘It’s not really all that different.’’

Those constraints produced a peculiar homogeneity. Nearly all the participants were in their early- to mid-20s, with quirky bios of the Bay Area variety. (‘‘Asher is a singing, freestyle rapping, former international Quidditch All-American turned software engineer.’’) Communication styles tended toward the formal. When I excused myself from one conversation, my interlocutor said, ‘‘I will allow you to disengage,’’ then gave a courtly bow. The only older attendee, a man in his 50s who described himself as polyamorous and ‘‘part Vulcan,’’ ghosted through the workshop, padding silently around the house in shorts and a polo shirt.

If the demographics of the workshop were alarmingly narrow, there was no disputing the group’s studiousness. Over the course of four days, I heard not a single scrap of chatter about anything unrelated to rationality. Nor, so far as I could discern, did anybody ever leave the house. Not for a quick trip to the Starbucks a mile down the road. Not for a walk in the sprawling park a half-mile away. One participant, Phoenix Eliot, had recently moved into a shared house where everyone was a ‘‘practicing rationalist’’ and reported that the experience had been positive. ‘‘We haven’t really had any interpersonal problems,’’ Eliot told me. ‘‘Whereas if this were a regular house, with people who just like each other, I think there would have been a lot more issues.’’

When I first spoke to Galef, she told me that, while the group tends to attract analytical thinkers, a purely logical approach to problem-solving is not the goal. ‘‘A lot of people think that rationality means acting like Spock and ignoring things like intuition and emotion,’’ she said. ‘‘But we’ve found that that approach doesn’t actually work.’’ Instead, she said, the aim was to bring the emotional, instinctive parts of the brain (dubbed ‘‘System One’’ by Kahneman) into harmony with the more intellectual, goal-setting parts of the brain (‘‘System Two’’).

At the orientation, Galef emphasized this point. System One wasn’t something to be overcome, she said, but a wise adviser, capable of sensing problems that our conscious minds hadn’t yet registered. It also played a key role in motivation. ‘‘The prefrontal cortex is like a monkey riding an elephant,’’ she told the group. ‘‘System One is the elephant. And you’re not going to steer an elephant by telling it where it should go.’’ The challenge, Galef said, was to recognize instances in which the two systems were at war, leading to a feeling of ‘‘stuckness’’: ‘‘Things like, ‘I want to go to the gym more, but I don’t go.’ Or, ‘I want my Ph.D., but I don’t want to work on it.’ ’’ She sketched a picture of a duck facing one way and its legs and feet resolutely pointed in the opposite direction. She called these problems ‘‘software bugs.’’

Afterward, I chatted with O’Brien and Mike Plotz, a circus juggler-turned-coder, about the program’s appeal. When I asked Plotz why he thought the workshops attracted so many programmers, he glanced at O’Brien. ‘‘I think most of us are fairly analytical,’’ he began. ‘‘We like to think about how complex systems work and how they can be optimized.’’ Because of this, Plotz added, he tends to notice patterns of behavior, in himself and in others. ‘‘When you realize that people are complex systems — that we operate in complicated ways, but also sort of follow rules — you start to think about how you might tweak some of those variables.’’

Deliberately or not, CFAR’s application process also filters out many of the less committed. There is an extensive, in-person interview, conducted by an instructor. Afterward, participants are required to fill out an elaborate self-report, in which they’re asked to assess their own personality traits and behaviors. (A friend or family member is given a similar questionnaire to confirm the accuracy of the applicant’s self-assessment.) ‘‘We get a fair number of people who say, ‘I want to come to the workshop because everybody I work with is really irrational and I want to fix them,’ ’’ Anna Salamon told me. ‘‘Which is not what we are looking for.’’

Despite this rigorous vetting, Salamon acknowledged that the center’s aims are ultimately proselytic. CFAR began as a spinoff of MIRI, which Yudkowsky created in 2000, in part to study the impending threat posed by artificially intelligent machines, which, he argued, could eventually destroy humanity. (Yudkowsky’s concern was that the machines could become sentient, hide this from their human operators and then decide to eliminate us.) Over the years, Yudkowsky found that people struggled to think clearly about A.I. risk and were often dismissive of it. In 2011, Salamon, who had been working at MIRI since 2008, volunteered to figure out how to overcome that problem.

When I spoke with Salamon, she said that ‘‘global catastrophic risks’’ like sentient A.I. were often difficult to assess. There wasn’t much data from which to extrapolate; this not only made the threats harder to evaluate but also discouraged researchers from digging into the question. (Studies have shown that people are more likely to avoid thinking about problems that feel depressing or vague and are also more likely to engage in mental ‘‘discounting’’ — assuming that the risk of something bad happening is lower than it actually is.) CFAR’s original mandate was to give researchers the mental tools to overcome their unconscious assumptions. Or as Salamon put it, ‘‘We were staring at the problem of staring at the problem.’’

Like many in the community, Salamon believes that the skills of rational thought, as taught by CFAR, are important to humanity’s long-term survival, in part because they can help us confront such seemingly remote catastrophic risks, as well as more familiar ones, like poverty and climate change. ‘‘One thing that primates tend to do is to make up stories for why something we believe must be true,’’ Salamon told me. ‘‘It’s very rare that we genuinely evaluate the evidence for our beliefs.’’

It was a point of view that nearly everyone at the workshop fervently shared. As one participant told me: ‘‘Self-help is just the gateway. The real goal is: Save the world.’’

The next day’s classes began with ‘‘goal factoring,’’ taught by Michael Smith. Born in Washington State, Smith was home-schooled and raised by ‘‘immortalist’’ parents. (Immortalists believe that one of humanity’s most pressing needs is to figure out how to overcome death.) Smith, who goes by Valentine, described his father as a former ‘‘Ayn Randian objectivist’’ who believed in telepathy and named his son after the protagonist in Robert Heinlein’s science-fiction classic ‘‘Stranger in a Strange Land.’’ (In Heinlein’s book, Valentine Michael Smith is raised by Martians but returns to Earth to found a controversial cult.)

As a lecturer, Smith had a messianic quality, gazing intensely at students and moving with taut deliberation, as though perpetually engaged in a tai-chi workout. Goal factoring, Smith explained, is essentially a structured thought exercise: a way to analyze an aspiration (‘‘I want to be promoted to manager’’) by identifying the subgoals that drive it. While some of these may be obvious, others (‘‘I want to impress my ex-girlfriend’’) might be more embarrassing or less conscious. The purpose of the exercise, Smith said, was to develop a process for seeing your own motivations honestly and for spotting when they might be leading you astray. ‘‘These are blind spots,’’ Smith warned. ‘‘Blind spots that can poison your ability to keep track of what’s truly important to you.’’

To begin the factoring process, Smith asked each of us to choose a goal, list all the things we believed would come from accomplishing it and then brainstorm ways to achieve each thing. If you wanted a promotion to make more money, was there another way to get a higher salary — say, by asking for a raise or changing jobs? Finally, Smith said, we should imagine having achieved each of those subgoals. Were we satisfied? If not, that indicated the presence of a hidden motive, one that we had either overlooked or didn’t want to acknowledge.

Though the exercise didn’t strike me as especially penetrating — garden-variety introspection made punctilious — it was hugely popular. My group in the goal-factoring session included Ben Pace, a sweetly lumbering 18-year-old in a suit jacket and running shoes, who tended to balance his notepad on his knee like an old-timey newspaper reporter. Pace had flown over from Britain for the workshop, which he discovered at 15 by reading the LessWrong blog. He had applied to Oxford for the fall, and was hoping to attend. ‘‘I was feeling very worried about it,’’ he confided, ‘‘but then I goal-factored it and realized that I could get many of the same things I want from Oxford in other ways.’’

While Pace said that he had come to the workshop to practice the techniques of rationality, others had more pressing worries. During one break, I chatted with Andrew, a software developer specializing in mobile platforms who asked to be identified only by his first name to protect his privacy. Andrew acknowledged that he tended to struggle in social situations and suffered from depression and anxiety. ‘‘My brain has a lot of ridiculous social rules,’’ he told me. ‘‘I tend to be very closed off. And then there’s a switch where I’m almost completely open. It’s this binary transition.’’

Andrew said that he had initially been dubious of applied rationality, which he first heard about in a Reddit philosophy forum. Over time, though, he found that using the techniques made it easier to catch himself in the act of rationalizing a bad decision or avoiding an unpleasant task, like applying for a job. Initially, Andrew said, he assumed that he was simply afraid of rejection. But when he used aversion factoring — like goal factoring, but focused on what makes you avoid an unpleasant but important task — he made a surprising discovery. While visualizing how he would feel about applying for jobs if there were no chance of rejection, he realized that he still found the task aversive. In the end, he determined that his reluctance was rooted in a fear not of rejection but of making a bad career choice.

It was a significant insight, the kind more typically won through hours of talk therapy. And indeed, some participants reported that the techniques had genuinely changed their lives, either by helping them with mental-health issues like attention deficit or obsessive-compulsive disorder or simply by allowing them to recognize unquestioned assumptions. For a few — especially a set of high achievers for whom success hadn’t brought happiness — that process had been nearly tectonic. ‘‘For most of my life, I believed ‘If I do a good job, good things will happen,’ ’’ one person told me. ‘‘Now I ask, ‘If I do a good job, what does that mean?’ ’’

Others, though, seemed to see rationality less as a fundamental recalibration and more as a tool to be wielded. One participant, Michael Gao — who claimed that, before he turned 18, he made $10 million running a Bitcoin mine but then lost it all in the Mt. Gox collapse — seemed appalled when I suggested that the experience might have led him to value things besides accomplishment, like happiness and human connection. The problem, he clarified, was not that he had been too ambitious but that he hadn’t been ambitious enough. ‘‘I want to augment the race,’’ Gao told me earnestly, as we sat on the patio. ‘‘I want humanity to achieve great things. I want us to conquer death.’’

Given that I had already undergone a fair amount of talk therapy myself, I didn’t expect the workshop to bring me much in the way of new insights. But then, at one point, Smith cited the example of a man with a potentially cancerous mole who refuses to go see the doctor. It was part, he said, of ‘‘a broader class of mental errors’’ we’re all prone to: the belief that avoiding bad news will keep it from becoming true. While this didn’t strike me as particularly revelatory at the time, it turned out to be a stealthy insight. For an exercise the next day, I listed all the reasons I was avoiding talking with a financial planner, something I had intended to do for months. Many of them were pedestrian. Getting my financial records together would be tedious, and I was also mildly embarrassed by my income, which is on the low side. Working through the problem, though, I realized that the actual reason was humiliatingly simple: I was afraid of hearing that I needed to spend less and save more. Like mole man, I was afraid of what I might learn.

But are such realizations alone enough to create change? Fears can be stubborn and not particularly easy to argue with. When I mentioned this to Smith, he shrugged. ‘‘Hiding from the painful states of the world doesn’t prevent them from happening,’’ he said. Then, like a strict parent telling a sniffling child to shape up, he added: ‘‘The point isn’t just ‘How do I get myself to go to the doctor this time?’ It’s ‘How do I make it so that I will never be susceptible to that type of thinking error again?’ ’’

CFAR draws on the insights of behavioral economics and a growing interest in how they might be marshaled to make us happier, healthier and more fiscally responsible. For years, economists were stumped by certain consumer behaviors that seemed irrational and self-defeating, like failing to sign up for a 401(k) or carelessly going deep into credit-card debt. Daniel Kahneman and Amos Tversky’s prospect theory explained these quirks as a product of a seemingly inbuilt set of misperceptions, known collectively as cognitive bias.

Among other things, they found that people are typically both risk-averse and loss-averse: more likely to choose a guaranteed payout of $1,000 than to gamble on winning $1,400 when there’s a 20 percent chance they could end up with nothing. They also discovered that people tend to underestimate the chance of a low-probability event occurring, thus inadvertently exposing themselves to terrible risks. (The 2011 tsunami, for example, caught the Japanese off guard and devastated parts of northeastern Japan.)
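The arithmetic behind that gamble is worth spelling out: the risky option is actually worth more on average than the sure thing, which is what makes choosing the guaranteed payout a marker of risk aversion. A minimal sketch of the expected-value calculation (an illustration for this article, not part of CFAR’s curriculum):

```python
# The choice described above: a guaranteed $1,000, or a gamble paying $1,400
# with an 80 percent chance of winning (20 percent chance of nothing).
guaranteed = 1000
gamble_payout = 1400

# Work in whole dollars to sidestep floating-point noise: 80% of $1,400.
expected_value = gamble_payout * 80 // 100

print(expected_value)                # 1120
print(expected_value > guaranteed)   # True: on average, the gamble pays more
```

A strictly expected-value-maximizing chooser would take the gamble; Kahneman and Tversky’s finding was that most people do not.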

In the past few decades, psychologists have identified dozens of cognitive biases, including ‘‘gambler’s fallacy’’ (believing that a coin toss is more likely to come up heads if the previous five flips were tails); ‘‘anchoring’’ (the tendency to rely heavily on one piece of information — usually the first thing we learn — when making a decision); the ‘‘Ikea effect’’ (disproportionately valuing things that you’ve labored over); and ‘‘unit bias’’ (assuming that a ‘‘portion’’ is the right size, which accounts for our tendency to finish off an opened bag of cookies).

More surprising was the degree to which these biases turned out to drive our behavior, in ways both quotidian (what we choose to buy) and dire (the mortgage collapse that led to the 2008 financial crisis). Since then, a welter of strategies has emerged for exploiting these same mechanisms to spur better long-term choices; some of these are already influencing public policy and public health. Governments have begun encouraging companies, for example, to make enrollment in an I.R.A. the default choice, rather than requiring people to opt in, or asking supermarkets not to put racks of candy right near the registers. Last year, President Obama established a Social and Behavioral Sciences Team at the White House; based on its findings, he recently ordered federal agencies to use behavioral-economics strategies to improve participation in their programs.

What makes CFAR novel is its effort to use those same principles to fix personal problems: to break frustrating habits, recognize self-defeating cycles and relentlessly interrogate our own wishful inclinations and avoidant instincts. Galef described ‘‘propagating urges’’ — a mental exercise designed to make long-term goals feel more viscerally rewarding — as an extension of operant conditioning, in which an experimenter who hopes to increase a certain behavior in an animal will reward incremental steps toward that behavior. Goal factoring and aversion factoring, she added, came out of behavioral economics, as well as research on a cognitive bias known as ‘‘introspection illusion’’: thinking we understand our motives or fears when we actually don’t. (That illusion is why the factoring process begins with listing every reason you’re either avoiding something or pursuing a goal, and then uses a second round of thought experiments to ferret out hidden factors.)

Figuring out how to translate behavioral-economics insights into a curriculum involved years of trial and error. Salamon recruited Galef, a former science journalist, in 2011, and later hired Smith, then a graduate student in math education at San Diego State. (Smith first met Yudkowsky at a conference dedicated to cryonics, in which a deceased person’s body is stored in a supercooled vat, to be resuscitated in a more advanced future.) In early 2012, the group began offering free classes to test its approach and quickly learned that almost none of it worked. Participants complained that the lectures were abstract and confusing and that some points seemed obvious while others simply felt wrong. A session on Bayes’s Theorem was especially unpopular, Salamon recalled, adding, ‘‘People visibly suffered through it.’’

The group also discovered a deeper problem: No one was very motivated to make his or her thinking more accurate. What people did want, Salamon recalled, was help with their personal problems. Some were constantly late to things. Others felt trapped by their own unproductive habits. Nearly everyone wanted help managing their email, eating better and improving their relationships. ‘‘Relatively early on,’’ Salamon said, ‘‘we realized that we had to disguise the epistemic rationality content as productivity advice or relationship advice.’’

In the end, the group built a curriculum largely from existing research into human behavior, but the goal of Applied Rationality remained the same: to provide tools, not advice. ‘‘Unlike a lot of self-help programs, we don’t advocate particular things that people should do,’’ Galef told me. ‘‘We just encourage them to look at the models that are driving their choices and try to examine those models rationally.’’ She shrugged. ‘‘People are already making predictions, whether or not they’re aware of it. They’re already saying ‘I’ll be miserable if I leave this relationship’ or ‘I won’t be able to make any difference in this big global problem because I’m just one person.’ So a lot of what we do is just trying to make people more aware of those predictions and to question whether they’re actually accurate.’’

At the San Leandro workshop, that approach seemed to have paid off. Participants sat raptly through the lectures, despite the intense pace: 80-minute sessions, held back to back, for nine hours, with additional sessions after dinner. Galef later said that this immersive structure was deliberate — a way to ‘‘accelerate the absorption of unfamiliar concepts’’ — but I found it overwhelming. There were sessions on developing an ‘‘inner simulator’’ to help visualize the possible outcome of a decision; one on ‘‘focused grit,’’ in which participants had to brainstorm solutions to seemingly intractable personal problems within a five-minute time limit; another on ‘‘trigger-action planning,’’ which used associative cues, or TAPs, to spur the development of productive habits, like ‘‘The minute I walk through my front door, I will change into my gym clothes.’’

The TAPs session was led by Salamon, a thin, muppety woman with a corona of brown hair. As a graduate student, Salamon studied the philosophy of science, and her lectures often seemed to take the wry view of humans as only marginally more evolved chimps. Involuntary TAPs already drive much of our behavior, she said — ‘‘Like: See open bag of Cheetos. Put in hand’’ — but could also be made intentional and productive.

My partner for the TAPs exercise, a soft-spoken engineer who works at Google, told me that he had tried TAPs before, but with limited success. ‘‘For a while, I was trying to drink more water, so I set up a TAP to drink a glass of water the minute I got to work,’’ he said. ‘‘It worked for a few weeks, but then I stopped. I started just wanting to get to work.’’ Now, he said, he was considering changing his TAP, to cue himself to drink water when he wanted a break. ‘‘Maybe it’ll help me stop reading Reddit,’’ he added.

If TAPs felt slightly gimmicky, like rat-maze training for adults, other techniques seemed more profound. My favorite was propagating urges, the one that focused on motivating yourself to reach long-term goals. What makes things like weight loss or learning to play the violin difficult, Salamon said, is that they often conflict with System One-driven urges (wanting a nap, craving a cookie). And because long-term goals typically require sticking it out through a series of unpleasant intermediate steps (eating less, practicing the violin), it can be easy to lose the original motivation. ‘‘Things we feel rewarded by, we do automatically,’’ Salamon added. ‘‘When I want Thai food, I’ll drive there, look at the menu, go inside, order. I don’t have to convince myself to take those steps. But in other cases, the connection is lost.’’

The solution, she said, is finding a way to make long-term goals feel more like short-term urges, especially because our brains are wired to associate actions and rewards that follow closely in time. (To discourage bad habits, conversely, you should stretch out the time between an action and its reward. ‘‘If you want to stop reading stuff online instead of working, have the pages load more slowly,’’ Salamon advised.) Because of this powerful association, small but immediate negative experiences can have disproportionate impact: the aversive moment of getting into a cold swimming pool can overwhelm the delayed rewards of doing morning laps. To override that resistance, she said, you need to associate the activity with a powerful feeling of reward, one with a stronger neurochemical kick than the virtuous goals (‘‘being healthier’’) that we normally aspire to. The next step is to come up with a mental image that vividly captures that feeling and that you can summon in moments of weakness. ‘‘It has to be a very sticky image,’’ Salamon said. ‘‘If it isn’t, you won’t experience that gut-level surge of motivation.’’ She told a story about how Smith overcame his aversion to doing push-ups, which made him feel unpleasantly hot and sweaty, by tapping into his obsession with longevity: now he pictured the heat from the exercise as a fire that burned away cell-damaging free radicals.

What made propagating urges so compelling, at least for me, was that it cut to the heart of a fundamental internal struggle: the clash between the shortsighted impulses that drive our daily behavior (checking email until it becomes ‘‘too late’’ to go to the gym) and the long-term aspirations that might make us genuinely happier if we could only persuade the petulant toddler in our minds to get on board.

In a practice session, I paired up with Brian Raszap, a programmer at Amazon with a gentle smile and empathic manner. The aim of the exercise was to troubleshoot a long-term goal that we had each been struggling with and then create a new, sticky image to use as motivation. Raszap went first. He explained that he and some co-workers go to a Brazilian jujitsu class during lunch, usually once or twice a week. ‘‘When I go, I love it,’’ Raszap told me. ‘‘I feel so good. But half the time, I don’t go.’’

We talked through the problem for a while, then I asked Raszap to describe the feeling that he got from the class. He brightened. After working out, he told me, he is very relaxed, filled with a deeply pleasurable lassitude. When I asked if he could tap into that for motivation, Raszap nodded. ‘‘Maybe that would work,’’ he said. ‘‘I usually think about wanting to get better at jujitsu. But maybe instead, I can think about feeling really good this afternoon.’’

Many of CFAR’s techniques resemble a kind of self-directed version of psychotherapy’s holy trinity: learning to notice behaviors and assumptions that we’re often barely conscious of; feeling around to understand the roots of those behaviors; and then using those insights to create change. But there was something unsettling about how CFAR focused on superficial fixes while overlooking potentially deeper issues. While talking with Raszap, I began by asking why, if he truly wanted to go, he often skipped the jujitsu class. Raszap listed practical obstacles: Sometimes he doesn’t want the interruption; sometimes he just has a lot to do. But he also said that even the idea of attending the class more regularly makes him feel anxious. ‘‘It’s a feeling of not doing enough,’’ Raszap told me. Perversely, the workout only heightened his fear of failing, of missing the next class. This was coupled with a claustrophobic sense of obligation, what Raszap called ‘‘a fear of foreverness’’ — ‘‘Like, if I go today, I’ll have to keep going forever.’’

When I told Raszap that these last anxieties sounded like the sort of thing that might benefit more from psychotherapy than from behavior-modification techniques, he agreed. ‘‘I do have a good therapist, and we do talk about this,’’ he told me. ‘‘But it’s a different approach. Therapy is more about grand life narratives. Applied rationality is more practical, like, ‘What if you went to jujitsu in the evening, rather than at lunch?’ ’’

Yet applied rationality doesn’t typically acknowledge this gap. Proponents of rationality tend to talk about the brain as a kind of second-rate computer, jammed full of old legacy software but possible to reprogram if you can master the code. The reality, though, is almost certainly more complex. We often can’t see our biggest blind spots clearly or recognize their influence without outside help.

Several weeks after the workshop, I asked Salamon whether CFAR was intended to be a kind of D.I.Y. therapy, because that seemed to be how some participants were using it. She demurred, saying that the instructors have occasionally recommended counseling to participants who exhibit truly alarming behaviors and beliefs. But she considered therapy-grade problems to be relatively rare. ‘‘Ninety percent of the time, when people aren’t remembering to fill out their expense forms, there’s nothing deep there,’’ Salamon said. Even when a participant does have a deep-seated issue, she added, the techniques can still be effective. ‘‘You just have to give things a bit more space,’’ she said. ‘‘And not expect that they’ll yield to hacks.’’

Shortly before the CoZE exercise began on Saturday, I skipped the group dinner to hide in my room. After two days in Rationality House, I was feeling strung out, overwhelmed by the relentless interaction and confounded by the workshop’s obfuscatory jargon. ‘‘Garfield errors’’ were shorthand for taking the wrong steps to achieve a goal, based on a story about an aspiring comedian who practiced his craft by watching Garfield cartoons. ‘‘Hamming problems’’ signified particularly knotty or deep issues. (The name was a reference, Salamon explained, to the Bell Labs mathematician Richard Hamming, who was known for ambushing his peers by asking what the most important problem in their field was and why they weren’t working on it.)

And while some exercises seemed useful, other parts of the workshop — the lack of privacy or downtime, the groupthink, the subtle insistence that behaving otherwise was both irrational and an affront to ‘‘science’’ — felt creepy, even cultish. In the days before the workshop, I repeatedly asked whether I could sleep at home, because I lived just a 15-minute drive away. Galef was emphatic that I should not. ‘‘People really get much more out of the workshop when they stay on-site,’’ she wrote. ‘‘This is a strong trend ... and the size of the effect is quite marked.’’

As it turns out, I wasn’t the only one to find the workshop disorienting. One afternoon, I sat on the front steps with Richard Hua, a programmer at Microsoft who was also new to CFAR. Since the workshop began, Hua told me, he had sensed ‘‘a lot of interesting manipulation going on.’’

‘‘There’s something about being in there that feels hypnotic to me,’’ he added. ‘‘I wouldn’t say it’s a social pressure, exactly, but you kind of feel obliged to think like the people around you.’’ Another woman, who recently left her software job in Portland, Ore., to volunteer with CFAR, said her commitment to rationality had already led to difficulties with her family and friends. (When she mentioned this, Smith proposed that she make new friends — ones from the rationalist community.)

But there was also the fact that the vibe was just a little strange, what with the underlying interest in polyamory and cryonics, along with the widespread concern that the apocalypse, in the form of a civilization-destroying artificial intelligence, was imminent. When I asked why a group of rationalists would disproportionately share such views, people tended to cite the mind-expanding powers of rational thought. ‘‘This community is much more open to actually evaluating weird ideas,’’ Andrew told me. ‘‘They’re willing to put in the effort to explore the question, rather than saying: ‘Oh, this is outside my window. Bye.’ ’’ But the real reason, many acknowledged, was CFAR’s connection to Yudkowsky. Compulsive and rather grandiose, Yudkowsky is known for proclaiming the imminence of the A.I. apocalypse (‘‘I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the earth and sun are reshaped into computing elements’’) and his own role as savior (‘‘I think my efforts could spell the difference between life and death for most of humanity’’).

When I asked Galef and Smith whether they worried that the group’s association with Yudkowsky might be off-putting, they seemed genuinely mystified. Galef said the group designed its own curriculum, without consulting Yudkowsky, and also worked hard to remain ‘‘value neutral,’’ emphasizing the techniques of rational thought rather than focusing on MIRI. Smith was more direct. Yudkowsky, he said, is ‘‘entangled in our origins.’’ Then he shrugged. Newton was a jerk, he pointed out, ‘‘but that doesn’t affect physics.’’

As the workshop drew to a close, the fear of falling back into old mental habits seemed to haunt participants. ‘‘I think that if I actually did these things, my life would be measurably better,’’ Hua told me. ‘‘But I can already predict that I’m going to slack off after the workshop ends. There’s a very big mental load around tackling these problems.’’

To keep people on track, CFAR holds online practice sessions for 10 weeks after a workshop and also assigns ‘‘accountability buddies’’ to encourage participation. The center is debating whether to develop an online version of its workshops that anyone can access. At the same time, it is also considering whether it would be ‘‘higher impact’’ to focus on teaching rationality to a small group of influential people, like policy makers, scientists and tech titans. ‘‘When I think about the things that have caused human society to advance, many of them seem to stem from new and better ways of thinking,’’ Galef added. ‘‘And while the self-help function of the workshops is great, I wouldn’t be devoting my life to this if that was all that I thought we were doing.’’

I hadn’t planned to practice the techniques myself, but in the weeks after the workshop ended, I found myself using them often. I began to notice when I was avoiding work — ‘‘finishing’’ a section of the newspaper (unit bias!) or doing other unproductive foot-dragging — and then rationalizing the lost time as mental ‘‘preparation.’’ I also found myself experimenting more and noting the results: working in a library rather than a coffee shop (more effective); signing up and paying for spin classes in advance (ditto); going to a museum on the weekend rather than doing something outdoors (so-so). Against all odds, the workshop had cracked open a mental window: Instead of merely muddling through, I began to consider how my habits might be changed. And while it was hard to tell whether this shift was because of the techniques themselves or simply because I had spent four days focusing intensely on those habits, the effect was the same. Instead of feeling stuck in familiar ruts, I felt productive, open and willing to try new things. I even felt a bit happier.

When I emailed some of the other participants, most reported a similar experience. Mike Plotz, the juggler turned coder, told me that he had recently done ‘‘a flurry of goal-factoring.’’ Among other things, he wanted to understand why he spent so much time checking Facebook every morning before work. Plotz said that he knew the Facebook habit wasn’t helping him and that he often ended up running late and feeling harried. After goal-factoring the problem, Plotz said, he realized that what he really wanted was autonomy: the feeling of being able to choose what he did each morning. Now, he said, rather than passively resisting work through Facebook, he gets up an hour earlier and does whatever he wants. ‘‘This morning I got up, made coffee and listened to ‘Moby-Dick,’ ’’ Plotz said when we spoke. ‘‘So I’d say that, so far, it’s going well.’’

I asked Plotz if he could tell whether the changes he made were due to the applied-rationality techniques or simply the product of a more active, problem-solving mind-set. ‘‘In some ways, I think the techniques are that: a way to kick you into a more productive state of mind,’’ he told me. But he also noted that they supplied a framework, a strategy for working through the questions that such a mind-set might raise. ‘‘It’s one thing to notice your thoughts and behaviors,’’ Plotz said. ‘‘Turning that into a technique that actually lets you accomplish stuff? That’s hard.’’

Has anyone followed Less Wrong? Read Yudkowsky's writings? It's quite interesting to see how his school of human rationality is being examined by the mainstream media.

Statistics: Posted by Battlehymn Republic — 2016-01-15 02:44pm