2014-07-19

You can hear this as an MP3.

[It's important to understand just how much the theory has evolved in the last 10 years. Much more perhaps than in its first eight.]

Doug Kaye: Hello, and welcome to IT Conversations, a series of interviews, recordings, and transcripts on the hot topics of information technology. I am your host, Doug Kaye, and in today's program, I am pleased to bring you this special presentation from the Open Source Business Conference held in San Francisco on March 16 and 17, 2004.

Mike Dutton: My name is Mike Dutton, and it is my pleasure to introduce to you today Clayton Christensen. Professor Christensen hardly needs an introduction. His first bestseller, "The Innovator's Dilemma," has sold over half a million copies and has added the term "disruptive innovation" to our corporate lexicon. His sequel (and you have to have a sequel to be a management guru) is entitled "The Innovator's Solution" and is currently on BusinessWeek's bestseller list. Professor Christensen began his career at the Boston Consulting Group and served as a White House Fellow in the Reagan administration. In 1984, he cofounded and served as chairman of Ceramics Process Systems Corporation. Then, as he was approaching his 40th birthday, he took the logical step of quitting his job and going back to school, where he earned a doctorate in Business Administration from Harvard Business School. So, today he is a professor of Business Administration at Harvard Business School, where he teaches and researches technology commercialization and innovation. Professor Christensen is also a practicing entrepreneur. In 2000 he founded Innosight, a consulting firm focused on helping firms set their innovation strategies. And according to a recent article in Newsweek, "Innosight's phones ring off the hook, and the firm cannot handle all the demand," very similar to all the startups in open source here today. So, please join me in welcoming Clayton Christensen.

Clayton Christensen: Thank you, Mike! I'm 6 feet 8, so if it's okay, I'll just… the mic picks up okay. I'm sure delighted to be with you, especially because there is a blizzard in Boston today; my kids have to shovel the snow!

As Mike mentioned, I came into academia late in life, and the first chunk of research that I was engaged in was trying to understand what it is that could kill a successful, well-run company. And those of you who are familiar with it probably know that the odd conclusion I drew from that was that it is actually good management that kills these companies. And subsequent to the publishing of the book that summarized that work, "The Innovator's Dilemma," I've been trying to understand the flip side of that, which is: if I want to start a new business that has the potential to kill a successful, well-run competitor, how would I do it? And that's what we tried to summarize in the book "The Innovator's Solution." It's really quite a different book than the "Dilemma" was, because the "Dilemma" built a theory of what it is that caused these companies to fail. And for the writing of the "Solution," I'll just give you an analogy for where we came out on how to successfully start new growth businesses.

I remember when I first got out of business school and had my first job. I was taught the methods of total quality management as they existed in the 1970s, and we had this tool that was called a "statistical process control chart." (Do they still teach that around here?) Basically you made a piece, you measured the critical performance parameter, and you plotted it on this chart. There was a target parameter that you were always trying to make the piece hit, but you had this pesky scatter around that target. And I remember being taught at the time that the reason for the scatter is that there is just intrinsic variability and unpredictability in manufacturing processes.
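
[A minimal sketch of the control chart he is describing: a Shewhart-style chart that flags pieces falling outside limits set three standard deviations around the center. The measurements and limits below are invented for illustration.]

```python
# Shewhart-style statistical process control: estimate control limits from a
# baseline run, then flag any new piece that falls outside +/- 3 sigma.
import statistics

def control_limits(baseline):
    """Return (center, lower, upper) limits estimated from baseline pieces."""
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return center, center - 3 * sigma, center + 3 * sigma

baseline = [10.02, 9.98, 10.05, 9.97, 10.01, 10.00, 9.99, 10.03]  # in-control run
center, lower, upper = control_limits(baseline)

for piece in [10.01, 10.31, 9.98]:  # newly made pieces
    verdict = "ok" if lower <= piece <= upper else "OUT OF CONTROL: find the cause"
    print(f"{piece:.2f} (target {center:.2f}): {verdict}")
```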

So, the methods that were taught about manufacturing quality control in the '70s were all oriented to helping you figure out how to deal with that randomness. And then the quality movement came of age, and what they taught us is, "No, there's not randomness in manufacturing processes." Every time you got a result that was bad, it actually had a cause; it just appeared to be random because you didn't know what caused it. And so the quality movement gave us tools to understand all the different variables that can affect the consistency of output in a manufacturing operation. And once we could understand what those variables were and then develop methods to control them, manufacturing became not a random process, but something that was highly predictable and controllable.

Well, I think that today the creation of new growth businesses is where the quality movement was 30 years ago, in that there's just a widespread belief that it's random and unpredictable. So, for example, every time venture capitalists invest in a new company, they invest in the belief that it will be successful, but the odds are that only two out of ten really become successful, and the whole industry is structured to help them deal with the alleged variability in creating new growth businesses. And for established companies that launch new products: every time they invest in a new product, they think it will be successful, but actually 75% fail. And so a lot of the methods that are taught for how to manage innovation are really structured around how to deal with the alleged unpredictability. We think that it is not intrinsically unpredictable, and that if we can actually understand the variables that affect the success of the new businesses we're trying to start, then we can succeed with a much higher probability than has historically been the case.

And so what I want to talk about is, "What are these variables that we have to control?" And in kind of an unabashed way, I'm going to structure this around theories of strategy and management. The word "theory" gets a bum rap amongst some managers because it's associated with the word "theoretical," which connotes "impractical." But a theory is actually a very practical thing, because it's a statement of causality, a statement of what causes what and why.

And so, gravity is a theory; it allows you to predict that if you jumped out of a window on the top floor of this hotel, you're going to fall, and you don't have to collect experimental evidence on that question. And what this means is that every time a manager takes an action, it's actually predicated upon a theory in her head that, "If I do this, I'm going to get the result that I need." And every time you put a business plan into place, you're actually employing theories in your mind that if you do these things, you will be successful. It's just that quite often you don't know what the theories are that you're employing, and you aren't aware of whether they're good or bad. So, I just want to try to discuss with you some of the theories that we've tried to draw upon to put together this work, especially as they relate to how you would create a successful open source software business, and how the companies that might be disruptable or threatened by open source software can create new ways of growth that will keep them healthy.

So, I'm going to… move to the next slide please. You've got to get a lot of things right in building a new growth business, but ten questions that you need to get right are: "How do we beat the competitors?" "How do we know what customers we ought to focus on with our first product?" "When we are focusing on a set of customers, how do we know whether they're going to want to buy the product that we have in mind?" "How do we distribute to them, and how do we build a brand that communicates what we want to communicate to them?" "Of all of the things that have to be in place for the customers to benefit from the product, which should we do ourselves, and what can we rely upon partners and suppliers to provide?" "How do we keep our product from getting commoditized?" "Who should we hire to run this new business, and what kind of person, if we hired them to run it, would kill the business?" "How do we structure the organization, and if it's within an established company, where is the right home for the new growth business to live?" "How do we know when we've got the right strategy, and how do we know when the strategy that has been working will not work in the future?" And finally, "Whose money should we take to fund the business, and whose money, if we took it, would kill the business?" That last one turns out to be quite an important question.

And so I want to just walk through the models that we propose you can use to think your way through these questions as far as we can go in the time that we have. And then we’ve got some time at the end for you to call all of this into question or send barbs or criticisms, or ask other questions as you see fit.

So, let's start at the top one. It turned out that this one was really quite readily answerable by the model of disruption that we summarized in "The Innovator's Dilemma," and for those of you who aren't familiar with it, I'd like to just walk through that as quickly as we can. There are three parts to this model.

The first one is represented by that line, and what it suggests is that in every market, there is a trajectory of improvement that customers are able to utilize over time. And a good way to visualize that is in the car industry. Every year the car companies give us new and improved engines, and yet we can’t utilize all the improvement that they give us because you’ve got nuisances like police that put a crimp on how much of the engine we can use.

Now to keep the diagram simple, I'll just depict that ability to utilize improvement as a single line representing the median customers in a market. But remember that there's really a distribution of customers in every market: at the high end, really demanding applications that are never satisfied with the best they can find, and at the low end, pretty unsophisticated folks who are overserved by very little. So that's the first piece of the model: there's an ability to utilize improvement.

And then the second one is that in each market there’s a trajectory of improvement that the innovating companies provide as they introduce new and improved products. And the most important finding on this is that this trajectory of technological progress almost always outstrips the ability of customers to use that improvement. And so it means that a company whose products aren’t good enough to be used by customers in the mainstream of a market at one point can improve its products at such a rapid rate that it overshoots what they’re able to use at a later point in time. Now they may keep buying that product out here; they just can’t utilize all the improvement that’s made available within it.
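
[To make the two-trajectory picture concrete, here is a toy version with invented numbers: customers able to absorb 10% more performance a year, the technology improving 25% a year, and an entrant starting at 40% of what the mainstream needs.]

```python
# Toy model of the disruption diagram: a technology improving faster than
# customers can absorb eventually intersects the mainstream's needs.
needed, supplied = 100.0, 40.0        # entrant starts well below the mainstream
demand_growth, tech_growth = 1.10, 1.25

year = 0
while supplied < needed:
    year += 1
    needed *= demand_growth
    supplied *= tech_growth
print(f"The 'not good enough' product intersects the mainstream in year {year}.")
```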

And a good way to visualize this one is to go back to the early years of the personal computer industry, when we were first learning how to do word processing. Do you remember how often you had to stop your fingers and let the Intel 286 chip inside catch up to you, because it wasn't good enough even for a simple application like word processing? But Intel has introduced faster and faster chips that it can sell for more attractive profits to demanding customers in higher tiers of the market, and now that they're at a 3-gigahertz Pentium 4 processor up here, they've way overshot the speed that mainstream business users are able to use.

Now at the same time, there are still some freaky people at the high end that need even faster chips, but they've overshot what the mainstream can use. Now, some of the innovations that allow a company to move up this performance trajectory are simple, incremental, year-to-year engineering improvements, and others are these sorts of dramatic breakthrough technologies. In telecommunications, for example, the changes from analog to digital and from digital to optical were very complicated technological tours de force. But they had the same effect on the industry as the simple ones, and that is, they sustained this trajectory of performance improvement.

And what we found in that study, as you may know, is that it actually doesn't matter technologically how difficult or radical the innovation is: almost always the incumbents win these battles of sustaining innovation. Again, it just doesn't matter technologically how hard it is. It just seems like, if it helps them make a better product that they can sell for more attractive margins to their best customers, they figure out a way to get it done.

But then there was this other kind, which we call the disruptive technology, that comes into the market every once in a while, and open source clearly is one of these. And we called it "disruptive" not because it was a dramatic breakthrough improvement, but because instead of sustaining the trajectory of improvement, it disrupted and redefined it and brought to the market a product that was crummier than those that historically had been available. In fact, it performed so poorly that it couldn't be used by customers in the mainstream. But it brought to the market a simpler and more affordable product that allowed a whole new population of people to begin owning and using it, and then, because the trajectory is so steep, what takes root in a simple application can then intersect with the mainstream. And so that's the basic model of disruption.

One of the companies that has tried to use this a lot in the last few years in managing its strategy and new product development has been Intel because, as I mentioned, they've gone from a point of having a product that wasn't good enough to now overshooting much of the market. I got invited to go out to a meeting there because, through the late 1990s, competitors were coming in at the low end of the processor market with chips for entry-level computer systems.

Much cheaper processors were made by Cyrix and AMD, and they were just killing Intel at the low end; in fact, Intel's market share in entry-level systems dropped from 90% to 30% in 18 months. And it felt great, actually, to get driven out of the low end, because as they were losing the volume in the most price-sensitive tiers of the market, they were replacing the volume at the high end with much more attractive margins, and so overall their reported gross margins were improving, and Wall Street just loves gross margins! And so it felt good until they saw this dumb model, and then it helped them see, "My gosh! If we lose the low end today, we may lose the mainstream tomorrow!"
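
[The arithmetic behind "it felt great to get driven out of the low end" is worth making explicit. The tier numbers below are invented, not Intel's actuals, but they show how losing low-margin volume while adding high-margin volume raises the reported gross margin percentage.]

```python
# Hypothetical margin-mix arithmetic: ceding the low end to cheaper rivals
# while growing the high end improves the blended gross margin.
def gross_margin(tiers):
    revenue = sum(units * price for units, price, cost in tiers)
    profit = sum(units * (price - cost) for units, price, cost in tiers)
    return profit / revenue

# Each tier: (units, price, cost)
before = [(100, 100, 60), (50, 500, 250)]   # low end 40% margin, high end 50%
after  = [(30, 100, 60), (70, 500, 250)]    # low-end volume lost, high end grown

print(f"gross margin before: {gross_margin(before):.1%}")   # ~47%
print(f"gross margin after:  {gross_margin(after):.1%}")    # ~49%
```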

So I got to come out and have this meeting with their executive staff and their chairman, Andy Grove, who was in the audience. And I was going through this, and he had a real puzzled look on his face, and then, like a teacher's dream, the light bulb turned on, and he raised his hand, and he said, "I see what's wrong with your idea." He went up and he crossed out the word "technologies" there, and he said, "Clay, if you frame this as a technological problem, you're going to mislead the world."

And unfortunately, "The Innovator's Dilemma" had just been published, and I couldn't take it out. But he said, "If I get the idea right, I would characterize it as just straightforward technology that disrupts the business model of the leaders, and that's what makes it so hard." And he then went on to give his view of the puzzle that had been in my mind, the one that kind of triggered the whole line of research, which was: living in the Boston area, how could Digital Equipment get killed?

I remember watching Digital Equipment grow up through the '70s and '80s. It was probably the most widely admired of all the companies in the world economy. And when you read articles in the business press about why it was so successful, it was always attributed to the brilliance of the management team. Then, about 1988, they just fell off a cliff and began to unravel very quickly, and when you then read articles in the business press about why they had stumbled so badly, it was always attributed to the ineptitude of the management team. They were the same folks running the company.

And so for a while, I scratched my head and wondered, "How could good managers get that dumb that fast?" That bad-management hypothesis really is the most ready explanation that we can offer for the failure of most companies. But the reason it didn't quite fit in this case is that every minicomputer company in the world collapsed in unison. It was not just Digital; it was Data General, Prime, Wang, Nixdorf, Hewlett-Packard. You might expect them to collude on pricing, but to collude to collapse was a bit of a stretch, so there had to be something more fundamental going on.

And that's what really precipitated this question. And so Andy then went on to give his view (let's go to the next slide) of what happened to Digital Equipment. And he said, "In the first place, if we were able to line up in this room the sequence of minicomputers that Digital introduced to its markets, they didn't skip a beat." If you peeled the covers off and looked at the technologies that were required to make a good minicomputer better: anything that helped them make a better product that they could sell for higher margins to their best customers, they got it done.

But he reminded me, "Do you remember how crummy those early personal computers were?" They were toys. In fact, Apple sold the Apple II primarily as a toy to children. It wasn't good enough to be used by customers in the mainstream market as it existed in the late '70s and early '80s, and that meant that no matter how carefully Digital listened to its customers and tried to reflect their unmet needs in the properties of its next-generation product, it got no signal that the personal computer mattered because, in fact, its customers couldn't use it. And so the personal computer took root as a toy, and then, because this trajectory improves at such a rapid rate, within a few years it intersected with mainstream needs, and it wasn't just Digital; it was the whole population of minicomputer makers that got blown out of the water.

And he said, to his earlier point, "This wasn't a technology problem. Digital's engineers could have designed a PC with their eyes shut." But they had a business model: minicomputers were quite expensive and complicated, and in order to sell them, they had to be sold direct to the customer, and the selling process involved a lot of training and support and service. You just had to have costs like that in the business in order to play in that game.

Given that kind of a business model, Digital had to make about 45% gross profit margins, and a typical computer sold for about $250,000. Now, in that environment, as in most companies' environments, people are walking in to senior management all of the time with proposals to make better products. Some of the proposals that management was entertaining entailed making a better computer than Digital had ever made before. If you looked at those proposals, they typically promised gross margins of 60%, and the machines could easily be sold for half a million dollars.

But at the same time that management was trying to decide whether to invest in those things, other people were walking in with proposals to invest in personal computers, because it was quite obvious by the early '80s that this was going to be a big market. But if you looked at those business plans, in the very best of years they promised gross margins of 40%, headed down to 20%, and these machines could only be sold for $2,000.
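
[Using the numbers from the talk, the asymmetry Digital's managers faced is stark when you compute gross profit per machine.]

```python
# Gross profit per unit for the three proposals described in the talk.
proposals = {
    "current minicomputer": (250_000, 0.45),
    "better minicomputer":  (500_000, 0.60),
    "personal computer":    (2_000,   0.40),   # and margins headed toward 20%
}
for name, (price, margin) in proposals.items():
    print(f"{name:22s} ${price * margin:>10,.0f} gross profit per unit")
```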

And so Andy said, "So really, the decision the management had to make was: should we invest our money to make better products that our best customers could use, that would improve our profit margins, or should we invest our money to make worse products that none of our customers could use, that would ruin our profit margins? What should we do?" And I'm pretty quick, and so what I said was, "Andy, since you're in this situation, my advice is to bail out and become a professor." But then I realized that the very same thing is happening to the Harvard and Stanford business schools. We have become very good and very expensive, and we're getting disrupted by crummy, low-end, on-the-job learning experiences like the one you're having today! A little bit later I want to talk about that, because it really is quite frightening for us!

But anyway, this is kind of my answer for the first of these questions that you need to get right, which is, “How do you beat the competitors?” And the answer is, that if you come into an existing market with a better product, the odds are that the competitors will get you because you’ve taken a piece of real estate up there that is financially attractive for them to pursue. If you come into the market with a disruptive product, the odds are that the entrant will win because you set up a situation where they’re motivated to flee rather than fight. And if you pick a fight like that where they don’t want to fight you because it’s in their interest to move in some other direction, it’s a great way for a little company to beat a big company. And so that’s the answer to the first question of why disruption is a great tool to beat the competition.

Now, I want to skip down. If you look at the stodgy companies today and really crawl back inside their history, most of them started out as disruptive innovators. Japan is an interesting case. I had a student who went back to Japan and became a senior official in their Ministry of International Trade and Industry a few years ago, and he, poor guy, got sentenced to having to write the plan for the resurrection of Japan's economy.

And he worked on this thing for about two years and then called up and said, "I don't think there's any hope for Japan." He had been looking at it from a macroeconomic policy perspective, and he came over and we talked about it for a couple of days. And then what hit us is that every one of the industries that constituted a fundamental engine of Japan's economic miracle in the '60s, '70s, and '80s did this.

And so, those of you who have gray hair may remember that Toyota came into our market in the '60s with a crummy, rusty subcompact model called the Corona that no self-respecting non-college student would think of owning, and now they make Lexuses. And Sony came in with crummy transistor radios, and now they're the best consumer electronics maker in the world. Their steel industry came in with the lowest quality steel in the world, and now they're the highest quality steel companies. Canon did it in photocopiers, and Seiko did it in watches, and over and over.

And just like it happens in our economy, now those companies have become huge global giants, making the highest quality products, serving the most demanding tiers, and there is no growth up there. You see the same pattern in the early computer industry, when the product's functionality wasn't good enough. If a company like IBM had tried to somehow cobble together the computer from outsourced subsystems or modules that fit together according to some industry standard, they couldn't have done it, because the establishment of interface standards takes so many degrees of freedom away from the design engineers that they would have had to back away from the frontier of what's possible. And when the product isn't good enough, competitively you can't back off the frontier. And so that meant that in order to play in that game, you had to do everything in order to do anything. There is a huge advantage to being integrated and having a proprietary architecture in this era when the functionality isn't good enough.

And so in the early years of that industry, IBM just dominated its world. And in the similar period of the automobile industry, General Motors and Ford just dominated their world. And the question comes, “What happens once the functionality and reliability become more than good enough for what customers in the less demanding tiers of the market can use? What do you do to get traction with these kinds of customers if you want to build a new business serving them?”

And the answer is that what's not good enough now changes, and what begins to matter to these customers is, "I can't get exactly what I need, and I can't get it fast enough." And so improvements in speed to market, and in the ability to responsively give every customer exactly what they need, constitute a new trajectory of innovation along which improvements are rewarded with attractive prices and increases in market share.

And in order to compete in this way, to be fast and flexible and responsive, the architecture of the product has to evolve toward a modular architecture, because modularity enables you to upgrade one piece of the system without having to redesign everything, and you can mix and match and plug and play best-of-breed components to give every customer exactly what they need. And because there are clean interface standards here, when that happens, the industry disintegrates. (This is just the chart that I put together with Andy Grove a few years ago to illustrate the rough concept.)

So here are the rough stages of value added in the computer industry, and during the first two decades, it was essentially dominated by vertically integrated companies, because they had to be integrated given the way you had to compete at the time. We could actually insert right in here "Apple Computer." (Let me go back to the prior slide.) Do you remember, in the early years of the PC industry, Apple with its proprietary architecture? Those Macs were so much better than the IBMs. They were so much more convenient to use, they rarely crashed, and the IBMs were kludgy machines that crashed a lot because, in a sense, that open architecture was prematurely modular.

But then, as the functionality got more than good enough, there was scope to back off the frontier of what was technologically possible, and the PC industry flipped to a modular architecture. And Apple, the vendor of the proprietary system, probably continues to make the neatest computers in the world, but it became a niche player, because as the industry disintegrates like this, it's kind of like you ran the whole industry through a baloney slicer, and it became dominated by a horizontally stratified population of independent companies that could work together at arm's length, interfacing by industry standards.

One of the things that to us was most interesting is that where the money is made flips on both sides of this equation. We wrote an article about this that we published in the Harvard Business Review called "Skate to Where the Money Will Be," in honor of the ice hockey star Wayne Gretzky. Somebody asked him, "How come you're so good?" and he said, "Well, I never skate to where the puck is; I always skate to where the puck is going to be."

And the notion here is that if you create a new business that tries to position itself at the point in a value chain where really attractive money is being made, by the time you get there it probably will have gone, and it goes in a very predictable way; that's what I want to try to get at here. Over on this side of the world, the money tends to be made by the company that designs the architecture, the system, that solves what is not good enough. Because it's functionality and reliability that are not good enough, the company that makes these proprietary, optimized systems tends to sit at the place where most of the profit in the industry is made. The performance of that kind of product isn't dictated by the individual components of which it is comprised; it is determined at the level of the architecture of the system, and that is where the money is made.

So in the early years of computing, IBM had a 70% market share; they made 95% of the industry's profit. In the similar era in automobiles, General Motors had a 55% market share; they made 80% of the industry's profit. And if you were a supplier to General Motors or IBM, you just lived a miserable, profit-free existence year after year, because the components did not solve the problem of what was not good enough; the system solved the problem.

But on this side, when the product becomes more than good enough and the architecture becomes modular, where the money is made flips to the inside of the product. And a good way to visualize this is to imagine that you were working as a computer designer for Compaq, and your boss said, "I want you to go design a better computer than Dell." How are you going to do it? Put in a faster microprocessor, more pixels on the screen, a higher-capacity disk drive? Anything you can do, the competitors can just copy instantly, because in a nonintegrated world you're outsourcing from a common supplier base, and when the architecture of the system is modular and it fits together according to industry standards, better products are not created through clever architectural design; the performance of the product is driven by what's inside.

And so the ability to make money migrates from the system to the subsystems that define the performance and allow these guys to keep moving up-market. And so that's the answer for why, in the computer world, IBM made all of the money in the design and assembly of computers, and it was not in the components. And so when they got into the personal computer business, they thought that the same formula would hold, and so they outsourced the components and stayed in the design and assembly of the computer, and they did just what Wayne Gretzky said not to do. They skated to where the money used to be and outsourced where the money would be.

You can see the very same thing happening in the automobile industry today. Automobiles have become more than good enough for what all but the most demanding customers are able to use. Our family is a great example. We just sold our Toyota Corolla after about 180,000 miles of loyal, problem-free service. Just a beautiful car! But my kids hadn't been willing to ride in it for about the last four years, because it went out of style about five years before it wore out.

And so, do I need Toyota to give me an even more reliable car next year? I can’t absorb more reliability. And so this very same thing is happening in the automobile industry. Over here, when the architectures were like that, it took six years to design a new car; now it takes two years. You can walk into a Toyota dealership today and custom order a car assembled exactly to your spec, and it will be delivered in five days, about as fast as Dell can deliver a computer assembled to your spec.

And the way they're becoming fast and responsively flexible is that the architectures of automobiles have evolved from proprietary and interdependent architectures to modular architectures. Over here, they sourced components from hundreds of suppliers, no one of which made a difference. On this side, they source components from a few suppliers that they call "tier-one suppliers."

On the left-hand side, for example, Dana Corporation supplied axles. On the right-hand side, Dana Corporation supplies a complete rolling chassis with all of the suspension system and everything. And the smoothness of the ride, the feel of the ride, isn't dictated by Ford anymore; it's dictated by Dana, because that problem is solved in the rolling chassis. Similarly, Johnson Controls on the left-hand side supplied seats; on the right-hand side they supply the entire interior cockpit subsystem, and Delco supplies the electrical system, and Bosch the braking system, and so on.

And true to form, the industry has had to disintegrate. And so the integrated giants that dominated over here: General Motors packaged up all of its components operations and sold them off as a company called Delphi Automotive, and Ford packaged up its components operations and sold them off as Visteon. But you can see that the car companies did exactly what IBM did when it put Microsoft and Intel into business: they sold off the pieces of value added, the subsystems, where in the future the money would be made, in order to stay at the level of value added, the assembly of a car, where in the past the money was made.

This also highlights, for me, the process of what you would call "commoditization." What commoditization means is that a company's product gets better and better and better, and you reach a point where these folks aren't benefited by an even better product, and so their willingness to pay a premium price for an improved product diminishes to the point that you can't get pricing to stick for an improvement.

That's one dimension of selling a commodity: you just can't get a premium price for a better product. The other dimension of commoditization is that your ability to differentiate your product disappears. Here, the product is highly differentiable; here, it's not at all differentiable. And so the process of a company's products becoming commoditized is just a very natural result of the interaction of technological progress with customers' ability to utilize that progress.

Even a brand can become commoditized. Most companies think, "Well, if our product is really not differentiable, at least we can take refuge in having a brand." If you think about it in these terms, a brand has value when you're marketing upward to customers who are not yet satisfied with the best they can find, because the brand serves to close, as much as possible, the emotional gap. But once the product is manifestly more than adequate and you're marketing down to overserved customers, the brand really does not create value, and the brand itself can become commoditized.

Now, I want to try to walk through how you can use this way of thinking, in a concept that's called the "law of conservation of modularity." And I want to illustrate it with a view of what I think is going to happen on the hardware side, in particular in the semiconductor industry, and then try to use that to think about what open source software could mean.

And the core concept of the law of conservation of modularity is this: just visualize that you are writing a software application to run on Windows. You might go to Redmond and knock on the door and say, "Would you please just let me into Windows? If I could just change these 25 lines of code, the application would run so much better!" But they don't dare open the door, do they? Because Windows has an interdependent architecture, and if you change a couple of lines, who knows what else would get screwed up!

And so the application has to be suboptimized and conform itself to Windows so that Windows can be optimized. And the reason is, according to this model (I'll go back a slide), that historically, in order to fuel Dell's move up-market so that it could keep competing against Sun Microsystems at the margin, the fuel that allows Dell to move up is the microprocessor inside and the operating system inside.

That's what constrains its up-market progress. And so the microprocessor and the operating system have a proprietary and interdependent architecture even while Dell's product has a modular architecture. And so, back to the software analogy: the application has to be suboptimized so that Windows can be optimized. But if you're writing an application to run on Linux, because Linux has a modular architecture, you don't even have to knock on the door. You just walk in and change what needs to be changed, as long as you don't screw up the interfaces, and the modularity and conformability of Linux allows the application to be optimized.

And so one side or the other needs to be modular and conformable to allow what's not good enough to be optimized. If you think about it in a hardware context: because historically the microprocessor had not been good enough, its architecture inside was proprietary and optimized, and that meant that the computer's architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little handheld device like the RIM BlackBerry, it's the device itself that's not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry; instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn't need. So again, one side or the other needs to be modular and conformable to optimize what's not good enough.
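
[A toy rendering of the conservation-of-modularity point in code; nothing here is from the talk, and all the names are invented. The layer that is not good enough gets optimized, while the adjacent layer conforms to a stable interface to make that optimization possible.]

```python
# Toy illustration: the layer above optimizes freely because the layer below
# is modular and conformable, i.e., it honors a stable published interface.
from abc import ABC, abstractmethod

class StorageLayer(ABC):
    """The modular, conformable side: a stable interface anyone can implement."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, value: bytes) -> None: ...

class InMemoryStorage(StorageLayer):
    """One interchangeable implementation; swap it without touching the app."""
    def __init__(self) -> None:
        self.data: dict[str, bytes] = {}
    def read(self, key: str) -> bytes:
        return self.data.get(key, b"")
    def write(self, key: str, value: bytes) -> None:
        self.data[key] = value

class TunedApplication:
    """The 'not good enough' side: free to optimize aggressively (here, a
    cache), because it depends only on the interface, not on internals."""
    def __init__(self, storage: StorageLayer) -> None:
        self.storage = storage
        self.cache: dict[str, bytes] = {}
    def get(self, key: str) -> bytes:
        if key not in self.cache:
            self.cache[key] = self.storage.read(key)
        return self.cache[key]

app = TunedApplication(InMemoryStorage())
app.storage.write("greeting", b"hello")
print(app.get("greeting"))
```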

Now, there was a guy at Bell Labs a few years ago who published an article about Moore's Law, and what he showed is that in the pursuit of Moore's Law (the vertical axis here is the complexity of the circuit, which may roughly equate to the speed of the circuit), every year the fabs and Applied Materials make 60% more transistors available on an area of silicon than were available the prior year.

But if you look at the ability of circuit designers to utilize transistors year on year, they're only able to utilize 20% more transistors than they could the year before for any given level of circuit complexity, and the reason is that they just have design budgets; they don't have enough money or time to design circuits that are complex enough to utilize all the transistors that Moore's Law makes available.
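
[Compounding the two rates he just quoted shows how quickly the surplus opens up.]

```python
# Transistors available grow ~60%/year; designers' ability to use them grows
# ~20%/year. The gap between the two compounds relentlessly.
available, usable = 1.0, 1.0
for year in range(1, 11):
    available *= 1.60
    usable *= 1.20
print(f"After 10 years: {available:.0f}x the transistors, {usable:.0f}x the "
      f"ability to use them: a {available / usable:.0f}x surplus.")
```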

What that means is that, for most of the volume applications in the world, circuit designers are actually awash in transistors, even while at the very high end they still need even finer line widths and demand that Moore's Law take the next step to the next node of technology. They've overshot what most circuit designers are able to utilize. And so what this would then predict is that, whereas on this side circuits had to be proprietary and interdependent in their architecture, over here the way you compete to win the business of those people is going to change: you're going to need to be very fast and flexible and responsive, and be able to deliver systems-on-chips that offer every application exactly the functionality it needs and none of the functionality it doesn't need.

And so, here is how the law of conservation of modularity will play itself out. This is kind of my sense of how the industry was structured in the past. The microprocessor wasn't good enough, and that meant that the desktop computer had to have a modular architecture, conforming itself to allow the processor to be optimized.

And because the line widths on the circuit were not good enough, each piece of the equipment made by companies like Applied Materials and Tokyo Electron was optimized. It had its own proprietary architecture, and in the sequence of steps that a wafer has to go through, there was no attempt, nor could you make an attempt, to synchronize the flow of material across those machines. Each piece of equipment had to be optimized for itself. That meant that the fabs had to be laid out in a modular way, bay by bay, and the sequential steps in the process had to be buffered, or modularized, by having gobs of work-in-process inventory in those fabs. And that made the fabs very slow, but they actually had to optimize the equipment rather than vice versa, because the line widths weren't good enough. And the components that comprised Applied Materials' equipment did not matter at all. So the money was made here and the money was made here, and these guys, these guys, and these guys lived a miserable, profit-free existence.

Now, in the future, in handheld devices (I'm just talking about this little piece of the world, but I think it applies to almost any situation where logic gets embedded in a system), I'll talk about a handheld device like the RIM BlackBerry. It's the device itself that is not yet good enough, and therefore you cannot back off the frontier of what's technologically possible. It has to be optimized with a proprietary, interdependent architecture. That means that the processor inside of a BlackBerry has to be modular and conformable to allow the device to be optimized.

Now, think about this world, where these chips are customized for every customer's application, and the design cycle out at the customer's end is measured in months rather than years. The fabs up here would often take three months to work an order through all of that inventory in the fab. And for a fab to take three months to deliver an order, in a world down here of really fast-cycle, custom-designed products, is just intolerable.
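
[The three-month figure is a consequence of Little's law, the standard queueing identity (average cycle time equals work-in-process divided by throughput); the wafer counts below are invented for illustration.]

```python
# Little's law: average cycle time = WIP / throughput. A fab stuffed with
# work-in-process inventory is necessarily slow; a low-WIP flow is fast.
def cycle_time_days(wip_wafers: float, wafers_per_day: float) -> float:
    return wip_wafers / wafers_per_day

print(f"batch-and-queue fab: {cycle_time_days(13_000, 150):.0f} days")  # ~3 months
print(f"low-WIP flow fab:    {cycle_time_days(600, 150):.0f} days")
```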

And so the fabs are going to need to figure out how to deliver products really fast, and that means that over the next few years, rather than being laid out in a bay structure, a fab is going to need to reconfigure itself around single-wafer process flows so that it can process silicon the way Toyota makes cars, with very little inventory in the process, and that's what makes them really fast.

And nobody has figured out how to do that yet, but the pressure from the market will mean that it’s the fab that is not good enough on this critical performance dimension, which is, get every customer exactly the circuit they need as quickly as they can do it. That then means that the manufacturing equipment from companies like Applied Materials needs to be modular and conformable so that the fab can optimize the flow of product through itself.

And this is possible now because Moore's Law has overshot what most circuit designers can utilize, and because these products don't need a Pentium 4 processor, so you can back away from the frontier. And so what it means is that the places in the value chain where attractive profits can be earned are going to migrate from where they are today in a very predictable way.

So, I'm not a software engineer or designer, but here is what I think Linux does, or MySQL or Apache, whatever it is: because of its open source character, the architecture is modular. And what that means is... let me back off from this. What this really tells me is that the microprocessor, for example, is going through a process of commoditization as it overshoots and becomes modular and undifferentiable. But whenever a process of commoditization happens at one layer of value added, it initiates a reciprocal process of decommoditization at the next layer of value added.

So, whereas the device up here was a commodity, this is not a commodity. Whereas a fab was a commodity, this is a proprietary architecture that's not a commodity, and so on. So (and I'll come back to the software world) there's a fellow named Tim O'Reilly, whom some of you may know, who's done a lot of thinking about this. (Is that you? We just emailed. Stand up. He's a lot smarter than he looks, actually!) And there's another guy who runs a company in Santa Clara called Tensilica who has thought a lot about this law of conservation of modularity. Tensilica makes these modular integrated circuits.

So the operating system is going, because of Linux, from a proprietary architecture to a commoditized modular architecture. And what you'll see happen then is that the very modularity of the open source architecture allows it to conform itself to allow the application to be optimized. The operating system in many ways just folds itself into the application and disappears.

And so my sense is, if you look at how Red Hat lives, ostensibly they're an operating system vendor, but really the value that they create is at the next layer: the software that keeps the operating system from ever crashing and keeps it maintaining itself, and that's what is not good enough. And the conformability of Linux allows them to sell what you might call an application that is just extraordinarily optimized, and that's becoming a noncommodity.

And similarly, in the Oracle world, the database software is proprietary and optimized, and a lot of money was made there. But MySQL allows you to just fold the database into whatever the next layer of value added is, so that the application can be optimized. And Google runs on Linux, and the operating system disappears into the search engine.

And so I don't think you can say that open source is a movement in which nobody has figured out how to make money. It is just that where the money is made migrates to a different layer in the value-added chain and, in fact, open source facilitates the decommoditization of the next layer, because what's not good enough can now be optimized. And so that's my rough… Tim, do you want to clarify any of that gibberish? Or did I get your argument right?

Tim O'Reilly: I'm actually talking about this tomorrow, but I'm really struck by something else that you're saying here, and I'm going to disagree with you about Red Hat, because I think what Red Hat is much more analogous to is your fab. Apple with Mac OS X has done something that's much more analogous to folding the value into an application layer on top. But as for what Red Hat does, and what I think really all Linux vendors do: actually, somebody else in the audience, Ian Murdock, is really a leading thinker on this.

He's right over here. It's really that the critical competency of open source distributions is actually the act of assembly. I think that's a really interesting thing, and we're starting to see it: what Ian's new company does is focus on custom distributions and that ability to be responsive, to be faster. So I think there are a lot of different elements to the story. You did correctly characterize my argument that we're driving value up to things like Google on top of Linux.

There are many, many instances of that, but there are a lot of other pieces of the story. I think there's another one, too. I'm jumping into things that we're going to ask you in the questions, but with your whole fab analogy here, you're starting to have a lot of people playing with FPGAs, for example, where you're actually literally doing a lot of the processor work in a quick, responsive way, and then some.

Clayton Christensen: Okay! Thanks, Tim. I had a student write an article about cellphones and where the value migrates there because, in many ways, Nokia's and Motorola's phones have been over on the left-hand side with proprietary architectures: they do their own processors, they do their own optimized operating systems. But now those cellphones have so many features that the limitations on the system are not in the handsets themselves; they're elsewhere in the system.

And so we wrote a couple of things that forecast that the handsets are going to become modular, and because of that, where the money is made in that value chain is going to migrate to the back end. It'll become a disintegrated industry, and the way to make money would be for Motorola to sell its chipsets to a thousand Chinese assemblers and for Nokia to sell its operating system to a thousand Chinese assemblers, and all of those guys colliding against each other in commodities would then drive the pricing of those things down, and so on; that's the way the world would work.

And sure enough, Motorola subsequently announced that they were opening up their system and selling chipsets to anybody who wanted to buy them. And then, according to this student, Nokia announced that it would make its operating system available to anybody who wanted to buy, and so I was thinking, "Boy, these guys are brilliant, because they followed Clay's advice." But then Nokia announced that it was almost giving its operating system away, and I thought, "Those idiots! That's where the money is going to be made." But then my student said, "No, they're a lot smarter than you, Clay, because of the law of conservation of modularity.

By opening up their operating system and making it essentially free," he asserted, "what that then allows is for the operating system to become modular and conformable, so that Nokia can keep optimizing the hardware and keep the hardware part of the system proprietary." Had they let it become a truly modular world, Microsoft was sitting there with its own operating system, ready to move in and make all the money in the assembly of a modular handset. So maybe the strategy that Nokia followed is actually kind of clever: essentially wipe out the value created at the operating system layer in order to keep playing the game where they had an advantage over Microsoft.

(Okay, now I want to click ahead.) There's just one other set of concepts that I wanted to go over, and then we can just have some questions. This last one is, I think, question number two on our list: "How do I know who the right customers are to target with my new technology?"

And I think, from what little I know about open source, there are a lot of wrong customers that have been targeted, and that has caused a lot of expense and grief. Now, where this idea came from is actually one of our MBA classrooms. I had written an article about the disruption of the Harvard Business School, and what it asserts is that our MBAs have become extremely expensive.

I would never criticize Stanford in public, so I'll just talk about Harvard. Our graduates cost about $130,000 to hire. And if you look at who recruits on our campus, operating companies have a very hard time recruiting, because our graduates are so expensive that the companies can't fit them into their salary structures. So who recruits, increasingly, are venture capitalists, private equity investors, McKinsey, and Goldman Sachs.

Now, the operating companies aren't getting lower quality talent; they're just going into undergraduate programs and raking out the best engineers and others that they can find, putting them to work, and then two years later, at the time when many of them would leave to get an MBA, the companies are saying, "Nope, you don't need an MBA! We have GE Crotonville, or we have Motorola University or Intel University. IBM spends $500 million a year on management training. We'll train you right here!"

So anyway, I wrote a case about how on-the-job training is disrupting the Harvard Business School, and one of the students raised her hand and, in a very polite way, she said, "Well, excuse me, but I think you can only be disrupted if you have overshot what the market needs, and frankly I am not overserved by your teaching." So I was convinced that Harvard was getting disrupted, and yet it was very clear that she wasn't overserved. And so it helped me think through that there are actually two different kinds of disruption, and I want to just talk that through.

So, one type of disruption we'll plot on this chart, and what we showed before is that if a company's entry strategy is to bring a better product into an established market, the probability that it will build a successful growth business is zero, because of these asymmetries of motivation that exist. Now, incidentally, if a venture capitalist funds a venture that tries to do this, and it actually does come into an established market with a better product, and its strategy is to turn around quickly and sell out to the incumbent leader, they can actually make a nice piece of money, but it's not a strategy to create a new growth business.

Now, one type of disruption we called a "low-end disruption." It just takes root in the very same market where the incumbent leaders are, but it picks it off at the low end, and the entrants build a business model that can make money at the discount prices that are required to steal the business down here. So it doesn't create a new growth market, but it does create a new growth business. The examples I've used in my writings: the steel minimills did this. The discount department stores did that. They didn't create a new market; they just had a lower cost business model, and the incumbents were motivated to flee rather than fight.

But the other type of disruption, and this is what corporate education is, we called the "new-market disruption." It comes out in a new context, and so it's almost like you have a third plane of competition out here. By bringing a product that is so simple and inexpensive, a whole new population of people who historically couldn't own and use the product, because they didn't have the money or the skill, now can; and it creates a booming new market out in this new plane of competition and doesn't affect the business of the original players at all for a very long time.

The personal computer was one of these, right? I remember when I got out of grad school: when I had to compute, I had to take my punched cards to the corporate mainframe center, and the expert ran the job for me. Because it was so expensive and inconvenient, we didn't compute very much. But when the personal computer was introduced, it was so inexpensive and so idiot-simple that an idiot like me could begin to compute for himself in the convenience of his own office.

And at the beginning, out here in this new plane of competition, those early PCs could barely do word processing. But because I hadn't been able to do anything myself, I was delighted to have something that wasn't very good. And then, as the PC and the software associated with it got better and better out in this third plane of competition, ultimately it got good enough that it started to suck applications out of the back plane into the new plane, and little by little the customers left the established players. And so the effect of the disruption was the same, in that the established leaders got killed; it's just kind of a different animal: bringing something that is so much more affordable and simple that a whole new group of people can begin to do it for themselves.

I want to just illustrate this with a couple of examples from history, and then think through how open source software might be affected by this principle. Here is the historical example: The transistor was a disruptive innovation relative to the vacuum tube because, when it emerged in the late '40s and early '50s, it simply couldn't handle the power that was required for the markets that existed at the time, the big tabletop radios and floor-standing televisions and so on.

Every one of the vacuum tube companies took a license to the transistor, but they carried the license into their laboratories and framed it as a technological deficiency; in other words, the transistor isn't good enough yet to be used in the market. And if you could go back and get all of the expenses out of these companies, they probably spent, in aggregate, $2 billion in today's dollars trying to make solid-state electronics good enough that you could make big products out of them.

And while they were trying to do that, over here (now I'm going to collapse this back into two dimensions, but when you see green, I really mean that it's taking root out here in the third plane of competition), the first application was a germanium transistor hearing aid in 1952. A tiny little market, but it valued the transistor for the very attribute that made it useless in the mainstream, and that was low power consumption. And then in 1955, Sony introduced its first pocket radio. And those of you with gray hair remember how crummy those things were: static-laced, very low fidelity, wouldn't get a signal from much of a distance.

But Sony chose to sell the pocket radio to the rebar of humanity, people we call teenagers. And the teenagers were delighted to have a product that was not very good, because their alternative was no radio at all, and it allowed them to do something that they had wanted to do but never could, and that is listen to rock and roll out of earshot of their parents.

So a booming new market emerged in this third plane of competition, and these guys back here felt no pain, because they were all new customers. Had Sony tried to sell its pocket radio to the parents, a crummy product would have been judged to be crummy, because they had the alternative of a high-quality vacuum tube radio. Then in 1959, Sony introduced its first portable television, and again, they competed against nonconsumption. They made it so affordable and simple that a whole new population of households, who didn't have a big enough apartment for a floor-standing TV or didn't have enough money to buy one, now could own one, and because the alternative was no TV at all, they were delighted with the crummy product.

And again, a booming new market emerged in this third plane of competition, until the mid-1960s, when solid-state electronics became good enough to handle the power required to make these big devices. And bam! Within three years, all of the applications got sucked out into the solid-state world, and the vacuum tube companies were just dead, and these were venerable institutions like RCA. And the punishing thing is that it's not that they didn't see the technology coming. They saw it before Sony did.

It was not that they weren't aggressive and visionary. They invested far more money trying to make the technology good enough than Sony did as it was building these growth businesses. The punishing thing is that they targeted the wrong customers. They targeted their existing consumers, and the only way the customers here would have adopted the new technology is if it were better than the old technology and more cost-effective. That was a very demanding technical hurdle for the vacuum tube companies to surmount. In contrast, because Sony came out and competed against nonconsumption, it had a much more modest performance hurdle: it just had to make a product that was better than nothing, and the customers were delighted.

Now, where you see this happening today is voice recognition software. The next time you go to a computer superstore, go to the voice recognition software shelf and pick up the box there that's called IBM ViaVoice. Now don't buy it, but just look at it! They have a picture of the customer on the box, and it's an administrative assistant who is sitting in front of her computer wearing a headset, speaking rather than word processing.

Think about the value proposition that IBM has to be making to this woman. She types 90 words a minute. She is 99% accurate. If she needs to capitalize something, she just instinctively presses shift and cruises through. And IBM has to say, "No, don't do that anymore. I want you to put this headset on and teach yourself to speak in a slow and distinct and consistent manner in complete sentences. If you must capitalize, you must pause, speak the command 'capitalize,' pause, speak the word you want to capitalize, pause, speak the command 'uncapitalize,' pause. Please be patient; we are 70% accurate. This will get better, we promise."

This is not an attractive proposition to this customer. And IBM (I've not worked with them at all, but as I understand it) has spent maybe $700 million trying to make voice recognition technology good enough that it can be used in that market. This is a very difficult technical hurdle to surmount. Meanwhile, while they are investing that aggressively, Lego comes up with these robots that recognize "stop," "go," "left," "right," and the kids are thrilled with the four-word vocabulary. And then press-or-say-one kinds of applications take root, and now directory assistance asks you to say the city and state and so on; much simpler, and an interesting market is emerging.

I bet maybe the next place it takes root is in chatrooms, because the kids don't spellcheck or capitalize anyway, and they would rather speak than type. And maybe the next application after that: when you see these stubby-fingered executives with their BlackBerrys trying to peck out emails, and their fingers are four times the diameter of the keys, they're only 70% accurate. If somebody gave them a voice recognition algorithm that really didn't have to be very good, so that they could speak their wireless email rather than peck it out, I bet they'd be thrilled with the crummy product. And ultimately, as it takes root in these new applications, it may get good enough that we can do word processing with it, but it'll be a long time.

I've asked myself, "Why would the IBM engineers have picked the most demanding application conceivable for this technology?" And the answer probably resides in the resource allocation process of the company, because it's not just IBM; everybody does it. You've got to rule out stupidity, because they're at least as smart as we are. But in order to get funded, the people who had the idea knew that they just couldn't stand up in front of senior management without PowerPoint and say, "I'm sure there's going to be a lot of press-or-say-one kinds of stuff happening some time."

They'd never get funded that way. They have to do a PowerPoint presentation that has financial projections, and they have to be able to say, "We hired a consulting firm, and they did a market study, and there are 37.9 million administrative assistants who spend this many hours a day word processing, and this is how big the market is." And so the need to get funded forces the company to target the market that ultimately causes it to fail.

And the digital camera people like Kodak fell victim to the same process. These digital cameras are potentially disruptive, but in order to make a digital camera good enough that people would opt to take a digital rather than a film image, they have to cram it full of charge-coupled devices, and that drives the price point up so high that the only people who can afford to buy a digital camera are the people who own film cameras. And so the reward for success, once they got a good-enough digital camera, is that they don't sell film.

So a massive investment and no growth.
