2016-08-30

You thought it started with the Intel 4004, but the tale is more complicated

Transistors, the electronic amplifiers and switches found at the heart of everything from pocket radios to warehouse-size supercomputers, were invented in 1947. Early devices were of a type called bipolar transistors, which are still in use. By the 1960s, engineers had figured out how to combine multiple bipolar transistors into single integrated circuits. But because of the complex structure of these transistors, an integrated circuit could contain only a small number of them. So although a minicomputer built from bipolar integrated circuits was much smaller than earlier computers, it still required multiple boards with hundreds of chips.

In 1960, a new type of transistor was demonstrated: the metal-oxide-semiconductor (MOS) transistor. At first this technology wasn’t all that promising. These transistors were slower, less reliable, and more expensive than their bipolar counterparts. But by 1964, integrated circuits based on MOS transistors boasted higher densities and lower manufacturing costs than those of the bipolar competition. Integrated circuits continued to increase in complexity, as described by Moore’s Law, but now MOS technology took the lead.

By the end of the 1960s, a single MOS integrated circuit could contain 100 or more logic gates, each containing multiple transistors, making the technology particularly attractive for building computers. These chips with their many components were given the label LSI, for large-scale integration.

Engineers recognized that the increasing density of MOS transistors would eventually allow a complete computer processor to be put on a single chip. But because MOS transistors were slower than bipolar ones, a computer based on MOS chips made sense only when relatively low performance was required or when the apparatus had to be small and lightweight—such as for data terminals, calculators, or avionics. So those were the kinds of computing applications that ushered in the microprocessor revolution.

Most engineers today are under the impression that this revolution began in 1971 with Intel’s 4-bit 4004 and was immediately and logically followed by the company’s 8-bit 8008 chip. In fact, the story of the birth of the microprocessor is far richer and more surprising. In particular, some newly uncovered documents illuminate how a long-forgotten chip—Texas Instruments’ TMX 1795—beat the Intel 8008 to become the first 8-bit microprocessor, only to slip into obscurity.

What opened the door for the first microprocessors, then, was the application of MOS integrated circuits to computing. The first computer to be fashioned out of MOS-LSI chips was something called the D200, created in 1967 by Autonetics, a division of North American Aviation, located in Anaheim, Calif.

This compact, 24-bit general-purpose computer was designed for aviation and navigation. Its central processing unit was built from 24 MOS chips and benefited from a design technique called four-phase logic, which used four separate clock signals, each with a different on-off pattern, or phase, to drive changes in the states of the transistors, allowing the circuitry to be substantially simplified. Weighing only a few kilograms, the computer was used for guidance on the Poseidon submarine-launched ballistic missile and for fuel management on the B-1 bomber. It was even considered for the space shuttle.
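
The clocking scheme is easier to see than to describe. Below is a minimal Python sketch (my own illustration; the real Autonetics circuits were dynamic MOS hardware, not software) that generates four staggered clock signals, each with a different on-off pattern:

PHASES = 4

def clock(phase, t):
    # Illustration only: clock signal 'phase' (0 through 3) is high (1)
    # during a different quarter of each cycle, low (0) otherwise.
    return 1 if t % PHASES == phase else 0

# Print two full cycles of the four waveforms.
for phase in range(PHASES):
    wave = "".join("#" if clock(phase, t) else "." for t in range(8))
    print(f"phi{phase + 1}: {wave}")

Each logic stage fires only during its assigned quarter of the cycle, which is what allowed the circuitry to be so substantially simplified.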

The D200 was followed shortly by another avionics computer that contained three CPUs and used a total of 28 chips: the Central Air Data Computer, built by Garrett AiResearch (now part of Honeywell). The computer, a flight-control system designed for the F-14 fighter, used the MP944 MOS-LSI chipset, which Garrett AiResearch developed between 1968 and 1970. The 20-bit computer processed information from sensors and generated outputs for instrumentation and aircraft control.

The architecture of the F-14 computer was unusual. It had three functional units operating in parallel: one for multiplication, one for division, and one for special logic functions (which included clamping a value between upper and lower limits). Each functional unit was composed of several different kinds of MOS chips, such as a read-only memory (ROM) chip, which contained the data that determined how the unit would operate; a data-steering chip; various arithmetic chips; and a RAM chip for temporary storage.
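
To make that last operation concrete, here is a short Python sketch (my own example; the MP944 of course implemented this in MOS logic) of clamping a value between upper and lower limits:

def clamp(value, lo, hi):
    # Limit value to the range [lo, hi], as the F-14 computer's
    # special-logic unit did for certain control quantities.
    return max(lo, min(hi, value))

print(clamp(140, 0, 100))  # -> 100
print(clamp(-5, 0, 100))   # -> 0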

Because the F-14 computer was classified, few people ever knew about the MP944 chipset. But Autonetics widely publicized its D200, which then inspired an even more compact MOS-based computer: the System IV. That computer was the brainchild of Lee Boysel, who left Fairchild Semiconductor in 1968 to cofound Four-Phase Systems, naming his new company after Autonetics’ four-phase logic.

The CPU of the 24-bit System IV was constructed from as few as nine MOS chips: three arithmetic-logic-unit (ALU) chips of a design dubbed the AL1 (which performed arithmetic operations like adding and subtracting, along with logical operations like AND, OR, and NOT), three ROM chips, and three random-logic chips.

Everything’s Bigger in Texas

Although Texas Instruments’ TMX 1795 and Intel’s 8008 had a similar number of transistors, the former required a much larger silicon die. Indeed, the TMX 1795 was larger than the Intel 8008 and 4004 combined. Intel’s engineers believed that its large size made the TI chip impractical to produce in commercial quantities, but TI’s very successful TMS 0100 calculator chip, introduced at about the same time, had an even larger die. So the connection between die size and commercial viability must not have been straightforward.

Almost simultaneously, a Massachusetts-based startup called Viatron Computer Systems got into the game. Just a year after its launch in November 1967, the company announced its System 21, a 16-bit minicomputer with various accessories, all built from custom MOS chips.

We can thank someone at Viatron for coining the word “microprocessor.” The company first used it in an October 1968 announcement of a product it called the 2101. But this microprocessor wasn’t a chip. In Viatron’s lexicon, the word referred to part of a smart terminal, one that came complete with keyboard and tape drives and connected to a separate minicomputer. Viatron’s “microprocessor” controlled the terminal and consisted of 18 custom MOS chips on three separate boards.

Amid these goings-on at the end of the 1960s, the Japanese calculator maker Business Computer Corp. (better known as Busicom) contracted with Intel for custom chips for a multiple-chip calculator. The design was eventually simplified to a single-chip CPU, the now-famous Intel 4004, along with companion chips for storage and input/output (I/O). The 4-bit 4004 (meaning that it manipulated data words that were only 4 bits wide) is often considered the first microprocessor.

The calculator containing the 4004 first came together at the start of 1971. By this time, it had plenty of competition. A semiconductor company called Mostek had produced the first calculator-on-a-chip, the MK6010. And Pico Electronics and General Instrument also had their G250 calculator-on-a-chip working. Within six months, the Texas Instruments TMS 1802 calculator-on-a-chip was also operational; it was the first chip in TI’s hugely successful 0100 line. While these circuits worked fine as calculators, they couldn’t do anything else, whereas the 4004 operated by carrying out instructions stored in external ROM. Thus it could serve in a general-purpose computer.
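
That distinction is worth making concrete. Here is a minimal Python sketch (using a hypothetical instruction set of my own devising, not the 4004’s) of why fetching instructions from external ROM makes a chip general purpose: swap in different ROM contents, and the same hardware does a different job.

# Hypothetical mini instruction set, for illustration only.
ROM = [("load", 5), ("add", 3), ("add", 2), ("halt", 0)]  # a tiny "program"

acc, pc = 0, 0                 # accumulator and program counter
while True:
    op, arg = ROM[pc]          # fetch the next instruction from external ROM
    pc += 1
    if op == "load":           # decode and execute
        acc = arg
    elif op == "add":
        acc += arg
    else:                      # "halt"
        break

print(acc)                     # -> 10

Replace the contents of ROM and the same loop computes something entirely different. A calculator-on-a-chip, by contrast, had its behavior fixed when it was manufactured.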

This was a fast-moving time for the electronic-calculator business, and after running into financial difficulties, Busicom gave up its exclusive rights to the 4004 chip. In November 1971 Intel began marketing it and its associated support chips as a stand-alone product intended for general computing applications. Within a few months, the 4004 was eclipsed by more powerful microprocessors, however, so it found few commercial applications. They included a couple of pinball machines, a word processor, and a system for tallying votes.

In this sense, it was an electronic calculator that begot the first microprocessor, Intel’s 4-bit 4004. But the 8-bit microprocessors that quickly succeeded it had a very different genesis. That story starts in 1969 with the development of the Datapoint 2200 “programmable terminal,” by a company called Computer Terminal Corp. (CTC), based in San Antonio, Texas.

The Datapoint 2200 was really a general-purpose computer, not just a terminal. Its 8-bit processor was initially built out of about 100 bipolar chips. Its designers were looking for ways to have the processor consume less power and generate less heat. So in early 1970, CTC arranged for Intel to build a single MOS chip to replace the Datapoint processor board, although it’s unclear whether the idea of using a single chip came from Intel or CTC.

By June 1970, Intel had developed a functional specification for a chip based on the architecture of the Datapoint 2200 and then put the project on hold for six months. This is the design that would become the Intel 8008. So whether you consider the calculator-inspired 4004 or the terminal-inspired 8008 to be the first truly useful single-chip, general-purpose microprocessor, you’d have to credit its creation to Intel, right? Not really.

You see, in 1970, when Intel began working on the 8008, it was a startup with about 100 employees. After learning of Intel’s processor project, Texas Instruments, or TI—a behemoth of a company, with 45,000 employees—asked CTC whether it, too, could build a processor for the Datapoint 2200. CTC gave engineers at TI the computer’s specifications and told them to go ahead. When they returned with a three-chip design, CTC pointedly asked whether TI could build it on one chip, as Intel was doing. TI then started working on a single-chip CPU for CTC around April 1970. That design, completed the next year, was first called the TMX 1795 (X for “experimental”), a name that morphed into TMC 1795 when it was time for the chip to shed its prototype status.

In June 1971, TI launched a media campaign for the TMC 1795 describing how this “central processor on a chip” would make the new Datapoint 2200 “a powerful computer with features the original one couldn’t offer.” That didn’t happen, though: After testing the TMC 1795, CTC rejected it, opting to continue building its processor using a board of bipolar chips. Intel’s chip wouldn’t be ready until the end of that year.

Many historians of technology believe that the TMC 1795 died then and there. But newly surfaced documents from the late Gary Boone, the chip’s lead developer, show that after CTC’s rejection, TI tried to sell the chip (which after some minor improvements became known as the TMC 1795A) to various companies. Ford Motor Co. showed interest in using the chip as an engine controller in 1971, causing Boone to write, “I think we have walked into the mass market our ‘CPU-on-a-chip’ desperately needs.” Alas, these efforts failed, and TI ceased marketing the TMC 1795, focusing on its more profitable calculator chips instead. Nevertheless, if you want to assign credit for the first 8-bit microprocessor, you should give that honor to TI, never mind that it fumbled the opportunity.

By the time Intel had the 8008 working, at the end of 1971, CTC had lost interest in single-chip CPUs and gave up its exclusive rights to the design. Intel went on to commercialize the 8008, announcing it in April 1972 and ultimately producing hundreds of thousands of them. Two years later, the 8008 spawned Intel’s 8080 microprocessor, which heavily influenced the 8086, which in turn opened the floodgates for Intel’s current line of x86 chips. So if you’re sitting at a PC with an x86 processor right now, you’re using a computer based on a design that dates all the way back to Datapoint’s 2200 programmable terminal of 1969.

As this history makes clear, the evolution of the microprocessor followed anything but a straight line. Much was the result of chance and the outcome of various business decisions that might easily have gone otherwise. Consider how the 8-bit processor architecture that CTC designed for the Datapoint 2200 was implemented in four distinct ways. CTC did it twice using a board stuffed with bipolar chips, first in an arrangement that communicated data serially and later using a parallel design that was much faster. Both TI and Intel met CTC’s requirements with single chips having almost identical instruction sets, but the packaging, control signals, instruction timing, and internal circuitry of the two chips were entirely different.
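
The serial-versus-parallel difference is easy to picture in code. Here is a minimal Python sketch (my own illustration, not CTC’s actual circuitry) of bit-serial addition, which handles one bit of the operands per clock cycle:

def serial_add(a, b, width=8):
    # Add two 'width'-bit numbers one bit per cycle, least significant
    # bit first, propagating the carry: the way a bit-serial ALU works.
    carry, result = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
    return result

print(serial_add(100, 55))  # -> 155, after 8 "cycles"

An 8-bit add here takes eight passes around the loop, whereas a parallel adder produces all eight sum bits in a single step, which is roughly the advantage CTC’s second processor board enjoyed.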

Intel used more advanced technology than did TI, most notably self-aligned gates made of polysilicon, which made the transistors faster and improved yields. This approach also allowed the transistors to be packed more densely. As a result, the 4004 and 8008, even combined, were smaller than the TMC 1795. Indeed, Intel engineers considered the TI chip too big to be practical, but that really wasn’t the case: TI’s highly successful TMS 0100 calculator chip, introduced soon afterward, was even larger than the TMC 1795.

Given all this, whom should we credit with the invention of the microprocessor? One answer is that the microprocessor wasn’t really an invention but rather something that everyone knew would happen. It was just a matter of waiting for the technology and market to line up. I find this perspective the most compelling.

Another way to look at things is that “microprocessor” is basically a marketing term driven by the need of Intel, TI, and other chip companies to brand their new products. Boone, despite being the developer of the TMC 1795, later credited Intel for its commitment to turning the microprocessor into a viable product. In an undated letter, apparently part of a legal discussion over who should get credit for the microprocessor, he wrote: “The dominant theme in the development of the microprocessor is the corporate commitment made by Intel in the 1972–75 period…. Their innovations in design, software and marketing made possible this industry, or at least hurried it along.”

Honors for creating the first microprocessor also depend on how you define the word. Some define a microprocessor as a CPU on a chip. Others say all that’s required is an arithmetic logic unit on a chip. Still others would allow these functions to be packaged in a few chips, which would collectively make up the microprocessor.

In my view, the key features of a microprocessor are that it provides a CPU on a single chip (including ALU, control functions, and registers such as a program counter) and that it is programmable. But a microprocessor isn’t a complete computer: Additional chips are typically needed for memory, I/O, and other support functions.

Using such a definition, most people consider the Intel 4004 to be the first microprocessor because it contains all the components of the central processing unit on a single chip. Both Boone and Federico Faggin (of Intel’s 4004 team) agree that the 4004 beat the earliest TMX 1795 prototypes by a month or two. The TMX 1795 would then represent the first 8-bit microprocessor, and the Intel 8008 the first commercially successful 8-bit microprocessor.

But if you adopt a less-restrictive definition of “microprocessor,” many systems could be considered the first. Those who consider an ALU-on-a-chip to be a microprocessor credit Boysel for making the first one at Fairchild in 1968, shortly before he left to cofound Four-Phase Systems. The AL1 from Four-Phase Systems is also a candidate because it combined registers and ALU on a single chip, while having the control circuitry external. If you allow that a microprocessor can consist of multiple LSI chips, the Autonetics D200 would qualify as first.

Patents provide a different angle on the invention of the microprocessor. TI was quick to realize the profitability of patents. It obtained multiple patents on the TMX 1795 and TMS 0100 and made heavy use of these patents in litigation and licensing agreements.

Based on its patents, TI could be considered the inventor of both the microprocessor and the microcontroller, a single-chip packaging of CPU, memory, and various support functions. Or maybe not. That’s because Gilbert Hyatt obtained a patent for the single-chip processor in 1990, based on a 16-bit serial computer he built in 1969 from boards of bipolar chips. This led to claims that Hyatt was the inventor of the microprocessor, until TI defeated Hyatt’s patent in 1996 after a complex legal battle.

Another possible inventor to credit would be Boysel. In 1995, during a legal proceeding that Gordon Bell later mockingly called “TI versus Everybody,” Boysel countered TI’s single-chip processor patents by using a single AL1 ALU chip from 1969 to demonstrate a working computer to the court. His move effectively torpedoed TI’s case, although I don’t see his demo as particularly convincing, because he used some technical tricks to pull it off.

Regardless of what you consider the first microprocessor, you have to agree that there was no lack of contenders for this title. It’s a shame, really, that most people seek to recognize just one winner in the race and that many fascinating runners-up are now almost entirely forgotten. But for those of us with an interest in the earliest days of microcomputing, this rich history will live on.

About the Author

Ken Shirriff worked as a programmer for Google before retiring in June 2016. A computer history buff, he’s fascinated with the earliest CPU chips. At the time of publication of this article, he was helping to restore a 1973 Xerox Alto microcomputer, the computer that introduced the graphical user interface and the mouse. (For more on the restoration, see Shirriff’s blog, www.righto.com.)
