Transistors and Cold Fusion
(Originally published July-August 1999 in Infinite Energy Magazine, Issue #26)
by Jed Rothwell
No Permanent Winners, No Manifest Destiny
Part 1 (see Infinite Energy No. 25, pp. 32-34) closed with the questions:
Was the transistor truly inevitable? Where would we be without it?
Is any innovation inevitable and unstoppable? I conclude that fundamental
breakthroughs, like the transistor, are not inevitable, but once
they are made, contingent, derivative or follow-up breakthroughs
like integrated circuits become inevitable. The discovery of cold
fusion was not inevitable by any means, and cold fusion technology
may never be developed because of technical difficulties or political
opposition, but if it is developed and it becomes established, many
contingent breakthroughs, like home power generators, will become
inevitable.
The book Crystal Fire1 begins with
a quote from Bill Gates: "Without the invention of the transistor,
I'm quite sure that the PC would not exist as we know it today."
That is the conventional wisdom, and it is probably right. On the
other hand, ours may not be the best of all possible worlds, and
it is conceivable that something better than the transistor might
have been discovered, resulting in better PCs. Furthermore, the
transistor itself may not be as important as the integrated circuit,
in which transistors, resistors, capacitors, and other components
are fabricated together and miniaturized. Integrated circuits were
developed in 1958, but they might have come along sooner and they
might have been easier to manufacture if they had not been made
with semiconducting silicon. (As explained below, integration with
silicon looked like a bad idea at first.)
By the 1940s, people understood that something better
than the vacuum tube amplifier was needed. Vacuum tubes were fragile,
bulky, and they consumed thousands of times more power than was
necessary for the job. Many people thought that a solid state device
would be the best choice and different solid state devices were
developed. AT&T came up with the transistor; Univac developed a
solid state "magnetic amplifier," which worked better and faster
in some computer applications than the transistors available in
the early 1950s.2 A replacement for the vacuum tube was
inevitable, but it might have been . . . a better vacuum tube. Tubes
might have been shrunk down to microscopic dimensions and integrated
with other circuits on a chip made from some material cheaper and
easier to handle than silicon. Such vacuum tubes were developed
as high speed signal processors for special applications. Microscopic,
mass-produced vacuum tube technology is presently undergoing a renaissance
in low power, flat panel, high resolution field emission displays for televisions
and computer monitors. I asked a leading researcher in this field,
Charles Spindt3 of SRI, to speculate about this. He responded:
. . . sure, if solid state never came along,
Ken Shoulders' and Don Geppert's ideas on integrated micron-sized
"vacuum tubes"--Vacuum Microelectronics--from the late 1950s and
early 1960s could be what we would be using today, but it sure wouldn't
have been easy . . . and might not have happened. I expect that
even if there had been a twenty year head start for microvacuum
devices, solid state would have become a or the major
technology eventually.
Technology Is Forever in Flux
When AT&T announced the invention of the transistor,
its importance was recognized immediately, but many years and millions
of dollars of intense development were needed before the transistor
was established as a practical replacement for vacuum tubes in most
applications. Devices invented for one purpose often end up being
used for other purposes. Transistors were invented to replace amplifiers,
which of course they did, but they were not suited for use in computer
circuitry until the mid 1950s, when IBM and others decided they
were the wave of the future. AT&T announced the transistor in 1948
and distributed sample devices that same year. Many major U.S. corporations
launched large scale, intense research and development projects
immediately. IBM held back until 1955, when it began the "Stretch"
computer project. "It was . . . obvious that an enormous investment
would be required to develop the infant transistor technology."
The Stretch used 169,100 transistors. The solid state devices of
that time "were neither fast enough nor had they the current carrying
capabilities to drive the ferrite core memories." Engineers who
had been trained to work with vacuum tubes had difficulty adjusting
to the new technology. "For a time the laboratory expressly forbade
anyone to have a piece of vacuum tube equipment visible within his
work area." In 1956, eight years after the initial breakthrough,
IBM still had no transistor manufacturing facilities. It had to
buy initial lots of transistors from Texas Instruments.4
People have a false sense that our way of life is
permanent and our tools have been with us for a long time. We think
of computer random access memory (RAM) as the main use of transistors,
but until 1970 most RAM was made of ferrite core. Transistors remained
difficult to manufacture and expensive. In 1966, after eighteen
years of massive, worldwide research and development, semiconductor
computer RAM cost roughly $1 per bit, compared to $0.10 for
magnetic core or film, or $0.01 for slow ferrite core.5
Ferrite core memory is used in the Space Shuttle, which was designed
from 1972 to 1978. RAM was also made of thin films, plated wire,
and in early computers, rotating magnetic drums like today's hard
disks. Semiconductor RAM was too expensive for primary storage.
It was used for fast CPU scratch-pad memory.6 Magnetic
ferrite core dominated for fifteen years, semiconductors replaced
it for thirty years, and if recent developments pan out, magnetic
memory may soon make a comeback, replacing semiconductors again.
Magnetic RAM remains attractive because it is fast and it holds
data without consuming power even when the computer is turned off.7
New kinds of memory may replace semiconductors. Exotic, three-dimensional,
holographic RAM might be perfected. A holographic memory chip the
size of a sugar cube would hold terabytes (a million times more
than today's RAM) and it would operate thousands of times faster.8
In hand held computers and digital cameras, tiny fast hard disks
may soon replace semiconductor memory, a revival of the magnetic
drum. Since 1986, the cost and data density of hard disks have been
improving faster than semiconductors, confounding industry predictions.9
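To get a feel for these per-bit prices, the short sketch below works out what a memory system would have cost at the 1966 figures quoted above and in note 5. It is only an illustration: the 32 MB system size is the one used in note 5, while the 9-bits-per-byte convention (eight data bits plus a parity bit, a common practice of the period) and the use of decimal megabytes are assumptions chosen for the sketch, not figures taken from the cited sources.

    # Illustrative arithmetic only: cost of a memory system at the 1966
    # per-bit prices quoted in the text. The system size, the 9-bits-per-byte
    # convention (8 data bits plus parity), and decimal megabytes are
    # assumptions for illustration, not figures from the cited sources.

    PRICES_PER_BIT = {
        "semiconductor RAM": 1.00,      # roughly $1 per bit (Evans, 1966)
        "magnetic core or film": 0.10,  # roughly $0.10 per bit
        "slow ferrite core": 0.01,      # roughly $0.01 per bit
    }

    def memory_cost(megabytes, price_per_bit, bits_per_byte=9):
        """Dollar cost of `megabytes` of storage at `price_per_bit` dollars per bit."""
        bits = megabytes * 1_000_000 * bits_per_byte
        return bits * price_per_bit

    size_mb = 32  # a typical 1999 personal computer, as in note 5
    for kind, price in PRICES_PER_BIT.items():
        print(f"{size_mb} MB of {kind} at ${price:.2f}/bit: ${memory_cost(size_mb, price):,.0f}")

    # Note 5 quotes $0.50 per bit from other mid-1960s sources; under these
    # same assumptions that price works out to the $144 million cited there.
    print(f"{size_mb} MB at $0.50/bit: ${memory_cost(size_mb, 0.50):,.0f}")
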
Competing solutions to the same problem--like magnetic
versus semiconductor RAM--often race neck and neck for years. Sometimes
they end up converging, when someone finds a way to combine the
best features of both. A good example is the competition between
propellers and jet engines. The Wright brothers invented the pusher
propeller mounted on the back of the wings, which kept the air stirred
by the propellers from affecting flight performance. This was followed
by tractor propellers mounted on the front of the airframe or on
the wings. The propeller works best at speeds below 400 mph, and
it does not work at all above 500 mph, when the blade edges exceed
the speed of sound. The "pure" jet, or turbine engine, was invented
in the 1940s. It works well at or above the speed of sound, but
it is inefficient at slower speeds. The turboprop engine--a jet driving
a conventional propeller--came next. It has excellent flying qualities,
fuel economy, and reliability, but it cannot go above the 400 mph
propeller speed limit. Finally, the propeller was placed inside
the engine cowling to make the fanjet, which is the biggest and
most efficient engine yet. It is a hybrid, combining the advantages
of propellers and jets. General Electric occasionally advertises
a futuristic-looking jet engine with an unducted fan at the back
of an engine mounted on the rear of the airframe, bringing
us back to the Wright brothers' pusher propeller design.
. . . the commercial development of the turbine
passed through some paradoxical stages before arriving at the present
big jet era. Contrary to one standard illusion, modern technology
does not advance with breathtaking speed along a predictable linear
track. Progress goes hesitantly much of the time, sometimes encountering
long fallow periods and often doubling back unpredictably upon its
path.10
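As an aside, the 400-500 mph propeller limit mentioned above is simple arithmetic: a blade tip moves along a helix, so its speed through the air is the combination of its rotational speed and the aircraft's forward speed, and the tip goes supersonic long before the airplane does. The sketch below illustrates this; the propeller diameter, shaft RPM, and speed of sound it uses are assumed values chosen only for illustration, not figures from the text.

    import math

    # Illustrative sketch of the propeller speed limit: the helical speed of a
    # blade tip is the vector sum of its rotational speed and the aircraft's
    # forward speed. The diameter, RPM, and speed of sound below are assumed
    # values for illustration only.

    MPH_TO_MS = 0.44704         # miles per hour to meters per second
    SPEED_OF_SOUND_MS = 320.0   # approximate speed of sound at cruise altitude (assumed)

    def tip_speed_ms(diameter_m, rpm, forward_mph):
        """Helical blade-tip speed in m/s for a given diameter, shaft RPM, and forward speed."""
        rotational = math.pi * diameter_m * rpm / 60.0  # tip speed due to rotation alone
        forward = forward_mph * MPH_TO_MS               # aircraft's forward speed
        return math.hypot(rotational, forward)          # combine the two components

    diameter_m, rpm = 3.4, 1400   # assumed: a large piston-era propeller turning at 1,400 RPM
    for mph in (200, 300, 400, 500):
        tip = tip_speed_ms(diameter_m, rpm, mph)
        print(f"{mph} mph forward: tip speed {tip:.0f} m/s (Mach {tip / SPEED_OF_SOUND_MS:.2f})")

With these assumed numbers the tips are already transonic near 400 mph and pass the speed of sound around 500 mph, which is the limit described above.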
In the mid-1960s, people did not know that semiconductors
would soon become the dominant form of RAM, and they continued to
pour money into commercially successful but obsolescent alternatives
like magnetic cores, thin films, and plated wire. In 1966, RCA advertised
that it was developing superconducting cryoelectric computer memories
which "offer far greater potential bit-packing density in computer
memory elements than does any current state-of-the-art system .
. . and at a far lower cost per bit." The advertisement boasted
that a 3 cm square memory plane "might well contain as many as one
million bits!" This was ten times better than the best ferrite core
memory plane then available.11
Research and development is risky; companies lost
fortunes backing the wrong kind of transistor, or the right kind
at the wrong time. An engineer who worked on the 1955 Univac LARC
supercomputer wrote:
The development of a 4 µsec memory
was a great technical challenge (otherwise known as a headache).
The biggest problem without apparent solution was that there wasn't
a transistor available that was capable of driving heavy currents
that could switch fast enough. If only the memory could have been
designed two years later, the problem would have disappeared. The
problem was resolved but in a brute force, expensive manner. . .
all problems were solved, but at great expense and delay to the
program. LARC is an example of the price that must be paid for pushing
the state of the art before it wants to be pushed.12
When a technology is just beginning, people do not
have a sense of how the machine should look or what the best use
for it will be. In the early days of automobiles, airplanes, RAM
memory, and personal computers, inventors created a wonderful effusion
of picturesque and improbable designs. Transistors were first made
of germanium, then later silicon. Competition was hot from the start.
Soon after Bell Labs invented grown junction devices, General Electric
announced the alloy junction, which was easier to manufacture. RCA
licensed GE's design and soon began mass production. In a recent
interview, Jack Kilby recalled:
The semiconductor technology, in general,
changed very rapidly in the 1950s. We went through, I think, six
completely different types of transistor structures in that period
from point contact to grown junction, alloyed junction, surface
barrier, diffused base, and planar. This could be done in part,
because the equipment was very inexpensive. Not much money was involved
in tooling so that basic changes of that type could be accomplished.13
Many different cold fusion devices have been developed,
using palladium, nickel, and superconducting materials. Loading
has been achieved with electrolysis, electromigration, deuteron
beams, and various other methods. Critics say this effusion of techniques
means that scientists are floundering around. They are floundering,
but this is a healthy, normal part of dynamic, early stage, freeform
discovery.
Integrated Circuits Looked Like a Bad Idea
After transistors became practical, other inventions
were needed to make them ubiquitous. To start with, engineers could
not pluck out triode vacuum tubes and install transistors instead;
they had to redesign products from scratch and re-educate themselves
in the process, like the IBM engineers working on the Stretch computer.
Conventional heaters and motors will likewise have to be redesigned
to utilize cold fusion power.
Transistor innovation did not end when AT&T was granted
a patent. Transistors required a tremendous amount of research.
They underwent many changes, growing faster and more powerful as
new types were introduced and as fabrication equipment and materials
improved. Improvements continue to the present day and will continue
for as long as transistors are used. The most important innovation
in transistor design was the integrated circuit, invented independently
in 1958 by Robert Noyce and Jack Kilby. An integrated circuit consists
of two or more transistors, resistors, capacitors, and other components,
including the wires connecting them, fabricated together on a single
piece of silicon. At first glance, integration seemed like a peculiar
idea because, as Kilby explained, "Nobody would have made these
components out of semiconductor material then. It didn't make very
good resistors or capacitors, and semiconductor materials were incredibly
expensive."14 It turned out to be a great idea because
it reduced labor and errors and it allowed circuits to be miniaturized.
Today, millions of circuits occupy the space formerly taken by one.
In a sense, integration and miniaturization were more beneficial
than the discovery of the transistor itself.
Integration became inevitable after a printing technique,
photolithography, was successfully applied to fabricating transistors.
(This is a good example of an old technology used for a new purpose.)
Robert Noyce explained that with photolithography, Fairchild produced
hundreds of transistors on one piece of silicon, "But then people
cut these beautifully arranged things into little pieces and had
girls hunt for them with tweezers in order to put leads on them
and wire them all back together again . . . Then we would sell them
to our customers, who would plug all these separate packages into
a printed circuit board." This was a waste of time, effort, and
money. Even though silicon was expensive, it was worth making resistors
and capacitors out of it to eliminate this step. Noyce concluded
that integration was inevitable: "There is no doubt in my mind that
if the invention hadn't arisen at Fairchild, it would have arisen
elsewhere in the very near future. It was an idea whose time had
come, where the technology had developed to the point where it was
viable."15 Integration, like zone refining (a purification
technique developed at Bell Labs; see Part
1), is not directly related to transistors, but it was developed
in response to the transistor boom. Integration would have been
valuable even if transistors had not come along. In fact, it might
have delayed or prevented transistors if it had come first, because,
as explained above, it works well with vacuum tubes, and it might
have been used to make millions of tiny tubes or magnetic core memories.
Transistors Never Became Easy to Reproduce
In Part 1, I said that one of the myths spread
by cold fusion opponents is that soon after things are discovered,
they become easy to reproduce. Transistors never became easy to
reproduce. Integrated circuits are even worse. In the 1980s, after
three decades of the most intense high-tech R&D in history, more
than 50% of the dies coming out of the factories had to be scrapped. Today 10 to 20%
fail. This is mainly because circuit density keeps increasing; if
manufacturers were still producing 64 K RAM chips, the yield would
be high. But it is also because of fundamental limitations in knowledge
and know-how. Reproducibility problems were gradually overcome not
by simplifying the problem or finding a general principle which
allows "any scientist" to reproduce a transistor easily, as the
skeptics seem to believe, but by a combination of heroic measures
and "a plethora of small continuous improvements," as an Intel plant
manager put it. Heroic measures, or brute force solutions, mean
building clean rooms, dressing technicians in Gore-Tex astronaut
suits and goggles, and taking other extreme measures to exclude contamination.
Intel maintains such tight control over the machinery in its clean
rooms that when an equipment supplier wishes to use a different
kind of screw to hold the face plate onto the equipment cabinet,
the supplier must first inform Intel and go through a complex approval
process. Such measures would not be needed if this were an exact
science. Intel would simply spell out the specifications for screws,
telling its vendors what parts to use and what parts to avoid.16
The Advantages of Open Development
A few cold fusion scientists claim they have
viable heat producing cells. They are apparently sitting on these
devices, trying to perfect the technology by themselves, presumably
so that they will get a larger share of the scientific credit or
royalties. Others, like CETI, talk about establishing a "coordinated"
research program with a small number of "research partners." These
strategies will not work. Cold fusion is too big to be developed
by any single company, or with a planned program of coordination
and cooperation. Even AT&T was not big enough to handle the transistor.
When one person, one company, or the DOE is in charge of development,
even if it only "coordinates" research, the decision makers will
probably make a wrong turn and ruin everyone's prospects. AT&T soon
went astray in transistors, ignoring the development of integrated
circuits for several years. In 1948, soon after AT&T filed for a
patent for transistors, it began shipping sample devices to leading
U.S. laboratories including the Army Signal Corps, Los Alamos, the
Naval Research Laboratory, General Electric, Motorola, RCA, Westinghouse,
and others. In 1951, AT&T and the Joint Chiefs of Staff argued over
whether the transistor should be classified. AT&T prevailed, but
it agreed to abide by the recommendations of the Joint Chiefs of
Staff, "to guard the special manufacturing processes so essential
to the success of transistor development and production with all
possible care short of actual military classification."17
AT&T wanted to reveal full details. This was partly in response
to pressure from Justice Department anti-trust lawyers, but it was
also because AT&T managers understood that even AT&T was not big
enough to tackle transistors on its own. In September 1951, seven
busloads of the nation's top scientists and engineers were invited
to a Bell Labs laboratory for a five-day symposium on transistor
performance and applications. Manufacturing processes were not revealed.
In April 1952, another nine-day hands-on training seminar was held
for companies that had paid the patent licensing fees. This time,
AT&T revealed everything. Mark Shepherd, a Texas Instruments engineer,
recalled, "They worked the dickens out of us. They did a very good
job; it was very open and really very helpful."18
Other companies soon began manufacturing transistors
and paying AT&T royalties. Texas Instruments and others made
better transistors than AT&T for some applications. AT&T purchased
them and saved money in its telephone network. Long distance direct
dial service began in 1951, just as the first transistors were being
installed in the network. It led to a huge increase in long distance
telephone calls and profits. AT&T might have given transistor technology
to other companies for free instead of licensing it, yet it still
would have benefited tremendously.
References
- Riordan, M. and Hoddeson, L. 1997. Crystal
Fire: The Birth of the Information Age, (Norton).
- Glass, R. 1983. Computing Catastrophes,
(Computing Trend), p. 100.
- Spindt's research is described at http://www.indiana.edu/~hightech/fpd/papers/FEDs.html.
- Glass, R. ibid., p. 94, quoting the
director of the Stretch project, Stephen Dunwell.
- These numbers are suspect. They
come from D. Evans, "Computer Logic and Memory," Scientific
American, September 1966, p. 82, but the cost of transistors
is quoted as $10 per bit, whereas other sources from the mid 1960s,
like Sanders or Glass, say it was $0.50 per bit. At $0.50 per
bit, the 32 MB of RAM in today's typical personal computer would
cost $144 million.
- Sanders, D. 1968. Computers in Business,
(McGraw-Hill), p. 271.
- Gibbs, W. 1999. "The Magnetic Attraction,"
Scientific American, May.
- Stein, R. 1992. "Terabyte Memories with the
Speed of Light," Byte, March, p. 168.
- Markoff, J. 1998. "In the Data Storage Race,
Disks Are Outpacing Chips," New York Times, February 23,
p. C1.
- Eddy, P., Potter, E., and Page,
B. 1976. Destination Disaster, From the TriMotor to the DC10:
The Risk of Flying, (Quadrangle, The New York Times Book Co.).
- RCA advertisement, 1966. Scientific American,
September, p. 42.
- Glass, R. ibid., pp. 101, 102. The
LARC supercomputer when fully expanded had 97,500 words of main
memory, and six million words of disk space (12 megabytes) on
24 disks. It cost $6 million for a basic system, according to
the U.S. Army Ordnance Corps, http://ftp.arl.mil/~mike/comphist/61ordnance/app7.html.
- An Interview with Jack Kilby, 1997, Texas
Instruments Inc., http://www.ti.com/corp/docs/kilbyctr/interview.shtml.
- Riordan, M. and Hoddeson, L. ibid.,
p. 259.
- Riordan, M. and Hoddeson, L. ibid.,
pp. 264-265.
- Lohr, S. 1995. "Suiting Up for America's
High Tech Future," New York Times, December 3.
- Riordan, M. and Hoddeson, L. ibid.,
p. 196.
- ibid., pp. 196-197.