Supplementary Readings
1. Integrated Circuits: The Past and the Future
As with many inventions, two people had the idea for an integrated circuit at almost the same time. Transistors had become commonplace in everything from radios to phones to computers, and now manufacturers wanted something even better. Sure, transistors were smaller than vacuum tubes, but for some of the newest electronics, they weren't small enough.
But there was a limit on how small you could make each transistor, since after it was made it had to be connected to wires and other electronics. The transistors were already at the limit of what steady hands and tiny tweezers could handle. So, scientists wanted to make a whole circuit — the transistors, the wires, everything else they needed — in a single blow. If they could create a miniature circuit in just one step, all the parts could be made much smaller.
One day in late July of 1958, Jack Kilby was sitting alone at Texas Instruments. It suddenly occurred to him that all parts of a circuit, not just the transistor, could be made out of silicon. At the time, nobody was making capacitors or resistors out of semiconductors. If it could be done, then the entire circuit could be built out of a single crystal, making it smaller and much easier to produce. By September 12, Kilby had built a working model, and on February 6, 1959, Texas Instruments filed a patent. Their first “Solid Circuit,” the size of a pencil point, was shown off for the first time in March.
But over in California, another man had similar ideas. In January of 1959, Robert Noyce was working at the small startup Fairchild Semiconductor. He also realized that a whole circuit could be made on a single chip. That spring, Fairchild began a push to build what it called “unitary circuits,” and it also applied for a patent on the idea. Knowing that TI had already filed a patent on something similar, Fairchild wrote out a highly detailed application, hoping that it wouldn't infringe on TI's device.
All that detail paid off. On April 25, 1961, the patent office awarded the first patent for an integrated circuit to Robert Noyce while Kilby's application was still being analyzed. Today, both men are acknowledged as having independently conceived of the idea.
Today's predictions also say that there is a limit to how much further the transistor can shrink. This time around, the prediction is that transistors can't get substantially smaller than they currently are. Then again, in 1961, scientists predicted that no transistor on a chip could ever be smaller than 10 millionths of a meter (10 micrometers) — and on a modern Intel Pentium chip they are 100 times smaller than that.
With hindsight, such predictions seem ridiculous, and it's easy to think that current predictions will sound just as silly thirty years from now. But modern predictions of the size limit are based on some very fundamental physics — the size of the atom and the electron. Since transistors run on electric current, they must always, no matter what, be at least big enough to allow electrons through.
On the other hand, all that's really needed is a single electron at a time. A transistor small enough to operate with only one electron would be phenomenally small, yet it is theoretically possible. The transistors of the future could make modern chips seem as big and bulky as vacuum tubes seem to us today. The problem is that once devices become that tiny, everything moves according to the laws of quantum mechanics — and quantum mechanics allows electrons to do some weird things. In a transistor that small, the electron would act more like a wave than a single particle. As a wave, it would smear out in space, and it could even tunnel its way right through the transistor without ever really interacting with it.
Researchers are nevertheless currently working on innovative ways to build such tiny devices — abandoning silicon and abandoning all of today's manufacturing methods. Such transistors are known, not surprisingly, as single-electron transistors, and they'd be considered “on” or “off” depending on whether they were holding an electron. In fact, such a tiny device might make use of the quantum weirdness of the ultra-small. The electron could be coded to have three positions — instead of simply “on” or “off,” it could also be “somewhere between on and off.” This would open up doors for entirely new kinds of computers. At the moment, however, there are no effective single-electron transistors.
Even without new technologies, there's still room for miniaturization. Moore's law continues to hold: the number of transistors on a chip doubles roughly every two years, pushing toward the billion-transistor microprocessor. Such chips would allow computers to be much “smarter” than they currently are.
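As a loose illustration of the doubling just described, the short Python sketch below projects a transistor count forward in time. The starting count of 50 million and the exact doubling period are hypothetical illustration values, not figures from the article.

```python
# Minimal sketch of the doubling described above: the transistor count grows
# by a factor of two roughly every `period` years. The starting count used
# below (50 million) is a hypothetical value, not a figure from the article.

def projected_transistors(start_count: float, years_elapsed: float, period: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `period` years."""
    return start_count * 2 ** (years_elapsed / period)

if __name__ == "__main__":
    start_count = 50_000_000  # hypothetical starting chip
    years = 0
    while projected_transistors(start_count, years) < 1_000_000_000:
        years += 1
    print(f"About {years} years of doubling reach the billion-transistor mark.")
```

Under these assumptions the loop stops after roughly nine years of doubling, which is the kind of timescale the paragraph above has in mind.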
2. A Top-Down Approach to IC Design
The challenges facing the electronics design community today are significant. Advances in semiconductor technology have increased the speed and complexity of designs in tandem with growing time-to-market pressures. The companies that have remained competitive are those that are able to adapt to changing methodology requirements and develop a broad range of products quickly and accurately.
Successful product development environments (PDEs) streamline the design process by creating best practices involving people, process, and technology. Developing these best practices requires a thorough understanding of the needed design methods and of how to apply them to the system project. This document reviews the basic principles of top-down design for ASIC- and FPGA-intensive systems, and provides guidelines for developing best practices based on both semiconductor and EDA technology advances.
The strategy of most successful PDEs is to build advanced, high quality products based on a system platform architecture that effectively incorporates leading-edge hardware and software algorithms as well as core technology. This strategy provides integration density, performance, and packaging advantages and enables product differentiation in features, functions, size, and cost. In most cases, to fully exploit these opportunities, this strategy requires a transition from a serial or bottom-up product development approach to top-down design.
In a bottom-up design approach, the design team starts by partitioning the system design into various subsystem and system components (blocks). The subsystems are targeted to ASICs, FPGAs, or microprocessors. Since these subsystem designs are usually on the critical path to completing the design, the team starts on them immediately, developing the other system components in parallel. Each block is designed and verified based on its own requirements. When all blocks are complete, system verification begins.
The bottom-up design approach has the advantages of focusing on the initial product delivery and of allowing work to begin immediately on critical portions of the system. With this approach, however, system-level design errors do not surface until late in the design cycle and may require costly design iterations. Furthermore, while related products can reuse lower-level components, they cannot leverage any system-level similarities in design architecture, intellectual property, or verification environment. Finally, bottom-up design requires commitment to a semiconductor technology process early on and hinders the ability to reuse designs in other technology processes.
The alternative is the top-down design approach. In this approach, the design team invests time up front in developing system-level models and a verification environment. Using the system models, the team is able to analyze trade-offs in system performance, feature set, partitioning, and packaging. Furthermore, a system-level verification environment ensures that system requirements are met and provides the infrastructure for verifying the subsystems and system components.
The top-down design approach results in higher confidence that the completed design will meet the original schedule and system specifications. Basing the starting point of the system design on a single verified model ensures that critical design issues surface early in the process and reduces false starts in the concurrent design of ASICs, PCBs, and systems. The design team can discover and manage system-level issues up front, rather than having to redesign the system at the end of the design cycle. Because each subsystem is designed and verified within the context of the system verification environment, the overall system functionality is preserved.
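As a very rough software sketch of the idea described in the last two paragraphs, the Python below treats a simple FIR filter as the “system”: one executable system-level model is written first, and a later block implementation is checked against it inside the same verification routine. The FIR example, the function names, and the test values are hypothetical illustrations, not anything specified in this document; in a real flow the block would be RTL targeted to an ASIC or FPGA and the verification environment would be far richer.

```python
# Minimal sketch (hypothetical example, not from this document) of reusing a
# system-level golden model as the reference when verifying a block design.

from typing import List


def system_model(samples: List[int], taps: List[int]) -> List[int]:
    """Executable system-level (golden) model: a simple FIR filter."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, tap in enumerate(taps):
            if n - k >= 0:
                acc += tap * samples[n - k]
        out.append(acc)
    return out


def block_implementation(samples: List[int], taps: List[int]) -> List[int]:
    """Stand-in for the detailed block design (in practice, RTL targeted to an
    ASIC or FPGA); written here as ordinary Python so the flow can be shown."""
    out = [0] * len(samples)
    for k, tap in enumerate(taps):
        for n in range(k, len(samples)):
            out[n] += tap * samples[n - k]
    return out


def verify_block() -> None:
    """System-level verification environment reused to check the block."""
    samples = [1, 0, 2, -1, 3, 0, 0, 1]
    taps = [1, 2, 1]
    assert block_implementation(samples, taps) == system_model(samples, taps), \
        "block output diverges from the system-level model"
    print("block matches the system-level model")


if __name__ == "__main__":
    verify_block()
```

The point of the sketch is the structure, not the filter: because the block is checked against the same model that defined the system in the first place, a mismatch surfaces as soon as the block exists, rather than during late system integration.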