The myriad challenges of designing a chip on its own range from functional correctness to power and signal integrity and manufacturability. Chips do not exist in isolation. Instead, they must be integrated into a system context both electrically and mechanically. Moreover, chips must be packaged and then attached to a circuit board. While chip/package/printed-circuit board (PCB) co-design does offer challenges, tools and methodologies are available to designers in pursuit of design closure.
Design Challenges Abound
There once was a time when chips, packages, and boards were designed serially and in that order. But eventually, the signal-integrity issues that board designers had long papered over with unoptimized designs began to dictate a systemic approach to the process.
With time constraints and short design cycles as endemic as they are, the task of establishing systemic synergy between a chip, package, and board is extremely challenging. Right off the bat, the design team often must decide whether some aspects of the system will be considered in detail at all, or whether they will simply carry forward assumptions from previous design cycles.
IC package technology is evolving quite rapidly, and it is too often overlooked in the broader scheme. “We see packaging technology as a very big driver, almost the same in significance as power-related issues,” says Dian Yang, senior vice president and general manager of Apache Design Solutions. The primary reason is the cost of packaging, which can get out of hand quickly with some of today’s advanced technologies.
On the other hand, those expensive packages have capabilities that you can’t get in any other way. Packages using 3D IC technologies driven by through-silicon vias (TSVs), systems-in-a-package (SIPs), and chip-on-chip technology can be differentiators that factor into the device’s market success (Fig. 1). But beware: Choose the wrong package and your device may not sell because it’s too expensive, or your package choice may even result in the device’s failure in certain use cases.
“3D packaging with TSVs drives higher integration but really complicates the off-chip network,” says John Park, methodology architect in the System Design Division of Mentor Graphics (see “Thanks To TSVs, 3D IC Packaging Gets Set To Tackle Tough Challenges” at www.electronicdesign.com).
A signal path that was once a simple wire bond to a metal lead frame now involves redistribution routing to a microbump, followed by a silicon interposer to another IC and then down to the package bumps (Fig. 2).
Getting Started
An initial challenge concerns substrate and board topologies. How many signal and ground layers should comprise the package substrate and board? Tradeoffs must be made between the number of layers and routability, as cost rises dramatically with increasing complexity. This means undertaking feasibility analysis.
Once that process is completed, system-level floorplanning considerations come into play. How do I plan wirebonding and routing? Often, systems are designed with fixed DRAM chips and a custom processor. This may necessitate yet another feasibility analysis of routability and bonding options.
“We have evolved into the concept of designing from outside in rather than inside out,” says Brad Griffin, Cadence’s director of marketing for SIP, IC packaging, and high-speed PCB design tools. “What happens is the package guy is pushed into the middle as the negotiator between the chip and board teams.”
Often, board components are relatively fixed and need to be where they are because of the placement of components such as connectors. Sometimes components can be rotated in place, but that’s about all the flexibility that’s available.
On the other hand, says Griffin, the chip design can be negotiated and influenced in ways that might reduce the number of layers in the package substrate. But the board can be the place where flexibility is most limited. “There’s often a certain amount of fixed-ness to board design, and these fixed parameters become constraints at the package and chip levels,” Griffin says.
The design process must consider target system specifications. How will I verify proper operation based on those specifications? If the frame of reference is time-domain specifications, such as how much overshoot or ringing will be permitted in signal paths, or how much noise on the power planes in the package substrate or board, how do I define signal conditions so I can test these aspects?
“In most cases, you can’t operate this way,” says Brad Brim, product marketing manager at Sigrity. “There are so many signals that you’d have to simulate endlessly to cover all the possible use cases.”
The alternative is to design to frequency-domain specifications, such as impedance mismatches. This, in turn, leads to the area of decoupling-capacitor (decap) optimization for purposes of cost versus performance tradeoffs.
“There’s been a strong trend toward decap optimization,” says Brim. “A rule of thumb has been to use one decap per power pin.” Many reference designs use this rule, but it can constitute overdesign.
Decaps are not free, either in terms of cost or board/substrate area. They also tend to gum up routing if used too liberally. Thus, it’s important to work through the tradeoff between cost and performance with decap placement. Tools from some of the analysis vendors, such as Sigrity, will perform this kind of analysis.
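As a rough illustration of that cost-versus-performance tradeoff, the sketch below (all component values are hypothetical, not from any vendor tool) computes a target PDN impedance from rail voltage, ripple budget, and transient current, then estimates how many identical decaps in parallel are needed to meet it at a single frequency:

```python
import math

def decap_impedance(f, c, esr, esl):
    # |Z| of one decap modeled as series ESR, ESL, and C
    x = 2 * math.pi * f * esl - 1 / (2 * math.pi * f * c)
    return math.sqrt(esr ** 2 + x ** 2)

def min_decaps(f, z_target, c, esr, esl):
    # identical decaps in parallel: impedance divides by the count
    return math.ceil(decap_impedance(f, c, esr, esl) / z_target)

# Assumed numbers: 1.0-V rail, 5% ripple budget, 10-A transient demand
z_target = (1.0 * 0.05) / 10.0                        # 5-mOhm target
n = min_decaps(100e6, z_target, 100e-9, 0.01, 0.5e-9)  # 100-nF parts
```

Past some count the parts' own ESL dominates and additional decaps buy little, which is one reason a blanket one-per-pin rule can amount to overdesign.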
The Elephant In The Room
A huge factor to consider in the system design process, of course, is power. This is a major deciding factor in determining how many layers comprise both the package and the board. In turn, this controls costs and the type of package. These issues are mainly centered on dc-power delivery. How many power and ground planes do I need? Can I get away with fewer layers?
For ac-power delivery, designers must again consider the number of ground planes and whether signal layers reference a single adjacent plane (known as single reference) or are sandwiched between a power plane above and a ground plane below (known as dual reference). Inadequate ac-power delivery can result in dynamic voltage-drop problems due to simultaneous switching of signal lines.
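A back-of-envelope L·di/dt estimate (all numbers assumed purely for illustration) shows why simultaneous switching stresses the ac-power delivery network:

```python
# Supply droop from simultaneous switching through shared inductance.
# All values are assumed for illustration, not measured data.
L_SUPPLY = 0.1e-9        # henries of effective shared power-pin inductance
N_DRIVERS = 32           # output drivers switching at once
DI = 10e-3               # amps of current step per driver
DT = 100e-12             # seconds of edge time

droop = L_SUPPLY * N_DRIVERS * DI / DT   # volts of supply droop
```

Even a tenth of a nanohenry of shared inductance produces hundreds of millivolts of droop here, which is why plane count and reference choices matter.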
Although they were often thought of as such in the past, power and signal integrity are by no means independent. There is no such thing as an ideal power or ground plane, and that means noise. “Simultaneous switching noise is very much a power delivery effect although it’s reflected in signals,” says Sigrity’s Brad Brim. Thus, signal integrity and power delivery must be considered together (Fig. 3).
There are also thermal issues to consider. If a circuit or component is getting hot enough to merit concern, its metals have probably already changed in conductivity. Thermal and electrical effects are a nonlinearly coupled problem: as metals heat, their resistance rises, so they dissipate still more power and heat further. Sigrity has incorporated some thermal and electrical analysis in its tools. “What we found is that you have to consider both effects at the same time,” says Brim.
In characterizing packages for its own chips, Texas Instruments’ package simulation group considers the thermal performance requirement of the end system and analyzes the overall reliability of the package in that context.
“We look at thermal performance on the chip,” says Darvin Edwards, TI Fellow and manager of Texas Instruments’ package simulation group. “We look for hot spots, how they might be mitigated by the package, and how the system environment might draw heat away.” TI provides JEDEC thermal models to customers to aid them in calculating the junction temperatures of components.
In its tools, Sigrity couples both an electrical solver and a thermal solver so results from electrical analysis are passed to the thermal solver. The thermal results are passed back again to the electrical solver. The result, Brim asserts, is a more accurate electrical solution.
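That coupling can be sketched as a simple fixed-point iteration between an “electrical solve” and a “thermal solve.” The constants below are illustrative stand-ins (the copper temperature coefficient of roughly 0.0039/°C is the one physical value used):

```python
ALPHA = 0.0039          # copper temperature coefficient of resistance, per degC
R0, T0 = 0.010, 25.0    # 10-mOhm trace resistance at 25 degC (assumed)
I = 10.0                # amps through the trace (assumed)
THETA = 20.0            # degC per watt thermal resistance to ambient (assumed)
T_AMB = 25.0

t = T_AMB
for _ in range(100):
    r = R0 * (1 + ALPHA * (t - T0))  # electrical solve at current temperature
    p = I * I * r                    # power dissipated at that resistance
    t_new = T_AMB + THETA * p        # thermal solve: resulting temperature
    if abs(t_new - t) < 1e-6:
        break
    t = t_new
```

Each pass runs hotter than the last until the two solutions agree; solving either domain alone would understate both the temperature and the electrical loss.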
Simulating The System
System simulation can be looked at from either the chip or package side. Modeling any system raises the question of how much resolution the models need for effective simulation performance. It is critical to be able to model the performance of the power-delivery system, but first consider the scope of the problem.
For digital chips, up to 60% of the pins (or balls) are dedicated to power delivery. So for a 5000-pin chip, you can assume that at least half of them, or 2500 pins, will be either power or ground pins. Detailed chip-level analysis requires per-pin or pin-grouped package models, with grouping typically done by board nets. Looking the other way, from the board back toward the chip and package, board analysis uses per-net chip models.
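A minimal sketch of that per-net grouping (the pin table is hypothetical): thousands of power and ground pins collapse into a handful of per-net terminals for the package model:

```python
from collections import defaultdict

# Hypothetical (pin, net) assignments for a few package balls
pins = [("A1", "VDD_CORE"), ("A2", "VSS"), ("A3", "VDD_CORE"),
        ("B1", "VDD_IO"), ("B2", "VSS"), ("B3", "DDR_DQ0")]

groups = defaultdict(list)
for pin, net in pins:
    groups[net].append(pin)   # one model terminal per board net
```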
It’s particularly useful when tools cooperate to give design teams various views into the design, some from the chip side and some from the package side. Cadence Design Systems has a particular advantage in this respect as it has flows in all three areas: IC design, package design, and PCB design.
Cadence has gone some distance toward a chip/package co-design flow, as its Encounter Digital Implementation System gives users at least a rudimentary view of the package design. Conversely, Cadence’s Allegro Package Designer provides a view of the IC design.
“When you design the package, you need to be able to see the pad ring of the chip,” says Cadence’s Brad Griffin. “You won’t see all the macros and the floorplanning, but if you can at least see connectivity to the I/Os in the pad ring, you can be smarter about routing and how to assign signals.”
From the perspective of the Encounter Digital Implementation System, it’s equally useful for the IC designer to have a view of the wirebond pattern to be used inside the package. “It’s helpful for them to see where the bond pads are and what names are assigned,” says Griffin. “This view into the package helps the IC designer make better decisions about where to place I/Os, and that information is passed back to Encounter.”
There is also some synergy between Allegro Package Designer and Cadence’s Allegro PCB, fueled in part by a system connectivity manager that corrects for the various names that might be applied to netlists across the chip, package, and board. It allows for the unique net names on each fabric to be maintained while showing that they are indeed connected. The alternative has been to try to manage this manually with Excel spreadsheets, which becomes unwieldy, not to mention the difficulty in propagating information about changes to the various fabrics.
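The idea behind such a connectivity manager can be sketched as follows (net names are invented for illustration): each fabric keeps its own local net name, and a shared system-level net records that they are the same wire:

```python
# Hypothetical cross-fabric net table: local names per fabric, keyed by a
# single system-level net that ties them together.
system_nets = {
    "SYS_CLK": {"chip": "clk_out_pad", "package": "CLK_P1",
                "board": "NET_CLK100"},
    "SYS_RST": {"chip": "rst_n_pad", "package": "RST_B4",
                "board": "NET_RESET"},
}

def connected(name_a, fabric_a, name_b, fabric_b):
    # two local names are connected if some system net aliases both
    return any(m.get(fabric_a) == name_a and m.get(fabric_b) == name_b
               for m in system_nets.values())
```

This is exactly the bookkeeping that spreadsheets struggle with once nets number in the thousands or names change mid-project.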
What About Modeling?
“The days are gone when you could design a chip without a package model,” says Sigrity’s Brad Brim. Without considering the effects of the package, estimates of dynamic noise on power delivery can be disastrously flawed.
Unfortunately, package modeling is an inexact art. Such models range in granularity from I/O Buffer Information Specification (IBIS) large package models to lumped RLC representations to the output of full-wave 3D field solvers. According to TI’s Darvin Edwards, the modeling of package effects is most effective with the full-wave 3D field solvers. But the important thing, he says, no matter what methodology you use, is to be aware of any simplifications and/or assumptions that are made in the modeling tool’s algorithms for purposes of reducing compute overhead.
“Our modeling methodology is extremely integral to our package design methodology,” says Edwards. TI relies on package modeling to optimize the designs, substrate layouts, traces and the spaces between them, impedances, and power/ground distribution to ensure that electrical requirements are met.
To its customers, TI supplies compact electrical models of the components but also delivers various models for integration into the customer’s analysis tools. “Electrical issues don’t stop at the boundary of the package,” says Edwards.
Model types include IBIS models, distributed Spice models, and transmission-line models. In some cases, TI will provide full 3D package models, but that model is typically unwieldy for customers’ purposes, says Edwards. Most prefer compact models that won’t clog up their analysis tools.
Model Connectivity
The chip/package/board system often comprises hundreds or even thousands of physical interconnects. Not all of these can be electrically modeled on a per-pin basis, as the aggregate would be far too large for analysis. An alternative is modeling on a per-net basis, which may not afford the resolution necessary for adequate system modeling.
So, designers need support for arbitrary groups of pins. In simulation, however, the challenge is figuring out how to know which nodes of a given model connect to which nodes of another model. Also, how do you reliably inform simulation tools of how to establish these links?
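Whatever format carries the linkage, at its core is a declared pin map plus a sanity check that every reference resolves. A generic sketch (all node names invented for illustration):

```python
# Node sets exposed by two models (hypothetical names)
chip_nodes = {"vdd_core", "vss", "ddr_dq0"}
pkg_nodes = {"VDD_CORE_BALLS", "VSS_BALLS", "DQ0_BALL"}

# Declared mapping: chip-model node -> package-model node
pin_map = {"vdd_core": "VDD_CORE_BALLS",
           "vss": "VSS_BALLS",
           "ddr_dq0": "DQ0_BALL"}

# Any entry naming a node neither model exposes is a dangling link
# that would silently break the coupled simulation.
dangling = [(c, p) for c, p in pin_map.items()
            if c not in chip_nodes or p not in pkg_nodes]
```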
One scheme for accomplishing this is the Model Connection Protocol (MCP), developed by Sigrity. The MCP is implemented as model headers located at the top of the model file. These headers enable pin mapping within tools to facilitate chip/package/PCB analysis on various fronts (Fig. 4).
Various Sigrity tools support the import and export of models with MCP headers for system simulation in Sigrity’s tools as well as third-party tools. At this time, chip-level power analysis tools from Cadence support the MCP format. Cadence’s Encounter Power System and Voltage Storm import MCP package models generated by Sigrity’s tools for chip/package system simulation. Conversely, the Cadence tools export MCP chip models for system simulation within Sigrity’s tools.
Another modeling protocol is Apache Design Solutions’ Chip Power Model (CPM). Apache developed the CPM concept to enable system simulation and analysis without requiring detailed layout information or transistor-level models (Fig. 5). It also serves the desire of vendors to safeguard their IP.
CPMs enable system integrators to perform accurate and realistic power and noise signoff analysis. “What CPM does is bridge the IC side to the package and system guys,” says Apache’s Dian Yang. Yang likens this to the way in which a transistor-level model bridges manufacturing process engineers to circuit designers.
CPM is an effort to go beyond simple lumped models by providing one that is silicon-correlated. “Now we are looking at the real waveform on each bump,” says Yang. CPMs are fully distributed Spice-accurate models that represent actual die behavior.
The recently released CPM v2.0 model considers the LC resonance frequency of the system and automatically generates an on-die switching scenario operating at or near the system resonance. This capability enables system designers to access a CPM representing the worst-case switching scenario that can be used for stress testing the chip/package design. By using resonance-aware models, designers can determine the optimal placement and configuration of the package and PCB decoupling capacitance to help manage power and noise.
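The resonance in question is the familiar 1/(2π√(LC)) peak formed by package inductance against on-die capacitance; with assumed (purely illustrative) values it lands in the tens of megahertz:

```python
import math

L_PKG = 0.1e-9   # 0.1 nH effective package loop inductance (assumed)
C_DIE = 100e-9   # 100 nF of on-die decoupling capacitance (assumed)

f_res = 1 / (2 * math.pi * math.sqrt(L_PKG * C_DIE))   # hertz
```

Switching activity concentrated near this frequency sees the power-delivery network at its highest impedance, which is what a resonance-aware worst-case model deliberately excites.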
The creation of CPMs is a matter of a single push of a button. After performing chip-level power noise analysis in Apache’s RedHawk power-integrity analyzer, it uses the intrinsic detail stored in the tool’s database to create the model.
While CPM is a useful format, some see a weakness in the fact that it can only be created by and transferred among Apache’s tools at this time. Sigrity’s MCP connectivity format is currently under discussion in IBIS committees as the basis of an open industry standard for model connectivity.
The Big Picture
When it comes to a full chip-package-PCB co-design methodology, some important high-level concepts will be critical as such methodologies begin to emerge. One key area is managing netlists through the transition from the chip to the board; i.e., signal X, as the IC design tools have designated it, may become signal Y as far as the PCB layout tool is concerned. Flows will require overall connectivity management that keeps track of signals even as their names change from fabric to fabric. Along with that must come automated pin mapping.
Connectivity management of this nature would be driven by a software backplane, based on industry standards, that would manage data on the chip floorplan, ball-grid array (BGA) power/ground requirements, and board-level interfaces such as double-data rate (DDR). “You would have an environment in which the IC, package, and board layout teams could share data and make tradeoffs for bus or interface optimization,” says Mentor Graphics’ John Park.
Improving IC Flows For 3D
In preparing for a 3D IC packaging strategy, it would help if designers had a physical verification environment that anticipated such architectures. At the same time, such an environment should fit into existing flows without interfering with the use of the rule decks supplied by the foundries.
Mentor Graphics has something in mind for a future Calibre revision that would enable designers to perform physical verification on each die independently using golden Calibre signoff decks, just as is done today, but that would go further by analyzing the interfaces between those die from the electrical, physical, and parasitic standpoints.
The only way to manage this kind of verification today is to attempt full multi-chip verification simultaneously. This simply isn’t practical on designs at nodes like 28 nm, says Michael White, Mentor’s senior product marketing manager for Calibre (see “The Traditional Approach To IC Implementation And Its Problems” at www.electronicdesign.com). “Focusing on the interfaces is the right approach. First you confirm that what comes out of the individual ICs is correct, and then add on the effects of the interfaces,” White says.
Such an approach would overcome a number of deficiencies of what White terms the “mega-merge” methodology. For one thing, combining data on logic circuits and memory can be very problematic, even if you can get all the design data on the memory IP. For another, a design-rule-checking (DRC) run on multiple chips at 28 nm could take days, whereas focusing on the interconnects would be a much quicker run.
The “mega-merge” approach also requires combining all of the graphic database system (GDS) files and rule decks. In a 3D IC design, these files and rule decks often come from different process nodes, and even from different foundries, so name collisions in how the various rule decks define layers are inevitable. In short, the approach introduces a host of gnarly problems that are best avoided.
Two-Pronged Approach
Another way to approach the task of integrating IC, package, and board design is to partition it out among at least a couple of tools. This is the approach taken by Zuken, with an eye toward eventually coaxing together a unified flow.
The company’s CR-5000 PCB design software provides for a full board-design environment that handles package and board integration. Relying on IP that Zuken acquired from Rio Design Automation in 2009, Zuken’s RioMagic tool overcomes limitations inherent in first-generation chip/package co-design tools by digging deeper into the silicon to bring more characterized information into the package view. With that data, designers can intelligently assess what can be altered in the IC design to make package optimizations.
The secret sauce is a database that unifies chip and package design data, enabling either “package-aware chip design” or “chip-aware package design,” says Zuken’s Steve Watt, an application engineer specializing in co-design.
RioMagic is in essence a feasibility tool that allows you to create an I/O ring in three types of flows. In a prototype flow, users start from scratch to create a feasible package for a chip based on the chip design data. This flow enables I/O planning, consideration of voltage domains, and general “what-if” exploration.
A more traditional top-down flow permits a more granular level of I/O planning and redistribution-layer (RDL) routing. Here, users can consider physical and electrical constraints. The flow begins in logic synthesis. At the floorplanning stage, data is transferred via RioMagic into the CR-5000 environment to assess the floorplan’s viability.
For applications in which a package’s ball assignments are already fixed and you’d like it to accommodate a new chip, there’s a less traditional bottom-up flow. This flow drives backward from the ball assignments, feeding that into package analysis to come up with a viable pad ring that will accept the new IC. This flow works for packages with either a pad ring or bump pattern.
Overseeing RioMagic and CR-5000 is a co-design manager that allows assessments of complex IC, package, and board arrangements. “It’s a top-down way to integrate all the views of the data for intelligent consideration,” says Watt.
The IC/package flow (RioMagic) and package/board flow (CR-5000) are “still a little like two flows,” says Kent McLeroth, Zuken’s vice president of system engineering. “Our goal is to continue to integrate those two flows.”
Zuken’s CR-5000 packaging software is tightly integrated with the RioMagic tool, and companies are using a RioMagic/CR-5000 flow worldwide. Zuken is currently working on incorporating this functionality directly into its core tools.