IEEE Transactions on Components, Packaging, and Manufacturing Technology,
Vol. 21, No. 4, pp. 610-616, December 1998.

The Role of Physical Implementation in Virtual Prototyping of Electronic Systems

Markku Lindell
Nokia Research Center
Paul Stoaks
Nu Thena Systems
David Carey
Microelectronics and Computer Technology Corporation (MCC)
Peter Sandborn
CALCE Electronic Products and Systems Consortium
University of Maryland


I. Introduction

Different sectors of the electronics industry are exposed to different sets of problems and challenges, thus requiring different prioritization of resources. In particular, the development of low manufacturing volume systems (e.g., avionics, medical, defense) appears to require radically different prioritization of resources than the development of time-to-market driven high manufacturing volume products (e.g., mobile phones, notebook computers) [1]. Even though these two segments might be seen as lying at two opposite ends of a continuous spectrum, they share a common and critical dependence on virtual prototyping.

A. Time-to-Market Driven Electronic Products

The primary driver of today’s commercial electronics market is time-to-market. Portable computers, cellular telephones, and a host of other high-density systems often have design cycles that are less than a year long and market windows that may be even shorter. The very existence of these products depends on finding quick design solutions that meet increasingly challenging performance and cost requirements. Central to the success of these products are highly developed design methodologies and tools that facilitate first-pass success.

A great deal of effort is applied to the design and selection of chips with manufacturing costs in mind; however, minimizing the cost of the system packaging is often sacrificed to meet time-to-market requirements. It is common to knowingly overspend because there is no time to iterate a packaging-associated design change through the design process. Including an analysis of the costs associated with the physical implementation of systems as part of the system design methodology avoids leaving money on the table at the end of development, without increasing the design time.

An additional characteristic of short development cycle products is slow adoption of new technological advances. New technologies and materials are not readily adopted because the products cannot tolerate any risk that may lengthen their design cycle. Even if adequate infrastructure and suppliers exist, new technologies still carry inherent risks caused by the lack of design experience and the lack of tools to support designers. Thus, many opportunities for gaining competitive advantage through the use of new technologies are lost for lack of adequate system design and technology tradeoff methodologies and tools.

For high-volume time-to-market driven applications, the central value of virtual prototyping is avoidance of design iterations and minimization of manufacturing costs.

B. Low-Volume Complex Electronic Systems

Low manufacturing volume systems [1] have just as great a need for virtual prototyping as time-to-market driven applications, but for different reasons. Several distinctive factors describe low-volume systems: long development cycles, long product lifetimes (often 10 years or more), part obsolescence risks, use of components outside of their temperature specification limits, harsh operating environments, and catastrophic consequences of system failure during operation. The value of virtual prototyping shifts from minimizing manufacturing costs to a focus on life cycle optimization and risk management. The life cycle encompasses design, manufacturing, maintenance, upgrading, and end-of-life activities, and the analysis focuses on risk mitigation, reliability, and life cycle cost.

Low-volume systems tend to have poorly structured supply chains ("supply webs"), are subject to more stringent regulation and required qualification, and have high manufacturing costs that are more often dominated by IC costs (packaging costs may be irrelevant). For many low-volume systems, the costs associated with qualification, maintenance, and obsolescence may be considerably larger than the original manufacturing costs.

C. The Product Development Process

Whether a product is characterized as time-to-market driven or low manufacturing volume, to guarantee market success the development process needs to incorporate a diverse set of technical and non-technical data. A simplified view of a modern electronics product and system development process is illustrated in Fig. 1.

Fig. 1. Simplified product development process for an electronic product containing both hardware and software components.

The product development process starts with feasibility and market analyses that, when combined with technology assessments, form an initial business plan for the product. After the development program is launched, more mature product conceptualization is performed, targeting validated specifications. The concept phase takes input from the feasibility analysis, market situation and trends, customer feedback, competitive assessment, and technology forecasts, and explores product-level parameters including cost, size, performance, power consumption, usability, and design. The cost estimates factor in design, materials, manufacturing, and life cycle costs as well as quality.

In the system design phase, the available implementation options become coupled to the design, leading to better optimized system and resource partitioning under the given product specification constraints. Examples of tradeoffs include hardware/software/memory allocations and design creation versus design reuse, and their impact on product features, power consumption, size, and cost. The final system design phase provides better-validated specifications to the detailed design of hardware, ASIC, software, and mechanical elements while launching other activities such as test planning, sourcing, and production capability planning.

II. The System Design Space

Packaging tradeoff analysis (physical partitioning) is a subset of system and resource partitioning and an extension of traditional virtual prototyping to include the determination of actual technology implementation details (Fig. 2). Functional verification addresses the need to verify that the system satisfies the customer requirements. Architectural design develops an integrated collection of hardware, software, and interface components that implement the system’s functionality [2]. The physical partitioning activity in Fig. 2 takes inputs from architectural design in the form of a specification of the number and type of gates or bits, functional blocks, or die; and the connectivity between them. Within physical partitioning, the results of the architectural mapping are partitioned into bare die, packaged chips, multichip modules (MCMs), boards, and multiboard systems. Users specify the implementation technologies, materials, and design rules with the help of libraries. The resulting physical partitioning and implementation is analyzed using a suite of estimators and simulators. The analysis produces a set of performance, size, cost, and manufacturing "metrics" that can be used to compare design candidates.

Similar to the functional verification and architectural design, physical partitioning has a hierarchical organization. In the physical space, this means that physical containers (die, single chip packages, MCMs, boards) can be defined in a hierarchical manner to arbitrary depth. In other words, boards can contain other boards or MCMs, boards and MCMs can contain chip packages, chip packages can contain one or more bare die, die can contain one or more functional blocks, and functional blocks can contain one or more gates.
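
As an illustration of this containment hierarchy, the physical space can be captured as a recursive data structure. The following is a minimal sketch only; the type, the "kind" labels, and the operations are hypothetical and not part of any tool described in this paper:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of the hierarchical container model described above.
# The "kind" labels and operations are illustrative, not from a real tool.

@dataclass
class Container:
    kind: str   # e.g., "board", "mcm", "package", "die", "block", "gate"
    name: str
    children: List["Container"] = field(default_factory=list)

    def add(self, child: "Container") -> "Container":
        self.children.append(child)
        return child

    def count(self, kind: str) -> int:
        """Recursively count contained elements of the given kind."""
        return int(self.kind == kind) + sum(c.count(kind) for c in self.children)

# A board containing an MCM that carries two bare die:
board = Container("board", "cpu_board")
mcm = board.add(Container("mcm", "processor_mcm"))
mcm.add(Container("die", "cpu_die"))
mcm.add(Container("die", "cache_die"))
print(board.count("die"))  # -> 2
```

Because every container holds a list of child containers, arbitrary nesting depth (boards within boards, die within packages within MCMs) falls out of the same structure.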

Fig. 2. The relationship of physical partitioning activities to the functional verification and architectural design activities associated with traditional virtual prototyping. All portions of the virtual prototyping space are characterized by specification, partitioning, and verification activities.

The process outlined in Fig. 2 is complicated by the fact that both hardware and software must be considered. The partitioning activity associated with architectural design includes making decisions about which functionality is implemented using hardware and which is implemented in software. This partitioning may have a huge impact on system costs, performance, upgradability, reliability, and many other aspects of the complete system. Hardware/software partitioning is difficult to accomplish without prototyping and simulation. Neither static analysis of the functional description nor automated generation/optimization techniques that attempt to explore the implementation space can easily be employed given today's understanding of the problem. Hardware/software partitioning can more easily be accomplished via prototyping, where a candidate architecture and partitioning can be simulated. The resulting simulation can be verified against the system requirements for both functionality and performance. Where the available prototyping solutions differ is the point during the design cycle at which the partitioning and simulation can be performed. The greatest value is derived when this activity occurs as near the beginning of the design cycle as possible, before effort has been committed to any candidate architecture.

III. Physical Partitioning

The activities associated with physical partitioning must consider the interrelated components shown in Fig. 3. The four categories of activities below must seamlessly interact with each other in order to provide an effective physical partitioning solution.

Structure: Structure represents the physical implementation architecture of the system. During physical partitioning, the architectural elements of the design that are to be implemented in hardware are assigned to physical partitions such as ICs, MCMs, PWBs, etc. Partitioning is performed based on system cost and performance requirements with a view toward obsolescence, upgradability, maintainability, and possibly disassembly. In addition to partitioning, structure includes placement (the orientation of adjacent components to each other), topology (the orientation of adjacent boards to each other), disconnection (designing subsystems to be disconnectable), and reuse (dividing systems into parts that can be reused in other systems). During the physical partitioning activity, different physical implementations may be evaluated, necessitating the movement of architectural elements between physical partitions and changes to component/sub-assembly placement within the partitions.

Technology: Technology characterization is at the core of physical partitioning. The degree to which physical partitioning depends on technology decisions far exceeds the technology requirements of the other portions of the virtual prototyping solution. The technologies that must be characterized and captured into libraries to support physical partitioning include components, substrates, materials, connectors, packages (for packaging bare die), and process flows for fabricating, assembling, testing/screening, and qualifying systems. In addition to the data itself, this category includes the management and delivery of the data. Technology data management methodologies must allow physical partitioning to access the necessary technology data quickly and accurately.
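
A flavor of what such a characterized library entry might look like is sketched below; the record fields and the example values are hypothetical and serve only to suggest how technology data could be delivered to the partitioning tools:

```python
from dataclasses import dataclass

# Hypothetical sketch of a technology library record; fields and values
# are illustrative, not taken from any actual library.

@dataclass(frozen=True)
class SubstrateTechnology:
    name: str
    layers: int
    line_width_um: float          # minimum line width, micrometers
    cost_per_cm2: float           # illustrative fabricated-area cost, USD
    thermal_conductivity: float   # W/(m*K)

LIBRARY = {
    "FR4_6L": SubstrateTechnology("FR4_6L", 6, 100.0, 0.05, 0.3),
    "MCM_D":  SubstrateTechnology("MCM_D", 4, 25.0, 1.20, 150.0),
}

def lookup(name: str) -> SubstrateTechnology:
    """Deliver a characterized technology to the partitioning tools."""
    return LIBRARY[name]

print(lookup("MCM_D").line_width_um)  # -> 25.0
```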

Analysis: Analysis comprises all the "Design for X" activities (manufacturability, environment, testability, design-to-cost, etc.), all performance estimation activities (electrical, thermal, reliability), and life cycle business drivers (obsolescence, qualification, maintenance, upgrading). This portion of the solution can be populated with point design tools (simulators) and/or estimation-level advisors. All of the analyses performed in this area should lead back to an economic impact on the life cycle.

Fig. 3. Enabling activities for system-level physical tradeoff analysis. Successful coordination of Analysis, Structure, Technology, and Optimization is a necessity for obtaining a complete technology tradeoff solution.

Optimization: Optimization is a management framework within which all the partitioning and "Design for X" activities are performed. Optimization does not necessarily imply that the system must automatically choose the optimum design specification without user involvement; rather, it represents tools that collaborate with the user to optimize the system’s physical implementation. Optimization includes objective function formulation, allocating and budgeting, sensitivity analysis, prioritization, treatment of uncertainties, and the management of constraints and requirements. Formalizing and automating the process of system design and tradeoff analysis is regarded by many companies as a proactive approach to product optimization and a key piece of the concurrent engineering puzzle that enables continuous improvement strategies. If tradeoff analysis and its associated Design-for-X activities are to contribute effectively to the continuous improvement culture, metrics that unify all the diverse design concerns must be adopted. Most of the well-known quality elements, such as testability, manufacturability, and reliability, have clear-cut metrics. These metrics are typically parametric (i.e., have independent and dependent variables), are clearly tied to customer expectations and demands, and have a quantifiable impact on the product’s manufacturing and life cycle cost. More importantly, the ability to affect these quality indicators at the product design level, and the critical nature of doing so, are appreciated and understood.
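
To make the notion of objective function formulation concrete, the following sketch normalizes each metric against its budgeted target and combines the results with user-assigned priority weights. The metric names, budgets, weights, and candidate values are hypothetical and are not taken from any tool discussed here:

```python
# Hypothetical sketch of objective function formulation: each metric is
# normalized against its budgeted target and combined with user-assigned
# priority weights. All names and numbers are illustrative.

weights = {"cost_usd": 0.5, "area_cm2": 0.3, "power_W": 0.2}
budgets = {"cost_usd": 120.0, "area_cm2": 80.0, "power_W": 5.0}

def objective(metrics: dict) -> float:
    """Weighted sum of normalized metrics; lower is better, and a value
    of 1.0 means every metric exactly meets its budget."""
    return sum(w * metrics[k] / budgets[k] for k, w in weights.items())

candidates = {
    "mcm_partition":   {"cost_usd": 131.0, "area_cm2": 61.0, "power_W": 4.2},
    "board_partition": {"cost_usd": 118.0, "area_cm2": 96.0, "power_W": 4.9},
}
best = min(candidates, key=lambda k: objective(candidates[k]))
print(best, round(objective(candidates[best]), 3))  # -> mcm_partition 0.943
```

Note that under these (illustrative) weights the more expensive MCM partition wins because its area and power savings outweigh its cost penalty; changing the weights changes the outcome, which is exactly the user collaboration described above.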

A. Life Cycle Economics

Whether the product is time-to-market driven, such as a high-volume mobile telephone, or a low-volume radar system for a fighter jet, the "golden" metric is cost. In the time-to-market case, the greatest contributor to life cycle cost is correctly coordinating design and manufacturing with the market window. In the low-volume case, the greatest contributor to life cycle cost is managing and mitigating risks associated with fielding a system with a long lifetime in a harsh environment. Figure 4 shows all the contributions to life cycle costs (not all contributions are applicable to all products).

Fig. 4. Product development, manufacturing, and support contributions to life cycle costs. Note that all categories, with the exception of End of Life, include both hardware and software.

Traditional cost drivers (purchase and manufacturing) may be minor contributors to the actual life cycle cost of the system depending on the product’s market segment.

B. Virtual Qualification and Technology Risk Assessment

Virtual qualification is performed through computer-aided simulation of the fundamental mechanical, thermomechanical, chemical, and electrical mechanisms by which systems fail. Virtual qualification provides risk assessment before the equipment supplier commits to prototype hardware, and points to whether a particular design will meet the desired application lifetime and by what mechanism(s) it is likely to fail. This procedure permits the reliability of the new system to be assessed using science-based simulations, thereby facilitating rapid and cost-effective qualification of the new system.

Virtual qualification is a methodology for rapidly assessing the application specific reliability of electronic systems through simulation [3]. The simulations determine the dominant failure mechanisms and the time to failure of an electronic system, based on physics-of-failure models that describe the fundamental degradation processes. Each physics-of-failure model is composed of a stress analysis model and a damage assessment model. The stress analysis model can be adapted to reflect the component or assembly architecture, while the damage model depends only on a material’s response to stress. Physics-of-failure methods are therefore applicable to new designs as well as nearly all existing components and systems. In addition, damage models can be generated independent of a particular component design, allowing them to be completed off-line without impacting the critical path in the design cycle. One physics-of-failure model does not describe the reliability of the system, however. Instead, each failure mechanism must be modeled using one or more separate physics-of-failure models. Therefore, electronic system reliability is assessed by evaluating the time-to-failure by many competing failure mechanisms (Fig. 5).
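
For illustration, the competing-mechanism evaluation can be sketched as follows. The two damage models shown are textbook forms (a Coffin-Manson style thermal-fatigue model and an Arrhenius style model), and all coefficients are illustrative placeholders rather than validated model parameters:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def solder_fatigue_hours(delta_T_K, cycles_per_day=8.0, C=1.0e10, n=2.5):
    """Coffin-Manson style model: cycles to failure fall as a power of
    the thermal cycling range; converted to hours for comparison."""
    cycles_to_failure = C * delta_T_K ** (-n)
    return cycles_to_failure / cycles_per_day * 24.0

def electromigration_hours(T_junction_K, A_hours=2.0e-5, Ea_eV=0.7):
    """Arrhenius style model: median hours to failure at a given
    junction temperature."""
    return A_hours * math.exp(Ea_eV / (K_BOLTZMANN_EV * T_junction_K))

# Evaluate each competing mechanism; the dominant mechanism is the one
# with the shortest predicted time to failure.
times = {
    "solder joint fatigue": solder_fatigue_hours(delta_T_K=60.0),
    "electromigration": electromigration_hours(T_junction_K=358.0),
}
dominant = min(times, key=times.get)
print(f"dominant mechanism: {dominant} ({times[dominant]:.2e} hours)")
```

The essential point is the final step: the system-level assessment is the minimum time-to-failure over all modeled mechanisms, not the output of any single model.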

Fig. 5. Physics of Failure (PoF) Process [4].

Qualification is defined as a process to verify whether the anticipated reliability is indeed achieved under actual life cycle loads for some specified length of time. The purpose of qualification is therefore to audit the ability of the design, manufacture, and assembly to meet reliability goals. The virtual qualification approach is significantly more timely and cost-effective than traditional qualification, which often consists of following decades-old "one size fits all" standard tests that are often inaccurate, improperly applied, and unnecessarily restrictive.

C. Virtual Verification

Most manufacturing (and assembly) assessment is performed at the end of the design cycle, after physical design is complete and artwork is ready for generation. Several commercial software tools exist for assessing the manufacturability of PWBs and the assemblability of systems after physical design has been completed. Board and assembly manufacturability tools perform checks in three categories of manufacturability: 1) assembly process compatibility, 2) proximity to other structures, and 3) artwork verification. Assembly process compatibility involves comparing the component’s size, shape, and mounting method to the process that will be used to assemble boards containing the component. Proximity checking involves checking the location of the component relative to other components assembled on the board and to the edge of the board. The third category, artwork verification, involves checking the board layout for the correct orientation and location of fiducials, alignment holes, and other structures necessary to facilitate assembly.
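
As an example of the proximity-checking category, components can be checked against a minimum spacing and a board-edge keepout. This is a simplified sketch only; real tools use full component courtyard geometry, and the limits and part data below are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Part:
    name: str
    x: float  # body center, mm
    y: float
    w: float  # body width, mm
    h: float  # body height, mm

def clearance(a: Part, b: Part) -> float:
    """Axis-aligned clearance between two rectangular bodies
    (negative if the bodies overlap)."""
    dx = abs(a.x - b.x) - (a.w + b.w) / 2.0
    dy = abs(a.y - b.y) - (a.h + b.h) / 2.0
    return max(dx, dy)

def proximity_check(parts: List[Part], board_w: float, board_h: float,
                    min_gap: float = 0.5, edge_keepout: float = 3.0) -> List[str]:
    issues = []
    for i, a in enumerate(parts):
        if (a.x - a.w / 2 < edge_keepout or a.y - a.h / 2 < edge_keepout or
                a.x + a.w / 2 > board_w - edge_keepout or
                a.y + a.h / 2 > board_h - edge_keepout):
            issues.append(f"{a.name}: violates {edge_keepout} mm edge keepout")
        for b in parts[i + 1:]:
            if clearance(a, b) < min_gap:
                issues.append(f"{a.name}/{b.name}: spacing below {min_gap} mm")
    return issues

parts = [Part("U1", 10, 10, 12, 12), Part("C3", 17, 10, 2, 1)]
print(proximity_check(parts, board_w=100, board_h=80))
# -> ['U1/C3: spacing below 0.5 mm']
```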

While artwork verification cannot be addressed during virtual prototyping, process compatibility and proximity checking can. Virtual verification involves adapting the same tools used for late design cycle verification for use during the earliest parts of the design cycle. The power of this approach is that the identical databases that contain equipment and process information are applied to the design both at the conceptual level and, after physical design, during verification activities.

D. The Role of Standards

Some traditional standards can and have been used during system virtual prototyping to predict various aspects of the performance of a system (e.g., MIL-HDBK-217 can be used to predict failure rates for electronic systems). It is widely accepted that many existing "process definition" standards do not accurately reflect the reality of today’s high-density electronic components and systems. Newer approaches to standards are based on "product performance" and represent intelligent, application-specific use of physical principles rather than blindly following a process that may not provide value to the product. Because newer standards are based on product performance, they are applicable to virtual prototyping.

IV. Implementation

Even if it is possible to develop accurate predictions of performance and cost metrics for complex systems, there are still significant hurdles to widespread use of virtual prototyping:

A. Next Generation Design Exploration Tools

Virtual prototyping electronic design automation (EDA) tools typically concentrate on functional design implementation, often starting from algorithmic system exploration studies and focusing on system-on-chip embedded hardware/software solutions. However, the product development process must begin earlier, at the conceptual phase, where more basic technology selections, and their impact on high-level product metrics, need to be incorporated. The information sources and formats used for these earlier explorations are highly heterogeneous and may include in-house databases, technology vendors’ roadmaps, textual and graphical data, and expert personnel’s accumulated know-how. At present, there are few, if any, tools or methodologies that manage data of this diversity and integrate it with other parts of the product development process.

Concept creation drives system design and often begins with a set of written documents as design specifications. We suggest that a more "active" vehicle is needed to effectively describe the whole product or system under development. This hypothetical conceptual platform would model the whole product/system at an abstract block level, including high-level (but quantitative) views of processing elements, memories, batteries, displays, and other key system technologies and components.
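
A toy example of what such a quantitative block-level model might contain is sketched below; the blocks, attributes, and numbers are purely illustrative and are chosen only to show how product-level metrics (power, cost, operating time) could be rolled up from abstract blocks:

```python
# Hypothetical sketch of an abstract, quantitative block-level product
# model. Blocks and numbers are illustrative only.

blocks = {
    "processor": {"power_mW": 300.0, "cost_usd": 18.0},
    "memory":    {"power_mW": 120.0, "cost_usd": 9.0},
    "display":   {"power_mW": 250.0, "cost_usd": 14.0},
    "rf":        {"power_mW": 450.0, "cost_usd": 12.0},
}
battery = {"capacity_mWh": 3500.0, "cost_usd": 6.0}

total_power = sum(b["power_mW"] for b in blocks.values())
total_cost = battery["cost_usd"] + sum(b["cost_usd"] for b in blocks.values())
operating_time_h = battery["capacity_mWh"] / total_power

print(f"power {total_power:.0f} mW, cost ${total_cost:.2f}, "
      f"operating time {operating_time_h:.1f} h")
# -> power 1120 mW, cost $59.00, operating time 3.1 h
```

Even at this crude level of abstraction, swapping one block's technology (say, a lower power display) immediately updates the product-level metrics, which is precisely the kind of exploration the conceptual platform should support.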

Next generation system design exploration tools might be hyper spreadsheets [5], in which the tools are user-oriented design environments and the actual tool functionality resides somewhere in the network. Under this model, users get key estimates solved in their high-level design environment quickly and robustly without worrying about models or tool interoperability, because those capabilities are embedded in the network and maintained by others.

The product concept or "virtual prototype" at this level needs to link all the engineering disciplines in the development process, gather key factors, and compile the needed product-level metrics. The process must be highly user-oriented so that the right technology choices can be explored quickly and accurately enough regardless of the data’s origin or even its completeness. The use of such advisor technologies must also be self-explanatory, because the user cannot be an expert in all technology areas.

The EDA design tool infrastructure exists from functional system design down to physical implementation and manufacturing modeling. However, significant limitations to interoperability between the different tools and design hierarchies make it difficult to obtain key product-level metrics. There is also a lack of robust, data-driven, abstract performance-level modeling capabilities to connect conceptual and functional system-level design tools.

B. Accommodating Input Uncertainties

Virtual prototyping is performed at the earliest phases of the design process before detailed physical design (layout and routing) is done, and inputs are often little more than a bill of materials and packaging technology choices. Because minimal information is available at this point in the design, and the information that is available includes substantial uncertainties, a careful treatment of those uncertainties is necessary to obtain meaningful analysis results.

To facilitate making design and process decisions with only a few basic metrics, and without highly specific data input sets for every component or material used in a product, it must be possible to represent each input as a probability distribution rather than as a single fixed value. For example, in the case of the quality of an incoming part, experience with the supplier and the part suggests a most likely value for the incoming yield, but different shipments of parts may have yields that are slightly higher or lower.

If one or more of the input values are defined as a probability distribution, one or more of the final metrics that describe the system will be a probability distribution rather than a single value. The width of the resulting distribution provides a measure of the sensitivity of the computed metric to the uncertainties in the data inputs.
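
A minimal sketch of this propagation is shown below, assuming a triangular distribution for the incoming yield (per the example above) and a toy cost model; both the distribution parameters and the cost model are hypothetical:

```python
import random
import statistics

def assembled_cost(part_cost: float, n_parts: int, incoming_yield: float) -> float:
    """Toy cost model: defective incoming parts must be purchased again,
    so the effective part cost scales inversely with incoming yield."""
    return n_parts * part_cost / incoming_yield

# Sample the uncertain input (most likely yield 0.98, bounded by 0.95
# and 0.999) and propagate each sample through to the cost metric.
random.seed(1)
samples = [
    assembled_cost(part_cost=2.50, n_parts=120,
                   incoming_yield=random.triangular(0.95, 0.999, 0.98))
    for _ in range(10_000)
]

print(f"cost ~ {statistics.mean(samples):.2f} "
      f"+/- {statistics.stdev(samples):.2f} (1 sigma)")
# A wide output distribution signals that the metric is sensitive to
# the uncertainty in this input.
```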

V. Summary

The earliest stages of the design process provide the best opportunity to significantly impact a system’s characteristics. This notion has led to the concept of virtual prototyping, which allows system attributes to be defined and tested prior to large investments in design or fabrication. The integration of packaging tradeoff analysis with functional verification and architectural design in a hardware/software codesign environment results in a complete virtual prototyping solution that can be used to optimize electronic systems.

The value proposition attached to virtual prototyping for high manufacturing volume, time-to-market driven applications is different from that attached to low manufacturing volume applications; yet virtual prototyping plays a critical role for both market segments. In all market segments, life cycle cost is the most general metric for measuring the effectiveness of design, technology, and material decisions.

References

1. A. Dasgupta, E. B. Magrab, D. K. Anand, K. Eisinger, J. G. McLeish, M. A. Torres, P. Lall, and T. J. Dishongh, "Perspectives to understand risks in the electronic industry," IEEE Trans. on Components, Packaging, and Manufacturing Technology-Part A, vol. 20, pp. 542-547, December 1997.

2. M. Vertal and J. Crowley, "An approach to hardware/software co-modeling for rapid design exploration," Proc. High-Level Electronic System Design Conference, pp. 140-147, October 1997.

3. P. McCluskey, M. Pecht, and S. Azarm, "Reducing time-to-market using virtual qualification," Proceedings, Institute of Environmental Sciences, pp. 148-152, May 1997.

4. M. Pecht, "Reliability, maintainability, and availability," in Handbook of Systems Engineering and Management, A. P. Sage and W. B. Rouse (Editors), John Wiley and Sons, New York, 1999.

5. J. Rabaey, "Interoperability should be the problem of the person who provides the tools," IEEE Spectrum, vol. 35, pp. 60-61, September 1998.
 

Markku Lindell received his M.S. and Lic. Tech. degrees in electrical engineering from the Tampere University of Technology, Tampere, Finland, in 1988 and 1993, respectively. He is a Principal Engineer at Nokia Research Center, where his interests include electronic systems design, modeling, design methodologies and tools, and virtual prototyping.

Paul Stoaks received his B.S. degree in physics from Eastern Oregon State College in 1986. He is the Product Manager at Nu Thena Systems, Inc., an electronic design automation (EDA) company that produces the Foresight system design and Foresight Co-Design hardware/software co-design products. With nine years of experience in the EDA industry, he has worked as a software developer and product engineering manager on IC layout, EDA tool integration, and system-level planning software projects.

David Carey (M’86) received the B.S. and M.S. degrees in electrical engineering from Texas A&M University in 1983 and 1985, respectively.
He is a Technical Manager at the Microelectronics and Computer Technology Corporation (MCC), where he has worked for 13 years. His present areas of interest include system design methods, product and technology benchmarking, and system tradeoff analysis with an emphasis on consumer information appliances and associated manufacturing. Previous areas of activity have included microwave and high-speed digital system analysis, electronic packaging for high performance applications, low-noise power distribution methods, and multichip manufacturing process development. Mr. Carey is the author of numerous technical publications and holds over 15 patents. He is a member of Tau Beta Pi, Eta Kappa Nu, and IEEE-CHMT.

Peter A. Sandborn (M’87) received the B.S. degree in engineering physics from the University of Colorado, Boulder, in 1982, and the M.S. degree in electrical science and the Ph.D. degree in electrical engineering, both from the University of Michigan, Ann Arbor, in 1983 and 1987, respectively. He is an associate professor in the CALCE Electronic Products and Systems Consortium (EPSC) at the University of Maryland, where his interests include technology tradeoff analysis for electronic packaging, system life cycle and risk economics, hardware/software codesign, and design tool development. Prior to joining the University of Maryland, he was a founder and Chief Technical Officer of Savantage, a technical contributor at Nu Thena Systems, and a Senior Member of Technical Staff at the Microelectronics and Computer Technology Corporation. Dr. Sandborn is the author of over 50 technical publications and one book on multichip module design. He is currently an associate editor for the IEEE Transactions on CPMT – Part C.