Enhancing Semiconductor Design/Manufacturing Collaboration

By Eric

Whether for a single customer or a larger market, investing in new semiconductor products is a high-risk business with the potential for strong profitability, but also for significant loss. Mitigating risk in the manufacturing process goes a long way toward ensuring that those business investments are profitable, and one effective way to mitigate that risk is comprehensive automation of the collaboration between engineering and manufacturing. A number of benefits accrue through automation:

  • Consistent use of best-practice know-how
  • Reduction of ECO (engineering change order) costs caused by deviations from best-practice processes
  • Enhanced oversight and compliance for material and chemical content reporting
  • Accelerated product introduction
  • Faster, lower-cost accommodation of unexpected supply chain changes


This automation requires an integrated approach to configuring and managing the sourcing network as it applies to the IC BOM. The notion of an inverted IC BOM (see figure below) provides a model for defining the steps by which a wafer is transformed into integrated-circuit parts inventory. This becomes especially important when singulated dies find their way into a wide variety of finished-goods SKUs.

IC BOM Example
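To make the inverted-BOM model concrete, here is a minimal Python sketch of a single wafer fanning out into multiple finished-goods SKUs. All class, field and part names are hypothetical illustrations, not an actual product API:

    from dataclasses import dataclass, field

    @dataclass
    class BomNode:
        name: str                # e.g. "wafer", "singulated die", "packaged part"
        operation: str           # transformation step that produces this node
        outputs: list["BomNode"] = field(default_factory=list)

        def add_output(self, node: "BomNode") -> "BomNode":
            self.outputs.append(node)
            return node

        def leaf_skus(self) -> list[str]:
            """Walk the inverted BOM and collect every SKU this node feeds."""
            if not self.outputs:
                return [self.name]
            return [sku for child in self.outputs for sku in child.leaf_skus()]

    # One wafer start fans out into two finished-goods SKUs.
    wafer = BomNode("wafer_lot_A", "fabrication")
    die = wafer.add_output(BomNode("die_X", "singulation"))
    die.add_output(BomNode("SKU-1001 (QFN, commercial)", "assembly+test"))
    die.add_output(BomNode("SKU-1002 (BGA, industrial)", "assembly+test"))
    print(wafer.leaf_skus())  # both SKUs trace back to one wafer start

Inverting the tree this way makes the one-wafer-to-many-SKUs fan-out explicit, which is exactly the traceability question the sourcing network has to answer.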

The automation of this process is best done with a configurable rules system and a process definition editor that creates a hierarchical process definition governing the execution of the wafer-to-parts transformation. That transformation must not only embody the best-case scenario that maximizes profitability, but also be configurable to accommodate unforeseen business and technical factors that force a deviation from the best business case in order to meet customer commitments. It should also accommodate corrective workflows for possible process deviation errors.
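As a rough illustration of the idea (the names here are mine, not a product API), each step in such a hierarchical process definition could carry a pass/fail rule plus a corrective workflow to run when execution deviates from it:

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class ProcessStep:
        name: str
        rule: Callable[[dict], bool]                # pass/fail check on step results
        substeps: list["ProcessStep"] = field(default_factory=list)
        corrective: Optional["ProcessStep"] = None  # workflow invoked on deviation

    def execute(step: ProcessStep, results: dict) -> None:
        for sub in step.substeps:
            execute(sub, results)                   # children run before the parent check
        if not step.rule(results):
            print(f"deviation at {step.name}; routing to corrective workflow")
            if step.corrective:
                execute(step.corrective, results)

    # A test step whose rule enforces a hypothetical 95% yield floor.
    retest = ProcessStep("retest_on_alternate_handler", rule=lambda r: True)
    final_test = ProcessStep("final_test",
                             rule=lambda r: r.get("yield", 0.0) >= 0.95,
                             corrective=retest)
    execute(final_test, {"yield": 0.91})            # triggers the corrective path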

The rules engine should be able to define the complete sourcing network, including fabrication, bumping, singulation, assembly, sorting, testing, marking, inventory storage and shipment. Process managers should be able to create and change these processes without resorting to low-level IT coding support, so they can respond quickly to supply chain issues. The resulting process should also provide up-to-date requirements and test-result traceability from NPI to manufacturing, and it should include analytics for flexible, end-user-configurable assessment of process performance.
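One way to picture a process definition that managers can change without coding is a route held as plain data rather than code; the sites and step names below are purely illustrative:

    # Declarative sourcing network: reorder or re-source steps by editing data.
    sourcing_network = [
        {"step": "fabrication", "site": "foundry_A"},
        {"step": "bumping",     "site": "osat_B"},
        {"step": "singulation", "site": "osat_B"},
        {"step": "assembly",    "site": "osat_C"},
        {"step": "sorting",     "site": "osat_C"},
        {"step": "testing",     "site": "test_house_D"},
        {"step": "marking",     "site": "test_house_D"},
        {"step": "inventory",   "site": "warehouse_E"},
    ]

    def resource(network, step_name, new_site):
        """Swap the supplier for one step, e.g. after a supply chain disruption."""
        for step in network:
            if step["step"] == step_name:
                step["site"] = new_site

    resource(sourcing_network, "assembly", "osat_F")  # respond without touching code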

This process engine then becomes the structure for distributing manufacturing requirements and instructions and collecting test and operational data, creating a single go-to resource for design-to-manufacturing oversight.

Come visit us at the Design Automation Conference in San Francisco next week, where our process architects for design-to-manufacturing process coordination will be discussing and demonstrating solutions and best practices. We’ll be offering a full presentation and demo agenda, a cocktail hour and prizes.

Design Collaboration – What do we gain with integrated Design Analysis?

By Eric

Even in the early days of chip design, the different tasks involved, from architecture and logic design to layout and verification, were accomplished for the most part as individual efforts. The considerations of the “other disciplines” were usually not part of the equation in accomplishing one’s own task: “once the logic design is done, the back-end person can figure out how best to implement the layout.” When chip complexity and size were not so great, we could get away with this kind of approach.

Today, with large-scale SoC designs, aggressive design targets, sophisticated nanometer technologies and compressed schedules, this can no longer be the norm. More and more design tasks are being parallelized to compress design schedules. Design teams are much larger and can be located in different parts of the planet. Complex silicon technologies require deeper, more time-consuming analysis of a growing list of parasitic effects, such as cross-talk, inductive and capacitive coupling and junction leakage, to achieve functional, performance and power design targets. In addition, sophisticated design tools produce volumes of analysis data over hundreds of modes and corners for each step in the implementation flow, which allows engineers to evaluate whether the design is converging toward budget targets.

So how can we manage this torrential flow of data in a way that keeps us on track to meet aggressive schedules? We need the ability to collect all this data consistently from every design step of every project instance, wherever it is produced, into a centralized location. The data needs to be organized in a way that allows systematic review, from the project level down to detailed issue presentation. The hundreds of analysis corners that may be generated for each flow step, covering different process and operating conditions, should be captured and organized for quick review. Key metrics need to be displayed and highlighted, making it possible to decide where to focus first.

As shown in Figure 1 below, the system should allow all aspects of the analysis data to be viewed in context (timing, layout, power, congestion, etc.) to see how different metrics could be contributing to specific issues. Historical data collected by such a system can then be compared through various analysis capabilities (tables, plots, metric aggregation, views) to assess metric trends and determine whether the design is converging to expected targets. The system would enhance the ability to separate non-issues from project-critical issues, allowing focus on the key resolutions for the next implementation pass. Finally, the system should help in assessing the current status and progress of the design and highlight problematic blocks that need further attention.

Figure 1
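As a toy illustration of the centralized collection and convergence checking described above (the schema and metric names are invented for the example, not a real tool's output), per-corner metric snapshots could be recorded for each implementation pass and compared:

    from collections import defaultdict

    # Central store: (block, flow_step, corner) -> metric snapshots, one per pass.
    store = defaultdict(list)

    def record(block, flow_step, corner, metrics):
        store[(block, flow_step, corner)].append(metrics)

    # Two implementation passes of the same block, step and corner.
    record("cpu_core", "route", "ss_0p72v_125c", {"wns_ns": -0.42, "power_mw": 310})
    record("cpu_core", "route", "ss_0p72v_125c", {"wns_ns": -0.18, "power_mw": 305})

    def converging(block, flow_step, corner, metric):
        """True if the metric moved toward its target on the latest pass."""
        runs = store[(block, flow_step, corner)]
        return len(runs) >= 2 and abs(runs[-1][metric]) < abs(runs[-2][metric])

    print(converging("cpu_core", "route", "ss_0p72v_125c", "wns_ns"))  # True: WNS improved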

This integrated system would be useless without the ability to share the organized database with others to collaborate on issues, resolutions and trends as the design matures to completion. A centralized database where all team members see the same picture of the issues allows better decisions to be made and helps with communication between disciplines (i.e., front-end and back-end).

With the ability to collect data from anywhere at any stage of the flow, automatically keep track of design progress and analyze issues from an integrated view, the prospect of meeting or even pulling in schedules for these complex SoC design projects becomes more attainable.

Also, we’re going to be at the Design Automation Conference in San Francisco again this year. We will have a full presentation and demo agenda, a cocktail hour and prizes. Join us!

Why Design Data Management and Analytics aren’t just ordinary Big Data

By Eric

I’m an electronics engineer who spent part of his career in the business intelligence and analytics domain. In that regard, I’m always interested in technology and business areas that have unique analytics needs. Semiconductor design closure is one such domain. With 14-nanometer fabrication now coming on-line, integrated circuit complexity is taking another geometric step, as large projects can have 200+ IP blocks in their designs (see figure below).

Variability and Velocity are more critical than Volume

When you consider that millions of transistors can constitute a block, that blocks can be chosen from libraries of thousands, and that there can be multiple variations of a block, the analytics challenge approaches that of Big Data. This is not necessarily because of overall data size, though, but because of data complexity, variability and velocity.

For these large projects, then, the effort to meet timing, power, IR drop and other design parameters takes geometrically longer…yet again. Of course, some of this increased verification effort can be done in parallel by multiple design teams, each working on a sub-section of the chip. But ultimately the entire system design has to be simulated to assure a right-first-time design. I’m sure most would agree with me that system failure often happens at interfaces. Whether it’s an interface within a design or a responsibility interface between designers, it’s the same situation.

Why ordinary Big Data analytics won’t do the job

Effective analytics for design testing and verification provides a way to analyze interface operation from all relevant perspectives. Coming back to the topic of Big Data, my view is that commonly known Big Data analytics tools could be helpful, but they are not sufficient to meet this requirement. In particular, I observe that appropriate semiconductor big data analytics must have the following capabilities (a small sketch follows the list):

  • Support for the hierarchical nature of chip design.
  • Ability to integrate information from multiple design tools and relate them in some way to each other to indicate relevant cause/effect relationships.
  • The ability to compare and contrast these relationships using graphical analytics, so that the key ones are exposed quickly.
  • The ability to easily zoom, pivot, filter, sort, rank and do other kinds of analytics tasks on data to gain the right viewpoints.
  • The ability to deliver these analytics with minimal application admin or usage effort.
  • Effective visualizations for key design attributes unique to semiconductor projects.
  • The ability to process data from analog, digital and the other types of common EE design and simulation tools.
  • The ability to handle very complex, large chip design data structures so that requirement, specification and simulation consistency is maintained.
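To make the hierarchy and pivot/filter/rank points concrete, here is a toy Python sketch of a hierarchy-aware filter-and-rank query; the record fields and block paths are hypothetical, not a real tool's schema:

    records = [
        {"block": "soc/top/cpu0", "tool": "sta",   "corner": "ss_125c", "wns_ns": -0.30},
        {"block": "soc/top/cpu1", "tool": "sta",   "corner": "ss_125c", "wns_ns": -0.05},
        {"block": "soc/top/gpu",  "tool": "sta",   "corner": "ss_125c", "wns_ns": -0.55},
        {"block": "soc/top/gpu",  "tool": "power", "corner": "tt_25c",  "power_mw": 410},
    ]

    def worst_blocks(rows, under="soc/top", metric="wns_ns", n=2):
        """Filter to a hierarchy subtree, then rank blocks by worst metric value."""
        hits = [r for r in rows if r["block"].startswith(under) and metric in r]
        return sorted(hits, key=lambda r: r[metric])[:n]

    for r in worst_blocks(records):
        print(r["block"], r["wns_ns"])  # the gpu block surfaces first: worst slack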

It seems to me that semiconductor design engineers have been quietly contending with Big Data analytics challenges even though they haven’t necessarily been part of the mainstream Big Data conversations. Yet, the tools in use for chip design perhaps have some very interesting capabilities for other technical and business disciplines. My $.02.

Also, we’re going to be at the Design Automation Conference in San Francisco again this year. We will have a full presentation and demo agenda, a cocktail hour and prizes. Join us!

Eric ROGGE is a member of the High-Tech Industry team. You can find him on Twitter @EricAt3DS.



