Factors Affecting the Future of the Semiconductor IP Management Business

By Eric

The era of semiconductor IP is here, and it is a good-sized business: more than £700M annually for ARM, more than $400M for Synopsys, and more than $100M for Cadence. And without a doubt, demand for semiconductor IP will continue to grow. Regardless of the size of the target market, every company creating semiconductors is now using or reusing IP, whether developed internally or externally. But a variety of business factors will shape the future of the IP business.

Sourced from: http://bit.ly/YxXx3F

Leading device manufacturers keep dreaming up new, more advanced features and capabilities that consumers want and buy, which drives semiconductors to become ever more complex and shorter lived. This affects the nature of IP. For example, short life cycles mean less business value per design, which in turn affects investment in quality by both the IP provider and the licensee. Quality is gained through the V&V process as well as development support. IP providers and consumers with differentiated V&V know-how and support systems will very likely have an edge in the market. Similarly, the ability to capture and manage specifications and know-how for IP block integration will be a differentiator.

The larger device designers who are market leaders will have economies of scale working in their favor, allowing differentiated advancements in power consumption and functionality through finer-grain integration of IP blocks. They can absorb the additional V&V and design costs of stitching hundreds of IP blocks into a system. In fact, it is highly likely that these companies will continue to be the primary consumers of IP, because they can gain the most value from it. But as advances in differentiation slow in a particular product category (witness pocket calculators), this advantage may recede. Smaller vendors who become adept from the start at on-boarding, managing, and reusing IP, especially larger sub-systems, may gain advantage over time by constantly refining their development and IP licensing processes to maximize margins despite smaller served markets.

In summary, a number of factors, some of them opposing one another, will affect the business of creating and consuming semiconductor IP. Businesses that adapt to these factors, and implement processes and systems to streamline their IP management, will fare better against the external forces working against them. In some ways, it's a lot like being chased by a bear: you don't have to run faster than the bear, only faster than the guy next to you.

More information about Dassault Systèmes solutions for IP Management is available.

Enhancing Semiconductor Design/Manufacturing Collaboration

By Eric

Whether for a single customer or a larger market, investing in new semiconductor products is a high-risk business with the potential for strong profitability, but also for significant loss. Mitigating risks in the manufacturing process goes a long way toward assuring that those business investments are profitable. Risk mitigation can be achieved through comprehensive automation of the collaboration between engineering and manufacturing. A number of benefits accrue through automation:

  • Consistent use of best practice know-how
  • Reduction of ECO costs caused by deviations from best-practice processes
  • Enhanced oversight and compliance for material and chemical content reporting
  • Acceleration of product introduction time
  • Faster, lower cost accommodation for unexpected supply chain change decisions


This automation requires an integrated approach to configuring and managing the sourcing network as it applies to the IC BOM. The notion of an inverted IC BOM (see figure below) provides a model for defining the steps by which a wafer is transformed into integrated-circuit parts inventory. This becomes especially important when singulated dies find their way into a wide variety of finished-goods SKUs.

IC BOM Example
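To make the inverted structure concrete, here is a minimal sketch in Python of how such a BOM might be modeled; the class and field names are illustrative assumptions, not the schema of any particular product. The key point is the direction of fan-out: one wafer yields many singulated dies and packaged parts, which in turn feed many finished-goods SKUs, rather than many components rolling up into a single product.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FinishedGoodSku:
    """A finished-goods SKU that consumes packaged parts built from singulated dies."""
    sku_id: str
    parts_per_unit: int


@dataclass
class PackagedPart:
    """An IC part produced by assembling and testing a singulated die."""
    part_number: str
    consumed_by: List[FinishedGoodSku] = field(default_factory=list)


@dataclass
class Wafer:
    """A wafer that is bumped, singulated and assembled into many parts.
    The 'inverted' BOM: one wafer fans out into many parts and SKUs."""
    lot_id: str
    gross_dies: int
    yielded_parts: List[PackagedPart] = field(default_factory=list)

    def fan_out(self) -> Dict[str, int]:
        """Summarize how one wafer's parts spread across finished-goods SKUs."""
        usage: Dict[str, int] = {}
        for part in self.yielded_parts:
            for sku in part.consumed_by:
                usage[sku.sku_id] = usage.get(sku.sku_id, 0) + 1
        return usage
```

In a model like this, the fan-out view is what lets planners trace a wafer lot forward to every SKU it can serve, and trace a SKU shortage back to the wafer starts needed to cover it.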

The automation of this process is best done using a configurable rules system and a process definition editor that creates a hierarchical process defining the execution of the wafer-to-parts transformation. That transformation must not only embody the best possible scenario for maximizing profitability, but also be configurable to accommodate unforeseen business and technical factors that require deviating from the best business case in order to meet customer commitments. It should also accommodate corrective workflows for process deviation errors.

The rules engine should be able to define the complete sourcing network, including fabrication, bumping, singulation, assembly, sorting, testing, marking, inventory storage, and shipment. Process managers should be able to create and change these processes without resorting to low-level IT coding support, so that they can respond quickly to supply chain issues. The resulting process should also provide up-to-date requirements and test-result traceability from NPI to manufacturing, and include analytics for flexible, end-user-configurable assessment of process performance.
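As a rough illustration of what such a rules-driven, hierarchical process definition could look like, the sketch below models the sourcing network as ordered steps, each of which can carry a nested corrective sub-process for deviations. The step and site names are hypothetical, and this is not the actual rules-engine API; it only shows the shape of the data a process manager would edit instead of writing low-level code.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Step:
    """One manufacturing step (e.g. bumping or final test)."""
    name: str
    site: str                                  # supplier or internal site executing the step
    on_deviation: Optional["Process"] = None   # corrective workflow if the step deviates


@dataclass
class Process:
    """A hierarchical wafer-to-parts process: ordered steps, each of which
    may point to a nested corrective process."""
    name: str
    steps: List[Step] = field(default_factory=list)

    def route(self) -> List[str]:
        """Flatten the process into the routing that manufacturing would execute."""
        return [f"{s.name}@{s.site}" for s in self.steps]


# Illustrative definition of the sourcing network described above;
# all sites are made-up placeholders.
rework = Process("bump-rework", [Step("strip-and-rebump", "osat-2")])
baseline = Process(
    "wafer-to-parts",
    [
        Step("fabrication", "foundry-1"),
        Step("bumping", "osat-1", on_deviation=rework),
        Step("singulation", "osat-1"),
        Step("assembly", "osat-1"),
        Step("sort-and-test", "test-house-1"),
        Step("marking", "osat-1"),
        Step("inventory-and-ship", "warehouse-1"),
    ],
)

print(baseline.route())
```

Changing the sourcing network then becomes a matter of editing the step list (for example, moving assembly to a second OSAT) rather than rewriting integration code.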

This process engine then becomes the structure for distributing manufacturing requirements and instructions, collecting test and operational data, and creating a single go-to resource for design-to-manufacturing oversight.

Come visit us at the Design Automation Conference in San Francisco next week, where our process architects for design-to-manufacturing process coordination will be discussing and demonstrating solutions and best practices. We'll be offering a full presentation and demo agenda, a cocktail hour, and prizes.

Design Collaboration – What do we gain with integrated Design Analysis?

By Eric

Even in the early days of chip design, the different tasks involved (architecture, logic design, layout, and verification) were accomplished for the most part as individual efforts. Considerations of the "other disciplines" were, most of the time, not part of the equation in accomplishing one's own task: "once the logic design is done, the back-end person can figure out how best to implement the layout." When chip complexity and size were not so great, we could get away with this kind of approach.

Today, with large-scale SoC designs, aggressive design targets and schedules, and sophisticated nanometer technologies, this can no longer be the norm. More and more design tasks are being parallelized to compress design schedules. Design teams are much larger and may be located in different parts of the planet. Complex silicon technologies require deeper, more time-consuming analysis of a growing list of parasitic effects, such as cross-talk, inductive and capacitive coupling, and junction leakage, to achieve functional, performance, and power design targets. In addition, sophisticated design tools produce volumes of analysis data over hundreds of modes and corners for each step in the implementation flow, data that allows engineers to evaluate whether the design is converging toward its budget targets.

So how can we manage this torrential flow of data in a way that keeps us on track to meet aggressive schedules? We need the ability to collect all this data consistently from each design step, from every project instance, wherever it is produced, into a centralized location. The data needs to be organized in a way that allows systematic review, from the project level down to detailed issue presentation. The hundreds of analysis corners that may be generated for each flow step, covering different process and operating conditions, should be captured and organized for quick review. Key metrics need to be displayed and highlighted, making it possible to decide where to focus first. As shown in Figure 1 below, the system should allow all aspects of the analysis data (such as timing, layout, power, and congestion) to be viewed in context, to see how different metrics could be contributing to specific issues. Historical data collected by such a system can then be compared through various analysis capabilities (tables, plots, metric aggregation, views) to assess metric trends and determine whether the design is converging to expected targets. The system would enhance the ability to weed out non-issues from project-critical issues, allowing focus on the key resolutions for the next pass of implementation. Finally, the system should help construct the current status and progress of the design and highlight problematic blocks that need further attention.

Figure 1
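As a concrete sketch of the data handling described above, the Python fragment below shows one way a central store could reduce per-corner results to worst-case metrics per block and compare them against a historical run to judge convergence. The record fields and metric names are assumptions made for illustration, not the schema of a specific tool.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class MetricRecord:
    """One analysis result collected from a design step, e.g. worst slack for one corner."""
    block: str
    flow_step: str      # e.g. "place", "cts", "route"
    corner: str         # process/voltage/temperature corner or mode
    metric: str         # e.g. "wns_ps", "leakage_mw"
    value: float


def worst_case(records: List[MetricRecord]) -> Dict[Tuple[str, str], float]:
    """Reduce hundreds of corners to the worst value per (block, metric)."""
    worst: Dict[Tuple[str, str], float] = {}
    for r in records:
        key = (r.block, r.metric)
        # Assumes "smaller is worse" for the metric (e.g. slack); invert as needed.
        worst[key] = min(worst.get(key, r.value), r.value)
    return worst


def trend(current: Dict[Tuple[str, str], float],
          previous: Dict[Tuple[str, str], float]) -> Dict[Tuple[str, str], float]:
    """Compare the latest run with a historical run to see whether each metric is converging."""
    return {k: current[k] - previous.get(k, current[k]) for k in current}
```

A dashboard built on this kind of reduction is what makes it possible to separate non-issues from project-critical ones at a glance, block by block and metric by metric.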

This integrated system would be useless without the ability to share the organized database with others, to collaborate on issues, resolutions, and trends as the design matures to completion. A centralized database where all team members view the same picture of the issues allows better decisions to be made and helps with communication between disciplines (i.e., front-end and back-end).

With the ability to collect data from anywhere at any stage of the flow, automatically keep track of design progress, and analyze issues from an integrated view, the prospect of meeting or even pulling in schedules for these complex SoC design projects becomes more attainable.

Also, we're going to be at the Design Automation Conference in San Francisco again this year. We will have a full presentation and demo agenda, a cocktail hour, and prizes, so join us!


