[ Back ]   [ More News ]   [ Home ]
September 26, 2005
New Physical Verification System from Cadence

by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us! Questions? Feedback? Click here. Thank you!

Introduction

On September 12, 2005 Cadence introduced its Physical Verification System (PVS) for rapid turnaround of DRC and LVS. The system's massively parallel approach facilitates multiple design turns per working day, even for the largest designs at 90 nanometers, 65 nanometers and below that would otherwise require overnight or multi-day runs. Cadence claims that PVS delivers near-linear performance scaling across very large numbers of CPUs and, compared with conventional tools, significantly decreases physical verification cycle time as well as the overall number of cycles required. Cadence lost its leadership position in DRC/LVS to Mentor Graphics some time ago. This new offering is based in part on technology acquired from eTop, a Beijing-based EDA firm. I had an opportunity to discuss it with Mark Miller, Cadence VP of Business Development for DFM.

How does DFM fit into Cadence from an organizational point of view?
The big split at the top is between the field sales channel oriented side and the product development side. The whole branch that does product development is called PRO, for Product and Technology Organization. That's headed by Jim Miller, no relation to me (at least not that I'm willing to admit). He has a couple of operating groups plugging into him, including the Virtuoso team, the Encounter team and the DFM team. The DFM business unit is run by Mark Levitt, the VP of DFM, and I work for him.

Is physical verification a subset of DFM and if so what percentage (some, most, all)?
It's a significant portion. At Cadence it includes all of the RC extraction tools for high precision transistor level extraction, power analysis technology, all of the Voltage Storm stuff, as well as all the physical verification and yield optimization technologies.

What would be considered part of DFM but not part of physical verification?
Basically anything that would be used for final chip analysis, optimization, signoff and tapeout. All the signoff level accuracy tools. Analysis and extraction teams all report through here. Also the lithography optimization and treatment.

What functions are included in physical verification?
Generally DRC (Design Rule Check), LVS (Layout versus Schematic) and EDRC (Electrical Design Rule Check).

What is the importance of these applications, and are they becoming even more important nowadays?
Traditionally these technologies have been used to sign off literally every design as it approaches completion of the design phase, before handing it over to manufacturing. This stage of design rule checking ensures that the physical implementation of the design is indeed manufacturable: in other words, that all the tolerances, the spacing, the edge-to-edge relationships between all of the geometries in the database are at or above the minimums that the manufacturing technology you are targeting is capable of. It is a crucially important step and it is done for virtually every design that takes place. The complexity of that task as you move from 180 nm to 130 nm to 90 nm to 65 nm grows exponentially. The rule deck, the collection of rules that you are checking the design against, grows dramatically in size and the rules themselves are much more complicated. Therefore there has been a big increase in run time, CPU time consumed and the number of iterations that design teams have to go through to reach DRC closure on their designs.

Where does physical verification fit in the design flow?
There are sort of two variations on the theme. First, as people are creating data, as they are building cell libraries or blocks or placing a couple of blocks together and wiring them up, they need to incrementally check their work as they create new data. That's what is generally thought of as incremental or interactive DRC. The other variation on the theme is when you start getting large quantities of data, for example a large block that has just finished being placed and routed inside a P&R system, and now you want to add the cell expansion from the cell library to the place and route data and run a high precision DRC on that entire block. That might run for hours and in some cases for days, if you are trying to do the whole chip. That second category is large-job or whole-chip batch physical verification, and that's really where this new product we're talking about today is targeted: the large block or full chip physical verification challenge. That's the one that has grown explosively in terms of its complexity and the amount of time necessary to perform it.

What is the source of the rule deck?
In the end, the design teams themselves, as it turns out. But in general the base rule deck comes from whoever is going to be manufacturing the chip. If you're inside an IDM like Intel or STMicro designing the chip internally, you're going to get it from the CAD organization that supports the product. If you're using a foundry like TSMC, UMC or Chartered, you're going to get the rule deck from your foundry support team. In the case of a fabless company that's using a foundry to manufacture the design, they will augment the rule deck with a lot of internal expertise. They may feel strongly that they can do more or do an even better job. Really, in the end, if you do this thoroughly you ensure that your design performs at process nominal yield and process nominal parametric values. But if your team happens to be extra smart or extra experienced, they may use secret sauce techniques or tweaks, added to the baseline deck, that allow them to squeeze an extra few points of yield out of it. At least that's the theory.

Is there any benefit from prior experience, prior runs or hierarchical organization or does one start from scratch every time you make a physical verification run?
By nature they are often somewhat incremental, meaning that each time you finish a run, you're going to see results. You perform a series of checks and then make changes or modifications to the design to fix the errors that are found. Depending upon the nature of it, you may want to run just one or two cells to make sure that you've got it right before you launch the big batch run again, if you've made a localized change. The answer is sort of both. In the end you're still going to want to run the entire design in batch from top to bottom but people do use it incrementally on occasion to perform checks like that. There are other more traditional tools from Cadence inside Virtuoso where there's the capability to do small incremental checks.

What was the motivation for developing this new product?
With batch physical verification, the rule decks get much bigger and the rules themselves get much more complicated as one makes process node transitions.

This whole area of design rule verification and DRC tools has been around for a while. It was part of what originally formed Cadence; we kind of invented this game when ECAD and SDA Systems merged to form Cadence in 1988. ECAD was basically a company that wrote just DRC tools, so we've had as long a history with this technology as anybody in the industry. Over the years, with Moore's Law and all, design sizes have increased rather dramatically on a regular schedule, and the type of jobs these verification engines were expected to do has grown.

One of the first fundamental breakthroughs, maybe 10 to 15 years ago, was that people started to process their designs hierarchically. Originally all this stuff was done flat, meaning that, just like every other process in the design, you looked at one layer at a time and processed all the geometry as one big flat pile of polygons, rectangles and pads. That worked great until designs reached a certain size and the rule decks increased a little bit in complexity; suddenly you were looking at untenably long run times, maybe multiple days in a lot of cases, multiple weeks in the worst cases. Hierarchy was the answer. As you know, there are a lot of duplicate structures in a big chip design; if you're looking at a memory chip, every one of the core memory cells is a replication of the one next to it. So the tools would leverage that replication and take shortcuts, if you will, to dramatically shorten the run time.

That worked great until about 5 years ago, when we started running into subwavelength lithography related rules, meaning we were trying to print things smaller than the wavelength of the light we're printing them with. Now we are using 193 nm light. The issue is that lithography effects are all about who's next to you, who's your next-door neighbor, because it is frequency domain analysis. The relationships that cause violations here depend on whether there is another object nearby, meaning they are all about interference patterns. That, as it turns out, broke hierarchical processing. You go to all the trouble of breaking the design down hierarchically to get to the bottom level of data, only to realize that there is a cell right next door, a piece of data from a different branch of the hierarchy, so you have to throw the hierarchy away, flatten it and process it the old way anyway in order to get the right answers. That's one of the reasons that run times have exploded, kind of gone in a very ugly direction.

The second reason is that the existing solutions out there right now haven't really taken advantage of multiprocessing architectures, specifically distributed processing. About 10 to 15 years ago SMP, or symmetric multiprocessing, and multithreading were the vogue thing to do, if you look at the architectures of all the Sun machines being sold at that point in time. They were 4 to 8 CPUs with one common memory and one common disk subsystem that they would talk through. That multithreading, SMP approach is great at accelerating one particular algorithm up to a point, but unfortunately it runs out of gas at about 8 to 10 CPUs due to the sharing of the memory, the disk subsystem and interprocess communication.
A different architecture has come to prominence in the last several years: grid-oriented computing, with blade servers and hundreds to thousands of them available. You basically break the design down into tiles or windows and process them in parallel. The solutions that are out there right now really haven't taken advantage of this most recent approach of distributing the processing of the design.
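ED: To make the tile-and-distribute idea concrete, here is a minimal Python sketch of a flat spacing check cut into overlapping windows and farmed out to worker processes. It is purely illustrative, not Cadence's engine; the rule value, tile size and halo margin are made up, and a real tool would partition the data itself rather than ship the whole layout to every worker.

```python
# Conceptual sketch of window/tile-based distributed DRC; not the PVS engine.
# Rectangles are (xmin, ymin, xmax, ymax); rule value and tile size are made up.
from itertools import combinations
from multiprocessing import Pool

MIN_SPACING = 3     # hypothetical minimum-spacing rule, in layout units
TILE = 100          # tile edge length
HALO = MIN_SPACING  # overlap so neighbors across a tile boundary are still seen

def gap(a, b):
    """Separation between two axis-aligned rectangles (0 if they touch or overlap)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def check_tile(args):
    """Check all shape pairs that fall inside one halo-expanded window."""
    (x0, y0), shapes = args
    window = (x0 - HALO, y0 - HALO, x0 + TILE + HALO, y0 + TILE + HALO)
    local = [s for s in shapes
             if s[2] >= window[0] and s[0] <= window[2]
             and s[3] >= window[1] and s[1] <= window[3]]
    return [(a, b) for a, b in combinations(local, 2)
            if 0 < gap(a, b) < MIN_SPACING]   # 0 excludes overlaps (a different rule)

def run_drc(shapes, extent=(0, 0, 1000, 1000), workers=4):
    tiles = [((x, y), shapes)                 # a real tool would clip data per tile
             for x in range(extent[0], extent[2], TILE)
             for y in range(extent[1], extent[3], TILE)]
    with Pool(workers) as pool:
        results = pool.map(check_tile, tiles)
    # Halos mean a violation can be reported by two adjacent tiles; deduplicate.
    return {frozenset((a, b)) for hits in results for a, b in hits}

if __name__ == "__main__":
    layout = [(0, 0, 10, 10), (11, 0, 20, 10), (50, 50, 60, 60)]  # toy data
    print(run_drc(layout))  # the first two rectangles are only 1 unit apart
```

The halo around each window is what handles the "who is your next-door neighbor" problem at tile boundaries, and it is also why the same violation can be seen twice and has to be merged afterwards.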

There is also the complexity of the rules themselves. There are literally thousands of rules in one of these rule decks. One of the reasons is that the languages used to express these rules are low level, sort of like assembly code: Boolean operations, edge-to-edge references, spacing checks. There isn't much in the way of high level abstraction or expression of intent. One particular check, say a latchup check, might take 500 to 600 lines of DRC code to express. That's a problem for the person who has to write the rules, and it is also a problem for the person who is looking at errors found by that rule and has to refer back to the code to figure out what broke. And certainly not least, the rework cycle is quite long and unfortunately wholly serialized, meaning that when a job is run, you have to wait for the end of the job before you can look at the errors, start to understand what was found and begin to fix them, grade them or waive them if they were false errors.
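ED: To give a flavor of what 'low level' means here, the toy Python below builds one piece of design intent, every well must have a tap within a given distance, out of two geometric primitives (bloat and intersect). The function names, layers and numbers are invented for illustration and are not any vendor's rule-deck syntax; real decks chain hundreds of such primitives per check, which is how a latchup check ends up at 500-plus lines.

```python
# Illustrative only; not any vendor's DRC rule language.
def bloat(rect, d):
    """Low-level primitive: grow a rectangle outward by d on every side."""
    x0, y0, x1, y1 = rect
    return (x0 - d, y0 - d, x1 + d, y1 + d)

def overlaps(a, b):
    """Low-level primitive: True if two axis-aligned rectangles intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def tap_distance_check(wells, taps, max_dist=5):
    """The 'intent': flag every well with no tap within max_dist, built from primitives."""
    grown_taps = [bloat(t, max_dist) for t in taps]
    return [w for w in wells if not any(overlaps(w, g) for g in grown_taps)]

wells = [(0, 0, 20, 20), (100, 100, 120, 120)]
taps = [(8, 8, 10, 10)]                  # only the first well has a nearby tap
print(tap_distance_check(wells, taps))   # -> [(100, 100, 120, 120)] is flagged
```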

What is really new with this physical verification system?
The thing that we have done that is truly remarkable here is the optimizing compiler. We have created a software architecture that is highly modular in nature. At the beginning of its execution, the first thing this optimizing compiler does is look at three things simultaneously, analyze them, and then optimize that particular run for those three entities. They are the rule deck, the incoming data from OpenAccess (we also support a GDSII stream, as you might imagine) and the actual computer resources available. It examines the dependency tree inside the rule deck and analyzes whether it can be broken down into independent subdecks that don't require any cross calculations between sections of the deck; it looks at the hierarchy of the design, the replication and the use of arrayed structures inside the chip; and finally it looks at the available machines on your server array or network. This part of the flow actually stops and does a quick performance check to ascertain exactly what performance levels the machines are capable of. It takes these three factors, creates an optimized run deck, if you will, for the particular job, and then launches it and spreads it across the server array.
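ED: A rough illustration of the 'independent subdeck' idea, not the PVS compiler: if each rule declares which layers it touches, then rules that share no layers, directly or transitively, can be grouped and sent to different machines with no cross-communication. A simple union-find pass over hypothetical rule and layer names is enough to find those groups.

```python
# Toy model of splitting a rule deck into independent subdecks; rule and layer
# names are hypothetical, and the real compiler also weighs design hierarchy
# and machine performance, as described above.
RULES = {
    "m1_spacing":   {"metal1"},
    "m1_width":     {"metal1"},
    "via1_enclose": {"metal1", "via1", "metal2"},
    "poly_density": {"poly"},
    "well_tap":     {"nwell", "ndiff"},
}

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Rules sharing a layer land in the same set, as do rules linked transitively.
for rule, layers in RULES.items():
    for layer in layers:
        union(rule, "layer:" + layer)

subdecks = {}
for rule in RULES:
    subdecks.setdefault(find(rule), []).append(rule)

for i, deck in enumerate(subdecks.values(), 1):
    print(f"subdeck {i}: {deck}")
# -> one subdeck with the three metal1-related rules, and two more that can
#    run on separate machines with no data exchange.
```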

The net result of this architecture is that we are able to deliver remarkable levels of performance improvement. The industry-standard solution out there right now from one of our competitors sets the baseline; our customers have told us that if they are going to consider a change, they need to see a performance improvement in the neighborhood of 10X.

Mark shared with me a graph showing the speedup for three different designs on 16, 32 and 64 CPUs. The first two designs were 130 nm and the last was a 90 nm design, a big processor. The GDSII file sizes were 396 MB, 3,947 MB and 662 MB. The times for a single CPU to process the deck were 121 minutes, 447 minutes and 750 minutes. The performance scaled linearly for all three designs.

These are pretty big chips, by the way, not little things. The point is that with 16 CPUs we are able to meet or beat the 10X requirement. A job that traditionally runs overnight, one that you launch at 5 PM and come back to the next morning to look at the results, can now be done over lunch. As one of our associates said, it had better be a short lunch. If you happen to have more computer resources available, the design environment scales linearly, as shown by the performance of the 32- and 64-CPU configurations.
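ED: As a back-of-the-envelope check on those figures, assuming perfectly linear scaling from the single-CPU times quoted above (the design labels below are mine, not Cadence's):

```python
# Ideal-scaling arithmetic only; the measured results approach but need not
# exactly equal these numbers.
single_cpu_minutes = {"130 nm design 1": 121, "130 nm design 2": 447, "90 nm design": 750}
for name, minutes in single_cpu_minutes.items():
    for cpus in (16, 32, 64):
        print(f"{name}: {minutes} min on 1 CPU -> ~{minutes / cpus:.0f} min on {cpus} CPUs")
```

Under that assumption the 750-minute job drops to roughly 47 minutes on 16 CPUs and about 12 minutes on 64, which is where the 'better be a short lunch' remark comes from.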

The basic idea is linear scalability, massive parallelism and an optimizing compiler that simultaneously optimizes for the rule deck, the incoming data stream and the available resources.

How is the relative performance with only one CPU?
The most commonly available industry solution (Mentor Graphics Calibre) is the baseline. Mark showed me another chart showing the CPU time and memory consumption for six different 90 nm customer designs. In all but one case, Cadence's new physical verification system tied or beat the baseline CPU time. Memory consumption was also very competitive, no more than 20% greater in any case.

The press release contains the following quote from Shoji Ichino, general manager, LSI Technology Development at Fujitsu:

"The Cadence Physical Verification System is the leading solution that addresses Fujitsu's needs for advanced sub-90-nanometer designs and that also delivers the performance scalability we require to reach 65 nanometers and below. The system offers outstanding performance, concurrent results reporting, and superior integration with the Virtuoso platform and OpenAccess. The Cadence Physical Verification System is in production use by our worldwide design teams for 90- and 65-nanometer physical verification and its extensibility will be used in the future to address manufacturing and yield optimization."

This underscores the fact that although we're not releasing this product for volume distribution right now, it is in production use. Fujitsu, our development partner, is using it in production at both 65 and 90 nm and achieving splendid results at this point.

I understand that PVS is a highly modular environment.
By modular in nature we mean that we've got a whole variety of different engines in the architecture. The optimizing compiler, as it looks at the rule deck, can call a number of different execution engines depending upon the nature of the rule and the nature of the incoming data stream. We've got high performance flat engines, hierarchical engines and a whole family of what we are calling “dedicated” engines, which are special purpose executables targeted specifically at some of the ugliest and most complex checks I described earlier, like latchup checks, antenna checks, width dependent checks and density gradient checks, as well as programmatic integration with our RC extraction, our litho solutions, our RET product family and mask data prep. This thing was architected in a way that we would be able to live with it for 15 years. As you can imagine, this is a pretty big project; we can't afford to do this every year. It was designed for modularity, extensibility and scalability, not just in performance but in the ability to be internally enhanced, extended and improved.
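ED: The 'call the right engine for each rule' idea can be pictured as a simple dispatch table, sketched below with hypothetical engine and rule names; the actual PVS engine selection is of course far more involved.

```python
# Hypothetical sketch of per-rule engine dispatch; not the PVS internals.
from typing import Callable, Dict

def flat_engine(rule: str) -> str:
    return f"flat engine ran {rule}"

def hierarchical_engine(rule: str) -> str:
    return f"hierarchical engine ran {rule}"

def antenna_engine(rule: str) -> str:
    return f"dedicated antenna engine ran {rule}"

def density_engine(rule: str) -> str:
    return f"dedicated density-gradient engine ran {rule}"

DISPATCH: Dict[str, Callable[[str], str]] = {
    "antenna": antenna_engine,
    "density_gradient": density_engine,
    "hierarchical": hierarchical_engine,
}

def run_rule(rule: str, kind: str) -> str:
    # Fall back to the flat engine when no dedicated engine matches the rule kind.
    return DISPATCH.get(kind, flat_engine)(rule)

print(run_rule("M1.ANT.1", "antenna"))   # routed to a dedicated engine
print(run_rule("M1.S.1", "spacing"))     # falls back to the flat engine
```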

It is tightly integrated with Encounter, our digital IC design tool set, as well as with Virtuoso, our custom IC design tool set.

How would you summarize?
The net result is that this solution dramatically improves the amount of time it takes to do a full chip, signoff-level DRC. We are able to scale the performance linearly up to 100+ CPUs with no sacrifice whatsoever in accuracy. In fact, there are a lot of cases where we are able to improve accuracy: the dedicated engines can use much more accurate algorithms, as opposed to the low level, assembly-like checking commands I mentioned earlier. This is integrated within all the standard Cadence flows, our custom IC and digital flows, and in the end it should provide our customers with a much higher level of performance and a lower cost of ownership and support.

How is this solution packaged?
We are packaging this in three different configurations; you might think of them as L, XL and GXL. There are a number of different options depending upon your level of interest in some of the modules I described, like RET and mask data prep (MDP); you might or might not want to use our solutions in all categories. It is being broken up in ways that let you pick and choose components. The primary packages will be centered around a baseline version, an extended performance version, and a version that is scalable to a literally unlimited number of CPUs and includes a set of really advanced technologies for yield enhancement and optimization.

What operating systems does this support?
Right now, pretty much every Linux variant you can think of, 32 bit and 64 bit Linux machines, as well as Sun Solaris. All the platforms you would expect to find at any one of our major semiconductor design customers.

When do you expect to release this for general distribution?
December! This is a technology announcement. We are still in developmental partner mode now for the next couple of months. In December we will ramp it up for full sales channel distribution and volume production.

What is the anticipated pricing for this offering?
It starts at $50,000 for a single CPU and scales upwards.

What is the upgrade price for customers that already have some of your DFM and physical verification modules?
In virtually all cases, large major customers already have some kind of relationship with us in place, so upgrades will be considered case by case as they look through the technology and decide how it applies to them. I don't have a one size fits all answer for you, because there are so many different variations in how our customers have installed the more legacy tools.

Would a new customer at the lowest level configuration for this new environment be paying more than for the existing tool set?
In general when you buy more, you pay less per unit. Conceptually, if you're looking for a guide post, if you're buying a bundled configuration that has a lot more functionality, it would be more cost effective than buying as individual line items. Again it is difficult for me to talk generally about this.

Do you anticipate that a large number of prospects will already have compute farms?
Oh, yeah! Just about everybody big has one. Just about everybody medium size or smaller either has started to build one or rents one when they need it from IBM, HP or SUN.

The other thing that is neat about this: I don't know how closely you've looked lately at the hardware that is available out there, but it's rather remarkable what $1,500 will buy you in a blade these days. You are looking at a single or dual Opteron with a surprising amount of memory on it and some disk. For $1,500 to $2,000 a slice, you can just keep adding these things into a rack. That's the essence of why this architecture seems to be exploding with our customers so quickly right now.

The lithography modules seem to be more of a post-processing step on a completed design than a DRC tool.
In some important ways it's less interactive, though errors are still found. There are applications for this type of technology that involve litho rule checking, where you take an actual post-litho simulated result, or even a post-OPC treatment, in other words the shapes as distorted as they will be printed, and run a checker against it to make sure that all of the litho critical dimensions and litho measurement points are still within tolerances. That's one application space for this class of technology.

If one finds an error, how far back does one go?
It depends. When you are in the litho treatment end of the world, the last thing you want to do is go back and talk to the design team. In some cases it happens, but what you want to do is adjust the treatment or some parametric value on the engine to compensate for the specific error you found. Sometimes, though, you have to go all the way back to the design team to make the change. That's one of the reasons that those loops in the design flow are so horrifically expensive these days.

Are there any competitors out there offering distributed processing for physical verification?
I'm not exactly sure what other competitors are doing. I think Synopsys not too long ago made an announcement along these lines. I think that Magma made one as well. But we don't see them at any customer sites.

Was this technology home grown or the result of an acquisition?
There was an acquisition that took place about a year ago, a little company called eTop (ED: eTop Design Automation, a Beijing startup, acquired in August 2004). That's where one of the core DRC checking engines we included in this environment came from. Virtually all the distributed processing and massive parallelism, the concurrent reporting of results (I failed to talk about this earlier. ED: It delivers error data during runtime, via OpenAccess, into a Virtuoso-based debug environment. Designers can start to debug designs almost immediately after commencement of a physical verification run.), the optimizing compiler, all of the secret sauce I was describing, is pretty much home grown code.

Besides Fujitsu are there any other alpha sites, beta sites or early adopters?
There is actually a fairly large number right now, but nobody else whose name I am in a position to give. We are working on a lot of projects that haven't taped out yet.

Are there any planned third party offerings in connection with this?
Nothing at this particular point in time.

What about the existing portfolio of DFM products? Dracula, ..?
Dracula is the original legacy DRC tool from the ECAD acquisition I mentioned, 15 or so years ago when Cadence was originally formed. That tells you how long these things live, by the way. It's not used on cutting edge designs right now, but it's still around: people doing legacy incremental revisions of old chips, old military stuff, but not cutting edge designs. It also turns out that there is a surprising market for Dracula in mainland China right now, because a lot of those design teams are using what we wouldn't consider cutting edge technology.

Before this introduction Assura was Cadence's mainline DRC solution. It worked fine for us up through about 130 nm. Customers did pretty well with it. But as we hit 90 nm and certainly as we are going to 65 nm, we came to the realization that we needed a whole new architecture to get it to scale and perform the way we wanted it to for the customer designs we saw coming.

What about Fire & Ice, Voltage Storm, ..?
As I mentioned before, there is a lot of stuff in the DFM bucket. It's pretty much the whole spread of tools needed to extract, model and aggregate the behavioral traits of design data with respect to its electrical characteristics and manufacturability. Most of these modules I would not consider mainstream components of the new physical verification system.



The top five articles over the last two weeks as determined by the number of readers were:

Mentor Graphics Appoints New Vice President Mentor appointed Arun Arora as vice president and corporate treasurer. Previously Mr. Arora had responsibility for a significant portion of US sales and business development at Verari Systems. Prior to that, he was a senior member of the Corporate Development and Western Area Sales Operations Groups at Sun Microsystems. He has also held positions with companies including Credit Suisse First Boston and Goldman Sachs.

Cadence Unveils Next-Gen Verification System (Electronic News Magazine) See this week's editorial

Class Action Lawsuit Against Synopsys Dismissed The class action lawsuit, filed in August 2004 in US District Court for the Northern District of California, had alleged securities laws violations by Synopsys and certain of its officers. Judgment has now been entered in favor of Synopsys with each party bearing its own fees and costs.

New Product Announcement: EverCAD to Deliver the Next Generation All-in-One Mixed-Signal Verification Solution, DIAMOND, for Complex Mixed-Signal SoC designers. Slated for availability early next month, DIAMOND is designed for circuit level verification of designs with millions of gates, including mixed-signal SoCs, video and network processors, DSPs, Flash, SRAM, DRAM, CAM and other memory-like structures. DIAMOND's inherent strength is its ability to accurately simulate delta-sigma converters, ADCs/DACs, switched-capacitor filters, PLLs (phase-locked loops), charge pumps, high-speed transceivers and large analog designs which require long simulation times when using SPICE tools.

Si2 Forms New Open Modeling Coalition OMC will address critical issues -- such as accuracy, consistency, security, and process variations -- in the characterization and modeling of libraries and IP blocks used for the design of integrated circuits. The initial OMC Working Groups (WG) will address the proposed library standards for Effective Current Source Model (ECSM), Data Model Consistency, Statistical Timing and Characterization Data.

Structured ASICs to Solve Cost and Design Issues in the IC Industry See last week's Editorial



Other EDA News

True Circuits Introduces New Line of High Resolution Clock Generator PLLs

LSI Logic Adopts Synopsys' New Design Planning Capabilities for Industry-Leading ASICs

STMicroelectronics and Synopsys Demonstrate Interoperability of Their SATA IP Cores for 90nm Technology

Synopsys DesignWare Verification IP for AMBA 3 AXI is First to Earn ARM 'AMBA 3 Assured' Logo Certification

Synopsys Speeds Development of High Performance Designs With AMBA 3 AXI Synthesizable IP in DesignWare Library

Springer Publishes ARM-Synopsys Verification Methodology Manual for SystemVerilog

Synopsys Announces Source-Code License for SystemVerilog Verification Library

Global Unichip Collaborates With Cadence and Kilopass to Deliver Innovative Consumer Entertainment Designs

Mentor Graphics Appoints New Vice President

Sun Microsystems Servers With Solaris Operating System and UltraSPARC (R) Microprocessors Deliver up to Fivefold Performance Boost; Sun Captures Performance Lead From IBM, HP

ADVISORY/ VaST Systems Technology CTO to Chair Compilation & Power Session at EMSOFT, Present at EMSOFT and PARC '05

Synopsys Extends Galaxy Design Platform with JupiterIO for Concurrent Die and Package Floorplanning and Analysis

New Product Announcement: EverCAD to Deliver the Next Generation All-in-One Mixed-Signal Verification Solution, DIAMOND, for Complex Mixed-Signal SoC designers.

Other IP & SoC News

NEC Electronics America Announces New 8.4-Inch LCD With Ultra-Advanced, Super-Fine TFT Technology for Industrial Use

Altera Receives Request for Information From SEC

Avnet, Atmel Supercharge Battery Technology Inc.'s Design Process; New Laptop Batteries Are Now Smaller While Boasting Higher Performance Than Before

CheckSum Provides Low-Cost In-Circuit with Boundary-Scan Test; Low-Cost Test Platform Delivers In-Circuit with Boundary-Scan Test for Under $70,000

MagnaChip Semiconductor Launches a One-Chip Solution for the Digital Still Camera Application Market

STMicroelectronics Publishes Certified EEMBC Results for VLIW Core Used in High Performance Embedded Consumer Applications

STMicroelectronics Introduces Step-Change in CPU Performance in Set-Top Box Decoder Chip

WiSpry Names Nathan Silberman Vice President of Engineering

SMSC Reports 58% Year-Over-Year Increase in Second Quarter Revenues, Including Oasis Acquisition; Revenues and Earnings Exceed Company's Prior Estimates

ANADIGICS Raises Third Quarter 2005 Guidance

Structured ASICs to Solve Cost and Design Issues in the IC Industry

North American Semiconductor Equipment Industry Posts August 2005 Book-To-Bill Ratio of 1.05

Jetstream Media Technologies Announces Two New IP Core Products; Products to enable embedded digital effects for consumer electronic devices

Knowlent Joins VSIA to Improve Analog IP Verification

AMCC Sets New Standard for SATA II Hardware RAID With High Performance 3ware 9550SX Controllers; Available Now, 3ware 9550SX Hardware RAID Controllers Set the High Water Mark for I/O Performance

Toshiba Announces New High Efficiency, Space Saving Multi-Chip Module for DC-DC Converters

Semtech Leverages Packaging, Process Innovations in New 3.3V ESD Protection Device; New uClamp3324P Comes in Leadless Package That is up to 77% Smaller Than Competing Solutions; Flow-Through Design Simplifies Board Layout

Spansion Demonstrates High-Density Flash Memory Solutions Based on 90nm MirrorBit Technology

Conexant Expands India Management Team; Industry Veterans Fill President and Chief Operating Officer Positions

Legerity Unveils Industry's Most Integrated VoWiFi Handset Chip and Software; The Le8100 WiFi Product Bundle Redefines Voice-over-WiFi SIP Telephony by Making It More Economical, Adaptable, and Power-Efficient

Low-Power Tuner from Zarlink Targets Standard- and High- Definition Digital Satellite PayTV Systems

National Semiconductor Delivers Multiple Innovations in Ethernet Transceivers

Atmel Introduces First Power Management IC for Handset Add-on Modules

White Electronic Designs Awarded $1.9 Million Contract for System-On-Chip Microprocessor Modules for Air-to-Air Missiles

picoChip and ARM Announce 90NM Next-Generation Wireless Chipsets

Broadcom's New Enterprise IP Phone Chip Brings Next-Generation Features and Capabilities to Mid-Range IP Phones

Achronix Semiconductor Corporation Announces 650MHz FPGA and Early 2006 Release of 1+GHz Product

Airgo Unlocks Door to Digital Home With First Wireless Chip to Outstrip Wired Speeds

Fairchild Semiconductor's European Global Power Resource(TM) Center Offers Broad Range of Solutions with Single Ended Primary Inductance Converter (SEPIC) Topology for Numerous Low Voltage Applications

Voxelle ships Personal VoIP Gateway chipset for Skype

Siano Mobile Silicon Unveils World's Lowest Power Consumption Mobile Digital TV Receiver; Multi-band, Multi-standard Receiver Chipset Offers a Low Cost, Easy to Design Solution for Fast Integration of Digital TV into Mobile Devices

Diamond High Voltage Schottky Diode Announced by Element Six

Texas Instruments Breaks 65nm Leakage Power Barrier with SmartReflex(TM) Technologies

Cypress Unveils Industry's First Firmware-Programmable USB2.0 NAND Controller; Single-Chip EZ-USB(TM) NX2LP-Flex Controller Enables Value Added Features in NAND Flash-Based Thumbdrives

Vitesse Enhances Baseboard and Enclosure Management Controller Family

Cypress Introduces TouchWake(TM) Capability for WirelessUSB(TM) Radio-on-a-Chip Devices; New Feature Allows Wireless Mice and Other Peripherals to Wake from Sleep Mode When Touched, Extending Battery Life and Improving Performance

TI Integrates Smart Battery and Power Management Technology on Single Chip

New Agere Systems Storage Chip Increases Disk Drive Capacity for Portable Consumer Devices Through Superior Perpendicular Recording Performance

IDT Offers Leading Network Search Engines in RoHS-Compliant Flip-Chip Packaging; Leading Communications IC Company Improves Environmental Program, Offering its Entire Product Portfolio in RoHS-compliant Packages

Intersil Introduces New Family of Volatile, 32-Tap Digitally Controlled Potentiometers for Ultra-Low-Power Applications

Sequoia Communications Releases the Industry's First Single-Chip, Multi-Mode WEDGE RF Transceiver; Company's WEDGE Device Sets New Standards in Cost and Power Consumption for 3G Handsets

STMicroelectronics Extends ST7 USB Flash Microcontroller Family

Broadcom Announces New Single-Chip Mobile VoIP Processor for Wi-Fi(R) Phones

Mobilygen Launches the Industry's Lowest Power Single-Chip H.264 CODEC Chip Providing TV-Quality Video for Mobile Products



-- Jack Horgan, EDACafe.com Contributing Editor.