May 28, 2007
Timing and Signal Integrity – CLK Design Automation


by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us.

On May 21st CLK Design Automation introduced the Amber™ Analyzer, a threaded and incremental static timing and signal integrity analysis solution. The product leverages the power of multi-core, multi-processor compute platforms to execute 10 to 20x faster than conventional tools. Incremental analysis increases throughput 100x or more over existing design flows. I had a chance to interview Isadore Katz before the announcement.

Would you give us a brief bio?
From start to finish or finish to start?

Whichever you prefer.
Bachelor's in economics from Wesleyan University in 1979. Worked as a regulatory economist for three years, working against the Bell System at the time. Decided I did not like law but I liked technology. So I went back to grad school at MIT. In '84 I left MIT and came to work at a company named Daisy Systems. Worked for Tony Zingale and from there for Lucio Lanza. When Daisy began to disappear, I went to Dataquest in 1986, where I was senior analyst for Electronic Design Automation. That lasted a little more than a year. From there I went to work at Cadence, where I had a number of assignments. For the last two years I was VP of marketing for the IC Division. Decided to leave because I wasn't crazy about working for a boss by the name of Gerry Hsu. Went and worked for the Hailey brothers at MetaSoftware. I decided I had gone from the frying pan into the fire and moved back east to work as VP of Marketing and then CEO at Chrysalis Symbolic Design. That was from '95 to '99. In 1999, of all things, I sold Chrysalis to guess who, Gerry Hsu. Stayed there a couple of months and decided that it was not going to work again. Decided to leave EDA. From 2000 to early 2003 I was CEO of Lightchip, an optical component company that an investor brought me in to try and rescue. You can do everything you can, but you can't fix a market. So in 2003 we sold that off. In 2003 I looked at different opportunities. In 2004 we were actually invited back to look at opportunities in EDA, which we did very quickly, and came up with the idea of CLK Design Automation, which we started working on in the fall of 2004. We got funding in 2005. That's where I am today.

You had one curious position, namely, EDA analyst at Dataquest.  Did that experience give you any insight that you would not otherwise have had if you remained on the vendor side?
It actually gave me one profound insight: that I should read my own newsletters more closely. The insights you write under pressure are oftentimes the best ones you come up with. In '87 I did two newsletters within two weeks of each other. One was on synthesis and said this looks like a market that is going to take off. People need this stuff. I wrote up a forecast showing it roughly doubling every year. The people I knew from Daisy were over there. I probably should have gone to work for them. I wrote the second article on something called design frameworks. I said these are essential technologies to integrate EDA tools together. Everyone is going to have to have them in their flow, but I am not sure anyone is going to pay for a framework. So, what do I do? I go off and work for my friends at a standalone framework company. So read your newsletters more closely. Sometimes your first instincts turn out to be your best. For people who live on brains, that is sometimes contrary to your own thinking.

Does CLK mean anything other than clock?
Yes. The way the company got founded was that Hal Conklin and I were invited by this VC to look at EDA. We quickly recruited our old engineer from Chrysalis, Lee LaFrance, who had been VP of Formal at Avant!. He said he had the perfect architect, a gentleman named Joao Geada. The four of us were working on it in the background. One morning, in a rare moment of lucidity, I said Conklin, LaFrance, Katz and Geada: what about CLK DA, CLK Design Automation? That's it. It has nothing to do with clocks.

What is it that you are announcing?
We have two different announcements. One is about the product, the Amber Analyzer; the other, in the background, is about the company. The product announcement is that we have developed a next generation static analysis tool suite. Essentially it is fully threaded, fully incremental and has a number of enhancements behind it that meet a lot of unmet requests from customers. This is really a classic case of faster, better, more. Faster in the sense of taking an existing footprint, doing a lot more with it and then doing something radical to really improve throughput. More is the laundry list of features that people have always been pressing vendors to have but never seem to get.

Where did CLK Design Automation come from?
We are backed by Morgenthaler and Atlas Venture. Morgenthaler is best known for investing in Synopsys. They also invested in Chrysalis and Lightchip. Atlas you probably know from Viewlogic and Bluespec. When Atlas came to us and asked us to look at EDA again, they had heard about 65nm and 45nm. We immediately called all of our old customers and asked them what problems they were working on and needed to be fixed. They mentioned a couple of them, some in the physical design space. We knew those required talent on the west coast that we did not have access to. One of the top issues they kept coming back to was that the current timing tools simply were not getting the job done. Not that they are bad tools. They were great tools in their day. They were not running fast enough, taking forever to debug, really hard to find problems. Just on and on. At that point statistical was coming up. We had seen a lot of papers on that. We asked if they wanted us to do statistical stuff. They said that would be nice someday, but what we really need right now is for what we are doing now to be much more effective. We said okay. We built the team in response to the customer requirement. That is backwards from the way most EDA companies get built. Most start with an algorithm or a technology and then say let's go find a problem. We found a problem and then said let's go build a team. We brought in Joao as the chief architect; he was at IBM Yorktown and Cadence and has a PhD in distributed computing from the University of Newcastle. At Chrysalis we had put together the first and second generation of tools. We found really good people in other technology spaces. We went and got some of the best and brightest out of Motorola/Freescale, and a guy who had developed a lot of the timing tools at Intel. We recruited two very good academic advisors right from the get-go, Duane Boning from MIT and David Blaauw from Michigan, who really spent a lot of time with us. They are not just show-up-for-dinner types. They come here for days at a time. Mostly what we did was stay close to one of the largest IDMs that we had talked to during the summer. We cannot reveal the name yet, but they design high speed digital components. They have worked with us from the get-go on specs, giving us circuits, evaluating alpha and beta code, and really stayed closely in the loop. They are now putting it into production on 45nm designs. Really a company driven by the market: we found the problem, found the talent and then built the right product.

Here are some of the things we learned during the process. Timing, the classic problem of timing and signal integrity: right up until the very end of the cycle, people have tens of timing violations they cannot get out of the system. It has been very difficult for them to prioritize and fix those things, largely because things just take too long to run, literally 20-hour runtimes per corner for a signal integrity timing check. That just takes too long.

The other key factor we heard is that, while people talk about process variation and statistical this, statistical that, it is signal integrity, wires interfering with each other, that is the primary killer of chips. You have to look at that across lots of corners and lots of modes. At the end of the day, because of power constraints, you really have to make every path critical. It is critical either for power or for timing. You bring everything right up to the edge. The other thing we kept hearing loud and clear was that if they change one wire or swap out one cell today, they have to do a 24-hour timing run. That makes no sense. The tool has to scale with the level of work that people are doing, otherwise we will never get to the next generation of SoCs and big chips.

We figured out very quickly that we had to change. When Dr. Geada looked at the problem, he said, I know how to make this faster: let's take advantage of these new threaded architectures. We say that, but we didn't realize that the conventional wisdom of the day was that you could not thread signal integrity. Because of the nature of the mesh, because every wire could potentially be speaking to every other wire, you could not partition the problem. The second thing we realized was that analysis time has to be proportional to the design change. If you are working with 5,000 gates and you make a change, you get an answer. If you are working with 9 million instances or 60 million gates, you can't make a change and then wait 20 hours. The next thing is that when you work on these projects, there are lots of hands in the project and lots of people need to be able to work at once. Finally, you cannot just look at delay; you have to look at all the effects that are contributing.

The big breakthrough the guys figured out was how to thread any of the analyses we do today. We can thread timing. We can thread signal integrity. We can thread power. I am not saying we can thread any of the other applications, but for this class of analysis we have figured this stuff out. This is a big deal. We have tested up through 16 processors. We get nearly linear speedup through all 16 processors, and theoretically we should be able to go up through 64 processors.
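As a rough back-of-the-envelope illustration (my arithmetic, not a figure from CLK), Amdahl's law shows how small the serial fraction of the work has to be for scaling to stay nearly linear at 16 processors, and what that would imply at 64:

    # Amdahl's law: speedup on n processors when a fraction s of the work is serial.
    # Illustrative arithmetic only; the 10-20x and 16-CPU figures above are CLK's claims.
    def amdahl_speedup(n_procs, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    for s in (0.05, 0.01, 0.005):
        print(f"serial fraction {s:.3f}:  16 CPUs -> {amdahl_speedup(16, s):.1f}x,"
              f"  64 CPUs -> {amdahl_speedup(64, s):.1f}x")
    # serial fraction 0.050:  16 CPUs -> 9.1x,  64 CPUs -> 15.4x
    # serial fraction 0.010:  16 CPUs -> 13.9x,  64 CPUs -> 39.3x
    # serial fraction 0.005:  16 CPUs -> 14.9x,  64 CPUs -> 48.7x

In other words, near-linear scaling at 16 CPUs implies the serial, synchronization-bound part of the run is down in the sub-1% range, which is what would make the projected 64-CPU scaling plausible.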

This is a big deal, particularly for signal integrity. The other key thing we figured out is how to do incremental signal integrity. What we mean by that is that when you run incremental SI with our tool, we will give you exactly the same answer you would have gotten if you had run the entire design. Again, that is huge. It has never been done before.

What we can do with this now is a 10x to 20x speedup when you run threaded. The 100x speedup is what you get if you turn on incremental. What we have heard back from customers is that at the end of the design cycle, when they are doing all sorts of design changes, incremental is huge.

On top of that we have built a persistent data structure. This is not a use-once-and-get-a-report kind of tool. We have all sorts of stuff to enable us to do multicorner, multimode debug diagnostics, and we put the advanced analysis packages on top of that for leakage and optimization. The platform has an API and a complete scripting language. It is meant to be embedded in other tools. It can be used as a standalone tool. It has a persistent data structure. It really can be a common timing platform for tool development.
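CLK has not published the API or the scripting interface, so the following is a purely hypothetical sketch of what driving a persistent, scriptable timing platform might look like; every name in it (clk_amber, open_session, swap_cell and so on) is invented for illustration and is not CLK's actual interface.

    # Hypothetical sketch only -- all module, function and file names are invented.
    import clk_amber                                   # assumed scripting binding

    db = clk_amber.open_session("top_level.adb")       # persistent analysis database
    base = db.run_analysis(modes=["func"], corners=["ss_0p9v_125c"])

    db.swap_cell("u_core/u_alu/U123", "INVX4")         # a small local design change
    delta = db.run_incremental()                       # re-analyze only what changed

    # Because results persist in the database, a delta report can show just the
    # endpoints whose slack actually moved relative to the stored baseline.
    for path in delta.changed_paths():
        print(path.endpoint, path.old_slack, "->", path.new_slack)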

Everything in the system is threaded. We are shooting for drop-in compatibility with existing flows, meaning that if a customer has a script, we will run that script as is. We are not trying to do that for the world. We are trying to take on one customer at a time. The idea is that a customer should see immediate throughput benefits from our tool before they get there. Our timing engine is within 1% of PrimeTime, and I mean at any point in the circuit: any node, any slope you go look at, we will give you the same answer. On signal integrity, people ask us to benchmark against SPICE. We are within 3% of SPICE, and because of our background we have done a number of things to reduce pessimism there.

CLK is as much about architecture as it is about algorithm. A lot of the time people will take an algorithm, partially prove it out and then try to make a tool. The tool gets hacked in around that algorithm. Our philosophy was that we do not know what algorithm will be right in 5 years. The algorithms may change, calculation methods may change. But if you build the right architecture from the get-go, you are able to adapt. I do not want to diminish the work that the algorithm people have done, but it is the architecture that makes this company sing. It is what enables us to do the threaded and the incremental analysis. It is what has enabled us to add new calculation methods and to build an API that gives you full access to the tool. It has also enabled us to do the stuff underneath for multimode and multicorner. For example, we have something we call an experiment layer. We have the design history inside the database along with the results. One night you can run the entire circuit. The next morning you can have 5 to 10 engineers looking at their piece of the design, doing what-if analyses, trying them out with different placements, routes or swaps. Then you can bring that all together and run the design again. Importantly, that type of incremental local analysis sits on a desktop machine. You run the big jobs on the large server and you run the local jobs on the engineer's workstation.

We have a gentleman named Paul Levine from Morgenthaler who sits on our board; he was at ClearCase and did computer science for years. Classically, if you look at anybody who tries to do threading or distributed processing, they get benefit for 2 or 3 CPUs, maybe 4 CPUs, and then it tails off rapidly. Our system (and we have verified this on multiple designs through 16 CPUs) scales linearly. Theoretically we are fairly confident that we will keep scaling right up to 64 CPUs. What that means is that we have got the horsepower to deal not just with today's designs but with tomorrow's designs.

As a little bit of backdrop, processors are not getting any faster.  They are not going to go to 10GHz or 20GHz.  We are going to live with 2GHz processors.  What is going to happen is that you will have more processors.  The only way an EDA tool is going to keep scaling to stay ahead of the customer requirements is to do this type of thing.

People have talked about threaded and distributed, but incremental is as fundamental a change as threaded is. There is no way they are going to keep designing 600 million transistor or 1 billion transistor designs if, every time an engineer makes a change, we have to reanalyze the entire chip, or we need to reroute entire sections, or regenerate an entire mask set, or redo all the LPC. That just is not going to work. Every tool (and we do not claim we can solve every other tool's problem) will need to address both a threaded capability and an incremental capability. As engineers go later and later into the chip flow, what they have fixed they want to stay fixed. If a design has passed, they need it to stay passed. They do not want to do the Whac-A-Mole thing where you fix one thing and another thing breaks. You have got to converge. When you are late in the cycle and trying to converge, incremental is the only flow that is going to let you push your way in there. Right now at the end of the cycle, people have 50 problems left over. They do not know which ones to fix. They fix one and 5 other problems show up. You cannot get to tapeout if you are going to keep operating that way.

I have mentioned this only in passing because our philosophy is faster, better, more, but we have built a great deal of advanced functionality into the system. We do have leakage power, we have a statistical model, we have on-chip variation and statistical timing built in, and we have integrated SPICE inside there.

What we have done in multicorner and multimode is the experiment layer. As part of that we have this notion of a delta report. Put into plain English: don't show me the whole darn thing, just show me what got better or worse. Part of the design architecture, again, is that the reports and results are part of the system. The differences between the reports are what we are able to track. So when you look across corners or across revisions, you can see what got better or worse. You do not have to write five million lines of Perl scripts to figure out what is going on.
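To make the delta-report idea concrete, here is a minimal sketch (not CLK's code) of the underlying comparison, assuming each run is available as a simple mapping from timing endpoint to slack; the endpoint names and numbers are invented.

    # Minimal delta report: list only endpoints whose slack got better or worse.
    def delta_report(before, after, noise_ps=1.0):
        rows = []
        for endpoint, new_slack in after.items():
            old_slack = before.get(endpoint, new_slack)   # new endpoints skipped for brevity
            if abs(new_slack - old_slack) > noise_ps:
                rows.append((endpoint, old_slack, new_slack))
        return sorted(rows, key=lambda r: r[2] - r[1])    # worst regressions first

    before = {"u_alu/reg_q[3]": -42.0, "u_fpu/reg_s[0]": 15.0, "u_io/reg_d[7]": 3.0}
    after  = {"u_alu/reg_q[3]": -12.0, "u_fpu/reg_s[0]": 15.2, "u_io/reg_d[7]": -5.0}

    for endpoint, old, new in delta_report(before, after):
        print(f"{endpoint:18s} {old:8.1f} -> {new:8.1f} ps")

Across corners or revisions the comparison is the same; the point Katz is making is that the tool tracks it in its database instead of leaving it to the user's Perl.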

Other nice facilities we have are localized effects for voltage, temperature or crosstalk, custom reports, and an API so people can directly read or write out of the tool. And finally, we support the alphabet soup: OA, SDC, SDF, Liberty, SPEF, Verilog.

Have you benchmarked performance on real designs?
Yes. One design is a 60 million gate full chip: 32 GB of parasitic files compressed, 9.1 million instances, with some very large macros. On 8 CPUs this takes a little under 2 hours. On 16 CPUs it takes around an hour. The other tool today takes about 18 hours to run. That means that a designer's behavior can fundamentally change. You can go through several design revisions per day as you try to get to design closure, not 3 design revisions per week with the other tool. The other key thing is incremental. On a 6.5 million gate block with 1.5 million instances, it took 11 minutes on four CPUs to do the baseline. They swapped out 50,000 cells to check for some voltage variation. We can run that now in around 1 minute. If you were to change a cell or something like that, the answer would be back in seconds. Again, a major breakthrough in behavior, because the current flow says if you change 5 cells or change 50,000 cells, you wait 3.5 hours. With CLK you can make that change and get an answer before you can come back with a cup of coffee.
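Working those quoted numbers through (arithmetic only, no new data):

    # Speedups implied by the benchmark figures quoted above.
    existing_hours, amber_8cpu_hours, amber_16cpu_hours = 18.0, 2.0, 1.0
    print(f"8 CPUs : {existing_hours / amber_8cpu_hours:.0f}x vs. the existing tool")
    print(f"16 CPUs: {existing_hours / amber_16cpu_hours:.0f}x vs. the existing tool")

    current_flow_min, amber_base_min, amber_incr_min = 3.5 * 60, 11.0, 1.0
    print(f"50,000-cell swap, incremental vs. 3.5-hour current flow: "
          f"{current_flow_min / amber_incr_min:.0f}x")
    print(f"50,000-cell swap, incremental vs. Amber's own full block run: "
          f"{amber_base_min / amber_incr_min:.0f}x")

That works out to roughly 9x and 18x for the threaded full-chip runs, and around 210x for the incremental swap measured against the existing 3.5-hour flow, which lines up with the 10x to 20x threaded and 100x-or-more incremental claims.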

The summary is that we are one of the next generation tools in EDA that have to come out to build a flow that can deal with 10 million, 20 million, or 30 million placement points in a design, or 600 million or a billion transistors, however you want to measure it. Those flows have to take advantage of the presence of multiple processors in one machine. They have got to be able to scale to the amount of change being done. That's what the next generation tool set does. It is a ground-up architectural development from the bottom to the top. You can't take an existing tool and retrofit it. Maybe the algorithm, but certainly not the data structures and the architecture.

It has been shipping for a little while to our initial partner.  We are now doing additional commercial engagements.  That’s the story.

The pricing starts at $25K.  Is that for one CPU?
That is for one processor with up to two cores in it for the base timing system.

How does the pricing go up with the number of processors?
Basically, we go from one system to a system with all the functionality on as many processors as you want at a substantially higher price.  The idea was that if the CAD manager was trying to compare this with their existing tool for static timing, they should see a good price performance advantage from the baseline going up.  But most of the customers we are working with right now are looking at this not necessarily as a replacement for their existing signoff tool but as a larger quantity thing which is going to become their mainstay engineering tool for getting timing and signal integrity closure.  We are not trying to deal with the world.  We are not trying to sell to every logo out there.  We are trying to sell to big logos that have a chance to engage with us.

You mentioned that very early on, when you went to prospects without a product, they said statistical, which was a new thing at the time, would be nice but they really needed to significantly improve their existing tools. My observation over the years is that people typically look at their existing tools wishing they were faster, more robust, more functional and so on, thinking then they would be better off. They rarely have the vision for a quantum leap because they are struggling with their existing problems on a daily basis. Now that it is three years later and you are going back to the same prospects, are they still saying the same thing or are they now saying that they realize the importance of and need for statistical?
Actually, for the most part, it has not changed. If anything, they have been able to digest where statistical does or does not fit. It seems to occupy some important niche responsibility. There are some harder things people have learned as they have gotten closer to the problem that suggest that in one of those contexts statistical is a feature, not the solution. Let me break that down. There is a manufacturing engineering problem that has to do with people who see lots of designs and are trying to figure out how the process is drifting or not drifting. This is the sort of stuff that PDF does today. In that context statistics are meat and potatoes. They have always done statistics. That is how they track processes. In that context statistical tools have meaning. In the engineering domain where we work, for reasons both technical and from an adoption standpoint, it turns out that statistical is a feature, not a product. In fact statistical really has to sit within the framework of a classical corner space environment.

It turns out that manufacturing processes are not unimodal. In fact, if you look at the manufacturing data, they tend to be tightly clustered about a number of different points. It turns out that a lot of the variation you see is from line to line in a fab, or from fab to fab if you are a multi-fab shop, or, within a die, it has to do with reticle-to-reticle variation. The classical Gaussian curve, the normal curve, is a unimodal curve; if you actually plot the data, you find clusters. Those clusters turn out to be what we call corners. So corners, from the statistical and manufacturing data, still turn out to be the best way to represent information to the users. Now, within a corner things are very tightly correlated, such that you can look at statistical information, but the variation is much more tightly wound. It is not as wide. So in that context, statistical becomes a utility that you can use, say, instead of on-chip variation, or as a follow-up tool to look for outliers that the corner analysis did not reveal. But it is the case that in the engineering mode, statistical turns out to be a feature that you turn on after you have completed your corner analysis to reveal additional information about your margin.
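A small synthetic illustration of that point (invented data, not foundry measurements): when a parameter is bimodal, the spread within each mode, i.e. within each corner, is far tighter than the spread of the population as a whole.

    import numpy as np

    rng = np.random.default_rng(7)
    # Two tight clusters of a hypothetical process parameter, e.g. line A vs. line B.
    line_a = rng.normal(loc=1.00, scale=0.01, size=5000)
    line_b = rng.normal(loc=1.08, scale=0.01, size=5000)
    population = np.concatenate([line_a, line_b])

    print(f"overall sigma        : {population.std():.3f}")  # ~0.041, dominated by the mode gap
    print(f"within-cluster sigma : {line_a.std():.3f}")      # ~0.010, much tighter

Fitting one Gaussian to the whole population badly overstates the variation; treating each cluster as a corner and then applying statistics within it matches the behavior Katz describes.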

People do see statistical in their engineering roadmap as something they would like to use within the context of a classic corner-based analysis. Now, there are other issues that have to do with how foundries develop their information, how they present the information, and how engineers learn to use statistical data. These also say, once again from an adoption standpoint, that it is much easier if users can see the statistical data side by side with their corner information. That just makes the adoption curve easier. From a technical standpoint, it turns out that corners still are the best way to represent information. There have been people smarter than I am who have made that observation. There are adoption reasons that say getting people to adopt statistical, where it is appropriate, is best done from inside a classical corner analysis. That is again from the engineering perspective. There is another domain, which is manufacturing engineering and yield analysis, where statistical is part of the meat and potatoes of what they do. But that is not the market we serve.

You said one of your board members was at ClearCase.
Paul Levine.  He was the founding CEO of Atria Software.  He took Atria public and merged it with Pure Software. Pure Software ultimately merged with Rational before Rational was acquired by IBM.

As I recall, ClearCase is a software source control system. What these systems typically do is determine what needs to be recompiled and relinked given the header files and source modules that have been modified. This saves substantial compilation and linking time. But the resulting software executable still had to be totally retested. Why is it different with signal integrity when cells have been swapped or …?
The key is not that we are doing less signal integrity analysis so much as we are doing all the signal integrity analysis needed to guarantee it is the same answer. The way our system works (and we are applying for patents) is that we have all the results information inside our system. We know what the previous signal integrity analysis did. When you make a change, our incremental algorithms kick in and see what other things were affected by that. We know enough about how the previous calculation worked to be able to figure out what else has to get pulled in to do the calculation. We also have knowledge about where that dampens out and you can stop doing the calculation. It is not an algorithm thing. It is inherent in the data structures and data architecture. We can figure out automatically the calculation we need to get to the point where you are getting the same result as if you ran the whole design structure. All this is done in the background, all automatically. That's how our tests are run. We run circuits and we do signal integrity calculations. We have a system that just scrambles the design, makes arbitrary changes. We literally do a binary-to-binary compare to make sure we are getting the same answer.
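Katz describes the mechanism only at this high level, so the sketch below is a simplified reconstruction of the general idea rather than CLK's data structures: keep the previous results, seed a worklist with the nodes touched by the change, and stop propagating wherever the recomputed value stops moving. The graph model, the arrival-time recurrence and the epsilon cutoff are all simplifications for illustration.

    # Simplified incremental update over a timing graph (illustration only).
    #   fanout[n], fanin[n] : graph connectivity
    #   delay[(u, v)]       : edge delay, already updated for the design change
    #   arrival[n]          : cached arrival times from the previous analysis
    def incremental_update(fanout, fanin, delay, arrival, changed_nodes, eps=1e-6):
        worklist = set(changed_nodes)
        while worklist:
            node = worklist.pop()
            new_at = max((arrival[u] + delay[(u, node)] for u in fanin[node]),
                         default=arrival[node])
            if abs(new_at - arrival[node]) <= eps:
                continue                      # effect has damped out; stop here
            arrival[node] = new_at
            worklist.update(fanout[node])     # only fanout of nodes that really moved
        return arrival

Real SI timing adds coupling windows on top of this, which is where knowing "where it dampens out" gets hard; the binary-compare regression against full runs that Katz mentions is the natural way to confirm the shortcut never changes the answer.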

The concept of using threaded code to distribute software across multiple processors is not new. You seemed to say that people felt that signal integrity did not lend itself to being threaded. Why was that the case and how were you able to overcome that?
The nature of the signal integrity problem is that it is looking at all the wires and cell elements as antennas inside the system. In principle every antenna talks to every other antenna, although in practice a wire on one side of the chip has very little or no influence on a wire on the other side of the chip. The classic approach had been: why don't we come up with a partitioning algorithm that divides the chip up, shares results, and figures out how to force the work onto different processors. What then happened was that the overhead of sharing results between all the different pieces began to lock the system down. It was as if you knew upfront that everybody had to share results, but you were going to try to push it so that they didn't and then re-synchronize later. That model just fell apart very quickly. It was an algorithmic approach to solving this problem. Geada, with his background in threading, said you just do not do it that way. The way to do it is through the data structures and the architecture of the database, such that the work just naturally gets parceled out to the different processors, just as if you were doing any other timing analysis. So long as the database is non-locking, so long as everything can automatically and quickly share results, then you can keep all of those processors busy all of the time. Going back to first principles in architecture, Joao used his background to come up with a data model that enabled us to very easily send out work to all the processors, have all those processors communicate very efficiently through the data structures, and keep all of them continuously busy. It is, in a sense, an application of known good threading and architectural principles to a worst-case problem as far as threading and distribution are concerned. That's what the team put together. It was a nontrivial exercise.
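CLK has not disclosed its internals either, so the sketch below only illustrates the shape of the alternative Katz describes: instead of partitioning the chip geographically, levelize the analysis graph and let a pool of workers evaluate independent nodes whose inputs are already computed, publishing results through a shared store. A production tool would do this with lock-free structures in C or C++; a Python thread pool is used here purely to show the structure.

    from concurrent.futures import ThreadPoolExecutor

    # Illustration only: evaluate a levelized analysis graph level by level.
    # Nodes within one level have no dependencies on each other, so the pool can
    # work on them concurrently while reading the shared results of earlier levels.
    def analyze_levelized(levels, compute_node, results, workers=16):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for level in levels:                      # levels: list of lists of node ids
                values = list(pool.map(lambda n: compute_node(n, results), level))
                for node, value in zip(level, values):
                    results[node] = value             # publish before the next level starts
        return results

The essential property is that workers never block each other on the results store, which is the "data structures, not partitioning" point Katz is making.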

It sounds like you started three years ago with an algorithmic approach, encountered difficulties and then switched.
No. The beauty of this team we put together is that everyone had done their job 2, 3 or 4 times before. The timing guys had written 3 timing tools. The SI guys had written 2 SI tools. Joao had worked on 3 different architectures and had found himself having to retrofit things back and forth. The guys sat down and for the first 3 or 4 months didn't write a lot of code. They designed, then they prototyped. They worked it out. They figured out first principles and showed that the architecture held water. We have taken chunks out of it at different points, but it was always designed such that we could rip out subsections and modules, rebuild it, and see that the basic data system would hold up. That is actually a case of people who understood what they wanted to accomplish, who understood enough about computer science going into it, and who had made the mistakes of the past and benefited from them. Our goal here at CLK is that we do not want to repeat any of the old mistakes. We are just trying not to invent new ones.

The company was founded in 2004. It took roughly 3 years to get where you are now. Is there anybody else out there who has come along in the interim, or perhaps firms already in the SI business that have introduced similar tools?
This is one of those things where I want to be cautious. The company you don't know about is the one you are going to get surprised by. In our conversations with people out there we ask, “Have you seen anybody out there who has made a commitment to developing from the ground up a next generation threaded, incremental static timing and signal integrity tool?” People keep coming back and saying no. They have not seen it. That is not to say that there are not people at Synopsys figuring out how to take advantage of multiple processors. It is not that people at Magma haven't said this is something we should do. But when we go out into the customer space, we have not seen that anybody else is as far along as we are. And we are certainly not aware of any startups at this point. Again, what we do not know, we do not know. You can always get surprised. Right now I think we are the farthest along in terms of getting down this path. Again, it is a non-trivial thing to do right.

When are you announcing?
May 21.

Does CLK have any plans for any follow-on products?
Yes. One of the things we want to do is take advantage of the fact that we have built a really strong platform for doing very fast iterative timing and signal integrity. People have asked us to take the next step in a couple of different domains. There are some things we are not going to try to do. We are not going to become an extraction company or something like that. We do not have the hubris to think that we can do threaded anything and everything. That is just not true. We now understand enough of what we did to realize this is hard. But there are some adjacent problems, which have to do with helping people clean up their circuits after place and route, get rid of violations, and analyze what those violations are, where we can make some innovations. There are also things which have to do with getting to the next generation of accuracy on the signal integrity front. I think we can make contributions there as well. These are building on what we have already done. We are not going to branch off to become something we are not.

How much have the venture capitalists invested in CLK?
We have had two rounds now, $5 million and $4 million.  So $9 million invested to date.

How big a company is CLK?
We are at 17 people.  The vast majority are here in Littleton, Mass.  We have a few people based in Austin.

Do you plan to sell direct, through distributors, ..?
In the US our principal focus is going to be a direct sales force. In Japan you always need help; we do not plan to go direct there anytime soon. Our model is not to try and sell to everybody here. When we were at Chrysalis, we were very proud when we hit 100 logos. I am not sure that was in our best interests or our customers' best interest. Our goal is to really deal with a few customers. We are going to make investments in engineering and application engineers so that once a customer has used our software, they feel very comfortable deploying it to all their engineers internally. That's the way to grow the business. If all I had at the end of the day was 20 to 30 customers, all of whom were committed to us so that we could develop more product, and all of their engineers were using it (these would have to be substantial customers), I would be perfectly happy. I think our customers would be happier too.

There is a classic EDA failure mode where you keep building out your sales force, acquiring more and more customers.  The R&D guys get confused on which problem they are supposed to solve first.  Engineering never finishes the product for anybody.  None of the customers ever want to buy a second copy.  That’s a failure mode we are trying to avoid.

The problem of the day.
Of the hour!  I wish it were the day.  That’s one of the mistakes we are seeking not to repeat.  It is going to be hard.  There are lots of pressures to push you in lots of different directions.  That’s one of the focuses we have for the business.  Our company motto is we may be wrong but we are not confused.

Anything to add?
We started pretty focused with our initial partner. We have been taking it out for the last three months. You asked a good question: have people's mindsets changed in the last three years since we started? It has been gratifying to find out that we started with leading-edge customers with whom we validated the first problem statement. If anything, all that has happened is that more and more of the customers have come to the same conclusion. What we are building would appear to be the right product for the time.

When you announce on the 21st, will you identify the IDM?
I doubt it knowing the way things usually work.

The top articles over the last two weeks as determined by the number of readers were:

HP Reports Second Quarter 2007 Results HP announced financial results for its second fiscal quarter ended April 30, 2007, with net revenue of $25.5 billion, representing growth of 13% year-over-year, or 10% when adjusted for the effects of currency. GAAP operating profit was $2.1 billion. As an outlook, HP estimates Q3 FY07 revenue will be approximately $23.7 billion to $23.9 billion.

Mentor Graphics Expands Questa Functional Verification Platform and Targets Low-power Designs Mentor announced it has expanded the comprehensive Questa verification solution which includes the new Questa 6.3 functional verification platform addressing low-power verification, and powerful verification management capabilities that enable closed-loop management reporting, analysis and documentation. It also includes improved debugging and version 3.0 of the industry's first open-source standards-based Advanced Verification Methodology (AVM).
The Questa 6.3 verification platform will ship in Q2 2007 and includes access to the Advanced Verification Methodology portal. Configurations start at $24,000 USD for a 12-month license.

Cadence Introduces Industry's First Complete Custom IC Simulation and Verification Solution Cadence unveiled  Virtuoso Multi-Mode Simulation (release MMSIM 6.2), an end-to-end simulation and verification solution for custom IC that uses a common, fully integrated database of netlists and models to simulate analog, RF, memory, and mixed-signal designs and design blocks. This allows designers to switch from one simulation engine to another without compatibility issues or interpretation impacts, so consistency, accuracy, and design coverage are improved, while cycle time and risk are reduced. The overall result is lower cost of adoption, support, and ownership, and faster time to market.

Virtuoso Multi-Mode Simulation is tightly integrated with the new Virtuoso custom design environment, enabling a complete design-to-verification methodology.

Pro Design Launches Next Generation of CHIPit ASIC Prototyping Systems Pro Design, a leading supplier of high-speed ASIC and SoC verification platforms, announced the launch of its CHIPit V5 series, the next generation of its successful CHIPit product family. The new generation of CHIPit High-Speed ASIC Prototyping Systems is very flexible and scalable, and is available with 1 to 6 Xilinx Virtex-5 LX330 FPGAs. Depending on the configuration, the system handles ASIC design capacities of up to 8 million ASIC gates.

UMC Expands Support for Mentor Graphics' Calibre YieldAnalyzer to Deliver Production Proven DFM Flow Mentor announced that UMC has expanded its support for the Calibre nm Platform with Calibre YieldAnalyzer for all major design flows for its 90nm and 65nm processes. Mentor and UMC have worked collaboratively to introduce DFM capabilities that give designers highly valuable information to guide physical design improvements that can increase production yields.







-- Jack Horgan, EDACafe.com Contributing Editor.

