Most of what has appeared in this column about Rambus has concerned litigation: patent suits with Hynix, Infineon, Micron and most recently Samsung, as well as the FTC proceeding. I thought it would be interesting to see whether anything was occurring on the product front.
On July 7 Rambus announced the latest version of its high-bandwidth XDR memory interface technology, named XDR2. The XDR2 memory interface uses a micro-threaded DRAM core and circuit enhancements that enable data rates starting at 8 GHz. Rambus claims that this makes it five times faster than today's best-in-class GDDR graphics DRAM products. The XDR2 memory interface targets applications that require extreme memory bandwidth, such as 3D graphics, advanced video imaging, and network routing and switching.
I had an opportunity to discuss this with Victor Echevarria, Product Marketing Manager for the Platform Solution Group.
What is the significance of XDR2?
XDR2 means a lot of things to the memory industry; it has significance for a number of different markets. The first notable item about XDR2 is that it is a follow-on to our XDR memory interface, which is shipping today as the memory technology for the Cell processor. XDR2 integrates a number of technologies to raise the data rate from XDR's 3.2 GHz to 8 GHz. Its other significance is that it is the first DRAM technology ever to include micro-threading, a technology we invented at Rambus to enable transactions with finer access granularity.
Some background on this topic: as memory interface technology increases in speed, the interfaces seem to increase much faster than the DRAM core can keep up. By core I mean all of the storage elements within the DRAM. What ends up happening is that every time an interface doubles or triples in speed, each access request issued to the DRAM returns more data. With each generation, more and more data is sent per request.
You can imagine a library with a librarian who fetches books for you: you request a book, and the librarian goes, gets it and gives it to you. Increasing the speed at which the librarian can fetch books doesn't necessarily mean the librarian can fetch multiple books at one time. Say we double the librarian's speed: instead of my issuing multiple requests, I tell the librarian to go grab one book, but the librarian brings back two or four books. That's the problem facing the application markets that require high-speed memory: there's the notion of access granularity, which is not getting any better as time goes on. Micro-threading essentially eliminates that. It allows you to transfer very fine chunks of data over this incredibly fast interface.
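The librarian analogy can be sketched numerically. A back-of-the-envelope model (the bus widths and prefetch depths below are illustrative assumptions, not Rambus specifications): if each interface speed-up is achieved by deepening the prefetch while the core stays the same speed, the minimum data returned per request grows in step.

```python
# Illustrative model of access-granularity growth (hypothetical numbers,
# not Rambus specifications): doubling interface speed via a deeper
# prefetch doubles the minimum data returned per request.

def min_access_bytes(bus_width_bits: int, prefetch: int) -> int:
    """Smallest transfer a single access request can return."""
    return bus_width_bits * prefetch // 8

# A 32-bit-wide memory at successive prefetch depths:
for prefetch in (2, 4, 8):
    print(prefetch, min_access_bytes(32, prefetch))
```

For a 32-bit bus, prefetch depths of 2, 4 and 8 give minimum transfers of 8, 16 and 32 bytes: the librarian brings back more books per trip each generation, whether you wanted them or not.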
One of the key applications where this is going to help is graphics. Graphics processors break a three-dimensional rendered frame down into millions of triangles, and the trend is for these triangles to get smaller and smaller to yield more realistic images. An image with a single triangle basically looks like a triangle; as you add more and more triangles you generate more and more complex shapes. Three-dimensional applications are now reaching 7 to 10 million triangles in a given frame. When a given triangle is extremely small, it helps graphics vendors to be able to pull individual triangles out of memory. That's where micro-threading plays in, and why XDR2 primarily targets graphics vendors. We are also targeting consumer electronics and networking vendors.
What is it about consumer electronics and networking applications that makes them targets for XDR2?
Let's touch on networking first. Networking packet buffer processors deal with arbitrary-length traffic packets coming in and out of your router, switch or wherever you have your packet buffer processor. About half of those packets end up being about 32 bytes in length. As memory technology increases in speed, access granularity grows beyond 32 bytes: to read a 32-byte packet you end up having to access 64 bytes or more. With micro-threading, XDR2 brings granularity back down to 16 bytes, so you can access at that small granularity without wasting half your bandwidth on data you don't need. With consumer electronics the value proposition is similar to XDR's: you obtain the bandwidth you need using a single XDR2 device where you would ordinarily require multiple DDR2 or DDR3 devices. There is also some benefit from the finer access granularity, given that consumer electronics increasingly deal with three-dimensional processing and finer cells for rendering images.
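The packet-buffer arithmetic above can be checked with a small sketch (the 32/64/16-byte figures come from the interview; the helper function is a hypothetical illustration): a 32-byte packet fetched through a 64-byte-granularity interface wastes half the bandwidth, while 16-byte granularity wastes none.

```python
import math

def fetch_efficiency(packet_bytes: int, granularity_bytes: int) -> float:
    """Fraction of fetched bytes that are useful packet data, assuming
    accesses come in whole multiples of the access granularity."""
    fetched = math.ceil(packet_bytes / granularity_bytes) * granularity_bytes
    return packet_bytes / fetched

print(fetch_efficiency(32, 64))  # 0.5 -> half the bandwidth is wasted
print(fetch_efficiency(32, 16))  # 1.0 -> no waste at 16-byte granularity
```

This is the sense in which coarse granularity "wastes one-half your bandwidth" on a 32-byte packet: the interface is busy moving bytes the packet buffer never asked for.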
XDR2 also integrates a couple of other technologies that are not necessarily new to the semiconductor industry but are new to DRAM. We are integrating transmit equalization to enable these 8 GHz signaling rates across conventional FR4 PCB material, and XDR2 retains a differential signaling topology; XDR is currently the only differential memory technology shipping in the mainstream. There is also adaptive timing circuitry, which complements the FlexPhase timing adjustment in XDR. FlexPhase for XDR eliminates any bit-to-bit skew that could be caused by trace-length mismatch, by driver mismatch, or by any capacitive effects that vary pin-to-pin. The adaptive circuitry we are including with XDR2 is essentially a way to make sure that any differences that arise during system operation, due for example to temperature fluctuations, are tracked out over the course of normal operation.
What is the cost ratio of XDR2 to XDR1?
We have not yet publicly announced any DRAM vendors for XDR2. Since there's no silicon, there isn't any cost information we can quote at this time.
What is the availability date?
We cannot comment specifically on our customers' plans. We do know, from conversations with a number of controller vendors in the industry and with our DRAM vendors, that we expect the need for this kind of memory somewhere in the 2007 timeframe. However, we are not committing to that date; this is merely speculation about when a memory technology like this would be effective in the industry.
Presumably it would be of value today. Is the delay simply the length of time needed to incorporate new technology into an upcoming product lifecycle?
It would definitely be effective today, but what I mean by saying that its effectiveness will come to a head in 2007 is that current memory technologies are sufficient for the market requirements of graphics vendors such as NVIDIA and ATI. It is up to us, working with them, to define the requirements for the next generation and to see what they are capable of doing. Clearly it will be 2007 before we see any great XDR2 applications.
Do you anticipate any other memory vendor introducing similar or competing technology?
The only competing technologies in the graphics space are efforts in the GDDR market. The comparisons we have shown to date indicate that XDR2 provides over 5x the effective bandwidth of the best-in-class competing GDDR device.
Who are the vendors of those GDDR devices?
-- Jack Horgan, EDACafe.com Contributing Editor.