The changing world of high-speed memory
General
Written by Gizmo   
Thursday, 07 December 2006 08:06

EDN has an article about embedded memory and the technologies on the horizon for making it cheaper and faster.  The article gives a brief history of memory design, then examines where things stand today and where they are headed.

Many of you probably aren't aware of this, but DRAM is fundamentally not much faster today than it was 10 years ago.  Back then, you could get a SIMM whose DRAM had an access time of about 100 ns, and CPUs ran at clock speeds of around 200 MHz.  Today we have CPUs running at clock speeds in excess of 3 GHz, but DRAM still has an access time of around 45-50 ns; while CPUs have increased in clock speed by more than an order of magnitude, DRAMs have only doubled in speed.  Partly as a result, CPUs now dedicate significant portions of their transistor budgets to cache memory and cache-coherency logic.
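
To put that gap in concrete terms, here's a quick back-of-envelope calculation using the round figures quoted above (approximations, not precise benchmarks) of how many CPU clock cycles a single DRAM access eats up:

```python
# Rough sketch: CPU clock cycles spent waiting on one DRAM access,
# using the approximate figures from the paragraph above.

def stall_cycles(cpu_clock_hz: float, dram_latency_s: float) -> float:
    """CPU cycles that elapse during a single DRAM access."""
    return dram_latency_s * cpu_clock_hz

# Circa 1996: ~200 MHz CPU, ~100 ns DRAM access time
print(f"1996: ~{stall_cycles(200e6, 100e-9):.0f} cycles per DRAM access")

# Circa 2006: ~3 GHz CPU, ~50 ns DRAM access time
print(f"2006: ~{stall_cycles(3e9, 50e-9):.0f} cycles per DRAM access")
```

The cost of a memory access has gone from roughly 20 cycles to roughly 150, which is exactly why all that die area gets spent on caches.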

Why?  Well, mainly because DRAMs are commodity parts: the goal is to build them as cheaply, and as large, as possible.  The article points out that research IBM did back in the mid-90s showed that DRAM needn't be much slower than the SRAM currently used in CPU caches, while remaining far cheaper to manufacture and more power-efficient.  That should let CPU designers implement a level of on-die cache in DRAM that, if properly designed, would deliver a significant performance improvement at little additional manufacturing cost, along with a substantial cut in power consumption.
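
For the curious, here's a rough sketch of why an extra, large on-die DRAM cache level helps, using the textbook average-memory-access-time (AMAT) formula.  All of the latencies and hit rates below are hypothetical numbers picked purely for illustration; they are not taken from the article or from IBM's research:

```python
# Minimal sketch of the standard AMAT model:
#   AMAT = hit_time + miss_rate * miss_penalty
# All latencies and hit rates here are hypothetical, for illustration only.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time for one cache level."""
    return hit_time_ns + miss_rate * miss_penalty_ns

dram_penalty_ns = 50.0  # main-memory access, per the figures above

# Without an eDRAM level: SRAM L2 (assume 5 ns, 90% hit rate), misses go to DRAM.
no_edram = amat(5.0, 0.10, dram_penalty_ns)

# With a large eDRAM L3 (assume 10 ns, 96% hit rate) catching L2 misses first.
with_edram = amat(5.0, 0.10, amat(10.0, 0.04, dram_penalty_ns))

print(f"without eDRAM: {no_edram:.1f} ns, with eDRAM: {with_edram:.1f} ns")
# without eDRAM: 10.0 ns, with eDRAM: 6.2 ns
```

Even a cache level slower than SRAM pays off if it's big enough to catch most of the misses that would otherwise go all the way to main memory.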

Indeed, designs like Innovative Silicon's Z-RAM™ (which AMD licensed back in January of '06) suggest that assertion is basically correct.

Give the article a read and then comment in the forums!  
