
Why new hard disks might not be much fun for XP users
Hardware
Written by Daniel   
Thursday, 11 March 2010 19:09

From Ars Technica

A rather surprising article hit the front page of the BBC on Tuesday: the next generation of hard disks could cause slowdowns for XP users.

Not normally the kind of thing you'd expect to be placed so prominently, but the warning it gives is a worthy one, if timed a bit oddly. The world of hard disks is set to change, and the impact could be severe. In the remarkably conservative world of PC hardware, it's not often that a 30-year-old convention gets discarded. Even this change has been almost a decade in the making.

The problem is hard disk sectors. A sector is the smallest unit of a hard disk that software can read or write. Even if a file is only a single byte long, the operating system still has to read or write at least a full 512-byte sector to access it.
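
As a rough illustration, a drive reports these sector sizes to the operating system. The minimal sketch below assumes a Linux system and a block device named sda (adjust as needed); it just reads the logical and physical sector sizes from sysfs, which on pre-Advanced-Format drives are both 512 bytes.

    # Minimal sketch: print the sector sizes a drive exposes to software.
    # Assumes Linux and a hypothetical block device named "sda".

    def read_queue_value(device, attribute):
        """Read one integer attribute from /sys/block/<device>/queue/."""
        with open(f"/sys/block/{device}/queue/{attribute}") as f:
            return int(f.read().strip())

    device = "sda"  # hypothetical device name for illustration
    logical = read_queue_value(device, "logical_block_size")    # smallest unit software can address
    physical = read_queue_value(device, "physical_block_size")  # smallest unit the drive writes internally

    print(f"{device}: logical sector = {logical} bytes, physical sector = {physical} bytes")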

512-byte sectors have been the norm for decades. The 512-byte size was itself inherited from floppy disks, making it an even older historical artifact. The age of this standard means that it's baked into a lot of important software: PC BIOSes, operating systems, and the boot loaders that hand control from the BIOS to the operating system. All of this makes migration to a new standard difficult.

Given such entrenchment, the obvious question is, why change? We all know that the PC world isn't keen on migrating away from long-lived, entrenched standards—the continued use of IPv4 and the PC BIOS are two fine examples of 1970s and 1980s technology sticking around long past their prime, in spite of desirable replacements (IPv6 and EFI, respectively) being available. But every now and then, a change is forced on vendors in spite of their naturally conservative instincts.
Hard disks are unreliable

In this case, there are two reasons for the change. The first is that hard disks are not actually very reliable. We all like to think of hard disks as neatly storing the 1s and 0s that make up our data and then reading them back with perfect accuracy, but unfortunately the reality is nothing like as neat.

Instead of having a nice digital signal written in the magnetic surface—little groups of magnets pointing "all north" or "all south"—what we actually have is groups pointing "mostly south" or "mostly north." Converting this imprecise analog data back into the crisp digital ones and zeroes that represent our data requires the analog signal to be processed.

That processing isn't enough to reliably restore the data, though. Fundamentally, it produces only educated guesses; it's probably right, but could be wrong. To counter this, hard disks store a substantial amount of error-checking data alongside each sector. This data is invisible to software, but is checked by the drive's firmware. It gives the drive a substantial ability to reconstruct missing or damaged data using clever math, but the storage overhead is considerable.

In a 2004-vintage disk, every 512 bytes of data typically required 40 bytes of error-checking data, plus a further 40 bytes used to locate and mark the start of the sector and to provide space between sectors. That's 80 bytes of overhead for every 512 bytes of user data, so about 13% of the theoretical capacity of a hard disk is gone automatically, just to account for the inevitable errors that come up when reading and interpreting the analog signal stored on the disk. With this 40-byte overhead, the drive can correct something like 50 consecutive unreadable bits. Longer codes could recover from longer errors, but the trade-off is that they eat into storage capacity.
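
That 13% figure is easy to check: the quick sketch below simply reproduces the arithmetic from the numbers quoted above (40 bytes of error-correction data plus 40 bytes of sync and gap space per 512-byte sector).

    # Overhead arithmetic for a 2004-vintage drive, using the figures quoted above.
    DATA_BYTES = 512        # user data per sector
    ECC_BYTES = 40          # error-correction data per sector
    GAP_SYNC_BYTES = 40     # sector start marker and inter-sector gap

    overhead = ECC_BYTES + GAP_SYNC_BYTES
    total = DATA_BYTES + overhead
    print(f"{overhead} of every {total} bytes on the platter are overhead "
          f"({overhead / total:.1%} of raw capacity)")   # roughly 13.5%
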
Higher areal density is a blessing and a curse

This has been the status quo for many years. What's changing to make that a problem now? Throughout that period, areal density—the amount of data stored in a given disk area—has been on the rise. Current disks typically have an areal density of around 400 Gbit per square inch; five years ago, the figure was closer to 100. The problem with packing all these bits into ever-decreasing areas is that the analog signal on the disk gets steadily worse. The signals are weaker, there's more interference from adjacent data, and the disk is more sensitive to minor fluctuations in voltage and other suboptimal conditions when writing.
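
To put that growth in perspective, going from roughly 100 to roughly 400 Gbit per square inch in five years works out to about 32% compound growth per year; the short calculation below uses only the two figures quoted above.

    # Compound annual growth implied by the areal density figures quoted above.
    density_then = 100   # Gbit per square inch, circa 2005
    density_now = 400    # Gbit per square inch, circa 2010
    years = 5

    annual_growth = (density_now / density_then) ** (1 / years) - 1
    print(f"Areal density grew roughly {annual_growth:.0%} per year")   # about 32%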
