Enterprise-grade hard drives are built with a different philosophy from consumer-grade drives - in the world of server storage, speed isn't king; reliability is.
For a server, what you're looking for is a high MTBF (Mean Time Between Failures) rating, which gives a rough statistical idea of how long a drive can be expected to run before age-related failures become likely. The drives in my workstation don't have that value specified at all, because they're bog-standard Seagate consumer drives. The drives in my servers, on the other hand, are rated at around 1.4 million hours MTBF.
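To put that figure in perspective, here's a quick back-of-the-envelope sketch. MTBF ratings usually assume a constant failure rate (an exponential model), under which you can convert the rating into an annualized failure rate for a drive running 24/7 - the function name and the 8766 hours-per-year constant here are my own choices for illustration:

```python
import math

def annualized_failure_rate(mtbf_hours: float, hours_per_year: float = 8766) -> float:
    """Probability that a drive fails within one year of 24/7 operation,
    assuming the constant-failure-rate (exponential) model behind MTBF figures."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# The 1.4 million hour enterprise rating mentioned above:
print(f"{annualized_failure_rate(1_400_000):.2%}")  # ~0.62%
```

In other words, a 1.4 million hour MTBF suggests that roughly 0.6% of such drives would fail in any given year of continuous operation - a fleet-level statistic, not a promise about any individual drive.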
The harder you push the envelope, the lower your stability becomes; overclocking is a prime example of this. The more you force the same technology to do, and the closer to its limits you bend it, the less confident you can be that it will survive in a datacentre environment, churning away day after day for multiple years between swapouts - because that's precisely what a server's hard drive is for. Desktop PCs are used very differently: they do far less total work (depending on the setup), in shorter but heavier bursts, and their drives are built according to what sells - capacity and speed.