Unfortunately, with currently available operating systems, you cannot use your GPU ("3D card") to improve desktop rendering quality. The simple reason is that to use the GPU for multisampling or supersampling, your OS would need to push the original (or high enough quality) data into the GPU pipeline. For example, the OS and graphics libraries would need to render all vector graphics as vectors, provide images at higher resolution than your display can show, position all elements with mathematical (subpixel) accuracy, etc. Currently available operating systems do not do this, for historical reasons.
I'd guess Mac OS X is closest to rendering the whole desktop on the GPU with improved quality, because its graphics APIs are close enough to what would be required. Such rendering would still need to be supported by 3rd party applications, though: any 3rd party application designed to push pixels directly to the display will not provide enough data for the GPU to improve rendering quality.
In addition, current display technology is still (on average) too poor to render e.g. mathematically correct fonts, so we still need fonts that adapt to the pixel grid instead of rendering mathematically correct letter shapes (modifying fonts to match pixels is called "font hinting"). Once we have around 300 ppi displays for desktop use, we can skip adapting fonts to pixels and we're one step closer to GPU accelerated desktops. Until we have high enough display resolution, rendering mathematically correct font shapes will result in somewhat blurred text.
Note that human vision is not limited to "300 dpi", despite what Apple Inc says in their ads. Human vision is limited by the combination of the distance to the viewed object and the size of that object. The "300 dpi" or "300 ppi" figure is the limit for one distance – if you move your head closer, you need a higher display resolution to hit the limit of human vision.
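To make the distance dependence concrete, here is a small sketch that computes the pixel density needed so that one pixel subtends a given visual angle. It assumes an acuity of 1 arcminute, a commonly quoted ballpark for 20/20 vision, not a hard physiological constant:

```python
import math

def required_ppi(viewing_distance_inches, acuity_arcmin=1.0):
    """PPI needed so one pixel subtends `acuity_arcmin` arcminutes
    at the given viewing distance (1 arcmin ~ 20/20 vision)."""
    angle_rad = math.radians(acuity_arcmin / 60.0)
    pixel_size_inches = viewing_distance_inches * math.tan(angle_rad)
    return 1.0 / pixel_size_inches

# At a typical handheld viewing distance of ~12 inches:
print(round(required_ppi(12)))  # ~286 ppi, roughly Apple's "300 ppi" claim
# Halve the distance and the requirement roughly doubles:
print(round(required_ppi(6)))   # ~573 ppi
```

This shows why "300 ppi" only holds for one assumed distance: the required density scales inversely with how close you view the display.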