Arbitrary precision is possible in scientific computing. You just pay for it in additional processing time and memory.
It is the responsibility of the scientist to be aware of the computational limitations in their calculations and to test for them. For example, small changes in the input parameters should produce correspondingly small changes in the output.
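A minimal sketch of such a sensitivity check in Python (the function model is purely illustrative, not from the original material):

import math

def model(x):
    # Stand-in for whatever calculation is being tested.
    return math.exp(x) / (1.0 + x * x)

x = 2.0
eps = 1e-8                       # small perturbation of the input
baseline = model(x)
perturbed = model(x + eps)

relative_change = abs(perturbed - baseline) / abs(baseline)
print(f"relative change in output: {relative_change:.2e}")
# For a well-behaved calculation this is of the same order as the
# relative change in the input (roughly eps / x here).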
Arbitrary precision is achieved by using an arbitrary number of machine words to encode a single number. Arithmetic operations become more complicated, as carries, rounding, and remainders must be tracked and fed back into the computation:
http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
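To make this concrete, here is a minimal Python sketch of adding two numbers stored as lists of 32-bit words, carrying the overflow from each word into the next. Real libraries such as GMP do the same bookkeeping far more efficiently; Python's own integers are already arbitrary precision, which the final check exploits.

BASE = 2 ** 32   # each "word" holds 32 bits

def to_words(n):
    # Split a non-negative integer into base-2**32 words, least significant first.
    words = []
    while n:
        words.append(n % BASE)
        n //= BASE
    return words or [0]

def add_words(a, b):
    # Add two word lists, propagating the carry between words.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        result.append(s % BASE)   # the low word stays in this position
        carry = s // BASE         # the overflow is carried into the next word
    if carry:
        result.append(carry)
    return result

x, y = 2 ** 100 + 12345, 2 ** 90 + 67890
total = add_words(to_words(x), to_words(y))
assert sum(w * BASE ** i for i, w in enumerate(total)) == x + y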
Support for high-precision numbers exists in almost all programming languages. Some high-level tools, such as Mathematica, do the bookkeeping for you by estimating the precision you need.
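In Python, for instance, the standard-library decimal module carries out arithmetic in software at whatever precision you ask for (the 50 digits below are an arbitrary choice):

from decimal import Decimal, getcontext

print(1 / 3)                    # ordinary double precision: ~16 significant digits

getcontext().prec = 50          # request 50 significant decimal digits
print(Decimal(1) / Decimal(3))  # 0.33333333333333333333333333333333333333333333333333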
Generally, these tools are completely agnostic to the underlying processor, as long as that processor does not make errors: