Monte Carlo arithmetic operations
Working out whether a software-based calculation involving floating-point values delivers a sensible answer requires considerable mathematical sophistication, and in practice the analysis is often impractical or intractable. The vast majority of developers make no such effort; indeed, most don’t even know why the effort is needed. Various ‘end-user’ solutions have been proposed, e.g., interval arithmetic.
One interesting solution is to perturb the result of each floating-point operation and measure the effect on the final answer. A calculation that is sensitive to small random changes in the results of its operations (some randomness is inherent in any operation on values that can only be represented to finite precision) will produce answers that depend on the direction and magnitude of the perturbations. Comparing the answers from several executions of the program provides a measure of one kind of error present in the calculation.
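As a concrete illustration, here is a minimal sketch in Python (the perturb() helper and the choice of one double-precision ulp as the perturbation scale are mine, for illustration only) that drives a naive summation through randomly perturbed additions and compares the spread of answers across runs:

    import random

    EPS = 2.0 ** -52    # perturbation scale: one double-precision ulp (an arbitrary choice for this sketch)

    def perturb(x):
        # Apply a random relative perturbation of at most one unit in the last place.
        return x * (1.0 + EPS * random.uniform(-1.0, 1.0))

    def perturbed_sum(values):
        # Naive left-to-right summation, perturbing the result of every addition.
        total = 0.0
        for v in values:
            total = perturb(total + v)
        return total

    # An ill-conditioned sum: two large values that cancel, plus many small terms.
    values = [1e16, -1e16] + [1.0] * 1000

    results = [perturbed_sum(values) for _ in range(20)]
    print(min(results), max(results))   # a wide spread flags sensitivity to round-off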
Monte Carlo arithmetic is a proposed extension of floating-point arithmetic that randomly selects how round-off occurs in each operation (the proposer provides sample code).
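The following sketch (my own illustration, not the proposer’s sample code) shows the random-rounding idea: an addition is carried out at higher precision and the result is rounded to single precision by picking one of the two neighbouring representable values. A full implementation would weight the choice, or inject a random fraction of an ulp, rather than use the 50/50 coin toss here:

    import random
    import numpy as np

    def mc_round32(x):
        # Round the double x to single precision, choosing the rounding
        # direction at random instead of always rounding to nearest.
        nearest = np.float32(x)
        if float(nearest) == x:        # exactly representable: no round-off to randomise
            return nearest
        if float(nearest) < x:         # find the two float32 neighbours bracketing x
            lo, hi = nearest, np.nextafter(nearest, np.float32(np.inf))
        else:
            lo, hi = np.nextafter(nearest, np.float32(-np.inf)), nearest
        return lo if random.random() < 0.5 else hi

    def mc_add32(a, b):
        # Single-precision addition with a randomised rounding direction;
        # the double-precision intermediate is a simplification of this sketch.
        return mc_round32(float(a) + float(b))

    # Summing 0.1 one hundred times gives a slightly different answer on each run.
    total = np.float32(0.0)
    for _ in range(100):
        total = mc_add32(total, np.float32(0.1))
    print(total)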
With computing power continuing to increase, running a program several times is often a viable option (we don’t all number crunch for CPU days). Most of the transistors on a modern CPU chip are devoted to memory cache; using a few of them to support Monte Carlo arithmetic instructions is entirely practical. Perhaps when vendors get over supporting the base-10 radix required by the latest IEEE 754R standard, and are looking for something new to attract customers, they will provide a mechanism that makes it practical to obtain estimates of some of the error in floating-point calculations.
Will IEEE 754 become a fringe representation?
Many people believe that, with a few historical exceptions, the IEEE 754 standard has won the floating-point bit-representation battle. What these people have forgotten is that money rules; customers are willing to ditch standards if doing so increases profit. FPGA devices can be configured to perform floating-point operations faster and more cheaply than commodity CPUs.
Making optimal use of an FPGA may require using a radix of 4 and, for the time being, automatically converting back and forth to an external 754 radix-2 representation. In those cases where multiplication/division operations are more common than addition/subtraction, a logarithmic number system has performance benefits (see the sketch below). For specialist scientific calculations (where CPU time is measured in days) purpose-built FPGA devices are the path to significant performance improvements. In many mass-market applications the full power of a 32-bit representation is not needed, and a representation using fewer bits does an acceptable job on less powerful (i.e., cheaper) hardware.
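As a toy illustration of why a logarithmic number system favours multiplication-heavy code (the class and method names below are mine, and zero/exact cancellation are left out to keep things short): a value is stored as a sign plus the base-2 logarithm of its magnitude, so multiply/divide reduce to adding/subtracting logs, while addition needs the more expensive ‘Gaussian logarithm’ step:

    import math

    class LNS:
        # Toy logarithmic number system value: a sign plus log2 of the magnitude.
        def __init__(self, value=None, *, sign=1, log2=0.0):
            if value is not None:
                self.sign = 1 if value >= 0 else -1
                self.log2 = math.log2(abs(value))
            else:
                self.sign, self.log2 = sign, log2

        def __mul__(self, other):
            # Multiplication reduces to adding the logarithms: cheap in hardware.
            return LNS(sign=self.sign * other.sign, log2=self.log2 + other.log2)

        def __truediv__(self, other):
            # Division reduces to subtracting the logarithms.
            return LNS(sign=self.sign * other.sign, log2=self.log2 - other.log2)

        def __add__(self, other):
            # Addition is the expensive case: log(a+b) = log(a) + log(1 + 2**(log(b)-log(a)));
            # hardware uses a lookup table or approximation for the log(1 +- 2**d) term.
            a, b = (self, other) if self.log2 >= other.log2 else (other, self)
            d = b.log2 - a.log2    # <= 0 by construction
            if a.sign == b.sign:
                return LNS(sign=a.sign, log2=a.log2 + math.log2(1.0 + 2.0 ** d))
            return LNS(sign=a.sign, log2=a.log2 + math.log2(1.0 - 2.0 ** d))

        def to_float(self):
            return self.sign * 2.0 ** self.log2

    print((LNS(3.0) * LNS(4.0)).to_float())    # 12.0, up to round-off
    print((LNS(3.0) + LNS(4.0)).to_float())    # ~7.0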
Customer demand for higher performance and lower cost will push vendors to deliver purpose-designed products. IEEE 754 may become the floating-point representation used by those without spending power, surviving because it was once designed into CPUs and vendors are forced to continue supporting it for backwards compatibility.