Arbitrary-precision arithmetic

On a computer, arbitrary-precision arithmetic, also called bignum arithmetic, is a technique that allows computer programs to perform calculations on integers and rational numbers with an arbitrary number of digits of precision, limited only by the available memory of the host system. It typically works by storing a number as a variable-length array of digits in some base, in contrast to most computer arithmetic, which uses a fixed number of bits given by the size of the processor registers. Rational numbers can be stored as a pair of integers for the numerator and denominator, in a fixed-point format with a fixed denominator, or in a floating-point format as a significand multiplied by an arbitrary exponent.
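
To make the digit-array idea concrete, here is a minimal sketch in Python (whose built-in int is itself an arbitrary-precision type) that stores a non-negative integer as a little-endian list of digits in base 10**9, so each "digit" fits comfortably in a machine word. The names BASE, to_digits, and from_digits are illustrative, not taken from any particular library.

```python
BASE = 10 ** 9  # each stored digit is a base-10**9 "limb"

def to_digits(n: int) -> list[int]:
    """Split a non-negative integer into a little-endian digit array."""
    digits = [] if n else [0]
    while n:
        n, r = divmod(n, BASE)
        digits.append(r)
    return digits

def from_digits(digits: list[int]) -> int:
    """Recombine a little-endian digit array into an integer."""
    n = 0
    for d in reversed(digits):
        n = n * BASE + d
    return n
```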

Perhaps the earliest widespread implementation of arbitrary-precision arithmetic was in Maclisp. Later, the VAX/VMS operating system offered bignum facilities as a collection of string functions. Today, bignum libraries are available for most modern programming languages (see below). Almost all computer algebra systems implement arbitrary-precision arithmetic.

Arbitrary-precision arithmetic is sometimes called infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite (and is bounded in practice), although it can grow very large.

Arbitrary-precision arithmetic should not be confused with symbolic computation, as provided by computer algebra systems. The latter represent numbers by exact symbolic expressions such as √2, or even by computer programs, and in this way can symbolically represent any computable number (limited by available memory). Numeric results can still only be provided to arbitrary (finite) precision in general, however, by evaluating the symbolic expression using arbitrary-precision arithmetic.
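
The distinction can be seen with SymPy, a computer algebra system for Python (a third-party package, used here purely as an illustration): the product below is held as an exact symbolic expression until it is explicitly evaluated to a requested number of digits.

```python
import sympy

x = sympy.sqrt(2) * sympy.pi  # held as the exact symbolic expression sqrt(2)*pi
print(x)                      # -> sqrt(2)*pi
print(sympy.N(x, 50))         # numeric value to 50 significant digits
```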

Applications

Arbitrary-precision arithmetic is usually much slower than arithmetic using numbers that fit entirely within processor registers, since the latter is usually implemented directly in hardware whereas the former must be implemented in software. Consequently, arbitrary precision is used only in a limited range of applications that require extremely precise results or exact integer arithmetic with very large numbers.

The most common application is encryption, whose algorithms commonly employ arithmetic with integers of hundreds or thousands of digits.
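
For instance, the core operation of RSA-style encryption is modular exponentiation of integers far larger than any machine word. A minimal sketch using Python's built-in bignum integers (the numbers here are illustrative toys, not a real key):

```python
# Modular exponentiation on bignums; Python's three-argument pow()
# computes (message ** exponent) % modulus without ever forming the
# astronomically large intermediate power.
modulus = 2**2048 - 159        # an illustrative 2048-bit modulus, not a real RSA key
exponent = 65537               # a commonly used public exponent
message = 0x48656C6C6F         # the bytes "Hello" packed into an integer

ciphertext = pow(message, exponent, modulus)
print(ciphertext.bit_length())  # on the order of 2048 bits
```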

Arbitrary-precision arithmetic is also used to compute fundamental mathematical constants such as pi to millions of digits or more and to analyze their properties.
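
As a small-scale illustration, Machin's formula pi/4 = 4 arctan(1/5) - arctan(1/239) can be evaluated entirely with integer ("fixed-point") arithmetic. The sketch below computes pi to a requested number of decimal digits; serious record computations use much faster series and multiplication algorithms.

```python
def arctan_recip(x: int, one: int) -> int:
    """arctan(1/x) in fixed point, where `one` is the scaled value of 1."""
    total = term = one // x          # first term of the series: 1/x
    n, sign = 1, -1
    while term:
        term //= x * x               # next odd power of 1/x
        n += 2
        total += sign * (term // n)  # alternating series 1/x - 1/(3x^3) + ...
        sign = -sign
    return total

def pi_fixed_point(digits: int) -> int:
    """Return pi * 10**digits as an integer, via Machin's formula."""
    one = 10 ** (digits + 10)        # ten guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, one) - arctan_recip(239, one))
    return pi // 10 ** 10            # strip the guard digits

print(pi_fixed_point(50))            # 31415926535897932384...
```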

A third example is in rendering fractal images at extremely high magnification, where the coordinates of neighbouring pixels can differ by less than the precision of hardware floating point.

Algorithms

Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that N digits are employed, algorithms have been designed to minimize the asymptotic complexity for large N.

The simplest algorithm is for addition, where one simply adds the digits in sequence, carrying as necessary, which yields an O(N) algorithm (see big O notation).
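
Using the little-endian base-10**9 digit arrays sketched earlier (add_digits is an illustrative name, not a library function), addition with carrying looks like this:

```python
BASE = 10 ** 9  # same base as in the representation sketch above

def add_digits(a: list[int], b: list[int]) -> list[int]:
    """Add two little-endian digit arrays; O(N) in the number of digits."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        carry, digit = divmod(s, BASE)  # keep one digit, carry the rest
        result.append(digit)
    if carry:
        result.append(carry)
    return result
```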

For multiplication, the most straightforward algorithm, the one used for multiplying numbers by hand, requires O(N²) operations, but multiplication algorithms with complexity as low as O(N log N log log N) have been devised, such as the FFT-based Schönhage–Strassen algorithm (and also algorithms with slightly worse asymptotic complexity, such as Karatsuba multiplication, but with sometimes superior real-world performance for moderate N).
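
The following sketch of Karatsuba's divide-and-conquer scheme, one of the sub-quadratic methods just mentioned, operates on Python integers for brevity (Python's ints are already bignums; the point here is the recursion, which replaces four half-size multiplications with three and so needs only O(N^1.585) digit operations):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative integers using three half-size recursive products."""
    if x < 10 or y < 10:                     # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)  # split x = hi_x * 2**m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2  # Karatsuba's trick
    return (z2 << (2 * m)) + (z1 << m) + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```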

Arbitrary-precision software

Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides datatypes and subroutines to store numbers with the requested precision and to perform computations.
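
Python, for example, ships such facilities in its standard library: int is an arbitrary-precision integer, fractions.Fraction holds exact rationals as numerator/denominator pairs, and decimal.Decimal provides decimal floating point with user-selectable precision.

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50                  # request 50 significant digits
print(Decimal(1) / Decimal(7))          # 0.142857142857... to 50 digits
print(Fraction(1, 3) + Fraction(1, 6))  # exactly 1/2, no rounding
print(2 ** 512)                         # a 155-digit integer, no overflow
```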

Stand-alone application software that supports arbitrary-precision computations is also available.
