# Product (math)

A product is the result of a multiplication as well as a term that represents a multiplication. The linked elements are called factors.

In this sense, a product is a mapping of the form

${\displaystyle \cdot \;:\; A\times B\;\rightarrow \;C}$

where the product of ${\displaystyle a\in A}$ and ${\displaystyle b\in B}$ is usually written as ${\displaystyle a\cdot b}$.

Derived from the Latin word producere in the sense of "to bring forth", "product" is originally the name for the result of a multiplication of two numbers (from Latin multiplicare = to multiply). The use of the multiplication dot ${\displaystyle \;\cdot \;}$ goes back to Gottfried Wilhelm Leibniz, the alternative symbol ${\displaystyle \times }$ to William Oughtred.

## Products of two numbers

Here ${\displaystyle A=B=C}$ always holds; that is, the product of two numbers is again a number. Products are also assumed here to be associative, i.e.

${\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)\quad {\text{for all }}a,b,c\in A}$

### Product of two natural numbers

3 times 4 equals 12

For example, if one arranges game pieces in a rectangular pattern in r rows of s pieces each, one needs

${\displaystyle r\cdot s=\sum _{i=1}^{s}r=\sum _{j=1}^{r}s}$

game pieces for this. The multiplication is a short form for the repeated addition of r summands (corresponding to the r rows), all of which have the value s (there are s pieces in each row); one needs r − 1 plus signs for this. The total number can, however, also be computed by adding the number r (the number of pieces standing one behind another in a column) a total of s times (corresponding to the number s of such columns standing next to one another). This already shows the commutativity of the multiplication of two natural numbers.
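As a small sketch (assuming Python as the illustration language, with a hypothetical helper name), the product of two natural numbers can be computed as repeated addition, which also makes the commutativity visible:

```python
def product_by_addition(r: int, s: int) -> int:
    """Multiply two natural numbers by repeated addition:
    add the value s a total of r times (one summand per row)."""
    total = 0
    for _ in range(r):
        total += s
    return total

# Both ways of counting the rectangle of pieces agree:
assert product_by_addition(3, 4) == product_by_addition(4, 3) == 12
```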

If one counts the number 0 among the natural numbers, then they form a semiring. What is missing for a ring are the additive inverses: there is no natural number x with the property 3 + x = 0.

A product in which the number 0 appears as a factor always has the value zero: an arrangement of zero rows of pieces contains not a single piece, regardless of the number of pieces per row.

### Product of two whole numbers

Adjoining the negative whole numbers yields the ring ${\displaystyle \mathbb {Z} }$ of integers. Two integers are multiplied by multiplying their absolute values and attaching a sign according to the following table:

${\displaystyle {\begin{array}{|c|cc|}\hline \cdot &-&+\\\hline -&+&-\\+&-&+\\\hline \end{array}}}$

In words this table says:

• Minus times minus results in plus
• Minus times plus results in minus
• Plus times minus results in minus
• Plus times plus results in plus

For a strictly formal definition via equivalence classes of pairs of natural numbers, see the article on integers.
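The sign rule can be sketched in Python (a hypothetical helper, not part of any standard library): multiply the absolute values and combine the signs as in the table above.

```python
def multiply_integers(a: int, b: int) -> int:
    """Multiply two integers by multiplying absolute values
    and attaching the sign according to the sign table."""
    sign = 1 if (a >= 0) == (b >= 0) else -1
    return sign * (abs(a) * abs(b))

assert multiply_integers(-3, -4) == 12   # minus times minus gives plus
assert multiply_integers(-3, 4) == -12   # minus times plus gives minus
```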

### Product of two fractions

In the integers one can add, subtract and multiply without restriction. Division by a nonzero number is only possible if the dividend is a multiple of the divisor. This restriction is lifted by passing to the field ${\displaystyle \mathbb {Q} }$ of rational numbers, i.e. to the set of all fractions. In contrast to their sum, the product of two fractions does not require a common denominator:

${\displaystyle {\frac {z}{n}}\cdot {\frac {z'}{n'}}={\frac {z\cdot z'}{n\cdot n'}}}$

If necessary, the result can be reduced to lowest terms.
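This rule is exactly what Python's standard-library `fractions.Fraction` type implements, so it serves as a quick check:

```python
from fractions import Fraction

# Product of two fractions: numerators and denominators are
# multiplied separately; no common denominator is needed.
p = Fraction(2, 3) * Fraction(9, 4)   # (2*9) / (3*4) = 18/12
assert p == Fraction(3, 2)            # reduced to lowest terms
```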

### Product of two real numbers

As Euclid was able to show, there is no rational number whose square is two. Likewise, the ratio of the circumference of a circle to its diameter, i.e. the circle constant π, cannot be represented as the quotient of two integers. Both "gaps" are closed by a so-called completion in the transition to the field ${\displaystyle \mathbb {R} }$ of real numbers. Since an exact definition of the product is not possible in the brevity offered here, the idea is only briefly sketched:

Every real number can be understood as an infinite decimal fraction, for example ${\displaystyle {\sqrt {2}}=1.4142\ldots }$ and ${\displaystyle \pi =3.1415\ldots }$. The rational approximations, such as 1.41 and 3.14, can easily be multiplied together. By gradually increasing the number of decimal places, one obtains a sequence of approximate values for the product ${\displaystyle {\sqrt {2}}\cdot \pi =4.4428\ldots }$, in a process that cannot be carried out in finite time.
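A minimal sketch of this approximation process, using Python's standard-library `decimal` module and hard-coded decimal expansions of the two numbers:

```python
from decimal import Decimal

# Truncated decimal expansions of sqrt(2) and pi:
SQRT2 = "1.4142135623730950"
PI    = "3.1415926535897932"

# Multiplying ever longer rational approximations approaches
# the true product sqrt(2) * pi = 4.4428...
for digits in (3, 6, 12):
    a = Decimal(SQRT2[:digits + 2])  # "1." plus `digits` decimals
    b = Decimal(PI[:digits + 2])
    print(digits, "decimals:", a * b)
```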

### Product of two complex numbers

Even over the real numbers there remain unsolvable equations such as ${\displaystyle x^{2}=-1}$: for negative as well as positive values of ${\displaystyle x}$, the square on the left is never negative. By passing to the field ${\displaystyle \mathbb {C} }$ of complex numbers, obtained by so-called adjunction of ${\displaystyle \mathrm {i} ={\sqrt {-1}}}$, the Gaussian number plane emerges from the real number line. Two points of this plane, i.e. two complex numbers, are multiplied formally, taking ${\displaystyle \mathrm {i} ^{2}=-1}$ into account:

${\displaystyle {\begin{aligned}(a+b\,\mathrm {i} )\cdot (c+d\,\mathrm {i} )&=a\cdot c+a\cdot d\,\mathrm {i} +b\cdot c\,\mathrm {i} +b\cdot d\cdot \mathrm {i} ^{2}\\&=(a\cdot c-b\cdot d)+(a\cdot d+b\cdot c)\,\mathrm {i} \end{aligned}}}$
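This expansion can be cross-checked against Python's built-in complex arithmetic (the function name here is a hypothetical helper):

```python
def complex_product(a, b, c, d):
    """(a + bi)(c + di), expanded using i**2 = -1:
    real part a*c - b*d, imaginary part a*d + b*c."""
    return (a * c - b * d, a * d + b * c)

# (1 + 2i)(3 + 4i) = -5 + 10i
assert complex_product(1, 2, 3, 4) == (-5, 10)
assert complex(1, 2) * complex(3, 4) == complex(-5, 10)
```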

#### Geometric interpretation

A complex number in polar coordinates

A complex number can also be written in plane polar coordinates :

${\displaystyle a+b\,\mathrm {i} =r\cdot (\cos(\varphi )+\mathrm {i} \sin(\varphi ))=r\cdot \mathrm {e} ^{\mathrm {i} \varphi }}$

If furthermore

${\displaystyle c+d\,\mathrm {i} =s\cdot (\cos(\psi )+\mathrm {i} \sin(\psi ))=s\cdot \mathrm {e} ^{\mathrm {i} \psi }}$

then the addition theorems for sine and cosine yield

${\displaystyle (a\cdot c-b\cdot d)+(a\cdot d+b\cdot c)\,\mathrm {i} =r\cdot s\cdot (\cos(\varphi +\psi )+\mathrm {i} \sin(\varphi +\psi ))=r\cdot s\cdot \mathrm {e} ^{\mathrm {i} (\varphi +\psi )}}$

Geometrically, this means: the lengths are multiplied while the angles are added.
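Python's standard-library `cmath` module makes this easy to verify numerically (the specific radii and angles below are arbitrary example values):

```python
import cmath

# Multiplying complex numbers in polar form:
# moduli multiply, arguments add.
z = cmath.rect(2.0, 0.5)   # r = 2, angle 0.5 rad
w = cmath.rect(3.0, 0.7)   # s = 3, angle 0.7 rad
p = z * w
assert abs(abs(p) - 6.0) < 1e-12          # 2 * 3
assert abs(cmath.phase(p) - 1.2) < 1e-12  # 0.5 + 0.7
```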

### Product of two quaternions

The complex numbers, too, can be extended algebraically. This yields a real four-dimensional space, the so-called Hamiltonian quaternions ${\displaystyle \mathbb {H} }$. The associated multiplication rules are presented in detail in the article on quaternions. In contrast to the number domains above, the multiplication of quaternions is not commutative; i.e., ${\displaystyle a\cdot b}$ and ${\displaystyle b\cdot a}$ are in general different.

## More examples of commutative rings

### Residue classes of integers

It is a well-known fact that the product of two numbers is odd if and only if both factors are odd. Similar rules hold for divisibility by an integer N greater than two. Here the even numbers correspond to the multiples of N; an even number is one that is divisible by 2 without remainder. For odd numbers, one must distinguish which remainder the integer division of the number by N leaves. Modulo 3, as one says, there are three residue classes of integers: the multiples of three, those with remainder 1, and those with remainder 2. The product of two numbers that each leave remainder 2 always leaves remainder 1 modulo three.

The set of these residue classes, written ${\displaystyle \mathbb {Z} /N\mathbb {Z} }$, has exactly N elements. A typical element has the form ${\displaystyle a+N\mathbb {Z} }$ and stands for the set of all integers which leave the same remainder as the number a when divided by N. On the set of all such residue classes,

${\displaystyle (a+N\mathbb {Z} )+(b+N\mathbb {Z} )=a+b+N\mathbb {Z} }$

${\displaystyle (a+N\mathbb {Z} )\cdot (b+N\mathbb {Z} )=a\cdot b+N\mathbb {Z} }$

define an addition and a multiplication. The resulting ring is called the residue class ring modulo N. It is actually a field if and only if N is a prime number. Example: modulo 5, the residue class of 2 is the multiplicative inverse of that of 3, since 2 · 3 = 6 leaves remainder 1 modulo 5. Multiplicative inverses modulo N are found systematically using the extended Euclidean algorithm.
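A minimal sketch of the extended Euclidean algorithm for finding such inverses (the function name is an assumption for illustration):

```python
def inverse_mod(a: int, n: int) -> int:
    """Multiplicative inverse of a modulo n via the extended
    Euclidean algorithm; requires gcd(a, n) == 1."""
    old_r, r = a % n, n
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo n")
    return old_s % n

# The example from the text: modulo 5, the class of 2 is
# inverse to that of 3, since 2 * 3 = 6 leaves remainder 1.
assert inverse_mod(2, 5) == 3
assert (2 * 3) % 5 == 1
```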

### Function rings

If the ring R is commutative, then the set ${\displaystyle \mathbb {F} :=R^{M}}$ (the set of all functions from a non-empty set M with values in R) also forms a commutative ring if addition and multiplication in ${\displaystyle \mathbb {F} }$ are defined pointwise, that is, by setting

${\displaystyle (f+g)(m):=f(m)+g(m)}$
${\displaystyle (f\cdot g)(m):=f(m)\cdot g(m)}$

for all ${\displaystyle m\in M}$.
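A pointwise product of functions can be sketched in Python in a couple of lines (hypothetical helper name, real-valued functions assumed):

```python
def pointwise_product(f, g):
    """Product in the function ring R**M: multiply values pointwise."""
    return lambda m: f(m) * g(m)

square = lambda x: x * x
double = lambda x: 2 * x
h = pointwise_product(square, double)
assert h(3) == 9 * 6  # (f.g)(3) = f(3) * g(3) = 54
```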

If one chooses as ring R the real numbers ${\displaystyle \mathbb {R} }$ with the usual addition and multiplication, and as M an open subset of ${\displaystyle \mathbb {R} }$ or, more generally, of ${\displaystyle \mathbb {R} ^{n}}$, then the notions of continuity and differentiability of functions ${\displaystyle f:M\to R}$ are meaningful. The set of continuous or differentiable functions then forms a subring of the function ring ${\displaystyle \mathbb {F} }$, which is trivially commutative again if ${\displaystyle \mathbb {F} }$, i.e. R, is commutative.

### Convolution product

Convolution of the rectangular function with itself results in the triangular function

Let ${\displaystyle f,g\colon \mathbb {R} \rightarrow \mathbb {R} }$ be two integrable real functions whose absolute values have a finite improper integral:

${\displaystyle \int \limits _{-\infty }^{\infty }|f(t)|\,\mathrm {d} t\;<\;\infty \quad {\text{and}}\quad \int \limits _{-\infty }^{\infty }|g(t)|\,\mathrm {d} t\;<\;\infty }$

Then the improper integral

${\displaystyle (f*g)(t)\;:=\int \limits _{-\infty }^{\infty }f(\tau )\cdot g(t-\tau )\,\mathrm {d} \tau }$

is also finite for every real number t. The function f * g defined in this way is called the convolution product, or the convolution, of f and g. Again, f * g is integrable with a finite improper absolute-value integral. Furthermore, f * g = g * f, i.e. the convolution is commutative.

After Fourier transformation, the convolution product becomes, up to a constant normalization factor, the pointwise product (the so-called convolution theorem). The convolution product plays an important role in mathematical signal processing.

The Gaussian bell curve can be characterized by the fact that its convolution with itself is again a bell curve, somewhat widened. The central limit theorem rests precisely on this property.
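A discrete analogue of the convolution can be sketched in a few lines of Python; convolving a rectangular sequence with itself yields a triangular one, mirroring the continuous picture in the figure above (the function name is an assumption for illustration):

```python
def convolve(f, g):
    """Discrete analogue of the convolution product for two
    finitely supported sequences."""
    n, m = len(f), len(g)
    out = [0] * (n + m - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

rect = [1, 1, 1, 1]
assert convolve(rect, rect) == [1, 2, 3, 4, 3, 2, 1]  # triangle
assert convolve([1, 2], [3, 4, 5]) == convolve([3, 4, 5], [1, 2])
```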

### Polynomial rings

The set ${\displaystyle \mathbb {R} [X]}$ of all polynomials in the variable X with real coefficients also forms a ring, the so-called polynomial ring. The product is calculated as follows:

${\displaystyle \left(\sum _{i=0}^{n}a_{i}X^{i}\right)\cdot \left(\sum _{j=0}^{m}b_{j}X^{j}\right)=\sum _{k=0}^{n+m}c_{k}X^{k}}$

with

${\displaystyle c_{k}=\sum _{i+j=k}a_{i}\cdot b_{j}}$

These rings play an important role in many areas of algebra. For example, the field of complex numbers can be elegantly defined as the factor ring ${\displaystyle \mathbb {R} [X]/(X^{2}+1)}$.

In the transition from finite sums to absolutely convergent series or formal power series, the product discussed here becomes the so-called Cauchy product .
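The coefficient formula above translates directly into a short Python sketch (coefficient lists indexed by the power of X; the function name is hypothetical):

```python
def poly_mul(a, b):
    """Product in the polynomial ring R[X]; a[i] is the
    coefficient of X**i, and c[k] = sum over i+j=k of a[i]*b[j]."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + X) * (1 - X + X**2) = 1 + X**3
assert poly_mul([1, 1], [1, -1, 1]) == [1, 0, 0, 1]
```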

## Products in linear algebra

Linear algebra deals with vector spaces and linear mappings between them. Various products appear in this context. In what follows, the field of real numbers is mostly used as the base field for simplicity.

### Scalar multiplication

The concept of scalar multiplication already appears in the definition of a vector space V. It allows vectors to be "stretched" by a real factor; in the case of multiplication by a negative scalar, the direction of the vector is also reversed.

Scalar multiplication is a mapping ${\displaystyle \mathbb {R} \times V\rightarrow V}$.

### Scalar product

Strictly to be distinguished from this is the concept of a scalar product. This is a bilinear mapping

${\displaystyle \cdot :V\times V\rightarrow \mathbb {R} }$

with the additional requirement that ${\displaystyle v\cdot v>0}$ holds for all ${\displaystyle 0\not =v\in V}$.

Therefore the expression ${\displaystyle \|v\|:={\sqrt {v\cdot v}}}$ is always well-defined and provides the notion of the norm (length) of a vector.

The scalar product also allows the definition of an angle between two non-zero vectors v and w:

${\displaystyle \cos \angle (v,w)={\frac {v\cdot w}{\|v\|\cdot \|w\|}}}$

The polarization identity shows that, conversely, such a notion of length always leads back to a scalar product, and thus also to a notion of angle.

In every n-dimensional Euclidean space, an orthonormal basis ${\displaystyle e_{1},\ldots ,e_{n}}$ can be found by orthonormalization. If vectors are represented as linear combinations with respect to such an orthonormal basis, the scalar product of two such coordinate tuples can be computed as the standard scalar product:

${\displaystyle \left(\sum _{i=1}^{n}\alpha _{i}e_{i}\right)\cdot \left(\sum _{i=1}^{n}\beta _{i}e_{i}\right)=\sum _{i=1}^{n}\alpha _{i}\,\beta _{i}}$
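The standard scalar product, the norm derived from it, and the angle formula can be sketched together in Python (hypothetical helper names):

```python
import math

def dot(u, v):
    """Standard scalar product of two coordinate tuples."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Length of a vector: sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(v, w):
    """Angle between two non-zero vectors."""
    return math.acos(dot(v, w) / (norm(v) * norm(w)))

v, w = (1.0, 0.0), (1.0, 1.0)
assert abs(angle(v, w) - math.pi / 4) < 1e-12  # 45 degrees
```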

### Cross product in three-dimensional space

In ${\displaystyle \mathbb {R} ^{3}}$, the standard model of three-dimensional Euclidean space, another product can be defined, the so-called cross product. It does excellent service in various problems of analytic geometry in space.

The cross product is a mapping

${\displaystyle \times :\mathbb {R} ^{3}\times \mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}}$

Like every Lie product, it is anti-commutative: ${\displaystyle v\times w=-w\times v}$; in particular, ${\displaystyle v\times v=0}$.
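A minimal implementation of the cross product, checked on the standard basis vectors (the function name is an assumption):

```python
def cross(v, w):
    """Cross product of two vectors in R^3."""
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

e1, e2 = (1, 0, 0), (0, 1, 0)
assert cross(e1, e2) == (0, 0, 1)    # e1 x e2 = e3
assert cross(e2, e1) == (0, 0, -1)   # anti-commutativity
assert cross(e1, e1) == (0, 0, 0)    # v x v = 0
```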

### Triple product

The so-called triple product (Spatprodukt), likewise defined only in ${\displaystyle \mathbb {R} ^{3}}$, is not a product of two but of three vectors. In modern parlance it corresponds to the determinant of the three column vectors written next to one another, and is probably computed most easily by Sarrus' rule. Formally, it is a mapping

${\displaystyle \det :\mathbb {R} ^{3}\times \mathbb {R} ^{3}\times \mathbb {R} ^{3}\rightarrow \mathbb {R} }$

which is still called a product today, probably only for historical reasons. Geometrically, the triple product measures the volume of a parallelepiped (Spat) in space.
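A sketch of the triple product, expanded by Sarrus' rule (hypothetical helper name; vectors are taken as the rows of the 3x3 determinant):

```python
def triple_product(u, v, w):
    """Scalar triple product det(u, v, w), expanded by
    Sarrus' rule for 3x3 determinants."""
    return (u[0] * v[1] * w[2] + u[1] * v[2] * w[0] + u[2] * v[0] * w[1]
            - u[2] * v[1] * w[0] - u[0] * v[2] * w[1] - u[1] * v[0] * w[2])

# The unit cube spanned by the standard basis has volume 1:
assert triple_product((1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1
```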

### Composition of linear maps

If f: U → V and g: V → W are two linear mappings, then their composition

${\displaystyle g\circ f:U\ni u\mapsto g(f(u))\in W}$

is linear as well. If ${\displaystyle \operatorname {Hom} (U,V)}$ denotes the set of all linear mappings from U to V, then composition of maps yields a product

${\displaystyle \circ :\operatorname {Hom} (V,W)\times \operatorname {Hom} (U,V)\rightarrow \operatorname {Hom} (U,W)}$

In the special case U = V = W, this yields the so-called endomorphism ring ${\displaystyle \operatorname {End} (V)=\operatorname {Hom} (V,V)}$ of V.

### Product of two matrices

Given two matrices ${\displaystyle A=(a_{i,j})_{i=1\ldots s;\,j=1\ldots r}\in \mathbb {R} ^{s\times r}}$ and ${\displaystyle B=(b_{j,k})_{j=1\ldots r;\,k=1\ldots t}\in \mathbb {R} ^{r\times t}}$. Since the number of columns of A equals the number of rows of B, the matrix product

${\displaystyle A\cdot B=\left(\sum _{j=1}^{r}a_{i,j}\cdot b_{j,k}\right)_{i=1\ldots s;\,k=1\ldots t}\;\in \mathbb {R} ^{s\times t}}$

can be formed. In the special case r = s = t of square matrices, this yields the matrix ring ${\displaystyle \mathbb {R} ^{r\times r}}$.
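The sum formula for the matrix product translates into a short Python sketch (matrices as nested lists, hypothetical helper name):

```python
def mat_mul(A, B):
    """Matrix product: entry (i, k) is the sum over j of
    A[i][j] * B[j][k]; the column count of A must equal the
    row count of B."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_mul(A, B) == [[19, 22], [43, 50]]
```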

### Composition of linear maps as a matrix product

There is a close connection between the composition of linear maps and the product of two matrices. Let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of the vector spaces U, V and W. Let ${\displaystyle {\mathcal {U}}=\{u_{1},\ldots ,u_{r}\}}$ be a basis of U, ${\displaystyle {\mathcal {V}}=\{v_{1},\ldots ,v_{s}\}}$ a basis of V and ${\displaystyle {\mathcal {W}}=\{w_{1},\ldots ,w_{t}\}}$ a basis of W. With respect to these bases, let ${\displaystyle A=M_{\mathcal {V}}^{\mathcal {U}}(f)\in \mathbb {R} ^{s\times r}}$ be the representing matrix of f: U → V and ${\displaystyle B=M_{\mathcal {W}}^{\mathcal {V}}(g)\in \mathbb {R} ^{t\times s}}$ the representing matrix of g: V → W. Then

${\displaystyle B\cdot A=M_{\mathcal {W}}^{\mathcal {U}}(g\circ f)\in \mathbb {R} ^{t\times r}}$

is the representing matrix of ${\displaystyle g\circ f:U\rightarrow W}$.

In other words: the matrix product provides the coordinate-dependent description of the composition of two linear maps.

### Tensor product of vector spaces

The tensor product ${\displaystyle V\otimes W}$ of two real vector spaces V and W is a kind of product of two vector spaces. It is thus similar to the set-theoretic product discussed below. In contrast to that, however, it is not the categorical product in the category of real vector spaces. Nevertheless, it can be characterized by a universal property with respect to bilinear mappings. By this property, the canonical embedding

${\displaystyle V\times W\ni (v,w)\mapsto v\otimes w\in V\otimes W}$

is, so to speak, the "mother of all products that can be defined on V and W". Every other real bilinear product

${\displaystyle B:V\times W\rightarrow Y}$

with values in an arbitrary vector space Y arises from it by composition with a uniquely determined linear mapping

${\displaystyle {\tilde {B}}:V\otimes W\rightarrow Y}$

### Mapping matrices as second-order tensors

The vector space Hom(V, W) of all linear mappings between two vector spaces V and W can be interpreted in a natural (bifunctorial) way as the tensor product of the dual space V* of V with W:

${\displaystyle V^{*}\otimes W\rightarrow \operatorname {Hom} (V,W)}$

Here, a decomposable tensor ${\displaystyle f\otimes w\in V^{*}\otimes W}$, i.e. a functional f: V → R together with a vector w in W, is assigned the linear mapping g: V → W with

${\displaystyle g(v)=f(v)\cdot w}$

Can every linear mapping from V to W be obtained in this way? No, but not every tensor is decomposable either. Just as every tensor can be written as a sum of decomposable tensors, every linear mapping from V to W can be obtained as a sum of mappings of the form of the g defined above.

The fact that Hom(V, W) is naturally isomorphic to the tensor product of the dual space of V with W also means that the representing matrix of a linear mapping g: V → W is a simply contravariant and simply covariant tensor. This is also reflected in the transformation behavior of representing matrices under a change of basis.

## Set-theoretic product

At first glance, the Cartesian product M × N of two sets M and N does not fit naturally into the product concept presented here. However, there is a connection beyond the mere word "product": the product of two natural numbers m and n was explained above as the cardinality of the Cartesian product of an m-element set and an n-element set. Certain forms of the distributive law also hold.

The Cartesian product is also the categorical product in the category of sets.
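The cardinality connection can be verified with Python's standard-library `itertools.product`:

```python
from itertools import product

# The Cartesian product of an m-element and an n-element set
# has m * n elements, matching the product of natural numbers.
M = {"a", "b", "c"}
N = {1, 2, 3, 4}
pairs = list(product(M, N))
assert len(pairs) == len(M) * len(N) == 12
```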

## Finite and infinite products

### Finite products with many factors

The product notation

The factorial of a natural number n (written n!) describes the number of possible arrangements of n distinguishable objects in a row:

${\displaystyle n!=1\cdot 2\cdot \ldots \cdot (n-1)\cdot n=\prod _{i=1}^{n}i}$

The product symbol ${\displaystyle \textstyle \prod }$ is based on the Greek capital letter Pi, the first letter of the word product, just as the summation symbol ${\displaystyle \textstyle \sum }$ is based on sigma.

Since the product of natural numbers is commutative, one can also use an index set and thus leave the order of the factors undetermined:

${\displaystyle n!=\prod _{i\in \{1,\ldots ,n\}}i}$


### The empty product

The empty product has the value one (the neutral element of the multiplication) - just as the empty sum always results in zero (the neutral element of the addition).
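Both the product notation and the empty-product convention are reflected in Python's standard-library `math.prod`:

```python
import math

# n! as a product over the index set {1, ..., n}:
n = 6
assert math.prod(range(1, n + 1)) == math.factorial(n) == 720

# The empty product has the value 1, the neutral element
# of multiplication:
assert math.prod([]) == 1
```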

### Infinite products

John Wallis discovered the baffling fact that

${\displaystyle {\frac {\pi }{2}}=\prod _{i=1}^{\infty }{\frac {(2i)(2i)}{(2i-1)(2i+1)}}}$

holds (compare the Wallis product). But what exactly is meant by the infinite product on the right? To answer this, one considers the sequence of finite partial products

${\displaystyle P_{n}:=\prod _{i=1}^{n}{\frac {(2i)(2i)}{(2i-1)(2i+1)}}}$

If this sequence converges to a real number P , then one defines

${\displaystyle \prod _{i=1}^{\infty }{\frac {(2i)(2i)}{(2i-1)(2i+1)}}\;:=P}$
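The convergence of these partial products towards π/2 can be observed numerically in a short Python sketch (hypothetical helper name):

```python
import math

def wallis_partial(n: int) -> float:
    """Partial product P_n of the Wallis product."""
    p = 1.0
    for i in range(1, n + 1):
        p *= (2 * i) * (2 * i) / ((2 * i - 1) * (2 * i + 1))
    return p

# The partial products increase towards pi/2:
assert wallis_partial(10) < wallis_partial(100) < math.pi / 2
assert abs(wallis_partial(100_000) - math.pi / 2) < 1e-4
```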

More precisely: let ${\displaystyle (a_{n})_{n\in \mathbb {N} }}$ be a sequence of numbers. The infinite product

${\displaystyle \prod _{n=1}^{\infty }a_{n}}$

is called convergent if and only if the following conditions are met:

1. almost all ${\displaystyle a_{n}}$ are non-zero, i.e. there is an ${\displaystyle n_{0}\in \mathbb {N} }$ such that ${\displaystyle a_{n}\neq 0}$ holds for all ${\displaystyle n>n_{0}}$,
2. the limit ${\displaystyle \lim _{N\to \infty }\prod _{n=n_{0}+1}^{N}a_{n}}$ exists, and
3. this limit is different from zero.

(The validity of the last two conditions does not depend on which ${\displaystyle n_{0}}$ was chosen in the first.) In this case one sets

${\displaystyle \prod _{n=1}^{\infty }a_{n}:=\lim _{N\to \infty }\prod _{n=1}^{N}a_{n}}$.

This limit exists: either there is at least one factor ${\displaystyle a_{n}=0}$ and all partial products from then on are zero, or one may, without loss of generality, choose ${\displaystyle n_{0}=0}$ in the second condition.

Kernel-series criterion (convergence criterion for infinite products): the following statements are equivalent:

• The infinite product ${\displaystyle \textstyle P=\prod \limits _{k=1}^{\infty }a_{k}=\prod \limits _{k=1}^{\infty }(1+h_{k})}$ with positive kernels ${\displaystyle h_{k}}$ converges absolutely.
• The kernel series ${\displaystyle \textstyle S=\sum \limits _{k=1}^{\infty }h_{k}}$ converges absolutely.

#### Properties

• A convergent infinite product is zero if and only if one of the factors is zero. Without the third condition, this statement would be false.
• The factors of a convergent product converge to 1 (necessary criterion).

#### Examples of lack of convergence

Infinite products such as the following are not said to be convergent, even though in some cases the sequence of partial products converges (towards zero):

• ${\displaystyle \prod \limits _{n=1}^{\infty }0}$: infinitely many factors are zero; the first condition is violated.
• ${\displaystyle \prod \limits _{n=1}^{\infty }(n-1)}$: one must choose ${\displaystyle n_{0}\geq 1}$. But if the first factor is omitted, the sequence of partial products does not converge (it diverges properly to ${\displaystyle +\infty }$); the second condition is violated.
• ${\displaystyle \prod \limits _{n=1}^{\infty }{\tfrac {1}{n}}}$: the sequence of partial products converges, but towards zero, so the third condition is violated.

These three examples also fail the necessary criterion mentioned above. The product ${\displaystyle \textstyle \prod _{n=1}^{\infty }\left(1+{\frac {1}{n}}\right)}$ satisfies the necessary criterion, but the sequence of partial products does not converge: the product of the first ${\displaystyle N}$ factors is ${\displaystyle N+1}$.
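The telescoping of that last product is easy to confirm exactly with Python's standard-library `fractions` and `math.prod`:

```python
from fractions import Fraction
from math import prod

# The product of the first N factors (1 + 1/n) telescopes:
# (2/1)(3/2)(4/3)... = N + 1, so the partial products diverge
# even though the individual factors tend to 1.
N = 50
p = prod(Fraction(n + 1, n) for n in range(1, N + 1))
assert p == N + 1
```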

## Literature

• Structure of the number system. In: dtv-Atlas zur Mathematik, Vol. 1, 2nd edition 1976, p. 52ff.
• Heinz-Dieter Ebbinghaus et al .: Numbers. Springer, Berlin 1992, ISBN 3-540-55654-0 . ( Google Books )

Extensive literature references on linear algebra can be found in the article on linear algebra.