# Scalar multiplication

*Figure: scalar multiplication in the Euclidean plane. The vector w is multiplied by the number 2 and the vector v by the number -1.*

Scalar multiplication, also called S-multiplication, is an outer binary operation between a scalar and a vector that is part of the definition of a vector space. The scalars are the elements of the field over which the vector space is defined. The analogous operation in modules is likewise called scalar multiplication.

The result of a scalar multiplication is a correspondingly scaled vector. In the illustrative case of Euclidean vector spaces, scalar multiplication stretches or shrinks the length of the vector by the given factor; for negative scalars, the direction of the vector is also reversed. A special form of such scaling is normalization: a vector is multiplied by the reciprocal of its length (more generally, its norm), yielding a unit vector of length (or norm) one.
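As a small illustration, the normalization just described can be sketched in plain Python; the `normalize` helper below is hypothetical, not taken from any particular library:

```python
import math

def normalize(v):
    """Scale a vector to length one by multiplying it with the
    reciprocal of its Euclidean norm (assumes v is nonzero)."""
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

# The vector (3, 4) has length 5, so normalizing multiplies it by 1/5.
u = normalize((3.0, 4.0))
print(u)  # → (0.6, 0.8)
```

The resulting vector points in the same direction as the input but has length one.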

## Definition

If $V$ is a vector space over the field $K$, then scalar multiplication is a binary operation

$$\odot \colon K \times V \to V,$$

which, by the definition of a vector space, is mixed-associative and distributive, that is, it satisfies the following properties for all vectors $u, v \in V$ and all scalars $\alpha, \beta \in K$:

- $\alpha \odot (\beta \odot v) = (\alpha \cdot \beta) \odot v$
- $\alpha \odot (u \oplus v) = (\alpha \odot u) \oplus (\alpha \odot v)$
- $(\alpha + \beta) \odot v = (\alpha \odot v) \oplus (\beta \odot v)$

In addition, the unit element $1$ of the field acts neutrally:

- $1 \odot v = v$.

Here $\oplus$ denotes the vector addition in $V$, while $+$ and $\cdot$ denote the addition and multiplication in the field $K$. Often the plus sign $+$ is used both for the vector addition and the field addition, and the multiplication sign $\cdot$ both for the scalar multiplication and the field multiplication. This convention is followed in the rest of this article to ease reading. The multiplication sign is also frequently omitted altogether: one writes $\alpha\beta$ instead of $\alpha \cdot \beta$ and $\alpha v$ instead of $\alpha \cdot v$.

## Properties
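The axioms above can be checked numerically for a concrete case. The sketch below takes $K = \mathbb{R}$ and $V = \mathbb{R}^2$; the helper names `smul` and `vadd` are illustrative, not from any library:

```python
def smul(a, v):
    """Scalar multiplication on R^2, defined component-wise."""
    return (a * v[0], a * v[1])

def vadd(u, v):
    """Vector addition on R^2."""
    return (u[0] + v[0], u[1] + v[1])

a, b = 2.0, -3.0
u, v = (1.0, 4.0), (5.0, -2.0)

# mixed associativity: a(bv) = (ab)v
assert smul(a, smul(b, v)) == smul(a * b, v)
# distributivity over vector addition: a(u + v) = au + av
assert smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v))
# distributivity over scalar addition: (a + b)v = av + bv
assert smul(a + b, v) == vadd(smul(a, v), smul(b, v))
# neutrality of 1: 1v = v
assert smul(1.0, v) == v
```

Of course, passing these checks for a few sample values does not prove the axioms; it only illustrates them.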

### Neutrality

If $0_K \in K$ denotes the zero element of the field and $0_V \in V$ the zero vector of the vector space, then for all vectors $v \in V$

$$0_K \cdot v = 0_V,$$

because, by the second distributive law,

$$0_K \cdot v + 0_K \cdot v = (0_K + 0_K) \cdot v = 0_K \cdot v,$$

so $0_K \cdot v$ must be the zero vector. Similarly, for all scalars $\alpha \in K$

$$\alpha \cdot 0_V = 0_V,$$

because, by the first distributive law,

$$\alpha \cdot 0_V + \alpha \cdot 0_V = \alpha \cdot (0_V + 0_V) = \alpha \cdot 0_V,$$

so $\alpha \cdot 0_V$ must also be the zero vector here. Altogether one obtains

$$\alpha \cdot v = 0_V \iff \alpha = 0_K ~\text{or}~ v = 0_V,$$

because from $\alpha \cdot v = 0_V$ it follows that either $\alpha = 0_K$, or $\alpha \neq 0_K$ and then $v = 1 \cdot v = (\alpha^{-1} \alpha) \cdot v = \alpha^{-1} \cdot (\alpha \cdot v) = \alpha^{-1} \cdot 0_V = 0_V$, where $\alpha^{-1}$ is the multiplicative inverse of $\alpha$.

### Inverses

If now $-1$ denotes the additive inverse of the unit element $1$, and $-v$ the inverse vector of $v$, then

$$(-1) \cdot v = -v,$$

because, using the neutrality of one, one obtains

$$0_V = 0_K \cdot v = (1 - 1) \cdot v = 1 \cdot v + (-1) \cdot v = v + (-1) \cdot v,$$

and thus $(-1) \cdot v$ is the inverse vector of $v$. If, more generally, $-\alpha$ denotes the additive inverse of $\alpha$, then

$$(-\alpha) \cdot v = -(\alpha \cdot v) = \alpha \cdot (-v),$$

because with $\beta = -1$ the mixed associative law gives

$$(-\alpha) \cdot v = (\beta \alpha) \cdot v = \beta \cdot (\alpha \cdot v) = -(\alpha \cdot v),$$

and the commutativity of the multiplication of two scalars gives

$$(-\alpha) \cdot v = (\alpha \beta) \cdot v = \alpha \cdot (\beta \cdot v) = \alpha \cdot (-v).$$
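These sign identities can also be illustrated numerically; the sketch below works over $\mathbb{R}^3$ with an illustrative `smul` helper:

```python
def smul(a, v):
    """Component-wise scalar multiplication on R^3."""
    return tuple(a * x for x in v)

v = (1.0, 4.0, 2.0)
neg_v = smul(-1.0, v)  # (-1)·v

# v + (-1)·v is the zero vector, so (-1)·v is the additive inverse of v
assert tuple(x + y for x, y in zip(v, neg_v)) == (0.0, 0.0, 0.0)

# (-a)·v = -(a·v) = a·(-v)
a = 3.0
assert smul(-a, v) == smul(-1.0, smul(a, v)) == smul(a, neg_v)
```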

## Examples

### Coordinate vectors

If $V = K^n$ is the coordinate space and $v = (v_1, \ldots, v_n)^T \in K^n$ a coordinate vector, then multiplication by a scalar $\alpha \in K$ is defined component-wise as follows:

$$\alpha \cdot v = \alpha \cdot \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} \alpha \cdot v_1 \\ \vdots \\ \alpha \cdot v_n \end{pmatrix}.$$

In scalar multiplication, each component of the vector is thus multiplied by the scalar. In three-dimensional Euclidean space $\mathbb{R}^3$, for example, one obtains

$$3 \cdot \begin{pmatrix} 1 \\ 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \cdot 1 \\ 3 \cdot 4 \\ 3 \cdot 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 12 \\ 6 \end{pmatrix}.$$
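The component-wise rule translates directly into code; a minimal sketch without external libraries, representing vectors as tuples:

```python
def smul(alpha, v):
    """Multiply each component of the vector v by the scalar alpha."""
    return tuple(alpha * x for x in v)

# the R^3 example from the text: 3 · (1, 4, 2)
print(smul(3, (1, 4, 2)))  # → (3, 12, 6)
```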

### Matrices

If $V = K^{m \times n}$ is the matrix space and $A = (a_{ij}) \in K^{m \times n}$ a matrix, then multiplication by a scalar $\alpha \in K$ is likewise defined component-wise:

$$\alpha \cdot A = \alpha \cdot \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix} = \begin{pmatrix} \alpha \cdot a_{11} & \ldots & \alpha \cdot a_{1n} \\ \vdots & \ddots & \vdots \\ \alpha \cdot a_{m1} & \ldots & \alpha \cdot a_{mn} \end{pmatrix}.$$

In scalar multiplication, each entry of the matrix is thus multiplied by the scalar. For a real $(2 \times 2)$ matrix, for example, one obtains

$$3 \cdot \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} = \begin{pmatrix} 3 \cdot 1 & 3 \cdot 2 \\ 3 \cdot 4 & 3 \cdot 3 \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 12 & 9 \end{pmatrix}.$$
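The same entry-wise rule for matrices can be sketched with a matrix represented as a list of rows (the helper name `smul_matrix` is illustrative):

```python
def smul_matrix(alpha, A):
    """Multiply every entry of the matrix A (a list of rows) by alpha."""
    return [[alpha * a for a in row] for row in A]

# the 2x2 example from the text: 3 · [[1, 2], [4, 3]]
print(smul_matrix(3, [[1, 2], [4, 3]]))  # → [[3, 6], [12, 9]]
```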

### Polynomials

If $V = K[X]$ is the vector space of polynomials in the variable $X$ with coefficients from a field $K$, then the multiplication of a polynomial $P \in K[X]$ by a scalar $\alpha \in K$ is again defined component-wise:

$$\alpha P = \alpha (a_0 + a_1 X + \dotsb + a_n X^n) = (\alpha a_0) + (\alpha a_1) X + \dotsb + (\alpha a_n) X^n.$$

For example, scalar multiplication of the real polynomial function $p(x) = x^n - x$ by the number $3$ yields the polynomial

$$(3p)(x) = 3(x^n - x) = 3x^n - 3x.$$
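Since a polynomial is determined by its coefficients, the scalar multiple can be computed on the coefficient list alone. The sketch below assumes the representation $[a_0, a_1, \ldots, a_n]$ and, for concreteness, takes $n = 3$ in the example from the text:

```python
def smul_poly(alpha, coeffs):
    """Scale a polynomial given by its coefficient list
    [a_0, a_1, ..., a_n] component-wise by alpha."""
    return [alpha * a for a in coeffs]

# p(x) = x^3 - x has coefficients [0, -1, 0, 1];
# 3·p has coefficients [0, -3, 0, 3], i.e. (3p)(x) = 3x^3 - 3x
print(smul_poly(3, [0, -1, 0, 1]))  # → [0, -3, 0, 3]
```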

### Functions

If $V = F(\Omega, W)$ is a linear function space and $f \in F(\Omega, W)$ a function from a non-empty set $\Omega$ into a vector space $W$, then the scalar multiplication of such a function by a scalar $\alpha \in K$ is defined as the function

$$\alpha f \colon \Omega \to W, \quad x \mapsto (\alpha f)(x) = \alpha \cdot f(x).$$

For example, if one considers the vector space of linear real functions of the form $f(x) = ax + b$, then scalar multiplication by a real number $c$ yields the function

$$(cf)(x) = c \cdot f(x) = c \cdot (ax + b) = cax + cb.$$

Under scalar multiplication, each function value is thus scaled by the factor $c$.
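The pointwise definition maps naturally onto higher-order functions; a minimal Python sketch with illustrative names and sample values $a = 2$, $b = 5$, $c = 3$:

```python
def smul_func(c, f):
    """Return the function x ↦ c · f(x), the scalar multiple of f."""
    return lambda x: c * f(x)

# f(x) = 2x + 5, scaled by c = 3, gives (3f)(x) = 6x + 15
f = lambda x: 2 * x + 5
g = smul_func(3, f)
print(g(1))  # → 21
```

Note that `smul_func` never evaluates `f` itself; the scaling happens only when the returned function is called.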