Suppose a + b = 2b; what expression equals b − a?

As far as linear algebra is concerned, the two most important operations with vectors are vector addition [adding two (or more) vectors] and scalar multiplication (multiplying a vector by a scalar). Corresponding operations are defined for matrices.

Matrix addition. If A and B are matrices of the same size, then they can be added. (This is similar to the restriction on adding vectors, namely, only vectors from the same space R n can be added; you cannot add a 2-vector to a 3-vector, for instance.) If A = [a ij] and B = [b ij] are both m x n matrices, then their sum, C = A + B, is also an m x n matrix, and its entries are given by the formula

c ij = a ij + b ij
Thus, to find the entries of A + B, simply add the corresponding entries of A and B.
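As a quick numerical check of this definition, here is a small NumPy sketch; the entries are illustrative assumptions, not the matrices of the examples below:

```python
import numpy as np

# Two illustrative 2 x 3 matrices of the same size (entries chosen for demonstration)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

# Entrywise sum: c_ij = a_ij + b_ij
C = A + B
print(C)
```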

Example 1: Consider the following matrices:

Which two can be added? What is their sum?

Since only matrices of the same size can be added, only the sum F + H is defined (G cannot be added to either F or H). The sum of F and H is

Since addition of real numbers is commutative, it follows that addition of matrices (when it is defined) is also commutative; that is, for any matrices A and B of the same size, A + B will always equal B + A.

Example 2: If any matrix A is added to the zero matrix of the same size, the result is clearly equal to A:

This is the matrix analog of the statement a + 0 = 0 + a = a, which expresses the fact that the number 0 is the additive identity in the set of real numbers.

Example 3: Find the matrix B such that A + B = C, where

If

then the matrix equation A + B = C becomes

Since two matrices are equal if and only if they are of the same size and their corresponding entries are equal, this last equation implies

Therefore,

This example motivates the definition of matrix subtraction: If A and B are matrices of the same size, then the entries of A − B are found by simply subtracting the entries of B from the corresponding entries of A. Since the equation A + B = C is equivalent to B = C − A, employing matrix subtraction above would yield the same result:

Scalar multiplication. A matrix can be multiplied by a scalar as follows. If A = [a ij] is a matrix and k is a scalar, then

kA = [ka ij]
That is, the matrix kA is obtained by multiplying each entry of A by k.
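A one-line NumPy sketch of scalar multiplication, with an assumed matrix and scalar for illustration:

```python
import numpy as np

# An illustrative matrix and scalar (assumed values, not from the text)
A = np.array([[1, -2],
              [0,  3]])
k = 2

# kA multiplies every entry of A by k
kA = k * A
print(kA)
```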

Example 4: If

then the scalar multiple 2A is obtained by multiplying every entry of A by 2:

Example 5: If A and B are matrices of the same size, then A − B = A + (−B), where −B is the scalar multiple (−1)B. If

then

This definition of matrix subtraction is consistent with the definition illustrated in Example 3.

Example 6: If

then

Matrix multiplication. By far the most important operation involving matrices is matrix multiplication, the process of multiplying one matrix by another. The first step in defining matrix multiplication is to recall the definition of the dot product of two vectors. Let r and c be two n-vectors. Writing r as a 1 x n row matrix and c as an n x 1 column matrix, the dot product of r and c is

r · c = r 1 c 1 + r 2 c 2 + … + r n c n
Note that in order for the dot product of r and c to be defined, both must contain the same number of entries. Also, the order in which these matrices are written in this product is important here: The row vector comes first, the column vector second.
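The row-times-column product can be sketched in NumPy with illustrative entries (assumed for demonstration):

```python
import numpy as np

# r as a 1 x n row matrix, c as an n x 1 column matrix (illustrative entries)
r = np.array([[1, 2, 3]])       # 1 x 3
c = np.array([[4],
              [5],
              [6]])             # 3 x 1

# The product of a 1 x n row and an n x 1 column is a 1 x 1 matrix
# whose single entry is the dot product: 1*4 + 2*5 + 3*6
rc = r @ c
print(rc)
```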

Now, for the last step: How are two general matrices multiplied? First, in order to form the product AB, the number of columns of A must match the number of rows of B; if this condition does not hold, then the product AB is not defined. This criterion follows from the restriction stated above for multiplying a row matrix r by a column matrix c, namely that the number of entries in r must match the number of entries in c. If A is m x n and B is n x p, then the product AB is defined, and the size of the product matrix AB will be m x p. The following diagram is helpful in determining if a matrix product is defined, and if so, the dimensions of the product:

Thinking of the m x n matrix A as composed of the row vectors r 1, r 2,…, r m from R n and the n x p matrix B as composed of the column vectors c 1, c 2,…, c p from R n,

and

the rule for computing the entries of the matrix product AB is r i · c j = (AB) ij, that is,

Example 7: Given the two matrices

determine which matrix product, AB or BA, is defined and evaluate it.

Since A is 2 x 3 and B is 3 x 4, the product AB, in that order, is defined, and the size of the product matrix AB will be 2 x 4. The product BA is not defined, since the first factor (B) has 4 columns but the second factor (A) has only 2 rows. The number of columns of the first matrix must match the number of rows of the second matrix in order for their product to be defined.
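Both the dimension rule and the row-by-column entry rule can be sketched in NumPy, with illustrative matrices of the same shapes as in this example (2 x 3 and 3 x 4; the entries are assumed, not those of the text):

```python
import numpy as np

A = np.array([[1, 0, 2],
              [-1, 3, 1]])          # 2 x 3 (illustrative entries)
B = np.array([[3, 1, 0, 2],
              [2, 1, 1, 0],
              [1, 0, 2, 1]])        # 3 x 4

m, n = A.shape
p = B.shape[1]
assert A.shape[1] == B.shape[0]     # columns of A match rows of B, so AB is defined

# (AB)_ij is the dot product of row i of A and column j of B
AB = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        AB[i, j] = A[i, :] @ B[:, j]

assert np.array_equal(AB, A @ B)    # agrees with NumPy's built-in product
print(AB.shape)                     # (2, 4)
```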

Taking the dot product of row 1 in A and column 1 in B gives the (1, 1) entry in AB. Since

the (1, 1) entry in AB is 1:

The dot product of row 1 in A and column 2 in B gives the (1, 2) entry in AB,

and the dot product of row 1 in A and column 3 in B gives the (1, 3) entry in AB:

The first row of the product is completed by taking the dot product of row 1 in A and column 4 in B, which gives the (1, 4) entry in AB:

Now for the second row of AB: The dot product of row 2 in A and column 1 in B gives the (2, 1) entry in AB,

and the dot product of row 2 in A and column 2 in B gives the (2, 2) entry in AB:

Finally, taking the dot product of row 2 in A with columns 3 and 4 in B gives (respectively) the (2, 3) and (2, 4) entries in AB:

Therefore,

Example 8: If

and

compute the (3, 5) entry of the product CD.

First, note that since C is 4 x 5 and D is 5 x 6, the product CD is indeed defined, and its size is 4 x 6. However, there is no need to compute all twenty-four entries of CD if only one particular entry is desired. The (3, 5) entry of CD is the dot product of row 3 in C and column 5 in D:

Example 9: If

verify that

but

In particular, note that even though both products AB and BA are defined, AB does not equal BA; indeed, they're not even the same size!

The previous example gives one illustration of what is perhaps the most important distinction between the multiplication of scalars and the multiplication of matrices. For real numbers a and b, the equation ab = ba always holds; that is, multiplication of real numbers is commutative, and the order in which the factors are written is irrelevant. However, it is decidedly false that matrix multiplication is commutative. For the matrices A and B given in Example 9, both products AB and BA were defined, but they certainly were not identical. In fact, the matrix AB was 2 x 2, while the matrix BA was 3 x 3. Here is another illustration of the noncommutativity of matrix multiplication: Consider the matrices

Since C is 3 x 2 and D is 2 x 2, the product CD is defined, its size is 3 x 2, and

The product DC, however, is not defined, since the number of columns of D (which is 2) does not equal the number of rows of C (which is 3). Therefore, CD ≠ DC, since DC doesn't even exist.
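A NumPy sketch of this shape asymmetry, using illustrative matrices of the same sizes (3 x 2 and 2 x 2; the entries are assumed):

```python
import numpy as np

C = np.array([[1, 0],
              [2, 1],
              [0, 3]])   # 3 x 2 (illustrative entries)
D = np.array([[1, 2],
              [0, 1]])   # 2 x 2

print((C @ D).shape)     # (3, 2): CD is defined

# DC is not defined: D has 2 columns but C has 3 rows
try:
    D @ C
except ValueError as err:
    print("DC is undefined:", err)
```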

Because of the sensitivity to the order in which the factors are written, one does not typically say simply, "Multiply the matrices A and B." It is usually important to indicate which matrix comes first and which comes second in the product. For this reason, the statement "Multiply A on the right by B" means to form the product AB, while "Multiply A on the left by B" means to form the product BA.

Example 10: If

and x is the vector (−2, 3), show how A can be multiplied on the right by x and compute the product.

Since A is 2 x 2, in order to multiply A on the right by a matrix, that matrix must have 2 rows. Therefore, if x is written as the 2 x 1 column matrix

then the product A x can be computed, and the result is another 2 x 1 column matrix:

Example 11: Consider the matrices

If A is multiplied on the right by B, the result is

but if A is multiplied on the left by B, the result is

Note that both products are defined and of the same size, but they are not equal.

Example 12: If A and B are square matrices such that AB = BA, then A and B are said to commute. Show that any two square diagonal matrices of order 2 commute.

Let

be two arbitrary 2 x 2 diagonal matrices. Then

and

Since a 11 b 11 = b 11 a 11 and a 22 b 22 = b 22 a 22, AB does indeed equal BA, as desired.
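The same fact can be checked numerically; the diagonal entries below are illustrative assumptions:

```python
import numpy as np

# Two illustrative 2 x 2 diagonal matrices
A = np.diag([2, -5])
B = np.diag([7, 3])

# Diagonal matrices of the same order always commute
assert np.array_equal(A @ B, B @ A)
print(A @ B)   # diag(14, -15)
```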

Although matrix multiplication is usually not commutative, it is sometimes commutative; for example, if

then

Despite examples such as these, it must be stated that in general, matrix multiplication is not commutative.

There is another difference between the multiplication of scalars and the multiplication of matrices. If a and b are real numbers, then the equation ab = 0 implies that a = 0 or b = 0. That is, the only way a product of real numbers can equal 0 is if at least one of the factors is itself 0. The analogous statement for matrices, however, is not true. For instance, if

then

Note that even though neither G nor H is a zero matrix, the product GH is.
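The matrices G and H are not reproduced in this copy; a standard illustrative pair with the same property can be checked in NumPy:

```python
import numpy as np

# Neither factor is a zero matrix, yet their product is
# (illustrative pair, not necessarily the G and H of the text)
G = np.array([[0, 1],
              [0, 0]])
H = np.array([[1, 0],
              [0, 0]])

assert G.any() and H.any()   # both factors are nonzero
print(G @ H)                 # the 2 x 2 zero matrix
```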

Yet another difference between the multiplication of scalars and the multiplication of matrices is the lack of a general cancellation law for matrix multiplication. If a, b, and c are real numbers with a ≠ 0, then, by canceling out the factor a, the equation ab = ac implies b = c. No such law exists for matrix multiplication; that is, the statement AB = AC does not imply B = C, even if A is nonzero. For example, if

then both

and

Thus, even though AB = Ac and A is not a zero matrix, B does non equal C.

Example 13: Although matrix multiplication is not always commutative, it is always associative. That is, if A, B, and C are any three matrices such that the product (AB)C is defined, then the product A(BC) is also defined, and

That is, as long as the order of the factors is unchanged, how they are grouped is irrelevant.

Verify the associative law for the matrices

First, since

the product (AB)C is

Now, since

the product A(BC) is

Therefore, (AB)C = A(BC), as expected. Note that the associative law implies that the product of A, B, and C (in that order) can be written simply as ABC; parentheses are not needed to resolve any ambiguity, because there is no ambiguity.
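Associativity holds for any compatibly sized triple, which can be spot-checked with randomly generated integer matrices (sizes and seed are arbitrary choices):

```python
import numpy as np

# Random integer matrices of compatible sizes; associativity holds for any such triple
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(4, 2))

assert np.array_equal((A @ B) @ C, A @ (B @ C))
print("(AB)C equals A(BC)")
```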

Example 14: For the matrices

verify the equation ( AB) T = B T A T.

First,

implies

Now, since

B T A T does indeed equal (AB) T. In fact, the equation

holds true for any two matrices for which the product AB is defined. This says that if the product AB is defined, then the transpose of the product is equal to the product of the transposes in the reverse order.
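A quick numerical check of the reverse-order rule, with illustrative matrices for which AB is defined:

```python
import numpy as np

# Illustrative matrices (2 x 2 times 2 x 3), so AB is defined
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1, 2],
              [1, 0, 3]])

assert np.array_equal((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T
print((A @ B).T.shape)                        # (3, 2)
```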

Identity matrices. The zero matrix 0 m x n plays the role of the additive identity in the set of m x n matrices in the same way that the number 0 does in the set of real numbers (recall Example 2). That is, if A is an m x n matrix and 0 = 0 m x n, then

This is the matrix analog of the statement that for any real number a,

With an additive identity in hand, you may ask, "What about a multiplicative identity?" In the set of real numbers, the multiplicative identity is the number 1, since

Is there a matrix that plays this role? Consider the matrices

and verify that

and

Thus, AI = IA = A. In fact, it can easily be shown that for this matrix I, both products AI and IA will equal A for any 2 x 2 matrix A. Therefore,

is the multiplicative identity in the set of 2 x 2 matrices. Similarly, the matrix

is the multiplicative identity in the set of 3 x 3 matrices, and so on. (Note that I 3 is the matrix [δ ij] 3 x 3.) In general, the matrix I n, the n x n diagonal matrix with every diagonal entry equal to 1, is called the identity matrix of order n and serves as the multiplicative identity in the set of all n x n matrices.

Is there a multiplicative identity in the set of all m x n matrices if m ≠ n? For any matrix A in M m x n (R), the matrix I m is the left identity (I m A = A), and I n is the right identity (AI n = A). Thus, unlike the set of n x n matrices, the set of nonsquare m x n matrices does not possess a unique two-sided identity, because I m ≠ I n if m ≠ n.
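The left-identity and right-identity roles for a nonsquare matrix can be sketched directly (the matrix entries are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # an illustrative 2 x 3 matrix

I2 = np.eye(2, dtype=int)        # I_m, the left identity for 2 x 3 matrices
I3 = np.eye(3, dtype=int)        # I_n, the right identity

assert np.array_equal(I2 @ A, A)   # I_m A = A
assert np.array_equal(A @ I3, A)   # A I_n = A
print("I2 @ A and A @ I3 both equal A")
```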

Example 15: If A is a square matrix, then A 2 denotes the product AA, A 3 denotes the product AAA, and so forth. If A is the matrix

show that A 3 = − A.

The calculation

shows that A 2 = − I. Multiplying both sides of this equation by A yields A 3 = − A, as desired. [Technical note: It can be shown that in a certain precise sense, the collection of matrices of the form

where a and b are real numbers, is structurally identical to the collection of complex numbers, a + bi. Since the matrix A in this example is of this form (with a = 0 and b = 1), A corresponds to the complex number 0 + 1i = i, and the analog of the matrix equation A 2 = − I derived above is i 2 = −1, an equation which defines the imaginary unit, i.]

Example 16: Find a nondiagonal matrix that commutes with

The problem is asking for a nondiagonal matrix B such that AB = BA. Like A, the matrix B must be 2 x 2. One way to produce such a matrix B is to form A 2, for if B = A 2, associativity implies

(This equation proves that A 2 will commute with A for any square matrix A; furthermore, it suggests how one can prove that every integral power of a square matrix A will commute with A.)

In this case,

which is nondiagonal. This matrix B does indeed commute with A, as verified by the calculations

and

Example 17: If

prove that

for every positive integer n.

A few preliminary calculations illustrate that the given formula does hold true:

However, to establish that the formula holds for all positive integers n, a general proof must be given. This will be done here using the principle of mathematical induction, which reads as follows. Let P(n) denote a proposition concerning a positive integer n. If it can be shown that

and

then the statement P(n) is valid for all positive integers n. In the present case, the statement P(n) is the assertion

Because A 1 = A, the statement P(1) is certainly true, since

Now, assuming that P(n) is true, that is, assuming

it is now necessary to establish the validity of the statement P(n + 1), which is

But this statement does indeed hold, because

By the principle of mathematical induction, the proof is complete.
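The matrix of Example 17 is not reproduced in this copy. As an assumed stand-in, the matrix [[1, 1], [0, 1]] satisfies a closed form of exactly this kind, A^n = [[1, n], [0, 1]], which can be checked numerically:

```python
import numpy as np

# Assumed stand-in for the matrix of Example 17 (the original is not shown here):
# A = [[1, 1], [0, 1]] satisfies A^n = [[1, n], [0, 1]] for every positive integer n
A = np.array([[1, 1],
              [0, 1]])

for n in range(1, 6):
    An = np.linalg.matrix_power(A, n)
    assert np.array_equal(An, np.array([[1, n], [0, 1]]))
print("A^n = [[1, n], [0, 1]] verified for n = 1..5")
```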

The inverse of a matrix. Let a be a given real number. Since 1 is the multiplicative identity in the set of real numbers, if a number b exists such that

then b is called the reciprocal or multiplicative inverse of a and denoted a −1 (or 1/a). The analog of this statement for square matrices reads as follows. Let A be a given n x n matrix. Since I = I n is the multiplicative identity in the set of n x n matrices, if a matrix B exists such that

then B is called the (multiplicative) inverse of A and denoted A −1 (read "A inverse").

Example 18: If

then

since

and

Yet another distinction between the multiplication of scalars and the multiplication of matrices is provided by the existence of inverses. Although every nonzero real number has an inverse, there exist nonzero matrices that have no inverse.

Example 19: Show that the nonzero matrix

has no inverse.

If this matrix had an inverse, then

for some values of a, b, c, and d. However, since the second row of A is a zero row, you can see that the second row of the product must also be a zero row:

(When an asterisk, *, appears as an entry in a matrix, it implies that the actual value of this entry is irrelevant to the present discussion.) Since the (2, 2) entry of the product cannot equal 1, the product cannot equal the identity matrix. Therefore, it is impossible to construct a matrix that can serve as the inverse of A.

If a matrix has an inverse, it is said to be invertible. The matrix in Example 18 is invertible, but the one in Example 19 is not. Later, you will learn various criteria for determining whether a given square matrix is invertible.

Example 20: Example 18 showed that

Given that

verify the equation ( AB) −1 = B −1 A −1.

First, compute AB:

Next, compute B −1 A −1:

Now, since the product of AB and B −1 A −1 is I,

B −1 A −1 is indeed the inverse of AB. In fact, the equation

holds true for any invertible square matrices of the same size. This says that if A and B are invertible matrices of the same size, then their product AB is also invertible, and the inverse of the product is equal to the product of the inverses in the reverse order. (Compare this equation with the one involving transposes in Example 14 above.) This result can be proved in general by applying the associative law for matrix multiplication. Since

and

it follows that ( AB) −1 = B −1 A −1, as desired.
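The reverse-order rule for inverses can be spot-checked numerically with illustrative invertible matrices (not the A and B of Example 20):

```python
import numpy as np

# Two illustrative invertible 2 x 2 matrices
A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 2.],
              [0., 1.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)       # (AB)^-1 = B^-1 A^-1
print(lhs)
```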

Example 21: The inverse of the matrix

is

Show that the inverse of B T is ( B −1) T.

Form B T and ( B −1) T and multiply:

This calculation shows that ( B −1) T is the inverse of B T. [Strictly speaking, it shows only that ( B −1) T is the right inverse of B T, that is, when it multiplies B T on the right, the product is the identity. It is also true that ( B −1) T B T = I, which means ( B −1) T is the left inverse of B T. However, it is not necessary to explicitly check both equations: If a square matrix has an inverse, there is no distinction between a left inverse and a right inverse.] Thus,

an equation which actually holds for any invertible square matrix B. This equation says that if a matrix is invertible, then so is its transpose, and the inverse of the transpose is the transpose of the inverse.

Example 22: Use the distributive property for matrix multiplication, A( B ± C) = AB ± AC, to answer this question: If a 2 x 2 matrix D satisfies the equation D 2 − D − 6 I = 0, what is an expression for D −1?

By the distributive property quoted above, D 2 − D = D 2 − DI = D( D − I). Therefore, the equation D 2 − D − 6 I = 0 implies D( D − I) = 6 I. Multiplying both sides of this equation by 1/6 gives

which implies

As an illustration of this result, the matrix

satisfies the equation D 2 − D − 6 I = 0, as you may verify. Since

and

the matrix 1/6 ( D − I) does indeed equal D −1, as claimed.
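The matrix D of this illustration is not reproduced in this copy. As an assumed stand-in, any 2 x 2 matrix with trace 1 and determinant −6 satisfies D 2 − D − 6 I = 0 (by the Cayley-Hamilton theorem), so the derived formula can be checked numerically:

```python
import numpy as np

# An illustrative D with trace 1 and determinant -6, so D^2 - D - 6I = 0 holds;
# the D of the text is not reproduced here
D = np.array([[2., 4.],
              [1., -1.]])
I = np.eye(2)

assert np.allclose(D @ D - D - 6 * I, 0)   # D satisfies the equation
D_inv = (D - I) / 6                        # the derived expression for D^-1
assert np.allclose(D @ D_inv, I)
print(D_inv)
```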

Example 23: The equation ( a + b) 2 = a 2 + 2 ab + b 2 is an identity if a and b are real numbers. Show, however, that ( A + B) 2 = A 2 + 2 AB + B 2 is not an identity if A and B are 2 x 2 matrices. [Note: The distributive laws for matrix multiplication are A( B ± C) = AB ± AC, given in Example 22, and the companion law, ( A ± B) C = AC ± BC.]

The distributive laws for matrix multiplication imply

Since matrix multiplication is not commutative, BA will usually not equal AB, so the sum BA + AB cannot be written as 2 AB. In general, then, ( A + B) 2 ≠ A 2 + 2 AB + B 2. [Any matrices A and B that do not commute (for example, the matrices in Example 11 above) would provide a specific counterexample to the statement ( A + B) 2 = A 2 + 2 AB + B 2, which would also establish that this is not an identity.]
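A concrete counterexample can be checked in NumPy with an illustrative pair of noncommuting matrices:

```python
import numpy as np

# An illustrative pair of noncommuting 2 x 2 matrices
A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

lhs = (A + B) @ (A + B)
wrong = A @ A + 2 * (A @ B) + B @ B      # the scalar-style expansion
right = A @ A + A @ B + B @ A + B @ B    # the correct expansion

assert not np.array_equal(lhs, wrong)    # (A + B)^2 != A^2 + 2AB + B^2 here
assert np.array_equal(lhs, right)        # (A + B)^2 = A^2 + AB + BA + B^2
print(lhs - wrong)                       # equals BA - AB, which is nonzero
```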

Example 24: Assume that B is invertible. If A commutes with B, show that A will also commute with B −1.

Proof. To say "A commutes with B" means AB = BA. Multiply this equation by B −1 on the left and on the right and use associativity:

Example 25: The number 0 has just one square root: 0. Show, however, that the (2 by 2) zero matrix has infinitely many square roots by finding all 2 x 2 matrices A such that A 2 = 0.

In the same way that a number a is called a square root of b if a 2 = b, a matrix A is said to be a square root of B if A 2 = B. Let

be an arbitrary 2 x 2 matrix. Squaring it and setting the result equal to 0 gives

The (1, 2) entries in the last equation imply b( a + d) = 0, which holds if (Case 1) b = 0 or (Case 2) d = − a.

Case 1. If b = 0, the diagonal entries then imply a = 0 and d = 0, and the (2, 1) entries imply that c is arbitrary. Thus, for any value of c, every matrix of the form

is a square root of 0 2x2.

Case 2. If d = − a, then the off-diagonal entries will both be 0, and the diagonal entries will both equal a 2 + bc. Thus, as long as b and c are chosen so that bc = − a 2, A 2 will equal 0.

A similar chain of reasoning beginning with the (2, 1) entries leads to either a = c = d = 0 (and b arbitrary) or the same conclusion as before: as long as b and c are chosen so that bc = − a 2, the matrix A 2 will equal 0.

All these cases can be summarized as follows. Any matrix of the following form will have the property that its square is the 2 by 2 zero matrix:

Since there are infinitely many values of a, b, and c such that bc = − a 2, the zero matrix 0 2x2 has infinitely many square roots. For example, choosing a = 4, b = 2, and c = −8 gives the nonzero matrix

whose square is
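The text's own choice a = 4, b = 2, c = −8 (with d = − a) satisfies bc = − a 2, and the resulting matrix can be squared numerically to confirm the claim:

```python
import numpy as np

# The text's choice a = 4, b = 2, c = -8 (with d = -a) satisfies bc = -a^2
A = np.array([[4, 2],
              [-8, -4]])

print(A @ A)   # the 2 x 2 zero matrix
```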


Source: https://www.cliffsnotes.com/study-guides/algebra/linear-algebra/matrix-algebra/operations-with-matrices
