S41541 Mathematics 05 Flipbook PDF



School of Architecture, Science and Technology,
Yashwantrao Chavan Maharashtra Open University

Programs & Course Codes
V92: B.Sc.
V92 - SEC511: Mathematics Elective - 0501

S41541: Matrix Algebra
S41541-T01

Email: [email protected]
Website: www.ycmou.ac.in  Phone: +91-253-2231473
AST, YCMOU, Nashik - 422 222, MH, India

Yashwantrao Chavan Maharashtra Open University
Vice-Chancellor: Prof. Dr. E. Vayunandan
School of Architecture, Science and Technology
Director (I/C) of the School: Dr. Sunanda More

School Council (2018-2020)

Dr Sunanda More Director (I/c) & Associate Professor, School of Architecture, Science & Technology, YCMOU, Nashik

Dr Manoj Killedar Associate Professor, School of Architecture, Science & Technology, YCMOU, Nashik

Mrs Chetana Kamlaskar Assistant Professor, School of Architecture, Science & Technology, YCMOU, Nashik

Dr. Pramod Khandare Director(I/c) & Associate Professor, School of Computer Science, YCMOU, Nashik

Dr. Sanjivani Mahale Associate Professor, School of Education, YCMOU, Nashik

Dr. Rucha Gujar Assistant Professor, School of Continuing Education, YCMOU, Nashik

Dr. Surendra Patole Assistant Professor, School of Commerce & Management, YCMOU, Nashik

Dr. Sanjay J. Dhoble Prof. Dept. of Physics, R.T.M. Nagpur University, Nagpur

Dr. Gangadhar Asaram Meshram Professor of Organic Chemistry, Department of Chemistry, Mumbai University, Mumbai

Dr. D.R. Nandanwar Joint Director, Technical Education Regional Office, Pune - 411 016

Dr. T.M. Karade Retired Professor, R.T.M. Nagpur University, Nagpur

Mr. D.B. Saundarkar, Representative Study Centre Coordinator, (S.C. Code: 42108) Brahmpuri, Dist. Chandrapur

Development Team

Course Coordinator and Instructional Technology Editor: Dr Manoj Killedar, Associate Professor, School of Architecture, Science & Technology, YCMOU, Nashik

Book Writer: Dr. T.M. Karade, Retired Professor, R.T.M. Nagpur University, Nagpur

Book Editor: Dr. J. N. Salunke, Former Professor, School of Mathematical Sciences, Swami Ramanand Teerth Marathwada University, Vishnupuri, Nanded - 431606

This work by YCMOU is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

First Book Publication: 12 March 2018
Publication Number: 2347
Publisher: Registrar, YCMOU, Nashik - 422 222, MH, India
ISBN Number: 978-93-91514-13-6

S41541-T01: Mathematics Elective - 05

Page 2

VICE CHANCELLOR'S MESSAGE

Dear Students,

Greetings! I offer a cordial welcome to all of you to the Master's degree programme of Yashwantrao Chavan Maharashtra Open University. As a postgraduate student, you must have the autonomy to learn, have information and knowledge regarding different dimensions of the field of Mathematics, and at the same time intellectual development is necessary for the wise application of knowledge. The process of learning includes appropriate thinking, understanding important points, describing these points on the basis of experience and observation, and explaining them to others by speaking or writing about them. The science of education today accepts the principle that it is possible to achieve excellence and knowledge in this regard.

The syllabus of this course has been structured in this book in such a way as to give you the autonomy to study easily without stirring from home. During the counselling sessions scheduled at your respective study centre, all your doubts about the course will be clarified and you will get guidance from qualified and experienced counsellors/professors. This guidance will not only be based on lectures but will also include various techniques such as question-answers and doubt clarification. We expect your active participation in the contact sessions at the study centre.

Our emphasis is on 'self-study'. If a student learns how to study, he will become independent in learning throughout life. This course book has been written with the objective of helping in self-study and giving you the autonomy to learn at your convenience. During this academic year, you have to submit assignments and complete laboratory activities, field visits and project work wherever required. You may have to opt for a specialization as per the programme structure. You will gain experience and joy in personally doing the above activities. This will enable you to assess your own progress and thereby achieve a larger educational objective.

We wish that you will enjoy the courses of Yashwantrao Chavan Maharashtra Open University, emerge successful, and very soon become a knowledgeable and honourable Master's degree holder of this university. I congratulate the "Development Team" for the development of this excellent, high-quality "Self-Learning Material (SLM)" for the students. I hope and believe that this SLM will be immensely useful for all students of this program.

Best Wishes!

- Prof. Dr. E. Vayunandan
Vice-Chancellor, YCMOU


FOREWORD BY THE DIRECTOR

This book aims at acquainting the students with the basic fundamentals of Mathematics required at the degree level. The book has been specially designed for Science students. It has a comprehensive coverage of mathematical concepts and their application in practical life. The book contains numerous mathematical examples to build understanding and skills. The book is written in a self-instructional format.

Each chapter is prepared with an articulated structure to make the contents not only easy to understand but also interesting to learn. Each chapter begins with learning objectives, which are stated using action verbs as per Bloom's Taxonomy. Each unit starts with an introduction to arouse or stimulate the curiosity of the learner about the content/topic. Thereafter the unit contains an explanation of concepts supported by tables, figures, exhibits and solved illustrations wherever necessary for better effectiveness and understanding. This book is written in simple language, using a spoken style and short sentences. The topics of each unit proceed from simple to complex in a logical sequence. The book is accessible to weaker students and covers the syllabus of the course. Exercises given in each chapter include MCQs, conceptual questions and practical questions, so as to create a ladder in the minds of students to grasp each and every aspect of a particular concept. Two to four practical activities are given in the last unit of each credit block to build the application of knowledge and skills from that credit part to a real-world scenario, case study or problem.

I thank the students, who have been a constant motivation for us. I am grateful to the writers, editors and the School faculty associated with the SLM development of the Programme.

- Dr. Sunanda More Director (I/C), School of Architecture, Science and Technology


PREFACE BY THE AUTHOR

This title is an outcome of previous attempts, starting from 1997, by the author. Since then many additions and omissions have resulted in the texts published to date, to meet the requirements of the universities from time to time. Consequently, the material content is freely borrowed from the earlier editions. The basic difference between the present text and the earlier ones is due to the format of the texts adopted by YASHWANTRAO CHAVAN MAHARASHTRA OPEN UNIVERSITY, NASHIK. In this regard I have benefited from the guidance of Dr Killedar, and I am indebted to him. Attempts have been made to keep the book free from mistakes. However, I cannot claim that it is totally without mistakes.

The following notations are used in the book.
· ln x means log_e x
· A theorem is concluded by QED, which stands for quod erat demonstrandum, meaning "that which had to be proved".

The source of various examples in this text lies in
· various standard texts on the subject concerned, listed in REFERENCES
· examinations of the universities
· SET and NET examinations
· construction by the author

Special thanks to
· Dr Sunanda More (Director) and Dr M Killedar

I have followed many books on the subject, and they are listed in the references. Though I do not claim originality, I have my own way of presenting the subject. I am hopeful that the students will benefit in their pursuit of learning mathematics. Any suggestions for improvement will be highly appreciated.

Author Nagpur, February 20, 2018


CONTENTS

CREDIT 01  (1-105)

Unit 01-01: Elements of matrix algebra  (1-34)
  Introduction  1
  1.1 Matrix  1
  1.2 Various types of matrices  3
  1.3 Arithmetic operations on matrices  8
  1.4 Multiplication of matrices  16
  Summary  34
  Key words  34

Unit 01-02: The determinant of a square matrix  (35-51)
  Introduction  19
  2.1 Definition of a group  19
  2.2 Complex roots of unity  34
  2.3 Composition modulo n  36
  Summary  44
  Key words  44

Unit 01-03: The inverse of a matrix  (53-78)
  Introduction  45
  3.1 About an identity element of a group  45
  3.2 About an inverse of every element of a group  45
  3.3 Subgroups  55
  3.4 Intersection and union of two groups  60
  Summary  63
  Key words  63

Unit 01-04: Elementary transformations of a matrix  (79-91)
  Introduction  65
  4.1 Permutation as a mapping  65
  4.2 Permutation group  68
  4.3 Orbit of s ∈ S  74
  4.4 Cycle or cyclic permutation  76
  4.5 Even and odd permutations  87
  Summary  94
  Key words  94

Unit 01-05: The rank of a matrix  (93-105)
  Introduction  95
  5.1 Order of a ∈ G  95
  Summary  110
  Key words  110

CREDIT 02  (1-105)

Unit 02-01: System of simultaneous linear equations  (1-32)
  Introduction  1
  1.1 Cosets  1
  1.2 Index of a subgroup  16
  1.3 Lagrange's theorem  16
  Summary  20
  Key words  20

Unit 02-02: Linear independence of vectors  (33-42)
  Introduction  21
  2.1 Definition of a normal subgroup  21
  2.2 Product HK of subsets H and K  32
  Summary  39
  Key words  39

Unit 02-03: The eigenvalue problem  (43-69)
  Introduction  41
  3.1 Quotient group  41
  3.2 Cyclic group  49
  Summary  65
  Key words  65

Unit 02-04: The Cayley-Hamilton theorem  (71-83)
  Introduction  67
  4.1 Function (mapping)  67
  4.2 Homomorphism  69
  4.3 Kernel of a homomorphism  78
  Summary  84
  Key words  84

Unit 02-05: The diagonalization of a matrix  (85-105)
  Introduction  85
  5.1 Theorems on homomorphism  85
  Summary  109
  Key words  109

CREDIT 03  (1-111)

Unit 03-01: Vector spaces  (1-27)
  Introduction  1
  1.1 Basic information  1
  1.2 The real coordinate space R^n  3
  1.3 Vector space  6
  1.4 Examples of vector spaces  9
  1.5 General properties of a vector space  24
  Summary  27
  Key words  27

Unit 03-02: Subspaces  (29-41)
  Introduction  29
  2.1 Subspace (or vector subspace)  29
  2.2 Union and intersection of subspaces  36
  Summary  41
  Key words  41

Unit 03-03: Linear span and sum of subspaces  (43-62)
  Introduction  43
  3.1 Linear span  43
  3.2 Linear span of a subset  46
  3.3 Sum and direct sum of subspaces  54
  Summary  62
  Key words  62

Unit 03-04: Linear independence  (63-80)
  Introduction  63
  4.1 Linear independence of vectors  63
  Summary  80
  Key words  80

Unit 03-05: Basis of a vector space  (81-111)
  Introduction  81
  5.1 Basis of a vector space  81
  5.2 Dimension of a vector space  90
  5.3 Quotient space  99
  5.4 Coordinate vector relative to a basis  108
  Summary  111
  Key words  111

CREDIT 04  (1-112)

Unit 04-01: Linear transformations  (1-30)
  Introduction  1
  1.1 Linear transformation or vector space homomorphism  1
  1.2 Range and kernel of a linear transformation  13
  Summary  30
  Key words  30

Unit 04-02: Isomorphism of vector spaces  (31-52)
  Introduction  31
  2.1 Isomorphism  31
  2.2 First fundamental theorem on homomorphism  46
  Summary  52
  Key words  52

Unit 04-03: Matrix associated with a linear map  (53-73)
  Introduction  53
  3.1 Matrix of a linear mapping  53
  3.2 Formulation of an m × n matrix  55
  3.3 Linear map associated with a matrix  68
  Summary  73
  Key words  73

Unit 04-04: Algebra of linear operations  (75-94)
  Introduction  75
  4.1 Definition of a matrix reopened  76
  4.2 Range and kernel of a linear transformation  85
  4.3 Singular and nonsingular linear transformations  91
  Summary  94
  Key words  94

Unit 04-05: Eigenvalues and eigenvectors of a linear mapping  (95-112)
  Introduction  95
  5.1 Some definitions  95
  5.2 Linearly independent eigenvectors  105
  5.3 Polynomials of linear transformations  108
  Summary  112
  Key words  112

UNIT 01-01: ELEMENTS OF MATRIX ALGEBRA

LEARNING OBJECTIVES

After successful completion of the unit, you will be able to:
· Explain the basic concepts of matrix algebra

INTRODUCTION

The matrix is one of the powerful and elegant tools in mathematics for dealing with various practical problems. It forms a branch of linear algebra. Though the concept of the matrix was brought forth by Cayley in 1860, it took about 100 years to realize its utility in engineering study, after the emergence of digital computers around 1960. The beauty of the matrix lies in its compact single-letter notation, such as $A, B, C, \ldots$, for an array of many numbers arranged in rows and columns, and in performing various operations on the array through operations on the single-letter symbols. Its applicability being well understood, matrix algebra is studied right from the high-school level. Therefore, it is presumed that the readers have basic knowledge of matrices. However, for completeness we briefly summarize here the basic information about matrices.

1.1 Matrix

There are many occasions in mathematics where we deal with a rectangular, orderly arrangement of numbers or functions. An array of this type may be given by the symbol

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

The quantities $a_{ij}$, $i = 1, 2, \ldots, m$, $j = 1, \ldots, n$ are numbers or functions and are called elements or members of the array $A$. Such an array, subject to certain rules of operation (addition, multiplication, etc.), is called a matrix. Matrices are denoted by [ ] or ( ) or || ||. It is understood that the $a_{ij}$ are members of some field $F$, and we say that the matrix $A$ is defined over $F$. The elements of $A$ which are in a horizontal line constitute a row of the matrix $A$, and those in a vertical line a column of the matrix. An element $a_{ij}$ occurs at the intersection of the $i$th row and the $j$th column.


Order of a matrix

A matrix $A$ with $m$ rows and $n$ columns is called a matrix of order $(m, n)$ or an $m \times n$ matrix. For convenience and brevity we denote the matrix $A = [a_{ij}]_{m \times n}$ or $[a_{ij}]_{(m,n)}$. It means that $A$ is a matrix of elements $a_{ij}$ and is of order $m \times n$.

Illustration.
$A = \begin{bmatrix} 2 & 3 & i & 0 \\ 1 & -6 & 7 & -3 \\ 5 & 11 & 9 & 0 \end{bmatrix}$ is a $3 \times 4$ matrix,

$B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 2 & -5 & 6 \end{bmatrix}$ is a $3 \times 3$ matrix,

$C = \begin{bmatrix} 1 \\ 2 \\ -2 \\ 0 \end{bmatrix}$ is a $4 \times 1$ matrix,

$D = \begin{bmatrix} 0 & 0 & 4 & 1 & 0 \end{bmatrix}$ is a $1 \times 5$ matrix,

$E = [3]$ is a $1 \times 1$ matrix,

$F = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ is a $2 \times 3$ matrix.
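The arrays above translate directly into code. The sketch below is an illustration and not part of the text: it models a matrix as a Python list of rows and reads off its order $(m, n)$ (the helper name `order` is my own).

```python
# A matrix as a list of rows, mirroring the 3 x 4 illustration above.
A = [[2, 3, 1j, 0],
     [1, -6, 7, -3],
     [5, 11, 9, 0]]

def order(M):
    """Return the order (m, n) of a matrix given as a list of rows."""
    return (len(M), len(M[0]))

print(order(A))  # (3, 4)
```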

Problem 1.1

Find the matrix of order $2 \times 3$ whose elements $a_{ij}$ satisfy the equation $a_{ij} = i + 2j - ij$.

Solution. Let the matrix be
$$A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$
Given that $a_{ij} = i + 2j - ij$,
$a_{11} = 1 + 2(1) - (1)(1) = 2$
$a_{12} = 1 + 2(2) - (1)(2) = 3$
$a_{13} = 1 + 2(3) - (1)(3) = 4$
$a_{21} = 2 + 2(1) - (2)(1) = 2$
$a_{22} = 2 + 2(2) - (2)(2) = 2$
$a_{23} = 2 + 2(3) - (2)(3) = 2$
$$\Rightarrow \quad A = \begin{bmatrix} 2 & 3 & 4 \\ 2 & 2 & 2 \end{bmatrix}$$
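A matrix defined by a formula $a_{ij} = f(i, j)$, as in Problem 1.1, can be generated mechanically. This is an illustrative sketch (the helper `build` is my own, not from the text):

```python
def build(m, n, f):
    """Matrix of order m x n whose (i, j) entry is f(i, j); indices start at 1 as in the text."""
    return [[f(i, j) for j in range(1, n + 1)] for i in range(1, m + 1)]

A = build(2, 3, lambda i, j: i + 2*j - i*j)
print(A)  # [[2, 3, 4], [2, 2, 2]], matching Problem 1.1
```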

MCQ 1.1

A matrix $M$ has $2n + 1$ elements, $n \in \mathbb{N}$. Then the possible orders of $M$ are
(A) (1, 2)  (B) (2, 4)  (C) (9, 7)  (D) (7, 8)

SAQ 1.1

Construct the matrix $A = [a_{ij}]_{n \times n}$ such that $a_{ij}^2 = a_{ij}$.

1.2 Various types of matrices

Zero or null matrix. A zero matrix is a matrix whose elements are all zero, i.e. $A = [a_{ij}]$ is a zero matrix if $a_{ij} = 0 \ \forall i, j$.

Illustration. $\begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}$, $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ are zero matrices, but $\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ is not a zero matrix, since one element, $a_{22}$, is nonzero.

Square matrix. A matrix in which the number of rows is equal to the number of columns is a square matrix. Thus a square matrix is of the form $n \times n$. We call it an $n$-square matrix or a square matrix of order $n$. For a square matrix the elements $a_{11}, a_{22}, \ldots, a_{nn}$ are called the diagonal elements, and their sum
$$a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}$$
is the trace of the matrix.

Illustration. $[3]$ is a 1-square matrix, $\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ is a 2-square matrix, and $\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 2 & -5 & 6 \end{bmatrix}$ is a 3-square matrix.

Diagonal matrix. A square matrix whose non-diagonal elements are all zero is called a diagonal matrix. Thus $A = [a_{ij}]$ is diagonal if $A$ is a square matrix and $a_{ij} = 0$, $i \neq j$. In such a case we write $A = \mathrm{diag}\,[a_{11}, a_{22}, \ldots, a_{nn}]$.

Illustration. $\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}$, $\begin{bmatrix} 0 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 1/2 \end{bmatrix}$ are diagonal matrices.

Scalar matrix. A diagonal matrix whose diagonal elements are all equal is called a scalar matrix. Thus $A = [a_{ij}]$ is a scalar matrix if $a_{11} = a_{22} = \cdots = a_{nn}$ and $a_{ij} = 0$, $i \neq j$.

Illustration. $\begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}$ is a scalar matrix.

Problem 1.2

State true or false: every diagonal matrix is a scalar matrix.

Solution. Consider the diagonal matrix $A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$. Since $a_{11} \neq a_{22}$, $A$ is not a scalar matrix. Hence the given statement is false.

Unit or identity matrix. An $n$-square diagonal matrix whose diagonal elements are all 1 is called an identity matrix or unit matrix. Thus $A = [a_{ij}]$ is a unit matrix if $a_{ij} = 0$, $i \neq j$ and $a_{11} = a_{22} = \cdots = a_{nn} = 1$. An identity matrix is denoted by $I_n$ or simply by $I$.

Illustration. $I_1 = [1]$, $I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ are identity matrices.

Row matrix. A matrix which has one row is a row matrix.

Illustration. $[x]$, $[1 \ \ 2 \ \ 0]$, $[0 \ \ x \ \ y \ \ z^2 \ \ {-1}]$ are row matrices.

Column matrix. A matrix which has one column is a column matrix.

Illustration. $[x]$, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 4 \\ -9 \end{bmatrix}$ are column matrices.

Real matrix. A matrix $A = [a_{ij}]$ is real if $a_{ij} \in \mathbb{R} \ \forall i, j$, i.e. all the elements of the matrix are real.

Illustration. $A = [2, 3, \sqrt{-1}, 0, 0]$ is not a real matrix, but $B = \begin{bmatrix} \sqrt{3} & 2 \\ \sqrt{6} & 9/2 \end{bmatrix}$ is a real matrix.

Triangular matrix. A square matrix $A = [a_{ij}]$ is called an upper triangular matrix if all the elements below the diagonal are zero, i.e. $a_{ij} = 0$ for $i > j$. $A$ is a lower triangular matrix if all the elements above the diagonal are zero, i.e. $a_{ij} = 0$ for $i < j$.

Illustration. $A = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 2 & 0 \\ 1 & 7 & 3 \end{bmatrix}$ is lower triangular and $B = \begin{bmatrix} 6 & 1 & 0 & 1 \\ 0 & 7 & 3 & 5 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 4 \end{bmatrix}$ is upper triangular.

The transpose of a matrix. The matrix of order $n \times m$ obtained by interchanging the rows and columns of an $m \times n$ matrix $A$ is called the transpose of $A$ and is denoted by $A'$ or $A^t$. Hence if $A = [a_{ij}]_{m \times n}$, then $A' = [a'_{ij}]_{n \times m}$, where $a'_{ij} = a_{ji}$. Note that $A'' = (A')' = A$.

The transpose of an identity matrix is the identity matrix itself.

Illustration. $A = \begin{bmatrix} 2 & 6 \\ 0 & 1 \\ -3 & 7 \end{bmatrix}$. Then $A' = \begin{bmatrix} 2 & 0 & -3 \\ 6 & 1 & 7 \end{bmatrix}$. For $I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $I' = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$.
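The transpose illustration can be verified in code; a minimal sketch (the helper `transpose`, built on Python's `zip`, is my own):

```python
def transpose(M):
    """Interchange the rows and columns of a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

A = [[2, 6], [0, 1], [-3, 7]]
At = transpose(A)
print(At)                   # [[2, 0, -3], [6, 1, 7]]
print(transpose(At) == A)   # True: (A')' = A
```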

Symmetric matrix. A symmetric matrix $A$ is a square matrix such that $A' = A$. It can also be defined as a matrix $A = [a_{ij}]$ such that $a_{ij} = a_{ji} \ \forall i, j$.

Skew-symmetric matrix. A skew-symmetric or anti-symmetric matrix is a square matrix $A$ such that $A' = -A$. Thus $A$ is skew-symmetric if $a_{ij} = -a_{ji} \ \forall i, j$. Put $j = i$ in this equation: $a_{ii} = -a_{ii}$, i.e. $a_{ii} = 0 \ \forall i$. Thus all the diagonal elements of a skew-symmetric matrix must be zero.

Illustration. $A = \begin{bmatrix} 0 & 1 & 2-i \\ 1 & 3 & -i \\ 2-i & -i & 1 \end{bmatrix}$ is symmetric and $B = \begin{bmatrix} 0 & 5 & -7 \\ -5 & 0 & i \\ 7 & -i & 0 \end{bmatrix}$ is skew-symmetric.
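The defining conditions $a_{ij} = a_{ji}$ and $a_{ij} = -a_{ji}$ translate directly into predicates. An illustrative sketch (the function names are my own), applied to the skew-symmetric example above:

```python
def is_symmetric(M):
    """True if a_ij == a_ji for all i, j (square matrix as a list of rows)."""
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(M):
    """True if a_ij == -a_ji for all i, j; this forces a zero diagonal."""
    n = len(M)
    return all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))

B = [[0, 5, -7],
     [-5, 0, 1j],
     [7, -1j, 0]]
print(is_skew_symmetric(B))  # True
```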

The conjugate of a matrix. Let $A = [a_{ij}]$ be a matrix of complex numbers. When the elements of such a matrix $A$ are replaced by their complex conjugates, the resulting matrix is called the conjugate of the matrix $A$ and is denoted by $\bar{A}$. Thus $\bar{A} = [\bar{a}_{ij}]$. We have
$$\overline{(\bar{A})} = A, \qquad (\bar{A})' = \overline{(A')}$$

Illustration. $A = \begin{bmatrix} 1+i & 2 \\ 3i & -7-2i \end{bmatrix}$. Then $\bar{A} = \begin{bmatrix} 1-i & 2 \\ -3i & -7+2i \end{bmatrix}$. Verify that $(\bar{A})' = \overline{(A')}$.

The tranjugate of a matrix. The transposed conjugate of a matrix $A$ is called the tranjugate of $A$ and is denoted by $A^\theta$. Thus $A^\theta = (\bar{A})' = \overline{(A')}$. If $A$ is real, then $A^\theta = A'$.

Illustration. $A = \begin{bmatrix} 1+i & 2 \\ 3i & -7-2i \end{bmatrix}$, $\bar{A} = \begin{bmatrix} 1-i & 2 \\ -3i & -7+2i \end{bmatrix}$ $\Rightarrow$ $A^\theta = \begin{bmatrix} 1-i & -3i \\ 2 & -7+2i \end{bmatrix}$.

Hermitian matrix. A square matrix $A = [a_{ij}]$ is called Hermitian if $A^\theta = A$, or $\overline{(A')} = A$. Thus
$$A \text{ is Hermitian} \iff a_{ij} = \bar{a}_{ji} \ \forall i, j$$
$$\Rightarrow \quad a_{ii} = \bar{a}_{ii}, \text{ i.e. } a_{ii} \text{ is real.}$$
Thus the diagonal elements of a Hermitian matrix are real numbers.

Illustration. $A = \begin{bmatrix} 1+i & 2 \\ i & 1-i \end{bmatrix}$ is not Hermitian, but $B = \begin{bmatrix} 1 & -i \\ i & 0 \end{bmatrix}$ and $C = \begin{bmatrix} 2 & 4+i & 6i \\ 4-i & 1 & 3 \\ -6i & 3 & 0 \end{bmatrix}$ are Hermitian* matrices.

Skew-Hermitian matrix. A square matrix $A = [a_{ij}]$ is called skew-Hermitian if $A^\theta = -A$, or $\overline{(A')} = -A$. Thus
$$A \text{ is skew-Hermitian} \iff a_{ij} = -\bar{a}_{ji} \ \forall i, j$$
$$\Rightarrow \quad a_{ii} = -\bar{a}_{ii}, \text{ i.e. } a_{ii} + \bar{a}_{ii} = 0$$
This means that $a_{ii}$ is 0 or pure imaginary. Hence the diagonal elements of a skew-Hermitian matrix are zero or pure imaginary numbers.

Illustration. $\begin{bmatrix} 0 & 1-i & -1+2i \\ -1-i & -i & 3i \\ 1+2i & 3i & 4i \end{bmatrix}$ is skew-Hermitian.
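Both conditions, $a_{ij} = \bar{a}_{ji}$ for Hermitian and $a_{ij} = -\bar{a}_{ji}$ for skew-Hermitian, can be tested via the conjugate transpose. A hedged sketch, not from the text (the helper names are mine; Python's built-in `.conjugate()` works on int, float and complex entries alike):

```python
def tranjugate(M):
    """Conjugate transpose of a square matrix given as a list of rows."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_hermitian(M):
    return tranjugate(M) == M

def is_skew_hermitian(M):
    return tranjugate(M) == [[-x for x in row] for row in M]

C = [[2, 4 + 1j, 6j],      # Hermitian example from the illustration above
     [4 - 1j, 1, 3],
     [-6j, 3, 0]]
K = [[0, 1 - 1j],          # a small skew-Hermitian example
     [-1 - 1j, 4j]]
print(is_hermitian(C), is_skew_hermitian(K))  # True True
```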

Equal matrices. Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. They are defined to be equal if and only if $a_{ij} = b_{ij} \ \forall i, j$. Thus two matrices are equal if and only if they are of the same order and their corresponding elements are equal. The equality of matrices has the following properties.

(i) Determinative property: For the matrices $A$ and $B$, either $A = B$ or $A \neq B$.
(ii) Reflexive property: $A = A$ for any matrix $A$.
(iii) Symmetric property: $A = B \Rightarrow B = A$.
(iv) Transitive property: $A = B$ and $B = C \Rightarrow A = C$.

Hence equality of matrices is an equivalence relation**.

* Charles Hermite (1822-1901) was a French mathematician.

Illustration. The matrix $\begin{bmatrix} 2 & 1 & x \\ a & b & -1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 3 \\ 0 & 3 & -1 \end{bmatrix}$ if $x = 3$, $a = 0$, $b = 3$.

Problem 1.3

Determine $x, y, a, b$ so that $\begin{bmatrix} 2x+y & x+y \\ a-b-1 & a+2b-6 \end{bmatrix}$ is the unit matrix.

Solution. Since the given matrix is a $2 \times 2$ unit matrix, it is of the form
$$\begin{bmatrix} 2x+y & x+y \\ a-b-1 & a+2b-6 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Using the concept of equal matrices, we get
$$2x + y = 1, \quad x + y = 0$$
and
$$a - b - 1 = 0, \quad a + 2b - 6 = 1$$
Solving these equations,
$$x = 1, \ y = -1, \ a = 3, \ b = 2$$

MCQ 1.2

Choose the true statement/s from the following:
(A) A diagonal matrix is only upper triangular.
(B) A diagonal matrix is only lower triangular.
(C) A diagonal matrix is both upper and lower triangular.
(D) A diagonal matrix is neither lower nor upper triangular.

SAQ 1.2

Identify the Hermitian and skew-Hermitian matrices from the following:
$A = \begin{bmatrix} 2 & 3i \\ -3i & 2i \end{bmatrix}$, $B = \begin{bmatrix} i & 1+2i \\ -1+2i & -i \end{bmatrix}$, $C = \begin{bmatrix} 1 & i \\ -i & 0 \end{bmatrix}$, $D = \begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix}$, $E = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

1.3 Arithmetic operations on matrices

Since matrices are mathematical objects, the basic arithmetic operations of addition, subtraction and multiplication can be defined on two matrices so as to yield other matrices.

** A relation which satisfies the properties of reflexivity, symmetry and transitivity is an equivalence relation.

Addition of matrices

Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices. Their sum $A + B$ is defined as the $m \times n$ matrix $[a_{ij} + b_{ij}]$, i.e. the sum is obtained by adding their corresponding elements. Two matrices of the same order are said to be conformable for addition. Thus the sum of any two $m \times n$ matrices is again an $m \times n$ matrix. Hence the set of all matrices of the same order is closed with respect to addition.

Illustration. $A = \begin{bmatrix} 1 & 0 & -3 \\ 6 & 1 & -2 \end{bmatrix}$, $B = \begin{bmatrix} 1 & 2 & 0 \\ 1 & -7 & 3 \end{bmatrix}$, $C = \begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix}$.

Then
$$A + B = \begin{bmatrix} 1+1 & 0+2 & -3+0 \\ 6+1 & 1-7 & -2+3 \end{bmatrix} = \begin{bmatrix} 2 & 2 & -3 \\ 7 & -6 & 1 \end{bmatrix}$$

$A + C$ is not defined.
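Entrywise addition with the conformability check can be sketched in a few lines (illustrative only; the helper `add` is my own):

```python
def add(A, B):
    """Sum of two matrices of the same order; raises if they are not conformable."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("not conformable for addition")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 0, -3], [6, 1, -2]]
B = [[1, 2, 0], [1, -7, 3]]
C = [[0, 1], [-1, 2]]
print(add(A, B))  # [[2, 2, -3], [7, -6, 1]]
# add(A, C) would raise ValueError, since A and C are not conformable.
```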

Theorem 1.1

The addition of matrices is commutative and associative, i.e. if $A$, $B$ and $C$ are conformable for addition, then
$$A + B = B + A \tag{1.1}$$
$$A + (B + C) = (A + B) + C \tag{1.2}$$

Proof. The proof follows immediately from the commutativity and associativity of numbers (real/complex) or scalars. We have
$$A + B = [a_{ij}] + [b_{ij}] = [a_{ij} + b_{ij}] = [b_{ij} + a_{ij}] = [b_{ij}] + [a_{ij}] = B + A$$
$$A + (B + C) = [a_{ij} + (b_{ij} + c_{ij})] = [(a_{ij} + b_{ij}) + c_{ij}] = (A + B) + C$$
QED

Theorem 1.2 (Cancellation law for addition)

(i) $A + C = B + C \iff A = B$
(ii) $C + A = C + B \iff A = B$

Proof. We have $A + C = [a_{ij}] + [c_{ij}] = [a_{ij} + c_{ij}]$ and $B + C = [b_{ij} + c_{ij}]$.

(i) $A + C = B + C \iff [a_{ij} + c_{ij}] = [b_{ij} + c_{ij}]$
$\iff a_{ij} + c_{ij} = b_{ij} + c_{ij} \ \forall i, j$, by the definition of equality
$\iff a_{ij} = b_{ij} \ \forall i, j$
$\iff A = B$.

Similarly part (ii) can be proved.
QED

Problem 1.4

Show that every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.

Hint. For square matrices $A$ and $B$ of the same order, we have $A'' = A$ and $(A + B)' = A' + B'$.

Solution. Let $A$ be any square matrix. We have
$$A = \tfrac{1}{2}(A + A') + \tfrac{1}{2}(A - A') = B + C, \tag{a1}$$
where
$$B = \tfrac{1}{2}(A + A'), \quad C = \tfrac{1}{2}(A - A'). \tag{a2}$$
Now
$$B' = \tfrac{1}{2}(A + A')' = \tfrac{1}{2}(A' + A'') = \tfrac{1}{2}(A' + A) = B$$
and
$$C' = \tfrac{1}{2}(A - A')' = \tfrac{1}{2}(A' - A'') = \tfrac{1}{2}(A' - A) = -C$$
$\Rightarrow$ $B$ is a symmetric matrix and $C$ is a skew-symmetric matrix.

Since $A$ is any square matrix, (a1) implies that every square matrix can be expressed as the sum of a symmetric matrix and a skew-symmetric matrix. Now we show that the representation in (a1) is unique. If possible, assume that there is another representation of $A$:
$$A = D + E, \tag{a3}$$
where $D$ is a symmetric matrix and $E$ is a skew-symmetric matrix. The uniqueness is established if $D = B$ and $E = C$. We prove this. Taking the transpose of (a3),
$$A' = (D + E)' = D' + E' = D - E, \quad \text{since } D' = D, \ E' = -E. \tag{a4}$$
By addition and subtraction of (a3) and (a4), we get
$$D = \tfrac{1}{2}(A + A') \quad \text{and} \quad E = \tfrac{1}{2}(A - A'). \tag{a5}$$
(a2) and (a5) $\Rightarrow D = B$, $E = C$.

Hence the representation in (a1) is unique.
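The decomposition $A = \tfrac{1}{2}(A + A') + \tfrac{1}{2}(A - A')$ proved above can be checked numerically. A hedged sketch, not from the text, using the matrix that appears in Problem 1.5:

```python
A = [[5, 7], [-3, -4]]
At = [list(r) for r in zip(*A)]   # the transpose A'
n = len(A)
# Symmetric part B and skew-symmetric part C of A.
B = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
C = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]
print(B)  # [[5.0, 2.0], [2.0, -4.0]]
print(C)  # [[0.0, 5.0], [-5.0, 0.0]]
```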

Problem 1.5

Express the matrix $\begin{bmatrix} 5 & 7 \\ -3 & -4 \end{bmatrix}$ as the sum of a symmetric matrix and a skew-symmetric matrix.

Solution. Let $A = \begin{bmatrix} 5 & 7 \\ -3 & -4 \end{bmatrix}$. Then $A' = \begin{bmatrix} 5 & -3 \\ 7 & -4 \end{bmatrix}$.

We know that $A$ can be expressed uniquely as the sum of a symmetric matrix and a skew-symmetric matrix in the form
$$A = \tfrac{1}{2}(A + A') + \tfrac{1}{2}(A - A') = \tfrac{1}{2}\begin{bmatrix} 10 & 4 \\ 4 & -8 \end{bmatrix} + \tfrac{1}{2}\begin{bmatrix} 0 & 10 \\ -10 & 0 \end{bmatrix} = \begin{bmatrix} 5 & 2 \\ 2 & -4 \end{bmatrix} + \begin{bmatrix} 0 & 5 \\ -5 & 0 \end{bmatrix}$$
= symmetric matrix + skew-symmetric matrix.

Problem 1.6

Express the matrix $A = \begin{bmatrix} 1+i & 2 & 5-5i \\ 2i & 2+i & 4+2i \\ -1+i & -4 & 7 \end{bmatrix}$ as the sum of a Hermitian matrix and a skew-Hermitian matrix.

Solution. Suppose $A = H + S$, where $H$ is Hermitian, i.e. $H^\theta = H$, and $S$ is skew-Hermitian, i.e. $S^\theta = -S$. Taking the tranjugate,
$$A^\theta = H^\theta + S^\theta = H - S$$
Adding and subtracting these two relations,
$$H = \tfrac{1}{2}(A + A^\theta), \quad S = \tfrac{1}{2}(A - A^\theta)$$
Here
$$A^\theta = \begin{bmatrix} 1-i & -2i & -1-i \\ 2 & 2-i & -4 \\ 5+5i & 4-2i & 7 \end{bmatrix}$$
$$\Rightarrow \quad A = \begin{bmatrix} 1 & 1-i & 2-3i \\ 1+i & 2 & i \\ 2+3i & -i & 7 \end{bmatrix} + \begin{bmatrix} i & 1+i & 3-2i \\ -1+i & i & 4+i \\ -3-2i & -4+i & 0 \end{bmatrix}$$
where the first matrix is Hermitian and the second is skew-Hermitian.
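The Hermitian/skew-Hermitian split of Problem 1.6 follows the same pattern as Problem 1.4, with the tranjugate $A^\theta$ in place of the transpose $A'$. An illustrative check, not from the text:

```python
A = [[1 + 1j, 2, 5 - 5j],
     [2j, 2 + 1j, 4 + 2j],
     [-1 + 1j, -4, 7]]
n = len(A)
Ath = [[A[j][i].conjugate() for j in range(n)] for i in range(n)]      # tranjugate
H = [[(A[i][j] + Ath[i][j]) / 2 for j in range(n)] for i in range(n)]  # Hermitian part
S = [[(A[i][j] - Ath[i][j]) / 2 for j in range(n)] for i in range(n)]  # skew-Hermitian part
# H + S reproduces A; H equals its own tranjugate, S equals minus its tranjugate.
```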

MCQ 1.3

Let $M = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and $N = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}$. The matrix $M + N$ is symmetric
(A) for all complex numbers $a, \alpha, d, \delta$ with $b - c + \beta - \gamma = 0$
(B) for all complex numbers $a, \alpha, d, \delta$ with $b - c + \beta - \gamma \neq 0$
(C) for all complex numbers $b, \beta, c, \gamma$ with $a - d + \alpha - \delta \neq 0$
(D) for all complex numbers $b, \beta, c, \gamma$ with $a - d + \alpha - \delta = 0$

MCQ 1.4

Consider the matrices $M = \begin{bmatrix} 1 & 0 & 3 \\ 2 & k & 5 \\ 0 & 0 & -3 \end{bmatrix}$, $N = \begin{bmatrix} 0 & 1 & 0 \\ 6 & 2 & 5 \\ 7 & 0 & 0 \end{bmatrix}$. Then the trace of $M + N$ satisfies
(A) $x^2 - 3x + 2 = 0$ but not $x^2 + x - 2 = 0$
(B) $x^3 - 2x^2 + 2x - 4 = 0$ but not $x = 2i$
(C) neither $x^2 + 1 = 0$ nor $x^2 - 9 = 0$
(D) $x^2 - 4 = 0$ but not $x^2 + x - 2 = 0$

SAQ 1.3

Compute the values of $a, b, c, d \in \mathbb{R}$ so that the sum of the two matrices
$$\begin{bmatrix} a+ci & b+di \\ ai & -bi \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} d & -ai \\ b & ci \end{bmatrix}$$
is Hermitian with trace 3.

SAQ 1.4

Show that $A + A'$ is symmetric for any $n$-square matrix $A$.

SAQ 1.5

Prove that the matrix $\begin{bmatrix} 0 & 1-i & 2+3i \\ -1-i & 4i & 4-5i \\ -2+3i & -4-5i & -3i \end{bmatrix}$ is skew-Hermitian.

Scalar multiplication

Let $k$ be any scalar and $A = [a_{ij}]$ be any $m \times n$ matrix. Multiplying the matrix $A$ by the scalar $k$ gives $kA$, defined as
$$kA = [k a_{ij}] \tag{1.3}$$
Hence to obtain $kA$, just multiply every element of $A$ by $k$.

Illustration. $A = \begin{bmatrix} 2 & -3 \\ -1 & 4 \end{bmatrix}$. Then $3A = \begin{bmatrix} 6 & -9 \\ -3 & 12 \end{bmatrix}$.

Remark

(i) We do not distinguish between $kA$ and $Ak$.
(ii) We write $-A = (-1)A$, i.e. the negative of an $m \times n$ matrix $A = [a_{ij}]$ is defined to be $-A = [-a_{ij}]$. Thus the negative of $A$ is obtained by changing the sign of every element of $A$. Then $A + (-B) = A - B$.

Illustration. $A = \begin{bmatrix} 2 & -3 & i \\ 3 & 4i & 1-2i \end{bmatrix} \Rightarrow -A = \begin{bmatrix} -2 & 3 & -i \\ -3 & -4i & -1+2i \end{bmatrix}$

Theorem 1.3 (Properties of matrix addition and scalar multiplication)

If $A$, $B$ and $C$ are $m \times n$ matrices, $0$ is the $m \times n$ zero matrix and $h$, $k$ are any scalars, then

commutativity: $A + B = B + A$
associativity: $A + (B + C) = (A + B) + C$
identity and negative: $A + 0 = A$, $A + (-A) = 0$
associativity: $h(kA) = (hk)A$
distributivity: $(h + k)A = hA + kA$, $k(A + B) = kA + kB$
and $1A = A$, $0A = 0$, $h0 = 0$.

The proof follows from the definitions and hence is left to the readers.

Problem 1.7

Give an example each of a $3 \times 3$ real symmetric matrix $B$ and a $3 \times 3$ real skew-symmetric matrix $C$, and verify that $B + iC$ is a Hermitian matrix.

Solution. Let
$$B = \begin{bmatrix} 1 & -2 & 3 \\ -2 & 0 & 4 \\ 3 & 4 & 5 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & -3 \\ -2 & 3 & 0 \end{bmatrix}$$
Thus $B$ is a real symmetric matrix and $C$ is a real skew-symmetric matrix. Then
$$B + iC = \begin{bmatrix} 1 & -2+i & 3+2i \\ -2-i & 0 & 4-3i \\ 3-2i & 4+3i & 5 \end{bmatrix}$$
which is a Hermitian matrix.

Problem 1.8

Show that if $A$ is Hermitian, then $iA$ is skew-Hermitian.

Solution. Let $A = [a_{mn}]$ be a Hermitian matrix.
$$\Rightarrow \quad a_{mn} = \bar{a}_{nm} \ \forall m, n \tag{a1}$$
Now
$$iA = i[a_{mn}] = [i a_{mn}] = [b_{mn}], \quad \text{where } b_{mn} = i a_{mn} \ \forall m, n$$
$$\Rightarrow \quad -\bar{b}_{nm} = -\overline{i a_{nm}} = i \bar{a}_{nm} = i a_{mn} = b_{mn} \ \forall m, n, \quad \text{by (a1)}$$
$\Rightarrow$ the matrix $[b_{mn}] = iA$ is skew-Hermitian.

Problem 1.9

Prove that every Hermitian matrix can be written as $A + iB$, where $A$ is real and symmetric and $B$ is real and skew-symmetric.

Solution. Let $P = [p_{ij}]$ be any $n \times n$ Hermitian matrix. Then by definition,
$$P^\theta = P \tag{a1}$$
Since any $n$-square matrix can be written as a sum of two $n$-square matrices, we express
$$P = A + iB,$$
where $A$ and $B$ are real matrices of order $n$. The result is established if we show that $A$ is symmetric and $B$ is skew-symmetric.
$$P^\theta = (A + iB)^\theta = A^\theta + (iB)^\theta = A^\theta - iB^\theta = A' - iB', \quad \text{since } A^\theta = A', \ B^\theta = B' \text{ for real } A, B$$
Then (a1) $\Rightarrow$
$$A' - iB' = A + iB$$
$$\Rightarrow \quad A' = A, \ B' = -B$$
$\Rightarrow$ $A$ is symmetric and $B$ is skew-symmetric.

Hence any Hermitian matrix $P$ can be written as $A + iB$, where $A$ is symmetric and $B$ is skew-symmetric.

MCQ 1.5

Hence any Hermitian matrix P is written as A  i B , where A is symmetric and B is skew symmetric. MCQ 1.5 ªa b º 2« » ¬ r 2a ¼

Given

a º ªb r º ªr  3a «3 a »  « r a  b »¼ ¼ ¬ ¬

Then the equation of a circle with centre (a , b) and radius r is (A) x 2  y 2  2 x  4 y

25

(B) x 2  y 2  2 x  4 y 16 (C) x 2  y 2  2 x  4 y 9 (D) x 2  y 2  2 x  4 y

4

SAQ 1.6

If $A = \begin{bmatrix} 2 & 3-2i & -4 \\ 3+2i & 5 & 6i \\ -4 & -6i & 3 \end{bmatrix}$, show that $A$ is a Hermitian matrix and $iA$ is a skew-Hermitian matrix.

SAQ 1.7

If $2A + 3B = \mathrm{diag}\,(1, -5, 2)$ for $A = \mathrm{diag}\,(a, -1, c)$ and $B = \mathrm{diag}\,(3, b, -4)$, then find the values of $a$, $b$, $c$.

1.4 Multiplication of matrices

If $A = [a_{ij}]$ and $B = [b_{ij}]$ are $m \times n$ matrices, then it is natural to write their product as $AB = [a_{ij} b_{ij}]$, so that the commutativity of the product
$$AB = [a_{ij} b_{ij}] = [b_{ij} a_{ij}] = BA$$
is achieved. Unfortunately this is not the fact, and the above is not the way the product of two matrices is defined. Instead we follow the procedure laid down by Cayley for defining the product of two matrices; it is called the Cayley product of two matrices, as follows.

Let $A = [a_{ij}]$ be an $m \times p$ matrix and let $B = [b_{ij}]$ be a $p \times n$ matrix. The product $AB$ is the $m \times n$ matrix $C = [c_{ij}]$ defined by
$$c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{ip} b_{pj} \tag{1.4}$$

The cij is the result of multiplication of the corresponding elements of the i th row of A and of the j th column of B : i th row of A [ ai1 ai 2 ai 3  aip ]

j th column of B

ai1

Their product:  b1 j

ª b1 j º «b » « 2j» «  » « » «¬ b pj »¼

ai 2  aip 



b2 j  b pj

When the product AB is defined we say that A is conformable!to B for multiplication. Note that AB does not necessarily define BA. Thus A is conformable to B for multiplication does not

necessarily imply that B is conformable to A for multiplication. The conformability is guaranteed both ways only in case of square matrices of the same order. Illustration

(i) A = [3, 2, -1], B = [0, -1, 3]' (a column). A is 1 × 3 and B is 3 × 1, so AB is a 1 × 1 matrix; here BA is also defined.

AB = [3·0 + 2·(-1) + (-1)·3] = [-5]

BA = [0, -1, 3]' [3, 2, -1] = [[0, 0, 0],[-3, -2, 1],[9, 6, -3]]

Thus BA is a 3 × 3 matrix, and it is obvious that AB ≠ BA.

(ii) For A = [[1, 2],[-1, 1]] and B = [[0, -1],[3, 2]], we have

AB = [[1·0 + 2·3, 1·(-1) + 2·2],[-1·0 + 1·3, -1·(-1) + 1·2]] = [[6, 3],[3, 3]]

BA = [[0, -1],[3, 2]] [[1, 2],[-1, 1]] = [[1, -1],[1, 8]] ≠ AB
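The Cayley product (1.4) can be spelled out as explicit loops; the sketch below (Python with numpy, an illustration rather than part of the text) recomputes illustration (ii) and confirms AB ≠ BA:

```python
import numpy as np

# Cayley product c_ij = sum_k a_ik b_kj, written as explicit comprehensions
# and compared with numpy's built-in matrix product.
def cayley_product(A, B):
    m, p = len(A), len(A[0])
    assert len(B) == p, "A must be conformable to B"
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [-1, 1]]
B = [[0, -1], [3, 2]]
assert cayley_product(A, B) == [[6, 3], [3, 3]]    # AB, as in illustration (ii)
assert cayley_product(B, A) == [[1, -1], [1, 8]]   # BA differs from AB
assert np.allclose(np.array(A) @ np.array(B), cayley_product(A, B))
```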

Problem 1.10

Matrix A has x rows and x + 5 columns. Matrix B has y rows and 11 - y columns. Both AB and BA exist. Find x and y.

Solution. Since AB is defined, the number of columns of A equals the number of rows of B, i.e.

x + 5 = y    (a1)

Similarly, BA defined ⇒

11 - y = x    (a2)

Solving (a1) and (a2): x = 3, y = 8.

Problem 1.11

Factorise the matrix A = [[5, -2, 1],[7, 1, -5],[3, 7, 4]] into the form LU, where L is lower triangular with each diagonal element one and U is an upper triangular matrix.

Solution. Let A = LU, where

L = [[1, 0, 0],[a, 1, 0],[b, c, 1]],  U = [[x, d, e],[0, y, f],[0, 0, z]].

It is easy to note that L is lower triangular with each diagonal element one and U is upper triangular. Then A = LU gives

[[5, -2, 1],[7, 1, -5],[3, 7, 4]] = [[x, d, e],[ax, ad + y, ae + f],[bx, bd + cy, be + cf + z]]

Equating the elements on both sides:

x = 5, d = -2, e = 1,
ax = 7, ad + y = 1, ae + f = -5,
bx = 3, bd + cy = 7, be + cf + z = 4

⇒ a = 7/5, b = 3/5, c = 41/19, y = 19/5, f = -32/5, z = 327/19.

With these values we get the matrices L and U.
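The elimination above is exactly the Doolittle scheme; the following sketch (Python with exact Fractions, assuming no zero pivot arises, as is the case in Problem 1.11) recovers the same L and U:

```python
from fractions import Fraction

# Doolittle factorisation A = LU: unit lower-triangular L, upper-triangular U.
def lu(A):
    n = len(A)
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    U = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                      # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # fill column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[Fraction(v) for v in row] for row in [[5, -2, 1], [7, 1, -5], [3, 7, 4]]]
L, U = lu(A)
assert L[1][0] == Fraction(7, 5) and L[2][1] == Fraction(41, 19)
assert U[2][2] == Fraction(327, 19)
# L times U reproduces A exactly
assert all(sum(L[i][k] * U[k][j] for k in range(3)) == A[i][j]
           for i in range(3) for j in range(3))
```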

Problem 1.12

Prove that the product of the matrices

[[cos²θ, cos θ sin θ],[cos θ sin θ, sin²θ]] and [[cos²φ, cos φ sin φ],[cos φ sin φ, sin²φ]]

is a null matrix, where θ and φ differ by an odd multiple of π/2.

Hint. cos (2n + 1)π/2 = 0, 2n + 1 an odd number.

Solution. We have

[[cos²θ, cos θ sin θ],[cos θ sin θ, sin²θ]] [[cos²φ, cos φ sin φ],[cos φ sin φ, sin²φ]]

= [[cos²θ cos²φ + cos θ sin θ cos φ sin φ, cos²θ cos φ sin φ + cos θ sin θ sin²φ],
   [cos θ sin θ cos²φ + sin²θ cos φ sin φ, cos θ sin θ cos φ sin φ + sin²θ sin²φ]]

= [[cos θ cos φ (cos θ cos φ + sin θ sin φ), cos θ sin φ (cos θ cos φ + sin θ sin φ)],
   [sin θ cos φ (cos θ cos φ + sin θ sin φ), sin θ sin φ (cos θ cos φ + sin θ sin φ)]]

= cos(θ - φ) [[cos θ cos φ, cos θ sin φ],[sin θ cos φ, sin θ sin φ]]

= [[0, 0],[0, 0]],  since θ - φ = (2n + 1)π/2 ⇒ cos(θ - φ) = cos (2n + 1)π/2 = 0, see Hint.

Problem 1.13

If A = [[cos α, sin α],[-sin α, cos α]], show that A^n = [[cos nα, sin nα],[-sin nα, cos nα]], where n is a positive integer.

Hint. Use mathematical induction.

Solution. We have to show that for all n ∈ N

A^n = [[cos nα, sin nα],[-sin nα, cos nα]]    (a1)

For n = 1, (a1) reduces to A itself, so (a1) is true for n = 1. Assume it for some n ∈ N. Then

A^{n+1} = A^n A = [[cos nα, sin nα],[-sin nα, cos nα]] [[cos α, sin α],[-sin α, cos α]]
= [[cos nα cos α - sin nα sin α, cos nα sin α + sin nα cos α],[-sin nα cos α - cos nα sin α, -sin nα sin α + cos nα cos α]]
= [[cos(nα + α), sin(nα + α)],[-sin(nα + α), cos(nα + α)]]
= [[cos (n + 1)α, sin (n + 1)α],[-sin (n + 1)α, cos (n + 1)α]]

Hence (a1) is true for n + 1 whenever it is true for n. By mathematical induction, (a1) is true for all positive integral values of n.
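The induction result says that powering the matrix composes rotations; a quick numerical check of this (Python with numpy, offered only as an illustration) for one choice of angle and exponent:

```python
import numpy as np

# A^n for the rotation-type matrix of Problem 1.13 equals the same matrix
# built directly with the angle n*alpha.
def R(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

alpha, n = 0.3, 7
assert np.allclose(np.linalg.matrix_power(R(alpha), n), R(n * alpha))
```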

Problem 1.14

Given a matrix A = [[a, b, c],[b, c, a],[c, a, b]], where a, b, c are real positive numbers, abc = 1 and A'A = I, find the value of a³ + b³ + c³.    (IIT-JEE 2003)

Solution. We have A'A = I, and A' = A since A is symmetric, so

[[a, b, c],[b, c, a],[c, a, b]] [[a, b, c],[b, c, a],[c, a, b]] = [[1, 0, 0],[0, 1, 0],[0, 0, 1]]

⇒ [[a² + b² + c², ab + bc + ca, ab + bc + ca],
   [ab + bc + ca, a² + b² + c², ab + bc + ca],
   [ab + bc + ca, ab + bc + ca, a² + b² + c²]] = [[1, 0, 0],[0, 1, 0],[0, 0, 1]]

⇒ a² + b² + c² = 1 and ab + bc + ca = 0    (a1)

We know that

a³ + b³ + c³ = (a + b + c)(a² + b² + c² - ab - bc - ca) + 3abc
= (a + b + c)(1 - 0) + 3,  ∵ abc = 1 and (a1)
= (a + b + c) + 3

Now (a + b + c)² = a² + b² + c² + 2(ab + bc + ca) = 1, by (a1)

⇒ a + b + c = ±1

⇒ a + b + c = 1, ∵ a, b, c are positive real numbers    (a2)

Then (a2) ⇒ a³ + b³ + c³ = 1 + 3 = 4.

MCQ 1.6

Consider the statements:
(i) Conformability for addition of matrices is an equivalence relation.
(ii) Conformability for multiplication of matrices is not an equivalence relation.
Then
(A) (i) and (ii) are both true.
(B) (i) and (ii) are both false.
(C) (i) is false.
(D) Only (ii) is true.
Select the correct answer.

MCQ 1.7

If A = [[1, 2, 3, 4],[0, 5, 6, 7],[0, 0, 8, 9],[0, 0, 0, 10]], then the trace of A² is
(A) 1 + 5 + 8 + 10
(B) 4² + 6²
(C) 1² + 5² + 8² + 10²
(D) 1² + 2² + ⋯ + 10²
Select the correct answer.

MCQ 1.8

Let A = [[2, 1],[3, -1]]. Select the correct statement from the following:
(A) A² + A + 5I = 0
(B) A² + A + I = 0
(C) A² + 3A - 5I = 0
(D) A² - A - 5I = 0

Hint. Here 0 on the right side stands for a zero matrix.

SAQ 1.8

Find M³ if M = [[0, 1, 3],[0, -1, 5],[0, 0, 1]].

SAQ 1.9

Given A = [[0, i],[i, 0]], where i² = -1, compute A², A³, A⁴. Derive a rule for the positive integral powers of A.

SAQ 1.10

Determine a, b and c if [3, 2, 2]' [1  2] [3, -1]' = [a, b, c]'.

SAQ 1.11

Let A(x) = [[cos x, -sin x],[sin x, cos x]]. Show that A(x) and A(y) commute. Further show that

A²(x) = [[cos 2x, -sin 2x],[sin 2x, cos 2x]]  and  A(x) A(y) = A(x + y).

SAQ 1.12

Let A = [[1, ω, ω²],[ω, ω², 1],[ω², 1, ω]] and B = [[ω, ω², 1],[ω², 1, ω],[1, ω, ω²]], where ω is a cube root of unity and ω ≠ 1. Show that AB = 0.

Hint. ω³ = 1, 1 + ω + ω² = 0.

The unusual properties of matrix multiplication

The familiar rules of multiplication of scalars imply that

ab = ba
ab = 0 ⇒ either a = 0 or b = 0
ab = ac, a ≠ 0 ⇒ b = c

where a, b, c are scalars. These rules, in general, do not hold for matrix multiplication. Thus if A, B, C are conformable for the indicated products, we have

(i) AB ≠ BA in general;
(ii) AB = 0 does not necessarily imply A = 0 or B = 0;
(iii) AB = AC does not necessarily imply B = C, even if A ≠ 0;
(iv) A² = I does not necessarily imply A = I or A = -I.

Illustration. A = [[1, 0],[2, 0]], B = [[0, 0],[3, 1]], C = [[0, 0],[4, -1]]

⇒ AB = [[0, 0],[0, 0]], i.e. AB = 0 but A ≠ 0, B ≠ 0

AC = [[0, 0],[0, 0]] ⇒ AB = AC and A ≠ 0, but B ≠ C.

Remark. That the familiar rules of scalar algebra fail for matrix multiplication does not mean that matrix multiplication is devoid of any merit; rather, we treat this failure as a special characteristic of matrix multiplication. We show next that the other rules of scalar algebra, associativity and distributivity, do apply to matrix multiplication.
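These failures can be checked directly on the matrices of the illustration; the sketch below (Python with numpy, not part of the original text) confirms AB = 0 with both factors nonzero and AB = AC with B ≠ C:

```python
import numpy as np

A = np.array([[1, 0], [2, 0]])
B = np.array([[0, 0], [3, 1]])
C = np.array([[0, 0], [4, -1]])

assert np.all(A @ B == 0)          # AB = 0 although neither factor is zero
assert np.any(A != 0) and np.any(B != 0)
assert np.all(A @ B == A @ C)      # AB = AC ...
assert np.any(B != C)              # ... yet B != C
```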

Theorem 1.4 (Usual properties of matrix multiplication)

If A, B, C are conformable for addition and multiplication whenever needed, and h, k are scalars, then

Associativity:   (i) A(BC) = (AB)C    (1.5a)
Distributivity:  (ii) (A + B)C = AC + BC,  C(A + B) = CA + CB    (1.5b)
Linearity:       (iii) A(hB + kC) = h AB + k AC    (1.5c)
Associativity:   (iv) (hA)B = A(hB) = h(AB)    (1.5d)

Proof. (i) Let A = [a_ij]_{m×n}, B = [b_jk]_{n×p}, C = [c_kl]_{p×r}. We have

BC = [ Σ_{k=1}^{p} b_jk c_kl ]_{n×r},  j = 1, …, n and l = 1, …, r

⇒ A(BC) = [ Σ_{j=1}^{n} a_ij ( Σ_{k=1}^{p} b_jk c_kl ) ]_{m×r} = [ Σ_{j=1}^{n} Σ_{k=1}^{p} a_ij b_jk c_kl ]_{m×r}

Similarly

(AB)C = [ Σ_{k=1}^{p} ( Σ_{j=1}^{n} a_ij b_jk ) c_kl ]_{m×r} = [ Σ_{j=1}^{n} Σ_{k=1}^{p} a_ij b_jk c_kl ]_{m×r}

⇒ A(BC) = (AB)C.

(ii) Let A = [a_ij]_{m×n}, B = [b_jk]_{n×p}, C = [c_jk]_{n×p}. Note that for B + C to be meaningful, B and C must have the same order. Now B + C = [b_jk + c_jk]_{n×p}. Then

A(B + C) = [ Σ_{j=1}^{n} a_ij (b_jk + c_jk) ]_{m×p}
= [ Σ_{j=1}^{n} (a_ij b_jk + a_ij c_jk) ]_{m×p}
= [ Σ_{j=1}^{n} a_ij b_jk ]_{m×p} + [ Σ_{j=1}^{n} a_ij c_jk ]_{m×p}
= AB + AC

For the remaining part of (ii) suppose that the matrices A, B, C are of orders m × n, p × m, p × m respectively, and complete the proof yourself. Similarly write down the proofs of (iii) and (iv).    QED

Problem 1.15

For the matrices A = [[0, 1],[-1, 1]], B = [[1, 2],[0, 3]], C = [[-1, 0],[1, 1]], verify (1.5b), i.e. the distributivity of matrix multiplication over addition.

Solution. We have

A(B + C) = [[0, 1],[-1, 1]] ([[1, 2],[0, 3]] + [[-1, 0],[1, 1]]) = [[0, 1],[-1, 1]] [[0, 2],[1, 4]] = [[1, 4],[1, 2]]

and

AB + AC = [[0, 1],[-1, 1]] [[1, 2],[0, 3]] + [[0, 1],[-1, 1]] [[-1, 0],[1, 1]] = [[0, 3],[-1, 1]] + [[1, 1],[2, 1]] = [[1, 4],[1, 2]]

This gives (1.5b). Complete the remaining part.

Problem 1.16

Verify the associativity A(BC) = (AB)C for the following matrices:

A = [[1, 0, -1],[0, 1, 2]],  B = [[1, 1, 0],[-1, 1, 2],[0, 0, 1]],  C = [[1, 0],[0, 1],[1, 0]].

Solution.

BC = [[1, 1, 0],[-1, 1, 2],[0, 0, 1]] [[1, 0],[0, 1],[1, 0]] = [[1, 1],[1, 1],[1, 0]]

⇒ A(BC) = [[1, 0, -1],[0, 1, 2]] [[1, 1],[1, 1],[1, 0]] = [[0, 1],[3, 1]]

AB = [[1, 0, -1],[0, 1, 2]] [[1, 1, 0],[-1, 1, 2],[0, 0, 1]] = [[1, 1, -1],[-1, 1, 4]]

⇒ (AB)C = [[1, 1, -1],[-1, 1, 4]] [[1, 0],[0, 1],[1, 0]] = [[0, 1],[3, 1]] = A(BC)

Problem 1.17

Let A = [[i, 0],[0, i]], where i² = -1. Give a general rule for A^n, n ∈ N.

Solution.

A² = [[i, 0],[0, i]] [[i, 0],[0, i]] = [[-1, 0],[0, -1]] = -I

A³ = A²·A = -IA = -A,  ∵ IA = A
A⁴ = A³·A = -A·A = -A² = I
A⁵ = A⁴·A = IA = A
A⁶ = A⁵·A = A·A = A² = -I
A⁷ = A⁶·A = -IA = -A
A⁸ = A⁷·A = -A·A = I

⇒ A^n = I if n = 4m; A if n = 4m + 1; -I if n = 4m + 2; -A if n = 4m + 3, where m = 0, 1, 2, 3, ….

Problem 1.18

If A and B are square matrices of order n, show that tr(A + B) = tr A + tr B, where tr denotes the trace.

Solution. Let A = [a_ij]_{n×n}, B = [b_ij]_{n×n}.

⇒ A + B = [a_ij + b_ij]_{n×n}

By definition,

tr A = Σ_{i=1}^{n} a_ii = a_11 + ⋯ + a_nn,  tr B = Σ_{i=1}^{n} b_ii

Now

tr(A + B) = Σ_{i=1}^{n} (a_ii + b_ii) = Σ_{i=1}^{n} a_ii + Σ_{i=1}^{n} b_ii = tr A + tr B

Problem 1.19

Consider the Pauli spin matrices:

σ_x = [[0, 1],[1, 0]],  σ_y = [[0, -i],[i, 0]],  σ_z = [[1, 0],[0, -1]].

Show that (i) the commutators of σ_x and σ_y, of σ_y and σ_z, and of σ_z and σ_x are respectively 2i σ_z, 2i σ_x and 2i σ_y, and (ii) the above matrices anticommute with each other.

Hint. The commutator of the n-square matrices A and B is AB - BA.

Solution. (i)

σ_x σ_y - σ_y σ_x = [[0, 1],[1, 0]] [[0, -i],[i, 0]] - [[0, -i],[i, 0]] [[0, 1],[1, 0]]
= [[i, 0],[0, -i]] - [[-i, 0],[0, i]]
= [[2i, 0],[0, -2i]] = 2i [[1, 0],[0, -1]] = 2i σ_z

⇒ the commutator of σ_x and σ_y is 2i σ_z. Similarly compute σ_y σ_z - σ_z σ_y and σ_z σ_x - σ_x σ_z and prove the required results.

(ii)

σ_x σ_y = [[0, 1],[1, 0]] [[0, -i],[i, 0]] = [[i, 0],[0, -i]]    (a1)

σ_y σ_x = [[0, -i],[i, 0]] [[0, 1],[1, 0]] = [[-i, 0],[0, i]] = -σ_x σ_y, by (a1)

Similarly we can prove that σ_y σ_z = -σ_z σ_y and σ_z σ_x = -σ_x σ_z. This proves (ii).
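All three commutator and anticommutator relations of Problem 1.19 can be verified at once numerically; the following sketch (Python with numpy, offered as an illustration) does so:

```python
import numpy as np

# The Pauli matrices and their commutation / anticommutation relations.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(sx @ sy - sy @ sx, 2j * sz)   # [sx, sy] = 2i sz
assert np.allclose(sy @ sz - sz @ sy, 2j * sx)   # [sy, sz] = 2i sx
assert np.allclose(sz @ sx - sx @ sz, 2j * sy)   # [sz, sx] = 2i sy
assert np.allclose(sx @ sy, -(sy @ sx))          # anticommutation
assert np.allclose(sy @ sz, -(sz @ sy))
assert np.allclose(sz @ sx, -(sx @ sz))
```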

Problem 1.20

Consider a 2 × 2 real matrix A such that A² = 0. Show that it can be written in the form [[xy, y²],[-x², -xy]].

Solution. Let A = [[a, b],[c, d]], where a, b, c, d ∈ R. Then

A² = [[a² + bc, ab + bd],[ca + dc, cb + d²]]

and A² = 0 gives

a² + bc = 0,  b(a + d) = 0,  c(a + d) = 0,  cb + d² = 0

The middle two equations give either a + d = 0, or b = 0 and c = 0. If b = c = 0, the remaining equations give a² = 0 and d² = 0, i.e. a = d = 0, so A is the zero matrix, a trivial form. Rejecting this case, we take d = -a. Let a = xy = -d with x, y ∈ R. Then a² + bc = 0 gives

x²y² + bc = 0,

which is satisfied by b = y², c = -x². Thus A takes the form

[[xy, y²],[-x², -xy]]

Note that the above form is not unique.
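Any real choice of x and y in the form just derived indeed squares to zero; a quick check (Python with numpy, an illustration rather than part of the text):

```python
import numpy as np

# The nilpotent form of Problem 1.20: A^2 = 0 for any real x, y.
def nilpotent(x, y):
    return np.array([[x * y, y ** 2], [-x ** 2, -x * y]])

A = nilpotent(3.0, 2.0)
assert np.allclose(A @ A, 0)       # A squared is the zero matrix
assert not np.allclose(A, 0)       # although A itself is nonzero
```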

SAQ 1.13

If 2A + 3B = [[6, -1],[1, -5]] and A + 2B = [[2, -3],[4, 0]], then find trace A + trace B.

Hint. Use Problem 1.18.

Theorem 1.5 (Transpose of the product)

If A and B are two m × n and n × p matrices respectively, then

(AB)' = B'A'.    (1.6)

Proof. Let A = [a_ik]_{m×n} and B = [b_kj]_{n×p} be two matrices. Then AB is defined and

AB = [c_ij]_{m×p},  c_ij = Σ_{k=1}^{n} a_ik b_kj    (1.7)

Now (AB)' = [c'_ij]_{p×m} with c'_ij = c_ji, A' = [a'_ki]_{n×m} with a'_ki = a_ik, and B' = [b'_ik]_{p×n} with b'_ik = b_ki, so B'A' is defined and

B'A' = [ Σ_{k=1}^{n} b'_ik a'_kj ]_{p×m} = [ Σ_{k=1}^{n} b_ki a_jk ]_{p×m} = [c_ji]_{p×m} = (AB)', by (1.7)

⇒ B'A' = (AB)'.    QED
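Theorem 1.5 is easy to test on non-square factors, where getting the order of the transposes wrong would not even be conformable; a randomized check (Python with numpy, an illustration):

```python
import numpy as np

# (AB)' = B'A' for rectangular A (3 x 4) and B (4 x 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
assert np.allclose((A @ B).T, B.T @ A.T)
```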

Orthogonal matrix. A square matrix A is orthogonal if

A A' = I.    (1.8)

Problem 1.21

Find the values of a, b, c if A = [[0, 2b, c],[a, b, -c],[a, -b, c]] is orthogonal.

Solution. We have

A' = [[0, a, a],[2b, b, -b],[c, -c, c]]

Let the matrix A be orthogonal. Then A A' = I

⇒ [[0, 2b, c],[a, b, -c],[a, -b, c]] [[0, a, a],[2b, b, -b],[c, -c, c]] = [[1, 0, 0],[0, 1, 0],[0, 0, 1]]

⇒ [[4b² + c², 2b² - c², -2b² + c²],
   [2b² - c², a² + b² + c², a² - b² - c²],
   [-2b² + c², a² - b² - c², a² + b² + c²]] = [[1, 0, 0],[0, 1, 0],[0, 0, 1]]

Equating the elements on both sides,

4b² + c² = 1,  a² + b² + c² = 1,  2b² - c² = 0,  a² - b² - c² = 0

⇒ a = ±1/√2,  b = ±1/√6,  c = ±1/√3.

These are the required values of a, b, c.

Problem 1.22

Show that A = [[cos θ, 0, sin θ],[0, 1, 0],[-sin θ, 0, cos θ]] is an orthogonal matrix.

Solution. Here

A' = [[cos θ, 0, -sin θ],[0, 1, 0],[sin θ, 0, cos θ]]

Then

A A' = [[cos θ, 0, sin θ],[0, 1, 0],[-sin θ, 0, cos θ]] [[cos θ, 0, -sin θ],[0, 1, 0],[sin θ, 0, cos θ]]
= [[cos²θ + sin²θ, 0, -cos θ sin θ + sin θ cos θ],[0, 1, 0],[-sin θ cos θ + cos θ sin θ, 0, sin²θ + cos²θ]]
= [[1, 0, 0],[0, 1, 0],[0, 0, 1]] = I

⇒ the matrix A is orthogonal.
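The orthogonality of this matrix for an arbitrary angle can be confirmed numerically; the sketch below (Python with numpy, an illustration) also checks the standard consequence that an orthogonal matrix has determinant of absolute value 1:

```python
import numpy as np

# AA' = I for the matrix of Problem 1.22, at an arbitrary angle.
t = 1.234
A = np.array([[np.cos(t), 0, np.sin(t)],
              [0, 1, 0],
              [-np.sin(t), 0, np.cos(t)]])
assert np.allclose(A @ A.T, np.eye(3))
assert np.isclose(abs(np.linalg.det(A)), 1)   # orthogonal => |det A| = 1
```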

Unitary matrix. A square matrix U is called unitary if and only if

U conj(U)' = I.    (1.9)

For a real unitary matrix, conj(U) = U, so (1.9) becomes

U U' = I.    (1.10)

This (1.10) is the condition for an orthogonal matrix U. Hence a real unitary matrix is orthogonal.

Problem 1.23

Show that A = [[a + ic, -b + id],[b + id, a - ic]] is a unitary matrix if a² + b² + c² + d² = 1.

Solution. Here

conj(A)' = [[a - ic, b - id],[-b - id, a + ic]]

⇒ A conj(A)' = [[a + ic, -b + id],[b + id, a - ic]] [[a - ic, b - id],[-b - id, a + ic]]
= [[(a + ic)(a - ic) + (-b + id)(-b - id), (a + ic)(b - id) + (-b + id)(a + ic)],
   [(b + id)(a - ic) + (a - ic)(-b - id), (b + id)(b - id) + (a - ic)(a + ic)]]
= [[a² + b² + c² + d², 0],[0, a² + b² + c² + d²]]

⇒ A conj(A)' = I ⟺ a² + b² + c² + d² = 1

Hence A is unitary if a² + b² + c² + d² = 1.
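Problem 1.23 can be checked numerically for one choice of a, b, c, d whose squares sum to one; a sketch (Python with numpy, an illustration):

```python
import numpy as np

# With a^2 + b^2 + c^2 + d^2 = 1 the matrix of Problem 1.23 is unitary.
a, b, c, d = 0.5, 0.5, 0.5, 0.5           # squares sum to 1
A = np.array([[a + 1j * c, -b + 1j * d],
              [b + 1j * d, a - 1j * c]])
assert np.allclose(A @ A.conj().T, np.eye(2))
```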

Show that the transpose of a unitary matrix is unitary.

Solution. Let A be a unitary matrix. Then A conj(A)' = I. Taking the transpose and using (1.6),

(A conj(A)')' = I' = I

⇒ (conj(A)')' A' = I

⇒ conj(A) A' = I

Since conj(A) is thus the inverse of the square matrix A', we also have A' conj(A) = I. But conj(A) = (conj(A'))', so

A' (conj(A'))' = I

Hence A' is unitary.

Problem 1.25

Let A = [[2, 99],[0, -1]]. Then the trace of A^999 is
(a) (2)^999 + (99)^99 + (-1)^9
(b) 2^999 · (-1)^9 · 99
(c) 99^999
(d) 2^999 + (-1)^999.
Select the correct answer.

Hint. The trace of a square matrix [a_ij] is a_11 + a_22 + ⋯ + a_nn.

Solution. A^999 = A·A ⋯ to 999 factors. Since we have to find only the trace of A^999, we concentrate on the diagonal elements of A², A³, …, A^999.

Now

A² = [[2, 99],[0, -1]] [[2, 99],[0, -1]] = [[2², x],[0, (-1)²]]

A³ = A² A = [[2², x],[0, (-1)²]] [[2, 99],[0, -1]] = [[2³, y],[0, (-1)³]]

Continuing this,

A^999 = [[2^999, z],[0, (-1)^999]]

Then the trace of A^999 is 2^999 + (-1)^999. Hence the correct choice is (d).

MCQ 1.9

Let M = [[a, -b],[b, a]]. The values of a and b satisfying the equation

M + M' = 2I

are
(A) a = 0, b ∈ R
(B) a = 1, b ∈ R
(C) a = 1, b not defined
(D) None of the above

MCQ 1.10

Let M be a symmetric matrix and let N be a square matrix of the same order as M. Then
(A) M' is symmetric but N'MN is not symmetric.
(B) M' is not symmetric but N'MN is symmetric.
(C) Both M' and N'MN are symmetric.
(D) Both M' and N'MN are not symmetric.

MCQ 1.11

Let A - (1/2)I and A + (1/2)I be two orthogonal matrices. Then
(A) A is orthogonal
(B) A + 2A' = 0
(C) A + A' = 0
(D) A - 2A' = 0

SAQ 1.14

If A = (1/3) [[1, 2, a],[2, 1, b],[2, -2, c]] is orthogonal, then find the values of a, b and c.

SAQ 1.15

If A = [[1, 0],[1, 1]], then show that the sum of the elements of A^n is n + 2.

SAQ 1.16

Let A be a square matrix. Then prove the following:
(a) A + A' is symmetric.
(b) A - A' is skew-symmetric.
(c) AA' and A'A are symmetric.

SAQ 1.17

Prove that the matrix (1/2) [[1 + i, -1 + i],[1 + i, 1 - i]] is unitary.

SUMMARY

An array of quantities a_ij, i = 1, 2, …, m, j = 1, …, n, subject to certain rules of operation (addition, multiplication, etc.) is called a matrix. A bare array of numbers or quantities does not by itself qualify as a matrix, and it is wrong to say that a matrix has a numerical value. We write a matrix as A = [a_ij]_{m×n} or [a_ij]_{(m,n)}. The basic arithmetic operations of addition, subtraction and multiplication of two matrices produce another matrix. In general, the familiar rules of multiplication of scalars do not hold for matrix multiplication.

KEY WORDS

Matrix, Unit matrix, Transpose of a matrix, Hermitian matrix, Orthogonal matrix, Unitary matrix

UNIT 01-02: THE DETERMINANT OF A SQUARE MATRIX


LEARNING OBJECTIVES

After successful completion of the unit, you will be able to
Explain the concept of a determinant
Evaluate the determinant of a square matrix

INTRODUCTION

In the previous unit the concept of a matrix was introduced and the various types of matrices were considered. Out of this stock of matrices we now select only the square matrices, i.e. the matrices whose rows and columns are equal in number. These give rise to the concept of the determinant of a matrix. As a matter of fact, the study of determinants can be carried out without matrices, and readers will have met determinants at school level; some basic familiarity with evaluating a determinant is assumed. The basic difference between a matrix and its determinant is that a matrix is an arrangement of entities, while its determinant is a single quantity assigned to that arrangement.

2.1 Some definitions

Determinant

Consider an n-square matrix A = [a_ij]. We denote the determinant of A by det A or |A| and write

|A| = | a_11  a_12  ⋯  a_1n |
      | a_21  a_22  ⋯  a_2n |
      |  ⋮     ⋮          ⋮  |
      | a_n1  a_n2  ⋯  a_nn |

Illustration. Consider the square matrix A = [[1, 2],[-2, 3]]. Then

|A| = | 1  2; -2  3 | = 1·3 - (2)(-2) = 7

Singular matrix

A square matrix A is said to be singular if |A| = 0.

A square matrix A is said to be nonsingular if | A | z 0 . Illustration ª1 2 0 º (i) A ««2 4 0»» , «¬7  6 2»¼ Ÿ

| A|

1 2

2 4

0 0

2(4  4) 0

7 6 2

A is singular. ª cos T sin T º 2 2 (ii) A « » , | A | cos T  sin T 1 z 0  T T sin cos ¬ ¼

Ÿ

A is nonsingular.

Minor

A determinant left after deleting equal number of rows and columns is called a minor determinant. If one row and one column is deleted, then the corresponding minor is called a first minor. Hereafter by minor we mean a first minor unless otherwise stated. Let M ij be the minor of aij in | A | i.e. M ij is the determinant formed by deleting ith row and jth column in | A | . Cofactor

The signed minor is called the cofactor of aij and we denote it by Aij Illustration

(i)

(ii)

36

A

ª1 2º «3 4» ¼ ¬

ª a11 «a ¬ 21

a12 º a22 »¼

(1)11 M 11

M 11

a22

4, A11

M 12

a21

3, A12 (1)12 M12 3

M 21

a12

2, A21

M 22

a11 1, A22

A

ª 1 2  1º «3 4 0» « » «¬ 2 6 1 »¼

(1)12 M 21

4

2

(1)22 M 22 1

(1)i  j M ij .

M 11

4 0 6 1

M 12

3 0 2 1

M 13

4,

A11

4

3,

A12

3

3 4 2 6

26,

A13

26

M 21

2 1 6 1

8,

A21

8

M 22

1 1 2 1

1, A22

1

M 23

1 2 2 6

10,

A23

10

M 31

2 1 4 0

4,

A31

4

M 32

1 1 3 0

3,

A32

3

M 33

1 2 3 4

2, A33

2

Problem 2.1

If f(x) = x² - 5x + 6, find f(A), where A = [[2, 0, 1],[2, 1, 3],[1, -1, 0]]. Is f(A) non-singular?

Solution. From the definition of f(x), we have

f(A) = A² - 5A + 6I,  where I is the 3 × 3 unit matrix

⇒ f(A) = [[2, 0, 1],[2, 1, 3],[1, -1, 0]]² - 5 [[2, 0, 1],[2, 1, 3],[1, -1, 0]] + 6 [[1, 0, 0],[0, 1, 0],[0, 0, 1]]
= [[5, -1, 2],[9, -2, 5],[0, -1, -2]] - [[10, 0, 5],[10, 5, 15],[5, -5, 0]] + [[6, 0, 0],[0, 6, 0],[0, 0, 6]]
= [[1, -1, -3],[-1, -1, -10],[-5, 4, 4]]    (a1)

⇒ |f(A)| = | 1  -1  -3; -1  -1  -10; -5  4  4 | = 36 - 54 + 27 = 9 ≠ 0

Hence the matrix f(A) is non-singular.

Problem 2.2

A matrix of second order is made from the elements 0 and 1. Find the number of such matrices whose determinants have non-negative values.

Solution. We have two elements, 0 and 1, from which the matrices are to be constructed. The matrices are of order 2, so each matrix has 4 elements, and the total number of possible matrices is 2⁴ = 16. Out of these 16 matrices there are only three,

[[1, 1],[1, 0]],  [[0, 1],[1, 0]],  [[0, 1],[1, 1]],

whose determinants

| 1 1; 1 0 | = (1)(0) - (1)(1) = -1,  | 0 1; 1 0 | = (0)(0) - (1)(1) = -1,  | 0 1; 1 1 | = (0)(1) - (1)(1) = -1

are negative. The remaining 13 matrices yield determinants with the non-negative values 0 or 1. Hence the required number is 13.

MCQ 2.1

Consider a square matrix of order three whose elements are 1 or -1. Then the minimum value of the determinant of the matrix is
(A) 0
(B) -2
(C) -4
(D) -6

MCQ 2.2

Let

A = [[a, b, ax + b],[b, c, bx + c],[ax + b, bx + c, 0]]

such that a > 0 and the discriminant of ax² + 2bx + c is negative. Then
(A) |A| < 0
(B) |A| = 0
(C) |A| ≥ 0
(D) |A| > 0

Hint. a > 0 and b² - ac < 0 ⇒ ax² + 2bx + c > 0 for all x ∈ R.

MCQ 2.3

Consider A = [[a², b²],[a + b, a + b]], where a, b ∈ R. If |A| = 0, then the line ax + by = 0 passes through the fixed point
(A) (1, 0)
(B) (0, 1)
(C) (1, 1)
(D) (1, -1)

SAQ 2.1

Let A = [[pa, qb, rc],[qc, ra, pb],[rb, pc, qa]] such that a + b + c = 0 = p + q + r. Show that |A| = 0.

SAQ 2.2

Given that |A| = 0, where A = [[1 + x, 1 + y, 1 + z],[1, 1 + 2y, 1 + z],[1, 1, 1 + 3z]] and x, y, z are nonzero real numbers, show that

1/x + 1/y + 1/z + 3 = 0.

2.2 Properties of determinants

P1. Interchange of any two rows (or columns) changes the sign of the determinant.

Illustration. A = [[1, -1, 0],[0, 3, 2],[1, 0, 1]], |A| = 1.

We denote by |A(R13)| the determinant obtained from |A| by interchanging the 1st and 3rd rows. Then

|A(R13)| = | 1  0  1; 0  3  2; 1  -1  0 | = -1 = -|A|

Notation. The interchange of the ith and jth rows is denoted by R_i ↔ R_j. Similarly, C_i ↔ C_j stands for the interchange of the ith and jth columns.

P2. If A has two identical rows (or columns), then |A| = 0.

Illustration. A = [[1, 2],[1, 2]], |A| = 0.

P3. Interchanging the rows and columns of A does not change the value of |A|, i.e. |A| = |A'|.

Illustration. Consider A = [[1, -1, 0],[0, 3, 2],[1, 0, 1]]. Then A' = [[1, 0, 1],[-1, 3, 0],[0, 2, 1]].

⇒ |A| = 1(3 - 0) + 1(0 - 2) + 0 = 3 - 2 = 1

and |A'| = 1(3 - 0) - 0 + 1(-2 - 0) = 3 - 2 = 1 = |A|

P4. If every element of a row (or column) of |A| is multiplied by a scalar k, then |A| is multiplied by k.

Illustration. Let A = [[2, 1],[-1, 3]].

⇒ |A| = 6 + 1 = 7

Multiplying the 2nd column of |A| by 3, the resulting determinant is

| 2  3; -1  9 | = 18 + 3 = 21 = 3|A|

Notation. kR_i means multiplying the ith row by the scalar k; kC_i means multiplying the ith column by k.

P5. If in A we add any multiple of one row (or column) to a different one, the determinant of the new matrix equals |A|.

Illustration. Let A = [[1, 2],[-2, 1]]. Then |A| = 1 + 4 = 5. Multiply the 2nd row of A by 2 and add it to the 1st row to get

B = [[1 - 4, 2 + 2],[-2, 1]] = [[-3, 4],[-2, 1]]

⇒ |B| = -3 + 8 = 5 = |A|.

Notation. R_i → mR_i + kR_j means the addition of k times the jth row to m times the ith row.

P6. If a matrix B is obtained from A by carrying its ith row (or column) over p rows (or columns), then |B| = (-1)^p |A|.

Illustration. A = [[4, 0, 0, 1],[2, 3, 1, 0],[1, 2, 0, 1],[0, 0, 1, 2]], |A| = -29. Carrying the 1st row over 3 rows gives

B = [[2, 3, 1, 0],[1, 2, 0, 1],[0, 0, 1, 2],[4, 0, 0, 1]]

Then |B| = (-1)³ |A| = 29. Verify yourself that |B| = 29.

P7. Let A = [a_ij] be a square matrix of order n. Then

Σ_{k=1}^{n} a_ik A_jk = δ_ij |A|,  where δ_ij = 1 if i = j and 0 if i ≠ j.    (2.1)

Let i = j = 1. Then (2.1) gives

Σ_{k=1}^{n} a_1k A_1k = δ_11 |A| = |A|,  ∵ δ_11 = 1

For i = 1, j = 2:

Σ_{k=1}^{n} a_1k A_2k = δ_12 |A| = 0,  ∵ δ_12 = 0

Illustration. For the matrix A = [[1, 2, -1],[3, 4, 0],[-2, 6, 1]], |A| = -28. Consider the elements of the first row and their cofactors:

Σ_{k=1}^{3} a_1k A_1k = a_11 A_11 + a_12 A_12 + a_13 A_13 = 1(4) + (2)(-3) + (-1)(26) = -28 = |A|

For the elements of the first row and the cofactors of the elements of the second row:

Σ_{k=1}^{3} a_1k A_2k = a_11 A_21 + a_12 A_22 + a_13 A_23 = 1(-8) + (2)(-1) + (-1)(-10) = 0

Similarly Σ_{k=1}^{3} a_2k A_2k = |A| and Σ_{k=1}^{3} a_2k A_3k = 0.

P8. If A and B are two n-square matrices, then

|AB| = |A| |B|    (2.2)

Illustration. A = [[3, 1],[2, 0]], B = [[-1, 2],[-1, -1]]. Then

AB = [[-3 - 1, 6 - 1],[-2 - 0, 4 - 0]] = [[-4, 5],[-2, 4]]

|A| = -2, |B| = 3, |AB| = -16 + 10 = -6.

Thus |AB| = |A| |B|.

Problem 2.3

If a matrix is skew-symmetric of odd order, then show that it is singular.

Solution. Let A be a skew-symmetric matrix of odd order 2n + 1, say. Then

A' = -A

⇒ |A'| = |-A| = (-1)^{2n+1} |A| = -|A|

But |A'| = |A|, by P3

⇒ |A| = -|A| ⇒ 2|A| = 0 ⇒ |A| = 0

⇒ the matrix A is singular.

Problem 2.4

For what values of x is the matrix [[3 - x, 2, 2],[2, 4 - x, 1],[-2, -4, -1 - x]] singular?

Solution. Let the given matrix be A.

⇒ |A| = | 3-x  2  2; 2  4-x  1; -2  -4  -1-x |

Applying R₃ → R₃ + R₂,

|A| = | 3-x  2  2; 2  4-x  1; 0  -x  -x | = -x(3 - x)²

Then |A| = 0 if x(3 - x)² = 0, i.e. x = 0, 3.

Hence A is singular if x = 0 or 3.

MCQ 2.4

Let x, y, z be in arithmetic progression. Then the determinant

| a² + a^{2n+1} + 2x   b² + b^{n+2} + 3y   c² + y |
| 2^n + x              2^{n+1} + y          2y    |
| a² + 2x              b² + 2 + 2y          c² + z |

is
(A) a² + b² + c² + 2^{n+1}(x + y + z)
(B) x + y + z + 2^n(a² + b² + c²)
(C) 2^n a²b²c² xyz
(D) 0

Hint. x + z = 2y; use R₁ → R₁ + R₃ - 2R₂.

MCQ 2.5

If a + b + c = 0, then one root of

| a - x   c       b     |
| c       b - x   a     |  = 0
| b       a       c - x |

is
(A) x = abc
(B) x = 0
(C) x = a + b + c
(D) x = a - b - c

SAQ 2.3

If a, b, c are nonzero real numbers, then show that

| a²b²  ab  a + b |
| b²c²  bc  b + c |  = 0
| c²a²  ca  c + a |

SAQ 2.4

Let f(x) = | x  2  2; 2  x  2; 2  2  x |. Show that f(x) = 0 and df(x)/dx = 0 have one common root.

2.3 The adjoint matrix

Let A = [a_ij] be an n-square matrix and let A_ij be the cofactor of a_ij. We define

adj A = [A_ij]' = [[A_11, A_21, …, A_n1],[A_12, A_22, …, A_n2],[…],[A_1n, A_2n, …, A_nn]]    (2.3)

Thus adj A is the transpose of the cofactor matrix.

Illustration. Consider A = [[1, 2, -1],[3, 4, 0],[-2, 6, 1]]. Denoting A = [a_ij], we have

A_11 = cofactor of a_11 = (-1)^{1+1} | 4  0; 6  1 | = 4
A_12 = cofactor of a_12 = (-1)^{1+2} | 3  0; -2  1 | = -3

Similarly writing the cofactors of the remaining elements of A, we have

adj A = [[4, -3, 26],[-8, -1, -10],[4, -3, -2]]' = [[4, -8, 4],[-3, -1, -3],[26, -10, -2]]

Remark. For a 2 × 2 matrix A = [[a, b],[c, d]], we have

adj A = [[d, -b],[-c, a]]    (2.4)

(2.5) 45

Proof. From definition, we write ª a11 «a A ˜ adj A « 21 «  « ¬ an1

a12  a1n º ª A11 a22  a2 n »» «« A12 ˜ » «  » « an 2  ann ¼ ¬ A1n

ª a11 A11    a1n A1n «a A    a A 2 n 1n « 21 11 «  « ¬ an1 A11    ann A1n

A21  An1 º A22  An 2 »» » » A2 n  Ann ¼

a11 A21    a1n A2n  a11 An1    a1n Ann º a21 A21    a2n A2n  a21 An1    a2n Ann »» » » an1 A21    ann A2 n  an1 An1    ann Ann ¼

ª | A| 0  0 º « 0 | A|  0 » », by (2.1) « » «  » « 0  | A| ¼ ¬ 0 = diag ( | A |, | A | ,  , | A | ) ª1 0  0 º «0 1  0 » » | A| « » « » « ¬0 0  1 ¼ | A | In Similarly we can show that (adj A) ˜ A | A | I n . Then follows (2.5).

Problem 2.5

For the matrices A = [[0, 2],[-2, 0]] and B = [[0, 1, 2],[-1, 0, -4],[-2, 4, 0]], find adj A and adj B.

Solution. The adjoint of a 2 × 2 matrix can be written immediately by (2.4):

adj A = [[0, -2],[2, 0]]

It is skew-symmetric. Computing the cofactors of B and transposing,

adj B = [[16, 8, -4],[8, 4, -2],[-4, -2, 1]]

It is symmetric. Thus A is skew-symmetric of even order and adj A is skew-symmetric; B is skew-symmetric of odd order and adj B is symmetric.

Problem 2.6

For the matrix A = [[1, 2, -1],[3, 4, 0],[-2, 6, 1]], find A·adj A.

Solution.

A·adj A = [[1, 2, -1],[3, 4, 0],[-2, 6, 1]] [[4, -8, 4],[-3, -1, -3],[26, -10, -2]]
= [[4 - 6 - 26, -8 - 2 + 10, 4 - 6 + 2],[12 - 12, -24 - 4 + 0, 12 - 12],[-8 - 18 + 26, 16 - 6 - 10, -8 - 18 - 2]]
= [[-28, 0, 0],[0, -28, 0],[0, 0, -28]] = -28 I₃

Theorem 2.2

If A is a non-singular matrix of order n, then

|adj A| = |A|^{n-1}.    (2.6)

Proof. By Theorem 2.1 we have A·adj A = |A| I_n

⇒ |A·adj A| = | |A| I_n | = |A|^n |I_n|

or |A| · |adj A| = |A|^n,  ∵ |I_n| = 1

or |adj A| = |A|^{n-1},  ∵ A is non-singular ⇒ |A| ≠ 0.    QED

Problem 2.7

For a non-singular n-square matrix A, show that adj(adj A) = |A|^{n-2} A.

Solution. We know that A·adj A = |A| I_n. Replacing A by adj A, we get

(adj A)·adj(adj A) = |adj A| I_n = |A|^{n-1} I_n, by (2.6)

⇒ A·[(adj A)·adj(adj A)] = A·(|A|^{n-1} I_n), premultiplying by A

or (A·adj A)·adj(adj A) = |A|^{n-1} (A·I_n), by associativity

or |A| I_n · adj(adj A) = |A|^{n-1} A, by (2.5)

or |A| adj(adj A) = |A|^{n-1} A

or adj(adj A) = |A|^{n-2} A,  ∵ |A| ≠ 0.    QED

Problem 2.8

Prove that if A is symmetric, then adj A is symmetric.

Solution. Let A = [a_ij] be an n-square symmetric matrix, i.e. a_ij = a_ji for all i, j. Now

A_ij = cofactor of a_ij = cofactor of a_ji = A_ji,  ∵ a_ij = a_ji

⇒ the matrix [A_ij] is symmetric, hence the matrix [A_ij]' is symmetric

⇒ the matrix adj A is symmetric.

MCQ 2.6

Let A be a skew-symmetric matrix of order n. Then
(A) adj A is symmetric if n is even
(B) adj A is skew-symmetric if n is even
(C) adj A is symmetric if n is odd
(D) adj A is skew-symmetric if n is odd

MCQ 2.7

Let adj(adj A) = A, where A = [[1, 2, 1],[a, 0, 4],[1, 1, 1]]. Then
(A) a = 0
(B) a = 1
(C) a = 2
(D) a = 3

MCQ 2.8

Let A be a square matrix of order n such that |adj(adj A)| = |A|⁴. Then
(A) n = 5
(B) n = 4
(C) n = 3
(D) n = 2

MCQ 2.9

Let A be a square matrix of order 4 such that |A| = 3. Then the value of |adj{adj(adj A)}| is
(A) 3⁹
(B) 3²⁷
(C) 2²⁷
(D) 2⁹

SAQ 2.5

Let A = [a_ij] be an n-square matrix and let A_ij be the cofactor of a_ij in A. Then show that

|a_ij + x| = |A| + x Σ_{i,j=1}^{n} A_ij.

SAQ 2.6

Find adj A when A = [[2, 1, 3],[2, -3, 11],[2, 1, -5]]. What is adj(adj A)?

SAQ 2.7

Show that the adjoint of a scalar matrix is a scalar matrix.

SAQ 2.8

Show that the matrix A = [[-4, -3, -3],[1, 0, 1],[4, 4, 3]] is its own adjoint.

SUMMARY

The concepts of the determinant of a square matrix and of the adjoint matrix are introduced, and the various properties of a determinant are explained. Illustrations and solved examples are given to aid understanding of these concepts.

KEY WORDS

Determinant of a square matrix, Minor, Cofactor, Adjoint matrix

UNIT 01-03: THE INVERSE OF A MATRIX


LEARNING OBJECTIVES

After successful completion of the unit, you will be able to
Explain the concept of the inverse of a square matrix
Evaluate the inverse by various methods

INTRODUCTION

The concept of the reciprocal of a nonzero number is taught at primary level. For example, to find the reciprocal x of 7 we solve

x·7 = 7·x = 1    (3.1)

and get x = 1/7 or 7⁻¹. To be specific, we can say that 7⁻¹ is the inverse of 7 under multiplication. Similarly we can define the inverse of 7 with respect to addition:

7 + x = x + 7 = 0

Solving this we say that -7 is the inverse of 7 under addition. We extend the concept of the multiplicative inverse of a nonzero number, depicted in (3.1), to a non-singular matrix: the 1 on the right side of (3.1) is replaced by the identity matrix I. The definition of the inverse of a matrix is given in the next section.

3.1 Inverse matrix defined

Let A and B be two square matrices such that

AB = BA = I

Then B is called the inverse of A and is expressed as B = A⁻¹. Similarly A is called the inverse of B and is written as A = B⁻¹.

Another definition. Let A be any non-singular matrix. Then

A⁻¹ = adj A / |A|    (3.2)

Illustration
(i) Let

A = [  cos θ  sin θ ]      B = [ cos θ  −sin θ ]
    [ −sin θ  cos θ ]          [ sin θ   cos θ ]

Then

AB = [ cos²θ + sin²θ                  −cos θ sin θ + sin θ cos θ ]   [ 1  0 ]
     [ −sin θ cos θ + cos θ sin θ     sin²θ + cos²θ              ] = [ 0  1 ] = I

Also BA = I. Hence AB = BA = I ⇒ A⁻¹ = B, B⁻¹ = A.

(ii) Let

A = [  2  1 ] ,  |A| = 10 + 3 = 13 ≠ 0.
    [ −3  5 ]

Hence A is non-singular and A⁻¹ exists. Now

adj A = [ 5  −1 ]
        [ 3   2 ]

Then

A⁻¹ = adj A / |A| = (1/13) [ 5  −1 ]
                           [ 3   2 ]

Remark. When A⁻¹ exists, we say that A is invertible. Non-square matrices are not invertible.

Theorem 3.1 Every invertible matrix has a unique inverse.

Proof. Let A be an invertible matrix. If possible, suppose that it has two distinct inverses, say B and C. Then

AB = BA = I and AC = CA = I

Now AB = BA ⇒ C(AB) = C(BA) (this is possible because A, B, C are square matrices of the same order)
⇒ (CA)B = CI, by associativity and BA = I
⇒ IB = C, i.e. B = C

Hence the inverse of A is unique.  QED

Theorem 3.2 The necessary and sufficient condition for a square matrix A to possess an inverse is that A is non-singular.

Proof. Necessary condition. Let A be an n-square matrix and let B be its inverse. It is obvious that B is an n-square matrix. We have

AB = I ⇒ |AB| = |I|, i.e. |A| |B| = 1
⇒ |A| ≠ 0, i.e. A is non-singular

Sufficient condition. Let A be a non-singular matrix. Hence |A| ≠ 0. Define

C = adj A / |A|

Now

CA = (adj A / |A|) · A = (1/|A|)((adj A) A) = (1/|A|)(|A| I) = I

Also one can show that AC = I. Thus AC = CA = I. Then by definition, A is invertible and C = A⁻¹.  QED

Remark. The above theorem implies that a singular matrix does not possess an inverse, i.e. singular matrices are not invertible. Some authors opine that every non-square matrix is singular.

Theorem 3.3

If A and B are n-square invertible matrices, then

adj (AB) = (adj B)(adj A).    (3.3)

Proof. Let A and B be n-square invertible matrices. Then AB is invertible and (AB)⁻¹ exists. Now

(AB) · (adj B · adj A) = A · (B · adj B) · adj A, ∵ the matrix product is associative
  = A · (|B| Iₙ) · adj A, by Thm 2.1 of Unit 2
  = |B| (A · Iₙ) · adj A = |B| (A · adj A), ∵ A · Iₙ = A
  = |B| |A| Iₙ = |A| |B| Iₙ = |AB| Iₙ, ∵ |AB| = |A| |B|
  = (AB) · (adj (AB)), by Thm 2.1

Premultiplying by (AB)⁻¹, we obtain

(AB)⁻¹(AB)(adj B · adj A) = (AB)⁻¹(AB) adj (AB), by associativity
⇒ Iₙ (adj B · adj A) = Iₙ adj (AB)
⇒ adj B · adj A = adj (AB)  QED

Inverse of a diagonal matrix

Let A = [a_ij] be an n-square diagonal matrix with |A| ≠ 0. Then its inverse exists. In this case we write

A = diag [a₁₁, a₂₂, …, aₙₙ] ⇒ |A| = a₁₁ a₂₂ … aₙₙ ≠ 0 ⇒ aᵢᵢ ≠ 0 ∀ i = 1, 2, …, n

Then

A₁₁ (the cofactor of a₁₁ in A) = a₂₂ a₃₃ … aₙₙ ⇒ A₁₁ / |A| = 1/a₁₁

Similarly we can show that Aᵢᵢ / |A| = 1/aᵢᵢ, i = 1, 2, …, n. By definition,

A⁻¹ = adj A / |A| = diag [A₁₁/|A|, A₂₂/|A|, …, Aₙₙ/|A|] = diag [1/a₁₁, 1/a₂₂, …, 1/aₙₙ]    (3.4)

Illustration. For A = diag [3, 2, 1], A⁻¹ = diag [1/3, 1/2, 1].
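Formula (3.4) translates directly into a few lines of code. A minimal sketch (the function name `diag_inverse` is ours) using exact rational arithmetic:

```python
from fractions import Fraction

def diag_inverse(d):
    # By (3.4): the inverse of diag[a11, ..., ann] is diag[1/a11, ..., 1/ann],
    # defined only when every diagonal entry is nonzero.
    if any(x == 0 for x in d):
        raise ValueError("singular diagonal matrix has no inverse")
    return [Fraction(1) / x for x in d]

d_inv = diag_inverse([Fraction(3), Fraction(2), Fraction(1)])
```

For the illustration above this returns the diagonal 1/3, 1/2, 1.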

Problem 3.1 Show that the matrix

A = [ 1+i   2   −2 ]
    [  0   i−2  −1 ]
    [  i    1    0 ]

is non-singular. Determine the adjoint of the matrix A and its inverse.

Solution. Adding the 3rd column to the 2nd column and expanding the determinant, we get

|A| = −1 − 3i ≠ 0

⇒ the matrix A is non-singular and A⁻¹ exists. Computing the cofactors and transposing,

adj A = [  1     −2    2(1+i) ]
        [  i     2i     1+i   ]
        [ 1−2i  −1−i   1−3i   ]

Then

A⁻¹ = adj A / |A| = (1/(−1−3i)) [  1     −2    2(1+i) ]
                                [  i     2i     1+i   ]
                                [ 1−2i  −1−i   1−3i   ]

Problem 3.2 Show that if A is non-singular, then AB = AC ⇒ B = C.

Solution. Let A be any non-singular matrix of order n. For AB and AC to be defined, suppose that B and C are n × m matrices. Since A is non-singular, A⁻¹ exists. Then

AB = AC ⇒ A⁻¹(AB) = A⁻¹(AC) ⇒ (A⁻¹A)B = (A⁻¹A)C ⇒ IB = IC ⇒ B = C

Problem 3.3 Let A and B be two n-square non-zero matrices such that AB = 0. Then prove that both the matrices are singular.

Solution. We apply the method of contradiction. Suppose that A is non-singular, i.e. |A| ≠ 0. Then A⁻¹ exists. Premultiplying AB = 0 by A⁻¹, we get

A⁻¹(AB) = A⁻¹ 0 ⇒ (A⁻¹A)B = 0 ⇒ IB = 0, i.e. B = 0

Thus B is a zero matrix, which contradicts the fact that B is non-zero. Hence the supposition that A is non-singular is wrong. Therefore A is a singular matrix. Similarly one can show that B is singular.

Problem 3.4

Show that if the non-singular matrices A and B commute, so also do A⁻¹ and B.

Solution. Let A and B be non-singular matrices. Hence A⁻¹ and B⁻¹ exist. Also AB = BA implies that A and B are square matrices of the same order. Now

AB = BA ⇒ A⁻¹(AB)A⁻¹ = A⁻¹(BA)A⁻¹
⇒ (A⁻¹A)(BA⁻¹) = (A⁻¹B)(AA⁻¹)
⇒ IBA⁻¹ = A⁻¹BI, i.e. BA⁻¹ = A⁻¹B

Hence A⁻¹ and B commute.

MCQ 3.1

If

A = [ 0  1  2 ]      and      A⁻¹ = (1/2) [  1  −1   1 ]
    [ 1  2  3 ]                           [ −8   6  2β ]
    [ 3  α  1 ]                           [  5  −3   1 ]

then
(A) The point (α, β) lies on the straight line 2x + y − 1 = 0.
(B) The point (α, β) lies on the straight line 2x − y − 1 = 0.
(C) The point (α, β) lies on the straight line 2x − y + 1 = 0.
(D) The point (α, β) lies on the straight line x − 2y − 1 = 0.

MCQ 3.2

Consider the matrix

A = [ 6  0  −1 ]
    [ 0  3   1 ]
    [ 2  1   0 ]

Choose the correct statement from the following:
(A) A + A⁻¹ + AA⁻¹ is defined
(B) A + A⁻¹ + AA⁻¹ is not defined
(C) A − A⁻¹ + AA⁻¹ is defined
(D) A + A⁻¹ − AA⁻¹ is defined

MCQ 3.3

Let A = diag (a, b), a ≠ 0, be such that A² − aI = 0. Then

(A) A⁻¹ = [ 1  0 ]    (B) A⁻¹ = [ 1  0 ]    (C) A⁻¹ = [ 0  0 ]    (D) A⁻¹ = [ 0  0 ]
          [ 0 −1 ]              [ 0 −2 ]              [ 0 −1 ]              [ 0 −2 ]

SAQ 3.1 Show that if the non-singular matrix A is symmetric, then A⁻¹ is symmetric.
Hint. A is symmetric ⇔ A = A′.

SAQ 3.2 Find the values of k ∈ R for which the matrix

[  1  2  k ]
[  0  4  0 ]
[ −k  3  1 ]

is invertible.

SAQ 3.3 Find A⁻¹ if

A = [  0  2   1   3 ]
    [  1  1  −1  −2 ]
    [  1  2   0   1 ]
    [ −1  1   2   6 ]

and verify your answer.

3.4 Inverse of the product of matrices

Let A and B be two non-singular matrices of order n. The product AB is defined and is also non-singular. Then its inverse (AB)⁻¹ exists.

Theorem 3.4 If A and B are two non-singular matrices of order n, then

(AB)⁻¹ = B⁻¹A⁻¹.    (3.5)

Proof. Let A and B be non-singular matrices of order n. Then their product AB is also non-singular and is of order n. Therefore (AB)⁻¹ exists. Further, B⁻¹A⁻¹ is also defined. Now

(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹, by associativity = AIA⁻¹ = I

and

(B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = I

⇒ (AB)(B⁻¹A⁻¹) = (B⁻¹A⁻¹)(AB) = I
⇒ B⁻¹A⁻¹ = (AB)⁻¹, ∵ XY = YX = I ⇒ Y = X⁻¹  QED

Problem 3.5 Verify Theorem 3.4 for the matrices

A = [  1  0 ]      and      B = [  2  −1 ]
    [ −1  2 ]                   [ −1   1 ]

Solution. Here |A| = 2 ≠ 0, |B| = 1 ≠ 0, i.e. A, B are non-singular. Hence A⁻¹ and B⁻¹ exist. Then

A⁻¹ = (1/2) [ 2  0 ] ,   B⁻¹ = [ 1  1 ]   and   AB = [  2  −1 ]
            [ 1  1 ]           [ 1  2 ]              [ −4   3 ]

and

(AB)⁻¹ = (1/2) [ 3  1 ] ,   B⁻¹A⁻¹ = (1/2) [ 3  1 ]
               [ 4  2 ]                    [ 4  2 ]

⇒ (AB)⁻¹ = B⁻¹A⁻¹

Hence Theorem 3.4, i.e. (3.5), is verified.
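The same verification can be scripted. A minimal sketch (helper names `inv2` and `mul2` are ours) checking (AB)⁻¹ = B⁻¹A⁻¹ for the matrices of Problem 3.5 with exact fractions:

```python
from fractions import Fraction

def inv2(M):
    # inverse of a 2x2 matrix via adj A / |A|
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix must be non-singular"
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [-1, 2]]
B = [[2, -1], [-1, 1]]
lhs = inv2(mul2(A, B))            # (AB)^-1
rhs = mul2(inv2(B), inv2(A))      # B^-1 A^-1
```

Both sides come out to (1/2)[3 1; 4 2], confirming (3.5) for this pair.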

Problem 3.6 If the non-singular symmetric matrices A and B commute, then show that A⁻¹B⁻¹ is symmetric and that A⁻¹, B⁻¹ commute.

Solution. Let A and B be non-singular symmetric matrices. Then by SAQ 3.1, the matrices A⁻¹ and B⁻¹ are symmetric, i.e. (A⁻¹)′ = A⁻¹ and (B⁻¹)′ = B⁻¹. Now

(A⁻¹B⁻¹)′ = (B⁻¹)′(A⁻¹)′ = B⁻¹A⁻¹ = (AB)⁻¹, by (3.5)
  = (BA)⁻¹, ∵ AB = BA
  = A⁻¹B⁻¹, by (3.5)

⇒ A⁻¹B⁻¹ is symmetric.

Also

(A⁻¹B⁻¹)(AB) = A⁻¹B⁻¹(BA) = A⁻¹(B⁻¹B)A = A⁻¹IA = I
and
(B⁻¹A⁻¹)(BA) = B⁻¹A⁻¹(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = I

⇒ A⁻¹B⁻¹ = B⁻¹A⁻¹, ∵ AB is non-singular
⇒ A⁻¹, B⁻¹ commute.

Theorem 3.5 If A is a non-singular matrix, then

(A′)⁻¹ = (A⁻¹)′.    (3.6)

Proof. We have AA⁻¹ = A⁻¹A = I. Taking the transpose,

(AA⁻¹)′ = (A⁻¹A)′ = I′ ⇒ (A⁻¹)′A′ = A′(A⁻¹)′ = I, ∵ (AB)′ = B′A′ and I′ = I

⇒ (A′)⁻¹ = (A⁻¹)′.  QED

SAQ 3.4 Confirm the result (3.6) for

B = [  2  −1 ]
    [ −1   1 ]

Problem 3.7 If a matrix A satisfies the relation A² + A − I = 0, prove that A⁻¹ exists and A⁻¹ = I + A, I being an identity matrix.

Solution. Let

A² + A − I = 0, i.e. A² + A = I ⇒ A(A + I) = I    (a1)

⇒ |A(A + I)| = |I| ⇒ |A| |A + I| = 1, i.e. |A| ≠ 0

Hence A is non-singular and A⁻¹ exists. Premultiplying (a1) by A⁻¹, we get

A⁻¹(A(A + I)) = A⁻¹I ⇒ (A⁻¹A)(A + I) = A⁻¹ ⇒ A + I = A⁻¹, i.e. A⁻¹ = I + A

Problem 3.8 If

A = [ 1  2  2 ]
    [ 2  1  2 ]
    [ 2  2  1 ]

show that A² − 4A − 5I = 0, where I, 0 are the unit matrix and the null matrix of order 3 respectively. Use this result to find A⁻¹.

Solution. We have

A² = AA = [ 1  2  2 ][ 1  2  2 ]   [ 9  8  8 ]
          [ 2  1  2 ][ 2  1  2 ] = [ 8  9  8 ]
          [ 2  2  1 ][ 2  2  1 ]   [ 8  8  9 ]

⇒ A² − 4A − 5I = [ 9−4−5  8−8−0  8−8−0 ]   [ 0  0  0 ]
                 [ 8−8−0  9−4−5  8−8−0 ] = [ 0  0  0 ] = 0
                 [ 8−8−0  8−8−0  9−4−5 ]   [ 0  0  0 ]

Multiplying A² − 4A − 5I = 0 by A⁻¹,

(A⁻¹A)A − 4(A⁻¹A) − 5A⁻¹I = A⁻¹0 = 0
⇒ A − 4I − 5A⁻¹ = 0 ⇒ 5A⁻¹ = A − 4I

⇒ A⁻¹ = (1/5)(A − 4I) = (1/5) [ 1−4  2−0  2−0 ]   (1/5) [ −3   2   2 ]
                              [ 2−0  1−4  2−0 ] =       [  2  −3   2 ]
                              [ 2−0  2−0  1−4 ]         [  2   2  −3 ]
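The matrix-polynomial route to the inverse in Problem 3.8 is easy to check in code. A sketch (helper name `mat_mul` is ours) that verifies A² − 4A − 5I = 0 and that (A − 4I)/5 really inverts A:

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 2], [2, 1, 2], [2, 2, 1]]
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# A^2 - 4A - 5I should be the zero matrix
A2 = mat_mul(A, A)
residue = [[A2[i][j] - 4 * A[i][j] - 5 * I[i][j] for j in range(3)]
           for i in range(3)]

# then A^-1 = (A - 4I)/5
A_inv = [[Fraction(A[i][j] - 4 * I[i][j], 5) for j in range(3)] for i in range(3)]
product = mat_mul(A, A_inv)
```

`residue` is the zero matrix and `product` is the identity, as the derivation predicts.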

MCQ 3.4 If

A = [ 3  −3  4 ]
    [ 2  −3  4 ]
    [ 0  −1  1 ]

then
(A) A⁻¹ = A
(B) A⁻¹ = A²
(C) A⁻¹ = A + A²
(D) A⁻¹ = A³

MCQ 3.5 Let

A(θ) = [ cos θ  −sin θ  0 ]
       [ sin θ   cos θ  0 ]
       [  0       0     1 ]

Consider the two statements:
(a) (A + A⁻¹)(π) = 0, the zero matrix of order 3
(b) (A + A⁻¹)(π/2) = diag (0, 0, 2)

Choose the correct statement(s) from the following:
(A) Only (a) is true.
(B) Both the statements are true.
(C) Only (b) is true.
(D) Both the statements are false.

MCQ 3.6

Let the non-singular symmetric matrices A and B commute. Then
(A) AB⁻¹ is symmetric but A⁻¹B is not symmetric
(B) AB⁻¹ is not symmetric but A⁻¹B is symmetric
(C) Both AB⁻¹ and A⁻¹B are symmetric
(D) Neither AB⁻¹ nor A⁻¹B is symmetric

SAQ 3.5

Show that the matrix

A = [ 1  3  7 ]
    [ 4  2  3 ]
    [ 1  2  1 ]

satisfies the equation A³ − 4A² − 20A − 35I = 0. Hence find A⁻¹.

SAQ 3.6 Show that if I + A is non-singular, then (I + A)⁻¹ and (I − A) commute.

3.5 Inverse of orthogonal matrices

By definition, a square matrix A is orthogonal if AA′ = I. This means A′ = A⁻¹:

A is orthogonal ⇔ A⁻¹ = A′.    (3.7)

As a matter of fact, (3.7) can be used as the definition of an orthogonal matrix. Thus, for an orthogonal matrix A, finding the inverse A⁻¹ amounts to finding its transpose A′.

Problem 3.9

Show that

A = (1/3) [ −2   1  2 ]
          [  2   2  1 ]
          [  1  −2  2 ]

is orthogonal, and if so find A⁻¹.

Solution. We have

AA′ = (1/9) [ −2   1  2 ][ −2  2   1 ]   (1/9) [ 9  0  0 ]   [ 1  0  0 ]
            [  2   2  1 ][  1  2  −2 ] =       [ 0  9  0 ] = [ 0  1  0 ] = I    (a1)
            [  1  −2  2 ][  2  1   2 ]         [ 0  0  9 ]   [ 0  0  1 ]

⇒ A is orthogonal. Also, by the definition of inverse, (a1) ⇒

A⁻¹ = A′ = (1/3) [ −2  2   1 ]
                 [  1  2  −2 ]
                 [  2  1   2 ]
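Property (3.7) is cheap to verify mechanically: multiply A by its transpose and compare with I. A sketch (helper name `mat_mul` is ours) for the matrix of Problem 3.9, using exact thirds:

```python
from fractions import Fraction

F = Fraction
A = [[F(-2, 3), F(1, 3),  F(2, 3)],
     [F(2, 3),  F(2, 3),  F(1, 3)],
     [F(1, 3),  F(-2, 3), F(2, 3)]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

At = [[A[j][i] for j in range(3)] for i in range(3)]   # transpose A'
product = mat_mul(A, At)                                # should be I
```

Since `product` is the identity, A is orthogonal and, by (3.7), `At` is exactly A⁻¹.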

Problem 3.9 If

A = (1/3) [ 1   2  a ]
          [ 2   1  b ]
          [ 2  −2  c ]

is orthogonal, find a, b and c. Also find the inverse of A.

Solution. Let the given matrix A be orthogonal. Then AA′ = I

⇒ (1/9) [ 1   2  a ][ 1   2  2 ]   [ 1  0  0 ]
        [ 2   1  b ][ 2   1 −2 ] = [ 0  1  0 ]
        [ 2  −2  c ][ a   b  c ]   [ 0  0  1 ]

⇒ (1/9) [ 5+a²    4+ab   −2+ac ]   [ 1  0  0 ]
        [ 4+ab    5+b²    2+bc ] = [ 0  1  0 ]
        [ −2+ac   2+bc    8+c² ]   [ 0  0  1 ]

By the equality of matrices, we have

5 + a² = 9, 5 + b² = 9, 8 + c² = 9    (a1)
4 + ab = 0, −2 + ac = 0, 2 + bc = 0    (a2)

and (a1) ⇒ a² = 4, b² = 4, c² = 1, i.e. a = ±2, b = ±2, c = ±1.

In view of (a2), the above gives two sets of values:

a = 2, b = −2, c = 1 and a = −2, b = 2, c = −1    (a3)

Since A is orthogonal,

A⁻¹ = A′ = (1/3) [ 1   2   2 ]
                 [ 2   1  −2 ]
                 [ a   b   c ]

⇒ A⁻¹ = (1/3) [ 1   2   2 ]                        A⁻¹ = (1/3) [  1  2   2 ]
              [ 2   1  −2 ]   for a = 2, b = −2,   or          [  2  1  −2 ]   for a = −2, b = 2,
              [ 2  −2   1 ]   c = 1                            [ −2  2  −1 ]   c = −1

MCQ 3.7 Let A be an orthogonal matrix. Then
(A) A⁻¹AA⁻¹ = A
(B) A⁻¹A²A⁻¹ = A′
(C) A⁻¹A³A⁻¹ = A′
(D) A⁻¹A⁴A⁻¹ = A²

SAQ 3.6 Prove that the inverse of an orthogonal matrix is orthogonal and that its transpose is also orthogonal.

3.6 Evaluation of the inverse of a matrix

The methods which determine the inverse of a non-singular square matrix include (i) inverse by the adjoint method, (ii) inverse by partitioning of a matrix, (iii) inverse by the Gauss-Jordan method, (iv) inverse by the Cayley-Hamilton theorem. We now demonstrate the working of these methods to compute the inverse of a non-singular matrix.

Inverse by adjoint method

In this method, verify that the given matrix A is non-singular and then find adj A; the inverse follows from the definition

A⁻¹ = adj A / |A|.

Problem 3.10 Find the adjoint and then the inverse of

A = [ 1  −3  2 ]
    [ 2   0  0 ]
    [ 1   4  1 ]

Solution. Let the given matrix be A. Since the adjoint matrix is the transpose of the cofactor matrix, we have

adj A = [  0  −2   8 ]′   [  0  11  0 ]
        [ 11  −1  −7 ]  = [ −2  −1  4 ]
        [  0   4   6 ]    [  8  −7  6 ]

Expanding |A| along the second row,

|A| = −2 · det [ −3  2 ] = −2(−3 − 8) = 22 ≠ 0
               [  4  1 ]

⇒ A is non-singular and A⁻¹ exists. By definition, we have

A⁻¹ = adj A / |A| = (1/22) [  0  11  0 ]
                           [ −2  −1  4 ]
                           [  8  −7  6 ]

Problem 3.11 If

A = [ 2  1  −1 ]
    [ 0  2   1 ]
    [ 2  2   0 ]

verify that A (adj A) = |A| I. Hence find the inverse of adj A.

Solution. Expanding along the first row,

|A| = 2(0 − 2) − 1(0 − 2) − 1(0 − 4) = 2

The cofactor matrix is

[ −2   2  −4 ]           [ −2  −2   3 ]
[ −2   2  −2 ]  ⇒ adj A = [  2   2  −2 ]
[  3  −2   4 ]           [ −4  −2   4 ]

Now

A (adj A) = [ 2  1  −1 ][ −2  −2   3 ]   [ 2  0  0 ]
            [ 0  2   1 ][  2   2  −2 ] = [ 0  2  0 ] = 2I = |A| I
            [ 2  2   0 ][ −4  −2   4 ]   [ 0  0  2 ]

Also A (adj A) = |A| I ⇒ (A/|A|)(adj A) = I, so

(adj A)⁻¹ = A/|A| = (1/2)A = [ 1  1/2  −1/2 ]
                             [ 0   1    1/2 ]
                             [ 1   1     0  ]

Problem 3.12 If

N = [   0      1+2i ]
    [ −1+2i     0   ]

show that (I − N)(I + N)⁻¹ is a unitary matrix.
Hint. A square matrix A is unitary ⇔ A⁻¹ equals its conjugate transpose.

Solution. We have

I − N = [   1     −1−2i ]      and      I + N = [   1      1+2i ]
        [ 1−2i      1   ]                       [ −1+2i     1   ]

|I + N| = 1 − (1+2i)(−1+2i) = 1 − (−1 − 4) = 6

⇒ (I + N)⁻¹ = (1/6) [   1     −1−2i ]
                    [ 1−2i      1   ]

Denote A = (I − N)(I + N)⁻¹. Then

A = (1/6) [   1     −1−2i ][   1     −1−2i ]   [   −2/3      −(1+2i)/3 ]
          [ 1−2i      1   ][ 1−2i      1   ] = [ (1−2i)/3      −2/3    ]    (a1)

and |A| = 4/9 + (1+2i)(1−2i)/9 = 4/9 + 5/9 = 1, so

A⁻¹ = adj A / |A| = [   −2/3      (1+2i)/3 ]    (a2)
                    [ −(1−2i)/3    −2/3    ]

The conjugate transpose of A in (a1) is

[   −2/3      (1+2i)/3 ]
[ −(1−2i)/3    −2/3    ]

and (a1) and (a2) show this equals A⁻¹. Then by definition, A = (I − N)(I + N)⁻¹ is unitary.
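Problem 3.12 is an instance of the Cayley transform: N is skew-Hermitian, so (I − N)(I + N)⁻¹ is unitary. A numerical sketch (helper name `mul` is ours) of the check U · U* ′ = I:

```python
def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1 + 2j], [-1 + 2j, 0]]
I2 = [[1, 0], [0, 1]]
IpN = [[I2[i][j] + N[i][j] for j in range(2)] for i in range(2)]
ImN = [[I2[i][j] - N[i][j] for j in range(2)] for i in range(2)]

# 2x2 inverse of I + N via adj / det
d = IpN[0][0] * IpN[1][1] - IpN[0][1] * IpN[1][0]
IpN_inv = [[IpN[1][1] / d, -IpN[0][1] / d],
           [-IpN[1][0] / d, IpN[0][0] / d]]

U = mul(ImN, IpN_inv)
Uh = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]  # conjugate transpose
P = mul(U, Uh)   # should be (numerically) the identity
```

The entries of `P` match I₂ to floating-point precision, confirming that U is unitary.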

MCQ 3.8 Let a, b, c, d be real numbers such that a² + b² + c² + d² = 1, and let

A = [  a+ib   c+id ]      with      A⁻¹ = [ p  q ]
    [ −c+id   a−ib ]                      [ r  s ]

Then
(A) ps − qr is a complex number
(B) ps − qr = −1
(C) ps − qr = 1
(D) ps − qr = 0

SAQ 3.7 Prove that

[ cos θ  −sin θ ]   [    1        −tan(θ/2) ] [     1        tan(θ/2) ]⁻¹
[ sin θ   cos θ ] = [ tan(θ/2)       1      ] [ −tan(θ/2)       1     ]

SAQ 3.8 Express the inverse of

a [ 1  0 ] + b [ 0  −1 ]
  [ 0  1 ]     [ 1   0 ]

in the same form, when a and b are real numbers.

3.7 Inverse by matrix partitioning method

One can easily find the inverse of a non-singular matrix of order one or two. This simplicity is not retained by matrices of higher order. An idea comes to mind: why not write a higher-order matrix in terms of smaller-order matrices? It is always possible to do so, and this is the basic idea behind the partitioning of a matrix. We divide a matrix into sub-blocks forming smaller-order matrices by inserting lines parallel to its rows and columns. We do not go into the details of the partitioning of matrices but confine ourselves to our purpose of using it for finding the inverse of a matrix. We start with a 3-square non-singular matrix A and demonstrate the process of partitioning. Consider the non-singular matrix

A = [ 3  6   1 ]
    [ 2  4  −2 ]
    [ 0  3   1 ]

It can be partitioned in many ways. Some of them are as follows:

P1: partition by a single horizontal or a single vertical line into two blocks, e.g.

[ 3  6   1 ]        [ 3  6 |  1 ]
[ 2  4  −2 ]   ,    [ 2  4 | −2 ]
[ -------- ]        [ 0  3 |  1 ]
[ 0  3   1 ]

P2: A = [ A11  A12 ]  where  A11 = [ 3  6 ] , A12 = [  1 ] , A21 = [ 0  3 ] , A22 = [ 1 ]
        [ A21  A22 ]               [ 2  4 ]         [ −2 ]

P3: A = [ A11  A12 ]  where  A11 = [ 3 ] , A12 = [ 6   1 ] , A21 = [ 0 ] , A22 = [ 3  1 ]
        [ A21  A22 ]               [ 2 ]         [ 4  −2 ]

P4: A = [ A11  A12 ]  where  A11 = [ 3  6 ] , A12 = [ 1 ] , A21 = [ 2  4 ] , A22 = [ −2 ]
        [ A21  A22 ]                                              [ 0  3 ]         [  1 ]

P5: A = [ A11  A12 ]  where  A11 = [ 3 ] , A12 = [ 6  1 ] , A21 = [ 2 ] , A22 = [ 4  −2 ]
        [ A21  A22 ]                                              [ 0 ]         [ 3   1 ]

The quantities Aij have no connection with cofactors. They are treated as elements of the matrix A, and the arithmetic operations defined on the matrix A can be performed as if these quantities were elements of the matrix. However, it should be borne in mind that these quantities are matrices of specific orders. With this background, we complete our aim of finding the inverse of an n × n non-singular matrix A as follows:

A = [ a11  a12  …  a1n ]
    [ a21  a22  …  a2n ]
    [  ⋮    ⋮         ⋮ ]
    [ an1  an2  …  ann ]

We partition it as

A = [ A11 (r × r)  A12 (r × s) ]
    [ A21 (s × r)  A22 (s × s) ]

Thus r + s = n.

While partitioning, care should be taken that one of the blocks Aij is a non-singular matrix; here we consider that A11 is non-singular. One can instead consider A12 as the non-singular block, but in that case the orders of A11 and A22 will change.

Let B be the inverse of A, i.e. A⁻¹ = B. Suppose that A and B are partitioned exactly alike, i.e.

B = [ B11 (r × r)  B12 (r × s) ]    (3.8)
    [ B21 (s × r)  B22 (s × s) ]

Similarly follows the partition of the n × n unit matrix Iₙ:

Iₙ = [ I_rr  O_rs ]    (3.9)
     [ O_sr  I_ss ]

where O_rs stands for a zero matrix of order r × s, etc. Our problem is to find B, i.e. B11, B12, B21 and B22, in terms of the given quantities A11, A12, A21 and A22. Since B = A⁻¹, we have AB = BA = Iₙ:

[ A11  A12 ][ B11  B12 ]   [ I_rr  O_rs ]   [ B11  B12 ][ A11  A12 ]
[ A21  A22 ][ B21  B22 ] = [ O_sr  I_ss ] = [ B21  B22 ][ A21  A22 ]    (3.10)

Equating blocks,

A11 B11 + A12 B21 = I_rr ,  A11 B12 + A12 B22 = O_rs    (3.11a)
A21 B11 + A22 B21 = O_sr ,  A21 B12 + A22 B22 = I_ss    (3.11b)
B11 A11 + B12 A21 = I_rr ,  B11 A12 + B12 A22 = O_rs    (3.12a)
B21 A11 + B22 A21 = O_sr ,  B21 A12 + B22 A22 = I_ss    (3.12b)

These are 8 equations in the 4 unknowns B11, B12, B21, B22. We choose the four equations (3.11a) and (3.12b) to determine the unknowns. Set B22 = β⁻¹, where β = B22⁻¹ is itself a non-singular matrix.

The introduction of β is just for convenience of calculation; one can work with B22 directly, without β. From the second equation of (3.11a), we write

A11 B12 = −A12 B22 = −A12 β⁻¹

Since A11 is non-singular, A11⁻¹ exists; premultiplying by A11⁻¹,

A11⁻¹ A11 B12 = −A11⁻¹ A12 β⁻¹, or I B12 = −(A11⁻¹ A12) β⁻¹

or B12 = −(A11⁻¹ A12) β⁻¹    (3.13a)

Similarly the first equation of (3.12b) gives

B21 A11 = −B22 A21, or B21 A11 A11⁻¹ = −β⁻¹ A21 A11⁻¹

⇒ B21 = −β⁻¹ (A21 A11⁻¹)    (3.13b)

Also, premultiplying the first equation of (3.11a) by A11⁻¹, we have

A11⁻¹ A11 B11 + A11⁻¹ A12 B21 = A11⁻¹ I
⇒ B11 = A11⁻¹ − (A11⁻¹ A12) B21
⇒ B11 = A11⁻¹ + (A11⁻¹ A12) β⁻¹ (A21 A11⁻¹)    (3.13c)

It remains to determine B22 = β⁻¹. Substituting the known values in the remaining second equation of (3.12b),

−β⁻¹ (A21 A11⁻¹) A12 + β⁻¹ A22 = I, or −(A21 A11⁻¹) A12 + A22 = β

or β = A22 − (A21 A11⁻¹) A12    (3.13d)

Thus the set (3.13) of four equations determines B and hence A⁻¹.

Remark. The crucial point in the working of the method is the non-singularity of A11. But this need not always happen. Hence the first thing is to choose a partition suited to our demand. Irrespective of the order of A, the partition must always be in the form of four blocks such that one of the blocks is non-singular. In view of this, the choice P1 is out of the question. In P2, A11 and A22 are square matrices but A11 is singular; hence this A11 goes out of choice and we have to choose A22 instead. On similar counts, in P4 we have to consider A12. The partitions P3 and P5 work well. It is consequential that if A11 is singular, (3.13) does not hold. What is to be done in this case? We look at another partition and make small modifications in (3.13) as follows:

(i) |A11| = 0, A22 is non-singular: apply the interchange 1 ↔ 2 in (3.13).

(ii) A12 is non-singular among the Aij: in Aij keep the first suffix unchanged and apply 1 ↔ 2 in the second suffix; in Bij keep the second suffix unchanged and apply 1 ↔ 2 in the first suffix. Thus (3.13) becomes

B11 = −β⁻¹ (A22 A12⁻¹)
B12 = β⁻¹ ,  β = A21 − A22 (A12⁻¹ A11)    (3.14)
B21 = A12⁻¹ + (A12⁻¹ A11) β⁻¹ (A22 A12⁻¹)
B22 = −(A12⁻¹ A11) β⁻¹

(iii) A21 is non-singular among the Aij: in Aij keep the second suffix unchanged and apply 1 ↔ 2 in the first suffix; in Bij keep the first suffix unchanged and apply 1 ↔ 2 in the second suffix, etc.

The following solved examples illustrate the working of the method.

Problem 3.13

Use the partitioning method to compute the inverse of

A = [ 1  3  3 ]
    [ 1  4  3 ]
    [ 1  3  4 ]

Solution. Here |A| = 1 ≠ 0 (subtract the first row from the second and the third rows; the determinant reduces to that of a triangular matrix with unit diagonal). Hence A⁻¹ exists. Let the partition of the given matrix be

A = [ A11  A12 ]   where  A11 = [ 1  3 ] , A12 = [ 3 ] , A21 = [ 1  3 ] , A22 = [ 4 ]
    [ A21  A22 ]                [ 1  4 ]         [ 3 ]

⇒ |A11| = 4 − 3 = 1 ≠ 0, |A22| = 4 ≠ 0

Hence the above partition works and (3.13) gives A⁻¹. Let A⁻¹ = B = [ B11  B12 ; B21  B22 ]. Now

A11⁻¹ = [  4  −3 ] ,   A11⁻¹ A12 = [  4  −3 ][ 3 ] = [ 3 ]
        [ −1   1 ]                 [ −1   1 ][ 3 ]   [ 0 ]

A21 A11⁻¹ = [ 1  3 ] [  4  −3 ] = [ 4−3  −3+3 ] = [ 1  0 ]
                     [ −1   1 ]

Then

β = A22 − A21 (A11⁻¹ A12) = [4] − [ 1  3 ] [ 3 ] = [4] − [3] = [1]  ⇒  β⁻¹ = [1]
                                           [ 0 ]

Then

B11 = A11⁻¹ + (A11⁻¹ A12) β⁻¹ (A21 A11⁻¹) = [  4  −3 ] + [ 3  0 ] = [  7  −3 ]
                                            [ −1   1 ]   [ 0  0 ]   [ −1   1 ]

B12 = −(A11⁻¹ A12) β⁻¹ = −[ 3 ][1] = [ −3 ]
                          [ 0 ]      [  0 ]

B21 = −β⁻¹ (A21 A11⁻¹) = −[1][ 1  0 ] = [ −1  0 ]

B22 = β⁻¹ = [1]

⇒ A⁻¹ = B = [ B11  B12 ]   [  7  −3  −3 ]
            [ B21  B22 ] = [ −1   1   0 ]
                           [ −1   0   1 ]
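The block formulas (3.13) carry over to code almost verbatim. A sketch (helper names `mul`, `add`, `neg`, `inv2` are ours) that reproduces the computation of Problem 3.13:

```python
from fractions import Fraction

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def neg(X):
    return [[-x for x in row] for row in X]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

# blocks of Problem 3.13 (A11 is non-singular)
A11, A12, A21, A22 = [[1, 3], [1, 4]], [[3], [3]], [[1, 3]], [[4]]

A11i = inv2(A11)
beta = add(A22, neg(mul(mul(A21, A11i), A12)))   # (3.13d)
betai = [[1 / beta[0][0]]]                       # beta is 1x1 here
T = mul(mul(A11i, A12), betai)                   # (A11^-1 A12) beta^-1
B11 = add(A11i, mul(T, mul(A21, A11i)))          # (3.13c)
B12 = neg(T)                                     # (3.13a)
B21 = neg(mul(betai, mul(A21, A11i)))            # (3.13b)
B22 = betai

B = [B11[0] + B12[0], B11[1] + B12[1], B21[0] + B22[0]]   # assemble A^-1
```

`B` comes out as the matrix found above, with rows (7, −3, −3), (−1, 1, 0), (−1, 0, 1).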

Problem 3.14 Find the inverse of the matrix

A = [ 0  cos θ  −sin θ ]
    [ 0  sin θ   cos θ ]
    [ 1    0       0   ]

by the partitioning method.
Hint. This belongs to case (3.14).

Solution. Here |A| = cos²θ + sin²θ = 1 ≠ 0. Hence A is non-singular and A⁻¹ exists. Let A be partitioned as

A = [ A11  A12 ]   where  A11 = [ 0 ] , A12 = [ cos θ  −sin θ ] , A21 = [1] , A22 = [ 0  0 ]
    [ A21  A22 ]                [ 0 ]         [ sin θ   cos θ ]

Let A⁻¹ = B = [ B11  B12 ; B21  B22 ]. Since A11 is singular but A12 is non-singular, we use (3.14):

A12⁻¹ = [  cos θ  sin θ ] ,   A12⁻¹ A11 = [  cos θ  sin θ ][ 0 ] = [ 0 ]
        [ −sin θ  cos θ ]                 [ −sin θ  cos θ ][ 0 ]   [ 0 ]

A22 A12⁻¹ = [ 0  0 ] [  cos θ  sin θ ] = [ 0  0 ]
                     [ −sin θ  cos θ ]

β = A21 − A22 (A12⁻¹ A11) = [1] − [ 0  0 ][ 0 ] = [1]  ⇒  β⁻¹ = [1]
                                          [ 0 ]

Then

B11 = −β⁻¹ (A22 A12⁻¹) = −[1][ 0  0 ] = [ 0  0 ]

B12 = β⁻¹ = [1]

B21 = A12⁻¹ + (A12⁻¹ A11) β⁻¹ (A22 A12⁻¹) = [  cos θ  sin θ ] + [ 0  0 ] = [  cos θ  sin θ ]
                                            [ −sin θ  cos θ ]   [ 0  0 ]   [ −sin θ  cos θ ]

B22 = −(A12⁻¹ A11) β⁻¹ = −[ 0 ][1] = [ 0 ]
                          [ 0 ]      [ 0 ]

⇒ A⁻¹ = B = [   0      0    1 ]
            [  cos θ  sin θ 0 ]
            [ −sin θ  cos θ 0 ]

SAQ 3.9 Find the inverse of the matrix A by the partitioning method in the following cases:

(a) A = [ 0  −sin α  cos α ]
        [ 0   cos α  sin α ]
        [ 1     0      0   ]

(b) A = [ 1  −1  1 ]
        [ 4   1  0 ]
        [ 8   1  1 ]

SUMMARY The concept of the inverse of a non-singular matrix is introduced. The adjoint and partitioning methods for evaluating the inverses of non-singular matrices are explained.

KEY WORDS
Inverse of a matrix
Adjoint matrix
Partition of a matrix


UNIT 1.4: ELEMENTARY TRANSFORMATIONS OF A MATRIX


LEARNING OBJECTIVES After successful completion of the unit, you will be able to
Explain the meaning of an elementary transformation of a matrix
Apply the same to evaluate the inverse of a non-singular matrix

INTRODUCTION In the previous unit four methods were suggested for obtaining the inverse of a non-singular matrix, and two of them have been discussed so far. The remaining methods use the elementary transformations of a matrix to evaluate an inverse. Therefore it is necessary to know about these transformations. In this unit we first explain the elementary transformations of a matrix and then apply them to compute the inverse of a given matrix.

4.1 Elementary transformation of a matrix explained

An elementary transformation of a matrix is a transformation of any one of the following kinds:
(i) Interchange of any two rows or columns, i.e. i-th row ↔ j-th row, denoted symbolically by Rij. For columns: i-th column ↔ j-th column, denoted by Cij.
(ii) The multiplication of a row (or column) by a non-zero constant, i.e. i-th row → a(i-th row), denoted by Ri(a). For columns: i-th column → a(i-th column), denoted by Ci(a).
(iii) Addition of a multiple of one row (or column) to another row (or column), i.e. i-th row → (i-th row) + a(j-th row), denoted by Rij(a). For columns: i-th column → (i-th column) + a(j-th column), denoted by Cij(a).

A transformation which applies to rows is called a row transformation and one which applies to columns is a column transformation. Thus in total there are six elementary row and column operations. The order of a matrix is not changed by these operations.

Illustration. Consider the matrix

A = [ 3  5  7 ]
    [ 2  3  4 ]
    [ 1  2  3 ]

Then we have

R13 → [ 1  2  3 ]      C23 → [ 3  7  5 ]
      [ 2  3  4 ]            [ 2  4  3 ]
      [ 3  5  7 ]            [ 1  3  2 ]

R2(−3) → [  3   5    7 ]      C1(−1) → [ −3  5  7 ]
         [ −6  −9  −12 ]               [ −2  3  4 ]
         [  1   2    3 ]               [ −1  2  3 ]

R12(2) → [ 3+4  5+6  7+8 ]   [ 7  11  15 ]
         [  2    3    4  ] = [ 2   3   4 ]
         [  1    2    3  ]   [ 1   2   3 ]

The inverse of an elementary transformation

It is the transformation which nullifies the effect of the given transformation and restores the matrix to its original form. Thus if T is a given transformation on A, T⁻¹ is the inverse of T such that

T⁻¹(TA) = A

For example, if we interchange two rows and then interchange them again, there is no net change. Hence the transformation in (i) is its own inverse. We write the inverses of the elementary transformations as follows:

Rij⁻¹ = Rij ,   Ri⁻¹(k) = Ri(1/k) ,   Rij⁻¹(k) = Rij(−k)

Just replace R by C to get the inverses of the elementary column transformations.

Illustration. Let

A = [ 2   1  3  −1 ]
    [ 4   2  1  −4 ]
    [ 3  −1  2   1 ]

We denote the effect of Rij on A by Rij[A].


Then

R23[A] = [ 2   1  3  −1 ]
         [ 3  −1  2   1 ]
         [ 4   2  1  −4 ]

⇒ R23[R23[A]] = A ⇒ R23⁻¹ = R23

Also

R21(−2)[A] = [ 2   1   3  −1 ]
             [ 0   0  −5  −2 ]
             [ 3  −1   2   1 ]

⇒ R21(2)[R21(−2)[A]] = A ⇒ R21⁻¹(−2) = R21(2)

Equivalent matrices

Two matrices A and B of the same order are said to be equivalent if one can be obtained from the other by a sequence of elementary transformations. This is expressed as A ~ B, which means that A is equivalent to B. If the elementary transformations used are row transformations, we say that A is row equivalent to B. From the definition itself we observe that

A ~ A;  A ~ B ⇒ B ~ A;  A ~ B and B ~ C ⇒ A ~ C,

where A, B, C are matrices of the same order. Therefore the relation ~ defined on the set of m × n matrices is reflexive, symmetric and transitive, and hence it is an equivalence relation.

Illustration. The matrices

A = [ 2  4  6 ]      and      B = [ 2  4  10 ]
    [ 1  3  5 ]                   [ 1  3   7 ]

are equivalent, since

C31(2)[A] = [ 2  4  10 ] = B
            [ 1  3   7 ]

Thus A is column equivalent to B.

Elementary matrices

An elementary row (column) matrix is the matrix obtained from the identity matrix Iₙ by the use of a single elementary row (column) transformation.

Illustration. Consider I₂ = [ 1  0 ; 0  1 ]. Then

R12(I₂) = [ 0  1 ] = C12(I₂)     R2(k)(I₂) = [ 1  0 ] = C2(k)(I₂)     R12(k)(I₂) = [ 1  k ] = C21(k)(I₂)
          [ 1  0 ]                           [ 0  k ]                              [ 0  1 ]

The effect of an elementary transformation on a matrix A can be brought about through multiplication (pre or post) by the corresponding elementary matrix, depending upon the transformation (row or column).

Result 4.1

An elementary row (column) transformation of a matrix A can be obtained by pre- (post-) multiplying A by the corresponding elementary matrix.

Illustration. Consider the matrix

A = [  1   0  2 ]
    [ −1   3  1 ]
    [  2  −1  2 ]

(i) We have

R12[A] = [ −1   3  1 ]
         [  1   0  2 ]
         [  2  −1  2 ]

We show that this matrix can be obtained by applying R12(I₃) to A. We write

R12(I₃) = [ 0  1  0 ]
          [ 1  0  0 ]
          [ 0  0  1 ]

⇒ [R12(I₃)] A = [ 0  1  0 ][  1   0  2 ]   [ −1   3  1 ]
                [ 1  0  0 ][ −1   3  1 ] = [  1   0  2 ] = R12[A]
                [ 0  0  1 ][  2  −1  2 ]   [  2  −1  2 ]

(ii) Similarly

C12[A] = [  0   1  2 ]      and      C12(I₃) = [ 0  1  0 ]
         [  3  −1  1 ]                         [ 1  0  0 ]
         [ −1   2  2 ]                         [ 0  0  1 ]

and

A [C12(I₃)] = [  1   0  2 ][ 0  1  0 ]   [  0   1  2 ]
              [ −1   3  1 ][ 1  0  0 ] = [  3  −1  1 ] = C12[A]
              [  2  −1  2 ][ 0  0  1 ]   [ −1   2  2 ]
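Result 4.1 can be demonstrated mechanically: apply a row swap to I₃ to build the elementary matrix, premultiply, and compare with swapping the rows of A directly. A sketch (helper names `mul` and `swap_rows` are ours):

```python
def mul(X, Y):
    n = len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(len(Y[0]))]
            for i in range(len(X))]

def swap_rows(M, i, j):
    # the elementary operation R_ij, applied out of place
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

A = [[1, 0, 2], [-1, 3, 1], [2, -1, 2]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

E = swap_rows(I3, 0, 1)      # the elementary matrix R12(I3)
left = mul(E, A)             # [R12(I3)] A
right = swap_rows(A, 0, 1)   # R12[A]
```

`left` and `right` agree, which is exactly Result 4.1 for a row interchange; post-multiplying by the same E performs the column interchange instead.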

Result 4.2 Let the product AB of the matrices A and B be defined. Then

(i) R(AB) = R(A) B and
(ii) C(AB) = A C(B)

where R(A) denotes an elementary row transformation of A and C(A) means an elementary column transformation of A.

Illustration. Let

A = [ 2   0  1 ] ,   B = [ 1 ]
    [ 0  −1  0 ]         [ 1 ]
                         [ 0 ]

Then

AB = [  2 ] ,   R1(3)[AB] = [  6 ]
     [ −1 ]                 [ −1 ]

Now

[R1(3)A] B = [ 6   0  3 ][ 1 ]   [  6 ]
             [ 0  −1  0 ][ 1 ] = [ −1 ] = R1(3)[AB]
                         [ 0 ]

Also

C1(3)[AB] = [  6 ]
            [ −3 ]

and

A [C1(3)B] = [ 2   0  1 ][ 3 ]   [  6 ]
             [ 0  −1  0 ][ 3 ] = [ −3 ] = C1(3)[AB]
                         [ 0 ]

4.2 Inverse by elementary row transformations (Gauss-Jordan method)

Let A be a non-singular matrix of order n. We have

A = Iₙ A

Apply a sequence of elementary row transformations to the left side and to the prefactor Iₙ on the right side till we get

Iₙ = BA, i.e. B = A⁻¹.

Working method to find A⁻¹ when A is given

Write A = IA. Apply an elementary row transformation to the left side and the same operation to the I on the right side. Continue the applications till the left side becomes I and the I on the right side becomes B, i.e. the relation takes the form I = BA. Then A⁻¹ = B. The following examples illustrate the method.

Problem 4.1

Using row transformations, find the inverse of the matrix

A = [ 1  2  1 ]
    [ 3  2  3 ]
    [ 1  1  2 ]

Solution. Consider A = IA, i.e.

[ 1  2  1 ]   [ 1  0  0 ]
[ 3  2  3 ] = [ 0  1  0 ] A
[ 1  1  2 ]   [ 0  0  1 ]

R21(−3), R31(−1) ⇒

[ 1   2  1 ]   [  1  0  0 ]
[ 0  −4  0 ] = [ −3  1  0 ] A
[ 0  −1  1 ]   [ −1  0  1 ]

R2(−1/4) ⇒

[ 1   2  1 ]   [  1     0    0 ]
[ 0   1  0 ] = [ 3/4  −1/4   0 ] A
[ 0  −1  1 ]   [ −1     0    1 ]

R32(1) ⇒

[ 1  2  1 ]   [   1     0    0 ]
[ 0  1  0 ] = [  3/4  −1/4   0 ] A
[ 0  0  1 ]   [ −1/4  −1/4   1 ]

R13(−1) ⇒

[ 1  2  0 ]   [  5/4   1/4  −1 ]
[ 0  1  0 ] = [  3/4  −1/4   0 ] A
[ 0  0  1 ]   [ −1/4  −1/4   1 ]

R12(−2) ⇒

[ 1  0  0 ]   [ −1/4   3/4  −1 ]
[ 0  1  0 ] = [  3/4  −1/4   0 ] A
[ 0  0  1 ]   [ −1/4  −1/4   1 ]

⇒ A⁻¹ = [ −1/4   3/4  −1 ]   (1/4) [ −1   3  −4 ]
        [  3/4  −1/4   0 ] =       [  3  −1   0 ]
        [ −1/4  −1/4   1 ]         [ −1  −1   4 ]
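The working method above is the Gauss-Jordan algorithm, and it mechanizes directly: row-reduce the augmented matrix [A | I] until the left half is I. A sketch (the function name `gauss_jordan_inverse` is ours; it assumes a non-singular input and uses exact rational arithmetic):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    # Row-reduce [A | I] to [I | A^-1].
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # bring a nonzero pivot into position (raises StopIteration if singular)
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]                 # R_col(1/p)
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]  # R_r,col(-f)
    return [row[n:] for row in M]

A = [[1, 2, 1], [3, 2, 3], [1, 1, 2]]
Ainv = gauss_jordan_inverse(A)
```

For the matrix of Problem 4.1, `Ainv` reproduces the rows (−1/4, 3/4, −1), (3/4, −1/4, 0), (−1/4, −1/4, 1) found above.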

MCQ 4.1 Let the real numbers a₁ (≠ 0), a₂, …, a₇ be in arithmetic progression. Consider the statements:

(i) B⁻¹ exists if B = [ a₁   ia₇ ]
                     [ ia₇  a₁  ]

(ii) A⁻¹ exists if A = [ a₁  a₂  a₃ ]
                       [ a₄  a₅  a₆ ]
                       [ a₅  a₆  a₇ ]

Choose the correct statement/s from the following:
(A) Both (i) and (ii) are true.
(B) Only (ii) is true.
(C) Only (i) is true.
(D) Both (i) and (ii) are false.

Hint. Apply R32(−1), R21(−1); a₅ − a₄ = a₆ − a₅ = a₇ − a₆ = d.

SAQ 4.1 ª 2 1  1º Using row transformations, find the inverse of the matrix A ««0 2 1 »» . «¬5 2  3»¼

4.3 Normal form of a matrix

A matrix A is said to be in the normal form if it can be put in any one of the forms

Ir, [Ir 0; 0 0], [Ir 0], [Ir; 0],        (4.1)

where the size of the zero matrix 0 is determined from the value of r. For example, if r = 2, then one can put the matrices in the forms

I2 = [1 0; 0 1], [1 0 0; 0 1 0; 0 0 0], [1 0 0; 0 1 0], [1 0; 0 1; 0 0].

The normal form immediately gives the rank of the matrix (see the next Unit). Some of the normal forms of matrices are as under.

2 × 2 matrices: [1 0; 0 1], [1 0; 0 0]

3 × 3 matrices: [1 0 0; 0 1 0; 0 0 1], [1 0 0; 0 1 0; 0 0 0], [1 0 0; 0 0 0; 0 0 0]

2 × 3 matrices: [1 0 0; 0 1 0], [1 0 0; 0 0 0]

3 × 2 matrices: [1 0; 0 1; 0 0], [1 0; 0 0; 0 0]

Working procedure for reducing the given matrix to its normal form

The main steps are as follows:

(I) By the application of elementary transformations reduce the first row and the first column of the given matrix A = [aij] to the form where the first row is [1 0 ⋯ 0] and the first column is [1; 0; ⋯; 0]. For this:

(i) Make a11 non-zero, if it is zero, by interchanging rows or columns.
(ii) If the new a11 is not 1, make it 1 by dividing the first row by a11.
(iii) To make the first column zero except for a11, subtract appropriate multiples of the first row from the other rows.
(iv) Then make the first row zero except for a11 by subtracting appropriate multiples of the first column from the other columns.

(II) Without disturbing the first row and the first column, repeat steps (i)–(iv) of (I) to make a22 = 1 and the rest of the second row and the second column zero.

(III) Continue the process for the third row and the third column, and so on, till the end of the diagonal is reached or till the remaining elements of the matrix are zero. The ultimate matrix will be in one of the forms (4.1).
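Steps (I)–(III) can be sketched as a short routine. The version below (a minimal sketch with exact rational arithmetic; the function name is illustrative) performs both row and column operations and returns the normal form together with r:

```python
from fractions import Fraction

def normal_form(a):
    """Reduce a matrix to its normal form [[I_r, 0], [0, 0]] by row AND
    column operations, following steps (I)-(III); returns (matrix, r)."""
    m, n = len(a), len(a[0])
    a = [[Fraction(x) for x in row] for row in a]
    r = 0
    while r < min(m, n):
        # (i) bring a non-zero pivot to position (r, r)
        pos = next(((i, j) for j in range(r, n) for i in range(r, m)
                    if a[i][j] != 0), None)
        if pos is None:
            break                                  # remaining block is zero
        i, j = pos
        a[r], a[i] = a[i], a[r]                    # row interchange
        for row in a:                              # column interchange
            row[r], row[j] = row[j], row[r]
        k = a[r][r]                                # (ii) make the pivot 1
        a[r] = [x / k for x in a[r]]
        for i in range(m):                         # (iii) clear the column
            if i != r and a[i][r] != 0:
                f = a[i][r]
                a[i] = [x - f * y for x, y in zip(a[i], a[r])]
        for j in range(r + 1, n):                  # (iv) clear the row
            if a[r][j] != 0:
                f = a[r][j]
                for row in a:
                    row[j] -= f * row[r]
        r += 1
    return a, r

N, r = normal_form([[3, 5, 7], [2, 3, 4], [1, 2, 3]])
# N is [I_2 0; 0 0], so r = 2
```

On the matrix of the next problem it yields [1 0 0; 0 1 0; 0 0 0] with r = 2, matching the hand reduction.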


Problem 4.2

Reduce the matrix A = [3 5 7; 2 3 4; 1 2 3] to the normal form.

Solution. We apply the following steps to get the normal form of the matrix A.

Step 1. R13: A ~ [1 2 3; 2 3 4; 3 5 7]
Step 2. R21(−2): ~ [1 2 3; 0 −1 −2; 3 5 7]
Step 3. R31(−3): ~ [1 2 3; 0 −1 −2; 0 −1 −2]
Step 4. C21(−2): ~ [1 0 3; 0 −1 −2; 0 −1 −2]
Step 5. C31(−3): ~ [1 0 0; 0 −1 −2; 0 −1 −2]

Thus we have achieved the first stage of getting the first row and the first column in the desired form, as stated in (I). Now concentrate on the remaining sub-matrix

[−1 −2; −1 −2]

Repeating the earlier process, we get

R1(−1): [1 2; −1 −2] → R21(1): [1 2; 0 0] → C21(−2): [1 0; 0 0].

Using the above after step 5, we obtain

A ~ [1 0 0; 0 1 0; 0 0 0] = [I2 0; 0 0]

For convenience and simplicity we have separated the sub-matrix, but one can do without the separation as shown below:

Step 6. R2(−1): ~ [1 0 0; 0 1 2; 0 −1 −2]
Step 7. R32(1): ~ [1 0 0; 0 1 2; 0 0 0]
Step 8. C32(−2): ~ [1 0 0; 0 1 0; 0 0 0] = [I2 0; 0 0]

These steps can be reduced if we combine the row operations of steps 2 and 3, and the column operations of steps 4 and 5. It works as follows. After step 1, we have

R21(−2), R31(−3): A ~ [1 2 3; 0 −1 −2; 0 −1 −2]
C21(−2), C31(−3): ~ [1 0 0; 0 −1 −2; 0 −1 −2], etc.

Problem 4.3

For the matrix

A = [1 1 2; 1 2 3; 0 −1 −1],

find non-singular matrices P and Q such that PAQ is in the normal form.

Hint. A = IAI.

Solution. We have

[1 1 2; 1 2 3; 0 −1 −1] = [1 0 0; 0 1 0; 0 0 1] A [1 0 0; 0 1 0; 0 0 1]

R2 − R1: [1 1 2; 0 1 1; 0 −1 −1] = [1 0 0; −1 1 0; 0 0 1] A [1 0 0; 0 1 0; 0 0 1]

C2 − C1, C3 − 2C1: [1 0 0; 0 1 1; 0 −1 −1] = [1 0 0; −1 1 0; 0 0 1] A [1 −1 −2; 0 1 0; 0 0 1]

R3 + R2: [1 0 0; 0 1 1; 0 0 0] = [1 0 0; −1 1 0; −1 1 1] A [1 −1 −2; 0 1 0; 0 0 1]

C3 − C2: [1 0 0; 0 1 0; 0 0 0] = [1 0 0; −1 1 0; −1 1 1] A [1 −1 −1; 0 1 −1; 0 0 1]

⇒ [I2 O; O O] = PAQ,        (a1)

where

P = [1 0 0; −1 1 0; −1 1 1] and Q = [1 −1 −1; 0 1 −1; 0 0 1]

Thus PAQ is in the normal form, and P and Q are non-singular. Note that the rank here is 2 < 3, so A itself is singular and A⁻¹ does not exist.

Remark. The matrices P and Q are not unique.

MCQ 4.2

Let PAQ be the normal form for the matrix

A = [1 −1 −1; 1 1 1; 3 1 1],

where the matrices P and Q are non-singular. Then

(A) Trace of 2P − Q is 2.
(B) Trace of 2P − Q is 1.
(C) Trace of 2P − Q is 0.
(D) Trace of 2P − Q is −1.

MCQ 4.3

The normal form of the matrix

[2 1 −3 −6; 3 −3 1 2; 1 1 1 2]

is

(A) [I3 0]
(B) [0 I3]
(C) [I2 0; 0 0]
(D) [0 0; 0 I2]

SAQ 4.2

If A = [3 −3 4; 2 −3 4; 0 −1 1], find two non-singular matrices P and Q such that PAQ = I. Hence find A⁻¹.

SAQ 4.3

Use the Gauss–Jordan method to find the inverse of the matrix [2 1 −1; 0 2 1; 5 2 −3].

SAQ 4.4

Reduce the matrix

[2 3 −1 −1; 1 −1 −2 −4; 3 1 3 −2; 6 3 0 −7]

to normal form.

SUMMARY

The elementary transformations of a matrix are fully explained with illustrations. They are used to find the inverse of a non-singular matrix. The concepts of equivalent matrices and the normal form of a matrix are discussed.

KEY WORDS

Elementary transformation, Equivalent matrices, Gauss–Jordan method, Normal form of a matrix


UNIT 1.5: THE RANK OF A MATRIX


LEARNING OBJECTIVES

After successful completion of the unit, you will be able to:
Explain the meaning of the rank of a matrix
Apply the definition to determine the rank of a given matrix

INTRODUCTION

In the first four units various concepts related to a matrix are explained. There is one more important concept, the rank of a non-zero matrix, which is related to the minors of the matrix. The interesting part is that the rank of a matrix remains unchanged under elementary transformations of the matrix. It is helpful in deciding the consistency of linear equations. This unit is devoted to the study of the rank of a matrix.

5.1 Definition of rank

Let A be any non-zero m × n matrix. Any matrix obtained from A by deleting some rows and columns is called a sub-matrix of A. The matrix A is itself a sub-matrix of A. The determinant of a square sub-matrix is called a minor of the matrix A.

Illustration

Consider A = [1 2 −1; 3 0 4]. The sub-matrices are

[1], [2], [−1], [3], [0], [4], [1 2], [1 −1], [2 −1], [3 0], [3 4], [0 4], [1 2 −1], [3 0 4], [1; 3], [2; 0], [−1; 4], [1 2; 3 0], [1 −1; 3 4], [2 −1; 0 4], [1 2 −1; 3 0 4].

Out of these, the square sub-matrices are:

1-square matrices: [1], [2], [−1], [3], [0], [4]
2-square matrices: [1 2; 3 0], [1 −1; 3 4], [2 −1; 0 4]

The determinants of these matrices give the minors of A. They are

1-square minors, or minors of order 1: 1, 2, −1, 3, 0, 4
2-square minors, or minors of order 2: −6, 7, 8.
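The enumeration of square sub-matrices and their minors can be checked mechanically. A small sketch (the helper names `det` and `minors` are mine, not the text's):

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minors(a, r):
    """All r-square minors of a: determinants of its r-square sub-matrices."""
    rows, cols = range(len(a)), range(len(a[0]))
    return [det([[a[i][j] for j in cs] for i in rs])
            for rs in combinations(rows, r)
            for cs in combinations(cols, r)]

A = [[1, 2, -1], [3, 0, 4]]
# minors(A, 1) are the six entries; minors(A, 2) are -6, 7, 8
```

Sorting the order-2 minors of the illustration matrix gives −6, 7, 8, as listed above.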

Rank

A non-zero matrix A is said to have rank r if at least one of its r-square minors is different from zero, while every (r + 1)-square minor, if any, is zero. We denote the rank by ρ(A) = r.

Illustration

In the above illustration, ρ(A) = 2 because there exists a 2-square minor of A which is non-zero. As a matter of fact, here all the 2-square minors are non-zero; but the requirement is of only one non-zero minor. Since there is no (2 + 1)-square minor, the question of its consideration does not arise.

Remark

(i) The rank of a zero matrix is 0.
(ii) The rank of an n-square non-singular matrix is n. Thus if the rank of a square matrix is equal to its order, the matrix is non-singular.
(iii) The rank of In is n.
(iv) The rank of an m × n matrix is ≤ min(m, n).
(v) Consider A = [2 −6 −4; 1 −3 −2; 3 −9 −6]. Here |A| = 0 and each 2-square minor of A is zero, but A is not a zero matrix. Hence the rank ρ(A) = 1.

MCQ 5.1

Consider the statements:

(a) The rank of every non-zero matrix is ≥ 1.
(b) A 6-square singular matrix always has rank r such that 5 ≤ r ≤ 6.

Select the correct answer from the following:
(A) (a) is true but (b) is false.
(B) (a) is false but (b) is true.
(C) (a) and (b) both are true.
(D) (a) and (b) both are false.

From the expansion of a determinant of a square matrix it is seen that if all the minors of order r of a matrix are zero, then so are all minors of order r + 1 or higher. Therefore, in finding the rank of a matrix, if all the r-square minors are zero, there is no need to examine minors of any higher order. This reduces the work of determining the rank of a given matrix. Still, the determination of the rank of a matrix of higher order is a tedious job, barring exceptionally simple cases. Therefore, there is a need for some method that makes this task less tedious. The method must be such that the order and the rank of a given matrix do not change under its operations. Fortunately, there is a method which involves operations that act on the matrix but do not alter its order and rank. These operations, which we have already studied, are the elementary transformations of a matrix. The rank of a matrix remains unchanged for the following reason.

The problem of finding the rank of a matrix is closely linked with whether a determinant is zero or non-zero. For example, if the rank is r, some determinant of order r has a non-zero value but every determinant of order r + 1 has zero value. Since this character of the determinant is unaltered by the elementary transformations (i), (ii) and (iii) explained in section 4.1 of Unit 4, the rank of the given matrix remains unchanged irrespective of the change of the matrix under these operations. This is proved in the next theorem.

Theorem 5.1

The elementary transformations do not change the rank of a matrix.

Proof. We prove the theorem for row transformations; the proof for column transformations follows similarly. Let A be any non-zero m × n matrix of rank r. In proving the theorem we make use of the definition of rank and the properties of the determinant. We have:

(P1) Since the rank ρ(A) = r, every (r + 1)-square minor of A is zero.
(P2) Interchange of rows of a determinant changes its sign.
(P3) Multiplication of a row of a determinant by a k ≠ 0 changes the determinant to k (original determinant).
(P4) Addition of any multiple of one row to another row does not change the value of the determinant.

Let B be the matrix obtained from A by a row transformation. Let D be any (r + 1)-square minor of A. Suppose D changes to D′ under the row transformation, so D′ is the (r + 1)-square minor of B having the same position as D. We study the effect of row transformations on D.

Effect of the row transformation Rij. In this case either
(i) there is no change in D, i.e. D′ = D = 0, by (P1), or
(ii) the rows of D are interchanged; then D′ = −D by (P2), ⇒ D′ = 0 by (P1), or
(iii) one of the rows of D is replaced by another new row not in D. Here D′ = (another (r + 1)-square minor of A) = 0, by (P1).

Hence under Rij the vanishing of every (r + 1)-minor of A is retained.

Effect of Ri(k). Here either
(i) there is no change in D, i.e. D′ = D = 0, by (P1), or
(ii) one of the rows of D is multiplied by k, i.e. D′ = kD by (P3), and then D′ = 0 by (P1).

Effect of Rij(k). In this case either
(i) there is no change in D, i.e. D′ = D = 0, or
(ii) one of the rows of D is increased by k times another row of D; then D′ = D by (P4), ⇒ D′ = 0, or
(iii) k times a new row not in D is added to one of the rows of D. In this case

D′ = D + k (another (r + 1)-square minor of A) = 0 + k · 0 = 0, by (P1).

It follows that every (r + 1)-square minor of B is zero. Hence the rank of B is ≤ r. Therefore the elementary row transformations cannot raise the rank of a matrix. There may be a possibility of lowering the rank; but then the inverse transformation would have to increase the rank of B, which is prohibited by the fact just proved. Hence the transformations neither raise nor lower the rank of a matrix.

QED

Illustration of the proof

Consider the matrix

A = [15 16 17 18 19; 10 11 12 13 14; 5 6 7 8 9; 4 5 6 7 8; 3 4 5 6 7]

Its rank is 2. Then every 3-square minor of A is zero. Let

D = |6 7 8; 5 6 7; 4 5 6|.

Check yourself that D = 0. (D sits in rows 3, 4, 5 and columns 2, 3, 4 of A.)

Effect of Rij

(i) Apply R12 to A: [10 11 12 13 14; 15 16 17 18 19; 5 6 7 8 9; 4 5 6 7 8; 3 4 5 6 7]. D is not changed under R12, i.e. D′ = D = 0.

(ii) Apply R35 to A: [15 16 17 18 19; 10 11 12 13 14; 3 4 5 6 7; 4 5 6 7 8; 5 6 7 8 9]. Now

D′ = |4 5 6; 5 6 7; 6 7 8| = −|6 7 8; 5 6 7; 4 5 6| = 0.

(iii) Apply R14 to A: [4 5 6 7 8; 10 11 12 13 14; 5 6 7 8 9; 15 16 17 18 19; 3 4 5 6 7]. Then

D′ = |6 7 8; 16 17 18; 4 5 6| = 0.

Effect of Ri(k)

R2(−3): no change in D.

Apply R3(2) to A: [15 16 17 18 19; 10 11 12 13 14; 10 12 14 16 18; 4 5 6 7 8; 3 4 5 6 7]. Then

D′ = |12 14 16; 5 6 7; 4 5 6| = 2 |6 7 8; 5 6 7; 4 5 6| = 2D = 0.

Effect of Rij(k)

Apply R12(−4) to A: no change in D.

Apply R34(−1) to A: the third row becomes [5−4, 6−5, 7−6, 8−7, 9−8]. Then

D′ = |6−5 7−6 8−7; 5 6 7; 4 5 6| = |6 7 8; 5 6 7; 4 5 6| − |5 6 7; 5 6 7; 4 5 6| = D − 0 = 0.

Apply R41(2) to A: the fourth row becomes the old fourth row plus twice the first row. Then

D′ = |6 7 8; 5+2(16) 6+2(17) 7+2(18); 4 5 6| = |6 7 8; 5 6 7; 4 5 6| + 2 |6 7 8; 16 17 18; 4 5 6| = D + 2 (a 3-square minor of A) = 0.
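The invariance just illustrated can be confirmed numerically. The sketch below (helper names are mine) checks that every 3-square minor of A vanishes, and that this remains true after two sample row operations:

```python
from itertools import combinations

def det3(m):
    """Determinant of a 3-square matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def every_3_minor_vanishes(a):
    m, n = len(a), len(a[0])
    return all(det3([[a[i][j] for j in cs] for i in rs]) == 0
               for rs in combinations(range(m), 3)
               for cs in combinations(range(n), 3))

A = [[15, 16, 17, 18, 19],
     [10, 11, 12, 13, 14],
     [5, 6, 7, 8, 9],
     [4, 5, 6, 7, 8],
     [3, 4, 5, 6, 7]]

B = [row[:] for row in A]
B[2] = [x - y for x, y in zip(A[2], A[3])]   # R34(-1): R3 -> R3 - R4
C = [row[:] for row in A]
C[2] = [2 * x for x in A[2]]                 # R3(2)
```

All three matrices pass the check, in line with Theorem 5.1.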

Problem 5.1

Reduce the matrix A = [3 5 7; 2 3 4; 1 2 3] to the normal form and then find its rank.

Hint. Problem 4.2 of Unit 4.

Solution. Referring to Problem 4.2 of Unit 4, by elementary operations the given matrix is reduced to

A ~ [1 0 0; 0 1 0; 0 0 0] = [I2 0; 0 0]

Since |I2| = 1 ≠ 0, we have ρ(A) = 2.

Problem 5.2

Determine the rank of the matrix [1 2 −1 3; 4 1 2 1; 3 −1 1 2; 1 2 0 1].

Hint. Reduce the matrix to its normal form.

Solution.

Step 1: R21(−4), R31(−3), R41(−1) ⇒ A ~ [1 2 −1 3; 0 −7 6 −11; 0 −7 4 −7; 0 0 1 −2]
Step 2: C21(−2), C31(1), C41(−3) ⇒ A ~ [1 0 0 0; 0 −7 6 −11; 0 −7 4 −7; 0 0 1 −2]
Step 3: R2(−1/7) ⇒ A ~ [1 0 0 0; 0 1 −6/7 11/7; 0 −7 4 −7; 0 0 1 −2]
Step 4: R32(7) ⇒ A ~ [1 0 0 0; 0 1 −6/7 11/7; 0 0 −2 4; 0 0 1 −2]
Step 5: C32(6/7), C42(−11/7) ⇒ A ~ [1 0 0 0; 0 1 0 0; 0 0 −2 4; 0 0 1 −2]
Step 6: R3(−1/2) ⇒ A ~ [1 0 0 0; 0 1 0 0; 0 0 1 −2; 0 0 1 −2]
Step 7: R43(−1) ⇒ A ~ [1 0 0 0; 0 1 0 0; 0 0 1 −2; 0 0 0 0]
Step 8: C43(2) ⇒ A ~ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0] = [I3 0; 0 0]

⇒ ρ(A) = 3

Note. We need not go to step 8 after step 7 for calculating the rank of the matrix: since the number of non-zero rows is 3, the rank is 3. However, to find the normal form of the matrix we have to carry out step 8.

Row equivalence

The matrix A is said to be row equivalent to B if it is reduced to B by the use of elementary row transformations alone.

MCQ 5.2

Consider the matrices

M = [0 1 −3 1; 3 1 −3 −1; 1 1 −3 1; 1 0 0 0] and N = [2 0 1; 1 −2 2; 3 2 0],

such that ρ(M) = m and ρ(N) = n. Then the circle

(x − m)² + (y − n)² = 4

passes through the point/s

(A) (0, 0)
(B) (4, 0)
(C) (4, 2)
(D) (2, 0)

MCQ 5.3

The matrices

M = [1 3 2 2; 1 2 1 3; 2 4 3 4; 3 7 4 8] and N = [1 2 3 14; 4 5 7 35; 3 3 4 21]

are reduced to their normal forms: M = [Im] and N = [In 0; 0 0]. Then the equation x² − 2mx + n = 0 has

(A) equal real roots
(B) distinct positive roots
(C) imaginary roots
(D) distinct negative roots

SAQ 5.1

Find the ranks of the following matrices:

A = [2 −3 5; 6 −9 15; 8 −12 20] and B = [3 1 2; −6 2 −4; −3 1 −2]

SAQ 5.2

Reduce the matrix

A = [2 3 4 5; 3 4 5 6; 4 5 6 7; 9 10 11 12]

to normal form and find its rank.

5.2 Canonical form of a matrix

There are different canonical forms of a matrix. The normal form [Ir 0; 0 0] of an arbitrary matrix of rank r is one of them. Consider a matrix A of rank r. The structure of its canonical form is as follows.

About its first r rows:
(i) Each row contains non-zero elements besides zero elements.
(ii) In each row the first non-zero element is 1.
(iii) In any two consecutive rows, the leading 1 of the lower row is to the right of the leading 1 of the upper row.
(iv) A column containing a leading 1 has all its other elements zero.

After the first r rows:
(i) All rows contain only zero elements.
(ii) All zero rows are grouped at the bottom of the matrix.

Result. When a matrix is expressed in a canonical form, its rank is the number of non-zero rows.

Illustration

The matrices

[0 1 0 3; 0 0 1 −1; 0 0 0 0], [1 0 0 −3; 0 1 1 2; 0 0 0 0], [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0]

are in the canonical form.

Reduction to a canonical matrix

Any non-zero matrix A of rank r is row equivalent to a canonical matrix C.

Working procedure

Consider a non-zero m × n matrix A = [aij] of rank r.

(P1) If a11 ≠ 0, use R1(1/a11) to make it 1.
(P2) If a11 = 0 and ai1 ≠ 0, use R1i so that the new a11 = ai1 ≠ 0. Repeat (P1) to get 1 at a11.
(P3) Use R21(−a21), R31(−a31), …, Rm1(−am1) to make the new a21 = a31 = ⋯ = am1 = 0.

If all rows except the first become zero, we have got C. If this does not happen, then repeat the process as in (P1) to (P3) for the 2nd column. By this, if non-zero elements occur only in the first two rows, we have C. If not, continue (P1)–(P3) for the 3rd column, 4th column and so on till C is obtained. If the rank of A is r, the process will continue up to the first r rows, i.e. the only non-zero rows will be the first r rows.

Note. The number of non-zero rows of C is the rank of the matrix.
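The procedure (P1)–(P3) is reduction to the reduced row echelon form, and it can be sketched compactly with exact arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def canonical_form(a):
    """Row-reduce to the canonical (reduced row echelon) form using
    row operations only, per steps (P1)-(P3); returns (C, rank)."""
    m, n = len(a), len(a[0])
    a = [[Fraction(x) for x in row] for row in a]
    r = 0                                   # index of the row being worked on
    for col in range(n):
        pivot = next((i for i in range(r, m) if a[i][col] != 0), None)
        if pivot is None:
            continue                        # no pivot in this column
        a[r], a[pivot] = a[pivot], a[r]     # (P2) interchange
        k = a[r][col]
        a[r] = [x / k for x in a[r]]        # (P1) leading 1
        for i in range(m):                  # (P3) clear the column
            if i != r and a[i][col] != 0:
                f = a[i][col]
                a[i] = [x - f * y for x, y in zip(a[i], a[r])]
        r += 1
        if r == m:
            break
    return a, r

C, rank = canonical_form([[1, 2, 3], [2, 4, 6], [1, 0, 1]])
# C = [1 0 1; 0 1 1; 0 0 0], rank = 2
```

The number of non-zero rows of the returned matrix is the rank, as the Note states.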


Problem 5.3

Find the canonical matrix row equivalent to

[1 −1 1 1 1; 1 −1 2 3 1; 2 −2 1 0 2; 1 1 −1 −3 3].

Hence or otherwise find the rank of the matrix.

Solution. Let the given matrix be A.

Apply R21(−1), R31(−2), R41(−1): A ~ [1 −1 1 1 1; 0 0 1 2 0; 0 0 −1 −2 0; 0 2 −2 −4 2]

R24: ~ [1 −1 1 1 1; 0 2 −2 −4 2; 0 0 −1 −2 0; 0 0 1 2 0]

R2(1/2): ~ [1 −1 1 1 1; 0 1 −1 −2 1; 0 0 −1 −2 0; 0 0 1 2 0]

R3(−1): ~ [1 −1 1 1 1; 0 1 −1 −2 1; 0 0 1 2 0; 0 0 1 2 0]

R43(−1): ~ [1 −1 1 1 1; 0 1 −1 −2 1; 0 0 1 2 0; 0 0 0 0 0]

R12(1), R23(1): ~ [1 0 0 −1 2; 0 1 0 0 1; 0 0 1 2 0; 0 0 0 0 0]

There are three non-zero rows, and hence the rank of the matrix A is 3.

Problem 5.4

Reduce the matrix

A = [0 1 −3 −1; 1 0 1 1; 3 1 0 2; 1 1 −2 0]

to canonical form and find its rank.

Solution. Applying the following elementary transformations, we have

R12: A ~ [1 0 1 1; 0 1 −3 −1; 3 1 0 2; 1 1 −2 0]

R3 − 3R1, R4 − R1: A ~ [1 0 1 1; 0 1 −3 −1; 0 1 −3 −1; 0 1 −3 −1]

R3 − R2, R4 − R2: A ~ [1 0 1 1; 0 1 −3 −1; 0 0 0 0; 0 0 0 0]

This is the canonical form of the matrix. It has 2 non-zero rows. Hence ρ(A) = 2.

SAQ 5.3

Reduce the matrix

A = [2 3 −1 −1; 1 −1 −2 −4; 3 1 3 −2; 6 3 0 −7]

to canonical form and find its rank.

SUMMARY

The concept of the rank of a matrix is discussed. The methods of finding the rank are illustrated through solved examples. It is also shown that one can easily read off the rank of a matrix from its normal form and its canonical form. Therefore, the problem of finding a rank reduces to organizing a matrix in its normal form or canonical form.

KEY WORDS

Rank of a matrix, Normal form, Canonical form, Row equivalent, Column equivalent


UNIT 02-01: SYSTEM OF SIMULTANEOUS LINEAR EQUATIONS


LEARNING OBJECTIVES

After successful completion of the unit, you will be able to:
Explain linear equations
Apply the matrix formulation of linear equations
Apply the methods to solve these equations

INTRODUCTION

Solving simultaneous linear equations is an age-old problem. We are acquainted with this problem right from high-school mathematics. In this unit we consider the contribution of matrix theory to solving a system of linear equations. Consider a system of m linear equations in n unknowns x1, …, xn:

a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
am1 x1 + am2 x2 + ⋯ + amn xn = bm        (1.1)

Here all aij, bi are members of some field. However, we assume that these are real numbers, i.e. all aij, bi ∈ R.

Solution of (1.1)

A solution of the system (1.1) is a set of values of x1, …, xn in R (or F) which satisfies the m equations (1.1). When such a system is given, regarding its solution there are two possibilities: (i) a solution exists, or (ii) a solution does not exist.

Consistent or compatible system

The system of equations (1.1) is said to be consistent if a solution exists. If a solution does not exist, then the system is inconsistent.


Illustration

The equations

x1 + x2 = 3, 2x1 − x2 = 0

have only one solution, x1 = 1, x2 = 2, and the system is consistent. But the system

x1 + x2 = 2, 2x1 + 2x2 = 3

has no solution, and it is inconsistent.

Further, when a solution exists, there arise three cases:
(a) the solution is unique, i.e. there is one and only one solution;
(b) the solution is trivial, i.e. the system has only the zero solution;
(c) there are infinitely many solutions.

Illustration

(a) The equations x1 + x2 = 3, 2x1 − x2 = 0 have the unique solution x1 = 1, x2 = 2.
(b) The equations x + y = 0, x − y = 0 have only the zero solution x = 0, y = 0.
(c) The system x + 2y = 3 has infinitely many solutions.

Remark. A consistent system has either one solution or infinitely many solutions.

1.1 Matrix formulation of simultaneous linear equations

In matrix notation the system (1.1) can be written as

AX = B,        (1.2)

where

A = [a11 a12 ⋯ a1n; ⋮ ; am1 am2 ⋯ amn], X = [x1; x2; ⋮; xn] and B = [b1; b2; ⋮; bm].        (1.3)

Coefficient matrix. The matrix A consists of the coefficients of the unknowns x1, …, xn and is called the coefficient matrix of the given system (1.1).

Augmented matrix. The m × (n + 1) matrix denoted by [A B]:

[A B] = [a11 ⋯ a1n b1; a21 ⋯ a2n b2; ⋮ ; am1 ⋯ amn bm]

is called the augmented matrix of (1.1).

Illustration

(i) Consider the simultaneous linear equations in two unknowns:

2x − 3y = 1
x + y = 3

Here the coefficient and the augmented matrices are respectively given by

A = [2 −3; 1 1] and [A B] = [2 −3 1; 1 1 3]

(ii) For the simultaneous linear equations in three unknowns

x + 2y − z = 2
2x + 5y + z = 1
y + z = −1        (a1)

we have

A = [1 2 −1; 2 5 1; 0 1 1] and [A B] = [1 2 −1 2; 2 5 1 1; 0 1 1 −1]

Elementary operations

By an elementary operation on a system of linear equations we mean doing any one of the following:

(i) Interchange of two equations.
(ii) Multiply an equation by a non-zero element of F.
(iii) Add to one equation k times a different equation.

If a system of linear equations is obtained from a given system by the use of a finite number of elementary operations, we say that the two systems are equivalent. Two equivalent systems have exactly the same solution.

Illustration

Consider the system of equations in (a1). In this case the elementary operations (i) to (iii) are demonstrated as follows.

(i) Interchange the first and second equations:

2x + 5y + z = 1
x + 2y − z = 2
y + z = −1        (a2)

Though the solutions of (a1) and (a2) are the same, the coefficient and augmented matrices of (a1) and (a2) are different.

(ii) Multiply the first equation of (a1) by 2. Then (a1) becomes

2x + 4y − 2z = 4
2x + 5y + z = 1
y + z = −1        (a3)

(iii) To the first equation of (a1) add −1 times the last equation:

(x + 2y − z) − (y + z) = 2 − (−1), i.e. x + y − 2z = 3.

Then the system (a1) becomes

x + y − 2z = 3
2x + 5y + z = 1
y + z = −1

Homogeneous equations

A system of equations (1.1) in which b1 = b2 = ⋯ = bm = 0 is called a system of homogeneous linear equations. In matrix notation we write

AX = 0,        (1.4)

where 0 is the column vector whose entries are all zero. In this case the coefficient matrix A and the augmented matrix [A 0] have the same rank.

Methods of solving linear equations

There are some special and general methods, depending upon the system of equations, for solving linear equations. We discuss some of them.

1.2 Solution by matrix inversion

This is a special method, applicable when the number of equations in (1.1) is equal to the number of unknowns x1, x2, …, xn. Thus the system (1.1) now takes the matrix form

AX = B,        (1.5)

where

A = [a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋮ ; an1 an2 ⋯ ann], X = [x1; x2; ⋮; xn], B = [b1; b2; ⋮; bn].

The method is applicable only when the coefficient matrix A is non-singular. Then A⁻¹ exists. Pre-multiplying (1.5) by A⁻¹, we write

A⁻¹(AX) = A⁻¹B
⇒ (A⁻¹A)X = A⁻¹B, or IX = A⁻¹B
⇒ X = A⁻¹B        (1.6)

This provides the solution to the system of equations. We show that this solution is unique. If possible, let there be two solutions X1 and X2 of the equation (1.5):

⇒ AX1 = B and AX2 = B
⇒ AX1 = AX2

Pre-multiplying by A⁻¹:

A⁻¹(AX1) = A⁻¹(AX2)
⇒ (A⁻¹A)X1 = (A⁻¹A)X2
⇒ IX1 = IX2, i.e. X1 = X2.

Therefore the solution is unique.

Working method of solving (1.5)

Write the given system in the form (1.5). Verify that |A| ≠ 0. Then find A⁻¹ and use (1.6). The procedure is illustrated in the following examples.

Problem 1.1

Solve the linear equations

2x + 3y − z = 0, x − y + 2z = 5, 3x + y − z = 1

by the matrix method.

Solution. Rewriting the system of equations in matrix form,

AX = B,        (a1)

where

A = [2 3 −1; 1 −1 2; 3 1 −1], X = [x; y; z], B = [0; 5; 1]

Now

|A| = 2(1 − 2) − 3(−1 − 6) + (−1)(1 + 3) = 15 ≠ 0

Hence A is a non-singular matrix. Then A⁻¹ exists and the solution of the system (a1) is given by

X = A⁻¹B        (a2)

Here

A⁻¹ = adj A / |A| = (1/15)[−1 2 5; 7 1 −5; 4 7 −5]

⇒ A⁻¹B = (1/15)[−1 2 5; 7 1 −5; 4 7 −5][0; 5; 1] = (1/15)[0 + 10 + 5; 0 + 5 − 5; 0 + 35 − 5] = (1/15)[15; 0; 30]

Then (a2) ⇒

X = [1; 0; 2], i.e. [x; y; z] = [1; 0; 2]

⇒ x = 1, y = 0, z = 2
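The adj A/|A| route used above can be scripted. A sketch with stdlib Fractions (the function names are mine; the determinants are computed by cofactor expansion):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def solve_by_inverse(a, b):
    """X = A^{-1} B with A^{-1} = adj(A) / |A|, as in the worked example."""
    n = len(a)
    d = det(a)
    assert d != 0, "coefficient matrix must be non-singular"
    # adj(A) is the transpose of the cofactor matrix:
    # adj[i][j] = (-1)^(i+j) * minor obtained by deleting row j and column i
    adj = [[(-1) ** (i + j) * det([row[:i] + row[i + 1:]
                                   for k, row in enumerate(a) if k != j])
            for j in range(n)] for i in range(n)]
    return [sum(Fraction(adj[i][k], d) * b[k] for k in range(n)) for i in range(n)]

A = [[2, 3, -1], [1, -1, 2], [3, 1, -1]]
B = [0, 5, 1]
X = solve_by_inverse(A, B)   # reproduces x = 1, y = 0, z = 2
```

On Problem 1.1 it reproduces the solution found by hand.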

Problem 1.2

Given the matrix equation

Y = [1 1 2; 1 2 5; 1 3 3] X,        (a1)

find X = [x1; x2; x3] if Y = [2; 0; 5].

Solution. Let A = [1 1 2; 1 2 5; 1 3 3].

⇒ |A| = |1 1 2; 1 2 5; 1 3 3| = |1 1 2; 0 1 3; 0 2 1| = 1 − 6 = −5 ≠ 0

Hence A⁻¹ exists. Then (a1) becomes

Y = AX,        (a2)

where A⁻¹ exists. Pre-multiplying (a2) by A⁻¹, we get

X = A⁻¹Y = (adj A / |A|) Y = −(1/5)[−9 3 1; 2 1 −3; 1 −2 1][2; 0; 5] = −(1/5)[−13; −11; 7] = (1/5)[13; 11; −7]

⇒ X = [x1; x2; x3] = [13/5; 11/5; −7/5]

MCQ 1.1

The solution of the equations

2x − 5y + 3z = 1, −x + 2y + z = 2, x + y + z = 0

satisfies

(A) xyz = 0
(B) x³ + y³ + z³ = 0
(C) xyz = 1
(D) x³ + y³ + z³ = 3

SAQ 1.1

Solve the following equations by the matrix method.

(a) x + y + z = 3, x + 2y + 3z = 4, x + 4y + 9z = 6
(b) 3x + y + 2z = 3, 2x − 3y − z = −3, x + 2y + z = 4

1.3 Solution by Cramer's rule (a method of determinants)

Cramer's rule is again a special method of solution. Consider a system of n non-homogeneous equations in n unknowns x1, …, xn as in (1.2), where |A| ≠ 0. Then by Cramer's rule we have

x1 = |A1|/|A|, x2 = |A2|/|A|, …, xn = |An|/|A|,

where Ai is the matrix obtained from A by replacing its ith column with the column of b's. In two and three variables, the above takes the following forms.

Two-variable equations:

a11 x1 + a12 x2 = b1, a21 x1 + a22 x2 = b2

Matrix form: [a11 a12; a21 a22][x1; x2] = [b1; b2]

⇒ x1 = |b1 a12; b2 a22| / |a11 a12; a21 a22|, x2 = |a11 b1; a21 b2| / |a11 a12; a21 a22|.

Three-variable equations:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

Matrix form: [a11 a12 a13; a21 a22 a23; a31 a32 a33][x1; x2; x3] = [b1; b2; b3]

⇒ x1 = |b1 a12 a13; b2 a22 a23; b3 a32 a33| / |A|, x2 = |a11 b1 a13; a21 b2 a23; a31 b3 a33| / |A|, x3 = |a11 a12 b1; a21 a22 b2; a31 a32 b3| / |A|,

where |A| = |a11 a12 a13; a21 a22 a23; a31 a32 a33|.

Remark. Cramer's method can be viewed as a rule or formula. It gives the explicit values of the individual variables instead of the whole vector [x1, …, xn]. When n exceeds 3, the working of the formula becomes tedious, since one has to compute determinants of order more than three.

Problem 1.3

the individual variables in stead of the whole space [ x1 ,, xn ]. When n exceeds 3, the working of the formula becomes tedious since one has to compute determinants of order more than three. Problem 1.3

Solve the following equations with the help of determinants x yz

4, x  y  z

Solution. Rewriting in matrix form, AX

| A|

1 1 1 1 1 1 2 1 1

5.

B

ª1 1 1º ª x º «1  1 1» « y » « »« » «¬2 1 1»¼ «¬ z »¼

i.e.

Here

0 , 2x  y  z

ª 4º «0 » « » «¬5»¼

1(1  1)  1(2  1)  1(1  2)

2 z0

9

Hence Cramer"s rule is applicable: x

4 1 1 1 0 1 1 2 5 1 1

1, y

1 4 1 1 1 0 1 2 2 5 1

2, z

1 1 4 1 1 1 0 2 2 1 5

1
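Cramer's rule as stated is easy to mechanize. A sketch (the function names are mine):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(a, b):
    """x_i = |A_i| / |A|, where A_i is A with column i replaced by b."""
    d = det(a)
    assert d != 0, "Cramer's rule needs |A| != 0"
    n = len(a)
    xs = []
    for i in range(n):
        ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(a)]
        xs.append(Fraction(det(ai), d))
    return xs

# Problem 1.3: x + y + z = 4, x - y + z = 0, 2x + y + z = 5
X = cramer([[1, 1, 1], [1, -1, 1], [2, 1, 1]], [4, 0, 5])
# reproduces x = 1, y = 2, z = 1
```

On Problem 1.3 it reproduces x = 1, y = 2, z = 1.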

MCQ 1.2

Consider four simultaneous equations in the variables x, y, z, t having the matrix equation AX = B, |A| ≠ 0. Select the false statement from the following:

(A) B ≠ 0 ⇒ one of x, y, z, t may be zero.
(B) B ≠ 0 ⇒ two of x, y, z, t may be zero.
(C) B ≠ 0 ⇒ three of x, y, z, t may be zero.
(D) B ≠ 0 ⇒ all of x, y, z, t must be zero.

SAQ 1.2

Solve the following equations by Cramer's rule:

2x + y + z = 4, 3x + 2y + z = 2, x + y + 2z = 2.

1.4 Solution by the Gauss–Jordan elimination method

The Gauss elimination method is known to us from school days, when we solve linear equations in two or three variables. Since a similar procedure is repeated in the matrix Gauss–Jordan method, we illustrate its working by an example. Consider the system of linear equations

x + y − z = 1        (a1)
3x − 2y + z = 7        (a2)
2x + 3y − z = 3        (a3)

We keep (a1) unchanged and eliminate x from (a1), (a2) and from (a1), (a3):

(a2) − 3(a1) ⇒ −5y + 4z = 4
(a3) − 2(a1) ⇒ y + z = 1

Thus the given system reduces to

x + y − z = 1        (a4)
−5y + 4z = 4        (a5)
y + z = 1        (a6)

We keep (a4) and (a5) unchanged and eliminate y from (a5) and (a6):

5(a6) + (a5): 9z = 9

Hence the system now takes the form

x + y − z = 1        (a7)
−5y + 4z = 4        (a8)
9z = 9        (a9)

(a9) gives z = 1. With this value in (a8), we get −5y + 4 = 4, i.e. y = 0. With these values of y and z in (a7), we have x + 0 − 1 = 1, i.e. x = 2. Hence the solution is

x = 2, y = 0, z = 1

In solving the equations we have employed elementary operations on the equations. Now we translate the above solution into matrix form, and it becomes the Gauss–Jordan method. The corresponding operations will be elementary matrix row operations. With the help of these operations the coefficient matrix is brought into echelon form, as shown in the following lines. The matrix equivalent of the equations is AX = B.

In an echelon form:
(i) All zero rows are grouped at the bottom of the matrix.
(ii) The number of zeros before the first non-zero element in a row is less than the number of zeros before the first non-zero element in the next row.

Illustration. A = [0 2 1 −1; 0 0 −1 3; 0 0 0 0], B = [0 2 3 0; 0 5 1 6; 0 0 0 0]. A is in echelon form but B is not.

When a matrix is in echelon form, its rank is the number of non-zero rows.

For our system:

[1 1 −1; 3 −2 1; 2 3 −1][x; y; z] = [1; 7; 3]

By R2 − 3R1, R3 − 2R1:

[1 1 −1; 0 −5 4; 0 1 1][x; y; z] = [1; 4; 1]

5R3 + R2:

[1 1 −1; 0 −5 4; 0 0 9][x; y; z] = [1; 4; 9]

⇒ x + y − z = 1, −5y + 4z = 4, 9z = 9

This is the system (a7)–(a9), which yields

x = 2, y = 0, z = 1
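The elimination just performed, forward elimination to echelon form followed by back substitution, can be sketched as follows (function name is mine; this sketch assumes a unique solution exists):

```python
from fractions import Fraction

def gauss_solve(a, b):
    """Forward elimination to an upper-triangular (echelon) augmented
    system, then back substitution; assumes a unique solution."""
    n = len(a)
    a = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(a)]
    for col in range(n):
        # swap a non-zero pivot into place, then eliminate below it
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# the worked system: x + y - z = 1, 3x - 2y + z = 7, 2x + 3y - z = 3
X = gauss_solve([[1, 1, -1], [3, -2, 1], [2, 3, -1]], [1, 7, 3])
# reproduces x = 2, y = 0, z = 1
```

It reproduces the solution x = 2, y = 0, z = 1 found above.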

Problem 1.4

Solve the following equations by the Gauss–Jordan elimination method:

x − y + z + 2 = 0
2x − y + z + 1 = 0
3x − y + z = 0
4x − y + 2z = 0

Solution. Rewriting the equations,

x − y + z = −2, 2x − y + z = −1, 3x − y + z = 0, 4x − y + 2z = 0

In matrix form, we have

[1 −1 1; 2 −1 1; 3 −1 1; 4 −1 2][x; y; z] = [−2; −1; 0; 0]

R2 − 2R1, R3 − 3R1, R4 − 4R1:

[1 −1 1; 0 1 −1; 0 2 −2; 0 3 −2][x; y; z] = [−2; 3; 6; 8]

R3 − 2R2, R4 − 3R2:

[1 −1 1; 0 1 −1; 0 0 0; 0 0 1][x; y; z] = [−2; 3; 0; −1]

R34:

[1 −1 1; 0 1 −1; 0 0 1; 0 0 0][x; y; z] = [−2; 3; −1; 0]

⇒ x − y + z = −2, y − z = 3, z = −1

⇒ x = 1, y = 2, z = −1

Problem 1.5

Solve by the Gauss-Jordan elimination method:

x − y + z = 2, 2x + y + 2z = 1, x + y + z = 0.

Solution. The system of equations is

[1 −1 1; 2 1 2; 1 1 1][x; y; z] = [2; 1; 0]

R2 − 2R1, R3 − R1:

[1 −1 1; 0 3 0; 0 2 0][x; y; z] = [2; −3; −2]

(1/3)R2, (1/2)R3:

[1 −1 1; 0 1 0; 0 1 0][x; y; z] = [2; −1; −1]

R3 − R2:

[1 −1 1; 0 1 0; 0 0 0][x; y; z] = [2; −1; 0]

⇒ x − y + z = 2, y = −1, 0 = 0

Let x = a. Then the solution is x = a, y = −1, z = 1 − a.

Hence the system admits an infinite number of solutions for a ∈ R.
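The rank deficiency behind this infinite family can be confirmed numerically; a sketch (variable names ours) that also spot-checks a few members of the family (a, −1, 1 − a):

```python
import numpy as np

# Problem 1.5: rank A = rank [AB] = 2 < 3 unknowns -> infinitely many solutions.
A = np.array([[1.0, -1.0, 1.0],
              [2.0, 1.0, 2.0],
              [1.0, 1.0, 1.0]])
b = np.array([2.0, 1.0, 0.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 2 2 -> consistent but underdetermined

for a in (0.0, 1.5, -2.0):       # a few members of the solution family
    x = np.array([a, -1.0, 1.0 - a])
    assert np.allclose(A @ x, b)
```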


MCQ 1.3

The system x + 2y = 3

(A) has no solution
(B) has only x = 5, y = −1
(C) has only x = 3, y = 0
(D) has x = 3 − 2a, y = a, for all a ∈ R

MCQ 1.4

The system of equations

x + y + z + t = 1, x − 2y + 3z − t = 2

has the following solutions.

(A) x = 5/3, y = 1/3, z = 0, t = 1 only.
(B) x = 0, y = 1, z = 1, t = −1 only.
(C) x = (1/3)(−a − 5b + 4), y = (1/3)(−2a + 2b − 1), z = b, t = a, for all a, b ∈ R.
(D) x, y, z have 33000 values.

SAQ 1.3

Solve the following equations by the Gauss elimination method:

(a) x1 + 2x2 − x3 = 3, 3x1 − x2 + 2x3 = 1, 2x1 − 2x2 + 3x3 = 2, x1 − x2 + x3 = −1

(b) 3x + 4y − z − 6t = 0, 2x + 3y + 2z − 3t = 0, 2x + y − 14z − 9t = 0, x + 3y + 13z + 3t = 0

1.4 Non-homogeneous equations

Consider the system of m equations in n unknowns given by

AX = B.    (1.7)

When (1.7) has one or more solutions, the system is said to be consistent; otherwise it is inconsistent.

Theorem 1.1 (Condition of consistency)

The system of equations AX = B is consistent if and only if the coefficient matrix A and the augmented matrix [AB] are of the same rank.

Proof. Let C1, ..., Cn be the columns of the matrix A: A = [C1, C2, ..., Cn].

Then (1.7) is expressed as

x1C1 + x2C2 + ... + xrCr + ... + xnCn = B    (1.8)

Suppose that the rank of A is r. Then there will be r linearly independent columns. We name them C1, C2, ..., Cr. Each of the remaining n − r columns is expressible in terms of C1, ..., Cr.

Necessary condition. Suppose that the system is consistent. Hence (1.7) has at least one solution, x1, x2, ..., xn say. Then rewriting (1.8), we get

x1C1 + ... + xrCr + xr+1Cr+1 + ... + xnCn = B    (1.9)

Since rank A = r, each of the n − r columns Cr+1, ..., Cn can be written as a linear combination of C1, ..., Cr. Then (1.9) implies that B is a linear combination of C1, C2, ..., Cr. Thus the maximum number of linearly independent columns in the matrix [AB] is r. This means that the rank of [AB] is r, which is the rank of A.

Sufficient condition. Let the matrices A and [AB] have the same rank r. Then the maximum number of linearly independent columns in [AB] is r. But the rank of A being r, these independent columns must be from A alone. Hence the column B is a linear combination of these linearly independent columns, C1, ..., Cr say.

⇒ B = k1C1 + ... + krCr, where k1, ..., kr are some scalars

i.e.

B = k1C1 + ... + krCr + 0·Cr+1 + 0·Cr+2 + ... + 0·Cn    (1.10)

Comparing (1.9) and (1.10), we get

x1 = k1, x2 = k2, ..., xr = kr, xr+1 = 0, ..., xn = 0    (1.11)

This (x1, ..., xr, ..., xn) is a solution of (1.7). Hence the given system has a solution, and thus the system is consistent.    QED

Remark. The solution may not be unique. If C1, C2, ..., Cn are linearly dependent, then the system has infinitely many solutions.

Theorem 1.2

If A is an n-square non-singular matrix, then the equation AX = B has a unique solution, where B is an n × 1 matrix.

Proof. Let A be an n-square non-singular matrix. Then the rank of A is n. Also the rank of the n × (n + 1) matrix [AB] is n. Hence A and [AB] have the same rank. Therefore the system AX = B is consistent and has a solution. We deduce its solution as under:

Premultiplying AX = B by A⁻¹, we get

A⁻¹AX = A⁻¹B or IX = A⁻¹B

⇒ X = A⁻¹B is the solution of AX = B.

We show that the solution is unique. If possible, assume that X1 and X2 are solutions of AX = B. Then AX1 = B and AX2 = B.

⇒ AX1 = AX2 or A⁻¹AX1 = A⁻¹AX2
⇒ IX1 = IX2 or X1 = X2

Hence the solution is unique.    QED

Working procedure for solving AX = B

(i) Write [AB] and reduce it to triangular form by elementary row transformations. From this determine the ranks of A and [AB].

(ii) If rank A ≠ rank [AB], the system is inconsistent and the question of solving the equations does not arise.

(iii) If ρ(A) = ρ([AB]) = r, say, then a solution exists. If r < m, the given system of m equations reduces to an equivalent system of r equations. The equations can be solved for x1, ..., xr in terms of xr+1, ..., xn. The n − r unknowns xr+1, ..., xn are arbitrary. If n = r, then there are no arbitrary unknowns. This leads to a unique solution.

(iv) If r < n, then n − r unknowns assume arbitrary values and hence there are infinitely many solutions.

(v) For r ≤ m < n, the system has an infinite number of solutions.
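The working procedure above can be sketched as a small rank-comparison routine (the function name is ours; it classifies a system without producing the solution itself):

```python
import numpy as np

# Steps (i)-(iv) of the working procedure: compare rank A with rank [AB]
# and with the number of unknowns to classify the system.
def classify(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r != r_aug:
        return "inconsistent"
    return "unique" if r == A.shape[1] else "infinite"

# Unique: a nonsingular 3x3 system (the first three equations of Problem 1.7).
print(classify([[1, 2, -1], [3, -1, 2], [1, -1, 1]], [3, 1, -1]))        # unique
# Inconsistent: the system of Problem 1.6 below.
print(classify([[2, 6, 0], [6, 20, -6], [0, 6, -18]], [-11, -3, -1]))    # inconsistent
```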

Problem 1.6

Show that the equations

2x + 6y = −11, 6x + 20y − 6z = −3, 6y − 18z = −1

are not consistent.

Solution. The matrix form of the given system AX = B is

[2 6 0; 6 20 −6; 0 6 −18][x; y; z] = [−11; −3; −1]

Augmented matrix:

[AB] = [2 6 0 −11; 6 20 −6 −3; 0 6 −18 −1]

R21(−3):

[2 6 0 −11; 0 2 −6 30; 0 6 −18 −1]

R32(−3):

[2 6 0 −11; 0 2 −6 30; 0 0 0 −91]    (a)

ρ(A) = 2, ρ([AB]) = 3.

Since the ranks are not equal, the system is inconsistent.

Another method. From (a), we have

A ~ [2 6 0; 0 2 −6; 0 0 0]

Then the system is equivalent to

[2 6 0; 0 2 −6; 0 0 0][x; y; z] = [−11; 30; −91]

⇒ 2x + 6y = −11, 2y − 6z = 30, 0 = −91

The last equation is not possible, and hence the given system is inconsistent.

Problem 1.7

Solve completely the system of equations

x + 2y − z = 3, 3x − y + 2z = 1, x − y + z = −1, 2x − 2y + 3z = 2.

Solution. The matrix form of the system AX = B is

[1 2 −1; 3 −1 2; 1 −1 1; 2 −2 3][x; y; z] = [3; 1; −1; 2]

Augmented matrix:

[AB] = [1 2 −1 3; 3 −1 2 1; 1 −1 1 −1; 2 −2 3 2]

R21(−3), R31(−1), R41(−2) ⇒

[1 2 −1 3; 0 −7 5 −8; 0 −3 2 −4; 0 −6 5 −4]

R32(−3/7), R42(−6/7):

[1 2 −1 3; 0 −7 5 −8; 0 0 −1/7 −4/7; 0 0 5/7 20/7]

R43(5):

[1 2 −1 3; 0 −7 5 −8; 0 0 −1/7 −4/7; 0 0 0 0]

ρ(A) = 3 = ρ([AB])

Hence the equations are consistent. The solution is unique since rank A = number of unknowns. Now the system is equivalent to

[1 2 −1; 0 −7 5; 0 0 −1/7; 0 0 0][x; y; z] = [3; −8; −4/7; 0]

⇒ x + 2y − z = 3, −7y + 5z = −8, −(1/7)z = −4/7

i.e. x = −1, y = 4, z = 4
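Because the system is overdetermined (four equations, three unknowns), a convenient machine check is a least-squares solve, which for a consistent system returns the exact solution (a sketch; variable names are ours):

```python
import numpy as np

# Problem 1.7: the least-squares solution of the 4x3 system coincides with
# the exact solution (-1, 4, 4) found by elimination, with zero residual.
A = np.array([[1.0, 2.0, -1.0],
              [3.0, -1.0, 2.0],
              [1.0, -1.0, 1.0],
              [2.0, -2.0, 3.0]])
b = np.array([3.0, 1.0, -1.0, 2.0])

x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)     # approximately [-1. 4. 4.]
print(rank)  # 3
assert np.allclose(A @ x, b)  # consistent: all four equations are satisfied
```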

Problem 1.8

Investigate for what values of a, b the equations

x + y + z = 6, x + 2y + 3z = 10, x + 2y + az = b

have (i) no solution, (ii) a unique solution and (iii) infinitely many solutions.

Solution. The system in matrix form is

[1 1 1; 1 2 3; 1 2 a][x; y; z] = [6; 10; b]

Here

[AB] = [1 1 1 6; 1 2 3 10; 1 2 a b]

R21(−1), R31(−1):

~ [1 1 1 6; 0 1 2 4; 0 1 a − 1 b − 6]

R32(−1):

~ [1 1 1 6; 0 1 2 4; 0 0 a − 3 b − 10]

Case (i): The system has no solution if it is inconsistent. This happens for ρ(A) ≠ ρ([AB]). If a = 3 and b ≠ 10, then ρ(A) = 2 and ρ([AB]) = 3. Hence for a = 3, b ≠ 10, the system is inconsistent.

Case (ii): The solution is unique if |A| ≠ 0, i.e.

|1 1 1; 1 2 3; 1 2 a| ≠ 0

⇒ |1 1 1; 0 1 2; 0 0 a − 3| ≠ 0

⇒ a − 3 ≠ 0, i.e. a ≠ 3

Hence the system has a unique solution if a ≠ 3.

Case (iii): If a = 3 and b = 10, then ρ(A) = ρ([AB]) = 2. Hence the system is consistent and has infinitely many solutions.
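The three cases can be reproduced numerically by computing the two ranks for sample values of a and b (a sketch; the helper name is ours):

```python
import numpy as np

# Ranks of A and [AB] for the system of Problem 1.8, as functions of a and b.
def ranks(a, b):
    A = np.array([[1, 1, 1], [1, 2, 3], [1, 2, a]], dtype=float)
    rhs = np.array([6, 10, b], dtype=float)
    return (np.linalg.matrix_rank(A),
            np.linalg.matrix_rank(np.column_stack([A, rhs])))

print(ranks(3, 5))    # (2, 3): a = 3, b != 10 -> no solution
print(ranks(5, 7))    # (3, 3): a != 3        -> unique solution
print(ranks(3, 10))   # (2, 2): a = 3, b = 10 -> infinitely many solutions
```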

Problem 1.9

For what values of k do the equations

x + y + z = 1, 2x + y + 4z = k, 4x + y + 10z = k²

have a solution? Solve them completely in each case.

Solution. The given system is AX = B, i.e.

[1 1 1; 2 1 4; 4 1 10][x; y; z] = [1; k; k²]

Augmented matrix:

[AB] = [1 1 1 1; 2 1 4 k; 4 1 10 k²]

R2 − 2R1, R3 − 4R1:

[AB] ~ [1 1 1 1; 0 −1 2 k − 2; 0 −3 6 k² − 4]

R3 − 3R2:

[AB] ~ [1 1 1 1; 0 −1 2 k − 2; 0 0 0 k² − 3k + 2]    (a1)

The system is consistent if ρ(A) = ρ([AB]), i.e. if

k² − 3k + 2 = (k − 1)(k − 2) = 0 or k = 1, 2

Corresponding to these values of k, the solution is obtained from (a1):

[1 1 1; 0 −1 2; 0 0 0][x; y; z] = [1; k − 2; 0]

⇒ x + y + z = 1, −y + 2z = k − 2

⇒ x = k − 3a − 1, y = 2a − k + 2, z = a

It has an infinite number of solutions as follows:

(i) for k = 1: x = −3a, y = 2a + 1, z = a
(ii) for k = 2: x = 1 − 3a, y = 2a, z = a
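Both parametric families can be verified mechanically for several values of the parameter a (a sketch; variable names are ours):

```python
import numpy as np

# Problem 1.9: for k = 1 and k = 2 the families (k - 3a - 1, 2a - k + 2, a)
# satisfy all three equations for every a.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 4.0],
              [4.0, 1.0, 10.0]])

for k, a in [(1, 0.0), (1, 2.5), (2, -1.0), (2, 3.0)]:
    x = np.array([k - 3 * a - 1, 2 * a - k + 2, a])
    assert np.allclose(A @ x, [1, k, k * k])
print("both families check out")
```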

Problem 1.10

Show that the system

3x + 4y + 5z = a, 4x + 5y + 6z = b, 5x + 6y + 7z = c

has no solution unless a + c = 2b. Hence find the solution if a = 1, b = 2, c = 3.

Solution. In matrix form the system AX = B is

[3 4 5; 4 5 6; 5 6 7][x; y; z] = [a; b; c]

Augmented matrix:

[AB] = [3 4 5 a; 4 5 6 b; 5 6 7 c]

R2 − (4/3)R1, R3 − (5/3)R1:

[AB] ~ [3 4 5 a; 0 −1/3 −2/3 b − (4/3)a; 0 −2/3 −4/3 c − (5/3)a]

−3R2, −3R3:

[AB] ~ [3 4 5 a; 0 1 2 −3b + 4a; 0 2 4 −3c + 5a]

R3 − 2R2:

[AB] ~ [3 4 5 a; 0 1 2 −3b + 4a; 0 0 0 6b − 3c − 3a]    (a1)

The system has solutions if ρ(A) = ρ([AB]), i.e.

6b − 3c − 3a = 0 or 2b = a + c

Hence the given system has no solution unless 2b = a + c. For the values a = 1, b = 2, c = 3, the solution is obtained from (a1):

[3 4 5; 0 1 2; 0 0 0][x; y; z] = [1; −2; 0]

⇒ 3x + 4y + 5z = 1, y + 2z = −2

It gives an infinite number of solutions:

x = k + 3, y = −2k − 2, z = k.
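The consistency condition a + c = 2b and the one-parameter family can both be confirmed numerically (a sketch; the helper name is ours):

```python
import numpy as np

# Problem 1.10: the system is consistent exactly when a + c = 2b, and for
# (a, b, c) = (1, 2, 3) the family (k + 3, -2k - 2, k) solves it.
A = np.array([[3.0, 4.0, 5.0],
              [4.0, 5.0, 6.0],
              [5.0, 6.0, 7.0]])

def consistent(a, b, c):
    rhs = np.array([a, b, c], dtype=float)
    return bool(np.linalg.matrix_rank(A)
                == np.linalg.matrix_rank(np.column_stack([A, rhs])))

print(consistent(1, 2, 3))   # True  (1 + 3 == 2 * 2)
print(consistent(1, 2, 4))   # False (1 + 4 != 2 * 2)

for k in (0.0, 1.0, -2.5):
    x = np.array([k + 3, -2 * k - 2, k])
    assert np.allclose(A @ x, [1, 2, 3])
```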

MCQ 1.5

The value of λ for which the following equations fail to have a unique solution

3x − y + λz = 1, 2x + y + z = 2, x + 2y − λz = −1

is

(A) −3/2
(B) −5/2
(C) −7/2
(D) None of these.

MCQ 1.6

The equations

x + y + z = 6, x + 2y + 3z = 10, x + 2y + kz = 10

have an infinite number of solutions. Then

(A) k = 0
(B) k = 3
(C) k = 4
(D) k = 5

MCQ 1.7

Let the augmented matrix of a system of equations be equivalent to

[1 −2 1 3; 0 2 2 −2; 0 0 a + 1 b − 3].

Then

(A) for a unique solution, a + 1 ≠ 0
(B) for no solution, a + 1 = 0 and b − 3 ≠ 0
(C) for infinitely many solutions, a + b = 2
(D) for a unique solution, a + 1 = 0

SAQ 1.4

Determine whether the following system is consistent:

[0 i 1−i; −i 0 i; 1−i −i 0][x; y; z] = [−1; 0; 1].

SAQ 1.5

Test the consistency and solve:

x + y + z = 6, x − y + 2z = 5, 3x + y + z = 8, 2x − 2y + 3z = 7.

SAQ 1.6

Find the values of k for which the equations

x + y + z = 1, x + 2y + 3z = k and x + 5y + 9z = k²

have a solution. For these values of k, solve the system completely.

SAQ 1.7

Test consistency and, if possible, solve the following:

(a) 2x − 3y + 7z = 5, 3x + y − 3z = 13, 2x + 19y − 47z = 32

(b) 3x + 3y + 2z = 1, x + 2y = 4, 10y + 3z = −2, 2x − 3y − z = 5

(c) x + y + z = 3, 2x + y − 3z = 1, 4x + y + 5z = 9, 3x + y − z = 6

(d) 2x + y − z = 2, 4x + y + 2z = 4, 3x + 2y + z = 7, −x + y + z = 4

1.5 Homogeneous equations

In matrix notation, a system of homogeneous linear equations is

AX = 0,    (1.12)

where 0 is the column vector whose entries are all zero. In this case the coefficient matrix A and the augmented matrix [A 0] have the same rank. Hence a system of homogeneous linear equations is always consistent, and x1 = 0, ..., xn = 0 is always a solution, called a trivial solution or a zero solution.

Theorem 1.3

If X1 and X2 are two solutions of (1.12), then any linear combination of them is a solution of (1.12).

Proof. Let X1 and X2 be any two solutions of (1.12). Then

AX1 = 0, AX2 = 0.    (1.13)

Consider a linear combination k1X1 + k2X2 of X1 and X2, where k1 and k2 are any scalars. Then

A(k1X1 + k2X2) = k1AX1 + k2AX2 = k1·0 + k2·0, by (1.13)
= 0

⇒ k1X1 + k2X2 is a solution of (1.12).    QED

Theorem 1.4

The number of linearly independent solutions of the homogeneous equation (1.12) is n − r, where r is the rank of the m × n matrix A.

Proof. We write the m × n matrix A in terms of n column vectors as

A = [C1, C2, ..., Cn]

Then the matrix equation (1.12) takes the form

[C1 C2 ... Cn][x1; x2; ...; xn] = [0] or x1C1 + ... + xnCn = 0    (1.14)

Since the rank of A is r, the column rank of A is r, and then by definition there are r linearly independent columns. We choose them as C1, C2, ..., Cr. The remaining n − r columns Cr+1, ..., Cn depend upon C1, ..., Cr, i.e. they can be expressed as linear combinations of C1, ..., Cr. Hence we write

Cr+1 = h11C1 + ... + h1rCr
Cr+2 = h21C1 + ... + h2rCr
...
Cn = hk1C1 + ... + hkrCr,  where k = n − r.

These equations can be rewritten as

h11C1 + ... + h1rCr − 1·Cr+1 + 0·Cr+2 + ... + 0·Cn = 0
h21C1 + ... + h2rCr + 0·Cr+1 − 1·Cr+2 + ... + 0·Cn = 0
...
hk1C1 + ... + hkrCr + 0·Cr+1 + 0·Cr+2 + ... − 1·Cn = 0    (1.15)

Comparing (1.14) and (1.15), the n − r (= k) column vectors

X1 = [h11; ...; h1r; −1; 0; ...; 0], X2 = [h21; ...; h2r; 0; −1; ...; 0], ..., Xk = [hk1; ...; hkr; 0; 0; ...; −1]    (1.16)

are solutions of the equation (1.12). The theorem will be established if we show that these solutions are linearly independent and that any solution of AX = 0 is expressible as a linear combination of these n − r vectors.

Consider a relation of the form

c1X1 + c2X2 + ... + ckXk = 0, k = n − r    (1.17)

Substituting the expressions for X1, ..., Xk from (1.16), the last k components of the left side are −c1, −c2, ..., −ck, so (1.17) gives

⇒ c1 = c2 = ... = ck = 0

Therefore the vectors X1, X2, ..., Xk are linearly independent.

Let X = [x1, ..., xn]′ be any solution of AX = 0. Since X1, X2, ..., Xk are also solutions of AX = 0, the linear combination

Y = X + xr+1X1 + xr+2X2 + ... + xnXk,  k = n − r,    (1.18)

is also a solution of AX = 0. Write Y = [y1 ... yr yr+1 ... yn]′, so that

[y1 ... yr yr+1 ... yn]′ = [x1 ... xr xr+1 ... xn]′ + xr+1[h11 ... h1r −1 0 ... 0]′ + xr+2[h21 ... h2r 0 −1 ... 0]′ + ... + xn[hk1 ... hkr 0 0 ... −1]′

Comparing the (r + 1)th, (r + 2)th, ..., nth components on both sides, we get

yr+1 = xr+1 + xr+1(−1) + 0 + ... + 0 = 0
yr+2 = xr+2 + 0 + xr+2(−1) + 0 + ... + 0 = 0
...
yn = xn + 0 + ... + xn(−1) = 0

⇒ Y = [y1 y2 ... yr 0 0 ... 0]′

The vector Y is a solution of AX = 0, i.e. AY = 0, or

y1C1 + y2C2 + ... + yrCr = 0

Since C1, ..., Cr are linearly independent, this gives

y1 = y2 = ... = yr = 0

Thus Y is the zero column vector, and (1.18) gives

X = −xr+1X1 − xr+2X2 − ... − xnXk

Thus any solution X of AX = 0 is expressible as a linear combination of the (n − r) linearly independent solutions X1, ..., Xk.    QED

Implications of the theorem

(i) Since n − r is the number of linearly independent solutions, n − r ≥ 0, i.e. r ≤ n; thus r cannot exceed n. For r = n, the equation AX = 0 has zero (= n − r) linearly independent solutions, i.e. it has no linearly independent solutions. The equation then has only the zero solution. For example, consider

x + 2y = 0, 3x + 4y = 0

In matrix form we have

AX = 0, i.e. [1 2; 3 4][x; y] = [0; 0]

The rank of the matrix A is 2, which is the value of n. The equations have only x = 0, y = 0 as a solution.

(ii) For r < n, the number of linearly independent solutions is n − r. Also any linear combination of these n − r solutions is a solution, and hence the equation has infinitely many solutions.

(iii) If m < n, i.e. the number of equations is less than the number of unknowns, the equation AX = 0 must possess a non-zero solution, and the number of solutions will be infinitely many.

Working procedure to solve AX = 0

(i) Write the given system in the matrix form AX = 0, where A is m × n.

(ii) Reduce A to a triangular form by elementary row transformations and find its rank r (if r = n, then the system has only the trivial solution).

(iii) By (ii) the given system reduces to an equivalent system of r equations. Solve these equations by Cramer's rule or by any other method of elimination and get the r values x1, ..., xr. Express these x1, ..., xr in terms of the other unknowns xr+1, ..., xn. These n − r unknowns are arbitrary and can be given any suitable values.
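In numerical practice the set of solutions of AX = 0 is the null space of A, which can be obtained from a singular value decomposition; a minimal sketch (the function name is ours, and the tolerance is an assumption):

```python
import numpy as np

# Null space of A from the SVD: the rows of vh beyond the numerical rank
# span the solution space of AX = 0, with dimension n - r.
def null_space(A, tol=1e-10):
    A = np.asarray(A, dtype=float)
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].T          # columns span the null space (n - r vectors)

N = null_space([[1, 2], [2, 4]])   # rank 1, so one independent solution
print(N.shape)                     # (2, 1)
x = N[:, 0]
assert np.allclose(np.array([[1, 2], [2, 4]]) @ x, 0)
```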

Problem 1.11

Show, by considering the rank of an appropriate matrix, that the following system of equations possesses no solution other than the trivial solution:

3x − y + z = 0, −15x + 6y − 5z = 0, 5x − 2y + 2z = 0.

Solution. The matrix form of the given system is

AX = 0 ⇒ [3 −1 1; −15 6 −5; 5 −2 2][x; y; z] = [0; 0; 0]

Apply R21(5), R31(−5/3):

[3 −1 1; 0 1 0; 0 −1/3 1/3][x; y; z] = [0; 0; 0]

R32(1/3):

[3 −1 1; 0 1 0; 0 0 1/3][x; y; z] = [0; 0; 0]    (a1)

Now ρ(A) = 3 = n. Hence this is the case r = n, and the system has only the trivial solution. We confirm this from (a1). From (a1), we get

3x − y + z = 0, y = 0, (1/3)z = 0

⇒ x = 0, y = 0, z = 0

Hence the system has only the trivial solution.

Problem 1.12

Find all non-trivial solutions of

(i) x − 2y + 3z = 0, 2x + 5y + 6z = 0
(ii) x − 2y + z = 0, x − 2y − z = 0, 2x − 4y − 5z = 0

Solution. (i) We have

AX = [1 −2 3; 2 5 6][x; y; z] = [0; 0]    (a1)

R21(−2):

[1 −2 3; 0 9 0][x; y; z] = [0; 0]

Here ρ(A) = r = 2 < 3 = n. Then we get an infinite number of solutions. This also follows from (a1):

x − 2y + 3z = 0, 9y = 0

Let z = a. Then the solution is

x = −3a, y = 0, z = a.

These are an infinite number of solutions.

(ii)

AX = [1 −2 1; 1 −2 −1; 2 −4 −5][x; y; z] = [0; 0; 0]

R21(−1), R31(−2):

[1 −2 1; 0 0 −2; 0 0 −7][x; y; z] = [0; 0; 0]    (a2)

The rank of A is 2, so the number of linearly independent solutions is 3 − 2 = 1. From (a2), we have

x − 2y + z = 0, −2z = 0, −7z = 0

⇒ z = 0, x − 2y = 0.

Then the solution is

x = 2a, y = a, z = 0

Problem 1.13

State with reason whether the following system has a non-trivial solution, without solving it. If yes, find all solutions of the system:

x + 2y − 3z = 0, 2x − y + 2z = 0, x − 8y + 13z = 0.

Solution. Since the system is in the homogeneous form AX = 0, the rank of the coefficient matrix is equal to the rank of the augmented matrix. Hence the system is consistent, i.e. it has a solution. Also

|A| = |1 2 −3; 2 −1 2; 1 −8 13| = (−13 + 16) − 2(26 − 2) − 3(−16 + 1) = 3 − 48 + 45 = 0

⇒ the system has a non-trivial solution.

Now

A = [1 2 −3; 2 −1 2; 1 −8 13] ~ [1 2 −3; 0 −5 8; 0 −10 16] ~ [1 2 −3; 0 −5 8; 0 0 0]

Then the given system reduces to

[1 2 −3; 0 −5 8; 0 0 0][x; y; z] = [0; 0; 0]

⇒ x + 2y − 3z = 0, −5y + 8z = 0

⇒ x = −a/5, y = 8a/5, z = a

It has an infinite number of solutions.
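Scaling the family above by 5 gives the integer direction (−1, 8, 5), which is easy to check by hand or by machine (a sketch; variable names are ours):

```python
import numpy as np

# Problem 1.13: every solution is a multiple of (-1, 8, 5), and the rank
# of A is 2, so there is n - r = 1 independent solution.
A = np.array([[1.0, 2.0, -3.0],
              [2.0, -1.0, 2.0],
              [1.0, -8.0, 13.0]])

v = np.array([-1.0, 8.0, 5.0])
print(A @ v)                      # [0. 0. 0.]
print(np.linalg.matrix_rank(A))   # 2
```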

MCQ 1.8

The system of equations

ax + by + cz = 0, bx + cy + az = 0, cx + ay + bz = 0

has a nontrivial solution if

(A) a + b + c ≠ 0
(B) a + b + c = 0
(C) a = b = c
(D) a = b ≠ c

MCQ 1.9

The following system of equations is consistent and has nontrivial solutions:

(k − 1)x + (4k − 2)y + (k + 3)z = 0
(k − 1)x + (3k + 1)y + 2kz = 0
2x + (3k + 1)y + 3(k − 1)z = 0

Then

(A) k² + 5k + 6 = 0
(B) k² − 5k + 6 = 0
(C) k² − 5k − 6 = 0
(D) k² − 4k + 3 = 0

SAQ 1.8

Solve the following equations:

2x − y + 3z = 0, 3x + 2y + z = 0, x − 4y + 5z = 0.

SAQ 1.9

Let A denote any skew-symmetric matrix of order 3:

A = [0 c −b; −c 0 a; b −a 0].

Obtain in parametric form the solutions of the equation AX = 0 and hence show, without computing the product, that AB = 0, where

B = [a² ab ac; ab b² bc; ac bc c²].

SAQ 1.10

Solve the equations

(a) 5x + 2y + 3z = 0, 3x + 5y + 2z = 0, 2x + 3y + 5z = 0
(b) x + 2y − z = 0, 2x + 4y − 2z = 0, −x − 2y + z = 0

SUMMARY

Methods to solve a system of simultaneous linear equations (homogeneous and non-homogeneous) are discussed. These methods include Cramer's rule and the Gauss-Jordan elimination method.

KEY WORDS

Consistent system of equations
Simultaneous linear equations
Homogeneous equations
Non-homogeneous equations
Cramer's rule
Gauss-Jordan method

UNIT 02-02: LINEAR INDEPENDENCE OF VECTORS    33-42

LEARNING OBJECTIVES

After successful completion of the unit, you will be able to
Explain the linear independence of vectors
Apply it to identify linearly independent or dependent vectors

INTRODUCTION

In the beginning, the concept of a matrix was introduced as an array of numbers/objects obeying certain mathematical operations. If that were the only description of a matrix, its study would be restrictive. For its fullest fertility it has to be associated with a like discipline of rich structure and applicability. Fortunately there is a natural linkage of matrix theory with linear algebra, for the simple reason that the vector concept of a linear space is inherent in the structure of a matrix. Consider a matrix A of order m × n:

A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn].    (2.1)

Its first row can be denoted by an object R1 having n components:

R1 = (a11, a12, ..., a1n)

In a linear space this is a vector. Similarly the remaining rows R2, ..., Rm constitute vectors, and then the matrix A becomes a vector itself having m components:

A = [R1; R2; ...; Rm].    (2.2)

This is not the only description of A; it can also be thought of as a vector with n components:

A = [C1 C2 ... Cn],

where

C1 = (a11, a21, ..., am1)′, C2 = (a12, a22, ..., am2)′, ..., Cn = (a1n, a2n, ..., amn)′.

In brief, concepts of linear algebra can safely be brought into the domain of matrix theory. As a matter of fact, a linear transformation from one vector space to another is represented by a matrix whose elements are members of the field over which the spaces are defined. Once vectors enter into matrix theory, their other attributes, like linear independence, can easily be extended to the row vectors and column vectors of a matrix.

2.1 Linear independence of row/column vectors

Consider an ordered set S of elements x1, ..., xn, where each xi ∈ a field F. We write

x = (x1, x2, ..., xn).

We call x an n-dimensional vector. Its n components are x1, x2, ..., xn. The zero vector is denoted by 0 = (0, 0, ..., 0).

Remark. Hereafter we consider F = R. Thus all the scalars will be real numbers.

Addition of vectors. Let x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn). Then

x + y = (x1 + y1, x2 + y2, ..., xn + yn)    (2.3)

Multiplication of a vector by a scalar. Let k be any scalar ∈ R. Then

kx = (kx1, kx2, ..., kxn)    (2.4)

Remark.

0x = 0(x1, x2, ..., xn) = (0x1, 0x2, ..., 0xn) = (0, 0, ..., 0) = 0, for all x    (2.5a)

k0 = 0, for all k ∈ R.    (2.5b)

Illustration. Let x = (2, −1, 0), y = (3, 2, 4), z = (0, 1, −3). Then

2x = 2(2, −1, 0) = (4, −2, 0)

−y = (−1)(3, 2, 4) = (−3, −2, −4)

2x − y + z = (4, −2, 0) + (−3, −2, −4) + (0, 1, −3) = (4 − 3 + 0, −2 − 2 + 1, 0 − 4 − 3) = (1, −3, −7)

Equality of vectors

(x1, x2, ..., xn) = (y1, y2, ..., yn) ⇔ x1 = y1, ..., xn = yn

MCQ 2.1

Two vectors x = (a, 2, b) and y = (3, c, −1) are equal. Then the value of the determinant

|a b c; b c a; c a b|

is

(A) −48
(B) −50
(C) −52
(D) −54

Linearly independent vectors

The vectors x1, x2, ..., xn are said to be linearly independent if

c1x1 + c2x2 + ... + cnxn = 0 ⇒ c1 = c2 = ... = cn = 0    (2.6)

Otherwise the vectors are said to be linearly dependent. In this case one can write one of the vectors x1, x2, ..., xn as a linear combination of the others, e.g.

xn = c1x1 + ... + cn−1xn−1,

where not all c1, ..., cn−1 are zero.

Note. If there are m equations in n unknowns and n > m, then the number of independent unknowns is n − m.

Remark. An equivalent statement of "x1, ..., xn are linearly dependent or independent" is that the set {x1, ..., xn} is linearly dependent or independent.

Illustration

Let the three vectors be x = (1, 0, 0), y = (0, 1, 0) and z = (0, 0, 1).

Consider the relation

c1x + c2y + c3z = 0    (a1)

Then

c1(1, 0, 0) + c2(0, 1, 0) + c3(0, 0, 1) = (0, 0, 0)

i.e. (c1, 0, 0) + (0, c2, 0) + (0, 0, c3) = (0, 0, 0)

i.e. (c1 + 0 + 0, 0 + c2 + 0, 0 + 0 + c3) = (0, 0, 0)

or (c1, c2, c3) = (0, 0, 0)

or c1 = c2 = c3 = 0, by equality of vectors    (a2)
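The same conclusion can be reached numerically: vectors are linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors. A minimal sketch for the illustration above (variable names ours):

```python
import numpy as np

# Stack x, y, z as rows; full rank (3) means linear independence.
V = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
print(np.linalg.matrix_rank(V))   # 3 -> x, y, z are linearly independent
```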

Thus (a1) ⇒ (a2). Hence, by definition, the given vectors x, y, z are linearly independent.

Theorem 5.1

The vectors x1, x2, ..., xn are linearly dependent if at least one of the vectors is the zero vector.

Proof. Let x1 = 0. Then by (2.5b), we have

1·x1 = 0

Also, noting (2.5a), we write

1·x1 + 0·x2 + 0·x3 + ... + 0·xn = 0 + 0 + ... + 0 = 0

Thus the linear combination on the left side is the zero vector, but not all the coefficients of the vectors x1, ..., xn are zero. Hence, by the definition in (2.6), the vectors x1, ..., xn are linearly dependent.    QED

Theorem 5.2

The singleton set {x} is linearly independent ⇔ x ≠ 0.

Hint. For x ≠ 0, ax = 0 ⇒ a = 0.

Proof. Let {x} be linearly independent, i.e. ax = 0 ⇒ a = 0. If x = 0, then by Theorem 5.1 the set {x} becomes linearly dependent, which contradicts the assumption that {x} is linearly independent. Hence it follows that x ≠ 0.

Conversely, assume that x ≠ 0. Then ax = 0 ⇒ a = 0. Hence {x} is linearly independent.    QED

Theorem 5.3

If the set X = {x1, x2, ..., xn} is linearly independent, then any nonempty subset of X is linearly independent.

Proof. Let X be a linearly independent set and let Y = {x1, x2, ..., xm}, 1 ≤ m ≤ n, be any subset of X. Consider

c1x1 + ... + cmxm = 0    (2.7)

⇒ [c1x1 + ... + cmxm] + 0·xm+1 + ... + 0·xn = [0] + 0 + ... + 0

⇒ c1x1 + ... + cmxm + 0·xm+1 + ... + 0·xn = 0

⇒ c1 = c2 = ... = cm = 0, since X is linearly independent

Thus (2.7) ⇒ c1 = c2 = ... = cm = 0.

⇒ The set Y is linearly independent.    QED

Row rank and column rank

Let A = [aij] be the m × n matrix over F given in (2.1). It has m rows R1, R2, ..., Rm and n columns C1, C2, ..., Cn. Each row contains n elements of F, and hence we consider each row as a vector. Thus the set [R1, R2, ..., Rm] constitutes a row space of the matrix A. Similarly the columns

C1 = (a11, a21, ..., am1)′, C2 = (a12, a22, ..., am2)′, ..., Cn = (a1n, a2n, ..., amn)′

define n vectors called column vectors, and [C1, C2, ..., Cn] forms a column space of the matrix. Here we write a column vector as the transpose of a row vector.

The dimension of the row space is called the row rank of the matrix A, and the dimension of the column space the column rank of A. These ranks can also be defined in terms of linear independence of the row vectors and the column vectors, since the dimensionality of a space is closely connected with the linear independence of the vectors which generate the space.

Row rank. The number of linearly independent rows of a matrix A is called the row rank of A.

Column rank. The number of linearly independent columns of a matrix A is called the column rank of A.

In fact,

row rank of A = column rank of A = rank of A

Problem 2.1

Show that the row rank of the matrix

A = [1 2 3; 1 2 5; 2 4 8]

is 2.

Solution. We have

R1 = (1, 2, 3), R2 = (1, 2, 5), R3 = (2, 4, 8)

and

C1 = (1, 1, 2)′, C2 = (2, 2, 4)′, C3 = (3, 5, 8)′

Let

R1 = c1R2 + c2R3    (a1)

i.e. (1, 2, 3) = c1(1, 2, 5) + c2(2, 4, 8)

⇒ 1 = c1 + 2c2, 2 = 2c1 + 4c2, 3 = 5c1 + 8c2

⇒ c1 = −1, c2 = 1

Then (a1) ⇒ R1 = −R2 + R3

This shows that R1, R2, R3 are linearly dependent. Since there is one relation among R1, R2 and R3, the number of independent row vectors is 3 − 1 = 2. Hence the row rank of A is 2. Similarly, verify yourself that C1, C2, C3 are linearly dependent and that the column rank of A is 2. It further implies that the dimension of the row space is 2, and so is the case with the column space. Also verify yourself that the rank of A is 2 by reducing it to its normal or canonical form.

Problem 2.2
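Both facts of Problem 2.1 are quick to confirm numerically (a sketch; variable names ours):

```python
import numpy as np

# Check R1 = -R2 + R3 and that the rank of A is 2.
A = np.array([[1, 2, 3],
              [1, 2, 5],
              [2, 4, 8]])

R1, R2, R3 = A
print(np.array_equal(R1, -R2 + R3))   # True
print(np.linalg.matrix_rank(A))       # 2
print(np.linalg.matrix_rank(A.T))     # 2: column rank equals row rank
```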

Show that the row rank and the column rank of

A = [1 2 3; −4 0 5]

are each 2.

Solution. Here we have

R1 = (1, 2, 3), R2 = (−4, 0, 5), C1 = (1, −4)′, C2 = (2, 0)′, C3 = (3, 5)′

Consider a relation

c1R1 + c2R2 = 0

i.e. c1(1, 2, 3) + c2(−4, 0, 5) = (0, 0, 0)

⇒ c1 − 4c2 = 0, 2c1 = 0, 3c1 + 5c2 = 0

⇒ c1 = c2 = 0

Hence R1 and R2 are linearly independent. Thus the row rank of A is 2.

Let

C3 = (3, 5)′ = d1C1 + d2C2

i.e. d1(1, −4)′ + d2(2, 0)′

⇒ 3 = d1 + 2d2, 5 = −4d1

i.e. d1 = −5/4, d2 = 17/8

⇒ C3 = −(5/4)C1 + (17/8)C2

Hence C1, C2, C3 are linearly dependent, and the number of independent column vectors is 2. Thus the column rank of A is 2.

Problem 2.3

Are the following vectors linearly dependent? If so, find the relation between them.

x1 = (1, 2, 4), x2 = (2, −1, 3), x3 = (0, 1, 2), x4 = (−3, 7, 2)

Solution. Consider a linear relation between the given vectors:

c1x1 + c2x2 + c3x3 + c4x4 = 0    (a1)

where c1, c2, c3, c4 are scalars.

⇒ c1(1, 2, 4) + c2(2, −1, 3) + c3(0, 1, 2) + c4(−3, 7, 2) = (0, 0, 0)

⇒ (c1 + 2c2 − 3c4, 2c1 − c2 + c3 + 7c4, 4c1 + 3c2 + 2c3 + 2c4) = (0, 0, 0)

⇒ c1 + 2c2 − 3c4 = 0, 2c1 − c2 + c3 + 7c4 = 0, 4c1 + 3c2 + 2c3 + 2c4 = 0

This is a homogeneous matrix equation:

[1 2 0 −3; 2 −1 1 7; 4 3 2 2][c1; c2; c3; c4] = [0; 0; 0]

R2 − 2R1, R3 − 4R1 ⇒

[1 2 0 −3; 0 −5 1 13; 0 −5 2 14][c1; c2; c3; c4] = [0; 0; 0]

R3 − R2:

[1 2 0 −3; 0 −5 1 13; 0 0 1 1][c1; c2; c3; c4] = [0; 0; 0]

⇒ c1 + 2c2 − 3c4 = 0, −5c2 + c3 + 13c4 = 0, c3 + c4 = 0

Taking c4 = k, the above gives

c3 = −k, c2 = (12/5)k, c1 = 3k − (24/5)k = −(9/5)k

Then (a1) ⇒ −(9/5)k x1 + (12/5)k x2 − k x3 + k x4 = 0

i.e. 9x1 − 12x2 + 5x3 − 5x4 = 0
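The relation found above is easy to verify by machine (a sketch; variable names ours):

```python
import numpy as np

# Check the relation 9*x1 - 12*x2 + 5*x3 - 5*x4 = 0 and that the four
# vectors span only a rank-3 space, confirming linear dependence.
x1 = np.array([1, 2, 4])
x2 = np.array([2, -1, 3])
x3 = np.array([0, 1, 2])
x4 = np.array([-3, 7, 2])

print(9 * x1 - 12 * x2 + 5 * x3 - 5 * x4)                  # [0 0 0]
print(np.linalg.matrix_rank(np.array([x1, x2, x3, x4])))   # 3 < 4 vectors
```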

Hence the given vectors are linearly dependent, and the above is the relation between them.

MCQ 2.2

Let x1 = (1, −1, 0), x2 = (0, 1, −1), x3 = (0, 0, 1) be elements of R³. The set of vectors {x1, x2, x3} is

(a) linearly independent
(b) linearly dependent
(c) null
(d) none of these.

MCQ 2.3

The vectors x = (1, 3, 4, 2), y = (3, −5, 2, 6) and z = (2, −1, 3, 4) are linearly dependent. If the relation amongst them is ax + by + cz = 0, where a, b, c are scalars, then a, b, c are the roots of the equation:

(A) x³ − 2x² − x + 2 = 0
(B) x³ − 3x² − x + 3 = 0
(C) x³ + 3x² − x − 3 = 0
(D) x³ + 2x² − x − 2 = 0

MCQ 2.4

The value(s) of α for which the vectors (1, 2, 9, 8), (2, 0, 0, α), (α, 0, 0, 8), (0, 0, 1, 0) are linearly dependent is

(A) 1, 4
(B) ±4
(C) −4, 2
(D) 2, 4

SAQ 2.1

Examine the following systems of vectors for linear dependence. If dependent, find the relation between them.

(a) x1 = [3, 1, 1], x2 = [2, 0, −1], x3 = [4, 2, 1]

(b) x1 = (1, 1, 0, 1), x2 = (1, 1, 1, 1), x3 = (4, 4, 1, 1) and x4 = (1, 0, 0, 1)

SUMMARY

The concept of linearly dependent and independent vectors is explained with illustrations. Some of the results concerning these vectors are discussed.

KEY WORDS

Linearly dependent vectors
Linearly independent vectors
Row rank
Column rank

UNIT 03-01: THE EIGEN VALUE PROBLEM

43-69

LEARNING OBJECTIVES

After successful completion of the unit, you will be able to
Explain the eigen value problem
Apply it to find eigen values and eigen vectors

INTRODUCTION

Consider a vector x on which some operator or function f acts. Naturally x is going to change. We put a condition on f such that it will not change the direction of x; there may be a change in its magnitude. In such a situation

x → λx, i.e. f x = λx    (3.1)

where λ is a scalar. This is the motivation for the eigen value problem. In the present consideration of matrix algebra, we take f to be a square matrix A such that the product Ax is defined, and then (3.1) becomes

Ax = λx    (3.2)

ª x1 º «  » » « «¬ xn »¼

(3.3a)

3.1 The eigenvalue problem defined Let a vector x be a column vector

x

which is transformed in to another column vector

ª y1 º «  » » « «¬ yn »¼

y

by a linear transformation

(3.3b)

i.e.

y

Ax

where A is an n -square matrix. We are interested in finding the vectors x that are transformed to x or Ox , where O is a scalar. This gives rise to an eigenvalue problem discussed above in (3.2). The problem is to determine the scalars O and the nonzero vectors x which simultaneously 43

satisfy the equation (3.2) for a given matrix A = [aij] of order n.

Equation (3.2) is called an eigen equation or characteristic equation and can be rewritten as

λIx − Ax = 0,  ∵ Ix = x

where I is the unit matrix of order n.

⇒ (λI − A) x = 0   (3.4a)

i.e.

| λ − a11   −a12    …    −a1n   | | x1 |   | 0 |
|  −a21    λ − a22  …    −a2n   | | x2 | = | 0 |   (3.4b)
|    ⋮         ⋮              ⋮  | |  ⋮ |   | ⋮ |
|  −an1     −an2    …  λ − ann  | | xn |   | 0 |

Here the coefficient matrix is (λI − A), see (3.4a). These equations (3.4) have a nonzero solution x if the determinant of the coefficient matrix vanishes:

| λI − A | = 0   (3.5a)

i.e.

| λ − a11   −a12    …    −a1n   |
|  −a21    λ − a22  …    −a2n   | = 0   (3.5b)
|    ⋮         ⋮              ⋮  |
|  −an1     −an2    …  λ − ann  |

The expansion of the determinant gives a polynomial f(λ) in λ of degree n.

Characteristic polynomial
The polynomial f(λ) is called the characteristic polynomial of A.

Characteristic or secular equation
The equation (3.5):
f(λ) = 0   (3.6)
is the characteristic or secular equation of A.

Eigen values or characteristic roots
The n roots λ1, …, λn of (3.6) are called the eigenvalues or characteristic roots of A.

Eigen vector
Let λ = λ1 be any characteristic root of A. For this value (3.4a) becomes
(λ1 I − A)x = 0
It has a nonzero solution for x. Such a solution x is called an eigen vector or characteristic vector of A.

Remark 1. Physicists use the terms eigenvalue, eigenvector and eigenvalue problem, the words being derived from the German word EIGENWERT. Some call them proper values and proper vectors, and social scientists call them latent roots and latent vectors. We shall be using both eigenvalues, eigenvectors and characteristic roots, characteristic vectors.

Remark 2. Since an eigen vector x is a column vector, we write a column matrix X in place of x.

Problem 3.1
Show that the eigen values of the orthogonal matrix

A = | cos θ   −sin θ |
    | sin θ    cos θ |

are of unit modulus.

Hint. λ is of unit modulus ⟺ |λ| = 1
Solution. Here we have

λI − A = λ | 1  0 | − | cos θ  −sin θ | = | λ − cos θ    sin θ    |
           | 0  1 |   | sin θ   cos θ |   |  −sin θ    λ − cos θ  |

Then the eigen equation |λI − A| = 0 gives

| λ − cos θ    sin θ    |
|  −sin θ    λ − cos θ  | = 0

⇒ (λ − cos θ)² + sin² θ = 0
⇒ (λ − cos θ)² = −sin² θ
⇒ λ − cos θ = ± i sin θ
⇒ λ = cos θ ± i sin θ   (a1)

The two eigen values are given by (a1). Their modulus is given by

|λ| = |cos θ ± i sin θ| = (cos² θ + sin² θ)^{1/2} = 1
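The result of Problem 3.1 is easy to spot-check numerically; a small sketch (the angle θ = 0.7 is an arbitrary choice):

```python
import math

theta = 0.7  # any angle will do
for sign in (+1, -1):
    # the two eigenvalues found above: cos(theta) +/- i*sin(theta)
    lam = complex(math.cos(theta), sign * math.sin(theta))
    print(abs(lam))  # 1.0 up to floating-point rounding
    # each root also satisfies the characteristic equation
    # lambda^2 - 2*cos(theta)*lambda + 1 = 0
    assert abs(lam**2 - 2 * math.cos(theta) * lam + 1) < 1e-12
```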

Problem 3.2
Show that the eigen values of an idempotent matrix are either zero or unity.

Hint. A is idempotent ⟺ A² = A
Solution. The eigen equation for a nonzero vector x is
Ax = λx   (a1)
Premultiplying by A,
A(Ax) = A(λx) = λ(Ax)
⇒ A²x = λ(λx), by (a1)
⇒ Ax = λ²x,  ∵ A² = A
⇒ λx = λ²x, by (a1)
⇒ (λ² − λ)x = 0,  ∵ x ≠ 0
⇒ λ² − λ = 0  or  λ(λ − 1) = 0
⇒ λ = 0 or 1
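A quick numerical illustration, using the idempotent matrix P = ½[[1, 1], [1, 1]] (an arbitrary example of our choosing): for a 2×2 matrix the eigenvalues are the roots of t² − (trace)t + det = 0, and here they come out to exactly 0 and 1.

```python
# P is idempotent: P P = P
P = [[0.5, 0.5], [0.5, 0.5]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul2(P, P) == P  # idempotency

tr = P[0][0] + P[1][1]                     # 1.0
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]  # 0.0
disc = (tr * tr - 4 * det) ** 0.5
roots = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(roots)  # [0.0, 1.0], exactly zero and unity as Problem 3.2 predicts
```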

Problem 3.3
Show that if λ is an eigenvalue of a nonsingular matrix A, then λ⁻¹ is an eigenvalue of A⁻¹.
Solution. Let λ be any nonzero eigenvalue of a nonsingular matrix A and let X be the corresponding eigenvector. Then
AX = λX
Since A is non-singular, A⁻¹ exists. Premultiplying the above by A⁻¹, we get
A⁻¹AX = A⁻¹λX
⇒ IX = λA⁻¹X  or  X = λA⁻¹X
⇒ A⁻¹X = (1/λ)X
This means that 1/λ is the eigenvalue of A⁻¹.

Problem 3.4
Show that if B is an invertible matrix of the same order as A, then the matrices A and B⁻¹AB have the same characteristic roots.
Solution. The characteristic roots will be the same if the two matrices have the same characteristic equation.
Now the characteristic equation of A is
|λI − A| = 0   (a1)
Denote C = B⁻¹AB. Then the characteristic equation of C is
|λI − C| = 0   (a2)
Now
B⁻¹λIB = B⁻¹λB,  ∵ IB = B
= λB⁻¹B = λI
With this expression for λI and noting the value of C, (a2) becomes
|B⁻¹λIB − B⁻¹AB| = 0
⇒ |B⁻¹(λI − A)B| = 0
or |B⁻¹| |λI − A| |B| = 0
⇒ (|B⁻¹| |B|) |λI − A| = 0
⇒ |B⁻¹B| |λI − A| = 0
⇒ |I| |λI − A| = 0
or |λI − A| = 0,  ∵ |I| = 1
⇒ (a1).
Hence A and B⁻¹AB have the same characteristic equation.
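For 2×2 matrices the characteristic polynomial is λ² − (tr M)λ + det M, so the invariance proved in Problem 3.4 can be checked by comparing traces and determinants; a minimal sketch with exact fractions (the matrices A and B below are arbitrary examples, and B must be invertible):

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(B):
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    assert det != 0, "B must be invertible"
    return [[ B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det,  B[0][0] / det]]

def charpoly2(M):
    # (trace, det) determine lambda^2 - trace*lambda + det
    return (M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0])

A = [[F(1), F(4)], [F(2), F(3)]]
B = [[F(2), F(1)], [F(1), F(1)]]
C = matmul(matmul(inv2(B), A), B)   # B^{-1} A B
print(charpoly2(A), charpoly2(C))   # identical: trace 4, det -5 for both
```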

Problem 3.5
Show that if λ is a characteristic root of a nonsingular matrix A, then |A|/λ is a characteristic root of adj A.
Hint. A characteristic root of a nonsingular matrix is non zero.
Solution. Let λ be a characteristic root of a nonsingular n-square matrix A. Then λ ≠ 0. Let the corresponding vector be X. Then
AX = λX   (a1)
Now adj A is also an n-square matrix. Denote adj A = B. Premultiplying (a1) by B, we get
BλX = BAX
⇒ λ(BX) = (BA)X
⇒ λ(BX) = |A| IX,  ∵ BA = (adj A)·A = |A| I
⇒ λ(BX) = |A| X,  ∵ IX = X
⇒ BX = (|A|/λ) X
Hence |A|/λ is the eigenvalue of B = adj A.

Problem 3.6
Let A = [aij] be any n-square matrix. Show that
(i) the sum of the eigenvalues of A is Σ aii = a11 + a22 + … + ann
and
(ii) the product of the eigenvalues of A is |A|.

Hint.

| a11 − λ    a12    …     a1n   |
|  a21     a22 − λ  …     a2n   | = λⁿ + s1 λ^{n−1} + … + s_{n−1} λ + (−1)ⁿ |A|,
|    ⋮         ⋮              ⋮  |
|  an1      an2     …  ann − λ  |

where sm = (−1)^m (sum of all the m-square principal minors of A), m = 1, …, n − 1.

Solution. The eigen equation |A − λI| = 0 gives

| a11 − λ    a12    …     a1n   |
|  a21     a22 − λ  …     a2n   | = 0
|    ⋮         ⋮              ⋮  |
|  an1      an2     …  ann − λ  |

⇒ λⁿ + s1 λ^{n−1} + … + s_{n−1} λ + (−1)ⁿ |A| = 0   (a1)

where sm = (−1)^m (sum of all the m-square principal minors of A), m = 1, …, n − 1.

Since (a1) is a polynomial equation of degree n in λ, it has n roots, λ1, λ2, …, λn, say.

⇒ sum of the roots = λ1 + λ2 + … + λn = −s1/1 = −(−1)¹(a11 + … + ann)
= a11 + a22 + … + ann

and product of the roots = λ1 λ2 … λn = (−1)ⁿ (−1)ⁿ |A| / 1

= |A|.

MCQ 3.1
If λ1, λ2, …, λn are the characteristic roots of an n-square matrix A and if k is a scalar, then the characteristic roots of kA are
(A) k + λ1, k + λ2, …, k + λn
(B) k − λ1, k − λ2, …, k − λn
(C) kλ1, kλ2, …, kλn
(D) λ1 − k, λ2 − k, …, λn − k

MCQ 3.2
The characteristic roots of the diagonal matrix A = diag (1, −1, 2, −2) satisfy the equation
(A) x⁴ − 4x² = 0
(B) x⁴ − x² = 0
(C) x⁴ − 10x² + 9 = 0
(D) x⁴ − 5x² + 4 = 0

MCQ 3.3
Let A and B be orthogonal matrices of orders 3 and 5 respectively. Consider the statements:
(a) An eigen value of A is either 1 or −1
(b) An eigen value of B is either 1 or −1
Select the true statement from the following:
(A) Only (a) is true
(B) Both are true
(C) Only (b) is true
(D) Both are false

MCQ 3.4
Consider the statements

(a) A = | 1   0  1 |  has one eigen value zero
        | 0   2  3 |
        | 3  −2  0 |

(b) Every singular matrix has at least one zero eigen value
Choose the true statement from the following:
(A) Both are true and (b) is the reason for (a)
(B) Both are true but (b) is not the reason for (a)
(C) Both are false
(D) Only (a) is true

MCQ 3.5
If 3 is an eigenvalue of A, then an eigenvalue of A² is
(A) 5
(B) 6
(C) 8
(D) 9

SAQ 3.1

If A and B are n-square matrices, show that the matrices AB and BA have the same eigen values.

SAQ 3.2
Show that the eigenvalues of A and A′ are the same.
Hint. |A′| = |A|

SAQ 3.3
If λ is an eigenvalue of A, then show that λ^m is an eigenvalue of A^m, for any positive integral value of m.
Hint. Use mathematical induction.

3.2 Some important results

Theorem 3.1

Let λ1, λ2, …, λk be distinct characteristic roots of a matrix A, and let X1, X2, …, Xk be any nonzero characteristic vectors associated with these values respectively. Then X1, X2, …, Xk are linearly independent.

Hint. The Vandermonde matrix

| 1  x1  x1²  …  x1^{n−1} |
| 1  x2  x2²  …  x2^{n−1} |
| ⋮   ⋮    ⋮           ⋮   |
| 1  xn  xn²  …  xn^{n−1} |

is singular only when xi = xj for at least one pair of integers i, j such that 1 ≤ i < j ≤ n.

Proof. Consider the relation
c1 X1 + c2 X2 + … + ck Xk = 0   (3.6)
where c1, …, ck are constants. Multiplying by A,
A(c1 X1) + A(c2 X2) + … + A(ck Xk) = A0
⇒ c1 (AX1) + c2 (AX2) + … + ck (AXk) = 0
⇒ c1 λ1 X1 + c2 λ2 X2 + … + ck λk Xk = 0,  ∵ AX1 = λ1 X1, …, AXk = λk Xk   (3.7)
Again multiplying by A, we obtain
A(c1 λ1 X1) + A(c2 λ2 X2) + … + A(ck λk Xk) = 0
⇒ c1 λ1 (AX1) + c2 λ2 (AX2) + … + ck λk (AXk) = 0
⇒ c1 λ1² X1 + c2 λ2² X2 + … + ck λk² Xk = 0,  ∵ AX1 = λ1 X1 etc.   (3.8)
Repeating the process,
c1 λ1³ X1 + c2 λ2³ X2 + … + ck λk³ Xk = 0   (3.9)
⋮
c1 λ1^{k−1} X1 + c2 λ2^{k−1} X2 + … + ck λk^{k−1} Xk = 0.   (3.10)
The k equations (3.6) to (3.10) may be written in the form

[c1 X1  c2 X2  …  ck Xk] | 1  λ1  λ1²  …  λ1^{k−1} | = 0
                         | 1  λ2  λ2²  …  λ2^{k−1} |
                         | ⋮   ⋮    ⋮           ⋮   |
                         | 1  λk  λk²  …  λk^{k−1} |

But λ1, …, λk are all distinct. Hence the right matrix on the left side is non-singular; it is the Vandermonde matrix. We denote it by V. Then the above equation becomes
[c1 X1  c2 X2  …  ck Xk] V = 0
Since V is nonsingular, V⁻¹ exists. Post-multiplying by V⁻¹, we get
[c1 X1  c2 X2  …  ck Xk] VV⁻¹ = 0 V⁻¹
i.e. [c1 X1  c2 X2  …  ck Xk] I = 0
⇒ [c1 X1  c2 X2  …  ck Xk] = 0
⇒ c1 X1 = 0, c2 X2 = 0, …, ck Xk = 0
⇒ c1 = c2 = … = ck = 0,  ∵ all X1, …, Xk are nonzero
Thus (3.6) ⇒ c1 = … = ck = 0. Hence X1, X2, …, Xk are linearly independent.

QED
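The Vandermonde fact invoked in the proof can be illustrated with exact arithmetic: the determinant equals the product of the differences xj − xi over i < j, so it vanishes exactly when two nodes coincide. A small sketch (the node values are arbitrary):

```python
from fractions import Fraction
from itertools import combinations

def vandermonde(xs):
    n = len(xs)
    return [[Fraction(x) ** j for j in range(n)] for x in xs]

def det(M):
    """Determinant by Gaussian elimination on a copy, exact fractions."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return Fraction(0)      # a zero column -> singular
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d                  # row swap flips the sign
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return d

xs = [1, 2, 3]                       # distinct nodes -> nonsingular
expected = 1
for i, j in combinations(range(3), 2):
    expected *= xs[j] - xs[i]
print(det(vandermonde(xs)), expected)   # both equal 2

print(det(vandermonde([1, 2, 2])))      # repeated node -> determinant 0
```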

Theorem 3.2
If λ1, λ2, …, λn are the eigenvalues of an n-square matrix A and if k is a scalar, then λ1 + k, λ2 + k, …, λn + k are the eigenvalues of A + kI.

Proof. Let λ1, …, λn be the eigenvalues of the n-square matrix A.
⇒ |λI − A| = (λ − λ1)(λ − λ2) … (λ − λn) = 0
Now
|(λ − k)I − A| = (λ − k − λ1)(λ − k − λ2) … (λ − k − λn)
⇒ |λI − (A + kI)| = {λ − (λ1 + k)} {λ − (λ2 + k)} … {λ − (λn + k)}
This shows that λ1 + k, λ2 + k, …, λn + k are the eigenvalues of A + kI.

QED

Theorem 3.3
The eigen values of a Hermitian matrix are all real.

Hint. λ is real ⟺ λ = λ̄
Proof. Denote the conjugate of the transpose of a matrix M by M* (so M* = M̄′).
Let A be any Hermitian matrix. Then by definition
A* = A
Let λ be any eigen value of A and let X be the corresponding eigen vector. This gives
AX = λX
Premultiplying by X*,
X*AX = X*λX = λ X*X   (3.11)
Taking the conjugate transpose of both sides and using (AB)* = B*A*,
(X*AX)* = (λ X*X)*
⇒ X*A*X = λ̄ X*X
⇒ X*AX = λ̄ X*X,  ∵ A* = A
⇒ λ X*X = λ̄ X*X, by (3.11)
⇒ (λ − λ̄) X*X = 0
Since X is a nonzero vector, X*X ≠ 0 and then the above gives
λ − λ̄ = 0  ⇒  λ = λ̄
Hence λ is real.

QED

Problem 3.3
Show that the eigen values of a skew-Hermitian matrix are either pure imaginary or zero.
Solution. Let A be any skew-Hermitian matrix. Then iA is Hermitian. Hence, by Theorem 3.3, iA has only real eigen values. The eigen value equation is
AX = λX
⇒ (iA)X = (iλ)X
⇒ iA has the eigenvalue iλ
Since the eigen values of iA are real, this is possible only if λ is zero or purely imaginary.

Problem 3.4

Prove that the characteristic roots of Ā and of Ā′ are the conjugates of the characteristic roots of A.
Proof. Let λ be any characteristic root of A. Then
|λI − A| = 0   (a1)
Taking the conjugate,
|λ̄ I − Ā| = 0,  ∵ Ī = I
This shows that λ̄ is a characteristic root of Ā. Similarly, taking the conjugate of the transpose of (a1), we get
|(λI − A)′| = 0, then conjugating,
⇒ |λ̄ I − Ā′| = 0,  ∵ I′ = I and λ is a scalar
⇒ λ̄ is a characteristic root of Ā′.

MCQ 3.6
Let 1, 2, 3 be the eigen values of a 3-square matrix A. If the sum of the eigen values of A + aI is a² − 22, then
(A) a = 4, 7
(B) a = −4, 7
(C) a = 4, −7
(D) a = −4, −7

MCQ 3.7
Let a = e^{iθ} and b = i e^{−iθ} be the eigen values of a Hermitian matrix. Then
(A) a⁴ + b⁴ = sin 2θ
(B) a⁴ + b⁴ = cos 2θ
(C) a⁴ + b⁴ = sin θ + cos θ
(D) a⁴ + b⁴ = sin θ − cos θ

3.3 Computing eigen values and eigen vectors
Having discussed some results and solved examples related to eigen values and vectors, we now turn to finding the eigen values and the corresponding eigen vectors of a given matrix. Since a matrix can be either symmetric or non-symmetric, and the eigen values can be either repeated or non-repeated, we consider problems of two types:
(a) Matrix A is non-symmetric, having
(a1) non-repeated eigen values
(a2) repeated eigen values
(b) Matrix A is symmetric, having
(b1) non-repeated eigen values
(b2) repeated eigen values.
In cases (a1) and (a2) the same method works for computing eigen vectors. However, for (b1) and (b2) the methods are slightly different. The following solved problems illustrate the working of the methods. Before attempting the problems, note the following points, as they may be of help.
(i) Eigen vectors are column vectors and we denote them by
Xi = eigen vector corresponding to the eigen value λi
(ii) The vectors Xi and Xj are orthogonal ⟺ Xi′ · Xj = 0, i ≠ j
(iii) For a symmetric matrix the eigen vectors are orthogonal.

Non-symmetric matrix with non-repeated eigen values
Problem 3.5
Find the eigen values and the corresponding eigen vectors of the matrix

A = | 8  −8  −2 |
    | 4  −3  −2 |
    | 3  −4   1 |

Solution. The eigen equation is (A − λI)X = 0, X = [x, y, z]′

⇒ | 8 − λ    −8      −2   | | x |   | 0 |
  |   4    −3 − λ    −2   | | y | = | 0 |   (a1)
  |   3      −4    1 − λ  | | z |   | 0 |

The characteristic equation of the matrix A is |A − λI| = 0

⇒ | 8 − λ    −8      −2   |
  |   4    −3 − λ    −2   | = 0
  |   3      −4    1 − λ  |

⇒ (8 − λ)[(−3 − λ)(1 − λ) − 8] + 8[6 + 4(1 − λ)] − 2[−16 + 3(3 + λ)] = 0
⇒ λ³ − 6λ² + 11λ − 6 = 0  i.e.  (λ − 1)(λ − 2)(λ − 3) = 0
⇒ λ = 1, 2, 3

Thus the eigen values 1, 2, 3 are distinct. To find the corresponding eigen vectors, consider (a1) for λ = 1:

| 7  −8  −2 | | x |   | 0 |
| 4  −4  −2 | | y | = | 0 |
| 3  −4   0 | | z |   | 0 |

Applying elementary row transformations to the coefficient matrix, we have

R2 − R3:
| 7  −8  −2 | | x |   | 0 |
| 1   0  −2 | | y | = | 0 |
| 3  −4   0 | | z |   | 0 |

R1 − 7R2, R3 − 3R2:
| 0  −8  12 | | x |   | 0 |
| 1   0  −2 | | y | = | 0 |
| 0  −4   6 | | z |   | 0 |

(1/4)R1, (1/2)R3:
| 0  −2   3 | | x |   | 0 |
| 1   0  −2 | | y | = | 0 |
| 0  −2   3 | | z |   | 0 |

R3 − R1:
| 0  −2   3 | | x |   | 0 |
| 1   0  −2 | | y | = | 0 |
| 0   0   0 | | z |   | 0 |

⇒ −2y + 3z = 0,  x − 2z = 0

We have two equations in three unknowns, and hence there is one independent solution. Taking z = a, this can be written as
x = 2a, y = (3/2)a, z = a   (a2)
Since a is arbitrary, it can be chosen for convenience. We take a = 2. Then the eigen vector X1 corresponding to λ = 1 is obtained from (a2):

X1 = [x, y, z]′ = [4, 3, 2]′

We say that associated with λ = 1 there is a one-dimensional vector space spanned by the vector X1 = [4 3 2]′. Every nonzero vector k[4 3 2]′ of this space is an eigen vector of A.

For λ = 2, (a1) becomes

| 6  −8  −2 | | x |   | 0 |
| 4  −5  −2 | | y | = | 0 |
| 3  −4  −1 | | z |   | 0 |

By R2 − R3, R1 − 2R3:
| 0   0   0 | | x |   | 0 |
| 1  −1  −1 | | y | = | 0 |
| 3  −4  −1 | | z |   | 0 |

R3 − 3R2:
| 0   0   0 | | x |   | 0 |
| 1  −1  −1 | | y | = | 0 |
| 0  −1   2 | | z |   | 0 |

⇒ x − y − z = 0,  −y + 2z = 0
⇒ x = 3a, y = 2a, z = a

Then the eigen vector X2 corresponding to λ = 2 is

X2 = [3, 2, 1]′, for a = 1

Thus corresponding to λ = 2 there is a one-dimensional vector space spanned by the vector X2 = [3 2 1]′. Every nonzero vector k[3 2 1]′ of this space is an eigen vector of A.

For λ = 3, (a1) ⇒

| 5  −8  −2 | | x |   | 0 |
| 4  −6  −2 | | y | = | 0 |
| 3  −4  −2 | | z |   | 0 |

By R1 − R2, R2 − R3:
| 1  −2   0 | | x |   | 0 |
| 1  −2   0 | | y | = | 0 |
| 3  −4  −2 | | z |   | 0 |

R2 − R1, R3 − 3R1:
| 1  −2   0 | | x |   | 0 |
| 0   0   0 | | y | = | 0 |
| 0   2  −2 | | z |   | 0 |

⇒ x − 2y = 0,  2y − 2z = 0
⇒ x = 2a, y = a, z = a

Then the eigen vector can be written as

X3 = [2, 1, 1]′, for a = 1

It thus follows that corresponding to λ = 3 there is a one-dimensional vector space spanned by the vector X3 = [2 1 1]′. Every nonzero vector k[2 1 1]′ of this space is an eigen vector of A.

We show that the eigen vectors X1, X2, X3 are linearly independent. Consider
c1 X1 + c2 X2 + c3 X3 = 0   (a3)
⇒ c1[4, 3, 2]′ + c2[3, 2, 1]′ + c3[2, 1, 1]′ = 0
⇒ [4c1 + 3c2 + 2c3, 3c1 + 2c2 + c3, 2c1 + c2 + c3]′ = [0, 0, 0]′
⇒ 4c1 + 3c2 + 2c3 = 0, 3c1 + 2c2 + c3 = 0, 2c1 + c2 + c3 = 0
Solving these equations, we get
c1 = c2 = c3 = 0   (a4)
Since (a3) ⇒ (a4), the vectors X1, X2, X3 are linearly independent.

Remark. One can easily check that
sum of the eigen values = 1 + 2 + 3 = 6 = 8 − 3 + 1 (trace of A)
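The eigenpairs found in Problem 3.5 can be verified mechanically in exact integer arithmetic; a small sketch:

```python
A = [[8, -8, -2],
     [4, -3, -2],
     [3, -4,  1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# each pair (eigenvalue, eigenvector) should satisfy A X = lambda X
pairs = [(1, [4, 3, 2]), (2, [3, 2, 1]), (3, [2, 1, 1])]
for lam, X in pairs:
    assert matvec(A, X) == [lam * x for x in X]

# the Remark's check: trace of A = sum of the eigenvalues
trace = A[0][0] + A[1][1] + A[2][2]
print(trace, 1 + 2 + 3)   # 6 6
```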

Non-symmetric matrix with repeated eigen values
Problem 3.7
Find the eigen values and the corresponding eigen vectors of the matrix

A = | −2   2  −3 |
    |  2   1  −6 |
    | −1  −2   0 |

Solution. The characteristic equation for the matrix A is

| −2 − λ     2      −3  |
|    2     1 − λ    −6  | = 0
|   −1      −2      −λ  |

⇒ (−2 − λ)[(1 − λ)(−λ) − 12] − 2[−2λ − 6] − 3[−4 + (1 − λ)] = 0
⇒ λ³ + λ² − 21λ − 45 = 0  i.e.  (λ + 3)²(λ − 5) = 0
⇒ λ = 5, −3, −3

Hence the eigen values are 5, −3, −3. The value −3 is repeated. We find the corresponding eigen vectors. The matrix equation is

| −2 − λ     2      −3  | | x |   | 0 |
|    2     1 − λ    −6  | | y | = | 0 |   (a1)
|   −1      −2      −λ  | | z |   | 0 |

For λ = 5, (a1) becomes

| −7   2  −3 | | x |   | 0 |
|  2  −4  −6 | | y | = | 0 |
| −1  −2  −5 | | z |   | 0 |

Applying the elementary row transformations R13, −R1, (1/2)R2, R2 − R1, R3 + 7R1, (−1/4)R2, (1/16)R3, R3 − R2, we obtain

| 1  2  5 | | x |   | 0 |
| 0  1  2 | | y | = | 0 |
| 0  0  0 | | z |   | 0 |

(Write down all the details.)

⇒ x + 2y + 5z = 0,  y + 2z = 0
⇒ x = a, y = 2a, z = −a

Then the eigen vector corresponding to λ = 5 is

X1 = [1, 2, −1]′, for a = 1

For λ = −3, the matrix equation (a1) becomes

|  1   2  −3 | | x |   | 0 |
|  2   4  −6 | | y | = | 0 |
| −1  −2   3 | | z |   | 0 |

Applying elementary row transformations (give the details), the above reduces to

| 1  2  −3 | | x |   | 0 |
| 0  0   0 | | y | = | 0 |
| 0  0   0 | | z |   | 0 |

⇒ x + 2y − 3z = 0
Let y = a, z = b. Then the solution is
x = −2a + 3b, y = a, z = b   (a2)
The corresponding eigen vector is

X2 = [3, 0, 1]′, for a = 0, b = 1

Since the eigen value −3 is repeated, there shall be two eigen vectors for this value. We have obtained one, X2. Let the other be X3. The vectors X2, X3 must be linearly independent. Hence X3 can be obtained from the solution (a2) for values of a and b other than a = 0, b = 1. We take a = 1, b = 0 and then write

X3 = [−2, 1, 0]′.
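For the repeated root the whole solution plane x + 2y − 3z = 0 consists of eigenvectors, so both X2 and X3 must satisfy AX = −3X; a quick check:

```python
A = [[-2,  2, -3],
     [ 2,  1, -6],
     [-1, -2,  0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# lambda = 5 with X1, and the repeated root lambda = -3 with the two
# independent vectors X2, X3 drawn from the plane x + 2y - 3z = 0
for lam, X in [(5, [1, 2, -1]), (-3, [3, 0, 1]), (-3, [-2, 1, 0])]:
    assert matvec(A, X) == [lam * x for x in X]
print("all three eigenpairs verified")
```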

Symmetric matrix with non-repeated eigen values
Problem 3.8
Find the eigen values and the corresponding eigen vectors of the following matrix

A = |  8  −6   2 |
    | −6   7  −4 |
    |  2  −4   3 |

Solution. The eigen equation is

| 8 − λ   −6      2   | | x |   | 0 |
|  −6    7 − λ   −4   | | y | = | 0 |   (a1)
|   2     −4    3 − λ | | z |   | 0 |

The characteristic equation |A − λI| = 0 ⇒

| 8 − λ   −6      2   |
|  −6    7 − λ   −4   | = 0
|   2     −4    3 − λ |

⇒ (8 − λ)[(7 − λ)(3 − λ) − 16] + 6[8 − 6(3 − λ)] + 2[24 − 2(7 − λ)] = 0
⇒ λ(λ² − 18λ + 45) = 0  i.e.  λ(λ − 3)(λ − 15) = 0
⇒ λ = 0, 3, 15

The eigen vectors are determined from the eigen equation (a1). For λ = 0, (a1) becomes

|  8  −6   2 | | x |   | 0 |
| −6   7  −4 | | y | = | 0 |
|  2  −4   3 | | z |   | 0 |

Applying (1/2)R1:
|  4  −3   1 | | x |   | 0 |
| −6   7  −4 | | y | = | 0 |
|  2  −4   3 | | z |   | 0 |

R2 + (3/2)R1, R3 − (1/2)R1:
| 4    −3     1   | | x |   | 0 |
| 0   5/2  −5/2   | | y | = | 0 |
| 0  −5/2   5/2   | | z |   | 0 |

(2/5)R2, (2/5)R3:
| 4  −3   1 | | x |   | 0 |
| 0   1  −1 | | y | = | 0 |
| 0  −1   1 | | z |   | 0 |

R3 + R2:
| 4  −3   1 | | x |   | 0 |
| 0   1  −1 | | y | = | 0 |
| 0   0   0 | | z |   | 0 |

⇒ 4x − 3y + z = 0,  y − z = 0

Since there are two equations in three unknowns, there is only one independent solution:
y = z = a,  x = (1/4)(3y − z) = (1/2)a
The corresponding eigenvector is
X1 = [x, y, z]′ = [a/2, a, a]′  or  X1 = [1, 2, 2]′, for a = 2
Thus associated with λ = 0 there is a one-dimensional vector space spanned by the vector (1, 2, 2)′. Every nonzero vector k(1, 2, 2)′ of this space is a characteristic vector of A.

For λ = 3, (a1) becomes

|  5  −6   2 | | x |   | 0 |
| −6   4  −4 | | y | = | 0 |
|  2  −4   0 | | z |   | 0 |

By (1/2)R2, (1/2)R3:
|  5  −6   2 | | x |   | 0 |
| −3   2  −2 | | y | = | 0 |
|  1  −2   0 | | z |   | 0 |

R1 ↔ R3:
|  1  −2   0 | | x |   | 0 |
| −3   2  −2 | | y | = | 0 |
|  5  −6   2 | | z |   | 0 |

R2 + 3R1, R3 − 5R1:
| 1  −2   0 | | x |   | 0 |       | 1  −2  0 | | x |   | 0 |
| 0  −4  −2 | | y | = | 0 |  or   | 0   2  1 | | y | = | 0 |
| 0   4   2 | | z |   | 0 |       | 0   2  1 | | z |   | 0 |

R3 − R2:
| 1  −2  0 | | x |   | 0 |
| 0   2  1 | | y | = | 0 |
| 0   0  0 | | z |   | 0 |

⇒ x − 2y = 0,  2y + z = 0
⇒ x = 2a, y = a, z = −2a
or x = 2, y = 1, z = −2, for a = 1

The corresponding vector is X2 = (2, 1, −2)′. Hence for λ = 3 there is a one-dimensional vector space spanned by the vector (2, 1, −2)′. Every nonzero vector k(2, 1, −2)′ of this space is a characteristic vector of A.
Verify yourself that for λ = 15, X3 = (2, −2, 1)′.
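Since A in Problem 3.8 is symmetric, point (iii) of Section 3.3 says the three eigenvectors must be mutually orthogonal; a direct check:

```python
A = [[ 8, -6,  2],
     [-6,  7, -4],
     [ 2, -4,  3]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pairs = [(0, [1, 2, 2]), (3, [2, 1, -2]), (15, [2, -2, 1])]
for lam, X in pairs:
    assert matvec(A, X) == [lam * x for x in X]

# A is symmetric, so eigenvectors of distinct eigenvalues are orthogonal
vecs = [X for _, X in pairs]
print([dot(vecs[i], vecs[j]) for i in range(3) for j in range(i + 1, 3)])  # [0, 0, 0]
```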

Symmetric matrix with repeated eigen values
Problem 3.9
Find the eigen values and the corresponding eigen vectors of the matrix

A = |  6  −2   2 |
    | −2   3  −1 |
    |  2  −1   3 |

Solution. Here the matrix A is symmetric. The matrix equation (A − λI)X = 0 is

| 6 − λ   −2      2   | | x |   | 0 |
|  −2    3 − λ   −1   | | y | = | 0 |   (a1)
|   2     −1    3 − λ | | z |   | 0 |

⇒ | 6 − λ   −2      2   |
  |  −2    3 − λ   −1   | = 0
  |   2     −1    3 − λ |

⇒ (6 − λ)[(3 − λ)² − 1] + 2[−2(3 − λ) + 2] + 2[2 − 2(3 − λ)] = 0
⇒ λ³ − 12λ² + 36λ − 32 = 0
⇒ (λ − 2)(λ² − 10λ + 16) = 0  or  (λ − 2)²(λ − 8) = 0
⇒ λ = 2, 2, 8

The eigen values are 2, 2, 8 and the value 2 is repeated. The corresponding eigen vectors are determined from the matrix equation (a1). For λ = 8, it gives

| −2  −2   2 | | x |   | 0 |
| −2  −5  −1 | | y | = | 0 |
|  2  −1  −5 | | z |   | 0 |

Applying (−1/2)R1:
|  1   1  −1 | | x |   | 0 |
| −2  −5  −1 | | y | = | 0 |
|  2  −1  −5 | | z |   | 0 |

R2 + 2R1, R3 − 2R1:
| 1   1  −1 | | x |   | 0 |
| 0  −3  −3 | | y | = | 0 |
| 0  −3  −3 | | z |   | 0 |

R3 − R2, (−1/3)R2:
| 1  1  −1 | | x |   | 0 |
| 0  1   1 | | y | = | 0 |
| 0  0   0 | | z |   | 0 |

⇒ x + y − z = 0,  y + z = 0
There is only one independent solution:
x = 2a, y = −a, z = a
i.e. x = 2, y = −1, z = 1, for a = 1
The corresponding eigen vector is
X1 = [2, −1, 1]′
Every nonzero vector kX1 is an eigen vector of A.

For λ = 2, the matrix equation (a1) becomes

|  4  −2   2 | | x |   | 0 |
| −2   1  −1 | | y | = | 0 |
|  2  −1   1 | | z |   | 0 |

(1/2)R1:
|  2  −1   1 | | x |   | 0 |
| −2   1  −1 | | y | = | 0 |
|  2  −1   1 | | z |   | 0 |

R2 + R1, R3 − R1:
| 2  −1  1 | | x |   | 0 |
| 0   0  0 | | y | = | 0 |
| 0   0  0 | | z |   | 0 |

⇒ 2x − y + z = 0
We have one equation in three unknowns. The solution can be expressed as
x = a, y = b, z = −2a + b
The corresponding eigen vector can be written as
[x, y, z]′ = [a, b, −2a + b]′   (a2)
Since the matrix A is symmetric, the eigen vectors must be orthogonal. There are two eigen vectors corresponding to the repeated eigen value λ = 2. We obtain one of them, X2, from (a2) for a = 1, b = 1:
X2 = [1, 1, −1]′
To find the remaining X3, we use the condition of orthogonality of vectors, i.e.
X1′X2 = 0,  X1′X3 = 0,  X2′X3 = 0   (a3)
Let X3 = [u, v, w]′. Then (a3) gives
[2  −1  1][u, v, w]′ = 0  and  [1  1  −1][u, v, w]′ = 0
⇒ 2u − v + w = 0  and  u + v − w = 0
⇒ u = 0, v = w = a
⇒ X3 = [0, a, a]′ = [0, 1, 1]′, for a = 1

The required eigen vectors are X1, X2, X3.

MCQ 3.8
Let X1 and X2 be the eigen vectors of the matrix

A = | cos θ   −sin θ |
    | sin θ    cos θ |

Then
(A) X1 − X2 = [0  2i]′
(B) X1 − X2 = [2  0]′
(C) X1 + X2 = [2  0]′
(D) X1 + X2 = [2i  0]′

MCQ 3.9
A square matrix A has the eigenvalues λ = 0 (twice), 1 (thrice), 2 (twice), 3. Consider the statements:
(i) the order of A is eight
(ii) |A| = 6
(iii) the trace of A is 10.
Make the correct choice from the following.
(A) All the statements are true.
(B) Only (i) is true
(C) (i), (iii) are true and (ii) is false
(D) All the statements are false.

MCQ 3.10
The values of k for which the matrices

| 1  4k |,  | 1  2 |
| 4   3 |   | 2  k |

have real and distinct eigen values are given by
(A) k ≤ −1/16
(B) k > 1/16
(C) k < −1/16
(D) k > −1/16

MCQ 3.11
The condition that the matrix

| a  b |
| c  d |

has equal characteristic roots is
(A) (a − d)² + 4bc = 0
(B) (a − d)² − 4bc = 0
(C) (a − d)² + 4bc > 0
(D) (a − d)² + 4bc < 0

MCQ 3.12
Let a, b, c, d be the characteristic roots of the matrix

| 0  0  0  −1 |
| 1  0  0   0 |
| 0  1  0   0 |
| 0  0  1   0 |

Then
(A) a + b + c + d > 0
(B) a + b + c + d < 0
(C) a + b + c + d = 0
(D) a + b + c + d ≠ 0

SAQ 3.4
Find the eigen values and the eigen vectors corresponding to the highest eigen value of

A = |  6  −2   2 |
    | −2   3  −1 |
    |  2  −1   3 |

SAQ 3.5
Find the eigen values and the eigen vectors corresponding to the smallest eigen value of

A = | 1  −6  −4 |
    | 0   4   2 |
    | 0  −6  −3 |

SAQ 3.6
Find the eigen values and the corresponding eigen vectors of the matrix

A = | 4  −5 |
    | 1  −2 |

SAQ 3.7
Determine the characteristic roots and vectors of the Hermitian matrix

| 1  0   0  |
| 0  0   ω² |
| 0  ω   0  |

where ω is a complex cube root of unity: ω = e^{2πi/3}.

SAQ 3.8
Find the eigen vector for the repeated eigen value only of the matrix

A = | 1  −6  −4 |
    | 0   4   2 |
    | 0  −6  −3 |

SAQ 3.9
Show that the matrices A and B have the same characteristic equation, where

A = | 0  a  b |,   B = | 0  b  a |
    | a  0  c |        | b  0  c |
    | b  c  0 |        | a  c  0 |

SUMMARY
The eigen value problem, consisting of finding the eigen values and the corresponding eigen vectors, is discussed with illustrations. The properties shared by these values and vectors are explained. The various methods for computing the eigen values and the eigen vectors are demonstrated by solved problems.

KEY WORDS
Eigen value problem, Eigen equation or Characteristic equation, Eigen value or Characteristic root, Eigen vector or Characteristic vector

UNIT 02-04: THE CAYLEY-HAMILTON THEOREM

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the Cayley-Hamilton theorem
Apply the theorem to find the inverse of a square matrix

INTRODUCTION
From the previous Unit, we note that the eigen equation

AX = λX   (4.1)

for an n-square matrix A = [aij] has the characteristic equation

|λI − A| = 0  or

| λ − a11   −a12    …    −a1n   |
|  −a21    λ − a22  …    −a2n   | = 0   (4.2)
|    ⋮         ⋮              ⋮  |
|  −an1     −an2    …  λ − ann  |

Expanding the determinant, we get a polynomial in λ:

f(λ) = |λI − A| = λⁿ + a1 λ^{n−1} + a2 λ^{n−2} + … + a_{n−1} λ + an   (4.3)

where a1, a2, …, an are related to the principal minors of A. As a matter of fact, we write
am = (−1)^m (sum of all the m-square principal minors of A), m = 1, 2, …, n − 1
and
an = (−1)ⁿ |A|.

4.1 The Cayley-Hamilton theorem
Before the discussion of the Cayley-Hamilton theorem, we solve some problems that will prepare the background for the theorem.

Problem 4.1
Find the characteristic equation f(λ) = 0 and the value of f(A) in case of the following matrices:

(i) A = | 1  4 |
        | 2  3 |

and

(ii) A = | 2  1  1 |
         | 0  1  0 |
         | 1  1  2 |

Solution. (i) The characteristic equation is f(λ) = |λI − A| = 0

i.e. f(λ) = | λ − 1   −4   | = λ² − 4λ − 5 = 0   (a1)
            |  −2    λ − 3 |

⇒ f(A) = A² − 4A − 5I   (a2)

Now

A² = | 1  4 | | 1  4 | = | 9  16 |
     | 2  3 | | 2  3 |   | 8  17 |

Then (a2) ⇒

f(A) = | 9  16 | − 4 | 1  4 | − 5 | 1  0 |
       | 8  17 |     | 2  3 |     | 0  1 |

     = | 9 − 4 − 5    16 − 16 − 0 | = | 0  0 | = 0
       | 8 − 8 − 0    17 − 12 − 5 |   | 0  0 |

(ii) Here the characteristic equation f(λ) = |A − λI| = 0 gives

f(λ) = | 2 − λ    1      1   |
       |   0    1 − λ    0   | = (1 − λ)[(2 − λ)² − 1]
       |   1      1    2 − λ |

⇒ f(λ) = λ³ − 5λ² + 7λ − 3 = 0   (a3)

We have
f(A) = A³ − 5A² + 7A − 3I   (a4)

A² = A·A = | 2  1  1 | | 2  1  1 |   | 5  4  4 |
           | 0  1  0 | | 0  1  0 | = | 0  1  0 |
           | 1  1  2 | | 1  1  2 |   | 4  4  5 |

A³ = A·A² = | 2  1  1 | | 5  4  4 |   | 14  13  13 |
            | 0  1  0 | | 0  1  0 | = |  0   1   0 |
            | 1  1  2 | | 4  4  5 |   | 13  13  14 |

Then (a4) ⇒

f(A) = | 14  13  13 |     | 5  4  4 |     | 2  1  1 |     | 1  0  0 |
       |  0   1   0 | − 5 | 0  1  0 | + 7 | 0  1  0 | − 3 | 0  1  0 |
       | 13  13  14 |     | 4  4  5 |     | 1  1  2 |     | 0  0  1 |

     = | 14 − 25 + 14 − 3    13 − 20 + 7        13 − 20 + 7      |   | 0  0  0 |
       | 0                   1 − 5 + 7 − 3      0                | = | 0  0  0 | = 0
       | 13 − 20 + 7         13 − 20 + 7        14 − 25 + 14 − 3 |   | 0  0  0 |

Comment. From this example it is observed that the characteristic equation f(λ) = 0 is also satisfied by the corresponding matrix, i.e. f(λ) = 0 and f(A) = 0. We may ask whether this is true for all square matrices. The answer to the question is affirmative and is given by the Cayley-Hamilton theorem.

Theorem 4.1 (The Cayley-Hamilton theorem)

Every square matrix A satisfies its own characteristic equation, i.e. if f(λ) = 0 is the characteristic equation of the matrix A, then f(A) = 0.

Hint. A · adj A = |A| I, where A and I are of the same order.
Proof. Let A = [aij] be any nonzero n-square matrix. We denote
C = adj (λI − A)
where λI − A is the characteristic matrix of A. The cofactors of λI − A are of degree at most (n − 1) in λ, i.e. of degree n − 1 or less. The same is true for the elements of C. Hence C can be written as a matrix polynomial in λ, i.e.
C = C0 + C1 λ + C2 λ² + … + C_{n−2} λ^{n−2} + C_{n−1} λ^{n−1}   (4.4)
where C0, C1, …, C_{n−1} are n-square matrices whose elements are functions of the aij. We illustrate this point by an example. Consider the matrix

A = | 1   3  0 |
    | 3  −1  0 |
    | 0   0  2 |

Then

λI − A = | λ − 1   −3      0   |
         |  −3    λ + 1    0   |
         |   0      0    λ − 2 |

⇒ cofactor of (λ − 1) = (λ + 1)(λ − 2) = λ² − λ − 2
cofactor of (−3) = −(−3)(λ − 2) = 3λ − 6
cofactor of 0 = 0
cofactor of (λ + 1) = (λ − 1)(λ − 2) = λ² − 3λ + 2
cofactor of (λ − 2) = (λ − 1)(λ + 1) − 9 = λ² − 10

⇒ C = adj (λI − A) = | λ² − λ − 2    3λ − 6        0       |
                     | 3λ − 6       λ² − 3λ + 2    0       |
                     | 0            0              λ² − 10 |

  = | −2  −6    0 |     | −1   3  0 |      | 1  0  0 |
    | −6   2    0 | + λ |  3  −3  0 | + λ² | 0  1  0 |
    |  0   0  −10 |     |  0   0  0 |      | 0  0  1 |

  = C0 + λC1 + λ²C2,

where C0 = the first matrix on the right, C1 = the second matrix (the coefficient of λ), and C2 = I (the coefficient of λ²).

Now, by the property (see Hint), we write
(λI − A)C = (λI − A)[adj (λI − A)] = |λI − A| I
⇒ (λI − A){C0 + λC1 + λ²C2 + … + λ^{n−2}C_{n−2} + λ^{n−1}C_{n−1}} = {an + a_{n−1}λ + … + a1λ^{n−1} + λⁿ} I
Comparing the coefficients of like powers of λ on both sides, we get
−AC0 = an I
C0 − AC1 = a_{n−1} I,  ∵ IC0 = C0
C1 − AC2 = a_{n−2} I,  ∵ IC1 = C1
⋮
C_{n−2} − AC_{n−1} = a1 I,  ∵ IC_{n−2} = C_{n−2}
C_{n−1} = I
We eliminate the matrices Ci from the above equations. For this we multiply these equations on the left successively by I, A, A², …, A^{n−1}, Aⁿ to get
−AC0 = an I
AC0 − A²C1 = a_{n−1} A,  ∵ AI = A
A²C1 − A³C2 = a_{n−2} A²
⋮
A^{n−1}C_{n−2} − AⁿC_{n−1} = a1 A^{n−1}
AⁿC_{n−1} = Aⁿ
Adding these equations, we obtain
0 = an I + a_{n−1} A + … + a1 A^{n−1} + Aⁿ
⇒ f(A) = 0

QED
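The theorem can be verified numerically for both matrices of Problem 4.1; a minimal sketch in plain Python (the helpers matmul/add/scale are ours, not from the text):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(*Ms):
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def scale(c, M):
    return [[c * x for x in row] for row in M]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# (i) A = [[1,4],[2,3]], f(lambda) = lambda^2 - 4 lambda - 5
A = [[1, 4], [2, 3]]
fA = add(matmul(A, A), scale(-4, A), scale(-5, identity(2)))
print(fA)   # [[0, 0], [0, 0]]

# (ii) B = [[2,1,1],[0,1,0],[1,1,2]], f(lambda) = lambda^3 - 5 lambda^2 + 7 lambda - 3
B = [[2, 1, 1], [0, 1, 0], [1, 1, 2]]
B2 = matmul(B, B)
B3 = matmul(B, B2)
fB = add(B3, scale(-5, B2), scale(7, B), scale(-3, identity(3)))
print(fB)   # the zero 3x3 matrix
```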

Remark. This theorem helps to compute A⁻¹.

Problem 4.2
Verify the Cayley-Hamilton theorem for the matrix

A = | 1  4 |
    | 2  3 |

and hence express
A⁵ − 4A⁴ − 7A³ + 11A² − A − 10I
as a linear polynomial in A.
Solution. The characteristic equation is
f(λ) = |λI − A| = 0

i.e. f(λ) = | λ − 1   −4   | = λ² − 4λ − 5 = 0
            |  −2    λ − 3 |

The verification of the CH theorem is established in Problem 4.1. By the CH theorem, we have
f(A) = A² − 4A − 5I = 0   (a1)
Now
A⁵ − 4A⁴ − 7A³ + 11A² − A − 10I
= A⁵ − 4A⁴ − 5A³ − 2A³ + 8A² + 10A + 3A² − 12A − 15I + A + 5I
= A³(A² − 4A − 5I) − 2A(A² − 4A − 5I) + 3(A² − 4A − 5I) + A + 5I
= A³(0) − 2A(0) + 3(0) + A + 5I, by (a1)
= A + 5I

Problem 4.3

Verify the Cayley-Hamilton theorem for the matrix

A = | 2  1  1 |
    | 0  1  0 |
    | 1  1  2 |

and use it to find the matrix represented by
A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I.
Solution. We have
f(λ) = |λI − A| = 0
⇒ f(λ) = (1 − λ)[(2 − λ)² − 1] = 0  i.e.  λ³ − 5λ² + 7λ − 3 = 0
For the verification of the CH theorem, see Problem 4.1. By the CH theorem,
f(A) = A³ − 5A² + 7A − 3I = 0   (a1)
Now
A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I
= A⁵(A³ − 5A² + 7A − 3I) + A(A³ − 5A² + 7A − 3I) + A² + A + I
= 0 + 0 + A² + A + I, by (a1)

= | 5  4  4 |   | 2  1  1 |   | 1  0  0 |   | 8  5  5 |
  | 0  1  0 | + | 0  1  0 | + | 0  1  0 | = | 0  3  0 |
  | 4  4  5 |   | 1  1  2 |   | 0  0  1 |   | 5  5  8 |

(see Problem 4.1 for A²)

MCQ 4.1

For the matrix

A = |  2  −1   1 |
    | −1   2  −1 |
    |  1  −1   2 |

the polynomial
A⁶ − 6A⁵ + 9A⁴ − 2A³ − 12A² + 23A − 9I
reduces to aA^b + cI. Then the plane ax + by + 6cz = 0 passes through the point
(A) (1, −1, −1)
(B) (1, 1, 1)
(C) (1, 1, −1)
(D) (1, −1, 1)

MCQ 4.2
The matrix represented by A⁴ − 5A³ + 8A² − 2A + I is

B = | a  b  b |
    | 0  c  0 |
    | b  b  a |

where

A = | 2  1  1 |
    | 0  1  0 |
    | 1  1  2 |

Then
(A) the trace of B is 19

(B) a b  c (C) a  b  c 0 (D) a b  c SAQ 4.1

Verify the Cayley-Hamilton theorem for the matrix

A = [  1  2 ]
    [ −1  3 ]

Use the theorem to express A⁶ − 4A⁵ + 8A⁴ − 12A³ + 14A² as a linear polynomial in A.

SAQ 4.2

For the matrix

A = [  6  −2   2 ]
    [ −2   3  −1 ]
    [  2  −1   3 ]

show that A³ − 12A² + 36A = 32I.

4.2 Inverse by Cayley-Hamilton theorem

We use the Cayley-Hamilton theorem to compute the inverse of a nonsingular matrix. The following solved examples illustrate the method.

Problem 4.4

Verify the Cayley-Hamilton theorem for the matrix A and hence find A⁻¹ if

A = [  1   2  −2 ]
    [ −1   3   0 ]
    [  0  −2   1 ]

Solution. (a) The CE for A is f(λ) = |A − λI| = 0

⇒  f(λ) = | 1−λ    2    −2  |
          | −1    3−λ    0  | = 0
          |  0    −2    1−λ |

⇒  f(λ) = λ³ − 5λ² + 9λ − 1 = 0

By the CH theorem,

A³ − 5A² + 9A − I = 0        (a1)

We verify this. Now

A² = AA = [  1   2  −2 ] [  1   2  −2 ]   [ −1  12  −4 ]
          [ −1   3   0 ] [ −1   3   0 ] = [ −4   7   2 ]
          [  0  −2   1 ] [  0  −2   1 ]   [  2  −8   1 ]

⇒  A³ = [ −1  12  −4 ] [  1   2  −2 ]   [ −13   42   −2 ]
        [ −4   7   2 ] [ −1   3   0 ] = [ −11    9   10 ]
        [  2  −8   1 ] [  0  −2   1 ]   [  10  −22   −3 ]

⇒  A³ − 5A² + 9A − I

= [ −13 + 5 + 9 − 1    42 − 60 + 18      −2 + 20 − 18    ]
  [ −11 + 20 − 9       9 − 35 + 27 − 1   10 − 10         ]
  [  10 − 10           −22 + 40 − 18     −3 − 5 + 9 − 1  ]

= [ 0  0  0 ]
  [ 0  0  0 ] = 0
  [ 0  0  0 ]
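Rearranging the characteristic equation to isolate I yields the inverse directly; a minimal numpy sketch of the computation below, for the matrix of this problem (variable names are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0, -2.0],
              [-1.0, 3.0, 0.0],
              [0.0, -2.0, 1.0]])

# Characteristic equation: A^3 - 5A^2 + 9A - I = 0
# Multiplying by A^-1:     A^-1 = A^2 - 5A + 9I
I3 = np.eye(3)
A_inv = A @ A - 5 * A + 9 * I3
```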

Hence (a1), i.e. the CH theorem, is verified. We use (a1) to find A⁻¹. Here

|A| = |  1   2  −2 |
      | −1   3   0 | = 3 + 2 − 4 = 1 ≠ 0
      |  0  −2   1 |

⇒  A⁻¹ exists.

Multiplying (a1) by A⁻¹,

A² − 5A + 9I − A⁻¹ = 0

⇒  A⁻¹ = A² − 5A + 9I

= [ −1  12  −4 ]     [  1   2  −2 ]     [ 1  0  0 ]
  [ −4   7   2 ] − 5 [ −1   3   0 ] + 9 [ 0  1  0 ]
  [  2  −8   1 ]     [  0  −2   1 ]     [ 0  0  1 ]

= [ −1 − 5 + 9    12 − 10       −4 + 10    ]
  [ −4 + 5        7 − 15 + 9    2          ]
  [  2            −8 + 10       1 − 5 + 9  ]

= [ 3  2  6 ]
  [ 1  1  2 ]
  [ 2  2  5 ]

Problem 4.5

Use the Cayley-Hamilton theorem to find A⁸, where

A = [ 1   2 ]
    [ 2  −1 ]

Solution. The CE for A is

| 1−λ    2   | = 0,  i.e.  λ² − 5 = 0
|  2   −1−λ  |

By the Cayley-Hamilton theorem,

A² − 5I = 0
⇒  A² = 5I
⇒  A⁸ = (A²)⁴ = (5I)⁴ = 5⁴ I = 625 I = [ 625    0 ]
                                       [   0  625 ]
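A direct power computation confirms the collapse produced by A² = 5I (a small sketch; the names are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -1.0]])

# A^2 = 5I by the characteristic equation, so A^8 = 5^4 I = 625 I.
A8 = np.linalg.matrix_power(A, 8)
```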

Problem 4.6

Find A⁻² by the Cayley-Hamilton theorem if

A = [ 2  1  1 ]
    [ 0  1  0 ]
    [ 1  1  2 ]

Solution. The CE for A is

| 2−λ   1    1  |
|  0   1−λ   0  | = 0
|  1    1   2−λ |

⇒  λ³ − 5λ² + 7λ − 3 = 0

By the CH theorem,

A³ − 5A² + 7A − 3I = 0        (a1)

Now

|A| = | 2  1  1 |
      | 0  1  0 | = 4 − 1 = 3 ≠ 0
      | 1  1  2 |

Hence A⁻¹, A⁻² exist. We have

A² = [ 2  1  1 ] [ 2  1  1 ]   [ 5  4  4 ]
     [ 0  1  0 ] [ 0  1  0 ] = [ 0  1  0 ]        (a2)
     [ 1  1  2 ] [ 1  1  2 ]   [ 4  4  5 ]

Multiplying (a1) by A⁻¹,

A² − 5A + 7I − 3A⁻¹ = 0

⇒  3A⁻¹ = A² − 5A + 7I = [ 5 − 10 + 7    4 − 5        4 − 5      ]   [  2  −1  −1 ]
                         [ 0             1 − 5 + 7    0          ] = [  0   3   0 ]
                         [ 4 − 5         4 − 5        5 − 10 + 7 ]   [ −1  −1   2 ]

⇒  A⁻¹ = (1/3) [  2  −1  −1 ]
               [  0   3   0 ]
               [ −1  −1   2 ]

Multiplying (a1) by A⁻²,

A − 5I + 7A⁻¹ − 3A⁻² = 0

⇒  3A⁻² = A − 5I + 7A⁻¹ = [ 2 − 5 + (14/3)   1 − (7/3)       1 − (7/3)      ]
                          [ 0                1 − 5 + 7        0              ]
                          [ 1 − (7/3)        1 − (7/3)        2 − 5 + (14/3) ]

        = [  5/3  −4/3  −4/3 ]
          [   0     3     0  ]
          [ −4/3  −4/3   5/3 ]

⇒  A⁻² = (1/9) [  5  −4  −4 ]
               [  0   9   0 ]
               [ −4  −4   5 ]

MCQ 4.3

Consider the matrix

A = [ 1  2  −2 ]
    [ 2  5  −4 ]
    [ 3  7  −5 ]

Then the prime number x satisfying the equation A + A⁻¹ = A² + xI is

(A) 5
(B) 3
(C) 2
(D) 7

SAQ 4.3

Verify the Cayley-Hamilton theorem for the matrix

A = [  2  −1   1 ]
    [ −1   2  −1 ]
    [  1  −1   2 ]

Then use it to find A⁻¹.

SAQ 4.4

Find A⁻¹ by the Cayley-Hamilton theorem if

(a) A = [ 5  3 ]        (b) A = [ 1   0   3 ]
        [ 3  2 ]                [ 2   1  −1 ]
                                [ 1  −1   1 ]

SAQ 4.5

Use the Cayley-Hamilton theorem to find M⁴ if

M = [ 1  0  1 ]
    [ 0  1  0 ]
    [ 1  0  1 ]

SUMMARY
The Cayley-Hamilton theorem is established. It is shown how to use the theorem to find the inverse of a nonsingular matrix.

KEY WORDS
Cayley-Hamilton theorem
Polynomial in the eigenvalue λ
Polynomial in the matrix A
Inverse of a matrix


UNIT 02-05: THE DIAGONALIZATION OF A MATRIX

85-105

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the diagonalization of a matrix
Obtain the diagonal form of a given matrix

INTRODUCTION
It is our experience that diagonal matrices are simple to handle. For example, consider a diagonal matrix

D = diag(d₁, d₂, …, dₙ)

Its eigen equation |D − λI| = 0 immediately yields

(d₁ − λ)(d₂ − λ) ⋯ (dₙ − λ) = 0,

giving d₁, d₂, …, dₙ as its eigenvalues. Similarly its power Dᵐ can be computed easily to get

Dᵐ = diag(d₁ᵐ, d₂ᵐ, …, dₙᵐ).

Such and other simple aspects of a diagonal matrix make one think of converting an n-square matrix into a diagonal one. It is observed that there is a category of n-square matrices which are transformable to diagonal matrices. We discuss this problem in this unit.

5.1 Similar matrices

Let A and B be two n-square matrices. These matrices are said to be similar if there is a nonsingular matrix P such that

B = P⁻¹AP.        (5.1)

Illustration. A = [2 0; 1 1] and B = [2 1; 0 1] are similar, since with

P = [ 1  1 ],   P⁻¹ = [  2  −1 ]
    [ 1  2 ]          [ −1   1 ]

we have

B = [ 2  1 ] = P⁻¹AP = [  2  −1 ] [ 2  0 ] [ 1  1 ]
    [ 0  1 ]           [ −1   1 ] [ 1  1 ] [ 1  2 ]
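The illustration can be checked numerically; a small sketch (matrix values taken from the illustration above):

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])

B = np.linalg.inv(P) @ A @ P   # should reproduce [[2, 1], [0, 1]]
```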

Some properties of similar matrices

Theorem 5.1
Two similar matrices have the same eigenvalues.
Hint. (det P)(det P⁻¹) = 1
Proof. Let A and B be similar matrices. Then

B = P⁻¹AP,  P is nonsingular.

Now

|B − λI| = |P⁻¹AP − λI|
         = |P⁻¹AP − P⁻¹λIP|,   ∵ P⁻¹λIP = λI
         = |P⁻¹(A − λI)P|
         = |P⁻¹| |A − λI| |P|
         = |A − λI|,   see hint

⇒  |B − λI| = 0 ⇔ |A − λI| = 0

Hence A and B have the same characteristic equation and hence the same eigenvalues.        QED

Theorem 5.2
If v is an eigenvector of B = P⁻¹AP corresponding to the eigenvalue λ, then Pv is an eigenvector of A corresponding to the same eigenvalue λ.
Proof. Let v be an eigenvector of B corresponding to the eigenvalue λ. Then

Bv = λv

Also B = P⁻¹AP
⇒  PB = (PP⁻¹)AP = AP
⇒  A(Pv) = (AP)v = (PB)v,   ∵ AP = PB
         = P(Bv) = P(λv),   ∵ Bv = λv
         = λ(Pv)
⇒  Pv is an eigenvector of A corresponding to the eigenvalue λ.        QED
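Both theorems can be illustrated numerically on the similar pair from Section 5.1: the eigenvalues coincide, and P carries each eigenvector of B to an eigenvector of A (a sketch; names are illustrative):

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.linalg.inv(P) @ A @ P

# Theorem 5.1: similar matrices share eigenvalues.
eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))

# Theorem 5.2: if B v = lam v, then A (P v) = lam (P v).
lam, vecs = np.linalg.eig(B)
v = vecs[:, 0]
w = P @ v
```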

Theorem 5.3
If an n-square matrix has n linearly independent eigenvectors, then it is similar to a diagonal matrix.
Proof. Let x₁, x₂, …, xₙ be linearly independent eigenvectors of a matrix A of order n corresponding to the eigenvalues λ₁, λ₂, …, λₙ. Here each xᵢ is a column vector.

⇒  A x₁ = λ₁x₁,  A x₂ = λ₂x₂,  …,  A xₙ = λₙxₙ        (5.2)

Denote

P = [x₁, x₂, …, xₙ]

⇒  AP = A[x₁, x₂, …, xₙ]
      = [A x₁, A x₂, …, A xₙ]
      = [λ₁x₁, λ₂x₂, …, λₙxₙ],   by (5.2)

      = [x₁, x₂, …, xₙ] [ λ₁   0  ⋯   0 ]
                        [  0  λ₂  ⋯   0 ]
                        [  ⋮   ⋮       ⋮ ]
                        [  0   0  ⋯  λₙ ]

      = P diag(λ₁, λ₂, …, λₙ)

Premultiplying by P⁻¹,

P⁻¹AP = (P⁻¹P) diag(λ₁, λ₂, …, λₙ) = I diag(λ₁, λ₂, …, λₙ) = diag(λ₁, λ₂, …, λₙ)

Hence A is similar to D = diag(λ₁, λ₂, …, λₙ).        QED

Definition of a diagonalizable matrix
In the light of Theorem 5.3, given an n-square matrix A, we can find a matrix P such that P⁻¹AP = D. If there exists such a P, we say that A is diagonalizable (or diagonable) and the matrix P diagonalizes A.
Modal matrix. When the matrix P diagonalizes A, then P is called a modal matrix of A.
Spectral matrix. The matrix D is the spectral matrix of A.

Working method of diagonalization
(i) From the eigen equation |A − λI| = 0, obtain the eigenvalues λ₁, …, λₙ of A. Correspondingly calculate independent eigenvectors x₁, …, xₙ.
(ii) Take P = [x₁, x₂, …, xₙ]
(iii) Compute P⁻¹
(iv) Write D = P⁻¹AP and compute D
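The four steps can be sketched with numpy: np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors, which serves as a modal matrix P whenever those eigenvectors are independent (matrix values here are illustrative):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [-5.0, 4.0]])

lam, P = np.linalg.eig(A)      # steps (i) and (ii): eigenvalues, eigenvectors
P_inv = np.linalg.inv(P)       # step (iii)
D = P_inv @ A @ P              # step (iv): D should equal diag(lam)
```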

The following examples illustrate the method.

Problem 5.1
Reduce the following matrix to diagonal form:

[  1  −2 ]
[ −5   4 ]

Solution. Let the given matrix be A. Then its eigen equation is

[ 1−λ   −2  ] [ x ] = [ 0 ]        (a1)
[ −5    4−λ ] [ y ]   [ 0 ]

Eigenvalues are given by

| 1−λ   −2  | = 0
| −5    4−λ |

i.e.  λ² − 5λ − 6 = 0  or  (λ + 1)(λ − 6) = 0
⇒  λ = −1, 6

For λ = −1, (a1) becomes

[  2  −2 ] [ x ] = [ 0 ]
[ −5   5 ] [ y ]   [ 0 ]

By (1/2)R₁, −(1/5)R₂:

[ 1  −1 ] [ x ] = [ 0 ]
[ 1  −1 ] [ y ]   [ 0 ]

R₂ − R₁:

[ 1  −1 ] [ x ] = [ 0 ]
[ 0   0 ] [ y ]   [ 0 ]

⇒  x − y = 0,  i.e.  x = y = a

Taking a = 1, we write one eigenvector

x₁ = [ 1 ] ,  for a = 1
     [ 1 ]

For λ = 6, (a1) ⇒

[ −5  −2 ] [ x ] = [ 0 ]   or   [ 5  2 ] [ x ] = [ 0 ]
[ −5  −2 ] [ y ]   [ 0 ]        [ 5  2 ] [ y ]   [ 0 ]

⇒  [ 5  2 ] [ x ] = [ 0 ] ,   by R₂ − R₁
   [ 0  0 ] [ y ]   [ 0 ]

⇒  5x + 2y = 0
⇒  x = 2a, y = −5a,  i.e.  x = 2, y = −5 for a = 1

Then the second eigenvector is

x₂ = [  2 ]
     [ −5 ]

Choose

P = [x₁, x₂] = [ 1   2 ]
               [ 1  −5 ]

Since |P| = −7 ≠ 0, the inverse P⁻¹ exists. Then

P⁻¹ = adj P / |P| = −(1/7) [ −5  −2 ] = (1/7) [ 5   2 ]
                           [ −1   1 ]         [ 1  −1 ]

Then the diagonal form of A is

D = P⁻¹AP = (1/7) [ 5   2 ] [  1  −2 ] [ 1   2 ]
                  [ 1  −1 ] [ −5   4 ] [ 1  −5 ]

          = (1/7) [ 5   2 ] [ −1   12 ]
                  [ 1  −1 ] [ −1  −30 ]

          = (1/7) [ −7   0 ] = [ −1  0 ]
                  [  0  42 ]   [  0  6 ]

Problem 5.2
Find the eigenvalues and the eigenvectors of the matrix

A = [  6  −2   2 ]
    [ −2   3  −1 ]
    [  2  −1   3 ]

Then find the modal matrix B and the diagonal matrix D.

Solution. The CE for A is

[ 6−λ   −2    2  ] [ x ]   [ 0 ]
[ −2   3−λ   −1  ] [ y ] = [ 0 ]        (a1)
[  2   −1   3−λ  ] [ z ]   [ 0 ]

⇒  | 6−λ   −2    2  |
   | −2   3−λ   −1  | = 0
   |  2   −1   3−λ  |

⇒  (6 − λ)(λ² − 6λ + 8) + 8(λ − 2) = 0,  i.e.  (λ − 2)(λ² − 10λ + 16) = 0
⇒  (λ − 2)²(λ − 8) = 0

Hence the eigenvalues are 2, 2, 8.

Eigenvectors. For λ = 2, (a1) becomes

[  4  −2   2 ] [ x ]   [ 0 ]
[ −2   1  −1 ] [ y ] = [ 0 ]
[  2  −1   1 ] [ z ]   [ 0 ]

By (1/2)R₁:

[  2  −1   1 ] [ x ]   [ 0 ]
[ −2   1  −1 ] [ y ] = [ 0 ]
[  2  −1   1 ] [ z ]   [ 0 ]

R₂ + R₁, R₃ − R₁:

[ 2  −1  1 ] [ x ]   [ 0 ]
[ 0   0  0 ] [ y ] = [ 0 ]
[ 0   0  0 ] [ z ]   [ 0 ]

⇒  2x − y + z = 0

We have one equation in three variables. Hence there are two independent solutions:

x = a,  y = b,  z = −2a + b

Taking a = 1, b = 0 and a = 0, b = 1, the eigenvectors are

x₁ = [  1 ] ,   x₂ = [ 0 ]
     [  0 ]          [ 1 ]
     [ −2 ]          [ 1 ]

For λ = 8, (a1) gives

[ −2  −2   2 ] [ x ]   [ 0 ]         [ 1   1  −1 ] [ x ]   [ 0 ]
[ −2  −5  −1 ] [ y ] = [ 0 ]   or    [ 2   5   1 ] [ y ] = [ 0 ]
[  2  −1  −5 ] [ z ]   [ 0 ]         [ 2  −1  −5 ] [ z ]   [ 0 ]

By R₂ − 2R₁, R₃ − 2R₁:

[ 1   1  −1 ] [ x ]   [ 0 ]
[ 0   3   3 ] [ y ] = [ 0 ]
[ 0  −3  −3 ] [ z ]   [ 0 ]

R₃ + R₂:

[ 1  1  −1 ] [ x ]   [ 0 ]
[ 0  1   1 ] [ y ] = [ 0 ]
[ 0  0   0 ] [ z ]   [ 0 ]

i.e.  x + y − z = 0,  y + z = 0

We have two equations in three variables and hence one independent solution:

x = 2,  y = −1,  z = 1

Then the eigenvector

x₃ = [  2 ]
     [ −1 ]
     [  1 ]

The modal matrix

B = [x₁, x₂, x₃] = [  1  0   2 ]
                   [  0  1  −1 ]
                   [ −2  1   1 ]

⇒  |B| = 1(1 + 1) + 2(0 + 2) = 6 ≠ 0

Hence B⁻¹ exists. Then

B⁻¹ = adj B / |B| = (1/6) [ 2   2  −2 ]
                          [ 2   5   1 ]
                          [ 2  −1   1 ]

Now

D = B⁻¹AB = (1/6) [ 2   2  −2 ] [  6  −2   2 ] [  1  0   2 ]
                  [ 2   5   1 ] [ −2   3  −1 ] [  0  1  −1 ]
                  [ 2  −1   1 ] [  2  −1   3 ] [ −2  1   1 ]

          = (1/6) [ 2   2  −2 ] [  2  0   16 ]
                  [ 2   5   1 ] [  0  2   −8 ]
                  [ 2  −1   1 ] [ −4  2    8 ]

          = (1/6) [ 12   0   0 ]   [ 2  0  0 ]
                  [  0  12   0 ] = [ 0  2  0 ]
                  [  0   0  48 ]   [ 0  0  8 ]
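Even with the repeated eigenvalue 2, this matrix has three independent eigenvectors, so the modal matrix diagonalizes it; a numeric cross-check (a sketch, names illustrative):

```python
import numpy as np

A = np.array([[6.0, -2.0, 2.0],
              [-2.0, 3.0, -1.0],
              [2.0, -1.0, 3.0]])

lam, B = np.linalg.eig(A)          # eigenvalues {2, 2, 8}, eigenvector columns
D = np.linalg.inv(B) @ A @ B       # should be diag(lam)
```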

MCQ 5.1
The matrix

[ 8  −8  −2 ]
[ 4  −3  −2 ]
[ 3  −4   1 ]

is diagonalized to D = diag[a, b, c]. Then

(A) a, b, c are in A.P.
(B) a, b, c are in G.P.
(C) The plane ax + by + cz = 5 does not pass through the point (1, 0, −1).
(D) 1 + abc is a prime number.

MCQ 5.2
Let D = diag[a, b, c] be the diagonalized matrix of

[ 1   0   0 ]
[ 0   3  −1 ]
[ 0  −1   3 ]

Then

(A) 2a + b − c = 1
(B) 2a + b − c = 0
(C) a, b, c are in G.P.
(D) a, b, c are in H.P.

SAQ 5.1
Diagonalize the following matrices:

(i) A = [ 1  0  −1 ]     (ii) A = [ −2   2  −3 ]     (iii) A = [  1   2  −1 ]
        [ 1  2   1 ]              [  2   1  −6 ]               [  1   2   1 ]
        [ 2  2   3 ]              [ −1  −2   0 ]               [ −1  −1   0 ]

SAQ 5.2
Find the modal matrix B corresponding to the matrix

A = [ 3   4 ]
    [ 4  −3 ]

and verify that B⁻¹AB is a diagonal matrix.

5.2 Power of a matrix
Let A be an n-square matrix. The method of diagonalization helps to find Aᵐ, where m is a positive integer. We know that A is diagonalized by a modal matrix P such that

D = P⁻¹AP,        (5.3)

where D is a diagonal matrix.

⇒  D² = (P⁻¹AP)(P⁻¹AP) = P⁻¹A(PP⁻¹)AP = P⁻¹A I AP = P⁻¹A²P
⇒  D³ = D D² = P⁻¹A(PP⁻¹)A²P = P⁻¹A I A²P = P⁻¹A³P

Continuing the process, we deduce that

Dᵐ = P⁻¹AᵐP

Premultiplying by P and postmultiplying by P⁻¹, we get

P Dᵐ P⁻¹ = (PP⁻¹)Aᵐ(PP⁻¹) = I Aᵐ I

or  Aᵐ = P Dᵐ P⁻¹.        (5.4)

This is the required rule for finding the powers of a square matrix A.
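Rule (5.4) translates directly into numpy; a minimal sketch assuming A has n independent eigenvectors (the matrix here is the one diagonalized in Problem 5.1):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [-5.0, 4.0]])
m = 3

lam, P = np.linalg.eig(A)          # modal matrix P and eigenvalues
Dm = np.diag(lam ** m)             # D^m is computed entrywise
Am = P @ Dm @ np.linalg.inv(P)     # rule (5.4): A^m = P D^m P^{-1}
```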

Problem 5.3
Find A⁴ in case of the matrix

A = [ 1  4 ]
    [ 2  3 ]

Solution. By (5.4), we have

A⁴ = P D⁴ P⁻¹        (a1)

The problem is completed if P and D are known. We now compute these matrices as follows. The eigen equation for A is [λI − A]X = 0:

[ λ−1   −4  ] [ x ] = [ 0 ]        (a2)
[ −2   λ−3  ] [ y ]   [ 0 ]

Eigenvalues are given by

(λ − 1)(λ − 3) − 8 = 0,  i.e.  λ² − 4λ − 5 = 0
⇒  λ = −1, 5

From (a2), we find the eigenvectors corresponding to these values. For λ = −1,

[ −2  −4 ] [ x ] = [ 0 ]   i.e.   [ 1  2 ] [ x ] = [ 0 ]
[ −2  −4 ] [ y ]   [ 0 ]          [ 1  2 ] [ y ]   [ 0 ]

⇒  x + 2y = 0
⇒  x = 2, y = −1

⇒  x₁ = [  2 ]
        [ −1 ]

Similarly we can show that the eigenvector for λ = 5 is (find yourself)

x₂ = [ 1 ]
     [ 1 ]

⇒  P = [x₁ x₂] = [  2  1 ] ,   |P| = 3,   P⁻¹ = (1/3) [ 1  −1 ]
                 [ −1  1 ]                            [ 1   2 ]

and

D = P⁻¹AP = (1/3) [ 1  −1 ] [ 1  4 ] [  2  1 ] = [ −1  0 ]
                  [ 1   2 ] [ 2  3 ] [ −1  1 ]   [  0  5 ]

⇒  D⁴ = [ (−1)⁴   0  ] = [ 1    0  ]
        [   0    5⁴  ]   [ 0  625  ]

With these values in (a1), we get

A⁴ = (1/3) [  2  1 ] [ 1    0 ] [ 1  −1 ]
           [ −1  1 ] [ 0  625 ] [ 1   2 ]

   = (1/3) [  2  625 ] [ 1  −1 ]
           [ −1  625 ] [ 1   2 ]

   = (1/3) [ 627  1248 ] = [ 209  416 ]
           [ 624  1251 ]   [ 208  417 ]

SAQ 5.3

Evaluate A³ in case of the matrix

A = [ 2  1  1 ]
    [ 0  1  0 ]
    [ 1  1  2 ]

5.3 Sylvester's theorem

There is another result, known as Sylvester's theorem, which helps to find the power, exponential, inverse etc. of a given matrix. We state the theorem without proof.

Theorem 5.4
For a square matrix A of order n with distinct eigenvalues λ₁, λ₂, …, λₙ,

p(A) = p(λ₁) q(λ₁) + p(λ₂) q(λ₂) + ⋯ + p(λₙ) q(λₙ)        (5.5)

where p stands for an operator like power, exponential, inverse etc. and

q(λ) = adj[f(λ)] / |f(λ)|′,   f(λ) = λI − A.        (5.6)

Here |f(λ)|′ denotes the derivative of |f(λ)| with respect to λ.
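For distinct eigenvalues, (5.5)–(5.6) can be written equivalently with q(λᵢ) expressed as the product Π_{j≠i}(A − λⱼI)/(λᵢ − λⱼ) (the so-called Frobenius covariants), which avoids forming the adjugate explicitly. A sketch using that equivalent form (the function name and test matrix are illustrative):

```python
import numpy as np

def sylvester(A, p):
    """Apply Sylvester's formula p(A) = sum_i p(lam_i) q(lam_i),
    assuming the eigenvalues of A are distinct."""
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    result = np.zeros_like(A, dtype=complex)
    for i in range(n):
        q = np.eye(n, dtype=complex)          # builds q(lam_i)
        for j in range(n):
            if j != i:
                q = q @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        result = result + p(lam[i]) * q
    return result.real

A = np.array([[1.0, 2.0],
              [2.0, -1.0]])                   # eigenvalues +sqrt(5), -sqrt(5)
A_sq = sylvester(A, lambda x: x ** 2)         # p(x) = x^2 should give A @ A
```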

Problem 5.4

Use Sylvester's theorem to prove the following:

(a) If A = [  cos α  sin α ] ,  then  Aⁿ = [  cos nα  sin nα ]
           [ −sin α  cos α ]              [ −sin nα  cos nα ]

(b) If A = [ x  x ] ,  then  e^A = eˣ [ cosh x  sinh x ]
           [ x  x ]                   [ sinh x  cosh x ]

(c) If A = [ 1   0 ] ,  then  sin A = (sin 1) A
           [ 2  −1 ]

Solution. (a) The eigenvalues are given by

|f(λ)| = |λI − A| = | λ − cos α    −sin α    | = λ² − 2λ cos α + cos²α + sin²α = 0
                    |  sin α      λ − cos α  |

⇒  λ² − 2λ cos α + 1 = 0

⇒  λ = (2 cos α ± √(4 cos²α − 4)) / 2 = cos α ± i sin α

i.e.  λ₁ = cos α + i sin α,   λ₂ = cos α − i sin α

By Sylvester's formula,

p(A) = p(λ₁) q(λ₁) + p(λ₂) q(λ₂)        (a1)

Choosing p(A) = Aⁿ, the above becomes

Aⁿ = (λ₁)ⁿ q(λ₁) + (λ₂)ⁿ q(λ₂)        (a2)

Now

[f(λ)] = [ λ − cos α    −sin α    ]   and   |f(λ)| = λ² − 2λ cos α + 1
         [  sin α      λ − cos α  ]

⇒  adj[f(λ)] = [ λ − cos α     sin α    ]   and   |f(λ)|′ = 2λ − 2 cos α
               [ −sin α       λ − cos α ]

⇒  q(λ) = adj[f(λ)] / |f(λ)|′ = (1 / (2λ − 2 cos α)) [ λ − cos α     sin α    ]
                                                     [ −sin α       λ − cos α ]

⇒  q(λ₁) = (1 / (2i sin α)) [  i sin α    sin α  ] = (1/2) [ 1  −i ]
                            [ −sin α    i sin α  ]         [ i   1 ]

and

q(λ₂) = (1 / (−2i sin α)) [ −i sin α     sin α  ] = (1/2) [  1  i ]
                          [ −sin α     −i sin α ]         [ −i  1 ]

Then (a2) ⇒

Aⁿ = (cos α + i sin α)ⁿ (1/2) [ 1  −i ] + (cos α − i sin α)ⁿ (1/2) [  1  i ]
                              [ i   1 ]                            [ −i  1 ]

   = (cos nα + i sin nα) (1/2) [ 1  −i ] + (cos nα − i sin nα) (1/2) [  1  i ]
                               [ i   1 ]                             [ −i  1 ]

   = (1/2) [  2 cos nα   2 sin nα ] = [  cos nα   sin nα ]
           [ −2 sin nα   2 cos nα ]   [ −sin nα   cos nα ]

(b)

|f(λ)| = |λI − A| = | λ−x   −x  | = λ² − 2λx = λ(λ − 2x)
                    | −x    λ−x |

The eigenvalues are λ₁ = 0, λ₂ = 2x.

Taking p(A) = e^A in (a1) of (a),

e^A = e^{λ₁} q(λ₁) + e^{λ₂} q(λ₂) = e⁰ q(0) + e^{2x} q(2x) = q(0) + e^{2x} q(2x)

Now

[f(λ)] = [ λ−x   −x  ]   and   |f(λ)| = λ² − 2λx
         [ −x    λ−x ]

⇒  adj[f(λ)] = [ λ−x    x  ]   and   |f(λ)|′ = 2λ − 2x
               [  x    λ−x ]

⇒  q(λ) = (1 / (2λ − 2x)) [ λ−x    x  ]
                          [  x    λ−x ]

⇒  q(0) = (1 / (−2x)) [ −x   x ] = (1/2) [  1  −1 ]
                      [  x  −x ]         [ −1   1 ]

and  q(2x) = (1 / 2x) [ x  x ] = (1/2) [ 1  1 ]
                      [ x  x ]         [ 1  1 ]

Then

e^A = (1/2) [  1  −1 ] + (e^{2x}/2) [ 1  1 ]
            [ −1   1 ]              [ 1  1 ]

    = (1/2) [ 1 + e^{2x}     −1 + e^{2x} ]
            [ −1 + e^{2x}     1 + e^{2x} ]

    = eˣ [ (eˣ + e⁻ˣ)/2   (eˣ − e⁻ˣ)/2 ]
         [ (eˣ − e⁻ˣ)/2   (eˣ + e⁻ˣ)/2 ]

    = eˣ [ cosh x   sinh x ]
         [ sinh x   cosh x ]

(c) Here we have

|f(λ)| = |λI − A| = | λ−1    0  | = λ² − 1
                    | −2    λ+1 |

The eigenvalues are λ₁ = 1, λ₂ = −1. Considering p(A) = sin A, (a1) gives

sin A = (sin λ₁) q(λ₁) + (sin λ₂) q(λ₂) = sin 1 [q(1) − q(−1)]

Now

[f(λ)] = [ λ−1    0  ]   and   |f(λ)| = λ² − 1
         [ −2    λ+1 ]

⇒  adj[f(λ)] = [ λ+1    0  ]   and   |f(λ)|′ = 2λ
               [  2    λ−1 ]

⇒  q(λ) = (1/2λ) [ λ+1    0  ]
                 [  2    λ−1 ]

⇒  q(1) = (1/2) [ 2  0 ]   and   q(−1) = −(1/2) [ 0   0 ] = (1/2) [  0  0 ]
                [ 2  0 ]                        [ 2  −2 ]         [ −2  2 ]

⇒  q(1) − q(−1) = (1/2) [ 2   0 ] = [ 1   0 ] = A
                        [ 4  −2 ]   [ 2  −1 ]

Then (a1) ⇒  sin A = (sin 1) A

Problem 5.5

If A = [ 2  4 ] ,  then using Sylvester's theorem show that
       [ 3  1 ]

(i) sin²A + cos²A = I

(ii) sec²A − tan²A = I  and
(iii) ln e^A = A.

Solution. (a)

|f(λ)| = |λI − A| = | λ−2   −4  | = λ² − 3λ − 10 = (λ + 2)(λ − 5)
                    | −3    λ−1 |

Then the eigenvalues are λ₁ = −2, λ₂ = 5.

We have

p(A) = p(λ₁) q(λ₁) + p(λ₂) q(λ₂)        (a1)

(i) Choosing p(A) = sin²A, the above becomes

sin²A = (sin²λ₁) q(λ₁) + (sin²λ₂) q(λ₂)

Similarly taking p(A) = cos²A, (a1) gives

cos²A = (cos²λ₁) q(λ₁) + (cos²λ₂) q(λ₂)

Adding the above two equations,

sin²A + cos²A = q(λ₁) + q(λ₂)        (a2)

Now

[f(λ)] = [ λ−2   −4  ]   and   |f(λ)| = λ² − 3λ − 10
         [ −3    λ−1 ]

⇒  adj[f(λ)] = [ λ−1    4  ]   and   |f(λ)|′ = 2λ − 3
               [  3    λ−2 ]

⇒  q(λ) = (1 / (2λ − 3)) [ λ−1    4  ]
                         [  3    λ−2 ]

⇒  q(−2) = −(1/7) [ −3   4 ] = (1/7) [  3  −4 ]   and   q(5) = (1/7) [ 4  4 ]
                  [  3  −4 ]         [ −3   4 ]                      [ 3  3 ]

⇒  q(−2) + q(5) = (1/7) [ 7  0 ] = [ 1  0 ] = I        (a3)
                        [ 0  7 ]   [ 0  1 ]

Then (a2) ⇒  sin²A + cos²A = I

(ii) Take p(A) = sec²A, tan²A in (a1):

sec²A = (sec²λ₁) q(λ₁) + (sec²λ₂) q(λ₂)
tan²A = (tan²λ₁) q(λ₁) + (tan²λ₂) q(λ₂)

⇒  sec²A − tan²A = (sec²λ₁ − tan²λ₁) q(λ₁) + (sec²λ₂ − tan²λ₂) q(λ₂)
                 = q(λ₁) + q(λ₂) = q(−2) + q(5) = I,   by (a3)

(iii) Taking p(A) = ln e^A in (a1), we get

ln e^A = (ln e^{λ₁}) q(λ₁) + (ln e^{λ₂}) q(λ₂) = λ₁ q(λ₁) + λ₂ q(λ₂)
       = −2 q(−2) + 5 q(5)

       = −(2/7) [  3  −4 ] + (5/7) [ 4  4 ]
                [ −3   4 ]         [ 3  3 ]

       = (1/7) [ −6   8 ] + (1/7) [ 20  20 ]
               [  6  −8 ]         [ 15  15 ]

       = (1/7) [ 14  28 ] = [ 2  4 ] = A
               [ 21   7 ]   [ 3  1 ]

MCQ 5.3

Using Sylvester's theorem, for A = [ 2  1 ] , the matrix represented by A² − 3A − I is
                                   [ 3  4 ]

(A) [ 0  3 ]     (B) [ 3  0 ]     (C) [ 0  3 ]     (D) [ 0  9 ]
    [ 6  9 ]         [ 9  6 ]         [ 9  6 ]         [ 3  6 ]

MCQ 5.4

If M = [ −a  0 ] , then the matrix e^M is
       [  0  b ]

(A) [ eᵃ   0  ]     (B) [ eᵃ    0  ]     (C) [ e⁻ᵃ   0  ]     (D) [ e⁻ᵃ    0  ]
    [ 0   eᵇ  ]         [ 0   −eᵇ  ]         [ 0    eᵇ  ]         [ 0    −eᵇ  ]

SAQ 5.4

Use Sylvester's theorem to show the following:

(a) 3 tan A = (tan 3) A,  where A = [ −1  4 ]
                                    [  2  1 ]

(b) 2 sin A = (sin 2) A,  where A = [ −1  3 ]
                                    [  1  1 ]

5.4 Quadratic forms

A function of the form

f(x₁, x₂) = a₁₁x₁² + a₂₂x₂² + 2a₁₂x₁x₂

is a quadratic form in two variables. The quantities aᵢⱼ are constants and do not contain x₁, x₂. Using the summation sign, it is written as

f(x₁, x₂) = Σᵢ,ⱼ₌₁² aᵢⱼxᵢxⱼ,   where aᵢⱼ = aⱼᵢ,  i, j = 1, 2.

Similarly quadratic forms in three and n variables can be written as

f(x₁, x₂, x₃) = Σᵢ,ⱼ₌₁³ aᵢⱼxᵢxⱼ

and

f(x₁, x₂, …, xₙ) = Σᵢ,ⱼ₌₁ⁿ aᵢⱼxᵢxⱼ        (5.7)

It is always assumed that aᵢⱼ = aⱼᵢ for all i, j. We observe that

f(x₁, x₂) = [x₁ x₂] [ a₁₁  a₁₂ ] [ x₁ ]
                    [ a₂₁  a₂₂ ] [ x₂ ]

and

f(x₁, x₂, …, xₙ) = [x₁ x₂ ⋯ xₙ] [ a₁₁  a₁₂  ⋯  a₁ₙ ] [ x₁ ]
                                [ a₂₁  a₂₂  ⋯  a₂ₙ ] [ x₂ ]
                                [  ⋮    ⋮        ⋮  ] [  ⋮ ]
                                [ aₙ₁  aₙ₂  ⋯  aₙₙ ] [ xₙ ]

It means that a quadratic form is associated with a symmetric matrix A = [aᵢⱼ] such that

f(x₁, x₂, …, xₙ) = xᵀAx,   where x = [x₁, x₂, …, xₙ]ᵀ.

Remark. Since x′, y′, z′ are used for coordinates, to avoid confusion we denote the transpose of a matrix A by Aᵀ in place of our usual notation A′.

The matrix of the quadratic form
The symmetric matrix A is called the matrix of the quadratic form. The rank of A is the rank of the quadratic form. If the rank r < n, the quadratic form is called singular, otherwise nonsingular.
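The identity f(x) = xᵀAx is a one-line computation in numpy; a small sketch with illustrative values:

```python
import numpy as np

# Symmetric matrix of a quadratic form: 2 x1^2 + 3 x2^2 + 2 x1 x2
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

Q = x @ A @ x    # x^T A x = 2*1 + 3*4 + 2*(1*1*2) = 18
```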

Problem 5.6
Find the quadratic form Q(x, y, z) of the matrix

A = [ 1  2  3 ]
    [ 2  3  1 ]
    [ 3  1  2 ]

Hint. In (5.7) the coefficient of aᵢᵢ is xᵢxᵢ = xᵢ², and the coefficient of aᵢⱼ (i ≠ j) is 2xᵢxⱼ.

Solution. Let x = x₁, y = x₂, z = x₃. Then

Q(x, y, z) = Σᵢ,ⱼ₌₁³ aᵢⱼxᵢxⱼ,   aᵢⱼ = aⱼᵢ for all i, j

⇒  Q(x₁, x₂, x₃) = a₁₁x₁² + a₂₂x₂² + a₃₃x₃² + 2a₁₂x₁x₂ + 2a₁₃x₁x₃ + 2a₂₃x₂x₃
                 = (1)x₁² + 3x₂² + 2x₃² + 2(2)x₁x₂ + 2(3)x₁x₃ + 2(1)x₂x₃
                 = x₁² + 3x₂² + 2x₃² + 4x₁x₂ + 6x₁x₃ + 2x₂x₃

⇒  Q(x, y, z) = x² + 3y² + 2z² + 4xy + 6xz + 2yz,   ∵ x₁ = x, x₂ = y, x₃ = z

This is the required form.

Remark. Noting the hint one can immediately write down the quadratic form from the matrix of the quadratic form and vice versa.

Problem 5.7
Write down the quadratic form corresponding to the following symmetric matrix:

[ 0  1  2  3 ]
[ 1  2  3  4 ]
[ 2  3  4  5 ]
[ 3  4  5  6 ]

Solution. Following the hint of the previous problem, the quadratic form is given by

Q = Σᵢ,ⱼ₌₁⁴ aᵢⱼxᵢxⱼ
  = (0)x₁² + 2x₂² + 4x₃² + 6x₄² + 2(1)x₁x₂ + 2(2)x₁x₃ + 2(3)x₁x₄ + 2(3)x₂x₃ + 2(4)x₂x₄ + 2(5)x₃x₄
  = 2x₂² + 4x₃² + 6x₄² + 2x₁x₂ + 4x₁x₃ + 6x₁x₄ + 6x₂x₃ + 8x₂x₄ + 10x₃x₄

Problem 5.8
Find the matrix of the quadratic form

2x₁² − 3x₂² + 4x₃² + x₄² + x₁x₂ + 2x₁x₃ − 3x₁x₄ + 2x₂x₃ − 4x₂x₄.

Solution. The matrix of the given quadratic form is symmetric and is

A = [aᵢⱼ] = [   2    1/2    1   −3/2 ]
            [  1/2   −3     1   −2   ]
            [   1     1     4    0   ]
            [ −3/2   −2     0    1   ]

Canonical form

A quadratic form is said to be canonical if all mixed terms like x₁x₂, x₁x₃, x₂x₃ etc. are absent. It means that a quadratic form

f(x₁, …, xₙ) = Σᵢ,ⱼ aᵢⱼxᵢxⱼ        (5.8)

is canonical if aᵢⱼ = 0, ∀ i ≠ j. Then the form (5.8) becomes

f(x₁, …, xₙ) = a₁₁x₁² + a₂₂x₂² + ⋯ + aₙₙxₙ²        (5.9)

In the case of a canonical quadratic form, the matrix of the form is diagonal:

A = [ a₁₁   0   ⋯   0  ]
    [  0   a₂₂  ⋯   0  ]
    [  ⋮    ⋮        ⋮  ]
    [  0    0   ⋯  aₙₙ ]

Illustration

f(x₁, x₂) = 3x₁² + 4x₂²
f(x₁, x₂) = x₁² − x₂²
f(x₁, x₂, x₃) = x₁² + 2x₂² − 4x₃²

are canonical forms.

SAQ 5.5

Write down the quadratic form corresponding to the matrix

[  1  −1  −3 ]
[ −1   8   5 ]
[ −3   5   6 ]

SUMMARY
Introducing the concept of similar matrices, the procedure to reduce a given square matrix to a diagonal matrix is explained. The theorem due to Sylvester is discussed and is used to find the power, exponential, inverse etc. of a given matrix. The quadratic form of a symmetric matrix is illustrated.

KEY WORDS
Similar matrices
Modal matrix
Diagonalization of a matrix
Sylvester's theorem
Quadratic form
Canonical form


UNIT 03-01: VECTOR SPACES

1-24

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of a vector space

INTRODUCTION

The term linear algebra in essence denotes a branch of mathematics that studies vector spaces and systems of linear equations. As a matter of fact, vector spaces form the cornerstone of linear algebra. In this text we confine ourselves to the study of finite dimensional vector spaces and linear transformations. We use the following notations for the basic sets:

N : the set of all natural numbers
Z : the set of all integers
Z⁺ : the set of all positive integers
Q : the set of all rational numbers
R : the set of all real numbers
R⁺ : the set of all positive real numbers
C : the set of all complex numbers

Throughout the book it is assumed that the sets under discussion are nonempty unless stated otherwise. In the interest of making the chapter self-contained, we append some basic information that is helpful in the study of vector spaces.

1.1 Basic information

Cartesian product
The Cartesian product of sets A and B is denoted by A × B and is defined as

A × B = {(a, b) | a ∈ A, b ∈ B}

In general A × B ≠ B × A.

Illustration
(i) Let A = {x, y}, B = {1, 2, 3}. Then

(i) Let A {x , y}, B {1, 2 , 3}. Then 1

A u B { ( x ,1) , ( x , 2) , ( x , 3) , ( y ,1) , ( y , 2) , ( y , 3)} RuR

(ii)

R2

the plane xy

Binary operation

A binary operation on a set G is a function $ : G u G o G. a , b G Ÿ a $ b  G.

Thus Group

Definition. A group is an ordered pair (G , $), where $ is a binary operation defined on a set G

satisfying the following axioms: (G1) Associativity: a, b, c  G Ÿ a $ (b $ c)

( a $ b) $ c .

(G2) Existence!of!an!identity!element!in!G:  e  G such that a$e

a , a  G.

e$a

(G3) Existence!of!an!inverse!in!G: for each a  G ,  a 1  G such that a $ a 1

a 1 $ a e

Remark. Hereafter, for convenience sake we become less formal and denote a grou (G , $) simply

by G but with an understanding that now G is not simply a set but a the system (G, $). We also write a $ b as a ˜ b or simply ab for all a , b  G. We say that the set G is closed under $ œ $ is a binary operation defined on G. In the definition of a group, a binary operation is denoted by a multiplicative notation ($) . It is just the convention to denote the identity elements and the inverses as follows: For and

multiplicative operation $ : identity element

1 or e, inverse of aG is a 1

for additive operation + : identity element 0, inverse of a  G is  a.

One must note that  and $ are not our usual addition and multiplication operations respectively. They are just the notations. Similarly 1 and 0 are not the integers, they are just the symbols. We use e as the identity and call it a multiplicative!identity.

The set R is closed under usual or ordinary addition and usual multiplication having additive identity 2

0, additive inverse of a ( R ) is  a

multiplicative identity

1 1, inverse of a (z 0)  R is a  R

Definition. A group G is said to be abelian!or commutative if ab

ba,  a, b  G .

A group which is not abelian is called non-abelian or non-commutative. Illustration The set Z under usual addition () is a group. It is abelian. However, Z  with usual multiplication is not a group because there exits an identity 1, but no inverse of 2. Also Z  is not a group under usual addition because there is no identity element in Z  . Note that

02

20

2, but 0  Z  and hence identity does not exist.

Definition. Let G be a group. A nonempty subset H of G is called a subgroup of G if H forms a group under the binary operation on G. Definition. The order of a group G is the number of elements of G. It is denoted by o(G ) . A group G is said to be finite if o(G ) is finite. Some group properties P1. The identity of a group G is unique. P2. Every a  G has a unique inverse in G. P3. Cancellation laws: a , u , w  G. Then au aw Ÿ u

w and ua wa Ÿ u w.

P4. H is a subgroup of G œ xy 1  H or x  y  H . 1.2 The real coordinate space R n Before we consider the abstract concept of a vector space, an important example of real coordinate space R n is introduced. This forms an excellent intuitive model of abstract finite dimensional vector spaces. Once this general case is considered and discussed, its particular examples R1

R, R 2 , etc follow easily for taking n 1, 2,. For the first reading the discussion

may be tiresome and rather boring, but once it is studied with patience, you will enjoy the same and will get involved in understanding algebra, linear algebra in particular. Consider the space (or set) Rn

R u R u u R m n times o

3

Rn

Then

{x

( x1 , x2 ,  , xn ) | x1 ,  , xn  R}

Thus the elements of R n are n -tuples of real numbers. The elements of R n are called vectors having n components and are denoted by bold faced letters. Equality of vectors Let x

( x1 ,  , xn ) and y

( y1 ,  , yn ) be any two elements of R n . We define x y œ xi

yi , i 1 , 2 ,  , n

(1.1)

Vector addition and the scalar multiplication Let x , y  R n , D  R. We define xy

( x1 , x2 ,  , xn )  ( y1 , y2 ,  , yn ) Dx

and

D( x1 , x2 ,  , xn )

( x1  y1 , x2  y2 ,  , xn  yn )

(Dx1 , Dx2 ,  , Dxn )

(1.2) (1.3)

The addition in (1.2) defines a vector addition on R n and (1.3) defines a scalar multiplication. These two operations are carried out componentwise. Here a scalar means a real number. With the operations of vector addition and scalar multiplication the set R n becomes a vector space. (1, 0 ,  2)  (3 ,  1, 5) (1  3 , 0  1,  2  5) (4 ,  1, 3)

Illustration.

3(1, 0 ,  2) (3 , 0 ,  6) Theorem 1.1 The!following!properties!are!satisfied!by!any!scalars! D , E  R !and!any!vectors! x , y , z  R n :

P1. x + y ∈ Rⁿ (additive closure property)
P2. x + y = y + x (addition is commutative)
P3. x + (y + z) = (x + y) + z (addition is associative)
P4. There is a zero vector 0 = (0, …, 0) ∈ Rⁿ such that x + 0 = x (existence of additive identity)
P5. For each x ∈ Rⁿ, there is −x ∈ Rⁿ such that x + (−x) = 0 (existence of additive inverse)
P6. αx ∈ Rⁿ
P7. α(x + y) = αx + αy (distributivity for vector addition)
P8. (α + β)x = αx + βx (distributivity for scalar addition)
P9. α(βx) = (αβ)x (associativity of scalar multiplication)
P10. 1 · x = x,  1 ∈ R

Proof. Let x = (x₁, …, xₙ), y = (y₁, …, yₙ) and z = (z₁, …, zₙ) be in Rⁿ.

P1. Noting the definition of vector addition in (1.2), it implies that x + y ∈ Rⁿ.

P2. x + y = (x₁, …, xₙ) + (y₁, …, yₙ)
          = (x₁ + y₁, …, xₙ + yₙ),   by (1.2)
          = (y₁ + x₁, …, yₙ + xₙ),   ∵ addition is commutative for real numbers
          = (y₁, …, yₙ) + (x₁, …, xₙ),   by (1.2)
          = y + x

P3. x + (y + z) = (x₁, …, xₙ) + (y₁ + z₁, …, yₙ + zₙ),   by (1.2)
               = (x₁ + (y₁ + z₁), …, xₙ + (yₙ + zₙ)),   by (1.2)
               = ((x₁ + y₁) + z₁, …, (xₙ + yₙ) + zₙ),   ∵ additive associativity of reals
               = (x₁ + y₁, …, xₙ + yₙ) + (z₁, …, zₙ),   by (1.2)
               = (x + y) + z

P4 to P6. Complete yourself.

P7. α(x + y) = α(x₁ + y₁, …, xₙ + yₙ),   by (1.2)
             = (α(x₁ + y₁), …, α(xₙ + yₙ)),   by (1.3)
             = (αx₁ + αy₁, …, αxₙ + αyₙ),   by the distributivity property of reals
             = (αx₁, …, αxₙ) + (αy₁, …, αyₙ),   by (1.2)
             = α(x₁, …, xₙ) + α(y₁, …, yₙ),   by (1.3)
             = αx + αy

P8. Complete yourself.

P9. α(βx) = α(βx₁, …, βxₙ),   by (1.3)
          = (α(βx₁), …, α(βxₙ)),   by (1.3)
          = ((αβ)x₁, …, (αβ)xₙ),   ∵ a(bc) = (ab)c for reals a, b, c
          = (αβ)(x₁, …, xₙ),   by (1.3)
          = (αβ)x

P10. Complete yourself.        QED

Remark. P1 to P5 ⇔ Rⁿ is an abelian group under vector addition; P6 to P10 prescribe the properties of scalar multiplication.
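Since numpy arrays implement exactly the componentwise operations (1.2)–(1.3), a few of the properties P1–P10 can be spot-checked directly (an illustrative sketch, not a proof):

```python
import numpy as np

x = np.array([1.0, 0.0, -2.0])
y = np.array([3.0, -1.0, 5.0])
alpha, beta = 2.0, -3.0

commut = np.array_equal(x + y, y + x)                              # P2
distrib_vec = np.allclose(alpha * (x + y), alpha * x + alpha * y)  # P7
distrib_scal = np.allclose((alpha + beta) * x, alpha * x + beta * x)  # P8
```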

MCQ 1.1
Choose the false statement/s from the following:
(A) The vector of the form (a, b, 0, 0) is an element of R⁴ for a ∈ Q
(B) The vector of the form (a, b, c) is an element of R³ for a, b ∈ Z, c ∈ N
(C) The vector of the form (a, b) is an element of R² for a ∈ N, b ∈ C
(D) The vector of the form (a) is an element of R for all a ∈ R

MCQ 1.2
Let u = (a, b) ∈ R² and v = (c, d, 0) ∈ R³. Select the correct statement/s from the following:
(A) u + v = (a + c, b + d, 0)
(B) 0 + u = (a, b), 0 ∈ R²
(C) u + 0 = (a, b, 0), 0 ∈ R
(D) u = v ⇔ a = c, b = d

Now we go ahead to discuss the abstract structure of a vector space, the main concern of the unit.

1.3 Vector space Since to each vector space, there is associated a field, we define it first.

Field Definition.

Consider a set F of elements in which a relation of equality and two binary

operations of addition () and multiplication ($) are defined. Then an algebraic system ( F , , $) is a field with the following axioms: 6

(F1) ( F , ) is an abelian group. Its identity is the!zero!element of F . (F2) Non zero elements of F form an abelian group under ($). (F3) The operation ($) is distributive over ( ) i.e. D, E, J  F Ÿ D $ (E  J )

D $ E  D $ J.

Remark. Confining to less formality, we write ( F , , $) simply F In a field F we have two identities: the additive identity (denoted by 0) or the zero element of F and

the multiplicative identity (denoted by 1) but 1 z 0.

Note that 0 and 1 are not our usual integers. A field contains at least two elements 0 and 1. The elements of a field are called scalars. We write a $ b

ab,  a , b  F .

A field is the smallest structure on which we can perform all the basic four arithmetic operations addition (), subtraction (), multiplication (˜) and division (y) by nonzero elements. The structures Q , R, C with ordinary addition and multiplication are the examples of field.

Vector addition and scalar multiplication
Let F be a field and let V be a non-empty set. Define two operations (+) and (·) such that
+ : V × V → V and · : F × V → V   (1.4)
The above operations are different from the operations in the system (F, +, ·), though the notations are the same. In (1.4) the operation (+) is called the vector addition and (·) the scalar multiplication.

MCQ 1.3
Let a, b ∈ V and h, k ∈ F. Let the operations + and · be defined as in (1.4). Then
(A) h·a is defined but a·h is not defined.
(B) a·b is defined but h + a is not defined.
(C) a + b is defined but h·b is not defined.
(D) b + k and h + k are not defined.

Vector space
Hereafter the elements of V are not written in bold form and we write α·v = αv, ∀ α ∈ F, v ∈ V.

Definition. Let F be a field and let V be a non-empty set. Let the two operations (+) and (·) be as defined in (1.4). Then the algebraic system (V, +, ·) is called a vector space over the field F if
(v1) (V, +) is an abelian group
(v2) α(v + w) = αv + αw
(v3) (α + β)v = αv + βv
(v4) α(βv) = (αβ)v
(v5) 1v = v,
∀ v, w ∈ V and ∀ α, β ∈ F. Here 1 is the unit element of F.

Note that the axioms (v1) to (v5) are the same as stated in Thm 1.1 with F = R and V = Rⁿ.
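The axioms can also be checked mechanically on concrete data. The Python sketch below (our illustration, not from the text) spot-checks (v2)-(v5) for V = R² over F = R with the component-wise operations; such a numerical check on samples is of course no substitute for the general proofs given in the problems that follow.

```python
# Component-wise addition and scalar multiplication on V = R^2 over F = R.
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(a, u):
    return (a * u[0], a * u[1])

v, w = (1.0, -2.0), (3.5, 0.5)
alpha, beta = 2.0, -3.0

assert smul(alpha, vadd(v, w)) == vadd(smul(alpha, v), smul(alpha, w))  # (v2)
assert smul(alpha + beta, v) == vadd(smul(alpha, v), smul(beta, v))     # (v3)
assert smul(alpha, smul(beta, v)) == smul(alpha * beta, v)              # (v4)
assert smul(1.0, v) == v                                                # (v5)
```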

Real vector space
If F = R, then the vector space V(F) is called a real vector space.

Complex vector space
If F = C, then the vector space V(F) is called a complex vector space.

Vectors. The elements of V are called vectors.

Zeros. There will be two zeros: the scalar zero of F and the vector zero of V. Both are denoted by the same symbol 0. In future we write a·b = ab, b·v = bv and use the following conventions, unless otherwise specified.
(a) The elements of F are denoted by lower case Greek letters such as α, β etc.
(b) Capital Latin letters A, B, …, U, V, … denote the vector spaces over the field F. The elements of the vector spaces will be denoted by lower case Latin letters such as a, b, u, ….
(c) The notation V(F) stands for a vector space V over the field F. If the field F is understood, we simply say that V is a vector space.
(d) We use the notation u − v for u + (−v), for vectors u, v.

Remark
The above definition involves four operations: two operations (+, ·) from (1.4) and two operations (+, ·) in the field F. Though we denote them by the same symbols (+, ·), it does not mean that (+, ·) in (V, +, ·) and (F, +, ·) are the same. In (v4) the operation between β and v is from (1.4) but the (·) between α and β on the right is from (F, +, ·). With some practice one can easily tell which operation belongs to which system.

1.4 Examples of vector spaces
Problem 1.1

Show that the set R of real numbers is a vector space over R.
Solution. We show that all the axioms (v1)-(v5) are satisfied under the usual addition and scalar multiplication defined on R. We know that R is an abelian group under addition (+). Also all the axioms (v2)-(v5) are satisfied ∀ v, w, α, β ∈ R. Thus all the conditions (v1)-(v5) are satisfied by R and hence it becomes a vector space over itself.

Problem 1.2

Prove that the set R² is a vector space over R.
Comment. The solution can be written down from the proof of Theorem 1.1 for n = 2. However, we give its detailed solution.
Solution. Let x, y, z ∈ R². Then by definition we write
x = (x1, x2), y = (y1, y2), z = (z1, z2), x1, x2, y1, y2, z1, z2 ∈ R
Addition and scalar multiplication are defined as follows:
x + y = (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2)   (a1)
and
αx = (αx1, αx2), for α ∈ R   (a2)
We show that all the axioms (v1)-(v5) are satisfied.
(v1) To show that R² is an abelian group under (+):
(a) x + y = (x1 + y1, x2 + y2)
  = (y1 + x1, y2 + x2), ∵ addition is commutative in R
  = (y1, y2) + (x1, x2), by (a1)
  = y + x, by (a1)
The above is true for all x, y ∈ R². Hence the addition defined on R² is commutative.
(b) (x + y) + z = (x1 + y1, x2 + y2) + (z1, z2), by (a1)
  = ((x1 + y1) + z1, (x2 + y2) + z2), by (a1)
  = (x1 + (y1 + z1), x2 + (y2 + z2)), by associativity in R
  = (x1, x2) + (y1 + z1, y2 + z2), by (a1)
  = x + (y + z), by (a1), ∀ x, y, z ∈ R²
⇒ Addition defined on R² is associative.
(c) There is 0 = (0, 0) ∈ R² such that x + 0 = x = 0 + x, ∀ x ∈ R²
⇒ Existence of the additive identity in R².
(d) For any x ∈ R², we have −x = (−x1, −x2) such that
x + (−x) = (x1, x2) + (−x1, −x2) = (x1 − x1, x2 − x2) = (0, 0) = 0 ∈ R²
⇒ Existence of the inverse of every element of R².
(a)-(d) ⇒ R² is an abelian group under (a1).
(v2) For any α ∈ R, we have
α(x + y) = α(x1 + y1, x2 + y2), by (a1)
  = (αx1 + αy1, αx2 + αy2), by (a2) and distributivity in R
  = (αx1, αx2) + (αy1, αy2), by (a1)
  = α(x1, x2) + α(y1, y2), by (a2)
  = αx + αy
(v3) For any α, β ∈ R, we write
(α + β)x = (α + β)(x1, x2) = ((α + β)x1, (α + β)x2), by (a2)
  = (αx1 + βx1, αx2 + βx2)
  = (αx1, αx2) + (βx1, βx2), by (a1)
  = α(x1, x2) + β(x1, x2), by (a2)
  = αx + βx
(v4) α(βx) = α(βx1, βx2), by (a2)
  = (α(βx1), α(βx2)), by (a2)
  = ((αβ)x1, (αβ)x2), by associativity in R
  = (αβ)(x1, x2), by (a2)
  = (αβ)x
(v5) There is 1 ∈ R such that 1x = 1(x1, x2) = (1x1, 1x2) = (x1, x2) = x
Hence we observe that all the vector space axioms are satisfied, and thus R² is a vector space over R.
Remark. On similar lines it can be shown that Rⁿ is a vector space over the field R. For details refer to Theorem 1.1.

Problem 1.3

Prove that M, the set of all m × n matrices of real numbers, is a vector space over R under addition of matrices and scalar multiplication of a matrix.
Solution. Let M be the set of all m × n matrices of real numbers, i.e.
M = {(aij)m×n | aij ∈ R, 1 ≤ i ≤ m, 1 ≤ j ≤ n}
Let A = (aij)m×n, B = (bij)m×n, C = (cij)m×n ∈ M and α, β ∈ R. Then
A = B ⇔ aij = bij, ∀ i, j   (a1)
A + B = (aij + bij)m×n   (a2)
αA = (αaij)m×n   (a3)
(i) (a2) ⇒ A + B ∈ M, ∵ aij + bij ∈ R for aij, bij ∈ R, ∀ i, j
(ii) A + (B + C) = (aij)m×n + (bij + cij)m×n
  = (aij + bij + cij)m×n, by (a2)
  = (aij + bij)m×n + (cij)m×n
  = (A + B) + C, by (a2)
(iii) A + B = (aij + bij)m×n = (bij + aij)m×n, by commutativity of + in R
  = B + A, by (a2)
(iv) Consider 0 = (0)m×n, the m × n zero matrix. Its each element is 0 ∈ R. Then
A + 0 = (aij + 0)m×n = (aij)m×n = A, ∀ A ∈ M
Also by (iii), 0 + A = A + 0 = A. Hence 0 ∈ M is an additive identity.
(v) For A = (aij)m×n ∈ M, there is −A = (−aij)m×n ∈ M such that
A + (−A) = 0, the m × n zero matrix.
⇒ −A = (−aij)m×n ∈ M is the additive inverse of A.
Thus (i) to (v) ⇒ (M, +) is an abelian group.
(vi) Now α, aij ∈ R ⇒ αaij ∈ R, ∀ i, j. Then αA ∈ M by (a3).
(vii) α(A + B) = α(aij + bij)m×n, by (a2)
  = (αaij + αbij)m×n, by (a3)
  = (αaij)m×n + (αbij)m×n, by (a2)
  = αA + αB, by (a3)
(viii) (α + β)A = ((α + β)aij)m×n = (αaij + βaij)m×n
  = (αaij)m×n + (βaij)m×n
  = αA + βA
(ix) α(βA) = α(βaij)m×n = (αβ aij)m×n = (αβ)A, by (a3)
(x) There is 1 ∈ R such that 1A = (1aij)m×n = (aij)m×n = A
Then (i) to (x) imply that M is a real vector space.

Problem 1.4

Let R⁺ be the set of all positive real numbers. Define the operations of addition ⊕ and scalar multiplication ⊙ as follows:
u ⊕ v = uv, ∀ u, v ∈ R⁺   (a1)
and
α ⊙ u = u^α, ∀ u ∈ R⁺ and α ∈ R   (a2)
Prove that R⁺ is a real vector space.
Solution. Let u, v, w ∈ R⁺ = {x ∈ R | x > 0} and α, β ∈ R. Addition ⊕ and scalar multiplication ⊙ are given by (a1) and (a2).
(i) u ⊕ v = uv ∈ R⁺, ∵ u, v are positive real numbers.
(ii) u ⊕ (v ⊕ w) = u(vw), by (a1)
  = (uv)w, ∵ multiplication is associative in R⁺
  = (u ⊕ v) ⊕ w, by (a1)
(iii) u ⊕ v = uv, by (a1)
  = vu, ∵ multiplication is commutative in R⁺
  = v ⊕ u, by (a1)
(iv) 1 ∈ R⁺ is the identity (the zero vector) with respect to ⊕, ∵ 1 ⊕ u = 1u = u, ∀ u
(v) For any u ∈ R⁺, we have (1/u) ⊕ u = (1/u)u = 1 = the identity. Hence the inverse of any u ∈ R⁺ with respect to ⊕ is 1/u ∈ R⁺.
Then (i) to (v) ⇒ (R⁺, ⊕) is an abelian group.
(vi) α ⊙ u = u^α ∈ R⁺
(vii) α ⊙ (u ⊕ v) = (uv)^α, by (a1) and (a2)
  = u^α v^α, by the law of indices
  = (α ⊙ u) ⊕ (α ⊙ v), by (a1) and (a2)
(viii) (α + β) ⊙ u = u^(α+β), by (a2)
  = u^α u^β, by the law of indices
  = (α ⊙ u) ⊕ (β ⊙ u), by (a1) and (a2)
(ix) α ⊙ (β ⊙ u) = (u^β)^α, by (a2)
  = u^(αβ) = (αβ) ⊙ u, by (a2)
(x) 1 ⊙ u = u¹ = u
Hence (i) to (x) imply that R⁺ is a real vector space.

Problem 1.5
Let K be a subfield of a field F. Prove that F is a vector space over K, under the addition and multiplication in F.
Solution. (i) F is a field ⇒ (F, +) is an abelian group.
(ii) α ∈ K, x ∈ F ⇒ αx ∈ F, ∵ K ⊆ F, so α ∈ K ⇒ α ∈ F, etc.
(iii) α, β ∈ K and x, y ∈ F ⇒ α(x + y) = αx + αy and (α + β)x = αx + βx, by distributivity in F; α(βx) = (αβ)x, by the associative law in F
(iv) 1·x = x, ∀ x ∈ F, ∵ 1 ∈ K ⊆ F is the unity in F.
Hence F is a vector space over the field K.

Problem 1.6
If F is the field of real numbers, show that the set of real-valued continuous functions on the closed interval [0, 1] forms a vector space over F.
Hint. (f + g)(x) = f(x) + g(x), (αf)(x) = αf(x), ∀ x ∈ [0, 1] and α ∈ F

Solution. Let F be the field of real numbers R. Assume that f : [0, 1] → R is continuous. Let V be the set of all such functions, i.e.
V = {f : [0, 1] → R | f is continuous}
Let f, g ∈ V and α ∈ R. We have
(f + g)(x) = f(x) + g(x), (αf)(x) = αf(x), ∀ x ∈ [0, 1]   (a1)
We verify that the vector space conditions (v1)-(v5) are satisfied.
(v1) (V, +) is an abelian group:
(a) Closure property. We know that the sum of two continuous functions is again continuous. Thus V is closed under + as defined in (a1).
(b) Associativity. Let f, g, h ∈ V. Then for x ∈ [0, 1], we have
[f + (g + h)](x) = f(x) + (g + h)(x), by (a1)
  = f(x) + [g(x) + h(x)], by (a1)
  = [f(x) + g(x)] + h(x), ∵ additive associativity of reals
  = (f + g)(x) + h(x), by (a1)
  = [(f + g) + h](x), ∀ x ∈ [0, 1]
⇒ f + (g + h) = (f + g) + h, ∀ f, g, h ∈ V
Hence (+) defined in (a1) is associative.
(c) Existence of identity. We denote by 0 the zero function in V such that 0(x) = 0, ∀ x ∈ [0, 1]. Now
(0 + f)(x) = 0(x) + f(x), by (a1)
  = 0 + f(x), ∵ 0(x) = 0
  = f(x), ∀ x ∈ [0, 1], ∵ 0 is the additive identity of R
⇒ 0 + f = f. Similarly one can show that f + 0 = f.
Hence ∃ 0 ∈ V such that 0 + f = f = f + 0, ∀ f ∈ V, i.e. 0 is the additive identity in V.
(d) Existence of inverse. Let f ∈ V. Define −f ∈ V such that (−f)(x) = −f(x), ∀ x ∈ [0, 1]. Now
[f + (−f)](x) = f(x) + (−f)(x), by (a1)
  = f(x) − f(x) = 0, ∵ f(x), −f(x) ∈ R
  = 0(x), ∀ x ∈ [0, 1]
Hence f + (−f) = 0, the identity of V. Similarly one can verify that (−f) + f = 0, and then −f = inverse of f ∈ V.
(e) Commutativity. Since f(x), g(x) are real numbers and addition of real numbers is commutative, we have
(f + g)(x) = f(x) + g(x) = g(x) + f(x), ∀ x
Hence f + g = g + f, ∀ f, g ∈ V, i.e. + is commutative.
Then (a) to (e) imply that (V, +) is an abelian group.
(v2) For α ∈ F, f, g ∈ V, x ∈ [0, 1], we get
[α(f + g)](x) = α[(f + g)(x)] = α[f(x) + g(x)], by (a1)
  = αf(x) + αg(x), ∵ distributivity in R
  = (αf)(x) + (αg)(x), by (a1)
  = (αf + αg)(x), by (a1), ∀ x ∈ [0, 1]
⇒ α(f + g) = αf + αg
(v3) [(α + β)f](x) = (α + β)f(x)
  = αf(x) + βf(x), ∵ distributivity in R
  = (αf)(x) + (βf)(x)
  = (αf + βf)(x), ∀ x ∈ [0, 1]
⇒ (α + β)f = αf + βf
(v4) Let α, β ∈ F, f ∈ V, x ∈ [0, 1]. We have
[α(βf)](x) = α[(βf)(x)] = α[βf(x)], by (a1)
  = αβ f(x), ∵ associativity in R
  = (αβ)f(x) = [(αβ)f](x), by (a1)
⇒ α(βf) = (αβ)f
(v5) We have 1 ∈ R. Then (1f)(x) = 1f(x) = f(x), ∵ 1 is the unity element in R
⇒ 1f = f
Hence all the vector space axioms are satisfied. Then (V, +, ·) becomes a vector space over the real field.

Problem 1.7
In F[x], let Vn be the set of all polynomials of degree less than n. Using the natural operations for polynomials of addition and multiplication, show that Vn is a vector space over F.
Hint. p(x)

= α0 + α1x + ⋯ + αr x^r = a polynomial of degree r in x, the α's ∈ the field F.
F[x] = the set of all such polynomials. Addition of polynomials and scalar multiplication of polynomials are defined as
p(x) + q(x) = (α0 + α1x + ⋯ + αr x^r) + (β0 + β1x + ⋯ + βs x^s) = c0 + c1x + ⋯ + ct x^t,
where ci = αi + βi, ∀ i, and
k p(x) = k(α0 + α1x + ⋯ + αr x^r) = (kα0) + (kα1)x + ⋯ + (kαr)x^r = d0 + d1x + ⋯ + dr x^r,
where di = kαi, ∀ i.
The additive identity 0 of F[x] is the zero polynomial 0 = 0 + 0x + ⋯ + 0xⁿ; the unit element of F[x] is 1 = 1 + 0x + 0x² + ⋯ + 0xⁿ; the additive inverse of p(x) is −p(x) = (−α0) + (−α1)x + ⋯ + (−αr)x^r.
On the lines of the previous examples, write down the solution yourself.

Problem 1.8
Show by an example that the axiom 1·v = v is independent of the remaining axioms of a vector space.
Hint. o(G), the order of the group G, = the number of elements in the group G.

Solution. Let G be any additive group with identity 0, say, and o(G) ≥ 2. Let F be a field. Define the scalar multiplication (·) as
α·v = 0 ∈ G, ∀ α ∈ F, v ∈ G
Then (G, +) is an abelian group. Moreover,
(i) α·v ∈ G
(ii) α·(v + w) = 0 = 0 + 0 = α·v + α·w
(iii) (α + β)·v = 0 = 0 + 0 = α·v + β·v
(iv) α·(β·v) = 0 = (αβ)·v, ∀ α, β ∈ F and v, w ∈ G
Here 1·v = 0 ≠ v, for v(≠ 0) ∈ G. Hence the last axiom (v5) of a vector space is not satisfied. However, we find that all the other vector space axioms (v1) to (v4) are satisfied. Hence G is not a vector space over F. This proves that the axiom 1·v = v is independent of the remaining axioms of a vector space.
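The construction of Problem 1.8 can be tried out concretely. In the sketch below (ours; G is taken as the additive group Z × Z purely for illustration), every scalar multiple collapses to the identity, so (v2)-(v4) hold while (v5) fails:

```python
# G = Z x Z under component-wise addition; alpha * v := 0 for all alpha.
ZERO = (0, 0)

def gadd(v, w):
    return (v[0] + w[0], v[1] + w[1])

def smul(alpha, v):
    return ZERO  # every scalar multiple is the identity of G

v, w, alpha, beta = (1, 2), (3, -1), 2, 5

assert smul(alpha, gadd(v, w)) == gadd(smul(alpha, v), smul(alpha, w))  # (v2) holds
assert smul(alpha + beta, v) == gadd(smul(alpha, v), smul(beta, v))     # (v3) holds
assert smul(alpha, smul(beta, v)) == smul(alpha * beta, v)              # (v4) holds
assert smul(1, v) != v                                                  # (v5) fails
```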


Problem 1.9
Show that the axiom 1·v = v, in the definition of a vector space, can be replaced by the following one:
the equation αv = 0 holds only if α = 0 or v = 0, α ∈ F, v ∈ V   (a1)
Solution. Assume (a1). We recover 1·v = v from (a1). We write 1·v = 1v.
Now (1·1)v = 1·v, for 1 ∈ F, v ∈ V
⇒ 1(1v) = 1v, or 1(1v) − 1v = 0, i.e. 1(1v − v) = 0
⇒ 1v − v = 0, by (a1), ∵ 1 ≠ 0
⇒ 1v = v, ∀ v ∈ V
Hence (a1) ⇒ 1·v = v.

Problem 1.10
Let A be a nonempty set and
V = R^A = {f | f : A → R is a function}
Here R is the set of real numbers. For f, g ∈ V and for α ∈ R, we define
f = g ⇔ f(x) = g(x), ∀ x ∈ A   (a1)
(f + g)(x) = f(x) + g(x), ∀ x ∈ A   (a2)
(αf)(x) = αf(x), ∀ x ∈ A   (a3)
Prove that V is a real vector space under the addition and scalar multiplication defined in (a2) and (a3).
Solution. Let f, g, h ∈ V and let α, β ∈ R.
(i) By (a2), f + g : A → R is a function, ∵ f(x) + g(x) ∈ R, ∀ x ∈ A
⇒ f + g ∈ V
(ii) (f + g)(x) = f(x) + g(x), by (a2)
  = g(x) + f(x), ∵ addition is commutative in R
  = (g + f)(x), ∀ x ∈ A, by (a2)
⇒ f + g = g + f, by (a1)
(iii) [f + (g + h)](x) = f(x) + (g + h)(x)
  = f(x) + [g(x) + h(x)], by (a2)
  = [f(x) + g(x)] + h(x), by the associative law in R
  = (f + g)(x) + h(x)
  = [(f + g) + h](x), ∀ x ∈ A, by (a2)
⇒ f + (g + h) = (f + g) + h, by (a1)
(iv) Define a function 0 : A → R by 0(x) = 0, ∀ x ∈ A. This function 0 is in V. Also
(f + 0)(x) = f(x) + 0(x), ∀ f ∈ V, by (a2)
  = f(x) + 0 = f(x), ∀ x ∈ A
⇒ f + 0 = f = 0 + f, by (a1) and (ii)
Hence the function 0 is an additive identity or zero vector.
(v) For f ∈ V, there is −f : A → R defined by (−f)(x) = −f(x), ∀ x ∈ A. Then −f ∈ V. Now
[f + (−f)](x) = f(x) + (−f)(x), by (a2)
  = f(x) − f(x) = 0 = 0(x), ∀ x ∈ A
⇒ f + (−f) = 0 = (−f) + f, by (a1) and (ii)
⇒ −f ∈ V is an additive inverse of f
(i) to (v) ⇒ (V, +) is an abelian group.
(vi) By (a3), αf : A → R is a function, ∵ αf(x) ∈ R, ∀ x ∈ A. Hence αf ∈ V.
(vii) [α(f + g)](x) = α[(f + g)(x)], by (a3)
  = α[f(x) + g(x)], by (a2)
  = αf(x) + αg(x), by distributivity in R
  = (αf)(x) + (αg)(x), by (a3)
  = (αf + αg)(x), by (a2), ∀ x ∈ A
⇒ α(f + g) = αf + αg, by (a1)
(viii) [(α + β)f](x) = (α + β)·f(x), by (a3)
  = αf(x) + βf(x), by distributivity in R
  = (αf)(x) + (βf)(x), by (a3)
  = (αf + βf)(x), by (a2), ∀ x ∈ A
⇒ (α + β)f = αf + βf, by (a1)
(ix) [α(βf)](x) = α[(βf)(x)] = α[β(f(x))] = (αβ)·f(x) = ((αβ)f)(x), ∀ x ∈ A, by (a3)
⇒ α(βf) = (αβ)f, by (a1)
(x) (1·f)(x) = 1·f(x), by (a3)
  = f(x), ∀ x ∈ A
⇒ 1·f = f, by (a1)
(i) to (x) ⇒ V = R^A is a vector space over R.
Remark. This vector space V is called a function space over R.

MCQ 1.4

Consider the statements:
(a) For any field F, F is a vector space over F.
(b) C is a vector space over R.
(c) C is a vector space over Q.
Choose the true statement/s from the following:
(A) only (a) is true (B) all are true (C) only (b) and (c) are true (D) all are false

MCQ 1.5
Consider the statements:
(a) V is a vector space over F and K is a subfield of F ⇒ V is a vector space over K.
(b) V is a complex vector space ⇒ V is a real vector space.
Choose the true statement/s from the following:
(A) Both are true but (a) is not the reason for (b).
(B) Only (a) is true.
(C) Both are true and (a) is the reason for (b).
(D) Both are false.

MCQ 1.6

Let V = {(a, b) | a, b ∈ R} be the set of ordered pairs of real numbers and let the operations of addition in V and scalar multiplication on V be defined by
(a) (a, b) + (c, d) = (a + c, b + d) and k(a, b) = (ka, b)
(b) (a, b) + (c, d) = (a + c, b + d) and k(a, b) = (k²a, k²b)
(c) (a, b) + (c, d) = (a, b) and k(a, b) = (ka, kb)
for all a, b, c, d, k ∈ R. Then
(A) V is a vector space over R with respect to (a).
(B) V is not a vector space over R with respect to (b) only.
(C) V is not a vector space over R with respect to (a) only.
(D) V is not a vector space over R with respect to (a), (b) and (c).

MCQ 1.7
The set M = { [x y; ry x] | x, y, r ∈ R } of 2 × 2 matrices is a vector space over R with respect to the usual addition and scalar multiplication of matrices. Then
(A) r ∈ C
(B) r ∈ R but r ∉ C
(C) r ∈ N only
(D) r ∈ Z only

Let V be the set of all pairs of real numbers and let F be the field of real numbers. Define
(x1, x2) + (y1, y2) = (3x2 + 3y2, −x1 − y1)
and
a(x1, x2) = (3ax2, −ax1)
for all x1, x2, y1, y2, a ∈ R. Show that V is not a vector space over the field of real numbers.

SAQ 1.2
Prove that Cⁿ = {(x1, x2, …, xn) | xi ∈ C, 1 ≤ i ≤ n} is a complex vector space, where addition and scalar multiplication are defined component-wise.

SAQ 1.3
Show that the set of all ordered n-tuples over R is a vector space over R.

SAQ 1.4
Show that the solution set of a system of equations in n unknowns of the form
a11 x1 + a12 x2 + ⋯ + a1n xn = 0
a21 x1 + a22 x2 + ⋯ + a2n xn = 0
⋯
an1 x1 + an2 x2 + ⋯ + ann xn = 0
where the a's are real, forms a vector space over R under the usual addition and scalar multiplication.
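SAQ 1.4 can be made concrete numerically. The sketch below (our example system, not from the text) verifies on a sample that sums and scalar multiples of solutions of a homogeneous system remain solutions:

```python
# Homogeneous system: x1 + 2*x2 - x3 = 0 and 3*x1 - x2 + x3 = 0.
A = [[1, 2, -1],
     [3, -1, 1]]

def is_solution(x):
    # x solves the system iff every row dotted with x gives 0
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

u = (1, -4, -7)                                  # one particular solution
v = (-2, 8, 14)                                  # another (v = -2u)
w = tuple(ui + vi for ui, vi in zip(u, v))       # u + v
s = tuple(5 * ui for ui in u)                    # 5u

assert is_solution(u) and is_solution(v)
assert is_solution(w)   # closed under addition
assert is_solution(s)   # closed under scalar multiplication
```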


1.5 General properties of a vector space
The properties of a vector space are spelled out in the following theorem.

Theorem 1.2
Let V be a vector space over F. Then
(i) α0 = 0, ∀ α ∈ F   (1.5a)
(ii) 0v = 0, ∀ v ∈ V   (1.5b)
(iii) (−α)v = −(αv), ∀ α ∈ F, ∀ v ∈ V   (1.5c)
(iv) αv = 0 ⇔ α = 0 or v = 0, (α ∈ F, v ∈ V)   (1.5d)
(v) α(u − v) = αu − αv, ∀ u, v ∈ V and α ∈ F
(vi) (−1)u = −u, ∀ u ∈ V
Proof. The theorem is proved by using the vector space axioms (v1) to (v5) given earlier in the unit.
(i) α(0 + 0) = α0, ∵ 0 = 0 + 0 ∈ V
⇒ α0 + α0 = α0, ∵ distributivity
  = α0 + 0, ∵ 0 is the identity ∈ V
⇒ α0 = 0, by the cancellation law.
(ii) (0 + 0)v = 0v, for 0 ∈ F, ∵ 0 = 0 + 0
or 0v + 0v = 0v, by distributivity
  = 0v + 0, for 0 ∈ V
⇒ 0v = 0, by the cancellation law.
(iii) Let α ∈ F. Then −α ∈ F. We have α + (−α) = 0 ∈ F.
⇒ (α + (−α))v = 0v = 0, by (ii), for v ∈ V
⇒ αv + (−α)v = 0, by distributivity
⇒ (−α)v = (inverse of αv) = −(αv)
(iv) α = 0 or v = 0 in (i) and (ii) ⇒ αv = 0
Conversely, let αv = 0. If α = 0, then the result follows. If α ≠ 0, then α⁻¹ ∈ F and
α⁻¹(αv) = α⁻¹0 = 0, by (i) ⇒ (α⁻¹α)v = 0
⇒ 1v = 0, i.e. v = 0
(v) α(u − v) = α[u + (−v)]
  = αu + α(−v), by distributivity
  = αu + (−(αv)), by (iii)
  = αu − αv
(vi) u + (−1)u = 1u + (−1)u = [1 + (−1)]u = 0u = 0
⇒ (−1)u = −u
QED
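The identities of Theorem 1.2 can be seen at work numerically. A small sketch (ours, not from the text) in R³ with component-wise operations:

```python
# Check (i), (ii), (iii) and (vi) of Theorem 1.2 in R^3.
ZERO = (0.0, 0.0, 0.0)

def smul(a, v):
    return tuple(a * vi for vi in v)

def neg(v):
    return tuple(-vi for vi in v)

alpha, v = 4.0, (1.0, -2.0, 3.0)

assert smul(alpha, ZERO) == ZERO               # (i)   alpha*0 = 0
assert smul(0.0, v) == ZERO                    # (ii)  0*v = 0
assert smul(-alpha, v) == neg(smul(alpha, v))  # (iii) (-alpha)v = -(alpha v)
assert smul(-1.0, v) == neg(v)                 # (vi)  (-1)v = -v
```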

Theorem 1.3 (Cancellation laws)
Let V be a vector space over the field F. We have
(i) if v(≠ 0) ∈ V and α, β ∈ F, then αv = βv ⇒ α = β   (1.6a)
(ii) if u, v ∈ V and α(≠ 0) ∈ F, then αu = αv ⇒ u = v   (1.6b)
Proof. (i) Let v be any nonzero element of a vector space V over F and α, β ∈ F. Let
αv = βv
⇒ αv − βv = 0
⇒ (α − β)v = 0
⇒ α − β = 0, by (1.5d) since v ≠ 0
⇒ α = β
(ii) Let α be any nonzero element of F and u, v ∈ V. Let
αu = αv
⇒ αu − αv = 0
⇒ α(u − v) = 0
⇒ u − v = 0, by (1.5d) since α ≠ 0
⇒ u = v
QED

SUMMARY
The concept of a vector space over a field is introduced and explained with illustrations. Various types of vector spaces are discussed through solved examples. At the end, the general properties shared by all vector spaces are enumerated.

KEY WORDS
Vectors, Scalars, Scalar multiplication, Field, Vector space

UNIT 03-02: SUBSPACES

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to:
Explain the properties of vector spaces
Explain the concept of a subspace of a vector space
Apply the tests to identify a subspace of a given vector space

INTRODUCTION
In the study of the algebraic structures of groups and rings (see our book on Groups and Rings), there are the concepts of subgroup and subring. Since a vector space is also an algebraic structure, there is a natural analog of subgroup and subring in a vector space, and we call it a subspace.

2.1 Subspace (or vector subspace)
Definition
Let V be a vector space over a field F. A nonempty subset U of V is called a subspace of V if U is a vector space over F under the vector addition and scalar multiplication of V.

Remark
The spaces {0} and V are called trivial or improper subspaces of the vector space V. The space {0} is called a zero space or null space.

Theorem 2.1 (Test for a subspace)

A nonempty subset U of a vector space V over F is a subspace of V iff
(i) u + v ∈ U, ∀ u, v ∈ U, and
(ii) αu ∈ U, ∀ α ∈ F, u ∈ U.
Proof. Let U be a nonempty subset of a vector space V(F).
(a) Let U be a subspace of the vector space V over the field F. Then U is a vector space over F and consequently (U, +) is an abelian group.
⇒ u + v ∈ U, ∀ u, v ∈ U, and αu ∈ U, ∀ α ∈ F, u ∈ U
(b) Conversely, let (i) and (ii) hold, where U ⊆ V. Now
u, v ∈ U ⇒ αv, u ∈ U, by (ii) for α ∈ F
⇒ αv + u = u + αv ∈ U, by (i), ∵ + is commutative on V
⇒ u − v ∈ U, by taking α = −1 ∈ F
Hence U is a subgroup of the additive abelian group V, i.e. (U, +) is an abelian group.
Let u, v ∈ U, α, β ∈ F. Then u, v ∈ V, ∵ U ⊆ V. Since V(F) is a vector space, we have
α(u + v) = αu + αv, (α + β)u = αu + βu, α(βu) = (αβ)u and 1u = u   (2.1)
But u, v ∈ U, α, β ∈ F. Then using (ii) and (2.1), U is a vector space over the field F. But U ⊆ V and V(F) is a vector space. Hence U is a subspace of V(F).
QED

Corollary 2.1
A nonempty subset U of a vector space V(F) is a subspace of V iff αu + βv ∈ U, ∀ α, β ∈ F and ∀ u, v ∈ U.
Proof. (i) Let U be a nonempty subspace of a vector space V over a field F.
⇒ U is a vector space over the field F
Then u, v ∈ U and α, β ∈ F ⇒ αu, βv ∈ U, by the scalar multiplication on U(F)
⇒ αu + βv ∈ U, ∵ (U, +) is a group.
(ii) Conversely, let U be any nonempty subset of a vector space V(F) with
αu + βv ∈ U, ∀ α, β ∈ F and ∀ u, v ∈ U
In αu + βv ∈ U, put α = β = 1 to get u + v ∈ U, and β = 0 to get αu ∈ U.
⇒ U is a subspace of the vector space V(F), by Thm 2.1
QED

Corollary 2.2
A nonempty subset U of a vector space V(F) is a subspace of V iff αu + v ∈ U, ∀ u, v ∈ U, α ∈ F.
Proof. From the above corollary, we know that
U is a subspace of a vector space V over F ⇔ αu + βv ∈ U for u, v ∈ U, α, β ∈ F
For β = 1, the corollary follows.
QED

Problem 2.1
Let V be a vector space over the field R. For x(≠ 0) ∈ V, let S = {ax | a ∈ R}. Show that S is a subspace of V.
Solution. Let u, v ∈ S. Then
u = a1x, v = a2x, for some a1, a2 ∈ R
⇒ αu + βv = α(a1x) + β(a2x), α, β ∈ R
  = (αa1)x + (βa2)x, by the vector space axioms
  = (αa1 + βa2)x   (a1)
Since αa1 + βa2 ∈ R, (a1) ⇒ αu + βv ∈ S, ∀ u, v ∈ S, α, β ∈ R
⇒ S is a subspace of V, by Corollary 2.1

Problem 2.2
Show that U = {(a, b, 0) | a, b ∈ R} is a subspace of V = R³.
Solution. We know that
V = R³ = {(a, b, c) | a, b, c ∈ R}
is a real vector space under component-wise addition and scalar multiplication. Let
u = (a, b, 0), v = (x, y, 0) ∈ U, a, b, x, y ∈ R
⇒ u + v = (a, b, 0) + (x, y, 0) = (a + x, b + y, 0) ∈ U, ∵ a + x, b + y ∈ R
and αu = α(a, b, 0) = (αa, αb, 0) ∈ U, ∵ αa, αb ∈ R
Then by Theorem 2.1, U is a subspace of the vector space V.

Problem 2.3
Prove that U = {(x1, x2, x3) ∈ V3 | x1 + x2 = x3} is a subspace of V3.
Solution. Given that V3 = {(x1, x2, x3) | xi ∈ R, 1 ≤ i ≤ 3} is a vector space under component-wise vector addition and scalar multiplication. Let U be the nonempty subset of V3 defined by
U = {(x1, x2, x3) ∈ V3 | x1 + x2 = x3}   (a1)
Consider u = (x1, x2, x3), v = (y1, y2, y3), any two elements of U. Then
x1 + x2 = x3, y1 + y2 = y3, by the definition in (a1)
⇒ (x1 + y1) + (x2 + y2) = x3 + y3   (a2)
Now u + v = (x1, x2, x3) + (y1, y2, y3) = (x1 + y1, x2 + y2, x3 + y3), by component-wise addition.
In view of (a2), the right side ∈ U and then u + v ∈ U. Also for α ∈ F,
αu = α(x1, x2, x3) = (αx1, αx2, αx3), by scalar multiplication   (a3)
Now αx1 + αx2 = α(x1 + x2) = αx3, by (a1)
Then (a3) ⇒ αu ∈ U.
Thus u + v, αu ∈ U, ∀ u, v ∈ U, α ∈ F. Hence by Theorem 2.1, U is a subspace of the vector space V3.

Problem 2.4
Let

U = {a0 + a1x + a2x² + a3x³ | a0, a1, a2, a3 ∈ R, x is a real variable}   (a1)
Prove that U is a real subspace of the vector space of polynomials under addition and scalar multiplication of polynomials over R.
Hint. The set of polynomials with real coefficients forms a vector space under component-wise addition and scalar multiplication of polynomials.
Solution. Consider the vector space P of polynomials with real coefficients. Let U, defined in (a1), be a nonempty subset of P. Let
f(x) = a0 + a1x + a2x² + a3x³, g(x) = b0 + b1x + b2x² + b3x³ ∈ U, where a0, …, a3, b0, …, b3 ∈ R
⇒ f(x) + g(x) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x² + (a3 + b3)x³   (a2)
For any α ∈ R, we have
αf(x) = αa0 + (αa1)x + (αa2)x² + (αa3)x³   (a3)
But a0 + b0, …, a3 + b3 ∈ R and αa0, …, αa3 ∈ R
⇒ f(x) + g(x), αf(x) ∈ U, by (a1), ∀ f(x), g(x) ∈ U and ∀ α ∈ R
Then by Theorem 2.1, U is a subspace of P.

Problem 2.5

Let V = R³ be the vector space of ordered triples of real numbers with the usual addition and scalar multiplication. Which of the following is not a subspace of V?
(A) W = {(x, y, z) | x + y = 0, x + 2y + 3z = 0}
(B) W = {(x, y, z) | x = 1, y = 1}
(C) W = {(x, y, z) | x + y + z = 0}
(D) W = {(x, y, z) | y = 3x, z = 5x}
(SET AUG 2010)
Hint. A subset of a vector space that does not contain the zero vector is not a subspace.
Solution. Let V = R³ be the vector space of ordered triples of real numbers with the usual addition and scalar multiplication.
(A) We choose x = 3z, y = −3z, z = z so that x + y = 0, x + 2y + 3z = 0. Hence any element of W can be written as (3z, −3z, z), i.e. W = {(3z, −3z, z) | z ∈ R}. For z = 0, (0, 0, 0) ∈ W and thus W contains the zero vector. We use Corollary 2.2 to show that W is a subspace of V. Let
u = (3x, −3x, x), v = (3y, −3y, y) ∈ W and α ∈ R.
⇒ αu + v = α(3x, −3x, x) + (3y, −3y, y)
  = (3αx, −3αx, αx) + (3y, −3y, y), by scalar multiplication
  = (3αx + 3y, −3αx − 3y, αx + y), by addition of vectors in V
  = (3(αx + y), −3(αx + y), (αx + y))
Noting the form of the elements of W, the above shows that αu + v ∈ W, ∀ u, v ∈ W, ∀ α ∈ R.
⇒ W is a subspace of V, by Corollary 2.2
(B) From the definition of W, we have (0, 0, 0) ∉ W. Hence W cannot be a subspace of V; see the Hint.
(C) Considering W = {(x, y, −x − y) | x, y ∈ R}, show yourself, on the lines of (A), that W is a subspace of V.
(D) Choose W = {(x, 3x, 5x) | x ∈ R} and confirm yourself that W is a subspace of V.
Hence the correct choice is (B).

MCQ 2.1

Let V be the vector space of all functions from R → R. Consider the statements:
(a) U = {f | f(2) = 0} is not a subspace of V.
(b) U = {f | f(5) = 3 + f(1)} is a subspace of V.
Choose the true statement/s from the following:
(A) Only (a) is true.
(B) Only (b) is true.
(C) Both the statements are true.
(D) Both the statements are false.

MCQ 2.2

Let V = { [a b; c d] | a, b, c, d ∈ R } be the vector space of 2 × 2 real matrices under matrix addition and scalar multiplication. Choose the true statement/s:
(A) U = {x ∈ V | det x = 0} is a subspace of V.
(B) U = {x ∈ V | det x = 1} is not a subspace of V.
(C) U = {x ∈ V | x² = x} is not a subspace of V.
(D) U = {x ∈ V | x³ = x} is a subspace of V.

MCQ 2.3

Which of the following statements is false?
(A) Consider the vector space C over C. Then for any scalars α, β: α·1 + β·i = 0 ⇔ α = 0 and β = 0
(B) The set M = {f ∈ C[0, 1] | f(1/2) = 0} is a subspace of the real vector space C[0, 1].
(C) Let V(F) be any vector space. Then for any α ∈ F and for any vector x ∈ V, αx = 0 ⇔ α = 0 or x = 0
(D) The class of all polynomials with real coefficients is a real vector space.
Hint. C[0, 1] = the set of all continuous functions on the interval [0, 1]

In which of the alternatives a subset W of a vector space R 3 (R ) is not a subspace? (A) W

{ (a, b, 0) | a, b  R}

(B) W

{ ( a , b, c ) | a  b  c

(C) W

{ (a, b, c) | a 2  b 2  c 2 d 1}

(D) W

{ ( a , a , 0) | a  R }

0}

(SET FEB 2013)

MCQ 2.5
Which of the following is a subspace of the vector space C[a, b]?
(A) {f ∈ C[a, b] | f(a) = 1}
(B) {f ∈ C[a, b] | ∫ₐᵇ f(x) dx > 0}
(C) {f ∈ C[a, b] | f((a + b)/2) = 1}
(D) {f ∈ C[a, b] | f(0) = 0}
(SET AUG 2015)

SAQ 2.1
If W is a subspace of a vector space V and U is a subspace of W, then prove that U is a subspace of V.

SAQ 2.2
Let V = R³. Show that W = {(a, b, c) ∈ V | a > 0} is not a subspace of V.

SAQ 2.3
Identify the subspaces of R⁴(R), where R⁴ = {(x, y, z, w) | x, y, z, w ∈ R}:
(a) W = {(x, x, x, x) | x ∈ R}
(b) W = {(x, y, x, y) | x, y ∈ Z}
(c) W = {(x, 2x, y, x + y) | x, y ∈ R}

SAQ 2.4
Let V = R³(R) = {(x, y, z) | x, y, z ∈ R} and let W be the set of all triplets (x, y, z) such that
(i) x + 3y + 4z = 0 and
(ii) a1x + b1y + c1z = 0.
Prove that W is a subspace of R³(R).

2.2 Union and intersection of subspaces
Theorem 2.2

Let U, W be subspaces of a vector space V(F). Then U ∪ W is a subspace of V ⇔ U ⊆ W or W ⊆ U.
Proof. Let U, W be subspaces of a vector space V(F).
(i) Let U ⊆ W or W ⊆ U. Then U ∪ W = W or U ∪ W = U. Hence U ∪ W is a subspace.
(ii) Conversely, let U ∪ W be a subspace of V(F). We apply the method of contradiction to prove that U ⊆ W or W ⊆ U. Assume the contrary, that "U ⊆ W or W ⊆ U" is false, i.e. U ⊄ W and W ⊄ U. Then ∃ x ∈ U, y ∈ W with x ∉ W and y ∉ U.   (a1)
Now x, y ∈ U ∪ W and U ∪ W is a subspace of V(F).
⇒ x + y ∈ U ∪ W, for x ∈ U, y ∈ W
⇒ x + y ∈ U or x + y ∈ W
⇒ y = x + y + (−x) ∈ U or x = x + y + (−y) ∈ W
This contradicts (a1). Hence the initial assumption that U ⊄ W and W ⊄ U is wrong. Then it follows that U ⊆ W or W ⊆ U.
QED

Problem 2.6
Show that the union of two subspaces need not be a subspace.
Solution. We know that U = {(0, y) | y ∈ R} is a subspace of the vector space V2, and W = {(x, 0) | x ∈ R} is a subspace of V2.
⇒ U ∪ W = {(x, y) ∈ V2 | x = 0 or y = 0} = {(x, y) ∈ V2 | xy = 0}
Now (1, 0), (0, 1) ∈ U ∪ W and (1, 0) + (0, 1) = (1, 1) ∉ U ∪ W. Hence U ∪ W is not a subspace of V2.

Theorem 2.3
The intersection of two subspaces of a vector space is a subspace.
Proof. Let U, W be any two subspaces of a vector space V(F). Then U and W contain the zero vector, i.e. 0 ∈ U, 0 ∈ W.
⇒ 0 ∈ U ∩ W
It is obvious that U ∩ W is a nonempty subset of V. Let u, v ∈ U ∩ W and α, β ∈ F.
⇒ u, v ∈ U, u, v ∈ W and α, β ∈ F
⇒ αu + βv ∈ U and αu + βv ∈ W, ∵ U, W are subspaces: see Corollary 2.1
⇒ αu + βv ∈ U ∩ W, ∀ u, v ∈ U ∩ W, ∀ α, β ∈ F
⇒ U ∩ W is a subspace of the vector space V(F), by Corollary 2.1
QED

Problem 2.7
Sets U and W are defined by
U = {(x1, x2) ∈ V2 | x1 ≥ 0}
and
W = {(x1, x2) ∈ V2 | x1 ≤ 0}.
Show that U ∩ W is a subspace of V2.
Solution. We know that
V2 = R² = {(x1, x2) | x1, x2 ∈ R}
is a real vector space under component-wise addition and scalar multiplication. Now
U ∩ W = {(x1, x2) ∈ V2 | x1 ≥ 0 and x1 ≤ 0} = {(0, x2) | x2 ∈ R}
is a nonempty subset of V2. Let u, v ∈ U ∩ W.
⇒ u = (0, x2), v = (0, y2), x2, y2 ∈ R
⇒ u + v = (0, x2) + (0, y2) = (0 + 0, x2 + y2) = (0, x2 + y2) ∈ U ∩ W, ∵ x2 + y2 ∈ R
For α ∈ R, we have
αu = α(0, x2) = (α0, αx2) = (0, αx2) ∈ U ∩ W, ∵ αx2 ∈ R
Hence U ∩ W is a subspace of V2.

Problem 2.8

is the index set. We show that  U O is a subspace of V . For this the first condition is that zero O'

vector 0 must belong to this set. Since each of the spaces U O is a subspace of V , we have zero vector 0 U O ,  O  ' Ÿ

0   UO

Ÿ

ˆ U O is a nonempty subset of V ( F )

Let

u , v   U O and D, E  F

Ÿ

u , v  U O ,  O  ' and D, E  F

Ÿ

Du  Ev U O ,  O  ',  each U O is a subspace of V ( F )

Ÿ

Du  E v   U O

O'

{u | u U O ,  O  '}  V

O'

O'

Noting (a1),  U O is a subspace of a vector space V (F ), by Corollary 2.1. O'

38

(a1)

Problem 2.8
Let V be a vector space over an infinite field F. Prove that V cannot be written as the set-theoretic union of a finite number of proper subspaces.
Solution. We apply the method of contradiction. Assume the contrary, that V can be expressed as the union of a finite number of its subspaces, i.e.
V = V₁ ∪ V₂ ∪ V₃ ∪ … ∪ Vₙ,        (a1)
where each Vᵢ is a proper subspace of the vector space V. Consider V₁. In the case V₁ ⊆ V₂ ∪ V₃ ∪ … ∪ Vₙ, (a1) is replaced by V = V₂ ∪ V₃ ∪ … ∪ Vₙ and the role of V₁ in (a1) becomes redundant. Thus we take
V₁ ⊄ V₂ ∪ V₃ ∪ … ∪ Vₙ        (a2)
Similarly
V₂ ⊄ V₁ ∪ V₃ ∪ … ∪ Vₙ        (a3)
and so on. Then in (a1) no Vᵢ is a subset of the union of V₁, V₂, …, V_{i−1}, V_{i+1}, …, Vₙ. Now (a2) ⇒ ∃ x₁ such that x₁ ∈ V₁ and x₁ ∉ V₂ ∪ V₃ ∪ … ∪ Vₙ. Similarly
∃ y₁ ∈ V₂ with y₁ ∉ V₁ ∪ V₃ ∪ … ∪ Vₙ        (a4)
Since V is a vector space, x₁ + αy₁ ∈ V, ∀ α(≠ 0) ∈ F. Assume that x₁ + αy₁ ∈ V₁ ∪ V₂. Then
x₁ + αy₁ ∈ V₁ or x₁ + αy₁ ∈ V₂
Since α(≠ 0) ∈ F, α⁻¹ ∈ F. We write
y₁ = (1/α)(x₁ + αy₁) − (1/α)x₁
⇒   y₁ ∈ V₁,   ∵ V₁ is a subspace of V
This contradicts (a4). Hence x₁ + αy₁ ∉ V₁. Similarly we can show that x₁ + αy₁ ∉ V₂, and hence it follows that
x₁ + αy₁ ∉ V₁ ∪ V₂
But x₁ + αy₁ ∈ V. Hence x₁ + αy₁ must lie in some V_k, 3 ≤ k ≤ n. The field F being infinite, ∃ scalars α, β (α ≠ 0, β ≠ 0, β ≠ α) such that
x₁ + αy₁, x₁ + βy₁ ∈ V_k, for some k, 3 ≤ k ≤ n
Since α − β(≠ 0) ∈ F, 1/(α − β), 1/(β − α) ∈ F. Now we have
y₁ = [1/(α − β)](x₁ + αy₁) + [1/(β − α)](x₁ + βy₁)
   = a linear combination of the vectors x₁ + αy₁ and x₁ + βy₁ in V_k, 3 ≤ k ≤ n
⇒   y₁ ∈ V_k, for some k such that 3 ≤ k ≤ n
⇒   y₁ ∈ V₁ ∪ V₃ ∪ V₄ ∪ … ∪ Vₙ
This contradicts (a4). Hence the assumption made in the beginning must be wrong, and V cannot be written as the union of a finite number of proper subspaces.
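The infinitude of F is essential in Problem 2.8. As a hedged aside (not in the text): over the finite field Z₂ the conclusion fails, since Z₂² is exactly the union of its three proper subspaces. A short enumeration:

```python
# Counterexample over a finite field (an aside, not from the text): Z2^2 has
# three proper subspaces {0,(1,0)}, {0,(0,1)}, {0,(1,1)}, each closed under
# addition mod 2, and their union covers the whole space.

from itertools import product

V = set(product((0, 1), repeat=2))                      # all 4 vectors of Z2^2
subspaces = [{(0, 0), v} for v in [(1, 0), (0, 1), (1, 1)]]

union = set().union(*subspaces)
assert union == V                                       # the union is all of V
assert all(s != V for s in subspaces)                   # yet each one is proper
```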

MCQ 2.6
Consider the statements:
(a) U = {(x, 0) | x ∈ R} is a subspace of V₂.
(b) W = {(0, y) | y ∈ R} is a subspace of V₂.
(c) U ∪ W = {(x, y) | x, y ∈ R, xy = 0} is a subspace of V₂.
Choose the true statement/s from the following:
(A) (a) is true and (b) is false
(B) only (c) is false
(C) (b) and (c) are true
(D) all are true

SAQ 2.5
If U₁, U₂, …, Uₙ are subspaces of a vector space V(F), prove that ∩ᵢ₌₁ⁿ Uᵢ is also a subspace of V.

SUMMARY
The concept of a vector subspace is introduced. It is shown that an arbitrary intersection of subspaces is again a subspace. However, this is not true for the union of subspaces.

KEY WORDS
Subspace, union of subspaces, intersection of subspaces.


UNIT 03-03: LINEAR SPAN AND SUM OF SUBSPACES


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of the linear span of a nonempty subset of a vector space
Explain the concept of a sum of subspaces
Apply these concepts to find the sum of subspaces

INTRODUCTION
The concept of a linear combination of vectors is known from elementary vector algebra. In this unit the concept is carried over to the elements of a vector space. This is a novel idea because, in general, the elements of a vector space need not be our usual vectors which have magnitudes and directions. They can be functions, matrices, polynomials or any other quantities which obey the vector space axioms.

3.1 Linear span
Linear combination of vectors (LC)
Let V be a vector space over a field F and v₁, v₂, …, vₙ be in V. For any α₁, α₂, …, αₙ ∈ F, the vector
α₁v₁ + α₂v₂ + … + αₙvₙ        (3.1)
is called a linear combination (LC) of the vectors v₁, v₂, …, vₙ over F.
Trivial linear combination of vectors. If all scalars αᵢ are zero, i.e. αᵢ = 0 ∀ i, then the LC (3.1) is called a trivial linear combination of the vectors v₁, v₂, …, vₙ. If at least one αᵢ ≠ 0 in (3.1), then (3.1) is a nontrivial linear combination of v₁, v₂, …, vₙ.
Finite linear combination. Let S be a subset of V and v₁, v₂, …, vₙ ∈ S. Then (3.1) is called a finite linear combination, or simply a linear combination, of vectors in S.
Illustration
(i) 14 = (−3)(2) + 4(5): 14 is a LC of 2 and 5.
(ii) Let u = (3, −2, 1), v = (0, −1, 0) and w = (3, 0, 1).
⇒   u = 2v + 1w.
Hence u is a LC of v and w.
(iii) Consider the unit vectors e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1) of the vector space R³. We write any x = (x₁, x₂, x₃) ∈ R³ as a linear combination of e₁, e₂, e₃:
x = (x₁, x₂, x₃) = x₁(1, 0, 0) + x₂(0, 1, 0) + x₃(0, 0, 1)
i.e.   x = x₁e₁ + x₂e₂ + x₃e₃
Remark. Since v₁ = 1v₁ + 0v₂ + … + 0vₙ, …, vₙ = 0v₁ + 0v₂ + … + 1vₙ and 0 = 0v₁ + 0v₂ + … + 0vₙ, each vector vᵢ and the zero vector are LCs of the vectors v₁, v₂, …, vₙ.

Problem 3.1
Confirm whether the vector u = (3, 2) is a linear combination of the vectors a = (1, 1) and b = (2, 2).
Solution. Consider
u = xa + yb, for some x, y ∈ R.        (a1)
If x and y can be determined, then the vector u is a linear combination of a and b. Rewriting (a1),
(3, 2) = x(1, 1) + y(2, 2) = (x, x) + (2y, 2y) = (x + 2y, x + 2y)
⇒   x + 2y = 3 and x + 2y = 2        (a2)
By subtraction, 1 = 0. This is absurd. Hence x and y cannot be determined, and as such the vector u is not a linear combination of a and b.

Problem 3.2

Express the matrix
A = [ 2  0 ; −2  2 ]
as a linear combination over R of the matrices
B = [ 1  −1 ; 0  1 ]   and   C = [ 0  1 ; −1  0 ]
(rows separated by semicolons).
Solution. Let the combination be
A = xB + yC, for some x, y ∈ R
⇒   [ 2  0 ; −2  2 ] = x[ 1  −1 ; 0  1 ] + y[ 0  1 ; −1  0 ]
                     = [ x  −x ; 0  x ] + [ 0  y ; −y  0 ]
                     = [ x  −x + y ; −y  x ]
⇒   2 = x,   0 = −x + y,   −2 = −y,   2 = x
⇒   x = 2 = y
Then the required linear combination is A = 2B + 2C.
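Problem 3.2 can also be checked mechanically. A sketch in plain Python (the row-by-row flattening and the variable names are my own, not from the text): solve for x and y from two of the four entry equations and then verify all four.

```python
# Problem 3.2 numerically (a sketch): flatten each 2x2 matrix row by row and
# solve A = x*B + y*C entry by entry.

A = [2, 0, -2, 2]      # [[ 2, 0], [-2, 2]]
B = [1, -1, 0, 1]      # [[ 1,-1], [ 0, 1]]
C = [0, 1, -1, 0]      # [[ 0, 1], [-1, 0]]

x = A[0] / B[0]                 # entry (1,1): 2 = x*1 + y*0  =>  x = 2
y = A[1] - x * B[1]             # entry (1,2): 0 = x*(-1) + y*1  =>  y = x

assert all(A[i] == x * B[i] + y * C[i] for i in range(4))
print(x, y)   # -> 2.0 2.0
```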

MCQ 3.1
Let the vector a = (k, 3, 0) in R³ be a linear combination of the vectors b = (1, 0, 2) and c = (1, 1, 2). Then the point (1, 1, 0) lies on the plane
(A) kx + y + z − 3 = 0
(B) kx − y + z + 3 = 0
(C) kx + y + z − 6 = 0
(D) kx + 3y − z − 3 = 0

MCQ 3.2
The polynomial p = x² + x + 2 over R is expressible as a linear combination of the polynomials p₁ = x² − x and p₂ = x + 1 such that p = ap₁ + bp₂. Then a, b satisfy the equation:
(A) x³ − 3x² + 2x = 0
(B) x² − 3x + 2 = 0
(C) x² − 3x − 4 = 0
(D) x² − 4x + 3 = 0

SAQ 3.1
Write the vector (5, 3, 8) in R³ as a linear combination of the vectors (1, 2, 3) and (3, −1, 2).

SAQ 3.2
Find the value of a such that the vector (1, a, 1) in R³ is expressible as a linear combination of the vectors (1, 0, 1), (1, 1, 0) and (1, 2, 0).
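Both SAQs reduce to a span-membership test. A reusable sketch (the helper `in_span` is my own, not from the text; it row-reduces the augmented matrix with exact fractions, and b lies in the span iff no row reduces to (0 … 0 | nonzero)):

```python
# Span-membership by Gaussian elimination (a sketch with exact arithmetic).

from fractions import Fraction as Fr

def in_span(vectors, b):
    # columns = the spanning vectors, last column = b
    rows = [[Fr(v[i]) for v in vectors] + [Fr(b[i])] for i in range(len(b))]
    n, m = len(rows), len(vectors)
    r = 0
    for c in range(m):
        piv = next((i for i in range(r, n) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(n):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * p for a, p in zip(rows[i], rows[r])]
        r += 1
    # an inconsistent row (0 ... 0 | nonzero) means b is not in the span
    return all(any(row[:m]) or row[m] == 0 for row in rows)

# Problem 3.1: (3, 2) is not a combination of (1, 1) and (2, 2),
assert not in_span([(1, 1), (2, 2)], (3, 2))
# but (3, 7) = 3(1, 2) + 1(0, 1).
assert in_span([(1, 2), (0, 1)], (3, 7))
```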

3.2 Linear span of a subset
Let S be any nonempty subset of a vector space V over F. The linear span of S is the collection of LCs of all finite subsets of S. It is denoted by L(S) or by [S], i.e.
L(S) = {α₁v₁ + α₂v₂ + … + αₙvₙ | αᵢ ∈ F, vᵢ ∈ S, 1 ≤ i ≤ n and n ∈ N}        (3.2)
If S = {v₁, v₂, …, vₙ}, we write
[S] = L(S) = [v₁, v₂, …, vₙ]
In this case we say that the space L(S) is spanned or generated by S. We take L(∅) = {0} = the null subspace.
Illustration
(i) For S = {v}, L(S) = {αv | α ∈ F}.
(ii) If S = {v₁, v₂}, then L(S) = [v₁, v₂] = {α₁v₁ + α₂v₂ | α₁, α₂ ∈ F}.

Theorem 3.1
Let S be a nonempty subset of a vector space V. Then L(S) is the smallest subspace of V containing S.
Proof. Let S be a nonempty subset of a vector space V(F). Then L(S) is given by (3.2). Let v ∈ S. ⇒ v = 1·v ∈ L(S), 1 ∈ F. ⇒ S ⊆ L(S). Also the set L(S) is a nonempty subset of V.
Now we show that L(S) is a subspace of V. Let x, y ∈ L(S) and λ, μ ∈ F.
⇒   x = ∑ᵢ₌₁ⁿ αᵢvᵢ,  y = ∑ⱼ₌₁ᵐ βⱼwⱼ,  where αᵢ, βⱼ ∈ F, vᵢ, wⱼ ∈ S, 1 ≤ i ≤ n, 1 ≤ j ≤ m
⇒   λx + μy = λ∑ᵢ₌₁ⁿ αᵢvᵢ + μ∑ⱼ₌₁ᵐ βⱼwⱼ
            = (λα₁)v₁ + … + (λαₙ)vₙ + (μβ₁)w₁ + … + (μβₘ)wₘ
            = a finite LC of S,   ∵ λαᵢ, μβⱼ ∈ F, vᵢ, wⱼ ∈ S
⇒   λx + μy ∈ L(S)
⇒   L(S) is a subspace of V, by Corollary 2.1 of Unit 2
To prove the last part, we show that for any subspace T of V containing S, L(S) ⊆ T. Let T be any subspace of V with S ⊆ T. Now let
x = ∑ᵢ₌₁ⁿ αᵢvᵢ ∈ L(S),   vᵢ ∈ S, αᵢ ∈ F
⇒   v₁, v₂, …, vₙ ∈ S, i.e. v₁, v₂, …, vₙ ∈ T,   ∵ S ⊆ T
⇒   x = α₁v₁ + α₂v₂ + … + αₙvₙ ∈ T,   ∵ T is a subspace and vᵢ ∈ T, αᵢ ∈ F
Thus x ∈ L(S) ⇒ x ∈ T
⇒   L(S) ⊆ T
But T is any subspace of V containing S. Hence L(S) is the smallest subspace which contains S.
QED

Theorem 3.2
If S and T are subsets of a vector space V, then
(i) S ⊆ T ⇒ L(S) ⊆ L(T)
(ii) L(S) = S ⇔ S is a subspace of V
(iii) L(L(S)) = L(S)
(iv) S ⊆ L(T) and T ⊆ L(S) ⇔ L(S) = L(T).
Proof. Let S, T be subsets of a vector space V(F).
(i) Let S ⊆ T. Consider v ∈ L(S). We have
v = ∑ᵢ₌₁ⁿ αᵢvᵢ, where αᵢ ∈ F, vᵢ ∈ S, 1 ≤ i ≤ n        (3.3)
Since S ⊆ T, each vᵢ ∈ S is also in T. Then (3.3) ⇒ v ∈ L(T). Thus v ∈ L(S) ⇒ v ∈ L(T), i.e. L(S) ⊆ L(T).
(ii) Let S = L(S). But L(S) is a subspace. Hence S is a subspace of V.
Conversely, let S be a subspace of V. We show that S = L(S). By definition itself S ⊆ L(S). Let v ∈ L(S).
⇒   v = ∑ᵢ₌₁ⁿ αᵢvᵢ, where αᵢ ∈ F and vᵢ ∈ S, 1 ≤ i ≤ n
⇒   α₁v₁, α₂v₂, …, αₙvₙ ∈ S
⇒   v = ∑ᵢ₌₁ⁿ αᵢvᵢ ∈ S,   ∵ S is a subspace
Thus v ∈ L(S) ⇒ v ∈ S
⇒   L(S) ⊆ S
Now S ⊆ L(S) and L(S) ⊆ S ⇒ L(S) = S.
(iii) Case (ii) ⇒ U = L(U) ⇔ U is a subspace of V. We know that L(S) is a subspace of V. Then replacing U by L(S), the above gives L(S) = L(L(S)).
(iv) Let S ⊆ L(T) and T ⊆ L(S).
⇒   L(S) ⊆ L(L(T)) = L(T) and L(T) ⊆ L(L(S)) = L(S), by (i) and (iii)
⇒   L(S) = L(T)
Conversely, let L(S) = L(T). But S ⊆ L(S) and T ⊆ L(T).
⇒   S ⊆ L(T) and T ⊆ L(S).
QED

Finite dimensional vector space V
We know that L(S) is a subspace of V containing S. It is called the subspace generated by S. If L(S) = V, then we say that the vector space V is generated by the set S. The vector space V is said to be finite dimensional over a field F if there is a finite subset S of V such that V = L(S).
Illustration
(i) L(∅) = {0}. The null space is finite dimensional.
(ii) V₃ = R³(R) = {(x, y, z) | x, y, z ∈ R} = {x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1) | x, y, z ∈ R} = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
⇒   V₃ is a finite dimensional vector space

Problem 3.3
In V₂ show that (3, 7) ∈ [(1, 2), (0, 1)] but (3, 7) ∉ [(1, 2), (2, 4)].
Hint. Show that (3, 7) is expressible as a LC of (1, 2) and (0, 1), but not as a LC of (1, 2) and (2, 4).
Solution. We have V₂ = {(x, y) | x, y ∈ R}. We write (3, 7) as a LC of (1, 2) and (0, 1):
(3, 7) = α(1, 2) + β(0, 1) = (α, 2α + β), α, β ∈ R
⇒   α = 3 and 2α + β = 7, i.e. α = 3, β = 1
⇒   (3, 7) = 3(1, 2) + 1(0, 1)
⇒   (3, 7) ∈ [(1, 2), (0, 1)]
Let us verify whether (3, 7) is expressible as a LC of (1, 2) and (2, 4). If possible, let
(3, 7) = λ(1, 2) + μ(2, 4) = (λ + 2μ, 2λ + 4μ), for some λ, μ ∈ R
⇒   λ + 2μ = 3 and 2(λ + 2μ) = 7
⇒   2 × 3 = 7
This is absurd. Hence (3, 7) cannot be a LC of (1, 2) and (2, 4), i.e. (3, 7) ∉ [(1, 2), (2, 4)].

Problem 3.4
In the complex vector space C², show that (1 + i, 1 − i) ∈ [(1 + i, 1), (1, 1 − i)].
Solution. Let (1 + i, 1 − i) be a LC of the vectors (1 + i, 1) and (1, 1 − i):
(1 + i, 1 − i) = α(1 + i, 1) + β(1, 1 − i), for some α, β ∈ C
              = ((1 + i)α + β, α + (1 − i)β)
⇒   (1 + i)α + β = 1 + i and α + (1 − i)β = 1 − i
Solving for α and β, we get α = 1 + i, β = 1 − i.
⇒   (1 + i, 1 − i) = (1 + i)(1 + i, 1) + (1 − i)(1, 1 − i)
⇒   (1 + i, 1 − i) ∈ [(1 + i, 1), (1, 1 − i)].

Problem 3.5
Find the span of S = {(1, 2, 1), (1, 1, −1), (4, 5, −2)} and then prove that (−2, 1, 8) belongs to the span of S but (1, −3, 5) does not.
Solution. We know that S = {(1, 2, 1), (1, 1, −1), (4, 5, −2)} is a subset of the vector space V₃(R). It is easy to verify that
(4, 5, −2) = 1(1, 2, 1) + 3(1, 1, −1)
Hence one of the vectors in S is a LC of the other two. As such only two of the vectors are linearly independent.
⇒   L(S) = [(1, 2, 1), (1, 1, −1)]
         = {α(1, 2, 1) + β(1, 1, −1) | α, β ∈ R}
         = {(α + β, 2α + β, α − β) | α, β ∈ R}        (a1)
(i) To show that (−2, 1, 8) ∈ L(S):
Let (−2, 1, 8) = (α + β, 2α + β, α − β), for some α, β ∈ R
⇒   α + β = −2, 2α + β = 1, α − β = 8
⇒   α = 3, β = −5
⇒   (−2, 1, 8) = 3(1, 2, 1) − 5(1, 1, −1)
⇒   (−2, 1, 8) ∈ L(S)
(ii) To show that (1, −3, 5) ∉ L(S):
Let (1, −3, 5) = (α + β, 2α + β, α − β), for some α, β ∈ R
⇒   α + β = 1, 2α + β = −3, α − β = 5
The first two equations ⇒ α = −4, β = 5. With these values the third equation gives −9 = 5, which is absurd. Hence the values of α and β cannot be determined, and hence (1, −3, 5) cannot be written as a LC of the vectors in S. Hence (1, −3, 5) ∉ L(S).

Problem 3.6
Show that the vectors (1, 1, 1), (1, 1, 0) and (−1, 1, 0) generate R³.
Hint. Show that any (x, y, z) ∈ R³ can be written as a LC of the given vectors.
Solution. Let (x, y, z) be any element of R³. Let
(x, y, z) = a(1, 1, 1) + b(1, 1, 0) + c(−1, 1, 0), a, b, c ∈ R
          = (a, a, a) + (b, b, 0) + (−c, c, 0)
          = (a + b − c, a + b + c, a)
⇒   a + b − c = x, a + b + c = y, a = z
⇒   a = z,   b = ½(x + y − 2z),   c = ½(y − x)
Hence any element of R³ is expressed as a LC of the given vectors. This shows that the given vectors generate R³.

Problem 3.7

Show that the complex numbers u = 1 + i and v = 1 − i generate the complex field C as a vector space over the real field R.
Solution. Let z = x + iy be any element of C, where x, y ∈ R. Let
z = au + bv, for some a, b ∈ R        (a1)
⇒   x + iy = a(1 + i) + b(1 − i) = (a + b) + (a − b)i
Equating real and imaginary parts, we have
a + b = x, a − b = y
⇒   a = ½(x + y), b = ½(x − y)
Substituting these values in (a1), we find that any z ∈ C is expressible as a LC of u, v over R. Thus the complex numbers u and v generate C.

Problem 3.8

Let v₁, v₂, …, vₙ be n vectors of a vector space V(F). Prove that
(a) [v₁, v₂, …, vₙ] = [α₁v₁, α₂v₂, …, αₙvₙ], αᵢ(≠ 0) ∈ F, ∀ i
(b) [v₁, v₂] = [v₁ + v₂, v₁ − v₂]
(c) If v_k ∈ [v₁, v₂, …, v_{k−1}] for 2 ≤ k ≤ n, then
[v₁, v₂, …, v_{k−1}, v_k, v_{k+1}, …, vₙ] = [v₁, v₂, …, v_{k−1}, v_{k+1}, …, vₙ]
Solution. Let v₁, v₂, …, vₙ be vectors in a vector space V(F).
(a) Let α₁, α₂, …, αₙ be all nonzero scalars. Define S = {v₁, v₂, …, vₙ} and T = {α₁v₁, α₂v₂, …, αₙvₙ}. Now
vᵢ ∈ S ⇒ vᵢ = αᵢ⁻¹(αᵢvᵢ) = a LC of α₁v₁, α₂v₂, …, αₙvₙ, i.e. vᵢ ∈ L(T)
⇒   S ⊆ L(T)        (a1)
Also
αᵢvᵢ ∈ T ⇒ αᵢvᵢ = αᵢ(vᵢ) = a LC of v₁, v₂, …, vₙ, i.e. αᵢvᵢ ∈ L(S)
⇒   T ⊆ L(S)        (a2)
Then (a1) and (a2) ⇒ L(S) = L(T), by Theorem 3.2(iv). Hence follows the result (a).
(b) Let S = {v₁, v₂} and T = {v₁ + v₂, v₁ − v₂}. Now
v₁ + v₂ = 1v₁ + 1v₂ ∈ L(S) and v₁ − v₂ = 1v₁ + (−1)v₂ ∈ L(S)
⇒   v ∈ T ⇒ v ∈ L(S)
⇒   T ⊆ L(S)        (a3)
Also
v₁ = ½(v₁ + v₂) + ½(v₁ − v₂) ∈ L(T) and v₂ = ½(v₁ + v₂) − ½(v₁ − v₂) ∈ L(T)
⇒   v ∈ S ⇒ v ∈ L(T)
⇒   S ⊆ L(T)
The above and (a3) ⇒ L(S) = L(T), i.e. [v₁, v₂] = [v₁ + v₂, v₁ − v₂].
(c) Let A = {v₁, v₂, …, vₙ} and B = A − {v_k} = {v₁, v₂, …, v_{k−1}, v_{k+1}, v_{k+2}, …, vₙ}.
We have v_k ∈ [v₁, v₂, …, v_{k−1}], for 2 ≤ k ≤ n,
i.e.   v_k = α₁v₁ + α₂v₂ + … + α_{k−1}v_{k−1}, for some scalars α₁, α₂, …, α_{k−1}        (a4)
Here B ⊆ A
⇒   L(B) ⊆ L(A)        (a5)
For any vᵢ ∈ A, i ≠ k, i.e. vᵢ ≠ v_k, we have vᵢ ∈ B ⊆ L(B).
By (a4), v_k ∈ A is a LC of the vectors v₁, v₂, …, v_{k−1} in B, i.e. v_k ∈ L(B). Thus
v ∈ A ⇒ v ∈ L(B)
⇒   A ⊆ L(B), i.e. L(A) ⊆ L(L(B)) = L(B)        (a6)
(a5) and (a6) ⇒ L(A) = L(B).

MCQ 3.3
Let (a, b, c) ∈ R³ belong to the space generated by the vectors (1, 1, 0), (2, 5, 2) and (3, 12, 6). Then the plane ax + by + cz = 0 passes through the point
(A) (2, −2, −3)
(B) (2, −2, 3)
(C) (2, 2, 3)
(D) None of the above

SAQ 3.3
Show that the following vectors generate R³:
(a) (1, 0, 1), (1, 1, 0) and (0, 1, 1)
(b) (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1, 1).

SAQ 3.4
Show that the xy-plane U = {(a, b, 0)} in R³ is generated by the vectors u = (1, 2, 0) and v = (2, 3, 0).

SAQ 3.5
Show that the polynomials t² + t + 1, t + 2 and 2 generate the space of polynomials of degree ≤ 2.
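SAQ 3.5 becomes a finite check in coordinates. A sketch (the coefficient ordering is my own convention, not from the text): writing each polynomial by its coefficient vector, the three polynomials generate the space of degree ≤ 2 iff the 3×3 coefficient matrix has nonzero determinant.

```python
# SAQ 3.5 in coordinates (a sketch): coefficients listed as (constant, t, t^2).

p1 = (1, 1, 1)   # t^2 + t + 1
p2 = (2, 1, 0)   # t + 2
p3 = (2, 0, 0)   # 2

def det3(a, b, c):
    # cofactor expansion of a 3x3 determinant along the first row
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

assert det3(p1, p2, p3) != 0   # nonzero, so the polynomials generate the space
```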

3.3 Sum and direct sums of subspaces
Sum U + W
Let U and W be subspaces of a vector space V(F). Their sum U + W is defined as
U + W = {v ∈ V | v = u + w, u ∈ U, w ∈ W}        (3.3)
It means that U + W consists of all sums u + w where u ∈ U and w ∈ W. The above sum of two subspaces can be extended to n subspaces U₁, U₂, …, Uₙ of a vector space V(F), and we write
U₁ + … + Uₙ = {v ∈ V | v = u₁ + … + uₙ, uᵢ ∈ Uᵢ}

Theorem 3.3
Let U and W be subspaces of a vector space V(F). Then
(i) U and W are contained in U + W, i.e. U ⊆ U + W and W ⊆ U + W.
(ii) U + W is a subspace of V.
(iii) U + W = L(U ∪ W), i.e. U + W is the smallest subspace of V containing U ∪ W.
Hint. U is a subspace of V(F) ⇔ u₁, u₂ ∈ U, α ∈ F ⇒ αu₁ + u₂ ∈ U.
Proof. (i) Let u ∈ U. Since W is a subspace of V, 0 ∈ W. Then we write
u = u + 0 ∈ U + W        (3.4)
Thus u ∈ U ⇒ u ∈ U + W.
⇒   U ⊆ U + W
Similarly we can show that W ⊆ U + W.
(ii) Let U, W be subspaces of V(F). Let x ∈ U + W. Then by definition
x = u + w, for some u ∈ U and w ∈ W
  = u + w, for some u, w ∈ V,   ∵ U ⊆ V, W ⊆ V
⇒   x ∈ V,   ∵ V is a vector space
⇒   U + W ⊆ V
Let x, y ∈ U + W and α ∈ F. Then
x = u₁ + w₁, y = u₂ + w₂, for some u₁, u₂ ∈ U and w₁, w₂ ∈ W
⇒   αx + y = α(u₁ + w₁) + (u₂ + w₂) = (αu₁ + u₂) + (αw₁ + w₂)
Now U is a subspace and u₁, u₂ ∈ U, α ∈ F ⇒ αu₁ + u₂ ∈ U. Similarly αw₁ + w₂ ∈ W. Then it follows that
αx + y ∈ U + W, for x, y ∈ U + W, α ∈ F
⇒   U + W is a subspace of V(F)
(iii) Let v ∈ U + W. ⇒ v = x + y, where x ∈ U, y ∈ W, i.e. x, y ∈ U ∪ W.
⇒   v = 1·x + 1·y = a LC of x, y ∈ U ∪ W
⇒   v ∈ L(U ∪ W)
Thus v ∈ U + W ⇒ v ∈ L(U ∪ W).
⇒   U + W ⊆ L(U ∪ W)        (3.5)
Let v ∈ L(U ∪ W), i.e. v is a finite LC of U ∪ W. Then for some αᵢ, βⱼ ∈ F and uᵢ ∈ U, wⱼ ∈ W, 1 ≤ i ≤ m, 1 ≤ j ≤ n, we have
v = α₁u₁ + α₂u₂ + … + αₘuₘ + β₁w₁ + β₂w₂ + … + βₙwₙ
Now U, W being subspaces, αᵢ ∈ F, uᵢ ∈ U and βⱼ ∈ F, wⱼ ∈ W ⇒ (α₁u₁ + … + αₘuₘ) ∈ U and (β₁w₁ + … + βₙwₙ) ∈ W
⇒   v = (α₁u₁ + … + αₘuₘ) + (β₁w₁ + … + βₙwₙ) ∈ U + W
Thus v ∈ L(U ∪ W) ⇒ v ∈ U + W
⇒   L(U ∪ W) ⊆ U + W        (3.6)
(3.5) and (3.6) ⇒ U + W = L(U ∪ W)
QED

Problem 3.9
Let A and B be two nonempty subsets of a vector space V. Then prove that
(a) L(A ∩ B) ⊆ L(A) ∩ L(B) and (b) L(A ∪ B) = L(A) + L(B).
Solution. Let A, B be nonempty subsets of a vector space V(F).
(a) We have A ∩ B ⊆ A and A ∩ B ⊆ B
⇒   L(A ∩ B) ⊆ L(A) and L(A ∩ B) ⊆ L(B), by Theorem 3.2(i)
⇒   L(A ∩ B) ⊆ L(A) ∩ L(B)
(b) For v ∈ L(A ∪ B), v is a LC of a finite number of elements of A ∪ B:
v = α₁u₁ + … + αₘuₘ + β₁w₁ + β₂w₂ + … + βₙwₙ, αᵢ, βⱼ ∈ F, uᵢ ∈ A, wⱼ ∈ B
⇒   v = ∑ᵢ₌₁ᵐ αᵢuᵢ + ∑ⱼ₌₁ⁿ βⱼwⱼ ∈ L(A) + L(B),   ∵ ∑ᵢ₌₁ᵐ αᵢuᵢ ∈ L(A) etc.
⇒   L(A ∪ B) ⊆ L(A) + L(B)        (a1)
Now A ⊆ A ∪ B, B ⊆ A ∪ B ⇒ L(A) ⊆ L(A ∪ B), L(B) ⊆ L(A ∪ B)
⇒   L(A) + L(B) ⊆ L(A ∪ B)        (a2)
(a1) and (a2) ⇒ L(A ∪ B) = L(A) + L(B)

Independent subspaces
The subspaces U₁, …, Uₙ of a vector space V(F) are called independent if
u₁ + … + uₙ = 0 and uᵢ ∈ Uᵢ ⇒ uᵢ = 0, ∀ i.        (3.7)
Disjoint subspaces
Two subspaces U and W of the vector space V(F) are said to be disjoint if their intersection is the zero subspace, i.e. U ∩ W = {0}.
Direct sum U ⊕ W of subspaces U and W
Let U and W be subspaces of a vector space V(F). Then V is called the direct sum of U and W, denoted by V = U ⊕ W:
V = U ⊕ W ⇔ V = U + W and U, W are independent.        (3.8)
Remark. The above definition can be generalized to n subspaces. Let U₁, …, Uₙ be a finite number of subspaces of a vector space V(F). Then V is called the direct sum of these subspaces, and we write
V = U₁ ⊕ … ⊕ Uₙ ⇔ V = U₁ + … + Uₙ and U₁, …, Uₙ are independent.        (3.9)
Remark. Herstein (1975) defines two types of sums: an internal direct sum and an external direct sum. The notation V = U₁ ⊕ … ⊕ Uₙ is used for an external direct sum of the subspaces U₁, …, Uₙ. Later on he shows that the internal direct sum of U₁, U₂, …, Uₙ is isomorphic to the external direct sum of U₁, U₂, …, Uₙ. Our approach deviates slightly from that of Herstein.

Theorem 3.4
Two subspaces U and W of a vector space V(F) are independent if and only if U ∩ W = {0}.
Proof. Let U and W be independent, and let x ∈ U ∩ W. Then x ∈ U and x ∈ W. Since W is a subspace, −x ∈ W. Then
x + (−x) = 0, x ∈ U, −x ∈ W
⇒   x = −x = 0,   ∵ U, W are independent
⇒   U ∩ W = {0}.
Conversely, let U ∩ W = {0}, and let u + w = 0 with u ∈ U, w ∈ W. Then u = −w ∈ W, so u ∈ U ∩ W = {0}, giving u = 0 and hence w = 0. Thus U and W are independent.
QED

Theorem 3.5
Let U and W be subspaces of a vector space V(F). Then
V = U ⊕ W ⇔ V = U + W and U ∩ W = {0}.        (3.10)
Proof. By definition,
V = U ⊕ W ⇔ V = U + W and U, W are independent
          ⇔ V = U + W and U ∩ W = {0}, by Theorem 3.4
QED

Problem 3.10
Let U = the x-axis and W = the y-axis be subspaces of the vector space R³. Show that U ⊕ W = U + W.
Solution.
U + W = x-axis + y-axis = {(x, 0, 0) | x ∈ R} + {(0, y, 0) | y ∈ R}
      = {(x, 0, 0) + (0, y, 0) | x, y ∈ R} = {(x, y, 0) | x, y ∈ R} = the xy-plane
Let (a, b, c) ∈ U ∩ W. ⇒ (a, b, c) ∈ U and (a, b, c) ∈ W.
Now (a, b, c) ∈ U ⇒ b = c = 0 and (a, b, c) ∈ W ⇒ a = 0 = c
⇒   a = b = c = 0
⇒   U ∩ W = {(0, 0, 0)} = {0}.
Then U ⊕ W = U + W.

Problem 3.11
Let U and W be subspaces of the vector space R³ such that
U = {(x, y, z) ∈ R³ | z = x + y}   and   W = {(x, y, z) ∈ R³ | x = y = z}.
Show that R³ = U ⊕ W.
Hint. Show that R³ = U + W and U ∩ W = {(0, 0, 0)} = {0}. To find the decomposition, write (x, y, z) = (a, b, a + b) + (c, c, c) = (a + c, b + c, a + b + c), equate a + c = x, b + c = y, a + b + c = z, and solve for a, b, c in terms of x, y, z.
Solution. Let (x, y, z) ∈ R³. We have
(x, y, z) = (z − y, z − x, 2z − x − y) + (x + y − z, x + y − z, x + y − z)
          = (some element of U) + (some element of W)
⇒   R³ = U + W        (a1)
Let (x, y, z) ∈ U ∩ W.
⇒   (x, y, z) ∈ U and (x, y, z) ∈ W
By the definitions of U and W, we have z = x + y and x = y = z.
⇒   x = y = z = 0
⇒   U ∩ W = {(0, 0, 0)} = {0}        (a2)
(a1) and (a2) ⇒ R³ = U ⊕ W.
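The decomposition used in Problem 3.11 can be tested numerically. A sketch (the function name and the sample point are my own): every (x, y, z) splits into a U-part, whose third coordinate equals the sum of the first two, plus a W-part, whose coordinates are all equal.

```python
# Problem 3.11's U + W decomposition of R^3, checked on a sample point.

def split(x, y, z):
    u = (z - y, z - x, 2 * z - x - y)        # lies in U: u[2] == u[0] + u[1]
    w = (x + y - z,) * 3                     # lies in W: all entries equal
    return u, w

u, w = split(4, 7, 9)
assert u[2] == u[0] + u[1]                               # u is in U
assert w[0] == w[1] == w[2]                              # w is in W
assert tuple(a + b for a, b in zip(u, w)) == (4, 7, 9)   # they sum back
```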

Problem 3.12
Let S = [(1, −1, 0), (1, 0, 2)] and T = [(0, 1, 0), (0, 1, 2)]. Determine the subspaces S ∩ T and S + T.
Solution. Let A = {(1, −1, 0), (1, 0, 2)} and B = {(0, 1, 0), (0, 1, 2)}. Then S = [A] and T = [B]. Now
S = {a(1, −1, 0) + b(1, 0, 2) | a, b ∈ R} = {(a + b, −a, 2b) | a, b ∈ R}        (a1)
T = {c(0, 1, 0) + d(0, 1, 2) | c, d ∈ R} = {(0, c + d, 2d) | c, d ∈ R}        (a2)
Let (x, y, z) ∈ S ∩ T.
⇒   (x, y, z) ∈ S and (x, y, z) ∈ T
⇒   (x, y, z) = (a + b, −a, 2b), for some a, b ∈ R, by (a1)
and (x, y, z) = (0, c + d, 2d), for some c, d ∈ R, by (a2)
⇒   (a + b, −a, 2b) = (0, c + d, 2d)
⇒   a + b = 0, i.e. b = −a
⇒   (x, y, z) = (0, −a, −2a) = a(0, −1, −2), a ∈ R
⇒   S ∩ T = {(x, y, z)} = {a(0, −1, −2) | a ∈ R} = [(0, −1, −2)]
Now
S + T = [A] + [B] = [A ∪ B] = [(1, −1, 0), (1, 0, 2), (0, 1, 0), (0, 1, 2)]
Here
(0, 1, 2) = −1(1, −1, 0) + 1(1, 0, 2) + 0(0, 1, 0),
i.e.   (0, 1, 2) ∈ [(1, −1, 0), (1, 0, 2), (0, 1, 0)]
⇒   S + T = [(1, −1, 0), (1, 0, 2), (0, 1, 0)].
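The redundancy of the fourth generator of S + T can be confirmed by a rank computation. A sketch in exact arithmetic (the `rank` helper is my own, not from the text):

```python
# Problem 3.12 checked by rank: (0,1,2) = -1(1,-1,0) + 1(1,0,2) is redundant,
# so the first three generators already span S + T.

from fractions import Fraction as Fr

def rank(rows):
    rows = [[Fr(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

gens = [(1, -1, 0), (1, 0, 2), (0, 1, 0), (0, 1, 2)]
assert rank(gens) == rank(gens[:3]) == 3   # dropping (0,1,2) changes nothing
```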

Problem 3.13
Let V be the vector space of all functions from the real field R into R. If U is the subspace of even functions and W the subspace of odd functions, show that V = U ⊕ W.
Hint. f(x) is even ⇔ f(−x) = f(x), and f(x) is odd ⇔ f(−x) = −f(x).
Solution. Let the subspaces U and W of the vector space V be given by
U = {f(x) | f(−x) = f(x), ∀ x ∈ R} and W = {f(x) | f(−x) = −f(x), ∀ x ∈ R}
For any f(x) ∈ V, we write
f(x) = ½[f(x) + f(−x)] + ½[f(x) − f(−x)] = g(x) + h(x), say        (a1)
Here
g(x) = ½[f(x) + f(−x)] ⇒ g(−x) = g(x) and h(x) = ½[f(x) − f(−x)] ⇒ h(−x) = −h(x)
⇒   g(x) is an even function and h(x) is an odd function
⇒   g(x) ∈ U and h(x) ∈ W
Then (a1) ⇒ V = U + W        (a2)
Let k(x) ∈ U ∩ W. Then k(x) ∈ U and k(x) ∈ W.
⇒   k(−x) = k(x) and k(−x) = −k(x)
⇒   k(x) = 0, ∀ x ∈ R
⇒   U ∩ W = {0}        (a3)
Then (a2) and (a3) ⇒ V = U ⊕ W.
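The even/odd splitting above is easy to exercise on a concrete function. A sketch (the sample polynomial f is an arbitrary choice of mine, not from the text):

```python
# Even/odd decomposition of an arbitrary function f: R -> R.

def split(f):
    g = lambda x: (f(x) + f(-x)) / 2         # even part
    h = lambda x: (f(x) - f(-x)) / 2         # odd part
    return g, h

f = lambda x: x**3 + x**2 + 1
g, h = split(f)

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert g(-x) == g(x)                      # g is even
    assert h(-x) == -h(x)                     # h is odd
    assert g(x) + h(x) == f(x)                # f = g + h
```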

MCQ 3.4
Let U be a subspace of a vector space V. Then
(A) U + U = 2U
(B) U + U = U
(C) U + V = V
(D) U + {0} = V

MCQ 3.5
Let U, V and W be the subspaces of R³ defined by
U = {(x, y, z) | x + y + z = 0}, V = {(x, y, z) | y = z} and W = {(x, 0, 0) | x ∈ R}.
Consider the statements:
(a) U ⊕ V = U + V
(b) U ⊕ W = U + W
(c) V ⊕ W ≠ V + W
Choose the true statement from the following:
(A) Only (a) is true.
(B) Only (a) is false.
(C) All are true.
(D) Only (c) is false.

SAQ 3.6
Let U, W be subspaces of a vector space V. Prove that U + W is a direct sum iff u + w = 0 (u ∈ U, w ∈ W) implies u = w = 0.

SAQ 3.7
Let U and W be the subspaces of V₃ defined by U = {(x, y, z) | x = y = z} and W = {(0, y, z) | y, z ∈ R}. Show that V₃ = U ⊕ W.

SUMMARY
The concept of the linear span of a subset of a vector space is explained with illustrations. Thereafter the sum and the direct sum of subspaces of a vector space are defined. In the case of two subspaces, it is shown that these sums are equivalent if the intersection of the spaces is the null space.

KEY WORDS
Linear span, independent subspaces, sum of subspaces, direct sum of subspaces.


UNIT 03-04: LINEAR INDEPENDENCE


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of linear independence of vectors of a vector space
Apply it to identify such vectors and sets

INTRODUCTION
An algebraic structure such as a vector space is basically a set of elements, and the properties of a vector space are intrinsically tied to its elements. Handling every member in order to extract useful results is out of the question. Therefore we look for a privileged class of elements which can predict information about their parent set, the vector space. In this unit we explore such members, leading to the notion of linear independence in a vector space.

4.1 Linear independence of vectors
Let V be a vector space over F. A set of vectors v₁, v₂, …, vₙ in V is said to be linearly dependent (LD) over F if there exist scalars α₁, α₂, …, αₙ ∈ F, not all zero, such that
α₁v₁ + α₂v₂ + … + αₙvₙ = 0 = zero vector        (4.1)
If (4.1) ⇒ all αᵢ = 0 only, then the vectors v₁, v₂, …, vₙ are linearly independent (LI).

Remark
(i) v₁, v₂, …, vₙ ∈ V are LD over F ⇔ the set {v₁, v₂, …, vₙ} is LD over F.
(ii) v₁, v₂, …, vₙ ∈ V are LI over F ⇔ the set {v₁, v₂, …, vₙ} is LI over F.
(iii) v₁, v₂, …, vₙ are LD if some nontrivial linear combination (LC) of them is the zero vector.
For an infinite subset B of a vector space, we say that B is LI iff every finite subset of B is LI. Hence an infinite subset of a vector space is LD if it has a finite LD subset.

Problem 4.1
If F is the field of real numbers, prove that the vectors (1, 1, 0, 0), (0, 1, −1, 0) and (0, 0, 0, 3) in F⁽⁴⁾ are linearly independent.

Hint. Denote the vectors by a, b, c. Write αa + βb + γc = (0, 0, 0, 0) and use the equality of n-tuples. Show that α = β = γ = 0 ∈ R.
Solution. Let a = (1, 1, 0, 0), b = (0, 1, −1, 0), c = (0, 0, 0, 3) and let α, β, γ ∈ R. Assume that
αa + βb + γc = 0, where 0 = (0, 0, 0, 0) ∈ F⁽⁴⁾        (a1)
⇒   α(1, 1, 0, 0) + β(0, 1, −1, 0) + γ(0, 0, 0, 3) = (0, 0, 0, 0)
⇒   (α, α, 0, 0) + (0, β, −β, 0) + (0, 0, 0, 3γ) = (0, 0, 0, 0)
⇒   (α, α + β, −β, 3γ) = (0, 0, 0, 0)
⇒   α = 0, α + β = 0, −β = 0, 3γ = 0, by the equality of vectors
⇒   α = 0, β = 0, γ = 0.        (a2)
Hence (a1) ⇒ (a2), i.e. the given vectors are LI.
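Problem 4.1 can also be settled by a rank computation. A sketch in exact arithmetic (the `rank` helper is my own, not from the text): stack the vectors as rows and row-reduce; full row rank means only the trivial combination gives the zero vector.

```python
# Problem 4.1 numerically: three vectors in R^4 are LI iff their row rank is 3.

from fractions import Fraction as Fr

def rank(rows):
    rows = [[Fr(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

vecs = [(1, 1, 0, 0), (0, 1, -1, 0), (0, 0, 0, 3)]
assert rank(vecs) == len(vecs)   # full rank, hence linearly independent
```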

Problem 4.2
Prove that (1, 0, 0), (0, 1, 0), (0, 0, 1) are LI and (1, 1, 0), (3, 1, 3), (5, 3, 3) are LD over the field R.
Solution. (i) Let α, β, γ ∈ R. Consider
α(1, 0, 0) + β(0, 1, 0) + γ(0, 0, 1) = (0, 0, 0) = the zero vector in R³
⇒   (α, 0, 0) + (0, β, 0) + (0, 0, γ) = (0, 0, 0)
i.e.   (α, β, γ) = (0, 0, 0)
⇒   α = 0 = β = γ
Hence the vectors (1, 0, 0), (0, 1, 0), (0, 0, 1) are LI.
(ii) Let
α(1, 1, 0) + β(3, 1, 3) + γ(5, 3, 3) = (0, 0, 0)        (a1)
⇒   (α + 3β + 5γ, α + β + 3γ, 3β + 3γ) = (0, 0, 0)
⇒   α + 3β + 5γ = 0, α + β + 3γ = 0, 3β + 3γ = 0
⇒   β = −γ, α = −2γ
For γ = 1, we get α = −2, β = −1. Thus there are nonzero scalars α, β, γ satisfying (a1). Hence (1, 1, 0), (3, 1, 3), (5, 3, 3) are LD.

0, D  E  3J E

Ÿ

For J

(0,0,0)

Problem 4.3 If V (R ) be a vector space of 2 u 3 matrices over R, then show that the matrices A, B, C given below are LI:

A Solution. Let

ª2 1  1 º «3  2 4», B ¬ ¼ xA  yB  zC

0

ª1 1  3 º « 2 0 5», C ¬ ¼

ª 4  1 2º «1  2 3 ». ¬ ¼

2u 3 zero matrix, x, y, z  R

Ÿ

ª 4  1 2º ª0 0 0 º ª 1 1  3º ª2 1  1º  z« x«  y« » » « » » ¬1  2 3¼ ¬0 0 0 ¼ ¬ 2 0 5 ¼ ¬3  2 4 ¼

Ÿ

ª2 x 1x  1x º ª 1 y 1y  3 y º ª4 z  1z 2 z º ª0 0 0º «3x  2 x 4 x »  « 2 y 0 5 y »  «1z  2 z 3 z » «0 0 0» ¼ ¼ ¬ ¼ ¬ ¬ ¼ ¬

Ÿ

ª2 x  y  4 z x  y  z  x  3 y  2 z º «3x  2 y  z  2 x  2 z 4 x  5 y  3z » ¬ ¼

2x  y  4z

Ÿ

x yz

0 0

 x  3y  2z

3x  2 y  z  2x  2z

0 0 0

4 x  5 y  3z From the last but equation, we get z

 2x  y

0, 2 x  y

Ÿ

ª0 0 0 º «0 0 0 » ¬ ¼

0

 x. Substituting this value in the remaining equations, 0,  3x  3 y x 0

y

0, 2 x  2 y

0, x  5 y

0

z

Hence the matrices A, B, C are LI. Problem 4.4

Let V be the vector space of functions from R into R. Show that the functions f, g, h ∈ V are linearly independent, where f(t) = e^t, g(t) = sin t, h(t) = t.
Solution. Consider the linear combination of the functions
xf(t) + yg(t) + zh(t) = 0, i.e. x e^t + y sin t + z t = 0, x, y, z ∈ R        (a1)
Here the zero on the right side is the zero function 0(t).
For t = 0:      x e⁰ + y(0) + z(0) = 0, i.e. x = 0
For t = π/2:   x e^{π/2} + y + z(π/2) = 0
For t = π:      x e^π + z π = 0
The above three equations ⇒ x = 0 = y = z.
Hence the functions are linearly independent.

Problem 4.5
Show that the vectors (1, 2, 3), (5, 3, 8) and (3, −1, 2) are linearly dependent in R³.
Solution. Consider a linear combination of the given vectors:
x(1, 2, 3) + y(5, 3, 8) + z(3, −1, 2) = 0 = (0, 0, 0)
⇒   (x, 2x, 3x) + (5y, 3y, 8y) + (3z, −z, 2z) = (0, 0, 0)
⇒   (x + 5y + 3z, 2x + 3y − z, 3x + 8y + 2z) = (0, 0, 0)
⇒   x + 5y + 3z = 0        (a1)
     2x + 3y − z = 0        (a2)
     3x + 8y + 2z = 0        (a3)
Applying 2(a1) − (a2):   7y + 7z = 0
          3(a1) − (a3):   7y + 7z = 0
Hence the equations (a1) to (a3) have the solutions
x = 2a, y = −a, z = a, a(≠ 0) ∈ R
Hence the given vectors are linearly dependent.

Problem 4.6

If v₁, v₂, …, vₙ are LI vectors of a vector space V over R, then prove that vᵢ ≠ 0 for each i.
Solution. Let v₁, v₂, …, vₙ be LI. We apply the method of contradiction. If possible, suppose that v₁ = 0. Consider the LC
1v₁ + 0v₂ + … + 0vₙ = 0 + 0 + … + 0 = 0,   ∵ 1v₁ = 1(0) = 0
But 1 ≠ 0. Hence by definition, v₁, v₂, …, vₙ are LD. This is a contradiction to the hypothesis. Hence we must have v₁ ≠ 0. Similarly one can prove that v₂ ≠ 0, …, vₙ ≠ 0.
⇒   vᵢ ≠ 0 for each i.
Remark. The above example shows that any set containing the zero vector is LD.

MCQ 4.1
Let V be a vector space and v ∈ V. Consider the statements:
(a) {v} is LI ⇔ v ≠ 0, and
(b) {v} is LI ⇔ v = 0.
Choose the true statement from the following:
(A) Both the statements are false
(B) Only (a) is true
(C) Only (b) is true
(D) None of the above.

MCQ 4.2
The vectors x = (a, b) and y = (c, d) of the vector space R² are linearly dependent if
(A) ab = cd
(B) ac = bd
(C) ad = bc
(D) None of these.

Theorem 4.1

Two vectors v₁, v₂ are linearly dependent if and only if one of them is a scalar multiple of the other.
Proof. Let v₁, v₂ be LD.
⇒   αv₁ + βv₂ = 0 = zero vector, for some scalar α ≠ 0 or β ≠ 0
⇒   v₁ = −(β/α)v₂, for α ≠ 0, or v₂ = −(α/β)v₁, for β ≠ 0
This proves the necessary part.
Conversely, let v₂ = λv₁ for a scalar λ. Then
λv₁ + (−1)v₂ = 0 and the scalar −1 ≠ 0
Hence v₁, v₂ are linearly dependent.
QED

SAQ 4.1
Prove that vectors v₁, v₂, v₃ are LD ⇔ one of them is a LC of the other two.
Remark. Let V be a vector space over a field F and K be a subfield of F. Then V is a vector space over K. Let S ⊆ V. Then
S is LD over K ⇒ S is LD over F
and
S is LI over F ⇒ S is LI over K
The converse is not necessarily true. For example, C is a vector space over C. Since i·1 + (−1)·i = 0, the vectors 1, i are LD over C. But they are not LD over R, ∵ x·1 + y·i = 0 for x, y ∈ R ⇒ x = 0 = y only.

MCQ 4.3
Consider the vectors u = (1 + i, −2i) and v = (1, −1 − i) in C². Choose the true statement from the following:
(A) u and v are LD over C but not over R
(B) u and v are LD over R but not over C
(C) u and v are LI over C
(D) u and v are LI over R but not over C

Theorem 4.2

The vectors (x₁, x₂, x₃), (y₁, y₂, y₃), (z₁, z₂, z₃) in R³ are LI ⇔
| x₁ x₂ x₃ |
| y₁ y₂ y₃ | ≠ 0.
| z₁ z₂ z₃ |
Proof. The vectors (x₁, x₂, x₃), (y₁, y₂, y₃), (z₁, z₂, z₃) in R³ are LI
⇔ α(x₁, x₂, x₃) + β(y₁, y₂, y₃) + γ(z₁, z₂, z₃) = (0, 0, 0) ⇒ α = 0 = β = γ in R
⇔ the equations αx₁ + βy₁ + γz₁ = 0, αx₂ + βy₂ + γz₂ = 0, αx₃ + βy₃ + γz₃ = 0 have only the trivial solution α = 0, β = 0, γ = 0
⇔
| x₁ x₂ x₃ |
| y₁ y₂ y₃ | ≠ 0.
| z₁ z₂ z₃ |
QED
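Theorem 4.2's determinant criterion is immediate to apply. A sketch using the vectors of Problem 4.2 (my own choice of test data): the standard basis gives determinant 1 (independent), while the dependent triple gives determinant 0.

```python
# The determinant test of Theorem 4.2, applied to the vectors of Problem 4.2.

def det3(a, b, c):
    # cofactor expansion of a 3x3 determinant along the first row
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

assert det3((1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1   # LI
assert det3((1, 1, 0), (3, 1, 3), (5, 3, 3)) == 0   # LD
```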

SAQ 4.2
Show that {(x₁, x₂, x₃), (y₁, y₂, y₃), (z₁, z₂, z₃)} in V₃ is LD iff
| x₁ x₂ x₃ |
| y₁ y₂ y₃ | = 0.
| z₁ z₂ z₃ |

SAQ 4.3
Find the value of k such that the set of vectors (1, 2, k), (3, 0, −1), (5, 4, 3) of V₃(R) is LD.

Theorem 4.3

Let V be a vector space and v, v1, v2, ..., vn ∈ V.
(i) If v is a LC of v1, ..., vn, i.e. v ∈ [v1, ..., vn], then {v, v1, ..., vn} is LD.
(ii) If {v1, ..., vn} is LI and v ∉ [v1, ..., vn], then {v, v1, ..., vn} is LI.
(iii) Any subset of a LI set is also LI.
(iv) Any superset of a LD set is also LD.

Proof. (i) Let v ∈ [v1, ..., vn] in V(F). Then
v = α1 v1 + α2 v2 + ... + αn vn, for some αi ∈ F, 1 ≤ i ≤ n
⇒ (−1)v + α1 v1 + α2 v2 + ... + αn vn = 0 (zero vector), and the scalar −1 ≠ 0.
Hence the set {v, v1, ..., vn} is LD.
(ii) Let {v1, ..., vn} be LI and v ∉ [v1, ..., vn]. Consider
αv + α1 v1 + α2 v2 + ... + αn vn = 0 (zero vector),   (4.2)
where α, αi are scalars, 1 ≤ i ≤ n. If α ≠ 0, then (4.2) gives
v = (−α⁻¹α1) v1 + (−α⁻¹α2) v2 + ... + (−α⁻¹αn) vn ∈ [v1, ..., vn],
which is a contradiction. Hence α = 0. Using this in (4.2), we get
α1 v1 + α2 v2 + ... + αn vn = 0 ⇒ α1 = α2 = ... = αn = 0, since {v1, ..., vn} is LI.
Thus (4.2) forces α = 0 = α1 = α2 = ... = αn only. Hence the set {v, v1, ..., vn} is LI.
(iii) Let S be any LI subset of a vector space V and let T be any subset of S. If T is finite, then it is a finite subset of the LI set S and hence LI. If T is infinite, then to prove it is LI we must show that every finite subset of it is LI. Let A be any finite subset of T. Since T ⊆ S, the set A is a finite subset of the LI set S, so A is LI. As A was an arbitrary finite subset of T, every finite subset of T is LI, i.e. T is LI.
(iv) Let A be any LD subset of a vector space V and let B be any superset of A in V, i.e. A ⊆ B ⊆ V. Suppose that B is LI. Then by (iii), A is LI, a contradiction. Hence the supposition "B is LI" is wrong. Thus B is LD, i.e. every superset of a LD set is LD.

QED

Problem 4.7

If x, y, z are LI vectors of a vector space V, then prove that x + y, y + z, z + x are LI.
Solution. Let x, y, z be LI vectors of a vector space V(F). For α, β, γ ∈ F, consider
α(x + y) + β(y + z) + γ(z + x) = 0 (zero vector)   (a1)
⇒ (α + γ)x + (α + β)y + (β + γ)z = 0
⇒ α + γ = 0, α + β = 0, β + γ = 0, since x, y, z are LI
⇒ α = 0 = β = γ.
Thus (a1) admits only the trivial solution, i.e. x + y, y + z, z + x are LI.

Problem 4.8

Let P, the set of all polynomials with real coefficients, be a real vector space under addition and scalar multiplication of polynomials. Prove that the infinite set {1, x, x², x³, ...} is LI in P.
Solution. For any n ∈ N and scalars αi, consider
α0 · 1 + α1 x + α2 x² + ... + αn xⁿ = 0 (zero polynomial)
⇒ α0 = α1 = α2 = ... = αn = 0, by definition of the zero polynomial
⇒ {1, x, x², ..., xⁿ} is a LI set in P, for every n ∈ N.
Hence every finite subset of {1, x, x², ...} is LI. Thus {1, x, x², ...} is a LI subset of P.

Problem 4.9

Verify the LI of the set {e^x, e^{2x}} in C^∞(−∞, ∞).
Hint. C^∞(−∞, ∞) = { f | f : R → R is a function derivable to all orders }.
Solution. By definition, C^∞(−∞, ∞) stands for the set of all functions from R to R that are derivable to all orders (see Hint). This is a vector space under point-wise addition and scalar multiplication of functions. Consider
a e^x + b e^{2x} = 0 (zero function), ∀ x ∈ R, for some a, b ∈ R.   (a1)
Differentiating with respect to x,
a e^x + 2b e^{2x} = 0, ∀ x ∈ R.
Subtracting (a1) from the above equation, b e^{2x} = 0 ∀ x ∈ R, i.e. b = 0. Using b = 0 in (a1), a e^x = 0 ∀ x ∈ R, i.e. a = 0. Thus (a1) ⇒ a = 0 = b only.
Hence {e^x, e^{2x}} is LI in C^∞.

Problem 4.10

Prove that the set of functions {x, |x|} is LI in the real vector space of continuous functions defined on (−1, 1).
Solution. The space
C(−1, 1) = { f | f : (−1, 1) → R is continuous }
is a real vector space under addition and scalar multiplication of continuous functions. Here x, |x| ∈ C(−1, 1). For a, b ∈ R, consider the LC
a x + b |x| = 0 (zero function), ∀ x ∈ (−1, 1).
Taking x = 1/2 and x = −1/2 in the above, we get
a/2 + b/2 = 0, i.e. a + b = 0, and −a/2 + b/2 = 0, i.e. −a + b = 0.
Then a = 0 = b only, and hence {x, |x|} is a LI set.
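The arguments in Problems 4.9 and 4.10 share one strategy: evaluate the supposed identity a·f1 + b·f2 = 0 at chosen points and show that the resulting linear system forces a = b = 0. A small sketch of that point-sampling check follows (names like `det_of_samples` are illustrative, not from the text). Note the implication runs one way only: a nonzero sample determinant proves LI, while a zero one is inconclusive.

```python
import math

def det_of_samples(f1, f2, x1, x2):
    """If a*f1 + b*f2 is the zero function, it vanishes at x1 and x2;
    a nonzero determinant of the 2x2 sample matrix forces a = b = 0."""
    return f1(x1) * f2(x2) - f2(x1) * f1(x2)

d_exp = det_of_samples(math.exp, lambda x: math.exp(2 * x), 0.0, 1.0)  # e^2 - e
d_abs = det_of_samples(lambda x: x, abs, 0.5, -0.5)                    # 0.5
```

Both determinants are nonzero, matching the conclusions of Problems 4.9 and 4.10.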

Theorem 4.4

Let! V !be!a!vector!space!over! F . !If! v1 ,..., vn  V !are!linearly!independent,!then!every!element!in their!linear!span!has!a!unique!representation!in!the!form! D1v1  ...  D n vn , !where!each! D i  F . Proof. By definition, every element in the linear span is of the form

D1v1  ...  D n vn , where each D i  F Let

v

D1v1  ...  D n vn  L(V )

(4.3)

If possible, assume that there is another representation of v given by v E1v1  ...  E n vn D1v1  ...  D n vn

(4.3) and (4.4) Ÿ

(4.4)

E1v1  ...  E n vn

(D1v1  E1v1 )    (D n vn  E n vn )

Ÿ

(D1  E1 )v1  ...  (D n  E n )vn

Ÿ

But v1 ,..., vn are LI. Hence (4.5) Ÿ D1  E1 D1

Ÿ

0, by distributivity.

0,..., D n  E n E1 ,..., D n

0 (4.5)

0

En

Hence v  L(V ) has a unique representation.

QED

Theorem 4.5

Let V be a vector space over F. If v1, ..., vn ∈ V and v1 ≠ 0, then either they are linearly independent or some vk (k ≥ 2) is a linear combination of the preceding ones v1, ..., v_{k−1}.
Proof. Let v1, ..., vn ∈ V. There arise two possibilities:
(i) v1, ..., vn are LI, or (ii) v1, ..., vn are LD.
In case (i) there is nothing to prove. Hence assume (ii). Then
α1 v1 + ... + αn vn = 0,   (4.6)
where the αi ∈ F are not all zero. Since not all αi are zero, there exists a largest integer k such that αk ≠ 0; moreover k > 1, for if k = 1 then α1 v1 = 0 with α1 ≠ 0 would give v1 = 0, contrary to hypothesis. Thus
αi = 0 for i > k, i.e. α_{k+1} = α_{k+2} = ... = αn = 0.   (4.7)
Rewriting (4.6),
α1 v1 + ... + α_{k−1} v_{k−1} + αk vk + α_{k+1} v_{k+1} + ... + αn vn = 0
⇒ α1 v1 + ... + α_{k−1} v_{k−1} + αk vk = 0, by (4.7)
⇒ αk vk = −α1 v1 − ... − α_{k−1} v_{k−1}.
As αk ≠ 0, the inverse αk⁻¹ ∈ F exists. Multiplying the above identity by αk⁻¹, we get
αk⁻¹(αk vk) = (αk⁻¹ αk) vk = vk = (−αk⁻¹ α1) v1 + ... + (−αk⁻¹ α_{k−1}) v_{k−1}, since αk⁻¹ αk = 1, the identity element.
Hence vk is a linear combination of v1, ..., v_{k−1}.

Theorem 4.6

If v1, ..., vn in V have W as linear span and if v1, ..., vk (k ≤ n) are linearly independent, then we can find a subset of v1, ..., vn of the form v1, ..., vk, v_{i1}, ..., v_{ir} consisting of linearly independent elements whose linear span is also W.
Proof. If v1, ..., vn ∈ V are LI, then the theorem follows trivially. Hence assume that they are linearly dependent. Then by Thm 4.5 there is some vj among v1, ..., vn such that
vj = a linear combination of v1, ..., v_{j−1}.   (4.8)
Let v1, ..., vk be linearly independent. Then k < j (see Thm 4.5). Now consider the set of (n − 1) vectors
S = {v1, ..., vk, ..., v_{j−1}, v_{j+1}, ..., vn}. We have
L(S) ⊆ W.   (4.9)
Let w ∈ W. Then we write
w = a linear combination of v1, ..., vk, ..., v_{j−1}, vj, v_{j+1}, ..., vn
  = a linear combination of v1, ..., vk, ..., v_{j−1}, v_{j+1}, ..., vn, by (4.8)
⇒ w ∈ L(S).
Thus w ∈ W ⇒ w ∈ L(S), i.e. W ⊆ L(S). Combining this with (4.9), we get L(S) = W. Therefore, deleting vj from v1, ..., vn, we get a subset of (n − 1) vectors whose linear span is W. Continue this process of deleting one vector at a time until we reach a subset v1, ..., vk, v_{i1}, ..., v_{ir} whose linear span is W and in which no element is a linear combination of the preceding ones. Hence this set is linearly independent.
(Illustration of the index k appearing in Thm 4.5: if α1 = 1, α2 = 2, α3 = 0, α4 = 0, α5 = 1 and α6 = α7 = ... = αn = 0, then the largest k with αk ≠ 0 is k = 5.)

QED
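Theorem 4.6's pruning procedure has a direct numerical analogue: scan the ordered spanning set and keep only the vectors that enlarge the span. A sketch using NumPy's rank (an assumption — `largest_li_subset` is an illustrative name, not from the text):

```python
import numpy as np

def largest_li_subset(vectors):
    """Keep each vector that raises the rank, i.e. that is not a LC of
    the vectors kept so far; the result is LI with the same span."""
    kept = []
    for v in vectors:
        trial = np.array(kept + [list(v)], dtype=float)
        if np.linalg.matrix_rank(trial) == len(trial):
            kept.append(list(v))
    return kept

S = [(1, 1, 0), (0, 1, 1), (1, 0, -1), (1, 1, 1)]
A = largest_li_subset(S)   # drops (1, 0, -1) = (1,1,0) - (0,1,1)
```

On this set the procedure recovers by machine the subset that Problem 4.11 below finds by hand.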

Theorem 4.7

If V is a finite dimensional vector space, then it contains a finite set v1, ..., vn of linearly independent elements whose linear span is V.
Proof. Let V be a finite dimensional vector space. Then V is the linear span of a finite number of elements u1, ..., um. By Thm 4.6, we can find a subset S = {v1, ..., vn} of {u1, ..., um} such that the v's are linearly independent and L(S) = V.

QED

Problem 4.11

Let S = {(1, 1, 0), (0, 1, 1), (1, 0, −1), (1, 1, 1)} be an ordered set. (i) Show that S is LD. (ii) Locate one vector of S that belongs to the span of the previous ones. (iii) Find the largest LI subset of S whose span is [S].
Solution. We have S ⊆ V3.
(i) Consider S1 = {(1, 1, 0)}, S2 = {(1, 1, 0), (0, 1, 1)}, S3 = {(1, 1, 0), (0, 1, 1), (1, 0, −1)} and S4 = S.
About S1: it contains only one vector, which is non-zero, and hence it is LI.
About S2: it contains only two vectors, neither of which is a scalar multiple of the other, and hence it is LI.
About S3:
| 1  1   0 |
| 0  1   1 |  = 1(−1) − 1(−1) = 0
| 1  0  −1 |
⇒ S3 is LD, by Thm 4.2.
About S: since S is a superset of the LD set S3, it is LD.
(ii) It is easy to note that
(1, 0, −1) = 1(1, 1, 0) − 1(0, 1, 1)
⇒ (1, 0, −1) ∈ [ (1, 1, 0), (0, 1, 1) ].
Hence (1, 0, −1) is the vector in S which belongs to the span of the vectors (1, 1, 0) and (0, 1, 1). Therefore, we can delete (1, 0, −1) from the generating set of [S], i.e.
[S] = [(1, 1, 0), (0, 1, 1), (1, 0, −1), (1, 1, 1)] = [(1, 1, 0), (0, 1, 1), (1, 1, 1)].   (a1)
(iii) Now
| 1  1  0 |
| 0  1  1 |  = 1(0) − 1(−1) + 0 = 1 ≠ 0.
| 1  1  1 |
Hence A = {(1, 1, 0), (0, 1, 1), (1, 1, 1)} is a LI subset of S with [A] = [S].

Problem 4.12

Show that the ordered set S = {(1, 1, 2), (1, −1, 1), (1, 3, 3), (−1, 3, 0)} is LD and locate one of the vectors that belongs to the span of the previous ones. Find also the largest LI subset whose span is equal to [S].
Solution. Let
S = {(1, 1, 2), (1, −1, 1), (1, 3, 3), (−1, 3, 0)}.
We express (−1, 3, 0) as a LC of the vectors (1, 1, 2), (1, −1, 1), (1, 3, 3):
(−1, 3, 0) = a(1, 1, 2) + b(1, −1, 1) + c(1, 3, 3)   (a1)
            = (a + b + c, a − b + 3c, 2a + b + 3c)
⇒ a + b + c = −1, a − b + 3c = 3, 2a + b + 3c = 0.
Solving these equations, we get
b = −3/2 − a/2, c = 1/2 − a/2.
For a = 0, b = −3/2, c = 1/2. Then (a1) exhibits (−1, 3, 0) as a LC, i.e.
(−1, 3, 0) ∈ [(1, 1, 2), (1, −1, 1), (1, 3, 3)].   (a2)
Hence S is LD. Noting (a2), we can discard this vector and write
[S] = [(1, 1, 2), (1, −1, 1), (1, 3, 3)].

Now express (1, 3, 3) as a LC of (1, 1, 2), (1, −1, 1):
(1, 3, 3) = α(1, 1, 2) + β(1, −1, 1) = (α + β, α − β, 2α + β)
⇒ α + β = 1, α − β = 3, 2α + β = 3.
Solving these equations, we get α = 2, β = −1
⇒ (1, 3, 3) ∈ [(1, 1, 2), (1, −1, 1)]
⇒ [(1, 1, 2), (1, −1, 1), (1, 3, 3)] = [(1, 1, 2), (1, −1, 1)], i.e. [S] = [A],
where A = {(1, 1, 2), (1, −1, 1)}.
The set A is LI, since neither of its vectors is a scalar multiple of the other. Hence A = {(1, 1, 2), (1, −1, 1)} is the largest LI subset of S whose span is [S].

Problem 4.13

Let x1 = (1, 1, 0, 1), x2 = (1, 2, −1, 0), x3 = (1, 0, 1, 2) and x4 = (0, 1, 1, 1) be vectors in R4. Let P = {x1, x2, x4} and Q = {x1, x3, x4}. Then
(A) Only Q is linearly independent
(B) Only P is linearly independent
(C) Both P and Q are linearly independent
(D) Both P and Q are linearly dependent
(SET)
Solution: Here P is the set of vectors x1, x2 and x4, while Q has x1, x3 and x4 as its members. All these vectors xi belong to R4. We check whether P and Q are LI.
About P: Let
a1 x1 + b1 x2 + c1 x4 = 0
⇒ a1(1, 1, 0, 1) + b1(1, 2, −1, 0) + c1(0, 1, 1, 1) = (0, 0, 0, 0)
⇒ (a1 + b1, a1 + 2b1 + c1, −b1 + c1, a1 + c1) = (0, 0, 0, 0)
⇒ a1 = b1 = c1 = 0
⇒ the set P is LI, by definition.
About Q: Similarly we can show that
a2 x1 + b2 x3 + c2 x4 = 0 ⇒ a2 = b2 = c2 = 0
⇒ the set Q is LI.
Hence the correct option is (C).

MCQ 4.4

Let V(F) be a vector space. Consider the following statements:
(I) If α, β, γ are linearly independent vectors in V(F), then α + β, β + γ, γ + α are also linearly independent.
(II) A set of vectors A ⊆ V(F) which contains the zero vector is always linearly independent.
Then
(A) Both (I) and (II) are true
(B) Both (I) and (II) are false
(C) Only (I) is true
(D) Only (II) is true

MCQ 4.5
Let C(R) be the vector space of all continuous real functions over R. Then which of the following sets is linearly independent in C(R)?
(A) {1, cos 2x, sin² x}
(B) {1, e^x, e^{2x}}
(C) {1, x² + 2x + 1, x², (x + 2)²}
(D) {1, ln(1 + |x|), ln(1 + |x|)²}
(SET 2015)

MCQ 4.6
Let A and B be non-empty subsets of a vector space V. Suppose that A ⊆ B. Then
(A) If B is linearly independent, then so is A
(B) If B is linearly dependent, then so is A
(C) If A is linearly independent, then so is B
(D) If B is a generating set, then so is A
(SET 2013)

MCQ 4.7

Let n be an integer, n ≥ 3, and let u1, u2, ..., un be n linearly independent elements in a vector space over R. Set u0 = 0 and u_{n+1} = u1. Define
vi = ui + u_{i+1} and wi = u_{i−1} + ui for i = 1, 2, ..., n.
Then
(A) v1, v2, ..., vn are linearly independent if n = 2010
(B) v1, v2, ..., vn are linearly independent if n = 2011
(C) w1, w2, ..., wn are linearly independent if n = 2010
(D) w1, w2, ..., wn are linearly independent if n = 2011

MCQ 4.8

Let x = (x1, x2, x3), y = (y1, y2, y3) ∈ R3 be linearly independent. Let
δ1 = x2 y3 − y2 x3,  δ2 = x1 y3 − y1 x3,  δ3 = x1 y2 − y1 x2.
If V is the span of x, y, then
(A) V = {(u, v, w) : δ1 u + δ2 v + δ3 w = 0}
(B) V = {(u, v, w) : −δ1 u + δ2 v + δ3 w = 0}
(C) V = {(u, v, w) : δ1 u − δ2 v + δ3 w = 0}
(D) V = {(u, v, w) : δ1 u + δ2 v − δ3 w = 0}

SAQ 4.4

Show that the vectors (1, 0), (0, 1) are LI while the vectors (1, 0), (0, 1), (1, 1) are LD in V2.

SAQ 4.5
Show that the subset {(3, 4, −1), (1, 2, 0), (1, 0, −1)} of V3 is LD.

SAQ 4.6
If x, y, z are LI vectors, show that x + y, x − y, x − 2y + z are also LI.

SAQ 4.7
Let S = {(1, 1, 2), (1, −1, 1), (1, 3, 3), (−1, 3, 0), (1, 0, 1)}. Find the largest LI subset A of S.

(NET 2012)

SAQ 4.8

Prove that the four vectors α1 = (1, 0, 0), α2 = (0, 1, 0), α3 = (0, 0, 1), α4 = (1, 1, 1) in V3(C) form a LD set, but any three of them are LI.

SAQ 4.9
Determine whether u and v are LD if
(i) u = 2 − 5t + 6t² − t³, v = 3 + 2t − 4t² + 5t³
(ii) u = [ 1  2  −3 ]    v = [ 6  −5  4 ]
        [ 6  −5  4 ],        [ 1  2  −3 ]

SAQ 4.10
Let V be the vector space of polynomials of degree ≤ 3 over R. Determine whether u, v, w ∈ V are independent or dependent, where
u = t³ − 3t² + 5t + 1, v = t³ − t² + 8t + 2 and w = 2t³ − 4t² + 9t + 5.

SUMMARY
The linear independence of vectors in a vector space is explained. Some important consequences are proved and illustrated through solved examples.

KEY WORDS
Vector space, linear independence of vectors, linear dependence of vectors.

UNIT 04-05: BASIS OF A VECTOR SPACE


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of a basis of a vector space
Apply it to identify such a basis

INTRODUCTION
As noted in the introduction of the previous unit, the linearly independent vectors of a vector space play an important role in extracting the properties of the space. These independent vectors form a set that leads to the concept of a basis of the vector space. Remarkably, every one of the (typically infinitely many) members of a vector space can be written in terms of the members of a basis. We discuss the details about a basis in this unit.

5.1 Basis of a vector space
Definition of a basis. A subset B of a vector space V is called a basis of V if
(i) B is LI, and
(ii) L(B) = V.

Remark
(i) In the light of the definition of a basis, Thm 4.7 of the previous unit can be expressed as: if V is a finite dimensional vector space and if u1, u2, ..., um span V, then some subset of {u1, u2, ..., um} forms a basis of V.
(ii) Every element of V can be expressed as a linear combination of basis vectors.
(iii) Every vector space has a basis.

Problem 5.1

Show that B = {i, j, k} is a basis for V3(R), where i = (1, 0, 0) = e1, j = (0, 1, 0) = e2, k = (0, 0, 1) = e3.
Solution. Consider
αi + βj + γk = 0, α, β, γ ∈ R
⇒ α(1, 0, 0) + β(0, 1, 0) + γ(0, 0, 1) = (0, 0, 0)
⇒ (α, 0, 0) + (0, β, 0) + (0, 0, γ) = (0, 0, 0), i.e. (α, β, γ) = (0, 0, 0)
⇒ α = 0 = β = γ.   (a1)
Hence B is linearly independent. Now
V3 = {(x, y, z) | x, y, z ∈ R} = {x i + y j + z k | x, y, z ∈ R} = [i, j, k] = L(B).
Thus B generates V3. With this, (a1) implies that B is a basis for V3.
Remark. This B is called the standard basis for V3.

Problem 5.2
Prove that the set B1 = {(1, 1, 1), (1, −1, 1), (0, 1, 1)} is a basis of V3.
Solution. We have B1 ⊆ V3. Also
| 1   1  1 |
| 1  −1  1 |  = 1(−2) − 1(1) + 1(1) = −2 ≠ 0.
| 0   1  1 |
Hence the set B1 is a linearly independent subset of V3. Moreover,
[B1] ⊆ V3.   (a1)
For (x, y, z) ∈ V3, x, y, z ∈ R, consider
(x, y, z) = α(1, 1, 1) + β(1, −1, 1) + γ(0, 1, 1), α, β, γ ∈ R
⇒ (x, y, z) = (α + β, α − β + γ, α + β + γ)
⇒ α + β = x, α − β + γ = y, α + β + γ = z
⇒ α = (2x + y − z)/2, β = (z − y)/2, γ = z − x
⇒ (x, y, z) = ((2x + y − z)/2)(1, 1, 1) + ((z − y)/2)(1, −1, 1) + (z − x)(0, 1, 1) ∈ [B1].
This gives V3 ⊆ [B1]. Then (a1) ⇒ V3 = [B1]
⇒ B1 is a basis for V3.
Remark. There are different bases for the same vector space V3, but each of these bases has exactly three elements.
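The coordinate formulas found in Problem 5.2 can be cross-checked by solving the linear system M c = p, where the columns of M are the basis vectors of B1. A sketch assuming NumPy; the sample vector p is arbitrary:

```python
import numpy as np

# Columns of M are the vectors of B1 = {(1,1,1), (1,-1,1), (0,1,1)}.
M = np.array([[1, 1, 0],
              [1, -1, 1],
              [1, 1, 1]], dtype=float)
p = np.array([2.0, 3.0, 5.0])                 # (x, y, z) = (2, 3, 5)
alpha, beta, gamma = np.linalg.solve(M, p)    # coordinates in the basis B1
# Formulas of the problem: alpha = (2x+y-z)/2, beta = (z-y)/2, gamma = z-x
```

For p = (2, 3, 5) the formulas give (α, β, γ) = (1, 1, 3), and the solver agrees.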

Theorem 5.1
If v1, ..., vn form a basis of V (or span V) over F and if w1, ..., wm ∈ V are linearly independent over F, then m ≤ n.
Proof. Let Sn = {v1, ..., vn} be a basis of V. Then every element of V is expressed as a LC of v1, ..., vn. Hence
wm ∈ V ⇒ wm = a linear combination of v1, ..., vn.
Since Sn spans V, the LD vectors wm, v1, ..., vn also span V. Then by Thm 4.6, some proper subset of them, say
S_{n−1} = {wm, v_{i1}, ..., v_{ik}}, k ≤ n − 1,
forms a basis of V. Thus we started with the basis Sn containing n of the v's; in the basis S_{n−1}, one w has been added and at least one v deleted. Next, the LD vectors w_{m−1}, wm, v_{i1}, ..., v_{ik} will again span V; applying Thm 4.6 once more gives a basis of V of the form
S_{n−2} = {w_{m−1}, wm, v_{j1}, ..., v_{js}}, s ≤ n − 2.
(The deleted element is always a v, never a w, because the w's are linearly independent, so no w is a LC of the preceding w's.) Continuing this process, we ultimately obtain a basis of V of the form
S = {w2, w3, ..., wm, va, vb, ...}.   (5.1)
Now w1 ∈ V = [S]. The vectors w1, ..., wm are linearly independent, so w1 is not a linear combination of w2, ..., wm alone; hence the basis in (5.1) must still include at least one v. There are (m − 1) w's in (5.1), so
(m − 1) w's in (5.1) ≤ (n − 1) of the available v's
⇒ m − 1 ≤ n − 1, i.e. m ≤ n. QED

Theorem 5.2

If V is a finite dimensional vector space, then any two bases of V have the same number of elements.
Proof. Let v1, ..., vn and w1, ..., wm be two bases of V. Consider {v1, ..., vn} as a basis of V and w1, ..., wm as linearly independent vectors. Then m ≤ n, by Thm 5.1. Next consider {w1, ..., wm} as a basis of V and v1, ..., vn as linearly independent vectors. Then n ≤ m. Thus
m ≤ n and n ≤ m ⇒ m = n. QED

Theorem 5.3 (Extension theorem)

Let V be a finite dimensional vector space over the field F and let u1, ..., um ∈ V be linearly independent. Then we can find vectors u_{m+1}, ..., u_{m+r} ∈ V such that u1, ..., um, u_{m+1}, ..., u_{m+r} is a basis of V.
Proof. Let V be a finite dimensional vector space over F. Then V has a basis, say v1, ..., vn, and the vectors v1, ..., vn span V.
⇒ The vectors u1, ..., um, v1, ..., vn span V.
If this set of vectors is denoted by S, then by Thm 4.6 there exists a subset of S consisting of vectors u1, ..., um, v_{i1}, ..., v_{ir} which are linearly independent and span V. Denote
v_{i1} = u_{m+1}, v_{i2} = u_{m+2}, ..., v_{ir} = u_{m+r}.
Hence the LI vectors u1, ..., um, u_{m+1}, ..., u_{m+r} span V and form a basis of V.

QED

Problem 5.3
If a vector space V has a basis of n vectors, then prove that every set of p vectors with p > n is LD.
Solution. Let B = {v1, v2, ..., vn} be a set of n vectors forming a basis for a vector space V, and let {w1, w2, ..., wp} be any set of p vectors in V with p > n. Here V = L(B). Suppose that {w1, w2, ..., wp} is a LI subset of V. Then p ≤ n, by Thm 5.1, which is a contradiction. Hence the supposition "{w1, w2, ..., wp} is LI" is wrong. Thus {w1, w2, ..., wp} is LD, and so every set of p vectors (p > n) in V is LD.
Remark. From this problem it follows that if V has a basis containing n vectors, then any n + 1 vectors in V are LD. Any two bases of a finite dimensional vector space have the same number of vectors (Thm 5.2). Hence if V has a basis of n elements, then every other basis for V also has n elements. In particular, any basis of V3(R) has 3 vectors; in general, any basis of Vn has n vectors.

Problem 5.4
Show that the set S = {1, x, x², ..., xⁿ} is a basis for the vector space Pn of all polynomials of degree at most n.
Solution. Each polynomial in Pn can be written as a linear combination
c1 · 1 + c2 x + ... + cn x^{n−1} + c_{n+1} xⁿ   (a1)
of 1, x, ..., x^{n−1} and xⁿ. Also, if
c1 · 1 + c2 x + ... + cn x^{n−1} + c_{n+1} xⁿ = 0 (zero polynomial),
then c1 = c2 = ... = cn = c_{n+1} = 0. Thus the vectors in (a1) are linearly independent. Hence the set {1, x, x², ..., xⁿ} forms a basis of the vector space Pn.

Theorem 5.4 (Replacement theorem)
Let B = {x1, x2, ..., xn} be a basis of a vector space V over F and let
y = c1 x1 + c2 x2 + ... + c_{i−1} x_{i−1} + ci xi + c_{i+1} x_{i+1} + ... + cn xn   (5.2)
be a nonzero vector with ci ≠ 0. Then B′ = {x1, ..., x_{i−1}, y, x_{i+1}, ..., xn} forms a basis of V.
Proof. The theorem is established if it is shown that
(i) B′ is linearly independent, and
(ii) any u ∈ V can be expressed as a LC of the elements of B′.
We prove these as follows.
(i) Consider a linear combination of the vectors in B′:
a1 x1 + ... + a_{i−1} x_{i−1} + ai y + a_{i+1} x_{i+1} + ... + an xn = 0,   (5.3)
where the a's ∈ F. Inserting the value of y from (5.2) into (5.3), we get
a1 x1 + ... + a_{i−1} x_{i−1} + ai [c1 x1 + ... + c_{i−1} x_{i−1} + ci xi + c_{i+1} x_{i+1} + ... + cn xn] + a_{i+1} x_{i+1} + ... + an xn = 0
⇒ (a1 + ai c1) x1 + ... + (a_{i−1} + ai c_{i−1}) x_{i−1} + ai ci xi + (a_{i+1} + ai c_{i+1}) x_{i+1} + ... + (an + ai cn) xn = 0.
Since B is a basis, the vectors x1, ..., xn are linearly independent. Then the above implies
a1 + ai c1 = 0, ..., a_{i−1} + ai c_{i−1} = 0, ai ci = 0, a_{i+1} + ai c_{i+1} = 0, ..., an + ai cn = 0.
Since ci ≠ 0, ai ci = 0 ⇒ ai = 0. Then the above gives
a1 = a2 = ... = a_{i−1} = ai = a_{i+1} = ... = an = 0.
Then (5.3) ⇒ x1, x2, ..., x_{i−1}, y, x_{i+1}, ..., xn are linearly independent
⇒ B′ is linearly independent.
(ii) Let u ∈ V. Then u is a LC of the basis vectors x1, ..., xi, ..., xn of V, i.e. there exist b1, ..., bn ∈ F such that
u = b1 x1 + ... + b_{i−1} x_{i−1} + bi xi + b_{i+1} x_{i+1} + ... + bn xn.   (5.4)
Solving (5.2) for ci xi, we have
ci xi = y − c1 x1 − ... − c_{i−1} x_{i−1} − c_{i+1} x_{i+1} − ... − cn xn
⇒ xi = ci⁻¹ (y − c1 x1 − ... − c_{i−1} x_{i−1} − c_{i+1} x_{i+1} − ... − cn xn), since ci ≠ 0.
With this value in (5.4), we obtain
u = b1 x1 + ... + b_{i−1} x_{i−1} + bi ci⁻¹ (y − c1 x1 − ... − c_{i−1} x_{i−1} − c_{i+1} x_{i+1} − ... − cn xn) + b_{i+1} x_{i+1} + ... + bn xn
  = (b1 − bi ci⁻¹ c1) x1 + ... + (b_{i−1} − bi ci⁻¹ c_{i−1}) x_{i−1} + bi ci⁻¹ y + (b_{i+1} − bi ci⁻¹ c_{i+1}) x_{i+1} + ... + (bn − bi ci⁻¹ cn) xn
⇒ u is a LC of the vectors of B′.
Taking stock of the above discussion, the set B′ forms a basis of V.

QED

Problem 5.5
Extend the set {(1, 0, 3), (2, 1, 0)} to a basis of R³.
Solution. We know that the standard basis of R³ is {e1, e2, e3}, where
e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1).   (a1)
Starting from this standard basis and using the replacement theorem, we obtain the extended set, which itself becomes a basis, as follows. We express the given vectors in terms of these basis vectors. Now
(1, 0, 3) = 1(1, 0, 0) + 3(0, 0, 1).
By the replacement theorem, we replace (1, 0, 0) by (1, 0, 3) in (a1) and get the basis vectors
(1, 0, 3), (0, 1, 0) and (0, 0, 1).   (a2)
We write (2, 1, 0) as a linear combination of these vectors:
(2, 1, 0) = 2(1, 0, 3) + 1(0, 1, 0) − 6(0, 0, 1).
Again using the replacement theorem, (0, 1, 0) in (a2) can be replaced by (2, 1, 0), and then the required basis in (a2) becomes {(1, 0, 3), (2, 1, 0), (0, 0, 1)}. This is the required extension.
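The extension of Problem 5.5 can be mechanized: keep appending standard basis vectors that raise the rank until n LI vectors are collected. A sketch (NumPy assumed; `extend_to_basis` is an illustrative name, and it may pick a different, equally valid completion than the replacement-theorem computation above):

```python
import numpy as np

def extend_to_basis(vectors, n):
    """Append standard basis vectors of R^n that enlarge the span until
    the collection has n linearly independent vectors (cf. Thm 5.3)."""
    basis = [list(map(float, v)) for v in vectors]
    for i in range(n):
        if len(basis) == n:
            break
        e = [0.0] * n
        e[i] = 1.0
        if np.linalg.matrix_rank(np.array(basis + [e])) == len(basis) + 1:
            basis.append(e)
    return basis

B = extend_to_basis([(1, 0, 3), (2, 1, 0)], 3)   # three LI vectors of R^3
```

A nonzero determinant of the returned triple confirms it is a basis of R³.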

Problem 5.6
In which of the following alternatives is the subset T of the set
S = {(2, 0, 0), (2, 2, 2), (2, 2, 0), (0, 2, 0)}
not a basis of R³(R)?
(A) T = {(2, 0, 0), (2, 2, 0), (2, 2, 2)}
(B) T = {(2, 0, 0), (2, 2, 2), (0, 2, 0)}
(C) T = {(2, 0, 0), (2, 2, 0), (0, 2, 0)}
(D) T = {(2, 2, 0), (2, 2, 2), (0, 2, 0)}
(SET 2013)
Solution: Let S = {(2, 0, 0), (2, 2, 2), (2, 2, 0), (0, 2, 0)}.
(A) In this case the determinant of the associated coefficient matrix is
| 2  0  0 |
| 2  2  0 |  = 8 ≠ 0.
| 2  2  2 |
Hence the vectors in T are LI, i.e. T forms a basis for R³.
(B) Here
| 2  0  0 |
| 2  2  2 |  = −8 ≠ 0
| 0  2  0 |
⇒ T forms a basis for R³.
(C) In this case
| 2  0  0 |
| 2  2  0 |  = 0
| 0  2  0 |
⇒ the vectors in T are LD, i.e. T does not form a basis for R³.
(D) Here
| 2  2  0 |
| 2  2  2 |  = −8 ≠ 0
| 0  2  0 |
⇒ T forms a basis for R³.
Thus the correct choice is (C).

MCQ 5.1
Let (a, b) ∈ R², v1 = (a, b) and v2 = (b, a). Then {v1, v2} forms a basis of R² if and only if:
(A) a² − b² = 0
(B) a² − b² ≠ 0
(C) a + b = 0
(D) a − b = 0
(SET 2006)

MCQ 5.2
Let S = {u1, u2, ..., up} be a linearly independent subset of a vector space V = ⟨v1, ..., vq⟩. Then
(A) p ≤ q
(B) p = q
(C) p < q
(D) p > q
(SET 2013)

MCQ 5.3
Let A1, ..., An be column vectors of size m. Assume that they have coefficients in R and are linearly independent over R. Then
(A) They are linearly independent over C
(B) They are linearly dependent over C
(C) They form a basis for R^m
(D) They form a subspace of R^m
(SET 2013)

MCQ 5.4
Let e1, ..., en be the standard basis of Rⁿ. Then
x1 = a11 e1, x2 = a12 e1 + a22 e2, ..., xn = a1n e1 + ... + ann en
is a basis of Rⁿ if and only if:
(A) a11 · · · ann = 0
(B) a11 · · · ann ≠ 0
(C) ann = 1
(D) a11 = 1
(SET 2006)

SAQ 5.1
Determine whether or not each of the following sets forms a basis of V3.
(i) {(1, 2, −1), (0, 3, 1)}
(ii) {(1, 3, −4), (1, 4, −3), (2, 3, −11)}

SAQ 5.2
Let S = {(1, 1, 1, 0), (1, 2, 3, 4)} be LI in V4. Find a basis of V4 including S.

SAQ 5.3
Let V be the subspace of all vectors (x1, x2, x3, x4) of V4(R) such that x1 + x2 + x3 + x4 = 0. Do the vectors (1, −1, 0, 0), (1, 1, −2, 0), (1, 0, −1, 0) form a basis for V? Give a reason for your answer.

SAQ 5.4
Prove that the set S = {α + iβ, γ + iδ} is a basis of the vector space C(R) iff αδ − βγ ≠ 0, where α, β, γ, δ ∈ R.

5.2 Dimension of a vector space
If a vector space is generated by a finite subset of itself, then it is finite dimensional. If it is not generated by any finite subset of itself, then it is infinite dimensional.
Definition. The dimension of a non-trivial vector space V(F) is the number of vectors in a basis of it, and is denoted by dim V or [V : F]. Thus if a basis B contains n elements, we say that the vector space is n-dimensional and write dim V = n. If a basis B contains an infinite number of elements, then V is called an infinite dimensional vector space.
Remark. The dimension of the trivial vector space is defined to be zero, i.e. dim {0} = 0; its basis is the empty set ∅.

Problem 5.7
Let B = {(1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, ..., 0, 1)} be a subset of the vector space Fⁿ over a field F. Prove that B is a basis of Fⁿ and dim Fⁿ = n.

Solution. Consider a LC
α1(1, 0, 0, ..., 0) + α2(0, 1, 0, ..., 0) + ... + αn(0, 0, ..., 0, 1) = (0, 0, ..., 0),
where the RHS is the zero vector in Fⁿ and αi ∈ F for all i.
⇒ (α1, 0, 0, ..., 0) + (0, α2, 0, ..., 0) + ... + (0, 0, 0, ..., αn) = (0, 0, 0, ..., 0)
⇒ (α1, α2, ..., αn) = (0, 0, ..., 0), i.e. α1 = α2 = ... = αn = 0
⇒ B is linearly independent.   (a1)
Now
Fⁿ = {(α1, α2, ..., αn) | αi ∈ F}
   = {α1(1, 0, ..., 0) + α2(0, 1, 0, ..., 0) + ... + αn(0, ..., 0, 1) | αi ∈ F, 1 ≤ i ≤ n}
   = L(B).   (a2)
Then (a1) and (a2) imply that B is a basis of Fⁿ. Since B contains n vectors, dim Fⁿ = n.

Problem 5.8
Let V = C be a real vector space. Find its basis and dimension.
Solution. Here V(R) = C(R) is a vector space under the usual addition and multiplication. Consider the set B = {1, i} ⊆ V. Let
α · 1 + β · i = 0, α, β ∈ R.
Equating real and imaginary parts, α = 0 = β. Hence B is LI. We now show that every member x + iy ∈ C can be written as a LC of 1, i. We have
x + iy = x · 1 + y · i, x, y ∈ R
⇒ B = {1, i} is a basis of V.
Since B contains two elements, dim V = 2.

Theorem 5.5
In an n-dimensional vector space V, any set of n LI vectors is a basis.
Proof. Let V be an n-dimensional vector space and let B = {v1, v2, ..., vn} be any LI set of n vectors in V. Then L(B) ⊆ V. Let w ∈ V. The set {v1, v2, ..., vn, w} has n + 1 vectors, and any n + 1 vectors in an n-dimensional space are LD (Problem 5.3); hence it is LD. Moreover v1 ≠ 0, as B is LI.
⇒ some vk (2 ≤ k ≤ n) or w is a LC of its preceding vectors, by Thm 4.5.
But v1, v2, ..., vn are LI, so no vk (2 ≤ k ≤ n) can be a LC of v1, v2, ..., v_{k−1}. Then
w ∈ [v1, v2, ..., vn] = L(B).
Hence V ⊆ L(B). But L(B) ⊆ V. Hence L(B) = V. Since B is LI, it forms a basis for V.

QED

Problem 5.9
Prove that the set {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a basis of V3.
Solution. Let B = {(1, 0, 0), (1, 1, 0), (1, 1, 1)}. Then B ⊆ V3. Now the determinant associated with the vectors in B is
| 1  0  0 |
| 1  1  0 |  = 1 ≠ 0.
| 1  1  1 |
Hence B is a LI subset of V3 containing 3 vectors. Since dim V3 = 3 (any basis of V3 has 3 vectors, and any 4 vectors in V3 are LD), Thm 5.5 gives that B is a basis of V3.

Now we present another version of Thm 4.6: it gives an extension of a LI subset to a basis. The content of the new version is the following Thm 5.6.

Theorem 5.6
Let the set {v1, v2, ..., vk} be a LI subset of an n-dimensional vector space V. Then we can find vectors v_{k+1}, v_{k+2}, ..., vn in V such that the set {v1, v2, ..., vk, v_{k+1}, ..., vn} is a basis for V.
Proof. Let the set {v1, v2, ..., vk} be a LI subset of an n-dimensional vector space V. Then k ≤ n, by Thm 5.1. We consider the two cases k = n and k < n.
Case 1. Let k = n. Then the set {v1, v2, ..., vk} contains n LI elements and hence forms a basis of the vector space V.
Case 2. Let k < n. Then [v1, v2, ..., vk] ≠ V, since dim V = n and [v1, v2, ..., vk] is a proper subspace of the vector space V. Then there exists v_{k+1} ∈ V − [v1, v2, ..., vk], i.e. v_{k+1} ∈ V but v_{k+1} ∉ [v1, v2, ..., vk]
⇒ {v1, v2, ..., vk, v_{k+1}} is LI (by Thm 4.3(ii), since {v1, ..., vk} is LI), and k + 1 ≤ n.
If k + 1 = n, then the set {v1, v2, ..., v_{k+1}} is a basis of V. If k + 1 < n, then there exists v_{k+2} ∈ V such that {v1, v2, ..., v_{k+2}} is LI. Continuing the process, we obtain a LI set of n vectors {v1, v2, ..., vk, v_{k+1}, ..., vn}. This forms a basis of the vector space V.

QED

Existence of a basis
The existence of a basis for a finite dimensional vector space V(F) is assured by the above theorem. If V = {0}, then ∅ is its basis. Suppose V ≠ {0} and dim V = n. Then for any v1 ∈ V, v1 ≠ 0, the set {v1} is LI in V. The set {v1} can be extended to a basis {v1, v2, ..., vn} for V by the procedure explained in the proof of Thm 5.6.

Problem 5.10
Given two LI vectors (1, 0, 1, 0), (0, −1, 1, 0) of V4, find a basis of V4 that includes these two vectors.

Problem 5.10 Given two LI vectors (1, 0,1, 0), (0,  1,1, 0) of V4 . Find a basis of V4 that includes these two vectors.

Solution. Given that the set S

{(1, 0,1, 0), (0,  1,1, 0)} is LI. We have to find two vectors in V4

such that they with the two given vectors in S form a LI set. 92

For this we make use of the span [S ] : [ S ] {D(1, 0,1, 0)  E(0,  1,1, 0) | D, E  R}

{(D,  E, D  E, 0) | D, E  R}

In the span [S ], all its vectors have zero as fourth component. With this information the vector (0, 0, 0,1)  V4 does not belong to [S ]. By Thm 4.3 (ii) (of the unit 4), we have T Ÿ

{(1, 0,1, 0), (0,  1,1, 0), (0, 0, 0,1)} is a LI subset of V4 [T ]

(a1)

{D(1,0,1,0)  E(0,1,1,0)  J (0,0,0,1) | D, E, J  R} {(D,E, D  E, J ) | D, E, J  R}

For [T ] { (t1 , t2 , t3 , t4 ) | all ti  R}, we have t1

t 2  t3 . Now we search a vector in V4 which does

not satisfy this condition. The vector (0,1, 0, 0) is suitable for the purpose: (0,1,0,0)  [(1,0,1,0), (0,1,1,0), (0,0,0,1)] Ÿ

B {(1,0,1,0), (0,1,1,0), (0,0,0,1), (0,1,0,0)} LI subset of V4 , by Thm 4.3(ii)

It has 4 ( dim V4 ) vectors. Hence B is a required basis for V4 .
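A quick numeric confirmation of Problem 5.10's conclusion: the four vectors of B have rank 4 = dim V4, so they are LI and hence a basis (NumPy assumed):

```python
import numpy as np

# Rows are the four vectors of the basis B found in Problem 5.10.
B = np.array([[1, 0, 1, 0],
              [0, -1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)
rank = int(np.linalg.matrix_rank(B))   # 4 = dim V4, so the rows form a basis
```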

Problem 5.11 Let S

[(1,1,0), (1,0,2)] and T

[(0,1,0), (0,1,2)]. Determine the subspaces S ˆ T and S  T .

Find dim ( S  T ) and dim ( S ˆ T ).

Hint. Problem 3.11 of the Unit 3 Solution. Using the Problem 3.11, one can deduce that S ˆT

[ (0 ,  1,  2) ] and S  T

[(1,1,0), (1,0,2), (0,1,0)]

dim ( S ˆ T ) 1

Ÿ

1 1 0 Now

Ÿ Ÿ

1

0

2

0

1

0

2 z 0

The vectors (1,1,0), (1,0,2), (0,1,0) are LI and S  T

R 3.

dim ( S  T ) 3
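The two dimensions found in Problem 5.11 are consistent with the standard dimension formula dim(S + T) = dim S + dim T − dim(S ∩ T), a fact not proved in this unit and used here only as a cross-check (NumPy assumed):

```python
import numpy as np

S = np.array([[1, 1, 0], [1, 0, 2]], dtype=float)   # generators of S as rows
T = np.array([[0, 1, 0], [0, 1, 2]], dtype=float)   # generators of T as rows

dim_S = int(np.linalg.matrix_rank(S))                    # 2
dim_T = int(np.linalg.matrix_rank(T))                    # 2
dim_sum = int(np.linalg.matrix_rank(np.vstack([S, T])))  # rank of all generators
dim_cap = dim_S + dim_T - dim_sum                        # by the dimension formula
```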

In the next theorem, there appears a concept of a quotient space, which is not explained earlier. First we discuss the concept and then deal with the theorem.


Problem 5.12 Which of the following is a basis for the vector space of polynomials in R [ x] of degree d 3 ? (A) 1  x  x 2 , 4  x, 11  x  x 2  4 x 3 (B) 1, 3  x, x 2  x 3 , 4  x  2 x 2  2 x 3 (C) x 3  x, x 2  x, x  1, x 3  x 2  3 x  1 (D) 3, 4  x, 2  3 x  x 2  x 3 , 4  7 x  x 3

(SET 2013)

Solution. Let V be the vector space of polynomials in R[x] of degree ≤ 3. Hence dim V = 4.

(A) Here the vectors are 3 in number while dim V = 4. Hence the vectors cannot form a basis for V.

(B) It is easy to note that one of the vectors can be written as a LC of the remaining three:
4 + x + 2x² + 2x³ = 1(1) + 1(3 + x) + 2(x² + x³)
⇒ the vectors are LD and hence they cannot form a basis.

(C) Here x³ + x² + 3x + 1 = 1(x³ + x) + 1(x² + x) + 1(x + 1)
⇒ the vectors are LD and hence they cannot form a basis.

(D) Consider a LC of the vectors:
a(3) + b(4 + x) + c(2 + 3x + x² + x³) + d(4 + 7x + x³) = 0 = zero polynomial = 0 + 0x + 0x² + 0x³
⇒ (3a + 4b + 2c + 4d) + (b + 3c + 7d)x + cx² + (c + d)x³ = 0
⇒ 3a + 4b + 2c + 4d = 0, b + 3c + 7d = 0, c = 0, c + d = 0
⇒ c = 0, d = 0 and then b = 0, a = 0

Hence the 4 vectors are LI and as such they form a basis for V. Thus the correct choice is (D).
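The independence check in (D) can be automated: write each polynomial as its coefficient row with respect to the standard basis {1, x, x², x³} and test whether the resulting 4 × 4 determinant is nonzero (a sketch assuming SymPy is available).

```python
import sympy as sp

x = sp.symbols('x')
# the four polynomials from option (D)
polys = [sp.Integer(3), 4 + x, 2 + 3*x + x**2 + x**3, 4 + 7*x + x**3]

# coefficient rows with respect to the standard basis {1, x, x^2, x^3}
M = sp.Matrix([[sp.Poly(p, x).coeff_monomial(x**k) for k in range(4)]
               for p in polys])

print(M.det())   # nonzero, so the four polynomials are LI
```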

MCQ 5.5
Let n be a positive integer and let Hn be the space of all n × n matrices A = (aij) with entries in R satisfying aij = ars whenever i + j = r + s (i, j, r, s = 1, …, n). Then the dimension of Hn, as a vector space over R, is
(A) n² (B) n² − n + 1 (C) 2n + 1 (D) 2n − 1

(NET 2011)

MCQ 5.6
The dimension of the vector space V = {(x, y) : x and y are complex numbers} over the field of reals is:
(A) 4 (B) 2 (C) 3 (D) 1

(SET 2015)

MCQ 5.7
Let V be the vector space of all real n × n matrices A = [aij] such that aij = 0 if i + j ≠ n + 1. Then the dimension of V is
(A) n + 1 (B) 1 (C) n (D) n² − n

(SET 2008)

MCQ 5.8
Let V be the vector space of all real n × n matrices A = [aij] such that for all i = 1, 2, …, n, we have Σ_{j=1}^{n} aij = 0. What is the dimension of V?
(A) n² − n
(B) n(n − 1)/2
(C) n
(D) n(n + 1)/2

(SET 2008)

MCQ 5.9
The dimension of the subspace of Kⁿ consisting of those vectors A = (a1, …, an) such that a1 + … + an = 0 is
(A) n (B) n − 1 (C) n/2 (D) (n − 1)/2

(SET 2013)

MCQ 5.10
V is the vector space of n × n real symmetric matrices. Then the dimension of V over R is
(A) n(n + 1)/2
(B) n(n − 1)/2
(C) [n²/2]
(D) [n²/2] + 1

(SET 2004)

MCQ 5.11
An n × n real matrix A is antisymmetric if a_ji = −a_ij for all i, j, where A = [aij]. What is the dimension of the vector space of all n × n real antisymmetric matrices?
(A) n²
(B) n
(C) n(n + 1)/2
(D) n(n − 1)/2

(SET 2009)

MCQ 5.12
Let V be the set of all n × n skew-symmetric matrices over R. Then V is a real vector space of dimension:
(A) n(n − 1)/2
(B) n(n + 1)/2
(C) n²
(D) n² − n

(SET 2015)
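The dimension counts behind MCQ 5.10–5.12 come from counting the free entries of a matrix: a symmetric matrix is fixed by its entries on and above the diagonal, a skew-symmetric one by the entries strictly above it (its diagonal must be zero). A minimal sketch in plain Python, with n = 4 as a sample size:

```python
# Counting free entries directly (sample size n = 4).
n = 4

# symmetric: entries with j >= i are free
dim_sym = sum(1 for i in range(n) for j in range(i, n))       # n(n+1)/2

# skew-symmetric: only entries with j > i are free
dim_skew = sum(1 for i in range(n) for j in range(i + 1, n))  # n(n-1)/2

print(dim_sym, dim_skew)
assert dim_sym == n * (n + 1) // 2 and dim_skew == n * (n - 1) // 2
```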

MCQ 5.13
The dimension of the vector space of all symmetric matrices of order n × n (n ≥ 2) with real entries and trace equal to zero is:
(A) {(n² − n)/2} − 1
(B) {(n² + n)/2} − 1
(C) {(n² + 2n)/2} − 1
(D) {(n² − 2n)/2} − 1

(NET 2011)

MCQ 5.14
The dimension of the vector space of all symmetric matrices A = (aij) of order n × n (n ≥ 2) with real entries, a11 = 0 and trace zero is
(A) (n² + n − 4)/2
(B) (n² − n + 4)/2
(C) (n² + n − 3)/2
(D) (n² − n + 3)/2

(NET 2012)

MCQ 5.15
The dimension of the space of n × n matrices all of whose components are 0 except possibly the diagonal components is:
(A) n² (B) n − 1 (C) n² − 1 (D) n

(SET 2011)

MCQ 5.16
The dimension of the space of diagonal n × n matrices is:
(A) n
(B) n²
(C) n(n − 1)/2
(D) n(n + 1)/2

(SET 2013)

MCQ 5.17
Let Pn(x) be the vector space of all polynomials of degree at most n with real coefficients. Then its dimension is:
(A) n + 1 (B) n (C) n − 1 (D) n²

(SET 2015)

MCQ 5.18
The dimension of the vector space M = {[aij]m×n | aij ∈ C} over the field R is:
(A) m + n (B) 2mn (C) 2(m + n) (D) mn

(SET 2016)

5.3 Quotient space

Coset. Let W be a subspace of a vector space V over F. Let v ∈ V. We define
v + W = {v + w | w ∈ W}.
It is called a left coset of W in V. Similarly one can define a right coset W + v of W in V. In a vector space, vector addition (+) is commutative. This gives
W + v = v + W
Hence in the case of a vector space, there is no difference between left and right cosets. We simply call v + W a coset of W in V.

Some results
(i) 0 + W = W, where 0 ∈ V
(ii) w ∈ W ⇔ w + W = W
(iii) v + W = v′ + W ⇔ v − v′ ∈ W

The space V/W. Let W be a subspace of a vector space V over F. Define
V/W = set of all cosets of W in V
Thus
V/W = {v + W | v ∈ V}   (5.5)
We define the operations of vector addition (+) and scalar multiplication (·) on V/W as follows:
(v + W) + (v′ + W) = (v + v′) + W   (5.6a)
and α·(v + W) = αv + W,   (5.6b)
∀ v, v′ ∈ V and α ∈ F. Under these operations V/W becomes a vector space over F. This vector space V/W is called a quotient space of V by W and its zero vector is W.


Remark. Confirm for yourself that the operations (+) and (·) on V/W are well defined.

Theorem 5.7
Let W be a subspace of a finite dimensional vector space V. Then
(i) W is finite dimensional,
(ii) dim W ≤ dim V and
(iii) dim(V/W) = dim V − dim W.

Proof. Let W be a subspace of a finite dimensional vector space V. Let dim V = n.

(i) We know that m vectors LI in V ⇒ m ≤ n = dim V. Then any n + 1 elements of V are LD. We can find a largest set of LI vectors w1, w2, …, wm ∈ W with m ≤ n. Now
[w1, w2, …, wm] ⊆ W   (5.7)
For any w ∈ W, the vectors w1, w2, …, wm, w are LD
⇒ α1w1 + α2w2 + … + αmwm + α0w = 0, for αi ∈ F and some αi ≠ 0
For α0 = 0: α1w1 + α2w2 + … + αmwm = 0 ⇒ αi = 0, ∀ i
⇒ w1, w2, …, wm, w are LI
This is a contradiction. Hence α0 ≠ 0 and then
α0w = −(α1w1 + α2w2 + … + αmwm) and α0⁻¹ ∈ F
⇒ w = (−α0⁻¹α1)w1 + (−α0⁻¹α2)w2 + … + (−α0⁻¹αm)wm = LC of the LI vectors w1, w2, …, wm
⇒ w ∈ [w1, w2, …, wm]
i.e. W ⊆ [w1, w2, …, wm]   (5.8)
(5.7) and (5.8) ⇒ W = [w1, w2, …, wm] and hence {w1, w2, …, wm} is a basis of W.
Hence W is finite dimensional and dim W = m.

(ii) Let dim W = m. Then W has a basis {w1, w2, …, wm} consisting of m vectors. The set {w1, w2, …, wm} is LI in the n-dimensional vector space V
⇒ m ≤ n
i.e. dim W ≤ dim V. Moreover dim W = dim V = n ⇔ W = V.

(iii) Let {w1, w2, …, wm} be a basis of the subspace W. Then
{w1, w2, …, wm} is a LI subset of V.
By Thm 3.6, V has a basis {w1, w2, …, wm, v1, v2, …, vr}.
⇒ dim V = m + r
i.e. dim V = dim W + r   (5.9)
To prove: dim(V/W) = r.
Let v ∈ V and v + W ∈ V/W.
⇒ v = α1w1 + α2w2 + … + αmwm + β1v1 + β2v2 + … + βrvr, αi, βj ∈ F
⇒ v + W = [(α1w1 + α2w2 + … + αmwm) + (β1v1 + β2v2 + … + βrvr)] + W
= [(α1w1 + … + αmwm) + W] + [(β1v1 + … + βrvr) + W]
= W + ((β1v1 + W) + … + (βrvr + W))   (5.10)
Since α1w1 + … + αmwm ∈ W, ∵ wi ∈ W ∀ i
⇒ (α1w1 + … + αmwm) + W = W
Then (5.10) ⇒
v + W = (β1v1 + W) + (β2v2 + W) + … + (βrvr + W), ∵ W = zero of V/W
= β1(v1 + W) + β2(v2 + W) + … + βr(vr + W), where βj ∈ F
= LC of v1 + W, v2 + W, …, vr + W
Thus any element v + W of V/W is expressed as a LC of v1 + W, v2 + W, …, vr + W over F
⇒ [v1 + W, v2 + W, …, vr + W] = V/W   (5.11)
Let a1(v1 + W) + a2(v2 + W) + … + ar(vr + W) = W = zero vector in V/W, ai ∈ F
⇒ (a1v1 + W) + (a2v2 + W) + … + (arvr + W) = W
⇒ (a1v1 + a2v2 + … + arvr) + W = W
⇒ a1v1 + a2v2 + … + arvr ∈ W = [w1, w2, …, wm]
⇒ a1v1 + a2v2 + … + arvr = b1w1 + b2w2 + … + bmwm, for bi ∈ F, 1 ≤ i ≤ m
⇒ b1w1 + b2w2 + … + bmwm + (−a1)v1 + (−a2)v2 + … + (−ar)vr = 0
⇒ b1 = 0 = b2 = … = bm = a1 = a2 = … = ar, ∵ {w1, …, wm, v1, …, vr} is LI
⇒ the set {v1 + W, v2 + W, …, vr + W} is LI   (5.12)
By (5.11) and (5.12), we have
{v1 + W, v2 + W, …, vr + W} is a basis for V/W
This basis contains exactly r elements i.e. dim(V/W) = r. Using this in (5.9), we get
dim V = dim W + dim(V/W)
or dim(V/W) = dim V − dim W.

QED
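Part (iii) can be illustrated numerically: once dim W is known (as a matrix rank), dim(V/W) is forced by the formula. A small sketch assuming NumPy is available; the subspace W below is an illustrative choice, not taken from the text.

```python
import numpy as np

# dim(V/W) = dim V - dim W, checked on a sample: V = R^4 and W spanned
# by the rows below (an illustrative choice).
W = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0]])

dim_V = 4
dim_W = np.linalg.matrix_rank(W)
dim_quotient = dim_V - dim_W

print(dim_W, dim_quotient)
```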

Theorem 5.8
If U and W are finite dimensional subspaces of a vector space V, then
(i) U + W is finite dimensional
and
(ii) dim(U + W) = dim U + dim W − dim(U ∩ W).

Proof. (i) Let U and W be finite dimensional subspaces of a vector space V(F). Then U ∩ W is a subspace of the finite dimensional vector spaces U(F) and W(F).
⇒ U ∩ W has a finite basis A = {w1, w2, …, wk}, which is a part of a basis B = {w1, w2, …, wk, u1, u2, …, um} of U and a part of a basis C = {w1, w2, …, wk, v1, v2, …, vn} of W
⇒ dim(U ∩ W) = k, dim U = k + m, dim W = k + n   (5.13)
Now U + W = L(B) + L(C) = L(B ∪ C)
⇒ the set {w1, …, wk, u1, …, um, v1, …, vn} = B ∪ C spans U + W   (5.14)
For αi, βj, γl ∈ F, consider a LC:
α1w1 + … + αkwk + β1u1 + … + βmum + γ1v1 + … + γnvn = 0 = zero vector   (5.15)
⇒ α1w1 + … + αkwk + β1u1 + … + βmum = −(γ1v1 + … + γnvn)
Hence the LHS of (5.15) without the γ-terms lies in U ∩ W, i.e.
−(γ1v1 + γ2v2 + … + γnvn) ∈ U ∩ W = [w1, w2, …, wk]
⇒ −(γ1v1 + γ2v2 + … + γnvn) = δ1w1 + δ2w2 + … + δkwk, where δi ∈ F, 1 ≤ i ≤ k
⇒ δ1w1 + δ2w2 + … + δkwk + γ1v1 + γ2v2 + … + γnvn = 0
⇒ δ1 = 0 = δ2 = … = δk = γ1 = γ2 = … = γn, ∵ C = {w1, …, wk, v1, …, vn} is LI
Then by (5.15),
α1w1 + α2w2 + … + αkwk + β1u1 + β2u2 + … + βmum = 0
⇒ α1 = 0 = α2 = … = αk = β1 = β2 = … = βm, ∵ B = {w1, …, wk, u1, …, um} is LI
Thus (5.14) ⇒ α1 = α2 = … = αk = β1 = β2 = … = βm = γ1 = γ2 = … = γn = 0 only
⇒ B ∪ C = {w1, …, wk, u1, …, um, v1, …, vn} is LI   (5.16)
Hence B ∪ C is a basis of U + W having exactly k + m + n vectors.
⇒ dim(U + W) = k + m + n = a finite number   (5.17)
Hence U + W is a finite dimensional subspace of V(F).

(ii) (5.17) ⇒ dim(U + W) = (k + m) + (k + n) − k = dim U + dim W − dim(U ∩ W)

QED

Corollary
If U, W are finite dimensional subspaces of a vector space V and U + W = U ⊕ W, then
dim(U + W) = dim U + dim W.

Proof. Let U, W be finite dimensional subspaces of V with U + W = U ⊕ W, i.e. U ∩ W = {0}.
By Thm 5.8, we have
dim(U + W) = dim U + dim W − dim(U ∩ W) = dim U + dim W, ∵ dim{0} = 0

QED
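Theorem 5.8 is also a practical computational tool: dim(U ∩ W), which is awkward to compute directly, can be recovered from three ranks. A sketch assuming NumPy is available, using random integer subspaces of R⁶ (illustrative data, not from the text); the sanity bound 0 ≤ dim(U ∩ W) ≤ min(dim U, dim W) must hold every time.

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(100):
    U = rng.integers(-3, 4, size=(3, 6))   # rows span U
    W = rng.integers(-3, 4, size=(3, 6))   # rows span W
    dim_U = np.linalg.matrix_rank(U)
    dim_W = np.linalg.matrix_rank(W)
    dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))
    dim_cap = dim_U + dim_W - dim_sum      # dim(U ∩ W) by Theorem 5.8
    # the intersection can never exceed either subspace
    assert 0 <= dim_cap <= min(dim_U, dim_W)
print("ok")
```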

Problem 5.10
Let V be a finite dimensional vector space and W be a subspace of V such that dim V = dim W. Show that V = W.

Solution. Let W be a subspace of the finite dimensional vector space V. Then W is finite dimensional and it has a finite basis B
⇒ B is LI and L(B) = W   (a1)
Now B ⊆ W and W ⊆ V ⇒ B ⊆ V. Given that dim W = dim V = n, say.
Then the LI subset B of V contains n (= dim V) vectors. Hence B is a basis of V, i.e. L(B) = V. Then (a1) ⇒ V = W.

Problem 5.11
Let W be a subspace of a finite dimensional vector space V over F. Prove that there exists a subspace W1 of V such that V = W ⊕ W1.

Solution. Let W be a subspace of a finite dimensional vector space V(F)
⇒ W ⊆ V and then W is a finite dimensional vector space over F
Let dim W = m and B = {w1, w2, …, wm} be a basis of W. Then B can be extended to form a basis of V. Assume that C = {w1, w2, …, wm, v1, v2, …, vn} is a basis of V. Define W1 as the subspace of V generated by the LI subset D = {v1, v2, …, vn}. We have
C = B ∪ D, L(C) = V, L(B) = W, L(D) = W1
⇒ V = L(B ∪ D) = L(B) + L(D) = W + W1   (a1)
Let u ∈ W ∩ W1
⇒ u ∈ W = L(B) and u ∈ W1 = L(D)
⇒ u = a1w1 + a2w2 + … + amwm = b1v1 + b2v2 + … + bnvn, ai, bj ∈ F
⇒ a1w1 + a2w2 + … + amwm + (−b1)v1 + (−b2)v2 + … + (−bn)vn = 0
⇒ a1 = 0 = a2 = … = am = b1 = b2 = … = bn, ∵ C is LI as C is a basis
⇒ u = a1w1 + a2w2 + … + amwm = 0 + 0 + … + 0 = 0
⇒ W ∩ W1 = {0}   (a2)
By (a1) and (a2), V = W ⊕ W1.

Problem 5.12
Let W = {α0 + α1x + … + α_{n−1}x^(n−1) ∈ F[x] | α0 + α1 + … + α_{n−1} = 0}. Show that W is a subspace of Vn, the vector space of all polynomials of degree less than n. Find a basis of W over F.

Hint. p(x) = α0 + α1x + … + αn x^n is a polynomial of degree n in x over F, where each αi ∈ F. Here
F[x] = the set of all polynomials p(x)
and Vn = the vector space of all polynomials of degree less than n.

Solution. By definition itself, W ⊆ Vn. Let p(x), q(x) ∈ W,
p(x) = α0 + α1x + … + α_{n−1}x^(n−1), q(x) = β0 + β1x + … + β_{n−1}x^(n−1)
where
α0 + α1 + … + α_{n−1} = 0, β0 + β1 + … + β_{n−1} = 0, by definition of W   (a1)
Let a, b ∈ F. Then a p(x) + b q(x) ∈ Vn, ∵ Vn is a vector space. We show that a p(x) + b q(x) ∈ W. We have
a p(x) + b q(x) = a[α0 + α1x + … + α_{n−1}x^(n−1)] + b[β0 + β1x + … + β_{n−1}x^(n−1)]
= [(aα0) + (aα1)x + … + (aα_{n−1})x^(n−1)] + [(bβ0) + (bβ1)x + … + (bβ_{n−1})x^(n−1)]
= (aα0 + bβ0) + (aα1 + bβ1)x + … + (aα_{n−1} + bβ_{n−1})x^(n−1)
= γ0 + γ1x + … + γ_{n−1}x^(n−1), γi = aαi + bβi, i = 0, 1, …, n − 1   (a2)
Since F is closed under addition and multiplication, we get
a, b, αi, βi ∈ F ⇒ γi ∈ F
Here
γ0 + γ1 + … + γ_{n−1} = (aα0 + bβ0) + (aα1 + bβ1) + … + (aα_{n−1} + bβ_{n−1})
Since + is commutative and associative, we write
γ0 + γ1 + … + γ_{n−1} = (aα0 + aα1 + … + aα_{n−1}) + (bβ0 + bβ1 + … + bβ_{n−1})
= a(α0 + α1 + … + α_{n−1}) + b(β0 + β1 + … + β_{n−1}), ∵ distributivity
= a·0 + b·0 = 0, by (a1)
Since each γi ∈ F in (a2) and their sum is zero ⇒ a p(x) + b q(x) ∈ W. Thus W is a subspace of Vn. Hence a basis of W is
1 − x, 1 − x², 1 − x³, …, 1 − x^(n−1).


Problem 5.13
Let U, W be subspaces of an n-dimensional vector space V with dim U = dim W = n − 1, U ≠ W. Prove that dim(U ∩ W) = n − 2.

Solution. Let U, W be (n − 1)-dimensional subspaces of an n-dimensional vector space V and U ≠ W. Then U + W is a subspace of V properly containing each of U, W and hence
dim(U + W) > n − 1 and dim(U + W) ≤ dim V = n
⇒ n ≤ dim(U + W) ≤ dim V
⇒ dim(U + W) = n   (a1)
But dim(U + W) = dim U + dim W − dim(U ∩ W), by Thm 5.8
⇒ n = (n − 1) + (n − 1) − dim(U ∩ W), by (a1)
⇒ dim(U ∩ W) = n − 2

Problem 5.14
Let W be the vector space of 2 × 2 complex Hermitian matrices. Then the dimension of W over the field R is
(A) 1 (B) 2 (C) 4 (D) 8

Solution. Let W be the vector space of 2 × 2 complex Hermitian matrices. We find the dimension of W over the field R. We know that the dimension of the real vector space of all n × n complex Hermitian matrices is n².
⇒ dim W = 2² = 4
The correct option is (C).

MCQ 5.19
If U and W are subspaces of R⁴ generated respectively by
{(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0)} and {(0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 1, 1)},
then the dimension of U ∩ W is
(A) 1 (B) 2 (C) 3 (D) 4

(SET 2000)

MCQ 5.20
If V is a finite dimensional vector space over a field K, and W and U are subspaces of V, which of the following statements is correct?
(A) dim(W + U) = dim W − dim U
(B) dim(W + U) = dim W + dim U if and only if W + U = W ⊕ U
(C) dim(W + U) > dim W + dim U
(D) dim(W + U) < dim W + dim U

(SET 2002)

MCQ 5.21
Let W1 and W2 be two subspaces of a finite dimensional vector space V and let
dim W1 + dim W2 > dim V.
Then
(A) W1 ∩ W2 = {0}
(B) W1 ∩ W2 ≠ {0}
(C) W1 ∩ W2 = ∅
(D) W1 ∪ W2 = V

(SET 2016)

MCQ 5.22
Let A ∈ M10(C), the vector space of 10 × 10 matrices with entries in C. Let WA be the subspace of M10(C) spanned by {Aⁿ | n ≥ 0}. Choose the correct statements:
(A) For any A, dim WA ≤ 10
(B) For any A, dim WA < 10
(C) For some A, 10 < dim WA < 100
(D) For some A, dim WA = 100

(NET 2013)


MCQ 5.23
Let V be the vector space of all polynomials over C. Let W be the subspace of all even polynomials. Then
(A) V, W and V/W are all infinite dimensional
(B) V and W are infinite dimensional, but V/W is finite dimensional
(C) V and V/W are infinite dimensional, but W is finite dimensional
(D) V and V/W are finite dimensional, and W is finite dimensional

SAQ 5.5
Construct subspaces A, B of V4 such that dim A = 2, dim B = 3 but dim(A ∩ B) = 1.
Hint. A = [(1, 0, 0, 0), (0, 1, 0, 0)], B = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)] are subspaces of the vector space V4 of dimensions 2, 3 respectively. A ∩ B = [(0, 1, 0, 0)] is a subspace of V4 of dimension 1. Note that V4 = A + B here.

SAQ 5.6
Let M(R) be the set of all m × n matrices of real numbers. Then show that M(R) is a real vector space under matrix addition and scalar multiplication. Find the dimension of M(R).
Hint. Denote by Eij ∈ M(R) the matrix whose (i, j)th entry is 1 and all remaining entries are zero. Then {Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n} forms a basis for M(R) having mn elements. Then dim M(R) = mn.

SAQ 5.7
Let S be the subspace of V3 spanned by (0, 1, 0) and (0, 0, 1) and T be the subspace spanned by (1, 2, 0) and (3, 1, 2). Find a basis of each of the subspaces S ∩ T and S + T.

SAQ 5.8
Suppose U and W are distinct 4-dimensional subspaces of a vector space V of dimension 6. Find the possible dimensions of U ∩ W.

5.4 Coordinate vector relative to basis
Definition. Let B = {v1, v2, …, vn} be an ordered basis for a vector space V. Then any v ∈ V can be uniquely written as

v = α1v1 + α2v2 + … + αnvn,
where the scalars α1, α2, …, αn are fixed for v. The vector (α1, α2, …, αn) is called the coordinate vector of v relative to the ordered basis B and it is denoted by [v]B, i.e.
[v]B = (α1, α2, …, αn).

Problem 5.15
Let B = {(1, 1, 1), (1, 0, 1), (0, 0, 1)} be a basis for V3. Find the coordinate vector of (2, 3, 4) ∈ V3 relative to the basis B.

Solution. Let B = {v1, v2, v3} be an ordered basis for V3:
v1 = (1, 1, 1), v2 = (1, 0, 1), v3 = (0, 0, 1)
Denote v = (2, 3, 4) ∈ V3 = L(B).
⇒ v = α1v1 + α2v2 + α3v3, αi ∈ R
⇒ (2, 3, 4) = α1(1, 1, 1) + α2(1, 0, 1) + α3(0, 0, 1) = (α1 + α2, α1, α1 + α2 + α3)
⇒ α1 + α2 = 2, α1 = 3, α1 + α2 + α3 = 4
⇒ α1 = 3, α2 = −1, α3 = 2
⇒ [v]B = (α1, α2, α3) = (3, −1, 2)

= the coordinate vector of (2, 3, 4) relative to B.

Problem 5.16
Given that {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a basis of V3(R). Find the coordinates of (α, β, γ) ∈ V3(R) with respect to the above basis.

Solution. Let B = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} be an ordered basis for the real vector space V3. For x, y, z ∈ R, let
(α, β, γ) = x(1, 0, 0) + y(1, 1, 0) + z(1, 1, 1) = (x + y + z, y + z, z)
⇒ x + y + z = α, y + z = β, z = γ
⇒ x = α − β, y = β − γ, z = γ
⇒ [(α, β, γ)]B = (x, y, z) = (α − β, β − γ, γ)
This gives the coordinate vector of (α, β, γ) with respect to the basis B. Thus (α − β, β − γ, γ) are the coordinates of (α, β, γ) ∈ V3 with respect to the basis B.

SAQ 5.9
Let B = {α1, α2, α3} be an ordered basis for V3(R), where
α1 = (1, 0, −1), α2 = (1, 1, 1), α3 = (1, 0, 0).
Obtain the coordinates of the vector (a, b, c) in the ordered basis B.

SAQ 5.10
Find the coordinate vector of (2, 3, 4, −1) relative to the ordered basis
B = {(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 0)} of V4.
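Coordinate-vector computations like Problem 5.15 reduce to one linear solve: if the columns of A are the basis vectors, then [v]B is the solution c of A·c = v. A sketch assuming NumPy is available, using the data of Problem 5.15:

```python
import numpy as np

# Problem 5.15 automated: the coordinates solve A·c = v, where the
# columns of A are the basis vectors of B.
A = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1]], dtype=float)   # columns: (1,1,1), (1,0,1), (0,0,1)
v = np.array([2, 3, 4], dtype=float)

c = np.linalg.solve(A, v)
print(c)   # [ 3. -1.  2.]
```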


SUMMARY
The concept of a basis of a vector space is explained and some related results are deduced. How to construct a basis for a vector space is illustrated by examples. Based upon the number of basis vectors, the dimension of a vector space is defined. Thereafter, the concepts of a quotient space and of a coordinate vector relative to a basis are introduced and supplemented with illustrative examples.

KEY WORDS Basis of a vector space Dimension of a vector space Quotient space Coordinate vector


UNIT 04-01: LINEAR TRANSFORMATIONS


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of a linear transformation or homomorphism

INTRODUCTION
The study of finite dimensional vector spaces carried out in the earlier units is continued with regard to linear mappings of a vector space U(F) into a vector space V(F), which are called linear transformations. Through appropriate addition and scalar multiplication, the set L(U, V) of all such linear transformations can itself be made a vector space over F. This constitutes an abstract approach to defining a matrix through a linear transformation. Synonyms for linear transformation are linear map, linear mapping, homomorphism and linear function; the notion plays a central role in linear algebra, functional analysis and operator theory. We conclude the study by highlighting the matrix representation of such linear transformations.

1.1 Linear transformation or vector space homomorphism
Definition. Let U and V be vector spaces over the same field F. A function T : U → V is called a linear transformation or homomorphism if
(i) T(u + v) = T(u) + T(v)   (1.1a)
and
(ii) T(αu) = αT(u),   (1.1b)
∀ u, v ∈ U and ∀ α ∈ F.
Remark. For a vector space V, a linear map T : V → V is called a linear operator or a linear map on V.

Theorem 1.1

Let U and V be vector spaces over the same field F. Then a function T : U → V is linear ⇔
T(αu + βv) = αT(u) + βT(v),   (1.2)
∀ α, β ∈ F and u, v ∈ U.

Proof. (i) Let T : U → V be a linear map. For any u, v ∈ U and α, β ∈ F ⇒ αu, βv ∈ U. By definition of a linear map T, we have
T(αu + βv) = T(αu) + T(βv) = αT(u) + βT(v)
This completes the necessary part.
(ii) Conversely, let T(αu + βv) = αT(u) + βT(v), ∀ α, β ∈ F and u, v ∈ U.
Taking α = β = 1 ∈ F, we write
T(u + v) = T(u) + T(v), ∀ u, v ∈ U   (1.3a)
Taking β = 0 ∈ F, we get
T(αu) = αT(u), ∀ u ∈ U, α ∈ F.   (1.3b)
Then (1.3) ⇒ T : U → V is a linear transformation.

QED

Remark. T is linear ⇒ T(u − v) = T(1u + (−1)v) = 1Tu + (−1)Tv = Tu + (−Tv) = Tu − Tv

Corollary. Let U, V be vector spaces over the same field F. Then a function T : U → V is linear ⇔
T(αu + v) = αTu + Tv, ∀ u, v ∈ U and α ∈ F.   (1.4)

Proof. Taking β = 1 in (1.2), we get (1.4).

QED

Theorem 1.2
Let U, V be vector spaces over a field F and T : U → V be a linear map. Then
(i) T(0) = 0
(ii) T(−u) = −Tu, ∀ u ∈ U
(iii) T(α1u1 + α2u2 + … + αnun) = α1Tu1 + α2Tu2 + … + αnTun, ∀ ui ∈ U, αi ∈ F, 1 ≤ i ≤ n

Proof. (i) Let T : U → V be a linear map. Then
T(αu) = αTu, ∀ u ∈ U, α ∈ F   (1.5)
For α = 0 ∈ F:
T(0) = 0·Tu = 0 = zero vector in V
(ii) Putting α = −1 ∈ F in (1.5), we get T((−1)u) = (−1)T(u)
i.e. T(−u) = −Tu, ∀ u ∈ U.
(iii) We apply the induction method. For n = 1, (iii) becomes T(α1u1) = α1T(u1). This is true by definition of linearity of T, i.e. (iii) is true for n = 1. Assume that (iii) is true for n = m, i.e.
T(α1u1 + α2u2 + … + αmum) = α1Tu1 + α2Tu2 + … + αmTum   (1.6)
Now
T(α1u1 + α2u2 + … + α_{m+1}u_{m+1}) = T((α1u1 + α2u2 + … + αmum) + α_{m+1}u_{m+1})
= T(α1u1 + … + αmum) + T(α_{m+1}u_{m+1}), ∵ T is linear
= α1Tu1 + α2Tu2 + … + αmTum + α_{m+1}Tu_{m+1}, by (1.6) and linearity of T
Thus (iii) is true for n = m + 1 if it is true for n = m. Then by induction it follows that (iii) is true ∀ n ∈ N.

QED

Problem 1.1
Let f : R³ → R³ be the projection map defined by
f(x, y, z) = (0, y, z).   (a1)
Show that f is linear.

Solution. Let u = (a, b, c), v = (p, q, r) ∈ R³ and α ∈ R
⇒ u + v = (a, b, c) + (p, q, r) = (a + p, b + q, c + r)
and αu = α(a, b, c) = (αa, αb, αc)
Then f(u) = f(a, b, c) = (0, b, c), f(v) = f(p, q, r) = (0, q, r), by (a1)   (a2)
f(u + v) = f(a + p, b + q, c + r) = (0, b + q, c + r), by (a1)
= (0, b, c) + (0, q, r) = f(u) + f(v), by (a2)   (a3)
and f(αu) = f(αa, αb, αc) = (0, αb, αc) = α(0, b, c) = αf(u)   (a4)
(a3) and (a4) ⇒ f is linear.

Problem 1.2
Let a mapping T : V2 → V2 be defined by T(x, y) = (x′, y′), where
x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ.   (a1)
Show that T is a linear map.

Solution. Denote u = (x, y), v = (x1, y1) ∈ V2 and α ∈ R
⇒ u + v = (x + x1, y + y1) and αu = (αx, αy)
⇒ T(u + v) = T(x + x1, y + y1)
= ((x + x1)cos θ − (y + y1)sin θ, (x + x1)sin θ + (y + y1)cos θ), by (a1)
= (x cos θ − y sin θ + x1 cos θ − y1 sin θ, x sin θ + y cos θ + x1 sin θ + y1 cos θ)
= (x cos θ − y sin θ, x sin θ + y cos θ) + (x1 cos θ − y1 sin θ, x1 sin θ + y1 cos θ)
= T(x, y) + T(x1, y1), by (a1)
= T(u) + T(v)   (a2)
and T(αu) = T(αx, αy)
= (αx cos θ − αy sin θ, αx sin θ + αy cos θ), by (a1)
= α(x cos θ − y sin θ, x sin θ + y cos θ) = αT(x, y), by (a1)
= αT(u).   (a3)

(a2) and (a3) ⇒ T is linear.

Problem 1.3
Let V be the vector space of differentiable functions f(x) in the variable x over the field R. Show that the differential mapping
D : V → V   (a1)
is linear.
Hint. Here D = d/dx.

Solution. Let f(x), g(x) ∈ V and α ∈ R. Now
D(f(x) + g(x)) = d/dx (f(x) + g(x)) = d/dx f(x) + d/dx g(x) = D(f(x)) + D(g(x))   (a2)
and
D(αf(x)) = d/dx (αf(x)) = α d/dx f(x) = α D(f(x))   (a3)
(a2) and (a3) ⇒ D is a linear map.
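The two defining identities of Problem 1.3 can be checked symbolically on sample functions (an illustrative sketch assuming SymPy is available; sin x and eˣ are arbitrary choices of differentiable functions, not from the text).

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.sin(x)       # illustrative differentiable functions
g = sp.exp(x)

# additivity: D(f + g) = D(f) + D(g)
assert sp.simplify(sp.diff(f + g, x) - (sp.diff(f, x) + sp.diff(g, x))) == 0
# homogeneity: D(a·f) = a·D(f)
assert sp.simplify(sp.diff(a * f, x) - a * sp.diff(f, x)) == 0
print("D is linear on these samples")
```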

Problem 1.4
Show that the translation mapping T : R² → R² defined by T(x, y) = (x + 1, y − 2) is not a linear map.
Hint. Both the conditions in (1.1) must be satisfied for a linear map T.

Solution. We apply the method of counterexample. Consider (x, y) = (1, 1).
⇒ T(1, 1) = (1 + 1, 1 − 2) = (2, −1) and 5T(1, 1) = 5(2, −1) = (10, −5)   (a1)
Now T(5(1, 1)) = T(5, 5) = (5 + 1, 5 − 2) = (6, 3)   (a2)
(a1) and (a2) ⇒ T(5(1, 1)) ≠ 5T(1, 1)
Hence T is not linear.
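The counterexample runs directly as code, which makes the failure of homogeneity concrete:

```python
# Problem 1.4's counterexample, run numerically.
def T(x, y):
    return (x + 1, y - 2)          # the translation map

Tu = T(1, 1)                       # (2, -1)
five_Tu = (5 * Tu[0], 5 * Tu[1])   # 5·T(1,1) = (10, -5)
T_five_u = T(5, 5)                 # T(5·(1,1)) = (6, 3)

print(five_Tu, T_five_u)           # unequal, so T is not linear
```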

Problem 1.5
Let V be the vector space of n-square matrices over the field R. For an arbitrary matrix A ∈ V, we define a mapping T : V → V by
T(M) = MA + AM, ∀ M ∈ V.   (a1)
Show that T is linear.

Solution. Let M, N ∈ V and α ∈ R. Then
T(M + N) = (M + N)A + A(M + N), by (a1)
= MA + NA + AM + AN
= (MA + AM) + (NA + AN), ∵ matrix addition is commutative and associative
= T(M) + T(N), by (a1)   (a2)
and T(αM) = (αM)A + A(αM) = α(MA) + α(AM) = α(MA + AM) = αT(M), by (a1)   (a3)
(a2) and (a3) ⇒ T is linear.

Problem 1.6
If T is a linear transformation from V2 to V2 defined by T(2, 1) = (3, 4), T(3, −4) = (0, 5), then express (0, 1) as a LC of (2, 1) and (3, −4). Hence find the image of (0, 1) under T.

Solution. Let the linear map T : V2 → V2 be given by
T(2, 1) = (3, 4), T(3, −4) = (0, 5)   (a1)
Consider
(0, 1) = α(2, 1) + β(3, −4) = (2α, α) + (3β, −4β) = (2α + 3β, α − 4β)   (a2)
⇒ 2α + 3β = 0, α − 4β = 1
Solving the above, α = 3/11, β = −2/11
(a2) ⇒ (0, 1) = (3/11)(2, 1) − (2/11)(3, −4)
Then T(0, 1) = (3/11)T(2, 1) − (2/11)T(3, −4), ∵ T is linear
= (3/11)(3, 4) − (2/11)(0, 5)
= (9/11, 12/11 − 10/11) = (9/11, 2/11)

Problem 1.7
Find a linear transformation T from V2 to V2 such that T(1, 0) = (1, 1) and T(0, 1) = (−1, 2). Prove that T maps the square with vertices (0, 0), (1, 0), (1, 1) and (0, 1) onto a parallelogram.

Solution. Let T : V2 → V2 be linear such that
T(1, 0) = (1, 1) and T(0, 1) = (−1, 2).   (a1)
We have to find T(x, y). Consider (x, y) ∈ V2. We have
(x, y) = x(1, 0) + y(0, 1)
⇒ T(x, y) = T(x(1, 0) + y(0, 1))
= xT(1, 0) + yT(0, 1), ∵ T is linear and x, y ∈ R
= x(1, 1) + y(−1, 2), by (a1)
= (x, x) + (−y, 2y)
⇒ T(x, y) = (x − y, x + 2y), (x, y) ∈ V2   (a2)
This gives the formula for T. Let OACB be the square in the xy-plane with vertices O(0, 0), A(1, 0), C(1, 1), B(0, 1); see Fig 5.1(a). Let O′, A′, C′, B′ be the images of O, A, C, B respectively under T given by (a2). Then

[Fig 5.1: (a) the square OACB in the xy-plane; (b) its image O′A′C′B′ under T.]

O′ = T(0, 0) = (0, 0) = origin,
A′ = T(A) = T(1, 0) = (1, 1)
C′ = T(C) = T(1, 1) = (0, 3)
B′ = T(B) = T(0, 1) = (−1, 2), by (a2)
Thus the image of the square OACB is the quadrangle O′A′C′B′, see Fig 5.1(b). Now, using slope = (y2 − y1)/(x2 − x1),
slope of O′A′ = (1 − 0)/(1 − 0) = 1
slope of B′C′ = (3 − 2)/(0 − (−1)) = 1
slope of O′B′ = (2 − 0)/(−1 − 0) = −2
slope of A′C′ = (3 − 1)/(0 − 1) = −2
⇒ slope of O′A′ = slope of B′C′ and slope of O′B′ = slope of A′C′
Thus O′A′ ∥ B′C′ and O′B′ ∥ A′C′, and hence O′A′C′B′ is a parallelogram.


Theorem 1.3
A linear transformation T is completely determined by its values on the elements of a basis. Precisely, if B = {u1, u2, …, un} is a basis for U and v1, v2, …, vn are n vectors (not necessarily distinct) in V, then there exists a unique linear transformation T : U → V such that T(ui) = vi for i = 1, 2, …, n.

Proof. Let U, V be vector spaces over the field F and let B = {u1, u2, …, un} be a basis for U. Consider v1, v2, …, vn ∈ V. Then ∀ u ∈ U = [u1, u2, …, un], there are unique scalars α1, α2, …, αn depending on u such that
u = α1u1 + α2u2 + … + αnun
Define T : U → V by
T(u) = α1v1 + α2v2 + … + αnvn
i.e. T(α1u1 + α2u2 + … + αnun) = α1v1 + α2v2 + … + αnvn, ∀ αi ∈ F.   (1.7)
Here T is well defined (prove yourself: x = y ⇒ T(x) = T(y)). We prove that
(i) T is linear
(ii) Tui = vi for i = 1, 2, …, n
(iii) T is unique with conditions (i) and (ii).

(i) Let u = α1u1 + α2u2 + … + αnun, v = β1u1 + β2u2 + … + βnun ∈ U, αi, βi ∈ F.
Then for λ, μ ∈ F, we have
λu + μv = (λα1 + μβ1)u1 + (λα2 + μβ2)u2 + … + (λαn + μβn)un
⇒ T(λu + μv) = (λα1 + μβ1)v1 + (λα2 + μβ2)v2 + … + (λαn + μβn)vn, by (1.7)
= λ(α1v1 + α2v2 + … + αnvn) + μ(β1v1 + β2v2 + … + βnvn), by VS axioms
= λT(α1u1 + α2u2 + … + αnun) + μT(β1u1 + β2u2 + … + βnun), by (1.7)
= λTu + μTv
⇒ T is linear

(ii) Now u1 = 1u1 + 0u2 + 0u3 + … + 0un, u2 = 0u1 + 1u2 + 0u3 + … + 0un, ….
By (1.7),
Tu1 = 1v1 + 0v2 + 0v3 + … + 0vn = v1
Tu2 = 0v1 + 1v2 + 0v3 + … + 0vn = v2
⋮
Tun = 0v1 + 0v2 + 0v3 + … + 1vn = vn
⇒ Tui = vi, for i = 1, 2, …, n.   (1.8)

(iii) Let S : U → V be any linear transformation such that Sui = vi, for i = 1, 2, …, n.
Then ∀ u ∈ U = [u1, u2, …, un], we have u = α1u1 + α2u2 + … + αnun and
Su = S(α1u1 + α2u2 + … + αnun)
= α1Su1 + α2Su2 + … + αnSun, ∵ S is linear (Thm 1.2(iii))
= α1v1 + α2v2 + … + αnvn, by (1.8)
= T(α1u1 + α2u2 + … + αnun), by (1.7)
⇒ Su = Tu, ∀ u ∈ U
Hence S = T, and hence follows the uniqueness of T.

QED

Problem 1.8
Let T be a linear operator on R² defined by T(3, 1) = (2, −4) and T(1, 1) = (0, 2). Then T(7, 5)
(A) cannot be found uniquely
(B) is equal to (2, 3)
(C) is equal to (−2, 4)
(D) is equal to (2, 4)

Solution. We find the image of (7, 5) under T. Let (x, y) ∈ R². We write
(x, y) = a(3, 1) + b(1, 1) = (3a + b, a + b), a, b ∈ R
⇒ x = 3a + b, y = a + b
⇒ a = ½(x − y), b = ½(3y − x)
Now T(x, y) = T[a(3, 1) + b(1, 1)] = aT(3, 1) + bT(1, 1), ∵ T is linear
⇒ T(x, y) = ½(x − y)(2, −4) + ½(3y − x)(0, 2)
⇒ T(x, y) = (x − y, −3x + 5y)
⇒ T(7, 5) = (2, 4)
Hence, the correct option is (D).
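The same computation, done numerically: solve for the coefficients of (7, 5) in the basis {(3, 1), (1, 1)} and push them through the known images (a sketch assuming NumPy is available).

```python
import numpy as np

# Problem 1.8: write (7,5) in the basis {(3,1), (1,1)} and apply T to
# the basis images T(3,1) = (2,-4), T(1,1) = (0,2).
B = np.array([[3, 1],
              [1, 1]], dtype=float)     # columns are the basis vectors
a, b = np.linalg.solve(B, np.array([7, 5], dtype=float))

image = a * np.array([2, -4]) + b * np.array([0, 2])
print(image)   # [2. 4.]
```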

MCQ 1.1
Consider the following maps:
(i) T : R² → R³ defined by T(x, y) = (x, y, 1)
(ii) T : R³ → R² defined by T(x, y, z) = (2x + y, y)
(iii) T : R⁴ → R¹ defined by T(x, y, z, w) = x + 2y + w
(iv) T : R² → R⁴ defined by T(x, y) = (0, y, x, x + y)
Which of these are not linear transformations?
(A) (i) (B) (i) and (ii) (C) (ii) and (iii) (D) (i) and (iv)

(SET 2002)

MCQ 1.2
Which of the following functions T : R² → R² is not a linear transformation?
(A) T(a, b) = (a + b, a − b)
(B) T(a, b) = (a + b, b)
(C) T(a, b) = (1 + a, b)
(D) T(a, b) = (b, a)

(SET 2011)

MCQ 1.3
Which of the following is a linear transformation from R³ to R²?
(i) f(x, y, z) = (z − x, x + y)
(ii) g(x, y, z) = (4, x + y)
(iii) h(x, y, z) = (xy, x + y)
(A) Only f (B) Only g (C) Only h (D) All the transformations f, g and h

(NET 2015)

MCQ 1.4
Let a, b, c, d ∈ R and T : R² → R² be the linear transformation defined by
T(x, y) = (ax + by, cx + dy), for (x, y) ∈ R².
Let S : C → C be the corresponding map defined by
S(x + iy) = (ax + by) + i(cx + dy), for x, y ∈ R. Then
(A) S is always C-linear, i.e. S(z1 + z2) = S(z1) + S(z2) for all z1, z2 ∈ C and S(αz) = αS(z) for all α ∈ C and z ∈ C
(B) S is C-linear if b = −c and d = a
(C) S is C-linear only if b = −c and d = a
(D) S is C-linear if and only if T is the identity transformation

(NET 2012)

MCQ 1.5
Let α1 = (1, 2), α2 = (3, 4) ∈ R² and β1 = (3, 2, 1), β2 = (6, 5, 4) ∈ R³. Let T be the linear transformation from R² into R³ such that T(αi) = βi, ∀ i = 1, 2. Then
(A) T(x, y) = (3x/2, x + y/2, 2x − y/2)
(B) T(x, y) = (x + y/2, x + 3y/2, 2x + y)
(C) T(x, y) = (3y/2, x + y/2, 2x − y/2)
(D) Such a linear transformation does not exist

(SET 2011)

MCQ 1.6
Let the linear transformation T : R³ → R⁴ be such that
T(1, 0, 0) = (1, 2, 0, 4), T(0, 1, 0) = (−2, 0, 1, 3), T(0, 0, 1) = (0, 0, 0, 0).
Then
(A) T(x, y, z) = (x + 2y, −x, y, 4x + 3y)
(B) T(x, y, z) = (x − 2y, 2x, y, 4x + 3y)
(C) T(x, y, z) = (x − 2y, −x, y, 4x + 3y)
(D) T(x, y, z) = (x + 2y, x, −y, −4x + 3y)

(SET 2015)

SAQ 1.1
Let T : V2 → V4 be the linear map defined by
T(1, 1) = (0, 1, 0, 0), T(1, −1) = (1, 0, 0, 0),
where {(1, 1), (1, −1)} is a basis of V2. Find T(x, y).

SAQ 1.2
A linear transformation is defined by T(x + y) = Tx + Ty and T(αx) = αTx, where x, y are vectors and α is a scalar. Determine whether the mapping T(x1, x2) = (x1 + 1, x2) is a linear transformation.

SAQ 1.3
For a transformation A : V2 → V2 we have A(x, y) = (x′, y′) such that x′ = 2x, y′ = x + y. Show that A is linear.

SAQ 1.4
A linear map T : V2 → V2 is given by T(1, 2) = (3, 0) and T(2, 1) = (1, 2). Find the general formula for T.

SAQ 1.5
Let T : R² → R be the linear mapping for which T(1, 1) = 3 and T(0, 1) = −2. Find T(a, b).

SAQ 1.6
Prove that T : R² → R² defined by T(x, y) = (x + y, x) is a linear transformation.

SAQ 1.7
Prove that T : R³ → R² defined by T(x, y, z) = (|x|, 1) is not a linear transformation.

1.2 Range and kernel of a linear transformation
Kernel or null space

Prove that T : R 3 o R 2 defined by T (x, y, z) (| x |, 1) is not a linear transformation. 1.2 Range and kernel of a linear transformation Kernel or Null space

Let T : U o V be a linear mapping from a vector space U to a vector space V . Null!space!or!the kernel!of T is denoted by N (T ) or Ker T and is defined as N (T )

i.e. Here

KerT

{u U | T (u ) 0 N (T )

T ( 0)

KerT

zero vector in V }

T 1 ({0}).

0 Ÿ 0  KerT  U .

Range of T Range!of T is denoted by R(T ) and is defined as R(T ) {T (u ) | u  U } i.e. R (T ) T (U ).

It can be proved that N (T ) is a subspace of domain space U and R(T ) is a subspace of codomain space V .

13

Nullity of T

Dimension of the null space of linear map T is called the nullity!of T and is denoted by n(T ) i.e.

n(T )

dim N (T )

dim KerT .

Rank of T

The dimension of range of the linear map T is denoted by r (T ) and is called a rank of T i.e. r (T )

dim R(T ).
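For maps between coordinate spaces, the kernel and range can be computed mechanically from the matrix whose columns are the images of the standard basis vectors: the null space of the matrix is N(T) and its column space is R(T). A minimal sketch with SymPy, using an illustrative matrix of our own choosing (not an example from the text):

```python
from sympy import Matrix

# Illustrative map T : R^3 -> R^3, T(x, y, z) = (x + z, y + z, x + y + 2z).
# Columns of A are the images of e1, e2, e3 under T.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])

kernel_basis = A.nullspace()    # basis of N(T); here spanned by (-1, -1, 1)
range_basis = A.columnspace()   # basis of R(T)

print(A.rank(), len(kernel_basis), len(range_basis))
```

Here r(T) = dim R(T) = 2 and n(T) = dim N(T) = 1, matching the definitions above.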

Theorem 1.4
Let T : U → V be a linear map. Then
(a) R(T) is a subspace of V.
(b) N(T) is a subspace of U.
(c) T is 1-1 ⇔ N(T) is the zero subspace of U.
(d) U = [u1, u2, ..., un] ⇒ R(T) = [Tu1, Tu2, ..., Tun].
(e) U is a finite dimensional vector space ⇒ dim R(T) ≤ dim U.

Proof. Let U, V be vector spaces over a field F and T : U → V a linear map. Then T(0) = 0.

(a) By definition, we have R(T) = {T(u) | u ∈ U} ⊆ V, since T : U → V, and 0 ∈ R(T)
⇒ R(T) is a nonempty subset of V.
Let v1, v2 be any elements of R(T) and α, β ∈ F.
⇒ v1 = Tu1, v2 = Tu2 for some u1, u2 ∈ U
⇒ αv1 + βv2 = αTu1 + βTu2 = T(αu1 + βu2), since T is linear and αu1 + βu2 ∈ U
⇒ αv1 + βv2 ∈ R(T)
⇒ R(T) is a subspace of the vector space V.

(b) By definition, 0 ∈ N(T) = {u ∈ U | Tu = 0, the zero vector of V} ⊆ U.
We have Tu1 = 0, Tu2 = 0 and αu1 + βu2 ∈ U for u1, u2 ∈ N(T), α, β ∈ F, since N(T) ⊆ U.
⇒ T(αu1 + βu2) = αTu1 + βTu2 = α0 + β0 = 0 + 0 = 0
⇒ αu1 + βu2 ∈ N(T)
⇒ N(T) is a subspace of U.

(c) Let T : U → V be a 1-1 linear map. Then for u ∈ N(T), Tu = 0. But T0 = 0.
Since T is 1-1, Tu = T0 ⇒ u = 0. Then N(T) = {0}.
Conversely, let N(T) = {0}. Consider u1, u2 ∈ U with Tu1 = Tu2. Then
T(u1 − u2) = Tu1 − Tu2 = 0, the zero vector of V
⇒ u1 − u2 ∈ N(T) = {0}, by definition of N(T)
⇒ u1 − u2 = 0, i.e. u1 = u2.
Thus Tu1 = Tu2 ⇒ u1 = u2, so T is 1-1.

(d) Let U = [u1, u2, ..., un] = {α1u1 + α2u2 + ... + αnun | αi ∈ F, 1 ≤ i ≤ n}.
Then R(T) = {T(u) | u ∈ U}
= {T(α1u1 + α2u2 + ... + αnun) | αi ∈ F, 1 ≤ i ≤ n}
= {α1Tu1 + α2Tu2 + ... + αnTun | αi ∈ F, 1 ≤ i ≤ n}
= [Tu1, Tu2, ..., Tun].

(e) Let U be a finite dimensional vector space, say dim U = n. Then U has a basis containing n vectors, say B = {u1, u2, ..., un}. Hence U = [u1, u2, ..., un] = L(B) and R(T) = [Tu1, Tu2, ..., Tun], by (d).
If Tu1, Tu2, ..., Tun are LI vectors, they form a basis for R(T) and dim R(T) = n. If {Tu1, Tu2, ..., Tun} is not LI, then a basis for R(T) contains fewer than n vectors; in this case dim R(T) < n. Thus dim R(T) ≤ n, i.e. dim R(T) ≤ dim U. QED

Theorem 1.5
Let T : U → V be a linear map. Then
(a) If T is 1-1 and u1, u2, ..., un are LI vectors in U, then Tu1, Tu2, ..., Tun are LI vectors in V.
(b) If v1, v2, ..., vn are LI vectors in R(T) and u1, u2, ..., un are vectors in U such that Tui = vi for i = 1, 2, ..., n, then {u1, u2, ..., un} is LI.

Proof. Let U, V be vector spaces over the same field F and T : U → V a linear map. Then
T(0) = 0   (1.9a)
and T(α1u1 + ... + αnun) = α1Tu1 + ... + αnTun, αi ∈ F, ui ∈ U.   (1.9b)

(a) Let {u1, u2, ..., un} be a LI subset of U and let T be 1-1. Consider
α1Tu1 + α2Tu2 + ... + αnTun = 0, where αi ∈ F, 1 ≤ i ≤ n
⇒ T(α1u1 + α2u2 + ... + αnun) = T(0), by Thm 1.2(iii)
⇒ α1u1 + α2u2 + ... + αnun = 0, since T is 1-1
⇒ α1 = 0 = α2 = ... = αn, since {u1, u2, ..., un} is LI
⇒ {Tu1, Tu2, ..., Tun} is LI.

(b) Let {v1, v2, ..., vn} be a LI subset of V with vi = Tui, ui ∈ U, 1 ≤ i ≤ n. Consider
α1u1 + α2u2 + ... + αnun = 0, where αi ∈ F
⇒ T(α1u1 + α2u2 + ... + αnun) = T(0)
⇒ α1Tu1 + α2Tu2 + ... + αnTun = 0, by (1.9)
⇒ α1v1 + α2v2 + ... + αnvn = 0, since Tui = vi for all i
⇒ α1 = 0 = α2 = ... = αn, since {v1, v2, ..., vn} is LI.
Hence {u1, u2, ..., un} is LI. QED

Problem 1.9
Prove that the linear map T : V3 → V3 defined by
Te1 = e1 − e2, Te2 = 2e2 + e3, Te3 = e1 + e2 + e3   (a1)
is neither 1-1 nor onto, where {e1, e2, e3} is the standard basis for V3.

Solution. Since {e1, e2, e3} is the standard basis for V3, we have e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Then
Te1 = e1 − e2 = (1, 0, 0) − (0, 1, 0) = (1, −1, 0)
Te2 = 2e2 + e3 = 2(0, 1, 0) + (0, 0, 1) = (0, 2, 1)
and Te3 = e1 + e2 + e3 = (1, 0, 0) + (0, 1, 0) + (0, 0, 1) = (1, 1, 1).

(i) Now V3 = [e1, e2, e3] = domain space of T.
⇒ R(T) = [Te1, Te2, Te3]
= [(1, −1, 0), (0, 2, 1), (1, 1, 1)]
= [(1, −1, 0), (0, 2, 1), (1, −1, 0) + (0, 2, 1)]
= [(1, −1, 0), (0, 2, 1)]
Since {(1, −1, 0), (0, 2, 1)} is LI, it is also a basis for R(T).
⇒ dim R(T) = 2 ≠ 3 = dim V3 = dim (codomain space)
i.e. R(T) ≠ codomain V3
⇒ T : V3 → V3 is not onto.

(ii) Let (x, y, z) ∈ N(T)
⇒ T(x, y, z) = T(xe1 + ye2 + ze3) = (0, 0, 0)
⇒ xTe1 + yTe2 + zTe3 = (0, 0, 0), since T is linear
⇒ x(1, −1, 0) + y(0, 2, 1) + z(1, 1, 1) = (0, 0, 0), by (a1)
⇒ (x + z, −x + 2y + z, y + z) = (0, 0, 0)
⇒ x + z = 0, −x + 2y + z = 0, y + z = 0.
Solving these equations, y = x, z = −x.
⇒ N(T) = {(x, y, z) ∈ V3 | y = x, z = −x}
= {(x, x, −x) | x ∈ R}
= {x(1, 1, −1) | x ∈ R}
= [(1, 1, −1)] ≠ {0}
Hence the linear map T is not 1-1.

Theorem 1.6 (Rank-nullity theorem)

Let T : U → V be a linear map and U a finite dimensional vector space. Then
dim R(T) + dim N(T) = dim U, i.e. Rank + Nullity = dimension of domain.

Proof. Let T : U → V be a linear map and U a finite dimensional vector space, say dim U = p. Now
N(T) = {u ∈ U | Tu = 0, the zero vector of V} is a subspace of U.
Hence N(T) is finite dimensional, since U is finite dimensional. Let dim N(T) = n. Then n ≤ p. Let S = {u1, u2, ..., un} be a basis for N(T). But N(T) ⊆ U, so S is a LI subset of U. The LI set S can be extended to a basis B for U. Let B = {u1, u2, ..., un, un+1, un+2, ..., up} be a basis for U. Now
T : U → V is linear, U = [B] = [u1, u2, ..., up]
and Tui = 0 for 1 ≤ i ≤ n, as ui ∈ N(T) for 1 ≤ i ≤ n.
Then R(T) = [Tu1, Tu2, ..., Tun, Tun+1, Tun+2, ..., Tup]
= [Tun+1, Tun+2, ..., Tup] = L(A), where A = {Tun+1, Tun+2, ..., Tup}.   (1.10)
For scalars αn+1, αn+2, ..., αp, consider
αn+1 Tun+1 + αn+2 Tun+2 + ... + αp Tup = 0, the zero vector of V   (1.11)
⇒ T(αn+1 un+1 + αn+2 un+2 + ... + αp up) = 0, since T is linear
⇒ αn+1 un+1 + αn+2 un+2 + ... + αp up ∈ N(T)
⇒ αn+1 un+1 + αn+2 un+2 + ... + αp up ∈ [u1, u2, ..., un], since S is a basis of N(T)
⇒ αn+1 un+1 + ... + αp up = β1u1 + β2u2 + ... + βnun, where the βi are scalars
⇒ (−β1)u1 + (−β2)u2 + ... + (−βn)un + αn+1 un+1 + αn+2 un+2 + ... + αp up = 0
⇒ β1 = 0 = β2 = ... = βn = αn+1 = αn+2 = ... = αp, since B is LI.
Thus (1.11) ⇒ 0 = αn+1 = αn+2 = ... = αp
⇒ A is linearly independent.
Then (1.10) ⇒ A = {Tun+1, Tun+2, ..., Tup} has p − n vectors and is a basis for R(T).
⇒ p − n = dim R(T) = dim U − dim N(T)
i.e. dim R(T) + dim N(T) = dim U. QED
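The theorem can also be checked numerically: for any matrix representing a linear map, the rank plus the dimension of the null space always equals the number of columns (the dimension of the domain). A small sketch with SymPy, using an arbitrary illustrative matrix (not an example from the text):

```python
from sympy import Matrix

# Illustrative T : R^5 -> R^3; the third row is the sum of the
# first two, so the rank is deliberately less than 3.
A = Matrix([[1, 2, 0, 1, 3],
            [0, 1, 1, 0, 1],
            [1, 3, 1, 1, 4]])

rank = A.rank()               # dim R(T)
nullity = len(A.nullspace())  # dim N(T)

# Rank-nullity: rank + nullity = number of columns = dim U.
print(rank, nullity, A.cols)
```

Here rank = 2 and nullity = 3, and 2 + 3 = 5 = dim U, as the theorem requires.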

Problem 1.10
Let T : V4 → V3 be a linear map defined by
Te1 = (1, 1, 1), Te2 = (1, −1, 1), Te3 = (1, 0, 0), Te4 = (1, 0, 1).   (a1)
Verify the rank-nullity theorem.

Solution. Let {e1, e2, e3, e4} be the standard basis for V4 = R⁴.
⇒ e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1).   (a2)
The linear map T : V4 → V3 is given by (a1). For any (x1, x2, x3, x4) ∈ V4, we have
(x1, x2, x3, x4) = x1(1, 0, 0, 0) + x2(0, 1, 0, 0) + x3(0, 0, 1, 0) + x4(0, 0, 0, 1)
= x1e1 + x2e2 + x3e3 + x4e4.   (a3)
Since V4 = [e1, e2, e3, e4], we have
R(T) = [Te1, Te2, Te3, Te4] = [(1, 1, 1), (1, −1, 1), (1, 0, 0), (1, 0, 1)], by (a1).   (a4)
Now (1, 0, 1) = ½(1, 1, 1) + ½(1, −1, 1) + 0(1, 0, 0).
Then (a4) ⇒ R(T) = [(1, 1, 1), (1, −1, 1), (1, 0, 0)].
Also det[(1, 1, 1), (1, −1, 1), (1, 0, 0)] = 2 ≠ 0
⇒ (1, 1, 1), (1, −1, 1), (1, 0, 0) are LI
⇒ {(1, 1, 1), (1, −1, 1), (1, 0, 0)} is a basis for R(T)
⇒ dim R(T) = 3, i.e. R(T) = V3 and T is onto.
By definition,
N(T) = {(x1, x2, x3, x4) ∈ V4 | T(x1, x2, x3, x4) = (0, 0, 0)}
= {(x1, x2, x3, x4) | T(x1e1 + x2e2 + x3e3 + x4e4) = (0, 0, 0)}, by (a3)
= {(x1, x2, x3, x4) | x1Te1 + x2Te2 + x3Te3 + x4Te4 = (0, 0, 0)}.   (a5)
Now x1Te1 + ... + x4Te4 = (0, 0, 0)
⇒ x1(1, 1, 1) + x2(1, −1, 1) + x3(1, 0, 0) + x4(1, 0, 1) = (0, 0, 0)
⇒ (x1 + x2 + x3 + x4, x1 − x2, x1 + x2 + x4) = (0, 0, 0)
⇒ x1 + x2 + x3 + x4 = 0, x1 − x2 = 0, x1 + x2 + x4 = 0.
Solving these equations, x2 = x1, x3 = 0, x4 = −2x1.
With these values, (a5) becomes
N(T) = {(x1, x2, x3, x4) | x2 = x1, x3 = 0, x4 = −2x1}
= {(x1, x1, 0, −2x1) | x1 ∈ R}
= {x1(1, 1, 0, −2) | x1 ∈ R}
= [(1, 1, 0, −2)].
Since {(1, 1, 0, −2)} is LI, it forms a basis for N(T).
⇒ dim N(T) = 1, so T is not 1-1
⇒ dim R(T) + dim N(T) = 3 + 1 = 4 = dim V4 = dim (domain space).
Thus the rank-nullity theorem is verified.
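The hand computation in Problem 1.10 can be replicated mechanically: the matrix of T has columns Te1, ..., Te4, its rank is dim R(T), and its null space is N(T). A sketch with SymPy:

```python
from sympy import Matrix

# Columns are Te1 = (1,1,1), Te2 = (1,-1,1), Te3 = (1,0,0), Te4 = (1,0,1).
A = Matrix([[1, 1, 1, 1],
            [1, -1, 0, 0],
            [1, 1, 0, 1]])

rank = A.rank()        # dim R(T) = 3, so T is onto V3
null = A.nullspace()   # one basis vector, proportional to (1, 1, 0, -2)

print(rank, len(null))
```

This reproduces rank 3 and nullity 1, and 3 + 1 = 4 = dim V4 as verified above.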

Problem 1.11
Let T : V2 → V3 be a linear map defined by
T(x1, x2) = (x1 + x2, x2 − x1, −x1).
Show that T is 1-1.

Solution.
N(T) = {(x1, x2) ∈ V2 | T(x1, x2) = (0, 0, 0)}
= {(x1, x2) | (x1 + x2, x2 − x1, −x1) = (0, 0, 0)}
= {(x1, x2) | x1 + x2 = 0, x2 − x1 = 0, −x1 = 0}
= {(x1, x2) | x1 = 0 and x2 = 0}
= {(0, 0)}
⇒ T is one-one, since T is linear.

Problem 1.12
Let T : V3 → V3 be a linear map defined by
T(x1, x2, x3) = (x1, x2, 0).   (a1)
Find the range and kernel of T.

Solution. The linear map T : V3 → V3 is given by (a1).
Range of T = R(T) = {T(x1, x2, x3) | (x1, x2, x3) ∈ V3}
= {(x1, x2, 0) | x1, x2 ∈ R}, by (a1)
= {x1(1, 0, 0) + x2(0, 1, 0) | x1, x2 ∈ R}
= [(1, 0, 0), (0, 1, 0)].
Kernel of T = N(T) = {(x1, x2, x3) ∈ V3 | T(x1, x2, x3) = (0, 0, 0)}
= {(x1, x2, x3) | (x1, x2, 0) = (0, 0, 0)}, by (a1)
= {(x1, x2, x3) | x1 = 0 = x2}
= {(0, 0, x3) | x3 ∈ R}
= {x3(0, 0, 1) | x3 ∈ R}
= [(0, 0, 1)].

Problem 1.13

Let T : V3 → V3 be defined by
T(x1, x2, x3) = (x1 + x2, x2 + x3, x3 − 2x1).   (a1)
Find the range, kernel, rank and nullity of T, and verify the rank-nullity theorem.

Solution. Denote x = (x1, x2, x3), y = (y1, y2, y3) ∈ V3 and α, β ∈ R. Then
αx + βy = (αx1 + βy1, αx2 + βy2, αx3 + βy3)
T(αx + βy) = (αx1 + βy1 + αx2 + βy2, αx2 + βy2 + αx3 + βy3, αx3 + βy3 − 2αx1 − 2βy1)
= (α(x1 + x2), α(x2 + x3), α(x3 − 2x1)) + (β(y1 + y2), β(y2 + y3), β(y3 − 2y1))
= α(x1 + x2, x2 + x3, x3 − 2x1) + β(y1 + y2, y2 + y3, y3 − 2y1)
= αTx + βTy, by (a1).
Hence T is linear. Let B = {e1, e2, e3} be the standard basis of V3, with dim V3 = 3. Then by (a1), we have
Te1 = T(1, 0, 0) = (1 + 0, 0 + 0, 0 − 2) = (1, 0, −2)
Te2 = T(0, 1, 0) = (0 + 1, 1 + 0, 0 − 0) = (1, 1, 0)
Te3 = T(0, 0, 1) = (0 + 0, 0 + 1, 1 − 0) = (0, 1, 1).
Now V3 = L(B) = [e1, e2, e3] is the domain space of T. Hence its range space is
R(T) = [Te1, Te2, Te3] = [(1, 0, −2), (1, 1, 0), (0, 1, 1)].
Also {(1, 0, −2), (1, 1, 0), (0, 1, 1)} is LI, so it is a basis of R(T)
⇒ dim R(T) = 3, i.e. the rank of T is 3.
Kernel of T = N(T) = {(x1, x2, x3) ∈ V3 | T(x1, x2, x3) = (0, 0, 0)}
= {(x1, x2, x3) | (x1 + x2, x2 + x3, x3 − 2x1) = (0, 0, 0)}
= {(x1, x2, x3) | x1 + x2 = 0, x2 + x3 = 0, x3 − 2x1 = 0}
= {(x1, x2, x3) | x1 = x2 = 0 = x3}
= {(0, 0, 0)}
⇒ dim N(T) = 0, i.e. the nullity of T is zero.
Now dim R(T) + dim N(T) = 3 + 0 = 3 = dim V3 = dim (domain of T).
This verifies the rank-nullity theorem.

Theorem 1.7
Let U and V be finite dimensional vector spaces over the same field and of the same dimension. Then a linear map T : U → V is 1-1 ⇔ T is onto.

Proof. Let U, V be finite dimensional vector spaces over the same field F with dim U = dim V, and let T : U → V be a linear map. Now R(T) is a subspace of V
⇒ dim R(T) = dim V ⇔ R(T) = V.
Also
T is 1-1 ⇔ N(T) = {0} ⇔ dim N(T) = 0
⇔ dim U = dim R(T) + 0, by the rank-nullity theorem (dim U = dim R(T) + dim N(T))
⇔ dim V = dim R(T), since dim U = dim V
⇔ R(T) = V = codomain of T
⇔ T is onto. QED

Corollary 1. Let U, V be finite dimensional vector spaces and T : U → V a linear one-one and onto map. Then dim U = dim V.

Proof. Let T : U → V be a linear 1-1 and onto map.
⇒ N(T) = {0} and R(T) = V
⇒ dim N(T) = 0 and dim R(T) = dim V.   (1.12)
By the rank-nullity theorem,
dim U = dim R(T) + dim N(T) = dim V + 0 = dim V, by (1.12)
⇒ dim U = dim V. QED

Corollary 2. Let T : U → V be a linear map and dim U = dim V = a finite positive integer. Then the following statements are equivalent:
(a) T is onto
(b) R(T) = V
(c) dim R(T) = dim V
(d) dim N(T) = 0
(e) N(T) = {0}
(f) T is 1-1.

Hint. Prove that (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (e) ⇒ (f) ⇒ (a).

Proof. Let T : U → V be a linear map and
dim U = dim V = n ∈ N, say.   (1.13)
By the rank-nullity theorem, we have
dim R(T) + dim N(T) = dim U.   (1.14)
Then (a) ⇒ (b) ⇒ (c) is obvious. Assume (c), i.e. dim R(T) = dim V.
⇒ dim R(T) = dim U = n, by (1.13).
Then (1.14) ⇒ n + dim N(T) = n, i.e. dim N(T) = 0, and hence (d).
Now (d) ⇒ (e) is obvious. Also (e) ⇒ (f), by Thm 1.4(c). Assume (f), i.e. T is 1-1.
⇒ N(T) = {0}, i.e. dim N(T) = 0, by Thm 1.4(c) as T is linear
⇒ dim R(T) = dim U, by (1.14)
⇒ dim R(T) = dim V, by (1.13).
Since R(T) is a subspace of the finite dimensional vector space V, we have R(T) = V. Then T is onto, i.e. (a). QED

Problem 1.14
If T is the linear map on V2 defined by
T(x1, x2) = (2x1 + 3x2, x1 − x2),   (a1)
show that T is one-one and onto.

Solution. Consider the linear map T : V2 → V2 given by (a1), with
dim (domain of T) = dim (codomain of T) = 2, a finite number.
Hence T is 1-1 ⇔ T is onto. Now
N(T) = {(x1, x2) ∈ V2 | T(x1, x2) = (0, 0)}
= {(x1, x2) | (2x1 + 3x2, x1 − x2) = (0, 0)}, by (a1)
= {(x1, x2) | 2x1 + 3x2 = 0 and x1 − x2 = 0}
= {(x1, x2) | x1 = 0 = x2}
= {(0, 0)}.
Hence T is 1-1 and onto.

Problem 1.15
If T is the linear operator on R³ given by
T(x1, x2, x3) = (x1 + x2, x2 + x3, x3),
then the dimension of Ker T is
(A) 1 (B) 0 (C) 3 (D) 2

Solution. Assume that T(x1, x2, x3) = (0, 0, 0)
⇒ (x1 + x2, x2 + x3, x3) = (0, 0, 0)
⇒ x1 = 0, x2 = 0, x3 = 0
⇒ Ker T = {(0, 0, 0)}
⇒ T is one-one and onto
⇒ dim Ker T = 0.
Hence the correct option is (B).

MCQ 1.7
If T : R³ → R³ is the identity map, then the nullity of T = ?
(A) 0 (B) 1 (C) 2 (D) 3
(SET 2013)

MCQ 1.8
If T is the linear operator on R³ given by T(x1, x2, x3) = (x1 + x2, x2 + x3, 0), then the dimension of Image T is:
(A) 0 (B) 1 (C) 3 (D) 2

(SET 2006)

MCQ 1.9
Let f : U → V be a linear map and {u1, u2, ..., un} a set of linearly independent vectors in U. Then the set {f(u1), f(u2), ..., f(un)} is linearly independent iff:
(A) f is one-one and onto
(B) f is one-one
(C) f is onto
(D) U = V

(SET 2011)

MCQ 1.10
Let S : Rⁿ → Rⁿ be a linear transformation. Consider the following statements:
(i) S is one-to-one iff S takes some basis of Rⁿ to a basis of Rⁿ.
(ii) S is one-to-one iff S takes every basis of Rⁿ to a basis of Rⁿ.
Then
(A) Only (i) is true
(B) Only (ii) is true
(C) Both (i) and (ii) are true
(D) Both (i) and (ii) are false

(SET 2001)

MCQ 1.11
Let V be an n-dimensional vector space over a field K and T : V → V a linear transformation. Which of the following statements is correct?
(A) T is 1-1 implies T is onto, but not vice versa
(B) T is onto implies T is 1-1, but not vice versa
(C) T is 1-1 if and only if T is onto
(D) There is no relationship between T being 1-1 and T being onto

MCQ 1.12
Let T : V → W be a linear transformation, where V and W are finite dimensional vector spaces. Then
(A) dim V = dim Ker T + dim W
(B) dim V = rank T + nullity T
(C) dim W = rank T + nullity T
(D) dim W = dim V − rank T

(SET 2009)

MCQ 1.13
Let f : V → W be a linear transformation of finite dimensional R-vector spaces. Then
(A) dim V ≥ dim Ker f + dim W
(B) dim V − dim Ker f ≤ dim W
(C) dim V ≤ dim Ker f + dim W
(D) dim V − dim Ker f ≥ dim W

(SET 2007)

MCQ 1.14
Let T : Rⁿ → Rⁿ be a linear transformation. Which of the following statements implies that T is bijective?
(A) Nullity(T) = n
(B) Rank(T) = Nullity(T) = n
(C) Rank(T) − Nullity(T) = n
(D) Rank(T) + Nullity(T) = n

(NET 2013)

Hint. A bijective map is a 1-1 and onto map.

MCQ 1.15
Let V be the vector space of polynomials of degree ≤ 99 over R and let T : V → V be defined as T(p)(x) = dp/dx. Then:
(A) Rank of T = 100 and Nullity of T = 0
(B) Rank of T = 99 and Nullity of T = 1
(C) Rank of T = 98 and Nullity of T = 1
(D) Rank of T = 98 and Nullity of T = 2

(SET 2001)
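The differentiation map in MCQ 1.15 can be probed concretely: on polynomials of degree ≤ n, d/dx kills only the constants (nullity 1) and its image is the polynomials of degree ≤ n − 1 (rank n). A sketch on the scaled-down space of degree ≤ 3, using the matrix of d/dx in the basis {1, x, x², x³} (an illustrative reduction of the 100-dimensional case in the question):

```python
from sympy import Matrix

# Matrix of d/dx on span{1, x, x^2, x^3}: column j holds the
# coordinates of the derivative of the j-th basis vector.
# d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x, d/dx(x^3) = 3x^2.
D = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3],
            [0, 0, 0, 0]])

print(D.rank(), len(D.nullspace()))  # rank 3, nullity 1
```

The same pattern on degree ≤ 99 gives rank 99 and nullity 1, which is option (B).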

MCQ 1.16
Consider the linear transformation T : R⁷ → R⁷ defined by
T(x1, x2, ..., x6, x7) = (x7, x6, ..., x2, x1).
Which of the following statements are true?
(A) The determinant of T is 1
(B) There is a basis of R⁷ with respect to which T is a diagonal matrix
(C) T⁷ = I
(D) The smallest n such that Tⁿ = I is even

(NET 2011)

SAQ 1.8
Find the range and rank of the linear transformation T(x1, x2, x3) = (x1 + x2, x2 + x3).

SAQ 1.9
If T is a linear transformation defined by
T(1, 0, 0) = (0, 1, 0, 2), T(0, 1, 0) = (0, 1, 1, 0), T(0, 0, 1) = (0, 1, 1, 4),
find the range of T, the null space of T, dim R(T) and dim N(T).

SAQ 1.10
Let a linear transformation T : V3 → V3 be defined by
T(e1) = e1 + e2 + e3, T(e2) = e1 − e2 − e3, T(e3) = e1 + 3e2 + 3e3,
where {e1, e2, e3} is the standard basis of V3. Find the range, kernel and rank of T, and verify the rank-nullity theorem.

SAQ 1.11
A linear transformation T : V3 → V2 is defined as
Te1 = (2, 1), Te2 = (0, 1), Te3 = (1, 1),
where {e1, e2, e3} is the standard basis of V3. Find the range, kernel, rank and nullity of T, and verify the rank-nullity theorem.

SAQ 1.12
Let a linear map T : V2 → V3 be defined by T(x1, x2) = (x1, x1 + x2, x2). Find the range, kernel, rank and nullity of T. Is T one-one? Is it onto?

SUMMARY
Linear transformations are explained, and their range, kernel and nullity are defined. A good number of solved problems and MCQs are given in this regard, so that one can compute these quantities. The rank-nullity theorem is proved and verified by examples.

KEY WORDS
Linear transformation, homomorphism, range, rank, nullity

UNIT 04-02: ISOMORPHISM OF VECTOR SPACES

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of an isomorphism of vector spaces

INTRODUCTION
Having discussed linear transformations (vector space homomorphisms), in this unit we study a particular case, the isomorphism. Thus an isomorphism is basically a linear transformation. In this regard there are some rich results, which we explain with illustrations.

2.1 Isomorphism

Definition
Let U and V be vector spaces over the field F. If a linear map (or homomorphism)

T : U → V is 1-1, then it is called an isomorphism. If the isomorphism defined above is also onto, then the vector spaces U and V are said to be isomorphic to each other. We write U ≅ V to mean U is isomorphic to V.

Remark
By definition, a 1-1 linear map T is an isomorphism from the vector space U(F) to V(F) if it satisfies the property
T(αu + βv) = αTu + βTv, ∀ u, v ∈ U and ∀ α, β ∈ F   (2.1a)
or
T(u + v) = Tu + Tv and T(αu) = αTu, ∀ u, v ∈ U and ∀ α ∈ F.   (2.1b)
The conditions in (2.1b) imply that T preserves sums and scalar products. It means that the structures are preserved by T, i.e. the spaces U(F) and V(F) are structurally identical.

Remark. T : V → V is an isomorphism ⇔ Kernel of T = {zero element of V}.

Problem 2.1
If V is a finite dimensional vector space and T is a homomorphism of V onto V, prove that T must be 1-1 and hence an isomorphism.

Hint. dim V = n, B ⊆ V, B has n vectors, L(B) = V ⇒ B is a basis of V.
T is 1-1 ⇔ (T(x) = T(y) ⇒ x = y).

Solution. Let V be an n-dimensional vector space over F. It has a basis, say B = {v1, ..., vn}. Denote C = {Tv1, ..., Tvn}. Let v ∈ V. Since the homomorphism T is onto, ∃ u ∈ V such that
Tu = v.   (a1)
Now u = α1v1 + ... + αnvn, where α1, ..., αn ∈ F.
Then (a1) ⇒ T(α1v1 + ... + αnvn) = v
i.e. α1Tv1 + ... + αnTvn = v
⇒ any v ∈ V is expressed as a linear combination of the vectors Tv1, ..., Tvn.
Hence L(C) = V. Now dim V = n and C has n vectors. Then C is a basis of V. We show that T is 1-1. Let x, y ∈ V be such that Tx = Ty.
⇒ x = x1v1 + ... + xnvn, y = y1v1 + ... + ynvn, where each xi, yi ∈ F.
Then Tx = Ty ⇒ T(x1v1 + ... + xnvn) = T(y1v1 + ... + ynvn)
⇒ x1Tv1 + ... + xnTvn = y1Tv1 + ... + ynTvn, since T is a homomorphism
⇒ (x1 − y1)Tv1 + ... + (xn − yn)Tvn = 0, the zero of V
⇒ x1 − y1 = 0, ..., xn − yn = 0, since C is a basis of V and Tv1, ..., Tvn are LI
⇒ x1 = y1, ..., xn = yn, i.e. x = y.
Thus Tx = Ty ⇒ x = y. Hence T is 1-1. Since T is a homomorphism, it is an isomorphism.

Problem 2.2

Let {v1, ..., vn} be a basis of V and let w1, ..., wn be any n elements in V. Define T on V by
T(λ1v1 + ... + λnvn) = λ1w1 + ... + λnwn.   (a1)
(a) Show that T is a homomorphism of V into itself.
(b) When is T an isomorphism?

Hint. T : V → V is an isomorphism ⇔ Kernel of T = {zero element of V}.

Solution. (a) Let v1, ..., vn be a basis of a vector space V over F and let T be defined by (a1). Consider any x, y ∈ V and γ ∈ F.
⇒ x = α1v1 + ... + αnvn, y = β1v1 + ... + βnvn for some αi, βi ∈ F
⇒ T(x) = T(α1v1 + ... + αnvn) = α1w1 + ... + αnwn   (a2)
and T(y) = T(β1v1 + ... + βnvn) = β1w1 + ... + βnwn.   (a3)
Now T is a homomorphism ⇔ T(x + y) = T(x) + T(y) and T(γx) = γT(x).
Here x + y = (α1v1 + ... + αnvn) + (β1v1 + ... + βnvn) = (α1 + β1)v1 + ... + (αn + βn)vn
and γx = (γα1)v1 + ... + (γαn)vn.
⇒ T(x + y) = T[(α1 + β1)v1 + ... + (αn + βn)vn]
= (α1 + β1)w1 + ... + (αn + βn)wn
= (α1w1 + β1w1) + ... + (αnwn + βnwn), by distributivity in V
= (α1w1 + ... + αnwn) + (β1w1 + ... + βnwn)
= T(x) + T(y), by (a2) and (a3).   (a4)
Similarly we can show that T(γx) = γT(x); verify this yourself.
Then (a4) shows that T is a homomorphism.
(b) We know that T : V → V is an isomorphism ⇔ Ker T = {0}. Let T be an isomorphism. Consider
α1w1 + α2w2 + ... + αnwn = 0, αi ∈ F.   (a5)
⇒ T(α1v1 + α2v2 + ... + αnvn) = 0 = T(0), by definition (a1)
⇒ α1v1 + α2v2 + ... + αnvn = 0, since T is an isomorphism
⇒ α1 = 0 = α2 = ... = αn, since v1, v2, ..., vn are LI.
Thus (a5) ⇒ α1 = 0 = α2 = ... = αn
⇒ {w1, w2, ..., wn} is LI in V
⇒ {w1, w2, ..., wn} is a basis of V, since dim V = n.
Hence T is an isomorphism precisely when {w1, w2, ..., wn} is linearly independent, i.e. when it forms a basis of V.

Definition
A linear map T : U(F) → V(F) is said to be non-singular if T is 1-1 and onto. Thus T⁻¹ : V → U exists, and it is also 1-1 and onto. When T⁻¹ exists, we say that the mapping T is invertible.

Composition of linear mappings
If T : U → V and S : V → W are linear maps, then the composition of the maps T and S is denoted by S ∘ T : U → W and is defined by
(S ∘ T)u = S(Tu), ∀ u ∈ U.
For convenience, we write S ∘ T = ST.

Theorem 2.1
Let T : U → V be a nonsingular linear map. Then T⁻¹ : V → U is a linear 1-1 and onto map.

Hint. T⁻¹ is well defined ⇔ (u = v ⇒ T⁻¹(u) = T⁻¹(v)).

Proof. Let T : U → V be a nonsingular linear map. Hence it is 1-1, onto and linear. Define
T⁻¹ : V → U by T⁻¹(v) = u ⇔ Tu = v.   (2.2)
We find that T⁻¹ is well defined and onto. Let
T⁻¹v1 = u1, T⁻¹v2 = u2, where v1, v2 ∈ V, u1, u2 ∈ U and α, β ∈ F
⇒ Tu1 = v1, Tu2 = v2.   (2.3)
(i) Assume that T⁻¹v1 = T⁻¹v2. Then u1 = u2 and Tu1 = Tu2, i.e. v1 = v2, by (2.3).
Thus T⁻¹v1 = T⁻¹v2 ⇒ v1 = v2
⇒ T⁻¹ is 1-1.
(ii) Now
αv1 + βv2 = αTu1 + βTu2, by (2.3)
= T(αu1 + βu2), since T is linear
⇒ T⁻¹(αv1 + βv2) = αu1 + βu2, by (2.2)
= αT⁻¹(v1) + βT⁻¹(v2), by (2.3).
Hence T⁻¹ is linear. Thus T⁻¹ : V → U is a linear non-singular map. QED

Problem 2.3
Let F be a field, let Vn = {α0 + α1x + ... + αn−1 x^(n−1) | αi ∈ F} be a vector space over F under addition of polynomials and scalar multiplication of a polynomial, and let F^n = {(α0, α1, ..., αn−1) | αi ∈ F} be a vector space over F under coordinate-wise addition and scalar multiplication. Prove that Vn and F^n are isomorphic.

Solution. Define T : Vn → F^n by
T(α0 + α1x + ... + αn−1 x^(n−1)) = (α0, α1, ..., αn−1), αi ∈ F.   (a1)
Then T is onto. Consider any polynomials f(x) = α0 + α1x + ... + αn−1 x^(n−1) and g(x) = β0 + β1x + ... + βn−1 x^(n−1) in Vn, where αi, βi ∈ F, and λ, μ ∈ F.
(i) T(λf(x) + μg(x)) = T((λα0 + μβ0) + (λα1 + μβ1)x + ... + (λαn−1 + μβn−1) x^(n−1))
= (λα0 + μβ0, λα1 + μβ1, ..., λαn−1 + μβn−1), by (a1)
= (λα0, λα1, ..., λαn−1) + (μβ0, μβ1, ..., μβn−1)
= λ(α0, α1, ..., αn−1) + μ(β0, β1, ..., βn−1)
= λT(f(x)) + μT(g(x)), by (a1).
Hence T is linear.
(ii) Let T(f(x)) = T(g(x)). Then (α0, α1, ..., αn−1) = (β0, β1, ..., βn−1), by (a1)
⇒ αi = βi for 0 ≤ i ≤ n − 1
⇒ f(x) = α0 + α1x + ... + αn−1 x^(n−1) = β0 + β1x + ... + βn−1 x^(n−1) = g(x).
Hence T(f(x)) = T(g(x)) ⇒ f(x) = g(x)
⇒ T is 1-1.
Thus T : Vn → F^n is a 1-1 onto linear map, i.e. an onto isomorphism. Hence Vn is isomorphic to F^n, i.e. Vn ≅ F^n.

Problem 2.4

Prove that "is isomorphic to" is an equivalence relation on a set of vector spaces.

Solution. Let Σ be a set of vector spaces. Write ≅ for "is isomorphic to". We show that ≅ is reflexive, symmetric and transitive.
(i) To prove: ≅ is reflexive, i.e. V ≅ V, ∀ V ∈ Σ.
Let I : V → V be defined by Ix = x, ∀ x ∈ V, V ∈ Σ.
It is easy to show that I is 1-1 and onto. For any x, y ∈ V and any scalars λ, μ, we have
I(λx + μy) = λx + μy = λIx + μIy.
Hence I is linear. Thus I : V → V is an onto isomorphism,
i.e. V ≅ V, ∀ V ∈ Σ.
(ii) To prove: ≅ is symmetric, i.e. U ≅ V ⇒ V ≅ U. Let U, V ∈ Σ and U ≅ V.
⇒ ∃ a linear map T : U → V which is 1-1 and onto
⇒ T⁻¹ : V → U is also 1-1, onto and linear, see Thm 2.1
⇒ V ≅ U.
(iii) To prove: ≅ is transitive, i.e. U ≅ V and V ≅ W ⇒ U ≅ W. Let U ≅ V and V ≅ W, for U, V, W ∈ Σ.
⇒ ∃ linear maps T : U → V, S : V → W which are 1-1 and onto
⇒ S ∘ T : U → W is also 1-1 and onto.
Now for all u1, u2 ∈ U and scalars λ, μ, we have
(S ∘ T)(λu1 + μu2) = S(T(λu1 + μu2))
= S(λTu1 + μTu2), since T is linear
= λS(Tu1) + μS(Tu2), since S is linear
= λ(S ∘ T)u1 + μ(S ∘ T)u2.
Thus S ∘ T : U → W is linear. Hence S ∘ T : U → W is an onto isomorphism, i.e. U ≅ W.
Now (i) to (iii) imply that ≅ is reflexive, symmetric and transitive, and hence it is an equivalence relation on the set Σ of vector spaces.

Problem 2.5

If V is a finite dimensional vector space over F and T is a homomorphism of V into itself which is not onto, prove that there is some v ≠ 0 in V such that Tv = 0.

Solution. Let T be a homomorphism of V into itself. Then T0 = 0. We use the method of contradiction. Assume the contrary, that there is no nonzero vector v ∈ V such that Tv = 0. Let Tu = Tw, for u, w ∈ V.
⇒ Tu − Tw = 0 or T(u − w) = 0, since T is a homomorphism
⇒ u − w = 0 or u = w.
Thus Tu = Tw ⇒ u = w, showing that T is 1-1. But T is a homomorphism. Hence T is an isomorphism of V into itself, i.e. T is onto, see Problem 2.10. But this contradicts the hypothesis that T is not onto. Hence the initial assumption is wrong. Thus there is a v ≠ 0 in V such that Tv = 0.

Problem 2.6

Let F be a field. Prove that F^m ≅ F^n ⇔ m = n.

Solution. Let F^m and F^n be vector spaces over F of dimensions m and n respectively. Then a basis Bm of F^m contains m vectors and a basis of F^n contains n vectors.
(i) Let F^m ≅ F^n, i.e. there is an onto isomorphism T : F^m → F^n. Since Bm is a basis of F^m,
F^n = R(T) = L(T(Bm)) = linear span of T(Bm), since T is onto.
Also T is 1-1 and Bm is LI. Then T(Bm) is LI. Since T(Bm) is a LI set and L(T(Bm)) = F^n,
⇒ T(Bm) is a basis of F^n
⇒ dim F^n = m, since T(Bm) contains m vectors
⇒ n = m, since dim F^n = n.
(ii) Conversely, let m = n. Then T : F^m → F^n, i.e. T : F^m → F^m, given by
Tx = x, ∀ x ∈ F^m,
is an onto isomorphism.
⇒ F^m ≅ F^m, i.e. F^m ≅ F^n for m = n.

Problem 2.7

Show that the linear map T : V3 → V3 defined by T(x1, x2, x3) = (x1 + x2 + x3, x2 + x3, x3) is non-singular, and find its inverse.

Solution. For the linear map T, the null space is given by
N(T) = {(x1, x2, x3) ∈ V3 | T(x1, x2, x3) = (0, 0, 0)}
= {(x1, x2, x3) | (x1 + x2 + x3, x2 + x3, x3) = (0, 0, 0)}
= {(x1, x2, x3) | x1 + x2 + x3 = 0, x2 + x3 = 0 and x3 = 0}
= {(x1, x2, x3) | x1 = 0 = x2 = x3}
= {(0, 0, 0)} = zero space.
Hence T : V3 → V3 is 1-1. Also T is linear, with domain and codomain spaces having the same dimension 3. Then T must be onto, i.e. T is nonsingular: T⁻¹ : V3 → V3 exists. Let
T⁻¹(y1, y2, y3) = (x1, x2, x3)
⇒ T(x1, x2, x3) = (y1, y2, y3)
i.e. (x1 + x2 + x3, x2 + x3, x3) = (y1, y2, y3)
⇒ x1 + x2 + x3 = y1, x2 + x3 = y2, x3 = y3
⇒ x1 = y1 − y2, x2 = y2 − y3, x3 = y3
⇒ T⁻¹(y1, y2, y3) = (y1 − y2, y2 − y3, y3).

Problem 2.8

Prove that the linear map T : V3 → V3 defined by

Te1 = e1 + e2, Te2 = e2 + e3, Te3 = e1 + e2 + e3   (a1)
is nonsingular, where {e1, e2, e3} is the standard basis of the vector space V3 over F. Find its inverse.

Solution. Since {e1, e2, e3} is the standard basis of the vector space V3, for (x1, x2, x3) ∈ V3 we have
(x1, x2, x3) = x1e1 + x2e2 + x3e3
⇒ T(x1, x2, x3) = x1Te1 + x2Te2 + x3Te3
= x1(e1 + e2) + x2(e2 + e3) + x3(e1 + e2 + e3), by (a1)
= (x1 + x3)e1 + (x1 + x2 + x3)e2 + (x2 + x3)e3
= (x1 + x3, x1 + x2 + x3, x2 + x3).
Let (x1, x2, x3) ∈ N(T)
⇒ T(x1, x2, x3) = zero vector in V3
⇒ (x1 + x3, x1 + x2 + x3, x2 + x3) = (0, 0, 0)
⇒ x1 + x3 = 0, x1 + x2 + x3 = 0, x2 + x3 = 0
⇒ x1 = 0 = x2 = x3
⇒ N(T) = {(0, 0, 0)}
⇒ T is 1-1.
Since the domain and codomain of the linear map T are of the same dimension, T is onto. Therefore T is nonsingular and hence T⁻¹ : V3 → V3 exists. Let
T⁻¹(y1, y2, y3) = (x1, x2, x3)
i.e. T(x1, x2, x3) = (y1, y2, y3)
⇒ (x1 + x3, x1 + x2 + x3, x2 + x3) = (y1, y2, y3)
⇒ x1 + x3 = y1, x1 + x2 + x3 = y2, x2 + x3 = y3
⇒ x2 = y2 − y1, x1 = y2 − y3, x3 = y1 − y2 + y3
⇒ T⁻¹(y1, y2, y3) = (y2 − y3, y2 − y1, y1 − y2 + y3).
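The inverse found in Problem 2.8 can be cross-checked by matrix inversion: write T in the standard basis (columns are Te1, Te2, Te3) and invert. A quick sketch with SymPy:

```python
from sympy import Matrix, eye

# Columns: Te1 = e1 + e2, Te2 = e2 + e3, Te3 = e1 + e2 + e3.
A = Matrix([[1, 0, 1],
            [1, 1, 1],
            [0, 1, 1]])

A_inv = A.inv()
# Rows of A_inv reproduce T^{-1}(y1, y2, y3) = (y2 - y3, y2 - y1, y1 - y2 + y3).
print(A_inv)
```

The computed inverse is [[0, 1, −1], [−1, 1, 0], [1, −1, 1]], matching the formula above row by row.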

Problem 2.9

Let T : V3 → V3 be a linear map defined by Te1 = e3, Te2 = e1, Te3 = e2. Prove that T² = T⁻¹.

Hint. {e1, e2, e3} is the standard basis of V3.

Solution. For (x1, x2, x3) ∈ V3, we have
(x1, x2, x3) = x1e1 + x2e2 + x3e3
⇒ T(x1, x2, x3) = x1Te1 + x2Te2 + x3Te3, since T is linear
= x1e3 + x2e1 + x3e2
= (x2, x3, x1).   (a1)
⇒ T²(x1, x2, x3) = T(x2, x3, x1) = (x3, x1, x2), by (a1).   (a2)
Now
N(T) = {(x1, x2, x3) ∈ V3 | T(x1, x2, x3) = (0, 0, 0)}
= {(x1, x2, x3) | (x2, x3, x1) = (0, 0, 0)}, by (a1)
= {(x1, x2, x3) | x1 = 0 = x2 = x3}
= {(0, 0, 0)} = zero space.
Therefore T is 1-1. Also T is onto, since the domain and codomain have the same finite dimension.
⇒ T : V3 → V3 is non-singular
⇒ T⁻¹ : V3 → V3 exists.
Let T⁻¹(x1, x2, x3) = (y1, y2, y3)
i.e. T(y1, y2, y3) = (x1, x2, x3)
⇒ (y2, y3, y1) = (x1, x2, x3), by (a1)
⇒ y1 = x3, y2 = x1, y3 = x2
⇒ T⁻¹(x1, x2, x3) = (x3, x1, x2), ∀ (x1, x2, x3) ∈ V3
⇒ T⁻¹(x1, x2, x3) = T²(x1, x2, x3), ∀ (x1, x2, x3) ∈ V3, by (a2)
⇒ T² = T⁻¹.
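In matrix form, the map of Problem 2.9 is the cyclic permutation matrix P whose columns are Te1 = e3, Te2 = e1, Te3 = e2; it satisfies P³ = I, so P² = P⁻¹. A quick check with SymPy:

```python
from sympy import Matrix, eye

# Columns are Te1 = e3, Te2 = e1, Te3 = e2, so P*(x1,x2,x3) = (x2,x3,x1).
P = Matrix([[0, 1, 0],
            [0, 0, 1],
            [1, 0, 0]])

assert P**3 == eye(3)    # T has order 3
assert P**2 == P.inv()   # hence T^2 = T^{-1}
print(P * Matrix([1, 2, 3]))  # T(1, 2, 3) = (2, 3, 1)
```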

Theorem 2.2
If V is a finite dimensional vector space over F, then V ≅ F^n for a unique integer n.

Proof. Let V be a finite dimensional vector space over F. Let v1, ..., vn be a basis of V and v ∈ V.
⇒ v = α1v1 + ... + αnvn, where αi ∈ F.   (2.4)
Define f : V → F^n such that f(v) = (α1, ..., αn)
⇒ f(α1v1 + ... + αnvn) = (α1, ..., αn).
Now the representation of v ∈ V in (2.4) is unique. Hence the mapping f is well defined. Moreover, it is 1-1 and onto. We show that f is a homomorphism. Let
w = β1v1 + ... + βnvn ∈ V
⇒ v + w = (α1 + β1)v1 + ... + (αn + βn)vn.
Now f(v + w) = (α1 + β1, ..., αn + βn)
= (α1, ..., αn) + (β1, ..., βn)
= f(v) + f(w)   (2.5)
and f(cv) = f((cα1)v1 + ... + (cαn)vn), c ∈ F
= (cα1, ..., cαn) = c(α1, ..., αn) = cf(v).   (2.6)
(2.5) and (2.6) ⇒ f is a homomorphism. But f is 1-1 and onto. Hence f is a 1-1 and onto homomorphism. Hence V ≅ F^n. QED

Remark. When V ≅ F^n, n is called the dimension of the vector space V over F.

Corollary. Any two finite dimensional vector spaces over F of the same dimension are isomorphic.

Proof. Let V and W be two vector spaces of dimension n over the field F. By Thm 2.2, V ≅ F^n and W ≅ F^n, i.e. V ≅ W. QED

Problem 2.10
If V is finite dimensional and T is an isomorphism of V into V, prove that T must map V onto V.

Hint. T is an isomorphism ⇒ (i) T is a homomorphism and (ii) T is 1-1 ⇔ (Ta = Tb ⇒ a = b).

Solution. Let V be a finite dimensional vector space. Then it has a basis. Let B = {v1, ..., vn} be a basis of V. Define C = {Tv1, ..., Tvn}. Assume that
α1Tv1 + ... + αnTvn = 0, the zero of V, α1, ..., αn ∈ F.   (a1)
Now T is an isomorphism ⇒ T is a homomorphism.
⇒ T(α1v1 + ... + αnvn) = T(α1v1) + ... + T(αnvn) = α1Tv1 + ... + αnTvn = 0, by (a1)
= T0, since T is an isomorphism ⇒ T0 = 0
⇒ α1v1 + ... + αnvn = 0, since T is 1-1
⇒ α1 = ... = αn = 0, since v1, ..., vn are linearly independent.   (a2)
Thus (a1) ⇒ (a2), and then Tv1, ..., Tvn are linearly independent. Now dim V = n and C contains n linearly independent vectors of V. Then C is a basis of V.
⇒ For v ∈ V, v = β1Tv1 + ... + βnTvn, β1, ..., βn ∈ F
= T(β1v1) + ... + T(βnvn), since T is a homomorphism
= T(β1v1 + ... + βnvn), since T is a homomorphism
= Tu, where u = β1v1 + ... + βnvn ∈ V.
Hence for each v ∈ V, there is an element u ∈ V such that Tu = v, and hence T is onto.

MCQ 2.1

Let $T$ be defined on $F^2$ by $T(x_1, x_2) = (\alpha x_1 + \beta x_2, \gamma x_1 + \delta x_2)$, where $\alpha, \beta, \gamma, \delta$ are fixed elements in the field $F$. Then $T : F^2 \to F^2$ is
(A) a homomorphism but not an isomorphism
(B) an isomorphism
(C) a linear transformation which is not 1-1
(D) a linear transformation which is 1-1

MCQ 2.2

Let $T$ be the linear map on $V_3$ defined by $T(x_1, x_2, x_3) = (0, x_1, x_2)$, for all $(x_1, x_2, x_3) \in V_3$. Then
(A) $T = 0$, $T^2 = 0$, $T^3 = 0$
(B) $T \neq 0$, $T^2 = 0$, $T^3 = 0$
(C) $T \neq 0$, $T^2 \neq 0$, $T^3 = 0$
(D) $T \neq 0$, $T^2 \neq 0$, $T^3 \neq 0$

MCQ 2.3

Let $T : V_3 \to V_3$ and $S : V_3 \to V_3$ be two linear maps defined by

$T(x_1, x_2, x_3) = (2x_1 + 3x_2, 4x_1 + 6x_2, x_3)$

and

$Se_1 = e_1 + e_3$, $Se_2 = e_1$, $Se_3 = e_1 + e_2 + e_3$,

where $(e_1, e_2, e_3)$ is the standard basis for $V_3$. Then
(A) $ST = TS$
(B) $ST \neq TS$
(C) $ST = 2TS$
(D) $ST = 3TS$
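Claims of the kind appearing in MCQs 2.2 and 2.3 are easy to test numerically by representing each map by its matrix in the standard basis (columns are the images of the basis vectors) and composing matrices. A short sketch for the shift map of MCQ 2.2 (NumPy):

```python
import numpy as np

# Shift operator T(x1, x2, x3) = (0, x1, x2), acting on column vectors:
# column i is the image of the i-th standard basis vector.
T = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])

T2 = T @ T     # T^2 (x1, x2, x3) = (0, 0, x1)
T3 = T2 @ T    # T^3 = 0: the operator is nilpotent of index 3

print(np.any(T2), np.any(T3))   # -> True False
```

The same idea (compare `S @ T` with `T @ S` as matrices) settles whether two given operators commute.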

MCQ 2.4

Let $S : R^3 \to R^4$ and $T : R^4 \to R^3$ be linear transformations such that $T \circ S$ is the identity map of $R^3$. Then:
(A) $S \circ T$ is the identity map of $R^4$
(B) $S \circ T$ is one-one but not onto
(C) $S \circ T$ is onto but not one-one
(D) $S \circ T$ is neither one-one nor onto
(NET 2012)

MCQ 2.5

Let $V$ be the vector space of polynomials over $R$ of degree less than or equal to $n$. For $p(x) = a_0 + a_1 x + \dots + a_n x^n$ in $V$, define a linear transformation $T : V \to V$ by

$(Tp)(x) = a_0 - a_1 x + a_2 x^2 - \dots + (-1)^n a_n x^n$.

Then which of the following are correct?
(A) $T$ is one-to-one
(B) $T$ is onto
(C) $T$ is invertible
(D) $T^2 \neq T$

MCQ 2.6

Let the linear map $T : V_3 \to V_2$ be defined by $T(x_1, x_2, x_3) = (x_1 + x_2 + x_3, x_1)$ and the linear map $S : V_2 \to V_2$ by $S(x_1, x_2) = (x_2, x_1)$. Then
(A) $S^2 = S$, $T^2$ not defined
(B) $T^2 = T$, $ST$ not defined
(C) $ST = TS$
(D) $TS$, $T^2$ not defined

SAQ 2.1

Let $T$ be defined on $F^2$ by $T(x_1, x_2) = (ax_1 + bx_2, cx_1 + dx_2)$, where $a, b, c, d$ are some fixed elements of a field $F$.
(i) Show that $T$ is a linear map and
(ii) find necessary and sufficient conditions on $a, b, c, d$ such that $T$ is an onto isomorphism.

SAQ 2.2

Let $T$ be the linear map on $V_3$ defined by $T(x_1, x_2, x_3) = (2x_1, 4x_1 - x_2, 2x_1 + 3x_2 - x_3)$. Show that $T$ is non-singular. Find $T^{-1}$.

SAQ 2.3

Let $T : V_3 \to V_3$ be the linear map defined by

$Te_1 = e_1 + e_2$, $Te_2 = e_1 + e_2 + e_3$, $Te_3 = 3e_1 + 4e_3$.

Show that $T$ is non-singular. Hence find $T^{-1}$.

SAQ 2.4

Show that the linear map $T : V_2 \to V_2$ defined by

$T(x, y) = (x\cos\theta - y\sin\theta, x\sin\theta + y\cos\theta)$

is non-singular and find $T^{-1}$.
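For a map like the rotation in SAQ 2.4, non-singularity can be spot-checked numerically: the matrix of the map has a nonzero determinant, and inverting the matrix gives the matrix of $T^{-1}$. A sketch (NumPy; the value $\theta = \pi/6$ is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 6
# Matrix of T(x, y) = (x cos t - y sin t, x sin t + y cos t), acting on columns.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(round(np.linalg.det(R), 6))   # -> 1.0  (nonzero, so T is non-singular)
R_inv = np.linalg.inv(R)            # matrix of T^{-1}: rotation through -theta
print(np.allclose(R_inv, R.T))      # -> True (a rotation's inverse is its transpose)
```

The determinant is $\cos^2\theta + \sin^2\theta = 1$ for every $\theta$, which is why the map is non-singular for all angles.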

SAQ 2.5

Let $R, S, T$ be linear maps from $V_3$ to $V_3$ defined by

$Re_1 = e_1 + e_2$, $Re_2 = e_1 + e_2 + e_3$, $Re_3 = 3e_1 + 4e_3$,
$Se_1 = e_1 + e_2$, $Se_2 = e_2$, $Se_3 = e_1 + e_2 + e_3$,
$Te_1 = e_1 + e_2 + e_3$, $Te_2 = e_1 + e_2$, $Te_3 = e_3$.

Calculate $R(S + T)$ and $ST$, where $\{e_1, e_2, e_3\}$ is the standard basis of $V_3$.

SAQ 2.6

Let $S$ and $T$ be two linear maps from $V_3$ to $V_3$ defined as follows:

$Se_1 = e_1 + e_2$, $Se_2 = e_2$, $Se_3 = e_1 + e_2 + e_3$

and

$Te_1 = e_1 + e_2 + e_3$, $Te_2 = e_1 + e_2$, $Te_3 = e_3$.

Determine $ST$ and $T^2$.

2.2 First fundamental theorem on homomorphism

Theorem 2.3 (First fundamental theorem on homomorphism)
Let $U$ and $V$ be vector spaces over $F$. Let $T : U \to V$ be an onto homomorphism with kernel $W$. Then $V$ is isomorphic to $U/W$, i.e. $V \cong U/W$.

Hint. Define a mapping $f : U/W \to V$ and show that (i) $f$ is $1$-$1$, with kernel of $f = \{W\}$, the zero element of $U/W$, and (ii) $f$ is an onto homomorphism.

Proof. Let $X = u + W \in U/W$, $u \in U$. Define a mapping $f : U/W \to V$ such that

$f(X) = f(u + W) = Tu$, for all $X = u + W \in U/W$   (2.7)

[Fig 2.1: the maps $T : U \to V$ and $f : U/W \to V$, with $f(u + W) = Tu$.]

We first show that this mapping is well defined, i.e. that $u + W = v + W \Rightarrow f(u + W) = f(v + W)$. Let $u + W = v + W$. Then

$(u - v) + W = W = 0 + W$
$\Rightarrow f((u - v) + W) = f(0 + W)$, or $T(u - v) = T0$, by (2.7)
$\Rightarrow Tu - Tv = 0$, since $T$ is linear
$\Rightarrow Tu = Tv$
$\Rightarrow f(u + W) = f(v + W)$
$\Rightarrow f$ is well defined.

(a) To prove: $f$ is onto
We know that $T : U \to V$ is onto. Then for every $v \in V$ there exists $u \in U$ such that $Tu = v$. Then (2.7) $\Rightarrow$

$f(u + W) = Tu = v$   (2.8)

$\Rightarrow$ To every $v \in V$ there exists $(u + W) \in U/W$ such that (2.8) is satisfied.
$\Rightarrow f$ is onto.

(b) To prove: $f$ is a homomorphism
Let $X, Y \in U/W$ such that $X = u_1 + W$, $Y = u_2 + W$, $u_1, u_2 \in U$. Let $\alpha \in F$.

$\Rightarrow X + Y = (u_1 + u_2) + W$, $\alpha X = \alpha u_1 + W$

Then

$f(X + Y) = f((u_1 + u_2) + W) = T(u_1 + u_2)$, by (2.7)
$= Tu_1 + Tu_2$, since $T$ is a homomorphism
$= f(u_1 + W) + f(u_2 + W) = f(X) + f(Y)$   (2.9)

and

$f(\alpha X) = f(\alpha u_1 + W) = T(\alpha u_1) = \alpha Tu_1$, since $T$ is a homomorphism
$= \alpha f(u_1 + W) = \alpha f(X)$   (2.10)

(2.9) and (2.10) $\Rightarrow$ $f$ is a homomorphism.

(c) To prove: $f$ is $1$-$1$
We show that the kernel of $f$ is the zero subspace of $U/W$, i.e. $\{W\}$. Let $u + W \in K(f)$, the kernel of $f$.

$\Rightarrow f(u + W) = 0$, i.e. $Tu = 0$

Hence $u \in \operatorname{Ker} T$ and then $u \in W$, since $W$ is the kernel of $T$. $\Rightarrow u + W = W$.

But $u + W$ is any element of $K(f)$. Hence every element of $K(f)$ is $W$.

$\Rightarrow K(f) = \{W\}$, i.e. $f$ is 1-1.

Now (a) to (c) show that the homomorphism $f$ is $1$-$1$ and onto, i.e. $U/W \cong V$. QED

Theorem 2.4
Let $W$ be a subspace of $U$. Then there is a homomorphism of $U$ onto $U/W$.

Hint. $U/W$ is a vector space with $+$ and $\cdot$ defined by

$(u + W) + (v + W) = (u + v) + W$, for all $u, v \in U$

and

$\alpha(u + W) = \alpha u + W$, for all $\alpha \in F$ and all $u \in U$.

Proof. Define $f : U \to U/W$ such that $f(u) = u + W$. Let $u, v \in U$ and $\alpha \in F$.

$\Rightarrow f(u + v) = (u + v) + W = (u + W) + (v + W) = f(u) + f(v)$

and

$f(\alpha u) = \alpha u + W = \alpha(u + W) = \alpha f(u)$

Hence $f$ is a homomorphism. Verify yourself that $f$ is onto. QED

Remark. The first fundamental theorem on vector space homomorphism can be stated as follows: $T : U \to V$ is linear, i.e. $T : U \to R(T)$ is an onto linear map

$\Rightarrow R(T) \cong \dfrac{U}{N(T)}$.

If $U$ is finite dimensional, then $\dim R(T) = \dim \dfrac{U}{N(T)}$, or

$\dim R(T) = \dim U - \dim N(T)$.

Hence follows the Rank-nullity theorem.

Problem 2.10

If $A$ and $B$ are subspaces of a vector space $V$, prove that $\dfrac{A + B}{B}$ is isomorphic to $\dfrac{A}{A \cap B}$.

Solution. Let $A$ and $B$ be subspaces of a vector space $V(F)$. Then $A + B$ and $A \cap B$ are subspaces of $V$, with $B \subseteq A + B$ and $A \cap B \subseteq A$. Thus $B$ is a subspace of the vector space $A + B$, and $A \cap B$ is a subspace of the vector space $A$ over the field $F$.

$\Rightarrow \dfrac{A + B}{B}$ and $\dfrac{A}{A \cap B}$ are quotient spaces over $F$.

Define $T : A \to \dfrac{A + B}{B}$ by

$Tx = x + B$, for all $x \in A$   (a1)

(i) $T$ is onto
Since any element of $\dfrac{A + B}{B}$ has the form $(x + y) + B$, $x \in A$, $y \in B$, we consider $(x + y) + B \in \dfrac{A + B}{B}$. Now

$Tx = x + B$, by (a1)
$= (x + B) + B$, since $B = 0 + B$ is the zero element in $\dfrac{A + B}{B}$
$= (x + B) + (y + B)$, since $y + B = B$, as $y \in B$ and $B$ is a subspace
$= (x + y) + B$

Thus to each $(x + y) + B$ in the codomain of $T$ there exists $x \in A$ such that $T(x) = (x + y) + B$. Hence $T$ is onto.

(ii) $T$ is a linear map
For $x, x_1 \in A$ and $\alpha, \beta \in F$, we have

$T(\alpha x + \beta x_1) = (\alpha x + \beta x_1) + B$, by (a1)
$= (\alpha x + B) + (\beta x_1 + B)$, by coset addition
$= \alpha(x + B) + \beta(x_1 + B)$, by scalar multiplication of cosets
$= \alpha\,Tx + \beta\,Tx_1$, by (a1)

$\Rightarrow T$ is a linear map.

(iii) Kernel of $T$:

$\operatorname{Ker} T = \left\{ x \in A \mid Tx = B, \text{ the zero vector in } \dfrac{A + B}{B} \right\}$
$= \{x \in A \mid x + B = B\}$, by (a1)
$= \{x \in A \mid x \in B\} = A \cap B$

$\Rightarrow T : A \to \dfrac{A + B}{B}$ is an onto linear transformation with $\operatorname{Ker} T = A \cap B$.

Then by the first fundamental theorem on homomorphism of vector spaces, we have

$\dfrac{A + B}{B} \cong \dfrac{A}{A \cap B}$.

Remark. If $A + B$ is finite dimensional, then by the above example,

$\dim(A + B) - \dim B = \dim A - \dim(A \cap B)$
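This dimension identity, equivalently $\dim(A + B) = \dim A + \dim B - \dim(A \cap B)$, can be spot-checked numerically by computing ranks of spanning sets. A sketch with two coordinate planes in $R^3$ (NumPy; the spanning vectors are an arbitrary illustration):

```python
import numpy as np

def dim_span(vectors):
    """Dimension of the span of the given vectors = rank of the matrix they form."""
    return np.linalg.matrix_rank(np.array(vectors))

A = [[1, 0, 0], [0, 1, 0]]          # spans the xy-plane
B = [[0, 1, 0], [0, 0, 1]]          # spans the yz-plane; A intersect B is the y-axis

dim_A, dim_B = dim_span(A), dim_span(B)
dim_sum = dim_span(A + B)           # A + B is spanned by the union of the spanning sets
dim_cap = dim_A + dim_B - dim_sum   # forced by the identity above
print(dim_sum, dim_cap)             # -> 3 1
```

Here `A + B` on Python lists is concatenation, which conveniently matches "span of the union", i.e. the subspace $A + B$.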

Problem 2.11

Let $T$ be a linear map on a finite dimensional vector space $V$. Prove that

$R(T) \cap N(T) = \{0\} \iff (T^2(u) = 0 \Rightarrow Tu = 0)$.

Solution. Let $T : V \to V$ be linear and let $V$ be a finite dimensional vector space.

(i) Let $R(T) \cap N(T) = \{0\}$. Consider $T^2 u = T(Tu) = 0$, $u \in V$, i.e.

$Tu \in R(T)$ and $T(Tu) = 0 \Rightarrow Tu \in N(T)$

Now $Tu \in R(T) \cap N(T) = \{0\}$

$\Rightarrow Tu = 0$.

(ii) Conversely, let $T^2 u = 0 \Rightarrow Tu = 0$. Assume that $v \in R(T) \cap N(T)$. Then

$v \in R(T) = \{Tu \mid u \in V\}$ and $v \in N(T) = \{u \in V \mid Tu = 0\}$

Hence $v = Tu$ for some $u \in V$, and $Tv = 0$. $\Rightarrow T(Tu) = 0$, i.e. $T^2 u = 0$. By hypothesis this gives $Tu = 0$, i.e. $v = 0$.

$\Rightarrow R(T) \cap N(T) = \{0\}$.
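For an operator given concretely by a matrix, the condition $R(T) \cap N(T) = \{0\}$ is equivalent to $\operatorname{rank}(T) = \operatorname{rank}(T^2)$, which is easy to test. A sketch contrasting a nilpotent shift, which fails the condition, with an invertible map (NumPy; the two matrices are arbitrary illustrations, and the rank criterion is our own restatement):

```python
import numpy as np

def range_meets_nullspace_trivially(T):
    """True iff R(T) meets N(T) only in 0, tested via rank(T) == rank(T^2)."""
    return np.linalg.matrix_rank(T) == np.linalg.matrix_rank(T @ T)

shift = np.array([[0, 0], [1, 0]])        # T(x1, x2) = (0, x1): T^2 = 0 but T != 0
invertible = np.array([[1, 1], [0, 1]])

print(range_meets_nullspace_trivially(shift))        # -> False
print(range_meets_nullspace_trivially(invertible))   # -> True
```

The criterion works because $\operatorname{rank}(T^2) = \operatorname{rank}(T) - \dim(R(T) \cap N(T))$, by applying the rank-nullity theorem to the restriction of $T$ to $R(T)$.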

MCQ 2.7

Let $V$ be a real vector space with basis $B = \{e_1, \dots, e_n\}$ and let $X$ be a set of $n$ distinct elements in $V$. Then the number of linear transformations $F$ from $V$ into $V$ such that $F(B) = X$ is
(A) zero
(B) $n$
(C) $n^2$
(D) $n!$

SAQ 2.7

Let $T : U \to V$ and $S : V \to W$ be two linear maps. Prove that
(a) If $S$ and $T$ are non-singular, then $ST$ is non-singular and $(ST)^{-1} = T^{-1} S^{-1}$.
(b) If $ST$ is 1-1, then $T$ is 1-1.
(c) If $ST$ is onto, then $S$ is onto.
(d) If $ST$ is non-singular, then $T$ is 1-1 and $S$ is onto.

SUMMARY
An isomorphism of vector spaces is defined as a 1-1 vector space homomorphism. The first fundamental theorem on homomorphism is discussed.

KEY WORDS
Isomorphism
Non-singular linear map
Isomorphic spaces

UNIT 04-03: MATRIX ASSOCIATED WITH A LINEAR MAP


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the concept of a matrix through a linear transformation defined on a vector space

INTRODUCTION
The beginning pages of the book are devoted to the elementary study of a matrix, defined as a rectangular array of quantities subjected to algebraic operations. This conventional approach of introducing a matrix is known to high school students. There is another way to look at a matrix, richer in content and formulation, in terms of a linear transformation defined on a vector space. In this classical way a matrix is viewed in terms of the action of a linear transformation on a basis of a vector space. Thus a matrix is linear-transformation oriented. The details are given in the following lines.

3.1 Matrix of a linear mapping
Unless otherwise specified, it is assumed that a vector space $V$ is $n$-dimensional and is defined over a field $F$. We append some background for the development of the subject.

Be careful about notations
Notations play an important role in the development and understanding of the subject. In studying the present unit we have to be careful about notations and conventions; otherwise one may land in confusion. We use suffixes in two ways:

single suffix notation: $x_i, a_i, v_i$ etc., and double suffix notation: $x_{ij}, a_{ij}, v_{ij}$ etc.

Let us consider a vector space $V$ of dimension two over the field $R$. Let its standard basis be $B = \{e_1, e_2\}$. Then any $v \in V$ is written uniquely as a linear combination of $e_1$ and $e_2$:

$v = v_1 e_1 + v_2 e_2$   (3.1)

Here $v_1$ and $v_2$ are the components of the vector $v$, i.e. $v_1, v_2 \in R$. This notation is convenient for writing a vector without a suffix. Suppose you are asked to write a vector $v_1 \in V$ in terms of $e_1$ and $e_2$; then (3.1) will not be convenient. One may write

$v_1 = a_1 e_1 + a_2 e_2$   (3.2)

where $a_1$ and $a_2$ are the components of $v_1$, i.e. they are real numbers. Note that in (3.1) $v_1$ is a component, i.e. a real number, but in (3.2) it is a vector. Writing one or two vectors in the form (3.2) may not pose any difficulty, but if we want to write the vectors $v_1, v_2, \dots, v_{100}$, there is bound to be a problem. In such cases the two suffix notation is helpful. Thus we rewrite (3.2) as

$v_1 = a_{11} e_1 + a_{12} e_2$   (3.3)

where $a_{11}, a_{12}$ are the components of $v_1$.

Basis. Unless otherwise specified, the notations $e_1, e_2, \dots, e_n$ are understood as standard basis vectors. Thus in

$R^2$: $e_1 = (1, 0)$, $e_2 = (0, 1)$
$R^3$: $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$, $e_3 = (0, 0, 1)$

and so on.

Representation of a vector
Let $V$ be a vector space of dimension $n$ over a field $F$. Suppose that $B = \{e_1, e_2, \dots, e_n\}$ is an ordered basis of $V$. Then any $v \in V$ is expressed as a linear combination of the basis vectors of $B$, i.e.

$v = a_1 e_1 + a_2 e_2 + \dots + a_n e_n$   (3.4)

where $a_1, \dots, a_n \in F$. In matrix formulation it is expressed in two ways:

$v = [a_1 \; a_2 \; \cdots \; a_n] \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}$   (3.5a)

or

$v = [e_1 \; e_2 \; \cdots \; e_n] \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}$   (3.5b)

In both cases we say that $a_1, a_2, \dots, a_n$ are the components of $v$ relative to the basis $B$ and write

$v = (a_1, a_2, \dots, a_n)$ or $v = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}$   (3.6)

These are two ways of writing a vector, amounting to a row representation and a column representation of a vector $v \in V$. The preference of notation lies with the convenience of writing.

Matrix
Let $F$ be an arbitrary field. A rectangular array of the form

$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & & & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix}$

is called a matrix over $F$, where each $a_{ij} \in F$. This matrix has $m$ rows and $n$ columns. We call it an $m \times n$ matrix. It is also denoted by $(a_{ij})_{m \times n}$. The total number of entries $a_{ij}$ in the above matrix is $mn$. Now we motivate the concept of a matrix through linear transformation. We first discuss the $m \times n$ matrix, from which the consideration of the $n$-square matrix follows for $m = n$ etc.

3.2 Formulation of an m x n matrix
Let $U$ and $V$ be vector spaces of dimensions $m$ and $n$ respectively over the same field $F$. Consider $B_1 = \{u_1, u_2, \dots, u_m\}$ and $B_2 = \{v_1, v_2, \dots, v_n\}$, ordered bases of these spaces respectively over $F$. Define a linear map $T : U \to V$. This $T$ sends each $u \in U$ to $Tu \in V$. In particular the basis vectors in $B_1$ are transformed to $Tu_i \in V$, $i = 1, 2, \dots, m$. Since $B_2$ is a basis of $V$, these vectors $Tu_i$ are written in terms of the vectors in $B_2$, i.e.

$Tu_1 = a_{11} v_1 + a_{12} v_2 + \dots + a_{1n} v_n$
$Tu_2 = a_{21} v_1 + a_{22} v_2 + \dots + a_{2n} v_n$
$\dots$
$Tu_m = a_{m1} v_1 + a_{m2} v_2 + \dots + a_{mn} v_n$   (3.7)

where each $a_{ij} \in F$.

We rewrite the above system of equations in matrix notation in two ways as follows.

$\begin{bmatrix} Tu_1 \\ Tu_2 \\ \vdots \\ Tu_m \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$   (3.8a)

Denoting the $m \times n$ matrix on the right side by $M$, we have

$\begin{bmatrix} Tu_1 \\ Tu_2 \\ \vdots \\ Tu_m \end{bmatrix} = M \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$   (3.8b)

Another way of writing the system (3.7) is

$[Tu_1 \; Tu_2 \; \cdots \; Tu_m] = [v_1 \; v_2 \; \cdots \; v_n] \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}$   (3.9a)

i.e.

$[Tu_1 \; Tu_2 \; \cdots \; Tu_m] = [v_1 \; v_2 \; \cdots \; v_n] M'$   (3.9b)

where $M'$ is the transpose of $M$. We have two matrices, $M$ and its transpose $M'$, which can be considered as representations of a linear map $T$. Both are suitable for representing $T$. The action of $T$ on $B_1$ (a $1 \times m$ matrix) is analogous to the action of $M'$ (an $n \times m$ matrix) on the column matrix $B_1'$ (an $m \times 1$ matrix), resulting in an $n \times 1$ matrix $M' B_1'$. We adopt this $M'$ as the matrix associated with a linear map. As regards $M$, the action of $T$ on $B_1$ (a $1 \times m$ matrix) is equivalent to the action of $M$ (an $m \times n$ matrix) on $B_1$, denoted by $B_1 M$, which is a row matrix, i.e. a $1 \times n$ matrix. In this case neither of the products $M B_1$ and $M B_1'$ is defined.

Definition. Let $U$ and $V$ be vector spaces of dimensions $m$ and $n$ respectively over the field $F$. Let $B_1 = (u_1, \dots, u_m)$ and $B_2 = (v_1, \dots, v_n)$ be the respective bases for $U$ and $V$. If $T : U \to V$ is a linear transformation, then the matrix of $T$ in the bases $B_1$ and $B_2$ is $M'$, as given in (3.9a).

Other notations for the associated matrix
Since the definition of the associated matrix involves three things: (i) the linear map $T$, (ii) the basis $B_1$ and (iii) the basis $B_2$, we denote it by $[T : B_1, B_2]$ or simply by $m(T)$. For a linear map $T : V \to V$, we denote the associated matrix by $M_B = [T : B]$, where $B$ is a basis of $V$.

Problem 3.1
Consider the linear transformation $T$ on the vector space $V = R^2$ over the real field $R$ defined by

$T(x, y) = (x + y, 2x - y)$, for all $x, y \in R$   (a1)

Evaluate the matrix associated with the following bases:
(i) the standard basis $B = (e_1, e_2)$, $e_1 = (1, 0)$, $e_2 = (0, 1)$, and
(ii) the basis $C = (c_1, c_2)$, $c_1 = (1, 1)$, $c_2 = (1, 0)$.

Solution. (i) We find $Te_1, Te_2$ under (a1) and then express them in terms of $e_1$ and $e_2$ as follows.

$Te_1 = T(1, 0) = (1 + 0, 2 - 0)$, by (a1)
$= (1, 2) = 1(1, 0) + 2(0, 1) = 1e_1 + 2e_2$   (a2)

$Te_2 = T(0, 1) = (0 + 1, 0 - 1)$, by (a1)
$= (1, -1) = 1(1, 0) - 1(0, 1) = 1e_1 - 1e_2$   (a3)

(a2) and (a3) $\Rightarrow$

$\begin{bmatrix} Te_1 \\ Te_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$

$\Rightarrow M = \begin{bmatrix} 1 & 2 \\ 1 & -1 \end{bmatrix}$

$\Rightarrow M_B = [T : B] = M' = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$

(ii)

$Tc_1 = T(1, 1) = (1 + 1, 2 - 1)$, by (a1)
$= (2, 1) = 1(1, 1) + 1(1, 0) = 1c_1 + 1c_2$   (a4)

and

$Tc_2 = T(1, 0) = (1 + 0, 2 - 0) = (1, 2) = 2(1, 1) - 1(1, 0) = 2c_1 - 1c_2$   (a5)

(a4) and (a5) $\Rightarrow$

$M = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$

$\Rightarrow M_C = [T : C] = M' = \begin{bmatrix} 1 & 2 \\ 1 & -1 \end{bmatrix}$

From these two cases (i) and (ii), having different bases, we observe that $[T : B] \neq [T : C]$.
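This computation can be mechanized: solve for the coordinates of each image $T(\text{basis vector})$ in the target basis and assemble them as columns, which yields the book's matrix $M'$. A sketch for the map $T(x, y) = (x + y, 2x - y)$ above (NumPy; `matrix_of` is our own helper name):

```python
import numpy as np

def matrix_of(T, basis):
    """Column i holds the coordinates of T(basis[i]) in `basis` -- the matrix M'."""
    B = np.column_stack(basis)
    return np.column_stack([np.linalg.solve(B, T(b)) for b in basis])

T = lambda v: np.array([v[0] + v[1], 2 * v[0] - v[1]])

standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
C = [np.array([1.0, 1.0]), np.array([1.0, 0.0])]

print(matrix_of(T, standard))   # equals [[1, 1], [2, -1]], i.e. M_B above
print(matrix_of(T, C))          # equals [[1, 2], [1, -1]], i.e. M_C above
```

Running it for the two bases reproduces the two different matrices $M_B \neq M_C$ of the same operator.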

Problem 3.2
Let $V$ be the vector space of all polynomial functions of degree $\le 3$ from $R$ into $R$, of the form

$p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$   (a1)

Let the linear transformation $T$ be the differentiation transformation $D$ on $V$. Construct the matrix of $D$ with respect to the ordered basis $B$ consisting of the four functions $p_i(x) = x^{i-1}$, i.e.

$B = (1, x, x^2, x^3)$   (a2)

Solution. Here the linear transformation is the differential operator $D$ such that

$(Dp)(x) = a_1 + 2a_2 x + 3a_3 x^2$, by (a1)   (a3)

The ordered basis vectors are $p_1(x) = 1$, $p_2(x) = x$, $p_3(x) = x^2$, $p_4(x) = x^3$, by (a2). Using (a3), we find that

$(Dp_1)(x) = 0$, since $D1 = 0$; $(Dp_2)(x) = 1$; $(Dp_3)(x) = 2x$; $(Dp_4)(x) = 3x^2$

We rewrite the above in terms of the basis vectors $p_1, \dots, p_4$ as follows:

$Dp_1 = 0p_1 + 0p_2 + 0p_3 + 0p_4$
$Dp_2 = 1p_1 + 0p_2 + 0p_3 + 0p_4$
$Dp_3 = 0p_1 + 2p_2 + 0p_3 + 0p_4$
$Dp_4 = 0p_1 + 0p_2 + 3p_3 + 0p_4$

$\Rightarrow M_B = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \end{bmatrix}' = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$.
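The matrix above can be generated programmatically by differentiating each basis monomial and reading off coefficients; applying it to a coefficient vector then differentiates the polynomial. A sketch, assuming coefficient vectors are ordered as $(a_0, a_1, a_2, a_3)$ (NumPy):

```python
import numpy as np

n = 4  # basis 1, x, x^2, x^3
# Column j holds the coefficients of d/dx (x^j); the derivative of x^j is j*x^(j-1).
D = np.zeros((n, n))
for j in range(1, n):
    D[j - 1, j] = j

p = np.array([5.0, 1.0, 4.0, 2.0])   # p(x) = 5 + x + 4x^2 + 2x^3
print(D @ p)                          # coefficients of p'(x) = 1 + 8x + 6x^2
```

The loop reproduces exactly the matrix $M_B$ computed by hand in the problem.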

Problem 3.3
Find the matrix of the linear map $T : V_2 \to V_3$ defined by $T(x, y) = (-x + 2y, y, -3x + 3y)$ relative to the bases $B_1 = \{(1, 2), (-2, 1)\}$ and $B_2 = \{(-1, 0, 2), (1, 2, 3), (1, -1, 1)\}$.

Solution. Let the linear map $T : V_2 \to V_3$ be given by

$T(x, y) = (-x + 2y, y, -3x + 3y)$   (a1)

Let

$B_1 = \{(1, 2), (-2, 1)\} = \{u_1, u_2\}$
$B_2 = \{(-1, 0, 2), (1, 2, 3), (1, -1, 1)\} = \{v_1, v_2, v_3\}$

be the ordered bases of $V_2$ and $V_3$ respectively. By (a1), we have

$T(u_1) = T(1, 2) = (-1 + 4, 2, -3 + 6) = (3, 2, 3)$

Since $T(u_1) \in V_3$, it is expressed as a linear combination of the basis vectors of $V_3$:

$(3, 2, 3) = a(-1, 0, 2) + b(1, 2, 3) + c(1, -1, 1) = (-a + b + c, 2b - c, 2a + 3b + c)$

$\Rightarrow -a + b + c = 3$, $2b - c = 2$, $2a + 3b + c = 3$

Solving these equations, $a = -10/11$, $b = 15/11$, $c = 8/11$

$\Rightarrow Tu_1 = (-10/11)v_1 + (15/11)v_2 + (8/11)v_3$

Similarly we obtain

$Tu_2 = (5/11)v_1 + (20/11)v_2 + (29/11)v_3$.

$\Rightarrow \begin{bmatrix} Tu_1 \\ Tu_2 \end{bmatrix} = \begin{bmatrix} -10/11 & 15/11 & 8/11 \\ 5/11 & 20/11 & 29/11 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$

$\Rightarrow (T : B_1, B_2) = \begin{bmatrix} -10/11 & 15/11 & 8/11 \\ 5/11 & 20/11 & 29/11 \end{bmatrix}' = \begin{bmatrix} -10/11 & 5/11 \\ 15/11 & 20/11 \\ 8/11 & 29/11 \end{bmatrix} = \frac{1}{11} \begin{bmatrix} -10 & 5 \\ 15 & 20 \\ 8 & 29 \end{bmatrix}$
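For non-standard bases, each coordinate extraction above is a small linear solve, so the whole matrix $(T : B_1, B_2)$ can be computed in a few lines. A sketch reproducing the result of Problem 3.3 (NumPy; `associated_matrix` is our own helper name):

```python
import numpy as np

def associated_matrix(T, B1, B2):
    """Matrix (T : B1, B2): column i holds the B2-coordinates of T(u_i)."""
    V = np.column_stack(B2)
    return np.column_stack([np.linalg.solve(V, T(u)) for u in B1])

T = lambda v: np.array([-v[0] + 2 * v[1], v[1], -3 * v[0] + 3 * v[1]])
B1 = [np.array([1.0, 2.0]), np.array([-2.0, 1.0])]
B2 = [np.array([-1.0, 0.0, 2.0]), np.array([1.0, 2.0, 3.0]), np.array([1.0, -1.0, 1.0])]

print(11 * associated_matrix(T, B1, B2))   # equals [[-10, 5], [15, 20], [8, 29]]
```

Scaling by 11 just makes the floating-point output easy to compare with the hand-computed fractions.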

Problem 3.4
Let $T : P_4 \to P_4$ be the linear map given by

$T(p(x)) = \int_1^x p'(t)\, dt$

and let $B_1 = B_2 = \{1, x, x^2, x^3, x^4\}$ be a basis for $P_4$. Determine the matrix $(T : B_1, B_2)$.

Hint. $p'(t) = \dfrac{d}{dt} p(t)$

Solution. Here $P_4 = \{a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 \mid a_i \in R\}$ is a real vector space of polynomials, of dimension 5. The bases of $P_4$ are

$B_1 = B_2 = \{1, x, x^2, x^3, x^4\} = \{u_1, u_2, u_3, u_4, u_5\} = \{v_1, v_2, v_3, v_4, v_5\}$,

where $u_1 = v_1 = 1$, $u_2 = v_2 = x$, $u_3 = v_3 = x^2$, $u_4 = v_4 = x^3$, $u_5 = v_5 = x^4$. The linear map $T : P_4 \to P_4$ is given by

$T(f(x)) = \int_1^x f'(t)\, dt$   (a1)

$\Rightarrow Tu_1 = T(1) = \int_1^x 0\, dt = 0 = 0v_1 + 0v_2 + 0v_3 + 0v_4 + 0v_5$

$Tu_2 = T(x) = \int_1^x 1\, dt = x - 1 = (-1)v_1 + 1v_2 + 0v_3 + 0v_4 + 0v_5$

$Tu_3 = T(x^2) = \int_1^x 2t\, dt = x^2 - 1 = (-1)v_1 + 0v_2 + 1v_3 + 0v_4 + 0v_5$

$Tu_4 = T(x^3) = \int_1^x 3t^2\, dt = x^3 - 1 = (-1)v_1 + 0v_2 + 0v_3 + 1v_4 + 0v_5$

$Tu_5 = T(x^4) = \int_1^x 4t^3\, dt = x^4 - 1 = (-1)v_1 + 0v_2 + 0v_3 + 0v_4 + 1v_5$

The required matrix is

$(T : B_1, B_2) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 1 \end{bmatrix}' = \begin{bmatrix} 0 & -1 & -1 & -1 & -1 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$

Problem 3.5
A linear transformation $T$ rotates each vector in $R^2$ clockwise through $90°$. The matrix of $T$ relative to the standard ordered basis $\left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right)$ is

(A) $\begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$
(B) $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$
(C) $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
(D) $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$
(NET 2013)

Solution. Adopting the row convention, let the standard basis $(e_1, e_2)$ be given by $e_1 = (1, 0)$ and $e_2 = (0, 1)$. We know that under a clockwise rotation through $\theta$, the point

$(x, y) \to (x\cos\theta + y\sin\theta, -x\sin\theta + y\cos\theta)$

For $\theta = 90°$, $(x, y) \to (y, -x)$. If the rotation is denoted by the linear transformation $T$, we have

$T(x, y) = (y, -x)$   (a1)

$\Rightarrow Te_1 = T(1, 0) = (0, -1)$, by (a1)
$= 0(1, 0) - 1(0, 1) = 0e_1 - 1e_2$

and

$Te_2 = T(0, 1) = (1, 0)$, by (a1)
$= 1(1, 0) + 0(0, 1) = 1e_1 + 0e_2$

$\Rightarrow \begin{bmatrix} Te_1 \\ Te_2 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$   (a2)

Hence the associated matrix is

$\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}' = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$

The choice is (B). For the row convention the matrix will be its transpose, i.e. the matrix in (D).
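The clockwise-90° claim is easy to sanity-check by applying the column-convention matrix to the standard basis vectors. A sketch (NumPy):

```python
import numpy as np

# Column-convention matrix of a clockwise rotation through 90 degrees:
# column i is the image of the i-th standard basis vector.
A = np.array([[0, 1],
              [-1, 0]])

e1, e2 = np.array([1, 0]), np.array([0, 1])
print(A @ e1)   # -> [ 0 -1]   e1 rotates clockwise onto -e2
print(A @ e2)   # -> [1 0]     e2 rotates clockwise onto e1
print(np.array_equal(A @ A, -np.eye(2)))   # two quarter-turns give a half-turn
```

Applying the matrix twice yields $-I$, the reflection through the origin, which is exactly what two clockwise quarter-turns should give.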

Problem 3.6
Let the linear transformation $T : R^3 \to R^3$ be given by the reflection with respect to the origin. Then the matrix of $T$ with respect to the standard basis is

(A) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(B) $\begin{bmatrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(C) $\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
(D) $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(SET 2016)

Solution. Under the reflection relative to the origin the point $(x, y, z) \to (-x, -y, -z)$, i.e.

$T(x, y, z) = (-x, -y, -z)$   (a1)

The standard basis vectors are $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$, $e_3 = (0, 0, 1)$. Using (a1), we have

$Te_1 = T(1, 0, 0) = (-1, 0, 0) = -1(1, 0, 0) + 0(0, 1, 0) + 0(0, 0, 1) = -1e_1 + 0e_2 + 0e_3$
$Te_2 = T(0, 1, 0) = (0, -1, 0) = 0e_1 - 1e_2 + 0e_3$
$Te_3 = T(0, 0, 1) = (0, 0, -1) = 0e_1 + 0e_2 - 1e_3$

Then the required matrix is

$\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}' = -\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = -I$

The correct choice is (C).

MCQ 3.1

Let $T$ be the linear operator on $R^2$ defined by $T(x_1, x_2) = (-x_2, x_1)$. The matrix of $T$ in the ordered basis $B = \{(1, 1), (1, -1)\}$ is

(A) $\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$
(B) $\begin{bmatrix} 1 & 1 \\ -1 & 0 \end{bmatrix}$
(C) $\begin{bmatrix} 0 & -1 \\ 1 & 1 \end{bmatrix}$
(D) $\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$

MCQ 3.2
The matrix of the linear mapping $f : R^2 \to R^3$ given by $f(x, y) = (2x, y, x + y)$ with respect to the standard bases is

(A) $\begin{bmatrix} 2 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix}$
(B) $\begin{bmatrix} 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$
(C) $\begin{bmatrix} 2 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}$
(D) $\begin{bmatrix} 2 & 1 \\ 2 & 1 \\ 1 & 2 \end{bmatrix}$
(SET 2013)

MCQ 3.3
The derivative of the function $f : R^2 \to R^2$ given by $f(x, y) = (x + y\sin x, y + x\cos x)$ at $(0, 0)$ is the linear transformation whose matrix with respect to the standard basis is

(A) $\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$
(B) $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
(C) $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$
(D) $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$
(SET 2013)

MCQ 3.4
Let $f : R^3 \to R^3$ be defined by $f(x, y, z) = (2x + y - z, x - y, x + z)$, where the basis of the domain is the usual basis and that of the codomain is $\{(1, 1, 1), (1, 1, 0), (1, 0, 0)\}$. Then the matrix of $f$ relative to these bases is

(A) $\begin{bmatrix} 1 & 0 & 1 \\ 0 & -1 & 2 \\ 1 & -1 & -1 \end{bmatrix}$
(B) $\begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix}$
(C) $\begin{bmatrix} 1 & 0 & 1 \\ 0 & -1 & -1 \\ 1 & 2 & -1 \end{bmatrix}$
(D) $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(SET 2013)

MCQ 3.5
Let $V$ be the vector space of polynomials generated by $B = \{1, x, x^2, 5x^3 - x\}$. Then the matrix representing $\dfrac{d}{dx} : V \to V$ relative to the basis $B$ is

(A) $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 1 & 0 & 15 & 0 \end{bmatrix}$
(B) $\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 15 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
(C) $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ -1 & 0 & 15 & 0 \end{bmatrix}$
(D) $\begin{bmatrix} 0 & 1 & 0 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 15 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
(SET 2002)

MCQ 3.6
For a positive integer $n$, let $P_n$ denote the space of all polynomials $p(x)$ with coefficients in $R$ such that $\deg p(x) \le n$, and let $B_n$ denote the standard basis of $P_n$ given by $B_n = \{1, x, x^2, \dots, x^n\}$. If $T : P_3 \to P_4$ is the linear transformation defined by

$T(p(x)) = x^2 p'(x) + \int_0^x p(t)\, dt$

and $A = (a_{ij})$ is the $4 \times 5$ matrix of $T$ with respect to the standard bases $B_3$ and $B_4$, then

(A) $a_{32} = \dfrac{3}{2}$ and $a_{33} = \dfrac{7}{3}$
(B) $a_{32} = \dfrac{3}{2}$ and $a_{33} = 0$
(C) $a_{32} = 0$ and $a_{33} = \dfrac{7}{3}$
(D) $a_{32} = 0$ and $a_{33} = 0$
(NET 2011)

MCQ 3.7
Let $T$ be a non-zero, non-identity linear map from $R^2$ to $R^2$ such that $T^2 = T$. Then with respect to some basis of $R^2$, it is represented by the matrix

(A) $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$
(B) $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
(C) $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
(D) $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$
(SET 2005)

MCQ 3.8
Let $T : R^3 \to R^3$ be the linear transformation given by $T\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_2 \\ x_3 \\ x_1 \end{pmatrix}$. Let $A$ be the matrix of $T$ with respect to the basis $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ -3 \end{pmatrix}$. Then $\det A$ equals

(A) $0$
(B) $1$
(C) $-1$
(D) None of these
(SET 2001)

SAQ 3.1
The linear map $T$ is given by $T(x, y) = (-x + 2y, y, -3x + 3y)$. $B_1 = \{(1, 1), (-1, 1)\}$ is an ordered basis of $V_2$ and $B_2 = \{(1, 1, 1), (1, -1, 1), (0, 0, 1)\}$ is an ordered basis of $V_3$. Find the matrix of $T$ relative to the bases $B_1$ and $B_2$.

SAQ 3.2
Find the matrix of $T : V_3 \to V_2$ defined by

$T(x_1, x_2, x_3) = (x_1 + x_2, x_2 + x_3)$

with respect to the ordered bases $B_1$ and $B_2$, where $B_1 = \{(1, 1, 0), (1, 0, 1), (1, 1, -1)\}$ and $B_2 = \{(2, -3), (1, 4)\}$.

SAQ 3.3
Let $I : V_3 \to V_3$ be the linear map defined by $I(x) = x$. Determine the matrix of $I$ relative to the ordered bases $B_1 = \{e_1, e_2, e_3\}$ and $B_2 = \{(1, 1, 1), (1, -1, 1), (0, 1, 1)\}$.

SAQ 3.4
Define $T : M_{2,2} \to M_{2,3}$ such that

$T\begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} \alpha_{11} + \alpha_{12} & 0 & \alpha_{12} + \alpha_{22} \\ -\alpha_{12} & \alpha_{21} + \alpha_{22} & 0 \end{bmatrix}$.

Prove that $T$ is linear and determine its matrix relative to the standard bases of $M_{2,2}$ and $M_{2,3}$.

SAQ 3.5
The transformation $T : V_2 \to V_2$ maps a point $(x, y)$ into its mirror image in the line $y = x$. Is $T$ a linear transformation? If it is, find its matrix.

SAQ 3.6
If the linear transformations $T_1$ and $T_2$ on $V_3$ are defined by

$T_1 e_1 = (1, -1, 2)$, $T_1 e_2 = (0, 1, 1)$, $T_1 e_3 = (1, 2, 0)$,
$T_2 e_1 = (-1, 1, -2)$, $T_2 e_2 = (1, 0, 1)$, $T_2 e_3 = (0, 1, 1)$,

then show that the matrix of the linear transformation $T_1 + T_2$ is $\begin{bmatrix} 0 & 0 & 0 \\ 1 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}$.

Remark. Let $U(F), V(F)$ be vector spaces and $L(U, V) = \{T \mid T : U \to V \text{ is linear}\}$. For any $S, T \in L(U, V)$ and $\alpha \in F$, define $S + T, \alpha T : U \to V$ by

$(S + T)u = Su + Tu$ and $(\alpha T)u = \alpha(Tu)$, for all $u \in U$.

Under this addition and scalar multiplication, $L(U, V)$ is a vector space over $F$. If $\dim U = m$ and $\dim V = n$, then $\dim L(U, V) = mn$.

SAQ 3.7
Let $S : V_3 \to V_4$, $T : V_3 \to V_4$ be defined as

$S(x_1, x_2, x_3) = (x_1 + x_2, x_1 - 2x_2 + x_3, x_2 + 3x_3, x_1 + x_3)$

and

$T(x_1, x_2, x_3) = (x_1 + 2x_2, x_1 - x_2, 3x_2 + x_3, x_1 + x_2 - x_3)$.

Determine the matrix of $3S + 4T$ relative to the standard basis.

3.3 Linear map associated with a matrix (another convention)
Let $A = (\alpha_{ij})_{m \times n}$ be a matrix of scalars, i.e. $\alpha_{ij} \in F$, where $F$ is a field. In particular $F = R$ gives $A$ as a real matrix. Let $B_1 = \{u_1, u_2, \dots, u_m\}$ be an ordered basis of an $m$-dimensional vector space $U(F)$ and $B_2 = \{v_1, v_2, \dots, v_n\}$ an ordered basis of an $n$-dimensional vector space $V$. If we define the linear map $T : U \to V$ by

$Tu_i = \sum_{j=1}^{n} \alpha_{ij} v_j$, $i = 1, 2, \dots, m$,

then

$(T : B_1, B_2) = (\alpha_{ij})'_{n \times m} = A'$.

The linear map $T$ is called the associated linear map of the matrix $A'$ with respect to the bases $B_1$ and $B_2$ of $U$, $V$ respectively.

Problem 3.7

If the matrix of a linear map $T$ with respect to bases $B_1$ and $B_2$ is

$\begin{bmatrix} -1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}$

where $B_1 = \{(1, 2, 0), (0, -1, 0), (1, -1, 1)\}$ and $B_2 = \{(1, 0), (2, -1)\}$, find $T(x, y, z)$.

Solution. Let $T : V_3 \to V_2$ be a linear map. Then

$(T : B_1, B_2) = \begin{bmatrix} -1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}$,   (a1)

where $B_1 = \{(1, 2, 0), (0, -1, 0), (1, -1, 1)\} = \{u_1, u_2, u_3\}$ and $B_2 = \{(1, 0), (2, -1)\} = \{v_1, v_2\}$ are the ordered bases of $V_3$ and $V_2$ respectively. Since the associated matrix is given in (a1), we have

$\begin{bmatrix} Tu_1 \\ Tu_2 \\ Tu_3 \end{bmatrix} = \begin{bmatrix} -1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}' \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 2 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} -v_1 + v_2 \\ 2v_1 \\ v_1 + 3v_2 \end{bmatrix}$

$\Rightarrow Tu_1 = -v_1 + v_2 = -(1, 0) + (2, -1) = (1, -1)$
$Tu_2 = 2v_1 = 2(1, 0) = (2, 0)$
$Tu_3 = v_1 + 3v_2 = (1, 0) + 3(2, -1) = (7, -3)$   (a2)

Let $(x, y, z) \in V_3$. Then $(x, y, z)$ is expressed as a linear combination of the basis vectors in $B_1$, i.e.

$(x, y, z) = au_1 + bu_2 + cu_3$, where $a, b, c \in R$.   (a3)

$\Rightarrow T(x, y, z) = T(au_1 + bu_2 + cu_3) = a\,Tu_1 + b\,Tu_2 + c\,Tu_3$, since $T$ is linear
$= a(1, -1) + b(2, 0) + c(7, -3)$, by (a2)
$= (a + 2b + 7c, -a - 3c)$   (a4)

We are done if the scalars $a, b, c$ are determined in terms of $x, y, z$. From (a3), we write

$(x, y, z) = a(1, 2, 0) + b(0, -1, 0) + c(1, -1, 1) = (a + c, 2a - b - c, c)$

$\Rightarrow a + c = x$, $2a - b - c = y$, $c = z$

Solving these simultaneous equations for $a, b, c$ in terms of $x, y, z$, we get

$a = x - z$, $b = 2x - y - 3z$, $c = z$

With these values, (a4) becomes

$T(x, y, z) = (5x - 2y, -x - 2z)$, for all $(x, y, z) \in V_3$.

This is the required linear transformation.

Problem 3.8

This is the required linear transformation. Problem 3.8

Let $A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ be the matrix of a linear map $T$ with respect to bases $B_1$ and $B_2$, where $B_1 = \{(1, 1, 1), (1, 0, 0), (0, 1, 0)\}$ and $B_2 = \{(1, 2, 3), (1, -1, 1), (2, 1, 1)\}$. Find $T : V_3 \to V_3$ such that $A = (T : B_1, B_2)$.

Solution. $A$ is the matrix of a linear map with respect to bases $B_1$ and $B_2$. Let the bases be $B_1 = \{u_1, u_2, u_3\}$ and $B_2 = \{v_1, v_2, v_3\}$, where

$u_1 = (1, 1, 1)$, $u_2 = (1, 0, 0)$, $u_3 = (0, 1, 0)$

and

$v_1 = (1, 2, 3)$, $v_2 = (1, -1, 1)$, $v_3 = (2, 1, 1)$.

Here

$\begin{bmatrix} Tu_1 \\ Tu_2 \\ Tu_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}' \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$

$\Rightarrow Tu_1 = v_1 = (1, 2, 3)$, $Tu_2 = v_2 = (1, -1, 1)$, $Tu_3 = v_3 = (2, 1, 1)$   (a1)

For any $(x, y, z)$ in the domain $V_3$,

$(x, y, z) = au_1 + bu_2 + cu_3$, $a, b, c \in R$   (a2)

$\Rightarrow T(x, y, z) = T(au_1 + bu_2 + cu_3) = a\,Tu_1 + b\,Tu_2 + c\,Tu_3$, since $T$ is linear
$= a(1, 2, 3) + b(1, -1, 1) + c(2, 1, 1)$, by (a1)
$= (a + b + 2c, 2a - b + c, 3a + b + c)$   (a3)

The solution is complete if the values of $a, b, c$ are obtained in terms of $x, y, z$. We use (a2):

$(x, y, z) = a(1, 1, 1) + b(1, 0, 0) + c(0, 1, 0) = (a + b, a + c, a)$

$\Rightarrow x = a + b$, $y = a + c$, $z = a$
$\Rightarrow a = z$, $b = x - z$, $c = y - z$.

Then (a3) $\Rightarrow$

$T(x, y, z) = (x + 2y - 2z, -x + y + 2z, x + y + z)$.
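The recipe of Problems 3.7 and 3.8, express $(x, y, z)$ in $B_1$ and then push the coordinates through the images of the basis vectors, amounts to one linear solve followed by one matrix product, so the recovered formula can be verified numerically. A sketch checking the result of Problem 3.8 (NumPy; the names are our own):

```python
import numpy as np

B1 = np.column_stack([(1, 1, 1), (1, 0, 0), (0, 1, 0)]).astype(float)
images = np.column_stack([(1, 2, 3), (1, -1, 1), (2, 1, 1)]).astype(float)  # Tu_1, Tu_2, Tu_3

def T(v):
    coords = np.linalg.solve(B1, np.asarray(v, float))  # (a, b, c): v = a*u1 + b*u2 + c*u3
    return images @ coords                              # a*Tu1 + b*Tu2 + c*Tu3

print(T([2.0, 3.0, 1.0]))   # matches (x + 2y - 2z, -x + y + 2z, x + y + z) = (6, 3, 6)
```

Comparing `T([x, y, z])` against the closed-form expression for a few random inputs confirms the recovered map.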

MCQ 3.9
If the matrix of a linear transformation $T : V_3 \to V_2$ with respect to the standard bases is

$\begin{bmatrix} 2 & 1 \\ -1 & 0 \\ 4 & 2 \end{bmatrix}$,

then the image of $(3, 1, -5)$ under $T$ is
(A) $(29, -9)$
(B) $(-29, 9)$
(C) $(29, 9)$
(D) $(-29, -9)$

MCQ 3.10
The matrix of a linear map $T$ relative to the bases $B_1$ and $B_2$ is $\begin{bmatrix} 2 & 1 & 3 \\ 1 & 0 & -1 \end{bmatrix}$, where $B_1 = \{(-2, 1), (1, 2)\}$ and $B_2 = \{(1, -1, -1), (1, 2, 3), (-1, 0, 2)\}$. Then the point $5T(0, 1)$ lies on the plane
(A) $2x + 4y - z = 3$
(B) $2x + 4y - z = 2$
(C) $2x + 4y - z = 1$
(D) None of the above

SAQ 3.8
If $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ is the matrix of the linear map $T : V_2 \to V_2$ relative to the standard basis, then find the matrix of $T^{-1}$ relative to the standard basis.

SUMMARY
The concept of a matrix is introduced through a linear transformation with respect to bases. Evaluation of the matrices is illustrated by examples where a linear mapping and the bases are given. It is also shown how to find a linear map associated with a matrix along with the bases.

KEY WORDS
Basis
Matrix
Linear transformation

UNIT 04-04: ALGEBRA OF LINEAR OPERATORS


LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
Explain the analogy of the algebraic structure of the space of linear operators to the set of matrices over a field $F$

INTRODUCTION
Let $T : V \to V$ be a linear map defined on a vector space $V$ over a field $F$. It is called a linear operator. Consider the set of all such operators; we denote it by $A(V)$. The set is at first without any structure; it is just a set. By defining addition and scalar multiplication of the operators, an algebraic structure is assigned to $A(V)$. The set $A(V)$ forms a vector space with the addition and scalar multiplication defined by

$(T_1 + T_2)v = T_1 v + T_2 v$, $(\alpha T_1)v = \alpha(T_1 v)$, for all $T_1, T_2 \in A(V)$ and all $\alpha \in F$.

There is $I \in A(V)$ such that $Iv = v$ for all $v \in V$ and $IT = TI = T$ for all $T \in A(V)$. This element $I$ is called a unit element of $A(V)$. There is an element $0 \in A(V)$ such that $0v = 0$ for all $v \in V$. This $0$ is called the zero linear transformation. An element $T \in A(V)$ is invertible or regular if there exists $S \in A(V)$ such that $ST = TS = I$. We then write $S = T^{-1}$. An element in $A(V)$ which is not regular is called singular.

Unless otherwise stated, we assume that
(i) $V$ is a finite dimensional vector space over $F$
(ii) $A(V)$ has a unit element $I$

In the previous unit the concept of a matrix was associated with a linear transformation relative to chosen bases. Now we associate matrices with S + T, kT and the composite map ST, where S, T ∈ A(V) and k ∈ F.

4.1 Matrix association with a linear operator

The definition of an associated n × m matrix discussed in the previous unit can be applied to a linear operator by taking m = n. In the present case the matter is simplified. We denote the matrix associated with T by (tij)n×n, where tij ∈ F, and redefine M(T) as follows.

Definition. Let V be an n-dimensional vector space over F and let B = (v1, v2, …, vn) be its basis over F. If T : V → V is a linear map, then the matrix of T in the basis B is defined as

    M(T) = [t11 t12 … t1n; t21 t22 … t2n; …; tn1 tn2 … tnn]′
         = [t11 t21 … tn1; t12 t22 … tn2; …; t1n t2n … tnn]        (4.1a)

where each tij ∈ F and

    Tvi = Σ_{j=1}^{n} tji vj,  i = 1, …, n.        (4.1b)

Remark. If v1, v2, …, vn is the standard basis of the vector space V, then the matrix M(T) associated with a linear map T is called its natural matrix.

Hereafter, unless stated otherwise, we assume the following:
(i) S, T ∈ A(V)
(ii) M(S) = (sij) is the matrix of S in v1, …, vn and M(T) = (tij) is the matrix of T in v1, …, vn, where each sij, tij ∈ F is such that

    Svi = Σ_{j=1}^{n} sji vj, i = 1, …, n   and   Tvi = Σ_{j=1}^{n} tji vj, i = 1, …, n        (4.2)

Definition. Let S, T ∈ A(V). Then

    S = T ⇔ Sv = Tv, ∀ v ∈ V.

Theorem 4.1
Let S, T ∈ A(V). Then S = T ⇔ sij = tij for each i, j.

Proof. By definition, we have
    S = T ⇔ Sv = Tv, ∀ v ∈ V
⇒   S = T ⇔ Svi = Tvi, ∀ vi ∈ V        (4.3)
Since S ∈ A(V), its matrix is M(S) = (sij) where
    Svi = s1i v1 + s2i v2 + … + sni vn, by (4.2)
Similarly, for T ∈ A(V) we have M(T) = (tij) where
    Tvi = t1i v1 + t2i v2 + … + tni vn
Noting the above, (4.3) ⇒
    S = T ⇔ s1i v1 + s2i v2 + … + sni vn = t1i v1 + t2i v2 + … + tni vn, i = 1, …, n
          ⇔ (s1i − t1i)v1 + (s2i − t2i)v2 + … + (sni − tni)vn = 0, i = 1, …, n
          ⇔ s1i − t1i = 0, s2i − t2i = 0, …, sni − tni = 0, ∵ v1, …, vn are LI, i = 1, …, n
          ⇔ s1i = t1i, s2i = t2i, …, sni = tni, i = 1, …, n
          ⇔ sij = tij, for each i, j.        QED

Theorem 4.2 (Matrix of the sum of two linear transformations)
Let S and T ∈ A(V) have matrices M(S) and M(T) respectively. Then S + T corresponds to the matrix M(S) + M(T), i.e.

    M(S + T) = M(S) + M(T)        (4.4)

Proof. Since A(V) is a vector space and S, T ∈ A(V), we have S + T ∈ A(V). By the definition of addition of functions, we write
    (S + T)vi = Svi + Tvi, ∀ vi ∈ V
              = Σ_j sji vj + Σ_j tji vj, by (4.2)
              = Σ_j (sji + tji)vj
              = Σ_j uji vj, where uij = sij + tij for each i and j
⇒ the linear map S + T corresponds to the matrix (uij).
Now
    (uij) = (sij + tij) = (sij) + (tij), for each i and j
Then, noting (4.2), the above gives M(S + T) = M(S) + M(T).        QED

Theorem 4.3 (Matrix of the product of a scalar and a linear transformation)
Let S ∈ A(V) with matrix M(S). Then λS corresponds to the matrix λM(S), i.e.

    M(λS) = λ M(S), λ ∈ F        (4.5)

Proof. Since λ is a scalar, by the definition of a scalar multiple of a function, we have
    (λS)vi = λ(Svi), ∀ vi ∈ V
           = λ(Σ_j sji vj), by (4.2)
           = Σ_j (λ sji)vj, ∀ vi ∈ V        (4.6)
⇒ λS corresponds to the matrix (λ sij) = λ(sij).
Now (4.6) ⇒ M(λS) = λ M(S).        QED

Problem 4.1
Verify Theorems 4.2 and 4.3 in the case of a two dimensional vector space V over F, where v1 and v2 are the standard basis vectors.

Solution. Let S, T ∈ A(V) with dim V = 2. We express Sv1, Sv2, Tv1, Tv2 as linear combinations of the standard basis vectors v1 and v2 as follows:
    Sv1 = Σ_{j=1}^{2} sj1 vj = s11 v1 + s21 v2 = a1 v1 + a2 v2, for s11 = a1, s21 = a2
and
    Sv2 = Σ_{j=1}^{2} sj2 vj = s12 v1 + s22 v2 = b1 v1 + b2 v2, for s12 = b1, s22 = b2
Similarly we write
    Tv1 = c1 v1 + c2 v2 and Tv2 = d1 v1 + d2 v2
where a1, a2, b1, b2, c1, c2, d1, d2 ∈ F.
⇒   M(S) = [a1 b1; a2 b2] and M(T) = [c1 d1; c2 d2]        (a1)

(i) Now
    (S + T)v1 = Sv1 + Tv1 = a1v1 + a2v2 + c1v1 + c2v2 = (a1 + c1)v1 + (a2 + c2)v2        (a2)
and
    (S + T)v2 = Sv2 + Tv2 = b1v1 + b2v2 + d1v1 + d2v2 = (b1 + d1)v1 + (b2 + d2)v2        (a3)
(a2) and (a3) ⇒
    M(S + T) = [a1+c1  b1+d1; a2+c2  b2+d2]
             = [a1 b1; a2 b2] + [c1 d1; c2 d2]
             = M(S) + M(T), by (a1), which is (4.4).

(ii) Let λ ∈ F. Then by definition,
    (λS)v1 = λ(Sv1) = λ(a1v1 + a2v2) = (λa1)v1 + (λa2)v2
and
    (λS)v2 = λ(Sv2) = λ(b1v1 + b2v2) = (λb1)v1 + (λb2)v2
⇒   M(λS) = [λa1  λb1; λa2  λb2] = λ[a1 b1; a2 b2] = λ M(S), by (a1), which is (4.5).
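The verification in Problem 4.1 can also be run numerically. The sketch below builds natural matrices column by column from the action of two operators on the standard basis of R² and checks Theorems 4.2 and 4.3; the operators S and T are hypothetical examples chosen only for the check, not taken from the text.

```python
import numpy as np

# Hypothetical illustrative operators on R^2:
# S(x, y) = (x + 2y, 3x) and T(x, y) = (y, 5x - 2y).
S = lambda v: np.array([v[0] + 2*v[1], 3*v[0]])
T = lambda v: np.array([v[1], 5*v[0] - 2*v[1]])

def natural_matrix(op, n=2):
    """Column i of M(op) lists the coefficients of op(e_i), as in (4.1b)."""
    return np.column_stack([op(e) for e in np.eye(n)])

MS, MT = natural_matrix(S), natural_matrix(T)

# Theorem 4.2: the matrix of the sum equals the sum of the matrices.
M_sum = natural_matrix(lambda v: S(v) + T(v))
assert np.allclose(M_sum, MS + MT)

# Theorem 4.3: the matrix of lam*S equals lam times the matrix of S.
lam = 7.0
M_scaled = natural_matrix(lambda v: lam * S(v))
assert np.allclose(M_scaled, lam * MS)
```

The same `natural_matrix` helper (a hypothetical name introduced here) mirrors the book's convention that the i-th column of M(T) holds the coefficients of Tvi.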

Theorem 4.4 (Matrix of the product of two linear transformations)
Let S and T ∈ A(V) have matrices M(S) and M(T) respectively. Then ST corresponds to the matrix M(ST) such that

    M(ST) = M(S) M(T)        (4.7)

Hint. (ST)v = S(Tv), ∀ v ∈ V

Proof. By definition, we have
    (ST)vi = S(Tvi), ∀ vi ∈ V
           = S(Σ_j tji vj)
           = Σ_j tji (Svj), ∵ S is linear
           = Σ_j tji (Σ_k skj vk), by (4.2)
           = Σ_k (Σ_j skj tji) vk
           = Σ_k uki vk
where
    uki = Σ_j skj tji        (4.8)
⇒   M(ST) = (uij)
Define (sij)(tij) = (uij) for every i and j; then (4.8) follows. Noting this, the above implies
    M(ST) = (sij)(tij)  ⇒  M(ST) = M(S) M(T).        QED

Interpretation of (4.8)
Rewriting (4.8), we have
    uki = sk1 t1i + sk2 t2i + … + skn tni = (sk1, sk2, …, skn) · (t1i, t2i, …, tni)
The right side is the dot product of two vectors: the first is the k-th row of the matrix of S and the second is the i-th column of the matrix of T. This is the usual product rule for multiplying two matrices which are compatible for multiplication.
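Theorem 4.4 can be checked numerically in the same spirit: the matrix of the composite map equals the product of the matrices, in that order. The operators below are hypothetical illustrative choices.

```python
import numpy as np

# Hypothetical operators on R^2 (illustrative choices, not from the text):
S = lambda v: np.array([v[0] + 2*v[1], 3*v[0]])
T = lambda v: np.array([v[1], 5*v[0] - 2*v[1]])

def natural_matrix(op, n=2):
    """Column i of M(op) lists the coefficients of op(e_i), as in (4.1b)."""
    return np.column_stack([op(e) for e in np.eye(n)])

MS, MT = natural_matrix(S), natural_matrix(T)

# (ST)v = S(T(v)): the matrix of the composite equals the product M(S) M(T).
M_comp = natural_matrix(lambda v: S(T(v)))
assert np.allclose(M_comp, MS @ MT)

# The order matters: M(TS) = M(T) M(S), which differs in general.
M_comp_rev = natural_matrix(lambda v: T(S(v)))
assert np.allclose(M_comp_rev, MT @ MS)
```

Note how the two compositions ST and TS give different matrices, reflecting the noncommutativity of matrix multiplication.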

Problem 4.2
Verify Theorem 4.4 in the case of a two dimensional vector space V over F with the standard basis vectors v1 and v2.

Solution. From Problem 4.1, we have
    Sv1 = a1v1 + a2v2 and Sv2 = b1v1 + b2v2
and
    Tv1 = c1v1 + c2v2 and Tv2 = d1v1 + d2v2
where a1, a2, b1, b2, c1, c2, d1, d2 ∈ F. Then
    M(S) = [a1 b1; a2 b2] and M(T) = [c1 d1; c2 d2]
⇒   M(S) M(T) = [a1 b1; a2 b2][c1 d1; c2 d2]
              = [a1c1+b1c2  a1d1+b1d2; a2c1+b2c2  a2d1+b2d2]        (a1)
By definition, we write
    (ST)v1 = S(Tv1) = S(c1v1 + c2v2) = c1 Sv1 + c2 Sv2, ∵ S is linear
           = c1(a1v1 + a2v2) + c2(b1v1 + b2v2)
           = (a1c1 + b1c2)v1 + (a2c1 + b2c2)v2        (a2)
and
    (ST)v2 = S(Tv2) = S(d1v1 + d2v2) = d1 Sv1 + d2 Sv2
           = d1(a1v1 + a2v2) + d2(b1v1 + b2v2)
           = (a1d1 + b1d2)v1 + (a2d1 + b2d2)v2        (a3)
(a2) and (a3) ⇒
    M(ST) = [a1c1+b1c2  a1d1+b1d2; a2c1+b2c2  a2d1+b2d2] = M(S) M(T), by (a1)

MCQ 4.1

Let T be a linear transformation on R² defined by T(v) = Av, where A = [1 2; 3 4]. Then for the standard basis of R², we have
(A) M(T) = A
(B) M(T) = A′
(C) trace M(T) = 2 trace A
(D) None of the above

MCQ 4.2
Let S, T ∈ A(R²) be defined by S(x, y) = (x, 0) and T(x, y) = (0, y), ∀ x, y ∈ R. For the standard basis of R², choose the true statement(s) from the following:
(A) M(S + T) = I
(B) M(S − T) = I
(C) M(ST) = M(TS)
(D) M(S) = M(S²)

SAQ 4.1
Let S, T ∈ A(R³) be such that
    S(x, y, z) = (z, y, 0) and T(x, y, z) = (x + y, 0, y + z).
Using the standard basis, compute (i) M(S + T) (ii) M(2S − 3T) (iii) M(ST) (iv) M(TS).

SAQ 4.2
Let M(S) and M(T) be the natural matrices of invertible S, T ∈ A(V). Show that
    [M(ST)]⁻¹ = [M(T)]⁻¹[M(S)]⁻¹.

Theorem 4.5

Let S, T, U ∈ A(V). Then
(i)  [M(S) M(T)] M(U) = M(S)[M(T) M(U)]
(ii) M(S)[M(T) + M(U)] = M(S) M(T) + M(S) M(U)
and  [M(T) + M(U)] M(S) = M(T) M(S) + M(U) M(S).

Proof. Let S, T, U ∈ A(V). We have
(i)
    [M(S) M(T)] M(U) = [M(ST)] M(U), by (4.7)
                     = M[(ST)U]
                     = M[S(TU)], ∵ composition of maps is associative
                     = M(S) M(TU), by (4.7)
                     = M(S)[M(T) M(U)], by (4.7)
(ii)
    M(S)[M(T) + M(U)] = M(S) M(T + U), by (4.4)
                      = M[S(T + U)], by (4.7)
                      = M(ST + SU), ∵ composition distributes over addition
                      = M(ST) + M(SU), by (4.4)
                      = M(S) M(T) + M(S) M(U), by (4.7)
The second identity in (ii) follows similarly.        QED

Remark. The theorem means that (i) matrix multiplication is associative and (ii) matrix multiplication is distributive over addition.

Theorem 4.6 (Matrix of an inverse linear operator)
Let T be a bijective linear map on a vector space V of dimension n. Then T has an inverse T⁻¹ : V → V and

    M(T⁻¹) = [M(T)]⁻¹.        (4.9)

Proof. Since T is bijective, it has a unique inverse T⁻¹ such that
    (TT⁻¹)v = (T⁻¹T)v = Iv, ∀ v ∈ V
i.e.
    TT⁻¹ = T⁻¹T = I = identity map on V        (4.10)

We first show that T⁻¹ is linear. Let x, y ∈ V and a, b ∈ F. Then there exist u, w ∈ V such that
    Tu = x and Tw = y, i.e. u = T⁻¹x and w = T⁻¹y        (4.11)
Now
    T⁻¹(ax + by) = T⁻¹[aTu + bTw]
                 = T⁻¹[T(au + bw)], ∵ T is linear
                 = (T⁻¹T)(au + bw), by definition of a composite mapping
                 = I(au + bw), by (4.10)
                 = au + bw, ∵ Iu = u ∀ u
                 = a T⁻¹x + b T⁻¹y, by (4.11), for all x, y ∈ V
⇒   T⁻¹ is a linear map on V.

To prove the remaining part, assume that
    M(T) = (tij), Tvi = Σ_j tji vj        (4.12a)
and
    M(T⁻¹) = (sij), T⁻¹vi = Σ_j sji vj        (4.12b)
Then
    Ivi = (TT⁻¹)vi = T(T⁻¹vi), by definition of a composite mapping
        = T(Σ_j sji vj), by (4.12b)
        = Σ_j sji Tvj, ∵ T is linear
        = Σ_j sji (Σ_k tkj vk), by (4.12a)
        = Σ_k (Σ_j tkj sji) vk
        = Σ_k aki vk, where aki = Σ_j tkj sji
⇒   vi = Σ_k aki vk, ∵ Ivi = vi
⇒   vi = a1i v1 + a2i v2 + … + aii vi + … + ani vn
⇒   0 = a1i v1 + a2i v2 + … + (aii − 1)vi + … + ani vn
⇒   aji = 0 for j ≠ i and aii = 1, ∵ v1, …, vn are LI
⇒   aki = δki = {1 if i = k; 0 if i ≠ k}, the Kronecker delta
⇒   Σ_j tkj sji = δki
⇒   (tij)(sij) = I, the unit matrix
⇒   M(T) M(T⁻¹) = I
Similarly we can show that M(T⁻¹) M(T) = I.
⇒   M(T⁻¹) = [M(T)]⁻¹        QED
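A small numerical sketch of Theorem 4.6, together with the matrix form of SAQ 4.2: the matrices used here are hypothetical invertible examples.

```python
import numpy as np

# A hypothetical invertible natural matrix M(T); det = 1, so T is bijective.
MT = np.array([[2.0, 1.0],
               [1.0, 1.0]])

# Theorem 4.6: M(T^{-1}) = [M(T)]^{-1}, so the two matrices compose to the
# matrix of the identity map I.
MT_inv = np.linalg.inv(MT)
assert np.allclose(MT @ MT_inv, np.eye(2))
assert np.allclose(MT_inv @ MT, np.eye(2))

# SAQ 4.2 in matrix form: [M(ST)]^{-1} = [M(T)]^{-1} [M(S)]^{-1}.
MS = np.array([[1.0, 3.0],
               [0.0, 2.0]])
assert np.allclose(np.linalg.inv(MS @ MT), MT_inv @ np.linalg.inv(MS))
```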

4.2 Range and kernel of a linear transformation
Let U and V be vector spaces over a field F and let T : U → V be a linear transformation. The range and kernel of T are defined as follows.

Range of T
The range of T, written R(T), is the set of image points of U, i.e.
    R(T) = {v ∈ V | T(u) = v for some u ∈ U}        (4.13)
It is obvious that R(T) ⊆ V.

Kernel of T
The kernel of T, written Ker T, is the set of elements of U which are mapped onto 0 ∈ V, i.e.
    Ker T = {u ∈ U | T(u) = 0 ∈ V}        (4.14)

Theorem 4.7
Let U and V be vector spaces over the same field F. If T : U → V is linear, then
(i) R(T) is a subspace of V, and
(ii) Ker T is a subspace of U.

Proof. (i) By definition, R(T) ⊆ V. Now T(0) = 0 ⇒ 0 ∈ R(T). Let x, y ∈ R(T) and a, b ∈ F. Then there are u, v ∈ U such that
    T(u) = x, T(v) = y        (4.15)
Now
    T(au + bv) = a Tu + b Tv, ∵ T is linear
               = ax + by, by (4.15)
Since U is a vector space over F, au + bv ∈ U. Then by the definition of R(T), ax + by ∈ R(T). Thus x, y ∈ R(T) and a, b ∈ F ⇒ ax + by ∈ R(T). Hence R(T) is a subspace of V.

(ii) Since T(0) = 0, we have 0 ∈ Ker T. Let x, y ∈ Ker T.
⇒   T(x) = 0 and T(y) = 0
For a, b ∈ F,
    T(ax + by) = a Tx + b Ty = a·0 + b·0 = 0 ∈ V
⇒   ax + by ∈ Ker T
⇒   Ker T is a subspace of U.        QED

Remark. An m × n matrix over a field F may be viewed as a linear mapping A : Vn → Vm. Then R(A) is the column space of A. We illustrate this fact by an example. Let an arbitrary 3 × 2 matrix A over R be given by
    A = [a b; c d; e f]
We view it as a linear map A : R² → R³. The standard basis of R² is {e1 = (1, 0), e2 = (0, 1)}; these basis vectors generate R². Their images under A are
    Ae1 = [a b; c d; e f][1; 0] = [a; c; e] = 1st column of A
and
    Ae2 = [a b; c d; e f][0; 1] = [b; d; f] = 2nd column of A
This shows that the range of A is the column space of A.

Working procedure to find R(T) and Ker T, where T : U → V

For R(T):
(i) Choose the standard basis vectors of U: e1, …, em, say.
(ii) Find Te1, …, Tem.
(iii) Form a matrix A whose rows are Te1, …, Tem.
(iv) Reduce A to echelon form.
(v) The nonzero rows of the echelon form of A give a basis for the range R(T).
(vi) The number of nonzero rows of the echelon matrix is dim R(T).
(vii) Note that A is not the associated matrix of T; as a matter of fact A′ is the associated matrix of T.

For Ker T:
(i) Equate each of the components of T to zero. This gives a system of simultaneous equations.
(ii) Apply the Gauss elimination method to reduce these equations to echelon form.
(iii) Number of free variables = number of unknowns − number of echelon equations.
(iv) Give these free variables values such as 1, 0, −1 and find the values of the other variables. This gives a basis of Ker T, and dim Ker T = number of free variables.

Finally, verify that dim R(T) + dim Ker T = dim U.
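The working procedure above can be traced with a computer algebra system. The sketch below uses SymPy's row reduction and nullspace routines on the map T(x, y, z) = (x + y, y + z), the map treated in Problem 4.5 below.

```python
import sympy as sp

# Images of the standard basis vectors of R^3 under T(x,y,z) = (x+y, y+z):
images = [sp.Matrix([1, 0]),   # Te1 = T(1,0,0)
          sp.Matrix([1, 1]),   # Te2 = T(0,1,0)
          sp.Matrix([0, 1])]   # Te3 = T(0,0,1)

# Rows of A are the images Te1, ..., Tem; row reduction gives R(T).
A = sp.Matrix([list(v) for v in images])
rref, pivots = A.rref()
dim_range = len(pivots)           # number of nonzero rows = dim R(T)

# A' (the transpose) is the associated matrix of T, so Ker T is the
# nullspace of A'.
kernel_basis = (A.T).nullspace()
dim_kernel = len(kernel_basis)

# Rank-nullity check: dim R(T) + dim Ker T = dim U = 3.
assert dim_range + dim_kernel == 3
print("dim R(T) =", dim_range, " dim Ker T =", dim_kernel)
```

Here `rref()` returns the row echelon form together with the pivot columns, matching steps (iv)-(vi) of the procedure, while `nullspace()` carries out the free-variable computation of the kernel.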

The following examples illustrate the procedure given above.

Problem 4.3
Find the range and kernel of T : R² → R² such that
    T(x, y) = (x + y, x − y), ∀ x, y ∈ R.        (a1)

Solution. Let the standard basis be e1 = (1, 0) and e2 = (0, 1). Then (a1) ⇒
    Te1 = T(1, 0) = (1, 1)
    Te2 = T(0, 1) = (1, −1)
The matrix of the generators (as rows) is
    A = [1 1; 1 −1]
We reduce it to echelon form:
    A = [1 1; 1 −1] ~ [1 1; 0 −2], by R2 : R2 − R1
Since the matrix has two nonzero rows, a basis of R(T) is {(1, 1), (0, −2)}, i.e.
    R(T) = [(1, 1), (0, −2)] and dim R(T) = 2.
By definition,
    Ker T = {(x, y) | T(x, y) = (0, 0) ∈ codomain R²}
Now
    T(x, y) = (x + y, x − y) = (0, 0)
⇒   x + y = 0 and x − y = 0
⇒   x = y = 0
⇒   Ker T = {(0, 0)}, the null space
⇒   dim Ker T = 0
Here
    dim R(T) + dim Ker T = 2 + 0 = 2 = dim R².

Problem 4.4
Let T : R³ → R³ be the linear transformation defined by
    T(x, y, z) = (x + 2y, y − z, x + 2z)        (a1)
Find a basis and the dimension of its range and its kernel.

Solution. The standard basis vectors are e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Then
    Te1 = T(1, 0, 0) = (1, 0, 1)
    Te2 = T(0, 1, 0) = (2, 1, 0)
    Te3 = T(0, 0, 1) = (0, −1, 2)
The matrix of the generators (as rows) is
    A = [1 0 1; 2 1 0; 0 −1 2]
We reduce it to echelon form:
    A = [1 0 1; 2 1 0; 0 −1 2] ~ [1 0 1; 0 1 −2; 0 −1 2] ~ [1 0 1; 0 1 −2; 0 0 0]
The matrix has two nonzero rows. Then a basis of R(T) is {(1, 0, 1), (0, 1, −2)}, i.e.
    R(T) = [(1, 0, 1), (0, 1, −2)] and dim R(T) = 2.
By definition,
    Ker T = {(x, y, z) | T(x, y, z) = (0, 0, 0) ∈ codomain R³}
⇒   x + 2y = 0, y − z = 0, x + 2z = 0
⇒   x = −2z, y = z
Thus there is only one free variable z, and as such dim Ker T = 1. The kernel is generated by a single vector, obtained by taking z = 1, i.e. x = −2, y = 1, z = 1, and then
    Ker T = [(−2, 1, 1)]

Problem 4.5
Let T : R³ → R² be the linear mapping defined by
    T(x, y, z) = (x + y, y + z)        (a1)
Find a basis and the dimension of its range and its kernel.

Solution. The standard basis vectors for R³ are e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Then
    Te1 = T(1, 0, 0) = (1, 0)
    Te2 = T(0, 1, 0) = (1, 1)
    Te3 = T(0, 0, 1) = (0, 1)
⇒   A = [1 0; 1 1; 0 1] ~ [1 0; 0 1; 0 1] ~ [1 0; 0 1; 0 0]
⇒   R(T) = [(1, 0), (0, 1)] and dim R(T) = 2
Now
    Ker T = {(x, y, z) | T(x, y, z) = (0, 0) ∈ R²}
⇒   x + y = 0, y + z = 0
⇒   x = z, y = −z, with z free
There is only one free variable, hence dim Ker T = 1 and Ker T = [(1, −1, 1)], for z = 1.

Problem 4.6
Let V be the vector space of 2 × 2 matrices over R and let M = [1 −1; −2 2]. If T : V → V is the linear map defined by T(A) = MA, find a basis and the dimension of the kernel of T.

Solution. Let A = [a b; c d] ∈ V, with a, b, c, d ∈ R. Then
    T(A) = MA = [1 −1; −2 2][a b; c d] = [a−c  b−d; −2a+2c  −2b+2d]
For the kernel, we have to find A, i.e. determine a, b, c, d, such that T(A) = 0, the zero matrix.
⇒   a − c = 0, b − d = 0, −2a + 2c = 0, −2b + 2d = 0
⇒   a = c, b = d        (a1)
Thus we have two free variables c and d. Then dim Ker T = 2.
To find a basis we choose (c, d) = (0, 1), (1, 0). Then we get
    a = 0, b = 1 and a = 1, b = 0, by (a1)
Then a basis of Ker T is
    { [0 1; 0 1], [1 0; 1 0] }.

MCQ 4.3
For the linear transformation T : R³ → R³ defined by T(a, b, c) = (a + 2b, b − c, a + 2c), we have dim R(T) = m and dim Ker T = n. Then the point (m, n) is the centre of the circle:
(A) x² + y² − 2x − 4y + 1 = 0
(B) x² + y² − 6x − 2y + 1 = 0
(C) x² + y² − 4x − 2y + 1 = 0
(D) x² + y² − 2x − 6y + 1 = 0

MCQ 4.4
Let D : V → V be the differential operator D(f) = df/dx defined on a vector space V of polynomials in x over R. Then
(A) Ker D = set of polynomials of degree ≥ 1
(B) Ker D = set of polynomials of degree zero
(C) R(D) = 2V
(D) R(D) = V

SAQ 4.3
Let T : R³ → R³ be the linear transformation defined by
    T(x, y, z) = (y + z, x + 2y, x + z).
Find a basis and the dimension of R(T) and Ker T.

SAQ 4.4
Let V be the vector space of 2 × 2 matrices over R and let M = [1 0; 2 −1]. Let T : V → V be a linear transformation defined by
    T(A) = AM + A, ∀ A ∈ V.
Find a basis and the dimension of R(T) and Ker T.

4.3 Singular and nonsingular linear transformations

Singular linear mapping
Let U and V be finite dimensional vector spaces defined over the same field F. A linear mapping T : U → V is said to be singular if the image of some nonzero vector under T is the zero vector in V. This means that some u (≠ 0) ∈ U is mapped to T(u) = 0 ∈ V. Thus we say that T is singular if there exists a nonzero vector u ∈ U such that T(u) = 0.

Illustration. Consider the mapping T : R² → R² defined by
    T(x, y) = (x + y, 0)
⇒   T(1, −1) = (1 − 1, 0) = (0, 0)
Thus for the nonzero vector (1, −1) we have T(1, −1) = (0, 0) ∈ R² = codomain. This shows that the linear map T is singular.

Singular and nonsingular matrix
Let V be an n-dimensional vector space over a field F and let T : V → V be a linear map. An n-square matrix M is said to be singular iff the associated linear map T (via the standard basis) is singular; M is nonsingular iff the associated linear map T is nonsingular.
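A singular matrix can be detected numerically through its determinant. The sketch below uses the natural matrix of the illustrative map T(x, y) = (x + y, 0) from above, whose columns are T(e1) = (1, 0) and T(e2) = (1, 0).

```python
import numpy as np

# Natural matrix of T(x, y) = (x + y, 0) in the standard basis.
M = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# A square matrix is singular iff its determinant vanishes, i.e. iff the
# associated operator sends some nonzero vector to zero.
assert np.isclose(np.linalg.det(M), 0.0)

# The nonzero vector (1, -1) is indeed annihilated, as in the illustration.
v = np.array([1.0, -1.0])
assert np.allclose(M @ v, 0.0)
```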

Problem 4.7
Let a linear map T : U → V be given, where U and V are vector spaces over F. Show that T is singular if and only if kT is singular, where k (≠ 0) ∈ F.
Hint. (kT)u = k(Tu)

Solution. Let the linear map T be singular.
⇒   Tu = 0, for some u ≠ 0
Now by definition, we have
    (kT)u = k(Tu) = k(0) = 0, for some u ≠ 0
⇒   kT is singular
Conversely, assume that kT is singular.
⇒   (kT)w = 0 for some w ≠ 0
⇒   k(Tw) = 0, by the Hint
⇒   T(kw) = 0, ∵ T is linear
Now k ≠ 0 and w ≠ 0 ⇒ kw ≠ 0. Then the above shows that T is singular.

Problem 4.8
Let U be a finite dimensional vector space over F and let T : U → V be linear. Show that
    dim U = dim R(T) ⇔ T is an isomorphism.        (a1)
Hint. dim U = dim R(T) + dim Ker T

Solution. Noting the Hint, we have
    dim U = dim R(T) ⇔ dim Ker T = 0
                     ⇔ Ker T = {0}
                     ⇔ T is an isomorphism

MCQ 4.5
Let T : U → V be linear and singular. Then
(A) 2T is singular
(B) −2T is an isomorphism
(C) ½T is an isomorphism
(D) −½T is singular

MCQ 4.6
Consider any linear maps T1 and T2:
(a) T1 : R³ → R⁴ can be singular.
(b) T2 : R⁴ → R³ can be singular.
Choose the true statement from the following:
(A) Only (a) is true.
(B) Only (b) is true.
(C) Both are true.
(D) Both are false.

SAQ 4.5
Let T be a linear map on a vector space V over F such that T² = T. If T ≠ I, show that T is singular.

SUMMARY
A matrix is defined through a linear transformation on a vector space. The matrices associated with the sum of linear transformations, the product of a scalar and a linear transformation, and the product of linear transformations are discussed with illustrative examples. Finally, the range and kernel of a linear transformation are explained.

KEY WORDS: Matrix, Linear transformation, Range of a linear transformation, Kernel of a linear transformation

UNIT 04-05: EIGENVALUES AND EIGENVECTORS OF A LINEAR MAPPING

LEARNING OBJECTIVES
After successful completion of the unit, you will be able to
    Explain the concepts of eigenvalue and eigenvector of a linear transformation defined on a vector space
    Apply these concepts to evaluate the eigenvalues and eigenvectors of a given linear map

INTRODUCTION
The concepts of eigenvalues and corresponding eigenvectors were discussed earlier in reference to a square matrix. Since a matrix is associated with a linear transformation, these concepts can be reviewed for linear operators defined on vector spaces. This unit attends to that job. Unless otherwise stated, we assume that
(i) V is a finite dimensional vector space over F and V ≠ {0}
(ii) A(V), the space of all linear operators on V, has a unit element I
(iii) λI − T ∈ A(V), for λ ∈ F and T ∈ A(V)

5.1 Some definitions

Eigenvalue or characteristic root

Let T : V → V be a linear mapping on a vector space V over a field F. A scalar λ ∈ F is called an eigenvalue of T if there exists a nonzero vector v ∈ V such that

    T(v) = λv        (5.1)

An eigenvalue is also termed a characteristic value.

Eigenvector or characteristic vector
Every nonzero vector v ∈ V satisfying (5.1) is called an eigenvector of T belonging to the eigenvalue λ. An eigenvector is also termed a characteristic vector.

Comments on (5.1)
Let T : R² → R². Geometrically, (5.1) indicates that v and T(v) are collinear, i.e. they point in the same direction or in opposite directions depending on whether λ > 0 or λ < 0. In any case, the situation depicted in Fig 5.1 is prohibited by (5.1).

[Fig 5.1: the vectors v and T(v) drawn in non-collinear directions]

If every nonzero v is mapped as in Fig 5.1, then T has no eigenvalue and hence no eigenvector.

Illustration
Let T : V → V be the differential operator on the vector space of differentiable functions. Then
    T(e^{2x}) = d/dx e^{2x} = 2e^{2x}
This is of the form (5.1) with λ = 2 and v = e^{2x}. Hence 2 is an eigenvalue of T and the corresponding eigenvector is e^{2x}.

Problem 5.1
Let T : R² → R² be the linear map defined by
    T(x, y) = (x − y, y − x).        (a1)
Show that 0 and 2 are the eigenvalues of T. Find the associated eigenvectors.

Solution. From (a1), we have
    T(1, 1) = (1 − 1, 1 − 1) = (0, 0) = 0·(1, 1)
This is of the form (5.1), with v = (1, 1) and λ = 0. Hence 0 is an eigenvalue of T and (1, 1) is the corresponding eigenvector. Similarly, (a1) ⇒
    T(1, −1) = (2, −2) = 2(1, −1)
⇒ 2 is an eigenvalue of T and (1, −1) is the corresponding eigenvector.

Remark. One can convert the problem to a matrix formulation. First find the matrix associated with the given linear transformation T by considering the action of T on the standard basis vectors of R²:
    e1 = (1, 0) and e2 = (0, 1)
Then
    T(e1) = T(1, 0) = (1, −1) and T(e2) = T(0, 1) = (−1, 1)
The matrix associated with T is
    M(T) = [1 −1; −1 1]′ = [1 −1; −1 1]
Then the eigen equation |λI − M(T)| = 0 becomes
    |λ−1  1; 1  λ−1| = 0
⇒   (λ − 1)² − 1 = 0, i.e. λ² − 2λ = 0
⇒   λ = 0, 2
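The matrix formulation in the Remark above can be reproduced numerically with an eigensolver, using the natural matrix M(T) derived there.

```python
import numpy as np

# Natural matrix of T(x, y) = (x - y, y - x) from the Remark above.
M = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(M)

# The eigenvalues are 0 and 2 (in some order).
assert np.allclose(sorted(eigvals), [0.0, 2.0])

# Each returned column v satisfies T(v) = lam * v, i.e. eq. (5.1).
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(M @ v, lam * v)
```

Note that `np.linalg.eig` returns eigenvectors as the columns of its second output, normalized to unit length, so they are scalar multiples of the eigenvectors (1, 1) and (1, −1) found by hand.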

Problem 5.2
If v is an eigenvector belonging to the eigenvalue λ of an operator T, show that kv is also an eigenvector of T belonging to the eigenvalue λ, for any nonzero scalar k ∈ F.

Solution. Let v be an eigenvector belonging to the eigenvalue λ of an operator T : V → V. Then by definition,
    v ≠ 0 and T(v) = λv        (a1)
Let k be any nonzero scalar. Now we have
    T(kv) = k T(v), ∵ T is linear
          = k λv, by (a1)
          = λ(kv)
⇒ kv is an eigenvector belonging to the eigenvalue λ.

Problem 5.3
Let V be the space of all real valued continuous functions of a real variable. Define T : V → V by

    (Tf)(x) = ∫₀ˣ f(t) dt, ∀ f ∈ V, x ∈ R.        (a1)

Show that T has no eigenvalue.

Solution. We apply the method of contradiction. If possible, assume the contrary, that λ is an eigenvalue of T.
⇒   Tf = λf, for some f (≠ 0) ∈ V
⇒   (Tf)(x) = (λf)(x), ∀ x ∈ R
⇒   ∫₀ˣ f(t) dt = λ f(x), by (a1)        (a2)
Differentiating (a2) gives
    f(x) = λ f′(x), and λ ≠ 0, ∵ λ = 0 gives f = 0, a contradiction
⇒   f′(x)/f(x) = 1/λ
⇒   f(x) = c e^{x/λ}, on integration, where c is a constant
⇒   f(0) = c
⇒   f(x) = f(0) e^{x/λ}, and f(0) ≠ 0, for otherwise f = 0        (a3)
Substituting (a3) in (a2),
    ∫₀ˣ f(0) e^{t/λ} dt = λ f(0) e^{x/λ}
⇒   f(0) ∫₀ˣ e^{t/λ} dt = λ f(0) e^{x/λ}
⇒   ∫₀ˣ e^{t/λ} dt = λ e^{x/λ}, ∵ f(0) ≠ 0
⇒   [λ e^{t/λ}]₀ˣ = λ e^{x/λ}
⇒   λ(e^{x/λ} − 1) = λ e^{x/λ}
⇒   e^{x/λ} − 1 = e^{x/λ}, ∵ λ ≠ 0
⇒   −1 = 0

This is absurd, and hence the initial assumption that T has an eigenvalue must be wrong. Thus T has no eigenvalue.

Problem 5.4
Let λ ≠ 0 be an eigenvalue of an invertible linear mapping T : V → V. Show that λ⁻¹ is an eigenvalue of T⁻¹.

Solution. Since λ is an eigenvalue of T, ∃ v (≠ 0) ∈ V such that T(v) = λv.
⇒   v = T⁻¹(λv) = λ T⁻¹(v), ∵ T⁻¹ is linear
⇒   λ⁻¹ v = T⁻¹(v)
⇒   λ⁻¹ is an eigenvalue of T⁻¹
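Problem 5.4 admits a quick numerical check: the eigenvalues of the inverse matrix are the reciprocals of the original eigenvalues. The matrix below is a hypothetical invertible example with eigenvalues 2 and 4.

```python
import numpy as np

# A hypothetical invertible matrix with nonzero eigenvalues 2 and 4.
M = np.array([[2.0, 1.0],
              [0.0, 4.0]])

eigvals = np.linalg.eigvals(M)
inv_eigvals = np.linalg.eigvals(np.linalg.inv(M))

# Problem 5.4: the eigenvalues of T^{-1} are the reciprocals 1/2 and 1/4.
assert np.allclose(sorted(inv_eigvals), sorted(1.0 / eigvals))
```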

MCQ 5.1
Let T : R² → R² be the linear mapping which rotates each vector v ∈ R² by an angle θ ∈ (0, π). Choose the false statement(s) from the following:
(A) T has eigenvalue λ = 0.
(B) T has eigenvalue λ = π.
(C) T has no eigenvalue.
(D) T has an eigenvector for every λ ∈ R.

MCQ 5.2
Let D denote the derivative, which we view as a linear map on the space of differentiable functions, and let k be a nonzero integer. Then the eigenvectors of D² are
(A) sin x and sin kx
(B) cos x and cos kx
(C) k sin x and k cos x
(D) sin kx and cos kx        (SET 2013)

SAQ 5.1
Let T : R² → R² be the linear map defined by T(x, y) = (0, y). Show that 0 and 1 are the eigenvalues of T. Determine the corresponding eigenvectors.

SAQ 5.2
If v is an eigenvector of T ∈ A(V) corresponding to the eigenvalue λ, show that
(a) kλ is an eigenvalue of kT, ∀ k ∈ F.
(b) λ + μ is an eigenvalue of T + μI, where μ ∈ F.

Theorem 5.1
Let T : V → V be a linear transformation on a vector space over F. Then λ ∈ F is an eigenvalue of T ⇔ the operator λI − T is singular.        (5.2)

Hint. Tv = 0 for some v (≠ 0) ∈ V ⇒ T is singular

Proof. By definition, we have
    λ ∈ F is an eigenvalue of T ⇔ ∃ v (≠ 0) ∈ V such that T(v) = λv
        ⇔ λv − T(v) = 0
        ⇔ (λI)v − T(v) = 0
        ⇔ (λI − T)(v) = 0
        ⇔ the operator λI − T is singular.        QED

Eigenspace
Let λ be an eigenvalue of an operator T : V → V. Let Eλ be the set of all eigenvectors of T belonging to the eigenvalue λ, together with the zero vector, i.e.
    Eλ = {v ∈ V | T(v) = λv}        (5.3)
This set is called the eigenspace of λ.

Theorem 5.2
The eigenspace Eλ is a subspace of V.

Proof. By the definition in (5.3), Eλ is a subset of V. We show that Eλ is nonempty. We have T0 = 0 = λ0
⇒   0 ∈ Eλ, i.e. Eλ ≠ ∅
Let v, w ∈ Eλ. Then
    T(v) = λv and T(w) = λw        (5.4)
For any α, β ∈ F, we have
    T(αv + βw) = α T(v) + β T(w), ∵ T is linear
               = α(λv) + β(λw), by (5.4)
               = λ(αv + βw)
⇒ αv + βw is an eigenvector corresponding to the eigenvalue λ, or the zero vector
⇒ αv + βw ∈ Eλ, by the definition in (5.3)
Hence Eλ is a subspace of V.        QED

Problem 5.5
Show that the eigenspace of λ is the kernel of λI − T, where T : V → V.

Solution. The eigenspace of λ is
    Eλ = {v ∈ V | T(v) = λv}
       = {v ∈ V | T(v) = (λI)v}
       = {v ∈ V | (λI)(v) − T(v) = 0}
       = {v ∈ V | (λI − T)(v) = 0}
⇒   Eλ = Ker(λI − T)

Problem 5.6
Let the linear operator T : R² → R² be defined by
    T(x, y) = (y, x).        (a1)
Find all eigenvalues and a basis for each eigenspace.

Solution. We find the matrix representing T in (a1). Choose the standard basis vectors (1, 0) and (0, 1) for R². Then (a1) ⇒
    T(1, 0) = (0, 1) and T(0, 1) = (1, 0)
The matrix associated with T is
    M(T) = [0 1; 1 0]′ = [0 1; 1 0]
The eigen equation (λI − M(T))X = 0 becomes
    [λ −1; −1 λ][x; y] = [0; 0]        (a2)
Then the eigenvalues are determined from the equation
    |λ −1; −1 λ| = 0, i.e. λ² − 1 = 0
⇒   λ = 1, −1
We evaluate the corresponding eigenvectors.
For λ = 1, (a2) ⇒
    [1 −1; −1 1][x; y] = [0; 0]
⇒   [x − y; −x + y] = [0; 0]
⇒   x − y = 0, i.e. x = y
Taking y = 1, we get the eigenvector (x, y)′ = (1, 1)′. Thus the eigenspace of λ = 1 is generated by the basis (1, 1)′.
For λ = −1, (a2) ⇒
    [−1 −1; −1 −1][x; y] = [0; 0]
⇒   x + y = 0, i.e. x = −y
The basis vector becomes (1, −1)′. Then the basis for the eigenspace of λ = −1 is (1, −1)′.

Problem 5.7
Find all eigenvalues and a basis of each eigenspace of the linear mapping T : R³ → R³ defined by
    T(x, y, z) = (x + y + z, y + z, −z).        (a1)

Solution. The standard basis vectors for R³ are (1, 0, 0), (0, 1, 0) and (0, 0, 1). Then (a1) gives
    T(1, 0, 0) = (1, 0, 0)
    T(0, 1, 0) = (1, 1, 0)
    T(0, 0, 1) = (1, 1, −1)
Then the matrix associated with T is
    M(T) = [1 0 0; 1 1 0; 1 1 −1]′ = [1 1 1; 0 1 1; 0 0 −1]
The eigen equation becomes
    [λ−1  −1  −1;  0  λ−1  −1;  0  0  λ+1][x; y; z] = [0; 0; 0]        (a2)
The eigenvalues are given by
    |λ−1  −1  −1;  0  λ−1  −1;  0  0  λ+1| = 0
⇒   (λ − 1)²(λ + 1) = 0
⇒   λ = 1, −1
For λ = 1, (a2) gives
    [0 −1 −1; 0 0 −1; 0 0 2][x; y; z] = [0; 0; 0]
⇒   [−y − z; −z; 2z] = [0; 0; 0]
⇒   −y − z = 0, −z = 0, 2z = 0
⇒   y = 0, z = 0
Choosing x = 1, the eigenvector becomes (x, y, z)′ = (1, 0, 0)′. Hence a basis of the eigenspace of λ = 1 is {(1, 0, 0)}.
For λ = −1, (a2) ⇒
    [−2 −1 −1; 0 −2 −1; 0 0 0][x; y; z] = [0; 0; 0]
⇒   [−2x − y − z; −2y − z; 0] = [0; 0; 0]
⇒   −2x − y − z = 0, −2y − z = 0
⇒   y = 2x, z = −4x, and x is free to take any value
Choosing x = 1, the eigenvector becomes (x, y, z)′ = (1, 2, −4)′. Then a basis of the eigenspace of λ = −1 is {(1, 2, −4)}.
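The eigenspaces found in Problem 5.7 can be cross-checked with SymPy, whose `eigenvects` routine returns each eigenvalue with its algebraic multiplicity and an eigenspace basis.

```python
import sympy as sp

# Natural matrix of T(x, y, z) = (x + y + z, y + z, -z) from Problem 5.7.
M = sp.Matrix([[1, 1, 1],
               [0, 1, 1],
               [0, 0, -1]])

# eigenvects() yields tuples (eigenvalue, multiplicity, eigenspace basis).
results = {lam: vecs for lam, mult, vecs in M.eigenvects()}

# lam = 1 has a one-dimensional eigenspace spanned by (1, 0, 0)'.
assert len(results[1]) == 1
assert list(results[1][0]) == [1, 0, 0]

# lam = -1 also has a one-dimensional eigenspace; SymPy's basis vector is
# a scalar multiple of (1, 2, -4)', so it satisfies M v = -v.
v = results[-1][0]
assert M * v == -1 * v
```

Notice that the eigenvalue λ = 1 has algebraic multiplicity 2 but a one-dimensional eigenspace, so this operator is not diagonalizable.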

MCQ 5.3
Let 0 be an eigenvalue of a linear operator T : V → V. Then
(A) T is one-to-one.
(B) T is singular.
(C) Ker T = {0}.
(D) Ker T ≠ {0}.

MCQ 5.4
Let v and w be eigenvectors of T corresponding to two distinct eigenvalues λ1 and λ2 respectively. Then
(A) For nonzero scalars α1, α2, the vector α1v + α2w is not an eigenvector of T
(B) For all scalars α1, α2, the vector α1v + α2w is not an eigenvector of T
(C) α1v + α2w is an eigenvector of T if α1 = α2
(D) α1v + α2w is an eigenvector of T if α1 = −α2        (SET 2011)

SAQ 5.3
Find all eigenvalues and a basis of each eigenspace of the linear operator T : R² → R² defined by
    T(x, y) = (2x + y, x + 2y).        (a1)

SAQ 5.4
Find all eigenvalues and a basis of each eigenspace of the linear mapping T : R³ → R³ defined by
    T(x, y, z) = (x + y, y + z, −2y − z).        (a1)

5.2 Linearly independent eigenvectors
There is an important result which states that nonzero eigenvectors belonging to distinct eigenvalues are linearly independent. This is established in the following theorem.

Theorem 5.3
If v1, …, vn are nonzero eigenvectors of an operator T : V → V corresponding to distinct eigenvalues λ1, …, λn, then v1, …, vn are linearly independent.

Proof. Let v1, …, vn be nonzero eigenvectors belonging to the distinct eigenvalues λ1, …, λn of T respectively.
⇒   T(vi) = λi vi, vi ≠ 0, i = 1, …, n        (5.5)
We prove by induction that v1, …, vn are linearly independent.
For n = 1 the set is {v1}. Then for a1 ∈ F, a1v1 = 0 ⇒ a1 = 0, ∵ v1 ≠ 0. Hence {v1} is linearly independent over F, and the theorem is true for n = 1.
Assume that it is true for n − 1 vectors, i.e. the set {v1, …, v_{n−1}} is linearly independent. Then for scalars a1, …, a_{n−1} ∈ F,
    a1v1 + a2v2 + … + a_{n−1}v_{n−1} = 0 ⇒ a1 = a2 = … = a_{n−1} = 0        (5.6)
Consider
    a1v1 + a2v2 + … + a_{n−1}v_{n−1} + an vn = 0, all ai ∈ F        (5.7)
⇒   T(a1v1 + a2v2 + … + a_{n−1}v_{n−1} + an vn) = T(0)
⇒   a1 T(v1) + a2 T(v2) + … + a_{n−1} T(v_{n−1}) + an T(vn) = 0, ∵ T is linear
⇒   a1λ1v1 + a2λ2v2 + … + a_{n−1}λ_{n−1}v_{n−1} + anλn vn = 0, by (5.5)        (5.8)
Applying λn × (5.7) − (5.8), we get
    (λn − λ1)a1v1 + (λn − λ2)a2v2 + … + (λn − λ_{n−1})a_{n−1}v_{n−1} = 0
Since {v1, …, v_{n−1}} is linearly independent, the above gives
    (λn − λ1)a1 = 0, (λn − λ2)a2 = 0, …, (λn − λ_{n−1})a_{n−1} = 0        (5.9)
But all λi are distinct, hence
    λn − λ1 ≠ 0, λn − λ2 ≠ 0, …, λn − λ_{n−1} ≠ 0
Then (5.9) ⇒ a1 = 0, a2 = 0, …, a_{n−1} = 0.
With these values in (5.7), we get
    an vn = 0 ⇒ an = 0, ∵ vn ≠ 0
Thus (5.7) ⇒ a1 = 0, a2 = 0, …, a_{n−1} = 0, an = 0
⇒   v1, v2, …, vn are linearly independent.        QED
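Theorem 5.3 can be illustrated numerically: for a matrix with distinct eigenvalues, the eigenvector matrix returned by an eigensolver has full rank. The matrix below is a hypothetical example with eigenvalues 1, 2, −2.

```python
import numpy as np

# A hypothetical operator on R^3 with three distinct eigenvalues 1, 2, -2
# (an upper triangular matrix, so its eigenvalues sit on the diagonal).
M = np.array([[1.0, 5.0, 0.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, -2.0]])

eigvals, eigvecs = np.linalg.eig(M)
assert np.allclose(sorted(eigvals), [-2.0, 1.0, 2.0])

# Theorem 5.3: eigenvectors belonging to distinct eigenvalues are linearly
# independent, so the matrix with these vectors as columns has full rank.
assert np.linalg.matrix_rank(eigvecs) == 3
```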

Problem 5.8
Let T : V → V be a linear map on a vector space over F. If dim V = n, show that T can have at most n distinct eigenvalues in F.

Solution. Let dim V = n. Then any basis of V has n elements, so any set of linearly independent vectors in V can have at most n elements. By Thm 5.3, any set of distinct eigenvalues of T gives rise to a corresponding set of linearly independent eigenvectors in V. Hence T can have at most n distinct eigenvalues in F.

Problem 5.9
Let T : V → V be a linear map on a vector space over F. If dim V = n, and if T has n distinct eigenvalues in F, show that there is a basis of V over F which consists of eigenvectors of T.

Solution. Let dim V = n and suppose that T has n distinct eigenvalues λ1, …, λn in F. Then by Thm 5.3, the eigenvectors v1, …, vn of T belonging to λ1, …, λn respectively are linearly independent. Since dim V = n, the vectors v1, …, vn form a basis of V over F. Here v1, …, vn are eigenvectors of T.

Problem 5.10
Let V be a finite-dimensional vector space over a field F. Prove that a linear mapping T : V → V is regular if and only if whenever v1, …, vn ∈ V are linearly independent, then Tv1, …, Tvn are also linearly independent.

Hint. (i) {0} is linearly dependent.
(ii) T is regular ⇔ (Tv = 0 ⇒ v = 0) ⇔ (v ≠ 0 ⇒ Tv ≠ 0)

Solution. Necessary part. Assume that T is regular and let v1, …, vn ∈ V be linearly independent over F. We show that Tv1, …, Tvn are linearly independent. Let
    α1Tv1 + … + αnTvn = 0, αi ∈ F        (a1)
⇒   T(α1v1 + … + αnvn) = 0, ∵ T is linear
⇒   α1v1 + … + αnvn = 0, ∵ T is regular
⇒   α1 = … = αn = 0, ∵ v1, …, vn are linearly independent
Hence (a1) ⇒ all α's are zero, i.e. Tv1, …, Tvn are linearly independent.

Sufficient part. Suppose that whenever v1, …, vn are linearly independent, Tv1, …, Tvn are linearly independent. Let v (≠ 0) ∈ V. Then {v} is linearly independent, so {Tv} is linearly independent.
⇒   Tv ≠ 0, ∵ the set consisting of the zero vector alone is dependent
Thus v (≠ 0) ∈ V ⇒ Tv ≠ 0. Hence T is regular.

MCQ 5.5

Let T : R³ → R³ be a linear transformation such that the eigenvalues of T are 1, 2, −2. Then the maximum number of linearly independent eigenvectors of T is
(A) 4 (B) 3 (C) 5 (D) 2
(SET 2002)

MCQ 5.6

Let v₁, v₂, …, vᵣ be eigenvectors corresponding to eigenvalues c₁, …, cᵣ respectively, of a linear transformation.
(A) If v₁, …, vᵣ are distinct, then c₁, …, cᵣ are distinct
(B) If c₁, …, cᵣ are distinct, then v₁, …, vᵣ are linearly independent
(C) If v₁, …, vᵣ are linearly independent, then c₁, …, cᵣ are distinct
(D) The vectors v₁, …, vᵣ are linearly independent iff c₁, …, cᵣ are distinct

SAQ 5.5

If V(λ) denotes the set of eigenvectors, including the zero vector, corresponding to the eigenvalue λ, prove that V(λ) is a subspace of Vₙ, and that if V(λ₁), V(λ₂) are two subspaces corresponding to two distinct eigenvalues λ₁ and λ₂, then V(λ₁) ∩ V(λ₂) = {0}.

Hint. V(λ) = K(λ).
X ∈ V(λ₁) ∩ V(λ₂) ⇒ AX = λ₁X and AX = λ₂X ⇒ (λ₁ − λ₂)X = 0
⇒ X = 0, ∵ λ₁ − λ₂ ≠ 0 ⇒ V(λ₁) ∩ V(λ₂) = {0}.
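The eigenspace V(λ) of SAQ 5.5 is the null space of A − λI, so it can be computed numerically. The sketch below (numpy; the matrix and the `eigenspace_basis` helper are our own assumptions, not from the text) checks that bases of V(λ₁) and V(λ₂) for distinct eigenvalues stack to full rank, so the intersection is {0}.

```python
import numpy as np

def eigenspace_basis(A, lam, tol=1e-8):
    # V(lam) is the null space of (A - lam*I); take the right singular
    # vectors belonging to near-zero singular values as an orthonormal basis.
    n = A.shape[0]
    _, s, vh = np.linalg.svd(A - lam * np.eye(n))
    return vh[s <= tol].T   # columns form a basis of V(lam)

# Symmetric matrix with eigenvalues 4, 1, 1 (an example chosen for the check).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

B1 = eigenspace_basis(A, 1.0)   # dim V(1) = 2
B2 = eigenspace_basis(A, 4.0)   # dim V(4) = 1

# V(1) ∩ V(4) = {0}: stacking both bases gives full column rank 2 + 1 = 3.
combined = np.hstack([B1, B2])
print(np.linalg.matrix_rank(combined))  # 3
```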

5.3 Polynomials of linear transformations

Consider a polynomial p(t) over a field F defined by

p(t) = aₙtⁿ + aₙ₋₁tⁿ⁻¹ + ⋯ + a₁t + a₀   (5.10)

where all aᵢ ∈ F. Let T : V → V be a linear operator on a vector space V over F. We define a polynomial of T as

p(T) = aₙTⁿ + aₙ₋₁Tⁿ⁻¹ + ⋯ + a₁T + a₀I   (5.11)

where I is the identity mapping. If p(T) = 0, then T is called a root of p(t) = 0 or a zero of p(t).

Remark. If T is associated with a square matrix A, then

p(A) = aₙAⁿ + ⋯ + a₁A + a₀I   (5.12)

defines the polynomial of a square matrix A. Here I is the unit matrix.

Problem 5.11

defines the polynomial of a square matrix A. Here I is the unit matrix. Problem 5.11

Let T : R 2 o R 2 be a linear operator defined by T ( x, y ) ( x  y,  x).

(a1)

p (t ) t 2  t  3.

(a2)

p (T ) T 2  T  3I

(a3)

Consider a polynomial over R :

Find p (T ) ( x, y ). Solution. We have

Now

T 2 ( x, y ) T ( T ( x , y ) ) T ( ( x  y ,  x) ), by (a1) ( x  y  x,  x  y ), by (a1) ( y,  x  y )

108

Then (a3) Ÿ

p (T )( x, y ) ( y ,  x  y )  T ( x, y )  3I ( x, y ) ( y ,  x  y )  ( x  y ,  x)  3( x, y ),  I ( x, y ) ( x, y ) ( y  x  y  3 x,  x  y  x  3 y ) ( 2 x, 2 y ) 2( x , y )
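The coordinate computation above can be cross-checked with the matrix of T in the standard basis. A hedged numpy sketch (the matrix representation is our own choice, not part of the original solution):

```python
import numpy as np

# Matrix of T(x, y) = (x + y, -x) with respect to the standard basis
# (a representation we introduce here for checking).
A = np.array([[1.0, 1.0],
              [-1.0, 0.0]])

pA = A @ A - A + 3 * np.eye(2)   # p(T) = T^2 - T + 3I

print(pA)   # 2I, matching p(T)(x, y) = 2(x, y)
```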

Problem 5.12

Let V be the vector space of functions which has {e^{2x}, e^{−2x}} as a basis. Let D be the differential operator on V. Prove that D is a zero of the polynomial

p(t) = t² − 4.   (a1)

Solution. Here we have p(D) = D² − 4I. Operating p(D) on the basis vectors, we get

p(D)(e^{2x}) = (D² − 4I)(e^{2x}) = D²(e^{2x}) − 4I(e^{2x}) = 4e^{2x} − 4e^{2x} = 0, ∵ D = d/dx

and

p(D)(e^{−2x}) = (D² − 4I)(e^{−2x}) = D²(e^{−2x}) − 4I(e^{−2x}) = 4e^{−2x} − 4e^{−2x} = 0, ∵ De^{−2x} = −2e^{−2x}, D²e^{−2x} = (−2)(−2)e^{−2x} = 4e^{−2x}

⇒ each basis vector is mapped into 0 by p(D).

But every v ∈ V is expressed as a linear combination of the basis vectors. Hence each v ∈ V is mapped into 0 by p(D)
⇒ p(D) = 0
⇒ D is a zero of p(t).
QED
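The same conclusion can be checked through a matrix representation: on the given basis D acts by scaling, so it is represented by a diagonal matrix, and p(D) = D² − 4I comes out as the zero matrix. A small numpy sketch (this matrix form is our own assumption for checking; the text argues directly on the basis vectors):

```python
import numpy as np

# On the basis {e^{2x}, e^{-2x}} the operator D acts by D e^{2x} = 2 e^{2x}
# and D e^{-2x} = -2 e^{-2x}, so its matrix is diagonal.
D = np.diag([2.0, -2.0])

pD = D @ D - 4 * np.eye(2)   # p(D) = D^2 - 4I

print(pD)   # the zero matrix, so D is a zero of p(t) = t^2 - 4
```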

MCQ 5.7

Let V be the vector space of polynomials v(x) = ax² + bx + c. Let T : V → V be the differential operator. Consider a polynomial p(t) = t² + t + 1 such that

p(T)(v(x)) = αx² + βx,

where α, β are known in terms of a, b and c. Then the point (a, b, c) lies on the plane

(A) 2x − y − z = 0
(B) 2x − y + z = 0
(C) 2x + y + z = 0
(D) 2x + y − z = 0

SAQ 5.6

Find p(T)(x, y) if T : R² → R² is defined by
(i) T(x, y) = (x, 0) and (ii) T(x, y) = (y, x),
where p(t) = t² − 2t + 2.

Theorem 5.4

Let T : V → V be a linear operator on the vector space V over F. If λ ∈ F is an eigenvalue of T, then for any polynomial p(t) over F, p(λ) is an eigenvalue of p(T).

Proof. Let λ ∈ F be an eigenvalue of T
⇒ Tv = λv for some v (≠ 0) ∈ V   (5.13)

Consider a polynomial p(t) = aₙtⁿ + ⋯ + a₁t + a₀, all aᵢ ∈ F
⇒ p(λ) = aₙλⁿ + ⋯ + a₁λ + a₀   (5.14)
and p(T) = aₙTⁿ + ⋯ + a₁T + a₀I
⇒ p(T)(v) = (aₙTⁿ + ⋯ + a₁T + a₀I)(v) = aₙTⁿ(v) + ⋯ + a₁T(v) + a₀I(v)   (5.15)

Now
a₀I(v) = a₀v
T(v) = λv
T²(v) = T(T(v)) = T(λv) = λT(v) = λ(λv) = λ²v
⋮
Tⁿ(v) = λⁿv

With these values in (5.15), we get
p(T)(v) = aₙλⁿv + ⋯ + a₁λv + a₀v = (aₙλⁿ + ⋯ + a₁λ + a₀)(v) = p(λ)(v), by (5.14), for v (≠ 0) ∈ V
⇒ p(λ) is an eigenvalue of p(T).

QED

MCQ 5.8

Let V be the vector space of all real polynomials of degree ≤ 10. Let Tp(x) = p′(x) for p ∈ V be a linear transformation from V to V. Consider the basis {1, x, x², …, x¹⁰} of V. Let A be the matrix of T with respect to this basis. Then
(A) Trace A = 1
(B) det A = 0
(C) There is no m ∈ N such that Aᵐ = 0
(D) A has a non-zero eigenvalue
(NET 2016)

SAQ 5.7

If λ is an eigenvalue of a linear operator T, then show that λⁿ is an eigenvalue of Tⁿ.

SUMMARY

The concepts of eigenvalue and eigenvector of a linear operator on a vector space are explained. The procedure for obtaining these quantities and a basis of the eigenspace is demonstrated through solved examples. The idea of a polynomial of a linear transformation is introduced.

KEY WORDS

Eigenvalue, Eigenvector, Eigenspace, Polynomial of a linear mapping

ANSWERS: CREDITS 1-4

CREDIT 01 (1-105)

UNIT 01-01 (1-34)
MCQ 1.1: (C)

MCQ 1.2: (C)

MCQ 1.3: (A)

MCQ 1.4: (D)

MCQ 1.5: (D)

MCQ 1.6: (A)

MCQ 1.7: (C)

MCQ 1.8: (A)

MCQ 1.9: (B)

MCQ 1.10: (D)

MCQ 1.11: (C)
SAQ 1.1: A = [aᵢⱼ] such that aᵢⱼ = 0 or 1, ∀ i, j

SAQ 1.2: Hermitian: C, E; Skew-Hermitian: D, E
SAQ 1.3: a = 3, b = c = d = 4
SAQ 1.7: a = 0, b = 1, c = 7

SAQ 1.8: M
SAQ 1.9: A² = −I, A³ = −A, A⁴ = I; A⁴ⁿ = I, A⁴ⁿ⁺¹ = A, A⁴ⁿ⁺² = −I, A⁴ⁿ⁺³ = −A, n ∈ N
SAQ 1.10: 3, 2, 2
SAQ 1.13: 2
SAQ 1.14: a = 2, b = 2, c = 1 or a = 2, b = 2, c = −5/7

UNIT 01-02 (35-51) MCQ 2.1: (C)

MCQ 2.2: (A)

MCQ 2.3: (C)

MCQ 2.4: (D)

MCQ 2.5: (B)

MCQ 2.6: (B), (C)

MCQ 2.7: (D)

MCQ 2.8: (C)

MCQ 2.9: (B)

SAQ 2.6: adj A = [4 8 20; 12 4 16; 4 4 8], adj(adj A) = 16 A

UNIT 01-03 (53-78)
MCQ 3.1: (C)
MCQ 3.2: (B)

MCQ 3.3: (A)

MCQ 3.4: (D)

MCQ 3.5: (B)

MCQ 3.6: (C)

MCQ 3.7: (A), (D)

MCQ 3.8: (C)

SAQ 3.2: k  R

SAQ 3.3: [−1 −3 3 −1; 1 1 −1 0; 2 −5 2 −3; 1 1 0 1]
SAQ 3.5: (1/35) [4 11 −5; −1 −6 25; 6 1 −10]
SAQ 3.8: (a/(a² + b²)) [1 0; 0 1] − (b/(a² + b²)) [0 −1; 1 0]
SAQ 3.9: (a) [0 0 1; sin α cos α 0; cos α sin α 0]  (b) [1 2 −1; 4 −7 4; 4 −9 5]

UNIT 01-04 (79-91) MCQ 4.1: (C)

MCQ 4.2: (D)

MCQ 4.3: (A)

SAQ 4.1: A⁻¹ = [8 −1 −3; −5 1 2; 10 −1 −4]
SAQ 4.2: P = [1 −1 0; −2 3 0; 0 0 1], Q = [1 0 0; 0 1 −4; 0 1 −3], A⁻¹ = [1 1 0; −2 3 −4; 2 3 −3]
SAQ 4.3: [8 −1 −3; −5 1 2; 10 −1 −4]
SAQ 4.4: [I₃ 0; 0 0]

UNIT 01-05 (94-105) MCQ 5.1: (A)

MCQ 5.2: (C), (D)

MCQ 5.3: (D)
SAQ 5.1: ρ(A) = ρ(B) = 1
SAQ 5.2: A ~ [I₂ 0; 0 0], ρ(A) = 2
SAQ 5.3: ρ(A) = 3

CREDIT 2

UNIT 01 (1-32)
MCQ 1.1: (A), (B)

MCQ 1.2: (D)

MCQ 1.3: (D)

MCQ 1.4: (C)

MCQ 1.5: (C)

MCQ 1.6: (B)

MCQ 1.7: (A), (B)

MCQ 1.8: (A), (C)

MCQ 1.9: (B), (D)
SAQ 1.1: (a) 2, 1, 0  (b) 1, 2, −1
SAQ 1.2: 1, 1, 1
SAQ 1.3: (a) x1 = 1, x2 = x3 = 4  (b) consistent, x = a + 2, z = 3 − a
SAQ 1.4: Consistent
SAQ 1.5: Consistent, x = 1, y = 2, z = 4
SAQ 1.6: k = 1, x = 1 − a, y = 2a, z = a; k = 3, x = a + 1, y = 2 − 2a, z = a, i.e. infinite number of solutions
SAQ 1.7: (a) Inconsistent, no solution  (c) Inconsistent, no solution  (d) Inconsistent, no solution
SAQ 1.8: x = −a, y = z = a
SAQ 1.9: x = at/c, y = bt/c, z = t; y = 0 when t = 0
SAQ 1.10: (a) x = 3a, z = a  (b) x = 6a, y = 2a − b, z = b

UNIT 02 (33-42) MCQ 2.1: (C)

MCQ 2.2: (B)

MCQ 2.3: (D)

MCQ 2.4: (B)

SAQ 2.1: (a) Linearly independent  (b) Linearly independent

UNIT 03 (43-69) MCQ 3.1: (C)

MCQ 3.2: (D)

MCQ 3.3: (B)

MCQ 3.4: (A)

MCQ 3.5: (D)

MCQ 3.6: (C)

MCQ 3.7: (B)

MCQ 3.8: (A), (C)

MCQ 3.9: (C)

MCQ 3.10: (D)

MCQ 3.11 (B)

MCQ 3.12: (C)

SAQ 3.4: λ = 8, X = [2 −1 1]′
SAQ 3.5: λ = 0, X = [2 −1 2]′
SAQ 3.6: λ = 1, 3; [1 1]′, [5 1]′
SAQ 3.7: λ = 1, 1, 1; [0 −1 Z]′, [0 1 Z]′, [1 1 Z]′
SAQ 3.8: [1 0 −2]′, [0 1 1]′

UNIT 04 MCQ 4.1: (C)

MCQ 4.2: (A), (D)

MCQ 4.3: (A)
SAQ 4.1: −4A + 5I
SAQ 4.3: A⁻¹ = (1/4) [3 1 −1; 1 3 1; −1 1 3]
SAQ 4.4: (a) [2 −3; −3 5]  (b) (1/9) [0 3 3; 3 2 −7; 3 −1 −1]
SAQ 4.5: M⁴ = [8 0 8; 0 1 0; 8 0 8]

UNIT 05 (85-105) MCQ 5.1: (A), (C), (D)

MCQ 5.2: (B), (C)

MCQ 5.3: (C)

MCQ 5.4: (A)

SAQ 5.1: (i) [1 0 0; 0 2 0; 0 0 3]  (ii) [5 0 0; 0 −3 0; 0 0 −3]  (iii) [1 0 0; 0 2 0; 0 0 −2]
SAQ 5.2: B = [2 1; 1 −2], B⁻¹AB = [5 0; 0 −5]
SAQ 5.3: [14 13 13; 0 1 0; 13 13 14]
SAQ 5.5: x1² − 8x2² − 6x3² − 2x1x2 − 6x1x3 − 10x2x3

CREDIT 3

UNIT 01 (1-27)
MCQ 1.1: (C)

MCQ 1.2: (B)

MCQ 1.3: (A), (D)

MCQ 1.4: (B)

MCQ 1.5: (C)

MCQ 1.6: (D)

MCQ 1.7: (A)

UNIT 02 (29-41) MCQ 2.1: (D)

MCQ 2.2: (B), (C)

MCQ 2.3: (A)

MCQ 2.4: (C)

MCQ 2.5: (D)

MCQ 2.6: (B)

SAQ 2.3: (a) and (c) are subspaces but (b) is not a subspace

UNIT 03 (43-62)
MCQ 3.1: (C), (D)

MCQ 3.2: (A), (B)

MCQ 3.3: (A)

MCQ 3.4: (B), (C)

MCQ 3.5: (B)
SAQ 3.1: (5, 3, 8) = 2(1, 2, 3) + 1(3, −1, 2)
SAQ 3.2: a = 1

UNIT 04 (63-80) MCQ 4.1: (B)

MCQ 4.2: (C)

MCQ 4.3: (A)

MCQ 4.4: (C)

MCQ 4.5: (B)

MCQ 4.6: (A)

MCQ 4.7: (B), (C), (D)

MCQ 4.8: (A)

SAQ 4.3: k = 1
SAQ 4.7: {(1, 1, 2), (1, −1, 1), (1, 0, 1)}

SAQ 4.9: (i) LI  (ii) LI
SAQ 4.10: LI

UNIT 05 (81-111)
MCQ 5.1: (B)

MCQ 5.2: (A)

MCQ 5.3: (A)

MCQ 5.4: (B)

MCQ 5.5: (D)

MCQ 5.6: (A)

MCQ 5.7: (C)

MCQ 5.8: (A)

MCQ 5.9: (B)

MCQ 5.10: (A)

MCQ 5.11: (C)

MCQ 5.12: (A)

MCQ 5.13: (B)

MCQ 5.14: (A)

MCQ 5.15: (D)

MCQ 5.16: (A)

MCQ 5.17: (C)

MCQ 5.18: (B)

MCQ 5.19: (B)

MCQ 5.20: (B)

MCQ 5.21: (B)

MCQ 5.22: (A)

MCQ 5.23: (B)
SAQ 5.1: (i) No  (ii) No
SAQ 5.2: (1, 1, 1, 0), (1, 2, 3, 4), (0, 0, 1, 0), (0, 0, 0, 1)
SAQ 5.3: No
SAQ 5.7: S ∩ T = [(0, 5, −2)], S + T = V₃; basis for S ∩ T is {(0, 5, −2)}, basis for S + T is {(0, 1, 0), (0, 0, 1), (1, 2, 0)}
SAQ 5.8: 2, 3, 4
SAQ 5.9: (b − c, b, a − 2b − c)
SAQ 5.10: (2, 5, −1, 4)

CREDIT 4

UNIT 01 (1-30)
MCQ 1.1: (A)

MCQ 1.2: (C)

MCQ 1.3: (C)

MCQ 1.4: (B), (C)

MCQ 1.5: (C)

MCQ 1.6: (B)

MCQ 1.7: (A)

MCQ 1.8: (D)

MCQ 1.9: (B)

MCQ 1.10: (C)

MCQ 1.11: (C)

MCQ 1.12: (B)

MCQ 1.13: (C)

MCQ 1.14: (D)

MCQ 1.15: (B)

MCQ 1.16: (B), (D)

SAQ 1.1: ((x + y)/2, (x + y)/2, 0, 0)
SAQ 1.2: No
SAQ 1.4: T(x, y) = ((5y − x)/3, (4x − 2y)/3)
SAQ 1.5: 5a − 2b
SAQ 1.8: R(T) = [(1, 0), (1, 1)] = V₂, r(T) = 2
SAQ 1.9: R(T) = [(0, 1, 0, 2), (0, 1, 1, 0)], N(T) = [(2, 1, 1)], dim R(T) = 2, dim N(T) = 1
SAQ 1.10: R(T) = [(1, 1, −1), (1, −1, 1)], N(T) = [(1, −2, 1)], r(T) = 2
SAQ 1.11: R(T) = [(2, 1), (0, 1)] = V₂, N(T) = [(1, 1, −2)], r(T) = 2, n(T) = 1
SAQ 1.12: R(T) = [(1, 1, 0), (0, 1, 1)], ker T = {(0, 0, 0)}, r(T) = 2, nullity = 0, T is 1-1 and not onto

UNIT 02 (31-52) MCQ 2.1: (B)

MCQ 2.2: (C)

MCQ 2.3: (B)

MCQ 2.4: (D)

MCQ 2.5: (A), (B), (C)

MCQ 2.6: (A), (D)

MCQ 2.7: (D)
SAQ 2.1: (ii) ad − bc ≠ 0
SAQ 2.2: T⁻¹(x1, x2, x3) = ((1/2)x1, 2x1 − x2, 5x1 − 3x2 − x3)
SAQ 2.3: T⁻¹(x1, x2, x3) = (1/5)(−4x1 − x2 − 13x3, 4x1 − 4x2 − 13x3, −x1 − x2 − 2x3)
SAQ 2.4: T⁻¹(x, y) = (x cos θ + y sin θ, −x sin θ + y cos θ)
SAQ 2.5: (3x1 − 3x2 − 8x3, 4x1 − x2, 2x1 − 2x2 − 9x3), (2x1 − x2 − x3, −x1 − x3, x1 − x3)
SAQ 2.6: (12x1 − x2 − x3, −x1 − x3, x1 − x3), (2x2, −2x1, 2x1 − x2 − x3)

UNIT 03 (53-73) MCQ 3.1: (B)

MCQ 3.2: (A)

MCQ 3.3: (A)

MCQ 3.4: (C)

MCQ 3.5: (A)

MCQ 3.6: (D)

MCQ 3.7: (B)

MCQ 3.8: (C)

MCQ 3.9: (D)

MCQ 3.10: (C)

SAQ 3.1: [T : B1, B2] = [1 2; 0 1; 1 3]
SAQ 3.2: (1/2) [2 1 −1; 0 −1 1; 2 0 2]
SAQ 3.3: [T : B1, B2] = (1/11) [7 3 4; 8 16 3]
SAQ 3.4: [T] = [1 1 0 0; 0 0 0 0; 0 1 0 1; 0 1 0 0; 0 0 1 1; 0 0 0 0]
SAQ 3.5: T is linear, T(x, y) = (y, x), [T] = [0 1; 1 0]
SAQ 3.7: [3S − 4T] = [−1 −5 0; 4 −2 3; 0 9 5; 2 4 −1]
SAQ 3.8: [T⁻¹] = [cos θ sin θ; −sin θ cos θ]

UNIT 04 (75-94)
MCQ 4.1: (A)

MCQ 4.2: (A), (C), (D)

MCQ 4.3: (C)

MCQ 4.4: (B), (D)

MCQ 4.5: (A), (D)

MCQ 4.6: (B)

SAQ 4.1: (i) [1 1 1; 0 1 0; 0 1 1]  (ii) [3 −3 2; 0 2 0; 0 −3 −3]  (iii) [0 1 1; 0 0 0; 0 0 0]  (iv) [0 1 1; 0 0 0; 0 1 0]
SAQ 4.3: dim R(T) = 3 and Ker T = {0}
SAQ 4.4: T is not linear because T(0) = I ≠ 0

UNIT 05 (95-112) MCQ 5.1: (A), (C), (D)

MCQ 5.2: (A), (B), (C), (D)

MCQ 5.3: (B), (D)

MCQ 5.4: (A)

MCQ 5.5: (B)

MCQ 5.6: (B)

MCQ 5.7: (C)

MCQ 5.8: (B)

SAQ 5.3: λ = 1, 3; {(1, −1)}, {(1, 1)}
SAQ 5.4: λ = 1, {(1, 0, 0)}; other eigenvalues are imaginary and hence do not exist in R
SAQ 5.6: (i) p(T)(x, y) = (x, 2y)  (ii) p(T)(x, y) = (3x − 2y, 3y − 2x)

