
MATHEMATICAL METHODS IN PHYSICS

Vector Algebra and Vector Calculus

Product of Two Vectors

(a) Scalar product: $\vec{a} \cdot \vec{b} = ab\cos\theta$, also known as the dot product.
If $\vec{a} \cdot \vec{b} = 0$, the two vectors are mutually perpendicular (orthogonal).

(b) Vector product: $\vec{a} \times \vec{b} = ab\sin\theta\,\hat{n}$, where $\hat{n}$ is the unit vector normal to the plane of $\vec{a}$ and $\vec{b}$. Also $\vec{a} \times \vec{b} = -\vec{b} \times \vec{a}$.
If $\vec{a} \times \vec{b} = 0$, the two vectors are collinear (parallel or antiparallel); in particular $\vec{a} \times \vec{a} = 0$.
If $\vec{a} \times \vec{b} = ab\,\hat{n}$, the two vectors are perpendicular.

In terms of the unit vectors,
$$\vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix}$$

If OACB is a parallelogram with adjacent sides $\vec{A}$ and $\vec{B}$, then $\vec{A} \times \vec{B}$ has scalar magnitude equal to the area of the parallelogram.

Product of Three Vectors

(a) Scalar triple product:
$$\vec{a} \cdot (\vec{b} \times \vec{c}) = \begin{vmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{vmatrix}$$
It is unchanged under cyclic permutation: $\vec{a} \cdot (\vec{b} \times \vec{c}) = \vec{b} \cdot (\vec{c} \times \vec{a}) = \vec{c} \cdot (\vec{a} \times \vec{b})$.
For coplanar vectors, $\vec{a} \cdot (\vec{b} \times \vec{c}) = 0$.
For the unit vectors $\hat{i}$, $\hat{j}$, $\hat{k}$: $\hat{i} \cdot (\hat{j} \times \hat{k}) = \hat{j} \cdot (\hat{k} \times \hat{i}) = \hat{k} \cdot (\hat{i} \times \hat{j}) = 1$.
The scalar triple product of three vectors equals the volume of the parallelepiped having the three vectors as concurrent edges.

(b) Vector triple product: $\vec{a} \times (\vec{b} \times \vec{c}) = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c}$

Vector Differentiation

$$\frac{d\vec{r}}{dt} = \frac{ds}{dt}\,\frac{d\vec{r}}{ds}$$
$$\frac{d}{dt}(\vec{r}_1 + \vec{r}_2) = \frac{d\vec{r}_1}{dt} + \frac{d\vec{r}_2}{dt}$$
$$\frac{d}{dt}(\vec{r}_1 \cdot \vec{r}_2) = \frac{d\vec{r}_1}{dt} \cdot \vec{r}_2 + \vec{r}_1 \cdot \frac{d\vec{r}_2}{dt}$$
$$\frac{d}{dt}(\vec{r}_1 \times \vec{r}_2) = \frac{d\vec{r}_1}{dt} \times \vec{r}_2 + \vec{r}_1 \times \frac{d\vec{r}_2}{dt}$$
If $\vec{r} = \hat{i}x + \hat{j}y + \hat{k}z$, then
$$\frac{d\vec{r}}{dt} = \hat{i}\frac{dx}{dt} + \hat{j}\frac{dy}{dt} + \hat{k}\frac{dz}{dt}$$

Gradient of a Scalar Field

The gradient of a continuously differentiable scalar function $\phi(x, y, z)$ is defined as
$$\mathrm{grad}\,\phi = \hat{i}\frac{\partial\phi}{\partial x} + \hat{j}\frac{\partial\phi}{\partial y} + \hat{k}\frac{\partial\phi}{\partial z}$$
The gradient of a scalar at any point in a scalar field is a vector whose magnitude equals the maximum rate of increase of the scalar function $\phi$ at that point, and whose direction is along the normal to the level surface through that point.
For a curl-free (conservative) field, $\vec{E} = -\mathrm{grad}\,V$.

Line Integrals

The line integral of $\vec{v}$, $\int_A^B \vec{v} \cdot d\vec{r}$, represents the sum of the scalar products of $\vec{v}$ and the path elements $d\vec{r}$ taken over the path running from $A$ to $B$ as $d\vec{r} \to 0$.

Surface Integral

Let $S$ be any surface, divided into infinitesimal elements each represented by a vector $d\vec{s}$. The surface integral of a vector field $\vec{A}$ over $S$ is written $\iint_S \vec{A} \cdot d\vec{s}$.

Volume Integral

Let $dV = dx\,dy\,dz$ be an element of volume. The volume integral of $\vec{A}$ over a volume $V$ is $\iiint_V \vec{A}\,dV$.
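These product identities can be checked numerically. The sketch below is an illustration added here (not part of the original notes); it assumes NumPy is available and uses arbitrary random vectors to verify the antisymmetry of the cross product, the determinant form and cyclic symmetry of the scalar triple product, and the expansion of the vector triple product.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # three arbitrary 3-vectors

# Antisymmetry of the cross product: a x b = -(b x a)
assert np.allclose(np.cross(a, b), -np.cross(b, a))

# Scalar triple product equals the 3x3 determinant and is cyclic
box = np.dot(a, np.cross(b, c))
assert np.isclose(box, np.linalg.det(np.array([a, b, c])))
assert np.isclose(box, np.dot(b, np.cross(c, a)))
assert np.isclose(box, np.dot(c, np.cross(a, b)))

# Vector triple product: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
assert np.allclose(lhs, rhs)
print("vector product identities verified")
```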

Divergence of a Vector Function


The divergence of $\vec{v}$, written $\mathrm{div}\,\vec{v}$, is defined as the dot product of $\nabla$ and $\vec{v}$:
$$\nabla \cdot \vec{v} = \frac{\partial v_1}{\partial x} + \frac{\partial v_2}{\partial y} + \frac{\partial v_3}{\partial z}$$

Physical Significance of Divergence
The divergence of a vector function at a point is the net outflow per unit volume per second evaluated at that point. If $\nabla \cdot \vec{v} = 0$ for the velocity field of a liquid, the liquid is incompressible. For the electric field in a charge-free region, $\nabla \cdot \vec{E} = 0$. A field satisfying $\nabla \cdot \vec{v} = 0$ is called solenoidal.

Gauss Divergence Theorem
The flux of a vector field $\vec{A}$ over any closed surface $S$ is equal to the volume integral of the divergence of the vector field over the volume enclosed by the surface, i.e.,
$$\oiint_S \vec{A} \cdot d\vec{s} = \iiint_V (\nabla \cdot \vec{A})\,dV$$

Stokes' Theorem
The flux of the curl of a vector field $\vec{A}$ over a surface $S$ of any shape is equal to the line integral of the vector field $\vec{A}$ over the boundary $C$ of the surface, i.e.,
$$\iint_S (\nabla \times \vec{A}) \cdot d\vec{s} = \oint_C \vec{A} \cdot d\vec{r}$$
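As a worked illustration of the divergence theorem (my own example, not from the notes), take $\vec{A} = (x, y, z)$ on the unit cube $[0,1]^3$: $\nabla \cdot \vec{A} = 3$, so both the total outward flux and the volume integral should equal 3. A minimal SymPy sketch, assuming SymPy is installed:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([x, y, z])                      # vector field A = (x, y, z)

# Volume integral of div A over the unit cube
divA = sum(A[i].diff(v) for i, v in enumerate((x, y, z)))
vol_integral = sp.integrate(divA, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Outward flux through the six faces (outward normals along +-x, +-y, +-z)
flux = 0
for i, (u, v, w) in enumerate([(x, y, z), (y, x, z), (z, x, y)]):
    # face u = 1 has outward normal +u_hat; face u = 0 has outward normal -u_hat
    flux += sp.integrate(A[i].subs(u, 1), (v, 0, 1), (w, 0, 1))
    flux -= sp.integrate(A[i].subs(u, 0), (v, 0, 1), (w, 0, 1))

print(vol_integral, flux)   # both equal 3, as the theorem requires
```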

Curl of a Vector Field
If $\vec{A}$ is any vector field, the curl of $\vec{A}$ is defined as
$$\mathrm{curl}\,\vec{A} = \nabla \times \vec{A} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ A_1 & A_2 & A_3 \end{vmatrix}$$

Physical Significance:
When $\mathrm{curl}\,\vec{v}$ (where $\vec{v}$ is a velocity field) is non-zero, the vector field has circulation. When $\mathrm{curl}\,\vec{v} = 0$ in some region, there is no circulation or rotation at all in that region; the vector field $\vec{v}$ is then called irrotational in that region.
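A short SymPy sketch (illustrative only; the fields chosen are my own) showing a rotational field with non-zero curl and confirming that a gradient field is irrotational:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')

# Rigid-rotation field v = (-y, x, 0): curl v = 2 k_hat (uniform circulation)
v = -R.y * R.i + R.x * R.j
print(curl(v))                      # 2*R.k

# A gradient field is irrotational: curl(grad(phi)) = 0
phi = R.x**2 * R.y + sp.sin(R.z)
print(curl(gradient(phi)))          # zero vector
```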

Important vector identities:

• $\nabla(\phi\psi) = \phi\nabla\psi + \psi\nabla\phi$
• $\mathrm{div}\,(\vec{A} + \vec{B}) = \mathrm{div}\,\vec{A} + \mathrm{div}\,\vec{B}$
• $\mathrm{grad}\,(\vec{A} \cdot \vec{B}) = \vec{A} \times (\nabla \times \vec{B}) + \vec{B} \times (\nabla \times \vec{A}) + (\vec{A} \cdot \nabla)\vec{B} + (\vec{B} \cdot \nabla)\vec{A}$
• $\mathrm{div}\,(\phi\vec{A}) = \phi\,\mathrm{div}\,\vec{A} + \vec{A} \cdot \mathrm{grad}\,\phi$
• $\mathrm{div}\,\mathrm{curl}\,\vec{A} = 0$
• $\mathrm{curl}\,\mathrm{grad}\,\phi = 0$
• $\mathrm{div}\,(\vec{A} \times \vec{B}) = \vec{B} \cdot \mathrm{curl}\,\vec{A} - \vec{A} \cdot \mathrm{curl}\,\vec{B}$
• $\mathrm{curl}\,(\vec{A} \times \vec{B}) = (\vec{B} \cdot \nabla)\vec{A} - (\vec{A} \cdot \nabla)\vec{B} + \vec{A}\,\mathrm{div}\,\vec{B} - \vec{B}\,\mathrm{div}\,\vec{A}$
• $\mathrm{curl}\,\mathrm{curl}\,\vec{A} = \mathrm{grad}\,\mathrm{div}\,\vec{A} - \nabla^2\vec{A}$
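Several of these identities can be confirmed symbolically. The following sketch is a minimal illustration, assuming SymPy's sympy.vector module is available; the polynomial fields A, B and phi are arbitrary choices, not taken from the notes. It checks div curl A = 0, curl grad phi = 0, and div(A x B) = B . curl A - A . curl B.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

A = x*y*z*R.i + y**2*R.j + sp.sin(x)*R.k
B = z*R.i + x*y*R.j + (x + z**2)*R.k
phi = x**2*y + y*z**3

print(sp.simplify(divergence(curl(A))))        # 0
print(curl(gradient(phi)))                     # zero vector

lhs = divergence(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))
print(sp.simplify(lhs - rhs))                  # 0
```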





Matrices

Definition: A system of any m·n numbers arranged in a rectangular array of m rows and n columns is called a matrix of order m × n, or an m × n matrix (read "m by n matrix"), e.g.
$$\begin{bmatrix} 2 & 1 & 3 \\ 3 & -2 & 8 \end{bmatrix}$$
is a 2 × 3 matrix. A matrix may be represented by the symbol $[a_{ij}]$ or simply by a capital letter, say A. Each of the numbers constituting a matrix is called an element of the matrix.

• Special matrices
(i) Square matrix: when m = n, i.e. the number of rows equals the number of columns, the matrix is called a square matrix; otherwise it is known as a rectangular matrix.
(ii) Row and column matrix: if a matrix has only one row it is called a row matrix; if it has only one column it is called a column matrix.
(iii) Null matrix: if all the elements of an m × n matrix are zero, it is called a null matrix or zero matrix, denoted $O_{m \times n}$ or simply O.
(iv) Unit matrix: a square matrix in which every element of the main diagonal is one and all other elements are zero, e.g.
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
is the unit matrix of order three. It is also known as the identity matrix and is denoted by I.
(v) Equal matrices: two matrices are said to be equal if (a) they are of the same order and (b) the elements in the corresponding positions of the two matrices are equal.


(vi) Diagonal matrix: a square matrix in which all elements except those on the main diagonal are zero, e.g.
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$
(vii) Singular matrix: a square matrix whose determinant is zero is called singular; all non-square matrices are also called singular.

• Triangular and scalar matrices
(i) Upper triangular matrix, e.g.
$$A = \begin{bmatrix} 1 & 2 & 4 & 2 \\ 0 & 3 & -1 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 8 \end{bmatrix}; \qquad B = \begin{bmatrix} 2 & 2 & -9 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}$$
(ii) Lower triangular matrix, e.g.
$$\begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 \\ a_{21} & a_{22} & 0 & \cdots & 0 \\ a_{31} & a_{32} & a_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}$$
(iii) Scalar matrix: a diagonal matrix whose diagonal elements are all equal is called a scalar matrix,
$$S = \begin{bmatrix} k & 0 & 0 & \cdots & 0 \\ 0 & k & 0 & \cdots & 0 \\ 0 & 0 & k & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & k \end{bmatrix}$$

• Transpose of a matrix
Let $A = [a_{ij}]_{m \times n}$. The $n \times m$ matrix obtained from A by changing its rows into columns and its columns into rows is called the transpose of A and is denoted by $A^T$. Example:
$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 2 & 1 \end{bmatrix}_{3 \times 4}, \qquad A^T = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 2 \\ 4 & 1 & 1 \end{bmatrix}_{4 \times 3}$$
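In NumPy the transpose is obtained with .T, and (for the complex matrices discussed next) the elementwise conjugate and the transposed conjugate with .conj() and .conj().T. A small illustrative sketch, not part of the original notes:

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 3, 4, 1],
              [3, 4, 2, 1]])       # the 3 x 4 example matrix above

print(A.shape, A.T.shape)          # (3, 4) (4, 3): rows and columns interchanged
print(np.array_equal(A.T.T, A))    # True: transposing twice recovers A

C = np.array([[2 + 3j, 4 - 7j, 8],
              [-1j,    6,      9 + 1j]])
print(C.conj())                    # conjugate C-bar: each entry replaced by its conjugate
print(C.conj().T.shape)            # (3, 2): transposed conjugate C*
```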


• Conjugate of a matrix
The matrix obtained from a given matrix A by replacing each of its elements with the corresponding complex conjugate is called the conjugate of A and is denoted by $\bar{A}$. If
$$A = \begin{bmatrix} 2+3i & 4-7i & 8 \\ -i & 6 & 9+i \end{bmatrix}, \qquad \text{then } \bar{A} = \begin{bmatrix} 2-3i & 4+7i & 8 \\ i & 6 & 9-i \end{bmatrix}$$

• Transposed conjugate of a matrix
The transpose of the conjugate of a matrix A is called the transposed conjugate of A and is denoted by A*. Example: if
$$A = \begin{bmatrix} 1+2i & 2-3i & 3+4i \\ 4-5i & 5+6i & 6-7i \\ 8 & 7+8i & 7 \end{bmatrix},$$
then
$$A^T = \begin{bmatrix} 1+2i & 4-5i & 8 \\ 2-3i & 5+6i & 7+8i \\ 3+4i & 6-7i & 7 \end{bmatrix}
\quad\text{and}\quad
\overline{(A^T)} = A^* = \begin{bmatrix} 1-2i & 4+5i & 8 \\ 2+3i & 5-6i & 7-8i \\ 3-4i & 6+7i & 7 \end{bmatrix}$$

• Symmetric and skew-symmetric matrices
A square matrix $A = [a_{ij}]$ is said to be symmetric if its (i, j)th element is the same as its (j, i)th element, i.e. $a_{ij} = a_{ji}$ for all i, j. Example:
$$\begin{bmatrix} 1 & i & -2i \\ i & -2 & 4 \\ -2i & 4 & 3 \end{bmatrix}$$
A square matrix $A = [a_{ij}]$ is said to be skew-symmetric if $a_{ij} = -a_{ji}$ for all i, j, e.g.
$$\begin{bmatrix} 0 & h & g \\ -h & 0 & f \\ -g & -f & 0 \end{bmatrix}$$
A square matrix is skew-symmetric if and only if $A' = -A$; setting i = j in $a_{ij} = -a_{ji}$ shows that every diagonal element of a skew-symmetric matrix is zero.

• Hermitian and skew-Hermitian matrices
Hermitian matrix: a square matrix $A = [a_{ij}]$ is said to be Hermitian if the (i, j)th element of A is equal to the complex conjugate of the (j, i)th element, i.e. $a_{ij} = \bar{a}_{ji}$. Example:
$$\begin{bmatrix} a & b+ic \\ b-ic & a \end{bmatrix}$$
Every diagonal element of a Hermitian matrix must be real.
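The defining conditions for symmetric, skew-symmetric and Hermitian matrices are one-line checks in NumPy. The sketch below is illustrative only; it reuses the example matrices given above plus a small Hermitian example of my own.

```python
import numpy as np

S = np.array([[1, 1j, -2j],
              [1j, -2, 4],
              [-2j, 4, 3]])            # symmetric: S.T == S (complex entries allowed)
print(np.array_equal(S.T, S))          # True

K = np.array([[0, 2, -3],
              [-2, 0, 5],
              [3, -5, 0]])             # skew-symmetric: K.T == -K, zero diagonal
print(np.array_equal(K.T, -K), np.all(np.diag(K) == 0))

H = np.array([[2, 3 + 1j],
              [3 - 1j, -1]])           # Hermitian: conjugate transpose equals H
print(np.allclose(H.conj().T, H), np.all(np.isreal(np.diag(H))))
```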


Skew-Hermitian matrix: a square matrix for which $a_{ij} = -\bar{a}_{ji}$. Example:
$$\begin{bmatrix} 0 & -2-i \\ 2-i & 0 \end{bmatrix}$$
A square matrix is skew-Hermitian (anti-Hermitian) if $A^\dagger = -A$, i.e. $A' = -\bar{A}$ or $a_{ji} = -\bar{a}_{ij}$. Setting i = j gives $a_{ii} = -\bar{a}_{ii}$, which shows that every diagonal element of a skew-Hermitian matrix is either zero or a purely imaginary number.

• Adjoint of a square matrix
Let $A = [a_{ij}]$ be any n × n matrix. The transpose $B^T$ of the matrix $B = [A_{ij}]$, where $A_{ij}$ is the cofactor of the element $a_{ij}$ in the determinant $|A|$, is called the adjoint of A and is denoted by adj A.

• Invertible matrices
Inverse (reciprocal) of a matrix: let A be any n-rowed square matrix. A matrix B, if it exists, such that $AB = BA = I_n$ is called the inverse of A.
If A is an invertible matrix, then its inverse is
$$A^{-1} = \frac{1}{|A|}\,\mathrm{adj}\,A$$

• Non-singular matrices
A square matrix A is said to be non-singular or singular according as $|A| \neq 0$ or $|A| = 0$.

• Unitary and orthogonal matrices
A square matrix is said to be unitary if its inverse is equal to its conjugate transpose, i.e. $A^\dagger = A^{-1}$ (equivalently $(\bar{A})' = A^{-1}$). Thus for a matrix A to be unitary, $AA^\dagger = A^\dagger A = I$.
A real unitary matrix is called an orthogonal matrix. For real A the condition becomes $AA' = A'A = I$. Taking the determinant on both sides of this equation,
$$|AA'| = |A||A'| = |A|^2 = |I| = 1 \quad\Rightarrow\quad |A| = \pm 1$$
If $|A| = +1$ the matrix is a proper orthogonal matrix; if $|A| = -1$ it is an improper orthogonal matrix.

• Trace of a matrix
In any square matrix, the algebraic sum of the elements along the principal diagonal is called its trace. If
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
then the trace of A is $a_{11} + a_{22} + \cdots + a_{nn}$, i.e.
$$\mathrm{Tr}\,A = \sum_{i=1}^{n} a_{ii}$$
The trace of a product of two matrices is independent of the order of multiplication: Tr(AB) = Tr(BA). Also tr(A + B) = tr(A) + tr(B).

• Idempotent and involutory matrices
A square matrix A is called idempotent if it satisfies the relation $A^2 = A$. A square matrix is called involutory if it satisfies the relation $A^2 = I$, i.e. $(I + A)(I - A) = 0$, where I is the identity matrix.

• Rank of a matrix
A number r is said to be the rank of a matrix A if A possesses the following two properties:
(i) there is at least one square submatrix of A of order r whose determinant is not equal to zero;
(ii) if A contains any square submatrix of order r + 1, then the determinant of every square submatrix of order r + 1 is zero.
Equivalently, the rank of a matrix is the order of its highest-order non-vanishing minor.
The rank of every non-singular matrix of order n is n. The rank of the transpose of a matrix is the same as that of the original matrix. If A and B are two equivalent matrices, then rank A = rank B.
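The relation $A^{-1} = \frac{1}{|A|}\,\mathrm{adj}\,A$, the trace identity Tr(AB) = Tr(BA) and the rank can all be exercised numerically. The sketch below is an illustration (the 3 x 3 test matrix is my own choice); the adjugate is built explicitly from cofactors rather than taken from any library routine.

```python
import numpy as np

def adjugate(A):
    """Adjoint (adjugate) of a square matrix: transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [1., 2., 2.]])

inv_via_adj = adjugate(A) / np.linalg.det(A)
print(np.allclose(inv_via_adj, np.linalg.inv(A)))        # True: A^{-1} = adj(A)/|A|

B = np.random.default_rng(1).standard_normal((3, 3))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))      # True: Tr(AB) = Tr(BA)

print(np.linalg.matrix_rank(A))                          # 3: non-singular, so full rank
```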












• Linear equations
Suppose we have m equations in n unknowns, and let r be the rank of the coefficient matrix:
(i) if r = n, the equations have n − n = 0, i.e. no linearly independent solutions;
(ii) if r < n, the equations have n − r linearly independent solutions.

Properties of the Laplace Transform (writing $f(s) = L\{F(t)\}$)

Second shifting: if
$$G(t) = \begin{cases} F(t-a), & t > a \\ 0, & t < a \end{cases}$$
then $L\{G(t)\} = e^{-as} f(s)$.

Derivatives:
$$L\left\{\frac{dF(t)}{dt}\right\} = s f(s) - F(0), \quad s > 0$$
$$L\left\{\frac{d^2F(t)}{dt^2}\right\} = s^2 f(s) - sF(0) - F'(0), \quad s > 0$$
$$L\left\{\frac{d^3F(t)}{dt^3}\right\} = s^3 f(s) - s^2F(0) - sF'(0) - F''(0), \quad s > 0$$
and in general
$$L\left\{\frac{d^nF(t)}{dt^n}\right\} = s^n f(s) - s^{n-1}F(0) - s^{n-2}F'(0) - \cdots - F^{(n-1)}(0)$$

(3) Integral:
$$L\left\{\int_0^t F(u)\,du\right\} = \frac{1}{s} f(s)$$

Initial and final values:
$$\lim_{t \to 0} F(t) = \lim_{s \to \infty} s f(s), \qquad \lim_{t \to \infty} F(t) = \lim_{s \to 0} s f(s)$$

(8) Multiplication by $t^n$:
$$L\{t^n F(t)\} = (-1)^n \frac{d^n f(s)}{ds^n}, \quad n = 1, 2, 3, \ldots$$

(9) Division by t:
$$L\left\{\frac{F(t)}{t}\right\} = \int_s^\infty f(u)\,du$$

(10) Periodic function: if F(t) has period T,
$$L\{F(t)\} = \frac{\int_0^T e^{-st} F(t)\,dt}{1 - e^{-sT}}$$

(11) Convolution:
$$L\{F(t) * G(t)\} = f(s)\,g(s)$$

Inverse Laplace Transform: $F(t) = L^{-1}\{f(s)\}$.

Fourier Transform
The infinite Fourier transform of $F(x)$, $-\infty < x < \infty$, is denoted by $f(s)$ or $\mathcal{F}\{F(x)\}$ and is defined as
$$f(s) = \mathcal{F}\{F(x)\} = \int_{-\infty}^{\infty} F(x)\,e^{-isx}\,dx$$
with the inverse formula $F(x) = \mathcal{F}^{-1}\{f(s)\}$:
$$F(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(s)\,e^{isx}\,ds$$

Fourier Sine Transform
The infinite sine transform of $F(x)$, $0 < x < \infty$, is
$$f_s(s) = \mathcal{F}_s\{F(x)\} = \int_0^{\infty} F(x)\sin sx\,dx$$
The inverse formula is $F(x) = \mathcal{F}_s^{-1}\{f_s(s)\}$:
$$F(x) = \frac{2}{\pi}\int_0^{\infty} f_s(s)\sin sx\,ds$$

Fourier Cosine Transform
$$f_c(s) = \mathcal{F}_c\{F(x)\} = \int_0^{\infty} F(x)\cos sx\,dx$$
The inverse formula is $F(x) = \mathcal{F}_c^{-1}\{f_c(s)\}$:
$$F(x) = \frac{2}{\pi}\int_0^{\infty} f_c(s)\cos sx\,ds$$

Some authors take the coefficient $\sqrt{2/\pi}$ in both the sine/cosine transform and its inverse instead of 1 and $2/\pi$.

The finite Fourier sine transform of $F(x)$, $0 < x < l$, is
$$f_s(n) = \int_0^{l} F(x)\sin\frac{n\pi x}{l}\,dx$$
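SymPy reproduces several of the transform properties listed above. The sketch below is illustrative; the test functions are my own choices, and it relies on sympy.laplace_transform and sympy.integrate. It checks the derivative rule, the multiplication-by-t rule, and evaluates a Fourier sine transform directly from the defining integral.

```python
import sympy as sp

t, s, x = sp.symbols('t s x', positive=True)

F = sp.exp(-2*t) * sp.sin(3*t)
f = sp.laplace_transform(F, t, s, noconds=True)           # f(s) = L{F(t)}

# Derivative rule: L{F'(t)} = s f(s) - F(0)
lhs = sp.laplace_transform(sp.diff(F, t), t, s, noconds=True)
print(sp.simplify(lhs - (s*f - F.subs(t, 0))))            # 0

# Multiplication by t: L{t F(t)} = -d f(s)/ds
lhs = sp.laplace_transform(t*F, t, s, noconds=True)
print(sp.simplify(lhs + sp.diff(f, s)))                   # 0

# Fourier sine transform of e^{-x} from the defining integral:
# f_s(s) = Integral_0^oo e^{-x} sin(s x) dx = s / (1 + s^2)
fs = sp.integrate(sp.exp(-x)*sp.sin(s*x), (x, 0, sp.oo))
print(sp.simplify(fs))                                    # s/(s**2 + 1)
```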
