SymPy Tutorial
When solving math problems, it's best to work with SymPy objects and wait to compute the numeric answer at the end. To obtain a numeric approximation of a SymPy object as a float, call its .evalf() method:

>>> pi
pi
>>> pi.evalf()
3.14159265358979

The method .n() is equivalent to .evalf(). The global SymPy function N() can also be used to compute numerical values. You can easily change the number of digits of precision of the approximation: enter pi.n(400) to obtain an approximation of π to 400 decimals.
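For example, both spellings below request 10 significant digits of π; this is a quick check you can run in any SymPy session:

>>> from sympy import pi, N
>>> pi.n(10)
3.141592654
>>> N(pi, 10)
3.141592654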
Expressions

>>> from sympy import simplify, factor, expand, collect

You define SymPy expressions by combining symbols with basic math operations and other functions:

>>> expr = 2*x + 3*x - sin(x) - 3*x + 42
>>> simplify(expr)
2*x - sin(x) + 42

The function simplify can be used on any expression to simplify it. The examples below illustrate other useful SymPy functions that correspond to common mathematical operations on expressions:

>>> factor( x**2-2*x-8 )
(x - 4)*(x + 2)
>>> expand( (x-4)*(x+2) )
x**2 - 2*x - 8
>>> collect(x**2 + x*b + a*x + a*b, x)
x**2 + (a+b)*x + a*b       # collect terms for diff. pows of x

To substitute a given value into an expression, call the .subs() method, passing in a Python dictionary object { key:val, ... } with the symbol–value substitutions you want to make:

>>> expr = sin(x) + cos(y)
>>> expr
sin(x) + cos(y)
>>> expr.subs({x:1, y:2})
sin(1) + cos(2)
>>> expr.subs({x:1, y:2}).n()
0.425324148260754

Note how we used .n() to obtain the expression's numeric value.
Symbols

>>> from sympy import Symbol, symbols

Python is a civilized language so there's no need to define variables before assigning values to them. When you write a = 3, you define a new name a and set it to the value 3. You can now use the name a in subsequent calculations.

Most interesting SymPy calculations require us to define symbols, which are the SymPy objects for representing variables and unknowns. For your convenience, when live.sympy.org starts, it runs the following commands automatically:

>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)

The first statement instructs Python to convert 1/7 to 1.0/7 when dividing, potentially saving you from any int division confusion. The second statement imports all the SymPy functions. The remaining statements define some generic symbols x, y, z, and t, and several other symbols with special properties.

Note the difference between the following two statements:

>>> x + 2
x + 2                     # an Add expression
>>> p + 2
NameError: name 'p' is not defined

The name x is defined as a symbol, so SymPy knows that x + 2 is an expression; but the variable p is not defined, so SymPy doesn't know what to make of p + 2. To use p in expressions, you must first define it as a symbol:

>>> p = Symbol('p')       # the same as p = symbols('p')
>>> p + 2
p + 2
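The extra assumptions passed to symbols pay off when SymPy simplifies expressions. For instance, declaring n an integer lets SymPy evaluate trigonometric expressions that stay unevaluated for a generic symbol; a quick sketch:

>>> n = symbols('n', integer=True)
>>> sin(pi*n)             # sin of any integer multiple of pi is 0
0
>>> x = symbols('x')      # no assumptions on x
>>> sin(pi*x)
sin(pi*x)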
Solving equations

>>> from sympy import solve

The function solve is the main workhorse in SymPy. This incredibly powerful function knows how to solve all kinds of equations. In fact solve can solve pretty much any equation! When high school students learn about this function, they get really angry—why did they spend five years of their life learning to solve various equations by hand, when all along there was this solve thing that could do all the math for them? Don't worry, learning math is never a waste of time.

The function solve takes two arguments. Use solve(expr,var) to solve the equation expr==0 for the variable var. You can rewrite any equation in the form expr==0 by moving all the terms to one side of the equation; the solutions to A(x) = B(x) are the same as the solutions to A(x) − B(x) = 0.

For example, to solve the quadratic equation x² + 2x − 8 = 0, use

>>> solve( x**2 + 2*x - 8, x)
[2, -4]

In this case the equation has two solutions, so solve returns a list. Check that x = 2 and x = −4 satisfy the equation x² + 2x − 8 = 0.

The best part about solve and SymPy is that you can obtain symbolic answers when solving equations. Instead of solving one specific quadratic equation, we can solve all possible equations of the form ax² + bx + c = 0 using the following steps:

>>> a, b, c = symbols('a b c')
>>> solve( a*x**2 + b*x + c, x)
[(-b + sqrt(b**2 - 4*a*c))/(2*a), (-b - sqrt(b**2 - 4*a*c))/(2*a)]

In this case solve calculated the solution in terms of the symbols a, b, and c. You should be able to recognize the expressions in the solution—it's the quadratic formula x₁,₂ = (−b ± √(b² − 4ac))/(2a).

To solve a specific equation like x² + 2x − 8 = 0, we can substitute the coefficients a = 1, b = 2, and c = −8 into the general solution to obtain the same result:

>>> gen_sol = solve( a*x**2 + b*x + c, x)
>>> [ gen_sol[0].subs({'a':1,'b':2,'c':-8}),
      gen_sol[1].subs({'a':1,'b':2,'c':-8}) ]
[2, -4]
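If you prefer to keep both sides of an equation, SymPy's Eq objects express A(x) = B(x) directly; the sketch below shows that solve treats Eq(A, B) the same as the expression A − B:

>>> from sympy import Eq
>>> solve( Eq(x**2, 4), x )     # solve x**2 == 4
[-2, 2]
>>> solve( x**2 - 4, x )        # same equation, moved to one side
[-2, 2]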
Exponentials and logarithms

Euler's number e = 2.71828... is defined in several equivalent ways:

    e ≡ lim_{n→∞} (1 + 1/n)ⁿ ≡ lim_{ε→0} (1 + ε)^{1/ε} ≡ Σ_{n=0}^∞ 1/n!,

and is denoted E in SymPy. Using exp(x) is equivalent to E**x.

The functions log and ln both compute the logarithm base e:

>>> log(E**3)              # same as ln(E**3)
3

By default, SymPy assumes the inputs to functions like exp and log are complex numbers, so it will not expand certain logarithmic expressions. However, indicating to SymPy that the inputs are positive real numbers will make the expansions work:

>>> x, y = symbols('x y')
>>> log(x*y).expand()
log(x*y)
>>> a, b = symbols('a b', positive=True)
>>> log(a*b).expand()
log(a) + log(b)
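As a quick check of the claim that exp(x) and E**x are the same object, note that SymPy converts the power E**x into exp(x) automatically:

>>> exp(x) == E**x         # E**x is auto-converted to exp(x)
True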
Trigonometry
>>> from sympy import sin, cos, tan, trigsimp, expand_trig

The trigonometric functions sin and cos take inputs in radians:

>>> sin(pi/6)
1/2
>>> cos(pi/6)
sqrt(3)/2

For angles in degrees, you need a conversion factor of π/180 [rad/°]:

>>> sin(30*pi/180)         # 30 deg = pi/6 rads
1/2

The inverse trigonometric functions sin⁻¹(x) ≡ arcsin(x) and cos⁻¹(x) ≡ arccos(x) are used as follows:

>>> asin(1/2)
pi/6
>>> acos(sqrt(3)/2)
pi/6

Recall that tan(x) ≡ sin(x)/cos(x). The inverse function of tan(x) is tan⁻¹(x) ≡ arctan(x) ≡ atan(x):

>>> tan(pi/6)
1/sqrt(3)                  # = ( 1/2 )/( sqrt(3)/2 )
>>> atan( 1/sqrt(3) )
pi/6

The function acos returns angles in the range [0, π], while asin and atan return angles in the range [−π/2, π/2].

Here are some trigonometric identities that SymPy knows:

>>> sin(x) == cos(x - pi/2)
True
>>> simplify( sin(x)*cos(y)+cos(x)*sin(y) )
sin(x + y)
>>> e = 2*sin(x)**2 + 2*cos(x)**2
>>> trigsimp(e)
2
>>> trigsimp(log(e))
log(2*sin(x)**2 + 2*cos(x)**2)
>>> trigsimp(log(e), deep=True)
log(2)
>>> simplify(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)
cos(4*x)/2 + 1/2

The function trigsimp does essentially the same job as simplify.

If instead of simplifying you want to expand a trig expression, you should use expand_trig, because the default expand won't touch trig functions:

>>> expand(sin(2*x))       # = (sin(2*x)).expand()
sin(2*x)
>>> expand_trig(sin(2*x))  # = (sin(2*x)).expand(trig=True)
2*sin(x)*cos(x)

Hyperbolic trigonometric functions

The hyperbolic sine and cosine in SymPy are denoted sinh and cosh respectively, and SymPy is smart enough to recognize them when simplifying expressions:

>>> simplify( (exp(x)+exp(-x))/2 )
cosh(x)
>>> simplify( (exp(x)-exp(-x))/2 )
sinh(x)

II. Complex numbers

>>> from sympy import I, re, im, Abs, arg, conjugate

Ever since Newton, the word "number" has been used to refer to one of the following types of math objects: the naturals N, the integers Z, the rationals Q, and the real numbers R. Each set of numbers is associated with a different class of equations. The natural numbers N appear as solutions of the equation m + n = x, where m and n are natural numbers (denoted m, n ∈ N). The integers Z are the solutions to equations of the form x + m = n, where m, n ∈ N. The rational numbers Q are necessary to solve for x in mx = n, with m, n ∈ Z. The solutions to x² = 2 are irrational (so ∉ Q), so we need an even larger set that contains all possible numbers: the set of real numbers R. A pattern emerges where more complicated equations require the invention of new types of numbers.

Consider the quadratic equation x² = −1. There are no real solutions to this equation, but we can define an imaginary number i = √−1 (denoted I in SymPy) that satisfies this equation:

>>> I*I
-1
>>> solve( x**2 + 1 , x)
[I, -I]

The solutions are x = i and x = −i, and indeed we can verify that i² + 1 = 0 and (−i)² + 1 = 0 since i² = −1.

The complex numbers C are defined as {a + bi | a, b ∈ R}. Complex numbers contain a real part and an imaginary part:

>>> z = 4 + 3*I
>>> z
4 + 3*I
>>> re(z)
4
>>> im(z)
3

The polar representation of a complex number is z ≡ |z|∠θ ≡ |z|e^{iθ}. For a complex number z = a + bi, the quantity |z| = √(a² + b²) is known as the absolute value of z, and θ is its phase or its argument:

>>> Abs(z)
5
>>> arg(z)
atan(3/4)

The complex conjugate of z = a + bi is the number z̄ = a − bi:

>>> conjugate( z )
4 - 3*I

Complex conjugation is important for computing the absolute value of z (|z| ≡ √(z z̄)) and for division by z (1/z ≡ z̄/|z|²).
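As a consistency check, we can rebuild z = 4 + 3i from its polar pieces |z| and arg(z); the following sketch uses expand(complex=True) to convert e^{iθ} into cos θ + i sin θ:

>>> ( Abs(z)*exp(I*arg(z)) ).expand(complex=True)
4 + 3*I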
Euler’s formula
>>> from sympy import expand    # .rewrite() is a method on expressions

Euler's formula shows an important relation between the exponential function e^x and the trigonometric functions sin(x) and cos(x):

    e^{ix} = cos x + i sin x.

To obtain this result in SymPy, you must specify that the number x is real and also tell expand that you're interested in complex expansions:

>>> x = symbols('x', real=True)
>>> exp(I*x).expand(complex=True)
cos(x) + I*sin(x)
>>> re( exp(I*x) )
cos(x)
>>> im( exp(I*x) )
sin(x)

Basically, cos(x) is the real part of e^{ix}, and sin(x) is the imaginary part of e^{ix}. Whaaat? I know it's weird, but weird things are bound to happen when you input imaginary numbers to functions.

Euler's formula is often used to rewrite the functions sin and cos in terms of complex exponentials. For example,

>>> (cos(x)).rewrite(exp)
exp(I*x)/2 + exp(-I*x)/2

Compare this expression with the definition of hyperbolic cosine.
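The comparison is easy to make in SymPy itself; rewriting cosh in terms of exponentials gives the same structure without the factors of i:

>>> (cosh(x)).rewrite(exp)
exp(x)/2 + exp(-x)/2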
Limits

Limits describe the behaviour of an expression as one of its variables approaches some value or infinity. For example, the limit that defines Euler's number e is

>>> limit( (1+1/n)**n, n, oo)
E

This limit expression describes the annual growth rate of a loan with a nominal interest rate of 100% and infinitely frequent compounding. Borrow $1000 in such a scheme, and you'll owe $2718.28 after one year.

Limits are also useful to describe the behaviour of functions. Consider the function f(x) = 1/x. The limit command shows us what happens to f(x) near x = 0 and as x goes to infinity:

>>> limit( 1/x, x, 0, dir="+")
oo
>>> limit( 1/x, x, 0, dir="-")
-oo
>>> limit( 1/x, x, oo)
0

As x becomes larger and larger, the fraction 1/x becomes smaller and smaller. In the limit where x goes to infinity, 1/x approaches zero: lim_{x→∞} 1/x = 0. On the other hand, when x takes on smaller and smaller positive values, the expression 1/x becomes infinite: lim_{x→0⁺} 1/x = ∞. When x approaches 0 from the left, we have lim_{x→0⁻} 1/x = −∞. If these calculations are not clear to you, study the graph of f(x) = 1/x.

Here are some other examples of limits:

>>> limit(sin(x)/x, x, 0)
1
>>> limit(sin(x)**2/x, x, 0)
0
>>> limit(exp(x)/x**100, x, oo)   # which is bigger e^x or x^100 ?
oo                                # exp f >> all poly f for big x

Limits are used to define the derivative and the integral operations.

Derivatives

Use diff(f, x) to compute the derivative f′(x) of an expression f.

Tangent lines

The tangent line to the function f(x) at x = x₀ is the line that passes through the point (x₀, f(x₀)) and has the same slope as the function at that point. The tangent line to the function f(x) at the point x = x₀ is described by the equation

    T₁(x) = f(x₀) + f′(x₀)(x − x₀).

What is the equation of the tangent line to f(x) = ½x² at x₀ = 1?

>>> f = S('1/2')*x**2
>>> f
x**2/2
>>> df = diff(f, x)
>>> df
x
>>> T_1 = f.subs({x:1}) + df.subs({x:1})*(x - 1)
>>> T_1
x - 1/2
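The slope f′(x) that diff computed can also be obtained directly from the limit definition of the derivative; a quick sketch, using a fresh symbol h:

>>> h = symbols('h')
>>> limit( (f.subs({x:x+h}) - f)/h, h, 0 )   # def'n of derivative
x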
Integrals

The function integrate computes the antiderivative of an expression. Note that SymPy doesn't include the arbitrary constant of integration C in its answer:

>>> integrate(2*x, x)
x**2                      # + C

The fundamental theorem of calculus is important because it tells us how to solve differential equations. If we have to solve for f(x) in the differential equation d/dx f(x) = g(x), we can take the integral on both sides of the equation to obtain the answer f(x) = ∫ g(x) dx + C.
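A minimal sketch of this procedure, solving d/dx f(x) = cos(x) by integrating (taking C = 0) and checking the answer by differentiating:

>>> g = cos(x)
>>> f = integrate(g, x)   # f(x) = sin(x) + C, with C = 0 here
>>> f
sin(x)
>>> diff(f, x) == g
True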
Sequences

Sequences are functions that take whole numbers as inputs. Instead of continuous inputs x ∈ R, sequences take natural numbers n ∈ N as inputs. We denote sequences as aₙ instead of the usual function notation a(n).

We define a sequence by specifying an expression for its nth term:

>>> a_n = 1/n
>>> b_n = 1/factorial(n)

Substitute the desired value of n to see the value of the nth term:

>>> a_n.subs({n:5})
1/5

The Python list comprehension syntax [item for item in list] can be used to print the sequence values for some range of indices:

>>> [ a_n.subs({n:i}) for i in range(0,8) ]
[oo, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7]
>>> [ b_n.subs({n:i}) for i in range(0,8) ]
[1, 1, 1/2, 1/6, 1/24, 1/120, 1/720, 1/5040]

Observe that aₙ is not properly defined for n = 0 since 1/0 is a division-by-zero error. To be precise, we should say aₙ's domain is the positive naturals, aₙ : N⁺ → R. Observe how quickly the factorial function n! = 1·2·3···(n−1)·n grows: 7! = 5040, 10! = 3628800, 20! > 10¹⁸.

We're often interested in calculating the limits of sequences as n → ∞. What happens to the terms in the sequence when n becomes large?

>>> limit(a_n, n, oo)
0
>>> limit(b_n, n, oo)
0

Both aₙ = 1/n and bₙ = 1/n! converge to 0 as n → ∞.
Many important math quantities are defined as limit expressions. An interesting example to consider is the number π, which is defined as the area of a circle of radius 1. We can approximate the area of the unit circle by drawing a many-sided regular polygon around the circle. Splitting the n-sided regular polygon into identical triangular slices, we can obtain a formula for its area Aₙ. In the limit as n → ∞, the n-sided-polygon approximation to the area of the unit circle becomes exact:

>>> A_n = n*tan(2*pi/(2*n))
>>> limit(A_n, n, oo)
pi
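We can watch the polygon areas approach π by evaluating Aₙ for ever larger n; the outputs below are shown with six significant digits:

>>> [ A_n.subs({n:k}).n(6) for k in [10, 100, 1000] ]
[3.24920, 3.14263, 3.14160]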
Series

Suppose we're given a sequence aₙ and we want to compute the sum of all the values in this sequence, Σₙ^∞ aₙ. Series are sums of sequences. Summing the values of a sequence aₙ : N → R is analogous to taking the integral of a function f : R → R.

To work with series in SymPy, use the summation function whose syntax is analogous to the integrate function:

>>> a_n = 1/n
>>> b_n = 1/factorial(n)
>>> summation(a_n, [n, 1, oo])
oo
>>> summation(b_n, [n, 0, oo])
E

We say the series Σ aₙ diverges to infinity (or is divergent) while the series Σ bₙ converges (or is convergent). As we sum together more and more terms of the sequence bₙ, the total becomes closer and closer to some finite number. In this case, the infinite sum Σ_{n=0}^∞ 1/n! converges to the number e = 2.71828...

The summation command is useful because it allows us to compute infinite sums, but for most practical applications we don't need to take an infinite number of terms in a series to obtain a good approximation. This is why series are so neat: they represent a great way to obtain approximations.

Using standard Python commands, we can obtain an approximation to e that is accurate to six decimals by summing 10 terms in the series:

>>> import math
>>> def b_nf(n):
...     return 1.0/math.factorial(n)
>>> sum( [b_nf(n) for n in range(0,10)] )
2.71828152557319
>>> E.evalf()
2.71828182845905          # true value

Taylor series

Wait, there's more! Not only can we use series to approximate numbers, we can also use them to approximate functions.

A power series is a series whose terms contain different powers of the variable x. The nth term in a power series is a function of both the sequence index n and the input variable x.

For example, the power series of the function exp(x) = e^x is

    exp(x) ≡ 1 + x + x²/2 + x³/3! + x⁴/4! + x⁵/5! + ··· = Σ_{n=0}^∞ xⁿ/n!.

This is, IMHO, one of the most important ideas in calculus: you can compute the value of exp(5) by taking the infinite sum of the terms in the power series with x = 5:

>>> exp_xn = x**n/factorial(n)
>>> summation( exp_xn.subs({x:5}), [n, 0, oo] ).evalf()
148.413159102577
>>> exp(5).evalf()
148.413159102577          # the true value

Note that SymPy is actually smart enough to recognize that the infinite series you're computing corresponds to the closed-form expression e⁵:

>>> summation( exp_xn.subs({x:5}), [n, 0, oo])
exp(5)

Taking as few as 35 terms in the series is sufficient to obtain an approximation to e⁵ that is accurate to 16 decimals:

>>> import math           # redo using only python
>>> def exp_xnf(x,n):
...     return x**n/math.factorial(n)
>>> sum( [exp_xnf(5.0,i) for i in range(0,35)] )
148.413159102577

The coefficients in the power series of a function (also known as the Taylor series) depend on the value of the higher derivatives of the function. The formula for the nth term in the Taylor series of f(x) expanded at x = c is aₙ(x) = f⁽ⁿ⁾(c)/n! (x − c)ⁿ, where f⁽ⁿ⁾(c) is the value of the nth derivative of f(x) evaluated at x = c. The term Maclaurin series refers to Taylor series expansions at x = 0.

The SymPy function series is a convenient way to obtain the series of any function. Calling series(expr,var,at,nmax) will show you the series expansion of expr near var=at up to power nmax:

>>> series( sin(x), x, 0, 8)
x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
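The coefficient formula is easy to spot-check: for f(x) = sin(x) at c = 0, the n = 5 coefficient f⁽⁵⁾(0)/5! should match the 1/120 that multiplies x⁵ in the output above:

>>> diff(sin(x), x, 5).subs({x:0})/factorial(5)
1/120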
Here is how to compute the cross product of two vectors in SymPy:

>>> u = Matrix([ 4,5,6])
>>> v = Matrix([-1,1,2])
>>> u.cross(v)
[4, -14, 9]

The vector u × v is orthogonal to both u and v. The norm of the cross product ‖u × v‖ is proportional to the lengths of the vectors and the sine of the angle between them:

>>> (u.cross(v).norm()/(u.norm()*v.norm())).n()
0.796366206088088         # = sin(0.921..)

The name "cross product" is well-suited for this operation since it is calculated by "cross-multiplying" the coefficients of the vectors:

    u × v = (u_y v_z − u_z v_y, u_z v_x − u_x v_z, u_x v_y − u_y v_x).

By defining individual symbols for the entries of two vectors, we can make SymPy show us the cross-product formula:

>>> u1,u2,u3 = symbols('u1:4')
>>> v1,v2,v3 = symbols('v1:4')
>>> Matrix([u1,u2,u3]).cross(Matrix([v1,v2,v3]))
[ (u2*v3 - u3*v2), (-u1*v3 + u3*v1), (u1*v2 - u2*v1) ]

The cross product is anti-commutative, u × v = −v × u:

>>> u.cross(v)
[4, -14, 9]
>>> v.cross(u)
[-4, 14, -9]

The product of two numbers and the dot product of two vectors are commutative operations. The cross product, however, is not commutative: u × v ≠ v × u.
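A quick orthogonality check using the dot product (both dot products should vanish):

>>> w = u.cross(v)
>>> u.dot(w), v.dot(w)
(0, 0)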
V. Mechanics

The module called sympy.physics.mechanics contains elaborate tools for describing mechanical systems, manipulating reference frames, forces, and torques. These specialized functions are not necessary for a first-year mechanics course. The basic SymPy functions like solve, and the vector operations you learned in the previous sections, are powerful enough for basic Newtonian mechanics.

Dynamics

The net force acting on an object is the sum of all the external forces acting on it: F_net = Σ F. Since forces are vectors, we need to use vector addition to compute the net force.

Compute F_net = F₁ + F₂, where F₁ = 4ı̂[N] and F₂ = 5∠30°[N]:

>>> F_1 = Matrix( [4,0] )
>>> F_2 = Matrix( [5*cos(30*pi/180), 5*sin(30*pi/180) ] )
>>> F_net = F_1 + F_2
>>> F_net
[4 + 5*sqrt(3)/2, 5/2]    # in Newtons
>>> F_net.evalf()
[8.33012701892219, 2.5]   # in Newtons

To express the answer in length-and-direction notation, use norm to find the length of F_net and atan2 to find its direction. (The function atan2(y,x) computes the correct direction for all vectors (x,y), unlike atan(y/x), which requires corrections for angles in the range [π/2, 3π/2].)

>>> F_net.norm().evalf()
8.69718438067042          # |F_net| in [N]
>>> (atan2( F_net[1],F_net[0] )*180/pi).n()
16.7053138060100          # angle in degrees

The net force on the object is F_net = 8.697∠16.7°[N].

Kinematics

Let x(t) denote the position of an object, v(t) denote its velocity, and a(t) denote its acceleration. Together x(t), v(t), and a(t) are known as the equations of motion of the object.

The equations of motion are related by the derivative operation:

    a(t)  ←d/dt←  v(t)  ←d/dt←  x(t).

Assume we know the initial position x_i ≡ x(0) and the initial velocity v_i ≡ v(0) of the object and we want to find x(t) for all later times. We can do this starting from the dynamics of the problem—the forces acting on the object.

Newton's second law F_net = ma states that a net force F_net applied on an object of mass m produces acceleration a. Thus, we can obtain an object's acceleration if we know the net force acting on it. Starting from the knowledge of a(t), we can obtain v(t) by integrating, then find x(t) by integrating v(t):

    a(t)  →(v_i + ∫·dt)→  v(t)  →(x_i + ∫·dt)→  x(t).

The reasoning follows from the fundamental theorem of calculus: if a(t) represents the change in v(t), then the total of a(t) accumulated between t = t₁ and t = t₂ is equal to the total change in v(t) between these times: Δv = v(t₂) − v(t₁). Similarly, the integral of v(t) from t = 0 until t = τ is equal to x(τ) − x(0).

Uniform acceleration motion (UAM)

Let's analyze the case where the net force on the object is constant. A constant force causes a constant acceleration a = F/m = constant. If the acceleration is constant over time, a(t) = a, we find v(t) and x(t) as follows:

>>> t, a, v_i, x_i = symbols('t a v_i x_i')
>>> v = v_i + integrate(a, (t, 0,t) )
>>> v
a*t + v_i
>>> x = x_i + integrate(v, (t, 0,t) )
>>> x
a*t**2/2 + v_i*t + x_i

You may remember these equations from your high school physics class. They are the uniform accelerated motion (UAM) equations:

    a(t) = a,
    v(t) = v_i + at,
    x(t) = x_i + v_i t + ½at².

In high school, you probably had to memorize these equations. Now you know how to derive them yourself starting from first principles.

For the sake of completeness, we'll now derive the fourth UAM equation, which relates the object's final velocity to the initial velocity, the displacement, and the acceleration, without reference to time:

>>> (v*v).expand()
a**2*t**2 + 2*a*t*v_i + v_i**2
>>> ((v*v).expand() - 2*a*x).simplify()
-2*a*x_i + v_i**2

The above calculation shows v_f² − 2ax_f = −2ax_i + v_i². After moving the term 2ax_f to the other side of the equation, we obtain

    (v(t))² = v_f² = v_i² + 2aΔx = v_i² + 2a(x_f − x_i).

The fourth equation is important for practical purposes because it allows us to solve physics problems without using the time variable.
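The same time-free relation can be obtained by eliminating t with solve; a sketch using two new symbols, v_f and x_f, that are not part of the session above:

>>> v_f, x_f = symbols('v_f x_f')
>>> t_f = solve(v_i + a*t - v_f, t)[0]         # time when velocity reaches v_f
>>> x_eq = x_i + v_i*t_f + a*t_f**2/2 - x_f    # UAM position minus x_f
>>> solve(x_eq, v_f)                           # v_f = ±sqrt(v_i**2 + 2a(x_f - x_i))
[-sqrt(2*a*x_f - 2*a*x_i + v_i**2), sqrt(2*a*x_f - 2*a*x_i + v_i**2)]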
Example: Find the position function of an object at time t = 3[s], if it starts from x_i = 20[m] with v_i = 10[m/s] and undergoes a constant acceleration of a = 5[m/s²]. What is the object's velocity at t = 3[s]?

>>> x_i = 20   # initial position
>>> v_i = 10   # initial velocity
>>> a   = 5    # acceleration (constant during motion)
>>> x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
>>> x
5*t**2/2 + 10*t + 20
>>> x.subs({t:3}).n()             # x(3) in [m]
72.5
>>> diff(x,t).subs({t:3}).n()     # v(3) in [m/s]
25                                # = sqrt( v_i**2 + 2*a*52.5 )

If you think about it, physics knowledge combined with computer skills is like a superpower!
General equations of motion

The procedure a(t) →(v_i + ∫·dt)→ v(t) →(x_i + ∫·dt)→ x(t) can be used to obtain the position function x(t) even when the acceleration is not constant. Suppose the acceleration of an object is a(t) = √(kt); what is its x(t)?

>>> t, v_i, x_i, k = symbols('t v_i x_i k')
>>> a = sqrt(k*t)
>>> x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t, 0,t) )
>>> x
x_i + v_i*t + (4/15)*(k*t)**(5/2)/k**2
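As a sanity check, differentiating this x(t) twice should recover the acceleration we started from, and the velocity at t = 0 should equal v_i:

>>> diff(x, t, t)
sqrt(k*t)
>>> diff(x, t).subs({t:0})
v_i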
Potential energy

Instead of working with the kinematic equations of motion x(t), v(t), and a(t), which depend on time, we can solve physics problems using energy calculations. A key connection between the world of forces and the world of energy is the concept of potential energy. If you move an object against a conservative force (think raising a ball in the air against the force of gravity), you can think of the work you do against the force as being stored in the potential energy of the object.

For each force F(x) there is a corresponding potential energy U_F(x). The change in potential energy associated with the force F(x) and displacement d is defined as the negative of the work done by the force during the displacement: U_F(x) = −W = −∫_d F(x)·dx.

The potential energies associated with gravity, F_g = −mg ĵ, and the force of a spring, F_s = −kx, are calculated as follows:

>>> x, y = symbols('x y')
>>> m, g, k, h = symbols('m g k h')
>>> F_g = -m*g                        # Force of gravity on mass m
>>> U_g = - integrate( F_g, (y,0,h) )
>>> U_g
m*g*h                                 # Grav. potential energy
>>> F_s = -k*x                        # Spring force for displacement x
>>> U_s = - integrate( F_s, (x,0,x) )
>>> U_s
k*x**2/2                              # Spring potential energy

Note the negative sign in the formula defining the potential energy. This negative is canceled by the negative sign of the dot product F·dx: when the force acts in the direction opposite to the displacement, the work done by the force is negative.
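Going the other way, each force can be recovered from its potential energy as F = −dU/dx; a quick check with the two potentials just computed:

>>> -diff(U_g, h)     # recover the gravity force
-g*m
>>> -diff(U_s, x)     # recover the spring force
-k*x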
Simple harmonic motion

>>> from sympy import Function, dsolve

The force exerted by a spring is given by the formula F = −kx. If the only force acting on a mass m is the force of a spring, we can use Newton's second law to obtain the following equation:

    F = ma   ⇒   −kx = ma   ⇒   −kx(t) = m (d²/dt²) x(t).

The motion of a mass-spring system is described by the differential equation (d²/dt²) x(t) + ω² x(t) = 0, where the constant ω = √(k/m) is called the angular frequency. We can find the position function x(t) using the dsolve method:

>>> t = Symbol('t')                  # time t
>>> x = Function('x')                # position function x(t)
>>> w = Symbol('w', positive=True)   # angular frequency w
>>> sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
>>> sol
x(t) == C1*sin(w*t) + C2*cos(w*t)
>>> x = sol.rhs
>>> x
C1*sin(w*t) + C2*cos(w*t)

Note the solution x(t) = C₁ sin(ωt) + C₂ cos(ωt) is equivalent to x(t) = A cos(ωt + φ), which is more commonly used to describe simple harmonic motion. We can use the expand function with the argument trig=True to convince ourselves of this equivalence:

>>> A, phi = symbols("A phi")
>>> (A*cos(w*t - phi)).expand(trig=True)
A*sin(phi)*sin(w*t) + A*cos(phi)*cos(w*t)

If we define C₁ = A sin(φ) and C₂ = A cos(φ), we obtain the form x(t) = C₁ sin(ωt) + C₂ cos(ωt) that SymPy found.

Conservation of energy: We can verify that the total energy of the mass-spring system is conserved by showing E_T(t) = U_s(t) + K(t) = constant:

>>> x = sol.rhs.subs({"C1":0,"C2":A})
>>> x
A*cos(t*w)
>>> v = diff(x, t)
>>> v
-A*w*sin(t*w)
>>> E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
>>> E_T
0.5*A**2*(k*cos(w*t)**2 + m*w**2*sin(w*t)**2)
>>> E_T.subs({k:m*w**2}).simplify()
0.5*m*(w*A)**2                       # = K_max
>>> E_T.subs({w:sqrt(k/m)}).simplify()
0.5*k*A**2                           # = U_max
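Before trusting the solution, we can also substitute sol.rhs back into the differential equation and check that the left-hand side simplifies to zero (xs is just a temporary name used for this check):

>>> xs = sol.rhs
>>> simplify( diff(xs,t,t) + w**2*xs )
0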
VI. Linear algebra

>>> from sympy import Matrix

A matrix A ∈ R^{m×n} is a rectangular array of real numbers with m rows and n columns. To specify a matrix A, we specify the values for its mn components a₁₁, a₁₂, ..., a_mn as a list of lists:

>>> A = Matrix( [[ 2,-3,-8, 7],
                 [-2,-1, 2,-7],
                 [ 1, 0,-3, 6]] )

Use the square brackets to access the matrix elements or to obtain a submatrix:

>>> A[0,1]            # row 0, col 1 of A
-3
>>> A[0:2,0:3]        # top-left 2x3 submatrix of A
[ 2, -3, -8]
[-2, -1,  2]

Some commonly used matrices can be created with shortcut methods:

>>> eye(2)            # 2x2 identity matrix
[1, 0]
[0, 1]
>>> zeros(2, 3)
[0, 0, 0]
[0, 0, 0]

Standard algebraic operations like addition +, subtraction -, multiplication *, and exponentiation ** work as expected for Matrix objects. The transpose operation flips the matrix through its diagonal:

>>> A.transpose()     # the same as A.T
[ 2, -2,  1]
[-3, -1,  0]
[-8,  2, -3]
[ 7, -7,  6]

Recall that the transpose is also used to convert row vectors into column vectors and vice versa.

Row operations

>>> M = eye(3)
>>> M[1,:] = M[1,:] + 3*M[0,:]
>>> M
[1, 0, 0]
[3, 1, 0]
[0, 0, 1]

The notation M[i,:] refers to entire rows of the matrix. The first argument specifies the 0-based row index; for example, the first row of M is M[0,:]. The code example above implements the row operation R₂ ← R₂ + 3R₁. To scale a row i by a constant c, use the command M[i,:] = c*M[i,:]. To swap rows i and j, you can use the Python tuple-assignment syntax M[i,:], M[j,:] = M[j,:], M[i,:].
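For instance, here is the swap operation R₁ ↔ R₃ applied to the identity matrix; a small sketch of the tuple-assignment trick:

>>> M = eye(3)
>>> M[0,:], M[2,:] = M[2,:], M[0,:]   # the row operation R1 <-> R3
>>> M
[0, 0, 1]
[0, 1, 0]
[1, 0, 0]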
Reduced row echelon form

The Gauss–Jordan elimination procedure is a sequence of row operations you can perform on any matrix to bring it to its reduced row echelon form (RREF). In SymPy, matrices have a rref method that computes their RREF:

>>> A = Matrix( [[ 2,-3,-8, 7],
                 [-2,-1, 2,-7],
                 [ 1, 0,-3, 6]])
>>> A.rref()
([1, 0, 0,  0]                   # RREF of A
 [0, 1, 0,  3]
 [0, 0, 1, -2],  [0, 1, 2] )     # locations of pivots

Note the rref method returns a tuple of values: the first value is the RREF of A, while the second tells you the indices of the leading ones (also known as pivots) in the RREF of A. To get just the RREF of A, select the 0th entry from the tuple: A.rref()[0].
Matrix fundamental spaces

Consider the matrix A ∈ R^{m×n}. The fundamental spaces of a matrix are its column space C(A), its null space N(A), and its row space R(A). These vector spaces are important when you consider the matrix product Ax = y as "applying" the linear transformation T_A : Rⁿ → Rᵐ to an input vector x ∈ Rⁿ to produce the output vector y ∈ Rᵐ.

Linear transformations T_A : Rⁿ → Rᵐ (vector functions) are equivalent to m×n matrices. This is one of the fundamental ideas in linear algebra. You can think of T_A as the abstract description of the transformation and A ∈ R^{m×n} as a concrete implementation of T_A. By this equivalence, the fundamental spaces of a matrix A tell us facts about the domain and image of the linear transformation T_A. The column space C(A) is the same as the image space Im(T_A) (the set of all possible outputs). The null space N(A) is the same as the kernel Ker(T_A) (the set of inputs that T_A maps to the zero vector). The row space R(A) is the orthogonal complement of the null space. Input vectors in the row space of A are in one-to-one correspondence with the output vectors in the column space of A.

Okay, enough theory! Let's see how to compute the fundamental spaces of the matrix A defined above. The non-zero rows in the reduced row echelon form of A are a basis for its row space:

>>> [ A.rref()[0][r,:] for r in A.rref()[1] ]    # R(A)
[ [1, 0, 0, 0], [0, 1, 0, 3], [0, 0, 1, -2] ]

The column space of A is the span of the columns of A that contain the pivots in the reduced row echelon form of A:

>>> [ A[:,c] for c in A.rref()[1] ]              # C(A)
[ [ 2]   [-3]   [-8]
  [-2],  [-1],  [ 2]
  [ 1]   [ 0]   [-3] ]

Note we took columns from the original matrix A and not its RREF. To find the null space of A, call its nullspace method:

>>> A.nullspace()                                # N(A)
[ [0, -3, 2, 1] ]

Determinants

The determinant of a matrix, denoted det(A) or |A|, is a particular way to multiply the entries of the matrix to produce a single number.

>>> M = Matrix( [[1, 2, 3],
                 [2,-2, 4],
                 [2, 2, 5]] )
>>> M.det()
2

Determinants are used for all kinds of tasks: to compute areas and volumes, to solve systems of equations, and to check whether a matrix is invertible or not.

Matrix inverse

For every invertible matrix A, there exists an inverse matrix A⁻¹ which undoes the effect of A. The cumulative effect of the product of A and A⁻¹ (in any order) is the identity matrix: AA⁻¹ = A⁻¹A = 1.

>>> A = Matrix( [[1,2],
                 [3,9]] )
>>> A.inv()
[ 3, -2/3]
[-1,  1/3]
>>> A.inv()*A
[1, 0]
[0, 1]
>>> A*A.inv()
[1, 0]
[0, 1]

The matrix inverse A⁻¹ plays the role of division by A.

Eigenvectors and eigenvalues

When a matrix is multiplied by one of its eigenvectors, the output is the same eigenvector multiplied by a constant: Ae_λ = λe_λ. The constant λ (the Greek letter lambda) is called an eigenvalue of A.

To find the eigenvalues of a matrix, start from the definition Ae_λ = λe_λ, insert the identity 1, and rewrite it as a null-space problem:

    Ae_λ = λ1e_λ   ⇒   (A − λ1) e_λ = 0.

This equation will have a nonzero solution e_λ whenever |A − λ1| = 0. (The invertible matrix theorem states that a matrix has a nontrivial null space if and only if its determinant is zero.) The eigenvalues of A ∈ R^{n×n}, denoted {λ₁, λ₂, ..., λₙ}, are the roots of the characteristic polynomial p(λ) = |A − λ1|.

>>> A = Matrix( [[ 9, -2],
                 [-2,  6]] )
>>> A.eigenvals()     # same as solve( det(A-eye(2)*x), x)
{5: 1, 10: 1}         # eigenvalues 5 and 10 with multiplicity 1
>>> A.eigenvects()
[ (5,  1, [ [1]
            [2] ]),
  (10, 1, [ [-2]
            [ 1] ]) ]
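We can verify the eigenvalue equation Ae_λ = λe_λ directly for the first eigenpair found above:

>>> e5 = Matrix([1, 2])
>>> A*e5 == 5*e5      # A acts on e5 as multiplication by 5
True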