Solving differential equations with power series approaches

Problem

We consider an ordinary differential equation

F(x,y(x),y^\prime(x),\ldots,y^{(n)}(x))=0

with boundary conditions

y(0)=c_0, \quad y^\prime(0)=c_1, \quad \ldots, \quad y^{(n-1)}(0)=c_{n-1}.

We try to find a (hopefully convergent) approximate solution with a power series approach.

Naive method

In the above equation we make the ansatz y(x)=\sum_{n=0}^N a_n x^n, where N is the order of the approximation (if the boundary conditions are given at a point x_c instead of 0, this may be generalized by replacing x^n with (x-x_c)^n). The first derivative is y^\prime(x)=\sum_{n=1}^N a_n n x^{n-1}, and similarly for the higher derivatives. We can now simply insert this into our differential equation and compare the coefficients, which means "collecting" terms of equal order (i.e. all terms proportional to x, all terms proportional to x^2 and so on). To make this possible, any non-polynomial terms have to be expanded in power series as well.

We will not attempt to justify this approach in terms of convergence here. We just have to try it out and see if it converges (which it does in most relevant cases).

The examples given below have also been solved step-by-step in this Mathematica file.

Example 1

We want to solve the differential equation

(1+x^2)y^{\prime \prime} - (1-x) y^\prime - y = 8 x^3 - 3x^2 + 6 x - 1

with boundary conditions y(0) = 0 and y'(0) = 1. As described above, we use the ansatz y(x) =\sum_{n=0}^N a_n x^n. Inserting this into the boundary conditions yields

y(0) = 0 \quad \Leftrightarrow \quad  \sum_{n=0}^N a_n 0^n = 0 \quad \Leftrightarrow \quad a_0 = 0
y^\prime (0) = 1 \quad \Leftrightarrow \quad  \sum_{n=1}^N   a_n n 0^{n-1}  = 1 \quad \Leftrightarrow \quad a_1 = 1


From inserting the ansatz into the differential equation, we get

(1+x^2) \sum_{n=2}^N a_n n (n-1) x^{n-2} - (1-x) \sum_{n=1}^N a_n n x^{n-1} - \sum_{n=0}^N a_n x^n = 8 x^3 - 3x^2 + 6 x - 1

where we start by comparing all coefficients of order \mathcal{O}(1) (zeroth order) to get 2a_2 - a_1 - a_0 = -1, so a_2 = 0.

Next comes order \mathcal{O}(x), which gives us 6a_3 - 2a_2 = 6, so a_3 = 1.

And in order \mathcal{O}(x^2) we get 12a_4 + 3a_2 - 3a_3 = -3, so a_4 = 0.

The last order we compute is \mathcal{O}(x^3), where we have 20a_5 - 4a_4 + 8a_3 = 8, so a_5 = 0.

This gives us a particular solution of this example around 0, up to and including order x^5:

y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 + \mathcal{O}(x^6) = x + x^3 + \mathcal{O}(x^6).

In this example, we can even go further: in the orders x^4 and higher, only the coefficients a_4 and higher a_n appear, while the right-hand side of the original equation vanishes in all those orders. Hence a_n = 0 for all n > 3. So in this specific (very artificial) example, the solution we got by the power series approach is actually exact to all orders:

y(x) = x + x^3.
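
The coefficient comparison above is easy to automate in a computer algebra system. The following sketch does it in Python with SymPy (as an alternative to the Mathematica file mentioned above); the truncation order N = 5 and the symbol names are just choices made for this illustration.

import sympy as sp

x = sp.symbols('x')
N = 5                                      # order of the truncated ansatz
a = sp.symbols(f'a0:{N + 1}')              # unknown coefficients a_0 .. a_5
yN = sum(a[n] * x**n for n in range(N + 1))

# residual of Example 1 with the polynomial ansatz inserted
res = sp.expand((1 + x**2) * sp.diff(yN, x, 2)
                - (1 - x) * sp.diff(yN, x) - yN
                - (8*x**3 - 3*x**2 + 6*x - 1))

# compare the coefficients of x^0 .. x^3 and impose y(0) = 0, y'(0) = 1
eqs = [res.coeff(x, k) for k in range(4)]
eqs += [yN.subs(x, 0), sp.diff(yN, x).subs(x, 0) - 1]

print(sp.solve(eqs, a))   # {a0: 0, a1: 1, a2: 0, a3: 1, a4: 0, a5: 0}

Reading off the coefficients reproduces y(x) = x + x^3.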

Example 2

We want to approximate the solution of the differential equation y^\prime(x) + y(x) = e^x with boundary condition y(0) = 1. Inserting the boundary condition into the power series ansatz for y(x) immediately gives us a_0 = 1. Since we now have a non-polynomial inhomogeneity, we also need to expand e^x into a power series:

e^x = \sum_{n=0}^\infty \frac{x^n}{n!} .

We insert the ansatz, its derivative and the power series for the exponential into the differential equation and, to simplify things, reorder all terms into a single sum (accounting for the index shift in the y^\prime-term):

\sum_{n=0}^\infty \left(a_n + a_{n+1} (n+1) - \frac{1}{n!}\right)x^n=0.

The zeroth order gives us a_0 + a_1 - 1 = 0, so a_1 = 0. From the first order we have a_1 + 2a_2 - 1 = 0, so a_2 = 1/2. In second order, it is a_2 + 3a_3 - 1/2 = 0, so a_3 = 0. Following this pattern, we arrive at

a_n = \begin{cases} \frac{1}{n!}, & n \text{ even}\\ 0, & n \text{ odd} \end{cases}.

By recognizing the pattern, we again arrived at an exact solution for this example. However, keep in mind that we did not prove convergence of anything here, so writing

y(x) = \sum_{n=0}^\infty a_n x^n with a_n as above

would require some more work to be strictly correct in a mathematical sense. We may actually evaluate this sum by taking out the zero terms (all terms with odd n vanish) and writing

y(x) = \sum_{n=0}^\infty a_{2n} x^{2n} = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!} = \cosh x

where the last equality comes directly from the definition of the cosh function.
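
As a cross-check for this example, the same coefficient comparison can be done in SymPy; the truncation order N = 8 below is arbitrary, and the last line only compares the result against the series expansion of cosh x.

import sympy as sp

x = sp.symbols('x')
N = 8
a = sp.symbols(f'a0:{N + 1}')
yN = sum(a[n] * x**n for n in range(N + 1))

# y' + y = e^x, with e^x replaced by its truncated power series
rhs = sum(x**n / sp.factorial(n) for n in range(N + 1))
res = sp.expand(sp.diff(yN, x) + yN - rhs)

eqs = [res.coeff(x, k) for k in range(N)]        # compare coefficients of x^0 .. x^7
eqs.append(yN.subs(x, 0) - 1)                    # boundary condition y(0) = 1

print(sp.solve(eqs, a))                          # a_n = 1/n! for even n, 0 for odd n
print(sp.series(sp.cosh(x), x, 0, N))            # the series of cosh(x) has the same coefficients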

Frobenius method

Introduction

The naive method described above only works for differential equations whose solutions are analytic around x = 0, but the following example shows that even some very simple equations have solutions which don't satisfy this requirement:

Example 3

Consider the equation

x^2 y^{\prime\prime}+x\left(x+\frac{1}{2}\right)y^\prime-\left(x^2+\frac{1}{2}\right)y=0.

Since this equation is linear, homogeneous and of second order, it has two linearly independent solutions. To find them, we'll assume that y is given by

y(x)=\sum_{k=0}^\infty c_k x^k

for some real coefficients c_k.

Plugging this ansatz into our equation gives us the following condition on our power series:

\sum_{k=0}^\infty x^k\left(c_k\left(k^2-\frac{k}{2}-\frac{1}{2}\right)+c_{k-1}(k-1)-c_{k-2}\right)=0,

where we set c_{-2} = c_{-1} = 0.

By comparing the coefficients of x^0 and x^1, we find that c_0 = 0 and c_1=y^\prime(0) is a free parameter.

The equation for k = 2 can be reduced to c_2=-\frac{2}{5}c_1, and we see that all further coefficients are completely determined by y^\prime(0). This means that we can only impose one boundary condition on y(x), so our naive method missed one solution.
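
The recursion can also be iterated symbolically. A minimal sketch in Python with SymPy (the dictionary c and the loop bound are our own choices) reproduces c_0 = 0, the free parameter c_1, and c_2=-\frac{2}{5}c_1:

import sympy as sp

c1 = sp.symbols('c1')                 # the single free parameter, c_1 = y'(0)
c = {-2: 0, -1: 0}                    # convention c_{-2} = c_{-1} = 0

for k in range(6):
    poly = sp.Rational(k**2) - sp.Rational(k, 2) - sp.Rational(1, 2)
    if poly == 0:                     # k = 1: the prefactor vanishes, so c_1 stays free
        c[k] = c1
    else:                             # solve c_k*poly + c_{k-1}(k-1) - c_{k-2} = 0 for c_k
        c[k] = sp.simplify((c[k - 2] - (k - 1) * c[k - 1]) / poly)

print([c[k] for k in range(6)])       # [0, c1, -2*c1/5, 9*c1/35, ...]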

Frobenius method

We previously observed that our approach only works for solutions that are analytic around the expansion point. The simplest ansatz that includes singularities is

y(x)=\sum_{k=0}^\infty c_k x^{k+m}

where m is an arbitrary real constant and c_0\neq 0. This is called the Frobenius method.

Example 3 (cont.)

Plugging this into our equation and reordering terms leads to:

\sum_{k=0}^\infty c_k \left(x^{m+k}(m+k)(m+k-1) + x^{m+k+1}(m+k)+\frac{1}{2}x^{m+k}(m+k)-x^{m+k+2}-\frac{1}{2}x^{m+k}\right)=0.

Comparing coefficients of the lowest-order monomial, x^m, gives us

c_0\left(m(m-1)+\frac{1}{2}m-1\right)=0.

Because c_0\neq 0, we find two possible values for m:

m=1 \mbox{ and } m=-\frac{1}{2}.

We already found a series solution for m = 1, so our second, linearly independent solution has the form

y_2(x)=\sum_{k=0}^\infty c_k x^{k-1/2}.

The same procedure as above lets us determine the coefficients c_k, and we get:

y_2(x)=x^{-\frac{1}{2}}c_0\left(1-x+\frac{3}{2}x^2+\mathcal{O}\left(x^3\right)\right).

So our general solution to the differential equation is:

y(x)=c_1x\left(1-\frac{2}{5}x+\frac{9}{35}x^2+\mathcal{O}\left(x^3\right)\right)+c_2x^{-1/2}\left(1-x+\frac{3}{2}x^2+\mathcal{O}\left(x^3\right)\right),

where c_1 and c_2 are arbitrary constants (not to be confused with the series coefficients c_k).
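
The whole Frobenius calculation can be checked with a short SymPy sketch: solve the indicial equation for m, then iterate the recursion c_k P(m+k) + c_{k-1}(m+k-1) - c_{k-2} = 0 obtained from the reordered sum above. The helper name frobenius_coeffs and the number of computed terms are our own choices.

import sympy as sp

m, s = sp.symbols('m s')
P = s * (s - 1) + s / 2 - sp.Rational(1, 2)        # indicial polynomial P(s)

print(sp.solve(P.subs(s, m), m))                   # roots of the indicial equation: [-1/2, 1]

def frobenius_coeffs(m0, n_terms=4):
    """Coefficients c_k of y = x^m0 * sum_k c_k x^k, normalized to c_0 = 1,
    from the recursion c_k P(m0+k) + c_{k-1}(m0+k-1) - c_{k-2} = 0."""
    c = {-2: 0, -1: 0, 0: 1}
    for k in range(1, n_terms):
        c[k] = sp.simplify((c[k - 2] - (m0 + k - 1) * c[k - 1]) / P.subs(s, m0 + k))
    return [c[k] for k in range(n_terms)]

print(frobenius_coeffs(1))                         # [1, -2/5, 9/35, ...]  (the analytic solution)
print(frobenius_coeffs(sp.Rational(-1, 2)))        # [1, -1, 3/2, ...]     (the x^(-1/2) solution)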

Asymptotic analysis

It is often nearly impossible to find a closed form for the coefficients in the power series that solves a given differential equation. Sometimes the solution does not have a closed form at all, or it is a complicated expression involving many different functions. In such cases, a very helpful method is an ansatz based on the asymptotic behavior of the solutions, as seen in the following example:

Example 4

An important equation in quantum mechanics (describing the harmonic oscillator) has the following form:

y_n^{\prime\prime}-x^2 y_n+(2n+1)y_n=0,

with the condition that \int\limits_{-\infty}^\infty y_n^2(x) dx <\infty.

As x \to \infty, the (2n+1)y_n term in the equation is negligible compared to the x^2 y_n term, and in this limit the equation approaches

y_n^{\prime\prime}(x)=x^2y_n(x).

In the limit x \to \infty, the equation has asymptotic solutions of the form y(x)=x^me^{\pm\frac{x^2}{2}}, i.e.

\lim_{x\to \infty}\frac{\frac{d^2}{dx^2}\left(x^me^{\pm\frac{x^2}{2}}\right)}{x^2\left(x^me^{\pm\frac{x^2}{2}}\right)}=1 for arbitrary integers m.
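
This limit can be verified directly, for example with SymPy for a few concrete integer values of m; the following is a quick sanity check rather than a proof.

import sympy as sp

x = sp.symbols('x', positive=True)
for m in range(4):                                  # check a few integer exponents m
    f = x**m * sp.exp(x**2 / 2)                     # the growing branch; the decaying one works the same way
    ratio = sp.simplify(sp.diff(f, x, 2) / (x**2 * f))
    print(m, sp.limit(ratio, x, sp.oo))             # each limit equals 1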


Because only the decaying choice e^{-\frac{x^2}{2}} is compatible with the condition \int_{-\infty}^\infty y_n^2(x)\,dx < \infty, it's reasonable to look for solutions of the form y_n(x)=u_n(x)e^{-\frac{x^2}{2}}. This transformation changes the differential equation to:

u_n^{\prime\prime}(x)-2xu_n^\prime(x)+2nu_n(x)=0.

We will now try to find a power series representation for u_n:

u_n(x)=\sum\limits_{k=0}^\infty c_k x^k

\Rightarrow \sum\limits_{k=0}^\infty x^k(c_{k+2}(k+2)(k+1)-2c_k k +2nc_k)=0

\Rightarrow c_{k+2}=c_k\frac{2(k-n)}{(k+2)(k+1)}.

The values of c_0 and c_1 can be chosen arbitrarily, so we will set c_0 = 0, c_1 = 1 if n is odd, and c_0 = 1, c_1 = 0 if n is even. Since the factor (k - n) in the recursion vanishes at k = n, the chosen branch terminates with c_{n+2} = 0. This means that the functions u_n(x) are polynomials of degree n; they are (up to normalization) the Hermite polynomials.
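
A short SymPy sketch makes the termination of the recursion explicit and compares the resulting polynomials with the built-in Hermite polynomials; up to an overall normalization constant they agree. The helper name u is just our label for u_n.

import sympy as sp

x = sp.symbols('x')

def u(n):
    """u_n built from c_{k+2} = c_k * 2(k-n) / ((k+2)(k+1)),
    with c_0 = 1, c_1 = 0 for even n and c_0 = 0, c_1 = 1 for odd n."""
    c = [0] * (n + 3)
    c[n % 2] = 1                              # start the even or the odd branch
    for k in range(n + 1):
        c[k + 2] = c[k] * sp.Rational(2 * (k - n), (k + 2) * (k + 1))
    return sum(c[k] * x**k for k in range(n + 1))

for n in range(5):
    # u_n is proportional to the (physicists') Hermite polynomial H_n
    print(n, sp.expand(u(n)), sp.simplify(sp.hermite(n, x) / u(n)))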

References

  • Frobenius method
    • M. Tenenbaum and H. Pollard. Ordinary Differential Equations. Dover Publications, New York, 1985. ISBN 0486649407.
  • Asymptotic analysis
    • D. J. Griffiths. Introduction to Quantum Mechanics (2nd Edition). Benjamin Cummings, San Francisco, 2004. ISBN 0131118927.
    • R. Shankar. Principles of Quantum Mechanics. Plenum Press, New York, 1994. ISBN 0306447908.