
Partial Differential Equations


Partial Differential Equations Analytical and Numerical Methods

Mark S. Gockenbach
Michigan Technological University
Houghton, Michigan

siam
Society for Industrial and Applied Mathematics
Philadelphia

Copyright © 2002 by the Society for Industrial and Applied Mathematics.

10 9 8 7 6 5 4 3 2 1

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

Library of Congress Cataloging-in-Publication Data

Gockenbach, Mark S.
Partial differential equations : analytical and numerical methods / Mark S. Gockenbach.
p. cm.
Includes bibliographical references and index.
ISBN 0-89871-518-0
1. Differential equations, Partial. I. Title.
QA377 .G63 2002
515'.353-dc21
2002029411

MATLAB is a registered trademark of The MathWorks, Inc. Maple is a registered trademark of Waterloo Maple, Inc. Mathematica is a registered trademark of Wolfram Research, Inc. siam is a registered trademark.

Dedicated to my mother, Joy Gockenbach, and to the memory of my father, LeRoy Gockenbach.


Contents

Foreword  xiii

Preface  xvii

1  Classification of differential equations  1

2  Models in one dimension  9
   2.1  Heat flow in a bar; Fourier's law  9
      2.1.1  Boundary and initial conditions for the heat equation  13
      2.1.2  Steady-state heat flow  14
      2.1.3  Diffusion  16
   2.2  The hanging bar  21
      2.2.1  Boundary conditions for the hanging bar  24
   2.3  The wave equation for a vibrating string  27
   2.4  Suggestions for further reading  30

3  Essential linear algebra  31
   3.1  Linear systems as linear operator equations  31
   3.2  Existence and uniqueness of solutions to Ax = b  38
      3.2.1  Existence  38
      3.2.2  Uniqueness  42
      3.2.3  The Fredholm alternative  45
   3.3  Basis and dimension  50
   3.4  Orthogonal bases and projections  55
      3.4.1  The L2 inner product  58
      3.4.2  The projection theorem  61
   3.5  Eigenvalues and eigenvectors of a symmetric matrix  68
      3.5.1  The transpose of a matrix and the dot product  71
      3.5.2  Special properties of symmetric matrices  72
      3.5.3  The spectral method for solving Ax = b  74
   3.6  Preview of methods for solving ODEs and PDEs  77
   3.7  Suggestions for further reading  78

4  Essential ordinary differential equations  79
   4.1  Converting a higher-order equation to a first-order system  79
   4.2  Solutions to some simple ODEs  82
      4.2.1  The general solution of a second-order homogeneous ODE with constant coefficients  82
      4.2.2  A special inhomogeneous second-order linear ODE  85
      4.2.3  First-order linear ODEs  87
   4.3  Linear systems with constant coefficients  91
      4.3.1  Homogeneous systems  92
      4.3.2  Inhomogeneous systems and variation of parameters  96
   4.4  Numerical methods for initial value problems  101
      4.4.1  Euler's method  102
      4.4.2  Improving on Euler's method: Runge-Kutta methods  104
      4.4.3  Numerical methods for systems of ODEs  108
      4.4.4  Automatic step control and Runge-Kutta-Fehlberg methods  110
   4.5  Stiff systems of ODEs  115
      4.5.1  A simple example of a stiff system  117
      4.5.2  The backward Euler method  118
   4.6  Green's functions  123
      4.6.1  The Green's function for a first-order linear ODE  123
      4.6.2  The Dirac delta function  125
      4.6.3  The Green's function for a second-order IVP  126
      4.6.4  Green's functions for PDEs  127
   4.7  Suggestions for further reading  128

5  Boundary value problems in statics  131

   5.1  The analogy between BVPs and linear algebraic systems  131
      5.1.1  A note about direct integration  141
   5.2  Introduction to the spectral method; eigenfunctions  144
      5.2.1  Eigenpairs of -d²/dx² under Dirichlet conditions  144
      5.2.2  Representing functions in terms of eigenfunctions  146
      5.2.3  Eigenfunctions under other boundary conditions; other Fourier series  150
   5.3  Solving the BVP using Fourier series  155
      5.3.1  A special case  155
      5.3.2  The general case  156
      5.3.3  Other boundary conditions  161
      5.3.4  Inhomogeneous boundary conditions  164
      5.3.5  Summary  166
   5.4  Finite element methods for BVPs  172
      5.4.1  The principle of virtual work and the weak form of a BVP  173
      5.4.2  The equivalence of the strong and weak forms of the BVP  177
   5.5  The Galerkin method  180

   5.6  Piecewise polynomials and the finite element method  188
      5.6.1  Examples using piecewise linear finite elements  193
      5.6.2  Inhomogeneous Dirichlet conditions  197
   5.7  Green's functions for BVPs  202
      5.7.1  The Green's function and the inverse of a differential operator  207
   5.8  Suggestions for further reading  210

6  Heat flow and diffusion  211
   6.1  Fourier series methods for the heat equation  211
      6.1.1  The homogeneous heat equation  214
      6.1.2  Nondimensionalization  217
      6.1.3  The inhomogeneous heat equation  220
      6.1.4  Inhomogeneous boundary conditions  222
      6.1.5  Steady-state heat flow and diffusion  224
      6.1.6  Separation of variables  225
   6.2  Pure Neumann conditions and the Fourier cosine series  229
      6.2.1  One end insulated; mixed boundary conditions  229
      6.2.2  Both ends insulated; Neumann boundary conditions  231
      6.2.3  Pure Neumann conditions in a steady-state BVP  237
   6.3  Periodic boundary conditions and the full Fourier series  245
      6.3.1  Eigenpairs of -d²/dx² under periodic boundary conditions  247
      6.3.2  Solving the BVP using the full Fourier series  249
      6.3.3  Solving the IBVP using the full Fourier series  252
   6.4  Finite element methods for the heat equation  256
      6.4.1  The method of lines for the heat equation  260
   6.5  Finite elements and Neumann conditions  266
      6.5.1  The weak form of a BVP with Neumann conditions  266
      6.5.2  Equivalence of the strong and weak forms of a BVP with Neumann conditions  267
      6.5.3  Piecewise linear finite elements with Neumann conditions  269
      6.5.4  Inhomogeneous Neumann conditions  273
      6.5.5  The finite element method for an IBVP with Neumann conditions  274
   6.6  Green's functions for the heat equation  279
      6.6.1  The Green's function for the one-dimensional heat equation under Dirichlet conditions  280
      6.6.2  Green's functions under other boundary conditions  281
   6.7  Suggestions for further reading  283

7  Waves  285
   7.1  The homogeneous wave equation without boundaries  285
   7.2  Fourier series methods for the wave equation  291
      7.2.1  Fourier series solutions of the homogeneous wave equation  293
      7.2.2  Fourier series solutions of the inhomogeneous wave equation  296
      7.2.3  Other boundary conditions  301
   7.3  Finite element methods for the wave equation  305
      7.3.1  The wave equation with Dirichlet conditions  306
      7.3.2  The wave equation under other boundary conditions  312
   7.4  Point sources and resonance  318
      7.4.1  The wave equation with a point source  318
      7.4.2  Another experiment leading to resonance  321
   7.5  Suggestions for further reading  324

8  Problems in multiple spatial dimensions  327
   8.1  Physical models in two or three spatial dimensions  327
      8.1.1  The divergence theorem  328
      8.1.2  The heat equation for a three-dimensional domain  330
      8.1.3  Boundary conditions for the three-dimensional heat equation  332
      8.1.4  The heat equation in a bar  333
      8.1.5  The heat equation in two dimensions  334
      8.1.6  The wave equation for a three-dimensional domain  334
      8.1.7  The wave equation in two dimensions  334
      8.1.8  Equilibrium problems and Laplace's equation  335
      8.1.9  Green's identities and the symmetry of the Laplacian  336
   8.2  Fourier series on a rectangular domain  339
      8.2.1  Dirichlet boundary conditions  339
      8.2.2  Solving a boundary value problem  345
      8.2.3  Time-dependent problems  346
      8.2.4  Other boundary conditions for the rectangle  348
      8.2.5  Neumann boundary conditions  349
      8.2.6  Dirichlet and Neumann problems for Laplace's equation  352
      8.2.7  Fourier series methods for a rectangular box in three dimensions  354
   8.3  Fourier series on a disk  359
      8.3.1  The Laplacian in polar coordinates  360
      8.3.2  Separation of variables in polar coordinates  362
      8.3.3  Bessel's equation  363
      8.3.4  Properties of the Bessel functions  366
      8.3.5  The eigenfunctions of the negative Laplacian on the disk  368
      8.3.6  Solving PDEs on a disk  372
   8.4  Finite elements in two dimensions  377
      8.4.1  The weak form of a BVP in multiple dimensions  377
      8.4.2  Galerkin's method  378
      8.4.3  Piecewise linear finite elements in two dimensions  379
      8.4.4  Finite elements and Neumann conditions  388

      8.4.5  Inhomogeneous boundary conditions  389
   8.5  Suggestions for further reading  392

9  More about Fourier series  393
   9.1  The complex Fourier series  394
      9.1.1  Complex inner products  395
      9.1.2  Orthogonality of the complex exponentials  396
      9.1.3  Representing functions with complex Fourier series  397
      9.1.4  The complex Fourier series of a real-valued function  398
   9.2  Fourier series and the FFT  401
      9.2.1  Using the trapezoidal rule to estimate Fourier coefficients  402
      9.2.2  The discrete Fourier transform  404
      9.2.3  A note about using packaged FFT routines  409
      9.2.4  Fast transforms and other boundary conditions; the discrete sine transform  410
      9.2.5  Computing the DST using the FFT  411
   9.3  Relationship of sine and cosine series to the full Fourier series  415
   9.4  Pointwise convergence of Fourier series  419
      9.4.1  Modes of convergence for sequences of functions  419
      9.4.2  Pointwise convergence of the complex Fourier series  422
   9.5  Uniform convergence of Fourier series  436
      9.5.1  Rate of decay of Fourier coefficients  436
      9.5.2  Uniform convergence  439
      9.5.3  A note about Gibbs's phenomenon  443
   9.6  Mean-square convergence of Fourier series  444
      9.6.1  The space L2(-l,l)  445
      9.6.2  Mean-square convergence of Fourier series  448
      9.6.3  Cauchy sequences and completeness  450
   9.7  A note about general eigenvalue problems  455
   9.8  Suggestions for further reading  459

10  More about finite element methods  461
   10.1  Implementation of finite element methods  461
      10.1.1  Describing a triangulation  462
      10.1.2  Computing the stiffness matrix  465
      10.1.3  Computing the load vector  467
      10.1.4  Quadrature  467
   10.2  Solving sparse linear systems  473
      10.2.1  Gaussian elimination for dense systems  473
      10.2.2  Direct solution of banded systems  475
      10.2.3  Direct solution of general sparse systems  477
      10.2.4  Iterative solution of sparse linear systems  478
      10.2.5  The conjugate gradient algorithm  482
      10.2.6  Convergence of the CG algorithm  485
      10.2.7  Preconditioned CG  486

   10.3  An outline of the convergence theory for finite element methods  488
      10.3.1  The Sobolev space H1  489
      10.3.2  Best approximation in the energy norm  491
      10.3.3  Approximation by piecewise polynomials  491
      10.3.4  Elliptic regularity and L2 estimates  492
   10.4  Finite element methods for eigenvalue problems  494
   10.5  Suggestions for further reading  499

A  Proof of Theorem 3.47  501

B  Shifting the data in two dimensions  505
   B.0.1  Inhomogeneous Dirichlet conditions on a rectangle  505
   B.0.2  Inhomogeneous Neumann conditions on a rectangle  508

C  Solutions to odd-numbered exercises  515

Bibliography  603

Index  607

Foreword

Newton and other seventeenth-century scientists introduced differential equations to describe the fundamental laws of physics. The intellectual and technological implications of this innovation are enormous: the differential equation description of physics is the foundation upon which all modern quantitative science and engineering rests.

Differential equations describe relations between physical quantities—forces, masses, positions, etc.—and their rates of change with respect to space and time, i.e., their derivatives. To calculate the implications of these relations requires the "solution" or integration of the differential equation: since derivatives approximate the ratios of small changes, solution of differential equations amounts to the summing together of many small changes—arbitrarily many, in principle—to compute the effect of a large change. Until roughly the middle of the twentieth century, this meant the derivation of formulas expressing solutions of differential equations as algebraic combinations of the elementary functions of calculus—polynomials, trigonometric functions, the logarithm and the exponential, and certain more exotic functions. Such formulas are feasible only for a limited selection of problems, and even then often involve infinite sums ("series") which can only be evaluated approximately, and whose evaluation was (until not so long ago) quite laborious.

The digital revolution has changed all that. Fast computation, with inexpensive machines performing many millions of arithmetic operations per second, makes numerical methods for the solution of differential equations practical: these methods give useful approximate solutions to differential equations by literally adding up the effects of many microscopic changes to approximate the effect of a large change.
Numerical methods have been used to solve certain relatively "small" differential equations since the eighteenth century—notably those that occur in the computation of planetary orbits and other astronomical calculations. Systematic use of the computer has vastly expanded the range of differential equation models that can be coaxed into making predictions of scientific and engineering significance. Analytical techniques developed over the last three centuries to solve special problem classes have also attained new significance when viewed as numerical methods.

Optimal design of ships and aircraft, prediction of underground fluid migration, synthesis of new drugs, and numerous other critical tasks all rely—sometimes tacitly—upon the modeling of complex phenomena by differential equations and the computational approximation of their solutions. Just as significantly, mathematical advances underlying computational techniques have fundamentally changed the ways in which scientists and engineers think about differential equations and their implications.


Until recently this sea change in theory and practice has enjoyed little reflection in the teaching of differential equations in undergraduate classes at universities. While mention of computer techniques began showing up in textbooks published or revised in the 1970s, the view of the subject propounded by most textbooks would have seemed conventional in the 1920s.

The book you hold in your hands, along with a few others published in recent years, notably Gil Strang's Introduction to Applied Mathematics, represents a new approach to differential equations at the undergraduate level. It presents computation as an integral part of the study of differential equations. It is not so much that computational exercises must be part of the syllabus—this text can be used entirely without any student involvement in computation at all, though a class taught that way would miss a great deal of the possible impact. Rather, the concepts underlying the analysis and implementation of numerical methods assume an importance equal to that of solutions in terms of series and elementary functions. In fact, many of these concepts are equally effective in explaining the workings of the series expansion methods.

This book devotes considerable effort to these "classical" methods, side by side with modern numerical approaches (particularly the finite element method). The "classical" series expansions provide both a means to understand the essential nature of the physical phenomena modeled by the equations, and effective numerical methods for those special problems to which they apply.

Perhaps surprisingly, some of the most important concepts in the modern viewpoint on differential equations are algebraic: the ideas of vector, vector space, and other components of linear algebra are central, even in the development of more conventional parts of the subject such as series solutions.
The present book uses linear algebra as a unifying principle in both theory and computation, just as working scientists, engineers, and mathematicians do.

This book, along with a number of others like it published in recent years, differs from earlier undergraduate textbooks on differential equations in yet another respect. Especially in the middle years of the last century, mathematical instruction in American universities tended to relegate the physical context for differential equations and other topics to the background. The "big three" differential equations of science and engineering—the Laplace, wave, and heat equations, to which the bulk of this book is devoted—have appeared in many texts with at most a cursory nod to their physical origins and meaning in applications. In part, this trend reflected the development of the theory of differential equations as a self-contained arena of mathematical research. This development has been extremely fruitful, and indeed is the source of many of the new ideas which underlie the effectiveness of modern numerical methods. However, it has also led to generations of textbooks which present differential equations as a self-contained subject, at most distantly related to the other intellectual disciplines in which differential equations play a crucial role. The present text, in contrast, includes physically and mathematically substantial derivations of each differential equation, often in several contexts, along with examples and homework problems which illustrate how differential equations really arise in science and engineering.

With the exception of a part of the chapter on ordinary differential equations which begins the book, this text concerns itself exclusively with linear problems—


that is, problems for which sums and scalar multiples of solutions are also solutions, if not of the same problem then of a closely related problem. It might be argued that such a conventional choice is odd in a book which otherwise breaks with recent tradition in several important respects. Many if not most of the important problems which concern contemporary scientists and engineers are nonlinear. For example, almost all problems involving fluid flow and chemical reaction are nonlinear, and these phenomena and their simulation and analysis are of central concern to many modern engineering disciplines. While series solutions are restricted to linear problems, numerical methods are in principle just as applicable to nonlinear as to linear problems (though many technical challenges arise in making them work well).

However, it is also true that not only are the classic linear differential equations—Laplace, wave, and heat—models for generic classes of physical phenomena (potentials, wave motion, and diffusion) to which many nonlinear processes belong, but also their solutions are the building blocks of methods for solving complex, nonlinear problems. Therefore the choice to concentrate on these linear problems seems very natural for a first course in differential equations.

The computational viewpoint in the theory and application of differential equations is very important to a large segment of SIAM membership, and indeed SIAM members have been in the forefront of its development. It is therefore gratifying that SIAM should play a leading role in bringing this important part of modern applied mathematics into the classroom by making the present volume available.

William W. Symes
Rice University


Preface

This introductory text on partial differential equations (PDEs) has several features that are not found in other texts at this level, including:

• equal emphasis on classical and modern techniques;

• the explicit use of the language and results of linear algebra;

• examples and exercises analyzing realistic experiments (with correct physical parameters and units);

• a recognition that mathematical software forms a part of the arsenal of both students and professional mathematicians.

In this preface, I will discuss these features and offer suggestions for getting the most out of the text.

Classical and modern techniques

Undergraduate courses on PDEs tend to focus on Fourier series methods and separation of variables. These techniques are still useful after two centuries because they offer a great deal of insight into those problems to which they apply. However, the subject of PDEs has driven much of the research in both pure and applied mathematics in the last century, and students ought to be exposed to some more modern techniques as well.

The limitation of the Fourier series technique is its restricted applicability: it can be used only for equations with constant coefficients and only on certain simple geometries. To complement the classical topic of Fourier series, I present the finite element method, a modern, powerful, and flexible approach to solving PDEs. Although many introductory texts include some discussion of finite elements (or finite differences, a competing computational methodology), the modern approach tends to receive less attention and a subordinate place in the exposition. In this text, I have put equal weight on Fourier series and finite elements.

Linear algebra

Both linear and nonlinear differential equations occur as models of physical phenomena of great importance in science and engineering. However, most introductory


texts focus on linear equations, and mine is no exception. There are several reasons why this should be so. The study of PDEs is difficult, and it makes sense to begin with the simpler linear equations before moving on to the more difficult nonlinear equations. Moreover, linear equations are much better understood. Finally, much of what is known about nonlinear differential equations depends on the analysis of linear differential equations, so this material is prerequisite for moving on to nonlinear equations.

Because we focus on linear equations, linear algebra is extremely useful. Indeed, no discussion of Fourier series or finite element methods can be complete unless it puts the results in the proper linear algebraic framework. For example, both methods produce the best approximate solution from certain finite-dimensional subspaces, and the projection theorem is therefore central to both techniques. Symmetry is another key feature exploited by both methods. While many texts de-emphasize the linear algebraic nature of the concepts and solution techniques, I have chosen to make it explicit. This decision, I believe, leads to a more cohesive course and a better preparation for future study.

However, it presents certain challenges. Linear algebra does not seem to receive the attention it deserves in many engineering and science programs, and so many students will take a course based on this text without the "prerequisites." Therefore, I present a fairly complete overview of the necessary material in Chapter 3, Essential Linear Algebra. Both faculty previewing this text and students taking a course from it will soon realize that there is too much material in Chapter 3 to cover thoroughly in the couple of weeks it can reasonably occupy in a semester course. From experience I know that conscientious students dislike moving so quickly through material that they cannot master it. However, one of the keys to using this text is to avoid getting bogged down in Chapter 3.
Students should try to get from it the "big picture" and two essential ideas:

• How to compute a best approximation to a vector from a subspace, with and without an orthogonal basis (Section 3.4).

• How to solve a matrix-vector equation when the matrix is symmetric and its eigenvalues and eigenvectors are known (Section 3.5).

Having at least begun to grasp these ideas, students should move on to Chapter 4 even if some details are not clear. The concepts from linear algebra will become much clearer as they are used throughout the remainder of the text.1 I have taught this course several times using this approach, and, although students often find it frustrating at the beginning, the results seem to be good.
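The two essential ideas just mentioned can both be sketched in a few lines. The example below is an illustration of mine, not an excerpt from Chapter 3: it assumes an orthogonal basis, and for the spectral idea it uses a small symmetric matrix A = [[2, 1], [1, 2]] whose eigenpairs (3, [1, 1]) and (1, [1, -1]) are known exactly; the function names are hypothetical.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def best_approximation(b, basis):
    # Best approximation (orthogonal projection) of b from span(basis);
    # the basis vectors are assumed mutually orthogonal.
    proj = [0.0] * len(b)
    for q in basis:
        c = dot(b, q) / dot(q, q)
        proj = [p + c * qi for p, qi in zip(proj, q)]
    return proj

def spectral_solve(eigenpairs, b):
    # Solve Ax = b for a symmetric A, given its eigenpairs (lam, q)
    # with orthogonal eigenvectors: x = sum (q.b / (lam * q.q)) q.
    x = [0.0] * len(b)
    for lam, q in eigenpairs:
        c = dot(q, b) / (lam * dot(q, q))
        x = [xi + c * qi for xi, qi in zip(x, q)]
    return x

# A = [[2, 1], [1, 2]] has eigenpairs (3, [1, 1]) and (1, [1, -1]).
x = spectral_solve([(3.0, [1.0, 1.0]), (1.0, [1.0, -1.0])], [3.0, 0.0])
Ax = [2 * x[0] + 1 * x[1], 1 * x[0] + 2 * x[1]]
# Ax recovers b = [3.0, 0.0].
```

Both computations are the same expansion along orthogonal directions, which is why the projection theorem and the spectral method sit side by side among the essentials of Chapter 3.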

Realistic problems

The subject of PDEs is easier to grasp if one keeps in mind certain standard physical experiments modeled by the equations under consideration. I have used these

1. Also, Chapter 4 is much easier going than Chapter 3, a welcome contrast!


models to introduce the equations and to aid in understanding their solutions. The models also show, of course, that the subject of PDEs is worth studying! To make the applications as meaningful as possible, I have included many examples and exercises posed in terms of meaningful experiments with realistic physical parameters.

Software

There exists powerful mathematical software that can be used to illuminate the material presented in this book. Computer software is useful for at least three reasons:

• It removes the need to do tedious computations that are necessary to compute solutions. Just as a calculator eliminates the need to use a table and interpolation to compute a logarithm, a computer algebra system can eliminate the need to perform integration by parts several times in order to evaluate an integral. With the more mechanical obstacles removed, there is more time to focus on concepts.

• Problems that simply cannot be solved (in a reasonable time) by hand can often be done with the assistance of a computer. This allows for more interesting assignments.

• Graphical capabilities allow students to visualize the results of their computations, improving understanding and interpretation.

I expect students to use a software package such as MATLAB, Mathematica, or Maple to reproduce the examples from the text and to solve the exercises. I prefer not to introduce a particular software package in the text itself, for at least two reasons. The explanation of the features and usage of the software can detract from the mathematics. Also, if the book is based on a particular software package, then it can be difficult to use with a different package. For these reasons, my text does not mention any software packages except in a few footnotes. However, since the use of software is, in my opinion, essential for a modern course, I have written tutorials for MATLAB, Mathematica, and Maple that explain the various capabilities of these programs that are relevant to this book. These tutorials appear on the accompanying CD.

Outline

The core material in this text is found in Chapters 5-7, which present Fourier series and finite element techniques for the three most important differential equations of mathematical physics: Laplace's equation, the heat equation, and the wave equation. Since the concepts themselves are hard enough, these chapters are restricted to problems in a single spatial dimension.

Several introductory chapters set the stage for this core. Chapter 1 briefly defines the basic terminology and notation that will be used in the text. Chapter


2 then derives the standard differential equations in one spatial dimension, in the process explaining the meaning of various physical parameters that appear in the equations and introducing the associated boundary conditions and initial conditions. Chapter 3, which has already been discussed above, presents the concepts and techniques from linear algebra that will be used in subsequent chapters. I want to reiterate that perhaps the most important key to using this text effectively is to move through Chapter 3 expeditiously. The rudimentary understanding that students obtain in going through Chapter 3 will grow as the concepts are used in the rest of the book.

Chapter 4 presents the background material on ordinary differential equations that is needed in later chapters. This chapter is much easier than the previous one, because much of the material is review for many students. Only the last two sections, on numerical methods and stiff systems, are likely to be new. Although the chapter is entitled Essential Ordinary Differential Equations, Section 4.3 is not formally prerequisite for the rest of the book. I included this material to give students a foundation for understanding stiff systems of ODEs (particularly, the stiff system arising from the heat equation). Similarly, Runge-Kutta schemes and automatic step control are not strictly needed. However, understanding a little about variable step size methods is useful if one tries to apply an "off-the-shelf" routine to a stiff system.

Chapter 8 extends the models and techniques developed in the first part of the book to two spatial dimensions (with some brief discussions of three dimensions). The last two chapters provide a more in-depth treatment of Fourier series (Chapter 9) and finite elements (Chapter 10).
In addition to the standard theory of Fourier series, Chapter 9 shows how to use the fast Fourier transform to efficiently compute Fourier series solutions of the PDEs, explains the relationships among the various types of Fourier series, and discusses the extent to which the Fourier series method can be extended to complicated geometries and equations with nonconstant coefficients. Sections 9.4-9.6 present a careful mathematical treatment of the convergence of Fourier series, and have a different flavor from the remainder of the book. In particular, they are less suited for an audience of science and engineering students, and have been included as a reference for the curious student.

Chapter 10 gives some advice on implementing finite element computations, discusses the solution of the resulting sparse linear systems, and briefly outlines the convergence theory for finite element methods. It also shows how to use finite elements to solve general eigenvalue problems. The tutorials on the accompanying CD include programs implementing two-dimensional finite element methods, as described in Section 10.1, in each of the supported software packages (MATLAB, Mathematica, and Maple).

The sections on sparse systems and the convergence theory are both little more than outlines, pointing the students toward more advanced concepts. Both of these topics, of course, could easily justify a dedicated semester-long course, and I had no intention of going into detail. I hope that the material on implementation of finite elements (in Section 10.1) will encourage some students to experiment with two-dimensional calculations, which are already too tedious to carry out by hand. This sort of information seems to be lacking from most books accessible to students at this level.


Possible course outlines

In a one-semester course (42-45 class hours), I typically cover Chapters 1-7 and part of Chapter 8. I touch only lightly on the material concerning Green's functions and the Dirac delta function (Sections 4.6, 6.6, and 7.4.1), and sometimes omit Section 6.3, but cover the remainder of Chapters 1-7 carefully. If an instructor wishes to cover a significant part of the material in Chapters 8-10, an obvious place to save time is in Chapter 4. I would suggest covering the needed material on ODEs on a "just-in-time" basis in the course of Chapters 5-7. This will definitely save time, since my presentation in Chapter 4 is more detailed than is really necessary. Chapter 2 can be given as a reading assignment, particularly for the intended audience of science and engineering students, who will typically be comfortable with the physical parameters appearing in the differential equations.

Acknowledgments

This book began when I was visiting Rice University in 1998-1999 and taught a course using the lecture notes of Professor William W. Symes. To satisfy my personal predilections, I rewrote the notes significantly, and for the convenience of myself and my students, I typeset them in the form of a book, which was the first version of this text. Although the final result bears, in some ways, little resemblance to Symes's original notes, I am indebted to him for the idea of recasting the undergraduate PDE course in more modern terms. His example was the inspiration for this project, and I benefited from his advice throughout the writing process.

I am also indebted to the students who have suffered through courses taught from early versions of this text. Many of them found errors, typographical and otherwise, that might otherwise have found their way into print. I would like to thank Professors Gino Biondini, Yuji Kodoma, Robert Krasny, Yuan Lou, Fadil Santosa, and Paul Uhlig, all of whom read part or all of the text and offered helpful suggestions.

The various physical parameters used in the examples and exercises were derived (sometimes by interpolation) from tables in the CRC Handbook of Chemistry and Physics [35]. The graphs in this book were generated with MATLAB. For MATLAB product information, please contact:

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7101
E-mail: [email protected]
Web: www.mathworks.com

As mentioned above, the CD also supports the use of Mathematica and Maple. For Mathematica product information, contact:


Wolfram Research, Inc.
100 Trade Center Drive
Champaign, IL 61820-7237 USA
Tel: 800-965-3726
Fax: 217-398-0747
E-mail: [email protected]

For Maple product information, contact:

Waterloo Maple Inc.
57 Erb Street West
Waterloo, Ontario
Canada N2L 6C2
Tel: 800-267-6583
E-mail: [email protected]

Chapter 1

Classification of differential equations

Loosely speaking, a differential equation is an equation specifying a relation between the derivatives of a function or between one or more derivatives and the function itself. We will call the function appearing in such an equation the unknown function. We use this terminology because the typical task involving a differential equation, and the focus of this book, is to solve the differential equation, that is, to find a function whose derivatives are related as specified by the differential equation. In carrying out this task, everything else about the relation, other than the unknown function, is regarded as known. Any function satisfying the differential equation is called a solution. In other words, a solution of a differential equation is a function that, when substituted for the unknown function, causes the equation to be satisfied. Differential equations fall into several natural and widely used categories: 1. Ordinary versus partial: (a) If the unknown function has a single independent variable, say t, then the equation is an ordinary differential equation (ODE). In this case, only "ordinary" derivatives are involved. Examples of ODEs are

In the second example, a, b, and c are regarded as known constants, and f(t) as a known function of t. In both equations, the unknown is

u = u(t).

(b) If the unknown function has two or more independent variables, the equation is called a partial differential equation (PDE). Examples include


In (1.3), the unknown function is u = u(x,y), while in (1.4), it is u = u(x,t). In (1.4), c is a known constant.

2. Order: The order of a differential equation is the order of the highest derivative appearing in the equation. Most differential equations arising in science and engineering are first or second order. Example (1.1) above is first order, while Examples (1.2), (1.3), and (1.4) are second order.

3. Linear versus nonlinear: (a) As the examples suggest, a differential equation has, on each side of the equals sign, algebraic expressions involving the unknown function and its derivatives, and possibly other functions and constants regarded as known. A differential equation is linear if those terms involving the unknown function contain only products of a single factor of the unknown function or one of its derivatives with other (known) constants or functions of the independent variables.2 In linear differential equations, the unknown function and its derivatives do not appear raised to a power other than 1, or as the argument of a nonlinear function (like sin, exp, log, etc.). For example, the general linear second-order ODE has the form

Here we require that a₂(t) ≠ 0 in order that the equation truly be of second order. Example (1.2) is a special case of this general equation. In a linear differential equation, the unknown or its derivatives can be multiplied by constants or by functions of the independent variable, but not by functions of the unknown. As another example, the general linear second-order PDE in two independent variables is

Examples (1.3) and (1.4) are of this form (although the independent variables are called x and t in (1.4), not x and y). Not all of a₁₁, a₁₂, and a₂₂ can be zero in order for this equation to be second order. A linear differential equation is homogeneous if the zero function is a solution. For example, u(t) = 0 satisfies (1.1), u(x,y) = 0 satisfies (1.3),

²In Section 3.1, we will give a more precise definition of linearity.


and u(x,t) = 0 satisfies (1.4), so these are examples of homogeneous linear differential equations. A homogeneous linear equation has another important property: whenever u and v are solutions of the equation, so is αu + βv for all real numbers α and β. For example, suppose u(t) and v(t) are solutions of (1.1) (that is, du/dt = 3u and dv/dt = 3v), and w = αu + βv. Then

Thus w is also a solution of (1.1). A linear differential equation which is not homogeneous is called inhomogeneous. For example

is linear inhomogeneous, since u(t) = 0 does not satisfy this equation. Example (1.2) might be homogeneous or not, depending on whether f(t) = 0 or not. It is always possible to group all terms involving the unknown function in a linear differential equation on the left-hand side and all terms not involving the unknown function on the right-hand side. For example, (1.5) is equivalent to

"Equivalent to" in this context means that (1.5) and (1.6) have exactly the same solutions: if u(t) solves one of these two equations, it solves the other as well. When the equation is written this way, with all of the terms involving the unknown on the left and those not involving the unknown on the right, homogeneity is easy to recognize: the equation is homogeneous if and only if the right-hand side is identically zero. (b) Differential equations which are not linear are termed nonlinear. For example,

is nonlinear. This is clear, as the unknown function u appears raised to the second power. Another way to see that (1.7) is nonlinear is as follows: if the equation were linear, it would be linear homogeneous, since the zero


function is a solution. However, suppose that u(t) and v(t) are nonzero solutions, so that

Thus w does not satisfy the equation, so the equation must be nonlinear. Both linear and nonlinear differential equations occur as models of physical phenomena of great importance in science and engineering. However, linear differential equations are much better understood, and most of what is known about nonlinear differential equations depends on the analysis of linear differential equations. This book will focus almost exclusively on the solution of linear differential equations. 4. Constant versus nonconstant coefficient: Linear differential equations like

have constant coefficients if the coefficients m, c, and k are constants rather than (nonconstant) functions of the independent variable. The right-hand side f(t) may depend on the independent variable; it is only the quantities which multiply the unknown function and its derivatives which must be constant, in order for the equation to have constant coefficients. Some techniques that are effective for constant-coefficient problems are very difficult to apply to problems with nonconstant coefficients. As a convention, we will explicitly write the independent variable when a coefficient is a function rather than a constant. The only function in a differential equation for which we will omit the independent variable is the unknown function. Thus

is an ODE with a nonconstant coefficient, namely k(x). We will only apply the phrase "constant coefficients" to linear differential equations.


5. Scalar equation versus system of equations: A single differential equation in one unknown function will be referred to as a scalar equation (all of the examples we have seen to this point have been scalar equations). A system of differential equations consists of several equations for one or more unknown functions. Here is a system of three first-order linear, constant-coefficient ODEs for three unknown functions x₁(t), x₂(t), and x₃(t):

It is important to realize that, since a differential equation is an equation involving functions, a solution will satisfy the equation for all values of the independent variable(s), or at least all values in some restricted domain. The equation is to be interpreted as an identity, such as the familiar trigonometric identity

To say that (1.8) is an identity is to say that it is satisfied for all values of the independent variable t. Similarly, u(t) = e^{3t} is a solution of (1.1) because this function u satisfies the equation for all values of t. The careful reader will note the close analogy between the uses of many terms introduced in this section ("unknown function," "solution," "linear" vs. "nonlinear") and the uses of similar terms in discussing algebraic equations. The analogy between linear differential equations and linear algebraic systems is an important theme in this book.

Exercises

1. Classify each of the following differential equations according to the categories described in this chapter (ODE or PDE, linear or nonlinear, etc.):


2. Repeat Exercise 1 for the following equations:

3. Repeat Exercise 1 for the following equations:

4. Repeat Exercise 1 for the following equations:

5. Determine whether each of the functions below is a solution of the corresponding differential equation in Exercise 1:

6. Determine whether each of the functions below is a solution of the corresponding differential equation in Exercise 2:


7. Find a function f(t) so that u(t) = t sin(t) is a solution of the ODE

Is there only one such function f? Why or why not?

8. Is there a constant λ such that u(t) = e^{λt} is a solution of the ODE

9. Suppose u is a nonzero solution of a linear, homogeneous differential equation. What is another nonzero solution? 10. Suppose u is a solution of

and v is a (nonzero) solution of

Explain how to produce infinitely many different solutions of (1.9).
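The linearity tests described in this chapter are easy to carry out symbolically. The following sketch (our own illustration, not part of the text; it uses Python's sympy library) verifies that u(t) = e^{3t} satisfies equation (1.1), du/dt = 3u, that a combination αu + βv of two solutions is again a solution, and that an equation of the form du/dt = u², like the nonlinear examples above, fails the superposition test:

```python
import sympy as sp

t, alpha, beta = sp.symbols("t alpha beta")

# u(t) = e^{3t} satisfies the homogeneous linear ODE du/dt = 3u.
u = sp.exp(3 * t)
assert sp.simplify(sp.diff(u, t) - 3 * u) == 0

# Superposition: if u and v solve du/dt = 3u, so does w = alpha*u + beta*v.
v = 5 * sp.exp(3 * t)          # another solution (any constant multiple works)
w = alpha * u + beta * v
assert sp.simplify(sp.diff(w, t) - 3 * w) == 0

# A nonlinear equation du/dt = u**2 fails this test: u(t) = 1/(1-t) solves it,
# but the multiple 2*u does not.
u2 = 1 / (1 - t)
assert sp.simplify(sp.diff(u2, t) - u2**2) == 0
assert sp.simplify(sp.diff(2 * u2, t) - (2 * u2) ** 2) != 0
```

The same pattern (substitute the candidate function and simplify the residual) mechanizes Exercises 5 and 6.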


Chapter 2

Models in one dimension

In this chapter, we present several different and interesting physical processes that can be modeled by ODEs or PDEs. For now we restrict ourselves to phenomena that can be described (at least approximately) as occurring in a single spatial dimension: heat flow or mechanical vibration in a long, thin bar, vibration of a string, diffusion of chemicals in a pipe, and so forth. In Chapters 5-7, we will learn methods for solving the resulting differential equations. In Chapter 8, we consider similar experiments occurring in multiple spatial dimensions.

2.1 Heat flow in a bar; Fourier's law

We begin by considering the distribution of heat energy (or, equivalently, of temperature) in a long, thin bar. We assume that the cross-sections of the bar are uniform, and that the temperature varies only in the longitudinal direction. In particular, we assume that the bar is perfectly insulated, except possibly at the ends, so that no heat escapes through the side. By making several simplifying assumptions, we derive a linear PDE whose solution is the temperature in the bar as a function of spatial position and time. We show, when we treat multiple space dimensions in Chapter 8, that if the initial temperature is constant in each cross-section, and if any heat source depends only on the longitudinal coordinate, then all subsequent temperature distributions depend only on the longitudinal coordinate. Therefore, in this regard, there is no modeling error in adopting a one-dimensional model. (There is modeling error associated with some of the other assumptions we make.) We will now derive the model, which is usually called the heat equation. We begin by defining a coordinate system (see Figure 2.1). The variable x denotes position in the bar in the longitudinal direction, and we assume that one end of the bar is at x = 0. The length of the bar will be ℓ, so the other end is at x = ℓ. We denote by A the area of a cross-section of the bar. The temperature of the bar is determined by the amount of heat energy in


The fact that we do not know E₀ causes no difficulty, because we are going to model the change in the energy (and hence temperature). Specifically, we examine the rate of change, with respect to time, of the heat energy in the part of the bar between x and x + Δx. The total heat in [x, x + Δx] is given by (2.2), so its rate of change is

(the constant E₀ differentiates to zero). To work with this expression, we need the following result, which allows us to move the derivative past the integral sign:

Theorem 2.1. Let F : (a,b) × [c,d] → R and ∂F/∂x be continuous, and define φ : (a,b) → R by

Then φ is continuously differentiable, and

That is,

By Theorem 2.1, we have

If we assume that there are no internal sources or sinks of heat, the heat contained in [x, x + Δx] can change only because heat flows through the cross-sections at x and x + Δx. The rate at which heat flows through the cross-section at x is called the heat flux, and is denoted q(x,t). It has units of energy per unit area per unit time, and is a signed quantity: if heat energy is flowing in the positive x-direction, then q is positive. The net heat entering [x, x + Δx] at time t, through the two cross-sections, is

the last equation follows from the fundamental theorem of calculus. (See Figure 2.2.) Equating (2.3) and (2.4) yields


or

Since this holds for all x and Δx, it follows that the integrand must be zero:

(The key point here is that the integral above equals zero over every small interval. This is only possible if the integrand is itself zero. If the integrand were positive, say, at some point in [0,ℓ], and were continuous, then the integral over a small interval around that point would be positive.)

Figure 2.2. The heat flux into and out of a part of the bar.

Naturally, heat will flow from regions of higher temperature to regions of lower temperature; note that the sign of du/dx indicates whether temperature is increasing or decreasing as x increases, and hence indicates the direction of heat flow. We now make a simplifying assumption, which is called Fourier's law of heat conduction: the heat flux q is proportional to the temperature gradient, du/dx. We call the magnitude of the constant of proportionality the thermal conductivity K, and obtain the equation

(the units of K can be determined from the units of the heat flux and those of the temperature gradient; see Exercise 2.1.1). The negative sign is necessary so that heat flows from hot regions to cold. Substituting Fourier's law into the differential equation (2.5), we can eliminate q and find a PDE for u:

(We have canceled a common factor of A.) Here we have assumed that K is constant, which would be true if the bar were homogeneous. It is possible that K depends on x (in which case p and c probably do as well); then we obtain

We call (2.7) the heat equation. If the bar contains internal sources (or sinks) of heat (such as chemical reactions that produce heat), we collect all such sources into a single source function


f(x,t) (in units of heat energy per unit time per unit volume). Then the total heat added to [x, x + Δx] during the time interval [t, t + Δt] is

The time rate of change of this contribution to the total heat energy (at time t) is

The rate of change of total heat in [x, x + Δx] is now given by the sum of (2.4) and (2.8), so we obtain, by the above reasoning, the inhomogeneous³ heat equation

2.1.1 Boundary and initial conditions for the heat equation

The heat equation by itself is not a complete model of heat flow. We must know how heat flows through the ends of the bar, and we must know the temperature distribution in the bar at some initial time. Two possible boundary conditions for heat flow in a bar correspond to perfect insulation and perfect thermal contact. If the ends of the bar are perfectly insulated, so that the heat flux across the ends is zero, then we have

(no heat flows into the left end or out of the right end). On the other hand, if the ends of the bar are kept fixed at temperature zero⁴ (through perfect thermal contact with an ice bath, for instance), we obtain u(0,t) = u(ℓ,t) = 0 for all t. Either of these boundary conditions can be inhomogeneous (that is, have a nonzero right-hand side), and we could, of course, have mixed conditions (one end insulated, the other held at fixed temperature). A boundary condition that specifies the value of the solution is called a Dirichlet condition, while a condition that specifies the value of the derivative is called a Neumann condition. A problem with a Dirichlet condition at one end and a Neumann condition at the other is said to have mixed boundary conditions. As noted

³The term homogeneous is used in two completely different ways in this section and throughout the book. A material can be (physically) homogeneous, which implies that the coefficients in the differential equations will be constants. On the other hand, a linear differential equation can be (mathematically) homogeneous, which means that the right-hand side is zero. These two uses of the word homogeneous are unrelated and potentially confusing, but the usage is standard and so the reader must understand from context the sense in which the word is used.
⁴Since changing u by an additive constant does not affect the differential equation, we can as well use Celsius as the temperature scale rather than Kelvin. (See Exercise 6.)


above, we will also use the terms homogeneous and inhomogeneous to refer to the boundary conditions. The condition u(ℓ) = 10 is called an inhomogeneous Dirichlet condition, for example. To completely determine the temperature as a function of time and space, we must know the temperature distribution at some initial time t₀. This is an initial condition:

Putting together the PDE, the boundary conditions, and the initial condition, we obtain an initial-boundary value problem (IBVP) for the heat equation. For example, with both ends of the bar held at fixed temperatures, we have the IBVP

2.1.2 Steady-state heat flow

An important special case modeled by the heat equation is steady-state heat flow—a situation in which the temperature is constant with respect to time (although not necessarily with respect to space). In this case, the temperature function u can be thought of as a function of x alone, u = u(x), the partial derivative with respect to t is zero, and the differential equation becomes

In the steady-state case, any source term f must be independent of time (otherwise, the equation could not possibly be satisfied by a function u that is independent of time). Boundary conditions have the same meaning as in the time-dependent case, although in the case of inhomogeneous boundary conditions, the boundary data must be constant. On the other hand, it obviously does not make sense to impose an initial condition on a steady-state temperature distribution. Collecting these observations, we see that a steady-state heat flow problem takes the form of a boundary value problem (BVP). For example, if the temperature is fixed at the two endpoints of the bar, we have the following Dirichlet problem:

We remark that, when only one spatial dimension is taken into account, a steady-state (or equilibrium) problem results in an ODE rather than a PDE. Moreover, these problems, at least in their simplest form, can be solved directly by two integrations. Nevertheless, we will (in Chapter 5) devote a significant amount of effort


toward developing methods for solving these ODEs, since the techniques can be generalized to multiple spatial dimensions, and also form the foundation for techniques for solving time-dependent problems.

Example 2.2. The thermal conductivity of an aluminum alloy is 1.5 W/(cm K). We will calculate the steady-state temperature of an aluminum bar of length 1 m (insulated along the sides) with its left end fixed at 20 degrees Celsius and its right end fixed at 30 degrees. If we write u = u(x) for the steady-state temperature, then u satisfies the BVP

(We changed the units of the length of the bar to centimeters, so as to be consistent with the units of the thermal conductivity.) The differential equation implies that d²u/dx² is zero, so, integrating once, du/dx is constant, say

Integrating a second time yields

The boundary condition u(0) = 20 implies that C₂ = 20, and the second boundary condition then yields

Thus

The reader should notice that the thermal conductivity of the bar does not affect the solution in the previous example. This is because the bar is homogeneous (that is, the thermal conductivity of the bar is constant throughout). This should be contrasted with the next example.

Example 2.3. A metal bar of length 100 cm is manufactured so that its thermal conductivity is given by the formula

(the units of K(x) are W/(cm K)). We will find the steady-state temperature of the bar under the same conditions as in the previous example. That is, we will solve


the BVP

Integrating both sides of the differential equation once yields

where C is a constant. Equivalently, we have

Integrating a second time yields

Applying the boundary condition u(100) = 30, we obtain

With this value for C, we have

The steady-state temperatures from the two previous examples are shown in Figure 2.3, which shows that the heterogeneity of the second bar makes a small but discernible difference in the temperature distribution. In both cases, heat energy is flowing from the right end of the bar to the left, and the heterogeneous bar has a higher thermal conductivity at the right end. This leads to a higher temperature in the middle of the bar.

2.1.3 Diffusion

One of the facts that makes mathematics so useful is that different physical phenomena can lead to essentially the same mathematical model. In this section we introduce another experiment that is modeled by the heat equation. Of course, the


Figure 2.3. The steady-state temperature from Example 2.2 (solid line) and Example 2.3 (dashed curve).

meaning of the quantities appearing in the equation will be different than in the case of heat flow. We suppose that a certain chemical is in solution in a straight pipe having a uniform cross-section. We assume that the pipe has length ℓ, and we establish a coordinate system as in the preceding section, with one end of the pipe at x = 0 and the other at x = ℓ. Assuming the concentration varies only in the x-direction, we can define u(x, t) to be the concentration, in units of mass per volume, of the chemical at time t in the cross-section located at x. Then the total mass of the chemical in the part of the bar between x and x + Δx (at time t) is

where A is the cross-sectional area of the pipe. The chemical will tend to diffuse from regions of high concentration to regions of low concentration (just as heat energy flows from hot regions to cooler regions). We will assume⁵ that the rate of diffusion is proportional to the concentration gradient

⁵This assumption, which is analogous to Fourier's law of heat conduction, is called Fick's law in the diffusion setting.


the meaning of this assumption is that there exists a constant⁶ D > 0 such that, at time t, the chemical moves across the cross-section at x at a rate of

The units of this mass flux are mass per area per time (for example, g/(cm²s)). Since the units of du/dx are mass/length⁴, the diffusion coefficient D must have units of area per time (for example, cm²/s). With this assumption, the equation modeling diffusion is derived exactly as was the heat equation, with a similar result. The total amount of the chemical contained in the part of the pipe between x and x + Δx is given by (2.12), so the rate at which this total mass is changing is

This same quantity can be computed from the fluxes at the two cross-sections at x and x + Δx. At the left, mass is entering at a rate of

while at the right it enters at a rate of

We therefore have

The result is the diffusion equation:

If the chemical is added to the interior of the pipe, this can be accounted for by a function f(x, t), where mass is added to the part of the pipe between x and x + Δx at a rate of

(mass per unit time; f has units of mass per volume per time). We then obtain the inhomogeneous diffusion equation:

⁶This constant varies with temperature and pressure; see the CRC Handbook of Chemistry and Physics [35], page 6-179.


Just as in the case of heat flow, we can consider steady-state diffusion. The result is the ODE with appropriate boundary conditions. Boundary conditions for the diffusion equation are explored in the exercises.

Exercises

1. Determine the units of the thermal conductivity K from (2.6).

2. In the CRC Handbook of Chemistry and Physics [35], there is a table labeled "Heat Capacity of Selected Solids," which "gives the molar heat capacity at constant pressure of representative metals ... as a function of temperature in the range 200 to 600 K" ([35], page 12-190). For example, the entry for iron is as follows:

Temp. (K)      200    250    300    350    400    500    600
c (J/mole·K)   21.59  23.74  25.15  26.28  27.39  29.70  32.05

As this table indicates, the specific heat of a material depends on its temperature. How would the heat equation change if we did not ignore the dependence of the specific heat on temperature?

3. Verify that the integral in (2.1) has units of energy.

4. Suppose u represents the temperature distribution in a homogeneous bar, as discussed in this section, and assume that both ends of the bar are perfectly insulated.

(a) What is the IBVP modeling this situation?

(b) Show (mathematically) that the total heat energy in the bar is constant with respect to time. (Of course, this is obvious from a physical point of view. The fact that the mathematical model implies that the total heat energy is constant is one confirmation that the model is not completely divorced from reality.)

5. Suppose we have a means of "pumping" heat energy into a bar through one of the ends. If we add r Joules per second through the end at x = ℓ, what would the corresponding boundary condition be?

6. In our derivation of the heat equation, we assumed that temperature was measured on the Kelvin scale. Explain what changes must be made (in the PDE, initial condition, or boundary conditions) to use degrees Celsius instead.

7. The thermal conductivity of iron is 0.802 W/(cm K). Consider an iron bar of length 1 m and radius 1 cm, with the lateral side completely insulated, and assume that the temperature of one end of the bar is held fixed at 20 degrees Celsius, while the temperature of the other end is held fixed at 30 degrees.


Assume that no heat energy is added to or removed from the interior of the bar.

(a) What is the (steady-state) temperature distribution in the bar?

(b) At what rate is heat energy flowing through the bar?

8. (a) Show that the function

is a solution to the homogeneous heat equation

(b) What values of θ will cause u to also satisfy homogeneous Dirichlet conditions at x = 0 and x = ℓ?

9. In this exercise, we consider a boundary condition for a bar that may be more realistic than a simple Dirichlet condition. Assume that, as usual, the side of a bar is completely insulated and that the ends are placed in a bath maintained at constant temperature. Assume that the heat flows out of or into the ends in accordance with Newton's law of cooling: the heat flux is proportional to the difference in temperature between the end of the bar and the surrounding medium. What are the resulting boundary conditions?

10. Derive the heat equation from Newton's law of cooling (cf. the previous exercise) as follows: Divide the bar into a large number n of equal pieces, each of length Δx. Approximate the temperature in the ith piece as a function u_i(t) (thus assuming that the temperature in each piece is constant at each point in time). Write down a coupled system of ODEs for u_1(t), u_2(t), ..., u_n(t) by applying Newton's law of cooling to each piece and its nearest neighbors. Assume that the bar is homogeneous, so that the material properties p, c, and K are constant. Take the limit as Δx → 0, and show that the result is the heat equation.

11. Suppose a chemical is diffusing in a pipe, and both ends of the pipe are sealed. What are the appropriate boundary conditions for the diffusion equation? What initial conditions are required? Write down a complete IBVP for the diffusion equation under these conditions.

12. Suppose that a chemical contained in a pipe of length ℓ has an initial concentration distribution of u(x,0) = ψ(x). At time zero, the ends of the pipe are sealed, and no mass is added to or removed from the interior of the pipe.

(a) Write down the IBVP describing the diffusion of the chemical.

(b) Show mathematically that the total mass of the chemical in the pipe is constant. (Derive this fact from the equations rather than from common sense.)


(c) Describe the ultimate steady-state concentration.

(d) Give a formula for the steady-state concentration in terms of ψ.

13. Suppose a pipe of length ℓ and radius r joins two large reservoirs, each containing a (well-mixed) solution of the same chemical. Let the concentration in one reservoir be u₀ and in the other be u_ℓ, and assume that u(0,t) = u₀ and u(ℓ,t) = u_ℓ.

(a) Find the steady-state rate at which the chemical diffuses through the pipe (in units of mass per time).

(b) How does this rate vary with the length ℓ and the radius r?

14. Consider the previous exercise, in which the chemical is carbon monoxide (CO) and the solution is CO in air. Suppose that u₀ = 0.01 and u_ℓ = 0.015, and that the diffusion coefficient of CO in air is 0.208 cm²/s. If the bar is 1 m long and its radius is 2 cm, find the steady-state rate at which CO diffuses through the pipe (in units of mass per time).

15. Verify that Theorem 2.1 holds for (a,b) × [c,d] = (0,1) × [0,1] and F defined by F(x,y) = cos(xy).
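Exercise 15 asks for a verification of Theorem 2.1 for F(x,y) = cos(xy). Beyond the symbolic check, the theorem can be tested numerically: the following sketch (our own illustration, not part of the text) compares a difference quotient of φ(x) = ∫₀¹ cos(xy) dy with the integral of ∂F/∂x = -y sin(xy), as Theorem 2.1 predicts they should agree:

```python
import numpy as np

y = np.linspace(0.0, 1.0, 2001)  # fine grid on [0, 1] for the y-integration

def trap(vals):
    # composite trapezoid rule on the fixed grid y
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(y)) / 2.0)

def phi(x):
    # phi(x) = integral_0^1 cos(x*y) dy
    return trap(np.cos(x * y))

def dphi_thm(x):
    # Theorem 2.1: move d/dx inside the integral; d/dx cos(x*y) = -y*sin(x*y)
    return trap(-y * np.sin(x * y))

x0, h = 0.7, 1e-5
dphi_fd = (phi(x0 + h) - phi(x0 - h)) / (2.0 * h)  # central difference quotient
print(abs(dphi_fd - dphi_thm(x0)))  # the two values agree closely
```

In this case both quantities can also be computed in closed form, since φ(x) = sin(x)/x, which makes the exercise a good sanity check on the numerics.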

2.2 The hanging bar

Suppose that a bar, with uniform cross-sectional area A and length ℓ, hangs vertically, and it stretches due to a force (perhaps gravity) acting upon it. We assume that the deformation occurs only in the vertical direction; this assumption is reasonable only if the bar is long and thin. Normal materials tend to contract horizontally when they are stretched vertically, but both this contraction and the coupling between horizontal and vertical motion are small compared to the elongation when the bar is thin (see Lin and Segel [36], Chapter 12). With the assumption of purely vertical deformation, we can describe the movement of the bar in terms of a displacement function u(x,t). Specifically, suppose that the top of the bar is fixed at x = 0, and let down be the positive x-direction. Let the cross-section of the bar originally at x move to x + u(x,t) at time t (see Figure 2.4). We will derive a PDE describing the dynamics of the bar by applying Newton's second law of motion. We assume that the bar is elastic, which means that the internal forces in the bar depend on the local relative change in length. The deformed length of the part of the bar originally between x and x + Δx (at time t) is Since the original length is Δx, the change in length of this part is and the relative change in length is


Figure 2.4. The hanging bar and its coordinate system.

This explains the definition of the strain, the local relative change in length, as the dimensionless quantity

As we noted above, to say that the bar is elastic is to say that the internal restoring force of the deformed bar depends only on the strain. We now make the further assumption that the deformations are small, and therefore that the internal forces are, to good approximation, proportional to the strain. This is equivalent to assuming that the bar is linearly elastic, or Hookean. Under the assumption that the bar is Hookean, we can write an expression for the total internal force acting on P, the part of the bar between x and x + Δx. We denote by k(x) the stiffness of the bar at x, that is, the constant of proportionality in Hooke's law, with units of force per unit area. (In the engineering literature, k is called the Young's modulus or the modulus of elasticity. Values for various materials can be found in reference books, such as [35].) Then the total internal force is

The first term in this expression is the force exerted by the part of the bar below x + Δx on P, and the second term is the force exerted by the part of the bar above x on P. The signs are correct; if the strains are positive, then the bar has been stretched, and the internal restoring force is pulling down (the positive direction) at x + Δx and up at x.

2.2. The hanging bar

Now, (2.13) is equal (by the fundamental theorem of calculus) to

∫_x^{x+Δx} ∂/∂s ( A k(s) ∂u/∂s (s, t) ) ds.

We now assume that all external forces are lumped into a body force given by a force density f (which has units of force per unit volume). Then the total external force on P (at time t) is

∫_x^{x+Δx} A f(s, t) ds,

and the sum of the forces acting on part P is

∫_x^{x+Δx} [ ∂/∂s ( A k(s) ∂u/∂s (s, t) ) + A f(s, t) ] ds.

Newton's second law states that the total force acting on P must equal the mass of P times its acceleration. This law takes the form

∫_x^{x+Δx} [ ∂/∂s ( A k(s) ∂u/∂s (s, t) ) + A f(s, t) ] ds = ∫_x^{x+Δx} ρ(s) A ∂²u/∂t² (s, t) ds,

where ρ(x) is the density of the bar at x (in units of mass per volume). We can rewrite this as

∫_x^{x+Δx} [ ρ(s) ∂²u/∂t² (s, t) − ∂/∂s ( k(s) ∂u/∂s (s, t) ) − f(s, t) ] ds = 0

(note how the factor of A cancels). This integral must be zero for every x ∈ [0, ℓ) and every Δx > 0. It follows (by the reasoning introduced on page 12) that the integrand must be identically zero; this gives the equation

ρ(x) ∂²u/∂t² (x, t) − ∂/∂x ( k(x) ∂u/∂x (x, t) ) = f(x, t).   (2.14)

The PDE (2.14) is called the wave equation. If the bar is in equilibrium, then the displacement does not depend on t, and we can write u = u(x). In this case, the acceleration ∂²u/∂t² is zero, and the forcing function f must also be independent of time. We then obtain the following ODE for the equilibrium displacement of the bar:

−d/dx ( k(x) du/dx (x) ) = f(x).   (2.15)

This is the same equation that governs steady-state heat flow! Just as in the case of steady-state heat flow, the resulting BVPs can be solved with two integrations (see Examples 2.2 and 2.3).
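Since the equilibrium equation can be solved by two integrations, it is easy to check a special case symbolically. The following sketch is not from the text; it assumes a homogeneous bar with constant stiffness k, gravitational body force f = ρg, a fixed top, and a free bottom end.

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
k, rho, g, ell = sp.symbols('k rho g ell', positive=True)
u = sp.Function('u')

# Equilibrium equation -(k u')' = f with constant stiffness k and
# body force f = rho*g (weight per unit volume) -- our sample data.
ode = sp.Eq(-k * u(x).diff(x, 2), rho * g)

# Boundary conditions: u(0) = 0 (top fixed), u'(ell) = 0 (bottom free).
sol = sp.dsolve(ode, u(x), ics={u(0): 0, u(x).diff(x).subs(x, ell): 0})

# Two integrations give u(x) = rho*g*x*(2*ell - x)/(2*k); check it:
print(sp.simplify(sol.rhs - rho * g * x * (2 * ell - x) / (2 * k)))  # 0
```

The displacement is largest at the free end, u(ℓ) = ρgℓ²/(2k), as intuition suggests.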


If the bar is homogeneous, so that ρ and k are constants, these last two differential equations can be written as

ρ ∂²u/∂t² − k ∂²u/∂x² = f

and

−k d²u/dx² = f,

respectively.

2.2.1 Boundary conditions for the hanging bar

Equation (2.15) by itself does not determine a unique displacement; we need boundary conditions, as well as initial conditions if the problem is time-dependent. The statement of the problem explicitly gives us one boundary condition: u(0) = 0 (the top end of the bar cannot move). Moreover, we can deduce a second boundary condition from force balance at the other end of the bar. If the bottom of the bar is unsupported, then there is no contact force applied at x = ℓ. On the other hand, the analysis that led to (2.13) shows that the part of the bar above x = ℓ (which is all of the bar) exerts an internal force of

A k(ℓ) ∂u/∂x (ℓ, t)

on the surface at x = ℓ. Since there is nothing to balance this force, we must have

k(ℓ) ∂u/∂x (ℓ, t) = 0,

or simply

∂u/∂x (ℓ, t) = 0.

Since the wave equation involves the second time derivative of u, we need two initial conditions to uniquely determine the motion of the bar: the initial displacement and the initial velocity. We thus arrive at the following IBVP for the wave equation:


The corresponding steady-state BVP (expressing mechanical equilibrium) is

−d/dx ( k(x) du/dx ) = f(x), 0 < x < ℓ,
u(0) = 0,
du/dx (ℓ) = 0.

There are several other sets of boundary conditions that might be of interest in connection with the differential equations (2.14) or (2.15). For example, if both ends of the bar are fixed (not allowed to move), we have the boundary conditions

u(0) = 0, u(ℓ) = 0

(recall that u is the displacement, so the condition u(ℓ) = 0 indicates that the cross-section at the end of the bar corresponding to x = ℓ does not move from its original position). If both ends of the bar are free, the corresponding boundary conditions are

du/dx (0) = 0, du/dx (ℓ) = 0.

Any of the above boundary conditions can be inhomogeneous. For example, we could fix one end of the bar at x = 0 and stretch the other to x = ℓ + Δℓ. This experiment corresponds to the boundary conditions u(0) = 0, u(ℓ) = Δℓ. As another example, if one end of the bar (say x = 0) is fixed and a force F is applied to the other end (x = ℓ), then the applied force determines the value of du/dx (ℓ). Indeed, as indicated above, the restoring force of the bar on the x = ℓ cross-section is

A k(ℓ) du/dx (ℓ),

and this must balance the applied force F:

A k(ℓ) du/dx (ℓ) = F.

This leads to the boundary condition

du/dx (ℓ) = F / (A k(ℓ)),

and the quantity F/A has units of pressure (force per unit area). For mathematical purposes, it is simplest to write an inhomogeneous boundary condition of this type as a condition on the value of du/dx (ℓ), but for solving a practical problem, it is essential to recognize that


Exercises

1. Consider the following experiment: A bar is hanging with the top end fixed at x = 0, and the bar is stretched by a pressure (force per unit area) p applied uniformly to the free (bottom) end. What are the boundary conditions describing this situation?

2. Suppose that a homogeneous bar (that is, a bar with constant stiffness k) of length ℓ has its top end fixed at x = 0, and the bar is stretched to a length ℓ + Δℓ by a pressure p applied to the bottom end. Take down to be the positive x-direction.
(a) Explain why p and Δℓ have the same sign.
(b) Explain why p and Δℓ cannot both be chosen arbitrarily (even subject to the requirement that they have the same sign). Give both physical and mathematical reasons.
(c) Suppose p is specified. Find Δℓ (in terms of p, k, and ℓ).
(d) Suppose Δℓ is specified. Find p (in terms of Δℓ, k, and ℓ).

3. A certain type of stainless steel has a stiffness of 195 GPa. (A Pascal (Pa) is the standard unit of pressure, or force per unit area. The Pascal is a derived unit: one Pascal equals one Newton per square meter. The Newton is the standard unit of force: one Newton equals one kilogram meter per second squared. Finally, GPa is short for gigaPascal, or 10⁹ Pascals.)
(a) Explain in words (including units) what a stiffness of 195 GPa means.
(b) Suppose a pressure of 1 GPa is applied to the end of a homogeneous, circular cylindrical bar of this stainless steel, and that the other end is fixed. If the original length of the bar is 1 m and its radius is 1 cm, what will its length be, in the equilibrium state, after the pressure has been applied?
(c) Verify the result of 3b by formulating and solving the boundary value problem representing this experiment.

4. Consider a circular cylindrical bar, of length 1 m and radius 1 cm, made from an aluminum alloy with stiffness 70 GPa. If the top end of the bar (x = 0) is fixed, what total force must be applied to the other end (x = 1) to stretch the bar to a length of 1.01 m?

5. Write the wave equation for the bar of Exercise 3, given that the density of the stainless steel is 7.9 g/cm³. (Warning: Use consistent units!) What must the units of the forcing function f be? Verify that the two terms on the left side of the differential equation have the same units as f.

6. Suppose that a 1 m bar of the stainless steel described in Exercise 3, with density 7.9 g/cm³, is supported at the bottom but free at the top. Let the cross-sectional area of the bar be 0.1 m². A weight of 1000 kg is placed on


top of the bar, exerting pressure on it via gravity (the gravitational constant is 9.8 m/s²). The purpose of this problem is to compute and compare the effects on the bar of the mass on the top and the weight of the bar itself.
(a) Write down three BVPs:
i. First, take into account the weight of the bar (which means that gravity induces a body force), but ignore the mass on the top (so the top end of the bar is free—the boundary condition is a homogeneous Neumann condition).
ii. Next, take into account the mass on the top, but ignore the effect of the weight of the bar (so there is no body force).
iii. Last, take both effects into account.
(b) Explain why the third BVP can be solved by solving the first two and adding the results.
(c) Solve the first two BVPs by direct integration. Compare the two displacements. Which is more significant, the weight of the bar or the mass on top?
(d) How would the situation change if the cross-sectional area of the bar were changed to 0.2 m²?

7. (a) Show that the function

is a solution to the homogeneous wave equation

(b) What values of θ will cause u to also satisfy homogeneous Dirichlet conditions at x = 0 and x = ℓ?

2.3 The wave equation for a vibrating string

We now present an argument that the wave equation (2.14) also describes the small transverse vibrations of an elastic string (such as a guitar string). In the course of the following derivation, we make several a priori unjustified assumptions which are significant enough that the end result ought to be viewed with some skepticism. However, a careful analysis leads to the same model (see the article by Antman [1]). For simplicity, we will assume that the string in question is homogeneous, so that any material properties are constant throughout the string. We suppose that the string is stretched to length ℓ and that its two endpoints are not allowed to move. We further suppose that the string vibrates in the xy-plane, occupying the interval [0, ℓ] on the x-axis when at rest, and that the point at (x, 0) in the


reference configuration moves to (x, u(x, t)) at time t. We are thus postulating that the motion of the string is entirely in the transverse direction (this is one of the severe assumptions that we mentioned in the previous paragraph). Granted this assumption, we now derive the differential equation satisfied by the displacement u. Since, by assumption, a string does not resist bending, the internal restoring force of the string under tension is tangent to the string itself at every point. We will denote the magnitude of this restoring force by T(x, t). In Figure 2.5, we display a part of the deformed string, corresponding to the part of the string between x and x + Δx in the reference configuration, together with the internal forces at the ends of this part, and their magnitudes. In the absence of any external forces, the sum of these internal forces must balance the mass times acceleration of this part of the string. To write down these equations, we must decompose the internal force into its horizontal and vertical components.

Figure 2.5. A part of the deformed string.

We write n = n(x, t) for the force at the left endpoint and θ for the angle this force vector makes with the horizontal. We then have

with

Assuming that |∂u/∂x| is small,

We write f : X → Y ("f maps X into Y") to indicate that f is a function from X to Y. The reader should recognize the difference between the range of a function f : X → Y and the set Y (which is sometimes called the co-domain of f). The set Y merely identifies the type of the output values f(x); for example, if Y = R, then every f(x) is a real number. On the other hand, the range of f is the set of elements of Y that are actually attained by f. As a simple example, consider f : R → R defined by f(x) = x². The co-domain of f is R, but the range of f consists of the set of nonnegative numbers:

{y ∈ R : y ≥ 0}.

In many cases, it is quite difficult to determine the range of a function. The co-domain, on the other hand, must be specified as part of the definition of the function.

A set is just a collection of objects (elements); most useful sets have operations defined on their elements. The most important sets used in this book are vector spaces.

Definition 3.2. A vector space V is a set on which two operations are defined, addition (if u, v ∈ V, then u + v ∈ V) and scalar multiplication (if u ∈ V and α is a scalar, then αu ∈ V). The elements of the vector space are called vectors. (In this book, the scalars are usually real numbers, and we assume this unless otherwise stated. Occasionally we use the set of complex numbers as the scalars. Vectors will always be denoted by lower case boldface letters.) The two operations must satisfy the following algebraic properties:

For every vector space considered in this book, the verification of these vector space properties is straightforward and will be taken for granted.

Example 3.3. The most common example of a vector space is (real) Euclidean n-space, Rⁿ.

3.1. Linear systems as linear operator equations

Vectors in Rⁿ are usually written in column form,

as it is convenient at times to think of u ∈ Rⁿ as an n × 1 matrix. Addition and scalar multiplication are defined componentwise:

Example 3.4. Apart from Euclidean n-space, the most common vector spaces are function spaces—vector spaces in which the vectors are functions. Functions (with common domains) can be added together and multiplied by scalars, and the algebraic properties of a vector space are easily verified. Therefore, when defining a function space, one must only check that any desired properties of the functions are preserved by addition and scalar multiplication. Here are some important examples:

1. C[a, b] is defined to be the set of all continuous, real-valued functions defined on the interval [a, b]. The sum of two continuous functions is also continuous, as is any scalar multiple of a continuous function. Therefore, C[a, b] is a vector space.

2. C¹[a, b] is defined to be the set of all real-valued, continuously differentiable functions defined on the interval [a, b]. (A function is continuously differentiable if its derivative exists and is continuous.) The sum of two continuously differentiable functions is also continuously differentiable, and the same is true for a scalar multiple of a continuously differentiable function. Therefore, C¹[a, b] is a vector space.

3. For any positive integer k, Cᵏ[a, b] is the space of real-valued functions defined on [a, b] that have k continuous derivatives.

Many vector spaces that are encountered in practice are subspaces of other vector spaces.

Definition 3.5. Let V be a vector space, and suppose W is a subset of V with the following properties:

1. The zero vector belongs to W.

2. Every linear combination of vectors in W is also in W. That is, if x, y ∈ W and α, β ∈ R, then

αx + βy ∈ W.

Chapter 3. Essential linear algebra

Then we call W a subspace of V.

A subspace of a vector space is a vector space in its own right, as the reader can verify by checking that all the properties of a vector space are satisfied for a subspace.

Example 3.6. We define

C²_D[a, b] = {u ∈ C²[a, b] : u(a) = u(b) = 0}.

The set C²_D[a, b] is a subset of C²[a, b], and this subset contains the zero function (hopefully this is obvious to the reader). Also, if u, v ∈ C²_D[a, b], α, β ∈ R, and w = αu + βv, then w ∈ C²[a, b] (since C²[a, b] is a vector space); and w(a) = αu(a) + βv(a) = α·0 + β·0 = 0, and similarly w(b) = 0. Therefore, w ∈ C²_D[a, b], which shows that C²_D[a, b] is a subspace of C²[a, b].

Example 3.7. We define

C²_N[a, b] = {u ∈ C²[a, b] : du/dx(a) = du/dx(b) = 0}.

The set C²_N[a, b] is also a subset of C²[a, b], and it can be shown to be a subspace. Clearly the zero function belongs to C²_N[a, b]. If u, v ∈ C²_N[a, b], α, β ∈ R, and w = αu + βv, then

dw/dx(a) = α du/dx(a) + β dv/dx(a) = α·0 + β·0 = 0.

Similarly, dw/dx(b) = 0, and so w ∈ C²_N[a, b]. This shows that C²_N[a, b] is a subspace of C²[a, b].

The previous two examples will be used throughout this book. The letters "D" and "N" stand for Dirichlet and Neumann, respectively (see Section 2.1, for example). The following provides an important nonexample of a subspace.

Example 3.8. We define

W = {u ∈ C²[a, b] : u(a) = γ, u(b) = δ},

where γ and δ are nonzero real numbers. Then, although W is a subset of C²[a, b], it is not a subspace. For example, the zero function does not belong to W, since it does not satisfy the boundary conditions. Also, if u, v ∈ W and α, β ∈ R, then, with w = αu + βv, we have

w(a) = αu(a) + βv(a) = αγ + βγ = (α + β)γ.


Thus w(a) does not equal γ, except in the special case that α + β = 1. Similarly, w(b) does not satisfy the boundary condition at the right endpoint.

The concept of a vector space allows us to define linearity, which describes many simple processes and is indispensable in modeling and analysis.

Definition 3.9. Suppose X and Y are vector spaces, and f : X → Y is an operator (or function,⁷ or mapping) with domain X and range Y. Then f is linear if and only if

f(αx + βy) = αf(x) + βf(y) for all x, y ∈ X and all scalars α, β.   (3.1)

This condition can be expressed as the following two conditions, which together are equivalent to (3.1):

f(x + y) = f(x) + f(y) for all x, y ∈ X,
f(αx) = αf(x) for all x ∈ X and all scalars α.

A linear operator is thus a particularly simple kind of operator; its simplicity can be appreciated by comparing the property of linearity (e.g. f(x + y) = f(x) + f(y)) with common nonlinear operators: √(x + y) ≠ √x + √y, sin(x + y) ≠ sin(x) + sin(y), etc.

Example 3.10. The operator defined by a matrix A ∈ R^{m×n} via matrix-vector multiplication, f(x) = Ax, is linear; the reader should verify this if necessary (see Exercise 7). Moreover, it can be shown that every linear operator mapping Rⁿ into Rᵐ can be represented by a matrix A ∈ R^{m×n} in this way (see Exercise 8). This explains why the study of (finite-dimensional) linear algebra is largely the study of matrices. In this book, matrices will be denoted by upper case boldface letters.

Example 3.11. To show that the sine function is not linear, we observe that

while

Example 3.12. Differentiation defines an operator mapping C¹[a, b] into C[a, b],

⁷ The word "operator" is preferred over "function" in this context, because the vector spaces themselves are often spaces of functions.


and this operator is well known to be linear. For example, the derivative of a sum is the sum of the derivatives. In general, the kth derivative operator defines a linear operator mapping Cᵏ[a, b] into C[a, b]. This is why linearity is so important in the study of differential equations.

Since a matrix A ∈ R^{n×n} defines a linear operator from Rⁿ to Rⁿ, the linear system Ax = b can be regarded as a linear operator equation. From this point of view, the questions posed by the system are: Is there a vector x ∈ Rⁿ whose image under A is the given vector b? If so, is there only one such vector x? In the next section, we will explore these questions. The point of view of a linear operator equation is also useful in discussing differential equations. For example, consider the steady-state heat flow problem

−d/dx ( k(x) du/dx ) = f(x), 0 < x < ℓ,
u(0) = 0,
u(ℓ) = 0   (3.2)

from Section 2.1. We define a differential operator L_D : C²_D[0, ℓ] → C[0, ℓ] by

L_D u = −d/dx ( k(x) du/dx ).

Then the BVP (3.2) is equivalent to the operator equation

L_D u = f

(the reader should notice how the Dirichlet boundary conditions are enforced by the definition of the domain of L_D). This and similar examples will be discussed throughout this chapter and in detail in Section 5.1.
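To make the operator point of view concrete, L_D can be approximated by a matrix acting on samples of u, so that L_D u = f literally becomes a linear system Ax = b. The following is a numerical sketch only (finite differences are not developed until later in the book); it assumes constant k = 1 and a uniform grid.

```python
import numpy as np

# Discrete analogue of L_D u = -(k u')' on [0, ell] with k = 1: sample u at
# n interior grid points and replace -u'' by second-order central differences.
# The Dirichlet conditions u(0) = u(ell) = 0 are built into the matrix,
# mirroring how they are built into the domain of L_D.
n, ell = 50, 1.0
h = ell / (n + 1)
xg = np.linspace(h, ell - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Sanity check with u(x) = x(ell - x), for which -u'' = 2 exactly:
u = xg * (ell - xg)
print(np.max(np.abs(A @ u - 2.0)))  # tiny (central differences are exact
                                    # for quadratics, up to rounding)
```

Solving A @ u = f_vec with `numpy.linalg.solve` then plays the role of inverting L_D.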

Exercises

1. In elementary algebra and calculus courses, it is often said that f : R → R is linear if and only if it has the form f(x) = ax + b, where a and b are constants. Does this agree with Definition 3.9? If not, what is the form of a linear function f : R → R?

2. Show explicitly that f : R → R defined by f(x) = √x is not linear.

3. For each of the following sets of functions, determine whether or not it is a vector space. (Define addition and scalar multiplication in the obvious way.) If it is not, state what property of a vector space fails to hold.


(d) Pₙ, the set of all polynomials of degree n or less.
(e) The set of all polynomials of degree exactly n.

4. Prove that the differential operator L : C¹[a, b] → C[a, b] defined by

is not a linear operator.

5. Prove that the differential operator L : C¹[a, b] → C[a, b] defined by

is not a linear operator.

6. Prove that the differential operator M : C²[a, b] →

R² is linear. Prove that there is a matrix A ∈ R^{2×2} such that f is given by f(x) = Ax. Hint: Each x ∈ R² satisfies x = x₁e₁ + x₂e₂, where

e₁ = (1, 0), e₂ = (0, 1).

Since f is linear, we have

f(x) = x₁ f(e₁) + x₂ f(e₂).

The desired matrix A can be expressed in terms of the vectors f(e₁), f(e₂).
(b) Now show that if f : Rⁿ → Rᵐ is linear, then there exists a matrix A ∈ R^{m×n} such that f(x) = Ax for all x ∈ Rⁿ.

We define L_D : C²_D[0, ℓ] → C[0, ℓ] by

L_D u = −d/dx ( k(x) du/dx ),

where

The reader should recall from the previous section that the BVP

−d/dx ( k(x) du/dx ) = f(x), 0 < x < ℓ, u(0) = 0, u(ℓ) = 0

can be written as the linear operator equation L_D u = f. We will show that R(L_D) is all of C[0, ℓ] by showing that we can solve L_D u = f for every f ∈ C[0, ℓ]. The idea is to integrate twice and use the boundary conditions to determine the constants of integration. We have

−d/dx ( k(x) du/dx (x) ) = f(x),

so

k(x) du/dx (x) = C₁ − ∫₀ˣ f(s) ds.

We write

k(x) du/dx (x) = C₁ − F(x),

where

F(x) = ∫₀ˣ f(s) ds.

We then integrate to obtain

u(x) = C₂ + ∫₀ˣ ( C₁ − F(z) ) / k(z) dz.

The reader should notice the use of the dummy variables of integration s and z. The first boundary condition, u(0) = 0, implies that C₂ = 0. We then have

u(x) = C₁ ∫₀ˣ dz/k(z) − ∫₀ˣ F(z)/k(z) dz;

since u(ℓ) = 0, we obtain

C₁ = ( ∫₀^ℓ F(z)/k(z) dz ) / ( ∫₀^ℓ dz/k(z) ),


and so

u(x) = ∫₀ˣ ( C₁ − F(z) ) / k(z) dz,

with C₁ as above. The reader can verify directly (by differentiating twice) that this formula defines a solution of the BVP represented by L_D u = f. This shows that we can solve L_D u = f for any f ∈ C[0, ℓ], and so R(L_D) = C[0, ℓ].
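The verification suggested here can also be carried out symbolically. The following sketch uses sample data of our own choosing, k(x) = 1 + x and f(x) = 1 on [0, 1], and checks the two-integration recipe by differentiating twice.

```python
import sympy as sp

x, z, C1 = sp.symbols('x z C1')

# Sample data (our choice): k(z) = 1 + z and f = 1, so F(z) = z.
k = 1 + z
F = z

# u(x) = int_0^x (C1 - F(z))/k(z) dz satisfies u(0) = 0 automatically.
u = sp.integrate((C1 - F) / k, (z, 0, x))

# Choose C1 so that the second Dirichlet condition u(1) = 0 holds:
c1 = sp.solve(sp.Eq(u.subs(x, 1), 0), C1)[0]
u = u.subs(C1, c1)

# Verify by differentiating twice: -(k(x) u')' should equal f = 1.
residual = sp.simplify(-sp.diff((1 + x) * sp.diff(u, x), x) - 1)
print(residual, u.subs(x, 0), sp.simplify(u.subs(x, 1)))  # 0 0 0
```

The same steps work for any continuous f, which is the content of the existence claim.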

3.2.2 Uniqueness

The linearity of a matrix operator A implies that nonuniqueness of solutions to Ax = b, if it occurs, has a special structure. Suppose x and z in Rⁿ are both solutions to Ax = b (i.e. Ax = b and Az = b both hold). Then

0 = b − b = Ax − Az = A(x − z),

the last step following from the linearity of A. If x ≠ z, then w = x − z is a nonzero vector satisfying Aw = 0. On the other hand, suppose x is a solution to Ax = b and w is a nonzero vector satisfying Aw = 0. Then

A(x + w) = Ax + Aw = b + 0 = b,

and in this case there cannot be a unique solution to Ax = b. Because of the above observations, we define the null space of A to be

N(A) = {x ∈ Rⁿ : Ax = 0}.

Since A0 = 0 always holds for a linear operator A, we always have 0 ∈ N(A). Moreover, if x, z ∈ N(A) and α, β ∈ R, then

A(αx + βz) = αAx + βAz,

and so

A(αx + βz) = α·0 + β·0 = 0.

Therefore, αx + βz ∈ N(A). This shows that N(A) is a subspace of Rⁿ. If 0 is the only vector in N(A), we say that N(A) is trivial. Our observations above lead to the following conclusion: If Ax = b has a solution, it is unique if and only if N(A) is trivial. Furthermore, nothing in the above discussion depends on A's being a matrix operator; the same arguments can be made for any linear operator, such as a differential operator.

3.2. Existence and uniqueness of solutions to Ax = b

If N(A) is nontrivial and Ax = b has a solution, then the equation has in fact infinitely many solutions. To see this, suppose x ∈ Rⁿ satisfies Ax = b and w ∈ N(A), w ≠ 0. Then, for each α ∈ R, we have

A(x + αw) = Ax + αAw = b + 0 = b.

Since x + αw is different for each different choice of the real number α, this shows that the equation has infinitely many solutions. Moreover, it easily follows that the set of all solutions to Ax = b is, in this case,

{x + z : z ∈ N(A)}.

Once again, the same properties hold for any linear operator equation.

Example 3.16. Let A ∈ R^{4×4} be defined by

Consider the equation Ax = b, where b ∈ R⁴ is arbitrary. Using the standard elimination algorithm,⁸ the system Ax = b can be shown to be equivalent to the system

We see that the system is inconsistent unless the conditions

hold. If these conditions are satisfied by b, then

where x₃ and x₄ can take on any value. Setting x₃ = s and x₄ = t, every vector of the form

⁸ We assume that the reader is familiar with Gaussian elimination, the standard row reduction algorithm for solving Ax = b. For a review, see any introductory text on linear algebra, such as the text by Lay [34].


is a solution of the system. We have that

is one solution of Ax = b, and
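The structure just described — one particular solution plus the whole null space — can be reproduced symbolically. The matrix of Example 3.16 is not reproduced here, so this sketch uses a hypothetical 4 × 4 matrix of rank 2 with a consistent right-hand side of our own choosing.

```python
import sympy as sp

# Hypothetical rank-2 matrix (rows 3 and 4 depend on rows 1 and 2);
# b is chosen so that the system is consistent.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 1],
               [1, 2, 1, 2],
               [2, 4, 1, 3]])
b = sp.Matrix([1, 2, 3, 4])

# Gaussian elimination exposes the full solution set: a particular
# solution plus free parameters, one per null-space dimension.
xhat, params = A.gauss_jordan_solve(b)
print(A.rank(), len(params))         # rank 2, so 2 free parameters

# A * xhat - b vanishes identically in the parameters:
print(sp.simplify(A * xhat - b))

# The null space itself:
for v in A.nullspace():
    print(v.T)
```

Every choice of the two parameters gives a solution, exactly as in the discussion above.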

Example 3.17. We compute the null space of the operator L_N defined in Example 3.14. If L_N u = 0, then u satisfies the BVP

The differential equation implies that u(x) = C₁x + C₂ for some constants C₁ and C₂. The two boundary conditions each lead to the conclusion that C₁ = 0; however, the constant C₂ can have any value and both the differential equation and the two boundary conditions will be satisfied. This shows that every constant function u(x) = C belongs to the null space of L_N.

1. The characteristic roots are real and distinct (b² − 4ac > 0). In this case, since the equation is linear, any linear combination of e^{r₁t} and e^{r₂t} is also a solution of (4.4). In fact, as we now show, every solution of (4.4) can be written as

u(t) = c₁ e^{r₁t} + c₂ e^{r₂t}   (4.5)

4.2. Solutions to some simple ODEs

for some choice of c₁, c₂. In fact, we will show that, for any k₁, k₂, the initial value problem (IVP)

has a unique solution of the form (4.5). With u given by (4.5), we have

and we wish to choose c₁, c₂ to satisfy

that is,

The coefficient matrix in this equation is obviously nonsingular (since r₁ ≠ r₂), and so there is a unique solution c₁, c₂ for each k₁, k₂. Since every solution of (4.4) can be written in the form (4.5), we call (4.5) the general solution of (4.4).

2. The characteristic roots are complex (i.e. b² − 4ac < 0). In this case, the characteristic roots are also unequal; in fact, they form a complex conjugate pair. The analysis of the previous case applies, and we could write the general solution in the form (4.5) in this case as well. However, with r₁, r₂

→ ∞ as t → ∞. Even if k = 1, an (n − k)-dimensional subspace of Rⁿ is a very small part of Rⁿ (comparable to a plane or a line in R³). Therefore, most initial vectors do not lie in this subspace.

Example 4.7. Let

Then the eigenvalues of A are λ₁ = 1, λ₂ = −1, and λ₃ = −2, and the corresponding eigenvectors are

The solution of (4.18) is

where

Chapter 4. Essential ordinary differential equations

The only initial values that lead to a solution x that does not grow without bound are those lying in S, the span of the eigenvectors corresponding to the negative eigenvalues. But S is a plane, which is a very small part of Euclidean 3-space. Thus almost every initial value leads to a solution that grows exponentially.

Another conclusion that we can draw is somewhat more subtle than those given above, but it will be important in Chapter 6.

4. If A has eigenvalues of very different magnitudes, then solutions of (4.18) have components whose magnitudes change at very different rates. Such solutions can be difficult to compute efficiently using numerical methods. We will discuss this point in more detail in Section 4.5.
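The eigenvector representation of the solution is easy to exercise numerically. In this sketch the matrix and initial vector are illustrative choices of our own (not the data of Example 4.7, which is not reproduced here), although its eigenvalues also happen to be 1, −1, and −2.

```python
import numpy as np

# A diagonalizable matrix (our choice) with eigenvalues 1, -1, -2:
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -2.0]])
x0 = np.array([1.0, 2.0, 3.0])

lam, U = np.linalg.eig(A)        # columns of U are eigenvectors
b = np.linalg.solve(U, x0)       # expand x0 in the eigenvector basis

def x(t):
    # x(t) = sum_i b_i e^{lam_i t} u_i solves x' = Ax, x(0) = x0.
    return U @ (b * np.exp(lam * t))

t = 0.7
dxdt = U @ (b * lam * np.exp(lam * t))     # analytic derivative of x(t)
print(np.allclose(dxdt, A @ x(t)), np.allclose(x(0.0), x0))  # True True
```

Because one eigenvalue is positive, x(t) grows without bound unless b has no component along the corresponding eigenvector — the phenomenon discussed above.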

4.3.2 Inhomogeneous systems and variation of parameters

We can now explain how to solve the inhomogeneous system

again only considering the case in which there is a basis of Rⁿ consisting of eigenvectors of A. The method is a spectral method, and the reader may wish to review Section 3.5.3. If {u₁, u₂, ..., uₙ} is a basis for Rⁿ, then every vector in Rⁿ can be written uniquely as a linear combination of these vectors. In particular, for each t, we can write f(t) as a linear combination of u₁, u₂, ..., uₙ:

f(t) = c₁(t)u₁ + c₂(t)u₂ + ⋯ + cₙ(t)uₙ.

Of course, since the vector f(t) depends on t, so do the weights c₁(t), c₂(t), ..., cₙ(t). These weights can be computed explicitly from f, which of course is considered to be known. We can also write the solution of (4.20) in terms of the basis vectors:

x(t) = a₁(t)u₁ + a₂(t)u₂ + ⋯ + aₙ(t)uₙ.   (4.21)

Since x(t) is unknown, so are the weights a₁(t), a₂(t), ..., aₙ(t). However, when these basis vectors are eigenvectors of A, it is easy to solve for the unknown weights. Indeed, substituting (4.21) in place of x yields

4.3. Linear systems with constant coefficients

The ODE then implies that

which can only hold if

daᵢ/dt = λᵢaᵢ + cᵢ(t), i = 1, 2, ..., n.

Thus, by representing the solution in terms of the eigenvectors, we reduced the system of ODEs to n scalar ODEs that can be solved independently.¹⁶ The computation of the functions c₁(t), c₂(t), ..., cₙ(t) is simplest when A is symmetric, so that the eigenvectors can be chosen to be orthonormal. In that case,

cᵢ(t) = uᵢ · f(t), i = 1, 2, ..., n.

We will concentrate on this case henceforth. The initial condition x(t₀) = x₀ provides initial values for the unknowns a₁(t), a₂(t), ..., aₙ(t). Indeed, we can write

x₀ = b₁u₁ + b₂u₂ + ⋯ + bₙuₙ,

so x(t₀) = x₀ implies that

a₁(t₀)u₁ + a₂(t₀)u₂ + ⋯ + aₙ(t₀)uₙ = b₁u₁ + b₂u₂ + ⋯ + bₙuₙ,

or

aᵢ(t₀) = bᵢ, i = 1, 2, ..., n.

When the basis is orthonormal, the coefficients b₁, b₂, ..., bₙ can be computed in the usual way:

bᵢ = uᵢ · x₀.

Example 4.8. Consider the IVP

¹⁶ This is just what happened in the spectral method for solving Ax = b—the system of n simultaneous equations was reduced to n independent equations.


where

The matrix A is symmetric, and an orthonormal basis for R² consists of the vectors

The corresponding eigenvalues are λ₁ = 0, λ₂ = 2. We now express f(t) in terms of the eigenvectors:

The solution is

where

We obtain

and

Finally,

The technique for solving (4.20) presented in this section, which we have described as a spectral method, is usually referred to as the method of variation of parameters.
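The whole procedure can be traced symbolically. In this sketch the symmetric matrix has the eigenvalues reported in Example 4.8 (0 and 2); whether it matches the example's A exactly is an assumption, and the forcing f and initial value x₀ are our own choices.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1], [-1, 1]])     # symmetric; eigenvalues 0 and 2
f = sp.Matrix([t, 0])                 # sample forcing (our choice)
x0 = sp.Matrix([1, 0])                # sample initial value (our choice)

# Orthonormal eigenvectors of A:
u1 = sp.Matrix([1, 1]) / sp.sqrt(2)   # lambda_1 = 0
u2 = sp.Matrix([1, -1]) / sp.sqrt(2)  # lambda_2 = 2

# Expand f and x0 in the eigenbasis and solve the decoupled scalar IVPs
# a_i' = lambda_i a_i + c_i(t), a_i(0) = u_i . x0:
x = sp.zeros(2, 1)
for lam, u in [(0, u1), (2, u2)]:
    c = (u.T * f)[0]                  # c_i(t) = u_i . f(t)
    b = (u.T * x0)[0]                 # b_i = u_i . x0
    a = sp.Function('a')
    sol = sp.dsolve(sp.Eq(a(t).diff(t), lam * a(t) + c), ics={a(0): b})
    x += sol.rhs * u

# Verify: x solves x' = Ax + f with x(0) = x0.
print(sp.simplify(x.diff(t) - A * x - f))   # zero vector
print(sp.simplify(x.subs(t, 0)))            # equals x0
```

This is variation of parameters in its spectral form: the matrix ODE has been reduced to two independent scalar ODEs.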


Exercises

1. Consider the IVP (4.18), where A is the matrix in Example 4.5. Find the solution for

2. Consider the IVP (4.18), where A is the matrix in Example 4.5.
(a) Explain why, no matter what the value of x₀, the solution x(t) converges to a constant vector as t → ∞.
(b) Find all values of x₀ such that the solution x is equal to a constant vector for all values of t.

3. Find the general solution to

where

4. Find the general solution to

where

5. Let A be the matrix in Exercise 3. Find all values of x₀ such that the solution to (4.18) decays exponentially to zero.

6. Let A be the matrix in Exercise 4. Find all values of x₀ such that the solution to (4.18) decays exponentially to zero.

7. Let A be the matrix of Example 4.5. Solve the IVP

where


8. Let

Solve the IVP

where

9. The following system has been proposed as a model of the population dynamics of two species of animals that compete for the same resource:

Here a, b, c, d are positive constants, x(t) is the population of the first species at time t, and y(t) is the corresponding population of the second species (x and y are measured in some convenient units, say thousands or millions of animals). The equations are easy to understand: either species increases (exponentially) if the other is not present, but, since the two species compete for the same resources, the presence of one species contributes negatively to the growth rate of the other.
(a) Solve the IVP with b = c = 2, a = d = 1, x(0) = 2, and y(0) = 1, and explain (in words) what happens to the populations of the two species in the long term.
(b) With the values of a, b, c, d given in part 9a, is there an initial condition which will lead to a different (qualitative) outcome?

10. The purpose of this exercise is to derive the solution (4.11) of the IVP

The solution can be found using the techniques of this section, although the computations are more difficult than any of the examples we have presented.


(a) Rewrite (4.23) in the form

Notice that the matrix A is not symmetric.
(b) Find the eigenvalues λ₁, λ₂ and eigenvectors u₁, u₂ of A.
(c) Write the vector-valued function F(t) in the form

Since the eigenvectors u₁ and u₂ are not orthogonal, this will require solving a (2 × 2) system of equations to find c₁(t) and c₂(t).
(d) Write the solution in the form

and solve the scalar IVPs to get a₁(t), a₂(t).
(e) The desired solution is u(t) = x₁(t). Show that the result is (4.11).

4.4 Numerical methods for initial value problems

So far in this chapter, we have discussed simple classes of ODEs, for which it is possible to produce an explicit formula for the solution. However, most differential equations cannot be solved in this sense. Moreover, sometimes the only formula that can be found is difficult to evaluate, involving integrals that cannot be computed in terms of elementary functions, eigenvalues and eigenvectors that cannot be found exactly, and so forth. In cases like these, it may be that the only way to investigate the solution is to approximate it using a numerical method, which is simply an algorithm producing an approximate solution. We emphasize that the use of a numerical method always implies that the computed solution will be in error. It is essential to know something about the magnitude of this error; otherwise, one cannot use the solution with any confidence.

Most numerical methods for IVPs in ODEs are designed for first-order scalar equations. These can be applied, almost without change, to first-order systems, and hence to higher-order ODEs (after they have been converted to first-order systems). Therefore, we begin by discussing the general first-order scalar IVP, which is of the form

du/dt = f(t, u), t > t₀,
u(t₀) = u₀.   (4.24)

We shall discuss time-stepping methods, which seek to find approximations to u(t₁), u(t₂), ..., u(tₙ), where t₀ < t₁ < t₂ < ⋯ < tₙ define the grid. The quantities t₁ − t₀, t₂ − t₁, ..., tₙ − tₙ₋₁ are called the time steps. The basic idea of time-stepping methods is based on the fundamental theorem of calculus, which implies that if


then

u(tᵢ₊₁) = u(tᵢ) + ∫_{tᵢ}^{tᵢ₊₁} f(s, u(s)) ds.   (4.25)

Now, we cannot (in general) hope to evaluate the integral in (4.25) analytically; however, any numerical method for approximating the value of a definite integral (such methods are often referred to as quadrature rules) can be adapted to form the basis of a numerical method for IVPs. By the way, equation (4.25) explains why the process of solving an IVP numerically is often referred to as integrating the ODE.

4.4.1 Euler's method

The simplest method of integrating an ODE is based on the left-endpoint rule:

    ∫_a^b g(s) ds ≈ (b - a) g(a).

Applying this to (4.25), we obtain

    u(t_{i+1}) ≈ u(t_i) + Δt f(t_i, u(t_i)).    (4.26)

Of course, this is not a computable formula, because, except possibly on the first step (i = 0), we do not know u(t_i) exactly. Instead, we have an approximation, u_i ≈ u(t_i). Formula (4.26) suggests how to obtain an approximation u_{i+1} to u(t_{i+1}):

    u_{i+1} = u_i + Δt f(t_i, u_i).    (4.27)

The reader should notice that (except for i = 0) there are two sources of error in the estimate u_{i+1}. First of all, there is the error inherent in using formula (4.26) to advance the integration by one time step. Second, there is the accumulated error due to the fact that we do not know u(t_i) in (4.26), but rather only an approximation to it. The method (4.27) is referred to as Euler's method.

Example 4.9. Consider the IVP

The exact solution is
and we can use this formula to determine the errors in the computed approximation to u(t). Applying Euler's method with the regular grid t_i = iΔt, Δt = 10/n, we have

4.4. Numerical methods for initial value problems


For example, with u_0 = 1 and n = 100, we obtain

In Figure 4.1, we graph the exact solution and the approximations computed using Euler's method for n = 10, 20, 40. As we should expect, the approximation produced by Euler's method gets better as Δt (the time step) decreases. In fact, Table 4.1, where we collect the errors in the approximations to u(10), suggests that the global error (which comprises the total error after a number of steps) is O(Δt). The symbol O(Δt) ("big-oh of Δt") denotes a quantity that is proportional to or smaller than Δt as Δt → 0.

Figure 4.1. Euler's method applied to (4.28).

Proving that the order of Euler's method is really O(Δt) is beyond the scope of this book, but we can easily sketch the essential ideas of the proof. It can be proved that the left-endpoint rule, applied to an interval of integration of length Δt, has an error that is O(Δt^2). In integrating an IVP, we apply the left-endpoint rule repeatedly, in fact, n = O(1/Δt) times. Adding up O(1/Δt) errors of order Δt^2 gives a total error of O(Δt). (Proving that this heuristic reasoning is correct takes some rather involved but elementary analysis that can be found in most numerical analysis textbooks.)
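Euler's method translates directly into a few lines of code. The sketch below applies it to an assumed test problem, u' = -u, u(0) = 1, with exact solution e^{-t} (the display of the book's example (4.28) is not legible in this copy, so this problem is a stand-in), and prints error/Δt ratios analogous to those in Table 4.1; the ratios settle toward a constant, which is the O(Δt) behavior described above.

```python
import math

def euler(f, t0, u0, T, n):
    """Approximate u(T) for u' = f(t, u), u(t0) = u0, using n Euler steps."""
    dt = (T - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u = u + dt * f(t, u)   # formula (4.27)
        t = t + dt
    return u

# Assumed test problem: u' = -u, u(0) = 1, exact solution u(t) = e^{-t}.
f = lambda t, u: -u
exact = math.exp(-10.0)

for n in [10, 20, 40, 80, 160, 320]:
    err = euler(f, 0.0, 1.0, 10.0, n) - exact
    print(n, err, err / (10.0 / n))  # error / Δt approaches a constant
```

Doubling n (halving Δt) roughly halves the error, just as in Table 4.1.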


    n     Error in u(10)      Error in u(10)/Δt
    10    -3.3384 × 10^-1     -3.3384 × 10^-1
    20    -1.8507 × 10^-1     -3.7014 × 10^-1
    40    -1.0054 × 10^-1     -4.0218 × 10^-1
    80    -5.2790 × 10^-2     -4.2232 × 10^-1
    160   -2.7111 × 10^-2     -4.3378 × 10^-1
    320   -1.3748 × 10^-2     -4.3992 × 10^-1
    640   -6.9235 × 10^-3     -4.4310 × 10^-1

Table 4.1. Global error in Euler's method for (4.28).

4.4.2 Improving on Euler's method: Runge-Kutta methods

If the results suggested in the previous section are correct, Euler's method can be used to approximate the solution to an IVP to any desired accuracy by simply making Δt small enough. Although this is true (with certain restrictions), we might hope for a more efficient method—one that would not require as many time steps to achieve a given accuracy. To improve Euler's method, we choose a numerical integration technique that is more accurate than the left-endpoint rule. The simplest is the midpoint rule:

    ∫_a^b g(s) ds ≈ (b - a) g((a + b)/2).

If Δt = b - a (and the integrand f is sufficiently smooth), then the error in the midpoint rule is O(Δt^3). Following the reasoning in the previous section, we expect that the corresponding method for integrating an IVP would have O(Δt^2) global error. It is not immediately clear how to use the midpoint rule for quadrature to integrate an IVP. The obvious equation is

    u(t_{i+1}) ≈ u(t_i) + Δt f(t_i + Δt/2, u(t_i + Δt/2)).

However, after reaching time t_i in the integration, we have an approximation for u(t_i), but none for u(t_i + Δt/2). To use the midpoint rule requires that we first generate an approximation for u(t_i + Δt/2); the simplest way to do this is with an Euler step:

    u(t_i + Δt/2) ≈ u_i + (Δt/2) f(t_i, u_i).

Putting this approximation together with the midpoint rule, and using the approximation u_i ≈ u(t_i), we obtain the improved Euler method:

    u_{i+1} = u_i + Δt f(t_i + Δt/2, u_i + (Δt/2) f(t_i, u_i)).
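In code, an improved Euler step is just an Euler half-step to the midpoint followed by the midpoint-rule update. The sketch below applies it to the same assumed stand-in problem u' = -u, u(0) = 1 (the book's example (4.28) is not reproduced in this copy); each doubling of n should shrink the error by roughly a factor of four, matching the O(Δt^2) claim.

```python
import math

def improved_euler(f, t0, u0, T, n):
    """One way to code the improved Euler method."""
    dt = (T - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u_half = u + 0.5 * dt * f(t, u)       # Euler step to t + dt/2
        u = u + dt * f(t + 0.5 * dt, u_half)  # midpoint-rule update
        t = t + dt
    return u

# Assumed test problem: u' = -u, u(0) = 1, exact solution u(t) = e^{-t}.
f = lambda t, u: -u
exact = math.exp(-10.0)
for n in [10, 20, 40, 80]:
    print(n, improved_euler(f, 0.0, 1.0, 10.0, n) - exact)
```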


    n     Error in u(10)      Error in u(10)/Δt^2
    10    -2.1629 × 10^-2     -2.1629 × 10^-2
    20    -1.2535 × 10^-2     -5.0140 × 10^-2
    40    -4.7603 × 10^-3     -7.6165 × 10^-2
    80    -1.4746 × 10^-3     -9.4376 × 10^-2
    160   -4.1061 × 10^-4     -1.0512 × 10^-1
    320   -1.0835 × 10^-4     -1.1095 × 10^-1
    640   -2.7828 × 10^-5     -1.1398 × 10^-1

Table 4.2. Global error in the improved Euler method for (4.28).

Figure 4.2 shows the results of the improved Euler method, applied to (4.28) with n = 10, n = 20, and n = 40. The improvement in accuracy can be easily seen by comparing to Figure 4.1. We can see the O(Δt^2) convergence in Table 4.2—when Δt is divided by two (i.e., n is doubled), the error is divided by approximately four.

Figure 4.2. Improved Euler method applied to (4.28).

The improved Euler method is a simple example of a Runge-Kutta (RK) method. An (explicit) RK method takes the following form:

with certain restrictions on the values of the parameters α_j, β_{jl}, and γ_j (e.g., α_1 + ··· + α_m = 1). Although (4.29) looks complicated, it is not hard to understand the idea. A general form for a quadrature rule is

    ∫_a^b g(s) ds ≈ Σ_{j=1}^m w_j g(x_j),

where w_1, w_2, ..., w_m are the quadrature weights and x_1, x_2, ..., x_m ∈ [a, b] are the quadrature nodes. In the formula for u_{i+1} in (4.29), the weights are α_1 Δt, ..., α_m Δt,

and the values k_1, k_2, ..., k_m are estimates of f(t, u(t)) at m nodes in the interval [t_i, t_{i+1}]. We will not use the general formula (4.29), but the reader should appreciate the following point: there are many RK methods, obtained by choosing various values for the parameters in (4.29). This fact is used in designing algorithms that attempt to automatically control the error. We discuss this further in Section 4.4.4 below. The most popular RK method is analogous to Simpson's rule for quadrature:

    ∫_a^b g(s) ds ≈ ((b - a)/6) [g(a) + 4g((a + b)/2) + g(b)].

Simpson's rule has an error of O(Δt^5) (Δt = b - a), and the following related method for integrating an IVP has global error of O(Δt^4):

    k_1 = f(t_i, u_i),
    k_2 = f(t_i + Δt/2, u_i + (Δt/2) k_1),
    k_3 = f(t_i + Δt/2, u_i + (Δt/2) k_2),
    k_4 = f(t_i + Δt, u_i + Δt k_3),
    u_{i+1} = u_i + (Δt/6)(k_1 + 2k_2 + 2k_3 + k_4).    (4.30)


    n     Error in u(10)      Error in u(10)/Δt^4
    10    1.9941 × 10^-2      1.9941 × 10^-2
    20    1.0614 × 10^-3      1.6982 × 10^-2
    40    6.5959 × 10^-5      1.6885 × 10^-2
    80    4.0108 × 10^-6      1.6428 × 10^-2
    160   2.4500 × 10^-7      1.6057 × 10^-2
    320   1.5100 × 10^-8      1.5833 × 10^-2
    640   9.3654 × 10^-10     1.5712 × 10^-2

Table 4.3. Global error in the fourth-order Runge-Kutta method (RK4) for (4.28).

We call this the RK4 method. Figure 4.3 and Table 4.3 demonstrate the accuracy of RK4; dividing Δt by two decreases the error by (approximately) a factor of 16. This is typical for O(Δt^4) convergence.
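The classical four-stage update can be coded directly from the formulas above. The sketch below again uses the assumed stand-in problem u' = -u, u(0) = 1 (the book's example (4.28) is not reproduced here); doubling n should shrink the error by roughly a factor of 16.

```python
import math

def rk4_step(f, t, u, dt):
    """One step of the classical fourth-order Runge-Kutta (RK4) method."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, t0, u0, T, n):
    dt = (T - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u = rk4_step(f, t, u, dt)
        t += dt
    return u

# Assumed test problem: u' = -u, u(0) = 1, exact solution u(t) = e^{-t}.
f = lambda t, u: -u
exact = math.exp(-10.0)
for n in [10, 20, 40]:
    print(n, rk4(f, 0.0, 1.0, 10.0, n) - exact)
```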

Figure 4.3. Fourth-order Runge-Kutta method (RK4) applied to (4.28).

We must note here that the improvement in efficiency of these higher-order methods is not as dramatic as it might appear at first glance. For instance, comparing Tables 4.1 and 4.3, we notice that just 10 steps of RK4 gives a smaller error than 160 steps of Euler's method. However, the improvement in efficiency is not a factor of 16, since 160 steps of Euler's method use 160 evaluations of f(t, u), while


10 steps of RK4 use 40 evaluations of f(t, u). In problems of realistic complexity, the evaluation of f(t, u) is the most expensive part of the calculation (often f is defined by a computer simulation rather than a simple algebraic formula), and so a more reasonable comparison of these results would conclude that RK4 is 4 times as efficient as Euler's method for this particular example and level of error. Higher-order methods tend to use enough fewer steps than lower-order methods that they are more efficient even though they require more work per step.

4.4.3 Numerical methods for systems of ODEs

The numerical methods presented above can be applied directly to a system of ordinary differential equations. Indeed, when the system is written in vector form, the notation is virtually identical. We now present an example.

Example 4.10. Consider two species of animals that share a habitat, and suppose one species is prey to the other. Let x_1(t) be the population of the predator species at time t, and let x_2(t) be the population of the prey at the same time. The Lotka-Volterra predator-prey model for these two populations is

where e_1, e_2, q, r are positive constants. The parameters have the following interpretations: e_1 describes the attack rate of the predators (the rate at which the prey are killed by the predators is e_1 x_1 x_2); e_2 describes the growth rate of the predator population based on the number of prey killed (the efficiency of the predators at converting prey into new predators); q is the rate at which the predators die; r is the intrinsic growth rate of the prey population. The equations describe the rate of population growth (or decline) of each species. The rate of change of the predator population is the difference between the rate of growth due to feeding on the prey and the rate of death. The rate of change of the prey population is the difference between the natural growth rate (which would govern in the absence of predators) and the death rate due to predation. Define f : R × R^2 → R^2 by

(f is actually independent of t; however, we include t as an independent variable so that this example fits into the general form discussed above). Then, given initial values x_{1,0} and x_{2,0} of the predator and prey populations, respectively, the IVP of interest is

where

Suppose e_1 = 0.01, e_2 = 0.2, r = 0.2, q = 0.4, x_{1,0} = 40, x_{2,0} = 500.

Using the RK4 method described above, we estimated the solution of (4.31) on the time interval [0, 50]. The implementation is exactly as described in (4.30), except that now the various quantities (k_j, u_i, f(t_i, u_i)) are vectors. The time step used was Δt = 0.05 (for a total of 1000 steps). In Figure 4.4, we plot the two populations versus time; this graph suggests that both populations vary periodically.
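The RK4 iteration carries over to systems verbatim once the state is a vector. Since the display of the system (4.31) is not legible in this copy, the sketch below uses the standard Lotka-Volterra form consistent with the parameter descriptions above (an assumption): x_1' = e_2 e_1 x_1 x_2 - q x_1 for the predator and x_2' = r x_2 - e_1 x_1 x_2 for the prey, with the stated parameter values and Δt = 0.05.

```python
import numpy as np

# Assumed Lotka-Volterra equations, consistent with Example 4.10:
#   x1' = e2*e1*x1*x2 - q*x1   (predator)
#   x2' = r*x2 - e1*x1*x2      (prey)
e1, e2, r, q = 0.01, 0.2, 0.2, 0.4

def f(t, x):
    return np.array([e2 * e1 * x[0] * x[1] - q * x[0],
                     r * x[1] - e1 * x[0] * x[1]])

# RK4 exactly as in the scalar case, but with vector quantities.
dt, steps = 0.05, 1000
t, x = 0.0, np.array([40.0, 500.0])
history = [x]
for _ in range(steps):
    k1 = f(t, x)
    k2 = f(t + dt/2, x + dt/2 * k1)
    k3 = f(t + dt/2, x + dt/2 * k2)
    k4 = f(t + dt, x + dt * k3)
    x = x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt
    history.append(x)
```

Plotting the two components of `history` against time reproduces the periodic behavior seen in Figure 4.4.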

Figure 4.4. The variation of the two populations with time (Example 4.10).

Another way to visualize the results is to graph x_2 versus x_1; such a graph is meaningful because, for an autonomous ODE (one in which t does not appear explicitly), the curve (x_1(t), x_2(t)) is determined entirely by the initial value x_0 (see Exercise 5 for a precise formulation of this property). In Figure 4.5, we graph x_2 versus x_1. This curve is traversed in the clockwise direction (as can be determined by comparing with Figure 4.4). For example, when the prey population is large and the predator population is small (the upper left part of the curve), the predator population will begin to grow. As it does, the prey population begins to decrease (since more prey are eaten). Eventually, the prey population gets small enough that it cannot support a large number of predators (far right part of the curve), and the predator population decreases rapidly. When the predator population gets small, the prey population begins to grow, and the whole cycle begins again.

Figure 4.5. The variation of the two populations (Example 4.10).

4.4.4 Automatic step control and Runge-Kutta-Fehlberg methods

As the above examples show, it is possible to design numerical methods which give a predictable improvement in the error as the time step is decreased. However, these methods do not allow the user to choose the step size in order to attain a desired level of accuracy. For example, we know that decreasing Δt from 0.1 to 0.05 will decrease the error in an O(Δt^4) method by approximately a factor of 16, but this does not tell us that either step size will lead to a global error less than 10^-3. It would be desirable to have a method that, given a desired level of accuracy, could choose the step size in an algorithm so as to attain that accuracy. While no method guaranteed to do this is known, there is a heuristic technique that is usually successful in practice. The basic idea is quite simple. When an algorithm wishes to integrate from t_n to t_{n+1}, it uses two different methods, one of order Δt^k and another of order Δt^{k+1}. It then regards the more accurate estimate (resulting from


the O(Δt^{k+1}) method) as the exact value, and compares it with the estimate from the lower-order method. If the difference is sufficiently small, then it is assumed that the step size is sufficiently small, and the step is accepted. (Moreover, if the difference is too small, then it is assumed that the step size is smaller than it needs to be, and it may be increased on the next step.) On the other hand, if the difference is too large, then it is assumed that the step is inaccurate. The step is rejected, and Δt, the step size, is reduced. This is called automatic step control, and it leads to an approximate solution computed on an irregular grid, since Δt can vary from one step to the next. This technique is not guaranteed to produce a global error below the desired level, since the method only controls the local error (the error resulting from a single step of the method). However, the relationship between the local and global error is understood in principle, and this understanding leads to methods that usually achieve the desired accuracy. A popular class of methods for automatic step control consists of the Runge-Kutta-Fehlberg (RKF) methods, which use two RK methods together to control the local error as described in the previous paragraph. The general form of RK methods, given in (4.29), shows that there are many possible RK formulas; indeed, for a given order Δt^k, there are many different RK methods that can be derived. This fact is used in the RKF methodology to choose pairs of formulas that evaluate f, the function defining the ODE, as few times as possible. For example, it can be proved that every RK method of order Δt^5 requires at least six evaluations of f. It is possible to choose six points (i.e., the nodes t_i + γ_j Δt in (4.29)) so that five of the points can be used in an O(Δt^4) formula, and the six points together define an O(Δt^5) formula. (One such pair of formulas is the basis of the popular RKF45 method.)
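The accept/reject logic just described can be sketched with any embedded pair of methods. The toy implementation below is not a true RKF pair: for brevity it pairs Euler (order Δt) with the improved Euler method (order Δt^2), treating the higher-order value as exact for the purpose of estimating the local error.

```python
import math

def adaptive_integrate(f, t0, u0, T, tol, dt=0.1):
    """Sketch of automatic step control using an Euler / improved-Euler
    pair in place of a true Runge-Kutta-Fehlberg pair."""
    t, u = t0, u0
    while t < T:
        dt = min(dt, T - t)
        k1 = f(t, u)
        low = u + dt * k1                             # Euler estimate
        high = u + dt * f(t + dt/2, u + dt/2 * k1)    # improved Euler estimate
        err = abs(high - low)                         # local error estimate
        if err <= tol:                # accept the step
            t, u = t + dt, high
            if err < tol / 10:        # step more accurate than needed:
                dt *= 2               # try a larger step next time
        else:                         # reject: retry with a smaller step
            dt /= 2
    return u

# u' = -u, u(0) = 1: compare with e^{-1} at t = 1.
print(abs(adaptive_integrate(lambda t, u: -u, 0.0, 1.0, 1.0, 1e-4) - math.exp(-1)))
```

A production pair such as RKF45 shares function evaluations between the two estimates, which is what makes this strategy cheap.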
This allows a very efficient implementation of automatic step control.

Exercises

1. The purpose of this exercise is to estimate the value of u(0.5), where u(t) is the solution of the IVP

The solution is u(t) = te^t, and so u(0.5) = e^{1/2}/2 ≈ 0.82436.

(a) Estimate u(0.5) by taking 4 steps of Euler's method.
(b) Estimate u(0.5) by taking 2 steps of the improved Euler method.
(c) Estimate u(0.5) by taking 1 step of the classical fourth-order RK method.

Which estimate is more accurate? How many times did each method evaluate f(t, u) = u + e^t?

2. Reproduce the results of Example 4.10.

3. The system of ODEs^17

where

models the orbit of a satellite about two heavenly bodies, which we will assume to be the earth and the moon. In these equations, (x(t), y(t)) are the coordinates of the satellite at time t. The origin of the coordinate system is the center of mass of the earth-moon system, and the x-axis is the line through the centers of the earth and the moon. The center of the moon is at the point (1 - μ_1, 0) and the center of the earth is at (-μ_1, 0), where μ_1 = 1/82.45 is the ratio of the mass of the moon to the mass of the earth. The unit of length is the distance from the center of the earth to the center of the moon. We write μ_2 = 1 - μ_1. If the satellite satisfies the following initial conditions, then its orbit is known to be periodic with period T = 6.19216933:

(a) Convert the system (4.33) of second-order ODEs to a first-order system.
(b) Use a program that implements an adaptive (automatic step control) method^18 (such as RKF45) to follow the orbit for one period. Plot the orbit in the plane, and make sure that the tolerance used by the adaptive algorithm is small enough that the orbit really appears in the plot to be periodic. Explain the variation in time steps with reference to the motion of the satellite.
(c) Assume that, in order to obtain comparable accuracy with a fixed step size, it is necessary to use the minimum step chosen by the adaptive algorithm. How many steps would be required by an algorithm with fixed step size? How many steps were used by the adaptive algorithm?^19

17. Adapted from [16], pages 153-154.
18. Both MATLAB and Mathematica provide such routines, ode45 and NDSolve, respectively.
19. It is easy to determine the number of steps used by MATLAB's ode45 routine. It is possible but tricky to determine the number of steps used by Mathematica's NDSolve routine.


4. Let

The problem is to produce a graph, on the interval [0,20], of the two components of the solution to

where

(a) Use the RK4 method with a step size of Δt = 0.1 to compute the solution. Graph both components on the interval [0, 20].
(b) Use the techniques of the previous section to compute the exact solution. Graph both components on the interval [0, 20].
(c) Explain the difference.

5. (a) Suppose t_0 ∈ [a, b] and f : R^n → R^n, u : [a, b] → R^n satisfy

satisfies

Moreover, show that the curves

and

are the same.

(b) Show, by producing an explicit example (a scalar IVP will do), that the above property does not hold for a differential equation that depends explicitly on t (that is, for a nonautonomous ODE).


6. Consider the IVP

Use both Euler's method and the improved Euler method to estimate x(10), using step sizes 1, 1/2, 1/4, 1/8, and 1/16. Verify that the error in Euler's method is O(Δt), while the error in the improved Euler method is O(Δt^2).

7. Consider the IVP

(a) Use the RK4 method with step sizes 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 to estimate x(1/2) and verify that the error is O(Δt^4).
(b) Use the RK4 method with step sizes 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 to estimate x(2). Is the error O(Δt^4) in this case as well? If not, speculate as to why it is not.

8. Let

The matrix A has an eigenvalue λ = -0.187; let x be a corresponding eigenvector. The problem is to produce a graph on the interval [0, 20] of the five components of the solution to

(a) Solve the problem using the RK4 method (in floating-point arithmetic) and graph the results.
(b) What is the exact solution?
(c) Why is there a discrepancy between the computed solution and the exact solution?
(d) Can this discrepancy be eliminated by decreasing the step size in the RK4 method?

4.5 Stiff systems of ODEs

The numerical methods described in the last section, while adequate for many IVPs, can be unnecessarily inefficient for some problems. We begin with a comparison of two IVPs, and the performance of a state-of-the-art automatic step-control algorithm on them.

Example 4.11. We consider the following two IVPs:

For both IVPs, the initial value is the same:

We will describe the matrices A_1 and A_2 by their spectral decompositions. The two matrices are symmetric, with the same eigenvectors but different eigenvalues. The eigenvectors are

while the eigenvalues of A_1 are λ_1 = -1, λ_2 = -1, λ_3 = -3 and the eigenvalues of A_2 are λ_1 = -1, λ_2 = -10, λ_3 = -100. Since the initial value x_0 is an eigenvector of both matrices, corresponding to the eigenvalue -1 in both cases, the two IVPs have exactly the same solution:

We solved both systems using the MATLAB command ode45, which implements a state-of-the-art automatic step-control algorithm based on a fourth- and fifth-order scheme.^20 The results are graphed in Figure 4.6, where, as expected, we see that the two computed solutions are the same (or at least very similar). However, examining the results of the computations performed by ode45 reveals a surprise: the algorithm used only 52 steps for IVP (4.35) but 1124 steps for (4.36)!

20. The routine ode45 was implemented by Shampine and Reichelt. See [42] for details.
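The disparity in step counts can be reproduced without ode45. The eigenvectors of A_1 and A_2 are not reproduced in this copy, so the sketch below uses diagonal stand-ins with the same eigenvalues (an assumption); since stability depends only on the eigenvalues, the behavior is the same: a fixed step of Δt = 0.05 is stable for A_1 but blows up for A_2.

```python
import numpy as np

# Diagonal stand-ins for the matrices of Example 4.11, with the
# eigenvalues stated in the text.
A1 = np.diag([-1.0, -1.0, -3.0])
A2 = np.diag([-1.0, -10.0, -100.0])

def euler_final_norm(A, dt, T=2.0):
    """Integrate x' = Ax, x(0) = (1,1,1), by Euler's method; return |x(T)|."""
    n = int(round(T / dt))
    x = np.ones(3)
    for _ in range(n):
        x = x + dt * (A @ x)
    return np.linalg.norm(x)

# Δt = 0.05 satisfies |1 + Δt λ| < 1 for every eigenvalue of A1, but for
# A2 the mode with λ = -100 gives 1 + 0.05(-100) = -4.
print(euler_final_norm(A1, 0.05))   # modest: the solution decays
print(euler_final_norm(A2, 0.05))   # huge: the λ = -100 mode blows up
```

An adaptive code such as ode45 avoids the blow-up by shrinking Δt, which is exactly why it needs so many more steps for (4.36).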


Figure 4.6. The computed solutions to IVP (4.35) (top) and IVP (4.36) (bottom). (Each graph shows all three components of the solution; however, the exact solution, which is the same for the two IVPs, satisfies x_1(t) = x_2(t) = x_3(t).)

A detailed explanation of these results is beyond the scope of this book, but we briefly describe the reason for this behavior. These comments are illustrated by an example below. Explicit time-stepping methods for ODEs, such as those that form the basis of ode45, have an associated stability region for the time step. This means that the expected O(Δt^k) behavior of the global error is not observed unless Δt is small enough to lie in the stability region. For larger values of Δt, the method is unstable and produces computed solutions that "blow up" as the integration proceeds. Moreover, this stability region is problem dependent and gets smaller as the eigenvalues of the matrix A get large and negative.^21

If A has large negative eigenvalues, then the system being modeled has some transient behavior that quickly dies out. It is the presence of the transient behavior

21. If the system of ODEs under consideration is nonlinear, say

then it is the eigenvalues of the Jacobian matrix ∂f/∂x that determine the behavior.


that requires a small time step—initially, the solution is changing over a very small time scale, and the time step must be small to model this behavior accurately. If, at the same time, there are small negative eigenvalues, then the system also models components that die out more slowly. Once the transients have died out, one ought to be able to increase the time step to model the more slowly varying components of the solution. However, this is the weakness of explicit methods like Euler's method, RK4, and others: the transient behavior inherent in the system haunts the numerical method even when the transient components of the solution have died out. A time step that ought to be small enough to accurately follow the slowly varying components produces instability in the numerical method and ruins the computed solution. A system with negative eigenvalues of widely different magnitudes (that is, a system that models components that decay at greatly different rates) is sometimes called stiff.^22 Stiff problems require special numerical methods; we give examples below in Section 4.5.2. First, however, we discuss a simple example for which the ideas presented above can be easily understood.

4.5.1 A simple example of a stiff system

The IVP

has solution

and the system qualifies as stiff according to the description given above. We will apply Euler's method, since it is simple enough that we can completely understand its behavior. However, similar results would be obtained with RK4 or another explicit method. To understand the behavior of Euler's method on (4.37), we write it out explicitly. We have

with

that is,

22. Stiffness is a surprisingly subtle concept. The definition we follow here does not capture all of this subtlety; moreover, there is not a single, well-accepted definition of a stiff system. See [33], Section 6.2, for a discussion of the various definitions of stiffness.


It follows that

and, similarly,

Clearly, the computation depends critically on

If one of these quantities is larger than 1, the corresponding component will grow exponentially as the iteration progresses. At the very least, for stability (that is, to avoid spurious exponential growth), we need

The first inequality determines the restriction on Δt, and a little algebra shows that we must have

Thus a very small time step is required for stability, and this restriction on Δt is imposed by the transient behavior in the system. As we show below, it is possible to avoid the need for overly small time steps in a stiff system; however, we must use implicit methods. Euler's method and the RK methods discussed in Section 4.4 are explicit, meaning that the value u^{(n+1)} is defined by a formula involving only known quantities. In particular, only u^{(n)} appears in the formula; in some explicit methods (multistep methods), u^{(n-1)}, u^{(n-2)}, ... may appear in the formula, but u^{(n+1)} itself does not. On the other hand, an implicit method defines u^{(n+1)} by a formula that involves u^{(n+1)} itself (as well as u^{(n)} and possibly earlier computed values of u). This means that, at each step of the iteration, an algebraic equation (possibly nonlinear) must be solved to find the value of u^{(n+1)}. In spite of this additional computational expense, implicit methods are useful because of their improved stability properties.
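The stability restriction is easy to observe numerically. For the scalar model problem u' = λu with real λ < 0, Euler's method multiplies u by (1 + Δt λ) each step, so stability requires |1 + Δt λ| ≤ 1, i.e. Δt ≤ 2/|λ|. The sketch below uses λ = -100, an assumed value for illustration (the entries of (4.37) are not legible in this copy), for which the threshold is Δt = 0.02.

```python
def euler_scalar(lam, dt, T=1.0, u0=1.0):
    """Euler's method for u' = lam*u, u(0) = u0; returns |u(T)|."""
    u = u0
    for _ in range(int(round(T / dt))):
        u = (1.0 + dt * lam) * u
    return abs(u)

lam = -100.0
# Stability requires |1 + dt*lam| <= 1, i.e. dt <= 2/|lam| = 0.02.
print(euler_scalar(lam, 0.019))  # |1 - 1.9| = 0.9: the iterates decay
print(euler_scalar(lam, 0.021))  # |1 - 2.1| = 1.1: the iterates grow
```

Just above the threshold the computed "solution" grows exponentially even though the exact solution decays almost instantly; this is the spurious growth referred to above.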

4.5.2 The backward Euler method

The simplest implicit method is the backward Euler method. Recall that Euler's method for the IVP


was derived from the equivalent formulation

by applying the left-endpoint rule for quadrature:

The result is

    u_{n+1} = u_n + Δt f(t_n, u_n)

(see Section 4.4.1). To obtain the backward Euler method, we apply the right-endpoint rule,

    ∫_a^b g(s) ds ≈ (b - a) g(b),

instead. The result is

    u_{n+1} = u_n + Δt f(t_{n+1}, u_{n+1}).    (4.38)

This method is indeed implicit. It is necessary to solve the equation (4.38) for u_{n+1}. In the case of a nonlinear ODE (or a system of nonlinear ODEs), this may be difficult (requiring a numerical root-finding algorithm such as Newton's method). However, it is not difficult to implement the backward Euler method for a linear system. We illustrate this for the linear, constant-coefficient system:

The backward Euler method takes the form

    (I - ΔtA) x_{n+1} = x_n + Δt f(t_{n+1}).
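For a linear, constant-coefficient system each backward Euler step reduces to one linear solve. A minimal sketch (with a diagonal stiff test matrix standing in for the matrices of these examples, since they are not reproduced here):

```python
import numpy as np

def backward_euler(A, g, x0, t0, T, n):
    """Backward Euler for x' = Ax + g(t): each step solves the linear
    system (I - dt*A) x_{k+1} = x_k + dt*g(t_{k+1})."""
    dt = (T - t0) / n
    I = np.eye(len(x0))
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(n):
        t += dt
        x = np.linalg.solve(I - dt * A, x + dt * g(t))
    return x

# Assumed stiff test problem: x' = Ax with eigenvalues -1, -10, -100.
# Even dt = 0.05 (far too large for Euler's method here) is stable.
A = np.diag([-1.0, -10.0, -100.0])
g = lambda t: np.zeros(3)
print(backward_euler(A, g, np.ones(3), 0.0, 2.0, 40))
```

Every step multiplies each mode by 1/(1 - Δt λ), which has magnitude less than 1 for any λ < 0 and any Δt > 0; this is the stability property established below.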

Example 4.12. Applying the backward Euler method to the simple example (4.37) gives some insight into the difference between the implicit and explicit methods. We obtain the following iteration for the first component:


Similarly, for the second component, we obtain

For any positive value of Δt, we have

and so no instability can arise. The step size Δt will still have to be small enough for the solution to be accurate, but we do not need to take Δt excessively small to avoid spurious exponential growth.

Example 4.13. We will apply both Euler's method and the backward Euler method to the IVP

where A_2 is the matrix from Example 4.11 and

We integrate over the interval [0, 2] with a time step of Δt = 0.05 (40 steps) in both algorithms. The results are shown in Figure 4.7. Euler's method "blows up" with this time step, while the backward Euler method produces a reasonable approximation to the true solution.

Remark. There are higher-order methods designed for use with stiff ODEs, notably the trapezoidal method. It is also possible to develop automatic step-control methods for stiff systems.^23

Exercises

1. Let

and consider the IVP

(a) Find the exact solution x(t).

23. MATLAB includes such routines.


Figure 4.7. The computed solutions to IVP (4.40) using Euler's method (top) and the backward Euler method (bottom). (Each graph shows all three components of the solution.)

(b) Estimate x(1) using 10 steps of Euler's method and compute the norm of the error.
(c) Estimate x(1) using 10 steps of the backward Euler method and compute the norm of the error.

2. Repeat Exercise 1 for

3. Let

Find the largest value of Δt such that Euler's method is stable. (Use numerical experimentation.)


4. Let f : R × R^2 → R^2 be defined by

The nonlinear IVP

has solution

(a) Show by example that Euler's method is unstable unless Δt is chosen small enough. Determine (by trial and error) how small Δt must be in order that Euler's method is stable.
(b) Apply the backward Euler method. Is it stable for all positive values of Δt?

5. Let

(a) Find the exact solution to

(b) Determine experimentally (that is, by trial and error) how small Δt must be in order that Euler's method behaves in a stable manner on this IVP.
(c) Euler's method takes the form

Let the eigenpairs of A be λ_1, u_1 and λ_2, u_2. Show that Euler's method is equivalent to

where

Find explicit formulas for y_1^{(n)} and y_2^{(n)}, and derive an upper bound on Δt that guarantees stability of Euler's method. Compare with the experimental result.


(d) Repeat with the backward Euler method in place of Euler's method.

6. Let A ∈ R^{n×n} be symmetric with negative eigenvalues λ_1, λ_2, ..., λ_n.

(a) Consider Euler's method and the backward Euler method applied to the homogeneous linear system

Show that Euler's method produces a sequence x_0, x_1, x_2, ... where

while the backward Euler method produces a sequence x_0, x_1, x_2, ... where

(b) What are the eigenvalues of I + ΔtA?
(c) What condition must Δt satisfy in order that Euler's method be stable? (Hint: See the previous exercise.)
(d) What are the eigenvalues of (I - ΔtA)^{-1}?
(e) Show that the backward Euler method is stable for all positive values of Δt.

4.6 Green's functions

The Green's function for an IVP, BVP, or IBVP has a fairly simple meaning, although the details can get complicated, particularly for PDEs. A homogeneous linear ODE always has the zero solution; nonzero solutions arise only when the initial conditions are not zero, or when the differential equation has a nonzero forcing function (that is, is inhomogeneous). The initial values and the forcing function can be called the data of the problem. The Green's function represents the contribution to the solution of the datum at each point in time. In the remainder of this section, we explain this statement for two IVPs that we first considered in Section 4.2. We also explain how the Green's function can be interpreted as a special solution to the differential equation, and introduce the concept of the Dirac delta function. Finally, we comment on how the form of the Green's function will change when we consider PDEs.

4.6.1 The Green's function for a first-order linear ODE

The solution to

    du/dt = au + f(t), u(t_0) = u_0,    (4.41)


as derived in Section 4.2, is

    u(t) = e^{a(t-t_0)} u_0 + ∫_{t_0}^{t} e^{a(t-s)} f(s) ds.

We define

    G(t; s) = e^{a(t-s)}.    (4.42)

Then we can write the formula for the solution u as

    u(t) = G(t; t_0) u_0 + ∫_{t_0}^{t} G(t; s) f(s) ds.    (4.43)

The function G is called the (causal) Green's function for (4.41), and formula (4.43) already hints at the significance of G. The effect of the initial datum u_0 on the solution u at time t is G(t; t_0)u_0, while the effect of the datum f(s) on the solution u at time t is G(t; s)f(s) ds.^24 As a concrete example, the IVP (4.41) models the growth of a population (of a country, say), where a is the natural growth rate and f(t) is the rate of immigration at time t.^25 For the sake of definiteness, suppose the population is measured in millions of people and t is measured in years. Then u_0 is the population at time t_0 (in millions) and f(t) is the rate, in millions of people per year, of immigration at time t. We can see that u(t) = G(t; t_0) is the solution to the following IVP:

That is, u(t) = G(t; t_0) is the population function resulting from an initial population of 1 (million) and no immigration (here we consider only times t after t_0). We can also recognize u(t) = G(t; s) as the population function corresponding to a certain pattern of immigration, namely, when 1 million people immigrate at the instant t = s! Of course, this is an idealization, but there are many situations when we wish to model a phenomenon that takes place at an instant in time or at a single point in space (such as a point force in mechanics). We now explore this idea further. We consider a country whose population is modeled by the differential equation

We assume that initially there are no people in the country (u(t_0) = 0), and that 1 million people immigrate over the time interval [s, s + Δt] (at a constant rate, during that

24. The data represented by the function f has different units than that represented by u_0. Indeed, an examination of the differential equation shows that the units of f are the units of u_0 divided by time. Therefore, f is a rate, and it must be multiplied by the time "interval" ds prior to being multiplied by G(t; s).
25. This model of population growth is not a particularly good one, since it assumes that the growth rate remains constant over time. In reality, the growth rate changes due to changing birth rates, life spans, etc.


interval, of 1/Δt million people per year). We assume that s > t_0. The resulting population satisfies the IVP

where

The solution is

This last expression is the average of G(t; τ) over the interval [s, s + Δt]. As we take Δt smaller and smaller, we obtain

But what forcing function do we obtain in the limit?

4.6.2 The Dirac delta function

The function δ_{Δt} has the following properties, the first two of which are just the definition:

which is the average value of g on the interval [0, Δt]. Since the limit, as Δt → 0, of the average value of g over [0, Δt] is