Preface

Introduction to scientific computing with Python

1 Introduction to Python
  1.1 Basic Python
    1.1.1 Python, numbers and variables
    1.1.2 Basic mathematical operations
    1.1.3 Mathematical functions and constants
    1.1.4 Powers of 10 and scientific notation
  1.2 Assignment operators
  1.3 Strings, tuples, dictionaries and lists
    1.3.1 Strings
    1.3.2 Tuples
    1.3.3 Dictionaries
    1.3.4 Lists
    1.3.5 Lists of lists
  1.4 Flow control
    1.4.1 for loops
    1.4.2 Useful applications
    1.4.3 Boolean type of variables
    1.4.4 while loops
    1.4.5 Decisions and the if statement
    1.4.6 Error control and exceptions
    1.4.7 The command break
  1.5 Functions
  1.6 Classes
  1.7 Accessing data files
  1.8 Further reading
  Chapter highlights
  Exercises

2 Matrices and Python
  2.1 Review of matrices
    2.1.1 Matrix properties
    2.1.2 Special matrices
  2.2 The NumPy module
    2.2.1 Matrices and arrays
    2.2.2 Slicing
    2.2.3 Special matrices, methods and properties
    2.2.4 Matrix assignment
    2.2.5 Matrix addition and scalar multiplication
    2.2.6 Vectorized algorithms
    2.2.7 Performance of algorithms
    2.2.8 Matrix multiplication
    2.2.9 Inverse of a matrix
    2.2.10 Determinant of a matrix
    2.2.11 Linear systems
    2.2.12 Eigenvalues and eigenvectors
  2.3 The SciPy module
    2.3.1 Systems with banded matrices
    2.3.2 Systems with positive definite, banded matrices
    2.3.3 Systems with other special matrices
  2.4 Drawing graphs with MatPlotLib
    2.4.1 Plotting one-dimensional arrays
  2.5 Further reading
  Chapter highlights
  Exercises

3 Scientific computing
  3.1 Computer arithmetic and sources of error
    3.1.1 Catastrophic cancellation
    3.1.2 The machine precision
  3.2 Normalized scientific notation
    3.2.1 Floats in Python
  3.3 Rounding and chopping
    3.3.1 Absolute and relative errors
    3.3.2 Loss of significance (again)
    3.3.3 Algorithms and stability
    3.3.4 Approximations of π
  3.4 Further reading
  Chapter highlights
  Exercises

4 Calculus facts
  4.1 Sequences and convergence
  4.2 Continuous functions
    4.2.1 Bolzano’s intermediate value theorem
  4.3 Differentiation
    4.3.1 Important theorems from differential calculus
    4.3.2 Taylor polynomials
    4.3.3 The big-O notation
  4.4 Integration
    4.4.1 Approximating integrals
    4.4.2 Important theorems of integral calculus
    4.4.3 The remainder in Taylor polynomials
  4.5 Further reading
  Chapter highlights
  Exercises

Introduction to computational mathematics

5 Roots of equations
  5.1 Bisection method
    5.1.1 Derivation and implementation
    5.1.2 Proof of convergence
    5.1.3 Rate of convergence
  5.2 Fixed-point methods
    5.2.1 Derivation and implementation
    5.2.2 Theoretical considerations
    5.2.3 Convergence of fixed-point methods
  5.3 Newton’s method
    5.3.1 Derivation and implementation
    5.3.2 Rate of convergence
  5.4 Secant method
    5.4.1 Derivation and implementation
    5.4.2 Rate of convergence
  5.5 Other methods and generalizations
    5.5.1 Stopping criteria
    5.5.2 Steffensen’s method
    5.5.3 Aitken’s delta-squared process
    5.5.4 Multiple roots
    5.5.5 High-order methods
    5.5.6 Systems of equations
  5.6 The module scipy.optimize
    5.6.1 Bisection method
    5.6.2 Fixed-point methods
    5.6.3 Newton and secant methods
    5.6.4 A hybrid method
    5.6.5 Application in astrophysics
  5.7 Further reading
  Chapter highlights
  Exercises

6 Interpolation and approximation
  6.1 Introduction
  6.2 Lagrange interpolation
    6.2.1 Naive construction of the interpolant
    6.2.2 Lagrange polynomials
    6.2.3 Failure of Lagrange interpolation
    6.2.4 Error analysis
    6.2.5 Newton’s representation with divided differences
    6.2.6 More on divided differences
  6.3 Hermite interpolation
    6.3.1 Computation of the Hermite interpolant
    6.3.2 Error analysis
    6.3.3 Implementation details
  6.4 Spline interpolation
    6.4.1 Continuous piecewise linear interpolation
    6.4.2 Some theoretical considerations
    6.4.3 Error analysis of piecewise linear interpolation
    6.4.4 Cubic spline interpolation
    6.4.5 An algorithm for cubic splines
    6.4.6 Theoretical considerations and error analysis
    6.4.7 B-splines
  6.5 Method of least squares
    6.5.1 Linear least squares
    6.5.2 Polynomial fit
    6.5.3 Non-polynomial fit
  6.6 The module scipy.interpolate
    6.6.1 Lagrange interpolation
    6.6.2 Cubic spline interpolation
    6.6.3 Computations with B-splines
    6.6.4 General purpose interpolation and the function interp1d
    6.6.5 Interpolation in two dimensions
    6.6.6 Application in image processing
  6.7 Further reading
  Chapter highlights
  Exercises

7 Numerical integration
  7.1 Introduction to numerical quadrature and midpoint rule
  7.2 Newton-Cotes quadrature rules
    7.2.1 Trapezoidal rule
    7.2.2 Error estimate of the trapezoidal rule
    7.2.3 Composite trapezoidal rule
    7.2.4 Error estimate of the composite trapezoidal rule
    7.2.5 Simpson’s rule
    7.2.6 Composite Simpson’s rule
    7.2.7 Error of Simpson’s rule
    7.2.8 Degree of exactness
  7.3 Gaussian quadrature rules
    7.3.1 Choice of nodes and weights
    7.3.2 Gauss-Legendre quadrature rules
    7.3.3 Gaussian quadrature on general intervals
    7.3.4 Error analysis of the Gaussian quadrature
  7.4 The module scipy.integrate
    7.4.1 Trapezoidal rule
    7.4.2 Simpson’s rule
    7.4.3 Gaussian quadrature
    7.4.4 Application in classical mechanics
  7.5 Further reading
  Chapter highlights
  Exercises

8 Numerical differentiation and applications to differential equations
  8.1 Numerical differentiation
    8.1.1 First-order accurate finite difference approximations
    8.1.2 Second-order accurate finite difference approximations
    8.1.3 Error estimation
    8.1.4 Approximation of second-order derivatives
    8.1.5 Richardson’s extrapolation
  8.2 Applications to ordinary differential equations
    8.2.1 First-order ordinary differential equations
    8.2.2 Euler method
    8.2.3 Alternative derivation and error estimates
    8.2.4 Implicit variants of Euler’s method
    8.2.5 Improved Euler method
    8.2.6 The notion of stability
  8.3 Runge-Kutta methods
    8.3.1 Explicit Runge-Kutta methods
    8.3.2 Implicit and diagonally implicit Runge-Kutta methods
    8.3.3 Adaptive Runge-Kutta methods
  8.4 The module scipy.integrate again
    8.4.1 Generic integration of initial value problems
    8.4.2 Systems of ordinary differential equations
    8.4.3 Application in epidemiology
  8.5 Further reading
  Chapter highlights
  Exercises

9 Numerical linear algebra
  9.1 Numerical solution of linear systems
    9.1.1 Algorithm for the naive Gaussian elimination
    9.1.2 LU factorization
    9.1.3 Implementation of LU factorization
  9.2 Pivoting strategies
    9.2.1 Gaussian elimination with partial pivoting
    9.2.2 LU factorization with pivoting
  9.3 Condition number of a matrix
    9.3.1 Vector and matrix norms
    9.3.2 Theoretical properties of matrix norms
    9.3.3 Condition number
  9.4 Other matrix computations
    9.4.1 Computation of inverse matrix
    9.4.2 Computation of determinants
  9.5 Symmetric matrices
    9.5.1 LDLT factorization
    9.5.2 Cholesky factorization
  9.6 The module scipy.linalg again
    9.6.1 LU factorization
    9.6.2 Cholesky factorization
  9.7 Iterative methods
    9.7.1 Derivation of iterative methods
    9.7.2 Classical iterative methods – Jacobi and Gauss-Seidel methods
    9.7.3 Convergence of iterative methods
    9.7.4 Sparse matrices in Python and the module scipy.sparse
    9.7.5 Sparse Gauss-Seidel implementation
    9.7.6 Sparse linear systems and the module scipy.sparse
    9.7.7 Application in electric circuits
  9.8 Further reading
  Chapter highlights
  Exercises

Advanced topics

10 Best approximations
  10.1 Vector spaces, norms, and approximations
    10.1.1 Vector spaces
    10.1.2 Inner products and norms
    10.1.3 Best approximations
  10.2 Linear least squares
    10.2.1 Theoretical properties of linear least squares
    10.2.2 Solution of linear least squares problem
    10.2.3 Linear least squares problem in Python
  10.3 Gram-Schmidt orthonormalization
    10.3.1 Numerical implementation
  10.4 QR factorization
    10.4.1 Linear least squares using QR factorization
    10.4.2 Computation of range
    10.4.3 Householder matrices and QR factorization
    10.4.4 Python implementation and orthonormalization
  10.5 Singular value decomposition
    10.5.1 Derivation of SVD
    10.5.2 Theoretical considerations
    10.5.3 Pseudoinverse and SVD
    10.5.4 Linear least squares problem and SVD
    10.5.5 Singular value decomposition in Python
    10.5.6 Application in image compression
  10.6 Further reading