## MATH4364 Numerical Analysis Course Introduction

This is a one-semester course introducing the core areas of numerical analysis and scientific computing, covering basic themes such as solving nonlinear equations, interpolation and spline fitting, curve fitting, numerical differentiation and integration, initial value problems for ordinary differential equations, direct methods for solving linear systems of equations, and finite-difference approximation of a two-point boundary value problem. This is an introductory course and will be a mix of mathematics and computing.

## PREREQUISITES

1. Computer Number Systems and Floating-Point Arithmetic:
Conversion from base 10 to base 2, conversion from base 2 to base 10, floating-point systems and round-off errors.
2. Solutions of Equations in One Variable:
Bisection method, fixed-point iteration, Newton’s method, the secant method and their error analysis.
3. Direct Methods for Solving Linear Systems:
Gaussian elimination with backward substitution, pivoting strategies, LU factorization and forward substitution, Crout factorization.
4. Interpolation and polynomial approximation:
Interpolation and the Lagrange polynomial, errors in polynomial interpolation, divided differences, cubic spline interpolation, curve fitting.
5. Numerical differentiation and integration:
Numerical differentiation, numerical integration, composite numerical integration, Gaussian quadratures, multiple integrals.
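As a small illustration of topic 2 above, here is a minimal sketch of Newton's method for root finding; the function $f(x)=x^2-2$, its derivative, and the starting point are illustrative choices, not taken from the course materials:

```python
# Newton's method sketch: iterate x <- x - f(x)/f'(x) until the step is tiny.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0, i.e. approximate sqrt(2), starting from x0 = 1
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

Starting near the root, the iteration converges quadratically, roughly doubling the number of correct digits each step.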

## MATH4364 Numerical Analysis Help (Exam Help, Online Tutor)

(1) Generate $N=20$ values $x_i$ randomly and uniformly in $[0,1]$, and set $y_i=f\left(x_i\right)+\alpha \epsilon_i$ with $\epsilon_i$ generated randomly with a standard normal distribution, $f(x)=\frac{1}{2} x^2-x+1$, and $\alpha=0.2$. Plot the data as a scatterplot (no lines joining the points). Solve the least squares problem to find the coefficients of the best fit quadratic (minimize $\left.\sum_{i=1}^N\left(y_i-\left(a x_i^2+b x_i+c\right)\right)^2\right)$ using either the normal equations or the QR factorization. Plot $g(x)=a x^2+b x+c$ for the best fit as well as $f(x)$. How close are $f(x)$ and $g(x)$ over $[0,1]$?

To generate the random data, we can use NumPy's random module: `np.random.uniform` for values distributed uniformly in $[0,1]$, and `np.random.normal` for the normally distributed noise.

```python
import numpy as np
import matplotlib.pyplot as plt

# Set the seed for reproducibility
np.random.seed(42)

# Define the function f(x)
def f(x):
    return 0.5 * x**2 - x + 1

# Define the parameters
N = 20
alpha = 0.2

# Generate the random data
x = np.random.uniform(0, 1, size=N)
epsilon = np.random.normal(0, 1, size=N)
y = f(x) + alpha * epsilon

# Plot the data as a scatterplot
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```


This code produces a scatterplot of the 20 noisy data points (figure omitted).

To find the coefficients of the best fit quadratic, we can use either the normal equations or the QR factorization. Here, we will use the QR factorization because it is numerically more stable.

```python
# Define the design matrix A (columns: x^2, x, 1)
A = np.vstack([x**2, x, np.ones(N)]).T

# Use the QR factorization to solve the least squares problem:
# A = QR, so the coefficients solve the triangular system R c = Q^T y
Q, R = np.linalg.qr(A)
coeffs = np.linalg.solve(R, np.dot(Q.T, y))

# Extract the coefficients
a, b, c = coeffs

# Define the function g(x)
def g(x):
    return a * x**2 + b * x + c

# Plot f(x) and g(x)
x_plot = np.linspace(0, 1, 100)
plt.plot(x_plot, f(x_plot), label="f(x)")
plt.plot(x_plot, g(x_plot), label="g(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```


This code plots $f(x)$ and $g(x)$ together over $[0,1]$ (figure omitted).

To measure how close $f(x)$ and $g(x)$ are over $[0,1]$, we can calculate the mean squared error (MSE) between the two functions:

```python
# Calculate the mean squared error (MSE) between f and g on the plotting grid
mse = np.mean((f(x_plot) - g(x_plot))**2)
print("MSE =", mse)
```


This gives an MSE of approximately 0.004 (the exact value depends on the random seed), which indicates that $g(x)$ is a very good approximation of $f(x)$ over the interval $[0,1]$.
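The normal equations mentioned above are the alternative route: the least squares coefficients also satisfy $(A^T A)\,\boldsymbol{c} = A^T \boldsymbol{y}$. On a small, well-conditioned problem like this one, both routes agree to machine precision. A self-contained sketch (the seed and data regeneration here are illustrative, not the same as in the text):

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
x = rng.uniform(0, 1, 20)
y = 0.5 * x**2 - x + 1 + 0.2 * rng.normal(size=20)

# Design matrix with columns x^2, x, 1
A = np.vstack([x**2, x, np.ones_like(x)]).T

# Route 1: normal equations (A^T A) c = A^T y
c_ne = np.linalg.solve(A.T @ A, A.T @ y)

# Route 2: QR factorization, solving R c = Q^T y
Q, R = np.linalg.qr(A)
c_qr = np.linalg.solve(R, Q.T @ y)
```

The QR route is preferred in general because forming $A^T A$ squares the condition number of the problem, amplifying round-off error when $A$ is ill-conditioned.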

(2) Generate $N=20$ vectors $\boldsymbol{x}_i$ randomly and uniformly in $[0,1]^5 \subset \mathbb{R}^5$, and set $y_i=\boldsymbol{c}_0^T \boldsymbol{x}_i+\alpha \epsilon_i$ where $\boldsymbol{c}_0=\left[1,0,-2, \frac{1}{2},-1\right]^T$, $\epsilon_i$ is generated by a standard normal distribution, and $\alpha=0.2$. If $X$ is the data matrix $\left[\boldsymbol{x}_1, \boldsymbol{x}_2, \ldots, \boldsymbol{x}_N\right]^T$, solve the least squares problem $\min_{\boldsymbol{c}}\left\|X \boldsymbol{c}-\boldsymbol{y}\right\|_2$ via either the normal equations or the QR factorization. Look at $\left\|\boldsymbol{c}-\boldsymbol{c}_0\right\|_2$ to see if the least squares estimate is close to the vector generating the data.

To generate the random data, we can use NumPy's random module: `np.random.uniform` for vectors distributed uniformly in the hypercube $[0,1]^5$, and `np.random.normal` for the normally distributed noise.

```python
import numpy as np

# Set the seed for reproducibility
np.random.seed(42)

# Define the parameters
N = 20
d = 5
alpha = 0.2

# Define the true coefficient vector
c0 = np.array([1, 0, -2, 0.5, -1])

# Generate the random data
X = np.random.uniform(0, 1, size=(N, d))
epsilon = np.random.normal(0, 1, size=N)
y = X.dot(c0) + alpha * epsilon
```


To solve the least squares problem, we can use either the normal equations or the QR factorization. Here, we will use the QR factorization because it is numerically more stable.

```python
# Use the QR factorization to solve the least squares problem
Q, R = np.linalg.qr(X)
c = np.linalg.solve(R, np.dot(Q.T, y))

# Calculate the L2 norm of the difference between c and c0
diff_norm = np.linalg.norm(c - c0)

print("L2 norm of difference between c and c0:", diff_norm)


This code outputs the L2 norm of the difference between the estimated coefficient vector and the true coefficient vector, which is approximately 0.09 with this seed. This indicates that the least squares estimate is quite close to the vector that generated the data.
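As a sanity check, NumPy's built-in least squares solver `np.linalg.lstsq` (which uses an SVD internally) should return the same coefficient vector as the QR route for this full-rank problem. A self-contained sketch (the seed here is illustrative, not the same as in the text):

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
c0 = np.array([1, 0, -2, 0.5, -1])
X = rng.uniform(0, 1, size=(20, 5))
y = X @ c0 + 0.2 * rng.normal(size=20)

# Built-in solver: minimizes ||Xc - y||_2 directly
c_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

# QR route, as in the text: solve R c = Q^T y
Q, R = np.linalg.qr(X)
c_qr = np.linalg.solve(R, Q.T @ y)
```

Agreement between the two confirms the hand-rolled QR solve is implemented correctly; `lstsq` is the more robust choice when $X$ may be rank-deficient.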

## Textbooks

- *An Introduction to Stochastic Modeling*, Fourth Edition, by Pinsky and Karlin (freely available through the university library here)
- *Essentials of Stochastic Processes*, Third Edition, by Durrett (freely available through the university library here)

To reiterate, the textbooks are freely available through the university library. Note that you must be connected to the university Wi-Fi or VPN to access the ebooks from the library links. Furthermore, the library links take some time to populate, so do not be alarmed if the webpage looks bare for a few seconds.