## Householder Reflections

Definition 2.5 (Householder reflection). Let $\mathbf{u} \in \mathbb{R}^{n}$ be a non-zero vector. The $n \times n$ matrix of the form
$$I-2 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}$$
is called a Householder reflection.
A Householder reflection describes a reflection about the hyperplane through the origin that is orthogonal to the unit vector $\mathbf{u} /\|\mathbf{u}\|$. Each such matrix is symmetric and orthogonal, since
\begin{aligned} \left(I-2 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}\right)^{T}\left(I-2 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}\right) &=\left(I-2 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}\right)^{2} \\ &=I-4 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}+4 \frac{\mathbf{u}\left(\mathbf{u}^{T} \mathbf{u}\right) \mathbf{u}^{T}}{\|\mathbf{u}\|^{4}}=I . \end{aligned}
We can use Householder reflections instead of Givens rotations to calculate a QR factorization.

With each multiplication of an $n \times m$ matrix $A$ by a Householder reflection we want to introduce zeros below the diagonal in an entire column. To start, we construct a reflection which transforms the first non-zero column $\mathbf{a} \in \mathbb{R}^{n}$ of $A$ into a multiple of the first unit vector $\mathbf{e}_{1}$. In other words, we want to choose $\mathbf{u} \in \mathbb{R}^{n}$ such that the last $n-1$ entries of $$\left(I-2 \frac{\mathbf{u} \mathbf{u}^{T}}{\|\mathbf{u}\|^{2}}\right) \mathbf{a}=\mathbf{a}-2 \frac{\mathbf{u}^{T} \mathbf{a}}{\|\mathbf{u}\|^{2}} \mathbf{u}$$ vanish. Since we are free to choose the length of $\mathbf{u}$, we normalize it such that $\|\mathbf{u}\|^{2}=2 \mathbf{u}^{T} \mathbf{a}$, which is possible since $\mathbf{a} \neq 0$. The right-hand side of Equation (2.3) then simplifies to $\mathbf{a}-\mathbf{u}$, and we have $u_{i}=a_{i}$ for $i=2, \ldots, n$. Using this we can rewrite the normalization as
$$2 u_{1} a_{1}+2 \sum_{i=2}^{n} a_{i}^{2}=u_{1}^{2}+\sum_{i=2}^{n} a_{i}^{2}$$
Gathering the terms and extending the sum, we have
$$u_{1}^{2}-2 u_{1} a_{1}+a_{1}^{2}-\sum_{i=1}^{n} a_{i}^{2}=0 \Leftrightarrow\left(u_{1}-a_{1}\right)^{2}=\sum_{i=1}^{n} a_{i}^{2} .$$
Thus $u_{1}=a_{1} \pm\|\mathbf{a}\|$. In numerical applications it is usual to choose the sign to be the same as the sign of $a_{1}$ to avoid $\|\mathbf{u}\|$ becoming too small, since division by a very small number can lead to numerical difficulties.
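The construction above can be sketched in a few lines of Python. This is a minimal illustration (the function names are ours, not from the text): we build $\mathbf{u}$ from a column $\mathbf{a}$ with $u_i = a_i$ for $i \geq 2$ and $u_1 = a_1 + \operatorname{sign}(a_1)\|\mathbf{a}\|$, then apply the reflection without ever forming the $n \times n$ matrix.

```python
import numpy as np

def householder_vector(a):
    """Return u such that (I - 2 u u^T / ||u||^2) a is a multiple of e_1.

    Follows the construction above: u_i = a_i for i >= 2, and
    u_1 = a_1 + sign(a_1) * ||a||, the sign chosen to avoid cancellation.
    """
    a = np.asarray(a, dtype=float)
    u = a.copy()
    sign = 1.0 if a[0] >= 0 else -1.0
    u[0] = a[0] + sign * np.linalg.norm(a)
    return u

def apply_reflection(u, x):
    """Compute (I - 2 u u^T / ||u||^2) x without forming the matrix."""
    return x - 2.0 * (u @ x) / (u @ u) * u

a = np.array([3.0, 4.0, 0.0])          # ||a|| = 5
u = householder_vector(a)               # u = (8, 4, 0)
Ha = apply_reflection(u, a)             # = a - u = (-5, 0, 0)
```

Note that the last $n-1$ entries of `Ha` vanish exactly, as the derivation predicts, and the first entry is $-\operatorname{sign}(a_1)\|\mathbf{a}\|$.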

## Linear Least Squares

Consider a system of linear equations $A \mathbf{x}=\mathbf{b}$ where $A$ is an $n \times m$ matrix and $\mathbf{b} \in \mathbb{R}^{n}$.

In the case $n<m$ there are not enough equations to determine a unique solution. The system is called under-determined. If the system is consistent, the set of all solutions is an affine subspace of dimension $r$, where $r \geq m-n$. This problem seldom arises in practice, since generally we choose a solution space in accordance with the available data. One example, however, is cubic splines, which we will encounter later.

In the case $n>m$ there are more equations than unknowns. The system is called over-determined. This situation may arise when a simple data model is fitted to a large number of data points. Problems of this form occur frequently when we collect $n$ observations, which often carry measurement errors, and we want to build an $m$-dimensional linear model where generally $m$ is much smaller than $n$. In statistics this is known as linear regression. Many machine learning algorithms have been developed to address this problem (see, for example, C. M. Bishop, *Pattern Recognition and Machine Learning* [2]).
We consider the simplest approach: we seek $\mathbf{x} \in \mathbb{R}^{m}$ that minimizes the Euclidean norm $\|A \mathbf{x}-\mathbf{b}\|$. This is known as the least-squares problem.
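The QR factorization from the previous section is exactly what makes this problem tractable: since $Q$ is orthogonal, $\|A\mathbf{x}-\mathbf{b}\| = \|R\mathbf{x}-Q^{T}\mathbf{b}\|$, which is minimized by solving the triangular system $R\mathbf{x}=Q^{T}\mathbf{b}$. A minimal sketch, using NumPy's built-in QR routine and an illustrative line-fitting problem of our own choosing ($n=5$ observations, $m=2$ unknowns):

```python
import numpy as np

# Hypothetical over-determined system: fit a line y = c0 + c1*t
# to n = 5 observations (m = 2 unknowns). Exact data for illustration.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
A = np.column_stack([np.ones_like(t), t])   # n x m design matrix
b = 1.0 + 2.0 * t                           # observations

# Minimize ||Ax - b||: with A = QR, solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)                      # reduced QR, R is m x m
x = np.linalg.solve(R, Q.T @ b)             # recovers c0 = 1, c1 = 2
```

Since the data here lie exactly on a line, the residual $\|A\mathbf{x}-\mathbf{b}\|$ is zero; with noisy observations the same two lines return the least-squares fit.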

## Iterative Schemes and Splitting

Given a linear system of the form $A \mathbf{x}=\mathbf{b}$, where $A$ is an $n \times n$ matrix and $\mathbf{x}, \mathbf{b} \in \mathbb{R}^{n}$, solving it by factorization is frequently very expensive for large $n$. However, we can rewrite it in the form
$$(A-B) \mathbf{x}=-B \mathbf{x}+\mathbf{b}$$
where the matrix $B$ is chosen in such a way that $A-B$ is non-singular and the system $(A-B) \mathbf{x}=\mathbf{y}$ is easily solved for any right-hand side $\mathbf{y}$. A simple iterative scheme starts with an estimate $\mathbf{x}^{(0)} \in \mathbb{R}^{n}$ of the solution (this could be arbitrary) and generates the sequence $\mathbf{x}^{(k)}, k=1,2, \ldots$, by solving
$$(A-B) \mathbf{x}^{(k+1)}=-B \mathbf{x}^{(k)}+\mathbf{b} .$$
This technique is called splitting. If the sequence converges to a limit, $\lim _{k \rightarrow \infty} \mathbf{x}^{(k)}=\hat{\mathbf{x}}$, then taking the limit on both sides of Equation (2.4) gives $(A-B) \hat{\mathbf{x}}=-B \hat{\mathbf{x}}+\mathbf{b}$. Hence $\hat{\mathbf{x}}$ is a solution of $A \mathbf{x}=\mathbf{b}$.

What are necessary and sufficient conditions for convergence? Suppose that $A$ is non-singular, so that $A \mathbf{x}=\mathbf{b}$ has a unique solution $\mathbf{x}^{*}$. Since $\mathbf{x}^{*}$ solves $A \mathbf{x}=\mathbf{b}$, it also satisfies $(A-B) \mathbf{x}^{*}=-B \mathbf{x}^{*}+\mathbf{b}$. Subtracting this equation from (2.4) gives
$$(A-B)\left(\mathbf{x}^{(k+1)}-\mathbf{x}^{*}\right)=-B\left(\mathbf{x}^{(k)}-\mathbf{x}^{*}\right).$$
We denote $\mathbf{x}^{(k)}-\mathbf{x}^{*}$ by $\mathbf{e}^{(k)}$. It is the error in the $k^{\text {th }}$ iteration. Since $A-B$ is non-singular, we can write
$$\mathbf{e}^{(k+1)}=-(A-B)^{-1} B \mathbf{e}^{(k)}.$$
The matrix $H:=-(A-B)^{-1} B$ is known as the iteration matrix. In practical applications $H$ is not calculated explicitly; we analyze its properties theoretically in order to determine whether or not the iteration converges. We will encounter such analyses later on.
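A concrete instance of this scheme is the Jacobi method, where one chooses $B$ so that $A-B=D$, the diagonal of $A$; each step then solves only a diagonal system. The sketch below (our own small example, not from the text) runs the iteration of Equation (2.4) and also forms the iteration matrix $H$, whose spectral radius being below 1 is what guarantees convergence:

```python
import numpy as np

# Splitting with the Jacobi choice A - B = D (the diagonal of A).
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])        # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])

D = np.diag(np.diag(A))           # A - B: easy to solve against
B = A - D

x = np.zeros(2)                   # arbitrary starting estimate x^(0)
for _ in range(100):
    # (A - B) x^(k+1) = -B x^(k) + b, Equation (2.4)
    x = np.linalg.solve(D, -B @ x + b)

# Iteration matrix H = -(A-B)^{-1} B; formed here only for illustration.
H = -np.linalg.solve(D, B)
rho = max(abs(np.linalg.eigvals(H)))   # spectral radius, here about 0.316
```

Since $\rho(H) < 1$, the error $\mathbf{e}^{(k)}$ shrinks by roughly that factor each step, and after 100 iterations `x` agrees with the exact solution of $A\mathbf{x}=\mathbf{b}$ to machine precision.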
