## Cholesky Factorization

Let $A$ be an $n \times n$ symmetric matrix, i.e., $A_{i, j}=A_{j, i}$. We can take advantage of the symmetry by expressing $A$ in the form $A=L D L^{T}$, where $L$ is lower triangular with ones on its diagonal and $D$ is a diagonal matrix. More explicitly, we can write the factorization, known as the Cholesky factorization, as
$$A=\left(\begin{array}{lll} \mathbf{l}_{1} & \ldots & \mathbf{l}_{n} \end{array}\right)\left(\begin{array}{cccc} D_{1,1} & 0 & \cdots & 0 \\ 0 & D_{2,2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & D_{n, n} \end{array}\right)\left(\begin{array}{c} \mathbf{l}_{1}^{T} \\ \mathbf{l}_{2}^{T} \\ \vdots \\ \mathbf{l}_{n}^{T} \end{array}\right)=\sum_{k=1}^{n} D_{k, k} \mathbf{l}_{k} \mathbf{l}_{k}^{T}$$
Again $\mathbf{l}_{k}$ denotes the $k^{\text {th }}$ column of $L$. The analogy to the LU algorithm is obvious when letting $U=D L^{T}$. However, this algorithm exploits the symmetry and requires roughly half the storage. To be more specific, we let $A_{0}=A$ at the beginning, and for $k=1, \ldots, n$ we let $\mathbf{l}_{k}$ be the $k^{\text {th }}$ column of $A_{k-1}$ scaled such that $L_{k, k}=1$. Set $D_{k, k}=\left(A_{k-1}\right)_{k, k}$ and calculate $A_{k}=A_{k-1}-D_{k, k} \mathbf{l}_{k} \mathbf{l}_{k}^{T}$.
An example of such a factorization is
$$\left(\begin{array}{ll} 4 & 1 \\ 1 & 4 \end{array}\right)=\left(\begin{array}{ll} 1 & 0 \\ \frac{1}{4} & 1 \end{array}\right)\left(\begin{array}{cc} 4 & 0 \\ 0 & \frac{15}{4} \end{array}\right)\left(\begin{array}{ll} 1 & \frac{1}{4} \\ 0 & 1 \end{array}\right).$$
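The rank-one update scheme described above can be sketched in a few lines. The following is a minimal Python sketch (the function name `ldlt` is ours); it performs no pivoting, so a zero pivot would fail, and it reproduces the $2 \times 2$ example:

```python
import numpy as np

def ldlt(A):
    """LDL^T factorization by successive rank-one updates (no pivoting)."""
    A_k = np.array(A, dtype=float)   # working copy, plays the role of A_{k-1}
    n = A_k.shape[0]
    L = np.zeros((n, n))
    D = np.zeros(n)
    for k in range(n):
        D[k] = A_k[k, k]                  # D_{k,k} = (A_{k-1})_{k,k}
        L[:, k] = A_k[:, k] / D[k]        # k-th column scaled so L_{k,k} = 1
        A_k -= D[k] * np.outer(L[:, k], L[:, k])  # A_k = A_{k-1} - D_{k,k} l_k l_k^T
    return L, D

L, D = ldlt([[4.0, 1.0], [1.0, 4.0]])
# L = [[1, 0], [1/4, 1]] and D = (4, 15/4), matching the example above
```

Because each rank-one update zeros out the already-processed rows and columns, $L$ comes out lower triangular automatically.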
Recall that $A$ is positive definite if $\mathbf{x}^{T} A \mathbf{x}>0$ for all $\mathbf{x} \neq 0$.
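When $A$ is positive definite, every pivot $D_{k,k}$ is positive, so $G = L D^{1/2}$ is well defined and satisfies $A = G G^{T}$; this is the form that library routines such as NumPy's `cholesky` return. A quick numerical check on the $2 \times 2$ example above (the factors are entered by hand):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 4.0]])

# LDL^T factors taken from the worked example above
L = np.array([[1.0, 0.0], [0.25, 1.0]])
D = np.array([4.0, 15.0 / 4.0])

# For positive definite A, G = L * sqrt(D) satisfies A = G G^T.
G = L * np.sqrt(D)            # scales column k of L by sqrt(D[k])
assert np.allclose(G, np.linalg.cholesky(A))
assert np.allclose(G @ G.T, A)
```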

## QR Factorization

In the following we examine another way to factorize a matrix. However, first we need to recall a few concepts.
For all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$, the scalar product is defined by
$$\langle\mathbf{x}, \mathbf{y}\rangle=\langle\mathbf{y}, \mathbf{x}\rangle=\sum_{i=1}^{n} x_{i} y_{i}=\mathbf{x}^{T} \mathbf{y}=\mathbf{y}^{T} \mathbf{x}$$
The scalar product is a linear operation, i.e., for $\mathbf{x}, \mathbf{y}, \mathbf{z} \in \mathbb{R}^{n}$ and $\alpha, \beta \in \mathbb{R}$
$$\langle\alpha \mathbf{x}+\beta \mathbf{y}, \mathbf{z}\rangle=\alpha\langle\mathbf{x}, \mathbf{z}\rangle+\beta\langle\mathbf{y}, \mathbf{z}\rangle$$
The norm or Euclidean length of $\mathbf{x} \in \mathbb{R}^{n}$ is defined as
$$\|\mathbf{x}\|=\left(\sum_{i=1}^{n} x_{i}^{2}\right)^{1 / 2}=\langle\mathbf{x}, \mathbf{x}\rangle^{1 / 2} \geq 0$$
The norm of $\mathbf{x}$ is zero if and only if $\mathbf{x}$ is the zero vector.
Two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$ are called orthogonal to each other if
$$\langle\mathbf{x}, \mathbf{y}\rangle=0$$
Of course the zero vector is orthogonal to every vector including itself.
A set of vectors $\mathbf{q}_{1}, \ldots, \mathbf{q}_{m} \in \mathbb{R}^{n}$ is called orthonormal if
$$\left\langle\mathbf{q}_{k}, \mathbf{q}_{l}\right\rangle=\left\{\begin{array}{ll} 1, & k=l, \\ 0, & k \neq l, \end{array}\right. \quad k, l=1, \ldots, m .$$
Let $Q=\left(\begin{array}{lll}\mathbf{q}_{1} & \ldots & \mathbf{q}_{n}\end{array}\right)$ be an $n \times n$ real matrix. It is called orthogonal if its columns are orthonormal. It follows from $\left(Q^{T} Q\right)_{k, l}=\left\langle\mathbf{q}_{k}, \mathbf{q}_{l}\right\rangle$ that $Q^{T} Q=I$, where $I$ is the unit or identity matrix. Thus $Q$ is nonsingular and its inverse exists, $Q^{-1}=Q^{T}$. Furthermore, $Q Q^{T}=Q Q^{-1}=I$. Therefore the rows of an orthogonal matrix are also orthonormal and $Q^{T}$ is also an orthogonal matrix. Further, $1=\operatorname{det} I=\operatorname{det}\left(Q Q^{T}\right)=\operatorname{det} Q \operatorname{det} Q^{T}=(\operatorname{det} Q)^{2}$ and we deduce $\operatorname{det} Q=\pm 1$.
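These identities are easy to check numerically. A small sketch with a $2 \times 2$ rotation matrix, whose columns are orthonormal by construction:

```python
import numpy as np

theta = 0.7
# A 2x2 rotation: its columns are orthonormal, so Q is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

I = np.eye(2)
assert np.allclose(Q.T @ Q, I)                  # Q^T Q = I
assert np.allclose(Q @ Q.T, I)                  # rows are orthonormal too
assert np.allclose(np.linalg.inv(Q), Q.T)       # Q^{-1} = Q^T
assert np.isclose(abs(np.linalg.det(Q)), 1.0)   # det Q = +-1
```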
Lemma 2.1. If $P, Q$ are orthogonal, then so is $P Q$.
Proof. Since $P^{T} P=Q^{T} Q=I$, we have
$$(P Q)^{T}(P Q)=\left(Q^{T} P^{T}\right)(P Q)=Q^{T}\left(P^{T} P\right) Q=Q^{T} Q=I$$
and hence $P Q$ is orthogonal.
We will require the following lemma to construct orthogonal matrices.

## Givens Rotations

Given a real $n \times m$ matrix $A$, we let $A_{0}=A$ and seek a sequence $\Omega_{1}, \ldots, \Omega_{k}$ of $n \times n$ orthogonal matrices such that the matrix $A_{i}:=\Omega_{i} A_{i-1}$ has more zeros below the diagonal than $A_{i-1}$ for $i=1, \ldots, k$. The zeros are inserted in such a way that $A_{k}$ is upper triangular. We then set $R=A_{k}$. Hence $\Omega_{k} \cdots \Omega_{1} A=R$ and $Q=\left(\Omega_{k} \cdots \Omega_{1}\right)^{-1}=\left(\Omega_{k} \cdots \Omega_{1}\right)^{T}=\Omega_{1}^{T} \cdots \Omega_{k}^{T}$. Therefore $A=Q R$, where $Q$ is orthogonal and $R$ is upper triangular.

Definition 2.4 (Givens rotation). An $n \times n$ orthogonal matrix $\Omega$ is called a Givens rotation if it is identical to the identity matrix except for four elements and $\operatorname{det} \Omega=1$. Specifically we write $\Omega^{[p, q]}$, where $1 \leq p<q \leq n$, for a matrix such that
$$\Omega_{p, p}^{[p, q]}=\Omega_{q, q}^{[p, q]}=\cos \theta, \quad \Omega_{p, q}^{[p, q]}=\sin \theta, \quad \Omega_{q, p}^{[p, q]}=-\sin \theta$$
for some $\theta \in[-\pi, \pi]$.
Letting $n=4$ we have for example
$$\Omega^{[2,4]}=\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & 0 & \sin \theta \\ 0 & 0 & 1 & 0 \\ 0 & -\sin \theta & 0 & \cos \theta \end{array}\right).$$
Geometrically these matrices correspond to a rotation of the underlying coordinate system in a two-dimensional plane, which is called an Euler rotation in mechanics. Orthogonality is easily verified using the identity $\cos ^{2} \theta+\sin ^{2} \theta=1$.
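Putting the pieces together, the QR procedure described at the start of this section can be sketched as follows. This is our own minimal sketch (the names `givens` and `qr_givens` are hypothetical, indices are 0-based): for each subdiagonal entry, $\theta$ is chosen so that the rotation annihilates it.

```python
import numpy as np

def givens(n, p, q, theta):
    """Givens rotation Omega^{[p,q]} as in Definition 2.4 (0-based, p < q)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[p, p] = G[q, q] = c
    G[p, q] = s
    G[q, p] = -s
    return G

def qr_givens(A):
    """QR factorization, annihilating subdiagonal entries column by column."""
    R = np.array(A, dtype=float)
    n, m = R.shape
    Q = np.eye(n)
    for j in range(m):                 # columns
        for i in range(j + 1, n):      # rows below the diagonal
            if R[i, j] != 0.0:
                # choose theta so that (Omega R)[i, j] = -sin(t) R[j,j] + cos(t) R[i,j] = 0
                theta = np.arctan2(R[i, j], R[j, j])
                G = givens(n, j, i, theta)
                R = G @ R
                Q = Q @ G.T            # accumulate Q = Omega_1^T ... Omega_k^T
    return Q, R

A = np.array([[4.0, 1.0], [1.0, 4.0]])
Q, R = qr_givens(A)
# Q is orthogonal, R upper triangular, and Q R = A
```

Each rotation touches only rows $p$ and $q$ of the current matrix, which is what makes Givens rotations attractive for sparse or structured problems.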

