### Machine Learning Homework Help | PCA Assignment Help | Matrix Analysis

statistics-lab™ supports your study-abroad journey. We have built a solid reputation for principal component analysis (PCA) assignment help, guaranteeing reliable, high-quality, and original statistics writing services. Our experts have extensive experience with PCA assignments of every kind.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Gradient of a Real Function with Respect to a Real Vector

Define the gradient operator $\nabla_{\boldsymbol{x}}$ with respect to an $n \times 1$ vector $\boldsymbol{x}$ as
$$\nabla_{\boldsymbol{x}}=\left[\frac{\partial}{\partial x_{1}}, \quad \frac{\partial}{\partial x_{2}}, \quad \cdots, \quad \frac{\partial}{\partial x_{n}}\right]^{\mathrm{T}}=\frac{\partial}{\partial \boldsymbol{x}}.$$
Then the gradient of a real scalar function $f(\boldsymbol{x})$ with respect to $\boldsymbol{x}$ is an $n \times 1$ column vector, defined as
$$\nabla_{x} f(\boldsymbol{x})=\left[\frac{\partial f(\boldsymbol{x})}{\partial x_{1}}, \quad \frac{\partial f(\boldsymbol{x})}{\partial x_{2}}, \quad \cdots, \quad \frac{\partial f(\boldsymbol{x})}{\partial x_{n}}\right]^{\mathrm{T}}=\frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}}$$
The negative of the gradient direction is called the gradient flow of the variable $\boldsymbol{x}$, written as
$$\dot{x}=-\nabla_{x} f(x)$$
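As a concrete check, the gradient can be approximated entrywise by central differences. The sketch below (assuming NumPy; `numerical_gradient` is an illustrative helper, not part of the text) recovers $\nabla_{\boldsymbol{x}}(\boldsymbol{x}^{\mathrm{T}}\boldsymbol{x})=2\boldsymbol{x}$, from which the gradient flow is $\dot{\boldsymbol{x}}=-2\boldsymbol{x}$:

```python
import numpy as np

def numerical_gradient(fun, x, h=1e-6):
    """Central-difference approximation of the n x 1 gradient defined above."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x = np.array([1.0, -2.0, 3.0])
f = lambda v: v @ v                 # f(x) = x^T x, whose gradient is 2x
grad = numerical_gradient(f, x)
assert np.allclose(grad, 2 * x, atol=1e-5)

flow = -grad                        # gradient flow direction: xdot = -grad f(x)
```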
The gradient of an $m$-dimensional row vector function $\boldsymbol{f}(\boldsymbol{x})=\left[f_{1}(\boldsymbol{x}), f_{2}(\boldsymbol{x}), \ldots, f_{m}(\boldsymbol{x})\right]$ with respect to the $n \times 1$ real vector $\boldsymbol{x}$ is an $n \times m$ matrix, defined as
$$\frac{\partial \boldsymbol{f}(\boldsymbol{x})}{\partial \boldsymbol{x}}=\left[\begin{array}{cccc} \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{1}} & \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{1}} & \cdots & \frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{1}} \\ \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{2}} & \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{2}} & \cdots & \frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{2}} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{n}} & \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{n}} & \cdots & \frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{n}} \end{array}\right]=\nabla_{\boldsymbol{x}} \boldsymbol{f}(\boldsymbol{x}) .$$
Some properties of gradient operations can be summarized as follows:
(1) If $f(\boldsymbol{x})=c$ is a constant, then the gradient $\frac{\partial c}{\partial \boldsymbol{x}}=\boldsymbol{O}$.
(2) Linearity rule: If $f(\boldsymbol{x})$ and $g(\boldsymbol{x})$ are real functions of the vector $\boldsymbol{x}$, and $c_{1}$ and $c_{2}$ are real constants, then
$$\frac{\partial\left[c_{1} f(\boldsymbol{x})+c_{2} g(\boldsymbol{x})\right]}{\partial \boldsymbol{x}}=c_{1} \frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}}+c_{2} \frac{\partial g(\boldsymbol{x})}{\partial \boldsymbol{x}}$$
(3) Product rule: If $f(\boldsymbol{x})$ and $g(\boldsymbol{x})$ are real functions of the vector $\boldsymbol{x}$, then
$$\frac{\partial f(\boldsymbol{x}) g(\boldsymbol{x})}{\partial \boldsymbol{x}}=g(\boldsymbol{x}) \frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}}+f(\boldsymbol{x}) \frac{\partial g(\boldsymbol{x})}{\partial \boldsymbol{x}}$$
(4) Quotient rule: If $g(\boldsymbol{x}) \neq 0$, then
$$\frac{\partial f(\boldsymbol{x}) / g(\boldsymbol{x})}{\partial \boldsymbol{x}}=\frac{1}{g^{2}(\boldsymbol{x})}\left[g(\boldsymbol{x}) \frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}}-f(\boldsymbol{x}) \frac{\partial g(\boldsymbol{x})}{\partial \boldsymbol{x}}\right] .$$
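These rules can be spot-checked numerically. Below is a minimal sketch (assuming NumPy; the helper `num_grad` and the test functions $f(\boldsymbol{x})=\boldsymbol{x}^{\mathrm{T}}\boldsymbol{x}$, $g(\boldsymbol{x})=1+\boldsymbol{x}^{\mathrm{T}}\boldsymbol{x}$ are illustrative choices, not from the text) that verifies the product and quotient rules against finite differences:

```python
import numpy as np

def num_grad(fun, x, h=1e-6):
    """Central-difference approximation of the gradient of a scalar function."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x = np.array([0.5, -1.0, 2.0])
f = lambda v: v @ v            # gradient of f is 2x
g = lambda v: 1.0 + v @ v      # gradient of g is 2x, and g(x) != 0 everywhere

# Product rule: grad(fg) = g * grad(f) + f * grad(g)
prod = g(x) * (2 * x) + f(x) * (2 * x)
assert np.allclose(num_grad(lambda v: f(v) * g(v), x), prod, atol=1e-4)

# Quotient rule: grad(f/g) = [g * grad(f) - f * grad(g)] / g^2
quot = (g(x) * (2 * x) - f(x) * (2 * x)) / g(x) ** 2
assert np.allclose(num_grad(lambda v: f(v) / g(v), x), quot, atol=1e-4)
```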
(5) Chain rule: If $\boldsymbol{y}(\boldsymbol{x})$ is an $n \times 1$ vector-valued function of $\boldsymbol{x}$, then
$$\frac{\partial f(\boldsymbol{y}(\boldsymbol{x}))}{\partial \boldsymbol{x}}=\frac{\partial \boldsymbol{y}^{T}(\boldsymbol{x})}{\partial \boldsymbol{x}} \frac{\partial f(\boldsymbol{y})}{\partial \boldsymbol{y}}$$
where $\frac{\partial y^{\mathrm{T}}(x)}{\partial x}$ is an $n \times n$ matrix.
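The chain rule can be checked with a linear inner function. In the sketch below (NumPy assumed; the choices $\boldsymbol{y}(\boldsymbol{x})=\boldsymbol{B}\boldsymbol{x}$ and $f(\boldsymbol{y})=\boldsymbol{y}^{\mathrm{T}}\boldsymbol{y}$ are illustrative), $\partial \boldsymbol{y}^{\mathrm{T}}(\boldsymbol{x})/\partial \boldsymbol{x}=\boldsymbol{B}^{\mathrm{T}}$ and $\partial f/\partial \boldsymbol{y}=2\boldsymbol{y}$, so the chain rule gives $2\boldsymbol{B}^{\mathrm{T}}\boldsymbol{B}\boldsymbol{x}$, matching the direct gradient of $f(\boldsymbol{y}(\boldsymbol{x}))=\boldsymbol{x}^{\mathrm{T}}\boldsymbol{B}^{\mathrm{T}}\boldsymbol{B}\boldsymbol{x}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)

y = B @ x                      # inner function y(x) = Bx
chain = B.T @ (2 * y)          # (dy^T/dx)(df/dy) = B^T . 2y
direct = 2 * B.T @ B @ x       # gradient of x^T B^T B x, computed directly
assert np.allclose(chain, direct)
```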

## Gradient Matrix of a Real Function

The gradient of a real function $f(\boldsymbol{A})$ with respect to an $m \times n$ real matrix $\boldsymbol{A}$ is an $m \times n$ matrix, called the gradient matrix, defined as
$$\frac{\partial f(\boldsymbol{A})}{\partial \boldsymbol{A}}=\left[\begin{array}{cccc} \frac{\partial f(\boldsymbol{A})}{\partial A_{11}} & \frac{\partial f(\boldsymbol{A})}{\partial A_{12}} & \cdots & \frac{\partial f(\boldsymbol{A})}{\partial A_{1 n}} \\ \frac{\partial f(\boldsymbol{A})}{\partial A_{21}} & \frac{\partial f(\boldsymbol{A})}{\partial A_{22}} & \cdots & \frac{\partial f(\boldsymbol{A})}{\partial A_{2 n}} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f(\boldsymbol{A})}{\partial A_{m 1}} & \frac{\partial f(\boldsymbol{A})}{\partial A_{m 2}} & \cdots & \frac{\partial f(\boldsymbol{A})}{\partial A_{m n}} \end{array}\right]=\nabla_{\boldsymbol{A}} f(\boldsymbol{A})$$
where $A_{i j}$ is the element of matrix $A$ on its $i$ th row and $j$ th column.
Some properties of the gradient of a real function with respect to a matrix can be summarized as follows:
(1) If $f(\boldsymbol{A})=c$ is a constant, where $\boldsymbol{A}$ is an $m \times n$ matrix, then $\frac{\partial c}{\partial \boldsymbol{A}}=\boldsymbol{O}_{m \times n}$.
(2) Linearity rule: If $f(\boldsymbol{A})$ and $g(\boldsymbol{A})$ are real functions of the matrix $\boldsymbol{A}$, and $c_{1}$ and $c_{2}$ are real constants, then
$$\frac{\partial\left[c_{1} f(\boldsymbol{A})+c_{2} g(\boldsymbol{A})\right]}{\partial \boldsymbol{A}}=c_{1} \frac{\partial f(\boldsymbol{A})}{\partial \boldsymbol{A}}+c_{2} \frac{\partial g(\boldsymbol{A})}{\partial \boldsymbol{A}} .$$
(3) Product rule: If $f(\boldsymbol{A})$ and $g(\boldsymbol{A})$ are real functions of the matrix $\boldsymbol{A}$, then
$$\frac{\partial f(\boldsymbol{A}) g(\boldsymbol{A})}{\partial \boldsymbol{A}}=g(\boldsymbol{A}) \frac{\partial f(\boldsymbol{A})}{\partial \boldsymbol{A}}+f(\boldsymbol{A}) \frac{\partial g(\boldsymbol{A})}{\partial \boldsymbol{A}}$$
(4) Quotient rule: If $g(\boldsymbol{A}) \neq 0$, then
$$\frac{\partial f(\boldsymbol{A}) / g(\boldsymbol{A})}{\partial \boldsymbol{A}}=\frac{1}{g^{2}(\boldsymbol{A})}\left[g(\boldsymbol{A}) \frac{\partial f(\boldsymbol{A})}{\partial \boldsymbol{A}}-f(\boldsymbol{A}) \frac{\partial g(\boldsymbol{A})}{\partial \boldsymbol{A}}\right]$$
(5) Chain rule: Let $\boldsymbol{A}$ be an $m \times n$ matrix, and let $y=f(\boldsymbol{A})$ and $g(y)$ be real functions of the matrix $\boldsymbol{A}$ and the scalar $y$, respectively. Then
$$\frac{\partial g(f(A))}{\partial A}=\frac{d g(y)}{d y} \frac{\partial f(A)}{\partial A}$$
(6) If $\boldsymbol{A} \in \Re^{m \times n}, \boldsymbol{x} \in \Re^{m \times 1}, \boldsymbol{y} \in \Re^{n \times 1}$, then
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial \boldsymbol{A}}=\boldsymbol{x} \boldsymbol{y}^{\mathrm{T}}$$
(7) If $\boldsymbol{A} \in \Re^{n \times n}$ is nonsingular and $\boldsymbol{x} \in \Re^{n \times 1}, \boldsymbol{y} \in \Re^{n \times 1}$, then
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A}^{-1} \boldsymbol{y}}{\partial \boldsymbol{A}}=-\boldsymbol{A}^{-\mathrm{T}} \boldsymbol{x} \boldsymbol{y}^{\mathrm{T}} \boldsymbol{A}^{-\mathrm{T}}$$
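Both identities, $\partial(\boldsymbol{x}^{\mathrm{T}}\boldsymbol{A}\boldsymbol{y})/\partial\boldsymbol{A}=\boldsymbol{x}\boldsymbol{y}^{\mathrm{T}}$ and $\partial(\boldsymbol{x}^{\mathrm{T}}\boldsymbol{A}^{-1}\boldsymbol{y})/\partial\boldsymbol{A}=-\boldsymbol{A}^{-\mathrm{T}}\boldsymbol{x}\boldsymbol{y}^{\mathrm{T}}\boldsymbol{A}^{-\mathrm{T}}$, can be verified entrywise with finite differences. A sketch (NumPy assumed; `matrix_gradient` and the diagonal shift used to keep $\boldsymbol{A}$ nonsingular are illustrative devices, not part of the text):

```python
import numpy as np

def matrix_gradient(fun, A, h=1e-6):
    """Entrywise central-difference gradient of a scalar function of a matrix."""
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A)
            E[i, j] = h
            G[i, j] = (fun(A + E) - fun(A - E)) / (2 * h)
    return G

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 5 * np.eye(3)   # diagonally shifted, hence nonsingular
x, y = rng.standard_normal(3), rng.standard_normal(3)

# Property (6): d(x^T A y)/dA = x y^T
g6 = matrix_gradient(lambda M: x @ M @ y, A)
assert np.allclose(g6, np.outer(x, y), atol=1e-5)

# Property (7): d(x^T A^{-1} y)/dA = -A^{-T} x y^T A^{-T}
g7 = matrix_gradient(lambda M: x @ np.linalg.inv(M) @ y, A)
Ai = np.linalg.inv(A)
assert np.allclose(g7, -Ai.T @ np.outer(x, y) @ Ai.T, atol=1e-5)
```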

## Gradient Matrix of the Trace Function

Here, we summarize some properties of the gradient matrix of trace functions.
Properties (1)-(3) give the gradient matrices of the trace of a single matrix.
(1) If $W$ is an $m \times m$ matrix, then
$$\frac{\partial \operatorname{tr}(\boldsymbol{W})}{\partial \boldsymbol{W}}=\boldsymbol{I}_{m}$$
(2) If an $m \times m$ matrix $\boldsymbol{W}$ is invertible, then
$$\frac{\partial \mathrm{tr}\left(\boldsymbol{W}^{-1}\right)}{\partial \boldsymbol{W}}=-\left(\boldsymbol{W}^{-2}\right)^{\mathrm{T}}$$
(3) For the outer product of two vectors, it holds that
$$\frac{\partial \operatorname{tr}\left(\boldsymbol{x} \boldsymbol{y}^{\mathrm{T}}\right)}{\partial \boldsymbol{x}}=\frac{\partial \operatorname{tr}\left(\boldsymbol{y} \boldsymbol{x}^{\mathrm{T}}\right)}{\partial \boldsymbol{x}}=\boldsymbol{y}$$
Properties (4)-(7) give the gradient matrices of the trace of a product of two matrices.
(4) If $W \in \Re^{m \times n}, A \in \Re^{n \times m}$, then
$$\frac{\partial \operatorname{tr}(\boldsymbol{W} \boldsymbol{A})}{\partial \boldsymbol{W}}=\frac{\partial \mathrm{tr}(\boldsymbol{A} \boldsymbol{W})}{\partial \boldsymbol{W}}=\boldsymbol{A}^{\mathrm{T}} .$$
(5) If $\boldsymbol{W} \in \Re^{m \times n}, \boldsymbol{A} \in \Re^{m \times n}$, then
$$\frac{\partial \mathrm{tr}\left(\boldsymbol{W}^{\mathrm{T}} \boldsymbol{A}\right)}{\partial \boldsymbol{W}}=\frac{\partial \mathrm{tr}\left(\boldsymbol{A} \boldsymbol{W}^{\mathrm{T}}\right)}{\partial \boldsymbol{W}}=\boldsymbol{A} .$$
(6) If $W \in \Re^{m \times n}$, then
$$\frac{\partial \mathrm{tr}\left(\boldsymbol{W} \boldsymbol{W}^{\mathrm{T}}\right)}{\partial \boldsymbol{W}}=\frac{\partial \mathrm{tr}\left(\boldsymbol{W}^{\mathrm{T}} \boldsymbol{W}\right)}{\partial \boldsymbol{W}}=2 \boldsymbol{W}$$
(7) If $\boldsymbol{W} \in \Re^{m \times m}$ is square, then
$$\frac{\partial \mathrm{tr}\left(\boldsymbol{W}^{2}\right)}{\partial \boldsymbol{W}}=\frac{\partial \mathrm{tr}(\boldsymbol{W} \boldsymbol{W})}{\partial \boldsymbol{W}}=2 \boldsymbol{W}^{\mathrm{T}} .$$
(8) If $\boldsymbol{W}, \boldsymbol{A} \in \Re^{m \times m}$ and $\boldsymbol{W}$ is nonsingular, then
$$\frac{\partial \mathrm{tr}\left(\boldsymbol{A} \boldsymbol{W}^{-1}\right)}{\partial \boldsymbol{W}}=-\left(\boldsymbol{W}^{-1} \boldsymbol{A} \boldsymbol{W}^{-1}\right)^{\mathrm{T}} .$$
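Several of these trace identities can also be confirmed with an entrywise finite-difference check. The sketch below (NumPy assumed; `matrix_gradient` and the diagonal shift keeping $\boldsymbol{W}$ nonsingular are illustrative devices) verifies properties (4), (6), and (8):

```python
import numpy as np

def matrix_gradient(fun, W, h=1e-6):
    """Entrywise central-difference gradient of a scalar function of a matrix."""
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            E = np.zeros_like(W)
            E[i, j] = h
            G[i, j] = (fun(W + E) - fun(W - E)) / (2 * h)
    return G

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
A = rng.standard_normal((4, 3))

# Property (4): d tr(WA)/dW = A^T
assert np.allclose(matrix_gradient(lambda M: np.trace(M @ A), W), A.T, atol=1e-5)

# Property (6): d tr(W W^T)/dW = 2W
assert np.allclose(matrix_gradient(lambda M: np.trace(M @ M.T), W), 2 * W, atol=1e-4)

# Property (8): d tr(A W^{-1})/dW = -(W^{-1} A W^{-1})^T, for square nonsingular W
Ws = rng.standard_normal((3, 3)) + 5 * np.eye(3)
As = rng.standard_normal((3, 3))
Wi = np.linalg.inv(Ws)
g8 = matrix_gradient(lambda M: np.trace(As @ np.linalg.inv(M)), Ws)
assert np.allclose(g8, -(Wi @ As @ Wi).T, atol=1e-5)
```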
