### Principal Component Analysis Assignment Help | STAT3888

statistics-lab™ supports students throughout their studies abroad. We have built a solid reputation for Principal Component Analysis assignment help, guaranteeing reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with all kinds of Principal Component Analysis coursework.

• Statistical Inference
• Statistical Computing
• Advanced Probability Theory
• Advanced Mathematical Statistics
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Kernel PCA methodology

KPCA is a nonlinear extension of classical PCA that uses methods inspired by statistical learning theory. We briefly describe the KPCA method of Schölkopf et al. (1998).

Given a set of observations $\mathbf{x}_{i} \in \mathbb{R}^{n}, i=1, \ldots, m$, consider a dot product space $F$ related to the input space by a map $\phi: \mathbb{R}^{n} \rightarrow F$, which is possibly nonlinear. The feature space $F$ could have an arbitrarily large, and possibly infinite, dimension. Hereafter upper case characters are used for elements of $F$, while lower case characters denote elements of $\mathbb{R}^{n}$. We assume that we are dealing with centered data, $\sum_{i=1}^{m} \phi\left(\mathbf{x}_{i}\right)=0$. In $F$ the covariance matrix takes the form
$$\mathbf{C}=\frac{1}{m} \sum_{j=1}^{m} \phi\left(\mathbf{x}_{j}\right) \phi\left(\mathbf{x}_{j}\right)^{\top}.$$
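In practice the centering assumption can be enforced without ever computing $\phi$ explicitly: double-centering the kernel matrix is equivalent to subtracting the feature-space mean from every $\phi(\mathbf{x}_{i})$. A minimal NumPy sketch (the linear-kernel sanity check is our own illustration, not part of the original derivation):

```python
import numpy as np

def center_kernel(K):
    """Double-center K; equivalent to replacing each phi(x_i) by
    phi(x_i) - (1/m) * sum_j phi(x_j) in the feature space F."""
    m = K.shape[0]
    J = np.ones((m, m)) / m
    return K - J @ K - K @ J + J @ K @ J

# Sanity check with a linear kernel, where phi is the identity:
# the centered kernel matrix must equal the Gram matrix of the
# mean-centered data.
X = np.random.default_rng(0).normal(size=(5, 3))
Kc = center_kernel(X @ X.T)
Xc = X - X.mean(axis=0)
```

The same double-centering reappears below as the factor $\left(\mathbf{I}_{m}-\frac{1}{m}\mathbf{1}_{m}\mathbf{1}_{m}^{\top}\right)$.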
We have to find eigenvalues $\lambda \geq 0$ and nonzero eigenvectors $\mathbf{V} \in F \backslash\{0\}$ satisfying
$$\mathbf{C V}=\lambda \mathbf{V}.$$

As is well known, all solutions $\mathbf{V}$ with $\lambda \neq 0$ lie in the span of $\left\{\phi\left(\mathbf{x}_{i}\right)\right\}_{i=1}^{m}$. This has two consequences: first, we may instead consider the set of equations
$$\left\langle\phi\left(\mathbf{x}_{k}\right), \mathbf{C V}\right\rangle=\lambda\left\langle\phi\left(\mathbf{x}_{k}\right), \mathbf{V}\right\rangle,$$
for all $k=1, \ldots, m$; and second, there exist coefficients $\alpha_{i}, i=1, \ldots, m$, such that
$$\mathbf{V}=\sum_{i=1}^{m} \alpha_{i} \phi\left(\mathbf{x}_{i}\right).$$
Combining (1) and (2) we get the dual representation of the eigenvalue problem
$$\frac{1}{m} \sum_{i=1}^{m} \alpha_{i}\left\langle\phi\left(\mathbf{x}_{k}\right), \sum_{j=1}^{m} \phi\left(\mathbf{x}_{j}\right)\left\langle\phi\left(\mathbf{x}_{j}\right), \phi\left(\mathbf{x}_{i}\right)\right\rangle\right\rangle=\lambda \sum_{i=1}^{m} \alpha_{i}\left\langle\phi\left(\mathbf{x}_{k}\right), \phi\left(\mathbf{x}_{i}\right)\right\rangle,$$
for all $k=1, \ldots, m$. Defining an $m \times m$ matrix $K$ by $K_{i j}:=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle$, this reads
$$K^{2} \alpha=m \lambda K \alpha.$$
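All solutions of $K^{2}\alpha = m\lambda K\alpha$ that matter for the expansion of $\mathbf{V}$ can be obtained from the simpler problem $K\alpha = m\lambda\alpha$, so KPCA reduces to diagonalizing the (centered) kernel matrix and normalizing each $\alpha$ so that $\langle\mathbf{V},\mathbf{V}\rangle = 1$. A minimal sketch, assuming an RBF kernel (the kernel choice, `gamma`, `r`, and the function name are our own, not from the original text):

```python
import numpy as np

def kernel_pca(X, gamma=1.0, r=2):
    """Sketch of KPCA: RBF kernel, feature-space centering, dual
    eigenproblem K alpha = m lambda alpha, and projection of the
    training points onto the first r kernel principal components."""
    m = X.shape[0]
    # RBF kernel: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    # center in feature space
    J = np.ones((m, m)) / m
    Kc = K - J @ K - K @ J + J @ K @ J
    # eigh returns ascending eigenvalues; flip to descending order
    mu, vecs = np.linalg.eigh(Kc)
    mu, vecs = mu[::-1], vecs[:, ::-1]
    # <V,V> = mu * <alpha,alpha>, so alpha = v / sqrt(mu) gives unit V
    alphas = vecs[:, :r] / np.sqrt(mu[:r])
    # scores of the training points: <V_k, phi(x_i)> = (Kc alphas)_ik
    return Kc @ alphas, mu[:r] / m   # lambda = mu / m

Z, lam = kernel_pca(np.random.default_rng(1).normal(size=(20, 3)))
```

With this normalization the score variances recover the eigenvalues: $\frac{1}{m} Z^{\top} Z = \operatorname{diag}(\lambda_{1}, \ldots, \lambda_{r})$.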

## Adding input variable information into Kernel PCA

To obtain interpretability we add supplementary information to the KPCA representation. We have developed a procedure to project any given input variable onto the subspace spanned by the eigenvectors (9).

We can consider our observations to be realizations of the random vector $X=\left(X_{1}, \ldots, X_{n}\right)$. To represent the prominence of the input variable $X_{k}$ in the KPCA, we take a set of points of the form $\mathbf{y}=\mathbf{a}+s \mathbf{e}_{k} \in \mathbb{R}^{n}$, $s \in \mathbb{R}$, where $\mathbf{e}_{k}=(0, \ldots, 1, \ldots, 0) \in \mathbb{R}^{n}$ has its $k$-th component equal to 1 and all others equal to 0. Then we can compute the projections of the images $\phi(\mathbf{y})$ of these points onto the subspace spanned by the eigenvectors (9).

Taking into account equation (11) the induced curve in the eigenspace expressed in matrix form is given by the row vector:
$$\sigma(s)_{1 \times r}=\left(\mathbf{Z}_{s}^{\top}-\frac{1}{m} \mathbf{1}_{m}^{\top} K\right)\left(\mathbf{I}_{m}-\frac{1}{m} \mathbf{1}_{m} \mathbf{1}_{m}^{\top}\right) \tilde{\mathbf{V}}_{r}$$
where $\mathbf{Z}_{s}$ is of the form (10). In addition, we can represent directions of maximum variation of $\sigma(s)$ associated with the variable $X_{k}$ by projecting the tangent vector at $s=0$. In matrix form, we have
$$\left.\frac{d \sigma}{d s}\right|_{s=0}=\left.\frac{d \mathbf{Z}_{s}^{\top}}{d s}\right|_{s=0}\left(\mathbf{I}_{m}-\frac{1}{m} \mathbf{1}_{m} \mathbf{1}_{m}^{\top}\right) \tilde{\mathbf{V}}$$
with
$$\left.\frac{d \mathbf{Z}_{s}^{\top}}{d s}\right|_{s=0}=\left(\left.\frac{d \mathbf{Z}_{s}^{1}}{d s}\right|_{s=0}, \ldots,\left.\frac{d \mathbf{Z}_{s}^{m}}{d s}\right|_{s=0}\right)^{\top}$$
and
$$\begin{aligned} \left.\frac{d \mathbf{Z}_{s}^{i}}{d s}\right|_{s=0} &=\left.\frac{d K\left(\mathbf{y}, \mathbf{x}_{i}\right)}{d s}\right|_{s=0} \\ &=\left.\left(\sum_{t=1}^{n} \frac{\partial K\left(\mathbf{y}, \mathbf{x}_{i}\right)}{\partial y_{t}} \frac{d y_{t}}{d s}\right)\right|_{s=0} \\ &=\left.\sum_{t=1}^{n} \frac{\partial K\left(\mathbf{y}, \mathbf{x}_{i}\right)}{\partial y_{t}}\right|_{\mathbf{y}=\mathbf{a}} \delta_{t}^{k}=\left.\frac{\partial K\left(\mathbf{y}, \mathbf{x}_{i}\right)}{\partial y_{k}}\right|_{\mathbf{y}=\mathbf{a}} \end{aligned}$$
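For a concrete kernel the last expression is available in closed form; e.g., for the RBF kernel $K(\mathbf{y}, \mathbf{x})=\exp\left(-\gamma\|\mathbf{y}-\mathbf{x}\|^{2}\right)$ one has $\partial K\left(\mathbf{y}, \mathbf{x}_{i}\right) / \partial y_{k}=-2 \gamma\left(y_{k}-x_{i k}\right) K\left(\mathbf{y}, \mathbf{x}_{i}\right)$. A sketch of the projected tangent under that assumption (the function name and argument names are ours; `V_r` plays the role of $\tilde{\mathbf{V}}$, the matrix of retained eigenvectors):

```python
import numpy as np

def variable_tangent(X, a, k, V_r, gamma=1.0):
    """Projected tangent d(sigma)/ds at s = 0 for variable X_k under an
    RBF kernel, following the chain-rule expression above."""
    m = X.shape[0]
    Ka = np.exp(-gamma * np.sum((a - X) ** 2, axis=1))   # K(a, x_i), i = 1..m
    dZ = -2.0 * gamma * (a[k] - X[:, k]) * Ka            # dZ_s^i/ds at s = 0
    H = np.eye(m) - np.ones((m, m)) / m                  # I_m - (1/m) 1_m 1_m^T
    return dZ @ H @ V_r                                  # 1 x r row vector

# finite-difference check of the analytic tangent, with a as the sample mean
X = np.random.default_rng(2).normal(size=(8, 3))
a = X.mean(axis=0)
V_r = np.linalg.qr(np.random.default_rng(3).normal(size=(8, 2)))[0]
t = variable_tangent(X, a, 1, V_r, gamma=0.5)
```

A numerical check against central differences of $\mathbf{Z}_{s}$ along $\mathbf{e}_{k}$ confirms the analytic derivative.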

