### Basis for Subspace Tracking


## Extension or Generalization of PCA

The algorithms mentioned above focus on eigenvector extraction or eigen-subspace tracking with noncoupled rules. However, most noncoupled rules suffer from a serious speed-stability problem [28]. In noncoupled PCA rules, the eigen motion in all directions depends mainly on the principal eigenvalue of the covariance matrix; thus, numerical stability and fast convergence can only be achieved by guessing this eigenvalue in advance [28]. In noncoupled MCA rules, the speed of convergence depends not only on the minor eigenvalue but also on all the other eigenvalues of the covariance matrix, and if these spread over a large interval, no learning rate may exist that still guarantees numerical stability and sufficient convergence speed in all eigen directions. The problem is therefore even more severe for MCA rules. To solve this common problem, Moller proposed coupled PCA and coupled MCA algorithms based on a special information criterion [28]. In coupled rules, the eigen pair (eigenvector and eigenvalue) is estimated simultaneously by coupled equations, and the speed of convergence depends only on the eigenvalues of the Jacobian of the coupled system; the dependence on the eigenvalue spread of the covariance matrix is thus eliminated [28]. Recently, some modified coupled rules have been proposed [48].
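The coupled idea can be illustrated with a minimal numerical sketch. The rule below is illustrative only, in the spirit of coupled learning rather than Moller's exact algorithm from [28]: the eigenvector estimate and the eigenvalue estimate are updated together, and dividing the step by the current eigenvalue estimate makes the effective convergence speed roughly independent of the eigenvalue scale.

```python
import numpy as np

def coupled_pca_step(C, w, lam, gamma=0.1):
    """One coupled update: the eigenvector estimate w and the eigenvalue
    estimate lam evolve together.  Dividing the step by lam keeps the
    effective speed roughly independent of the eigenvalue scale, which is
    the motivation behind coupled rules."""
    w = w + gamma * (C @ w / lam - w)        # pull w toward C w / lam
    w = w / np.linalg.norm(w)                # keep unit length
    lam = lam + gamma * (w @ C @ w - lam)    # track the Rayleigh quotient
    return w, lam

# Toy covariance matrix with a known spectrum (top eigenvalue 6).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
C = Q @ np.diag([6.0, 3.0, 1.5, 1.0, 0.5]) @ Q.T
w, lam = Q.sum(axis=1) / np.sqrt(5), 1.0     # generic unit-norm start
for _ in range(1000):
    w, lam = coupled_pca_step(C, w, lam)
```

After the loop, `(w, lam)` converges to the principal eigen pair of `C`, with `lam` itself supplying the adaptive scaling that a noncoupled rule would need to guess in advance.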

It is well known that the generalized eigen decomposition (GED) plays an important role in various signal processing applications, e.g., data compression, feature extraction, denoising, antenna array processing, and classification. Although PCA, which is a special case of the GED problem, has been widely studied, adaptive algorithms for the GED problem are scarce. Fortunately, a few efficient online adaptive algorithms suitable for real-time applications have been proposed [49-54]. In [49], Chaterjee et al. present new adaptive algorithms to extract the generalized eigenvectors from two sequences of random vectors or matrices. Most algorithms in the literature, including [49], are gradient-based [50, 51]. The main problems of this type of algorithm are slow convergence and the difficulty of selecting an appropriate step size, which is essential: too small a value leads to slow convergence, and too large a value leads to overshooting and instability. Rao et al. [51] developed a fast RLS-like (though not truly recursive least squares) sequential algorithm for GED. In [54], by reinterpreting the GED problem as an unconstrained minimization problem through a novel cost function, and applying the projection approximation method and RLS technology to that cost function, RLS-based parallel adaptive algorithms for generalized eigen decomposition were proposed. In [55], a power-method-based algorithm was developed for tracking generalized eigenvectors when stochastic signals with unknown correlation matrices are observed. Attallah proposed a new adaptive algorithm for the generalized symmetric eigenvalue problem, which can extract the principal and minor generalized eigenvectors, as well as their corresponding subspaces, at low computational cost [56]. Recently, a fast and numerically stable adaptive algorithm for the generalized Hermitian eigenvalue problem (GHEP) was proposed and analyzed in [48].
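For reference, the GED problem seeks pairs $(\boldsymbol{w}, \lambda)$ with $\boldsymbol{R}_x \boldsymbol{w} = \lambda \boldsymbol{R}_y \boldsymbol{w}$ for two correlation matrices (the names `Rx`, `Ry` below are assumed notation). A minimal batch sketch is plain power iteration on $\boldsymbol{R}_y^{-1}\boldsymbol{R}_x$; the cited adaptive algorithms solve the same problem online, replacing the explicit matrices with running estimates.

```python
import numpy as np

def ged_principal_pair(Rx, Ry, n_iter=1000, seed=0):
    """Principal generalized eigen pair of Rx w = lam Ry w via power
    iteration on Ry^{-1} Rx, normalizing so that w^T Ry w = 1 (a batch
    sketch of the problem the adaptive algorithms solve online)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Rx.shape[0])
    Ry_inv = np.linalg.inv(Ry)
    for _ in range(n_iter):
        w = Ry_inv @ (Rx @ w)
        w = w / np.sqrt(w @ Ry @ w)          # enforce w^T Ry w = 1
    lam = w @ Rx @ w                         # generalized Rayleigh quotient
    return w, lam

# Symmetric-definite pencil with a known generalized spectrum {5, 2, 1, 0.5}.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
Ry = B @ B.T + np.eye(4)                     # symmetric positive definite
S = np.linalg.cholesky(Ry)                   # Ry = S S^T
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Rx = S @ Q @ np.diag([5.0, 2.0, 1.0, 0.5]) @ Q.T @ S.T
w, lam = ged_principal_pair(Rx, Ry)
```

The returned pair satisfies the defining equation $\boldsymbol{R}_x \boldsymbol{w} = \lambda \boldsymbol{R}_y \boldsymbol{w}$ up to numerical precision.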

Other extensions of PCA include dual-purpose algorithms [57-64], the details of which can be found in Chap. 5, and adaptive or neural-network-based SVD singular vector tracking [6, 65-70], the details of which can be found in Chap. 9.

## Concept of Subspace

Definition 1 If $\boldsymbol{S}=\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}\right\}$ is a vector subset of the vector space $\boldsymbol{V}$, then the set $\boldsymbol{W}$ of all linear combinations of $\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}$ is called the subspace spanned by $\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}$, namely $$\boldsymbol{W}=\operatorname{Span}\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}\right\}=\left\{\boldsymbol{u}: \boldsymbol{u}=\alpha_{1} \boldsymbol{u}_{1}+\alpha_{2} \boldsymbol{u}_{2}+\cdots+\alpha_{m} \boldsymbol{u}_{m}\right\}$$ where each vector $\boldsymbol{u}_{i}$ is called a generator of $\boldsymbol{W}$, and the set $\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}\right\}$ composed of all the generators is called the spanning set of the subspace. A vector subspace that comprises only the zero vector is called a trivial subspace. If the vector set $\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{m}\right\}$ is linearly independent, then it is called a basis of $\boldsymbol{W}$.

Definition 2 The number of vectors in any basis of a subspace $W$ is called the dimension of $W$, denoted by $\operatorname{dim}(W)$. If no basis of $W$ consists of finitely many linearly independent vectors, then $W$ is called an infinite-dimensional vector subspace.

Definition 3 Assume that $\boldsymbol{A}=\left[\boldsymbol{a}_{1}, \boldsymbol{a}_{2}, \ldots, \boldsymbol{a}_{n}\right] \in \boldsymbol{C}^{m \times n}$ is a complex matrix. All the linear combinations of its column vectors constitute a subspace, which is called the column space of matrix $\boldsymbol{A}$ and is denoted by $\operatorname{Col}(\boldsymbol{A})$, namely
$$\operatorname{Col}(\boldsymbol{A})=\operatorname{Span}\left\{\boldsymbol{a}_{1}, \boldsymbol{a}_{2}, \ldots, \boldsymbol{a}_{n}\right\}=\left\{\boldsymbol{y} \in \boldsymbol{C}^{m}: \boldsymbol{y}=\sum_{j=1}^{n} \alpha_{j} \boldsymbol{a}_{j}, \; \alpha_{j} \in \boldsymbol{C}\right\}$$
Row space of matrix $A$ can be defined similarly.
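These definitions can be made concrete numerically. A minimal sketch: an orthonormal basis of $\operatorname{Col}(\boldsymbol{A})$ can be read off the SVD, and $\operatorname{dim} \operatorname{Col}(\boldsymbol{A}) = \operatorname{rank}(\boldsymbol{A})$ is the number of nonzero singular values.

```python
import numpy as np

def col_space_basis(A, tol=1e-10):
    """Orthonormal basis for Col(A) via the SVD; the number of singular
    values above tol is rank(A) = dim Col(A)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))
    return U[:, :r]

# Example: the third column is the sum of the first two, so dim Col(A) = 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
Q = col_space_basis(A)
```

Projecting the columns of `A` onto the span of `Q` reproduces `A` exactly, confirming that `Q` spans the column space.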

## Subspace Tracking Method

The iterative computation of an extreme (maximal or minimal) eigen pair (eigenvalue and eigenvector) dates back to 1966 [72]. In 1980, Thompson proposed an LMS-type adaptive algorithm for estimating the eigenvector corresponding to the smallest eigenvalue of the sample covariance matrix, and provided an adaptive angle/frequency tracking algorithm by combining it with Pisarenko's harmonic estimator [14]. Sarkar et al. [73] used the conjugate gradient algorithm to track the variation of the extreme eigenvector corresponding to the smallest eigenvalue of the covariance matrix of a slowly changing signal, and proved that it converges much faster than Thompson's LMS-type algorithm. These methods were used only to track a single extreme eigenvalue and eigenvector, which limited their application, but they were later extended to eigen-subspace tracking and updating methods. In 1990, Comon and Golub [6] proposed the Lanczos method for tracking extreme singular values and singular vectors, a method originally designed for large, sparse symmetric eigenproblems $\boldsymbol{A}\boldsymbol{x}=\lambda \boldsymbol{x}$ [74].
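The minor eigen pair these early methods track can be illustrated with a simple iteration. The sketch below is shown only to fix ideas; it is neither Thompson's LMS-type rule nor the conjugate gradient method of [73], but a power iteration on $I-\gamma C$: for $0<\gamma<1/\lambda_{\max}$ its eigenvalues $1-\gamma\lambda_i$ are positive and largest for the smallest $\lambda_i$, so the iteration converges to the minor eigenvector.

```python
import numpy as np

def minor_eigvec(C, n_iter=2000, seed=0):
    """Estimate the eigenvector of C belonging to its smallest eigenvalue
    by power iteration on I - gamma*C (gamma below 1/lambda_max)."""
    gamma = 0.9 / np.linalg.norm(C, 2)       # safe step size < 1/lambda_max
    w = np.random.default_rng(seed).standard_normal(C.shape[0])
    for _ in range(n_iter):
        w = w - gamma * (C @ w)              # one step of (I - gamma*C) w
        w = w / np.linalg.norm(w)
    return w

# Toy covariance matrix whose smallest eigenvalue is 0.5.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C = Q @ np.diag([5.0, 3.0, 2.0, 0.5]) @ Q.T
w = minor_eigvec(C)
```

Replacing `C` with a running sample covariance estimate turns this batch sketch into an adaptive tracker of the kind discussed above.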

The earliest eigenvalue and eigenvector updating method was proposed by Golub in 1973 [75]. Golub's updating idea was later extended by Bunch et al. [76, 77]; the basic idea is to update the eigenvalue decomposition of the covariance matrix after every rank-one modification, locate the latent roots (eigenvalues) using the interlacing theorem, refine their positions by iterative root finding, and then update the eigenvectors accordingly. Later, Schreiber [78] introduced a transform that converts most complex-number arithmetic operations into real-number operations, and made use of Karasalo's subspace mean method [79] to further reduce the amount of computation. DeGroat and Roberts [80] developed a numerically stabilized rank-one eigen structure updating method based on mutual Gram-Schmidt orthogonalization. Yu [81] extended the rank-one eigen structure update to block update and proposed a recursive update of the eigenvalue decomposition of a covariance matrix.

The earliest adaptive signal subspace tracking method was proposed by Owsley [7] in 1978. Using the stochastic gradient method, Yang and Kaveh [18] proposed an LMS-type subspace tracking algorithm, extending Owsley's and Thompson's methods. This LMS-type algorithm has a highly parallel structure and low computational complexity. Karhunen [17] extended Owsley's idea by developing a stochastic approximation method for computing the subspace. Just as Yang and Kaveh extended Thompson's idea to develop an LMS-type subspace tracking algorithm, Fu and Dowling [45] extended Sarkar's idea to develop a subspace tracking algorithm based on conjugate gradient. Over the past twenty years, eigen-subspace tracking and updating has been an active research field. Since eigen-subspace tracking is mainly applied in real-time signal processing, these methods need to be fast algorithms.
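A minimal sketch of such an online subspace tracker follows. It is a simplified stochastic power iteration, a Hebbian step followed by QR re-orthonormalization, written in the spirit of the LMS-type trackers above rather than reproducing any specific algorithm from [17, 18, 45]; the step size `gamma` and the toy data stream are illustrative choices.

```python
import numpy as np

def track_subspace(samples, p, gamma=0.05, seed=1):
    """Track the p-dimensional principal subspace of a sample stream:
    a Hebbian step W += gamma * x y^T with y = W^T x, then a QR step to
    restore orthonormal columns (a simplified stochastic power iteration)."""
    n = samples.shape[1]
    W = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, p)))[0]
    for x in samples:
        y = W.T @ x
        W = W + gamma * np.outer(x, y)   # pull columns toward high-variance directions
        W, _ = np.linalg.qr(W)           # re-orthonormalize
    return W

# Stream whose variance is concentrated in the first two coordinates.
rng = np.random.default_rng(2)
X = rng.standard_normal((4000, 4)) * np.array([3.0, 2.0, 0.3, 0.2])
W = track_subspace(X, p=2)
P = W @ W.T                              # projector onto the tracked subspace
```

Per sample the cost is $O(np)$ plus the orthonormalization, which is the kind of low, fixed per-update complexity that makes these trackers usable in real-time processing.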
