Machine Learning Assignment Help | PCA Assignment Help | Feature Extraction



Pattern recognition and data compression are two applications that rely critically on efficient data representation [1]. The task of pattern recognition is to decide to which class of objects an observed pattern belongs, and data compression is motivated by the need to reduce the number of bits required to represent the data while incurring the smallest possible distortion [1]. In these applications, it is desirable to extract measurements that are invariant or insensitive to the variations within each class. The process of extracting such measurements is called feature extraction. In other words, feature extraction is a data-processing step that maps a high-dimensional space to a low-dimensional space with minimal information loss.

Principal component analysis (PCA) is a well-known feature extraction method, while minor component analysis (MCA) and independent component analysis (ICA) can be regarded as variants or generalizations of PCA. MCA is most useful for solving total least squares (TLS) problems, and ICA is usually used for blind signal separation (BSS).
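The MCA-TLS connection can be made concrete with a small sketch: fitting a line to data that is noisy in both coordinates (orthogonal regression) amounts to taking the minor component of the centered data as the line's normal vector. The following NumPy sketch illustrates this on synthetic data; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on the line y = 2x + 1 with noise in BOTH coordinates --
# exactly the setting where TLS (orthogonal regression) applies.
x = rng.uniform(-5, 5, 200)
pts = np.column_stack([x, 2 * x + 1])
pts += 0.1 * rng.standard_normal(pts.shape)

# Center the data and form the sample covariance matrix.
mean = pts.mean(axis=0)
C = np.cov((pts - mean).T)

# The minor component (eigenvector of the smallest eigenvalue) is the
# normal vector n = (a, b) of the TLS line a*x + b*y + c = 0.
_, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
n = eigvecs[:, 0]
c = -n @ mean

slope = -n[0] / n[1]
print(f"estimated slope: {slope:.3f}")  # close to 2
```

Unlike ordinary least squares, which minimizes vertical residuals only, this TLS fit minimizes the orthogonal distances from the points to the line.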

In the following, we briefly review PCA, PCA neural networks, and extensions or generalizations of PCA.

Machine Learning Assignment Help | PCA Assignment Help | PCA and Subspace Tracking

The principal components (PC) are the directions in which the data have the largest variances and capture most of the information content of the data. They correspond to the eigenvectors associated with the largest eigenvalues of the autocorrelation matrix of the data vectors, and expressing data vectors in terms of the PC is called PCA. Conversely, the eigenvectors that correspond to the smallest eigenvalues of the autocorrelation matrix of the data vectors are defined as the minor components (MC); the MC are the directions in which the data have the smallest variances (they represent the noise in the data). Expressing data vectors in terms of the MC is called MCA. PCA has been successfully applied in many data processing problems, such as high-resolution spectral estimation, system identification, image compression, and pattern recognition, while MCA has been applied in total least squares, moving target indication, clutter cancelation, curve and surface fitting, digital beamforming, and frequency estimation.
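As a minimal illustration of these definitions, the sketch below estimates the PC and MC of synthetic data as the extreme eigenvectors of the sample autocorrelation matrix, and checks that the projected variance is largest along the PC and smallest along the MC (data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic zero-mean data with an anisotropic covariance (rows = data vectors).
A = rng.standard_normal((3, 3))
X = rng.standard_normal((1000, 3)) @ A.T

# Sample autocorrelation matrix R = E[x x^T].
R = X.T @ X / len(X)

# eigh returns eigenvalues of a symmetric matrix in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
pc = eigvecs[:, -1]  # principal component: largest eigenvalue
mc = eigvecs[:, 0]   # minor component: smallest eigenvalue

# The data variance is largest along the PC and smallest along the MC.
var_pc = np.var(X @ pc)
var_mc = np.var(X @ mc)
print(var_pc >= var_mc)  # True
```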

The PCA or MCA is usually one-dimensional. However, in real applications, PCA or MCA is mainly multi-dimensional. The eigenvectors associated with the $r$ largest (or smallest) eigenvalues of the autocorrelation matrix of the data vectors are called the principal (or minor) components, and $r$ is referred to as the number of principal (or minor) components. The eigenvector associated with the largest (or smallest) eigenvalue of the autocorrelation matrix of the data vectors is called the largest (or smallest) component. The subspace spanned by the principal components is called the principal subspace (PS), and the subspace spanned by the minor components is called the minor subspace (MS). In some applications, we are only required to find the PS (or MS) spanned by $r$ orthonormal eigenvectors. The PS is sometimes called the signal subspace, and the MS the noise subspace. Principal and minor component analyzers of a symmetric matrix are matrix differential equations that converge on the PCs and MCs, respectively. Similarly, the principal (PSA) and minor (MSA) subspace analyzers of a symmetric matrix are matrix differential equations that converge on a matrix whose column span is the PS or MS, respectively. PCA/PSA and MCA/MSA are powerful techniques in many information processing fields. For example, PCA/PSA is a useful tool in feature extraction, data compression, pattern recognition, and time series prediction [2, 3], and MCA/MSA has been widely applied in total least squares, moving target indication, clutter cancelation, curve and surface fitting, digital beamforming, and frequency estimation [4].
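In the multi-dimensional case, orthonormal bases of the $r$-dimensional PS and MS can be read off a batch eigendecomposition directly. The sketch below (illustrative NumPy, synthetic data) extracts both bases and verifies that they are orthonormal and mutually orthogonal:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 6, 2

# Sample autocorrelation matrix of synthetic data vectors.
X = rng.standard_normal((2000, d)) @ rng.standard_normal((d, d))
R = X.T @ X / len(X)

eigvals, eigvecs = np.linalg.eigh(R)  # ascending eigenvalues

Ups = eigvecs[:, -r:]  # orthonormal basis of the principal subspace (PS)
Ums = eigvecs[:, :r]   # orthonormal basis of the minor subspace (MS)

# Each basis is orthonormal, and the PS is orthogonal to the MS.
print(np.allclose(Ups.T @ Ups, np.eye(r)))  # True
print(np.allclose(Ups.T @ Ums, 0))          # True
```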

As discussed before, the PC is the direction corresponding to the eigenvector associated with the largest eigenvalue of the autocorrelation matrix of the data vectors, and the MC is the direction corresponding to the eigenvector associated with the smallest eigenvalue. Implementations of these techniques can thus be based on batch eigenvalue decomposition (ED) of the sample correlation matrix or on singular value decomposition (SVD) of the data matrix. This approach is unsuitable for adaptive processing because it requires repeated ED/SVD, which is a very time-consuming task [5]. Thus, attempts to propose adaptive algorithms continue even though the field has been active for three decades.
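The two batch routes mentioned above coincide: the eigenvectors of the sample correlation matrix $\frac{1}{n}X^{T}X$ are exactly the right singular vectors of the data matrix $X$, and its eigenvalues equal the squared singular values divided by $n$. A small NumPy check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 4))  # rows are data vectors

# Route 1: eigendecomposition (ED) of the sample correlation matrix.
R = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(R)  # ascending order

# Route 2: singular value decomposition (SVD) of the data matrix.
_, svals, Vt = np.linalg.svd(X, full_matrices=False)  # descending order

# Same directions up to sign, and eigenvalues = singular values^2 / n.
for i in range(4):
    assert np.isclose(abs(eigvecs[:, -1 - i] @ Vt[i]), 1.0)
    assert np.isclose(eigvals[-1 - i], svals[i] ** 2 / len(X))
print("ED and SVD agree")
```

Either route costs a full decomposition per update, which is what motivates the adaptive algorithms discussed next.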

Machine Learning Assignment Help | PCA Assignment Help | PCA Neural Networks

In order to overcome the difficulty faced by ED or SVD, a number of adaptive algorithms for subspace tracking were developed in the past. Most of these techniques can be grouped into three classes [5]. In the first class, classical batch ED/SVD methods such as the QR algorithm, Jacobi rotation, power iteration, and the Lanczos method have been modified for use in adaptive processing [6-10]. In the second class, variations of Bunch's rank-one updating algorithm [11], such as subspace averaging [12, 13], have been proposed. The third class of algorithms treats the ED/SVD as a constrained or unconstrained optimization problem. Gradient-based methods [14-19], Gauss-Newton iterations [20, 21], and conjugate gradient techniques [22] can then be applied to seek the largest or smallest eigenvalues and their corresponding eigenvectors adaptively. Rank-revealing URV decomposition [23] and rank-revealing QR factorization [24] have also been proposed to track the signal or noise subspace.
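Of the classical batch methods in the first class, power iteration is the simplest to sketch: repeatedly applying the matrix to a vector and renormalizing converges to the eigenvector of the largest eigenvalue. A minimal sketch on a matrix with a known spectrum (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Symmetric matrix with the known spectrum 5 > 4 > 3 > 2 > 1;
# its top eigenvector is the first column of Q.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
R = Q @ np.diag([5.0, 4.0, 3.0, 2.0, 1.0]) @ Q.T

# Power iteration: repeatedly apply R and renormalize; the iterate
# converges to the top eigenvector at a rate governed by the ratio
# of the two largest eigenvalues (here 4/5).
w = rng.standard_normal(5)
for _ in range(200):
    w = R @ w
    w /= np.linalg.norm(w)

print(abs(w @ Q[:, 0]))  # ≈ 1
```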

Neural network approaches to PCA or MCA pursue an effective "online" approach that updates the eigen direction after each presentation of a data point. Compared with traditional algebraic approaches such as SVD, they possess many obvious advantages, for example lower computational complexity. Neural network methods are especially suited to high-dimensional data, since computation of the large covariance matrix can be avoided, and to the tracking of nonstationary data, where the covariance matrix changes slowly over time. Attempts to improve these methods and to suggest new approaches continue even though the field has been active for two decades.

In the last decades, many neural network learning algorithms were proposed to extract the PS [25-31] or the MS [4, 32-40]. In the class of PS tracking, many learning algorithms, such as Oja's subspace algorithm [41], the symmetric error correction algorithm [42], and the symmetric version of the back propagation algorithm [43], were proposed based on heuristic reasoning [44]. Afterward, some information criteria were proposed, and the corresponding algorithms, such as the LMSER algorithm [31], the projection approximation subspace tracking (PAST) algorithm [5], the conjugate gradient method [45], the Gauss-Newton method [46], and the novel information criterion (NIC) algorithm [44], were developed. These gradient-type algorithms can be claimed to be globally convergent.
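Oja's subspace algorithm admits a compact online sketch: for each sample $x$, compute $y = W^{T}x$ and update $W \leftarrow W + \eta(xy^{T} - Wyy^{T})$. The toy simulation below (step size, spectrum, and iteration count are illustrative choices, not taken from the cited works) shows the state matrix tending to an orthonormal basis of the PS:

```python
import numpy as np

rng = np.random.default_rng(5)
d, r, eta = 5, 2, 0.01

# Data with a well-separated 2-D principal subspace spanned by Q[:, :2].
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
scales = np.sqrt(np.array([5.0, 4.0, 0.3, 0.2, 0.1]))

W = 0.1 * rng.standard_normal((d, r))
for _ in range(20000):
    x = Q @ (scales * rng.standard_normal(d))  # covariance Q diag(scales^2) Q^T
    y = W.T @ x
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # Oja's subspace rule

# The columns of W tend to an orthonormal basis of the PS: the cosines of
# the principal angles between span(W) and span(Q[:, :2]) approach 1.
cosines = np.linalg.svd(Q[:, :r].T @ W, compute_uv=False)
print(cosines)
```

Note that no covariance matrix is ever formed; each update costs only $O(dr)$ operations per sample, which is the computational advantage mentioned above.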

In the class of MS tracking, many algorithms [32-40] have been proposed on the basis of feedforward neural network models. Mathew and Reddy proposed an MS algorithm based on a feedback neural network structure with a sigmoid activation function [46]. Using the inflation method, Luo and Unbehauen proposed an MSA algorithm that does not need any normalization operation [36]. Douglas et al. presented a self-stabilizing minor subspace rule that needs neither periodic normalization nor matrix inverses [40]. Chiang and Chen showed that a learning algorithm can extract multiple MCs in parallel with appropriate initialization instead of the inflation method [47]. On the basis of an information criterion, Ouyang et al. developed an adaptive MC tracker that automatically finds the MS without using the inflation method [37]. Recently, Feng et al. proposed the OJAm algorithm and extended it for tracking multiple MCs or the MS, which makes the corresponding state matrix tend to a column-orthonormal basis of the MS [35].
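To give a flavor of MC extraction, here is a deliberately simplified sketch (a generic approach, not the exact rule of any of the cited works): gradient descent on the Rayleigh quotient with explicit renormalization, which drives the weight vector toward the minor component.

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric matrix with known spectrum; the minor component is Q[:, -1].
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
R = Q @ np.diag([4.0, 3.0, 2.0, 0.5]) @ Q.T

# Minimize the Rayleigh quotient r(w) = (w^T R w) / (w^T w) by gradient
# descent with explicit renormalization; w converges to the MC.
w = rng.standard_normal(4)
w /= np.linalg.norm(w)
for _ in range(2000):
    grad = R @ w - (w @ R @ w) * w  # Rayleigh-quotient gradient on the unit sphere
    w -= 0.1 * grad
    w /= np.linalg.norm(w)

print(abs(w @ Q[:, -1]))  # ≈ 1
```

The per-step renormalization here is exactly the kind of operation that the self-stabilizing rules cited above are designed to avoid.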

