### Discriminant Analysis and Cluster Analysis


## Discriminant Analysis

In discriminant analysis, we wish to decide which population an observation in a sample comes from, when the possible populations are known in advance. For example, suppose that we know in advance that some customers are “good” and some are “bad” (so the two possible populations, good customers and bad customers, are known in advance). Given a randomly selected customer, we wish to determine whether he/she is a good customer based on his/her records (data), such as credit history $\left(x_{1}\right)$, education $\left(x_{2}\right)$, and income $\left(x_{3}\right)$. Such a discriminant analysis may be very useful for (say) screening credit card applicants, so that good applicants receive credit cards while bad applicants do not. For the convenience of statistical analysis, we may sometimes assume that the sample $\left(x_{1}, x_{2}, x_{3}\right)$, or a transformation of it, e.g., $\left(\log \left(x_{1}\right), \log \left(x_{2}\right), \log \left(x_{3}\right)\right)$, follows a multivariate normal distribution, but some discriminant analysis methods do not require this assumption.
As another example, suppose that a company reviews job applicants based on their academic records $\left(x_{1}\right)$, education $\left(x_{2}\right)$, working experience $\left(x_{3}\right)$, self-confidence $\left(x_{4}\right)$, and motivation $\left(x_{5}\right)$. All job applicants can be classified as either “suitable” or “not suitable” based on the given information. So the company may perform a discriminant analysis to separate suitable applicants from unsuitable ones based on the data all applicants provide.

More generally, suppose that there are two multivariate normally distributed populations, denoted by
population $\pi_{1}: N_{p}\left(\boldsymbol{\mu}_{1}, \Sigma_{1}\right), \quad$ population $\pi_{2}: N_{p}\left(\boldsymbol{\mu}_{2}, \Sigma_{2}\right)$.
Given an observation $\mathbf{x}=\left(x_{1}, x_{2}, \cdots, x_{p}\right)^{\mathrm{T}}$ from a sample, we want to find out whether $\mathbf{x}$ is from population $\pi_{1}$ or population $\pi_{2}$. This is a discriminant analysis. Note that, if $p=1$ (i.e., if there is only one variable of interest), then it is very easy to separate the observations. All we need to do is to choose a threshold value, say $K$, so that we can do the separation based on whether $x \leqslant K$ or $x>K$. For example, if we just wish to separate students based on their grades, it is easy to see which students are good and which are not, such as the ones with grades over $80 \%$ and the ones with grades less than $80 \%$ (so $K=80$). However, when $p \geqslant 2$, it is less straightforward to separate the observations. For example, if we wish to separate students based on their grades and their music skills, then it may be hard to do the separation, since a student may have very good grades but poor music skills. In this case, we need more advanced statistical methods to do the separation.
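The $p=1$ case above amounts to a one-line rule. A minimal sketch, using hypothetical grades and the threshold $K=80$ from the text:

```python
# For p = 1, discrimination reduces to a threshold rule: grade > K means "good".
grades = [65, 92, 80, 87, 74]   # hypothetical grades
K = 80
labels = ["good" if g > K else "not good" for g in grades]
print(labels)  # ['not good', 'good', 'not good', 'good', 'not good']
```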

There are many methods available for discriminant analysis. For example, the following two methods are simple and useful ones:

• Likelihood method: we may choose population $\pi_{1}$ if the likelihood for $\pi_{1}$ is larger than the likelihood for $\pi_{2}$, or vice versa. This method requires a distributional assumption, such as multivariate normality.

• Mahalanobis distance method: we may consider the Mahalanobis distance between an observation $\mathbf{x}$ and the population mean $\boldsymbol{\mu}_{i}$: $$d_{i}=\sqrt{\left(\mathbf{x}-\boldsymbol{\mu}_{i}\right)^{\mathrm{T}} \Sigma^{-1}\left(\mathbf{x}-\boldsymbol{\mu}_{i}\right)}, \quad i=1,2,$$
assuming $\Sigma_{1}=\Sigma_{2}=\Sigma$. We can then choose population $\pi_{1}$ if $d_{1}<d_{2}$, or vice versa. This method does not require a distributional assumption.
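The two rules above can be sketched as follows; the means, covariance matrix, and observation below are hypothetical illustrations, not from the text (with a common covariance matrix, the two rules give the same decision):

```python
import numpy as np

def mvn_density(x, mu, Sigma):
    """Multivariate normal density N_p(mu, Sigma) evaluated at x."""
    p = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.inv(Sigma) @ diff
    return np.exp(-0.5 * quad) / ((2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(Sigma)))

def likelihood_rule(x, mu1, S1, mu2, S2):
    """Choose the population with the larger normal likelihood
    (allows unequal covariance matrices)."""
    return 1 if mvn_density(x, mu1, S1) > mvn_density(x, mu2, S2) else 2

def mahalanobis_rule(x, mu1, mu2, Sigma):
    """Choose the population with the smaller Mahalanobis distance,
    assuming a common covariance matrix Sigma."""
    Sinv = np.linalg.inv(Sigma)
    d1 = np.sqrt((x - mu1) @ Sinv @ (x - mu1))
    d2 = np.sqrt((x - mu2) @ Sinv @ (x - mu2))
    return 1 if d1 < d2 else 2

# Hypothetical populations with p = 2 variables
mu1, mu2 = np.array([0.0, 0.0]), np.array([3.0, 3.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
x = np.array([0.5, 1.0])

print(likelihood_rule(x, mu1, Sigma, mu2, Sigma))  # both rules pick population 1
print(mahalanobis_rule(x, mu1, mu2, Sigma))
```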

## Discriminant analysis for categorical data

The discriminant analysis methods discussed so far assume that all the variables or data are continuous. When some variables are categorical or discrete, the above methods cannot be used, since means and covariance matrices are no longer meaningful for categorical variables. When some variables are categorical, a simple approach is to use logistic regression models for discriminant analysis, as illustrated below. Note that a logistic regression model is a generalized linear model, which will be described in detail in Chapter 9.

As an example, consider the case of two populations. Let $y=1$ if an observation $\mathbf{x}=\left(x_{1}, x_{2}, \cdots, x_{p}\right)^{\mathrm{T}}$ is from population 1 and $y=0$ if the observation is from population 2. Then, we can consider the following logistic regression model:
$$\log \frac{P(y=1)}{1-P(y=1)}=\beta_{0}+\beta_{1} x_{1}+\cdots+\beta_{p} x_{p},$$
where some $x_{j}$'s may be categorical and some may be continuous. Given data, the above logistic regression model can be fit to the data. Then, we can estimate the probability $P(y=1)$ based on the fitted model. For observation $\mathbf{x}_{i}$, if the estimated probability $\widehat{P}\left(y_{i}=1\right)>0.5$, observation $\mathbf{x}_{i}$ is more likely from population 1; otherwise it is more likely from population 2. This method can be extended to more than two populations.
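A minimal sketch of this classifier, fitting the logistic regression by plain gradient ascent (a simple stand-in for a proper GLM routine) on hypothetical two-population data; all data and function names below are illustrative assumptions:

```python
import numpy as np

def fit_logistic(X, y, steps=5000, lr=0.1):
    """Estimate beta in log-odds = beta0 + beta1*x1 + ... + betap*xp
    by gradient ascent on the log-likelihood (a minimal sketch)."""
    Xd = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))    # current P(y=1 | x)
        beta += lr * Xd.T @ (y - p) / len(y)    # score function step
    return beta

def classify(X, beta):
    """Assign to population 1 if estimated P(y=1) > 0.5, else population 2."""
    Xd = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))
    return np.where(p > 0.5, 1, 2)

# Hypothetical data: one continuous and one binary (categorical) predictor
rng = np.random.default_rng(0)
Xpop1 = np.column_stack([rng.normal(3, 1, 50), rng.integers(0, 2, 50)])  # y = 1
Xpop2 = np.column_stack([rng.normal(0, 1, 50), rng.integers(0, 2, 50)])  # y = 0
X = np.vstack([Xpop1, Xpop2])
y = np.concatenate([np.ones(50), np.zeros(50)])

beta = fit_logistic(X, y)
labels = classify(X, beta)
```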

## Cluster Analysis

The goal of a cluster analysis is to identify homogeneous groups in the data by grouping observations based on their similarities or dissimilarities, i.e., similar observations are assigned to the same group. In other words, the idea is to partition all observations into subgroups or clusters (or populations), so that observations in the same cluster have similar characteristics.

For example, a marketing professional may want to partition all consumers into subgroups or clusters so that consumers in the same subgroup or cluster have similar buying habits. The partition can be based on age $\left(x_{1}\right)$, education $\left(x_{2}\right)$, income $\left(x_{3}\right)$, and monthly payments $\left(x_{4}\right)$. This is an example of cluster analysis based on four variables. The marketing professional may then design special advertising strategies for different groups/clusters of consumers. Note that here the number of subgroups or clusters is not known before the cluster analysis.

Cluster analysis is similar to discriminant analysis in the sense that both methods try to separate observations into different groups. However, in discriminant analysis, the number of clusters (or subgroups or populations) is known in advance, and the objective is to determine which cluster (or population) an observation is likely to come from. In cluster analysis, on the other hand, the number of clusters (or subgroups or populations) is not known in advance, and the objective is to find distinct clusters and determine which cluster an observation is likely to come from. In other words, in cluster analysis one needs to find out how many clusters there may be and determine the cluster membership of an observation. Therefore, statistical methods for discriminant analysis may not be directly used in cluster analysis.

In cluster analysis, our objective is to devise a classification scheme, i.e., we need to find a rule to measure the similarity or dissimilarity between any two observations so that similar observations are grouped together to form clusters. Specifically, let $\mathbf{x}=\left(x_{1}, x_{2}, \cdots, x_{p}\right)^{\mathrm{T}}$ and $\mathbf{x}^{*}=\left(x_{1}^{*}, x_{2}^{*}, \cdots, x_{p}^{*}\right)^{\mathrm{T}}$ be two observations. The similarity between them can be measured by the “distance” between them, so that the two observations are similar if the distance between them is small. For example, the Euclidean distance between $\mathbf{x}$ and $\mathbf{x}^{*}$ is defined as $$d_{0}\left(\mathbf{x}, \mathbf{x}^{*}\right)=\sqrt{\left(\mathbf{x}-\mathbf{x}^{*}\right)^{\mathrm{T}}\left(\mathbf{x}-\mathbf{x}^{*}\right)}=\sqrt{\sum_{j=1}^{p}\left(x_{j}-x_{j}^{*}\right)^{2}} .$$ However, the Euclidean distance does not take into account the variations and the correlations of the component variables. For multivariate data, each individual component variable has its own variance, and the component variables may be correlated. Therefore, a better measure of the “distance” between two multivariate observations $\mathbf{x}$ and $\mathbf{x}^{*}$ is the Mahalanobis distance:
$$d\left(\mathbf{x}, \mathbf{x}^{*}\right)=\sqrt{\left(\mathbf{x}-\mathbf{x}^{*}\right)^{\mathrm{T}} \Sigma^{-1}\left(\mathbf{x}-\mathbf{x}^{*}\right)},$$

where $\Sigma=\operatorname{Cov}(\mathbf{x})=\operatorname{Cov}\left(\mathbf{x}^{*}\right)$ is the covariance matrix, assuming the two observations have the same covariance matrix. Thus, if the distance $d\left(\mathbf{x}, \mathbf{x}^{*}\right)$ is small, we can consider $\mathbf{x}$ and $\mathbf{x}^{*}$ to be “close” and put them in the same cluster/group. Otherwise, we can put them in different clusters/groups. For a sample of $n$ observations, $\left\{\mathbf{x}_{1}, \cdots, \mathbf{x}_{n}\right\}$, each observation can be assigned to one of the clusters based on a clustering method.
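The two distances can be compared on a small hypothetical example (the vectors and covariance matrix below are illustrative); note how the Mahalanobis distance down-weights the coordinate with the larger variance:

```python
import numpy as np

def euclidean(x, xs):
    """d0(x, x*): plain Euclidean distance."""
    return np.sqrt(np.sum((x - xs) ** 2))

def mahalanobis(x, xs, Sigma):
    """d(x, x*): Mahalanobis distance with common covariance matrix Sigma."""
    diff = x - xs
    return np.sqrt(diff @ np.linalg.inv(Sigma) @ diff)

x  = np.array([1.0, 0.0])
xs = np.array([0.0, 2.0])
Sigma = np.array([[1.0, 0.0],
                  [0.0, 4.0]])   # second variable has a larger variance

d0 = euclidean(x, xs)            # sqrt(1^2 + 2^2) = sqrt(5)
d  = mahalanobis(x, xs, Sigma)   # sqrt(1 + 4/4)   = sqrt(2)
print(d0, d)
```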

There are many cluster analysis methods available. In the following, we briefly discuss a few commonly used methods: nearest neighbour method, $k$-means algorithm, and two hierarchical cluster methods. Each method has its advantages and disadvantages, and different methods may lead to different results. In practice, it is always desirable to try at least two methods to analyze a dataset to see if the results agree or how much the results differ (which may provide some insights about the data).
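As one concrete example, the $k$-means algorithm mentioned above can be sketched as follows: alternate between assigning each observation to its nearest centroid and recomputing the centroids. This is a minimal illustration on hypothetical two-cluster data, not a production implementation:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """A minimal k-means sketch: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each observation to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old centroid if a cluster becomes empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical data: two well-separated groups of 30 observations each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (30, 2)),
               rng.normal(5, 0.5, (30, 2))])
labels, centers = kmeans(X, k=2)
```

Note that $k$-means requires the number of clusters $k$ as an input, so in practice one often tries several values of $k$; the hierarchical methods mentioned above avoid fixing $k$ in advance.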
