### Multivariate Statistical Analysis (OLET5610)


## Distribution and Density Function

Let $X=\left(X_{1}, X_{2}, \ldots, X_{p}\right)^{\top}$ be a random vector. The cumulative distribution function (cdf) of $X$ is defined by
$$F(x)=\mathrm{P}(X \leq x)=\mathrm{P}\left(X_{1} \leq x_{1}, X_{2} \leq x_{2}, \ldots, X_{p} \leq x_{p}\right)$$
For continuous $X$, there exists a nonnegative probability density function (pdf) $f$ such that
$$F(x)=\int_{-\infty}^{x} f(u) d u$$
Note that
$$\int_{-\infty}^{\infty} f(u) d u=1$$
Most of the integrals appearing below are multidimensional. For instance, $\int_{-\infty}^{x} f(u) d u$ means $\int_{-\infty}^{x_{p}} \ldots \int_{-\infty}^{x_{1}} f\left(u_{1}, \ldots, u_{p}\right) d u_{1} \ldots d u_{p}$. Note also that the cdf $F$ is differentiable with
$$f(x)=\frac{\partial^{p} F(x)}{\partial x_{1} \cdots \partial x_{p}}$$
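The relation $F(x)=\int_{-\infty}^{x} f(u)\,du$ can be checked numerically. As a sketch, take a hypothetical bivariate density with independent Exp(1) components, for which the cdf has the closed form $F(x_1, x_2)=(1-e^{-x_1})(1-e^{-x_2})$:

```python
import numpy as np
from scipy import integrate

# Hypothetical example density: two independent Exp(1) components,
# so f(u1, u2) = exp(-u1 - u2) for u1, u2 >= 0.
def f(u1, u2):
    return np.exp(-u1 - u2)

x1, x2 = 1.0, 2.0

# The multidimensional integral int_{-inf}^{x} f(u) du, computed numerically
# (the density vanishes for negative arguments, so we integrate from 0).
F_num, _ = integrate.dblquad(lambda u2, u1: f(u1, u2), 0, x1, 0, x2)

# Closed-form cdf for this density, for comparison
F_exact = (1 - np.exp(-x1)) * (1 - np.exp(-x2))
print(F_num, F_exact)  # the two values agree to quadrature precision
```

The same check differentiated the other way (finite differences of $F$ recovering $f$) follows from the mixed-partial formula above.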
For discrete $X$, the values of this random variable are concentrated on a countable or finite set of points $\left\{c_{j}\right\}_{j \in J}$; the probability of events of the form $\{X \in D\}$ can then be computed as
$$\mathrm{P}(X \in D)=\sum_{\left\{j: c_{j} \in D\right\}} \mathrm{P}\left(X=c_{j}\right)$$
If we partition $X$ as $X=\left(X_{1}, X_{2}\right)^{\top}$ with $X_{1} \in \mathbb{R}^{k}$ and $X_{2} \in \mathbb{R}^{p-k}$, then the function
$$F_{X_{1}}\left(x_{1}\right)=\mathrm{P}\left(X_{1} \leq x_{1}\right)=F\left(x_{11}, \ldots, x_{1 k}, \infty, \ldots, \infty\right)$$
is called the marginal cdf, and $F=F(x)$ is called the joint cdf. For continuous $X$, the marginal pdf can be computed from the joint density by "integrating out" the variable not of interest:
$$f_{X_{1}}\left(x_{1}\right)=\int_{-\infty}^{\infty} f\left(x_{1}, x_{2}\right) d x_{2}$$
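"Integrating out" can be illustrated numerically. As a sketch with a made-up joint density $f(x_1, x_2)=x_1+x_2$ on the unit square (which integrates to 1), whose marginal is $f_{X_1}(x_1)=x_1+\tfrac{1}{2}$:

```python
from scipy import integrate

# Hypothetical joint density on the unit square: f(x1, x2) = x1 + x2.
# Its marginal is f_{X1}(x1) = x1 + 1/2.
def f(x1, x2):
    return x1 + x2

x1 = 0.3
# "Integrate out" x2 to obtain the marginal density at x1
marginal, _ = integrate.quad(lambda x2: f(x1, x2), 0, 1)
print(marginal)  # 0.8 = 0.3 + 1/2
```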

## Moments and Characteristic Functions

Moments: Expectation and Covariance Matrix
If $X$ is a random vector with density $f(x)$ then the expectation of $X$ is
$$\mathrm{E} X=\left(\begin{array}{c} \mathrm{E} X_{1} \\ \vdots \\ \mathrm{E} X_{p} \end{array}\right)=\int x f(x) d x=\left(\begin{array}{c} \int x_{1} f(x) d x \\ \vdots \\ \int x_{p} f(x) d x \end{array}\right)=\mu$$

Accordingly, the expectation of a matrix of random elements has to be understood component by component. The operation of forming expectations is linear:
$$\mathrm{E}(\alpha X+\beta Y)=\alpha \mathrm{E} X+\beta \mathrm{E} Y$$
If $\mathcal{A}(q \times p)$ is a matrix of real numbers, we have:
$$\mathrm{E}(\mathcal{A} X)=\mathcal{A}\, \mathrm{E} X$$
When $X$ and $Y$ are independent,
$$\mathrm{E}\left(X Y^{\top}\right)=\mathrm{E} X \,\mathrm{E} Y^{\top}$$
The matrix
$$\operatorname{Var}(X)=\Sigma=\mathrm{E}(X-\mu)(X-\mu)^{\top}$$
is the (theoretical) covariance matrix. We write for a vector $X$ with mean vector $\mu$ and covariance matrix $\Sigma$,
$$X \sim(\mu, \Sigma)$$
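The defining identity $\operatorname{Var}(X)=\mathrm{E}(X-\mu)(X-\mu)^{\top}$ can be verified empirically. A minimal sketch, using an illustrative (made-up) mean vector and covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300_000

# Illustrative choice of mu and Sigma for X ~ (mu, Sigma)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=n)

# Var(X) = E (X - mu)(X - mu)^T, estimated by the sample average
centered = X - X.mean(axis=0)
Sigma_hat = centered.T @ centered / n
print(Sigma_hat)  # close to Sigma for large n
```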
The $(p \times q)$ matrix
$$\Sigma_{X Y}=\operatorname{Cov}(X, Y)=\mathrm{E}(X-\mu)(Y-v)^{\top}$$
is the covariance matrix of $X \sim\left(\mu, \Sigma_{X X}\right)$ and $Y \sim\left(v, \Sigma_{Y Y}\right)$. Note that $\Sigma_{X Y}=\Sigma_{Y X}^{\top}$ and that $Z=\left(\begin{array}{l}X \\ Y\end{array}\right)$ has covariance $\Sigma_{Z Z}=\left(\begin{array}{ll}\Sigma_{X X} & \Sigma_{X Y} \\ \Sigma_{Y X} & \Sigma_{Y Y}\end{array}\right)$. From
$$\operatorname{Cov}(X, Y)=\mathrm{E}\left(X Y^{\top}\right)-\mu v^{\top}=\mathrm{E}\left(X Y^{\top}\right)-\mathrm{E} X \,\mathrm{E} Y^{\top}$$
it follows that $\operatorname{Cov}(X, Y)=0$ when $X$ and $Y$ are independent. We often say that $\mu=\mathrm{E}(X)$ is the first-order moment of $X$ and that $\mathrm{E}\left(X X^{\top}\right)$ provides the second-order moments of $X$:
$$\mathrm{E}\left(X X^{\top}\right)=\left\{\mathrm{E}\left(X_{i} X_{j}\right)\right\}, \text { for } i=1, \ldots, p \text { and } j=1, \ldots, p$$
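The identity $\operatorname{Cov}(X, Y)=\mathrm{E}(XY^{\top})-\mathrm{E}X\,\mathrm{E}Y^{\top}$, and the fact that it vanishes under independence, can be checked by simulation. A sketch with an arbitrary choice of independent $X \in \mathbb{R}^2$ and $Y \in \mathbb{R}^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent random vectors (illustrative distributions)
X = rng.normal(size=(n, 2))
Y = rng.uniform(size=(n, 2))

# Cov(X, Y) = E(X Y^T) - E(X) E(Y)^T, estimated by sample averages
E_XYt = X.T @ Y / n                             # estimate of E(X Y^T), a 2x2 matrix
cov_XY = E_XYt - np.outer(X.mean(0), Y.mean(0))  # subtract E(X) E(Y)^T
print(np.abs(cov_XY).max())  # close to 0, since X and Y are independent
```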

## Properties of Conditional Expectations

Since $\mathrm{E}\left(X_{2} \mid X_{1}=x_{1}\right)$ is a function of $x_{1}$, say $h\left(x_{1}\right)$, we can define the random variable $h\left(X_{1}\right)=\mathrm{E}\left(X_{2} \mid X_{1}\right)$. The same can be done when defining the random variable $\operatorname{Var}\left(X_{2} \mid X_{1}\right)$. These two random variables share some interesting properties:
$$\begin{aligned} \mathrm{E}\left(X_{2}\right) &=\mathrm{E}\left\{\mathrm{E}\left(X_{2} \mid X_{1}\right)\right\} \\ \operatorname{Var}\left(X_{2}\right) &=\mathrm{E}\left\{\operatorname{Var}\left(X_{2} \mid X_{1}\right)\right\}+\operatorname{Var}\left\{\mathrm{E}\left(X_{2} \mid X_{1}\right)\right\} \end{aligned}$$
Example 4.8 Consider the following pdf
$$f\left(x_{1}, x_{2}\right)=2 e^{-\frac{x_{2}}{x_{1}}} ; \quad 0<x_{1}<1, \; x_{2}>0 .$$
It is easy to show that
$$f\left(x_{1}\right)=2 x_{1} \text { for } 0<x_{1}<1 ; \quad f\left(x_{2} \mid x_{1}\right)=\frac{1}{x_{1}} e^{-\frac{x_{2}}{x_{1}}} \text { for } x_{2}>0 ; \quad \mathrm{E}\left(X_{2} \mid X_{1}\right)=X_{1} \text { and } \operatorname{Var}\left(X_{2} \mid X_{1}\right)=X_{1}^{2} .$$
Without explicitly computing $f\left(x_{2}\right)$, we can obtain:
$$\begin{aligned} \mathrm{E}\left(X_{2}\right) &=\mathrm{E}\left\{\mathrm{E}\left(X_{2} \mid X_{1}\right)\right\}=\mathrm{E}\left(X_{1}\right)=\frac{2}{3} \\ \operatorname{Var}\left(X_{2}\right) &=\mathrm{E}\left\{\operatorname{Var}\left(X_{2} \mid X_{1}\right)\right\}+\operatorname{Var}\left\{\mathrm{E}\left(X_{2} \mid X_{1}\right)\right\} \\ &=\mathrm{E}\left(X_{1}^{2}\right)+\operatorname{Var}\left(X_{1}\right)=\frac{2}{4}+\frac{1}{18}=\frac{10}{18} \end{aligned}$$
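Example 4.8 can also be verified by Monte Carlo. A sketch, assuming (consistently with the stated conditional moments $\mathrm{E}(X_2\mid X_1)=X_1$ and $\operatorname{Var}(X_2\mid X_1)=X_1^2$) that $X_2$ given $X_1=x_1$ is exponential with mean $x_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X1 has density f(x1) = 2*x1 on (0,1); its cdf is x1^2, so inverse-cdf
# sampling gives X1 = sqrt(U) for U ~ Uniform(0,1).
X1 = np.sqrt(rng.uniform(size=n))

# Given X1 = x1, draw X2 as exponential with mean x1
# (so E(X2 | X1) = X1 and Var(X2 | X1) = X1^2).
X2 = rng.exponential(scale=X1)

print(X2.mean())  # close to 2/3
print(X2.var())   # close to 10/18
```

The two printed values approximate $\mathrm{E}(X_2)=2/3$ and $\operatorname{Var}(X_2)=10/18$ obtained above without computing $f(x_2)$ explicitly.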
The conditional expectation $\mathrm{E}\left(X_{2} \mid X_{1}\right)$ viewed as a function $h\left(X_{1}\right)$ of $X_{1}$ (known as the regression function of $X_{2}$ on $X_{1}$ ), can be interpreted as a conditional approximation of $X_{2}$ by a function of $X_{1}$. The error term of the approximation is then given by:
$$U=X_{2}-\mathrm{E}\left(X_{2} \mid X_{1}\right)$$
Theorem 4.3 Let $X_{1} \in \mathbb{R}^{k}$ and $X_{2} \in \mathbb{R}^{p-k}$ and $U=X_{2}-E\left(X_{2} \mid X_{1}\right)$. Then we have:

1. $E(U)=0$
2. $\mathrm{E}\left(X_{2} \mid X_{1}\right)$ is the best approximation of $X_{2}$ by a function $h\left(X_{1}\right)$ of $X_{1}$, where $h: \mathbb{R}^{k} \longrightarrow \mathbb{R}^{p-k}$. "Best" is meant in the minimum mean squared error (MSE) sense, where
$$\operatorname{MSE}(h)=\mathrm{E}\left[\left\{X_{2}-h\left(X_{1}\right)\right\}^{\top}\left\{X_{2}-h\left(X_{1}\right)\right\}\right] .$$
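Theorem 4.3 can be illustrated by comparing predictors on simulated data. A sketch using the setup of Example 4.8 (assuming, consistently with its conditional moments, that $X_2 \mid X_1$ is exponential with mean $X_1$), where the regression function is $h(X_1)=X_1$ and the competitors are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Example 4.8 setup: X1 ~ density 2*x1 on (0,1), X2 | X1 ~ exponential
# with mean X1, so the regression function is E(X2 | X1) = X1.
X1 = np.sqrt(rng.uniform(size=n))
X2 = rng.exponential(scale=X1)

def mse(h_vals):
    # Empirical MSE(h) = average of (X2 - h(X1))^2
    return np.mean((X2 - h_vals) ** 2)

mse_best = mse(X1)                 # h(X1) = E(X2 | X1) = X1
mse_const = mse(np.full(n, 2 / 3))  # constant predictor E(X2) = 2/3
mse_other = mse(X1 ** 2)           # an arbitrary competitor h(X1) = X1^2
print(mse_best, mse_const, mse_other)  # mse_best is the smallest
```

The conditional-mean predictor attains the smallest empirical MSE, as the theorem asserts; any other $h$ can only do worse.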
