Density Approximations

In the previous section some of the most elementary properties, i.e. moments, of the MLEs in the $BRM$, $EBRM_{B}^{3}$ and $EBRM_{W}^{3}$ were derived. In these models the exact distribution of the MLEs is difficult to obtain in a useful form. Thus, one needs to rely on either simulations or approximations. In general, simulations may be useful in some particular cases, but can often become computationally demanding, for example when used to solve distributional problems connected to high-dimensional statistical problems.

When finding approximations of distributions, it may be advisable to start from the asymptotic distribution under the assumption of a large number of independent observations. It is a fairly natural approximation strategy that one should let the asymptotic result direct the approximation. For example, if the distribution of a statistic converges to the normal distribution, it is natural to approximate with a normal distribution. The art in this connection resides in the correction of the approximation for the finite number of independent observations concerned. Moreover, in any serious context it is always of interest to indicate the error of the approximation and the best approach here is to find a sharp upper bound of the error.

Distributions of a statistic can be approximated in many ways: for example, by approximating the statistic itself, by approximating the characteristic function before transforming it back into a density, by approximating the density function, or by directly approximating the distribution function. In this section a special type of density approximation is considered, termed an Edgeworth-type expansion. From the derivation of this type of approximation it follows that one approximates the characteristic function by excluding higher-order terms in a Taylor series expansion of the characteristic function. At this stage knowledge of moments and cumulants is crucial. Thereafter an inverse transform is applied to obtain the density approximation. The reason for calling this type of approximation an Edgeworth-type expansion is that it is based on the normal distribution. However, the correct term is Gram-Charlier A series expansion. Usually the difference between Edgeworth and Gram-Charlier expansions lies in the organization of terms in the expansion, which then affects the approximation when the series are truncated. The reason for choosing the term "Edgeworth-type expansion" is that in our approach we do not have to distinguish between the Gram-Charlier and Edgeworth expansions, and at the same time, the term "Gram-Charlier expansion" is incorrect from a historical perspective (see Hald, 2002).
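The expansions developed in this chapter are for matrix-valued statistics, but the mechanics are easiest to see in one dimension. The sketch below implements the classical univariate Gram-Charlier/Edgeworth correction of the standard normal density, built from the standardized third and fourth cumulants; the truncation point and parameter names are illustrative assumptions, not taken from the text.

```python
import math

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k(x), via the recursion
    He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def edgeworth_density(x, gamma1=0.0, gamma2=0.0):
    """Univariate Edgeworth-type density approximation for a standardized
    variable with skewness gamma1 and excess kurtosis gamma2 (unit variance),
    truncated after the fourth-cumulant term."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    correction = (1.0
                  + gamma1 / 6.0 * hermite(3, x)
                  + gamma2 / 24.0 * hermite(4, x)
                  + gamma1 ** 2 / 72.0 * hermite(6, x))
    return phi * correction
```

With `gamma1 = gamma2 = 0` the expansion collapses to the standard normal density, and because every Hermite correction integrates to zero against the normal density, the approximation always integrates to one (although it may be locally negative, a known drawback of truncated expansions).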

This chapter focuses mainly on the approximation of the distribution of the maximum likelihood estimators of the mean parameters. Here the results are unexpectedly beautiful. The same approach could be adopted for the estimators of the dispersion parameters, but in that case it is not possible to bound the errors of the approximations, and therefore no results will be presented.

Preparation

Let $\boldsymbol{Y}$ be a random matrix variable with density $f_{Y}(\boldsymbol{Y}_{o})$. The density $f_{Y}(\boldsymbol{Y}_{o})$ should be approximated via another random variable $\boldsymbol{X}$ and its density $f_{X}(\boldsymbol{Y}_{o})$, and knowledge about the cumulants of both distributions. Moreover, the approximating density and the characteristic functions are going to be differentiated several times, and this will be based on the following matrix derivative.

Definition 5.1 Let $\boldsymbol{Y}$ be a function of $\boldsymbol{X}$. The $k$th matrix derivative is defined by
$$\frac{d^{k} \boldsymbol{Y}}{d \boldsymbol{X}^{k}}=\frac{d}{d \boldsymbol{X}} \frac{d^{k-1} \boldsymbol{Y}}{d \boldsymbol{X}^{k-1}}, \quad k=1,2, \ldots,$$
and
$$\frac{d \boldsymbol{Y}}{d \boldsymbol{X}}=\frac{d \operatorname{vec}^{\prime} \boldsymbol{Y}}{d \operatorname{vec} \boldsymbol{X}}, \quad \frac{d^{0} \boldsymbol{Y}}{d \boldsymbol{X}^{0}}=\boldsymbol{Y},$$
where, if $\boldsymbol{X} \in \mathbb{R}^{p \times q}$,
$$\frac{d}{d \boldsymbol{X}}=\left(\frac{d}{d x_{11}}, \ldots, \frac{d}{d x_{p 1}}, \frac{d}{d x_{12}}, \ldots, \frac{d}{d x_{p 2}}, \ldots, \frac{d}{d x_{1 q}}, \ldots, \frac{d}{d x_{p q}}\right)^{\prime}.$$

Note that a more precise, but clumsy, notation would have been $\frac{d^{k} \boldsymbol{Y}}{(d \boldsymbol{X})^{k}}$ or $\frac{d^{k} \boldsymbol{Y}}{d \boldsymbol{X} \cdots d \boldsymbol{X}}$; i.e. here in Definition 5.1, $\boldsymbol{X}^{k}$ does not denote the matrix power.
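Definition 5.1 can be checked numerically. The sketch below (an illustration, not part of the text) approximates $d\boldsymbol{Y}/d\boldsymbol{X} = d\operatorname{vec}'\boldsymbol{Y}/d\operatorname{vec}\boldsymbol{X}$ by central differences, with vec denoting column-wise stacking, and verifies it on the linear map $\boldsymbol{Y}=\boldsymbol{A}\boldsymbol{X}$, for which $\operatorname{vec}\boldsymbol{Y}=(\boldsymbol{I}\otimes\boldsymbol{A})\operatorname{vec}\boldsymbol{X}$, so the derivative is $(\boldsymbol{I}\otimes\boldsymbol{A})'$.

```python
import numpy as np

def matrix_derivative(f, X, h=1e-6):
    """Numerical version of dY/dX = d vec'(Y)/d vec(X) from Definition 5.1:
    rows are indexed by vec(X), columns by vec(f(X))."""
    x = X.ravel(order="F")          # vec X (column-wise stacking)
    y0 = f(X).ravel(order="F")      # vec Y
    D = np.empty((x.size, y0.size))
    for k in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[k] += h
        xm[k] -= h
        yp = f(xp.reshape(X.shape, order="F")).ravel(order="F")
        ym = f(xm.reshape(X.shape, order="F")).ravel(order="F")
        D[k] = (yp - ym) / (2.0 * h)
    return D

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))

# For Y = A X the derivative is (I_5 kron A)', a 15 x 20 matrix here.
D_num = matrix_derivative(lambda Z: A @ Z, X)
D_exact = np.kron(np.eye(5), A).T
```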

Since the Edgeworth-type expansion is based on knowledge about multivariate cumulants, it is necessary to define them. Let $\varphi_{X}(\boldsymbol{T})$ denote the characteristic function (Fourier transform),
$$\varphi_{X}(\boldsymbol{T})=E\left[e^{i \operatorname{tr}\left(\boldsymbol{T}^{\prime} \boldsymbol{X}\right)}\right],$$
where $i$ is the imaginary unit, and then the $k$th cumulant $c_{k}[\boldsymbol{X}]$ is presented in the next definition.
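For a univariate random variable, the first cumulants follow from the raw moments by the standard relations $c_1=m_1$, $c_2=m_2-m_1^2$, $c_3=m_3-3m_1m_2+2m_1^3$, $c_4=m_4-4m_1m_3-3m_2^2+12m_1^2m_2-6m_1^4$. The helper below is an illustration of these relations in the scalar case only; the matrix setting of this chapter instead works with $\operatorname{vec}\boldsymbol{X}$ and Kronecker powers.

```python
def cumulants_from_moments(m1, m2, m3, m4):
    """First four cumulants c_k from the first four raw moments m_k
    of a scalar random variable."""
    c1 = m1
    c2 = m2 - m1 ** 2
    c3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3
    c4 = m4 - 4 * m1 * m3 - 3 * m2 ** 2 + 12 * m1 ** 2 * m2 - 6 * m1 ** 4
    return c1, c2, c3, c4

# Raw moments of N(mu, sigma^2): the cumulants should come out as
# (mu, sigma^2, 0, 0), since all cumulants of order > 2 of a normal vanish.
mu, s2 = 1.5, 2.0
m1 = mu
m2 = mu ** 2 + s2
m3 = mu ** 3 + 3 * mu * s2
m4 = mu ** 4 + 6 * mu ** 2 * s2 + 3 * s2 ** 2
```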

Density Approximation for the Mean Parameter

For the $BRM$, presented in Definition $2.1$, the density of $\widehat{\boldsymbol{B}}-\boldsymbol{B}$ is now approximated. It is assumed that $\widehat{\boldsymbol{B}}$ is unique, i.e. the matrices $\boldsymbol{A}$ and $\boldsymbol{C}$ are of full rank, and therefore (see Corollary 3.1)
$$\widehat{\boldsymbol{B}}-\boldsymbol{B}=\left(\boldsymbol{A}^{\prime} \boldsymbol{S}^{-1} \boldsymbol{A}\right)^{-1} \boldsymbol{A}^{\prime} \boldsymbol{S}^{-1}(\boldsymbol{X}-\boldsymbol{A B C}) \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}$$
is discussed, where $\boldsymbol{S} \sim W_{p}(\boldsymbol{\Sigma}, n-k)$. Since (see Appendix B, Theorem B.18 (ii))
$$\frac{1}{n-k} \boldsymbol{S} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}, \quad n \rightarrow \infty,$$
a natural approximating quantity is
$$\boldsymbol{B}_{\Sigma}-\boldsymbol{B}=\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-1} \boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}(\boldsymbol{X}-\boldsymbol{A B C}) \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}.$$
Moreover, $\boldsymbol{B}_{\Sigma}$ is normally distributed and
$$\begin{aligned} &E[\widehat{\boldsymbol{B}}-\boldsymbol{B}]=E\left[\boldsymbol{B}_{\Sigma}-\boldsymbol{B}\right]=\mathbf{0}, \\ &D[\widehat{\boldsymbol{B}}]-D\left[\boldsymbol{B}_{\Sigma}\right]=\frac{p-q}{n-k-p+q-1}\left(\boldsymbol{C C}^{\prime}\right)^{-1} \otimes\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-1}, \end{aligned}$$
where (5.14) is obtained from $D[\widehat{\boldsymbol{B}}]$, presented in Corollary $4.1$ (ii), and $D\left[\boldsymbol{B}_{\Sigma}\right]$ is established with the help of Appendix B, Theorem B.19 (iii). In many natural applications $\left(\boldsymbol{C C}^{\prime}\right)^{-1}$ will become small, or at least its elements are bounded when $n \rightarrow \infty$, and therefore the first two moments (cumulants) of $\widehat{\boldsymbol{B}}$ and $\boldsymbol{B}_{\Sigma}$ are close to each other. We also know that $\widehat{\boldsymbol{B}}-\boldsymbol{B}_{\Sigma} \stackrel{P}{\rightarrow} \mathbf{0}$ as $n \rightarrow \infty$ (see the proof of Theorem 4.1). Hence, many properties of $\widehat{\boldsymbol{B}}$ support the idea of approximating the density of $\widehat{\boldsymbol{B}}-\boldsymbol{B}$ with the density of $\boldsymbol{B}_{\Sigma}-\boldsymbol{B}$. The consequences of this approach are studied now, and our starting point is the next important observation:
$$\widehat{\boldsymbol{B}}-\boldsymbol{B}=\boldsymbol{B}_{\Sigma}-\boldsymbol{B}-\boldsymbol{U},$$
where
$$\boldsymbol{U}=\left(\boldsymbol{A}^{\prime} \boldsymbol{S}^{-1} \boldsymbol{A}\right)^{-1} \boldsymbol{A}^{\prime} \boldsymbol{S}^{-1}\left(\boldsymbol{P}_{A, \Sigma}-\boldsymbol{I}\right)(\boldsymbol{X}-\boldsymbol{A B C}) \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}.$$
Now $\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-1} \boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}$ and $\left(\boldsymbol{P}_{A, \Sigma}-\boldsymbol{I}\right) \boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}$ are independent (see Appendix B, Theorem B.19 (x)) and $\boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C C}^{\prime}\right)^{-1}$ is independent of $\boldsymbol{S}$ (see Appendix B, Theorem B.19 (viii)). Therefore, $\boldsymbol{B}_{\Sigma}$ and $\boldsymbol{U}$ are independently distributed. Hence, Theorem $5.2$ can be applied and the following quantities are needed if $m=3$ is chosen in Theorem $5.2$:
$$\begin{aligned} &\boldsymbol{B}_{\Sigma}-\boldsymbol{B} \sim N_{q, k}\left(\boldsymbol{0},\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-1},\left(\boldsymbol{C C}^{\prime}\right)^{-1}\right), \\ &E[\boldsymbol{U}]=\mathbf{0}, \quad E\left[\boldsymbol{u}^{\otimes 3}\right]=\mathbf{0}, \quad(\boldsymbol{u}=\operatorname{vec} \boldsymbol{U}), \\ &E\left[\boldsymbol{u}^{\otimes 2}\right]=\operatorname{vec}\left(D[\widehat{\boldsymbol{B}}]-D\left[\boldsymbol{B}_{\Sigma}\right]\right)=\frac{p-q}{n-k-p+q-1} \operatorname{vec}\left(\left(\boldsymbol{C C}^{\prime}\right)^{-1} \otimes\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-1}\right). \end{aligned}$$
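The decomposition $\widehat{\boldsymbol{B}}-\boldsymbol{B}=(\boldsymbol{B}_{\Sigma}-\boldsymbol{B})-\boldsymbol{U}$ is an algebraic identity which holds for every positive definite $\boldsymbol{S}$, because $(\boldsymbol{A}'\boldsymbol{S}^{-1}\boldsymbol{A})^{-1}\boldsymbol{A}'\boldsymbol{S}^{-1}\boldsymbol{P}_{A,\Sigma}=(\boldsymbol{A}'\boldsymbol{\Sigma}^{-1}\boldsymbol{A})^{-1}\boldsymbol{A}'\boldsymbol{\Sigma}^{-1}$. The sketch below verifies it numerically, with $\boldsymbol{P}_{A,\Sigma}$ taken as the projector $\boldsymbol{A}(\boldsymbol{A}'\boldsymbol{\Sigma}^{-1}\boldsymbol{A})^{-1}\boldsymbol{A}'\boldsymbol{\Sigma}^{-1}$ and with arbitrarily chosen dimensions and matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, k, n = 5, 2, 3, 20           # illustrative dimensions

A = rng.standard_normal((p, q))     # full column rank (almost surely)
C = rng.standard_normal((k, n))     # full row rank (almost surely)
B = rng.standard_normal((q, k))
X = rng.standard_normal((p, n))

def random_spd(dim):
    R = rng.standard_normal((dim, dim))
    return R @ R.T + dim * np.eye(dim)

S, Sigma = random_spd(p), random_spd(p)    # any positive definite S, Sigma
Si, Gi = np.linalg.inv(S), np.linalg.inv(Sigma)
CCi = np.linalg.inv(C @ C.T)
E = X - A @ B @ C                          # X - A B C

Bhat_mB = np.linalg.inv(A.T @ Si @ A) @ A.T @ Si @ E @ C.T @ CCi   # Bhat - B
Bsig_mB = np.linalg.inv(A.T @ Gi @ A) @ A.T @ Gi @ E @ C.T @ CCi   # B_Sigma - B

P = A @ np.linalg.inv(A.T @ Gi @ A) @ A.T @ Gi                     # P_{A,Sigma}
U = np.linalg.inv(A.T @ Si @ A) @ A.T @ Si @ (P - np.eye(p)) @ E @ C.T @ CCi
```

Replacing $\boldsymbol{S}$ by $\boldsymbol{\Sigma}$ in $\boldsymbol{U}$ gives $\boldsymbol{A}'\boldsymbol{\Sigma}^{-1}(\boldsymbol{P}_{A,\Sigma}-\boldsymbol{I})=\boldsymbol{0}$, i.e. $\boldsymbol{U}$ vanishes, which is why $\boldsymbol{B}_{\Sigma}$ is the natural approximating quantity.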


Uniqueness Conditions for MLEs

In order to study the estimators of the parameters in the $EBRM_{B}^{3}$, the estimators or the bilinear combinations of them have to be unique. If the estimate $\widehat{\boldsymbol{B}}_{io}$ is considered to be unique, it is understood that $\widehat{\boldsymbol{B}}_{io}$ has a unique expression, whereas if the estimator $\widehat{\boldsymbol{B}}_{i}$ is unique, this means that it has a unique distribution (excluding events with probability mass 0). In the following, however, $\widehat{\boldsymbol{B}}_{i}$ represents both the estimators and the estimates. It is essential, as for the $BRM$, to obtain uniqueness conditions, since the conditions reveal whether or not the parameters or bilinear functions of the parameters are estimable. Unfortunately, in comparison with the $BRM$, there are more parameters in the $EBRM_{B}^{3}$ and their estimators are functionally connected. Thus, the handling of the $EBRM_{B}^{3}$ is more complex and the technical treatment more complicated. In general, the technical details presented in the following will be sparse.

The next theorem presents necessary and sufficient uniqueness conditions for the estimators of the parameters in the $EBRM_{B}^{3}$.

Theorem 4.9 For the $EBRM_{B}^{3}$ presented in Definition $2.2$, let $\widehat{\boldsymbol{B}}_{i}$, $i=1,2,3$, be given in Theorem $3.2$ and let $\boldsymbol{K} \widehat{\boldsymbol{B}}_{i} \boldsymbol{L}$, $i=1,2,3$, be linear combinations of $\widehat{\boldsymbol{B}}_{i}$; $\boldsymbol{K}$ and $\boldsymbol{L}$ are known matrices of proper sizes. Then the following statements hold:

(i) $\widehat{\boldsymbol{B}}_{3}$ is unique if and only if
$$r\left(\boldsymbol{A}_{3}\right)=q_{3}, \quad r\left(\boldsymbol{C}_{3}\right)=k_{3}, \quad \mathcal{C}\left(\boldsymbol{A}_{3}\right) \cap \mathcal{C}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right)=\{\mathbf{0}\}.$$
(ii) $\boldsymbol{K} \widehat{\boldsymbol{B}}_{3} \boldsymbol{L}$ is unique if and only if
$$\mathcal{C}(\boldsymbol{L}) \subseteq \mathcal{C}\left(\boldsymbol{C}_{3}\right), \quad \mathcal{C}\left(\boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{3}^{\prime}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right)^{o}\right).$$
(iii) $\widehat{\boldsymbol{B}}_{2}$ is unique if and only if
$$\begin{aligned} &r\left(\boldsymbol{A}_{2}\right)=q_{2}, \quad r\left(\boldsymbol{C}_{2}\right)=k_{2}, \quad \mathcal{C}\left(\boldsymbol{A}_{1}\right) \cap \mathcal{C}\left(\boldsymbol{A}_{2}\right)=\{\mathbf{0}\}, \\ &\mathcal{C}\left(\boldsymbol{A}_{1}\right)^{\perp} \cap \mathcal{C}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right) \cap \mathcal{C}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{3}\right)=\{\mathbf{0}\}. \end{aligned}$$
(iv) $\boldsymbol{K} \widehat{\boldsymbol{B}}_{2} \boldsymbol{L}$ is unique if and only if
$$\mathcal{C}(\boldsymbol{L}) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}\right), \quad \mathcal{C}\left(\boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{2}^{\prime}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{3}\right)^{o}\right).$$
(v) $\widehat{\boldsymbol{B}}_{1}$ is unique if and only if
$$\begin{aligned} &r\left(\boldsymbol{A}_{1}\right)=q_{1}, \quad r\left(\boldsymbol{C}_{1}\right)=k_{1}, \quad \mathcal{C}\left(\boldsymbol{A}_{1}\right) \cap \mathcal{C}\left(\boldsymbol{A}_{2}\right)=\{\mathbf{0}\}, \\ &\mathcal{C}\left(\boldsymbol{A}_{2}\right)^{\perp} \cap \mathcal{C}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right) \cap \mathcal{C}\left(\boldsymbol{A}_{2}: \boldsymbol{A}_{3}\right)=\{\mathbf{0}\}. \end{aligned}$$
(vi) $\boldsymbol{K} \widehat{\boldsymbol{B}}_{1} \boldsymbol{L}$ is unique if and only if
$$\begin{aligned} &\mathcal{C}(\boldsymbol{L}) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}\right), \quad \mathcal{C}\left(\boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{1}^{\prime}\right), \\ &\mathcal{C}\left(\boldsymbol{A}_{3}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A_{1}^{o}} \boldsymbol{A}_{2}\left(\boldsymbol{A}_{2}^{\prime} \boldsymbol{P}_{A_{1}^{o}} \boldsymbol{A}_{2}\right)^{-} \boldsymbol{A}_{2}^{\prime}\right) \boldsymbol{A}_{1}\left(\boldsymbol{A}_{1}^{\prime} \boldsymbol{A}_{1}\right)^{-} \boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{3}^{\prime}\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right)^{o}\right), \\ &\mathcal{C}\left(\boldsymbol{A}_{2}^{\prime} \boldsymbol{A}_{1}\left(\boldsymbol{A}_{1}^{\prime} \boldsymbol{A}_{1}\right)^{-} \boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{2}^{\prime} \boldsymbol{A}_{1}^{o}\right). \end{aligned}$$
(vii) The estimator $\widehat{\boldsymbol{\Sigma}}$ in Theorem $3.2$ is always unique, as is the estimator $\widehat{E}[\boldsymbol{X}]$ given in Corollary $3.3$.
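Conditions of the form $\mathcal{C}(\boldsymbol{A}_{3}) \cap \mathcal{C}(\boldsymbol{A}_{1}:\boldsymbol{A}_{2})=\{\mathbf{0}\}$ can be checked numerically through ranks, since two column spaces intersect trivially exactly when $r(\boldsymbol{A}_{1}:\boldsymbol{A}_{2}:\boldsymbol{A}_{3})=r(\boldsymbol{A}_{1}:\boldsymbol{A}_{2})+r(\boldsymbol{A}_{3})$. The sketch below encodes statement (i) of Theorem 4.9 in this rank form; the matrices and dimensions are illustrative only.

```python
import numpy as np

def trivial_intersection(M, N):
    """True if C(M) and C(N) intersect only in {0},
    i.e. r([M N]) = r(M) + r(N)."""
    r = np.linalg.matrix_rank
    return r(np.hstack((M, N))) == r(M) + r(N)

def b3_unique(A1, A2, A3, C3, q3, k3):
    """Rank form of Theorem 4.9 (i): A3 and C3 of full rank and
    C(A3) intersecting C(A1 : A2) trivially."""
    r = np.linalg.matrix_rank
    return (r(A3) == q3 and r(C3) == k3
            and trivial_intersection(A3, np.hstack((A1, A2))))

# Illustrative matrices: A1, A2, A3 have disjoint column spaces by construction.
A1 = np.array([[1.0], [0.0], [0.0], [0.0]])
A2 = np.array([[0.0], [1.0], [0.0], [0.0]])
A3 = np.array([[0.0], [0.0], [1.0], [0.0]])
C3 = np.array([[1.0, 0.0, 1.0]])
```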

Asymptotic Properties of Estimators

Lemma 4.2 Let $\boldsymbol{S}_{1}, \widehat{\boldsymbol{S}}_{2}, \widehat{\boldsymbol{S}}_{3}, \widehat{\boldsymbol{Q}}_{1}, \widehat{\boldsymbol{Q}}_{2}, \boldsymbol{Q}_{1}$ and $\boldsymbol{Q}_{2}$ be defined through Theorem $3.2$ and (3.13)-(3.16). Suppose that for large $n$, $r\left(\boldsymbol{C}_{1}\right) \leq k_{1}$, and that both $r\left(\boldsymbol{C}_{1}\right)-r\left(\boldsymbol{C}_{2}\right)$ and $r\left(\boldsymbol{C}_{2}\right)-r\left(\boldsymbol{C}_{3}\right)$ are independent of $n$. Then, as $n \rightarrow \infty$,

(i) $n^{-1} \boldsymbol{S}_{1} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}, \quad n^{-1} \widehat{\boldsymbol{S}}_{2} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}, \quad n^{-1} \widehat{\boldsymbol{S}}_{3} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}$;
(ii) $\widehat{\boldsymbol{Q}}_{1} \stackrel{P}{\rightarrow} \boldsymbol{Q}_{1}, \quad \widehat{\boldsymbol{Q}}_{2} \stackrel{P}{\rightarrow} \boldsymbol{Q}_{2}$.

Proof Since the distribution of $\boldsymbol{S}$ (see Lemma 4.1) used in the $BRM$ and the distribution of $\boldsymbol{S}_{1}$ are the same, $n^{-1} \boldsymbol{S}_{1} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}$ follows from Lemma 4.1, and this is also true for $\widehat{\boldsymbol{Q}}_{1} \stackrel{P}{\rightarrow} \boldsymbol{Q}_{1}$. Then it is noted that $\widehat{\boldsymbol{Q}}_{1}^{\prime} \boldsymbol{A}_{1}=\mathbf{0}$, and hence
$$\widehat{\boldsymbol{S}}_{2}=\boldsymbol{S}_{1}+\widehat{\boldsymbol{Q}}_{1}^{\prime}\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right)\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)^{\prime} \widehat{\boldsymbol{Q}}_{1}.$$
From Appendix B, Theorem B.20 (vi) it follows that
$$\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right)\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)^{\prime} \sim W_{p}\left(\boldsymbol{\Sigma}, r\left(\boldsymbol{C}_{1}\right)-r\left(\boldsymbol{C}_{2}\right)\right),$$
because $\left(\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}\right)\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right)=\mathbf{0}$. It is assumed that $r\left(\boldsymbol{C}_{1}\right)-r\left(\boldsymbol{C}_{2}\right)$ is fixed for large $n$, which implies that for large $n$ the Wishart distribution does not depend on $n$. Hence,
$$\frac{1}{n}\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right)\left(\boldsymbol{X}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}\right)^{\prime} \stackrel{P}{\rightarrow} \mathbf{0},$$
which is precisely what is needed in the following. Thus, (4.43) yields $n^{-1}(\widehat{\boldsymbol{S}}_{2}-\boldsymbol{S}_{1}) \stackrel{P}{\rightarrow} \mathbf{0}$, and then $n^{-1} \widehat{\boldsymbol{S}}_{2} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}$. Moreover, $\widehat{\boldsymbol{Q}}_{2} \stackrel{P}{\rightarrow} \boldsymbol{Q}_{2}$, and by copying the above presentation one may show $n^{-1} \widehat{\boldsymbol{S}}_{3} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}$.

Moments of Estimators of Parameters

For the $BRM$, the distributions of the maximum likelihood estimators are difficult to find. In Theorem 3.2, the estimators for the $EBRM_{B}^{3}$ were given and one can see that the expressions are stochastically much more complicated than the estimators for the $BRM$. To understand the estimators, moments are useful quantities. For example, the distributions of the estimators have to be approximated, and in this book these approximations are based on moments. Before studying $\boldsymbol{K} \widehat{\boldsymbol{B}}_{i} \boldsymbol{L}$, $i=1,2,3$, the estimated mean structure $\widehat{E[\boldsymbol{X}]}=\sum_{i=1}^{3} \boldsymbol{A}_{i} \widehat{\boldsymbol{B}}_{i} \boldsymbol{C}_{i}$ and $\widehat{\boldsymbol{\Sigma}}$ are treated. Thereafter, $D\left[\boldsymbol{K} \widehat{\boldsymbol{B}}_{i} \boldsymbol{L}\right]$, $i=1,2,3$, is calculated. The ideas for calculating $D\left[\boldsymbol{K} \widehat{\boldsymbol{B}}_{i} \boldsymbol{L}\right]$ are very similar to the ones presented for obtaining $D[\widehat{E}[\boldsymbol{X}]]$ and $E[\widehat{\boldsymbol{\Sigma}}]$. Some advice is appropriate here. The technical treatment in this section is complicated, although not very difficult. Readers less interested in details are recommended merely to study the results in the given theorems. Moreover, in several places the presentation is not complete, owing to the length of the computations. Table $4.1$ includes definitions which are used throughout the section.

First it will be shown that in the $EBRM_{B}^{3}$, under the uniqueness conditions presented in Theorem $4.9$, the maximum likelihood estimators of $\boldsymbol{K B}_{i} \boldsymbol{L}$ are unbiased, from which it follows that $\widehat{E[\boldsymbol{X}]}=\sum_{i=1}^{3} \boldsymbol{A}_{i} \widehat{\boldsymbol{B}}_{i} \boldsymbol{C}_{i}$ is also unbiased. In Theorem $3.2$ the maximum likelihood estimators $\widehat{\boldsymbol{B}}_{i}$, $i=1,2,3$, were presented. Since $\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$, the following facts, which are obtained from Appendix B, Theorem B.19 (ix) and (xi), will be utilized.



Basic Properties of Estimators

Since statistical models usually consist of unknown parameters, these parameters have to be estimated in order to make the models interpretable. A general strategy (the plug-in strategy) is to replace the original unknown parameters with estimated quantities, i.e. to create an estimated model, and then hope that this procedure will provide useful information. In order to draw firm statistical conclusions, one needs to know the distribution of the estimated model, the estimated parameters or, in general, the distribution of any statistic of interest. One consequence of the estimation procedure is that the produced estimators of the parameters in a model are usually dependent (correlated), which obviously cannot be the case in the original model, where no distribution is put on the parameters. This deviation from the original model may be essential for the interpretation of the output from any analysis based on the model.

Unfortunately, exact distributions may be difficult to derive. Therefore one has mostly to rely on approximations. There are many ways of performing approximations. One is to approximate the original model with a model where the necessary distributions can be obtained. For example, a non-linear model can be approximated by a linear model, and if one additionally supposes an error which is normally distributed, the basic distributions are available for applying the model to real data. Sometimes this is a good idea, but sometimes the original model has a specific meaning, including an understanding of the parameters, whereas its linearization is more difficult to interpret.

Another type of approximation is implemented when a multivariate set-up, with an unknown dispersion matrix, is approximated with a number of independent univariate models, for example, when a $p$-dimensional multivariate linear model is approximated by $p$ independent univariate linear models.

A third type of approximation is to consider the approximation from an asymptotic perspective, i.e. to suppose that many independent observations, let us say $n$, are available. The mathematics usually requires that $n \rightarrow \infty$, but, of course, we always have a finite number of independent observations. One rarely knows how many observations are needed in order to trust results based on $n \rightarrow \infty$.

In statistics and, in particular, multivariate analysis, functions of the inverse dispersion matrix, $\boldsymbol{\Sigma}^{-1}: p \times p$, are often used. However, there may be a problem estimating the inverse, e.g. due to multicollinearity in "specific functions of data", or there may simply be too few independent observations. In this case one can use the Cayley-Hamilton theorem (see Rao, 1973, pp. 44-45), which implies
$$\boldsymbol{\Sigma}^{-1}=\sum_{i=0}^{p-1} c_{i} \boldsymbol{\Sigma}^{i}, \quad c_{i} \text { are functions of } \boldsymbol{\Sigma}, \quad \boldsymbol{\Sigma}^{0}=\boldsymbol{I}_{p},$$
with the following approximation (pretending that $c_{i}$ are unknown constants):
$$\boldsymbol{\Sigma}^{-1} \approx \sum_{i=0}^{a-1} c_{i} \boldsymbol{\Sigma}^{i}, \quad \text { for some } a<p.$$
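The exact (untruncated) representation can be made explicit: if the characteristic polynomial of $\boldsymbol{\Sigma}$ is $\lambda^{p}+a_{p-1}\lambda^{p-1}+\cdots+a_{0}$, then Cayley-Hamilton gives $\boldsymbol{\Sigma}^{-1}=-a_{0}^{-1}(\boldsymbol{\Sigma}^{p-1}+a_{p-1}\boldsymbol{\Sigma}^{p-2}+\cdots+a_{1}\boldsymbol{I})$. A numerical sketch (the matrix and its size are illustrative):

```python
import numpy as np

def inverse_via_cayley_hamilton(Sigma):
    """Express Sigma^{-1} as a polynomial of degree p-1 in Sigma, using the
    coefficients of the characteristic polynomial (Cayley-Hamilton)."""
    p = Sigma.shape[0]
    a = np.poly(Sigma)            # a[0] = 1, ..., a[p] = constant coefficient
    acc = np.zeros_like(Sigma)
    for i in range(p):            # accumulate sum_i a[i] * Sigma^{p-1-i}
        acc += a[i] * np.linalg.matrix_power(Sigma, p - 1 - i)
    return -acc / a[p]

rng = np.random.default_rng(2)
R = rng.standard_normal((4, 4))
Sigma = R @ R.T + 4 * np.eye(4)   # positive definite, hence invertible
```

Truncating the sum after $a<p$ powers of $\boldsymbol{\Sigma}$ gives exactly the approximation displayed above.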

Asymptotic Properties of Estimators

The statistics presented in the following are all functions of the number of independent observations, $n$. Thus, when writing $n \rightarrow \infty$, we imagine a sequence of statistics under consideration which can be exploited in many ways. In the following we only elucidate whether a sequence converges and not how fast it converges. There exists a huge body of mathematical literature which studies sequences, in particular the convergence of sequences, and in this book we follow statistical tradition in our use of convergence in probability and in distribution (see Appendix A, Sect. A.11 for definitions).

The next lemma is fundamental for the following presentation of asymptotic results for the $B R M$ (see also Appendix B, Theorem B.18).

Lemma 4.1 Let $\boldsymbol{S}=\boldsymbol{X}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}^{\prime}$, where $\boldsymbol{X}$ follows the $BRM$ presented in Definition 2.1. Then, if $n \rightarrow \infty$, and $r(\boldsymbol{C}) \leq k$ is independent of $n$,

(i) $n^{-1} \boldsymbol{S} \stackrel{P}{\rightarrow} \boldsymbol{\Sigma}$;
(ii) $\frac{1}{\sqrt{n}} \operatorname{vec}(\boldsymbol{S}-n \boldsymbol{\Sigma}) \stackrel{D}{\rightarrow} N_{p^{2}}(\mathbf{0}, \boldsymbol{\Pi}), \quad \boldsymbol{\Pi}=\left(\boldsymbol{I}_{p^{2}}+\boldsymbol{K}_{p, p}\right)(\boldsymbol{\Sigma} \otimes \boldsymbol{\Sigma})$,

where $\boldsymbol{K}_{p, p}$ is the commutation matrix. See Appendix A, Sects. A.5 and A.6 for definitions of $\boldsymbol{K}_{p, p}$ and $\operatorname{vec}(\bullet)$, respectively.

Proof Since $\boldsymbol{S}=\sum_{i=1}^{n-r(\boldsymbol{C})} \boldsymbol{y}_{i} \boldsymbol{y}_{i}^{\prime}$ for some $\boldsymbol{y}_{i} \sim N_{p}(\mathbf{0}, \boldsymbol{\Sigma})$, where $\boldsymbol{y}_{i}$ and $\boldsymbol{y}_{j}$, $i \neq j$, are independent, statement (i) follows from the law of large numbers and statement (ii) from the central limit theorem (see Appendix B, Theorem B.18 (ii) and (v)), together with $D[\boldsymbol{S}]=(n-r(\boldsymbol{C}))\left(\boldsymbol{I}_{p^{2}}+\boldsymbol{K}_{p, p}\right)(\boldsymbol{\Sigma} \otimes \boldsymbol{\Sigma})$.
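Statement (i) is easy to see in simulation: with $\boldsymbol{S}$ built from $n-r(\boldsymbol{C})$ independent $N_{p}(\mathbf{0},\boldsymbol{\Sigma})$ vectors, $n^{-1}\boldsymbol{S}$ settles at $\boldsymbol{\Sigma}$. A sketch with illustrative values of $p$, $n$, $r(\boldsymbol{C})$ and $\boldsymbol{\Sigma}$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, rC = 3, 100_000, 2                 # illustrative dimensions; rC = r(C)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# S = sum of (n - r(C)) outer products y_i y_i' with y_i ~ N_p(0, Sigma)
Y = rng.multivariate_normal(np.zeros(p), Sigma, size=n - rC)
S = Y.T @ Y

max_dev = np.max(np.abs(S / n - Sigma))  # small for large n, by the lemma
```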

Suppose that there are two matrices $\boldsymbol{K}$ and $\boldsymbol{L}$, such that the following estimability conditions hold: $\mathcal{C}\left(\boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}^{\prime}\right)$, and $\mathcal{C}(\boldsymbol{L}) \subseteq \mathcal{C}\left(\boldsymbol{C}_{v}\right) \subseteq \mathcal{C}(\boldsymbol{C})$ for some fixed number $v$, where $\boldsymbol{C}_{v}$ is a matrix which consists of the first $v$ columns of $\boldsymbol{C}$. The reason for the latter assumption is that when $n \rightarrow \infty$, the number of columns in $\boldsymbol{C}$ increases, and without this assumption it would not make sense to consider $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$, where $\widehat{\boldsymbol{B}}$ is the MLE of the mean parameter of the $BRM$.
The estimability conditions given above and Theorem $3.1$ together provide
$$\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}=\boldsymbol{K}\left(\boldsymbol{A}^{\prime} \boldsymbol{S}^{-1} \boldsymbol{A}\right)^{-} \boldsymbol{A}^{\prime} \boldsymbol{S}^{-1} \boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-} \boldsymbol{L}$$

统计代写|回归分析作业代写Regression Analysis代考|Moments of Estimators of Parameters in the BRM

Throughout this section, as in Corollary $3.1$, two matrices, $\boldsymbol{K}$ and $\boldsymbol{L}$, will be used which satisfy $\mathcal{C}\left(\boldsymbol{K}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}^{\prime}\right)$ and $\mathcal{C}(\boldsymbol{L}) \subseteq \mathcal{C}(\boldsymbol{C})$, respectively. These are the socalled estimability conditions in the $B R M$ in the sense that unique estimators are obtained when these conditions are met. Then, once again,
$$\boldsymbol{K} \widehat{B} L=K\left(A^{\prime} S^{-1} A\right)^{-} A^{\prime} S^{-1} X C^{\prime}\left(C C^{\prime}\right)^{-} L$$
where $\boldsymbol{S}=\boldsymbol{X}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}^{\prime}$. Moments for $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ and $\widehat{\boldsymbol{\Sigma}}$ will now be derived, but there derivation is a rather technical issue. In principle, one needs to combine knowledge from the matrix normal, Wishart and inverse Wishart distributions. As $\boldsymbol{K}$ and $\boldsymbol{L}$, the matrices $A$ and $C$ may be chosen and then, if these matrices are of full rank,

i.e. $r(A)=q$ and $r(C)=k$, one may pre-multiply (4.8) by $\left(A^{\prime} A\right)^{-1} A^{\prime}$, postmultiply by $\boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-1}$, and obtain
$$\widehat{B}=\left(A^{\prime} \boldsymbol{S}^{-1} \boldsymbol{A}\right)^{-1} A^{\prime} \boldsymbol{S}^{-1} \boldsymbol{X} C^{\prime}\left(C C^{\prime}\right)^{-1}$$
Thus, by studying (4.8) one always obtains complete information about (4.9). When considering the general $\widehat{\boldsymbol{B}}$-expression presented in Corollary $3.1$, the estimator has to be treated separately for each choice of $\boldsymbol{Z}_{i}, i=1,2$. If $\boldsymbol{Z}_{i}$ is non-random, we just have a translation of $\widehat{\boldsymbol{B}}$ and, as will be seen later, a biased estimator. If $\boldsymbol{Z}_{i}$ is random, everything is more complicated and less clear, and there is little point in discussing this case.

In (4.8) the matrix $\boldsymbol{S}$ is random, and therefore the expression for $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ is quite a complicated non-linear random expression. As noted before, it consists of two parts, namely
$$K\left(A^{\prime} S^{-1} A\right)^{-} A^{\prime} S^{-1}$$
and
$$\boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-} \boldsymbol{L}$$
but fortunately $\boldsymbol{S}$ and $\boldsymbol{X} \boldsymbol{C}^{\prime}$ are independently distributed (see Appendix B, Theorem B.19 (viii)), which will be utilized many times.
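The independence of $\boldsymbol{S}$ and $\boldsymbol{X}\boldsymbol{C}^{\prime}$ rests on the fact that the two statistics are built from $\boldsymbol{X}(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}})$ and $\boldsymbol{X}\boldsymbol{P}_{C^{\prime}}$, respectively, and these two projectors are orthogonal. A minimal numerical check of this orthogonality (the design matrix below is illustrative):

```python
import numpy as np

n, k = 12, 3
rng = np.random.default_rng(1)
C = rng.standard_normal((k, n))                  # a full-rank between-design
Pc = C.T @ np.linalg.inv(C @ C.T) @ C            # orthogonal projector onto C(C')

# S depends on X only through X(I - Pc), while XC' = X Pc C';
# (I - Pc) and Pc are orthogonal projectors, which for a matrix-normal X
# with independent columns yields independence of the two statistics.
M1 = np.eye(n) - Pc
M2 = Pc
```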

The distribution of $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ is a function of $\boldsymbol{S}$, which appears because the inner product is estimated. However, $\boldsymbol{\Sigma}$, which defines the inner product, may be regarded as a nuisance parameter and, therefore, it is of interest to neglect the variation in $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ which is due to $\boldsymbol{S}$ and to compare the estimator with the class of estimators proposed by Potthoff and Roy (1964):
$$\boldsymbol{K} \widehat{\boldsymbol{B}}_{G} \boldsymbol{L}=\boldsymbol{K}\left(\boldsymbol{A}^{\prime} \boldsymbol{G}^{-1} \boldsymbol{A}\right)^{-} \boldsymbol{A}^{\prime} \boldsymbol{G}^{-1} \boldsymbol{X} \boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-} \boldsymbol{L},$$ where $\boldsymbol{G}$ is supposed to be a non-random positive definite matrix. One choice is $\boldsymbol{G}=\boldsymbol{I}$. According to Appendix B, Theorem B.19 (i), the distribution of $\boldsymbol{K} \widehat{\boldsymbol{B}}_{G} \boldsymbol{L}$ is matrix normal. Therefore, it can be valuable to compare the moments of $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ with the corresponding moments of $\boldsymbol{K} \widehat{\boldsymbol{B}}_{G} \boldsymbol{L}$ in order to understand how the distribution of $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$ differs from the normal one. Furthermore, one can use a conditional approach concerning $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$, i.e. conditioning with respect to $\boldsymbol{S}$ in $\boldsymbol{K} \widehat{\boldsymbol{B}} \boldsymbol{L}$, since the distribution of $\boldsymbol{S}$ does not involve the parameter $\boldsymbol{B}$.
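A small sketch comparing the two classes of estimators (all matrices below are illustrative, not from the text). One special case is useful as a sanity check: when $\boldsymbol{A}$ is square and nonsingular, the weight matrix cancels, so $\widehat{\boldsymbol{B}}$ and $\widehat{\boldsymbol{B}}_{G}$ coincide for every choice of $\boldsymbol{G}$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, k = 3, 16, 2
A = rng.standard_normal((p, p)) + 3 * np.eye(p)   # square, nonsingular within-design
C = np.kron(np.eye(k), np.ones(n // k))           # k x n between-design
X = rng.standard_normal((p, n))

S = X @ (np.eye(n) - C.T @ np.linalg.inv(C @ C.T) @ C) @ X.T

def B_hat(G):
    """(A'G^{-1}A)^{-1} A'G^{-1} X C'(CC')^{-1} for a positive definite weight G."""
    Gi = np.linalg.inv(G)
    return np.linalg.inv(A.T @ Gi @ A) @ A.T @ Gi @ X @ C.T @ np.linalg.inv(C @ C.T)

B_mle = B_hat(S)          # MLE weight: the random matrix S
B_pr = B_hat(np.eye(p))   # Potthoff-Roy choice G = I
```

Since $(\boldsymbol{A}^{\prime}\boldsymbol{G}^{-1}\boldsymbol{A})^{-1}\boldsymbol{A}^{\prime}\boldsymbol{G}^{-1}=\boldsymbol{A}^{-1}$ for nonsingular $\boldsymbol{A}$, the two estimates agree exactly; for a tall $\boldsymbol{A}$ they generally differ.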
Now the first two moments for $\boldsymbol{K} \widehat{\boldsymbol{B}} L$ are presented.

统计代写|回归分析作业代写Regression Analysis代考|Basic Properties of Estimators

The inverse dispersion matrix can be written $\boldsymbol{\Sigma}^{-1}=\sum_{i=0}^{p-1} c_{i} \boldsymbol{\Sigma}^{i}$, where the coefficients $c_{i}$ are functions of $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{0}=\boldsymbol{I}_{p}$ (a consequence of the Cayley-Hamilton theorem). This representation suggests the following approximation, in which the $c_{i}$ are treated as unknown constants:
$$\boldsymbol{\Sigma}^{-1} \approx \sum_{i=0}^{a-1} c_{i} \boldsymbol{\Sigma}^{i}, \quad \text { for some } a<p .$$
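The finite expansion of $\boldsymbol{\Sigma}^{-1}$ in powers of $\boldsymbol{\Sigma}$ follows from the Cayley-Hamilton theorem, and the coefficients can be read off from the characteristic polynomial. A numerical verification for a small $p$ (the matrix below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
M = rng.standard_normal((p, p))
Sigma = M @ M.T + p * np.eye(p)        # a positive definite dispersion matrix

# characteristic polynomial: Sigma^p + a[1] Sigma^{p-1} + ... + a[p] I = 0
a = np.poly(Sigma)                     # coefficients, a[0] = 1
# Cayley-Hamilton gives Sigma^{-1} = sum_{i=0}^{p-1} c_i Sigma^i with
# c_i = -a[p-1-i] / a[p]
powers = [np.linalg.matrix_power(Sigma, i) for i in range(p)]
c = [-a[p - 1 - i] / a[p] for i in range(p)]
Sigma_inv = sum(ci * Pi for ci, Pi in zip(c, powers))
```

Truncating the sum at $a<p$ terms gives exactly the approximation displayed above.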


统计代写|回归分析作业代写Regression Analysis代考|EBRM3 W and Its MLEs


Now
\begin{aligned} \boldsymbol{X}=&\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \quad \boldsymbol{\Sigma}>0, \\ &\mathcal{C}\left(\boldsymbol{A}_{3}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{2}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{1}\right), \end{aligned} is studied, where all the sizes of the matrices are presented in Definition 2.3. Once again the mathematical derivation of the estimators is given first, and then the approach is illustrated. It is interesting to compare the results for the $E B R M_{W}^{3}$ with those for the $E B R M_{B}^{3}$.
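The nested subspace condition $\mathcal{C}(\boldsymbol{A}_{3}) \subseteq \mathcal{C}(\boldsymbol{A}_{2}) \subseteq \mathcal{C}(\boldsymbol{A}_{1})$ is easy to enforce in simulations by post-multiplication; the sketch below generates data from the model (all matrices are illustrative choices, and $\boldsymbol{\Sigma}=\boldsymbol{I}$ is taken purely for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 6, 30
# nested within-designs: C(A3) ⊆ C(A2) ⊆ C(A1), built by post-multiplication
A1 = rng.standard_normal((p, 3))
A2 = A1 @ rng.standard_normal((3, 2))
A3 = A2 @ rng.standard_normal((2, 1))
C1 = rng.standard_normal((3, n))
C2 = rng.standard_normal((2, n))
C3 = rng.standard_normal((1, n))
B1 = rng.standard_normal((3, 3))
B2 = rng.standard_normal((2, 2))
B3 = rng.standard_normal((1, 1))
E = rng.standard_normal((p, n))                 # Sigma = I for illustration
X = A1 @ B1 @ C1 + A2 @ B2 @ C2 + A3 @ B3 @ C3 + E

proj = lambda V: V @ np.linalg.pinv(V)          # orthogonal projector onto C(V)
```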

At the beginning of the mathematical derivation, we completely rely on Sect. $2.6$, where MLEs were obtained for a known $\boldsymbol{\Sigma}$. The likelihood, $L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)$, equals $$L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)=(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} e^{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)^{\prime}\right\}} .$$
Let, as previously in Sect. $2.6$,
$$\boldsymbol{P}_{1}=\boldsymbol{P}_{C_{1}^{\prime}}, \quad \boldsymbol{P}_{2}=\boldsymbol{P}_{Q_{1} C_{2}^{\prime}}, \quad \boldsymbol{P}_{3}=\boldsymbol{P}_{Q_{2} C_{3}^{\prime}}, \quad \boldsymbol{P}_{4}=\boldsymbol{P}_{\left(C_{1}^{\prime}: C_{2}^{\prime}: C_{3}^{\prime}\right)^{o}},$$
where $\boldsymbol{Q}_{1}$ and $\boldsymbol{Q}_{2}$ are defined as in Sect. $2.6$. Thus, the likelihood can be written as follows:
\begin{aligned} &L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right) \\ &\quad=(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \sum_{i=1}^{4} \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right) \boldsymbol{P}_{i}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\}\right\} . \end{aligned}
From Sect. $2.6$ it follows that an upper bound of the likelihood is achieved if a solution can be found to the system of equations consisting of (2.49)-(2.51), i.e. the nested system
\begin{aligned} &\boldsymbol{A}_{1}^{\prime} \boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right) \boldsymbol{P}_{1}=\mathbf{0}, \\ &\boldsymbol{A}_{2}^{\prime} \boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right) \boldsymbol{P}_{2}=\mathbf{0}, \\ &\boldsymbol{A}_{3}^{\prime} \boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right) \boldsymbol{P}_{3}=\mathbf{0} . \end{aligned}
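The decomposition of the likelihood over $\boldsymbol{P}_{1}, \ldots, \boldsymbol{P}_{4}$ requires these to be mutually orthogonal projectors summing to $\boldsymbol{I}_{n}$. The sketch below checks this numerically, taking $\boldsymbol{Q}_{1}=\boldsymbol{I}-\boldsymbol{P}_{1}$ and $\boldsymbol{Q}_{2}=\boldsymbol{I}-\boldsymbol{P}_{1}-\boldsymbol{P}_{2}$ (one consistent choice, stated here as an assumption since the exact definitions are deferred to Sect. 2.6):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
C1 = rng.standard_normal((3, n))
C2 = rng.standard_normal((2, n))
C3 = rng.standard_normal((1, n))
proj = lambda V: V @ np.linalg.pinv(V)   # orthogonal projector onto C(V)

P1 = proj(C1.T)
Q1 = np.eye(n) - P1
P2 = proj(Q1 @ C2.T)                     # projects onto Q1 C(C2') ⟂ C(C1')
Q2 = np.eye(n) - P1 - P2
P3 = proj(Q2 @ C3.T)
P4 = np.eye(n) - P1 - P2 - P3            # projector onto C(C1':C2':C3')^o
```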

统计代写|回归分析作业代写Regression Analysis代考|Reasons for Using Both

One may question whether it is necessary to present results for the $E B R M_{B}^{3}$ and $E B R M_{W}^{3}$ in parallel. Since there is a one-to-one correspondence between the models, it should be possible to derive the maximum likelihood estimators and their properties from either set-up. Consider the $E B R M_{B}^{3}$
$$\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}, \quad \mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right) .$$
Then, according to Appendix B, Theorem B.3 (iii), $\mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)=\mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{D}_{2}^{\prime}\right)$, where $\boldsymbol{D}_{2}$ is any matrix satisfying $\mathcal{C}\left(\boldsymbol{D}_{2}^{\prime}\right)=\mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right) \cap \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right)^{\perp}$, and $\mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right)=\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{D}_{1}^{\prime}\right)$, where $\boldsymbol{D}_{1}$ is any matrix satisfying $\mathcal{C}\left(\boldsymbol{D}_{1}^{\prime}\right)=\mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \cap \mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right)^{\perp}$, implying that
$$\boldsymbol{C}_{1}^{\prime}=\left(\boldsymbol{C}_{2}^{\prime}: \boldsymbol{D}_{2}^{\prime}\right) \boldsymbol{H}_{1}, \quad \boldsymbol{C}_{2}^{\prime}=\left(\boldsymbol{C}_{3}^{\prime}: \boldsymbol{D}_{1}^{\prime}\right) \boldsymbol{H}_{2}$$
for some non-singular matrices $\boldsymbol{H}_{1}$ and $\boldsymbol{H}_{2}$. Hence, for any choice of basis $\boldsymbol{D}_{1}$ and $\boldsymbol{D}_{2}$ there exist non-singular matrices $\boldsymbol{H}_{1}$ and $\boldsymbol{H}_{2}$ such that the model $\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}$ can be presented as $$\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{\Theta}_{1}\left(\boldsymbol{C}_{2}^{\prime}: \boldsymbol{D}_{2}^{\prime}\right)^{\prime}+\boldsymbol{A}_{2} \boldsymbol{\Theta}_{2}\left(\boldsymbol{C}_{3}^{\prime}: \boldsymbol{D}_{1}^{\prime}\right)^{\prime}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E},$$
where $\boldsymbol{\Theta}_{i}=\boldsymbol{B}_{i} \boldsymbol{H}_{i}^{\prime}, i=1,2$. Now let $\boldsymbol{\Theta}_{1}=\left(\boldsymbol{\Theta}_{11}: \boldsymbol{\Theta}_{12}\right)$ and $\boldsymbol{\Theta}_{2}=\left(\boldsymbol{\Theta}_{21}: \boldsymbol{\Theta}_{22}\right)$, where the partitions correspond to the partitions $\left(\boldsymbol{C}_{2}^{\prime}: \boldsymbol{D}_{2}^{\prime}\right)^{\prime}$ and $\left(\boldsymbol{C}_{3}^{\prime}: \boldsymbol{D}_{1}^{\prime}\right)^{\prime}$, respectively. Moreover, let $\boldsymbol{\Psi}_{1}=\boldsymbol{\Theta}_{11} \boldsymbol{H}_{2}^{\prime}$ and then partition $\boldsymbol{\Psi}_{1}^{\prime}=\left(\boldsymbol{\Psi}_{11}: \boldsymbol{\Psi}_{12}\right)$ so that it fits $\left(\boldsymbol{C}_{3}^{\prime}: \boldsymbol{D}_{1}^{\prime}\right)^{\prime}$. All these definitions and operations lead to (3.40) being equivalent to
\begin{aligned} \boldsymbol{X}=&\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}: \boldsymbol{A}_{3}\right)\left(\boldsymbol{\Psi}_{11}^{\prime}: \boldsymbol{\Theta}_{21}^{\prime}: \boldsymbol{B}_{3}^{\prime}\right)^{\prime} \boldsymbol{C}_{3}+\left(\boldsymbol{A}_{1}: \boldsymbol{A}_{2}\right)\left(\boldsymbol{\Psi}_{12}^{\prime}: \boldsymbol{\Theta}_{22}^{\prime}\right)^{\prime} \boldsymbol{D}_{1} \\ &+\boldsymbol{A}_{1} \boldsymbol{\Theta}_{12} \boldsymbol{D}_{2}+\boldsymbol{E}, \end{aligned} which is an $E B R M_{W}^{3}$. Hence, it has been shown how, by a reparametrization, any $E B R M_{B}^{3}$ can be formulated as an $E B R M_{W}^{3}$. The opposite is, of course, also true, i.e. any $E B R M_{W}^{3}$ can be formulated as an $E B R M_{B}^{3}$. In principle one might believe that it would be sufficient, for example, to consider only the $E B R M_{B}^{3}$. However, there are some problems with this approach. Firstly, several reparametrizations and several partitions are involved, which means that individual parameter estimates may be difficult to interpret; secondly, all MLEs are non-linear estimators and, therefore, it is not so easy to work out how to transmit properties, for example knowledge about moments of the MLEs, from one model to another, i.e. from the $E B R M_{B}^{3}$ to the $E B R M_{W}^{3}$ or vice versa. Thus, to achieve greater ease of application and clarity, one should work with the two different types of models separately.
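The reparametrization hinges on constructing $\boldsymbol{D}_{2}$ with $\mathcal{C}(\boldsymbol{D}_{2}^{\prime})=\mathcal{C}(\boldsymbol{C}_{1}^{\prime}) \cap \mathcal{C}(\boldsymbol{C}_{2}^{\prime})^{\perp}$ and a non-singular $\boldsymbol{H}_{1}$ with $\boldsymbol{C}_{1}^{\prime}=(\boldsymbol{C}_{2}^{\prime}: \boldsymbol{D}_{2}^{\prime})\boldsymbol{H}_{1}$. A numerical sketch of this construction (nested designs are illustrative; $\boldsymbol{D}_{2}$ is obtained here from a projector difference, one of several possible bases):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
C1 = rng.standard_normal((3, n))            # rank 3
C2 = rng.standard_normal((2, 3)) @ C1       # C(C2') ⊆ C(C1')
proj = lambda V: V @ np.linalg.pinv(V)      # orthogonal projector onto C(V)

# D2': a basis of C(C1') ∩ C(C2')^⊥, read off from the projector difference
U, s, _ = np.linalg.svd(proj(C1.T) - proj(C2.T))
D2t = U[:, s > 1e-8]                        # n x 1 here (rank 3 minus rank 2)

# solve (C2' : D2') H1 = C1' by least squares; the solution is exact
H1 = np.linalg.pinv(np.hstack([C2.T, D2t])) @ C1.T
```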

统计代写|回归分析作业代写Regression Analysis代考|Problems

1 For the $B R M$, calculate the residuals $\widehat{\boldsymbol{R}}_{11}, \widehat{\boldsymbol{R}}_{21}$ and $\widehat{\boldsymbol{R}}_{2}$ in (3.7), (3.8) and (3.9), respectively, and compare with the data in Table $2.1$. What conclusions can be drawn?
2 (GMANOVA + MANOVA) Let $$\boldsymbol{X}=\boldsymbol{A} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{E},$$
where the observation matrix $\boldsymbol{X}: p \times n$, the unknown mean parameter matrices $\boldsymbol{B}_{1}: q \times k_{1}$ and $\boldsymbol{B}_{2}: p \times k_{2}$, the three known design matrices $\boldsymbol{A}: p \times q$, $\boldsymbol{C}_{1}: k_{1} \times n$ and $\boldsymbol{C}_{2}: k_{2} \times n$, and the error matrix $\boldsymbol{E}$ form the model. Moreover, let $\boldsymbol{E}$ be normally distributed with independent columns, with mean $\mathbf{0}$, and an unknown positive definite dispersion matrix $\boldsymbol{\Sigma}$ for the elements within each column of $\boldsymbol{E}$. Find maximum likelihood estimates of the parameters. Can the model be used when there is a MANOVA model with some specific background information? Can the model be used when there is a GMANOVA model $(B R M)$ with some specific background information?
3 Let
$$\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{E}, \quad \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right),$$ where the observation matrix $\boldsymbol{X}: p \times n$, the unknown mean parameter matrices $\boldsymbol{B}_{1}: q_{1} \times k_{1}$ and $\boldsymbol{B}_{2}: q_{2} \times k_{2}$, the four known design matrices $\boldsymbol{A}_{1}: p \times q_{1}, \boldsymbol{A}_{2}: p \times q_{2}$, $\boldsymbol{C}_{1}: k_{1} \times n$ and $\boldsymbol{C}_{2}: k_{2} \times n$, and the error matrix $\boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I})$, where $\boldsymbol{\Sigma}>0$, form the model. Find maximum likelihood estimates of the parameters.
4 In Problems 2 and 3 suppose that $\boldsymbol{\Sigma}=\boldsymbol{I}$ and estimate the parameters in both models. Moreover, generate $\boldsymbol{X}_{o}$ according to the models in Problems 2 and 3 (choose matrices $\boldsymbol{A}_{i}, \boldsymbol{B}_{i}, \boldsymbol{C}_{i}$ and $\boldsymbol{\Sigma}$). Compare the unweighted estimates (assuming $\boldsymbol{\Sigma}=\boldsymbol{I}$) with the MLEs, assuming $\boldsymbol{\Sigma}$ to be an unknown parameter.
5 In Problem 3 replace $\mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$ by $\mathcal{C}\left(\boldsymbol{A}_{2}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{1}\right)$ and derive the parameter estimators.


统计代写|回归分析作业代写Regression Analysis代考| The Basic Ideas of Obtaining MLEs


统计代写|回归分析作业代写Regression Analysis代考|Unknown Dispersion

In this chapter, the maximum likelihood estimators of all the parameters in the $B R M, E B R M_{W}^{3}$ and $E B R M_{B}^{3}$ are derived when the dispersion is supposed to be unknown; i.e. when following the statistical paradigm, it is supposed that the experiment has been designed and accomplished, and now it is time to estimate the parameters of the model. Only the estimators are obtained, while statistical properties such as their distributions are left to subsequent chapters. The subject matter of this chapter is essential for the book and it is worthwhile devoting some time to reflection on the derivations and results.

统计代写|回归分析作业代写Regression Analysis代考|BRM and Its MLEs

Let
$$\boldsymbol{X}=\boldsymbol{A} \boldsymbol{B C}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\boldsymbol{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \quad \boldsymbol{\Sigma}>0,$$
where all matrices are specified in Definition 2.1. From general maximum likelihood theory we know that estimators are consistent. This means that the MLE of $\boldsymbol{\Sigma}$ should converge to $\boldsymbol{\Sigma}$ and, therefore, intuitively, the estimators of $\boldsymbol{B}$ with a known or an estimated $\boldsymbol{\Sigma}$ should be of a similar form. Let us restate the appropriate part of Fig. $2.6$ as Fig. 3.1, which will serve as a basis for understanding how subspaces are connected to the MLEs.

First a strict mathematical treatment of the model is presented and thereafter the mathematics is illustrated graphically in Fig. 3.2. It follows from (3.1) that the likelihood, $L(\boldsymbol{B}, \boldsymbol{\Sigma})$, is given by
$$L(\boldsymbol{B}, \boldsymbol{\Sigma})=(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} e^{-1 / 2 \operatorname{tr}\left[\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}\right]} .$$
Using the results from Sect. $2.4$ when $\boldsymbol{\Sigma}$ was known, the likelihood, $L(\boldsymbol{B}, \boldsymbol{\Sigma})$, in agreement with Fig. 3.1, can be decomposed as
\begin{aligned} L(\boldsymbol{B}, \boldsymbol{\Sigma})=&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A, \Sigma}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right) \boldsymbol{P}_{C^{\prime}}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime} \boldsymbol{P}_{A, \Sigma}^{\prime}\right\}\right\} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}\right\}\right\} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right) \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \boldsymbol{X}_{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)^{\prime}\right\}\right\} \end{aligned}
This expression is smaller than or equal to the profile likelihood
\begin{aligned} &(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ &\times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}+\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right) \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \boldsymbol{X}_{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)^{\prime}\right)\right\}\right\} \end{aligned}
with equality if and only if
\begin{aligned} \boldsymbol{A} \boldsymbol{B} \boldsymbol{C} &=\boldsymbol{P}_{A, \Sigma} \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \\ &=\boldsymbol{A}\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-} \boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{X}_{o} \boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-} \boldsymbol{C} . \end{aligned}
In Fig. 3.1 this implies that the part of the likelihood which is connected to the mean ($E[\boldsymbol{X}]=\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}$) has been eliminated. Moreover, from Appendix B, Theorem B.9 (iv) it follows that (3.2) is smaller than or equal to $$(2 \pi)^{-n p / 2}\left|\left(\boldsymbol{S}_{o}+\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right) \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \boldsymbol{X}_{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)^{\prime}\right) / n\right|^{-n / 2} \exp \{-n p / 2\},$$

where $\boldsymbol{S}_{o}=\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}$, which is obtained if
$$n \boldsymbol{\Sigma}=\boldsymbol{S}_{o}+\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right) \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \boldsymbol{X}_{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)^{\prime}$$
is inserted in (3.2). Since the right-hand side of (3.5) equals $\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}$, there is no problem applying Theorem B.9 (iv) in Appendix B. It is less clear whether this theorem can be applied for optimization purposes if, instead of $\boldsymbol{B}$, we have a function of $\boldsymbol{\Sigma}$, i.e. $\boldsymbol{B}(\boldsymbol{\Sigma})$, which sometimes appears.

Using (2.41) and (2.42), it follows from (3.5) that $n \boldsymbol{\Sigma}=\boldsymbol{R}_{11} \boldsymbol{R}_{11}^{\prime}+\boldsymbol{R}_{21} \boldsymbol{R}_{21}^{\prime}+\boldsymbol{R}_{2} \boldsymbol{R}_{2}^{\prime}$, among other things showing that the whole tensor space $\mathcal{R}^{n} \otimes \mathcal{R}^{p}$ is included in the estimation process. Both Eqs. (3.3) and (3.5) are complicated functions in $\boldsymbol{\Sigma}$, but fortunately the only requirement for finding explicit MLEs is a few straightforward calculations. Pre-multiplying (3.5) by $\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}$ yields
$$n A^{\prime}=A^{\prime} \Sigma^{-1} S_{o}$$
and thus under the assumption that $S_{o}^{-1}$ exists
$$\widehat{A^{\prime} \Sigma^{-1}}=n A^{\prime} S_{o}^{-1}$$
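Collecting the pieces, the explicit estimators replace $\boldsymbol{\Sigma}$ by $\boldsymbol{S}$ in the projector (cf. Theorem 3.1). The sketch below (all matrices illustrative) verifies the resulting identity: $n\widehat{\boldsymbol{\Sigma}}$ from (3.5), with $\boldsymbol{P}_{A,S}$ in place of $\boldsymbol{P}_{A,\Sigma}$, equals the residual sum of squares $(\boldsymbol{X}_{o}-\boldsymbol{A}\widehat{\boldsymbol{B}}\boldsymbol{C})(\boldsymbol{X}_{o}-\boldsymbol{A}\widehat{\boldsymbol{B}}\boldsymbol{C})^{\prime}$:

```python
import numpy as np

rng = np.random.default_rng(6)
p, q, k, n = 3, 2, 2, 20
A = rng.standard_normal((p, q))
C = np.kron(np.eye(k), np.ones(n // k))
X = rng.standard_normal((p, n))              # any data matrix; illustrative

Pc = C.T @ np.linalg.inv(C @ C.T) @ C
S = X @ (np.eye(n) - Pc) @ X.T               # the S_o of the text
Si = np.linalg.inv(S)
Pas = A @ np.linalg.inv(A.T @ Si @ A) @ A.T @ Si          # P_{A,S}
B_hat = np.linalg.inv(A.T @ Si @ A) @ A.T @ Si @ X @ C.T @ np.linalg.inv(C @ C.T)
Sigma_hat = (S + (np.eye(p) - Pas) @ X @ Pc @ X.T @ (np.eye(p) - Pas).T) / n
R = X - A @ B_hat @ C                        # residual X_o - A B-hat C
```

The identity holds because $\boldsymbol{X}_{o}-\boldsymbol{A}\widehat{\boldsymbol{B}}\boldsymbol{C}=\boldsymbol{X}_{o}(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}})+(\boldsymbol{I}-\boldsymbol{P}_{A,S})\boldsymbol{X}_{o}\boldsymbol{P}_{C^{\prime}}$ and the cross terms vanish.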

统计代写|回归分析作业代写Regression Analysis代考|EBRM3 B and Its MLEs

Let
\begin{aligned} \boldsymbol{X}=&\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \quad \boldsymbol{\Sigma}>0, \\ &\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right), \end{aligned} where all sizes of matrices are given in Definition 2.2. Following the structure of the previous section, first the derivation of the MLEs is performed in a rigorous way, culminating in Theorem 3.2, and thereafter the mathematics is illustrated in Fig. 3.4. Note that a major part of the derivation for obtaining MLEs has already been carried out in Sect. 2.5. The likelihood, $L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)$, equals
$$L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)=(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} e^{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)^{\prime}\right\}} .$$ Let, as in Sect. 2.5, $$\boldsymbol{P}_{1}=\boldsymbol{I}-\boldsymbol{Q}_{1}^{\prime}=\boldsymbol{P}_{A_{1}, \Sigma}, \quad \boldsymbol{P}_{2}=\boldsymbol{I}-\boldsymbol{Q}_{2}^{\prime}=\boldsymbol{P}_{Q_{1} A_{2}, \Sigma}, \quad \boldsymbol{P}_{3}=\boldsymbol{I}-\boldsymbol{Q}_{3}^{\prime}=\boldsymbol{P}_{Q_{2} Q_{1}^{\prime} A_{3}, \Sigma} ,$$
$\boldsymbol{S}_{1}=\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C_{1}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}$, or $\boldsymbol{X}\left(\boldsymbol{I}-\boldsymbol{P}_{C_{1}^{\prime}}\right) \boldsymbol{X}^{\prime}$,
$\boldsymbol{S}_{2}=\boldsymbol{S}_{1}+\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o}\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} \boldsymbol{Q}_{1}$, or $\boldsymbol{S}_{1}+\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right) \boldsymbol{X}^{\prime} \boldsymbol{Q}_{1}$,
$\boldsymbol{S}_{3}=\boldsymbol{S}_{2}+\boldsymbol{Q}_{2}^{\prime} \boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o}\left(\boldsymbol{P}_{C_{2}^{\prime}}-\boldsymbol{P}_{C_{3}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} \boldsymbol{Q}_{1} \boldsymbol{Q}_{2}$, or $\boldsymbol{S}_{2}+\boldsymbol{Q}_{2}^{\prime} \boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\left(\boldsymbol{P}_{C_{2}^{\prime}}-\boldsymbol{P}_{C_{3}^{\prime}}\right) \boldsymbol{X}^{\prime} \boldsymbol{Q}_{1} \boldsymbol{Q}_{2}$.
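The sums of squares $\boldsymbol{S}_{1}, \boldsymbol{S}_{2}, \boldsymbol{S}_{3}$ grow in the Loewner order, since each increment has the form $\boldsymbol{Q}^{\prime}\boldsymbol{X}(\boldsymbol{P}-\boldsymbol{P}_{*})\boldsymbol{X}^{\prime}\boldsymbol{Q}$ with a positive semi-definite projector difference (the nestedness of the $\boldsymbol{C}_{i}$ is essential). A sketch with $\boldsymbol{\Sigma}=\boldsymbol{I}$, chosen purely for illustration so that the weighted projectors reduce to ordinary orthogonal ones:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 4, 24
C1 = rng.standard_normal((3, n))
C2 = rng.standard_normal((2, 3)) @ C1    # C(C2') ⊆ C(C1')
C3 = rng.standard_normal((1, 2)) @ C2    # C(C3') ⊆ C(C2')
A1 = rng.standard_normal((p, 3))
A2 = rng.standard_normal((p, 2))
X = rng.standard_normal((p, n))
proj = lambda V: V @ np.linalg.pinv(V)   # orthogonal projector onto C(V)

Pc1, Pc2, Pc3 = proj(C1.T), proj(C2.T), proj(C3.T)
S1 = X @ (np.eye(n) - Pc1) @ X.T
Q1 = np.eye(p) - proj(A1)                # with Sigma = I, Q1 is symmetric
S2 = S1 + Q1.T @ X @ (Pc1 - Pc2) @ X.T @ Q1
Q2 = np.eye(p) - proj(Q1 @ A2)
S3 = S2 + Q2.T @ Q1.T @ X @ (Pc2 - Pc3) @ X.T @ Q1 @ Q2
```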
Adopting the results from Sect. $2.5$ when $\boldsymbol{\Sigma}$ is known, it is seen that the likelihood can be factored in the following way:
\begin{aligned} L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)=&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{1}\right\}\right\} \exp \left\{-1 / 2 \operatorname{tr}\left\{\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{1}\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)\right\}\right\} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E\left[\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\right]\right)\left(\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E\left[\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\right]\right)^{\prime}\right\}\right\} \\ =&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{2}\right\}\right\} \exp \left\{-1 / 2 \operatorname{tr}\left\{\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{1}\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)\right\}\right\} \end{aligned}

统计代写|回归分析作业代写Regression Analysis代考|BRM and Its MLEs

X=一种乙C+和,和∼ñp,n(0,Σ,一世),Σ>0,

L(\boldsymbol{B}, \boldsymbol{\Sigma})=(2 \pi)^{-np / 2}|\boldsymbol{\Sigma}|^{-n / 2} e ^{-1 / 2 \mathrm{tr}\left[\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X} {o}-ABC\right)\left(\boldsymbol{X} { \theta}-ABC\right)^{\prime}\right]} 。 üs一世nG吨H和r和s你一世吨sFr这米小号和C吨.2.4在H和nΣ在一种s到n这在n,吨H和一世一世到和一世一世H这这d,大号(乙,Σ),一世n一种Gr和和米和n吨在一世吨HF一世G.3.1,C一种nb和d和C这米p这s和d一种s \begin{aligned} L(\boldsymbol{B}, \boldsymbol{\Sigma})=&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}{A, \Sigma}\left(\boldsymbol{X} {o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}^{\prime} \boldsymbol{P}{C^{\prime}}\left(\boldsymbol{X}{o}-\ boldsymbol{A B C}\right)^{\prime} \boldsymbol{P}{A, \Sigma}^{\prime}\right}\right}\right.\ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{-1} \boldsymbol{X}{o}\left(\boldsymbol{I}-\boldsymbol{P}{C^{\prime}}\右) \boldsymbol{X}{o}^{\prime}\right}\right} \ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{ -1}\left(\boldsymbol{I}-\boldsymbol{P}{A,\Sigma}\right) \boldsymbol{X}{o} \boldsymbol{P}{C^{\prime}} \boldsymbol{X}{o}^{\prime}\left(\boldsymbol{I}-\粗体符号{P}{A, \Sigma}\right)^{\prime}\right}\right} \end{aligned}\begin{aligned} L(\boldsymbol{B}, \boldsymbol{\Sigma})=&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}{A, \Sigma}\left(\boldsymbol{X}{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}^{\prime} \boldsymbol{P}{C^{\prime}}\left(\boldsymbol{X}{o}-\boldsymbol{A B C}\right)^{\prime} \boldsymbol{P}{A, \Sigma}^{\prime}\right}\right}\right.\ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{-1} \boldsymbol{X}{o}\left(\boldsymbol{I}-\boldsymbol{P}{C^{\prime}}\right) \boldsymbol{X}{o}^{\prime}\right}\right} \ & \times \exp \left{-1 / 2 \operatorname{tr}\left{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right) 
\boldsymbol{X}{o} \boldsymbol{P}{C^{\prime}} \boldsymbol{X}{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right)^{\prime}\right}\right} \end{aligned}吨H一世s和Xpr和ss一世这n一世ss米一种一世一世和r吨H一种n这r和q你一种一世吨这吨H和pr这F一世一世和一世一世到和一世一世H这这d\begin{aligned} &(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \ &\times \exp \left{-1 / 2 \operatorname{tr }\left{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}{o}\left(\boldsymbol{I}-\boldsymbol{P}{C^{\prime}}\right ) \boldsymbol{X}{o}^{\prime}+\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right) \boldsymbol{X}{o} \boldsymbol{P }{C^{\prime}} \boldsymbol{X}{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right)^{\prime} \right)\right}\right} \end{对齐}\begin{aligned} &(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \ &\times \exp \left{-1 / 2 \operatorname{tr }\left{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}{o}\left(\boldsymbol{I}-\boldsymbol{P}{C^{\prime}}\right ) \boldsymbol{X}{o}^{\prime}+\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right) \boldsymbol{X}{o} \boldsymbol{P }{C^{\prime}} \boldsymbol{X}{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right)^{\prime} \right)\right}\right} \end{对齐}在一世吨H和q你一种一世一世吨是一世F一种nd这n一世是一世F一种乙C=磷一种,ΣX这磷C′ =一种(一种′Σ−1一种)−一种′Σ−1X这C′(CC′)−C一世nF一世G.3.1吨H一世s一世米p一世一世和s吨H一种吨吨H和p一种r吨这F吨H和一世一世到和一世一世H这这d在H一世CH一世sC这nn和C吨和d吨这吨H和米和一种n(和[X]=一种乙C)H一种sb和和n和一世一世米一世n一种吨和d.米这r和这v和r,Fr这米一种pp和nd一世X乙,吨H和这r和米乙.9(一世v)一世吨F这一世一世这在s吨H一种吨(3.2)一世ss米一种一世一世和r吨H一种n这r和q你一种一世吨这(2 \pi)^{-np / 2}\left|\left(\boldsymbol{S}{o}+\left(\boldsymbol{I}-\boldsymbol{P}{A, \Sigma}\right) \boldsymbol{X}{o} \boldsymbol{P}{C^{\prime}} \boldsymbol{X}{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{ A, \Sigma}\right)^{\prime}\right) / n\right|^{-n / 2} \exp {-np / 2}

$$n \boldsymbol{\Sigma}=\boldsymbol{S}_{o}+\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right) \boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}} \boldsymbol{X}_{o}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)^{\prime}$$
is inserted into (3.2). Since the right-hand side of (3.5) equals $\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}$, it is unproblematic to apply Theorem B.9 (iv) in Appendix B. It is less clear whether the theorem can be used for optimization purposes if, instead of $\boldsymbol{B}$, we have a function of $\boldsymbol{\Sigma}$, i.e. $\boldsymbol{B}(\boldsymbol{\Sigma})$, which sometimes occurs. Using (2.41) and (2.42), it follows from (3.5) that $n \widehat{\boldsymbol{\Sigma}}=\boldsymbol{R}_{1} \boldsymbol{R}_{1}^{\prime}+\boldsymbol{R}_{2} \boldsymbol{R}_{2}^{\prime}+\boldsymbol{R}_{3} \boldsymbol{R}_{3}^{\prime}$, which, among other things, shows that the whole tensor space $\mathcal{R}^{n} \otimes \mathcal{R}^{p}$ enters the estimation process. Both equations, (3.3) and (3.5), are complicated functions of $\boldsymbol{\Sigma}$, but fortunately only a few straightforward calculations are needed to find explicit MLEs. Pre-multiplying (3.5) by $\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}$ yields

$$n \boldsymbol{A}^{\prime}=\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{o}$$

and hence, under the assumption that $\boldsymbol{S}_{o}^{-1}$ exists,

$$\widehat{\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}}=n \boldsymbol{A}^{\prime} \boldsymbol{S}_{o}^{-1}.$$

统计代写|回归分析作业代写Regression Analysis代考|EBRM3 B and Its MLEs

Consider the model

$$\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \quad \boldsymbol{\Sigma}>0,$$

where $\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$ and the sizes of all matrices are given in Definition 2.2. Following the structure of the previous section, the derivation of the MLEs is first carried out in a rigorous manner, ending with Theorem 3.2; the mathematics behind it is illustrated in Fig. 3.4. Note that the main part of the derivation for obtaining the MLEs has already been presented in Sect. 2.5. The likelihood, $L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)$, equals

$$L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)=(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} e^{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}-\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}-\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}\right)^{\prime}\right\}}.$$

Let, as in Sect. 2.5,

$$\boldsymbol{P}_{1}=\boldsymbol{I}-\boldsymbol{Q}_{1}^{\prime}=\boldsymbol{P}_{A_{1}, \Sigma}, \quad \boldsymbol{P}_{2}=\boldsymbol{I}-\boldsymbol{Q}_{2}^{\prime}=\boldsymbol{P}_{Q_{1}^{\prime} A_{2}, \Sigma}, \quad \boldsymbol{P}_{3}=\boldsymbol{I}-\boldsymbol{Q}_{3}^{\prime}=\boldsymbol{P}_{Q_{2}^{\prime} Q_{1}^{\prime} A_{3}, \Sigma},$$

$$\begin{aligned} \boldsymbol{S}_{1}&=\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C_{1}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}, && \text{or } \boldsymbol{X}\left(\boldsymbol{I}-\boldsymbol{P}_{C_{1}^{\prime}}\right) \boldsymbol{X}^{\prime}, \\ \boldsymbol{S}_{2}&=\boldsymbol{S}_{1}+\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o}\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} \boldsymbol{Q}_{1}, && \text{or } \boldsymbol{S}_{1}+\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\left(\boldsymbol{P}_{C_{1}^{\prime}}-\boldsymbol{P}_{C_{2}^{\prime}}\right) \boldsymbol{X}^{\prime} \boldsymbol{Q}_{1}, \\ \boldsymbol{S}_{3}&=\boldsymbol{S}_{2}+\boldsymbol{Q}_{2}^{\prime} \boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o}\left(\boldsymbol{P}_{C_{2}^{\prime}}-\boldsymbol{P}_{C_{3}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} \boldsymbol{Q}_{1} \boldsymbol{Q}_{2}, && \text{or } \boldsymbol{S}_{2}+\boldsymbol{Q}_{2}^{\prime} \boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\left(\boldsymbol{P}_{C_{2}^{\prime}}-\boldsymbol{P}_{C_{3}^{\prime}}\right) \boldsymbol{X}^{\prime} \boldsymbol{Q}_{1} \boldsymbol{Q}_{2}. \end{aligned}$$
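The nested subspace condition $\mathcal{C}(\boldsymbol{C}_{3}')\subseteq\mathcal{C}(\boldsymbol{C}_{2}')\subseteq\mathcal{C}(\boldsymbol{C}_{1}')$ is what makes the differences $\boldsymbol{P}_{C_{1}'}-\boldsymbol{P}_{C_{2}'}$ and $\boldsymbol{P}_{C_{2}'}-\boldsymbol{P}_{C_{3}'}$ in $\boldsymbol{S}_{2}$ and $\boldsymbol{S}_{3}$ well behaved. A small NumPy sketch (toy data; names ours) checking the resulting projector ordering:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12

# Nested row spaces: rows of C3 sit inside C2, rows of C2 inside C1,
# so C(C3') <= C(C2') <= C(C1')
C3 = rng.standard_normal((1, n))
C2 = np.vstack([C3, rng.standard_normal((1, n))])
C1 = np.vstack([C2, rng.standard_normal((1, n))])

def proj(C):
    # Orthogonal projector on C(C'), i.e. P_{C'} = C'(CC')^{-1} C
    return C.T @ np.linalg.solve(C @ C.T, C)

P1, P2, P3 = proj(C1), proj(C2), proj(C3)

# Nestedness orders the projectors: P3 P2 = P3 and P2 P1 = P2
assert np.allclose(P3 @ P2, P3)
assert np.allclose(P2 @ P1, P2)

# Hence a difference such as P_{C1'} - P_{C2'} is again an orthogonal
# projector (symmetric and idempotent), as used in S_2 and S_3
D = P1 - P2
assert np.allclose(D @ D, D) and np.allclose(D, D.T)
```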
Applying the results of Sect. 2.5, where $\boldsymbol{\Sigma}$ was known, it can be seen that the likelihood can be treated in the following way:

$$\begin{aligned} L\left(\boldsymbol{B}_{1}, \boldsymbol{B}_{2}, \boldsymbol{B}_{3}, \boldsymbol{\Sigma}\right)=&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{1}\right\}\right\} \exp \left\{-1 / 2 \operatorname{tr}\left\{\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{1}\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)\right\}\right\} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E\left[\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\right]\right)\left(\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E\left[\boldsymbol{Q}_{1}^{\prime} \boldsymbol{X}\right]\right)^{\prime}\right\}\right\} \\ =&(2 \pi)^{-n p / 2}|\boldsymbol{\Sigma}|^{-n / 2} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{2}\right\}\right\} \exp \left\{-1 / 2 \operatorname{tr}\left\{\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{1}\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C_{1}^{\prime}}-E[\boldsymbol{X}]\right)\right\}\right\} \end{aligned}$$
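The first step of such a decomposition, splitting off $\boldsymbol{S}_{1}$, uses only the fact that $E[\boldsymbol{X}]\boldsymbol{P}_{C_{1}'}=E[\boldsymbol{X}]$ under the nested subspace condition. Below is a hedged NumPy sketch of that step (toy matrices and dimensions, names ours) confirming that the overall trace splits into an $\boldsymbol{S}_{1}$ part and a mean part:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 12
k1, k2, k3 = 3, 2, 1

# Nested between-individuals designs: rows of C3 inside C2 inside C1
C3 = rng.standard_normal((k3, n))
C2 = np.vstack([C3, rng.standard_normal((k2 - k3, n))])
C1 = np.vstack([C2, rng.standard_normal((k1 - k2, n))])

A1 = rng.standard_normal((p, 3))
A2 = rng.standard_normal((p, 2))
A3 = rng.standard_normal((p, 1))
B1 = rng.standard_normal((3, k1))
B2 = rng.standard_normal((2, k2))
B3 = rng.standard_normal((1, k3))

X = rng.standard_normal((p, n))
Sigma = np.diag(rng.uniform(0.5, 2.0, p))   # an assumed known p.d. dispersion
Si = np.linalg.inv(Sigma)

M = A1 @ B1 @ C1 + A2 @ B2 @ C2 + A3 @ B3 @ C3   # E[X]
PC1 = C1.T @ np.linalg.solve(C1 @ C1.T, C1)       # P_{C_1'}
S1 = X @ (np.eye(n) - PC1) @ X.T

# Because C(C_i') <= C(C_1'), the mean satisfies M P_{C_1'} = M, so the
# trace in the likelihood splits into an S_1 part and a mean part
lhs = np.trace(Si @ (X - M) @ (X - M).T)
rhs = np.trace(Si @ S1) + np.trace((X @ PC1 - M).T @ Si @ (X @ PC1 - M))
assert np.allclose(lhs, rhs)
```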
统计代写|回归分析作业代写Regression Analysis代考|BRM with a Known Dispersion Matrix

It should be stressed that the multivariate model illustrated in Fig. 2.5 is a special case of the model given in (1.9), which will serve as a basic model for the presentation of the subject matter of this book. Before starting the technical presentation, a formal definition of the $BRM$ is provided.

Definition 2.1 ($BRM$) Let $\boldsymbol{X}: p \times n$, $\boldsymbol{A}: p \times q$, $q \leq p$, $\boldsymbol{B}: q \times k$, $\boldsymbol{C}: k \times n$, $r(\boldsymbol{C})+p \leq n$ and $\boldsymbol{\Sigma}: p \times p$ be p.d. Then
$$
X=A B C+E
$$defines the B R M, where \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \boldsymbol{A} and \boldsymbol{C} are known matrices, and \boldsymbol{B} and \boldsymbol{\Sigma} are unknown parameter matrices. The condition r(\boldsymbol{C})+p \leq n is an estimability condition when \boldsymbol{\Sigma} is unknown. However, for ease of presentation in this section, it is assumed that the dispersion matrix \Sigma is known. The idea is to give a general overview and leave many details for the subsequent sections. For the likelihood, L(\boldsymbol{B}), we have$$
L(\boldsymbol{B}) \propto|\boldsymbol{\Sigma}|^{-n / 2} e^{-1 / 2 \operatorname{tr}\left[\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}\right]} .
$$From (2.16) it is seen that there exists a design matrix $\boldsymbol{A}$ which describes the expectation of the rows of $\boldsymbol{X}$ (a within-individuals design matrix), as well as a design matrix $\boldsymbol{C}$ which describes the mean of the columns of $\boldsymbol{X}$ (a between-individuals design matrix). It is known that if one pre- and post-multiplies a matrix, a bilinear transformation is performed. Thus, in a comparison of (1.7) and (2.16), instead of a linear model in (1.7), there is a bilinear one in (2.16). The previous techniques used when $\mathcal{R}^{n}$ was decomposed into $\mathcal{C}\left(\boldsymbol{C}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)^{\perp}$ are adopted; i.e. due to bilinearity the tensor product $\mathcal{R}^{p} \otimes \mathcal{R}^{n}$ is decomposed as

$$\left(\mathcal{C}(\boldsymbol{A}) \otimes \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)\right) \boxplus\left(\mathcal{C}(\boldsymbol{A}) \otimes \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)^{\perp}\right) \boxplus\left(\mathcal{C}(\boldsymbol{A})^{\perp} \otimes \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)\right) \boxplus\left(\mathcal{C}(\boldsymbol{A})^{\perp} \otimes \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)^{\perp}\right).$$

Let the projections $\boldsymbol{P}_{A, \Sigma}=\boldsymbol{A}\left(\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}\right)^{-} \boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}$ and $\boldsymbol{P}_{C^{\prime}}$ be as before (see Appendix A, Sect. A.7).
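A short NumPy sketch (toy data; an assumed p.d. $\boldsymbol{\Sigma}$, names ours) of the projector properties that drive the likelihood split, including the $\boldsymbol{\Sigma}^{-1}$-orthogonality $(\boldsymbol{I}-\boldsymbol{P}_{A,\Sigma})'\boldsymbol{\Sigma}^{-1}\boldsymbol{P}_{A,\Sigma}=\mathbf{0}$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, k, n = 5, 2, 3, 8

A = rng.standard_normal((p, q))
C = rng.standard_normal((k, n))
G = rng.standard_normal((p, p))
Sigma = G @ G.T + p * np.eye(p)     # an assumed known p.d. dispersion
Si = np.linalg.inv(Sigma)

# P_{A,Sigma} projects on C(A), orthogonally w.r.t. the Sigma^{-1} inner product
PA = A @ np.linalg.solve(A.T @ Si @ A, A.T @ Si)
PC = C.T @ np.linalg.solve(C @ C.T, C)

assert np.allclose(PA @ PA, PA)      # idempotent
assert np.allclose(PA @ A, A)        # acts as the identity on C(A)
assert np.allclose(PC @ PC, PC) and np.allclose(PC, PC.T)

# Sigma^{-1}-orthogonality used to split the likelihood into two factors:
# (I - P_{A,Sigma})' Sigma^{-1} P_{A,Sigma} = 0
Z = (np.eye(p) - PA).T @ Si @ PA
assert np.allclose(Z, np.zeros((p, p)))
```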
It appears that the likelihood can be decomposed as follows (omitting the proportionality constant $(2 \pi)^{-n p / 2}$):

$$\begin{aligned} L(\boldsymbol{B}) \propto &|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A, \Sigma}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime} \boldsymbol{P}_{A, \Sigma}^{\prime}\right\}\right\} \\ & \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}^{\prime}\right)\right\}\right\}, \end{aligned}$$

since $\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}^{\prime}\right) \boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A, \Sigma}=\mathbf{0}$. Thus, a decomposition of $\mathcal{R}^{p}$ into two orthogonal subspaces has been utilized. Continuing as in the linear case, i.e. using $\boldsymbol{P}_{C^{\prime}}$ and $\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}$, the following expression for the likelihood is obtained:
$$
\begin{aligned}
&L(\boldsymbol{B}) \propto|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A, \Sigma}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right) \boldsymbol{P}_{C^{\prime}}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime} \boldsymbol{P}_{A, \Sigma}^{\prime}\right\}\right\} \\
&\quad \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A, \Sigma}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime} \boldsymbol{P}_{A, \Sigma}^{\prime}\right\}\right\} \\
&\quad \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right) \boldsymbol{P}_{C^{\prime}}\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}^{\prime}\right)\right\}\right\} \\
&\quad \times \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}\right)^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{A, \Sigma}^{\prime}\right)\right\}\right\}
\end{aligned}
$$

统计代写|回归分析作业代写Regression Analysis代考|EBRMm B with a Known Dispersion Matrix

In Sect. 1.5 two extensions of the $BRM$ were presented, i.e. the $EBRM_{B}^{m}$ and $EBRM_{W}^{m}$, together with examples of the application of these models. In this section the reader is introduced to the mathematics concerning the $EBRM_{B}^{m}$, with $m=3$, which will also be used later when studying the model without a known dispersion matrix. Now (2.16) is formally generalized and the $EBRM_{B}^{m}$ is specified in detail.

Definition 2.2 ($EBRM_{B}^{m}$) Let $\boldsymbol{X}: p \times n$, $\boldsymbol{A}_{i}: p \times q_{i}$, $q_{i} \leq p$, $\boldsymbol{B}_{i}: q_{i} \times k_{i}$, $\boldsymbol{C}_{i}: k_{i} \times n$, $i=1,2, \ldots, m$, $r\left(\boldsymbol{C}_{1}\right)+p \leq n$, $\mathcal{C}\left(\boldsymbol{C}_{i}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{i-1}^{\prime}\right)$, $i=2,3, \ldots, m$, and $\boldsymbol{\Sigma}: p \times p$ be p.d. Then

$$\boldsymbol{X}=\sum_{i=1}^{m} \boldsymbol{A}_{i} \boldsymbol{B}_{i} \boldsymbol{C}_{i}+\boldsymbol{E}$$

defines the $EBRM_{B}^{m}$, where $\boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I})$, $\left\{\boldsymbol{A}_{i}\right\}$ and $\left\{\boldsymbol{C}_{i}\right\}$ are known matrices, and $\left\{\boldsymbol{B}_{i}\right\}$ and $\boldsymbol{\Sigma}$ are unknown parameter matrices.

In the present book it is usually assumed that $m=2,3$, and in this section $\boldsymbol{\Sigma}$ is supposed to be known. In that case, $r\left(\boldsymbol{C}_{1}\right)+p \leq n$ and $\mathcal{C}\left(\boldsymbol{C}_{i}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{i-1}^{\prime}\right)$, $i=2,3, \ldots, m$, are not needed when estimating $\boldsymbol{B}_{i}$. However, since the results from this chapter will be utilized in the next chapter, it is assumed that $\mathcal{C}\left(\boldsymbol{C}_{i}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{i-1}^{\prime}\right)$, $i=2,3, \ldots, m$, holds. Thus, the following model will be handled:
$$
\boldsymbol{X}=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}),
$$where $\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$, $\boldsymbol{A}_{i}: p \times q_{i}$, the parameter $\boldsymbol{B}_{i}: q_{i} \times k_{i}$ is unknown, $\boldsymbol{C}_{i}: k_{i} \times n$, and the dispersion matrix $\boldsymbol{\Sigma}$ is supposed to be known. It has already been noted in Sect. 1.5 that without the subspace condition on $\mathcal{C}\left(\boldsymbol{C}_{i}^{\prime}\right)$, we would have the general "sum of profiles model" (a multivariate seemingly unrelated regression (SUR) model). Later (2.20) is studied when $\mathcal{C}\left(\boldsymbol{A}_{3}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{2}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{1}\right)$ replaces $\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$, i.e. we have an $EBRM_{W}^{3}$. Since the model under the assumption $\mathcal{C}\left(\boldsymbol{A}_{3}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{2}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{1}\right)$ can, through a reparametrization, be converted to (2.20) and vice versa, i.e. $EBRM_{B}^{3} \rightleftarrows EBRM_{W}^{3}$, the models are in some sense equivalent. However, because of non-linearity in the estimators of the mean parameters, this does not imply that all the results for the models can easily be transferred from one model to the other. From now on, under the nested subspace condition $\mathcal{C}\left(\boldsymbol{C}_{3}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{2}^{\prime}\right) \subseteq \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)$, MLEs will be derived when $\boldsymbol{\Sigma}>0$ is known. The likelihood is proportional to
$$
|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\}\right\}
$$where $E[\boldsymbol{X}]=\boldsymbol{A}_{1} \boldsymbol{B}_{1} \boldsymbol{C}_{1}+\boldsymbol{A}_{2} \boldsymbol{B}_{2} \boldsymbol{C}_{2}+\boldsymbol{A}_{3} \boldsymbol{B}_{3} \boldsymbol{C}_{3}$. A chain consisting of three links of relatively straightforward calculations involving the trace function will start. The calculations will involve the following three basic quantities:$$
\begin{aligned}
\boldsymbol{S}_{1} &=\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C_{1}^{\prime}}\right) \boldsymbol{X}_{o}^{\prime}, \\
\boldsymbol{P}_{A_{1}^{o}, \Sigma^{-1}} &=\boldsymbol{A}_{1}^{o}\left(\boldsymbol{A}_{1}^{o^{\prime}} \boldsymbol{\Sigma} \boldsymbol{A}_{1}^{o}\right)^{-} \boldsymbol{A}_{1}^{o^{\prime}} \boldsymbol{\Sigma}, \\
\boldsymbol{P}_{A_{1}, \Sigma} &=\boldsymbol{A}_{1}\left(\boldsymbol{A}_{1}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{A}_{1}\right)^{-} \boldsymbol{A}_{1}^{\prime} \boldsymbol{\Sigma}^{-1}
\end{aligned}
$$

统计代写|回归分析作业代写Regression Analysis代考|Known Dispersion Matrix

Here the estimators for the $EBRM_{W}^{3}$ when $\boldsymbol{\Sigma}$ is known are derived and compared with the estimators for the $EBRM_{B}^{3}$, which were obtained in the previous section. For completeness, the definition of the model under consideration is given below.

Definition 2.3 ($EBRM_{W}^{m}$) Let $\boldsymbol{X}: p \times n$, $\boldsymbol{A}_{i}: p \times q_{i}$, $q_{i} \leq p$, $\boldsymbol{B}_{i}: q_{i} \times k_{i}$, $\boldsymbol{C}_{i}: k_{i} \times n$, $i=1,2, \ldots, m$, $r\left(\boldsymbol{C}_{1}^{\prime}: \boldsymbol{C}_{2}^{\prime}: \boldsymbol{C}_{3}^{\prime}\right)+p \leq n$, $\mathcal{C}\left(\boldsymbol{A}_{i}\right) \subseteq \mathcal{C}\left(\boldsymbol{A}_{i-1}\right)$, $i=2,3, \ldots, m$, and $\boldsymbol{\Sigma}: p \times p$ be p.d. Then
$$
\boldsymbol{X}=\sum_{i=1}^{m} \boldsymbol{A}_{i} \boldsymbol{B}_{i} \boldsymbol{C}_{i}+\boldsymbol{E}
$$defines the $EBRM_{W}^{m}$, where $\boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I})$, $\left\{\boldsymbol{A}_{i}\right\}$ and $\left\{\boldsymbol{C}_{i}\right\}$ are known matrices, and $\left\{\boldsymbol{B}_{i}\right\}$ and $\boldsymbol{\Sigma}$ are unknown parameter matrices.

When estimating the parameters of the $EBRM_{B}^{3}$, a chain of straightforwardly performed calculations was presented. Now, in order to estimate the parameters in the $EBRM_{W}^{3}$, a between-individuals subspace decomposition is utilized and it is noted that (use Appendix B, Theorem B.3 (iii))

$$\mathcal{R}^{n}=\mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}\right)^{\perp} \cap \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}: \boldsymbol{C}_{2}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}: \boldsymbol{C}_{2}^{\prime}\right)^{\perp} \cap \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}: \boldsymbol{C}_{2}^{\prime}: \boldsymbol{C}_{3}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{C}_{1}^{\prime}: \boldsymbol{C}_{2}^{\prime}: \boldsymbol{C}_{3}^{\prime}\right)^{\perp}.$$

Let $\boldsymbol{P}_{i}$, $i=1,2,3,4$, be orthogonal projections on these spaces, i.e.
$$
\boldsymbol{P}_{1}=\boldsymbol{P}_{C_{1}^{\prime}}, \quad \boldsymbol{P}_{2}=\boldsymbol{P}_{Q_{1} C_{2}^{\prime}}, \quad \boldsymbol{P}_{3}=\boldsymbol{P}_{Q_{2} Q_{1} C_{3}^{\prime}}, \quad \boldsymbol{P}_{4}=\boldsymbol{P}_{\left(C_{1}^{\prime}: C_{2}^{\prime}: C_{3}^{\prime}\right)^{o}}
$$where$$
\boldsymbol{Q}_{1}=\boldsymbol{P}_{\left(C_{1}^{\prime}\right)^{o}}, \quad \boldsymbol{Q}_{2}=\boldsymbol{P}_{\left(C_{1}^{\prime}: C_{2}^{\prime}\right)^{o}}
$$The likelihood up to proportionality is stated in (2.21) and from there it follows that one should consider$$
\operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\}=\sum_{i=1}^{4} \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right) \boldsymbol{P}_{i}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\}
$$where one has utilized the fact that $\boldsymbol{P}_{1}+\boldsymbol{P}_{2}+\boldsymbol{P}_{3}+\boldsymbol{P}_{4}=\boldsymbol{I}$, since $\sum_{i=1}^{4} \boldsymbol{P}_{i}$ is a projector on the whole space. Because of the stairs structure, the within-individuals space, i.e. $\mathcal{R}^{p}$, is split. Hence,$$
\begin{aligned}
&\operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\}=\operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right) \boldsymbol{P}_{4}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime}\right\} \\
&\quad+\sum_{i=1}^{3} \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A_{i}, \Sigma}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right) \boldsymbol{P}_{i}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{P}_{A_{i}, \Sigma}^{\prime}\right\} \\
&\quad+\sum_{i=1}^{3} \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{P}_{A_{i}^{o}, \Sigma^{-1}}^{\prime}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right) \boldsymbol{P}_{i}\left(\boldsymbol{X}_{o}-E[\boldsymbol{X}]\right)^{\prime} \boldsymbol{P}_{A_{i}^{o}, \Sigma^{-1}}\right\}
\end{aligned}
$$
统计代写|回归分析作业代写Regression Analysis代考|The Basic Ideas of Obtaining MLEs

统计代写|回归分析作业代写Regression Analysis代考|A Known Dispersion

Multivariate linear models, as well as the bilinear regression models, are extensions of univariate linear models. Therefore, possessing a good knowledge of linear models theory helps one understand the $BRM$ and $EBRM_{\bullet}^{m}$. If the dispersion matrix in the $BRM$ or $EBRM_{\bullet}^{m}$ is supposed to be known, these models belong to the class of univariate linear models (the Gauss-Markov model, after a vectorization). It is well known that for this class of models a decomposition of subspaces is essential. This decomposition is important for the mathematical treatment, as well as for the understanding of the analysis based on these models. In this chapter, first the singular Gauss-Markov model is treated, and thereafter the models which are the main subject of this book are discussed in some detail. Note that the singular Gauss-Markov model is the most general linear model when only a single variance component (error variance) is present.
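The vectorization mentioned above can be made concrete through the identity $\operatorname{vec}(\boldsymbol{A}\boldsymbol{B}\boldsymbol{C})=(\boldsymbol{C}'\otimes\boldsymbol{A})\operatorname{vec}(\boldsymbol{B})$: with $\boldsymbol{\Sigma}$ known, the $BRM$ becomes an ordinary linear model in $\operatorname{vec}(\boldsymbol{B})$ with design matrix $\boldsymbol{C}'\otimes\boldsymbol{A}$ and covariance $\boldsymbol{I}\otimes\boldsymbol{\Sigma}$. A minimal NumPy check of the identity (toy data, names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
p, q, k, n = 3, 2, 2, 6

A = rng.standard_normal((p, q))
B = rng.standard_normal((q, k))
C = rng.standard_normal((k, n))

# Column-wise vec operator (column-major / Fortran order)
vec = lambda M: M.reshape(-1, order="F")

# vec(ABC) = (C' kron A) vec(B): the bilinear mean is linear in vec(B)
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
assert np.allclose(lhs, rhs)
```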
统计代写|回归分析作业代写Regression Analysis代考|Linear Models with a Focus on the Singular

The inference method adopted in this book is mainly based on the likelihood function. The purpose of this section is to introduce vector space decompositions and show their roles when estimating parameters. In Appendix B, Theorems B.3 and B.11, a few important results about the linear space \mathcal{C}(\bullet), its orthogonal complement \mathcal{C}(\bullet)^{\perp} and projections \boldsymbol{P}_{A}=\boldsymbol{A}\left(\boldsymbol{A}^{\prime} \boldsymbol{A}\right)^{-} \boldsymbol{A}^{\prime} are presented. Once again the univariate linear model
$$
\boldsymbol{x}^{\prime}=\boldsymbol{\beta}^{\prime} \boldsymbol{C}+\boldsymbol{e}^{\prime}, \quad \boldsymbol{e} \sim N_{n}\left(\mathbf{0}, \sigma^{2} \boldsymbol{I}\right)
$$
will be studied. In Example 1.1 it was noted that \widehat{\boldsymbol{\mu}}^{\prime}=\widehat{\boldsymbol{\beta}}^{\prime} \boldsymbol{C} and the maximum likelihood estimator of \sigma^{2} equalled n \widehat{\sigma}^{2}=\boldsymbol{r}^{\prime} \boldsymbol{r}, where the “mean” \hat{\boldsymbol{\mu}}=\boldsymbol{P}_{C^{\prime}} \boldsymbol{x} and “residuals” \boldsymbol{r}=\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{x}. Hence, the estimators and residuals are obtained by projecting \boldsymbol{x} on the column space \mathcal{C}\left(\boldsymbol{C}^{\prime}\right) and on its orthogonal complement \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)^{\perp}, respectively. The estimates are obtained by replacing \boldsymbol{x} by \boldsymbol{x}_{o} in the expressions given above. Moreover, under normality, \widehat{\boldsymbol{\mu}} and \boldsymbol{r} are independently distributed and constitute the building blocks of the complete and sufficient statistics. Thus, \hat{\boldsymbol{\mu}} and \boldsymbol{r} are very fundamental quantities for carrying out inference according to the statistical paradigm, i.e. parameter estimation and model evaluation. Indeed, this is the basic philosophy adopted throughout this book, even if the models presented later become much more complicated. Consequently, the following space decomposition is of interest:
$$
\mathcal{R}^{n}=\mathcal{C}\left(\boldsymbol{C}^{\prime}\right) \boxplus \mathcal{C}\left(\boldsymbol{C}^{\prime}\right)^{\perp},
$$
where \boxplus denotes the orthogonal sum (see Appendix A, Sect. A.8), which is illustrated in Fig. 2.1. Suppose now that in the model \boldsymbol{x}^{\prime}=\boldsymbol{\beta}^{\prime} \boldsymbol{C}+\boldsymbol{e}^{\prime}, the restrictions
$$
\boldsymbol{\beta}^{\prime} \boldsymbol{G}=\mathbf{0}
$$
hold. The restrictions mean that there is some prior information about \boldsymbol{\beta} or some hypothesis has been postulated about the parameters in \boldsymbol{\beta}. Then it follows from Sect.
1.3 that
$$
\widehat{\boldsymbol{\beta}}^{\prime} \boldsymbol{C}=\boldsymbol{x}^{\prime} \boldsymbol{C}^{\prime} \boldsymbol{G}^{o}\left(\boldsymbol{G}^{o^{\prime}} \boldsymbol{C} \boldsymbol{C}^{\prime} \boldsymbol{G}^{o}\right)^{-} \boldsymbol{G}^{o^{\prime}} \boldsymbol{C}=\boldsymbol{x}^{\prime} \boldsymbol{P}_{C^{\prime} G^{o}} .
$$
An important property is that if \mathcal{C}(\boldsymbol{G}) \subseteq \mathcal{C}(\boldsymbol{C}), then \widehat{\boldsymbol{\beta}}^{\prime} \boldsymbol{G}=\mathbf{0} because
$$
\widehat{\boldsymbol{\beta}}^{\prime} \boldsymbol{G}=\widehat{\boldsymbol{\beta}}^{\prime} \boldsymbol{C} \boldsymbol{C}^{\prime}\left(\boldsymbol{C} \boldsymbol{C}^{\prime}\right)^{-} \boldsymbol{G}=\mathbf{0} .
$$
Moreover, \widehat{\sigma}^{2} is proportional to the squared residuals \boldsymbol{r}=\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime} G^{o}}\right) \boldsymbol{x}, where
$$
\boldsymbol{r}^{\prime} \boldsymbol{r}=\boldsymbol{x}^{\prime}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime} G^{o}}\right) \boldsymbol{x}=\boldsymbol{x}^{\prime}\left(\boldsymbol{P}_{C^{\prime}\left(C C^{\prime} G^{o}\right)^{o}}+\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{x} .
$$
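These projections are easy to try numerically. Below is a minimal sketch, assuming numpy; the design C, the data x and the restriction matrices G and G^o are made-up toy choices, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj(M):
    # Orthogonal projector onto the column space C(M): P_M = M (M'M)^- M'.
    return M @ np.linalg.pinv(M.T @ M) @ M.T

# Toy univariate model x' = beta'C + e': two groups of three observations.
C = np.kron(np.eye(2), np.ones((1, 3)))   # between-individuals design, 2 x 6
x = rng.normal(size=6)

P = proj(C.T)                    # projector onto C(C')
mu_hat = P @ x                   # "mean": hat(mu) = P_{C'} x
r = (np.eye(6) - P) @ x          # "residuals": r = (I - P_{C'}) x
sigma2_hat = (r @ r) / 6         # MLE: n * hat(sigma)^2 = r'r

# x is split along C(C') (+) C(C')^perp, so mu_hat and r are orthogonal.
assert abs(mu_hat @ r) < 1e-10

# Restriction beta'G = 0 (here: equal group means): project onto C(C'G^o),
# where the columns of Go span the orthogonal complement of C(G).
G = np.array([[1.0], [-1.0]])
Go = np.array([[1.0], [1.0]])
P_r = proj(C.T @ Go)
mu_hat_r = P_r @ x               # restricted fit: the grand mean repeated
```

With this particular G the restriction forces the two group means to coincide, so the restricted fitted mean collapses to the overall sample mean.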
统计代写|回归分析作业代写Regression Analysis代考|Multivariate Linear Models

In this short section, an MLE for \boldsymbol{\Sigma} is additionally given, for comparisons with the estimator of the variance in univariate linear models (see Fig. 2.5). The purpose of this section is to link univariate linear models with multivariate linear models, which will later be linked to the B R M. The multivariate linear model was presented in Sect. 1.4 and its MLEs were given by
$$
\begin{aligned}
\widehat{\boldsymbol{B}}_{o} \boldsymbol{C} &=\boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}}, \\
n \widehat{\boldsymbol{\Sigma}}_{o} &=\boldsymbol{r}_{o} \boldsymbol{r}_{o}^{\prime}, \quad \boldsymbol{r}_{o}^{\prime}=\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} .
\end{aligned}
$$
In comparison with univariate linear models, the only difference when estimating parameters is that instead of \boldsymbol{x}^{\prime}: 1 \times n, we have \boldsymbol{X}: p \times n. Thus, in some sense, from a mathematical point of view, the treatment of the univariate and multivariate models concerning estimation is the same. Indeed it would be mathematically more correct to say “linear multivariate model” instead of “multivariate linear model”. However, if one considers properties of the estimators, then differences appear. This is mainly due to the difference between the Wishart distribution and the \chi^{2} distribution (see Appendix A, Sect. A.9, for definitions of the distributions). Moreover, from a practical point of view, since in the multivariate case one is dealing with several variables simultaneously, the data analysis also becomes more complicated. For example, dependencies among the variables have to be taken into account, which of course is not necessary in the univariate case. Obviously there are more questions which are to be considered in the multivariate model. The differences between the univariate linear and multivariate linear models are illustrated in Fig. 2.5. It is worth noting that any multivariate linear model via a vectorization can be written as a univariate linear model. Consider the multivariate linear model
$$
\boldsymbol{X}=\boldsymbol{B} \boldsymbol{C}+\boldsymbol{E}, \quad \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), \quad \boldsymbol{\Sigma}>0,
$$
which can also be written as follows:
$$
\operatorname{vec} \boldsymbol{X}=\left(\boldsymbol{C}^{\prime} \otimes \boldsymbol{I}\right) \operatorname{vec} \boldsymbol{B}+\boldsymbol{e}, \quad \boldsymbol{e} \sim N_{p n}(\mathbf{0}, \boldsymbol{I} \otimes \boldsymbol{\Sigma}), \quad \boldsymbol{\Sigma}>0
$$
However, stating that any one of the representations given above has some general advantages does not make sense from a statistical point of view. Finally, it is noted that a general inference strategy in multivariate analysis is to take an arbitrary linear combination of \boldsymbol{X}, let us say \boldsymbol{l}^{\prime} \boldsymbol{X}, leading to a univariate model, and then to try to choose in some sense the best \boldsymbol{l} (e.g. see Rao, 1973, Chapter 8).
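The equivalence between the matrix form and the vectorized form can be checked numerically. A sketch assuming numpy follows; the sizes p, k, n and the random design are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, n = 3, 2, 8
C = rng.normal(size=(k, n))    # between-individuals design, full rank
X = rng.normal(size=(p, n))    # observed data matrix

# Multivariate form: the estimate of BC is X P_{C'}.
P = C.T @ np.linalg.inv(C @ C.T) @ C
BC_hat = X @ P

# Vectorized form: vec X = (C' kron I) vec B + e, solved by least squares.
# With column-major vec (order="F"), vec(BC) = (C' kron I_p) vec B.
D = np.kron(C.T, np.eye(p))
vecB_hat, *_ = np.linalg.lstsq(D, X.flatten(order="F"), rcond=None)
B_hat = vecB_hat.reshape(p, k, order="F")

# Both routes give the same fitted mean.
assert np.allclose(B_hat @ C, BC_hat)
```

Because the design in the vectorized model is C' \otimes I, ordinary least squares on vec X reproduces X C'(CC')^{-} column by column, which is why neither representation offers a statistical advantage over the other.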
统计代写|回归分析作业代写Regression Analysis代考|The General Multivariate Linear Model

In this book we study models which are based on an underlying multivariate normal distribution. The multivariate normal distribution is closely connected to linearity, since a linear function of a normal variable is also normally distributed. The theory around the normal distribution is well developed and one can, among other things, show that the general multivariate linear model under certain conditions belongs to the exponential family, which is very important.
For example, for models which belong to the exponential family, there are complete and sufficient statistics, and all the moments and cumulants are at our disposal. The general multivariate linear model equals
$$
\boldsymbol{X}=\boldsymbol{B} \boldsymbol{C}+\boldsymbol{E},
$$
where \boldsymbol{X}: p \times n is a random matrix which corresponds to the observations, \boldsymbol{B}: p \times k is an unknown parameter matrix and \boldsymbol{C}: k \times n is a known design matrix. Furthermore, \boldsymbol{E} \sim N_{p, n}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), where \boldsymbol{\Sigma} is an unknown p.d. matrix. For a definition of the matrix normal distribution N_{p, n}(\boldsymbol{\mu}, \bullet, \bullet) see Appendix A, Sect. A.9. The model in (1.7) is also called the MANOVA model. According to the model specifications, the model consists of independently distributed columns. The design matrix \boldsymbol{C} is also called a between-individuals design matrix. In order to be able to draw any conclusions from the model, we have to estimate the unknown parameters \boldsymbol{B} and \boldsymbol{\Sigma}. Following the statistical paradigm, we also have to verify the model and this usually takes place with the help of residuals. If we examine the likelihood function, L(\boldsymbol{B}, \boldsymbol{\Sigma}), we have
$$
\begin{aligned}
L(\boldsymbol{B}, \boldsymbol{\Sigma}) & \propto|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o}-\boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o}-\boldsymbol{B} \boldsymbol{C}\right)^{\prime}\right\}\right\} \\
&=|\boldsymbol{\Sigma}|^{-n / 2} \exp \left(-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1} \boldsymbol{S}_{o}+\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}}-\boldsymbol{B} \boldsymbol{C}\right)\left(\boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}}-\boldsymbol{B} \boldsymbol{C}\right)^{\prime}\right\}\right),
\end{aligned}
$$
where
$$
\boldsymbol{S}_{o}=\boldsymbol{X}_{o}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{X}_{o}^{\prime} .
$$
Let \boldsymbol{S} be as \boldsymbol{S}_{o}, but with \boldsymbol{X}_{o} replaced by \boldsymbol{X}. From here it follows that the model belongs to the exponential family and that \boldsymbol{X} \boldsymbol{P}_{C^{\prime}} and \boldsymbol{S} are sufficient statistics. It can be shown that the statistics also are complete. The MLEs for \boldsymbol{B} and \boldsymbol{\Sigma} are obtained from
$$
\begin{aligned}
\widehat{\boldsymbol{B}}_{o} \boldsymbol{C} &=\boldsymbol{X}_{o} \boldsymbol{P}_{C^{\prime}}, \\
n \widehat{\boldsymbol{\Sigma}}_{o} &=\boldsymbol{S}_{o},
\end{aligned}
$$
since (1.8) constitutes a linear consistent equation system in \boldsymbol{B}. The likelihood is always smaller than or equal to (2 \pi)^{-p n / 2}\left|n^{-1} \boldsymbol{S}_{o}\right|^{-n / 2} \exp \{-n p / 2\}, where the upper bound is obtained when inserting \widehat{\boldsymbol{B}}_{o} \boldsymbol{C} and \widehat{\boldsymbol{\Sigma}}_{o}.

Example 1.3 This is an example where several variables are to be modelled simultaneously. In environmental monitoring one can use many chemical biomarkers. For example, in Sweden, one monitors calcium, magnesium, sodium, potassium, sulphate, chloride, fluoride, nitrogen, phosphorus, conductivity and other substances/properties in lakes spread over the whole country. Observations are collected several times over the year. Imagine that we want to compare two regions for a specific year. Then one can select 20 lakes from each region and as response variables use the above-mentioned chemical variables, for which an average over the summer months can be used, for example. The model for the data with ten response variables and 40 observations equally divided between the two regions can be presented in the following way:
$$
\boldsymbol{X}=\boldsymbol{B C}+\boldsymbol{E}
$$
where \boldsymbol{X}: 10 \times 40, \boldsymbol{B}: 10 \times 2 consists of the mean parameters, \boldsymbol{E} \sim N_{10,40}(\mathbf{0}, \boldsymbol{\Sigma}, \boldsymbol{I}), where \boldsymbol{\Sigma}: 10 \times 10 is the unknown dispersion matrix, and
$$
\boldsymbol{C}=\left(\begin{array}{cc}
\mathbf{1}_{20}^{\prime} & \mathbf{0} \\
\mathbf{0} & \mathbf{1}_{20}^{\prime}
\end{array}\right) .
$$
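Example 1.3 can also be sketched numerically. Assuming numpy, the code below uses simulated stand-ins for the lake measurements (taking \Sigma = \boldsymbol{I} in the simulation is a made-up choice), but exactly the two-region design matrix C above:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 10, 40                                  # ten biomarkers, forty lakes

# Between-individuals design: lakes 1-20 in region 1, lakes 21-40 in region 2.
C = np.zeros((2, n))
C[0, :20] = 1.0
C[1, 20:] = 1.0

B_true = rng.normal(size=(p, 2))               # region means (unknown in practice)
X = B_true @ C + rng.normal(size=(p, n))       # simulated observation matrix

# MLEs: B_hat C = X P_{C'} and n Sigma_hat = S = X (I - P_{C'}) X'.
P = C.T @ np.linalg.inv(C @ C.T) @ C
B_hat = X @ C.T @ np.linalg.inv(C @ C.T)       # 10 x 2 matrix of region means
S = X @ (np.eye(n) - P) @ X.T
Sigma_hat = S / n

# For this design the columns of B_hat are just region-wise sample means.
assert np.allclose(B_hat[:, 0], X[:, :20].mean(axis=1))
```

Here Sigma_hat is the pooled within-region sum of squares and cross-products divided by n, i.e. the (biased) maximum likelihood estimator of the dispersion among the ten variables.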
统计代写|回归分析作业代写Regression Analysis代考|Bilinear Regression Models: An Introduction

Throughout the book B R M is used as an abbreviation for bilinear regression model. Other common names are the growth curve model or GMANOVA (generalized multivariate analysis of variance). At the end of the previous section, it was noted that even under normality assumptions, we have very natural models which do not belong to the exponential family. It was also noted in the previous section that if a model has a linear mean structure, the model belongs to the exponential family. In this section, it will be shown, among other things, that if a bilinear mean structure is assumed together with an arbitrary dispersion matrix, the model is not a member of the exponential family and instead belongs to the curved exponential family. Remember that if a matrix is pre- and post-multiplied by other matrices, we perform a bilinear transformation. Often the mean structure \boldsymbol{A} \boldsymbol{B} \boldsymbol{C} is considered, where the unknown parameter is given by \boldsymbol{B}. Hence, we have a bilinear model:
$$
X=A B C+E
$$where \boldsymbol{X}: p \times n, the unknown mean parameter matrix \boldsymbol{B}: q \times k, the two design matrices A: p \times q and C: k \times n, and the error matrix \boldsymbol{E} build up the model. Moreover, let \boldsymbol{E} be normally distributed with independent columns, mean \mathbf{0}, and a positive definite dispersion matrix \boldsymbol{\Sigma} for the elements within each column of \boldsymbol{X}. Then the density function for \boldsymbol{X} is proportional to$$
|\boldsymbol{\Sigma}|^{-n / 2} \exp \left\{-1 / 2 \operatorname{tr}\left\{\boldsymbol{\Sigma}^{-1}(\boldsymbol{X}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C})(\boldsymbol{X}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C})^{\prime}\right\}\right\},
$$
and after some manipulations it can be shown that this model belongs to the curved exponential family. For example, this can be shown through a reparametrization, i.e. according to Appendix B, Theorem B.1 (i), \boldsymbol{A} can be factored as \boldsymbol{A}=\Gamma\left(\begin{array}{l}\boldsymbol{I} \\ \mathbf{0}\end{array}\right) \boldsymbol{T}, where \Gamma is orthogonal and \boldsymbol{T} is a non-singular matrix. Moreover, let \Theta=\boldsymbol{T} \boldsymbol{B}, \Psi=\Gamma^{\prime} \boldsymbol{\Sigma} \Gamma, \boldsymbol{Y}=\Gamma^{\prime} \boldsymbol{X}, \boldsymbol{Y}^{\prime}=\left(\boldsymbol{Y}_{1}^{\prime}: \boldsymbol{Y}_{2}^{\prime}\right) and
$$
\Psi^{-1}=\left(\begin{array}{ll}
\Psi^{11} & \Psi^{12} \\
\Psi^{21} & \Psi^{22}
\end{array}\right) .
$$Then the density function for the new variable Y is proportional to$$
\begin{array}{r}
\left|\Psi^{-1}\right|^{n / 2} \exp \left\{-1 / 2\left(\operatorname{tr}\left\{\Psi^{-1} \boldsymbol{Y}\left(\boldsymbol{I}-\boldsymbol{P}_{C^{\prime}}\right) \boldsymbol{Y}^{\prime}\right\}-2 \operatorname{tr}\left\{\boldsymbol{Y}_{1}^{\prime} \Psi^{11} \Theta \boldsymbol{C}\right\}\right.\right. \\
\left.\left.-2 \operatorname{tr}\left\{\boldsymbol{Y}_{2}^{\prime} \Psi^{21} \Theta \boldsymbol{C}\right\}+\operatorname{tr}\left\{\Psi^{11} \Theta \boldsymbol{C} \boldsymbol{C}^{\prime} \Theta^{\prime}\right\}\right)\right\},
\end{array}
$$which shows that the model belongs to the curved exponential family, since the number of “free” parameters, i.e. \Psi^{-1} and \Psi^{11} \Theta, is less than the number of functions including observations and parameters. Note that$$
\Psi^{21} \Theta=(\mathbf{0}: \boldsymbol{I}) \Psi^{-1}(\boldsymbol{I}: \mathbf{0})^{\prime}\left((\boldsymbol{I}: \mathbf{0}) \Psi^{-1}(\boldsymbol{I}: \mathbf{0})^{\prime}\right)^{-1} \Psi^{11} \boldsymbol{\Theta} .
$$
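Since \Psi^{21}=(\mathbf{0}:\boldsymbol{I})\Psi^{-1}(\boldsymbol{I}:\mathbf{0})^{\prime} and \Psi^{11}=(\boldsymbol{I}:\mathbf{0})\Psi^{-1}(\boldsymbol{I}:\mathbf{0})^{\prime}, the right-hand side reduces to \Psi^{21}(\Psi^{11})^{-1}\Psi^{11}\Theta=\Psi^{21}\Theta, which a quick numerical sanity check confirms (assuming numpy; the block sizes and the matrix \Psi are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
q, s = 2, 3                         # block sizes: Psi is (q+s) x (q+s)
p = q + s

A = rng.normal(size=(p, p))
Psi = A @ A.T + p * np.eye(p)       # an arbitrary symmetric p.d. Psi
Psi_inv = np.linalg.inv(Psi)
Theta = rng.normal(size=(q, 4))     # arbitrary q x k parameter matrix

E1 = np.hstack([np.eye(q), np.zeros((q, s))])   # (I : 0)
E2 = np.hstack([np.zeros((s, q)), np.eye(s)])   # (0 : I)

Psi11 = E1 @ Psi_inv @ E1.T         # upper-left block of Psi^{-1}
Psi21 = E2 @ Psi_inv @ E1.T         # lower-left block of Psi^{-1}

lhs = Psi21 @ Theta
rhs = E2 @ Psi_inv @ E1.T @ np.linalg.inv(E1 @ Psi_inv @ E1.T) @ Psi11 @ Theta
assert np.allclose(lhs, rhs)
```

The point of the identity is that the "redundant" parameter function \Psi^{21}\Theta is recoverable from the free parameters \Psi^{-1} and \Psi^{11}\Theta, which is what makes the family curved rather than full.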
The above-mentioned model is often termed the growth curve model and was introduced by Potthoff and Roy (1964), although very similar models had been considered earlier. The \boldsymbol{A} matrix is often referred to as the within-individuals design matrix and \boldsymbol{C}, as in (1.7), is called the between-individuals design matrix.

统计代写|回归分析作业代写Regression Analysis代考|Literature

The following literature review reflects the historical development of some part of statistical science and includes background information on linear and bilinear models. No details are provided, meaning, for example, that the techniques and tools used by the various authors are omitted. Instead it is recommended that one should study the original articles. Moreover, it should be noted that it is impossible, in a few pages, to present a complete survey of the literature published on the topics under consideration. Nowadays statistical science is mainly based on probability theory. One has merged stochastics with statistics but this has not always been the case. Today statisticians use probabilities to describe uncertainty, and probability and probability distributions are used for the following purposes: (i) to build models, (ii) to create random experiments (sampling), and (iii) to support conclusions. One challenge has been to handle “continuous” data, and this is a problem which statisticians are still faced with today. Fundamental theory was established at the beginning of the twentieth century, including Kolmogorov’s (1933) famous axiomatic probability proposal. The philosophy behind Kolmogorov’s work and alternative proposals, as well as an interesting and beneficial historical perspective, has been presented by Schafer and Vovk (2006). It is interesting to look into early works on probability theory, for example, von Bortkiewicz (1917), where also references to earlier works can be found.
An embryo of multivariate statistics was introduced by Galton (1886, 1888, 1889), who, among other things, exploited bivariate problems (see Anderson, 1996). The normal distribution has always played a fundamental role when it comes to analysing continuous data. Two very well-known results/notions, which are connected to the normal distribution and which appeared more than a hundred years ago, are the t-test (Student, 1908) and Pearson’s product correlation coefficient (Pearson, 1896). Concerning correlation, it was, however, Galton (1886, 1888) who came up with the fundamental ideas, including the concept of conditional expectation; see Bulmer (2003) and Stigler (2012) for interesting reading about Galton. Pearson, besides referring to Galton, also refers to Bravais (1846), who used the correlation coefficient (see Monhor, 2012). Cowles (2001) points out that Galton’s half-cousin Charles Darwin used the term correlated variation in his renowned book “The Origin of Species”. Many references concerning the bivariate normal distribution can be found in Kotz et al. (2000). Edgeworth (1892) presented a three-dimensional normal distribution which was generalized by Pearson (1896) to a p-dimensional version. Fisher (1915) derived the distribution of the sample Pearson correlation coefficient. Hence, we can conclude that around 1900 it became possible to analyse continuous multiresponse data in a systematic way. To understand how important and impressive the development of statistics was during the above-mentioned years, we refer to historically oriented books and articles, for example see Stigler (1986, 2012) and Cowles (2001).
统计代写|回归分析作业代写Regression Analysis代考|What Is Statistics

Statistical science is about planning experiments, setting up models to analyse experiments and observational studies, and studying the properties of these models or the properties of some specific building blocks within these models, e.g. parameters and independence assumptions. Statistical science also concerns the validation of chosen models, often against data. Statistical application is about connecting statistical models to data. The general statistical paradigm is based on the following steps:

1. setting up a model;
2. evaluating the model via simulations or comparisons with data;
3. if necessary, refining the model and restarting from step 2;
4. accepting and interpreting the model.
There is indeed also a step 0, namely determining the source of inspiration for setting up a statistical model. At least two cases can be identified: (i) the data-inspired model, i.e. depending on our experiences and what is seen in the data, a model is formulated; (ii) the conceptually inspired model, i.e. someone has an idea about what the relevant components are and how these components should be included in the model of, for example, some process. It is obvious that when applying the paradigm, a number of decisions have to be made, which unfortunately are all rather subjective. This should be taken into account when relying on statistics. Moreover, if statistics is to be useful, the model should be relevant for the problem under consideration, which is often judged relative to the information which can be derived from the data, and the final model should be interpretable. Statistics is instrumental, since, without expertise in the discipline in which it is applied, one usually cannot draw firm conclusions about the data which are used to evaluate the model. On the other hand, "data analysts", when applying statistics, need a solid knowledge of statistics to be able to perform efficient analysis.

The purpose of this book is to provide tools for the treatment of the so-called bilinear models. Bilinear models are models which are linear in two "directions". A typical example of something which is bilinear is the transformation of a matrix into another matrix, because one can transform the rows as well as the columns simultaneously. In practice rows and columns can, for example, represent a "spatial" direction and a "temporal" direction, respectively.

Basic ingredients in statistics are the concept of probability and the assumption about the underlying distributions. The distribution is a probability measure on the space of "random observations", i.e. observations of a phenomenon whose outcome cannot be stated in advance.
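The "two directions" idea can be illustrated numerically: transforming a matrix as $\boldsymbol{A}\boldsymbol{X}\boldsymbol{B}^{\prime}$ acts linearly on the rows and on the columns simultaneously. A minimal sketch (all matrices below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))   # 3 "spatial" rows, 4 "temporal" columns
A = rng.normal(size=(2, 3))   # transforms the rows (spatial direction)
B = rng.normal(size=(5, 4))   # transforms the columns (temporal direction)

Y = A @ X @ B.T               # bilinear transform: linear in each direction

# Linearity holds in the matrix argument:
X2 = rng.normal(size=(3, 4))
assert np.allclose(A @ (X + X2) @ B.T, A @ X @ B.T + A @ X2 @ B.T)

# Equivalently, vec(A X B') = (B kron A) vec(X), a standard identity:
vec = lambda M: M.reshape(-1, order="F")   # column-major vectorization
assert np.allclose(vec(Y), np.kron(B, A) @ vec(X))
```

The last assertion shows why such transformations are called bilinear: stacked column-wise, the transformed matrix is a Kronecker-structured linear map of the original one.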
However, what is a probability and what does a probability represent? Statistics uses the concept of probability as a measure of uncertainty. The probability measures used nowadays are well defined through their characterization via Kolmogorov's axioms. However, Kolmogorov's axioms tell us what a probability measure should fulfil, but not what it is. It is not even obvious that something like a probabilistic mechanism exists in real life (nature), but for statisticians this does not matter. The probability measure is part of a model and any model, of course, only describes reality approximately.

统计代写|回归分析作业代写Regression Analysis代考|What Is a Statistical Model

A statistical model is usually a class of distributions which is specified via functions of parameters (unknown quantities). The idea is to choose an appropriate model class according to the problem which is to be studied. Sometimes we know exactly what distribution should be used, but more often we have parameters which generate a model class, for example the class of multivariate normal distributions with an unknown mean and dispersion. Instead of distributions, it may be convenient, in particular for interpretations, to work with random variables which are representatives of the random phenomenon under study, although sometimes it is not obvious what kind of random variable corresponds to a distribution function. In Chap. 5 of this book, for example, some cases where this phenomenon occurs are dealt with.

One problem with statistics (in most cases only a philosophical problem) is how to connect data to continuous random variables. In general it is advantageous to look upon data as realizations of random variables. However, since our data points have probability mass 0, we cannot directly couple, in a mathematical way, continuous random variables to data.
There exist several well-known schools of thought in statistics advocating different approaches to the connection of data to statistical models, and these schools differ in the rigour of their methods. Examples of these approaches are "distribution-free" methods, likelihood-based methods and Bayesian methods. Note that the fact that a method is distribution-free does not mean that no assumption is made about the model. In a statistical model there are always some assumptions about randomness, for example concerning independence between random variables. Perhaps the best-known distribution-free method is the least squares approach.

Example 1.1 In this example, several statistical approaches for evaluating a model are presented. Let
$$
\boldsymbol{x}^{\prime}=\boldsymbol{\beta}^{\prime} \boldsymbol{C}+\boldsymbol{e}^{\prime},
$$

where $\boldsymbol{x}: n \times 1$ is a random vector corresponding to the observations, $\boldsymbol{C}: k \times n$ is the design matrix, $\boldsymbol{\beta}: k \times 1$ is an unknown parameter vector which is to be estimated, and $\boldsymbol{e} \sim N_{n}(\mathbf{0}, \sigma^{2} \boldsymbol{I})$ is considered to be the error term in the model, where $\sigma^{2}$ denotes the variance, which is supposed to be unknown. In this book the term "observation" is used in the sense of observed data which are thought to be realizations of some random process. In statistical theory the term "observation" often refers to a set of random variables. Let the projector $\boldsymbol{P}_{C^{\prime}}=\boldsymbol{C}^{\prime}(\boldsymbol{C} \boldsymbol{C}^{\prime})^{-} \boldsymbol{C}$ be defined as in Appendix A, Sect. A.7, where $(\bullet)^{-}$ denotes an arbitrary g-inverse (see Appendix A, Sect. A.6). Some useful results for projectors are presented in Appendix B, Theorem B.11.

统计代写|回归分析作业代写Regression Analysis代考|The General Univariate Linear Model

In this section the classical Gauss-Markov set-up is considered, but we assume the dispersion matrix to be completely known. If the dispersion matrix is positive definite (p.d.), the model is just a minor extension of the model in Example 1.1. However, if the dispersion matrix is positive semi-definite (p.s.d.), other aspects related to the model will be introduced. In general, in the Gauss-Markov model the dispersion is proportional to an unknown constant, but this is immaterial for our presentation. The reason for investigating the model in some detail is that there has to be a close connection between the estimators based on models with a known dispersion and those based on models with an unknown dispersion. Indeed, if one assumes a known dispersion matrix, all our models can be reformulated as Gauss-Markov models.
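The projector $\boldsymbol{P}_{C^{\prime}}$ introduced in Example 1.1 can be checked numerically. The sketch below, with a made-up design matrix, verifies that $\boldsymbol{P}_{C^{\prime}}=\boldsymbol{C}^{\prime}(\boldsymbol{C}\boldsymbol{C}^{\prime})^{-}\boldsymbol{C}$ is idempotent and that $\boldsymbol{x}^{\prime}\boldsymbol{P}_{C^{\prime}}$ reproduces the ordinary least squares fit $\widehat{\boldsymbol{\beta}}^{\prime}\boldsymbol{C}$ (the pseudo-inverse serves as one particular g-inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 10
C = np.vstack([np.ones(n), rng.normal(size=n)])   # design matrix C: k x n
beta = np.array([1.0, 0.5])                       # true parameter, k x 1
x = C.T @ beta + 0.1 * rng.normal(size=n)         # x' = beta'C + e'

# Projector onto C(C'); pinv is one choice of g-inverse of CC'
P = C.T @ np.linalg.pinv(C @ C.T) @ C
assert np.allclose(P @ P, P)                      # idempotent
assert np.allclose(P.T, P)                        # orthogonal projector here

# OLS estimate satisfies beta_hat' C = x' P_{C'}
beta_hat = np.linalg.lstsq(C.T, x, rcond=None)[0]
assert np.allclose(beta_hat @ C, x @ P)
```

The last assertion is exactly the projection characterization of least squares: the fitted mean is the projection of the observations onto the column space of $\boldsymbol{C}^{\prime}$.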
With additional information stating that the random variables are normally distributed, one can see from the likelihood equations that the maximum likelihood estimators (MLEs) of the mean parameters under the assumption of an unknown dispersion should approach the corresponding estimators under the assumption of a known dispersion. For example, the likelihood equation for the model $\boldsymbol{X} \sim N_{p, n}(\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}, \boldsymbol{\Sigma}, \boldsymbol{I})$ which appears when differentiating with respect to $\boldsymbol{B}$ (see Appendix A, Sect. A.9 for the definition of the matrix normal distribution and Chap. 1, Sect. 1.5 for a precise specification of the model) equals

$$
\boldsymbol{A}^{\prime} \boldsymbol{\Sigma}^{-1}(\boldsymbol{X}-\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}) \boldsymbol{C}^{\prime}=\mathbf{0},
$$

and for a large sample any maximum likelihood estimator of $\boldsymbol{B}$ has to satisfy this equation asymptotically, because we know that the MLE of $\boldsymbol{\Sigma}$ is a consistent estimator. For interested readers it can be worth studying generalized estimating equation (GEE) theory; see, for example, Shao (2003, pp. 359-367). Now let us discuss the univariate linear model
$$
\boldsymbol{x}^{\prime}=\boldsymbol{\beta}^{\prime} \boldsymbol{C}+\boldsymbol{e}^{\prime}, \quad \boldsymbol{e} \sim N_{n}(\mathbf{0}, \boldsymbol{V}),
$$

where $\boldsymbol{V}: n \times n$ is p.d. and known, $\boldsymbol{x}: n \times 1$, $\boldsymbol{C}: k \times n$ and $\boldsymbol{\beta}: k \times 1$ is to be estimated. Let $\boldsymbol{x}_{o}$, as previously, denote the observations of $\boldsymbol{x}$ and let us use $\boldsymbol{V}^{-1}=\boldsymbol{V}^{-1} \boldsymbol{P}_{C^{\prime}, V}+\boldsymbol{P}_{(C^{\prime})^{o}, V^{-1}} \boldsymbol{V}^{-1}$ (see Appendix B, Theorem B.13), where $(\boldsymbol{C}^{\prime})^{o}$ is any matrix satisfying $\mathcal{C}((\boldsymbol{C}^{\prime})^{o})^{\perp}=\mathcal{C}(\boldsymbol{C}^{\prime})$, where $\mathcal{C}(\bullet)$ denotes the column vector space (see Appendix A, Sect. A.8). Then the likelihood is maximized as follows:

$$
\begin{aligned}
L(\boldsymbol{\beta}) \propto{} & |\boldsymbol{V}|^{-1/2} \exp\{-1/2\,(\boldsymbol{x}_{o}^{\prime}-\boldsymbol{\beta}^{\prime} \boldsymbol{C}) \boldsymbol{V}^{-1}(\boldsymbol{x}_{o}^{\prime}-\boldsymbol{\beta}^{\prime} \boldsymbol{C})^{\prime}\} \\
={} & |\boldsymbol{V}|^{-1/2} \exp\{-1/2\,(\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{C^{\prime}, V}^{\prime}-\boldsymbol{\beta}^{\prime} \boldsymbol{C}) \boldsymbol{V}^{-1}(\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{C^{\prime}, V}^{\prime}-\boldsymbol{\beta}^{\prime} \boldsymbol{C})^{\prime}\} \\
& \times \exp\{-1/2\,\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{(C^{\prime})^{o}, V^{-1}} \boldsymbol{V}^{-1} \boldsymbol{x}_{o}\} \\
\leq{} & |\boldsymbol{V}|^{-1/2} \exp\{-1/2\,\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{(C^{\prime})^{o}, V^{-1}} \boldsymbol{V}^{-1} \boldsymbol{x}_{o}\},
\end{aligned}
$$

which is independent of any parameter, i.e. $\boldsymbol{\beta}$, and the upper bound is attained if and only if

$$
\widehat{\boldsymbol{\beta}}_{o}^{\prime} \boldsymbol{C}=\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{C^{\prime}, V}^{\prime},
$$

where $\widehat{\boldsymbol{\beta}}_{o}$ is the estimate of $\boldsymbol{\beta}$. Thus, in order to estimate $\boldsymbol{\beta}$, a linear equation system has to be solved. The solution can be written as follows (see Appendix B, Theorem B.10 (i)):

$$
\widehat{\boldsymbol{\beta}}_{o}^{\prime}=\boldsymbol{x}_{o}^{\prime} \boldsymbol{V}^{-1} \boldsymbol{C}^{\prime}(\boldsymbol{C} \boldsymbol{V}^{-1} \boldsymbol{C}^{\prime})^{-}+\boldsymbol{z}^{\prime} \boldsymbol{C}^{o\prime},
$$

where $\boldsymbol{z}^{\prime}$ stands for an arbitrary vector of a proper size. Suppose that in model (1.3) there are restrictions (a priori information) on the mean vector given by

$$
\boldsymbol{\beta}^{\prime} \boldsymbol{G}=\mathbf{0}.
$$

Then

$$
\boldsymbol{\beta}^{\prime}=\boldsymbol{\theta}^{\prime} \boldsymbol{G}^{o\prime},
$$

where $\boldsymbol{\theta}$ is a new, unrestricted parameter. After inserting this relation in (1.3), the following model appears:

$$
\boldsymbol{x}^{\prime}=\boldsymbol{\theta}^{\prime} \boldsymbol{G}^{o\prime} \boldsymbol{C}+\boldsymbol{e}^{\prime}, \quad \boldsymbol{e} \sim N_{n}(\mathbf{0}, \boldsymbol{V}).
$$
Thus, the above-presented calculations yield

$$
\widehat{\boldsymbol{\beta}}_{o}^{\prime} \boldsymbol{C}=\boldsymbol{x}_{o}^{\prime} \boldsymbol{P}_{C^{\prime} G^{o}, V}^{\prime},
$$

and from here, since this expression constitutes a consistent linear equation system in $\widehat{\boldsymbol{\beta}}_{o}$, a general expression for $\widehat{\boldsymbol{\beta}}_{o}$ can be obtained explicitly.
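The estimators derived in this section can be checked numerically. The sketch below uses a made-up design matrix, dispersion matrix $\boldsymbol{V}$ and restriction matrix $\boldsymbol{G}$; with $\boldsymbol{C}$ of full rank the g-inverse becomes an ordinary inverse and $\boldsymbol{z}^{\prime}$ can be taken as zero. It verifies that $\widehat{\boldsymbol{\beta}}_{o}^{\prime}=\boldsymbol{x}_{o}^{\prime}\boldsymbol{V}^{-1}\boldsymbol{C}^{\prime}(\boldsymbol{C}\boldsymbol{V}^{-1}\boldsymbol{C}^{\prime})^{-}$ satisfies $\widehat{\boldsymbol{\beta}}_{o}^{\prime}\boldsymbol{C}=\boldsymbol{x}_{o}^{\prime}\boldsymbol{P}_{C^{\prime},V}^{\prime}$, and that the restriction $\boldsymbol{\beta}^{\prime}\boldsymbol{G}=\mathbf{0}$ is handled by the reparametrization $\boldsymbol{\beta}^{\prime}=\boldsymbol{\theta}^{\prime}\boldsymbol{G}^{o\prime}$:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 12
C = rng.normal(size=(k, n))                      # design matrix, full rank k
V = np.diag(rng.uniform(0.5, 2.0, size=n))       # known p.d. dispersion matrix
x = C.T @ np.array([1.0, -0.5, 2.0]) + rng.multivariate_normal(np.zeros(n), V)

Vi = np.linalg.inv(V)
# GLS estimate (C full rank: g-inverse is an ordinary inverse, z' = 0)
beta_hat = x @ Vi @ C.T @ np.linalg.inv(C @ Vi @ C.T)

# beta_hat' C equals x' P'_{C',V} with P_{C',V} = C'(C V^{-1} C')^{-} C V^{-1}
P = C.T @ np.linalg.inv(C @ Vi @ C.T) @ C @ Vi
assert np.allclose(beta_hat @ C, x @ P.T)

# Restriction beta'G = 0: columns of Go span the orthogonal complement of C(G)
G = np.array([[1.0], [1.0], [0.0]])              # made-up restriction matrix
Go = np.linalg.svd(G.T)[2][1:].T                 # k x (k-1) null-space basis of G'
D = Go.T @ C                                     # reduced design for theta'(Go'C)
theta_hat = x @ Vi @ D.T @ np.linalg.inv(D @ Vi @ D.T)
beta_r = theta_hat @ Go.T                        # restricted estimate
assert np.allclose(beta_r @ G, 0.0)              # restriction satisfied exactly
```

Replacing `np.linalg.inv` by a g-inverse (e.g. `np.linalg.pinv`) plus an arbitrary term $\boldsymbol{z}^{\prime}\boldsymbol{C}^{o\prime}$ reproduces the general, rank-deficient form of the solution.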
