Multivariate Statistical Analysis | Hints and Solutions


Survival Distributions

1. For the lack-of-memory property, start with $\mathrm{P}(T>a+b \mid T>a)=\mathrm{P}(T>a+b, T>a) / \mathrm{P}(T>a)$. Second part: consider $\mathrm{P}\left\{(T / \xi)^{\nu}>a\right\}$.
2. Begin with $\mathrm{E}\left(T^{r}\right)=\int_{0}^{\infty} t^{r} f(t)\, d t$, in which the density $f(t)=-d \bar{F}(t) / d t$, and use the definition $\Gamma(a)=\int_{0}^{\infty} x^{a-1} \mathrm{e}^{-x}\, d x$ of the gamma function. You should obtain $\mu_{T}=\mathrm{E}(T)=\xi \Gamma(1+1 / \nu)$ and $\sigma_{T}^{2}=\operatorname{var}(T)=\xi^{2} \Gamma(1+2 / \nu)-\mu_{T}^{2}$.
1. Substituting $s=y-\mu-r \sigma^{2}$,
\begin{aligned} \mathrm{E}\left(T^{r}\right) &=\int_{-\infty}^{\infty} \mathrm{e}^{r y}\left(2 \pi \sigma^{2}\right)^{-1 / 2} \mathrm{e}^{-(y-\mu)^{2} / 2 \sigma^{2}}\, d y \\ &=\left(2 \pi \sigma^{2}\right)^{-1 / 2} \int_{-\infty}^{\infty} \mathrm{e}^{-\left\{\left(y-\mu-r \sigma^{2}\right)^{2}-\left(\mu+r \sigma^{2}\right)^{2}+\mu^{2}\right\} / 2 \sigma^{2}}\, d y \\ &=\mathrm{e}^{\left\{\left(\mu+r \sigma^{2}\right)^{2}-\mu^{2}\right\} / 2 \sigma^{2}}\left(2 \pi \sigma^{2}\right)^{-1 / 2} \int_{-\infty}^{\infty} \mathrm{e}^{-s^{2} / 2 \sigma^{2}}\, d s=\mathrm{e}^{r \mu+r^{2} \sigma^{2} / 2} . \end{aligned}
The survivor function is $\bar{F}(t)=\mathrm{P}\left(\mathrm{e}^{Y}>t\right)=\mathrm{P}(Y>\log t)=1-\Phi\{(\log t-\mu) / \sigma\}$.
2. $\bar{F}(t)=1-\int_{0}^{t} f(s)\, d s=1-\Gamma(\nu)^{-1} \int_{0}^{t / \xi} y^{\nu-1} \mathrm{e}^{-y}\, d y=1-\Gamma(\nu ; t / \xi)$.
3. The density is $-d \bar{F}(t) / d t=(\gamma / \alpha)(1+t / \alpha)^{-\gamma-1}$. Then,
\begin{aligned} \mathrm{E}(1+T / \alpha) &=(\gamma / \alpha) \int_{0}^{\infty}(1+t / \alpha)^{-\gamma}\, d t \\ &=\gamma /(\gamma-1) \text { for } \gamma>1 \text {, so } \mathrm{E}(T)=\alpha /(\gamma-1). \end{aligned}
Likewise, $\mathrm{E}\left\{(1+T / \alpha)^{2}\right\}=\gamma /(\gamma-2)$ for $\gamma>2$, giving $\operatorname{var}(T)=\mathrm{E}\left(T^{2}\right)-\mathrm{E}(T)^{2}=\alpha^{2} \gamma /\left\{(\gamma-1)^{2}(\gamma-2)\right\}$.
For the quantile, $q=\bar{F}\left(t_{q}\right)=\left(1+t_{q} / \alpha\right)^{-\gamma}$ yields $t_{q}=\alpha\left(q^{-1 / \gamma}-1\right)$.
4. Hazard function $h(t)=(\gamma \rho / \alpha)(t / \alpha)^{\rho-1} /\left\{1+(t / \alpha)^{\rho}\right\}$. The function $t^{\rho-1} /\left(1+t^{\rho}\right)$ has derivative $(\rho-1) t^{\rho-2}\left(1+t^{\rho}\right)^{-1}\left\{1-\left(\frac{\rho}{\rho-1}\right)\left(\frac{t^{\rho}}{1+t^{\rho}}\right)\right\}$. For $\rho \leq 1$ this is negative, so DFR; for $\rho>1$, the derivative is positive for small $t$, zero when $t$ solves $\left(\frac{\rho}{\rho-1}\right)=\left(\frac{1+t^{\rho}}{t^{\rho}}\right)$, that is, $t^{\rho}=\rho-1$, and thereafter negative, so the distribution is neither uniformly IFR nor DFR. Weibull-gamma mixture: suppose that $T$ has survivor function, conditional on $\lambda$, $\mathrm{P}(T>t \mid \lambda)=\mathrm{e}^{-\lambda t^{\nu}}$ and that $\lambda$ has a gamma distribution with density $f(\lambda)=\Gamma(\gamma)^{-1} \alpha^{\gamma} \lambda^{\gamma-1} \mathrm{e}^{-\alpha \lambda}$. Then, substituting $s=\alpha \lambda$, the unconditional survivor function of $T$ is
$$\bar{F}(t)=\int_{0}^{\infty} \mathrm{e}^{-\lambda t^{\nu}} f(\lambda)\, d \lambda=\Gamma(\gamma)^{-1} \int_{0}^{\infty} s^{\gamma-1} \mathrm{e}^{-s\left(1+t^{\nu} / \alpha\right)}\, d s=\left(1+t^{\nu} / \alpha\right)^{-\gamma} .$$
5. a. Let $t \rightarrow \infty$ in $\mathrm{P}(T>t)=\bar{F}(t)=\exp \left\{-\int_{0}^{t} h(s)\, d s\right\}$.
b.
\begin{aligned} \mathrm{P}(T>t) &=\exp \left\{-\int_{0}^{t} h(s)\, d s\right\}=\exp \left[-\int_{0}^{t}\left\{h_{1}(s)+h_{2}(s)\right\}\, d s\right] \\ &=\exp \left\{-\int_{0}^{t} h_{1}(s)\, d s\right\} \times \exp \left\{-\int_{0}^{t} h_{2}(s)\, d s\right\} \\ &=\mathrm{P}\left(T_{1}>t\right) \times \mathrm{P}\left(T_{2}>t\right) \\ &=\mathrm{P}\left(T_{1}>t, T_{2}>t\right)=\mathrm{P}\left\{\min \left(T_{1}, T_{2}\right)>t\right\} . \end{aligned}
6. $\mathrm{P}(T>t)=\bar{F}(t)=\exp \left\{-\int_{0}^{t} h(s)\, d s\right\}=\mathrm{e}^{-a t}$ for $0<t \leq t_{0}$, and $\bar{F}(t)=\mathrm{e}^{-a t_{0}-b\left(t-t_{0}\right)}$ for $t>t_{0}$, so $\mathrm{P}\left(T>t_{0}\right)=\mathrm{e}^{-a t_{0}}$ and $\mathrm{P}\left(T>2 t_{0}\right)=\mathrm{e}^{-(a+b) t_{0}}$.
7. Continuous $T$: $\mathrm{E}(T)=\int_{0}^{\infty} t f(t)\, d t=$ (by parts) $[-t \bar{F}(t)]_{0}^{\infty}+\int_{0}^{\infty} \bar{F}(t)\, d t$. Discrete $T$: $\mathrm{E}(T)=\sum_{j=0}^{\infty} j p_{j}=p_{1}+2 p_{2}+3 p_{3}+\cdots=\left(p_{1}+p_{2}+p_{3}+\cdots\right)+\left(p_{2}+p_{3}+\cdots\right)+\left(p_{3}+\cdots\right)+\cdots=\sum_{j=1}^{\infty} q_{j}$, where $q_{j}=\mathrm{P}(T \geq j)$.
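The mixture identity in hint 4 can be checked numerically: drawing $\lambda$ from the gamma mixing density and then $T$ from the conditional Weibull survivor $\mathrm{e}^{-\lambda t^{\nu}}$ should reproduce the Burr survivor $(1+t^{\nu}/\alpha)^{-\gamma}$. A minimal sketch (the parameter values are arbitrary choices for illustration, not from the text):

```python
import math
import random

random.seed(0)
nu, alpha, gam = 1.5, 2.0, 3.0   # arbitrary illustrative parameters
n = 100_000

def draw_t():
    # lambda ~ Gamma(shape=gam, rate=alpha); gammavariate takes shape and scale
    lam = random.gammavariate(gam, 1.0 / alpha)
    # Given lambda, survivor is exp(-lam * t**nu); invert it at a uniform draw
    u = random.random()
    return (-math.log(u) / lam) ** (1.0 / nu)

samples = [draw_t() for _ in range(n)]

for t in (0.5, 1.0, 2.0):
    empirical = sum(s > t for s in samples) / n
    burr = (1.0 + t**nu / alpha) ** (-gam)
    print(f"t={t}: empirical survivor {empirical:.4f}, Burr survivor {burr:.4f}")
```

With $10^{5}$ draws the empirical and analytical survivor probabilities should agree to roughly two decimal places at each $t$.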

Parametric Inference: Frequentist and Bayesian

There are historical arguments about which came first, the chicken (Bayesian approach) or the egg (Frequentist approach). Some of the more vocal proponents of the different approaches to inference have been shouting at each other for years from their respective hilltops. Personally, I cannot raise much enthusiasm for the debate since both approaches have their merits and drawbacks. That said, I do think that the broad differences should be appreciated by the statistician: it is a bit depressing nowadays to hear research students say that they are Bayesian because they do MCMC or because they do Bayesian modelling (meaning statistical modelling).

Let us define a parameter, say $\theta$, here as an unknown constant (maybe a vector) occurring in the expression for the statistical model under consideration. The likelihood function, based on data $D$, is $p(D \mid \theta)$, where $p$ is just used to represent a probability or a density. Both Frequentist and Bayesian will use the likelihood, when it is accessible, to make inferences, but in different ways.

The routine Frequentist approach is to maximise the likelihood over $\theta$ to obtain the maximum likelihood estimate (mle), $\hat{\theta}$ (in regular likelihood cases). Then the machinery of asymptotics can be brought to bear: as the sample size (or the information content of the data) increases, the distribution of $\hat{\theta}$ tends toward normal with mean $\theta$ and covariance matrix estimated as $-l^{\prime \prime}(\hat{\theta})^{-1}$, where $l^{\prime \prime}(\theta)$ is the second derivative (Hessian) matrix of the log-likelihood function, $l(\theta)=\log p(D \mid \theta)$. Standard errors, and the resulting confidence intervals, for component parameters can now be obtained. For hypothesis tests, appropriate likelihood ratio tests can be applied, or asymptotic equivalents such as those based on the score function (score statistics) and the mle (Wald statistics). The latter are less well recommended, though, in view of their lack of invariance under parametric transformation (e.g., Cox and Hinkley, 1974, Section 9.3 [vii]). The asymptotic normal approximation to the distribution of the mle can sometimes be usefully improved by transformation of the parameters (e.g., Cox and Hinkley, 1974, Section 9.3 [vii]).
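This machinery can be made concrete with a small sketch (not from the original text): fitting an exponential lifetime model with hypothetical simulated data, taking the mle, and deriving a standard error from the observed information $-l^{\prime\prime}(\hat{\theta})$, here approximated by a finite difference.

```python
import math
import random

random.seed(1)
# Hypothetical data: 500 exponential lifetimes with true rate 0.5
data = [random.expovariate(0.5) for _ in range(500)]
n, total = len(data), sum(data)

def loglik(theta):
    # l(theta) = n*log(theta) - theta * sum(t_i) for the exponential model
    return n * math.log(theta) - theta * total

# For this model the mle is available in closed form: theta_hat = n / sum(t_i)
theta_hat = n / total

# Observed information via a central-difference second derivative of l
h = 1e-4
d2 = (loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h**2
se = math.sqrt(-1.0 / d2)   # sqrt of inverse observed information

# Wald-type 95% confidence interval
lo, hi = theta_hat - 1.96 * se, theta_hat + 1.96 * se
print(f"mle {theta_hat:.3f}, se {se:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Here $l^{\prime\prime}(\theta)=-n/\theta^{2}$ exactly, so the numerical standard error should match the analytical value $\hat{\theta}/\sqrt{n}$; in richer models the same Hessian-based recipe applies componentwise to the full matrix.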

Bayesian Approach

The general literature in this area is not sparse. O’Hagan and Forster (2004) give a comprehensive, general treatment. For reliability and survival analysis, in particular, the book by Martz and Waller (1982) contains much detail and gives many references to applications. Lifetime Data Analysis (LIDA) published a special issue in 2011: “Bayesian Methods in Survival Analysis.”

I know that the distance from where I am sitting to Tipperary is a long way, because the old song says so, but I don’t know exactly how far. However, I do believe that it is constant, subject to a few earthquakes and my not stirring from this armchair. I would be prepared to say it is about 100 miles, give or take, though geography was never my strong point. Adopting the Bayesian approach, I would have to elaborate on this by specifying a probability that the distance does not exceed 150 miles: in fact I would have to think up a whole probability distribution for the distance. In practice, life is too short for such navel-gazing (as it has been called), and one usually adopts a convenient distribution with suitable attributes, such as an appropriate mean and variance. This is called a prior distribution for the parameter, being the aforesaid distance in this case.

Commonly, it is said that because a parameter is endowed with a probability distribution, it becomes a random variable. To my mind that is a lazy way of looking at it. A random variable, notwithstanding all the measurable function stuff, is a quantity that can take different values on different occasions. How can that be true of an unknown constant? I know that geographical areas are sometimes described as being "on the move," but I do not think that this applies to Tipperary in quite that way.

Note that the prior distribution gives probabilities that are not the usual coin-tossing, die-rolling, card-shuffling types of probabilities; those are frequency-based. It gives subjective probabilities, based on beliefs held by the subject. The crux of the matter is whether such probabilities can be combined with frequency probabilities, that is, whether the prior and the likelihood can be validly multiplied together to form a posterior distribution for $\theta$. Mr. Bayesian, he says yes; Mr. Frequentist, he says no; not sure about Mr. Del Monte.

