## Statistical Inference Writing & Exam Help | STAT6110

statistics-lab™ supports your studies abroad. We have built a solid reputation for Statistical inference writing help and guarantee reliable, high-quality, original Statistics work. Our experts are highly experienced in Statistical inference writing and can handle any related assignment.

## Statistical Inference Writing & Exam Help | Expectation, Variance, and Moment Generating Function of a Random Variable

The ideal situation in life would be to know with certainty what is going to happen next. This being almost never the case, the element of chance enters in all aspects of our life. A r.v. is a mathematical formulation of a random environment. Given that we have to deal with a r.v. $X$, the best thing to expect is to know the values of $X$ and the probabilities with which these values are taken on, for the case that $X$ is discrete, or the probabilities with which $X$ takes values in various subsets of the real line $\Re$ when $X$ is of the continuous type. That is, we would like to know the probability distribution of $X$. In real life, often, even this is not feasible. Instead, we are forced to settle for some numerical characteristics of the distribution of $X$. This line of arguments leads us to the concepts of the mathematical expectation and variance of a r.v., as well as to moments of higher order.
DEFINITION 1
Let $X$ be a (discrete) r.v. taking on the values $x_i$ with corresponding probabilities $f\left(x_i\right), i=1, \ldots, n$. Then the mathematical expectation of $X$ (or just expectation or mean value of $X$ or just mean of $X$ ) is denoted by $E X$ and is defined by:
$$E X=\sum_{i=1}^n x_i f\left(x_i\right)$$
If the r.v. $X$ takes on (countably) infinite many values $x_i$ with corresponding probabilities $f\left(x_i\right), i=1,2, \ldots$, then the expectation of $X$ is defined by:
$$E X=\sum_{i=1}^{\infty} x_i f\left(x_i\right), \quad \text { provided } \sum_{i=1}^{\infty}\left|x_i\right| f\left(x_i\right)<\infty .$$
Finally, if the r.v. $X$ is continuous with p.d.f. $f$, its expectation is defined by:
$$E X=\int_{-\infty}^{\infty} x f(x) d x, \quad \text { provided this integral exists. }$$
The alternative notations $\mu(X)$ or $\mu_X$ are also often used.
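The discrete definition above can be evaluated directly; a minimal sketch (assuming Python with exact rational arithmetic, and a fair six-sided die as an illustrative r.v. not taken from the text):

```python
from fractions import Fraction

# Expectation of a discrete r.v.: a fair six-sided die,
# f(x_i) = 1/6 for x_i = 1, ..., 6.
values = range(1, 7)
f = Fraction(1, 6)

# EX = sum of x_i * f(x_i) over all values x_i.
EX = sum(x * f for x in values)
print(EX)  # 7/2
```

Exact fractions avoid the rounding noise of floating point, so the result is the familiar mean 7/2.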

## Statistical Inference Writing & Exam Help | Some Probability Inequalities

If the r.v. $X$ has a known p.d.f. $f$, then, in principle, we can calculate probabilities $P(X \in B)$ for $B \subseteq \Re$. This, however, is easier said than done in practice. What one would be willing to settle for would be some suitable and computable bounds for such probabilities. This line of thought leads us to the inequalities discussed here.
(i) For any nonnegative r.v. $X$ and for any constant $c>0$, it holds:
$$P(X \geq c) \leq E X / c .$$
(ii) More generally, for any nonnegative function of any r.v. $X, g(X)$, and for any constant $c>0$, it holds:
$$P[g(X) \geq c] \leq E g(X) / c .$$
(iii) By taking $g(X)=|X-E X|^r$ in part (ii), the inequality reduces to the Markov inequality, namely,
$$P(|X-E X| \geq c)=P\left(|X-E X|^r \geq c^r\right) \leq E|X-E X|^r / c^r, \quad r>0 .$$
(iv) In particular, taking $r=2$ in part (iii), we get the Tchebichev inequality, namely,
$$P(|X-E X| \geq c) \leq \frac{E(X-E X)^2}{c^2}=\frac{\sigma^2}{c^2} \quad \text { or } \quad P(|X-E X|<c) \geq 1-\frac{\sigma^2}{c^2},$$
where $\sigma^2$ stands for the $\operatorname{Var}(X)$. Furthermore, if $c=k \sigma$, where $\sigma$ is the s.d. of $X$, then:
$$P(|X-E X| \geq k \sigma) \leq \frac{1}{k^2} \quad \text { or } \quad P(|X-E X|<k \sigma) \geq 1-\frac{1}{k^2} .$$
REMARK 2 From the last expression, it follows that $X$ lies within $k$ s.d.’s from its mean with probability at least $1-\frac{1}{k^2}$, regardless of the distribution of $X$. It is in this sense that the s.d. is used as a yardstick of deviations of $X$ from its mean, as already mentioned elsewhere.

Thus, for example, for $k=2,3$, we obtain, respectively:
$$P(|X-E X|<2 \sigma) \geq 0.75, \quad P(|X-E X|<3 \sigma) \geq \frac{8}{9} \simeq 0.889$$
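The Tchebichev bounds can also be checked empirically. The following sketch simulates an exponential sample (the distribution and sample size are illustrative choices, not from the text) and confirms that the tail probability stays below $1/k^2$ for $k=2,3$:

```python
import random
import statistics

random.seed(0)

# Empirical check of P(|X - EX| >= k*sigma) <= 1/k^2 for an
# exponential(1) sample; any distribution with finite variance works.
sample = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(sample)
sigma = statistics.pstdev(sample)

for k in (2, 3):
    tail = sum(abs(x - mu) >= k * sigma for x in sample) / len(sample)
    print(k, round(tail, 4), "<=", round(1 / k**2, 4))
    assert tail <= 1 / k**2  # the bound holds, typically with much slack
```

For this distribution the true tail probabilities are far below the bounds, which illustrates that Tchebichev is a worst-case, distribution-free guarantee.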


## Statistical Inference Writing & Exam Help | STAT7604


## Statistical Inference Writing & Exam Help | Independent Events and Related Results

In Example 14, it was seen that $P(A \mid B)=P(A)$. Thus, the fact that the event $B$ occurred provides no information in reevaluating the probability of $A$. Under such a circumstance, it is only fitting to say that $A$ is independent of $B$. For any two events $A$ and $B$ with $P(B)>0$, we say that $A$ is independent of $B$, if $P(A \mid B)=P(A)$. If, in addition, $P(A)>0$, then $B$ is also independent of $A$ because
$$P(B \mid A)=\frac{P(B \cap A)}{P(A)}=\frac{P(A \cap B)}{P(A)}=\frac{P(A \mid B) P(B)}{P(A)}=\frac{P(A) P(B)}{P(A)}=P(B) .$$
Because of this symmetry, we then say that $A$ and $B$ are independent. From the definition of either $P(A \mid B)$ or $P(B \mid A)$, it follows then that $P(A \cap B)=P(A) P(B)$. We further observe that this relation is true even if one or both of $P(A), P(B)$ are equal to 0. We take this relation as the defining relation of independence.
DEFINITION 2
Two events $A_1$ and $A_2$ are said to be independent (statistically or stochastically or in the probability sense), if $P\left(A_1 \cap A_2\right)=P\left(A_1\right) P\left(A_2\right)$. When $P\left(A_1 \cap A_2\right) \neq P\left(A_1\right) P\left(A_2\right)$ they are said to be dependent.

REMARK 2 At this point, it should be emphasized that disjointness and independence of two events are two distinct concepts; the former does not even require the concept of probability. Nevertheless, they are related in that, if $A_1 \cap A_2=\varnothing$, then they are independent if and only if at least one of $P\left(A_1\right), P\left(A_2\right)$ is equal to 0. Thus (subject to $A_1 \cap A_2=\varnothing$), $P\left(A_1\right) P\left(A_2\right)>0$ implies that $A_1$ and $A_2$ are definitely dependent.

The definition of independence extends to three events $A_1, A_2, A_3$, as well as to any number $n$ of events $A_1, \ldots, A_n$. Thus, three events $A_1, A_2, A_3$ for which $P\left(A_1 \cap A_2 \cap A_3\right)>0$ are said to be independent, if all conditional probabilities coincide with the respective (unconditional) probabilities:
$$\begin{gathered} P\left(A_1 \mid A_2\right)=P\left(A_1 \mid A_3\right)=P\left(A_1 \mid A_2 \cap A_3\right)=P\left(A_1\right), \\ P\left(A_2 \mid A_1\right)=P\left(A_2 \mid A_3\right)=P\left(A_2 \mid A_1 \cap A_3\right)=P\left(A_2\right), \\ P\left(A_3 \mid A_1\right)=P\left(A_3 \mid A_2\right)=P\left(A_3 \mid A_1 \cap A_2\right)=P\left(A_3\right), \\ P\left(A_1 \cap A_2 \mid A_3\right)=P\left(A_1 \cap A_2\right), \quad P\left(A_1 \cap A_3 \mid A_2\right)=P\left(A_1 \cap A_3\right), \\ P\left(A_2 \cap A_3 \mid A_1\right)=P\left(A_2 \cap A_3\right). \end{gathered}$$
From the definition of conditional probability, the relations above are equivalent to:
$$\begin{gathered} P\left(A_1 \cap A_2\right)=P\left(A_1\right) P\left(A_2\right), \quad P\left(A_1 \cap A_3\right)=P\left(A_1\right) P\left(A_3\right), \\ P\left(A_2 \cap A_3\right)=P\left(A_2\right) P\left(A_3\right), \quad P\left(A_1 \cap A_2 \cap A_3\right)=P\left(A_1\right) P\left(A_2\right) P\left(A_3\right). \end{gathered}$$
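The distinction between pairwise and joint independence can be checked by enumeration. The sketch below uses a standard two-coin example (an illustration assumed here, not taken from the text): all three pairwise product relations hold, yet the triple product relation fails.

```python
from itertools import product
from fractions import Fraction

# Two fair coin tosses; all 4 outcomes equally likely.
S = list(product("HT", repeat=2))

def P(event):
    # Classical probability: favorable outcomes over total outcomes.
    return Fraction(sum(1 for s in S if event(s)), len(S))

def A1(s): return s[0] == "H"   # first toss is heads
def A2(s): return s[1] == "H"   # second toss is heads
def A3(s): return s[0] != s[1]  # the two tosses differ

# Pairwise independence holds: each P(Ai ∩ Aj) = P(Ai) P(Aj) = 1/4.
assert P(lambda s: A1(s) and A2(s)) == P(A1) * P(A2)
assert P(lambda s: A1(s) and A3(s)) == P(A1) * P(A3)
assert P(lambda s: A2(s) and A3(s)) == P(A2) * P(A3)

# But jointly: P(A1 ∩ A2 ∩ A3) = 0 while P(A1)P(A2)P(A3) = 1/8.
assert P(lambda s: A1(s) and A2(s) and A3(s)) != P(A1) * P(A2) * P(A3)
```

This is why the definition for three events requires the fourth, triple-product relation in addition to the three pairwise ones.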

## Statistical Inference Writing & Exam Help | Basic Concepts and Results in Counting

In this brief section, some basic concepts and results are discussed regarding the way of counting the total number of outcomes of an experiment or the total number of different ways we can carry out a task. Although many readers will, undoubtedly, be familiar with parts of or the entire material in this section, it would be advisable, nevertheless, to invest some time here in introducing and adopting some notation, establishing some basic results, and then using them in computing probabilities in the classical probability framework.

Problems of counting arise in a great number of different situations. Here are some of them. In each one of these situations, we are asked to compute the number of different ways that something or other can be done. Here are a few illustrative cases.

(i) Attire yourself by selecting a T-shirt, a pair of trousers, a pair of shoes, and a cap out of $n_1$ T-shirts, $n_2$ pairs of trousers, $n_3$ pairs of shoes, and $n_4$ caps (e.g., $n_1=4, n_2=3, n_3=n_4=2$ ).
(ii) Form all $k$-digit numbers by selecting the $k$ digits out of $n$ available numbers (e.g., $k=2, n=4$ such as $\{1,3,5,7\}$ ).
(iii) Form all California automobile license plates by using one number, three letters and then three numbers in the prescribed order.
(iv) Form all possible codes by using a given set of symbols (e.g., form all “words” of length 10 by using the digits 0 and 1 ).
(v) Place $k$ books on the shelf of a bookcase in all possible ways.
(vi) Place the birthdays of $k$ individuals in the 365 days of a year in all possible ways.
(vii) Place $k$ letters into $k$ addressed envelopes (one letter to each envelope).
(viii) Count all possible outcomes when tossing $k$ distinct dice.
(ix) Select $k$ cards out of a standard deck of playing cards (e.g., for $k=5$, each selection is a poker hand).
(x) Form all possible $k$-member committees out of $n$ available individuals.
The calculation of the numbers asked for in situations (i) through (x) just outlined is in actuality a simple application of the so-called fundamental principle of counting, stated next in the form of a theorem.
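The fundamental principle of counting can be checked by brute-force enumeration. A minimal sketch for case (i), using the hypothetical numbers given there:

```python
from itertools import product

# Case (i): n1 = 4 T-shirts, n2 = 3 trousers, n3 = n4 = 2 (shoes, caps).
n1, n2, n3, n4 = 4, 3, 2, 2

# Each outfit is one choice from every category; product() enumerates
# all combinations, and the count equals n1 * n2 * n3 * n4.
outfits = list(product(range(n1), range(n2), range(n3), range(n4)))
print(len(outfits))  # 48
assert len(outfits) == n1 * n2 * n3 * n4
```

The same enumeration pattern covers cases (ii) through (viii): one `range` per stage of the task.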


## Statistical Inference Writing & Exam Help | STA2023


## Statistical Inference Writing & Exam Help | Bayesian Optimality

The goal of obtaining a smallest confidence set with a specified coverage probability can also be attained using Bayesian criteria. If we have a posterior distribution $\pi(\theta \mid \mathbf{x})$, the posterior distribution of $\theta$ given $\mathbf{X}=\mathbf{x}$, we would like to find the set $C(\mathbf{x})$ that satisfies
(i) $\int_{C(\mathbf{x})} \pi(\theta \mid \mathbf{x}) d \theta=1-\alpha$
(ii) $\quad$ Size $(C(\mathbf{x})) \leq \operatorname{Size}\left(C^{\prime}(\mathbf{x})\right)$
for any set $C^{\prime}(\mathbf{x})$ satisfying $\int_{C^{\prime}(\mathbf{x})} \pi(\theta \mid \mathbf{x}) d \theta \geq 1-\alpha$.
If we take our measure of size to be length, then we can apply Theorem 9.3.2 and obtain the following result.

Corollary 9.3.10 If the posterior density $\pi(\theta \mid \mathbf{x})$ is unimodal, then for a given value of $\alpha$, the shortest credible interval for $\theta$ is given by
$$\{\theta: \pi(\theta \mid \mathbf{x}) \geq k\} \quad \text { where } \int_{\{\theta: \pi(\theta \mid \mathbf{x}) \geq k\}} \pi(\theta \mid \mathbf{x}) d \theta=1-\alpha .$$
The credible set described in Corollary 9.3.10 is called a highest posterior density (HPD) region, as it consists of the values of the parameter for which the posterior density is highest. Notice the similarity in form between the HPD region and the likelihood region.

Example 9.3.11 (Poisson HPD region) In Example 9.2.16 we derived a $1-\alpha$ credible set for a Poisson parameter. We now construct an HPD region. By Corollary 9.3.10, this region is given by $\left\{\lambda: \pi\left(\lambda \mid \sum x\right) \geq k\right\}$, where $k$ is chosen so that
$$1-\alpha=\int_{\{\lambda: \pi(\lambda \mid \Sigma x) \geq k\}} \pi\left(\lambda \mid \sum x\right) d \lambda .$$
Recall that the posterior pdf of $\lambda$ is $\operatorname{gamma}\left(a+\sum x,[n+(1 / b)]^{-1}\right)$, so we need to find $\lambda_L$ and $\lambda_U$ such that
$$\pi\left(\lambda_L \mid \sum x\right)=\pi\left(\lambda_U \mid \sum x\right) \text { and } \int_{\lambda_L}^{\lambda_U} \pi\left(\lambda \mid \sum x\right) d \lambda=1-\alpha$$
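Finding $\lambda_L$ and $\lambda_U$ must be done numerically. The following sketch is a grid approximation of the HPD region for a gamma posterior: it keeps the highest-density points until they carry mass $1-\alpha$, which realizes the set $\{\lambda: \pi(\lambda \mid \sum x) \geq k\}$ without solving for $k$ explicitly. The posterior parameters below (shape 10, scale 0.5) are hypothetical numbers for illustration, not the book's worked example.

```python
import math

def gamma_pdf(x, a, scale):
    # Density of a gamma(a, scale) r.v. at x > 0, via log-space to
    # avoid overflow; math.lgamma is the log of the gamma function.
    return math.exp((a - 1) * math.log(x) - x / scale
                    - math.lgamma(a) - a * math.log(scale))

def hpd_interval(a, scale, alpha, grid_n=100_000, upper=50.0):
    # Sort grid points by density; keep the highest until mass 1 - alpha.
    dx = upper / grid_n
    xs = [(i + 0.5) * dx for i in range(grid_n)]
    dens = [gamma_pdf(x, a, scale) for x in xs]
    order = sorted(range(grid_n), key=dens.__getitem__, reverse=True)
    mass, kept = 0.0, []
    for i in order:
        mass += dens[i] * dx
        kept.append(xs[i])
        if mass >= 1 - alpha:
            break
    # For a unimodal density the kept points form an interval.
    return min(kept), max(kept)

# Hypothetical posterior gamma(a + sum x, [n + 1/b]^{-1}) with
# a + sum x = 10 and scale 0.5.
lam_L, lam_U = hpd_interval(10, 0.5, 0.05)
print(round(lam_L, 3), round(lam_U, 3))

# As required, the densities at the two endpoints (nearly) match.
assert abs(gamma_pdf(lam_L, 10, 0.5) - gamma_pdf(lam_U, 10, 0.5)) < 0.02
```

Because the posterior is unimodal, the highest-density points always form an interval, so the grid method recovers $(\lambda_L, \lambda_U)$ directly.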

## Statistical Inference Writing & Exam Help | Loss Function Optimality

In the previous two sections we looked at optimality of interval estimators by first requiring them to have a minimum coverage probability and then looking for the shortest interval. However, it is possible to put these requirements together in one loss function and use decision theory to search for an optimal estimator. In interval estimation, the action space $\mathcal{A}$ will consist of subsets of the parameter space $\Theta$ and, more formally, we might talk of “set estimation,” since an optimal rule may not necessarily be an interval. However, practical considerations lead us to mainly consider set estimators that are intervals and, happily, many optimal procedures turn out to be intervals.

We use $C$ (for confidence interval) to denote elements of $\mathcal{A}$, with the meaning of the action $C$ being that the interval estimate ” $\theta \in C$ ” is made. A decision rule $\delta(\mathbf{x})$ simply specifies, for each $\mathbf{x} \in \mathcal{X}$, which set $C \in \mathcal{A}$ will be used as an estimate of $\theta$ if $\mathbf{X}=\mathbf{x}$ is observed. Thus we will use the notation $C(\mathbf{x})$, as before.

The loss function in an interval estimation problem usually includes two quantities: a measure of whether the set estimate correctly includes the true value $\theta$ and a measure of the size of the set estimate. We will, for the most part, consider only sets $C$ that are intervals, so a natural measure of size is Length $(C)=$ length of $C$. To express the correctness measure, it is common to use
$$I_C(\theta)= \begin{cases}1 & \theta \in C \\ 0 & \theta \notin C .\end{cases}$$
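One natural way to combine the two quantities is an additive loss of the form $b \cdot \operatorname{Length}(C)-I_C(\theta)$: pay $b$ per unit of length, gain 1 for covering the true $\theta$. The weight $b$ and the numbers in this sketch are assumptions for illustration only:

```python
# Assumed additive interval-estimation loss: length cost minus coverage.
def loss(theta, C, b):
    lo, hi = C
    covered = 1 if lo <= theta <= hi else 0  # this is I_C(theta)
    return b * (hi - lo) - covered

# The interval (-1, 1) with b = 0.1: a small net loss when it covers
# theta = 0, a pure length cost when it misses theta = 3.
print(loss(0.0, (-1, 1), 0.1))  # -0.8
print(loss(3.0, (-1, 1), 0.1))  # 0.2
```

The weight $b$ governs the trade-off: a large $b$ penalizes long intervals heavily, pushing the optimal set estimator toward short intervals even at the cost of coverage.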


## Statistical Inference Writing & Exam Help | Almost Sure Convergence

A type of convergence that is stronger than convergence in probability is almost sure convergence (sometimes confusingly known as convergence with probability 1). This type of convergence is similar to pointwise convergence of a sequence of functions, except that the convergence need not occur on a set with probability 0 (hence the “almost” sure).

Definition 5.5.6 A sequence of random variables, $X_1, X_2, \ldots$, converges almost surely to a random variable $X$ if, for every $\epsilon>0$,
$$P\left(\lim _{n \rightarrow \infty}\left|X_n-X\right|<\epsilon\right)=1$$
Notice the similarity in the statements of Definitions 5.5.1 and 5.5.6. Although they look similar, they are very different statements, with Definition 5.5.6 much stronger. To understand almost sure convergence, we must recall the basic definition of a random variable as given in Definition 1.4.1. A random variable is a real-valued function defined on a sample space $S$. If a sample space $S$ has elements denoted by $s$, then $X_n(s)$ and $X(s)$ are all functions defined on $S$. Definition 5.5.6 states that $X_n$ converges to $X$ almost surely if the functions $X_n(s)$ converge to $X(s)$ for all $s \in S$ except perhaps for $s \in N$, where $N \subset S$ and $P(N)=0$. Example 5.5.7 illustrates almost sure convergence. Example 5.5.8 illustrates the difference between convergence in probability and almost sure convergence.
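A standard concrete illustration of this kind (assumed here; the text's Example 5.5.7 is not reproduced) takes $S=[0,1]$ with uniform probability and $X_n(s)=s+s^n$: then $X_n(s) \to s$ for every $s<1$, and convergence fails only on $\{1\}$, a set of probability 0, so $X_n \to X$ almost surely with $X(s)=s$.

```python
# X_n viewed as a function on the sample space S = [0, 1].
def X_n(n, s):
    return s + s**n

# Pointwise convergence to X(s) = s holds for every s < 1,
# since s**n -> 0 ...
for s in (0.0, 0.3, 0.7, 0.999):
    assert abs(X_n(10_000, s) - s) < 1e-3

# ... and fails only at s = 1, where X_n(1) = 2 for every n;
# {1} has probability 0, hence the convergence is "almost" sure.
assert X_n(10_000, 1.0) == 2.0
```

Evaluating $X_n$ at individual sample points $s$, rather than simulating, mirrors the definition: almost sure convergence is a statement about the functions $X_n(s)$, not about probabilities of deviations at each fixed $n$.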

## Statistical Inference Writing & Exam Help | Convergence in Distribution

We have already encountered the idea of convergence in distribution in Chapter 2. Remember the properties of moment generating functions (mgfs) and how their convergence implies convergence in distribution (Theorem 2.3.12).

Definition 5.5.10 A sequence of random variables, $X_1, X_2, \ldots$, converges in distribution to a random variable $X$ if
$$\lim _{n \rightarrow \infty} F_{X_n}(x)=F_X(x)$$
at all points $x$ where $F_X(x)$ is continuous.
Example 5.5.11 (Maximum of uniforms) If $X_1, X_2, \ldots$ are iid uniform $(0,1)$ and $X_{(n)}=\max _{1 \leq i \leq n} X_i$, let us examine if (and to where) $X_{(n)}$ converges in distribution.
As $n \rightarrow \infty$, we expect $X_{(n)}$ to get close to 1 and, as $X_{(n)}$ must necessarily be less than 1 , we have for any $\varepsilon>0$,
\begin{aligned} P\left(\left|X_{(n)}-1\right| \geq \varepsilon\right) & =P\left(X_{(n)} \geq 1+\varepsilon\right)+P\left(X_{(n)} \leq 1-\varepsilon\right) \\ & =0+P\left(X_{(n)} \leq 1-\varepsilon\right) . \end{aligned}
Next using the fact that we have an iid sample, we can write
$$P\left(X_{(n)} \leq 1-\varepsilon\right)=P\left(X_i \leq 1-\varepsilon, i=1, \ldots, n\right)=(1-\varepsilon)^n$$

which goes to 0. So we have proved that $X_{(n)}$ converges to 1 in probability. However, if we take $\varepsilon=t / n$, we then have
$$P\left(X_{(n)} \leq 1-t / n\right)=(1-t / n)^n \rightarrow e^{-t},$$
which, upon rearranging, yields
$$P\left(n\left(1-X_{(n)}\right) \leq t\right) \rightarrow 1-e^{-t}$$
that is, the random variable $n\left(1-X_{(n)}\right)$ converges in distribution to an exponential(1) random variable.
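The limiting distribution can be verified by simulation. This sketch (the sample size $n$ and replication count are arbitrary choices) compares the empirical CDF of $n(1-X_{(n)})$ with the exponential(1) limit $1-e^{-t}$ at a few points:

```python
import math
import random

random.seed(1)

# Draw n*(1 - max(U_1, ..., U_n)) many times and compare its empirical
# CDF with the limiting exponential(1) CDF, 1 - exp(-t).
n, reps = 500, 5_000
draws = [n * (1 - max(random.random() for _ in range(n)))
         for _ in range(reps)]

for t in (0.5, 1.0, 2.0):
    empirical = sum(d <= t for d in draws) / reps
    limit = 1 - math.exp(-t)
    print(t, round(empirical, 3), round(limit, 3))
    assert abs(empirical - limit) < 0.03  # Monte Carlo tolerance
```

The agreement is already close at $n=500$ because the exact probability $1-(1-t/n)^n$ differs from $1-e^{-t}$ only by $O(1/n)$.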
