### Bayesian Analysis | MAST90125


## INDEPENDENT AND CONDITIONALLY INDEPENDENT

A pair of random variables $(X, Y)$ is said to be independent if for any $A$ and $B$,
$$p(X \in A \mid Y \in B)=p(X \in A),$$
or alternatively $p(Y \in B \mid X \in A)=p(Y \in B)$ (these two definitions are correct and equivalent under very mild conditions that prevent ill-formed conditioning on an event that has zero probability).

Using the chain rule, it can also be shown that the above two definitions are equivalent to the requirement that $p(X \in A, Y \in B)=p(X \in A) p(Y \in B)$ for all $A$ and $B$.
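Both characterizations can be checked numerically on a toy joint distribution (the marginal probabilities below are invented for the illustration): building the joint as a product of marginals makes $X$ and $Y$ independent by construction, and conditioning on $Y$ then leaves the distribution of $X$ unchanged.

```python
# Hypothetical marginals for two binary random variables X and Y. The joint
# is built as p(x, y) = p(x) * p(y), so X and Y are independent by
# construction, matching the product-form characterization above.
p_x = {0: 0.3, 1: 0.7}
p_y = {0: 0.6, 1: 0.4}
joint = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

def cond_x_given_y(x, y):
    """p(X = x | Y = y) = p(x, y) / p(y), with p(y) recovered from the joint."""
    return joint[(x, y)] / sum(joint[(xx, y)] for xx in p_x)

# The first characterization: p(X = x | Y = y) = p(X = x) for every x and y.
independent = all(
    abs(cond_x_given_y(x, y) - p_x[x]) < 1e-12 for x in p_x for y in p_y
)
print(independent)  # True
```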

Independence between random variables implies that the random variables do not provide information about each other: knowing the value of $X$ does not change the probability of $Y$, and vice versa. While independence is an important concept in probability and statistics, in this book we will more frequently make use of a more refined notion, called “conditional independence,” which generalizes the notion of independence described at the beginning of this section. A pair of random variables $(X, Y)$ is conditionally independent given a third random variable $Z$ if, for any $A$, $B$ and $z$, it holds that $p(X \in A \mid Y \in B, Z=z)=p(X \in A \mid Z=z)$.

Conditional independence between two random variables (given a third one) implies that the two variables are not informative about each other, if the value of the third one is known.${ }^{3}$
Conditional independence (and independence) can be generalized to multiple random variables as well. We say that a set of random variables $X_{1}, \ldots, X_{n}$ is mutually conditionally independent given another set of random variables $Z_{1}, \ldots, Z_{m}$ if the following holds for any $A_{1}, \ldots, A_{n}$ and $z_{1}, \ldots, z_{m}$:
$$p\left(X_{1} \in A_{1}, \ldots, X_{n} \in A_{n} \mid Z_{1}=z_{1}, \ldots, Z_{m}=z_{m}\right)=\prod_{i=1}^{n} p\left(X_{i} \in A_{i} \mid Z_{1}=z_{1}, \ldots, Z_{m}=z_{m}\right).$$
This type of independence is stronger than pairwise independence for a set of random variables, in which only pairs of random variables are required to be independent. (Also see the exercises.)
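The gap between the two notions is easiest to see in the unconditional case, with the classic counterexample of two fair coins $X$ and $Y$ and $Z = X \oplus Y$: every pair of variables is independent, yet the three are not mutually independent. A small sketch:

```python
import itertools

# Counterexample: X, Y fair coins, Z = X XOR Y. Every pair is independent,
# but the joint does not factor into the three marginals, so mutual
# independence (the stronger requirement) fails.
outcomes = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)]
p = {o: 0.25 for o in outcomes}  # each (x, y) pair is equally likely

def marginal(i, v):
    """p(X_i = v), summing the joint over the other coordinates."""
    return sum(pr for o, pr in p.items() if o[i] == v)

def pair_prob(i, vi, j, vj):
    """p(X_i = vi, X_j = vj)."""
    return sum(pr for o, pr in p.items() if o[i] == vi and o[j] == vj)

# Pairwise independence: p(X_i, X_j) = p(X_i) p(X_j) for all pairs and values.
pairwise = all(
    abs(pair_prob(i, vi, j, vj) - marginal(i, vi) * marginal(j, vj)) < 1e-12
    for i, j in itertools.combinations(range(3), 2)
    for vi in (0, 1) for vj in (0, 1)
)

# Mutual independence fails: e.g. p(X=0, Y=0, Z=1) = 0, while the product
# of the marginals is 1/8.
mutual = all(
    abs(p.get((x, y, z), 0.0)
        - marginal(0, x) * marginal(1, y) * marginal(2, z)) < 1e-12
    for x in (0, 1) for y in (0, 1) for z in (0, 1)
)
print(pairwise, mutual)  # True False
```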

## EXCHANGEABLE RANDOM VARIABLES

Another type of relationship that can be present between random variables is that of exchangeability. A sequence of random variables $X_{1}, X_{2}, \ldots$ over $\Omega$ is said to be exchangeable if, for any finite subset, permuting the random variables in this finite subset does not change their joint distribution. More formally, for any $S=\left\{a_{1}, \ldots, a_{m}\right\}$ where each $a_{i} \geq 1$ is an integer, and for any permutation $\pi$ on $\{1, \ldots, m\}$, it holds that:${ }^{4}$
$$p\left(x_{a_{1}}, \ldots, x_{a_{m}}\right)=p\left(x_{a_{\pi(1)}}, \ldots, x_{a_{\pi(m)}}\right).$$
Due to a theorem by de Finetti (1980), exchangeability can be thought of as meaning “conditionally independent and identically distributed” in the following sense. De Finetti showed that if a sequence of random variables $X_{1}, X_{2}, \ldots$ is exchangeable, then under some regularity conditions, there exists a sample space $\Theta$ and a distribution $p(\theta)$ over $\Theta$ such that:
$$p\left(X_{a_{1}}, \ldots, X_{a_{m}}\right)=\int_{\theta} \prod_{i=1}^{m} p\left(X_{a_{i}} \mid \theta\right) p(\theta) d \theta,$$
for any set of $m$ integers $\left\{a_{1}, \ldots, a_{m}\right\}$. The interpretation of this is that exchangeable random variables can be represented as a (potentially infinite) mixture distribution. This theorem is also called the “representation theorem.”

The frequentist approach assumes the existence of a fixed set of parameters from which the data were generated, while the Bayesian approach assumes that there is some prior distribution over the set of parameters that generated the data. (This will become clearer as the book progresses.) De Finetti’s theorem provides another connection between the Bayesian approach and the frequentist one. The standard “independent and identically distributed” (i.i.d.) assumption of the frequentist setup can be cast as exchangeability in which $p(\theta)$ is a point-mass distribution over the unknown (but single) parameter from which the data are sampled; the observations are then unconditionally independent and identically distributed. In the Bayesian setup, however, the observations are correlated, because $p(\theta)$ is not a point-mass distribution: the prior distribution plays the role of $p(\theta)$. For a detailed discussion of this connection, see O’Neill (2009).
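The contrast between the two setups can be sketched numerically. In the mixture below (Beta hyperparameters invented for the example), each sequence draws $\theta$ once and then samples Bernoulli observations given $\theta$; marginally the observations come out correlated. Replacing the prior with a point mass removes the correlation.

```python
import numpy as np

# De Finetti-style mixture: theta ~ Beta(2, 2) drawn once per sequence, then
# X1, X2 ~ Bernoulli(theta) i.i.d. given theta. Marginally, X1 and X2 are
# exchangeable and positively correlated because p(theta) is not a point mass
# (the theoretical correlation here is Var(theta) / 0.25 = 0.2).
rng = np.random.default_rng(0)
n = 200_000

theta = rng.beta(2.0, 2.0, size=n)
x1 = rng.random(n) < theta
x2 = rng.random(n) < theta

# Point-mass "prior" (the frequentist i.i.d. setup): the same construction
# with a fixed theta yields unconditionally independent observations.
theta_fixed = 0.5
y1 = rng.random(n) < theta_fixed
y2 = rng.random(n) < theta_fixed

print(np.corrcoef(x1, x2)[0, 1])  # clearly positive
print(np.corrcoef(y1, y2)[0, 1])  # near zero
```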

## EXPECTATIONS OF RANDOM VARIABLES

If we consider again the naive definition of random variables, as functions that map the sample space to real values, then it is also useful to consider various ways in which we can summarize these random variables. One way to get a summary of a random variable is by computing its expectation, which is its weighted mean value according to the underlying probability model.
It is easiest to first consider the expectation of a continuous random variable with a density function. If $p(\theta)$ defines a distribution over the random variable $\theta$, then the expectation of $\theta$, denoted $E[\theta]$, is defined as:
$$E[\theta]=\int_{\theta} p(\theta) \theta d \theta .$$
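As a quick sanity check of this definition, the integral can be approximated numerically for a density whose mean is known in closed form; the Beta(2, 3) choice below is purely illustrative.

```python
import numpy as np
from math import gamma

# Numerically approximate E[theta] = integral of theta * p(theta) d(theta)
# for an illustrative density: Beta(2, 3), whose mean is 2 / (2 + 3) = 0.4.
def beta_pdf(t, a=2.0, b=3.0):
    const = gamma(a + b) / (gamma(a) * gamma(b))
    return const * t ** (a - 1) * (1.0 - t) ** (b - 1)

# Trapezoidal rule on a fine grid over the support [0, 1].
grid = np.linspace(0.0, 1.0, 100_001)
vals = grid * beta_pdf(grid)
dx = grid[1] - grid[0]
expectation = dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
print(round(expectation, 4))  # 0.4
```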
For the discrete random variables that we consider in this book, we usually consider expectations of functions of these random variables. As mentioned in Section 1.2, discrete random variables often take values in a set which is not numeric. In these cases, there is no “mean value” for the values that these random variables take. Instead, we will compute the mean value of a real-valued function of these random variables.
With $f$ being such a function, the expectation $E[f(X)]$ is defined as:
$$E[f(X)]=\sum_{x} p(x) f(x).$$
For the linguistic structures that are used in this book, we will often use a function $f$ that indicates whether a certain property holds for the structure. For example, if the sample space of $X$ is a set of sentences, $f(x)$ can be an indicator function that states whether the word “spring” appears in the sentence $x$: $f(x)=1$ if the word “spring” appears in $x$, and $f(x)=0$ otherwise. In that case, $f(X)$ itself can be thought of as a Bernoulli random variable, i.e., a binary random variable that has a certain probability $\theta$ of being 1 and probability $1-\theta$ of being 0. The expectation $E[f(X)]$ gives the probability that this random variable is 1. Alternatively, $f(x)$ can count how many times the word “spring” appears in the sentence $x$. In that case, it can be viewed as a sum of Bernoulli variables, each indicating whether a certain word in the sentence $x$ is “spring” or not.
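A minimal numerical sketch of the indicator-function case (the toy sentence distribution below is invented for the example): the expectation of the indicator equals the probability that the property holds.

```python
# Toy distribution over a sample space of three sentences (probabilities
# invented for the illustration); they must sum to 1.
sentences = {
    "spring has arrived": 0.2,
    "the river runs in spring": 0.3,
    "winter is long": 0.5,
}

def f(sentence):
    """Indicator: 1 if the word 'spring' appears in the sentence, else 0."""
    return 1 if "spring" in sentence.split() else 0

# E[f(X)] = sum over x of p(x) f(x); for an indicator this is exactly the
# probability that "spring" appears in a sentence drawn from p.
expectation = sum(p * f(x) for x, p in sentences.items())
print(expectation)  # 0.5
```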
