## Statistics Assignment Help | Monte Carlo Method | Markov Jump Processes

statistics-lab™ supports your study-abroad career. We have established a solid reputation for Monte Carlo method coursework, guaranteeing reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with Monte Carlo method assignments of every kind.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science


A Markov jump process $X=\{X_{t}, t \geqslant 0\}$ can be viewed as a continuous-time generalization of a Markov chain and also of a Poisson process. The Markov property (1.30) now reads
$$\mathbb{P}\left(X_{t+s}=x_{t+s} \mid X_{u}=x_{u}, u \leqslant t\right)=\mathbb{P}\left(X_{t+s}=x_{t+s} \mid X_{t}=x_{t}\right) .$$
As in the Markov chain case, one usually assumes that the process is time-homogeneous, that is, $\mathbb{P}\left(X_{t+s}=j \mid X_{t}=i\right)$ does not depend on $t$. Denote this probability by $P_{s}(i, j)$. An important quantity is the transition rate $q_{i j}$ from state $i$ to $j$, defined for $i \neq j$ as
$$q_{i j}=\lim_{t \downarrow 0} \frac{P_{t}(i, j)}{t} .$$
The sum of the rates out of state $i$ is denoted by $q_{i}$. A typical sample path of $X$ is shown in Figure 1.6. The process jumps at times $T_{1}, T_{2}, \ldots$ to states $Y_{1}, Y_{2}, \ldots$, staying some length of time in each state.

More precisely, a Markov jump process $X$ behaves (under suitable regularity conditions; see [3]) as follows:

1. Given its past, the probability that $X$ jumps from its current state $i$ to state $j$ is $K_{i j}=q_{i j} / q_{i}$.
2. The amount of time that $X$ spends in state $j$ has an exponential distribution with mean $1 / q_{j}$, independent of its past history.

The first statement implies that the process $\{Y_{n}\}$ is in fact a Markov chain, with transition matrix $K=\left(K_{i j}\right)$.

A convenient way to describe a Markov jump process is through its transition rate graph. This is similar to a transition graph for Markov chains. The states are represented by the nodes of the graph, and a transition rate from state $i$ to $j$ is indicated by an arrow from $i$ to $j$ with weight $q_{i j}$.
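The two-step description above translates directly into a simulation scheme: hold an $\operatorname{Exp}(q_{i})$ amount of time in the current state and then jump according to $K_{ij}=q_{ij}/q_{i}$. The following minimal Python sketch illustrates this; the three-state rate table is hypothetical, chosen only to make the example concrete.

```python
import random

# Hypothetical transition rates q_ij for a three-state jump process
# (illustrative values only, not from the text).
rates = {
    0: {1: 2.0, 2: 1.0},
    1: {0: 3.0},
    2: {0: 0.5, 1: 0.5},
}

def simulate_jump_process(rates, x0, t_end, rng):
    """Return the jump times and states (T_n, Y_n) up to time t_end."""
    t, state = 0.0, x0
    path = [(t, state)]
    while True:
        out = rates[state]
        q_i = sum(out.values())        # total rate out of the current state
        t += rng.expovariate(q_i)      # Exp(q_i) holding time
        if t >= t_end:
            return path
        u, acc = rng.random() * q_i, 0.0
        for j, q_ij in out.items():    # jump to j with probability K_ij = q_ij / q_i
            acc += q_ij
            if u <= acc:
                state = j
                break
        path.append((t, state))

rng = random.Random(1)
path = simulate_jump_process(rates, 0, 10.0, rng)
```

The sequence of second components of `path` is exactly the embedded Markov chain $\{Y_{n}\}$ described in statement 1.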

## 统计代写|蒙特卡洛方法代写monte carlo method代考|Birth-and-Death Process

A birth-and-death process is a Markov jump process with a transition rate graph of the form given in Figure 1.7. Imagine that $X_{t}$ represents the total number of individuals in a population at time $t$. Jumps to the right correspond to births, and jumps to the left to deaths. The birth rates $\{b_{i}\}$ and the death rates $\{d_{i}\}$ may differ from state to state. Many applications of Markov chains involve processes of this kind. Note that the process jumps from one state to

the next according to a Markov chain with transition probabilities $K_{0,1}=1$, $K_{i, i+1}=b_{i} /\left(b_{i}+d_{i}\right)$, and $K_{i, i-1}=d_{i} /\left(b_{i}+d_{i}\right), i=1,2, \ldots$. Moreover, it spends an $\operatorname{Exp}\left(b_{0}\right)$ amount of time in state 0 and $\operatorname{Exp}\left(b_{i}+d_{i}\right)$ in the other states.
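The general jump-process recipe specializes neatly to this structure. The sketch below uses hypothetical constant rates $b_{i}=1$ and $d_{i}=2$ (not from the text) and respects the boundary behavior at state 0.

```python
import random

def simulate_birth_death(b, d, t_end, rng):
    """Birth-and-death path started at 0; b(i), d(i) give the rates in state i."""
    t, x = 0.0, 0
    path = [(0.0, 0)]
    while True:
        bi = b(x)
        di = d(x) if x > 0 else 0.0              # no deaths out of state 0
        q = bi + di                              # total rate out of state x
        t += rng.expovariate(q)                  # Exp(b_0) in state 0, Exp(b_i + d_i) otherwise
        if t >= t_end:
            return path
        x += 1 if rng.random() < bi / q else -1  # up w.p. b_i / (b_i + d_i)
        path.append((t, x))

rng = random.Random(7)
# Hypothetical constant rates: births at rate 1, deaths at rate 2.
traj = simulate_birth_death(lambda i: 1.0, lambda i: 2.0, 50.0, rng)
```

With these rates the walk drifts toward 0, as one would expect when deaths outpace births.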
Limiting Behavior We now formulate the continuous-time analogues of (1.34) and Theorem 1.13.2. Irreducibility and recurrence for Markov jump processes are defined in the same way as for Markov chains. For simplicity, we assume that $\mathscr{E}=\{1,2, \ldots\}$. If $X$ is a recurrent and irreducible Markov jump process, then regardless of $i$,
$$\lim_{t \rightarrow \infty} \mathbb{P}\left(X_{t}=j \mid X_{0}=i\right)=\pi_{j}$$
for some number $\pi_{j} \geqslant 0$. Moreover, $\pi=\left(\pi_{1}, \pi_{2}, \ldots\right)$ is the solution to
$$\sum_{j \neq i} \pi_{i} q_{i j}=\sum_{j \neq i} \pi_{j} q_{j i}, \quad \text { for all } i=1,2, \ldots$$
with $\sum_{j} \pi_{j}=1$, if such a solution exists, in which case all states are positive recurrent. If such a solution does not exist, all $\pi_{j}$ are 0 .

As in the Markov chain case, $\{\pi_{j}\}$ is called the limiting distribution of $X$ and is usually identified with the row vector $\pi$. Any solution $\pi$ of (1.42) with $\sum_{j} \pi_{j}=1$ is called a stationary distribution, since taking it as the initial distribution of the Markov jump process renders the process stationary.
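For a finite state space, the balance equations (1.42) can be solved numerically. Writing $Q$ for the rate matrix with off-diagonal entries $q_{ij}$ and diagonal entries $-q_{i}$, the equations are equivalent to $\pi Q = \mathbf{0}$. A sketch with an illustrative $3 \times 3$ rate matrix (not from the text):

```python
import numpy as np

# Hypothetical rate matrix Q for a three-state jump process: Q[i, j] = q_ij
# for i != j and Q[i, i] = -q_i, so every row sums to zero.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 3.0, -3.0,  0.0],
    [ 0.5,  0.5, -1.0],
])

# The balance equations sum_{j != i} pi_i q_ij = sum_{j != i} pi_j q_ji are
# equivalent to pi Q = 0; append the normalization sum_j pi_j = 1 and solve.
n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For this particular $Q$, hand calculation gives $\pi = (6/17,\, 5/17,\, 6/17)$, which the least-squares solve reproduces.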

## Statistics Assignment Help | Monte Carlo Method | GAUSSIAN PROCESSES

The normal distribution is also called the Gaussian distribution. Gaussian processes are generalizations of multivariate normal random vectors (discussed in Section 1.10). Specifically, a stochastic process $\{X_{t}, t \in \mathscr{T}\}$ is said to be Gaussian if all its finite-dimensional distributions are Gaussian. That is, if for any choice of $n$ and $t_{1}, \ldots, t_{n} \in \mathscr{T}$, it holds that
$$\left(X_{t_{1}}, \ldots, X_{t_{n}}\right)^{\top} \sim \mathrm{N}(\boldsymbol{\mu}, \Sigma)$$
for some expectation vector $\boldsymbol{\mu}$ and covariance matrix $\Sigma$ (both of which depend on the choice of $t_{1}, \ldots, t_{n}$). Equivalently, $\{X_{t}, t \in \mathscr{T}\}$ is Gaussian if any linear combination $\sum_{i=1}^{n} b_{i} X_{t_{i}}$ has a normal distribution. Note that a Gaussian process is determined completely by its expectation function $\mu_{t}=\mathbb{E}\left[X_{t}\right], t \in \mathscr{T}$, and covariance function $\Sigma_{s, t}=\operatorname{Cov}\left(X_{s}, X_{t}\right), s, t \in \mathscr{T}$.

The Wiener process can be defined as a Gaussian process $\{X_{t}, t \geqslant 0\}$ with expectation function $\mu_{t}=0$ for all $t$ and covariance function $\Sigma_{s, t}=s$ for $0 \leqslant s \leqslant t$. The Wiener process has many fascinating properties (e.g., [11]). For example, it is a Markov process (i.e., it satisfies the Markov property (1.30)) with continuous sample paths that are nowhere differentiable. Moreover, the increments $X_{t}-X_{s}$ over intervals $[s, t]$ are independent and normally distributed. Specifically, for any $t_{1}<t_{2} \leqslant t_{3}<t_{4}$,
$$X_{t_{4}}-X_{t_{3}} \quad \text { and } \quad X_{t_{2}}-X_{t_{1}}$$
are independent random variables, and for all $t \geqslant s \geqslant 0$,
$$X_{t}-X_{s} \sim \mathrm{N}(0, t-s) .$$
This leads to a simple simulation procedure for Wiener processes, which is discussed in Section 2.8.
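Although the simulation procedure proper is deferred to Section 2.8, the independent-increments property already suggests the standard sketch: accumulate independent $\mathrm{N}(0, t_{k}-t_{k-1})$ increments over a time grid. The uniform grid below is an illustrative choice.

```python
import math
import random

def simulate_wiener(t_grid, rng):
    """Wiener path on a grid: X_0 = 0 plus independent N(0, t_k - t_{k-1}) increments."""
    x, path = 0.0, [0.0]
    for s, t in zip(t_grid, t_grid[1:]):
        x += rng.gauss(0.0, math.sqrt(t - s))   # X_t - X_s ~ N(0, t - s)
        path.append(x)
    return path

rng = random.Random(42)
grid = [k / 100 for k in range(101)]            # 0.00, 0.01, ..., 1.00
path = simulate_wiener(grid, rng)
```

Refining the grid gives better and better piecewise-linear approximations of the (nowhere differentiable) continuous path.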

## 统计代写|蒙特卡洛方法代写monte carlo method代考|Markov Jump Processes

q一世j=林吨↓0磷吨(一世,j)吨.

1. 鉴于它的过去，概率X从当前状态跳转一世陈述j是ķ一世j=q一世j/q一世.
2. 的时间量X在州花费j具有均值的指数分布1/qj，独立于其过去的历史。

## 统计代写|蒙特卡洛方法代写monte carlo method代考|Birth-and-Death Process

∑j≠一世圆周率一世q一世j=∑j≠一世圆周率jqj一世, 对全部 一世=1,…,米

## 统计代写|蒙特卡洛方法代写monte carlo method代考|GAUSSIAN PROCESSES

(X吨1,…,X吨n)⊤∼ñ(μ,Σ)

X吨4−X吨3 和 X吨2−X吨1

X吨−Xs∼ñ(0,吨−s).

## Finite Element Method Assignment Help

As a professional service agency for international students, statistics-lab has for many years provided academic services to students in popular study destinations such as the United States, the United Kingdom, Canada, and Australia, including but not limited to essay writing, assignments, dissertations, reports, group projects, proposals, papers, presentations, programming assignments, proofreading and polishing, online course assistance, and exam support. Our services cover every stage of overseas study, from high school through undergraduate and graduate levels, and span 99% of subject areas worldwide, including finance, economics, accounting, auditing, and management. Our writing team includes both professional native English writers and graduate students from top overseas universities; every writer has strong language skills, a solid disciplinary background, and academic writing experience. We promise 100% originality, 100% professionalism, 100% punctuality, and 100% satisfaction.

## MATLAB Assignment Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or Fortran. The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects, which together represent the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

## Statistics Assignment Help | Monte Carlo Method | Classification of States

Let $X$ be a Markov chain with discrete state space $\mathscr{E}$ and transition matrix $P$. We can characterize the relations between states in the following way: If states $i$ and $j$ are such that $P^{t}(i, j)>0$ for some $t \geqslant 0$, we say that $i$ leads to $j$ and write $i \rightarrow j$. We say that $i$ and $j$ communicate if $i \rightarrow j$ and $j \rightarrow i$, and write $i \leftrightarrow j$. Using the relation "$\leftrightarrow$", we can divide $\mathscr{E}$ into equivalence classes such that all the states in an equivalence class communicate with each other but not with any state outside that class. If there is only one equivalence class $(=\mathscr{E})$, the Markov chain is said to be irreducible. If a set of states $\mathscr{A}$ is such that $\sum_{j \in \mathscr{A}} P(i, j)=1$ for all $i \in \mathscr{A}$, then $\mathscr{A}$ is called a closed set. A state $i$ is called an absorbing state if $\{i\}$ is closed. For example, in the transition graph depicted in Figure 1.5, the equivalence classes are $\{1,2\}$, $\{3\}$, and $\{4,5\}$. Class $\{1,2\}$ is the only closed set: the Markov chain cannot escape from it. If state 1 were missing, state 2 would be absorbing. In Example $1.10$ the Markov chain is irreducible since all states communicate.
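The equivalence classes of "$\leftrightarrow$" can be computed mechanically from the transition matrix by mutual reachability. The sketch below uses a hypothetical five-state matrix (states relabeled from 0) whose class structure mimics the one just described: $\{0,1\}$ closed, $\{2\}$ on its own, and $\{3,4\}$ together.

```python
def communicating_classes(P):
    """Partition {0, ..., n-1} into classes of the relation i <-> j."""
    n = len(P)

    def reachable(i):
        seen, stack = {i}, [i]               # i -> i always (t = 0 steps)
        while stack:
            k = stack.pop()
            for j in range(n):
                if P[k][j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    reach = [reachable(i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i not in assigned:
            cls = {j for j in reach[i] if i in reach[j]}   # i <-> j
            classes.append(sorted(cls))
            assigned |= cls
    return classes

# Hypothetical 5-state matrix: {0, 1} closed, {2} on its own, {3, 4} together.
P = [
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.4, 0.5],
    [0.0, 0.0, 0.0, 0.6, 0.4],
]
```

Here `communicating_classes(P)` returns `[[0, 1], [2], [3, 4]]`, and only the class `[0, 1]` is closed.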

Another classification of states is obtained by observing the system from a local point of view. In particular, let $T$ denote the time the chain first visits state $j$, or first returns to $j$ if it started there, and let $N_{j}$ denote the total number of visits to $j$ from time 0 on. We write $\mathbb{P}_{j}(A)$ for $\mathbb{P}\left(A \mid X_{0}=j\right)$ for any event $A$. We denote the corresponding expectation operator by $\mathbb{E}_{j}$. State $j$ is called a recurrent state if $\mathbb{P}_{j}(T<\infty)=1$; otherwise, $j$ is called transient. A recurrent state is called positive recurrent if $\mathbb{E}_{j}[T]<\infty$; otherwise, it is called null recurrent. Finally, a state is said to be periodic, with period $\delta$, if $\delta \geqslant 2$ is the largest integer for which $\mathbb{P}_{j}(T=n \delta \text{ for some } n \geqslant 1)=1$; otherwise, it is called aperiodic. For example, in Figure $1.5$ states 1 and 2 are recurrent, and the other states are transient. All these states are aperiodic. The states of the random walk of Example $1.10$ are periodic with period 2.

It can be shown that recurrence and transience are class properties. In particular, if $i \leftrightarrow j$, then $i$ recurrent (transient) $\Leftrightarrow j$ recurrent (transient). Thus, in an irreducible Markov chain, one state being recurrent implies that all other states are also recurrent. And if one state is transient, then so are all the others.

## Statistics Assignment Help | Monte Carlo Method | Limiting Behavior

The limiting or “steady-state” behavior of Markov chains as $t \rightarrow \infty$ is of considerable interest and importance, and this type of behavior is often simpler to describe and analyze than the “transient” behavior of the chain for fixed $t$. It can be shown (see, for example, [3]) that in an irreducible, aperiodic Markov chain with transition matrix $P$ the $t$-step probabilities converge to a constant that does not depend on the initial state. More specifically,
$$\lim_{t \rightarrow \infty} P^{t}(i, j)=\pi_{j}$$
for some number $0 \leqslant \pi_{j} \leqslant 1$. Moreover, $\pi_{j}>0$ if $j$ is positive recurrent and $\pi_{j}=0$ otherwise. The intuitive reason behind this result is that the process "forgets" where it was initially if it goes on long enough. This is true for both finite and countably infinite Markov chains. The numbers $\{\pi_{j}, j \in \mathscr{E}\}$ form the limiting distribution of the Markov chain, provided that $\pi_{j} \geqslant 0$ and $\sum_{j} \pi_{j}=1$. Note that these conditions are not always satisfied: they are clearly not satisfied if the Markov chain is transient, and they may not be satisfied if the Markov chain is recurrent (i.e., when the states are null recurrent). The following theorem gives a method for obtaining limiting distributions. Here we assume for simplicity that $\mathscr{E}=\{0,1,2, \ldots\}$. The limiting distribution is identified with the row vector $\pi=\left(\pi_{0}, \pi_{1}, \ldots\right)$.

Theorem 1.13.2 For an irreducible, aperiodic Markov chain with transition matrix $P$, if the limiting distribution $\pi$ exists, then it is uniquely determined by the solution of
$$\pi=\pi P$$
with $\pi_{j} \geqslant 0$ and $\sum_{j} \pi_{j}=1$. Conversely, if there exists a positive row vector $\pi$ satisfying (1.35) and summing up to 1, then $\pi$ is the limiting distribution of the Markov chain. Moreover, in that case, $\pi_{j}>0$ for all $j$ and all states are positive recurrent.

Proof: (Sketch) For the case where $\mathscr{E}$ is finite, the result is simply a consequence of (1.33). Namely, with $\pi^{(0)}$ being the $i$-th unit vector, we have
$$P^{t+1}(i, j)=\left(\pi^{(0)} P^{t} P\right)(j)=\sum_{k \in \mathscr{E}} P^{t}(i, k) P(k, j) .$$
Letting $t \rightarrow \infty$, we obtain (1.35) from (1.34), provided that we can change the order of the limit and the summation. To show uniqueness, suppose that another vector $\mathbf{y}$, with $y_{j} \geqslant 0$ and $\sum_{j} y_{j}=1$, satisfies $\mathbf{y}=\mathbf{y} P$. Then it is easy to show by induction that $\mathbf{y}=\mathbf{y} P^{t}$, for every $t$. Hence, letting $t \rightarrow \infty$, we obtain for every $j$
$$y_{j}=\sum_{i} y_{i} \pi_{j}=\pi_{j},$$
since the $\{y_{j}\}$ sum up to unity. We omit the proof of the converse statement.
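Numerically, Theorem 1.13.2 amounts to solving the linear system $\pi = \pi P$ together with $\sum_{j} \pi_{j} = 1$. A sketch for a hypothetical $3 \times 3$ transition matrix (illustrative values only):

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# pi = pi P is equivalent to (P^T - I) pi^T = 0; stack these fixed-point
# equations with the normalization row sum_j pi_j = 1 and solve.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The result is a probability vector fixed by $P$, i.e., a stationary (and here also limiting) distribution.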

## Statistics Assignment Help | Monte Carlo Method | Random Walk on the Positive Integers

This is a slightly different random walk than the one in Example 1.10. Let $X$ be a random walk on $\mathscr{E}=\{0,1,2, \ldots\}$ with transition matrix
$$P=\left(\begin{array}{cccccc} q & p & 0 & \ldots & & \\ q & 0 & p & 0 & \ldots & \\ 0 & q & 0 & p & 0 & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots \end{array}\right)$$
where $0<p<1$ and $q=1-p$. $X_{t}$ could represent, for example, the number of customers who are waiting in a queue at time $t$.

All states can be reached from each other, so the chain is irreducible and every state is either recurrent or transient. The equation $\pi=\pi P$ becomes
\begin{aligned} \pi_{0}&=q \pi_{0}+q \pi_{1} \\ \pi_{1}&=p \pi_{0}+q \pi_{2} \\ \pi_{2}&=p \pi_{1}+q \pi_{3} \\ \pi_{3}&=p \pi_{2}+q \pi_{4} \end{aligned}
and so on. We can solve this set of equations sequentially. If we let $r=p / q$, then we can express $\pi_{1}, \pi_{2}, \ldots$ in terms of $\pi_{0}$ and $r$ as
$$\pi_{j}=r^{j} \pi_{0}, j=0,1,2, \ldots$$
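This solution is easy to verify numerically. For $p < q$ (i.e., $r < 1$), normalizing $\sum_{j} r^{j} \pi_{0} = \pi_{0}/(1-r) = 1$ gives $\pi_{0} = 1 - r$; the sketch below checks the balance equations for the hypothetical value $p = 0.3$.

```python
# Stationary distribution of the random walk on {0, 1, 2, ...} for a
# hypothetical p = 0.3 (so r = p/q < 1 and a normalizable solution exists).
p = 0.3
q = 1.0 - p
r = p / q

# Normalizing sum_j r**j * pi_0 = pi_0 / (1 - r) = 1 gives pi_0 = 1 - r.
pi0 = 1.0 - r
pi = [pi0 * r**j for j in range(200)]

# Check pi_0 = q pi_0 + q pi_1 and pi_j = p pi_{j-1} + q pi_{j+1}.
ok0 = abs(pi[0] - (q * pi[0] + q * pi[1])) < 1e-12
okj = all(abs(pi[j] - (p * pi[j - 1] + q * pi[j + 1])) < 1e-12
          for j in range(1, 199))
```

For $p \geqslant 1/2$ (so $r \geqslant 1$) no such normalization is possible, and all $\pi_{j}$ are 0, consistent with the discussion of limiting behavior above.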

## Statistics Assignment Help | Monte Carlo Method | MARKOV PROCESSES

Markov processes are stochastic processes whose futures are conditionally independent of their pasts given their present values. More formally, a stochastic process $\{X_{t}, t \in \mathscr{T}\}$, with $\mathscr{T} \subseteq \mathbb{R}$, is called a Markov process if, for every $s>0$ and $t$,
$$\left(X_{t+s} \mid X_{u}, u \leqslant t\right) \sim\left(X_{t+s} \mid X_{t}\right)$$
In other words, the conditional distribution of the future variable $X_{t+s}$, given the entire past of the process $\{X_{u}, u \leqslant t\}$, is the same as the conditional distribution of $X_{t+s}$ given only the present $X_{t}$. That is, in order to predict future states, we only need to know the present one. Property (1.30) is called the Markov property.
Depending on the index set $\mathscr{T}$ and state space $\mathscr{E}$ (the set of all values the $\{X_{t}\}$ can take), Markov processes come in many different forms. A Markov process with a discrete index set is called a Markov chain. A Markov process with a discrete state space and a continuous index set (such as $\mathbb{R}$ or $\mathbb{R}_{+}$) is called a Markov jump process.

## Statistics Assignment Help | Monte Carlo Method | Markov Chains

Consider a Markov chain $X=\{X_{t}, t \in \mathbb{N}\}$ with a discrete (i.e., countable) state space $\mathscr{E}$. In this case the Markov property (1.30) is
$$\mathbb{P}\left(X_{t+1}=x_{t+1} \mid X_{0}=x_{0}, \ldots, X_{t}=x_{t}\right)=\mathbb{P}\left(X_{t+1}=x_{t+1} \mid X_{t}=x_{t}\right)$$
for all $x_{0}, \ldots, x_{t+1} \in \mathscr{E}$ and $t \in \mathbb{N}$. We restrict ourselves to Markov chains for which the conditional probabilities
$$\mathbb{P}\left(X_{t+1}=j \mid X_{t}=i\right), i, j \in \mathscr{E}$$
are independent of the time $t$. Such chains are called time-homogeneous. The probabilities in (1.32) are called the (one-step) transition probabilities of $X$. The distribution of $X_{0}$ is called the initial distribution of the Markov chain. The one-step transition probabilities and the initial distribution completely specify the distribution of $X$. Namely, we have by the product rule (1.4) and the Markov property (1.30),
\begin{aligned} \mathbb{P}\left(X_{0}=x_{0}, \ldots, X_{t}=x_{t}\right) &=\mathbb{P}\left(X_{0}=x_{0}\right) \mathbb{P}\left(X_{1}=x_{1} \mid X_{0}=x_{0}\right) \cdots \mathbb{P}\left(X_{t}=x_{t} \mid X_{0}=x_{0}, \ldots, X_{t-1}=x_{t-1}\right) \\ &=\mathbb{P}\left(X_{0}=x_{0}\right) \mathbb{P}\left(X_{1}=x_{1} \mid X_{0}=x_{0}\right) \cdots \mathbb{P}\left(X_{t}=x_{t} \mid X_{t-1}=x_{t-1}\right) . \end{aligned}
Since $\mathscr{E}$ is countable, we can arrange the one-step transition probabilities in an array. This array is called the (one-step) transition matrix of $X$. We usually denote it by $P$. For example, when $\mathscr{E}=\{0,1,2, \ldots\}$, the transition matrix $P$ has the form
$$P=\left(\begin{array}{cccc} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ p_{20} & p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right) . $$
Note that the elements in every row are positive and sum up to unity.
Another convenient way to describe a Markov chain $X$ is through its transition graph. States are indicated by the nodes of the graph, and a strictly positive $(>0)$ transition probability $p_{i j}$ from state $i$ to $j$ is indicated by an arrow from $i$ to $j$ with weight $p_{i j}$.
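Since the initial distribution and the rows of $P$ specify the chain completely, sampling a path is a matter of repeated inverse-transform draws from the current row. A minimal sketch with a hypothetical two-state matrix:

```python
import random

def simulate_chain(P, x0, steps, rng):
    """Draw X_0, ..., X_steps, using row P[state] as the next-state distribution."""
    path, state = [x0], x0
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for j, p_j in enumerate(P[state]):   # inverse-transform draw from the row
            acc += p_j
            if u <= acc:
                state = j
                break
        path.append(state)
    return path

# Hypothetical two-state transition matrix (illustrative values only).
P = [[0.9, 0.1],
     [0.5, 0.5]]
rng = random.Random(0)
path = simulate_chain(P, 0, 1000, rng)
```

The empirical fraction of time spent in each state will, for a chain like this one, settle toward the limiting distribution discussed below.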

## Statistics Assignment Help | Monte Carlo Method | Random Walk on the Integers

Let $p$ be a number between 0 and 1 . The Markov chain $X$ with state space $\mathbb{Z}$ and transition matrix $P$ defined by
$$P(i, i+1)=p, \quad P(i, i-1)=q=1-p, \quad \text { for all } i \in \mathbb{Z}$$
is called a random walk on the integers. Let $X$ start at $0$; thus, $\mathbb{P}\left(X_{0}=0\right)=1$. The corresponding transition graph is given in Figure 1.4. Starting at 0, the chain takes subsequent steps to the right with probability $p$ and to the left with probability $q$.
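This walk can be simulated without storing a transition matrix at all, since each step is just $\pm 1$:

```python
import random

def random_walk(p, steps, rng):
    """Random walk on the integers started at 0: +1 w.p. p, -1 w.p. q = 1 - p."""
    x, path = 0, [0]
    for _ in range(steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

rng = random.Random(3)
walk = random_walk(0.5, 100, rng)   # symmetric walk (hypothetical p = 1/2), 100 steps
```

Note that $X_{t}$ always has the same parity as $t$, which is the period-2 behavior of this chain noted earlier.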

We show next how to calculate the probability that, starting from state $i$ at some (discrete) time $t$, we are in $j$ at (discrete) time $t+s$, that is, the probability $\mathbb{P}\left(X_{t+s}=j \mid X_{t}=i\right)$. For clarity, let us assume that $\mathscr{E}=\{1,2, \ldots, m\}$ for some fixed $m$, so that $P$ is an $m \times m$ matrix. For $t=0,1,2, \ldots$, define the row vector
$$\boldsymbol{\pi}^{(t)}=\left(\mathbb{P}\left(X_{t}=1\right), \ldots, \mathbb{P}\left(X_{t}=m\right)\right)$$
We call $\pi^{(t)}$ the distribution vector, or simply the distribution, of $X$ at time $t$ and $\pi^{(0)}$ the initial distribution of $X$. The following result shows that the $t$-step probabilities can be found simply by matrix multiplication.
Theorem 1.13.1 The distribution of $X$ at time $t$ is given by
$$\pi^{(t)}=\pi^{(0)} P^{t}$$
for all $t=0,1, \ldots$ (Here $P^{0}$ denotes the identity matrix.)
Proof: The proof is by induction. Equality (1.33) holds for $t=0$ by definition. Suppose that this equality is true for some $t=0,1, \ldots$. We have
$$\mathbb{P}\left(X_{t+1}=k\right)=\sum_{i=1}^{m} \mathbb{P}\left(X_{t+1}=k \mid X_{t}=i\right) \mathbb{P}\left(X_{t}=i\right)$$
But (1.33) is assumed to be true for $t$, so $\mathbb{P}\left(X_{t}=i\right)$ is the $i$-th element of $\pi^{(0)} P^{t}$. Moreover, $\mathbb{P}\left(X_{t+1}=k \mid X_{t}=i\right)$ is the $(i, k)$-th element of $P$. Therefore, for every $k$,
$$\sum_{i=1}^{m} \mathbb{P}\left(X_{t+1}=k \mid X_{t}=i\right) \mathbb{P}\left(X_{t}=i\right)=\sum_{i=1}^{m} P(i, k)\left(\boldsymbol{\pi}^{(0)} P^{t}\right)(i)$$
which is just the $k$-th element of $\pi^{(0)} P^{t+1}$. This completes the induction step, and thus the theorem is proved.

By taking $\pi^{(0)}$ as the $i$-th unit vector, $\mathbf{e}_{i}$, the $t$-step transition probabilities can be found as $\mathbb{P}\left(X_{t}=j \mid X_{0}=i\right)=\left(\mathbf{e}_{i} P^{t}\right)(j)=P^{t}(i, j)$, which is the $(i, j)$-th element of matrix $P^{t}$. Thus, to find the $t$-step transition probabilities, we just have to compute the $t$-th power of $P$.
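In practice the matrix-power recipe is a one-liner. The sketch below uses a hypothetical two-state matrix, computes the distribution of $X_{3}$ started from state 0, and recovers it as the first row of $P^{3}$:

```python
import numpy as np

# Hypothetical two-state chain; the distribution of X_t is pi^(0) P^t.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi0 = np.array([1.0, 0.0])         # start in state 0 with probability 1

P3 = np.linalg.matrix_power(P, 3)  # t-step transition probabilities P^3(i, j)
pi_t = pi0 @ P3                    # distribution of X_3
```

Hand-multiplying confirms $P^{3}(0,0) = 0.844$ for this matrix, matching the computed entry.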

## Statistics Assignment Help | Monte Carlo Method | JOINTLY NORMAL RANDOM VARIABLES

It is helpful to view normally distributed random variables as simple transformations of standard normal – that is, $\mathrm{N}(0,1)$-distributed – random variables. In particular, let $X \sim \mathrm{N}(0,1)$. Then $X$ has density $f_{X}$ given by
$$f_{X}(x)=\frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-\frac{x^{2}}{2}} .$$
Now consider the transformation $Z=\mu+\sigma X$. Then, by (1.15), $Z$ has density
$$f_{Z}(z)=\frac{1}{\sqrt{2 \pi \sigma^{2}}} \mathrm{e}^{-\frac{(z-\mu)^{2}}{2 \sigma^{2}}} .$$
In other words, $Z \sim \mathrm{N}\left(\mu, \sigma^{2}\right)$. We can also state this as follows: if $Z \sim \mathrm{N}\left(\mu, \sigma^{2}\right)$, then $(Z-\mu) / \sigma \sim \mathrm{N}(0,1)$. This procedure is called standardization.

We now generalize this to $n$ dimensions. Let $X_{1}, \ldots, X_{n}$ be independent and standard normal random variables. The joint pdf of $\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)^{\top}$ is given by
$$f_{\mathbf{X}}(\mathbf{x})=(2 \pi)^{-n / 2} \mathrm{e}^{-\frac{1}{2} \mathbf{x}^{\top} \mathbf{x}}, \quad \mathbf{x} \in \mathbb{R}^{n} .$$
Consider the affine transformation (i.e., a linear transformation plus a constant vector)
$$\mathbf{Z}=\boldsymbol{\mu}+B \mathbf{X}$$
for some $m \times n$ matrix $B$. Note that, by Theorem 1.8.1, $\mathbf{Z}$ has expectation vector $\boldsymbol{\mu}$ and covariance matrix $\Sigma=B B^{\top}$. Any random vector of the form (1.23) is said to have a jointly normal or multivariate normal distribution. We write $\mathbf{Z} \sim \mathrm{N}(\boldsymbol{\mu}, \Sigma)$. Suppose that $B$ is an invertible $n \times n$ matrix. Then, by (1.19), the density of $\mathbf{Y}=\mathbf{Z}-\boldsymbol{\mu}$ is given by
$$f_{\mathbf{Y}}(\mathbf{y})=\frac{1}{|B| \sqrt{(2 \pi)^{n}}} \mathrm{e}^{-\frac{1}{2}\left(B^{-1} \mathbf{y}\right)^{\top} B^{-1} \mathbf{y}}=\frac{1}{|B| \sqrt{(2 \pi)^{n}}} \mathrm{e}^{-\frac{1}{2} \mathbf{y}^{\top}\left(B^{-1}\right)^{\top} B^{-1} \mathbf{y}} .$$
We have $|B|=\sqrt{|\Sigma|}$ and $\left(B^{-1}\right)^{\top} B^{-1}=\left(B^{\top}\right)^{-1} B^{-1}=\left(B B^{\top}\right)^{-1}=\Sigma^{-1}$, so that
$$f_{\mathbf{Y}}(\mathbf{y})=\frac{1}{\sqrt{(2 \pi)^{n}|\Sigma|}} \mathrm{e}^{-\frac{1}{2} \mathbf{y}^{\top} \Sigma^{-1} \mathbf{y}} .$$
Because $\mathbf{Z}$ is obtained from $\mathbf{Y}$ by simply adding a constant vector $\boldsymbol{\mu}$, we have $f_{\mathbf{Z}}(\mathbf{z})=f_{\mathbf{Y}}(\mathbf{z}-\boldsymbol{\mu})$, and therefore
$$f_{\mathbf{Z}}(\mathbf{z})=\frac{1}{\sqrt{(2 \pi)^{n}|\Sigma|}} \mathrm{e}^{-\frac{1}{2}(\mathbf{z}-\boldsymbol{\mu})^{\top} \Sigma^{-1}(\mathbf{z}-\boldsymbol{\mu})}, \quad \mathbf{z} \in \mathbb{R}^{n} .$$
Note that this formula is very similar to that of the one-dimensional case.
Conversely, given a covariance matrix $\Sigma=\left(\sigma_{i j}\right)$, there exists a unique lower triangular matrix
$$B=\left(\begin{array}{cccc} b_{11} & 0 & \cdots & 0 \\ b_{21} & b_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ b_{n 1} & b_{n 2} & \cdots & b_{n n} \end{array}\right)$$
such that $\Sigma=B B^{\top}$. This matrix can be obtained efficiently via the Cholesky square root method; see Section A.1 of the Appendix.
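Putting (1.23) and the Cholesky factorization together gives the standard recipe for sampling from $\mathrm{N}(\boldsymbol{\mu}, \Sigma)$: factor $\Sigma = BB^{\top}$, draw iid standard normals, and apply the affine map. A sketch with a hypothetical $2 \times 2$ covariance matrix:

```python
import numpy as np

# Hypothetical target parameters mu and Sigma (Sigma must be positive definite).
mu = np.array([1.0, -2.0])
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])

B = np.linalg.cholesky(Sigma)            # lower triangular, with B @ B.T == Sigma

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 2))    # rows of iid N(0, 1) components
Z = mu + X @ B.T                         # each row is mu + B x, so Z ~ N(mu, Sigma)

sample_mean = Z.mean(axis=0)
sample_cov = np.cov(Z, rowvar=False)
```

With this many samples, the empirical mean and covariance of `Z` closely match $\boldsymbol{\mu}$ and $\Sigma$.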

## Statistics Assignment Help | Monte Carlo Method | LIMIT THEOREMS

We briefly discuss two of the main results in probability: the law of large numbers and the central limit theorem. Both are associated with sums of independent random variables.

Let $X_{1}, X_{2}, \ldots$ be iid random variables with expectation $\mu$ and variance $\sigma^{2}$. For each $n$, let $S_{n}=X_{1}+\cdots+X_{n}$. Since $X_{1}, X_{2}, \ldots$ are iid, we have $\mathbb{E}\left[S_{n}\right]=n \mathbb{E}\left[X_{1}\right]=$ $n \mu$ and $\operatorname{Var}\left(S_{n}\right)=n \operatorname{Var}\left(X_{1}\right)=n \sigma^{2}$.

The law of large numbers states that $S_{n} / n$ is close to $\mu$ for large $n$. Here is the more precise statement.

Theorem 1.11.1 (Strong Law of Large Numbers) If $X_{1}, \ldots, X_{n}$ are iid with expectation $\mu$, then
$$\mathbb{P}\left(\lim_{n \rightarrow \infty} \frac{S_{n}}{n}=\mu\right)=1 .$$
The central limit theorem describes the limiting distribution of $S_{n}$ (or $S_{n} / n$ ), and it applies to both continuous and discrete random variables. Loosely, it states that the random sum $S_{n}$ has a distribution that is approximately normal, when $n$ is large. The more precise statement is given next.

Theorem 1.11.2 (Central Limit Theorem) If $X_{1}, \ldots, X_{n}$ are iid with expectation $\mu$ and variance $\sigma^{2}<\infty$, then for all $x \in \mathbb{R}$,
$$\lim_{n \rightarrow \infty} \mathbb{P}\left(\frac{S_{n}-n \mu}{\sigma \sqrt{n}} \leqslant x\right)=\Phi(x),$$
where $\Phi$ is the cdf of the standard normal distribution.
In other words, $S_{n}$ has a distribution that is approximately normal, with expectation $n \mu$ and variance $n \sigma^{2}$. To see the central limit theorem in action, consider Figure 1.2. The left part shows the pdfs of $S_{1}, \ldots, S_{4}$ for the case where the $\left{X_{i}\right}$ have a $U[0,1]$ distribution. The right part shows the same for the $\operatorname{Exp}(1)$ distribution. We clearly see convergence to a bell-shaped curve, characteristic of the normal distribution.

## Statistics Assignment Help | Monte Carlo Method | POISSON PROCESSES

The Poisson process is used to model certain kinds of arrivals or patterns. Imagine, for example, a telescope that can detect individual photons from a faraway galaxy. The photons arrive at random times $T_{1}, T_{2}, \ldots$. Let $N_{t}$ denote the number of arrivals in the time interval $[0, t]$, that is, $N_{t}=\sup \{k: T_{k} \leqslant t\}$. Note that the number of arrivals in an interval $I=(a, b]$ is given by $N_{b}-N_{a}$. We will also denote it by $N(a, b]$. A sample path of the arrival counting process $\{N_{t}, t \geqslant 0\}$ is given in Figure 1.3.

For this particular arrival process, one would assume that the number of arrivals in an interval $(a, b)$ is independent of the number of arrivals in interval $(c, d)$ when the two intervals do not intersect. Such considerations lead to the following definition:

Definition 1.12.1 (Poisson Process) An arrival counting process $N=\{N_{t}\}$ is called a Poisson process with rate $\lambda>0$ if
(a) The numbers of points in nonoverlapping intervals are independent.
(b) The number of points in interval $I$ has a Poisson distribution with mean $\lambda \times \operatorname{length}(I)$.
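A standard consequence of this definition, not derived in the excerpt but assumed in the sketch below, is that the interarrival times $T_{k}-T_{k-1}$ are iid $\operatorname{Exp}(\lambda)$. This gives a direct way to generate the arrival times:

```python
import random

def poisson_arrivals(lam, t_end, rng):
    """Arrival times in (0, t_end]: cumulative Exp(lam) interarrival gaps."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t > t_end:
            return arrivals
        arrivals.append(t)

rng = random.Random(5)
lam, t_end = 2.0, 10.0                 # hypothetical rate and horizon
arrivals = poisson_arrivals(lam, t_end, rng)   # one sample path

# Property (b): N_{t_end} should have mean lam * t_end = 20.
counts = [len(poisson_arrivals(lam, t_end, rng)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
```

Averaging the counts over many replications checks property (b) of the definition empirically.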

## 统计代写|蒙特卡洛方法代写monte carlo method代考|JOINTLY NORMAL RANDOM VARIABLES

FX(X)=12圆周率和−X22.

F从(和)=12圆周率σ2和−(和−μ)22σ2.

FX(X)=(2圆周率)−n/2和−12X⊤X,X∈Rn.

F是(是)=1|乙|(2圆周率)n和−12(乙−1是)⊤乙−1是=1|乙|(2圆周率)n和−12是⊤(乙−1)⊤乙−1是.

F是(是)=1(2圆周率)n|Σ|和−12是⊤Σ−1是.

F从(和)=1(2圆周率)n|Σ|和−12(和−μ)⊤Σ−1(和−μ),和∈Rn.

## 统计代写|蒙特卡洛方法代写monte carlo method代考|POISSON PROCESSES

(a) 非重叠区间中的点数是独立的。
(b) 区间点数一世具有均值的泊松分布λ×长度⁡(一世).

## 有限元方法代写

tatistics-lab作为专业的留学生服务机构，多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务，包括但不限于Essay代写，Assignment代写，Dissertation代写，Report代写，小组作业代写，Proposal代写，Paper代写，Presentation代写，计算机作业代写，论文修改和润色，网课代做，exam代考等等。写作范围涵盖高中，本科，研究生等海外留学全阶段，辐射金融，经济学，会计学，审计学，管理学等全球99%专业科目。写作团队既有专业英语母语作者，也有海外名校硕博留学生，每位写作老师都拥有过硬的语言能力，专业的学科背景和学术写作经验。我们承诺100%原创，100%专业，100%准时，100%满意。

## MATLAB代写

MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中，其中问题和解决方案以熟悉的数学符号表示。典型用途包括：数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发，包括图形用户界面构建MATLAB 是一个交互式系统，其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题，尤其是那些具有矩阵和向量公式的问题，而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问，这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展，得到了许多用户的投入。在大学环境中，它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域，MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要，工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数（M 文件）的综合集合，可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。

## 统计代写|蒙特卡洛方法代写monte carlo method代考|FUNCTIONS OF RANDOM VARIABLES

statistics-lab™ 为您的留学生涯保驾护航 在代写蒙特卡洛方法学monte carlo method方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写蒙特卡洛方法学monte carlo method代写方面经验极为丰富，各种代写蒙特卡洛方法学monte carlo method相关的作业也就用不着说。

• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础

## FUNCTIONS OF RANDOM VARIABLES

Suppose that $X_{1}, \ldots, X_{n}$ are measurements of a random experiment. Often we are only interested in certain functions of the measurements rather than the individual measurements. Here are some examples.
EXAMPLE $1.5$
Let $X$ be a continuous random variable with pdf $f_{X}$ and let $Z=a X+b$, where $a \neq 0$. We wish to determine the pdf $f_{Z}$ of $Z$. Suppose that $a>0$. We have for any $z$
$$F_{Z}(z)=\mathbb{P}(Z \leqslant z)=\mathbb{P}(X \leqslant(z-b) / a)=F_{X}((z-b) / a) .$$
Differentiating this with respect to $z$ gives $f_{Z}(z)=f_{X}((z-b) / a) / a$. For $a<0$ we similarly obtain $f_{Z}(z)=f_{X}((z-b) / a) /(-a)$. Thus, in general,
$$f_{Z}(z)=\frac{1}{|a|} f_{X}\left(\frac{z-b}{a}\right) .$$
EXAMPLE $1.6$
Generalizing the previous example, suppose that $Z=g(X)$ for some monotonically increasing function $g$. To find the pdf of $Z$ from that of $X$ we first write
$$F_{Z}(z)=\mathbb{P}(Z \leqslant z)=\mathbb{P}\left(X \leqslant g^{-1}(z)\right)=F_{X}\left(g^{-1}(z)\right),$$
where $g^{-1}$ is the inverse of $g$. Differentiating with respect to $z$ now gives
$$f_{Z}(z)=f_{X}\left(g^{-1}(z)\right) \frac{\mathrm{d}}{\mathrm{d} z} g^{-1}(z)=\frac{f_{X}\left(g^{-1}(z)\right)}{g^{\prime}\left(g^{-1}(z)\right)} .$$
For monotonically decreasing functions, $\frac{\mathrm{d}}{\mathrm{d} z} g^{-1}(z)$ in the first equation needs to be replaced with its negative value.
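The transformation rule is easy to sanity-check numerically. In the sketch below (Python; the choice $g(x)=\mathrm{e}^{x}$ with $X$ standard normal is just an illustrative example), $f_{Z}$ computed from the rule is compared with a numerical derivative of $F_{Z}(z)=F_{X}\left(g^{-1}(z)\right)$:

```python
import math

# Example: X ~ N(0,1) and Z = g(X) = exp(X), so
# g^{-1}(z) = log z and d g^{-1}/dz = 1/z.
def phi(x):                       # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):                       # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_Z(z):                       # transformation rule: f_X(g^{-1}(z)) |d g^{-1}/dz|
    return phi(math.log(z)) / z

# Cross-check against a central difference of F_Z(z) = Phi(log z).
z, h = 1.7, 1e-6
numeric = (Phi(math.log(z + h)) - Phi(math.log(z - h))) / (2 * h)
print(round(f_Z(z), 6), round(numeric, 6))
```

The two values agree to well beyond the step size of the difference quotient.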

## Linear Transformations

Let $\mathbf{x}=\left(x_{1}, \ldots, x_{n}\right)^{\top}$ be a column vector in $\mathbb{R}^{n}$ and $A$ an $m \times n$ matrix. The mapping $\mathbf{x} \mapsto \mathbf{z}$, with $\mathbf{z}=A \mathbf{x}$, is called a linear transformation. Now consider a random vector $\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)^{\top}$, and let
$$\mathbf{Z}=A \mathbf{X}$$
Then $\mathbf{Z}$ is a random vector in $\mathbb{R}^{m}$. In principle, if we know the joint distribution of $\mathbf{X}$, then we can derive the joint distribution of $\mathbf{Z}$. Let us first see how the expectation vector and covariance matrix are transformed.

Theorem 1.8.1 If $\mathbf{X}$ has an expectation vector $\boldsymbol{\mu}_{\mathbf{X}}$ and covariance matrix $\Sigma_{\mathbf{X}}$, then the expectation vector and covariance matrix of $\mathbf{Z}=A \mathbf{X}$ are given by
$$\mu_{\mathbf{Z}}=A \mu_{\mathbf{X}}$$
and
$$\Sigma_{\mathbf{Z}}=A \Sigma_{\mathbf{X}} A^{\top} .$$
Proof: We have $\boldsymbol{\mu}_{\mathbf{Z}}=\mathbb{E}[\mathbf{Z}]=\mathbb{E}[A \mathbf{X}]=A \,\mathbb{E}[\mathbf{X}]=A \boldsymbol{\mu}_{\mathbf{X}}$ and
$$\begin{aligned} \Sigma_{\mathbf{Z}} &=\mathbb{E}\left[\left(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}}\right)\left(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}}\right)^{\top}\right]=\mathbb{E}\left[A\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\left(A\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\right)^{\top}\right] \\ &=A \,\mathbb{E}\left[\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)^{\top}\right] A^{\top} \\ &=A \Sigma_{\mathbf{X}} A^{\top} . \end{aligned}$$
Suppose that $A$ is an invertible $n \times n$ matrix. If $\mathbf{X}$ has a joint density $f_{\mathbf{X}}$, what is the joint density $f_{\mathbf{Z}}$ of $\mathbf{Z}$? Consider Figure 1.1. For any fixed $\mathbf{x}$, let $\mathbf{z}=A \mathbf{x}$. Hence, $\mathbf{x}=A^{-1} \mathbf{z}$. Consider the $n$-dimensional cube $C=\left[z_{1}, z_{1}+h\right] \times \cdots \times\left[z_{n}, z_{n}+h\right]$. Let $D$ be the image of $C$ under $A^{-1}$, that is, the parallelepiped of all points $\mathbf{x}$ such that $A \mathbf{x} \in C$. Then,
$$\mathbb{P}(\mathbf{Z} \in C) \approx h^{n} f_{\mathbf{Z}}(\mathbf{z})$$
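Theorem 1.8.1 is easy to check by simulation. The sketch below (Python; the matrix $A$ and the component means and variances are arbitrary choices, and independent normal components are assumed only for convenience) estimates the expectation vector and covariance matrix of $\mathbf{Z}=A\mathbf{X}$ from samples and compares them with $A\boldsymbol{\mu}_{\mathbf{X}}$ and $A\Sigma_{\mathbf{X}}A^{\top}$:

```python
import random

random.seed(1)
A = [[1.0, 1.0],
     [1.0, -1.0]]
mu_X = [1.0, 2.0]   # component means
sd_X = [1.0, 2.0]   # independent components, so Sigma_X = diag(1, 4)

# Theorem 1.8.1 predicts mu_Z = A mu_X = (3, -1) and
# Sigma_Z = A Sigma_X A^T = [[5, -3], [-3, 5]].
n = 200_000
zs = []
for _ in range(n):
    x = [mu_X[i] + sd_X[i] * random.gauss(0, 1) for i in range(2)]
    zs.append([A[r][0] * x[0] + A[r][1] * x[1] for r in range(2)])

mu_Z = [sum(z[r] for z in zs) / n for r in range(2)]
cov_Z = [[sum((z[r] - mu_Z[r]) * (z[s] - mu_Z[s]) for z in zs) / n
          for s in range(2)] for r in range(2)]
print([round(m, 1) for m in mu_Z],
      [[round(c, 1) for c in row] for row in cov_Z])
```

Note that the theorem holds for any distribution of $\mathbf{X}$, not just the normal one used here.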

## General Transformations

We can apply reasoning similar to that above to deal with general transformations $\mathbf{x} \mapsto \boldsymbol{g}(\mathbf{x})$, written out as
$$\left(\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array}\right) \mapsto\left(\begin{array}{c} g_{1}(\mathbf{x}) \\ g_{2}(\mathbf{x}) \\ \vdots \\ g_{n}(\mathbf{x}) \end{array}\right) .$$
For a fixed $\mathbf{x}$, let $\mathbf{z}=\boldsymbol{g}(\mathbf{x})$. Suppose that $\boldsymbol{g}$ is invertible; hence $\mathbf{x}=\boldsymbol{g}^{-1}(\mathbf{z})$. Any infinitesimal $n$-dimensional rectangle at $\mathbf{x}$ with volume $V$ is transformed into an $n$-dimensional parallelepiped at $\mathbf{z}$ with volume $V\left|J_{\mathbf{x}}(\boldsymbol{g})\right|$, where $J_{\mathbf{x}}(\boldsymbol{g})$ is the Jacobian matrix at $\mathbf{x}$ of the transformation $\boldsymbol{g}$, that is,
$$J_{\mathbf{x}}(\boldsymbol{g})=\left(\begin{array}{ccc} \frac{\partial g_{1}}{\partial x_{1}} & \cdots & \frac{\partial g_{1}}{\partial x_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_{n}}{\partial x_{1}} & \cdots & \frac{\partial g_{n}}{\partial x_{n}} \end{array}\right) .$$
Now consider a random column vector $\mathbf{Z}=\boldsymbol{g}(\mathbf{X})$. Let $C$ be a small cube around $\mathbf{z}$ with volume $h^{n}$. Let $D$ be the image of $C$ under $\boldsymbol{g}^{-1}$. Then, as in the linear case,
$$\mathbb{P}(\mathbf{Z} \in C) \approx h^{n} f_{\mathbf{Z}}(\mathbf{z}) \approx h^{n}\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right| f_{\mathbf{X}}(\mathbf{x}) .$$
Hence we have the transformation rule
$$f_{\mathbf{Z}}(\mathbf{z})=f_{\mathbf{X}}\left(\boldsymbol{g}^{-1}(\mathbf{z})\right)\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right|, \quad \mathbf{z} \in \mathbb{R}^{n} .$$
$\left(\right.$ Note: $\left.\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right|=1 /\left|J_{\mathbf{x}}(\boldsymbol{g})\right| .\right)$
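The transformation rule can be verified on a toy example. In the sketch below (Python; the setup is a hypothetical illustration), $\mathbf{X}=(X_1,X_2)$ is uniform on the unit square and $g(x_1,x_2)=(x_1^2,\,x_2)$, for which $g^{-1}(z_1,z_2)=(\sqrt{z_1},\,z_2)$ and $\left|J_{\mathbf{z}}(g^{-1})\right|=1/(2\sqrt{z_1})$:

```python
import math
import random

random.seed(7)

# Transformation rule: f_Z(z1, z2) = f_X(g^{-1}(z)) |J_z(g^{-1})|
#                                  = 1 * 1 / (2 sqrt(z1))  on (0,1) x (0,1).
def f_Z(z1, z2):
    return 1.0 / (2.0 * math.sqrt(z1))

# Check P(Z1 <= 0.25) two ways: by integrating f_Z and by simulation.
analytic = math.sqrt(0.25)   # integral of 1/(2 sqrt(z)) from 0 to 0.25
n = 100_000
hits = sum(1 for _ in range(n) if random.random() ** 2 <= 0.25)
print(analytic, round(hits / n, 2))
```

Both routes give $1/2$, as they must, since $\{Z_1 \le 0.25\} = \{X_1 \le 0.5\}$.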





## SOME IMPORTANT DISTRIBUTIONS

Tables $1.1$ and $1.2$ list a number of important continuous and discrete distributions. We will use the notation $X \sim f$, $X \sim F$, or $X \sim$ Dist to signify that $X$ has a pdf $f$, a cdf $F$, or a distribution Dist. We sometimes write $f_{X}$ instead of $f$ to stress that the pdf refers to the random variable $X$. Note that in Table $1.1$, $\Gamma$ is the gamma function: $\Gamma(\alpha)=\int_{0}^{\infty} \mathrm{e}^{-x} x^{\alpha-1} \mathrm{~d} x, \quad \alpha>0$.

It is often useful to consider different kinds of numerical characteristics of a random variable. One such quantity is the expectation, which measures the mean value of the distribution.

Definition 1.6.1 (Expectation) Let $X$ be a random variable with pdf $f$. The expectation (or expected value or mean) of $X$, denoted by $\mathbb{E}[X]$ (or sometimes $\mu$ ), is defined by
$$\mathbb{E}[X]= \begin{cases}\sum_{x} x f(x) & \text { discrete case } \\ \int_{-\infty}^{\infty} x f(x) \mathrm{d} x & \text { continuous case. }\end{cases}$$
If $X$ is a random variable, then a function of $X$, such as $X^{2}$ or $\sin (X)$, is again a random variable. Moreover, the expected value of a function of $X$ is simply a weighted average of the possible values that this function can take. That is, for any real function $h$
$$\mathbb{E}[h(X)]= \begin{cases}\sum_{x} h(x) f(x) & \text { discrete case } \\ \int_{-\infty}^{\infty} h(x) f(x) \mathrm{d} x & \text { continuous case. }\end{cases}$$
Another useful quantity is the variance, which measures the spread or dispersion of the distribution.

Definition 1.6.2 (Variance) The variance of a random variable $X$, denoted by $\operatorname{Var}(X)$ (or sometimes $\sigma^{2}$ ), is defined by
$$\operatorname{Var}(X)=\mathbb{E}\left[(X-\mathbb{E}[X])^{2}\right]=\mathbb{E}\left[X^{2}\right]-(\mathbb{E}[X])^{2}$$
The square root of the variance is called the standard deviation. Table $1.3$ lists the expectations and variances for some well-known distributions.
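Both characterizations of the variance can be checked exactly for a discrete distribution. The sketch below (Python) builds the $\mathsf{Bin}(n, p)$ pmf and verifies that $\mathbb{E}[X]=np$ and $\operatorname{Var}(X)=\mathbb{E}\left[X^{2}\right]-(\mathbb{E}[X])^{2}=np(1-p)$, the standard values for the binomial distribution:

```python
from math import comb

n, p = 10, 0.3
# Binomial pmf: f(k) = C(n, k) p^k (1-p)^(n-k).
pmf = {k: comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)}

EX = sum(k * pk for k, pk in pmf.items())          # E[X]
EX2 = sum(k * k * pk for k, pk in pmf.items())     # E[X^2]
var = EX2 - EX**2                                  # Definition 1.6.2

print(round(EX, 6), round(var, 6))   # n p = 3.0 and n p (1-p) = 2.1
```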

## JOINT DISTRIBUTIONS

Often a random experiment is described by more than one random variable. The theory for multiple random variables is similar to that for a single random variable.
Let $X_{1}, \ldots, X_{n}$ be random variables describing some random experiment. We can accumulate these into a random vector $\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)$. More generally, a collection $\left\{X_{t}, t \in \mathscr{T}\right\}$ of random variables is called a stochastic process. The set $\mathscr{T}$ is called the parameter set or index set of the process. It may be discrete (e.g., $\mathbb{N}$ or $\{1, \ldots, 10\}$) or continuous (e.g., $\mathbb{R}_{+}=[0, \infty)$ or $[1,10]$). The set of possible values for the stochastic process is called the state space. The joint distribution of $X_{1}, \ldots, X_{n}$ is specified by the joint cdf
$$F\left(x_{1}, \ldots, x_{n}\right)=\mathbb{P}\left(X_{1} \leqslant x_{1}, \ldots, X_{n} \leqslant x_{n}\right) .$$
The joint pdf $f$ is given, in the discrete case, by $f\left(x_{1}, \ldots, x_{n}\right)=\mathbb{P}\left(X_{1}=x_{1}, \ldots, X_{n}=x_{n}\right)$, and in the continuous case $f$ is such that
$$\mathbb{P}(\mathbf{X} \in \mathscr{B})=\int_{\mathscr{B}} f\left(x_{1}, \ldots, x_{n}\right) \mathrm{d} x_{1} \cdots \mathrm{d} x_{n}$$
for any (measurable) region $\mathscr{B}$ in $\mathbb{R}^{n}$. The marginal pdfs can be recovered from the joint pdf by integration or summation. For example, in the case of a continuous random vector $(X, Y)$ with joint pdf $f$, the pdf $f_{X}$ of $X$ is found as
$$f_{X}(x)=\int f(x, y) \mathrm{d} y$$
Suppose that $X$ and $Y$ are both discrete or both continuous, with joint pdf $f$, and suppose that $f_{X}(x)>0$. Then the conditional pdf of $Y$ given $X=x$ is given by
$$f_{Y \mid X}(y \mid x)=\frac{f(x, y)}{f_{X}(x)} \quad \text { for all } y .$$
The corresponding conditional expectation is (in the continuous case)
$$\mathbb{E}[Y \mid X=x]=\int y f_{Y \mid X}(y \mid x) \mathrm{d} y$$
Note that $\mathbb{E}[Y \mid X=x]$ is a function of $x$, say $h(x)$. The corresponding random variable $h(X)$ is written as $\mathbb{E}[Y \mid X]$. It can be shown (see, for example, [3]) that its expectation is simply the expectation of $Y$, that is,
$$\mathbb{E}[\mathbb{E}[Y \mid X]]=\mathbb{E}[Y] .$$
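The identity $\mathbb{E}[\mathbb{E}[Y \mid X]]=\mathbb{E}[Y]$ can be verified exactly on a small discrete example. In the sketch below (Python; the die-and-coins setup is a hypothetical illustration), $X$ is a fair die roll and, given $X=x$, $Y \sim \mathsf{Bin}(x, 1/2)$, so $\mathbb{E}[Y \mid X=x]=x / 2$:

```python
from fractions import Fraction
from math import comb

half = Fraction(1, 2)

# E[E[Y|X]] with h(x) = E[Y | X = x] = x / 2 and X uniform on {1,...,6}.
E_h_X = sum(Fraction(1, 6) * x * half for x in range(1, 7))

# Direct E[Y] from the joint pmf f(x, y) = (1/6) * C(x, y) (1/2)^x.
E_Y = sum(Fraction(1, 6) * comb(x, y) * half**x * y
          for x in range(1, 7) for y in range(x + 1))

print(E_h_X, E_Y)   # both 7/4
```

Exact rational arithmetic makes the two computations agree identically, not just to rounding error.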
When the conditional distribution of $Y$ given $X$ is identical to that of $Y, X$ and $Y$ are said to be independent. More precisely:

## Bernoulli Sequence

Consider the experiment where we flip a biased coin $n$ times, with probability $p$ of heads. We can model this experiment in the following way. For $i=1, \ldots, n$, let $X_{i}$ be the result of the $i$-th toss: $\left\{X_{i}=1\right\}$ means heads (or success), $\left\{X_{i}=0\right\}$ means tails (or failure). Also, let
$$\mathbb{P}\left(X_{i}=1\right)=p=1-\mathbb{P}\left(X_{i}=0\right), \quad i=1,2, \ldots, n .$$
Last, assume that $X_{1}, \ldots, X_{n}$ are independent. The sequence $\left\{X_{i}, i=1,2, \ldots\right\}$ is called a Bernoulli sequence or Bernoulli process with success probability $p$. Let $X=X_{1}+\cdots+X_{n}$ be the total number of successes in $n$ trials (tosses of the coin). Denote by $\mathscr{B}$ the set of all binary vectors $\mathbf{x}=\left(x_{1}, \ldots, x_{n}\right)$ such that $\sum_{i=1}^{n} x_{i}=k$. Note that $\mathscr{B}$ has $\binom{n}{k}$ elements. We now have
$$\begin{aligned} \mathbb{P}(X=k) &=\sum_{\mathbf{x} \in \mathscr{B}} \mathbb{P}\left(X_{1}=x_{1}, \ldots, X_{n}=x_{n}\right) \\ &=\sum_{\mathbf{x} \in \mathscr{B}} \mathbb{P}\left(X_{1}=x_{1}\right) \cdots \mathbb{P}\left(X_{n}=x_{n}\right)=\sum_{\mathbf{x} \in \mathscr{B}} p^{k}(1-p)^{n-k} \\ &=\binom{n}{k} p^{k}(1-p)^{n-k} . \end{aligned}$$
In other words, $X \sim \operatorname{Bin}(n, p)$. Compare this with Example 1.2.
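The derivation can be confirmed by simulating the Bernoulli sequence directly. The sketch below (Python; the values of $n$ and $p$ are arbitrary) compares the empirical distribution of $X=X_{1}+\cdots+X_{n}$ with the $\mathsf{Bin}(n, p)$ pmf:

```python
import random
from math import comb

random.seed(3)
n, p = 5, 0.4
reps = 100_000

counts = [0] * (n + 1)
for _ in range(reps):
    # X_i = 1 with probability p; X is the number of successes.
    x = sum(1 for _ in range(n) if random.random() < p)
    counts[x] += 1

empirical = [c / reps for c in counts]
binomial = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
print([round(e, 2) for e in empirical])
print([round(b, 2) for b in binomial])
```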
Remark 1.7.1 An infinite sequence $X_{1}, X_{2}, \ldots$ of random variables is called independent if for any finite choice of parameters $i_{1}, i_{2}, \ldots, i_{n}$ (none of them the same), the random variables $X_{i_{1}}, \ldots, X_{i_{n}}$ are independent. Many probabilistic models involve random variables $X_{1}, X_{2}, \ldots$ that are independent and identically distributed, abbreviated as iid. We will use this abbreviation throughout this book.

Similar to the one-dimensional case, the expected value of any real-valued function $h$ of $X_{1}, \ldots, X_{n}$ is a weighted average of all values that this function can take. Specifically, in the continuous case,
$$\mathbb{E}\left[h\left(X_{1}, \ldots, X_{n}\right)\right]=\int \ldots \int h\left(x_{1}, \ldots, x_{n}\right) f\left(x_{1}, \ldots, x_{n}\right) \mathrm{d} x_{1} \ldots \mathrm{d} x_{n} .$$



## PRELIMINARIES


## RANDOM EXPERIMENTS

The basic notion in probability theory is that of a random experiment: an experiment whose outcome cannot be determined in advance. The most fundamental example is the experiment where a fair coin is tossed a number of times. For simplicity suppose that the coin is tossed three times. The sample space, denoted $\Omega$, is the set of all possible outcomes of the experiment. In this case $\Omega$ has eight possible outcomes:
$$\Omega=\{H H H, H H T, H T H, H T T, T H H, T H T, T T H, T T T\},$$
where, for example, HTH means that the first toss is heads, the second tails, and the third heads.

Subsets of the sample space are called events. For example, the event $A$ that the third toss is heads is
$$A=\{H H H, H T H, T H H, T T H\} .$$
We say that event $A$ occurs if the outcome of the experiment is one of the elements in $A$. Since events are sets, we can apply the usual set operations to them. For example, the event $A \cup B$, called the union of $A$ and $B$, is the event that $A$ or $B$ or both occur, and the event $A \cap B$, called the intersection of $A$ and $B$, is the event that $A$ and $B$ both occur. Similar notation holds for unions and intersections of more than two events. The event $A^{c}$, called the complement of $A$, is the event that $A$ does not occur. Two events $A$ and $B$ that have no outcomes in common, that is, their intersection is empty, are called disjoint events. The main step is to specify the probability of each event.

Definition 1.2.1 (Probability) A probability $\mathbb{P}$ is a rule that assigns a number $0 \leqslant \mathbb{P}(A) \leqslant 1$ to each event $A$, such that $\mathbb{P}(\Omega)=1$, and such that for any sequence $A_{1}, A_{2}, \ldots$ of disjoint events
$$\mathbb{P}\left(\bigcup_{i} A_{i}\right)=\sum_{i} \mathbb{P}\left(A_{i}\right) . \tag{1.1}$$
Equation (1.1) is referred to as the sum rule of probability. It states that if an event can happen in a number of different ways, but not simultaneously, the probability of that event is simply the sum of the probabilities of the comprising events.

For the fair coin toss experiment the probability of any event is easily given. Namely, because the coin is fair, each of the eight possible outcomes is equally likely, so that $\mathbb{P}(\{H H H\})=\cdots=\mathbb{P}(\{T T T\})=1 / 8$. Since any event $A$ is the union of the “elementary” events $\{H H H\}, \ldots,\{T T T\}$, the sum rule implies that
$$\mathbb{P}(A)=\frac{|A|}{|\Omega|}, \tag{1.2}$$
where $|A|$ denotes the number of outcomes in $A$ and $|\Omega|=8$. More generally, if a random experiment has finitely many and equally likely outcomes, the probability is always of the form (1.2). In that case the calculation of probabilities reduces to counting.
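For this experiment the counting in (1.2) can be carried out by brute-force enumeration. The sketch below (Python, standard library only) lists all eight outcomes and computes the probability that the third toss is heads:

```python
from itertools import product

# Sample space of three fair coin tosses, e.g. 'HTH'.
omega = [''.join(s) for s in product('HT', repeat=3)]
A = [w for w in omega if w[2] == 'H']   # event: third toss is heads

print(len(omega), len(A), len(A) / len(omega))   # 8 outcomes, 4 in A, P(A) = 0.5
```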

## CONDITIONAL PROBABILITY AND INDEPENDENCE

How do probabilities change when we know that some event $B \subset \Omega$ has occurred? Given that the outcome lies in $B$, the event $A$ will occur if and only if $A \cap B$ occurs, and the relative chance of $A$ occurring is therefore $\mathbb{P}(A \cap B) / \mathbb{P}(B)$. This leads to the definition of the conditional probability of $A$ given $B$ :
$$\mathbb{P}(A \mid B)=\frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)} . \tag{1.3}$$
For example, suppose that we toss a fair coin three times. Let $B$ be the event that the total number of heads is two. The conditional probability of the event $A$ that the first toss is heads, given that $B$ occurs, is $(2 / 8) /(3 / 8)=2 / 3$.
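The sketch below (Python, standard library only) reproduces this conditional probability by enumerating the sample space:

```python
from itertools import product

omega = [''.join(s) for s in product('HT', repeat=3)]
B = {w for w in omega if w.count('H') == 2}   # exactly two heads
A = {w for w in omega if w[0] == 'H'}         # first toss is heads

P = lambda E: len(E) / len(omega)             # equally likely outcomes
cond = P(A & B) / P(B)                        # definition (1.3)
print(P(B), round(cond, 4))                   # 0.375 and 2/3 ≈ 0.6667
```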

Rewriting (1.3) and interchanging the role of $A$ and $B$ gives the relation $\mathbb{P}(A \cap$ $B)=\mathbb{P}(A) \mathbb{P}(B \mid A)$. This can be generalized easily to the product rule of probability, which states that for any sequence of events $A_{1}, A_{2}, \ldots, A_{n}$,
$$\mathbb{P}\left(A_{1} \cdots A_{n}\right)=\mathbb{P}\left(A_{1}\right) \mathbb{P}\left(A_{2} \mid A_{1}\right) \mathbb{P}\left(A_{3} \mid A_{1} A_{2}\right) \cdots \mathbb{P}\left(A_{n} \mid A_{1} \cdots A_{n-1}\right)$$
using the abbreviation $A_{1} A_{2} \cdots A_{k} \equiv A_{1} \cap A_{2} \cap \cdots \cap A_{k}$.
Suppose that $B_{1}, B_{2}, \ldots, B_{n}$ is a partition of $\Omega$. That is, $B_{1}, B_{2}, \ldots, B_{n}$ are disjoint and their union is $\Omega$. Then, by the sum rule, $\mathbb{P}(A)=\sum_{i=1}^{n} \mathbb{P}\left(A \cap B_{i}\right)$ and hence, by the definition of conditional probability, we have the law of total probability:
$$\mathbb{P}(A)=\sum_{i=1}^{n} \mathbb{P}\left(A \mid B_{i}\right) \mathbb{P}\left(B_{i}\right)$$
Combining this with the definition of conditional probability gives Bayes’ rule:
$$\mathbb{P}\left(B_{j} \mid A\right)=\frac{\mathbb{P}\left(A \mid B_{j}\right) \mathbb{P}\left(B_{j}\right)}{\sum_{i=1}^{n} \mathbb{P}\left(A \mid B_{i}\right) \mathbb{P}\left(B_{i}\right)}$$
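The law of total probability and Bayes' rule are easy to mechanize. The sketch below (Python; the two-coin setup is a hypothetical example) uses a partition $\{B_1, B_2\}$ where $B_1$ is "the coin is fair" and $B_2$ is "the coin is biased with heads probability $0.8$", each with prior probability $0.5$, and $A$ is "a single toss lands heads":

```python
prior = {'fair': 0.5, 'biased': 0.5}   # P(B_j)
lik = {'fair': 0.5, 'biased': 0.8}     # P(A | B_j)

# Law of total probability: P(A) = sum_j P(A | B_j) P(B_j).
P_A = sum(lik[b] * prior[b] for b in prior)

# Bayes' rule: P(B_j | A) = P(A | B_j) P(B_j) / P(A).
post = {b: lik[b] * prior[b] / P_A for b in prior}

print(round(P_A, 2), round(post['biased'], 4))   # 0.65 and 0.4/0.65 ≈ 0.6154
```

Seeing heads thus shifts the probability that the coin is the biased one from $0.5$ up to about $0.615$.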
Independence is of crucial importance in probability and statistics. Loosely speaking, it models the lack of information between events. Two events $A$ and $B$ are said to be independent if the knowledge that $B$ has occurred does not change the probability that $A$ occurs. That is, $A, B$ independent $\Leftrightarrow \mathbb{P}(A \mid B)=\mathbb{P}(A)$. Since $\mathbb{P}(A \mid B)=\mathbb{P}(A \cap B) / \mathbb{P}(B)$, an alternative definition of independence is
$A, B$ independent $\Leftrightarrow \mathbb{P}(A \cap B)=\mathbb{P}(A) \mathbb{P}(B)$.
This definition covers the case where $B=\emptyset$ (empty set). We can extend this definition to arbitrarily many events.

## RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

Specifying a model for a random experiment via a complete description of $\Omega$ and $\mathbb{P}$ may not always be convenient or necessary. In practice, we are only interested in certain observations (i.e., numerical measurements) in the experiment. We incorporate these into our modeling process via the introduction of random variables, usually denoted by capital letters from the last part of the alphabet (e.g., $X, X_{1}, X_{2}, \ldots, Y, Z$).
EXAMPLE 1.2
We toss a biased coin $n$ times, with $p$ the probability of heads. Suppose that we are interested only in the number of heads, say $X$. Note that $X$ can take any of the values in $\{0,1, \ldots, n\}$. The probability distribution of $X$ is given by the binomial formula
$$\mathbb{P}(X=k)=\binom{n}{k} p^{k}(1-p)^{n-k}, \quad k=0,1, \ldots, n .$$
Namely, by Example $1.1$, each elementary event $\{H T H \cdots T\}$ with exactly $k$ heads and $n-k$ tails has probability $p^{k}(1-p)^{n-k}$, and there are $\binom{n}{k}$ such events.

The probability distribution of a general random variable $X$ – identifying such probabilities as $\mathbb{P}(X=x), \mathbb{P}(a \leqslant X \leqslant b)$, and so on – is completely specified by the cumulative distribution function (cdf), defined by
$$F(x)=\mathbb{P}(X \leqslant x), \quad x \in \mathbb{R} .$$
A random variable $X$ is said to have a discrete distribution if, for some finite or countable set of values $x_{1}, x_{2}, \ldots$, we have $\mathbb{P}\left(X=x_{i}\right)>0$, $i=1,2, \ldots$, and $\sum_{i} \mathbb{P}\left(X=x_{i}\right)=1$. The function $f(x)=\mathbb{P}(X=x)$ is called the probability mass function (pmf) of $X$ – but see Remark 1.4.1.

