
FUNCTIONS OF RANDOM VARIABLES

Suppose that $X_{1}, \ldots, X_{n}$ are measurements of a random experiment. Often we are only interested in certain functions of the measurements rather than the individual measurements. Here are some examples.
EXAMPLE $1.5$
Let $X$ be a continuous random variable with pdf $f_{X}$ and let $Z=a X+b$, where $a \neq 0$. We wish to determine the pdf $f_{Z}$ of $Z$. Suppose first that $a>0$. For any $z$ we have
$$F_{Z}(z)=\mathbb{P}(Z \leqslant z)=\mathbb{P}(X \leqslant(z-b) / a)=F_{X}((z-b) / a) .$$
Differentiating this with respect to $z$ gives $f_{Z}(z)=f_{X}((z-b) / a) / a$. For $a<0$ we similarly obtain $f_{Z}(z)=f_{X}((z-b) / a) /(-a)$. Thus, in general,
$$f_{Z}(z)=\frac{1}{|a|} f_{X}\left(\frac{z-b}{a}\right) .$$
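The affine rule lends itself to a quick numerical sanity check (our own illustration, not part of the text): if $X \sim \mathsf{N}(0,1)$, then $Z = aX + b \sim \mathsf{N}(b, a^{2})$, so the formula must reproduce that normal density directly. A minimal sketch in plain Python, with helper names of our choosing:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal density; with the defaults this is the pdf of X ~ N(0,1)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_affine(f_x, a, b):
    """pdf of Z = a*X + b given the pdf f_x of X, via f_Z(z) = f_X((z-b)/a)/|a|."""
    return lambda z: f_x((z - b) / a) / abs(a)

# Z = -2X + 3 with X ~ N(0,1); then Z ~ N(3, 4), so the formula must
# agree with the N(3, 2) density evaluated directly (note a < 0 here).
f_z = pdf_affine(normal_pdf, a=-2.0, b=3.0)
direct = lambda z: normal_pdf(z, mu=3.0, sigma=2.0)
print(abs(f_z(1.7) - direct(1.7)) < 1e-12)  # True
```

The test exercises the $a<0$ branch, which is where the $|a|$ in the denominator matters.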
EXAMPLE $1.6$
Generalizing the previous example, suppose that $Z=g(X)$ for some monotonically increasing function $g$. To find the pdf of $Z$ from that of $X$ we first write
$$F_{Z}(z)=\mathbb{P}(Z \leqslant z)=\mathbb{P}\left(X \leqslant g^{-1}(z)\right)=F_{X}\left(g^{-1}(z)\right),$$
where $g^{-1}$ is the inverse of $g$. Differentiating with respect to $z$ now gives
$$f_{Z}(z)=f_{X}\left(g^{-1}(z)\right) \frac{\mathrm{d}}{\mathrm{d} z} g^{-1}(z)=\frac{f_{X}\left(g^{-1}(z)\right)}{g^{\prime}\left(g^{-1}(z)\right)} .$$
For a monotonically decreasing function $g$, $\frac{\mathrm{d}}{\mathrm{d} z} g^{-1}(z)$ in the first equation must be replaced by its negative (the derivative is itself negative in that case), so that in both cases $f_{Z}(z)=f_{X}\left(g^{-1}(z)\right)\left|\frac{\mathrm{d}}{\mathrm{d} z} g^{-1}(z)\right|$.
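As with the affine case, the monotone rule can be checked against a known density (again our own illustrative sketch): taking $g(x) = e^{x}$ with $X \sim \mathsf{N}(0,1)$ must reproduce the standard lognormal density $f_Z(z) = e^{-(\ln z)^2/2}/(z\sqrt{2\pi})$.

```python
import math

def normal_pdf(x):
    """pdf of X ~ N(0,1)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def pdf_increasing(f_x, g_inv, g_prime):
    """pdf of Z = g(X) for monotonically increasing g,
    via f_Z(z) = f_X(g^{-1}(z)) / g'(g^{-1}(z))."""
    return lambda z: f_x(g_inv(z)) / g_prime(g_inv(z))

# g(x) = exp(x), so g^{-1} = log and g' = exp; Z is standard lognormal.
f_z = pdf_increasing(normal_pdf, g_inv=math.log, g_prime=math.exp)
lognormal = lambda z: math.exp(-0.5 * math.log(z) ** 2) / (z * math.sqrt(2 * math.pi))
print(abs(f_z(2.5) - lognormal(2.5)) < 1e-12)  # True
```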

Linear Transformations

Let $\mathbf{x}=\left(x_{1}, \ldots, x_{n}\right)^{\top}$ be a column vector in $\mathbb{R}^{n}$ and $A$ an $m \times n$ matrix. The mapping $\mathbf{x} \mapsto \mathbf{z}$, with $\mathbf{z}=A \mathbf{x}$, is called a linear transformation. Now consider a random vector $\mathbf{X}=\left(X_{1}, \ldots, X_{n}\right)^{\top}$, and let
$$\mathbf{Z}=A \mathbf{X}$$
Then $\mathbf{Z}$ is a random vector in $\mathbb{R}^{m}$. In principle, if we know the joint distribution of $\mathbf{X}$, then we can derive the joint distribution of $\mathbf{Z}$. Let us first see how the expectation vector and covariance matrix are transformed.

Theorem 1.8.1 If $\mathbf{X}$ has expectation vector $\boldsymbol{\mu}_{\mathbf{X}}$ and covariance matrix $\Sigma_{\mathbf{X}}$, then the expectation vector and covariance matrix of $\mathbf{Z}=A \mathbf{X}$ are given by
$$\boldsymbol{\mu}_{\mathbf{Z}}=A \boldsymbol{\mu}_{\mathbf{X}}$$
and
$$\Sigma_{\mathbf{Z}}=A \Sigma_{\mathbf{X}} A^{\top} .$$
Proof: We have $\boldsymbol{\mu}_{\mathbf{Z}}=\mathbb{E}[\mathbf{Z}]=\mathbb{E}[A \mathbf{X}]=A \mathbb{E}[\mathbf{X}]=A \boldsymbol{\mu}_{\mathbf{X}}$ and
$$\begin{aligned} \Sigma_{\mathbf{Z}} &=\mathbb{E}\left[\left(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}}\right)\left(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}}\right)^{\top}\right]=\mathbb{E}\left[A\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\left(A\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\right)^{\top}\right] \\ &=A\, \mathbb{E}\left[\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)^{\top}\right] A^{\top} \\ &=A \Sigma_{\mathbf{X}} A^{\top} . \end{aligned}$$
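Theorem 1.8.1 invites a simple Monte Carlo check (our own illustration; the setup is arbitrary): sample $\mathbf{X}$, apply $A$, and compare the empirical moments of $\mathbf{Z}$ with $A\boldsymbol{\mu}_{\mathbf{X}}$ and $A\Sigma_{\mathbf{X}}A^{\top}$.

```python
import random

random.seed(0)
A = [[2.0, 1.0], [0.0, 3.0]]
mu_x = [1.0, 2.0]  # expectation vector of X
# Components of X are independent N(mu_i, 1), so Sigma_X = I and the
# theorem predicts mu_Z = A mu_x = [4, 6] and Sigma_Z = A A^T = [[5,3],[3,9]].

n = 100_000
zs = []
for _ in range(n):
    x = [mu_x[0] + random.gauss(0, 1), mu_x[1] + random.gauss(0, 1)]
    zs.append([A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]])

mean = [sum(z[i] for z in zs) / n for i in range(2)]
cov = [[sum((z[i] - mean[i]) * (z[j] - mean[j]) for z in zs) / n
        for j in range(2)] for i in range(2)]
print(mean)  # approximately [4.0, 6.0]
print(cov)   # approximately [[5, 3], [3, 9]]
```

The agreement is only up to Monte Carlo error, of order $1/\sqrt{n}$.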
Suppose that $A$ is an invertible $n \times n$ matrix. If $\mathbf{X}$ has a joint density $f_{\mathbf{X}}$, what is the joint density $f_{\mathbf{Z}}$ of $\mathbf{Z}$? Consider Figure 1.1. For any fixed $\mathbf{x}$, let $\mathbf{z}=A \mathbf{x}$. Hence, $\mathbf{x}=A^{-1} \mathbf{z}$. Consider the $n$-dimensional cube $C=\left[z_{1}, z_{1}+h\right] \times \cdots \times\left[z_{n}, z_{n}+h\right]$. Let $D$ be the image of $C$ under $A^{-1}$, that is, the parallelepiped of all points $\mathbf{x}$ such that $A \mathbf{x} \in C$. Then,
$$\mathbb{P}(\mathbf{Z} \in C) \approx h^{n} f_{\mathbf{Z}}(\mathbf{z}) .$$
On the other hand, $\mathbb{P}(\mathbf{Z} \in C)=\mathbb{P}(\mathbf{X} \in D) \approx f_{\mathbf{X}}(\mathbf{x})\, h^{n} /|\det A|$, since the parallelepiped $D$ has volume $h^{n}\left|\det A^{-1}\right|=h^{n} /|\det A|$. Equating the two expressions and letting $h \to 0$ gives
$$f_{\mathbf{Z}}(\mathbf{z})=\frac{f_{\mathbf{X}}\left(A^{-1} \mathbf{z}\right)}{|\det A|}, \quad \mathbf{z} \in \mathbb{R}^{n} .$$
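For an invertible $A$, the density of $\mathbf{Z}=A\mathbf{X}$ is $f_{\mathbf{Z}}(\mathbf{z})=f_{\mathbf{X}}(A^{-1}\mathbf{z})/|\det A|$. This can be verified deterministically (our own illustration): if $\mathbf{X}$ is standard bivariate normal, then $\mathbf{Z} \sim \mathsf{N}(\mathbf{0}, AA^{\top})$, whose density is known in closed form.

```python
import math

A = [[2.0, 1.0], [0.0, 3.0]]                       # an invertible 2x2 matrix
det_A = A[0][0]*A[1][1] - A[0][1]*A[1][0]          # = 6
A_inv = [[A[1][1]/det_A, -A[0][1]/det_A],
         [-A[1][0]/det_A, A[0][0]/det_A]]

def f_x(x):
    """Standard bivariate normal density (independent N(0,1) components)."""
    return math.exp(-0.5 * (x[0]**2 + x[1]**2)) / (2 * math.pi)

def f_z(z):
    """Density of Z = A X via f_Z(z) = f_X(A^{-1} z) / |det A|."""
    x = [A_inv[0][0]*z[0] + A_inv[0][1]*z[1],
         A_inv[1][0]*z[0] + A_inv[1][1]*z[1]]
    return f_x(x) / abs(det_A)

# Z ~ N(0, S) with S = A A^T; compare against that density directly.
S = [[5.0, 3.0], [3.0, 9.0]]                       # A A^T
det_S = S[0][0]*S[1][1] - S[0][1]*S[1][0]          # = 36 = (det A)^2
S_inv = [[S[1][1]/det_S, -S[0][1]/det_S],
         [-S[1][0]/det_S, S[0][0]/det_S]]

def mvn(z):
    q = S_inv[0][0]*z[0]**2 + 2*S_inv[0][1]*z[0]*z[1] + S_inv[1][1]*z[1]**2
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det_S))

z = [1.0, -0.5]
print(abs(f_z(z) - mvn(z)) < 1e-12)  # True
```

The check works because $\mathbf{z}^{\top}(AA^{\top})^{-1}\mathbf{z} = \|A^{-1}\mathbf{z}\|^{2}$ and $\sqrt{\det(AA^{\top})} = |\det A|$.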

General Transformations

We can apply reasoning similar to that above to deal with general transformations $\mathbf{x} \mapsto \boldsymbol{g}(\mathbf{x})$, written out as
$$\left(\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array}\right) \mapsto\left(\begin{array}{c} g_{1}(\mathbf{x}) \\ g_{2}(\mathbf{x}) \\ \vdots \\ g_{n}(\mathbf{x}) \end{array}\right) .$$
For a fixed $\mathbf{x}$, let $\mathbf{z}=\boldsymbol{g}(\mathbf{x})$. Suppose that $\boldsymbol{g}$ is invertible; hence $\mathbf{x}=\boldsymbol{g}^{-1}(\mathbf{z})$. Any infinitesimal $n$-dimensional rectangle at $\mathbf{x}$ with volume $V$ is transformed into an $n$-dimensional parallelepiped at $\mathbf{z}$ with volume $V\left|J_{\mathbf{x}}(\boldsymbol{g})\right|$, where $J_{\mathbf{x}}(\boldsymbol{g})$ is the Jacobian matrix of the transformation $\boldsymbol{g}$ at $\mathbf{x}$, that is,
$$J_{\mathbf{x}}(\boldsymbol{g})=\left(\begin{array}{ccc} \frac{\partial g_{1}}{\partial x_{1}} & \cdots & \frac{\partial g_{1}}{\partial x_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_{n}}{\partial x_{1}} & \cdots & \frac{\partial g_{n}}{\partial x_{n}} \end{array}\right) .$$
Now consider a random column vector $\mathbf{Z}=\boldsymbol{g}(\mathbf{X})$. Let $C$ be a small cube around $\mathbf{z}$ with volume $h^{n}$. Let $D$ be the image of $C$ under $\boldsymbol{g}^{-1}$. Then, as in the linear case,
$$\mathbb{P}(\mathbf{Z} \in C) \approx h^{n} f_{\mathbf{Z}}(\mathbf{z}) \approx h^{n}\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right| f_{\mathbf{X}}(\mathbf{x}) .$$
Hence we have the transformation rule
$$f_{\mathbf{Z}}(\mathbf{z})=f_{\mathbf{X}}\left(\boldsymbol{g}^{-1}(\mathbf{z})\right)\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right|, \quad \mathbf{z} \in \mathbb{R}^{n} .$$
(Note: $\left|J_{\mathbf{z}}\left(\boldsymbol{g}^{-1}\right)\right|=1 /\left|J_{\mathbf{x}}(\boldsymbol{g})\right|$.)
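Both the transformation rule and the note can be checked numerically with finite-difference Jacobians (our own illustration, using an arbitrarily chosen invertible map $\boldsymbol{g}(x_1,x_2)=(e^{x_1},\, x_1+x_2)$, for which $|J_{\mathbf{x}}(\boldsymbol{g})| = e^{x_1} = z_1$):

```python
import math

def jac_det_2x2(g, p, h=1e-6):
    """Determinant of the Jacobian of g: R^2 -> R^2 at p, via central differences."""
    cols = []
    for j in range(2):
        q_plus, q_minus = list(p), list(p)
        q_plus[j] += h
        q_minus[j] -= h
        gp, gm = g(q_plus), g(q_minus)
        cols.append([(gp[i] - gm[i]) / (2 * h) for i in range(2)])
    # cols[j][i] holds d g_i / d x_j
    return cols[0][0]*cols[1][1] - cols[1][0]*cols[0][1]

g = lambda x: [math.exp(x[0]), x[0] + x[1]]          # invertible on R^2
g_inv = lambda z: [math.log(z[0]), z[1] - math.log(z[0])]

def f_x(x):
    """Independent standard normal components."""
    return math.exp(-0.5 * (x[0]**2 + x[1]**2)) / (2 * math.pi)

def f_z(z):
    """Transformation rule f_Z(z) = f_X(g^{-1}(z)) |J_z(g^{-1})|."""
    return f_x(g_inv(z)) * abs(jac_det_2x2(g_inv, z))

z = [2.0, 0.5]
x = g_inv(z)
# The note |J_z(g^{-1})| = 1 / |J_x(g)| holds at corresponding points:
print(abs(jac_det_2x2(g_inv, z) * jac_det_2x2(g, x) - 1.0) < 1e-6)  # True
# And since |J_x(g)| = z_1 here, f_Z(z) = f_X(g^{-1}(z)) / z_1:
print(abs(f_z(z) - f_x(x) / z[0]) < 1e-8)  # True
```

The finite-difference determinants are only accurate to roughly $O(h^{2})$ plus rounding noise, hence the loose tolerances.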
