### Computer Science Assignment Help|Machine Learning Assignment Help|Linear Functions

statistics-lab™ supports you throughout your studies abroad. We have established a solid reputation for machine learning assignment help and guarantee reliable, high-quality, and original statistics writing services. Our experts have extensive experience with machine learning, so every kind of machine learning assignment is well within their reach.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Computer Science Assignment Help|Machine Learning Assignment Help|Linear Functions

Figure 3.21b illustrates how a function $y=2 x$ transforms a random variable $X$ with mean $\mu_{X}=1$ and standard deviation $\sigma_{X}=0.5$ into $Y$ with mean $\mu_{Y}=2$ and standard deviation $\sigma_{Y}=1$. In the machine learning context, it is common to employ linear functions of random variables $y=g(x)=a x+b$, as illustrated in figure 3.21a. Given a random variable $X$ with mean $\mu_{X}$ and variance $\sigma_{X}^{2}$, the change in the neighborhood size simplifies to
$$\left|\frac{d y}{d x}\right|=|a| .$$
In such a case, because of the linear property of the expectation operation (see §3.3.5),
$$\mu_{Y}=g\left(\mu_{X}\right)=a \mu_{X}+b, \quad \sigma_{Y}=|a| \sigma_{X} .$$
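These two formulas can be verified numerically. The sketch below uses the values from figure 3.21b ($a=2$, $b=0$, $\mu_{X}=1$, $\sigma_{X}=0.5$) and a Gaussian $X$ purely for illustration; the moment formulas themselves hold for any distribution of $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 0.0                      # linear map y = a*x + b, as in figure 3.21b
mu_X, sigma_X = 1.0, 0.5

x = rng.normal(mu_X, sigma_X, size=1_000_000)
y = a * x + b

# Analytic moments from the text: mu_Y = a*mu_X + b, sigma_Y = |a|*sigma_X
print(y.mean())   # ≈ 2.0
print(y.std())    # ≈ 1.0
```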

Let us consider a set of $n$ random variables $\mathbf{X}$ defined by its mean vector and covariance matrix,
$$\mathbf{X}=\left[\begin{array}{c} X_{1} \\ \vdots \\ X_{n} \end{array}\right], \quad \boldsymbol{\mu}_{\mathbf{X}}=\left[\begin{array}{c} \mu_{X_{1}} \\ \vdots \\ \mu_{X_{n}} \end{array}\right], \quad \boldsymbol{\Sigma}_{\mathbf{X}}=\left[\begin{array}{ccc} \sigma_{X_{1}}^{2} & \cdots & \rho_{1n} \sigma_{X_{1}} \sigma_{X_{n}} \\ & \ddots & \vdots \\ \text{sym.} & & \sigma_{X_{n}}^{2} \end{array}\right]$$
and the variables $\mathbf{Y}=\left[\begin{array}{llll}Y_{1} & Y_{2} & \cdots & Y_{n}\end{array}\right]^{\top}$ obtained from a linear function $\mathbf{Y}=\mathbf{g}(\mathbf{X})=\mathbf{A} \mathbf{X}+\mathbf{b}$, so that
$$\underbrace{[\;]_{n \times 1}}_{\mathbf{Y}}=\underbrace{[\;]_{n \times n}}_{\mathbf{A}} \times \underbrace{[\;]_{n \times 1}}_{\mathbf{X}}+\underbrace{[\;]_{n \times 1}}_{\mathbf{b}}.$$
The function outputs $\mathbf{Y}$ are then described by the mean vector, covariance matrix, and joint covariance
$$\boldsymbol{\mu}_{\mathbf{Y}}=\mathbf{g}\left(\boldsymbol{\mu}_{\mathbf{X}}\right)=\mathbf{A} \boldsymbol{\mu}_{\mathbf{X}}+\mathbf{b}, \quad \boldsymbol{\Sigma}_{\mathbf{Y}}=\mathbf{A} \boldsymbol{\Sigma}_{\mathbf{X}} \mathbf{A}^{\top}, \quad \operatorname{cov}(\mathbf{X}, \mathbf{Y})=\boldsymbol{\Sigma}_{\mathbf{X}} \mathbf{A}^{\top}.$$
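A minimal numerical sketch of this $n \rightarrow n$ moment propagation, using an arbitrary hypothetical $\mathbf{A}$, $\mathbf{b}$, $\boldsymbol{\mu}_{\mathbf{X}}$, and $\boldsymbol{\Sigma}_{\mathbf{X}}$ (none of these values come from the text), cross-checked against Monte Carlo sampling:

```python
import numpy as np

# Hypothetical 2 -> 2 linear map Y = A X + b (illustrative values only)
A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
b = np.array([1.0, 0.0])
mu_X = np.array([0.0, 1.0])
Sigma_X = np.array([[1.0, 0.5],
                    [0.5, 2.0]])

mu_Y = A @ mu_X + b                  # A mu_X + b
Sigma_Y = A @ Sigma_X @ A.T          # A Sigma_X A^T

# Monte Carlo cross-check with a Gaussian X
rng = np.random.default_rng(3)
x = rng.multivariate_normal(mu_X, Sigma_X, size=500_000)
y = x @ A.T + b
print(mu_Y)          # [3. -1.]
print(Sigma_Y)
print(np.cov(y.T))   # ≈ Sigma_Y
```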
If instead of having an $n \rightarrow n$ function, we have an $n \rightarrow 1$ function $y=g(\mathbf{X})=\mathbf{a}^{\top} \mathbf{X}+b$, then the Jacobian simplifies to the gradient vector $\nabla g(\mathbf{x})=\left[\frac{\partial g(\mathbf{x})}{\partial x_{1}} \; \cdots \; \frac{\partial g(\mathbf{x})}{\partial x_{n}}\right]$, which is again equal to the vector $\mathbf{a}^{\top}$,
$$\underbrace{[\;]_{1 \times 1}}_{Y}=\underbrace{[\;]_{1 \times n}}_{\mathbf{a}^{\top}=\nabla g(\mathbf{x})} \times \underbrace{[\;]_{n \times 1}}_{\mathbf{X}}+\underbrace{[\;]_{1 \times 1}}_{b} .$$
The function output $Y$ is then described by
$$\begin{aligned} \mu_{Y} &=g\left(\boldsymbol{\mu}_{\mathbf{X}}\right)=\mathbf{a}^{\top} \boldsymbol{\mu}_{\mathbf{X}}+b \\ \sigma_{Y}^{2} &=\mathbf{a}^{\top} \boldsymbol{\Sigma}_{\mathbf{X}} \mathbf{a} . \end{aligned}$$
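The $n \rightarrow 1$ case reduces to two inner products. The sketch below uses arbitrary illustrative values for $\mathbf{a}$, $b$, $\boldsymbol{\mu}_{\mathbf{X}}$, and $\boldsymbol{\Sigma}_{\mathbf{X}}$ (not taken from the text) and checks the closed-form moments against sampling:

```python
import numpy as np

# Hypothetical 2 -> 1 linear function y = a^T x + b with correlated X1, X2
a = np.array([1.0, -2.0])
b = 0.5
mu_X = np.array([1.0, 3.0])
Sigma_X = np.array([[1.0, 0.3],
                    [0.3, 2.0]])

mu_Y = a @ mu_X + b                  # a^T mu_X + b
var_Y = a @ Sigma_X @ a              # a^T Sigma_X a

# Monte Carlo cross-check
rng = np.random.default_rng(1)
x = rng.multivariate_normal(mu_X, Sigma_X, size=500_000)
y = x @ a + b
print(mu_Y, var_Y)                   # -4.5, 7.8
print(y.mean(), y.var())             # close to the analytic values
```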

## Computer Science Assignment Help|Machine Learning Assignment Help|Linearization of Nonlinear Functions

Because of the analytic simplicity associated with linear functions of random variables, it is common to approximate nonlinear functions by linear ones using a Taylor series, so that
$$g(x)=g\left(x_{0}\right)+\left.\frac{d g(x)}{d x}\right|_{x=x_{0}}\left(x-x_{0}\right)+\left.\frac{1}{2} \frac{d^{2} g(x)}{d x^{2}}\right|_{x=x_{0}}\left(x-x_{0}\right)^{2}+\cdots
$$
In practice, the series are most often limited to the first-order approximation, so for a one-to-one function, it simplifies to
$$Y=g(X) \approx a X+b$$
Figure 3.22 presents an example of such a linear approximation for a one-to-one transformation. Linearizing at the expected value $\mu_{X}$ minimizes the approximation errors because the linearization is then centered in the region associated with a high probability content for $f_{X}(x)$. In that case, $a$ corresponds to the gradient of $g(x)$ evaluated at $\mu_{X}$,
$$a=\left[\frac{d g(x)}{d x}\right]_{x=\mu_{X}} .$$
For the $n \rightarrow 1$ multivariate case, the linearized transformation leads to
$$\begin{aligned} Y=g(\mathbf{X}) & \approx \mathbf{a}^{\top} \mathbf{X}+b \\ &=\nabla g\left(\boldsymbol{\mu}_{\mathbf{X}}\right)\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)+g\left(\boldsymbol{\mu}_{\mathbf{X}}\right), \end{aligned}$$
where $Y$ has a mean and variance equal to
$$\begin{aligned} \mu_{Y} & \approx g\left(\boldsymbol{\mu}_{\mathbf{X}}\right) \\ \sigma_{Y}^{2} & \approx \nabla g\left(\boldsymbol{\mu}_{\mathbf{X}}\right) \boldsymbol{\Sigma}_{\mathbf{X}} \nabla g\left(\boldsymbol{\mu}_{\mathbf{X}}\right)^{\top} . \end{aligned}$$
For the $n \rightarrow n$ multivariate case, the linearized transformation leads to
$$\begin{aligned} \mathbf{Y}=\mathbf{g}(\mathbf{X}) & \approx \mathbf{A X}+\mathbf{b} \\ &=\mathbf{J}_{\mathbf{Y}, \mathbf{X}}\left(\boldsymbol{\mu}_{\mathbf{X}}\right)\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)+\mathbf{g}\left(\boldsymbol{\mu}_{\mathbf{X}}\right), \end{aligned}$$
where $\mathbf{Y}$ is described by the mean vector and covariance matrix,
$$\begin{aligned} \boldsymbol{\mu}_{\mathbf{Y}} & \cong \mathbf{g}\left(\boldsymbol{\mu}_{\mathbf{X}}\right) \\ \boldsymbol{\Sigma}_{\mathbf{Y}} & \cong \mathbf{J}_{\mathbf{Y}, \mathbf{X}}\left(\boldsymbol{\mu}_{\mathbf{X}}\right) \boldsymbol{\Sigma}_{\mathbf{X}} \mathbf{J}_{\mathbf{Y}, \mathbf{X}}^{\top}\left(\boldsymbol{\mu}_{\mathbf{X}}\right) . \end{aligned}$$
For multivariate nonlinear functions, the gradient or Jacobian is evaluated at the expected value $\boldsymbol{\mu}_{\mathbf{X}}$.
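A small sketch of this first-order linearization for an $n \rightarrow 1$ case. The function $g(\mathbf{x})=x_{1} x_{2}$ and all numerical values are hypothetical (not the text's example); the comparison against Monte Carlo shows the linearized moments are close but carry a small second-order bias:

```python
import numpy as np

# First-order linearization of Y = g(X) = X1 * X2 around mu_X (illustrative)
mu_X = np.array([2.0, 5.0])
Sigma_X = np.array([[0.04, 0.0],
                    [0.0, 0.09]])

def g(x):
    return x[..., 0] * x[..., 1]

grad = np.array([mu_X[1], mu_X[0]])          # gradient of x1*x2, evaluated at mu_X

mu_Y_lin = g(mu_X)                           # ≈ g(mu_X) = 10.0
var_Y_lin = grad @ Sigma_X @ grad            # grad Sigma_X grad^T = 1.36

# Brute-force Monte Carlo through the exact nonlinear function
rng = np.random.default_rng(2)
x = rng.multivariate_normal(mu_X, Sigma_X, size=500_000)
y = g(x)
print(mu_Y_lin, var_Y_lin)                   # 10.0, 1.36
print(y.mean(), y.var())                     # close, small second-order bias
```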

## Computer Science Assignment Help|Machine Learning Assignment Help|Normal Distribution

The definition of probability distributions $f_{X}(x)$ was left aside in chapter 3. This chapter presents the formulation and properties for the probability distributions employed in this book: the Normal distribution for $x \in \mathbb{R}$, the log-normal for $x \in \mathbb{R}^{+}$, and the Beta for $x \in(0,1)$.

The most widely employed probability distribution is the Normal, also known as the Gaussian, distribution. In this book, the names Gaussian and Normal are employed interchangeably when describing a probability distribution. This section covers the mathematical foundation for the univariate and multivariate Normal and then details the properties explaining its widespread usage.

