## Stochastic Control Assignment Help | Third Degree Sensor Filtering Problem

statistics-lab™ safeguards your study-abroad career. We have built a solid reputation for Stochastic Control assignment help and guarantee reliable, high-quality, and original Statistics writing services. Our experts are extremely experienced with Stochastic Control assignments of every kind.

• Statistical Inference 统计推断
• Statistical Computing 统计计算
• Advanced Probability Theory 高等概率论
• Advanced Mathematical Statistics 高等数理统计学
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础

## Stochastic Control Assignment Help | Third Degree Sensor Filtering Problem

This section presents an example of designing the optimal filter for a linear state over third degree polynomial observations, reducing it to the optimal filtering problem for a second degree polynomial state with partially measured linear part and second degree polynomial multiplicative noise over linear observations, where a conditionally Gaussian state initial condition is additionally assumed.
Let the unmeasured scalar state $x(t)$ satisfy the trivial linear equation
$$d x(t)=d t+d w_1(t), \quad x(0)=x_0$$
and the observation process be given by the scalar third degree sensor equation
$$d y(t)=\left(x^3(t)+x(t)\right) d t+d w_2(t)$$
where $w_1(t)$ and $w_2(t)$ are standard Wiener processes independent of each other and of a Gaussian random variable $x_0$ serving as the initial condition in (10). The filtering problem is to find the optimal estimate for the linear state (10), using the third degree sensor observations (11).
Let us reformulate the problem, introducing the stochastic process $z(t)=h(x, t)=x^3(t)+x(t)$. Using the Ito formula (see (31)) for the stochastic differential of the cubic function $h(x, t)=x^3(t)+x(t)$, where $x(t)$ satisfies the equation (10), the following equation is obtained for $z(t)$
$$d z(t)=\left(1+3 x(t)+3 x^2(t)\right) d t+\left(3 x^2(t)+1\right) d w_1(t), z(0)=z_0$$
Here, $\frac{\partial h(x, t)}{\partial x}=3 x^2(t)+1$, $\frac{1}{2} \frac{\partial^2 h(x, t)}{\partial x^2}=3 x(t)$, and $\frac{\partial h(x, t)}{\partial t}=0$; therefore, $f(x, t)=1+3 x(t)+3 x^2(t)$ and $g(x, t)=3 x^2(t)+1$. The initial condition $z_0 \in R$ is considered a conditionally Gaussian random variable with respect to the observations (see the paragraph following (4) for details). This assumption is quite admissible in the filtering framework, since the real distributions of $x(t)$ and $z(t)$ are unknown. In terms of the process $z(t)$, the observation equation (11) takes the form
$$d y(t)=z(t) d t+d w_2(t)$$
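The reduction above can be sanity-checked numerically: simulating $x(t)$ from (10) with a seeded Euler–Maruyama scheme and feeding the same Wiener increments into the equation for $z(t)$, the integrated $z$ should track $x^3(t)+x(t)$ computed directly. A minimal sketch; the step size, horizon, and zero initial state are illustrative choices, not taken from the text:

```python
import math
import random

def simulate(seed=1, dt=1e-5, t_end=0.2):
    """Euler-Maruyama check that dz = (1 + 3x + 3x^2) dt + (3x^2 + 1) dw1
    is consistent with z = x^3 + x when dx = dt + dw1 (equation (10))."""
    rng = random.Random(seed)
    x = 0.0                  # state from (10); Gaussian x0 taken as 0 here
    z = x**3 + x             # z(0) = z0 from the same initial condition
    n = int(t_end / dt)
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))            # increment of w1
        z += (1 + 3*x + 3*x*x) * dt + (3*x*x + 1) * dw  # Ito-derived z dynamics
        x += dt + dw                                   # equation (10)
    return x, z

x, z = simulate()
print(abs(z - (x**3 + x)))   # small: the two computations of z agree
```

With a small step size the two paths agree to a few hundredths over this horizon, which is the expected Euler discretization error.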

## Stochastic Control Assignment Help | Random signal expansion

Let $\mathbf{S}$ be a zero-mean, stationary, discrete-time random signal made of $M$ successive samples, and let $\{s_1, s_2, \ldots, s_M\}$ be a zero-mean, uncorrelated random variable sequence, i.e.:
$$E\{s_n s_m\}=E\{s_m^2\}\, \delta_{n, m}$$
where $\delta_{n, m}$ denotes the Kronecker symbol.
It is possible to expand the signal $\mathbf{S}$ into a series of the form:
$$\mathbf{S}=\sum_{m=1}^M s_m \boldsymbol{\Psi}_m$$
where $\{\boldsymbol{\Psi}_m\}_{m=1, \ldots, M}$ corresponds to an $M$-dimensional deterministic basis. The vectors $\boldsymbol{\Psi}_m$ are linked to the choice of the random variable sequence $\{s_m\}$, so there are many decompositions (2). These vectors are determined by considering the mathematical expectation of the product of $s_m$ with the random signal $\mathbf{S}$, which gives:
$$\boldsymbol{\Psi}_m=\frac{1}{E\{s_m^2\}} E\{s_m \mathbf{S}\} .$$
Classically, using an $M$-dimensional deterministic basis $\{\boldsymbol{\Phi}_m\}_{m=1, \ldots, M}$, the random variables $s_m$ can be expressed by the following relation:
$$s_m=\mathbf{S}^T \boldsymbol{\Phi}_m$$
The determination of these random variables depends on the choice of the basis $\{\boldsymbol{\Phi}_m\}_{m=1, \ldots, M}$. We will use a basis that ensures the uncorrelation of the random variables. Using relations (1) and (4), one can show that uncorrelation is ensured when the vectors $\boldsymbol{\Phi}_m$ are solutions of the following quadratic form:
$$\boldsymbol{\Phi}_m^T \boldsymbol{\Gamma}_{\mathbf{SS}} \boldsymbol{\Phi}_n=E\{s_m^2\}\, \delta_{n, m},$$
where $\Gamma_{\mathbf{S S}}$ represents the signal covariance.
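For a symmetric covariance, the quadratic form above is satisfied by the eigenvectors of $\boldsymbol{\Gamma}_{\mathbf{SS}}$. As a toy illustration (a hypothetical 2-sample signal, not from the text), the $2 \times 2$ case can be checked by hand:

```python
import math

# Hypothetical covariance of a zero-mean signal S with M = 2 samples.
G = [[2.0, 1.0],
     [1.0, 2.0]]

# Eigenvectors of this symmetric matrix: (1,1)/sqrt(2) and (1,-1)/sqrt(2),
# with eigenvalues 3 and 1; they play the role of the basis vectors Phi_m.
s = 1.0 / math.sqrt(2.0)
phi = [[s, s], [s, -s]]

def quad(u, A, v):
    """u^T A v for 2-vectors."""
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

# Phi_m^T Gamma_SS Phi_n = E{s_m^2} delta_{nm}: diagonal entries are the
# variances E{s_m^2}, the off-diagonal entry vanishes (uncorrelation).
print(quad(phi[0], G, phi[0]))  # ~ 3.0 -> E{s_1^2}
print(quad(phi[1], G, phi[1]))  # ~ 1.0 -> E{s_2^2}
print(quad(phi[0], G, phi[1]))  # ~ 0.0 -> uncorrelated
```

The diagonal values are the variances $E\{s_m^2\}$ of the expansion coefficients, exactly as in relation (5).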


## Finite Element Method Help

Statistics-lab, as a professional service agency for international students, has for many years provided academic services to students in popular study destinations such as the United States, the United Kingdom, Canada, and Australia, including but not limited to essay writing, assignment writing, dissertation writing, report writing, group project writing, proposal writing, paper writing, presentation writing, computer assignment writing, paper editing and polishing, online course taking, and exam assistance. Our services cover every stage of overseas study, from high school through undergraduate and graduate programs, and span 99% of subjects worldwide, including finance, economics, accounting, auditing, and management. The writing team includes both professional native English writers and graduate students from top overseas universities; every writer has strong language skills, a solid academic background, and academic writing experience. We promise 100% originality, 100% professionalism, 100% punctuality, and 100% satisfaction.

## MATLAB Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over the years with input from many users. In university environments it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes let you learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

## Stochastic Control Assignment Help | Filtering Problem for Linear States over Polynomial Observations


## Stochastic Control Assignment Help | Filtering Problem for Linear States over Polynomial Observations

Let $(\Omega, F, P)$ be a complete probability space with an increasing right-continuous family of $\sigma$-algebras $F_t, t \geq t_0$, and let $\left(W_1(t), F_t, t \geq t_0\right)$ and $\left(W_2(t), F_t, t \geq t_0\right)$ be independent Wiener processes. The $F_t$-measurable random process $(x(t), y(t))$ is described by a linear differential equation for the system state
$$d x(t)=\left(a_0(t)+a(t) x(t)\right) d t+b(t) d W_1(t), \quad x\left(t_0\right)=x_0$$
and a nonlinear polynomial differential equation for the observation process
$$d y(t)=h(x, t) d t+B(t) d W_2(t)$$
Here, $x(t) \in R^n$ is the state vector and $y(t) \in R^m$ is the observation vector. The initial condition $x_0 \in R^n$ is a Gaussian vector such that $x_0$, $W_1(t)$, and $W_2(t)$ are independent. It is assumed that $B(t) B^T(t)$ is a positive definite matrix. All coefficients in (1)-(2) are deterministic functions of time of appropriate dimensions. The nonlinear function $h(x, t)$ forms the drift in the observation equation (2); it is considered a polynomial of $n$ variables, the components of the state vector $x(t) \in R^n$, with time-dependent coefficients. Since $x(t) \in R^n$ is a vector, this requires a special definition of the polynomial for $n>1$.
In accordance with (27), a $p$-degree polynomial of a vector $x(t) \in R^n$ is regarded as a $p$-linear form of the $n$ components of $x(t)$:
$$h(x, t)=\alpha_0(t)+\alpha_1(t) x+\alpha_2(t) x x^T+\ldots+\alpha_p(t) x \ldots p \text { times } \ldots x,$$
where $\alpha_0(t)$ is a vector of dimension $n$, $\alpha_1$ is a matrix of dimension $n \times n$, $\alpha_2$ is a 3D tensor of dimension $n \times n \times n$, $\alpha_p$ is a $(p+1)$D tensor of dimension $n \times \ldots(p+1) \text{ times} \ldots \times n$, and $x \times \ldots p \text{ times} \ldots \times x$ is a $p$D tensor of dimension $n \times \ldots p \text{ times} \ldots \times n$ obtained by $p$-fold spatial multiplication of the vector $x(t)$ by itself (see (27) for a precise definition). Such a polynomial can also be expressed in the summation form
$$\begin{aligned} h_k(x, t)=&\, \alpha_{0 k}(t)+\sum_i \alpha_{1 k i}(t) x_i(t)+\sum_{i j} \alpha_{2 k i j}(t) x_i(t) x_j(t)+\ldots \\ & +\sum_{i_1 \ldots i_p} \alpha_{p k i_1 \ldots i_p}(t) x_{i_1}(t) \ldots x_{i_p}(t), \quad k, i, j, i_1, \ldots, i_p=1, \ldots, n . \end{aligned}$$
The estimation problem is to find the optimal estimate $\hat{x}(t)$ of the system state $x(t)$, based on the observation process $Y(t)=\{y(s),\, t_0 \leq s \leq t\}$, that minimizes the Euclidean 2-norm
$$J=E\left[(x(t)-\hat{x}(t))^T(x(t)-\hat{x}(t)) \mid F_t^Y\right]$$
at every time moment $t$. Here, $E\left[\xi(t) \mid F_t^Y\right]$ means the conditional expectation of the stochastic process $\xi(t)=(x(t)-\hat{x}(t))^T(x(t)-\hat{x}(t))$ with respect to the $\sigma$-algebra $F_t^Y$ generated by the observation process $Y(t)$ in the interval $\left[t_0, t\right]$. As known (31), this optimal estimate is given by the conditional expectation
$$\hat{x}(t)=m_x(t)=E\left(x(t) \mid F_t^Y\right)$$
of the system state $x(t)$ with respect to the $\sigma$-algebra $F_t^Y$ generated by the observation process $Y(t)$ in the interval $\left[t_0, t\right]$. As usual, the matrix function
$$P(t)=E\left[\left(x(t)-m_x(t)\right)\left(x(t)-m_x(t)\right)^T \mid F_t^Y\right]$$
is the estimation error variance.
The proposed solution to this optimal filtering problem is based on the formulas for the Ito differential of the optimal estimate and the estimation error variance (cited after (31)) and given in the following section.

## Stochastic Control Assignment Help | Optimal Filter for Linear States over Polynomial Observations

Let us reformulate the problem, introducing the stochastic process $z(t)=h(x, t)$. Using the Ito formula (see (31)) for the stochastic differential of the nonlinear function $h(x, t)$, where $x(t)$ satisfies the equation (1), the following equation is obtained for $z(t)$
$$\begin{gathered} d z(t)=\frac{\partial h(x, t)}{\partial x}\left(a_0(t)+a(t) x(t)\right) d t+\frac{\partial h(x, t)}{\partial t} d t+ \\ \frac{1}{2} \frac{\partial^2 h(x, t)}{\partial x^2} b(t) b^T(t) d t+\frac{\partial h(x, t)}{\partial x} b(t) d W_1(t), \quad z\left(t_0\right)=z_0 . \end{gathered}$$
Note that the additional term $\frac{1}{2} \frac{\partial^2 h(x, t)}{\partial x^2} b(t) b^T(t) d t$ appears in view of the second derivative in $x$ in the Ito formula.
The initial condition $z_0 \in R^n$ is considered a conditionally Gaussian random vector with respect to the observations. This assumption is quite admissible in the filtering framework, since the real distributions of $x(t)$ and $z(t)$ are actually unknown. Indeed, as follows from (32), if only the two lowest conditional moments, the expectation $m_0$ and the variance $P_0$, of a random vector $\left[z_0, x_0\right]$ are available, the Gaussian distribution with the same parameters, $N\left(m_0, P_0\right)$, is the best approximation for the unknown conditional distribution of $\left[z_0, x_0\right]$ with respect to the observations. This fact is also a corollary of the central limit theorem (33) in probability theory.
A key point for further derivations is that the right-hand side of the equation (4) is a polynomial in $x$. Indeed, since $h(x, t)$ is a polynomial in $x$, the functions $\frac{\partial h(x, t)}{\partial x}, \frac{\partial h(x, t)}{\partial x} x(t), \frac{\partial h(x, t)}{\partial t}$, and $\frac{\partial^2 h(x, t)}{\partial x^2}$ are also polynomial in $x$. Thus, the equation (4) is a polynomial state equation with a polynomial multiplicative noise. It can be written in the compact form
$$d z(t)=f(x, t) d t+g(x, t) d W_1(t), \quad z\left(t_0\right)=z_0,$$
where
$$\begin{gathered} f(x, t)=\frac{\partial h(x, t)}{\partial x}\left(a_0(t)+a(t) x(t)\right)+\frac{\partial h(x, t)}{\partial t}+ \\ \frac{1}{2} \frac{\partial^2 h(x, t)}{\partial x^2} b(t) b^T(t), \quad g(x, t)=\frac{\partial h(x, t)}{\partial x} b(t) . \end{gathered}$$
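As a cross-check, these formulas can be specialized to the scalar cubic sensor of the first section, where $a_0(t)=1$, $a(t)=0$, $b(t)=1$, and $h(x, t)=x^3(t)+x(t)$:

```latex
f(x,t) = \underbrace{(3x^2+1)}_{\partial h/\partial x}\cdot 1
       + \underbrace{0}_{\partial h/\partial t}
       + \underbrace{\tfrac{1}{2}(6x)}_{\frac{1}{2}\,\partial^2 h/\partial x^2}\cdot 1
       = 1 + 3x + 3x^2,
\qquad
g(x,t) = (3x^2+1)\cdot 1 = 3x^2 + 1,
```

which are exactly the drift and diffusion coefficients obtained for $z(t)$ in the Third Degree Sensor Filtering Problem section.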



## EE365 Stochastic Control Course Description

Introduction to stochastic control, with applications taken from a variety of areas including supply-chain optimization, advertising, finance, dynamic resource allocation, caching, and traditional automatic control. Markov decision processes, optimal policy with full state information for finite-horizon case, infinite-horizon discounted, and average stage cost problems. Bellman value function, value iteration, and policy iteration. Approximate dynamic programming. Linear quadratic stochastic control.
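The Bellman machinery the course covers can be illustrated on a tiny discounted MDP. The two-state chain below is an invented example (states, actions, transition probabilities, and rewards are arbitrary), solved by value iteration until the Bellman residual is negligible:

```python
# Value iteration on a small, made-up discounted MDP.
# P[a][s] is the next-state distribution for action a in state s,
# R[a][s] the expected stage reward; gamma is the discount factor.
P = [[[0.9, 0.1], [0.2, 0.8]],   # action 0
     [[0.1, 0.9], [0.7, 0.3]]]   # action 1
R = [[1.0, 0.0],                 # action 0 rewards per state
     [0.0, 2.0]]                 # action 1 rewards per state
gamma = 0.9

def bellman_backup(V):
    """One application of the Bellman optimality operator."""
    return [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2))
                for a in range(2))
            for s in range(2)]

V = [0.0, 0.0]
for _ in range(1000):            # value iteration: repeat the backup
    V_new = bellman_backup(V)
    done = max(abs(V_new[s] - V[s]) for s in range(2)) < 1e-10
    V = V_new
    if done:
        break

# Greedy policy with respect to the converged value function.
policy = [max(range(2), key=lambda a: R[a][s]
              + gamma * sum(P[a][s][t] * V[t] for t in range(2)))
          for s in range(2)]
print(V, policy)
```

Because the backup is a $\gamma$-contraction, the iterates converge geometrically to the unique fixed point, which is the idea behind both value iteration and its approximate variants mentioned above.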

## PREREQUISITES

• EE365 is the same as MS&E251, Stochastic Decision Models.

## EE365 Stochastic Control Help (Exam Help, Online Tutor)

Lemma 1 Consider $\mathbf{z}_j$ a vector with the order of $\mathbf{x}_j$, associated to the dynamic behavior of the system and independent of the noise input $\xi_j$; and let $\left.\beta_j^i\right|_{j=1, \ldots, k}^{i=1, \ldots, l}$ be the normalized degree of activation, a variable defined as in (4) and associated to $\mathbf{z}_j$. Then, at the limit
$$\lim_{k \rightarrow \infty} \frac{1}{k} \sum_{j=1}^k\left[\beta_j^1 \mathbf{z}_j, \ldots, \beta_j^l \mathbf{z}_j\right] \xi_j^T=\mathbf{0}$$

Lemma 2 Under the same conditions as Lemma 1, and with $\mathbf{z}_j$ independent of the disturbance noise $\eta_j$, at the limit
$$\lim_{k \rightarrow \infty} \frac{1}{k} \sum_{j=1}^k\left[\beta_j^1 \mathbf{z}_j, \ldots, \beta_j^l \mathbf{z}_j\right] \eta_j=0$$

Lemma 3 Under the same conditions as Lemma 1, according to (23), at the limit
$$\lim_{k \rightarrow \infty} \frac{1}{k} \sum_{j=1}^k\left[\beta_j^1 \mathbf{z}_j, \ldots, \beta_j^l \mathbf{z}_j\right]\left[\gamma_j^1\left(\mathbf{x}_j+\xi_j\right), \ldots, \gamma_j^l\left(\mathbf{x}_j+\xi_j\right)\right]^T=\mathbf{C}_{\mathbf{z x}} \neq 0$$

Theorem 1 Under the conditions outlined in Lemmas 1 to 3, the estimate of the parameter vector $\theta$ for the model in (12) is strongly consistent, i.e., at the limit
$$\operatorname{p.lim}\, \theta=0$$
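The lemmas are essentially laws of large numbers: the weighted regressor $\beta_j \mathbf{z}_j$ is independent of the zero-mean noise $\xi_j$, so their empirical cross-correlation vanishes as $k$ grows. A seeded Monte Carlo sketch (scalar $z$, $l=1$, with invented distributions) makes this concrete:

```python
import random

def cross_correlation(k, seed=0):
    """(1/k) * sum_j beta_j z_j xi_j for noise xi_j independent of z_j."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(k):
        z = rng.gauss(0.0, 1.0)      # regressor, independent of the noise
        beta = rng.random()          # activation degree in [0, 1]
        xi = rng.gauss(0.0, 1.0)     # zero-mean noise input
        acc += beta * z * xi
    return acc / k

for k in (100, 10_000, 200_000):
    print(k, cross_correlation(k))   # magnitude shrinks roughly like 1/sqrt(k)
```

The same argument with $\eta_j$ gives Lemma 2, while in Lemma 3 the second factor contains $\mathbf{x}_j$ itself, which is correlated with $\mathbf{z}_j$, so the limit is a nonzero matrix $\mathbf{C}_{\mathbf{z x}}$.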

## Textbooks

• An Introduction to Stochastic Modeling, Fourth Edition, by Pinsky and Karlin (freely available through the university library)
• Essentials of Stochastic Processes, Third Edition, by Durrett (freely available through the university library)

To reiterate, the textbooks are freely available through the university library. Note that you must be connected to the university Wi-Fi or VPN to access the ebooks from the library links. Furthermore, the library links take some time to populate, so do not be alarmed if the webpage looks bare for a few seconds.

Statistics-lab™ can provide assignment and exam tutoring services for the stanford.edu EE365 Stochastic Control course! Look for Statistics-lab™. Statistics-lab™ safeguards your study-abroad career.

## Stochastic Control Assignment Help | MATH4091


## Stochastic Control Assignment Help | Experimental provisioning of the computing nodes

Figure 7 shows the evolution of the number of stock points of our benchmark application, and the evolution of the number of available nodes that have some work to achieve: the number of provisioned nodes. The number of stock points defines the problem size. It can evolve at each time step of the optimization part, and the splitting algorithm that distributes the $\mathrm{N}$-cube data and the associated work has to be run at the beginning of each time step (see section 3.1). This algorithm determines the number of available nodes to use at the current time step. The number of stock points of this benchmark increases up to 3,515,625, and we can see in figure 7 the evolution of their distribution on a 256-node PC cluster, and on 4096 and 8192 nodes of a Blue Gene supercomputer. Except at time step 0, which has only one stock point, it has been possible to use the 256 nodes of our PC cluster at each time step. But it has not been possible to achieve this efficiency on the Blue Gene. We succeeded in using up to 8192 nodes of this architecture, but sometimes we used only 2048 or 512 nodes.
However, section 5.4 will show the good scalability achieved by the optimization part of our application, both on our 256-node PC cluster and on our 8192-node Blue Gene. In fact, time steps with small numbers of stock points are not the most time consuming. They do not make up a significant part of the execution time, and using a limited number of nodes to process these time steps does not limit the performance. But it is critical to be able to use a large number of nodes to process time steps with a large number of stock points. This dynamic load balancing and adaptation of the number of working nodes is achieved by our splitting algorithm, as illustrated by figure 7.
Section 3.4 introduces our splitting strategy, which aims to create and distribute cubic subcubes while avoiding flat ones. When the backward loop of the optimization part leaves step 61 and enters step 60, the cube of stock points grows considerably (from 140,625 to 3,515,625 stock points) because dimensions two and five enlarge from 1 to 5 stock levels. In both steps the cube is split into 8192 subcubes, but this division evolves to take advantage of the enlargement of dimensions two and five.
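The splitting strategy can be sketched as follows: repeatedly halve the largest dimension of the $\mathrm{N}$-cube until there is one subcube per node, which keeps subcubes close to cubic. This is an illustrative reconstruction of the idea, not the authors' actual algorithm, and the dimension sizes below are invented:

```python
def split_ncube(dims, nodes):
    """Split an N-cube with the given dimension sizes into `nodes` subcubes
    by always halving the largest extent of the largest block.
    Returns the list of subcube shapes."""
    blocks = [list(dims)]
    while len(blocks) < nodes:
        b = max(blocks, key=max)                        # block with largest extent
        axis = max(range(len(b)), key=lambda i: b[i])   # its largest axis
        if b[axis] < 2:                                 # nothing left to split
            break
        blocks.remove(b)
        half = b[axis] // 2
        left, right = list(b), list(b)
        left[axis] = half
        right[axis] = b[axis] - half
        blocks.extend([left, right])
    return blocks

# Example: a 5-dimensional cube of stock levels split over 8 nodes.
blocks = split_ncube([15, 5, 15, 5, 5], 8)
print(len(blocks))
total = sum(b[0] * b[1] * b[2] * b[3] * b[4] for b in blocks)
print(total)   # every stock point is covered exactly once
```

Halving along one axis preserves the total number of stock points, so the subcubes always partition the original cube, and preferring the largest axis keeps the pieces from becoming flat.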

## Stochastic Control Assignment Help | Detailed best performances of the application and its subparts

Figure 9 shows the details of the best execution times (using multithreading and implementing serial optimizations). First, we can observe that the optimization part of our application scales, while the simulation part does not speed up and limits the global performance and scaling of the application. So our $\mathrm{N}$-cube distribution strategy, our shadow region map and routing plan computations, and our routing plan executions appear to be efficient and do not penalize the speedup of the optimization part. But our distribution strategy for Monte Carlo trajectories in the simulation part does not speed up, and limits the performance of the entire application. Second, we observe in figure 9 that our distributed and parallel algorithm, serial optimizations, and portable implementation allow us to run our complete application on a 7-stock, 10-state-variable problem in less than 1 h on our PC cluster with 256 nodes and 512 cores, and in less than 30 min on our Blue Gene/P supercomputer used with 4096 nodes and 16384 cores. These performances allow us to plan some computations we could not run before.
Finally, considering some real and industrial use cases with bigger data sets, the optimization part will grow more than the simulation part, and our implementation should scale both on our PC cluster and on our Blue Gene/P. Our current distributed and parallel implementation is operational for many of our real problems.

Our parallel algorithm, serial optimizations, and portable implementation allow us to run our complete application on a 7-stock, 10-state-variable problem in less than 1 h on our PC cluster with 256 nodes and 512 cores, and in less than 30 min on our Blue Gene/P supercomputer used with 4096 nodes and 16384 cores. On both testbeds, the benefit of multithreading and serial optimizations has been measured and emphasized. Then, a detailed analysis has shown that the optimization part scales while the simulation part reaches its limits. These current performances promise high performance for future industrial use cases, where the optimization part will grow (achieving more computations in one time step) and will become a more significant part of the application.
However, for some high-dimension problems, the communications during the simulation part could become predominant. We plan to modify this part by reorganizing trajectories so that trajectories with similar stock levels are treated by the same processor. This will allow us to identify and bring back the shadow region only once per processor at each time step, and to decrease the number of communications needed.
Previously, our paradigm has also been successfully tested on a smaller gas storage case [Makassikis et al. (2008)]. Currently it is used to value power plants facing market prices and for different problems of asset liability management. In order to ease the development of new stochastic control applications, we aim to develop a generic library to rapidly and efficiently distribute $\mathrm{N}$-dimensional cubes of data on large-scale architectures.


## Stochastic Control Assignment Help | ELEC9741


## Stochastic Control Assignment Help | Nested loops multithreading

In order to take advantage of multi-core processors we have multithreaded our code, so as to create only one MPI process per node and one thread per core in place of one MPI process per core. Depending on the application and the computations achieved, this strategy can be more or less efficient. We will see in section 5.4 that it leads to a serious performance increase for our application. To achieve multithreading we have split some nested loops using the OpenMP standard or the Intel Threading Building Blocks library (TBB). We maintain these two multithreaded implementations to improve the portability of our code. For example, in the past we encountered some problems at execution time using OpenMP with the ICC compiler, and TBB was not available on Blue Gene supercomputers. Using OpenMP or Intel TBB, we have adopted an incremental and pragmatic approach to identify the nested loops to parallelize. First, we multithreaded the optimization part of our application (the most time consuming), and second we attempted to multithread the simulation part.
In the optimization part of our application we have easily multithreaded two nested loops: the first prepares data and the second computes the Bellman values (see section 2). However, only the second has a significant execution time and leads to an efficient multithreaded parallelization. A computing loop in the routing plan execution, which packs some data to prepare messages, could be parallelized too. But it would lead to seriously more complex code, while this loop accounts for only 0.15-0.20% of the execution time on a 256 dual-core PC cluster and on several thousand nodes of a Blue Gene/P. So we have not multithreaded this loop.
In the simulation part each node processes some independent Monte Carlo trajectories, and parallelization with multithreading has to be achieved while testing the commands in algorithm 2. But this application part is bounded not by the amount of computation, but by the amount of data to fetch from other nodes and store in node memory, because each MC trajectory follows an unpredictable path and requires a specific shadow region. So the impact of multithreading on the simulation part will be limited until we improve this part (see section 6).
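The "one MPI process per node, one thread per core" strategy amounts to splitting the outer loop over stock points of the Bellman-value computation across threads. The document's code does this with OpenMP/TBB in C++; the sketch below only illustrates the same loop-splitting idea in Python, with an invented stand-in for the per-stock-point computation:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for the per-stock-point Bellman computation (invented).
def bellman_value(stock_point, gamma=0.9):
    level, price = stock_point
    return max(level * price, gamma * (level + 1) * price)

# A flattened 2-dimensional "cube" of stock points: 100 levels x 10 prices.
stock_points = [(lvl, 1.0 + 0.1 * p) for lvl in range(100) for p in range(10)]

def compute_chunk(chunk):
    # Each thread handles a contiguous chunk of stock points, mirroring
    # the multithreaded nested loop over the N-cube in the original code.
    return [bellman_value(sp) for sp in chunk]

def parallel_bellman(points, threads=4):
    size = (len(points) + threads - 1) // threads
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(compute_chunk, chunks))
    return [v for chunk in results for v in chunk]

values = parallel_bellman(stock_points)
assert values == [bellman_value(sp) for sp in stock_points]  # same as serial
```

Since the per-point computations are independent, the parallel result is identical to the serial one; in C++ the chunking is what `#pragma omp parallel for` or a TBB `parallel_for` performs implicitly (note that in CPython, threads only speed up such loops when the work releases the GIL).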

## Stochastic Control Assignment Help | Serial optimizations

Beyond the parallel aspects, serial optimization is a critical point to tackle current and coming processor complexity, as well as to exploit the full capabilities of the compilers. Three types of serial optimization were carried out to match the processor architecture and to reduce language complexity, in order to help the compiler generate the best binary:

1. Substitution or coupling of the main computing parts involving blitz++ classes with standard C operations or basic C functions.
2. Loop unrolling with a backward technique, to ease the generation and optimization of SIMD or SSE (Streaming SIMD Extensions for the x86 processor architecture) instructions by the compiler while reducing the number of branches.
3. Moving local data allocations outside the parallel multithreaded sections, to minimize memory fragmentation, reduce C++ constructor/destructor overhead, and control data alignment (to optimize memory bandwidth depending on the memory architecture).

Most of the data are stored and computed within blitz++ classes. blitz++ streamlines the overall implementation by providing array operations whatever the data type. Operator overloading is one of the main inhibitors preventing compilers from generating an optimal binary. To get around this inhibitor, the operations involving blitz++ classes were replaced by standard C pointers and C operations in the most time-consuming routines. C pointers and C operators are very simple to couple with blitz++ arrays, and whatever the processor architecture, we obtained a significant speedup, greater than a factor of 3, with this technique. See [Vezolle et al. (2009)] for more details about these optimizations.
With current and future processors it is compulsory to generate vector instructions to reach a good ratio of the serial peak performance. 30-40% of the total elapsed time of our software is spent in while loops including a break test. For a medium case the minimum number of iterations is around 100. A simple look at the assembler code shows that, whatever the level of compiler optimization, the structure of the loop and the break test do not allow unrolling techniques, and therefore vector instructions are not generated. So we have explicitly unrolled these while-and-break loops, with extra post-computing iterations and then unrolling back to get the break point. This method enables vector instructions while reducing the number of branches.
In the shared-memory parallel implementation (with the Intel TBB library or OpenMP directives) each thread independently allocates local blitz++ classes (arrays or vectors). The memory allocations are requested concurrently in the heap zone and can generate memory fragmentation as well as potential bank conflicts. In order to reduce the overhead due to memory management between the threads, the main local arrays were moved outside the parallel section and indexed by thread number. This optimization decreases the number of memory allocations while allowing better control of array alignment between the threads. Moreover, a singleton C++ class was added to the blitz++ library to synchronize the thread memory constructors/destructors and therefore minimize memory fragmentation (this feature can be deactivated depending on the operating system).

