## Proof of the FastSLAM Factorization

The FastSLAM factorization can be derived directly from the SLAM path posterior (3.2). Using the definition of conditional probability, the SLAM posterior can be rewritten as:
$$p\left(s^{t}, \Theta \mid z^{t}, u^{t}, n^{t}\right)=p\left(s^{t} \mid z^{t}, u^{t}, n^{t}\right) p\left(\Theta \mid s^{t}, z^{t}, u^{t}, n^{t}\right)$$
Thus, to derive the factored posterior (3.3), it suffices to show the following for all non-negative values of $t$ :
$$p\left(\Theta \mid s^{t}, z^{t}, u^{t}, n^{t}\right)=\prod_{n=1}^{N} p\left(\theta_{n} \mid s^{t}, z^{t}, u^{t}, n^{t}\right)$$
This statement can be proven by induction, for which two intermediate results must first be derived. The first is the posterior of the observed landmark $\theta_{n_{t}}$ conditioned on the data, which can be rewritten using Bayes Rule.
$$p\left(\theta_{n_{t}} \mid s^{t}, z^{t}, u^{t}, n^{t}\right) \stackrel{\text { Bayes }}{=} \frac{p\left(z_{t} \mid \theta_{n_{t}}, s^{t}, z^{t-1}, u^{t}, n^{t}\right)}{p\left(z_{t} \mid s^{t}, z^{t-1}, u^{t}, n^{t}\right)} p\left(\theta_{n_{t}} \mid s^{t}, z^{t-1}, u^{t}, n^{t}\right)$$
Note that the current observation $z_{t}$ depends solely on the current state of the robot and the landmark being observed. In the rightmost term of (3.6), we similarly notice that the current pose $s_{t}$, the current action $u_{t}$, and the current data association $n_{t}$ have no effect on $\theta_{n_{t}}$ without the current observation $z_{t}$. Thus, all of these variables can be dropped.
$$p\left(\theta_{n_{t}} \mid s^{t}, z^{t}, u^{t}, n^{t}\right) \stackrel{M a r k o v}{=} \frac{p\left(z_{t} \mid \theta_{n_{t}}, s_{t}, n_{t}\right)}{p\left(z_{t} \mid s^{t}, z^{t-1}, u^{t}, n^{t}\right)} p\left(\theta_{n_{t}} \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}\right)$$
Next, we solve for the rightmost term of (3.7) to get:
$$p\left(\theta_{n_{t}} \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}\right)=\frac{p\left(z_{t} \mid s^{t}, z^{t-1}, u^{t}, n^{t}\right)}{p\left(z_{t} \mid \theta_{n_{t}}, s_{t}, n_{t}\right)} p\left(\theta_{n_{t}} \mid s^{t}, z^{t}, u^{t}, n^{t}\right)$$
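To see how these intermediate results close the induction, it helps to write out the remaining step explicitly; the following is a condensed paraphrase of the argument, not the book's full derivation:

```latex
% Induction hypothesis: the factorization holds at time t-1.
p\left(\Theta \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}\right)
  = \prod_{n=1}^{N} p\left(\theta_n \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}\right)

% Landmarks not observed at time t gain no new information from z_t,
% so for all n \neq n_t their posteriors are unchanged:
p\left(\theta_n \mid s^{t}, z^{t}, u^{t}, n^{t}\right)
  = p\left(\theta_n \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}\right)

% Substituting (3.8) for the observed landmark and the identity above
% for all other landmarks into the induction hypothesis reproduces the
% factorization (3.5) at time t, completing the induction.
```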

## The FastSLAM 1.0 Algorithm

The factorization of the posterior (3.3) highlights important structure in the SLAM problem that is ignored by SLAM algorithms that estimate an unstructured posterior. This structure suggests that under the appropriate conditioning, no cross-correlations between landmarks have to be maintained explicitly. FastSLAM exploits the factored representation by maintaining $N+1$ filters, one for each term in (3.3). By doing so, all $N+1$ filters are low-dimensional.
FastSLAM estimates the first term in (3.3), the robot path posterior, using a particle filter. The remaining $N$ conditional landmark posteriors $p\left(\theta_{n} \mid s^{t}, z^{t}, u^{t}, n^{t}\right)$ are estimated using EKFs. Each EKF tracks a single landmark position, and therefore is low-dimensional and fixed in size. The landmark EKFs are all conditioned on robot paths, with each particle in the particle filter possessing its own set of EKFs. In total, there are $N \cdot M$ EKFs, where $M$ is the total number of particles in the particle filter. The particle filter is depicted graphically in Figure 3.3. Each FastSLAM particle is of the form:
$$S_{t}^{[m]}=\left\langle s^{t,[m]}, \mu_{1, t}^{[m]}, \Sigma_{1, t}^{[m]}, \ldots, \mu_{N, t}^{[m]}, \Sigma_{N, t}^{[m]}\right\rangle$$
The bracketed notation $[m]$ indicates the index of the particle; $s^{t,[m]}$ is the $m$-th particle’s path estimate, and $\mu_{n, t}^{[m]}$ and $\Sigma_{n, t}^{[m]}$ are the mean and covariance of the Gaussian representing the $n$-th feature location conditioned on the path $s^{t,[m]}$. Together all of these quantities form the $m$-th particle $S_{t}^{[m]}$, of which there are a total of $M$ in the FastSLAM posterior. Filtering, that is, calculating the posterior at time $t$ from the one at time $t-1$, involves generating a new particle set $S_{t}$ from $S_{t-1}$, the particle set one time step earlier. The new particle set incorporates the latest control $u_{t}$ and measurement $z_{t}$ (with corresponding data association $n_{t}$ ). This update is performed in four steps.
First, a new robot pose is drawn for each particle, incorporating the latest control. Each pose is added to the corresponding robot path estimate $s^{t-1,[m]}$. Next, the landmark EKFs associated with the observed landmark are updated with the new observation. Since the robot path particles are not drawn from the true path posterior, each particle is given an importance weight to reflect this difference. A new set of particles $S_{t}$ is then drawn from the weighted particle set using importance resampling. This resampling step is necessary to ensure that the particles are distributed according to the true posterior (in the limit of infinitely many particles). The four basic steps of the FastSLAM algorithm [59], shown in Table 3.1, will be explained in detail in the following four sections.
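The four steps above can be sketched as a toy one-dimensional filter. Everything below (the names, the scalar pose, and the range-style measurement model) is our own illustration, not the book's implementation; each landmark "EKF" here degenerates to a scalar Kalman filter:

```python
import math
import random

# Toy 1-D sketch of the four FastSLAM 1.0 update steps. All names and the
# scalar range-style measurement model are illustrative, not from the book.

class Particle:
    def __init__(self, pose):
        self.path = [pose]    # s^{t,[m]}: scalar pose history
        self.landmarks = {}   # n -> (mu, sigma): one tiny "EKF" per landmark
        self.weight = 1.0

def propose_pose(pose, u, rng=random):
    """Step 1: draw s_t ~ p(s_t | u_t, s_{t-1}) from a Gaussian motion model."""
    return pose + u + rng.gauss(0.0, 0.1)

def ekf_update(mu, sigma, z, pose, r=0.25):
    """Step 2: scalar Kalman update of one landmark. Also returns the
    measurement likelihood, which serves as the importance weight (step 3)."""
    z_hat = mu - pose                   # predicted measurement
    s = sigma + r                       # innovation covariance
    k = sigma / s                       # Kalman gain
    lik = math.exp(-0.5 * (z - z_hat) ** 2 / s) / math.sqrt(2 * math.pi * s)
    return mu + k * (z - z_hat), (1.0 - k) * sigma, lik

def resample(particles, rng=random):
    """Step 4: draw a new unweighted particle set in proportion to the weights."""
    picks = rng.choices(particles, weights=[p.weight for p in particles],
                        k=len(particles))
    out = []
    for p in picks:                     # copy so duplicates evolve independently
        q = Particle(p.path[-1])
        q.path = list(p.path)
        q.landmarks = dict(p.landmarks)
        out.append(q)
    return out

def fastslam_step(particles, u, z, n):
    for p in particles:
        pose = propose_pose(p.path[-1], u)               # step 1
        p.path.append(pose)
        mu, sigma = p.landmarks.get(n, (pose + z, 1.0))  # initialize if unseen
        mu, sigma, w = ekf_update(mu, sigma, z, pose)    # step 2
        p.landmarks[n] = (mu, sigma)
        p.weight = w                                     # step 3
    return resample(particles)                           # step 4
```

Note that each particle carries its own landmark estimates, so the landmark update never touches cross-correlations between landmarks, exactly as the factorization promises.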

## Sampling a New Pose

The particle set $S_{t}$ is calculated incrementally, from the set $S_{t-1}$ at time $t-1$, the observation $z_{t}$, and the control $u_{t}$. Since we cannot draw samples directly from the SLAM posterior at time $t$, we will instead draw samples from a simpler distribution called the proposal distribution, and correct for the difference using a technique called importance sampling.

In general, importance sampling is an algorithm for drawing samples from functions for which no direct sampling procedure exists [55]. Each sample drawn from the proposal distribution is given a weight equal to the ratio of the posterior distribution to the proposal distribution at that point in the sample space. A new set of unweighted samples is drawn from the weighted set with probabilities in proportion to the weights. This process is an instantiation of Rubin’s Sampling Importance Resampling (SIR) algorithm [79].
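As a concrete illustration of the SIR recipe (our own toy example, not from the book), consider drawing samples from a unit Gaussian target using a wider Gaussian as the proposal distribution:

```python
import math
import random

def sir(n=5000, seed=1):
    """Draw approximately from N(0, 1) via SIR, using N(0, 2^2) as proposal."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]   # sample the proposal
    # Weight each sample by target density / proposal density at that point;
    # unnormalized densities suffice, since resampling normalizes the weights.
    ws = [math.exp(-0.5 * x * x) / (math.exp(-0.5 * (x / 2.0) ** 2) / 2.0)
          for x in xs]
    # Resample an unweighted set with probabilities proportional to the weights.
    return rng.choices(xs, weights=ws, k=n)

samples = sir()
```

The resampled set concentrates where the target outweighs the proposal, so its empirical mean and standard deviation approach those of the unit Gaussian as `n` grows.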

The proposal distribution of FastSLAM generates a guess of the robot's pose at time $t$ for each particle $S_{t-1}^{[m]}$. This guess is obtained by sampling from the probabilistic motion model.
$$s_{t}^{[m]} \sim p\left(s_{t} \mid u_{t}, s_{t-1}^{[m]}\right)$$
This estimate is added to a temporary set of particles, along with the path $s^{t-1,[m]}$. Under the assumption that the set of particles $S_{t-1}$ is distributed according to $p\left(s^{t-1} \mid z^{t-1}, u^{t-1}, n^{t-1}\right)$, which is asymptotically correct, the new particles drawn from the proposal distribution are distributed according to:
$$p\left(s^{t} \mid z^{t-1}, u^{t}, n^{t-1}\right)$$
It is important to note that the motion model can be any non-linear function. This is in contrast to the EKF, which requires the motion model to be linearized. The only practical limitation on the motion model is that samples can be drawn from it conveniently. Regardless of the proposal distribution, drawing a new pose is a constant-time operation for every particle; it does not depend on the size of the map.

A simple four-parameter motion model was used for all of the planar robot experiments in this book. This model assumes that the velocity of the robot is constant over the time interval covered by each control. Each control $u_{t}$ is two-dimensional, consisting of a translational velocity $v_{t}$ and a rotational velocity $\omega_{t}$. The model further assumes that the error in the controls is Gaussian. Note that this does not imply that the error in the robot's motion will also be Gaussian; the robot's motion is a non-linear function of the controls and the control noise.

The errors in translational and rotational velocity have an additive and a multiplicative component. Throughout this book, the notation $\mathcal{N}(x ; \mu, \Sigma)$ will be used to denote a normal distribution over the variable $x$ with mean $\mu$ and covariance $\Sigma$.
$$\begin{aligned} v_{t}^{\prime} & \sim \mathcal{N}\left(v_{t}, \alpha_{1} v_{t}+\alpha_{2}\right) \\ \omega_{t}^{\prime} & \sim \mathcal{N}\left(\omega_{t}, \alpha_{3} \omega_{t}+\alpha_{4}\right) \end{aligned}$$
This motion model is able to represent the slip and skid errors that occur in typical ground vehicles [8]. The first step in drawing a new robot pose from this model is to draw new translational and rotational velocities according to the observed control. The new pose $s_{t}$ can then be calculated by simulating the new control forward from the previous pose $s_{t-1}^{[m]}$. Figure 3.4 shows 250 samples drawn from this motion model given a curved trajectory. In this simulated example, the translational error of the robot is low, while the rotational error is high.
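A minimal sketch of this two-stage sampling procedure follows, assuming a planar pose $(x, y, \theta)$ and constant velocities over the interval, which trace a circular arc (the $\alpha$ values below are made-up illustration constants, not tuned parameters from the book):

```python
import math
import random

# Illustrative noise parameters (alpha_1, alpha_2, alpha_3, alpha_4);
# in practice these would be tuned to the vehicle.
ALPHA = (0.1, 0.01, 0.1, 0.01)

def sample_motion(pose, u, dt=1.0, rng=random):
    """Draw a successor pose (x, y, theta) given control u = (v, w)."""
    x, y, theta = pose
    v, w = u
    a1, a2, a3, a4 = ALPHA
    # Perturb the commanded velocities: each variance has a multiplicative
    # component (a1*v, a3*w) and an additive component (a2, a4).
    v2 = rng.gauss(v, math.sqrt(a1 * abs(v) + a2))
    w2 = rng.gauss(w, math.sqrt(a3 * abs(w) + a4))
    # Simulate the noisy control forward over dt: a circular arc of
    # radius v2/w2, degenerating to a straight line as w2 -> 0.
    if abs(w2) < 1e-6:
        return (x + v2 * dt * math.cos(theta),
                y + v2 * dt * math.sin(theta),
                theta)
    r = v2 / w2
    return (x - r * math.sin(theta) + r * math.sin(theta + w2 * dt),
            y + r * math.cos(theta) - r * math.cos(theta + w2 * dt),
            theta + w2 * dt)
```

Drawing many samples from one starting pose produces a banana-shaped cloud like the one in Figure 3.4, since rotational noise spreads the endpoints along an arc.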
