### Simultaneous Localization and Mapping | Extended Kalman Filtering


## Extended Kalman Filtering

In general, the integral in the recursive update equation (2.15) cannot be computed in closed form. However, approximate SLAM algorithms have been developed by restricting the form of the SLAM posterior, the motion model, and the measurement model. Many of the original SLAM algorithms originate from a seminal paper by Smith and Cheeseman [82], which proposed the use of the Extended Kalman Filter (EKF) to estimate the SLAM posterior.

The EKF represents the SLAM posterior as a high-dimensional, multivariate Gaussian parameterized by a mean $\mu_{t}$ and a covariance matrix $\Sigma_{t}$. The mean describes the most likely state of the robot and landmarks, and the covariance matrix encodes the pairwise correlations between all pairs of state variables.
$$
\begin{aligned}
p\left(s_{t}, \Theta \mid u^{t}, z^{t}, n^{t}\right) &= \mathcal{N}\left(x_{t} ; \mu_{t}, \Sigma_{t}\right) \\
x_{t} &= \left\{s_{t}, \theta_{1}, \ldots, \theta_{N}\right\}
\end{aligned}
$$

For robots that move in a plane, the mean vector $\mu_{t}$ is of dimension $2N+3$, where $N$ is the number of landmarks. Three dimensions are required to represent the pose of the robot, and two dimensions are required to specify the position of each landmark. Likewise, the covariance matrix is of size $(2N+3) \times (2N+3)$. Thus, the number of parameters needed to describe the EKF posterior is quadratic in the number of landmarks in the map.
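The state layout above can be sketched in a few lines of NumPy (the landmark count `N` and variable names here are illustrative assumptions, not from the original text):

```python
import numpy as np

# Planar EKF-SLAM state layout sketch: the robot pose (x, y, theta)
# contributes 3 dimensions and each landmark (x_i, y_i) contributes 2,
# so the mean has dimension 2N + 3 and the covariance is
# (2N + 3) x (2N + 3).
N = 4                    # hypothetical number of landmarks
dim = 2 * N + 3
mu = np.zeros(dim)       # [x, y, theta, x_1, y_1, ..., x_N, y_N]
Sigma = np.eye(dim)      # storage grows quadratically in N
```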

Figure 2.3 shows a simple example of a Kalman Filter estimating the position of a single landmark in one dimension. Figure 2.3(a) shows the current belief in the landmark position (the solid distribution) and a new, noisy observation of the landmark (the dashed distribution). The Kalman Filter describes the optimal procedure for combining Gaussian beliefs in linear systems. In this case, the new posterior after incorporating the dashed observation is shown as a thick line in Figure 2.3(b).
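This one-dimensional fusion step can be sketched directly (a minimal illustration under the linear-Gaussian assumption; the function name is an assumption, not the book's code):

```python
import numpy as np

def fuse_1d(mu0, var0, z, var_z):
    """Fuse a prior belief N(mu0, var0) with a noisy observation
    N(z, var_z) of a 1-D landmark position."""
    # The Kalman gain weights the observation by its relative certainty.
    k = var0 / (var0 + var_z)
    mu = mu0 + k * (z - mu0)
    var = (1.0 - k) * var0
    return mu, var

# Prior at 0 with variance 4; observation at 2 with variance 1.
mu, var = fuse_1d(0.0, 4.0, 2.0, 1.0)
# The posterior mean lies between prior and observation, closer to the
# more certain estimate, and the posterior variance shrinks.
```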

The basic Kalman Filter algorithm is the optimal estimator for a linear system with Gaussian noise [3]. As its name suggests, the EKF is simply an extension of the basic Kalman Filter algorithm to non-linear systems. The EKF achieves this by “linearizing” the non-linear motion and measurement models around the most-likely state of the system. In general, this approximation is good if the true models are approximately linear and if the discrete time step of the filter is small.

The motion model will be written as the non-linear function $h\left(x_{t-1}, u_{t}\right)$ with linearized noise covariance $P_{t}$. Similarly, the measurement model will be written as the non-linear function $g\left(x_{t}, n_{t}\right)$ with linearized noise covariance $R_{t}$. The EKF update equations can be written as follows:
$$
\begin{aligned}
\mu_{t}^{-} &= h\left(\mu_{t-1}, u_{t}\right) \\
\Sigma_{t}^{-} &= \Sigma_{t-1} + P_{t} \\
G_{x} &= \left.\nabla_{x_{t}} g\left(x_{t}, n_{t}\right)\right|_{x_{t}=\mu_{t}^{-} ;\, n_{t}=n_{t}} \\
\hat{z}_{n_{t}} &= g\left(\mu_{t}^{-}, n_{t}\right) \\
Z_{t} &= G_{x} \Sigma_{t}^{-} G_{x}^{T} + R_{t} \\
K_{t} &= \Sigma_{t}^{-} G_{x}^{T} Z_{t}^{-1} \\
\mu_{t} &= \mu_{t}^{-} + K_{t}\left(z_{t} - \hat{z}_{n_{t}}\right) \\
\Sigma_{t} &= \left(I - K_{t} G_{x}\right) \Sigma_{t}^{-}
\end{aligned}
$$
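These update equations can be sketched directly in NumPy. The function name and the toy one-dimensional linear models in the check at the end are illustrative assumptions, not part of the original text:

```python
import numpy as np

def ekf_update(mu_prev, Sigma_prev, u, z, h, P, g, jac_g, R):
    """One EKF step: prediction from the motion model, then a
    measurement update with the linearized observation model."""
    # Prediction: propagate the mean through the motion model and
    # inflate the covariance by the linearized motion noise P.
    mu_pred = h(mu_prev, u)                       # mu_t^-
    Sigma_pred = Sigma_prev + P                   # Sigma_t^-
    # Linearize the measurement model at the predicted mean.
    G = jac_g(mu_pred)                            # G_x
    z_hat = g(mu_pred)                            # predicted observation
    Z = G @ Sigma_pred @ G.T + R                  # innovation covariance Z_t
    K = Sigma_pred @ G.T @ np.linalg.inv(Z)       # Kalman gain K_t
    mu = mu_pred + K @ (z - z_hat)
    Sigma = (np.eye(len(mu_pred)) - K @ G) @ Sigma_pred
    return mu, Sigma

# Toy 1-D check: linear motion (add the control), direct observation.
mu, Sigma = ekf_update(
    mu_prev=np.array([0.0]), Sigma_prev=np.array([[1.0]]),
    u=np.array([1.0]), z=np.array([2.0]),
    h=lambda x, u: x + u, P=np.array([[0.5]]),
    g=lambda x: x, jac_g=lambda x: np.array([[1.0]]),
    R=np.array([[0.5]]),
)
```

In this linear case the EKF reduces to the ordinary Kalman Filter, so the result can be checked by hand.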
For a complete derivation of the Kalman Filter, see $[46,31,87]$, and for a gentle introduction to the use of the Kalman Filter and the EKF, see $[95]$. It is important to note that if the SLAM problem is linear and Gaussian, then the Kalman Filter is both guaranteed to converge [69] and provably optimal [3]. Real-world SLAM problems are rarely linear, yet the EKF still tends to produce very good results in general. For this reason, the EKF is often held up as the “gold standard” of comparison for online SLAM algorithms.
The EKF has two substantial disadvantages when applied to the SLAM problem: quadratic complexity and sensitivity to failures in data association. The number of mathematical operations required to incorporate a control and an observation into the filter is dominated by the final EKF update equation (2.26). In the planar case, both $K_{t}$ and $G_{x}^{T}$ are of dimension $2N+3$ by the dimensionality of the observation (typically two). Thus, the inner product in the calculation of $\Sigma_{t}$ requires a number of calculations quadratic in the number of landmarks $N$.

The second problem with the EKF applies in situations in which the data associations $n_{t}$ are unknown. The EKF maintains a single data association hypothesis per observation, typically chosen using a maximum likelihood heuristic. If the probability of an observation coming from any of the current landmarks is too low, the possibility of a new landmark is considered. If the data association chosen by this heuristic is incorrect, the effect of incorporating this observation into the EKF can never be removed. If many observations are incorporated into the EKF with wrong data associations, the EKF will diverge. This is a well known failure mode of the EKF [18]. The following sections will describe alternative approaches to SLAM that address the issues of efficient scaling and robust data association.
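The maximum likelihood heuristic described above can be sketched as a Mahalanobis-distance gate (the function name and the gate value of 9.0 are illustrative assumptions; a real implementation would derive the gate from a chi-square threshold):

```python
import numpy as np

def ml_associate(z, predicted_obs, innovation_covs, gate=9.0):
    """Return the index of the most likely landmark for observation z,
    or None to propose a new landmark when every candidate fails the
    gate (i.e. the observation is unlikely under all current landmarks)."""
    best_j, best_d2 = None, gate
    for j, (z_hat, Z) in enumerate(zip(predicted_obs, innovation_covs)):
        v = z - z_hat                      # innovation
        d2 = v @ np.linalg.solve(Z, v)     # squared Mahalanobis distance
        if d2 < best_d2:
            best_j, best_d2 = j, d2
    return best_j

preds = [np.array([0.9, 0.1]), np.array([5.0, 5.0])]
covs = [0.1 * np.eye(2), 0.1 * np.eye(2)]
near = ml_associate(np.array([1.0, 0.0]), preds, covs)   # matches landmark 0
far = ml_associate(np.array([10.0, 10.0]), preds, covs)  # no match: new landmark
```

Note that the filter commits to whichever index this returns; as the text explains, a wrong commitment can never be undone.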

## Submap Methods

While the Kalman Filter is the optimal solution to the linear-Gaussian SLAM problem, it is computationally infeasible for large maps. As a result, a great deal of SLAM research has concentrated on developing SLAM algorithms that approximate the performance of the EKF but scale to much larger environments. The computational complexity of the EKF stems from the fact that the covariance matrix $\Sigma_{t}$ represents every pairwise correlation between the state variables. Incorporating an observation of a single landmark will necessarily have an effect on every other state variable.

Typically, the observation of a single landmark will have a weak effect on the positions of distant landmarks. For this reason, many researchers have developed EKF-based SLAM algorithms that decompose the global map into smaller submaps. One set of approaches exploits the fact that the robot may linger for some period of time in a small section of the global map. Postponement $[10,47]$ and the Compressed Extended Kalman Filter (CEKF) [36] are both techniques that delay the incorporation of local information into the global map while the robot stays inside a single submap. These techniques are still optimal, in that they generate the same results as the full EKF. However, the computation required by the two algorithms is reduced by a constant factor because the full map updates are performed less frequently.

Breaking the global map into submaps can also lead to a more sparse description of the correlations between map elements. Increased sparsity can be exploited to compute more efficient sensor updates. Network Coupled Feature Maps [1], ATLAS [5], the Local Mapping Algorithm [7], and the Decoupled Stochastic Mapping [49] frameworks all consider relationships between a sparse network of submaps. When the robot moves out of one submap, it either creates a new submap or relocates itself in a previously defined submap. Each approach reduces the computational requirement of incorporating an observation to constant time, given known data association. However, these computational gains come at the cost of slowing down the overall rate of convergence. Each map has far fewer features than the overall map would have, and the effects of observations on distant landmarks may have to percolate through multiple correlation links.

Guivant and Nebot presented a similar method called Suboptimal SLAM $[36]$, in which the local maps are all computed with respect to a small number of base landmarks. Since the different constellations of landmarks are kept in different coordinate frames, they can be decorrelated more easily than if every landmark were in a single coordinate frame. The resulting algorithm produces an estimate that approximates the true EKF estimate; however, it requires only linear time and memory.

## Sparse Extended Information Filters

Another filter approach to decomposing the SLAM problem is to represent maps using potential functions between nearby landmarks, similar to Markov Random Fields [6]. One such approach is the Sparse Extended Information Filter (SEIF) proposed in [90]. SEIFs implement an alternate parameterization of the Kalman Filter, called the Information Filter. Instead of updating a covariance matrix $\Sigma$, SEIFs update $\Sigma^{-1}$, the precision matrix. This parameterization is useful because the precision matrix is sparse if correlations are maintained only between nearby landmarks. Under appropriate approximations, this technique has been shown to provide efficient updates (given known data association) with a linear memory requirement.
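A measurement update in information form can be sketched as follows. This is a simplified linear-Gaussian illustration of why the update is local and additive; the function name is an assumption, and the SEIF's sparsification step is omitted:

```python
import numpy as np

def information_update(Omega, xi, G, R_inv, z, z_hat, mu_lin):
    """Measurement update on the precision matrix Omega = Sigma^{-1}
    and information vector xi = Omega @ mu, linearized at mu_lin.
    The update is additive and touches only the entries of Omega
    selected by the measurement Jacobian G, which is why sparsity can
    be preserved when each observation involves few landmarks."""
    Omega_new = Omega + G.T @ R_inv @ G
    xi_new = xi + G.T @ R_inv @ (z - z_hat + G @ mu_lin)
    return Omega_new, xi_new

# 1-D check: prior N(0, 1), direct observation z = 2 with variance 1.
Omega, xi = information_update(
    Omega=np.array([[1.0]]), xi=np.array([0.0]),
    G=np.array([[1.0]]), R_inv=np.array([[1.0]]),
    z=np.array([2.0]), z_hat=np.array([0.0]), mu_lin=np.array([0.0]),
)
mu = np.linalg.solve(Omega, xi)  # recover the mean from the info form
```

Recovering the mean requires solving a linear system in $\Omega$, which is the step the SEIF and TJTF literature works to keep cheap.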

The Thin Junction Tree Filter (TJTF) of Paskin [73] is a SLAM algorithm based on the same principle as the SEIF: maintaining a sparse network of probabilistic constraints between state variables enables efficient inference. The TJTF represents the SLAM posterior using a graphical model called a junction tree. The size of the junction tree grows as new landmarks are added to the map, but it can be “thinned” using an operation called variable contraction. The thinning operation can be viewed as a method for making the precision matrix of a SEIF sparse; however, global maps can be extracted from TJTFs without any matrix inversion. TJTFs require linear computation in general, which can be reduced to constant time with further approximation.

