Simultaneous Localization and Mapping | Extended Kalman Filtering

Extended Kalman Filtering

In general, the integral in the recursive update equation (2.15) cannot be computed in closed form. However, approximate SLAM algorithms have been developed by restricting the form of the SLAM posterior, the motion model, and the measurement model. Many of the original SLAM algorithms originate from a seminal paper by Smith and Cheeseman [82], which proposed the use of the Extended Kalman Filter (EKF) to estimate the SLAM posterior.

The EKF represents the SLAM posterior as a high-dimensional, multivariate Gaussian parameterized by a mean $\mu_{t}$ and a covariance matrix $\Sigma_{t}$. The mean describes the most likely state of the robot and landmarks, and the covariance matrix encodes the pairwise correlations between all pairs of state variables.
$$
\begin{aligned}
p\left(s_{t}, \Theta \mid u^{t}, z^{t}, n^{t}\right) &= \mathcal{N}\left(x_{t} ; \mu_{t}, \Sigma_{t}\right) \\
x_{t} &= \left\{s_{t}, \theta_{1}, \ldots, \theta_{N}\right\}
\end{aligned}
$$

For robots that move in a plane, the mean vector $\mu_{t}$ is of dimension $2N+3$, where $N$ is the number of landmarks. Three dimensions are required to represent the pose of the robot, and two dimensions are required to specify the position of each landmark. Likewise, the covariance matrix is of size $2N+3$ by $2N+3$. Thus, the number of parameters needed to describe the EKF posterior is quadratic in the number of landmarks in the map.
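As a concrete illustration of this layout (a hypothetical Python sketch, not code from the text; the function name and initial covariance values are invented), the $2N+3$-dimensional state for a planar robot with $N$ landmarks can be assembled as:

```python
import numpy as np

def make_ekf_state(robot_pose, landmarks):
    """Build the planar EKF-SLAM mean vector and covariance matrix.

    robot_pose: (x, y, theta)   -- 3 dimensions for the robot pose
    landmarks:  list of (x, y)  -- 2 dimensions per landmark
    Returns a mean of length 2N + 3 and a (2N+3) x (2N+3) covariance.
    """
    n = len(landmarks)
    mu = np.concatenate([
        np.asarray(robot_pose, dtype=float),
        np.asarray(landmarks, dtype=float).reshape(-1),
    ])
    # Start landmarks with large uncertainty; observations later fill in
    # the off-diagonal blocks that correlate all pairs of state variables.
    sigma = np.eye(2 * n + 3) * 1e6
    sigma[:3, :3] = np.eye(3) * 1e-3  # assume a well-known initial pose
    return mu, sigma

mu, sigma = make_ekf_state((0.0, 0.0, 0.0), [(2.0, 1.0), (5.0, -1.0)])
# N = 2 landmarks: mu has length 2*2 + 3 = 7, sigma is 7 x 7
```

Because the covariance stores every pairwise correlation, its storage grows quadratically with $N$, which is exactly the scaling problem discussed later in this section.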

Figure $2.3$ shows a simple example of a Kalman Filter estimating the position of a single landmark in one dimension. Figure $2.3$(a) shows the current belief in the landmark position (the solid distribution) and a new, noisy observation of the landmark (the dashed distribution). The Kalman Filter describes the optimal procedure for combining Gaussian beliefs in linear systems. In this case, the new posterior after incorporating the dashed observation is shown as a thick line in Figure $2.3$(b).
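The fusion in Figure $2.3$ is the precision-weighted combination of two Gaussians, which can be sketched in a few lines (my own illustration; the variable names are invented):

```python
def fuse_1d(mu_prior, var_prior, z, var_obs):
    """Kalman measurement update in one dimension: combine the belief
    N(mu_prior, var_prior) with a noisy observation N(z, var_obs)."""
    k = var_prior / (var_prior + var_obs)    # scalar Kalman gain
    mu_post = mu_prior + k * (z - mu_prior)  # pulled toward the observation
    var_post = (1.0 - k) * var_prior         # never larger than either input
    return mu_post, var_post

# Equally confident belief and observation: the posterior mean is the
# midpoint, and the variance shrinks by half.
mu, var = fuse_1d(0.0, 4.0, 2.0, 4.0)  # -> (1.0, 2.0)
```

Note that the posterior variance is always at most the prior variance, which is why the thick posterior in Figure $2.3$(b) is narrower than either input distribution.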

The basic Kalman Filter algorithm is the optimal estimator for a linear system with Gaussian noise [3]. As its name suggests, the EKF is simply an extension of the basic Kalman Filter algorithm to non-linear systems. The EKF does this by replacing the motion and measurement models with nonlinear models that are “linearized” around the most-likely state of the system. In general, this approximation is good if the true models are approximately linear and if the discrete time step of the filter is small.

The motion model will be written as the non-linear function $h\left(x_{t-1}, u_{t}\right)$ with linearized noise covariance $P_{t}$. Similarly, the measurement model will be written as the non-linear function $g\left(x_{t}, n_{t}\right)$ with linearized noise covariance $R_{t}$. The EKF update equations can be written as follows:

$$
\begin{aligned}
\mu_{t}^{-} &= h\left(\mu_{t-1}, u_{t}\right) \\
\Sigma_{t}^{-} &= \Sigma_{t-1} + P_{t} \\
G_{x} &= \left.\nabla_{x_{t}} g\left(x_{t}, n_{t}\right)\right|_{x_{t}=\mu_{t}^{-};\; n_{t}=n_{t}} \\
\hat{z}_{n_{t}} &= g\left(\mu_{t}^{-}, n_{t}\right) \\
Z_{t} &= G_{x} \Sigma_{t}^{-} G_{x}^{T} + R_{t} \\
K_{t} &= \Sigma_{t}^{-} G_{x}^{T} Z_{t}^{-1} \\
\mu_{t} &= \mu_{t}^{-} + K_{t}\left(z_{t} - \hat{z}_{n_{t}}\right) \\
\Sigma_{t} &= \left(I - K_{t} G_{x}\right) \Sigma_{t}^{-}
\end{aligned}
$$
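The update equations above translate almost line-for-line into code. The following is a hypothetical NumPy sketch (not from the text), assuming the caller supplies the motion model $h$, the measurement model $g$, its Jacobian $G_x$, and the linearized noise covariances $P_t$ and $R_t$:

```python
import numpy as np

def ekf_update(mu_prev, sigma_prev, u_t, z_t,
               h, P_t, g, jacobian_g, R_t, n_t):
    """One EKF-SLAM step, mirroring the update equations in the text.

    h: motion model, g: measurement model for landmark n_t,
    jacobian_g: returns G_x evaluated at the predicted mean.
    """
    # Prediction (motion update)
    mu_bar = h(mu_prev, u_t)                      # mu_t^-  = h(mu_{t-1}, u_t)
    sigma_bar = sigma_prev + P_t                  # Sigma_t^- = Sigma_{t-1} + P_t

    # Linearize the measurement model at the predicted mean
    G_x = jacobian_g(mu_bar, n_t)
    z_hat = g(mu_bar, n_t)                        # predicted observation
    Z_t = G_x @ sigma_bar @ G_x.T + R_t           # innovation covariance
    K_t = sigma_bar @ G_x.T @ np.linalg.inv(Z_t)  # Kalman gain

    # Measurement update
    mu = mu_bar + K_t @ (z_t - z_hat)
    sigma = (np.eye(len(mu_bar)) - K_t @ G_x) @ sigma_bar
    return mu, sigma
```

The final line is the source of the quadratic cost discussed below: $K_t G_x$ and the product with $\Sigma_t^-$ touch the full $(2N+3) \times (2N+3)$ covariance.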
For a complete derivation of the Kalman Filter, see $[46,31,87]$, and for a gentle introduction to the use of the Kalman Filter and the EKF, see $[95]$. It is important to note that if the SLAM problem is linear and Gaussian, then the Kalman Filter is both guaranteed to converge [69] and provably optimal [3]. Real-world SLAM problems are rarely linear, yet the EKF still tends to produce very good results in general. For this reason, the EKF is often held up as the “gold standard” of comparison for online SLAM algorithms.
The EKF has two substantial disadvantages when applied to the SLAM problem: quadratic complexity and sensitivity to failures in data association. The number of mathematical operations required to incorporate a control and an observation into the filter is dominated by the final EKF update equation (2.26). In the planar case, both $K_{t}$ and $G_{x}^{T}$ are of dimension $2N+3$ by the dimensionality of the observation (typically two). Thus, the inner product in the calculation of $\Sigma_{t}$ requires a number of calculations quadratic in the number of landmarks $N$.

The second problem with the EKF applies in situations in which the data associations $n_{t}$ are unknown. The EKF maintains a single data association hypothesis per observation, typically chosen using a maximum likelihood heuristic. If the probability of an observation coming from any of the current landmarks is too low, the possibility of a new landmark is considered. If the data association chosen by this heuristic is incorrect, the effect of incorporating this observation into the EKF can never be removed. If many observations are incorporated into the EKF with wrong data associations, the EKF will diverge. This is a well-known failure mode of the EKF [18]. The following sections will describe alternative approaches to SLAM that address the issues of efficient scaling and robust data association.
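The maximum likelihood heuristic described above can be sketched as follows (a hypothetical illustration; the chi-square-style gating threshold is a common engineering choice, not something specified in the text):

```python
import numpy as np

def ml_data_association(z, predictions, new_landmark_threshold=9.0):
    """Pick the landmark whose predicted observation best explains z.

    predictions: list of (z_hat, Z) pairs, one per known landmark, where
    Z is that landmark's innovation covariance. Returns the index of the
    most likely landmark, or None to signal that a new landmark should
    be created.
    """
    best_index, best_dist = None, float("inf")
    for i, (z_hat, Z) in enumerate(predictions):
        innov = z - z_hat
        # Squared Mahalanobis distance; for a fixed Z it is monotone in
        # the negative log-likelihood of the observation.
        d2 = float(innov @ np.linalg.inv(Z) @ innov)
        if d2 < best_dist:
            best_index, best_dist = i, d2
    if best_dist > new_landmark_threshold:
        return None  # too unlikely to match any landmark: hypothesize a new one
    return best_index
```

The fragility the text describes lives in the single returned index: once that choice is folded into the filter, there is no mechanism to revise it.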

Submap Methods

While the Kalman Filter is the optimal solution to the linear-Gaussian SLAM problem, it is computationally infeasible for large maps. As a result, a great deal of SLAM research has concentrated on developing SLAM algorithms that approximate the performance of the EKF, but scale to much larger environments. The computational complexity of the EKF stems from the fact that the covariance matrix $\Sigma_{t}$ represents every pairwise correlation between the state variables. Incorporating an observation of a single landmark will necessarily have an effect on every other state variable.

Typically, the observation of a single landmark will have only a weak effect on the positions of distant landmarks. For this reason, many researchers have developed EKF-based SLAM algorithms that decompose the global map into smaller submaps. One set of approaches exploits the fact that the robot may linger for some period of time in a small section of the global map. Postponement $[10,47]$ and the Compressed Extended Kalman Filter (CEKF) [36] are both techniques that delay the incorporation of local information into the global map while the robot stays inside a single submap. These techniques are still optimal, in that they generate the same results as the full EKF. However, the computation required by the two algorithms is reduced by a constant factor because the full map updates are performed less frequently.

Breaking the global map into submaps can also lead to a more sparse description of the correlations between map elements. Increased sparsity can be exploited to compute more efficient sensor updates. Network Coupled Feature Maps [1], ATLAS [5], the Local Mapping Algorithm [7], and the Decoupled Stochastic Mapping [49] frameworks all consider relationships between a sparse network of submaps. When the robot moves out of one submap, it either creates a new submap or relocates itself in a previously defined submap. Each approach reduces the computational requirement of incorporating an observation to constant time, given known data association. However, these computational gains come at the cost of slowing down the overall rate of convergence. Each map has far fewer features than the overall map would have, and the effects of observations on distant landmarks may have to percolate through multiple correlation links.

Guivant and Nebot presented a similar method called Suboptimal SLAM $[36]$, in which the local maps are all computed with respect to a small number of base landmarks. Since the different constellations of landmarks are kept in different coordinate frames, they can be decorrelated more easily than if every landmark were in a single coordinate frame. The resulting algorithm produces an estimate that is an approximation to the true EKF estimate; however, it requires only linear time and memory.

Sparse Extended Information Filters

Another filter approach to decomposing the SLAM problem is to represent maps using potential functions between nearby landmarks, similar to Markov Random Fields [6]. One such approach is the Sparse Extended Information Filter (SEIF) proposed in [90]. SEIFs implement an alternate parameterization of the Kalman Filter, called the Information Filter. Instead of updating a covariance matrix $\Sigma$, SEIFs update $\Sigma^{-1}$, the precision matrix. This parameterization is useful because the precision matrix is sparse if correlations are maintained only between nearby landmarks. Under appropriate approximations, this technique has been shown to provide efficient updates (given known data association) with a linear memory requirement.
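The parameterization shift that SEIFs exploit can be illustrated with a minimal information-form update (my own sketch; the actual SEIF sparsification strategy is not shown). For a linear-Gaussian measurement, the update is additive in $\Sigma^{-1}$ and touches only the state variables the measurement involves:

```python
import numpy as np

# The information form stores Omega = Sigma^{-1} (the precision matrix)
# and xi = Omega @ mu (the information vector).

def information_update(omega, xi, H, z, R):
    """Fold a linear-Gaussian measurement z = H x + noise (covariance R)
    into the information-form belief. The update is purely additive, and
    H.T @ R_inv @ H is nonzero only in the rows/columns of the variables
    the measurement involves -- which is why sparsity pays off."""
    R_inv = np.linalg.inv(R)
    omega_new = omega + H.T @ R_inv @ H  # precision grows additively
    xi_new = xi + H.T @ R_inv @ z
    return omega_new, xi_new

def recover_mean(omega, xi):
    """Recovering the mean requires solving Omega @ mu = xi."""
    return np.linalg.solve(omega, xi)
```

The trade-off is visible in `recover_mean`: updates are cheap and local, but reading out the map means solving a linear system, which is where techniques like the TJTF below differ.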

The Thin Junction Tree Filter (TJTF) of Paskin [73] is a SLAM algorithm based on the same principle as the SEIF. Namely, maintaining a sparse network of probabilistic constraints between state variables enables efficient inference. The TJTF represents the SLAM posterior using a graphical model called a junction tree. The size of the junction tree grows as new landmarks are added to the map, but it can be "thinned" using an operation called variable contraction. The thinning operation can be viewed as a method for making the precision matrix of a SEIF sparse; however, global maps can be extracted from TJTFs without any matrix inversion. TJTFs require linear computation in general, which can be reduced to constant time with further approximation.
