This paper presents a performance-enhanced Risk-Aware Model Predictive Path Integral (RA‑MPPI) controller that uses Gaussian process (GP) regression. To this end, the real system is driven with arbitrary control inputs and the resulting data are recorded. From the nominal system model and the collected data, the uncertainty affecting the state transition is estimated offline by a Jacobian-based method. The applied control inputs and the estimated uncertainty are then used to train a Gaussian process that outputs the mean and variance of the uncertainty. Online, the trained GP compensates the gap between the nominal and real dynamics: the nominal model is refined with the estimated noise mean, while the estimated noise variance is incorporated directly into the RA‑MPPI framework, eliminating the need to manually tune the disturbance covariance parameter. Validation is conducted through simulation experiments in Gazebo using an F1TENTH car with a bicycle model navigating a track with and without obstacles. Real‑world performance is further demonstrated on the F1TENTH car platform driving the same track under similar conditions. Across both simulation and real‑world scenarios, the proposed method improves safety and trajectory-tracking performance over baseline MPPI and RA‑MPPI, as measured by lap times and the number of crash‑free lap completions. Videos of the F1TENTH car simulations and experiments, as well as analogous Gazebo simulation results using a Clearpath Jackal with a differential-drive model, are available at the project webpage: https://gp-ramppi.github.io.
Consider the discrete-time system $$ x_{k+1} = f(x_k, u_{\mathrm{act}}), $$ where $x_k \in \mathbb{R}^{n_x}$ is the state at time step $k$, $u_{\mathrm{act}} \in \mathbb{R}^{n_u}$ is the actual control input, and $f$ is a sufficiently smooth function describing the known nominal model. The goal is to identify $u_{\mathrm{act}}$ by solving $$ x_{k+1} - f(x_k, u) = 0 $$ for $u$.
Define the residual function $$ \mathbf{r}(u) = x_{k+1} - f(x_k, u), $$ so that we seek $u^* = u_{\mathrm{act}}$ satisfying $\mathbf{r}(u^*) = 0$.
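As a concrete illustration, the sketch below implements a nominal model $f$ and the residual $\mathbf{r}$. The Euler-discretized kinematic bicycle dynamics, the state and input layout, and the parameter values are assumptions made for this example, not the exact model used in the experiments.

```python
import numpy as np

# Illustrative nominal model: Euler-discretized kinematic bicycle,
# state x = [px, py, yaw, v], input u = [accel, steer].
# Dynamics and parameters are assumptions for this sketch, not the
# exact model used in the paper.
DT = 0.02         # integration step [s] (assumed)
WHEELBASE = 0.33  # wheelbase [m] (F1TENTH-scale value, assumed)

def f(x, u):
    """Nominal model x_{k+1} = f(x_k, u)."""
    px, py, yaw, v = x
    accel, steer = u
    return np.array([
        px + DT * v * np.cos(yaw),
        py + DT * v * np.sin(yaw),
        yaw + DT * v * np.tan(steer) / WHEELBASE,
        v + DT * accel,
    ])

def residual(u, x_k, x_next):
    """r(u) = x_{k+1} - f(x_k, u); zero exactly when u = u_act."""
    return x_next - f(x_k, u)
```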
The iterative update rule is $$ u^{(i+1)} = u^{(i)} + \alpha J^{\dagger}(u^{(i)})\, \mathbf{r}(u^{(i)}), $$ where $J(u) = \partial f(x_k, u) / \partial u$, $J^{\dagger}$ is the Moore–Penrose pseudoinverse, and $\alpha > 0$ is a step size.
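A minimal NumPy sketch of this update rule follows, using a central-difference approximation of $J$ and `np.linalg.pinv` for the pseudoinverse; the helper names, tolerances, and iteration cap are illustrative choices.

```python
def jacobian_u(x_k, u, eps=1e-6):
    """Central-difference approximation of J = df/du at (x_k, u)."""
    J = np.zeros((x_k.size, u.size))
    for j in range(u.size):
        du = np.zeros(u.size)
        du[j] = eps
        J[:, j] = (f(x_k, u + du) - f(x_k, u - du)) / (2.0 * eps)
    return J

def identify_input(x_k, x_next, u0, alpha=1.0, tol=1e-10, max_iter=50):
    """Iterate u <- u + alpha * pinv(J) @ r(u) until the residual is small."""
    u = u0.astype(float).copy()
    for _ in range(max_iter):
        r = residual(u, x_k, x_next)
        if np.linalg.norm(r) < tol:
            break
        u = u + alpha * np.linalg.pinv(jacobian_u(x_k, u)) @ r
    return u
```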
Theorem: Under standard smoothness conditions, and for an initial guess sufficiently close to $u^*$, the iteration converges locally to $u^*$ if $$ \| I_{n_u} - \alpha J^{\dagger}(u^*) J(u^*) \| < 1. $$
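Under the assumed bicycle model above, the condition can be checked numerically at the recovered $u^*$; the transition and step size below are arbitrary test values.

```python
# Recover an "actual" input from a single transition and check the
# contraction condition at the solution (test values are arbitrary).
x_k = np.array([0.0, 0.0, 0.1, 2.0])
u_true = np.array([0.5, 0.05])
x_next = f(x_k, u_true)
alpha = 0.8

u_star = identify_input(x_k, x_next, u0=np.zeros(2), alpha=alpha)
J = jacobian_u(x_k, u_star)
M = np.eye(2) - alpha * np.linalg.pinv(J) @ J
print(np.allclose(u_star, u_true, atol=1e-6))  # True
print(np.linalg.norm(M, 2) < 1.0)              # True: norm is |1-alpha| here
```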
The proof proceeds via a fixed-point contraction argument.
Define $$ \mathbf{G}(u) = u + \alpha J^{\dagger}(u)[x_{k+1} - f(x_k,u)], $$ so a solution satisfies $\mathbf{G}(u^*) = u^*$.
Differentiating $\mathbf{G}$ and using $\partial \mathbf{r} / \partial u = -J$, every term involving the derivative of $J^{\dagger}$ is multiplied by $\mathbf{r}(u)$; since $\mathbf{r}(u^*) = 0$, these terms vanish at the fixed point, leaving $$ D\mathbf{G}(u^*) = I_{n_u} - \alpha J^{\dagger}(u^*) J(u^*). $$ If $$ \| I_{n_u} - \alpha J^{\dagger}(u^*) J(u^*) \| < 1, $$ then $\mathbf{G}$ is a local contraction, and by the Banach fixed-point theorem the iteration converges to $u^*$ from any sufficiently close initial guess.
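This closed form can be sanity-checked by differentiating $\mathbf{G}$ numerically at $u^*$, continuing the toy example above (`G` is an illustrative helper, not part of the paper's method).

```python
def G(u, x_k, x_next, alpha):
    """Fixed-point map G(u) = u + alpha * pinv(J(u)) @ r(u)."""
    Ju = jacobian_u(x_k, u)
    return u + alpha * np.linalg.pinv(Ju) @ residual(u, x_k, x_next)

# Central-difference Jacobian of G at u*, compared with the closed form.
eps = 1e-6
DG = np.zeros((2, 2))
for j in range(2):
    du = np.zeros(2)
    du[j] = eps
    DG[:, j] = (G(u_star + du, x_k, x_next, alpha)
                - G(u_star - du, x_k, x_next, alpha)) / (2.0 * eps)
print(np.allclose(DG, np.eye(2) - alpha * np.linalg.pinv(J) @ J, atol=1e-4))
```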
If $J(u^*)$ has full column rank, then $$ J^{\dagger}(u^*) J(u^*) = I_{n_u}, $$ and the condition reduces to $$ \| (1-\alpha)I_{n_u} \| = |1-\alpha| < 1, $$ which holds for any $0 < \alpha < 2$.
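In the toy bicycle example, $J$ is $4 \times 2$ with full column rank (for nonzero speed), so this left-inverse property can be verified directly:

```python
# Full column rank: pinv(J) is a left inverse, so pinv(J) @ J = I_2.
print(np.allclose(np.linalg.pinv(J) @ J, np.eye(2)))  # True
```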
The iteration still converges when the Jacobian has full column rank (overdetermined case) or full row rank (underdetermined case): the pseudoinverse then yields the least-squares or minimum-norm update, respectively.
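A linear toy problem illustrates the underdetermined case: starting from $u^{(0)} = 0$, the iterates remain in the row space of $J$ and converge to the minimum-norm solution. The matrix and vector below are arbitrary illustrative values.

```python
# Underdetermined linear toy case: n_x = 1 < n_u = 2, full row rank.
# From u0 = 0 the update stays in the row space of J, so the iterates
# converge to the minimum-norm solution pinv(J) @ b for 0 < alpha < 2.
J_wide = np.array([[1.0, 2.0]])
b = np.array([3.0])
u = np.zeros(2)
for _ in range(100):
    u = u + 0.8 * np.linalg.pinv(J_wide) @ (b - J_wide @ u)
print(np.allclose(u, np.linalg.pinv(J_wide) @ b))  # True: u = [0.6, 1.2]
```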