“Matrix Cookbook”</a></p>
<pre><code>%matplotlib inline
import numpy as np
from scipy.linalg import eigh
from scipy.stats import multivariate_normal
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from sklearn.datasets import make_spd_matrix

plt.ion()

def compute_ellipse(cov):
    # Return three standard deviations along each principal axis of the
    # covariance, and the orientation angle in degrees.
    w, u = eigh(cov)
    std = np.sqrt(w)
    angle = np.arctan2(u[1, 0], u[0, 0])
    return 3 * std[0], 3 * std[1], angle * 180. / np.pi

def plot_gaussian(mu, cov):
    """Plot the mean and covariance of a 2D Gaussian distribution.

    Parameters
    ----------
    mu : array_like, shape (2,)
        Mean vector of the distribution.
    cov : array_like, shape (2, 2)
        Covariance matrix of the distribution.
    """
    # Plot the mean vector.
    sc = plt.scatter(mu[0], mu[1], label='Mean')
    width, height, angle = compute_ellipse(cov[:2, :2])
    # Plot the covariance matrix as an ellipse.
    ellipse = Ellipse(xy=mu, width=width, height=height, angle=angle,
                      alpha=0.2, label='Covariance')
    plt.gca().add_artist(ellipse)
    return sc, ellipse

C = make_spd_matrix(2, random_state=1)
mu = np.array([1, 2])

X = multivariate_normal(mu, C).rvs(1000)

plt.scatter(*X.T, s=1)
plot_gaussian(mu, C);
</code></pre>
<h3>Joint distribution</h3>
<p>Let us split our Gaussian-distributed state into two sub-parts; the joint distribution of the two sub-parts is:</p>
<p>\[ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},\; \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right) \]</p>
<p>with \( \Sigma_{21} = \Sigma_{12}^\top \).</p>
<h3>Marginalization</h3>
<p>\[ x_1 \sim \mathcal{N}(\mu_1, \Sigma_{11}) \]</p>
<h3>Conditioning</h3>
<p>If the value of \( x_2 \) is known, then:</p>
<p>\[ x_1 \mid x_2 \sim \mathcal{N}\!\left( \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \right) \]</p>
<h3>Linear combination</h3>
<p>If we have two <strong>independent</strong> random variables:</p>
<p>\[ x \sim \mathcal{N}(\mu_x, \Sigma_x) \]</p>
<p>\[ y \sim \mathcal{N}(\mu_y, \Sigma_y) \]</p>
<p>then a linear combination of the two is distributed as:</p>
<p>\[ A x + B y \sim \mathcal{N}\!\left( A\mu_x + B\mu_y,\; A\Sigma_x A^\top + B\Sigma_y B^\top \right) \]</p>
<h2>Propagating the system dynamics</h2>
<p>Kalman filters model the system in discrete time, and the evolution between two time steps is given by the linear law:</p>
<p>\[ \hat{x}_{k|k-1} = F_k\,\hat{x}_{k-1|k-1} + G_k\,u_k + w_k \]</p>
<p>with:</p>
<ul>
<li>\( \hat{x}_{k-1|k-1} \): the estimate of the state at step \( k-1 \), after any observations at that step have been taken into account;</li>
<li>\( \hat{x}_{k|k-1} \): the estimate of the state at step \( k \), obtained by propagating the system dynamics from the previous estimate;</li>
<li>\( F_k \): the transition matrix, representing the “natural” evolution of the system (not necessarily constant);</li>
<li>\( u_k \): a command (or control) applied to the system;</li>
<li>\( G_k \): the effect of the command on the system (not necessarily constant);</li>
<li>\( w_k \): the process noise, i.e. everything we do not model well:</li>
</ul>
<p>\[ w_k \sim \mathcal{N}(0, Q_k) \]</p>
<h2>Example</h2>
<h3>A moving point in the plane</h3>
<p>Let us apply this to a point object moving without friction in a plane.</p>
<p>The state of the object at time \( t \) is encoded by its position and its velocity:</p>
<p>\[ \begin{pmatrix} p(t) \\ v(t) \end{pmatrix} \]</p>
<p>The object is controlled through the equivalent of a thrust:</p>
<p>\[ a(t) \]</p>
<p>We can therefore apply Newton's equations of motion to model the evolution of the object between two nearby instants. If we assume the acceleration is constant between \( t \) and \( t + \Delta t \), the velocity is given by:</p>
<p>\[ v(t + \Delta t) = v(t) + a(t)\,\Delta t \]</p>
<p>That is, the initial velocity plus the change in velocity due to the thrust.</p>
<p>The displacement can then be computed as:</p>
<p>\[ p(t + \Delta t) = p(t) + v(t)\,\Delta t + \tfrac{1}{2}\,a(t)\,\Delta t^2 \]</p>
<p>Here we recognize the position at the previous instant, plus the change in position due to the initial velocity, plus the change in position due to the thrust.</p>
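<p>These two kinematic updates are exactly what the matrices \( F \) and \( G \) used below encode. A minimal numerical check (the step size and the toy numbers are my own choices):</p>

```python
import numpy as np

delta_t = 0.1
p0, v0 = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # initial position and velocity
a = np.array([0.0, 0.4])                               # constant thrust over the step

# Newton's equations for one step of length delta_t
v1 = v0 + a * delta_t
p1 = p0 + v0 * delta_t + 0.5 * a * delta_t**2

# The same step written in the F / G matrix form used below,
# acting on the state vector (px, py, vx, vy)
F = np.identity(4)
F[0, 2] = F[1, 3] = delta_t
G = np.zeros((4, 2))
G[0, 0] = G[1, 1] = 0.5 * delta_t**2
G[2, 0] = G[3, 1] = delta_t

state1 = F @ np.hstack([p0, v0]) + G @ a
assert np.allclose(state1, np.hstack([p1, v1]))
```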
<h3>Writing the Kalman filter propagation step</h3>
<p>To fit the Kalman filter formalism, I assume that the state of the system at time step \( k \) is given by its position and its velocity:</p>
<p>\[ x_k = \begin{pmatrix} p_x & p_y & v_x & v_y \end{pmatrix}_k^\top \]</p>
<p>In the equations of motion, we can then separate the parts that depend on the previous state (position and velocity) from the parts that depend on the control (here, the acceleration), and thus build the different terms of the propagation step:</p>
<p>\[ F = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]</p>
<p>The control here corresponds to the thrust:</p>
<p>\[ u_k = \begin{pmatrix} a_x \\ a_y \end{pmatrix}_k \]</p>
<p>And the evolution due to the control is given by:</p>
<p>\[ G = \begin{pmatrix} \tfrac{1}{2}\Delta t^2 & 0 \\ 0 & \tfrac{1}{2}\Delta t^2 \\ \Delta t & 0 \\ 0 & \Delta t \end{pmatrix} \]</p>
<p>Let us simulate such a system:</p>
<pre><code>delta_t = 0.1
t = np.arange(0, 10., delta_t)

# The acceleration is changed three times, for 10 time steps each,
# in different directions
acceleration = np.zeros((len(t), 2))
acceleration[10:20] = (0, 0.4)
acceleration[30:40] = (0, -0.6)
acceleration[50:60] = (0.1, 0.3)

# State evolution matrix
F = np.identity(4)
F[0, 2] = delta_t
F[1, 3] = delta_t

# Matrix giving the effect of the control on the system
G = np.zeros((4, 2))
G[0, 0] = G[1, 1] = 0.5 * delta_t**2
G[2, 0] = G[3, 1] = delta_t

# Estimate of the noise of the evolution process
Q = np.diag([0.001, 0.001, 0.002, 0.002]) ** 2

# Initial state
state_actual = np.zeros((len(t), 4))
state_actual[0] = (0, 0, 0.1, 0)

state_mean = np.zeros((len(t), 4))
state_mean[0] = (0, 0, 0.1, 0)

# Time evolution
np.random.seed(0)
for i in range(1, len(t)):
    state_actual[i] = multivariate_normal(F @ state_actual[i - 1] + G @ acceleration[i], Q).rvs()
    state_mean[i] = F @ state_mean[i - 1] + G @ acceleration[i]
</code></pre>
<pre><code>from itertools import chain
from IPython.display import HTML
from matplotlib.animation import FuncAnimation

class AnimResult:
    def __init__(self, state_actual, acceleration, state_mean, covariance=None, observations=None):
        self.state_actual = state_actual
        self.acceleration = acceleration
        self.state_mean = state_mean
        self.covariance = covariance
        self.observations = observations

    def plot_init(self):
        plt.plot(*self.state_mean.T[:2], "--", color="tab:blue", label="Estimated trajectory")
        plt.plot(*self.state_actual.T[:2], "-", color="tab:orange", label="Simulated trajectory")
        if self.observations is not None:
            self.plot_observations()

    def plot_acceleration(self):
        if self.acceleration is None:
            self.accel_arrow = None
        else:
            self.accel_arrow = plt.arrow(
                *chain(self.state_mean[0, :2],
                       0.5 * self.acceleration[0]),
                width=0.01,
                visible=np.any(self.acceleration[0] != 0)
            )

    def plot_estimate(self):
        self.actual_scatter = plt.scatter(*self.state_actual[[0], :2].T, color="tab:orange", label="Simulated position")

        if self.covariance is None:
            self.mean_scatter = plt.scatter(*self.state_mean[[0], :2].T, color="tab:blue", label="Estimated position")
            self.cov_ellipse = None
        else:
            self.mean_scatter, self.cov_ellipse = plot_gaussian(self.state_mean[0, :2], self.covariance[0, :2, :2])

    def update(self, i):
        self.actual_scatter.set_offsets(self.state_actual[[i], :2])
        self.mean_scatter.set_offsets(self.state_mean[[i], :2])

        if self.cov_ellipse is not None:
            width, height, angle = compute_ellipse(self.covariance[i, :2, :2])
            self.cov_ellipse.set_center(self.state_mean[i, :2])
            self.cov_ellipse.set_width(width)
            self.cov_ellipse.set_height(height)
            self.cov_ellipse.set_angle(angle)

        if self.acceleration is not None:
            self.accel_arrow.set_data(
                x=self.state_mean[i, 0],
                y=self.state_mean[i, 1],
                dx=0.5 * self.acceleration[i, 0],
                dy=0.5 * self.acceleration[i, 1]
            )
            self.accel_arrow.set_visible(np.any(self.acceleration[i] != 0))

        if self.observations is not None:
            self.update_observations(i)

    def plot_observations(self):
        pass

    def update_observations(self, i):
        pass

    def animate(self):
        self.plot_init()
        self.plot_estimate()
        plt.legend()
        self.plot_acceleration()

        ani = FuncAnimation(
            plt.gcf(), self.update, frames=np.arange(len(t)), interval=100
        )
        return HTML(ani.to_html5_video())
</code></pre>
<pre><code>plt.ioff()
plt.figure()
plt.gca().set_aspect("equal")
AnimResult(state_actual=state_actual, acceleration=acceleration, state_mean=state_mean).animate()
</code></pre>
<p><em>(Animation: simulated trajectory and dead-reckoning estimate, with the thrust shown as an arrow.)</em></p>
<h2>Propagating the uncertainty</h2>
<p>Since we only have normal distributions and linear transformations, the error can be propagated as well. Writing \( P \) for the estimate of the state covariance:</p>
<p>\[ P_{k|k-1} = F_k\,P_{k-1|k-1}\,F_k^\top + Q_k \]</p>
<p>since:</p>
<p>\[ \operatorname{Cov}\!\left( F_k\,x + w_k \right) = F_k \operatorname{Cov}(x)\,F_k^\top + Q_k \]</p>
<pre><code># Initial state
state_actual = np.zeros((len(t), 4))
state_actual[0] = (0, 0, 0.1, 0)

state_mean = np.zeros((len(t), 4))
state_mean[0] = (0, 0, 0.1, 0)

state_cov = np.zeros((len(t), 4, 4))
state_cov[0] = Q * 5 ** 2

# Time evolution
np.random.seed(0)
for i in range(1, len(t)):
    state_actual[i] = multivariate_normal(F @ state_actual[i - 1] + G @ acceleration[i], Q).rvs()
    state_mean[i] = F @ state_mean[i - 1] + G @ acceleration[i]
    state_cov[i] = F @ state_cov[i - 1] @ F.T + Q

plt.ioff()
plt.figure()
plt.gca().set_aspect("equal")
AnimResult(state_actual=state_actual, acceleration=acceleration, state_mean=state_mean, covariance=state_cov).animate()
</code></pre>
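<p>The identity \( P' = F P F^\top + Q \) can also be checked by brute force: propagate many samples through the dynamics and compare their empirical covariance with the closed form. A sketch (the sample count and the initial covariance are my own choices):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
delta_t = 0.1

F = np.identity(4)
F[0, 2] = F[1, 3] = delta_t
Q = np.diag([0.001, 0.001, 0.002, 0.002]) ** 2

P = np.diag([0.05, 0.05, 0.02, 0.02]) ** 2          # some initial covariance
samples = rng.multivariate_normal(np.zeros(4), P, size=200_000)

# Propagate every sample through the dynamics, with process noise added
propagated = samples @ F.T + rng.multivariate_normal(np.zeros(4), Q, size=len(samples))

P_pred = F @ P @ F.T + Q                            # closed-form propagation
P_mc = np.cov(propagated.T)                         # Monte Carlo estimate

assert np.allclose(P_mc, P_pred, atol=1e-4)
```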
<p><em>(Animation: the position covariance ellipse grows as the state is propagated without observations.)</em></p>
<p>Here we see the estimated motion of the system, knowing only the initial state and the controls applied to it. The uncertainty on the state (here, only the position uncertainty is shown) grows continuously over time.</p>
<p>To reduce it, we can resort to observations.</p>
<h2>Observations</h2>
<h3>Theory</h3>
<p>An observation is modeled by:</p>
<p>\[ z_k = H_k\,x_k + v_k \]</p>
<p>with:</p>
<ul>
<li>\( z_k \): the measurement produced by the observation;</li>
<li>\( H_k \): the matrix modeling the observation process;</li>
<li>\( v_k \): the observation noise.</li>
</ul>
<p>\[ v_k \sim \mathcal{N}(0, R_k) \]</p>
<p>To compute the information gained from an observation, we start from the joint distribution of the current state and the observation result:</p>
<p>\[ \begin{pmatrix} x_k \\ z_k \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \hat{x}_{k|k-1} \\ H_k\,\hat{x}_{k|k-1} \end{pmatrix},\; \begin{pmatrix} P_{k|k-1} & P_{k|k-1} H_k^\top \\ H_k P_{k|k-1} & H_k P_{k|k-1} H_k^\top + R_k \end{pmatrix} \right) \]</p>
<p>We can then apply the multivariate-Gaussian formulas recalled above:</p>
<p>\[ \hat{z}_k = H_k\,\hat{x}_{k|k-1} \]</p>
<p>\[ \Sigma_{xz} = P_{k|k-1}\,H_k^\top \]</p>
<p>\[ \Sigma_{zz} = H_k\,P_{k|k-1}\,H_k^\top + R_k \]</p>
<p>That is, assuming we have observed \( z_k \):</p>
<p>\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + \Sigma_{xz}\,\Sigma_{zz}^{-1}\,(z_k - \hat{z}_k) \]</p>
<p>\[ P_{k|k} = P_{k|k-1} - \Sigma_{xz}\,\Sigma_{zz}^{-1}\,\Sigma_{xz}^\top \]</p>
<p>Here we simulate an observation that measures the position and the velocity along the \( x \) axis only:</p>
<p>\[ H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \]</p>
<p>which gives:</p>
<pre><code>from scipy.stats import multivariate_normal

# Initial state
state_actual = np.zeros((len(t), 4))
state_actual[0] = (0, 0, 0.1, 0)

state_mean = np.zeros((len(t), 4))
state_mean[0] = (0, 0, 0.1, 0)

state_cov = np.zeros((len(t), 4, 4))
state_cov[0] = Q * 5 ** 2

# Transformation leading to the observation
H = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0]
])

# Observation noise
R = np.diag([0.01 ** 2, 0.1 ** 2])

observation_times = [40, 60]
observation_results = {}

# Time evolution
np.random.seed(0)
for i in range(1, len(t)):
    state_actual[i] = multivariate_normal(F @ state_actual[i - 1] + G @ acceleration[i], Q).rvs()
    state_mean[i] = F @ state_mean[i - 1] + G @ acceleration[i]
    state_cov[i] = F @ state_cov[i - 1] @ F.T + Q

    if i in observation_times:
        # Observation process
        observation_results[i] = obs_i = multivariate_normal(H @ state_actual[i], R).rvs()

        # Update the estimates
        z_i = H @ state_mean[i]
        cov_state_obs = state_cov[i] @ H.T
        cov_obs_obs = H @ state_cov[i] @ H.T + R

        state_mean[i] += cov_state_obs @ np.linalg.solve(cov_obs_obs, obs_i - z_i)
        state_cov[i] -= cov_state_obs @ np.linalg.solve(cov_obs_obs, cov_state_obs.T)

class AnimLinearObs(AnimResult):
    def plot_observations(self):
        ymin, ymax = plt.ylim()
        dx = 3 * np.sqrt(R[0, 0])
        self.observation_lines = {}
        self.observation_fill = {}

        for i, obs_i in self.observations.items():
            xi = obs_i[0]
            self.observation_lines[i] = plt.vlines(xi, ymin, ymax, color="tab:green", linestyle="--", alpha=0.1)
            self.observation_fill[i] = plt.fill_between([xi - dx, xi + dx], [ymin, ymin], [ymax, ymax], color="lightgreen", alpha=0.1)
        plt.ylim(ymin, ymax)

    def update_observations(self, i):
        for iobs, line in self.observation_lines.items():
            line.set_alpha(0.8 if i == iobs else 0.1)
            self.observation_fill[iobs].set_alpha(0.8 if i == iobs else 0.1)

plt.ioff()
plt.figure()
plt.gca().set_aspect("equal")
AnimLinearObs(state_actual, acceleration, state_mean, state_cov, observation_results).animate()
</code></pre>
<p><em>(Animation: the covariance ellipse contracts when an observation arrives.)</em></p>
<p>We can see the uncertainty contract along the observed \( x \) components.</p>
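<p>The update inside the loop above is just the Gaussian conditioning formula from the beginning of the post. It can be packaged as a small standalone helper (a sketch; the function name and the toy 1D-observation numbers are mine, not from the post):</p>

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update, as in the loop above."""
    S = H @ P @ H.T + R              # innovation covariance
    C = P @ H.T                      # state/observation cross-covariance
    K = np.linalg.solve(S, C.T).T    # gain, equivalent to C @ inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = P - K @ C.T
    return x_new, P_new

# Toy example: 2D state (position, velocity), observing position only
x = np.array([0.0, 1.0])
P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])

x_new, P_new = kalman_update(x, P, np.array([0.5]), H, R)
assert P_new[0, 0] < P[0, 0]         # position uncertainty shrinks
```

Since the velocity is unobserved and initially uncorrelated with the position, its estimate and variance are left untouched by this update.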
<h2>Extended Kalman filters</h2>
<p>If the evolution and observation laws are no longer linear, the Kalman filter can still be applied by linearizing it locally:</p>
<p>\[ x_k = f(x_{k-1}, u_k) + w_k \]</p>
<p>\[ z_k = h(x_k) + v_k \]</p>
<p>It then suffices to replace the covariance-update formulas, using the Jacobians of the functions \( f \) and \( h \):</p>
<p>\[ F_k = \left. \frac{\partial f}{\partial x} \right|_{\hat{x}_{k-1|k-1},\, u_k} \]</p>
<p>\[ H_k = \left. \frac{\partial h}{\partial x} \right|_{\hat{x}_{k|k-1}} \]</p>
<p>Here, we simulate an observation that measures the distance and the bearing angle from the origin of the frame:</p>
<p>\[ h(x_k) = \begin{pmatrix} \sqrt{p_x^2 + p_y^2} \\ \operatorname{atan2}(p_y, p_x) \end{pmatrix} \]</p>
<p>The motion law is unchanged, so its Jacobian is identical to the matrix \( F \) given earlier:</p>
<pre><code>import jax
import jax.numpy as jnp

def movement(state, control):
    return F @ state + G @ control

def observation(state):
    x, y = state[:2]
    r = jnp.sqrt(x ** 2 + y ** 2)
    angle = jnp.arctan2(y, x)
    return jnp.array([r, angle * 180 / np.pi])

R = (np.identity(2) * (0.025, 0.5)) ** 2

# Initial state
state_actual = np.zeros((len(t), 4))
state_actual[0] = (0, 0, 0.1, 0)

state_mean = np.zeros((len(t), 4))
state_mean[0] = (0, 0, 0.1, 0)

state_cov = np.zeros((len(t), 4, 4))
state_cov[0] = Q * 5 ** 2

observation_results = {}

# Time evolution
np.random.seed(0)
for i in range(1, len(t)):
    state_actual[i] = multivariate_normal(movement(state_actual[i - 1], acceleration[i]), Q).rvs()
    state_mean[i] = movement(state_mean[i - 1], acceleration[i])
    F_i = jax.jacobian(movement)(state_mean[i - 1], acceleration[i])

    state_cov[i] = F_i @ state_cov[i - 1] @ F_i.T + Q

    if i in observation_times:
        # Observation process
        observation_results[i] = obs_i = multivariate_normal(observation(state_actual[i]), R).rvs()

        # Update the estimates
        z_i = observation(state_mean[i])
        H_i = jax.jacobian(observation)(state_mean[i])
        cov_state_obs = state_cov[i] @ H_i.T
        cov_obs_obs = H_i @ state_cov[i] @ H_i.T + R

        state_mean[i] += cov_state_obs @ np.linalg.solve(cov_obs_obs, obs_i - z_i)
        state_cov[i] -= cov_state_obs @ np.linalg.solve(cov_obs_obs, cov_state_obs.T)

def draw_polar_section(r, angle, dr, dangle, circle_grid=10):
    x0 = r * np.cos(angle / 180. * np.pi)
    y0 = r * np.sin(angle / 180. * np.pi)

    rs = np.array([r - dr, r + dr] + [r + dr] * circle_grid + [r - dr] + [r - dr] * circle_grid)
    angles = np.array(
        [angle - dangle, angle - dangle] +
        [(angle - dangle) + (i + 1) / circle_grid * 2 * dangle for i in range(circle_grid)] +
        [angle + dangle] +
        [(angle + dangle) - (i + 1) / circle_grid * 2 * dangle for i in range(circle_grid)]
    )

    polygon_xs = rs * np.cos(angles / 180. * np.pi)
    polygon_ys = rs * np.sin(angles / 180. * np.pi)

    return plt.scatter([x0], [y0], c="tab:green", alpha=0.1), plt.fill(polygon_xs, polygon_ys, alpha=0.1, color="lightgreen")

class AnimNonLinearObs(AnimResult):
    def plot_observations(self):
        dr, dangle = 3 * np.sqrt(R.diagonal())

        self.observation_patches = {
            i: draw_polar_section(*obs_i, dr, dangle) for i, obs_i in self.observations.items()
        }

    def update_observations(self, i):
        for iobs, (obs_center, obs_poly) in self.observation_patches.items():
            alpha = 0.8 if i == iobs else 0.1
            obs_center.set_alpha(alpha)
            for p in obs_poly:
                p.set_alpha(alpha)

plt.ioff()
plt.figure()
plt.gca().set_aspect("equal")
AnimNonLinearObs(state_actual, acceleration, state_mean, state_cov, observation_results).animate()
</code></pre>
<p><em>(Animation: EKF estimate with range/bearing observations shown as polar sectors.)</em></p>
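<p>The linearization itself is easy to validate: the Jacobian of the range/bearing function \( h \) can be derived by hand and compared with a finite-difference estimate. A standalone sketch in plain NumPy rather than jax (the analytic formulas are the standard ones, not taken from the post):</p>

```python
import numpy as np

def observation(state):
    # Range and bearing (in degrees) of the position from the origin
    x, y = state[:2]
    r = np.sqrt(x**2 + y**2)
    return np.array([r, np.arctan2(y, x) * 180 / np.pi])

def observation_jacobian(state):
    # Analytic Jacobian of the observation with respect to the 4D state;
    # the velocity components do not enter h, so their columns are zero.
    x, y = state[:2]
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    H = np.zeros((2, 4))
    H[0, 0], H[0, 1] = x / r, y / r
    H[1, 0], H[1, 1] = -y / r2 * 180 / np.pi, x / r2 * 180 / np.pi
    return H

state = np.array([1.0, 2.0, 0.1, 0.0])
eps = 1e-6

# Central finite differences, one state component at a time
H_fd = np.column_stack([
    (observation(state + eps * dx) - observation(state - eps * dx)) / (2 * eps)
    for dx in np.identity(4)
])
assert np.allclose(H_fd, observation_jacobian(state), atol=1e-4)
```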
<h2>Unscented Kalman filters (<em>filtres de Kalman sans parfum</em>)</h2>
<h3>The unscented transform</h3>
<p>The idea of this transform is to estimate the effect of nonlinear transformations using well-chosen samples called “sigma points”.</p>
<p>These points are passed through the time-evolution and observation transformations, and the parameters of the resulting distribution are then estimated from the transformed points.</p>
<p>The idea is to build a set of pairs of points aligned with the principal directions of the Gaussian distribution. If we decompose the covariance as:</p>
<p>\[ \Sigma = L^\top L = \sum_{i=1}^{n} L_i^\top L_i \]</p>
<p>then the sigma points are built as follows:</p>
<p>\[ \chi_0 = \mu, \qquad \chi_{\pm i} = \mu \pm \sqrt{n + \lambda}\; L_i, \quad i = 1, \dots, n \]</p>
<p>A weight is associated with each point:</p>
<p>\[ w_0 = \frac{\lambda}{n + \lambda}, \qquad w_{\pm i} = \frac{1}{2(n + \lambda)} \]</p>
<p>where \( \lambda \) is a scale parameter.</p>
<pre><code>%matplotlib inline
import numpy as np
from scipy.linalg import eigh

def sigma_points(mu, sigma, lambda_):
    n_dims = len(mu)

    w, u = eigh(sigma)
    w = np.maximum(w, 0)
    L = (u * np.sqrt(w)).T

    X = np.vstack(
        [mu] +
        [mu + np.sqrt(n_dims + lambda_) * L[i] for i in range(n_dims)] +
        [mu - np.sqrt(n_dims + lambda_) * L[i] for i in range(n_dims)]
    )

    w = np.hstack([lambda_ / (n_dims + lambda_)] + [0.5 / (n_dims + lambda_)] * (2 * n_dims))

    return X, w

C = np.array([[1.0, 0.8], [0.8, 1]])
mu = np.array([1, 2])

X_sigma, w_sigma = sigma_points(mu, C, lambda_=1)

plt.ion()
plt.figure()
plot_gaussian(mu, C)
plt.scatter(*X_sigma.T, s=10);
</code></pre>
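<p>By construction, the weighted sigma points reproduce the mean and covariance of the original distribution exactly. A self-contained check (it duplicates `sigma_points` so the snippet runs on its own; the toy numbers are mine):</p>

```python
import numpy as np
from scipy.linalg import eigh

def sigma_points(mu, sigma, lambda_):
    # Same construction as above: the mean plus/minus scaled principal axes
    n_dims = len(mu)
    w, u = eigh(sigma)
    w = np.maximum(w, 0)
    L = (u * np.sqrt(w)).T
    X = np.vstack(
        [mu]
        + [mu + np.sqrt(n_dims + lambda_) * L[i] for i in range(n_dims)]
        + [mu - np.sqrt(n_dims + lambda_) * L[i] for i in range(n_dims)]
    )
    weights = np.hstack([lambda_ / (n_dims + lambda_)]
                        + [0.5 / (n_dims + lambda_)] * (2 * n_dims))
    return X, weights

C = np.array([[1.0, 0.8], [0.8, 1.0]])
mu = np.array([1.0, 2.0])
X, w = sigma_points(mu, C, lambda_=1)

mean_est = w @ X                                     # weighted mean
diff = X - mean_est
cov_est = np.einsum("i,ij,ik->jk", w, diff, diff)    # weighted covariance

assert np.allclose(mean_est, mu)
assert np.allclose(cov_est, C)
```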
<h3>Application to the Kalman filter</h3>
<p>The sigma points are passed through the time-evolution transformation and through the observation function, and the quantities of interest are then estimated from the weighted results:</p>
<p>\[ \chi'_i = f(\chi_i, u_k) \]</p>
<p>\[ \hat{x}_{k|k-1} = \sum_i w_i\, \chi'_i \]</p>
<p>\[ P_{k|k-1} = \sum_i w_i\, (\chi'_i - \hat{x}_{k|k-1})(\chi'_i - \hat{x}_{k|k-1})^\top + Q_k \]</p>
<p>\[ \zeta_i = h(\chi'_i) \]</p>
<p>\[ \hat{z}_k = \sum_i w_i\, \zeta_i \]</p>
<p>\[ \Sigma_{xz} = \sum_i w_i\, (\chi'_i - \hat{x}_{k|k-1})(\zeta_i - \hat{z}_k)^\top \]</p>
<p>\[ \Sigma_{zz} = \sum_i w_i\, (\zeta_i - \hat{z}_k)(\zeta_i - \hat{z}_k)^\top + R_k \]</p>
<p>\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + \Sigma_{xz}\,\Sigma_{zz}^{-1}\,(z_k - \hat{z}_k) \]</p>
<p>\[ P_{k|k} = P_{k|k-1} - \Sigma_{xz}\,\Sigma_{zz}^{-1}\,\Sigma_{xz}^\top \]</p>
<pre><code># Apply the transformations to a matrix of sigma points instead of a single vector
def movement(state, control):
    return state @ F.T + G @ control

def observation(state):
    x, y = state.T[:2]
    r = np.sqrt(x ** 2 + y ** 2)
    angle = np.arctan2(y, x)
    return np.vstack([r, angle * 180 / np.pi]).T

# Initial state
state_actual = np.zeros((len(t), 4))
state_actual[0] = (0, 0, 0.1, 0)

state_mean = np.zeros((len(t), 4))
state_mean[0] = (0, 0, 0.1, 0)

state_cov = np.zeros((len(t), 4, 4))
state_cov[0] = Q * 5 ** 2

observation_results = {}
lambda_ = 1e-1

# Time evolution
np.random.seed(0)
for i in range(1, len(t)):

    state_actual[i] = multivariate_normal(movement(state_actual[[i - 1]], acceleration[i])[0], Q).rvs()

    state_sigma, weight_sigma = sigma_points(state_mean[i - 1], state_cov[i - 1], lambda_)
    state_sigma_tr = movement(state_sigma, acceleration[i])

    state_mean[i] = np.einsum("ij, i -> j", state_sigma_tr, weight_sigma)
    state_diff = state_sigma_tr - state_mean[i]
    state_cov[i] = np.einsum("ij, ik, i -> jk", state_diff, state_diff, weight_sigma) + Q  # include the process noise

    if i in observation_times:
        # Observation process
        observation_results[i] = obs_i = multivariate_normal(observation(state_actual[[i]])[0], R).rvs()

        # Update the estimates
        z_sigma = observation(state_sigma_tr)
        z_i = np.einsum("ij, i -> j", z_sigma, weight_sigma)

        z_diff = z_sigma - z_i

        cov_state_obs = np.einsum("ij, ik, i -> jk", state_diff, z_diff, weight_sigma)
        cov_obs_obs = np.einsum("ij, ik, i -> jk", z_diff, z_diff, weight_sigma) + R  # include the observation noise

        state_mean[i] += cov_state_obs @ np.linalg.solve(cov_obs_obs, obs_i - z_i)
        state_cov[i] -= cov_state_obs @ np.linalg.solve(cov_obs_obs, cov_state_obs.T)

plt.ioff()
plt.figure()
plt.gca().set_aspect("equal")
AnimNonLinearObs(state_actual, acceleration, state_mean, state_cov, observation_results).animate()
</code></pre>
<p><em>(Animation: unscented Kalman filter estimate with range/bearing observations.)</em></p>