1. Fast SLAM

    This notebook looks at a technique for simultaneous localization (finding the position of a robot) and mapping (finding the positions of any obstacles), abbreviated as SLAM. In this model, the probability distribution for the robot's trajectory \(x_{1:t}\) is represented with a set of weighted particles. Let the weight …
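
    As a quick sketch of what that representation means (my own notation; I'm assuming \(z_{1:t}\) and \(u_{1:t}\) denote the observations and controls, which the excerpt doesn't spell out): with particle trajectories \(x_{1:t}^{(i)}\) and normalized weights \(w_t^{(i)}\), the posterior is approximated as \(p(x_{1:t} \mid z_{1:t}, u_{1:t}) \approx \sum_{i=1}^{N} w_t^{(i)} \delta(x_{1:t} - x_{1:t}^{(i)})\).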

  2. Mapping with Gaussian Conditioning

    For a robot to navigate autonomously, it needs to learn the locations of any potential obstacles around it. One of the standard ways to do this is with an algorithm known as EKF-SLAM. SLAM stands for "simultaneous localization and mapping", as the algorithm must simultaneously find out where the robot …

  3. Conjugate Computation

    This post is about a technique that allows us to use variational message passing on models where the likelihood doesn't have a conjugate prior. There will be a lot of JAX code snippets to make everything as concrete as possible.

    The Math

    Say \(X\) comes from a distribution with density …

  4. Sparse Variational Gaussian Processes

    This notebook introduces the Fully Independent Training Conditional (FITC) sparse variational Gaussian process model. You shouldn't need any prior knowledge about Gaussian processes: it's enough to know how to condition and marginalize finite-dimensional Gaussian distributions. I'll assume you know about variational inference and Pyro, though.

    import pyro
    import pyro.distributions …
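
    Since the notebook leans on exactly one Gaussian fact, here is a minimal sketch of it (my own toy code, not the notebook's Pyro implementation; the function name, partition indices, and test values are all made up): conditioning a joint Gaussian on part of its coordinates.

    import jax.numpy as jnp

    def condition(mu, Sigma, idx_a, idx_b, y):
        # Partition the mean and covariance of a joint Gaussian over (a, b).
        mu_a, mu_b = mu[idx_a], mu[idx_b]
        S_aa = Sigma[jnp.ix_(idx_a, idx_a)]
        S_ab = Sigma[jnp.ix_(idx_a, idx_b)]
        S_bb = Sigma[jnp.ix_(idx_b, idx_b)]
        # p(a | b = y) is Gaussian with mean mu_a + S_ab S_bb^{-1} (y - mu_b)
        # and covariance S_aa - S_ab S_bb^{-1} S_ba.
        K = jnp.linalg.solve(S_bb, S_ab.T).T  # S_ab @ inv(S_bb); S_bb is symmetric
        return mu_a + K @ (y - mu_b), S_aa - K @ S_ab.T

    mu = jnp.zeros(3)
    Sigma = jnp.array([[2.0, 0.5, 0.3],
                       [0.5, 1.0, 0.2],
                       [0.3, 0.2, 1.5]])
    mean, cov = condition(mu, Sigma, jnp.array([0]), jnp.array([1, 2]),
                          jnp.array([0.4, -0.1]))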
  5. Differential Equations Refresher

    In my freshman year of college, I took an introductory differential equations class. That was nine years ago. I've forgotten pretty much everything, so I thought I'd review a little, trying to generalize the techniques along the way. I'll use summation notation throughout, and write \(\frac{\partial^n}{\partial x …

  6. Fun with Likelihood Ratios

    Say you're trying to maximize a likelihood \(p_{\theta}(x)\), but you only have an unnormalized version \(\hat{p}_{\theta}\) for which \(p_{\theta}(x) = \frac{\hat{p}_{\theta}(x)}{N_{\theta}}\). How do you pick \(\theta\)? Well, you can rely on the magic of self-normalized importance sampling.
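
    As a hedged illustration of that trick (my own toy example in JAX, not the post's code; the unnormalized density and the proposal here are made up): draw samples from a proposal \(q\), weight them by \(\hat{p}_{\theta}/q\), and normalize the weights. The same weighted samples estimate both \(N_{\theta}\) and \(\nabla_{\theta} \log N_{\theta}\).

    import jax
    import jax.numpy as jnp
    from jax.scipy.special import logsumexp

    def log_p_hat(theta, x):
        # Hypothetical unnormalized log-density: a Gaussian missing its
        # normalizer, so the true N_theta is sqrt(2 * pi) for every theta.
        return -0.5 * (x - theta) ** 2

    key = jax.random.PRNGKey(0)
    xs = 3.0 * jax.random.normal(key, (10_000,))  # proposal q = N(0, 3^2)
    log_q = -0.5 * (xs / 3.0) ** 2 - jnp.log(3.0 * jnp.sqrt(2.0 * jnp.pi))

    theta = 1.0
    log_w = log_p_hat(theta, xs) - log_q          # unnormalized log-weights
    w = jax.nn.softmax(log_w)                     # self-normalized weights

    # Importance-sampling estimate of N_theta = E_q[p_hat / q]; about 2.5066 here.
    N_est = jnp.exp(logsumexp(log_w) - jnp.log(xs.size))

    # SNIS estimate of grad_theta log N_theta = E_{p_theta}[grad_theta log p_hat],
    # which should be near zero since N_theta doesn't depend on theta here.
    dlogp = jax.vmap(jax.grad(log_p_hat), in_axes=(None, 0))(theta, xs)
    grad_log_N = jnp.sum(w * dlogp)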
