## Task

This is a project from the second term of Udacity’s Self-Driving Car Engineer program. The task was to implement a particle filter that allows a simulated car to localize itself. The particle filter compares landmarks (map data) with data from lidar sensors to estimate the vehicle’s position and orientation in a probabilistic fashion.

> “Particle filters comprise a broad family of sequential Monte Carlo algorithms for approximate inference in partially observable Markov chains.” — Thrun, 2002, Proceedings of Uncertainty in AI

_Source: Udacity course lectures_

## Algorithm

The idea of a particle filter in this setting is to deduce the present location of the robot using a motion model, map data and Bayesian statistics in the form of randomly initialized particles.

_Source: Thrun, 2002, Proceedings of Uncertainty in AI_

In every iteration a prediction and an update step are performed, followed by a resampling of the particles in Monte Carlo fashion. This way only the most likely particles survive and can be used to estimate the robot’s position.

### Initialization

After initializing the vehicle’s position with GPS data, a set of randomly placed particles is created, where each particle has its own x, y position and orientation angle in global map coordinates. In addition, every particle carries a weight that is proportional to the probability of it being at the vehicle’s true position. In the beginning the weights are uniform across all particles. The task is now to deduce which particle has the highest likelihood of being (near) the real location.
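As a small sketch of this step (the `Particle` struct and the standard-deviation array `std_pos` are assumptions consistent with the snippets below), initialization scatters the particles around the GPS fix and gives each one the same weight:

```cpp
#include <random>
#include <vector>

// Hypothetical particle type; field names match the later snippets.
struct Particle {
  double x, y, theta, weight;
};

// Scatter num_particles around the GPS estimate (x, y, theta),
// using the GPS standard deviations in std_pos[0..2].
std::vector<Particle> initParticles(int num_particles, double x, double y,
                                    double theta, const double std_pos[3],
                                    std::mt19937 &gen) {
  std::normal_distribution<double> norm_x(x, std_pos[0]);
  std::normal_distribution<double> norm_y(y, std_pos[1]);
  std::normal_distribution<double> norm_theta(theta, std_pos[2]);
  std::vector<Particle> particles;
  particles.reserve(num_particles);
  for (int i = 0; i < num_particles; ++i) {
    // every particle starts with the same (uniform) weight
    particles.push_back({norm_x(gen), norm_y(gen), norm_theta(gen), 1.0});
  }
  return particles;
}
```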

### Prediction Step

Using information about the motion of the vehicle combined with a motion model, each particle simulates the same movement.

*Simple motion model*

Because we’re dealing with stochastic processes (wheel slip, sensor noise, etc.), we have to account for this uncertainty. This can be done by adding noise drawn from a distribution, or by drawing the prediction itself from one.

```cpp
void ParticleFilter::prediction(double delta_t, double std_pos[], double velocity, double yaw_rate) {
  // static, so the engine is not re-created (and thus re-seeded) on every call
  static std::default_random_engine random_engine;
  // zero-mean Gaussian noise for x, y and theta
  std::normal_distribution<double> norm_x(0, std_pos[0]);
  std::normal_distribution<double> norm_y(0, std_pos[1]);
  std::normal_distribution<double> norm_theta(0, std_pos[2]);
  for (auto &part : particles) {
    // use the motion model for the prediction
    if (fabs(yaw_rate) > 1e-3) {
      // turning motion with constant yaw rate
      part.x += velocity / yaw_rate * (sin(part.theta + yaw_rate * delta_t) - sin(part.theta)) + norm_x(random_engine);
      part.y += velocity / yaw_rate * (cos(part.theta) - cos(part.theta + yaw_rate * delta_t)) + norm_y(random_engine);
      part.theta += yaw_rate * delta_t + norm_theta(random_engine);
    } else {
      // straight-line motion when the yaw rate is (close to) zero
      part.x += velocity * delta_t * cos(part.theta) + norm_x(random_engine);
      part.y += velocity * delta_t * sin(part.theta) + norm_y(random_engine);
    }
    part.theta = normalizeAngle(part.theta);
  }
}
```

### Update Step

Using this *a priori* prediction in combination with sensor measurements and map data, we update the weight of every particle according to the probability that this particle’s pose is correct. But since the sensor measurements are relative to the vehicle’s unknown position, we first have to transform them into map coordinates, as seen from the pose of the respective particle.

#### Coordinate transform

This transformation follows the simple equations of a homogeneous transformation with translation and rotation:

$$
\begin{pmatrix} x_m \\ y_m \\ 1 \end{pmatrix} =
\begin{pmatrix}
\cos\theta & -\sin\theta & x_p \\
\sin\theta & \cos\theta & y_p \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x_{obs} \\ y_{obs} \\ 1 \end{pmatrix}
$$

which results in

$$
x_m = x_p + x_{obs} \cos\theta - y_{obs} \sin\theta, \qquad
y_m = y_p + x_{obs} \sin\theta + y_{obs} \cos\theta
$$

where $(x_p, y_p, \theta)$ is the particle’s pose in map coordinates and $(x_{obs}, y_{obs})$ is the measurement in the vehicle’s local frame.

After the transformation into the particle’s perspective we basically pretend that the particle has taken this measurement. The next step is to compare this hypothetical measurement with the ground truth / map data.
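The transformation can be sketched as a small helper function (the function name and the plain `std::pair` return type are illustrative choices, not taken from the project code):

```cpp
#include <cmath>
#include <utility>

// Map an observation (x_obs, y_obs), given in the particle's local frame,
// into global map coordinates using the particle's pose (x_p, y_p, theta).
std::pair<double, double> transformToMap(double x_p, double y_p, double theta,
                                         double x_obs, double y_obs) {
  double x_m = x_p + std::cos(theta) * x_obs - std::sin(theta) * y_obs;
  double y_m = y_p + std::sin(theta) * x_obs + std::cos(theta) * y_obs;
  return {x_m, y_m};
}
```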

#### Data association

The transformed measurements may be way off, or multiple measurements may be close to the same landmark. To solve this problem of data association, I used a simple nearest neighbor algorithm to assign each measurement to the closest landmark (according to the Euclidean distance metric).

_Source: Udacity course lectures_

The nearest neighbor algorithm can be expensive at runtime, since every measurement is compared with every landmark for every particle. However, in this setting it is sufficient and easy to implement.

```cpp
for (auto &meas : measurements) {
  // start with the largest representable distance (requires <limits>)
  double closest_dist = std::numeric_limits<double>::max();
  for (const Map::single_landmark_s &landmark : mapLandmarks.landmark_list) {
    double dist = sqrt(pow(meas.x - landmark.x_f, 2) + pow(meas.y - landmark.y_f, 2));
    // assign the closest landmark to the measurement
    if (dist < closest_dist) {
      closest_dist = dist;
      meas.id = landmark.id_i;
    }
  }
}
```

#### Weight update

The weight of each particle is then updated by computing the product of the probability density function (PDF) values over all of its measurements. Each PDF value is obtained by evaluating a multivariate normal distribution, centered on the associated landmark, at the transformed measurement position.

_Source: Wikipedia on multivariate Gaussian, License: CC BY-SA 3.0_
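A single measurement’s weight factor can be sketched as follows, assuming independent noise in x and y (i.e. a diagonal covariance matrix); the function name and parameter names are illustrative. The particle’s final weight is the product of these factors over all of its measurements:

```cpp
#include <cmath>

// Bivariate Gaussian PDF with diagonal covariance: (x, y) is the transformed
// measurement, (mu_x, mu_y) the position of the associated landmark, and
// std_x, std_y are the landmark measurement standard deviations.
double gaussianWeight(double x, double y, double mu_x, double mu_y,
                      double std_x, double std_y) {
  double dx = x - mu_x;
  double dy = y - mu_y;
  double norm = 1.0 / (2.0 * M_PI * std_x * std_y);
  return norm * std::exp(-(dx * dx / (2.0 * std_x * std_x) +
                           dy * dy / (2.0 * std_y * std_y)));
}
```

A measurement that lands exactly on its landmark contributes the maximum factor; the factor decays exponentially with the (scaled) squared distance.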

### Resampling

During the final step of each iteration the set of particles is resampled. In this process each particle is drawn with a probability equal to its normalized weight, with replacement, so the same particle can be drawn multiple times.

For resampling I use the C++ `std::discrete_distribution` from the `<random>` header.

```cpp
// create a distribution that draws indices proportional to the particle weights
std::discrete_distribution<int> weight_dist(particle_probs.begin(), particle_probs.end());
// sample particles by drawing from the above distribution with replacement
std::vector<Particle> resampled_particles;
resampled_particles.reserve(num_particles);
for (int i = 0; i < num_particles; ++i) {
  resampled_particles.push_back(particles[weight_dist(gen)]);
}
// exchange the old particle list for the resampled one
particles = std::move(resampled_particles);
```