Surrogate gradients provide a theoretically well-founded solution for end-to-end training of stochastic spiking neural networks: the non-differentiable spike function is kept in the forward pass, while a smooth surrogate replaces its ill-defined derivative in the backward pass, allowing gradient-based learning throughout the network.
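As a minimal sketch of the idea, the following single-neuron example (an illustrative assumption, not a specific implementation from this work) uses a Heaviside step for the forward spike and the derivative of a steep sigmoid as the surrogate in the backward pass; the choice of sigmoid surrogate and the steepness parameter `beta` are assumptions for illustration:

```python
import numpy as np

def spike(v):
    # Forward pass: non-differentiable Heaviside step (spike if v >= 0)
    return (v >= 0.0).astype(float) if isinstance(v, np.ndarray) else float(v >= 0.0)

def surrogate_grad(v, beta=10.0):
    # Backward pass: replace the Heaviside's zero/undefined derivative
    # with the derivative of a steep sigmoid, a common surrogate choice
    s = 1.0 / (1.0 + np.exp(-beta * v))
    return beta * s * (1.0 - s)

# Single-neuron example: membrane potential v = w*x - theta (threshold)
w, x, theta = 0.4, 1.0, 0.5
v = w * x - theta            # v = -0.1, below threshold
out = spike(v)               # forward: no spike emitted
grad_w = surrogate_grad(v) * x   # backward: nonzero gradient despite no spike
w += 0.1 * grad_w            # a gradient step pushes w toward firing
```

Because the surrogate is nonzero even when the neuron is silent, the weight still receives a learning signal, which is exactly what makes end-to-end training possible.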