
Topics in Bayesian Machine Learning

Date written: Sep 17, 2020
Last updated: Oct 20, 2020
Filed under ML & Stats in math


I want this to be a helpful resource for newcomers to the field of Bayesian machine learning. The objective here is to collect literature that brings insight into modern inference methods. Of course, this requires me to extract those insights myself to be sure that the papers I include are meaningful. Therefore, this post remains a living document.

I will post commentary, when I can, on what to expect when reading the material. Often, however, I will simply list the materials, and the list order should be taken as the recommended reading order. An overall sequence in which to tackle the topics is harder to prescribe. I do, however, suggest that this not be your first excursion into machine learning.


Big Picture Views

When diving deep into a topic, we often find ourselves too close to the action. It is important to start with and keep the bigger picture in mind. I recommend the following to get a feel for the fundamental thesis around being Bayesian. It is not a silver bullet but a set of common-sense principles to abide by.

  • In my introductory article, The Beauty of Bayesian Learning, I describe the essence of Bayesian learning using a simple pattern guessing demo.
  • PRML Chapter 1 [3] is the place to start for a succinct treatment of the topic.
  • The ideas can be further reinforced through DJCM's PhD Thesis, Chapter 2 [1].
  • AGW's PhD Thesis Chapter 1 [2] provides a broader background on the big picture.

Less so now, but arguments around the subjectivity of the prior are often brought up. This is unfortunately a misdirected argument, because without subjectivity "learning" cannot happen and is in general an ill-defined problem to tackle. Subjective priors are, however, not the only thing that being Bayesian brings to the table.

Many people, including seasoned researchers, have the wrong idea of what it means to be Bayesian. Putting prior assumptions into a model does not by itself make one a Bayesian; in that sense, everyone is a Bayesian, because every algorithm is built starting from some implicit priors (not statistical biases). I die a little when people equate Bayesian methods with simply regularizing using the prior: regularization is an effect of the prior, and one that is often misconstrued. For instance, take a look at this fun post by Dan Simpson, "The king must die", on why simply assuming a Laplace prior does not imply sparse solutions, unlike its popular maximum a posteriori variant known as the Lasso.

When explaining the data using a model, we usually have many competing hypotheses available, which naturally leads to the model selection problem. The principle of Occam's razor advocates choosing the simplest possible explanation. Bayesian inference shines here as well by automatically embodying this "principle of parsimony".

  • ITILA Chapter 28 [7] describes how Bayesian inference handles "automatic Occam's razor" quantitatively.
  • Seeing the ever-increasing complexity of neural network models, one may doubt the validity of Occam's razor, perhaps sensing a contradiction. Rasmussen & Ghahramani [8] resolve this through a simple experiment. Maddox & Benton et al. [9] provide an excellent realization of this principle for large models.
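
As a rough sketch of why this happens (my paraphrase of the treatment in ITILA [7]): the marginal likelihood integrates over parameters rather than optimizing them, and approximately factors into a best-fit term times an "Occam factor" that penalizes models spreading their prior mass over many datasets they never have to explain,

$$
p(\mathcal{D} \mid \mathcal{M}) = \int p(\mathcal{D} \mid \theta, \mathcal{M}) \, p(\theta \mid \mathcal{M}) \, d\theta \;\approx\; \underbrace{p(\mathcal{D} \mid \hat{\theta}, \mathcal{M})}_{\text{best fit}} \times \underbrace{\frac{\sigma_{\theta \mid \mathcal{D}}}{\sigma_{\theta}}}_{\text{Occam factor}} .
$$

An overly flexible model earns a small Occam factor even when its best fit is excellent, so the evidence automatically trades data fit against complexity.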

Bayesian model averaging (BMA) is another perk enjoyed by Bayesians, which allows for soft model selection. Andrew G. Wilson clarifies the value it adds in a technical report titled The Case for Bayesian Deep Learning. Unfortunately, BMA is often misconstrued as model combination. Minka [10] dispels any misunderstandings in this regard.
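
For concreteness, the BMA posterior predictive (written here in its standard form, not tied to any one reference above) is a mixture over models weighted by their posterior probabilities,

$$
p(y \mid \mathcal{D}) = \sum_{k} p(y \mid \mathcal{D}, \mathcal{M}_k) \, p(\mathcal{M}_k \mid \mathcal{D}) .
$$

The weights are posterior probabilities of mutually exclusive hypotheses and concentrate on a single model as data accumulates; this "soft model selection" is precisely what separates BMA from model combination.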

Topics

Gaussian Processes

Gaussian Process (GP) research interestingly started as a consequence of the popularity and early success of neural networks.

  • DJCM's Introduction, Sections 1-6 [4], to understand where GPs come from. A single reading before moving on should help calibrate the mindset; I also recommend returning to this once more after the next reading.
  • GPML Chapters 1, 2, and 3 [5] for a detailed treatment of the usual regression and classification problems; a minimal numerical sketch of the regression case follows this list.
  • LWK Chapter 1 [12] is worth a read for a big-picture view of kernel machines. It presents an optimization perspective rather than a Bayesian one, but a useful perspective nonetheless.
  • GPML Chapter 5 [5] to understand how model selection behaves with GPs, and key caveats to look out for, especially regarding Bayesian Model Averaging. It also has a nice example of a non-trivial composite kernel.
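
To make the regression setting concrete, below is a minimal numerical sketch of the GP posterior following the standard equations in GPML Chapter 2; the RBF kernel, toy data, and fixed hyperparameters are my own arbitrary choices.

```python
# A minimal sketch of GP regression with an RBF kernel and fixed
# hyperparameters (toy data; no hyperparameter learning).
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    sqdist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

X = np.array([-4.0, -2.0, 0.0, 1.0, 3.0])   # training inputs
y = np.sin(X)                                # training targets
Xs = np.linspace(-5, 5, 200)                 # test inputs
noise = 1e-2                                 # observation noise variance

K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)

# Posterior mean and covariance via a Cholesky solve for stability.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
cov = Kss - v.T @ v

std = np.sqrt(np.maximum(np.diag(cov), 0.0))  # clip tiny negatives from round-off
print(mean[:3], std[:3])
```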

Sparse Gaussian Processes

The non-parametric nature of Gaussian processes is slightly at odds with scalability, but considerable progress has been made from first principles in this regard as well.
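
To be concrete about the bottleneck (my summary, not drawn from a specific reference here): exact GP regression requires solving against the n × n kernel matrix, an O(n³) operation. A common family of sparse approximations summarizes the data through m ≪ n inducing points, e.g.

$$
K_{nn} \;\approx\; K_{nm} K_{mm}^{-1} K_{mn},
$$

which brings the dominant cost down to O(nm²).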

Covariance Functions

Covariance functions are the way we describe our inductive biases in a Gaussian Process model and hence deserve a separate section altogether.

  • GPML Chapter 4 [5] provides a broad discussion around where covariance functions come from.
  • DD's PhD Thesis Chapter 2 [6] contains some basic advice and intuitions. This is more succinctly available as The Kernel Cookbook.
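
In the spirit of the Kernel Cookbook, here is a small sketch (toy kernels and parameters of my own choosing) of how sums and products of base kernels compose into richer inductive biases.

```python
# Composite covariances from base kernels: sums act like "either/or",
# products like "both at once" (e.g. a locally-periodic kernel).
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale ** 2)

def periodic(x1, x2, period=1.0, lengthscale=1.0):
    d = np.pi * np.abs(x1[:, None] - x2[None, :]) / period
    return np.exp(-2.0 * np.sin(d) ** 2 / lengthscale ** 2)

x = np.linspace(0, 5, 200)
K_sum = rbf(x, x, lengthscale=2.0) + periodic(x, x, period=1.0)
K_locally_periodic = rbf(x, x, lengthscale=2.0) * periodic(x, x, period=1.0)

# Draw a prior sample to visualize the induced inductive bias.
rng = np.random.default_rng(0)
jitter = 1e-8 * np.eye(len(x))
f = rng.multivariate_normal(np.zeros(len(x)), K_locally_periodic + jitter)
print(f[:5])
```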

Monte Carlo algorithms

Monte Carlo algorithms are used for exact inference in scenarios where closed-form inference is not possible.

  • PRML Chapter 11.1 [3]
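
As a minimal illustration (my own toy example, not taken from PRML), simple Monte Carlo replaces an intractable expectation with an average over independent samples.

```python
# Simple Monte Carlo: approximate E[f(x)] by averaging f over i.i.d. samples.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)

# E[x^4] under N(0, 1) is exactly 3; the estimate converges at O(1/sqrt(N)).
estimate = np.mean(samples ** 4)
print(f"Monte Carlo estimate: {estimate:.3f} (exact value: 3)")
```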

Markov Chain Monte Carlo

The simple Monte Carlo algorithms rely on independent samples from a target distribution to be useful. Relaxing the independence assumption leads to correlated samples via the Markov chain Monte Carlo (MCMC) family of algorithms.

  • IM's PhD Thesis (Chapters 1 and 2) [11] is arguably the best introduction to the topic.
  • PRML Chapter 11.2 [3]
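
As a minimal sketch of the idea (my own toy example, not drawn from the references above), random-walk Metropolis produces correlated samples whose stationary distribution is the target, requiring only an unnormalized density.

```python
# Random-walk Metropolis targeting an unnormalized two-component mixture.
import numpy as np

def log_target(x):
    # Unnormalized log-density of a mixture of N(-2, 1) and N(2, 1).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
x, chain = 0.0, []
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)       # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal                           # Metropolis acceptance rule
    chain.append(x)

chain = np.array(chain[5_000:])                # discard burn-in
print(f"mean ~ {chain.mean():.2f}, std ~ {chain.std():.2f}")
```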

Variational Inference

Pathologies

Bishop, Chapter 10 [3], shows the zero-forcing behavior of the KL term used in variational inference, which underestimates the uncertainty when unimodal approximations are used for multimodal true distributions. This, however, should not be considered a law of the universe but only a rule of thumb, as clarified by Richard Turner et al. in Counterexamples to variational free energy compactness folk theorems. Tom Rainforth et al. show that tighter variational bounds are not necessarily better.
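
Below is a small numerical illustration of that zero-forcing behavior (my own sketch; the bimodal target and the grid search are arbitrary choices): minimizing the reverse KL locks a unimodal Gaussian onto a single mode, while the forward KL stretches it across both.

```python
# Fit a single Gaussian to a bimodal target by grid search, comparing
# the reverse KL(q||p) used in VI against the forward KL(p||q).
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
p = 0.5 * norm.pdf(x, -3, 0.7) + 0.5 * norm.pdf(x, 3, 0.7)  # bimodal target

def kl(a, b):
    eps = 1e-300                      # avoid log(0); quadrature on the grid
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

best_rev, best_fwd = None, None
for mu in np.linspace(-5, 5, 101):
    for sigma in np.linspace(0.3, 5, 95):
        q = norm.pdf(x, mu, sigma)
        rev, fwd = kl(q, p), kl(p, q)
        if best_rev is None or rev < best_rev[0]:
            best_rev = (rev, mu, sigma)
        if best_fwd is None or fwd < best_fwd[0]:
            best_fwd = (fwd, mu, sigma)

print("argmin KL(q||p): mu=%.2f, sigma=%.2f (locks onto one mode)" % best_rev[1:])
print("argmin KL(p||q): mu=%.2f, sigma=%.2f (covers both modes)" % best_fwd[1:])
```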

Acknowledgements

I'm inspired by Yingzhen Li's excellent resource, Topics in Approximate Inference (2017). Many of the interesting references here also come from discussions with my advisor, Andrew Gordon Wilson.

References

  1. MacKay, D.J.C., 1992. Bayesian methods for adaptive models. PhD thesis. Available at: http://www.inference.org.uk/mackay/PhD.html.
  2. Wilson, A.G., 2014. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge. Available at: http://www.cs.cmu.edu/%7Eandrewgw/andrewgwthesis.pdf.
  3. Bishop, C.M., 2006. Pattern recognition and machine learning. Springer.
  4. MacKay, D.J.C., 1998. Introduction to Gaussian processes. NATO ASI Series F: Computer and Systems Sciences, 168, pp. 133–166. Available at: http://www.inference.org.uk/mackay/gpB.pdf.
  5. Rasmussen, C.E. & Williams, C.K.I., 2006. Gaussian processes for machine learning. MIT Press, Cambridge, MA. Available at: http://www.gaussianprocess.org/gpml/.
  6. Duvenaud, D., 2014. Automatic model construction with Gaussian processes. PhD thesis, University of Cambridge. Available at: https://www.cs.toronto.edu/~duvenaud/thesis.pdf.
  7. MacKay, D.J.C., 2003. Information theory, inference and learning algorithms. Cambridge University Press.
  8. Rasmussen, C.E. & Ghahramani, Z., 2001. Occam's razor. In Advances in Neural Information Processing Systems, pp. 294–300.
  9. Maddox, W.J., Benton, G. & Wilson, A.G., 2020. Rethinking parameter counting in deep models: Effective dimensionality revisited. arXiv preprint arXiv:2003.02139.
  10. Minka, T.P., 2002. Bayesian model averaging is not model combination. Available at: https://tminka.github.io/papers/minka-bma-isnt-mc.pdf.
  11. Murray, I.A., 2007. Advances in Markov chain Monte Carlo methods. PhD thesis, University of London. Available at: http://homepages.inf.ed.ac.uk/imurray2/pub/07thesis/murray_thesis_2007.pdf.
  12. Schölkopf, B. & Smola, A.J., 2018. Learning with kernels: Support vector machines, regularization, optimization, and beyond. Adaptive Computation and Machine Learning series, MIT Press.