Somehow I managed to sign up to give 4 talks at Warwick during the next 3 weeks (!) This Tuesday and next, I will be presenting the 3rd chapter of Mark Huber's 2015 book, Perfect Simulation, at the reading group of the same name. This week will focus on Coupling from the Past (Propp & Wilson, 1996), while next week I will present perfect slice sampling (Mira, Møller & Roberts, 2001). A finite sample drawn using CFTP is unbiased, so it can be incorporated into pseudo-marginal methods such as the exchange algorithm (Murray, Ghahramani & MacKay, 2006). More about CFTP in a future blog post, no doubt!
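For concreteness, here is a minimal sketch of CFTP for a toy 3-state chain (my own illustration, not an example from Huber's book): coupled chains are started from every state at time -T, driven by shared uniforms that are reused each time T doubles, and the state at time 0 is an exact draw from the stationary distribution once all chains have coalesced.

```python
import random

# Toy 3-state transition matrix (rows sum to 1); any ergodic chain works.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def phi(state, u):
    """Random-function update: map a shared uniform u to the next state."""
    cum = 0.0
    for nxt, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return nxt
    return len(P[state]) - 1

def cftp(rng=None):
    """Exact draw from the stationary distribution via coupling from the past."""
    rng = rng or random.Random()
    us = []   # us[t-1] is the uniform used at time -t (reused as T grows)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(len(P)))   # start every chain at time -T
        for t in range(T, 0, -1):      # step from time -T up to time 0
            states = [phi(s, us[t - 1]) for s in states]
        if len(set(states)) == 1:      # all chains coalesced: exact sample
            return states[0]
        T *= 2                         # otherwise restart further in the past
```

Note that the uniforms for times -1, -2, ... are fixed once generated and reused on every restart; regenerating them would break the exactness of the algorithm.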

Next Monday (June 5), I will be giving an introduction to splines, based on chapter 5 of The Elements of Statistical Learning (Hastie, Tibshirani & Friedman, 2009, 2nd ed.), supplemented by assorted other references:

Obviously, I’m not going to cover all of that in a one-hour talk. The main ideas that I’ll aim to get across are Gibbs sampling and other estimation methods for the smoothing parameter, $\lambda$. This talk will be part of the reading group in statistical machine learning.
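To give a flavour of the $\lambda$-selection problem, here is a small sketch using a discrete analogue of the smoothing spline (a Whittaker-style penalised smoother with a second-difference penalty rather than a full B-spline basis), with $\lambda$ chosen by generalised cross-validation (GCV). The data and grid of $\lambda$ values are made up purely for illustration.

```python
import numpy as np

# Simulated data: a smooth signal plus noise (illustrative only).
rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Second-difference penalty matrix: the discrete analogue of the
# integrated squared second derivative penalty in a smoothing spline.
D = np.diff(np.eye(n), n=2, axis=0)
DtD = D.T @ D

def smooth(y, lam):
    """Penalised fit (I + lam * D'D)^{-1} y; also return effective df tr(H)."""
    H = np.linalg.inv(np.eye(n) + lam * DtD)
    return H @ y, np.trace(H)

def gcv(y, lam):
    """Generalised cross-validation score for a given lambda."""
    fit, edf = smooth(y, lam)
    rss = np.sum((y - fit) ** 2)
    return n * rss / (n - edf) ** 2

lams = 10.0 ** np.arange(-2, 6)
best = min(lams, key=lambda lam: gcv(y, lam))
print("GCV-selected lambda:", best)
```

As $\lambda \to 0$ the fit interpolates the data, while as $\lambda \to \infty$ it shrinks towards the linear null space of the penalty; GCV trades off residual error against the effective degrees of freedom $\mathrm{tr}(H_\lambda)$.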

The last talk will be an introduction to approximate Bayesian computation (ABC) for the Warwick ML Club. This talk will largely be based on a previous talk that I gave at ABC in Sydney:
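As a taster, the basic ABC rejection sampler can be sketched in a few lines: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands within a tolerance $\epsilon$ of the observed summary. The toy model below (a Gaussian mean with a uniform prior) is my own illustration, not an example from the slides.

```python
import random
import statistics

def simulate(theta, n=50, rng=random):
    """Simulate n observations from the toy model N(theta, 1)."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def abc_rejection(y_obs, n_keep=200, eps=0.1, rng=None):
    """Basic ABC rejection: keep prior draws whose simulated summary
    (here the sample mean) lies within eps of the observed summary."""
    rng = rng or random.Random(42)
    s_obs = statistics.mean(y_obs)
    kept = []
    while len(kept) < n_keep:
        theta = rng.uniform(-5, 5)                      # draw from the prior
        s_sim = statistics.mean(simulate(theta, rng=rng))
        if abs(s_sim - s_obs) < eps:
            kept.append(theta)
    return kept

y_obs = simulate(1.5, rng=random.Random(1))
posterior = abc_rejection(y_obs)
print("ABC posterior mean:", statistics.mean(posterior))
```

The likelihood is never evaluated, which is the whole point: the same scheme applies when the likelihood is intractable, as long as we can simulate from the model.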

The summer is a busy time for conferences in the UK. I will be presenting a poster at the Workshop on New mathematical methods in computational imaging at Heriot-Watt University, Edinburgh, on June 30:

Approximate Posterior Inference for the Inverse Temperature of a Hidden Potts Model

There are many approaches to Bayesian computation with intractable likelihoods, including the exchange algorithm and approximate Bayesian computation (ABC). A serious drawback of these algorithms is that they do not scale well for models with a large state space. Markov random fields, such as the Potts model and exponential random graph model (ERGM), are particularly challenging because the number of discrete variables increases linearly with the size of the image or graph. The likelihood of these models cannot be computed directly, due to the presence of an intractable normalising constant. In this context, it is necessary to employ algorithms that provide a suitable compromise between accuracy and computational cost.

Bayesian indirect likelihood (BIL) is a class of methods that approximate the likelihood function using a surrogate model. This model can be trained using a pre-computation step, utilising massively parallel hardware to simulate auxiliary variables. We review various types of surrogate model that can be used in BIL. In the case of the Potts model, we introduce a parametric approximation to the score function that incorporates its known properties, such as heteroskedasticity and critical temperature. We demonstrate this method on 2D satellite remote sensing and 3D computed tomography (CT) images. We achieve a hundredfold improvement in the elapsed runtime, compared to the exchange algorithm or ABC. Our algorithm has been implemented in the R package “bayesImageS,” which is available from CRAN.
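As a toy illustration of the auxiliary-variable simulation underlying the pre-computation step, here is a single-site Gibbs sampler for the q-state Potts model together with its sufficient statistic $S(z)$, the count of like-valued neighbour pairs. This is an expository sketch only, not the implementation in bayesImageS.

```python
import math
import random

def neighbours(i, j, k):
    """4-nearest neighbours of site (i, j) on a k x k lattice (free boundary)."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < k and 0 <= nj < k:
            yield ni, nj

def gibbs_sweep(z, beta, q, rng):
    """One sweep of single-site Gibbs updates for the q-state Potts model:
    p(z[i][j] = c | rest) is proportional to exp(beta * #{neighbours == c})."""
    k = len(z)
    for i in range(k):
        for j in range(k):
            w = [math.exp(beta * sum(z[ni][nj] == c
                                     for ni, nj in neighbours(i, j, k)))
                 for c in range(q)]
            u = rng.random() * sum(w)
            c, acc = 0, w[0]
            while u > acc:                 # inverse-CDF draw from the weights
                c += 1
                acc += w[c]
            z[i][j] = c

def suff_stat(z):
    """S(z): number of like-valued neighbour pairs (the sufficient statistic)."""
    k = len(z)
    s = 0
    for i in range(k):
        for j in range(k):
            if i + 1 < k and z[i][j] == z[i + 1][j]:
                s += 1
            if j + 1 < k and z[i][j] == z[i][j + 1]:
                s += 1
    return s
```

Simulating $S(z)$ across a grid of inverse temperatures is exactly the kind of embarrassingly parallel workload that suits the pre-computation step; the full conditional only ever involves the four neighbours, so the intractable normalising constant never appears.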

During July, I will be attending the programme "Scalable inference; statistical, algorithmic, computational aspects" at the Isaac Newton Institute (INI), Cambridge. This includes two workshops (that I am aware of, so far): "Scalable Statistical Inference" (July 3-7) and "Sampling methods in statistical physics and Bayesian inference" (July 18).

My abstract has also been accepted for the RSS International Conference in Glasgow, September 4-7. According to the conference programme, my talk has been scheduled for contributed session 6.5 (Big Data), after lunch on the Wednesday. Hope to see you there!

From → MCMC
