bayesImageS::swNoData(..)

This is the first in a series of posts describing the functions and algorithms that I have implemented in the R package bayesImageS.

Gibbs sampling was originally designed by Geman & Geman (1984) for drawing updates from the Gibbs distribution, hence the name. However, single-site Gibbs sampling exhibits poor mixing due to the posterior correlation between the pixel labels. Thus it is very slow to converge when the correlation (controlled by the inverse temperature β) is high.
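In symbols, for a Potts model without an external field (the setting of swNoData), writing δ for the Kronecker delta and i ∼ ℓ for pairs of neighbouring pixels, the label field has distribution

$$\pi(z \mid \beta) \propto \exp\Big\{ \beta \sum_{i \sim \ell} \delta(z_i, z_\ell) \Big\},$$

so each single-site Gibbs update draws pixel i from its full conditional

$$p(z_i = j \mid z_{\setminus i}, \beta) \propto \exp\Big\{ \beta \sum_{\ell \in \partial(i)} \delta(j, z_\ell) \Big\},$$

which depends only on the current labels of its neighbours ∂(i). As β grows these conditionals become nearly degenerate, so a full sweep changes very few labels and the chain mixes slowly.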

The algorithm of Swendsen & Wang (1987) addresses this problem by forming clusters of neighbouring pixels, then updating all of the labels within a cluster to the same value. When simulating from the prior, such as a Potts model without an external field, this algorithm is very efficient.

The SW function in the PottsUtils package is implemented in a combination of R and C. The swNoData function in bayesImageS is implemented using RcppArmadillo, which gives it a speed advantage. It is worth noting that the intention of bayesImageS is not to replace PottsUtils. Rather, an efficient Swendsen-Wang algorithm is used as a building block for implementations of ABC (Grelaud et al., 2009), path sampling (Gelman & Meng, 1998), and the exchange algorithm (Murray et al., 2006). These other algorithms will be covered in future posts.
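Here is a minimal sketch of how swNoData might be called, based on my reading of the package documentation. The lattice size, β, number of labels and iteration count are illustrative, and the argument order (β, k, neighbours, blocks, iterations) is assumed; getNeighbors() and getBlocks() come from PottsUtils:

library(bayesImageS)
library(PottsUtils)   # getNeighbors() and getBlocks()

mask  <- matrix(1, nrow = 100, ncol = 100)   # illustrative 100x100 lattice
neigh <- getNeighbors(mask, c(2, 2, 0, 0))   # first-order (4-connected) neighbourhood
block <- getBlocks(mask, 2)                  # chequerboard blocks

# simulate from the Potts prior with k = 3 labels at beta = 0.8 for 500 iterations
res <- swNoData(0.8, 3, neigh, block, 500)
str(res)   # inspect what the sampler returns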

Read more…

Interactive image stacks in R

I’ve been quiet for the last few months because the work I was doing on pseudolikelihood and critical temperature developed into a paper that I had to write up and submit. Hopefully this will be the second journal article of my PhD thesis by publication, not counting the short paper I’ve submitted to the ICCR 2013 proceedings or the 5 manuscript pages of lit review that I’ve contributed to a middle-author paper that’s currently in prep.

A reader of this blog sent me a question about my posts on DICOM (part 1, part 2): specifically, how to interact with the DICOM image as a stack of 2D slices. There are two components to this: first, I store the voxel intensity values as a numeric 3D array in R; second, I use the excellent misc3d package by Dai Feng and Luke Tierney to scroll interactively through the axial slices. Details (and R code) after the jump:
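As a rough sketch of the second component, here is the kind of call involved (the array below is simulated noise; in the post it is filled with voxel intensities read from the DICOM series):

library(misc3d)   # slices3d() provides an interactive slice viewer (uses tcltk)

vol <- array(rnorm(64 * 64 * 32), dim = c(64, 64, 32))   # stand-in voxel array

slices3d(vol)   # opens a window with a slider for scrolling through the slices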

Read more…

handling DICOM-RT structure sets in oro.dicom

In my previous post I demonstrated how to read image data and meta-data from DICOM files, using the R package oro.dicom. In radiotherapy there is another kind of data that I work with, known as a DICOM-RT structure set. This is a special type of DICOM file that contains geometric contours of regions of interest, rather than image data.
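As a sketch of what loading one looks like in oro.dicom (the file name is hypothetical, and I am assuming the parsed header labels the relevant element "ROIName"):

library(oro.dicom)

rtss <- readDICOMFile("RTSTRUCT.dcm")   # hypothetical structure set file

# the header is a data frame of DICOM elements; each region of interest
# contributes one ROIName element (tag 3006,0026)
hdr <- rtss$hdr
hdr$value[hdr$name == "ROIName"]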
Read more…

Reading DICOM images in R

The R packages oro.dicom and oro.nifti [1] provide the ability to import and visualise medical images that have been stored as DICOM [2] or ANALYZE™/NIfTI [3] files. This is handy because DICOM is widely used for clinical data, while NIfTI is a more compact format that stores an entire 3D volume in just one or two files, rather than one file per slice.

library(oro.dicom)
dcmImages <- readDICOM("/dev/DICOMRT2", verbose = TRUE,
                       recursive = FALSE, exclude = "sql")

## 412 files to be processed by readDICOM()
##   |=================================================================| 100%
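From here, one possible next step (a sketch rather than the code from the post) is to pull acquisition details out of the headers with extractHeader() and collate the slices into a 3D array with create3D():

sliceLoc <- extractHeader(dcmImages$hdr, "SliceLocation")   # numeric, one value per file
ct.vol   <- create3D(dcmImages)                             # 3D array of voxel intensities
dim(ct.vol)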

Read more…

Land Cover

Following up from posts by Steve Mosher and Oscar Perpiñán Lamigueiro, I thought I’d try my hand at representing categorical data in rasterVis.
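The basic idiom looks something like the following; the land-cover classes and colours are made up for illustration:

library(raster)
library(rasterVis)

# a toy categorical raster: integer codes standing in for land-cover classes
r <- raster(nrows = 50, ncols = 50)
r[] <- sample(1:4, ncell(r), replace = TRUE)

# ratify() marks the layer as categorical; its attribute table holds the class labels
r <- ratify(r)
rat <- levels(r)[[1]]
rat$landcover <- c("Forest", "Water", "Urban", "Cropland")
levels(r) <- rat

# levelplot() then draws a discrete legend keyed to the class labels
levelplot(r, col.regions = c("darkgreen", "steelblue", "grey40", "wheat"))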

Read more…

ICCR 2013 in Melbourne, May 6 – 9

I won’t be there, but my poster will be. Cathy Hargrave, the instigator of the Volume Analysis Tool project at the Radiation Oncology Mater Centre, will be presenting our poster titled “Segmentation of cone-beam CT using a hidden Markov random field with informative priors” at the 17th International Conference on the Use of Computers in Radiation Therapy, Melbourne, May 6–9, 2013.

I’ll post a link to the poster on QUT ePrints after the conference. I’ll also be submitting a short paper for publication in the IOP Journal of Physics: Conference Series.

Numerical stability

Using log-probabilities can make an algorithm more numerically stable, particularly when that algorithm involves the product of several probabilities. Multiplying many probabilities together can lead to floating-point underflow or overflow, resulting in a value of 0 or Inf. The sum of the logs is more resilient to this problem, since it enables the representation of much larger (and much smaller) numbers. For example:

prod(1:1000)

## [1] Inf

sum(log(1:1000))

## [1] 5912.128

prod(1/(1:1000))

## [1] 0

sum(log(1/(1:1000)))

## [1] -5912.128

Converting the product into a sum of logs might also be expected to give a slight speedup, although as I’ve previously noted it actually makes my code slower. I’ve resurrected my log-probability code from my Git stash and run some experiments to determine what I gain, as well as measuring the computational cost. Details after the jump.
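A rough way to time the two approaches (a sketch only; the numbers depend on your machine and on how the probabilities are used in the rest of the algorithm):

x <- runif(1e6)   # stand-in for a vector of probabilities

system.time(for (i in 1:100) prod(x))       # one multiplication per element
system.time(for (i in 1:100) sum(log(x)))   # evaluating log() for every element dominates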

Read more…
