I have a desperate need to understand the world around me, and HDR imaging is no different. Knowing this, my girlfriend (the wonderfully talented Security Pony at http://securitypony.com/) bought me a book about HDR imaging theory a few months back. It makes for great reading for anyone interested in the mechanics behind HDR. Having read the book, the next step was of course to build my own HDR image merge application. I settled on the Exposure Fusion algorithm by Tom Mertens, which simply weights each pixel of each input image and then uses those weights to blend the images. There are other approaches as well, such as first merging to a 32-bit HDR image and then applying a tone mapper.
I really recommend reading Tom's paper on Exposure Fusion before looking at the code, just to get a grasp of the algorithm. But, in short, here's what it does:
For every input image, saturation, contrast, and well-exposedness maps are generated and combined into a weight map. The weight maps basically work as alpha masks and are used when blending the input images. Then, to avoid artifacts in the resulting image, the blending is done using the multi-resolution algorithm of Burt and Adelson, as described in The Laplacian Pyramid as a Compact Image Code. And that's basically it.
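To make those steps concrete, here's a minimal sketch of the whole pipeline in Python/NumPy. To be clear, this is not the sample code linked below (and unlike it, it leans on NumPy for brevity); the function names, the fixed pyramid depth, and the parameter defaults (quality exponents of 1, sigma = 0.2, which are the paper's defaults) are my own simplifications:

```python
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # Burt-Adelson binomial kernel

def blur(img):
    # Separable 5-tap binomial blur along height and width.
    out = img.astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda s: np.convolve(s, KERNEL, mode='same'), axis, out)
    return out

def gaussian_pyramid(img, levels):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])  # blur, then drop every other pixel
    return pyr

def upsample(img, shape):
    # Nearest-neighbour upsample to `shape`, then blur to smooth the blocks.
    up = img.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]
    return blur(up)

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])  # the coarsest level keeps the Gaussian residual
    return lp

def weight_map(img):
    # The three Mertens quality measures, multiplied together.
    # `img` is a float RGB array in [0, 1], shape (H, W, 3).
    gray = img.mean(axis=2)
    p = np.pad(gray, 1, mode='edge')
    contrast = np.abs(p[:-2, 1:-1] + p[2:, 1:-1] +
                      p[1:-1, :-2] + p[1:-1, 2:] - 4 * gray)
    saturation = img.std(axis=2)  # standard deviation across R, G, B
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)).prod(axis=2)
    return contrast * saturation * well_exposed

def exposure_fusion(images, levels=5):
    # `images`: list of aligned float RGB exposures in [0, 1]. Assumes the
    # images are large enough that the coarsest level is still a few px wide.
    weights = np.stack([weight_map(im) for im in images]) + 1e-12
    weights /= weights.sum(axis=0)  # per-pixel weights now sum to 1
    blended = None
    for im, w in zip(images, weights):
        lp = laplacian_pyramid(im, levels)  # detail bands of the exposure
        wp = gaussian_pyramid(w, levels)    # smoothed version of its alpha mask
        contrib = [l * g[..., None] for l, g in zip(lp, wp)]
        blended = contrib if blended is None else \
            [b + c for b, c in zip(blended, contrib)]
    out = blended[-1]
    for level in blended[-2::-1]:           # collapse the blended pyramid
        out = upsample(out, level.shape) + level
    return np.clip(out, 0.0, 1.0)
```

Calling exposure_fusion on a bracketed, aligned set of exposures loaded as float arrays gives you a displayable result directly; no intermediate 32-bit HDR image is ever built, which is a big part of the method's appeal.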
I've tried my best to avoid third-party libraries so that the reader can debug and follow every step of the transformation process. The only library I used is one for bitmap encoding/decoding, used solely for reading the input images and writing the resulting bitmap back to storage. The upshot is that the source code contains a lot of general-purpose array manipulation functions that can be used for other purposes as well. Play around with it! :)
Sample code: Here!
PS. If you're interested in HDR photography, I really recommend having a look at Trey Ratcliff's blog, Stuck in Customs.