Hope everybody has a good time. Things have been a little slow at this end; I've mainly just been recovering from a very tough last weekend.
I went to see Avatar in “Real D 3D”, which was really good, and obviously I had to have a quick read around the topic of stereoscopic imaging, the various types of polarized glasses, and a little about how the projectors work and whatnot.
While I'm no expert, it was quickly apparent that without some decent technology, I wouldn't be writing my own 3D demos any time soon.
What I could find out was that stereoscopy refers to any method of recording and presenting a three-dimensional scene to create the illusion of depth. As you'll see in most 3D films, there appear to be two images superimposed on one another, and for the most part, this is actually what it is.
The images that would have been shot from the left and right cameras are superimposed into the one image, and as a result we get depth. That's the easy bit, however; the hard bit comes when you need to separate the two images so that each eye receives the appropriate one.
The versions at the cinema use special projectors that project each image with a particular polarization. Combined with the special polarized glasses, the light is blocked appropriately so that each eye receives only the correct image, and from there on we perceive depth much like normal.
My initial interest while watching the movie, before knowing anything about it, was to do a graphics demo of some kind, which got me reading into Nvidia's 3D Vision technology. It sounded a lot like some old Wicked3D eyeSCREAM glasses I had some years back, whereby the game presents the left and right images in an alternating fashion (effectively halving your perceived frame rate) and the special glasses shutter out the left and right eyes appropriately.
A 3D Vision graphics demo is out, but then again, the old red/cyan glasses might be worth a laugh to write something for.
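If I do give it a go, the core trick is at least simple enough to sketch. Here's a minimal C++ toy (the square "scene", the 8-pixel parallax and the output file name are all just made up for illustration): it fakes a stereo pair by drawing the same square shifted horizontally per eye, then takes the red channel from the left image and the green/blue channels from the right, and writes out a PPM you could view through red/cyan glasses.

```cpp
#include <cstdio>
#include <vector>

// Toy anaglyph demo: left eye -> red channel, right eye -> green/blue.
// The scene (a single square) and the parallax shift are arbitrary
// values picked for illustration.
const int W = 256, H = 256;

struct Image { std::vector<unsigned char> px; Image() : px(W * H * 3, 0) {} };

// Draw a white square whose horizontal position is shifted per eye;
// the shift between the two eyes is what the brain reads as depth.
void drawSquare(Image& img, int offsetX)
{
    for (int y = 96; y < 160; ++y)
        for (int x = 96 + offsetX; x < 160 + offsetX; ++x)
        {
            unsigned char* p = &img.px[(y * W + x) * 3];
            p[0] = p[1] = p[2] = 255;
        }
}

int main()
{
    Image left, right;
    drawSquare(left,  +4);   // left eye view
    drawSquare(right, -4);   // right eye view, shifted the other way

    // Merge: red from the left image, green/blue from the right, so the
    // red and cyan filters in the glasses separate the views again.
    Image out;
    for (int i = 0; i < W * H; ++i)
    {
        out.px[i * 3 + 0] = left.px [i * 3 + 0];
        out.px[i * 3 + 1] = right.px[i * 3 + 1];
        out.px[i * 3 + 2] = right.px[i * 3 + 2];
    }

    // Write a binary PPM -- about the simplest image format there is.
    FILE* f = fopen("anaglyph.ppm", "wb");
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    fwrite(out.px.data(), 1, out.px.size(), f);
    fclose(f);
    return 0;
}
```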
The idea of indirect lighting has always fascinated me, and for me it's a must-have.
I've read some tutorials and papers on Radiosity and I'm still in the process of mulling over that particular problem.
I'm not really after real-time performance; just being able to light my own geometry and create my own assets easily would be fantastic.
With regards to Radiosity, while there’s some great information on the actual algorithms, there’s not much on divvying up the geometry into the so-called patches.
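That said, once you do somehow have the patches, the solver itself is surprisingly small. Here's a minimal sketch of a Jacobi-style gathering iteration, assuming the patches and the form factors F[i][j] have already been computed (which is exactly the bit the papers gloss over); it just evaluates B_i = E_i + ρ_i Σ_j F_ij B_j over and over.

```cpp
#include <vector>

// One patch: emission E, diffuse reflectance rho, current radiosity B.
// How the geometry gets divvied up into these patches is the part the
// literature skims over; here they're simply assumed to exist.
struct Patch { float emission, reflectance, radiosity; };

// One gathering iteration of the classic radiosity equation:
//   B_i = E_i + rho_i * sum_j( F_ij * B_j )
// formFactor[i][j] is assumed precomputed (hemicube, ray casting, ...).
void gatherIteration(std::vector<Patch>& patches,
                     const std::vector<std::vector<float>>& formFactor)
{
    std::vector<float> newB(patches.size());
    for (size_t i = 0; i < patches.size(); ++i)
    {
        float gathered = 0.0f;
        for (size_t j = 0; j < patches.size(); ++j)
            gathered += formFactor[i][j] * patches[j].radiosity;
        newB[i] = patches[i].emission + patches[i].reflectance * gathered;
    }
    for (size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = newB[i];
}

int main()
{
    // Two toy patches facing each other: one emitter, one reflector,
    // with a made-up mutual form factor of 0.2.
    std::vector<Patch> p = { {1.0f, 0.0f, 0.0f}, {0.0f, 0.5f, 0.0f} };
    std::vector<std::vector<float>> F = { {0.0f, 0.2f}, {0.2f, 0.0f} };
    for (int k = 0; k < 8; ++k) gatherIteration(p, F);
    return 0;
}
```

Run the iteration until the radiosities stop changing and, in theory at least, you have your global illumination solution.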
Photon Mapping looked interesting as an alternative.
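From what I've gathered so far, the core of it reduces to something like the following toy: fire photons from the light, record where they land, and estimate the lighting at a point from the density of nearby photons. Everything here (the photon budget, the gather radius, the single infinite floor) is invented for illustration; a real implementation would store the photons in a kd-tree and do k-nearest-neighbour lookups.

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy photon mapper: a point light above an infinite floor at y = 0.
// A flat list and a fixed gather radius keep the sketch short.
struct Photon { float x, z, power; };

float frand() { return rand() / (float)RAND_MAX; }

int main()
{
    const int   kPhotons = 100000;        // arbitrary photon budget
    const float kLightPower = 100.0f;     // arbitrary total light power
    std::vector<Photon> photons;

    // Emission: shoot photons from the light at (0, 4, 0) in random
    // downward directions and record where they hit the floor.
    for (int i = 0; i < kPhotons; ++i)
    {
        // Random direction in the lower hemisphere (rejection sampling).
        float dx, dy, dz, len;
        do {
            dx = 2 * frand() - 1; dy = 2 * frand() - 1; dz = 2 * frand() - 1;
            len = std::sqrt(dx*dx + dy*dy + dz*dz);
        } while (len > 1.0f || len == 0.0f || dy >= 0.0f);
        dx /= len; dy /= len; dz /= len;

        float t = 4.0f / -dy;             // ray/floor intersection
        photons.push_back({ dx * t, dz * t, kLightPower / kPhotons });
    }

    // Radiance estimate: sum the power of photons within a disc of
    // radius r around the query point, divided by the disc area.
    const float r = 0.25f, r2 = r * r;
    float queryX = 0.0f, queryZ = 0.0f, sum = 0.0f;
    for (const Photon& p : photons)
    {
        float ddx = p.x - queryX, ddz = p.z - queryZ;
        if (ddx*ddx + ddz*ddz < r2) sum += p.power;
    }
    printf("irradiance at origin ~ %f\n", sum / (3.14159f * r2));
    return 0;
}
```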
I've also tried, almost to no avail, to find out how the early Unreal engine (Unreal, Unreal Tournament), Quake II and Quake III managed to create their lightmaps with their editors.
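My best guess, in the absence of any documentation, is that at its simplest it comes down to evaluating direct lighting per lightmap texel. Here's a toy sketch along those lines for a single floor quad and a single point light (no shadow rays or bounces, and all the sizes are invented): each lumel gets an N·L Lambert term with inverse-square falloff, dumped out as a greyscale PGM.

```cpp
#include <cmath>
#include <cstdio>

// Toy lightmap bake: one 64x64 lightmap over a 10x10 unit floor quad
// in the XZ plane, lit by a single point light. Real engines add
// shadow rays and multiple lights; the sizes here are invented.
int main()
{
    const int   LM = 64;                           // lightmap resolution
    const float lx = 2.0f, ly = 3.0f, lz = 2.0f;   // light position
    const float intensity = 20.0f;

    unsigned char texels[LM * LM];
    for (int v = 0; v < LM; ++v)
        for (int u = 0; u < LM; ++u)
        {
            // World position of this lumel on the quad.
            float wx = (u + 0.5f) / LM * 10.0f - 5.0f;
            float wz = (v + 0.5f) / LM * 10.0f - 5.0f;

            // Vector to the light, distance, and Lambert term
            // (the quad's normal is straight up, so N.L = dy/dist).
            float dx = lx - wx, dy = ly, dz = lz - wz;
            float dist2 = dx*dx + dy*dy + dz*dz;
            float ndotl = dy / std::sqrt(dist2);

            float lit = intensity * ndotl / dist2;  // inverse-square falloff
            if (lit > 1.0f) lit = 1.0f;
            texels[v * LM + u] = (unsigned char)(lit * 255.0f);
        }

    // Dump the lightmap as a greyscale PGM for inspection.
    FILE* f = fopen("lightmap.pgm", "wb");
    fprintf(f, "P5\n%d %d\n255\n", LM, LM);
    fwrite(texels, 1, sizeof(texels), f);
    fclose(f);
    return 0;
}
```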
Hopefully by the end of Christmas, I'll be more “enlightened”… Wow, that was almost as bad as the BBC's humour!