Linge Bai, Project 3, CS680 Computer Vision - Results and Extra Credit






1. Calculate Surface Normal:
I used the intensity-weighted least-squares method learned in class to calculate the surface normal, and I tried two different variants. 1) At first, I calculated the surface normal from the grayscale image, but the result was not very good: the output contained sparse white pixels, such as:

To solve this problem, instead of using grayscale only, I calculated the normal using all RGB channels. That is, I modified the functions that calculate the normal and albedo of one pixel to take one more parameter, the color channel (red, green, blue, or grayscale). When constructing the normal image, I used grayscale to calculate the normal. When constructing the albedo image, I calculated the normals for each color channel and computed the per-channel albedo accordingly. With this change, the white pixels disappeared. The depth and uniformly relit images use surface normals calculated from grayscale, while the relit image based on the original albedo values uses surface normals calculated from each RGB channel. I also check for NaN values in the surface normals and set them to 0 to make the system robust.
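The per-pixel solve can be summarized in code. The following is a minimal sketch of intensity-weighted least squares for one pixel, assuming hypothetical inputs L (the known light directions) and I (the observed intensities of that pixel across the input images); it illustrates the technique rather than reproducing the project's exact code.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Solve the 3x3 system A*g = b with Cramer's rule (A is symmetric here).
    static Vec3 solve3x3(const double A[3][3], const double b[3]) {
        auto det = [](const double M[3][3]) {
            return M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                 - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                 + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
        };
        double D = det(A);
        if (std::fabs(D) < 1e-12) return {0.0, 0.0, 0.0};   // degenerate pixel
        double Ax[3][3], Ay[3][3], Az[3][3];
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) { Ax[r][c] = Ay[r][c] = Az[r][c] = A[r][c]; }
            Ax[r][0] = b[r]; Ay[r][1] = b[r]; Az[r][2] = b[r];
        }
        return { det(Ax) / D, det(Ay) / D, det(Az) / D };
    }

    // One pixel: minimize sum_k I_k^2 * (I_k - L_k . g)^2 over g = albedo * normal.
    // Returns g; albedo = |g| and normal = g / |g|.
    Vec3 weightedNormal(const std::vector<Vec3>& L, const std::vector<double>& I) {
        double A[3][3] = {{0.0}}, b[3] = {0.0, 0.0, 0.0};
        for (size_t k = 0; k < L.size(); ++k) {
            double w = I[k] * I[k];                          // intensity weight
            double l[3] = { L[k].x, L[k].y, L[k].z };
            for (int r = 0; r < 3; ++r) {
                for (int c = 0; c < 3; ++c) A[r][c] += w * l[r] * l[c];
                b[r] += w * I[k] * l[r];
            }
        }
        Vec3 g = solve3x3(A, b);
        double len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (std::isnan(len) || len == 0.0) return {0.0, 0.0, 0.0};  // NaN guard, as described above
        return g;
    }

For the grayscale variant this is called once per pixel; for the RGB variant it is called once per color channel, and the albedo for that channel is the length of the resulting g.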


2. Novel view synthesis:
Because the depth information can be recovered from the surface normals, we can extract 3D information from 2D images. I constructed .smf files to record this information. Each .smf file contains the vertices, faces, and vertex normals. Two triangles are constructed from every four adjacent pixels; for example, pixels 00, 01, 11, and 10 form the triangles (00, 01, 11) and (00, 10, 11). A sketch of this mesh construction is given below.
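The sketch below shows how such a mesh can be written in the SMF text format, assuming a hypothetical depth[y][x] array holding the recovered depth for each pixel; vertex normals, which the project's .smf files also contain, are omitted for brevity.

    #include <cstdio>
    #include <vector>

    // depth[y][x] is the recovered depth for pixel (x, y) on a width-by-height grid.
    void writeSmf(const char* path, const std::vector<std::vector<double> >& depth) {
        int h = (int)depth.size();
        int w = (int)depth[0].size();
        FILE* out = std::fopen(path, "w");
        if (!out) return;
        // One vertex per pixel: "v x y z"
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                std::fprintf(out, "v %d %d %f\n", x, y, depth[y][x]);
        // Two triangles per 2x2 pixel block, matching the write-up:
        // (00, 01, 11) and (00, 10, 11). SMF face indices are 1-based.
        for (int y = 0; y < h - 1; ++y) {
            for (int x = 0; x < w - 1; ++x) {
                int v00 = y * w + x + 1;
                int v01 = v00 + 1;
                int v10 = v00 + w;
                int v11 = v10 + 1;
                std::fprintf(out, "f %d %d %d\n", v00, v01, v11);
                std::fprintf(out, "f %d %d %d\n", v00, v10, v11);
            }
        }
        std::fclose(out);
    }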

I used OpenGL to render the .smf file and thus obtain more than a 2D view of each object (buddha, cat, gray, horse, owl, and rock). The code is in the 3DReconstruction directory under the proj3 directory. In the OpenGL program, I enabled lighting so that the object can be viewed better. Keyboard interaction is provided: press 'R' or 'r' to rotate the camera so that more than the back side of the object can be seen. Orthographic projection is used to minimize the effect of silhouette edges, because the implemented method cannot compute the normals correctly at silhouette edges, and consequently the depth reconstruction based on those normals is unreliable there. If we change from orthographic to perspective projection (as provided in the OpenGL program: right-click the mouse and select it from the menu), we can observe these artifacts at the silhouette edges. Another effect is on the shape of the object: it looks warped. A sketch of the projection switch is shown below.
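A minimal sketch of the orthographic/perspective switch in legacy (fixed-function) OpenGL is shown here; the flag, function name, and view extents are hypothetical, not the exact ones in the program.

    #include <GL/glut.h>

    bool useOrtho = true;   // toggled from the right-click menu

    void setProjection(int winW, int winH) {
        double aspect = (double)winW / (double)winH;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        if (useOrtho)
            glOrtho(-aspect, aspect, -1.0, 1.0, 0.1, 100.0);   // orthographic view volume
        else
            gluPerspective(45.0, aspect, 0.1, 100.0);          // perspective view volume
        glMatrixMode(GL_MODELVIEW);
    }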

Some instructions for the OpenGL program (this program is based on CS680 Interactive Computer Graphics assignment 5):
1. The program has a window displaying an .smf model
2. Mouse Interaction (menu): right-click to change the light (white or colored), change the material (white shiny or gold), or change the projection type (orthographic or perspective projection)
3. Keyboard Interaction (sketched in the callback below):
-- Press I or i to zoom in the camera
-- Press O or o to zoom out the camera
-- Press P or p to increase the camera height
-- Press L or l to decrease the camera height
-- Press R or r to rotate the camera
-- Press ESC to quit the program
4. Callback functions: display callback, reshape callback, mouse callback, keyboard callback
5. The program takes one command-line argument, the .smf file. Usage: ./3DReconstruct -f foo.smf
If you use orthographic projection and press 'R' or 'r' to rotate the camera by more than 180 degrees, you can observe the "Ambiguity in Human Perception" discussed in Lecture 9 (slide 21).
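For reference, the key bindings listed above map naturally onto a GLUT keyboard callback. The sketch below assumes hypothetical camera variables (camRadius, camHeight, camAngle); it illustrates the bindings rather than reproducing the program's exact code.

    #include <GL/glut.h>
    #include <cstdlib>

    static float camRadius = 5.0f, camHeight = 0.0f, camAngle = 0.0f;  // hypothetical camera state

    void keyboard(unsigned char key, int x, int y) {
        switch (key) {
            case 'I': case 'i': camRadius -= 0.2f; break;   // zoom in
            case 'O': case 'o': camRadius += 0.2f; break;   // zoom out
            case 'P': case 'p': camHeight += 0.2f; break;   // raise the camera
            case 'L': case 'l': camHeight -= 0.2f; break;   // lower the camera
            case 'R': case 'r': camAngle  += 5.0f; break;   // rotate the camera
            case 27:            std::exit(0);               // ESC quits
        }
        glutPostRedisplay();
    }

    // Registered in main() with: glutKeyboardFunc(keyboard);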

Here are some screenshots:
1. Interface:


2. A look from the side:


3. Comparison between Perspective projection and Orthographic projection:
(perspective) (orthographic)