Single View Stereo

10 Aug 2017

This post details Single View Stereo: a technique that replicates stereo vision with a single camera moved between two positions, where the relative rotation (R) and translation (T) between the views are unknown. It walks through finding corresponding feature points in the two images and estimating the Fundamental matrix (F) with a robust method such as RANSAC. From F and the camera intrinsics, the Essential matrix (E) is derived, and E is decomposed via SVD into four candidate (R, T) combinations. Triangulating points selects the correct pair by requiring the reconstructed 3D points to lie in front of the camera. Finally, the images are rectified, and a block-matching algorithm such as StereoBM produces a disparity map, which is then converted to a depth map. After migrating to a new CMS I couldn't carry this post over, so here is a PDF of the original.
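The PDF is the authoritative write-up, but the pipeline summarised above maps fairly directly onto OpenCV. Below is a minimal Python sketch of the steps, assuming a calibrated camera: the file names, the intrinsics matrix K, the choice of ORB features, and the StereoBM parameters are illustrative placeholders, not the values used in the original post.

```python
import cv2
import numpy as np

# Two views taken with the same camera from different positions (placeholder paths).
img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Camera intrinsics from a prior calibration (example values, not from the post).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

# 1. Find corresponding feature points (ORB keypoints + brute-force matching).
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Fundamental matrix with RANSAC to reject bad correspondences.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

# 3. Essential matrix from F and the intrinsics: E = K^T F K.
E = K.T @ F @ K

# 4./5. Decompose E (SVD internally) and keep the (R, T) pair whose
# triangulated points lie in front of both cameras (cheirality check).
_, R, T, _ = cv2.recoverPose(E, pts1, pts2, K)

# 6. Rectify both views so epipolar lines become horizontal scanlines.
h, w = img1.shape
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, (w, h), R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K, dist, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K, dist, R2, P2, (w, h), cv2.CV_32FC1)
rect1 = cv2.remap(img1, map1x, map1y, cv2.INTER_LINEAR)
rect2 = cv2.remap(img2, map2x, map2y, cv2.INTER_LINEAR)

# 7. Block matching gives a disparity map; reproject it to obtain depth.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(rect1, rect2).astype(np.float32) / 16.0
depth = cv2.reprojectImageTo3D(disparity, Q)[:, :, 2]
```

Note that recoverPose returns a unit-length translation, so with a single moving camera the reconstruction (and therefore the depth map) is only recovered up to an unknown global scale.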