Gopro fisheye

1/10/2024

I have some points that describe positions in a picture taken with a fisheye lens, and I want to convert these points to rectilinear coordinates. I've found a description of how to generate a fisheye effect, but not how to reverse it. There's also a blog post that describes how to use tools to do it.

Input: the original image with fish-eye distortion to fix.
Output: the corrected image (technically also with perspective correction, but that's a separate step).

Alternatively, I could convert the whole image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to apply to a live video feed? My function stub looks like this:

```cpp
Point correct_fisheye(const Point& p, const Size& img)
{
    // to polar coordinates relative to the image centre
    const Point centre = Point(img.width / 2, img.height / 2);
    const Point rel = Point(p.x - centre.x, p.y - centre.y);
    const double theta = atan2((double)rel.y, (double)rel.x);
    double R = sqrt((rel.x * rel.x) + (rel.y * rel.y));  // TODO: correct R here
    const Point ret = Point(centre.x + R * cos(theta), centre.y + R * sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
            p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
```

I discovered how to map a linear lens from destination coordinates to source coordinates, but two questions remain:

1) How do you calculate the radial distance from the centre to go from fisheye to rectilinear? I struggle to reverse it and map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?

2) I also see that my undistortion is imperfect on some lenses, presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination mapping for those lenses? Again, more code than just mathematical formulae, please.

A reply:

The description you mention states that projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by

    R_u = f * tan(theta)

and projection by common fisheye lens cameras (that is, distorted) is modeled by

    R_d = 2 * f * sin(theta / 2)

You already know R_d and theta, and if you knew the camera's focal length f, then correcting the image would amount to computing R_u in terms of R_d and theta: solving the fisheye model for the angle gives theta = 2 * asin(R_d / (2 * f)), so

    R_u = f * tan(2 * asin(R_d / (2 * f)))

Estimating the focal length f can be done by calibrating the camera, or by other means such as letting the user provide feedback on how well the image is corrected, or using knowledge of the original scene.

To solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (and don't forget to check the errata). Then you can use a program such as this one (written with the legacy Python bindings for OpenCV) to reverse the lens distortion:

```python
#!/usr/bin/python
import sys, cv

if len(sys.argv) != 11:
    print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % sys.argv[0]
    sys.exit(1)

src = cv.LoadImage(sys.argv[1])
fx, fy, cx, cy, k1, k2, p1, p2 = map(float, sys.argv[2:10])

intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
cv.Zero(intrinsics)
intrinsics[0, 0], intrinsics[1, 1], intrinsics[2, 2] = fx, fy, 1.0
intrinsics[0, 2], intrinsics[1, 2] = cx, cy

dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
for i, k in enumerate((k1, k2, p1, p2)):
    dist_coeffs[0, i] = k

dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
cv.Remap(src, dst, mapx, mapy,
         cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
# cv.Undistort2(src, dst, intrinsics, dist_coeffs)  # simpler one-call alternative
cv.SaveImage(sys.argv[10], dst)
```

Also note that OpenCV uses a very different lens distortion model from the one described on the page you linked to.

The original poster later provided an alternative of his own: a function that maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates.
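As a sanity check on the algebra: substituting theta = 2*asin(R_d/(2f)) back into the pinhole model should reproduce R_u = f*tan(theta) exactly. A minimal numerical check (the focal-length value here is arbitrary, chosen only for illustration):

```python
import math

def fisheye_radius(f, theta):
    # Fisheye model from the text: R_d = 2*f*sin(theta/2)
    return 2.0 * f * math.sin(theta / 2.0)

def corrected_radius(f, r_d):
    # Invert the fisheye model for theta, then apply the pinhole model:
    # theta = 2*asin(R_d/(2*f)),  R_u = f*tan(theta)
    theta = 2.0 * math.asin(r_d / (2.0 * f))
    return f * math.tan(theta)

f = 700.0  # arbitrary focal length in pixels, illustration only
for theta in (0.1, 0.4, 0.8, 1.2):
    r_d = fisheye_radius(f, theta)
    r_u = corrected_radius(f, r_d)
    assert abs(r_u - f * math.tan(theta)) < 1e-9
```

For every angle tested, undoing the fisheye radius and re-projecting through the pinhole model recovers the rectilinear radius, confirming the inverse formula is consistent with the two projection equations.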
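The post mentions a function mapping destination (rectilinear) coordinates back to source (fisheye-distorted) coordinates, which is the direction an image-warping loop actually needs: for each output pixel, find where to sample in the distorted input. The function itself is not included in this copy, but under the same R_d = 2*f*sin(theta/2) model it can be sketched as below. The function name and focal-length value are illustrative assumptions, not the original code:

```python
import math

def dest_to_source(x, y, width, height, f):
    """Map a destination (rectilinear) pixel to source (fisheye) coordinates.

    Assumed model from the text: pinhole R_u = f*tan(theta),
    fisheye R_d = 2*f*sin(theta/2).
    """
    cx, cy = width / 2.0, height / 2.0
    dx, dy = x - cx, y - cy
    r_u = math.hypot(dx, dy)         # rectilinear radius of this pixel
    if r_u == 0.0:
        return (cx, cy)              # the centre maps to the centre
    theta = math.atan(r_u / f)       # view angle, from the pinhole model
    r_d = 2.0 * f * math.sin(theta / 2.0)  # radius in the distorted image
    scale = r_d / r_u
    return (cx + dx * scale, cy + dy * scale)
```

Because 2*sin(theta/2) < tan(theta) for any angle in (0, pi/2), the source radius is always smaller than the destination radius, so sampling is pulled toward the centre, which is the expected direction for undoing barrel-style fisheye compression.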