Similar Literature
20 similar records found (search time: 31 ms)
1.
A large-FOV (field of view) stereo vision sensor is of great importance in the measurement of large free-form surfaces. Before use, the intrinsic and structure parameters of its cameras must be calibrated. Traditional methods are mainly based on planar or 3D targets, which are usually expensive and difficult to manufacture, especially in large dimensions. By contrast, the method proposed in this paper is based on 1D (one-dimensional) targets, which are easy to operate and highly efficient. First, two 1D targets with multiple feature points are placed randomly, and the cameras acquire multiple images of the targets from different viewing angles. Using the fixed angle between the vectors defined by the two 1D targets, an objective function in the intrinsic parameters is established and then solved by an optimization method. A stereo vision sensor with the two calibrated cameras is then set up, which acquires multiple images of another 1D target, carrying two feature points, in unrestrained motion. The initial values of the structure parameters are estimated by a linear method from the known distance between the two feature points on the 1D target, while the optimal structure and intrinsic parameters of the stereo vision sensor are estimated by a non-linear optimization method that minimizes a cost function involving all the parameters. The experimental results show that the measurement precision of the stereo vision sensor is 0.046 mm at a working distance of about 3500 mm over a measurement area of about 4000 mm×3000 mm. The method is shown to be suitable for calibrating stereo vision sensors in large-scale measurement fields owing to its easy operation and high efficiency.
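The linear estimate of a 3D point from a calibrated stereo pair, as used above for the initial structure-parameter values, can be sketched with standard DLT triangulation. The camera matrices and the test point below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical stereo pair (assumed intrinsics/baseline, not from the paper):
# left camera at the origin, right camera translated along x.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-200.0], [0.0], [0.0]])])

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Project a known 3D point into both views, then recover it.
X_true = np.array([100.0, 50.0, 3500.0])
u1 = P1 @ np.append(X_true, 1); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1); u2 = u2[:2] / u2[2]
X_rec = triangulate(P1, P2, u1, u2)
```

With noiseless projections the DLT solution is exact; in practice it only seeds the non-linear refinement described in the abstract.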

2.
To calibrate a structured light vision sensor, it is necessary to obtain at least four non-collinear feature points that fall on the light stripe plane. We propose a novel method to construct non-collinear feature points for calibrating a structured light vision sensor with a planar calibration object. After the planar calibration object is moved freely within the measuring range of the structured light vision sensor at least twice, all the local world coordinates of the feature points falling on the light stripe plane can be readily obtained on site. The global world coordinates of the non-collinear feature points in the local world coordinate frame can be computed through the three-dimensional (3D) camera coordinate frame. A planar calibration object is designed according to the proposed approach to provide accurate feature points. The experiments conducted on a real structured light vision sensor, consisting of a camera and a single-light-stripe-plane laser projector, reveal that the proposed approach has high accuracy and is practical in vision inspection applications. The proposed approach greatly reduces the cost of the calibration equipment and simplifies the calibration procedure. It advances structured light vision inspection one step from laboratory environments to real-world use.

3.
A new method to calibrate a trinocular vision sensor is proposed, and two main tasks are accomplished in this paper: determining the transformation matrix between each pair of cameras and the trifocal tensor of the trinocular vision sensor. A flexible sphere target with several spherical circles is designed. Owing to the isotropy of a sphere, the trifocal tensor of the three cameras can be determined exactly from the features on the sphere target. The fundamental matrix between each pair of cameras can then be obtained, and a compatible rotation matrix and translation matrix are readily deduced from the singular value decomposition of the fundamental matrix. In the proposed calibration method, image points are not required to be in one-to-one correspondence: once the image points lying on the same feature are obtained, the transformation matrix between each pair of cameras, together with the trifocal tensor of the trinocular vision sensor, can be determined. Experimental results show that the proposed calibration method yields precise results for both measurement and matching. The root mean square error of distance is 0.026 mm over a field of view of about 200×200 mm, and the feature matching across the three images is strict. As the projection of a sphere is independent of its orientation, the calibration method is robust and easy to operate. Moreover, it also provides a new approach to obtaining the trifocal tensor.
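The recovery of a compatible rotation and translation from a two-view relation via singular value decomposition, as mentioned above, can be sketched for the calibrated (essential-matrix) case. The ground-truth pose below is an assumed example; the four-candidate enumeration follows the standard SVD factorization:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def decompose_essential(E):
    """Four (R, t) candidates from an essential matrix via SVD."""
    U, s, Vt = np.linalg.svd(E)
    # Flipping the null-direction column/row (singular value 0) keeps E intact
    # while making U and V proper rotations.
    if np.linalg.det(U) < 0:
        U[:, 2] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[2, :] *= -1
    W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Assumed ground-truth relative pose: 10 deg rotation about y, unit translation.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([1.0, 0.2, 0.1]); t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true

candidates = decompose_essential(E)
ok = any(np.allclose(R, R_true, atol=1e-6) and
         np.allclose(abs(t @ t_true), 1.0, atol=1e-6)
         for R, t in candidates)
```

In practice the correct candidate among the four is selected by a cheirality (points-in-front-of-both-cameras) test.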

4.
The problem in calibrating a structured light vision sensor is that it is difficult to obtain the coordinates of world calibration points falling on the light stripe plane. In this paper, we present a novel method that addresses this problem by randomly moving a 1D (one-dimensional) target within the sensor's view field. At each position where the target is set, the world coordinates of one calibration point on the light stripe plane can be obtained from at least three preset known points on the 1D target by a proposed two-stage technique. Thus, as long as the 1D target is set at no fewer than three different positions, at least three such calibration points can be obtained to perform the structured light vision sensor calibration. The simulations and real experiments conducted reveal that the proposed approach has an accuracy of up to 0.065 mm. The advantages of the proposed method are: (1) the 1D target is easily machined with high accuracy, which reduces the cost of the calibration equipment; (2) the method simplifies the calibration operation and is convenient for on-site calibration; (3) the method is suitable for use in confined spaces.

5.
Structured light 3D vision inspection is a commonly used method for various 3D surface profiling techniques. In this paper, a novel approach is proposed to generate sufficient calibration points with high accuracy for structured light 3D vision. This approach is based on a flexible calibration target composed of a photo-electrical aiming device and a 3D translation platform. An improved back-propagation (BP) neural network algorithm is also presented and successfully applied to the calibration of structured light 3D vision inspection. Finally, using the calibration points and the improved BP algorithm, the best network structure is established. The training accuracy of the best BP network structure is 0.083 mm, and its testing accuracy is 0.128 mm.
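The core of any BP-network calibration is a small multilayer perceptron trained by gradient descent to learn a non-linear mapping. The sketch below is only the generic BP idea on a toy mapping (learning y = x²), not the paper's improved algorithm or its calibration data:

```python
import numpy as np

# Toy data for a one-hidden-layer BP network (assumed mapping, for illustration).
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 64).reshape(-1, 1)
Y = X ** 2

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)   # 8 tanh hidden units
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - Y) ** 2)        # mean squared error before training

lr = 0.1
for _ in range(5000):
    H, out = forward(X)
    g = 2 * (out - Y) / len(X)           # dLoss/dOutput
    gW2 = H.T @ g;  gb2 = g.sum(0)
    gH = g @ W2.T * (1 - H ** 2)         # back-propagate through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2       # gradient-descent weight update
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
loss1 = np.mean((out1 - Y) ** 2)         # mean squared error after training
```

In the calibration setting, the inputs would be image coordinates and the outputs 3D world coordinates of the calibration points.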

6.
Based on the principle of binocular stereo vision 3D reconstruction, a 3D gray-scale reconstruction technique is adopted in which feature matching is achieved by active scanning. Structured light with characteristic features scans the object surface, and two pre-calibrated imaging sensors capture its images. An image-processing program extracts the feature points and completes the feature matching, after which the 3D profile of the object surface is computed. At the same time, the gray-scale information from the imaging sensors is mapped onto the corresponding feature points, so that both the 3D information and the color information of the feature points are reconstructed and matched. This technique lays the foundation for the reconstruction of realistic 3D color scenes.

7.
A novel two-dimensional (2D) pattern for camera calibration is presented. With one feature circle located at the center, an array of circles is photo-etched on the pattern. An ellipse recognition algorithm is proposed to acquire the calibration points of interest without human intervention. According to the circle arrangement of the pattern, the relation between the three-dimensional (3D) and 2D coordinates of these points can be established automatically and accurately. These calibration points are used for the intrinsic-parameter calibration of a charge-coupled device (CCD) camera with Tsai's method. A series of experiments has shown that the algorithm is robust and reliable, with a calibration error of less than 0.4 pixel. The new calibration pattern and ellipse recognition algorithm can be widely used in computer vision.
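A circle on the pattern projects to an ellipse in the image, and its center can be recovered by a least-squares conic fit over the detected edge points. The sketch below uses an assumed synthetic ellipse, not the paper's recognition pipeline:

```python
import numpy as np

def fit_conic_center(x, y):
    """Least-squares conic fit a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0,
    then the ellipse centre from the gradient equations."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]            # null vector = conic coefficients
    # Centre solves: 2a*x + b*y + d = 0 and b*x + 2c*y + e = 0.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy

# Assumed synthetic ellipse: centre (3, 2), semi-axes 5 and 2, rotated 30 deg.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
ca, sa = np.cos(np.pi / 6), np.sin(np.pi / 6)
x = 3 + 5 * np.cos(t) * ca - 2 * np.sin(t) * sa
y = 2 + 5 * np.cos(t) * sa + 2 * np.sin(t) * ca
cx, cy = fit_conic_center(x, y)
```

With noise-free points the recovered center is exact; on real edge data the same fit is applied to sub-pixel edge locations.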

8.
A high-accuracy on-site calibration method for line structured light vision sensors   Cited by: 13 (self-citations: 1, others: 12)
To overcome the limitations of existing calibration methods for line structured light vision sensors, a calibration method is proposed that does not require solving for calibration points on the light plane. From the light stripe image, the Plücker matrix of the stripe on the planar target is computed in the camera coordinate frame. The planar target is placed several times at suitable positions in front of the vision sensor, the Plücker matrices of all the stripe lines in space are combined, and the equation of the light plane in the camera coordinate frame is solved. Finally, a non-linear optimization yields the optimal light-plane equation under the maximum-likelihood criterion. Since all light stripe points participate in the computation of the light-plane parameters, the calibration results are highly accurate and robust. Experiments show that, compared with existing methods, the calibration accuracy is improved by about 30%.
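The final step of any such calibration is to express the light plane as a single plane equation fitted to all stripe data. A minimal least-squares plane fit (via SVD of the centered points) can be sketched as follows; the plane coefficients and sample grid are assumed for illustration:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through 3D points.
    Returns unit normal n and offset d such that n . p + d = 0."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]                      # direction of least variance = plane normal
    return n, -n @ c

# Assumed synthetic stripe points on the plane 0.2x + 0.1y - z + 5 = 0.
xs, ys = np.meshgrid(np.linspace(-50, 50, 5), np.linspace(-50, 50, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       0.2 * xs.ravel() + 0.1 * ys.ravel() + 5])
n, d = fit_plane(pts)

n_ref = np.array([0.2, 0.1, -1.0]); n_ref /= np.linalg.norm(n_ref)
```

Because every stripe point contributes to the fit, this kind of total-least-squares estimate is what makes using all stripe points (rather than a few control points) pay off in accuracy.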

9.
Line structured light vision sensor (LSLVS) calibration establishes the relation between the camera and the light plane projector. This paper proposes a geometrical calibration method for an LSLVS via three parallel straight lines on a 2D target, based on the properties of vanishing points and vanishing lines. During the calibration, one important step is to determine the normal vector of the light plane, and another critical step is to obtain the distance parameter d of the light plane. In this paper, we emphasize the latter. The distance constraint of the parallel straight lines is used to compute a 3D feature point on the light plane, yielding the parameter d. Thus, the equation of the light plane in the camera coordinate frame (CCF) can be solved. To evaluate the performance of the algorithm, possible factors affecting the calibration accuracy are taken into account, and mathematical formulations for error propagation are derived. Both computer simulations and real experiments have been carried out to validate the method, and the RMS error of the real calibration reaches 0.134 mm within a field of view of 500 mm × 500 mm.
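The vanishing-point property used above has a compact homogeneous-coordinate form: the image line through two points is their cross product, two lines meet at the cross product of the line vectors, and parallel 3D lines with direction d meet at the vanishing point K·d. The intrinsics and line geometry below are assumed for illustration:

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

def project(X):
    """Pinhole projection to a homogeneous image point."""
    x = K @ X
    return x / x[2]

# Two parallel 3D lines (camera frame) sharing direction d, two samples each.
d = np.array([1.0, 0.5, 2.0])
p0, q0 = np.array([0.0, 0.0, 10.0]), np.array([3.0, -1.0, 12.0])
a1, a2 = project(p0), project(p0 + 4 * d)
b1, b2 = project(q0), project(q0 + 7 * d)

# Homogeneous line through two points = cross product;
# intersection of two lines = cross product of the line vectors.
l1 = np.cross(a1, a2)
l2 = np.cross(b1, b2)
v = np.cross(l1, l2); v = v / v[2]

v_expected = K @ d; v_expected = v_expected / v_expected[2]
```

The normal vector of the light plane then follows from such vanishing points; the distance parameter d of the plane still needs the metric constraint the abstract describes.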

10.
Laser displacement sensors (LDSs) generally use a triangulation measurement model. However, the non-linearity of the triangulation model degrades the measurement accuracy of the LDS, and the geometric calibration of the LDS components is tedious. In this paper, we present a vision measurement model of the LDS based on the perspective projection principle, together with a corresponding calibration method. A planar target with featured lines is moved by a 2D platform to preset known positions. At each position, the world coordinates of the calibration points are obtained by the cross-ratio invariance principle, and the linear-array camera of the LDS collects target images. Simulations verify the effectiveness of the proposed model and the feasibility of the calibration method. The experimental results indicate a calibration accuracy of 0.026 mm. Compared with the traditional measurement model, the vision measurement model of the LDS is more comprehensive and avoids a linear approximation procedure, and the corresponding calibration method is easily implemented.
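Cross-ratio invariance, the principle used to obtain world coordinates above, states that the cross ratio of four collinear points is preserved by any projective transformation. A minimal numerical check, with an assumed 1D homography and point set:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given as scalar positions."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# Assumed projective transform of the line (1D homography) and four points.
H = np.array([[2.0, 1.0], [0.5, 3.0]])
pts = np.array([0.0, 1.0, 3.0, 7.0])
mapped = (H[0, 0] * pts + H[0, 1]) / (H[1, 0] * pts + H[1, 1])

cr1 = cross_ratio(*pts)      # before projection
cr2 = cross_ratio(*mapped)   # after projection
```

Given three points with known spacing on the target and their images, this invariance lets the position of a fourth point be solved without any metric knowledge of the camera.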

11.
Camera calibration is the most essential, and usually the first, step in computer vision applications, and the results depend heavily on the accuracy of feature extraction. Conventional feature extraction methods suffer from perspective and lens distortion. In this paper, a novel feature extraction method is proposed that uses groups of fringe patterns as the calibration target. Each group comprises six sinusoidal fringe patterns: three are used for calculating vertical phases and the other three for horizontal phases. A three-step phase-shift algorithm is used for wrapped-phase calculation. Feature points are then detected with a 2D phase-difference pulse detection method and refined by simple interpolation. Finally, camera calibration is performed using these features as control points; experimental results indicate that the proposed method is accurate and robust.
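The three-step phase-shift recovery mentioned above follows from the identities for intensities shifted by ±2π/3: with I1 = A + B·cos(φ − 2π/3), I2 = A + B·cos(φ), I3 = A + B·cos(φ + 2π/3), one gets I1 − I3 = √3·B·sin(φ) and 2·I2 − I1 − I3 = 3·B·cos(φ), so φ = atan2(√3·(I1 − I3), 2·I2 − I1 − I3). The background and modulation values below are assumed:

```python
import numpy as np

A, B = 128.0, 100.0                    # assumed background and modulation
phi = np.linspace(-3.0, 3.0, 200)      # true phase, inside (-pi, pi)

# Three fringe images with phase shifts of -2*pi/3, 0, +2*pi/3.
I1 = A + B * np.cos(phi - 2 * np.pi / 3)
I2 = A + B * np.cos(phi)
I3 = A + B * np.cos(phi + 2 * np.pi / 3)

# Wrapped phase: both A and B cancel out of the ratio.
phi_rec = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)
```

Note that both the background A and the modulation B cancel, which is why phase-based features are robust to illumination changes.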

12.
Calibration of binocular vision based on matching of synthesized concentric-circle images   Cited by: 6 (self-citations: 1, others: 5)
侯俊捷, 魏新国, 孙军华. 《光学学报》 (Acta Optica Sinica), 2012, 32(3): 315003-161
The mathematical model of a binocular vision sensor is analyzed, and a calibration method for binocular vision sensors based on matching of synthesized concentric-circle images is proposed. A concentric-circle target is placed arbitrarily several times within the measurement range, and the two cameras capture images of the target. According to the camera model and the known positions of the concentric circles in the target coordinate frame, synthesized images are constructed and matched for similarity against the observed images, and the image coordinates of the center of each circle on the target are obtained by optimized localization. Using the corresponding center image coordinates in the left and right images and the constraints of binocular vision, the parameters of the binocular vision sensor are optimized non-linearly to obtain the optimal solution. The proposed calibration method builds on the theory of Zhang's method and exploits the global information of the image in the optimization. Experimental results show that the method improves the calibration accuracy.

13.
Camera calibration from a single image of two orthogonal 1D objects   Cited by: 2 (self-citations: 0, others: 2)
薛俊鹏, 苏显渝. 《光学学报》 (Acta Optica Sinica), 2012, 32(1): 115001-159
A new camera calibration method is proposed that uses a "T"-shaped target composed of two orthogonal 1D objects. The method requires only a single image of five points with known coordinates on the "T"-shaped target. Based on the flexible-target principle, four collinear points composed of virtual points and marker points are computed, and the first-order radial distortion parameter of the lens is calibrated using the preservation of incidence and collinearity under projective transformation together with cross-ratio invariance. The image is then corrected with the known distortion parameter, and the intrinsic and extrinsic camera parameters are calibrated by a coordinate-transformation method based on the two orthogonal 1D objects. The lens distortion parameter is solved linearly, avoiding the parameter coupling that arises in the non-linear iterative optimization of traditional methods. Experiments show that without lens distortion correction the camera calibration accuracy becomes unstable as image noise increases, whereas after distortion correction, optimizing the initial values obtained from the simple calibration yields stable, high-accuracy results. The experimental setup is simple and easy to operate: lens distortion and the intrinsic and extrinsic camera parameters are calibrated from a single image, permitting real-time operation.
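The first-order radial distortion model referred to above maps a normalized image point x to x·(1 + k1·r²) with r² = x² + y²; once k1 is known, the image can be undistorted by a simple fixed-point iteration. The coefficient and test point below are assumed for illustration:

```python
import numpy as np

k1 = -0.25   # assumed first-order radial distortion coefficient

def distort(p):
    """Apply first-order radial distortion in normalized coordinates."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1 + k1 * r2)

def undistort(pd, iters=50):
    """Invert the distortion by fixed-point iteration: p <- pd / (1 + k1*r(p)^2)."""
    p = pd.copy()
    for _ in range(iters):
        r2 = p[0] ** 2 + p[1] ** 2
        p = pd / (1 + k1 * r2)
    return p

p_true = np.array([0.3, -0.2])
p_rec = undistort(distort(p_true))
```

For moderate distortion the iteration is a contraction and converges in a handful of steps; the linear solution for k1 itself is the contribution of the paper, not shown here.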

14.
Active omnidirectional vision offers wide field-of-view (FOV) imaging and can capture an entire 3D environment scene, which is promising for robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the line laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D encoded pattern using a projector and a curved mirror; however, the astigmatism of the curved mirror causes low-accuracy reconstruction. To solve these problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction, so that an entire profile of the observed scene can be obtained at high accuracy, free of astigmatism. The proposed sensor is calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. Moreover, the reconstruction of objects with different shapes by the developed sensor is also verified.

15.
A calibration method for the laser plane in active vision measurement is presented, with an objective function generated from a uniform horizontal height. A height target is developed with a center mark as the initial point of the uniform height. The height target is placed on the horizontal plane of the 3D calibration board, so that the horizontal plane serves as the terminal of the uniform horizontal height. Based on the pinhole camera model and the laser-plane equation, an objective function is formulated to find the optimal coefficients of the laser-plane equation; its goal is to minimize the difference between the uniform height and the height reconstructed from the feature points of the target. The objective function is optimized by local particle swarm optimization. In the experiments, the calibrated global equation of the laser plane is obtained at the optimal objective-function value of 1.153 × 10³. Two projected laser lines of the calibrated laser plane coincide with the original laser lines in the image. The reconstruction errors of the calibration plane are also analyzed in the discussion.

16.
Calibration of a stereo vision system plays an important role in machine vision applications. Existing accurate calibration methods usually capture a high-accuracy calibration target of the same size as the measurement view. In in-situ 3D measurement and large-field-of-view measurement, the extrinsic parameters of the system usually need to be calibrated in real time, and a large high-accuracy calibration target is a major manufacturing challenge. An accurate and rapid in-situ calibration method is therefore needed. In this paper, a novel calibration method for stereo vision systems is proposed based on phase-based matching and the bundle adjustment algorithm. As the camera is usually mechanically locked once adjusted after laboratory calibration, the intrinsic parameters are usually stable, so we focus on extrinsic-parameter calibration in the measurement field. First, a matching method based on the heterodyne multi-frequency phase-shifting technique finds thousands of pairs of corresponding points between the images of the two cameras; the large number of correspondences helps improve the calibration accuracy. Then the bundle adjustment method from photogrammetry is used to optimize the extrinsic parameters and the 3D coordinates of the measured objects. Finally, quantity traceability transforms the optimized extrinsic parameters from the 3D metric coordinate system into a Euclidean coordinate system to obtain the final optimal extrinsic parameters. Experimental results show that the calibration procedure takes less than 3 s, and that with a stereo vision system calibrated by the proposed method, the measurement RMS (root mean square) error reaches 0.025 mm when measuring a calibrated gauge with a nominal length of 999.576 mm.

17.
On-site calibration of a stereo vision measurement system based on an optical reference bar   Cited by: 9 (self-citations: 5, others: 4)
Accurate calibration of the stereo vision system is increasingly important for the accurate measurement of large, complex workpieces. To overcome the complexity and outdoor impracticality of traditional stereo camera calibration, a flexible and effective calibration technique for stereo vision measurement systems based on an optical reference bar is proposed. The reference bar carries three infrared LEDs with known spacing as feature points in each of the horizontal and depth directions. The optical reference bar is moved to different positions and orientations within the measurement range while the two cameras simultaneously capture images of its feature points. Based on the matched feature image points and the epipolar constraint, the stereo vision measurement system is calibrated rapidly using a linear algorithm followed by Levenberg-Marquardt (LM) iteration. The scale factor of the translation between the two cameras is determined from the known distances between the feature points on the reference bar. During calibration, the light intensity is controlled automatically and the exposure time is optimized so that the light-spot images have uniform intensity at different positions, giving a high signal-to-noise ratio and improving the calibration accuracy. Experimental results show that the method is flexible and effective and that on-line calibration achieves high accuracy; applied to a practical large-volume 3D measurement system, the on-site calibration yields a maximum measurement error of 0.18 mm.

18.
A neural-network-based calibration method for vision systems   Cited by: 3 (self-citations: 1, others: 2)
To address several problems in camera calibration, a binocular vision system calibration method based on a neural network is proposed according to the principles of stereo vision. The effective field of view of the binocular cameras is analyzed to determine the area covered by a single measurement, and the disparity of the image pair is taken as the network input to establish a non-linear mapping between the world coordinates of spatial points and their image coordinates. The system can thus extract the 3D information of an object directly, without the complex calibration of intrinsic and extrinsic camera parameters, which increases its flexibility. Experiments prove the method effective and feasible.

19.
Machine vision is now widely used to measure the coordinates of bullet impact points. To avoid the complex calibration and computation of binocular vision, an impact-point measurement method based on monocular vision is proposed. Image processing locates the pixel coordinates of the four corner points of the target surface and of the impact point; the pose of the target surface relative to the camera coordinate frame is solved with a rectangular P4P method; and the actual coordinates of the impact point on the target surface are computed from the monocular imaging principle. The coordinates of bullet holes on the target surface were successfully measured in experiments, with a maximum measurement error of 2.3 mm. The results show that the method can rapidly and accurately measure the pose relation between the target surface and the camera as well as the impact-point coordinates.
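Solving the pose of a planar rectangle from four image points (a rectangular P4P problem) can be sketched via the plane-to-image homography: H = s·K·[r1 r2 t] for a target plane Z = 0. The intrinsics, pose, and corner layout below are assumed for illustration, not the paper's setup:

```python
import numpy as np

K = np.array([[900.0, 0, 320], [0, 900.0, 240], [0, 0, 1]])  # assumed intrinsics

def homography_dlt(obj, img):
    """DLT estimate of the homography mapping planar points (X, Y) to pixels."""
    A = []
    for (X, Y), (u, v) in zip(obj, img):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(H, K):
    """Planar pose (R, t) from H = s*K*[r1 r2 t], target plane Z = 0."""
    B = np.linalg.inv(K) @ H
    if B[2, 2] < 0:                  # fix overall sign: target in front of camera
        B = -B
    lam = 1.0 / np.linalg.norm(B[:, 0])
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t

# Assumed ground truth: small rotation about z, target 5 units from the camera.
th = np.deg2rad(5)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 5.0])
obj = [(-1.0, -0.5), (1.0, -0.5), (1.0, 0.5), (-1.0, 0.5)]  # rectangle corners, Z = 0
img = []
for X, Y in obj:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    img.append((p[0] / p[2], p[1] / p[2]))

R, t = pose_from_homography(homography_dlt(obj, img), K)
```

With exact correspondences the recovered rotation columns are already orthonormal; with noisy corners an orthogonalization (e.g. via SVD) of R is the usual extra step.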

20.
Conventional camera calibration methods require well-focused target images for accurate feature detection. This requirement poses many challenges for long-range vision systems, owing to the difficulty of fabricating a target of a size equivalent to the sensing field of view (FOV). This paper presents an out-of-focus color camera calibration method that uses one normal-sized color-coded pattern as the calibration target. The red, green and blue (RGB) channels of the color pattern are encoded with three phase-shift circular grating (PCG) arrays. The PCG centers, used as feature points, are extracted by ellipse fitting of the 2π-phase points. Experiments demonstrate that the proposed method achieves accurate calibration results even under severe defocus.
