Similar Literature
20 similar records found.
1.
Camera calibration is the most essential and usually the first step in computer vision applications. The results depend strongly on the accuracy of feature extraction, and conventional feature extraction methods suffer from perspective and lens distortion. In this paper, a novel feature extraction method is proposed that uses groups of fringe patterns as the calibration target. Each group comprises six sinusoidal fringe patterns: three are used for calculating vertical phases and the other three for horizontal phases. A three-step phase-shift algorithm is used to calculate the wrapped phase. Feature points are then detected with a 2D phase-difference pulse detection method and refined by simple interpolation. Finally, the camera is calibrated using these features as control points. Experimental results indicate that the proposed method is accurate and robust.
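The three-step phase-shift computation can be sketched as follows. This is a minimal NumPy sketch assuming 120° phase shifts, a common convention; the abstract does not specify the paper's exact shift scheme.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts of -120, 0, +120 degrees.

    Assumes the common convention I_k = A + B*cos(phi + d_k) with
    d_k in (-2*pi/3, 0, +2*pi/3); the paper's convention may differ.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: recover a known phase from three shifted intensities.
phi_true = 0.7
a, b = 100.0, 50.0
i1, i2, i3 = (a + b * np.cos(phi_true + d)
              for d in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0))
phi = three_step_phase(i1, i2, i3)
```

Applied pixel-wise to the three images of one fringe group, this yields the wrapped phase map from which the feature points are detected.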

2.
Conventional camera calibration methods require well-focused target images for accurate feature detection. This requirement poses many challenges for long-range vision systems, because it is difficult to fabricate a target comparable in size to the sensing field of view (FOV). This paper presents an out-of-focus color camera calibration method that uses one normal-sized color-coded pattern as the calibration target. The red, green and blue (RGB) channels of the color pattern are encoded with three phase-shift circular grating (PCG) arrays. The PCG centers, used as feature points, are extracted by ellipse fitting of the 2π-phase points. Experiments demonstrate that the proposed method achieves accurate calibration results even under severe defocus.

3.
To calibrate a structured light vision sensor, it is necessary to obtain at least four non-collinear feature points that lie on the light stripe plane. We propose a novel method to construct such non-collinear feature points using a planar calibration object. After the planar object is moved freely within the measuring range of the sensor at least twice, all the local world coordinates of the feature points lying on the light stripe plane can readily be obtained on site. The global world coordinates of the non-collinear feature points can then be computed from their local world coordinates through the three-dimensional (3D) camera coordinate frame. A planar calibration object is designed according to the proposed approach to provide accurate feature points. Experiments on a real structured light vision sensor, consisting of a camera and a single-light-stripe-plane laser projector, show that the approach is accurate and practical for vision inspection applications. It greatly reduces the cost of the calibration equipment, simplifies the calibration procedure, and advances structured light vision inspection one step from laboratory environments toward real-world use.

4.
Calibrating camera radial distortion with cross-ratio invariability
The calibration of camera distortion plays an important role in industrial machine vision applications. In this paper, a novel approach for calibrating camera radial distortion is presented, based on the cross-ratio invariability of perspective projection. Assuming first-order radial distortion, the approach needs only the image coordinates and the cross-ratio of four collinear points in space. The cross-ratio, easily obtained from a calibration target, is identical to that of the four corresponding image points; this is the cross-ratio invariability of perspective projection. A univariate quadratic equation is built from this invariability, which gives an accurate solution for the radial distortion coefficient. A digital simulation and a practical image correction show the approach to be simple, accurate, efficient and fast.
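The cross-ratio invariability the method relies on is easy to verify numerically. Below is a minimal sketch; the point coordinates and the projective map are illustrative, not from the paper, and solving for the distortion coefficient itself is the paper's contribution and is not shown.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear points given by 1D coordinates."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# A perspective projection restricted to a line acts as a 1D projective map
# x -> (p*x + q) / (r*x + s); the cross-ratio is invariant under it.
pts = np.array([0.0, 1.0, 3.0, 7.0])
p, q, r, s = 2.0, 1.0, 0.1, 1.5
mapped = (p * pts + q) / (r * pts + s)
cr_world = cross_ratio(*pts)
cr_image = cross_ratio(*mapped)
```

A distorted image point violates this equality, which is what allows the distortion coefficient to be estimated from the discrepancy.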

5.
The global calibration of multiple vision sensors (MVS) with non-overlapping views has been widely studied. In this paper, a novel calibration method for MVS with non-overlapping fields of view based on a 1D target is presented. First, two neighboring vision sensors are selected, and the rotation matrix between them is computed using the collinearity of the feature points on the 1D target. The translation vector is then computed from the known distances between the feature points on the 1D target. Global calibration of all vision sensors is achieved by repeating this pair-wise calibration on different pairs of sensors. Owing to the small size and mobility of the 1D target, the proposed method can be applied to vision sensors distributed over a large area or in a narrow space. Experimental results show that the RMS error of the global calibration is within 0.060 mm.
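Repeating the pair-wise step implies chaining the resulting rigid transforms into one global frame. A minimal sketch of that composition (the frame names and sample values are illustrative):

```python
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Compose rigid transforms: if x_b = R_ab @ x_a + t_ab and
    x_c = R_bc @ x_b + t_bc, then x_c = R_ac @ x_a + t_ac."""
    return R_bc @ R_ab, R_bc @ t_ab + t_bc

# Check the composition against direct application on a sample point.
R_ab = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])   # 90 degrees about z
t_ab = np.array([1.0, 0.0, 0.0])
R_bc = np.eye(3)
t_bc = np.array([0.0, 1.0, 0.0])
R_ac, t_ac = compose(R_ab, t_ab, R_bc, t_bc)

x_a = np.array([1.0, 2.0, 3.0])
x_c_direct = R_bc @ (R_ab @ x_a + t_ab) + t_bc
x_c_chained = R_ac @ x_a + t_ac
```

Each pair-wise result contributes one (R, t) link; composing the links along a path expresses every sensor in the frame of a chosen reference sensor.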

6.
于之靖  潘晓 《光学学报》2012,32(11):1112003
A camera intrinsic-parameter calibration method based on constructing an initial measurement network is proposed, which effectively overcomes the lack of depth information of 2D planar targets and the spatial limitations of 3D targets. The target board is imaged initially and an initial measurement network is established according to the construction principles of measurement networks; the exterior orientation is solved by resection, the 3D coordinates of the target points in space by intersection, and finally the camera intrinsic parameters by bundle adjustment. The calibrated intrinsic parameters were used to compute spatial point coordinates. Experimental results show that the mean error with the constructed initial measurement network is 0.0794, better than the -0.2443 and -0.1916 obtained with planar-target and stereo-target calibration, and the calibration time is also markedly shorter than that of virtual stereo calibration. The method is thus fast, accurate and convenient, and meets the requirements for on-site calibration of camera intrinsic parameters in large-space vision measurement.

7.
赵美蓉  李瀚辰  佟颖 《光学技术》2017,43(5):385-393
To address the joint calibration of an infrared camera and a visible-light camera, a planar target usable for calibrating both cameras simultaneously was designed by exploiting the difference in infrared radiant exitance of different materials. The maximally stable extremal regions (MSER) algorithm is used to detect the hollowed dark regions of the target. To cope with possibly large image distortion, corner locations are predicted from the positional relationship between the centroids of adjacent maximally stable extremal regions, corner detection is performed within the predicted regions, and sub-pixel corner positions are extracted for calibration. The results show that the new target and the corresponding corner-extraction method satisfy the needs of intrinsic and extrinsic calibration for both infrared and visible-light cameras. The fusion and reconstruction results of an infrared/visible dual-binocular stereo vision system show that the calibration data meet the system requirements.

8.
Research on a camera calibration method based on straight-line features
A method for solving the interior and exterior orientation parameters of a camera based on straight-line features is proposed. First, the radial alignment constraint (RAC) is used to solve most of the exterior orientation parameters, including the rotation matrix R and the translation components tx and ty; a distortion model is then introduced to solve the interior orientation parameters and the translation component tz. The method uses a coplanar-point calibration object together with straight-line feature constraints, and requires only a single image to calibrate all interior and exterior orientation parameters. Experimental results show that it is an accurate, simple and practical camera calibration method.

9.
Feature point extraction method for dot-array calibration pattern images
In camera calibration, the accuracy of extracting feature points from calibration pattern images is an important factor affecting the accuracy of the calibrated intrinsic and extrinsic parameters. A dot-array calibration pattern was fabricated to meet the requirements of a fringe-projection 3D profilometry system. Based on edge theory in image processing and the analytic properties of the circle, a new method for extracting feature points from dot-array calibration pattern images is proposed, using a coordinate-transformation idea, a circle-family description of point-circle correspondences, and statistical theory. Experiments verify the correctness and feasibility of the method, laying the groundwork for subsequent solution of the camera's intrinsic and extrinsic parameters.

10.
Camera calibration technique for large field-of-view vision measurement
杨博文  张丽艳  叶南  冯新星  李铁林 《光学学报》2012,32(9):915001-174
A new camera calibration method for high-accuracy, large field-of-view vision measurement is proposed. A single brightness-adaptive infrared light-emitting diode (IR-LED) is used as the target point; it is fixed on the probe of a coordinate measuring machine and moved precisely to preset spatial positions in sequence, with the camera capturing an image of the target at each position. The precise displacements of the coordinate measuring machine thus form a virtual 3D target in space. Since a single virtual 3D target covers only a small part of the calibration volume of a large field-of-view camera, the camera is freely moved to photograph the virtual target from multiple positions, so that multiple virtual targets are distributed throughout the calibration volume. Each camera position yields one set of intrinsic and extrinsic parameters. Taking the intrinsic parameters and the position and orientation of each virtual target in the camera coordinate frame as optimization variables, an objective function minimizing the sum of squared reprojection errors of all 3D target points is established, and the optimal calibration parameters are solved by nonlinear optimization. The method addresses the difficulty of fabricating large targets and of guaranteeing calibration accuracy in large field-of-view vision measurement. Both simulations and real calibration experiments demonstrate that the method effectively improves the calibration accuracy of large field-of-view cameras.
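The reprojection-error objective at the heart of such bundle-adjustment formulations can be sketched as follows. This is a minimal pinhole-model sketch with lens distortion omitted; the variable names and sample values are illustrative, not from the paper.

```python
import numpy as np

def reproject(K, R, t, X):
    """Project 3D world points X (N x 3) through a pinhole camera [R | t]
    with intrinsic matrix K; returns N x 2 pixel coordinates."""
    Xc = X @ R.T + t              # world -> camera frame
    xy = Xc[:, :2] / Xc[:, 2:3]   # perspective division
    return xy @ K[:2, :2].T + K[:2, 2]

def reprojection_sse(K, R, t, X, uv_obs):
    """Sum of squared reprojection errors, the quantity minimized in bundle adjustment."""
    return float(np.sum((reproject(K, R, t, X) - uv_obs) ** 2))

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
uv = reproject(K, R, t, X)
```

In the method above, `reprojection_sse` summed over all virtual-target positions is the objective a nonlinear solver (e.g. Levenberg-Marquardt) would minimize over the intrinsic and per-position pose parameters.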

11.
Analysis of factors affecting camera calibration accuracy
Camera calibration is an indispensable step in 3D reconstruction, and the quality of the calibration directly determines the quality of the reconstruction and of other computer vision applications. Several factors affecting calibration accuracy are studied, and error distribution curves are given for the focal length, principal point, skew factor and radial distortion parameters under these factors. The curves show that increasing the number of feature points and calibration images, and reducing feature-point extraction error, helps reduce calibration error. The study indicates that using more than 90 feature points and between 4 and 10 calibration images effectively reduces calibration error, providing a reference for choosing the number of calibration images, the number of feature points, and the extraction algorithm in camera calibration.

12.
Vision-based calibration of angular displacement sensors for aircraft control surfaces
侯宏录  周德云 《光子学报》2007,36(2):359-363
A non-contact, monocular computer-vision method is proposed for calibrating the angular displacement sensors of an aircraft's ailerons, flaps, rudder and elevators. A calibration target composed of two coplanar feature circles is fixed on the control surface and photographed with a fixed-focus digital camera to obtain digital images of the feature circles. Image analysis determines the perspective-projected positions of the circle centers and the projected lengths of the diameters on the image plane; a model relating object-space and image-space coordinates is established from the inverse perspective projection, from which the plane equation of the target surface is derived. The angular displacement of the control surface is then expressed by the direction cosines of the target plane's normal. Simulation results show that the calibration is fast, simple and accurate, requires no prior calibration of the camera's intrinsic or extrinsic parameters, and achieves an accuracy better than 0.2°.

13.
A mathematical model of a single-camera virtual mouse is established, and an on-site calibration method is proposed. Using a physical 2D target as an intermediary and a virtual target generated on the computer screen, the transformation from the camera coordinate frame to the screen coordinate frame is determined, and a mapping from the image coordinates of 3D control points in space to the mouse pointer is established, realizing on-site calibration of a single-camera virtual mouse. The method needs no expensive auxiliary equipment and is simple to operate in the field. Using 50 feature points in a 640 pixel × 480 pixel image at a working distance of 30 cm, with image noise variance up to 1 pixel, the root mean square (RMS) error of mapping the control points to screen coordinates was less than 3 pixels. The method has been applied to a virtual-mouse demonstration program, showing that it is practical and suitable for on-site calibration of a virtual nose-tip mouse.

14.
To address the susceptibility of camera calibration results to external interference and improve calibration accuracy, the pinhole camera model was used with an adaptive corner detection algorithm to extract feature points from target images, taking the mean of the horizontal and vertical reprojection pixel errors as the performance index. Grouped comparison experiments were carried out on three influencing factors: the illuminance of the uniform light source, the number of calibration images, and the size of the checkerboard squares on the target. The results show that choosing a brighter light source improves calibration accuracy by more than 38%; with 156 feature points, only 18 to 22 calibration images are needed; and a relatively small checkerboard square size can improve calibration accuracy by 50%. These results demonstrate the importance of the light source, the number of images and the checkerboard square size for improving camera calibration accuracy.

15.
An automatic calibration approach for multi-camera networks is proposed that calibrates the intrinsic parameters of each camera and the extrinsic parameters between cameras. First, moving objects are tracked, and feature points are detected and matched across the image sequences. The intrinsic parameters of each camera are then estimated by a self-calibration method from the motion of the feature points, together with the rotation and translation of each camera with respect to the object. Third, the extrinsic parameters between cameras are estimated from the rotation and motion of each camera with respect to the object. The method only needs to track the motion of objects in each camera, without correspondences between different cameras, which avoids the difficulty of establishing such correspondences in real networks. Experiments with simulated data and real images verify the theoretical correctness and numerical robustness.

16.
Based on the characteristics of hyperspectral imaging, a scheme for detecting tiny hidden cameras using three-dimensional features is proposed. In the spatial dimensions, the cat-eye effect is used to screen suspected targets; in the spectral dimension, the results are judged precisely. Based on camera structure, the reflectance spectral features of visible-light cameras are analyzed, and the detection range of the system is calculated and simulated using geometrical optics and radiometry. The results show that, in normal operation, the optical power determines the minimum detection distance and the target size determines the maximum detection distance. A spectral-feature verification system for tiny cameras was built. The results show that for targets with an absorptive infrared cut-off filter, the non-reflected-light ratio curve varies gently and remains high, whereas for targets with a reflective infrared cut-off filter, the curve is high in the visible band and low in the infrared, beginning to fall near 700 nm and even changing abruptly; the experimental data show that the absolute slope at the abrupt change is more than 10 times that in the infrared band. The experimental results agree with the analysis and verify the feasibility of detecting tiny cameras with hyperspectral imaging.

17.
In this study, a novel measuring methodology and calibration method for an optical non-contact scanning probe system are proposed and verified by experiments. The optical probe consists of a line laser diode and two charge-coupled device (CCD) cameras, and is mounted on a computer numerical control (CNC) machine to measure workpiece profiles. A space mapping method using the least-squares algorithm is presented for probe calibration and profile measurement. This method provides a simple and accurate calculation of the relationship between a plane in real space and its related image plane in a CCD camera. A transparent grid with regularly spaced nodal points is used to construct the space mapping function; the spatial coordinates of an object can then be obtained from its image in the CCD camera via the mapping function. The measured profile data are smoothed by a B-spline blending function and can be transferred to a CAD/CAM package for industrial applications. Experimental results show that this technique can determine the 3D profile of an object with an accuracy of 60 μm.

18.
Camera calibration from a single image of two orthogonal one-dimensional objects
薛俊鹏  苏显渝 《光学学报》2012,32(1):115001-159
A new camera calibration method using a "T"-shaped target composed of two orthogonal one-dimensional objects is proposed. Only a single image of the five points with known coordinates on the "T"-shaped target is needed. Based on the flexible-target principle, four collinear points consisting of virtual points and marked points are computed, and the first-order radial distortion parameter of the lens is calibrated using the incidence- and collinearity-preserving properties of projective transformations and the invariance of the cross-ratio. After the image is undistorted with the known distortion parameters, the intrinsic and extrinsic camera parameters are calibrated by a coordinate-transformation method based on the two orthogonal one-dimensional objects. The lens distortion parameters are solved linearly, avoiding the parameter coupling that arises in the nonlinear iterative optimization of traditional methods. Experiments show that without lens distortion correction the calibration accuracy becomes unstable as image noise increases, whereas after distortion correction, optimizing the initial values from the simple calibration yields stable, high-accuracy results. The equipment is simple and easy to operate, and a single image suffices to calibrate both the lens distortion and the intrinsic and extrinsic parameters, enabling real-time operation.

19.
A novel two-dimensional (2D) pattern for camera calibration is presented. An array of circles is photo-etched on the pattern, with one feature circle located at the center. An ellipse recognition algorithm is proposed to acquire the calibration points of interest without human intervention. According to the circle arrangement of the pattern, the relation between the three-dimensional (3D) and 2D coordinates of these points can be established automatically and accurately. The calibration points are then used to calibrate the intrinsic parameters of a charge-coupled device (CCD) camera with Tsai's method. A series of experiments shows that the algorithm is robust and reliable, with a calibration error of less than 0.4 pixel. The new calibration pattern and ellipse recognition algorithm can be widely used in computer vision.
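The center-extraction step for circle features might be sketched as a direct least-squares conic fit; this is a generic sketch with synthetic sample points, not the paper's recognition algorithm.

```python
import numpy as np

def conic_center(x, y):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y - 1 = 0 by least squares
    and return the conic center (for an ellipse, the candidate feature point)."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    # The center is where the conic's gradient vanishes.
    return np.linalg.solve([[2.0 * a, b], [b, 2.0 * c]], [-d, -e])

# Synthetic ellipse centered at (3, 2) with semi-axes 2 and 1.
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
cx, cy = conic_center(3.0 + 2.0 * np.cos(theta), 2.0 + np.sin(theta))
```

Note that under perspective a circle's projected ellipse center is not exactly the projected circle center; practical methods correct for this bias.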

20.
Line structured light vision sensor (LSLVS) calibration establishes the relation between the camera and the light-plane projector. This paper proposes a geometrical calibration method for LSLVS using three parallel straight lines on a 2D target, based on the properties of vanishing points and vanishing lines. During calibration, one important step is to determine the normal vector of the light plane; another critical step is to obtain the distance parameter d of the light plane. This paper emphasizes the latter: the distance constraint of the parallel straight lines is used to compute a 3D feature point on the light plane, yielding the parameter d, so that the equation of the light plane in the camera coordinate frame (CCF) can be solved. To evaluate the performance of the algorithm, possible factors affecting the calibration accuracy are taken into account, and mathematical formulations for error propagation are derived. Both computer simulations and real experiments validate the method; the RMS error of the real calibration reaches 0.134 mm within a field of view of 500 mm × 500 mm.
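The vanishing-point computation underlying such methods is conveniently done in homogeneous coordinates; a minimal sketch with illustrative points (not the paper's data):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two homogeneous image points."""
    return np.cross(p, q)

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines; for the images of parallel
    3D lines, this intersection is their vanishing point."""
    v = np.cross(l1, l2)
    return v / v[2]

# Two image lines that meet at (5, 7): images of parallel 3D lines.
l1 = line_through(np.array([0.0, 0.0, 1.0]), np.array([5.0, 7.0, 1.0]))
l2 = line_through(np.array([1.0, 0.0, 1.0]), np.array([5.0, 7.0, 1.0]))
vp = vanishing_point(l1, l2)
```

Back-projecting the vanishing point through the intrinsic matrix gives the 3D direction of the parallel lines, from which a plane normal can be assembled.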

