Found 18 similar documents; search took 93 ms
1.
3D weld seam reconstruction using ring-laser vision sensing  Cited: 2 (self-citations: 0, other citations: 2)
To achieve weld seam detection and tracking under complex conditions, a ring-laser vision sensor was developed using laser scanning technology. Mounted together with the robot gripper, it forms a hand-eye system for the welding robot and acquires 3D information on the weld seam, overcoming the limited information content, ambiguous interpretation, and single tracking direction of conventional spot- or line-laser vision sensors. The intrinsic and extrinsic calibration of the ring-laser vision sensor system for seam locating and tracking was studied, and a model for computing the 3D seam coordinates was proposed. Finally, experiments were conducted on butt, lap, and fillet joints, and the calibration accuracy was analyzed. The results show that the ring laser exhibits distinct patterns on different weld seams; combined with the calibration results, the 3D coordinates of the inspected seam can be computed with an accuracy that meets the experimental requirements.
2.
3.
4.
5.
Binocular vision calibration based on synthetic-image matching of concentric circles  Cited: 6 (self-citations: 1, other citations: 5)
The mathematical model of a binocular vision sensor is analyzed, and a calibration method based on matching synthetic images of concentric circles is proposed. A concentric-circle target is placed at arbitrary positions within the measuring range multiple times, and the two cameras capture images of it. From the camera model and the known positions of the concentric circles in the target coordinate frame, a synthetic image is constructed and matched against the observed image by similarity; the image coordinates of each circle's center on the target are then obtained by optimized localization. Using the corresponding center coordinates in the left and right images together with the binocular constraints, the parameters of the binocular vision sensor are refined by nonlinear optimization to an optimal solution. The proposed method builds on the theory of Zhang's calibration method and exploits the image as a whole during optimization. Experimental results show that the method improves calibration accuracy.
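The similarity-matching step described above can be sketched in miniature: render a synthetic image of the known concentric circles at a candidate center, score it against the observed image, and keep the best-scoring center. This is a deliberately simplified, hypothetical sketch (binary rendering, normalized cross-correlation, integer grid search) rather than the authors' projection-based synthesis and continuous optimization:

```python
import numpy as np

def synthesize(center, radii, shape, thickness=1.0):
    """Render a binary synthetic image of concentric circles (a crude stand-in
    for projecting the known target circles through the camera model)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(xx - center[0], yy - center[1])
    img = np.zeros(shape)
    for rad in radii:
        img[np.abs(r - rad) < thickness] = 1.0
    return img

def ncc(a, b):
    """Normalized cross-correlation used as the similarity score."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def refine_center(observed, radii, guess, search=3):
    """Grid-search the circle center that best matches the observed image."""
    best, best_c = -1.0, guess
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            c = (guess[0] + dx, guess[1] + dy)
            s = ncc(synthesize(c, radii, observed.shape), observed)
            if s > best:
                best, best_c = s, c
    return best_c
```

A practical implementation would refine at sub-pixel resolution and score against the actual camera projection of the target circles.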
6.
7.
Acta Optica Sinica, 2017, (9)
Extrinsic calibration of a vision sensor establishes the relationship between the vision coordinate frame and an external reference frame. A cooperative target sphere for vision measurement was designed to enable cooperative measurement between the vision system and an instrument. By interchanging the cooperative target sphere with the laser tracker's reflector, the coordinates of common points are obtained in both the vision frame and the external reference frame, and a target function is nonlinearly optimized over the common-point coordinates in the two frames to yield the optimal extrinsic parameters of the vision sensor. The cooperative target sphere integrates the target pattern, a light source, and a spherical shell, so that it is interchangeable with the tracker reflector and its sphere center remains measurable. Error propagation in the extrinsic calibration was analyzed, the calibration accuracy was optimized through simulation, and the accuracy of the method was verified experimentally. The results show that the extrinsic calibration accuracy reaches 0.036 mm, enabling direct, flexible, high-accuracy acquisition of common-point data.
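Solving for the extrinsics from common points measured in two coordinate frames, as described above, is the classical rigid registration problem. A minimal closed-form sketch (the Kabsch/SVD solution, shown here in place of the paper's nonlinear optimization of a target function):

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with q_i ≈ R @ p_i + t,
    via the SVD (Kabsch) solution to the common-point registration problem.
    P, Q: N x 3 arrays of corresponding points in the two frames."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In practice this closed-form result is a good initial guess for a subsequent nonlinear refinement over measurement noise.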
8.
A multi-degree-of-freedom binocular stereo vision system overcomes the small field of view of conventional binocular stereo vision. To avoid recalibrating the extrinsic parameters every time the binocular cameras rotate by an angle, a calibration method based on the rotation-axis parameters is proposed. Only the intrinsic and extrinsic parameters of the binocular cameras at the initial position need to be calibrated. As the cameras rotate about the axis, the pose relationship between the cameras and the calibration template is measured by the homography principle to determine the axis direction vector and the coordinates of a point on the axis. The extrinsic parameters of the binocular cameras after rotation by a known angle are then determined from the Rodrigues rotation matrix, completing the calibration of the multi-DOF binocular system. Experimental results show that the proposed method measures the axis parameters accurately, achieves fast calibration of the multi-DOF binocular stereo vision system, and improves the system's working efficiency.
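The Rodrigues update mentioned above can be illustrated as follows. The helper `rotated_extrinsics` and its convention (world-to-camera extrinsics [R0 | t0] at the initial position, camera rig rotated by a known angle about a measured axis) are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about an axis, via the Rodrigues
    formula R = I + sin(theta)*K + (1 - cos(theta))*K^2."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def rotated_extrinsics(R0, t0, axis, point_on_axis, theta):
    """Extrinsics after the camera rig rotates by theta about the measured
    axis (direction vector + a point on the axis), given the initial
    world-to-camera extrinsics [R0 | t0]."""
    Ra = rodrigues(axis, theta)
    p = np.asarray(point_on_axis, float)
    R = R0 @ Ra.T                      # camera pose pre-rotated about the axis
    t = t0 + R0 @ (p - Ra.T @ p)       # axis point stays fixed under the motion
    return R, t
```

A sanity check on the convention: a world point lying on the rotation axis must keep the same camera coordinates before and after the rotation.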
9.
To achieve high-accuracy, fast 3D measurement of large free-form workpieces, a 3D measurement system consisting of a very-large-scale structured-light sensor and a two-axis guide rail was designed. The structured-light sensor captures images while carried along the rails, and the 3D coordinates of the target workpiece are obtained by computation. To convert 2D image coordinates into 3D camera coordinates, a method that simultaneously calibrates the intrinsic and extrinsic parameters of the line-structured-light sensor is proposed. The method uses a quasi-1D target: images of the target and the laser stripe are captured while the motion stage translates, and the sensor's intrinsic and extrinsic parameters are computed from them. Experimental results show that the calibration is reliable and that the system's measurement error is within 0.6 mm, meeting the design requirement. The calibration procedure is easy to perform and the target is simple to manufacture, making the method well suited to on-site industrial calibration.
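Once the intrinsics and the light plane are calibrated, converting a 2D stripe pixel into 3D camera coordinates reduces to intersecting the pixel's viewing ray with the light plane. A minimal sketch under an ideal pinhole model (lens distortion ignored; the plane is given in the camera frame as a·x + b·y + c·z + d = 0):

```python
import numpy as np

def stripe_point_to_3d(uv, K, plane):
    """Intersect the camera ray through pixel (u, v) with the calibrated
    light plane (a, b, c, d) to recover the 3D point in camera coordinates.
    K is the 3x3 intrinsic matrix."""
    ray = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))  # ray direction, z = 1
    n, d = np.asarray(plane[:3], float), float(plane[3])
    s = -d / (n @ ray)              # scale at which the ray meets the plane
    return s * ray
```

With the sensor's pose on the two-axis rails known, these camera-frame points would then be transformed into the workpiece frame.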
10.
11.
To calibrate a structured light vision sensor, it is necessary to obtain at least four non-collinear feature points that fall on the light stripe plane. We propose a novel method to construct such non-collinear feature points using a planar calibration object. After the planar calibration object is moved freely within the measuring range of the structured light vision sensor at least twice, all the local world coordinates of the feature points falling on the light stripe plane can readily be obtained on site. The local world coordinates of the non-collinear feature points can then be unified into global world coordinates through the three-dimensional (3D) camera coordinate frame. A planar calibration object is designed according to the proposed approach to provide accurate feature points. Experiments conducted on a real structured light vision sensor, consisting of a camera and a single-light-stripe-plane laser projector, show that the proposed approach has high accuracy and is practical in vision inspection applications. It greatly reduces the cost of the calibration equipment, simplifies the calibration procedure, and advances structured light vision inspection one step from laboratory environments toward real-world use.
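Once the non-collinear feature points on the stripe plane have been gathered, the plane itself can be estimated by least squares. A minimal SVD-based sketch (a generic fit, not necessarily the authors' specific estimation procedure):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (n, d) with
    n . x + d = 0 and ||n|| = 1, via SVD of the centered coordinates."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                 # direction of least variance = plane normal
    return n, float(-n @ c)
```

The requirement of at least four non-collinear points matches the fit: three collinear points leave the plane normal underdetermined.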
12.
13.
Pipelines are an important means of transport in industrial production, so accurate inspection of the corrosion and defects on their inner surfaces is vital for safe operation. For circular-structured-light vision inspection of pipeline inner surfaces, a new method is proposed for obtaining the calibration feature points of the circular-structured-light vision sensor from a coplanar reference object. A planar target for the circular structured light is designed; based on the principle of cross-ratio invariance and using the 3D camera coordinate frame as an intermediary, the calibration feature points in multiple local world coordinate frames are unified into a global world frame, yielding the 3D world coordinates of non-collinear calibration feature points lying on the circular-structured-light surface. The method lowers the cost of the calibration equipment and simplifies the calibration of the structured-light vision sensor. Calibration experiments achieve an accuracy of 0.340 mm, and the results show that the method is practical and feasible.
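The cross-ratio invariance the method relies on states that the cross ratio of four collinear points is preserved under any projective mapping of the line, which is what allows known ratios on the target to be transferred into the image. A small numerical illustration (the 1D projective map `h` below is an arbitrary example, not taken from the paper):

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (a, b; c, d) of four collinear points, given as 1D
    coordinates along the line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# An arbitrary 1D projective (Moebius) map standing in for the camera
# projection of the line.
h = lambda x: (2 * x + 1) / (x + 5)
```

For points 0, 1, 3, 4 the cross ratio is 9/8 = 1.125, and it is unchanged after mapping all four points through `h`.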
14.
To calibrate a self-developed industrial robot and thereby improve its motion accuracy, a robot calibration method is proposed in which binocular vision dynamically tracks numbered target points on a sphere. Numbered marker points distributed over a target sphere mounted on the robot end-effector are used to measure arbitrary poses within the workspace; the robot's geometric parameters are accurately identified by iterative least squares and compensated in the controller. Calibration software was developed with MFC following an open, modular design, with modules for vision measurement, data processing, and robot control. Measurement examples and comparative experiments verify its reliability and accuracy. The experiments show that the pose data measured by the software are highly accurate and that the method enlarges the field of view of conventional vision tracking; feeding the identified actual geometric parameters of the robot model back as compensation improved the absolute position accuracy from 3.785 mm to 1.618 mm and the orientation accuracy from 0.235 to 0.139.
15.
A high-accuracy on-site calibration method for line-structured-light vision sensors  Cited: 13 (self-citations: 1, other citations: 12)
To address the limitations of existing calibration methods for line-structured-light vision sensors, a method is proposed that does not require solving for calibration points on the light plane. From the stripe image, the Plücker matrix of the stripe line on the planar target is computed in the camera coordinate frame. The planar target is placed several times at suitable positions in front of the vision sensor, and the Plücker matrices of all the stripe lines are combined to solve for the light-plane equation in the camera frame. Finally, nonlinear optimization yields the optimal solution of the light-plane equation under the maximum-likelihood criterion. Because every stripe point participates in estimating the light-plane parameters, the method achieves high accuracy and strong robustness. Experiments show that the calibration accuracy improves by about 30% compared with existing methods.
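The Plücker-matrix formulation above has a convenient property: for the 4×4 Plücker matrix L of a line, every plane π containing that line satisfies Lπ = 0, so stacking the matrices of several stripe lines and taking the null vector of the stacked system yields the light plane. A minimal linear sketch (the paper additionally refines this solution by maximum-likelihood nonlinear optimization):

```python
import numpy as np

def plucker(A, B):
    """4x4 Pluecker matrix L = A B^T - B A^T of the line through the
    homogeneous points A and B. Any plane pi containing the line
    satisfies L @ pi = 0."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.outer(A, B) - np.outer(B, A)

def light_plane_from_lines(lines):
    """Solve for the light plane from the Pluecker matrices of several
    stripe lines: stack the L_i and take the SVD null vector."""
    M = np.vstack([plucker(A, B) for A, B in lines])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]              # homogeneous plane [a, b, c, d]
```

Two non-parallel stripe lines already determine the plane; additional placements overconstrain the system and average out noise.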
16.
A novel 1D target-based calibration method with unknown orientation for structured light vision sensors  Cited: 5 (self-citations: 0, other citations: 5)
The difficulty in calibrating a structured light vision sensor is obtaining the coordinates of world calibration points that fall on the light stripe plane. In this paper, we present a novel method that addresses this problem by randomly moving a 1D (one-dimensional) target within the sensor's field of view. At each position where the target is set, the world coordinates of one calibration point on the light stripe plane can be obtained from at least three preset known points on the 1D target by a proposed two-stage technique. Thus, as long as the 1D target is set at no fewer than three different positions, at least three such calibration points can be obtained to calibrate the structured light vision sensor. Simulations and real experiments show that the proposed approach achieves an accuracy of up to 0.065 mm. Its advantages are: (1) a 1D target is easily machined with high accuracy, which reduces the cost of the calibration equipment; (2) the method simplifies the calibration operation and is convenient for on-site calibration; (3) the method is suitable for use in confined spaces.
17.
A new method to calibrate a trinocular vision sensor is proposed, and two main tasks are addressed in this paper: determining the transformation matrix between each pair of cameras and the trifocal tensor of the trinocular vision sensor. A flexible sphere target with several spherical circles is designed. Owing to the isotropy of a sphere, the trifocal tensor of the three cameras can be determined exactly from the features on the sphere target. The fundamental matrix between each pair of cameras can then be obtained, and compatible rotation and translation matrices are readily deduced from the singular value decomposition of the fundamental matrix. In the proposed calibration method, image points are not required to be in one-to-one correspondence: once the image points lying on the same feature are obtained, the transformation matrix between each pair of cameras and the trifocal tensor of the trinocular vision sensor can be determined. Experimental results show that the proposed method yields precise measurement and matching results: the root mean square error of distance is 0.026 mm over a field of view of about 200×200 mm, and the feature matching across the three images is exact. Because the projection of a sphere does not depend on its orientation, the calibration method is robust and easy to operate. Moreover, the method also provides a new approach to obtaining the trifocal tensor.
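Recovering compatible rotation and translation matrices from the SVD of the epipolar matrix, as the abstract describes, is the standard decomposition of an essential matrix (the calibrated counterpart of the fundamental matrix). A minimal sketch returning the four classical candidates; disambiguating them needs the cheirality test, which is omitted here:

```python
import numpy as np

def decompose_essential(E):
    """Four (R, t) candidates from an essential matrix via SVD,
    E = U diag(1, 1, 0) V^T. The correct pair is the one that places
    triangulated points in front of both cameras (cheirality test)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                # translation recovered up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Given calibrated intrinsics, the fundamental matrix F converts to an essential matrix as E = K2ᵀ F K1 before this decomposition is applied.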
18.
An accurate TCF (tool control frame) model is essential for high-accuracy robot off-line programming. TCF calibration is also an important procedure for production recovery after a robot collision on the factory floor. This article proposes a novel TCF calibration method for a robotic visual measurement system in which the robot TCF is defined based on the model of the visual sensor, and a standard sphere of known diameter is used as the calibration target. Through translational and rotational movements of the industrial robot, the visual sensor measures the center of the standard sphere from multiple robot postures, and the TCF orientation and TCP position are determined in two steps. Robot off-line programming is performed based on the TCF calibration result, and a robot collision is simulated on an ABB IRB2400 industrial robot. Experimental results validate the effectiveness and efficiency of the standard-sphere-based TCF calibration method, which keeps the deviation of an identical feature point, measured before and after collision recovery, within 0.5 mm.
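Measuring the standard sphere's center from visual data implicitly requires fitting a sphere to measured surface points. A minimal linear least-squares sketch (a generic method, not necessarily the article's estimator):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit. Each surface point satisfies
    x^2 + y^2 + z^2 = 2 c . p + rho with rho = r^2 - ||c||^2, which is
    linear in (c, rho); returns (center, radius)."""
    P = np.asarray(points, float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r
```

Because the sphere's diameter is known in this calibration, the fitted radius also provides a built-in consistency check on the measurement.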