19 similar documents found (search time: 187 ms)
1.
To address the inaccuracy caused by random jumps in depth values when the Kinect sensor acquires depth images, an image restoration method combining Kalman filtering with multi-frame averaging is proposed, based on the idea of optimal estimation. First, Kalman filtering is applied to a sequence of depth images, so that as the Kinect sensor accumulates measurements over time, the jumps in depth values gradually level off. Then the final depth image is determined by multi-frame averaging, which resolves the imprecision caused by errors in the Kinect's depth measurements. Experimental results show that the algorithm achieves a root-mean-square error of 38.1025, an average gradient of 0.4713, and an information entropy of 6.1918; compared with single-image restoration, the resulting depth image has sharper edges.
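The temporal filtering idea in this abstract can be sketched as a per-pixel scalar Kalman filter over a stack of depth frames, combined with plain multi-frame averaging. This is an illustrative simplification, not the paper's actual algorithm: the function names, the equal-weight combination, and the noise variances are assumptions.

```python
import numpy as np

def kalman_smooth_depth(frames, process_var=1e-4, meas_var=1.0):
    """Per-pixel scalar Kalman filter over a temporal stack of depth frames.

    frames: array of shape (T, H, W) with noisy depth measurements.
    Returns the filtered estimate after the last frame.
    """
    x = frames[0].astype(float)        # state estimate, initialized from frame 0
    p = np.full_like(x, meas_var)      # estimate variance
    for z in frames[1:]:
        p = p + process_var            # predict: variance grows by process noise
        k = p / (p + meas_var)         # Kalman gain
        x = x + k * (z - x)            # update toward the new measurement
        p = (1.0 - k) * p              # posterior variance shrinks
    return x

def repair_depth(frames, **kw):
    """Combine the Kalman-filtered estimate with multi-frame averaging
    (equal weighting is an assumption for this sketch)."""
    frames = np.asarray(frames, dtype=float)
    return 0.5 * (kalman_smooth_depth(frames, **kw) + frames.mean(axis=0))
```

With near-constant depth and zero-mean noise, the filter behaves like a running average, so the jumps level off as more frames arrive, matching the behavior the abstract describes.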
2.
3.
Wide-field optical coherence tomography (WFOCT) offers higher scanning rates and high-resolution three-dimensional microscopy compared with conventional OCT, and has become a topic of active research. This paper describes the basic principle of WFOCT, reconstructs tomographic images of the fine structure of a glass object using the eight-step phase-shifting method, and investigates the axial resolution and probing depth of a wide-field OCT system for glass materials, achieving a probing depth of up to 3.3 mm. On the basis of multiple tomographic images, a three-dimensional image of the fine structure of the glass object is reconstructed with the marching cubes (MC) algorithm, using mixed VC6.0 and OpenGL programming. Experimental results show that the WFOCT system can be applied not only in medical settings such as biological tissue examination, but also to three-dimensional topographic microscopy and depth measurement of highly reflective objects.
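The eight-step phase-shifting reconstruction mentioned above can be sketched with the standard N-step least-squares formula (N = 8, uniform π/4 shifts); the function name and array layout are assumptions for this sketch.

```python
import numpy as np

def eight_step_phase(intensity_stack):
    """Recover the wrapped interference phase from eight phase-shifted frames.

    intensity_stack: (8, H, W); frame k carries a phase shift of k * 2*pi/8.
    For I_k = A + B*cos(phi + delta_k), the N-step formula gives phi
    (wrapped to (-pi, pi]).
    """
    k = np.arange(8).reshape(8, 1, 1)
    delta = k * (2 * np.pi / 8)
    num = np.sum(intensity_stack * np.sin(delta), axis=0)
    den = np.sum(intensity_stack * np.cos(delta), axis=0)
    return -np.arctan2(num, den)  # wrapped phase
```

Because the shifts sample a full period uniformly, the background term A cancels and the fringe contrast B divides out, so the phase estimate is insensitive to both.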
4.
5.
6.
To address the degradation of gesture-recognition accuracy caused by lighting changes, cluttered scenes, and other interference, a gesture-recognition algorithm combining a Leap Motion controller and a Kinect depth camera is defined; combining the two sensors effectively improves recognition accuracy and robustness. From the Leap Motion sensor, the algorithm extracts the distance from each fingertip to the hand centroid, the height of each fingertip above the palm plane, the angle between each fingertip and the palm center, and the 3D position of each fingertip in the hand reference frame. From the Kinect depth sensor, it extracts the distance from finger samples to the hand center, the local curvature of the hand contour, the connected regions of the hand shape, and the similarity between distance features. To combine the complementary information of the two sensors and discard redundancy, a joint calibration method is defined: using the captured 3D fingertip positions, the rotation and translation parameters are found by minimizing the average projection error of the fingertip points over all captured frames, which determines the extrinsic parameters of the Leap Motion and Kinect sensors and completes the coordinate transformation between them. A support vector machine (SVM) is then used for classification to complete the gesture-recognition task. Experiments show that, compared with existing gesture-recognition algorithms, the proposed algorithm not only achieves a higher average recognition rate of about 97% on the Jochen Triesch gesture database, but also remains more accurate and robust in complex environments with varying lighting, skin tones, and backgrounds.
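The joint-calibration step, finding a rotation and translation that best align corresponding fingertip points from the two sensors, can be sketched with the closed-form Kabsch solution (a least-squares rigid alignment; the abstract's actual objective minimizes projection error over frames, so this is a simplified stand-in):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with dst ≈ R @ src + t.

    src, dst: (N, 3) corresponding 3D fingertip positions from the two sensors.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Once R and t are known, points from one sensor can be mapped into the other's coordinate frame, which is what makes the two feature sets directly comparable.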
7.
8.
9.
10.
11.
Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by the Kinect sensor contains invalid regions, especially at object boundaries, and these missing regions must be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. The edges are then optimized and used to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries while suppressing non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm successfully predicts missing depth values and outperforms the existing algorithm around object boundaries.
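A minimal sketch of edge-guided hole filling in this spirit (not the paper's algorithm): invalid pixels are filled iteratively from valid 4-neighbours, but edge pixels do not propagate values, so depth is not smeared across object boundaries. The zero-as-invalid convention and the averaging rule are assumptions.

```python
import numpy as np

def inpaint_depth(depth, edges, max_iters=100):
    """Fill invalid (zero) depth pixels from 4-neighbours without crossing edges.

    depth: (H, W) float array; 0 marks a missing value.
    edges: (H, W) bool array; True pixels block propagation (and are left
    unfilled in this sketch), so boundaries separate the two sides.
    """
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        holes = (d == 0) & ~edges
        if not holes.any():
            break
        padded = np.pad(d * ~edges, 1)       # edge pixels contribute nothing
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh > 0
        filled = np.where(valid.any(axis=0),
                          neigh.sum(axis=0) / np.maximum(valid.sum(axis=0), 1),
                          0.0)
        d[holes] = filled[holes]
    return d
```

Because edge pixels never contribute values, a hole next to a boundary is filled only from its own side of the edge, which is the behavior the paper's comparison highlights.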
12.
The problem of obstacle detection and recognition, or more generally scene mapping, is one of the most investigated problems in computer vision, especially for mobile applications. In this paper, a fused optical system combining depth information and color images from the Microsoft Kinect sensor with 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two categories: road and obstacle. Exemplary results are presented, showing that fusing information gathered from different sources increases the effectiveness of obstacle detection in different scenarios and can be used successfully for road surface mapping.
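The road/obstacle classification step can be sketched as fitting a ground plane to the fused point cloud and thresholding the residual height. This is a simplified stand-in for the paper's classifier: the plain least-squares fit and the tolerance parameter are assumptions (a robust estimator such as RANSAC would be preferred when many obstacle points are present).

```python
import numpy as np

def label_road_obstacle(points, tol=0.05):
    """Fit a ground plane z = a*x + b*y + c by least squares to the fused
    point cloud, then label points near the plane 'road', the rest 'obstacle'.

    points: (N, 3) array of fused 3D points.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residual = np.abs(A @ coef - points[:, 2])
    return np.where(residual <= tol, "road", "obstacle")
```

The fit works when road points dominate the cloud, so the few obstacle points pull the plane only slightly and remain far outside the tolerance band.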
13.
X-ray three-dimensional imaging is currently an active research area in X-ray imaging both in China and abroad. For some special imaging targets, however, the conventional X-ray computed tomography (CT) imaging mode suffers from missing projection information, which degrades the quality of CT reconstruction and limits the application of CT imaging. This paper studies X-ray three-dimensional stereoscopic imaging based on light-field imaging theory. Starting from a synchrotron radiation source model, X-ray light-field imaging is modeled; then, based on the digital refocusing theory of light-field imaging, depth-wise slices of the imaging target are reconstructed. The results show that this method can reconstruct internal slices of the imaging target at any viewing angle and any depth, but defocusing in the optical focusing process introduces severe background noise. Filtering the raw data before X-ray light-field refocusing effectively removes reconstruction artifacts and improves reconstruction quality. This work is of theoretical significance for the algorithm and can also be applied to the rapid inspection of complex targets in industry, medicine, and other fields, giving it considerable practical value.
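The digital refocusing principle used above can be sketched as shift-and-add over sub-aperture views: each view is shifted in proportion to its angular offset, so points at the chosen depth align and stay sharp while others blur. The function name, the integer-pixel shifts, and the `alpha` parameterization are assumptions of this sketch.

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Shift-and-add digital refocusing of a light field.

    subviews: (K, H, W) sub-aperture images; offsets: (K, 2) angular positions.
    alpha selects the refocus depth: each view is shifted by alpha * offset
    before averaging.
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (du, dv) in zip(subviews, offsets):
        acc += np.roll(img, (round(alpha * du), round(alpha * dv)), axis=(0, 1))
    return acc / len(subviews)
```

Sweeping `alpha` produces the stack of depth slices; the out-of-focus background contributions this averaging leaves behind are exactly the defocus noise the abstract says must be filtered out.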
14.
To meet the need to visualize radioactivity distributions during the decommissioning of nuclear facilities and nuclear emergency response, a method for reconstructing and locating radioactive regions based on the fusion of Kinect and γ-camera image information is proposed. First, based on the special imaging mode of the γ camera, a combined Kinect/γ-camera imaging model is constructed, and the camera pair is jointly calibrated. Second, a dense point-cloud map of the nuclear radiation environment is built with a visual mapping method, which also yields the Kinect pose. Then, the radioactivity distribution information is extracted from the γ-camera image, and the point cloud of the radioactive region in the map is computed from the combined camera model. Finally, the central region imaged by the γ camera is located in 3D using a minimum bounding box. In experiments, by synchronizing and spatially aligning the Kinect and γ-camera data, the 3D reconstruction of a single point source was fused with the radiation scene map using only a small number of γ-camera images. In an 8 m × 12 m laboratory environment, the root-mean-square error of point-source localization was 0.11 m, demonstrating the effectiveness of the method.
15.
In this Letter, we propose an improved three-dimensional (3D) image reconstruction method for integral imaging. We use subpixel sensing of the optical rays of the 3D scene projected onto the image sensor. When reconstructing the 3D image, we use a calculated minimum subpixel distance for each sensor pixel instead of the average pixel value of integrated pixels from elemental images. The minimum subpixel distance is defined by measuring the distance between the center of the sensor pixel and the physical position of the imaging lens point spread function onto the sensor, which is projected from each reconstruction point for all elemental images. To show the usefulness of the proposed method, preliminary 3D imaging experiments are presented. Experimental results reveal that the proposed method may improve 3D imaging visualization because of the superior sensing and reconstruction of optical ray direction and intensity information for 3D objects.
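The selection rule described above, keeping per pixel the elemental-image contribution whose projected PSF center lands nearest the pixel center rather than averaging all contributions, can be sketched as follows. The data layout (one position/value pair per elemental image) is an assumption of this sketch.

```python
import numpy as np

def reconstruct_min_distance(pixel_centers, candidates):
    """Per sensor pixel, keep the candidate ray whose projected PSF centre is
    nearest the pixel centre, instead of averaging all candidates.

    pixel_centers: (P, 2) pixel-centre coordinates.
    candidates: list with one (positions (P, 2), values (P,)) pair per
    elemental image.
    """
    positions = np.stack([c[0] for c in candidates])           # (E, P, 2)
    values = np.stack([c[1] for c in candidates])              # (E, P)
    dist = np.linalg.norm(positions - pixel_centers, axis=-1)  # (E, P)
    best = dist.argmin(axis=0)          # nearest elemental image per pixel
    return values[best, np.arange(values.shape[1])]
```

Replacing the mean by a nearest-PSF selection is what preserves ray direction and intensity information that plain averaging would wash out.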
16.
In this paper, we propose a method that controls the depth of three-dimensional (3D) objects lying beyond the depth of focus in integral imaging. The depth control is performed entirely in a computer by synthesizing intermediate sub-images between the original sub-images obtained by transforming the captured elemental images. In the reconstruction process, 3D images reconstructed within the depth of focus have better image quality than those reconstructed beyond it. To demonstrate the feasibility of our method, optical and computational experiments are carried out and the results are presented.
17.
18.
Chinese Optics Letters (English Edition), 2017, (8)
In this Letter, we propose a three-dimensional (3D) free-view reconstruction technique for axially distributed image sensing (ADS). In typical integral imaging, free-view reconstructed images can be obtained by tilting all elemental images or tilting the reconstruction plane, owing to the large lateral perspectives available for 3D objects. In conventional ADS, only front-view images can be reconstructed, since the sensor is moved along its optical axis and therefore captures only small lateral perspectives of 3D objects. However, reconstructed 3D images at any viewing point may still be obtained, because a virtual viewing camera can capture these slightly different perspectives. Therefore, in this Letter, we employ a virtual viewing camera to visualize 3D images at arbitrary viewing points. Experimental results are presented to support the proposed method.
19.
In this paper, we propose a system combining a pickup process using an active sensor with a display process using a depth-priority integral imaging (DPII) system to display true three-dimensional (3D) objects over a large depth spanning the real and virtual image fields. The active sensor provides depth maps and color images of the 3D objects. Using the captured depth map and the original color images, elemental images are computationally synthesized and displayed optically in the DPII system. The proposed system also provides scaling of 3D scenes for true 3D objects. To show the usefulness of the proposed system, we carry out experiments on true 3D objects consisting of three character patterns and present the experimental results.