Journal of Zhengzhou University of Light Industry (Natural Science), Vol. 29, No. 6, Dec. 2014
Article ID: 2095-476X(2014)06-0052-09    CLC number: TP391.41; U491    Document code: A
DOI: 10.3969/j.issn.2095-476X.2014.06.013

Review of intelligent transportation system based on computer vision

XIA Yong-quan 1,3, JO Kang-hyun 2, GAN Yong 1,3, JIN Bao-hua 1,3, QIAN Shen-yi 1,3
(1. College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China; 2. University of Ulsan, Ulsan 680-749, Korea; 3. Information Technology Engineering Laboratory of Henan Emergency Management, Zhengzhou University of Light Industry, Zhengzhou 450001, China)

Abstract: The applications of computer vision in autonomous vehicles, robot localization, vehicle detection, driving assistance, intelligent traffic video monitoring, human detection, face recognition and so on were summarized. It was pointed out that improving the detection and recognition rate of visual sensors under bad weather conditions, and solving problems such as the large amount of data produced by visual sensors and the large amount of computing resources required by computer vision methods, deserve attention in future research.

Key words: intelligent transportation system; computer vision; vehicle detection; robot localization; face recognition; image processing

Received date: 2014-06-27
Foundation items: National Natural Science Foundation of China (61302118); Program for Young Backbone Teachers in Colleges and Universities of Henan Province (2010GGJS-114)
First author: XIA Yong-quan (1972- ), male, born in Suizhong County, Liaoning Province; associate professor and Ph.D. at Zhengzhou University of Light Industry; main research interests: image processing, computer vision, pattern recognition and artificial intelligence.

0 Introduction
Nowadays, a significant increase in the number of road vehicles has been accompanied by a buildup of road infrastructure. Simultaneously, various traffic control systems have been developed in order to increase road traffic safety, road capacity and travel comfort. However, even as technology has advanced significantly, traffic accidents still cause a large number of human fatalities and injuries. To reduce the number of annual traffic accidents, a large number of different systems have been researched and developed. Such systems are part of the road infrastructure or of road vehicles (horizontal and vertical signalization, variable message signs, driver support systems, etc.). Vehicle manufacturers have implemented systems such as lane detection and lane departure warning, parking assistants, collision avoidance, adaptive cruise control, etc. The main task of lane departure systems, for example, is to make the driver aware when the vehicle is leaving its current lane. These systems are partially based on computer vision algorithms that can detect and track pedestrians or surrounding vehicles. The development of computing power and cheap video cameras enables today's traffic safety systems to include more and more cameras and computer vision methods. Cameras are used as part of the road infrastructure or in vehicles. They enable monitoring of traffic infrastructure, detection of incident situations, tracking of surrounding vehicles, etc. [2].

Intellectualization, high speed, high precision, and a wide application scope are the development trends of intelligent transportation systems (ITS) based on computer vision. Intelligent traffic devices and processing methods based on computer vision will become more and more advanced with the development of signal processing theory and technologies. In addition, the high computation speed of CPUs and large-volume memory provide complex algorithms and devices with strong computation ability and memory resources. Most important of all, the development of CCD cameras and computer vision hardware allows for efficient and inexpensive use of vision sensors as components of a larger system. The goal of this paper is to review existing computer vision technologies applied to ITS. Emphasis is on computer vision methods that can be used in systems built into vehicles to assist the driver and increase traffic safety.
1 Computer vision in ITS

Computer vision is the process of using an image sensor to capture images and then using a computer processor to analyze these images to extract information of interest. A simple computer vision system can indicate the physical presence of objects within view by identifying simple visual attributes such as the shape, size, or color of an object. More sophisticated computer vision systems may establish not only the presence of an object but can increasingly identify (or classify) the object according to the requirements of an application. In ITS, computer vision technology is broadly applied either to detect objects and events that may represent safety risks to drivers, to detect hindrances to mobility, or otherwise to improve the efficiency of road networks. Computer vision's advantages over many other detection sensors or location technologies are generally twofold. First, computer vision systems are relatively inexpensive and can be easily installed on a vehicle or road infrastructure element, and they can detect and identify objects without the need for complementary equipment such as transponders. Second, computer vision systems can capture a tremendous wealth of visual information over wide areas, often beyond the longitudinal and peripheral range of other sensors such as active radar. Through continual innovations in computer vision processing algorithms, this wealth of visual data can be exploited to identify more subtle changes and distinctions between objects, enabling a wide array of ever more sophisticated applications.

The key advantage of computer vision in transportation applications is its non-intrusiveness. In other words, computer vision systems do not need devices to be embedded in, physically printed on, or externally attached to the objects targeted for detection. In this way, computer vision ostensibly has operational advantages over radio frequency identification (RFID) tags, barcodes and wireless access points, which require the additional installation of complementary readers, scanners and wireless modems, respectively. Furthermore, upgrading image sensors does not impose upgrade costs on others. Converting to new image sensor hardware or image processing algorithms does not require upgrading tags, identifiers, or transponder devices on vehicles. For infrastructure-based image sensors, image capturing devices are very easy to mount, remove, replace, and upgrade without extensive lane closures.
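As an illustration of the simple attribute-based detection described above, the following minimal Python/OpenCV sketch flags objects of a given color and approximate size in a single frame. It is not taken from the paper; the color ranges, area threshold and file names are assumptions made for the example.

import cv2
import numpy as np

def detect_red_objects(frame, min_area=500):
    """Flag regions whose color (red) and size exceed simple thresholds."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two HSV ranges.
    mask1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    mask2 = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    mask = cv2.bitwise_or(mask1, mask2)
    # OpenCV 4.x return convention: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) > min_area:          # discard small noise blobs
            boxes.append(cv2.boundingRect(c))       # (x, y, w, h) of the detected object
    return boxes

if __name__ == "__main__":
    image = cv2.imread("frame.jpg")                 # placeholder input image
    for (x, y, w, h) in detect_red_objects(image):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", image)

A real ITS component would of course replace the hand-tuned thresholds with learned classifiers, but the structure (capture, segment by attribute, report object locations) is the same.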
2 Application of computer vision in ITS

Computer vision technology also plays many other roles in improving productivity and safety in traffic management and other transportation operations. Many surveillance cameras have been mounted along freeways and at main intersections for public safety, traffic incident detection, ramp metering, and traffic signal timing. They have also been used as a backup to calibrate or diagnose problems with other traffic sensors such as loop detectors. With the wide deployment of roadside surveillance cameras and computer-aided analysis of traffic surveillance video, transportation planners and researchers have the ability to extract traffic parameters and patterns beyond just speed and flow, such as vehicle lane changing or acceleration patterns. In addition, computer vision systems can help staff in traffic management centers monitor more operational scenes concurrently and act more quickly in response to emergencies such as crashes or other adverse traffic incidents. The applications of computer vision in ITS are shown in Fig. 1.
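A common way to pull such parameters out of fixed surveillance video is background subtraction followed by blob analysis. The sketch below, which is not one of the surveyed systems, uses OpenCV's MOG2 background subtractor to give a rough vehicle count at a virtual line; the video path, line position and blob-size threshold are hypothetical.

import cv2

# Background subtraction over a fixed surveillance view, then a crude
# virtual-line count of foreground blobs. Parameters are illustrative only.
cap = cv2.VideoCapture("freeway.mp4")          # placeholder surveillance clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)
count_line_y = 400                             # hypothetical virtual count line
vehicle_count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Naive rule: a large blob whose center falls in a narrow band is counted.
        # A slow vehicle may be counted more than once; real systems track blobs
        # across frames before counting.
        if w * h > 1500 and count_line_y <= y + h // 2 <= count_line_y + 5:
            vehicle_count += 1
cap.release()
print("approximate vehicle count:", vehicle_count)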
2.1 Human detection

In video surveillance, the cameras are static and usually look down toward the ground. According to the different installation places of the cameras, the surveillance traffic scenes in which pedestrians participate can be roughly divided into outdoor and indoor scenes [3-5]. Outdoor scenes include non-motorized road sections, pedestrian crossings, sidewalks, bus stops, etc. [6-9]. Environmental factors such as weather and illumination must be considered for algorithms in outdoor surveillance. Besides, other moving objects will exist in the surveillance videos, especially in the Chinese mixed transportation system, and these objects can bring trouble to the detection task. Indoor traffic surveillance scenes include subway stations, railway stations, etc. Fewer surrounding factors and fewer other moving objects need to be considered in indoor environments. Some typical surveillance images in traffic scenes can be seen in Fig. 2, and the detection results are shown in Fig. 3 [4].

Fig. 3 Human detection examples
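Pedestrian detection in such footage is often done with a sliding-window classifier over gradient features. As a hedged illustration (not taken from the surveyed systems), the following Python sketch applies OpenCV's built-in HOG descriptor with its default people detector to one frame; the input file name and confidence threshold are placeholders.

import cv2
import numpy as np

# HOG + linear SVM people detector shipped with OpenCV, applied to one frame.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("crosswalk.jpg")                 # placeholder surveillance frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(rects, np.ravel(weights)):
    if score > 0.5:                                 # illustrative confidence threshold
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("pedestrians.jpg", frame)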
2.2 Vehicle detection

Increasing congestion on freeways and problems associated with existing detectors have spawned an interest in new vehicle detection technologies [10-15] such as video image processing. Existing commercial image processing systems work well in free-flowing traffic, but they have difficulties with congestion, shadows and lighting transitions. These problems stem from vehicles partially occluding one another and from the fact that vehicles appear differently under various lighting conditions [16-20]. Instead of tracking entire vehicles, vehicle features are tracked to make the system robust to partial occlusion. The system remains fully functional under changing lighting conditions because the most salient features at the given moment are tracked. After the features exit the tracking region, they are grouped into discrete vehicles using a common motion constraint [21-24]. The groups represent individual vehicle trajectories, which can be used to measure traditional traffic parameters as well as new metrics suitable for improved automated surveillance. The detection system is shown in Fig. 4 and a detection example is shown in Fig. 5.

Fig. 4 Vehicle detection system
Fig. 5 Vehicle detection example
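The feature-tracking-and-grouping idea above can be sketched with OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade optical flow. The grouping step is reduced here to counting features that share a similar displacement; the frame names, corner-count and motion thresholds are assumptions, not values from the cited systems.

import cv2
import numpy as np

# Track corner features between two frames and group them by similar motion,
# a simplified stand-in for feature-based vehicle tracking and grouping.
prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

pts = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

good_old = pts[status.ravel() == 1].reshape(-1, 2)
good_new = nxt[status.ravel() == 1].reshape(-1, 2)
flow = good_new - good_old                                  # per-feature displacement

# "Common motion constraint", reduced to quantizing displacements and counting
# how many features share each (rounded) motion vector.
moving = flow[np.linalg.norm(flow, axis=1) > 1.0]           # ignore near-static features
labels = np.round(moving / 2.0).astype(int)
if len(labels) > 0:
    groups, counts = np.unique(labels, axis=0, return_counts=True)
    for g, c in zip(groups, counts):
        print("motion", tuple(g * 2.0), "shared by", int(c), "features")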
2.3 Traffic sign recognition

Traffic sign recognition (TSR) has received considerable interest lately. The interest is driven by the market for intelligent applications such as autonomous driving, advanced driver assistance systems (ADAS) [26] and mobile mapping [27], and by the recent releases of larger traffic sign datasets such as the Belgian [28] and German [29] datasets. TSR covers two problems: traffic sign detection (TSD) and traffic sign classification (TSC). TSD is meant for the accurate localization of traffic signs in the image space, while TSC handles the labeling of such detections into specific traffic sign types or subcategories. Numerous approaches have been developed for TSD and TSC. Traffic sign examples are shown in Fig. 6 and traffic sign recognition examples are shown in Fig. 7.

Fig. 6 Traffic sign examples
Fig. 7 Traffic sign recognition examples
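As a toy example of the detection stage (TSD) only, the sketch below looks for red, roughly circular sign candidates by HSV color thresholding followed by a Hough circle transform. This is not one of the surveyed approaches, and the color ranges, Hough parameters and file names are assumptions.

import cv2
import numpy as np

def find_red_circular_signs(bgr):
    """Return (x, y, r) candidates for red circular traffic signs."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0/180 in OpenCV's HSV representation.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (160, 80, 60), (179, 255, 255))
    mask = cv2.GaussianBlur(mask, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=25, minRadius=8, maxRadius=80)
    return [] if circles is None else np.round(circles[0]).astype(int)

if __name__ == "__main__":
    image = cv2.imread("road_scene.jpg")        # placeholder street-level image
    for x, y, r in find_red_circular_signs(image):
        cv2.circle(image, (x, y), r, (0, 255, 0), 2)
    cv2.imwrite("sign_candidates.jpg", image)

The classification stage (TSC) would then label each candidate region, typically with a trained classifier rather than hand-crafted rules.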
2.4 License plate recognition

Automatic license plate recognition (LPR) plays an important role in numerous applications such as unattended parking lots, security control of restricted areas, traffic law enforcement, congestion pricing, and automatic toll collection [30-33]. Due to different working environments, LPR techniques vary from application to application. Most previous works have in some way restricted their working conditions, such as limiting them to indoor scenes, stationary backgrounds, fixed illumination, prescribed driveways, limited vehicle speeds, or designated ranges of the distance between camera and vehicle [34-39]. The aim of this study is to lessen many of these restrictions. Of the various working conditions, outdoor scenes and non-stationary backgrounds may be the two factors that most influence the quality of the scene images acquired and, in turn, the complexity of the techniques needed. In an outdoor environment, illumination not only changes slowly as daytime progresses, but may change rapidly due to changing weather conditions and passing objects (e.g., cars, airplanes, clouds, and overpasses). In addition, movable cameras create dynamic scenes when they move, pan or zoom. A dynamic scene image may contain multiple license plates or no license plate at all. Moreover, when they do appear in an image, license plates may have arbitrary sizes, orientations and positions, and if complex backgrounds are involved, detecting license plates can become quite a challenge. License plate recognition examples are shown in Fig. 8.

Fig. 8 License plate recognition examples
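The plate-localization front end of an LPR system is often built on the observation that plates contain dense vertical character strokes. The following sketch, offered only as an illustration under that assumption, marks plate-like candidate regions by vertical-edge density and aspect ratio; thresholds and file names are hypothetical, and character segmentation and recognition are omitted.

import cv2

def find_plate_candidates(bgr):
    """Locate plate-like regions by vertical-edge density and aspect ratio."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Emphasize vertical edges, binarize, then close gaps so character strokes
    # merge into solid rectangular blobs.
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w * h > 1000:   # plate-like aspect ratio
            candidates.append((x, y, w, h))
    return candidates

if __name__ == "__main__":
    image = cv2.imread("car.jpg")               # placeholder vehicle image
    for (x, y, w, h) in find_plate_candidates(image):
        cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imwrite("plate_candidates.jpg", image)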
2.5 Building recognition

With the wide dissemination of digital cameras and various navigational aids for ITS, the position of a building is very important for an autonomous vehicle. The aim of building recognition is to localize the building position and provide the ITS with useful information. The localization problem as considered in this application comprises two phases: location recognition and relative pose recovery of the camera with respect to the query view. In the present work, we focus on the recognition aspect and point the reader to some standard techniques for pose recovery.

The problems of location and building recognition have been addressed by several authors in the past, mostly considering outdoor scenes. The authors of [41] used vanishing directions to align a building view in the query image to the canonical view in the database, and proposed matching using descriptors associated with interest regions, followed by relative pose recovery between the views from planar homographies. The authors of [42] proposed to extract invariant regions and used a set of color moment invariants to represent them; recognition was based on the number of matched regions. The recognition part in [43] was achieved by matching line segments and their associated descriptors, and false matches were rejected by imposing an epipolar geometry constraint. An alternative approach was proposed in [44] for context-based place recognition. The representation of individual locations was in this case obtained by integrating the responses of a bank of filters over coarse spatial regions and fitting a Gaussian Mixture Model to the responses. This method enabled a coarse classification of locations and also exploited spatial relationships between locations, captured by a Hidden Markov Model, but the proposed location model did not allow for actual pose recovery of the camera with respect to the scene.
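The matching-based recognition schemes above can be illustrated with a minimal sketch that compares a query view against a small image database using ORB features and reports the entry with the most good matches. It is only a sketch of the general idea, not a reimplementation of [41-44]; the directory layout, feature count and ratio-test threshold are assumptions.

import cv2
import os

# Match a query view against a small database of building images using ORB
# features; the database entry with the most good matches is reported.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def describe(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return orb.detectAndCompute(img, None)[1]

query_desc = describe("query_view.jpg")                    # placeholder query image
database = {f: describe(os.path.join("db", f)) for f in os.listdir("db")}

scores = {}
for name, desc in database.items():
    if desc is None or query_desc is None:
        continue
    # Lowe's ratio test on the two nearest neighbours keeps distinctive matches.
    pairs = matcher.knnMatch(query_desc, desc, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    scores[name] = len(good)

if scores:
    best = max(scores, key=scores.get)
    print("best matching database view:", best, "with", scores[best], "good matches")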
Building recognition can also be used to localize a vehicle or a person in an urban area. This can be achieved in a two-stage process, by first acquiring a database of buildings and/or locations of a particular area from different viewpoints, followed by recognition of a new query view by matching it to the closest model in the database. From the application