A System for Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time information from the battlefield to improve the situational awareness of commanders and staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties or refugee movements from unmanned air vehicles, providing security for embassies or airports, and staking out suspected drug or terrorist hide-outs by collecting time-stamped pictures of everyone entering and exiting the building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs (a store is robbed or a car is stolen), investigators can go back after the fact to see what happened, but of course by then it is too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while options are still open for avoiding the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to automatically "parse" people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking using temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All resulting object hypotheses from all sensors are transmitted as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface to give a broad overview of scene activities. These technologies have been demonstrated through a series of yearly demos, using a testbed system developed on the urban campus of CMU.
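As a concrete illustration of the monocular geolocation step, the viewing ray through an object's image footprint can be intersected with the terrain. The sketch below assumes a calibrated pinhole camera with known world pose and replaces the terrain model with a flat ground plane at a known height; the function and variable names are our own illustration, not part of the VSAM codebase.

```python
import numpy as np

def geolocate_ground_point(pixel, K, R, cam_pos, ground_z=0.0):
    """Intersect the viewing ray through `pixel` with the plane z = ground_z.

    pixel   : (u, v) image coordinates of the object's ground contact point
    K       : 3x3 camera intrinsic matrix
    R       : 3x3 rotation taking camera-frame vectors into the world frame
    cam_pos : camera center in world coordinates (length-3 array)
    """
    u, v = pixel
    # Back-project the pixel into a viewing ray in the camera frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Express the ray in world coordinates.
    ray_world = R @ ray_cam
    if abs(ray_world[2]) < 1e-9:
        raise ValueError("viewing ray never reaches the ground plane")
    # Solve cam_pos[2] + t * ray_world[2] = ground_z for the ray parameter t.
    t = (ground_z - cam_pos[2]) / ray_world[2]
    if t <= 0:
        raise ValueError("ground plane is behind the camera")
    return cam_pos + t * ray_world

# Toy example: a camera 10 m above flat ground, looking straight down.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],    # camera x-axis -> world x
              [0.0, -1.0, 0.0],   # image "down"  -> world -y
              [0.0, 0.0, -1.0]])  # optical axis  -> straight down
cam_pos = np.array([0.0, 0.0, 10.0])
print(geolocate_ground_point((320, 240), K, R, cam_pos))  # approx. [0, 0, 0]
print(geolocate_ground_point((480, 240), K, R, cam_pos))  # approx. [2, 0, 0]
```

Replacing the flat plane with a digital terrain model only changes the intersection step to a search along the ray for the first terrain crossing; the wide-baseline stereo case instead triangulates the same ray against a ray from a second, overlapping camera view.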
Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only "moving" pixels need be considered.

There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.
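To make the trade-off between the first two approaches concrete, the sketch below shows minimal versions of temporal differencing and of background subtraction against a slowly adapting running-average background, applied to grayscale frames. The thresholds, learning rate, and helper names are illustrative choices of ours, not parameters taken from the VSAM system.

```python
import numpy as np

def temporal_difference(prev, curr, thresh=15):
    """Flag pixels whose intensity changed between consecutive frames.
    Adapts instantly to scene changes, but tends to recover only the
    leading and trailing edges of a moving object."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh

def make_background_subtractor(first_frame, alpha=0.05, thresh=25):
    """Background subtraction against a running-average background model.
    Yields complete silhouettes, but drifts when lighting changes or when
    objects stop and slowly blend into the background."""
    background = first_frame.astype(np.float32)

    def apply(frame):
        diff = np.abs(frame.astype(np.float32) - background)
        mask = diff > thresh
        # Update the model only where the pixel currently looks like
        # background, so foreground objects do not bleed into it.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]
        return mask

    return apply

# Toy example: a bright 20x20 square moves 5 pixels to the right.
empty = np.zeros((120, 160), dtype=np.uint8)       # object-free background
f0 = empty.copy(); f0[40:60, 40:60] = 200          # frame n-1
f1 = empty.copy(); f1[40:60, 45:65] = 200          # frame n
subtract = make_background_subtractor(empty)
print("temporal differencing :", temporal_difference(f0, f1).sum(), "pixels")  # edges only
print("background subtraction:", subtract(f1).sum(), "pixels")                 # full square
```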
Under the VSAM program, CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast and surprisingly effective; indeed, it is the primary algorithm used by the majority of the SPUs in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers is developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and that then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or in a "step-and-stare" mode for pan-tilt cameras. To overcome this limitation, a second extension has been developed to allow background subtraction from a continuously panning and tilting camera. Through clever accumulation of image evidence, this algorithm can be implemented in real time on a conventional PC platform. A fourth approach to moving object detection from a moving airborne platform has also been developed, under a subcontract to the Sarnoff Corporation. This approach is based on image stabilization using special video processing hardware.
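The hybrid detector mentioned above can be pictured as three-frame differencing deciding where motion is currently happening, with the adaptive background model recovering the complete silhouette around those pixels. The fragment below is a minimal sketch of one way such a combination can be wired together; it is not the CMU implementation, and the simple union of the two masks, the threshold, and the learning rate are illustrative assumptions of ours.

```python
import numpy as np

def hybrid_moving_object_detector(frames, alpha=0.05, thresh=20):
    """Yield one foreground mask per frame, starting with the third frame.

    Three-frame differencing flags pixels that differ from BOTH previous
    frames, which suppresses the ghost left where a moving object used to
    be; background subtraction then fills in the full changed region. The
    background model is adapted only at pixels labeled background, so a
    stopped object fades in gradually rather than corrupting the model.
    """
    frames = iter(frames)
    f_prev2 = next(frames).astype(np.float32)   # frame n-2
    f_prev1 = next(frames).astype(np.float32)   # frame n-1
    background = f_prev2.copy()
    for frame in frames:
        f = frame.astype(np.float32)
        # Motion evidence: changed relative to both of the two previous frames.
        moving = (np.abs(f - f_prev1) > thresh) & (np.abs(f - f_prev2) > thresh)
        # Full silhouette from the background model.
        fg = np.abs(f - background) > thresh
        # Simple union as a stand-in for keeping only background-subtraction
        # regions that actually contain motion evidence.
        mask = moving | fg
        # Adapt the background only where nothing is moving.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * f[~mask]
        yield mask
        f_prev2, f_prev1 = f_prev1, f
```

A fuller version would replace the union with a connected-components test that keeps only those background-subtraction blobs containing frame-difference pixels, which would more closely match the fast, ghost-free behavior the text attributes to the testbed algorithm.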
The current VSAM IFD testbed system and suite of video understanding technologies are the end result of a three-year, evolutionary process. Impetus for this evolution was provided by a series of yearly demonstrations. The following tables provide a succinct synopsis of the progress made during the last three years in the areas of video understanding technology, VSAM testbed architecture, sensor control algorithms, and degree of user interaction. Although the program is over now, the VSAM IFD testbed continues to provide a valuable resource for the development and testing of new video understanding capabilities. Future work will be directed towards achieving the following goals:
- better understanding of human motion, including segmentation and tracking of articulated body parts;
- improved data logging and retrieval mechanisms to support 24/7 system operations;
- bootstrapping functional site models through passive observation of scene activities;
- better detection and classification of multi-agent events and activities;
- better camera control to enable smooth object tracking at high zoom; and
- acquisition and selection of "best views" with the eventual goal of recognizing individuals in the scene.