An Introduction to Compressive Sampling: Graduation Thesis Foreign-Literature Translation


Translated Text

An Introduction to Compressive Sampling

The conventional approach to sampling signals or images follows Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition protocols used in audio and video equipment, medical imaging devices, radio receivers, and so on. (For some signals, such as images that are not naturally bandlimited, the sampling rate is dictated not by Shannon's theorem but by the desired temporal or spatial resolution; however, such systems commonly apply an antialiasing low-pass filter to bandlimit the signal before sampling, so Shannon's theorem still plays an implicit role.) In data conversion, for example, standard analog-to-digital converter technology implements the usual quantized Shannon representation: the signal is uniformly sampled at or above the Nyquist rate.

This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a sensing/sampling paradigm that breaks with the traditional theory of signal acquisition. CS theory asserts that certain signals and images can be recovered from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.

Sparsity expresses the idea that the information rate of a continuous-time signal may be much smaller than its bandwidth suggests, or that a discrete signal depends on a number of degrees of freedom far smaller than its (finite) length. More precisely, CS exploits the fact that many natural signals are sparse or compressible, in the sense that they have concise representations when expressed in a suitable basis $\Psi$.

Incoherence extends the duality between time and frequency and expresses the idea that a signal with a sparse representation in $\Psi$ must be spread out in the domain in which it is acquired, just as a Dirac impulse or spike in the time domain is spread out in the frequency domain. Put differently, incoherence says that, unlike the signal of interest, the sampling/sensing waveforms have an extremely dense representation in $\Psi$.

The crucial observation is that one can design efficient sensing or sampling schemes that capture the useful information embedded in a sparse signal and condense it into a small amount of data. These schemes are nonadaptive and simply require correlating the signal with a small number of fixed waveforms that are incoherent with the sparsifying basis. What is most remarkable is that these sampling schemes allow a sensor to capture the information in a sparse signal very efficiently, without trying to comprehend that signal; further, numerical optimization can be used to reconstruct the full-length signal from the small number of collected samples. In other words, CS is a very simple and efficient signal acquisition protocol that samples at a low rate, in a signal-independent fashion, and later uses computation to reconstruct the signal from what appears to be an incomplete set of measurements.

The purpose of this article is to overview the basic principles of CS theory, present the key mathematical ideas underlying the theory, and survey several important results in the field. One of the charms of this theory is that it draws on several branches of applied mathematics, most notably probability theory. This article deliberately highlights that aspect, and especially the perhaps surprising fact that randomness can lead to very effective sensing mechanisms. We also discuss the significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously, and conclude by reviewing some important applications.

The Signal Sensing Problem

In this section we discuss sensing mechanisms in which information about a signal $f(t)$ is obtained through linear functionals recording the values

$$y_k = \langle f, \varphi_k \rangle, \qquad k = 1, \ldots, m. \tag{1}$$

That is, we simply correlate the object we wish to acquire with the waveforms $\varphi_k(t)$. This is a standard setup. For example, if the sensing waveforms are Dirac delta functions (spikes), then $y$ is a vector of samples of $f$ in time or space; if the sensing waveforms are indicator functions of pixels, then $y$ is the image data collected by the sensors in a digital camera; if the sensing waveforms are sinusoids, then $y$ is a vector of Fourier coefficients. The last is the sensing modality used in magnetic resonance imaging (MRI), and many other examples exist.

Although a CS theory could be developed for continuous-time/space signals, we restrict attention here to discrete signals $f \in \mathbb{R}^n$, for two reasons: first, it is conceptually simpler; second, the existing discrete CS theory is far more mature (and clearly paves the way for a continuous theory; see also the "Applications" section). We are therefore interested in undersampled situations, in which the number $m$ of measurements is much smaller than the dimension $n$ of the signal $f$. Such problems are extremely common for a variety of reasons: the number of sensors may be limited; the measurements may be very expensive, as in certain imaging processes via neutron scattering; or the sensing process may be so slow that the object can only be measured a few times, as in MRI.

These circumstances raise important difficulties. Is accurate recovery possible from only $m \ll n$ measurements? Is it possible to design $m \ll n$ sensing waveforms that capture almost all the information about $f$? And how can $f$ be approximated from this information? These questions are admittedly daunting, since one would have to solve an underdetermined system of linear equations. Let $A$ be the $m \times n$ sensing matrix with the vectors $\varphi_1^*, \ldots, \varphi_m^*$ as rows ($a^*$ denotes the complex transpose of $a$). The process of recovering $f \in \mathbb{R}^n$ from $y = Af \in \mathbb{R}^m$ is in general ill-posed when $m < n$, since there are infinitely many candidate signals $\tilde{f}$ for which $A\tilde{f} = y$. But one can imagine a way out by relying on realistic models of the naturally occurring objects $f$. Shannon's theorem tells us that if $f(t)$ actually has very low bandwidth, then a small number of uniform samples suffice to recover $f$. As the remainder of this article shows, signal recovery is in fact possible for a much broader class of signal models.
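As a minimal numerical sketch of the acquisition model in (1), the $m$ sensing waveforms can be stacked as the rows of a matrix so that the recorded data is $y = Af$. The Gaussian waveforms, the dimensions, and the NumPy usage below are illustrative assumptions of this sketch, not anything prescribed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 64                    # signal dimension n, number of measurements m << n

f = rng.standard_normal(n)        # stand-in for the unknown discrete signal f

# Stack the m sensing waveforms phi_k as the rows of the m x n matrix A;
# each recorded value is the correlation y_k = <f, phi_k> from Eq. (1).
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random, signal-independent waveforms
y = A @ f

print(f.shape, y.shape)           # (512,) (64,): 64 numbers describe a 512-sample signal
```

Note that the rows here are fixed before seeing the signal, which is exactly the nonadaptive character of CS acquisition described above.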

Incoherence and the Sensing of Sparse Signals

This section presents the two fundamental premises underlying CS: sparsity and incoherence.

Sparsity

Many natural signals have concise representations when expressed in a convenient basis. Consider, for example, the image in Figure 1(a) and its wavelet transform in Figure 1(b). Although nearly all the pixels of the original image are nonzero, the wavelet coefficients offer a concise summary: most of the wavelet coefficients are small, and the relatively few large coefficients capture most of the information. Mathematically speaking, given a vector $f \in \mathbb{R}^n$ (such as the $n$-pixel image in Figure 1), we expand it in an orthonormal basis (such as a wavelet basis) $\Psi = [\psi_1\,\psi_2\,\cdots\,\psi_n]$ as follows:

$$f(t) = \sum_{i=1}^{n} x_i \psi_i(t), \tag{2}$$

where $x$ is the coefficient sequence of $f$, with $x_i = \langle f, \psi_i \rangle$. It is convenient to write $f = \Psi x$, where $\Psi$ is the $n \times n$ matrix with $\psi_1, \ldots, \psi_n$ as columns. The implication of sparsity can now be made precise: when a signal has a sparse expansion, the small coefficients can be discarded without perceptible loss in the recovered signal. Formally, let $f_S(t)$ be the signal obtained by keeping only the terms corresponding to the $S$ largest values of $(x_i)$ in the expansion (2). By definition, $f_S := \Psi x_S$, where here and below $x_S$ is the vector of coefficients $(x_i)$ with all but the $S$ largest set to zero. This vector is sparse in a strict sense, since all but a few of its entries are zero; we call objects with at most $S$ nonzero entries $S$-sparse. Since $\Psi$ is an orthonormal basis, we have $\|f - f_S\|_{\ell_2} = \|x - x_S\|_{\ell_2}$, and if $x$ is sparse or compressible in the sense that its sorted magnitudes decay quickly, then $x$ is well approximated by $x_S$ and the error is therefore very small. In plain terms, the vast majority of the coefficients can be thrown away without much loss. Figure 1(c) shows an example in which the difference between a one-megapixel image and its approximation obtained by discarding 97.5% of the coefficients is hardly perceptible.

This principle is, of course, the theoretical basis of most modern lossy coders, including JPEG-2000 and other compression formats, since a simple method of data compression is to compute $x$ from $f$ and then (adaptively) encode the locations and values of the $S$ significant coefficients. Because the locations of the significant pieces of information may not be known in advance (they are signal dependent), such a process requires knowledge of all $n$ coefficients $x$; in our example, the significant information tends to be clustered around the edges of the image. More generally, sparsity is a fundamental modeling tool that enables efficient basic signal processing, for example accurate statistical estimation and classification, efficient data compression, and so on. This article, however, studies a more surprising and far-reaching implication: sparsity plays an important supporting role in the acquisition process itself, and determines how efficiently and nonadaptively signals can be acquired.
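The thresholding recipe above, keep the $S$ largest coefficients and set the rest to zero, can be sketched in a few lines of code. The discrete cosine transform stands in here for the article's wavelet basis purely because SciPy ships an orthonormal DCT; the compressible coefficient model and all dimensions are assumptions of this sketch:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n, S = 1024, 26                     # keep S of n coefficients (about 2.5%)

# A compressible coefficient sequence: S large entries plus a small dense tail.
x = 1e-3 * rng.standard_normal(n)
x[rng.choice(n, size=S, replace=False)] = rng.standard_normal(S)
f = idct(x, norm='ortho')           # synthesis f = Psi x (orthonormal DCT basis)

coeffs = dct(f, norm='ortho')       # analysis: x_i = <f, psi_i>
order = np.argsort(np.abs(coeffs))[::-1]
x_S = np.zeros(n)
x_S[order[:S]] = coeffs[order[:S]]  # keep only the S largest coefficients
f_S = idct(x_S, norm='ortho')       # f_S = Psi x_S

# Psi is orthonormal, so ||f - f_S||_2 equals ||x - x_S||_2, and both are small.
print(np.linalg.norm(f - f_S), np.linalg.norm(coeffs - x_S))
```

The two printed norms agree, which is the Parseval-type identity used in the text, and both are small because the coefficient sequence is compressible.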

Original Text

An Introduction to Compressive Sampling

Conventional approaches to sampling signals or images follow Shannon's celebrated theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition protocols used in consumer audio and visual electronics, medical imaging devices, radio receivers, and so on. (For some signals, such as images that are not naturally bandlimited, the sampling rate is dictated not by the Shannon theorem but by the desired temporal or spatial resolution. However, it is common in such systems to use an antialiasing low-pass filter to bandlimit the signal before sampling, and so the Shannon theorem plays an implicit role.) In the field of data conversion, for example, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation: the signal is uniformly sampled at or above the Nyquist rate.

This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.

Sparsity expresses the idea that the "information rate" of a continuous time signal may be much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a number of degrees of freedom which is comparably much smaller than its (finite) length. More precisely, CS exploits the fact that many natural signals are sparse or compressible in the sense that they have concise representations when expressed in the proper basis $\Psi$.

Incoherence extends the duality between time and frequency and expresses the idea that objects having a sparse representation in $\Psi$ must be spread out in the domain in which they are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency domain. Put differently, incoherence says that unlike the signal of interest, the sampling/sensing waveforms have an extremely dense representation in $\Psi$.

The crucial observation is that one can design efficient sensing or sampling protocols that capture the useful information content embedded in a sparse signal and condense it into a small amount of data. These protocols are nonadaptive and simply require correlating the signal with a small number of fixed waveforms that are incoherent with the sparsifying basis. What is most remarkable about these sampling protocols is that they allow a sensor to very efficiently capture the information in a sparse signal without trying to comprehend that signal. Further, there is a way to use numerical optimization to reconstruct the full-length signal from the small amount of collected data. In other words, CS is a very simple and efficient signal acquisition protocol which samples, in a signal-independent fashion, at a low rate and later uses computational power for reconstruction from what appears to be an incomplete set of measurements.

Our intent in this article is to overview the basic CS theory that emerged in the works [1]-[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can, perhaps surprisingly, lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications.

THE SENSING PROBLEM

In this article, we discuss sensing mechanisms in which information about a signal f(t) is obtained by linear functionals recording the values

$$y_k = \langle f, \varphi_k \rangle, \qquad k = 1, \ldots, m. \tag{1}$$

That is, we simply correlate the object we wish to acquire with the waveforms $\varphi_k(t)$. This is a standard setup. If the sensing waveforms are Dirac delta functions (spikes), for example, then y is a vector of sampled values of f in the time or space domain. If the sensing waveforms are indicator functions of pixels, then y is the image data typically collected by sensors in a digital camera. If the sensing waveforms are sinusoids, then y is a vector of Fourier coefficients; this is the sensing modality used in magnetic resonance imaging (MRI). Other examples abound.

Although one could develop a CS theory of continuous time/space signals, we restrict our attention to discrete signals $f \in \mathbb{R}^n$. The reason is essentially twofold: first, this is conceptually simpler and second, the available discrete CS theory is far more developed (yet clearly paves the way for a continuous theory; see also "Applications"). Having said this, we are then interested in undersampled situations in which the number m of available measurements is much smaller than the dimension n of the signal f. Such problems are extremely common for a variety of reasons. For instance, the number of sensors may be limited. Or the measurements may be extremely expensive as in certain imaging processes via neutron scattering. Or the sensing process may be so slow that one can only measure the object a few times, as in MRI. And so on.

These circumstances raise important questions. Is accurate reconstruction possible from $m \ll n$ measurements only? Is it possible to design $m \ll n$ sensing waveforms to capture almost all the information about f? And how can one approximate f from this information? Admittedly, this state of affairs looks rather daunting, as one would need to solve an underdetermined linear system of equations. Letting A denote the $m \times n$ sensing matrix with the vectors $\varphi_1^*, \ldots, \varphi_m^*$ as rows ($a^*$ is the complex transpose of $a$), the process of recovering $f \in \mathbb{R}^n$ from $y = Af \in \mathbb{R}^m$ is ill-posed in general when $m < n$: there are infinitely many candidate signals $\tilde{f}$ for which $A\tilde{f} = y$. But one could perhaps imagine a way out by relying on realistic models of objects f which naturally exist. The Shannon theory tells us that, if f(t) actually has very low bandwidth, then a small number of (uniform) samples will suffice for recovery. As we will see in the remainder of this article, signal recovery can actually be made possible for a much broader class of signal models.
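The "numerical optimization" mentioned above can be illustrated with the recovery rule most associated with the CS literature, $\ell_1$ minimization, which selects among the infinitely many solutions of $Ax = y$ the one of smallest $\ell_1$ norm. The sketch below is an illustration under assumed dimensions, a Gaussian sensing matrix, and SciPy's linprog solver; none of these choices come from the article itself. It recasts $\min \|x\|_1$ subject to $Ax = y$ as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, S = 200, 80, 8                        # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, size=S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                              # the undersampled data: m < n

# Solve  min ||x||_1  s.t.  A x = y  as a linear program in variables (x, t):
#   minimize sum(t)  subject to  -t <= x <= t  and  A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
eye = np.eye(n)
A_ub = np.block([[eye, -eye], [-eye, -eye]])     # x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)

x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true))       # essentially zero: exact recovery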

INCOHERENCE AND THE SENSING OF SPARSE SIGNALS

This section presents the two fundamental premises underlying CS: sparsity and incoherence.

SPARSITY

Many natural signals have concise representations when expressed in a convenient basis. Consider, for example, the image in Figure 1(a) and its wavelet transform in (b). Although nearly all the image pixels have nonzero values, the wavelet coefficients offer a concise summary: most coefficients are small, and the relatively few large coefficients capture most of the information.

Mathematically speaking, we have a vector $f \in \mathbb{R}^n$ (such as the n-pixel image in Figure 1) which we expand in an orthonormal basis (such as a wavelet basis) $\Psi = [\psi_1\,\psi_2\,\cdots\,\psi_n]$ as follows:

$$f(t) = \sum_{i=1}^{n} x_i \psi_i(t), \tag{2}$$

where x is the coefficient sequence of f, $x_i = \langle f, \psi_i \rangle$. It will be convenient to express f as $\Psi x$ (where $\Psi$ is the $n \times n$ matrix with $\psi_1, \ldots, \psi_n$ as columns). The implication of sparsity is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider $f_S(t)$ obtained by keeping only the terms corresponding to the S largest values of $(x_i)$ in the expansion (2). By definition, $f_S := \Psi x_S$, where here and below, $x_S$ is the vector of coefficients $(x_i)$ with all but the largest S set to zero. This vector is sparse in a strict sense since all but a few of its entries are zero; we will call S-sparse such objects with at most S nonzero entries. Since $\Psi$ is an orthonormal basis (or "orthobasis"), we have $\|f - f_S\|_{\ell_2} = \|x - x_S\|_{\ell_2}$, and if x is sparse or compressible in the sense that the sorted magnitudes of the $(x_i)$ decay quickly, then x is well approximated by $x_S$ and, therefore, the error $\|f - f_S\|_{\ell_2}$ is small. In plain terms, one can "throw away" a large fraction of the coefficients without much loss. Figure 1(c) shows an example where the perceptual loss is hardly noticeable from a megapixel image to its approximation obtained by throwing away 97.5% of the coefficients.
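Figure 1 itself is not reproduced in this document, but the 97.5% experiment can be imitated quantitatively. The sketch below uses an assumed synthetic smooth image and an orthonormal 2-D DCT in place of the wavelet transform of a photograph; everything in it is an illustrative stand-in rather than the article's actual data:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth synthetic "image" standing in for Figure 1 (compressible, not exactly sparse).
n = 256
u = np.linspace(0, 1, n)
img = np.sin(2 * np.pi * u)[:, None] * np.cos(3 * np.pi * u)[None, :] + 0.5 * np.outer(u, u)

coeffs = dctn(img, norm='ortho')            # orthonormal 2-D DCT coefficients
keep = int(0.025 * coeffs.size)             # keep 2.5%, i.e., throw away 97.5%
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
approx = idctn(np.where(np.abs(coeffs) >= thresh, coeffs, 0), norm='ortho')

rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative l2 error after discarding 97.5% of coefficients: {rel_err:.2e}")
```

Because the sorted coefficient magnitudes of a smooth image decay quickly, the relative error remains tiny even though only 2.5% of the coefficients survive.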

This principle is, of course, what underlies most modern lossy coders such as JPEG-2000 [4] and many others, since a simple method for data compression would be to compute x from f and then (adaptively) encode the locations and values of the S significant coefficients. Such a process requires knowledge of all the n coefficients x, as the locations of the significant pieces of information may not be known in advance (they are signal dependent); in our example, they tend to be clustered around edges in the image. More generally, sparsity is a fundamental modeling tool which permits efficient fundamental signal processing; e.g., accurate statistical estimation and classification, efficient data compression, and so on. This article is about a more surprising and far-reaching implication, however, which is that sparsity has significant bearings on the acquisition process itself. Sparsity determines how efficiently one can acquire signals nonadaptively.
