Progress in Computers

Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004

Maurice Wilkes
Computer Laboratory
University of Cambridge

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949. These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designers' intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer".

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
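The effect of overlapping instructions in this way is easy to see on paper. The sketch below is a hypothetical illustration rather than a model of any real processor: it prints the timeline of four instructions flowing through an idealised five-stage pipeline with no stalls or hazards, using the generic textbook stage names.

    #include <stdio.h>

    /* Idealised five-stage pipeline (fetch, decode, execute, memory
       access, write-back); assumes no stalls or hazards. */
    int main(void) {
        const char *stage[] = { "IF", "ID", "EX", "MEM", "WB" };
        const int n_instr = 4, n_stages = 5;
        for (int cycle = 0; cycle < n_instr + n_stages - 1; cycle++) {
            printf("cycle %d:", cycle + 1);
            for (int i = 0; i < n_instr; i++) {
                int s = cycle - i;   /* instruction i enters the pipe at cycle i */
                if (s >= 0 && s < n_stages)
                    printf("  I%d:%-3s", i + 1, stage[s]);
            }
            printf("\n");
        }
        /* Run serially, 4 five-stage instructions would need 20 cycles;
           overlapped in the pipeline they finish in 4 + 5 - 1 = 8. */
        return 0;
    }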

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of "computer farm".

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed.
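The flavour of that conversion can be suggested with a toy example. The decomposition below is purely illustrative: real internal micro-operation formats are proprietary and differ from one design to another, and the operation names here are invented.

    #include <stdio.h>

    /* Invented example of the idea: an x86-style instruction that both
       reads memory and does arithmetic is split into simpler RISC-like
       micro-operations, each doing one thing. */
    typedef struct { const char *op, *dst, *src; } MicroOp;

    /* "Decode" add eax, [mem] into a load followed by a register add. */
    int decode_add_mem(MicroOp out[2]) {
        out[0] = (MicroOp){ "load", "t0",  "[mem]"    };  /* memory access only */
        out[1] = (MicroOp){ "add",  "eax", "eax + t0" };  /* ALU operation only */
        return 2;
    }

    int main(void) {
        MicroOp uops[2];
        int n = decode_add_mem(uops);
        printf("x86: add eax, [mem] -> %d micro-ops:\n", n);
        for (int i = 0; i < n; i++)
            printf("  %-4s %s <- %s\n", uops[i].op, uops[i].dst, uops[i].src);
        return 0;
    }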

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's book on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp. 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and the Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication, which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.
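The idea behind predication can be sketched in a few lines. In the illustration below, the assembly fragments in the comments are simplified and hypothetical, not quoted from any manual: the point is only that a predicated instruction is guarded by a predicate register, so an if-statement can be compiled without a branch.

    #include <stdio.h>

    /* Branch version: compiles to a compare followed by a conditional
       jump, roughly:
           cmp  a, b
           jle  take_else
           ...                                                        */
    int max_branching(int a, int b) {
        if (a > b) return a;
        return b;
    }

    /* Predicated version: both arms become ordinary instructions, each
       guarded by a predicate set from the comparison, roughly:
           cmp.gt p1, p2 = a, b    ; set complementary predicates
           (p1) mov r8 = a         ; executes only if p1 is true
           (p2) mov r8 = b         ; executes only if p2 is true
       No branch is needed, so no branch can be mispredicted.          */
    int max_predicated(int a, int b) {
        int p = (a > b);
        return p ? a : b;   /* a compiler may emit a conditional move here */
    }

    int main(void) {
        printf("%d %d\n", max_branching(3, 7), max_predicated(3, 7));
        return 0;
    }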

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86, and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.
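The arithmetic behind that observation is simple. The sketch below uses invented round numbers and ignores yield, edge loss and every other real-world complication; it shows only that halving the linear size of a die quadruples the number of chips on a wafer, and so cuts the cost per chip to roughly a quarter for a fixed wafer-processing cost.

    #include <stdio.h>

    /* Invented round numbers; no claim about real costs or die sizes. */
    int main(void) {
        const double PI = 3.14159265358979;
        double diameter_mm = 300.0;            /* a "twelve inch" wafer */
        double wafer_cost  = 4000.0;           /* hypothetical processing cost */
        double wafer_area  = PI * (diameter_mm / 2) * (diameter_mm / 2);
        double die_side[]  = { 20.0, 10.0 };   /* die edge in mm, before/after shrink */
        for (int i = 0; i < 2; i++) {
            double chips = wafer_area / (die_side[i] * die_side[i]);
            printf("%2.0f mm die: ~%4.0f chips/wafer, cost/chip ~%5.2f\n",
                   die_side[i], chips, wafer_cost / chips);
        }
        return 0;
    }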

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology.

At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's Law, or Sod's Law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it.

In our case the first part was to define the Law of General Cussedness, and the second was the problem: "A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time."

The Single-Chip Computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip, and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go.
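A flavour of what such branch prediction involves can be given by the classic two-bit saturating counter, a standard textbook scheme rather than necessarily the one any particular design of the period used. The sketch below keeps one counter for a single branch.

    #include <stdio.h>

    /* Two-bit saturating counter for one branch: states 0-1 predict
       "not taken", states 2-3 predict "taken". Each actual outcome
       moves the counter one step towards that outcome, so one odd
       outcome in a long run does not flip the prediction. */
    static int counter = 2;                    /* start weakly "taken" */

    int predict_taken(void) { return counter >= 2; }

    void update(int taken) {
        if (taken  && counter < 3) counter++;
        if (!taken && counter > 0) counter--;
    }

    int main(void) {
        /* A typical loop branch: taken nine times, then falls through. */
        int outcomes[10] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 0 };
        int correct = 0;
        for (int i = 0; i < 10; i++) {
            if (predict_taken() == outcomes[i]) correct++;
            update(outcomes[i]);
        }
        printf("%d of 10 outcomes predicted correctly\n", correct);
        return 0;
    }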
