IBM TRAINING - Session A26
AIX Performance Tuning
Jaqui Lynch, Senior Systems Engineer, Mainline Information Systems
Las Vegas, NV
Updated presentation will be at: http:/

Agenda
- AIX v5.2 versus AIX v5.3
- 32 bit versus 64 bit
- Filesystem types
- DIO and CIO
- AIX performance tunables
- Oracle specifics
- Commands
- References

New in AIX 5.2
- P5 support
- JFS2
- Large page support (16MB)
- Dynamic LPAR
- Small memory mode: better granularity in the assignment of memory to LPARs
- CUoD
- xProfiler
- New performance commands vmo, ioo and schedo replace schedtune and vmtune

AIX 5.1 status
- Will not run on p5 hardware
- Withdrawn from marketing end of April 2005
- Support withdrawn April 2006

New in AIX 5.3
- With Power5 hardware: SMT, Virtual Ethernet
- With APV: Shared Ethernet, Virtual SCSI adapter, Micro-Partitioning, PLM

New in AIX 5.3: JFS2 updates
- Improved journaling
- Extent-based allocation
- 1TB filesystems and files, with a potential of 4PB
- Advanced Accounting
- Filesystem shrink for JFS2
- Striped columns: can extend a striped LV if a disk fills up
- 1024-disk scalable volume group: 1024 PVs, 4096 LVs, 2M PPs per VG
- Quotas
- Each VG now has its own tunable pbuf pool: use the lvmo command
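The per-VG pbuf pool mentioned above is inspected with lvmo, which is shown in more detail later in this deck. As a minimal sketch of the diagnostic idea, this parses a hypothetical captured lvmo-style output (the VG name `datavg` and all field values are made up, not from the slides' system); a nonzero, growing `pervg_blocked_io_count` suggests raising `pv_pbuf_count` for that VG:

```shell
# On an AIX 5.3 host the real command would be something like:
#   lvmo -v datavg -a        (hypothetical VG name)
# Here we parse captured, hypothetical lvmo-style output instead.
printf '%s\n' \
  'vgname = datavg' \
  'pv_pbuf_count = 512' \
  'pervg_blocked_io_count = 46' |
awk -F' = ' '$1 == "pervg_blocked_io_count" && $2 + 0 > 0 {
  print $2 " I/Os blocked on pbufs - consider raising pv_pbuf_count"
}'
```

The same check could be scripted across all VGs after each disk addition, since (as the slide notes) each VG carries its own tunable pool in AIX 5.3.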

New in AIX 5.3 (continued)
- NFSv4 changes: ACLs
- NIM enhancements: security, highly available NIM, post-install configuration of EtherChannel and Virtual IP
- SUMA patch tool
- Last version to support the 32-bit kernel
- MP kernel, even on a UP
- Most commands changed to support LPAR stats
- Forced move from vmtune to ioo and vmo
- Page space scrubbing
- Plus lots and lots of other things

32 bit versus 64 bit
- 32 bit
  - Up to 96GB memory
  - Uses JFS for rootvg
  - Runs on 32- or 64-bit hardware
  - Hardware all defaults to 32 bit
  - JFS is optimized for 32 bit
  - 5.3 is the last version of AIX with a 32-bit kernel
- 64 bit
  - Allows more than 96GB memory: current max is 256GB (the architecture allows 16TB), except 590/595 (1TB and 2TB)
  - Uses JFS2 for rootvg
  - Supports 32- and 64-bit apps
  - JFS2 is optimized for 64 bit

Filesystem types
- JFS
  - 2GB file max unless BF (big file)
  - Can use with DIO
  - Optimized for 32 bit; runs on 32 or 64 bit
  - Better for lots of small file creates and deletes
- JFS2
  - Optimized for 64 bit; runs on 32 or 64 bit
  - Required for CIO; can use DIO
  - Allows larger file sizes
  - Better for large files and filesystems
- GPFS
  - Clustered filesystem; use for RAC
  - Similar to CIO: noncached, nonblocking I/O

DIO and CIO
- DIO (Direct I/O)
  - Around since AIX v5.1; used with JFS; CIO is built on it
  - Effectively bypasses filesystem caching to bring data directly into application buffers
  - Does not like compressed JFS or BF (large file enabled) filesystems; performance will suffer due to the requirement for 128KB I/O
  - Reduces CPU use and eliminates the overhead of copying data twice
  - Reads are synchronous; bypasses filesystem readahead; inode locks are still used
  - Benefits heavily random-access workloads
- CIO (Concurrent I/O)
  - Only available in JFS2; allows performance close to raw devices
  - Use for Oracle dbf and control files, and online redo logs; not for binaries
  - No system buffer caching
  - Designed for apps (such as RDBMSs) that enforce write serialization at the application level
  - Allows non-use of inode locks; implies DIO as well
  - Benefits heavy update workloads
  - Not all apps benefit from CIO and DIO; some are better with filesystem caching, and some are safer that way

Performance tuning areas and commands
- CPU: vmstat, ps, nmon
- Network: netstat, nfsstat, no, nfso
- I/O: iostat, filemon, ioo, lvmo
- Memory: lsps, svmon, vmstat, vmo, ioo

New tunables
- Old way: create rc.tune and add it to inittab
- New way: /etc/tunables
  - lastboot
  - lastboot.log
  - nextboot
- Use the -p -o options: ioo -p -o, vmo -p -o, no -p -o, nfso -p -o, schedo -p -o

Tuneables 1/3
- minperm%: the value below which we steal from computational pages; default is 20%; we lower this to something like 5%, depending on workload
- maxperm%: default is 80%; this is a soft limit and affects ALL file pages (including those in maxclient); the value above which we always steal from persistent pages; be careful, as this also affects maxclient; we no longer tune this, we use lru_file_repage instead; reducing maxperm stops file caching affecting programs that are running
- maxclient%: default is 80%; must be less than or equal to maxperm; affects NFS, GPFS and JFS2; a hard limit by default; we no longer tune this, we use lru_file_repage instead
- numperm: the percentage of real memory currently being used for caching ALL file pages
- numclient: the percentage of real memory currently being used for caching GPFS, JFS2 and NFS
- strict_maxperm: set to a soft limit by default; leave as is
- strict_maxclient: available at AIX 5.2 ML4; by default it is set to a hard limit; we used to change it to a soft limit, but now we do not

Tuneables 2/3
- maxrandwrt: random write-behind; default is 0, try 32; helps flush writes from memory before syncd runs
- syncd runs

  every 60 seconds, but that can be changed; when the threshold is reached, all new page writes are flushed to disk; old pages remain until syncd runs
- numclust: sequential write-behind; the number of 16KB clusters processed by write-behind
- j2_maxRandomWrite: random write-behind for JFS2, on a per-file basis; default is 0, try 32
- j2_nPagesPerWriteBehindCluster: default is 32; the number of pages per cluster for write-behind
- j2_nRandomCluster: JFS2 sequential write-behind; the distance apart before random is detected
- j2_nBufferPerPagerDevice: minimum filesystem bufstructs for JFS2; default 512; effective at filesystem mount

Tuneables 3/3
- minpgahead, maxpgahead, j2_minPageReadAhead and j2_maxPageReadAhead: default min = 2, max = 8; keep maxfree - minfree >= maxpgahead
- lvm_bufcnt: buffers for raw I/O; default is 9; increase if doing large raw I/Os (no JFS)
- numfsbufs: filesystem buffers; helps write performance for large write sizes
- pv_min_pbuf: pinned buffers to hold JFS I/O requests; increase if doing large sequential I/Os, to stop I/Os bottlenecking at the LVM; one pbuf is used per sequential I/O request regardless of the number of pages; with AIX v5.3 each VG gets its own set of pbufs; prior to AIX 5.3 it was a system-wide setting
- sync_release_ilock: allows sync to flush all I/O to a file without holding the inode lock, and then use the inode lock to do the commit; be very careful, this is an advanced parameter
- minfree and maxfree: used to set the values between which AIX will steal pages; maxfree is the number of frames on the free list at which stealing stops (must be >= minfree + 8); minfree is the number used to determine when the VMM starts stealing pages to replenish the free list; set on a per-memory-pool basis, so if there are 4 pools and minfree = 1000 then stealing starts at 4000 pages; 1 LRUD per pool; the default is 1 pool per 8 processors
- lru_file_repage: default is 1, set it to 0; available on >= AIX v5.2 ML5 and v5.3; a value of 1 means the LRUD steals persistent pages unless numperm < minperm
- lru_poll_interval: set to 10; improves the responsiveness of the LRUD when it is running

Minfree/maxfree
- Set on a memory pool basis, so if there are 4 pools and minfree = 1000 then stealing starts at 4000 pages
- 1 LRUD per pool; default pools is

  1 per 8 processors
- cpu_scale_memp can be used to change the number of memory pools
- Try to keep the distance between minfree and maxfree
- nmon
- Check error logs

ioo output
lvm_bufcnt = 9
minpgahead = 2
maxpgahead = 8
maxrandwrt = 32 (default is 0)
numclust = 1
numfsbufs = 186
sync_release_ilock = 0
pd_npages = 65536
pv_min_pbuf = 512
j2_minPageReadAhead = 2
j2_maxPageReadAhead = 8
j2_nBufferPerPagerDevice = 512
j2_nPagesPerWriteBehindCluster = 32
j2_maxRandomWrite = 0
j2_nRandomCluster = 0

vmo output
DEFAULTS                 OFTEN SEEN
maxfree = 128            maxfree = 1088
minfree = 120            minfree = 960
minperm% = 20            minperm% = 10
maxperm% = 80            maxperm% = 30
maxpin% = 80             maxpin% = 80
maxclient% = 80          maxclient% = 30
strict_maxclient = 1     strict_maxclient = 0
strict_maxperm = 0       strict_maxperm = 0

numclient and numperm are both 29.9, so numclient - numperm = 0 above, which means the file-caching use is probably all JFS2/NFS/GPFS. Remember to switch to the new method using lru_file_repage.

iostat
IGNORE THE FIRST LINE - it is the average since boot. Run iostat over an interval (i.e. iostat 2 30):

tty:  tin   tout    avg-cpu:  %user  %sys  %idle  %iowait  physc  %entc
      0.0   1406.0            93.1   6.9   0.0    0.0      12.0   100.0

Disks:   %tm_act  Kbps     tps     Kb_read  Kb_wrtn
hdisk1   1.0      1.5      3.0     0        3
hdisk0   6.5      385.5    19.5    0        771
hdisk14  40.5     13004.0  3098.5  12744    13264
hdisk7   21.0     6926.0   271.0   440      13412
hdisk15  50.5     14486.0  3441.5  13936    15036
hdisk17  0.0      0.0      0.0     0        0

iostat -a (adapters)
System configuration: lcpu=16 drives=15

tty:  tin   tout   avg-cpu:  %user  %sys  %idle  %iowait
      0.4   195.3            21.4   3.3   64.7   10.6

Adapter:  Kbps    tps    Kb_read     Kb_wrtn
fscsi1    5048.8  516.9  1044720428  167866596

Disks:   %tm_act  Kbps    tps    Kb_read    Kb_wrtn
hdisk6   23.4     1846.1  195.2  381485286  61892408
hdisk9   13.9     1695.9  163.3  373163554  34143700
hdisk8   14.4     1373.3  144.6  283786186  46044360
hdisk7   1.1      133.5   13.8   6285402    25786128

Adapter:  Kbps    tps    Kb_read    Kb_wrtn
fscsi0    4438.6  467.6  980384452  85642468

Disks:   %tm_act  Kbps    tps    Kb_read    Kb_wrtn
hdisk5   15.2     1387.4  143.8  304880506  28324064
hdisk2   15.5     1364.4  148.1  302734898  24950680
hdisk3   0.5      81.4    6.8    3515294    16043840
hdisk4   15.8     1605.4  168.8  369253754  16323884

iostat -D (extended drive report)
hdisk3  xfer:   %tm_act  bps      tps      bread    bwrtn
                0.5      29.7K    6.8      15.0K    14.8K
        read:   rps      avgserv  minserv  maxserv  timeouts  fails
                29.3     0.1      0.1      784.5    0         0
        write:  wps      avgserv  minserv  maxserv  timeouts  fails
                133.6    0.0      0.3      2.1S     0         0
        wait:   avgtime  mintime  maxtime  avgqsz   qfull
                0.0      0.0      0.2      0.0      0

iostat - other options
iostat -A (async I/O)
System configuration: lcpu=16 drives=15

aio:  avgc  avfc  maxg  maif  maxr   avg-cpu:  %user  %sys  %idle  %iowait
      150   0     5652  0     12288            21.4   3.3   64.7   10.6

Disks:   %tm_act  Kbps    tps    Kb_read    Kb_wrtn
hdisk6   23.4     1846.1  195.2  381485298  61892856
hdisk5   15.2     1387.4  143.8  304880506  28324064
hdisk9   13.9     1695.9  163.3  373163558  34144512

iostat -m (paths)
System configuration: lcpu=16 drives=15

tty:  tin   tout   avg-cpu:  %user  %sys  %idle  %iowait
      0.4   195.3            21.4   3.3   64.7   10.6

Disks:   %tm_act  Kbps  tps  Kb_read  Kb_wrtn
hdisk0   1.6      17.0  3.7  1190873  2893501

Paths:   %tm_act  Kbps  tps  Kb_read  Kb_wrtn
Path0    1.6      17.0  3.7  1190873  2893501

lvmo
lvmo output:
- vgname = rootvg (the default, but you can change it with -v)
- pv_pbuf_count = 256: pbufs to add when a new disk is added to this VG
- total_vg_pbufs = 512: current total number of pbufs available for the volume group
- max_vg_pbuf_count = 8192: max pbufs that can be allocated to this VG
- pervg_blocked_io_count = 0: number of I/Os blocked due to lack of free pbufs for this VG
- global_pbuf_count = 512: minimum pbufs to add when a new disk is added to a VG
- global_blocked_io_count = 46: number of I/Os blocked due to lack of free pbufs for all VGs

lsps -a (similar to pstat)
- Ensure all page datasets are the same size, although hd6 can be bigger; ensure more page space than memory, especially if not all page datasets are in rootvg
- rootvg page datasets must be big enough to hold the kernel
- Only includes pages allocated (the default)
- Use lsps -s to get all pages (includes those reserved via early allocation, PSALLOC=early)
- Use multiple page datasets on multiple disks for parallelism

lsps output:
lsps -a
Page Space  Physical Volume  Volume Group  Size    %Used  Active  Auto  Type
paging05    hdisk9           pagvg01       2072MB  1      yes     yes   lv
paging04    hdisk5           vgpaging01    504MB   1      yes     yes   lv
paging02    hdisk4           vgpaging02    168MB   1      yes     yes   lv
paging01    hdisk3           vgpaging03    168MB   1      yes     yes   lv
paging00    hdisk2           vgpaging04    168MB   1      yes     yes   lv
hd6         hdisk0           rootvg        512MB   1      yes     yes   lv

lsps -s
Total Paging Space  Percent Used
3592MB              1%

Bad layout above - it should be balanced. Make hd6 the biggest by one LP, or the same size as the others, in a mixed environment like this.

svmon terminology
- persistent: segments used to manipulate files and directories
- working: segments used to implement the data areas of processes and shared memory segments
- client: segments used to implement some virtual file systems like the Network File System (NFS) and the CD-ROM file system
- http:/

svmon output:
            size      inuse     free     pin      virtual
memory      26279936  18778708  7501792  3830899  18669057
pg space    7995392   53026

            work      pers   clnt   lpage
pin         3830890   0      0      0
in use      18669611  80204  28893  0

In GB this equates to:
            size    inuse  free   pin    virtual
memory      100.25  71.64  28.62  14.61  71.22
pg space    30.50   0.20

            work   pers  clnt  lpage
pin         14.61  0     0     0
in use      71.22  0.31  0.15  0

General recommendations
- Put different hot LVs on separate physical volumes
- Stripe hot LVs across disks to parallelize
- Mirror read-intensive data
- Ensure LVs are contiguous: use lslv and look at in-band % and distrib; reorgvg if needed to reorg LVs
- writeverify = no
- minpgahead = 2, maxpgahead = 16 for a 64KB stripe size
- Increase maxfree if you adjust maxpgahead
- Tweak minperm, maxperm and maxrandwrt
- Tweak lvm_bufcnt if doing a lot of large raw I/Os
- If JFS2, tweak the j2 versions of the above fields
- Clean out inittab, rc.tcpip, inetd.conf, etc. for things that should not start; make sure you do not do it partially (i.e. portmap is in both rc.tcpip and rc.nfs)

Oracle specifics
- Use JFS2 with external JFS2 logs (if high write; otherwise internal logs are fine)
- Use CIO where it will benefit you; do not use it for Oracle binaries
- Leave DISK_ASYNCH_IO=TRUE in Oracle
- Tweak the maxservers AIO settings
- If using JFS: do not allocate JFS with BF (LFE), as it increases the DIO transfer size from 4KB to 128KB; 2GB is the largest file size; do not use compressed JFS, as it defeats DIO

Tools
- vmstat for processor and memory
- nmon: http:/
  To get a 2-hour snapshot (240 x 30 seconds): nmon -fT -c 30 -s 240
  Creates a file in the directory that ends in .nmon
- nmon analyzer: http:/
  You need to copy the .nmon file over; it opens as an Excel spreadsheet and then analyses the data
- sar: sar -A -o filename 2 30 >/dev/null
  Creates a snapshot to a file; in this case 30 snaps 2 seconds apart
- ioo, vmo, schedo, vmstat -v
- lvmo
- lparstat, mpstat
- iostat
- Check out alphaWorks for the Graphical LPAR tool
- Many, many more

Other tools
- filemon: filemon -v -o filename -O all; sleep 30; trcstop
- pstat to check async I/O: pstat -a | grep aio | wc -l
- perfpmr to build performance info for IBM if reporting a PMR: /usr/bin/perfpmr.sh 300

lparstat
lparstat -h
System configuration: type=shared mode=Uncapped smt=On lcpu=4 mem=512 ent=5.0

%user  %sys  %wait  %idle  physc  %entc  lbusy  app  vcsw  phint  %hypv  hcalls
0.0    0.5   0.0    99.5   0.00   1.0    0.0    -    1524  0      0.5    1542
16.0   76.3  0.0    7.7    0.30   100.0  90.5   -    321   1      0.9    259

- physc: physical processors consumed
- %entc: percent of entitled capacity
- lbusy: logical processor utilization for system and user
- vcsw: virtual context switches
- phint: phantom interrupts to other partitions
- %hypv: percent of time in the hypervisor for this LPAR (weird numbers may be seen on an idle system)
- http:/

mpstat
mpstat -s
System configuration: lcpu=4 ent=0.5

Proc0           Proc1
49.63%          0.27%
cpu0    cpu1    cpu2    cpu3
3.14%   46.49%  0.17%   0.10%

The above shows how the processor is distributed using SMT.
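In the mpstat -s layout, each virtual processor's utilization is the sum of its SMT logical CPUs: 3.14% + 46.49% = 49.63% for Proc0, and 0.17% + 0.10% = 0.27% for Proc1. A small sketch that reproduces that aggregation from the example's figures (the input triples are hand-keyed from the slide, not real mpstat output):

```shell
# Sum per-logical-CPU busy% into per-processor totals, mpstat -s style.
printf '%s\n' \
  'Proc0 cpu0 3.14' \
  'Proc0 cpu1 46.49' \
  'Proc1 cpu2 0.17' \
  'Proc1 cpu3 0.10' |
awk '{ busy[$1] += $3 } END { for (p in busy) printf "%s %.2f%%\n", p, busy[p] }' |
sort
# prints:
# Proc0 49.63%
# Proc1 0.27%
```

The heavily skewed cpu3 versus cpu2 split in the original output is exactly what an SMT-aware dispatcher produces when one thread of a core is largely idle.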

Async I/O
- Total number of AIOs in use: pstat -a | grep aios | wc -l
  or the new way: ps -k | grep aio | wc -l
  4205
- AIO max possible requests: lsattr -El aio0 -a maxreqs
  maxreqs 4096 Maximum number of REQUESTS True
- AIO maxservers: lsattr -El aio0 -a maxservers
  maxservers 320 MAXIMUM number of servers per cpu True
- NB: maxservers is a per-processor setting in AIX 5.3
- Look at using fastpath; fastpath can now be enabled with DIO/CIO
- See Session A23 by Grover Davidson for a lot more info on async I/O

I/O pacing
- Useful to turn on during backups (streaming I/Os)
- Set the high value to a multiple of (4*n)+1
- Limits the number of outstanding
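The (4*n)+1 rule for the pacing high-water mark can be sketched as below. This is only an illustration: the n values are arbitrary, and the chdev invocation in the comment is the usual AIX knob for the sys0 maxpout/minpout attributes rather than something taken from these slides.

```shell
# I/O pacing high-water mark (maxpout) should be of the form (4*n)+1;
# on AIX it would then be applied with something like:
#   chdev -l sys0 -a maxpout=33 -a minpout=24
for n in 8 16 32; do
  echo "n=$n -> maxpout=$((4 * n + 1))"
done
# prints:
# n=8 -> maxpout=33
# n=16 -> maxpout=65
# n=32 -> maxpout=129
```

minpout (the low-water mark at which a paced process may resume writing) is typically chosen somewhat below maxpout; 33/24 is a commonly cited starting pair, though the right values are workload-dependent.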
