Software Metrics Guide
(sunset.usc.edu/classes/metrics.html, 5/10/2009)

1 Introduction
2 Metrics Set
  2.1 Progress
  2.2 Effort
  2.3 Cost
  2.4 Review Results
  2.5 Trouble Reports
  2.6 Requirements Stability
  2.7 Size Stability
  2.8 Computer Resource Utilization
  2.9 Training
3 Overview of Project Procedures

1 Introduction

Effective management of the software development process requires effective measurement of that process. This guide presents an overview of the collection, analysis, and reporting of software metrics. Only the progress, effort, and trouble report metrics are required for the project. However, the student should be familiar with all the metrics described below.

Software metrics are numerical data related to software development. Metrics strongly support software project management activities. They relate to the four functions of management as follows:

1. Planning - Metrics serve as a basis for cost estimating, training planning, resource planning, scheduling, and budgeting.
2. Organizing - Size and schedule metrics influence a project's organization.
3. Controlling - Metrics are used to status and track software development activities for compliance to plans.
4. Improving - Metrics are used as a tool for process improvement, to identify where improvement efforts should be concentrated, and to measure the effects of process improvement efforts.

A metric quantifies a characteristic of a process or product. Metrics can be directly observable quantities or can be derived from one or more directly observable quantities. Examples of raw metrics include the number of source lines of code, number of documentation pages, number of staff-hours, number of tests, and number of requirements. Examples of derived metrics include source lines of code per staff-hour, defects per thousand lines of code, or a cost performance index.

The term indicator denotes a representation of metric data that provides insight into an ongoing software development project or process improvement activity. Indicators are metrics in a form suitable for assessing project behavior or process improvement. For example, an indicator may be the behavior of a metric over time or the ratio of two metrics. Indicators may include the comparison of actual values versus the plan, project stability metrics, or quality metrics. Examples of indicators used on a project include actual versus planned task completions, actual versus planned staffing, number of trouble reports written and resolved over time, and number of requirements changes over time.

Indicators are used in conjunction with one another to provide a more complete picture of project or organization behavior. For example, a progress indicator is related to requirements and size indicators. All three indicators should be used and interpreted together.
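To make the raw/derived distinction concrete, here is a minimal sketch (all values are hypothetical, for illustration only) computing two of the derived metrics named above from raw metrics:

```python
# Raw metrics (hypothetical values for illustration)
sloc = 12_000        # source lines of code
staff_hours = 1_500  # total effort expended so far
defects = 54         # defects found to date

# Derived metrics
productivity = sloc / staff_hours         # SLOC per staff-hour
defect_density = defects / (sloc / 1000)  # defects per KSLOC

print(f"Productivity:   {productivity:.1f} SLOC/staff-hour")
print(f"Defect density: {defect_density:.1f} defects/KSLOC")
```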
2 Metrics Set

The metrics to be collected provide indicators that track ongoing project progress, software products, and software development processes. The defined indicators are consistent with the Software Engineering Institute's Capability Maturity Model (CMM). Table 1 shows the indicator categories, the management insight provided, and the specific indicators for the recommended metrics. Depending upon the nature of the project, specific contractual requirements, or management preference, a project may choose to collect additional metrics or to tailor the recommended set.

Chart Construction Summary

Charts are prepared for the standard metrics. All charts require titles, legends, and labels for all axes. They should clearly and succinctly show the metrics of interest, with no excessive detail to distract the eye. Do not overuse different line types, patterns, colors, or added dimensionality unless used specifically to differentiate items. Overlaid data is preferable to multiple charts when the different data are related to each other and can be meaningfully depicted without obscuring other details.

The most common type of chart is the tracking chart. This chart is used extensively for the Progress indicator, and is used in similar forms for many of the other indicators. For task progress, it depicts the cumulative number of planned and actual task completions (or milestones) against time. For other indicators, it may show actual versus planned staffing profiles, actual versus planned software size, actual versus planned resource utilization, or other measures compared over time.

There are many ways to modify the tracking chart. A horizontal planned line representing the cumulative goal can be drawn at the top, multiple types of tasks can be overlaid on a single tracking chart (such as design, code, and integration), or the chart can be overlaid with other types of data. It is recommended that tracked quantities be shown as a line chart, and that replanned task progress be shown as a separate planning line.
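The data series behind a tracking chart are simple running totals. As a sketch (the per-month completion counts are hypothetical), the cumulative planned and actual series can be built like this:

```python
from itertools import accumulate

# Hypothetical task completions per month
planned = [4, 6, 8, 8, 6, 4]
actual  = [3, 5, 7, 6, 5, 0]  # final month not yet complete

# Cumulative series, as plotted on a tracking chart
cum_planned = list(accumulate(planned))
cum_actual  = list(accumulate(actual))

for month, (p, a) in enumerate(zip(cum_planned, cum_actual), start=1):
    print(f"Month {month}: planned {p:3d}, actual {a:3d}, deviation {a - p:+d}")
```

Plotting both cumulative series against time, with the planned baseline kept unchanged, yields the tracking chart described above.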
17、The original planned baseline is kept on the chart,as well as all replanningdata if there is more than a single replan.The following sections provide brief descriptions of the different metrics categories with samples of therequired charts.Individual projects may enhance the charts for their situati
18、ons or have additional charts for thecategories.The sample charts are designed for overhead presentations and are available as templates from theprofessor.Table 1 Recommended Metrics Set for a ProjectIndicatorCategoryManagement InsightIndicatorsProgressProvides information on how well the project is
19、performing with respect to its schedule.Actual vs.planned taskcompletionsActual vs.planneddurationsEffortProvides visibility into thecontributions of staffing onproject costs,schedule adherence,and product quality.Actual vs.plannedstaffing profilesCostProvides tracking of actual costs against estima
20、ted costsand predicts future costs.Actual vs.planned costsCost and schedulevariancesReviewResultsProvides status of action items from life-cycle review.Status of action itemsTroubleReportsProvides insight into product and process quality andthe effectiveness of the testing.Status of trouble reportsN
21、umber of trouble reportsopened,closed,etc.duringreporting periodRequirementsProvides visibility into the magnitude and impact ofNumber of requirementschanges/clarifications5/10/2009Software Metricssunset.usc.edu/classes/metrics.html3/13Stabilityrequirements changes.Distribution ofrequirements over r
22、eleasesSize StabilityProvides insight into the completeness and stabilityof the requirements and into the ability of the staff tocomplete the project within the current budget andschedule.Size growthDistribution of size overreleasesComputerResourceUtilizationProvides information on how well the proj
23、ect is meeting its computer resourceutilization goals/requirements.Actual vs.planned profiles ofcomputer resource utilizationTrainingProvides information on the training program and staff skills.Actual vs.planned number ofpersonnel attending classes2.1 ProgressProgress indicators provide information
Progress indicators provide information on how well the project is performing with respect to planned task completions and keeping schedule commitments. Tasks are scheduled and then progress is tracked against the schedules. Metrics are collected for the activities and milestones identified in the project schedules. Metrics on actual completions are compared to those of planned completions to determine whether there are deviations from the plan. The difference between the actual and planned completions indicates the deviation from the plan.

Each project identifies the tasks for which progress metrics will be collected. The completion criteria for each task must be well defined and measurable. The project should establish range limits (thresholds) on the planned task progress for the project. The thresholds are used for management of software development risk.

Figure 1 depicts the cumulative number of planned and actual completions (or milestones) over time. Note that this chart is generic, and each project will substitute specific tasks (units, milestones, SLOCs, etc.). Additionally, each project is expected to produce multiple progress charts for different types of tasks, different teams, etc.

Figure 1 Progress Indicator
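The range-limit check described above can be sketched as follows (the cumulative counts and the 10% threshold are hypothetical; each project sets its own limits):

```python
# Hypothetical cumulative values at the current reporting period
cum_planned = 26
cum_actual = 21
threshold = 0.10  # flag when actual lags the plan by more than 10%

# Relative deviation from the plan; negative means behind schedule
deviation = (cum_actual - cum_planned) / cum_planned
if deviation < -threshold:
    print(f"ALERT: progress is {deviation:.0%} vs. plan, beyond threshold")
else:
    print(f"Progress within threshold ({deviation:.0%})")
```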
2.2 Effort

Effort indicators allow the software manager to track personnel resources. They provide visibility into the contribution of staffing to project costs, schedule adherence, product quality, and the amount of effort required for each activity. Effort indicators include trends in actual staffing levels, the staffing profile by activity or labor category, or a profile of unplanned staff losses.

Effort indicators may be used by all levels of project software management to measure the actual profile against the plan. Each level of management forms a profile for its area of control and monitors the actual profile against the plan.

Determining the number of staff needed at any one time is an important function performed by software management. By summing the number of staff during each reporting period, the composite staffing profile for the project can be determined.

These indicators are applied during all life-cycle phases, from project inception to project end. Effort metrics are to be collected and reported at least on a monthly basis.

The effort and cost metrics are related. By convention, effort metrics are non-cumulative expenditures of human resources, and cost metrics are cumulative levels of effort as tracked by earned value. Thus, cost metrics are a cumulative depiction of effort. Figure 2 shows a sample plot of monthly actual versus planned effort.

Figure 2 Effort Indicator
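The convention above (monthly effort is non-cumulative; cost is its running total) can be sketched with hypothetical monthly figures:

```python
from itertools import accumulate

# Hypothetical monthly effort in staff-hours (non-cumulative)
monthly_effort = [160, 320, 480, 480, 320]

# Cost, by this convention, is the cumulative depiction of effort
cumulative_cost = list(accumulate(monthly_effort))
print(cumulative_cost)  # [160, 480, 960, 1440, 1760]
```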
2.3 Cost

Cost management is an important activity for the success of a project, and labor is the primary component of software development cost. Managers must define the work in their area, determine the skill level required to perform the work, and use productivity estimates and schedule constraints to determine budgeted costs over time.

Use staff-hours to measure cost, rather than dollars. The dollars per staff-hour varies over time and by labor category, and the conversion is made only by Finance. Cost is related to the effort indicator, with cost defined as an accumulation of effort expenditures. (The total project cost also includes non-labor costs, but they are not tracked here.) Only those projects using earned value can report the earned value quantities.

A Work Breakdown Structure (WBS) is established to define the structures that will be used to collect the costs. The WBS identifies separate elements for requirements, design, documentation, code and unit test, integration, verification, and system testing. Costs can also be segregated by component, function, or configuration item. Work packages are derived from the WBS. Costs are allocated to work packages using an earned value method. This system allows managers to track the actual costs and measure them against the budget for their respective areas of responsibility.

Figure 3 is a sample Cost Indicator chart. The actual and budgeted quantities are derived from an earned value system, and are shown in terms of staff-hours.

Figure 3 Cost Indicator
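The cost and schedule variances listed in Table 1 come from the earned value quantities. This guide does not define them, so the sketch below uses the standard earned value terms (BCWS, BCWP, ACWP) with hypothetical values:

```python
# Hypothetical earned value quantities, in staff-hours
bcws = 1000  # budgeted cost of work scheduled (plan to date)
bcwp = 900   # budgeted cost of work performed (earned value)
acwp = 1100  # actual cost of work performed

cost_variance = bcwp - acwp      # negative: over cost
schedule_variance = bcwp - bcws  # negative: behind schedule
cpi = bcwp / acwp                # cost performance index (see Section 1)

print(cost_variance, schedule_variance, round(cpi, 2))
```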
2.4 Review Results

Review Results indicators provide insight into the status of action items from life-cycle reviews. The term Action Item (AI) refers to inspection defects and customer comments. Reviews include the following:

- Formal inspections of software documents or code
- Formal customer milestones, e.g., SSR, PDR, CDR, or TRR
- Informal peer evaluations of products, e.g., walkthroughs, technical reviews, or internal PDRs
- Management reviews
- Process reviews, e.g., SQA audits, SEI CMM assessments, or the causal analysis from formal inspections

There are standards for some reviews, as well as procedures for conducting them. For example, formal inspections result in assertion logs that document the minor and major defects uncovered by the inspection process. Therefore, standard review result indicators for formal inspections are:

1. Counts of minor/major defects
2. Rates of defect detection (e.g., assertions per inspection meeting minute, defects per inspected document page, or defects per KSLOC of code inspected)
3. Defect status (e.g., age of open defects, number of open/closed defects, and breakdown by defect categories)
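A minimal sketch of the detection-rate indicators above, using hypothetical data from a single formal inspection:

```python
# Hypothetical data from one formal inspection
assertions = 18        # defects logged in the assertion log
meeting_minutes = 90   # length of the inspection meeting
ksloc_inspected = 1.2  # thousands of source lines inspected

per_minute = assertions / meeting_minutes   # assertions per meeting minute
per_ksloc = assertions / ksloc_inspected    # defects per KSLOC inspected
print(f"{per_minute:.2f} assertions/minute, {per_ksloc:.1f} defects/KSLOC")
```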
A customer-conducted review such as a Preliminary Design Review (PDR) generates AIs that must be closed before approval of the Software Design Document. Therefore, standard review result indicators for a PDR are the number of comments written and their status (open, closed, and age).

Review metrics record the AIs identified in the review findings and track them until they are resolved. These metrics provide status on both products and processes. Review results are not to be used to evaluate the performance of individuals.

Review Results are collected and reported at least monthly at every stage of the software life cycle, but preferably weekly for key AIs. Figure 4 depicts the cumulative counts of AIs written and closed by reporting period.

Figure 4 Review Results Indicator

2.5 Trouble Reports

Trouble Report (TR) indicators provide managers with insight into the quality of the product, software reliability, and the effectiveness of testing. They also provide information on the software development process. The terms defect and problem will be used interchangeably herein. Monthly tracking of TR indicators shows the project's trends in the following areas:

1. The rate at which TRs are being written and resolved.
2. The type and severity of the TRs.
3. The relationship between the number of TRs and the number of test cases passed or the number of test steps passed.
4. The TR density (the number of TRs per unit size).
5. The number of defects in each software application/unit.

TR indicators are applicable only in the following life-cycle stages (and each release of the software within these stages, and during the informal and formal test segments of these stages): (1) application test and integration, (2) system test, and (3) acceptance test. Thus the TR indicators are applicable only to defects found during the operation or execution of a computer program. Due to the shortness of testing periods, and the dynamics involved between the test team and the implementation team that analyzes the TRs and fixes the defects, the TR indicators are generally evaluated on a weekly basis.

The terms open and closed are defined as follows:

Open - The problem has been reported.
Closed - The investigation is complete and the action required to resolve the problem has been proposed, implemented, and verified to the satisfaction of all concerned. In some cases, a TR will be found to be invalid as part of the investigative process and closed immediately.
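The weekly open/closed bookkeeping can be sketched as follows (the weekly counts are hypothetical):

```python
from itertools import accumulate

# Hypothetical TRs written and closed per week
written = [5, 8, 6, 4]
closed  = [2, 5, 7, 5]

# Cumulative counts, as plotted on the TR tracking chart
cum_written = list(accumulate(written))
cum_closed  = list(accumulate(closed))

# Open TRs each week = written to date minus closed to date
open_trs = [w - c for w, c in zip(cum_written, cum_closed)]
print(open_trs)
```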
Figure 5 shows the cumulative count of total, open, and closed TRs over time (weekly periods).

Figure 5 TR Indicator

2.6 Requirements Stability

Requirements Stability provides an indication of the completeness, stability, and unde