Softwaretechnik / Software-Engineering
Lecture 3: More Metrics & Cost Estimation
2018-04-23
Prof. Dr. Andreas Podelski, Dr. Bernd Westphal
Albert-Ludwigs-Universität Freiburg, Germany


  1. Softwaretechnik / Software-Engineering. Lecture 3: More Metrics & Cost Estimation, 2018-04-23. Prof. Dr. Andreas Podelski, Dr. Bernd Westphal, Albert-Ludwigs-Universität Freiburg, Germany.

  2. Survey: Softwarepraktikum. (Bar chart, scale 10-30, of responses: participated earlier, plan to participate this semester or later, or not taking part in the study.)

  3. Topic Area Project Management: Content
     • Software Metrics (VL 2): Properties of Metrics, Scales, Examples
     • Cost Estimation (VL 3): "(Software) Economics in a Nutshell", Expert's Estimation, Algorithmic Estimation
     • Project Management (VL 4): Project, Process and Process Modelling, Procedure Models
     • Process Models (VL 5): Process Metrics, CMMI, SPICE

  4. Content
     • Software Metrics
       • Subjective Metrics
       • Goal-Question-Metric Approach
     • Cost Estimation
       • "(Software) Economics in a Nutshell"
       • Cost Estimation
       • Expert's Estimation: The Delphi Method
       • Algorithmic Estimation: COCOMO, Function Points

  5. Kinds of Metrics: by Measurement Procedure
     • Objective metric
       • Procedure: measurement, counting (possibly standardised)
       • Example: body height, air pressure
       • Example in Software Engineering: size in LOC or NCSI; number of (known) bugs
       • Usually used for: collection of simple base measures
       • Advantages: exact, reproducible, can be obtained automatically
       • Disadvantages: not always relevant, often no interpretation
     • Pseudo metric
       • Procedure: computation based on measurements or assessments
       • Example: body mass index (BMI), weather forecast for the next day
       • Example in Software Engineering: productivity; cost estimation by COCOMO
       • Usually used for: predictions (cost estimation); error weighting
       • Advantages: yields relevant, directly usable statements on characteristics that are not directly visible
       • Disadvantages: hard to comprehend, subvertable, pseudo-objective
     • Subjective metric
       • Procedure: review by an inspector, verbal assessment or by a given scale
       • Example: health condition, general weather condition ("bad weather")
       • Example in Software Engineering: usability; severeness of an error
       • Usually used for: quality assessment; overall assessments
       • Advantages: not subvertable, plausible results, applicable to complex characteristics
       • Disadvantages: assessment costly, quality of results depends on the inspector
     (Ludewig and Lichter, 2013)

  6. Pseudo-Metrics. Some of the most interesting aspects of software development projects are (today) hard or impossible to measure directly, e.g.:
     • how maintainable is the software?
     • do all modules do appropriate error handling?
     • how much effort is needed until completion?
     • is the documentation sufficient and well usable?
     • how productive are my software people?
     Due to their high relevance, people want to measure these aspects despite the difficulty. Two main approaches: expert review (grading) and pseudo-metrics (derived measures); they differ in how well they are differentiated, reproducible, comparable, economical, plausible, available, relevant, and robust.
     Note: not every derived measure is a pseudo-metric:
     • average LOC per module: derived, not pseudo. We really measure average LOC per module.
     • maintainability measured as average LOC per module: derived and pseudo. We do not really measure maintainability; average LOC is only interpreted as maintainability.
     A pseudo-metric is not robust if it is easily subvertible (see exercises).
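The distinction between a derived measure and a pseudo-metric can be sketched in a few lines. The module sizes and the 300-LOC threshold below are made-up illustration values, not part of the lecture; the threshold is exactly the "interpretation" step that turns the derived measure into a pseudo-metric.

```python
# Hypothetical module sizes in lines of code.
loc_per_module = {"parser": 420, "scheduler": 180, "ui": 900}

# Derived measure: average LOC per module -- we really measure exactly this.
avg_loc = sum(loc_per_module.values()) / len(loc_per_module)
print(f"average LOC per module: {avg_loc:.0f}")

# Pseudo-metric: the same number *interpreted* as maintainability.
# The 300-LOC threshold is an assumption made up for this sketch.
maintainability = "good" if avg_loc <= 300 else "poor"
print(f"maintainability (interpreted): {maintainability}")
```

The first print reports what was actually measured; the second reports an interpretation that could be subverted, e.g. by splitting modules without improving the code.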

  7. Kinds of Metrics: by Measurement Procedure. (The table of slide 5 shown again: procedure, advantages, disadvantages, and examples for objective, pseudo, and subjective metrics.) (Ludewig and Lichter, 2013)

  8. Subjective Metrics
     • Statement. Example: "The specification is available." Problems: terms may be ambiguous; conclusions are hardly possible. Countermeasures: allow only certain statements, characterise them precisely.
     • Assessment. Example: "The module is implemented in a clever way." Problems: not necessarily comparable. Countermeasures: only offer particular outcomes; put them on an (at least ordinal) scale.
     • Grading. Example: "Readability is graded 4.0." Problems: subjective; grading not reproducible. Countermeasures: define criteria for grades; give examples how to grade; practice on existing artefacts.
     (Ludewig and Lichter, 2013)

  9. The Goal-Question-Metric Approach

  10. Information Overload!? Now we have mentioned nearly 60 attributes one could measure... Which ones should we measure? It depends... (Word cloud of attribute names.) One approach: Goal-Question-Metric (GQM).

  11. Goal-Question-Metric (Basili and Weiss, 1984). The three steps of GQM:
     (i) Define the goals relevant for a project or an organisation.
     (ii) From each goal, derive questions which need to be answered to check whether the goal is reached.
     (iii) For each question, choose (or develop) metrics which contribute to finding answers.
     Being good wrt. a certain metric is (in general) not an asset on its own: we usually want to optimise wrt. goals, not wrt. metrics. Particularly critical: pseudo-metrics for quality. Software and process measurements may yield personal data ("personenbezogene Daten"); their collection may be regulated by laws.
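The three GQM steps naturally form a small tree from goals via questions to metrics. The concrete goals, questions, and metric names below are illustrative assumptions, not taken from the lecture; the point is only the structure: a metric is collected because some question needs it, and a question exists because some goal requires an answer.

```python
# Minimal GQM sketch: goal -> questions -> metrics.
# All concrete entries are illustrative assumptions.
gqm = {
    "improve maintainability": {
        "are modules kept small?": ["average LOC per module"],
        "is the code well documented?": ["comment ratio", "review grade"],
    },
    "reduce field defects": {
        "how many defects escape testing?": ["post-release bug count"],
    },
}

# Walk the tree: every metric traces back to a question and a goal.
for goal, questions in gqm.items():
    for question, metrics in questions.items():
        for metric in metrics:
            print(f"{metric}  <- answers ->  {question}  <- checks ->  {goal}")
```

Reading the tree bottom-up also makes the lecture's warning concrete: a metric that answers no question for any goal is a candidate for dropping, however easy it is to collect.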

  12. Example: A Metric for Maintainability
     • Goal: assess maintainability.
     • One approach: grade the following aspects, e.g., with scale S = {0, ..., 10}. (Some aspects may be objective, some subjective (conduct review).)
       • Norm Conformance: n1 size of units (modules etc.), n2 labelling, n3 naming of identifiers, n4 design (layout), n5 separation of literals, n6 style of comments
       • Readability: r1 data types, r2 structure of control flow, r3 comments
       • Locality: l1 use of parameters, l2 information hiding, l3 local flow of control, l4 design of interfaces
       • Testability: t1 test driver, t2 test data, t3 preparation for test evaluation, t4 diagnostic components, t5 dynamic consistency checks
       • Typing: y1 type differentiation, y2 type restriction
     • Define: m = (n1 + ··· + y2) / 20; with weights: m_g = (g1·n1 + ··· + g20·y2) / G, where G = g1 + ··· + g20.
     • Procedure: train reviewers on existing examples; do not over-interpret results of first applications; evaluate and adjust before putting to use, adjust regularly.
     (Ludewig and Lichter, 2013)
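The weighted definition m_g is a plain weighted mean over the 20 aspect grades n1, ..., y2. The grades and weights below are made-up values on the scale S = {0, ..., 10}, chosen only to show the computation.

```python
# 20 aspect grades on scale S = {0,...,10}, in the order
# n1..n6, r1..r3, l1..l4, t1..t5, y1..y2. Values are illustrative.
grades  = [7, 8, 6, 9, 5, 7,  8, 6, 7,  9, 8, 7, 6,  5, 6, 7, 8, 4,  9, 8]
weights = [1] * 20  # with all weights 1, m_g reduces to the unweighted mean m

# m_g = (g1*n1 + ... + g20*y2) / G,  G = g1 + ... + g20
G = sum(weights)
m_g = sum(g * x for g, x in zip(weights, grades)) / G
print(f"m_g = {m_g:.2f}")
```

Raising the weight of, say, the testability grades t1..t5 lets an organisation tune the metric toward the aspects its goals actually care about, which is exactly the GQM direction.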

  13. Example: A Metric for Maintainability (continued). Development of a pseudo-metric:
     (i) Identify the aspect to be represented.
     (ii) Devise a model of the aspect.
     (iii) Fix a scale for the metric.
     (iv) Develop a definition of the pseudo-metric, i.e., how to compute the metric.
     (v) Develop base measures for all parameters of the definition.
     (vi) Apply and improve the metric.
     (Ludewig and Lichter, 2013)
