Software Development Metrics
David Nicolette
Foreword by George Dinwiddie
  • July 2015
  • ISBN 9781617291357
  • 192 pages
  • printed in black & white

A real boon to those making the transition from a traditional serial development model to an agile one.

From the Foreword by George Dinwiddie, software development consultant and coach

Software Development Metrics is a handbook for anyone who needs to track and guide software development and delivery at the team level, such as project managers and team leads. New development practices, including "agile" methodologies like Scrum, have redefined which measurements are most meaningful and under what conditions you can benefit from them. This practical book identifies key characteristics of organizational structure, process models, and development methods so that you can select the appropriate metrics for your team. It describes the uses, mechanics, and common abuses of a number of metrics that are useful for steering and for monitoring process improvement. The insights and techniques in this book are based entirely on field experience.
Table of Contents

foreword

preface

acknowledgments

about this book

about the author

about the cover illustration

1. Making metrics useful

1.1. Measurements and metrics

1.1.1. What makes a metric "pragmatic"?

1.1.2. Forward-facing and backward-facing metrics

1.2. Factors affecting the choice of metrics

1.2.1. Process model

1.2.2. Delivery mode

1.3. How the metrics are presented

1.4. Name of the metric

1.5. Summary

2. Metrics for steering

2.1. Metric: Percentage of scope complete

2.1.1. When to use percentage of scope complete

2.1.2. A traditional project

2.1.3. An adaptive project

2.1.4. How to use percentage of scope complete

2.1.5. Anti-patterns

2.2. Metric: Earned value

2.2.1. When to use earned value

2.2.2. A traditional project

2.2.3. Anti-pattern: the novice team

2.3. Metric: Budget burn

2.3.1. When to use budget burn

2.3.2. A traditional project

2.3.3. An adaptive project using beyond budgeting

2.3.4. Anti-pattern: agile blindness

2.4. Metric: Buffer burn rate

2.4.1. When to use buffer burn rate

2.4.2. How to use buffer burn rate

2.5. Metric: Running tested features

2.5.1. When to use running tested features

2.5.2. An adaptive project

2.5.3. Anti-pattern: the easy rider

2.6. Metric: Earned business value

2.6.1. When to use earned business value

2.6.2. An adaptive project

2.6.3. Anti-patterns

2.7. Metric: Velocity

2.7.1. When to use velocity

2.7.2. An adaptive project

2.7.3. Anti-patterns

2.8. Metric: Cycle time

2.8.1. When to use cycle time

2.8.2. An adaptive project with consistently sized work items

2.8.3. An adaptive project with variable-sized work items

2.8.4. A traditional project with phase gates

2.9. Metric: Burn chart

2.9.1. When to use burn charts

2.9.2. How to use burn charts

2.9.3. Anti-patterns

2.10. Metric: Throughput

2.10.1. When to use throughput

2.10.2. A mixed-model project

2.11. Metric: Cumulative flow

2.11.1. When to use cumulative flow

2.11.2. A traditional project

2.12. Not advised

2.12.1. Earned schedule

2.12.2. Takt time

2.13. Summary

3. Metrics for improvement

3.1. Process-agnostic metrics

3.2. Technical metrics

3.3. Human metrics

3.4. General anti-patterns

3.4.1. Treating humans as resources

3.4.2. Measuring practices instead of results

3.5. Metric: Velocity

3.5.1. When to use velocity

3.5.2. An adaptive project

3.5.3. Anti-patterns

3.6. Metric: Cycle time

3.6.1. When to use cycle time

3.6.2. Tracking improvement in predictability

3.6.3. Tracking improvement in flow

3.6.4. Tracking responsiveness to special-cause variation

3.7. Metric: Burn chart

3.7.1. When to use burn charts

3.7.2. Adaptive development project using a time-boxed iterative process model

3.8. Metric: Cumulative flow

3.8.1. When to use a cumulative flow diagram

3.8.2. An adaptive project

3.9. Metric: Process cycle efficiency

3.9.1. When to use process cycle efficiency

3.9.2. Non-value-add time in queues

3.9.3. Non-value-add time in active states

3.9.4. What is normal PCE?

3.9.5. Moving the needle

3.10. Metric: Version control history

3.10.1. When to use version control history

3.11. Metric: Static code-analysis metrics

3.11.1. When to use static code-analysis metrics

3.12. Metric: Niko Niko calendar

3.12.1. When to use the Niko Niko calendar

3.12.2. Examples

3.12.3. Happy Camper

3.12.4. Omega Wolf

3.12.5. Zombie Team

3.13. Metric: Emotional seismogram

3.13.1. When to use the emotional seismogram

3.13.2. Examples

3.14. Metric: Happiness index

3.14.1. When to use the happiness index

3.14.2. Mechanics

3.15. Metric: Balls in bowls

3.15.1. When to use the balls-in-bowls metric

3.15.2. Mechanics

3.16. Metric: Health and happiness

3.16.1. When to use the health-and-happiness metric

3.16.2. Mechanics

3.17. Metric: Personality type profiles

3.17.1. When to use personality profiles

3.17.2. Anti-patterns

3.18. Summary

4. Putting the metrics to work

4.1. Pattern 1: Periodic refactoring iterations

4.2. Pattern 2: Velocity looks good, but little is delivered

4.3. Pattern 3: Linear workflow packaged in time-boxed iterations

4.4. Pattern 4: Erratic velocity but stable delivery

4.5. Summary

5. Planning predictability

5.1. Predictability and stakeholder satisfaction

5.1.1. Planning and traditional methods

5.1.2. Planning and adaptive methods

5.2. Measuring predictability

5.2.1. Estimation

5.2.2. Forecasting

5.2.3. Predictability of traditional plans

5.2.4. Predictability of adaptive plans

5.3. Predictability in unpredictable workflows

5.4. Effects of high variation in work item sizes

5.4.1. Deployable units of work

5.4.2. Trackable units of work

5.4.3. Demonstrating the value of consistently sized work items

5.5. Effects of high work-in-process levels

5.5.1. Work in process, cycle time, process cycle efficiency, and throughput

5.5.2. Work in process and defect density

5.6. Summary

6. Reporting outward and upward

6.1. Reporting hours

6.1.1. An example

6.1.2. Aggregate numbers are approximate

6.2. Reporting useless but mandated metrics

6.2.1. Recognizing what’s really happening

6.2.2. Beware of motivational side effects of metrics

6.2.3. Understanding what the numbers mean

6.3. Summary

index

© 2015 Manning Publications Co.

About the book

When driving a car, you are less likely to speed, run out of gas, or suffer engine failure, because the car continuously reports measurements about its condition. Development teams, too, are less likely to fail if they measure the parameters that matter to the success of their projects. This book shows you how.

Software Development Metrics teaches you how to gather, analyze, and effectively use the metrics best suited to your organizational structure, process models, and development methods. The insights and examples in this book are based entirely on field experience. You’ll learn practical techniques like building tools to track key metrics and developing data-based early warning systems. Along the way, you’ll see which metrics align with different development practices, including traditional and adaptive methods.
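As a taste of the kind of team-level tracking the book describes, here is a minimal sketch of computing two steering metrics covered in chapter 2, cycle time and throughput, from work-item start and finish dates. The item IDs and dates are hypothetical, not drawn from the book; in practice the data would come from a tracking tool's export.

```python
from datetime import date

# Hypothetical work items: (id, start date, finish date).
items = [
    ("A-1", date(2015, 7, 1), date(2015, 7, 4)),
    ("A-2", date(2015, 7, 2), date(2015, 7, 9)),
    ("A-3", date(2015, 7, 6), date(2015, 7, 10)),
    ("A-4", date(2015, 7, 8), date(2015, 7, 13)),
]

# Cycle time: elapsed days from start to finish of each item.
cycle_times = [(finish - start).days for _, start, finish in items]
mean_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: completed items per week over the observation window.
window_days = (max(f for _, _, f in items) - min(s for _, s, _ in items)).days
throughput_per_week = len(items) / (window_days / 7)

print(f"mean cycle time: {mean_cycle_time:.2f} days")
print(f"throughput: {throughput_per_week:.1f} items/week")
```

Even this simple calculation supports the early-warning idea from the blurb: a rising mean cycle time or falling throughput, tracked over successive windows, signals trouble well before a deadline slips.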

What's inside

  • Identify the metrics most valuable for your team and process
  • Differentiate "improvement" from "change"
  • Learn to interpret and apply the data you gather
  • Avoid common pitfalls and anti-patterns

About the reader

No formal experience with developing or applying metrics is assumed.

About the author

Dave Nicolette is an organizational transformation consultant, team coach, and trainer. Dave is active in the agile and lean software communities.



Provides a solid foundation for how to start measuring your development teams.

Christopher W. H. Davis, Nike, Inc.

Measuring is the key to making and consistently hitting scheduling targets. This book will help you confidently build a schedule that is accurate and defensible.

Shaun Lippy, Oracle Corporation