Improving Software Reliability Using In-process Metrics
Industrial-strength software development enforces numerous processes to deliver the product on time. To improve software quality, including reliability, these processes are often modified. However, it is difficult to understand the impact of any one process change or addition on overall software reliability, since it can take many months to get sufficient feedback from the field. If software measurements made early in the lifecycle can be shown to predict ultimate release quality, we can effectively isolate the influence of specific processes and practices and tune them to improve the resulting quality.
This workshop addresses how to use in-process metrics to predict and improve software reliability. These topics will be covered:
1. Data for models. Discuss the difficulties in gathering and scrubbing adequate data for model variables. Emerging software development platforms such as Microsoft Visual Studio Team System (VSTS) and IBM Team Concert (aka IBM Jazz) provide mechanisms to collect various in-process events, in addition to collecting basic structural information about the software. This section focuses on automation methods, use of dashboard data, and metric goaling strategies based on model results. We will examine case studies, emphasizing specific data-related issues and resolution approaches.
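As a concrete illustration of the data-scrubbing problem above, the following sketch drops incomplete event records and collapses exact duplicates before model variables are built. The field names and record layout are assumptions for illustration, not the schema of any particular platform.

```python
# Illustrative data-scrubbing step (field names are assumed):
# drop in-process event records with missing required fields and
# collapse exact duplicates before building model variables.

REQUIRED = ("event_id", "component", "timestamp")

def scrub(records):
    """Return records that have all required fields, deduplicated."""
    seen = set()
    clean = []
    for rec in records:
        if any(not rec.get(field) for field in REQUIRED):
            continue  # incomplete record: skip
        key = tuple(rec[field] for field in REQUIRED)
        if key in seen:
            continue  # exact duplicate: skip
        seen.add(key)
        clean.append(rec)
    return clean
```

In practice this step would also normalize component names and timestamps across tools; the sketch shows only the filtering skeleton.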
2. Empirical and deterministic model development. Linear and nonlinear regression, CART, neural and Bayesian network, bugflow, and other model classes will be discussed, emphasizing the pluses and minuses of each. Case studies will be presented and discussed. Predictions of customer-found defects, total post-release defects, defect escape volumes, customer satisfaction scores, and security vulnerabilities will be described. Analysis techniques will be addressed, including constructing confidence intervals, goodness-of-fit statistics, and cost models.
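As a minimal sketch of the simplest model class above, the following code fits an ordinary least-squares line relating one in-process metric to customer-found defects and reports R-squared as a goodness-of-fit statistic. The data values are invented for illustration, not taken from any case study.

```python
# Minimal sketch: simple linear regression of customer-found defects
# on a single in-process metric, with R^2 as a goodness-of-fit check.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Fraction of variance in ys explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical release history: in-process metric vs. customer-found defects.
metric = [1.2, 2.0, 2.9, 4.1, 5.0]
defects = [10, 14, 19, 25, 29]

a, b = fit_line(metric, defects)
print(f"intercept={a:.2f} slope={b:.2f} "
      f"R^2={r_squared(metric, defects, a, b):.3f}")
```

Production models would use a statistical package, multiple predictors, and confidence intervals on the coefficients; the sketch only shows the shape of the empirical approach.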
3. Using models for diagnosis and prognosis. Early predictive modeling is improved by capturing near-real-time events during software development and applying statistical techniques to the events and other software artifacts. The underlying premise is that this approach provides earlier warning, more accurate diagnosis of problems, and faster remedies than models that rely on off-line, downstream events, thereby improving the overall reliability of the software process and products. Predictive models that use data from defect and trouble-ticket databases will also be described and discussed. We will cover case studies, diagnosis and predictive metrics, end-user enablement, and practical process improvement.
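One simple early-warning technique in the spirit of the above is to bucket defect-ticket arrivals by week and flag weeks whose count exceeds a control limit. The sketch below (an assumption for illustration, not the workshop's actual tooling) uses mean plus two standard deviations as the limit.

```python
# Illustrative early-warning sketch: bucket defect-ticket open dates
# by ISO week and flag weeks whose arrival count exceeds a control
# limit of mean + 2 * standard deviation.

from collections import Counter
from datetime import date
from statistics import mean, stdev

def weekly_counts(open_dates):
    """Count tickets per (ISO year, ISO week)."""
    return Counter(d.isocalendar()[:2] for d in open_dates)

def warning_weeks(counts):
    """Return weeks whose arrival count exceeds mean + 2*stdev."""
    vals = list(counts.values())
    limit = mean(vals) + 2 * stdev(vals)
    return sorted(week for week, c in counts.items() if c > limit)
```

A flagged week is a prompt for diagnosis, not a verdict; in practice the signal would be combined with other in-process metrics before triggering a remedy.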
Introduction - Pete Rotella and Santonu Sarkar: introduction to the concept of in-process measurement and the scope of the workshop | 10 min
Case studies / experience reports / ongoing research (20 min each, including Q&A) | 80 min
Summary of morning events; identification of break-out groups and topics | 10 min
Break-out session - moderated discussions in smaller groups | 60 min
Readout for each group (10 min each, including Q&A) | 40 min
Consolidation of action items and concluding remarks - Pete Rotella and Santonu Sarkar | 20 min
Workshop Organizers