Industry Papers



Session: Analysis
  Paper 40: Formal Model Based Methodology for Developing Controllers for Nuclear Applications
  Paper 45: Application of Fault Tree Analysis in the interface of complex medical device data systems
  Paper 244: Static Analysis in Medical Device Firmware and Software Development - Reliability and Productivity

Session: Defect Data Analysis
  Paper 142: Finding Dependencies from Defect History
  Paper 149: Nonlinear trends for several software metrics
  Paper 196: Reliability: A Software Engineering Perspective

Session: Testing I
  Paper 168: Model Driven Testing with Timed Usage Models in the Automotive Domain
  Paper 187: The Goals and Challenges of Click Fraud Penetration Testing Systems
  Paper 222: Challenges and Solutions in Test Automation of Medical Visualization Applications
  Paper 253: Automated Verification of Enterprise Load Tests

Session: Reliability Prediction
  Paper 146: Software Defect Prediction Via Operating Characteristic Curves
  Paper 154: Applying Software Defect Prediction Model for reliable product quality
  Paper 227: Software Reliability Prediction in Philips Healthcare – An Experience Report

Session: Architecture and Modelling
  Paper 234: Design of safety-critical systems with ASCET
  Paper 249: Architecting for Reliability – Detection and Recovery Mechanisms
  Paper 262: Application of the Architectural Analysis and Design Language (AADL) for Quantitative System Reliability and Availability Modeling

Session: ODC
  Paper 201: Orthogonal Defect Classification in Agile Development
  Paper 254: ODC Product Profiling
  Paper 256: ODC Deployment - A Case Study at Caterpillar

Session: Testing II
  Paper 236: Introduction of Developer Testing in an Embedded Environment
  Paper 259: Software Fault Injection - Industry Experience
  Paper 260: Visualizing the Results of Field Testing

Session: Process
  Paper 89: Blind Men and the Elephant: Piecing Together Hadoop for Diagnosis
  Paper 150: A sequential model approach to improve software assurance
  Paper 235: Process for improving the quality and reliability of fixes for customer reported defects


Formal Model Based Methodology for Developing Controllers for Nuclear Applications

Amol Wakankar, Raka Mitra, Anup Bhattacharjee, Shubhangi Shrikhande, Sham Dhodapkar

Abstract.
The approach used in model based design is to build a model of the system in a graphical/textual language. In the older model based design approach, the correctness of the model is usually established by simulation. Simulation, which is analogous to testing, cannot guarantee that the design meets the system requirements under all possible scenarios. This is, however, possible if the modeling language is based on formal semantics, so that the developed model can be subjected to formal verification of properties derived from the specification. The verified model can then be translated into an implementation through a reliable/verified code generator, thereby reducing the necessity of low level testing. Such a methodology is admissible as per the guidelines of the IEC 60880 standard applicable to software used in computer based systems performing category A functions, and would also be acceptable for category B functions.
In this paper, we present our experience in implementation and formal verification of important controllers used in the process control system of a nuclear reactor. We have used the SCADE environment to model the controllers. The modeling language used in SCADE is based on the synchronous dataflow model of computation.
Two controllers, viz. the Steam Generator Pressure Controller (SGPC) and the Primary Heat Transport System Pressure Controller (PHTPC), were selected for implementation. These were earlier implemented using conventional software development techniques and are in actual use, which made it possible for us to compare the effectiveness of the new technique. The complete development process was followed, which included modeling, simulation, test coverage measurement and code generation. This was followed by testing on the target hardware with I/O simulation. For the verification of safety properties, the SCADE Design Verifier was used.
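The authors' SCADE models are not reproduced here; as a loose, assumed illustration of the synchronous dataflow style and of checking a property over all cycles, the sketch below runs a toy controller step function and asserts a simple safety property on a sample trace (names, thresholds and the control law are invented):

```python
# Illustrative sketch only: a synchronous "step" function in the spirit of a
# dataflow controller, with a safety property checked on every cycle.
# All names and thresholds are hypothetical, not taken from the paper.

def controller_step(pressure: float, setpoint: float, prev_valve: float) -> float:
    """One synchronous cycle: compute a new valve command from sampled inputs."""
    error = setpoint - pressure
    valve = min(1.0, max(0.0, prev_valve + 0.1 * error))  # clamped proportional update
    return valve

def check_safety(trace):
    """Safety property (illustrative): the valve command always stays in [0, 1]."""
    valve = 0.0
    for pressure, setpoint in trace:
        valve = controller_step(pressure, setpoint, valve)
        assert 0.0 <= valve <= 1.0, "safety property violated"

if __name__ == "__main__":
    check_safety([(4.8, 5.0), (5.2, 5.0), (6.0, 5.0)])  # toy input trace
    print("safety property held on the sample trace")
```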


Application of Fault Tree Analysis in the interface of complex medical device data systems

Abstract.
Fault Tree Analysis (FTA) has been well implemented in nuclear, aerospace, robotics and other industries. In the medical device industry, the system includes not only the implantable device but also the subsystems supporting the patient medical device. These systems require high levels of dependability such as availability, sustainability, security and trustworthiness, among many others. Therefore, instead of a more traditional identification of hazards, this FTA application was focused on undesirable events that were not necessarily hazardous to the patient, but critical to reliably and efficiently supporting the patient. The intent of this paper is to show the application of the Fault Tree Analysis methodology in software intensive medical device data systems such as the implantable device cardiac data and patient data management systems. This analysis was purely qualitative, meaning only events, failure modes and causes potentially leading to the undesirable top event were identified. The vision is to conduct quantitative analysis once reliable failure rate data is available. The FTA facilitated the discovery and mitigation of potential failure modes early in the project, when it is most efficient and effective to correct problems, rather than retrofitting a fix after development. The results show benefit in implementing this methodology during the early defining stages of system behavior and interfaces. The outcome of this FTA shows a promising future for more comprehensive applications of FTA across the medical device industry.
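As a minimal, assumed illustration of the qualitative fault tree structure described (the events and gates below are invented, not the authors' actual tree), a top event can be expressed as AND/OR combinations of basic events and evaluated for a given scenario:

```python
# Illustrative qualitative fault tree: hypothetical events and gates only.

def or_gate(*inputs):   # the event occurs if any input event occurs
    return any(inputs)

def and_gate(*inputs):  # the event occurs only if all input events occur
    return all(inputs)

def data_sync_failure(events):
    """Hypothetical top event: patient data fails to reach the data management system."""
    link_down = or_gate(events["telemetry_fault"], events["network_outage"])
    store_failure = and_gate(events["primary_db_down"], events["backup_db_down"])
    return or_gate(link_down, store_failure, events["software_crash"])

if __name__ == "__main__":
    scenario = {"telemetry_fault": False, "network_outage": False,
                "primary_db_down": True, "backup_db_down": False,
                "software_crash": False}
    print("Top event occurs:", data_sync_failure(scenario))  # False: single DB failure tolerated
```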


Blind Men and the Elephant: Piecing Together Hadoop for Diagnosis

Abstract.
Google's MapReduce framework enables distributed, data-intensive, parallel applications by decomposing a massive job into smaller (Map and Reduce) tasks and a massive data-set into smaller partitions, such that each task processes a different partition in parallel. However, performance problems in a distributed MapReduce system can be hard to diagnose and to localize to a specific node or set of nodes. On the other hand, the structure of a large number of nodes performing similar tasks naturally affords us opportunities for observing the system from multiple viewpoints.

We present a “Blind Men and the Elephant” (Blimey) framework in which we exploit this structure, and demonstrate how problems in a MapReduce system can be diagnosed by corroborating the multiple viewpoints. More specifically, we present algorithms within the Blimey framework based on OS-level performance counters, on white-box metrics extracted from logs, and on application-level heartbeats. We show that our Blimey algorithms are able to capture a variety of faults, including resource hogs and application hangs, and to localize the fault to subsets of slave nodes in the MapReduce system.

In addition, we discuss how the diagnostic algorithms' outcomes can be further synthesized in a repeated application of the Blimey approach. We present a simple supervised learning technique which allows us to identify a fault if it has been previously observed.
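The paper's algorithms are not reproduced here; as an assumed illustration of corroborating the many similar slave nodes, the sketch below flags a node whose metric deviates strongly from the peer median (metric values and the threshold are invented):

```python
# Illustrative peer-comparison diagnosis: flag nodes whose metric deviates
# from the peer median by more than an assumed threshold.
from statistics import median

def suspect_nodes(metric_by_node: dict, threshold: float = 3.0):
    values = list(metric_by_node.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1e-9  # median absolute deviation
    return [node for node, v in metric_by_node.items()
            if abs(v - med) / mad > threshold]

if __name__ == "__main__":
    cpu_busy = {"slave1": 62.0, "slave2": 64.5, "slave3": 98.7, "slave4": 61.8}
    print(suspect_nodes(cpu_busy))  # ['slave3']: a possible resource hog
```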


Finding Dependencies from Defect History

Abstract.
Dependency analysis is an essential part of various software engineering activities like integration testing, reliability analysis and defect prediction. In this paper, we propose a new approach to identify dependencies between components and associate a notion of “importance” with each dependency by mining the defect history of the system, which can be used to complement traditional dependency detection approaches like static analysis. Using our tool Ladybug, which implements the approach, we have been able to identify important dependencies for Microsoft Windows Vista and Microsoft Windows Server 2008 and rank them to prioritize testing, especially when the number of dependent components was large. We have validated the accuracy of the results with domain experts who have worked on designing, implementing or maintaining the components involved.
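Ladybug's actual algorithm and importance measure are not described here; as an assumed illustration of mining defect history for dependencies, the sketch below counts how often pairs of components are touched by the same defect fix and ranks the pairs (component names are invented):

```python
# Illustrative sketch: infer candidate dependencies by counting how often two
# components are modified by the same defect fix, then rank pairs by that count.
from collections import Counter
from itertools import combinations

def mine_dependencies(defect_fixes):
    """defect_fixes: iterable of sets of component names touched by one fix."""
    pair_counts = Counter()
    for components in defect_fixes:
        for a, b in combinations(sorted(components), 2):
            pair_counts[(a, b)] += 1
    return pair_counts.most_common()

if __name__ == "__main__":
    history = [{"kernel", "fs"}, {"kernel", "net"}, {"kernel", "fs", "ui"}, {"fs", "ui"}]
    for pair, count in mine_dependencies(history):
        print(pair, count)   # e.g. ('fs', 'kernel') 2 : a candidate "important" dependency
```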


Software Defect Prediction Via Operating Characteristic Curves

Abstract.
Each software defect encountered by customers entails a significant cost penalty for software companies. Thus, knowledge about how many defects to expect in a software product at any given stage during its development process is a very valuable asset. Being able to estimate the number of defects will substantially improve the decision processes about releasing a software product.

Several software reliability prediction models have been proposed in the literature for estimating system reliability, but these models make unrealistic assumptions to ensure solvability, which has limited their practical application. Bayesian statistics, on the other hand, provide a framework for combining observed data with prior assumptions in order to model stochastic systems. Bayesian methods assign prior distributions to the parameters of the model in order to incorporate whatever a priori quantitative or qualitative knowledge is available, and then update these priors in the light of the data, yielding a posterior distribution via Bayes's Theorem.
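For reference, the Bayesian update described above takes the standard form, with theta denoting the model parameters and D the observed defect data:

```latex
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, \mathrm{d}\theta'} \;\propto\; p(D \mid \theta)\, p(\theta)
```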

Motivated by the widely used concept of operating characteristic (OC) curves in statistical quality control for selecting the sample size at the outset of an experiment, we present a software defect prediction model using OC curves. The main idea behind our proposed predictive operating characteristic (POC) method is to use geometric insight to help construct an efficient and fast prediction method that reliably predicts the cumulative number of defects at any given stage during the software development process. Experimental results illustrate the effectiveness and the much improved performance of the proposed method in comparison to Bayesian-based prediction approaches.


Nonlinear trends for several software metrics

Abstract.
Release cycle, seasonality, and other nonlinear effects play an important role in the volume trends of customer-reported defects. Therefore, it is important to factor nonlinear effects into the goaling schemes for customer-centric metrics; otherwise, engineering teams may lose confidence in the appropriateness of specific metric goals or in the goaling program in general.

In an attempt to identify and describe nonlinear trend behavior, three metrics related to customer-found defects (CFDs) were examined: CFD Incoming, CFD Mean Time to Repair, and CFD Backlog. These metrics were calculated for a three-year period for two dimensions – the organization dimension (i.e., business unit) and the software version dimension (i.e., feature release). Twelve business units and 14 feature releases were modeled separately for each of the three metrics.

CFD Incoming rate behaves like a sine wave, with the amplitude a function of the number of new features and the effectiveness of the quality improvement practices implemented, and the wavelength a function of release cycle time and ramp-up rate in the customer space. Using nonlinear regression fitting, the resulting sinusoidal models in the two dimensions yield an average adjusted R-squared value of 0.83, compared to an average of 0.37 for the linear model. For some releases and business units, the CFD Incoming nonlinear result, at the end of the yearly goaling cycle, can vary by 60% or more from the linear result.

CFD Backlog behaves in a similar sinusoidal way – the adjusted R-squared values are 0.73 and 0.21 for the nonlinear and linear models, respectively.

CFD Mean Time to Repair rates drop over time in an exponential decay mode – the adjusted R-squared values for the nonlinear MTTR models average 0.78, while the linear models average 0.49.
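The exact model forms are not given in the abstract; as an assumed illustration of the kind of nonlinear regression described, the sketch below fits a sinusoidal trend to a synthetic monthly CFD series and reports the adjusted R-squared (the MTTR decay could be fitted analogously with an exponential function):

```python
# Illustrative nonlinear fit of a sinusoidal trend to a synthetic monthly CFD series.
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amplitude, wavelength, phase, offset):
    return amplitude * np.sin(2 * np.pi * t / wavelength + phase) + offset

def adjusted_r2(y, y_hat, n_params):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n = len(y)
    return 1 - (ss_res / (n - n_params - 1)) / (ss_tot / (n - 1))

if __name__ == "__main__":
    t = np.arange(36)                                   # three years of monthly data
    rng = np.random.default_rng(0)
    y = sinusoid(t, 40, 12, 0.5, 100) + rng.normal(0, 5, t.size)  # synthetic CFD incoming
    params, _ = curve_fit(sinusoid, t, y, p0=[30, 12, 0, 90])
    print("adjusted R^2:", round(adjusted_r2(y, sinusoid(t, *params), len(params)), 2))
```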


A sequential model approach to improve software assurance

Abstract.
Combining multiple statistical models to identify a response in a data set may offer more accurate results than one model alone [2]. Techniques such as bagging, boosting, and stacking have been proposed for combining statistical models [2]. We extend the idea of using multiple models on a Cisco software system to predict attack-prone components. In an earlier case study, two similar models were run sequentially on the same data set to predict attack-prone components [1]. We are currently investigating sequential modeling approaches in a nonlinear model that predicts customer-found defect (CFD) rates. Our model does not predict 100% of the CFDs and thus may be improved with a sequential model run. An accurate model can inform decisions about how to allocate resources in the software life cycle.

The first run of the model captured 75.6% of the attack-prone components in our data set, with an associated false positive rate of approximately 45%. While the ability to capture 75.6% of the attack-prone components is promising, the remaining 24.4% that are not identified can cause substantial economic impact if they escape into the field. The residual attack-prone components (false negatives) and the true negatives are isolated into a new data set on which we perform a sequential run of a similar predictive model. The sequential run allows us to capture an additional 9.7% of the attack-prone components, bringing the total captured to 85.3%. The false positive rate associated with the sequential run of the model is approximately 50%.
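The models used at Cisco are not specified in the abstract; as an assumed illustration of the sequential idea, the sketch below trains a classifier and then rescores, with a second similar classifier, only the instances the first one predicted as negative (data synthetic, scikit-learn assumed available):

```python
# Illustrative two-stage (sequential) classification on synthetic data: a second,
# similarly configured model rescores the instances the first model called negative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9], random_state=0)
train, test = slice(0, 1500), slice(1500, None)

first = LogisticRegression(max_iter=1000).fit(X[train], y[train])
pred_first = first.predict(X[test])

# Sequential run: only instances predicted negative by the first model are rescored,
# here by a similar model with balanced class weights (an assumed choice).
residual_idx = np.where(pred_first == 0)[0]
second = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X[train], y[train])
pred_second = second.predict(X[test][residual_idx])

combined = pred_first.copy()
combined[residual_idx] = pred_second            # keep first-stage positives, add new ones
recall = combined[y[test] == 1].mean()
print("fraction of positives captured after the sequential run:", round(float(recall), 2))
```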

[1] M. Gegick, P. Rotella, and L. Williams, "Predicting Attack-prone Components," International Conference on Software Testing, Verification, and Validation, Denver, CO, April 2009.

[2] I. H. Witten and E. Frank, Data Mining, 2nd ed. San Francisco: Elsevier, 2005.


Applying Software Defect Prediction Model for reliable product quality

Abstract.


Model Driven Testing with Timed Usage Models in the Automotive Domain

Abstract.
In the automotive domain, software has become increasingly distributed and complex. The importance of time and timing grows, as timing in usage has a severe impact on the behavior of the system. Moreover, testing time on Hardware-in-the-Loop (HiL) simulators is scarce in industry, so it should be used as efficiently as possible.
In model-based testing, Markov chain usage models (MCUMs) are an established way to describe the usage of the system under test (SUT). The problem is that classic MCUMs do not provide a way to integrate time and timing systematically into the usage model. Such information is not about the SUT itself but about its usage, e.g. the average duration of a test case derived from the model. Yet information like this could be used before test generation to assess the testing effort and the time needed to achieve testing targets. We present the Timed Usage Model (TUM) as a solution to overcome these drawbacks. What is new is that non-exponential timing can be applied, either to states or to transitions, in order to describe the usage in a realistic way. In this way, before the generation of test cases, it is possible to compute indicators such as the expected execution time of a stimulus in a test case or the average test case duration. This information makes it possible to plan the execution of test cases. On the other hand, the integration of time into the model makes it possible to automatically derive test cases to validate functionality under different timing conditions.
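The TUM formalism itself is not reproduced here; as a simplified, assumed illustration of computing such indicators, the sketch below attaches mean durations to the states of a small usage Markov chain and estimates the expected test case duration by simulation (states, probabilities and durations are invented):

```python
# Illustrative sketch: estimate the expected duration of a test case derived from
# a usage Markov chain whose states carry (assumed, non-exponential) mean durations.
import random

TRANSITIONS = {                       # hypothetical usage model
    "Start":      [("Accelerate", 0.7), ("Brake", 0.3)],
    "Accelerate": [("Brake", 0.5), ("End", 0.5)],
    "Brake":      [("Accelerate", 0.4), ("End", 0.6)],
}
MEAN_DURATION = {"Start": 1.0, "Accelerate": 4.0, "Brake": 2.5, "End": 0.0}  # seconds

def sample_test_case_duration(rng):
    state, total = "Start", 0.0
    while state != "End":
        total += MEAN_DURATION[state]
        r, acc, nxt = rng.random(), 0.0, None
        for target, prob in TRANSITIONS[state]:
            acc += prob
            if r <= acc:
                nxt = target
                break
        state = nxt
    return total

if __name__ == "__main__":
    rng = random.Random(0)
    samples = [sample_test_case_duration(rng) for _ in range(10000)]
    print("expected test case duration ~", round(sum(samples) / len(samples), 2), "s")
```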
We have applied our approach at AUDI AG, a major German car manufacturer. A Timed Usage Model was created for the operational concept of the Electronic Stability Program (ESP). One benefit was that the timing aspects of the requirements could be integrated into the model. Hence, before test case generation, indicators could be computed that gave information about what can be tested within the available testing time, so test planning and generation are supported even before test cases are generated.


The Goals and Challenges of Click Fraud Penetration Testing Systems

Abstract.
It is important for search and pay-per-click engines to penetration test their click fraud detection systems, in order to find potential vulnerabilities and correct them before fraudsters can exploit them. In this paper, we describe: (1) some goals and desirable qualities of a click fraud penetration testing system, based on our experience, and (2) our experiences with the challenges of building and using a click fraud penetration testing system called Camelot that has been in use at Google.


Reliability: A Software Engineering Perspective

Abstract.
Software Reliability is an important factor in System Reliability, as the contribution of software to products is constantly increasing. Software Reliability is defined as “the probability of failure-free software operation in a specified environment for a specified period of time (or natural units)”. However, there is no “ONE” uniform theory of software reliability yet. There are many models but no consensus; the reliability approach varies per application area, and the question remains whether one theory will eventually prevail. Another dimension of this is the fact that ultimately it is the customer who defines reliability. So, depending on the product, the other “ilities” such as Usability, Interoperability and Maintainability also become part of the Reliability definition. Since there is no single widely accepted method of estimating or predicting software reliability, can some practices nevertheless be applied to provide a structure for building reliable software?
This paper tries to address this question by giving an example of defining and achieving some facets of reliability in a DVD-Hard disk recorder. It explains how a few specific instances of Reliability were defined for the recorder as “Critical to Quality” parameters in terms of Robustness and Interoperability. It also gives details on how these were flowed down into lower-level parameters, which were then designed in to achieve the desired reliability from software.
Extending this further, the paper recommends a framework for software reliability built around the three dimensions of Fault-Prevention, Fault-Tolerance and Fault-Detection. The process structure of CMMI and Orthogonal Defect Classification, augmented with FMEA and principles of graceful degradation, and supplemented with a “Pondering maturity index” for reliability growth, helps to build a framework around which the reliability of software can be assured and tracked along the software development life cycle.


Orthogonal Defect Classification in Agile Development

Abstract.
Agile development is a process for detecting and removing design defects in the iteration in which they are injected, so as to provide early defect information at all stages of the development process. In agile, teams should always be looking for ways to improve what they are doing. When we don't capture defect information, we lose an opportunity to mine the data to help us determine process improvements.

Orthogonal Defect Classification (ODC) is more useful for identifying problems early in an agile project than in a waterfall model. Several agile practices can help teams find defects early, maintain a fairly stable product, and mitigate the risk of accommodating late-arriving requirements, which results in the ability to deliver on time. Some of the agile practices contributing to these positive results include test driven development, continuous integration, paired programming and simple design.

ODC data and expectations in an agile environment will differ from other software models. We expect to see earlier discovery of defects, including complex defects, and a higher percentage of simpler triggers (i.e. 'coverage' and 'variation') in defects against 'new' function being dropped into an iteration, as well as 'missing' defects. In follow-on iterations those areas should stabilize, such that 'missing' decreases and more complex defects emerge.

In this paper we'll show how the adoption of various agile practices affects defect injection, how to assess in-process defects in agile development, the value ODC assessments provide, and the use of ODC metrics to evaluate agile development. We'll also discuss the evaluation of defect removal effectiveness, product stability and defect injection in an agile environment. Lastly, we'll conclude the session by specifying action items that one can take to improve the effectiveness of ODC in agile development.
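As a small, assumed illustration of the kind of in-process ODC assessment described (the trigger names follow the standard ODC taxonomy, but the defect records are invented), the sketch below computes the trigger distribution per iteration so that shifts from simple to more complex triggers can be tracked:

```python
# Illustrative sketch: per-iteration ODC trigger distribution from (invented) defect records.
from collections import Counter, defaultdict

defects = [
    {"iteration": 1, "trigger": "coverage"}, {"iteration": 1, "trigger": "coverage"},
    {"iteration": 1, "trigger": "variation"}, {"iteration": 2, "trigger": "interaction"},
    {"iteration": 2, "trigger": "coverage"}, {"iteration": 2, "trigger": "sequencing"},
]

by_iteration = defaultdict(Counter)
for d in defects:
    by_iteration[d["iteration"]][d["trigger"]] += 1

for iteration, counts in sorted(by_iteration.items()):
    total = sum(counts.values())
    profile = {trigger: round(n / total, 2) for trigger, n in counts.items()}
    print(f"iteration {iteration}: {profile}")
```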


Challenges and Solutions in Test Automation of Medical Visualization Applications

Abstract.
Clinical applications play a significant role in the healthcare world in diagnosis and treatment. They are visualization-intensive, workflow-driven, thick-client applications; they have a busy GUI and a vendor-dependent look and feel. A large part of the display real estate is devoted to visualizing the images/results, with minimal area for other GUI controls.
In Philips Healthcare, most applications are built on an underlying platform based on Windows and .Net. Significant time is spent by different groups in validating these applications, and this time increases as products evolve. Some of the challenges for GUI automation in clinical applications are:
1. About 50% of GUI controls are custom controls, and complete automation is not possible with Commercial Off-The-Shelf (COTS) tools alone, because of:
- Applications built with specific .Net strongly named assemblies
- Recording in screen coordinates, as against object-based recording
- A big footprint on test systems
- Lack of seamless integration with the development environment
2. Recognition of the image viewer, attributes and regions of interest as graphic and textual annotations on images
3. Validation of image data (image quality and algorithm results)
An automation framework was developed to address these issues. The key elements are:
- A powerful hybrid solution of Microsoft automation framework and home-grown mechanisms
- Support for frequently used clinical operations with extension mechanisms
- A reporting mechanism supporting regulatory requirements
- Tools to build test cases/workflows
- Deployment and execution across multiple test systems

The results of deployment in different product lines are:
- 4700 integration test cases automated; execution time of 26 hours versus a manual effort of 80 person days. Deployed on five versions of the product.
- 750 test cases automated on a few clinical applications; execution time of 5 hours versus a manual effort of 8 person days.


Software Reliability Prediction in Philips Healthcare – An Experience Report

Abstract.
In healthcare modalities like MR, CT, X-ray, Ultrasound, etc., the demand to provide sophisticated, flexible and usable features without sacrificing reliability is increasingly being placed on software. Hence, there is increasing pressure in the healthcare industry to implement software reliability engineering (SRE) practices. We have observed that in the state of the practice there is a significant drive towards improving software reliability by using in-process methods. Yet there is little or no work reported from the Indian software industry on the explicit estimation and prediction of software reliability, a step which should logically precede interventional methods of software reliability improvement.

The problem posed to us was to estimate and predict the reliability growth of projects running in our organisation without intervening in their processes. The research question we aimed to answer was: can a defect log collected with no explicit emphasis on reliability-specific data be used to estimate and predict the reliability growth of the project?

In this talk, we present our experience in implementing software reliability prediction in the healthcare domain. We studied the defect logs of three different projects running in our organisation and tried fitting them to standard software reliability growth models. This did not work. Hence, we undertook an elaborate exercise to extract reliability growth information from the raw defect data by systematically removing elements which we considered to be noise. For this, we used additional knowledge of the studied projects obtained from the respective teams. A judicious combination of project-specific parameters resulted in our defect data fitting the models in some cases. Careful study of the defect data has also allowed us to gain additional insights into the dynamics of some of the projects while attempting to fit their defect data to the models.
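The abstract does not name the growth models that were tried; purely as an assumed illustration, the sketch below fits one standard model, the Goel-Okumoto model mu(t) = a(1 - exp(-b t)), to a synthetic cumulative defect series (scipy assumed available):

```python
# Illustrative fit of the Goel-Okumoto software reliability growth model to
# synthetic cumulative defect counts; real defect logs would replace weeks/cumulative.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative defects by time t: a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 21)
cumulative = np.array([5, 11, 18, 25, 30, 36, 40, 45, 48, 52,
                       55, 57, 60, 61, 63, 64, 65, 66, 67, 67], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cumulative, p0=[80.0, 0.1])
print(f"estimated total defects a ~ {a_hat:.1f}, detection rate b ~ {b_hat:.3f}")
print("predicted defects by week 30:", round(goel_okumoto(30, a_hat, b_hat), 1))
```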


Design of safety-critical systems with ASCET

Abstract.
As of today, the C language is the most widely used language for the development of software for embedded control units, even for safety-critical systems. As an alternative to conventional development, model-based design uses state-of-the-art design techniques to develop and implement embedded software in fields such as automotive, aerospace and industrial automation, through the established V development cycle. This paper summarizes the advantages of model-based design and auto-coding over conventional methods, discusses potential problems of modeling languages and code generators, and gives some examples of how software development with ASCET works in practice.
Further, this paper outlines the considerations needed for model-based design tools to produce safety-critical software. A model designed for simulation may not fit well for auto-coding, and it is important to understand the differences between models used for auto-coding and models used for simulation. For safety-critical systems, it is important that the code behavior be identical to the model behavior, and it is important to have complete control over the code generator. Validation of the code generator is a further important step to ascertain the quality of the generated auto-code. This paper also discusses methods for validating the code generator from the perspective of software reliability.

The advantages of auto-code and its relevance to safety-critical systems are explained in this paper. Automatic elimination of division-by-zero errors, integer overflow, etc., is explained by means of simple examples. An overview of the MISRA compliance of the auto-code and the certification of the code generator with respect to software reliability are also summarized.
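The paper's own examples are not reproduced here; as an assumed illustration of what automatic elimination of division-by-zero and integer overflow typically means in generated code, the sketch below shows a guarded division and a saturating addition (written in Python for readability, whereas real generated embedded code would be C; the limits are hypothetical):

```python
# Illustrative guards of the kind code generators typically emit automatically.

INT16_MIN, INT16_MAX = -32768, 32767   # assumed 16-bit signed range

def safe_div(numerator: int, denominator: int, fallback: int = 0) -> int:
    """Division guarded against a zero denominator."""
    return fallback if denominator == 0 else numerator // denominator

def saturating_add(a: int, b: int) -> int:
    """Addition clamped to the 16-bit signed range instead of overflowing."""
    return max(INT16_MIN, min(INT16_MAX, a + b))

if __name__ == "__main__":
    print(safe_div(100, 0))             # 0 instead of a runtime fault
    print(saturating_add(30000, 5000))  # 32767 instead of wrapping around
```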



Process for improving the quality and reliability of fixes for customer reported defects

Abstract.
Development teams provide fixes for customer reported problems. However, the quality and reliability of the fix provided often suffer due to:
1) insufficient integration testing,
2) insufficient regression testing, and
3) inadequate code reviews of the fix provided.
These factors may arise because the development team owns the entire problem resolution process, starting from collecting data for problem analysis right up to providing the fix to the front end team.

This paper outlines a draft process which assigns well defined problem resolution steps to the development and support teams in order to improve the quality and reliability of the fix.
The support team owns problem recreation, root cause analysis, proposing a fix, fix review, and delivering the fix to the front end team. The development team owns fix review, fix testing, committing the fix changes and preparing the fix.
Key to the process is a structured draft document which records, for every problem:
1) all relevant symptoms, recreate scenarios, and solutions, and
2) all official interim fixes already provided.

The key benefits of the draft process include:
1) Reduced ownership for the development team increases their scope of regression testing and integration testing, thereby improving the quality and reliability of the fix.
2) Fix review by two independent teams improves the reliability and quality of the fix.
3) The draft document also helps:
a) the development team to identify and focus on critical components, to improve their reliability in future releases;
b) accurate data collection, leading to faster fix creation and thus reducing the cost of fixing a defect;
c) reducing fix turnaround time for newer customers through interim fixes, saving valuable customer time and money;
d) front end teams to recreate rare problem scenarios.
The above described draft process is being used by IBM AIX® development and support teams to improve the reliability, quality and turnaround time of fixes for customers.


Introduction of Developer Testing in an Embedded Environment

Abstract.
This presentation outlines the steps, strategies and tools involved in transforming a large development organization, engaged in the development of a large embedded system, from a state where development engineers were not doing significant UT to the state now where UT is considered an essential part of development. This represents a significant cultural change. I would define UT as the creation of white-box tests with a view to reducing development escapes. The development organization has a few thousand engineers and is a major revenue generator for the company. The development is spread over multiple business units. Internal studies had indicated that too many of the CFDs were UT escapes.

Here is a list of our pilots and the results. Categories of testing that were significant are identified. The cost of finding a bug was about 1/10th of that in the test groups. The time to resolution was significantly shorter. The precision of the problem reports was relatively very high.

Project 1 - Found 125 problems using about 40 engineer-weeks of effort. Software fault injection was a key contributor. The feature is released and there are no high/medium severity bugs against the feature.

Project 2 - Found 59 problems using about 6 engineer-weeks of effort. 51 from light-weight MBT and 8 from API robustness testing.

Project 3 - Found 18 problems using about 8 engineer-weeks of effort. 6 from API robustness, 4 from concurrency testing.

Project 4 - Found 9 problems using 3 engineer-weeks of effort.

Project 5 - Found 10 problems using 4 engineer-weeks of effort. 4 from CLI robustness, 4 from light-weight MBT.

Project 6 - Found 47 problems using 10 engineer-weeks of effort. 12 from software fault injection.

Now UT is an accepted practice. There is strong support from both management and the development engineers. There are clear indications that the features that went through UT are relatively clean. Key factors are the UT tool and the support.


Static Analysis in Medical Device Firmware and Software Development - Reliability and Productivity

Abstract.
The benefits of static analysis have been studied and demonstrated in Medtronic CRDM using several commercially available static analysis tools. We evaluated the tools' effectiveness, efficiency and usability. We concluded that adopting a static analysis tool will strongly enhance our SW-FW reliability by catching bugs and potential flaws early in the development cycle, enforcing coding standards, and reducing code review cost, which increases development productivity. Our conclusion is well aligned with what the FDA has independently concluded. Three test-beds with 43 pre-seeded bugs, addressing different use scenarios on various platforms, gave us sufficient evidence for the evaluation. By using six categories of criteria and the three test-beds, we were able to objectively assess and select the right tools.

We are facing and overcoming several challenges in using static analysis: 1. a potentially high false positive rate and, on the downside, a noticeable false negative rate; 2. tool integration into the software development process, namely the change control system, code repository and IDE; 3. different SW platforms have different priorities and use models that may require different tools.

Static analysis is useful at any point in the development cycle, but it is a complement to, not a replacement for, existing dynamic or black-box testing. To be effective, a combination of two or more vendors' tools is necessary. Therefore, two layers of gating may be considered, with one used preferably every time code is checked in and another in the system build. One also needs to be conditioned to fix the issues that are identified instead of writing them off as false positives.


Architecting for Reliability – Detection and Recovery Mechanisms

Abstract.
High availability, in the form of continuous service availability, is achieved in telecommunications systems by implementing extensive and effective error detection and recovery mechanisms with high coverage. Escalating detection and recovery mechanisms start with those that deal with targeted errors with very low latency and impact, and escalate to actions with longer recovery times and broader system impact. In this work we extend previous studies, combining Markov models for escalating detection and recovery into a unified model. The results of this model show that such a unified view produces results that align more closely with our experience. It also shows, non-intuitively, that detection and recovery coverage should be balanced. Designers can use these models to evaluate alternative schemes for error detection and recovery to achieve a given system/service availability target.
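The unified model itself is not given in the abstract; as a much-simplified, assumed illustration of the kind of Markov availability model involved, the sketch below solves the steady state of a three-state chain in which a fast, low-impact recovery escalates to a slower full restart when its coverage fails (all rates and the coverage value are invented):

```python
# Illustrative 3-state availability model: Up -> FastRecovery -> FullRestart -> Up.
# Steady-state probabilities are obtained by solving pi Q = 0 with sum(pi) = 1.
import numpy as np

LAMBDA = 0.01      # failure rate (per hour), assumed
MU_FAST = 60.0     # fast recovery completion rate, assumed
C = 0.9            # coverage: fraction of errors handled by fast recovery, assumed
MU_FULL = 6.0      # full restart completion rate, assumed

# States: 0 = Up, 1 = FastRecovery, 2 = FullRestart
Q = np.array([
    [-LAMBDA,      LAMBDA,   0.0],
    [C * MU_FAST,  -MU_FAST, (1 - C) * MU_FAST],
    [MU_FULL,      0.0,      -MU_FULL],
])

A = np.vstack([Q.T, np.ones(3)])          # pi Q = 0 plus the normalization row
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state availability:", round(pi[0], 6))
```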


Automated Verification of Enterprise Load Tests

Abstract.
Many field problems are related to systems not scaling, rather than to feature bugs. To assure the quality of systems, load testing is a required testing procedure in addition to conventional functional testing procedures. Unlike other testing mechanisms, which focus on testing the system with a small number (one or two) of users, load testing examines the system's behavior under concurrent access by a large number (e.g. millions) of users. A typical load test has one or more load generators which simultaneously send requests to the system under test. During a load test, execution logs along with performance data are collected. Execution logs are debug statements that developers insert into the source code; they record the run-time behavior of the application under test. Performance data are collected by resource monitoring tools like PerfMon and record resource usage information such as CPU utilization, memory and disk I/O.

Existing load testing research focuses on the automatic generation of load test suites. There is limited work that proposes systematic analysis of the results of a load test. Unfortunately, looking for problems in a load test is a time-consuming and difficult task due to challenges such as the depth and breadth of needed knowledge, time pressure, monitoring overhead, and the large volume of data. The current practice of load testing analysis is mainly manual and ad hoc.

We propose an approach which automatically verifies the functional and performance requirements of a test by mining the readily available execution logs. Our approach consists of three steps. We first abstract log lines to execution events. Then we derive functional models from the execution sequences to uncover functional problems, and we derive performance models by mining response time distributions to detect performance problems. Our case studies on open source and large industrial systems show that our approach produces few false alarms and scales well.
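The approach's models are not reproduced here; as an assumed illustration of its first step, abstracting log lines to execution events, the sketch below collapses variable tokens so that lines produced by the same debug statement map to the same event (patterns and log lines are invented):

```python
# Illustrative log-line abstraction: collapse variable tokens so that log lines
# describing the same event share one signature.
import re
from collections import Counter

def abstract(line: str) -> str:
    line = re.sub(r"0x[0-9a-fA-F]+", "<ID>", line)   # hex identifiers
    line = re.sub(r"\d+(\.\d+)?", "<NUM>", line)     # integers and decimals
    return line.strip()

log = [
    "user 1041 login took 12 ms",
    "user 2093 login took 540 ms",
    "session 0x3fa2 closed",
    "session 0x99b1 closed",
]

events = Counter(abstract(line) for line in log)
for signature, count in events.items():
    print(count, signature)
# 2 user <NUM> login took <NUM> ms
# 2 session <ID> closed
```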


ODC Product Profiling

Abstract.
ODC product profiles provide the software engineer with an insight into the execution of a development process and the resultant software reliability experienced. In essence it yields images of the product through the process as seen by the different dimensions of ODC. This paper describes how we develop an ODC product profile and illustrates it with a case study. Specific to the case study, we show: (1) Impact distribution changes that are traceable across development and customer usage. (2) How these changes are related to the testing strategy that was executed. (3) How one can alter the experience with specific changes to the development or test process.

Presentation by: Troy Settles, CAT Electronics


ODC Deployment - A Case Study at Caterpillar

Abstract.
Deploying ODC as a software engineering method in a large organization is a multi-year effort with a significant socialization aspect to it. This presentation will illustrate examples of the deployment elements along with a discussion of the lessons learned. We will cover seven major deployment steps: Pilot, Training, Deep Dives, Touch Points, Feedback, Center of Excellence and Ownership. The case study covers around three to four years of experience across multiple departments touching several hundred engineers.

Presentation by: Troy Settles or Anita Yadav, CAT Electronics


Software Fault Injection - Industry Experience

Abstract.
This presentation provides information on the tool, the practice and the results of the software fault injection technology that was introduced into our key embedded product a few years ago.

A significant part of our code deals with error handling. This code was not exercised in internal testing, but it was executed in some rare situations at customer installations. We had several CFDs resulting from these situations. We developed the technology to systematically inject faults and force control through the error handling code. The technology involved on-the-fly transformation during compilation to intercept calls to various functions of interest. The tool also included support for introducing delays, generating exceptions and replacing functions. The fault injection activity could be limited to a selected set of components/files. The tool supported various policies for fault injection such as random, all call sites, every nth invocation, etc. The test automation tool that is used across Cisco was enhanced to support automation of the fault injection as well as the analysis of the results.
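The Cisco tooling performs this interception by on-the-fly transformation at compile time; as a loose, assumed illustration of the same idea at a much smaller scale, the sketch below wraps a function of interest so that every nth call fails, forcing the error handling path to run (names and the policy shown are hypothetical):

```python
# Illustrative fault injection by call interception: every nth call to the wrapped
# function raises an error, driving control through error handling code.
import functools

def inject_every_nth(n, exc=IOError("injected fault")):
    def decorator(func):
        calls = {"count": 0}
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            calls["count"] += 1
            if calls["count"] % n == 0:
                raise exc                      # injected failure
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_every_nth(3)
def read_config(path):
    return f"contents of {path}"              # stand-in for a real call site

if __name__ == "__main__":
    for i in range(6):
        try:
            read_config("/etc/app.conf")
        except IOError as err:
            print(f"call {i + 1}: handled {err}")   # error handling path exercised
```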

We have a large set of tests that are executed by our regression teams. We created special images instrumented for fault injection and ran the existing tests. This exercise uncovered several bugs in our error handling code. With about 20 weeks of effort we found about 400 bugs, all of which were potential CFDs. The norm for this group of testers was about 3 weeks per bug. In almost all instances the error handling code was being executed for the first time. This also sensitized the development engineers to the need to test their error handling code.
The exercise was repeated on various branches/releases of the software with various sets of functions, with similar results. Now fault injection technology is part of our UT tool and is used regularly by our development engineers. The proportion of CFDs resulting from error handling code has dropped significantly.


Visualizing the Results of Field Testing

Abstract.
Field testing of software is necessary to find potential user problems before market deployment. The large number of users involved in field testing along with the variety of problems reported by them increases the complexity of managing the field testing process. However, most field testing processes are monitored using ad-hoc techniques and simple metrics (e.g., the number of reported problems). Deeper analysis and tracking of field testing results is needed. This paper introduces techniques for visualizing the field testing results. The techniques focus on the relation between users and their reported problems. The visualizations help identify general patterns to improve the testing process. For example, the technique identifies groups of users with similar problem profiles. Such knowledge helps improve future field testing efforts by reducing the number of needed users since we can pick representative users. We demonstrate our proposed techniques using the field testing results for two releases of a large scale enterprise application used by millions of users worldwide.
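The paper's visualizations are not reproduced here; as an assumed illustration of one analysis it mentions, grouping users with similar problem profiles, the sketch below clusters users by the overlap of their reported problem sets (users, problems and the similarity threshold are invented):

```python
# Illustrative grouping of field-test users by the similarity of the problems they report.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def group_users(problems_by_user: dict, threshold: float = 0.5):
    groups = []
    for user, probs in problems_by_user.items():
        for group in groups:
            representative = problems_by_user[group[0]]
            if jaccard(probs, representative) >= threshold:
                group.append(user)
                break
        else:
            groups.append([user])
    return groups

if __name__ == "__main__":
    reports = {
        "userA": {"crash_on_save", "slow_login"},
        "userB": {"crash_on_save", "slow_login", "bad_font"},
        "userC": {"printing_fails"},
    }
    print(group_users(reports))   # [['userA', 'userB'], ['userC']]: pick one user per group
```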


Application of the Architectural Analysis and Design Language (AADL) for Quantitative System Reliability and Availability Modeling

Abstract.
"Reliability and availability are essential to mission success, but are frequently not adequately addressed in the design phase when there is a minimal cost and schedule impact for changes necessary to achieve system dependability requirements or objectives. An important reason for this omission is the absence of the necessary capability in current architectural design languages and tools. Languages such as the Unified Modeling Language (UML) and the System Markup Language (SysML) provide many useful diagram types for design representation but in themselves do not have the formality and specificity that allows for a standardized representation of system failure behavior which in turn can be used for a transformation algorithm. On the other hand, the Architectural Analysis and Design Language (AADL, SAE standard SA AS5506-2004) includes an Error Model Annex which is sufficient for automated generation of a reliability/ dependability model when combined with the architectural information in the primary representation. This paper describes how we used the AADL and Error Model Annex to automatically generate input files for the Mobius Stochastic Activity Network (SAN) analysis tool developed by the University of Illinois. It then describes how Mobius was used to evaluate system reliability and availability for a space vehicle -- both a baseline design and various alternatives. It concludes with an assessment of the strengths and limitations of our current implementation and plans for future enhancements.

"