Tutorials

List of 9 Tutorials:
  1. Model Based Testing of Control Systems
  2. A survey of verification tools for software reliability
  3. Testing Program Security Vulnerabilities
  4. Automation to Improve Reliability and Productivity - Tools
  5. Model-based Development in Practice: Successful Selection and Deployment
  6. A Methodology for Architecture-Based Software Availability Analysis
  7. Structured Safety and Assurance Cases: Concepts, Practicalities and Research Directions
  8. Orthogonal Defect Classification (ODC): A 10x on Root Cause Analysis
  9. Establishing an Effective Industrial Test Program: Selecting the Best Methods and Metrics

 

Automation to Improve Reliability and Productivity - Tools

Suresh C. Kothari, ECE Department, Iowa State University
Contact: kothari@iastate.edu
Homepage: http://class.ece.iastate.edu/kothari/index.html

This tutorial presents tool-assisted practices for achieving major improvements in the reliability and
productivity of software.

Duration: ½ day

Content

  • An overview of code analytics – a tool-assisted problem solving methodology for efficient and reliable software engineering
  • Hands-on demonstrations of tool-assisted code analytics applied to real-world engineering problems
  • Examples of how code analytics can be used to benefit design, development, training, and high-level decision making
  • Problem solving strategies critical for effective utilization of tools
  • Selection of tools and issues to address for instituting code analytics practices in industry

Areas of Applicability
The hands-on demonstrations and examples will illustrate potential applications of tool-assisted code analytics to a spectrum of software engineering tasks:

  • Architecture extraction
  • Debugging and tracing
  • Defect analysis
  • Reliability analysis
  • System integration
  • System-level testing
  • Training engineers on the internals of complex software
  • Viability analysis and cost and time estimates of projects
  • Audits of work performed

Outcomes
Completing this tutorial will help you to:

  • Integrate and use tool-assisted code analytics in your work
  • Know the different types of tools
  • Select appropriate tools to address your specific needs
  • Understand the important issues for making good use of tools

Participants
Managers, architects, and developers who deal with the evolution and maintenance of large software systems

About the Instructor
Suresh Kothari is a professor and Chair of the Software Systems Group in the Electrical and Computer Engineering department at Iowa State University. His industrial experience includes work at AT&T Bell Laboratories, consulting, and the founding of EnSoft Corp. He has done extensive research and teaching in tool-assisted code analytics. In 2008, he received the Prometheus Award for his teaching of innovative methods in software engineering, and he was the keynote speaker at the 2008 IEEE International Conference on Program Comprehension held in Amsterdam.

Back to the Top


Model-based Development in Practice: Successful Selection and Deployment

Half-day Tutorial
Instructor: Jeremias Sauceda, CTO, EnSoft Corp., U.S.A.
Contact: pi@ensoftcorp.com
Model-based technologies such as UML, AADL, Simulink and many others can be used to analyze, implement, and document your system. This tutorial gives you the background necessary to successfully select and deploy a model-based process within your company.

Selecting a Model-based methodology
The key to selecting the right model-based methodology is to know what is at the root of the problem you are trying to solve. This requires asking critical questions such as:
- Are errors introduced into my application because the design must be manually translated from
a design document into code?
- Do I lack documentation of my system in a form that is useful for developers to implement
features or managers to make decisions?
- Has my application become so large it is difficult to understand all the relationships and
interactions within it?
Answers to questions like these will help in selecting the particular model-based methodology required to address your analysis, implementation, and documentation needs. This part of the tutorial will discuss types of model-based methodologies and their applicability.

Deploying a Model-based Process
Deploying a model-based process involves many of the same issues as deploying other technologies that are part of your development process. However, it also poses its own challenges. In this section of the tutorial we discuss the practical issues involved in deploying a model-based methodology, including selecting a tool chain, version control, team collaboration, etc.

Putting it all Together
We will put theory into practice by analyzing why AADL and Simulink are good choices to improve
reliability in a sample embedded software project and how deployment issues were addressed in this project.

Outcomes
After completing this tutorial you will be able to:
- Select the right model-based methodology to address your specific needs.
- Identify specific deployment issues early before they turn into problems.
- Successfully deploy a model-based process.

Participants
- Managers
- Senior Engineers
- Anyone involved in selecting or deploying a model-based process

About the Instructor: Jeremias Sauceda has extensive experience with model-based development (MBD) including consulting and product development. As the CTO of EnSoft, he has successfully launched MBD products that are used by more than 90 companies worldwide. He works with major avionics and automobile companies on deployment of MBD technology for safety-critical software. He was a speaker at the ESR Workshop at 2007 ISSRE conference.

Back to the Top




Testing Program Security Vulnerabilities

Mohammad Zulkernine and Hossain Shahriar
School of Computing, Queen’s University, Kingston, Canada
{mzulker, shahriar}@cs.queensu.ca

Abstract
Today’s software (or programs) is complex in nature and accessible to almost everyone. These programs are developed using implementation languages and library functions that often suffer from inherent vulnerabilities. As a result, exploitations of these known vulnerabilities through successful attacks are very common. Despite rigorous use of various complementary techniques to detect and prevent vulnerabilities, numerous exploitations are still being reported. Research has shown that effective quality assurance methods can prevent such vulnerabilities when applied during software development processes, and software security testing is one of the most important of these methods. However, effective program security testing requires an adequate test suite that can reveal specific vulnerabilities. In this tutorial, we will first present the importance of program security testing and the fundamentals of software testing. This will be followed by an introduction to the most common program vulnerabilities and related security testing approaches. Then, we will present the idea of applying mutation-based testing to obtain an adequate test suite that can reveal program security breaches. We will demonstrate how mutation operators can be applied to inject vulnerabilities systematically and to assess the quality of a test suite for revealing four of the worst vulnerabilities in programs, namely buffer overflow, SQL injection, format string bugs, and cross-site scripting. The tutorial addresses one of the most crucial issues of software quality assurance. By attending it, software engineers, security engineers, programmers, and software/security testers will become aware of these vulnerabilities and be able to address them proactively.

Presenters

Dr. Mohammad Zulkernine is an Associate Professor in the School of Computing of Queen’s University, Canada, where he leads the Queen’s Reliable Software Technology (QRST) research group. He received his Ph.D. from the University of Waterloo, Canada, where he belonged to the university’s Bell Canada Software Reliability Laboratory. His research focuses on methods and tools for reliable and secure software, automatic software monitoring, and intrusion detection. Dr. Zulkernine’s research projects are funded by a number of provincial and federal research organizations of Canada along with some industry partners such as Bell Canada and Cloakware Inc. He teaches software reliability and security related courses both in academia and industry and has extensive publications in these areas. Dr. Zulkernine spearheaded the organization and creation of the IEEE Workshop Series on Security, Trust, and Privacy for Software Applications. He has frequently served on the program committees of COMPSAC, DSN, ACM SAC, ESSoS, SSIRI, and QSIC, as well as a number of other international conferences and workshops on software and security engineering. Dr. Zulkernine is a senior member of the IEEE, a member of the ACM, and a licensed professional engineer of the province of Ontario, Canada. More information about his research and teaching can be found at http://www.cs.queensu.ca/~mzulker.

Hossain Shahriar is currently a PhD student in the School of Computing, Queen’s University, Canada, where he is a member of the Queen’s Reliable Software Technology (QRST) research group. Mr. Shahriar is an expert on software security testing with extensive publications and industry experience in the area of software and security engineering. He obtained his MSc degree in Computer Science from Queen’s University, Canada, and his MSc thesis research on security testing received the IEEE Kingston Section Research Excellence award. Mr. Shahriar has also been awarded the prestigious Ontario Graduate Scholarship for his PhD study in Ontario, Canada. He is a student member of the IEEE and the ACM. More information about his research and publications can be obtained from http://www.cs.queensu.ca/~shahriar.

Duration and intended audience
The tutorial is intended as a half-day (3-hour) session. It is targeted at software/security engineering researchers, software/security testers, and practitioners.

Content description
The tutorial consists of six major parts.
In the first part, we will briefly introduce software testing and the test coverage concepts used in practice. We will also discuss the limitations of different coverage criteria.
In the second part, we will show how exploitation of the four most common vulnerabilities (buffer overflow, format string bug, SQL injection, and cross-site scripting) affects program behavior.
In the third part, we will provide an overview of software application (or program) security testing and compare it with traditional software testing approaches. We will also relate software security bug classes to traditional software bug classes, followed by a brief overview of other complementary security assurance approaches.
In the fourth part, we will introduce the mutation-based security testing approach. We will briefly introduce mutation operators and mutant killing criteria for testing buffer overflow and format string bug vulnerabilities, and show how mutation-based testing and analysis can help in testing programs against potential attacks exploiting them.
In the fifth part, we will describe how the mutation-based security testing approach can be applied to a web-based program vulnerability, namely SQL injection. We will briefly introduce a number of mutation operators and mutant killing criteria for adequately testing programs for SQL injection, and show how mutation-based testing and analysis can detect SQL injection attacks.
In the sixth part, we will describe how the mutation-based security testing approach can be applied to another web-based program vulnerability, namely cross-site scripting (XSS). We will briefly introduce a number of mutation operators and mutant killing criteria for adequately testing programs for XSS vulnerabilities, and show how mutation-based testing and analysis can help in identifying possible XSS attacks.
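As a small illustration of the mutation-based idea described above, the Python sketch below (names and the single operator are invented for this page; the tutorial's own operators target C and web programs) applies one mutation operator that strips input sanitization, and checks whether a given test input distinguishes the mutant from the original:

```python
# Sketch of mutation-based security testing for SQL injection.
# The mutation operator replaces the sanitizer with the identity function,
# producing a "vulnerable mutant" that simulates a missing-escaping defect.

def sanitize(value: str) -> str:
    """Escape single quotes before embedding a value in a SQL string."""
    return value.replace("'", "''")

def build_query(user_input: str, sanitizer=sanitize) -> str:
    return "SELECT * FROM users WHERE name = '" + sanitizer(user_input) + "'"

def mutant_query(user_input: str) -> str:
    # Mutation operator applied: sanitization removed.
    return build_query(user_input, sanitizer=lambda s: s)

def kills_mutant(test_input: str) -> bool:
    """A test input kills the mutant if original and mutant behavior differ."""
    return build_query(test_input) != mutant_query(test_input)

# A benign input cannot tell the mutant apart; an attack-style input can.
assert not kills_mutant("alice")
assert kills_mutant("' OR '1'='1")
```

A test suite is judged adequate for this operator only if it contains at least one input that kills the mutant; a suite of benign inputs alone would leave the injected vulnerability undetected.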

Back to the Top


Title: A Methodology for Architecture-Based Software Availability Analysis

Authors:
Swapna S. Gokhale, University of Connecticut
Shivani Arora, Alcatel-Lucent, Bangalore
Veena B. Mendiratta, Bell Labs, Alcatel-Lucent, USA

Contact: veena@alcatel-lucent.com, ssg@engr.uconn.edu, shivaniarora@alcatel-lucent.com

Motivation

  • Software architecture exerts a significant influence on quality attributes such as reliability, availability, and performance.
  • Architecture-based analysis can be applied early in the software lifecycle to identify components critical from the point of view of system availability.
  • Many factors influence system availability; it is too cumbersome and intractable to consider all of them simultaneously, yet not prudent to ignore any of them.

Topics

  • Present a comprehensive, three-tier approach for architecture-based software availability analysis. This approach allows an integrated consideration of several factors in availability analysis.
    • Tier 1: Component availability analysis, subject to component failures and repair/restoration strategies.
    • Tier 2: Service availability analysis, subject to component interactions and message exchanges.
    • Tier 3: System availability analysis, subject to service distributions.
  • Demonstrate the methodology using a banking application and the IP Multimedia Subsystem (IMS).
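A minimal numerical sketch of the three-tier idea is given below, assuming independent component failures, a series structure per service, and a usage-weighted mix of services. These are simplifying assumptions made for illustration only, not the models presented in the tutorial:

```python
# Tier 1: component availability from mean time to failure and repair.
def component_availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

# Tier 2: service availability as the product over the components the
# service touches (independence and series structure assumed).
def service_availability(component_avails):
    avail = 1.0
    for a in component_avails:
        avail *= a
    return avail

# Tier 3: system availability as a usage-weighted average over services.
def system_availability(service_avails, usage_weights):
    return sum(a * w for a, w in zip(service_avails, usage_weights))

# Example: two components; one service uses both, a second uses only one.
a1 = component_availability(1000.0, 1.0)
a2 = component_availability(500.0, 2.0)
s1 = service_availability([a1, a2])
s2 = service_availability([a1])
sys_a = system_availability([s1, s2], [0.7, 0.3])
```

The tiering shows why early architecture analysis is useful: a change in a single component's repair strategy propagates through every service that touches it, and from there into the system-level figure.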

Swapna S. Gokhale is an Associate Professor in the Dept. of Computer Science and Engineering at the University of Connecticut. She received her B.E. (Hons.) in Electrical and Electronics Engineering, and Computer Science from the Birla Institute of Technology and Science, Pilani, India in 1994, and her M.S. and Ph.D in Electrical and Computer Engineering from Duke University in 1996 and 1998 respectively. Her research interests include software testing and reliability, Quality of Service issues in wireless and wireline networks and Voice-over-IP. She received a CAREER award from the National Science Foundation (NSF) to conduct research in the area of architecture-based software reliability assessment in 2007. She is a Senior Member of the IEEE.

Shivani Arora is a Technical Manager at Alcatel-Lucent, where she has worked since 2005 in the India Product Realization Centre. In her current role she is responsible for the complete product lifecycle of the CDMA OAM platform software and product architecture, and leads a team of Subject Matter Experts. In her earlier role as a Subject Matter Expert she provided technical guidance for EVDO RNC development teams, worked as a System Engineer for the 3G1x RNC, and served as Solution Architect for the OAM network of the CDMA RAN. Before that, she worked for 16 years at CDOT, the technology development wing of the Government of India, on ISDN, V5.2, and GSM products. Her areas of interest are communication protocols, software architecture, and design. She received her B.E. in Computer Science and Technology from the Indian Institute of Technology (IIT) Roorkee, India, in 1987 and her M.Tech. in Computer Technology from IIT New Delhi. She is a member of the IEEE and has published a paper in the Bell Labs Technical Journal.

Veena B. Mendiratta is a Practice Leader for Reliability in the Bell Labs Corporate CTO organization at Alcatel-Lucent and is based in Naperville, IL, USA. In her current position she is responsible for network reliability analysis, and her current work includes Long Term Evolution (LTE) solution reliability. In 25 years with the company her work has focused on reliability and performance analysis for telecommunications systems, products, networks, and services to guide system architecture solutions. Her technical interests include architecture, system and network dependability analysis, and software reliability engineering. She is a co-developer of a tool for software reliability growth modeling. She has presented papers and tutorials at major IEEE conferences including RAMS, DSN, ISAS, DRCN, and ISSRE, and has published papers in the Bell Labs Technical Journal and in Lecture Notes in Computer Science. She has a B.Tech. in Engineering from the Indian Institute of Technology, New Delhi, India, and a Ph.D. in Operations Research from Northwestern University, Evanston, Illinois, USA. She is a Senior Member of the IEEE, a member of INFORMS, and a member of the Alcatel-Lucent Technical Academy.

Back to the Top




Model Based Testing of Control Systems

Audience: Aerospace (private industry and government labs) and automotive control engineers, students, and safety-critical control system developers

Takeaway: A glimpse of 20 years of experience in automated control system testing, dos and don’ts, methods of testing

Background: Control systems are part of many safety-critical systems, such as aircraft, automotive, and health-care systems. Commercial aircraft today use fly-by-wire technology to provide comfort to passengers and to reduce pilot workload. The automotive industry uses control systems for brake management, fuel management, and engine control. Health-care systems use control systems to position X-ray equipment automatically. Nuclear reactors use control systems to control the reaction speed. All of these are examples of safety-critical systems; a failure in such a system could cause loss of life and money.

Control systems are developed using MATLAB/Simulink software, with the C/Ada code generated automatically or manually. Testing the control system becomes a challenging task for the Verification and Validation group in any organization developing such systems. Aircraft are certified to the DO-178B standard, which mandates a verification and validation process with proof of correctness to be submitted to certifying authorities. The control system blocks and the C code have to be verified against a design specification document.

Manual and automated testing is carried out to verify the control system at the lowest level. The control system is also tested at the system level with an aircraft or vehicle model in the loop. The low-level testing is done using model-based test methods, and various automation methods can be explored to minimize test time and increase coverage. This tutorial covers the various blocks used in a control system, describes the concept of model coverage, and describes methods used to test the C code against the model.
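One common way to test generated code against the model is back-to-back comparison. The hypothetical Python sketch below (the real workflow exercises Simulink models against compiled C; the limiter logic and names are illustrative) drives a reference model and a stand-in for its generated implementation with threshold-focused inputs:

```python
# Back-to-back test: the "generated" controller code is exercised with the
# same inputs as the reference model; outputs must agree within a tolerance.

def model_limiter(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Reference (model) behavior: saturate x to the range [lo, hi]."""
    return max(lo, min(hi, x))

def generated_limiter(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Stand-in for the auto-generated implementation of the same block."""
    if x > hi:
        return hi
    if x < lo:
        return lo
    return x

def back_to_back(inputs, tol: float = 1e-9) -> bool:
    """True if model and implementation agree on every input."""
    return all(abs(model_limiter(x) - generated_limiter(x)) <= tol
               for x in inputs)

# Threshold-focused vector: values at and just across the saturation limits,
# where implementation defects in limiter blocks most often hide.
assert back_to_back([-2.0, -1.0, -0.999, 0.0, 0.999, 1.0, 2.0])
```

Automation then reduces to generating such input vectors (randomly, from orthogonal arrays, or via optimization) and measuring the coverage they achieve on the model and the code.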

Course Contents

  1. Safety Critical Control Systems in the Industry
     Examples
     A typical control system
  2. Control system elements
  3. Control System Specification
  4. Testing the Controller
     Block coverage
     Signal generation
     Manual testing
     Random testing
     Orthogonal Array Testing
     Automated thresholds
     MCDC coverage
     Error seeding
  5. Things you can try
     Optimization: genetic algorithms, simulated annealing
     Using an Excel spreadsheet to design test cases and documentation
     Use of tools like V&V toolbox, Reactis, SimDiff

Instructor Profile

Yogananda Jeppu: Software Specialist, Moog India, jyogananda@moog.com
20 years of experience in control system design, coding, and testing in the aerospace domain, with 20 years of experience using MATLAB and Simulink in the control systems domain. He has used optimization algorithms to generate model-based test cases and has written several papers on test case generation, automation, and optimization.
Dr. Selvamurugan B. Hariram
13 years of experience in teaching control systems. He is currently working on automated control system testing for commercial fly-by-wire aircraft. He has several publications on VLSI, FPGA-based motor controllers, and DSP implementations, and has conducted several courses on simulation, DSP implementations, and motor control.

Back to the Top


 

Orthogonal Defect Classification (ODC)

A 10x on Root Cause Analysis

Ram Chillarege, Chillarege, Inc
Contact: ram@chillarege.com
More Info: ODC Pages

LENGTH 1/2 day

 

TOPICS

ODC Concepts
ODC Classification and Information Extraction
How to gain 10x in Root Cause Analysis
How to tune up the Test Process using ODC
In-process Measurement and Prediction with ODC
Case Studies of ODC based Process Diagnosis
What is required to support ODC?
How does one plan an ODC Rollout?

AUDIENCE

This tutorial is for the practising engineer and manager in software engineering. Attendees should have reasonable experience with the software development lifecycle, process improvement methods, tools, and practices. Knowledge of CMMI and Six Sigma is useful but not necessary.
Typical roles are: software engineering project leads, first- and second-line managers, those with delivery or QA responsibility, architects, program management, and service management. The SEPG department as a whole would also be interested.

TRAINING

Gain a firm appreciation of ODC concepts. Understand what it can do, and what it cannot. Learn how ODC helps perform advanced software engineering analysis techniques to deliver value to the different stakeholders: engineers, project managers, and process architects. Learn how ODC is applied in-process and post-process. Discuss deployment models and options. Qualify and quantify the support services needed to insert, support, and scale a deployment.

Instructor

Ram, inventor of Orthogonal Defect Classification (ODC), brings a new order of insight into measuring and managing software engineering. His consulting practice specializes in Software Engineering Optimization using ODC. These methods bring speed and consistency into the art of managing product quality and delivery using data from the current process.

He was with IBM for 14 years where he founded and headed the IBM Center for Software Engineering. He then served as Executive Vice President of Software and Technology for Opus360, New York. In 2004 Ram received the IEEE technical achievement award for the invention of Orthogonal Defect Classification (ODC). He had received the IBM Outstanding Innovation Award for ODC in 1993. The methodology brings value through fast measurement, sophisticated analysis and targeted feedback.

Ram is an IEEE Fellow and the author of ~50 peer-reviewed technical articles. He chairs the IEEE Steering Committee for the International Symposium on Software Reliability Engineering, and has served on several steering committees, editorial boards, and the alumni board of the University of Illinois Department of Electrical and Computer Engineering. He received a BSc degree from the University of Mysore, BE and ME degrees from the Indian Institute of Science, and a PhD in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign.

Back to the Top




Structured Safety and Assurance Cases: Concepts, Practicalities and Research Directions

Duration: Half day

Overview of Tutorial:
This tutorial will develop the motivation and context of Safety and Assurance Cases. In particular, it will introduce the concept of Structured Safety Cases using the standard Safety Case notations: ‘Claim-Argument-Evidence’ and Goal Structuring Notation. It will discuss a phased approach to Safety Case development: determining safety requirements; demonstrating satisfaction of these requirements at an equipment level and at an operational level; showing on-going safety under maintenance and modification; and finally safe disposal/decommissioning. The tutorial will be illustrated using Adelard’s Assurance and Safety Case Environment (ASCE). We will provide an overview of current research directions in assurance cases, drawing on our work in a variety of sectors, and discuss our recent work on security justification.
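As a toy rendering of the Claim-Argument-Evidence idea (all names invented; real CAE and GSN notations carry far more structure, such as context, assumptions, and justifications), a case can be pictured as a small tree in which a top-level claim is supported by arguments, each resting on evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str

@dataclass
class Argument:
    description: str
    evidence: list = field(default_factory=list)

@dataclass
class Claim:
    statement: str
    arguments: list = field(default_factory=list)

    def is_supported(self) -> bool:
        """A crude adequacy check: every argument cites some evidence."""
        return bool(self.arguments) and all(a.evidence for a in self.arguments)

case = Claim(
    "The system is acceptably safe to operate",
    [Argument("All identified hazards are mitigated",
              [Evidence("Hazard log with closed mitigation actions")]),
     Argument("Software meets its safety requirements",
              [Evidence("Requirements-based test results")])],
)
```

The value of the structured notation is precisely that gaps become mechanical to find: an argument with no evidence, or a claim with no argument, is visible in the structure rather than buried in narrative prose.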

Audience: Assumptions, background, and experience.
No particular experience is assumed, but some background in assurance concepts and engineering reliable and trustworthy systems would be useful.

Take Aways: What will the student learn, and what are the benefits to their job.
Upon successful completion of this tutorial, the attendee will attain:
1. a good understanding of the concepts of structured safety and assurance cases
2. an understanding of the key phases of assurance case development
3. familiarity with realistic safety case structures based on ASCE examples
4. an understanding of the issues in developing assurance cases for different qualities

Presenter(s):
Robin E Bloomfield
Professor of System and Software Dependability
Director, Centre for Software Reliability
reb@csr.city.ac.uk
Member Adelard LLP reb@adelard.com
City University, London EC1V 0HB
Tel: +44 20 7490 9450 (sec Adelard)
Tel: +44 20 7040 8423 (sec CSR)

Back to the Top


A survey of verification tools for software reliability

Authors: professors from Indian Institute of Science

Quality Assurance (QA) in software development is a difficult process.
While studies show that up to 50% of the effort in software projects
currently goes towards QA, many bugs and vulnerabilities typically remain
in software released to customers. The research community has suggested
that automated verification tools and techniques be used at all stages of
software development to alleviate this problem. In recent years this
suggestion has found an audience in the software development industry,
with various companies (e.g., Coverity, GrammaTech, Klocwork) successfully
marketing verification tools. Many
leading companies have also established in-house research and development
groups focusing on verification, e.g., General Motors, IBM, Intel, and Microsoft.

In this tutorial we first introduce the general flavor of verification
tools and the verification process. We then give the theoretical
underpinnings of and demonstrate several tools that have been used
in realistic projects, such as Alloy (to identify errors at the design stage), FindBugs (to find violations of coding conventions), ESC/Java (to verify whether programmer-placed assertions will always succeed), and Pex (to automatically generate test-cases with the goal of increased test coverage). This tutorial is aimed at industry practitioners, researchers, and students interested in knowing about selected tools at the cutting-edge of software verification practice.
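For example, a programmer-placed assertion of the kind such tools try to discharge statically might look like the hypothetical Python fragment below (the named tools target Java and .NET; the function is invented). Given the guard, a verifier can prove the assertion never fails for any input:

```python
def average(values):
    """Mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("empty input")
    total = sum(values)
    # A verifier can discharge this statically from the guard above:
    # on this path, len(values) >= 1 always holds.
    assert len(values) > 0
    return total / len(values)
```

A test-generation tool in the Pex style attacks the same artifact from the other direction: it searches for concrete inputs that drive execution down each branch, including the empty-input path, rather than proving the assertion outright.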

Back to the Top


Establishing an Effective Industrial Test Program: Selecting the Best Methods and Metrics


If you have these questions:

  • How do you make verification and validation more effective?
  • What testing methods are best for industrial software development?
  • How do you effectively measure verification and validation?

This tutorial will provide the means to answer them.


We will present a tutorial on industry best practices for testing, including test methods and metrics. The tutorial will emphasize effective verification and validation methods that ABB applies to its industrial software development. These methods have been demonstrably effective in improving testing in a variety of industrial product verification and validation settings. We will discuss techniques for selecting the most effective test methods for your particular development environment and life-cycle model. We will discuss a core set of test metrics and how to choose metrics that will provide the most value for an industrial environment.


Agenda:


  • Survey of industrial test methods and applicability for the following categories of methods (Brian P. Robinson, PhD.):

    • Black-box Testing Methods
    • Grey-box Testing Methods
    • White-Box Testing Methods
    • Positive (valid) case testing
    • Negative (invalid) case testing
    • Fault-based Testing Methods
    • Regression Testing

  • Survey of industrial test metrics (Will Snipes)

    • Unit Test Metrics
    • Common Testing Metrics for formal testing
    • Integration Testing Metrics
    • Requirements Testing Metrics

  • Open Question and Answer
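As a hedged taste of the metrics survey above (the formulas are standard, but the figures and the selection are purely illustrative, not ABB's), two metrics that commonly anchor such a core set can be computed as:

```python
def pass_rate(tests_passed: int, tests_run: int) -> float:
    """Fraction of executed tests that passed in a formal test cycle."""
    return tests_passed / tests_run

def defect_density(defects_found: int, ksloc: float) -> float:
    """Defects found per thousand lines of source code."""
    return defects_found / ksloc

# Illustrative figures for one release cycle.
rate = pass_rate(462, 480)
density = defect_density(36, 120.0)
```

The tutorial's point is less the arithmetic than the selection: a metric earns its place only if it drives a decision in your particular environment and life-cycle model.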


Target Audience
Practitioners and researchers interested in learning details of industrial test methods and metrics. Practitioners starting or wishing to improve testing activities in their organization.

Organizers


  1. Brian P. Robinson, PhD. (ABB)
  2. Will Snipes (ABB) Contact: will.snipes@us.abb.com

Back to the Top