
Research Papers

These papers are available for download subject to the standard copyright restrictions. A copy may be made for personal research use only. The document may not be copied or sold to third parties. Quoted extracts should acknowledge the source document. Use of larger portions of the document requires the permission of the copyright holder.

Recent publications are listed first.





Combining testing and proof to gain high assurance in software: A case study
Authors:
P Bishop, R Bloomfield, L Cyra

Details:
In Proceedings of the IEEE International Symposium on Software Reliability Engineering (ISSRE 2013), 4-7 Nov 2013, Pasadena, pp. 248-257

Brief summary:
There are potential benefits in combining static analysis and testing: the results obtained can be more general than those of standalone dynamic testing, while the effort needed is less than for standalone static analysis. This paper presents a specific example of this approach applied to the verification of continuous monotonic functions. The approach combines a monotonicity analysis with a defined set of tests to demonstrate the accuracy of a software function over its entire input range. Unlike “standalone” dynamic methods, our approach provides full coverage and guarantees a maximum error bound. We present a case study of the application of our approach to the analysis and testing of the software-implemented transfer function in a smart sensor. This demonstrated that relatively low levels of effort were needed to apply the approach. We conclude by discussing future developments of this approach.
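
To see why monotonicity plus a finite test set gives a guaranteed bound, consider the following minimal sketch (our illustration, not the paper's method; the grid, figures and function names are hypothetical). Between adjacent test points, a monotonic implementation and a monotonic ideal transfer function are each trapped between their endpoint values, so a worst-case error over the whole input range can be computed:

    # Minimal sketch (illustrative, not the paper's method): bound the
    # error of a verified-monotonic implementation over an entire input
    # range from a finite, sorted set of test points.
    def worst_case_error(test_inputs, measured, reference):
        bound = 0.0
        for i in range(len(test_inputs) - 1):
            # Monotonicity traps the implementation's output between its
            # values at the interval endpoints, and likewise the ideal.
            lo_imp, hi_imp = sorted((measured[i], measured[i + 1]))
            lo_ref, hi_ref = sorted((reference(test_inputs[i]),
                                     reference(test_inputs[i + 1])))
            # Largest possible |implementation - ideal| on this interval
            bound = max(bound, hi_imp - lo_ref, hi_ref - lo_imp)
        return bound

    # e.g. a transfer function tested at three points against an ideal:
    print(worst_case_error([0.0, 0.5, 1.0],
                           measured=[0.01, 0.52, 0.99],
                           reference=lambda x: x))  # -> 0.52; a finer
                                                    # grid tightens it
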
Download


Security-Informed Safety: If It's Not Secure, It's Not Safe
Authors:
R Bloomfield, K Netkachova, R Stroud

Details:
In Proceedings of 5th International Workshop on Software Engineering for Resilient Systems (SERENE 2013), Kiev, Ukraine, Oct 2013

Brief summary:
Traditionally, safety and security have been treated as separate disciplines, but this position is increasingly becoming untenable and stakeholders are beginning to argue that if it’s not secure, it’s not safe. In this paper we present some of the work we have been doing on “security-informed safety”. Our approach is based on the use of structured safety cases and we discuss the impact that security might have on an existing safety case. We also outline a method we have been developing for assessing the security risks associated with an existing safety system such as a large-scale critical infrastructure.
Download


Does Software have to be Ultra Reliable in Safety Critical Systems?
Authors:
P Bishop

Details:
In Proceedings of Safecomp 2013, Toulouse, pp. 118-129, Sept 2013

Brief summary:
This paper argues that higher levels of safety performance can be claimed by taking account of: 1) external mitigation to prevent an accident; 2) the fact that software is corrected once failures are detected in operation. A model based on these concepts is developed to derive an upper bound on the number of expected failures and accidents under different assumptions about fault fixing, diagnosis, repair and accident mitigation. A numerical example is used to illustrate the approach. The implications and potential applications of the theory are discussed.
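
The flavour of such a model can be conveyed by a small simulation (an illustrative sketch under simplified assumptions, not the paper's model; all rates and probabilities below are invented). Each residual fault contributes to the failure rate until its first failure is detected and fixed, and external mitigation stops most failures from becoming accidents:

    import random

    # Illustrative simulation (not the paper's model): N residual faults,
    # each failing at 'rate' per hour; a detected failure is mitigated
    # externally with probability p_mitigate, otherwise it becomes an
    # accident; the offending fault is fixed as soon as it is detected.
    def simulate(n_faults=10, rate=1e-3, p_mitigate=0.9,
                 hours=100_000, seed=1):
        rng = random.Random(seed)
        failures = accidents = 0
        faults = n_faults
        for _ in range(hours):
            for _ in range(faults):
                if rng.random() < rate:      # one fault fails this hour
                    failures += 1
                    if rng.random() > p_mitigate:
                        accidents += 1
                    faults -= 1              # perfect fix on detection
                    break
        return failures, accidents

    print(simulate())  # typically (10, ~1): every fault eventually
                       # fails once, but most failures are mitigated
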
Download


Diversity for Security: a Study with Off-The-Shelf AntiVirus Engines
Authors:
P Bishop, R Bloomfield, I Gashi, V Stankovic

Details:
In Proceedings of ISSRE 2011, Hiroshima, Japan

Brief summary:
In this paper we present an empirical analysis using a known set of software viruses to explore the detection gains that can be achieved from using more diversity (i.e. more than two AntiVirus products), how diversity may help to reduce the “at risk time” of a system, and a preliminary model fitting using the hyper-exponential distribution.
Download


Toward a Formalism for Conservative Claims about the Dependability of Software-Based Systems
Authors:
PG Bishop, RE Bloomfield, B Littlewood, A Povyakalo, DR Wright

Details:
IEEE Transactions on Software Engineering, Vol. 37, No. 5, pp. 708-717, Sept/Oct 2011

Brief summary:
Here, we consider a simple case where an expert makes a claim about the probability of failure on demand (pfd) of a subsystem of a wider system and is able to express his confidence about that claim probabilistically. An important, but difficult, problem then is how such subsystem (claim, confidence) pairs can be propagated through a dependability case for a wider system, of which the subsystems are components. An informal way forward is to justify, at high confidence, a strong claim, and then, conservatively, only claim something much weaker: e.g. if I am 99 percent confident that the pfd is less than 0.00001, it is reasonable to be 100 percent confident that it is less than 0.001. In this paper, we provide formal support for such reasoning.
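
A back-of-envelope calculation (our illustration, not the paper's formalism) shows what is at stake: simply moving the residual 1 percent of probability mass to the worst case yields a much weaker bound than the 0.001 claimed above, which is why formal support for such reasoning is needed:

    # Illustration (not the paper's formalism): the expert is 99%
    # confident that pfd < 1e-5. Placing the remaining 1% of probability
    # mass on the worst case (pfd = 1) gives a conservative expected pfd:
    confidence, claimed_pfd = 0.99, 1e-5
    bound = confidence * claimed_pfd + (1 - confidence) * 1.0
    print(bound)  # ~0.01: this naive bound supports pfd < 0.01 but not
                  # pfd < 0.001, motivating the paper's formal treatment
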
Download


An Approach to Using Non Safety-Assured Programmable Components in Modest Integrity Systems
Authors:
PG Bishop, N Chozos, K Tourlas

Details:
In Proceedings SAFECOMP 2010, Vienna, pp. 377–390, 2010

Brief summary:
There is a problem in justifying the use of programmable components if the components have not been safety justified to an appropriate integrity (e.g. to SIL 1 of IEC 61508). This paper outlines an approach (called LowSIL) developed in the UK CINIF nuclear industry research programme to justify the use of non safety-assured programmable components in modest integrity systems.
Download


Overcoming Non-determinism in Testing Smart Devices: A Case Study
Authors:
PG Bishop, L Cyra

Details:
In Proceedings SAFECOMP 2010, Vienna, pp. 237-250, 2010

Brief summary:
Non-determinism can arise due to inaccuracy in an analogue measurement made by the device when two alternative actions are possible depending on the measured value. This non-determinism makes it difficult to predict the output values that are expected from a test sequence of analogue input values. The paper presents two approaches to dealing with this difficulty: (1) based on avoidance of test values that could have multiple responses, (2) based on consideration of all possible interpretations of input data.
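
Approach (1) can be pictured with a small filter (a hypothetical sketch; the names and tolerance are ours, not the paper's). Any analogue test value lying within the device's measurement tolerance of a decision threshold could legitimately trigger either action, so such values are excluded from the test set:

    # Sketch of approach (1): drop test stimuli whose measured value
    # could fall either side of a decision threshold. 'tolerance' is the
    # assumed worst-case analogue measurement error of the device.
    def deterministic_test_values(candidates, thresholds, tolerance):
        return [v for v in candidates
                if all(abs(v - t) > tolerance for t in thresholds)]

    # e.g. a trip threshold at 10.0 with +/-0.05 measurement error:
    print(deterministic_test_values([9.90, 9.98, 10.02, 10.10],
                                    thresholds=[10.0], tolerance=0.05))
    # -> [9.9, 10.1]; 9.98 and 10.02 could produce either response
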
Download


Assessment and Qualification of Smart Sensors
Authors:
S Guerra, P Bishop, R Bloomfield, D Sheridan

Details:
In Proceedings NPIC/HMIT 2010, Las Vegas, USA, 2010

Brief summary:
This paper describes research work done on approaches to justifying smart instruments, and in particular, how some of this research has successfully been applied to the safety substantiation of such instruments.
Download


Reliability Modeling of a 1-Out-Of-2 System: Research with Diverse Off-The-Shelf SQL Database Servers
Authors:
P Bishop, I Gashi, B Littlewood, D Wright

Details:
In Proceedings of the 18th IEEE International Symposium on Software Reliability Engineering (ISSRE 2007), 5-9 November 2007, Trollhättan, Sweden, pp. 49-58

Brief summary:
This paper discusses two methods for modelling the reliability growth of a fault-tolerant database constructed from diverse database servers.
Download


Measuring Hazard Identification
Authors:
P R Caseley, Sofia Guerra and Peter Froome

Details:
In Proceedings of the 1st IET International Conference on System Safety, pp. 23-28, 6-8 June 2006, London, UK.

Brief summary:
This paper discusses an experiment that measured the effectiveness of a hazard identification process used to support safety in a Defence Standard 00-56 project. The experimental case study utilised a Ministry of Defence project that simultaneously assessed two potential suppliers competing for a MOD equipment contract. The UK MOD Corporate Research Programme funded the comparison work, and the MOD Integrated Project Team funded the project, which included each contractor's project safety processes.
Download


Justification of smart sensors for nuclear applications
Authors:
Peter Bishop, Robin Bloomfield, Sofia Guerra and Kostas Tourlas.

Details:
In Proceedings SAFECOMP 2005, 28-30 September, Fredrikstad, Norway, 2005 (c) Springer Verlag.

Brief summary:
This paper describes the results of a research study sponsored by the UK nuclear industry into methods of justifying smart sensors. Smart sensors are increasingly being used in the nuclear industry; they have potential benefits such as greater accuracy and better noise filtering, and in many cases their analogue counterparts are no longer manufactured. However, smart sensors (as is the case for most COTS) are sold as black boxes, despite the fact that their safety justification might require knowledge of their internal structure and development process. The study covered both the management aspects of interacting with manufacturers to obtain the information needed, and the technical aspects of designing an appropriate safety justification approach and assessing the feasibility of a range of technical analyses. The analyses performed include the methods we presented at Safecomp 2002 and 2003.
Download


Application of a Commercial Assurance Case Tool to Support Software Certification Services
Authors:
Luke Emmet and Sofia Guerra

Details:
SoftCeMent 05 (Software Certificate Management 2005), a workshop at the Conference on Automated Software Engineering (ASE 2005)

Brief summary:
This short paper for the SoftCeMent 05 workshop presents an approach to delivering a range of software certification processes based on the commercial assurance case tool, ASCE.
Download


Independent Safety Assessment of Safety Arguments
Authors:
Peter Froome

Details:
In Proceedings Safety-critical Systems Symposium, Southampton, UK, 8-10 February 2005 © Springer-Verlag

Brief summary:
The paper describes the role of the Independent Safety Auditor (ISA) as currently carried out in the defence and other sectors in the UK. It outlines the way the ISA role has developed over the past 15–20 years with the changing regulatory environment. The extent to which the role comprises audit, assessment or advice is a source of confusion, and the paper clarifies this by means of some definitions, and by elaborating the tasks involved in scrutinising the safety argument for the system. The customers and interfaces for the safety audit are described, and pragmatic means for assessing the competence of ISAs are presented.
Download


Software and SILs

Authors:
P.G. Bishop

Details:
Safety Critical Systems Club Newsletter, Jan 2005

Brief summary:
This short article for the UK Safety Critical Systems Club Newsletter suggests an alternative interpretation of the SIL concept for software.
Download


An Exploration of Software Faults and Failure Behaviour in a Large Population of Programs

Authors:
M.J.P. van der Meulen, P.G. Bishop and M. Revilla

Details:
ISSRE 04, St Malo, France, 2-5 Nov 2004

Brief summary:
A large part of software engineering research suffers from a major problem: there are insufficient data to test software hypotheses, or to estimate parameters in their models. To obtain statistically significant results, large sets of programs are needed, each set comprising many programs built to the same specification. We have gained access to such a large body of programs (written in C, C++, Java or Pascal) and in this paper we present the results of an exploratory analysis of around 29,000 C programs written to a common specification.

The objectives of this study were to:
  • characterise the types of fault that are present in these programs
  • characterise how programs are debugged during development
  • assess the effectiveness of diverse programming.
The findings are discussed, together with the potential limitations on the realism of the findings.
Download


An Empirical Exploration of the Difficulty Function
Authors:
Julian G W Bentley, Peter G Bishop, Meine van der Meulen

Details:
In Proceedings SAFECOMP 2004, 21-24 September 2004, Potsdam, Germany, pp. 60-71

Brief summary:
The theory developed by Eckhardt and Lee (and later extended by Littlewood and Miller) utilises the concept of a "difficulty function" to estimate the expected gain in reliability of fault tolerant architectures based on diverse programs. The "difficulty function" is the likelihood that a randomly chosen program will fail for any given input value. To date this has been an abstract concept that explains why dependent failures are likely to occur. This paper presents an empirical measurement of the difficulty function based on an analysis of over six thousand program versions implemented to a common specification. The study derived a "score function" for each version. It was found that several different program versions produced identical score functions, which, when analysed, were usually found to be due to common programming faults. The score functions of the individual versions were combined to derive an approximation of the difficulty function. For this particular (relatively simple) problem specification, it was shown that the difficulty function derived from the program versions was fairly flat, and the reliability gain from using multi-version programs would be close to that expected from the independence assumption.
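
The construction can be sketched in a few lines (our illustration of an Eckhardt-and-Lee-style calculation on toy data, not the study's code or results). The difficulty of each demand is estimated as the fraction of versions failing on it, and the expected failure probability of a randomly chosen 1-out-of-2 pair is compared with the independence assumption:

    # Sketch: estimating a difficulty function from per-version scores.
    # scores[v][x] = 1 if version v fails on demand x (toy data).
    scores = [
        [0, 0, 1, 0],   # version A fails on demand 2
        [0, 0, 1, 0],   # version B: identical score function
        [0, 1, 0, 0],   # version C
        [0, 0, 0, 0],   # version D: correct everywhere
    ]
    n_versions, n_demands = len(scores), len(scores[0])

    # difficulty(x): probability a randomly chosen version fails on x
    difficulty = [sum(s[x] for s in scores) / n_versions
                  for x in range(n_demands)]

    # Uniform demand profile assumed for illustration:
    p1 = sum(difficulty) / n_demands                 # single version
    p2 = sum(d * d for d in difficulty) / n_demands  # random pair
    print(p1, p2, p1 * p1)  # here p2 > p1**2 (dependent failures); a
                            # flat difficulty function pushes p2 towards
                            # the independence value p1**2
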
Download


The future of goal-based assurance cases
Authors:
P.G. Bishop, Robin Bloomfield and Sofia Guerra

Details:
In Proceedings of Workshop on Assurance Cases. Supplemental Volume of the 2004 International Conference on Dependable Systems and Networks, pp. 390-395, Florence, Italy, June 2004.

Brief summary:
Most regulations and guidelines for critical systems require a documented case that the system will meet its critical requirements, which we call an assurance case. Increasingly, the case is made using a goal-based approach, where claims are made (or goals are set) about the system and arguments and evidence are presented to support those claims. In this paper we describe Adelard's approach to safety cases in particular, and assurance cases more generally, and discuss some possible future directions to improve frameworks for goal-based assurance cases.
Download


Estimating PLC logic program reliability
Authors:
P.G. Bishop

Details:
Safety Critical Systems Symposium, Birmingham, 17-19 February 2004

Brief summary:
This paper applies earlier theoretical work to an industrial PLC logic example. This study required extensions to the previous theory to estimate the number of residual logic faults (N), and we show that the worst case bound theory is applicable.
Download


Using a Log-normal Failure Rate Distribution for Worst Case Bound Reliability Prediction
Authors:
P.G. Bishop, R.E. Bloomfield

Details:
In Proceedings of the Fourteenth International Symposium on Software Reliability Engineering (ISSRE '03), pp. 237-245, 17-20 November, Denver, Colorado, USA, 2003, © IEEE

Brief summary:
Prior research has suggested that the failure rates of faults follow a log-normal distribution. We propose a specific model where distributions close to a log-normal arise naturally from the program structure. The log-normal distribution presents a problem when used in reliability growth models as it is not mathematically tractable; however, we demonstrate that a worst case bound can be estimated that is less pessimistic than our earlier worst case bound theory.
Download


MC/DC based estimation and detection of residual faults in PLC logic networks
Authors:
P.G. Bishop

Details:
In Supplementary Proceedings of the Fourteenth International Symposium on Software Reliability Engineering (ISSRE '03), Fast Abstracts, pp. 297-298, 17-20 November, Denver, Colorado, USA, 2003, © IEEE

Brief summary:
Coverage measurement has previously been used to estimate residual faults in conventional program code. The basic idea is that the relationship between code covered and faults found is nearly linear, so it is possible to estimate the number of residual faults from the proportion of uncovered code. In this paper we apply the same concept to PLC logic networks rather than conventional program code, combined with a random test strategy designed to maximize coverage growth. This proved to be very efficient in detecting the known faults in an industrial logic example.
Download


Software Criticality Analysis of COTS/SOUP
Authors:
Peter Bishop, Robin Bloomfield, Tim Clement, Sofia Guerra

Details:
Reliability Engineering and System Safety 81 (2003) 291-301.

Brief summary:
This paper describes the Software Criticality Analysis (SCA) approach that was developed to support the justification of commercial off-the-shelf software (COTS) used in a safety-related system. The primary objective of SCA is to assess the importance to safety of the software components within the COTS and to show there is segregation between software components with different safety importance. The approach taken was a combination of Hazops based on design documents and on a detailed analysis of the actual code (100kloc). Considerable effort was spent on validation and ensuring the conservative nature of the results. The results from reverse engineering from the code showed that results based only on architecture and design documents would have been misleading.
Download


Integrity Static Analysis of COTS/SOUP
Authors:
P.G. Bishop, R.E. Bloomfield, T.P. Clement, A.S.L. Guerra and C.C.M. Jones

Details:
In Proceedings SAFECOMP 2003, pp. 63-76, 21-25 Sep, Edinburgh, UK, 2003, (c) Springer Verlag

Brief summary:
This paper describes the integrity static analysis approach developed to support the justification of commercial off-the-shelf software (COTS) used in a safety-related system. The static analysis was part of an overall software qualification programme, which also included the work reported in our paper presented at Safecomp 2002. The analysis addressed two main aspects: the internal integrity of the code (especially for the more critical functions), and the intra-component integrity, checking for covert channels. The analysis process was supported by an aggregation of tools, combined and engineered to support the checks done and to scale as necessary. Integrity static analysis proved feasible for industrial scale software and did not require unreasonable resources, and we provide data that illustrates its contribution to the software qualification programme.
Download


Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
Authors:
P.G. Bishop, R.E. Bloomfield

Details:
In Proceedings of the Thirteenth International Symposium on Software Reliability Engineering (ISSRE '02), November 12-15, Annapolis, Maryland, USA, 2002, © IEEE

Brief summary:
In this paper we extend an earlier worst case bound reliability theory to derive a worst case reliability function R(t), which gives the worst case probability of surviving a further time t given an estimate of residual defects in the software and a prior test time T. The earlier theory and its extension are presented and the paper also considers the case where there is a low probability of any defect existing in the program. The implications of the theory are discussed and compared with alternative reliability models.
Download


Software Criticality Analysis of COTS/SOUP
Authors:
Peter Bishop, Robin Bloomfield, Tim Clement, Sofia Guerra

Details:
In Proceedings SAFECOMP 2002, pp. 198-211, 10-13 Sep, Catania, Italy, 2002, (c) Springer Verlag

Brief summary:
This paper describes the Software Criticality Analysis (SCA) approach that was developed to support the justification of commercial off-the-shelf software (COTS) used in a safety-related system. The primary objective of SCA is to assess the importance to safety of the software components within the COTS and to show there is segregation between software components with different safety importance. The approach taken was a combination of Hazops based on design documents and on a detailed analysis of the actual code (100kloc). Considerable effort was spent on validation and ensuring the conservative nature of the results. The results from reverse engineering from the code showed that results based only on architecture and design documents would have been misleading.
Download


Estimating Residual Faults from Code Coverage
Authors:
P.G. Bishop

Details:
In Proceedings SAFECOMP 2002, 10-13 September, Catania, Italy, 2002, (c) Springer Verlag

Brief summary:
Many reliability prediction techniques require an estimate for the number of residual faults. In this paper, a new theory is developed for using test coverage to estimate the number of residual faults. This theory is applied to a specific example with known faults and the results agree well with the theory. The theory is used to justify the use of linear extrapolation to estimate residual faults. It is also shown that it is important to establish the amount of unreachable code in order to make a realistic residual fault estimate.
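
The linear-extrapolation idea reduces to one line of arithmetic (a simplified sketch of the idea, not the paper's full theory; the figures are invented). If faults found grow roughly linearly with code covered, the uncovered but reachable code hides faults at about the same density:

    # Simplified linear extrapolation (not the paper's full theory).
    def residual_fault_estimate(faults_found, covered_loc, total_loc,
                                unreachable_loc):
        # Apply the fault density observed in covered code to the
        # uncovered but *reachable* remainder; unreachable code must be
        # excluded or the estimate is unrealistically high.
        reachable = total_loc - unreachable_loc
        density = faults_found / covered_loc
        return density * (reachable - covered_loc)

    # e.g. 40 faults found with 8,000 of 10,000 reachable LOC covered:
    print(residual_fault_estimate(40, 8_000, 11_000, 1_000))  # -> 10.0
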
Download


Rescaling Reliability Bounds for a New Operational Profile
Authors:
P.G. Bishop

Details:
In Proceedings, International Symposium on Software Testing and Analysis (ISSTA 2002), ACM Software Engineering Notes, Vol 27 No. 4, pp 180-190, Rome, Italy, 22-24 July, 2002, (c) ACM

Brief summary:
One of the main problems with reliability testing and prediction is that the result is specific to a particular operational profile. This paper extends an earlier reliability theory for computing a worst case reliability bound. The extended theory derives a re-scaled reliability bound based on the change in execution rates of the code segments in the program. In some cases it is possible to derive a maximum failure rate bound that applies to any change in the profile. It also predicts that (in principle) a fair test profile can be derived where the reliability bounds are relatively insensitive to the operational profile. In addition the theory allows unit and module test coverage measures to be incorporated into an operational reliability bound prediction. The implications of the theory are discussed, and the theory is evaluated by applying it to two example programs with known faults.
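
The kind of rescaling involved can be illustrated as follows (our simplified sketch, not the paper's derivation; the profile numbers are invented). If each code segment's execution rate changes by a known factor, scaling the old bound by the largest factor is conservative, since each segment's contribution to the failure rate grows at most by its own factor:

    # Illustrative rescaling (simplified; not the paper's derivation).
    def rescaled_bound(old_bound, old_rates, new_rates):
        # Segments never executed under the old profile would need
        # separate treatment; here all old rates are assumed non-zero.
        ratios = [n / o for n, o in zip(new_rates, old_rates)]
        return old_bound * max(ratios)

    # e.g. one segment executed three times more often than before:
    print(rescaled_bound(1e-4, old_rates=[0.5, 0.3, 0.2],
                         new_rates=[0.4, 0.3, 0.6]))  # -> 3e-4
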
Download


Learning from incidents involving E/E/PE systems
Authors:
P.G. Bishop, R.E. Bloomfield, L.O. Emmet

Details:
In Proceedings of the Thirteenth International Symposium on Software Reliability Engineering (ISSRE '02), November 12-15, Annapolis, Maryland, USA, 2002, © IEEE

Brief summary:
The UK Health and Safety Executive (HSE) commissioned a research study into methods of learning from incidents involving electrical, electronic and programmable electronic systems (E/E/PES). The approach is designed to comply with the IEC 61508 standard and to be suitable for organisations at different levels of maturity.

The three reports resulting from this work can be downloaded from the HSE web site:

Part 1: Review of methods and industry practice.
HSE Contract Research Reports RR179 December 2003,
ISBN 0-7176-2787-X
http://www.hse.gov.uk/research/rrhtm/rr179.htm

Part 2: Recommended scheme.
HSE Contract Research Reports RR181, December 2003,
ISBN 0-7176-2789-6
http://www.hse.gov.uk/research/rrhtm/rr181.htm

Part 3: Guidance examples and rationale.
HSE Contract Research Reports RR182, December 2003,
ISBN 0-7176-2790-X
http://www.hse.gov.uk/research/rrhtm/rr182.htm
 


Learning from incidents involving electrical/electronic/programmable electronic safety-related systems. Project outline.
Authors:
Mark Bowell (HSE), George Cleland & Luke Emmet

Details:
Workshop paper for the Workshop on the Investigation and Reporting of Incidents and Accidents (IRIA), 17-20 July 2002, The Senate Room, University of Glasgow

Brief summary:
The UK Health and Safety Executive (HSE) has initiated a programme of work that will eventually provide guidance for those responsible on how to learn from their own incident data; a means for HSE to ensure that it has the best information attainable on incidents involving electrical/electronic/programmable electronic (E/E/PE) safety-related systems; and a stimulus to industry. HSE has contracted a consortium, led by Adelard and also involving the Glasgow (University) Accident Analysis Group (GAAG) and Blacksafe Consulting, to carry out a 7-month interactive project that will: 1) identify and evaluate existing schemes for classifying causes from incident data and generating lessons to avoid recurrence of similar incidents; 2) select and modify an existing scheme or schemes, or derive a new one, in order to create a method for analysing and classifying incident data to match the principles and activities of IEC 61508; 3) test the new method using data from a small number of real incidents; and 4) identify and present the significant strengths and weaknesses of the proposed method and how it fits in with wider issues such as incident reporting, incident investigation and process improvement. This project is part of HSE's longer-term programme to provide best advice in this field. The paper provides an outline of the project.
Download


Graphical Notations, Narratives and Persuasion: a Pliant Systems Approach to Hypertext Tool Design
Authors:
Luke Emmet & George Cleland

Details:
In Proceedings of ACM Hypertext 2002 (HT'02), June 11-15, 2002, College Park, Maryland, USA

Brief summary:
The Adelard Safety Case Editor (ASCE) is a hypertext tool for constructing and reviewing structured arguments. ASCE is used in the safety industry, and can be used in many other contexts when graphical presentation can make argument structure, inference or other dependencies explicit. ASCE supports a rich hypertext narrative mode for documenting traditional argument fragments. In this paper we document the motivation for developing the tool and describe its operation and novel features. Since usability and technology adoption issues are critical for software and hypertext tool uptake, our approach has been to develop a system that is highly usable and sufficiently "pliant" to support and integrate with a wide range of working practices and styles. We discuss some industrial application experience to date, which has informed the design and is informing future requirements. We draw from this some of the perhaps not so obvious characteristics of hypertext tools which are important for successful uptake in practical environments.
Download


Process Modelling to Support Dependability Arguments
Authors:
Robin Bloomfield and Sofia Guerra.

Details:
In Proceedings of the International Conference on Dependable Systems and Networks (DSN 2002), Washington, DC, USA, June 2002.

Brief summary:
This paper reports work to support dependability arguments about the future reliability of a product before there is direct empirical evidence. We develop a method for estimating the number of residual faults at the time of release from a "barrier model" of the development process, where in each phase faults are created or detected. These estimates can be used in a conservative theory in which a reliability bound can be obtained or can be used to support arguments of fault freeness. We present the work done to demonstrate that the model can be applied in practice. A company that develops safety-critical systems provided access to two projects as well as data over a wide range of past projects. The software development process as enacted was determined and we developed a number of probabilistic process models calibrated with generic data from the literature and from the company projects. The predictive power of the various models was compared.
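
The barrier model's arithmetic is simple to sketch (illustrative numbers, not the calibrated models from the study). A fault introduced in one phase survives to release only if it slips past every later detection barrier:

    # Barrier-model sketch (invented numbers, not the study's data).
    # phases[i] = (faults introduced in phase i, detection probability
    # of the barrier at the end of that phase).
    phases = [(20, 0.6), (15, 0.7), (10, 0.5)]   # spec, design, code

    residual = 0.0
    for i, (introduced, _) in enumerate(phases):
        survive = 1.0
        for _, detect in phases[i:]:
            survive *= (1.0 - detect)          # slips past this barrier
        residual += introduced * survive
    print(residual)  # 20*0.4*0.3*0.5 + 15*0.3*0.5 + 10*0.5 = 8.45
                     # expected residual faults at release
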
Download


Use of SOUP in safety-related applications

The UK Health and Safety Executive (HSE) recently commissioned research from Adelard into how pre-existing software components may be safely used in safety-related programmable electronic systems in a way that complies with the IEC 61508 standard. Two reports resulted from this work and are now published on the HSE web site:

The first report summarises the evidence that is likely to be available in practice relating to a software component to assist in assessing the safety integrity of a safety function that depends on that component.

The second report considers how the available evidence can best be used within the framework of the IEC 61508 safety lifecycle to support an argument for the safety integrity achieved by a safety function.
 


The Practicalities of Goal-Based Safety Regulation
Authors:
J Penny, A Eaton (CAA SRG), PG Bishop, RE Bloomfield (Adelard)

Details:
Aspects of Safety Management: Proceedings of the Ninth Safety-Critical Systems Symposium, Bristol, UK, 6-8 February 2001, Felix Redmill and Tom Anderson (eds.), London; New York: Springer, 2001, ISBN 1-85233-411-8, pp. 35-48

Brief summary:
"Goal-based regulation" does not specify the means of achieving compliance but sets goals that allow alternative ways of achieving compliance, e.g. "People shall be prevented from falling over the edge of the cliff". In "prescriptive regulation" the specific means of achieving compliance is mandated, e.g. "You shall install a 1 meter high rail at the edge of the cliff". There is an increasing tendency to adopt a goal-based approach to safety regulation, and there are good technical and commercial reasons for believing this approach is preferable to more prescriptive regulation. It is however important to address the practical problems associated with goal-based regulation in order for it to be applied effectively. This paper discusses the motivation for adopting a goal-based regulatory approach, and then illustrates the implementation by describing SW01 which forms part of the CAP 670 regulations for ground-based air traffic services (ATS). The potential barriers to the implementation of such standards together are discussed, together with methods for addressing such barriers.
Download


The REVERE project: experiments with the application of probabilistic NLP to systems engineering
Authors:
Paul Rayson, Luke Emmet, Roger Garside and Pete Sawyer, 2000

Details:
In Bouzeghoub, M., Kedad, Z., and Metais, E. (eds.), Natural Language Processing and Information Systems, 5th International Conference on Applications of Natural Language to Information Systems (NLDB'2000), Versailles, France, June 2000, revised papers, LNCS 1959, Springer-Verlag, Berlin Heidelberg, pp. 288-300, ISBN 3-540-41943-8.

Brief summary:
Despite natural language's well-documented shortcomings as a medium for precise technical description, its use in software-intensive systems engineering remains inescapable. This poses many problems for engineers who must derive problem understanding and synthesise precise solution descriptions from free text. This is true both for the largely unstructured textual descriptions from which system requirements are derived, and for more formal documents, such as standards, which impose requirements on system development processes. This paper describes experiments that we have carried out in the REVERE project to investigate the use of probabilistic natural language processing techniques to provide systems engineering support.
Download


The Development of a Commercial 'Shrink-Wrapped Application' to Safety Integrity Level 2: The DUST-EXPERT™ Story
Authors:
Tim Clement, Ian Cottam, Peter Froome and Claire Jones, 1999

Details:
Safecomp'99, Toulouse, France, Sept 1999. In Lecture Notes in Computer Science 1698, Springer, 1999, ISBN 3-540-66488-2, © Springer Verlag

Brief summary:
We report on some of the development issues of a commercial "shrink-wrapped application" - DUST-EXPERT™ - that is of particular interest to the safety and software engineering community. Amongst other things, the following are reported on and discussed: the use of formal methods; advisory systems as safety related systems; safety integrity levels and the general construction of DUST-EXPERT's safety case; statistical testing checked by an "oracle" derived from the formal specification; and our achieved productivity and error density.
Download


Requirements for a Guide on the Development of Virtual Instruments
Authors:
Luke Emmet and Peter Froome

Details:
In Proceedings NMC 99: National Measurement Conference 99, Brighton, UK. © Adelard 1999

Brief summary:
Adelard is producing a good-practice guide and training course on the development of virtual instruments as part of the DTI's Software Support for Metrology programme. This paper describes our requirements capture process and presents some of the principal issues that are emerging.
Download


A Methodology for Safety Case Development
Authors:
P G Bishop and R E Bloomfield, 1998

Details:
Safety-critical Systems Symposium, Birmingham, UK, Feb 1998, © Adelard

Brief summary:
A safety case is a requirement in many safety standards for computer systems and it is important that an adequate safety case is produced. In regulated industries such as the nuclear industry, the need to demonstrate safety to a regulator can be a major commercial risk. This paper outlines a safety case methodology that seeks to minimise safety risks and commercial risks by constructing a demonstrable safety case. The safety case ideas presented here were initially developed in European and UK research programmes and have subsequently been applied in industry. To implement the safety case we advocate the integration of safety case development into the design process so that the costs and risks of the associated safety case can be included in the design trade-offs. We propose a layered structure for the safety case that allows the safety case to evolve over time and helps to establish the safety requirements at each level. For large projects with sub-contractors, this "top-down" safety case approach helps to identify the subsystem requirements, and the subsystem safety case can be made an explicit contractual requirement to be delivered by the sub-contractor.
Download


The Formal Development of a Windows Interface
Authors:
T. Clement, 1998

Details:
3rd Northern Formal Methods Workshop, September 1998, Ilkley, UK, © Springer Verlag

Brief summary:
This paper describes an approach to the use of the formal method VDM in the design and implementation of Microsoft Windows™ interfaces. This approach evolved during the development of Dust-Expert™, a Windows-based system for providing design advice on the prevention and control of dust explosions, developed for the Health and Safety Executive (HSE). The approach we have adopted is deliberately conservative: we have aimed to see how we can take guidance in the design of the system from the standard Vienna Development Method rather than inventing new language constructs or new proof obligations. One advantage of this is that we can continue to use the tools that are available for supporting the standard language.
Download


Using Reversible Computing to Achieve Fail-safety
Authors:
P G Bishop, 1997

Details:
ISSRE 97, Nov 1997, Albuquerque, New Mexico, USA, © IEEE Computer Society Press

Brief summary:
This paper describes a fail-safe design approach that can be used to achieve a high level of fail-safety with conventional computing equipment which may contain design flaws. The method is based on the well-established concept of "reversible computing". Conventional programs destroy information and hence cannot be reversed. However it is easy to define a virtual machine that preserves sufficient intermediate information to permit reversal. Any program implemented on this virtual machine is inherently reversible. The integrity of a calculation can therefore be checked by reversing back from the output values and checking for the equivalence of intermediate values and original input values. By using different machine instructions on the forward and reverse paths, errors in any single instruction execution can be revealed. Random corruptions in data values are also detected. An assessment of the performance of the reversible computer design for a simple reactor trip application indicates that it runs about ten times slower than a conventional software implementation and requires about 20 kilobytes of additional storage. The trials also show a fail-safe bias of better than 99.998% for random data corruptions, and it is argued that failures due to systematic flaws could achieve similar levels of fail-safe bias. Potential extensions and applications of the technique are discussed.
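
A toy version of the idea (our illustrative sketch, far simpler than the virtual machine described in the paper): run a calculation forward, then recompute the inputs on a reverse path using different instructions and trip if they disagree:

    # Toy reversible-computation check (illustrative only). The forward
    # path uses + and *; the reverse path uses the different instructions
    # - and /, so an error in any single instruction execution, or a
    # random corruption of a stored value, shows up as a mismatch.
    def checked_compute(x, k, scale):
        y = (x + k) * scale          # forward computation
        x_back = y / scale - k       # reverse path, different operations
        if x_back != x:              # reversal must reproduce the input
            raise RuntimeError("fail-safe trip: result not reversible")
        return y

    print(checked_compute(7.0, k=3.0, scale=2.0))   # -> 20.0
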
Download


Viewpoints on Improving the Standards Making Process: Document Factory or Consensus Management?
Authors:
L O Emmet 1997

Details:
ISESS 97 (IEEE International Symposium on Software Engineering Standards), Walnut Creek, California, USA, 1997

Brief summary:
Emerging standards and guidelines need to be timely and reflect the requirements of the industrial sector they are designed to support. However, the delay between the identification of a need for a standard and its eventual release is often too long, and there is a need for increased understanding of the sources of delay and deadlock within the standards process. In this paper we describe an application of PERE (Process Evaluation in Requirements Engineering) to the standards process. PERE provides an integrated process analysis that identifies improvement opportunities by considering process weaknesses and protections from both mechanistic and human factors viewpoints. The resulting analysis identified both classical resource allocation problems and also specific problems concerning the construction and management of consensus within a typical standards making body. A number of process improvement opportunities are identified that could be implemented to improve the standards process. We conclude that consensus problems are the real barrier to timely standards production. Ironically, the present trend for more distributed working and electronic support (via email etc.) may make the document factory aspect of standards production more efficient at the expense of consensus building.
Download


Data Reification Without Explicit Abstraction Functions
Authors:
T. Clement, 1996

Details:
FME'96, March 1996, Oxford, UK, © Springer Verlag

Brief summary:
Data reification in VDM normally involves the explicit positing of an abstraction function with certain properties. However, the condition for one definition to reify another only requires that a function with such properties should exist. This suggests that it may be possible to carry through a data reification without giving an explicit definition of the abstraction function at all. This paper explores this possibility and compares it with the more conventional approach.
Download


A Conservative Theory for Long-Term Reliability Growth Prediction
Authors:
P G Bishop and R E Bloomfield, 1996

Details:
ISSRE 96, Oct 1996, White Plains, NY, USA (see also IEEE Trans. Reliability, Dec 1996), © IEEE Computer Society Press

Brief summary:
While existing reliability growth theories employ a wide range of underlying models, the basic strategy is the same: to extrapolate future reliability from past failures. This approach works reasonably successfully over the short term but lacks predictive power over the long term (i.e. for usage times which are orders of magnitude greater than the current usage time). This paper describes a different approach to reliability growth modelling which should enable conservative long term predictions to be made. Using relatively standard assumptions it is shown that the expected value of the failure rate after a usage time T has an upper bound of N/(eT), where N is the initial number of faults and e is the exponential constant. This is conservative since it places a worst case bound on the reliability rather than making a best estimate. It is shown that less pessimistic results can be obtained if additional assumptions are made about the distribution of failure rates over the N faults. We also show that the predictions might be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long term software reliability growth predictions based solely on prior estimates of the number of residual faults (e.g. using the program size and other software development metrics). Some empirical evaluations of the theory have been made using a range of industrial and experimental reliability data and the results appear to agree with the predicted bound.
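
The headline bound is easy to apply (the formula N/(eT) is from the paper; the numbers below are invented):

    import math

    # Worst-case expected failure rate bound N/(e*T): N initial faults,
    # T hours of usage to date, independent of the (unknown) individual
    # fault failure rates.
    N, T = 10, 10_000
    print(N / (math.e * T))   # ~3.7e-4 failures/hour
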
Download


PERE: Evaluation and Improvement of Dependable Processes
Authors:
Robin Bloomfield, John Bowers, Luke Emmet, Stephen Viller, 1996

Details:
Safecomp 96, Vienna, Oct 1996, © Springer Verlag

Brief summary:
In the development of systems that have to be dependable, weaknesses in the requirements engineering (RE) process are highly undesirable. Such weaknesses may either introduce undetected system weaknesses, or otherwise significant costs may arise in their correction later in the development process. Typically, the RE process contains a number of individual and group activities and thus is particularly subject to weaknesses arising from human factors. Our work has concerned the development of PERE (Process Evaluation in Requirements Engineering), which is a structured method for analysing processes for weaknesses and proposing process improvements against them. PERE combines two complementary viewpoints within its process evaluation approach. Firstly, a classical engineering analysis is used for process modelling and generic process weakness identification. This initial analysis is fed into the second analysis phase, in which those process components that are primarily composed of human activity, their interconnections and organisational context are subject to a systematic human factors analysis. In this paper we briefly describe PERE and provide examples of the application experience to date.
Download


Software Fault Tolerance by Design Diversity
Authors:
P G Bishop, 1995

Details:
Software Fault Tolerance (ed. M. Lyu), Wiley, USA, 1995, © Wiley Press

Brief summary:
N-version programming is vulnerable to common faults. It was thought that the primary source of common faults was ambiguities and omissions in the specification, but the Knight and Leveson experiment showed that failure independence of design faults cannot be assumed. This result is backed up by later experiments and qualitative evidence from other experiments. In addition, an "error masking" mechanism is described that will cause failure dependency in almost all programs. This catalogue of problems may paint too gloomy a picture of the potential for N-version programming, because back-to-back testing can certainly help to eliminate design faults, and failure dependency only arises if a majority of versions are faulty. For small applications developed with good quality controls, the probability of having multiple design faults can be quite low, so N-version programming can be a useful safeguard against residual design faults.
Download


The SHIP Safety Case - A Combination of System and Software Methods
Authors:
P G Bishop and R E Bloomfield, 1995

Details:
SRSS95, Proc. 14th IFAC Conf. on Safety and Reliability of Software-based Systems, Brugge, Belgium, 12-15 September 1995

Brief summary:
n/a
 


The SHIP Safety Case
Authors:
P G Bishop and R E Bloomfield, 1995

Details:
SafeComp 95, Proc. 14th IFAC Conf. on Computer Safety, Reliability and Security (ed. G. Rabe), Belgirate, Italy, 11-13 October 1995, Springer, ISBN 3-540-19962-4, © Adelard

Brief summary:
This paper presents a safety case approach to the justification of safety-related systems. It combines methods used for handling software design faults with approaches used for hazardous plant. The general structure of the safety argument is presented together with the underlying models for system failure that can be used as the basis for quantified reliability estimates. The approach is illustrated using plant and computer based examples.
Download


The Variation of Software Survival Times for Different Operational Input Profiles
Authors:
Bishop, P.G., 1993

Details:
FTCS-23, Toulouse, June 22-24, 1993, IEEE Computer Society Press, ISBN 0-8186-3680-7, © IEEE Computer Society Press

Brief summary:
This paper provides experimental and theoretical evidence for the existence of contiguous failure regions in the program input space ("blob" defects). For real-time systems where successive input values tend to be similar, blob defects can have a major impact on the software survival time because the failure probability is not constant. For example, with a "random walk" input sequence, the probability of failure decreases as the time from the last failure increases. It is shown that the key factors affecting the survival time are the input "trajectory", the rate of change of the input values and the "surface area" of the defect (rather than its volume). It is shown that large defects can exhibit very long mean times to failure when the rate of change of input values is decreased.
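
The effect is easy to reproduce in a toy simulation (our sketch, not the paper's experiment; the defect location and step sizes are invented). A contiguous failure region in a 1-D input space is driven by a bounded random walk, and survival time grows as the step size (the rate of change of the input) shrinks:

    import random

    # Toy "blob defect" (illustrative): the failure region is the
    # contiguous interval [0.70, 0.72] of a 1-D input space; the input
    # follows a bounded random walk starting at 0. Smaller steps give
    # longer survival, since failure depends on reaching the blob's
    # surface, not on its volume.
    def survival_time(step, seed=0, limit=10_000_000):
        rng, x = random.Random(seed), 0.0
        for t in range(1, limit):
            x = min(1.0, max(0.0, x + rng.uniform(-step, step)))
            if 0.70 <= x <= 0.72:
                return t
        return limit

    for step in (0.1, 0.01, 0.001):
        print(step, survival_time(step))   # survival time rises sharply
                                           # as the step size falls
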
Download


Stepwise Development and Verification of a Boiler System Specification.
Authors:
Bishop, P.G., Bruns, G., Anderson, S.O., 1993

Details:
International Workshop on the Design and Review of Software Controlled Safety-related Systems, National Research Council, Ottawa, Canada, June 28-29, 1993. © Adelard

Brief summary:
In attempting to demonstrate the safety of the Generic Boiler System, two main problems are faced. First, there is a wide range of possible failures that can occur. For example, the physical devices themselves can fail, sensors can fail, and sensed values can be delayed or lost in transmission. Taking careful account of all possible failures is difficult. A second problem, common to all safety-critical systems, is that absolute safety cannot be shown. One can only hope to demonstrate partial or probable safety. However, estimates of the probability of safety are hard to calculate, and it is hard to know whether one can place much confidence in them. The approach demonstrated here addresses both of these issues. Our report has two parts. In Part I, the technique of step-wise elaboration of the boiler controller is demonstrated. In Part II, verification of safety and failure properties is shown for a boiler system model developed at a late step of elaboration.
Download