This 46-page set of class notes was uploaded by Betty Kertzmann on Tuesday, September 22, 2015. The notes belong to CS 530 at Colorado State University, taught by Yashwant Malaiya in Fall.
Wholistic Engineering for Software Reliability
ISSRE 2000 Tutorial
Yashwant K. Malaiya, Colorado State University
malaiya@cs.colostate.edu, http://www.cs.colostate.edu/testing

Outline
- Why it's time
- Demarcating, measuring, counting: definitions
- Science & engineering of reliability growth
- Those pesky residual defects
- Components & systems
- The toolbox

Why It's Time: Emergence of SRE
- Craft: incremental, intuitive refinement.
- Science: why it is so (observe, hypothesize, assess accuracy).
- Engineering: how to get what we want (approximate, integrate, evaluate).
- Are we ready to engineer software reliability?

Why It's Time
- We have enough data on different aspects of reliability to form reasonable hypotheses, and we know the limitations of those hypotheses.
- We will always need more data and better hypotheses.
- We have enough techniques & tools to start engineering.

Why It's Needed Now
- Reliability expectations are growing fast.
- Large projects, little time; quick changes in development environments.
- Reliance on a single technique is not enough.
- Pioneering work has already been done.

Learning from Hardware Reliability
- Well-known, well-established methods, now standard practice.
- Used by government and industrial organizations worldwide.
- Considered a "hard" science compared with software reliability.

Hardware Reliability: The Status (1)
- Earliest tube computers: MTTF comparable to some computation times.
- 1956: RCA TR-1100 component failure-rate models.
- 1959: MIL-HDBK-217A, a common failure rate of 0.4x10^-6 for all ICs in all cases; revised about every 7 years.

Hardware Reliability: The Status (2)
- 1995: final update, MIL-HDBK-217F Notice 2; still widely used.
- Predicted failure rates are often higher than observed by a factor of 2 to 4, occasionally by an order of magnitude.
- The constant failure rate, the bathtub curve, and the Arrhenius relationship have all been questioned.
Hardware Reliability: The Status (3)
Why use hardware reliability prediction?
- Feasibility study (initial design).
- Compare design alternatives: reliability along with performance and cost.
- Find likely problem spots: high contributors to the product failure rate.
- Track reliability improvements.

Hardware vs. Software Reliability Models
- Hardware: model parameters come from past experience with similar units.
- Software: early in a project, parameters come from past experience with similar units; later, from the same unit itself.

Next: demarcating, measuring, counting (definitions)

Basic Definitions
- Defect: requires a corrective action.
- Defect density: defects per 1000 non-comment source lines.
- Failure intensity: the rate at which failures are encountered during execution.
- MTTF: mean time to failure, the inverse of the failure intensity.

Basic Definitions (2)
- Reliability: R(t) = P(no failures in time [0, t]).
- Transaction reliability: the probability that a single transaction will be executed correctly.
- Time may be measured in CPU time or in some measure of testing effort.

Next: science & engineering of reliability growth

Static and Dynamic Modeling
Reliability at release depends on:
- the initial number of defects (parameter),
- the effectiveness of the defect-removal process (parameter),
- the operating environment.
Static modeling estimates the parameters before testing begins, using static data such as software size. Dynamic modeling estimates the parameters during testing, recording when defects are found; such models may be time-based or coverage-based.

What Factors Control Defect Density?
Needed for:
- static estimation of the initial defect density,
- finding room for process improvement.
Static defect density models:
- Additive (e.g., Takahashi-Kamayachi): D = a1*f1 + a2*f2 + a3*f3 + ...
- Multiplicative (e.g., MIL-HDBK-217, COCOMO, RADC): D = C*F1*F2*F3*...

A Static Defect Density Model (Malaiya-Denton 93, 97)
D = C * Fph * Fpt * Fm * Fs * Frq
with factors for phase, programming team, process maturity, structure, and requirement volatility.
- C is a constant of proportionality based on prior data.
- The default value of each factor (submodel) is 1.
- Calibration is based on past, similar projects.

Submodel: Phase Factor Fph
Based on Musa, Gaffney, Piwowarski et al. Multiplier at the beginning of the phase:
- Unit testing: 4
- Subsystem testing: 2.5
- System testing: 1 (default)
- Operation: 0.35

Submodel: Programming Team Factor Fpt
Based on Takahashi-Kamayachi: defect density declines by about 14% per year of team experience, up to seven years. Multiplier by the team's average skill level:
- High: 0.4
- Average: 1 (default)
- Low: 2.5

Submodel: Process Maturity Factor Fm
Based on Jones, Keene, and Motorola data. Multiplier by SEI CMM level:
- Level 1: 1.5
- Level 2: 1 (default)
- Level 3: 0.4
- Level 4: 0.1
- Level 5: 0.05

Submodel: Structure Factor Fs
- Assembly-code fraction, assuming assembly has 40% more defects: Fs = 1 + 0.4 x (fraction in assembly).
- Module size: research reported at ISSRE 2000.
- Complexity: complex modules are more fault-prone, but there may be compensating factors.

Submodel: Requirement Volatility Factor Frq
- Depends on the degree of requirement changes and when they occur.
- The impact is greatest when changes occur near the end of testing (Malaiya & Denton, ISSRE 99).

Using the Defect Density Model
- Calibrate the submodels before use, using data from a project as similar as possible.
- The constant C can range between 6 and 20 (Musa).
- Static models are very valuable, but high accuracy is not expected; they are useful when dynamic test data is not yet significant.
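The multiplicative model above can be sketched in a few lines. This is a minimal illustration, not a calibrated tool: the function name and table names are invented here, and the multiplier tables are those given on the preceding slides.

```python
# Sketch of the static defect density model D = C*Fph*Fpt*Fm*Fs*Frq.
# Table values are from the slides; all names are illustrative.

PHASE = {"unit": 4.0, "subsystem": 2.5, "system": 1.0, "operation": 0.35}
TEAM_SKILL = {"high": 0.4, "average": 1.0, "low": 2.5}
CMM_LEVEL = {1: 1.5, 2: 1.0, 3: 0.4, 4: 0.1, 5: 0.05}

def defect_density(C, phase="system", skill="average", cmm=2,
                   asm_fraction=0.0, f_rq=1.0):
    """Estimated defects per KLOC; every submodel defaults to 1."""
    f_ph = PHASE[phase]
    f_pt = TEAM_SKILL[skill]
    f_m = CMM_LEVEL[cmm]
    f_s = 1.0 + 0.4 * asm_fraction  # assembly assumed 40% more defect-prone
    return C * f_ph * f_pt * f_m * f_s * f_rq

# With every factor at its default, D is just the calibration constant C.
print(defect_density(14.0))  # -> 14.0
```

With non-default factors the multipliers simply stack, which is what makes calibration against a past project straightforward.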
Static Model Example
For an organization, C is between 12 and 16; the team skill and SEI maturity level (Level 2) are average; about 20% of the code is in assembly; other factors are average or the same as in past projects. Estimate the defect density at the beginning of the subsystem test phase:
- Upper estimate: 16 x 2.5 x 1 x 1 x (1 + 0.4x0.2) x 1 = 43.2 defects/KSLOC
- Lower estimate: 12 x 2.5 x 1 x 1 x (1 + 0.4x0.2) x 1 = 32.4 defects/KSLOC

Test Methodologies
- Static (review, inspection) vs. dynamic (execution).
- Test views: black-box (functional; input/output description), white-box (structural; implementation used), or a combination (white after black).
- Test generation: partitioning, random, antirandom, deterministic, input mix.

Input Mix: Operational Profile
We need to find bugs fast and to estimate the operational failure intensity. Best mix for efficient bug-finding (Li & Malaiya):
- Quick & limited testing: use the operational profile.
- High reliability: probe the input space evenly; the operational profile will not exercise rare and special cases.
- In general: use a combination. For acceptance testing, the operational profile is needed.

Operational Profile
- Profile: a set of disjoint actions (operations) that a program may perform, and their probabilities of occurrence.
- Operational profile: the probabilities that occur in actual operation, expressed either as begin-to-end operations with their probabilities, or as Markov states with transition probabilities.
- There may be multiple operational profiles, and highly accurate determination of the profile may not be needed.

Operational Profile Example
Phone-follower call types (Musa):
- A. Voice call: 0.74
- B. FAX call: 0.15
- C. New number entry: 0.10
- D. Database audit: 0.009
- E. Add subscriber: 0.0005
- F. Delete subscriber: 0.000499
- G. Hardware failure recovery: 0.000001
Voice calls subdivide further: A1 no pager, answer: 0.18; A2 no pager, no answer: 0.17; A3 pager, voice answer: 0.17; A4 pager, answer on page: 0.12; A5 pager, no answer on page: 0.10.
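Profile-driven test selection amounts to weighted random sampling over the operations. A minimal sketch using the phone-follower probabilities above (the function name is invented here):

```python
import random

# Phone-follower operational profile from the example above (Musa).
profile = {
    "voice call": 0.74, "FAX call": 0.15, "new number entry": 0.10,
    "database audit": 0.009, "add subscriber": 0.0005,
    "delete subscriber": 0.000499, "hardware failure recovery": 0.000001,
}

def draw_operations(n, rng=random):
    """Pick n operations to test, weighted by operational probability."""
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

# A proper profile's probabilities sum to 1.
assert abs(sum(profile.values()) - 1.0) < 1e-12
print(draw_operations(5))
```

Note how rarely "hardware failure recovery" would ever be drawn; this is exactly why the slides recommend supplementing profile-driven testing with even probing of the input space when high reliability is required.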
Next: science & engineering of reliability growth

Modeling Reliability Growth
- Testing can be 60% or more of the cost; careful planning is needed to release by the target date.
- Decisions are made using a software reliability growth model (SRGM), obtained analytically from assumptions or from experimental observation.
- A model describes a real process only approximately; ideally it should have good predictive capability and a reasonable interpretation.

A Basic SRGM
- Testing time t: CPU execution time, man-hours, etc.
- Total expected faults detected by time t: mu(t).
- Failure intensity: lambda(t) = d mu(t)/dt.
- Defects present at time t: N(t), with lambda(t) = beta1*N(t), so dN(t)/dt = -beta1*N(t).

A Basic SRGM (cont.)
Parameter beta1 is given by beta1 = K / (S*Q/r) = K*r/(S*Q), where
- S: number of source instructions,
- Q: object instructions per source instruction,
- r: object-instruction execution rate of the computer,
- K: fault-exposure ratio, in the range 1x10^-7 to 10x10^-7 when t is in CPU seconds.

A Basic SRGM (cont.)
Solving, we get
- N(t) = N0 * e^(-beta1*t)
- mu(t) = beta0 * (1 - e^(-beta1*t))
- lambda(t) = beta0 * beta1 * e^(-beta1*t)
where beta0 = N0 is the total number of faults that would eventually be detected. This assumes no new defects are generated during debugging. This is the exponential model (Jelinski-Moranda 71, Shooman 71, Goel-Okumoto 79, Musa 75, 80).

SRGMs: Log-Poisson
Many SRGMs have been used. The logarithmic (Musa-Okumoto) model has been found to have good predictive capability:
- mu(t) = beta0 * ln(1 + beta1*t)
- lambda(t) = beta0 * beta1 / (1 + beta1*t)
It is applicable as long as mu(t) < N0, which in practice is almost always satisfied. Its parameters beta0 and beta1 do not have a simple interpretation; a useful interpretation is given by Malaiya and Denton.

Bias in SRGMs
[Figure: average error and average bias of several SRGMs; Malaiya, Karunanithi, Verma 90.]

SRGM: Preliminary Planning Example
- Initial defect density estimated at 25 defects/KLOC; 10,000 lines of C code; the computer executes 70 million object instructions per second; fault-exposure ratio K estimated to be 4x10^-7.
- Estimate the testing time needed to reach a defect density of 2.5 defects/KLOC.
- Procedure: find beta0 and beta1, then find the testing time t1.

SRGM: Preliminary Planning (cont.)
From the exponential model:
- beta0 = N0 = 25 x 10 = 250 defects
- beta1 = K*r/(S*Q) = (4x10^-7 x 70x10^6)/(10,000 x 2.5) = 1.12x10^-3 per sec

SRGM: Preliminary Planning (cont.)
- N(t1) = 2.5 x 10 = 25, so 25 = 250 * exp(-1.12x10^-3 * t1), giving t1 = ln(10)/(1.12x10^-3) = 2056 sec of CPU time.
- lambda(t1) = 250 x 1.12x10^-3 x e^(-1.12x10^-3 * t1) = 0.028 failures/sec.

SRGM: Preliminary Planning (cont.)
- For the same environment, beta1*S is constant: a prior 5-KLOC project had beta1 = 2x10^-3 per sec, so a new 15-KLOC project can be estimated to have beta1 = 2x10^-3 / 3 = 0.66x10^-3 per sec.
- The value of the fault-exposure ratio K may depend on the initial defect density and the testing strategy (Li & Malaiya 93).

SRGM During Testing
Collect and preprocess data:
- To extract the long-term trend, the data needs to be smoothed.
- Use grouped data: test-duration intervals and the average failure intensity in each interval.
Select a model and determine its parameters:
- Use past experience with projects that used the same process; the exponential and logarithmic models are often good choices.
- A model that fits early data well may not have the best predictive capability.
- Parameters are estimated by least squares or maximum likelihood, and used once the values are stable and reasonable.

SRGM During Testing (cont.)
Compute how much more testing is needed:
- Use the fitted model to project the additional testing needed to reach the desired failure intensity or estimated defect density.
- Recalibrating a model can improve projection accuracy.
- Interval estimates can be obtained using statistical methods.

Example: SRGM with Test Data
[Table/plot: failures observed per CPU hour over roughly 15 CPU hours of testing.]
Target failure intensity: 1/hour = 2.78x10^-4 per sec.
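The preliminary-planning arithmetic above can be reproduced directly. This is a sketch using the worked example's values, not a general planning tool; variable names are illustrative.

```python
import math

# Exponential SRGM planning example: beta1 = K*r/(S*Q),
# and t1 solves N0 * exp(-beta1*t1) = N_target.
K = 4e-7             # fault-exposure ratio
r = 70e6             # object instructions executed per second
S = 10_000           # source lines (C)
Q = 2.5              # object instructions per source line
N0 = 25 * 10         # 25 defects/KLOC x 10 KLOC = 250 defects
N_target = 2.5 * 10  # 2.5 defects/KLOC target

beta1 = K * r / (S * Q)                   # per CPU second
t1 = math.log(N0 / N_target) / beta1      # CPU seconds of testing needed
lam = N0 * beta1 * math.exp(-beta1 * t1)  # failure intensity at release

print(round(beta1, 5), round(t1), round(lam, 3))  # -> 0.00112 2056 0.028
```

The three printed values match the slide's beta1 = 1.12x10^-3 per sec, t1 = 2056 sec, and lambda(t1) = 0.028 failures/sec.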
Example: SRGM with Test Data (cont.)
Fitting the logarithmic model, we get beta0 = 10.147 and beta1 = 5.22x10^-5. The stopping time tf then satisfies 2.78x10^-4 = beta0*beta1/(1 + beta1*tf), yielding tf = 5.65x10^4 sec, i.e., 15.69 hours.

Example: SRGM with Test Data (cont.)
[Figure 1: using an SRGM; fitted model vs. measured failure-intensity values over about 20 hours, with the failure-intensity target marked.]

Example: SRGM with Test Data (cont.)
Accuracy of the projection: experience with the exponential model suggests that the estimated beta0 tends to be lower than the final value and the estimated beta1 tends to be higher, so the true value of tf should be higher. Hence 15.69 hours should be used as a lower estimate.
Problems:
- The test strategy changed (a spike in failure intensity): smooth the data.
- The software under test is evolving (continuing additions): drop or adjust early data points.

Next: those pesky residual defects

Test Coverage & Defect Density
Yes, they are related:
- Defect vs. test coverage model (1994: Malaiya, Li, Bieman, Karcich, Skibbe).
- Estimation of the number of defects (1998: Li, Malaiya, Denton).

Motivation: Why Is Defect Density Important?
- It is an important measurement of reliability, often used as a release criterion.
- Typical values (defects/KLOC): about 16 to 20 at the beginning of unit testing; about 0.33 at release (frequently cited); about 0.1 for highly tested NASA software.

Modeling Defects: Time & Coverage
[Figure: defects found as a function of testing time and of test coverage.]

Coverage-Based Defect Estimation
- Coverage is an objective measure of testing, directly related to test effectiveness and independent of processor speed and testing efficiency.
- Lower defect density requires higher coverage to find more faults.
- Once we start finding faults, expect defect growth vs. coverage to be linear.
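The stopping-time calculation above generalizes: for the logarithmic model, solving lambda(tf) = target gives tf = (beta0*beta1/target - 1)/beta1. A small self-checking sketch (the parameter values are illustrative, and the function names are invented here):

```python
# Logarithmic (Musa-Okumoto) SRGM: lambda(t) = b0*b1/(1 + b1*t).
# Solving lambda(tf) = target for the stopping time gives
# tf = (b0*b1/target - 1)/b1.

def log_intensity(t, b0, b1):
    """Failure intensity of the fitted logarithmic model at time t."""
    return b0 * b1 / (1.0 + b1 * t)

def stopping_time(target, b0, b1):
    """Testing time at which the fitted failure intensity hits target."""
    return (b0 * b1 / target - 1.0) / b1

b0, b1 = 10.147, 5.22e-5  # fitted parameters (illustrative)
tf = stopping_time(2.78e-4, b0, b1)
assert abs(log_intensity(tf, b0, b1) - 2.78e-4) < 1e-12
print(tf)
```

The closed form exists only because the logarithmic intensity is monotone decreasing in t; for other SRGMs a numerical root-finder would be needed.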
Coverage Model
[Figure: estimated defects vs. coverage (0-100%); the curve is nearly flat up to a "knee" and approximately linear after it.]
- mu(C) = A0 + A1*C: a linear approximation, applicable only after the knee.
- Assumptions: logarithmic Poisson growth for both defects and coverage of testable elements.

Location of the Knee
- Based on an interpretation through the logarithmic model, the location of the knee depends on the initial defect density: lower defect densities cause the knee to occur at higher coverage.
- Parameter estimation: Malaiya and Denton, HASE 98.

Data Sets Used
- Vouk data: from an N-version programming project to create a flight controller; three data sets with 6 to 9 errors each.
- Pasquini data: from a European Space Agency C program with about 10,000 source lines; 29 of 33 known faults were uncovered.

Defects vs. Branch Coverage (Data Set: Pasquini)
[Figure: expected defects vs. branch coverage (20-96%), fitted model vs. data.]

Defects vs. P-Use Coverage (Data Set: Pasquini)
[Figure: expected defects vs. p-use coverage, fitted model vs. data.]

Estimation of Defect Density
Estimated defects at 95% coverage for the Pasquini data (28 faults found, 33 known to exist):
- Block coverage: 82% achieved, 36 expected defects
- Branch coverage: 70% achieved, 44 expected defects
- P-use coverage: 67% achieved, 48 expected defects

Defects vs. P-Use Coverage (Data Set: Vouk)
[Figure: expected defects vs. p-use coverage, fitted model vs. data.]

Estimates Are Stable (Data Set: Pasquini et al.)
[Figure: estimates across 12 cases remain stable.]

Current Methods
- Development-process-based models allow a priori estimates, but are not as accurate as methods based on test data.
- Sampling methods often assume the faults not yet found are as easy to find as the faults already found; this underestimates the number of faults.
- Exponential-model methods assume the applicability of the exponential model.
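The post-knee linear approximation mu(C) = A0 + A1*C suggests a simple estimation procedure: fit a line to the (coverage, cumulative defects) points beyond the knee and extrapolate to 100% coverage. A sketch with synthetic data (the knee location and all data points are invented for illustration):

```python
# Sketch: estimate total defects by fitting mu(C) = A0 + A1*C to
# post-knee (coverage, cumulative defects) points and extrapolating
# to C = 1.0 (100% coverage). All data below is synthetic.

def fit_line(points):
    """Ordinary least squares for y = a0 + a1*x."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

knee = 0.50  # assumed knee location; use only points beyond it
data = [(0.55, 8), (0.65, 12), (0.75, 16), (0.85, 20), (0.95, 24)]
a0, a1 = fit_line([p for p in data if p[0] > knee])
print(round(a0 + a1 * 1.0))  # estimated total defects at 100% coverage
```

Because the synthetic points lie exactly on a line, the fit is exact; with real data the slides' caveat applies: the approximation only holds after the knee, and a stricter coverage measure yields a closer estimate.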
We present the results of a comparison.

The Exponential Model (Data Set: Pasquini et al.)
[Figure: the estimate rises as new defects are found; the estimates are very close to the actual number of faults.]

Recent Confirmation of the Model
- Frankl & Iakounenko (Proc. SIGSOFT 98): 8 versions of a European Space Agency program, 10K LOC, single-fault reinsertion.
- Tom Williams (manuscript, 1999): analysis from first principles.

Observations and Conclusions
- Estimates with the new method are very stable; there is visual confirmation of earlier projections.
- Which coverage measure to use? A stricter measure will yield a closer estimate.
- Some code may be dead or unreachable; it can be found with compile- or link-time tools and may need to be taken into account.

Next: components & systems

Reliability of Multi-Component Systems
- A software system is a number of modules; individual modules are developed and tested differently, so they have different defect densities and failure rates.
- Cases: sequential execution, concurrent execution, N-version systems.

Sequential Execution
Assume one module is executed at a time. With f_i the fraction of time module i is under execution and lambda_i its failure rate, the mean system failure rate is
lambda_sys = sum over i of f_i * lambda_i

Sequential Execution (cont.)
Let T be the mean duration of a single transaction. Module i is called e_i times during T, each time executing for a duration d_i; thus f_i = e_i*d_i/T.

Sequential Execution (cont.)
System reliability:
R_sys = exp(-lambda_sys * T) = exp(-sum over i of e_i*d_i*lambda_i)
Since exp(-d_i*lambda_i) is R_i, this is R_sys = product over i of R_i^(e_i).

Concurrent Execution
Concurrently executing modules must all run without failure for the system to run. With m concurrently executing modules,
lambda_sys = sum over j of lambda_j
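The two equivalent forms of the sequential-execution result can be checked numerically. A sketch with illustrative module data (the call counts, durations, and failure rates are invented):

```python
import math

# Sequential-execution system reliability from the slides:
# lambda_sys = sum(f_i * lam_i) with f_i = e_i*d_i/T, and equivalently
# R_sys = prod(R_i ** e_i) with R_i = exp(-d_i * lam_i).

modules = [
    # (calls per transaction e_i, seconds per call d_i, failure rate lam_i)
    (3, 0.010, 1e-4),
    (1, 0.050, 5e-5),
    (10, 0.002, 2e-4),
]
T = 1.0  # mean transaction duration in seconds

lam_sys = sum(e * d * lam / T for e, d, lam in modules)
R_direct = math.exp(-lam_sys * T)
R_product = math.prod(math.exp(-d * lam) ** e for e, d, lam in modules)

assert abs(R_direct - R_product) < 1e-12  # the two forms agree
print(lam_sys, R_direct)
```

The per-module form R_i^(e_i) is the more useful one in practice, since module reliabilities can be estimated separately and then composed.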
N-Version Systems
- Used for critical applications like defense or avionics; each version is implemented and tested independently.
- A common implementation uses triplication and voting on the result.

N-Version Systems (cont.)
With majority voting over three versions of reliability R,
R_sys = 3R^2 - 2R^3
For R = 0.9, R_sys = 0.972.

N-Version Systems: Correlation
- Correlation significantly degrades fault tolerance, and significant correlation is common in N-version systems (Knight-Leveson). Is it cost-effective?

N-Version Systems: Correlation
For a 3-version system, let q3 be the probability of all three versions failing for the same input, and q2 the probability that a given pair of versions fails together. The probability of the system failing is
P_sys = q3 + 3*q2

N-Version Systems: Correlation
Example (data collected by Knight-Leveson, computations by Hatton): 3-version system, probability of a version failing for a transaction p = 0.0004. In the absence of any correlated failures,
P_sys = p^3 + 3*p^2*(1-p) = 0.0004^3 + 3 x 0.0004^2 x (1 - 0.0004) = 4.8x10^-7

N-Version Systems: Correlation
- Uncorrelated: improvement factor 0.0004/(4.8x10^-7) = 833.
- Correlated: with q3 = 2.5x10^-7 and q2 = 2.5x10^-6, P_sys = 2.5x10^-7 + 3 x 2.5x10^-6 = 7.75x10^-6; improvement factor 0.0004/(7.75x10^-6) = 51.6.
- For comparison, state-of-the-art single-version techniques can reduce defect density by a factor of 10.

Safety
- Analyze the system to identify possible conditions leading to unsafe behavior; eliminate such events or reduce their probability of occurrence.
- Safety involves only a part of the system functionality.

Fault Tree Analysis
[Figure: fault tree with a top event (the consequence) connected through gates to intermediate events and basic events A, B, C, D; the analysis is deductive (reverse logic), from consequence back to causes.]
- Cut sets: {A, C, D} and {B, C, D}.
- P_TE = P_A*P_C*P_D + P_B*P_C*P_D - P_A*P_B*P_C*P_D

Using Fault Trees
- Deterministic analysis: prove that the occurrence of an unsafe event implies a logical contradiction; feasible for small programs.
- Probabilistic analysis: compute the probability of occurrence of an unsafe event, considering software, hardware, and human factors.
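The voting and correlation formulas above are small enough to verify directly. A sketch reproducing the slide's numbers:

```python
# 3-version majority voting, from the slides.

def r_tmr(r):
    """Reliability of triplication with voting: 3R^2 - 2R^3."""
    return 3 * r**2 - 2 * r**3

def p_uncorrelated(p):
    """Failure prob. with independent versions: p^3 + 3p^2(1-p)."""
    return p**3 + 3 * p**2 * (1 - p)

def p_correlated(q3, q2):
    """Failure prob. with correlated versions: q3 + 3*q2."""
    return q3 + 3 * q2

print(r_tmr(0.9))                    # R = 0.9 gives 0.972 (slide example)
print(p_uncorrelated(0.0004))        # about 4.8e-7, improvement ~833x
print(p_correlated(2.5e-7, 2.5e-6))  # 7.75e-6, improvement ~51.6x
```

The gap between the last two outputs is the slides' point: correlation cuts the improvement factor from roughly 833 to roughly 51.6.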
Hazard Criticality Index Matrix
[Matrix: hazard frequency vs. hazard severity; each cell gives a criticality ranking.]
Hazard probability levels (MTBH = mean time to hazard, UL = unit life):
- Frequent: MTBH << UL
- Probable: MTBH < UL
- Occasional: MTBH comparable to UL
- Remote: MTBH > UL
- Improbable: MTBH >> UL
- Impossible: probability 0
Compare with MIL-STD-882D, Appendix A.

Next: the toolbox

Tools for Automating Software Reliability Engineering
- Can we eliminate debugging? Bugs would occur even with formal methods like VDM and Z (McGibbon).
- In hardware, design and test tools are now regarded as mandatory; software's dependence on tools is increasing.

Why Tools Will Be Mandatory
[Figure: reliability expectations rising steadily; defect density per KLOC declining from roughly 8 in 1970 toward 2010. Source: Poston & Sexton.]

Software Testing Tools: History
- 70s: LINT picks out all the fuzz; 74: the code instrumentor JAVS for coverage.
- 80s: capture/replay, etc.
- 92: memory-leak detectors.
- Late 90s: Y2K tools.

Manual vs. Automated Testing (QAI)
Hours per test step (manual, automated, percent improvement):
- Test plan development: 32, 40, -25
- Test case development: 262, 117, 55
- Test execution: 466, 23, 95
- Test result analysis: 117, 58, 50
- Defect tracking: 117, 23, 80
- Report creation: 96, 16, 83
- Total hours: 1090, 277, 75

Tools for All Phases
- Requirements-phase tools: requirement recorder/verifier; test-case generation.
- Programming-phase (static) tools: metrics evaluators; code checkers; inspection-based error estimation.
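The cut-set expression for the fault tree a few slides back is a direct inclusion-exclusion computation. A sketch with invented basic-event probabilities:

```python
# Top-event probability from the fault-tree cut sets {A,C,D} and {B,C,D}:
# P_TE = P_A*P_C*P_D + P_B*P_C*P_D - P_A*P_B*P_C*P_D (inclusion-exclusion
# over the two minimal cut sets, which share C and D).
# Basic-event probabilities below are illustrative only.

P = {"A": 1e-3, "B": 2e-3, "C": 5e-2, "D": 1e-2}

p_acd = P["A"] * P["C"] * P["D"]
p_bcd = P["B"] * P["C"] * P["D"]
p_both = P["A"] * P["B"] * P["C"] * P["D"]
p_top = p_acd + p_bcd - p_both

print(p_top)
```

The subtracted term keeps the event "both cut sets occur" from being double-counted; for rare events it is usually negligible, which is why the sum of cut-set probabilities is a common upper-bound approximation.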
Tools for All Phases (cont.)
Testing-phase tools:
- Capture/playback tools; memory-leak detectors; test harnesses; coverage analyzers; load/performance testers; bug trackers.
- Defect density estimation; reliability growth modeling tools; coverage-based reliability tools; fault tree analysis; Markov reliability evaluation.

[Figure: tool categories mapped to testing activities: test generation, reliability growth models, coverage models.]

Tool Costs
- Tool identification, acquisition, installation, and maintenance.
- Study of the underlying principles; familiarity with operation.
- Risk of non-use; contacting user groups and support.

References
- J. D. Musa, A. Iannino and K. Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, 1987.
- Y. K. Malaiya and P. Srimani (Eds.), Software Reliability Models, IEEE Computer Society Press, 1990.
- A. D. Carleton, R. E. Park and W. A. Florac, Practical Software Measurement, Tech. Report CMU/SEI-97-HB-003.
- P. Piwowarski, M. Ohba and J. Caruso, "Coverage Measurement Experience during Function Test," Proc. Int. Conference on Software Engineering, 1993, pp. 287-301.
- Y. K. Malaiya, N. Li, J. Bieman, R. Karcich and B. Skibbe, "The Relation between Test Coverage and Reliability," Proc. IEEE-CS Int. Symposium on Software Reliability Engineering, Nov. 1994, pp. 186-195.
- Y. K. Malaiya and J. Denton, "What Do the Software Reliability Growth Model Parameters Represent?" Proc. IEEE-CS Int. Symposium on Software Reliability Engineering (ISSRE), Nov. 1997, pp. 124-135.
- M. Takahashi and Y. Kamayachi, "An Empirical Study of a Model for Program Error Prediction," Proc. Int. Conference on Software Engineering, Aug. 1985, pp. 330-336.
- J. Musa, Software Reliability Engineering, McGraw-Hill, 1999.
- N. Li and Y. K. Malaiya, "Fault Exposure Ratio Estimation and Applications," Proc. IEEE-CS Int. Symposium on Software Reliability Engineering, Nov. 1993, pp. 372-381.
- N. Li and Y. K. Malaiya, "Enhancing Accuracy of Software Reliability Prediction," Proc. IEEE-CS Int. Symposium on Software Reliability Engineering, Nov. 1993, pp. 71-79.
- P. B. Lakey and A. M. Neufelder, System and Software Reliability Assurance Notebook, Rome Lab, FSC-RELI, 1997.
- L. Hatton, "N-Version Design Versus One Good Version," IEEE Software, Nov./Dec. 1997, pp. 71-76.
- Tom McGibbon, "An Analysis of Two Formal Methods: VDM and Z," http://www.dacs.dtic.mil, Aug. 13, 1997.
- Robert Poston, "A Guided Tour of Software Testing Tools," Aonix, March 30, 1998.
- M. R. Lyu (Ed.), Handbook of Software Reliability Engineering, McGraw-Hill, 1996.