Top Eleven Ways to Manage Technical Risk

Table of Contents

Acknowledgments
Introduction
Chapter 1. Choose an Approach
  What Is the Relationship Between Approach and Technical Risk?
  The Critical Process Approach
  The Product Work Breakdown Structure Approach
  The Integrated Process/Product Approach
Chapter 2. Assign Accountability
  What Is the Relationship Between Accountability and Technical Risk?
  Risk Management Organization
  Risk Integrated Process Teams
Chapter 3. Put Risk Management in the Contract
  What Is the Relationship Between the Contract and Technical Risk?
  The Request for Proposal
  Statement of Work/Statement of Objectives
  Source Selection
  Award Fee for Risk Management
Chapter 4. Mandate Training
  What Is the Relationship Between Training and Technical Risk?
  Defense Acquisition University Courses
  Program Training
Chapter 5. Practice Engineering Fundamentals
  What Is the Relationship Between Engineering Fundamentals and Technical Risk?
  Critical Technical Processes
Chapter 6. Understand COTS/NDI Applications
  What Is the Relationship Between COTS/NDI Applications and Technical Risk?
  Navy Experiences with COTS/NDI Applications
  COTS/NDI Product Maturity & Technology Refresh
  Managing Risk Associated with Plastic Encapsulated Devices
Chapter 7. Establish Key Software Measures
  What Is the Relationship Between Software Measures and Technical Risk?
  Measurement Selection for Tracking Risks
  Software Measures
  Implementing the Measurement Process
  Watch-Out-Fors
Chapter 8. Assess, Mitigate, Report
  What Is the Relationship Between Assess, Mitigate, Report and Technical Risk?
  Risk Assessment
    Evaluating Critical Process Variance
    Evaluating Consequence
  Risk Analysis and Mitigation
  Tracking the Risk
    Database Software
  Reporting the Risk
Chapter 9. Use Independent Assessors
  What Is the Relationship Between Independent Assessors and Technical Risk?
  Program Experience
  Tasks
Chapter 10. Stay Current on Risk Management Initiatives
  What Is the Relationship Between Risk Management Initiatives and Technical Risk?
  Quality Function Deployment
  Taguchi Techniques
  Technical Performance Measurement
  Earned Value Management
Chapter 11. Evaluate New Acquisition Policies
  What Is the Relationship Between Changes in Acquisition Policies and Technical Risk?
  Cost as an Independent Variable
    CAIV and Risk
  Environmental Risk Management
  Single Process Initiative
    Single Process Initiative and Risk
  Diminishing Manufacturing Sources & Material Shortages
  Configuration Management
Appendix A. Extracts of Risk Management Requirements in the DoD 5000 Series Documents
  DoDD 5000.1, Defense Acquisition
  DoD 5000.2-R, Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information System (MAIS) Acquisition Programs
Appendix B. Additional Sources of Information
  Introduction
  DoD Information Analysis Centers
  Manufacturing Centers of Excellence

Acknowledgments

Project Sponsor: Elliott B. Branch, Executive Director, Acquisition and Business Management, Office of the Assistant Secretary of the Navy (Research, Development & Acquisition)

Editor: Douglas O. Patterson, Technical Director, Acquisition and Business Management, Office of the Assistant Secretary of the Navy (Research, Development & Acquisition)

Technical Writers: William A. Finn, EG&G Services; Edward I. Smith, EG&G Services; Toshio Oishi, IIT
Research Institute; Eric Grothues, Naval Warfare Assessment Station

Contributors: CAPT Larrie Cable, USN, PEO(A) PMA-299; Frank Doherty, SPAWAR PMW-163; James Collis, NAVAIR 132; George Clessas, PEO(T) PMA-201; Barbara Smith, PEO(A) PMA-275; John McGarry, Naval Undersea Warfare Center; Mike Wheeler, Naval Warfare Assessment Station; Lou Simpleman, Institute for Defense Analysis; Maye A. Hardin, By Design; Maria Zabko, EG&G Services

Contributing Navy Offices: PEO(A) PMA-271; PEO(T) PMA-259; PEO(A) PMA-275; PEO(T) PMA-265; PEO(A) PMA-276; PEO(T) PMA-272; PEO(A) PMA-290; PEO(TAD); PEO(A) PMA-299; PEO(TAD) PMS-422; PEO(CU) PMA-280; PEO(USW) PMS-403; PEO(DD 21) PMS-500; PEO(USW) PMS-404; PEO(MIW) PMS-407; PEO(USW) PMS-415; PEO(SUB) PMS-401; SPAWAR PMW-163; PEO(SUB) PMS-411; SPAWAR PMW-183; PEO(SUB) PMS-450; DRPM AAAV; PEO(T) PMA-201; DRPM AEGIS PMS-400

Introduction

In recent years, risk management has been increasingly emphasized by the DoD as a critical tool for assuring program success. Whereas "risk management" is a general term encompassing all the different areas of risk management, this document focuses specifically on the single aspect of Technical Risk Management. Although managing risk for all aspects of a program is critical, technical risk is perhaps the most important area of risk management, because technical risk, and the degree to which technical processes can be controlled, is a significant driver of all other program risks.

Unfortunately, technical risk and the importance of controlling critical technical processes are generally not well understood within the DoD acquisition community, nor is adequate guidance on these considerations readily available. In response to these shortcomings, and to the need to improve the efficiency and effectiveness of the acquisition process, this publication offers a single source of concise explanations and clear descriptions of steps one can take to establish and implement core technical risk management functions. It contains baseline information, explanations, and best practices that contribute to a well-founded technical risk management program, invaluable to program managers overwhelmed by the magnitude of information and guidance available on the broad subject of risk management today. In addition, as an aid to the reader, Appendix A contains the Risk Management Requirements from DoDD 5000.1 and DoD 5000.2-R, implemented by SECNAVINST 5000.2B. Each chapter addresses specific technical risk areas.

Although developed for Department of the Navy program managers and their staffs, this document should be equally useful to contractor program managers. The fundamentals contained herein are applicable to all acquisition efforts, both large and small.

Chapter 1
Choose an Approach

What Is the Relationship Between Approach and Technical Risk?

The choice of an approach for managing program technical risk should be made as soon as possible. DoDD 5000.1 mandates that the Program Manager (PM) develop a risk management approach before decision authorities can authorize a program to proceed into the next phase of the acquisition process. All aspects of a risk management program are, in turn, determined by the approach selected. Delaying selection of a specific approach for managing technical risk will cause a program to flounder, especially if the contractor and the Government are following two different approaches. Further, the Integrated Product Team (IPT) cannot function successfully unless all members of the team, contractor and Government alike, are using a common approach. Although the Defense Acquisition Deskbook offers the PM several approaches to risk management covering a broad spectrum of program risks, only three
approaches have been selected for inclusion in this publication. Why? Results of a 1997 survey of risk management in 41 Department of the Navy (DoN) programs revealed that the following three approaches to managing program technical risk represent those used almost exclusively by DoN PMs:

• Critical Process. Technical risk management conducted primarily by assessing contractor critical design, test, and production processes against industry best practices and metrics, with the degree of variance determining the level of risk. These critical processes are generally not tailored for individual Work Breakdown Structure (WBS) elements.

• Product (Work Breakdown Structure). Technical risk management based on individual product or WBS elements, with risk assessments based on deviations from a cost and schedule baseline. Risk is expressed as a probability estimate rather than as a degree of process variance from a best practice.

• Integrated Process/Product (WBS). Technical risk management based on specific critical processes affecting individual WBS elements. These critical design, test, and production processes are assessed against industry best practices and metrics, with the degree of variance determining the level of risk.

These approaches are described in the remainder of this chapter.

The Critical Process Approach

This approach is used to identify and analyze program technical risks by assessing the amount of variance between the contractor's design, test, and production processes (i.e., those not related to individual WBS elements) and industry Best Practices. Success of any risk reduction efforts associated with this technique will depend on the contractor's ability and willingness to make a concerted effort to replace any deficient engineering practices and procedures with industry Best Practices. Chapter 5 contains a list of several fundamental engineering design, test, and production Critical Processes with associated Best Practices and Watch-Out-Fors. The Critical Processes were derived from a number of commercial and defense industry sources. One of the primary benefits of this approach is that it addresses pervasive and subtle sources of risk in most DoD acquisition programs and uses fundamental engineering principles and proven procedures to reduce technical risks. Figure 1-1 illustrates a sample approach.

[Figure 1-1. Critical Process Risk Management. The original diagram is not reproduced here. Its recoverable elements include two definitions: "Risk: the difference between actual performance of a process and the known best practice for performing that process" and "Risk Management: a proactive management technique that identifies critical processes and a methodology for controlling their impact on the program." The diagram also outlines how to identify a risk (use personal knowledge and best judgment, understand the prime contractor's and subcontractors' critical processes, quantify risks as low, moderate, or high), what can be done about a risk (determine the cause rather than cure the symptom, develop backup plans, parallel paths, mitigation plans, and prototypes, apply added resources, people, time, and training), and a risk management tool box (best practices, lessons learned, robust design practices, quality standards, a risk database, independent risk assessments, and software applications for tracking risk).]
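The core of the critical process approach is comparing the contractor's measured process performance against an industry best-practice benchmark. The following is a minimal Python sketch of that comparison, offered only as an illustration; the metric names, benchmark values, and contractor values are hypothetical and are not taken from the guide.

# Illustrative comparison of contractor process metrics against best-practice
# benchmarks; the variance from the benchmark indicates technical process risk.
best_practice = {"defective_parts_per_million": 100, "drawing_redline_rate": 0.02}
contractor    = {"defective_parts_per_million": 240, "drawing_redline_rate": 0.05}

for metric, benchmark in best_practice.items():
    actual = contractor[metric]
    variance = (actual - benchmark) / benchmark   # relative variance from best practice
    print(f"{metric}: benchmark={benchmark}, actual={actual}, variance={variance:+.0%}")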
Process Metrics, Best Practices, and Watch-Out-Fors are used in conjunction with contract requirements and performance specifications to identify those technical processes that are critical to the program and to establish a program baseline of contractor processes. This baseline should be developed using the fundamental engineering Critical Processes provided in Chapter 5 as a starting point, and by reviewing and compiling additional Critical Processes in use by companies in both the defense and non-defense sectors. The program baseline being used by the contractor should be determined by evaluating actual contractor performance, as opposed to stated policy. This program baseline should then be compared to a baseline of those industry-wide processes and practices that are critical to the program. The variances between the two baselines are indications of the technical process risk present in the program. These results should be documented in a standard format, such as a program-specific Risk Assessment Form (see Chapter 8), to facilitate the development of a risk handling/mitigation and risk tracking plan. Figure 1-2 illustrates a sample approach.

Figure 1-2. Critical Process Risk Assessment. The risk assessment guide plots Critical Process Variance (What is the critical process variance from the known standard?) against Consequence (Given the risk is realized, what is the magnitude of the impact?) to assign a risk level:

• HIGH: Major disruption likely. A different approach may be required. Priority management attention is required.
• MODERATE: Some disruption. A different approach may be required. Additional management attention may be needed.
• LOW: Minimum impact. Minimum oversight is needed to ensure the risk remains low.

Critical Process Variance levels: (a) minimal, (b) small, (c) acceptable, (d) large, (e) significant.

Consequence levels (technical, and/or schedule, and/or cost, and/or impact on other teams):
Level 1 - Minimal or no technical impact; minimal or no schedule impact; minimal or no cost impact; no impact on other teams.
Level 2 - Small technical impact, with some reduction in margin; additional resources required, but able to meet need dates; cost impact less than 5%; some impact on other teams.
Level 3 - Acceptable technical impact, with significant reduction in margin; minor slip in a key milestone, not able to meet need dates; cost impact 5-7%; moderate impact on other teams.
Level 4 - Large technical impact, no remaining margin; major slip in a key milestone, or critical path impacted; cost impact greater than 7-10%; major impact on other teams.
Level 5 - Significant technical impact; cannot achieve a key team or major program milestone; cost impact greater than 10%; significant impact on other teams.

In summary, the critical process approach has many benefits; however, the critical processes normally are not directly related to the individual WBS product elements comprising the weapon system being developed and produced.
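The mapping used in Figure 1-2, from a process variance level and a consequence level to a risk rating, can be expressed as a simple lookup. The Python sketch below is illustrative only; the individual cell assignments in the matrix are hypothetical placeholders, since the exact cell-by-cell ratings of the original chart are program specific.

# Illustrative Figure 1-2 style lookup: critical process variance (a-e) and
# consequence level (1-5) map to a LOW / MODERATE / HIGH risk rating.
VARIANCE_LEVELS = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}   # minimal .. significant

# RISK_MATRIX[variance_index][consequence_index]; cell values are placeholders.
RISK_MATRIX = [
    ["LOW", "LOW", "LOW", "MODERATE", "MODERATE"],
    ["LOW", "LOW", "MODERATE", "MODERATE", "HIGH"],
    ["LOW", "MODERATE", "MODERATE", "HIGH", "HIGH"],
    ["MODERATE", "MODERATE", "HIGH", "HIGH", "HIGH"],
    ["MODERATE", "HIGH", "HIGH", "HIGH", "HIGH"],
]

def risk_rating(variance: str, consequence: int) -> str:
    """Return the rating for a variance level 'a'-'e' and a consequence level 1-5."""
    row = VARIANCE_LEVELS[variance.lower()] - 1
    return RISK_MATRIX[row][consequence - 1]

print(risk_rating("d", 4))   # large variance, major consequence -> HIGH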
The Product Work Breakdown Structure Approach

DoD 5000.2-R requires that DoD programs tailor a program Work Breakdown Structure (WBS) for each program using the guidance in MIL-HDBK-881, Work Breakdown Structure, of 2 January 1998. MIL-HDBK-881 defines the WBS as "a product oriented family tree composed of hardware, software, services, data and facilities which results from systems engineering efforts during the acquisition of a defense materiel item." A WBS displays and defines the products to be developed and/or produced and relates the WBS elements of work to be accomplished to each other and to the end products.

A sound WBS clearly describes what the program manager wants to acquire. It has a logical structure and is tailored to a particular defense materiel item. As stated in MIL-HDBK-881, the WBS is product oriented. It addresses the products required, not the functional processes or costs associated with those products. For example, subjects such as design engineering, requirements analysis, test engineering, etc., are not products. Rather, they are functional processes, each representing a discrete series of actions with specific objectives during product development and/or production. These are normally not identified as MIL-HDBK-881 WBS elements and, as a result, generally do not receive adequate program consideration.

Section 2 of MIL-HDBK-881 states that the WBS provides a framework for specifying the technical objectives of the program by first defining the program in terms of hierarchically related, product-oriented elements and the work processes required for their completion. Therefore, the emphasis on product is to define the products to be developed and/or produced and to relate the elements of work to be accomplished to each other and to the end products. Unfortunately, in this approach, programs frequently place little emphasis on process.

A typical WBS technical risk management approach is based on the WBS products. Risk assessments and mitigation activities are conducted primarily on the individual WBS elements, with an emphasis on technology, product maturity, or perceived quality, and with little emphasis on related processes. Risk is typically expressed as a measure or probability estimate rather than as a degree of process variance from a best practice.

In the WBS approach, technical risks are identified, assessed, and tracked for individual WBS elements, identified at their respective levels, primarily for impact on cost and schedule and the resulting effect on the overall product. Since DoD programs are established around the WBS, the associated costs and schedule for each product can be readily baselined, against which risk can be measured as a deviation from cost and schedule performance. Taking the WBS to successively lower-level entities will help to assure that all required products are identified in terms of cost and schedule performance as well as operational performance goals. In general, a typical WBS approach tends to be more reactive than proactive.

Although a direct measurement of product performance against cost and schedule performance has its benefits, there are also some significant downsides to an approach in which processes are not considered. The WBS, by virtue of its inherent organizational properties, produces technical performance measurements that are, in essence, after-the-fact measures of risk. Also, by not focusing on processes, the overall risk to the program may not be identified until the program is in jeopardy. As stated in DoD 5000.2-R, the WBS provides a framework for program and technical planning, cost estimating, resource allocations, performance measurements, and status reporting. Whereas the WBS is a good tool for measuring technical performance against cost and schedule, it is an incomplete measure of technical risk without considering processes.
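As noted above, in the Product WBS approach risk for each element is measured as a deviation from its cost and schedule baseline. A minimal Python sketch of that deviation calculation follows; the field names, element name, and figures are illustrative assumptions, not values from the guide.

# Hypothetical sketch: expressing WBS-element risk as percent deviation from
# a cost and schedule baseline.
from dataclasses import dataclass

@dataclass
class WbsElementStatus:
    name: str
    baseline_cost: float     # budgeted cost for work scheduled to date
    actual_cost: float       # actual cost of work performed to date
    planned_weeks: float     # baseline schedule consumed to date
    actual_weeks: float      # actual schedule consumed to date

    def cost_deviation_pct(self) -> float:
        return 100.0 * (self.actual_cost - self.baseline_cost) / self.baseline_cost

    def schedule_deviation_pct(self) -> float:
        return 100.0 * (self.actual_weeks - self.planned_weeks) / self.planned_weeks

radar = WbsElementStatus("Fire control radar", 4.0e6, 4.5e6, 52, 58)
print(f"cost deviation:     {radar.cost_deviation_pct():+.1f}%")
print(f"schedule deviation: {radar.schedule_deviation_pct():+.1f}%")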
It is important to recognize that the WBS is a product of the systems engineering process, which emphasizes both product and process solutions required for the completion of technical objectives. However, history indicates that, until recently, process solutions received too little emphasis.

The Integrated Process/Product Approach

The Integrated Process/Product approach to technical risk management is derived primarily from the Critical Process approach and incorporates some facets of the Product/WBS approach. The systems engineering function takes the lead in system development throughout any system's life cycle. The purpose of systems engineering is to define and design process and product solutions in terms of design, test, and manufacturing requirements. The work breakdown structure provides a framework for specifying the technical objectives of the program by first defining the program in terms of hierarchically related, product-oriented elements and the work processes required for their completion. This emphasis on systems engineering, including processes and technical risk along with process and product solutions, validates and supports the importance of focusing on controlling the processes, especially the prime contractor's and subcontractors' critical processes. Such a focus is necessary to encourage a proactive risk management program, one that acknowledges the importance of understanding and controlling the critical processes, especially during the initial phases of product design and manufacture.

In summary, the Critical Process approach provides a proactive concentration on the technical drivers and associated technical risks, as measured by process variance. Integrating this approach into the Product approach enables the critical processes to be directly related to the products comprising the weapon system being developed and produced. In this manner, the benefits of both approaches are realized: product maturity is accelerated, technical risk is reduced, CAIV objectives are more easily met, schedule slippages are avoided, and the Program Manager reaches Milestone decision points with a higher level of confidence. See Table 1-1 for an overview of the advantages and disadvantages of all three approaches.

Table 1-1. Comparison of Approaches

Process
  Advantages: Proactive focus on critical processes. Encourages market search for best practices/benchmarks. Reliance on fundamental design, test, and manufacturing principles. Addresses pervasive and subtle sources of risk. Technical discipline will pay dividends in cost and schedule benefits.
  Disadvantages: Less emphasis on the product-oriented elements of a program. Perception that technical issues dilute the importance of cost and schedule.

Product (WBS)
  Advantages: Commonly accepted approach using a logical, product-oriented structure. Relates the elements of work to be accomplished to each other and to the end product. Separates a defense materiel item into its component parts. Allows tracking of product items down to any level of interest.
  Disadvantages: Does not typically emphasize critical design and manufacturing processes or product cost. Risk is typically expressed as a probability estimate rather than a process variance. Delayed problem identification (reactive).

Integrated Process/Product
  Advantages: Maximizes the advantages of the Process and Product approaches.
  Disadvantages: None significant.

Chapter 2
Assign Accountability

What Is the Relationship Between Accountability and Technical Risk?

In practice, most programs do not have an individual accountable to the Program Manager (PM) for risk management. More often than not, several team members may be assigned risk management
responsibilities but do not have ownership of or accountability for the risk management process. Therefore, it is imperative that a risk management focal point, accountable directly to the PM for the risk management program, be established and specifically identified in the program structure. Otherwise, risk management will quickly disintegrate and become an "oh, by the way" task until program risks have turned into program problems.

Risk Management Organization

The risk management team is not an organization separate from the program office. Rather, it is integrated with the program office and includes program office, prime contractor, field activity, and support contractor personnel operating toward a shared goal. A conceptual risk management organization, which shows relationships among members of the program risk management team, is provided in Figure 2-1.

The key to establishing an effective risk organization is to formally assign and empower an individual whose primary role is managing risk. This individual, referred to as the Risk Management Coordinator, should be a higher-level program office person, such as the Deputy Program Manager (DPM), and should be accountable directly to the PM for all aspects of the risk program. The Risk Management Coordinator must have a level of authority that provides direct, unencumbered access to the PM and can cross organizational lines. The Risk Management Coordinator:
• Is the official point of contact and coordinator for the risk program
• Is responsible for reporting risk
• Is a subject matter expert on risk
• Maintains the risk management plan
• Coordinates risk training
• Does not need to be assigned full-time

[Figure 2-1. Conceptual Risk Management Organization. The diagram shows the Program Manager providing direction to, and receiving reports from, the Risk Management Coordinator. The Coordinator exchanges guidance, information, tools, training, status, and reporting with the Risk IPTs (program office personnel, prime contractor, support contractor, and Navy field sites) and receives independent risk assessments from independent assessors.]

Risk Integrated Process Teams

Providing information to the Risk Management Coordinator are the actual IPTs responsible for implementing the risk program. These are comprised of experienced individuals from the different disciplines and functional areas within the program. Whereas these teams or individuals provide risk status and mitigation information to the Risk Management Coordinator, they are empowered to make recommendations and decisions regarding risk management activities while reporting risk without fear of reprisal. IPTs are responsible for:
• Providing to the Risk Management Coordinator the results of risk assessments and mitigation activities, using standard risk assessment forms
• Maintenance of the risk management database
• Implementing risk management practices and decisions, including those discussed at program and design reviews

It is imperative that everyone involved with the risk program understands his or her roles and responsibilities. Standard terminology, definitions, and formats are critical to the risk management process (see Chapter 8, Assess, Mitigate, Report). The most effective method to do this is to document the risk process in a formal Risk Management Plan. Not only will a documented plan provide a standardized operating procedure for the risk effort, but it will provide continuity to the risk program as new personnel enter the program and others leave.
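The standard risk assessment forms and the risk management database mentioned above imply a common record structure for each identified risk. The Python sketch below shows one possible shape for such a record; the field names are hypothetical illustrations, not the standard forms defined in Chapter 8.

# Illustrative record for a program risk management database.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskRecord:
    risk_id: str
    title: str
    wbs_element: str              # product element affected
    critical_process: str         # design/test/production process involved
    variance: str                 # 'a'-'e' variance from the best-practice baseline
    consequence: int              # 1-5 consequence level
    rating: str                   # LOW / MODERATE / HIGH
    mitigation_plan: str = ""
    owner: str = ""               # responsible IPT or individual
    status_history: list = field(default_factory=list)

    def update_status(self, note: str, when: Optional[date] = None) -> None:
        """Append a dated status note so mitigation progress can be tracked."""
        self.status_history.append((when or date.today(), note))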
Secretary of Defense Acquisition and Technology USDAampT has developed a sample format for a Risk Management Plan The sample plan is a compilation of several good risk management plans taken from DoD programs The plan is designed to be tailored to fit individual program needs and may be more comprehensive than many programs require The DoD Risk Management Plan outline is available online in the DoD Deskbook Additional information may be obtained from the DoD Risk Management Homepage at httpwwwacqosdm tep1 0gramsseI39iskmanagementindexhtm Chapter 3 W5 Put Risk Management in the Contract What is the Relationship Between The Contract and Technical Risk The elimination of many Military Speci cations and Standards the use of performance speci cations and the shi of technical responsibility to contractors will not alone minimize program risk without explicit contractual requirements for risk management The perception is that the transfer of responsibility to the contractor automatically reduces program risk However if a program fails because risk isn t managed well by the contractor the Program Manager PM is ultimately responsible The need for contractual requirements for risk management is recognized in both DoDD 50001 and DoD 50002R The Request for Proposal The Request for Proposal RFP should communicate to all Offerors the concept that risk management is an essential part of the Government39s acquisition strategy Before the dra RFP is developed the PM should conduct a preliminary risk assessment to ensure that the program to be described in the RFP is executable within technical schedule and budget constraints Based on this assessment the technical schedule and cost issues identified should be discussed at preproposal c0nferences before the dra RFP is released In this way critical risks inherent in the program can be identified and addressed in the RFP In addition this helps to establish key risk management contractual conditions as emphasized in the DOD 5000 series During the preproposal conference Offerors should be encouraged to identify all elements at any level that are expected to be moderate or high risk In the solicitation PMs should ask Offerors to address 0 A risk management program 0 An assessment of risks 0 Risk mitigation plans for moderate and high risks Chapter 3 In addition the RFP should identify the requirement for periodic de ne the frequency risk assessment reports that would serve as inputs to the PM s risk assessment and monitoring processes and ensure that risks are continuously assessed Some programs require risk assessment reports for integration into quarterly Defense Acquisition Executive Summary reports Each RFP section is intended to elicit speci c types of information from Offerors that will when considered as a whole permit selection of the best candidate to produce the goods or perform the services required by the Government A number of sections of the RFP are key to risk management and are described as follows including examples of typical clauses SECTION C Desu39l 1 quot39 m ofWork This Section ofthe RFP includes any description or speci cations needed Statements describing risk management requirements may be included directly in Section C or by reference to the Statement of Work SOW or Statement of Objectives SOO A typical Section C clause is shown below The Offeror shall describe its proposed risk management program The Offeror shall describe how they intend to identify assess mitigate and El monitor potential technical risks Critical technical risks which 
may Sample RFP adversely impact cost schedule or performance shall be identi ed along Section C with proposed risk mitigation methods for all risks identi ed as moderate or Clause high SECTION L Instructions Conditions and Notices t0 Offerors This Section of the RFP includes provisions information and instructions to guide O erors in preparing their proposals Risk management requirements in Section L must be consistent with the rest of the RFP such as tasking established in Section C the SOW evaluation criteria in Section M and Special Provisions in Section H The requirements must ensure the resulting proposals will form a solid basis for evaluation The statements below provide examples from Navy programs for use in structuring Section L requirements to include risk management Volume I Part C Management Proposal Relevant PastPresent Performance The Offeror shall demonstrate its pastpresent performance in critical requirements and processes and its ability to understand and resolve technical El risk issues within its organizational structure including interaction with its Sample RFP subcontractors The Offeror shall discuss pastpresent performance in the Section L implementation of risk reductionmitigation efforts similar to those proposed Clauses for the reduction of all risks identi ed as moderate or high Chapter 3 Volume I Part D Technical Proposal Risk Management The Offeror shall provide a detailed description of the Risk management program to assure meeting the RFP requirements and objectives The Offeror shall de ne and commit to the risk management program including risk planning identification assessment mitigation and monitoring lnctions The Offeror shall explain how its risk management process is related to the systems engineering and overall program management processes The Offeror shall identify moderate and high technical risk areas the known and potential risks in these areas and the rationale for risk mitigation techniques proposed for these risk areas SECTION M Evaluation Factors for Award Section M noti es Offerors of the evaluation factors against which all proposals will be evaluated These factors should be care llly structured to ensure that emphasis is placed on the critical factors described in Section L They should set forth the relative importance of technical cost development versus production versus operational and support schedule management and other factors such as risk management and past performance as set forth in the Source Selection Plan The statement below provides an example for structuring Section M requirements to include risk management The Government will evaluate the Offeror s proposed risk management I program and plans for identifying assessing mitigating and monitor risks as well as proposed plans for mitigating those risks identi ed as moderate or sample RF P high Section M Clause When structuring Technical and Management evaluation criteria for Section M risk management should be included as a factor or subfactor as required by Part 3 paragraph 332 of DoD 50002R Statement of WorkStatement of Objectives The majority of existing Government contracts include a Statement of Work SOW that forms the basis for successful performance by the contractor and effective administration of the contract by the Government A wellwritten SOW enhances the opportunity for all potential Offerors to compete equally for Government contracts and serves as the standard for determining if the contractor meets stated performance requirements Another concept called the 
Statement of Objectives SOO shifts the responsibility for preparing the SOW from the Government to the solicitation respondents Recent DoD direction to lower Government costs encourages innovative contract options and exible design solutions The SOO captures the top level objectives of a solicitation risk management for instance and allows the O erors freedom in the structure and Chapter 3 de nition of SOW tasks as they apply to the proposed approach The following paragraphs contain two SOW examples and one 800 example constructed from a number of Navy program solicitations Risk Management and Rep01ting The Contractor shall maintain a risk management program to assess risks associated with achievement of technical cost and schedule requirements Speci c risk management functions shall at a minimum 0 Identify known and potential risks 0 Assess risks including a relative ranking by program impact and the establishment of critical thresholds 0 De ne methods or alternatives to mitigate or minimize these risks including the identi cation of criteria upon which programmatic decisions can be based 0 Track and report risk mitigation progress The contractor s risk management program will be presented to the Government initially for concurrence and then in monthly updates and at inprocess and other appropriate reviews Risk Management The Contractor shall implement a Risk Management Program in accordance with the XYZ Risk Management Plan using the Navy s Top Eleven Ways to M anageTechnical Risk publication as a guide The initial set of Contractordefmed risks shall be updated as the Government or Contractor identi es new risks The Contractor shall rank risks with respect to impact on performance cost and schedule and shall identify and develop mitigation plans for risk reductionresolution Risk Management Objectives 0 To develop and implement a risk management process with risk identi cation assessment mitigation and trackingreporting lnctions 0 To de ne and implement a risk assessment methodology that includes not only an understanding of cost schedule and performance impacts but also a periodic reassessment of these impacts on identi ed risk areas To establish acceptable risk levels to be achieved 0 To de ne risks and proposed risk mitigation steps for all items identi ed as moderate or high risk Source Selection DoD 50002R states Whenever applicable risk reduction through the use of mature processes shall be a significant factor in source selection ET SOW example 5 SOW example El 800 example Chapter 3 The purpose of Source Selection is to select the contractor whose performance can be expected to meet the Government39s requirements at an affordable price The Source Selection process entails evaluating each Offeror s capability for meeting product and process technical schedule and cost requirements while identifying and managing inherent program risks The evaluation team must discriminate among Offerors based upon the risk associated with each Offeror s proposed approach for meeting Government requirements including an evaluation of the O eror39s past and present performance record to establish a level of confidence in the contractor s ability to perform the proposed effort This evaluation should include consideration of 0 Product and process risk management approaches and associated risks determined by comparison with a Best Practices baseline 0 Technical cost and schedule assessments to estimate the additional resources e g time manpower loading hardware or special actions such as additional 
analyses or tests etc needed to control any risks that have medium or high risk ratings Past performance and recent improvements in the implementation of risk reductionmitigation efforts similar to those being proposed for reducing risks identi ed as moderate or high for the program being proposed Award Fee for Risk Management Award fees properly used are a valuable tool for motivating contractors to improve performance while creating opportunities for improved Government 7 contractor communication including ongoing feedback thus permitting problems to be resolved sooner Award fee discussions should be held on a regular basis monthly or quarterly is usually recommended The award fee process can be successfully implemented on a range of contract goals and elements including risk management The guidelines below can help PMs establish a risk management program using award fee criteria 0 Analyze the SOW and attendant requirements to determine which contract performance requirements should be subject to awards Specify the criteria against which contractor performance will be measured From the total award fee amount to be made available specify evaluation periods and the corresponding amount of award fee available each period 0 Explain the general procedures that will be used to determine the earned award fee for each evaluation period When analyzing the SOW and attendant requirements an important first step is the identification of critical areas of program risk Chapter 5 of this publication provides an initial set of critical technical process risk areas that can be used as a starting point Chapter 3 in this effort As a general rule historically highrisk processes and processes involved with new technologies are usually good candidates for consideration as award fee elements Tailor the contract performance elements ie areas of critical program risk selected for award fees to key events then assign them to appropriate award fee periods The results become the basis of the request for information from potential bidders as contained in the Instructions to Offerors without having to ask for extraneous detail A well thought out list of critical risk areas provides an excellent roadmap for the solicitation Award fee contracts based on contractor process improvements normally require some objective measurements to use as a basis for evaluation and award fee percentage determination Give the contractor regular structured feedback to preclude great disparity between what the contractor expects as an award fee payment and what the Government actually pays The simplicity of this approach is the very characteristic that makes the use of award fee criteria to establish a technical risk management program so effective Table 31 provides guidance for using award fee criteria in implementing technical risk Table 3 1 Award Fee Considerations Best Practice 0 Performance Feedback 7 Regular structured feedback to prime contractors on their performance with respect to award fee criteria at signi cant program reviews 0 Process Improvement 7 Process improvements can only be achieved if process changes are implemented Verify implementation via test results documentation and operational use Witness the actual implementation of new processes and procedures 0 Award fee owed down to subcontractors Watch Out For 0 No regular performance feedback provided by the Government to the prime contractor during the rst evaluation period Award fee contracts based on contractor process improvements without objective measurements to 
use as a basis for evaluation and award fee determination Relatively short contract performance periods making it difficult to establish a metric baseline implement a process change and validate an actual improvement in the resulting metric during the contract period Chapter 4 W4 Mandate Training What is the Relationship Between Training and Technical Risk It is often assumed that Government and contractor program staffs as acquisition professionals understand risk management Given the nuances and complexities of risk management most personnel perceive risk management differently due to varying backgrounds experiences and training In order to integrate these variances a formal indoctrination andor awareness training in risk management is essential All key Government and contractor personnel should understand their roles in the implementation of the risk management program as well as the goals strategies roles and responsibilities of the risk management team Team members who are not talking the same language will result in a risk management effort that is poorly executed and ine ective Defense Acquisition University Courses As in any organized effort be it for a sports event or business venture training is imperative for the success of the team or individuals involved DoD risk management is no different 7 an inadequately trained staff is prone to failure DoD has recognized the importance of training their acquisition professionals and has established mandatory training standards in DoD 500052M Acquisition Career Development Program November 1995 The Defense Acquisition University DAU as established by the Under Secretary of Defense Acquisition and Technology provides a structured sequence of courses needed to meet the mandatory and desired training standards established in DoD 500052M These courses are designed to provide program office personnel with core and specialized knowledge in their functional areas with higher level courses placing an emphasis on managing the acquisition process and learning the latest acquisition methods being implemented Therefore the program manager should ensure that as minimum key program office personnel involved with risk management attend these courses which include 0 Production and Quality Management 0 Logistics Fundamentals 0 Fundamentals of System Acquisition Management 0 Introduction to Acquisition Workforce Test and Evaluation Chapter 4 To enroll in the courses program offices submit a Department of the Navy Acquisition Training Registration sheet DACMl to their command acquisition training representatives Further information and course schedules can be obtained on the World Wide Web at http dacmsecnavnavymil Program Training While training in the DoD Acquisition Professional Courses provides the big picture it is also imperative that personnel be trained to their program s specific risk requirements and objectives This is necessary to ensure that all personnel responsible for the implementation of risk management understand the program objectives expectations goals terminology formats etc regarding risk management This training should not be limited to program office personnel but includes the prime contractor subcontractors support contractors and supporting eld activities Training as a minimum should provide instruction in the following 0 Background and introduction to risk management 0 Program office risk organizational structure and responsibilities 0 Concept and approach 0 Awareness of latest techniques through attendance at symposiums seminars 
workshops etc Program definitions and terminology 0 Risk assessment tools used by the program office Use of the risk management database This should also include handson instruction on the use of the risk database and tracking system The program office prime contractor or a support contractor can provide risk training however care should be taken in the selection of the training source The training sources should be subject matter experts in risk management and be familiar with the program s operations As emphasized throughout this publication all personnel responsible for planning and executing the risk management program must talk the same language have an understanding of what the risk pro gram s objectives are and understand how to use the various tools required to identify assess mitigate and track risk As in any venture this training is critical to achieving the objectives for the successful execution of a risk management program Chapter 5 Wm 5 Practice Engineering Fundamentals What is the Relationship Between Engineering Fundamentals and Technical Risk Engineering lndamentals are the basic disciplined design test and production practices that have been proven through experience to be critical for risk avoidance Experience has also shown that many of these lndamentals are not well understood by either the Government or industry As a result many program risks are derived from early management decisions regarding the application of these engineering fundamentals Critical Technical Processes Critical processes are a continuum of interrelated and interdependent disciplines A failure to perform well in one area may result in failure to do well in all areas A high risk program may result causing deployment of the product to be delayed with degraded product performance and at greater cost than planned Risk is eliminated or reduced when the deficient industrial process is corrected and that correction is o en effected at a level of detail not normally visible to the Program Manager PM This chapter contains lndamental technical processes and the associated Best Practices and WatchOutFors which have great in uence on technical risk These practices though by no means comprehensive do focus on key technical risk areas Use of proven best practices to achieve product success leads to a more organized approach to accomplish these activities and places more management significance on them Experienced engineers and PMs are aware that there are some requirements conditions materials types of equipment or parts and processes that almost invariably create potential or actual risk identified herein as WatchOutFors Knowing these 19 Chapter 5 areas of potential or actual risk gives a PM additional early insight for developing risk management or risk mitigation plans The Best Practices and WatchOutFors associated with critical industrial technical processes should be used as a starting point in developing a baseline of program specific contractor processes The best practices associated with these critical processes can also serve as benchmarks with which to compare your program s baseline processes and results achieved versus desired goals The following examples of critical processes for the Design Test and Production phases of a product s development are presented in this chapter DESIGN TEST PRODUCTION 0 Design Reference Mission 0 Design Limit Quali cation 0 Manufacturing Plan Pm le TeSting 0 Rapid Prototyping 0 TradeStudies 0 Test Analyze and Fix 0 Manufacturing Process 0 Design Analys es Proofin g 
Qualification 0 Parts amp Materials Selection 0 Conformal Coating for 0 Design for Testability Printed Wiring Circuit 0 BuiltIn Test Assemblies 0 Design Reviews 0 Subcontractor Control 0 Thermal Analysis T001 Planning 0 Design Release 0 Special Test Equipment 0 Computer Aided 0 Manufacturing Screening Des ign Computer Aided 0 Failure Reporting Analysis Manufacturing and Corrective Action 20 Chapter 5 Design Design Reference Mission Profile A Design Reference Mission Pro le CDRMP is a hypothetical profile consisting of timephased functional and environmental profiles derived from multiple or variable missions and the total envelope of environments to which the system will be exposed The DRMP becomes the basis for system and subsystem design and test requirements Best Practice 0 Mission Pro les cover all system environments during its life cycle including operational storage handling transportation training maintenance and production 0 Mission Pro les are de ned in terms of time duration and sequence level of severity and equency of cycles 0 Mission and System Profiles are detailed by the Government and contractor respectively based on natural and induced 39 e g 1 vibration 39 impulse shock and electrical transients Profiles are the foundation for design and test requirements from system level to piece parts including CommercialOffThe ShelfNonDevelopm ental Items COT SNDIs Watch Out For 0 DRMP environmental profiles that appear to be simply extracted from MILHDBK 810 Environmental Test Methods and Engineering Guidelines 31 July 1995 Mission Pro les based on average natural environmental conditions rather than the more extreme conditions that may more accurately re ect operational requirements in the placeat the time of use such as indicated by MILHDBK310 Global Climatic Data for Developing Military Products 23 June 1997 and the National Climatic Data Center Trade Studies Trade Study are iterative series of studies performed to evaluate and validate concepts representing new technologies or processes design alternatives design simplification ease of factory and eld test and compatibility with production processes Trade studies culminate in a design that best balances need against what is realistically achievable and affordable Best Practice 0 Trade studies are perform ed to evaluate alternatives and associated risks 0 Trade studies consider producibility supportability reliability cost and schedule as well as performance 0 Trade studies are conducted using principles of modeling and simulation experimental design and optimization heory 0 Trade studies include sensitivity analyses of key performance and life cycle cost parameters 0 Trade study alternatives are documented and form ally included in design review documentation to ensure downstream traceability to design characteristics 0 Trade studies are traceable to the DRMP and associated design requirements 0 Qualit FunctionDe lo enttechni ues are usedtoidentif ke re uirements when erformin tradeoffs Watch Out For 0 Use of new technologies without conducting tradestudies to identify risks 0 Trade studies that do not include participation by appropriate engineering disciplines 0 Product reliability quality and supportability traded for cost schedule and functional performance gains 21 Chapter 5 Design Design Analyses Design Analyses are perform ed to examine design parameters and their interaction with the environment Included are riskoriented analyses such as stress worst case thermal structural sneak circuit and Failure Modes Effects and 
Criticality Analysis FMECA which if conducted properly will ensure that reliable low risk mature designs are released Best Practice 0 Validate new analysis modeling tools prior to use 0 Conduct logic analysis on 100 of Integrated Circuits ICs 0 Analyze 100 of TC outputs for ability to drive maximum expected load at rated speed and voltage levels Eb Use Table 51 below to determine which design analyses should be performed Watch Out For 0 Analyses performed by inexperienced analysts 0 Anal ses erformed usin un roven software ro rams Table 51 Objectives of Selected Design Analyses Analyses Objectives 0 ReliabilityPrediction To evaluate alternative designs assist in determining whether or not requirements can be achieved and for help in detecting overstressed parts andor critical areas Failure Modes Effects and Criticality Analysis To identify design weaknesses by examining all failure modes using a bottomup approach Worst Case Analysis To evaluate circuit tolerances based on simultaneous part variations Sneak Circuit Analysis To identify latent electrical circuit paths that cause wanted functions or inhibit wanted functions Fault Tree Analysis 0 To identify effects of faults on system performance using a topdown approach 0 Finite Element Analysis 0 To assure material properties can withstand intended mechanical stresses in the intended env ironm ents 0 Stress Analysis 0 To determine or verify design integrity against conditional extremes or design behavior under various loads 0 Thermal Stress Analysis see Thermal Analysis 0 To determine or eliminate thermal overstress conditions to verify compliance with derating criteria 22 Chapter 5 Design Parts and Material Selection The Parts and Material Selection utilizes a disciplined design process including adherence to rm derating criteria and the use of Qualified Manufacturers Lists QML to standardize parts selection Best Practice 0 Use QML parts particularly for applications requiring extended temperature ranges 0 Electrical parameters of parts are characterized to requirements derived from the Design Reference Mssion Profile to ensure that all selected parts are reliable for the proposed application see Figure 63 Chapter 6 0 Derate all parts electrically and thermally 0 A Preferred Parts List is established prior to detailed design 0 Parts screening is tailored based onmaturity 0 Use highly integrated parts e g Application Speci c TCs ASICs to reduce The number of individual discrete partschips The number of interconnections Size power consumption and cooling requirements and Failure rates 0 Quality is measured by Certification by supplier Compliance with ETA623 Procurement Quality of Solid State Components by Governments Contractors July 1994 Verification to historical data base Particle Impact Noise Detection for cavity devices Destructive Physical Analysis for construction analyses 0 Strategy for parts obsolescence and technology insertion is established 0 Vendor selection criteria established for nonQML parts considering Qualification characterization and periodic testing data Reliabilityquality defect rates Demonstrated process controls and continuous improvement program Vendor production volume and history 0 Minimum acceptable defects for incom ing electronic piece parts Maximum of 100 defective parts per million Watch Out For 0 Development of highly integrated parts unique to one specific acquisition development program 0 Use of nonQML parts Whenever QML parts are available 0 Highly integrated parts that are not treated as a system of 
discrete parts to which the parts program requirements also apply 0 Use of parts in environments not speci ed by the Original Equipment Manufacturer 0 Variance in operating characteristics of commercial RF and analog parts 0 Use of any parts near at or above their rated values especially plastic encapsulated devices which reach higher junction temperatures than ceramic devices due to higher resistance to heat conduction 0 Device equency derating based on maximum overall operating temperature vs equency rating which varies at different operating temperatures 0 The use of parts beyond speci ed operating ranges by upscreening or uprating Designs using part technologies Whose remaining life cycle will not support production and postproduction uses 23 Chapter 5 Design Design for Testability Designing for Testability assures that a product may be thoroughly tested with minimum effort and that high confidence may be ascribed to test results Testing ensures that a system has been properly manufactured and is ready for use and that successful detection and isolation of a failure permits costeffective repair Best Practice 0 Perform testability analyses concurrently with design at all hardware and all maintenance levels 0 Use Fault Tree Analysis FMECA and Dependency Modeling amp Analysis to determine test point requirements and fault ambiguity group sizes 0 Use standard maintenance busses to test equipment at all maintenance levels 0 Use ASICs and other complex integrated circuitschips with selftest capabilities 0 Good testability design re ects the ability to Initialize the operating characteristics of a system by external means e g disable an internal clock Control internal functions of a system with external stimuli eg break up feedback loops Selectively access a system s internal partition and parts based on maintenance needs 0 Evaluate Printed Wiring Board PWB testability using RAC publication Testability and Assessment Tool 1991 Converts scored and weighted rating of factors including accessible vs inaccessible nodes proper documentation complexity removable vs nonremovable components and different logic types 34 factors in all to a possible total score of 100 The following testability scores illustrate this method TScores for PWB Testability Acceptable Score PWB Testability 81 to 100 Very easy 66 to 80 Easy Questionable Score PWB Testability 46 to 65 Some dif culty 31 to 45 Average difficulty Unacceptable Score PWB Testability 1 to 30 Hard 1 to 10 Very hard 0 or less Impossible to testtroubleshoot wout cost penalties Watch Out For 0 Incompatibility between operational time constraints and time required to perform Built In Test BIT COTSNDI testability design that is incompatible with mission needs and program lifecycle maintenance philosophy Testability design that results in specialpurpose test equipment Circuit card assemblies and modules with test points that aren t accessible Circuit functions that don t t on a single board Reverse funneling of tests Testabilit re uirements for roduction are de ned after desi n release 24 Chapter 5 Design Built In Test BuiltInTest BIT provides built in monitoring and fault isolation capabilities as integral features to the system design BIT can be supplemented with embedded expert system technology that incorporates diagnostic logicstrategy into the prime system Best Practice 0 BIT is compatible with other Automatic Test Equipment ATE 0 Use BIT software For most exible options voting logic sampling variations filtering etc to verify proper operation and 
24 Chapter 5 Design
Built-In Test
Built-In Test (BIT) provides built-in monitoring and fault isolation capabilities as integral features of the system design. BIT can be supplemented with embedded expert system technology that incorporates diagnostic logic/strategy into the prime system.
Best Practice
• BIT is compatible with other Automatic Test Equipment (ATE)
• Use BIT software:
  - For the most flexible options (voting logic, sampling variations, filtering, etc.) to verify proper operation and identification of a failure or its cause
  - To minimize BIT hardware
  - To record BIT parameters
• Use multiplexing to simplify BIT circuitry
• Size the fault ambiguity group considering:
  - Mission requirements for reliability, repair time, down time, false alarm rate, etc.
  - Requirements for test and fault isolation at intermediate ("I") and depot maintenance levels
• Verify adequacy of the BIT circuit thresholds during development testing
• BIT should, as a minimum, provide:
  - 98% detection of all failures
  - Isolation to the lowest replaceable unit
  - Less than 0.1% false alarms
• Ratio of predicted to actual testability results: 1:1
• Preliminary testability analysis completed before PDR
• Detailed testability analysis completed before CDR
Watch Out For
• High BIT effectiveness resulting in unacceptably high false alarm rates
• Inadequate time to perform BIT localization/diagnosis, resulting in diminished BIT coverage and accuracy
• BIT design and analyses that fail to consider the effects of the DRMP and worst-case variations of parameters such as noise, part tolerance, and timing, especially as affected by age
• Inadequate BIT memory allocation
• Limitations to BIT coverage/effectiveness caused by:
  - Nondetectable parts (mechanical parts, redundant connector pins, decoupling capacitors, one-shot devices, etc.)
  - Power filtering circuits
  - Use of special test equipment (e.g., signal generators) to simulate operational input circuit conditions
  - Interface and/or compatibility problems between some equipment designs (e.g., digital vs. analog)
• Unkeyed test connectors
• Test points without current limits
• Test points that are not protected against shorts to either adjacent test points or to ground
• Testing constraints that cause failures of one-shot devices, safety related circuits, and physically restrained mechanical systems
• Methodology used to calculate BIT effectiveness (see Figure 5-1 for an illustration)
25 Chapter 5 Design
System X BIT coverage and failures:
  Subsystem A: BIT coverage Yes, 100 failures
  Subsystem B: BIT coverage Yes, 80 failures
  Subsystem C: BIT coverage No, 15 failures
  Subsystem D: BIT coverage No, 5 failures
  Total: 200 failures
• In this illustration, System X is designed for BIT detection of all failures of subsystems A and B and none of the failures of subsystems C and D. The BIT effectiveness of System X can be calculated to be either 90% or 50%, depending on the definition used.
• 90% BIT effectiveness is based on the percentage of the system's total failures that are detectable: the total detectable failures of the BIT portions of subsystems A and B are 180 (i.e., 100 + 80) out of the system total of 200; 180/200 = 90%.
• 50% BIT effectiveness is based on the percentage of the subsystems that can fail and be detected: failures of only two of the four subsystems in the system (A and B) are detectable by BIT; 2/4 = 50%.
Figure 5-1 BIT Design Based on Failure Rates
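The two definitions behind Figure 5-1 can be checked with a few lines of arithmetic. The sketch below reproduces the 90% and 50% results from the subsystem data in the figure; the function and variable names are illustrative, not part of the figure.

# Minimal sketch of the two BIT effectiveness definitions in Figure 5-1.

def failure_based_effectiveness(subsystems):
    """Percent of total expected failures that occur in BIT-covered subsystems."""
    total = sum(f for _, f in subsystems.values())
    detectable = sum(f for covered, f in subsystems.values() if covered)
    return 100.0 * detectable / total

def subsystem_based_effectiveness(subsystems):
    """Percent of subsystems whose failures BIT can detect."""
    covered = sum(1 for has_bit, _ in subsystems.values() if has_bit)
    return 100.0 * covered / len(subsystems)

if __name__ == "__main__":
    # (BIT coverage, expected failures) per subsystem, as in Figure 5-1
    system_x = {"A": (True, 100), "B": (True, 80), "C": (False, 15), "D": (False, 5)}
    print(f"failure-based:   {failure_based_effectiveness(system_x):.0f}%")   # 90%
    print(f"subsystem-based: {subsystem_based_effectiveness(system_x):.0f}%") # 50%

The gap between the two figures is the reason the Watch Out For list flags the methodology used to calculate BIT effectiveness: the same design can be reported very differently depending on which definition is quoted.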
Design Reviews
A Design Review is a structured review process in which design analysis results, design margins, and design maturity are evaluated to identify areas of risk (such as technology, design stresses, and producibility) prior to proceeding to the next phase of the development process.
Best Practice
• Formal procedures are established for Design Reviews
• Design Reviews are performed by independent and technically qualified personnel
• Entry and exit criteria are established
• Checklists and references are prepared
• Manufacturing, product assurance, logistics, engineering, cost, and other disciplines have equal authority to engineering in challenging design maturity
• Design Review requirements are flowed down to the subcontractors
• Subcontractors and customers participate in the design reviews
• Conduct design reviews as follows: PDR when 20% of the design is complete; IDR when 50% of the design is complete; CDR when 95% of the design is complete
Watch Out For
• Reviews that are primarily programmatic in nature instead of technical
• Review schedules that are based on planned milestone dates
• Reviews held without review of analyses, assumptions, and processes
• Reviews held without review of trade-off studies, underlying data, and risk assessments
• Reviews not formally documented and reported to management
• Reviews held by teams without adequate technical knowledge or representation of manufacturing, product assurance, supportability, etc.
26 Chapter 5 Design
Thermal Analysis
Thermal Analysis is one of the more critical analyses performed to eliminate thermal overstress conditions and to verify compliance with derating criteria. Thermal analyses are often supplemented with infrared scans, thermal paint, or other measurement techniques to verify areas identified as critical.
Best Practice
• Determination and allocation of thermal loads and cooling requirements to lower-level equipment and parts are made based on the DRMP and the system's self-generated heat
• Preliminary analyses are refined using actual power dissipation results as the thermal design matures
• The junction-to-case thermal resistance values of a device are used for the thermal analysis
• An infrared scan (thermal survey) is conducted to verify the analysis
Watch Out For
• The use of device junction-to-ambient values for the thermal analysis, since this method is highly dependent on assumptions about coolant flow conditions
• A thermal analysis that does not take into account all modes (convection, conduction, radiation) and paths of heat transfer
27 Chapter 5 Design
Design Release
Design release is the point in the developmental stage of a product when creative design ceases and the product is released to production. Scheduling a design release is closely related to the status of other design activities such as design reviews, design for production, and configuration management.
Best Practice
• The design release process requires concurrent review by all technical disciplines
• Measurable key characteristics and parameters are identified on drawings, work instructions, and process specifications
• Designs are released to production after:
  - Completion of all design reviews
  - Closeout of all corrective action items
  - Completion of all qualification testing
• A producible, supportable design is characterized by:
  - Stable design requirements
  - Completed assessment of design effects on current manufacturing processes, tooling, and facilities
  - Completed producibility analysis
  - Completed rapid prototyping
  - Completed analysis for compatibility with COTS/NDI interfaces, subcontractor design interfaces, and Form, Fit and Function at all interfaces
• Design release practices (or equivalent) of the prime contractor are flowed down to the subcontractors
Watch Out For
• Design release based on manufacturing schedule
• Manufacturing drawings containing redlines
• Procurement for long lead items initiated with immature designs
• Drawings that are approved for release by engineering without review by all technical disciplines
28 Chapter 5 Design
Computer Aided Design/Computer Aided Manufacturing
Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) introduces technical discipline throughout the design process to ensure success in complex development projects by integrating various design processes onto a common database. Included is the capability to perform special analyses (such as stress, vibration, thermal, noise, and weight) as well as to permit simulation modeling using finite element analysis and solids modeling. The outputs of this common
database control manufacturing processes tool design and design changes Best Practice Embed design rules in the CADCAM system Map CADCAM tools to the design andmanufacturing processes Use compatible tools in an integrated CADCAM approach Use open architecture approach for software programs and data files Use new machine tools capable of being networked or upgraded to a network As a basis for procurement of new or upgraded CADCAM systems sensitivity analyses are perform ed for various future scenarios e g mainframe based versus Unix workstationbased or NT based versus future cost to maintain and interconnect 64 bit versus 32 bit math links to ERP systems etc 80 of design activity is computer based 100 of CAD drawings are CAM compatible Use common data exchange standards for 75 of processes All new machines networkable for CAM Watch Out For CADCAM tools that operate in a standalone manner 0 Failure to include total factory requirements and planned use for the CADCAM database 0 Lack of a longterm growth plan to keep from being backed into a technological deadend 0 Proprietary Computer Numerically Controlled and Direct Numerically Controlled platforms and software architectures CADCAM systems which are nonstandard for your industry customers and suppliers Companies who will not be in business in several years 29 Chapter 5 Test Design Limit Quali cation Testing Design Limit Qualification Testing is designed to ensure that system or subsystem designs meet performance requirements when exposed to environmental conditions expected at the extremes of the operating envelope the worst case environments of the DRMP Best Practice Design limitmargin testing based on the DRMP is integrated into the overall test plan especially with engineering reliability growth and life testing Design limit quali cation tests are performed to ensure worst case specification requirements are met Highly Accelerated Life Tests HALT are performed to determine the design margins When operating at the expected worst case environments and usage conditions To identify areas for corrective action Increased stress to failure conditions are included toward the end of Test Analyze and Fix TAAF testing to identify design margins Engineering development tests are performed beyond the design limits to measure the variance of the functional performance parameters under environmental extremes The failure mechanism of each failure includin stresses at the worst case s ecification limits is understood Watch Out For Design limit qualification testing environmental limits that are based on MILSTDs and do not consider the DRMP lnservice use of design limit quali cation test units and other units that are stressed to a level resulting in inadequate remaining life Incompatibility of the COT SNDls quali cation tests to the requirement Accelerated testin conditions which introduce failure modes not ex ected in normal use Chapter 5 Test Test Analyze and Fix The Test Analyze and Fix TAAF process is an iterative closed loop reliability growth methodology TAAF is accomplished primarily during engineering and manufacturing development The process includes testing analyzing test failures to determine cause of failure redesigning to remove the cause implementing the new design and retesting to verify that the failure cause has been removed Best Practice 0 Use of Duane or AMSAA Growth Models for the TAAF process 0 Test facilities are capable of simulating all environmental extremes 0 TAAF process starts at the lowest level of development and continues 
incrementally to higher assembly levels through the system level 0 TAAF units are representative of production units 0 TAAF process is integrated into the systems engineering development and test program to optimize the use of all assets tests and analyses 0 TAAF environments are based on worst case DRMP extremes and normally include as a minimum vibration temperature shock power cycling input voltage variance and output load 0 TAAF is augmented by Failure Reporting and Corrective Action System to improve selected systems with a continuing history of poor reliabilityperform ance 0 HALT is performed at all hardware assembly levels as a development tool and used as an alternative to TAAF to quickly identify design weaknesses and areas for improvement 0 The mechanism of each failure including stresses above the specification limits is understood 0 TAAF test resources should include between 2 to 10 UnitsUnderTest UU based on cost and complexity tradeoff 0 Ratio of TAAF test time at vibration and temperature extremes to total test times 08 3 Ratio 3 10 0 Total calendar time allocated and actual to complete TAAF testing is approximately twice the number of test hours 0 Test Time for each TAAF UU is within 50 of the average time 33 Utilize TriService Technical Brief Test Analyze and Fix TAAF Implementation January 1989 Watch Out For 0 Development programs with TAAF or HALT planned at the system level only 0 TAAF planned or conducted in lieu of developm ental exploratory engineering tests 0 TAAF testing conducted with a limited sample size and a limited number of test hourscycles 0 Use of Bayesian approaches to shorten TAAF test time and to estimate reliability when the apriori data is questionable 0 A tendency to focus on statistical measures associated with TAAF and HALT rather than using test results to identify and correct design deficiencies 0 TAAF UU and test facilities that are not conditioned groomed burn in screened etc prior to test as planned for normal production 0 Infant mortality failures are included in growth measurements 0 The use of TAAF as a trial and error approach to correct a poor design 0 The use of HALT to predict reliability 31 Chapter 5 Production Manufacturing Plan The Manufacturing Plan describes all actions required to produce test and deliver acceptable systems on schedule and at minimum cost The materials fabrication ow time in process tools test equipment plant facilities and personnel skills are described and integrated into a logical sequence and schedule of events Best Practice 0 Identi cation during design of key product characteristics and associated manufacturing process parameters and controls to minimize process variations and failure mo es FMECA of the manufacturing process during design for defect prevention Speci ed manufacturing process variability eg Cpk is within the design tolerances Variations of test and measuring equipment are accounted for when determining process capability Rapid prototyping for reduced cycle time from design to production see Rapid Prototyping Design For Manufacturing and Assembly to develop simplified designs Design for agile manufacturing to quickly adapt to changes in production rate cost and schedule Contingency planning for disruption of incoming parts variations in manufacturing quantities and changes in manufacturing capabilities Controlled drawing release system instituted see Design Release Process proofingquali cation see Manufacturing Process Proo ngQualification Productprocess changes that require quali cation are 
defined
• Flowcharts of manufacturing processes at the end of EMD are validated at the start of LRIP
• Facilities, manpower, and machine loading for full rate production are validated during LRIP
• Production readiness reviews performed on critical processes
• Subcontractor process capabilities integrated into the prime contractor's process capabilities
• Specific product tests and inspections replaced with Statistical Process Controls (SPC) on a demonstrated capable and stable process
• Closed loop discrepancy reporting and corrective action system, including customer and subcontractor discrepancies
• Post-production support plan established and maintained for:
  - Repair capability
  - Obsolescence of tools, test equipment, and technology
  - Loss of contractor expertise and vendor base, and
  - Time/cost to reestablish the production line
Metrics Include
• Measurable key characteristics and parameters are identified on drawings, work instructions, and process specifications
• SPCs (e.g., Cpk > 1.33) are established for key characteristics
• Critical processes under control prior to production implementation
Watch Out For
• Total cost of the hidden factory for nonconforming materials
• A deficient Materials Requirements Planning system
• Inadequate response planning for subcontractor design and production process changes
• Establishment of SPC for key processes without use of statistical techniques (e.g., Design of Experiments, Taguchi, QFD) or adequate run time to determine variability of the process when stable
• Operator self-checks without a process to verify integrity of the system
• Planning which permits production workarounds and fails to emphasize scheduled production outputs
Chapter 5 Production
Rapid Prototyping
Rapid Prototyping utilizes physical prototypes created from computer generated three-dimensional models to help verify design robustness as well as reduce engineering costs during production activities associated with faulty or difficult-to-manufacture designs. The use of these prototypes includes functional testing, producibility, dimensional inspection, assembly, training, as well as tool pattern development.
Best Practice
• Rapid prototyping technology used in developing a product from concept to manufacturing
• Used to reduce design cycle time, iterate design changes, check fit and interfaces, calculate mass properties, and identify design deficiencies
• Used in manufacturing producibility studies, proof of tooling and fixtures, training, and as a visualization aid in the design of the evolving product
• Virtual reality prototypes are analyzed using CAD tools, and physical parts are fabricated from the CAD three-dimensional drawings and data prior to production
Watch Out For
• Rapid prototyping without three-dimensional CAD data for precise geometric representation
• Two-dimensional CAD surface models used in lieu of the more complete three-dimensional solid model
• Rapid prototyping without a support structure to sustain the part in place while it is being generated
33 Chapter 5 Production
Manufacturing Process Proofing/Qualification
Manufacturing Process Proofing/Qualification ensures the adequacy of production planning, tool design, assembly methods, finishing processes, and personnel training before the start of rate production. This is done in a time frame that allows for design and configuration changes to be introduced into the product baseline.
Best Practice
• Proofing simulates actual production environments and conditions
• Proof of Manufacturing models used to verify that processes and procedures are compatible with the design configuration
• First article tests and
inspections included as part of process proofing Conforming hardware consistently produced within the cost and time constraints for the production phase Key processes are proofed to assure key characteristics are within design tolerances Process proofing must occur with A new supplier The relocation of a production line Restart of a line after a signi cant interruption of production New or modi ed test stations tools xtures and products Baseline and subsequent changes to the manufacturing processes Special processes nontestablenon inspectable Conversion of manual to automated line Watch Out For 0 Process proofing that does not include integration into higher assemblies to assure proper fit and function at the end item level Changes in subcontractor processes that occur without notifying the prime The use of SPC to qualify or validate the manufacturing process in lieu of first article tests and inspections The use of acceptance tests in lieu of process proofing or performance of first article tests and inspections Performance of first article tests and inspections only when contractually required Attempts to cite the warranty provisions rather than actually proofing the processes Overly ambitious schedule for quali cation of new products sources Chapter 5 Production Conformal Coating for Printed Wiring Boards A conformal coating is a thin lm applied to the surface of a Printed Wiring Board or other assembly which offers a degree of protection from hostile environments such as moisture dust corrosives solvents and physical stress Best Practice 0 Use trade studies to weigh the effects of conformal coating on longterm reliability safety and rework costs against potential savings in production and repair costs 0 Conformal coating is used in environments where contaminants cannot be adequately controlled including manufacturing or testing facilities 0 Match the type of conformal coating to the configuration maintenance concept and the use environment of what you want to coat 0 Inspection techniques in place to verify uniformity and completeness of conformal coating coverage Eb See Table 52 for selected coating properties Watch Out For Conformal coating used to meet hermetic requirements since conformal coating is not hermetic or waterproof 0 Manufacturing andor testing processes lacking a Failure Reporting and Corrective Action System and quality system to ensure that precautions against contaminants are effective especially on assemblies without conformal coating The application of conformal coating to a noncoated assembly without first assessing the effects on circuit operating frequencies mechanical stresses thermal hot spots etc that may increase failure rates The use of assemblies without conformal coating that contain critical analog circuits andor highpower circuits possibly creating safety hazards The use of conformal coating that is not compatible with the repair philosophy The toxicity and environmental friendliness of conformal coating including its byproducts Inadequate surface preparation and condition prior to application of conformal coating Improper masking prior to conformal coating Acrylic Resin Epoxy Resin Silicone Resin Paraxylyene 35 Chapter 5 Production Subcontractor Control Reliance on subcontracting has made effective management of subcontractors critical to program success Subcontractor Control includes the use of Integrated Product Teams formal and informal design reviews vendor conferences and subcontractor rating system databases Best Practice Subcontractorsupplier 
rating system with incentives for improved quality reduced cost and tim ely delivery Flowdown of performance specification or detail Technical Data Package depending on the acquisition strategy Subcontractors integrated into Integrated Product Teams to participate in the development of DRMP requirements Waiver of source and receiving inspections for subcontractors meeting certification requirements depending on the product s criticality Subcontractor controls critical subtier suppliers Subcontractor notifies prime of design and process changes affecting key characteristics Metrics include subcontractor demonstrated process controls eg Cpk gt 133 for key characteristics Watch Out For 0 Procurement of critical material from an unapproved source 0 Supplier performance rating does not consider the increased cost for defects discovered later in the prime s manufacturing process or after acceptance by the custom er 0 Subcontractor performance rating based primarily on cost schedule and receiving inspection vice performance requirements 0 Subcontractor process capability not verified Subcontractor decertification process is delinquent Chapter 5 Production Tool Planning Tool Planning encompasses those activities associated with establishing a detailed comprehensive plan for the design development implementation and proof of program tooling Tool planning is an integral part of the development process Best Practice 0 Tools designed with CAD concurrent with product design 0 Tool tolerances are at least 10 more restrictive than the hardware tolerances 0 systems J quot39 and quot quot39 studies performed to establish the variability allowed to meet the key characteristic tolerances Tools are proofed calibrated certified and controlled Hard tooling validated prior to the start of production Tools are maintained with the aid of production statistical control charts Production tools are procured if the hardware is to be second sourced Minimize special tools and fixtures Metrics include Process capability Cpk gt 133 for normal processes Process capability Cpk gt 167 for mission critical processes or for safety Watch Out For 0 So tooling used in production 0 Calibration of tooling not traceable to a National standard andor reference 0 Master tooling not controlled 37 Chapter 5 Production Special Test Equipment Special Test Equipment STE is a key element of the manufacturing process used to test a final product for performance after it has completed inprocess tests and inspections nal assembly and final visual inspection Best Practice STE is minimized ATE is developed for complex U39UT and considers test time limitations and accuracy STE accuracycalibration must be traceable to known National measurement standard andor references STE and applicable so ware are qualified certified and controlled STE maintainability and maintenance concept defined concurrent with product design Life cycle functional and environmental pro les considered in STE design Design best practices are considered for critical STE Production demands are factored into STE design for reliability STE reliability target 3 reliability of the system under test 41 minimum accuracy ratio between measurement levels eg STE and M standards and STE Watch Out For No fault repeatable loops STE software not validated STE production leads that impact increased rate production Root cause of STE discrepancies not understood STE false alarm rates STE not certified for acceptance testing Inadequate time between product CDR and STE delivery to support program 
schedule Chapter 5 Production Manufacturing Screening Manufacturing Screening is a process for detecting in the factory latent intermittent or incipient defects or aws introduced by the manufacturing process It normally involves the application of one or more accelerate environmental stresses designed to stimulate the product but within product design stress limits Best Practice Highly Accelerated Stress Screening HASS is performed as an environmental stress screen to precipitate and detect manufacturing defects HASS stress levels and profiles are determined from step stress HALT HASS precipitation screens are normally more severe than detection screens Product is operated and monitored during HASS The HASS screen effectiveness is proofed prior to production implementation HASS is perform ed with combined environment test equipment HASS stresses may be above design speci cation limits but within the destruct limits for example High rate thermal cycling High level multiaxis vibration Temperature dwells Input power cycling at high voltage Other margin stresses are considered when applicable to the product Alternative traditional environmental stress screening ESS guidelines for manufacturing defects may be in accordance with TriService Technical Brief 0029308 Environmental Stress Screening Guidelines July 1993 Parts Screening 100 screening required when defects exceed 100 PPM 100 screening required when yields show lack of process control Sample screening used when yields indicate a mature manufacturing process Watch Out For Inadequate fatigue life remaining in the product after HASS 0 HASS stresses that only simulate the eld environment 0 Environmental conditions that exceed the material properties of the product 0 HASS that does not excite the low vibration frequencies 39 Chapter 5 Production Failure Reporting Analysis and Corrective Action Failure Reporting Analysis and Corrective Action is a closed loop process in which all failures of both hardware and software are formally reported Analyses are performed to determine the root cause of the failure and corrective actions are implemented and verified to prevent recurrence Best Practice 0 Failure Reporting Analysis and Corrective Action System FRACAS implementation is consistent among the Government prime contractor and subcontractors 0 FRACAS is implemented from the part level through the system level throughout the system s life cycle 0 Criticality of failures is prioritized in accordance with their individual impact on operational performance 0 All failures are analyzed to sufficient depth to identify the underlying failure causes and necessary corrective actions Subcontractor failures and corrective actions are reported to the prime Prime contractor is involved in subcontractor closeout of critical failures Failure Review Board is composed of technical experts from each functional area 0 O 0 Failure database accessible by custom er prime contractor and subcontractors 0 Test requirements established for RetestOKCanNotDuplicate RTOKCND failures Metrics Include 0 100 of failures undergo engineering analysis 0 100 of critical failures undergo laboratory analysis 0 Failure analysis and proposed corrective action are completed 15 days for inhouse analysis f 30 days for outsourced analysis 0 Feedback from the field to the factory should be in lt 30 days Watch Out For Deferring FRACAS to the production phase No time limit for failure analysis and closeout Verification of corrective action not part of failure closeout Failures classified as 
random are not analyzed Failure analysis required only when repetitive failures occur Pattern of RTOKCND failures Exclusion of test equipment GFE and COT SNDI failures from FRACAS Engineering and lab analysis not considering History of previous failures Related circuit part failures Temperature and other environmental conditions at failure Workm anship precipitated failures correctable by design changes 0 RF and other high energy part failures often results from test setup difficulties 0 Backlog of failures to be analyzed in the laboratory 0 Failure Review Board F RB and Quality Review Board QRB not integrated to review effectiveness of both functional and nonfunctional failures Failure closeouts dependent on FRBQRB decisions 40 Chapter 6 Wm 6 Understand COTSNDI Applications What is the Relationship Between COT SNDI Applications and Technical Risk The use of Commercial OffTheShel Non Developmental Items COTSNDIs certainly has advantages among them 0 Immediate availability of items and 0 Access to stateoftheart technology available in the commercial sector without incurring developmental costs However there are very clear risks associated with the use of COTSNDIs With continuously changing technology traditional logistics support is o en ineffective due to performance con guration and interface changes coupled with a support system that takes too much time o en a period of time longer than the useful life of the item Finally since the use of COTSNDIs is relatively new to the DoD there is a paucity of data regarding the reliability quality and performance of COTSNDIs in a DOD environment These risks can only be minimized through the knowledgeable and effective selection integration and qualification of COTSNDIs Navy Experiences with COT SNDI Applications The information on the following pages represents lessons learned from Navy programs 41 Chapter 6 Design amp Market Investigation Best Practice 0 Use Form Fit and Function requirements to query the market 0 Begin market analysis early in program planning Watch Out For 0 Investigations slanted to make COTS the only acceptable choice 0 COTS selections made Without considering supportability and survivability 0 Market investi ations used for source selectionre39ection Selection Best Practice 0 Develop a procurement strategy for determining COTS viability for specific systems 0 Be certain the strate considers mission and environmental re uirements Watch Out For 0 Determination of COTS suitability made in the absence of a standard selection process I Testing Best Practice 0 Inspect and test COTSNDls at incoming inspection 0 Perform thorough testing through production 0 Do not ship spares directly from the original vendor to the production integration facility Rather spares should be functionally tested preferably at the system level 7 and as a minimum at the subsystem level 7 using operational software This will ensure design changes made by the vendor Will not adversely affect the system during deployment Watch Out For 0 Standard test schedules budgets and documentation that fail to account for the additional testing needed for COT SNDls Integration Best Practice 0 Require extensive compatibility testing of the product at both subassembly and system levels What appears to be com atible at the subassembl level ma not be so at the s stem level Watch Out For 0 The inherent difficulties in attempting a seamless integration of military items and COTSNDI products I 42 Chapter 6 System Architecture Best Practice 0 Design systems to withstand the 
insertion of new technology
• Use an open system architecture with strict adherence to COTS interface standards for hardware and software
Watch Out For
• Hardware/software systems designed with inadequate margins and too many bells and whistles, making them prone to failure when new technology is introduced
Supportability
Best Practice
• Buy more spares than you think you need, because with COTS/NDIs you will need them
• Communicate problems back to the vendors. Many will take corrective action; their competitive position depends on it
• Buy all spares during production and functionally test them at the system or subsystem level using operational software
• Consider requiring vendor-supplied drawings in enough detail to allow for an alternate source, to protect against Diminishing Manufacturing Sources
• Define a COTS sparing policy for times when licenses and warranties expire before product spares are used
Watch Out For
• Replacement items that do not meet configuration requirements
• Supportability issues that plague COTS/NDI products due to frequent technology refresh cycles
• The average life span of COTS items, which is between 6 and 24 months
• COTS/NDIs which are not able to be repaired at lower levels of assembly (e.g., circuit card level); these assemblies are often obsolete in months
• Limited sources for NDI spares
• Reliance on 800-number telephone technical support lines and limited training/maintenance documentation, especially in the field
43 Chapter 6
COTS/NDI Product Maturity & Technology Refresh
Figure 6-1 provides a planning process flow chart that can be used to help determine when COTS technology should be refreshed or updated to maintain a supportable system.
[Figure 6-1 is a flow chart with four stages: Source Data (collect source data on COTS items used in weapon systems, program plans, and a COTS database), Data Assessment (input option costs into a cost model and assess cost reasonableness), Option Definition (are the proposed tech refresh costs acceptable; is there any functional enhancement tied to the tech refresh), and Option Consequences (refresh to implement functional enhancements vs. refresh solely to avoid COTS obsolescence), connected by decision-making and feedback process flows.]
Figure 6-1 COTS Technology Refresh Planning Process
This process is intended to optimize the determination of when to refresh COTS items in order to keep program costs down and supportability high. Following this process allows the PM:
• To predict when COTS items (components) may become unsupportable and require replacement
• To consider whether or not there are functional enhancements that could be realized by conducting a functional upgrade rather than simply refreshing existing COTS technology
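A simplified rendering of the Figure 6-1 decision logic is sketched below. It is only an illustration of the flow described above: the field names and the cost-acceptability test are assumptions, and a real assessment would be driven by the program's cost model and COTS database.

# Minimal sketch (a simplification of the Figure 6-1 flow, with assumed
# field names and thresholds): deciding what to do with one COTS item.

def refresh_decision(item):
    """Return a short recommendation string for one COTS item."""
    if not item["unsupportable_within_planning_horizon"]:
        return "no action: item remains supportable; revisit at the next assessment"
    if not item["refresh_cost_acceptable"]:
        return "re-plan: proposed tech refresh cost exceeds the cost model threshold"
    if item["functional_enhancement_available"]:
        return "refresh and implement the functional enhancement in the same update"
    return "refresh solely to avoid COTS obsolescence"

if __name__ == "__main__":
    vme_board = {
        "unsupportable_within_planning_horizon": True,
        "refresh_cost_acceptable": True,
        "functional_enhancement_available": False,
    }
    print(refresh_decision(vme_board))

Running this kind of check across the COTS database at each assessment cycle gives the PM an early flag for items approaching unsupportability, which is the intent of the planning process above.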
44 Chapter 6
Figure 6-2 provides metrics for determining the maturity of a COTS product. The four metrics (state of the art, state of the practice, obsolete, and must refresh) indicate moderate risk, low risk, moderate risk, and high risk scenarios, respectively. The corresponding numbers in each metric correspond to blocks 1-10 below. Plotting a particular COTS product through the technical life cycle intervals will assist in determining if the COTS product should be refreshed or its design reevaluated.
[Figure 6-2 plots hardware/software examples (e.g., FDDI DAS, NTDS A/E, NTDS E/H, a VME audio board, and digital I/O cards, identified by vendor part number) across technical life cycle intervals ranging from minor fixes (e.g., adaptations), workarounds, and minor mods through the first, second, and third major revisions and block updates.]
Figure 6-2 COTS Product Maturity Metric
Managing Risk Associated with Plastic Encapsulated Devices
The affordability, availability, and operational reliability of Plastic Encapsulated Devices (PEDs) influence their use in military designs. Today, PEDs may be as or more reliable than comparable Hermetically Sealed Microcircuit (HSM) ceramics when used under normal operating conditions. While PEDs are not necessarily COTS or NDIs, they are considered risk drivers and are discussed in this section for the same reasons as COTS and NDIs: namely, their application, performance, and supportability in DoD applications are not fully known, and there is not enough data available today to prove PEDs will survive long-term dormant storage.
Know Your Vendor
• Choose parts from volume lines offered by reputable PEDs vendors - vendors recognized for using best commercial practices.
45 Chapter 6
Some best practices associated with reputable PEDs vendors are:
  - A demanding and large customer base
  - Use of statistical process control techniques
  - Conduct of qualification testing on their parts
  - Available reliability, qualification, and process yield data
  - Process/material change notifications supplied when process steps or materials are altered. A seemingly simple process change can cause significant quality variations from lot to lot. Typically only high-volume customers receive this service, which usually excludes military customers.
• Part quality can vary significantly among vendors. Select PEDs from reputable vendors who meet the requirements of MIL-PRF performance specifications for QML production. Always request and review vendor reliability and qualification test data.
• If vendor manufacturing data is not available, or the vendor is not QML certified, use a discriminator test such as the Highly Accelerated Stress Test (HAST) to assist in the comparison of PEDs quality/reliability and to select quality suppliers. Conduct preconditioning in accordance with EIA/JEDEC Standard No. A113-A (JESD22-A113-A), Test Method A113-A, Preconditioning of Plastic Surface Mount Devices Prior to Reliability Testing, June 1995; conduct HAST in accordance with EIA/JEDEC Standard No. 22-A110-A (JESD22-A110-A), Test Method A110-A, Highly Accelerated Temperature and Humidity Stress Test, April 1997. HAST quickly eliminates weak parts and accepts superior parts. It is the preferred test for simulating harsh military environments.
• In addition to HAST, other commonly used commercial test methods are:
  - High Temperature Operating Life (HTOL)
  - High Temperature Storage (HTS), or Bake
  - Solder Preconditioning
  - Temperature Cycling (TC)
  - Autoclave, or Pressure Cooker
  - Temperature Humidity Bias (THB)
Know Your Application
• Select PEDs that can survive system life-cycle profiles and environments. Ensure all environmental requirements are met. Provide environmental controls for long-term unpowered conditions.
• Prior to vendor selection, inform the vendor of the device's intended mission life cycle applications, including storage and unpowered applications.
• If a vendor's existing qualification data does not adequately prove the reliability of the part in its intended environment (including storage), request additional
46 Chapter 6
qualification/screening tests by the part vendor. It is more cost effective for the part vendor to conduct additional tests than to go to a third party. Use the vendor (Original Equipment Manufacturer) to perform testing, to benefit from the use of proprietary test methods, test equipment, and knowledge of part design and construction
Emphasize the selection and use of quality molding compounds characterized by low stress low cost low contaminants and low moisture absorption The molding compound is a primary source of many problems especially as it interfaces with the lead frame Vendors should use low stress ionic contaminant epoxy compounds with strong adhesion to the lead frame Beware of a Coefficient of Thermal Expansion mismatch between lead frame and epoxy encapsulant and the epoxy and the die Vendors should report the glass transition temperature Tg and chemical properties of the epoxy plastic Choose a vendor who uses a high Tg epoxy Choose a circuit card assembly process that is benign to the PED package The surface mount technology soldering process can adversely affect the life of PEDs Minimize the amount of time the plastic body of the part is exposed to solder temperatures If temperatures exceed the Tg of the plastic the effects of the Coefficient of Thermal Expansion mismatch are magnified significantly Wave soldering or hand soldering throughhole parts is best vapor phase IR convection re ow soldering surfacemount are worst If surfacemount must be used use conduction belt hot bar or hand soldering if possible Avoid mixed throughhole and surface mount boards where surface mount PEDs are sent through the uxer and solder wave This is a predominant source of ionic contamination and thermal stress ANSIJ STD OZO Classification of Moisture Sensitive Components provides recommended solder assembly re ow pro les to prevent popcorning and other related assembly damage External ionic contaminants picked up during the solder assembly process are the source of many problems Be sure packaging requirements include dry bagging vacuum bagging and desiccants As a minimum keep PEDs sealed in a moisturebarrier bag with desiccant until attached to a circuit board Consider keeping PEDs in a bag with desiccant at all levels of assembly until the completed assembly is loaded into the storage container and sealed with desiccant Determine the length of time parts assemblies can be exposed to the ambient environment during manufacturing see AN SIJSTD OZO Some surfacemount plastic parts are llly saturated in as little as eight days as is re ected by their moisture sensitivity classification see ANSIJ STD OZO 47 Chapter 6 0 Use ceramic packages rather than PEDs whenever cost is equivalent and an HSM is available But be aware that the HSM may not be available for future buys so if used it should be interchangeable with a PED Ensure PEDs are designed into the system from the beginning 0 It is very risky to arbitrarily replace a ceramic package with a PED 0 The environmental parameters of the PED must be taken into account and necessary features must be added to the design to compensate for the more limited operating ranges of the PED 0 Select and use open architecture and robust systems And Don t Forget PEDs should never be used outside their designed operating parameters Upscreening a PED to a higher temperature is not recommended and usually voids manufacturer warranties Manufacturing processes are planned to result in acceptable yields while maintaining warranty requirements Higher operating temperatures can cause unacceptable variations in part operating characteristics andor void warranty requirements Bond pads are usually the first area to fail during accelerated life testing There are m validated simulation models for making lifetime PED predictions Electrical parameters of parts are characterized to requirements derived from the 
Design Reference Mission Profile to ensure that all selected parts are reliable for the proposed application see Figure 63 48 Chapter 6 Chip Design Capability1 Packaged Device Spec Sheet2 Device Application Requirements3 Device Derated Requirements4 Device Potential Design Margin5 Notes 1 Chips are typically capable of and quali ed for operation for these temperature ranges 2 Chips packagedinto devices are quali ed according to their speci cation sheet for various temperature ranges including commercial applications from 0 to 70 C 3 Example of temperature ranges for the packaged device for an application in a speci c system application 4 Extended temperature ranges at which the device must be characterizedquali ed for the speci c system application for reliable derated capabilities 5 Potential device design margin beyond speci c system application subject to manufacturing yield limitations Chipdevice performance parameter typically widen with increasing temperatures Proper circuit design requires proper derating including tradeoffs such as between temperature and operating frequency for reliable operations Knowledge of the range in parameter values corresponding to the expected range in operating temperatures is essential Otherwise the risk of circuit instability and failure becomes unacceptable Additionally variations in the chipdevice manufacturing processes require establishment of rated values with a safety margin below the maximum achievable Part manufacturers typically do not warranty their parts if used in applications more severe than the quali cation levels at which the parts were characterized Therefore uprating or upscreening of a part is not recommended The Quali ed Manufacturer List QML program is designed to provide parts meeting the severe military environments Source NAWC China Lake Charles Barakat Brief Parts Management and Encapsulated Devices November 1996 Figure 6 3 Parts Rating and Characterization Process 49 Chapter 7 Wm 7 Establish Key Software Measures What is the Relationship Between Software Measures and Technical Risk It is o en believed that a risk free so ware program can be achieved through extensive test efforts Whereas this approach may ultimately result in successful so ware development it is generally a reactive risk approach On the other hand use of a so ware measurement process is a proactive approach for identifying risks before they become problems This chapter contains a suggested approach for using key so ware measurement indicators as the foundation for identifying assessing mitigating and tracking so ware risks Measurement Selection for Tracking Risks Experience shows that most project specific so ware risks can be grouped into categories that are basic or common to almost all projects These common categories represent key concerns that must be managed on a daytoday basis by the project manager The six common so ware risk categories are listed in Table 71 along with examples for mapping these common risk categories to specific measure parameters The measures are not intended to represent an exhaustive or required set of project management measures However they are measures that have repeatedly proven to be effective over a wide range of projects In most cases it is not practical to collect all of the possible measures for each risk category Identification of the best set of measures for a project depends on a systematic evaluation of the potential measures with respect to the risks and relevant project characteristics The measurement set cannot be 
predefined Select the measure that best provides the desired insight based on both the required information and the project characteristics See the end of this Chapter for a reference containing additional information 51 Chapter 7 Table 7 1 Risk Categories and I VIeasures ISSUE CATEGORY MEASURE CATEGORY MEASURE Schedule and Progress Issues in this category relate to the completion of major milestones and individual work units A project that falls behind schedule can usually only meet its original schedule by eliminating functionality or sacri cing quality lIilestone Performance lIilestone Dates Work Unit Progress Component Status Requirement Status Test Case Status Paths Tested Problem Report Status Reviews Completed Change Request Status Incremental Capability Build content Component Build content 7 Function Resources and Cost Issues in this category relate to the balance between the work to be performed and personnel resources assigned to the project A project that exceeds the budgeted effort usually can recover only by reducing functionality or sacri cing quality Personnel Effort Staff Months Staff Experience Staff Turnover Financial Performance Earned Value Cost Environm ent Availability Tools amp Facilities Resource Availability Dates Resource Utilization Growth and Stability Issues in this category relate to the stability of the functionality or capability required of the so ware It also relates to the volume of software delivered to provide the required capability Stability includes changes in scope or quantity An increase in so ware size usually requires increasing the applied resources or extending the project schedule Product Size and Stability Lines of Code Components Words of Memory Database Size Functional Size and Stability Requirements Function Points Change Request Workload Chapter 7 ISSUE CATEGORY MEASURE CATEGORY MEASURE Product Quality Defects Problem Reports Issues in this category relate to the Defect Density ability of the delivered so ware product to support the user39s needs without failure Once a poor quality product is delivered the Failure Interval burden of making it work usually Complex1ty Cyclom at1c Complex1ty Logic Paths falls on the sustaining engineering organization Developm ent Perform ance Rework Rework Size Issues in this category relate to the Rework Effort capability of the developer relative to project needs A developer with Process Maturity Capability Maturity Model Level a poor 50ftware r Productivity Product SizeEffort Ratio process or low productivity may have difficulty meeting aggressive schedule and cost objectives More capable software developers are better able to deal with project Functional SizeEffort Ratio changes Technical Adequacy Target Computer Resource CPU Utilization Issues in this category relate to the Utilization CPU Throughput viability of the proposed technical lO Utilization approach It includes features such lO Throughput as software reuse use of COTS Memory Utilization software and components and reliance on advanced software development processes Cost increases and schedule delays may result if key elements of the proposed technical approach are not achieved Storage Utilization Response Time Technical Performance Achieved Accuracy in Requirements Concurrent Tasking Data Handling Signal Processing etc Technology Impacts Quantitative Impact of New Technology NDI Utilization Size by Origin Cycle Time etc Software Measures The following tables provide measurement descriptions for the measures listed in Table 71 and include 0 A 
de nition of the measure 0 Objectives to be achieved 0 Specifications for the measure 53 Chapter 7 Schedule and Progress Measure Milestone Dates The Milestone Dates measure consists of the start and end dates for software activities and events The measure provides an easy to understand view of the status of scheduled software activities and events Comparison of plan and actual milestone dates provides useful insight into both significant and repetitive schedule slips at the activity level Selection Guidance Speci cation Guidance Project Application 0 Basic measure applicable to all domains 0 Included in most DOD measurement policies and commercial measurement practices Generally applicable to all sizes and types of projects Useful during project planning development and sustaining engineering phases Some sustaining engineering projects may be considered level of effort tasks and may not have associated milestones or they may have only limited milestones such as date change assigned date change closed Process Integration 0 Required data is generally easily obtained from project scheduling systems andor documentation Data should be focused on software activities and events particularly key items affecting the critical path or risk items More detailed milestones provide a better indication of progress and allow earlier identification of problems If dependency data is collected slips in related activities can be more easily and accurately projected and assessed Usually Applied During 0 Requirements Analysis Estimates and Actuals 0 Design Estimates and Actuals 0 Implementation Estimates and Actuals 0 Integration and Test Estimates and Actuals Typical Data Items 0 Start Date 0 End Date 0 Dependent Activity Typical Attributes 0 Version 0 Organization Typical Aggregation Structure 0 Software activity 0 Component Typically Collected for Each 0 Key so ware activity 0 Configuration Item CI or equivalent Count Actuals Based On 0 Customer signoff 0 Action items closed 0 Documents baselined 0 lIilestone review held 0 Successful completion of tasks This Measure Answers Questions Such As Is the current schedule realistic 0 How many activities are concurrently scheduled 0 How often has the schedule changed 0 What is the projected completion date for the project Measure Component Status Chapter 7 Schedule and Progress The Component Status measure counts the number of so ware components that have completed a specific development activity Early in the development activity planning changes should be expected as the development activity is completed Later in the process an increase in the planned number of components can be an indication of unplanned or excessive growth A comparison of planned and actual components is very effective for assessing development progress Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Used on medium to large projects Useful during development and sustaining engineering phases Tracking progress through early development activities such as design or coding is not generally done on projects without a design activity such as sustaining engineering projects that are focused on problem resolution or CommercialOffTheShelf COTS integration projects Tracking progress during the integration and test activities may be done for projects with major reuse or COTS integration Process Integration 0 Easier to collect if formal reviews inspections or walkthroughs are included in the development process 0 Data sometimes available from configuration management 
systems or development tools 0 Data is generally available if there is a mature and disciplined development process 0 Component status during test activities requires a disciplined testing process with separate tests per components allocated to de ned test sequences 0 Component status during test activities can be applied for each unique test sequence ie CI test integration test including dryruns 0 Component status during test activities is generally one of the more difficult work unit progress measures to collect since most integration and test activities are based on requirements or functions Usually Applied During 0 Requirements Analysis Estimates 0 Design Implementation Estimates and Actuals 0 Integation and Test Estimates and Actuals Typical Data Items 0 Number of Units 0 Number of Units Complete Typical Attributes 0 Version 0 Software Activity Typical Aggregation Structure 0 Component Typically Collected for Each 0 CI or equivalent Software Activity may be defined as 0 Preliminary Design 0 Detailed Design 0 Code 0 Unit Test 0 CI Test Count Actuals Based On 0 Completion of component inspections or walkthroughs 0 Successful completion of speci ed test 0 Release to configuration management 0 Resolution of action items This Measure Answers Questions Such As Are components completing development activities as scheduled Is the planned rate of completion realistic What components are behind schedule 55 Chapter 7 Schedule and Progress Measure Requirement Status The Requirement Status measure counts the number of de ned requirements that have been allocated to software components and test cases and the number that have been successfully tested The measure is an indication of software design and test progress The measure addresses the degree to which required functionality has been successfully demonstrated against the specified requirements as well as the amount of testing that has been performed This measure provides an excellent measure of test progress This measure is also known as quotBreadth of Testing Selection Guidance Speci cation Guidance Project Application Typical Data Items 0 Applicable to all domains 0 Number of Requirements 0 Useful during development and sustaining engineering phases Not generally used on projects without a requirements or design activity such as sustaining engineering projects that are focused on problem resolution Not generally used on projects in which requirements cannot be traced to test cases Process Integration Requires disciplined requirements traceability and testing processes to implement successfully Allocated requirem ents should be testable and mapped to test sequences If an automated design tool is used the data is more readily available Can be applied for each unique test sequence ie Cl test integration test system test and regression test including dryruns One of the more dif cult work unit progress measures to collect since requirements often do not directly map to components test cases and test procedures It is also sometimes difficult to objectively determine if a requirement has been successfully tested Early in a project the requirements baseline is limited to highlevel speci cations Later on the requirements baseline expands and measurement data is traceable to components and test cases 0 Some requirements may not be testable until late in the testing process Others are not directly testable Some may be verified by inspection Both stated and derived requirements may be counted Usually Applied During 0 Requirem ents Analysis Estim ates 0 
Design Integration and Test Estimates and Actuals 0 lmplem entation Estim ates 0 Number of Requirements Traced to Detailed Speci cations 0 Number of Requirements Traced to Software Components 0 Number of Requirements Traced to Test Speci cations 0 Number of Requirements Tested Successfully Typical Attributes 0 Version 0 Speci cation 0 Test Sequence Typical Aggregation Structure 0 Function Typically Collected for Each 0 Requirement Specification Count Actuals Based On 0 Completion of speci cation review 0 Baselining of specifications 0 Baselining of Requirements Traceability Matrix 0 Successful completion of all tests in the appropriate test sequence This Measure Answers Questions Such As 0 Have all of the requirements been allocated to software components 0 Are the requirements being implemented and tested as scheduled 56 Measure Test Case Status Chapter 7 Schedule and Progress The Test Case Status measure counts the number of test cases that have been attempted and those that have been completed successfully This measure can be used in conjunction with the Requirement Status measure to evaluate test progress This measure allows assessment of software quality based on the proportion of attempted test cases that are successfully executed This measure is one of the best measures of test progress Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Generally applicable to all sizes and types of projects Useful during development and sustaining engineering phases Process Integration 0 Need disciplined test planning and tracking processes to implement successfully Can be applied for each unique test sequence ie CI test integration test system test and regression test including quotdryruns There should be a mapping between defined test cases and requirements This allows an analysis of what functions are passing test and what ones are not Easy to collect Most projects define and allocate a quantifiable number of test cases to each software test sequence Usually Applied During 0 Implementation Estimates and Actuals 0 Integration and Test Estimates and Actuals Typical Data Items 0 Number of Test Cases 0 Number of Test Cases Attempted 0 Number of Test Cases Passed Typical Attributes 0 Version 0 Test Sequence Typical Aggregation Structure 0 Software Activity Test Typically Collected for Each 0 Activity Test Alternatives to Test Cases Include 0 Test Procedures 0 Test Steps 0 UseCase scenarios 0 Functional threads Count Actuals Based On 0 Successful completion of each test case in the appropriate test sequence This Measure Answers Questions Such As 0 Is test progress suf cient to meet the schedule 0 Is the planned rate of testing realistic 0 What functions are behind schedule 57 Chapter 7 Schedule and Progress Measure Paths Tested The Paths Tested measure counts the number of logical paths successfully tested The measure reports the degree to which the software has been successfully demonstrated and indicates the amount of testing that has been performed This measure is also called quotDepth of Testing Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Applicable to most types of projects Especially important for those with high reliability requirements security implications or catastrophic failure potential 0 Not generally used for COTS or reused code 0 Useful during development and sustaining engineering phases Process Integration Usually applied on a cumulative basis across all test sequences ie CI test integration test 
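Most of the measures described above reduce to simple ratios and date comparisons once the status data has been collected. The sketch below is a minimal, hypothetical illustration of computing a few of them (requirements verified, test case progress, open problem reports, and a milestone slip); the data structure and field names are not from this guide.

# Minimal sketch (illustrative only): computing a few of the software
# measures described above from hypothetical project status data.

from datetime import date

def milestone_slip_days(planned, actual):
    """Schedule slip in days; positive means the activity finished late."""
    return (actual - planned).days

def percent(part, whole):
    """Percentage helper that avoids division by zero."""
    return 100.0 * part / whole if whole else 0.0

if __name__ == "__main__":
    status = {
        "requirements_total": 420, "requirements_tested_ok": 355,
        "test_cases_total": 600, "test_cases_attempted": 540, "test_cases_passed": 498,
        "problem_reports_open": 37, "problem_reports_closed": 244,
    }
    print(f"requirements verified: {percent(status['requirements_tested_ok'], status['requirements_total']):.1f}%")
    print(f"test cases attempted:  {percent(status['test_cases_attempted'], status['test_cases_total']):.1f}%")
    print(f"test cases passed:     {percent(status['test_cases_passed'], status['test_cases_attempted']):.1f}% of attempted")
    print(f"open problem reports:  {status['problem_reports_open']}")
    print(f"CDR slip: {milestone_slip_days(date(2015, 3, 2), date(2015, 3, 20))} days")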
system test and regression test so that each path is tested by the time all testing is complete Often used in conjunction with Cyclomatic Complexity Difficult to collect requires the use of test tools that can verify test paths covered These test tools often require instrumentation of the code Difficult to use on large projects due to the large number of pa s Usually Applied During 0 Implementation Estimates and Actuals 0 Integation and Test Actuals Typical Data Items 0 Number of Paths 0 Number of Paths Tested 0 Number of Paths Tested Successfully Typical Attributes 0 Version 0 Test Sequence Typical Aggregation Structure 0 Component Typically Collected for Each 0 Unit or equivalent Alternative to Paths Include 0 Executable Statements 0 Decisions Count Actuals Based On 0 Successful completion of each test in the appropriate test sequence This Measure Answers Questions Such As 0 Have all of the paths been successfully tested 0 What percentages of the paths are represented in the testing approach Measure Problem Report Status Chapter 7 Schedule and Progress The Problem Report Status measure counts the number of software problems reported and resolved This measure provides an indication of product maturity and readiness for delivery The rates at which problem reports are written and closed can be used to estimate test completion This measure can also be used as an indication of the quality of the problem resolution process Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Applicable to all sizes and types of projects Useful during development and sustaining engineering phases Process Integration 0 Many projects have acceptance criteria based on the number of open problem reports by priority This measure is useful in tracking to those requirements 0 The amount of test activity has a signi cant impact on this measure Test personnel generally alternate between testing and fixing problems You may want to normalize this measure using some measure of Test Progress 0 Data is generally available Data is easier to collect when an automated problem tracking system is used 0 On development projects data is generally available during integration and test Problem report data is more difficult to collect earlier during requirements analysis design and implementation because the formal problem reporting system is usually not in place and rigidly enforced When this data is available it provides very good progress information An inspection or peer review process can provide this information Usually Applied During 0 Requirements Analysis Estim ates and Actuals Design Estimates and Actuals lmplem entation Estimates and Actuals 0 lntegation and Test Estimates and Actuals Typical Data Items 0 Number of Software Problems Reported 0 Number of Software Problems Resolved Typical Attributes 0 Version 0 Priority 0 ValidInvalid Typical Aggregation Structure 0 Component Typically Collected for Each 0 CT or equivalent Count Actuals Based On 0 Fix developed 0 Fix implemented 0 Fix integrated 0 Fix tested This Measure Answers Questions Such As 0 Are known problem reports being closed at a suf cient rate to meet the test completion date Is the product maturing Is the problem report discovery rate going down 0 0 When will testing be complete 0 What components have the most open problem reports 59 Chapter 7 Schedule and Progress Measure Reviews Completed The Reviews Completed measure counts the number of reviews successfully completed including both internal developer and acquirer reviews 
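As a simple illustration of how such review counts might be recorded and summarized, here is a short Python sketch (the data layout and function name are hypothetical assumptions, not part of the original guidance); a review is counted as complete only when it has been held and its action items are closed, mirroring the completion criteria listed for this measure.

    # Hypothetical sketch: summarizing review completion per configuration item (CI).
    reviews = [
        # (ci, review_name, held, open_action_items)
        ("CI-1", "SRS walkthrough",   True,  0),
        ("CI-1", "Design inspection", True,  3),
        ("CI-2", "Code inspection",   False, 0),
    ]

    def review_progress(reviews):
        """Return (completed, total) review counts for each configuration item."""
        progress = {}
        for ci, _, held, open_items in reviews:
            done, total = progress.get(ci, (0, 0))
            progress[ci] = (done + (1 if held and open_items == 0 else 0), total + 1)
        return progress

    print(review_progress(reviews))   # {'CI-1': (1, 2), 'CI-2': (0, 1)}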
The measure provides an indication of progress in completing review activities Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Used on medium to large projects Not generally used on projects integrating COTS and reusable software components Useful during development and sustaining engineering phases Process Integration 0 Easy to collect if formal reviews are a part of the development process Usually Applied During 0 Requirements Analysis Estimates and Actuals 0 Design Estimates and Actuals 0 Implementation Estimates and Actuals Typical Data Items 0 Number of Reviews 0 Number of Reviews Scheduled 0 Number of Reviews Completed Successfully Typical Attributes 0 Version Typical Aggregation Structure 0 Component 0 Software Activity Typically Collected for Each 0 CI or equivalent 0 Major activity Alternatives to Reviews Include 0 Inspections Walkthroughs Count Actuals Based On 0 Completion of review 0 Resolution of all associated action items This Measure Answers Questions Such As 0 Are development review activities progressing as scheduled 0 Do the completed products meet the defined standards Are components passing the reviews 0 What components have failed their review 60 Measure Change Request Status Chapter 7 Schedule and Progress The Change Request Status measure counts the number of change requests enhancements or corrective action reports affecting a product The measure provides an indication of the amount of rework required and performed It only identifies the number of changes and does not report on the functional impact of changes or the amount of effort required to implement them Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Applicable to all sizes of projects Useful during the development phase Often used for projects in the sustaining engineering phase Not Resolved generally used for integration projects incorporating COTS and reused code Process Integration 0 Data should be available from most projects 0 Often used on iterative developments such as prototyping Usually Applied During 0 Requirements Analysis Actuals 0 Design Actuals 0 Implementation Actuals Integration and Test Actuals Typical Data Items 0 Number of Software Change Requests written 0 Number of Software Change requests Resolved Typical Attributes 0 Version Priority Validlnvalid ApprovedUnapproved Change Classification defect correction enhancement Typical Aggregation Structure 0 Function Typically Collected for Each 0 Requirement Specification 0 Design Specification Alternatives to Change Requests Include Enhancem ents 0 Corrective Action Reports Count Actuals Based On 0 Change implemented 0 Change integrated 0 Change tested This Measure Answers Such Questions As How many change requests have impacted the software Are change requests being implemented at a sufficient rate to meet schedule Is the trend of new change requests decreasing as the project nears completion 61 Chapter 7 Schedule and Progress Measure Build Content Component The Build Content Component measure identifies the components that are included in incremental builds The measure indicates progress in the incremental products Build content will often be deferred or removed in order to preserve the scheduled delivery date It is easier to track incorporation of capability by component rather than by function since it is relatively easy to detect whether or not a component has been integrated However this provides less information since the correlation between components 
and functionality is not always well defined Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Generally applicable to all sizes and types of projects 0 Useful during development and sustaining engineering phases Process Integration 0 Requires a formal detailed list of content by increment This content must be de ned at the component level Easy to collect especially if the project has a detailed tracking mechanism To effectively measure the content of the so ware at the version level the lower level units that comprise the version must individually be complete with respect to defined criteria Usually Applied During 0 Design Estimates 0 Implementation Estimates 0 Tntegation and Test Estimates and Actuals Typical Data Items 0 Number of Units 0 Number of Units Integrated successfully Typical Attributes 0 Version Typical Aggregation Structure 0 Component Typically Collected for Each 0 Unit or equivalent Count Actuals Based On 0 Successful integration 0 Successful testing This Measure Answers Such Questions As 0 Are components being incorporated as scheduled 0 Will each increment contain the specified components 0 What components have to be deferred or eliminated 0 What components have been added 0 Is development risk being deferred 62 Measure Build Content Function Chapter 7 Schedule and Progress The Build Content Function measure identi es the functional content of incremental builds The measure indicates the progress in the incorporation of incremental functionality Build content will o en be deferred or removed in order to preserve the scheduled delivery date Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Generally applicable to all sizes and types of projects Useful during development and sustaining engineering phases Process Integration 0 Requires a formal detailed list of functions by increment Feasible to collect if the project has a detailed tracking mechanism Easier to collect if usecase or functional threads are defined It is o en difficult to identify Whether a function is incorporated in its entirety A considerable amount of testing and analysis must be done to determine if all aspects of a function are incorporated Usually Applied During 0 Design Estimates 0 Implementation Estimates Integration and Test Estimates and Actuals Typical Data Items 0 Number of SubFunctions 0 Number of SubFunctions Integrated Successfully Typical Attributes 0 Version Typical Aggregation Structure 0 Function Typically Collected for Each Function or equivalent Count Actuals Based On 0 Successful integration 0 Successful testing This Measure answers Questions Such As 0 Is functionality being incorporated as scheduled What functionality has to be deferred Is development risk being deferred Will each increment contain the specified functionality 63 Chapter 7 Resources and Cost Measure Effort The Effort measure counts the number of hours or personnel applied to software tasks This is a straightforward generally understood measure It can be categorized by activity as well as by product This measure usuall correlates directly with software cost but can also be used to address other common issues including Schedule and Progress and Development Performance Selection Guidance Speci cation Guidance Project Application 0 Basic measure applicable to all domains 0 Included in most DoD measurement policies and commercial measurement practices 0 Generally applicable to all sizes and types of projects 0 Useful during project planning 
development and sustaining engineering phases Some sustaining engineering projects with fixed staffing levels may not track this measure Process Integration Data should be available from most projects at the system level Data usually derived from a financial accounting and reporting system andor separate time card system All labor hours applied to the so ware tasks should be collected including overtime The overtime data is sometimes difficult to collect Most effective when financial accounting reporting systems are directly tied to software products and activities at a low level of detail Counting software personnel may be difficult because they may not be allocated to the project on a fulltime basis or they may not be assigned to strictly softwarerelated tasks If labor hours are not explicitly provided data may be approximated from staffing andor cost data Labor hours are sometimes considered proprietary data The labor categories and activities that comprise the software tasks must be explicitly defined for each organization Planning data is usually based on software estimation models or engineering judgment Usually Applied During 0 Requirements Analysis Estimates and Actuals 0 Design Integration and Test Estimates and Actuals 0 Implementation Estim ates and Actuals Typical Data Items 0 Number ofLabor Hours Typical Attributes 0 Organization 0 Labor Category Typical Aggregation Structure 0 So ware Activity 0 Component Typically Collected for Each 0 So ware Activity 0 CI or equivalent Alternatives to Labor Hours Include 0 Labor DaysWeeksMonths 0 Full T im e Equivalents 0 Number of Personnel Alternatives to WBS Elements include 0 So ware Activities Count Actuals Based On 0 End of financial reporting period This Measure Answers Questions Such As Are development resources being applied according to plan 0 Are certain tasks or activities taking moreless effort than expected and is the effort pro le realistic 64 Measure Staff Experience Chapter 7 Resources and Cost The Staff Experience measure counts the total number of software personnel with experience in defined areas The measure is used to determine whether suf cient experienced personnel are available and used The experience factors are based on the requirements of each individual project such as domain or language Experience is usually measured in years which does not always equate to capability Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Applicable to projects that require particular expertise to complete 0 Useful during project planning development and sustaining engineering phases Process Integration 0 Requires a personnel database that maintains experience data Difficult to collect and keep upto date as people are addedremoved from a project Generally has to be 0 A matrix of project skill requirements versus current personnel skills may help to track this measure and identify necessary training areas Usually Applied During 0 Requirements Analysis Actuals Design Actuals 0 Implem entation Actuals 0 Integration and Test Actuals Typical Data Items 0 Number of Personnel 0 Number of Years of Experience Typical Attributes 0 Experience Factor Typical Aggregation Structure 0 Software Activity Organization Typically Collected for Each 0 Organization Experience Factor May be De ned for Language System Engineering Domain Hardware Application Platform Length of Time Team Together Count Actuals Based On 0 Prior to contract award 0 During annual performance evaluation This Measure Answers Questions Such 
As
- Are sufficient experienced personnel available?
- Will additional training be required?
65
Chapter 7 Resources and Cost
Measure: Staff Turnover
The Staff Turnover measure counts staff losses and gains. A large amount of turnover impacts learning curves, productivity, and the ability of the software developer to build the system with the resources provided, within cost and schedule. This measure is most effective when used in conjunction with the Staff Experience measure. Losses of more experienced personnel are more critical.
Selection Guidance
Project Application
- Applicable to all domains
- Applicable to projects of all sizes and types
- Useful during development and sustaining engineering phases
Process Integration
- Very difficult to collect on contractual projects; most developers consider this proprietary information. May be more readily available on in-house projects.
- It is useful to categorize the number of personnel lost into planned and unplanned losses, since most projects plan to add and remove personnel at various stages of the project.
Usually Applied During
- Requirements Analysis (Actuals)
- Design (Actuals)
- Implementation (Actuals)
- Integration and Test (Actuals)
Specification Guidance
Typical Data Items
- Number of Personnel
- Number of Personnel Gained per period
- Number of Personnel Lost per period
Typical Attributes
Typical Aggregation Structure
- Software Activity
- Organization
Typically Collected for Each
- Organization
Count Actuals Based On
- End of financial reporting period
- Staffing charts
- End of project activities or milestones
This Measure Answers Questions Such As
- How many people have been added to or have left the project?
- How are the experience levels being affected by the turnover rates?
- What areas are most affected by turnover?
66
Measure: Earned Value
Chapter 7 Resources and Cost
The Earned Value measure is a comparison between the cost of work performed and the budget, based on dollars budgeted per WBS element. The measure can be used to identify cost overruns and underruns.
Selection Guidance
Project Application
- Applicable to all domains
- Applicable to any project that uses a cost and schedule system, such as a Cost Schedule Control System Criteria (CSCSC) or an earned value measurement system
- Useful during project planning, development, and sustaining engineering phases
Process Integration
- CSCSC data is required on most large DoD contracts, so it is often readily available. This data should be based on a validated cost accounting system. If this data is not required, then the Cost measure can be used instead.
- This can be difficult to track without an automated system tied to the accounting system.
- This data tends to lag other measurement information due to formal reporting requirements.
- Limited in applicability if costs are planned and expended on a level-of-effort basis.
Usually Applied During
- Requirements Analysis (Estimates and Actuals)
- Design (Estimates and Actuals)
- Implementation (Estimates and Actuals)
- Integration and Test (Estimates and Actuals)
Specification Guidance
Typical Data Items
- Budgeted Cost of Work Scheduled
- Budgeted Cost of Work Performed
- Actual Cost of Work Performed
- Estimate at Completion
- Budget at Completion
Typical Attributes
- Organization
Typical Aggregation Structure
- Software Activity
Typically Collected for Each
- Software Activity
Count Actuals Based On
- WBS element complete to defined exit criteria
- WBS element percent complete based on engineering judgment
- WBS element percent complete based on underlying objective measures
This Measure Answers Questions Such As
- Are project costs in accordance with budgets?
- What is the projected completion cost?
- What WBS elements or tasks have the greatest variance?
67
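To make the relationships among these data items concrete, the following is a minimal Python sketch (the function and variable names are illustrative assumptions, not part of the guidebook) applying the standard earned value relationships: cost variance = BCWP - ACWP, schedule variance = BCWP - BCWS, CPI = BCWP / ACWP, SPI = BCWP / BCWS, and one common estimate at completion, EAC = BAC / CPI.

    # Illustrative sketch (not from the guidebook): deriving common earned value
    # indicators from the data items listed for this measure. Names are hypothetical.
    def earned_value_indicators(bcws, bcwp, acwp, bac):
        """Return basic earned value indicators for one WBS element or activity."""
        cost_variance = bcwp - acwp          # positive means under cost
        schedule_variance = bcwp - bcws      # positive means ahead of schedule
        cpi = bcwp / acwp if acwp else None  # cost performance index
        spi = bcwp / bcws if bcws else None  # schedule performance index
        eac = bac / cpi if cpi else None     # one common estimate at completion
        return {"cost_variance": cost_variance, "schedule_variance": schedule_variance,
                "CPI": cpi, "SPI": spi, "estimate_at_completion": eac}

    # Example: $120K of work scheduled, $100K of work performed, $130K actually spent
    print(earned_value_indicators(bcws=120_000, bcwp=100_000, acwp=130_000, bac=900_000))

A CPI or SPI below 1.0 flags the WBS elements or activities whose cost or schedule variance is worth investigating.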
Chapter 7 Resources and Cost
Measure: Cost
The Cost measure counts budgeted and expended cost. The measure provides information about the amount of money expended on a project compared to budgets.
Selection Guidance
Project Application
- Applicable to all domains
- Applicable to projects of all sizes and types
- Used to evaluate costs for those projects that do not use cost schedule control system criteria (CSCSC)
- Useful during project planning, development, and sustaining engineering phases
Process Integration
- Data should come from an automated accounting system. This data tends to lag other measurement information due to formal reporting requirements.
- Should be relatively easy to collect at a high level. Not all projects, however, will break out software WBS elements to a sufficient level of detail.
- This measure does not address the amount of work completed for the costs incurred.
Usually Applied During
- Requirements Analysis (Estimates and Actuals)
- Design (Estimates and Actuals)
- Implementation (Estimates and Actuals)
- Integration and Test (Estimates and Actuals)
Specification Guidance
Typical Data Items
- Cost (Dollars)
Typical Attributes
- Organization
Typical Aggregation Structure
- Software Activity
Typically Collected for Each
- Software Activity
Count Actuals Based On
- WBS element complete to defined exit criteria
- WBS element percent complete based on engineering judgment
- WBS element percent complete based on underlying objective measures
This Measure Answers Questions Such As
- Are project costs in accordance with budgets?
- Will the target budget be achieved, or will there be an overrun or surplus?
68
Measure: Resource Availability Dates
Chapter 7 Resources and Cost
The Resource Availability Dates measure tracks the availability of key development and test environment resources. The measure is used to determine if key resources are available when needed. It can be integrated into the Milestone Dates measure.
Selection Guidance
Project Application
- Applicable to all domains
- More important for projects with constrained resources
- Useful during development and sustaining engineering phases
Process Integration
- Required data is generally easily obtained from project scheduling systems or documentation.
- Resources may include software, hardware, integration and test facilities, tools, other equipment, or office space. Normally only key resources are tracked.
- Personnel resources are not included in this measure; they are tracked with Effort.
- Be sure to consider all resources, including those furnished by the government, the developer, and third-party vendors.
Usually Applied During
- Requirements Analysis (Estimates and Actuals)
- Design (Estimates and Actuals)
- Implementation (Estimates and Actuals)
- Integration and Test (Estimates and Actuals)
Specification Guidance
Typical Data Items
- Availability Date
Typical Attributes
- None
Typical Aggregation Structure
- Software Activity
Typically Collected for Each
- Key Resource
Count Actuals Based On
- Demonstration of the intended service
This Measure Answers Questions Such As
- Are key resources available when needed?
- Is the availability of support resources impacting progress?
69
Chapter 7 Resources and Cost
Measure: Resource Utilization
The Resource Utilization measure counts the number of hours of resource time requested, allocated, scheduled, available, not available due to maintenance, downtime,
or other problems and used It is used on projects that have resource constraints and is usually focused only on key resources This measure provides an indication of whether key resources are sufficient and if they are used effectively Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 More important for projects with constrained resources Especially important during integration and test activities Useful during development and sustaining engineering phases Process Integration 0 Relatively easy to collect at a high level Easier to collect if a resource monitor or resource scheduling system is in place Resources may include so ware hardware integration and test facilities tools and other equipment Normally only key resources are tracke Include both Governm entfurnished and developerfurnished resources Usually Applied During 0 Requirements Analysis Estimates and Actuals Design Estimates and Actuals Implementation Estimates and Actuals 0 Integation and Test Estimates and Actuals Typical Data Items 0 Requested Hours 0 Allocated Hours 0 Scheduled Hours 0 Available Hours 0 Hours Unavailable 0 Used Hours Typical Attributes Typical Aggregation Structure 0 Software Activity Typically Collected for Each Key Resource Count Actuals Based On 0 End of reporting period This Measure Answers Questions Such As 0 Are sufficient resources available 0 How efficiently are resources being used 70 Measure Lines of Code Chapter 7 Growth and Stability The Lines of Code measure counts the total amount of source code and the amount that has been added modified or deleted The total number of lines of code is a well understood measure that allows estimation of project cost required effort schedule and productivity Changes in the number of lines of code indicate development risk due to product size volatility and additional work that may be required Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Commonly used in weapons applications Included in most DOD measurement policies and some commercial measurement practices Used for projects of all sizes Less important for projects where little code is generated such as those using automatic code generation and visual programming environments Most effective for traditional high order languages such as Ada FORTRAN and COBOL Not generally used for fourthgeneration languages such as Natural and ECOS Not usually tracked for COTS software unless changes are made to the source code Useful during project planning development and sustaining engineering phases Process Integration 0 Define Lines of Code for each language Lines of code from different languages are not equivalent 0 You may want to calculate an effective or equivalent So ware Lines of Code count based on source New and modified lines would count at 100 while reused code would count at a lower percentage to address the effort required to integrate and test the reused code 0 It is sometimes difficult to generate accurate estimates early in the project especially for new types of projects 0 Estimates should be updated on a regular basis 0 Can be difficult estimating and tracking lines of code by source and type 0 Actuals can easily be counted using automated tools Usually Applied During 0 Requirements Analysis and Design Estimates 0 0 Integation and Test Actuals Implementation Estimates and Actuals Typical Data Items 0 Number of Lines of Code 0 Number of Lines of Code Added 0 Number of Lines of Code Deleted 0 Number of Lines of Code Modi ed Typical 
Attributes
- Version
- Source (new, reused, NDI, GOTS, or COTS)
- Language
- Delivery Status (deliverable, non-deliverable)
- End-Use Environment (operational, support)
Typical Aggregation Structure
- Component
Typically Collected for Each
- Unit or equivalent
Lines of Code Definition May Include
- Logical Lines
- Physical Lines
- Comments
- Executables
- Data Declarations
- Compiler Directives
Count Actuals Based On
- Release to configuration management
- Passing unit test
- Passing inspection
This Measure Answers Questions Such As
- How accurate was the size estimate on which the schedule and effort plans were based?
- How much has the software size changed?
- In what components have changes occurred?
- Has the size allocated to each incremental build changed? Is functionality slipping to later builds?
71
Chapter 7 Growth and Stability
Measure: Components
The Components measure counts the number of elementary software components in a software product and the number that are added, modified, or deleted. The total number of components defines the size of the software product. Changes in the number of estimated and actual components indicate risk due to product size volatility and additional work that may be required. Reporting the number of components provides product size information earlier than other size measures such as Lines of Code.
Selection Guidance
Project Application
- Applicable to all application domains, generally with different component definitions
- Applicable to all sizes and types of projects
- Not usually tracked for COTS software unless changes are made to the source code
- Useful during development and sustaining engineering phases
Process Integration
- Requires a well-defined and consistent component allocation structure (i.e., unit to CI to build)
- Required data is generally easy to obtain from software design tools, configuration management tools, or documentation
- Deleted and added components are relatively easy to collect; modified components are often not tracked
- Volatility in the planned number of components may represent instability in the requirements or in the design of the software
Usually Applied During
- Requirements Analysis (Estimates)
- Design (Estimates and Actuals)
- Implementation (Estimates and Actuals)
- Integration and Test (Actuals)
Specification Guidance
Typical Data Items
- Number of Units
- Number of Units Added
- Number of Units Deleted
- Number of Units Modified
Typical Attributes
- Version
- Source (new, reused, NDI, GOTS, or COTS)
- Language
- Delivery Status (deliverable, non-deliverable)
- End-Use Environment (operational, support)
Typical Aggregation Structure
- Component
Typically Collected for Each
- CI or equivalent
Count Actuals Based On
- Release to configuration management
- Passing unit test
- Passing inspection
This Measure Answers Questions Such As
- How many components need to be implemented and tested?
- How much has the approved software baseline changed?
- Have the components allocated to each incremental build changed? Is functionality slipping to later builds?
72
Measure: Words of Memory
Chapter 7 Growth and Stability
This measure counts the number of words used in main memory in relation to total memory capacity. This measure provides a basis to estimate if sufficient memory will be available to execute the software in the expected operational scenarios.
Selection Guidance
Project Application
- Most commonly used for weapons systems
- Used on any project with severe memory constraints, such as avionics or onboard flight software. For many projects, the amount of memory reserved is part of the
de ned exit criteria Useful during development and sustaining engineering phases Process Integration 0 Requires an automated tool that measures usage based on a defined operational pro le This is often 0 Estimation may be based on modeling or by assuming a translation factor between lines of code and words of memory Usually Applied During 0 Requirements Analysis Estimates 0 Design Estimates lmplem entation Estimates and Actuals 0 lntegation and Test Estimates and Actuals Typical Data Items 0 Number of Words of Memory 0 Number of Words of Memory Used Typical Attributes 0 Version Typical Aggregation Structure 0 Component Typically Collected for Each 0 Software CT or Hardware Cl 7 Processor Count Actuals Based On 0 Release to con guration management Passing unit test Passing inspection During Test Readiness Review 0 O O 0 Prior to delivery This Measure Answers Questions Such As 0 How much spare memory capacity is there 0 Does the memory need to be upgraded 73 Chapter 7 Growth and Stability Measure Database Size The Database Size measure counts the number of words records or tables elements in each database The measure indicates how much data must be handled by the system Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Often used for Automated Information System projects 0 Used for any project with a significant database Especially important for those with performance constraints 0 Useful during development and sustaining engineering phases Process Integration 0 In order to estimate the size of a database you must develop an operational profile This is generally a manual process that can be difficult Actuals are relatively easy to collect Usually Applied During 0 Requirements Analysis Estimates 0 Design Estimates 0 Implementation Estimates and Actuals Typical Data Items 0 Number of Tables 0 Number of Records or Entries 0 Number of Words or Bytes Typical Attributes 0 Version Typical Aggregation Structure 0 Component Typically Collected for Each 0 Software CI 7 Database 0 Hardware CI iProcessor Count Actuals Based On 0 Schema design released to configuration management 0 Schema implementation released to configuration 0 Integation and Test Actuals This Measure Answers Questions Such As 0 How much data has to be handled by the system 0 How many different data types have to be addressed 74 m anagem ent Measure Requirements Chapter 7 Growth and Stability The Requirements measure counts the number of requirements in the software and interface speci cations It also counts the number of these requirements that are added modified or deleted The measure provides information on the total number of requirements and the development risk due to volatility in requirements or functional growth Selection Guidance Speci cation Guidance Project Application Applicable to all domains Applicable to any project that tracks requirements Useful for any size and type of project Useful during project planning development and sustaining engineering phases Effective for both nondeveloped COT SGOT SReuse and newly developed software Process Integration 0 Requires a good requirements traceability process If an automated design tool is used the data is more readily available 0 Count changes against a baseline that is under formal con guration control Both stated and derived requirements may be included 0 To evaluate stability a good definition of the impacts of each change is require 0 It is sometime difficult to specifically define a quotrequiremen quot A consistently applied 
definition makes this measure more effective Usually Applied During 0 Requirements Analysis Estimates and Actuals 0 Design Actuals 0 Implementation Actuals 0 Tntegation and Test Actuals Typical Data Items 0 Number of Requirements 0 Number of Requirements Added 0 Number of Requirements Deleted 0 Number of Requirements Modi ed Typical Attributes 0 Version 0 Change Source developer acquirer user 0 Software Activity Typical Aggregation Structure 0 Function Typically Collected for Each 0 Requirement Specification Count Actuals Based On 0 Passing requirements inspection 0 Release to con guration management 0 Software Change Control Board Approval This Measure Answers Questions Such As Have the requirements allocated to each incremental build changed 0 Are requirements being deferred to later builds 0 How much has software functionality changed 0 What components have been affected the most 75 Chapter 7 Growth and Stability Measure Function Points The Function Points measure provides a weighted count of the number of external inputs and outputs logical internal files and interfaces and inquiries This measure determines the functional size of so ware to support an early estimate of the required level of effort It can also be used to normalize productivity measures and defect rates Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Commonly used in Automated Information System applications Not usually tracked for COTS or reused software Useful during development and sustaining engineering phases Process Integration 0 Requires a design process compatible with function points 0 Should be based on a de ned method such as the IFPUG function point counting practices manual 0 Usually requires formal training 0 Requires a welldefined set of work products to describe the requirements and design 0 Very labor intensive to estimate and count automated tools are scarce and have not been validated Usually Applied During 0 Requirem ents Analysis Estim ates 0 Design Estimates and Actuals 0 Implem entation Actuals 0 Integration and Test Actuals Typical Data Items 0 Number of Function Points 0 Number of Function Points Added 0 Number of Function Points Deleted 0 Number of Function Points Modified Typical Attributes 0 ersion 0 Source new reused NDI GOTS or COT S Typical Aggregation Structure 0 Function 0 Component Typically Collected for Each Function 0 CI or equivalent Count Actuals Based On 0 Completion of design documentation 0 Release to con guration management 0 Passing design documentation inspections 0 Delivery This Measure Answers Questions Such As 0 How big is the software product 0 How much work is there to be done 0 How much functionality is in the so ware 76 Measure Change Request Workload Chapter 7 Growth and Stability The Change Request Workload measure counts the number of change requests affecting a product The measure provides an indication of the amount of work required and perform ed Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Applicable to all sizes of project 0 Useful during the development phase Often used for projects in the sustaining engineering phase Not generally used for integration projects incorporating COTS and reused code Process Integration 0 Data should be available for most projects 0 Often used on iterative developments including sustaining engineering projects doing basic m aintenance Usually Applied During 0 Requirements Analysis Actuals Design Actuals Implem entation Actuals Integration and Test 
Actuals
Typical Data Items
- Number of Software Change Requests Written
- Number of Software Change Requests Open
- Number of Software Change Requests Assigned to a Version
- Number of Software Change Requests Resolved
Typical Attributes
- Version
- Priority
- Valid/Invalid
- Approved/Unapproved
- Change Classification (defect correction, enhancement)
Typical Aggregation Structure
- Function
Typically Collected for Each
- Requirement Specification
- Design Specification
Count Actuals Based On
- Change submitted
- Change approved
- Change analyzed
- Change implemented
- Change integrated
- Change tested
This Measure Answers Questions Such As
- How many change requests have been written?
- Is the backlog of open change requests declining?
- Is the rate of new change requests increasing or decreasing?
77
Chapter 7 Product Quality
Measure: Problem Reports
The Problem Reports measure quantifies the number, status, and priority of problems reported. It provides very useful information on the ability of a developer to find and fix defects. The quantity of problems reported reflects the amount of development rework (quality). Arrival rates can indicate product maturity; a decrease should occur as testing is completed. Closure rates are an indication of progress and can be used to predict test completion. Tracking the length of time that problem reports have remained open can be used to determine whether progress is being made in fixing problems. It helps assess whether software rework is deferred.
Selection Guidance
Project Application
- Applicable to all domains
- Included in most DoD measurement policies and commercial measurement practices
- Applicable to all sizes and types of projects
- Useful during development and sustaining engineering phases
Process Integration
- Requires a disciplined problem reporting process. This measure is generally available during integration and test. It is beneficial, however, to begin problem tracking earlier, during requirements, design, code, and unit test inspections and tests.
- The status codes used on a project should address, at a minimum, whether problem reports are open or resolved.
- Easy to collect actuals when an automated problem reporting system is used. Many projects do not estimate the number of problem reports expected.
- The number of discovered problem reports should be considered relative to the amount of discovery activity, such as the number of inspections and the amount of testing.
- Many projects use the number of open problem reports by priority category as a measure of readiness for test.
- To track the age of problem reports, the project may collect average age, median age, longest age, or counts by age category (e.g., number open less than 1 month, 1-3 months, more than 3 months, etc.). Each project must define what activities are included in age (e.g., time from discovery to validation, integration, or field).
Usually Applied During
- Requirements Analysis (Estimates and Actuals)
- Design (Estimates and Actuals)
- Implementation (Estimates and Actuals)
- Integration and Test (Estimates and Actuals)
Specification Guidance
Typical Data Items
- Number of Problem Reports
- Average Age of Problem Reports
Typical Attributes
- Priority
- Problem Report Status Code
- Software Activity Originated
- Software Activity Discovered
Typical Aggregation Structure
- Component
Typically Collected for Each
- CI or equivalent
Count Actuals Based On
- Problem report documented
- Problem report approved by configuration control board
- Successfully tested
- Successfully integrated
- Delivery to field
This Measure Answers Questions Such As
- How many critical problem reports have been written?
- Do problem report arrival and closure rates support the scheduled completion date of integration and test?
- How many problem reports are open? What are their priorities?
78
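As an illustration of how the arrival and closure data described above might be summarized, the following Python sketch (the record layout and function names are hypothetical, not part of the guidebook) counts open reports by priority and makes a rough projection of how long the backlog would take to drain at constant rates.

    # Illustrative sketch: tracking problem report status and projecting backlog drain.
    from datetime import date

    reports = [
        # (identifier, priority, date_opened, date_closed or None)
        ("PR-001", 1, date(1998, 3, 2), date(1998, 3, 20)),
        ("PR-002", 2, date(1998, 3, 5), None),
        ("PR-003", 1, date(1998, 3, 9), None),
    ]

    def open_reports_by_priority(reports):
        """Count open (unresolved) problem reports, grouped by priority."""
        counts = {}
        for _, priority, _, closed in reports:
            if closed is None:
                counts[priority] = counts.get(priority, 0) + 1
        return counts

    def weeks_to_drain_backlog(open_count, weekly_arrivals, weekly_closures):
        """Rough projection of weeks to close the backlog, assuming constant rates."""
        net_closure = weekly_closures - weekly_arrivals
        return None if net_closure <= 0 else open_count / net_closure

    print(open_reports_by_priority(reports))   # {2: 1, 1: 1}
    print(weeks_to_drain_backlog(40, 10, 18))  # 5.0 weeks

If the projected drain time extends past the scheduled end of integration and test, or if the arrival rate is not decreasing, the product is probably not maturing as planned.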
Measure: Defect Density
Chapter 7 Product Quality
The Defect Density measure is a ratio of the number of defects written against a component relative to the size of that component. Either a product- or function-oriented size measure can be used. The measure helps identify components with the highest concentration of defects. These components often become candidates for additional reviews or testing, or may need to be rewritten. Trends in the overall quality of a system can also be monitored with this measure.
Selection Guidance
Project Application
- Applicable to all domains
- Applicable to all sizes and types of projects
- Useful during development and sustaining engineering phases
Process Integration
- Requires a disciplined problem reporting process and a method of measuring software size
- Requires the allocation of defect and size data to the associated component affected
- In order to use functional measures of size, requirements or function points must be allocated to the associated components
- Actuals are relatively easy to collect. Most projects do not estimate defect density.
- Usually only valid, unique problem reports are included in the defect density calculation
Usually Applied During
- Requirements Analysis (Actuals)
- Design (Actuals)
- Implementation (Actuals)
- Integration and Test (Actuals)
Specification Guidance
Typical Data Items
- Number of Defects
- Number of Lines of Code
Typical Attributes
- Version
- Priority
- Source (new, reused, NDI, GOTS, or COTS)
- Language
Typical Aggregation Structure
- Component
Typically Collected for Each
- CI or equivalent
Alternatives to Lines of Code Include
- Components
- Requirements
- Function Points
Count Actuals Based On
- Defects documented
- Defects validated
- Successfully integrated
- Successfully tested
- Delivered to field
This Measure Answers Questions Such As
- What is the quality of the software?
- What components have a disproportionate amount of defects?
- What components require additional testing or review?
- What components are candidates for rework?
79
Chapter 7 Product Quality
Measure: Failure Interval
The Failure Interval measure specifies the time between each report of a software failure. The measure is used as an indicator of the length of time that a project can be expected to run without a software failure during system operation. The measure provides insight into how the software affects overall system reliability. This measure can be used as an input to reliability prediction models.
Selection Guidance
Project Application
- Applicable to all domains
- Applicable to any project with reliability requirements
- Useful during development in system or operational test. Used throughout sustaining engineering based on reported operational failures.
Process Integration
- Requires a disciplined failure tracking process. Easier to collect if an automated system is used. Data can be gathered from test logs or incident reports.
- Consider what priority of failures to include.
- Be sure to exclude non-software failures. This includes failures caused by hardware problems as well as user-generated failures caused by operator error or user documentation errors.
- Some projects specify threshold limits on an acceptable number of failures per operating time for software reliability.
- Consider whether or not to count duplicate failures.
- Consider how to count
operational time on interfacing hardware Usually Applied During 0 lntegation and Test Actuals Typical Data Items 0 Failure Identifier 0 Failure DateTime Stamp 0 Operating Time to Failure Typical Attributes 0 Version 0 Failure Priority Typical Aggregation Structure 0 Component Typically Collected for Each 0 Hardware Cl 0 Software Cl Count Actuals Based On 0 Failure documented 0 Failure validated This Measure Answers Questions Such As 0 What is the project39s expected operational reliability 0 How often will so ware failures occur during operation of the system 0 How reliable is the so ware 80 Chapter 7 Product QuaIilfy Measure Cyclomatic Complexity Logic Paths The Cyclomatic Complexity measure counts the number of unique logical paths contained in a so ware component This measure helps assess both code quality and the amount of testing required A high complexity rating is o en indicative of a high defect rate Components with high complexity usually require additional reviews or testing or may need to be rewritten Selection Guidance Speci cation Guidance Project Application Typical Data Items 0 Applicable to all domains 0 Cyclomatic Complexity Rating 0 Applicable to projects with testability reliability or m aintainability concerns Typical Attributes 0 Not generally used for COTS or reused code Not 0 Version generally used on so ware from automatic code generators or visual programming environments Typical Aggregation Structure 0 Useful during development and sustaining 0 Component engineering phases Typically Collected for Each Process Integration 0 Unit or equivalent 0 Cyclomatic complexity does not differentiate between type of control ow A CASE statement Count Actuals Based On counts as high complexity even though it is easier to 0 Passing inspection use and understand than a series of conditional 0 Passing unit test statements 0 Release to con guration management Cyclomatic complexity does not address data structures Operational requirements may require efficient highly complex code Relatively easy to collect actuals when automated tools are available eg for Ada C C Estimates are generally not derived but a desired threshold or expected distribution may be specified Usually Applied During 0 Design Actuals 0 lmplem entation Actuals 0 Integation and Test Actuals This Measure Answers Questions Such As How many complex components exist in this project 0 What components are the most complex 0 What components should be subject to additional testing 0 What is the minimum number of reviews and test cases required to test the logical paths through the component 81 Chapter 7 De veIopment Performance Measure Rework Size The Rework Size measure counts the number of lines of code changed to fix known defects This measure helps in assessing the quality of the initial development effort by indicating the amount of total code that had to undergo rework Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Applicable to most development processes In a rapid prototype process it is only applicable to the quotfinalquot version of the software product Not generally used for nondeveloped code such as COTS Useful during development and sustaining engineering phases Process Integration 0 Very dif cult to collect Most configuration management systems do not collect information on changes to the size of code or reason for the change rework 0 Rework size should only include code changed to correct defects Changes due to enhancements are not rework 0 Rework cost and schedule 
estimates should be included in the development p an Usually Applied During 0 Implem entation Actuals 0 Integration and Test Actuals Typical Data Items 0 Number of Lines of Code added due to rework 0 Number of Lines of Code deleted due to rework 0 Number of Lines of Code modified due to rework Typical Attributes 0 Version 0 Langua e 0 Delivery Status deliverable non deliverable 0 EndUse Environment operational support Typical Aggregation Structure Component Typically Collected for Each 0 Unit or equivalent Alternatives to Lines of Code Include 0 Components Count Actuals Based On 0 Release to con guration management 0 Passing inspection 0 Passing unit test This Measure Answers Questions Such As 0 How much code had to be changed as a result of correcting defects 0 What was the quality of the initial development effort 0 Is the amount of rework impacting the cost and schedule 82 Measure Rework Effort Chapter 7 De veIopment Performance The Rework Effort measure counts the amount of work effort expended to find and x so ware defects Rework effort may be expended to fix any software product including those related to requirements analysis design code etc This measure helps assess the quality of the initial development effort and identify products and so ware activities requiring the most rework Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Applicable to most development processes In a rapid prototype process it is only applicable to the quot nalquot version of the software product 0 Not generally used for effort associated with nondeveloped code such as COTS Useful during development and sustaining engineering phases Process Integration 0 Difficult to collect Some cost accounting systems do not collect information on rework effort 0 For basic tracking a single WBScost account should be created to track all rework effort per organization For more advanced tracking multiple WBScost accounts should be created to track rework at the component andor activity level 0 Rework effort should only include effort associated with correcting defects Effort expended due to incorporation of enhancements is not rework 0 Rework cost and schedule estimates should be included in the development plan Usually Applied During 0 Requirements Analysis Actuals 0 Design Actuals Implem entation Actuals 0 Integation and Test Actuals Typical Data Items 0 Labor Hours Due to Rework Typical Attribute 0 Organization 0 Labor Category 0 Version 0 Software Activity Typical Aggregation Structure 0 Software Activity Typically Collected for Each 0 Software Activity Count Actuals Based On 0 End of nancial reporting period This Measure Answers Questions Such As 0 How much effort was expended on xing defects in the software product 0 What software activity required the most rewor 7 0 Is the amount of rework impacting cost and schedule 83 Chapter 7 De veIopment Performance Measure Capability Maturity Model Level The Capability Maturity Model CMIVT Level measure reports the rating 15 of a software development organization39s software development process as defined by the Software Engineering Institute The measure is the result of a formal assessment of the organization39s project management and software engineering capabilities It is often used during the source selection process to evaluate competing developers Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains 0 Norm ally measured at the organizational level 0 Useful during project planning development and 
sustaining engineering phases
Process Integration
- Requires formal training and a very structured assessment approach. Requires a significant amount of time and effort.
- An external assessor may formally conduct an assessment, or a self-evaluation can be performed.
- Rating may be used during source selection to help select a developer. Assessment may be used as part of a process improvement project.
Usually Applied During
- Not applicable
Specification Guidance
Typical Data Items
- CMM Rating
Typical Attributes
Typical Aggregation Structure
- Software Activity
- Organization
Typically Collected for Each
- Organization
Count Actuals Based On
- Prior to contract award
- External or self-evaluation
This Measure Answers Questions Such As
- Does a developer meet minimum development capability requirements?
- What is the developer's current software development capability?
- What project management and software engineering practices can be improved?
- Is the developer's software process adequate to address anticipated project risks, issues, and constraints?
84
Chapter 7 Development Performance
Measure: Product Size/Effort Ratio
The Product Size/Effort Ratio measure specifies the amount of software product produced relative to the amount of effort expended. This common measure of productivity is used as a basic input to project planning and also helps evaluate whether performance levels are sufficient to meet cost and schedule estimates.
Selection Guidance
Project Application
- Applicable to all domains. Commonly used in weapons systems.
- Used for projects of all sizes. Less important for projects where little code is generated, such as those using automatic code generation and visual programming environments.
- Not generally used for COTS or reused software
- Estimates are often used during project planning. Both estimates and actuals are used during development and sustaining engineering to focus on the incorporation of new functionality. Not generally used for maintenance projects focused on problem resolution.
Process Integration
- In order to compare productivity from different projects, the same definitions of size and effort must be used. For size, the same measure (e.g., Lines of Code) must be used, as well as the same definition (e.g., logical lines). For the effort measure, the same labor categories and software activities must be included.
- The environment, language, tools, and personnel experience will affect the productivity achieved.
- Productivity can also be calculated using software cost models. Many of these models include schedule as part of the productivity equation.
- To validly calculate productivity, the effort measure must correlate directly with the size measure. If, for example, effort for a component is included but the component's size is not, productivity will be lower.
- Definitions should specify those elements of effort that are included (e.g., project management, documentation, etc.)
Specification Guidance
Typical Data Items
- Number of Lines of Code
- Number of Labor Hours
Typical Attributes
- Version
- Language
Typical Aggregation Structure
- Software Activity
Typically Collected for Each
- Organization
Alternatives to Lines of Code Include
- Components
- Tables
- Records or Entities
Alternatives to Labor Hours Include
- Labor Days/Weeks/Months
- Full Time Equivalents
- Number of Personnel
Count Actuals Based On
- Completion of Version
- Components implemented
- Components integrated and tested
Usually Applied During
- Requirements Analysis Estimates and Actuals
- Design Estimates and Actuals
- Implementation Estimates and Actuals
- Integration and Test
Estimates and Actuals This Measure Answers Questions Such As 0 Is the developer39s production rate sufficient to meet the completion date 0 How efficient is the developer at producing the so ware product 0 Is the plannedrequired so ware productivity rate realistic 85 Chapter 7 De veIopment Performance Measure Functional SizeEffort Ratio The Functional SizeEffort Ratio measure specifies the amount of functionality produced relative to the amount of effort expended This measure is used as a basic input to project planning and also helps evaluate whether performance levels are suf cient to meet cost schedule estimates Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Commonly used in AIS systems 0 Useful when product size measures are not available Useful during project planning development and sustaining engineering phases Process Integration 0 In order to compare productivities from different projects the same de nitions of size and effort must be used For size the same measure eg Function Points must be used as well as the same counting practices For the effort measure the same labor categories and so ware activities must be included The environment language tools and personnel experience will effect productivity achieved Productivity can also be calculated using so ware cost models Many of these models include schedules as part of the productivity equation To validly calculate productivity the effort measure must correlate directly with the size measure If for example effort for a function is included but the functional size is not productivity will be lower Useful early in the project before actual product size data is available Usually Applied During 0 Requirements Analysis Estimates and Actuals Design Estimates and Actuals Implementation Estimates and Actuals 0 Integation and Test Estimates and Actuals Typical Data Items 0 Number of Requirements 0 Number of Labor Hours Typical Attributes 0 Version Typical Aggregation Structure 0 Software Activity Typically Collected for Each 0 Organization Alternatives to Requirements Include 0 Function Points Alternatives to Labor Hours Include 0 Labor DaysWeeksMonths 0 Full Tim e Equivalents 0 Number of Personnel Count Actuals Based On 0 Completion of Version 0 Functions implemented 0 Functions integrated and tested This Measure Answers Questions Such As 0 Is the developer producing the so ware at a sufficient rate to meet the completion date 0 How efficient is the developer at producing the so ware 0 Is the plannedrequired so ware productivity rate realistic 86 Measure CPU Utilization Chapter 7 Technical Adequacy The CPU Utilization measure counts the estimated or actual proportion of time the CPU is busy during a measured time period This measure indicates whether suf cient CPU resources will be available to support operational processing This measure is also used to evaluate whether CPU reserve capacity will be sufficient for highusage operations or for added functionality Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Primarily used for weapon systems 0 Useful for any project with a dedicated processor and critical performance requirements Not generally used on projects located on shared processors 0 Useful during development and sustaining engineering phases Process Integration 0 Requires a tool that measures usage based on a defined operational pro le during a measured period of time The operational pro le load levels has a significant impact on this measure Test 
should include both normal and stress levels of operation The operational pro le for each test should be provided with the data Estimates are very difficult to derive and require significant simulation or modeling support Estimates must be developed early to impact design decisions Actual processor utilization is o en provided as an overhead function of an operating system and is more easily obtained Usually Applied During 0 Design Estimates 0 Implementation Estimates and Actuals 0 Integration and Test Actuals Typical Data Items 0 Time Processor is Busy 0 Measured Time Period 0 Specified Processor Utilization Limit Typical Attributes 0 Version 0 Operational profile Typical Aggregation Structure 0 Component Typically Collected for Each 0 Hardware CI iProcessor Count Actuals Based On 0 Integrated system test 0 Stressendurance test This Measure Answers Questions Such As 0 Have suf cient CPU resources been provided 0 Do CPU estimates appear reasonable Have large increases occurred 0 Can the CPU resources support additional functionality 87 Chapter 7 Technical Adequacy Measure CPU Throughput The CPU Throughput measure provides an estimate or actual count of the number of processing tasks that can be completed in a speci ed period of time This measure provides an indication of whether or not the software can support the system39s operational processing requirements Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Primarily used for weapon systems 0 Useful for any project with a dedicated processor and critical timing requirements Not generally used on projects located on shared processors 0 Useful during development and sustaining engineering phases Process Integration 0 Actuals can be based on realtime observation or may require a tool that measures task completion generally easy to collect stress levels of operation The operational pro le for each test should be provided with the data Estimates are very difficult to derive and require significant simulation or modeling sup ort decisions averaging period used is therefore important Usually Applied During 0 Design Estimates 0 Implementation Estimates and Actuals 0 Integation and Test Actuals based on a defined operational pro le This data is The operational pro le has a significant impact on this measure Tests should include both normal and Estimates must be developed early to impact design 0 The measurement methodology for CPU throughput is critical for meaningful results In many cases the measure is based on average CPU throughput The Typical Data Items 0 Number of Requests for Service 0 Number of Requests for Service Completed 0 Measured Time Period 0 Specified Processor Throughput Limit Typical Attributes 0 Version 0 Operational Profile Typical Aggregation Structure 0 Component Typically Collected for Each 0 Hardware CI 7 Processor Count Actuals Based On 0 Integrated system test 0 Stressendurance test This Measure Answers Questions Such As 0 Have suf cient CPU resources been acquired 0 Do CPU estimates appear reasonable Have large increases occurred 88 Measure I0 Utilization Chapter 7 Technical Adequacy The 10 Utilization measure calculates the proportion of time the 10 resources are busy during a measured time period This measure indicates whether lO resources are sufficient to support operational processing requirements Selection Guidance Speci cation Guidance Project Application 0 Applicable to all domains Primarily used for weapon systems 0 Critical for high traffic systems 0 Network lO may also be 
Measure: I/O Utilization

The I/O Utilization measure calculates the proportion of time the I/O resources are busy during a measured time period. This measure indicates whether I/O resources are sufficient to support operational processing requirements.

Selection Guidance
Project Application:
- Applicable to all domains; primarily used for weapon systems.
- Critical for high-traffic systems.
- Network I/O may also be measured under this measure.
- Useful during development and sustaining engineering phases.
Process Integration:
- Actual measurement requires a tool that measures usage against a defined operational profile during a measured period of time. Actuals are relatively easy to collect.
- The operational profile has a significant impact on this measure. The test cases should include both normal and stress levels of operation, and the operational profile for each test should be provided with the data.
- Estimates are very difficult to derive and require significant simulation or modeling support. Estimates must be developed early to influence design decisions.
Usually Applied During:
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Time I/O Resource is Busy
- Time I/O Resource is Available
- Measured Time Period
- Specified I/O Channel Utilization Limit
Typical Attributes:
- Version
- Operational Profile
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- Hardware CI - I/O Device
Count Actuals Based On:
- Integrated system test
- Stress/endurance test
This Measure Answers Questions Such As:
- Do the I/O resources allow adequate data traffic flow?
- Can additional data traffic be supported after system delivery?
- Should I/O resources be expanded?

Measure: I/O Throughput

The I/O Throughput measure reports the rate at which the I/O resources send and receive data, according to the number of data packets (bytes, words, etc.) successfully sent or received during a measured time period. This measure indicates whether the I/O resources are sufficient to support the system's operational processing requirements.

Selection Guidance
Project Application:
- Applicable to all domains; primarily used for weapon systems.
- Critical for high-traffic systems.
- Network I/O may also be measured under this measure.
- Useful during development and sustaining engineering phases.
Process Integration:
- Actual measurement requires a tool that measures usage against a defined operational profile during a measured period of time. This is relatively easy to collect.
- The operational profile has a significant impact on this measure. Tests should include both normal and stress levels of operation, and the operational profile for each test should be provided with the data.
- Estimates are very difficult to derive and require significant simulation or modeling support. Estimates must be developed early to influence design decisions.
- The measurement methodology for I/O throughput is critical for meaningful results. In many cases the measure is based on average I/O throughput; the averaging period used is therefore very important.
Usually Applied During:
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Number of Data Packets
- Number of Data Packets Successfully Sent
- Number of Data Packets Successfully Received
- Measured Time Period
- Specified I/O Throughput Limit
Typical Attributes:
- Version
- Operational Profile
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- Hardware CI - I/O Device
Count Actuals Based On:
- Integrated system test
- Stress/endurance test
This Measure Answers Questions Such As:
- Can the software design handle the required amount of system data in the allocated time?
- Can the software handle additional system data after delivery?
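The caution above about the averaging period can be seen in a small sketch. The per-second packet counts and the 800 packets-per-second channel limit below are invented for illustration; the point is that a long averaging window can hide a short burst that exceeds the limit.

# Sketch: why the averaging period matters for I/O throughput.
per_second_packets = [200, 220, 180, 950, 990, 210, 190, 205, 215, 230]
CHANNEL_LIMIT = 800  # assumed specified I/O throughput limit, packets/second

whole_run_average = sum(per_second_packets) / len(per_second_packets)
worst_one_second = max(per_second_packets)

print(f"average over full run : {whole_run_average:.0f} packets/s")
print(f"worst one-second burst: {worst_one_second} packets/s "
      f"({'exceeds' if worst_one_second > CHANNEL_LIMIT else 'within'} the {CHANNEL_LIMIT} limit)")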
Measure: Memory Utilization

The Memory Utilization measure indicates the proportion of memory that is used during a measured time period. This measure addresses random access memory (RAM), read-only memory (ROM), or any other form of electronic memory; it specifically excludes all types of magnetic and optical media (e.g., disk, tape, CD-ROM). The measure provides an indication of whether the memory resources can support the system's operational processing requirements.

Selection Guidance
Project Application:
- Applicable to all domains; primarily used for weapon systems.
- Critical for memory-constrained systems.
- Useful during development and sustaining engineering phases.
Process Integration:
- Measure and monitor different types of memory (e.g., RAM, ROM) separately. Specify the size of a word (e.g., 16-bit, 32-bit) for each memory type.
- Actual measurement requires a tool that measures usage against a defined operational profile during a measured time period or task. This is relatively easy to collect.
- The operational profile has a significant impact on this measure. The tests should include both normal and stress levels of operation, and the operational profile for each test should be provided with the data.
- Estimates are very difficult to derive and require significant simulation or modeling support. Estimates must be developed early to influence design decisions.
Usually Applied During:
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Memory Available
- Memory Used
- Measured Time Period
- Specified Memory Utilization Limit
Typical Attributes:
- Version
- Operational Profile
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- Hardware CI - Processor
Count Actuals Based On:
- Integrated system test
- Stress/endurance test
This Measure Answers Questions Such As:
- Will the software fit in the processors?
- Can the software size increase after system delivery as needed to incorporate new functionality?
- What is the risk that system errors will be caused by lack of storage space?

Measure: Storage Utilization

The Storage Utilization measure reports the proportion of storage capacity used. The measure provides an indication of whether storage resources are sufficient to store the software and/or the anticipated volume of operational data generated by the system. The term "storage" refers to magnetic and optical media (e.g., disks, tapes, hard drives, CD-ROM) but specifically excludes all types of random access memory (RAM), read-only memory (ROM), or any other form of electronic memory.

Selection Guidance
Project Application:
- Applicable to all domains; primarily used for weapon systems.
- Critical for storage-constrained systems.
- Useful during development and sustaining engineering phases.
Process Integration:
- Measure and monitor different types of storage (e.g., disk, tape) separately. Specify the size of a word (e.g., 16 bits, 32 bits) for each storage type.
- Actuals are easy to measure. Estimates are often based on product size.
Usually Applied During:
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Storage Available
- Storage Used
- Specified Storage Utilization Limit
Typical Attributes:
- Version
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- Hardware CI - Storage Unit
Count Actuals Based On:
- Integrated system test
- Stress/endurance test
This Measure Answers Questions Such As:
- Have sufficient storage resources been provided?
- Do storage estimates appear adequate?
- What is the expansion capacity?
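A minimal sketch of tracking memory and storage utilization separately by resource type, as recommended above, follows. The resource names, capacities, and limits are assumptions for illustration only.

resources = {
    # type: (used, available, specified utilization limit)
    "RAM (32-bit words)": (12_000_000, 16_000_000, 0.75),
    "ROM (16-bit words)": (900_000, 1_000_000, 0.95),
    "Disk (bytes)": (400_000_000, 1_000_000_000, 0.60),
}

for name, (used, available, limit) in resources.items():
    utilization = used / available
    status = "within limit" if utilization <= limit else "EXCEEDS LIMIT"
    print(f"{name:20s} {utilization:6.1%} of capacity used ({status})")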
Measure: Response Time

The Response Time measure reports the amount of time required to process a request. The measure counts the time between initiation of a request for service and the conclusion of that service. It provides an indication of whether the target computer system responds in a timely manner. User interface response time is often considered an important quality factor.

Selection Guidance
Project Application:
- Applicable to all domains; used extensively on AIS systems.
- Critical for projects with specified response time requirements. Especially critical for real-time projects.
- Useful during development and sustaining engineering phases.
Process Integration:
- Actuals can be based on real-time observation or may require a tool that measures request completion against a defined operational profile. This data is generally easy to collect.
- The operational profile has a significant impact on this measure. Tests should include both normal and stress levels of operation, and the operational profile for each test should be provided with the data.
- This measure must be collected at a low level in order to provide a good characterization of the level of service provided.
Usually Applied During:
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Service Initiation Time
- Service Completion Time
- Maximum Allowable Service Time
Typical Attributes:
- Version
- Operational Profile
Typical Aggregation Structure:
- Function
Typically Collected for Each:
- Function - Service
Count Actuals Based On:
- Integrated system test
- Stress/endurance test
This Measure Answers Questions Such As:
- Is the target computer system sufficient to meet response requirements?
- How long do certain services take?
- Does the software operate efficiently?
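As an illustration, the sketch below reduces service initiation and completion timestamps to simple response-time statistics and compares them with a maximum allowable service time. The 2.0-second allowable time, the sample timestamps, and the crude percentile method are assumptions, not requirements from this handbook.

# (initiation_s, completion_s) pairs for one service under one operational profile
timestamps = [
    (0.0, 0.8), (1.0, 1.9), (2.0, 4.3), (5.0, 5.6), (6.0, 7.1),
]
MAX_ALLOWABLE_S = 2.0  # assumed maximum allowable service time

durations = sorted(done - start for start, done in timestamps)
mean = sum(durations) / len(durations)
p90 = durations[int(0.9 * (len(durations) - 1))]  # rough 90th percentile
violations = sum(d > MAX_ALLOWABLE_S for d in durations)

print(f"mean={mean:.2f}s  90th percentile={p90:.2f}s  "
      f"{violations} of {len(durations)} requests exceeded {MAX_ALLOWABLE_S}s")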
Measure: Achieved Accuracy in Software Performance

The measure of Achieved Accuracy in Software Performance is usually a combination of several other measures that are defined by the software functional and technical requirements. These measures can include any functional characteristic that can be quantitatively defined and demonstrated during software or system operation. Technical performance measures are usually defined in terms of the accuracy with which the functions of the software or system meet defined requirements, such as response time, data handling capability, or signal processing. These measures provide an indication of the overall ability of a software-intensive system to meet the user's functional requirements.

Selection Guidance
Project Application:
- Applicable to all domains.
- Included in all government and commercial projects that define specific requirements that must be achieved in software products.
- Used for projects of all sizes.
- Often used for projects integrating COTS software.
- Useful during development and sustaining engineering phases.
Process Integration:
- Sometimes difficult to generate accurate estimates early in the project, especially for new technologies and new projects.
- Data may not be available until late in a project, when system functional testing is performed.
- Resource and technology limitations may prohibit demonstration and measurement of all technical performance parameters.
- Data is usually available from functional test records.
- Modeling and simulation results may be used to estimate software functional performance levels.
Usually Applied During:
- Requirements Analysis (estimates)
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Software functional performance level
Typical Attributes:
- Version
- Source (new, reused, NDI, GOTS, or COTS)
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- CI or equivalent
Count Actuals Based On:
- Passing functional test
This Measure Answers Questions Such As:
- How accurate was the signal processing function in this software release?
- Is the system able to read all the required data files in the available time?
- Was the software able to perform all required functions and meet the required system response time?

Measure: NDI Utilization

The NDI Utilization measure tracks the amount of code that is planned for reuse against what is actually reused. If less code is reused than planned, additional schedule and effort will most likely be required to complete the development.

Selection Guidance
Project Application:
- Applicable to all domains; commonly used in weapons applications.
- Included in most DoD measurement policies and some commercial measurement practices.
- Used for projects of all sizes. Less important for projects where little code is generated, such as those using automatic code generation and visual programming environments.
- Most effective for traditional high-order languages such as Ada, FORTRAN, and COBOL. Not generally used for fourth-generation languages such as Natural and ECOS.
- Not usually tracked for COTS software unless changes are made to the source code.
- Useful during project planning, development, and sustaining engineering phases.
Process Integration:
- Define Lines of Code for each language; lines of code from different languages are not equivalent.
- Sometimes difficult to generate accurate estimates early in the project, especially for new types of projects.
- Estimates should be updated on a regular basis.
- It can be difficult to estimate and track lines of code by source (new, modified, deleted, reused, NDI, GOTS, or COTS).
- Actuals can easily be counted using automated tools.
Usually Applied During:
- Requirements Analysis (estimates)
- Design (estimates)
- Implementation (estimates and actuals)
- Integration and Test (actuals)

Specification Guidance
Typical Data Items:
- Number of Lines of Code (LOC)
Typical Attributes:
- Version
- Source (new, reused, NDI, GOTS, or COTS)
- Type (added, deleted, modified)
- Language
Typical Aggregation Structure:
- Component
Typically Collected for Each:
- Unit or equivalent
Alternatives to Lines of Code Include:
- Components
- Function Points
- Requirements
Lines of Code Definition May Include:
- Logical lines
- Physical lines
- Comments
- Executables
- Data declarations
- Compiler directives
Count Actuals Based On:
- Release to configuration management
- Passing unit test
- Passing inspection
This Measure Answers Questions Such As:
- How accurate was the reuse size estimate on which the schedule and effort plans were based?
- How much has the reused software size changed? In what components have changes occurred?
- Has the reuse size allocated to each incremental build changed?
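The following sketch tracks planned versus actual reused (NDI) code by component and converts the reuse shortfall into a rough added-effort figure. The component names, LOC counts, and the productivity assumption are invented for illustration; any real estimate of added effort should use the program's own productivity data.

plan_vs_actual = {  # component: (planned reused LOC, actual reused LOC)
    "navigation": (12_000, 9_000),
    "displays": (8_000, 8_500),
    "comms": (5_000, 2_000),
}
NEW_CODE_LOC_PER_STAFF_MONTH = 300  # assumed productivity for newly written code

total_shortfall = 0
for component, (planned, actual) in plan_vs_actual.items():
    shortfall = max(0, planned - actual)  # reuse that must be replaced with new code
    total_shortfall += shortfall
    print(f"{component:12s} planned {planned:6d}  actual {actual:6d}  shortfall {shortfall:6d} LOC")

print(f"added effort is roughly {total_shortfall / NEW_CODE_LOC_PER_STAFF_MONTH:.1f} staff-months")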
Implementing the Measurement Process

Once the project has begun, the analysis of software measures becomes a major concern. Analysis is conducted to determine whether software development efforts are meeting defined plans, assumptions, and targets. Planned and actual performance data are the inputs to this process. Performance analysis should be viewed as an investigative process used to identify risks, manage risks, and track down and isolate problems. This may require the use of slightly different data, the use of different measures to generate different indicators, and the identification of alternative courses of action each time performance is analyzed.

Many times, schedule, resources, growth, or quality trends are not recognizable as an indication of a potential problem until the associated risk has actually become a problem of major proportions. Because software risks are not independent, an integrated analysis using multiple indicators should be performed. In combination, Figures 7-1 and 7-2 show an example of a potential problem made visible by detecting inconsistent trends using multiple indicators. Figure 7-1 shows an indicator for the measure Component Status during the design process, and Figure 7-2 shows an indicator for the measure Problem Report Status for the same project. Whereas the actual component status appears to be only slightly behind the plan, the discrepancy between the number of open and closed problem reports is increasing. These open problem reports represent rework that must be completed before the design activity can be completed. Thus the trends in these two performance indicators are inconsistent, an indication of a potential problem; a short sketch of this kind of cross-check follows the figure descriptions below.

[Figure 7-1: Component Status Indicator Example - planned versus actual units completed during design.]

Once a potential problem has been identified, it should be localized by examining indicators with more detailed data. In the example just cited, a Problem Report Status chart should be generated for each of the Configuration Items within the software design. Identifying the specific source of the potential problem helps to determine the root cause and to select the appropriate corrective actions.

[Figure 7-2: Problem Report Status Indicator Example - total versus closed problem reports over time.]
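The sketch below illustrates the multi-indicator check just described: component completion roughly on plan while the gap between total and closed problem reports widens. The weekly counts are invented for illustration.

weeks = [1, 2, 3, 4, 5]
planned_units = [4, 8, 12, 16, 20]
actual_units = [4, 7, 11, 15, 18]      # only slightly behind plan
reports_total = [5, 12, 21, 33, 48]
reports_closed = [4, 9, 13, 17, 20]    # closure falling behind discovery

for w, plan, act, tot, closed in zip(weeks, planned_units, actual_units, reports_total, reports_closed):
    open_reports = tot - closed
    print(f"week {w}: units {act}/{plan} complete, open problem reports = {open_reports}")
# A steadily growing count of open reports alongside near-plan unit completion is
# the inconsistent-trend signal illustrated by Figures 7-1 and 7-2.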
Watch-Out-Fors

The following examples of lessons learned were extracted from the publication Practical Software Measurement: A Foundation for Objective Program Management. This list is provided as a starting point and is by no means comprehensive. The list is organized by common software risk categories; a short sketch that turns several of these thresholds into automated checks follows the list.

Schedule and Progress
- A >10% cumulative, or >20% per period, actual deviation from planned progress. Once an actual progress trend line is established, it is difficult to change the rate of completion.
- A 5% or greater build schedule variance for single builds, or a 10% build schedule variance across two or more builds.

Resources and Cost
- Voluntary staff turnover >10% per year.
- Large overruns during integration and test, which may indicate quality problems with the code and significant defects that may delay completion.
- The addition of large numbers of people within a short period of time; this normally cannot be done effectively.

Growth and Stability
- Total software size increases >20% over original estimates.
- Constantly changing requirements, or a large number of additions after requirements reviews, which are leading indicators of schedule and budget problems later in the project.

Product Quality
- Defect removal efficiency <85%.
- Large gaps between the closure rate and the discovery rate, indicating that problem correction is being deferred, which could result in serious schedule, staffing, and cost problems later in the project.
- A horizontal problem discovery trend line during design, coding, or testing. This may indicate that reviews and tests are not being performed and should be investigated.

Development Performance
- A developer with a poor software development process or low productivity, coupled with aggressive project schedule and cost objectives.
- Unplanned rework, which is a frequent cause of low productivity.
- Attempts to increase productivity significantly on an existing project.

Technical Adequacy
- Changes in assumptions concerning the use of COTS software or the amount of code that can be reused.

Additional information concerning software measurement may be found in the publication Practical Software Measurement: A Foundation for Objective Project Management, Version 3.1, 17 April 1998, sponsored by the Joint Logistics Commanders Joint Group on Systems Engineering, and at http://www.psmsc.com.
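A few of the thresholds listed above can be expressed as simple automated checks over a project's current measures, as in the sketch below. The measure values are invented; the thresholds are the ones given in the list.

measures = {
    "cumulative_schedule_deviation": 0.12,  # fraction behind planned progress
    "voluntary_staff_turnover": 0.08,       # per year
    "software_size_growth": 0.25,           # over original estimate
    "defect_removal_efficiency": 0.82,
}

checks = [
    ("cumulative schedule deviation > 10%", measures["cumulative_schedule_deviation"] > 0.10),
    ("voluntary staff turnover > 10%/year", measures["voluntary_staff_turnover"] > 0.10),
    ("total software size growth > 20%", measures["software_size_growth"] > 0.20),
    ("defect removal efficiency < 85%", measures["defect_removal_efficiency"] < 0.85),
]

for description, triggered in checks:
    print(f'{"WATCH" if triggered else "  ok "}  {description}')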
Chapter 8: Assess, Mitigate, Report

What is the Relationship Between Assess, Mitigate, Report and Technical Risk?

The risk assessment, mitigation, and reporting process often is not well structured and lacks discipline in its implementation. For example, assessments that reflect high risks are not encouraged, reporting is not done at the necessary level, and program schedules dictate the what and when of risk mitigation. When the disciplines in this process fail, resources and management attention cannot be applied to resolve risk issues, and consequently corrective actions remain open. The assessment, mitigation, and reporting of risk are the heart of the risk process; when this is not effective, the risk management program fails. Remember that a risk is not a problem; a problem is a risk that has already occurred.

Risk Assessment

The starting point for determining total program risk is to identify known or potential risk areas. This responsibility should be accepted by each person involved with the design, test, construction/manufacture, operation, support, and eventual disposal of a weapon system, its subsystems, and its components. The earlier in the program that these risks are identified, the easier they will be to manage and the less negative impact they will have on program cost, schedule, and performance objectives. To facilitate the proactive identification of risks, there are methods and tools available for consideration (see Chapter 1, Figure 1-1, Critical Process Risk Management).

The next step is to assess the technical risk of each risk area identified within the program. These risk assessments are conducted to determine the disruption to your program as a function of two parameters: the level of each critical process variance from a known standard, and the consequence (i.e., the magnitude of the impact) if the particular risk is realized. The levels for each parameter are used to enter the Assessment Guide grid, as shown in Chapter 1, Figure 1-2, Critical Process Risk Assessment, and the result is either a low, moderate, or high risk assessment. Note that the Consequence level takes into account the impact on technical performance, schedule, cost, and other teams. Total program risk assessment is determined by rolling up all of the critical process risk area assessments affecting the program. The approach to risk assessment may vary depending on program philosophy; Chapter 1, Choose an Approach, provides additional details on approaches to technical risk management.

Conducting Assessments

Assessments should be conducted in a manner that both optimizes program resources and schedule and, at the same time, is proactive in identifying risks before they become major program problems. Assessments should expose the potential weaknesses of the program, and therefore should be conducted by subject matter experts from the affected areas. There are three types of risk assessment. Experience to date, however, indicates that continuous assessments, coupled with independent assessments when necessary, represent the most effective strategy for assessing program risk.

Periodic Assessments. Risk assessments are conducted at predetermined intervals, normally in preparation for milestone reviews. This approach may be sufficient for programs with limited resources; however, with this approach, low risks could develop into higher program risks if not identified early enough.

Continuous Assessments. Risk assessments are ongoing activities conducted by teams, rather than activities conducted only at scheduled times such as program milestones, major events, etc. This is a proactive approach, allowing program risks to be identified early and mitigation strategies to be developed before technical risks impact performance, cost, and schedule. Continuous assessments are especially beneficial during the early phases of a program's life cycle.

Independent Risk Assessments. Risk assessments are conducted by an outside team of experts, with the experts normally coming from other programs or from industry. This is a recommended practice, as the assessors provide an unbiased review of the program and draw on their particular expertise to assess program risk. This is such an effective tool that it is further discussed in Chapter 9, Use Independent Assessors.

Evaluating Critical Process Variance

For each potential risk identified, the question must be asked: What is the critical process variance from known standards or best practices? Looking at Chapter 1, Figure 1-2, there are five levels of Critical Process Variance: Minimal, Small, Acceptable, Large, and Significant. Associated with these five levels are the letters "a" through "e"; they correspond to the y-axis on the Assessment Guide. If the variance of a process from a known standard or best practice is considered minimal (level "a"), the risk will be determined by proceeding along the "a" row to the Consequence level selected. The risk will be low per this figure, unless of course the Consequence is considered Significant (level 5).

Evaluating Consequence

Risk consequence is evaluated by answering the following question: Given that the identified risk is realized, what is the magnitude of the impact of that risk? Levels of Consequence are labeled 1 through 5 and correspond to the x-axis on the Assessment Guide. Consequence is a multifaceted issue. Applicable consequences have been narrowed down to four key areas (again referring to Chapter 1, Figure 1-2): Technical Performance, Schedule, Cost, and Impact on Other Teams. At least one, and maybe more, of the four consequence metrics needs to apply for there to be the potential for risk. However, if there is no adverse consequence, there is no risk, irrespective of the assessed level of Critical Process Variance. These four metrics are further discussed as follows:
- Technical Performance. The wording of each level is oriented toward design processes, but it should be applied as well to test processes, production processes, life cycle support, and equipment disposal. For example, the word "margin" could apply to weight margin during design, safety margin during testing, or machine performance margins during construction/manufacture and subsequent life cycle operation.
- Schedule. The words used in the Schedule column, as in all columns of the Consequence Table, are meant to be generic. Avoid excluding a consequence level from consideration just because it doesn't match a team's specific definitions.
- Cost. Cost is considered an independent variable in Defense programs. Since the magnitude of the dollars varies from component to component and process to process, percentage of dollars is used.
This is also in step with Acquisition Program Baseline objectives and threshold values, which are based on percentage of dollars. The levels listed here represent costs at the program level; however, Integrated Product Teams (IPTs) may choose to align these definitions with standard cost reporting requirements, consistent with the cost consequences faced at the lower levels. At the program level the definitions are as follows: Level 1 is minimal or no impact, Level 2 is <5%, Level 3 is 5% to 7%, Level 4 is >7% to 10%, and Level 5 is >10%.
- Impact on Other Teams. Both the consequence of a risk and the mitigation actions associated with reducing the risk may impact another IPT. When this impact results in increased complexity, levels of risk also increase. This may involve additional coordination or management attention (resources) and may therefore increase the level of risk.

Even after the Process Variance and Consequence levels have been determined, classification of the level of risk can be somewhat subjective (e.g., it could depend on the type of data being assessed). However, all assessments should be based on experienced judgment from your best technical people. Figures 1-1 and 1-2 of Chapter 1, discussed previously, provide a risk management and assessment tool based upon a process-oriented approach, in which Critical Process Variances from known best-practice standards are plotted against Consequences to derive a level of risk assessment. Another approach, used by some programs, is provided for information in Figure 8-1, in which values of Probability (or Likelihood) of Occurrence are assigned to each risk element, and probability or likelihood is plotted against consequence to derive a risk assessment level. A small sketch of this style of lookup follows the figure summary below.

[Figure 8-1: Risk Probability and Consequence - a five-by-five grid of likelihood versus consequence, summarized as follows.]

What is the likelihood the risk will happen?
- Level 1, Not Likely: will effectively avoid or mitigate this risk based on standard practices.
- Level 2, Low Likelihood: have usually mitigated this type of risk with minimal oversight in similar cases.
- Level 3, Likely: may mitigate this risk, but workarounds will be required.
- Level 4, Highly Likely: cannot mitigate this risk, but a different approach might.
- Level 5, Near Certainty: cannot mitigate this type of risk; no known processes or workarounds are available.

Given the risk is realized, what is the magnitude of the impact? (Technical / Schedule / Cost)
- Level 1: minimal or no impact / minimal or no impact / minimal or no impact.
- Level 2: minor performance shortfall, same approach retained / additional activities required, able to meet need dates / budget increase or unit production cost increase <1%.
- Level 3: moderate performance shortfall, but workarounds available / minor schedule slip, will miss need date / budget increase or unit production cost increase <5%.
- Level 4: unacceptable, but workarounds available / program critical path affected / budget increase or unit production cost increase <10%.
- Level 5: unacceptable, no alternatives exist / cannot achieve key program milestone / budget increase or unit production cost increase >10%.

In applying this figure, many programs use colors, such as Green (Low Risk), Yellow (Moderate Risk), and Red (High Risk). The derived risk assessment level should correlate with the assessment made by experienced program office personnel; if it does not, the levels of process variance, likelihood, and consequence should be reevaluated and the risk assessment reconsidered.
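A minimal sketch of a Figure 8-1 style lookup follows. The particular assignment of Low, Moderate, and High to each cell is an assumed example only; programs tailor the grid to their own risk tolerance.

GRID = [  # rows: likelihood 1 (bottom) through 5 (top); columns: consequence 1 through 5
    "LLLMM",  # likelihood 1
    "LLMMH",  # likelihood 2
    "LMMHH",  # likelihood 3
    "MMHHH",  # likelihood 4
    "MHHHH",  # likelihood 5
]
NAMES = {"L": "Low", "M": "Moderate", "H": "High"}

def assess(likelihood, consequence):
    # Return Low/Moderate/High for likelihood and consequence levels of 1-5.
    return NAMES[GRID[likelihood - 1][consequence - 1]]

print(assess(2, 3))  # Moderate
print(assess(5, 5))  # High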
In practice, likelihood of occurrence is usually a judgment call, whereas process variance may be somewhat easier to measure, although sound judgment will still be required. Knowing this underscores the importance of experienced and/or expert judgment to truly assess program risk. Chapter 5, Practice Engineering Fundamentals, should be consulted to gain insight into technical-baseline Best Practices and Watch-Out-Fors related to critical processes in design, test, and production, and the principal areas of risk associated with each of those processes.

Risk Analysis and Mitigation

Once risk has been identified and assessed, the next step requires risk analysis and mitigation. As part of this step, the risk owner develops specific tasks that, when implemented, will reduce the stated risk to an acceptable level. This does not necessarily mean reducing the risk to low. Some programs consider "no risk" to be no progress and encourage proactive pursuit of cutting-edge technologies. This may require accepting some level of risk if the result leads to future gains in terms of performance, schedule, and/or cost.

The risk analysis process requires localizing the source or cause of the identified risk, being careful not to confuse symptoms with cause. It is the source (cause) which will receive the necessary resources to mitigate risk to an acceptable level. Once this has been accomplished, Mitigation Plans must be developed that describe what has to be done, when, by whom, the level of effort, and the material or facilities required to mitigate the risk to an acceptable level. A proposed schedule for accomplishing these actions is required, as well as a cost estimate if possible. All assumptions used in the development of the Mitigation Plan must be listed. Recommended mitigation actions that require resources outside the scope of a contract, Ship Project Directive, Work Request, or other official tasking should be clearly identified. The risk form used by the program should also include a list of the IPTs which the risk area or the Mitigation Plan may impact. When completed and approved by the cognizant individual, the risk form is recorded (entered into the database).

Figure 1-1 of Chapter 1 lists some self-explanatory ideas for developing risk Mitigation Plans. Two items listed in the ToolBox also contain sources that may benefit Mitigation Plan development. These include, but are not limited to, the DoD 4245.7-M Templates and the NAVSO P-6071 Best Practices manuals. These documents are often useful in developing Mitigation Plans for design, test, and manufacturing risk areas. The idea "Renegotiate Requirements" should normally be recommended only as a last resort.

Another consideration is the identification of interrelationships between identified critical risks and risk mitigation plans. For example, in developing risk Mitigation Plans, a common Mitigation Plan could be used to mitigate several areas of risk (e.g., improved convection cooling techniques could reduce system complexity by eliminating the need for forced-air cooling and could improve part design margins by reducing worst-case operating temperatures). Conversely, plans developed for mitigating risks in one area could have an adverse effect on other risks (e.g., the addition of heat sinks to improve convection cooling could adversely increase system weight and increase maintenance times). This type of analysis is encouraged to ensure that a Mitigation Plan for one area of risk does not have a counterproductive effect on one or more other risk areas.

Do not expect to avoid risk completely; every program, be it an ACAT I or an ACAT IV, will have risks. Once risks have been reported and assessed, a mitigation strategy for every moderate and high risk should be established. Risk resolution and workarounds can be kept off the critical path by early identification and resolution.
The program office has, as a minimum, three risk mitigation strategies available: risk reduction/prevention, risk transfer (sharing), and risk acceptance.

- Risk Reduction/Prevention. Mitigation actions should clearly identify the root cause of the risk, how the root cause will be eliminated or reduced, and who (individuals or teams) is responsible for carrying out these actions. Progress against mitigation actions must be tracked at appropriate intervals. While this is often done at milestone reviews and other major program decision points, it is in the best interest of the program to review these efforts continuously. One way to accomplish this is through the use of Event-Driven Risk Mitigation Plans, discussed under Reporting the Risk, in which risk mitigation activities are integrated with the overall program schedule and resources.
- Risk Transfer or Sharing. In some cases, risk consequence must be shared with another party, such as the contractor or a participating program office. Risk can also be transferred or reallocated to different WBS elements or subsystems. In this instance, reallocation is appropriate only if the element to which it is reallocated is better suited to mitigate the risk. Risk transfer may be appropriate when the consequence of the risk is high but the likelihood of occurrence is low. Transfer techniques, for example, can include warranties or insurance policies.
- Risk Acceptance. As stated previously, every program has risk. Generally, the more the program pushes state-of-the-art technology, and the greater the performance and operational requirements, the greater the risks. In many cases the program manager must be willing to accept some of these inherent risks, since reducing them would come at the expense of degraded mission performance and would adversely impact budget and schedule constraints. The key in accepting these risks is that the program manager must ensure that they are identified and understood early, so that they do not become problems later and adversely impact the program.

Tracking the Risk

As part of the assessment and reporting processes, program risks must be formally tracked and documented in an organized manner. This is necessary to determine trends and to keep a status on risks and on the effectiveness of mitigation activities. Individuals report data in different ways; therefore, it is imperative that all members of the risk team (which includes the contractor):
- Use a standard reporting format, and
- Use the same terminology and definitions to describe, define, and report risk.

A standard format allows data to be communicated effectively between team members and management and allows standardized data to be incorporated into a risk database. A sample Risk Assessment Form (RAF) is shown in Figure 8-2. An effective tracking system has the following characteristics:
- Risk data, decisions, and mitigation activities are accessible to all team members and program office personnel involved with risk management, and
- Risk data is compiled in a central database so that data can be retrieved and put into useful formats for analysis and reporting.

Database Software

Manual databases will accomplish the job of tracking; however, electronic databases are preferred because they offer access at remote locations, access by several personnel at once, rapid recall and sorting of data, and links to contractor risk databases. Several recommended database programs available as COTS items include Lotus Notes, Microsoft FoxPro, and Microsoft Access. The database should, as a minimum, include the fields contained on the RAF; a sketch of one such record layout follows.
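The sketch below shows a risk-tracking record carrying fields like those on the sample RAF, as it might be stored in a shared database. The field names paraphrase the form described in this chapter and are not an official schema; the example values are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRecord:
    tracking_number: str            # assigned by the RM coordinator
    title: str
    overall_level: str              # "Low", "Moderate", or "High"
    process_variance: str           # level "a" through "e"
    consequence: int                # level 1 through 5
    originator: str
    owner_ipt: str
    description: str                # IF / THEN / IN ADDITION format
    mitigation_actions: List[str] = field(default_factory=list)
    impacted_ipts: List[str] = field(default_factory=list)
    status: str = "Open"

risk = RiskRecord(
    tracking_number="PWR-004", title="Converter thermal margin",
    overall_level="Moderate", process_variance="c", consequence=3,
    originator="J. Smith", owner_ipt="Power IPT",
    description="IF the converter runs above 85 C, THEN derating limits are violated; "
                "IN ADDITION, reliability predictions no longer hold.",
    mitigation_actions=["Conduct forced-air cooling trade study"],
    impacted_ipts=["Structures IPT"],
)
print(risk.tracking_number, risk.overall_level, len(risk.mitigation_actions), "mitigation action(s)")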
Creating the database can be as simple as making the RAF, or a modified version thereof, the opening menu of the database, with each field in the form being a drop-down menu. The New Attack Submarine On-Line Risk Database (OLRDB) was developed by the Program Executive Officer, Submarines (PEO-SUB) to identify, assess, manage, track, and report program risk. This is a Government-owned tool, and information on obtaining an electronic copy of the OLRDB shell is provided on the ASN(RD&A) Acquisition and Business Management (ABM) homepage (http://www.abm.rda.hq.navy.mil).

When choosing a database format, consider the following:
- Coordinate with the prime contractor on the database program to be used. Use of the same or compatible software will ensure unhindered data flow, access, and sharing of information between the program office and the contractor.
- Include database requirements in the contract or Statement of Work.
- Use COTS software if possible; this will ensure that software packages and upgrades will be available to the prime contractor and to any new contractor or supplier that will need to access the system.
- Online systems allow remote access between all parties.
- Databases should be secure to prevent unauthorized access.

Reporting the Risk

A 1998 GAO Report (GAO/NSIAD-98-56, Best Practices: Successful Application to Weapon Acquisitions Requires Changes in DoD's Environment) noted that industry encourages and rewards personnel who report risk, whereas in DoD, problems or indications that the technology, cost, and schedule estimates are decaying do not help sustain the program in subsequent years, and thus their admission is discouraged. It further stated that there were few rewards for discovering and recognizing problems early in DoD program development, given the amount of external scrutiny the programs receive. A 1994 study by the Defense Systems Management College also reported that "a feeling of responsibility for program advocacy appears to be the primary factor causing Government managers to search aggressively and optimistically for good news relating to their programs and to avoid bad news, even when it means discrediting conventional management tools that forecast significant negative deviations from plan."

The above findings reflect a culture problem within DoD that requires change. Since program risks are unknowns, at least until they have been assessed, all risks are inherently high (or "Red") until their impact is further understood and/or mitigated. In order to mitigate program risk, risks must be reported. To ensure all risks are reported and not understated, program managers should encourage their staff to report risk and should employ the following:
- Strongly encourage the reporting of risk without fear of reprisal.
- Status all new risks as high (or "Red") until consequences and program impact are understood and/or mitigated.

Risks must be presented in a clear, concise, and standardized format so that senior personnel responsible for making programmatic and technical decisions are not burdened with large amounts of non-standardized data, yet at the same time have enough information to make programmatic decisions. The report format should be comprehensive enough to give an overall assessment, be supported by enough technical detail, and include recommended corrective actions.
The Under Secretary of Defense for Acquisition and Technology (USD(A&T)) designates certain ACAT I programs to submit a Defense Acquisition Executive Summary (DAES) report. The purpose of this report is to highlight risks and actual problems to USD(A&T) before they become major problems. For designated ACAT I programs, high and moderate risks should be included in the DAES report.

The Program Risk Mitigation Waterfall Chart (Figure 8-3) illustrates the connection between program events and mitigation efforts, as well as a record of progress in risk mitigation. In addition, the following provides some basics for reporting, or rolling up, risk data for senior personnel in a position to assist the program in reducing risk:
- Use a standard form and format to report risks. The risk database should be capable of generating risk reports, and Government and contractor reports should use the same form/format and terminology.
- Reports should summarize high and moderate risks and recommended actions.
- A watch list, which tracks risks and mitigation activities, should be included as part of the report. The watch list is normally updated monthly and should include the top ten (or more, as applicable) risk items, prioritized by exposure and leverage.
- Reports should also include, as a minimum: the number of risk items resolved to date; the number of new risk items since the last report; the number of unresolved risk items; unresolved risk items on the critical path; and the effect of technical risk on cost and schedule.

[Figure 8-2: Sample Risk Assessment Form - a one-page form submitted to the RM Coordinator containing: risk title; risk tracking number (assigned by the RM Coordinator); overall risk level (Low, Moderate, or High); product, subassembly, configuration item, process area, or template; risk level identifiers (process variance level and consequence levels for performance, schedule, and cost); requirement affected (PIDS, WBS, A-Spec, or other, with security classification); dates identified and submitted; risk originator and assigned owner (name, phone, IPT); risk description in IF/THEN/IN ADDITION format; risk level rationale; and risk mitigation recommendations, including what actions will be or have been taken, when, and how the risk could have been avoided.]

[Figure 8-3: Program Risk Mitigation Waterfall Chart Example - risk mitigation activities (simulations to evaluate subsystem interactions and timing issues, simulations to evaluate target sets and environment effects, baseline design development, hardware and software in place for pre-EMD simulations, consolidation of team structure and supplier agreements, supporting analyses and design studies, detailed trade studies and identification of alternatives, validation of trade study decisions with the customer, extensive simulations and hardware-in-the-loop testing, a TAAF program with selected subsystems, and operational testing and simulations) plotted against program milestones and phases (PDR, CDR, PRR, MS II, MS III; PD&RR, EMD, LRIP/Production) and years, with the assessed risk level stepping down from High through Moderate to Low as each activity is accomplished and the risk is reassessed.]

Figure 8-4 is a Sample Watch List, based on the WBS approach, that rolls up technical risk, schedule, and cost against the Allowable Unit Production Cost (AUPC) associated with program risk, with the cost impact quantified for each risk. The expected cost is computed by adding the specific risk mitigation cost to the program schedule cost (schedule slip times program burn rate) and multiplying the sum by the probability of the risk occurring. Expected costs of individual risks are totaled to provide an expected program risk cost; a small worked example follows the figure placeholder.

[Figure 8-4: Sample Watch List - risk items (e.g., antenna, target vibration) with their risk assessments, mitigation costs, schedule impacts, probabilities, and expected costs.]
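The expected-cost roll-up just described can be written out as a short calculation. The risk names, probabilities, and dollar figures below are invented for illustration.

watch_list = [
    # (risk, probability, mitigation cost $, schedule slip months, burn rate $/month)
    ("Antenna gain shortfall", 0.4, 250_000, 2, 400_000),
    ("Target vibration", 0.2, 100_000, 1, 400_000),
]

expected_total = 0.0
for name, p, mitigation, slip_months, burn_rate in watch_list:
    # expected cost = probability x (mitigation cost + schedule slip x burn rate)
    expected = p * (mitigation + slip_months * burn_rate)
    expected_total += expected
    print(f"{name:24s} expected cost ${expected:,.0f}")

print(f"{'expected program risk cost':24s} ${expected_total:,.0f}")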
Chapter 9: Use Independent Assessors

What is the Relationship Between Independent Assessors and Technical Risk?

A disciplined approach to technical risk management can be extremely challenging in today's defense acquisition environment. Not only are Government and industry organizations experiencing significant downsizing, but many of those leaving their jobs are the most experienced personnel. As a result of Acquisition Reform and the revised DoD 5000 series documents, the policies and procedures governing the procurement of military hardware and software have undergone major changes (e.g., the transition to commercial practices and the emphasis on Commercial-Off-the-Shelf/Non-Developmental Items). The loss of experienced people, coupled with increasing procedural and technical complexity, means greater risk for Program Managers (PMs). One proven way of reducing this risk is to utilize a team of experienced people to conduct independent assessments of program health.

Few would argue that obtaining a second opinion constitutes a common-sense approach prior to making a critical decision regarding important matters such as major surgery, career changes, or financial investments. In defense acquisition, PMs are faced with many critical decisions, and second opinions (e.g., independent assessments) play a key advisory role when those decisions are made, including those pertaining to program technical and management risk.

DoDD 5000.1 states the requirement for independent assessments: "Assessments independent of the developer and the user are extremely important to ensure an impartial evaluation of program status. Consistent with statutory requirements and good management practice, DoD shall use independent assessments of program status. Senior acquisition officials shall consider these assessments when making decisions. Staff offices that provide independent assessments shall support the orderly progression of programs through the acquisition process. Independent assessments shall be shared with the Integrated Product Team so that there is a full and open discussion of issues, with no secrets." Although the first sentence of this quote implies assessments directed by the Director, Operational Test and Evaluation, the remainder of the quote contains generic direction on assessments that is applicable to all aspects of a program, including risk.

Program Experience

In recent years, several Navy programs have benefited noticeably from recommendations provided by independent assessors. These programs include the F/A-18 Aircraft, the Consolidated Automated Support System (CASS), the New Attack Submarine (NSSN), and the Surface Combatant Twenty-First Century (SC-21). The scheduling of the assessments varied based on each program's needs at a particular time: NSSN and SC-21 scheduled independent assessments prior to major milestone reviews; CASS scheduled an independent technical review to address poor initial Operational Test and Evaluation results and the need to improve design and manufacturing processes; and F/A-18 uses an independent assessment team on a continuous basis. Irrespective of the timing, independent assessments have proven to be a valuable tool for a better understanding of Navy program risks.

Outside of DoD, NASA is a strong proponent of independent and timely technical reviews (e.g., reviews of analyses, many highly specialized, pertaining to the reliability/design process).
NASA Practice No. PD-AP-1302 notes that approximately 40 percent of all analyses contain significant shortcomings when performed for the first time. Roughly half of these are defects or omissions in the analysis itself, and not design defects. The other 20 percent represent design defects, the severity of which varies from minor to mission-catastrophic. The only proven method for detecting these defects is an independent review of the design details by an impartial, objective, competent peer group in the appropriate technical field.

Tasks

In preparing for an independent risk assessment of a specific program, the tasks assigned to the assessors should include, but not be limited to, the following:
- Review the program's risk management approach and the status of risk assessments conducted to date.
- Review the Mission Need Statement, directed actions, Cost and Operational Effectiveness Analysis results, the Acquisition Strategy, the Operational Requirements Document, and other relevant program documentation (e.g., the Design Reference Mission Profile) for any known or potential risk areas (critical processes) which may have been overlooked.
- Examine all advanced or emerging technologies being considered and determine any known or potential risk areas (critical processes) which may have been overlooked.
- Prepare a final report for the PM concerning the adequacy of risk management efforts to date, readiness for the next milestone review, and recommendations for improvement.

In summary, an independent risk assessment is a high-payoff tool for the PM's use in determining the adequacy of his risk management process. Assessors should be independent of the PM's staff and selected on the basis of their professional reputation, their in-depth experience, and their willingness to serve as honest brokers on behalf of the program being reviewed.

Chapter 10: Stay Current on Risk Management Initiatives

What is the Relationship Between Risk Management Initiatives and Technical Risk?

The Program Manager (PM) and staff often are not aware of, and consequently do not take advantage of, continual advancements and new initiatives in best practices and analytical tools. Additionally, state-of-the-art expertise, such as that available from the ManTech Program Centers of Excellence, the DoD Information Analysis Centers, and the Government-Industry Data Exchange Program (GIDEP), can provide valuable lessons learned to reduce technical risks.

Awareness of continual advancements and new initiatives in best practices and analytical tools provides opportunities for more effective development and manufacture of products meeting customer requirements with less technical risk. These initiatives generally have focused on a better understanding of customer requirements, improvements in the design and manufacturing processes, and methods for reducing variation in the product and related processes. They enhance the achievement of robust designs and aid in further reducing the variations that occur in products, their performance, and associated processes. The following are brief descriptions of several initiatives that enable significant technical risk avoidance.

Quality Function Deployment

Quality Function Deployment (QFD) is an analysis technique that enables the identification and systematic translation of customer requirements into actions required by the contractor to meet the customer's desires. The technique is based on the use of a matrix to compare what the customer wants (the "whats") to the alternative ways (the "hows") by which the contractor plans to provide it, thereby reducing technical risk.
Although a QFD analysis may be adequate with the use of only a top-level matrix, the cascading of the matrix to lower indenture levels as the design progresses is necessary to identify critical process parameters that must be controlled to meet customer requirements. This cascading also provides a trail from customer requirements to the process parameters that need to be controlled. Identification of the critical requirements and processes from among many, together with the required controls, are key ingredients for focusing the technical risk management effort. A key to the successful use of this tool is its integration of design considerations and process activities. Shorter development-to-production time, with fewer engineering and manufacturing changes, are benefits resulting from the use of this tool to manage technical risk.

[Figure 10-1: QFD Technique - four cascading "whats versus hows" matrices: Basic Requirements vs. Design Features; Design Features vs. Parts Selection; Parts Selection vs. Processes; and Processes vs. Process Parameters. Example entries include dark color/black paint, enamel/spray, and small size/CMOS. Source: Reliability Analysis Center START Publication, Quality Function Deployment, Volume 4, Number 1.]

The cascading matrixes shown in Figure 10-1 conceptually illustrate one simplified use of the QFD technique:
- The first matrix matches the customer's requirements (the "whats," identified in rows) to the design features (the "hows," identified in columns) intended to meet the requirements.
- The "hows" become the "whats" (design features) of the second matrix, set against new "hows" (the parts selected to implement the "whats").
- The parts selected then become the "whats" (parts selection) of the third matrix, plotted against the "hows" of the processes used to create the parts.
- Finally, the processes become the "whats" (processes) of the fourth matrix, where the "hows" are the process parameters that must be controlled.

Thus the cascaded matrixes translate the customer's requirements into a set of process parameters to be controlled. One such translation in the last matrix relates the customer's requirement for a dark color to the pressure of a spray-paint nozzle; a small sketch of this cascade appears below. Investment of time to perform this analysis early in the development program results in insight into the customer's requirements, an understanding of critical process parameters, and a shorter overall product development cycle, accomplished at lower risk.
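The sketch below follows one requirement down the cascade of matrices described above. The matrix entries echo the dark-color/spray-paint illustration in the text; the data structure itself is an assumed, simplified representation.

cascade = [
    # (whats level, hows level, {what: list of hows})
    ("customer requirement", "design feature", {"dark color": ["black paint"]}),
    ("design feature", "parts selection", {"black paint": ["enamel"]}),
    ("parts selection", "process", {"enamel": ["spray application"]}),
    ("process", "process parameter", {"spray application": ["nozzle pressure"]}),
]

def trace(requirement):
    # Follow one customer requirement down to the process parameters to control.
    current = [requirement]
    for _, _, matrix in cascade:
        current = [how for item in current for how in matrix.get(item, [])]
    return current

print("dark color ->", trace("dark color"))  # -> ['nozzle pressure']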
Taguchi Techniques

The Taguchi techniques are often used to reduce variation in critical areas identified, for example, through QFD analysis. The Taguchi techniques are innovative approaches to the statistical design of experiments, focused on reducing variation from a targeted value, not from specification limits. The approach focuses on:
- Identifying the critical factors, both controllable and not, which affect a process or product; and
- Reducing the product's or process's sensitivity to variations from various sources, thereby improving quality at optimum cost.

After identifying the ideal functions or characteristics of a product or process, team brainstorming is used to identify all possible factors that may affect it and to select the most important ones to analyze or test. These characteristics and factors are included in a Taguchi orthogonal array to determine optimum solutions to improve quality or reduce variation. The values of these factors or parameters are varied for all characteristics while observing the deviation from the desired target. The resulting statistical information allows the development of a robust product or process that meets the customer requirements, is produced at lower risk, and is reproducible at the lowest cost.

The Taguchi Quality Loss Function (QLF) provides an approximation of the monetary loss caused when a product or process function or characteristic deviates from its targeted value. Deviation from the targeted value results in a decrease in quality, customer dissatisfaction, and increased loss. The loss can be defined in a broad manner and may include the "hidden factory," performance deficiencies, timeliness, cost increases, customer complaints, warranty costs, market share, reputation, etc. (Ref.: ASI Press, Taguchi Methods and Quality Function Deployment, 1998.)

Technical Performance Measurement

Technical Performance Measurement (TPM) is, simply speaking, a time-phased progress plan for the achievement of critical Technical Performance Parameters (TPPs). TPPs selected for inclusion should indicate, when achieved, progress in key areas of technical risk reduction and expected program success. TPPs can be related to hardware, software, human factors, logistics, or any product or functional area of a system.

TPM helps the PM remain focused on the critical technical elements of a system or program, since decisions made more knowledgeably and quickly in these key areas keep a program on track for successful completion. Figure 10-2 illustrates the methodology used to establish a TPM technical baseline and track progress against that baseline.

[Figure 10-2: TPM Methodology Overview - establishing program requirements (select technical parameters such as performance, staffing, reliability, and supportability; assign weights; link to the program plan and risk profile; plan expected TPP progress over time) and conducting engineering assessments (tests, analyses, etc.) whose achieved values are plotted against the planned profile. Based on an OUSD(A&T) API/PM presentation.]

Properly established and implemented, TPM facilitates identification of, and response to, system/program risks by comparing actual performance to planned TPPs, evaluating significant variances, and instituting corrective actions as needed. More specifically:
- Achieved values (actual test or analytical results) are compared to the progress plan's TPPs to identify variances. Variances can indicate the level of risk associated with particular processes or elements, depending on their degree.
- Program success or failure can be estimated or projected by considering the combined effect of the risk associated with multiple achieved values.
- Corrective actions should be implemented based on assessed levels of risk.
- Achieved values should be repeatedly calculated in order to track the success of corrective actions. Repeated calculations of achieved values also permit the detection of new risks before their effects on cost and/or schedule are irrevocable.
- Achieved values that meet TPPs indicate an effective risk-handling strategy and an on-track program.

In Figure 10-3, the horizontal line at 325 lbs is the planned final weight for the component (the TPP). Sloping line A indicates the actual achieved progress toward meeting this specific parameter, while sloping line B depicts the expected progress in weight reduction. The variance (the shaded area between actual and expected) represents the degree of risk while progress is being achieved; a small sketch of this planned-versus-achieved comparison follows the figure placeholder.

[Figure 10-3: Example TPM Progress Chart - component weight (lbs) plotted against time. Based on an OUSD(A&T) API/PM presentation.]
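The sketch below tracks a TPP the way Figure 10-3 does: a planned profile versus achieved values, with the gap treated as a risk signal. Only the 325-lb target echoes the text; the months and weight values are invented.

TARGET_LBS = 325
months = ["APR", "MAY", "JUN", "JUL", "AUG"]
planned_profile = [380, 365, 350, 338, 325]   # expected progress toward the TPP
achieved = [385, 375, 362, 352, None]         # None = not yet measured

for month, plan, actual in zip(months, planned_profile, achieved):
    if actual is None:
        continue
    variance = actual - plan
    print(f"{month}: achieved {actual} lb vs. planned {plan} lb (variance {variance:+d} lb)")
# A variance that is not converging toward zero by the final milestone indicates
# rising risk of missing the 325-lb TPP.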
TPM System (TPMS) software is available to help PMs by automating the tasks associated with TPM. The TPMS software is a Government-owned tool sponsored by the Office of the Under Secretary of Defense (Acquisition & Technology), Acquisition Program Integration & Performance Management. TPMS is free to all Defense Departments and Agencies; additional information and the free software are available at http://www.acq.osd.mil/api/tpm.

Earned Value Management

Earned Value Management is a technique that relates resource planning to schedules and to technical performance requirements. All work is planned, budgeted, and scheduled in time-phased "planned value" increments, which constitute a cost and schedule measurement baseline. There are two major objectives of an earned value system:
- Encourage contractors to use effective internal cost and schedule management control systems; and
- Provide the customer timely data produced by those systems for determining contract status.

Earned value management is useful in monitoring the effectiveness of risk-handling actions in that it provides periodic comparisons of the actual work accomplished, in terms of cost and schedule, with the work planned and budgeted. These comparisons are made using a performance baseline that is established by the contractor and the PM at the beginning of the contract period. This is accomplished through the Integrated Baseline Review process. The baseline must capture the entire technical scope of the program in detailed work packages and include the schedule to meet the requirements and the resources to be applied to each work package. Specific risk-handling actions should be included in these packages.

The periodic earned value data can provide indications of risk and of the effectiveness of risk-handling actions. When variances in cost or schedule begin to appear in the work packages containing risk-handling actions, the appropriate Integrated Product Teams can analyze the data to isolate the causes of the variances and gain insight into the need to modify the risk-handling actions; a small sketch of these variance calculations follows. The benefits to project management of the earned value approach come from the disciplined planning conducted and from the availability of metrics that show real variances from plan, so that the necessary corrective actions can be generated. Detailed implementation guidance may be found in the Earned Value Management Implementation Guide, NAVSO Pamphlet 3627, Revision 1 of 3 Oct 97.
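A minimal sketch of the basic earned-value comparisons used to watch risk-handling work packages follows, using the standard planned-value (BCWS), earned-value (BCWP), and actual-cost (ACWP) quantities. The work package names and dollar values are invented for illustration.

work_packages = [
    # (name, BCWS planned value $, BCWP earned value $, ACWP actual cost $)
    ("Risk mitigation: cooling redesign", 120_000, 95_000, 130_000),
    ("Risk mitigation: software stress tests", 60_000, 62_000, 58_000),
]

for name, bcws, bcwp, acwp in work_packages:
    schedule_variance = bcwp - bcws  # negative = behind schedule
    cost_variance = bcwp - acwp      # negative = over cost
    print(f"{name}: SV=${schedule_variance:+,}, CV=${cost_variance:+,}")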
Chapter 11
Evaluate New Acquisition Policies

What is the Relationship Between Changes in Acquisition Policies and Technical Risk?

DoD establishes rules and regulations that apply to all DoD agencies and programs. Some of these requirements are mandated by public law, which the DoD is required to implement; others are instituted in an effort to bring efficiencies into the DoD. Practices and policies that have been in place for years are now superseded or have been significantly changed. It is therefore critical that someone in the program office be assigned the responsibility not only to become familiar with each new acquisition policy but also to understand how it will impact technical risk. Without this understanding, the risks to the program may be high.

Cost As An Independent Variable

In December 1995, the Under Secretary of Defense (Acquisition and Technology) introduced a new concept entitled Cost as an Independent Variable, or CAIV. The intent of CAIV is to provide the customer/warfighter with highly capable systems that are affordable over their life cycles. CAIV is based on the principle that the best time to reduce Total Ownership Cost (TOC) is early in the acquisition process, and that initial cost/performance trade-off analyses should be conducted before the operational requirements and acquisition approach are finalized.

CAIV and Risk Management

CAIV requires that Program Managers (PMs) establish aggressive cost objectives. The ability to set and achieve such cost objectives depends significantly on early trade-offs of performance versus cost. The maximum level of acceptable risk is one of the factors that help to define an aggressive cost objective. Risks in achieving both performance and aggressive cost goals must be clearly recognized and actively managed through continuing iterations of cost/performance/schedule/risk trade-offs, identification of key design, test, and manufacturing process uncertainties, and development and implementation of solutions. Examples of the trade-off considerations involved include:

- Aggressive cost reduction measures, such as the elimination of design analyses, reduction in trade studies, or the elimination of subsystem testing, may significantly increase the risk of meeting performance and schedule thresholds.
- Driving the design to achieve maximum performance may significantly increase the risk of meeting cost and schedule thresholds, perhaps with little operational need for the extra performance.

User participation in trade-off analyses is essential to attain a favorable balance between performance, cost, schedule, and risk. The PM and user representatives should identify risk-driving and cost-driving requirements during the generation of the Operational Requirements Document (ORD) in order to know where trade-offs may be necessary. Integrated Product Teams (IPTs), especially during trade studies, should address best practices and their impact on program cost and schedule risks. The approval and funding of risk-handling options should be part of the process that establishes CAIV cost and performance goals.

Improving risk management will enable PMs to support the CAIV concept of setting early cost objectives that are challenging but realistic. Program planning and Integrated Baseline Reviews should be conducted with an understanding of the scope of technical work required to manage program risks. TOC objectives should be developed and included in the ORD, solicitations, and contracts. Several Best Practices and Watch-Out-Fors are shown in Table 11-1.

Table 11-1. CAIV Best Practices and Watch-Out-Fors

Best Practices:
- An aggressive and structured risk management program is implemented to manage trades between performance, cost, and schedule.
- Out-year resources identified.
- Production and Operating and Support (O&S) cost objectives included in the Request for Proposal (RFP).
- Incentives for achieving cost objectives included in the RFP and contract, relative to total contract dollars.
- Mechanisms for innovation to reduce production and O&S costs in place and operating.
- Allocation of cost objectives provided to IPTs and key suppliers.
- Identification and implementation of new technologies and manufacturing processes that can reduce costs.
- Identification of procedural/process impediments to cost reduction measures.
- Establishment of a strong relationship with the vendor base, including a sound incentives structure.

Watch Out For:
- Cost objectives not defined or not consistent with program requirements and projected fiscal resources.
- No Government or contractor management commitment to achieve cost objectives.
- Technical IPT members not participating in defining alternative methods of achieving requirements.
- Watch-Out-Fors not addressed and evaluated to achieve acceptable program technical risk.
- Design-To-Unit-Cost (DTUC) in EMD that does not consider trade-offs with the various levels or types of warranties that may arise during production award negotiations. Negotiations may result in increased warranty costs that exceed the program's planned allocations; additional design effort during EMD could mitigate this program risk.
- Source selection DTUC and cost of ownership based in part on a true bumper-to-bumper warranty during production, when a limited-maintenance type of contract may actually be the more likely outcome depending on the acquisition strategy.
- Expectations of contracting for a bumper-to-bumper warranty when program funding is not adequate.
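To make the trade-off iteration concrete, the sketch below scores a few hypothetical design alternatives against a notional CAIV cost objective and a maximum acceptable risk level. The alternatives, weights, and scoring scheme are invented for illustration only; they do not represent a prescribed CAIV method.

```python
# Hypothetical design alternatives: unit cost ($K), performance score (0-1),
# schedule slip (months), and assessed technical risk (0 = low, 1 = high).
alternatives = {
    "A: maximum performance":  {"cost": 950.0, "performance": 0.95, "slip": 6, "risk": 0.7},
    "B: balanced design":      {"cost": 780.0, "performance": 0.85, "slip": 2, "risk": 0.4},
    "C: aggressive cost cuts": {"cost": 640.0, "performance": 0.70, "slip": 0, "risk": 0.6},
}

COST_OBJECTIVE = 800.0        # aggressive CAIV unit-cost objective ($K), hypothetical
MAX_ACCEPTABLE_RISK = 0.65    # maximum level of acceptable risk, hypothetical

def tradeoff_score(alt, w_perf=0.5, w_cost=0.3, w_sched=0.1, w_risk=0.1):
    """Weighted figure of merit; higher is better. Weights are illustrative."""
    cost_score = min(COST_OBJECTIVE / alt["cost"], 1.0)    # 1.0 if at or under objective
    sched_score = max(0.0, 1.0 - alt["slip"] / 12.0)       # penalize slip, up to a year
    risk_score = 1.0 - alt["risk"]
    return (w_perf * alt["performance"] + w_cost * cost_score
            + w_sched * sched_score + w_risk * risk_score)

for name, alt in alternatives.items():
    feasible = alt["risk"] <= MAX_ACCEPTABLE_RISK
    print(f"{name}: score={tradeoff_score(alt):.2f} "
          f"{'(within risk limit)' if feasible else '(exceeds maximum acceptable risk)'}")
```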
Additional information may be obtained from the following on-line sources:
- OSD: http://www… (remainder of address illegible)
- Navy: http://www.acq-ref.navy.mil/turb014.htm

Environmental Risk Management

What goes into a weapon system, facility, or platform must eventually come out, whether during use, during refurbishment, at system retirement, or as a by-product. With this in mind, the goal of this section is to make PMs aware of the latest technologies and resources available to help bring, and retain, acquisition programs in compliance with Federal environmental mandates.

PMs are responsible for understanding the real and potential negative environmental effects caused by their programs throughout the entire life cycle of the system. They must eliminate, minimize, and mitigate as many of these effects as possible; effects that remain unaddressed may increase risk to the program. One of the most effective ways to minimize environmental risk and associated costs during the life cycle of a system is through Pollution Prevention, or P2.

Available Environmental Information

There is no single resource list of the most environmentally sound designs or processes for all situations. There are, however, quite a few joint Government/Industry efforts through which the PM can access the latest validated P2 engineering technologies for a great number of situations and uses. For instance:

Toxics Use Reduction Institute (TURI)
- Conducts technical research on reduction technologies and processes associated with the use of specific toxic substances.
- Provides data on the technical feasibility of reduction technologies and processes associated with the use of specific toxic substances.
- Provides technical support to, and/or facilitation of, the transfer of these technologies to industry.

P2 GEMS Internet Search Tool (managed by TURI)
- Contains management information about environmental technical concerns, environmental issues, and materials friendly to the environment, for facility planners, engineers, and managers.
- Allows search by keyword or by selection from one of four categories: (1) Product or Industry, (2) Chemical or Waste, (3) Management Tools, or (4) Processes.

Pacific Northwest Pollution Prevention Resource Center (PPRC)
- A nonprofit organization supporting projects that yield results for pollution prevention or for the reduction or elimination of the use of toxic substances.
- Maintains a series of tools, technical publications, and assistance networks, including the PPRC Research Projects Data Base, which provides technical information on (1) pollution prevention, (2) toxic material alternatives, (3) application methods and processes, and (4) energy-efficient technologies.

Enviro$ense Database (maintained as part of EPA's homepage)
- Contains extensive pollution prevention and compliance assurance data.
- Includes pollution prevention case studies, technologies, and information on alternatives to toxic solvents.
- Contains (1) a list of P2 programs nationwide, (2) P2 technical/research development data, and (3) the Solvent Substitution Data Base System (SSDS) umbrella, with additional P2 databases focused on specific environmental applications.

Enviro$ense DoN P2 Programs Database (maintained by the Navy)
- Provides on-line access to Navy efforts and best practices aimed at reducing the use of hazardous materials in existing operations and processes and at preventing the production of polluting agents.
- Includes model P2 plans, case studies, fact sheets, and other helpful material.

Solvent Handbook Database System (SHDS), maintained by DoE and DoD
- Identifies alternative solvents (i.e., alternatives to solvents currently restricted) and evaluates their performance, corrosive potential, air emissions, recycling capabilities, and compatibility with other materials.

Joint Group for Acquisition P2 Projects (JGAPP)
- Conducts technical research on Industry/Government P2 programs aimed at eliminating regulated materials from selected weapon systems, for example, alternatives to tin-lead surface finish on circuit boards and non-chrome primers.

ABM Environmental Homepage (maintained by ASN(RD&A) ABM)
- Links directly to related environmental resources, lists responsibilities, and lists environmental contract clauses and DoD Memoranda/Executive Orders related to environmental issues.

How Can PMs Get This Information?

All of the above tools and organizations can be reached via the World Wide Web and can be accessed through links on the ASN(RD&A) ABM Homepage. Dial up the ABM Homepage for quick results: http://www.abm.rda.hq.navy.mil.

Single Process Initiative

The Single Process Initiative (SPI) is a DoD acquisition reform initiative introduced in December 1995 as a means for contractors to replace multiple Government-unique business management and manufacturing practices with facility-wide (and, more recently, corporate-wide) practices. The goal of SPI is to reduce contract costs associated with unnecessary Government requirements and to move toward common acquisition practices within DoD and industry. SPIs are established based on expected return on investment, quality improvements, and/or strategic importance.

Single Process Initiative and Risk Management

When industry submits a proposal to replace or eliminate previously approved and successful military processes with commercial or company processes, technical risks may exist until the new processes have been proven satisfactory. To minimize these technical risks, the guidance in Table 11-2 should be considered.

Table 11-2. SPI Best Practices and Watch-Out-Fors

Best Practices:
- Joint Government and contractor Management Council teams consider: the unique needs and risks of each program; the benefits of common processes; and the robustness of the contractor's process controls.
- Prime contractor flowdown requirements to subcontractors are based on accepted industry standards or acceptable subcontractor common processes, vice unique in-house standards.
- Prime contractor reviews subcontractor SPI processes to assess adequacy or risk in meeting program objectives or requirements.
- Contractor demonstrates process control on proposed common processes before the proposed process is implemented.
- Government and the contractor precisely define the facility locations at which the contractor's proposed single processes apply.

Watch Out For:
- Potential cost and schedule impact of establishing common depot repair processes between competitive depots on tooling, test equipment, fixtures, and capitalized equipment.
- Contractor SPI processes that are not technically acceptable for meeting unique program requirements.
- A Management Council that does not keep the IPTs informed about the overall Management Council strategy and status.

The Assistant Secretary of the Navy (Research, Development and Acquisition), Acquisition Reform Office, is the Navy's lead office for SPI activities. For the latest information on approved SPIs, SPI status, etc., access the Navy and DCMC SPI databases at:
- Navy: http://www.acq-ref.navy.mil/spi.html
- DCMC: http://www.dcmc.hq.dla.mil/spi/spi.htm

Diminishing Manufacturing Sources & Material Shortages

Numerous parts vital to repairing and supporting older equipment are being discontinued by manufacturers every year. During fiscal year 1996, the Government-Industry Data Exchange Program (GIDEP) alone distributed 125 discontinuance notices affecting 50,600 parts. Systems designed for long operational periods are being supported by an industry where parts changeover and obsolescence are measured in months. Once a system is out of initial production, the contractor or parts manufacturers discontinue the parts or product lines, an issue that has become even more critical because of the reduced amount of Government insight under acquisition reform. This issue is referred to as Diminishing Manufacturing Sources and Material Shortages (DMSMS). Following are some DoD and industry resources and practices that the PM can use to reduce program risk relating to DMSMS.

DoD Resources

The following is a partial listing of DoD organizations supporting DMSMS efforts:

- Government-Industry Data Exchange Program (GIDEP). GIDEP is the centralized DoD repository for DMSMS information, cases, and solutions. GIDEP maintains notifications from manufacturers, original equipment manufacturers, and Government activities of items that are no longer being produced; DMSMS notices are issued for any type of discontinuance, obsolescence, or shortage. GIDEP also adds value to DMSMS notices by adding information from third-party sources. GIDEP DMSMS data includes alternate sources, manufacturer's data, and aftermarket suppliers, and GIDEP can also compare the user's bill of materials or parts list with parts listed in the database; when the user requests these services, a list of affected parts is returned for analysis. GIDEP membership is open to Government agencies and to suppliers to the U.S. and Canadian Governments. Information about GIDEP can be obtained from the GIDEP Operations Center, P.O. Box 8000, Corona, CA 91718-8000; http://www.gidep.org.

GIDEP provides links to several DoD and Industry DMSMS sites; a sample of these sites is provided below. For a more complete list, access the GIDEP home page and click on the DMSMS icon.

- DoD DMSMS Working Group. The DoD has established a DMSMS working group to assist program offices in finding DMSMS solutions. The team is comprised of members from the different DoD program offices and activities who have mutual DMSMS concerns.
- Defense Microelectronics Activity (DMEA). Maintains a centralized source for re-engineering components and establishing manufacturing sources for technology insertion. DMEA provides technologically correct and economically viable solutions for microelectronics obsolescence. http://www.dmea.osd.mil
- Defense Supply Center Columbus (DSCC). Formerly called DESC; provides DMSMS case information on recent discontinuances. http://www.dscc.dla.mil/programs/dsmms/index.html
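A program office can also run a quick first-pass screen of its own bill of materials against discontinuance notices it has collected before requesting a full analysis from services such as GIDEP. The sketch below illustrates such a cross-check; the part numbers, notices, and alternates are hypothetical, and the snippet does not connect to the actual GIDEP database.

```python
# Hypothetical program bill of materials (part numbers used in the system).
bill_of_materials = {"5962-8876001XA", "M38510/75702BCA", "CDR33BP101BJUS", "54ACT245DMQB"}

# Hypothetical discontinuance (DMSMS) notices collected from manufacturer and
# GIDEP-style sources: part number -> known alternates or aftermarket sources.
discontinuance_notices = {
    "M38510/75702BCA": ["SNJ54LS151J (possible substitute)"],
    "54ACT245DMQB": [],              # no alternate identified yet
    "RNC55H1002FS": ["RNC55H1002BS"],
}

def screen_bom(bom, notices):
    """Return parts from the BOM that appear in discontinuance notices,
    along with any known alternates, for further engineering analysis."""
    affected = {}
    for part in sorted(bom):
        if part in notices:
            affected[part] = notices[part]
    return affected

if __name__ == "__main__":
    for part, alternates in screen_bom(bill_of_materials, discontinuance_notices).items():
        action = (", ".join(alternates) if alternates
                  else "no alternate known: assess life-of-type buy, emulation, or redesign")
        print(f"{part}: {action}")
```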
Industry Resources

The following is a partial listing of private companies that will provide DMSMS solutions to subscribers and members:

- TACTech, Inc. Provides an information service that furnishes users with proprietary information, including life-cycle projections for microcircuits and discrete semiconductor devices used in military and Government systems. TACTech licenses its proprietary software and databases to Government contractors and Government agencies. The TACTech library contains information on over 100,000 active and obsolete devices, covering virtually all standard devices; the software identifies the life cycle of the parts and pending obsolescence. http://www.tactech.com
- Electronic Industries Alliance (EIA). The EIA maintains a database available on a subscription basis. http://www.eia.org
- Semiconductor Industries Association (SIA). SIA maintains a semiconductor obsolescence database. http://www.semichips.org

Best Practices

There are a number of possible solutions for DMSMS; all have been used with varying success. A combination of some of the following practices can assist the PM in reducing the risk of DMSMS impact to the program:

- Early in the acquisition cycle, program office personnel work with industry to perform continuous market research on industry trends.
- Because the quantities of components used in any DoD system are small compared to industrial and consumer markets, try to select Qualified Manufacturer's List parts and industrial-grade parts in high demand by industry.
- Make life-of-type buys.
- Use aftermarket suppliers.
- Use DMEA for technology insertion, which often reduces component count and the number of boards.
- Utilize open architecture design for new technology insertion.
- Maintain the ability to quantify the impact of discontinuance of hardware availability on deployed systems.
- Obtain data at a functional level, which is technology independent and complies with standard product descriptions, for use in future emulation of parts and assemblies.

Table 11-3, provided by the Joint STARS Program Office, illustrates a sample cost impact for various alternatives to solve DMS parts problems.

[Table 11-3. Sample cost impacts of DMS alternatives, comparing the possible provider (DMEA, HTT, DTC, etc.) against non-recurring engineering cost ($K per part) and a cost multiplier. Key: DMEA, Defense Microelectronics Activity, Sacramento, CA; HTT, Hardware Technology Center, Ogden, UT; DTC, DMSMS Technology Center, Crane, IN; GEM, Generalized Emulation of Microcircuits, Sarnoff Research Center, Princeton, NJ; ITD, Institute for Technology Development.]

Configuration Management

Configuration Management (CM) is defined as a process for establishing and maintaining consistency of a product's performance and its functional and physical attributes with its design and its operational use throughout its life. As affirmed by MIL-HDBK-61, the intent of CM is to avoid cost and minimize risk. Those who consider the small investment in the CM process a cost driver may not be considering the compensating benefits of CM and may be underestimating the cost, schedule, and technical risk of an inadequate or delayed CM process.

DoD 5000.2-R states the requirement for a configuration management process to control the system products, processes, and related documentation. The configuration management effort includes identifying, documenting, and verifying the functional and physical characteristics of an item, recording the configuration of an item, and controlling changes to an item and its documentation. It shall provide a complete audit trail of decisions and design modifications.

CM is a fundamental process that must be applied for long-term product success, regardless of which organization (e.g., Government or contractor) monitors its implementation.
The CM process encompasses, to some degree, every item of hardware and software, down to the lowest bolt, nut, and screw or the lowest software unit. CM begins during development and, properly implemented, will contribute to the preparation of high-quality design release drawings. Such drawings should represent a stable design configuration that is suitable for production, installation, maintenance, and the logistics support necessary to ensure that the Operating Forces receive, and are able to maintain, weapon systems that work when needed. After delivery of equipment to the warfighters, CM continues whenever this equipment is modified or upgraded, because any modification or upgrade program is essentially a new development and production effort requiring the same CM process discipline as a "new start" program.

With the implementation of Acquisition Reform (AR), DoD policies regarding the CM process have changed. The responsibility for the CM process is now shared even more between DoD and the contractor and typically is no longer the sole responsibility of the program manager. In this regard, AR does not diminish the importance of CM; rather, it has resulted in a reconsideration of the degree to which the CM process should be controlled by the Government compared to the contractor. Significant authority for configuration control may be delegated to contractors during all phases of the life cycle, depending on such factors as the acquisition strategy, the maintenance concept, and the associated Technical Data Package (TDP). However, DoD ultimately is responsible for the performance and configuration of the systems acquired. The procuring Government agency is always the configuration control authority for the top-level attributes, as well as for lower-level performance and design attributes, depending on the aforementioned factors of acquisition strategy, maintenance concept, and TDP.

This shift in responsibility, along with the move to Commercial-Off-The-Shelf/Non-Developmental Items (COTS/NDI), has decreased the program manager's involvement with the configuration of the system. This has increased program risks and the need for program management offices to plan for and understand the CM process and to ensure supportability and interoperability of military equipment and software. The following provides some of the more significant items to be considered when managing the configuration of a system under the new AR policies to reduce program risks.

Documentation. Acquisition reform has made a significant change in the types of configuration documents used to specify configuration items (CIs). DoD now specifies performance requirements and, in most cases, leaves design solutions to the contractor. The types of documentation needed at the system level are determined by the DoD procuring agency, whereas the contractor may be delegated the responsibility to choose the documentation needed below the system level. DoD policy indicates a preference for products meeting performance requirements rather than detailed specifications, wherever possible.

Design Solutions. Acquisition Reform and the latest DoD 5000 series have provided contractors the opportunity to prepare a design solution most suitable to meeting the operational requirement. It is important for the DoD program manager to recognize that there will be a great deal of diversity in the methodologies employed by various contractors; consequently, an early emphasis on CM process discipline will pay dividends in the long run, e.g., by ensuring compatibility, maintainability, and supportability at all levels of repair.
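Whatever the division of control between the Government and the contractor, the underlying CM record keeping follows the same pattern: a baselined configuration item plus a traceable history of approved changes. The sketch below shows one minimal, hypothetical way to represent such a record; it is illustrative only and does not reproduce any DoD or MIL-HDBK-61 data schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EngineeringChange:
    """One approved change against a baselined configuration item."""
    change_id: str
    description: str
    approved_by: str   # configuration control authority for this attribute level
    effectivity: str   # e.g., unit/serial numbers or lot in which the change applies

@dataclass
class ConfigurationItem:
    ci_id: str
    nomenclature: str
    baseline_document: str                  # e.g., performance spec or drawing number
    changes: List[EngineeringChange] = field(default_factory=list)

    def incorporate(self, change: EngineeringChange) -> None:
        """Record an approved change, preserving the audit trail."""
        self.changes.append(change)

    def audit_trail(self) -> List[str]:
        return [f"{c.change_id}: {c.description} "
                f"(approved by {c.approved_by}, effectivity {c.effectivity})"
                for c in self.changes]

# Hypothetical usage
ci = ConfigurationItem("CI-042", "Receiver assembly", "PRF-RCVR-001 Rev B")
ci.incorporate(EngineeringChange("ECP-017", "Replace obsolete A/D converter",
                                 "Program office CCB", "Units 25 and on"))
print("\n".join(ci.audit_trail()))
```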
Configuration Control Authority. Configuration control is the process used by contractors and program managers to establish the configuration baseline and to manage the preparation, justification, evaluation, coordination, disposition, and implementation of proposed engineering changes and deviations to affected CIs and baselined configuration documentation. The DoD needs to take delivery of, and control, product configuration documentation at a level of detail commensurate with the operational support and reprocurement strategies for a given program. For reparable CIs, design disclosure documentation is required wherever the CI will be operated, maintained, repaired, trained, supported, and reprocured. A significant factor in this determination is data that is properly established as "Contractor Proprietary." Authority rests with the program manager to decide whether it is necessary and cost effective to buy rights to the data, do without it, develop new data or CIs, or return to the original contractor whenever reprocurement or support of the CI is needed.

Engineering Release. Program managers should ensure that both contractors and DoD activities follow engineering release procedures that record the release of, and retain records of, approved configuration documentation. These records ensure:
- An audit trail of CI documentation status and history.
- Verification that engineering documentation has been changed to reflect the incorporation of approved changes and to satisfy the requirements for traceability of deviations and engineering changes.
- A means to reconcile engineering and manufacturing data, to assure that engineering changes have been accomplished and incorporated into deliverable units of the CIs.

Interface Management. Program managers normally have interface requirements with other systems. Those interfaces constitute design constraints imposed on the programs. As the system is defined, other interfaces between system components become apparent. All interfaces need to be identified and documented (e.g., in Interface Control Documents, ICDs) so that their integrity may be maintained through a disciplined configuration control process.

Supportability. AR initiatives, such as the emphasis on COTS/NDIs, have changed the traditional methods of supporting a system during the production and operational phases. The system supportability concept is a major decision factor in determining the extent of the DoD's CM involvement and the extent of the TDP CDRL requirement. The following should be considered when planning for system supportability, maintenance, and risk reduction:

- The RFP proposal preparation instructions (Section L) should have CM as a key management and past performance discriminator. The weighting of the RFP evaluation criteria (Section M) should reflect the importance of an effective, documented contractor CM process as a risk mitigator.
- Interface, interoperability, and coordination requirements are defined for the LRUs/parts consistent with the maintenance philosophy.
- The maintenance plan is a primary driver for the level of configuration control and support requirements. Coordinating CM requirements with the maintenance plan, support and maintenance planning, and logistics personnel is imperative.
- The program manager retains configuration change control authority on changes that impact compatibility, life, reliability, interchangeability, form, fit, function (F3), and safety.
- Program plans and budgets should include early planning for purchase of the TDP, as appropriate.
- Items provided under a performance specification at different times or from different suppliers should be interchangeable, but may not be identical in internal design. Where appropriate, bidders should be provided with the specific dimensional, material, manufacturing, and assembly information needed to supply identical items with each reprocurement.
- When the commercial items ordered or offered have been wholly or partially developed with private funding, the commercial supplier is generally willing to provide only F3 information. This information includes such items as brochures, operating and training manuals, and organizational maintenance technical manuals. Suppliers are generally not willing to provide the Government with the design and manufacturing data necessary for a competitor to build the same product in quantity or to conduct major repairs or rebuilds.
- Technical Data Packages contain detailed configuration data down to the lowest replaceable/repairable units (LRUs) or parts, consistent with the maintenance philosophy. Consider purchasing the Technical Data Package when the following apply:
  - Upgrades and follow-ons to the system will be open bid, with the possibility of another contractor being prime for follow-on contracts.
  - Dual depots are used for maintenance.
  - The system is largely a COTS/NDI system, which will normally require technology refresh.

APPENDIX A: Extracts of Risk Management Requirements in the DoD 5000 Series Documents (Issued 15 March 1996)

DoDD 5000.1, Defense Acquisition

- Section D, paragraph 1.d, Risk Assessment and Management: PMs and other acquisition managers shall continually assess program risks. Risks must be well understood, and risk management approaches developed, before decision authorities can authorize a program to proceed into the next phase of the acquisition process. To assess and manage risk, PMs and other acquisition managers shall use a variety of techniques, including technology demonstrations, prototyping, and test and evaluation. Risk management encompasses identification, mitigation, and continuous tracking, and control procedures that feed back through the program assessment process to decision authorities. To ensure an equitable and sensible allocation of risk between Government and industry, PMs and other acquisition managers shall develop a contracting approach appropriate to the type of system being acquired.

DoD 5000.2-R, Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information System (MAIS) Acquisition Programs

- Part 1, Section 1.1, Purpose: This part establishes a general model for acquisition programs. The model acknowledges that every acquisition program is different. The PM and MDA shall structure the program to ensure a logical progression through a series of phases designed to reduce risk and provide adequate information for decision-making.

- Part 1, Section 1.2, Overview of the Acquisition Management Process: The acquisition process shall be structured in logical phases separated by major decision points called milestones. Threat projections, system performance, and risk management shall be major considerations at each milestone decision point, including the decision to start a new program.

- Part 1, Section 1.4.2, Phase 0, Concept Exploration: Phase 0 typically consists of short-term concept studies. The focus is to define and evaluate the feasibility of alternative concepts and to provide a basis for assessing the relative merits (i.e., advantages and disadvantages, degree of risk) of these concepts at the next milestone decision point.
- Part 1, Section 1.4.3, Phase I, Program Definition and Risk Reduction: During this phase the program shall become defined as one or more concepts, design approaches, and/or parallel technologies are pursued as warranted. Assessments shall be refined. Prototyping, demonstrations, and early operational assessments shall be considered and included as necessary to reduce risk, so that technology, manufacturing, and support risks are well in hand before the next decision point.

- Part 2, Section 2.3, Requirements Evolution: Thresholds and objectives are defined below. The values for an objective or threshold, and the definitions for any specific parameter contained in the ORD, TEMP, and APB, shall be consistent. (1) Threshold: the minimum acceptable value to satisfy the need. The spread between objective and threshold values shall be individually set for each program, based on characteristics of the program (e.g., maturity, risk, etc.). (2) Objective: that value desired by the user and which the PM is attempting to obtain.

- Part 3, Section 3.2.2.2, APB Content: The APB shall contain only the most important cost, schedule, and performance parameters: (1) Performance, (2) Schedule, (3) Cost. In all cases the cost parameters shall reflect the total program and be realistic cost estimates based on a careful assessment of risks and realistic appraisals of the level of costs most likely to be realized.

- Part 3, Section 3.2.3, Exit Criteria: MDAs shall use exit criteria to establish goals. Exit criteria will normally be selected to track progress in important technical, schedule, or management risk areas.

- Part 3, Section 3.3, Acquisition Strategy: Each PM shall develop and document an acquisition strategy roadmap for program execution. Essential elements include risk management.

- Part 3, Section 3.3.1.3, Industrial Capability: The PM shall structure the acquisition strategy to promote sufficient program stability to encourage industry to invest, plan, and bear risks. The program acquisition strategy shall analyze the industrial capability to design, develop, produce, and support the program. This analysis shall identify DoD investments needed to create new industrial capabilities and the risks of industry being unable to provide program manufacturing capabilities at planned cost and schedule.

- Part 3, Section 3.3.2, Cost, Schedule, and Performance Risk Management: The PM shall establish a risk management program for each acquisition program to identify and control performance, cost, and schedule risks. The risk management program shall identify and track risk drivers, define risk abatement plans, and provide for continuous risk assessment throughout each acquisition phase to determine how risks have changed. Risk reduction measures shall be included in cost/performance trade-offs, where applicable. The risk management program shall plan for backups in risk areas and identify design requirements where the performance increase is small relative to cost, schedule, and performance risk. The acquisition strategy shall include identification of the risk areas of the program and a discussion of how the PM intends to manage those risks.

- Part 3, Section 3.3.3.2, Cost Management Incentives: RFPs shall be structured to incentivize the contractor to meet or exceed cost objectives. Whenever applicable, risk reduction through the use of mature processes shall be a significant factor in source selection.

- Part 3, Section 3.3.4, Contract Approach: The acquisition strategy shall discuss the types of contracts contemplated for each succeeding phase, including considerations of risk assessment and reasonable risk sharing by Government and contractors.
- Part 3, Section 3.3.4.1, Competition: PMs and contracting officers shall provide for full and open competition unless one of the limited statutory exceptions applies. The PM shall consider component breakout, which shall be done when there are significant cost savings and when the technical or schedule risk of furnishing Government items to the prime contractor is manageable.

- Part 3, Section 3.3.5.6, Information Sharing and DoD Oversight: DoD oversight activities shall consider all relevant and credible information that might mitigate risks and the need for DoD oversight.

- Part 3, Section 3.4, Test and Evaluation: Test and evaluation programs shall be structured to integrate all test and evaluation activities conducted by different agencies as an efficient continuum. All such activities shall be part of a strategy to provide information regarding risk and risk mitigation.

- Part 3, Section 3.4.1, Test and Evaluation Strategy: Test and evaluation planning shall begin in Phase 0, Concept Exploration. (6) Early testing of prototypes in Phase I, Program Definition and Risk Reduction, and early operational assessments shall be emphasized to assist in identifying risks.

- Part 3, Section 3.4.2, Development Test and Evaluation: Development test and evaluation (DT&E) programs shall (3) support the identification and description of design technical risks, and (4) assess progress toward meeting Critical Operational Issues, mitigation of acquisition technical risk, achievement of manufacturing process requirements, and system maturity.

- Part 3, Section 3.4.3, Certification of Readiness for Operational Test and Evaluation: In support of this certification, risk management measures and indicators, with associated thresholds, which address the performance and technical adequacy of both hardware and software, shall be defined and used on each program. A mission impact analysis of criteria and thresholds that have not been met shall be completed prior to certification for operational tests.

- Part 3, Section 3.5.1, Life Cycle Cost Estimates: The life cycle cost estimates shall be (4) neither optimistic nor pessimistic, but based on a careful assessment of risks and reflecting a realistic appraisal of the level of cost most likely to be realized.

- Part 4, Section 4.2, Integrated Process and Product Development: It is critical that the processes used to manage, develop, manufacture, verify, test, deploy, operate, support, train people, and eventually dispose of the system be considered during program design.

- Part 4, Section 4.3, Systems Engineering: The PM shall ensure that a systems engineering process is used to translate operational needs and/or requirements into a system solution that includes the design, manufacturing, test and evaluation, and support processes and products. The systems engineering process shall establish a proper balance between performance, risk, cost, and schedule. The systems engineering process shall (3) characterize and manage technical risks. The key systems engineering activities that shall be performed include (4) System Analysis and Control: system analysis and control activities shall be established to serve as a basis for evaluating and selecting alternatives, measuring progress, and documenting design decisions. This shall include (b) the establishment of a risk management process to be applied throughout the design process. The risk management effort shall address the identification and evaluation of potential sources of technical risks, based on the technology being used and its related design, manufacturing, test, and support processes; risk mitigation efforts; and risk assessment and analysis.
Technology transition planning and criteria shall be established as part of the overall risk management effort. The following areas reflect important considerations in the design and shall be a part of the systems engineering process. The extent of their consideration and their impact on the product design shall be based on the degree to which they impact total system cost, schedule, and performance at an acceptable level of risk: 4.3.1 Manufacturing and Production; 4.3.2 Quality; 4.3.3 Acquisition Logistics; 4.3.4 Open Systems Design; 4.3.5 Software Engineering; 4.3.6 Reliability, Maintainability, and Availability; 4.3.7 Environment, Safety, and Health; 4.3.8 Human Systems Integration (HSI); 4.3.9 Interoperability.

PM Note: SECNAV Instruction 5000.2B implements the requirements of the DoD 5000 Series.

APPENDIX B: Additional Sources of Information

Introduction

Chapter 5 provided critical design, test, and production processes as an aid in performing risk assessments. These processes are only intended to be used as a starting point from which programs can expand with their own critical processes tailored to their unique program needs. As an additional aid, this Appendix provides sources of information sponsored by DoD to assist in the dissemination of scientific and technical information, i.e., the Information Analysis Centers (IACs) chartered by DoD and the manufacturing Centers of Excellence (COEs) sponsored by the ManTech Programs of the Army, Navy, Air Force, and Defense Logistics Agency (DLA).

DoD Information Analysis Centers

DoD IACs are formal organizations chartered by DoD to facilitate the utilization of existing scientific and technical information. The primary mission of DoD IACs is to collect, analyze, synthesize, and disseminate worldwide scientific and technical information in clearly defined, specialized fields or subject areas. A secondary mission is to promote standardization within their respective fields. The IACs have a broad mission to improve the productivity of scientists, engineers, managers, and technicians in the Defense community through the timely dissemination of evaluated information.

Thirteen contractor-operated DoD IACs are administratively managed and funded by the Defense Technical Information Center (DTIC); eleven other IACs are managed by the Services. Individual IACs may be contacted directly for information requiring technical expertise or expert judgment in their particular area. A listing of each IAC and its on-line address is provided below; however, most of the DoD and Service sponsored IACs may be contacted by sending an email message to the following address: dodiacs@dtic.mil.

DTIC IACs

- Advanced Materials and Processes Technology Information Analysis Center (AMPTIAC): http://rome.iitri.com/amptiac
- Chemical Warfare/Chemical & Biological Defense IAC (CBIAC): http://www.cbiac.apgea.army.mil
- Chemical Propulsion Information Agency (CPIA): http://www.jhu.edu/cpia
- Crew System Ergonomics Information Analysis Center (CSERIAC): http://cseriac.flight.wpafb.af.mil
- Data and Analysis Center for Software (DACS): http://www.dacs.com
- Defense Modeling, Simulation and Tactical Technology Information Analysis Center (DMSTTIAC): http://dmsttiac.hq.iitri.com
- Guidance and Control Information Analysis Center (GACIAC): http://gaciac.hq.iitri.com
- Information Assurance Technology Analysis Center (IATAC): http://www.iatac.dtic.mil
- Infrared Information Analysis Center (IRIAC): http://www.erim.org/IRIA/iria.html
- Manufacturing Technology Information Analysis Center (MTIAC): http://www.mtiac.iitri.com
- Nondestructive Testing Information Analysis Center (NTIAC): http://www.ntiac.com
- Reliability Analysis Center (RAC): http://rome.iitri.com/rac
- Survivability/Vulnerability Information Analysis Center (SURVIAC): http://surviac.flight.wpafb.af.mil

Service IACs

- Aerospace Structures Information Analysis Center (ASIAC). Email: asiac@tvc.flight.wpafb.af.mil
- Supportability Investment Decision Analysis Center (SIDAC): http://www.sidac.wpafb.af.mil
- Airfields, Pavements and Mobility Information Analysis Center (APMIAC). Email: wesgva@ex1.wes.army.mil
- Coastal Engineering Defense Information Analysis Center (CEIAC). Email: swagner@cerc.wes.army.mil
- Cold Regions Science and Technology Information Analysis Center (CRSTIAC): http://www.crrel.usace.army.mil/crstiac
- Concrete Technology Information Analysis Center (CTIAC). Email: matherb@ex1.wes.army.mil
- Environmental Information Analysis Center (EIAC): http://www.wes.army.mil/el/homepage.html
- Hydraulic Engineering Information Analysis Center (HEIAC): http://hlnet.wes.army.mil
- Soil Mechanics Information Analysis Center (SMIAC): http://www.wes.army.mil/GL/SMIAC/smiac.html
- Shock and Vibration Information Analysis Center (SAVIAC): http://saviac.usae.bah.com
- DoD Nuclear Information Center (DASIAC)

Manufacturing Centers of Excellence

The manufacturing Centers of Excellence (COEs), sponsored by the ManTech Programs of the Army, Navy, Air Force, and DLA, provide a focal point for the development and transfer of new manufacturing processes and equipment in a cooperative environment with industry, academia, and DoD activities. The COEs have been set up in consortium-type arrangements wherein industry, academia, and Government can be involved in developing and implementing advanced manufacturing technologies. The missions and functions of the COEs are to:

- Develop and demonstrate manufacturing technology solutions for identified defense manufacturing issues;
- Serve as corporate residences of expertise in their particular technological areas;
- Provide consulting services to defense industrial activities and industry;
- Facilitate the transfer of developed manufacturing technology; and
- Provide advice to the ManTech Program directors concerning program formulation.

An overview of each COE is available at the one-stop address for all COEs: http://mantech.iitri.com/program/centexe.html. The following is a list of the centers:

- Apparel Manufacturing Demonstration Center
- Best Manufacturing Practices Center of Excellence (BMPCOE)
- Center for Optics Manufacturing (COM)
- Center of Excellence for Composites Manufacturing Technology (CECMT)
- Combat Rations Demonstration Center
- Electronics Manufacturing Productivity Facility (EMPF)
- Energetics Manufacturing Technology Center (EMTC)