A View of 20th and 21st Century Software Engineering
Barry Boehm
University of Southern California, University Park Campus, Los Angeles
email@example.com

ABSTRACT
George Santayana's statement, "Those who cannot remember the past are condemned to repeat it," is only half true. The past also includes successful histories. If you haven't been made aware of them, you're often condemned not to repeat their successes.

In a rapidly expanding field such as software engineering, this happens a lot. Extensive studies of many software projects such as the Standish Reports offer convincing evidence that many projects fail to repeat past successes.

This paper tries to identify at least some of the major past software experiences that were well worth repeating, and some that were not. It also tries to identify underlying phenomena influencing the evolution of software engineering practices that have at least helped the author appreciate how our field has gotten to where it has been and where it is.

A counterpart Santayana-like statement about the past and future might say, "In an era of rapid change, those who repeat the past are condemned to a bleak future." (Think about the dinosaurs, and think carefully about software engineering maturity models that emphasize repeatability.)

This paper also tries to identify some of the major sources of change that will affect software engineering practices in the next couple of decades, and identifies some strategies for assessing and adapting to these sources of change. It also makes some first steps towards distinguishing relatively timeless software engineering principles that are risky not to repeat, and conditions of change under which aging practices will become increasingly risky to repeat.

Categories and Subject Descriptors
D.2.9 [Management]: Cost estimation, life cycle, productivity, software configuration management, software process models.

General Terms
Management, Economics, Human Factors.

Keywords
Software engineering, software history, software futures

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICSE'06, May 20–28, 2006, Shanghai, China.
Copyright 2006 ACM 1-59593-085-X/06/0005…$5.00.

1. INTRODUCTION
One has to be a bit presumptuous to try to characterize both the past and future of software engineering in a few pages. For one thing, there are many types of software engineering: large or small; commodity or custom; embedded or user-intensive; greenfield or legacy/COTS/reuse-driven; homebrew, outsourced, or both; casual-use or mission-critical. For another thing, unlike the engineering of electrons, materials, or chemicals, the basic software elements we engineer tend to change significantly from one decade to the next.

Fortunately, I've been able to work on many types and generations of software engineering since starting as a programmer in 1955. I've made a good many mistakes in developing, managing, and acquiring software, and hopefully learned from them. I've been able to learn from many insightful and experienced software engineers, and to interact with many thoughtful people who have analyzed trends and practices in software engineering. These learning experiences have helped me a good deal in trying to understand how software engineering got to where it is and where it is likely to go. They have also helped in my trying to distinguish between timeless principles and obsolete practices for developing successful software-intensive systems.

In this regard, I am adapting the definition of "engineering" to define engineering as "the application of science and mathematics by which the properties of software are made useful to people." The phrase "useful to people" implies that the relevant sciences include the behavioral sciences, management sciences, and economics, as well as computer science.

In this paper, I'll begin with a simple hypothesis: software people don't like to see software engineering done unsuccessfully, and try to make things better. I'll try to elaborate this into a high-level decade-by-decade explanation of software engineering's past. I'll then identify some trends affecting future software engineering practices, and summarize some implications for future software engineering researchers, practitioners, and educators.

2. A Hegelian View of Software Engineering's Past
The philosopher Hegel hypothesized that increased human understanding follows a path of thesis (this is why things happen the way they do); antithesis (the thesis fails in some important ways; here is a better explanation); and synthesis (the antithesis rejected much of the original thesis; here is a hybrid that captures the best of both while avoiding their defects). Below I'll try to apply this hypothesis to explaining the evolution of software engineering from the 1950's to the present.

2.1 1950's Thesis: Software Engineering Is Like Hardware Engineering
When I entered the software field in 1955 at General Dynamics, the prevailing thesis was, "Engineer software like you engineer hardware." Everyone in the GD software organization was either a hardware engineer or a mathematician, and the software being developed was supporting aircraft or rocket engineering.
People kept engineering notebooks and practiced such hardware precepts as "measure twice, cut once," before running their code on the computer.

This behavior was also consistent with 1950's computing economics. On my first day on the job, my supervisor showed me the GD ERA 1103 computer, which filled a large room. He said, "Now listen. We are paying $600 an hour for this computer and $2 an hour for you, and I want you to act accordingly." This instilled in me a number of good practices such as desk checking, buddy checking, and manually executing my programs before running them. But it also left me with a bias toward saving microseconds when the economic balance started going the other way.

The most ambitious information processing project of the 1950's was the development of the Semi-Automated Ground Environment (SAGE) for U.S. and Canadian air defense. It brought together leading radar engineers, communications engineers, computer engineers, and nascent software engineers to develop a system that would detect, track, and prevent enemy aircraft from bombing the U.S. and Canadian homelands.

Figure 1 shows the software development process developed by the hardware engineers for use in SAGE. It shows that sequential waterfall-type models have been used in software development for a long time. Further, if one arranges the steps in a V form with Coding at the bottom, this 1956 process is equivalent to the V-model for software development.

[Figure 1. The SAGE Software Development Process (1956): a sequential flow from Operational Plan through Machine and Operational Specifications, Program Specifications, Coding Specifications, Coding, Parameter Testing, Assembly Testing, and Shakedown to System Evaluation.]

SAGE also developed the Lincoln Labs Utility System to aid the thousands of programmers participating in SAGE software development. It included an assembler, a library and build management system, a number of utility programs, and aids to testing and debugging. The resulting SAGE system successfully met its specifications with about a one-year schedule slip. Benington's bottom-line comment on the success was "It is easy for me to single out the one factor that led to our relative success: we were all engineers and had been trained to organize our efforts along engineering lines."

Another indication of the hardware engineering orientation of the 1950's is in the names of the leading professional societies for software professionals: the Association for Computing Machinery and the IEEE Computer Society.

2.2 1960's Antithesis: Software Crafting
By the 1960's, however, people were finding out that software phenomenology differed from hardware phenomenology in significant ways. First, software was much easier to modify than was hardware, and it did not require expensive production lines to make product copies. One changed the program once, and then reloaded the same bit pattern onto another computer, rather than having to individually change the configuration of each copy of the hardware. This ease of modification led many people and organizations to adopt a "code and fix" approach to software development, as compared to the exhaustive Critical Design Reviews that hardware engineers performed before committing to production lines and bending metal (measure twice, cut once).

Another software difference was that software did not wear out. Thus, software reliability could only imperfectly be estimated by hardware reliability models, and "software maintenance" was a much different activity than hardware maintenance. Software was invisible, it didn't weigh anything, but it cost a lot. It was hard to tell whether it was on schedule or not, and if you added more people to bring it back on schedule, it just got later, as Fred Brooks explained in the Mythical Man-Month. Software generally had many more states, modes, and paths to test, making its specifications much more difficult. Winston Royce, in his classic 1970 paper, said, "In order to procure a $5 million hardware device, I would expect a 30-page specification would provide adequate detail to control the procurement. In order to procure $5 million worth of software, a 1500 page specification is about right in order to achieve comparable control."

Another problem with the hardware engineering approach was that the rapid expansion of demand for software outstripped the supply of engineers and mathematicians. The SAGE program began hiring and training humanities, social sciences, foreign language, and fine arts majors to develop software. Similar non-engineering people flooded into software development positions for business, government, and services data processing. Many software applications became more people-intensive than hardware-intensive; even SAGE became more dominated by psychologists addressing human-computer interaction issues than by radar engineers.

These people were much more comfortable with the code-and-fix approach. They were often very creative, but their fixes often led to heavily patched spaghetti code. Many of them were heavily influenced by 1960's "question authority" attitudes and tended to march to their own drummers rather than those of the organizations employing them. A significant subculture in this regard was the "hacker culture" of very bright free spirits clustering around major university computer science departments. Frequent role models were the "cowboy programmers" who could pull all-nighters to hastily patch faulty code to meet deadlines, and would then be rewarded as heroes.
Not all of the 1960's succumbed to the code-and-fix approach, however. IBM's OS-360 family of programs, although expensive, late, and initially awkward to use, provided more reliable and comprehensive services than its predecessors and most contemporaries, leading to a dominant marketplace position. NASA's Mercury, Gemini, and Apollo manned spacecraft and ground control software kept pace with the ambitious "man on the moon by the end of the decade" schedule at a high level of reliability.

Other trends in the 1960's were:
• Much better infrastructure. Powerful mainframe operating systems, utilities, and mature higher-order languages such as Fortran and COBOL made it easier for non-mathematicians to enter the field.
• Generally manageable small applications, although those often resulted in hard-to-maintain spaghetti code.
• The establishment of computer science and informatics departments of universities, with increasing emphasis on software.
• The beginning of for-profit software development and product companies.
• More and more large, mission-oriented applications. Some were successful as with OS/360 and Apollo above, but many more were unsuccessful, requiring near-complete rework to get an adequate system.
• Larger gaps between the needs of these systems and the capabilities for realizing them.

This situation led the NATO Science Committee to convene two landmark "Software Engineering" conferences in 1968 and 1969, attended by many of the leading researchers and practitioners in the field. These conferences provided a strong baseline of understanding of the software engineering state of the practice that industry and government organizations could use as a basis for determining and developing improvements. It was clear that better organized methods and more disciplined practices were needed to scale up to the increasingly large projects and products that were being commissioned.

[Figure 2. Software Engineering Trends Through the 1970's: a timeline contrasting 1950's hardware engineering methods with 1960's software crafting (code-and-fix, spaghetti code, many defects, skill shortfalls, weak planning and control), and the 1970's responses of waterfall processes, structured methods, and formal methods.]

2.3 1970's Synthesis and Antithesis: Formality and Waterfall Processes
The main reaction to the 1960's code-and-fix approach involved processes in which coding was more carefully organized and was preceded by design, and design was preceded by requirements engineering. Figure 2 summarizes the major 1970's initiatives to synthesize the best of 1950's hardware engineering techniques with improved software-oriented techniques.

More careful organization of code was exemplified by Dijkstra's famous letter to ACM Communications, "Go To Statement Considered Harmful." The Bohm-Jacopini result showing that sequential programs could always be constructed without go-to's led to the Structured Programming movement. This movement had two primary branches. One was a "formal methods" branch that focused on program correctness, either by mathematical proof, or by construction via a "programming calculus." The other branch was a less formal mix of technical and management methods, "top-down structured programming with chief programmer teams," pioneered by Mills and highlighted by the successful New York Times application led by Baker.

The success of structured programming led to many other "structured" approaches applied to software design. Principles of modularity were strengthened by Constantine's concepts of coupling (to be minimized between modules) and cohesion (to be maximized within modules), by Parnas's increasingly strong techniques of information hiding, and by abstract data types. A number of tools and methods employing structured concepts were developed, such as structured design; Jackson's structured design and programming, emphasizing data considerations; and Structured Program Design Language.
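The Bohm-Jacopini result, that any sequential program can be constructed without go-to's, can be illustrated in modern terms: a jump-based control flow can always be re-expressed using only the three structured primitives of sequence, selection, and iteration, for example by turning jump labels into the states of a loop-driven state machine. The sketch below is an illustration of the idea, not an example from the paper:

```python
# Illustrative sketch of the Bohm-Jacopini idea: a goto-style control
# flow (jumps between labels) re-expressed with only sequence,
# selection, and iteration, by treating each label as a state.

def collatz_steps(n):
    """Count Collatz steps using a goto-free state-machine loop."""
    state = "TEST"          # each 'label' of the unstructured version
    steps = 0
    while state != "DONE":  # iteration
        if state == "TEST":             # selection
            state = "DONE" if n == 1 else "STEP"
        elif state == "STEP":           # sequence of actions
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
            state = "TEST"
    return steps

print(collatz_steps(6))  # prints 8 (6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1)
```

Any arrangement of labels and jumps can be mechanically rewritten this way, which is why the result removed the last technical excuse for unstructured spaghetti code.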
Requirements-driven processes were well established in the 1956 SAGE process model in Figure 1, but a stronger synthesis of the 1950's paradigm and the 1960's crafting paradigm was provided by Royce's version of the "waterfall" model shown in Figure 3. It added the concepts of confining iterations to successive phases, and a "build it twice" prototyping activity before committing to full-scale development. A subsequent version emphasized verification and validation of the artifacts in each phase before proceeding to the next phase in order to contain defect finding and fixing within the same phase whenever possible. This was based on the data from TRW, IBM, GTE, and Safeguard on the relative cost of finding defects early vs. late, shown in Figure 4.

[Figure 3. The Royce Waterfall Model (1970): successive phases from system and software requirements through analysis, preliminary and detailed program design, coding, and testing to operations, with iteration confined to adjacent phases.]

Some other significant contributions in the 1970's were the in-depth analysis of people factors in Weinberg's Psychology of Computer Programming; Brooks' Mythical Man-Month, which captured many lessons learned on incompressibility of software schedules, the 9:1 cost difference between a piece of demonstration software and a software system product, and many others; Wirth's Pascal and Modula-2 programming languages; Fagan's inspection techniques; Toshiba's reusable product line of industrial process control software; and Lehman and Belady's studies of software evolution dynamics. Others will be covered below as precursors to 1980's contributions.

However, by the end of the 1970's, problems were cropping up with formality and sequential waterfall processes. Formal methods had difficulties with scalability and usability by the majority of less-expert programmers (a 1975 survey found that the average coder in 14 large organizations had two years of college education and two years of software experience; was familiar with two programming languages and software products; and was generally sloppy, inflexible, "in over his head," and undermanaged). The sequential waterfall model was heavily document-intensive, slow-paced, and expensive to use.

[Figure 4. Increase in Software Cost-to-fix vs. Phase (1976): the relative cost to fix a defect rises steeply with the phase in which it is fixed, from requirements through design, code, development test, and acceptance test to operation; the rise is much steeper for larger software projects (IBM-SSD, GTE, Safeguard) than for smaller ones (TRW survey).]

Since much of this documentation preceded coding, many impatient managers would rush their teams into coding with only minimal effort in requirements and design. Many used variants of the self-fulfilling prophecy, "We'd better hurry up and start coding, because we'll have a lot of debugging to do." A 1979 survey indicated that about 50% of the respondents were not using good software requirements and design practices resulting from 1950's SAGE experience. Many organizations were finding that their software costs were exceeding their hardware costs, tracking the 1973 prediction in Figure 5, and were concerned about significantly improving software productivity and use of well-known best practices, leading to the 1980's trends to be discussed next.

Unfortunately, partly due to convenience in contracting for software acquisition, the waterfall model was most frequently interpreted as a purely sequential process, in which design did not start until there was a complete set of requirements, and coding did not start until completion of an exhaustive critical design review.
These misinterpretations were reinforced by government process standards emphasizing a pure sequential interpretation of the waterfall model.

[Figure 5. Large-Organization Hardware-Software Cost Trends (1973): software's predicted share of total system cost rises steadily between 1955 and 1985 as hardware's share declines.]

Quantitative Approaches
One good effect of stronger process models was the stimulation of stronger quantitative approaches to software engineering. Some good work had been done in the 1960's, such as System Development Corp's software productivity data and experimental data showing 26:1 productivity differences among programmers; IBM's data presented in the 1960 NATO report; and early data on distributions of software defects by phase and type. Partly stimulated by the 1973 Datamation article, "Software and its Impact: A Quantitative Assessment," and the Air Force CCIP-85 study on which it was based, more management attention and support was given to quantitative software analysis. Considerable progress was made in the 1970's on complexity metrics that helped identify defect-prone modules; software reliability estimation models; quantitative approaches to software quality; software cost and schedule estimation models; and sustained quantitative laboratories such as the NASA/UMaryland/CSC Software Engineering Laboratory.

2.4 1980's Synthesis: Productivity and Scalability
Along with some early best practices developed in the 1970's, the 1980's led to a number of initiatives to address the 1970's problems, and to improve software engineering productivity and scalability. Figure 6 shows the extension of the timeline in Figure 2 through the rest of the decades addressed in the paper, through the 2010's.

[Figure 6. Extended timeline of software engineering trends, 1950's through 2010's.]
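As a concrete instance of the cost and schedule estimation models this section mentions, Basic COCOMO (published by Boehm in 1981, so slightly after the period just described) estimates effort and schedule from size alone. The sketch below uses the published nominal coefficients for "organic" (small, familiar, in-house) projects; it is included here as illustration, not as part of the paper:

```python
# Basic COCOMO (Boehm, 1981), an example of the era's cost/schedule
# estimation models: effort grows slightly faster than linearly with
# size. Coefficients are the published nominal values for "organic"
# (small, experienced, in-house) projects.

def basic_cocomo_organic(ksloc):
    """Return (person-months, schedule-months) for an organic project."""
    effort = 2.4 * ksloc ** 1.05       # person-months
    schedule = 2.5 * effort ** 0.38    # calendar months
    return effort, schedule

effort, schedule = basic_cocomo_organic(32)   # a 32-KSLOC project
print(f"{effort:.0f} person-months over {schedule:.0f} months")
```

The diseconomy-of-scale exponent (1.05 here, larger for embedded projects) is exactly the kind of "management controllable" that the quantitative work of the period tried to pin down with data.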
The rise in quantitative methods in the late 1970's helped identify the major leverage points for improving software productivity. Distributions of effort and defects by phase and activity enabled better prioritization of improvement areas. For example, organizations spending 60% of their effort in the test phase found that 70% of the "test" activity was actually rework that could be done much less expensively if avoided or done earlier, as indicated by Figure 4. The cost drivers in estimation models identified management controllables that could reduce costs through investments in better staffing, training, processes, methods, tools, and asset reuse.

The problems with process noncompliance were dealt with initially by more thorough contractual standards, such as the 1985 U.S. Department of Defense (DoD) standards DoD-STD-2167 and MIL-STD-1521B, which strongly reinforced the waterfall model by tying its milestones to management reviews, progress payments, and award fees. When these often failed to discriminate between capable software developers and persuasive proposal developers, the DoD commissioned the newly-formed (1984) CMU Software Engineering Institute to develop a software capability maturity model (SW-CMM) and associated methods for assessing an organization's software process maturity. Based extensively on IBM's highly disciplined software practices and Deming-Juran-Crosby quality practices and maturity levels, the resulting SW-CMM provided a highly effective framework for both capability assessment and improvement. The SW-CMM content was largely method-independent, although some strong sequential waterfall-model reinforcement remained. For example, the first Ability to Perform in the first Key Process Area, Requirements Management, states, "Analysis and allocation of the system requirements is not the responsibility of the software engineering group but is a prerequisite for their work." A similar International Standards Organization ISO-9001 standard for quality practices applicable to software was concurrently developed, largely under European leadership.

The threat of being disqualified from bids caused most software contractors to invest in SW-CMM and ISO-9001 compliance. Most reported good returns on investment due to reduced software rework. These results spread the use of the maturity models to internal software organizations, and led to a new round of refining and developing new standards and maturity models, to be discussed under the 1990's.

Software Tools
In the software tools area, besides the requirements and design tools discussed under the 1970's, significant tool progress had been made in the 1970's in such areas as test tools (path and test coverage analyzers, automated test case generators, unit test tools, test traceability tools, test data analysis tools, test simulator-stimulators, and operational test aids) and configuration management tools. An excellent record of progress in the configuration management (CM) area has been developed by the NSF ACM/IEE(UK)-sponsored IMPACT project. It traces the mutual impact that academic research and industrial research and practice have had in evolving CM from a manual bookkeeping practice to powerful automated aids for version and release management, asynchronous checkin/checkout, change tracking, and integration and test support. A counterpart IMPACT paper has been published on modern programming languages; others are underway on Requirements, Design, Resource Estimation, Middleware, Reviews and Walkthroughs, and Analysis and Testing.

The major emphasis in the 1980's was on integrating tools into support environments. These were initially overfocused on Integrated Programming Support Environments (IPSE's), but eventually broadened their scope to Computer-Aided Software Engineering (CASE) or Software Factories. These were pursued extensively in the U.S. and Europe, but employed most effectively in Japan. A significant effort to improve the productivity of formal software development was the RAISE environment. A major effort to develop a standard tool interoperability framework was the HP/NIST/ECMA Toaster Model. Research on advanced software development environments included knowledge-based support, integrated project databases, advanced tools interoperability architecture, and tool/environment configuration and execution languages such as Odin.

Software Processes
Such languages led to the vision of process-supported software environments and Osterweil's influential "Software Processes are Software Too" keynote address and paper at ICSE 9. Besides reorienting the focus of software environments, this concept exposed a rich duality between practices that are good for developing products and practices that are good for developing processes. Initially, this focus was primarily on process programming languages and tools, but the concept was broadened to yield highly useful insights on software process requirements, process architectures, process change management, process families, and process asset libraries with reusable and composable process components, enabling more cost-effective realization of higher software process maturity levels.

Improved software processes contributed to significant increases in productivity by reducing rework, but prospects of even greater productivity improvement were envisioned via work avoidance. In the early 1980's, both revolutionary and evolutionary approaches to work avoidance were addressed in the U.S. DoD STARS program. The revolutionary approach emphasized formal specifications and automated transformational approaches to generating code from specifications, going back to early-1970's "automatic programming" research, and was pursued via the Knowledge-Based Software Assistant (KBSA) program. The evolutionary approach emphasized a mixed strategy of staffing, reuse, process, tools, and management, supported by integrated environments. The DoD software program also emphasized accelerating technology transition, based on the study in the 1970's indicating that an average of 18 years was needed to transition software engineering technology from concept to practice. This led to the technology-transition focus of the DoD-sponsored CMU Software Engineering Institute (SEI) in 1984. Similar initiatives were pursued in the European Community and Japan, eventually leading to SEI-like organizations in Europe and Japan.

2.4.1 No Silver Bullet
The 1980's saw other potential productivity improvement approaches such as expert systems, very high level languages, object orientation, powerful workstations, and visual programming. All of these were put into perspective by Brooks' famous "No Silver Bullet" paper presented at IFIP 1986. It distinguished the "accidental" repetitive tasks that could be avoided or streamlined via automation from the "essential" tasks unavoidably requiring syntheses of human expertise, judgment, and collaboration. The essential tasks involve four major challenges for productivity solutions: high levels of software complexity, conformity, changeability, and invisibility. Addressing these challenges raised the bar significantly for techniques claiming to be "silver bullet" software solutions. Brooks' primary candidates for addressing the essential challenges included great designers, rapid prototyping, evolutionary development (growing vs. building software systems), and work avoidance via reuse.
Software Reuse
The biggest productivity payoffs during the 1980's turned out to involve work avoidance and streamlining through various forms of reuse. Commercial infrastructure software reuse (more powerful operating systems, database management systems, GUI builders, distributed middleware, and office automation on interactive personal workstations) both avoided much programming and long turnaround times. Engelbart's 1968 vision and demonstration was reduced to scalable practice via the remarkable desktop-metaphor, mouse-and-windows interactive GUI, what-you-see-is-what-you-get (WYSIWYG) editing, and networking/middleware support system developed at Xerox PARC in the 1970's, reduced to affordable use by Apple's Lisa (1983) and Macintosh (1984), and implemented eventually on the IBM PC family by Microsoft's Windows 3.1 (198x).

Better domain architecting and engineering enabled much more effective reuse of application components, supported both by reuse frameworks such as Draco and by domain-specific business fourth-generation languages (4GL's) such as FOCUS and NOMAD. Object-oriented methods tracing back to Simula-67 enabled even stronger software reuse and evolvability via structures and relations (classes, objects, methods, inheritance) that provided more natural support for domain applications. They also provided better abstract data type modularization support for high-cohesion modules and low inter-module coupling. This was particularly valuable for improving the productivity of software maintenance, which by the 1980's was consuming about 50-75% of most organizations' software effort. Object-oriented programming languages and environments such as Smalltalk, Eiffel, C++, and Java stimulated the rapid growth of object-oriented development, as did a proliferation of object-oriented design and development methods eventually converging via the Unified Modeling Language (UML) in the 1990's.

2.5 1990's Antithesis: Concurrent vs. Sequential Processes
The strong momentum of object-oriented methods continued into the 1990's. Object-oriented methods were strengthened through such advances as design patterns; software architectures and architecture description languages; and the development of UML. The continued expansion of the Internet and emergence of the World Wide Web strengthened both OO methods and the criticality of software in the competitive marketplace.

Emphasis on Time-To-Market
The increased importance of software as a competitive discriminator and the need to reduce software time-to-market caused a major shift away from the sequential waterfall model to models emphasizing concurrent engineering of requirements, design, and code; of product and process; and of software and systems. For example, in the late 1980's Hewlett Packard found that several of its market sectors had product lifetimes of about 2.75 years, while its waterfall process was taking 4 years for software development. As seen in Figure 7, its investment in a product line architecture and reusable components increased development time for the first three products in 1986-87, but had reduced development time to one year by 1991-92. The late 1990's saw the publication of several influential books on software reuse.

[Figure 7. HP Product Line Reuse Investment and Payoff.]

Besides time-to-market, another factor causing organizations to depart from waterfall processes was the shift to user-interactive products with emergent rather than prespecifiable requirements. Most users, when asked for their GUI requirements, would answer, "I'm not sure, but I'll know it when I see it" (IKIWISI). Also, reuse-intensive and COTS-intensive software development tended to follow a bottom-up capabilities-to-requirements process rather than a top-down requirements-to-capabilities process.

Controlling Concurrency
The risk-driven spiral model was intended as a process to support concurrent engineering, with the project's primary risks used to determine how much concurrent requirements engineering, architecting, prototyping, and critical-component development was enough. However, the original model contained insufficient guidance on how to keep all of these concurrent activities synchronized and stabilized. Some guidance was provided by the elaboration of software risk management activities and the use of the stakeholder win-win Theory W as milestone criteria.
such advances as design patterns ; software architectures and But the most significant addition was a set of common industry- coordinated stakeholder commitment milestones that serve as a basis architecture description languages ; and the for synchronizing and stabilizing concurrent spiral (or other) development of UML. The continued expansion of the Internet and emergence of the World Wide Web  strengthened both OO processes. methods and the criticality of software in the competitive These anchor point milestones-- Life Cycle Objectives (LCO), Life marketplace. Cycle Architecture(LCA), and Initial Operational Capability (IOC) Emphasis on Time-To-Market – have pass-fail criteria based on the compatibility and feasibility of The increased importance of software as a competitive discriminator the concurrently-engineered requirements, prototypes, architecture, plans, and business case . They turned out to be compatible with and the need to reduce software time-to-market caused a major shift major government acquisition milestones and the AT&T away from the sequential waterfall model to models emphasizing Architecture Review Board milestones . They were also concurrent engineering of requirements, design, and code; of 18 adopted by Rational/IBM as the phase gates in the Rational Unified criteria. One organization recently presented a picture of its CMM Process , and as such have been used on many Level 4 Memorial Library: 99 thick spiral binders of documentation successful projects. They are similar to the process milestones used used only to pass a CMM assessment. by Microsoft to synchronize and stabilize its concurrent software Agile Methods processes . 
Other notable forms of concurrent, incremental and evolutionary development include the Scandinavian Participatory The late 1990’s saw the emergence of a number of agile methods Design approach , various forms of Rapid Application such as Adaptive Software Development, Crystal, Dynamic Systems Development, eXtreme Programming (XP), Feature Driven Development , and agile methods, to be discussed under Development, and Scrum. Its major method proprietors met in 2001 the 2000’s below.  is an excellent source for iterative and evolutionary development methods. and issued the Agile Manifesto, putting forth four main value preferences: Open Source Development • Individuals and interactions over processes and tools. Another significant form of concurrent engineering making strong contribution in the 1990’s was open source software development. • Working software over comprehensive documentation. From its roots in the hacker culture of the 1960’s, it established an • Customer collaboration over contract negotiation institutional presence in 1985 with Stallman’s establishment of the • Responding to change over following a plan. Free Software Foundation and the GNU General Public License . This established the conditions of free use and evolution of a The most widely adopted agile method has been XP, whose major number of highly useful software packages such as the GCC C- technical premise in  was that its combination of customer Language compiler and the emacs editor. Major 1990’s milestones collocation, short development increments, simple design, pair in the open source movement were Torvalds’ Linux (1991), programming, refactoring, and continuous integration would flatten Berners-Lee’s World Wide Web consortium (1994), Raymond’s the cost-of change-vs.-time curve in Figure 4. 
However, data reported so far indicate that this flattening does not take place for “The Cathedral and the Bazaar” book , and the O’Reilly Open Source Summit (1998), including leaders of such products as Linux larger projects. A good example was provided by a large Thought , Apache, TCL, Python, Perl, and Mozilla . Works Lease Management system presented at ICSE 2002 . When the size of the project reached over 1000 stories, 500,000 Usability and Human-Computer Interaction lines of code, and 50 people, with some changes touching over 100 objects, the cost of change inevitably increased. This required the As mentioned above, another major 1990’s emphasis was on project to add some more explicit plans, controls, and high-level increased usability of software products by non-programmers. This required reinterpreting an almost universal principle, the Golden architecture representations. Rule, “Do unto others as you would have others do unto you”, To Analysis of the relative “home grounds” of agile and plan-driven literal-minded programmers and computer science students, this methods found that agile methods were most workable on small projects with relatively low at-risk outcomes, highly capable meant developing programmer-friendly user interfaces. These are often not acceptable to doctors, pilots, or the general public, leading personnel, rapidly changing requirements, and a culture of thriving to preferable alternatives such as the Platinum Rule, “Do unto others on chaos vs. order. As shown in Figure 8 , the agile home as they would be done unto.” ground is at the center of the diagram, the plan-driven home ground is at the periphery, and projects in the middle such as the lease Serious research in human-computer interaction (HCI) was going on management project needed to add some plan-driven practices to as early as the second phase of the SAGE project at Rand Corp in XP to stay successful. the 1950’s, whose research team included Turing Award winner Allen Newell. 
Subsequent significant advances have included Value-Based Software Engineering experimental artifacts such as Sketchpad and the Engelbert and Xerox PARC interactive environments discussed above. They have Agile methods’ emphasis on usability improvement via short also included the rapid prototyping and Scandinavian Participatory increments and value-prioritized increment content are also responsive to trends in software customer preferences. A recent Design work discussed above, and sets of HCI guidelines such as Computerworld panel on “The Future of Information Technology  and . The late 1980’s and 1990’s also saw the HCI field (IT)” indicated that usability and total ownership cost-benefits, expand its focus from computer support of individual performance including user inefficiency and ineffectiveness costs, are becoming to include group support systems . IT user organizations’ top priorities . A representative quote from panelist W. Brian Arthur was “Computers are working about as fast as we need. The bottleneck is making it all usable.” A recurring 2.6 2000’s Antithesis and Partial Synthesis: user-organization desire is to have technology that adapts to people Agility and Value rather than vice versa. This is increasingly reflected in users’ product selection activities, with evaluation criteria increasingly emphasizing So far, the 2000’s have seen a continuation of the trend toward rapid product usability and value added vs. a previous heavy emphasis on application development, and an acceleration of the pace of change in information technology (Google, Web-based collaboration product features and purchase costs. Such trends ultimately will support), in organizations (mergers, acquisitions, startups), in affect producers’ product and proc
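The CM capabilities the IMPACT survey describes — version and release management, asynchronous checkin/checkout, and change tracking — can be sketched in a few lines. This is a toy illustration only; the `Repository` class and its method names are invented for this note and do not correspond to any real CM tool's API.

```python
# Minimal sketch of 1980's-era CM capabilities: versioning,
# asynchronous checkout/checkin, and change tracking.
# Invented for illustration; not any real CM tool's interface.

class Repository:
    def __init__(self):
        self._versions = {}   # filename -> list of successive contents
        self._log = []        # change-tracking records

    def add(self, name, contents):
        """Place a file under version control at version 1."""
        self._versions[name] = [contents]
        self._log.append((name, 1, "added"))

    def checkout(self, name):
        """Return a private working copy; others may check out concurrently."""
        return self._versions[name][-1]

    def checkin(self, name, contents, note):
        """Record a new version plus a change-tracking entry."""
        history = self._versions[name]
        history.append(contents)
        self._log.append((name, len(history), note))
        return len(history)   # the new version number

    def log(self, name):
        """Change history for one file."""
        return [entry for entry in self._log if entry[0] == name]
```

Two developers can each `checkout` a copy without blocking one another (the "asynchronous" part), and every `checkin` leaves an auditable trail — the bookkeeping that was done by hand before automated CM.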
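The object-oriented properties credited above — abstract data type modularization, high cohesion, low inter-module coupling, and reuse via inheritance — can be made concrete with a small sketch. The stack example below is generic and assumed for illustration, not taken from the paper.

```python
# Sketch of the OO properties discussed above: the class is highly
# cohesive (every method works on the same hidden state), clients
# couple only to the public interface, and inheritance lets a
# variant reuse the base implementation. Illustrative only.

class Stack:
    def __init__(self):
        self._items = []          # representation hidden from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)


class BoundedStack(Stack):
    """Inheritance: reuses Stack wholesale, adding only a capacity rule."""
    def __init__(self, capacity):
        super().__init__()
        self._capacity = capacity

    def push(self, item):
        if len(self) >= self._capacity:
            raise OverflowError("stack full")
        super().push(item)
```

Because clients touch only `push`, `pop`, and `len`, the internal list could be swapped for another representation without changing any caller — exactly the maintenance-productivity benefit the text attributes to abstract data types.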
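The anchor point milestones' pass-fail idea — that an LCO or LCA review judges the concurrently engineered artifacts together for feasibility and mutual compatibility, rather than signing off each document separately — might be caricatured as follows. The artifact names and the all-must-pass rule come from the text; the function itself is an invented illustration, not part of the spiral model literature.

```python
# Toy pass/fail check in the spirit of the LCO/LCA anchor point
# milestones: the milestone passes only if every concurrently
# engineered artifact is judged feasible AND the set as a whole is
# mutually compatible. Purely illustrative.

ARTIFACTS = ("requirements", "prototypes", "architecture",
             "plans", "business case")

def milestone_passes(feasible, compatible):
    """feasible: dict mapping artifact name -> bool.
    compatible: a single judgment on the artifact set as a whole."""
    return compatible and all(feasible.get(a, False) for a in ARTIFACTS)
```

The point of the sketch is the conjunction: one infeasible plan, or an architecture incompatible with the business case, fails the whole milestone — which is what gives the milestones their synchronizing and stabilizing effect on concurrent activities.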