Gary North's Y2K Links and Forums - Mirror

Summary and Comments

Category: Programmers'_Views

Date: 1997-10-13 14:30:15

Subject: Yourdon: Evidence That Large Projects Come in Late

Comment:

Ed Yourdon is the author of two dozen books on programming. He is co-author of a new book, TIMEBOMB 2000. Here, he presents evidence that shows that large-scale programming projects come in late or not at all. This is a long posting, but it is important. This report is worth printing out.

* * * * * * * * *

I've spent 33 years in the software field, starting as a junior programmer at DEC in 1964, and my credentials are an open book (24 of them, actually, which you can find at www.yourdon.com) for anyone to read. I don't claim to be the absolute authority on anything, nor do I claim that anything I say or predict can be proven correct with mathematical precision. But I have personal confidence that I know what I'm talking about when it comes to software engineering and software management, and I'm willing to put my reputation and career on the line in this area.

The issue of project management is one that has been mentioned in various writings about Y2000, but it's often glossed over or dismissed -- and yet it's the primary basis of my technical concerns about the risk of Y2000 projects all over the world. Here it is, in a nutshell: the software industry has accumulated metrics from tens of thousands of projects (that's a literal number, not hyperbole) over the past 30-40 years, which tell us, statistically, that approximately 15% of all software projects are delayed (behind schedule), and approximately 25% are canceled before completion.

Statistics like this have been gathered and published over a long period of time, by well-respected software metrics professionals, long before Y2000 was on anyone's radar screen. One of the sources of information in this area is "Patterns of Software Systems Failure and Success," by Capers Jones (International Thomson Press, 1996 -- and probably available from Capers' web site at www.spr.com). But if you don't like the numbers from Capers, you can find almost identical numbers in the research carried out by Dr. Howard Rubin (head of the Computer Science Department at Hunter College in NYC, author of several world-wide software benchmarking metrics studies, and developer of the highly regarded ESTIMACS software-estimating product now marketed by Computer Associates) or by Larry Putnam's company, Quantitative Software Management (Larry has also been in the software industry for 30 years, and is the coauthor of "Measures for Success: Delivering Software On Time, Within Budget" (Prentice-Hall, 1992)). The Standish Group, the Gartner Group, the GAO, Scientific American, and several other reputable sources have confirmed that our track record for delivering software on time is lousy, and has been for a long, long time.

The situation is substantially worse for large projects. In the Capers Jones book referenced above, approx 24% of all 10,000 function-point (FP) projects are finished behind schedule, and approx 48% are canceled. For 100,000 FP projects, approx 21% are behind schedule, and approx 65% are canceled. If you're not familiar with function points: 1 FP is approx equal to 100 COBOL statements; you can do the rest of the arithmetic. The point is that many of the large, mission-critical legacy systems that are the subject of Y2000 remediation fall into this category.
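
To make that scale concrete, here is the back-of-the-envelope calculation (a Python sketch, purely illustrative; the only inputs are the 1-FP-to-100-COBOL-statements rule of thumb and the Capers Jones rates just quoted):

    # Scale arithmetic for the Capers Jones figures quoted above.
    # Assumption: 1 function point ~ 100 COBOL statements (the rule of
    # thumb in the text); the late/canceled rates are from the same book.
    STATEMENTS_PER_FP = 100

    for fp, pct_late, pct_canceled in [(10_000, 24, 48), (100_000, 21, 65)]:
        statements = fp * STATEMENTS_PER_FP
        print(f"{fp:,} FP ~ {statements:,} COBOL statements: "
              f"approx {pct_late}% late, approx {pct_canceled}% canceled")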

A related issue: just how late are the projects that are "delayed"? It turns out that the experience of the software industry, over the last 30-40 years, is that the average software project is approximately 6-7 months late. Again, the situation is much worse on the big projects: the 10,000 FP projects, according to the Capers Jones statistics, are an average of 13.8 months behind schedule, and the 100,000 FP projects are an average of 25.8 months behind schedule. Note that this doesn't include the projects that were canceled!
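
Putting the late-rates and the average slippage together gives a crude expected-slip figure (again a Python sketch; the one assumption I'm adding is that the "late" percentages are shares of ALL projects, so the share among projects that survive to completion is higher):

    # Crude expected slippage among projects that are NOT canceled, using
    # the Capers Jones averages quoted above. Canceled projects are
    # excluded, exactly as the text notes. Assumption: the "late"
    # percentage is a share of all projects, canceled ones included.
    profiles = {
        10_000:  {"late": 0.24, "canceled": 0.48, "months_late": 13.8},
        100_000: {"late": 0.21, "canceled": 0.65, "months_late": 25.8},
    }

    for fp, p in profiles.items():
        surviving = 1.0 - p["canceled"]
        late_share = p["late"] / surviving
        expected_slip = late_share * p["months_late"]
        print(f"{fp:,} FP: ~{late_share:.0%} of surviving projects are late; "
              f"average slip across all survivors is ~{expected_slip:.1f} months")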

For any IT manager, or any CEO, who says, "Rubbish! Those numbers don't apply to us! We're special!", a reasonable response is, "Fine -- then show me YOUR statistics on the percentage of projects that are ahead of schedule, on time, delayed, and canceled. And show me YOUR statistics on the average number of months behind schedule one should expect for a typical project in your organization." You'll find that in the substantial majority of IT organizations, there are no such statistics. Or to put it another way, approx 75% of US organizations are at level-1 on the SEI process-maturity scale (that's a whole separate topic, which you can learn about from Carnegie-Mellon University's Software Engineering Institute at www.cmu.edu), which means they have no rational, well-defined, repeatable method of estimating their projects, nor do they keep statistics telling them whether previous projects succeeded, failed, or got delivered a year late. Whatever so-called estimates they produce are usually based on ad hoc guesses or negotiation, not estimating per se.

My experience in the Y2000 field for the past two years is that neither project managers, CEOs, nor interested bystanders want to hear any of this. The reaction seems to be: "Oh, that's interesting" ... and then they push it out of their minds and get back to their optimistic plans for Y2000 projects. My response has been, "Wait a minute! Stop! Time out!! On what rational basis can you conclude that your Y2000 projects are going to behave any differently than the entire software industry has behaved for the past 30-40 years, especially since the Y2000 projects are bigger than anything you've ever done, and you haven't even been able to formulate a plan or hire a staff to get started?!?" The answer seems to be, "Well, this time it will be different." But why? "This time, we really know it's important." But does that mean your estimates will be any more rational or credible? "Well, this time we'll ORDER the programmers to be more productive." Notwithstanding my opinions about management styles, I wish that edicts would work in this case; unfortunately, 30 years of experience with software projects tells us that such edicts from high-level managers usually don't guarantee success; indeed, the danger here is that the project manager and low-level techies will be bullied into delivering software that's missing essential functionality, or that's riddled with bugs. (You might want to take a look at my "Death March" book for more details about the consequences of such high-pressure projects.)

Bugs, by the way, are another reason I'm pessimistic about the likelihood of a smooth Y2000 transition in most organizations. Again, the pessimism is based on 30 years of experience: software project teams typically deliver "tested" software, which is put into production with an average of 1 bug per 100 lines of code. (A certain well-known software company, for example, delivered a well-known PC operating system to the marketplace at the beginning of this decade with 5,000 known bugs in the code -- known to the software organization at the time they shipped the product! This is not atypical behavior.) Some organizations are 10 times better than this dismal figure; a few are 100 times better; a very, very few isolated organizations like Motorola are practicing "six-sigma" quality techniques that can actually reduce the number of bugs to 1 in a million lines of code. But given the massive amount of software that's being Y2000-remediated under pressure, without documentation, by programmers who are often unfamiliar with the original software -- well, I can't see the rational basis for hoping or believing that we'll get through all of this work without a massive number of bugs that will take months, if not years, to exorcise from the systems. You can do the arithmetic any way you want, but given the mind-boggling amount of code that has to be modified across all the Y2000 systems, a large rash of bugs is inevitable. Hopefully, most will be minor; some will be moderate and annoying; my concern here is that a few of them may turn out to be life-threatening.
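
To put those defect densities into absolute numbers, here is a minimal sketch (Python; the 10-million-line portfolio size is a hypothetical chosen purely for illustration, not a figure from any study):

    # Residual-defect arithmetic using the densities quoted above. The
    # portfolio size is hypothetical; real Y2000 portfolios vary widely.
    densities = {
        "typical shop (1 bug per 100 LOC)":   1 / 100,
        "10x better (1 per 1,000 LOC)":       1 / 1_000,
        "100x better (1 per 10,000 LOC)":     1 / 10_000,
        "six-sigma (1 per 1,000,000 LOC)":    1 / 1_000_000,
    }
    remediated_loc = 10_000_000  # hypothetical: 10 million remediated lines

    for label, density in densities.items():
        print(f"{label}: ~{round(remediated_loc * density):,} residual bugs")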

What's taking place on almost all Y2000 projects is NOT estimating, but rather a form of "backwards wishful thinking." It starts as follows: everyone knows what the "ultimate" deadline is for Y2000 -- we can't negotiate or ignore that fact. Indeed, most organizations have arbitrarily decreed that their Y2000 projects WILL, by golly, be finished on Dec 31, 1998. Not because anyone did any project-level estimating, or planning with PERT charts and Gantt charts, but simply because that's when management has decreed that things will be done. So if that's the deadline, then a rational project manager has to work backwards, leading to a train of thought that says, "If we have to be done by 12/31/98, that means we have to start testing by 12/31/97, which means that we have to be done with all of the planning and analysis by 6/30/97 ... whoops! Omygosh, we're already 3 months behind schedule. Well, we'll make it up somehow by working very hard and convincing ourselves that we're not going to make any mistakes!" This is NOT a new phenomenon: we've been doing it for 30 years, every time management imposes an arbitrary deadline on project managers, because nobody has the guts to stand up and say, "Hell no!"
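
The backwards arithmetic is mechanical enough to write down (a Python sketch; the milestone dates are the ones in the paragraph above, and "today" is the date of this posting):

    # The "backwards wishful thinking" schedule described above, computed
    # mechanically from the decreed deadline. All dates come from the text.
    from datetime import date

    deadline       = date(1998, 12, 31)  # decreed, not estimated
    testing_starts = date(1997, 12, 31)  # leaves one year for testing
    analysis_done  = date(1997, 6, 30)   # planning/analysis must be complete
    today          = date(1997, 10, 13)  # date of this posting

    slip_days = (today - analysis_done).days
    print(f"Planning/analysis was due {analysis_done}; as of {today} "
          f"the project is already ~{slip_days // 30} months behind schedule.")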

If you want to see a mind-boggling example of backwards wishful thinking, take a look at the published plans for Y2000 projects by the major Federal government agencies; I believe it's still available at www.cio.fed.gov/y2krev.htm . Note how many of them have an arbitrarily decreed deadline of 12/31/98 for finishing their work. Note also that the publicly stated schedule says, among other things, that 12 of the agencies are planning to devote either ZERO months, or ONE month, to the testing of all the Y2000 remediation work they've done. And as I assume you've heard, four of the agencies were recently given "flunking" grades by Congressman Stephen Horn (a former university president, which may or may not be relevant) because they're so far behind schedule on the planning and assessment phase of their work, which is estimated by most Y2000 experts to represent approx 5% of the overall task. The typical reaction to such criticism: "Oh, well, we'll catch up by working harder, and we won't make any mistakes, which is why we can get away with so little testing." Maybe so, but such a statement contradicts the cumulative evidence of 30-40 years of work in the software field. Privately and off-the-record, the project managers (who get all the heat and the pressure from high-level managers when unpleasant news like this is made known) mutter to themselves, "We'll fudge the numbers on the next progress report, so that we can get these idiots off our back, and get some work done." This is not a new phenomenon; this is what's been happening to large software projects since I got into the field, and probably long before that. And there is good reason to believe (based on first-hand reports from such project managers, which I'm not allowed to describe in detail because of non-disclosure agreements I've signed) that there are several of these situations underway in the government right now.

Using the available industry metrics, the statistically predictable fate of the 7,226 mission-critical systems (the number comes from the OMB folks) in the federal government agencies is that we can expect approx 991 of those systems to be finished late (by an average of 1-2 years, probably) and 1,721 to be canceled ... except that cancellation is a rather grim option, and may not be allowed. These are not pleasant numbers, and one's natural reaction is to say, "That can't be true! This time we'll be different!" Maybe ... but it's not as if we suddenly woke up with the advent of Y2000 projects and said to ourselves, as an industry, "Well, we're tired of being lazy and unproductive and wildly inaccurate in our estimates; let's all unanimously agree to change our ways, starting today." I've spent a substantial portion of my 33-year career writing books, providing training courses, and carrying out consulting work (as have several dozen other, far more gifted and talented people than me) trying to accomplish the level of improvement that has suddenly been deemed imperative for Y2000 projects. We've made incremental improvements in many cases, significant improvements in a few cases, and no improvement whatsoever in a few other cases. If the collective efforts of the methodology gurus and metrics gurus and project-management gurus in the software field over the past 30 years have not been sufficient to eliminate the dismal statistics mentioned above, then I find it hard to believe that it's going to change suddenly in the next two years.
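
The arithmetic behind those figures is easy to check against the industry-wide baselines quoted earlier (a Python sanity check, not new data; the rates are simply backed out of the numbers above):

    # Sanity check: 991 late and 1,721 canceled out of 7,226 systems imply
    # rates of about 13.7% and 23.8% -- close to the ~15% delayed / ~25%
    # canceled industry baselines quoted earlier. Just the division.
    MISSION_CRITICAL = 7_226  # OMB count of federal mission-critical systems

    for label, count in [("late", 991), ("canceled", 1_721)]:
        print(f"{count:,} of {MISSION_CRITICAL:,} = "
              f"{count / MISSION_CRITICAL:.1%} {label}")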

Maybe a miracle will occur and every one of those mission-critical projects will finish on time; maybe a similar miracle will occur in private industry. But the behavior, thus far, of both public-sector and private-sector organizations belies this. Another interesting metrics-level observation in this area comes from Dr. Barry Boehm -- whose "Software Engineering Economics" book (Prentice Hall, 1981), widely used COCOMO software cost-estimating model, and numerous other accomplishments put him near the very top of the software engineering "guru" list -- and who has studied numerous projects in order to find out what makes them succeed or fail. His observation (delivered before the plenary audience at the International Conference on Software Engineering in Boston in May 1997, during a question-answer session after my keynote address) was that the largest single factor in the ability of a project to finish on time is when the project is ACTUALLY started with real staff and a real budget, relative to when everyone knows that the project COULD be and SHOULD be started. Common practice throughout the software industry is that an organization decides that it needs to develop project X by date Y; then they form a committee and argue about it for a few months; and then they assign a project manager, but don't give him a budget or staff; then they decide to change the objectives and requirements for the project; and then, after valuable time has been wasted, they finally get down to work, by which point the possibility of finishing the project on time has been reduced to almost zero. This is exactly what's happening with many Y2000 projects, because senior management is paralyzed by the cost, the risk, and the necessity of having to make some extremely difficult business decisions. I assume you've seen the statistics in this area; the Cutter Consortium, whose Y2000 Advisory Service I head up, recently published a survey in which only 53% of the respondents said they had developed a formal Y2000 plan, 49% said they had explored the business consequences of Y2000, and 35% said they had developed a Y2000 triage plan.

The more you procrastinate on any large project, Y2000 or otherwise, the greater the risk that you won't finish at all. (Alas, managers in many companies still haven't learned that you can't solve the problem by throwing hundreds of programmers on the project at the last moment, just as you can't produce a baby in one month by impregnating nine women. The failure of that approach was first documented by Dr. Fred Brooks, head of IBM's OS/360 project, in a classic book called "The Mythical Man-Month" (Addison-Wesley, 1975 -- second, revised edition published in 1995)).

Hence my conclusion: all of the available evidence strongly suggests that we're not going to finish our Y2000 projects. We'll certainly get 50% of them done, probably 80%, possibly 90% -- but nowhere near 100%. Sure, some organizations will finish 100% of their systems (indeed, a few organizations have been Y2000 compliant all along); but some won't finish ANY of their Y2000 work, because they haven't heard about it yet, or because they haven't gotten around to funding and staffing it yet. To ignore the significant likelihood that 10% or 20% or more of the mission-critical Y2000 projects in this country won't be finished is irresponsible, in my opinion, not to mention imprudent. On a national level: sooner or later (probably later) someone at a high level is going to have to confront the significant likelihood that 900 or 1000 or more of the government's mission-critical systems will not be remediated in time. Unpleasant news, to be sure, but to ignore it is something I'm not willing to do.

A final comment, regarding the "TimeBomb 2000" book that I'm coauthoring with my daughter (available for now at www.yourdon.com/fallback/fallbackhome.html until it goes into production at Prentice Hall at the end of this month). . . . [O]ur statement was: we DON'T KNOW what the outcome of the Y2000 problem will be, nor does anyone else, not with mathematical certainty. There are many plausible scenarios, and what we tried to do in each of several important areas (banking, medicine, telecommunications, government, etc.) was to offer contingency-planning suggestions for four levels of Y2000 impact: a disruption lasting 2-3 days; a disruption lasting approx one month; a disruption lasting a year; and a disruption lasting a decade. In most of our chapters, we concluded that it was very difficult to even imagine, let alone predict, a 10-year disruption -- though Medicare, IRS, and Social Security are worth discussing. Even a one-year disruption seemed relatively unlikely in most cases; but the possibility of a one-month disruption is hardly radical... In any case, our premise was that responsible men and women should make their own decisions, based on their own assumptions, facts, guesses, informed opinion, or optimistic wishes. We continue to believe that's the most responsible thing we can do: let people decide for themselves. It would be nice if the entire conversation could take place with scientifically verifiable "facts", but there are two problems with this: (a) we don't have a lot of time to gather these so-called facts about the actual progress of Y2000 projects, and then debate them, disagree with them, and finally publish them -- if you're going to make fallback plans, the time to do it is now, with the best information you can get your hands on, not in December 1999. And (b) as you well know, most organizations have been advised by their legal departments to avoid saying anything at all about their Y2000 activities, for fear of litigation. We've been fortunate that the various government agencies have been "opening their kimono" to let us get a glimpse of what's going on inside.

I've had this kind of dialogue with many people during the past two years, and one of the most frustrating situations is when a CEO, or an IT professional, or a layman, says to me, "Well, those are pretty impressive numbers you're throwing around -- but it's not constructive. It doesn't tell us what we should do to SOLVE the problem." I sympathize with this attitude, for ours is a positive, optimistic, "can-do" kind of society: we like to define the problem, marshal our resources to deal with it -- and then, by golly, follow the Nike commercial's advice of "just DO it!" And I certainly want the IT community to work as hard as it can to fix as many systems as possible -- for the consequences will be far less serious if we finish 90% of the work than if we finish only 80% of the work. But it makes rational discussion more difficult when we decide, a priori, that it's non-constructive, negative, defeatist, and possibly even treasonous to say, "The problem cannot be solved. Not completely. Not anywhere close to completely."

I worry that if we continue along, during the next two years, with the optimistic "can-do" attitude that we're going to get 98% or 99% or 100% of the Y2000 work done, then we run the risk of NOT having done the necessary fallback and contingency planning that we ought to be putting into motion now. That's an area outside the charter, authority, responsibility, and political power of the IT department; it falls squarely on the shoulders of senior management in business and government. But thus far they're ignoring or avoiding it; as such, I consider President Clinton's vague, optimistic Y2000 statement on August 15th substantially more harmful than any apocalyptic vision offered by visitors on this Y2000 forum. As Peter de Jager observed in a conference last month, "Management will only support the notion of triage when they've absolutely given up hope of converting ALL of their systems." So far, it appears that they haven't done so.

