To be sure that a repair has not created more problems than it has solved, the repair team must test it. The only way to be sure is by running data into the old and repaired systems, separately, at the same time: parallel testing. If new data crashes the new system, that system must be checked out and repaired. Then the test must be run again. A good test may take six months. If the system crashes at five months, you start over.
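The parallel-testing procedure described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual harness; the function names (`run_old`, `run_repaired`, `parallel_test`) and the record layout are hypothetical.

```python
# Minimal sketch of parallel testing: feed identical data to the old
# and repaired systems and flag any divergence in their outputs.
# All names here are hypothetical illustrations.

def run_old(record):
    # Legacy behavior: two-digit years are implicitly 19xx.
    return 1900 + record["yy"]

def run_repaired(record):
    # The repaired system must match the old one on current data.
    return 1900 + record["yy"]

def parallel_test(records):
    """Run both systems on the same data; return every mismatch."""
    mismatches = []
    for rec in records:
        old_out, new_out = run_old(rec), run_repaired(rec)
        if old_out != new_out:
            mismatches.append((rec, old_out, new_out))
    return mismatches

production_data = [{"yy": 97}, {"yy": 98}, {"yy": 99}]
assert parallel_test(production_data) == []  # no divergence: test passes
```

A single mismatch means the repaired system must be checked out, fixed, and the whole run restarted, which is why a clean test can take months.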
Where does any organization find an extra 100% of mainframe capacity on which to run the test? It doesn't. Mainframe computer time is far too expensive, and there is no spare capacity equal to all the mainframe capacity on earth.
This seems to be the Achilles' heel that will bring down ALL systems. If you don't run a parallel test, how can you know your _fix_ is compliant? But where do you find an extra mainframe computer on which to run the test? Companies don't have that kind of hardware unused. So, how will the mandatory months of parallel testing be conducted in 1999, assuming that the repair comes in on schedule?
I see no way out: they won't do the testing. But the code is so complex that there will be mistakes. No one will know how bad those mistakes are until 2000, when it will be too late to fix them.
Can a team repair a system with 40 million lines of code and not make one major mistake? Forget it. They will not know until the year 2000. Then it will be too late.
That's my view. Now consider the opinion of a professional Y2K programmer:
* * * * * * * *
Date: Mon, 3 Feb 1997 13:42:31 -0600 (CST)
From: Bill Cook
It seems to me that there is an _unknown quantity_ in Y2K testing and I haven't been able to wiggle around it. Here is what I mean.
Assumptions (any language, any database):

1. The program to be tested will not operate correctly on 20000101 using 21st-century test data (it has a known Year 2000 problem).

2. The program _appears_ to be operating correctly now (19970203) using 20th-century production data.

3. The standard that _no harm was done_ in Y2K mediation for this project is parallel testing for batch, and CRUD (Create, Read, Update, Delete) for on-line, with no apparent anomalies.

4. There is no date expansion of the output files or user interfaces, the dates do not span 100 years, and a windowing solution is to be employed in the code.
In validating the parallel test, you can only be 50% assured you did no harm, because you can only compare the currently working program with the future-perfect program using data that the currently working program can process. Getting the future-perfect program to run against future-perfect data therefore gives you nothing to parallel test against using 21st-century data. How can you actually say you parallel tested?
If I were to create suites of test data that stress the system for given business situations (month-end, quarter-end, year-end, century-end, leap-year, on-request), and only the corrected programs could run with them, what would I have proved? How do I know that all of the paths through the code are being exercised, and if so, exercised correctly?
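The kinds of stress suites Cook describes can be generated mechanically. This is a sketch only, covering the month-end and leap-year cases from his list; `stress_dates` is a hypothetical name, and a real suite would also cover quarter-end, century-end, and on-request runs.

```python
# Generate boundary dates most likely to exercise date logic in a
# given year: every month-end, plus the leap day where it exists.
import calendar
from datetime import date

def stress_dates(year):
    """Return sorted boundary dates for stress-testing date logic."""
    dates = [date(year, m, calendar.monthrange(year, m)[1])
             for m in range(1, 13)]            # all twelve month-ends
    if calendar.isleap(year):
        dates.append(date(year, 2, 29))        # leap day
    return sorted(set(dates))

suite = stress_dates(2000)
assert date(2000, 2, 29) in suite   # 2000 is a leap year (divisible by 400)
assert date(2000, 12, 31) in suite  # year-end
```

Note that 2000 itself is a leap year under the divisible-by-400 rule, a fact some period code got wrong; a suite that omits 20000229 misses one of the classic failure dates. But as Cook's question implies, running such a suite only through the corrected program proves coverage of the data, not correctness of every code path.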
If I go back and run today's data through the future perfect program logic, how do I know that things like overlooked hard-coded century logic and indicators aren't just operating as they do today? There seems to be a gap here.
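Cook's gap is easy to demonstrate concretely. In this sketch (function names hypothetical), a badly repaired routine that still hard-codes the century agrees with a correctly windowed one on every 20th-century record, so a parallel run over today's data passes while the defect lies dormant.

```python
# Hypothetical illustration of overlooked hard-coded century logic
# surviving a parallel test run on 20th-century data.

def repaired_badly(yy):
    return "19" + f"{yy:02d}"   # overlooked hard-coded century

def repaired_correctly(yy):
    # Windowing with an assumed pivot of 50.
    return f"{2000 + yy if yy < 50 else 1900 + yy}"

# Parallel run over today's (20th-century) data: identical output,
# so the test reports "no harm done."
for yy in range(50, 100):
    assert repaired_badly(yy) == repaired_correctly(yy)

# The latent defect surfaces only with 21st-century data:
assert repaired_badly(1) == "1901"       # wrong
assert repaired_correctly(1) == "2001"   # right
```

This is the gap in miniature: today's data cannot distinguish the two programs, and the data that could distinguish them cannot be run through the old system for comparison.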
Let's kick the tires on this one for a while and see what others have turned up to get around this.
________________________________________________________________
End of Message
firstname.lastname@example.org
Try these Y2K links: http://www.netcom.com/~wjcook/resource.html