
Gary North's Y2K Links and Forums - Mirror

Summary and Comments





1998-03-03 09:12:07


25 Billion Chips; $2.4 Trillion to Fix?


No one knows how many chips are installed, but the estimate of 25 billion is common. No matter how much money companies spend on repairing code, the world cannot check all 25 billion chips and replace all of the defective ones. There are not enough people and tools to do the checking. Organizations would not pay for it if they could hire the people. So, we have to go into 2000 hoping for the best.

Where do you plan to be when you are sitting there, hoping for the best?

This posting is from David Hall, who keeps sounding the alarm on the embedded chip problem. He guesses that it would cost $2.4 trillion to fix it.

Well, it's a nice hypothetical figure, comparable to estimating the number of angels that can dance on the point of a needle. But that medieval academic exercise was really a debate over the corporeality of angels. Defective chips are corporeal. The debate over the number of noncompliant chips and their replacement is a debate over how many people, two months after the defective chips fail, will still be corporeal.

* * * * * * *

Date: Wed, 25 Feb 1998 15:49:09 -0500


Every embedded systems test that I am aware of has verified the assumption that each embedded system or piece of equipment MUST be treated as a unique system. Even the same models of equipment react to Year 2000 dates differently.

Think about the hardware involved. The basic chips, RTCs (real-time clocks), motherboards, BIOS, etc., HAVE HAD NO DATE STANDARD as far as 2000 dates are concerned. So how can anyone assume that no matter what pieces of hardware you use, they will all react the same at the macro level? Each "black box" is made up of numerous vendor-specific "sub black boxes" that are in turn made up of others (and so on). Not one of the vendors up this build chain has EVER known exactly what the subvendor put into their "black box". So once you get to the actual installed system level, there may be a dozen subvendors, all with different chips in each of their "sub black boxes".

And each of these chips was bought from the cheapest manufacturer on some day and thrown into a pile. When one was needed, a chip (since each one was built to the same specs) was picked up and used. Whose chip it was, no one cared. Well, now we need to care. And because there was no manufacturing standard for 2000 dates, each chip, even ones from the same manufacturer, may (WILL) react to Year 2000 dates differently.

THEREFORE, each embedded system, or piece of equipment, or PC, or anything using chips, MUST be tested as if it were unique. There can be NO generic tests used for any reason on equipment with chips. Sorry about that. This is reality as proven by testing and documentation searches. This is one of the reasons why I persist in noting that embedded systems will cost the world at least four times as much to remediate (or fix failures) as mainframe systems.
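The argument above — that parts built to the same functional spec can still diverge on dates — can be sketched in a few lines. The two decoding schemes below are hypothetical, not taken from any actual chip documentation: both satisfy a spec that stores the year as two digits and never mentions the century, yet they disagree the instant the counter rolls past 99.

```python
def rtc_naive(yy):
    """Hypothetical vendor A: assumes every two-digit year is 19xx."""
    return 1900 + yy

def rtc_pivot(yy, pivot=70):
    """Hypothetical vendor B: windowing scheme; years below the
    pivot are taken as 20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

# Identical behavior before the rollover...
for yy in (70, 85, 99):
    assert rtc_naive(yy) == rtc_pivot(yy)

# ...and silent disagreement immediately after it.
print(rtc_naive(0), rtc_pivot(0))   # 1900 2000
print(rtc_naive(1), rtc_pivot(1))   # 1901 2001
```

Two such parts pass every pre-2000 acceptance test identically, which is why (per the post) no generic test of the assembled "black box" can predict which behavior is buried inside.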

$600 billion x 4 = $2,400 billion? Sounds like a dumb number? Well, try to calculate how much the world has spent over the past fifty years putting chips into everything. This is probably a very small percentage of that. But it is a tremendous sum to pay over the next few years, especially since most of it will go for nonexistent manpower.
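Hall's back-of-the-envelope arithmetic, using the $600 billion mainframe figure he quotes and the 25-billion-chip estimate cited at the top of this page, works out as follows. Both inputs are the post's own guesses, not measured costs; the per-chip figure is simply his total divided across the chip estimate.

```python
mainframe_cost = 600e9          # $600 billion: mainframe remediation estimate in the post
embedded_cost = mainframe_cost * 4   # Hall's claim: embedded work costs at least 4x mainframe
chips_installed = 25e9          # common estimate cited at the top of the page

print(f"embedded total: ${embedded_cost / 1e12:.1f} trillion")    # $2.4 trillion
print(f"per chip:       ${embedded_cost / chips_installed:.0f}")  # $96 per installed chip
```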

Dave Hall

Opinions are my own and not those of my employer
