y2k redux
August 30, 2006 9:16 AM

I have been thinking recently about the y2k silliness and how I have never read a good autopsy of the phenomenon - or rather, beyond the claim of hysteria, an adequate explanation of the lack of a phenomenon. (long preamble and a long technical question inside)

There are several camps. "We averted a y2k disaster through our diligence" seems to be the minority opinion. A large group says "it was like killer bees" - an invocation of hysteria for the unthinking masses. The former opinion seems to me ridiculous, the latter simplistic. It seems too easy to say it was a massive con job. The y2k believers had fairly intelligent and sincere people in their camp who, it appears to me, were at a minimum fooling themselves. Although much has been discussed about "what was the germ" that alarmed computer specialists, little has been done to explain why that germ was non-pathogenic. The wikipedia article doesn't address this issue.
http://en.wikipedia.org/wiki/Y2K
Even at this late date, an explanation is meaningful. There are general lessons here. Sorting out phony predictions of doom from real ones, or more generally, phony science from real science, is necessary. How we fool ourselves, and how we add hysteria to an issue, are crucial matters to understand. I recall in 1999 reading both a gun magazine (doing research) that explained what ammunition to use when the y2k dispossessed zombies showed up at your door, and the Utne Reader on how the coming anarchy would be a time for togetherness. Because we can't be experts in many issues (computers, Iraqi weapons of mass destruction, global warming) we rely on others to be experts for us. (To make it bipartisan: as governor, Bush was in charge of his state's response to y2k, he went after those weapons of mass destruction in Iraq, and he doesn't believe in global warming. Gore was assigned responsibility for the nation's preparedness for y2k, I don't recall a prior statement of his on WMD in Iraq, and he believes in global warming.)
With that preamble, I wanted to run my theory past those at MeFi who have the expertise to evaluate it. The Y2K bug always seemed phony to me. The main reason is that it seemed to require more effort in the original programming to create the bug than it would be to not have the bug.
The standard dogma was that in order to save space early programs had years presented as two digits appended after a "19" and, that when the year became 2000, the last two digits of 2000 would make the date appear as 1900. The computer would start "thinking" it was 1900 all over again or, worse, the internal counter would become zero causing divisions by zero to occur. Computers would crash, keyboards would melt and airplanes would fall from the sky.
The problem with this is that the internal register carrying the year is binary. When the year increases by one it would change from 99 (1100011) to 100 (1100100). A simple program written to save time and space would not bother to convert the counter to base 10, keep only the last two digits, and then maintain those two digits as the new counter. What would happen instead is that you would have 100 appended to "19", or the year 19100. This is exactly what happened in my only y2k glitch: Tiger Woods PGA Tour 1999 displayed the dates of saved games as 19100 in 2000.
But 19100 is not nineteen thousand one hundred. In the computer's memory it is exactly one more than 1999 (actually 100 vs. 99, or 1100100 vs. 1100011 in binary). This is why there were no sudden bills being sent out for 17000 years past due, no divisions by zero, and no confusion that it was 1900. The mistakes, if they were ever to appear, would be cosmetic, since display is the only thing being done when the date is presented as 19xx.
For a mistake to go beyond presentation, you would have to have a program that captured the appended four-digit number (1900) or five-digit number (19100) and fed it back into processing, when by far the simplest way to access the year is to read the register holding the counter (100). Or it is possible someone stupid could have read the full displayed year and typed it in elsewhere as the actual year.
I don't believe Y2K experts were hucksters. The reason so many Y2K experts fooled themselves was that they ran a simple but artificial test. They looked at what would happen if the year converted to 00. (This is speculation on my part.)
The next expansion in necessary bits to define a number would come when 127 becomes 128. A y2028 problem? ;)
Has there been a good autopsy of Y2K written? We spent 300 billion dollars in the US preparing for y2k. You would think someone would care.
posted by dances_with_sneetches to Computers & Internet (44 answers total) 9 users marked this as a favorite
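For what it's worth, the "19100" glitch the poster describes falls out of a common C idiom: struct tm keeps the year as years since 1900, so naive display code that prepends "19" prints 19100 in 2000. A minimal sketch (the standard-library calls are real; the buggy printf is a made-up illustration, not the Tiger Woods code):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);  /* tm_year counts years since 1900 */

    /* Buggy display: fine for 1970-1999, prints "19100" in 2000. */
    printf("buggy:   19%d\n", t->tm_year);

    /* Correct display: add the 1900 base back before printing. */
    printf("correct: %d\n", t->tm_year + 1900);
    return 0;
}
```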
 
Believe it or not, there's a hell of a lot of old COBOL code out there that stores stuff like year numbers as two ASCII digits rather than a single binary field. Lots of that stuff ran (probably still does run) in banks, tax departments and whatnot.

I know from a friend who used to work there that before Y2K panic set in, the Australian Tax Office was running COBOL code so ancient that nobody still working there knew it well enough to fix it any more. There was one procedure, for example, that generated massive printouts, consuming boxes and boxes of 15" fanfold paper, that had to be run just for its side effect of getting something updated in some database somewhere so that the next step in the processing would work properly. Nobody had looked at the printouts themselves in the fifteen years since the law that required them to be made got changed - in fact they were feeding the paper pretty much straight from the printer into the shredder.

Legacy code is often a nightmare beyond all reasonable proportions, and at least some of the Y2K effort was well justified.
posted by flabdablet at 9:28 AM on August 30, 2006


It's true that relatively few computers actually would have had serious problems come January 1, 2000, and that many of those could have been fixed fairly easily.

But, most large businesses, especially banks, hospitals and the like, envisioned a huge liability issue if any equipment proved not to be Y2K-ready and caused financial losses, business interruptions, deaths or injuries. They simply couldn't afford not to check everything. That included, in hospitals, things like blood pressure monitors that had no time/date function whatsoever.

I don't know where the $300 billion tally comes from. It certainly is possible that much was spent on new computer equipment during that time frame, but that equipment had a lot of benefits besides just being Y2K-ready. The investment helped extend the economic growth of the 1990s for a year or two and is still paying off today in a variety of ways.
posted by beagle at 9:29 AM on August 30, 2006


It was my understanding that a majority of the problems were going to come from the various protocols and other data interchange formats that had a date field. For instance, many people stored the date as two characters instead of a single byte, so rollover wasn't strictly a problem there. But the database or protocol might have built-in data range assurances that force out-of-range values into something meaningful, and how that mapping occurred was ill-defined. Is it modulo-100, or peg at 99?

The other concern I think was bad assumptions on date ordering. For instance: employee start year is always less than employee end year. This isn't true with 2-digit text dates, even assuming it was wrapped correctly back to 00.
posted by todbot at 9:31 AM on August 30, 2006
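A minimal C sketch of the ordering assumption todbot describes, using invented two-digit employee years (nothing here is from a real system):

```c
#include <stdio.h>

int main(void) {
    int start_yy = 98;  /* employee started in 1998 */
    int end_yy   = 1;   /* contract ends in 2001, stored as 01 */

    /* The "start is always before end" assumption silently breaks. */
    if (start_yy < end_yy)
        printf("start precedes end, as expected\n");
    else
        printf("bug: 2001 sorts before 1998 with two-digit years\n");
    return 0;
}
```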


The problem with this is that the internal register carrying the year is binary. When the year increases by one it would change from 99 (1100011) to 100 (1100100).
That's not how the programs they were worried about stored the date, though. A lot of programs ended up storing the date as a two-character string. So, in fact, one more than "99" is "00". You're right that some programs stored the two-digit part of the year as a number which then rolled over to 19100, which is still clearly wrong but, as you say, normally produces only display errors.

But if the system stored it as a string (or, in the case of Cobol, a two-digit number - yes, in Cobol there are a lot of programs that have 01 Year Pic 9(2)), then when it became 2000 the current year was seen as 00--or 1900 to the program's logic. So events recorded in 1999 were seen as after the current date.

And even with all the fixes made, there have been bugs found because of going to the year 2000. The Wikipedia page has several examples, and I remember hearing reports of localities finding problems in pre-2000 testing (like an elevator system in LA that stopped working when they set the year to 2000).

Y2K was overblown. It appears that for the most part the worst case wouldn't have happened, but it wasn't a total non-problem.
posted by skynxnex at 9:32 AM on August 30, 2006


See, the thing is, it's not as though you can just add 1 to a two-digit year representation and have it go to 19100.

Computer memory isn't structured that way. You typically have a set size for various datatypes (one byte per character, 16 bits for an integer, etc.), and if you try to stuff more data into a variable than was originally allocated, you'll sometimes have very serious errors.

If you don't check whether the buffer is large enough for the data you can end up with a buffer overflow; in the Y2K case, many systems were probably just truncating the date field so that 99 + 1 == 00 rather than 100.

Now, why didn't airplanes fall from the sky and all the nuclear missiles go haywire? Because of a massive amount of work on the part of a lot of programmers and because the original software was probably written to not entirely crash when presented with an incorrect date.

It's not like airplane software was written to say "if the date is less than 1903, then eject the wings."

Also, this issue was most prevalent with systems written in COBOL, which was usually used by businesses, not nuclear missile firmware.
posted by bshort at 9:37 AM on August 30, 2006 [1 favorite]
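A minimal sketch of the truncation bshort mentions, assuming a fixed two-character year field (the field layout is invented for illustration):

```c
#include <stdio.h>

int main(void) {
    int year = 99;            /* two-digit year as stored: 1999 */
    char field[3];            /* fixed two-character field plus NUL */

    year = (year + 1) % 100;  /* roll over: 99 + 1 becomes 0, not 100 */
    snprintf(field, sizeof field, "%02d", year);

    printf("year field after rollover: \"%s\"\n", field);  /* "00" */
    return 0;
}
```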


I was a sysadmin at the time. The reason Y2K wasn't a problem was precisely because of the alarmist predictions. We spent most of 1999 making absolutely sure that EVERY SINGLE THING in our network was patched, and even at that, we were stuck in the office at 12:01 AM, just in case.

We spent a great deal of time patching. There was a LOT of busted code.... and our network was nearly new! It was less than five years old, and yet practically every program in the whole network needed a patch. Even a program _I_ had written for a prior employer in the mid '90s needed a small patch.

It's easy to look back and say, "you morons, there was no threat!" There WAS a threat, and the reason it was averted was because of that warning and a great deal of work.

I think the Unix time_t rollover in 2038 is going to be much bigger, precisely because Y2K was so well-handled. People will still remember that not very much broke 38 years prior, and they'll under-allocate resources. And a lot of stuff will blow up - ancient code that nobody understands anymore - and we'll spend months or years tracking down all the bugs. By then, computer code will be reliable enough, and people will trust it enough, that there will probably be a few deaths from system failures.

I'm sure it won't be a catastrophe, but it won't go as well as Y2K did.
posted by Malor at 9:43 AM on August 30, 2006
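For reference, the rollover Malor is talking about is the signed 32-bit time_t limit; a minimal sketch of where that boundary falls, assuming the build's time_t can represent it:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* Largest value a signed 32-bit seconds-since-1970 counter holds. */
    time_t limit = (time_t)INT32_MAX;        /* 2147483647 */
    char buf[64];

    /* Prints 2038-01-19 03:14:07 UTC; one second later a 32-bit
       signed counter wraps around to a date in December 1901. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&limit));
    printf("32-bit time_t limit: %s\n", buf);
    return 0;
}
```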


See also BCD (binary-coded decimal).
posted by BaxterG4 at 9:43 AM on August 30, 2006
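For readers who haven't met it, a minimal sketch of the packed BCD encoding BaxterG4 points to, where each decimal digit takes four bits (the year values are illustrative):

```c
#include <stdio.h>

/* Pack a two-digit year (0-99) into one BCD byte: tens digit in the
   high nibble, units digit in the low nibble. */
static unsigned char to_bcd(int yy)  { return (unsigned char)((yy / 10) << 4 | (yy % 10)); }
static int from_bcd(unsigned char b) { return (b >> 4) * 10 + (b & 0x0F); }

int main(void) {
    unsigned char y99 = to_bcd(99);              /* 1999 -> 0x99 */
    unsigned char y00 = to_bcd((99 + 1) % 100);  /* rollover -> 0x00 */

    /* Code that assumes a fixed "19" century reads 0x00 as 1900. */
    printf("1999 stored as 0x%02X; next year stored as 0x%02X (read back as %02d)\n",
           y99, y00, from_bcd(y00));
    return 0;
}
```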


Here's an example of a real Y2K problem that we had to fix:

We get a file that has a send date on it. We only use the file if it's newer than the last file we got. This date is in the form MMDDYY, because the file format hasn't changed since 1972. Our code compares the year part, then the month, then the day. On Jan 1st, 2000, the year changes to 00 and the old year was 99. Since '00' < '99', it's not recognized as a new file and isn't processed.
This didn't fix itself.
posted by smackfu at 9:44 AM on August 30, 2006
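A minimal C sketch of the comparison smackfu describes, treating the MMDDYY send date as text and comparing year, then month, then day (the helper function and file dates are invented for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Return nonzero if MMDDYY date a is newer than MMDDYY date b,
   comparing year, then month, then day - the pre-2000 logic. */
static int is_newer(const char *a, const char *b) {
    char ka[7], kb[7];
    /* Reorder MMDDYY into YYMMDD so one string compare does the job. */
    snprintf(ka, sizeof ka, "%.2s%.4s", a + 4, a);
    snprintf(kb, sizeof kb, "%.2s%.4s", b + 4, b);
    return strcmp(ka, kb) > 0;
}

int main(void) {
    const char *last_file = "123199";  /* Dec 31, 1999 */
    const char *new_file  = "010100";  /* Jan  1, 2000 */

    printf("new file accepted? %s\n",
           is_newer(new_file, last_file) ? "yes" : "no - the Y2K bug");
    return 0;
}
```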


I should also amend.... at the time, computer software was buggy as hell. It's getting better, but society was mostly used to software that didn't really work right. So had we done NOTHING, a lot of stuff would have broken, but for the most part we'd have muddled through okay, since we were used to that happening anyway.

I think 2038 could be very different.
posted by Malor at 9:45 AM on August 30, 2006


Also, there's an implication here that you would somehow know if a company didn't fix every single Y2K problem. But they have no reason to advertise that fact. Quite the contrary. So the real scope of what happened and was fixed quickly is impossible to gauge.
posted by smackfu at 9:47 AM on August 30, 2006


A quick search turned up numerous reports of failures that did happen. Here's one example - scroll about half way down to annex II. Nothing fatal, or likely to cause headlines in the news, but a lot of stuff.

From personal experience at a small consulting firm, I can say that without 2-3 years of work, several key functions within Canada Post would have stopped working completely. There were many other, much larger, consulting firms working on other parts of their operations. No, it's not planes falling from the sky, but any nationwide disruption of mail service would have been significant. That's just one example of why the money was spent and the work was done.
posted by valleys at 9:53 AM on August 30, 2006


Again, let me re-emphasize.... a program I myself wrote had this problem. I wrote it in about 94, and I never expected it to still be in production 6 years later.

In my specific case, I generated job numbers based on the last two digits of the year; the first job in 1994 was 94-0001. The reports that ran were largely based on those two digits, and since 00 was not greater than 99, a lot of very fundamental assumptions in the code broke.

I just extended it to three digits, and the first job in 2000 was 100-0001; it was a nice easy solution that took about thirty minutes to implement and test. Probably looked a little silly, but it did work.

Remember, computers may work in binary, but PEOPLE mostly think in decimal, and it's people who write programs.

Your conspiracy theories are entirely misplaced.
posted by Malor at 9:54 AM on August 30, 2006
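A minimal sketch of the shape of the fix Malor describes, widening the year prefix so it keeps increasing past 1999 (this is a guess at the approach, not his actual code):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* Old scheme: last two digits of the year (94-0001, ..., 99-9999). */
    int prefix_old = (t->tm_year + 1900) % 100;

    /* Widened scheme: use tm_year itself, which is 100 in 2000 and
       keeps climbing, so "later job number" comparisons still work.
       The first job of 2000 comes out as 100-0001. */
    int prefix_new = t->tm_year;

    printf("old prefix: %02d-0001   widened prefix: %d-0001\n",
           prefix_old, prefix_new);
    return 0;
}
```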


A good source for information about y2k problems is the archives of the RISKS digest; a search for y2k reveals quite a few hits. The first reference to y2k is from March 1996; the most recent report of lingering y2k problems is from October 2005.
posted by doorsnake at 10:07 AM on August 30, 2006


My husband wrote a lot of code in the 1980's. It wasn't intended to be used for more than 3 - 4 years. Instead of rewriting it, people just kept adding on to it. He said he had no idea what would happen that midnight.
posted by clarkstonian at 10:24 AM on August 30, 2006


This Wired article makes it sound like at least some luminaries (Yourdon, at least) feel as though it ended up seeming like a non-issue precisely because it was prevented, and I recall at least a few snafus that resulted.

Think of how different the world would be if, say, President Gore had prevented 9/11 a week ahead of the attacks. News coverage of bin Laden's ability to pull it off (and especially to bring down the towers) would have been laughed off, and the budget to address future acts of terrorism would have been cut.

Nevertheless, I do think the situation was overhyped. Some were making money off the hype (fearmongering is always good for the news outlets as well as consultants), and some of the hype was a simple case of extrapolating wildly: "I know my organization's code is screwed up a little. But what about the energy grid? The air traffic control systems? The nukes? It'll be the end of everything!"
posted by kimota at 10:37 AM on August 30, 2006


There was a good discussion of this on Crooked Timber a few months ago. The author, John Quiggin (a Y2K sceptic) posted a link to an academic paper he'd written, on 'The Y2K scare: causes, costs and cures' (link to PDF here), which is exactly what you're looking for, I think. I found it very interesting. Basically, he argued that the Y2K scare got out of control because no one had an interest in stopping it. No one stood to lose their jobs by over-reacting to the problem:

From the perspective of public administration, the two most compelling observations relate to conformity and collective amnesia. The response to Y2K shows how relatively subtle characteristics of a policy problem may produce a conformist response in which no policy actors have any incentive to oppose, or even to critically assess, the dominant view. Moreover, in a situation where a policy has been adopted and implemented with unanimous support, or at least without any opposition, there is likely to be little interest in critical evaluation when it appears that the costs of the policy have outweighed the benefits.

I think he has identified a more general problem here -- that in large organisations, there is a strong tendency to err on the side of caution. No one has any incentive to express scepticism -- they won't get rewarded if they're right, but they will get punished if they're wrong. (Compare the recent restrictions on airline travel -- no one wanted to appear to be underestimating the risk, and the result was a hugely disproportionate response.)

Unfortunately, Quiggin doesn't really propose any solution to this problem, except to suggest that in every organisation, there should be some form of 'institutionally sanctioned scepticism' -- which is easy to suggest, not so easy to implement.
posted by verstegan at 10:41 AM on August 30, 2006


I don't believe Y2K experts were hucksters. The reason so many Y2K experts fooled themselves was that they ran a simple but artificial test. They looked at what would happen if the year converted to 00. (This is speculation on my part.)

In the politest possible way, it's idiotic speculation. The genuine Y2K experts were smart enough to not have to do an "artificial test". They could go through the actual code of the actual system, see what would happen to years and dates after 1-1-2000, and test how the rest of the program behaved in those circumstances. You aren't smarter than the rest of the world.
posted by cillit bang at 10:42 AM on August 30, 2006 [1 favorite]


The 2038 time_t bug is scary, precisely because there isn't really an obvious way to fix it. Anything that's 32-bit is going to have the same problem, and you can't just resize a system- and internet-wide value so easily.

Also, Y2K was/is only a problem in bad code. If you used a two-digit year, even in 1979, it was a stupid hack. Using time_t as a date store is the Right Way To Do It; it's the best practice.

Personally, I have a strong 64-bit preference in new computer purchases and I think everyone should do the same, but what about integrated systems that can't be easily patched? SCADA systems and the like are usually un-upgradeable or very close, and getting re-certified on a different hardware platform is unlikely to be cheap. Of course most SCADA software is apparently crap anyway, but... Look at the East Coast blackout: a series of one-in-a-million system failures combined with an at-capacity grid turned a tree hitting a power line into a complete failure of one of the biggest power grids in the world. Now if a tree can do that, what do you think a potentially systematic software failure could do? There's a reason technical people are paranoid about this sort of thing: we know that nobody really understands the entire network. As has been said in many different ways, once you've seen a bank's internal code, you'll only keep your life savings in Krugerrands.

Add in all of the previously-mentioned human idiocy that will see this as a cried-wolf problem and 2038 could be quite bad indeed. Of course, 32 years is a Long Time, but 2000 was a Long Time away at one point too.

Now, a 64-bit time_t is big enough for a few billion years of seconds; combined with ZFS's enough-inodes-for-the-observable-universe and IPv6's happy-to-give-an-entire-internet-to-every-square-centimeter-on-the-planet, we ought to be OK for a little while longer. What do you mean you can't patch the code in your pacemaker? Why not?

If I may rant briefly, the problem is that most consumers of software (including management/clients who 'buy' custom code) don't understand the risks of the long-term and the large-scale. You get applications that were designed, built and tested in an office, rolled out for a small user base, and ten years later are the foundation for millions of dollars worth of transactions. Building a system or application to be stable at more than trivial time and scalability constraints is Hard, and Hard means expensive and most clients are cheap.

Since the risks are far away, the mentality of "we'll just replace it when" comes into play, rationalizes the bargain-basement approach, and then is rapidly forgotten when "when" rolls around. The solution is not to try to take every possibility into account but rather to build enough flexibility into the design to accommodate changes in the future, by different programmers in a different environment and with different priorities.
posted by Skorgu at 10:49 AM on August 30, 2006 [1 favorite]
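As a footnote to Skorgu's point about 64-bit time_t, a quick way to see which side of the 2038 line a given build sits on (output obviously varies by platform):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* A 4-byte time_t wraps in January 2038; an 8-byte one holds
       roughly 292 billion years of seconds. */
    printf("time_t on this build: %zu bytes (%zu bits)\n",
           sizeof(time_t), sizeof(time_t) * 8);
    return 0;
}
```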


Response by poster: Okay, the 300 billion dollars spent is from the wikipedia article which cites a BBC news article from that time. As for conspiracy theories, I'm not putting any forward beyond fooling ourselves - a denigrating theory, but not a conspiracy.

As for the "problem was real but we solved it" view - I don't buy that. I see no differences between places that went out of their way to upgrade for y2k from those that didn't. I remember at the time there was an article describing which countries were doing what. Some countries were doing little to nothing. (I seem to recall among first world countries, Italy was in the latter camp.) There were no observable differences in failures between places that were heavy in capital to upgrade and rewrite and those that weren't.

This seems to me to be not an example of a phenomenon that was avoided, but a phenomenon that wasn't there. I'm wondering if the experts have too much invested in the belief that what they did was meaningful to be able to evaluate it objectively.

To quote a contemporary critique of the passage of y2k without a glitch:
"I (along with many of my colleagues both at work and in the industry as a whole) have spent a great deal of effort and time on fixing the Y2K problems 'before they happened'. What we have collectively achieved is pretty astonishing - the whole event passed without a bang, more like a whimper."

My expertise is much closer to evaluating avian flu. It seemed like we were wasting a lot of money on preventing something that was damned unlikely - at least in terms of "any given year" or "this year." The problem with post-mortem analyses of such events is that you have to distinguish between:

a) nothing happened because we prevented it.
b) nothing happened because the model that said it would happen was flawed.

In the case of avian flu, there was a stronger historical precedent, so one might have come to a third conclusion: better safe than sorry. The y2k bug was a phenomenon without such precedent and (at least from my perspective) with a greater reliance on the arcane.

Thanks especially for correcting me as to how the data was often stored - it invalidates my theory, at least as a global theory.

I think there's a good book to be written here. I'm not saying it would sell a million copies, but something that fairly evaluated the y2k phenomenon (and the lack of one) would interest me - and I think others.

I know I'm not the one to write it.

(Note, in preview, a couple of more posts have been added. I wasn't ignoring you. Just completed this and have to go now.)
posted by dances_with_sneetches at 11:05 AM on August 30, 2006


"We averted a y2k disaster through our diligence" seems to be the minority opinion. "

Only if you poll the general public.

Bugs that would have caused big problems were quite demonstrably found, documented, and fixed; therefore, regardless of whether a minority of the public believes it, the evidence supporting this view is massive.

I think the discrepancy may be not so much that the experts were fooled, but that since so much was unknown and assurances could not be given, what would result from a "y2k disaster" in the mind of a computer expert was not the same as what would result from a "y2k disaster" in the mind of a survivalist fantasizing about the end of civilisation :-)

Blackouts happen, comm lines get cut, worst case would involve a whole bunch of really inconvenient service outages happening at the same time, hampering the recovery of each other, but not a Katrina-level disaster.
posted by -harlequin- at 11:09 AM on August 30, 2006


I contracted with $BIG_COMPANY in the 90's. One of our projects was Y2k with their embedded systems, and believe me - there were lots of these widgets out there.

Anyways, these things had a huge Y2k problem - they'd just shut down. If you relied upon something to, say, open or shut a door at a certain time, nothing would happen. It would just fail, permanently. This was very, very bad because the function that they controlled wasn't doors and it would affect quite a few sensitive people and property.

Anyways, the company undertook a massive and completely hush-hush program to update all identified products before Y2k. They built a big list and went site to site, location to location, to upgrade or upsell to complaint products.

They finished a few years ahead of 1999 and there wasn't a hiccup come 2000. Problem solved, no muss, no fuss.

The point of that rambling story is that many large companies worked very hard to keep their true exposure under wraps. No one wanted to be known as The Company Who Sold A Million Defective Widgets.

No news was good news wrt Y2k.
posted by unixrat at 11:16 AM on August 30, 2006


Actually, scratch my last post. I don't have the time to back up the point I was trying to make, and as is, it's made poorly and doesn't address even the most obvious counter-claim
posted by -harlequin- at 11:17 AM on August 30, 2006


s/complaint/compliant/g

Oops.
posted by unixrat at 11:22 AM on August 30, 2006


Like all good questions, the correct answer is probably in between the extreme views. There can be no doubt that the Y2K hysteria was overblown. There were people holing up in bunkers with tin cans and guns. That is obviously pretty far into the nutjob spectrum. It is also true that there were some consultants who made a lot more money than they deserved whipping senior management into a frenzy.

On the other hand, there were a ton of real bugs in real important programs that could have caused enormous problems if they had not been changed. Our organization spent a small fortune upgrading systems, replacing some older ones and generally getting our act together. Despite the efforts of some of our best and brightest, we still experienced some Y2K bugs. They weren't in the most important routines and they didn't cripple anything, but they were there. The thing is that we deal with bugs, and the fallout from those bugs, every single day. It is what we do, and we are pretty good at performing the triage and fixing them in the smartest way for the particular problem. So the Y2K bugs wound up being like every other bug we fix every day.

I have to admit that we milked the phenomenon. We had some ancient systems that we wanted to replace anyhow that we managed to eliminate in the guise of Y2K-readiness. We got to assign the cost of the replacement to Y2K, which had its own budget, when we probably could have done it more cheaply with band-aids.
posted by Lame_username at 11:25 AM on August 30, 2006


I see no differences between places that went out of their way to upgrade for y2k from those that didn't

How on earth are you judging this?
posted by cillit bang at 11:34 AM on August 30, 2006


I see no differences between places that went out of their way to upgrade for y2k from those that didn't.

How do you have any idea? I worked at a company that wrote a lot of banking software. We tested our systems. When we ran them past 12/31/1999 they broke.

We spent a lot of time fixing them, we tested them some more, and when the big day came we were fine.

Was it blind luck? No. It was a lot of hard work on our part.

Unless you can provide some sort of documentation for what you're saying, you're wrong. I worked on systems like this, and respectfully, believe me when I say you have no clue what you're talking about.
posted by bshort at 11:46 AM on August 30, 2006


My buddy's book on collective behavior has a chapter: "Millenialism: Y2K and the End of the World as We Know It."
posted by LarryC at 11:59 AM on August 30, 2006


How do you have any idea? I worked at a company that wrote a lot of banking software. We tested our systems. When we ran them past 12/31/1999 they broke.
Seconded. Our first tests showed that virtually every program we had would fail in at least small ways and many key systems would completely collapse.
posted by Lame_username at 12:14 PM on August 30, 2006


I spent that night in the data center at work, making sure all of our patched/upgraded systems made it through the night. The only one that failed was the hideous old junk which ran our security badge readers. Sure enough, the next morning, I could not get out of the building. I had to call the CEO to come rescue me. We deployed a new security badge system that week.
posted by daveleck at 12:15 PM on August 30, 2006


"it appears that the costs of the [y2k] policy have outweighed the benefits."

"We had some ancient systems that we wanted to replace anyhow that we managed to eliminate in the guise of Y2K-readiness. We got to assign the cost of the replacement to Y2K"


I'm under the impression that the latter behaviour was probably pretty widespread/standard in many countries, and if so (which I suspect is a pretty safe "if"), it would seem to effectively negate the first perception.

I.e., the benefits of the Y2k spending went beyond Y2k readiness; they included a (usually long-overdue) infrastructure overhaul - costs that would have had to be paid eventually anyway. And additionally, improved infrastructure usually enables higher productivity or efficiency or lower ongoing costs, so over the long term it cuts your costs.

posted by -harlequin- at 12:42 PM on August 30, 2006


My Dad's been a COBOL guy since the 1960s. He was part of the problem, and he was part of the solution. He does legacy systems (transaction settlements) for one of the larger banking groups. He is adamant that for finance there was a grave problem, and that it was fixed only temporarily. Basically, for a lot of critical systems they just applied a modulo patch on top that bifurcates the centuries using the two-digit year: >50 => 20th century, and <50 => 21st century. So everything is okay until 2050, when this common hack will no longer work.
posted by meehawl at 12:48 PM on August 30, 2006
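A minimal sketch of the windowing patch meehawl describes, with a pivot of 50 (real systems vary in the exact pivot and comparison):

```c
#include <stdio.h>

/* Expand a two-digit year with a fixed pivot: values of 50 and above
   are read as 19xx, values below 50 as 20xx.  The hack buys time but
   breaks again once real pre-1950 dates meet post-2049 ones. */
static int expand_year(int yy, int pivot) {
    return (yy >= pivot) ? 1900 + yy : 2000 + yy;
}

int main(void) {
    int pivot = 50;
    printf("99 -> %d, 05 -> %d, 49 -> %d, 50 -> %d\n",
           expand_year(99, pivot), expand_year(5, pivot),
           expand_year(49, pivot), expand_year(50, pivot));
    return 0;
}
```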


Response by poster: I am certainly impressed by the number of people here that were involved in the recoding process.

In answer to a couple of things, I did a little research before I posted here. The general opinion seems to be much less generous than the one I presented - that y2k was a snake oil scam.

As for whether nations that did not upgrade showed little to no differences in problems in comparison to those who did, that phenomenon is described in a variety of sources including the wikipedia entry. There exists a list of third world countries with system failures? I don't think they did upgrades and were probably more likely to have archaic software.

I wish there was a person here who could relate a story saying: we didn't upgrade and bad things happened. (I didn't change my computer or operating system which at the time was described as non-y2k compliant and nothing happened to me.) Running scenarios that induced failure is not quite good enough for me. In my line of business it is easy to model things that don't happen.

Again, with billions of dollars spent, you would have thought a good post-mortem analysis would have come out of this. I've seen ad-hoc statements and that's about it. If nothing else to provide confidence in the money spent.

Thanks for the spirited responses. I hope I haven't been too offensive.
posted by dances_with_sneetches at 1:45 PM on August 30, 2006


Again, with billions of dollars spent, you would have thought a good post-mortem analysis would have come out of this. I've seen ad-hoc statements and that's about it. If nothing else to provide confidence in the money spent.

Perhaps, but I'd bet that many companies are still enforcing non-disclosure agreements to protect themselves from the lawsuit-happy USA.

I don't think that anyone wants to hang themselves out there and say "Yes! We sold non-Compliant products right up to 1997 and we had to spend fifty million of shareholder money to fix that right up!"
posted by unixrat at 2:00 PM on August 30, 2006


"There exists a list of third world companies with system failures? I don't think they did upgrades and were probably more likely to have archaic software."

I think you have this backwards. In terms of stuff like computer infrastructure, my impression is that third world countries do not have archaic software - being third world means coming late to the party, so to speak. Being, say, 30 years behind the technology curve doesn't mean they get 30-year-old unpatched technology; it means they don't start getting technology until it has been evolving for 30 years in more developed nations and has come down in cost and accessibility.

The USA was one of the first countries to get cell phones and is now saddled with an archaic system, while third world countries that are only now starting to get cell phone infrastructure are getting infrastructure superior to the old stuff that the USA is only now starting to try to escape.

Archaic systems (in this particular technology timeframe) are the bane of the developed world, not the third world. It's places like the USA where archaic stuff is built into the foundation.
posted by -harlequin- at 2:03 PM on August 30, 2006


>Some countries were doing little to nothing. (I seem to recall among first world countries, Italy was in the latter camp.)

I've definitely seen that "fact" but what can it actually mean?

I suspect it means "the Italian government didn't do much", as in, it didn't have some big-ass program and hand out Y2K Readiness Grants. Do we really think the Italian banking industry didn't at least check whether all their ATMs were going to start spewing out 1000-lira notes?

I think we've had enough data from private industry and finance to prove that in every country, people went to a great deal of trouble, whether they wanted it known or not.
posted by AmbroseChapel at 2:31 PM on August 30, 2006


It's an example of what Rush Limbaugh today was describing as our "crisis culture". He attributes it to an attempt by the Left to grab power, but I'm dubious. It is clearly, though, driven by a media that can't help but sensationalize every damn thing.
posted by megatherium at 3:01 PM on August 30, 2006


...some luminaries (Yourdon, at least) ...

Can't let this one go. Mr Yourdon was one of the people who helped generate the end-o'-the-world hysteria. Yes, there was a genuine problem that had to be fixed. But Yourdon's bestseller at the time, "Timebomb 2000", said that cars and elevators would stop working. He said this even though he gave no actual examples of the systems that would crash. It was irresponsible fearmongering at its worst.
posted by storybored at 3:11 PM on August 30, 2006


Running scenarios that induced failure is not quite good enough for me. In my line of business it is easy to model things that don't happen.

If that's not good enough for you, then what the hell WOULD be? We set the clocks ahead and shit broke. How much more testing do you need?

You're so married to your idea that you're not listening to what people are telling you... that the problems were very real, and would have caused definite real-world issues. It's been six years, so I don't remember all of what failed, but our backups stopped working, among other things. That's not an instant catastrophe, but lemme tell ya, from a corporate point of view, data loss can absolutely be a disaster.

Computer people are, in general, quite intelligent. If Y2K was really hysteria and really not a problem, there was no group of people on the planet, with the possible exception of the scientific community, better able to figure that out. Technical people usually have superb bullshit detectors. The mere fact that 99.9% of that community just shut up and got down to work, and hardly anyone said "this isn't a problem," should tell you a very great deal.

Your hypothesis is just not supportable.

I'm not even sure the end-user hysteria was overblown. If it weren't for the alarmists, Y2K repair wouldn't have gotten the resources it needed. We weren't that big a shop, and it still took us months to get everything 100% ready. Fortunately, because of the alarmists, we had all the resources and time we needed. Our network was young, so we didn't actually need very much, but upper management was standing poised with checkbooks out, and all other scheduling was lower priority. We might have been ready without the extra help, but I'm not sure of that.

Just FYI: My personal Y2K preparations consisted of buying a couple of gallons of water and making sure I had enough food in cans for a couple of days. I figured any disruption would be brief.
posted by Malor at 3:35 PM on August 30, 2006


Response by poster: What would be good enough for me? Actual faults from the large percentage of machines that were not upgraded. The list provided in the links given above is about what I would have suspected from having thousands (tens of thousands) of techs tinkering with code trying to solve a problem that wasn't there.
What else would be good enough? A good investigation into the billions spent.
What else would be good enough? Something that substantively distinguishes between the two hypotheses: nothing happened because it was fixed, versus nothing was going to happen. In many cases you can find such evidence. This isn't like a case where you have one road fixed and can then claim that fixing that road avoided an accident. That's reasonable. You won't have the accident as evidence, but you will have precedent as evidence: what previously happened on that road or similar roads. This is a case of an arcane language being used to describe a pothole, invisible to most of us, that was present in millions of roads worldwide. And I just can't see where the fixed roads did better than the non-fixed ones. Japan seemed to have the highest number of y2k bug incidents, and I would have thought it would have been among the more diligent. The countries that (officially) spent only in the hundreds of thousands (in government funds) seemed to do better (from the above link).
What else would be good enough? As a scientist, evidence. The above posters have described why there is no evidence - companies doing this privately, fear of lawsuits, success. I agree it is a bummer to succeed in averting disaster and then have it claimed that disaster was never going to happen - but success in prevention can equally be claimed when it never would have happened.
What people have described above is a good case of what evil lurks in the heart of ancient code. But it is not compelling evidence that the evil was a danger.
posted by dances_with_sneetches at 4:23 PM on August 30, 2006


As for whether nations that did not upgrade showed little to no differences in problems in comparison to those who did, that phenomenon is described in a variety of sources including the wikipedia entry. There exists a list of third world countries with system failures? I don't think they did upgrades and were probably more likely to have archaic software.

How many third-world countries do you know that have huge IBM mainframes lying around? How many third-world-based corporations had critical computing infrastructure that dated from the 1960's?

Malor is entirely right. You asked us whether your idea of how the dates worked in a computer was right or not. We told you it was wrong. You asked us whether the whole Y2K thing was analyzed in detail. We told you it was.

But even better than that, we didn't just analyze it, we tested it. Endlessly. We saw what would happen if we let the clocks cross over from 1999 to 2000 without fixing anything. Systems broke.

In my company's case, if we hadn't upgraded our systems, our internal network would have had serious issues. Our processing systems would have had extremely serious issues. Our billing systems wouldn't have worked. Our company would have been crippled. Period.

What happened to companies that didn't do upgrades? I have no idea, but I know of no company that didn't have massive budgets for testing and fixing Y2K issues. Can you name even one? (I'm not talking about the local mom and pop mart. I'm talking about medium to large corporations.)

Was it overblown by people who were claiming your can opener wouldn't work in the morning and that the subways would all run backwards? Sure. But those people are idiots. They may have gotten air time, but they were making patently stupid claims.

You're really married to this conspiracy theory you've put together, but you're just wrong. I'm not sure what "research" you did that indicated that it was all snake oil and monorails, but it wasn't very good research. You're just wrong about this.

Wrong. Wrong. Wrong.
posted by bshort at 5:41 PM on August 30, 2006 [1 favorite]


Ridiculous things like y2k compliance stickers attached to loudspeakers and power supplies certainly helped promote the "there was never anything to worry about" mindset. The vast bulk of y2k remediation work, though, was absolutely necessary. Yes, there were scamsters and fearmongers making good money for nothing. No, they were not in the majority.

Serious work on fixing y2k issues really took off around 1996, as I recall. If we hadn't put those four years of work in starting then, it would have needed to be done anyway after the global financial system went belly-up, which it undoubtedly would have done. Airplanes wouldn't have fallen from the sky. They wouldn't have been flying in the first place, because the ticketing systems would not have been working.

I'm actually less unhappy about 2038 than I was about y2k. Not totally Pollyanna-ish by any means, but less unhappy; mainly because time_t generally resolves to long int, and long int is generally 64 bits in a 64-bit environment.

ISTM that 32 years from now, 64-bit architectures are going to be either the norm or on the small side (we're already seeing 128- and 256-bit architectures in specialty areas like GPU's) and most places where time_t values are stored will have had plenty of time to get widened - especially in the light of the lessons of y2k and GPS.

XML is rapidly turning into the standard way to pass data around between apps and systems, and XML doesn't enforce fixed field widths, so there's that working for us. There's also going to be no value difference between a 32-bit time_t and its 64-bit replacement, which gets rid of an entire class of potential bugs.

On the other hand, there will still be 6502-grade processors in washing machines, toasters and possibly pacemakers. On the other other hand, those things typically don't care about the absolute date.

The COBOL apps that caused most of the y2k reworking were left over from the sixties. 2k minus sixties is 30-40 years. By the time 2038 arrives, most of the surviving legacy apps are likely to have been written around now or a little earlier, with y2k lessons firmly in mind. Yes, there will be a certain amount of fundamentally broken code, but I would expect less of it than was fixed in the leadup to 2000.

The overall lesson from y2k and related events is that time-handling code is one of the hardest things to get right, mainly because it usually looks simple at first glance. There are many hidden assumptions around it, and they all need to be examined very very carefully.
posted by flabdablet at 7:10 PM on August 30, 2006


The Y2K bug always seemed phony to me. The main reason is that it seemed to require more effort in the original programming to create the bug than it would be to not have the bug.

Not true.

At the time I worked with mortgage servicing operations. In the mid/late 90s, most big servicers still performed nightly batch processing in a mainframe environment.

Many of the day's systems were initially written at a time when physical memory and off-line storage were expensive to a degree that is almost incomprehensible today, and a lot was based on the 80-character limit inherent in the IBM punch card. Using two digits for the year left more room for other, equally important data. One of the day's major systems was written in 370 Assembler; not a trivial task to upgrade. Several others, including the most popular, were in COBOL. Most of the time the easiest way for them to send data to us was on 36-track or even 9-track tape (those spinning reels in every 1960s sci-fi computer movie.) It might sound unbelievable today, but this was all typical practice less than 10 years ago.

Servicers had to make damn sure all the potential problems were corrected before the rollover, or one of two things would happen: 1) they'd have to manually review hundreds of thousands of loan statements monthly or 2) they'd be flooded with calls in February 2000 from homeowners being assessed for late charges and interest based on last payment dates of January 1900. Seriously, any servicer in either situation would have been bankrupted and unable to continue operations. Worst case, if it happened to enough servicers, the secondary mortgage and MBS markets would have collapsed (the bulk of the money collected by mortgage servicers ultimately goes to investors in mortgage-backed securities like GNMAs and FNMAs.)

On top of that, since mortgages can live for 30 years or more a lot of the data servicers were working with was still based on that 80-column format. This meant upgrading not only calculation algorithms but the base data files and their structure if everything was going to work. In fact, my first experience with similar issues was in working with data from an old, small lender who held interest-only loans dating back to the 1890s -- this was in 1993 or so.

None of the original programmers ever thought their systems would be around long enough to cause problems, or they figured the problem would be fixed ahead of time. Of course that's exactly what ended up happening - unfortunately, though this wasn't a surprise to the original coders, it typically was to management.

So yeah, at least in my little part of the world it was a real issue, but to my knowledge it thankfully got dealt with in time.
posted by Opposite George at 9:46 PM on August 30, 2006
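A minimal sketch of how a last-payment date read back as January 1900 turns into a century of phantom delinquency on a February 2000 statement run (the variables and arithmetic are invented for illustration, not from any real servicing system):

```c
#include <stdio.h>

int main(void) {
    /* A last-payment year stored as "00" and read back with an assumed
       19xx century, hitting a statement run in February 2000. */
    int last_payment_year = 1900;  /* what the batch job believes */
    int statement_year    = 2000;

    int months_delinquent = (statement_year - last_payment_year) * 12 + 1;
    printf("months delinquent according to the batch job: %d\n",
           months_delinquent);
    return 0;
}
```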


Oh, and the reason the servicer with the 100-plus year-old loans never had problems was because they were so small they had one lady doing everything by hand. None of the major automated systems at the time could have dealt with those loans. Without upgrades, every loan on those major systems would have caused similar problems in 2000.
posted by Opposite George at 9:55 PM on August 30, 2006


I think in the end the interesting aspect of Y2K is the difference in reaction in America vs the rest of the world. Yes, there was a software problem. Yes, it was fixed. But in America, the problem got blown up into an Apocalyptic one. (Some posters here admit they stocked water, something that was totally unjustified.) How many Europeans stocked water? In America there was a widespread fear of the collapse of infrastructure, and here is the key point: this fear was not evidence-based. Which software subsystems exactly would fail and cause water or electricity supplies to be cut off? No one anywhere named the affected systems and the precise problems those systems would have. The same hysteria surrounded medical devices: rampant claims of pacemakers quitting or diagnostic machines going haywire. But in the end, these were all urban legends.
posted by storybored at 8:08 AM on September 1, 2006

