The Y2K Fable

An interesting tidbit broke in the news to start 2022. It appears that (for technical reasons I won’t go into here) an oddball date-related bug scattered some minor havoc throughout the Microsoft world. Happy New Year, 2022!

Technical folks immediately compared this to Y2K, even going so far as to name the bug “Y2K22,” after what is arguably the most famous computer bug of all time.

Some may be unfamiliar with Y2K. (I’m not - I worked on one of the Y2K teams.) To remind us all, here I humbly present:

THE Y2K FABLE

Once upon a time(1), there were far fewer computing devices in the world. Many of these devices were very large, the size of small cars. They were kept in special rooms. The rooms were called “computer rooms.” The rooms were equipped with lots of electricity, a special raised floor, lots of cooling equipment (because when these machines ran programs, they got very very hot), and fire protection (in case something overheated or shorted out).

These computers devoured very large programs, written in old-fashioned programming languages, like COBOL (COmmon Business Oriented Language), FORTRAN (FORmula TRANslation), BASIC (Beginner’s All-purpose Symbolic Instruction Code), and PL/I (Programming Language / I).

It may seem strange to people today, but in those distant times(2), when major programs (payroll, inventory control, sales reporting, etc.) dominated the scene, information was stored in databases. These databases took up what was then huge amounts of space. (Today, you could probably put one on half an external solid state drive.)

Big companies (manufacturers, retailers and banks, for example) paid other big companies (computer manufacturers and database providers, for example) lots of money to store and process their information. And database space was very, very important and very expensive, both in money and in the time it took to access information. Databases contained many, many rows of information, or records. And common information, like dates, could be stored many times in each row.

The space that dates occupied could therefore grow very quickly.

So, to save space, dates were recorded much as you might record them today — in the US, in MM/DD/YY (month-day-year) format; in Europe, in DD/MM/YY (day-month-year) format. Only two digits were used for the year. After all, everyone knew what century we were in, and would be in, for a very long time.

This meant that, in the US, December 31, 1999 would be 12/31/99 and in Europe, 31/12/99.

But what would January 1, 2000, look like?

01/01/00.

1900? Or 2000?

Uh-oh, people said. If we don’t know the date, what will the system do? Lose a hundred years in one second? Mess up timekeeping systems? Miscalculate certain people’s birth dates?
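
To see the trouble in miniature, here is a sketch in modern Python (the code of the era would have been COBOL or the like; the routine and its numbers are illustrative, not drawn from any real system):

    from datetime import date

    def age_from_record(yy: int) -> int:
        """Compute an age from a two-digit birth year stored in a record.

        The hidden assumption: every two-digit year belongs to the 1900s.
        """
        birth_year = 1900 + yy
        return date.today().year - birth_year

    print(age_from_record(45))  # born 1945: a plausible 77 in 2022
    print(age_from_record(0))   # born 2000: treated as 1900, off by a century

One wrong century in one field, multiplied by millions of records and thousands of programs.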

Where exactly was all this information stored? How widespread was the problem?

As it turned out, two-digit dates could be stored anywhere. For one reason or another, programmers had defined them that way, and used them that way, for years and years, in program after program.

Still, some argued, if a two-digit birth year had a value of, for example, “45,” “everyone would know” that “45” meant 1945 (a birth date logically occurring in the past), not 2045 (a birth date illogically occurring in the future).
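
That heuristic even had a name in the remediation world: date windowing. Pick a pivot year, and assume two-digit years on one side of it belong to the 2000s and the rest to the 1900s. A minimal sketch of the idea, again in modern Python and with a made-up pivot:

    PIVOT = 30  # hypothetical cutoff: 00-29 read as 2000s, 30-99 as 1900s

    def window_year(yy: int, pivot: int = PIVOT) -> int:
        """Expand a two-digit year to four digits using a fixed pivot."""
        if not 0 <= yy <= 99:
            raise ValueError("expected a two-digit year")
        return 2000 + yy if yy < pivot else 1900 + yy

    print(window_year(45))  # 1945 - the birth date sensibly in the past
    print(window_year(5))   # 2005
    # The window is itself an assumption: with this pivot, a record
    # written in 2030 would be read back as 1930.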

The problem was, however, that common sense isn’t common in computer programs. If a program encounters something it doesn’t understand(3), it can often just stop working. Worse, it can continue working under the wrong assumptions.

So people didn’t know. They also weren’t sure what they didn’t know versus what they did.

This caused many to fear the worst. The “Y2K bug,” as it rapidly became known, could be anywhere. Typical large organizations would have hundreds of millions of lines of computer code, and inside those hundreds of millions of lines of instructions would be references to two-digit dates, decisions made on these references, and on and on.

The next question was, what needed to be done?

Many said, form oversight teams. Develop processes to spot, fix, and test for the problem. In large organizations, this meant looking at everywhere computation might be done, from regular programs for payroll and pension to manufacturing systems, end-user computing, even building control systems such as elevator safety(4).

Others said, now hold on. The two-digit date might be all over the place, but in how many cases did it actually pose a problem? What was the likelihood of an issue? Ten percent? Twenty? One? And if it occurred, what would the impact really be? Why spend time and money chasing something that might require only a modest amount of “fixing in the field”? Why spend good dollars chasing this, dollars (or yen or lira or pounds, etc.) that could be spent on new development and competitive advantage?

The Y2K drumbeat grew loud. Major companies knew that, if they simply said they would accept the risk without trying to fix it, they could be held liable to their shareholders. Best to err on the side of caution.

Most (but, significantly, not all) governments followed suit.

Whole armies of consultants sprang up. After all, what is better suited to a fog-bound, emergent crisis than a group of experts with processes and capacity to help fix the problem — for a price, of course?

So the crisis evolved — like a slow-moving train approaching either a platform full of bewildered passengers standing on the tracks, or a platform mostly empty.

Many lacked conviction that this was a problem, while many others were full of passionate intensity(5).

Planes would fall from the sky. No, they wouldn’t. Factories would shut down. No, we’d figure it out. And on and on.

So the teams were put in place, processes invoked, billions of lines of code shipped to low-cost “rebuild factories” overseas, and so on, in a mad scramble to complete the work before the simple matter of a clock advancing an important second came to pass.

The countdown. Teams in place, all around the world. Reams of reports indicating work performed, potential risk areas. People on standby to handle whatever might occur.

In the end, while millions celebrated the coming of the new millennium, and others wondered if the Times Square ball would drop or would wink out due to a computer malfunction — in the end, what happened was — not much.

In about two minutes, pundits(6) from major news outlets began questioning whether the problem had been as serious as advertised.

Those who had done the work argued that, had it not been for the endless thousands of hours spent to find, fix, test, and implement, things would have been far worse.

Those who hadn’t done the work pointed to those governments who’d “sat the Y2K problem out,” where the problems generated were few and fixable(7).

There were some burps and bumps, but for the most part, a collective sigh (exhaustion, relief, scorn, what have you) — and our world, held hostage only these past few centuries by hours, minutes, seconds — our world moved on.

By February, the teams were disbanded. Time to do other things.

Who was right?

What did we learn?

Well, if you want to know what those of us who worked on Y2K learned, you can read this footnote(8). Because we learned a lot.

If you ask, which side was right, I can tell you that, twenty-plus years on, no one really knows. And today, no one really cares.

But that’s not the point.

I told this Y2K fable to tell you another one.

A COVID fable.

Today, we find a world awash in information about COVID. No one knows what bits of the information are correct. Demagogues of all stripes will tell you that “everyone knows” what’s right to do. But “everyone knows” is an assertion of truth that, once uttered, catches the unaware unprepared.

In truth, there are many, many people just as dedicated as those Y2K teams were, working their fool body parts off to fight what is, in reality, a phantom, a great unseen and unknown that, despite all our science, has the humbling effect of catching some of us and killing us outright, while leaving whole swathes of us moderately to mildly affected — or even not affected at all.

Should that work be done? Yes. When you’re faced with risk, with unknowns, with potentially frightening outcomes, you do everything you can to make the unknown known. So that you can act. So that you can reduce risk and overcome your fears if possible — and if not, learn to live with the risk. But until you quantify that risk as best you can, risk acceptance is itself a gamble. You want to hedge your bets the best you know how.

With COVID, we may not know — we may never know — what the risk truly has been. This makes COVID not that much different from any other high-risk, uncertain situation humans have faced. Ukraine comes to mind.

In the midst of it, when uncertainty is all we have, we do what we often have done. We move into camps and reinforce our beliefs(9) with opinions consistent with our own.

But no one really knows all that’s going on. Therefore, choices.

Mask? Don’t mask? Which mask?

Vaccinate? Take your chances? Boosters? Preventive medicines and lifestyles? Nutrition?

Social distance? Isolate? Gather?

Twenty years on, we might know the answers. I doubt it(10). We certainly won’t know the true toll, in lives lost, lives saved, and lives that got stuck somewhere in between. We won’t know which would truly have been the best way forward. Not really — if we’re honest. Still, we’ll have strong opinions about which choices were the best(11).

Our strong opinions can be divisive - if we permit them to be. We can choose to listen to opposing points of view, or we can choose to despise those who articulate them. Those are choices we can make, even in a crisis like COVID.

In the end, all we’ll really know is the road we traveled, the choices we made along the way, and the new place in which we find ourselves — twenty years on.

That’s a lesson learned from that quaint Y2K challenge, gone these twenty years and oh, so trivial by comparison. We’ll learn — we’ve been learning - lessons from COVID. About science, about religion, information, misinformation, opinions, reactions, life and death — but mostly, about ourselves.

Given all this, what should we do today, when uncertainty abounds? Perhaps one thing. We owe our children, watching us today, honest conversation - that we’re doing the best we can, that things can be scary, that we’ll maintain the best course we know how - for each other, for our families, and for our communities.

In other words, as with any crisis, we owe our children, watching us today, the best behavior we can muster.

======================================================

Footnotes
(1) The mid-1960s is good enough, even if the fable never names a precise date. Mainframe computing came into its own somewhere in this period, and by the time I really got going in my IT career, major computer programs were already well over ten or even twenty years old. Incidentally, “once upon a time” is a phrase really hard to trace - its origins go back into oral history, and I suspect that “time” replaced “day” when timekeeping - dividing “day” into first “day” and “night” and then yielding to hours tolled from bell towers - replaced measuring things by daylight or moonlight. Daniel Boorstin has a lot to offer here in “The Discoverers” (Random House, 1983, pp. 25-53: “From Sun Time to Clock Time”).

(2) Gee whiz, another idiom (pointed to by a “gee whiz” idiom). I’m not looking this one up. You do it.

(3) OK. Computers don’t really “understand” anything (or didn’t back then - I’ll leave today’s permutations of “understanding” and “wisdom” to the AI zealots (and even then, better be careful)). After all, programs are incredibly long sequences of ones and zeroes, artfully arranged by people versed in “languages” that translate something at least somewhat recognizable, such as {For x = 1 to 365}, down through low-level groupings into those streams. Computers “know” what to do with this (in this case, repeat the next step(s) 365 times, and there had better be an ending statement, like “{Next x}” — or just watch what happens if you forget to put that in). They “know” the same way that salt “knows” to dissolve in water. Or viruses “know” how to mutate and so find better ways of infecting new hosts. (This is a foreshadowing footnote, by the way — you’ll see.)

(4) There was the infamous example of the elevator that would descend to the lowest floor and shut itself off. Whether this was even possible didn’t seem to be debated. When you’re operating on fear and speed - incidentally, the single most prevalent combination in human history, and only exacerbated by business “competitive advantage” (hence, the metaverse bandwagon) - you only seem to have time to absorb, not verify, fearful information.

(5) Those of you who know your Yeats might recognize this, paraphrased from “The Second Coming” — ‘The best lack all conviction, while the worst / Are full of passionate intensity.’

(6) I really can’t resist this. There are pundits, and then there are the “punditzes” - talking airheads with credentials, spewing content-free communication (got that one from Dilbert - thanks, Scott), and, sadly, wielding influence that extends far beyond their intelligence or critical reasoning skills. I am aware, however, that one person’s punditz is another’s genius.

(7) OK, here’s the wiki entry. See Opposing view and Counterpoint towards the bottom. Or read the whole thing, and be happy that the Linux folks have already addressed the 2038 bug - which would, interestingly for this essay, occur about 20 years on from COVID.
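
For the curious, the 2038 limit is easy to see for yourself: a signed 32-bit Unix timestamp counts seconds from January 1, 1970, and runs out at 2^31 - 1 seconds. A quick check, sketched in Python purely for illustration:

    from datetime import datetime, timezone

    # The largest second count a signed 32-bit time_t can hold.
    last_tick = 2**31 - 1
    print(datetime.fromtimestamp(last_tick, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 - one second later, a 32-bit counter wraps.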

(8) What we learned:

  • We had tens of millions of lines of computer code, some decades old - a sum total of programming practices, shortcuts, get-the-job-out-the-door decisions which we figured may or may not come home to roost - but we didn't know which was which unless we looked

  • We learned that other companies and government agencies could have billions of lines of code

  • We learned to build an asset management inventory from scratch, recording all programs, end-user computing systems, dealer/supplier/affiliate systems, building control systems, plant-floor and product development systems

  • We built a matrix of this, organized against the necessary process steps, which is still the way to subdivide and understand enterprise risk - ask anyone who's done a Business Impact Assessment (BIA) in business continuity

  • We learned the plant floor folks hated us, told us to get the hell out and take our stupid asset database with us - before building an asset management database of their own a few months later

  • We learned that, no matter how much we told people this was a business risk problem, it came back as an IT problem again and again

  • We learned that people would use the opportunity to push home-grown systems into our hands, classically transferring the risk

  • We learned the importance of enterprise-wide executive sponsorship, and the value of a steering team structure across the business activities

  • We learned there were consultants, snake-oil salespeople, people (to use the words of a character in one of my novels, “Santa, CEO”) "with one hand out to shake yours and the other hand reaching around your ass to empty your wallet"

  • We learned what it was like to chase ghosts, build code factories, use talent from overseas in ways we hadn’t before

  • And we learned that on January 1 or 2 or thereabouts, a whole lot of people didn’t give a damn about it — if, in fact, they ever did

(9) See “Saipan” to discover how tragic this can be — or how liberating.

(10) The BBC did a dark but beautiful, tragic, almost elegiac movie on the 1918 pandemic, called “The Forgotten Fallen.” It’s worth finding and watching, if only because it describes everything we’ve been going through with COVID. It’s much less dramatic, and much more realistic, than clever hi-tech stories today. Even now, a hundred years on, no one can say with complete confidence what truly went on, which choices were best, and exactly when the pandemic washed into the world population and effectively (but not completely) disappeared. (Remnants still circulate today.) What’s interesting to me is that, in many histories I’ve read, there are descriptions of the Great War, the Great Depression, World War II - but not nearly as much mention made of the Great Illness that swept the world just before the Roaring Twenties did everything it could to erase depression from the common mind.

(11) Reviewers of this work had strong opinions - particularly in equating Y2K, which was, after all, largely financial, with COVID, where decisions risked lives. A great point. I didn’t put the more painful questions in the body of the essay, mostly because I hadn’t yet come to grips with them: twenty years on, will we have discovered side effects to our vaccination programs; lives saved by vaccination pitted against livelihoods lost through lockdown; postponement of critical issues, such as global warming, the opioid crisis, school shootings; the great resignation creating gaps in labor and forcing wage concessions; supply chain issues; inflation (itself often a harbinger of deep instability)? Where will it all have led? Dare we take two future scales and on one, place the actions we took, while on the other, place actions (or inactions) others took, and then search for imbalance, whether in our favor or theirs? Perhaps all we’ll be able to conclude (in, say, 2050) is that COVID was the Great Destabilizer of the early 21st century — and then we can opine on whether, evil as it was, it had been either the shakeup we needed, or the opportunity to create yet more inequality, more wealth transfer to the top, and other ills. And so it will go, an endless double helix of argument and counter-argument — which, in the end, may be what really sets us apart from the rest of earth’s inhabitants.
