
T-SQL Tuesday Retrospective #009: Beach Time

[Image: network cables coming out the back of a switch]

(If you’d like to read my other T-SQL Tuesday Retrospective posts, click here.)

In August 2010, Jason Brimhall (blog | Twitter) invited us to discuss preparing for vacations:

“Write about what you have done to be able to get a break from the job. Beach time is usually vacation time, but is really anything that can create a break in the workplace. If you automated a process to lighten your load – tell us about that process. If you had to pull a 72-hour shift to ensure that your vacation plans would be unaltered by work – tell us about it. If you turn off the cell-phone and pager and ignore email for that vacation – tell us about it.”

Before the global pandemic left us at home for months at a time, my spouse and I used to enjoy adult-only cruises. You know, those large carbon generators in the ocean, only without any kids. It is one of my favourite things to do in the world, and combines the joy of travel with the convenience of unpacking only once.

Internet access is notoriously expensive at sea, and flaky because it relies on satellite connectivity. That means heavy latency, so until recently even a connected smart phone was a luxury. If I ever need an excuse to get away from work, a lack of Internet is it.

I run what we one-person-shops euphemistically call a “boutique agency.” While I don’t have many clients, the ones I have are worth keeping happy. This means if I’m going to be away for two weeks I need to trust that my (database) backups have (human) backups. In this case, I have a person I’ve worked with before who steps in to take care of my SQL Server customers. We have a reciprocal relationship where I’ve stepped in for them, and it works out well.

In the last three years I’ve also fallen back into software development (we legally can’t call it “engineering” in Canada), and I now have a person I’ve known for over two decades who is able to step in for me in case of an emergency.

Here’s the thing: in all the times I’ve been away from Internet access and my customers — getting in a healthy dose of sleep, food and alcohol — I can think of only two occasions where the backup was even necessary. Here’s an example of why I make sure it’s in place anyway.

Early in my career when I became an accidental network administrator shortly after a number of people had been laid off, I took a week off work for a well-deserved vacation. My significant other at the time and I drove from Johannesburg to Cape Town on the weekend of January 25, 2003. Some of you may know where this is going just from the date.

Given that smart phones and high-speed Internet were still a few years away, there was no possibility of my doing anything in case of an emergency. On the Friday before we were due to drive back, I discovered a puncture in my car tyre that needed a repair, and for some reason the repair could only be done the following Monday. I called the office to explain the situation, and the acting CEO was unhappy about this because the office had been offline for a week. No one knew how to reset the router (including me), and the service provider was unable to connect remotely.

I explained that I would be back as soon as I could, and was told in no uncertain terms that any additional days would be unpaid leave. I accepted this, and the following week, after driving for 14 hours with the worst sunburn on my back I’ve ever experienced (and almost falling asleep at the wheel), I made it into the office.

The network room was an old office with a server rack, lots of modems which no longer worked, and our trusty router. I took a careful look around to see if anything was out of the ordinary. I had already put a qmail server in front of the Exchange Server 5.5 box to eliminate a major spam problem we’d previously had, so I didn’t suspect email. Besides, the general opinion was that the line was down. When I looked at the router, I noticed that the TX and RX lights were solidly lit. The amount of traffic was clearly saturating the line. It was a denial of service, but from what? I traced the router to the switch (which may have been a hub), and looked at which other light was brightest. Following the dusty and untidy cabling, I traced the CAT-5 Ethernet cable back to a developer box running SQL Server 2000.

As soon as I saw what it was, I hit the reset button on the machine. I didn’t even bother switching over the KVM to log in and shut it down gracefully, because I knew it was SQL Slammer. When Microsoft released a patch for the underlying vulnerability six months prior, I had been instructed by the previous network administrator to patch our production server at our colocation service provider, back in the days when you had to drive to the data centre, sign in, and be escorted to your rack and shelf with a CD-ROM in hand. At the time, I assumed he would patch the development server locally, but then the layoffs happened and it slipped through the cracks.

SQL Server is now one of the most secure relational database management systems in the world, but even so it still needs to be patched periodically. I encourage you to patch your servers within a month of any security update, but do make sure you are up to date on Cumulative Updates first. SQL Server is also one of the most complex products in the world, and things do go wrong. Take a VM snapshot, keep good database backups, and have a tested disaster recovery plan.
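
If you want a quick sanity check before deciding whether a server is due for patching, something like the following works. Treat it as a sketch: the database name and backup path are placeholders for your own environment, and SERVERPROPERTY('ProductUpdateLevel') only reports a CU level on builds recent enough to expose it.

    -- Report the current build, patch level, and CU level (where available)
    SELECT  SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- e.g. 15.0.x for SQL Server 2019
            SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM or SPn
            SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel, -- CUnn, or NULL on older builds
            SERVERPROPERTY('Edition')            AS Edition;

    -- Take a full backup with checksums, then verify it
    -- ([YourDatabase] and the backup path are placeholders)
    BACKUP DATABASE [YourDatabase]
    TO DISK = N'D:\Backups\YourDatabase_Full.bak'
    WITH CHECKSUM, COMPRESSION, STATS = 10;

    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\YourDatabase_Full.bak'
    WITH CHECKSUM;

A verified backup is not the same as a tested restore, but it catches the most embarrassing failures early.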

Share your Slammer story in the comments.

Photo by Thomas Jensen on Unsplash.