Don’t Forget About DR for Your DR

Here’s a scenario:

Let’s say you’re a small- or medium-sized company with an on-premises data center in your office/building or in a “regular” co-lo nearby in the same metro area. You’ve got a mission-critical online presence, so in order to handle either a large-scale disaster for your geographic area or one just in your server room, you’ve written, implemented, and tested a disaster-recovery plan. Another co-lo a couple of states over is set up to step in if needed, and the failover process can even be completed by non-technical resources in a couple of hours.

Need a Plan C

This is a fairly sound plan. However, what’s Step 2 after Something Bad™ happens to the primary data center and everything fails over to the DR site? What if Something Bad™ is long-term? You’re back to square one, with a single data center. (And where do you put the quorum file share for your AG now?)

Or, another situation: What if something happens to your DR site? Then what?

Almost Been There, Done That

One of our clients–who has a really good DR plan similar to the one described above–had a brush with this scenario earlier in the year. Their DR data center is in the Houston area, and in the aftermath of Hurricane Harvey, there were some concerns about the status of the DC. The DC itself was fine, but key support personnel would not have been able to get to the site for a number of days if there were such a need.

This situation did a good job of spurring conversations about what to do in such a scenario and what Plan C might look like.

Now What?

The point of this post is mostly to get you thinking about this scenario. Getting DR in place at all can be enough of a battle (I know), but planning for what comes next, after a disaster has actually struck, is another important step.

What this plan looks like will largely depend on what the “first-stage” DR plan looks like. Not everyone can afford an additional site, especially a smaller company. And, let’s be honest: we could sit here all day playing what-if with burning data centers, but at some point, the return on that investment becomes very questionable.

Although this looks/smells like a shameless plug for cloud/Azure, the public cloud is an excellent option to consider here. Even if your company is 100% on-premises with a classic hardware/virtualization platform, keeping an up-to-date copy of critical systems’ backups available in the cloud is relatively inexpensive. This “cold DR” process is a very easy-to-implement step to safeguard against a multi-phase or long-term disaster. In the event that these backups are needed, there’s the option of spinning up a group of VMs in the cloud to restore to. At the very least, this cold backup solution will be more accessible than your current offsite backups if new on-prem servers are stood up somewhere to get the lights back on.
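
As a rough sketch of what that could look like using SQL Server’s native backup-to-URL support (the SAS-credential flavor shown here requires SQL Server 2016 or later; the storage account, container, and database names are all made up):

-- One-time setup: a credential named for the container URL, using a
-- Shared Access Signature generated for that container (hypothetical values)
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<your-sas-token-here>';

-- Then the backup itself goes straight to blob storage
BACKUP DATABASE CriticalDB
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/CriticalDB.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;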

“The Tuesday Night Fire Code Violation”

It was July 19, 2005. At least, I’m pretty sure it was.

Based on IndyPASS’s meeting history, that second meeting way down at the bottom (use your keyboard’s End key; that’s what it’s there for) was basically a “here’s what’s new/awesome in SQL Server 2005” presentation. I’ve long since lost most of my email from that time, but that meeting makes sense in the timeline of 2005’s release.

During the dark, dark days of 2005, just about everyone was desperate for an upgrade to SQL 2000. I was, and I hadn’t even been here that long. The fledgling Indianapolis PASS chapter met in a good-sized conference room on the ground floor of a Duke-owned office building off Meridian St (“twelve o’clock on the I-465 dial”) on the north side of town. That night, there were probably half-again as many people in that room as it could comfortably hold. People standing, sitting on the floor, you name it. Tom Pizzato, the speaker, was introduced; he walked up to the podium and the first thing he said was, “Welcome to the Tuesday night fire code violation.” That is still the best one-liner to open a technical presentation I’ve ever seen, and ever since, it has been cemented to SQL Server 2005 itself in my brain.

That was a long time ago–it’ll be eleven years here in a couple months. Eleven years is an appreciable percentage of an eternity in the tech world. As a result, earlier this week, Extended Support for SQL 2005 ended. This means that if you are still running it anywhere, you will get no help from Microsoft should something go wrong. Perhaps more importantly, there will be no more security patches made available for it. Don’t expect a replay of what Microsoft did for XP if something big happens.
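
If you’re not sure exactly what’s running on a given instance, SERVERPROPERTY will tell you (SQL Server 2005 reports a ProductVersion starting with 9.00):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- 9.00.xxxx = SQL Server 2005
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- RTM, SP4, etc.
       SERVERPROPERTY('Edition')        AS Edition;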

This is a pretty big deal. If you have any kind of problem that you can’t fix, and you call Microsoft Support about it, you won’t get any help for your in-place system. You will have to upgrade to a supported version before you’ll be able to get any assistance, and the middle of a problem bad enough to warrant calling PSS is probably not the time you want to be doing a Cowboy Upgrade™ of your production database system.

I understand that there are plenty of industries, and even some specific companies, that are either forced to or elect to continue running out-of-support RDBMSes on their mission-critical systems. I supported SQL 2000 for far longer than I would like to admit, and it was a risky proposition. After I transitioned out of that role, there was a restoration problem (fortunately on a non-production system) that it sure would have been nice to be able to call Microsoft about, but that wasn’t an option.

Don’t put yourself in that situation. There are plenty of points that can be made to convince the powers that be to upgrade. The fact that any new security vulnerability will not be addressed/patched should be a pretty good one for most companies. If you have an in-house network security staff, loop them in on the situation; I bet they will be happy to help you make your case.

One final note: If you are still running 2005 and are looking to upgrade, don’t just hop up to 2008 or 2012–go all the way to 2014 (or, once it goes Gold, 2016). SQL Server 2008 and 2008 R2 are scheduled to go off Extended Support on July 9, 2019. Three years seems like a long way off now, but that’ll sneak up on you…just like April 12, 2016 might have.

T-SQL Tuesday #28: Jack of All Trades Crew, Checking In

T-SQL Tuesday—always a good option for helping to get back on the blogging horse.

T-SQL Tuesday #28

This month is hosted by Argenis Fernandez (blog | @DBArgenis), a SQL Server MCM & a #SQLFamily member that I have yet to meet. His topic of choice is “Jack of All Trades, Master of None?”, which is right up my alley, because for pretty much my entire IT career, that phrase has described me. It still does, right this second, but I’m trying to get over that. More on that coming.

Because I should be able to use them to frame out a good story (and because I’m cheap), I’m going to work with the list of questions Argenis has in his invitation post.

Are you specialized? On something? Or anything at all?

Am I anything at all? No, not really, thanks for asking! 😀

Anyway… it’s not really safe or fair to answer whether or not I’m specialized with anything but a firm “no.” I’ve been this way from day one. I don’t know that it’s been a conscious decision to get to this point more than it just happened as a result of being driven by a desire to know how everything works. Sometimes that leads to depth in weird places, which can come in handy while watching Jeopardy! on TV.

My non-specialization situation at the moment includes the capabilities of a decent SQL Server DBA, what I’ll call “serviceable” data-modeling skills, and the ability to still be a sysadmin if push came to shove, all in a day job in which I have become the go-to ETL Developer. It’s a little weird, I admit. But all of those other things help with the current focus. The ETL job is slow, you say? Well, is the server on the floor? Is the latency over the WAN link 700 ms per round trip? All of my other skills help, and that is what I think is the best part of being a jack-of-all-trades: it’s possible to know just enough to answer a lot of your own questions!

I’ve said it before, but another thing I like about knowing a little bit about a lot is that you can make friends/commiserate with almost everyone in the IT department.

Are you the SQL Guy at work? Or the one who does everything?

Due to the size of the company that I work for, there are very few “Guys” at work—everyone has a specific job (or sometimes jobs) that they do. Basically…near-insect-grade specialization. There’s not room for a jack-of-all-trades at larger organizations, in my experience, with the possible exception of smaller, autonomous groups.

Having spent time in one of those smaller, autonomous groups within a larger organization, I was a little bit of the guy who does everything. My whole team was, actually. We were “Windows System Admins”, who ran just about everything except Exchange and the MSFT monitoring platform du jour, plus Citrix to boot. Although each of us had our strong points, pretty much all of us could get through whatever needed to be done and have things work when we were done. I think that just goes along with being a “sysadmin”—being a jack-of-all-trades is almost a necessity. Need to know hardware? Check. Security theory AND implementation? Yep. IIS (Apache as necessary)? Probably. Networking? You betcha. SQL was just one of the things that I did while there, although I did do a lot of it.

Do you code? And configure wireless routers at work also?

Hell no. I mean… not if I can help it. See, when I started in IT for real, I knew one whole programming language: Visual Basic 6. Two classes in school on it, and that was it. I wrote a little print queue viewer/management app while a student (hey, it was deployed on 2000+ machines!), but no real experience. To this day, VB6 is the only real (“real”) programming language I know. Not having a strong coding background does cause some problems occasionally, especially when talking to Developers who are used to DBAs coming up through those ranks. I definitely don’t know much about software engineering theory, and that’s where it shows up the most.

As part of the aforementioned sysadmin gig, I wrote a command-line-only VB app as an automated interface between a couple of systems, but I’m not exactly proud of that moment, for a number of reasons. The one that applies here is one of the downfalls of being a jack-of-all-trades: it’s easy to cowboy up and do quick-and-dirty things off to the side, because you can. Perhaps even more dangerous: because no one else can. What happens as soon as you’re done? Well, if you’re not careful, it winds up in Production, and then it becomes a support nightmare; if not for you, it will be for a coworker or the next guy. Either of whom may someday hate you when their phone rings at 0300 because the wrong piece of duct tape fell off your masterpiece. I think being a jack-of-all-trades can be a good thing, as long as one of those trades is holding onto whatever processes and standards are in place…and if there aren’t any of those, hopefully one of said trades is coming up with some good organizational processes and standards!

As for the wireless router configuration bit—I try to keep that at home. Pretty sure the network guys wouldn’t like me messing around with those things. Just because I [used to] know Cisco IOS doesn’t mean I should use it. That brings us to another good specific skill that a jack-of-all-trades should have: knowing when to sit quietly. This goes for both wireless routers and writing anything in VB6 that has a prayer of ever seeing real, actual production use.

If you had to pick one thing to specialize on, what would it be?

Yeah, about that… All the above said, I actually am going to try to specialize on something. Of course, it isn’t enough to say I’m going to specialize on SQL Server. There’s too much in the product now. I’m going back to the thing that got me truly interested in the prospects of becoming a Data Pro in the first place: Business Intelligence. Unfortunately, I don’t think it’s safe to make that a goal, either. Just the BI side of the SQL Server platform is becoming too broad and too feature-rich to come to grips with. I’m going to have to be content with possibly not knowing anything at all about Reporting Services to focus on what I really want to do: Analysis Services. I actually want to be able to do most of the architecture work surrounding big BI projects, from start to finish (except for SSRS!), but I’m afraid that even just SSAS, including all of its new related technologies, could turn out to be too much.

That, though, is a journey that I hope we can all share in. Because I’m nice like that.

Other Thoughts

Being mechanically wired more than anything, it’s not quite as easy for me to tear down a piece of T-SQL as it is, say, the battery-operated toys I used to take apart…or a carburetor. But thanks to the Internet, it’s easy for me to read about and learn from someone who is more adept at doing that sort of thing. This shows two more helpful skills for a jack-of-all-trades: being able to read and learn is a really important one, and being able (and willing) to share back out is another good one. You never know what kind of DBA trying to configure Exchange you’re going to help out.

T-SQL Tuesday #15: Automation in SQL Server

Automation: every lazy DBA’s best friend; in some situations, a ticket to sanity.

T-SQL Tuesday #15: Automation is the way to a DBA's heart

This month’s T-SQL Tuesday is brought to you by Pat Wright (blog | @SqlAsylum). The 15th topic for the monthly blog party is, as has been mentioned, Automation in SQL Server. I’m pretty excited to read this month’s posts to see what kind of crazy things everyone does. Most of these posts are probably going to be big on example scripts and code samples, as one would expect for such a topic. This one–for better or worse–won’t.

I have to admit that I haven’t done a lot of from-scratch automation in my day. Lots of things on the list at work right now, but implementation is still pending. As a result, I was afraid that I wouldn’t have anything to talk about this month, but I thought of a goofy direction that I can take this in.

Like many of us who do it for real now, I started my path to DBA-ness (uh… that’s unfortunate) as an Accidental DBA. Since I didn’t know any better at the time, I did a lot (OK, pretty much all) of administrative DBA tasks with the UI. Need to back up a database? Right-click | Tasks | Backup! Want to create an index? Fire up DTA! Use the GUI to pick the columns. Need to do that for more than one DB? Guess you’re going to be there for a while.

This technique obviously gets the job done (for the most part), but there is a lot of room for improvement.

“Automation” doesn’t have to be fancy

If you’re using the GUI for a lot of tasks, there’s an easy & cheap way to “automate” a lot of what you’re doing. Simply: Script stuff out. One doesn’t need to be a master of T-SQL syntax to start doing this, either. With SQL 2005 and above, making the transition from GUI to scripted tasks is pretty easy.

The Backup Database dialog's "Script" menu

Just about every dialog box in SSMS has a “Script” button at the top. This control will script out whatever changes have been made in the dialog box. For example, if you bring up the Backup Database dialog, fill out the options & destination file location as desired, and then use the Script button to output that to a new Query window, you will wind up with a complete, functional BACKUP DATABASE command with all of the same settings that were selected in the GUI window. Mash F5 on that puppy and you’ll have your backup, just like you wanted it.
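
The output looks something like this (the database name and file path here are hypothetical; yours will reflect whatever you chose in the dialog):

BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB.bak'
WITH NOFORMAT, NOINIT,
     NAME = N'SalesDB-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO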

How does this classify as automation?

Spirit of the law, folks, spirit of the law 😉

Alright, I admit I might be stretching it a little bit here. I also know that just because two techniques solve the same set of problems doesn’t mean they can be classified the same way.

That said, consider some of the reasons that you automate big tasks:

  • Ease of consistent repeatability
  • Removal of the human element
  • Speed
  • Autonomy if you’re out of the office and someone is filling in

These same things make running T-SQL scripts instead of using the GUI for tasks a better idea:

  • As long as you don’t change the script before you re-run it, the same thing will happen repeatedly (unless, of course, the script does something non-re-runnable, like adding a particular column to a table a second time; see the sketch after this list). This is especially important when doing things such as migrating a new table through Dev, Test, Stage, and Production over the lifecycle of a project.
  • Setting options in a GUI window is prone to mis-clicks or flat-out forgetting to change a setting from the default.
  • The script is ready to go—running the action is as fast as opening the script file, checking that it’s the one you expect, and mashing F5. This makes implementing the change a fast process, instead of having to click a bunch of radio buttons/checkboxes/what have you and then verify all of the settings before hitting OK.
  • If you’re out of the office but something still needs to be deployed, it’s easy for the fill-in DBA (the boss?) to grab the scripts that have been prepared and run them. This is easier than walking someone through a list of checkboxes on a UI screen, or hoping they remember every setting correctly if the right values haven’t been written down.
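
On that re-runnability note: an existence check makes a schema-change script safe to run more than once. A minimal sketch, with hypothetical table and column names:

-- Re-runnable: only adds the column if it isn't already there
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Orders')
      AND name = N'ShipRegion'
)
BEGIN
    ALTER TABLE dbo.Orders ADD ShipRegion VARCHAR(30) NULL;
END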

Considering how I grew into a DBA, making this leap from pointing and clicking for just about everything to typing out ALTER TABLE instead took some work. In the end, scripting everything is better in pretty much every conceivable way, even if it is hard at first.

If you’re a little GUI-heavy still and like the idea of automating the work that you do, letting go of the UI and embracing the big, blank T-SQL canvas is Step 1. The effort will be worth it, and you’ll feel like you’ve really automated tasks.

“Well I made a mistake today”

Getting mails from a Developer that start like this almost always leads to awesome. This turned out to be one of the times when it wasn’t as awesome as it could have been, but did give me the opportunity to spread some knowledge (which I don’t get to do very often, because, well, I’m not that smart).

This situation was the old “oops, WHERE clauses are a good idea with DELETE statements.” The good news is that this was in Development, so it wasn’t a giant fire. Although I didn’t see the message right when it came in, I did see it in time to get to it before that night’s backup ran (we just keep one backup file in Dev and overwrite it every evening). I probably could have pulled from Production or Test instead of restoring a 140+ GB DB for a 299-row table, but we’ve got the space, more IO than God, and it was a Friday night where nothing else out of the ordinary was going on. Table restored, life goes on.
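
For the curious, the recovery itself was the standard side-by-side restore pattern: restore the backup under a different name, then copy the lost rows back. A rough sketch, with made-up database, table, and file names:

-- Restore the nightly backup alongside the original, under a new name
RESTORE DATABASE DevDB_Recover
FROM DISK = N'D:\Backup\DevDB.bak'
WITH MOVE N'DevDB'     TO N'D:\Data\DevDB_Recover.mdf',
     MOVE N'DevDB_log' TO N'L:\Log\DevDB_Recover.ldf',
     STATS = 10;

-- Copy the deleted rows back into the original table
INSERT INTO DevDB.dbo.VictimTable (Col1, Col2)
SELECT Col1, Col2
FROM DevDB_Recover.dbo.VictimTable;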

Actually, there were a couple points that I was able to make with this situation.

First: Tell your DBA when things go bad!

In our situation, with the backup file getting overwritten every night, if a Developer makes a mistake like this, they have to let us know before 8:00 the day of in order for us to be able to do anything about it. The guys/gals have to first realize something bad happened, and then get to us right away in order to recover. If they sit on it until the next day, it is too late.

Second: BEGIN TRAN is your best friend.

When running DML, manually start and end transactions. Sure, SQL Server’s default autocommit behavior means you normally don’t have to think about transactions at all, but that convenience can become your worst enemy very easily. All it takes is either missed highlighting before mashing F5 or an unfortunately-placed closing paren.

BEGIN TRAN? (Skip this paragraph if you already know.) By default, SQL Server runs in autocommit mode. This means that even though you don’t type it out, each statement you run is wrapped in its own transaction and committed automatically the moment it completes. By manually starting a transaction with BEGIN TRAN in front of your UPDATE, DELETE, or whatever, you retain control of this instead of letting the engine do it for you. This means you can run your statement(s), check the results, and then COMMIT or ROLLBACK yourself. In short, this is manual transaction control.

This one takes some diligence, because it’s easy to be complacent. I’m doing a simple little UPDATE statement, I didn’t make any mistakes, everything will be fine. Of course you think that—you wouldn’t run any statements that you didn’t think were right, would you? This is why you have to tell yourself to type BEGIN TRAN every time. It only takes once to really ruin your day.
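
The habit looks something like this (the table and predicate here are made up):

BEGIN TRAN

-- The statement you actually mean to run
UPDATE dbo.Customers
SET IsActive = 0
WHERE CustomerID = 42;

-- Sanity check: did it touch the rows you expected, and only those?
SELECT * FROM dbo.Customers WHERE CustomerID = 42;

-- If everything looks right:
COMMIT TRAN
-- If it doesn't:
-- ROLLBACK TRAN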

OK, Third: COMMIT TRAN until it throws an error

This is another tip that I learned from our senior DBA on probably my first or second day on the job. Basically, when you commit your user transaction, keep trying to commit it until SSMS reports an error (trying to commit a transaction when there isn’t one open). Why? Glad you asked!

Create a table & put a couple rows of data in it:
CREATE TABLE TransTest (
    ID      INT     IDENTITY(1,1),
    Name    VARCHAR(20)     NOT NULL
)

INSERT INTO TransTest
SELECT 'Smythee'

INSERT INTO TransTest
SELECT 'Bob'

Next, say you want to delete Bob from the table. Bob was never any fun anyway, was he? Because you’re heeding the above advice, you are going to wrap this simple one-row delete in a Transaction. You run the following:

BEGIN TRAN

DELETE
FROM TransTest
WHERE Namee = 'Bob'

Whoops, you fat-fingered the column name and didn’t notice until you ran it, and it threw an error.

Fix it and run it again:

BEGIN TRAN

DELETE
FROM TransTest
WHERE Name = 'Bob'

This runs OK, you double-check the contents of the table, everything looks fine.

Next step is to run COMMIT TRAN. That runs without error, and you go on your merry way.

But there’s a problem: run SELECT @@TRANCOUNT and see what you get. You should see one transaction still open. Why is that?

When the first batch was run, BEGIN TRAN opened a transaction. Even though the DELETE itself bombed because of the bogus column name, that transaction is still there. When you fix the statement and run BEGIN TRAN again, you now have a nested, second transaction (@@TRANCOUNT is 2). Running a single COMMIT merely decrements @@TRANCOUNT to 1; nothing is actually made permanent until the outermost transaction commits. Because that outer transaction is still open and still holds its locks, it will block other statements looking to operate in the TransTest table.
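
A quick way to see this in action:

BEGIN TRAN      -- @@TRANCOUNT = 1
BEGIN TRAN      -- @@TRANCOUNT = 2 (nested)

DELETE
FROM TransTest
WHERE Name = 'Bob'

COMMIT TRAN     -- @@TRANCOUNT = 1; nothing durable yet, locks still held
COMMIT TRAN     -- @@TRANCOUNT = 0; NOW the delete is committed
COMMIT TRAN     -- error 3902: no corresponding BEGIN TRANSACTION; all clear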

Moral of the story? Mash F5 on COMMIT TRAN until SSMS throws an error.

What was I talking about again?

Oh right, our poor developer.

In the mail I sent back to him, I commended him for being smart about letting us know right away when a mistake was made, as it allowed us to actually get the data back (or mostly so). I also recommended manual transactions, because they can save your tail.

I don’t know if he’ll take the advice to heart, but he at least has the tools available to him now if he wants to use them.