Azure Infrastructure pre-con ahead of #SQLSatCleveland

SQL Saturday in Cleveland, Ohio is next week, on February 3rd. If you’re in the area or can easily make it there, I hope that you can come out for a great day of free SQL Server training. I enjoy presenting at SQL Saturdays; they’re fun and educational days for speakers and attendees alike. Last time we were in Cleveland, it had snowed overnight when it was time to leave town on Sunday morning. I’ve lived even longer in the south now, so if that happens again, it’ll be even more fun this time.

In addition to my session on Saturday, where I will talk about using database projects in SSDT/Visual Studio, I’ll also be presenting an all-day session Friday on Azure Infrastructure. Planning and designing your infrastructure is just as important in the cloud as it is when building new systems on-premises. As Azure continues to grow and expand around the world, more companies will be choosing to migrate (or deploy new) services to the public cloud. Understanding the underlying components is imperative for high-performance, highly successful Azure deployments and hybrid migrations. In this session, we’ll cover infrastructure fundamentals with a bit of a focus on deploying and running SQL Server in Azure; however, there will be plenty of general background discussion that can be applied to any workload.

Registration for this precon is available on Eventbrite here: https://www.eventbrite.com/e/azure-infrastructure-presented-by-kerry-tyler-tickets-41688096218. Information about the overall SQL Saturday event is available here: www.sqlsaturday.com/708

Saturday is free, but tickets for the full-day precon are $150.

I hope to see you next weekend!

Let’s Talk About Group Managed Service Accounts

We all know the best practices for SQL Server service accounts: use a domain account (if you’re using Active Directory), don’t make it a local admin, use a different one for each service (and server/instance), etc. These are, of course, good practices, and they should be followed as closely as possible in Production and on servers/instances that house Production data.

A problem arises if you have more than a handful of servers or run some of the BI components, however. The number of service accounts involved in your SQL Server environment could be very large, necessitating an incredible amount of management overhead. This goes beyond simply creating and assigning the accounts; chances are good that there are policies in place that require changing passwords on a schedule. User accounts, service accounts, and other automation accounts likely all fall under this umbrella. If you’re lucky, maybe non-user accounts have a longer change interval, but it’s still something that needs to be done on a regular basis. In large environments, this can take an excruciating amount of time.

All of this is without even mentioning the human factor. One of the recurring themes in a couple of my presentations is automating as many things as possible to take the human out of the process. Not that we’re bad, but for some tasks, especially tedious and repetitious ones, dumb things go wrong simply because of the nature of the work. Changing a bunch of service account passwords is definitely one of them. There are two types of sysadmins: those who have changed a service account’s password but forgot to update and restart the service itself, and those who will.

Enter Group Managed Service Accounts

Group Managed Service Accounts (gMSAs) are a way to avoid most of the above work. They are special accounts that are created in Active Directory and can then be assigned as service accounts. They are completely managed by Active Directory, including their passwords. This means no more manual work to meet the password-changing policy; the machine takes care of that for you.

There are other security-related controls that can be gained with them, but this is the major feature.
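If you want a quick inventory of what your instance’s services are running as today (or to confirm that a gMSA actually took effect after a change), SQL Server can report it itself. Here’s a minimal check using the sys.dm_server_services DMV; a gMSA shows up with a trailing $ on the account name:

    -- Requires VIEW SERVER STATE permission.
    -- A gMSA appears in service_account as DOMAIN\accountname$.
    SELECT servicename,
           service_account,
           status_desc
    FROM   sys.dm_server_services;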

I’ll also note that you, the DBA, are likely to need some help from your AD admin to get these set up. They’re going to need to actually create the accounts for you, and there may be some changes needed to the AD configuration in order to support them (creating a KDS root key, for example). They’ll also need at least one Windows Server 2012 (or newer) domain controller in the domain, but I’d hope that’s not going to be a hurdle today.

Since I’m mostly here to talk about SQL Server, I’ll note a couple of different support situations. gMSAs are supported starting with SQL Server 2014 on Windows Server 2012 R2 and later, and that support covers everything you can do with SQL Server: standalone instances, Failover Clusters, Availability Groups. Plain Managed Service Accounts (MSAs) go back a little further, but they only support standalone instances of SQL Server.

From a non-SQL Server perspective, one of the major disadvantages of gMSAs is that one can’t just use them everywhere. Services have to be specifically designed to support these accounts, and that’s not always going to be the case.

Since this isn’t exactly a new feature, there’s plenty of documentation and there are plenty of blog posts out there covering it and the various requirements to implement it. There’s a great overview and setup post on MSDN here: https://blogs.msdn.microsoft.com/markweberblog/2016/05/25/group-managed-service-accounts-gmsa-and-sql-server-2016/

That post links to this older TechNet article, which is still a pretty good resource for understanding what these things are, with a little more detail on what’s going on in the back end: https://technet.microsoft.com/en-us/library/hh831782.aspx

Finally, my coworker Joey has a slightly older writeup here, https://joeydantoni.com/2012/12/14/group-managed-service-accounts/, that walks through the process of setting this up. Note that some of the requirements have changed since that was written, but the general process remains the same.

gMSAs are a nice feature that isn’t too onerous to set up, and they go a long way toward making your life easier and your data more secure.

PSA: SQL Server 2016 SP1 CU7 is a Security Update

This week (Thursday the 4th), SQL Server 2016 SP1 CU7 was released. Right, fine, nothing out of the ordinary.

Except there is something out of the ordinary here. CU7 includes the SQL Server patches for the Spectre and Meltdown vulnerabilities (for more info on these as they relate to SQL Server, see Joey’s post here and Allen’s post here), and as a result, it is being flagged and published by Microsoft as a security update, KB 4058561.

So What?

Since this is a security update, it will be pushed down via Windows Update like other security patches. KB 4058561 says that “This update will be provided via Microsoft Update at a future date”, but it’s probably safe to assume that this “future date” will be next Tuesday, January 9th, as that is this month’s normal Patch Tuesday.

All that said: if you are (or support) a shop that likes to test SQL Server CUs before they get deployed, and there aren’t positive controls on what Windows Update does to your servers, it would be a good idea to start testing this update now. Sure, the Venn diagram of “people who test CUs” and “people who let Windows Update go to town on their SQL Servers, whether right off the bat Tuesday night or on the next weekend” probably doesn’t overlap much, but in case you fall into that overlap: heads up.

Remember that CUs include all patches since their “baseline” (in this case, SP1), so you’re not just getting the most recent fixes; you’re getting the previous six CUs’ worth, too.
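If you’re not sure where a given instance stands, it can tell you. Here’s a quick check; on recent versions, the ProductUpdateLevel and ProductUpdateReference properties report the CU level and the KB number of the most recent update applied:

    -- Reports build number, service pack, CU level, and the KB of the latest update.
    -- After this update is applied, ProductUpdateReference should report KB4058561.
    SELECT SERVERPROPERTY('ProductVersion')         AS build,
           SERVERPROPERTY('ProductLevel')           AS service_pack,
           SERVERPROPERTY('ProductUpdateLevel')     AS cumulative_update,
           SERVERPROPERTY('ProductUpdateReference') AS kb_article;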

Again…

Once again, as everyone else is (or should be) saying, this is a big-deal security vulnerability, and you (all of you, yes, even YOU) really need to apply this patch. For SQL Servers, see guidance from MS here: https://support.microsoft.com/en-us/help/4073225/guidance-for-sql-server

Seriously… patch your stuff.

Don’t Forget About DR for Your DR

Here’s a scenario:

Let’s say you’re a small- or medium-sized company with either an on-premises data center in your office/building or in a “regular” co-lo nearby in the same metro area. You’ve got a mission-critical online presence, so in order to handle either a large-scale disaster for your geographic area or one just in your server room, you’ve written, implemented, and tested a disaster-recovery plan. Another co-lo a couple of states over is set up to step in if needed, and the failover process can even be completed by non-technical staff in a couple of hours.

This is a fairly sound plan. However, what’s Step 2 after Something Bad™ happens to the primary data center and everything fails over to the DR site? What if Something Bad™ is long-term? You’re back to square one, with a single data center. And where do you put the quorum file share for your AG now?

Or, another situation: What if something happens to your DR site? Then what?

Almost Been There, Done That

One of our clients, who has a really good DR plan similar to the one described above, had a brush with this scenario earlier this year. Their DR data center is in the Houston area, and in the aftermath of Hurricane Harvey, there were some concerns about its status. The data center itself was fine, but key support personnel wouldn’t have been able to get to the site for a number of days had there been a need.

This situation did a good job of spurring conversations about what to do in this situation and what Plan C might look like.

Now What?

The point of this post is mostly to get you thinking about this scenario. Getting DR in place at all can be enough of a battle (I know), but thinking through and planning for what happens next, after a disaster actually strikes, is another important step.

What this plan looks like will likely depend on what the “first stage” DR plan looks like. Not everyone can afford an additional site, especially smaller companies. And, let’s be honest: we could sit here all day playing what-if with burning data centers, but at some point the return on that investment becomes very questionable.

Although this looks/smells like a shameless plug for cloud/Azure, the public cloud is an excellent option to consider here. Even if your company is 100% on-premises with a classic hardware/virtualization platform, keeping an up-to-date copy of critical systems’ backups in the cloud is relatively inexpensive. This “cold DR” process is an easy-to-implement safeguard against a multi-phase or long-term disaster. In the event those backups are needed, there’s the option of spinning up a group of VMs in the cloud to restore to. At the very least, this cold backup solution will be more accessible than your current offsite backups if new on-prem servers are stood up somewhere to get the lights back on.
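For SQL Server specifically, backup to URL makes this almost trivial. Here’s a minimal sketch using a SAS-based credential (available starting in SQL Server 2016); the storage account, container, and database names are placeholders:

    -- Hypothetical storage account/container; substitute your own names and SAS token.
    CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
             SECRET   = '<SAS token>';

    -- Back up straight to blob storage; CHECKSUM adds some extra corruption protection.
    BACKUP DATABASE CriticalDB
        TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/CriticalDB_full.bak'
        WITH COMPRESSION, CHECKSUM;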

Seattle for Summit 5×5: No. 5

The final post of my little series of fives for Seattle and PASS Summit finally focuses on the conference itself. This list is five sessions/events that I think should be on your list to check out this week at Summit. A couple of these will be difficult to get into, because they will be super-popular. In fact, people have been known to stand in line through the entire previous session in order to get into one of these presenters’ sessions. Therefore, like, plan accordingly and stuff.

Dr. Rimma Nehme Keynote

Thursday 8:15 AM
Rimma worked for a while at Microsoft’s Gray Systems Lab in a certain midwestern town, with Dr. David DeWitt. Back in the day, David, one of the best speakers you’ll ever see in your life, would give the “tech” keynote at Summit, and everyone would show up to get their brains melted. It was awesome. What’s still awesome is that the torch has been passed to Rimma, and she’s every bit as good a speaker as David. She also got her PhD from Purdue, which makes her more awesome, obviously. It doesn’t even matter what she’s talking about (it’s here), just go. Thank me later.

Bob Ward: Inside SQL Server 2017 on Linux

Wednesday 3:15 PM
6B
Like the tech keynote, whatever Bob’s talking about at Summit is something to go to if you’re a DBA. This year it’s SQL Server on Linux, one of the most important developments in SQL Server Land in a long time, and definitely one of the most compelling new “features” (obviously not the best word here) in SQL Server 2017. This should be a great presentation for SQL Server DBAs, even if you can barely spell “vi”, and you’ll want to get there early. Bob’s sessions are the ones people have stood in line for half a day to get into.

Itzik Ben-Gan: T-SQL Tips and Tricks

Wednesday 10:45 AM
6B
Apparently you should just sit in 6B all day Wednesday and you’ll have a great day. Itzik is to T-SQL as Bob Ward is to the engine, or Rimma is to, well, everything data-related. A “tips and tricks” session may sound ho-hum, but if Itzik is giving it, you’re going to get your mind blown by what he can do with three lines of T-SQL. I don’t care how good at T-SQL you are; you’ll come out of this session knowing something you didn’t.
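For a small taste of the genre (this is my own illustrative example, not anything from Itzik’s deck, and dbo.Orders is a hypothetical table):

    -- A running total in a single window function (SQL Server 2012+).
    SELECT OrderDate,
           Amount,
           SUM(Amount) OVER (ORDER BY OrderDate ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.Orders;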

André Kamman: Azure SQL DW Guidance for ETL Developers

Friday 11:45 AM
6B
André gets the “token” BI mention here, and for good reason. Azure SQL DW is a great, scalable, useful service for workloads and data sets that can leverage its enormous capabilities, but along with that capacity and capability come pitfalls for the uninitiated. SQL DW can look like a bit of an odd duck due to its unusual load patterns and seemingly missing T-SQL functions, so sessions like André’s are a definite requirement for ETL architects and developers who are going to be leveraging this service. Even if you don’t use it now, learning about its under-the-covers massively parallel processing architecture is useful, I think.

Sunil Agarwal: Maximizing Query Performance with Columnstore Indexes

Wednesday 10:15 AM
615
Alright, so I have a scheduling conflict here. Don’t blame me, I didn’t build the schedule.

Sunil’s going to talk about query performance with columnstore indexes in SQL Server. Joey and I may or may not have had input into this presentation’s content, so I know it’s going to be good. Kind of like Azure SQL DW, columnstore indexes provide some great benefits, but they’re not necessarily something you can just throw into your existing data model and have your performance go through the roof; there is a little extra work involved to get there. Also, Sunil’s always one of the smartest people in the room, just the type of person I like to listen to. I’m thinking I’m not going to be able to make it to this session, unfortunately, although Sunil does have another columnstore session (“Strategies to Speed Up Data Load into Clustered Columnstore Index”) Friday at 11:00 AM.
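To make the “extra work” point concrete, here’s the minimal starting point, with a hypothetical fact table. Everything beyond this (batch mode behavior, segment elimination, load patterns) is where sessions like Sunil’s earn their keep:

    -- Clustered columnstore: replaces the rowstore; good for scan-heavy analytic tables.
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
        ON dbo.FactSales;

    -- Or, a nonclustered columnstore on selected columns for mixed OLTP/reporting workloads.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
        ON dbo.FactSales (SaleDate, StoreID, Amount);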