Azure Infrastructure pre-con ahead of #SQLSatCleveland

SQL Saturday in Cleveland, Ohio is next week, on February 3rd. If you’re in the area or can easily make it there, I hope you can come out for a great day of free SQL Server training. I enjoy presenting at SQL Saturdays; they’re fun and educational days for speakers and attendees alike. Last time we were in Cleveland, it had snowed overnight when it was time to leave town on Sunday morning. I’ve lived even longer in the South now, so if that happens again, it’ll be even more fun this time.

In addition to my session on Saturday, where I’ll talk about using database projects in SSDT/Visual Studio, I’ll also be presenting an all-day session on Friday covering Azure infrastructure. Planning and designing your infrastructure is just as important in the cloud as it is when building new systems on-premises. As Azure continues to grow and expand around the world, more companies will be choosing to migrate (or deploy new) services to the public cloud. Understanding the underlying components is imperative to high-performing, successful Azure deployments and hybrid migrations. In this session, we’ll cover infrastructure fundamentals with a bit of a focus on deploying and running SQL Server in Azure; however, there will be plenty of general background discussion that applies to any workload.

Registration for the precon is available on Eventbrite: https://www.eventbrite.com/e/azure-infrastructure-presented-by-kerry-tyler-tickets-41688096218. Information about the overall SQL Saturday event is available at www.sqlsaturday.com/708.

Saturday is free, but tickets for the full-day precon are $150.

I hope to see you next weekend!

Don’t Forget About DR for Your DR

Here’s a scenario:

Let’s say you’re a small- or medium-sized company with an on-premises data center, either in your own office/building or in a “regular” co-lo nearby in the same metro area. You’ve got a mission-critical online presence, so in order to handle either a large-scale disaster for your geographic area or one just in your server room, you’ve written, implemented, and tested a disaster recovery plan. Another co-lo a couple of states over is set up to step in if needed, and the failover process can even be completed by non-technical staff in a couple of hours.

This is a fairly sound plan. However, what’s Step 2 after Something Bad™ happens to the primary data center and everything fails over to the DR site? What if Something Bad™ is long-term? You’re back to square one, with a single data center. And, while we’re at it: where does the quorum file share for your Availability Group (AG) live in all of this?
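One way to take that witness out of both of your own data centers entirely, if the cluster is on Windows Server 2016 or later, is a cloud witness backed by an Azure storage account. A minimal sketch, where the storage account name is hypothetical and the key is a placeholder:

```powershell
# Run on a cluster node; requires the FailoverClusters module
# and Windows Server 2016 or later.
Import-Module FailoverClusters

# Point the cluster quorum at an Azure storage account instead of a
# file share sitting in one of your own data centers.
# Account name and access key below are placeholders.
Set-ClusterQuorum -CloudWitness `
    -AccountName "mydrwitnessaccount" `
    -AccessKey "<storage-account-access-key>"
```

That way, losing either site doesn’t take the witness down with it.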

Or, another situation: What if something happens to your DR site? Then what?

Almost Been There, Done That

One of our clients, who has a really good DR plan similar to the one described above, had a brush with this scenario earlier in the year. Their DR data center is in the Houston area, and in the aftermath of Hurricane Harvey, there were some concerns about the status of that DC. The DC itself was fine, but key support personnel would not have been able to reach the site for a number of days had the need arisen.

This near miss did a good job of spurring conversations about what to do in that situation and what a Plan C might look like.

Now What?

The point of this post is mostly to get you thinking about this scenario. Getting DR in place at all can be enough of a battle (I know), but making sure that what happens next, after a disaster actually strikes, is considered and planned for is another important step.

What this plan looks like will depend largely on what the “first stage” DR plan looks like. Not everyone can afford an additional site, especially smaller companies. And, let’s be honest: we could sit here all day playing what-if with burning data centers, but at some point, the return on that investment becomes very questionable.

Although this looks/smells like a shameless plug for cloud/Azure, the public cloud is an excellent option to consider here. Even if your company is 100% on-premises with a classic hardware/virtualization platform, keeping a copy of critical systems’ backups up to date and available in the cloud is relatively inexpensive. This “cold DR” process is a very easy step to implement as a safeguard against a multi-phase or long-term disaster. In the event those backups are needed, there’s the option of spinning up a group of VMs in the cloud to restore to. At the very least, this cold backup solution will be more accessible than your current offsite backups if new on-prem servers are stood up somewhere to get the lights back on.
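As a sketch of how simple this can be: SQL Server has supported backing up directly to Azure Blob Storage (“backup to URL”) since SQL Server 2012 SP1. Here’s what that looks like with the SqlServer PowerShell module; the server, database, storage account, and credential names are all hypothetical:

```powershell
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

# Back up a critical database straight to an Azure Blob Storage container.
# Assumes a SQL Server credential named "AzureColdDR" already exists on the
# instance, holding the storage account name and access key.
Backup-SqlDatabase -ServerInstance "PRODSQL01" `
    -Database "CriticalDb" `
    -BackupFile "https://colddrstorage.blob.core.windows.net/sqlbackups/CriticalDb.bak" `
    -SqlCredential "AzureColdDR" `
    -CompressionOption On
```

Schedule something like that for your critical databases and your Plan C backups are already sitting in the same place you’d be standing up replacement VMs.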

SQL Server 2017: New Security in Analysis Services Tabular 1400

With SQL Server 2017 going GA this week, there’s been a lot of talk over the last couple of weeks about new and improved features. This post is no different, but I’m going in a slightly different direction.

SQL Server Analysis Services Tabular models were first introduced with SQL Server 2012 (suddenly that seems so long ago) and have undergone continual, sometimes rapid, revisions ever since. That remains true with SQL Server 2017, which introduces a decent list of new features and other improvements.

One of the most exciting for me is the introduction of built-in support for object-level security.

But, We’ve Had Roles and Row Filters the Whole Time!

We have; you’re right. But one thing that Tabular has never had (nor Multidimensional models, for that matter) is a built-in, easy way to do security in the other direction: columns!

Row-level security is a very robust feature, and it remains great. However, when some columns or tables in the model shouldn’t be visible to all users (think Personally Identifiable Information), there wasn’t really a way to handle that before. Hoops had to be jumped through with DAX, possibly maintaining two different copies/versions of the same table, to implement this behavior. Sometimes there would even need to be different versions of the same reports, depending on which user group they targeted (with the underlying security/configuration of the cube driving what the user could or couldn’t see). This was, generally, a pain.

Perspectives, for their part, were never intended as a security feature, and that hasn’t effectively changed with this.

In order to use this new feature (and the others), your tabular models will need to be developed and deployed at the 1400 compatibility level. This can be set when creating new models, and existing models can be upgraded to it (but that upgrade is a one-way street).
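To give a feel for it, here’s a minimal sketch of a role in a 1400-level model that hides a whole table and a single column from its members, expressed as a TMSL script and deployed with the SqlServer module’s Invoke-ASCmd cmdlet. The server, database, role, table, and column names are all made up:

```powershell
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

# TMSL: a read role where metadataPermission = "none" hides the Payroll
# table entirely and the SocialSecurityNumber column on Employee.
$tmsl = @'
{
  "createOrReplace": {
    "object": { "database": "SalesModel", "role": "GeneralAnalysts" },
    "role": {
      "name": "GeneralAnalysts",
      "modelPermission": "read",
      "tablePermissions": [
        {
          "name": "Employee",
          "columnPermissions": [
            { "name": "SocialSecurityNumber", "metadataPermission": "none" }
          ]
        },
        { "name": "Payroll", "metadataPermission": "none" }
      ]
    }
  }
}
'@

Invoke-ASCmd -Server "localhost\TABULAR" -Query $tmsl
```

To members of that role, the hidden objects simply don’t exist: they don’t show up in the field list, and queries referencing them fail.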

Azure Analysis Services

Since AAS is still my favorite thing, I can’t talk about SSAS without plugging it a little bit. Although 1400 compatibility has only been available in the on-prem product for about 24 hours now, it has been available in Preview in AAS since May. This is indicative of Microsoft’s cloud-first strategy: features show up there first and filter down to the on-premises software “later.” This may not be for everyone, but I think it’s one of the great reasons to consider Azure’s Platform as a Service offerings (another one is the built-in high availability).

Alright, So Maybe I Get the Cloud…

In general, I’m a cynical, Negative Poo-Poo Head. In the last few years, I’ve become more and more this way when it comes to technology trends. My first reflex as to why is the increased time I spend working with developers on a day-to-day basis, and my perception of how susceptible development is to Flavor-of-the-Week-itis. I don’t think that’s really the cause, though; I think I’m just getting more cynical as I get older in general. I know, that isn’t necessarily a good thing, but that’s a different blog post. The good news here is that I can at least still turn it off when I need to.

Anyway, the Cloud and my negative poo-pooness.

I’ve basically spent more time making fun of Cloud Computing than I have learning about it. “Oh, it’s the dumb terminal/mainframe paradigm all over again”, “There’s something wrong with this whole thing when you can buy a shipping container full of servers, put it out back, and call it a ‘private cloud’”, “It’s just the latest buzzword/flavor-of-the-week”, on and on. Honestly, even though Brent, Buck, and others have been saying it’s the Way of the Future™, I didn’t buy it.

Well…I think I’m on board now.

What changed my mind?

I don’t think I can nail it down to one thing, but I can point to a couple things that have been going on lately that played into this change of heart.

Last Friday was the Nashville SQL Users Group meeting. Presenting was Brian Prince (Twitter | Blog (lookit dat Gamerscore)) on SQL Azure. I was pretty interested in this topic, mostly because I wanted to actually learn something about this stuff I make fun of. OK, that, and I was a little curious to see what all the fuss is about.

Although I had to leave early due to an unfortunately-scheduled meeting, I did get to see most of Brian’s presentation. I was impressed with a lot of what I saw. I was impressed by the shipping containers full of servers that make up open-air data centers. I was impressed by how they each can have their own genset backup power sitting next to them, how the servers in the containers live at 95°F, are cooled primarily by what basically amount to Swamp Coolers, and pretty much never get touched once they’re racked up. During this part of the presentation, the Sysadmin in me was clamoring to give up the DBA thing and figure out how to get a job building/supporting this stuff, because it looks just that awesome…

Brian then went into the components of the Azure platform, where SQL Azure sits in it, and how it is supported, replicated, and made available. We also got to see how you actually use the system, along with some good arguments about databases we all have in our environments that would be good candidates to move to the cloud. The prices seem pretty good (I haven’t done any extensive calculations) compared with running small but still mission-critical DBs in-house… It all actually adds up to a pretty good argument.

All of this good info, plus eyes-on examples, went a long way toward getting me over whatever it was I was afraid of or didn’t buy about the system.

Buck has been going on about this whole thing for a while, and he’s even putting his money where his mouth is by moving over to the Azure team. At first, I thought that was a loss for the SQL community, but with my improving outlook, I don’t feel that way anymore. I believe it will be for the better, and I can’t wait to hear more from him in the future.

These are a couple of the things that have been going on lately that are beginning to change my mind about the cloud. I’m looking forward to learning more about it and considering it as an implementation option in the future.