SQL Server 2017: New Security in Analysis Services Tabular 1400

With SQL Server 2017 going GA this week, there's been a lot of talk over the last couple of weeks about new and improved features; this post is no different, but I'm going in a slightly different direction.

SQL Server Analysis Services Tabular models were first introduced with SQL Server 2012 (suddenly that seems so long ago) and have undergone continual, sometimes rapid, revisions ever since. That remains true with SQL Server 2017, which introduces a decent list of new features and other improvements.

One of the most exciting for me is the introduction of built-in support for object-level security.

But, We’ve Had Roles and Row Filters the Whole Time!

We have; you're right. But one thing that Tabular has never had (nor Multidimensional models, for that matter) is a built-in, easy way to apply security in the other direction: to columns!

Row-level security is a very robust feature, and it remains great. However, if some columns or tables in the model shouldn't be visible to all users (think Personally Identifiable Information), there wasn't really a way to handle that before. You had to jump through hoops with DAX, possibly maintaining two different copies/versions of the same table, to get this behavior. Sometimes there even had to be different versions of the same reports, depending on which user group they were intended for (with the underlying security/configuration of the cube driving what the user could or couldn't see). This was, generally, a pain.

Perspectives were never intended as a security feature, and that hasn't effectively changed with this.

In order to use this new feature (and the others), your tabular models will need to be developed/deployed at the 1400 compatibility level. This can be set when creating new models, and existing models can be upgraded to it (but that's a one-way street).
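To give a feel for what this looks like under the hood, here's a rough sketch of a role in a 1400 model's JSON metadata (what SSDT writes into the Model.bim, and what TMSL deploys). The role, table, and column names are made up for illustration, and I'm recalling the property names from memory, so treat this as a sketch and lean on the SSDT or Tabular Editor UI (or the official TMSL docs) for the real thing. The idea is that a metadataPermission of "none" on a table or column hides that object completely from members of the role:

"roles": [
  {
    "name": "General Users",
    "modelPermission": "read",
    "members": [ { "memberName": "CONTOSO\\ReportUsers" } ],
    "tablePermissions": [
      {
        "name": "Employee",
        "columnPermissions": [
          { "name": "Social Security Number", "metadataPermission": "none" }
        ]
      },
      { "name": "Salary", "metadataPermission": "none" }
    ]
  }
]

Members of that role would still see the Employee table, just without the sensitive column, and wouldn't see the Salary table at all; row filters (filterExpression) continue to work alongside this.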

Azure Analysis Services

Since AAS is still my favorite thing, I can't talk about SSAS without plugging it a little bit. Although 1400 compatibility has only been available in the on-prem product for about 24 hours now, it has been available in Preview in AAS since May. This is indicative of Microsoft's cloud-first strategy: features show up there first and filter down to the on-premises software "later." This may not be for everyone, but I think it's one of the great reasons to consider Azure's Platform as a Service offerings (another is the built-in high availability).

Hey, you! Don’t Forget to Enable DB Mail in Agent

Raise your hand if you've been there: You set up a new SQL Server instance, configure Database Mail (and test it), and then set up a nice Agent job to back up your databases. You configure it to send mail on job completion (so you can keep an eye on it no matter what), but it's not sending mail. You test DB Mail again, and it's working. What gives?

This is kind of a gimme, but a few weeks ago I configured a new maintenance job (NOT a Maintenance Plan 😉 ) on a new-ish non-production server that didn’t have any other jobs on it yet. When it wasn’t sending mail, I stood around for a lot longer than I’d like to admit before I figured out what was going on.

The kicker is that you have to enable the use of database mail within the SQL Server Agent–this isn’t on by default.

As with most things in SQL Server, there are a couple of ways to turn this on. First is the GUI. The Alert System page of the SQL Agent's properties dialog is shown here, and you can see right at the top where the main "Enable" checkbox is, along with dropdowns for the DB Mail settings you want to use. Flip that on, pick your desired mail profile (you probably only have one set up), restart the Agent service, and your Agent jobs will start sending mail as expected.

Agent Properties dialog showing Mail Settings

There's also the T-SQL route, which is useful for adding this configuration to a general "initial setup" script (such as ours: https://github.com/DC-AC/SQL2016_Scripted_Install) so you don't have to worry about it on new instances that you install. It's a quick SP call to enable mail:

EXEC msdb.dbo.sp_set_sqlagent_properties @databasemail_profile = N'Main Profile'

Assign the mail profile you want to use, and go. By default, the UI shows the "Save copies of the sent messages in the Sent Items folder" option checked but greyed out (at least on a few 2016 instances that I've checked recently). That option can be driven with the email_save_in_sent_folder parameter on the proc; set it to 1 to turn it on. True story: I have no idea where that mail actually gets saved on a SQL Server; I assume it goes to the "Sent Items" folder of the mailbox the profile is configured to use, but I've never configured this with a mailbox I have access to in order to find out.
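For completeness, here's the shape of the call with the "enable" flag and the sent-items option spelled out. Keep in mind that sp_set_sqlagent_properties is undocumented, so treat these parameter names as "what has worked on the instances I've touched" rather than anything official; swap in your own profile name, and the Agent restart still applies:

EXEC msdb.dbo.sp_set_sqlagent_properties
    @use_databasemail          = 1,                -- the "Enable mail profile" checkbox
    @databasemail_profile      = N'Main Profile',  -- your Database Mail profile name
    @email_save_in_sent_folder = 1;                -- the "Save copies of the sent messages..." option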

This T-SQL step assumes SQL Server on Windows. If you’re doing this on Linux…well, it’s different. I’m not going to reproduce that work here, because it may change since SQL 2017 is still in RC at this point. So, if you’re doing this on Linux, check out the official docs for that process.

Moral of the story here: Don’t be a dumbass like me; turn on DB Mail in the Agent!

Using Excel and Get Data to Find Fixes in SQL Server CUs

Lately, for whatever reason, we've had clients running into a small rash of bugs or bug-like behavior in SQL Server; some in the Engine, some in SSRS (the SSRS ones have been fun). In one case, it occurred a day or two after SQL Server 2016 SP1 CU3 was released, so we (I was talking to Joey about it at the time) had a list of fixes to go through.


This is fine and all, but when one is looking for a fix for a specific behavior ("I've had this bug all summer, so I want to look through every CU release to see if it's in there"), it's a bit of a pain to go through the whole list just scanning for, say, the Reporting Services fixes. It's even worse if the instance is behind and you need to look through multiple CUs for something. Another scenario is if you are just reviewing a newly-released CU and really only care about fixes that pertain to the Engine…you get the idea.

These lists can get long
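A quick aside: before deciding which KB articles to dig through, it helps to confirm exactly which build and CU an instance is already on. On reasonably recent builds, SERVERPROPERTY can report the update level directly; the ProductUpdateLevel property isn't there on older versions (it just comes back NULL), in which case you can match the build number from ProductVersion against the CU list instead. A quick check looks like this:

-- Which build/servicing level is this instance on?
-- ProductUpdateLevel (e.g. "CU3") is only reported on newer builds; older ones return NULL.
SELECT
    SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- full build number
    SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM, SP1, etc.
    SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; -- CU level, where available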

Business Intelligence to the Rescue!

Fortunately, there are some tools built right into Excel that make this a whole lot easier than scrolling through the list in your browser. Armed with nothing more than the URL of the CU's KB article and Excel 2016 (or a few older versions), you can make quick work of generating custom filters for this data.

Here are the steps:

In Excel 2016, click on the Data tab of the ribbon. This is where the artist formerly known as “Power Query” lives, now referred to as “Get & Transform.”

Starting with the New Query button, navigate down through the menu to From Other Sources and then From Web:

New Query | From Other Sources | From Web


This brings up a simple little dialog that asks for a URL. Paste in the URL for the CU page you’re interested in; here, I’m using SQL 2016 SP1 CU3’s URL: https://support.microsoft.com/en-us/help/4019916/cumulative-update-3-for-sql-server-2016-sp1

Clicking OK brings up the next dialog, a security-related one that allows you to provide any credentials that may be needed to access the material. Of course, in this case, no specific credentials are needed, as it is a public web page. Leaving Anonymous selected here is the way to go.

Web Page Security

Clicking Connect will bring up the real meat of Get & Transform (the old Power Query), where we will choose what data we want to import and, optionally, do some ETL-like transformations to it.

Whenever pulling in data from a web page/table for the first time, there is a bit of experimentation that needs to happen. For example, when the “Navigator” dialog opens for the first time, there’s a big list of Tables from the web page, and no data displayed:

Select Table to load data from

What has to happen is you need to find which of those tables contains the data from the web page you're interested in. In our case, that's Table 0, where we can see the data we're looking for, namely the Fix area column:

Populated Table 0

Quick note: The reason there are so many other tables on this page is that down towards the bottom, under the "Cumulative update package file information" link/collapsed section, there are a number of tables containing information about all of the files modified by the fixes in this CU. All of those tables are available here, too.

Once the table you're interested in is selected, we can move on. The next step could be clicking the Edit button, where you'd be able to do all kinds of transformations to the data in this table… here, we don't need to do that, so we can skip that part and go straight to loading the data.

As we’re only looking to read through this data on its own (as opposed to loading it into a Power Pivot data model), we can just click the Load button.

The end result will be a table of data in Excel that contains all the fixes in the CU:

Populated Fixes in Excel table

The best part about this, and the whole reason we're here, is that Excel's "Auto Filter" feature works on this table (and it's already activated, even). Clicking the arrowhead in the "Fix area" column yields the familiar pop-up menu, where all manner of sorting and filtering can be done.

Excel Auto Filter dialog

Simply check the area of the product you’re interested in from the list, and you’ll be presented with a nice short list of fixes to look through.

Fix list filtered to Hekaton

Awesome!

Re-use

But let's say you've gone through this and you're thinking, "that was kind of a pain, and it won't really save much time given how often that page actually needs to be looked at." That's possibly a fair assessment. Since all of these CU pages are laid out identically (for now), the extract logic stays the same; the only thing that needs to change is the source URL. Once you've set this workbook up, you can save the file and swap in the new URL when the next CU comes out, but the amount of clicking required to do that is about the same as setting it up from scratch, so I'm not sure how much that buys you.
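If you do go that route, the URL lives right in the query. Opening the query in the Advanced Editor shows the M code that Get & Transform generated; the sketch below is roughly what mine looked like (your step names, and any automatically-added "Changed Type" steps, may differ), and swapping out the URL string is the only change needed for a new CU:

let
    // Pull the tables out of the CU's KB article; swap this URL when a new CU ships
    Source = Web.Page(Web.Contents("https://support.microsoft.com/en-us/help/4019916/cumulative-update-3-for-sql-server-2016-sp1")),
    // "Table 0" from the Navigator is the list of fixes
    FixList = Source{0}[Data]
in
    FixList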

Probably the best thing to do is to save this file off after you’ve created it and reference it as-needed, clicking the Refresh All button on the Data tab when you open this to make sure you have current data.

“The Tuesday Night Fire Code Violation”

It was July 19, 2005. At least, I’m pretty sure it was.

Based on IndyPASS’s meeting history, that second meeting way down at the bottom (use your keyboard’s End key; that’s what it’s there for) was basically a “here’s what’s new/awesome in SQL Server 2005” presentation. I’ve long since lost most of my email from that time, but that meeting makes sense in the timeline of 2005’s release.

During the dark, dark days of 2005, just about everyone was desperate for an upgrade to SQL 2000. I was, and I hadn’t even been here that long. The fledgling Indianapolis PASS chapter met in a good-sized conference room on the ground floor of a Duke-owned office building off Meridian St (“twelve o’clock on the I-465 dial”) on the north side of town. That night, there were probably half-again as many people in that room as it could comfortably hold. People standing, sitting on the floor, you name it. Tom Pizzato, the speaker, was introduced; he walked up to the podium and the first thing he said was, “Welcome to the Tuesday night fire code violation.” That is still the best one-liner to open a technical presentation I’ve ever seen, and ever since, it has been cemented to SQL Server 2005 itself in my brain.

That was a long time ago; it'll be eleven years here in a couple of months. Eleven years is an appreciable percentage of an eternity in the tech world. As a result, earlier this week, Extended Support for SQL 2005 ended. This means that if you are still running it anywhere, you will get no help from Microsoft were something to go wrong. Perhaps more importantly, there will be no more security patches made available for it. Don't expect a replay of what Microsoft did for XP if something big happens.

This is a pretty big deal. If you have any kind of problem that you can't fix, and you call Microsoft Support about it, you won't get any help for your in-place system. You will have to upgrade to a supported version before you'll be able to get any assistance, and the middle of a problem bad enough to call PSS about is probably not the time you want to be doing a Cowboy Upgrade™ of your production database system.

I understand that there are plenty of industries, and even some specific companies, that are either forced to or elect to continue running out-of-support RDBMSes on their mission-critical systems. I supported SQL 2000 for far longer than I would like to admit, and it was a risky proposition. After I transitioned out of that role, there was a restoration problem (fortunately on a non-production system) that it sure would have been nice to be able to call Microsoft about, but that wasn't an option.

Don’t put yourself in that situation. There are plenty of points that can be made to convince the powers that be to upgrade. The fact that any new security vulnerability will not be addressed/patched should be a pretty good one for most companies. If you have an in-house network security staff, loop them in on the situation; I bet they will be happy to help you make your case.
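If part of making that case is figuring out exactly what is still out there, a quick query against each instance (or across your registered servers/CMS) does the trick; any ProductVersion starting with 9 is SQL Server 2005:

-- What is this instance actually running? A major version of 9 means SQL Server 2005.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS ServicePack,
    SERVERPROPERTY('Edition')        AS Edition;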

One final note: If you are still running 2005 and are looking to upgrade, don't just hop up to 2008 or 2012; go all the way to 2014 (or, once it goes Gold, 2016). SQL Server 2008 and 2008 R2 are scheduled to go off Extended Support on July 9, 2019. Three years seems like a long way off now, but that'll sneak up on you…just like April 12, 2016 might have.

Normalization — It’s not Your Friend…or Your Enemy: Dataversity Webinar

As she does on a regular basis, my friend Karen Lopez (blog | @datachick) is presenting a new webinar this week, hosted by Dataversity. The topic, as the title of this post suggests, is the good, the bad, and the craziness of normalization. The event is this Thursday at 2:00 PM Eastern Daylight Time (GMT-4).

Why am I sharing this? Well, I’m going to be there, too, playing the role of sidekick, because Karen’s the one that actually knows what she’s talking about 😉 . These webinars are always a good time, and you usually learn something, to boot.

It is free to everyone, but registration is required. More information and a link to register are available on this page.

If you join, stop in early while we do some audio checks, hang out and chat a bit beforehand. It’s a fun, informal time before the webinar proper starts. Stay tuned in via Twitter, as well. Monitor the #heartdata hashtag to participate in the conversation.

Hope to see you there!