Using Excel and Get Data to Find Fixes in SQL Server CUs

Lately, for whatever reason, we’ve had clients running into a small rash of bugs or bug-like behavior in SQL Server; some in the Engine, some in SSRS (the SSRS ones have been fun). In one case, it occurred a day or two after SQL Server 2016 SP1 CU3 was released, so we (I was talking to Joey about it at the time) had a list of fixes to go through.

This is fine and all, but when one is looking for a fix for a specific behavior (“I’ve had this bug all summer, so I want to look through every CU release to see if it’s in there”), it’s a bit of a pain to go through the whole list just scanning for, say, the Reporting Services fixes. It’s even worse if the instance is behind and you need to look through multiple CUs for something. Another scenario is reviewing a newly-released CU when you really only care about fixes that pertain to the Engine…you get the idea.

These lists can get long

Business Intelligence to the Rescue!

Fortunately, there are some tools built right into Excel that make this a whole lot easier than scrolling through the list in your browser. Armed with nothing more than the URL of the CU’s KB article and Excel 2016 (or a few older versions), quick work can be made of generating custom filters for this data.

Here are the steps:

In Excel 2016, click on the Data tab of the ribbon. This is where the artist formerly known as “Power Query” lives, now referred to as “Get & Transform.”

Starting with the New Query button, navigate down through the menu to From Other Sources and then From Web:

New Query | From Other Sources | From Web

This brings up a simple little dialog that asks for a URL. Paste in the URL for the CU page you’re interested in; here, I’m using SQL 2016 SP1 CU3’s URL: https://support.microsoft.com/en-us/help/4019916/cumulative-update-3-for-sql-server-2016-sp1

Clicking OK brings up a security-related dialog that allows you to provide any credentials needed to access the material. In this case, no specific credentials are needed, as it is a public web page, so leaving Anonymous selected is the way to go.

Web Page Security

Clicking Connect brings up the real meat of Get & Transform, where we choose what data we want to import and, optionally, do some ETL-like transformations to it.

Whenever pulling in data from a web page/table for the first time, there is a bit of experimentation that needs to happen. For example, when the “Navigator” dialog opens for the first time, there’s a big list of Tables from the web page, and no data displayed:

Select Table to load data from

What has to happen is finding which of those tables contains the data from the web page that you’re interested in. In our case, that’s Table 0, where we can see the data we’re looking for; namely, the Fix area column:

Populated Table 0

Quick note: The reason for so many other tables on this page is that down towards the bottom, under the “Cumulative update package file information” link/collapsed section, there are a number of tables detailing all of the files modified by fixes in this CU. All of those tables are available here, too.

Once the table you’re interested in is selected, we can move on. The next step could be clicking the Edit button, where all kinds of transformations could be applied to the data in this table… we don’t need any of that here, so we can skip that part and go straight to loading the data.

As we’re only looking to read through this data on its own (as opposed to loading it into a Power Pivot data model), we can just click the Load button.

The end result will be a table of data in Excel that contains all the fixes in the CU:

Populated Fixes in Excel table

The best part about this, and the whole reason we’re here, is that Excel’s “Auto Filter” feature works on this table (and it’s even already activated). Clicking the arrowhead in the “Fix area” column yields the familiar pop-up menu, where all manner of sorting and filtering can be done.

Excel Auto Filter dialog

Simply check the area of the product you’re interested in from the list, and you’ll be presented with a nice short list of fixes to look through.

Fix list filtered to Hekaton

Awesome!

Re-use

But let’s say you’ve gone through this and you’re thinking, “that was kind of a pain, and won’t really save any time given how rarely this page needs to be looked at.” That’s possibly a fair assessment. Since all of these CU pages are (for now) structured identically, the extract logic stays the same; the only thing that needs to change is the source URL. Once you’ve built this workbook, you can save the file and modify the URL it pulls from when the next CU comes out, but the amount of clicking required to do that is about the same as setting it up the first time, so I’m not sure how helpful that is.
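Incidentally, everything those clicks built is stored as a short Power Query “M” script, visible under Query Settings or via the Advanced Editor. Here’s a minimal sketch of what that query looks like for this example (the FixList step name is mine; the step names Excel generates will differ slightly):

let
    // Pull down the CU's KB page and parse its HTML tables
    Source = Web.Page(Web.Contents("https://support.microsoft.com/en-us/help/4019916/cumulative-update-3-for-sql-server-2016-sp1")),
    // Table 0 is the one holding the fix list on this page
    FixList = Source{0}[Data]
in
    FixList

When the next CU comes out, the URL inside Web.Contents is the only piece that has to change.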

Probably the best thing to do is to save this file off after you’ve created it and reference it as-needed, clicking the Refresh All button on the Data tab when you open this to make sure you have current data.

T-SQL Tuesday #22: Data Presentation

TSQL Tuesday Logo

Robert Pearl hosts No.22

It seems like it hasn’t been that long since last month’s T-SQL Tuesday post; I suppose time flies when you’re having fun and trying to finish up the same ETL project you’ve been working on since March.

This month’s SQL blog party is being hosted by Robert Pearl (blog | @PearlKnows), on the topic of “Data Presentation.” This is a good topic for me at this point, as I’ve all but finished my transition from DBA to BI Monkey (that’s something else I need to write about…). I think Robert is looking for specific examples of ways to present data, but since, as usual, I don’t have anything specific that I can actually publish, I’m left to speak generally about the topic.

Data Presentation: Just as Important as the Data Itself

In a previous life, I was responsible for almost everything data-related for the systems that we ran. As a result, I would get a lot of requests for data. One of my favorite requests would come in the form of, “can you give me some numbers for <X system>?” I would try to keep my response at least marginally non-snarky, but it would generally include two questions:

  1. What exact “numbers” do you want? (this is especially where I would have snark problems)
  2. What do you want the data to look like?

Of course, the first one is an important question—if the requestor cannot articulate what it is they actually want (or even what question they’re trying to answer), little else is going to matter. I’ll not dwell on this particular item too much, but suffice to say, sometimes getting a good answer to this seemingly easy question is anything but. I’ve basically come to the conclusion that this is normal.

Once over that hurdle, the conversation can move on to the presentation of whatever data/”numbers” it is the requestor wants. There are almost as many options for presenting data as there are ways to write the T-SQL to retrieve it. Just like writing the SQL in a way that is performance- and resource-conscious, care should be taken with the presentation design. It is imperative that the data be presented in a way that is understandable and digestible by its intended audience.

Notice I didn’t say “digestible by the party asking for it.” Don’t forget that the request originator may not be the party who is ultimately going to be parsing the provided data. If the audience is not clear in the original request, add a third question to the two that I have listed above: “Who is going to be acting on this data?”

Options for What Happens Next

When the “What do you want it to look like” question is asked, chances are decent that you already have an idea of what the answer is going to be. If this is a one-off, ad-hoc request, Excel is a popular option. Alternatively, if a robust reporting system is in place, or this request will be a recurring one, developing a report to present the data might be a stronger choice. There are of course other options: the data could be destined for a statistical analysis application, where a CSV file would be more suitable. I would consider this an outlier, though—most of the time, data is prepared for direct human consumption.

Excel is such a popular option that you could almost call it Data’s Universal Distribution Engine (DUDE). Sending data over in Excel is less about the “make it pretty” side of good presentation than it is about the “make it useful” side. I’ve found that Excel is chosen a lot of the time because the requestor wants to do more manipulation of the data once they get it. I’ll leave whether or not that is a good thing to the side; the truth is, such activity happens all the time. As a result, when preparing data for an Excel sheet, I like to have an idea of what the user is going to do with it. This sometimes helps to determine what data the user is looking for (if they don’t have a clear idea) but can also help with some formatting or “extras” to include. These “extras” could take the form of running subtotals, percent changes for Year over Year situations, or anything else that is easier to add via SQL instead of someone having to putz around in Excel.
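As a made-up illustration of those “extras” (the table and column names are hypothetical, and the window functions here require SQL Server 2012 or later), a running subtotal and a Year over Year percent change are only a couple of lines of T-SQL:

-- Monthly totals plus a running subtotal and YoY percent change,
-- computed in SQL so nobody has to build them by hand in Excel
SELECT
    SalesMonth,
    MonthlyTotal,
    SUM(MonthlyTotal) OVER (ORDER BY SalesMonth ROWS UNBOUNDED PRECEDING) AS RunningTotal,
    100.0 * (MonthlyTotal - LAG(MonthlyTotal, 12) OVER (ORDER BY SalesMonth))
        / NULLIF(LAG(MonthlyTotal, 12) OVER (ORDER BY SalesMonth), 0) AS PctChangeYoY
FROM dbo.MonthlySales
ORDER BY SalesMonth;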

Writing a report to present data has a different set of opportunities than pasting data into Excel. One of the things that I like to see in a solid reporting environment is a set of standards that apply to the reports themselves: common header contents (report name, date/time stamp, name of the data source/DB the data came from, etc.), standard text formatting, a common set of descriptors, and so on. In addition to making individual reports easier to read and feel more familiar, it can make it easier to compare data between reports the hard way (one on each monitor), if one has to.

It's only worth 1,000 words if the first ones that come to mind are work safe

One thing each of these two tools gives you is the ability to present data in the form of pretty pictures. There’s a time and a place for everything, but the old cliché, “a picture is worth a thousand words” can/does apply. Sometimes it’s just flat-out hard to beat a good trendline. I have a much easier time seeing even the simplest of trends when data’s plotted out in a histogram. Conversely, one of my coworkers can look at a pile of numbers, not even sorted chronologically, and tell you what is going on in about three seconds.

Knowing where to put your effort goes back to knowing who your intended audience is. Likewise, knowing when to say “no” to visualization is a terribly useful skill. Every data element on a chart should be discernible; otherwise it doesn’t convey the information it is supposed to, and the visualization is working against itself. The pie chart to the right? Don’t do that.

Summary

That’s about all I’ve got. In short: Presentation is important. Unfortunately, it can also be complicated. It’s important to ask questions early on in the process and to know your audience. Standardize if you can; help out a little with the complicated work if it can be done in SQL. Also, add visual representations without going overboard. I’ve always found turning “data” into “information” for people to be fun; if it can make someone else’s job easier/more fun, too, then all for the better.

Excel’s CONCATENATE Function

If you’re not already familiar with this function, I may be about to become your hero. Or, you’ll keep using the Import/Export Wizard, some T-SQL, or straight-up SSIS to do this sort of thing and this post won’t matter. Either way, this was new to me a while back and its simplicity and usefulness (at least in our environment) blew my mind at first.

The CONCATENATE function in Excel itself is pretty straightforward; it’s a one-trick pony that does exactly what you think it does: concatenates strings together. In fact, Excel’s description of the function isn’t that much more complicated: “Joins several text strings into one text string.”

Couple notes about CONCATENATE…

The first one has to do with its behavior. CONCATENATE will combine anything in Excel cells, even things that classify as “numbers” that can be used in other arithmetic-based functions. Translated, this means if you have “12” in one cell and “78” in another, using CONCATENATE with those two cells will have a result of “1278.”
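In formula terms, with 12 in A1 and 78 in B1 (cell addresses chosen purely for illustration), =CONCATENATE(A1, B1) returns the text “1278”, not the number 90.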

The second point is about its source data. Obviously this can be used to concatenate cell contents together into another cell, but strings can be put directly in the function definition, too. This means text can be inserted into the result directly via the function call—it doesn’t need to already be in the Sheet somewhere.
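For example, still assuming 12 is in A1, =CONCATENATE("ID: ", A1) returns “ID: 12”, with the “ID: ” prefix supplied right in the function call rather than read from a cell.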

What’s it good for?

The primary use case for this function in my life is when I’m on a project that involves either adding new or updating existing data based on some Excel sheet of Business data that gets handed my way. I’ll use this to build UPDATEs or an INSERT statement to get this data into the DB quickly and easily.

I’m going to build a quick example using the Sales.SalesPerson table in AdventureWorks.

For the sake of argument, let’s say that everyone is getting their commission rate adjusted and it’s going to be easier to do it directly in the table instead of through the UI (this would be more plausible if the company were larger and had more than 17 rows in this table, but that’s my story and I’m sticking to it). Or, maybe there isn’t a UI, because “when or why would we ever need to change the commission rates? Don’t waste time on that interface.” But that never happens.

Anyway, the VP of Sales sends you the following Workbook that contains new commission percentages for all of the sales folk:

RawBusinessData

I’m making this somewhat easy by including the BusinessEntityID column, which is the table’s primary key. Having the key included makes this exercise quite easy, and it’s usually possible to get the business to include this important piece of data as long as you know this activity is coming and can talk to them ahead of time.

When building statements with CONCATENATE, I usually start out in SSMS and manually type out what the first statement would be:

UPDATE sales.SalesPerson
SET CommissionPct = 0.021
WHERE BusinessEntityID = 274

Most of the time, I type out the Excel function call manually based on this initial statement, but it can be copied in as well, after stripping off some of its formatting (line breaks & tabs). The values for the SET & WHERE clauses are replaced by references to the cells that contain those particular pieces of data. Due to the way the CONCATENATE function works, the “pieces” to concatenate are separated by commas, and plain text to include in the final result gets wrapped in double quotes. The end result should look something like this:

CONCATENATE("UPDATE sales.SalesPerson set CommissionPct = ", C2, " where BusinessEntityID = ",A2)

Put an equals sign at the beginning & paste that into Excel. Some magic happens and, bam, UPDATE statement:

First UPDATE

Copy that out and paste it in SSMS underneath the one written earlier & compare the two. If they match, the formula is good. The rest is simply a matter of filling the series down. All of the necessary UPDATE statements are done! Don’t skip checking those over to make sure nothing crazy is going on.
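After the fill-down, the formula column holds one statement per row, something like this (the rates here are invented for illustration):

UPDATE sales.SalesPerson set CommissionPct = 0.021 where BusinessEntityID = 274
UPDATE sales.SalesPerson set CommissionPct = 0.019 where BusinessEntityID = 275
UPDATE sales.SalesPerson set CommissionPct = 0.023 where BusinessEntityID = 276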

The same thing can be done with INSERT statements if a big block of new data is going into a table. Even if you prefer a single INSERT statement instead of discrete ones, CONCATENATE is still useful for building the SELECTs and UNION ALLs to feed into it:

CONCATENATE("SELECT '", B3, "', getdate(), ", A3, " UNION ALL")
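Filled down and pasted under an INSERT, the result ends up looking something like this (the target table and values are made up for illustration; remember to trim the trailing UNION ALL off the last row):

INSERT INTO dbo.CommissionChangeLog (SalesPersonName, ChangeDate, BusinessEntityID)
SELECT 'Michael Blythe', getdate(), 275 UNION ALL
SELECT 'Linda Mitchell', getdate(), 276 UNION ALL
SELECT 'Jillian Carson', getdate(), 277;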

Useful?

I think it is. Even if you usually use SSIS to do this type of work, I bet there are still times when using Excel to cook up some UPDATE statements in a hurry is both quicker and easier than writing out a package to do it, or maybe even than the Import/Export Wizard.

I use this on a regular basis and believe it’s one of the more useful non-BI or Expense Report things you can do with Excel 😉