You Still Have Your Eclipse Glasses, Right?

Courtesy NASA/SDO

Although, yes, you should just keep them until the eclipse in the US in 2024 (or the one in the Andes in 2019, which I haven’t completely ruled out just yet, because I’ve become a crazy person), there’s another good reason to keep a pair or two handy:

Sunspots!

More specifically: GIANT sunspots!

Right now, there are two fairly-large sunspot areas on the face of the sun. They’re so big, in fact, that they can be seen without magnification. Just put on your eclipse glasses and take a gander. Both of them are pretty close to the center of the solar disk, as seen in the picture above (which is from late yesterday). They will continue to rotate left-to-right across the face of the sun over the next several days.

Take a look; this is pretty cool (bonus points if you work in an office and go out and stand in the parking lot looking up at the sun for no apparent reason).

Space Weather

My favorite site to keep up on this sort of thing is spaceweather.com. Our “space weather” is almost entirely driven by the sun, so most things on the site lean that way. If civilization as we know it is about to be ended by another Carrington Event, this will be one of the early places to hear about it.

Eclipse Glasses

Quick note about eclipse glasses. Although I heard a few mixed reports about whether or not eclipse glasses degrade over time and are therefore only safe to use for the next few years, this appears not to be the case:

Such warnings are outdated and do not apply to eclipse viewers compliant with the ISO 12312-2 standard adopted in 2015.

So, keep track of those things, as long as you’ve got good ones.

Using Excel and Get Data to Find Fixes in SQL Server CUs

Lately, for whatever reason, we’ve had clients running into a small rash of bugs or bug-like behavior in SQL Server; some in the Engine, some in SSRS (the SSRS ones have been fun). In one case, it occurred a day or two after SQL Server 2016 SP1 CU3 was released, so we (I was talking to Joey about it at the time) had a list of fixes to go through.

Get Data Buttons

This is fine and all, but when one is looking for a fix for a specific behavior (“I’ve had this bug all summer, so I want to look through every CU release to see if it’s in there”), it’s a bit of a pain to go through the whole list just scanning for the, say, Reporting Services fixes. It’s even worse if the instance is behind and you need to look through multiple CUs for something. Another scenario is if you are just reviewing a newly-released CU and really only care about fixes that pertain to the engine…you get the idea.

These lists can get long

Business Intelligence to the Rescue!

Fortunately, there are some tools built right into Excel that make this a whole lot easier than scrolling through the list in your browser. Armed with nothing more than the URL of the CU’s KB article and Excel 2016 (or a few earlier versions), you can make quick work of generating custom filters for this data.

Here are the steps:

In Excel 2016, click on the Data tab of the ribbon. This is where the artist formerly known as “Power Query” lives, now referred to as “Get & Transform.”

Starting with the New Query button, navigate down through the menu to From Other Sources and then From Web:

New Query | From Other Sources | From Web

 

This brings up a simple little dialog that asks for a URL. Paste in the URL for the CU page you’re interested in; here, I’m using SQL 2016 SP1 CU3’s URL: https://support.microsoft.com/en-us/help/4019916/cumulative-update-3-for-sql-server-2016-sp1

Clicking OK brings up the next dialog, a security-related dialog that allows you to provide any credentials that may be needed to access the material. Of course, in this case, no specific credentials are needed, as it is a public web page. Leaving Anonymous selected here is the way to go.

Web Page Security

Clicking Connect brings up the real meat of Get & Transform, where we choose what data we want to import and, optionally, do some ETL-like transformations to it.

Whenever pulling in data from a web page/table for the first time, there is a bit of experimentation that needs to happen. For example, when the “Navigator” dialog opens for the first time, there’s a big list of Tables from the web page, and no data displayed:

Select Table to load data from

Next, you need to find which of those tables contains the data on the web page you’re interested in. In our case, that’s Table 0, where we can see the data we’re looking for, most importantly the Fix area column:

Populated Table 0

Quick note: The reason this page has so many tables of other data is that toward the bottom, under the “Cumulative update package file information” link/collapsed section, are a number of tables listing all of the files that are modified by fixes in this CU. All of those tables are available here, too.
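If you’d rather script this discovery step, pandas’ read_html does the same scrape: it returns every HTML table on a page as a DataFrame, and you can scan the list for the one carrying a “Fix area” column. This is just a sketch of the idea, not part of the Excel workflow above; it assumes pandas (with the lxml parser) is installed, and the inline HTML is a made-up stand-in for the KB page (with network access you could pass the CU URL instead):

```python
from io import StringIO

import pandas as pd

# Stand-in for the KB article: several tables, only one holding the fix list.
html = StringIO("""
<table><tr><th>File name</th><th>File size</th></tr>
       <tr><td>sqlservr.exe</td><td>392904</td></tr></table>
<table><tr><th>VSTS bug number</th><th>KB article number</th><th>Description</th><th>Fix area</th></tr>
       <tr><td>10355405</td><td>4024184</td><td>An access violation occurs...</td><td>SQL Engine</td></tr>
       <tr><td>10355416</td><td>4023926</td><td>A report renders incorrectly...</td><td>Reporting Services</td></tr>
</table>
""")

tables = pd.read_html(html)  # one DataFrame per <table> on the page

# Find the table(s) that carry the fix list, i.e. have a "Fix area" column.
fix_tables = [t for t in tables if "Fix area" in t.columns]
fixes = fix_tables[0]
print(fixes["Fix area"].tolist())  # → ['SQL Engine', 'Reporting Services']
```

This mirrors what the Navigator dialog is doing: each entry in `tables` corresponds to one of the “Table N” items, and the column check is the programmatic version of eyeballing them for the fix list.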

Once the table you’re interested in is selected, we can move on. The next step could be clicking the Edit button, which would let you do all kinds of transformations to the data in this table. Here, we don’t need to do that, so we can skip that part and go straight to loading the data.

As we’re only looking to read through this data on its own (as opposed to loading it into a Power Pivot data model), we can just click the Load button.

The end result will be a table of data in Excel that contains all the fixes in the CU:

Populated Fixes in Excel table

The best part about this, and the whole reason we’re here, is Excel’s “Auto Filter” feature works on this table (and it is already activated, even). Clicking on the arrowhead in the “Fix area” column yields this familiar pop-up menu, where all manner of sorting and filtering can be done.

Excel Auto Filter dialog

Simply check the area of the product you’re interested in from the list, and you’ll be presented with a nice short list of fixes to look through.

Fix list filtered to Hekaton

Awesome!
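The Auto Filter step has a direct analogue in pandas, if you loaded the table that way instead: boolean indexing on the “Fix area” column. A minimal sketch, using a few made-up rows (hypothetical KB numbers and descriptions) in the shape of the CU fix table:

```python
import pandas as pd

# A few rows shaped like the CU fix list; the values are illustrative only.
fixes = pd.DataFrame({
    "KB article number": [4023926, 4024184, 4023995],
    "Description": [
        "A report renders incorrectly...",
        "An access violation occurs...",
        "A checkpoint error occurs...",
    ],
    "Fix area": ["Reporting Services", "SQL Engine", "In-Memory OLTP"],
})

# Same effect as checking one area in the Auto Filter pop-up menu.
ssrs_fixes = fixes[fixes["Fix area"] == "Reporting Services"]
print(ssrs_fixes["Description"].tolist())  # → ['A report renders incorrectly...']
```

Either way, the point is the same: filter a long mixed list down to just the product area you care about.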

Re-use

But, let’s say you’ve gone through this, and you’re thinking “that was kind of a pain, and won’t really save much time for as often as that page needs to be looked at.” That’s possibly a fair assessment. Since all of these CU pages are structured identically (for now), the extract logic stays the same; the only thing that needs to change is the source URL. Once you’ve set up this workbook, you can save the file and point it at a new URL when the next CU comes out, but that takes about as much clicking as setting it up the first time did, so I’m not sure how helpful that would be.

Probably the best thing to do is to save this file off after you’ve created it and reference it as-needed, clicking the Refresh All button on the Data tab when you open this to make sure you have current data.
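If you go the scripting route instead, the reuse story is simpler: wrap the extract-and-filter logic in a small helper and hand it the next CU’s URL when one ships. This is a sketch under the same assumptions as before (pandas with lxml installed; `cu_fixes` is my own hypothetical helper name; it relies on the CU pages keeping their current layout, where the fix list is the one table with a “Fix area” column):

```python
from io import StringIO

import pandas as pd

def cu_fixes(source, area=None):
    """Return the fix-list table from a CU KB page, optionally filtered
    to a single 'Fix area'. `source` can be the KB URL, a file, or raw
    HTML; assumes the fix list is the table with a 'Fix area' column."""
    tables = pd.read_html(source)
    fixes = next(t for t in tables if "Fix area" in t.columns)
    if area is not None:
        fixes = fixes[fixes["Fix area"] == area]
    return fixes

# Demo against a made-up stand-in page; with network access you would
# pass the CU article's URL instead.
html = StringIO("""
<table><tr><th>KB article number</th><th>Description</th><th>Fix area</th></tr>
       <tr><td>1111111</td><td>Engine fix...</td><td>SQL Engine</td></tr>
       <tr><td>2222222</td><td>SSRS fix...</td><td>Reporting Services</td></tr>
</table>
""")
print(cu_fixes(html, area="Reporting Services")["Description"].tolist())
```

Re-running the function against a fresh URL is the scripted equivalent of clicking Refresh All on a saved workbook.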

“The Tuesday Night Fire Code Violation”

It was July 19, 2005. At least, I’m pretty sure it was.

Based on IndyPASS’s meeting history, that second meeting way down at the bottom (use your keyboard’s End key; that’s what it’s there for) was basically a “here’s what’s new/awesome in SQL Server 2005” presentation. I’ve long since lost most of my email from that time, but that meeting makes sense in the timeline of 2005’s release.

During the dark, dark days of 2005, just about everyone was desperate for an upgrade to SQL 2000. I was, and I hadn’t even been here that long. The fledgling Indianapolis PASS chapter met in a good-sized conference room on the ground floor of a Duke-owned office building off Meridian St (“twelve o’clock on the I-465 dial”) on the north side of town. That night, there were probably half-again as many people in that room as it could comfortably hold. People standing, sitting on the floor, you name it. Tom Pizzato, the speaker, was introduced; he walked up to the podium and the first thing he said was, “Welcome to the Tuesday night fire code violation.” That is still the best one-liner to open a technical presentation I’ve ever seen, and ever since, it has been cemented to SQL Server 2005 itself in my brain.

That was a long time ago; it’ll be eleven years here in a couple of months. Eleven years is an appreciable percentage of an eternity in the tech world. As a result, earlier this week, Extended Support for SQL 2005 ended. This means that if you are still running it anywhere, you will get no help from Microsoft should something go wrong. Perhaps more importantly, there will be no more security patches made available for it. Don’t expect a replay of what Microsoft did for XP if something big happens.

This is a pretty big deal. If you have any kind of problem that you can’t fix, and you call Microsoft Support about it, you won’t get any help for your in-place system. You will have to upgrade to a supported version before you’ll be able to get any assistance, and the middle of a problem bad enough to call PSS about is probably not the time you want to be doing a Cowboy Upgrade™ of your production database system.

I understand that there are plenty of industries and even some specific companies that are either forced to, or elect to continue to run out-of-support RDBMSes on their mission-critical systems. I supported SQL 2000 for far longer than I would like to admit, and it was a risky proposition. After I transitioned out of that role, there was a restoration problem (fortunately on a non-production system) that it sure would have been nice to be able to call Microsoft about, but that wasn’t an option.

Don’t put yourself in that situation. There are plenty of points that can be made to convince the powers that be to upgrade. The fact that any new security vulnerability will not be addressed/patched should be a pretty good one for most companies. If you have an in-house network security staff, loop them in on the situation; I bet they will be happy to help you make your case.

One final note: If you are still running 2005 and are looking to upgrade, don’t just hop up to 2008 or 2012–go all the way to 2014 (or, once it goes Gold, 2016). SQL Server 2008 and 2008 R2 are scheduled to go off Extended Support on July 9, 2019. Three years seems like a long way off now, but that’ll sneak up on you…just like April 12, 2016 might have.

Normalization — It’s not Your Friend…or Your Enemy: Dataversity Webinar

As she does on a regular basis, my friend Karen Lopez (blog | @datachick) is presenting a new webinar this week, hosted by Dataversity. The topic, as the title of this post suggests, is the good, the bad, and the craziness of normalization. The event is this Thursday at 2:00p Eastern Daylight Time (GMT-4).

Why am I sharing this? Well, I’m going to be there, too, playing the role of sidekick, because Karen’s the one that actually knows what she’s talking about 😉 . These webinars are always a good time, and you usually learn something, to boot.

It is free to everyone, but registration is required. More information and a link to register are available on this page.

If you join, stop in early while we do some audio checks, hang out and chat a bit beforehand. It’s a fun, informal time before the webinar proper starts. Stay tuned in via Twitter, as well. Monitor the #heartdata hashtag to participate in the conversation.

Hope to see you there!

Big Challenges in Data Modeling: Ethics & Data Modeling

From the “There’s a first time for everything” file, I can announce that I’m going to be joining an online panel discussion this Thursday (i.e., tomorrow), April 24 at 2:00p EDT (11a Pacific). I know!

Topic

This discussion will be about Ethics and Data Modeling. It’s part of a monthly series put on by Dataversity covering Big Challenges in Data Modeling.

We’ll cover questions like what to do when asked to do something “wrong” (and maybe what the definition of “wrong” is in the first place), and whether there are any items in particular that a data modeler, or anyone doing that task, needs to be especially aware of. Although these questions apply to anyone in the data field—or anyone in IT or business at all, for that matter—this conversation will focus on how they apply to data modeling specifically.

Details

Participating will be Len Silverston, Denny Cherry, and Tamera Clark, with the whole apparatus MC’d/hosted by Karen Lopez (the one and only DataChick).

The broadcast is free, but you do have to register to get the sign-in information. That can be done at the webinar’s main announcement page (look for the round “Click to Register” graphic), along with reading full bios for all of us.

In addition to the Q&A and participant chat that will be going on during the discussion, you can follow the #BCDModeling hashtag on the tweeter. We’ll all be watching that as well.

Sign up, come out, ask some questions, and generally have a good time. Oh, and probably learn something, too. Can’t forget that.