Meme Monday: “I got 99 problems but a disk ain’t one”

Due to Internet Spaceships (aka, Spreadsheets in Space), whenever I hear the “I got 99 problems” line, I always think of a station that we had in a solar system whose name started with “9-9.” The alliance that I flew under had a naming standard where the first 3 characters or so of the station name matched the system name, to cut down on confusion and help confirm that you knew where you were. That station was named “99 problems but <…> ain’t one.” I don’t remember what was in the gap anymore; there’s a decent chance that it wasn’t family friendly anyway.

Anyway, I’m a huge nerd.

On topic…

This time around for what is becoming Tom’s monthly writing exercise, the topic is to make a list of things that can go wrong with SQL Server that are not disk issues. This should be fairly easy for me since, fortunately, we have more I/O than God, but we’ll see how much of a trainwreck I can turn this into.

  • Cowboy Sysadmins. I know this is asking for it. I know that there are plenty of good sysadmins out there who are just as pragmatic as good DBAs are. The problem is that they’re not always the only ones working on a project or active on the support rotation. Things get changed that shouldn’t be, at times when they shouldn’t be. It happens. How do I know this? Well, see, I used to be a Cowboy Sysadmin. I used to be a pretty strong opponent of ITIL and things like Change Management. Then I got a clue and things were better.
  • Cowboy Developers. Self-explanatory.
  • Linked Servers. I know Tom listed this one in his original post, but it really is a disaster waiting to happen. Long story short, I wound up implementing an LS over a WAN link, and it’s a miracle that it worked. There are a lot of moving parts involved in LSes, and when a cluster is involved, it’s even worse. PLUS, when name resolution isn’t working through other means, hostnames need to be added to LMHOSTS (LMHOSTS, not HOSTS(!)). That says to me there’s some ancient piece of code in MSDTC in use, and that scares the crap out of me.
  • Crap code/Design. And by “Crap,” I mean, “Legacy.” Why is old code always the worst? Were the people here before really that dumb, or did they just not know any better? I mean… the storage engine has to do the exact same thing with 9 indexes on a table now that it had to do a decade ago… (no, I’m not kidding).
  • Letting me design your DB. I’m better at this than I was two years ago, but you should still have someone who knows what they’re doing look at it before you do anything else with it.
  • Reporting out of your OLTP system. This may be OK…but it may be very, very bad. I’ve seen some doozies, but sometimes it isn’t completely avoidable. Limit it as much as you can. You don’t even need to go full-blown data warehouse or data mart for this, either; a log-shipped copy or a backup restored on another instance may get the scary queries off your back (see the sketch after this list).
  • Flaky alerting/monitoring system. This doesn’t directly affect your company’s DBs, but if a backup job failed last night and you didn’t get an email alert about it… well, things wouldn’t exactly be OK, would they?
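
On that reporting point, here’s a minimal sketch of the restore-for-reporting idea, assuming made-up database, file, and logical names; STANDBY keeps the copy read-only but queryable between restores:

RESTORE DATABASE SalesReporting
FROM DISK = N'D:\Backups\Sales.bak'
WITH MOVE N'Sales_Data' TO N'D:\SQLData\SalesReporting.mdf',
     MOVE N'Sales_Log' TO N'D:\SQLData\SalesReporting_log.ldf',
     -- STANDBY leaves the copy readable between restores, instead of NORECOVERY
     STANDBY = N'D:\SQLData\SalesReporting_undo.dat';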

Tom was hoping to get nine out of us, but I’m tired, this is due tomorrow, and the President is apparently going to drop a bomb on us in five minutes, so this is what you get.

Tom, this whole thing is kind of fun and it really does give me/us easy fodder to write about, so I appreciate it.

Excel’s CONCATENATE Function

If you’re not already familiar with this function, I may be about to become your hero. Or, you’ll keep using the Import/Export wizard, some T-SQL, or straight-up SSIS to do this sort of thing, and this post won’t matter. Either way, this was new to me a while back, and its simplicity and usefulness (at least in our environment) blew my mind at first.

The CONCATENATE function in Excel itself is pretty straightforward; it’s a one-trick pony that does exactly what you think it does: concatenates strings together. In fact, Excel’s description of the function isn’t that much more complicated: “Joins several text strings into one text string.”

Couple notes about CONCATENATE…

The first one has to do with its behavior. CONCATENATE will combine anything in Excel cells, even things that classify as “numbers” and can be used in other arithmetic-based functions. Translated, this means if you have “12” in one cell and “78” in another, using CONCATENATE with those two cells yields the text “1278,” not the sum 90.

The second point is about its source data. Obviously this can be used to concatenate cell contents together into another cell, but strings can be put directly in the function definition, too. This means text can be inserted into the result directly via the function call—it doesn’t need to already be in the Sheet somewhere.

What’s it good for?

The primary use case for this function in my life is a project that involves either adding new data or updating existing data based on some Excel sheet of business data that gets handed my way. I’ll use this to build UPDATE or INSERT statements to get that data into the DB quickly and easily.

I’m going to build a quick example using the Sales.SalesPerson table in AdventureWorks.

For the sake of argument, let’s say that everyone is getting their commission rate adjusted and it’s going to be easier to do it directly in the table instead of through the UI (this would be more plausible if the company were larger and had more than 17 rows in this table, but that’s my story and I’m sticking to it). Or, maybe there isn’t a UI, because “when or why would we ever need to change the commission rates? Don’t waste time on that interface.” But that never happens.

Anyway, the VP of Sales sends you the following Workbook that contains new commission percentages for all of the sales folk:

RawBusinessData

I’m making this somewhat easy by including the BusinessEntityID column, the Natural Key for this data. Having that key included makes the exercise quite easy, and it’s usually possible to get the business to include this important piece of data, as long as you know the activity is coming and can talk to them ahead of time.

When building statements with CONCATENATE, I usually start out in SSMS and manually type out what the first statement would be:

UPDATE sales.SalesPerson
SET CommissionPct = 0.021
WHERE BusinessEntityID = 274

Most of the time, I type out the Excel function call manually based on this initial statement, but it can be copied in as well, after stripping off some of its formatting (line breaks & tabs). The values for the SET & WHERE clauses are replaced by the cells that contain those particular pieces of data. Because of how the CONCATENATE function works, the “pieces” to concatenate are separated by commas, and plain text to include in the final result gets wrapped in double quotes. The end result should look something like this:

CONCATENATE("UPDATE sales.SalesPerson set CommissionPct = ", C2, " where BusinessEntityID = ",A2)

Put an equals sign at the beginning & paste that into Excel. Some magic happens and, bam, UPDATE statement:

First UPDATE

Copy that out, paste it into SSMS underneath the one written earlier, and compare the two. If they match, the formula is good. The rest is simply a matter of filling the series down, and all of the necessary UPDATE statements are done, something like the sketch below. Don’t skip checking those over to make sure nothing crazy is going on.
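
Filled down the sheet, the formula column winds up holding one statement per row, roughly like this (the commission values here are made up for illustration):

UPDATE sales.SalesPerson set CommissionPct = 0.021 where BusinessEntityID = 274
UPDATE sales.SalesPerson set CommissionPct = 0.018 where BusinessEntityID = 275
UPDATE sales.SalesPerson set CommissionPct = 0.025 where BusinessEntityID = 276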

The same thing can be done with INSERT statements if a big block of new data is going into a table. Even if you prefer to do a single INSERT statement instead of discrete ones, CONCATENATE is still useful to build SELECTs and UNION ALLs to feed into a single INSERT:

CONCATENATE("SELECT '", B3, "', getdate(), ", A3, " UNION ALL")

Useful?

I think it is. Even if you usually use SSIS to do this type of work, I bet there are still times when using Excel to cook up some UPDATE statements in a hurry is both quicker and easier than writing out a package to do it, or maybe even than the Import wizard.

I use this on a regular basis and believe it’s one of the more useful non-BI or Expense Report things you can do with Excel 😉

PowerShell While Not an Admin: Well, That Wasn’t Any Fun!

Long story short (this backstory is the only thing that’s short about this post), I’m ridiculously behind in my RSS reader, but I’m working through it, as I refuse to Mark as Read en masse. One of the articles that I decided to carefully read through and follow along with is Thomas LaRock’s (blog | @SQLRockstar) SQL University post from Jan 19 (SQL University info).

Since one of my “goals” for this year is to learn PowerShell (if I stay a DBA), I followed the steps to get a feel for things (I’ve used PoSh before, but pretty much only in a copy-and-paste sense), and although the first couple commands worked as advertised, they were accompanied by some errors:

PowerShell Warnings

At first, it seemed a little odd to be getting Access Denied errors while all of the commands I was running were working. The “SQL Server Service” part and the mention of WMI made it sound like the warnings were about a Windows-level function. The commands I was running only dealt with SQL Server itself, which could explain why they were working.

Alright, let’s back up and go over the environment a little bit.

Principle of Least Privilege

The SQL Server I was connecting to is on a different machine on the network, not a local instance. Although my Windows account has SA rights on SQL, it doesn’t have any rights on the box itself (i.e., it’s not a local admin). Even though this is just equipment in the house, I have as much as I can set up in accordance with the Principle of Least Privilege. I think this is both a good idea and a way to give us a more realistic environment for testing and experimentation.

(This whole exercise is actually a good example of why I have things set up this way: If I had gone to do anything with PoSh on, for example, our Accounting server at work, I would have run into this same situation, as the DBAs don’t have Local Admin on those servers. Instead of being surprised by this and having to run down what was going on and how to fix it, I’ve already found it, figured out the problem, and now know what needs to be requested at the office.)

Since my account doesn’t have Admin access to the server itself, it makes sense that I didn’t have rights to the system’s WMI service, especially remotely. Unfortunately, at the time, I hadn’t thought through the different parts of the system that PoSh was trying to touch, so I just went searching based on the warning message.

What do WMI & DCOM have to do with PowerShell?

WMI (Windows Management Instrumentation) is, put pretty simply, a way to get to administrative-type information about a machine. There are lots of ways to interact with it, and at the end of the day, it can be summed up as a way to programmatically interact with a Windows system. An example of something it can do is, say, return the status of a Windows Service…

DCOM (I had to look this chunk of stuff up) is a communication protocol used for inter-server communications by MS technologies. That’s a pretty stripped-down definition, too, but that’s the general idea.

WMI uses DCOM to handle the calls it makes to a remote machine. When it’s started up from SSMS, PowerShell seems interested in checking the status of the SQL Server service (I tried to find documentation of this, but couldn’t find anything), so it makes a remote WMI call to do that. Out of the box, Windows Server (2008, at least) doesn’t grant that right to non-Administrators, which was causing the warning message I was getting.

Granting Remote WMI/DCOM Access

Interestingly, while searching for a resolution to this, I didn’t find anything that related to Powershell at all, let alone SQL Server specifically. For some reason I got distracted by this and wound up flopping around on a bunch of unhelpful sites and coming around the long way to a solution. However, I think this did help me notice something that I didn’t find mentioned anywhere. I’ll get to that in a second.

Here’s a rundown of the security settings I changed to get this straightened out. I’m not sure yet how manageable this is, but I’m going to work on that next. I’m fairly certain that everything done here can be set via GPO, so it should be possible to distribute these changes to multiple servers without completely wanting to stab yourself in the face (and without having to talk your Sysadmin into doing it all by hand).

One of the first pages I found with useful information was this page from i-programmer.info. Reading that and this MSDN article at basically the same time got things rolling in the right direction.

Step 1: Adjust DCOM security settings

Fire up Component Services; the easiest way to do this is Start | Run (Windows key + R), then type in/run DCOMCNFG. Navigate to this path: Component Services\Computers\My Computer\DCOM Config. Find Windows Management and Instrumentation, right-click on it, and choose Properties. On the Security tab, set the first two sections, “Launch and Activation Permissions” and “Access Permissions,” to “Customize.” Click the Edit button in both of these sections and add the AD group that contains the users you want to grant access to (always grant to groups, not individual users). I checked all options in both of these places, as I want to allow remote access to DCOM.

Step 2: Add users to the DCOM Users group

The MSDN article mentioned a couple paragraphs up covers granting users rights for various remote DCOM functions. While looking to set this up, I noticed something that no other article I read had mentioned (this is that bit I brought up earlier), but that I think is important to cover, as it is a better way to deal with some of the security settings needed.

To see this, go back to Component Services, scroll back up to “My Computer” at the top and go to Properties again. On the “COM Security” tab, you will find two different sets of permissions that can be controlled. Click on one of the “Edit Limits” buttons (it doesn’t matter which one). The dialog will look something like this:

DCOM Users Group?

Distributed COM Users? ORLY?

A built-in security group for DCOM users? What is this? Well, it turns out it’s exactly what it sounds like: a built-in group that is already configured to grant its members the rights needed for remote DCOM access.

That group makes this part of the setup really easy: Simply add the group containing, say, the DBAs, to this built-in group, and they magically have the needed remote DCOM rights. There’s no need to mess with the permissions on the COM Security tab talked about in this section.

Once I got to this point, I found that running PS commands produced a different message:

Different error after DCOM security set

The same root problem exists (“Could not obtain SQL Server Service information”), but the Access Denied part is gone, now replaced by an invalid WMI namespace message. That was much easier to troubleshoot, as it was only a WMI problem at that point. My assumption was that it was security-related, and that was the case; but since I hadn’t dealt with the details of WMI security before, there’s a little quirk that wound up making me burn a lot more time on this than I needed to.

Step 3: Adjust WMI security

The WMI Security settings are easier to get to than the DCOM stuff (at least it seems that way to me). In Server Manager, under Configuration, right below Services, is “WMI Control.” Right-click | Properties, Security Tab, and there ya go.

Now, here’s where some decisions need to be made. Basically, there are two options:

  • Grant access to everything
  • Only grant access to the SQL Server stuff

Of course the easy way is to grant everything, but if these things are being changed in the first place because a DBA doesn’t have Local Admin on the server, then granting everything probably won’t line up with the security policies in use.

Instead, the best thing is probably to set security on the SQL Server-specific namespace. The path to this namespace is Root\Microsoft\SqlServer. Click on that folder, then hit the Security button; a standard Windows security settings dialog will come up. Add the desired user/group (again, probably the DBA group) and grant Execute Methods and Remote Enable on top of the defaults, as that should be the minimum needed. Don’t hit OK yet, however, as this is where the aforementioned quirk comes in.

By default, when you add a user & set rights in this WMI security dialog, it adds only for the current namespace—no subnamespaces are included. I got horribly burned by this because I’m used to NTFS permissions defaulting to “this folder & subfolders” when you add new ACLs.

WMI Security Advanced

Advanced WMI Security dialog

To fix this, click Advanced, then select the group that was just added, and click the Edit button. That will open the dialog shown above, at which point the listbox can be changed to “This namespace and subnamespaces.”

With all of that set, running SQL Server PowerShell commands should run successfully & not report any Warnings.

Firewall rules & other notes

I only addressed security-related settings here. For all of this to work, some firewall rules may also need to be adjusted to allow DCOM traffic, in addition to what is needed for PoSh access.

Putting my sysadmin hat on for a little bit, if a DBA came to me asking to change these settings, my response could be, “well that’s fine and all, but you really want me to make these changes on all dozen of our DB servers??” This is where Group Policies come in. I haven’t worked through the GPO settings needed to deploy these settings, but as I mentioned before, I’m pretty sure all of this can be set through them. I will work through that process soon & likewise document it as I go. Stay tuned for that post.

A slightly funny bit about this entire thing: thanks to the amount of time it took me to get this post together, I still haven’t actually done anything related to PowerShell! Whoops.

Good luck with PowerShell, everyone… I’ll get caught up eventually.

T-SQL Tuesday #15: Automation in SQL Server

Automation: every lazy DBA’s best friend; in some situations, a ticket to sanity.

T-SQL Tuesday #15

T-SQL Tuesday #15: Automation is the way to a DBA's heart

This month’s T-SQL Tuesday is brought to you by Pat Wright (blog | @SqlAsylum). The 15th topic for the monthly blog party is, as has been mentioned, Automation in SQL Server. I’m pretty excited to read this month’s posts to see what kind of crazy things everyone does. Most of these posts are probably going to be big on example scripts and code samples, as one would expect for such a topic. This one–for better or worse–won’t.

I have to admit that I haven’t done a lot of from-scratch automation in my day. Lots of things on the list at work right now, but implementation is still pending. As a result, I was afraid that I wouldn’t have anything to talk about this month, but I thought of a goofy direction that I can take this in.

Like many of us who do it for real now, I started my path to DBA-ness (uh… that’s unfortunate) as an Accidental DBA. Since I didn’t know any better at the time, I did a lot (OK, pretty much all) of administrative DBA tasks with the UI. Need to back up a database? Right-click | Tasks | Backup! Want to create an index? Fire up DTA! Use the GUI to pick the columns. Need to do that for more than one DB? Guess you’re going to be there for a while.

This technique obviously gets the job done (for the most part), but there is a lot of room for improvement.

“Automation” doesn’t have to be fancy

If you’re using the GUI for a lot of tasks, there’s an easy & cheap way to “automate” a lot of what you’re doing. Simply: Script stuff out. One doesn’t need to be a master of T-SQL syntax to start doing this, either. With SQL 2005 and above, making the transition from GUI to scripted tasks is pretty easy.

SSMS Backup Dialog's Script menu

The Backup Database dialog's "Script" menu

Just about every dialog box in SSMS has a “Script” button at the top. This control will script out whatever changes have been made in the dialog box. For example, if you bring up the Backup Database dialog, fill out the options & destination file location as desired, and then use the Script button to output that to a new Query window, you will wind up with a complete, functional BACKUP DATABASE command with all of the same settings that were selected in the GUI window. Mash F5 on that puppy and you’ll have your backup, just like you wanted it.
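
For illustration, the scripted output of a plain full backup looks something like this (database name, path, and options reflect whatever was chosen in the dialog; this sketch uses placeholders):

BACKUP DATABASE [AdventureWorks]
TO DISK = N'D:\Backups\AdventureWorks.bak'
WITH NOFORMAT, NOINIT,
    NAME = N'AdventureWorks-Full Database Backup',
    SKIP, NOREWIND, NOUNLOAD, STATS = 10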

How does this classify as automation?

Spirit of the law, folks, spirit of the law 😉

Alright, I admit I might be stretching it a little bit here. I also know that just because two techniques solve the same set of problems doesn’t mean they can be classified the same way.

That said, consider some of the reasons that you automate big tasks:

  • Ease of consistent repeatability
  • Removal of the human element
  • Speed
  • Autonomy if you’re out of the office and someone is filling in

These same things make running T-SQL scripts instead of using the GUI for tasks a better idea:

  • As long as you don’t change the script before you re-run it, the same thing will happen repeatedly (unless, of course, the script does something like add a particular column to a table a second time; see the sketch after this list). This is especially important when doing things such as migrating a new table through Dev, Test, Stage, and Production over the lifecycle of a project.
  • Setting options in a GUI window is prone to mis-clicks or flat-out forgetting to change a setting from the default.
  • The script is ready to go—running the action is as fast as opening the script file, checking it to make sure it is the one you’re expecting it to be, and mashing F5. This makes implementing the change a fast process, instead of having to click a bunch of radio buttons/checkboxes/whathaveyou, then verifying all of the settings before hitting OK.
  • If you’re out of the office but something still needs to be deployed, it’s easy for the fill-in DBA (the boss?) to grab the scripts that have been prepared and run them. This is easier than walking them through a list of checkboxes to check on a UI screen, or worse, having them work from memory if the correct settings haven’t been written down.
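
To make the repeatability point concrete, here’s a minimal guard of the sort that keeps a script safe to re-run (the table and column names are hypothetical):

IF NOT EXISTS (SELECT * FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Orders')
                 AND name = N'Note')
BEGIN
    -- An earlier run hasn't added the column yet, so add it now.
    ALTER TABLE dbo.Orders ADD Note varchar(100) NULL;
END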

Considering how I grew into a DBA, making the leap from pointing and clicking for just about everything to typing out ALTER TABLE instead took some work. In the end, scripting everything is better in pretty much every conceivable way, even if it is hard at first.

If you’re a little GUI-heavy still and like the idea of automating the work that you do, letting go of the UI and embracing the big, blank T-SQL canvas is Step 1. The effort will be worth it, and you’ll feel like you’ve really automated tasks.

File System Rights Needed to Attach a Downgrade DB

This falls into the category of things I probably should have already known, but for some number of reasons (not all my fault!), I didn’t.

Last Thursday night was an honorary Friday night at The Hideaway, since we took last Friday off (I got screwed at work out of a day off for New Year’s this year), so I was trying to catch up on some blog posts in my reader. One of them included some code I wanted to run, but I found out that I didn’t have AdventureWorks attached to SQL after the last rebuild of the main server at home. The first step was going to be fixing that, so I moved the files from their old location to my new standard folder and set off to Attach…

Before this story gets too long…

The DB wouldn’t attach to the SQL 2008 instance. I was using my old SQL 2005 version of AdventureWorks because it was already on the machine and, at least for now, I want to keep it around. SQL Server was complaining about the files or the DB being set to Read-Only, and it can’t run the version upgrade on a Read-Only DB. I needed to fix either the DB or the file system permissions.
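
(For reference, the GUI attach I was attempting is equivalent to roughly this T-SQL; the file paths here are placeholders for my local data folder:)

CREATE DATABASE AdventureWorks
ON (FILENAME = N'D:\SQLData\AdventureWorks_Data.mdf'),
   (FILENAME = N'D:\SQLData\AdventureWorks_Log.ldf')
FOR ATTACH;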

The DB wasn’t read-only when I killed the server before the rebuild, so that wasn’t it. Had to be NTFS permissions.

Checked those, and the group containing the SQL Service Account had Modify permissions on the folder where the files were. Checked this a few times to make sure I wasn’t crazy. I tried to search out some help on this, but I was having a hard time getting the search terms right to find anything useful. I finally wound up breaking out Process Monitor to help figure out what was going on.

Turns out, SQL Server was using (impersonating?) the Windows account I had connected to the instance with when it was reading the files. That account doesn’t have write access to the folder(s) in question, so any writes it was trying to do were failing. That’ll do it.

I don’t understand why it does this. Nothing is mentioned in BOL about this, and some real quick Googling didn’t bring up anything, either. Since I pretty much work only with SQL 2000 and SQL auth all day right now, I don’t know if this is new in 2008. At my last job where I had 2005, whenever I would do an operation like this, my Windows account would have been Local Admin on the server, so I wouldn’t have run into it.

I did some quick testing, and although SQL Server of course still impersonates the Windows Auth user when you attach a DB that doesn’t need upgrading, as long as the account you’re logged on with has read rights, you’ll still be in business.

Workaround(s)

Obviously, giving write rights to the Windows account you’re logging on to SQL with will fix this, but that doesn’t strike me as a good idea if you don’t have those rights in the first place. I mean, you don’t have those rights for a reason, right (principle of least privilege, etc.)? But if it’s a Windows Auth-only situation, that’s the only way to do it with the account in question.

Another way around it is to use SQL Auth. This is what I wound up doing, mainly because I wanted to test whether that works; a sketch of the setup is below. As mentioned, if the instance isn’t in Mixed Mode Auth, this isn’t an option. Also, if creating a new account for this purpose isn’t allowed for whatever reason, then this option doesn’t help you, either.
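
If a throwaway SQL login is allowed, the setup is quick; here’s a hedged sketch (the login name and password are made up, and dbcreator covers the CREATE DATABASE permission the attach needs):

CREATE LOGIN AttachTemp WITH PASSWORD = N'SomeStrongPassword!1';
EXEC sp_addsrvrolemember @loginame = N'AttachTemp', @rolename = N'dbcreator';
-- Then connect as AttachTemp and run the CREATE DATABASE ... FOR ATTACH from above.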

I’m sure this is old news to most everyone else, but it caught me by surprise. Lesson learned!