PowerShell While Not an Admin: Well, That Wasn’t Any Fun!

Long story short (this backstory is the only thing that’s short about this post), I’m ridiculously behind in my RSS reader, but I’m working through it, as I refuse to Mark as Read en masse. One of the articles that I decided to carefully read through and follow along with is Thomas LaRock’s (blog | @SQLRockstar) SQL University post from Jan 19 (SQL University info).

Since one of my “goals” for this year is to learn PowerShell (if I stay a DBA), I followed the steps to get a feel for things (I’ve used PoSh before, but pretty much only in a copy-and-paste sense), and although the first couple commands worked as advertised, they were accompanied by some errors:

PowerShell Warnings

At first, it seemed a little odd to be getting Access Denied errors when all of the commands I was running were working. The "SQL Server Service" part and the mention of WMI made it sound like the warnings were about a Windows-level function, while the commands I was running only dealt with SQL Server itself, which could explain why they were still working.
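For reference, the commands I was running were just basic navigation of the SQL Server provider, something along these lines (the server name is a placeholder, and these aren't necessarily the exact commands from Tom's post):

    # Started from SSMS (right-click a node | Start PowerShell), which drops you into the SQLSERVER: provider
    cd SQLSERVER:\SQL\SQLBOX01\DEFAULT\Databases
    dir | Select-Object Name, RecoveryModel, Size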

Alright, let’s back up and go over the environment a little bit.

Principle of Least Privilege

The SQL Server I was connecting to is on a different machine on the network, not a local instance. Although my Windows account has SA rights in SQL, it doesn’t have any rights on the box itself (i.e., it’s not a local admin). Even though this is just equipment in the house, I have as much as I can set up in accordance with the Principle of Least Privilege. I think this is both a good idea and a way to give us a more realistic environment for testing and experimentation.

(This whole exercise is actually a good example of why I have things set up this way: if I had gone to do anything with PoSh on, say, our Accounting server at work, I would have run into this same situation, as the DBAs don’t have Local Admin on those servers. Instead of being surprised by it and having to run down what was going on and how to fix it, I’ve already found it, figured out the problem, and now know what would need to be requested at the office.)

Since my account doesn’t have Admin access to the server itself, it makes sense that I didn’t have rights to the system’s WMI service, especially remotely. Unfortunately, at the time, I hadn’t thought through the different parts of the system that PoSh was trying to touch, so I went searching based mostly on the warning message.

What do WMI & DCOM have to do with Powershell?

WMI (Windows Management Instrumentation) is, put pretty simply, a way to get to administrative-type information about a machine. There are lots of ways to interact with it, and at the end of the day, it can be summed up as a way to programmatically interact with a Windows system. An example of something it can do is, say, return the status of a Windows Service…
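For example, here’s roughly what that looks like from PoSh with a local WMI query against the Win32_Service class (the service name assumes a default instance):

    # Ask WMI for the state of the SQL Server service on the local machine
    Get-WmiObject -Class Win32_Service -Filter "Name = 'MSSQLSERVER'" |
        Select-Object Name, State, StartMode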

DCOM (I had to look this chunk of stuff up) is a communication protocol that Microsoft technologies use for communication between machines. That’s a pretty stripped-down definition, too, but that’s the general idea.

WMI uses DCOM to handle the calls it makes to a remote machine. When it’s started from SSMS, PowerShell seems interested in checking the status of the SQL Server service (I tried to find documentation of this, but couldn’t find anything), so it makes a remote WMI call to do that. Out of the box, Windows Server (2008, at least) doesn’t grant that right to non-Administrators, which was causing the warning message I was getting.
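I don’t know exactly what call sqlps makes under the covers, but a comparable remote query would look something like this (the server name is a placeholder, and ComputerManagement10 is the SQL 2008 flavor of the namespace). Run as a non-admin, this is the kind of thing that produces the warning:

    # A remote WMI query against the SQL Server provider namespace; this is where
    # the Access Denied warnings come from when the caller isn't a local admin
    Get-WmiObject -ComputerName "SQLBOX01" `
        -Namespace "root\Microsoft\SqlServer\ComputerManagement10" `
        -Class SqlService |
        Select-Object ServiceName, State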

Granting Remote WMI/DCOM Access

Interestingly, while searching for a resolution to this, I didn’t find anything that related to Powershell at all, let alone SQL Server specifically. For some reason I got distracted by this and wound up flopping around on a bunch of unhelpful sites and coming around the long way to a solution. However, I think this did help me notice something that I didn’t find mentioned anywhere. I’ll get to that in a second.

Here’s a rundown of the security settings I changed to get this straightened out. I’m not sure how manageable this is, but I’m going to work on that next. I’m fairly certain that everything I do here can be set via GPO, so it should be possible to distribute these changes to multiple servers without completely wanting to stab yourself in the face (or having to talk your Sysadmin into doing it all by hand).

One of the first pages I found with useful information was this page from i-programmer.info. Reading that and this MSDN article at basically the same time got things rolling in the right direction.

Step 1: Adjust DCOM security settings

Fire up Component Services; the easiest way to do this is Start | Run (Win+R) and run DCOMCNFG. Navigate to this path: Component Services\Computers\My Computer\DCOM Config. Find Windows Management and Instrumentation, right-click on it, and choose Properties. On the Security tab, set the first two sections, “Launch and Activation Permissions” and “Access Permissions,” to “Customize.” Click the Edit button in both of these sections and add the AD group that contains the users you want to grant access to (always grant to groups, not individual users). I checked all options in both of these places, as I want to allow remote access to DCOM.
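If you want to double-check that you’re pointed at the right DCOM application, a purely read-only way to list it is the Win32_DCOMApplication class, run locally on the server (the name match is my assumption of how the app is registered):

    # List the DCOM application we're about to modify, along with its AppID
    Get-WmiObject -Class Win32_DCOMApplication |
        Where-Object { $_.Name -like "*Windows Management*" } |
        Select-Object Name, AppID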

Step 2: Add users to the DCOM Users group

The MSDN article mentioned a couple of paragraphs up references granting users rights for various remote DCOM functions. While looking to set this up, I noticed something that no other article I read had mentioned (this is the bit I mentioned earlier), but that I think is important to cover, as it is a better way to deal with some of the security settings needed.

To see this, go back to Component Services, scroll back up to “My Computer” at the top and go to Properties again. On the “COM Security” tab, you will find two different sets of permissions that can be controlled. Click on one of the “Edit Limits” buttons (it doesn’t matter which one). The dialog will look something like this:

DCOM Users Group?

Distributed COM Users? ORLY?

A built-in security group for DCOM users? What is this? Well, it turns out it’s exactly what it sounds like: a built-in group that is already configured to grant its members the rights needed for remote DCOM access.

That group makes this part of the setup really easy: Simply add the group containing, say, the DBAs, to this built-in group, and they magically have the needed remote DCOM rights. There’s no need to mess with the permissions on the COM Security tab talked about in this section.
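Adding the group can be done through Computer Management, or from an elevated prompt on the server with something like this (the domain group name is obviously a placeholder):

    # Add the DBA AD group to the built-in Distributed COM Users local group
    net localgroup "Distributed COM Users" "MYDOMAIN\DBAs" /add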

Once I got to this point, I found that when I ran PoSh commands, I was getting a different message:

Different Error after DCOM Security set

The same root problem exists (“Could not obtain SQL Server Service information”), but the Access Denied part is gone, replaced by an invalid WMI namespace message. That was much easier to troubleshoot, as it was now only a WMI problem. My assumption was that it was security-related, and that turned out to be the case, but since I hadn’t dealt with the details of WMI security before, there’s a little quirk that wound up making me burn a lot more time on this than I needed to.

Step 3: Adjust WMI security

The WMI Security settings are easier to get to than the DCOM stuff (at least it seems that way to me). In Server Manager, under Configuration, right below Services, is “WMI Control.” Right-click | Properties, Security Tab, and there ya go.

Now, here’s where some decisions need to be made. Basically, there are two options:

  1. Grant access to everything.
  2. Only grant access to the SQL Server stuff.

Of course, the easy way is to grant everything, but if these settings are being changed in the first place because a DBA doesn’t have Local Admin on the server, then that probably won’t line up with the security policies in use.

Instead, the best thing is probably to set security on the SQL Server-specific namespace. The path to this namespace is Root\Microsoft\SqlServer. Click on that folder, then hit the Security button. A standard Windows security settings dialog will come up. Add the desired user/group (again, probably the DBA group) here and grant Execute Methods and Remote Enable on top of the defaults, as that should be the minimum needed. Don’t hit OK yet, however, as this is where the aforementioned quirk comes in.

By default, when you add a user and set rights in this WMI security dialog, the rights apply only to the current namespace; no sub-namespaces are included. I got horribly burned by this because I’m used to NTFS permissions defaulting to “this folder & subfolders” when you add new ACLs.

WMI Security Advanced

Advanced WMI Security dialog

To fix this, click Advanced, then select the group that was just added, and click the Edit button. That will open the dialog shown above, at which point the listbox can be changed to “This namespace and subnamespaces.”

With all of that set, SQL Server PowerShell commands should run successfully and not report any warnings.
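A quick way to test this from the workstation, using the non-admin account, is to hit the namespace directly. This is just a sketch: the server name is made up, and ComputerManagement10 is a subnamespace of Root\Microsoft\SqlServer, which is exactly why the “and subnamespaces” setting above matters:

    # If the DCOM and WMI grants took, this returns SQL service info instead of
    # Access Denied or an invalid namespace error
    Get-WmiObject -ComputerName "SQLBOX01" `
        -Namespace "root\Microsoft\SqlServer\ComputerManagement10" `
        -Class SqlService |
        Select-Object ServiceName, DisplayName, State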

Firewall rules & other notes

I only addressed security-related settings here. In order for all of this to work, some firewall rules may also need to be adjusted to allow DCOM traffic, in addition to whatever is needed for PoSh access itself.

Putting my sysadmin hat on for a little bit, if a DBA came to me asking to change these settings, my response could be, “well that’s fine and all, but you really want me to make these changes on all dozen of our DB servers??” This is where Group Policies come in. I haven’t worked through the GPO settings needed to deploy these settings, but as I mentioned before, I’m pretty sure all of this can be set through them. I will work through that process soon & likewise document it as I go. Stay tuned for that post.

A slightly funny bit about this entire thing: due to the amount of time it took me to get this post together, I still haven’t actually done anything related to PowerShell! Whoops.

Good luck with PowerShell, everyone… I’ll get caught up eventually.

File System Rights Needed to Attach a Downgrade DB

This falls into the category of things I probably should have already known, but for some number of reasons (not all my fault!), I didn’t.

Last Thursday night was an honorary Friday night at The Hideaway, since we took last Friday off (I got screwed at work out of a day off for New Year’s this year), so I was trying to catch up on some blog posts in my reader. One of them included some code I wanted to run, but I found out that I didn’t have AdventureWorks attached to SQL after the last rebuild of the main server at home. The first step was going to be fixing that, so I moved the files from their old location to my new standard folder and set off to Attach…

Before this story gets too long…

The DB wouldn’t attach to the SQL 2008 instance. I was using my old SQL 2005 version of AdventureWorks, because it was already on the machine and, at least for now, I want to keep it around. SQL was complaining about the files or the DB being set to Read-Only, and it can’t do an upgrade on a Read-Only DB. I needed to fix either the DB or the file system permissions.

The DB wasn’t read-only when I killed the server before the rebuild, so that wasn’t it. Had to be NTFS permissions.

I checked those, and the group that the SQL Service Account is in had Modify permissions on the folder where the files were. I checked this a few times to make sure I wasn’t crazy. I tried to search out some help on this, but I was having a hard time getting the search terms right to find anything useful. I finally wound up breaking out Process Monitor to help figure out what was going on.

Turns out, SQL Server was using (impersonating?) the Windows account I had connected to the instance with when it was reading the files. That account doesn’t have write access to the folder(s) in question, so any writes it was trying to do were failing. That’ll do it.

I don’t understand why it does this. Nothing is mentioned in BOL about this, and some real quick Googling didn’t bring up anything, either. Since I pretty much work only with SQL 2000 and SQL auth all day right now, I don’t know if this is new in 2008. At my last job where I had 2005, whenever I would do an operation like this, my Windows account would have been Local Admin on the server, so I wouldn’t have run into it.

I did some quick testing and although it of course still impersonates the Windows Auth user when you attach a DB that doesn’t need upgrading, as long as the user you’re logged on with has read rights, you’ll still be in business.

Workaround(s)

Obviously giving write rights to the Windows account you’re logging on to SQL with will fix this, but that doesn’t strike me as a good idea if you don’t have those rights in the first place. I mean, you don’t have those rights for a reason, right (principle of least privilege, etc)? But, if it’s a Windows Auth-only situation, that’s the only way to do it with the account in question.

Another way around it is to use SQL Auth. This is what I wound up doing, mainly because I wanted to see if it would work. As mentioned, if the instance isn’t in Mixed Mode Auth, this isn’t an option. Also, if for whatever reason creating a new account for this purpose isn’t allowed, then this option doesn’t help you, either.
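Here’s a sketch of the SQL Auth route using Invoke-Sqlcmd from sqlps. The login, password, and file paths are all placeholders; the point is that with a SQL login, the file access happens as the SQL Server service account instead of my Windows account:

    # Attach (and upgrade) the DB while connected with a SQL login rather than Windows Auth
    $attachSql = "CREATE DATABASE [AdventureWorks]
        ON (FILENAME = N'D:\SQLData\AdventureWorks_Data.mdf'),
           (FILENAME = N'D:\SQLLogs\AdventureWorks_Log.ldf')
        FOR ATTACH;"

    Invoke-Sqlcmd -ServerInstance "SQLBOX01" -Username "AttachUser" -Password "P@ssw0rd!" -Query $attachSql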

I’m sure this is old news to most everyone else, but it caught me by surprise. Lesson learned!

Maintenance Plans: They don’t like it when DBAs leave

Discovered some…interesting…information about Maintenance Plans a few weeks ago when we had a DBA leave. I don’t think it warrants being called a “bug”, but it is behavior that I don’t think is ideal, and it definitely can cause issues for us and others. The good news here is that I did get a workaround nailed down (thanks to the help of Random Internet Guy), although we haven’t decided whether or not we’re going to use it.

How this all started

Recently, we had one of our DBAs leave. (Incidentally, this is the first time ever that I’ve had this happen to me and also the first time one has left this company.) After he left at the end of his last day, I pulled all of his access to the Production systems. We knew that any Agent jobs that he happened to own would start failing, and sure enough, there were a few failures that evening and over the course of the weekend.

One of these failures turned into a little more of an involved situation: The Maintenance Plan on the only production SQL 2008 box he set up failed. With us being used to SQL 2000, this was expected to be an easy fix and life would go on. Except, it wasn’t.

Toto, I’ve a feeling this isn’t SQL 2000 any more

The first attempt to fix this was to change the owner of the SQL Agent job to ‘sa’. He’s our standard job owner, for consistency and because that makes the job impervious to people leaving. The job still failed, although the error was different this time. Obviously the Agent job owner had some role in all of this, but it wasn’t the whole story, as things weren’t fixed.
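For reference, the owner change itself is a one-liner; the job name here is a placeholder (Maintenance Plan jobs usually end up named something like ‘PlanName.Subplan_1’):

    # Change the Agent job's owner to sa
    Invoke-Sqlcmd -ServerInstance "SQLBOX01" -Query "EXEC msdb.dbo.sp_update_job @job_name = N'MaintenancePlan.Subplan_1', @owner_login_name = N'sa';"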

After changing the Job owner, this is the error that occurred:

Executed as user: <SQL Service acct>, [blah blah blah]
Started: 11:25:38 AM
Error: 2010-10-06 11:25:39.76  Code: 0xC002F210  Source: <not sure if this GUID is secure or not> Execute SQL Task
Description: Executing the query "DECLARE @Guid UNIQUEIDENTIFIER  EXECUTE msdb..sp…" failed with the following error: "The EXECUTE permission was denied on the object 'sp_maintplan_open_logentry', database 'msdb', schema 'dbo'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  End Error
Warning: 2010-10-06 11:25:39.79  Code: 0x80019002  Source: OnPreExecute
Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED.  [blah blah blah]
Error: 2010-10-06 11:25:40.58  Code: 0xC0024104  Source: Check Database Integrity Task
Description: The Execute method on the task returned error code 0x80131501 (An exception occurred while executing a Transact-SQL statement or batch.). The Execute method must succeed, and indicate the result using an "out" parameter.  End Error
Error: 2010-10-06 11:25:40.64  Code: 0xC0024104  Source: <another GUID>
Description: The Execute method on the task returned error code 0x80131501 (An exception occurred while executing a Transact-SQL statement or batch.). The Execute method must succeed, and indicate the result using an "out" parameter.  End Error
Warning: 2010-10-06 11:25:40.64  Code: 0x80019002  Source: OnPostExecute
Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. [blah blah blah]  End Warning
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 11:25:38 AM  Finished: 11:25:40 AM  Elapsed: 2.64 seconds.
The package execution failed.  The step failed.

A Permissions error in MSDB? Huh?

I dug a little, and the short of it is that the Maintenance Plan’s connection string was still set to the departed Admin’s sysadmin account (a SQL login). All of the work that the MP was trying to do, it was still attempting as the other DBA. That, of course, wasn’t going to get very far.

Maintenance Plans have connection strings?

This seems like a dumb question to ask, but it’s one of those things that I had glossed over in my head and not actually thought about. The answer: sure they do! MPs are basically SSIS packages these days, and of course those have connection strings/settings. Seems like a “DURRR” now, but hindsight, yadda, yadda.

This connection string is controlled by the “Manage Connections” button on the Maintenance Plan toolbar. If you never mess with this when you create an MP, it uses an auto-generated connection to the local server, using whatever name you used to connect to it (there’s a whole other topic right there; someday I’ll talk about some quirks with Aliases I’ve run into). This auto-gen’d connection also uses the same auth credentials you are currently using. If you only use Windows Auth, you are stuck with that; if the instance is in Mixed Mode, a SQL auth account is an option, and you can set it to another account. Of course, you would need to know that account’s password.
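If you want to see what a plan was actually saved with, the package itself lives in msdb and can be inspected. This is just a read-only peek, the plan name is a placeholder, and on SQL 2005 the table is sysdtspackages90 instead:

    # Pull the Maintenance Plan's package XML out of msdb so the connection manager
    # (server name and user) it was saved with can be eyeballed
    Invoke-Sqlcmd -ServerInstance "SQLBOX01" -MaxCharLength 1000000 -Query "
    SELECT name,
           CAST(CAST(packagedata AS VARBINARY(MAX)) AS XML) AS package_xml
    FROM   msdb.dbo.sysssispackages
    WHERE  name = N'MaintenancePlan';"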

To fix our MP’s connection string, I could change it to my own credentials, but that of course doesn’t actually fix anything, and will leave an even bigger mess for whoever’s left if/when I leave or responsibilities change. I can’t set it to Windows Auth, because when you select that, it validates the auth right then and there. Since we don’t have Windows Auth SA accounts, that won’t work for us: the AD account that I’m logged on with doesn’t have enough rights. Something else needed to be done.

At this point I realized that this is a fair-sized problem and we can’t be the first shop to run into this; I mean, SQL 2005/2008 have been out for a while, and I’m sure lots of DBAs have left shops that heavily utilize MPs. Time to ask the Senior DBA.

Going nowhere fast

For as common as I expected this to be, I wasn’t finding much helpful information. There were some references to this error message, and even someone in the same situation I was in, but most of what I found were dead-end threads.

At one point, I thought for sure that @SQLSarg had it. This blog post contains the revelation that MPs have Owners! His main point in changing the MP owner is that it changes the owner that the associated Agent Job(s) get set to when you save the MP. Even though I had already changed the job owner to a valid account, I hoped that the MP’s owner had something to do with the credentials used in the Plan’s connection string.

Of course, it didn’t. No behavior change.

Aside: Even though it didn’t help with what I was trying to fix at the time, I believe that changing an MP’s owner to a shared or central service account is a good idea. Every time an MP gets modified, the Agent Job’s owner is changed to the same account as the Plan’s Owner. This might not even be you, and when that person leaves the company, you are guaranteed to have execution problems. There’s a Connect item open asking to make it easier to change the owner, so if you like Maintenance Plans and don’t hate yourself, it’s probably a good idea to log in over there and mash the green arrowhead.

The more I dug, the more dead-end threads I read, the more almost-but-not-quite blog posts I ran across, the more I became resigned to one simple fact:

It is impossible to set the Connection String Credentials in a Maintenance Plan to an account that you don’t know the password for.

As that started to sink in, I asked #SQLHelp if that were, in fact, true. I got a couple responses that yes, that was the case (I would be specific, but this was a number of weeks ago, and since Twitter sorta sucks for finding things like that and I can’t remember who it was, we’re out of luck). Although not the news I was looking for, I was glad to know that the deal was pretty much sealed—we were going to need to figure out something else to do.

One last gasp

Since I don’t feel like I’m doing everything I can while troubleshooting a problem if I don’t grab for at least one straw, I decided to investigate something that was described in one of the dead-end threads that I had found while searching.

This thread on eggheadcafe is a conversation between “howardwnospamnospa”* and Aaron Bertrand, who we all know and love. This one “howardw” was in a pretty similar boat to what I was in, and was really close to getting it to work. He was only thwarted because of an apparent quirk when an Instance is in Windows Auth-only mode.

In the last post in the thread, he reports that when he assigns the Local System account (“NT AUTHORITY\SYSTEM”) as owner of the MP, and sets the connection properties of the plan to use Windows Auth, the plan runs. To attempt to duplicate this, I started with an existing MP on a Dev instance that I had set up (therefore my SQL auth account was the MP owner and the user listed in the Connection Properties) and proceeded with the following steps:

  1. Using the statements in SQLSarg’s post (there’s a sketch of what that looks like after this list), I changed the owner of the MP to Local System
  2. I added my Domain account to the Instance with SA rights
  3. I opened the MP in question with SSMS, switched its connection to use Windows Auth (while logged in with the Domain account used in Step 2), and saved it.
  4. Checked the MP’s auto-gen’d Agent Job to confirm that its owner was NT AUTHORITY\SYSTEM.
  5. Using a connection with my Domain account, I pulled my SQL account’s SA rights. This would previously make the MP sub-plan fail.
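For completeness, here’s roughly what the owner change in step 1 looks like. This is the general shape of the statements from SQLSarg’s post rather than a verbatim copy; the plan name is a placeholder, and on SQL 2005 the table is sysdtspackages90:

    # Point the Maintenance Plan's owner at Local System by updating its row in msdb
    Invoke-Sqlcmd -ServerInstance "SQLBOX01" -Query "
    UPDATE msdb.dbo.sysssispackages
    SET    ownersid = SUSER_SID(N'NT AUTHORITY\SYSTEM')
    WHERE  name = N'MaintenancePlan';"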

I ran the job and, holy cow, it worked! 😀

That’s good, right?

Ehhhh, maybe. Setting the owner to Local System seems pretty dirty. I haven’t thought of a way that this would be a security problem, but it still feels like a stupid hoop to jump through. For us, it has the extra hoop of having to add our Domain accounts to the instances every time we want to set up or change connection info on an MP (once the Windows Auth is set, it doesn’t re-validate when you save the plan, so you only have to do it that one time), then take them back out when we’re done.

Those bits aside, this workaround (if you will let me get by without calling it a “hack”) does allow setting up Maintenance Plans that will survive the DBAs who set them up leaving the company. On a technicality, I achieved my goal.

We’ve talked about this setup very briefly at the office, but haven’t made a final decision on whether or not to go down this road or to abandon MPs for DB maintenance altogether. Hopefully we will get that decision made before someone else leaves.

But at what cost?

The main cost here is a lot of damage to my affinity for Maintenance Plans.

Using MPs in general, love them or hate them, is pretty much a religious debate. Usually I fall on the proponent side of those arguments. I think they’re easy to set up, they’re pretty flexible, they can dynamically pick up new DBs that get added to the system (not always an advantage), you can cook up some really crazy dependencies within them, and I like pictures & flowchart-y things :-).

After this whole ordeal? Pretty sure I’ve changed my mind. The default behavior of breaking as soon as the account that initially set them up is no longer an SA is way too high a price to pay. What happens when you’re a single-DBA shop and you get honked off and bail one day? Near as I can tell, if your account gets restricted as it should be, backups will stop working, but there won’t be anyone left to notice!

In lieu of MPs, there’s at least one good option out there that has almost all of the desired functionality. While this was going on, @TheSQLGuru posted a link to this set of SPs. I haven’t had time yet to really dive into them, but on the surface, they look pretty good.

Final Comment(s)

This entire thing bugs me. I’m almost certain I’m missing something. This should be a huge problem for lots of people. I feel like I’m missing a detail or it’s because we use SQL logins for the most part or some other thing. But… in all of the testing that I’ve done, bad things happen when the initial creating user gets SA pulled. Period. I can’t get past the feeling that I’m doing something wrong, but I haven’t been able to figure it out yet.

In the meantime… I think Maintenance Plans are evil after all…

* Apparently not to be confused with howardwSOLODRAKBANSOLODRAKBANSO[LODRA] (Eve-Online thing; it’s honestly better if you don’t ask)

T-SQL Tuesday #11: The Misconception of Active/Active Clustering

TSQL Tuesday

October's TSQL2sDay: Hosted by Sankar Reddy

I was expecting my inaugural T-SQL Tuesday post to not happen this month… Until just yesterday, when I found myself helping explain to someone what an Active/Active SQL cluster was and what it wasn’t. Ah hah! I expected this to be a topic already covered fairly well, and as expected, I found a couple of really good articles/posts by others on this topic. I will reference those in a bit, but I can cover some more general ground first.

The misconception that I was explaining yesterday morning is the same one that I’ve explained before and have seen others explain as well: an Active/Active SQL Server cluster is not a load-balancing system. It is, in fact, purely for High-Availability purposes. If what you are really after is true load-balancing, unfortunately, you will have to go elsewhere, to something like Oracle RAC.

When I say “true load-balancing”, I mean being able to load-balance a specific set of data across multiple pieces of hardware. What you can do with Active/Active clustering is load-balance a set of databases across multiple pieces of hardware. This is useful if you have a SQL Instance with multiple very heavily-utilized databases—one or more of these DBs could be pulled out to its own instance on separate hardware. This is not done without potentially serious tradeoffs, however, which will be covered in a bit.

The underlying reason for this is that Windows Clustering is only designed for HA purposes. With its “shared nothing” approach to clustering, any given resource within the cluster (say, the storage LUN that holds your accounting software’s MDF file) can only be controlled by one node of the cluster at a time. If the accounting DB’s home LUN is currently attached to the first node in a cluster, it’s going to be pretty hard for the second node to service requests for that database.

OK, what is Active/Active Clustering, then?

I like to think of it as two Active/Passive clusters that happen to live on the same set of hardware. To illustrate:

Active-Passive Cluster Example 1

This example cluster is a two-node Active/Passive cluster that contains a single SQL Server Instance, CLUS1\ACCOUNTING. This Instance has its own set of drive letters for its DB, Log, and TempDB storage (say, E:, F:, and G:). If both nodes are healthy, Node2 never has anything to do. It sits around twiddling its bits, just waiting for that moment when a CPU melts down in Node1, and it can take over running the ACCOUNTING instance to show how awesome it is.

Active-Passive Cluster Example 2

This is another two-node Active/Passive cluster, running a single instance, CLUS2\TIME (you know, because internal time-tracking software is that important). Like CLUS1\ACCOUNTING above, it has its own set of drive letters for its storage needs (M:, N:, and O:). In this case, the SQL Server Instance is on Node2, leaving Node1 with nothing to do.

What do we get if we mash these two clusters together?

Active-Active Cluster Example

We would get this, an “Active/Active” SQL cluster. It’s logically the same as the two previous clusters: it just so happens that instead of the second node doing nothing, it is running another independent instance. Each instance still has its own set of drive letters (E:, F:, and G: plus M:, N:, and O:). Just like in the “Accounting Cluster” above, if Node1 were to fail for some reason, the CLUS1\ACCOUNTING instance would be started up on Node2, this time, alongside CLUS2\TIME.
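If you want to see this at the command line, the failover clustering module (Windows 2008 R2 and up) can show which node currently owns each group. The cluster name below is made up, since the combined cluster in the picture doesn’t really have one:

    # Show each cluster group (including the two SQL instances' groups), its current owner node, and state
    Import-Module FailoverClusters
    Get-ClusterGroup -Cluster "SQLCLUSTER01" | Select-Object Name, OwnerNode, State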

So, is doing this a good idea?

Maybe. Above, I talked about how a single-instance Active/Passive cluster with more than one heavily-used DB could be re-architected into an Active/Active cluster, with some of the heavy DBs on one instance living on one node and the rest of them on a second instance living on the other node. That gives you High Availability and spreads the load between two servers, right? It does, but what happens if one of the nodes fails and the two instances “clown-car” onto the remaining node? Are there enough CPU & RAM resources on the remaining server to support the combined load?

That illustrates one of the major things to consider when thinking about deploying an Active/Active cluster. Tim Ford (Blog | @sqlagentman) has an article over at MSSQLTips that goes into detail about making this decision.

In addition, Aaron Bertrand (Blog | @AaronBertrand) has a sweet little piece of automation to help manage resources when trying to squeeze the maximum number or size of clowns into a single car without someone’s foot sticking out in the breeze. Of course, it’s something that you hope never actually gets used, but it’s a good way to wring the most performance out of an Active/Active cluster.

Conclusion

I don’t know if this is the best way to explain this concept, but I like it. The Active/Passive cluster concept is usually understood, so I believe it is a nice stepping stone to get to the truth about Active/Active clusters. As with everything, there are tradeoffs and gotchas that need to be considered when designing server architectures like this, and the above-mentioned others have gone a long way to cover those topics.

There we have it, my first T-SQL Tuesday post. Hopefully it is useful and not from too far out of left field 😉

Alright, So Maybe I Get the Cloud…

In general, I’m a cynical, Negative Poo-Poo Head. In the last few years, I’ve become more and more this way when it comes to technology trends. My first reflex as to why is my increased time spent working with developers on a day-to-day basis and my perception of how development is susceptible to Flavor-of-the-Week-itis. I don’t think that’s really the cause, though; I think I’m just getting more cynical as I get older in general. I know this isn’t necessarily a good thing, but that’s a different blog post. The good news here is that I can at least still turn it off when I need to.

Anyway, the Cloud and my negative poo-pooness.

I’ve basically spent more time making fun of Cloud Computing than I have learning about it. “Oh, it’s the dumb terminal/mainframe paradigm all over again”, “There’s something wrong with this whole thing when you can buy a shipping container full of servers, put it out back, and call it a ‘private cloud’”, “It’s just the latest buzzword/flavor-of-the-week”, on and on. Honestly, even though Brent, Buck, and others have been saying it’s the Way of the Future™, I didn’t buy it.

Well…I think I’m on board now.

What changed my mind?

I don’t think I can nail it down to one thing, but I can point to a couple things that have been going on lately that played into this change of heart.

Last Friday was the Nashville SQL Users Group meeting. Presenting was Brian Prince (Twitter | Blog (lookit dat Gamerscore)) on SQL Azure. I was pretty interested in this topic because I wanted to actually learn something about this stuff I keep making fun of. OK, that, and I was a little interested to see what all the fuss is about.

Although I had to leave early due to an unfortunately-scheduled meeting, I did get to see most of Brian’s presentation. I was impressed with a lot of what I saw. I was impressed by the shipping containers full of servers that make up open-air datacenters. I was impressed by how they can each have their own genset backup power sitting next to them, how the servers in the containers live at 95°F, are cooled primarily by what basically amount to swamp coolers, and pretty much never get touched once they’re racked up. During this part of the presentation, the Sysadmin in me was clamoring to give up the DBA thing and figure out how to get a job building/supporting this stuff, because it looks just that awesome…

Brian then went into the components of the Azure platform, where SQL Azure sits in it, and how it is supported, replicated, and made available. We also got to see how you actually use the system, along with some good arguments about DBs that we all have in our environments that would be good candidates to move to the cloud. The prices seem pretty good (I haven’t done any extensive calculations) when compared with running small but still mission-critical DBs in-house… It all adds up to a pretty good argument.

All of this good info and eyes-on examples went a long way to getting me over whatever I was afraid of or didn’t buy about the system.

Buck has been going on about this whole thing for a while, and he even put his money where his mouth is and is moving over to the Azure team. At first, I thought that was a loss for the SQL community, but with my improving outlook, I don’t feel that way anymore. I believe it will be for the better, and I can’t wait to hear more from him in the future.

These are a couple of the things that have been going on lately that are beginning to change my mind about the cloud. I’m looking forward to learning more about it and considering it as an implementation option in the future.