Right, I Installed Windows 8 CP

I know, all of my friends have been busy playing with the Gold bits of SQL 2012, and that’s all well and good. I really should be doing that, too. Except…well…I don’t exactly need to worry about that at this point, unfortunately. Different story. Instead, I’m playing with the Consumer Preview (read: beta) release of Windows 8, because I like Windows (news flash).

Back in college, I would use my laptop as the test-bed for new stuff. I carried it to classes, so I got a lot of keyboard time with it (and a lot of show & tell, too). New Office & Windows versions were the most-used victims, as I took notes in Word—these were the days before OneNote. After my freshman year, I had a desktop, too, so I always had a more stable place to get work done if I had problems with anything. I would have used my current laptop for this, but it’s an old boat anchor (a Latitude 120L), and has too old/crappy of a video card to run the display at its full res in 8. Tried it with Dev Preview & the best I could get was 800×600. I fiddled with the disk subsystem in my ESXi server & installed 8 directly on that hardware to test a couple of applications that I wanted to make sure worked before I got to…

Blowing Up the Main Desktop

It was a perfectly healthy, couple-year-old install of 7, but it was on a semi-decent machine (Presler-core P4) with a good video card, so it had to die (you know, for science).

Windows Experience Index for test machine. Overall Score 4.5

CPU bound... just the way I like it

The usual first step of this process is copying the ISO contents to a USB stick for install. Since I don’t do this very often, I always have to look up the steps to make sure I don’t forget something while fumbling around in diskpart. For whatever reason, this time I stumbled upon a link to the Windows 7 USB/DVD Download Tool. This is a neat little thing that automates the task of transferring an ISO of Windows install media to a bootable USB disk (don’t let the “Windows 7” in the name fool you; it handles 8 just fine). I doubt it’s faster than XCOPY, but it’s a little easier.

The install itself went pretty quickly after a couple of false starts; I can’t really compare it to anything, but it did seem fast. On its first boot, it offers to let you attach a/your Live account to it, like Windows Phone does (and Google accounts on Android and Apple IDs on the iPhone). On my first test machine, I went ahead and tied it to my Live account just to see what it did. The answer: other than grabbing your account’s picture and making it fairly ticklish to figure out how to auth when RDPing to the machine, I’m not really sure. It may not be fair to say that since I spent a total of about 20 minutes on that install, but that’s how not-obvious it was.

On my “real” test install, I put the machine in our test domain, did my usual early system settings changes, & worked through the high points of my application install list. No real hiccups there, so I was able to get on with…

Actually Using the Thing

This all happened a month or so ago now, but I’ve still only gotten a fistful of good usage hours with it. In that time, I’ve found a few things to talk about and/or show. None of these are really earth-shattering; they’re just the things that stick out for me. For a more general overview, check out some of PeterB’s DrPizza’s Peter Bright’s articles on the topic.

Obviously the biggest change and the elephant in the room is the introduction of the Metro UI to desktop Windows. One of the side effects of this is that the Start Menu goes away. This is a pretty big shock, and it still catches me off-guard when I hit the Windows Key and the main monitor (I’ve got two on this machine) flips over to the Start Screen. Gone is the ability to organize things via folders. Instead, if an app wants to put shortcuts on the Start Menu, they just wind up on the Start Screen. They get arranged into blocks of icons by default, and you can arrange/delete them further if you like. Near as I can tell, that is the extent of the organization options one has. The problem I found is that if you install something like the Server Admin Tools (there’s already a pre-release version for Windows 8, which is fantastic), which adds a lot of shortcuts all at once, the default behavior is apparently for those icons to simply be barfed all over the Start Screen.

Start Screen showing icons all over the place

Organization is Overrated in 2012

I don’t think this is a good thing, but in this architecture, I’m not sure what else can be done. If one’s interested in keeping the Start Screen tidy, there’s going to be a lot of micromanagement to do. On the other hand, back in Vista, a search bar was added to the bottom of the Start Menu. Handily, that text box had focus when the Start Menu was raised, so one could mash Flag (“Flag” is how I usually refer to the “Windows Logo Key,” primarily because “Windows Logo Key” is way too damn many syllables) and start typing to bring up whatever shortcut they were going for. Judging by commentary I’ve read about that particular feature (OK, it’s mostly Ars readers), it seems a decent chunk of people use that as their primary app starting/finding vector for things that aren’t pinned to the Taskbar.

In Windows 8, even though there isn’t a visual cue for it, this functionality is still there. Hit Flag and start typing. A search screen will come up showing results for Applications, Settings (Control Panel applets), and Documents; these are the same categories of things that the search bar in the old Start Menu returns. Not being one who uses this search to find things very often, I’m going to need to get used to it if I don’t want to spend more time than I do now managing what the Start Screen looks like.

Start Menu screenshot showing the "Recent" list of files

This feature leads the list of ones I didn't get at first, but now will miss the most

One thing I can say for certain: I’m going to miss the “recent” list that shows up for some applications in the old Start Menu. This is the list of docs that cascades out from the app’s shortcut when an app is pinned or otherwise lands on the main part of the Start Menu. I admit I don’t use this feature all that often, but there are a few apps, Remote Desktop being one, where I use the recent list almost exclusively when calling the application. It works best when a small set of files is used all the time (like, say, shortcuts to the ETL servers), but it’s really useful for that sort of thing. I haven’t found a way to mirror this functionality in 8, and that’s a bummer.

Once over the initial hurdle of getting past the Start Screen (hint: click on the “Desktop” button), in what I would consider normal use, I don’t feel like I’m using anything other than Windows 7 (or Vista, for that matter). My normal usage pattern on this machine resembles what most people would consider “work”—Office apps, web browsing/research, Quicken, Twitter (old TweetDeck!), and Live Writer for blogging. I would say photo editing/management, but that barely gets done anymore. Non-“work” stuff involves Windows Media Center, and not much else; don’t have time for gaming anymore, either. All of these things are non-Metro (“classic”) Desktop apps, and as such, operate on the desktop. Using this kind of application and launching them either using pinned Taskbar icons or opening documents straight from the file system means that I hardly ever see any Metro UI elements.

That’s pretty much a good thing, until a bunch of time goes by, one forgets what they’re doing, and, say, a random app is needed from the Start Menu. Hit Flag, and get smacked in the face by the full-screen Metro Start Screen. This has happened to me more than a few times since I’ve been using 8. I also haven’t put Acrobat Reader on the machine yet, so when I open a PDF, the built-in reader launches, which is a Metro app. Smacked in the face again. This one is my own fault, of course, as I’ve held off installing Acrobat to at least try the built-in reader for a while and see what I think of it. It’s very jarring and disruptive when this happens, mostly because it’s all so different and still so new.

Speaking of these full-screen Metro apps, they have two major drawbacks to me. One is the fact that they’re full-screen. On a machine with a nice high resolution, I hardly ever maximize windows/applications to use them, so this is a little annoying. There is the fixed two-app display thing available, but it’s still a little wonky when it involves one Metro app and the Desktop. Since the Desktop behaves as a single application in this situation, if it’s the app on the small side of the split-screen, then the windows running on the Desktop become the same little thumbnails that pop up when you mouseover window buttons on the Taskbar. As a result, it’s likely that you can’t read what’s going on and you definitely can’t interact with them. This arrangement isn’t overly useful.

The second major drawback is something that got in my way early on: The only arrangement choice with Metro apps is on the main monitor. What if I want to put that application on the smaller of the two monitors I have (which isn’t the main one)? Turns out, that can be done. Among the list of keyboard shortcuts outlined in this Windows Team Blog entry is Flag – PgUp/PgDn, which “Move[s] Start screen to Left/Right Monitor.” That command does exactly what it says it does, but also moves full-screen Metro apps with it (plus the “primary” Taskbar). It appears to be the same functionality as the “Use this monitor as my main monitor” checkbox on the Display Options dialog, just in handy keyboard shortcut form. The operation itself is smooth and fast, even on my less-than-optimal hardware, and technically it does address what I was trying to do. As part of moving the whole Metro…environment, for lack of a better term, it takes the built-in split-screen functionality with it. I guess that makes sense, but all of these “features” just continue to make the whole thing feel contrived.

While using this, a couple specific items stuck out at me, and I’ll go through those next.

File Copy Status

From the “it’s the details that count” category, we have the File Copy Status dialog. (If this thing has an actual name, I obviously don’t know what it is.) There are a couple of new things here which, to a completely crazy person who likes to watch numbers change, are welcome additions.

New File Copy Dialog showing the "More Details" view

Best thing to happen to the File Copy dialog since the File Copy Dialog

First, I almost always hit the “More Details” button to see what kind of throughput I’m getting on a copy/move/delete. Finally, in Windows 8, if you flip it to the More Details version of that little dialog, the setting sticks! Every time a file is copied, the status dialog opens with all of the details showing. Awesome.

Even better is the little throughput histogram that now appears. This thing pulls triple duty: it visually represents both percent complete and current/historical throughput, and it also shows the current throughput in text form. Given that at the moment I’m not overly enthused with this OS, this dialog is probably my favorite new feature.

New Task Manager

There’s a new Task Manager in 8. It brings some of the functionality of the full-blown Resource Monitor down into the smaller Task Manager package. I find this a bit of a mixed blessing, but in general it’s pretty nice. I usually leave Task Manager running all the time, so I’m a little worried about the performance impact of the new one—I’ve sometimes seen Resource Monitor peg out a CPU core while displaying disk activity.

That’s all well and good. BUT! On this machine, I got the old Task Manager! At first, I thought it was just because I decided to put it in the domain for this evaluation. Long story short, I had Task Manager pinned to the Taskbar on this profile, and I had used that shortcut to start it. Turns out, there are now two different Task Managers in Windows—the old one is still taskmgr.exe & the new one is TM.exe. The new one does eat a lot more RAM than the old one, so there’s at least that difference between them. I’d guess that the old one will go away at some point, but for right now, we’ve got a choice of which one to run.

Minor Issues

I’ve had a couple non-UX/UI issues so far, both related to specific applications.

The first one is with the “new” TweetDeck (version 1+). It installed fine, but when I tried to sign in to my TweetDeck account on first launch, it couldn’t do it. The password I was using was right, but I couldn’t get anything other than an “unable to log in” error. Since Twitter has changed things so the app can’t be used without a/your TweetDeck account, I wasn’t getting anywhere with that. I put the most recent Air-based version on instead, and it is running fine.

The other application problem I am having is with Windows Live Writer. It is pretty unstable in general, but it will also crash pretty much every time I try to open an in-progress draft post. This makes it slightly ticklish to pick up a post again if LW wasn’t left running. What I have found works is opening the in-progress file from the file system; that will start up LW & open the desired file. Better than nothing.

Overall Thoughts at the Moment

My feelings at the moment mirror what a lot of other people are saying: This OS is going to kick some serious ass on a tablet, and I can’t wait to get my hands on some of that. What it doesn’t do as well so far is be a desktop OS. IMO, this situation is most glaring with a multi-monitor setup. I don’t like how Metro apps are basically locked to full-screen and generally don’t feel very flexible. It also bugs me that the “classic” desktop sometimes behaves like a single application (like when it is involved in the Metro split-screen arrangement). I’m stopping short of outright calling the multi-monitor experience broken, but it’s sub-optimal, at least.

It will be interesting to see where all this goes. Since Metro apps are stuck in their box, and do a fairly good job of breaking what I consider the “normal” desktop computer workflow, I wonder if vendors will release multiple versions of their applications so both desktop and tablet users get something that functions in its native environment. Not knowing much about software development, I don’t know how much easier (or cheaper) that is said than done. There’s been a lot of talk about browsers being released as native Metro apps. Obviously there will have to be some set of applications released as Metro apps, since that finger-friendly interface will definitely be the main one used on tablets. My hope right now is that vendors (and Microsoft, for that matter) continue to support the Desktop work environment.

Bottom line: Other than minor-to-moderate problems with the UI & UX, it seems to be just as solid of an OS as Windows 7 is. I think it’s possible to use the OS mostly just like Windows 7, too. I want to emphasize that statement, because I feel like there’s a lot of frothing at the mouth about how terrible Metro is on the desktop. Although I don’t like everything about it either, there is plenty of “old” Windows still there. I truly, honestly, don’t believe Metro being in Windows 8 like it is right now is going to be as much of a failure as some on the Intarwebs are saying. Disruptive? Yes, definitely… But so were Windows 95 and Mac OS X.

Time will tell whether or not I change my mind.

T-SQL Tuesday #24: Prox ‘n’ Funx

Procedures and Functions. Well, this could be interesting. Everyone else’s posts, that is.

T-SQL Tuesday

#24, brought to us by Brad Schulz

OK, T-SQL Tuesday Twenty-Four: Two-year anniversary edition! Pretty sweet of Adam Machanic (blog | @AdamMachanic) to come up with this whole thing two years ago. A good guy, he is. This month’s topic is Prox ‘n’ Funx, brought to all of us by Brad Schulz (blog). I kind of feel bad here—I don’t really know much about Brad, but he’s an MVP, and flipping around his blog brings up some pretty cool posts. I really like this one, for example.

I actually have what I think is a decent little thing to talk about for this topic. It could have been a stretch topic for T-SQL Tuesday #11 about misconceptions, even if [I hope] not exactly a widespread one. This topic revolves around transaction control and error handling within stored procedures, which can be a crazy topic in and of itself. Books could be written on the topic. Specifically, the conversation that I found myself involved in one day was about what happens to any open explicit transactions when a procedure runs into an error.

Let’s Talk About Procs Dying

Once upon a time, someone said to me that if an error occurs within a procedure which contains an explicit transaction, that transaction will remain open, potentially blocking other sessions on the system. In short, that’s not true, and it’s fairly easy to prove. We’ll work through a quick little script to do this, and will include modern TRY…CATCH error handling, because that will come into play when we get to the main point (I say “modern” there like it’s a new thing, but coming from a guy who has worked with SQL 2000 more than anything else, that distinction feels necessary). It actually doesn’t matter whether there is any error handling during the first part of this exercise, as the results will be the same. This is a pretty contrived setup, but that’s pretty much what I’m going for.

First, create a table & put a couple of rows into it. The two columns we’re most worried about are “FakeNumberColumn” and “ActualNumberColumn” (note their data types), so named because the error which will be triggered in a little bit will be a type conversion error.
-- Create Table
CREATE TABLE dbo.TransactionTest2008
(   TransactionTestID    INT            NOT NULL    IDENTITY(1,1),
    RowDescription       VARCHAR(50)    NOT NULL,
    FakeNumberColumn     VARCHAR(10)    NOT NULL,
    ActualNumberColumn   INT            NULL
)
;

-- Populate it with a couple of rows
INSERT INTO dbo.TransactionTest2008 (RowDescription, FakeNumberColumn)
    SELECT 'Actually a Number 1', 10
    UNION ALL
    SELECT 'Actually a Number 2', 100
;

Now for the really contrived part: a Stored Procedure that will throw an error if FakeNumberColumn contains something that won’t implicitly convert to a numeric:

CREATE PROCEDURE TransactionTester2008
AS

BEGIN TRY
   BEGIN TRAN

   UPDATE dbo.TransactionTest2008
      SET ActualNumberColumn = FakeNumberColumn * 2

   -- SELECT *
   -- FROM dbo.TransactionTest2008

   -- Wait for 10 seconds so we have a chance to look at DBCC OPENTRAN (unless of course it dies first)
   WAITFOR DELAY '00:00:10'

   COMMIT TRAN
END TRY

BEGIN CATCH
   -- Some kind of error has occurred
   PRINT 'Welcome to Catchville'

   IF @@TRANCOUNT > 0
      ROLLBACK

   -- Raise an error with the details of the exception
   DECLARE @ErrMsg NVARCHAR(4000), @ErrSeverity INT
   SELECT @ErrMsg = ERROR_MESSAGE(),
          @ErrSeverity = ERROR_SEVERITY()

   RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH


(The commented-out SELECT statement can be un-commented if you like, to see the state of the table at that point of execution.)

As it is now, the proc will run successfully. The 10 second WAITFOR in it gives you time to run DBCC OPENTRAN in another query window to see the proc’s open transaction.

EXEC TransactionTester2008

Open Transaction

Now we’ll make things somewhat interesting. Insert another row into our table to put a letter into FakeNumberColumn, then run the proc again.

INSERT INTO dbo.TransactionTest2008 (RowDescription, FakeNumberColumn)
    SELECT 'Not really a Number', 'F'

EXEC TransactionTester2008

Things won’t go so well this time…

Bombed Proc Run

We get the PRINT message about being in Catchville, so we know that our exception was caught and execution finished in the CATCH block. At this point, go run DBCC OPENTRAN again, and you will see that there isn’t a transaction open. This would be the expected behavior. No transactions are left open; the in-process activities are rolled back.

I should also note that a less-severe error, such as a constraint violation on an INSERT, will only cause an error in that particular statement. The Engine will skip over that statement & continue processing normally. That behavior has led to some near-brown pants moments while running a huge pile of INSERTs, putting in some provided business data, but that’s what explicit transactions are for!
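
If you want to see that statement-level behavior for yourself, here’s a minimal sketch (a throwaway table with a made-up CHECK constraint, separate from the demo above). Run it as a single batch: the first INSERT dies on the constraint, but the batch keeps right on going.

CREATE TABLE dbo.ConstraintDemo
(   ConstraintDemoID   INT   NOT NULL   IDENTITY(1,1),
    PositiveNumber     INT   NOT NULL   CHECK (PositiveNumber > 0)
)
;

INSERT INTO dbo.ConstraintDemo (PositiveNumber) SELECT -1   -- fails the CHECK; only this statement is terminated
INSERT INTO dbo.ConstraintDemo (PositiveNumber) SELECT 1    -- still runs

SELECT * FROM dbo.ConstraintDemo   -- one row: the good one

DROP TABLE dbo.ConstraintDemo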

Now, About Timeouts…

OK, that section wound up pretty long. Here’s where I’m actually getting to what I want to talk about…

We’ve established that errors in Stored Procedures will not lead to transactions being left open under normal circumstances. There is a situation where things don’t go so well: when a client/application connection times out for one reason or another. If this happens, the client side closes its end of the connection, and after the query SQL Server is running completes, nothing else really happens: no COMMIT, no ROLLBACK. This can leave transactions open, which, of course, is bad, bad, bad.

Starting with where we left off above, we can simulate an application timeout by cancelling the running SP in Management Studio.

First, delete the error-producing row from dbo.TransactionTest2008:

DELETE FROM dbo.TransactionTest2008
WHERE RowDescription = 'Not really a Number'

Execute TransactionTester2008 again, and this time, while in the 10-second WAITFOR, cancel the query in Management Studio. Even with the TRY…CATCH block in place, the explicit transaction is left open (check with DBCC OPENTRAN). What this means is that whatever application (or DBA!) is running the statement(s) is responsible for closing the open transaction if a session times out or is cancelled. In my experience, if one is in the habit of wrapping everything they do in explicit BEGIN/COMMIT/ROLLBACK TRAN, they’ll be less likely to cancel a script they’re running and then sit there blocking half of the rest of the world. Not that I’ve been there, or anything…
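
For reference, that habit looks something like this (just a sketch, using our test table as a stand-in):

BEGIN TRAN

UPDATE dbo.TransactionTest2008
   SET ActualNumberColumn = FakeNumberColumn * 2

-- Eyeball the rowcount/results, then highlight and run exactly ONE of these:
-- COMMIT TRAN
-- ROLLBACK TRAN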

There is a safety net here: XACT_ABORT. I first learned about XACT_ABORT while doing some work with Linked Servers a few years ago. When set to ON, XACT_ABORT forces SQL Server to terminate and roll back the entire batch if any run-time error occurs. Here’s the Books Online page for XACT_ABORT.

In our case, flipping that setting to ON within our test SP will change what happens when a query timeout (or cancel) occurs. Add “SET XACT_ABORT ON” at the beginning of the above SP and re-create it thusly (either drop and recreate, or add the line and change it to ALTER PROCEDURE):

CREATE PROCEDURE TransactionTester2008
AS

SET XACT_ABORT ON
[…]

Run the SP as before, and again, while in the 10-second WAITFOR, cancel the query. Now if you check for an open transaction, there won’t be one—it was rolled back by the engine when the timeout (cancel) occurred, because of XACT_ABORT. No locks are still held, and no other sessions will be blocked by the timed-out session. Smooth sailing 🙂

As Usual, Be Careful

Before hauling off and adding this to a bunch of SPs, beware the Law of Unintended Consequences. Test extensively, because no matter how self-contained a fix may seem, who knows what else may be affected. Nowhere is this as true as it is in old, overly-complicated, poorly-understood legacy systems. I know DBAs like to fix problems and prevent them from happening in the first place, but please make sure that no new issues are introduced while trying to “fix” things.

Oh; don’t forget to clean up after yourself!

DROP TABLE dbo.TransactionTest2008
DROP PROCEDURE TransactionTester2008


T-SQL Tuesday #22: Data Presentation

TSQL Tuesday Logo

Robert Pearl hosts No.22

It seems like it hasn’t been that long since last month’s T-SQL Tuesday post; I suppose time flies when you’re having fun and trying to finish up the same ETL project you’ve been working on since March.

This month’s SQL blog party is being hosted by Robert Pearl (blog | @PearlKnows), on the topic of “Data Presentation.” This is a good topic for me at this point, as I’ve all but finished my transition from DBA to BI Monkey (that’s something else I need to write about…). I think Robert is looking for specific examples of ways to present data, but since, as usual, I don’t have anything specific that I can actually publish, I’m left to speak generally about the topic.

Data Presentation: Just as Important as the Data Itself

In a previous life, I was responsible for almost everything data-related for the systems that we ran. As a result, I would get a lot of requests for data. One of my favorite requests would come in the form of, “can you give me some numbers for <X system>?” I would try to keep my response at least marginally non-snarky, but it would generally include two questions:

  1. What exact “numbers” do you want? (this is especially where I would have snark problems)
  2. What do you want the data to look like?

Of course, the first one is an important question—if the requestor cannot articulate what it is they actually want (or even what question they’re trying to answer), little else is going to matter. I’ll not dwell on this particular item too much, but suffice to say, sometimes getting a good answer to this seemingly easy question is anything but. I’ve basically come to the conclusion that this is normal.

Once over that hurdle, the conversation can move on to the presentation of whatever data/”numbers” it is the requestor wants. There are almost as many options for presenting data as there are ways to write the T-SQL to retrieve it. Just like writing the SQL in a way that is performance- and resource-conscious, care should be taken when working on the presentation design. It is imperative for the data to be presented in a way that is understandable and digestible by its intended audience.

Notice I didn’t say “digestible by the party asking for it.” Don’t forget that the request originator may not be the party who is ultimately going to be parsing the provided data. If the audience is not clear in the original request, add a third question to the two that I have listed above: “Who is going to be acting on this data?”

Options for What Happens Next

When the “What do you want it to look like” question is asked, chances are decent that you’ve an idea about what the answer is going to be. If this is a one-off, ad-hoc request, Excel is a popular option. Alternatively, if a robust reporting system is in place, or this request will be a recurring one, developing a report to present the data might be a stronger choice. There are of course other options: the data could be destined for a statistical analysis application, where a CSV file would be more suitable. I would consider this an outlier, though—most of the time, data is prepared for direct human consumption.

Excel is such a popular option that you could almost call it Data’s Universal Distribution Engine (DUDE). Sending data over in Excel is less about the “make it pretty” side of good presentation than it is the “make it useful” side. I’ve found that Excel is the choice a lot of the time because the requestor wants to do more manipulation of the data once they get it. I’ll leave whether or not that is a good thing to the side; the truth is, such activity happens all the time. As a result, when preparing data for an Excel sheet, I like to have an idea of what the user is going to do with it. This sometimes helps to determine what data the user is looking for (if they don’t have a clear idea), but it can also help with some formatting or “extras” to include. These “extras” could take the form of running subtotals, percent changes for year-over-year situations, or anything else that is easier to add via SQL instead of someone having to putz around in Excel; there’s a sketch of that sort of thing below.
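
Here’s roughly what I mean by baking a year-over-year “extra” into the query itself. This is only a sketch—dbo.Sales, SaleDate, and Amount are made-up names standing in for whatever the real source looks like:

;WITH MonthlySales AS
(
    SELECT  YEAR(SaleDate)   AS SalesYear,
            MONTH(SaleDate)  AS SalesMonth,
            SUM(Amount)      AS MonthlySales
    FROM    dbo.Sales
    GROUP BY YEAR(SaleDate), MONTH(SaleDate)
)
SELECT  cur.SalesYear,
        cur.SalesMonth,
        cur.MonthlySales,
        prev.MonthlySales    AS PriorYearSales,
        -- percent change vs. the same month last year (NULL when there's no prior year)
        CASE WHEN prev.MonthlySales <> 0
             THEN (cur.MonthlySales - prev.MonthlySales) * 100.0 / prev.MonthlySales
        END                  AS YoYPctChange
FROM    MonthlySales cur
LEFT JOIN MonthlySales prev
       ON  prev.SalesYear  = cur.SalesYear - 1
       AND prev.SalesMonth = cur.SalesMonth
ORDER BY cur.SalesYear, cur.SalesMonth
;

Hand that over in a sheet and nobody has to build the percent-change column by hand in Excel.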

Writing a report to present data has a different set of opportunities than pasting data into Excel. One of the things that I like to see in a solid reporting environment is a set of standards that apply to the reports themselves: things like common header contents (report name, date/time stamp, name of the data source/DB the data came from, etc), standard text formatting, a common set of descriptors, and so on. In addition to making individual reports easier to read & feel more familiar, it can make it easier to compare data between reports the hard way (one on each monitor), if one has to.

It's only worth 1,000 words if the first ones that come to mind are work safe

One thing each of these two tools gives you is the ability to present data in the form of pretty pictures. There’s a time and a place for everything, but the old cliché, “a picture is worth a thousand words” can/does apply. Sometimes it’s just flat-out hard to beat a good trendline. I have a much easier time seeing even the simplest of trends when data’s plotted out in a histogram. Conversely, one of my coworkers can look at a pile of numbers, not even sorted chronologically, and tell you what is going on in about three seconds.

Knowing where to put your effort goes back to knowing who your intended audience is. Likewise, knowing when to say “no” to visualization is a terribly useful skill. Every data element on the chart should be discernable, or else it doesn’t convey the information it is supposed to, and now the visualization is working against itself. The pie chart to the right? Don’t do that.

Summary

That’s about all I’ve got. In short: Presentation is important. Unfortunately, it can also be complicated. It’s important to ask questions early on in the process and to know your audience. Standardize if you can; help out a little with the complicated work if it can be done in SQL. Also, add visual representations without going overboard. I’ve always found turning “data” into “information” for people to be fun; if it can make someone else’s job easier/more fun, too, then all for the better.

A Note About cliconfg.exe on x64 Machines

If you don’t know (and that’s a good thing in this instance), cliconfg.exe is one of those older-than-dirt parts of Windows that is still around because it serves a specific purpose. In this case, it is used to configure SQL Server connection aliases.

Aliases are basically an abstraction between a SQL Server Instance’s actual connection string information (DNS name, Instance name, port, etc) and the name used by a client to connect to it. In this case, “client” can be either code/an application or a user. I know aliases are mostly evil; however, some people do use them extensively. I don’t know that they’re all bad, but that’s a different post for a different time.

x86 cliconfg.exe vs x64 cliconfg.exe

Sit down at any remotely-modern Windows machine (it’s been there since 2000) and you can bring up a Run box, type “cliconfg”, press ENTER, and the utility will launch. If you do this on a 64-bit build of Windows, the 64-bit version of the utility will launch, as you might expect. This sounds good at first, but it might not be.

The first problem this presents is that there is absolutely zero way to tell from within the utility itself which version of it is running. Therefore, if you’ve just moved from a 32-bit to a 64-bit machine, there isn’t anything that might get your attention and suggest that anything unexpected is going on. This probably happened a long time ago for some, but if you’re a corporate monkey like I am, you might have only recently been able to make the jump to a 64-bit OS.

The ultimate issue that crops up stems from why there are two versions of cliconfg.exe in the first place. In short, Windows keeps separate lists of aliases, one used by 32-bit applications and the other by 64-bit. These lists are maintained by their respective versions of cliconfg.exe. These lists are completely independent, and are only accessible by applications compiled with the same bit width.
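
(If you want to see the two lists side-by-side, they live in the registry: the 64-bit aliases are under HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo, and the 32-bit ones are under HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo.)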

In order to launch the 32-bit version of cliconfg, it has to be run directly from the folder it lives in. The path to this file is %SystemRoot%\SysWOW64\cliconfg.exe. As mentioned, it looks (and acts) just like the 64-bit version. I’d think that the folder should be named “sysWOW32” instead, but what do I know?

Which one to use?

Setting up aliases needs to be slightly planned out, although the same aliases can just be set up on both “sides” and be done with it. If one chooses not to do that, it’s a matter of thinking about which apps are going to be utilizing any given alias, and then setting that alias up on the same bit width as the app. For example, to use an alias from within SQL Server Management Studio, the alias needs to be set up on the 32-bit side, as SSMS is still a 32-bit application.

Plan B (or Plan A?)

Here’s a different idea: Don’t use cliconfg.exe!

It’s old. I have a feeling it will go away some day. It isn’t very clear on what you are doing (32 vs 64-bit). All problems. Instead, consider using the SQL Server Configuration Manager. It’s new(er). It’s part of SQL Server, so it will probably be around for at least a bit. It is also very clear about the difference between 32 and 64-bit aliases.

For the current topic of conversation, that last point is the most important one. On a 64-bit machine, the Config Manager will have two Native Client items in the tree view in the left pane, one for 64-bit and the other for 32. Aliases configured in one do not show up in the other, making it perfectly clear what is going on.

I know that aliases are bad; using cliconfg.exe to manage them is arguably worse. However, if the situation exists that they need to be used, care needs to be taken in 64-bit client environments. The best way to deal with this situation is to use Configuration Manager.

T-SQL Tuesday #21: “This Ugly Hack is Only Temporary”

…Unless “nothing is temporary around here” also applies to your shop. Then…well…good luck with that (I feel your pain).

Wednesday is better than Tuesday anyway; everyone knows that

It’s been a while since I’ve participated in a T-SQL Tuesday; haven’t been writing at all, really, as you can see. Summer apparently does that to me. If you’re reading this, then it means I got it shoehorned in.

T-SQL Tuesday is the brainchild of Adam Machanic (blog | @AdamMachanic) that started, apparently, 21 months ago. Its purpose is for bloggers to get together and cover a particular topic every month. This is good for readers and also is an easy way to get a blog topic, if one has a hard time coming up with them (like me). This month’s topic is hosted by Adam, and has been billed as “revealing your crap to the world.” There will be lots of good stuff to read this month, calling it now.

RI Enforced by a Trigger? Check.

I’m going to preface this one by saying that it never actually saw the light of day. Sometime between Design & Development, the project that it was part of got canned, so the Development instance was the only place this was ever deployed to. But, it was still crap, and it was crap that I came up with, at that, so here we go.

There once was a big, ugly “Location” table. This table was way wider than it should have been, mainly due to the fact that “normalization” was a term missing from the historical evolution of the system it was a part of. As you can imagine, this table contained a lot of character columns of varying widths (up to and including TEXT). To make matters worse, in lots of cases the UI allowed free-text entry, which made much of the contents suspect, at best.

A project came along that basically would require normalizing out a couple of columns related to a certain location attribute. For the sake of discussion, this could be just about anything—we’ll use “Preferred Shipping Carrier” as an example.

At the beginning of design, this “PreferredShipCarrier” column is full of shady entries. There are some “UPS”es, “FedEx”es, and “None”s, but there are also a lot of NULLs (which shouldn’t really happen), “N/A”s, and “UPPS”es, all of which are incorrect for one reason or another. Obviously, as part of normalizing this column out, there would need to be some corresponding data cleanup, but that’s beside the point.

Where things go south is how this was to be implemented. Since the column in the table is VARCHAR, a real Foreign Key relationship can’t be set up to the INT Surrogate Key in the new “PreferredShipper” table. This is the first sign of trouble. Second sign is that we can’t change the UI—it has to stay in its free-text entry state. This leaves the database exposed to bad data, as there can’t be RI to enforce valid entries.

Enter a trigger. Upon an attempted row insert into the Location table, it can look up the value entered for Location.PreferredShipCarrier and throw an error if it doesn’t find a matching row in the new table. I hated it, but it got the job done.
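
For the record, the idea looked something like this (a reconstruction from memory, since the real thing died with the project; the PreferredShipper column name is a guess):

CREATE TRIGGER dbo.trLocation_CheckShipCarrier
ON dbo.Location
AFTER INSERT, UPDATE
AS
BEGIN
   -- If any affected row names a carrier that isn't in the lookup table,
   -- raise an error and undo the whole statement
   IF EXISTS
   (
      SELECT 1
      FROM   inserted i
      WHERE  NOT EXISTS
             (
                SELECT 1
                FROM   dbo.PreferredShipper ps
                WHERE  ps.CarrierName = i.PreferredShipCarrier
             )
   )
   BEGIN
      RAISERROR('Invalid Preferred Shipping Carrier.', 16, 1)
      ROLLBACK TRAN
   END
END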

Auditing with Triggers, too? What is up with you and Triggers?

SQL 2000 was terrible for auditing—it didn’t have all of the new fancy stuff that we have now for this sort of thing. That means, when you’re dealing with 2000, and the business wants an audit trail of who changed what when, your options are/were pretty limited.

At one point, a requirement came along to implement audit trails on some tables. This means I needed to duplicate tables, adding some metadata columns to them (audit timestamp, who dunnit, what the action was, etc), and then put a trigger on each of the source tables. Some of these particular tables were busy, so the triggers & log tables got a lot of action and I hated having to do this to get what was needed.

If the trigger fired for a DELETE, it would log the row that was deleted. If an UPDATE was happening, it would first check to see if any column contents were actually changing, and if so, log the previous state of the row to the log table. These triggers will grow relative to the width of the table (because they contain three explicit lists of the tables’ columns), so if the tables being audited are wide, the trigger will get pretty long. Additionally, since triggers in SQL Server fire once per statement, covering every row that is being UPDATEd or DELETEd, special care needs to be taken so they behave correctly when more than one row is operated on. This can make them a bit ticklish to write. The skeleton of the pattern is below.
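
Stripped way down, it looked roughly like this (table and column names invented; the real versions also compared inserted to deleted column-by-column to skip no-op UPDATEs):

CREATE TRIGGER dbo.trSomeTable_Audit
ON dbo.SomeTable
AFTER UPDATE, DELETE
AS
BEGIN
   -- "deleted" holds the prior state of every affected row, so this stays
   -- set-based no matter how many rows the statement touched
   INSERT INTO dbo.SomeTable_Audit
      (SomeTableID, SomeColumn, AuditAction, AuditDate, AuditLogin)
   SELECT d.SomeTableID,
          d.SomeColumn,
          CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END,
          GETDATE(),
          SUSER_SNAME()
   FROM   deleted d
END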

I know this isn’t necessarily crap, as when you’ve got to audit, you’ve got to audit. I don’t like it, if for no other reason than the extra clutter it puts into the DB and just the general idea of triggers all over the place. All manner of things can happen in triggers, and if you are operating in an unfamiliar DB or troubleshooting a goofy problem, things could be going on behind the curtain that make troubleshooting harder.

In short: Friends don’t let friends mash F5 on CREATE TRIGGER.