Archive | Software Development

SQL Server Management Objects – An Update

So, after my previous post, “Adventures with SQL Server Management Objects“, I decided to take a step back and see if I could improve my code. I wasn’t happy with the while(true) loop, since if the developer stars align in a certain way it could mean an infinite loop.

This is Bad™.

Originally I had the idea of using Tasks (from the Task Parallel Library) to monitor the status switching from Idle to Executing, and then from Executing to anything else (signifying failure). So I plugged away at my code for a while, getting to grips with the TPL as I went. However, as time went on, I realised the code was getting messier and messier, because a) I didn’t really know what I was doing, and b) as I got more familiar with Tasks and when/where to use them, I realised I didn’t technically need them (this being a console app, I didn’t really care about blocking the main thread). Not only that, but monitoring the status changes of the Job in this way was fraught with danger: sometimes (yay, consistency) the job can execute and fail in the gap between the initial job.Start() and job.Refresh(), which made the status monitoring flaky at best. So I threw away my changeset and started again (from my previous commit), this time without any Tasks at all.

This is what I ended up with, edited for brevity:

    /* snip */
    var lastRunDate = job.LastRunDate;

    job.Start();
    job.Refresh();

    // The job.LastRunDate will only change once the job finishes executing, providing a way of monitoring the job
    while (job.LastRunDate == lastRunDate)
    {
        Thread.Sleep(_interval);
        job.Refresh();
    }

    if (job.LastRunOutcome == CompletionResult.Succeeded)
        WriteInfo("Hooray! Job Successful");
    else
    {
        WriteError("Oh Noes! Job failed!");
        WriteError("Job failed with message: " + job.GetLastFailedMessageFromJob());
    }
    /* snip */

By monitoring when the LastRunDate property of the Job object changed, I was able to accurately determine when the job had actually stopped executing from the current attempt. Thankfully, the LastRunDate property only gets changed when the Job finishes executing (whether it fails or succeeds).

If you want to look at the full changeset, the details are here.

Technically, though, this still blocks the main thread. I realised after I finished that you could wrap the entire chunk of code above inside a Task:


    Task.Factory.StartNew(() =>
    {
        /* monitoring code here */
    });

Unfortunately, this doesn’t have any of the CancellationToken stuff implemented in it. And honestly, I’m not entirely sure where it would go, or if it’s even necessary. That’s for another post though 🙂
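For what it’s worth, here’s a sketch of where I suspect the token could go: into the StartNew call, and into the polling loop itself. The JobPoller class and its delegates are my own hypothetical stand-ins, not SMO types; in real code, refresh would be () => job.Refresh() and isFinished the LastRunDate comparison from the snippet above.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class JobPoller
{
    // Polls until 'isFinished' returns true, honouring cancellation.
    // 'refresh' stands in for job.Refresh(); 'isFinished' stands in for
    // the LastRunDate comparison. Both are placeholders, not SMO calls.
    public static Task PollUntil(Action refresh, Func<bool> isFinished,
                                 TimeSpan interval, CancellationToken token)
    {
        return Task.Factory.StartNew(() =>
        {
            while (!isFinished())
            {
                // Bail out (with OperationCanceledException) if the caller
                // gave up, so the loop can never spin forever.
                token.ThrowIfCancellationRequested();

                // Like Thread.Sleep(interval), but wakes early on cancel.
                token.WaitHandle.WaitOne(interval);
                refresh();
            }
        }, token);
    }
}
```

The caller would hold a CancellationTokenSource, hand its Token to PollUntil, and call Cancel() (or CancelAfter) to abandon the monitoring.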

Thanks for reading folks, have a great night/day.

Adventures with SQL Server Management Objects

Recently, I had an issue where I needed to be able to trigger a SQL Server Job from a Console Application.

Now initially, I had the knee-jerk reaction of reaching for the good ol’ ADO.Net SQL libraries, to invoke my overused little friend SqlCommand and his cousin sp_start_job. And to be honest, there is nothing wrong with this approach; more often than not, it’s all you’ll need. However, for my particular situation I needed something a little more powerful, as the problem with invoking sp_start_job this way is that it’s done in a fire-and-forget manner. You see, sp_start_job DOES have a return value when it’s called, but that only indicates whether the job has started successfully (or not), and says nothing about what happens if the job itself fails. Given the client definitely needed to know what the heck was going on behind the scenes (at the very least from a high-level perspective), I went in search of something to fulfil my requirement.
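For reference, the fire-and-forget approach looks roughly like this. The connection string and job name are placeholders for your own environment, so treat this as a sketch rather than production code:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class FireAndForget
{
    static void Main()
    {
        // Placeholder connection string; point it at your own server.
        using (var connection = new SqlConnection(@"Server=(local);Database=msdb;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.sp_start_job", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@job_name", "MyAwesomeJob");

            connection.Open();
            // This only tells you the job *started* (or not); it says
            // nothing about whether the job itself eventually succeeds.
            command.ExecuteNonQuery();
        }
    }
}
```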

And so, on my journey, I discovered SQL Server Management Objects. And I’ll be honest, I cried a little. In happiness. This little ripper of a library has everything you need to ping a SQL Server Agent and all of its Job related goodness in several very useful classes.

Side note: This library could probably be (ab)used in so many other ways, but for today I’ll just focus on the SQL Agent side of things.

You need to add four references to your project:

  • [your install location]\Microsoft SQL Server\[your version]\SDK\Assemblies\Microsoft.SqlServer.ConnectionInfo.dll
  • [your install location]\Microsoft SQL Server\[your version]\SDK\Assemblies\Microsoft.SqlServer.Management.Sdk.Sfc.dll
  • [your install location]\Microsoft SQL Server\[your version]\SDK\Assemblies\Microsoft.SqlServer.Management.Smo.dll
  • [your install location]\Microsoft SQL Server\[your version]\SDK\Assemblies\Microsoft.SqlServer.SqlEnum.dll

The first class I will focus on is the Server class. This represents a connection to a given SQL Server instance, and it’s quite easy to use:

var sqlServer = new Server("(local)");

Now, you can pass in a ServerConnection object (which has its own advantages), but for my purposes, passing in the server instance as a string was acceptable.

And that’s it. The Server object is now ready to accept whatever you want to throw at it! Even better, the connection isn’t actually open yet, so if we do:

bool isOpen = sqlServer.ConnectionContext.IsOpen;

this will return false. So there’s no need to worry about connections hanging around every time you spin up a new Server object.

Ok, so now we have a Server object, let’s spin up the next object of our desire: the Job class. Now, I’m going to cheat slightly by retrieving an existing Job on the Agent. This might seem obvious, but the SQL Server Agent service needs to be running when you touch the JobServer property on the Server class, otherwise the code will fail horribly. Not that I’ve encountered this… Nope, not me… Ok, I encountered this 😦

var job = sqlServer.JobServer.Jobs["MyAwesomeJob"];

And now we have a SQL Server Job in a nice, clean object, ready for us to play with! From here you can call the Start() method on the Job object, as well as iterate through the steps of the job. Honestly, the level of control you get over the SQL entities is nothing short of amazing, so I’d advise you to approach with caution.

The Refresh() method on the Job object sends a request to the server, asking for information about the SQL Server Job in context, and updates the object’s properties to reflect any changes. However, there can be a delay between the Start() method being called and the Job actually starting, so my current workaround is to use Thread.Sleep(1000) to give SQL Server time to process the Start() request. Once the SQL Server Agent kicks in and Refresh() is called, the status of the object is updated to reflect that the Job is executing (or that it has already failed). There is a better way of doing this than Thread.Sleep, I’m sure, but for now it’ll do. Hmm, I think I’ll put this on my backlog, try to figure out a nicer way of doing it, and share the code when/if I’ve figured it out.

Now it’s a simple while loop to ping the Job status every second or so. Once the initial request has changed the status to JobExecutionStatus.Executing (this should take less than a second), we check whether it’s no longer Executing (side note: WriteLine is basically just a Console.WriteLine helper that passes in a given colour).

while (true)
{
    if (job.CurrentRunStatus != JobExecutionStatus.Executing)
    {
        if (job.LastRunOutcome == CompletionResult.Succeeded)
            WriteLine("Hooray! Job Successful", ConsoleColor.Cyan);
        else
            WriteLine("Oh Noes! Job failed!", ConsoleColor.Red);
        break;
    }

    job.Refresh();

    WriteLine("Waiting...", ConsoleColor.Yellow);
    Thread.Sleep(1000);
}

Once this is done, we check the LastRunOutcome property, and act accordingly.

If you plan on using this, it might be a good idea to specify a timeout as this could potentially run forever if something unexpected happens on the SQL end.
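A sketch of what that timeout could look like, using a Stopwatch to bound the loop. The helper and its delegates are hypothetical stand-ins rather than SMO calls; in the real loop, refresh would be () => job.Refresh() and isFinished the status check:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class Polling
{
    // Returns true if 'isFinished' reported completion within 'timeout',
    // false if we gave up waiting. 'refresh' stands in for job.Refresh();
    // both delegates are placeholders rather than SMO calls.
    public static bool WaitForCompletion(Action refresh, Func<bool> isFinished,
                                         TimeSpan interval, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!isFinished())
        {
            if (stopwatch.Elapsed > timeout)
                return false; // something unexpected happened SQL-side; stop looping

            Thread.Sleep(interval);
            refresh();
        }
        return true;
    }
}
```

The caller can then decide what a timeout means: log it, alert someone, or retry.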

Now, I’ll address pulling out the error details from the Job, should the job fail.

Again, SQL Server Management Objects comes into its own here: finding the error details is a case of calling EnumHistory() on the Job object and filtering the returned DataTable down to the Message column for the correct row. Don’t bother pulling the error message out of the job-level entry (Step 0), as it’s more a high-level “Job failed, was invoked by user X” type of error rather than anything useful. You may as well just output something like “An error has occurred, please contact Support.”, as it’s around the same level of usefulness. Overall I think it depends on how informed your users need to be (or you want them to be!). It’s not the nicest way of going about things, but at this point it’s just what SMO provides.

In the end, I decided on going with a couple of extension methods for this, as I wanted to keep the flow of the code as smooth (or at least what I think is smooth) as possible.

The first is a method to retrieve the last failed step of the job:

        public static JobStep GetLastFailedStep(this Job failedJob)
        {
            if (failedJob == null)
                throw new ArgumentNullException("failedJob");

            if (failedJob.JobSteps == null || failedJob.JobSteps.Count == 0)
                throw new ArgumentException("failedJob must have at least one step");

            // Walk the steps backwards so we find the *last* failure
            for (int i = failedJob.JobSteps.Count - 1; i >= 0; --i)
            {
                if (failedJob.JobSteps[i].LastRunOutcome == CompletionResult.Failed)
                    return failedJob.JobSteps[i];
            }

            return null;
        }

The second is the meat of the error reporting code, and retrieves the error message for that failed step:

        public static string GetLastFailedMessageFromJob(this Job failedJob)
        {
            if (failedJob == null)
                throw new ArgumentNullException("failedJob");

            JobStep failedStep = failedJob.GetLastFailedStep();

            if (failedStep == null)
                throw new ArgumentException("No failed step found for job " + failedJob.Name);

            JobHistoryFilter filter = new JobHistoryFilter()
            {
                JobID = failedJob.JobID,
                OldestFirst = false,
                OutcomeTypes = CompletionResult.Failed
            };

            // Select() never returns null, but it can return an empty array
            var jobHistory = failedJob.EnumHistory(filter).Select("StepName='" + failedStep.Name + "'");

            if (jobHistory.Length > 0)
                return jobHistory[0]["Message"].ToString();

            return string.Empty;
        }

So this changes the snippet of code reporting job failure, meaning this:

WriteLine("Oh Noes! Job failed!", ConsoleColor.Red);

Becomes this:

WriteLine("Oh Noes! Job failed!", ConsoleColor.Red);
WriteLine("Job failed with message: " + job.GetLastFailedMessageFromJob(), ConsoleColor.Red);

And there you have it: some (probably imperfect) code to help you run SQL Server Jobs, monitor them, and report any failures with a fair degree of accuracy. Obviously this isn’t a one-size-fits-all sort of solution, but it worked quite well for me.

I have also posted my complete solution on GitHub, located here, and welcome any feedback/improvements you can provide.

Good luck, and happy SQLing from C#!

A Man and his Coffee Plunger

Today, I’m going to tell you a story.

Once upon a time, there was a man, and he had a really nice coffee plunger. This man was me, and I loved that damn plunger.

At least, until one fateful day when my plunger was donated to charity. On my behalf. Without my knowledge.

Yep, that’s right. Involuntary charity. You can’t make this stuff up.

I came in the following Monday scratching my head (as this all went down on a Friday), as I knew where I’d left my plunger, but I couldn’t find it. I was then informed that my plunger had been donated to charity. Cue a very sad (and angry) panda.

So I bit my tongue, and brought in another plunger… Then a few days later this (glass) plunger was left with a large crack running down its side. That’s two for two.

Fast forward a few months to another (really shiny, beautiful) plunger – this one a present from my wife – and more shenanigans ensued. I couldn’t find my plunger… I went through the entire kitchen several times (a few times with other people, to make sure I wasn’t missing the blindingly obvious), then the entire floor of my building, and nothing. Nada. Zip. It seemed that my beautiful shiny stainless steel plunger had been stolen.

 

** CUE DRAMATIC MUSIC **

By this time, the rage and sadness were at gargantuan levels. I ended up sending a (surprisingly) polite email around the office, asking for it back.

A week or so later, and lo and behold, stealthed into one of the kitchen cupboards, sitting in a plastic bag was my plunger. To really rub salt in the wound my plunger had not only been damaged (as if that weren’t bad enough!), but it had been taken apart and put back together incorrectly. Sigh.

So I took it out, gave it a good clean and put it back together again properly. All was well. My plunger had some battle scars, but I was honestly just happy to have it back.

Little did I know, the amazing wonderful people I work with decided that (since this was my third plunger that had been lost to the ethers) it would be a nice idea to order in a new plunger, the exact same make and model as my current one (this was not a cheap plunger, so the sentiment is all the more wonderful).

THESE are the people I work with, and you all fucking rock. Seriously, what team takes the time and effort to search out an expensive, obscure plunger and buy it for a really sad panda? An awesome team, that’s who.

So this post is a massive thank-you to those awesome people I work with, and an immortalisation of your selfless act. You are all upstanding folks and have restored my faith in humanity.

Here they are:

Luke Wale
Daniel Okely
Macklin Hartley
Rico Gunawan
Brian Madsen
Travis Quirk
Stephanie Danes
Christiaan Coetzer
Richard Hughes

You are all awesome. Give yourselves a high-five.

I will also be shouting coffee for the above people this Friday 🙂

TechEd 2013

Recently, I was lucky enough to be sent to the Gold Coast for a week-long event known as TechEd…

Holllllllllllly crap, was it awesome. I feel like the words on this screen don’t come close to conveying the awesome that was TechEd 2013. There were 3000 dev & IT professionals at this conference, and the energy from so many like-minded people crammed into one convention centre was electric.

My days were completely filled with sessions of various technical levels, from 8am til 6pm almost every day, and I came away from most of them filled with inspiration. I can honestly say I haven’t been that motivated or excited about development in my entire career.

Some of my favourites were:

  • Developer Kickoff – Stuff We Love by Ed Blankenship, Andrew Coates, Brady Gaster, Patrick Klug, Mads Kristensen (Presentation here).
  • SQL Server 2014 – Features Drilldown by Dandy Weyn (Presentation here).
  • What’s New in Visual Studio 2013 by Mads Kristensen (Presentation here).
  • The UX Doctor Will See You Now by Shane Morris.
  • Adding PowerShell Support to Enterprise Apps by Mitch Denny (I sort of have a tech-crush on PowerShell. It’s so shiny and awesome…) (Presentation here).
  • Building Real World Cloud Apps with Windows Azure by Scott Guthrie (Presentation here).
  • Integration in the Cloud: A Deep Dive Into Windows Azure BizTalk Services by Bill Chesnut (Presentation here).

There were so many awesome sessions this year, some of which I haven’t mentioned… You can find the complete list of sessions, with recordings, on the Microsoft Channel 9 website, with the TechEd-specific section being here. If you didn’t manage to get to all the sessions you wanted, most sessions have recordings so you can listen to/watch anything you missed!

I thought I’d share my thoughts and impressions regarding my favourite sessions.

Dev Kickoff – Wow, Visual Studio 2013 has some seriously awesome shit under the hood this time round. A constant SignalR connection with the browser… no more hitting F5! Save the changes in Visual Studio and SignalR will push them through to the browser, so seeing changes to the page suddenly becomes a lot less tedious. This relationship also works the other way around: using the new dev tools, you can make basic changes in the browser and have those changes reflected, in real time, in the code. I swear it’s using System.Magic or something.

SQL 2014 – The natural query syntax feature that was demoed to us excited me. As I’m sure a lot of developers out there can agree, writing reports can be a pain in the ass. This sort of functionality exposes the data to the user in a friendly way, allowing basic ad-hoc queries (and even some slightly more complex ones) in real time. The in-memory functionality for certain tables is also fascinating… performance goes through the roof compared to traditional IO-bound transactions.

What’s New in VS2013 – This took the taster we were given in the Dev Kickoff and fleshed it out nicely. Brand new HTML editors left, right and centre, IntelliSense functionality added to many more places, and, more importantly, contextual IntelliSense. The current HTML editor is particularly bad at this, as it tends to just give you a braindump of everything. So very, very shiny.

UX Doctor – This was a really great presentation. Shane Morris tempted the demo gods and went completely unscripted, live-reviewing (is that even a term?) three applications volunteered to him as part of his presentation. I came away actually thinking about design and user experience principles, and started picking apart other applications I use frequently. Quite a departure from my normal day-to-day stuff, so this was refreshing.

Cloud Apps with Azure – Special mention goes to Scott Guthrie. This guy not only seriously knows his stuff, but has an amazing presentation style: engaging and enthusiastic. To be honest, I didn’t really know who he was before TechEd (I’d heard the name mentioned once or twice, but silly me hadn’t paid attention) and was quite surprised by the amount of general interest in just having lunch with this guy, let alone attending his sessions. Thankfully, I did attend both of his sessions (a 3-hour, highly technical demo with zero demo glitches, what a champ) and came away FREAKING impressed with Windows Azure. Seriously. Wow. If you haven’t used Azure much, I highly recommend you take it for a spin. It’s ok… I’ll wait.

Awesome, riiiight?!!

The Azure BizTalk Services presentation was also fantastic. I have (for some reason I still don’t quite fathom) an intense interest in BizTalk, so being in a room with a BizTalk MVP (yes, they exist!) and about 20 or so BizTalk developers was such an awesome experience (yes, I know I’m using the word awesome a lot, but I don’t care; it was a week full of awesome). Bill Chesnut has an astounding amount of knowledge of BizTalk as it stands in Azure, not to mention on-premise BizTalk. He managed to fend off quite a few of our questions with grace (I think the room was a bit shocked), as Azure BizTalk Services itself is a bit of a strange one, with some seemingly odd limitations. But I have a feeling that, thanks to the (now monthly!) release cycle of Azure, more functionality will be coming our way. To be honest, it will have to be if Microsoft is as serious about it as it is about everything else cloud-related. As it stands right now, I’d have a hard time recommending it over on-premise BizTalk.

So there you have it… a brief summary of my time at the most amazing conference ever. If you ever manage to score some tickets, I strongly urge you to go. As a developer, the benefit of attending a conference like this is enormous. Not only is the community outstanding, but everyone is there for the same reason you are… we all love tech. We make it our livelihood and hobby both, and are passionate about the things we love. So do yourself a favour, score yourself some tickets, and soak in the geek 🙂

JavaScript: My Journey

So I finally jumped on the bandwagon. Two bandwagons, actually, but I’ll get to that soon.

I have a somewhat troubled history with JavaScript. You know those horror stories you hear from devs sometimes, the ones that make you shiver with fear? I lived through one of those, and it tainted my view of JavaScript from early on in my career. I saw JavaScript used (or more accurately, abused) in such a way that I was completely turned off from JavaScript, with an almost irrational hatred of the language.

I was encouraged to read “JavaScript: The Good Parts” to try and overcome my fears regarding JavaScript, and thankfully I managed to put my anger (and yes, it was genuine anger; that’s how badly this project scarred me) aside and plunged in headfirst. Very quickly I realised that JavaScript is actually pretty amazing. Prototype-based languages are something I’d never really encountered before (I come from a .Net background). The JavaScript in my past was most definitely not how to use JavaScript, so it was fascinating to see it demonstrated in a way that had me nodding my head and thinking “damn, that’s pretty cool”.

So bit by bit (ha, geddit?), I started to appreciate JavaScript… Not that I’d actually used it for anything at that point, but my anger towards anything JavaScript was fading, so it was nice to come back to a slightly more balanced standpoint. Then I discovered jQuery. Wow… just wow. jQuery is probably the single most important thing to ever happen to JavaScript… so many things that were previously tedious and terrible became single-line, few-character calls. So my attitude towards JavaScript improved again.

Then, most recently, I had the pleasure of attending a presentation by a mate at work who was doing a live coding demo app (oh yes, live coding. Ballsy, right!?) to demonstrate some Windows 8, HTML5 and JavaScript goodness. I came away not only impressed (it was a good presentation), but inspired.

Which brings me to the second bandwagon; Windows 8 development. I’d tinkered with it a bit in the past, but nothing more than a bit of light reading. This was the first time I’ve actually spent any decent amount of time looking into it, and jumping straight into the Windows 8 JavaScript apps has been an absolute blast. So far, I’ve done the venerable “Hello World” tutorial, HW with navigation, and am currently working my way through the photo viewer sample application. And I’ve gotta say, using JavaScript is actually kind of fun especially with all the functionality exposed by the WinJS library. It’s been a while since I’ve picked up a completely new technology (no, I don’t count the previous experience I’ve had with JavaScript), and I can see some pretty awesome potential with JavaScript in Windows 8 development.

Come to think of it, it’s been a long time since I’ve coded for myself and actually enjoyed it this much. With my current job I don’t get much time to play around with the new Shiny, and now that I’m on holidays I think I’ll be spending some more time hacking away at JavaScript and Windows 8 in general. Once I’ve completed the tutorials I might try my hand at something small, maybe a blog engine or something, to really solidify the knowledge in my head.

While I like what I’ve done so far in the WinJS world, I’d be very curious to see what people have used for things like navigation and layout outside the WinJS realm. A lot of the stuff I’ve seen in Windows 8 Land, WinJS makes ridiculously easy (which is a good reflection on the guys who developed the WinJS libraries), so once I’m done with the Windows 8 side of things I’m going to delve into the other areas and see what’s out there in terms of APIs and the like.

So this is my journey so far, and I hope you enjoyed the post. I think this might be Part 1 of a series… Not quite sure yet. Maybe a follow up post on my concerns with JavaScript development in general (because even with my new-found positive attitude, there are still a few!), and touching on some gotchas in the Windows 8 landscape.

Expect a follow-up post very soon with the resources I’ve used so far in my Windows 8 JavaScript learning journey.

L is for Legacy

Legacy is the dark beast of the development world. It’s not a ‘dark side’, as a ‘side’, whether good or evil, implies a balance. No, I’m talking about an actual living entity that will eat you alive the second you turn your back on it, and laugh maniacally while it does, for no other reason than it wants to and will get many LOLz.

All those rushed decisions made years ago in the heat of the moment, for whatever reason (deadlines, money, laziness, lack of perspective), come back to bite you so hard in the ass it brings tears to your eyes. Usually the person who wrote the application has moved on, and here you are, looking at a screen with an overwhelming sense of dread.

The biggest problems with Legacy applications are (in no particular order):

  • Learning Curve
  • Ancient Technology
  • The Gotchas
  • Documentation (or lack thereof)

Learning Curve:

This is the teething process of becoming proficient enough to actually write code for the application: knowing the structure of the code, and where to look when things go wrong (because, you know, it’s Legacy and will usually go horribly wrong). This can vary greatly depending on the size, scope and complexity of the application, as well as the unnecessary complexity which seems to feature as standard in all Legacy applications in the Multiverse. The learning curve is compounded when the application has complex business processes driving it, with a good percentage of the knowledge living in a select few people’s heads. This is the time when you simply need to grit your teeth and fumble your way through until you see a light. Somewhere. At least, that’s what I’ve heard.

Ancient Technology:

While the tech the Legacy application was written in might sort of resemble what you’re comfortable with, don’t make the mistake of thinking ‘ah, it’s similar enough, I’ll handle this no problem’. No. Just stop. While the technologies might seem similar, proficiently coding in whatever arcane tech is in question has its own bunch of little quirks which need to be *learnt*. Case in point: the ADO recordsets used in Classic ASP. RecordSet.MoveNext tripped me up more times than I’m proud of. Now, in a land of LINQ, NHibernate and even ADO.Net, such a basic thing as moving to the next item in the bloody recordset might *seem* simple, but with how much heavy lifting today’s technologies do for us it’s an easy thing to miss. And an annoying one, given that if you forget it, Classic ASP (in all its infinite wisdom) will spike the CPU to 100% and leave you scratching your head as to why your PC has suddenly become non-responsive. And don’t even get me started on switching entire languages (e.g. C# to VB.Net or VB6). All languages have their own little ‘ways’ of accomplishing things, and each is juuuuust different enough to feel like the equivalent of stubbing your toe on a really sharp knife.

(Side Note: If you are smart enough to hit the ground running with something like that, good for you. But the rest of us mere mortals require an ‘acclimatisation period’ 🙂 )

The Gotchas:

In a way, this is related to the Learning Curve, but it usually occurs well after you can actually find your way around a legacy solution. The Gotchas are the things that make someone with a small amount of experience in the Legacy application wisely nod their head and say “oh yeah, in this [insert really specific situation here], you need to [insert really obscure solution/hack/workaround]”. Which brings me to my final point…

Documentation (or lack thereof):

Ahh, documentation. Many a MB has been wasted on the internet discussing developers and their love/hate (OK, mostly hate) relationship with documentation. Personally, I don’t see the big deal… you write an application, you document it. Is it really that hard? I know people say “oh, I hate writing doco”… well, toughen up, princess. It’s the support developers that will have to deal with what you’ve written, and giving them NO starting point is inexcusable as a developer. Coming from a support background, I suppose I appreciate good documentation more than most, as I have felt the pain of being in an environment (several environments, in fact) with no documentation. But seriously, I think we as developers need to grow the heck up. Goddamnit, document your stuff. ESPECIALLY the obscure stuff.

(Side Note: If you do encounter an environment with no doco, don’t just leave it in that state. Use the ‘Boy Scout Rule’: leave the place in a cleaner state than when you arrived. Document a few things (even if it is in your own time), and let the relevant people know where you have put the doco. It’s just the right thing to do!)

Despite its frustrations, Legacy can teach you a lot (of good things, that is) about software development if you approach it the right way. The good things it can teach you are (again, in no particular order):

  • Good Documentation Practices
  • Learning how to learn
  • How not to do something

Good Documentation Practices:

This is fairly simple. You encounter an area that isn’t documented (or is documented poorly), and you fix it. And you fix it *right*. Legacy will make you document something in excruciating detail (more so than project development), as the emotional scars of what you’ve discovered give you a driving urge to make sure the next poor bastard who has to do something similar doesn’t go through the agony and pain you went through. Document templates come in handy here, as having a standard format will make you look awesome.

Learning How To Learn:

Legacy applications (and this applies to any application you haven’t written personally) are a great way to hone your skills in understanding the various solutions and implementations that people come up with. You gain the ability to understand complex (even unnecessarily so) situations, and it ensures your mind stays agile, as we are all in a career where you can stagnate very easily.

How Not To Do Something:

If you are looking through an application and you realise there is a great deal of pain (note: I said pain, not complexity) involved in what you are trying to understand, you probably want to avoid that way of doing things in future. This forms a ‘lessons learned’ aspect of software development… something I wish more of us did at the end of each Next Big Thing™, as there are always mistakes made along the way. There’s nothing wrong with mistakes (even massive ones), provided you learn from them and don’t repeat them.

While the agony of Legacy sometimes might not be worth the benefit, I think it’s good to be reminded of that beast waiting in the background, ready to gobble you up the moment you slip. At the very least, the thought of that dark beast will keep you on your toes when writing the Next Big Thing™.

Night all 🙂

Progress!

So it’s actually been a productive week. Even though it took slightly longer (by a day) than I’d hoped in my previous post, I still managed to get my main goals accomplished.

I now have a shiny new VM with the following:

  • Windows Server 2012
  • SQL Server 2012
  • Visual Studio 2012
  • BizTalk 2010 R2

Oh, what’s that you say? “But Adam, you said BizTalk 2013, you sly devil!”. Why yes… yes I did. Little did I realise that by attempting to go as bleeding-edge as I possibly could, I would cut myself… in that BizTalk 2013 hasn’t actually been released yet, on MSDN or retail. Silly me 😉 I did have a rather amusing head-scratching moment at that point.

Last night I started my own personal development again, which felt absolutely awesome. I'm going to be creating some sort of Windows 8 Metro app, with all the trimmings; snap-bar functionality, charms and the whole shebang! I began the Intro to Windows 8 App Development course on the Pluralsight website to get my feet wet in the Windows 8 world, and I am actually kind of excited… It's been a while since I've learnt an interesting new technology. On a side note, if you are even remotely interested in software development and you don't have a subscription to Pluralsight, GO GET ONE RIGHT NOW. NOW, I SAY! Heck, I'll even wait for you.

Ok good, glad we got that sorted out. Trust me, you’ll thank me later 😀

So overall this week I think I've accomplished the minor goals I set out to do, plus even a bonus goal (that of starting my personal development) on top! It's beginning to look a lot like Christmas… (I even gave you the karaoke version of that song! Gosh I'm nice).

My goal for the following week will be to get through the Pluralsight vid I'm currently watching, and to write a really basic 'Hello World' style application.

Night all 🙂

Of Projects and Hobbies

On the whole, I enjoy writing as an honest-to-goodness hobby. Once I start writing my brain goes into the zone and I'm madly typing a thousand words or so in a very short time, and usually it's fairly coherent. Lately I've been taking notes on potential future personal projects, but I've been seriously struggling to find the motivation to start even the initial investigations required for these projects. And don't even get me started on my technical blogging :\

So, with that in mind, I've decided to start accelerating things a little bit by using an alternate approach to 'CODECODECODE'. Firstly, as a mixture of hobby and personal development, I've started a gaming blog. I am a HUGE gamer… my passion for gaming quite often overrides my common sense, so it's definitely a subject I know a lot about and have a lot to say on. The point of this is to get into the habit of writing. I'm a firm believer that writing is a really useful skill for a developer, as we are required to be coherent in a number of different contexts (writing specs, talking to clients, etc.), and a blog is a good way to hone those communication skills a lot of developers seem to lack. So while it's not really technical as such, at the very least it's giving me practice in making my thoughts about various subjects clear and focused.

The second prong of my two-pronged attack is these personal projects I keep talking about. I'm going to start setting REALLY small goals for these, as I tend to get overwhelmed by the scope of my ideas, killing them before they begin. For instance… by the end of this week I plan to have a new dev VM set up, with Windows Server 2012, VS2012, SQL 2012 and BizTalk Server 2013. If I can have that accomplished, I will be a very happy camper. And next week, perhaps some initial investigation into the simpler projects I've thought of. Next week is a long way away, so I'll leave that up in the air for now (you're on the edge of your seat right now, aren't you? 😉 ).

Feel free to share your experiences in tackling your own personal development below, I’d be happy to hear someone else’s approach to this.

MVR (Most Valuable Resource)

The developers of today live in a Golden Age… that is to say an Age of Google, where developers joke (not so jokingly) about Google being the only reason they still have a job. Think about it; an entire generation of developers relies so much on 'The Google(TM)' that any time we get any sort of error or issue it's BAM, straight onto Google (or StackOverflow). What would happen if tomorrow both StackOverflow and Google died instantaneously, never to be revived (ignoring the fact that someone else would most likely knock up a replacement pretty quickly)?

The shit would hit the fan, hard, and that safety net for developers everywhere would be gone.

Over the past year or so I've tried not using Google and StackOverflow for my troubleshooting issues (see here), and it's been an interesting experience. I've delved into areas I otherwise wouldn't go into (because once we see that magical fix on Google or SO, we don't look into it any further because we need the problem fixed NOWNOWNOW), and while it has been more difficult and frustrating, it's helped me actually understand the problem at hand. And to be honest, while it does take me a bit longer to get things done, I like it because I actually learn something, and better yet – and this is the part most people will struggle with – retain something. In the Google Age of 'Search and Forget' (i.e. search for a problem, fix it and then forget it), information retention is sorely lacking. We (and I'm guilty of this myself) Google the same things over and over, because our minds filter out the 'quick-fix' information we sift through on a daily basis.

Originally this post was going to be about what you, as a developer, use as your most valuable resource/s and how much we love them, but once I started typing I kind of went off track a bit. But you know what, I think I’m going to keep it like this, and leave you with a thought experiment…

The next time you have an error or problem you need resolved, ask yourself what you would do if your MVR wasn't around to help you out. How would you go about troubleshooting or fixing the problem? Instead of jumping onto your MVR site, actually take a step back and try to look at the problem at hand. Research the concepts around what's going wrong (you can use Google for this!), or what you're attempting to do, and compare it to what you've done. Who knows, you might just learn something you didn't expect 🙂


Various Issues on Configuring a BizTalk Installation

Bit of a hiatus since my last post, but life has been pretty busy lately. I started a new job recently (yay!), so I'm currently settling in, which hasn't left me with much time for personal development.

I’ve been asked to do a basic presentation on BizTalk, introducing my new team to its basic concepts and what it's used for. I'm currently in the process of setting up a couple of demos, and (as always) BizTalk is giving me fun and games getting the darn thing set up. Again, as always, it's my fault rather than BizTalk's, so here are a couple of things I've just run into which I thought might help someone out there.

The first of these is the following error:

Failed to generate and backup the master secret to file: C:\Program Files\Common Files\Enterprise Single Sign-On\SSOC7B0.bak (SSO) : Access Denied.

The cause of this is (thankfully) quite simple: the account you've just configured BizTalk to use doesn't have the necessary privileges on the machine to complete the configuration of the SSO component of the BizTalk installation (provided you've selected SSO as one of the installed features). So once I'd added my BizTalk user to the Administrators group, that error message went away. Interestingly, when I went to re-configure the BizTalk installation it warned (!) me that the account had administrator privileges and posed a security risk to the box. So this makes me think the BizTalk service account doesn't actually need admin rights, but instead a particular subset of permissions slightly more elevated than normal. I don't particularly care right now, as I'm just trying to get the bloody thing up and running, so that's a post for another day 🙂
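For reference, here's a minimal sketch of the "add the account to Administrators" step from an elevated command prompt. The account name below is just a placeholder; substitute your actual BizTalk service account (and remember this is the blunt-instrument fix, not the least-privilege one):

```shell
REM Add the BizTalk service account (placeholder name) to the local Administrators group
net localgroup Administrators MYDOMAIN\svcBizTalk /add

REM List the group's members to confirm the account was added
net localgroup Administrators
```

These are standard Windows commands, but since BizTalk itself warns that an admin-level service account is a security risk, treat this as a get-it-working step rather than a production configuration.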

The second error was surprising, and was one of the validation errors BizTalk scans for when beginning a new configuration:

SSO DB already exists [this isn't the exact wording, as I'm trying to do this from memory].

Ok, so this part is really important: restart your SQL Server instance. Until you do this, SQL doesn't play nice with the SSO DB (or, more likely, the newly created SSO Windows service isn't playing nice), so restarting the SQL instance clears any crap left over from your attempted install. That should then let you drop the DB to avoid any further issues; otherwise there will be services holding connections to the SSO DB from the failed install.
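The cleanup above can be sketched roughly like this from an elevated command prompt, assuming a default SQL instance and the default SSO database name of SSODB (check what your failed install actually created before dropping anything):

```shell
REM Stop the Enterprise SSO service so it releases its connections to the SSO DB
net stop ENTSSO

REM Restart the default SQL Server instance to clear any leftover state
net stop MSSQLSERVER
net start MSSQLSERVER

REM Drop the leftover SSO database using Windows authentication
REM (default DB name assumed; verify yours first)
sqlcmd -E -Q "DROP DATABASE SSODB"
```

With the stale database gone and the services restarted, re-running the BizTalk configuration should get past the "SSO DB already exists" validation error.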

The third (and last) of these errors was:

Failed to generate and backup the master secret to file: C:\Program Files\Common Files\Enterprise Single Sign-On\SSOC7B0.bak (SSO) : Access Denied

Look familiar? Yep, the exact same error as the first one, but the cause of THIS one was different, and far, far more annoying. I actually got to the point of uninstalling BizTalk several times (along with Enterprise SSO), with multiple reboots and SQL instance restarts in between. Nothing. It kept failing at the same point (i.e. the start). So I started looking a bit deeper into the log file, and found a permissions error related to the local account I was running as (not the BizTalk service account), saying access denied. "Strange", I thought. "I'm an admin on this box, BizTalk even allowed me to restart the WMI service!" (this is amusing to me, because trying to install BizTalk on my local work PC was a bloody nightmare… that's a story for another day).

Then I had an idea… when I originally began to configure my BizTalk installation with my service account, it spat an error at me saying I couldn't use an account with no password to configure BizTalk. So I created a password for it, and the error went away. I realised at this point that Server 2008 R2 was logging onto my main account WITHOUT a password. So obviously SSO must have detected that my local account didn't have a password, and as such it wasn't playing nice (hence why BizTalk spits an error when you try to configure a service account for it with no password). It would have been nice for BizTalk to tell me that at the start, but it's ok. Once I set a password for my local account (which, silly me, I should have done from the start), the configuration worked perfectly! I've now blogged this hellish issue, so it won't be an issue anymore 🙂 (or at least not as big of an issue, haha).

Hope this helps some poor BizTalk dev out there.
