Line Endings.

3 people arguing over line endings.

I had a lengthy article here -- browsers, typewriters, javascript blah blah blah, cultural tolerance to backward incompatibility in the Mac and Microsoft worlds, blah blah blah -- but really, the Hanselman pretty much beat me to it, and all I've got left is a cartoon.

(As a derivative of a GPL'd work ("Tabs."|"Spaces."|"Both."), the above image is licensed to you under version 2 of the GNU General Public License.)

 

**This** is how you pivot

Startups love to talk about 'Pivoting' -- those sudden changes in strategy, right angle turns that take you from obscurity to success.

"Burbn" pivoted from a location-checkin app to a stylized photo-sharing app and became a billion-dollar company: Instagram!

Doug and Dinsdale Piranha

But to find the masters of pivoting, we need look no further than the Monty Python sketch, The Piranha Brothers. This is pivoting done *right*!

At the age of fifteen Doug and Dinsdale started attending the Ernest Pythagoras Primary School in Clerkenwell. When the Piranhas left school they were called up but were found by an Army Board to be too unstable even for National Service. Denied the opportunity to use their talents in the service of their country, they began to operate what they called 'The Operation'. They would select a victim and then threaten to beat him up if he paid the so-called protection money.

This must not have been a particularly successful stratagem, for we are about to learn that the Piranha Brothers chose to 'pivot'.

Four months later they started another operation which they called 'The Other Operation'. In this racket they selected another victim and threatened not to beat him up if he didn't pay them.

This strategy also met with limited success and another pivot was in order.

One month later they hit upon 'The Other Other Operation'. In this the victim was threatened that if he didn't pay them, they would beat him up.

This for the Piranha brothers was the turning point.

 

Art of the command-line helper

The scariest code I ever wrote was the dialog in NimbleText that helps you use the command-line.

Much smack has been written in the past about confusing command-line helpers in other apps, so I set out to build this dialog with great trepidation in my heart.

Joseph Cooney has laid into two particular apps, a gui for wget, and a gui for robocopy. Even Jeff Atwood had a stab at wGetGui.

Here's what they looked like:

[Screenshots: GUI for wget; GUI for Robocopy]

And here's a typical user response upon first encountering such a command-line helper:

I downloaded both these apps and tried them out.

Okay, the kindest thing you can say is that they are comprehensive, and with their use of tooltip text they do offer a little more help than the screenshots would suggest.

But they still give an immediate slap in the face to the end user. Something I want to steer away from.

So command-line helpers are a challenge. And to increase the pressure a little more: the command-line feature in NimbleText is only unlocked if you buy a license. If I'm expecting this feature to be worth money, then I really have to not screw it up.

What I did.

The first thing was to use descriptive labels, instead of verbatim option names. Instead of having a checkbox named "--rawdata" I'd have a label that said "Raw data". While this is only marginally more readable, it hopefully decreases the effect shown above.

Next I added a textbox at the foot of the form, where the command-line you've created is written, live, so you can see the output of your furious clicking.

There's a Save button, so you can save your command-line straight to a batch file (it works from PowerShell too); a Copy button, to put the command-line onto your clipboard; and an Execute button, which launches a cmd.exe process and tries out the command-line immediately.
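Under the hood, a live command-line textbox like that is just a mapping from the dialog's controls to a string. Here's a minimal sketch of the idea in JavaScript (the option names and flag syntax here are hypothetical, not NimbleText's actual switches):

```javascript
// Build a command-line string from a set of dialog options.
// Option names and flag syntax are invented, for illustration only.
function buildCommandLine(exe, options) {
  var parts = [exe];
  for (var name in options) {
    var value = options[name];
    if (value === true) {
      // a ticked checkbox becomes a bare flag
      parts.push("--" + name);
    } else if (value !== false && value != null && value !== "") {
      // a filled-in textbox becomes a quoted flag=value pair
      parts.push("--" + name + "=" + JSON.stringify(String(value)));
    }
    // unticked/empty options are simply omitted
  }
  return parts.join(" ");
}

// e.g. buildCommandLine("nimbletext.exe", { rawdata: true, pattern: "out.txt" })
```

Re-run this on every click and the textbox always reflects exactly what you've selected.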

Other than that, I just sweated the small stuff. Alignment, spacing, capitalization, tab order, tool tips, everything as consistent as possible.

What I probably failed to do was give The Dialog any breathing space.

(And looking at the screenshot now I see a slight inconsistency with spacing, which should be fixed by the time you look at the application itself.)

Here's what I came up with:

Any suggested improvements? Please send them in.

 

Go and read a book.

I tweeted this yesterday, but wanted to discuss it in a little more depth than 140 characters allow:

"Reading a book" is a classic important but non-urgent task. When your lifestyle lacks any book time, you know you're in the wrong quadrant.

Merrill Covey Matrix

This is a reference to the four quadrants matrix (urgency versus importance) from the book 'First Things First' by Stephen Covey et al.

The idea is that many of the things we do can be ranked as either important or unimportant, and as urgent or non-urgent.

It's a neat and enlightening concept, but there's something utterly impractical about it.

A response from Dan Puzey summed it up well:

The real problem is that "organizing my life into quadrants" always seems a non-important non-urgent task...

Maybe that's why I've always felt uneasy about the four quadrants idea.

Don't spend time categorizing everything into one quadrant or the other. Don't get caught up in grandiose and abstract questions like "Do I have my life values in order? Am I doing first things first every day?"

Just ask yourself the simple, practical question: "Have I read any good[1] books lately?"

Your answer sums up a hell of a lot about how your life is going. If you find you're not reading any good books, then you know right away that your life is out of balance.

Now stop staring at your navel, and go read Slaughterhouse-Five.


  1. If all you've read lately are comic books, by the way, then the answer to the question is an emphatic 'No'.


Image from Wikipedia: Merrill Covey Matrix

Bonus unrelated wikipedia link: Four-Quadrant movie.

 

Slurp up mega-traffic by writing scalable, timeless search-bait

In which I follow the advice of Patrick McKenzie to try and get my little software products into the eyeballs of a whole new audience.

sunday night blues, micro-ISV style

So, it was one of those lazy Sunday evenings when a micro-ISV guy does what he does best: he looks through the Google Analytics of his products, desperately trying to work out why he is not yet a millionaire, desperately trying to find the tiny tweak that will ensure he has no need to head to work in the morning, or ever again. (This is known as 'Sunday evening blues, micro-ISV style'.)

When I looked at the search traffic for both sites (TimeSnapper and NimbleText), something leapt out at me, the way a tiger in the wilds of India might jump out at a plump looking passerby.

The only search terms people were using to find TimeSnapper were terms like "TimeSnapper", "Time Snapper" or related misspellings of the product name.

Noticeably absent from the keyword traffic was every single person in the world who hadn't already heard of the product from some other source. No one looking for "My browser crashed, how do I recover my work?" or "How do I make timesheets easier?" or "How can I understand my own bad habits?" or "Continuous Screenshot Taking", and so on for a million other search terms. (Hint: I just demonstrated the SEO technique of google-bombing oneself ;-) ). So my website -- That Diligent Little 24-Hours-a-Day, 7-Days-a-Week Sales Guy -- wasn't drumming up one iota of new sales.

And the same for NimbleText. A tiny trickle of people would turn up, but only via search terms like "NimbleText", "Nimble Text" or "World's Simplest Code Generator" (the product's original name) -- and no one else.

So I asked myself, as I sat there on that uneventful Sunday eve: How do I make it happen?

In times like this, I always turn to the writings of Patrick McKenzie (aka patio11 on Twitter and Hacker News). For SEO he recommends writing 'evergreen' and 'scalable' content.

'Evergreen' content is timeless content: stuff that isn't dependent on today's news cycle or the latest fashion.

'Scalable' content is the sort of content you can write a lot of. The sort of guff that doesn't take a great deal of soul searching.

In relation to NimbleText I easily came up with a basic idea for 'scalable' content generation. Normally, when writing about NimbleText I think about the features, and there's a finite amount I can write. If instead I were to write a short article on every possible specific situation where NimbleText could be used, then you'd be looking at a limitless source of article topics. Think of every type of code it can generate, every example piece of HTML it can produce, every piece of SQL it can concoct, you would be looking at an endless stream of simple, albeit quite repetitive articles. You could churn out such articles at a pretty fast rate. (NimbleText itself could even help with this task.)
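That title-churning idea can literally be sketched in a few lines of JavaScript (the topics below are invented examples, not an actual backlog):

```javascript
// Generate an endless stream of article titles by crossing a template
// with a topic list. The topics here are made-up examples.
var topics = ["insert statements", "HTML tables", "CSS color swatches"];

var titles = topics.map(function (topic) {
  return "How do I generate " + topic + " with NimbleText?";
});
// titles[0] === "How do I generate insert statements with NimbleText?"
```

Swap in a longer topic list and you have a backlog of bus-ride-sized articles waiting to be written.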

Articles such as 'How do I generate insert statements?' may not be the sort of thing that sets the world on fire -- they're never going to attract a viral influx of rabid fans -- but hopefully they'll pander to some fine strand of the long tail of search traffic, and, over time, bring in a trickle of fresh visitors, potential paying customers.

This strategy is a sure winner from an SEO point of view. Wikipedia is essentially nothing but a giant engine built for the creation of Scalable Evergreen content. No wonder it takes first place for just about any search you perform.

So here's the short list of NimbleText-related articles I've written on the bus, since coming up with this strategy:

SQL Master Class (for NimbleText)

Create HTML Automatically (with NimbleText)

It takes less than one bus ride to write such an article, and they're only getting easier. I've got a backlog of thirty such topics and I'm sure with a more concentrated effort I could grow this to many more. Is it worth it? I'm unconvinced, but I'll look at the analytics over time and see what happens.

I've been running this experiment for a few weeks now. Already I've started to see people arrive via new search queries suited to the articles I've written. The volumes are hardly mega, but the littlest steps bring the most satisfaction.

 

Do *NOT* try this Hacking Script at home

From this answer at stackoverflow, I read:

I saw this one in a bollywood movie. Our hero was busy romancing with his gf until his friend informs him about upcoming college exams. So, he decides to get examination papers by hacking into his college network. This is how he goes about it:

Enters Lab. Opens up a command prompt window. Types - Hack System

And that's it!!...A window pops up- System Hacked

He gets access to all papers and returns to his gf for a romantic song :)

Mind blown. I just had to try it out:

C:\temp>copy con hack.bat
@echo %* hacked!!!!
^Z
        1 file(s) copied.

C:\temp>hack system
system hacked!!!!

C:\temp>hack internet
internet hacked!!!!

C:\temp>hack FBI
FBI hacked!!!!

Drunk with power, yet trembling in terror, I'm sitting here with the door barricaded, certain the feds are going to burst through it at any moment.

Do *not* try this at home.

 

The 'Should I automate it?' Calculator

Should I automate it?

Here's a clever calculator that lets you answer the age-old question: "Is this thing worth automating?"

I put this together a few days ago and I just keep needing to use it! Situations keep coming up where I'm gobsmacked to find that our 'gut-feel' about the relative merits of two approaches is just not borne out by the simplest back-of-the-napkin calculation.

The neat thing about this calculator is that it distills the choice down to its most crucial elements, so you can come up with an answer very quickly.

Once you've plugged in some values and gotten your answer, you can easily share it with those chumps in management or with a clever colleague — click the 'Save this result' button, and you'll be given a url that you can send around, preserving all the values you plugged in, allowing others to tinker with your calculation and verify everything for themselves. (Implementing that bit was the funnest of the fun. Remind me to show you the 'GetHashyCode' extension method.)

When you take a moment to play with the figures, there's a bunch of things that leap out at you.

First up — this rather obvious result:

"If you're only going to do it once, it's not worth automating."

That might be quite a shock to some of my automation-happy friends, but I'm afraid the result is unequivocal.

Second: it's amazing how much value you can add by automating something that happens a lot.

Imagine your company has a timesheeting system that takes 10 minutes longer to complete than it should. It's used every week by 20 people, so over the next 2 years it will be filled out approximately 2000 times. You work out a way to save those 10 minutes... how much effort should you put into making this improvement? Should you bail out if you can't fix it in 1 day? 2 days? 3 days? Here are the figures. It turns out the break-even point is 430 hours of work — around 11 weeks! So yes, if it's going to cost you a whole day of work to improve the timesheeting system — go ahead and do it! You'd be insane not to!
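Stripping out hourly rates, the raw arithmetic behind that example is tiny. A simplified sketch (the real calculator also weighs the hourly rates of the people involved, which is how its break-even point can differ from this naive figure):

```javascript
// Hours of automation work that a saving "pays for", assuming
// (simplistically) that everyone's time is worth the same per hour.
function breakEvenHours(timesPerformed, minutesSavedEachTime) {
  return (timesPerformed * minutesSavedEachTime) / 60;
}

// 2000 timesheet entries at 10 minutes saved each:
// roughly 333 hours of total time saved.
```

Even this crude version makes the headline lesson obvious: frequency dominates. Multiply a small saving by a big count and the budget for automating it balloons.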

Jan Ernst Matzeliger (1852 - 1889) Inventor and Businessman

Of course, the benefits of automation are more than just the time it can save. When a task becomes free to do it changes the nature of the value proposition. Read about the amazing impact of Jan Ernst Matzeliger — a brilliant automator who revolutionised the shoe industry.

The calculator could be simpler, or it could be more complex.

A simpler version would remove the 'hourly rate' fields — so the answer would be in just hours.

A slightly more complex version would allow there to be a different hourly rate for the person who cleans up when manual work goes wrong. This is realistic. Clean up crews can be expensive. Also the costs of maintaining the automation could be factored in. Cheap automation solutions tend to be very brittle.

Okay — I'm all out of discussion about this little tool. Use it, share it, automate something today.

 

 

aaron swartz: the early works

I can't stop thinking about, wondering about, caring about, reading about the tragic life of Aaron Swartz. There's a lot I want to write. I think I could fill a book just trying to process what it means, what is an appropriate response, what's it all about. But I'm not going to attempt that.

I've been reading Aaron's blog, on and off, for over ten years. Ten years is a long time. And by my own estimates, those particular 10 years were the longest in history.

Long ago I printed out his HOWTO: Be more productive for multiple re-reads and have returned to it many times since.

I wanted to go back, right back, and try to work out the earliest stuff of his that I read. And I wanted to watch the progression of his ideas as they emerged.

From his blog 'raw thought' -- there's a link to 'Older Posts' which takes you to 'the archive' (grouped by theme).

From there is a link to 'Full Archives' which takes you to the reverse-chronological archives.

These stretch back to May 2005 (the oldest entry on that page is about a server crash, after which he had to restart his blogging). Under the so-called 'Full archives' section there's no link to anything prior to May 2005.

Now I'm certain he was blogging long before that -- I'm certain I was reading his blog long before that.

Is the stuff before that server crash lost? I hoped not, so I set about locating it.

I clearly remember his powerpoint remix (from 2003!) - it got published in a book of Joel Spolsky's - and I soon tracked that down.

Taking a look at the URL suggests a numbered blogging system (from Dave Winer's Radio Userland), and from there it's easy to find all of his prior blog entries.

After a bit of binary searching I found what looks like Aaron's first Hello, world, with article id of '81'.
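'A bit of binary searching' here just means probing article URLs and checking for a 404: if you assume ids below some threshold are all missing and ids at or above it all exist (a simplification, since the real archive has gaps), you can home in on the first one in O(log n) requests. A sketch, with the HTTP probe stubbed out as an injected predicate:

```javascript
// Find the smallest id for which exists(id) is true, assuming the ids
// form a run of misses followed by a run of hits (a monotonic predicate).
// In real life 'exists' would be an HTTP request; here it's injected.
function findFirstExisting(lo, hi, exists) {
  while (lo < hi) {
    var mid = Math.floor((lo + hi) / 2);
    if (exists(mid)) {
      hi = mid;       // mid exists, so the first hit is at mid or earlier
    } else {
      lo = mid + 1;   // mid is missing, so the first hit is later
    }
  }
  return lo;
}

// findFirstExisting(1, 2000, function (id) { return id >= 81; }) === 81
```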

So I wrote a PowerShell script to download everything (I hardly think aaronsw would object!!) and found that the articles go from number 81 up to 1691, with a few gaps.

Here's the script.

# Downloads aaron's early stuff
# i've done this the hard way because i didn't have time to do it the easy way.

$client = new-object System.Net.WebClient

$nums = 81..1691

#detected up to 1691  (April 26, 2005)
$nums | % {
    $url = [string]::Format( "http://www.aaronsw.com/weblog/{0:000000}",$_)
    $path = join-path $(get-location) ([string]::Format("aaronsw_{0:000000}.html",$_))
    Write-Host "downloading " $url " to " $path
    $client.DownloadFile( $url, $path )
    
    # pause for 4 seconds between downloads, to give the server time to exhale.
    Start-Sleep -Seconds 4
}

Then I wrote a script to walk through those files and create an archive page in the same style as Aaron's other archive pages.

It's not pretty code, but it got the job done...

dir .\aaronsw_*.html | % {

    #extract the filenumber out of the name... i should've made this easier.
    $num = $_.Name.Split("_")[1].Split(".")[0] 
    
    #calculate the target url for this file
    $url = [string]::Format("http://www.aaronsw.com/weblog/{0}",$num)
    
    #load the file 
    $article = gc $_.Name

    #grab the title
    $titleRegex = [regex]'h1>(.*)</h1>'
    $title = $titleRegex.Match($article).Groups[1].Value
    
    #grab the time
    $timeRegex = [regex]'<p class="posted">posted ([^(]+) \('
    $time = $timeRegex.Match($article).Groups[1].Value
    
    #output the url, title and time, as html
    $item = [string]::Format('<p><a href="{0}">{1}</a> ({2})</p>',$url,$title,$time)
    $item >> archivePreCrash.html
}

So the result is this fairly complete list of pre-server crash articles:

 

aaronsw archive: early works

 

Now this takes us up to April 2005. And the post-crash articles start in May 2005, so it probably means that everything's accounted for, except maybe a month's worth of blogging. There are some missing articles within that period, and some lost stuff. I can see that he restored it from the wayback machine where possible, but sometimes there was nothing to grab.

There are a lot of gems in there (and of course a bit of drivel: this starts when he was 15). I was going to pull out a few quotes, but I'd rather let you do that for yourself. He was a thoughtful guy. It'd be great if he was still around.

 

Finding (and removing) duplicate files on your hard drive

I generally hold to the philosophy that hard drive space is cheap, and your time is too valuable to waste on optimising hard drive space.

But one of those fun holiday activities, reserved for times when procrastination is at its peak, is to thoroughly clean up a hard drive and make extra room available.

My usual technique is to use SpaceSniffer (found courtesy of Scott Hanselman's tool list) but this time around I suspected that the biggest waste of space was caused by duplicate files (particularly music and photos) taking up a lot of space.

When confronted with a simple problem, the smart guys look for pre-existing solutions. But not me.

I like to employ something I call the 'my way is the best way' philosophy. Other people call it 'not invented here' syndrome, but I prefer to call it 'my way is the best way' because... well, my way is the best way.

thinking about duplicate files

Analysis is more fun than Action

Most of the duplicate-finding tools in this category have a feature where they will automatically delete all but one copy of each duplicate file found. That's not something I'm willing to do, at least not automatically. What I wanted was to create the full list of files, and then analyse it, for example in NimbleText. I wanted to create the list of files and then stand back, thoughtfully stroking my long beard, just like Pai Mei from Kill Bill.

So I embarked on a special project, codenamed Dinomopabot, a name recommended by my 5-year-old daughter, who is very clever at these things. The final result is now named 'Dupes.exe': a command-line tool for finding duplicate files on your hard drive.


You can browse, clone or fork the source-code, at Bitbucket:

'Dupes' sourcecode



Or download the executable, ready for use:

Download 'dupes.exe'


Here's the built-in help text:

Dupes Find duplicate files, by calculating checksums.

Usage: Dupes.exe [options]
Tip: redirect output to a .csv file, and manipulate with NimbleText.

Options:
  -p, --path=VALUE           the folder to scan
  -s, --subdirs              include subdirectories
  -f, --filter=VALUE         search filter (defaults to *.*)
  -a, --all                  show ALL checksums of files, even non-copies
  -?, -h, --help             show this message and exit

For each file it encounters, Dupes generates a SHA-256 checksum with which to compare files. They're short and catchy; they look like this:

271EC103B44960B6A4C6A26FE13682A855133D3D95AC8ED81D7C90FA41571D1F

Cute hey? Almost adoption-worthy.

And for every member of a duplicate file set that the tool encounters, it spits out a row with four columns, separated by bar symbols ('|')

The four columns are:

CheckSum       Sha256 checksum of the file. (Hint: sort by this to get all duplicates together)
DuplicateNum   0 for the first file in the duplicate set, 1 for the second file, etc.
Filesize       In bytes. (Hint: sort by this, if you want to tackle big files first)
Path           Full path and filename for this duplicate.

So you run dupes.exe and direct the output into a textfile (using > [filename]), and from there you can manipulate it (with NimbleText for example), to create a batch file that carefully deletes all the hand-picked, unwanted duplicates of your choice.

Here's an example of a NimbleText pattern you could use with the output of Dupes. This will create a batch file that deletes all but the first copy of each file:

<% if ($1 > 0) { 'del ' + $3 } %>

That pattern is just a piece of embedded javascript (you can embed javascript in NimbleText patterns) that says "if column 1 is greater than Zero, then output the text 'del ' plus the text from column 3." Column 1 is the duplicate number, so it will be greater than zero for all but the first instance of the file. And column 3 is the full path and filename of the duplicate.
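To see what that pattern actually does, here's the same logic written as plain JavaScript, applied to a few invented rows of Dupes output:

```javascript
// Apply the "delete all but the first copy" logic to Dupes output rows.
// Columns: checksum | duplicateNum | filesize | path (sample data invented).
var rows = [
  "ABC123|0|1024|C:\\music\\song.mp3",
  "ABC123|1|1024|C:\\backup\\song.mp3",
  "ABC123|2|1024|C:\\old\\song.mp3"
];

var batch = rows
  .map(function (row) { return row.split("|"); })
  .filter(function (cols) { return Number(cols[1]) > 0; }) // keep the first copy
  .map(function (cols) { return "del " + cols[3]; });      // delete the rest
// batch === ["del C:\backup\song.mp3", "del C:\old\song.mp3"]
```

Which is exactly the batch file you'd want: every copy deleted except the first.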

Thank you. I hope someone finds this thing useful. Also, please imagine suitably gigantic and terrifying disclaimers attached to this code. I wrote it after all.

 

Harvey, a .net chat server built with RabbitMQ

I've turned into a rabid RabbitMQ fan in the last week or two, though so far I've only scratched the surface of what this thing does.

Below I'm going to walk through the code for a chat service, built with .net, that uses RabbitMQ for sending and receiving messages. But first a short discussion of Message Queues, RabbitMQ, and how to get this rabbit up and running.

A lengthy discussion is out of scope for this bus ride, but basically:

A message-queue is a piece of middleware for asynchronous communication. (System A sends messages to System B).

MQs can be optimized for performance, reliability, scalability, or any other '*ility' you care to mention.

There are lots of them, and they make different trade-offs. Originally they were expensive proprietary technologies (e.g. IBM's MQSeries), but with the rise of standards in this area various compelling open-source offerings have arisen.

RabbitMQ is built on Erlang. I don't want to digress into sounding like one of those Erlang-douchebags, but Erlang is a good match for an MQ.

Erlang's initial purpose was to create telecommunications software that was (a) super reliable and (b) hot-swappable. That's a perfect fit for MQ software. It can spin up extra processes without all the heavy lifting of using extra threads, so where a normal OS thread allocates a few megs of memory, Erlang gets away with a few bytes. Extraordinary stuff.

Having said that, the biggest problem with RabbitMQ is that it's built on Erlang. Thus, to install it on your Enterprise-controlled Servers at BigCo you'll need to get Corporate IT's permission to install yet another VM/Platform. Good luck sweet talking those guys. They do *love* to kick up a fuss.

Up and running with RabbitMQ in Under 3 minutes

Everything I'm going to cover in this section is covered in part 1 of Derek Greer's RabbitMQ for windows series. So I'll go extra quick.

To setup a host server for your chatting you'll need to...

  1. Install erlang: http://www.erlang.org/download.html
  2. Set the ERLANG_HOME environment variable to point to the erlang folder under program files. e.g. C:\Program Files\erl5.9.2
  3. Install rabbitMQ: http://www.rabbitmq.com/download.html
  4. Enable the rabbitmq management plugin. from an elevated cmd prompt:
        Go to rabbit's sbin folder, e.g. %programfiles%\RabbitMQ Server\rabbitmq_server-2.8.7\sbin, and run:
        rabbitmq-plugins.bat enable rabbitmq_management
  5. To activate the management plugin, stop, install and start the rabbitmq service:
        rabbitmq-service.bat stop
        rabbitmq-service.bat install
        rabbitmq-service.bat start

  6. Finally, visit http://localhost:55672/mgmt/ and see that your rabbitMQ instance is alive.

It's *that* simple.

Worlds easier than most other installs. Much easier than installing a database, or keeping Adobe Reader up to date.

The only other thing you need to do to become a certified .net RabbitMQ developer is use NuGet to add a reference to the RabbitMQ.Client package.

Introducing Harvey (the simple .net chat client)


Harvey Source Code Here.


Once your rabbitMQ service is up and running, everyone on your network can grab Harvey.exe and join in one colossal chat room for all their communication purposes. Every message is delivered to every listener.

The architecture is simple. When you run Harvey.exe it creates two channels: one for sending, one for receiving. The send channel is connected to a fan-out exchange on the server. Each Harvey client also creates its own queue on the server (identified by a GUID), which is bound to the aforementioned fan-out exchange. Thus, when any client sends a message, every client receives it.
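The fan-out behaviour is the whole trick: one exchange, one queue per client, every published message copied into every bound queue. Here's a toy in-memory model of the idea (this is just an illustration of the semantics, not the RabbitMQ client API):

```javascript
// Toy model of a fan-out exchange: publishing copies the message to
// every bound queue, so every client sees every chat line.
function FanoutExchange() {
  this.queues = {};
}
FanoutExchange.prototype.bind = function (clientId) {
  this.queues[clientId] = [];           // each client gets its own queue
};
FanoutExchange.prototype.publish = function (message) {
  for (var id in this.queues) {
    this.queues[id].push(message);      // fan the message out to all queues
  }
};

var exchange = new FanoutExchange();
exchange.bind("alice");
exchange.bind("bob");
exchange.publish("leon > hello, everybody");
// both alice's and bob's queues now contain the message
```

RabbitMQ does the same thing, except the exchange and queues live on the server and survive the clients.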

Let's step through it.

Set up a channel to the fanout exchange

(Just let it wash over you, this will all make sense by the end)

In form_load we set up everything we need for sending messages. We need a channel to the exchange. The exchange is of type 'fanout', meaning it will send all messages to all queues that are bound to it.

When we 'declare' the exchange, the exchange will be created on the server if it doesn't already exist. Otherwise we will use the existing exchange that has already been declared for us.

In form_load:


            var connectionFactory = new ConnectionFactory
            {
                HostName = "localhost",
                Port = 5672,
                UserName = "guest",
                Password = "guest",
                VirtualHost = "/"
            };

            connection = connectionFactory.CreateConnection();
            channelSend = connection.CreateModel();
            channelSend.ExchangeDeclare(exchangeName, ExchangeType.Fanout, false, true, null);

Sending a message

Assuming we have a textbox (txtMessage) for entering the message we want to post, here's what happens when we click send:


            string input =  txtUserName.Text + " > " + txtMessage.Text;
            byte[] message = Encoding.UTF8.GetBytes(input);
            channelSend.BasicPublish(exchangeName, "", null, message);
            txtMessage.Text = string.Empty; 
            txtMessage.Focus();

That was nice, but we probably want to receive messages back as well -- a chat is not just one-way.

Set up a channel to your own queue, for receiving.

We declare a queue, a brand new queue that no one has declared before, and bind it to the fanout exchange.

So messages sent to that exchange will go to this queue, on the server. And we've got a channel to the queue.

(This bit also happens in form_load)


            channelReceive = connection.CreateModel();
            channelReceive.QueueDeclare(clientId, false, false, true, null);
            channelReceive.QueueBind(clientId, exchangeName, "");

Receiving a message...

The very next thing we do in form_load, is start a thread for listening to messages on that channel:


            receivingThread = new Thread(() => channelReceive.StartConsume(clientId, MessageHandler));
            receivingThread.Start();

(Note: forgetting to call .Start() cost me more debugging time than anything else in this whole learning experience.)

The following 'StartConsume' extension method was lifted from one of Derek Greer's RabbitMQ articles:

We block the thread waiting for a Dequeue to happen.


        public static void StartConsume(this IModel channel, 
                     string queueName, Action<IModel, DefaultBasicConsumer, BasicDeliverEventArgs> callback)
        {
            QueueingBasicConsumer consumer = new QueueingBasicConsumer(channel);
            channel.BasicConsume(queueName, true, consumer);

            while (true)
            {
                try
                {
                    var eventArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
                    callback(channel, consumer, eventArgs);
                }
                catch (EndOfStreamException)
                {
                    // The consumer was cancelled, the model closed, or the connection went away.
                    break;
                }
            }
        }

And the 'MessageHandler' delegate, above is as follows:


        public void MessageHandler(IModel channel, DefaultBasicConsumer consumer, BasicDeliverEventArgs eventArgs)
        {
            string message = Encoding.UTF8.GetString(eventArgs.Body) + "\r\n";

            txtConversation.InvokeIfRequired(() =>
            {
                txtConversation.Text += message;
                txtConversation.ScrollToEnd();
            });
        }

InvokeIfRequired is just a useful winforms extension method for hopping from a background thread onto the gui thread, taken from this stackoverflow question, and implemented as follows:


        public static void InvokeIfRequired(this Control control, MethodInvoker action)
        {
            if (control.InvokeRequired)
            {
                control.Invoke(action);
            }
            else
            {
                action();
            }
        }

Further reading:

This guy used a similar architecture to the one I went with. It's just the simplest architecture imaginable, and he handled 2000 messages a second on a very minimal piece of hardware.

Simon Dixon's article - Getting Started With RabbitMQ in .net

Mike Hadlow has written 'an easy to use .net api for RabbitMQ' called EasyNetQ. One to watch.

As recommended above, Derek Greer has an excellent series on RabbitMQ for Windows

Further links to .net development with RabbitMQ