TechDays 16 – Slides & session videos

Last week, October 4 & 5, the entire Xpirit team was present at TechDays 16. TechDays is the biggest yearly developer conference in the Microsoft ecosystem and thus one of our main events. Xpirit was platinum sponsor and as such we contributed a bunch of cool things to the conference: our CTO Marcel de Vries ran a CTO Track aimed at Enterprise CTO’s. We ran a Mini-Hacks Contest and gave away some nice prizes to contestants, we delivered 17 sessions in total with 7 speakers, hosted the speaker dinner in our Amsterdam office and we developed the official TechDays 16 mobile app.

I’ll blog about the mobile app later. First, here are the slides and videos of my sessions at TechDays 16. The videos are in Dutch.

Conquer the Network

Session abstract
Almost every mobile app you will build will be driven by data. In a lot of cases, this data lives on a server or somewhere in the cloud.  Crossing the network from a mobile device brings more challenges than you might think at first hand. In this session, we will look at these challenges and how we can leverage some existing patterns and components to create a smooth and delightful experience for your end user. The technology will be based on Visual Studio, C# and the Xamarin platform to tackle the problems for all three major platforms at once.

Slides

 

Click here for the video on the Channel9 site

Microservices in .NET with NServiceBus

Session abstract
Microservices is an increasingly popular style of architecture. There are many opinions on what a microservice is, and how microservices should be implemented. In this presentation I will define what a microservices architecture should look like, and also point out some of the misconceptions and pitfalls I have seen surrounding Microservices. Instead of just theory, I will make a concrete implementation of a small microservices architecture with events and messaging, using the NServiceBus .NET framework.

Slides

 

Click here for the video on the Channel9 site.

TechDays App: Behind the Scenes

Co-presented with Geert van der Cruijsen.

TechDays isn’t complete without a mobile experience for finding the rights sessions, interact with your favorite speakers and leaving feedback. For TechDays 2016, Xpirit built a cross platform native app with Xamarin and Azure Mobile Apps. In this session, we’ll give you a look behind the scenes of this cool app.

 

Click here for the video on the Channel9 site.

 

NSBCon 2015 Recap

This past week I’ve been in Dallas,TX for the second edition of NSBCon 2015, the official NServiceBus conference.

[TL;DR]
It was a blast, I found the quality of the sessions and the activities very good and I met some nice and highly competent people. Oh, and great food!

As soon as the slides and videos are available online, I will add links.

Pre-Con Workshop
The week started out with a full day Advanced Saga Masterclass by Andreas Øhlund. Although most of the content was already familiar to me, I still had some nice takeaways from the day. The most important one is around integration patterns with external non-transactional resources. When dealing with non-transactional resources like web services in an NServiceBus architecture, it’s wise to place the interaction in a separate endpoint, and handling a command that you send from the Saga.

My takeaway was that you should leave the technical details of the web service interaction inside that integration endpoint. If an error occurs, just let NServiceBus handle it so you get First Level Retries and Second Level Retries for free. Only report back to the Saga with a functional reply, such as a Tracking# in case of a shipping provider. This way the Saga stays clean and functional, as does the integration handler.

FullSizeRender 6

Another important lesson is that a Saga are a good fit when there’s an aspect of Time in your domain logic. For example: a buyer’s remorse policy (the ability to cancel an order within x minutes). For logic like this, the RequestTimeout feature of a Saga really shines. You could state that if a Saga does not contain at least one timeout, you probably shouldn’t be using a Saga; instead you could just as well put it in a regular message handler.

The day ended in great company at Casa Rita’s Mexican Restaurant with a fantastic Briskett Burrito. (Hey, I’m a foodie)

 

Conference day 1
The first day of the regular conference was already packed with nice sessions. Udi’s keynote was sort of a State-of-the-Union on what Particular has been up to for the past year. Lots of work has been done on the internals of NServiceBus, more on that later. Most striking was probably the announcement that – a year after being introduced at NSBCon 2014 – ServiceMatrix is being discontinued.

To be honest, I am not really surprised and also not sorry for it. I’ve never believed in dragety-droppy modelling tools for building software and frankly I felt Particular was trying to go with that craze a little bit too much. As Udi explained, a tool like that will get you 80-90% there, but it’s the nitty gritty details in that last 10% that will break you up. So it’s back to the model of development that I’ve always liked so much about NServiceBus: put the developer in the driver’s seat.

That’s not to say that there is no demand for visual tools. Some of the efforts that have gone into ServiceMatrix are now being transformed into another form of visualisation, one that I much prefer: visualisation of your finished system as it runs. In other words: living documentation. I’ve always been a bigger fan of using simple tools for modelling: PowerPoint (yes I said it, I’m a PowerPoint architect), or much rather even: pencil & paper or a whiteboard. No modelling tool has ever really appealed to me, not the UML ones and especially not the “graphical coding” ones (looking at you Windows Workflow Foundation and BizTalk). Just stick with boxes and arrows and start coding, is my style.

NServiceBus V6
After Udi’s keynote, Particular solution architect extraordinaire Daniel Marbach took us through the new NServiceBus V6 release. V6 is all about embracing async/await, which means that an important chunk of NServiceBus Core was rewritten from the ground up to be async all the way.

Daniel did a great job of explaining why Particular did this. Ever since the introduction of the TPL and the async/await keywords, Microsoft has been pushing towards async code more and more. This is most apparent in the cloud, where all IO operations (Azure Storage Queues, Azure Service Bus, SQL Azure, etc) have an async API. Up till now, handlers in NServiceBus were synchronous, and made you implement a single method:

void Handle (MessageType message);

This makes it hard to consume async API’s for IO, because marking a void method as async is evil. Furthermore, the NServiceBus Core is not aware of any async code, so strange things can happen if you’re not careful. This lead the team to the decision that V6 should be a breaking change in order to go fully async. The interface for a handler now looks like this:

Task Handle(MessageType message, IMessageHandlerContext context);

You must now return a task (or Task.FromResult(0) if your implementation does not require async), and you receive an IMessageHandlerContext implementation that you can use to send or publish messages. No more public IBus Bus { get; set; }. This is done so that there is no more dependency on state across multiple threads. NServiceBus will make sure that you have the correct instance. It is also important to note that the use of ThreadLocal for caching is dangerous because of the introduction of async/await. All in all, an excellent talk by Daniel, who wrapped his lessons in a story about a Swiss Chocolate Factory. Gotta love the Swiss.

More interesting info on async/await and ThreadLocal can be found here:

[https://www.youtube.com/watch?v=4uWzIM1U-VA] Presentation about the pitfalls of async/await and the SynchronizationContext. Highly recommended, even if only because of the creative format.

[http://www.particular.net/blog/the-dangers-of-threadlocal] The dangers of ThreadLocal

IMG_9776

Akka.NET
Andrew Skotzko from Petabridge did a riveting, high octane presentation about Akka.NET and the Actor Model. I’ve briefly looked at the actor model before and am interested in exploring it more, especially after this session. One of the things I’ve been wondering about for a longer time is how the actor model fits in with NServiceBus.

In his session, Andrew mentioned that Akka could be a great producer or consumer for NServiceBus. In Andrew’s words: “Akka is about Concurrency, NServiceBus is about Reliability”. I’d have to explore this some more but frankly I don’t really see a great fit as of yet. One of my main concerns with Akka is that remote messaging between actors is not reliable. It is supposed to be light weight and location transparent, but the model compromises on reliability. Akka has an “at most once” delivery guarantee, which doesn’t exactly guarantee anything. For communication between actors, Akka provides location transparency, i.e. you just use an interface to send a message, and the receiving actor can be either in memory or on a remote machine. Hmm, where have we seen this fail before? At least NServiceBus is always clear on the fact that when you send a message, it’s inherently remote. And NServiceBus has your back in making sure the message arrives.

So, I’m not yet really seeing the point of using an actor model inside, or alongside NServiceBus services, but it requires more investigation.

This seems to be the best place to start: http://LearnAkka.net

I must say though, both Andrew and Aaron Stannard from Petabridge are both extremely passionate and helpful. They’re very nice guys and we had good fun over a couple of beers.

Death to the Distributor
Sean Feldman
from Particular talked us through a preliminary version of a new routing component in the NServiceBus plumbing which allows them to eliminate the Distributor component when you’re on MSMQ transport and want to scale out. A nice and short talk in which Sean showed a promising solution for decentral routing, with a pluggable strategy (you could write your own distribution logic if you’d like). With that basis, combined with realtime health info from endpoints, dynamic routing and commissioning and decommissioning of nodes will become possible. Interesting stuff that’s in the works.

There were also two interesting case studies on the first day: Building the Real-Time Web with NServiceBus and SignalR by Sam Martindale and a more high level session called Decomposing the domain with NServiceBus by Gary Stonerock II

Combining SignalR with NServiceBus has been a hobby of mine for a longer time, so the idea was already familiar to me. It was nice to see the application that Sam and his team put together: a multi-user graphical web application for “composing” and budgeting luxury homes that relies on SignalR and NServiceBus for on-the-fly budget calculation and distribution to multiple users at once. Sam was so nice to do a shout out of my earlier blogpost about the NServiceBus backplane for SignalR. With thanks to Ramon Smits of Particular, the sample has now been upgraded to the latest stable versions of both NServiceBus and SignalR. Thanks Ramon!

Gary’s session was interesting as well, sharing his journey designing a system for Medical Trials on a logical architectural level. Some key takeaways for me were:

  • In modelling your domain logic, make sure that you model it for success (i.e. Logical Success Modelling). Try to factor out the edge-cases and exceptions by asking more questions about the business logic. You will end up with much simpler domains.
  • Defer naming things; Don’t name services, domains or classes until you are more clear about its responsibilities. You will end up with much clearer names. Udi calls this “Things Oriented Architecture” in his ADSD course.

Hackathon!
Then it was time for the Hackathon! The assignment: come up with the most over-engineered Hello World application based on the NServiceBus building blocks. I teamed up with Anette, Daniel and Adam and after a couple of beers we came up with project Ivory Xylophone:

FullSizeRender 14
See? Pencil & paper FTW!

By sending a tweet with hashtag #ivoryxylophone, we set off a system that sends a message to a Saga that converts all the characters in the message to Morse code, gathers the characters, converts them back to plain text and sends the message to a handler that tweets out the message on Twitter. All NServiceBus message would go over a transport based on Git commits (…). Apart from the Git transport, we got it working end-to-end.

IMG_9745
Mob programming

We got second place, just after the awesome Slack transport by Mark Gould.

IMG_9770
Demo time for Ivory Xylophone

Conference day 2
Ted Neward kicked off day 2 with his keynote session “Platform Oriented Architecture“. He opened by establishing that the software industry has been through several cycles of inventing the same wrong solutions for the same problems multiple times. We’ve seen other people speak about that and frankly, he’s right. The nice thing about Ted’s talks was that he did a shot at how the future of architecture should look. He concludes that most of us are building a platform of some sorts. And in order to succeed, you need all of the good things that we’ve established: the 4 tenets of SOA are still a good idea, we still need to mind the 8 fallacies of distributed computing, but the most important thing is that a platform should have a clearly defined Domain and Context. Meetup, Uber, Yelp, AirBnb are all platform but each has its specific domain and give meaning to technology choices in their own context.

All About Transports
Swedish Chef Andreas Öhlund gave a great talk comparing the different transport options that NServiceBus gives, packed in a tasty tale of a Swedish Meatball company. I really liked his stylish slides, kudos! His message was important: different transports have very different characteristics and you should choose wisely. Do you choose a decentralised model with MSMQ or a broker style transport like SQL Server, Azure Service Bus or RabbitMQ? Talking about RabbitMQ – one does not simply install RabbitMQ and be done with it. It takes care to make RabbitMQ reliable (clustering) and you must be careful when you send messages inside a transaction that might fail, or you’ll end up with ghost messages. Andreas’ talk could be a great blogpost in itself. I hope Particular puts it on their blog soon.

Other interesting talks were Jimmy Bogard‘s Integration Patterns With NServiceBus where he shared some lessons learned about integrating legacy systems with newer NServiceBus systems, dealing with big files and his interesting Routing Slip Pattern.

Kijana Woodard had a funny and insightful talk about Things Not To Do With NServiceBus. I like seeing a talk like this as it shows that Particular isn’t just shoving NServiceBus down everyone’s throat as the end-all solution for everything. It’s not a golden hammer, so there are definitely cases where it doesn’t fit.

IMG_9760 2
Beef ribs!

Again, a great day, which ended in the fantastic Brazilian Steakhouse Boi Na Braza. Did I mention I’m a foodie?

Day 3: Unconference
One of the things I enjoyed the most was the Unconference day. A day where participants set the agenda and get to do the talking. We collaborated on an agenda for the day and ended up doing multiple parallel tracks of discussions and knowledge sharing. I participated in discussions about Security & NServiceBus, UI Composition. Akka.NET & the Actor Model, Saga & Domain Concepts and Microservices. Got a ton of new insights. My sketchy notes might not be as meaningful to you as they are to me, but still…

All in all, NSBCon was well worth the trip to Dallas. All the Particular folks are very approachable, hospitable and helpful, the participants were great discussion partners and the quality of the sessions was very good.

Xamarin, NServiceBus, Microservices and Enterprise Mobility – my sessions at Microsoft TechDays NL

Last Thursday and Friday, Microsoft TechDays NL 2015 was held at the World Forum in The Hague. TechDays is the biggest Microsoft related software conference, with over 2100 attendees. For us at Xpirit, the conference was a big success, fortunate to be selected to present a total of 21 sessions with a crew of 6 Xpiriters. This blogpost contains some reflections on the conference and the slide decks for my own sessions.

TechDays 2015 was great fun. I think Microsoft managed to pull of a nice conference. I made a Storify overview of Xpirit’s activities at TechDays.

James WhittakerWith an awesome, energising keynote by ex-Google employee and now Microsoft Distinguished Engineer James Whittaker, TechDays was off to a great start. James took us on a journey about how the web has evolved into the way we now consume data through apps, and how the app model might evolve into newer experiences in the future. It’s all about data consumption and intelligent software that gets that data into our hands the moment we need it. The Internet of Things will work for us and obsolete whole industries over time, leaving time and room for humanity to explore the world, the seas, science and the galaxy. Inspiring and very funny.

Xpirit magazine We chose to accompany our sessions with an in depth, 44 page magazine, handed out to all attendees. It covers some of the topics we spoke about at TechDays, as well as new technologies like Ionic and the cool new Hololens.

If you were unable to obtain a copy, you can also download it from our website. If you’d like a hard copy, give me a ping.

Being a hobby sketch-noter/cartoonist/graphics fanatic, I really appreciated seeing the live sketchers from Wandverslag create a beautiful drawing with details from the sessions and discussions in the hallways.

Wandverslag

TechDays NL speaker gift In the same style, all conference speakers got a nice personalised gift, our own cartoon.

As Xpirit, we chose to take on the topic of Microservice architecture, demystify some of its aspects and provide possible approaches for building microservices. I had the opportunity to talk about two of my favourite topics: mobile development (Xamarin!) and distributed systems architectures (NServiceBus!). Here are my sessions:

Lessons learned: migrating an N-tier web application to microservices with NServiceBus
In this session, I explained how I transformed an existing N-tier web application to a more scalable and manageable architecture in the microservices style. I presented the reasons for migrating and a 5-step migration plan. This session is not specifically about NServiceBus, but rather about the architectural approach.

Foodie for life

Microservices with NServiceBus in Particular
In this session, I showed how you can build a loosely coupled, message driven microservices architecture with the NServiceBus framework and the tools from the Particular platform.

NServiceBus

Enterprise Mobility & Cross Platform Development from the Trenches
This session is about some hard lessons learned while developing cross platform apps in an enterprise environment. The colliding worlds of EMM platforms and cross platform tools can give you some headaches if you’re not careful.

Building out-of-the-ordinary UI’s with Xamarin.Forms custom renderers
Xamarin.Forms is a powerful framework that uses UI abstractions to enable 100% code reuse for UI code whilst still delivering 100% native user experiences. In general, for simple data driven apps, Xamarin.Forms is a good fit. But what if you need to build a UI that is a little less ordinary? Can you still do that with Xamarin.Forms? This session explains how you can use Custom Renderers to go beyond standard.

Check out the blogs of my other Xpirit colleagues for their TechDays sessions: Rene, Marcel, Patriek and Marcel.

Thanks to everyone for attending my sessions, the great conversations and feedback! See you next year?

Xpirit: Off to new adventures

Today marks an important day for me personally… After 15 years, I’ve left Info Support last Friday to pursue a new adventure in my career. Today, I joined a great team, starting up a wholly new company named Xpirit.

Looking back
Info Support has been a part of my life for 15 years. I did my graduation project at Info Support in 1999, and after getting my Bachelor’s degree, I signed up to work for them as a software developer and consultant. Flash forward to 2014 and I find myself working as a software architect, heading up the Enterprise Mobile competence center, speaking at international software conferences and blogging, tweeting and sharing knowledge with awesome peers from all over the world. I learned so much in my time at Info Support, and I owe a lot to the great bunch of people there. In return, I gave my best over these 15 years and helped Info Support to be in the top of Dutch IT. Now the time has come for me to take on this new opportunity.

Xprit Think ahead. Act now.
Xpirit is a new consulting company, focusing entirely on the Microsoft ecosystem. With a fantastic new team, consisting of Microsoft MVP’s, Regional Directors and community leaders, we will offer high end consulting services for (enterprise) companies looking to implement or integrate systems using the Microsoft stack. This doesn’t mean that we will focus solely on MS products though. The Microsoft ecosystem is extremely rich with Open Source frameworks and 3rd party solutions.

Our technical team consists of Marcel de Vries (CTO), Alex Thissen, Marcel Meijer, Patriek van Dorp (soon) and myself and is powered by our Sales and Managing Director Pascal Greuter. This sounds like a dream team to me, and I’m extremely excited to be a part of it. Each brings their own strengths and personality to the table, which makes for a great environment to work in.

dreamteam cap

For me personally this means that I will continue to focus on cloud and mobile architecture and development in the enterprise. Of course, Xamarin will continue to be a big part of my strategy in this. Microsoft makes fantastic technology to build services and back end architectures, but let’s face it, their end user facing technologies are struggling to keep up with Google’s and Apple’s. It’s a diverse world, in which iOS and Android co-exist with – even dominate – Windows, even more so in the mobile area. Xamarin enables us to leverage the highly productive language and frameworks from the Microsoft stack directly on iOS and Android. A perfect fit. Marcel and I will continue to be active in the Xamarin community, and run the Dutch Mobile .NET Developers user group together with the fine people from Macaw and Info Support.

Many also know that I also enjoy working on distributed systems, SOA and event driven architectures. Azure is a great platform for this. One framework that I’ve been specialising in for the last couple of years, is NServiceBus. I’m committed to continuing my activities in the NServiceBus community, and would love to be your go-to-guy for distributed systems design with NServiceBus.

Xpirit is a Xebia company, which means that we’re fortunate to inherit the same company strenghts and values that make Xebia a great company. If you’re interested, go check out Good To Great by Jim Collins, Winning by Jack Welch, and Eckart’s Notes by Eckart Wintzen to understand the foundations we’re going to build on. Inspiring stuff, that’s for sure!

We’re starting small, but thinking big. We’re looking forward to great and innovating projects in the world of cloud, mobile, IoT and everything Microsoft.

I can’t wait to see what the future holds. Here’s to new adventures!

the-hobbit-adventure

Automating end-to-end NServiceBus tests with NServiceBus.AcceptanceTesting

Photo Credit: LoveInTheWinter via Compfight cc
Photo Credit: LoveInTheWinter via Compfight cc
Most of you will agree that automating software tests is a good idea. Writing unit tests is almost a no brainer nowadays, and I’m a big fan of Behavior Driven Development and the use of Cucumber to bring together system analysts, programmers and testers more closely. The closer your tests and documentation are to the actual software, the better, IMO.

Repeatable and automated functional tests are paramount to guarantee the quality of a constantly evolving software system. Especially when things become more and more complex, like in distributed systems. As you may know I’m a fan of NServiceBus, and testing our NServiceBus message based systems end-to-end has always been a bit cumbersome. The fine folks at Particular Software – creators of NServiceBus – have found a nice way to do their own integration and acceptance tests, and you can use that too!

The framework that supports this is somewhat of a hidden gem in the NServiceBus stack, and I know that the Particular team is still refining the ideas. Nonetheless, you can use it yourself. It’s called NServiceBus.AcceptanceTesting. Unfortunately it’s somewhat undocumented so it’s not easily discovered and not very easy to get started with. You’ll need to dive into the acceptance tests in the NServiceBus source code to find out how it works. This can be a little bit hairy because there’s a lot going on in these tests to validate all the different transports, persistence, behavior pipeline and messaging scenarios that NServiceBus supports. This means that there is a lot of infrastructure code in the NServiceBus acceptance test suite as well to facilitate all the different scenarios. How to distinguish between what’s in the AcceptanceTesting framework and what’s not?

As a sample, I created a simpler scenario with two services and added a couple of acceptance tests to offer a stripped down application of the AcceptanceTesting framework. You can find the full solution on GitHub, but I’ll give a step by step breakdown below.

The scenario
The sample scenario consists of two services: Sales and Shipping. When the Sales service receives a RegisterOrder command – say from a web front end – it does some business logic (e.g. validate if the amount <= 500) and decides whether the order is accepted or refused. Sales will publish an event accordingly: either OrderAccepted or OrderReceived. The Shipping service subscribes to the OrderAccepted event. It will ship the order as soon as it is accepted and publish an OrderShipped event. Like so:

NServiceBusAcceptanceTestScenario

I’m sure it won’t bring me the Nobel prize for software architecture, but that’s not the point. From a testing perspective, we’d like to know if a valid order actually gets shipped, and if an invalid order is refused (and not shipped).

Project setup
Once you have your solution set up with a Messages library, and the implementation projects for your message handlers, we’ll add a test project for our acceptance tests. You can use your favourite unit test framework, I chose MSTest in my sample.

Next, in your test project, add a reference to the NServiceBus.AcceptanceTesting package via the Package Manager Console:

Install-Package NServiceBus.AcceptanceTesting

This will pull down the necessary dependencies for you to start writing acceptance tests.

Writing a test
Let’s have a look at one of the tests I have implemented in my sample:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => 
            b.Given((bus, context) =>
                // The SubscriptionBehavior will monitor for incoming subscription messages
                // Here we want to track if Shipping is subscribing to our the OrderAccepted event
                SubscriptionBehavior.OnEndpointSubscribed(s => 
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    {
                        context.ShippingIsSubscribed = true;
                    }
                }))
                // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
                // fire off the test by sending a RegisterOrder command to the Sales endpoint
            .When(c => c.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
                {
                    m.Amount = 500;
                    m.CustomerName = "John";
                    m.OrderId = 1;
                }))
         )
        // No special actions for this endpoint, it just has to do its work
        .WithEndpoint<Shipping>() 
        // The test succeeds when the order is accepted by the Sales endpoint,
        // and subsequently the order is shipped by the Shipping endpoint
        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)
        .Run();
}

Whoa, that’s a lot of fluent API shizzle! That’s just one statement with a bunch of lambda’s, mind you. Let’s break it down to see what we have here…

The AcceptanceTesting harness runs a scenario, as denoted by the Scenario class. The basic skeleton looks like this:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })

        .WithEndpoint<Sales>()

        .WithEndpoint<Shipping>() 

        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)

        .Run();
}

A scenario is defined using the Define method, which receives an instance of a class named Context. Next, the WithEndpoint() generic methods help us setup the different endpoints that participate in the current test scenario. In this case: Sales and Shipping. We’ll have a look at the types used here later.

Before the scenario is kicked off with the Run() method, we define a condition that indicates when the test has succeeded and pass that to the Done() method.

The expression looks like this:

context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused

We’re evaluating a bunch of properties on an object named context. This is actually the instance of the Context class we saw being passed to the Scenario.Define() method. The context class looks like this:

class Context : ScenarioContext
{
  public bool OrderIsAccepted { get; set; }
  public bool OrderIsRefused { get; set; }
  public bool OrderIsShipped { get; set; }
  public bool ShippingIsSubscribed { get; set; }
}

It inherits from ScenarioContext, a base class in the NServiceBus.AcceptanceTesting framework, and it’s just a bunch of properties that get passed around throughout our test scenarios to keep track of the progress. The trick is to set these properties at specific moments as your test runs and as soon as the conditions are met, the test is considered a success.

In the example above, we expect that the order is accepted and shipped, and we also double check that it wasn’t refused. We can assert this by tracking the events being published.

The next piece of the puzzle is the definition of the endpoints that participate in the test:

.WithEndpoint<Sales>()

The type parameter in this case is a class called Sales. This class represents the Sales endpoint, but is actually defined in the test code. This is what it looks like:

public class Sales : EndpointConfigurationBuilder
{
  public Sales()
  {
    EndpointSetup<DefaultServer>()
    // Makes sure that the RegisterOrder command is mapped to the Sales endpoint
      .AddMapping<RegisterOrder>(typeof(Sales));
  }

  class SalesInspector : IMutateOutgoingMessages, INeedInitialization
  {
    // Will be injected via DI
    public Context TestContext { get; set; }

    public object MutateOutgoing(object message)
    {
      if (message is OrderAccepted)
      {
        TestContext.OrderIsAccepted = true;
      }

      if (message is OrderRefused)
      {
        TestContext.OrderIsRefused = true;
      }

      return message;
    }

    public void Customize(BusConfiguration configuration)
    {
       configuration.RegisterComponents(c => c.ConfigureComponent<SalesInspector>(DependencyLifecycle.InstancePerCall));
    }
  }
}

The Sales class derives from EndpointConfigurationBuilder, and is our bootstrap for this particular endpoint. The class itself doesn’t do much, except bootstrapping the endpoint by specifying an endpoint setup template – a class named DefaultServer – and making sure that the RegisterOrder message is mapped to its endpoint.

We also see a nested class called SalesInspector, which is an NServiceBus MessageMutator. We are using the extensibility of NServiceBus to plug in hooks that help us track the progress of the test. In this case, the mutator listens for outgoing messages – which would be OrderAccepted or OrderRefused for the Sales endpoint – and sets the flags on the scenario context accordingly.

This is all wired up through the magic of type scanning and the use of the INeedInitialization interface. This happens through the endpoint setup template class: DefaultServer. I actually borrowed most of this code from the original NServiceBus code base, but stripped it down to just use the default stuff:

/// <summary>
/// Serves as a template for the NServiceBus configuration of an endpoint.
/// You can do all sorts of fancy stuff here, such as support multiple transports, etc.
/// Here, I stripped it down to support just the defaults (MSMQ transport).
/// </summary>
public class DefaultServer : IEndpointSetupTemplate
{
  public BusConfiguration GetConfiguration(RunDescriptor runDescriptor, 
                                EndpointConfiguration endpointConfiguration,
                                IConfigurationSource configSource, 
                                Action<BusConfiguration> configurationBuilderCustomization)
  {
    var settings = runDescriptor.Settings;

    var types = GetTypesToUse(endpointConfiguration);

    var config = new BusConfiguration();
    config.EndpointName(endpointConfiguration.EndpointName);
    config.TypesToScan(types);
    config.CustomConfigurationSource(configSource);
    config.UsePersistence<InMemoryPersistence>();
    config.PurgeOnStartup(true);

    // Plugin a behavior that listens for subscription messages
    config.Pipeline.Register<SubscriptionBehavior.Registration>();
    config.RegisterComponents(c => c.ConfigureComponent<SubscriptionBehavior>(DependencyLifecycle.InstancePerCall));

    // Important: you need to make sure that the correct ScenarioContext class is available to your endpoints and tests
    config.RegisterComponents(r =>
    {
      r.RegisterSingleton(runDescriptor.ScenarioContext.GetType(), runDescriptor.ScenarioContext);
      r.RegisterSingleton(typeof(ScenarioContext), runDescriptor.ScenarioContext);
    });

    // Call extra custom action if provided
    if (configurationBuilderCustomization != null)
    {
      configurationBuilderCustomization(config);
    }

    return config;
  }

  static IEnumerable<Type> GetTypesToUse(EndpointConfiguration endpointConfiguration)
  {
    // Implementation details can be found on GitHub
  }
}

Most of this code will look familiar: it uses the BusConfiguration options to define the endpoint. In this case, the type scanner will look through all referenced assemblies to find handlers and other NServiceBus stuff that may participate in the tests.

Most notable is the use of the SubscriptionBehavior class, which is plugged into the NServiceBus pipeline that comes with NServiceBus 5.0 – watch the NServiceBus Lego Style talk by John and Indu at NSBCon London for more info. This behavior simply listens for subscription messages from endpoints and raises events that you can hook into. This is necessary for our tests to run successfully because the test can only start once all endpoints are running and subscribed to the correct events. The behavior class is not part of the NServiceBus.AcceptanceTesting framework though. IMO, it would be handy if Particular moved this one to the AcceptanceTesting framework as I think you’ll be needing this one a lot. Again, I borrowed the implementation from the NServiceBus code base:

class SubscriptionBehavior : IBehavior<IncomingContext>
{
  public void Invoke(IncomingContext context, Action next)
  {
    next();
    var subscriptionMessageType = GetSubscriptionMessageTypeFrom(context.PhysicalMessage);
    if (EndpointSubscribed != null && subscriptionMessageType != null)
    {
      EndpointSubscribed(new SubscriptionEventArgs
      {
        MessageType = subscriptionMessageType,
        SubscriberReturnAddress = context.PhysicalMessage.ReplyToAddress
      });
    }
  }

  static string GetSubscriptionMessageTypeFrom(TransportMessage msg)
  {
    return (from header in msg.Headers where header.Key == Headers.SubscriptionMessageType select header.Value).FirstOrDefault();
  }

  public static Action<SubscriptionEventArgs> EndpointSubscribed;

  public static void OnEndpointSubscribed(Action<SubscriptionEventArgs> action)
  {
    EndpointSubscribed = action;
  }

  internal class Registration : RegisterStep
  {
    public Registration()
      : base("SubscriptionBehavior", typeof(SubscriptionBehavior), "So we can get subscription events")
    {
      InsertBefore(WellKnownStep.CreateChildContainer);
    }
  }
}

Okay, almost done. We have our endpoint templates set up, message mutators listening to the relevant outgoing messages and SubscriptionBehavior to make sure the test is ready to run. Let’s get back to the part that actually makes the whole scenario go:

    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => 
            b.Given((bus, context) =>
                // The SubscriptionBehavior will monitor for incoming subscription messages
                // Here we want to track if Shipping is subscribing to our the OrderAccepted event
                SubscriptionBehavior.OnEndpointSubscribed(s => 
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    {
                        context.ShippingIsSubscribed = true;
                    }
                }))
                // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
                // fire off the test by sending a RegisterOrder command to the Sales endpoint
            .When(context => context.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
                {
                    m.Amount = 500;
                    m.CustomerName = "John";
                    m.OrderId = 1;
                }))
         )
   ...

For the Sales endpoint, we specified a whole bunch of extra stuff. First, there’s the event handler for the SubscriptionBehavior.OnEndpointSubscribed event. Here, the Sales endpoint basically waits for the Shipping endpoint to subscribe to the events. The context is available here as well, part of the lambda that’s passed to the Given() method, so we can flag the subscription by setting a boolean.

The final piece is the guard passed to the When() method. This is monitored by the AcceptanceTesting framework as the test runs and as soon as the specified condition is met, we can use the bus instance available there to send a message to the Sales endpoint: the RegisterOrder command will trigger the whole process we’re testing here. We’re sending an order of $500, which we expect to be accepted and shipped. There’s a test that checks the refusal of an order > 500 in the sample as well.

Some tips
For your end-to-end tests, you will be pulling together DLL’s from all of your endpoints and with all of your message definitions. So it makes sense to setup a separate solution or project structure for these tests instead of adding it to an existing solution.

If your handlers are in the same DLL as your EndpointConfig class, the assembly scanner will run into trouble, because it will find multiple classes that implement IConfigureThisEndpoint. While you can intervene in how the assembly scanner does its work (e.g. manually filtering out specific DLL’s per endpint definition), it might be better to keep your handlers in separate assemblies to make acceptance testing easier.

As you see, you need to add some infrastructural stuff to your tests, such as the EndpointConfigurationBuilder classes and the IEndpointSetupTemplate class for everything to work properly. You can implement this infrastructure stuff per test or per test suite, but you might want to consider creating some more generic implementations that you can reuse across different test suites. IMO the DefaultServer implementation from the NServiceBus tests is also a nice candidate for becoming a part of the NServiceBus.AcceptanceTesting package to simplify your test code as this is already a very flexible implementation.

keep-calm-you-passed-the-test

Conclusion
As you see, the NServiceBus.AcceptanceTesting framework takes some time to get to know properly. There’s more advanced stuff in there which I didn’t cover in this introduction. Now if you dive into the scenarios implemented by Particular themselves, you’ll find inspiration for testing other bits of your system.

I like the model a lot, and I think this can save a lot of time retesting many end-to-end scenarios.

NSBCon 2014 recap

Update: Particular has put up an awesome recap page for NSBCon London. All session videos are available there as well! And be sure to check out the excellent intro video.

NSBCon 2014, the first official NServiceBus conference at Skills Matter in London was a great success. I really enjoyed the sessions and hallway discussions with the participants about how they were using NServiceBus. Mark and I had the opportunity to share our experiences with NServiceBus in a session. Here are some of my personal highlights.

Barbecoa
Bear with me, I’ll get to technical NServiceBus stuff, but being a foodie, I can’t resist posting food pics as well. But if you’re really boring and don’t care about great food, you can skip the fun.

This was my second visit to London and we had a little bit of time to spend in the evenings to explore London. I arrived a day earlier than Mark to attend the ADSD Unconference on June 26th. My colleagues Marcel and Sander were also in town for a large ALM project they’re doing, so we decided to meet up and go for diner. Gordon Ramsay was out of our league, but luckily Jamie Oliver was close by with his excellent BBQ/grill restaurant Barbecoa. Nice ambience and the dishes were simple yet very refined and tasty.

Jamie Oliver's Barbecoa

Wood plank-smoked duck, cherries, maple dressing, red mizuna and pecans

Unpulled pork, Caraway slaw, jalopeno cornbread

Barbecoa brownie, Raspberry & Pink Peppercorn Sorbet & Aerated Chocolate

Barbecoa's butchery



Starter: Wood plank-smoked duck, cherries, maple dressing, red mizuna and pecans – excellent and playful taste combination between the smoked duck, sweet cherries and earthy pecans

Main: Unpulled pork, Caraway slaw, jalopeno cornbread – wow this pork butt was tender, and the combination with the spicy jalopeno cornbread was excellent

Dessert: Barbecoa brownie, Raspberry & Pink Peppercorn Sorbet & Aerated Chocolate – I love the use of pepper or other spices with fruit, made the sorbet really come alive; and chocolate brownies… no need to say more

Right around the corner was Barbecoa’s Butchery, where all the meat is dry-aging. Very nice. After dinner, I had a quick stroll along the Thames and St. Paul’s Cathedral.

And the conference hadn’t even started 🙂

ADSD Unconference 
In February 2012, I attended Udi Dahan’s Advanced Distributed Systems Design course. Most attendees will confirm that this course is quite mind twisting if you have been brainwashed with the “Service Oriented Architecture = Web Services” dogma the whole time. Nowadays I think messaging and asynchronous systems are a bit hipper with all these cloud platforms taking off, but chaining web services into a ball of mud together was still all the rage at the time.

The unconference was an interesting way to get ADSD alumni together to discuss their experiences applying the techniques from the course. An unconference is an interesting format, where topics are determined by the participants, and then discussed in free format sessions.

We saw an impressive example of a composite web UI implementation by Lars Corneliussen from Faktum Software – BTW, I checked, but he’s not some distant Scandinavian cousin 🙂 Anyway, we discussed UI composition, whereby data from separate services is combined at the UI level. One of the interesting challenges I see is applying this pattern with mobile apps. Given the latency and low bandwidth that mobile apps have to deal with, I think it’s better to do the composition at the API level and prepare highly optimized resources for the app to communicate with.

Andreas OhlundThe Particular team also facilitated a couple of great discussions. A discussion about Ops got me interested in Splunk, for holistic and pro-active monitoring of all sorts of events across a distributed system. I definitely need to check that out. Also, Indu Alagarsamy had a nice discussion about Routing Slips vs. Process Orchestration with Saga’s. Have a look at Jimmy Bogard’s blog for a great description of different Saga / messaging patterns. Danny Cohen coined the term “Bolshevik” for centralized process orchestration. I’m going to use that term from now on 🙂

One of the insights was that – in a way – routing slip is “Bolshevik” as well, as a routing slip also has a predefined, sequential route. I tend to agree with that, though the Routing Slip pattern can be useful for having messages flow across specific endpoints only.

I really enjoyed the nice discussions at the ADSD Unconference and appreciated the willingness to share experiences and lessons learned by all participants. Having them face-to-face in a small group also really helps.

London
Mark arrived shortly after I finished the unconference. We went for a nice long walk along the Thames to visit some highlights in London. Might as well make good use of your time, right?

St. Paul's Cathedral and Millennium Bridge

London skyline

Tower Bridge

Southwark Bridge


NSBCon Day 1
IMG_5954The first day of NSBCon started off with a nice breakfast at Skills Matter. What a great location and a nice environment for tech conferences like NSBCon. Udi Dahan kicked off the day with a presentation about the past, present and future of NServiceBus. There were a couple more old timers in the room that also started using NServiceBus at version 1.x, just like me 🙂 Udi reminded us how awful the website and logo used to look at the time, and he sincerely apologized for ILMerge-ing all the external dependencies into NServiceBus. Yep, it was bad, but the team has made NServiceBus a very slick and solid product over the past few years!

Info Support was prominently visible as well as one of the event’s sponsors.

The rest of the program was a nice mixture of case studies with NServiceBus, technology deep dives and theory. It was nice to see how NServiceBus is used at big companies like Wonga (Charlie Baker’s session) to handle large amounts of payments and Spotlight (Dylan Beattie’s session) where it even plays a role in video encoding.

Most prevalent from almost all of the session was how important it is to have decent monitoring across your entire system. Luckily the new tools from Particular can come a long way in giving insight in a message driven system, but I think that you can’t do without decent, holistic monitoring. Dashboards that give insight into both the technical stuff that goes on in your system and functional checks.

This was also one of the points that Mark and I highlighted on our Best Practices session. It was an honor to present at this first official NServiceBus conference. We got some nice feedback afterwards. The hallway discussions afterwards are always so valuable.

James Lewis gave a great talk about Managing Microservices and Yves Goeleven taught us about using NServiceBus in an Azure cloud. I really like using the Azure platform, cloud architecture brings a whole set of new challenges to the table.

The day ended for us with a nice speaker diner. It was great to spend some time with the whole Particular team and the other speakers to share experiences. You don’t get that many NServiceBus users from across Europe together in one room that easily.

NSBCon day 2
After a nice espresso at the Goswell Road Coffee Shop, the second day started with a deep dive into the Particular Service Platform by Danny Cohen. He explained how the separate components work together to monitor, diagnose and even design distributed systems with NServiceBus. I especially like the role of ServiceControl, which can serve as a nice extension point for your custom monitoring needs as well. New Relic feed, anyone? I have blogged about ServiceControl and ServicePulse before.

To use Danny’s words: “I’ll let you take in the coolness for a minute”

The other sessions were very enjoyable as well. A look at the new pipeline architecture in NServiceBus 5 by Indu Alagarsamy and John Simons was very nice. I really like the new model, based on the Russian Doll Pattern. Mark and I already saw some nice opportunities for replacing things in our implementation with these new style behaviors.

Greg Young and Szymon Pobiega showed an impressive integration of Event Store and NServiceBus. Event Store is another product I definitely want to check out. Greg is a great presenter as well.

Jan Ove Skogheim took us on the journey he made with his customer, migrating a complex, web service infested system to a message driven architecture with NServiceBus. Some nice insights there as well. In trying to get the team into the right mindset, Jan Ove was very keen on using the right terminology. “When someone said ‘service’, I slapped them in the face” 🙂

And Andreas Öhlund gave some nice insights into the internal development process at Particular. He quoted Netscape’s founder while explaining why releases are sometimes delayed: “Don’t ship crap”. I completely agree.

What struck me was that Andreas used this image in his presentation:

Slap!

So, both Jan Ove and Andreas use slapping… Perhaps some Scandinavian custom, but let me be the first to coin an official term for this methodology: Slap Driven Development. Interesting concept that I might try at work some time. You heard it first here!

In closing
NSBCon 2014 was a big success. Udi was visibly very proud to have a first official conference for his brainchild and rightly so. He has built a great company and community around NServiceBus. Looking forward to NSBCon 2015 already!

A lap around ServicePulse (2 of 3)

This is part 2 of my short series about the new Particular Service Platform… specifically about ServicePulse. In my first post about ServicePulse, you can read about the overall architecture and the problems that ServicePulse solves. Now I want to focus on one specific feature: the ability to add custom checks to your services.

After having set up ServiceControl and ServicePulse, you are able to monitor any NServiceBus service in your architecture. Just drop in the Heartbeat plugin and ServicePulse will see your service. The Heartbeat basically says: the service is there, it’s running and it’s able to send messages (since the Heartbeat is sent through a message to the ServiceControl input queue).

In general, this will not be sufficient, as you will also want to know how the service is behaving from a functional point of view. Or maybe you even want to monitor some deeper technical dependencies, such as whether it can reach its database, whether the config is OK, whether it can reach that external web service you rely upon, et cetera.

This is where custom checks come in, and they’re easy to implement. A custom check is very similar to the Heartbeat check: you just drop in a DLL that contains one or more checks, and the messages will be sent to ServiceControl. We end up with the following deployment overview:

ServiceControl-Deployment-CustomChecks

The custom check plugins will report through the same channel as the Heartbeat plugin.

Implementing a custom check
A custom check is pretty easy to implement, and you can choose between two scenarios: a periodic check and a one-time check.

Either way, you need to add a reference to the ServiceControl.Plugin.CustomChecks Nuget package and you’re good to go.

Startup check
A Startup custom check is executed when the service starts, so basically only once. A check like this is useful if you want to check some configuration settings after deployment, or if you want to check some environment dependencies that the service needs to run properly. You implement a one-time check by inheriting from ServiceControl.Plugin.CustomChecks.CustomCheck.

public class StartupCheck : ServiceControl.Plugin.CustomChecks.CustomCheck
{
    public StartupCheck()
        : base("StartupCheck", "Categoryname") // the name of the check and its category are specified here 
    {
        if (ConfigurationManager.AppSettings["TestSetting"] != "1")
        {
            ReportFailed("TestSetting must be 1!");
        }
        else
        {
            ReportPass();
        }
    }
}

The default constructor passes a check name and a category to the base constructor, which bootstraps the custom check. CustomCheck is an abstract base class, so you need to implement the StartupCheck method. All you have to do is put in your check, and call either ReportPass if all is well, or ReportFailed if there’s a problem. ReportFailed accepts a fault reason, which will be visible in ServicePulse.

Periodic check
A periodic check runs at a self defined interval, so it’s ideally suited to monitor things that might change over time, such as the availability of certain services or databases, or functional scenarios such as: “has all Point of Sale data arrived yet?”.

Again, implementing such a check is pretty easy. This time, inherit from ServicePulse.Plugin.CustomChecks.PeriodicCheck.

public class CheckHealth : PeriodicCheck
{
    public CheckHealth()
        : base("Healthcheck", "CategoryName", TimeSpan.FromMinutes(10))
    {

    }

    public override CheckResult PerformCheck()
    {
        // Fake a failure once in a while
        // TODO: think of a useful check to implement here.
        if (DateTime.Now.Second % 2 == 0)
        {
            return CheckResult.Failed("This is a sample failure report");
        }
         return CheckResult.Pass;
    }
}

Besides the check name and category name, the base constructor also excepts a TimeSpan. You can specify here at what interval the check runs. Each 10 minutes in the example.

Next, we override the PerformCheck method, which returns a CheckResult object. Case of success, report CheckResult.Pass, otherwise use CheckResult.Failed. Again, a reason or description for the feature must be supplied.

So what does admin/ops see?
After a custom check is deployed and activated, we can see the results in the ServicePulse front end. On the Dashboard, it shows:

Custom checks

And upon further inspection on the Custom Checks screen:

Custom checks - overview

In my current project, we make heavy use of custom checks to monitor the health of our system, and whether customers are efficiently using it. For now, we do this through a custom built monitoring service (based on NServiceBus), but I can see these migrating to ServiceControl plugins over time.

Next time: dealing with failed messages!

Meet the new Particular Service Platform… A lap around ServicePulse (1 of 3)

Most of you might have noticed that I’m a big fan of NServiceBus, a very nice .NET based framework for implementing service oriented systems, based on messaging. Particular Software, the company behind NServiceBus, has been working on building a platform around NServiceBus, which turns it into much more than just a developer’s framework.

The new additions to the platform are:

  • ServiceMatrix, a slick designer, embedded in Visual Studio to aid development of NServiceBus solutions
  • ServiceInsight, a debug/diagnostics tool that helps to track messages and message conversations
  • ServiceControl and ServicePulse, a dynamic duo that lets us monitor production environments

While these components can actually work together in a complete solution, I’ll focus on ServiceControl and ServicePulse in three consecutive blogposts. These two work together to provide overview and control over a running NServiceBus deployment in production. I’ll explain what each of these components do.

TL;DR – how to get up and running:

  1. Download and install ServiceControl
  2. Download and install ServicePulse
  3. Enable auditing on your NServiceBus endpoints
  4. Drop the Heartbeat ServiceControl plugin in your service’s bin folder and restart your service
  5. Start the ServicePulse front end
  6. Sit back and relax

Want more detail? First, let’s look at the problems ServiceControl and ServicePulse solve.

Distributed systems, distributed messages
The beauty – in my opinion – of NServiceBus is that it is a true implementation of the Service Bus architectural pattern, yielding a distributed system based on a decentralized messaging infrastructure. In essence, much like Ethernet, “the bus is everywhere”. With many independent, autonomous services running all over your data center, sending messages all over the place, how do you keep track of how it’s doing? How do you deal with messages that fail, and end up in an error queue in such a distributed architecture? With NServiceBus, there is no central broker through which all messages flow, which you could measure and monitor.

This has been a real challenge with NServiceBus up until now. I’ve seen many Roll Your Own solutions (including in my own project), and the use of 3rd party tools like QueueExplorer to deal with failed messages. Particular of course recognized this, resulting in the new tool suite, part of which is ServicePulse.

So what is ServicePulse?
ServicePulse is a system administrator’s best friend – a dashboard for monitoring the health and activity in a running NServiceBus ecosystem.

ServicePulse is basically a website, built with NancyFx, that shows the status of your system in a nice dashboard overview:

ServicePulse-Dashboard-overview

What’s fun about this website is that it updates in near real time with the aid of SignalR. This is a nice example of the combination of NServiceBus and SignalR in action, providing “messaging” all the way into the front end. The two mix well together; I have blogged about that combination before.
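
To give a feel for that pattern (this is my own illustration, not ServicePulse's actual code), a message handler can push incoming events straight to connected browsers via a SignalR hub. The StatusHub class and EndpointHeartbeatReceived event below are hypothetical names:

using Microsoft.AspNet.SignalR;
using NServiceBus;

// Hypothetical hub; browsers connect to it for live updates.
public class StatusHub : Hub
{
}

// Hypothetical event, published somewhere in the back end.
public class EndpointHeartbeatReceived : IEvent
{
    public string EndpointName { get; set; }
}

// NServiceBus handler that forwards the event to all connected browsers.
public class PushHeartbeatToBrowser : IHandleMessages<EndpointHeartbeatReceived>
{
    public void Handle(EndpointHeartbeatReceived message)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<StatusHub>();
        hub.Clients.All.heartbeatReceived(message.EndpointName);
    }
}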

ServicePulse architecture
Let’s dive into the architecture of this solution, and see how ServicePulse deals with the distributed nature of all our services.

As mentioned, in a distributed system that uses NServiceBus there is no central broker that you can easily stick a monitor on top of. So, to get health information into a central dashboard, we need to gather that information centrally somehow.

Actually, ServicePulse is just a website that leverages another component in the Particular Service Platform as a backend: ServiceControl. ServiceControl is an NServiceBus service itself that exposes a REST API for consumers like ServicePulse. A typical deployment would look like this:

ServiceControl-Deployment-basic

One “best practice” with NServiceBus is to have a central error queue, so that you can monitor failed messages in one place. ServiceControl feeds off that error queue. Pretty smart, because this enables us to just “click” ServiceControl onto the existing system like a LEGO piece.

Next, ServiceControl also wants to know about the activity in your system. It does this by feeding off the audit queue, in much the same fashion as the error messages. Having a centralized audit queue is also considered a good practice, and now we finally have a service that consumes all these messages and does something useful with them. 🙂 Make sure to enable auditing on all services you want to monitor.
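
How you enable auditing depends on your NServiceBus version: in 4.x it's typically configured through an AuditConfig section in app.config, while newer versions also allow code-based configuration. As a rough sketch only (NServiceBus 5 style, with placeholder names, and assuming the conventional central “audit” queue), it looks something like this:

using System;
using NServiceBus;

class Program
{
    static void Main()
    {
        var busConfiguration = new BusConfiguration();
        busConfiguration.EndpointName("MyService.MyEndpoint"); // placeholder name

        // Copy every successfully processed message to the central audit queue,
        // where ServiceControl will pick it up.
        busConfiguration.AuditProcessedMessagesTo("audit");

        var bus = Bus.Create(busConfiguration).Start();
        Console.WriteLine("Endpoint running, press Enter to exit.");
        Console.ReadLine();
    }
}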

ServiceControl can listen to only one audit and one error queue and ServicePulse can only listen to one ServiceControl instance. This is OK if you follow these best practices, but if you for some reason decide to have a different deployment strategy, you need to account for this.

Note that ServiceControl consumes the messages off these queues, so both audit and failed messages are now under control of ServiceControl, which stores them in a RavenDB database. This means that you have to be careful with this data and make sure that it's properly backed up. Remember, failed messages contain valuable business data! ServicePulse doesn't do a lot with the audit messages, but ServiceInsight does. ServiceInsight is a different topic altogether. Also keep in mind that ServiceControl is not a long-term storage solution: it will clean up its database at a configurable interval (e.g. 2 weeks). So if you need to adhere to compliancy rules that require you to keep an audit trail of messages for several years, make sure you regularly back up the database, or extract the messages from ServiceControl and store them in a durable place.

By the way, when I say “queue”, I don't necessarily mean an MSMQ queue. Just like NServiceBus itself, ServiceControl is transport agnostic, so you can also use it when you run on the SQL Server transport, RabbitMQ or Azure Service Bus, for example.

Of course you can also use the ServiceControl REST API yourself to get information from your system and report it to other monitoring environments. For example, I’m thinking of having some of this information published to our New Relic account. Something for another blogpost 🙂
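
To give an idea of what that could look like, here's a minimal console sketch that polls the ServiceControl HTTP API with HttpClient. I'm assuming the default http://localhost:33333/api address and the “endpoints” and “errors” resource names here; check the ServiceControl API documentation for the exact routes your version exposes.

using System;
using System.Net.Http;

class ServiceControlPoller
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:33333/api/") };

        // Raw JSON responses; in a real integration you'd deserialize these and
        // push selected metrics to your monitoring tool of choice.
        var endpoints = client.GetStringAsync("endpoints").Result;
        var failedMessages = client.GetStringAsync("errors").Result;

        Console.WriteLine(endpoints);
        Console.WriteLine(failedMessages);
    }
}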

Important to note: ServiceControl by default listens on the http://localhost:33333 endpoint. ServicePulse will try to access ServiceControl via that URL directly from the browser. If you want to use ServicePulse from a remote browser, you will need to tweak these URLs and ports during installation. You can tweak this configuration as described on the Particular docs site.

Enabling monitoring on endpoints
So with this basic setup, ServiceControl can now pull in error and audit messages. If you were to fire up the ServicePulse dashboard now, you’d see messages along the lines of:

New ‘MyService.MyEndpoint’ endpoint detected at ‘HOSTNAME’. In order for this endpoint to be monitored the plugin needs to be installed.

“The plugin”? What plugin? Here comes the magic… 🙂

If you want your service to be monitored, it must send a heartbeat to ServiceControl. It does this by means of a heartbeat plugin, a component that you just drop in the bin directory of your service. It will automatically be picked up when the service is (re)started. It implements the IWantToRunWhenBusStartsAndStops interface, so it gets invoked by NServiceBus automatically.
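
Incidentally, you can use that same extension point for your own startup logic. A minimal sketch of such a class (my own illustration, not the actual heartbeat plugin code) looks like this:

using System;
using NServiceBus;

// Any class implementing IWantToRunWhenBusStartsAndStops is discovered and
// invoked automatically when the endpoint starts and stops; the heartbeat
// plugin uses this same mechanism to begin sending its heartbeat messages.
public class MyStartupTask : IWantToRunWhenBusStartsAndStops
{
    public void Start()
    {
        Console.WriteLine("Endpoint started.");
    }

    public void Stop()
    {
        Console.WriteLine("Endpoint stopping.");
    }
}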

The plugin is described on the Particular docs, but the gist is that you just add the ServiceControl.Plugin.Heartbeat NuGet package to your endpoint project. Alternatively, you can extract the DLL and just drop it in the deployment folder of your services.

The plugin reports to ServiceControl via messaging, just like your own services. It does so by reporting to the Particular.ServiceControl input queue. The deployment now looks like this:

ServiceControl-Deployment-heartbeat

Now, if a service goes down, ServicePulse will report this immediately:

Service down

The administrator can now react, fix the problem and bring the service back online.

Service back online

How cool!

Next up: Custom checks and Dealing with failed messages. Stay tuned!

Learning NServiceBus

Over the past two years, I've been architecting and building a system, applying design principles I've learned over the years doing software architecture at Info Support, and from the excellent “Advanced Distributed Systems Design” course by Udi Dahan. Now, I should say that these are not “one-size-fits-all” design principles; in fact, there's never a one-size-fits-all architecture for anything. But they had a natural fit with the problem domain that we are working on.

The two-year timespan stems from the fact that we have been (and are still in the process of) migrating the system from a classic ASP.NET three-tier architecture to a more disconnected and distributed one, using messaging where applicable. This has helped us achieve more flexibility, better separation of concerns and an overall faster system for the end user. We're not finished yet, as there's a lot of legacy code left to migrate and all the while the shop has to stay open, but we're definitely getting there.

One of the tools and frameworks that have been crucial in this re-architecture is NServiceBus. For those who don't know NServiceBus, it is a light-weight, very elegant .NET framework, built around the principles of a distributed, service oriented architecture, using messaging. It's designed by Udi Dahan and his team at Particular Software. In fact, NServiceBus is the direct result of the architectural lessons and experiences Udi has learned, architecting systems for customers over many years.

While NServiceBus is very light-weight, it still has a bit of a learning curve to really understand the concepts and apply them correctly. Especially if you haven't done the ADSD course mentioned earlier, some things might seem a bit abstract to a developer at first.

There has always been documentation on the NServiceBus website, but it hasn't always been up to date and it's a bit fragmented for my liking. There's also the very helpful and thriving NServiceBus community group, where you can ask a wide variety of questions to a great bunch of experts. There's even a great NServiceBus course on PluralSight by Andreas Öhlund. But with NServiceBus 4.0 out, the course material is a bit stale on some of the technical stuff.

What was lacking most was a well-structured, easy to follow guide on using and understanding NServiceBus, to be used as a companion when building your application with it. Fortunately, there's one now: the book “Learning NServiceBus”, written by the awesome David Boike. David is an NServiceBus Champ, a very active contributor in the community forums and an all-round knowledgeable software architect.

Learning NServiceBus is a great companion to building applications using NServiceBus, as well as administering and operating those applications. What I like best about the book is that it's well structured, building up from the most basic “hello world” type scenario to the more complex capabilities, such as Sagas, and error handling & second level retries. All the while, it provides great background info on the architectural considerations behind NServiceBus. David has a pleasant, witty writing style which makes it easy to read. Not your average “dry” software development book, luckily.

You can also download the sample code used throughout the book, but I find it a better learning experience to build the samples from scratch as you read the book.

The book focuses on the all-new NServiceBus 4.0 release, which contains a bunch of important changes compared to 3.x, making it all the more worthwhile to check out this book!

My congrats to David Boike on getting the book published. Great work and well recommended! You can get it from the publisher’s website.