NSBCon 2015 Recap


This past week I’ve been in Dallas, TX for NSBCon 2015, the second edition of the official NServiceBus conference.

It was a blast, I found the quality of the sessions and the activities very good and I met some nice and highly competent people. Oh, and great food!

As soon as the slides and videos are available online, I will add links.

Pre-Con Workshop
The week started out with a full day Advanced Saga Masterclass by Andreas Öhlund. Although most of the content was already familiar to me, I still had some nice takeaways from the day. The most important one is around integration patterns with external non-transactional resources. When dealing with non-transactional resources like web services in an NServiceBus architecture, it’s wise to place the interaction in a separate endpoint that handles a command sent from the Saga.

My takeaway was that you should leave the technical details of the web service interaction inside that integration endpoint. If an error occurs, just let NServiceBus handle it so you get First Level Retries and Second Level Retries for free. Only report back to the Saga with a functional reply, such as a Tracking# in case of a shipping provider. This way the Saga stays clean and functional, as does the integration handler.
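In code, such an integration handler might look roughly like this (the message and client types are made up for illustration, and this uses the synchronous pre-V6 handler API):

```csharp
// Integration endpoint: owns all technical details of the web service call.
public class ShipOrderHandler : IHandleMessages<ShipOrder>
{
    public IBus Bus { get; set; }

    public void Handle(ShipOrder message)
    {
        // Non-transactional external call. If it throws, NServiceBus
        // gives us First and Second Level Retries for free.
        var trackingNumber = new ShippingProviderClient()
            .CreateShipment(message.OrderId, message.Address);

        // Report back to the Saga with a purely functional reply.
        Bus.Reply(new OrderShipped
        {
            OrderId = message.OrderId,
            TrackingNumber = trackingNumber
        });
    }
}
```

The Saga only ever sees `OrderShipped` with a Tracking#; retries, timeouts and web service plumbing stay inside this endpoint.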


Another important lesson is that a Saga is a good fit when there’s an aspect of Time in your domain logic. For example: a buyer’s remorse policy (the ability to cancel an order within x minutes). For logic like this, the RequestTimeout feature of a Saga really shines. You could state that if a Saga does not contain at least one timeout, you probably shouldn’t be using a Saga; you could just as well put the logic in a regular message handler.
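A buyer’s remorse Saga could be sketched like this (all type names are hypothetical, the Saga-to-message mapping is omitted, and this follows the pre-V6 API):

```csharp
public class BuyersRemorsePolicy : Saga<BuyersRemorseData>,
    IAmStartedByMessages<OrderPlaced>,
    IHandleMessages<CancelOrder>,
    IHandleTimeouts<RemorsePeriodElapsed>
{
    public void Handle(OrderPlaced message)
    {
        Data.OrderId = message.OrderId;
        // The Time aspect: give the buyer x minutes to change their mind.
        RequestTimeout<RemorsePeriodElapsed>(TimeSpan.FromMinutes(5));
    }

    public void Handle(CancelOrder message)
    {
        // Cancelled within the remorse period: nothing happens downstream.
        MarkAsComplete();
    }

    public void Timeout(RemorsePeriodElapsed state)
    {
        // Remorse period elapsed: now it is safe to start fulfilment.
        Bus.Send(new ShipOrder { OrderId = Data.OrderId });
        MarkAsComplete();
    }
}
```

Without the timeout, the two handlers could just as well live in a plain message handler, which is exactly the point above.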

The day ended in great company at Casa Rita’s Mexican Restaurant with a fantastic Brisket Burrito. (Hey, I’m a foodie.)


Conference day 1
The first day of the regular conference was already packed with nice sessions. Udi’s keynote was sort of a State-of-the-Union on what Particular has been up to for the past year. Lots of work has been done on the internals of NServiceBus, more on that later. Most striking was probably the announcement that – a year after being introduced at NSBCon 2014 – ServiceMatrix is being discontinued.

To be honest, I am not really surprised and also not sorry for it. I’ve never believed in draggety-droppy modelling tools for building software and frankly I felt Particular was going along with that craze a little too much. As Udi explained, a tool like that will get you 80-90% there, but it’s the nitty-gritty details in that last stretch that will trip you up. So it’s back to the model of development that I’ve always liked so much about NServiceBus: put the developer in the driver’s seat.

That’s not to say that there is no demand for visual tools. Some of the efforts that have gone into ServiceMatrix are now being transformed into another form of visualisation, one that I much prefer: visualisation of your finished system as it runs. In other words: living documentation. I’ve always been a bigger fan of using simple tools for modelling: PowerPoint (yes I said it, I’m a PowerPoint architect), or much rather even: pencil & paper or a whiteboard. No modelling tool has ever really appealed to me, not the UML ones and especially not the “graphical coding” ones (looking at you Windows Workflow Foundation and BizTalk). Just stick with boxes and arrows and start coding, is my style.

NServiceBus V6
After Udi’s keynote, Particular solution architect extraordinaire Daniel Marbach took us through the new NServiceBus V6 release. V6 is all about embracing async/await, which means that an important chunk of NServiceBus Core was rewritten from the ground up to be async all the way.

Daniel did a great job of explaining why Particular did this. Ever since the introduction of the TPL and the async/await keywords, Microsoft has been pushing towards async code more and more. This is most apparent in the cloud, where all IO operations (Azure Storage Queues, Azure Service Bus, SQL Azure, etc) have an async API. Up till now, handlers in NServiceBus were synchronous, and made you implement a single method:

void Handle(MessageType message);

This makes it hard to consume async API’s for IO, because marking a void method as async is evil. Furthermore, the NServiceBus Core is not aware of any async code, so strange things can happen if you’re not careful. This led the team to the decision that V6 should be a breaking change in order to go fully async. The interface for a handler now looks like this:

Task Handle(MessageType message, IMessageHandlerContext context);

You must now return a task (or Task.FromResult(0) if your implementation does not require async), and you receive an IMessageHandlerContext implementation that you can use to send or publish messages. No more public IBus Bus { get; set; }. This is done so that there is no more dependency on state across multiple threads. NServiceBus will make sure that you have the correct instance. It is also important to note that the use of ThreadLocal for caching is dangerous because of the introduction of async/await. All in all, an excellent talk by Daniel, who wrapped his lessons in a story about a Swiss Chocolate Factory. Gotta love the Swiss.
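Put together, a V6 handler could look something like this (the repository is a made-up stand-in for any async IO):

```csharp
public class SubmitOrderHandler : IHandleMessages<SubmitOrder>
{
    readonly IOrderRepository orderRepository; // hypothetical async IO dependency

    public async Task Handle(SubmitOrder message, IMessageHandlerContext context)
    {
        // Async IO all the way down: no async void, no blocked threads.
        await orderRepository.SaveAsync(message.OrderId);

        // Send/publish through the context instead of an injected IBus.
        await context.Publish(new OrderSubmitted { OrderId = message.OrderId });
    }
}
```

A handler with no async work would simply end with `return Task.FromResult(0);` instead of being marked async.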

More interesting info on async/await and ThreadLocal can be found here:

  • https://www.youtube.com/watch?v=4uWzIM1U-VA – a presentation about the pitfalls of async/await and the SynchronizationContext. Highly recommended, even if only for the creative format.
  • http://www.particular.net/blog/the-dangers-of-threadlocal – The dangers of ThreadLocal


Andrew Skotzko from Petabridge did a riveting, high octane presentation about Akka.NET and the Actor Model. I’ve briefly looked at the actor model before and am interested in exploring it more, especially after this session. One of the things I’ve been wondering about for a while is how the actor model fits in with NServiceBus.

In his session, Andrew mentioned that Akka could be a great producer or consumer for NServiceBus. In Andrew’s words: “Akka is about Concurrency, NServiceBus is about Reliability”. I’d have to explore this some more, but frankly I don’t really see a great fit as of yet. One of my main concerns with Akka is that remote messaging between actors is not reliable. It is supposed to be lightweight and location transparent, but the model compromises on reliability. Akka has an “at most once” delivery guarantee, which doesn’t exactly guarantee anything. For communication between actors, Akka provides location transparency, i.e. you just use an interface to send a message, and the receiving actor can be either in memory or on a remote machine. Hmm, where have we seen this fail before? At least NServiceBus is always clear on the fact that when you send a message, it’s inherently remote. And NServiceBus has your back in making sure the message arrives.

So, I’m not yet really seeing the point of using an actor model inside, or alongside NServiceBus services, but it requires more investigation.

This seems to be the best place to start: http://LearnAkka.net

I must say though, Andrew and Aaron Stannard from Petabridge are both extremely passionate and helpful. They’re very nice guys and we had good fun over a couple of beers.

Death to the Distributor
Sean Feldman from Particular talked us through a preliminary version of a new routing component in the NServiceBus plumbing, which allows them to eliminate the Distributor component when you’re on the MSMQ transport and want to scale out. A nice and short talk in which Sean showed a promising solution for decentralised routing with a pluggable strategy (you could write your own distribution logic if you’d like). With that basis, combined with realtime health info from endpoints, dynamic routing and commissioning and decommissioning of nodes will become possible. Interesting stuff that’s in the works.

There were also two interesting case studies on the first day: Building the Real-Time Web with NServiceBus and SignalR by Sam Martindale, and a more high level session called Decomposing the domain with NServiceBus by Gary Stonerock II.

Combining SignalR with NServiceBus has been a hobby of mine for a while, so the idea was already familiar to me. It was nice to see the application that Sam and his team put together: a multi-user graphical web application for “composing” and budgeting luxury homes that relies on SignalR and NServiceBus for on-the-fly budget calculation and distribution to multiple users at once. Sam was kind enough to give a shout-out to my earlier blogpost about the NServiceBus backplane for SignalR. With thanks to Ramon Smits of Particular, the sample has now been upgraded to the latest stable versions of both NServiceBus and SignalR. Thanks Ramon!

Gary’s session was interesting as well, sharing his journey designing a system for Medical Trials on a logical architectural level. Some key takeaways for me were:

  • In modelling your domain logic, make sure that you model it for success (i.e. Logical Success Modelling). Try to factor out the edge-cases and exceptions by asking more questions about the business logic. You will end up with much simpler domains.
  • Defer naming things: don’t name services, domains or classes until you have a clearer picture of their responsibilities. You will end up with much clearer names. Udi calls this “Things Oriented Architecture” in his ADSD course.

Then it was time for the Hackathon! The assignment: come up with the most over-engineered Hello World application based on the NServiceBus building blocks. I teamed up with Anette, Daniel and Adam and after a couple of beers we came up with project Ivory Xylophone:


See? Pencil & paper FTW!

By sending a tweet with hashtag #ivoryxylophone, we set off a system that sends a message to a Saga, which converts all the characters in the message to Morse code, gathers the characters, converts them back to plain text and sends the result to a handler that tweets out the message on Twitter. All NServiceBus messages would go over a transport based on Git commits (…). Apart from the Git transport, we got it working end-to-end.


Mob programming

We got second place, just after the awesome Slack transport by Mark Gould.


Demo time for Ivory Xylophone

Conference day 2
Ted Neward kicked off day 2 with his keynote session “Platform Oriented Architecture”. He opened by establishing that the software industry has been through several cycles of inventing the same wrong solutions for the same problems, multiple times. We’ve seen other people speak about that and frankly, he’s right. The nice thing about Ted’s talk was that he took a shot at how the future of architecture should look. He concluded that most of us are building a platform of some sort. And in order to succeed, you need all of the good things that we’ve established: the 4 tenets of SOA are still a good idea, we still need to mind the 8 fallacies of distributed computing, but the most important thing is that a platform should have a clearly defined Domain and Context. Meetup, Uber, Yelp and AirBnb are all platforms, but each has its specific domain and gives meaning to technology choices in its own context.

All About Transports
Swedish Chef Andreas Öhlund gave a great talk comparing the different transport options that NServiceBus offers, wrapped in a tasty tale of a Swedish meatball company. I really liked his stylish slides, kudos! His message was important: different transports have very different characteristics and you should choose wisely. Do you choose a decentralised model with MSMQ or a broker style transport like SQL Server, Azure Service Bus or RabbitMQ? Speaking of RabbitMQ: one does not simply install RabbitMQ and be done with it. It takes care to make RabbitMQ reliable (clustering), and you must be careful when you send messages inside a transaction that might fail, or you’ll end up with ghost messages. Andreas’ talk could be a great blogpost in itself. I hope Particular puts it on their blog soon.

Other interesting talks were Jimmy Bogard‘s Integration Patterns With NServiceBus where he shared some lessons learned about integrating legacy systems with newer NServiceBus systems, dealing with big files and his interesting Routing Slip Pattern.

Kijana Woodard had a funny and insightful talk about Things Not To Do With NServiceBus. I like seeing a talk like this as it shows that Particular isn’t just shoving NServiceBus down everyone’s throat as the end-all solution for everything. It’s not a golden hammer, so there are definitely cases where it doesn’t fit.


Beef ribs!

Again, a great day, which ended in the fantastic Brazilian Steakhouse Boi Na Braza. Did I mention I’m a foodie?

Day 3: Unconference
One of the things I enjoyed the most was the Unconference day: a day where participants set the agenda and get to do the talking. We collaborated on an agenda for the day and ended up doing multiple parallel tracks of discussions and knowledge sharing. I participated in discussions about Security & NServiceBus, UI Composition, Akka.NET & the Actor Model, Sagas & Domain Concepts, and Microservices. Got a ton of new insights. My sketchy notes might not be as meaningful to you as they are to me, but still…

All in all, NSBCon was well worth the trip to Dallas. All the Particular folks are very approachable, hospitable and helpful, the participants were great discussion partners and the quality of the sessions was very good.

Xamarin, NServiceBus, Microservices and Enterprise Mobility – my sessions at Microsoft TechDays NL


Last Thursday and Friday, Microsoft TechDays NL 2015 was held at the World Forum in The Hague. TechDays is the biggest Microsoft related software conference in the Netherlands, with over 2100 attendees. For us at Xpirit, the conference was a big success: we were fortunate to be selected to present a total of 21 sessions with a crew of 6 Xpiriters. This blogpost contains some reflections on the conference and the slide decks for my own sessions.

TechDays 2015 was great fun. I think Microsoft managed to pull off a nice conference. I made a Storify overview of Xpirit’s activities at TechDays.

With an awesome, energising keynote by ex-Google employee and now Microsoft Distinguished Engineer James Whittaker, TechDays was off to a great start. James took us on a journey about how the web has evolved into the way we now consume data through apps, and how the app model might evolve into newer experiences in the future. It’s all about data consumption and intelligent software that gets that data into our hands the moment we need it. The Internet of Things will work for us and obsolete whole industries over time, leaving time and room for humanity to explore the world, the seas, science and the galaxy. Inspiring and very funny.

We chose to accompany our sessions with an in-depth, 44 page magazine, handed out to all attendees. It covers some of the topics we spoke about at TechDays, as well as new technologies like Ionic and the cool new HoloLens.

If you were unable to obtain a copy, you can also download it from our website. If you’d like a hard copy, give me a ping.

Being a hobby sketch-noter/cartoonist/graphics fanatic, I really appreciated seeing the live sketchers from Wandverslag create a beautiful drawing with details from the sessions and discussions in the hallways.


In the same style, all conference speakers got a nice personalised gift: our own cartoon.

As Xpirit, we chose to take on the topic of Microservice architecture, demystify some of its aspects and provide possible approaches for building microservices. I had the opportunity to talk about two of my favourite topics: mobile development (Xamarin!) and distributed systems architectures (NServiceBus!). Here are my sessions:

Lessons learned: migrating an N-tier web application to microservices with NServiceBus
In this session, I explained how I transformed an existing N-tier web application to a more scalable and manageable architecture in the microservices style. I presented the reasons for migrating and a 5-step migration plan. This session is not specifically about NServiceBus, but rather about the architectural approach.

Foodie for life

Microservices with NServiceBus in Particular
In this session, I showed how you can build a loosely coupled, message driven microservices architecture with the NServiceBus framework and the tools from the Particular platform.


Enterprise Mobility & Cross Platform Development from the Trenches
This session is about some hard lessons learned while developing cross platform apps in an enterprise environment. The colliding worlds of EMM platforms and cross platform tools can give you some headaches if you’re not careful.

Building out-of-the-ordinary UI’s with Xamarin.Forms custom renderers
Xamarin.Forms is a powerful framework that uses UI abstractions to enable 100% code reuse for UI code whilst still delivering 100% native user experiences. In general, for simple data driven apps, Xamarin.Forms is a good fit. But what if you need to build a UI that is a little less ordinary? Can you still do that with Xamarin.Forms? This session explains how you can use Custom Renderers to go beyond the standard controls.

Check out the blogs of my other Xpirit colleagues for their TechDays sessions: Rene, Marcel, Patriek and Marcel.

Thanks to everyone for attending my sessions, the great conversations and feedback! See you next year?



Wow, some interesting announcements in the Microsoft Build 2015 keynote yesterday. Too many to go into in a quick blogpost, but Azure sure is looking great, Hololens is impressive, Docker on Windows, Visual Studio Code on Mac and Linux – wow!

My main interest was what’s happening with Windows 10, and especially on Mobile. Microsoft is doing some interesting things when it comes to its interpretation of cross platform… 

First, if we look at Office 2016 and its new JavaScript extensions: finally we can get Xobni-like features (remember?) without those obnoxious memory and CPU hogging OCX plugins. Most impressive: it works on Office for Windows but just as well on Office for iOS. Well played.

Windows 10 in itself is starting to look pretty impressive. Personally I’m not really buying into the idea of having my phone act like a desktop when plugged into a monitor, but it looked impressive. And the idea of continuity is something Google, Apple and Microsoft are implementing quite nicely.

The whole idea of Universal Apps is very compelling and a good way to get more developers on the Windows platform, especially Windows Phone, which desperately needs to gain some traction. Purpose built Windows apps are starting to look pretty nice now that they’re borrowing some UI paradigms that work from Apple and Google and shedding some of that first generation Metro blandness.

But here comes the most interesting part… Besides using C# and .NET to build Windows native apps, you can now also target Windows and the Windows Store with four new technologies. The first one makes the most sense…

Web: you can put your server hosted website on Windows as an app, and actually leverage features like Cortana if you detect that you’re running on Windows. Sounds like a good approach and probably better than the WinJS route that was introduced with WinRT.

Next, we have Win32 / .NET, which basically means that old style applications now also have a chance to be in the Windows Store. Applications like Photoshop or Photoshop Elements are good candidates for this, as they rely on low level graphics processing and are not forced to rewrite in a more limiting framework. Let’s see how this works out when old ugly grey WinForms applications hit the Store. I also assume that these apps will be limited to running on desktop only and not on tablets or phones.

But then… Two more interesting options were announced. 

First, Microsoft pulled a BlackBerry on us and announced that Windows Phone will have an Android subsystem, so that it can run Java or C++ Android apps, with “minimal adjustments”. Whoa.

Next, Microsoft has also added the ability to take your existing Objective-C code for iOS and – again, with “minimal adjustments” – compile and run it on Windows. They went on to demonstrate this with the hit game Candy Crush Saga.

Suddenly the clean – rebuilt from the ground up – Windows 10 operating system is turning into a chameleon OS that adds even more compatibility.

This is an interesting move, and surely a way to make Windows Phone more relevant when it comes to apps. All of a sudden, both Android and iOS developers have an opportunity to target “1 billion” extra devices just by recompiling their apps for Windows. About time, because quite frankly, it was starting to become a bit pathetic when I was discussing tech with my Windows based colleagues.

“Let’s use [communication platform X]!”
– “Er, we can’t, there’s no app for that for my Windows Phone”

Sad trombone.

So, what does this mean for cross platform development?
One may start to wonder what’s the point of doing cross platform development, such as with my personal favorite mobile development tool Xamarin? That’s an interesting question indeed. I mean, why bother with C#, architecting for code reuse, etc. when you can do it fully native and add Windows to the mix while you’re at it?

Well, first of all, you’d still be writing your app twice if you’re targeting Android AND iOS. So that’s Java and Objective-C/Swift. And then you have to choose which of these two you’re going to put into the hands of Windows users.

But more important to me: what is the experience you’re going to give your Windows users? Just as I’m an iOS fanboy, these users pick their device and OS out of personal preference. These people like their square coloured, high-on-typography tiles and their swirling screen transitions. What is the feeling you’re going to give them as an app developer if your app is an Android app and looks and behaves just like one?

Sure, this is a great way to segue into the Windows ecosystem and get your popular app in the hands of Windows users, but it will make their coloured rectangles OS look like FrankenWindows when they have mixed Windows, Android and iOS apps on them. This is what it’s going to look like to them:



As I see it, this should be an intermediate step. You’ll want to give your Windows users a first class experience, just like your iOS and Android users, so you’ll want to drink that Windows Universal App Kool-Aid and develop for them natively. And this is where the Universal Apps approach makes a lot of sense, because you’re not only hitting that 5% market share Phone OS, but tablets, desktops and laptops as well if you’re doing it right. Those 1 billion devices that Microsoft is talking about.

So, I think this FrankenWindows trick is a good way for Microsoft to make Windows Phone more relevant in the market, and I hope it will make developers push their iOS and Android apps to the Windows Store as well. But as Windows Phone gains traction, I also expect that cross platform development will remain relevant.

I’ll leave it up to you whether this will be HTML/CSS/JavaScript or native via Xamarin.

In any case, Microsoft made some bold moves yesterday, and it makes developing in their ecosystem all the more exciting.

UPDATE: Alan Mendelevich has a great blog post on this topic as well, with some inside info on how these Android and iOS bridges should be interpreted.

Cross platform mobile & Enterprise mobility just got better


In my previous blogpost I mentioned a couple of challenges that I encountered in previous projects where we were developing apps for internal use within an enterprise organization. These apps were distributed using an EMM platform, and developed using a cross platform tool: Xamarin.

At the time, the status quo was that EMM tooling and cross platform development frameworks were evolving at a different pace, which meant that EMM platforms (at least the ones we looked at) were not prepared for apps built with a tool like Xamarin. Read the blogpost for further details.

In early March, Xamarin and AirWatch, along with other industry partners like Box.com, Cisco and Workday, announced their new initiative “ACE”: “App Configuration for Enterprise”.

This is a very interesting and fortunate development, since it means that we will be working towards an industry standard for API’s through which enterprise apps can make use of EMM features. After all, most of the features are the same across vendors, but each vendor has its own technology to implement them. This meant that if you wanted to use features like Single Sign-On across applications, data protection or dealing with network connectivity, the EMM framework of choice would be very invasive to the app. If ACE is widely adopted, this is no longer the case, and enterprises will have the freedom to change EMM vendors without much impact on their apps (other than having them wrapped again and re-distributed).

Not only developers benefit from this, but end users as well. With a standardised and easy setup process, onboarding into an EMM program becomes much less of a barrier. I’ve seen the UX of some of these vendors and it is sometimes confusing and unpleasant. A standardised way to handle this is a welcome addition.


At the moment, ACE is still mostly just an initiative, and I think that the standards need to crystallise a bit more, but there’s already a Dev Center with a couple of (proposed) standards. Let’s have a look…

What we see is that most of the standards default to leveraging the out of the box facilities in the OS. I.e. if the app uses these frameworks and API’s, and the EMM vendor also conforms to these standards, you should be safe. For iOS 7+, the Managed App Configuration standard is referenced, which makes use of the EMM API’s that Apple has built into iOS such as the Apple MDM Protocol.

A configuration setting sent to an app, for example, would be stored in the NSUserDefaults dictionary named “com.apple.configuration.managed”. Reading a setting would then be as easy as this:

NSString *keyValue = [[[NSUserDefaults standardUserDefaults]
    dictionaryForKey:@"com.apple.configuration.managed"]
    objectForKey:@"keyName"];

Or, in Xamarin speak:

var keyValue = NSUserDefaults.StandardUserDefaults
    .DictionaryForKey ("com.apple.configuration.managed") ["keyName"]
    .ToString ();

For Android (Lollipop+), you would make use of App Restrictions.
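In Xamarin.Android terms, reading such a managed setting might look something like this (the key name is made up; RestrictionsManager requires Lollipop or later):

```csharp
// Inside an Activity, or anywhere with an Android Context at hand:
var restrictionsManager =
    (RestrictionsManager)GetSystemService(Context.RestrictionsService);

// The bundle of key/value pairs pushed down by the EMM vendor.
Bundle restrictions = restrictionsManager.ApplicationRestrictions;

var keyValue = restrictions.GetString("keyName");
```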

Well, you can read the rest of the specs on the Dev Center page.

In general, I think it’s a good idea to default to built-in OS features for tackling security requirements like this. One drawback is that for some (or most, actually) features, you have to assume that your users are on a recent OS version. With iOS, that’s 7+, which is usually not a big issue because of the fast adoption of new iOS versions. But for Android, some features were only introduced in Lollipop. Certificate authentication in a web view, for example, was not accessible to a developer in previous versions. And that might be a big issue if your enterprise has a BYOD policy and you’re dealing with all those older versions out there. But you have to start somewhere, so it’s a matter of time before this is sorted out. That may take quite a while though, so if you’re going to follow the ACE strategy, you might want to think about providing compliant devices to your users instead of them bringing in their own old and crappy Android phone.

What about cross platform development?
A big advantage of Xamarin is that we have the ability to share code across platforms. With these EMM and App Config features however, we’re going back to platform specific implementations. This calls for a component or set of plugins that can help in abstracting these implementations. This would of course be a Xamarin specific component in my case, but the same would apply if you use Cordova or Titanium.
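Such an abstraction could start out as simple as a shared interface with platform specific implementations behind it; everything below is hypothetical, just to illustrate the shape:

```csharp
// Shared code: the app only ever talks to this interface.
public interface IManagedAppConfiguration
{
    string GetSetting(string key);
}

// iOS implementation, backed by the managed NSUserDefaults dictionary.
public class IosManagedAppConfiguration : IManagedAppConfiguration
{
    public string GetSetting(string key)
    {
        var config = NSUserDefaults.StandardUserDefaults
            .DictionaryForKey("com.apple.configuration.managed");
        var value = config == null ? null : config[key];
        return value == null ? null : value.ToString();
    }
}

// An Android implementation would wrap RestrictionsManager in the same
// way, and a Windows implementation could plug in once ACE covers it.
```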

I’d be interested to know if Xamarin is already working on an abstraction like this, or otherwise I’d be interested in starting such an initiative. If anyone wants to help, let me know :)

Another very big advantage is that – if developers and EMM vendors follow these standards – app wrapping or the use of a proprietary EMM SDK becomes unnecessary. And this is good news for cross platform developers. No more worrying about whether the wrapper is aware of the Mono DLL’s inside my Xamarin.Android app, etcetera.

Where’s Windows?!
One very big shortcoming is that the ACE site only proposes solutions for iOS and Android. Where’s Windows?! In the recent releases of Windows Phone, Microsoft has added support for several EMM features to the OS, so surely there must be some guidance for implementing SSO, SAML, Settings, etcetera in Windows Phone, Store and desktop apps.

This is something that I would like to add, or see added, to the standards.

All in all, I’m very happy with the ACE initiative and I hope to see more vendors join the club. Microsoft should definitely chip in, and other cross platform tool vendors as well. I applaud Xamarin, AirWatch and the others for showing the way forward.

This can make a lot of difference for enterprises who struggle with purchasing and contracting an EMM vendor, concerns about lock-in, and limitations or obstacles in their app development.

Cross platform mobile & enterprise mobility from the trenches


In this post, I will share some lessons learned from projects where we created mobile apps with cross platform technologies and managed them with an Enterprise Mobility Management platform. Most of my experience is with Xamarin technology in combination with AirWatch EMM, but most of the lessons apply to other technologies as well, since they are generic to the way EMM systems and cross platform technologies usually operate. If you’re using PhoneGap, Titanium, Citrix XenMobile, GOOD Technologies, etc. you may have to deal with the same gotchas.

DISCLAIMER: innovation in the EMM and cross platform mobile development area is progressing swiftly. The information in this article may quickly become outdated. I’ll try to revise this post every once in a while.


Photo Credit: Roberto Trm via Compfight cc


EMM, or Enterprise Mobility Management, is a broad topic, ranging from making corporate email available on mobile devices to corporate app stores to full-on mobile device management. In projects I’m involved with, EMM usually means that we are dealing with a controlled environment in which the apps that we build are made available and secured through the EMM platform.

Most existing EMM vendors have offered products that support MDM – Mobile Device Management. Traditionally, corporate owned devices are put under MDM, through which the IT department can enforce strict policies and fully control the device, such as wiping it or locking it from a distance once it is compromised. “All your device are belong to us!”

With the rise of the BYOD trend – Bring Your Own Device – many EMM vendors are moving away from pure MDM and are also offering a MAM model – Mobile Application Management. The main distinction is that in the MAM mindset, the device is considered property of the employee, while the apps, and more specifically the data in those apps, are controlled by the company. This means that the device holds both private and corporate assets, something we call a Dual Persona situation. A full device wipe is therefore not possible; instead, an Enterprise Wipe will only remove corporate data, apps and policies and leave the private data intact.

One important thing to note is that these MAM products are heavily under development and not always as stable or feature-rich as the MDM tools. There are also a couple of vendors popping up in the market that focus solely on MAM and leave the MDM features behind. Depending on the security requirements within an organization, MAM may not yet be suitable for the situation at hand.

Features and stability aside, the level of support for the different mobile OS-es also varies heavily. Some mobile OS-es are more mature than others in their support of enterprise features, which can either enable or limit an EMM vendor. iOS, for example, has very rich APIs for enterprise management, Android is catching up, and Windows Phone has been adding some of these features since version 8.1. Combined with the slow market adoption of Windows Phone, there aren’t many vendors with very rich support for Windows Phone yet.

Tip: be sure to assess the stability of the MAM product on the desired features and support for your target OS-es before deciding on a product.

EMM and Cross Platform Mobile Development
Most EMM platforms offer two ways of securing apps: first, through app wrapping, and second, by using an SDK inside the app. These technologies – especially app wrapping – make it very easy for a company to apply and enforce security policies without the developer having to worry about these security measures in the app’s code. This way, security measures are always standardized and can be applied transparently to the developer. Moreover, app wrapping reduces vendor lock-in, which makes it easier to replace an EMM platform if necessary.

Many enterprise architects are fond of thinking in these generic terms and like to decree things like “security must be applied in a standard and transparent way”, or “thou shalt prevent vendor lock-in”. The features on the EMM vendor’s marketing slides are a perfect fit with these 10,000ft statements, so in theory this sounds great, right?

Photo Credit: ores2k via Compfight cc


Companies that build corporate apps and want to support a BYOD strategy have to deal with a wide diversity of device types and mobile OS-es. This means that building apps can become a hassle, unless you use a cross platform mobile development tool or platform. Examples of these are HTML5, PhoneGap, Appcelerator or Xamarin. At Info Support, Xamarin is the primary tool of choice, so in most of our projects we will be working with Xamarin. These Cross Platform Mobile Development platforms are also heavily under development and are rapidly improving.

So on one hand, we see the need for transparent and standardized security through app wrapping, and on the other hand we see a desire to build apps in a cost efficient way using cross platform technologies.

So here’s the catch… Combining these two goals can be challenging. For tight integration on the device and secure wrapping of apps, EMM vendors rely heavily on native features of the underlying OS. This means that wrapping and the SDK will primarily work fine for natively built apps.

Most EMM vendors will therefore not guarantee their product to work with a cross platform app.

Furthermore, the development of both EMM and Cross Platform tools is progressing rapidly but independently, which means that both worlds have not yet come together in such a way that integration is guaranteed to work.

Tip: when evaluating EMM and Cross Platform tools, look for a combination that works for the most important scenarios. Do a proof of concept, and dig deep to see if it really works. Enterprisey 10,000ft principles won’t work here (if they ever do).

Let’s look at some examples…

Data-in-transit security
Data-in-transit is data that is being sent over the network between the device and another party, usually a server-side API.

EMM platforms typically secure data-in-transit by intercepting network traffic (http/https) and adding extra encryption or routing it through a secure tunnel. This means that the wrapper must be able to intercept these network calls.

The way this works is as depicted in the following figure:


So if you’re not careful, and don’t have a good understanding of how your cross platform framework works, network calls may go unnoticed and bypass the security layer of the EMM container. Whoops.

Photo Credit: striatic via Compfight cc


With most cross platform tools, you won’t easily be able to have your network calls intercepted, unless you have a way to redirect this traffic through the original API layer. In Xamarin, one way to do this is to use the ModernHttpClient library, which is available as a component on the Xamarin Component store.
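To illustrate, here is a minimal sketch of routing Xamarin network traffic through the native HTTP stack with ModernHttpClient. The URL is a placeholder, not a real endpoint:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using ModernHttpClient;

public class SecureApiClient
{
    // NativeMessageHandler executes requests on the platform's native
    // networking stack (NSURLSession on iOS, OkHttp on Android), which is
    // the layer an EMM app wrapper is able to intercept and secure.
    readonly HttpClient client = new HttpClient(new NativeMessageHandler());

    public Task<string> GetOrdersAsync()
    {
        // "api.example.com" is a placeholder for your own back end.
        return client.GetStringAsync("https://api.example.com/orders");
    }
}
```

The key point is that any `HttpClient` created with the default managed handler bypasses the native layer, and with it the EMM container's interception.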




Data-at-rest security
Data-at-rest (DAR) is data that resides on the device, either in memory or persisted on the local storage as a file or database. In some cases, security guidelines may require this data to be encrypted.

Most EMM vendors also promise to support automatic data-at-rest encryption. Here too, carefully read between the lines to discover what level of support you will get per mobile OS.

Most of the time, DAR encryption is added to the app during the wrapping process. For Android, this means that the actual Java code inside the APK is altered. For example, whenever there is a call to File I/O or the SQLite database, this code would be replaced by calls to IOCipher and SQLCipher respectively. This means that local storage in wrapped apps will be automatically encrypted.

In a cross platform app however, the app code will usually reside in some shared layer/language (a Mono DLL in the case of Xamarin.Android, for example) inside the APK. A wrapping engine looking for Java calls to File I/O or SQLite will therefore not detect them in the app. Unless the wrapping engine is aware of the cross platform tool being used, an app may go into production without the DAR-encryption policy applied.

This means that EMM security cannot be applied “transparently” to the app developer, since there has to be a mutual understanding of the tools that are being used, and the level of integration between the two. In one of my projects, we had to do our own DAR-encryption inside the app.

A SQLCipher component is available for Xamarin. You can see a presentation on this component on the Evolve 2013 site. There are however no C# bindings for IOCipher available. It may be wise to resort to the open source Conceal framework from Facebook.
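As a rough illustration of what do-it-yourself DAR-encryption looks like with SQLCipher, here is a sketch assuming a sqlite-net style API; the exact API of the Xamarin component may differ, so treat this as indicative only:

```csharp
using SQLite; // sqlite-net style API, as bundled with the SQLCipher component

public class EncryptedStore
{
    public SQLiteConnection Open(string dbPath, string password)
    {
        var conn = new SQLiteConnection(dbPath);
        // SQLCipher derives the database encryption key from the value set
        // via PRAGMA key; it must be issued before any other statement.
        // In real code, escape the password properly instead of inlining it.
        conn.Execute(string.Format("PRAGMA key = '{0}';", password));
        return conn;
    }
}
```

Where the key itself comes from (user PIN, keychain, EMM SDK) is a separate design decision with its own security trade-offs.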

For iOS, wrapping engines mostly don’t support the same type of automatic DAR-encryption, due to limitations of iOS itself. This means that an app developer has to deal with DAR-encryption himself anyway.

An excerpt from the documentation of an EMM vendor on the topic:

iOS supports Data Protection in iOS 6 but requires the application developer to explicitely implement it in the App; there is no way to force data protection from the wrapping engine.
iOS 7 offers Data Protection for all apps as long as the user has set a passcode AND only applies between the time the device has been rebooted and unlocked for the first time.

You can read more in the Apple SDK docs.

Beware that for using the Data Protection API’s in iOS, the user must have a PIN-code enabled on the device. In a BYOD situation, you cannot always rely on this being the case.
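For completeness, this is roughly what using the iOS Data Protection API looks like from Xamarin.iOS; the class and file name are illustrative:

```csharp
using Foundation;

public static class ProtectedStorage
{
    // Writes a file with FileProtectionComplete, so iOS keeps it encrypted
    // whenever the device is locked. Note: this only has effect when the
    // user has a passcode set on the device.
    public static bool Save(byte[] bytes, string path)
    {
        NSError error;
        using (var data = NSData.FromArray(bytes))
        {
            return data.Save(path, NSDataWritingOptions.FileProtectionComplete, out error);
        }
    }
}
```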

Both EMM tools and Cross Platform Development tools are still evolving rapidly. Given the diversity of both product categories, it will be hard to find two tools that integrate perfectly. This means that you have to be aware of some pitfalls and limitations when combining the two. I would always advise my customers to do a deep technical validation of any proposed solution that is based on an “out of the box” feature of the EMM platform.

I’d be interested in your experiences!

Xpirit: Off to new adventures


Today marks an important day for me personally… After 15 years, I left Info Support last Friday to pursue a new adventure in my career. Today, I joined a great team, starting up a brand new company named Xpirit.

Looking back
Info Support has been a part of my life for 15 years. I did my graduation project at Info Support in 1999, and after getting my Bachelor’s degree, I signed up to work for them as a software developer and consultant. Flash forward to 2014 and I find myself working as a software architect, heading up the Enterprise Mobile competence center, speaking at international software conferences and blogging, tweeting and sharing knowledge with awesome peers from all over the world. I learned so much in my time at Info Support, and I owe a lot to the great bunch of people there. In return, I gave my best over these 15 years and helped Info Support stay at the top of Dutch IT. Now the time has come for me to take on this new opportunity.

Xpirit: Think ahead. Act now.
Xpirit is a new consulting company, focusing entirely on the Microsoft ecosystem. With a fantastic new team, consisting of Microsoft MVPs, Regional Directors and community leaders, we will offer high-end consulting services for (enterprise) companies looking to implement or integrate systems using the Microsoft stack. This doesn’t mean that we will focus solely on MS products though. The Microsoft ecosystem is extremely rich with Open Source frameworks and 3rd party solutions.

Our technical team consists of Marcel de Vries (CTO), Alex Thissen, Marcel Meijer, Patriek van Dorp (soon) and myself and is powered by our Sales and Managing Director Pascal Greuter. This sounds like a dream team to me, and I’m extremely excited to be a part of it. Each brings their own strengths and personality to the table, which makes for a great environment to work in.

dreamteam cap

For me personally this means that I will continue to focus on cloud and mobile architecture and development in the enterprise. Of course, Xamarin will continue to be a big part of my strategy in this. Microsoft makes fantastic technology to build services and back end architectures, but let’s face it, their end-user-facing technologies are struggling to keep up with Google’s and Apple’s. It’s a diverse world, in which iOS and Android co-exist with – even dominate – Windows, even more so in the mobile area. Xamarin enables us to leverage the highly productive languages and frameworks from the Microsoft stack directly on iOS and Android. A perfect fit. Marcel and I will continue to be active in the Xamarin community, and run the Dutch Mobile .NET Developers user group together with the fine people from Macaw and Info Support.

Many of you also know that I enjoy working on distributed systems, SOA and event driven architectures. Azure is a great platform for this. One framework that I’ve been specialising in for the last couple of years is NServiceBus. I’m committed to continuing my activities in the NServiceBus community, and would love to be your go-to guy for distributed systems design with NServiceBus.

Xpirit is a Xebia company, which means that we’re fortunate to inherit the same company strengths and values that make Xebia a great company. If you’re interested, go check out Good To Great by Jim Collins, Winning by Jack Welch, and Eckart’s Notes by Eckart Wintzen to understand the foundations we’re going to build on. Inspiring stuff, that’s for sure!

We’re starting small, but thinking big. We’re looking forward to great and innovative projects in the world of cloud, mobile, IoT and everything Microsoft.

I can’t wait to see what the future holds. Here’s to new adventures!


Automating end-to-end NServiceBus tests with NServiceBus.AcceptanceTesting


Photo Credit: LoveInTheWinter via Compfight cc


Most of you will agree that automating software tests is a good idea. Writing unit tests is almost a no brainer nowadays, and I’m a big fan of Behavior Driven Development and the use of Cucumber to bring together system analysts, programmers and testers more closely. The closer your tests and documentation are to the actual software, the better, IMO.

Repeatable and automated functional tests are paramount to guarantee the quality of a constantly evolving software system. Especially when things become more and more complex, like in distributed systems. As you may know I’m a fan of NServiceBus, and testing our NServiceBus message based systems end-to-end has always been a bit cumbersome. The fine folks at Particular Software – creators of NServiceBus – have found a nice way to do their own integration and acceptance tests, and you can use that too!

The framework that supports this is somewhat of a hidden gem in the NServiceBus stack, and I know that the Particular team is still refining the ideas. Nonetheless, you can use it yourself. It’s called NServiceBus.AcceptanceTesting. Unfortunately it’s somewhat undocumented so it’s not easily discovered and not very easy to get started with. You’ll need to dive into the acceptance tests in the NServiceBus source code to find out how it works. This can be a little bit hairy because there’s a lot going on in these tests to validate all the different transports, persistence, behavior pipeline and messaging scenarios that NServiceBus supports. This means that there is a lot of infrastructure code in the NServiceBus acceptance test suite as well to facilitate all the different scenarios. How to distinguish between what’s in the AcceptanceTesting framework and what’s not?

As a sample, I created a simpler scenario with two services and added a couple of acceptance tests to offer a stripped down application of the AcceptanceTesting framework. You can find the full solution on GitHub, but I’ll give a step by step breakdown below.

The scenario
The sample scenario consists of two services: Sales and Shipping. When the Sales service receives a RegisterOrder command – say from a web front end – it does some business logic (e.g. validate if the amount <= 500) and decides whether the order is accepted or refused. Sales will publish an event accordingly: either OrderAccepted or OrderRefused. The Shipping service subscribes to the OrderAccepted event. It will ship the order as soon as it is accepted and publish an OrderShipped event. Like so:


I’m sure it won’t bring me the Nobel prize for software architecture, but that’s not the point. From a testing perspective, we’d like to know if a valid order actually gets shipped, and if an invalid order is refused (and not shipped).
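The message contracts and the Sales handler for this scenario are straightforward; here is a minimal sketch (property names taken from the test shown later, the sample on GitHub may differ slightly):

```csharp
using NServiceBus;

public class RegisterOrder : ICommand
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public int Amount { get; set; }
}

public class OrderAccepted : IEvent
{
    public int OrderId { get; set; }
}

public class OrderRefused : IEvent
{
    public int OrderId { get; set; }
}

// The Sales handler applies the business rule and publishes the outcome.
public class RegisterOrderHandler : IHandleMessages<RegisterOrder>
{
    public IBus Bus { get; set; } // injected by NServiceBus

    public void Handle(RegisterOrder message)
    {
        if (message.Amount <= 500)
            Bus.Publish(new OrderAccepted { OrderId = message.OrderId });
        else
            Bus.Publish(new OrderRefused { OrderId = message.OrderId });
    }
}
```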

Project setup
Once you have your solution set up with a Messages library and the implementation projects for your message handlers, add a test project for your acceptance tests. You can use your favourite unit test framework; I chose MSTest in my sample.

Next, in your test project, add a reference to the NServiceBus.AcceptanceTesting package via the Package Manager Console:

Install-Package NServiceBus.AcceptanceTesting

This will pull down the necessary dependencies for you to start writing acceptance tests.

Writing a test
Let’s have a look at one of the tests I have implemented in my sample:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => b
            // The SubscriptionBehavior will monitor for incoming subscription messages.
            // Here we want to track if Shipping is subscribing to the OrderAccepted event.
            .Given((bus, context) =>
                SubscriptionBehavior.OnEndpointSubscribed(s =>
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                        context.ShippingIsSubscribed = true;
                }))
            // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
            // fire off the test by sending a RegisterOrder command to the Sales endpoint.
            .When(c => c.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
            {
                m.Amount = 500;
                m.CustomerName = "John";
                m.OrderId = 1;
            })))
        // No special actions for this endpoint, it just has to do its work.
        .WithEndpoint<Shipping>()
        // The test succeeds when the order is accepted by the Sales endpoint,
        // and subsequently the order is shipped by the Shipping endpoint.
        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)
        .Run();
}

Whoa, that’s a lot of fluent API shizzle! That’s just one statement with a bunch of lambda’s, mind you. Let’s break it down to see what we have here…

The AcceptanceTesting harness runs a scenario, as denoted by the Scenario class. The basic skeleton looks like this:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(/* ... */)
        .WithEndpoint<Shipping>()
        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)
        .Run();
}


A scenario is defined using the Define method, which receives an instance of a class named Context. Next, the WithEndpoint() generic methods help us set up the different endpoints that participate in the current test scenario. In this case: Sales and Shipping. We’ll have a look at the types used here later.

Before the scenario is kicked off with the Run() method, we define a condition that indicates when the test has succeeded and pass that to the Done() method.

The expression looks like this:

context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused

We’re evaluating a bunch of properties on an object named context. This is actually the instance of the Context class we saw being passed to the Scenario.Define() method. The context class looks like this:

class Context : ScenarioContext
{
  public bool OrderIsAccepted { get; set; }
  public bool OrderIsRefused { get; set; }
  public bool OrderIsShipped { get; set; }
  public bool ShippingIsSubscribed { get; set; }
}

It inherits from ScenarioContext, a base class in the NServiceBus.AcceptanceTesting framework, and it’s just a bunch of properties that get passed around throughout our test scenarios to keep track of the progress. The trick is to set these properties at specific moments as your test runs and as soon as the conditions are met, the test is considered a success.

In the example above, we expect that the order is accepted and shipped, and we also double check that it wasn’t refused. We can assert this by tracking the events being published.

The next piece of the puzzle is the definition of the endpoints that participate in the test:

.WithEndpoint<Sales>(b => /* ... */)


The type parameter in this case is a class called Sales. This class represents the Sales endpoint, but is actually defined in the test code. This is what it looks like:

public class Sales : EndpointConfigurationBuilder
{
  public Sales()
  {
    // Makes sure that the RegisterOrder command is mapped to the Sales endpoint
    EndpointSetup<DefaultServer>()
      .AddMapping<RegisterOrder>(typeof(Sales));
  }

  class SalesInspector : IMutateOutgoingMessages, INeedInitialization
  {
    // Will be injected via DI
    public Context TestContext { get; set; }

    public object MutateOutgoing(object message)
    {
      if (message is OrderAccepted)
        TestContext.OrderIsAccepted = true;

      if (message is OrderRefused)
        TestContext.OrderIsRefused = true;

      return message;
    }

    public void Customize(BusConfiguration configuration)
    {
      configuration.RegisterComponents(c => c.ConfigureComponent<SalesInspector>(DependencyLifecycle.InstancePerCall));
    }
  }
}
The Sales class derives from EndpointConfigurationBuilder, and is our bootstrap for this particular endpoint. The class itself doesn’t do much, except bootstrapping the endpoint by specifying an endpoint setup template – a class named DefaultServer – and making sure that the RegisterOrder message is mapped to its endpoint.

We also see a nested class called SalesInspector, which is an NServiceBus MessageMutator. We are using the extensibility of NServiceBus to plug in hooks that help us track the progress of the test. In this case, the mutator listens for outgoing messages – which would be OrderAccepted or OrderRefused for the Sales endpoint – and sets the flags on the scenario context accordingly.

This is all wired up through the magic of type scanning and the use of the INeedInitialization interface. This happens through the endpoint setup template class: DefaultServer. I actually borrowed most of this code from the original NServiceBus code base, but stripped it down to just use the default stuff:

/// <summary>
/// Serves as a template for the NServiceBus configuration of an endpoint.
/// You can do all sorts of fancy stuff here, such as support multiple transports, etc.
/// Here, I stripped it down to support just the defaults (MSMQ transport).
/// </summary>
public class DefaultServer : IEndpointSetupTemplate
{
  public BusConfiguration GetConfiguration(RunDescriptor runDescriptor,
                                EndpointConfiguration endpointConfiguration,
                                IConfigurationSource configSource,
                                Action<BusConfiguration> configurationBuilderCustomization)
  {
    var settings = runDescriptor.Settings;

    var types = GetTypesToUse(endpointConfiguration);

    var config = new BusConfiguration();
    config.TypesToScan(types);

    // Plug in a behavior that listens for subscription messages
    config.Pipeline.Register<SubscriptionBehavior.Registration>();
    config.RegisterComponents(c => c.ConfigureComponent<SubscriptionBehavior>(DependencyLifecycle.InstancePerCall));

    // Important: you need to make sure that the correct ScenarioContext class is available to your endpoints and tests
    config.RegisterComponents(r =>
    {
      r.RegisterSingleton(runDescriptor.ScenarioContext.GetType(), runDescriptor.ScenarioContext);
      r.RegisterSingleton(typeof(ScenarioContext), runDescriptor.ScenarioContext);
    });

    // Call extra custom action if provided
    if (configurationBuilderCustomization != null)
      configurationBuilderCustomization(config);

    return config;
  }

  static IEnumerable<Type> GetTypesToUse(EndpointConfiguration endpointConfiguration)
  {
    // Implementation details can be found on GitHub
  }
}

Most of this code will look familiar: it uses the BusConfiguration options to define the endpoint. In this case, the type scanner will look through all referenced assemblies to find handlers and other NServiceBus stuff that may participate in the tests.

Most notable is the use of the SubscriptionBehavior class, which is plugged into the NServiceBus pipeline that comes with NServiceBus 5.0 – watch the NServiceBus Lego Style talk by John and Indu at NSBCon London for more info. This behavior simply listens for subscription messages from endpoints and raises events that you can hook into. This is necessary for our tests to run successfully because the test can only start once all endpoints are running and subscribed to the correct events. The behavior class is not part of the NServiceBus.AcceptanceTesting framework though. IMO, it would be handy if Particular moved this one to the AcceptanceTesting framework as I think you’ll be needing this one a lot. Again, I borrowed the implementation from the NServiceBus code base:

class SubscriptionBehavior : IBehavior<IncomingContext>
{
  public void Invoke(IncomingContext context, Action next)
  {
    next();

    var subscriptionMessageType = GetSubscriptionMessageTypeFrom(context.PhysicalMessage);
    if (EndpointSubscribed != null && subscriptionMessageType != null)
    {
      EndpointSubscribed(new SubscriptionEventArgs
      {
        MessageType = subscriptionMessageType,
        SubscriberReturnAddress = context.PhysicalMessage.ReplyToAddress
      });
    }
  }

  static string GetSubscriptionMessageTypeFrom(TransportMessage msg)
  {
    return (from header in msg.Headers where header.Key == Headers.SubscriptionMessageType select header.Value).FirstOrDefault();
  }

  public static Action<SubscriptionEventArgs> EndpointSubscribed;

  public static void OnEndpointSubscribed(Action<SubscriptionEventArgs> action)
  {
    EndpointSubscribed = action;
  }

  internal class Registration : RegisterStep
  {
    public Registration()
      : base("SubscriptionBehavior", typeof(SubscriptionBehavior), "So we can get subscription events")
    {
    }
  }
}

Okay, almost done. We have our endpoint templates set up, message mutators listening to the relevant outgoing messages and SubscriptionBehavior to make sure the test is ready to run. Let’s get back to the part that actually makes the whole scenario go:

Scenario.Define(() => new Context { })
    .WithEndpoint<Sales>(b => b
        // The SubscriptionBehavior will monitor for incoming subscription messages.
        // Here we want to track if Shipping is subscribing to the OrderAccepted event.
        .Given((bus, context) =>
            SubscriptionBehavior.OnEndpointSubscribed(s =>
            {
                if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    context.ShippingIsSubscribed = true;
            }))
        // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
        // fire off the test by sending a RegisterOrder command to the Sales endpoint.
        .When(context => context.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
        {
            m.Amount = 500;
            m.CustomerName = "John";
            m.OrderId = 1;
        })))

For the Sales endpoint, we specified a whole bunch of extra stuff. First, there’s the event handler for the SubscriptionBehavior.OnEndpointSubscribed event. Here, the Sales endpoint basically waits for the Shipping endpoint to subscribe to the events. The context is available here as well, part of the lambda that’s passed to the Given() method, so we can flag the subscription by setting a boolean.

The final piece is the guard passed to the When() method. This is monitored by the AcceptanceTesting framework as the test runs and as soon as the specified condition is met, we can use the bus instance available there to send a message to the Sales endpoint: the RegisterOrder command will trigger the whole process we’re testing here. We’re sending an order of $500, which we expect to be accepted and shipped. There’s a test that checks the refusal of an order > 500 in the sample as well.
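That complementary refusal test follows the same pattern; here is a sketch of what it could look like (the amount and order id are illustrative, the actual test is in the sample on GitHub):

```csharp
[TestMethod]
public void Order_over_500_should_be_refused()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => b
            .Given((bus, context) =>
                SubscriptionBehavior.OnEndpointSubscribed(s =>
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                        context.ShippingIsSubscribed = true;
                }))
            .When(c => c.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
            {
                m.Amount = 750; // over the limit, so Sales should refuse it
                m.CustomerName = "John";
                m.OrderId = 2;
            })))
        .WithEndpoint<Shipping>()
        // Success means the order was refused and nothing was accepted or shipped.
        .Done(context => context.OrderIsRefused && !context.OrderIsAccepted && !context.OrderIsShipped)
        .Run();
}
```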

Some tips
For your end-to-end tests, you will be pulling together DLLs from all of your endpoints and with all of your message definitions. So it makes sense to set up a separate solution or project structure for these tests instead of adding them to an existing solution.

If your handlers are in the same DLL as your EndpointConfig class, the assembly scanner will run into trouble, because it will find multiple classes that implement IConfigureThisEndpoint. While you can intervene in how the assembly scanner does its work (e.g. manually filtering out specific DLLs per endpoint definition), it might be better to keep your handlers in separate assemblies to make acceptance testing easier.
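If you do need to filter, narrowing the scanner inside your endpoint setup template is one option; a minimal sketch, where "Sales.Host.dll" is a hypothetical assembly name:

```csharp
using NServiceBus;

public class ScanningExample
{
    public BusConfiguration Configure()
    {
        var config = new BusConfiguration();
        // Exclude the hosting assembly that contains IConfigureThisEndpoint,
        // so the scanner doesn't trip over multiple endpoint definitions.
        config.AssembliesToScan(AllAssemblies.Except("Sales.Host.dll"));
        return config;
    }
}
```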

As you see, you need to add some infrastructural stuff to your tests, such as the EndpointConfigurationBuilder classes and the IEndpointSetupTemplate class for everything to work properly. You can implement this infrastructure stuff per test or per test suite, but you might want to consider creating some more generic implementations that you can reuse across different test suites. IMO the DefaultServer implementation from the NServiceBus tests is also a nice candidate for becoming a part of the NServiceBus.AcceptanceTesting package to simplify your test code as this is already a very flexible implementation.


As you see, the NServiceBus.AcceptanceTesting framework takes some time to get to know properly. There’s more advanced stuff in there which I didn’t cover in this introduction. Now if you dive into the scenarios implemented by Particular themselves, you’ll find inspiration for testing other bits of your system.

I like the model a lot, and I think this can save a lot of time retesting many end-to-end scenarios.