Cross platform mobile & enterprise mobility from the trenches

Introduction
In this post, I will share some lessons learned from projects in which I created mobile apps with cross platform technologies and managed them with an Enterprise Mobility Management platform. Most of my experience is with Xamarin in combination with AirWatch EMM, but the lessons apply to other technologies as well, since they stem from the way EMM systems and cross platform tools generally operate. If you’re using PhoneGap, Titanium, Citrix XenMobile, Good Technology, etc., you may have to deal with the same gotchas.

DISCLAIMER: innovation in the EMM and cross platform mobile development space is progressing swiftly, so the information in this article may quickly become outdated. I’ll try to revise this post every once in a while.

EMM, BYOD, MDM, MAM?

Photo Credit: Roberto Trm via Compfight cc

EMM, or Enterprise Mobility Management, is a broad topic, ranging from making corporate email available on mobile devices to corporate app stores to full-on mobile device management. In projects I’m involved with, EMM usually means that we are dealing with a controlled environment in which the apps that we build are made available and secured through the EMM platform.

Most established EMM vendors offer products that support MDM – Mobile Device Management. Traditionally, corporate owned devices are put under MDM, through which the IT department can enforce strict policies and fully control the device, for example by wiping or locking it from a distance once it is compromised. “All your device are belong to us!”

With the rise of the BYOD trend – Bring Your Own Device – many EMM vendors are moving away from pure MDM and are also offering a MAM model – Mobile Application Management. The main distinction is that in the MAM mindset, the device is considered property of the employee, and it is the apps – and more specifically the data in those apps – that are controlled by the company. The device then contains both private and corporate assets, something we call a Dual Persona situation. A full device wipe is therefore not possible; instead, an Enterprise Wipe will remove only the corporate data, apps and policies, and leave the private data intact.

One important thing to note is that these MAM products are heavily under development and not always as stable or feature rich as the MDM tools. There are also a couple of vendors popping up in the market that focus solely on MAM and leave the MDM features behind. Depending on the security requirements within an organization, MAM may not yet be suitable for the situation at hand.

Features and stability aside, the level of support for the different mobile OS-es also varies heavily. Some mobile OS-es are more mature than others in their support for enterprise features, which can either enable or limit an EMM vendor. iOS, for example, has very rich APIs for enterprise management, Android is catching up, and Windows Phone has been adding some of these features since version 8.1. Combined with the slow market adoption of Windows Phone, there aren’t many vendors with rich Windows Phone support yet.

Tip: be sure to assess the stability of the MAM product, the features you need and the support for your target OS-es before deciding on a product.

EMM and Cross Platform Mobile Development
Most EMM platforms offer two ways of securing apps: through app wrapping, or through an SDK used inside the app. These technologies – especially app wrapping – make it very easy for a company to apply and enforce security policies without the developer having to worry about these measures in the app’s code. This way, security measures are standardized and can be applied transparently to the developer. Moreover, app wrapping lowers vendor lock-in, which makes it easier to replace an EMM platform if necessary.

Many enterprise architects are fond of thinking in these generic terms and like to decree things like “security must be applied in a standard and transparent way”, or “thou shalt prevent vendor lock-in”. The features on the EMM vendor’s marketing slides are a perfect fit with these 10,000ft statements, so in theory this sounds great, right?

Photo Credit: ores2k via Compfight cc

Companies that build corporate apps and want to support a BYOD strategy have to deal with a wide diversity of device types and mobile OS-es. This means that building apps can become a hassle, unless you use a cross platform mobile development tool or platform, such as HTML5, PhoneGap, Appcelerator or Xamarin. At Info Support, Xamarin is the primary tool of choice, so in most of our projects we will be working with Xamarin. These cross platform mobile development platforms are also heavily under development and are rapidly improving.

So on one hand, we see the need for transparent and standardized security through app wrapping, and on the other hand we see a desire to build apps in a cost efficient way using cross platform technologies.

So here’s the catch… Combining these two goals can be challenging. For tight integration on the device and secure wrapping of apps, EMM vendors rely heavily on native features of the underlying OS. This means that wrapping and the SDK will primarily work well for natively built apps.

Most EMM vendors will therefore not guarantee their product to work with a cross platform app.

Furthermore, EMM and cross platform tools are developing at a rapid but independent pace, which means that the two worlds have not yet come together in such a way that integration is guaranteed to work.

Tip: when evaluating EMM and cross platform tools, look for a combination that works for your most important scenarios. Do a proof of concept, and dig deep to see if it really works. Enterprisey 10,000ft principles won’t work here (if they ever do).

Let’s look at some examples…

Data-in-transit security
Data-in-transit is data that is being sent over the network between the device and a second party, usually a server side API.

EMM platforms typically secure data-in-transit by intercepting network traffic (http/https) and adding extra encryption or routing it through a secure tunnel. This means that the wrapper must be able to intercept these network calls.

The way this works is as depicted in the following figure:

Container-vs-CrossPlatform

So if you’re not careful, and don’t have a good understanding of how your cross platform framework works, network calls may go unnoticed and bypass the security layer of the EMM container. Whoops.

Photo Credit: striatic via Compfight cc

With most cross platform tools, network calls won’t be intercepted automatically, unless you have a way to redirect this traffic through the original API layer. In Xamarin, one way to do this is to use the ModernHttpClient library, which is available as a component in the Xamarin Component Store.
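
A minimal sketch of what that looks like in app code; NativeMessageHandler is ModernHttpClient’s drop-in handler that routes HttpClient traffic through the platform’s native HTTP stack (the URL is of course hypothetical):

using System.Net.Http;
using ModernHttpClient;

// With NativeMessageHandler, requests go through the native networking APIs
// (NSURLSession on iOS, OkHttp on Android) instead of the Mono network stack,
// which is the layer an EMM wrapper is able to intercept.
var client = new HttpClient(new NativeMessageHandler());
var response = await client.GetAsync("https://api.example.com/orders"); // inside an async method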

Container-with-Xamarin

Win!

Yes!

Data-at-rest security
Data-at-rest (DAR) is data that resides on the device, either in memory or persisted on the local storage as a file or database. In some cases, security guidelines may require this data to be encrypted.

Most EMM vendors also promise to support automatic data-at-rest encryption. Here too, carefully read between the lines to discover what level of support you will get per mobile OS.

Most of the time, DAR encryption is added to the app during the wrapping process. For Android, this means that the actual Java code inside the APK is altered. For example, whenever there is a call to File I/O or the SQLite database, this code would be replaced by calls to IOCipher and SQLCipher respectively. This means that local storage in wrapped apps will be automatically encrypted.

In a cross platform app however, the app code will usually reside in some shared layer/language (a Mono DLL in the case of Xamarin.Android, for example) inside the APK. A wrapping engine looking for Java calls to File I/O or SQLite will therefore not detect them in the app. Unless the wrapping engine is aware of the cross platform tool being used, an app may go into production without the DAR-encryption policy applied.

This means that EMM security cannot be applied “transparently” to the app developer, since there has to be a mutual understanding of the tools that are being used, and the level of integration between the two. In one of my projects, we had to do our own DAR-encryption inside the app.

A SQLCipher component is available for Xamarin; you can see a presentation on this component on the Evolve 2013 site. There are, however, no C# bindings for IOCipher available. It may be wise to resort to the open source Conceal framework from Facebook.
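
To give an idea of the database side, the flow with a SQLCipher-enabled sqlite-net looks roughly like this. This is a sketch under some assumptions: dbPath and passphrase come from your own code, and the key is applied through SQLCipher’s PRAGMA mechanism – check the component documentation for the exact API:

using SQLite; // the SQLCipher-enabled sqlite-net from the Xamarin component

var conn = new SQLiteConnection(dbPath);
// SQLCipher derives the encryption key from a passphrase, which must be set
// before anything else touches the database. Fetch it from secure storage
// (e.g. the iOS Keychain or the Android KeyStore) rather than hardcoding it.
conn.Execute("PRAGMA key = '" + passphrase + "';");
// From here on, everything written to this database is encrypted at rest.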

For iOS, wrapping engines mostly don’t support the same type of automatic DAR-encryption, due to limitations of the OS itself. This means that an app developer has to deal with DAR-encryption himself anyway.

An excerpt from the documentation of an EMM vendor on the topic:

iOS supports Data Protection in iOS 6 but requires the application developer to explicitely implement it in the App; there is no way to force data protection from the wrapping engine.
iOS 7 offers Data Protection for all apps as long as the user has set a passcode AND only applies between the time the device has been rebooted and unlocked for the first time.

You can read more in the Apple SDK docs.

Beware that in order to use the Data Protection APIs in iOS, the user must have a PIN code enabled on the device. In a BYOD situation, you cannot always rely on this being the case.
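
In Xamarin.iOS, opting a file into Data Protection can be as simple as passing the right write option. A minimal sketch (file name and contents are hypothetical):

using Foundation; // MonoTouch.Foundation in the Classic API

NSError error;
var data = NSData.FromString("sensitive order data");
// FileProtectionComplete: the file is encrypted on disk and inaccessible
// while the device is locked; this only has an effect if a passcode is set.
var saved = data.Save("orders.json", NSDataWritingOptions.FileProtectionComplete, out error);
if (!saved)
{
    Console.WriteLine("Could not save protected file: " + error);
}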

Conclusion
Both EMM tools and cross platform development tools are still evolving rapidly. Given the diversity of both product categories, it will be hard to find two tools that integrate perfectly. This means that you have to be aware of some pitfalls and limitations when combining the two. I would always advise my customers to do a deep technical validation of any proposed solution that is based on an “out of the box” feature of the EMM platform.

I’d be interested in your experiences!

Xpirit: Off to new adventures

Today marks an important day for me personally… After 15 years, I left Info Support last Friday to pursue a new adventure in my career. Today, I joined a great team, starting up a brand new company named Xpirit.

Looking back
Info Support has been a part of my life for 15 years. I did my graduation project at Info Support in 1999, and after getting my Bachelor’s degree, I signed up to work for them as a software developer and consultant. Flash forward to 2014 and I find myself working as a software architect, heading up the Enterprise Mobile competence center, speaking at international software conferences and blogging, tweeting and sharing knowledge with awesome peers from all over the world. I learned so much in my time at Info Support, and I owe a lot to the great bunch of people there. In return, I gave my best over those 15 years and helped Info Support reach the top of Dutch IT. Now the time has come for me to take on this new opportunity.

Xpirit: Think ahead. Act now.
Xpirit is a new consulting company, focusing entirely on the Microsoft ecosystem. With a fantastic new team, consisting of Microsoft MVPs, Regional Directors and community leaders, we will offer high-end consulting services for (enterprise) companies looking to implement or integrate systems using the Microsoft stack. This doesn’t mean that we will focus solely on MS products though. The Microsoft ecosystem is extremely rich with open source frameworks and 3rd party solutions.

Our technical team consists of Marcel de Vries (CTO), Alex Thissen, Marcel Meijer, Patriek van Dorp (soon) and myself and is powered by our Sales and Managing Director Pascal Greuter. This sounds like a dream team to me, and I’m extremely excited to be a part of it. Each brings their own strengths and personality to the table, which makes for a great environment to work in.

dreamteam cap

For me personally this means that I will continue to focus on cloud and mobile architecture and development in the enterprise. Of course, Xamarin will continue to be a big part of my strategy in this. Microsoft makes fantastic technology to build services and back end architectures, but let’s face it, their end user facing technologies are struggling to keep up with Google’s and Apple’s. It’s a diverse world, in which iOS and Android co-exist with – even dominate – Windows, even more so in the mobile area. Xamarin enables us to leverage the highly productive language and frameworks from the Microsoft stack directly on iOS and Android. A perfect fit. Marcel and I will continue to be active in the Xamarin community, and run the Dutch Mobile .NET Developers user group together with the fine people from Macaw and Info Support.

Many of you also know that I enjoy working on distributed systems, SOA and event driven architectures. Azure is a great platform for this. One framework that I’ve been specialising in for the last couple of years is NServiceBus. I’m committed to continuing my activities in the NServiceBus community, and would love to be your go-to guy for distributed systems design with NServiceBus.

Xpirit is a Xebia company, which means that we’re fortunate to inherit the same company strengths and values that make Xebia a great company. If you’re interested, go check out Good To Great by Jim Collins, Winning by Jack Welch, and Eckart’s Notes by Eckart Wintzen to understand the foundations we’re going to build on. Inspiring stuff, that’s for sure!

We’re starting small, but thinking big. We’re looking forward to great and innovative projects in the world of cloud, mobile, IoT and everything Microsoft.

I can’t wait to see what the future holds. Here’s to new adventures!

the-hobbit-adventure

Automating end-to-end NServiceBus tests with NServiceBus.AcceptanceTesting

Photo Credit: LoveInTheWinter via Compfight cc

Most of you will agree that automating software tests is a good idea. Writing unit tests is almost a no-brainer nowadays, and I’m a big fan of Behavior Driven Development and the use of Cucumber to bring system analysts, programmers and testers closer together. The closer your tests and documentation are to the actual software, the better, IMO.

Repeatable and automated functional tests are paramount to guarantee the quality of a constantly evolving software system. Especially when things become more and more complex, like in distributed systems. As you may know I’m a fan of NServiceBus, and testing our NServiceBus message based systems end-to-end has always been a bit cumbersome. The fine folks at Particular Software – creators of NServiceBus – have found a nice way to do their own integration and acceptance tests, and you can use that too!

The framework that supports this is somewhat of a hidden gem in the NServiceBus stack, and I know that the Particular team is still refining the ideas. Nonetheless, you can use it yourself. It’s called NServiceBus.AcceptanceTesting. Unfortunately it’s somewhat undocumented so it’s not easily discovered and not very easy to get started with. You’ll need to dive into the acceptance tests in the NServiceBus source code to find out how it works. This can be a little bit hairy because there’s a lot going on in these tests to validate all the different transports, persistence, behavior pipeline and messaging scenarios that NServiceBus supports. This means that there is a lot of infrastructure code in the NServiceBus acceptance test suite as well to facilitate all the different scenarios. How to distinguish between what’s in the AcceptanceTesting framework and what’s not?

As a sample, I created a simpler scenario with two services and added a couple of acceptance tests to offer a stripped down application of the AcceptanceTesting framework. You can find the full solution on GitHub, but I’ll give a step by step breakdown below.

The scenario
The sample scenario consists of two services: Sales and Shipping. When the Sales service receives a RegisterOrder command – say from a web front end – it does some business logic (e.g. validate if the amount <= 500) and decides whether the order is accepted or refused. Sales will publish an event accordingly: either OrderAccepted or OrderRefused. The Shipping service subscribes to the OrderAccepted event. It will ship the order as soon as it is accepted and publish an OrderShipped event. Like so:

NServiceBusAcceptanceTestScenario

I’m sure it won’t bring me the Nobel prize for software architecture, but that’s not the point. From a testing perspective, we’d like to know if a valid order actually gets shipped, and if an invalid order is refused (and not shipped).
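
For reference, here’s roughly what the message contracts could look like: plain classes implementing the NServiceBus marker interfaces. The property names follow the test code shown later; the actual definitions are in the GitHub sample.

using NServiceBus;

// Command sent to the Sales endpoint, e.g. by a web front end
public class RegisterOrder : ICommand
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Amount { get; set; }
}

// Events published by Sales
public class OrderAccepted : IEvent { public int OrderId { get; set; } }
public class OrderRefused : IEvent { public int OrderId { get; set; } }

// Event published by Shipping
public class OrderShipped : IEvent { public int OrderId { get; set; } }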

Project setup
Once you have your solution set up with a Messages library and the implementation projects for your message handlers, add a test project for your acceptance tests. You can use your favourite unit test framework; I chose MSTest in my sample.

Next, in your test project, add a reference to the NServiceBus.AcceptanceTesting package via the Package Manager Console:

Install-Package NServiceBus.AcceptanceTesting

This will pull down the necessary dependencies for you to start writing acceptance tests.

Writing a test
Let’s have a look at one of the tests I have implemented in my sample:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => 
            b.Given((bus, context) =>
                // The SubscriptionBehavior will monitor for incoming subscription messages
                // Here we want to track if Shipping is subscribing to the OrderAccepted event
                SubscriptionBehavior.OnEndpointSubscribed(s => 
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    {
                        context.ShippingIsSubscribed = true;
                    }
                }))
                // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
                // fire off the test by sending a RegisterOrder command to the Sales endpoint
            .When(c => c.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
                {
                    m.Amount = 500;
                    m.CustomerName = "John";
                    m.OrderId = 1;
                }))
         )
        // No special actions for this endpoint, it just has to do its work
        .WithEndpoint<Shipping>() 
        // The test succeeds when the order is accepted by the Sales endpoint,
        // and subsequently the order is shipped by the Shipping endpoint
        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)
        .Run();
}

Whoa, that’s a lot of fluent API shizzle! That’s just one statement with a bunch of lambdas, mind you. Let’s break it down to see what we have here…

The AcceptanceTesting harness runs a scenario, as denoted by the Scenario class. The basic skeleton looks like this:

[TestMethod]
public void Order_of_500_should_be_accepted_and_shipped()
{
    Scenario.Define(() => new Context { })

        .WithEndpoint<Sales>()

        .WithEndpoint<Shipping>() 

        .Done(context => context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused)

        .Run();
}

A scenario is defined using the Define method, which receives an instance of a class named Context. Next, the WithEndpoint() generic methods help us set up the different endpoints that participate in the current test scenario. In this case: Sales and Shipping. We’ll have a look at the types used here later.

Before the scenario is kicked off with the Run() method, we define a condition that indicates when the test has succeeded and pass that to the Done() method.

The expression looks like this:

context.OrderIsAccepted && context.OrderIsShipped && !context.OrderIsRefused

We’re evaluating a bunch of properties on an object named context. This is actually the instance of the Context class we saw being passed to the Scenario.Define() method. The context class looks like this:

class Context : ScenarioContext
{
  public bool OrderIsAccepted { get; set; }
  public bool OrderIsRefused { get; set; }
  public bool OrderIsShipped { get; set; }
  public bool ShippingIsSubscribed { get; set; }
}

It inherits from ScenarioContext, a base class in the NServiceBus.AcceptanceTesting framework, and it’s just a bunch of properties that get passed around throughout our test scenarios to keep track of the progress. The trick is to set these properties at specific moments as your test runs and as soon as the conditions are met, the test is considered a success.

In the example above, we expect that the order is accepted and shipped, and we also double check that it wasn’t refused. We can assert this by tracking the events being published.
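
To make that concrete: the Sales endpoint contains an ordinary NServiceBus message handler that publishes these events. A rough sketch of what the handler under test might look like (the real implementation is in the GitHub sample):

using NServiceBus;

public class RegisterOrderHandler : IHandleMessages<RegisterOrder>
{
    public IBus Bus { get; set; } // injected by the container

    public void Handle(RegisterOrder message)
    {
        // The business rule from the scenario: orders up to 500 are accepted
        if (message.Amount <= 500)
        {
            Bus.Publish<OrderAccepted>(e => e.OrderId = message.OrderId);
        }
        else
        {
            Bus.Publish<OrderRefused>(e => e.OrderId = message.OrderId);
        }
    }
}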

The next piece of the puzzle is the definition of the endpoints that participate in the test:

.WithEndpoint<Sales>()

The type parameter in this case is a class called Sales. This class represents the Sales endpoint, but is actually defined in the test code. This is what it looks like:

public class Sales : EndpointConfigurationBuilder
{
  public Sales()
  {
    EndpointSetup<DefaultServer>()
    // Makes sure that the RegisterOrder command is mapped to the Sales endpoint
      .AddMapping<RegisterOrder>(typeof(Sales));
  }

  class SalesInspector : IMutateOutgoingMessages, INeedInitialization
  {
    // Will be injected via DI
    public Context TestContext { get; set; }

    public object MutateOutgoing(object message)
    {
      if (message is OrderAccepted)
      {
        TestContext.OrderIsAccepted = true;
      }

      if (message is OrderRefused)
      {
        TestContext.OrderIsRefused = true;
      }

      return message;
    }

    public void Customize(BusConfiguration configuration)
    {
       configuration.RegisterComponents(c => c.ConfigureComponent<SalesInspector>(DependencyLifecycle.InstancePerCall));
    }
  }
}

The Sales class derives from EndpointConfigurationBuilder, and is our bootstrap for this particular endpoint. The class itself doesn’t do much, except bootstrapping the endpoint by specifying an endpoint setup template – a class named DefaultServer – and making sure that the RegisterOrder message is mapped to its endpoint.

We also see a nested class called SalesInspector, which is an NServiceBus MessageMutator. We are using the extensibility of NServiceBus to plug in hooks that help us track the progress of the test. In this case, the mutator listens for outgoing messages – which would be OrderAccepted or OrderRefused for the Sales endpoint – and sets the flags on the scenario context accordingly.

This is all wired up through the magic of type scanning and the use of the INeedInitialization interface. This happens through the endpoint setup template class: DefaultServer. I actually borrowed most of this code from the original NServiceBus code base, but stripped it down to just use the default stuff:

/// <summary>
/// Serves as a template for the NServiceBus configuration of an endpoint.
/// You can do all sorts of fancy stuff here, such as support multiple transports, etc.
/// Here, I stripped it down to support just the defaults (MSMQ transport).
/// </summary>
public class DefaultServer : IEndpointSetupTemplate
{
  public BusConfiguration GetConfiguration(RunDescriptor runDescriptor, 
                                EndpointConfiguration endpointConfiguration,
                                IConfigurationSource configSource, 
                                Action<BusConfiguration> configurationBuilderCustomization)
  {
    var settings = runDescriptor.Settings;

    var types = GetTypesToUse(endpointConfiguration);

    var config = new BusConfiguration();
    config.EndpointName(endpointConfiguration.EndpointName);
    config.TypesToScan(types);
    config.CustomConfigurationSource(configSource);
    config.UsePersistence<InMemoryPersistence>();
    config.PurgeOnStartup(true);

    // Plugin a behavior that listens for subscription messages
    config.Pipeline.Register<SubscriptionBehavior.Registration>();
    config.RegisterComponents(c => c.ConfigureComponent<SubscriptionBehavior>(DependencyLifecycle.InstancePerCall));

    // Important: you need to make sure that the correct ScenarioContext class is available to your endpoints and tests
    config.RegisterComponents(r =>
    {
      r.RegisterSingleton(runDescriptor.ScenarioContext.GetType(), runDescriptor.ScenarioContext);
      r.RegisterSingleton(typeof(ScenarioContext), runDescriptor.ScenarioContext);
    });

    // Call extra custom action if provided
    if (configurationBuilderCustomization != null)
    {
      configurationBuilderCustomization(config);
    }

    return config;
  }

  static IEnumerable<Type> GetTypesToUse(EndpointConfiguration endpointConfiguration)
  {
    // Implementation details can be found on GitHub
  }
}

Most of this code will look familiar: it uses the BusConfiguration options to define the endpoint. In this case, the type scanner will look through all referenced assemblies to find handlers and other NServiceBus stuff that may participate in the tests.

Most notable is the use of the SubscriptionBehavior class, which is plugged into the NServiceBus pipeline that comes with NServiceBus 5.0 – watch the NServiceBus Lego Style talk by John and Indu at NSBCon London for more info. This behavior simply listens for subscription messages from endpoints and raises events that you can hook into. This is necessary for our tests to run successfully because the test can only start once all endpoints are running and subscribed to the correct events. The behavior class is not part of the NServiceBus.AcceptanceTesting framework though. IMO, it would be handy if Particular moved this one to the AcceptanceTesting framework as I think you’ll be needing this one a lot. Again, I borrowed the implementation from the NServiceBus code base:

class SubscriptionBehavior : IBehavior<IncomingContext>
{
  public void Invoke(IncomingContext context, Action next)
  {
    next();
    var subscriptionMessageType = GetSubscriptionMessageTypeFrom(context.PhysicalMessage);
    if (EndpointSubscribed != null && subscriptionMessageType != null)
    {
      EndpointSubscribed(new SubscriptionEventArgs
      {
        MessageType = subscriptionMessageType,
        SubscriberReturnAddress = context.PhysicalMessage.ReplyToAddress
      });
    }
  }

  static string GetSubscriptionMessageTypeFrom(TransportMessage msg)
  {
    return (from header in msg.Headers where header.Key == Headers.SubscriptionMessageType select header.Value).FirstOrDefault();
  }

  public static Action<SubscriptionEventArgs> EndpointSubscribed;

  public static void OnEndpointSubscribed(Action<SubscriptionEventArgs> action)
  {
    EndpointSubscribed = action;
  }

  internal class Registration : RegisterStep
  {
    public Registration()
      : base("SubscriptionBehavior", typeof(SubscriptionBehavior), "So we can get subscription events")
    {
      InsertBefore(WellKnownStep.CreateChildContainer);
    }
  }
}

Okay, almost done. We have our endpoint templates set up, message mutators listening to the relevant outgoing messages and SubscriptionBehavior to make sure the test is ready to run. Let’s get back to the part that actually makes the whole scenario go:

    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b => 
            b.Given((bus, context) =>
                // The SubscriptionBehavior will monitor for incoming subscription messages
                // Here we want to track if Shipping is subscribing to the OrderAccepted event
                SubscriptionBehavior.OnEndpointSubscribed(s => 
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    {
                        context.ShippingIsSubscribed = true;
                    }
                }))
                // As soon as ShippingIsSubscribed (guarded by the first expression), we'll
                // fire off the test by sending a RegisterOrder command to the Sales endpoint
            .When(context => context.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
                {
                    m.Amount = 500;
                    m.CustomerName = "John";
                    m.OrderId = 1;
                }))
         )
   ...

For the Sales endpoint, we specified a whole bunch of extra stuff. First, there’s the event handler for the SubscriptionBehavior.OnEndpointSubscribed event. Here, the Sales endpoint basically waits for the Shipping endpoint to subscribe to the events. The context is available here as well, as part of the lambda that’s passed to the Given() method, so we can flag the subscription by setting a boolean.

The final piece is the guard passed to the When() method. This is monitored by the AcceptanceTesting framework as the test runs and as soon as the specified condition is met, we can use the bus instance available there to send a message to the Sales endpoint: the RegisterOrder command will trigger the whole process we’re testing here. We’re sending an order of $500, which we expect to be accepted and shipped. There’s a test that checks the refusal of an order > 500 in the sample as well.
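
From the structure above, that second test looks roughly like this (the amount and order id are illustrative; the actual test is in the GitHub sample):

[TestMethod]
public void Order_of_more_than_500_should_be_refused_and_not_shipped()
{
    Scenario.Define(() => new Context { })
        .WithEndpoint<Sales>(b =>
            b.Given((bus, context) =>
                SubscriptionBehavior.OnEndpointSubscribed(s =>
                {
                    if (s.SubscriberReturnAddress.Queue.Contains("Shipping"))
                    {
                        context.ShippingIsSubscribed = true;
                    }
                }))
            .When(c => c.ShippingIsSubscribed, bus => bus.Send<RegisterOrder>(m =>
                {
                    m.Amount = 750;
                    m.CustomerName = "John";
                    m.OrderId = 2;
                }))
         )
        .WithEndpoint<Shipping>()
        // This time we expect a refusal, and the order must not be shipped
        .Done(context => context.OrderIsRefused && !context.OrderIsAccepted && !context.OrderIsShipped)
        .Run();
}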

Some tips
For your end-to-end tests, you will be pulling together DLLs from all of your endpoints, with all of your message definitions. So it makes sense to set up a separate solution or project structure for these tests instead of adding them to an existing solution.

If your handlers are in the same DLL as your EndpointConfig class, the assembly scanner will run into trouble, because it will find multiple classes that implement IConfigureThisEndpoint. While you can intervene in how the assembly scanner does its work (e.g. manually filtering out specific DLLs per endpoint definition, as sketched below), it might be better to keep your handlers in separate assemblies to make acceptance testing easier.
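
If you can’t avoid it, a crude workaround is to filter the offending types in your IEndpointSetupTemplate implementation, building on the TypesToScan call shown earlier. A hypothetical sketch (requires a using System.Linq directive):

// Inside DefaultServer.GetConfiguration, replace config.TypesToScan(types) with
// a filtered list that leaves out the endpoint configuration classes:
config.TypesToScan(types.Where(t => !typeof(IConfigureThisEndpoint).IsAssignableFrom(t)));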

As you see, you need to add some infrastructural stuff to your tests, such as the EndpointConfigurationBuilder classes and the IEndpointSetupTemplate implementation, for everything to work properly. You can implement this infrastructure per test or per test suite, but you might want to consider creating some more generic implementations that you can reuse across different test suites. IMO the DefaultServer implementation from the NServiceBus tests is also a nice candidate for becoming part of the NServiceBus.AcceptanceTesting package to simplify your test code, as it is already a very flexible implementation.

keep-calm-you-passed-the-test

Conclusion
As you can see, the NServiceBus.AcceptanceTesting framework takes some time to get to know properly. There’s more advanced stuff in there that I didn’t cover in this introduction. If you dive into the scenarios implemented by Particular themselves, you’ll find inspiration for testing other bits of your system.

I like the model a lot, and I think this can save a lot of time retesting many end-to-end scenarios.

Thoughts on the Xamarin Evolve 2014 keynote

Wow, Xamarin certainly pulled off a great one in the Evolve 2014 keynote today. I had to miss the conference this year but I sat behind my MacBook watching the keynote in high anticipation regardless.

Evolve2014Keynote

The keynote was a rollercoaster ride, and the team can be very proud of what they have achieved. Xamarin is clearly on a roll! Let’s go over today’s highlights (at least, for me)…

The vibe
Evolve 2013 was a fantastic conference, mostly because of the awesome vibe. The Xamarin team was visibly proud, and the whole atmosphere during the conference made it great to be a part of. From the tweets in the #XamarinEvolve stream, it seems that they pulled it off again this year, despite the much larger scale. Kudos to Nat, Miguel and their team for that!

The great vibe was also noticeable in the keynote. It’s really great to see a CEO and CTO on stage – who are themselves just geeks like us – radiate with pride and excitement. And a little bit nervous as well :)

The Digital Business
One important thing that was recognized in the keynote is that the Digital Business is becoming a top priority for C-level executives in the enterprise. This enables interesting scenarios for mobility and cloud to change the way employees do their day to day work.

Avanade mentioned in the keynote that CxOs are focusing on reach, recognising the fragmentation in the mobile arena regarding devices, mobile OS-es and the variety of device flavours. BYOD is happening, which means that a good cross platform scenario is very important.

At the same time, user experience is a huge part of the story. As Nat rightly remarked:

Your app, even if it’s an enterprise app, is next to the best apps in the world on someone’s phone.

As a result of the consumerization of IT, users have come to expect top notch UX from their enterprise apps as well. Just as smooth as their consumer apps. Needless to say that the reason why I love Xamarin is that they bridge cross platform development and native experience in the best way I’ve seen.

IBM partnership and Mobile Middleware
Mobile middleware is becoming a real necessity in the enterprise. Opening up enterprise backend systems to mobile devices isn’t just regular EAI – Enterprise Application Integration. We have to deal with very different connectivity scenarios, new security threats, and the performance constraints that come with mobile devices. This opens up opportunities for new and interesting architectural styles and cloud based backends. A company that – IMO – gets this very well is KidoZen.

IBM is now also stepping into this new world of mobile with their Worklight product. I’m very interested to see where the Xamarin/IBM integration is going.

Xamarin Forms Partners
Xamarin.Forms is a very promising framework, in which Xamarin balances cross platform development and native user experience in a very elegant way.

The contributions from the Xamarin.Forms partners fully bring the framework to life, with awesome, powerful controls: charts, document handling, advanced user inputs, etcetera. This is a very nice step towards adulthood for Xamarin.Forms.

Xamarin Profiler
With these constrained mobile devices, performance efficiency is very important. For this reason, you’ll find yourself spending a lot of time running your apps through profilers to get rid of all the jitters and memory leaks. Xamarin now has a great profiler that actually looks at the app from a Mono/.NET point of view. Can’t wait to play with it!

xamarin_profiler_mac@2x

What I really like about the new tools is that Xamarin is taking their own advice to heart: create native experience on the target platform. This means that the Mac versions of the tools have the distinct OSX style, and the Windows versions have that recognizable “Metro” look (I still call it Metro). It’s the details that matter, and I like it.

Xamarin Android Player
Every Android developer knows how excruciating the Android emulator is to use. Although it’s an accurate emulator, it’s dreadfully slow. Many have already defaulted to using Genymotion, although that may cause integration problems, for example with VirtualBox if you have a Windows VM running in Parallels on your MacBook like me.

hero-screenshot@2x

The Xamarin Android Player is much faster, and integrates neatly with IDEs and other tools that speak ADB.

Now, the player won’t excuse you from testing with real devices, but this greatly improves the developer experience! Wow, well done Xamarin!

Xamarin Sketches
You could expect Xamarin to take Roslyn and do something great with it. Xamarin Sketches is a nice REPL-like tool that lets you draft up C# code, have it executed immediately and park useful bits as snippets. Web development style immediate feedback, for C# development!

I love how Xamarin brings the IDE closer and closer to Bret Victor’s vision in his Inventing On Principle talk, similar to what Apple did with their Swift Playground.

XamarinInsights

In addition, the real time integration with the emulators, where you immediately see code changes being executed, is a huge productivity boost! Amazing piece of engineering from the Xamarin team!

C# Test Automation
Now finally we can write our automated UI tests in C#!

Xamarin Insights
A good solution for monitoring the stability and quality of your apps, out there on all those devices, is crucial. It helps to detect problems and keep the quality and user experience at a high level. Moreover, monitoring your app’s behaviour can give you a lot of new insights.

There are several solutions in the market that can help you track errors, and even do a bit of performance monitoring. Services like Crashlytics, Crittercism, Raygun and even my favourite APM tool New Relic all have mobile offerings.

The problem with these is that most of them hook into native APIs to do their work. This means that you’ll need a C# binding for their SDKs to use them in your Xamarin app. For example, Raygun has a component in the Xamarin Component Store you can use. Still, catching errors won’t give you all the details you might be looking for as a C# developer. You’ll want the full .NET/Mono stack trace and more details from the Xamarin environment, which you won’t get without a “Xamarin native” SDK.

Furthermore, monitoring app performance will be difficult, again because most APM SDKs hook into extension points in the underlying native framework. With the New Relic SDK for iOS, for example, you’ll find yourself trying to hook measurements onto Objective-C methods, selectors and other extension points, whereas you really want to know how your C# method is doing. These SDKs also make it difficult to instrument shared code, since their APIs are usually platform specific. Again, a “Xamarin native” SDK is lacking here. I really hope New Relic has a Xamarin compatible SDK on their roadmap. But luckily, Xamarin has now taken up the challenge themselves…

Enter Xamarin Insights! I’ve already had the privilege of playing with the platform a little bit, and things look very promising. There is still some work to be done on the performance monitoring part, but the error tracking and alerting feature is shaping up very nicely.

XamarinSketches

Conclusion
Xamarin is growing up fast. With the addition of Test Cloud, Insights and the new Profiler, they’re building a great one-stop solution for mobile development. Sketches, C# test automation and the fantastic new Android Player make the developer experience even more delightful.

Congratulations to the whole Xamarin team!

NSBCon 2014 recap

Update: Particular has put up an awesome recap page for NSBCon London. All session videos are available there as well! And be sure to check out the excellent intro video.

NSBCon 2014, the first official NServiceBus conference at Skills Matter in London was a great success. I really enjoyed the sessions and hallway discussions with the participants about how they were using NServiceBus. Mark and I had the opportunity to share our experiences with NServiceBus in a session. Here are some of my personal highlights.

Barbecoa
Bear with me, I’ll get to technical NServiceBus stuff, but being a foodie, I can’t resist posting food pics as well. But if you’re really boring and don’t care about great food, you can skip the fun.

This was my second visit to London and we had a little bit of time to spend in the evenings exploring the city. I arrived a day earlier than Mark to attend the ADSD Unconference on June 26th. My colleagues Marcel and Sander were also in town for a large ALM project they’re doing, so we decided to meet up and go for dinner. Gordon Ramsay was out of our league, but luckily Jamie Oliver was close by with his excellent BBQ/grill restaurant Barbecoa. Nice ambience, and the dishes were simple yet very refined and tasty.

Jamie Oliver's Barbecoa

Wood plank-smoked duck, cherries, maple dressing, red mizuna and pecans

Unpulled pork, Caraway slaw, jalapeño cornbread

Barbecoa brownie, Raspberry & Pink Peppercorn Sorbet & Aerated Chocolate

Barbecoa's butchery



Starter: Wood plank-smoked duck, cherries, maple dressing, red mizuna and pecans – excellent and playful taste combination between the smoked duck, sweet cherries and earthy pecans

Main: Unpulled pork, Caraway slaw, jalapeño cornbread – wow, this pork butt was tender, and the combination with the spicy jalapeño cornbread was excellent

Dessert: Barbecoa brownie, Raspberry & Pink Peppercorn Sorbet & Aerated Chocolate – I love the use of pepper or other spices with fruit, made the sorbet really come alive; and chocolate brownies… no need to say more

Right around the corner was Barbecoa’s Butchery, where all the meat is dry-aged. Very nice. After dinner, I had a quick stroll along the Thames and past St. Paul’s Cathedral.

And the conference hadn’t even started :)

ADSD Unconference 
In February 2012, I attended Udi Dahan’s Advanced Distributed Systems Design course. Most attendees will confirm that this course is quite mind twisting if you have been brainwashed with the “Service Oriented Architecture = Web Services” dogma all along. Nowadays I think messaging and asynchronous systems are a bit hipper with all these cloud platforms taking off, but chaining web services together into a ball of mud was still all the rage at the time.

The unconference was an interesting way to get ADSD alumni together to discuss their experiences applying the techniques from the course. An unconference is an interesting format, where topics are determined by the participants, and then discussed in free format sessions.

We saw an impressive example of a composite web UI implementation by Lars Corneliussen from Faktum Software – BTW, I checked, but he’s not some distant Scandinavian cousin :) Anyway, we discussed UI composition, whereby data from separate services is combined at the UI level. One of the interesting challenges I see is applying this pattern with mobile apps. Given the latency and low bandwidth that mobile apps have to deal with, I think it’s better to do the composition at the API level and prepare highly optimized resources for the app to communicate with.

The Particular team also facilitated a couple of great discussions. A discussion about Ops got me interested in Splunk, for holistic and proactive monitoring of all sorts of events across a distributed system. I definitely need to check that out. Also, Indu Alagarsamy led a nice discussion about Routing Slips vs. Process Orchestration with Sagas. Have a look at Jimmy Bogard’s blog for a great description of different saga / messaging patterns. Danny Cohen coined the term “Bolshevik” for centralized process orchestration. I’m going to use that term from now on :)

One of the insights was that – in a way – a routing slip is “Bolshevik” as well, since it also prescribes a fixed, sequential route. I tend to agree with that, though the Routing Slip pattern can be useful for having messages flow across specific endpoints only.

I really enjoyed the nice discussions at the ADSD Unconference and appreciated the willingness to share experiences and lessons learned by all participants. Having them face-to-face in a small group also really helps.

London
Mark arrived shortly after I finished the unconference. We went for a nice long walk along the Thames to visit some highlights in London. Might as well make good use of your time, right?

St. Paul's Cathedral and Millennium Bridge

London skyline

Tower Bridge

Southwark Bridge


NSBCon Day 1
The first day of NSBCon started off with a nice breakfast at Skills Matter. What a great location and a nice environment for tech conferences like NSBCon. Udi Dahan kicked off the day with a presentation about the past, present and future of NServiceBus. There were a couple more old timers in the room who also started using NServiceBus at version 1.x, just like me :) Udi reminded us how awful the website and logo used to look at the time, and he sincerely apologized for ILMerge-ing all the external dependencies into NServiceBus. Yep, it was bad, but the team has made NServiceBus a very slick and solid product over the past few years!

Info Support was prominently visible as well as one of the event’s sponsors.

The rest of the program was a nice mixture of case studies with NServiceBus, technology deep dives and theory. It was nice to see how NServiceBus is used at big companies like Wonga (Charlie Baker’s session) to handle large volumes of payments, and at Spotlight (Dylan Beattie’s session), where it even plays a role in video encoding.

A theme prevalent in almost all of the sessions was how important it is to have decent monitoring across your entire system. Luckily the new tools from Particular go a long way in giving insight into a message driven system, but I think you can’t do without decent, holistic monitoring: dashboards that give insight into both the technical stuff that goes on in your system and functional checks.

This was also one of the points that Mark and I highlighted in our Best Practices session. It was an honor to present at this first official NServiceBus conference, and we got some nice feedback. The hallway discussions afterwards are always so valuable.

James Lewis gave a great talk about Managing Microservices, and Yves Goeleven taught us about using NServiceBus in the Azure cloud. I really like using the Azure platform; cloud architecture brings a whole set of new challenges to the table.

The day ended for us with a nice speaker dinner. It was great to spend some time with the whole Particular team and the other speakers to share experiences. You don’t get that many NServiceBus users from across Europe together in one room that easily.

NSBCon day 2
After a nice espresso at the Goswell Road Coffee Shop, the second day started with a deep dive into the Particular Service Platform by Danny Cohen. He explained how the separate components work together to monitor, diagnose and even design distributed systems with NServiceBus. I especially like the role of ServiceControl, which can serve as a nice extension point for your custom monitoring needs as well. New Relic feed, anyone? I have blogged about ServiceControl and ServicePulse before.

To use Danny’s words: “I’ll let you take in the coolness for a minute”

The other sessions were very enjoyable as well. A look at the new pipeline architecture in NServiceBus 5 by Indu Alagarsamy and John Simons was very nice. I really like the new model, based on the Russian Doll Pattern. Mark and I already saw some nice opportunities for replacing things in our implementation with these new style behaviors.

Greg Young and Szymon Pobiega showed an impressive integration of Event Store and NServiceBus. Event Store is another product I definitely want to check out. Greg is a great presenter as well.

Jan Ove Skogheim took us on the journey he made with his customer, migrating a complex, web service infested system to a message driven architecture with NServiceBus. Some nice insights there as well. In trying to get the team into the right mindset, Jan Ove was very keen on using the right terminology. “When someone said ‘service’, I slapped them in the face” :)

And Andreas Öhlund gave some nice insights into the internal development process at Particular. He quoted Netscape’s founder while explaining why releases are sometimes delayed: “Don’t ship crap”. I completely agree.

What struck me was that Andreas used this image in his presentation:

Slap!

So, both Jan Ove and Andreas use slapping… Perhaps some Scandinavian custom, but let me be the first to coin an official term for this methodology: Slap Driven Development. Interesting concept that I might try at work some time. You heard it first here!

In closing
NSBCon 2014 was a big success. Udi was visibly very proud to have a first official conference for his brainchild and rightly so. He has built a great company and community around NServiceBus. Looking forward to NSBCon 2015 already!

A lap around ServicePulse (2 of 3)

This is part 2 of my short series about the new Particular Service Platform… specifically about ServicePulse. In my first post about ServicePulse, you can read about the overall architecture and the problems that ServicePulse solves. Now I want to focus on one specific feature: the ability to add custom checks to your services.

After having set up ServiceControl and ServicePulse, you are able to monitor any NServiceBus service in your architecture. Just drop in the Heartbeat plugin and ServicePulse will see your service. The Heartbeat basically says: the service is there, it’s running and it’s able to send messages (since the heartbeat is sent as a message to the ServiceControl input queue).
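
The heartbeat plugin ships as a NuGet package you add to your service; assuming the package name hasn’t changed since I wrote this (check the Particular docs to be sure), installing it looks like this:

Install-Package ServiceControl.Plugin.Heartbeat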

In general, this will not be sufficient, as you will also want to know how the service is behaving from a functional point of view. Or maybe you even want to monitor some deeper technical dependencies, such as whether it can reach its database, whether the config is OK, whether it can reach that external web service you rely upon, et cetera.

This is where custom checks come in, and they’re easy to implement. A custom check is very similar to the Heartbeat check: you just drop in a DLL that contains one or more checks, and the messages will be sent to ServiceControl. We end up with the following deployment overview:

ServiceControl-Deployment-CustomChecks

The custom check plugins will report through the same channel as the Heartbeat plugin.

Implementing a custom check
A custom check is pretty easy to implement, and you can choose between two scenarios: a periodic check and a one-time check.

Either way, you need to add a reference to the ServiceControl.Plugin.CustomChecks NuGet package and you’re good to go.

Startup check
A startup custom check is executed when the service starts, so basically only once. A check like this is useful if you want to verify configuration settings after deployment, or check environment dependencies that the service needs to run properly. You implement a one-time check by inheriting from ServiceControl.Plugin.CustomChecks.CustomCheck.

public class StartupCheck : ServiceControl.Plugin.CustomChecks.CustomCheck
{
    public StartupCheck()
        : base("StartupCheck", "Categoryname") // the name of the check and its category are specified here 
    {
        if (ConfigurationManager.AppSettings["TestSetting"] != "1")
        {
            ReportFailed("TestSetting must be 1!");
        }
        else
        {
            ReportPass();
        }
    }
}

The default constructor passes a check name and a category to the base constructor, which bootstraps the custom check. All you have to do is put your check logic in the constructor and call either ReportPass if all is well, or ReportFailed if there’s a problem. ReportFailed accepts a fault reason, which will be visible in ServicePulse.

Periodic check
A periodic check runs at a self defined interval, so it’s ideally suited to monitor things that might change over time, such as the availability of certain services or databases, or functional scenarios such as: “has all Point of Sale data arrived yet?”.

Again, implementing such a check is pretty easy. This time, inherit from ServiceControl.Plugin.CustomChecks.PeriodicCheck.

public class CheckHealth : PeriodicCheck
{
    public CheckHealth()
        : base("Healthcheck", "CategoryName", TimeSpan.FromMinutes(10))
    {

    }

    public override CheckResult PerformCheck()
    {
        // Fake a failure once in a while
        // TODO: think of a useful check to implement here.
        if (DateTime.Now.Second % 2 == 0)
        {
            return CheckResult.Failed("This is a sample failure report");
        }
        return CheckResult.Pass;
    }
}

Besides the check name and the category name, the base constructor also accepts a TimeSpan that specifies the interval at which the check runs. Every 10 minutes in the example.

Next, we override the PerformCheck method, which returns a CheckResult object. In case of success, return CheckResult.Pass; otherwise use CheckResult.Failed. Again, a reason or description for the failure must be supplied.
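
To make this more tangible, here’s a sketch of a more realistic periodic check that verifies database connectivity. The connection string name is hypothetical; substitute your own check logic:

using System;
using System.Configuration;
using System.Data.SqlClient;
using ServiceControl.Plugin.CustomChecks;

public class DatabaseReachableCheck : PeriodicCheck
{
    public DatabaseReachableCheck()
        : base("Sales database reachable", "Infrastructure", TimeSpan.FromMinutes(5))
    {
    }

    public override CheckResult PerformCheck()
    {
        try
        {
            // "SalesDb" is a hypothetical connection string name from app.config
            var connectionString = ConfigurationManager.ConnectionStrings["SalesDb"].ConnectionString;
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
            }
            return CheckResult.Pass;
        }
        catch (Exception ex)
        {
            return CheckResult.Failed("Cannot reach the Sales database: " + ex.Message);
        }
    }
}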

So what does admin/ops see?
After a custom check is deployed and activated, we can see the results in the ServicePulse front end. On the Dashboard, it shows:

Custom checks

And upon further inspection on the Custom Checks screen:

Custom checks - overview

In my current project, we make heavy use of custom checks to monitor the health of our system, and whether customers are using it efficiently. For now, we do this through a custom built monitoring service (based on NServiceBus), but I can see these checks migrating to ServiceControl plugins over time.

Next time: dealing with failed messages!