There is no app! – LevelUp Mobile slides


On September 22nd, 2016 I presented a session called “There is no app!” at the LevelUp Mobile 2016 event in Leusden. Here are the slides and video of that presentation:

[The video will be placed here shortly]

Mobile platforms are evolving and getting richer in features with every new release. The OS itself is becoming the primary interface for users to interact with, as is a new category of (wearable) devices.

What does this mean for us as app developers? Are the days of the traditional “mobile app” numbered? How do we serve our end users and optimize their mobile moment as much as possible?

The average smartphone owner has over 100 apps installed on their phone, yet only uses between 3 and 5 apps a day. Integrating your app more deeply into the mobile operating system can greatly increase its usage. During this session we will show what you can do to integrate your apps into the Windows, iOS and Android platforms to keep your app top of mind. We’ll look at Spotlight search, universal links, app indexing, Cortana integration and other APIs provided by iOS and Google Play services to engage your users in your apps. We’ll also look at new interaction models that are closer to the mobile platform: widgets, 3D Touch, etc.


Ceci n’est pas une app

Call to action: join us at LevelUp Mobile on September 22nd in Leusden for a FREE inspirational evening on the future of Mobility and Devices.


The mobile platform war has been raging for almost ten years now, and for now it seems that Google (Android) and Apple (iOS) have won. (link) Microsoft, though still pushing Windows 10 Mobile as well, has accepted this and started providing high-quality mobile apps for both Android and iOS.

Apple and Google have invested a lot in making their platforms richer and richer to attract and retain users. Apple’s advantage of 100% vertical integration of software and hardware has allowed them to create experiences like Apple Pay, 3D Touch and Touch ID that are very appealing to users and developers alike. At the same time, both Apple and Google have been putting features into the OS and stock apps that compete with 3rd party offerings in the App Store. Furthermore, users have come to expect the same experience from 3rd party apps that they get from the OS itself. Though some platform features might seem alike between iOS, Android and Windows, the way they are implemented can differ vastly and requires access to core platform APIs.

As a strong proponent of Xamarin, I’ve been working in the world of cross-platform mobile app development for almost 6 years now. The reason we chose to go with Xamarin was – first of all – the ability to share code amongst platforms, but – equally important – full access to the native platform APIs and the ability to create 100% native experiences. Given the trend of ever-innovating mobile platforms, this puts us at a huge advantage over cross-platform solutions that go for the lowest common denominator, both in UI (the same UI across all platforms) and UX (most of the time just the common superficial feature set across platforms).

With iOS 10, Apple is showing us a trend where apps are integrated even deeper into the core OS experience. Of course we already had widgets in Android, but what to think of interactive widgets in iOS’s Today view, enriched with Siri’s AI capabilities? Interactive notifications are becoming more popular. Where a notification used to be a way to alert the user and let them open the accompanying app by tapping on it, notifications are becoming a user interface by themselves, allowing the user to deal with the app’s functionality right from the lock screen.

Deal with a notification right away from the Home screen. No need to open the app!
The boundaries of apps are blurring even more with advanced features like 3D Touch on the Home screen, and the ability to interact with apps from the Siri screen:

Direct access to an app’s features through 3D Touch. No need to open the app!

Siri knows how to invoke your app and show it as a widget right inside its own interface. No need to open the app!

iMessage can invoke your app right from its own interface. No need to open your app!
These are all iOS examples, by the way, but similar features can be found in Android and Windows 10, with its Live Tiles, Cortana integration, etcetera.

In general, user interaction with their mobile devices is becoming more and more streamlined, and to stay ahead as developers, we need to start thinking about these micro-interactions, these Mobile Moments, and offer the most efficient experience with our apps.

Mobile is not a neutral platform (link). The philosophy of web applications (built for browsers, available everywhere, with a consistent user experience everywhere) doesn’t apply here. We don’t build for the web, we build for the OS. Yay for native development! 🙂

There is no spoon.
If we follow this train of thought, it leads us to an existential question: is there actually an app?

I would argue: not anymore – at least not in the traditional sense where we have an icon sitting on the home screen that launches into an application that comes into the foreground and occupies the whole screen. It seems like the days of the mobile “app” are numbered and we have to start thinking about apps as a set of autonomous micro-interactions that work together with the OS and/or other apps.

Luckily for us as developers, there are plenty of new APIs and frameworks that help us build these interactions, and I think it will only become more exciting from a technical perspective to build mobile experiences.


On September 22nd, I’m joining Brechtje de Leij (mobile strategist and expert), Jorn de Vries of Flitsmeister fame, Johan Gorter & Rick Hoving from AFAS Software and the ever brilliant Laurent Bugnion to speak at a one-off inspiring event about the future of Mobile and Devices: LevelUp Mobile. Together with my colleague Geert, our talk is going to be about the exact topic of this blogpost and we’ll show some real life examples of how to implement these Mobile Moments using Xamarin.

If you have not registered yet, you can do so here! It’s free and it’s going to be fun!

To get more inspired, read Laurent’s teaser blog post about his upcoming talk: A world of devices.

GMImagePicker ported to Xamarin.iOS

I ported GMImagePicker to C#. Code here, NuGet package here. Happy coding!

This past week I was working on a Xamarin project where we need support for selecting multiple images and/or taking pictures and uploading them to a backend service. The default UIImagePicker control in iOS is OK but not very versatile: it can take a picture with the camera or let you select a single image from your gallery, but that’s about it. Furthermore, working with the resulting images is quite cumbersome, as you’re dealing with large UIImage objects in memory. A lot of performance and memory issues can occur if you’re not careful.

In order to deal with photos and videos more efficiently and in a much richer manner, Apple introduced the PhotoKit API in iOS 8. Mike Bluestein has written a nice introductory post on this API on the Xamarin blog, so I’m not going to repeat this.

In short, PhotoKit works with the notion of PHAsset objects, which are basically descriptors for media files on the device. There are APIs to query different photo galleries, etcetera. Only once you actually need the image for display or other purposes do you have to retrieve it using the PHAsset descriptor. Very efficient.
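To make that concrete, here is a rough sketch (using the Xamarin.iOS PhotoKit bindings; shortened, and the exact overloads may differ slightly between binding versions) of querying the library for image assets and only materializing pixels for one of them on demand:

```csharp
// Query the photo library for image assets, newest first.
// A PHFetchResult only holds PHAsset descriptors, not image data.
var fetchOptions = new PHFetchOptions {
	SortDescriptors = new [] { new NSSortDescriptor ("creationDate", false) }
};
PHFetchResult assets = PHAsset.FetchAssets (PHAssetMediaType.Image, fetchOptions);

// Only when we actually need pixels do we ask the image manager,
// and only at the size we need (a thumbnail here):
var asset = (PHAsset)assets.FirstObject;
PHImageManager.DefaultManager.RequestImageForAsset (
	asset,
	new CGSize (200, 200),              // thumbnail-sized request
	PHImageContentMode.AspectFill,
	null,                               // default PHImageRequestOptions
	(image, info) => {
		// image is a UIImage sized for display, not the full-resolution asset
	});
```

Because the full-resolution UIImage is never loaded unless you ask for it, memory stays under control even when browsing thousands of photos.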

Many apps, such as the Facebook app, allow users to select multiple images in a user friendly manner, and maybe even add pictures on the go by providing access to the camera while they are browsing their gallery. This is something that we also wanted to add to our app. Luckily, there are some nice open source projects around that implement just that. One of the nicest ones is the GMImagePicker component by Guillermo Muntaner Perelló, which uses the PhotoKit API under the hood. The user experience looks like this:


That’s slick! You can browse through several collections, the control is highly customizable, and it even contains localized texts for labels, buttons and dialogs. Only, it’s written in Objective-C…

I had two options: bind the API using Xamarin’s Objective Sharpie or port it verbatim to C#. I chose to port it, mainly to have full control over the inner workings of the control and to not have to pull in a “foreign” language into the project. The port has complete feature parity with the Objective-C version and I tried to iron out as many issues as I could. It seems to be working pretty smoothly in my Xamarin app.

The code is up on GitHub and you can use the control by either downloading the code and including the .csproj in your project, or installing the NuGet package in your Xamarin.iOS app:

Install-Package GMImagePicker.Xamarin

As I said, the GMImagePicker control is highly customizable. You can change its appearance by specifying colors for different parts of the UI, and you can provide custom titles and confirmation prompts. It’s also possible to filter and limit the types of assets you want the user to select. The whole range of options can be found in the sample app that comes with the control. Here is an overview:

var picker = new GMImagePickerController {
	Title = "Custom Title",
	CustomDoneButtonTitle = "Finished",
	CustomCancelButtonTitle = "Nope",
	CustomNavigationBarPrompt = "Take a new photo or select an existing one!",
	ColsInPortrait = 3,
	ColsInLandscape = 5,
	MinimumInteritemSpacing = 2.0f,
	DisplaySelectionInfoToolbar = true,
	AllowsMultipleSelection = true,
	ShowCameraButton = true,
	AutoSelectCameraImages = true,
	ModalPresentationStyle = UIModalPresentationStyle.Popover,
	MediaTypes = new [] { PHAssetMediaType.Image },
	// Other customizations to play with:
	//ConfirmSingleSelection = true,
	//ConfirmSingleSelectionPrompt = "Do you want to select the image you have chosen?",
	//PickerBackgroundColor = UIColor.Black,
	//PickerTextColor = UIColor.White,
	//ToolbarBarTintColor = UIColor.DarkGray,
	//ToolbarTextColor = UIColor.White,
	//ToolbarTintColor = UIColor.Red,
	//NavigationBarBackgroundColor = UIColor.Black,
	//NavigationBarTextColor = UIColor.White,
	//NavigationBarTintColor = UIColor.Red,
	//PickerFontName = "Verdana",
	//PickerBoldFontName = "Verdana-Bold",
	//PickerFontNormalSize = 14.0f,
	//PickerFontHeaderSize = 17.0f,
	//PickerStatusBarStyle = UIStatusBarStyle.LightContent,
	//UseCustomFontForNavigationBar = true,
};

// You can limit which galleries are available to browse through, e.g.:
picker.CustomSmartCollections = new [] {
	PHAssetCollectionSubtype.SmartAlbumUserLibrary,
	PHAssetCollectionSubtype.SmartAlbumFavorites
};

// Event handling
picker.FinishedPickingAssets += Picker_FinishedPickingAssets;
picker.Canceled += Picker_Canceled;

// Other events to implement in order to influence selection behavior:
// Set the EventArgs' Cancel flag to true in order to prevent the action from happening
picker.ShouldDeselectAsset += (s, e) => { /* allow deselection of (mandatory) assets */ };
picker.ShouldEnableAsset += (s, e) => { /* determine if a specific asset should be enabled */ };
picker.ShouldHighlightAsset += (s, e) => { /* determine if a specific asset should be highlighted */ };
picker.ShouldShowAsset += (s, e) => { /* determine if a specific asset should be displayed */ };
picker.ShouldSelectAsset += (s, e) => { /* determine if a specific asset can be selected */ };
picker.AssetSelected += (s, e) => { /* keep track of individual asset selection */ };
picker.AssetDeselected += (s, e) => { /* keep track of individual asset de-selection */ };

// The GMImagePicker can be presented as a popover as well:
var popPC = picker.PopoverPresentationController;
popPC.PermittedArrowDirections = UIPopoverArrowDirection.Any;
popPC.SourceView = gmImagePickerButton;
popPC.SourceRect = gmImagePickerButton.Bounds;

await PresentViewControllerAsync (picker, true);

The extensibility is very convenient. For example, if you want to set a maximum on the total size of the images the user is allowed to select, you can handle the AssetSelected event to keep track of the total size selected, and handle ShouldSelectAsset to prevent further selection once the threshold has been reached. This is exactly what we wanted to have in our app.
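A minimal sketch of that idea, assuming the event args expose the asset and a cancelable flag as the event list above suggests; MaxTotalBytes is an arbitrary choice and EstimatedSize is a hypothetical helper (you could base it on the asset’s pixel dimensions):

```csharp
// Hypothetical size cap: veto further selection past ~20 MB total.
const long MaxTotalBytes = 20 * 1024 * 1024;
long selectedBytes = 0;

// Track the running total as assets are (de)selected
picker.AssetSelected += (s, e) => selectedBytes += EstimatedSize (e.Asset);
picker.AssetDeselected += (s, e) => selectedBytes -= EstimatedSize (e.Asset);

picker.ShouldSelectAsset += (s, e) => {
	// Setting Cancel to true prevents the selection from happening
	if (selectedBytes + EstimatedSize (e.Asset) > MaxTotalBytes)
		e.Cancel = true;
};
```

The same pattern works for capping the number of selected assets or disallowing particular media types at selection time.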

Once the user has finished selecting assets, you can use a PHImageManager to retrieve the actual images in whatever size you like:

void FinishedPickingAssets (object s, MultiAssetEventArgs e)
{
	var imageManager = new PHImageManager ();

	foreach (var asset in e.Assets) {
		imagePreview.Image = null;

		imageManager.RequestImageForAsset (asset,
			new CGSize (asset.PixelWidth, asset.PixelHeight),
			PHImageContentMode.Default,
			null,
			(image, info) => {
				// do something with the image (UIImage), e.g. upload to a server
				// you can get the JPEG byte[] via image.AsJPEG()
			});
	}
}
Very nice, and it’s now available for Xamarin.iOS developers as well 🙂

Photo by Andrew Illarionov.

Many thanks to Guillermo for letting me port his excellent code and publish it. I’d love to hear your feedback on the code and would love to see your PR’s for improvements.

What will iOS 6 bring us?

Tomorrow Apple opens up WWDC 2012. As usual, CEO Tim Cook will deliver the keynote, which is usually laden with cool new announcements. Will it be a new iPhone? An Apple HDTV? In any case, there will be an introduction of iOS 6 for sure, judging from the banners in the Moscone Convention Center.


I don’t care so much about seeing the iPhone 5 tomorrow, as I’m sure there will be one this year. Interesting rumours and “leaked parts” are flying around on the internet. Lots of cool stuff, and no doubt it will be the coolest device on the market for yet another year. In any case, I will buy it as a replacement for my trusty iPhone 4.

For now, I’m mostly interested in iOS 6 and the new stuff it will bring. With iOS 5 I had hoped for a refresh, or even an overhaul, of the home screen. Slick and fluid as it is, iOS is starting to look aged compared to Windows Phone Mango and even Android Ice Cream Sandwich. Not that I want to have widgets all over the place, but the liveliness of Windows Phone (with its Live Tiles) is lacking in iOS. Live Tiles and Background Agents are the two things I have “feature envy” over. Metro, not so much. It looks nice and fresh at first sight, but after a couple of minutes of using some of the apps it becomes obvious how difficult it is to apply the concept. Most Metro apps look bland, bare-bones and not as fresh as the WP7 home screen promises. So it’s iPhone for me all the way, unsurprisingly… iOS is still the most complete, stable and mature smartphone OS on the market in my opinion. It’s not a coincidence that all of the cool apps appear on iOS first, and usually look the coolest on iOS as well.

While iOS 5 did bring a bunch of improvements in usability and some nice new features, the static home screen remained, so I was a bit disappointed. I’m hoping for more in iOS 6. When I first saw the announcement for WWDC 2012, I immediately thought the logo looked fresh and playful.


Will this translate to the overall appearance of iOS 6? I hope so. Rumours are that there will be a completely new Maps application with 3D and all. Cool, but not crucial for me. Now is the time for Apple to put more innovation into the overall experience. Other rumours are that most of the application chrome in default apps will be silver instead of the grayish blue, much like on the iPad. Also nice, but not shocking. Funnily enough, the iOS 6 logo (as seen above) is that same grayish blue color as the old chrome. On the other hand, the “6” has a gray/silver color. We’ll see…

What I’d love to see is an overhaul of the home screen UI, more and tighter integration possibilities between apps, and a more informative home screen. I use my iPhone both for work and privately, so I’d love to see my upcoming appointments for the day in some way, missed phone calls, maybe some Twitter-related info, all a bit less clunky than how notifications are done in iOS 5 (which already was a big improvement over iOS 4)! This video, made by an iOS fan, shows some nice ideas for iOS 6. I hope some of them will be announced tomorrow:

In terms of user interaction, I’m sure Apple will be setting the bar once again as soon as the new iPhone 5 is introduced. I’m expecting (hoping for) the addition of tactile and haptic feedback on the touch screen (not just those lame vibrations most Android devices do). Rumours and Apple patents about this have been flying around for a couple of years now. Time for action.

Apple has been known to put clues here and there into their invitations and stuff. Look closely at the iOS 6 logo… See the ripples? Does this hint at tactile feedback? Or is it just the new default wallpaper like on the new iPad? Sometimes speculating on non-information is fun 😉

Another cool rumour is the one about NFC integration. Apple was recently awarded a patent that hints at the usage of NFC for mobile payment scenarios. Very nice, and NFC opens up some nice possibilities for apps.

The last thing I’d like Apple to do is create a better experience on the iPad for pens. I know Steve Jobs hated styluses, but there are a lot of very cool creativity apps for the iPad. FiftyThree’s Paper app is currently the coolest on the market. I like using the iPad for drawing, sketching and note-taking, and my Bamboo Stylus for iPad is a great tool, but the experience is still clunky. iPad/iOS has no palm rejection technique, so drawing on the iPad isn’t as natural as on a piece of paper. Microsoft is way ahead of Apple in this area, with its years-old Ink technology. The //BUILD developer preview tablet with Windows 8 comes with a nice pen, and drawing and writing on it feels very natural. The more the iPad becomes a device for creation, not just consumption, the more I think it needs this type of support. Of course iOS needs to be prepared for this as well. Luckily there have been rumours about this…

I’ll be watching the keynote live blogs very closely tomorrow! Time for Apple to 0wn back the competition.

Push notifications in iOS with MonoTouch

A recurring theme when building mobile apps is push notifications. I’m working on a couple of apps at Info Support using MonoTouch for iOS and we want to add push notifications to those apps. There’s a lot of interesting and very useful information on the internet about implementing notifications, which is actually pretty straightforward once you know the API.

First of all, Apple has an excellent iOS Developer Library with very comprehensive and pleasant-to-read information about the iOS APIs. Here is the section on Notifications.

Apple makes a distinction between local and remote notifications: local notifications are notifications you schedule and “send” from the device itself (e.g. from a background task), while remote notifications are the push notifications coming from the Apple Push Notification service (APN). I’m specifically looking at push notifications here.
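For completeness, here is a hedged sketch of the local flavor before we dive into the remote one: scheduling a local notification from the app itself, using the MonoTouch-era UIKit API (no server involved):

```csharp
// Schedule a local notification to fire one minute from now.
// iOS delivers it even if the app is no longer in the foreground.
var notification = new UILocalNotification {
	FireDate = NSDate.FromTimeIntervalSinceNow (60),
	AlertBody = "Don't forget your meeting!",
	ApplicationIconBadgeNumber = 1,
	SoundName = UILocalNotification.DefaultSoundName
};
UIApplication.SharedApplication.ScheduleLocalNotification (notification);
```

Everything below deals with the remote case, where the payload comes from the APN instead of being scheduled on-device.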

Handling notifications on the iDevice

There is a great sample implementation showing the basics of handling notifications in the iOS app here on Google Code. I shamelessly took the code snippets from that sample 🙂 In short, it’s as simple as this (in C#/MonoTouch):

// This method is invoked when the application has loaded its UI and it's ready to run
public override bool FinishedLaunching (UIApplication app, NSDictionary options)
{
	//This tells our app to go ahead and ask the user for permission to use Push Notifications
	// You have to specify which types you want to ask permission for
	// Most apps just ask for them all and if they don't use one type, who cares
	UIApplication.SharedApplication.RegisterForRemoteNotificationTypes (UIRemoteNotificationType.Alert
	                                                                   | UIRemoteNotificationType.Badge
	                                                                   | UIRemoteNotificationType.Sound);

	//The NSDictionary options variable would contain our notification data if the user clicked the 'view' button on the notification
	// to launch the application.  So you could process it here.  I find it nice to have one method to process these options from the
	// FinishedLaunching, as well as the ReceivedRemoteNotification methods.
	processNotification (options, true);

	//See if the custom key value variable was set by our notification processing method
	if (!string.IsNullOrEmpty (launchWithCustomKeyValue)) {
		//Bypass the normal view that shows when launched and go right to something else since the user
		// launched with some custom value (eg: from a remote notification's 'View' button being pressed, or from a url handler)

		//TODO: Insert your own logic here
	}

	// If you have defined a view, add it here:
	// window.AddSubview (navigationController.View);

	window.MakeKeyAndVisible ();

	return true;
}

On startup, the AppDelegate calls UIApplication.SharedApplication.RegisterForRemoteNotificationTypes() from within the FinishedLaunching method, in which you specify which notifications you are interested in. These can be a combination of UIRemoteNotificationType.Alert (for alert texts), UIRemoteNotificationType.Badge (for numbers to update the application’s badge with) and UIRemoteNotificationType.Sound (for a custom sound to be played upon receipt of the notification).

iOS will register itself with the APN and will do a callback to the application when that is finished. This can have one of two outcomes: either it succeeds or it fails. You handle this by overriding two methods in the AppDelegate: RegisteredForRemoteNotifications and FailedToRegisterForRemoteNotifications. In case of success, Apple will give you a device token with which the APN can address your application on your device. Typically, what you do in RegisteredForRemoteNotifications is passing that device token to your application services (the server application that generates the notifications) so that it can find you.

public override void RegisteredForRemoteNotifications (UIApplication application, NSData deviceToken)
{
	//The deviceToken is of interest here, this is what your push notification server needs to send out a notification
	// to the device.  So, most times you'd want to send the device token to your servers when it has changed

	//First, get the last device token we know of
	string lastDeviceToken = NSUserDefaults.StandardUserDefaults.StringForKey ("deviceToken");

	//Turn the token into a plain hex string (strip the "<", ">" and spaces from the NSData description)
	string newDeviceToken = deviceToken.Description.Trim ('<', '>').Replace (" ", "");

	//We only want to send the device token to the server if it has changed since last time
	// no need to incur extra bandwidth by sending the device token every time
	if (!newDeviceToken.Equals (lastDeviceToken)) {
		//TODO: Insert your own code to send the new device token to your application server

		//Save the new device token for next application launch
		NSUserDefaults.StandardUserDefaults.SetString (newDeviceToken, "deviceToken");
	}
}

public override void FailedToRegisterForRemoteNotifications (UIApplication application, NSError error)
{
	//Registering for remote notifications failed for some reason
	//This is usually due to your provisioning profiles not being properly set up in your project options,
	// not having the right mobileprovision included on your device,
	// or your app's product id not matching the mobileprovision you made
	Console.WriteLine ("Failed to Register for Remote Notifications: {0}", error.LocalizedDescription);
}

Now, in the first snippet (FinishedLaunching()), you might also have noticed the call to processNotification(), which handles an incoming notification. We’ll get to the implementation of that method further on, but what’s important here is that there are two scenarios to take care of. The first is receiving a notification while the app is running; this is handled by overriding the ReceivedRemoteNotification method in your AppDelegate, which iOS calls when a notification comes in while the app is active. The other scenario is when the app is not running: when iOS receives a notification and the user chooses to act upon it, the app will launch and iOS will pass the notification data to FinishedLaunching. This is why you’ll also want to handle notifications from FinishedLaunching().

When a notification comes in, you get an NSDictionary object containing the notification data. This is some JSON-encoded data containing the alert, the badge and the name of the sound file to be played (if applicable). An alert can be a simple text string, but it might also be a more complex structure if it is a localized message. So, you process notifications from both FinishedLaunching() and ReceivedRemoteNotification(). A good practice is to implement the handling in a separate method to which you delegate the work from both locations. So here is the implementation of processNotification():
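For reference, the payload delivered in that NSDictionary looks roughly like this (the `aps`, `alert`, `badge`, `sound` and `customKeyHere` keys are the ones the code below unpacks; the values are illustrative):

```json
{
  "aps": {
    "alert": "You have a new message",
    "badge": 3,
    "sound": "sound.caf"
  },
  "customKeyHere": "some custom value"
}
```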

public override void ReceivedRemoteNotification (UIApplication application, NSDictionary userInfo)
{
	//This method gets called whenever the app is already running and receives a push notification
	// YOU MUST HANDLE the notifications in this case.  Apple assumes if the app is running, it takes care of everything
	// this includes setting the badge, playing a sound, etc.
	processNotification (userInfo, false);
}

void processNotification (NSDictionary options, bool fromFinishedLaunching)
{
	//Check to see if the dictionary has the aps key.  This is the notification payload you would have sent
	if (null != options && options.ContainsKey (new NSString ("aps"))) {
		//Get the aps dictionary
		NSDictionary aps = options.ObjectForKey (new NSString ("aps")) as NSDictionary;

		string alert = string.Empty;
		string sound = string.Empty;
		int badge = -1;

		//Extract the alert text
		//NOTE: If you're using the simple alert by just specifying "  aps:{alert:"alert msg here"}  "
		//      this will work fine.  But if you're using a complex alert with Localization keys, etc., your "alert" object from the aps dictionary
		//      will be another NSDictionary... Basically the json gets dumped right into a NSDictionary, so keep that in mind
		if (aps.ContainsKey (new NSString ("alert")))
			alert = (aps [new NSString ("alert")] as NSString).ToString ();

		//Extract the sound string
		if (aps.ContainsKey (new NSString ("sound")))
			sound = (aps [new NSString ("sound")] as NSString).ToString ();

		//Extract the badge
		if (aps.ContainsKey (new NSString ("badge"))) {
			string badgeStr = (aps [new NSString ("badge")] as NSObject).ToString ();
			int.TryParse (badgeStr, out badge);
		}

		//If this came from the ReceivedRemoteNotification while the app was running,
		// we of course need to manually process things like the sound, badge, and alert.
		if (!fromFinishedLaunching) {
			//Manually set the badge in case this came from a remote notification sent while the app was open
			if (badge >= 0)
				UIApplication.SharedApplication.ApplicationIconBadgeNumber = badge;

			//Manually play the sound
			if (!string.IsNullOrEmpty (sound)) {
				//This assumes that in your json payload you sent the sound filename (like sound.caf)
				// and that you've included it in your project directory as a Content Build type.
				var soundObj = MonoTouch.AudioToolbox.SystemSound.FromFile (sound);
				soundObj.PlaySystemSound ();
			}

			//Manually show an alert
			if (!string.IsNullOrEmpty (alert)) {
				UIAlertView avAlert = new UIAlertView ("Notification", alert, null, "OK", null);
				avAlert.Show ();
			}
		}
	}

	//You can also get the custom key/value pairs you may have sent along in the json (outside of the aps payload)
	// This could be something like the ID of a new message that a user has seen, so you'd find the ID here and then skip displaying
	// the usual screen that shows up when the app is started, and go right to viewing the message, or something like that.
	if (null != options && options.ContainsKey (new NSString ("customKeyHere"))) {
		launchWithCustomKeyValue = (options [new NSString ("customKeyHere")] as NSString).ToString ();

		//You could do something with your customData that was passed in here
	}
}
Pretty nifty!

Sending notifications

In short, sending notifications to a user means that you send a notification payload to the APN using the device token to address the user. There are several ways to do that.

Christian Weyer has an excellent blog post on the implementation of push notifications using a third party for the distribution of notifications: Urban Airship. Urban Airship has a free service with which you can send up to 1,000,000 messages a month. If you want more service and features, there’s an attractive pricing model. Urban Airship abstracts the handling of multiple platforms, such as Apple, Microsoft, BlackBerry and Google, so you can service almost any type of device. PushIO offers a similar service.

If you only have to send notifications to Apple devices – since, let’s face it, iOS is the only OS that really matters… – there is a nice open source C# library that does the heavy lifting for you: APNS-Sharp. This library can also handle the APNs Feedback Service and receipt verification for In-App Purchases. Nice!
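A hedged sketch of what sending through APNS-Sharp looks like, from memory of its API at the time; treat the type and member names (NotificationService, Notification, Payload) as approximate, and the certificate filename and password as hypothetical placeholders:

```csharp
using JdSoft.Apple.Apns.Notifications;

// Connect to the sandbox (development) APN gateway with your push certificate
var service = new NotificationService (
	true,                      // sandbox gateway
	"apns-certificate.p12",    // push certificate file (hypothetical name)
	"p12-password",            // certificate password (hypothetical)
	1);                        // number of connections

// deviceToken is the hex string your app sent up from RegisteredForRemoteNotifications
var notification = new Notification (deviceToken);
notification.Payload.Alert.Body = "You have a new message";
notification.Payload.Badge = 1;
notification.Payload.Sound = "sound.caf";

service.QueueNotification (notification);
service.Close ();
```

The library builds the binary APN frame and manages the SSL connection for you, which is exactly the part you don’t want to hand-roll.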

Of course, Marcel, Willem and I are targeting Android and Windows Phone as well, so a service like Urban Airship or PushIO would be ideal.

UPDATE: Added the suggestions made by Slava. Thanks for that!