Readers who follow my blog closely might have noticed that it’s been a while since I’ve published something – seven months, to be exact. Unfortunately this isn’t going to change any time soon; here is why.

In 2011 I was challenged by my teachers back in college to play around with Kinect and share my experiences at a user group. I had no experience with Kinect or sensor programming and there was no official SDK yet. Since then it has been a hell of a ride – starting up this blog and writing about my experiences, talking at conferences and user groups, working with companies to get Kinect going in their organisations and receiving the Microsoft MVP award in 2014 & 2015.

Last year Microsoft successfully released the Kinect for Windows v2 and announced the HoloLens, bringing holographic experiences to your home. It felt like the perfect moment to close this chapter and diskinect.

Technology is evolving at an enormous pace, and it’s key to divide your focus to conquer; that means letting things go. Kinect for Windows started as a fun thing to do and evolved into an out-of-control hobby that I loved to do, but it was never my job.

During the day, I work at a company called Codit which is highly experienced in integration & cloud projects built on top of the Microsoft stack. Those who know Microsoft Azure will agree that it’s a wonderful platform that is growing and adapting at the speed of light. That velocity requires my full dedication and a shift of focus, because in the end my goal has always been to deliver the best possible solutions for the customer and to strive for perfection.

Thanks to everyone who gave me wonderful opportunities, the Kinect for Windows team for building such great product(s) and my fellow MVPs for all your work.

For those who are interested, I have a new blog that is not focussed on Kinect for Windows but is more general, with a flavour of cloud. If you’re still looking for a Kinect expert, here’s the place to be!

Thanks for reading,



Announcement – Full release of Kinect for Windows v2 SDK, publishing to the Windows Store & the new Kinect adapter

This week was a big one for the Kinect team, which announced several things –

  • Full release of the Kinect for Windows SDK 2.0 – This means that you can use the SDK to build interactive & easy-to-use applications for commercial usage without paying any runtime licenses!
  • Publish your Kinect apps to the Windows Store – Your Kinect for Windows-enabled apps can now also be pushed to the Windows Store and downloaded by everyone
  • Availability of the Kinect adapter for Windows – For only $45.99 you can now buy the Kinect adapter for Windows that enables you to use your Kinect for Xbox One on your computer. This means that if you already own a Kinect for Xbox One, you no longer need to buy a Kinect for Windows sensor!

Kinect adapter

Just want to start coding? Good news – you can also use NuGet packages; here is an overview of all the packages!

My code samples are getting updated to v2.0 as we speak. In the meantime, here are some interesting links –

  • Read Alex Kipman’s official statement here
  • Read the official Kinect for Windows statement here
  • Download the SDK here

I am looking forward to hearing what you will build with it!

Thanks for reading,



Analysing expressions with Face Tracking

In one of my previous posts, read it here, I walked you through building an application that tracks someone’s face and displays the expressions of the person.

Cool, but this has no real added value yet – how happy was the person? Was he/she interested? Did he/she wear glasses?

In this post we will create an application that performs face tracking for all tracked persons, up to a maximum of six. Afterwards the results will be analysed, so the statistics can tell you, for example, what percentage of the time he/she looked away or how likely it is he/she was wearing glasses.

As of this writing the Kinect for Windows SDK is still in Public Preview (v1409) and can be found here.

Sample Analytics

Here is an example of the analytics that we will generate.

<FaceAnalytics xmlns="http://schemas.datacontract.org/2004/07/Codit.Summit.Core.FaceTracking" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">

High-level overview

We will use the concept of trackers: one that keeps track of the expressions for each tracked body, called a FaceTracker. This FaceTracker acts as a conductor that manages a set of trackers, one for each face feature we want to track, called FaceFeatureTrackers. This means that the FaceTracker will receive all the FaceFrames and pass them to the FaceFeatureTrackers.

Once a certain person leaves the scene, the tracker will notify our application and provide us with a set of analytics.

Our application will be a dummy that just creates the trackers and waits until it is notified that analytics are available. These analytics will then be saved to disk.


I will briefly talk about the flow I used for this application without going in-depth.

The basics from my previous post still apply regarding project setup, build events, etc.


We will start by creating the constructor for the FaceTracker, where we pass in the body ID, the requested features to track and our KinectSensor. After that we create a timestamp marking when we started, so we can calculate the duration later.

As we’ve seen in my previous post we will need a FaceFrameSource & FaceFrameReader so we can start receiving frames.

After that we will assign our event handlers and start our FaceFeatureTrackers.

public FaceTracker(ulong bodyId, FaceFrameFeatures faceFeatures, KinectSensor kinect)
{
    // Pin-point start of tracking
    _startTracking = DateTime.Now;

    // Save variables
    _bodyId = bodyId;
    _faceFeatures = faceFeatures;
    // _kinectId = kinect.UniqueKinectId --> NotImplementedYet

    // Create a new source with body TrackingId
    _faceSource = new FaceFrameSource(kinect, bodyId, faceFeatures);

    // Create new reader
    _faceReader = _faceSource.OpenReader();

    Console.WriteLine(String.Format("Tracker for body #{0} started.", _bodyId));

    // Initialize FaceFeatureTrackers
    InitialiseFeatureTrackers();

    // Wire events
    _faceReader.FrameArrived += OnFaceFrameArrived;
    _faceSource.TrackingIdLost += OnTrackingLost;
}

We will create a new feature-tracker for each requested face feature by passing in the feature – or FaceProperty – and the current body ID. Each tracker is stored in a dictionary so I can easily query it by FaceProperty.

private void InitialiseFeatureTrackers()
{
    if (!_featureAnalytics.ContainsKey(FaceProperty.Engaged) && _faceFeatures.HasFlag(FaceFrameFeatures.FaceEngagement))
        _featureAnalytics.Add(FaceProperty.Engaged, new FaceFeatureTracker(FaceProperty.Engaged, _bodyId));

    if (!_featureAnalytics.ContainsKey(FaceProperty.WearingGlasses) && _faceFeatures.HasFlag(FaceFrameFeatures.Glasses))
        _featureAnalytics.Add(FaceProperty.WearingGlasses, new FaceFeatureTracker(FaceProperty.WearingGlasses, _bodyId));
}

The FaceFeatureTracker constructor will simply store the FaceProperty and the corresponding body ID.

public FaceFeatureTracker(FaceProperty faceProp, ulong bodyId)
{
    _faceProperty = faceProp;
    _bodyId = bodyId;
}

The OnFaceFrameArrived-event handler for the FaceTracker is simple as well: we retrieve the FaceFrameResult and update all the trackers.

private void OnFaceFrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    // Retrieve the face reference
    FaceFrameReference faceRef = e.FrameReference;

    if (faceRef == null) return;

    // Acquire the face frame
    using (FaceFrame faceFrame = faceRef.AcquireFrame())
    {
        if (faceFrame == null) return;

        // Retrieve the face frame result
        FaceFrameResult frameResult = faceFrame.FaceFrameResult;

        if (frameResult != null)
        {
            // Update trackers
            UpdateTrackers(frameResult);
        }
    }
}

This is done by simply looping over all our feature trackers and calling the Track-method.

We will pass in the DetectionResult for the FaceProperty that the tracker is responsible for.

private void UpdateTrackers(FaceFrameResult frameResult)
{
    // Loop all trackers
    foreach (FaceProperty feature in _featureAnalytics.Keys)
    {
        // Track the detection results
        _featureAnalytics[feature].Track(frameResult.FaceProperties[feature]);
    }
}

The feature tracker will then simply check whether this result has already occurred, add it if needed and increment the occurrence count.

public void Track(DetectionResult detectionResult)
{
    // Add new detection result if not present yet
    if (!_tracking.ContainsKey(detectionResult)) _tracking.Add(detectionResult, 0);

    // Increment the tracking value
    _tracking[detectionResult]++;
}

By doing this we are able to keep track of the occurrences per property and calculate percentages later on.
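To make the counting idea concrete, here is a small standalone sketch of the same pattern – a hypothetical OccurrenceCounter class that is not part of the sample code, with strings standing in for DetectionResult values:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class OccurrenceCounter
{
    // Occurrence count per observed result
    private readonly Dictionary<string, int> _tracking = new Dictionary<string, int>();

    public void Track(string result)
    {
        // Add new result if not present yet, then increment it
        if (!_tracking.ContainsKey(result)) _tracking.Add(result, 0);
        _tracking[result]++;
    }

    public double PercentageOf(string result)
    {
        // Percentage = occurrences of this result / total occurrences
        int total = _tracking.Values.Sum();
        if (total == 0 || !_tracking.ContainsKey(result)) return 0;
        return Math.Round((double)_tracking[result] / total * 100, 2);
    }
}
```

Tracking "Yes" three times and "No" once gives PercentageOf("Yes") == 75 – exactly the kind of statistic the FaceFeatureTracker produces per FaceProperty.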

Losing track of users

Users come and go; this means that eventually the FaceTracker will lose track of the user and we need to generate the analytics.

The FaceTracker does this by looping over all the FaceFeatureTrackers and letting them generate their FaceFeatureAnalytics. After that it creates a FaceAnalytics object by passing in the Kinect ID, body ID, list of feature analytics and the duration of the tracking.

Finally, the tracker raises an event that passes the analytics to those who are interested.

// Custom event to throw when tracking is lost & analytics are available
public event FaceAnalyticsAvailableHandler FaceAnalyticsAvailable;
public delegate void FaceAnalyticsAvailableHandler(FaceAnalytics fa);

/// <summary>
/// We lost track of the body and analytics are generated
/// </summary>
private void OnTrackingLost(object sender, TrackingIdLostEventArgs e)
{
    Console.WriteLine(String.Format("Tracker for body #{0} lost.", e.TrackingId));

    // Create analytics for each feature
    List<FaceFeatureAnalytics> featureAnalytics = _featureAnalytics.Values.Select(fft => FaceFeatureAnalytics.Analyse(fft, _bodyId)).ToList();

    // Notify listeners, if any
    if (FaceAnalyticsAvailable != null)
        FaceAnalyticsAvailable(new FaceAnalytics(_kinectId, _bodyId, featureAnalytics, (DateTime.Now - _startTracking)));
}

Here we will analyse the FaceFeatureTracker to generate the FaceFeatureAnalytics.
It will include all the details for this feature and indicate which value was tracked the most.

public static FaceFeatureAnalytics Analyse(FaceFeatureTracker tracker, ulong bodyId)
{
    if (tracker == null) throw new ArgumentException("Invalid feature tracker", "tracker");
    if (bodyId == 0) throw new ArgumentException("Invalid body Id", "bodyId");

    // Most frequent result so far
    DetectionResult frequentResult = DetectionResult.Unknown;

    // Create details list
    List<FaceFeatureDetailsAnalytics> featureDetails = new List<FaceFeatureDetailsAnalytics>();

    int totalOccurences = 0;
    int detectionOcc = -1;

    foreach (KeyValuePair<DetectionResult, int> pair in tracker.Results)
    {
        // Determine if this occurred more
        if (pair.Value > detectionOcc)
        {
            frequentResult = pair.Key;
            detectionOcc = pair.Value;
        }

        // Add to details list
        featureDetails.Add(new FaceFeatureDetailsAnalytics(pair.Key, pair.Value));

        // Increment total
        totalOccurences += pair.Value;
    }

    double perc = 0;

    featureDetails.ForEach(ffda => ffda.CalculatePercentage(totalOccurences));

    // Calculate percentage
    if (tracker.Results.ContainsKey(frequentResult))
        perc = Math.Round((((double)tracker.Results[frequentResult] / (double)totalOccurences) * 100), 2);

    return new FaceFeatureAnalytics(bodyId, tracker.FaceProperty, frequentResult, perc) { FaceFeatureDetails = featureDetails };
}

Here we will simply calculate the percentage for a feature detail.

public void CalculatePercentage(double totalOccurences)
{
    _percentage = Math.Round((((double)_counter / totalOccurences) * 100));
}

The FaceAnalytics constructor is pretty basic and will just store the values.

public FaceAnalytics(string kinectId, ulong bodyId, List<FaceFeatureAnalytics> featureAnalytics, TimeSpan trackDuration)
{
    _kinectId = kinectId;
    _bodyId = bodyId;
    _featureAnalytics = featureAnalytics;
    _trackDuration = trackDuration;
}

Back to the client

Now that we know how the trackers and analytics work we can take a look at our WPF client.

First we will define a list of FaceFrameFeatures that we are interested in; this defines what we will keep track of. Second we will create a dictionary that contains all our FaceTrackers, with the body ID as key.

When a new body frame comes in we just check if we already have a tracker for this body; if not, we create one and wait for the tracker to report back.

/// <summary>
/// Requested face features
/// </summary>
private const FaceFrameFeatures _faceFrameFeatures = FaceFrameFeatures.MouthMoved
                                                                | FaceFrameFeatures.MouthOpen
                                                                | FaceFrameFeatures.LeftEyeClosed
                                                                | FaceFrameFeatures.RightEyeClosed
                                                                | FaceFrameFeatures.LookingAway
                                                                | FaceFrameFeatures.Happy
                                                                | FaceFrameFeatures.FaceEngagement
                                                                | FaceFrameFeatures.Glasses;

/// <summary>
/// Holds all the face trackers
/// </summary>
private Dictionary<ulong, FaceTracker> _trackers = new Dictionary<ulong, FaceTracker>();

/// <summary>
/// Handle the new body frames
/// </summary>
private async void OnBodiesArrive(object sender, BodyFrameArrivedEventArgs e)
{
    // Retrieve the body reference
    BodyFrameReference bodyRef = e.FrameReference;

    if (bodyRef == null) return;

    // Acquire the body frame
    using (BodyFrame frame = bodyRef.AcquireFrame())
    {
        if (frame == null) return;

        // Create a new collection when required
        if (_bodies == null || _bodies.Count() != frame.BodyCount)
            _bodies = new Body[frame.BodyCount];

        // Refresh the bodies
        frame.GetAndRefreshBodyData(_bodies);

        // Get the amount of tracked users
        int trackedBodies = _bodies.Count(bdy => bdy.IsTracked);

        // Start tracking faces
        foreach (Body body in _bodies)
        {
            if (body.IsTracked)
            {
                // Create a new tracker if required
                if (!_trackers.ContainsKey(body.TrackingId))
                {
                    FaceTracker tracker = new FaceTracker(body.TrackingId, _faceFrameFeatures, _kinect);
                    tracker.FaceAnalyticsAvailable += OnFaceAnalyticsAvailable;

                    // Add to dictionary
                    _trackers.Add(body.TrackingId, tracker);
                }
            }
        }
    }
}

We will create a serializer that serializes our analytics into an XML string.
I used the DataContractSerializer because I find it more powerful than the XmlSerializer.

internal class GenericSerializer<T>
{
    public static string SerializeToString(T obj)
    {
        using (MemoryStream memStm = new MemoryStream())
        {
            DataContractSerializer serializer = new DataContractSerializer(obj.GetType());
            serializer.WriteObject(memStm, obj);

            memStm.Seek(0, SeekOrigin.Begin);

            using (var streamReader = new StreamReader(memStm))
            {
                return streamReader.ReadToEnd();
            }
        }
    }
}

Our last step is to handle the FaceAnalyticsAvailable-event when a tracker is ready, which is pretty straightforward – we close the tracker, remove it from the tracker dictionary, serialize the analytics to an XML string and save it on disk.

private void OnFaceAnalyticsAvailable(FaceAnalytics fa)
{
    // Close the tracker for this body & remove it from the dictionary
    _trackers[fa.BodyId].Close();
    _trackers.Remove(fa.BodyId);

    // Compose filename
    string fileName = string.Format("{0}/Face-Tracking-{1}.xml", _analyticsFolder, fa.BodyId);

    // Serialize to string
    string serializedAnalytics = GenericSerializer<FaceAnalytics>.SerializeToString(fa);

    // Convert to byte array
    byte[] rawAnalytics = Encoding.UTF8.GetBytes(serializedAnalytics);

    // Flush to disk
    using (FileStream fs = new FileStream(fileName, FileMode.Create))
    {
        fs.Write(rawAnalytics, 0, rawAnalytics.Length);
    }
}

The Close-method of our tracker will simply dispose the FaceFrameReader & FaceFrameSource.

public void Close()
{
    // Dispose the reader & source
    _faceReader.Dispose();
    _faceSource.Dispose();
}


Building this application was pretty easy to do – just create some trackers that are in charge of tracking all the occurrences. After that we used the data to mold it into analytics that illustrate what the expressions of that person were.

Although the concept is very simple, it can be very powerful – for instance, tracking the emotional responses of conference attendees when they see your prototype.

You can download my code here so you can try it yourself!

Thank you for reading,



Delivering Kinect On-Demand to a Store App with Azure Media Services & Notification Hubs – Tutorial

In my previous post I introduced you to a scenario where Kinect & the cloud are combined, to illustrate that they are a good match. I also introduced you to Microsoft Azure Storage, Media Services & Notification Hubs, which we will use to develop this end-to-end scenario!

Reminder – This tutorial requires an active Microsoft Azure subscription and a trial is available! More info in my previous post.


I developed a solution template that is based on the Kinect for Windows Public Preview SDK in case you want to follow along & test it yourself.

This template includes a basic Windows Store app and a WPF client that already displays the Kinect camera; if you want to know more about displaying the camera you can read this post.

You can download the template here.

Building the Kinect recorder

Record camera frames

We will start by creating a variable that flags a recording, a counter that keeps track of the sequence number and a unique ID per recording. Note that the ID could be of type Guid as well, depending on your preferences.

/// <summary>
/// Current count of the image
/// </summary>
private int _sequenceNr = 1;

/// <summary>
/// Indication whether we are recording
/// </summary>
private bool _isRecording = false;

/// <summary>
/// Unique recording ID
/// </summary>
private string _recordingID = string.Empty;

When the user wants to start recording we will validate the temporary folder, reset our recording variables, update the status and change the UI.

/// <summary>
/// Start recording
/// </summary>
private void StartRecording()
{
    // Validate temporary folder
    if (ValidateTemporaryFolder() == false) return;

    // Setup recording
    _sequenceNr = 1;
    _isRecording = true;
    _recordingID = Guid.NewGuid().ToString();

    // Update status
    Status.Content = "Recording...";

    // Toggle controls
    VideoCaption.IsReadOnly = true;
    TemporaryFolder.IsReadOnly = true;
    StartRecordingButton.IsEnabled = !_isRecording;
    StopRecordingButton.IsEnabled = _isRecording;
}

Once we are recording we will need to save the images locally; this means that we will need to change our OnColorFrameArrived-method.

Right after we’ve updated our WriteableBitmap we will check if the recording flag is on.
If so, we copy the _colorPixels array, save the image asynchronously in the temporary folder and increment the sequence number.

Important to know is that the image name will contain the recording ID & sequence number for this frame.

// Save image when recording
if (_isRecording)
{
    // Create a new byte-array
    byte[] imageData = new byte[_colorPixels.Length];

    // Copy the original array into the new one
    Array.Copy(_colorPixels, imageData, _colorPixels.Length);

    // Save the image in the local folder
    await ImageProcessor.SaveJpegAsync(imageData, frameDesc.Width, frameDesc.Height, frameDesc.Width * _bytePerPixel, TemporaryFolder.Text, string.Format("{0}_{1:000000}", _recordingID, _sequenceNr));

    // Increment the sequence number
    _sequenceNr++;
}

The ImageProcessor is a helper class that does all the saving for us – we just pass in the data with its width, height & stride, along with the requested location & filename. It will then save the image as a JPEG using the JpegBitmapEncoder in an asynchronous way.

public class ImageProcessor
{
    /// <summary>
    /// Save a buffer as a JPEG
    /// </summary>
    /// <param name="data">Image data</param>
    /// <param name="width">Width of the image</param>
    /// <param name="height">Height of the image</param>
    /// <param name="stride">Stride of the image</param>
    /// <param name="folder">Output folder</param>
    /// <param name="filename">Filename</param>
    public static async Task SaveJpegAsync(byte[] data, int width, int height, int stride, string folder, string filename)
    {
        Task saveJpegTask = Task.Run(() =>
        {
            if (data != null)
            {
                // Create a new bitmap
                WriteableBitmap bmp = new WriteableBitmap(width, height, 96.0, 96.0, PixelFormats.Bgr32, null);

                // write pixels to bitmap
                bmp.WritePixels(new Int32Rect(0, 0, width, height), data, stride, 0);

                // create jpg encoder from bitmap
                JpegBitmapEncoder enc = new JpegBitmapEncoder();

                // create frame from the writable bitmap and add to encoder
                enc.Frames.Add(BitmapFrame.Create(bmp));

                // Create whole path
                string path = Path.Combine(folder, filename + ".jpg");

                try
                {
                    // write the new file to disk
                    using (FileStream fs = new FileStream(path, FileMode.Create))
                    {
                        enc.Save(fs);
                    }
                }
                catch (IOException ex)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine("Error! Exception - " + ex.Message);
                }
            }
        });

        await saveJpegTask;
    }
}

Once the recording is stopped we will clear the recording flag, change the UI and start processing the recorded frames: we now render the local images into a video.

private async Task StopRecording()
{
    // Stop recording
    _isRecording = false;

    // Disable stop controls
    StopRecordingButton.IsEnabled = false;

    // Process the recorded frames
    await ProcessFrames();

    // Reset caption & Enable start
    VideoCaption.Text = string.Empty;
    VideoCaption.IsReadOnly = false;
    TemporaryFolder.IsReadOnly = false;
    StartRecordingButton.IsEnabled = true;
}

Locally rendering the Kinect video

We will load all the local images from the temporary folder for that recording ID and render them as a video. This is done in a VideoProcessor, where we pass in the FPS, width & height of the images, the quality, the path to the temporary folder and our recording ID.

As you can see I am forcing it to use 15 FPS, since the FPS from the camera can vary depending on the light; in order to have a constant frame rate I force 15, because we will always capture 15 frames or more per second.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";

    // Render video locally
    string videoPath = await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);
}

Before we can start rendering we need to download the SharpAVI library, which will render the video for us.

SharpAVI allows us to use a configured AviWriter and an IAviVideoStream with a MotionJpegVideoEncoderWpf using the specified values. After that we loop over all the images in our temporary folder for that recording ID and write their pixels to the stream, which writes the AVI-video to the temporary folder.

public class VideoProcessor
{
    /// <summary>
    /// Render a video based on JPEG-images
    /// </summary>
    /// <param name="fps">Requested frames-per-second</param>
    /// <param name="width">Width of the images</param>
    /// <param name="height">Height of the images</param>
    /// <param name="quality">Requested quality</param>
    /// <param name="path">Path to the folder containing frame-images</param>
    /// <param name="renderGuid">Unique GUID for this frame-batch</param>
    /// <returns>Path to the video</returns>
    public static async Task<string> RenderVideoAsync(int fps, int width, int height, int quality, string path, string renderGuid)
    {
        if (quality < 1 || quality > 100) throw new ArgumentException("Quality can only be between 1 and 100.");

        Task<string> renderT = Task.Run(() =>
        {
            // Compose output path
            string outputPath = string.Format("{0}/{1}.avi", path, renderGuid);

            // Create a new writer with the requested FPS
            AviWriter writer = new AviWriter(outputPath)
            {
                FramesPerSecond = fps
            };

            // Create a new stream to process it
            IAviVideoStream stream = writer.AddVideoStream().WithEncoder(new MotionJpegVideoEncoderWpf(width, height, quality));
            stream.Width = width;
            stream.Height = height;

            // Create a buffer for the pixels of one frame
            byte[] frameData = new byte[stream.Width * stream.Height * 4];

            // Retrieve all images for this batch
            string[] images = Directory.GetFiles(path, string.Format("{0}*.jpg", renderGuid));

            // Process image per image
            foreach (string file in images)
            {
                // Decode the bitmap
                JpegBitmapDecoder decoder = new JpegBitmapDecoder(new Uri(file), BitmapCreateOptions.None, BitmapCacheOption.Default);

                // Get bitmap source
                BitmapSource source = decoder.Frames[0];

                // Copy pixels
                source.CopyPixels(frameData, width * 4, 0);

                // Write it to the stream
                stream.WriteFrame(true, frameData, 0, frameData.Length);
            }

            // Close writer
            writer.Close();

            return outputPath;
        });

        return await renderT;
    }
}

Provisioning a Microsoft Azure Media Service

It is time to provision ourselves a Media Service on the Microsoft Azure platform!

Browse to the management portal and select New > App Services > Media Service > Quick Create. Here you can assign a name to your media service, the requested region where it will be running and create or link a storage account.
Creating media service
Once our service is provisioned, click on Manage keys; here you can find the authentication keys we will use. Don’t share these with anyone!
Copying the keys
Copy & save the Account Name & Primary key in the App.config of your WPF project; we will use these to authenticate with the service.

    <add key="MediaAccount" value="_YOUR-SERVICE-NAME_" />
    <add key="MediaKey" value="_YOUR-PRIMARY-KEY_" />

Be careful with regenerating keys, it could break other applications relying on the service.

Encoding and packaging to Smooth Streaming

Now that we have our local video we will upload, encode and package it with Microsoft Azure Media Services.

To do so we will first add two new NuGet packages – Windows Azure Media Services .NET SDK & Windows Azure Media Services .NET SDK Extensions.

Next up we will create a MediaServicesAgent that will handle all our Media Services interaction. For now we will start with a CTOR that accepts the Media Account Name & Key so we can create a CloudMediaContext.

public class MediaServicesAgent
{
    /// <summary>
    /// Media services credentials
    /// </summary>
    private MediaServicesCredentials _mediaCredentials;

    /// <summary>
    /// Media Context
    /// </summary>
    private CloudMediaContext _mediaContext;

    /// <summary>
    /// Default CTOR
    /// </summary>
    /// <param name="mediaAccount">Media services account name</param>
    /// <param name="mediaKey">Media services account key</param>
    public MediaServicesAgent(string mediaAccount, string mediaKey)
    {
        _mediaCredentials = new MediaServicesCredentials(mediaAccount, mediaKey);
        _mediaContext = new CloudMediaContext(_mediaCredentials);
    }
}

Now that we have our agent we will extend the ProcessFrames-method: save the timestamp when the video was rendered, update the status and call a new HostVideoInAzure-method that contains all the Media Services logic and requires the local path of the video.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";

    // Render video locally
    string videoPath = await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);

    // Save recording timestamp
    DateTime recordedStamp = DateTime.Now;

    Status.Content = "Done rendering video.";

    // Host video in Microsoft Azure
    string streamUrl = await HostVideoInAzure(videoPath);
}

Next we will create an instance of our MediaServicesAgent based on the Media Services keys in our configuration file; this requires a reference to System.Configuration.

Second we will create a basic version of HostVideoInAzure and start by calling the UploadAsset-method, passing in the local path and a method that will display the progress of the upload.

/// <summary>
/// Media Services agent (Microsoft Azure Media Services)
/// </summary>
private MediaServicesAgent _mediaAgent = new MediaServicesAgent(ConfigurationManager.AppSettings.Get("MediaAccount"), ConfigurationManager.AppSettings.Get("MediaKey"));

/// <summary>
/// Upload the rendered video to the cloud, encode to MP4 and deliver as Smooth Stream
/// </summary>
/// <param name="videoPath">Path to the local video</param>
private async Task<string> HostVideoInAzure(string videoPath)
{
    Status.Content = "Starting video upload...";

    // Upload the video as an Asset
    IAsset rawAsset = await _mediaAgent.UploadAsset(videoPath, UploadAssetHandler);

    // ...
}

/// <summary>
/// Displays the progress of the upload
/// </summary>
private void UploadAssetHandler(object sender, UploadProgressChangedEventArgs e)
{
    Dispatcher.Invoke(() => Status.Content = string.Format("Uploading Asset - {0}%", Math.Round(e.Progress, 0)));
}

This method will upload an unencrypted IAsset – hence the AssetCreationOptions.None – that contains one IAssetFile, which will be our video, and return it when we are done so we can use it later on. We also assign the upload handler so we can update our UI.

The snippet in the comment can be used as well, thanks to the Media Services Extensions NuGet package.

public async Task<IAsset> UploadAsset(string filePath, EventHandler<UploadProgressChangedEventArgs> uploadHandler = null)
{
    Task<IAsset> uploadTask = Task.Run(() =>
    {
        // Retrieve filename
        string assetName = Path.GetFileName(filePath);

        // Create a new asset in the context
        IAsset asset = _mediaContext.Assets.Create(assetName, AssetCreationOptions.None);

        // Create a new asset file
        IAssetFile file = asset.AssetFiles.Create(assetName);

        // Hook-up the event if handler is specified
        if (uploadHandler != null)
            file.UploadProgressChanged += uploadHandler;

        // Upload the video
        file.Upload(filePath);

        return asset;
    });

    return await uploadTask;

    // Snippet when you want to use the Microsoft Azure Media Services extensions
    //return await _mediaContext.Assets.CreateFromFileAsync(filePath, AssetCreationOptions.None, cancellationToken);
}

Next we will create a Media Services Job that encodes our asset into 'H264 Adaptive Bitrate MP4 Set SD 16x9' and packages it into a Smooth Stream. We will do this in a new EncodeAndPackage-method that requires a job name, our raw asset and a handler to visualize the progress.

Let’s start by creating a new job to which we will assign two tasks – one for the encoding & one for the packaging.

For our encoding task we will retrieve the Windows Azure Media Encoder and create a new task based on this encoder and our requested preset, and give it a decent name.
Next we will add our raw asset as an input asset and create a new unencrypted output asset suffixed with "_MP4".

Our packaging task is done pretty much the same way – we retrieve the Windows Azure Media Packager, read the configuration of the stream from an XML file and create a new task based on this configuration.
The last thing we need to do for the packaging is add an input asset for this task – which is the output of our first task – and create a new output asset.

With our tasks set we are ready to link our handler, submit the job and wait untill it has been processed.

public async Task<IJob> EncodeAndPackage(string jobName, IAsset rawAsset, EventHandler<JobStateChangedEventArgs> jobHandler = null)
{
    Task<IJob> t = Task.Run(() =>
    {
        // Create a new job
        IJob job = _mediaContext.Jobs.Create(jobName);

        /* Task I - Encode into MP4
           Retrieve the encoder */
        IMediaProcessor latestWameMediaProcessor = (from p in _mediaContext.MediaProcessors
                                                    where p.Name == "Windows Azure Media Encoder"
                                                    select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();

        // Select the requested preset (same as in the portal)
        string encodingPreset = "H264 Adaptive Bitrate MP4 Set SD 16x9";

        // Add a new task to the job for the encoding
        ITask encodeTask = job.Tasks.AddNew("Encoding", latestWameMediaProcessor, encodingPreset, TaskOptions.None);

        // Add our rendered video as input
        encodeTask.InputAssets.Add(rawAsset);

        // Add a new asset as output
        encodeTask.OutputAssets.AddNew(rawAsset.Name + "_MP4", AssetCreationOptions.None);

        /* Task II - Package into Smooth Streaming
           Retrieve the packager */
        IMediaProcessor latestPackagerMediaProcessor = (from p in _mediaContext.MediaProcessors
                                                        where p.Name == "Windows Azure Media Packager"
                                                        select p).ToList().OrderBy(wamp => new Version(wamp.Version)).LastOrDefault();

        // Read the config from XML
        string SSConfig = File.ReadAllText(Path.GetFullPath(@"D:\Source Control\Kinect for Windows\Second Generation Kinect\Kinect - VOD Media Services\K4W.KinectVOD\K4W.KinectVOD.Client.WPF\Assets\Media_Services_MP4_to_Smooth_Streams.xml"));

        // Add a new packaging task
        ITask packagingSSTask = job.Tasks.AddNew("Packing into Smooth Streaming", latestPackagerMediaProcessor, SSConfig, TaskOptions.None);

        // Add the output of the encoding task as input
        packagingSSTask.InputAssets.Add(encodeTask.OutputAssets[0]);

        // Create a new output asset
        packagingSSTask.OutputAssets.AddNew("Result_SS_" + rawAsset.Name, AssetCreationOptions.None);

        // Hook-up the handler if required
        if (jobHandler != null)
            job.StateChanged += jobHandler;

        // Submit the job
        job.Submit();

        // Wait until the job has been processed
        job.GetExecutionProgressTask(CancellationToken.None).Wait();

        return job;
    });

    return await t;
}

As with the uploading, we are displaying the progress of the job in our UI.

private void JobStateChangedHandler(object sender, JobStateChangedEventArgs e)
{
    Dispatcher.Invoke(() => Status.Content = string.Format("Job is currently {0}", e.CurrentState));
}

With everything set we can add this line to our HostVideoInAzure-method to start the job and visualize the progress.

IJob encodingJob = await _mediaAgent.EncodeAndPackage(string.Format("Encoding '{0}' into Mp4 & package to SS", rawAsset.Name), rawAsset, JobStateChangedHandler);

Our last step is to create a location where the stream can be consumed.

We will create a new method, CreateNewSsLocator, that creates a new locator for our packaged asset; we will assign it an IAccessPolicy & a locator type and return the URI to the stream. The IAccessPolicy defines the requested permissions for that locator.

The locator type, on the other hand, defines the kind of access to the asset. You can create a Shared Access Signature URL, which works at the storage level and is mostly used for downloading the video or progressive download. Since we are using Smooth Streaming, an On-Demand locator is required, which creates an Origin streaming endpoint.

More information about those two types here.

public Uri CreateNewSsLocator(IAsset packagedAsset, LocatorType locatorType, AccessPermissions accessPermissions, TimeSpan duration)
{
    if (packagedAsset == null) throw new ArgumentException("Invalid encoded asset");

    // Create a new access policy for the video
    IAccessPolicy policy = _mediaContext.AccessPolicies.Create("Streaming policy", duration, accessPermissions);

    // Create a new locator to that resource with our new policy
    _mediaContext.Locators.CreateLocator(locatorType, packagedAsset, policy);

    // Return the URI by using the extensions package
    return packagedAsset.GetSmoothStreamingUri();
}

This is how your HostVideoInAzure-method should look after calling our CreateNewSsLocator-method.

private async Task<string> HostVideoInAzure(string videoPath)
{
    Status.Content = "Starting video upload...";

    // Upload the video as an Asset
    IAsset rawAsset = await _mediaAgent.UploadAsset(videoPath, UploadAssetHandler);

    Status.Content = "Starting encoding & packaging...";

    // Encode & package in Media Services
    IJob encodingJob = await _mediaAgent.EncodeAndPackage(string.Format("Encoding '{0}' into Mp4 & package to SS", rawAsset.Name), rawAsset, JobStateChangedHandler);

    Status.Content = "Creating locator endpoint...";

    // Create a new Smooth Streaming locator on the packaged output asset
    Uri ssUri = _mediaAgent.CreateNewSsLocator(encodingJob.OutputMediaAssets[1], LocatorType.OnDemandOrigin, AccessPermissions.Read, TimeSpan.FromDays(7));

    return ssUri.ToString();
}

Testing the stream

You can now test your Smooth Stream on this website.

Provisioning a Microsoft Notification Hub

Time to provision a notification hub in a Service Bus namespace; this will allow us to send notifications!

Log in to the portal and click New > App Services > Service Bus > Notification Hub > Quick Create and give the hub a self-describing name. You’ll also need to specify the region where the hub will be provisioned, along with a name for a new Service Bus namespace – or use an existing one!
Creating notification hub
Next we will create new Shared Access Policies, or SAS policies, that we will use for authentication. Instead of using one policy with full control we will create two – a notifier policy that our Kinect recorder will use to send messages, and a listener policy that our clients will use to listen for notifications.

This allows us to restrict the level of access based on the requirements so the listeners can’t abuse their access to spam notifications to others.
Creating SAS policies
Navigate to the Service Bus-overview page, select your namespace and click Connection Information.
Generating keys
Copy the connection string for the notifier policy and store it in the App.config of your WPF application. Also store the name of your notification hub so we know which hub we need to send to.

<appSettings>
    <add key="MediaAccount" value="_YOUR-SERVICE-NAME_" />
    <add key="MediaKey" value="_YOUR-PRIMARY-KEY_" />
    <add key="NotificationHub" value="kinect-VOD-tutorial" />
</appSettings>
<connectionStrings>
    <add name="servicebus-ns" connectionString="Endpoint=sb://_YOUR_NAMESPACE_.servicebus.windows.net/;SharedAccessKeyName=SendPolicy;SharedAccessKey=_SHARED-KEY_" />
</connectionStrings>

Notifying the clients

Now that we have our Smooth Stream ready in the cloud we still need to notify our viewers that there is a new video available. We will use Microsoft Azure Notification Hubs to send a push notification to all our clients.

We will create a NotificationHubAgent that does all the work for us. This agent requires the name of the notification hub and the connection string, and creates a NotificationHubClient that we will use to send notifications.

The agent exposes a SendTemplateNotificationAsync-method that sends a set of properties to the notification hub. The advantage of a template notification is that it is platform-independent; the receiver is responsible for the appearance of the notification.
The client applications can then use our set of properties in their notification.

public class NotificationHubAgent
{
    /// <summary>
    /// Notification hub client to a certain hub
    /// </summary>
    private NotificationHubClient _hubClient;

    /// <summary>
    /// Default CTOR
    /// </summary>
    /// <param name="hubName">Name of the requested notification hub</param>
    /// <param name="connectionString">Connection string to the Service Bus namespace</param>
    public NotificationHubAgent(string hubName, string connectionString)
    {
        if (string.IsNullOrEmpty(hubName)) throw new ArgumentException("Invalid hub name.");
        if (string.IsNullOrEmpty(connectionString)) throw new ArgumentException("Invalid Service Bus connection string.");

        // Create a new hub client
        _hubClient = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);
    }

    /// <summary>
    /// Send a template notification (platform independent)
    /// </summary>
    /// <param name="properties">Set of properties</param>
    public async Task SendTemplateNotificationAsync(Dictionary<string, string> properties)
    {
        if (properties == null) throw new ArgumentException("Properties cannot be null.");

        // Send
        await _hubClient.SendTemplateNotificationAsync(properties);
    }
}

We will use this agent to send out the URL to our clients along with some metadata.

First we create a new instance of the agent based on our App.config. Thereafter we create a new SendNotification-method that assembles a list of properties, along with the stream URL, that will be sent out.

Note that I create a new RecordingData object, convert it to JSON and add it to the property list.
This allows me to push some additional metadata that we will use later on.

/// <summary>
/// Notification Agent (Microsoft Azure Notification Hubs)
/// </summary>
private NotificationHubAgent _notificationAgent = new NotificationHubAgent(ConfigurationManager.AppSettings.Get("NotificationHub"), ConfigurationManager.ConnectionStrings["servicebus-ns"].ConnectionString);

/// <summary>
/// Send the streaming URL & caption to the clients
/// </summary>
/// <param name="streamUrl">Url of the stream</param>
/// <param name="stamp">Timestamp of the recording</param>
private async Task SendNotification(string streamUrl, DateTime stamp)
{
    // Create metadata for the client (will be used in the launch-property of the tile)
    RecordingData recordingData = new RecordingData(VideoCaption.Text, streamUrl, _recordingID, stamp);

    // Assign properties for the notification
    Dictionary<string, string> properties = new Dictionary<string, string>()
    {
        {"Caption", recordingData.Caption},
        {"SmoothStreamUrl", recordingData.SmoothStreamUrl},
        {"RecordingId", recordingData.RecordingId},
        {"RecordingStamp", recordingData.RecordingStamp.ToString()},
        {"RecordingData", recordingData.SerializeToJson()}
    };

    // Send the notification
    await _notificationAgent.SendTemplateNotificationAsync(properties);
}

Last step in our WPF client is to clean-up our temporary folder for our current recording.

private async Task RemoveLocalAssets()
{
    string tempFolder = TemporaryFolder.Text;
    Task cleanupT = Task.Run(() =>
    {
        // Delete every saved frame & the rendered video for this recording
        foreach (string file in Directory.GetFiles(tempFolder, string.Format("{0}*", _recordingID)))
            File.Delete(file);
    });

    await cleanupT;
}

This is how your ProcessFrames-method should look in your WPF application.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";

    // Render the video locally
    string videoPath =
        await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);

    // Save the recording timestamp
    DateTime recordedStamp = DateTime.Now;

    Status.Content = "Done rendering video.";

    // Host the video in Microsoft Azure
    string streamUrl = await HostVideoInAzure(videoPath);

    Status.Content = "Video is available on-demand.";

    // Send notifications to the clients
    await SendNotification(streamUrl, recordedStamp);

    // Remove saved images & local video afterwards
    await RemoveLocalAssets();
}

Consuming the Smooth Stream with a Windows 8.1 Store App

Before we can start receiving notifications we need to register our application and link it to our notification hub.

In the Notification Hub documentation they explain how you can associate your app with the notification hub and what changes you need to make to your app.manifest to set up your application to receive notifications!

Receiving & handling push notification

Once we have linked our Store application with our notification hub we can receive & process notifications.

Let’s start by creating a PushNotificationsHelper that will expose a RegisterTemplateNotificationAsync-method.
This method will register a notification template on a NotificationHub with the given name, our Xml notification template and a new PushNotificationChannel.

The XML template is based on a ToastImageAndText02 toast where we fill in a decent title, caption and image.
Note that we are adding a launch-attribute to the DocumentElement that contains our metadata; this is the metadata you receive when a viewer taps the notification.

public class PushNotificationsHelper
{
    /// <summary>
    /// Register a template notification
    /// </summary>
    /// <param name="hubName">Name of the sending hub</param>
    /// <param name="connectionString">Connection string to the Service Bus namespace</param>
    /// <param name="templateName">Name of the template</param>
    /// <param name="metadata">Notification property holding the metadata</param>
    /// <param name="header">Header text of the toast</param>
    /// <param name="footer">Footer text of the toast</param>
    /// <param name="image">Url to the image</param>
    public static async Task<TemplateRegistration> RegisterTemplateNotificationAsync(string hubName, string connectionString, string templateName, string metadata, string header, string footer, string image)
    {
        // Create a new push notification channel
        PushNotificationChannel channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

        // Create a new notification hub
        NotificationHub hub = new NotificationHub(hubName, connectionString);

        // Generate the template for the toast
        XmlDocument toastTemplate = GenerateXmlTemplate(metadata, header, footer, image);

        // Register the template
        return await hub.RegisterTemplateAsync(channel.Uri, toastTemplate, templateName);
    }

    /// <summary>
    /// Generate the XML template for the 'ToastImageAndText02' notification
    /// </summary>
    /// <param name="metadata">Notification property holding the metadata</param>
    /// <param name="header">Header text of the toast</param>
    /// <param name="footer">Footer text of the toast</param>
    /// <param name="image">Url to the image</param>
    private static XmlDocument GenerateXmlTemplate(string metadata, string header, string footer, string image)
    {
        var template = ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastImageAndText02);

        // Add our metadata to the launch-attribute
        template.DocumentElement.SetAttribute("launch", metadata);

        var titleNode = template.SelectSingleNode("//text[@id='1']") as XmlElement;
        if (titleNode != null)
            titleNode.InnerText = header;

        var captionNode = template.SelectSingleNode("//text[@id='2']") as XmlElement;
        if (captionNode != null)
            captionNode.InnerText = footer;

        var imgNode = template.SelectSingleNode("//image[@id='1']") as XmlElement;
        if (imgNode != null)
        {
            imgNode.SetAttribute("src", image);
            imgNode.SetAttribute("alt", image);
        }

        return template;
    }
}

In our scenario we are broadcasting to all clients, but in some scenarios you’d only want to notify a specific group. In that case the clients register as above but additionally specify the set of properties they want to be notified about.

Let’s say we expand our scenario so that recorders can add tags, e.g. ‘Diving’, to their video; viewers can then select the categories they are interested in. The Store app’s notification registration will then specify the set of interesting tags.

It is important to know that each registration returns a temporary TemplateRegistration.
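As a sketch, the tag-based variant could look as follows. Note that the ‘Diving’ tag, the template name, the listener connection string and the tags-overload of RegisterTemplateAsync are assumptions for illustration – the broadcast version below remains the one we actually use.

```csharp
// Hypothetical sketch - register this client for the 'Diving' tag only
PushNotificationChannel channel =
    await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
NotificationHub hub = new NotificationHub("kinect-VOD-tutorial", "_YOUR-LISTEN-CONNECTIONSTRING_");

// Same toast template as before, but the registration is scoped to a set of tags
XmlDocument toastTemplate =
    ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastImageAndText02);
await hub.RegisterTemplateAsync(channel.Uri, toastTemplate, "NewRecordingTemplate",
                                new[] { "Diving" });

// The recorder then targets a tag expression instead of broadcasting to everyone
await _hubClient.SendTemplateNotificationAsync(properties, "Diving");
```

Only the clients registered with a matching tag will then receive the notification.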

We will now create a new RegisterPushNotifications-method that will call our helper class.
In this method we will save the expiration date of the registration and renew it when required.

public async Task RegisterPushNotifications(string hubName, string connectionString, string templateName, string metadata, string header, string footer, string image)
{
    bool registerTemplate = false;

    // Retrieve local settings
    ApplicationDataContainer localSettings = ApplicationData.Current.LocalSettings;

    // Retrieve the saved expiration date for this template
    object registerExpiration = localSettings.Values[templateName.Replace(" ", "-")];

    if (registerExpiration != null)
    {
        // Try to parse it to a datetime
        DateTime expirationDateTime;
        DateTime.TryParse(registerExpiration.ToString(), out expirationDateTime);

        // Re-register when expired
        if (expirationDateTime <= DateTime.Now)
            registerTemplate = true;
    }
    else
    {
        // Flag as to-register when no value was found
        registerTemplate = true;
    }

    // Create a new registration when required
    if (registerTemplate)
    {
        TemplateRegistration tempRegistration = await PushNotificationsHelper.RegisterTemplateNotificationAsync(hubName, connectionString, templateName, metadata, header, footer, image);

        // Save the new expiration date
        localSettings.Values[templateName.Replace(" ", "-")] = tempRegistration.ExpiresAt.ToString();
    }
}

We will expand the OnLaunched-method by checking whether there are arguments available.

When the user taps a notification, our metadata will be available in the Arguments.
This means that when there are no arguments available we need to check whether our registration exists or has expired & navigate to the MainPage.
If there are arguments available we will deserialize the metadata & pass it along when navigating to our VideoPage.

// Switching decision between pages
if (string.IsNullOrEmpty(e.Arguments))
{
    // Register for notifications
    // (the listener connection string, template name & image URL are placeholders)
    await RegisterPushNotifications("kinect-VOD-tutorial",
            "_YOUR-LISTEN-CONNECTIONSTRING_",
            "NewRecordingTemplate",
            string.Format("$({0})", "RecordingData"),
            "New recorded video",
            string.Format("$({0})", "Caption"),
            "_YOUR-IMAGE-URL_");

    // Navigate to the overview page
    rootFrame.Navigate(typeof(MainPage));
}
else
{
    // Deserialize to RecordingData
    RecordingData data = e.Arguments.DeserializeFromJson<RecordingData>();

    // Navigate to the video page
    rootFrame.Navigate(typeof(VideoPage), data);
}

Watching the video

Now that we have our metadata we can watch the stream in our app, but unfortunately Smooth Streaming isn’t supported out-of-the-box in Windows 8.1 Store Apps.

Luckily there are frameworks available to do so – the Microsoft Player Framework & its Adaptive plugin will help us, so don’t forget to add the NuGet packages.
Next to that you need to install the Smooth Streaming Client SDK for Windows 8.1, or add it in Visual Studio under Tools > Extensions and Updates.

We will retrieve the metadata in our Parameter and pass it to our defaultViewModel so we can bind it to our video control.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    defaultViewModel["RecordingData"] = e.Parameter as RecordingData;
}

In the XAML of VideoPage we will add two new namespace references – one to Microsoft.PlayerFramework & one to Microsoft.PlayerFramework.Adaptive.
These references allow us to add a MediaPlayer with the AdaptivePlugin, which is required to play Smooth Streams. We will also bind the Smooth Stream URL from our metadata to the media player.

As you can see I wrapped the MediaPlayer in a Viewbox. This allows us to scale on the width/height ratio specified on the MediaPlayer, to optimize for different screen resolutions. (The fragment below is reconstructed; the namespace prefixes & the fixed width/height are assumptions.)

<Viewbox Margin="0,0,140,40" Grid.Column="1" Grid.Row="1">
    <playerFx:MediaPlayer Width="1920" Height="1080"
                          Source="{Binding RecordingData.SmoothStreamUrl}">
        <playerFx:MediaPlayer.Plugins>
            <playerFxPlugin:AdaptivePlugin />
        </playerFx:MediaPlayer.Plugins>
    </playerFx:MediaPlayer>
</Viewbox>

Locally saving the video streams

To improve the Store application we will save the metadata for all videos in local storage.
This allows us to display the complete list of videos when we start the application.

To do so, I created a generic LocalStorageHelper that saves & loads our data in local storage.

public class LocalStorageHelper
{
    /// <summary>
    /// Load a local file and retrieve the content
    /// </summary>
    /// <typeparam name="T">Requested result type</typeparam>
    /// <param name="fileName">Local filename</param>
    /// <returns>Local content</returns>
    public static async Task<T> LoadFileContentAsync<T>(string fileName)
    {
        try
        {
            StorageFile localFile = await ApplicationData.Current.LocalFolder.GetFileAsync(fileName);
            return (localFile != null) ? (await FileIO.ReadTextAsync(localFile)).DeserializeFromJson<T>() : default(T);
        }
        catch (FileNotFoundException)
        {
            return default(T);
        }
    }

    /// <summary>
    /// Save content to a local file
    /// </summary>
    /// <typeparam name="T">Content Type</typeparam>
    /// <param name="fileName">Requested filename</param>
    /// <param name="content">Content to save</param>
    public static async Task SaveFileContentAsync<T>(string fileName, T content)
    {
        StorageFile localFile = await ApplicationData.Current.LocalFolder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(localFile, content.SerializeToJson());
    }
}

We will expand our OnLaunched-handler to load the previous videos, add the new one and save the list back to local storage before navigating to the video page.

// Deserialize to RecordingData
RecordingData data = e.Arguments.DeserializeFromJson<RecordingData>();

// Load recording history on first run
if (_recordingHistory == null)
    _recordingHistory = await LocalStorageHelper.LoadFileContentAsync<ObservableCollection<RecordingData>>(RecordingFileName)
                        ?? new ObservableCollection<RecordingData>();

// Add to the list
_recordingHistory.Add(data);

// Save the new list locally
await LocalStorageHelper.SaveFileContentAsync(RecordingFileName, _recordingHistory);

// Navigate to the video page
rootFrame.Navigate(typeof(VideoPage), data);

In our MainPage we will override the OnNavigatedTo-method and load the videos from local storage.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    this.DefaultViewModel["Items"] = await LocalStorageHelper.LoadFileContentAsync<ObservableCollection<RecordingData>>(App.RecordingFileName) ?? new ObservableCollection<RecordingData>();
}

Now when you start your application from the Start screen it will load all previously notified videos!

Supporting Kinect availability changes

Now that our end-to-end scenario is working fine we can make our WPF application more stable. We will end by supporting availability changes of the sensor, so that our recordings aren’t broken & the user is informed when the sensor is unavailable.

Let’s start by creating a new handler for the IsAvailableChanged-event and changing the UI before opening the sensor.

// Hook-up availability event
_kinect.IsAvailableChanged += OnKinectAvailabilityChanged;

// Setup initial controls
if (_kinect.IsAvailable == false)
{
    StartRecordingButton.IsEnabled = false;
    Status.Content = "Kinect is unavailable.";
    KinectCamera.Visibility = Visibility.Collapsed;
    KinectUnavailable.Visibility = Visibility.Visible;
}

// Open the connection to the sensor
_kinect.Open();

In the handler we simply check whether the Kinect is available.
We then update the UI & status, and stop the recording if we lost the connection.

private async void OnKinectAvailabilityChanged(object sender, IsAvailableChangedEventArgs e)
{
    if (e.IsAvailable == false)
    {
        // Update status
        Status.Content = "Kinect is unavailable.";

        if (_isRecording)
        {
            // Stop recording and render as-is
            await StopRecording();
        }

        // Update UI & disable recording
        StartRecordingButton.IsEnabled = false;
        KinectCamera.Visibility = Visibility.Collapsed;
        KinectUnavailable.Visibility = Visibility.Visible;
    }
    else
    {
        // Update status
        Status.Content = "Kinect is available.";

        // Update UI
        StartRecordingButton.IsEnabled = true;
        KinectCamera.Visibility = Visibility.Visible;
        KinectUnavailable.Visibility = Visibility.Collapsed;
    }
}

It’s a wrap!

That was it! Although we went through a decent amount of code, it’s not that hard to build this scenario.

I hope you like it, my code is available here if you want to give it a spin.
Feel free to report bugs or extend the scenario!

Delivering to multiple platforms

Imagine that you publish your application to the Store, people are massively downloading it and there is demand for a Windows Phone, iOS or Android app – no worries!
Because we are using Media Services and Notification Hubs we can use the same backend without any big changes!

The notifications we are broadcasting are templated notifications that are platform-independent, because the client application is responsible for defining the appearance.
Notification Hubs, on the other hand, handles the backend for us by contacting the push notification system for iOS, Windows Phone and/or Android to make sure the notifications are sent out.
The only thing you need to do is link your notification hub to your new app.

Unfortunately Smooth Streaming is a protocol developed by Microsoft to support adaptive streaming in the Microsoft ecosystem.
This means that iOS, Android or even HTTP/HTML applications will not be able to view your videos.

Media Services offers you two choices. The first is to extend your current packaging job with packaging to an additional protocol like HLS v3. This creates a new asset that can be consumed by using another locator endpoint on an Origin server.
The downside is that you are storing multiple assets in Azure Blob Storage and thereby pay more.

Another option is to perform dynamic packaging instead of packaging to a specific protocol.
This allows you to store only your encoded MP4 asset in Blob Storage and dynamically package it into the requested stream format on demand.
You then no longer need to store multiple assets, which improves maintenance, but the downside is that you need a dedicated Origin instance to stream, which also costs more.
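As a sketch of that multi-platform delivery (assuming dynamic packaging is enabled on an Origin, a hypothetical `encodedMp4Asset` variable, and the same extensions package we used for GetSmoothStreamingUri), one locator on the MP4 asset can then serve several formats:

```csharp
// Hypothetical sketch: create one On-Demand locator on the encoded MP4 asset
IAccessPolicy policy = _mediaContext.AccessPolicies.Create("Streaming policy",
                                                           TimeSpan.FromDays(7),
                                                           AccessPermissions.Read);
_mediaContext.Locators.CreateLocator(LocatorType.OnDemandOrigin, encodedMp4Asset, policy);

// The extensions package builds the format-specific URIs for us
Uri smoothUri = encodedMp4Asset.GetSmoothStreamingUri(); // Windows clients
Uri hlsUri    = encodedMp4Asset.GetHlsUri();             // iOS / Android clients
Uri dashUri   = encodedMp4Asset.GetMpegDashUri();        // MPEG-DASH / HTML5 clients
```

Each URI points at the same stored asset; the Origin server repackages it per request.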

Mingfei recently did a session on this with Scott Hanselman for Azure Friday; watch it here.

Why not build a Store App recorder?!

The Kinect for Windows SDK allows C# developers to build WPF & Windows Store applications, so you might ask yourself why I chose to build a WPF recorder.

Next to the fact that I prefer WPF over Store Apps, storing all these images locally can be a bottleneck. The recorder uses local storage intensively to save each image frame & render the video later on.

With WPF this is not a problem: I have direct access to my local drives and can do anything my account permits me to. Store applications, however, run in a sandbox and don’t allow this without user interaction. We could force the user to select a folder where we store everything, but I don’t like that idea.

An alternative would be to use the local storage of the Store app, but I don’t know if that is built for this. In my opinion this feature hasn’t been made to store Full HD images at 30-60 FPS, but I am not a Store App developer, so don’t shoot me if it is possible!


Here are some resources that might help you experiment yourself –

  • “Using Windows Azure Media Services .NET SDK with key concepts explained” by Mingfei Yan (article)
  • “Introducing Extensions for Windows Azure Media Services .NET SDK” by Mingfei Yan (article)
  • “Lights, Camera, Action – Media Services on the Loose” by Mike Martin (video / slides)
  • “Useful resources for Windows Azure Media Services” by Mingfei Yan (article)
  • “Getting started with Notification Hubs” (article)
  • Patterns & Practices ‘Building an On-Demand Video Service with Microsoft Azure Media Services’ (article)


In this post we’ve built an end-to-end scenario that enables a user to record a video with their Kinect and broadcast it to all viewers by using the cloud.

This was also a small introduction to ‘Kinecting the Cloud’, I hope you liked it.

Thanks for reading,


Thank you Mingfei Yan & Mike Martin for reviewing.

Posted in Kinecting the Cloud, Second Generation Kinect for Windows, Tutorial | Tagged , , | 7 Comments

Delivering Kinect On-Demand to a Store App with Azure Media Services & Notification Hubs – Introduction

In this post I will introduce you to an end-to-end scenario where a Kinect application is using a cloud backend.

I will also briefly introduce you to Microsoft Azure, Microsoft’s cloud platform, and what it has to offer in our scenario.

End-to-end Scenario

In this scenario we will develop a Kinect application that enables the user to record a video with a self-describing caption. All the viewers will be notified that there is a new video available so they can watch it on-demand.

Before I start with the tutorial, let me quickly introduce some of the services we will be using in this scenario.

Microsoft Azure Storage

Microsoft Azure Storage offers three types of storage: Queues, Tables & Blobs.

Queues are used for simple messaging scenarios, while Tables are used for NoSQL storage. Blobs, on the other hand, store files – or blobs – in the cloud, separated into several containers.

MA Storage

Microsoft Azure Media Services

Microsoft Azure Media Services enables you to upload, encode, package, secure and deliver media on-demand or live in the cloud.

You can upload assets that represent media files – audio & video – and are stored as Storage Blobs behind the scenes. These assets can be used in jobs to encode them into new assets with a different format or package them for streaming.

The assets are delivered with an on-demand locator, which is a streaming endpoint hosted by Origin servers.

But there is more – support for ads, secure delivery, content protection, integrated CDN capability and more! Media Services was also responsible for the heavy lifting of live streaming for the football World Cup & the Olympics.

At //BUILD/ 2014 Mingfei Yan & Mariano Converti gave a really good overview of the platform, which is available here; you can also read more about Media Services here.

MA Media Services

Microsoft Azure Notification Hubs (Service Bus Stack)

Microsoft Azure Notification Hubs provide an easy-to-use infrastructure that enables you to send push notifications from any backend (in the cloud or on-premises) to any mobile platform.

With Notification Hubs you can easily send cross-platform, personalized push notifications without having to deal with the different platform notification systems (PNSs) yourself. With a single API call, you can target an entire audience segment containing millions of users, or individual users based on tags. Read more about Notification Hubs here.

MA Notification Hubs

Try it for free

Microsoft Azure offers a limited one-month free trial that allows you to play with Media Services, Notification Hubs and other services.

This tutorial requires you to have an active Azure subscription, either an existing one or a trial.
You can apply for the free trial here!


Let’s start by taking a look at the high-level “architecture”.

We will develop a WPF client that will orchestrate the communication between the Kinect sensor & the cloud. The WPF client enables the users to start & stop the recording and assign a self-describing caption for the viewers. Upon recording we will save each frame as a JPG-image and render it into an AVI-video at the end. Important to know is that the recording will automatically stop when the Kinect sensor becomes unavailable.

When the recording is done we will have a local video that we upload as our raw asset, encode into MP4 & package into a Smooth Stream for our viewers’ app. Last but not least, we will send a notification to all our viewers that a new video is available, along with the stream URL & the specified caption.
Demo Scenario - Kinect
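The publish flow above is a strict sequence: upload, encode, package, notify. As a sketch of that orchestration only (the real steps are Media Services & Notification Hubs calls from the .NET SDK; the function names here are hypothetical and injected as stubs):

```python
def publish_recording(video_path, caption, upload, encode, package, notify):
    """Runs the publish pipeline in order and returns the stream URL.

    The step functions are injected so each can wrap the actual
    Media Services / Notification Hubs calls."""
    raw_asset = upload(video_path)    # local AVI becomes the raw Asset
    mp4_asset = encode(raw_asset)     # encode the raw Asset to MP4
    stream_url = package(mp4_asset)   # package as a Smooth Streaming asset
    # Tell every viewer a new video is ready, with URL & caption.
    notify({"caption": caption, "streamUrl": stream_url})
    return stream_url
```

Keeping the steps as injected functions also makes the pipeline easy to test without touching the cloud.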

The viewers will use a simple Windows Store app that receives push notifications when a new video is ready. They can then use the stream URL to play the video from Media Services. The stream URL will also be stored in local storage so that the video can be watched again later on.
Demo Scenario - Client
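The "watch again later" part only needs a tiny local catalog of caption/URL pairs. The Store app would use its own local storage APIs; as a stand-in, here is a minimal JSON-file version of the same idea (file path and field names are my own choices):

```python
import json
import os


class VideoCatalog:
    """Keeps received stream URLs in a local JSON file so a video can be
    watched again later (stands in for the Store app's local storage)."""

    def __init__(self, path):
        self.path = path

    def add(self, caption, stream_url):
        videos = self.list()
        videos.append({"caption": caption, "streamUrl": stream_url})
        with open(self.path, "w") as f:
            json.dump(videos, f)

    def list(self):
        # An empty catalog simply has no file yet.
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)
```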


In this post I gave a brief introduction to Microsoft Azure and the services we will use to build a robust application that delivers Kinect recordings on-demand to a Store app.

This scenario is a good example of ‘Kinecting the Cloud’, a term I like to use for combining Kinect with the cloud. A lot more scenarios share the same terminology and this is only the beginning – more about this in the future.

In my next post we will dive into some code as I explain how to implement this scenario & how I can support new viewer clients in the future by using the cloud as a backend.

Thanks for reading,


Posted in Kinecting the Cloud, Second Generation Kinect for Windows, Tutorial | 2 Comments

Event – Amsterdam Kinect Hackathon September 5-6

Event Logo

The Kinect for Windows team & MVPs have been on the road for several hackathons in New York, Dallas, Redmond & Waterloo – bringing devices and experimental SDKs and happy to listen to developers’ ideas.

Recently a new hackathon was announced in Europe – it will take place on the 5th & 6th of September in Pakhuis De Zwijger, Amsterdam, The Netherlands.

As always there will be three grand prizes for the best applications, but everyone who attends will receive an Amsterdam Kinect Hackathon T-shirt!

Next to that, the hackathon allows you to talk to fellow Kinect developers, UI/UX developers, etc., or just share your ideas with the Kinect for Windows team & MVPs – including me!

If you want to hack along, you can register here or go to the event website.

See you there!

Tom Kerkhove

Posted in Event, Second Generation Kinect for Windows | Leave a comment

Mayday, mayday! Ending the Kinecting AR Drone series.

DISCLAIMER – This application is not finished and needs additional work

It’s been almost a year since I announced my Kinecting AR Drone series, which combines Kinect for Windows with the AR Drone.

The big idea behind it was to teach you some of the core Kinect for Windows v1 features while playing with an awesome toy. It would use the camera, speech and skeletal tracking to manipulate the drone – fly around, do some tricks, blink some LEDs and play with the camera.


Unfortunately, with the private & public preview of Kinect for Windows I’ve been swimming in a sea of work – covering the new content, thinking of new concepts and serving quality content.

Loads of cool ideas but so little time to get my hands dirty – this blog has even become my second “job”, although it is fun of course!

I’m open-sourcing the code in its current state without covering it in new blog posts.

Currently you are able to enter your “battle station” as a “Commander” and take off by using speech commands while monitoring the Kinect & drone cameras. You can blink the drone’s LEDs, perform some tricks, and the foundation for flying with your arms is in place.

All this is done with the AR.Drone library from Ruslan Balanukhin.

Flying gestures

Unfortunately flying the drone with your body isn’t finished yet – the gestures are partially developed but flying isn’t as smooth as I want it to be.

Flying is done by spreading your arms so you can fly like a real helicopter:

  • Fly up – Move both hands above your head (25° angle)
  • Fly down – Move both hands below your shoulders (25° angle)
  • Move left – Move your left hand below your shoulders and your right hand above your head (25° angle)
  • Move right – Move your right hand below your shoulders and your left hand above your head (25° angle)
  • Move forward – Lean forward
  • Move backwards – Lean backward
  • Rotate left – Rotate your arms counterclockwise with your spine as the center
  • Rotate right – Rotate your arms clockwise with your spine as the center
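For anyone picking up the flying gestures, the mapping above can be sketched as a small classifier over joint heights. This is a simplified Python illustration of my rules, not the actual C# implementation: y grows upward, and the 25° arm-angle check is reduced to plain above-head / below-shoulder comparisons.

```python
def classify_flight_gesture(left_hand_y, right_hand_y, head_y, shoulder_y):
    """Maps hand heights to a drone command following the gesture list.

    Simplification: the 25-degree angle threshold is replaced by simple
    comparisons against the head and shoulder heights."""
    left_up = left_hand_y > head_y          # left hand above head
    right_up = right_hand_y > head_y        # right hand above head
    left_down = left_hand_y < shoulder_y    # left hand below shoulders
    right_down = right_hand_y < shoulder_y  # right hand below shoulders

    if left_up and right_up:
        return "fly-up"
    if left_down and right_down:
        return "fly-down"
    if left_down and right_up:
        return "move-left"
    if right_down and left_up:
        return "move-right"
    return "hover"
```

A real implementation would also smooth the joint positions over several frames before issuing commands, which is exactly the part that still needs work.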

Now it’s up to you!

Just because I don’t have the time to finish it doesn’t mean that you should stop! You can try to make the flying smoother and fly it yourself!

You can download it and take a look yourself here.

Good luck, have fun & thanks for reading,


Posted in Kinecting AR Drone | 8 Comments