Delivering Kinect On-Demand to a Store App with Azure Media Services & Notification Hubs – Tutorial

In my previous post I introduced a scenario that combines Kinect with the cloud to illustrate that the two are a good match. I also introduced Microsoft Azure Storage, Media Services & Notification Hubs, which we will use to develop this end-to-end scenario!

Reminder – This tutorial requires an active Microsoft Azure subscription and a trial is available! More info in my previous post.

Template

I developed a solution template that is based on the Kinect for Windows Public Preview SDK in case you want to follow along & test it yourself.

This template includes a basic Windows Store app and a WPF client that already displays the Kinect camera. If you want to know more about displaying the camera, you can read this post.

You can download the template here.
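
If you don’t have the template at hand, here is a minimal sketch of how the WPF client could wire up the color camera. It is based on the Kinect for Windows SDK 2.0 public preview APIs and uses the same field names (_kinect, _colorPixels, _bytePerPixel) as the snippets later in this post, but treat it as an assumption rather than the template’s literal code.

private KinectSensor _kinect;
private ColorFrameReader _colorReader;
private WriteableBitmap _colorBitmap;
private byte[] _colorPixels;
private readonly int _bytePerPixel = PixelFormats.Bgr32.BitsPerPixel / 8;

private void InitializeKinect()
{
    // Get the default sensor & open a reader on the color source
    _kinect = KinectSensor.GetDefault();
    _colorReader = _kinect.ColorFrameSource.OpenReader();
    _colorReader.FrameArrived += OnColorFrameArrived;

    // Allocate the pixel buffer & bitmap based on the BGRA frame description
    FrameDescription frameDesc = _kinect.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
    _colorPixels = new byte[frameDesc.Width * frameDesc.Height * _bytePerPixel];
    _colorBitmap = new WriteableBitmap(frameDesc.Width, frameDesc.Height, 96.0, 96.0, PixelFormats.Bgr32, null);
    KinectCamera.Source = _colorBitmap;

    // Start the sensor
    _kinect.Open();
}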

Building the Kinect recorder

Record camera frames

We will start by creating a variable that flags whether we are recording, a counter that keeps track of the sequence number and a unique ID per recording. Note that the ID could be of type Guid as well, depending on your preference.

/// <summary>
/// Current count of the image
/// </summary>
private int _sequenceNr = 1;

/// <summary>
/// Indication whether we are recording
/// </summary>
private bool _isRecording = false;

/// <summary>
/// Unique recording ID
/// </summary>
private string _recordingID = string.Empty;

When the user wants to start recording we will validate the temporary folder, reset our recording variables, update the status and change the UI.

/// <summary>
/// Start recording
/// </summary>
private void StartRecording()
{
    // Validate temporary folder
    if (ValidateTemporaryFolder() == false) return;

    // Setup recording
    _sequenceNr = 1;
    _isRecording = true;
    _recordingID = Guid.NewGuid().ToString();

    // Update status
    Status.Content = "Recording...";

    // Toggle controls
    VideoCaption.IsReadOnly = true;
    TemporaryFolder.IsReadOnly = true;
    StartRecordingButton.IsEnabled = !_isRecording;
    StopRecordingButton.IsEnabled = _isRecording;
}

Once we are recording we will need to save the images locally, which means that we need to change our OnColorFrameArrived-method.

Right after we’ve updated our WriteableBitmap we will check if the recording flag is on.
If so we are going to copy the _colorPixels array, save the image asynchronously in the temporary folder and increment the sequence number.

Important to know is that the filename of the image will contain the recording ID & the sequence number for this frame.

 // Save image when recording
if (_isRecording)
{
    // Create a new byte-array
    byte[] imageData = new byte[_colorPixels.Length];

    // Copy the original array into the new one
    Array.Copy(_colorPixels, imageData, _colorPixels.Length);

    // Save the image in the local folder
    await ImageProcessor.SaveJpegAsync(imageData, frameDesc.Width, frameDesc.Height, frameDesc.Width * _bytePerPixel, TemporaryFolder.Text, string.Format("{0}_{1:000000}", _recordingID, _sequenceNr));

    // Increment the sequence number
    _sequenceNr++;
}

The ImageProcessor is a helper class that does all the saving for us – we just pass in the data with its width, height & stride along with the requested location & filename. It will then save the image as a JPEG by using the JpegBitmapEncoder in an asynchronous way.

public class ImageProcessor
{
    /// <summary>
    /// Save a buffer as a JPEG
    /// </summary>
    /// <param name="data">Image data</param>
    /// <param name="width">Width of the image</param>
    /// <param name="height">Height of the image</param>
    /// <param name="stride">Stride of the image</param>
    /// <param name="folder">Output folder</param>
    /// <param name="filename">Filename</param>
    public static async Task SaveJpegAsync(byte[] data, int width, int height, int stride, string folder, string filename)
    {
        Task saveJpegTask = Task.Run(() =>
        {
        if (data != null)
        {
            // Create a new bitmap
            WriteableBitmap bmp = new WriteableBitmap(width, height, 96.0, 96.0, PixelFormats.Bgr32, null);

            // write pixels to bitmap
            bmp.WritePixels(new Int32Rect(0, 0, width, height), data, stride, 0);

            // create jpg encoder from bitmap
            JpegBitmapEncoder enc = new JpegBitmapEncoder();

            // create frame from the writable bitmap and add to encoder
            enc.Frames.Add(BitmapFrame.Create(bmp));

            // Create whole path
            string path = Path.Combine(folder, filename + ".jpg");

            try
            {
                // write the new file to disk
                using (FileStream fs = new FileStream(path, FileMode.Create))
                {
                   enc.Save(fs);
                }
            }
            catch (IOException ex)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine("Error! Exception - " + ex.Message);
            }
        }
        });

        await saveJpegTask;
    }
}

Once the recording is stopped we will clear the recording flag, change the UI and start processing the image frames, rendering the local images into a video.

private async Task StopRecording()
{
    // Stop recording
    _isRecording = false;

    // Disable stop controls
    StopRecordingButton.IsEnabled = false;

    // Process the recorded frames
    await ProcessFrames();

    // Reset caption & Enable start
    VideoCaption.Text = string.Empty;
    VideoCaption.IsReadOnly = false;
    TemporaryFolder.IsReadOnly = false;
    StartRecordingButton.IsEnabled = true;
}
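
In the template the Start/Stop buttons invoke these methods; the Click-handlers could look roughly like this (the handler names are assumptions):

// Assumed Click-handlers for the recording buttons (names are illustrative)
private void StartRecordingButton_Click(object sender, RoutedEventArgs e)
{
    StartRecording();
}

private async void StopRecordingButton_Click(object sender, RoutedEventArgs e)
{
    // StopRecording is awaited because it renders, uploads & notifies afterwards
    await StopRecording();
}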

Locally rendering the Kinect video

We will load all the local images from the temporary folder for that recording ID and render them into a video. This will be done in a VideoProcessor where we pass in the FPS, width & height of the images, the quality, the path to the temporary folder and our recording ID.

As you can see I am forcing it to 15 FPS: the frame rate from the camera can vary depending on the lighting, so to get a constant frame rate I force 15 since we will always have 15 frames per second or more.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";
	
    // Render video locally
    string videoPath = await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);
}

Before we can start rendering we need to download the SharpAVI library that will render the video for us.

SharpAVI gives us an AviWriter that we configure and an IAviVideoStream with a MotionJpegVideoEncoderWpf using the specified values. After that we loop over all the images in our temporary folder for that recording ID and write their pixels to the stream, which writes the AVI-video to the temporary folder.

public class VideoProcessor
{
    /// <summary>
    /// Render a video based on JPEG-images
    /// </summary>
    /// <param name="fps">Requested frames-per-second</param>
    /// <param name="width">Width of the images</param>
    /// <param name="height">Height of the images</param>
    /// <param name="quality">Requested quality</param>
    /// <param name="path">Path to the folder containing frame-images</param>
    /// <param name="renderGuid">Unique GUID for this frame-batch</param>
    /// <returns>Path to the video</returns>
    public static async Task<string> RenderVideoAsync(int fps, int width, int height, int quality, string path, string renderGuid)
    {
        if (quality < 1 || quality > 100) throw new ArgumentException("Quality can only be between 1 and 100.");

        Task<string> renderT = Task.Run(() =>
        {
        // Compose output path
        string outputPath = string.Format("{0}/{1}.avi", path, renderGuid);

        // Create a new writer with the requested FPS
        AviWriter writer = new AviWriter(outputPath)
        {
            FramesPerSecond = fps
        };

        // Create a new stream to process it
        IAviVideoStream stream = writer.AddVideoStream().WithEncoder(new MotionJpegVideoEncoderWpf(width, height, quality));
        stream.Width = width;
        stream.Height = height;

        // Create an output stream
        byte[] frameData = new byte[stream.Width * stream.Height * 4];

        // Retrieve all images for this batch
        string[] images = Directory.GetFiles(path, string.Format("{0}*.jpg", renderGuid));

        // Process image per image
        foreach (string file in images)
        {
            // Decode the bitmap
            JpegBitmapDecoder decoder = new JpegBitmapDecoder(new Uri(file), BitmapCreateOptions.None, BitmapCacheOption.Default);

            // Get bitmap source
            BitmapSource source = decoder.Frames[0];
            
            // Copy pixels
            source.CopyPixels(frameData, width * 4, 0);

            // Write it to the stream
            stream.WriteFrame(true, frameData, 0, frameData.Length);
        }

        // Close writer
        writer.Close();

        return outputPath;
        });

        await renderT;

        return renderT.Result;
    }
}

Provisioning a Microsoft Azure Media Service

It is time to provision ourselves a Media Service on the Microsoft Azure platform!

Browse to the management portal and select New > App Services > Media Service > Quick Create. Here you can assign a name to your media service, the requested region where it will be running and create or link a storage account.
Creating media service
Once our service is provisioned, click Manage keys; here you can find the authentication keys we will use. Don’t share these with anyone!
Copying the keys
Copy & save the Account Name & Primary key in the App.config of your WPF project; we will use these to authenticate with the service.

<appSettings>
    <add key="MediaAccount" value="_YOUR-SERVICE-NAME_" />
    <add key="MediaKey" value="_YOUR-PRIMARY-KEY_" />
</appSettings>

Be careful with regenerating keys, it could break other applications relying on the service.

Encoding and packaging to Smooth Streaming

Now that we have our local video we will upload, encode and package it with Microsoft Azure Media Services.

To do so we will first add two new NuGet packages – the Windows Azure Media Services .NET SDK & the Windows Azure Media Services .NET SDK Extensions.

Next up we will create a MediaServicesAgent that will handle all our Media Services work. For now we will start with a constructor that accepts the Media Account Name & Key so we can create a CloudMediaContext.

public class MediaServicesAgent
{
    /// <summary>
    /// Media services credentials
    /// </summary>
    private MediaServicesCredentials _mediaCredentials;

    /// <summary>
    /// Media Context
    /// </summary>
    private CloudMediaContext _mediaContext;

    /// <summary>
    /// Default CTOR
    /// </summary>
    /// <param name="mediaAccount"></param>
    /// <param name="mediaKey"></param>
    public MediaServicesAgent(string mediaAccount, string mediaKey)
    {
        _mediaCredentials = new MediaServicesCredentials(mediaAccount, mediaKey);
        _mediaContext = new CloudMediaContext(_mediaCredentials);
    }
}

Now that we have our agent we will extend the ProcessFrames-method to save the timestamp when the video was rendered, update the status and call a new HostVideoInAzure-method that will contain all the Media Services logic and takes the local path of the video.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";

    // Render video locally
    string videoPath =
await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);

    // Save recording timestamp
    DateTime recordedStamp = DateTime.Now;

    Status.Content = "Done rendering video.";

    // Host video in Microsoft Azure
    string streamUrl = await HostVideoInAzure(videoPath);
}

Next we will create an instance of our Media Services Agent based on the Media Services keys in our configuration file; this requires a reference to System.Configuration.

After that we will create a basic version of HostVideoInAzure, starting with a call to the UploadAsset-method, passing in the local path and a method that will display the progress of the upload.

/// <summary>
/// Media Services agent (Microsoft Azure Media Services)
/// </summary>
private MediaServicesAgent _mediaAgent = new MediaServicesAgent(ConfigurationManager.AppSettings.Get("MediaAccount"), ConfigurationManager.AppSettings.Get("MediaKey"));

/// <summary>
/// Upload the rendered video to the cloud, encode to MP4 and deliver as Smooth Stream
/// </summary>
/// <param name="videoPath">Path to the local video</param>
private async Task<string> HostVideoInAzure(string videoPath)
{
    Status.Content = "Starting video upload...";

    // Upload the video as an Asset
    IAsset rawAsset = await _mediaAgent.UploadAsset(videoPath, UploadAssetHandler);

    // ...
}

/// <summary>
/// Displays the progress of the upload
/// </summary>
private void UploadAssetHandler(object sender, UploadProgressChangedEventArgs e)
{
    Dispatcher.Invoke(() => Status.Content = string.Format("Uploading Asset - {0}%", Math.Round(e.Progress, 0)));
}

This method will upload an unencrypted IAsset – hence the AssetCreationOptions.None – that contains one IAssetFile, which will be our video, and return it when we are done so we can use it later on. We also assign the upload handler so we can update our UI.

The snippet in the comment can be used as well, thanks to the Extensions NuGet package.

 public async Task<IAsset> UploadAsset(string filePath, EventHandler<UploadProgressChangedEventArgs> uploadHandler = null)
{
    Task<IAsset> uploadTask = Task.Run(() =>
    {
        // Retrieve filename
        string assetName = Path.GetFileName(filePath);

        // Create a new asset in the context
        IAsset asset = _mediaContext.Assets.Create(assetName, AssetCreationOptions.None);

        // Create a new asset file
        IAssetFile file = asset.AssetFiles.Create(assetName);

        // Hook-up the event if handler is specified
        if (uploadHandler != null)
            file.UploadProgressChanged += uploadHandler;

        // Upload the video
        file.Upload(filePath);

        return asset;
    });

    await uploadTask;

    return uploadTask.Result;

    // Snippet when you want to use the Microsoft Azure Media Services extensions
    //return await _mediaContext.Assets.CreateFromFileAsync(filePath, AssetCreationOptions.None, CancellationToken.None);
}

Next we will create a Media Services Job that will encode our Asset with the ‘H264 Adaptive Bitrate MP4 Set SD 16x9’ preset and package it into a Smooth Stream. We will do this in a new EncodeAndPackage-method that requires a job name, our raw asset and a handler to visualize the progress.

Let’s start by creating a new job to which we will assign two tasks – one for the encoding & one for the packaging.

For our encoding task we will retrieve the Windows Azure Media Encoder and create a new task based on this encoder and our requested preset, and give it a decent name.
Next we will add our raw asset as an input asset and create a new unencrypted output asset suffixed with “_MP4”.

Our packaging task is set up in much the same way – we retrieve the Windows Azure Media Packager, read the configuration of the stream from an XML file and create a new task based on this configuration.
The last thing we need to do for the packaging is to add an input asset for this task – which is the output of our first task – and create a new output asset.

With our tasks set we are ready to link our handler, submit the job and wait until it has been processed.

public async Task<IJob> EncodeAndPackage(string jobName, IAsset rawAsset, EventHandler<JobStateChangedEventArgs> jobHandler = null)
{
    Task<IJob> t = Task.Run(() =>
    {
        // Create a new job
        IJob job = _mediaContext.Jobs.Create(jobName);

        /* Task I - Encode into MP4
           Retrieve the encoder */
        IMediaProcessor latestWameMediaProcessor = (from p in _mediaContext.MediaProcessors
                    where p.Name == "Windows Azure Media Encoder"
                    select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();

        // Select the requested preset (Same as in the portal)
        string encodingPreset = "H264 Adaptive Bitrate MP4 Set SD 16x9";

        // Add a new task to the job for the encoding
        ITask encodeTask = job.Tasks.AddNew("Encoding", latestWameMediaProcessor, encodingPreset, TaskOptions.None);

        // Add our rendered video as input
        encodeTask.InputAssets.Add(rawAsset);

        // Add a new asset as output
        encodeTask.OutputAssets.AddNew(rawAsset.Name + "_MP4", AssetCreationOptions.None);


        /* Task II - Package into Smooth Streaming
           Retrieve the packager */
        IMediaProcessor latestPackagerMediaProcessor = (from p in _mediaContext.MediaProcessors
                        where p.Name == "Windows Azure Media Packager"
                        select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();

        // Read the config from XML
        string SSConfig = File.ReadAllText(Path.GetFullPath(@"D:\Source Control\Kinect for Windows\Second Generation Kinect\Kinect - VOD Media Services\K4W.KinectVOD\K4W.KinectVOD.Client.WPF\Assets\Media_Services_MP4_to_Smooth_Streams.xml"));

        // Add a new packaging task
        ITask packagingSSTask = job.Tasks.AddNew("Packing into Smooth Streaming", latestPackagerMediaProcessor, SSConfig, TaskOptions.None);

        // Add the output of the encoding
        packagingSSTask.InputAssets.Add(encodeTask.OutputAssets[0]);

        // Create a new output Asset
        packagingSSTask.OutputAssets.AddNew("Result_SS_" + rawAsset.Name, AssetCreationOptions.None);

        // Hook-up the handler if required
        if (jobHandler != null)
            job.StateChanged += jobHandler;

        // Submit the job
        job.Submit();

        // Execute
        job.GetExecutionProgressTask(CancellationToken.None).Wait();

        return job;
    });

    await t;

    return t.Result;
}

As with the uploading, we are displaying the progress of the job in our UI.

private void JobStateChangedHandler(object sender, JobStateChangedEventArgs e)
{
    Dispatcher.Invoke(() => Status.Content = string.Format("Job is currently {0}", e.CurrentState));
}

With everything set we can add this line to our HostVideoInAzure-method to start the job and visualize the progress.

IJob encodedAssetId = await _mediaAgent.EncodeAndPackage(string.Format("Encoding '{0}' into Mp4 & package to SS", rawAsset.Name), rawAsset, JobStateChangedHandler);

Our last step is to create a location where the stream can be consumed.

We will create a new method CreateNewSsLocator that creates a locator for our packaged asset, assigns an IAccessPolicy & locator type and returns the URI to the stream. The IAccessPolicy defines the requested permissions for that locator.

The locator type, on the other hand, defines the kind of access to the asset – a Shared Access Signature locator works at the storage level and is mostly used for downloading the video or for progressive download. Since we are using Smooth Streaming, an On-Demand locator is required; it creates an Origin streaming endpoint.

More information about those two types here.

public Uri CreateNewSsLocator(IAsset packagedAsset, LocatorType locatorType, AccessPermissions accessPermissions, TimeSpan duration)
{
    if (packagedAsset == null) throw new Exception("Invalid encoded asset");

    // Create a new access policy to the video
    IAccessPolicy policy = _mediaContext.AccessPolicies.Create("Streaming policy", duration, accessPermissions);

    // Create a new locator to that resource with our new policy
    _mediaContext.Locators.CreateLocator(locatorType, packagedAsset, policy);

    // Return the uri by using the extensions package
    return packagedAsset.GetSmoothStreamingUri();
}

This is how your HostVideoInAzure-method should look after calling our CreateNewSsLocator-method.

private async Task<string> HostVideoInAzure(string videoPath)
{
    Status.Content = "Starting video upload...";

    // Upload the video as an Asset
    IAsset rawAsset = await _mediaAgent.UploadAsset(videoPath, UploadAssetHandler);

    Status.Content = "Starting encoding & packaging...";

    // Encode & Package in Media Services
    IJob encodedAssetId = await _mediaAgent.EncodeAndPackage(string.Format("Encoding '{0}' into Mp4 & package to SS", rawAsset.Name), rawAsset, JobStateChangedHandler);

    Status.Content = "Creating locator endpoint...";

    // Create a new Smooth Streaming Locator
    Uri ssUri = _mediaAgent.CreateNewSsLocator(encodedAssetId.OutputMediaAssets[1], LocatorType.OnDemandOrigin, AccessPermissions.Read, TimeSpan.FromDays(7));

    return ssUri.ToString();
}

Testing the stream

You can now test your Smooth Stream on this website.

Provisioning a Microsoft Notification Hub

Time to provision a notification hub in a Service Bus namespace, this will allow us to send notifications!

Log in to the portal and click New > App Services > Service Bus > Notification Hub > Quick Create and give the hub a self-describing name. You’ll also need to specify the region where the hub will be provisioned along with a name for a new Service Bus namespace, or use an existing one!
Creating notification hub
Next we will create new Shared Access Policies, or SAS policies, that we will use for authentication. Instead of using one policy with full control we will create two – a notifier that is able to send messages and will thus be used by our Kinect recorder, and a listener that will be used by our clients to listen for notifications.

This allows us to restrict the level of access based on the requirements so the listeners can’t abuse their access to spam notifications to others.
Creating SAS policies
Navigate to the Service Bus-overview page, select your namespace and click Connection Information.
Generating keys
Copy the connection string for the Notifier policy and store it in the App.config of your WPF application. Also store the name of your notification hub so we know which hub we need to send to.

<appSettings>
    <add key="MediaAccount" value="_YOUR-SERVICE-NAME_" />
    <add key="MediaKey" value="_YOUR-PRIMARY-KEY_" />
    <add key="NotificationHub" value="kinect-VOD-tutorial" />
</appSettings>
<connectionStrings>
    <add name="servicebus-ns" connectionString="Endpoint=sb://_YOUR_NAMESPACE_.servicebus.windows.net/;SharedAccessKeyName=SendPolicy;SharedAccessKey=_SHARED-KEY_" />
</connectionStrings>

Notifying the clients

Now that we have our Smooth Stream ready in the cloud we still need to notify our viewers that there is a new video available. We will use Microsoft Azure Notification Hubs to send a push notification to all our clients.

We will create a NotificationHubAgent that will do all the work for us. This agent requires the name of the push notification hub and the connection string. It will then create a NotificationHubClient that we will use to send notifications.

The agent will expose a SendTemplateNotificationAsync-method that will send a set of properties to the notification hub. The advantage of a template notification is that this is done in a platform-independent way and the receiver is responsible for the appearance of the notification.
The client applications will then be able to use our set of properties in their notification.

public class NotificationHubAgent
{
    /// <summary>
    /// Notification hub client to certain hub
    /// </summary>
    private NotificationHubClient _hubClient;

    /// <summary>
    /// Default CTOR
    /// </summary>
    /// <param name="hubName">Name of the requested notification hub</param>
    /// <param name="connectionString">Connection string to the Service Bus namespace</param>
    public NotificationHubAgent(string hubName, string connectionString)
    {
        if (string.IsNullOrEmpty(hubName)) throw new ArgumentException("Invalid hub name.");
        if (string.IsNullOrEmpty(connectionString)) throw new ArgumentException("Invalid Service Bus connection string.");

        // Create a new hub client
        _hubClient = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);
    }

    /// <summary>
    /// Send a template notification (Platform independent)
    /// </summary>
    /// <param name="properties">Set of properties</param>
    public async Task SendTemplateNotificationAsync(Dictionary<string, string> properties)
    {
        if (properties == null) throw new ArgumentException("Properties cannot be Null.");

        // Send
        await _hubClient.SendTemplateNotificationAsync(properties);
    }
}

We will use this agent to send out the URL to our clients along with some metadata.

First we create a new instance of the agent based on our App.config. After that we create a new SendNotification-method that will assemble a list of properties, including the stream URL, that will be sent out.

Note that I create a new RecordingData object, convert it to JSON and add it to the property list.
This allows me to push some additional metadata that we will use later on.
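
The RecordingData class itself isn’t listed in this post; based on how it is used below, a minimal sketch could look like this (the property & constructor names come from that usage, the rest is an assumption):

// Minimal sketch of the RecordingData metadata object (assumed implementation)
public class RecordingData
{
    public string Caption { get; set; }
    public string SmoothStreamUrl { get; set; }
    public string RecordingId { get; set; }
    public DateTime RecordingStamp { get; set; }

    // Parameterless constructor so the JSON serializer can rehydrate it on the client
    public RecordingData() { }

    public RecordingData(string caption, string smoothStreamUrl, string recordingId, DateTime recordingStamp)
    {
        Caption = caption;
        SmoothStreamUrl = smoothStreamUrl;
        RecordingId = recordingId;
        RecordingStamp = recordingStamp;
    }
}

The SerializeToJson-call is an extension method that isn’t shown either; a possible implementation is sketched further down in the Store-app section.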

/// <summary>
/// Notification Agent (Microsoft Azure Notification Hubs)
/// </summary>
private NotificationHubAgent _notificationAgent = new NotificationHubAgent(ConfigurationManager.AppSettings.Get("NotificationHub"), ConfigurationManager.ConnectionStrings["servicebus-ns"].ConnectionString);
		
/// <summary>
/// Send the streaming URL & caption to the clients
/// </summary>
/// <param name="streamUrl">Url of the stream</param>
private async Task SendNotification(string streamUrl, DateTime stamp)
{
    // Create metadata for the client (will be used in the launch-property of the tile)
    RecordingData recordingData = new RecordingData(VideoCaption.Text, streamUrl, _recordingID, stamp);

    // Assign properties for the notification
    Dictionary<string, string> properties = new Dictionary<string, string>()
    {
        {"Caption", recordingData.Caption},
        {"SmoothStreamUrl", recordingData.SmoothStreamUrl},
        {"RecordingId", recordingData.RecordingId},
        {"RecordingStamp", recordingData.RecordingStamp.ToString()},
        {"RecordingData", recordingData.SerializeToJson()}
    };

    // Send the notification
    await _notificationAgent.SendTemplateNotificationAsync(properties);
}

The last step in our WPF client is to clean up the temporary folder for the current recording.

private async Task RemoveLocalAssets()
{
    string tempFolder = TemporaryFolder.Text;
    Task cleanupT = Task.Run(() =>
    {
        foreach (string file in Directory.GetFiles(tempFolder, string.Format("{0}*", _recordingID)))
        {
            File.Delete(file);
        }
    });

    await cleanupT;
}

This is how your ProcessFrames-method should look in your WPF application.

private async Task ProcessFrames()
{
    Status.Content = "Starting video render...";

    // Render video locally
    string videoPath =
        await VideoProcessor.RenderVideoAsync(15, 1920, 1080, 100, TemporaryFolder.Text, _recordingID);

    // Save recording timestamp
    DateTime recordedStamp = DateTime.Now;

    Status.Content = "Done rendering video.";

    // Host video in Microsoft Azure
    string streamUrl = await HostVideoInAzure(videoPath);

    Status.Content = "Video is available on-demand.";

    // Send notifications to clients
    await SendNotification(streamUrl, recordedStamp);

    // Remove saved images & local video afterwards
    await RemoveLocalAssets();
}

Consuming the Smooth Stream with a Windows 8.1 Store App

Before we can start receiving notifications we need to register our application and link it to our notification hub.

In the Notification Hub documentation they explain how you can associate your app with the notification hub and which changes to make to your app manifest to set up your application to receive notifications!

Receiving & handling push notification

Once we have linked our Store application with our push notification hub we can receive & process notifications.

Let’s start by creating a PushNotificationsHelper that will expose a RegisterTemplateNotificationAsync-method.
This method will register a notification template on a NotificationHub with the given name, using our XML notification template and a new PushNotificationChannel.

The XML template is based on a ToastImageAndText02 toast where we fill in a decent title, caption and image.
Note that we are adding a launch attribute to the DocumentElement that will contain our metadata; this is the metadata you receive when the viewer taps the notification.

public class PushNotificationsHelper
{
    /// <summary>
    /// Register a template notification
    /// </summary>
    /// <param name="hubName">Name of the sending hub</param>
    /// <param name="connectionString">Connection string to the Service Bus namespace</param>
    /// <param name="templateName">Name of the template</param>
    /// <param name="metadata">Notification property holding the metadata</param>
    /// <param name="header">Header text of the toast</param>
    /// <param name="footer">Footer text of the toast</param>
    /// <param name="image">Url to the image</param>
    /// <returns></returns>
    public static async Task<TemplateRegistration> RegisterTemplateNotificationAsync(string hubName, string connectionString, string templateName, string metadata, string header, string footer, string image)
    {
        // Create a new push notification channel
        PushNotificationChannel channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

        // Create a new notification hub
        NotificationHub hub = new NotificationHub(hubName, connectionString);

        // Generate the template for the toast
        XmlDocument toastTemplate = await GenerateXmlTemplateAsync(metadata, header, footer, image);

        // Register the template
        return await hub.RegisterTemplateAsync(channel.Uri, toastTemplate, templateName);
    }

    /// <summary>
    /// Generate the Xml Template for the 'ToastImageAndText02' notification
    /// </summary>
    /// <param name="metadata">Notification property holding the metadata</param>
    /// <param name="header">Header text of the toast</param>
    /// <param name="footer">Footer text of the toast</param>
    /// <param name="image">Url to the image</param>
    private static async Task<XmlDocument> GenerateXmlTemplateAsync(string metadata, string header, string footer, string image)
    {
        var template = ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastImageAndText02);

        // msg, id, url, tag
        template.DocumentElement.SetAttribute("launch", metadata);

        var titleNode = template.SelectSingleNode("//text[@id='1']") as XmlElement;
        if (titleNode != null)
        {
        titleNode.InnerText = header;
        }

        var captionNode = template.SelectSingleNode("//text[@id='2']") as XmlElement;
        if (captionNode != null)
        {
        captionNode.InnerText = footer;
        }

        var imgNode = template.SelectSingleNode("//image[@id='1']") as XmlElement;
        if (imgNode != null)
        {
        imgNode.SetAttribute("src", image);
        imgNode.SetAttribute("alt", image);
        }

        return template;
    }
}

In our scenario we are broadcasting to all our clients, but in some scenarios you’d only want to notify a specific group of clients. In that case the clients would need to register as above but additionally specify a set of tags they want to be notified about.

Let’s say that we expand our scenario so that recorders can add tags, e.g. ‘Diving’, to their video; the viewer can then select the categories of their interest. The Store app notification registration will then specify the set of interesting tags.
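
As a sketch of what such targeted sending could look like – the tag name and properties below are purely illustrative assumptions – the server-side NotificationHubClient accepts a tag expression next to the template properties:

// Illustrative sketch: only clients that registered with the 'Diving' tag receive this notification
Dictionary<string, string> properties = new Dictionary<string, string>
{
    { "Caption", "Exploring the reef" },
    { "SmoothStreamUrl", "http://example.org/diving.ism/Manifest" }
};

await _hubClient.SendTemplateNotificationAsync(properties, "Diving");

On the client side the registration would then pass the same tag along; the Notification Hubs client library exposes register overloads that accept a set of tags.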

It is important to know that each registration returns a temporary TemplateRegistration.

We will now create a new RegisterPushNotifications-method that will call our helper class.
In this method we will save the expiration date of the registration and renew it when required.

public async Task RegisterPushNotifications(string hubName, string connectionString, string templateName, string metadata, string header, string footer, string image)
{
    bool registerTemplate = false;

    // Retrieve local settings
    ApplicationDataContainer localSettings = ApplicationData.Current.LocalSettings;

    // Retrieve saved expiration date for this template
    object registerExpiration = localSettings.Values[templateName.Replace(" ", "-")];

    // Flag as to-register when no value found
    if (registerExpiration != null)
    {
        // Try parse to datetime
        DateTime expirationDateTime;
        DateTime.TryParse(registerExpiration.ToString(), out expirationDateTime);

        // Register when expired
        if (expirationDateTime <= DateTime.Now)
            registerTemplate = true;
    }
    else
        registerTemplate = true;

    // Create a new registration when required
    if (registerTemplate == true)
    {
        TemplateRegistration tempRegistration = await PushNotificationsHelper.RegisterTemplateNotificationAsync(hubName, connectionString, templateName, metadata, header, footer, image);

        // Save new expiration date
        localSettings.Values[templateName.Replace(" ", "-")] = tempRegistration.ExpiresAt.ToString();
    }
}

We will expand the OnLaunched-method by checking if there are arguments available.

When the user taps a notification our metadata will be available in the Arguments.
This means that when there are no arguments available we need to check if our registration exists or has expired & navigate to the MainPage.
If there are arguments available we will deserialize the metadata & pass it along when we’re navigating to our VideoPage.

// Switching decision between pages
if (string.IsNullOrEmpty(e.Arguments))
{
    // Register for notifications
    await RegisterPushNotifications("kinect-VOD-tutorial",
            "_YOUR-NOTIFICATION-HUB-CS_",
            "new-video-template",
            string.Format("$({0})", "RecordingData"),
            "New recorded video",
            string.Format("$({0})", "Caption"),
            "http://www.kinectingforwindows.com/images/notification_logo.png");

    // Navigate to the overview page
    rootFrame.Navigate(typeof(MainPage));
}
else
{
    // Deserialize to RD
    RecordingData data = e.Arguments.DeserializeFromJson<RecordingData>();

    // Navigate to the video page
    rootFrame.Navigate(typeof(VideoPage), data);
}

Watching the video

Now that we have our metadata we can watch the stream in our app, but unfortunately Smooth Streaming isn’t supported out-of-the-box in Windows 8.1 Store apps.

Luckily there are frameworks available to do so – the Microsoft Player Framework & its Adaptive plugin will help us, so don’t forget to add the NuGet packages.
Next to that you need to install the Smooth Streaming Client SDK for Windows 8.1 or add it in Visual Studio under Tools > Extensions and Updates.

We will retrieve the metadata from the navigation parameter and pass it to our defaultViewModel so we can bind it to our video control.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    defaultViewModel["RecordingData"] = (e.Parameter) as RecordingData;
}

In the XAML of the VideoPage we will add two new namespace references – one to Microsoft.PlayerFramework & one to Microsoft.PlayerFramework.Adaptive.
These references allow us to add a MediaPlayer with an AdaptivePlugin, which is required to play Smooth Streams. We will also bind the Smooth Stream URL from our metadata to the media player.

As you can see I wrapped the MediaPlayer in a Viewbox. This allows us to scale based on the width/height ratio I specified on the MediaPlayer, to optimize for different screen resolutions.

<Page
    ...
    xmlns:playerFx="using:Microsoft.PlayerFramework"
    xmlns:playerFxPlugin="using:Microsoft.PlayerFramework.Adaptive">
<Viewbox Margin="0,0,140,40" Grid.Column="1" Grid.Row="1">
    <playerFx:MediaPlayer 
			AutoPlay="True"
			IsThumbnailVisible="True"
			Height="540"
			Width="960"
			HorizontalAlignment="Center"
			VerticalAlignment="Center"
			Source="{Binding RecordingData.SmoothStreamUrl}">
				<playerFx:MediaPlayer.Plugins>
					<playerFxPlugin:AdaptivePlugin />
				</playerFx:MediaPlayer.Plugins>
    </playerFx:MediaPlayer>
</Viewbox>

Locally saving the video streams

To improve the Store application we will keep the metadata for all videos in local storage.
This allows us to display the complete list of videos when we start the application.

To do so, I created a generic LocalStorageHelper that will save & load our data in local storage.

public class LocalStorageHelper
{
    /// <summary>
    /// Load a local file and retrieve the content
    /// </summary>
    /// <typeparam name="T">Requested result type</typeparam>
    /// <param name="fileName">Local filename</param>
    /// <returns>Local content</returns>
    public static async Task<T> LoadFileContentAsync<T>(string fileName)
    {
        try
        {
            StorageFile localFile = await ApplicationData.Current.LocalFolder.GetFileAsync(fileName);
            return (localFile != null) ? (await FileIO.ReadTextAsync(localFile)).DeserializeFromJson<T>() : default(T);
        }
        catch (FileNotFoundException ex)
        {
            return default(T);
        }
    }

    /// <summary>
    /// Save content to a local file
    /// </summary>
    /// <typeparam name="T">Content Type</typeparam>
    /// <param name="fileName">Requested filename</param>
    /// <param name="content">Content to save</param>
    public static async Task SaveFileContentAsync<T>(string fileName, T content)
    {
        StorageFile localFile = await ApplicationData.Current.LocalFolder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(localFile, content.SerializeToJson());
    }
}
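
Both the recorder and the Store app rely on SerializeToJson & DeserializeFromJson extension methods that aren’t listed in this post. A minimal sketch – assuming DataContractJsonSerializer so the same code compiles for .NET & Windows Store apps – could look like this:

public static class JsonExtensions
{
    /// <summary>
    /// Serialize an object to a JSON string
    /// </summary>
    public static string SerializeToJson<T>(this T value)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
        using (MemoryStream stream = new MemoryStream())
        {
            serializer.WriteObject(stream, value);
            return Encoding.UTF8.GetString(stream.ToArray(), 0, (int)stream.Length);
        }
    }

    /// <summary>
    /// Deserialize a JSON string back into an object
    /// </summary>
    public static T DeserializeFromJson<T>(this string json)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (T)serializer.ReadObject(stream);
        }
    }
}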

We will expand our OnLaunched-handler to load the previous videos, add the new one and save it back in local storage before we will navigate to the video page.

// Deserialize to RD
RecordingData data = e.Arguments.DeserializeFromJson<RecordingData>();

// Load recording history on first run
if (_recordingHistory == null)
   _recordingHistory = await LocalStorageHelper.LoadFileContentAsync<ObservableCollection<RecordingData>>(RecordingFileName) 
                       ?? new ObservableCollection<RecordingData>();
// Add to the list
_recordingHistory.Add(data);

// Save the new list locally
await LocalStorageHelper.SaveFileContentAsync(RecordingFileName, _recordingHistory);

// Navigate to the video page
rootFrame.Navigate(typeof(VideoPage), data);

In our MainPage we will override the OnNavigatedTo-method and load the videos from local storage.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    this.DefaultViewModel["Items"] = await LocalStorageHelper.LoadFileContentAsync<ObservableCollection<RecordingData>>(App.RecordingFileName) ?? new ObservableCollection<RecordingData>();
}

Now when you start your application from the Start screen it will load all previously notified videos!

Supporting Kinect availability changes

Now that our end-to-end scenario is working we can make our WPF application more robust. We will end by supporting availability changes of the sensor so that our recordings aren’t broken & the user is informed when the sensor is unavailable.

Let’s start by creating a new handler for the IsAvailableChanged-event and changing the UI before opening the sensor.

// Hook-up availability event
_kinect.IsAvailableChanged += OnKinectAvailabilityChanged;

// Setup initial controls
if (_kinect.IsAvailable == false)
{
    StartRecordingButton.IsEnabled = false;
    Status.Content = "Kinect is unavailable.";
    KinectCamera.Visibility = Visibility.Collapsed;
    KinectUnavailable.Visibility = Visibility.Visible;
}

// Open connection
_kinect.Open();

In the handler we simply check whether the Kinect is available or not.
After that we update the UI, update the status and stop the recording if we lost the connection.

private async void OnKinectAvailabilityChanged(object sender, IsAvailableChangedEventArgs e)
{
    if (e.IsAvailable == false)
    {
        // Update status
        Status.Content = "Kinect is unavailable.";

        if (_isRecording)
        {
            // Stop recording and render as-is
            await StopRecording();
        }
        else
        {
            // Disable recording
            StartRecordingButton.IsEnabled = false;
        }

        // Update UI
        KinectCamera.Visibility = Visibility.Collapsed;
        KinectUnavailable.Visibility = Visibility.Visible;
    }
	else
    {
        // Update status
        Status.Content = "Kinect is available.";

        // Update UI
        StartRecordingButton.IsEnabled = true;
        KinectCamera.Visibility = Visibility.Visible;
        KinectUnavailable.Visibility = Visibility.Collapsed;
    }
}

It’s a wrap!

That was it! Although we went through a decent amount of code, it’s not that hard to build this scenario.

I hope you like it – my code is available here if you want to give it a spin.
Feel free to report bugs or extend the scenario!

Delivering to multiple platforms

Imagine that you publish your application to the store, people are massively downloading it and there is demand for a Windows Phone, iOS or Android app – no worries!
Because we are using Media Services and Notification Hubs we can use the same backend without any big changes!

The notifications that we are broadcasting are template notifications, which are platform-independent because the client application is responsible for defining the appearance.
On the other hand, Notification Hubs handles the backend for us by contacting the push notification systems for iOS, Windows Phone and/or Android to make sure the notifications are sent out.
The only thing you need to do is link your notification hub to your new app.

Unfortunately Smooth Streaming is a protocol developed by Microsoft to support adaptive streaming in the Microsoft ecosystem.
This means that iOS, Android or even HTTP/HTML applications will not be able to view your videos.
Media Services offers you two choices – the first is to extend your current packaging job with packaging to an additional protocol like HLS v3. This will create a new asset that can be consumed by using another locator endpoint on an Origin server.
The downside of this is that you are storing multiple assets in Azure Blob Storage and thereby pay more.

Another option is to perform dynamic packaging instead of packaging to a specific protocol.
This allows you to store only your encoded MP4 asset in Blob Storage and dynamically package it into the requested stream format on demand.
You then no longer need to store multiple assets, which improves maintenance, but the downside is that you need a dedicated Origin instance to stream, which also costs more.

Mingfei recently did a session on this with Scott Hanselman for Azure Friday, watch it here.

Why not build a Store App recorder?!

Kinect for Windows allows C# developers to build WPF & Windows Store applications, so you might ask yourself why I chose to build a WPF recorder.

Next to the fact that I prefer WPF over Store apps, storing all these images locally can be a bottleneck. The recorder makes intensive use of local storage to save each image frame & render the video later on.

With WPF this is not a problem: I have direct access to my local drives and can do anything my account permits. Store applications, however, run in a sandbox and don’t allow this without user interaction. We could force the user to select a folder where we store everything, but I don’t like the idea of that.

An alternative would be to use the local storage of the Store app, but I don’t know if that is built for this. In my opinion this feature hasn’t been built to store Full HD images at 30-60 FPS, but I am not a Store app developer, so don’t shoot me if it is possible!

Resources

Here are some resources that might help you experiment yourself -

  • “Using Windows Azure Media Services .NET SDK with key concepts explained” by Mingfei Yan (article)
  • “Introducing Extensions for Windows Azure Media Services .NET SDK” by Mingfei Yan (article)
  • “Lights, Camera, Action – Media Services on the Loose” by Mike Martin (video / slides)
  • “Useful resources for Windows Azure Media Services” by Mingfei Yan (article)
  • “Getting started with Notification Hubs” (article)
  • Patterns & Practices ‘Building an On-Demand Video Service with Microsoft Azure Media Services’ (article)

Conclusion

In this post we’ve built an end-to-end scenario that enables a user to record a video with their Kinect and broadcast it to all viewers by using the cloud.

This was also a small introduction to ‘Kinecting the Cloud’; I hope you liked it.

Thanks for reading,

Tom.

Thank you Mingfei Yan & Mike Martin for reviewing!


Delivering Kinect On-Demand to a Store App with Azure Media Services & Notification Hubs – Introduction

In this post I will introduce you to an end-to-end scenario where a Kinect application is using a cloud backend.

I will also briefly introduce you to Microsoft Azure, Microsoft’s cloud platform, and what it has to offer in our scenario.

End-to-end Scenario

In this scenario we will develop a Kinect application that enables the user to record a video with a self-describing caption. All the viewers will be notified that there is a new video available so they can watch it on-demand.

Before I start with the tutorial, let me quickly introduce some of the services we will be using in this scenario.

Microsoft Azure Storage

Microsoft Azure Storage offers three types of storage: Queues, Tables & Blobs.

Queues are used for simple messaging scenarios, while Tables are used for NoSQL storage. Blobs, on the other hand, are files stored in the cloud, separated into several containers.

MA Storage
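
As a tiny illustration – not code from this scenario, since Media Services manages its storage account for us – uploading a file to Blob storage with the Azure Storage .NET SDK could look roughly like this (the container & file names are assumptions):

// Connect to the storage account
CloudStorageAccount account = CloudStorageAccount.Parse("_YOUR-STORAGE-CONNECTION-STRING_");
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Blobs live in a container; create it if it doesn't exist yet
CloudBlobContainer container = blobClient.GetContainerReference("recordings");
container.CreateIfNotExists();

// Upload a local file as a block blob
CloudBlockBlob blob = container.GetBlockBlobReference("example.avi");
using (FileStream stream = File.OpenRead(@"C:\Temp\example.avi"))
{
    blob.UploadFromStream(stream);
}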

Microsoft Azure Media Services

Microsoft Azure Media Services enables you to upload, encode, package, secure and deliver media on-demand or live in the cloud.

You can upload assets that represent media files – audio & video – which are stored as Storage Blobs behind the scenes. These assets can be used in jobs to encode them into new assets with a different format or to package them for streaming.

The assets are delivered with an on-demand locator, which is a streaming endpoint hosted by Origin servers.

But there is more – support for ads, secure delivery, content protection, integrated CDN capability and more! Media Services was also responsible for the heavy lifting of live streaming for the football World Cup & the Olympics.

At //BUILD/ 2014 Mingfei Yan & Mariano Converti gave a really good overview of the platform, which is available here, or you can read more about Media Services here.

MA Media Services

Microsoft Azure Notification Hubs (Service Bus Stack)

Microsoft Azure Notification Hubs provide an easy-to-use infrastructure that enables you to send push notifications from any backend (in the cloud or on-premises) to any mobile platform.

With Notification Hubs you can easily send cross-platform, personalized push notifications without having to deal with the different platform notification systems (PNSs) yourself. With a single API call you can target an entire audience segment containing millions of users, or individual users based on tags. Read more about Notification Hubs here.

MA Notification Hubs

Try it for free

Microsoft Azure offers a free trial for one month (limited) that allows you to play with Media Services, Notification Hubs or others services.

This tutorial requires you to have an active Azure subscription either an existing one or a trial.
You can apply for the free trial here!

Architecture

Let’s start by taking a look at the high-level “architecture”.

We will develop a WPF client that will orchestrate the communication between the Kinect sensor & the cloud. The WPF client enables the users to start & stop the recording and assign a self-describing caption for the viewers. Upon recording we will save each frame as a JPG-image and render it into an AVI-video at the end. Important to know is that the recording will automatically stop when the Kinect sensor becomes unavailable.

When the recording is done we will have a local video that we will upload as our raw Asset, encode it into MP4 & package it to a Smooth Stream for our viewers app. Last but not least we will send a notification to all our viewers that there is a new video available along with the stream URL & the specified caption.
Demo Scenario - Kinect

The viewers will use a simple Windows Store app that receives push notifications when a new video is ready. They can then use the stream URL to play the video from Media Services. The stream URL will also be stored in local storage so that the video can be watched again later on.
Demo Scenario - Client

Conclusion

In this post I gave a brief introduction to Microsoft Azure and the services we will use to build a robust application that delivers Kinect on-demand in a Store app.

This scenario is a good example of ‘Kinecting the Cloud’, a term I like to use for combining Kinect with the cloud. There are a lot more scenarios that share the same terminology and this is only the beginning; more about this in the future.

In my next post we will dive into some code as I explain how we can implement this scenario & how I can support new viewer clients in the future by using the cloud as a backend.

Thanks for reading,

Tom.


Event – Amsterdam Kinect Hackathon September 5-6

Event Logo

The Kinect for Windows team & MVPs have been on the road for several hackathons – New York, Dallas, Redmond & Waterloo – bringing devices and experimental SDKs, and listening to attendees’ ideas.

Recently a new hackathon was announced in Europe – it will take place on the 5th & 6th of September in Pakhuis De Zwijger, Amsterdam, The Netherlands.

As always there will be three grand prizes for the best applications, but everyone who attends will receive an Amsterdam Kinect Hackathon T-shirt!

Next to that, the hackathon allows you to talk to fellow Kinect developers, UI/UX developers, etc. or just share your ideas with the Kinect for Windows team & MVPs – including me!

If you want to hack along, you can register here or go to the event website.

See you there!

Tom Kerkhove


Mayday, mayday! Ending the Kinecting AR Drone series.

DISCLAIMER – This application is not finished and needs additional work

It’s been almost a year since I announced my Kinecting AR drone series that is combining Kinect for Windows with AR Drone.

The big idea behind it was to teach you some of the core Kinect for Windows v1 features while playing with an awesome toy. It would use the camera, speech and skeletal tracking to manipulate the drone – fly around, do some tricks, blink some LEDs and play with the camera.

Drone

Unfortunately, with the private & public preview of Kinect for Windows I’ve been swimming in a sea of work – covering the new content, thinking of new concepts and serving quality content.

Loads of cool ideas but so little time to get my hands dirty; this blog has even become my second “job” – although it is fun of course!

I’m open-sourcing my current status code without covering it in new blog posts.

Currently you are able to enter your “battle station” as a “Commander” and take off by using speech commands while monitoring the Kinect & drone cameras. You can blink the drone LEDs, perform some tricks, and the foundation for flying with your arms is in place.

All this is done with the AR.Drone library from Ruslan Balanukhin.

Flying gestures

Unfortunately flying the drone with your body isn’t finished yet – the gestures are partially developed but flying isn’t as smooth as I want it to be.

Flying is done by spreading your arms so you can fly like a real helicopter –

  • Fly up – Move both your hands above your head (25° angle)
  • Fly down – Move both your hands below your shoulders (25° angle)
  • Move left – Move your left hand below your shoulders and your right hand above your head (25° angle)
  • Move right – Move your right hand below your shoulders and your left hand above your head (25° angle)
  • Move forward – Lean forward
  • Move backwards – Lean backward
  • Rotate left – Rotate your arms counterclockwise with your spine as the center
  • Rotate right – Rotate your arms clockwise with your spine as the center

Now it’s up to you!

Although I don’t have the time to finish it, that doesn’t mean that you should stop! You can try to make the flying smoother and fly it yourself!

You can download and take a look yourself here.

Good luck, have fun & thanks for reading,

Tom


Kinect for Windows SDK 2.0 day – Public Preview availability & looking back at MVA

UPDATE 23/07/2014 – The MVA sessions are available on-demand here!

Note – Some of the information provided was already covered in previous articles; I recommend reading them to get deeper into each topic.

Yesterday was a big day for Kinect for Windows – Ben Lower & Rob Relyea hosted a Microsoft Virtual Academy on the day that the public preview of the Kinect for Windows SDK 2.0 was released; it is available here!

Public Preview SDK

The free SDK allows you to get started with the new sensor that leverages improved skeletal tracking, higher depth fidelity, 1080p HD video, new active infrared capabilities, extended field of view, and so on.

This makes the SDK more powerful as well, e.g. improved skeletal tracking that tracks hand states and more joints, or even the highly requested facial expression tracking!

Source – Kinect for Windows blog

The SDK now ships with a new version of Kinect Studio that allows you to record clips and play them back without being connected to a sensor – this is a really big win! Next to that, you can use one sensor in multiple apps at the same time!

Source – Kinect for Windows blog

Last but not least you can now build Windows Store or Unity apps for Kinect as well!

Some of my latest posts already covered the alpha version of the SDK and will give you a kick start, so feel free to read them! The SDK also ships with a lot of samples for WPF, Store apps & C++!

If you have any questions regarding the public preview, feel free to post them on the forum or ask me!

Looking back at the Microsoft Virtual Academy

The academy covered seven modules, going from an introduction over data sources and face tracking to advanced topics like custom sources. It was a pretty nice online event that covered most of the features that are available!

The modules will be available on-demand later on; I recommend watching them!

Source – Kinect for Windows blog

Updated blog samples

I’m glad to announce that my samples have been updated on GitHub to run with the preview SDK. If you notice bugs – feel free to let me know!

You can read the official statement here or order a sensor here!

Thanks for reading,

Tom Kerkhove


First look at Expressions – Displaying expressions for a tracked person

UPDATE (15/07/2014) – The sample is updated based on the public preview SDK.

One of the biggest feature requests was the ability to track the expressions of users. Today I’m happy to tell you that this is now available in the alpha SDK thanks to the face tracking!

In this post I will walk you through the steps to display the expressions for one user, but this is possible for all tracked persons!

Template

I developed a small template that displays the camera so you can follow along; it is available here.

Tutorial

Setting up expression tracking is pretty easy – we just need to set up body tracking, assign a FaceFrameSource to it and start processing the results. This requires us to add two references – Microsoft.Kinect for body tracking & Microsoft.Kinect.Face for face tracking.

As I mentioned in my basic overview we need to create a BodyFrameReader to start receiving BodyFrameReferences in the FrameArrived event.

/// <summary>
/// Body reader
/// </summary>
public BodyFrameReader _bodyReader;

/// <summary>
/// Collection of all tracked bodies
/// </summary>
public Body[] _bodies;
		
/// <summary>
/// Initialize body tracking
/// </summary>
private void InitializeBodyTracking()
{
    // Body Reader
    _bodyReader = _kinect.BodyFrameSource.OpenReader();

    // Wire event
    _bodyReader.FrameArrived += OnBodyFrameReceived;
}

Next we need to determine which FaceFrameFeatures we will use and create a FaceFrameSource & FaceFrameReader as global variables.

/// <summary>
/// Requested face features
/// </summary>
private const FaceFrameFeatures _faceFrameFeatures = FaceFrameFeatures.BoundingBoxInInfraredSpace
														| FaceFrameFeatures.PointsInInfraredSpace
														| FaceFrameFeatures.MouthMoved
														| FaceFrameFeatures.MouthOpen
														| FaceFrameFeatures.LeftEyeClosed
														| FaceFrameFeatures.RightEyeClosed
														| FaceFrameFeatures.LookingAway
														| FaceFrameFeatures.Happy
														| FaceFrameFeatures.FaceEngagement
														| FaceFrameFeatures.Glasses;

/// <summary>
/// Face Source
/// </summary>
private FaceFrameSource _faceSource;

/// <summary>
/// Face Reader
/// </summary>
private FaceFrameReader _faceReader;

Once the BodyFrameReferences arrive we need to create a new FaceFrameSource based on our _kinect-instance. We’ll assign the requested face features & the TrackingId of the first tracked body to the source; this is how a face is linked to a certain body.

Next we will create a new FaceFrameReader-instance and start listening to the FrameArrived & TrackingIdLost-events. Note – this only supports one user.

/// <summary>
/// Process body frames
/// </summary>
private void OnBodyFrameReceived(object sender, BodyFrameArrivedEventArgs e)
{
    // Get Frame ref
    BodyFrameReference bodyRef = e.FrameReference;

    if (bodyRef == null) return;

    // Get body frame
    using (BodyFrame frame = bodyRef.AcquireFrame())
    {
        if (frame == null) return;

        // Allocate array when required
        if (_bodies == null)
            _bodies = new Body[frame.BodyCount];

        // Refresh bodies
        frame.GetAndRefreshBodyData(_bodies);

        foreach (Body body in _bodies)
        {
            if (body.IsTracked && _faceSource == null)
            {
                // Create new sources with body TrackingId
                _faceSource = new FaceFrameSource(_kinect)
                                  {
                                       FaceFrameFeatures = _faceFrameFeatures,
                                       TrackingId = body.TrackingId
                                  };

                // Create new reader
                _faceReader = _faceSource.OpenReader();

                // Wire events
                _faceReader.FrameArrived += OnFaceFrameArrived;
                _faceSource.TrackingIdLost += OnTrackingIdLost;
            }
        }
    }
}

Once everything is set up and the sensor detects the requested face, based on the tracking ID, the methodology is pretty much the same – Retrieve the reference, acquire a frame & process the data.

Here I’m reading all the face properties and displaying them in the UI.

private void OnFaceFrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    // Retrieve the face reference
    FaceFrameReference faceRef = e.FrameReference;

    if (faceRef == null) return;

    // Acquire the face frame
    using (FaceFrame faceFrame = faceRef.AcquireFrame())
    {
        if (faceFrame == null) return;

        // Retrieve the face frame result
        FaceFrameResult frameResult = faceFrame.FaceFrameResult;

        // Display the values
        HappyResult.Text = frameResult.FaceProperties[FaceProperty.Happy].ToString();
        EngagedResult.Text = frameResult.FaceProperties[FaceProperty.Engaged].ToString();
        GlassesResult.Text = frameResult.FaceProperties[FaceProperty.WearingGlasses].ToString();
        LeftEyeResult.Text = frameResult.FaceProperties[FaceProperty.LeftEyeClosed].ToString();
        RightEyeResult.Text = frameResult.FaceProperties[FaceProperty.RightEyeClosed].ToString();
        MouthOpenResult.Text = frameResult.FaceProperties[FaceProperty.MouthOpen].ToString();
        MouthMovedResult.Text = frameResult.FaceProperties[FaceProperty.MouthMoved].ToString();
        LookingAwayResult.Text = frameResult.FaceProperties[FaceProperty.LookingAway].ToString();
    }
}

Copying the Nui database

The last step to get this working is to copy the NuiDatabase folder to the output folder; without it the values will always be “No”.

We will use a simple post-build event in our project settings that will copy it for us -

xcopy "C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0\ExtensionSDKs\Microsoft.Kinect.Face\2.0\Redist\CommonConfiguration\x64\NuiDatabase" "NuiDatabase" /e /y /i /r

The result should look like the following -
post-build event

My guess is that this database contains all the values that the SDK will use to detect happiness, wearing glasses, etc. but I haven’t found documentation on this.

Lost track of body

People come and go, which means that the sensor will lose track of a body. Every time this occurs the ‘TrackingIdLost’ event is raised, and we blank out the values and reset our variables.

private void OnTrackingIdLost(object sender, TrackingIdLostEventArgs e)
{
    // Update UI
    HappyResult.Text = "No face tracked";
    EngagedResult.Text = "No face tracked";
    GlassesResult.Text = "No face tracked";
    LeftEyeResult.Text = "No face tracked";
    RightEyeResult.Text = "No face tracked";
    MouthOpenResult.Text = "No face tracked";
    MouthMovedResult.Text = "No face tracked";
    LookingAwayResult.Text = "No face tracked";

    // Reset values for next body
    _faceReader = null;
    _faceSource = null;
}

Testing the application

When you give the application a spin, this is how it should look –
result

Conclusion

In this post I illustrated how easy it is to set up expression tracking for one person and what it allows you to do, e.g. gathering user feedback when people see a new product at a conference.

Keep in mind that the sensor is able to track up to six persons and your algorithm should support this as well.
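
A rough sketch of how the single-user sample above could be extended – one FaceFrameSource/FaceFrameReader pair per body slot – is shown below; this is my own assumption, not the code from the download.

// Assumed extension to all six body slots (one face source/reader per slot)
private FaceFrameSource[] _faceSources;
private FaceFrameReader[] _faceReaders;

private void InitializeFaceTracking()
{
    int bodyCount = _kinect.BodyFrameSource.BodyCount;

    _faceSources = new FaceFrameSource[bodyCount];
    _faceReaders = new FaceFrameReader[bodyCount];

    for (int i = 0; i < bodyCount; i++)
    {
        // TrackingId is assigned later, once a body is actually tracked
        _faceSources[i] = new FaceFrameSource(_kinect) { FaceFrameFeatures = _faceFrameFeatures };
        _faceReaders[i] = _faceSources[i].OpenReader();
        _faceReaders[i].FrameArrived += OnFaceFrameArrived;
    }
}

// In OnBodyFrameReceived, keep each source pointed at the matching body:
// for (int i = 0; i < _bodies.Length; i++)
//     if (_bodies[i].IsTracked) _faceSources[i].TrackingId = _bodies[i].TrackingId;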

Download my full code sample here.

Thanks for reading,

Tom.


dotnetConf – Kinect for Windows introduction by Ben Lower

dotnetconf_logo
Last week Ben Lower gave a wonderful introduction session on Kinect for Windows Gen. II for dotnetConf!

He talked about the differences with the first sensor, gave an introduction to Kinect Studio, talked about some scenarios, showed how you can use interactions in Windows Store apps and covered the samples that are part of the SDK.

You can watch the session here or attend the Kinect for Windows jump start on 15th of July!
