//BUILD/ 2014 – Introduction to Kinect, releasing this summer and support for Unity & Windows Store apps

This week the annual Microsoft event //build/ took place in San Francisco with tons of new announcements! A few of these announcements focus on Kinect for Windows.

Here is a small summary of BUILD 2014!

Develop Windows Store apps with Kinect for Windows!

The biggest announcement was that you will be able to develop Windows Store apps that use Kinect for Windows! You can do this with your favorite set of tools, e.g. XAML & C#.

Unfortunately you will not be able to target RT devices because the app still requires the Kinect drivers & runtime to be installed on the device. On top of that, RT tablets lack the CPU/GPU bandwidth that is required. Keep in mind that you will have to check for a sensor when your app starts to maintain a nice user experience, as sketched below.
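
Such a check could look roughly like the snippet below. This is only a sketch based on the desktop-style API covered later on this blog (KinectSensor.Default & the IsAvailable flag); the exact Windows Store API may differ, and AvailabilityText is just a hypothetical UI element.

// Hypothetical start-up check - the exact Windows Store API may differ from this desktop-style sketch
KinectSensor sensor = KinectSensor.Default;

if (sensor == null)
{
	// No Kinect driver/runtime or no sensor attached - degrade gracefully
	AvailabilityText.Text = "A Kinect for Windows sensor is required for the full experience.";
}
else
{
	// Open the sensor; frames will only arrive while IsAvailable is true
	sensor.Open();
}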

At BUILD they also announced universal apps that you can build for Windows 8.1, Windows Phone 8.1 and Xbox One. This is not possible with Kinect for Windows at the moment; only desktop and Windows Store apps are supported!

Store logo

Introducing Unity support

On top of the currently supported tools you are now also able to develop Unity applications for the desktop or Windows Store by using a Unity plug-in!

With this, the huge and growing Unity community is joining the Kinect family; there are already a lot of developers in the program creating awesome stuff like virtual worlds!

Unity logo

//BUILD/ Sessions

Kinect 101 – Introduction to Kinect for Windows

Chris White gave an introduction session on Kinect for Windows, talking about the capabilities of the sensor & what the hardware offers. Next to that he gave the audience an idea of what applications are currently in use and how they are built. Last but not least he gave some demos comparing the two generations and pointed out the differences between them!

You can watch his session here.

Bringing Kinect into Your Windows Store App

The second Kinect for Windows session at //build/ was by Kevin Kennedy, on how you can build Windows Store apps that use Kinect for Windows!

He starts off with a quick introduction to Kinect and a high-level overview of the architecture, after which he explains how the Kinect methodology works. In the coding demos he creates a Windows Store app that visualizes the infrared stream and maps the HandLeft-joint onto it by using the BodyStream!

Very nice introduction to Windows Store apps and well explained!
This session is available on-demand here.

Start coding this summer!

They also announced that Kinect for Windows Gen. II will be generally available this summer!
They did not announce in which countries or what the price will be, but hold on tight!

Last week they also revealed the sexy looks of the sensor; read my previous post!

The release is coming closer and I’m looking forward to seeing what you guys build with it!

Getting Started & Additional information

Want to get started? Feel free to read some of my introduction posts -

  • Create your own Kinect Television with Gen. I Kinect for Windows here!
  • Read my introduction to the Kinect for Windows Gen. II streams here
  • Read how I started developing an application to control my AR Drone here

Here is some additional information -

  • Read the official statement – Windows Store app development is coming to K4W (post)
  • Read the official statement – BUILDing business with Kinect for Windows v2 (post)
  • Download the Kinect for Windows Gen. I SDK (link)
  • Download the Kinect for Windows Gen. I Toolkit (link)
  • Download the Human Interface Guidelines (link)

Thanks for reading,

Tom.

Posted in News | Leave a comment

Kinect for Windows Gen. II hardware is revealed!

Last week the official Kinect for Windows product blog published the very first official photos of the final Gen. II hardware!

Here is a photo of the Kinect for Windows sensor, where the Xbox One logo has been replaced with a status light and “Kinect” branding on top -
Official K4W Gen. II

The second picture is of the power supply, which is a lot smaller than the current alpha version's!
Official K4W Gen. II Adapter

You can read the full official post here.

Posted in News, Second Generation Kinect for Windows | 1 Comment

Donating €100 of compute to Global Windows Azure Bootcamp & charity

bootcamp
Next to Kinect for Windows I love playing around with Microsoft's cloud platform, Windows Azure. One of the lovely things about Windows Azure is its big community.

On the 29th of March the second edition of the Global Windows Azure Bootcamp takes place, a free one-day training event driven by local Windows Azure community enthusiasts and experts. It consists of a day of sessions, labs and a global lab that will donate its tremendous computing power to charity. To support this event I’m donating €100 of compute power!

GWAB is an initiative of Windows Azure MVPs Maarten Balliauw, Magnus Mårtensson, Mike Martin, Alan Smith & Michael Wood.

Computing for charity

This year, the organizers decided to put all this computing power to work supporting medical research on diabetes, by hosting a globally distributed lab in which attendees of the event will deploy virtual machines in Windows Azure. These will help analyse data related to specific sugars called glycans, which are being studied as an early marker for Type 2 diabetes.

Read more about the charity lab here!

Where do I sign up?!

With more than 135 locations around the globe in 50+ countries I hope you can attend one of the events; here is an overview!

I am co-hosting a location with Glenn Colpaert & AZUG in Kortrijk, Belgium.

Last year the global lab used a Kinect to capture people and render them in the cloud; here is the aftermovie!

This blog focuses on Kinect for Windows, but this community/charity event is way too awesome to ignore!

Happy Kinecting,

Tom.

Posted in Community | Leave a comment

[Tutorial] Gen. II Kinect for Windows – Basics Overview

After a theoretical overview it is time to get our hands dirty and start with a basic application that will visualize the basic streams – Color, depth, infrared & body tracking.

Disclaimer

Although this is a tutorial, I am bound to the Kinect for Windows Developer Program, which means I can’t share the SDK/DLL.

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

What you will learn

This tutorial covers the following aspects -

  • Introduction to the alpha SDK
  • Visualize the camera
  • Depth indication
  • Display the infrared stream
  • Body/Skeletal tracking on top of the camera output

tutorial_sample

Prerequisites

In order to follow the tutorial you will need the following -

  • Windows 8/8.1
  • Visual Studio 2013
  • Basic C# & WPF knowledge
  • Kinect for Windows alpha sensor & SDK

Template

For this tutorial I’ve created a basic WPF template that we will use; you can download it here.

I. Introduction to the new SDK

This tutorial is based on the v2 alpha version (Nov-13) of the SDK, and some core functionality has changed due to the new SDK “architecture”.

The SDK is built on top of the Kinect Core API, while Xbox One applications will use a separate SDK built on top of that same core.

tutorial_sample

Sensor data model

The core API uses a different, more Modern Style App-ish data model, but the SDK doesn’t support Modern Style apps, as noted in my previous post.

In the first generation a KinectSensor raises an event, e.g. ColorFrameReady, that gives us a frame with all the data. This leaves us with one sensor that can attach one event handler for each type of stream.

tutorial_sample
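
For comparison, this pattern in the first-generation SDK looks roughly like the sketch below (trimmed to the essentials, assuming a _kinectV1 field holding a v1 sensor) -

// First generation: one sensor, one event handler per stream type
_kinectV1.ColorFrameReady += OnColorFrameReady;

private void OnColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
	// Open the frame that was announced by the event
	using (ColorImageFrame frame = e.OpenColorImageFrame())
	{
		if (frame == null) return;

		// Copy & process the pixel data here
	}
}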

The second generation introduces sources & readers. Each type of input is represented as a source, e.g. ColorFrameSource, which can open multiple readers for the same source. Each reader can attach an event handler to its FrameArrived event, which exposes a frame with its data ready for processing.

tutorial_sample

The first generation also has an event called AllFramesReady that exposes frames for all input types. The good news is that the second generation introduces a MultiSourceFrameReader that you can open for a combination of FrameSourceTypes, one for each output type.

Here is an example of opening a MultiSourceFrameReader for Color & Depth -

_multiFrameReader = _kinect.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Depth);

In this tutorial I will use a reader for each type of data; more about the MultiSourceFrameReader in a later post.
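
To give a first idea of how such a reader could be consumed, here is a rough sketch following the same reference/acquire pattern used throughout this tutorial. Keep in mind that the exact event & property names may differ in the alpha SDK -

// Hook-up the multi-source event (sketch - names may differ in the alpha SDK)
_multiFrameReader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;

private void OnMultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
	// Acquire the combined frame (may be null if we were too late)
	MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();

	if (multiFrame == null) return;

	// Each source exposes its own frame reference that is acquired & disposed separately
	using (ColorFrame colorFrame = multiFrame.ColorFrameReference.AcquireFrame())
	using (DepthFrame depthFrame = multiFrame.DepthFrameReference.AcquireFrame())
	{
		if (colorFrame == null || depthFrame == null) return;

		// Process both frames here, just like the per-source handlers below
	}
}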

Connection lost? No problem!

Losing the connection to your sensor is no longer a problem: the KinectSensor object will still be valid and our code will not crash.

KinectSensor now has an IsAvailable flag that indicates whether the sensor is still connected, so you can check its state.

If the sensor is unavailable no frames will arrive, which makes sense.
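
As a small sketch (assuming the alpha API behaves as described above, and using a hypothetical StatusText element in the UI), such a check could look like this -

private void CheckSensorAvailability()
{
	if (_kinect == null || !_kinect.IsAvailable)
	{
		// No frames will arrive - tell the user instead of failing silently
		StatusText.Text = "Kinect sensor is not available, please check the connection.";
	}
	else
	{
		StatusText.Text = "Kinect sensor is connected.";
	}
}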

II. Getting started

Time to get our hands dirty and start by adding a reference to the new DLL! An important note is that you need to set the “Platform target” to x64, as this is a requirement.

build action

After that we are ready to rock and will start by calling a new method “InitializeKinect” in the CTOR of our MainWindow.

public MainWindow()
{
	InitializeComponent();

	// Initialize Kinect
	InitializeKinect();
}

In this method we will retrieve the default KinectSensor that represents our sensor and, if one is found, open it for usage. After that we call four new methods to initialize the basic streams.

private KinectSensor _kinect = null;

private void InitializeKinect()
{
	// Get first Kinect
	_kinect = KinectSensor.Default;

	if (_kinect == null) return;

	// Open connection
	_kinect.Open();

	// Initialize Camera
	InitializeCamera();

	// Initialize Depth
	InitializeDepth();

	// Initialize Infrared
	InitializeInfrared();

	// Initialize Body
	InitializeBody();
}

Before we move on, it is important to expand the MainWindow CTOR with a Closing event handler in which we close the connection if required.

public MainWindow()
{
	...

	// Close Kinect when closing app
	Closing += OnClosing;
}

private void OnClosing(object sender, System.ComponentModel.CancelEventArgs e)
{
	// Close Kinect
	if (_kinect != null) _kinect.Close();
}

III. Visualizing the camera

It is time to visualize the camera output, and this requires some variables.

/// <summary>
/// Size of an RGB pixel in the bitmap
/// </summary>
private readonly int _bytePerPixel = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;

/// <summary>
/// FrameReader for our color output
/// </summary>
private ColorFrameReader _colorReader = null;

/// <summary>
/// Array of color pixels
/// </summary>
private byte[] _colorPixels = null;

/// <summary>
/// Color WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _colorBitmap = null;

We can now use these variables to initialize our camera by first checking if a sensor was found. After that we will request the metadata for the ColorFrameSource of our sensor as a FrameDescription-object. We will use this description to allocate our pixel array based on the dimensions of the color output & the number of bytes per pixel.

_colorReader will represent our ColorFrameReader, bound to the ColorFrameSource, which we will hook up to the FrameArrived-event. The last thing we need to initialize is our WriteableBitmap, which we will use to write our color data. This WriteableBitmap will be linked to the Source of our Image-control.

private void InitializeCamera()
{
	if (_kinect == null) return;

	// Get frame description for the color output
	FrameDescription desc = _kinect.ColorFrameSource.FrameDescription;

	// Get the framereader for Color
	_colorReader = _kinect.ColorFrameSource.OpenReader();

	// Allocate pixel array
	_colorPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_colorBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	CameraImage.Source = _colorBitmap;

	// Hook-up event
	_colorReader.FrameArrived += OnColorFrameArrived;
}

Each FrameArrived-event is processed the same way for all types of data – you get a FrameReference from the EventArgs and use its AcquireFrame-method to get the corresponding frame.

In our scenario this means that we will retrieve the FrameReference from the ColorFrameArrivedEventArgs and use this to acquire the ColorFrame.

After that the processing is as easy as it was in Gen I – Validate the data, copy it, show it.

First we check, based on the FrameDescription, whether the size matches. After that we check what the RawColorImageFormat of our frame is and copy the raw data if it is ColorImageFormat.Bgra; if not, we copy the converted frame data to our output.

Last but not least, we write the pixel array to our WriteableBitmap and our Image control will be updated automatically!

private void OnColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
	// Get the reference to the color frame
	ColorFrameReference colorRef = e.FrameReference;

	if (colorRef == null) return;

	// Acquire frame for specific reference
	ColorFrame frame = colorRef.AcquireFrame();

	// It's possible that we skipped a frame or it is already gone
	if (frame == null) return;

	using (frame)
	{
		// Get frame description
		FrameDescription frameDesc = frame.FrameDescription;

		// Check if width/height matches
		if (frameDesc.Width == _colorBitmap.PixelWidth && frameDesc.Height == _colorBitmap.PixelHeight)
		{
			// Copy data to array based on image format
			if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
			{
				frame.CopyRawFrameDataToArray(_colorPixels);
			}
			else frame.CopyConvertedFrameDataToArray(_colorPixels, ColorImageFormat.Bgra);

			// Copy output to bitmap
			_colorBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_colorPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

IV. Depth indication

To provide a depth indication we also need some new variables

/// <summary>
/// FrameReader for our depth output
/// </summary>
private DepthFrameReader _depthReader = null;

/// <summary>
/// Array of depth values
/// </summary>
private ushort[] _depthData = null;

/// <summary>
/// Array of depth pixels used for the output
/// </summary>
private byte[] _depthPixels = null;

/// <summary>
/// Depth WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _depthBitmap = null;		

This method is very similar to the initialization of the color output, but next to a _depthPixels array that will hold our output pixels we also allocate a ushort array that will hold the depth value for each pixel.

private void InitializeDepth()
{
	if (_kinect == null) return;

	// Get frame description for the depth output
	FrameDescription desc = _kinect.DepthFrameSource.FrameDescription;

	// Get the framereader for Depth
	_depthReader = _kinect.DepthFrameSource.OpenReader();

	// Allocate pixel array
	_depthData = new ushort[desc.Width * desc.Height];
	_depthPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_depthBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	DepthImage.Source = _depthBitmap;

	// Hook-up event
	_depthReader.FrameArrived += OnDepthFrameArrived;
}

After we’ve acquired our DepthFrame we will first validate our data, after which we copy the depth data to our _depthData array. Next we save the minimum & maximum reliable distances from the frame and we can start visualizing the distance.

We will go through the depth data and assign specific values to the depth pixels.

As you can see this is done for the first three bytes while the fourth is skipped; this is because we are in a BGRA scenario and we don’t want to assign a value to the alpha channel.

If the distance is 0 we represent this with a yellow value; if the value is out of bounds (which should rarely happen) we assign a red color.

In the last scenario we are dealing with “valid” data and we want to visualize it in waves of 250 mm. We calculate in which 250 mm band the distance falls, multiply it by a basic factor, here 12.75, and assign the result to the BGR values. For example, a depth of 2000 mm falls in band 8 (2000 / 250), which becomes a gray value of 8 × 12.75 = 102.

private void OnDepthFrameArrived(object sender, DepthFrameArrivedEventArgs e)
{
	DepthFrameReference refer = e.FrameReference;

	if (refer == null) return;

	DepthFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	using (frame)
	{
		FrameDescription frameDesc = frame.FrameDescription;

		if (((frameDesc.Width * frameDesc.Height) == _depthData.Length) && (frameDesc.Width == _depthBitmap.PixelWidth) && (frameDesc.Height == _depthBitmap.PixelHeight))
		{
			// Copy depth frames
			frame.CopyFrameDataToArray(_depthData);

			// Get min & max depth
			ushort minDepth = frame.DepthMinReliableDistance;
			ushort maxDepth = frame.DepthMaxReliableDistance;

			// Adjust visualisation
			int colorPixelIndex = 0;
			for (int i = 0; i < _depthData.Length; ++i)
			{
				// Get depth value
				ushort depth = _depthData[i];

				if (depth == 0)
				{
					_depthPixels[colorPixelIndex++] = 41;
					_depthPixels[colorPixelIndex++] = 239;
					_depthPixels[colorPixelIndex++] = 242;
				}
				else if (depth < minDepth || depth > maxDepth)
				{
					_depthPixels[colorPixelIndex++] = 25;
					_depthPixels[colorPixelIndex++] = 0;
					_depthPixels[colorPixelIndex++] = 255;
				}
				else
				{
					double gray = (Math.Floor((double)depth / 250) * 12.75);

					_depthPixels[colorPixelIndex++] = (byte)gray;
					_depthPixels[colorPixelIndex++] = (byte)gray;
					_depthPixels[colorPixelIndex++] = (byte)gray;
				}

				// Increment
				++colorPixelIndex;
			}

			// Copy output to bitmap
			_depthBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_depthPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

My result looks like the following (hence the distance “waves”) -

build action

NOTE - This is alpha hardware

V. Displaying the Infrared stream

Displaying the infrared stream is very similar to the depth visualization, as you will notice.

Add the following variables, which we will use to display the infrared stream.

/// <summary>
/// FrameReader for our infrared output
/// </summary>
private InfraredFrameReader _infraReader = null;

/// <summary>
/// Array of infrared data
/// </summary>
private ushort[] _infraData = null;

/// <summary>
/// Array of infrared pixels used for the output
/// </summary>
private byte[] _infraPixels = null;

/// <summary>
/// Infrared WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _infraBitmap = null;

Initializing the infrared stream is similar to the depth initialization but based on the InfraredFrameSource; we also have to allocate arrays for the infrared data & pixel output.

private void InitializeInfrared()
{
	if (_kinect == null) return;

	// Get frame description for the infrared output
	FrameDescription desc = _kinect.InfraredFrameSource.FrameDescription;

	// Get the framereader for Infrared
	_infraReader = _kinect.InfraredFrameSource.OpenReader();

	// Allocate pixel array
	_infraData = new ushort[desc.Width * desc.Height];
	_infraPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_infraBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	InfraredImage.Source = _infraBitmap;

	// Hook-up event
	_infraReader.FrameArrived += OnInfraredFrameArrived;
}

As with the depth processing we acquire our frame and validate it against the bounds of our bitmap & infrared data array.

Once we are sure that everything is valid we copy the infrared data to our infrared data array, ready to loop over it.

Each cycle we read a 16-bit ushort with the infrared value, which we bit-shift down to an 8-bit byte that we assign to our infrared output.

We’re discarding the least significant bits, and therefore the impact on our result is minimal.

private void OnInfraredFrameArrived(object sender, InfraredFrameArrivedEventArgs e)
{
	// Reference to infrared frame
	InfraredFrameReference refer = e.FrameReference;

	if (refer == null) return;

	// Get infrared frame
	InfraredFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	// Process it
	using (frame)
	{
		// Get the description
		FrameDescription frameDesc = frame.FrameDescription;

		if (((frameDesc.Width * frameDesc.Height) == _infraData.Length) && (frameDesc.Width == _infraBitmap.PixelWidth) && (frameDesc.Height == _infraBitmap.PixelHeight))
		{
			// Copy data
			frame.CopyFrameDataToArray(_infraData);

			int colorPixelIndex = 0;

			for (int i = 0; i < _infraData.Length; ++i)
			{
				// Get infrared value
				ushort ir = _infraData[i];

				// Bitshift
				byte intensity = (byte)(ir >> 8);

				// Assign infrared intensity
				_infraPixels[colorPixelIndex++] = intensity;
				_infraPixels[colorPixelIndex++] = intensity;
				_infraPixels[colorPixelIndex++] = intensity;

				++colorPixelIndex;
			}

			// Copy output to bitmap
			_infraBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_infraPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

This is what my infrared stream looks like -

build action

VI. Body tracking, the new skeletal tracking

The last part of the tutorial is my favorite – body tracking, the new skeletal tracking.

There are no longer skeletons; everything is a body from now on, and the sensor fully tracks 6 bodies with 25 joints each.

We will visualize all the joints along with the state of the hand.

First things first, we will need an array of bodies and a BodyFrameReader to process the frames.

/// <summary>
/// All tracked bodies
/// </summary>
private Body[] _bodies = null;
		
/// <summary>
/// FrameReader for our body frames
/// </summary>
private BodyFrameReader _bodyReader = null;

Initializing our body tracking is very easy – allocate the bodies array with the correct size based on BodyCount, open a reader and start listening for new frames.

private void InitializeBody()
{
	if (_kinect == null) return;

	// Allocate Bodies array
	_bodies = new Body[_kinect.BodyFrameSource.BodyCount];

	// Open reader
	_bodyReader = _kinect.BodyFrameSource.OpenReader();

	// Hook-up event
	_bodyReader.FrameArrived += OnBodyFrameArrived;
}

Once again we will be able to get a FrameReference from the event args that we can use to acquire a BodyFrame. With this frame we can refresh our array of bodies so we can loop them and draw the tracked ones in a new method called DrawBody.

(Note that we first clear our SkeletonCanvas, which is a Canvas on top of our Image-control.)

private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
	// Get frame reference
	BodyFrameReference refer = e.FrameReference;

	if (refer == null) return;

	// Get body frame
	BodyFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	using (frame)
	{
		// Acquire body data
		frame.GetAndRefreshBodyData(_bodies);

		// Clear Skeleton Canvas
		SkeletonCanvas.Children.Clear();

		// Loop all bodies
		foreach (Body body in _bodies)
		{
			// Only process tracked bodies
			if (body.IsTracked)
			{
				DrawBody(body);
			}
		}
	}
}

Drawing tracked joints

In our new DrawBody-method we are going to loop all the joint keys for a body and visualize them by using a new DrawJoint method.

Next to the joint we will pass in the radius, color, border width and border color for our joint.

private void DrawBody(Body body)
{
	// Draw points
	foreach (JointType type in body.Joints.Keys)
	{
		// Draw all the body joints
		switch (type)
		{
			case JointType.Head:
			case JointType.FootLeft:
			case JointType.FootRight:
				DrawJoint(body.Joints[type], 20, Brushes.Yellow, 2, Brushes.White);
				break;
			case JointType.ShoulderLeft:
			case JointType.ShoulderRight:
			case JointType.HipLeft:
			case JointType.HipRight:
				DrawJoint(body.Joints[type], 20, Brushes.YellowGreen, 2, Brushes.White);
				break;
			case JointType.ElbowLeft:
			case JointType.ElbowRight:
			case JointType.KneeLeft:
			case JointType.KneeRight:
				DrawJoint(body.Joints[type], 15, Brushes.LawnGreen, 2, Brushes.White);
				break;
			default:
				DrawJoint(body.Joints[type], 15, Brushes.RoyalBlue, 2, Brushes.White);
				break;
		}
	}
}

First we check if the joint is tracked; otherwise we ignore it. Second, we map the joint position from camera space to color space by using the CoordinateMapper so that the new position lands on the correct location on our image.
Last but not least we create a WPF Ellipse control based on the specified values and add it to our SkeletonCanvas. After some extra checks we align it to the correct position on the canvas.

private void DrawJoint(Joint joint, double radius, SolidColorBrush fill, double borderWidth, SolidColorBrush border)
{
	if (joint.TrackingState != TrackingState.Tracked) return;
	
	// Map the CameraPoint to ColorSpace so they match
	ColorSpacePoint colorPoint = _kinect.CoordinateMapper.MapCameraPointToColorSpace(joint.Position);

	// Create the UI element based on the parameters
	Ellipse el = new Ellipse();
	el.Fill = fill;
	el.Stroke = border;
	el.StrokeThickness = borderWidth;
	el.Width = el.Height = radius;

	// Add the Ellipse to the canvas
	SkeletonCanvas.Children.Add(el);

	// Avoid exceptions based on bad tracking
	if (float.IsInfinity(colorPoint.X) || float.IsInfinity(colorPoint.Y)) return;

	// Align ellipse on canvas (divide by 2 because image is only 50% of original size)
	Canvas.SetLeft(el, colorPoint.X / 2);
	Canvas.SetTop(el, colorPoint.Y / 2);
}

Drawing the hand state

Since the November release it is also possible to track the HandState of a hand, indicating the following states -
build action
We will visualize the Open, Closed & Lasso state for both left & right hand.

To do so, add two extra cases to the switch in our DrawBody-method that call a new method ‘DrawHandJoint’, where we pass in the joint, the HandState for the corresponding hand and some UI parameters.

case JointType.HandLeft:
	DrawHandJoint(body.Joints[type], body.HandLeftState, 20, 2, Brushes.White);
	break;
case JointType.HandRight:
	DrawHandJoint(body.Joints[type], body.HandRightState, 20, 2, Brushes.White);
	break;

This new method simply switches over the supported HandStates and calls our DrawJoint-method, assigning a specific color to the fill of our ellipse so we have visual feedback.

private void DrawHandJoint(Joint joint, HandState handState, double radius, double borderWidth, SolidColorBrush border)
{
	switch (handState)
	{
		case HandState.Lasso:
			DrawJoint(joint, radius, Brushes.Cyan, borderWidth, border);
			break;
		case HandState.Open:
			DrawJoint(joint, radius, Brushes.Green, borderWidth, border);
			break;
		case HandState.Closed:
			DrawJoint(joint, radius, Brushes.Red, borderWidth, border);
			break;
		default:
			break;
	}
}

Conclusion

In this post we’ve learned how we can implement the basic streams (color, depth, infrared & body) and visualize them for the user.
I hope you’ve noticed that each output type uses the same principles and it is only a matter of processing the data!

Remember this – Connect, listen, acquire, process & disconnect.

You can download my complete demo here.

Thanks for reading,

Tom.

Posted in Kinect for Windows Developer Program, Second Generation Kinect, Tutorial | Tagged , , | 9 Comments

Second Gen. Kinect for Windows – What’s new?

It has been a while since the alpha version of the second generation of Kinect for Windows was released. At first I was not going to write any 101 posts because there are already a lot of them out there, but why not? In this post I will give a theoretical overview of what is included in the November version of the new SDK.

Everything in this post is based on alpha hardware & an alpha SDK; this is still a work in progress.

Disclaimer

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

I. Hardware

The hardware is improved in several ways, e.g. the new IR technology – the sensor now uses time-of-flight technology to calculate the distance between the sensor and the object for each pixel of the image by measuring the time a light signal travels to the object and back to the sensor.
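
To illustrate the basic math behind this (purely illustrative, not how the sensor firmware actually works) -

// Illustrative time-of-flight calculation: distance = (speed of light * round-trip time) / 2
const double speedOfLight = 299792458.0;                   // meters per second
double roundTripTime = 0.000000006;                        // example: 6 nanoseconds
double distance = (speedOfLight * roundTripTime) / 2.0;    // ~0.9 meters between sensor and object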

Tilt motor is no more

The first thing I noticed was that there is no longer a tilt motor that can be controlled from code. You can still tilt the camera manually, but because of the improved FoV a motor is no longer needed.

II. Features

The focus of the first alpha SDK is on the core elements – color, depth, IR & body.
Unfortunately nothing else has been implemented yet, and no news concerning audio, face tracking, fusion & interaction is available.

For interaction you can build your own controls, but this requires some effort if you’re new to Kinect.

Color

Color is now available in RGBA, BGRA or YUY2 format at 1920×1080, as noted before.
This resolution is fixed, in contrast to the first generation where you could specify the requested resolution.

The frame rate for the color data is light-sensitive and will automatically drop when there is too little light; more on this in a later post.

Data stream | Resolution  | Field-of-View          | Frames/Second
Color       | 1920 x 1080 | +/- 85° (H) – 55° (V)  | 15 fps / 30 fps
(Specifications gathered with the SDK)

Depth & IR

The depth range has been increased to 0.5 m – 4.5 m and uses one mode to rule them all.
This means you no longer need to decide whether or not to use Near mode.

Data stream | Resolution | Field-of-View     | Frames/Second
Depth & IR  | 512 x 424  | 70° (H) – 60° (V) | 30 fps
(Specifications gathered with the SDK)

Body, the new skeletal tracking

Skeletal tracking has been renamed to “Body” and is now capable of tracking 6 persons completely, each with 25 joints.

The biggest improvement is that each hand now has a separate joint for the thumb and one for the other four fingers.
body_v2

  • Hand tracking - Each hand of a body now has an indication of what state it is in, e.g. Open, Closed or Lasso, where Lasso is pointing with two fingers
  • Activities - Indication of the user's facial activity, e.g. left eye closed, mouth open, etc. (more might be added later)
  • Leaning - Indication of whether the user is leaning to the left or right
  • Appearance - Tells more about the user, e.g. whether he/she is wearing glasses (more might be added later)
  • Expressions - Expression of the current person, e.g. happy or neutral (more might be added later)
  • Engaged - Indication of whether the user is looking at the sensor or not

NOTE – Activities, leaning, appearances, expressions & engaged are already included in the API but not available yet.

III. Other

Kinect Studio

After installing the SDK I also noticed that Kinect Studio isn’t available yet, but it will surely ship with the official SDK. Good to know is that one of the most highly requested features is “offline” support for debugging purposes.

In the first generation you still need to be connected to a sensor to start simulating, but this isn’t always possible, e.g. when coding on an airplane.

Supported systems

Here is a small overview of the supported systems for the alpha SDK, in a VM or on a native machine.

  • Windows Embedded or Windows 8+ is required for the SDK. It is still not possible to create Windows Store apps, for the same reason as in v1 – you can stream the data from desktop mode to your app, but your app won’t pass certification because of this streaming.
  • Windows 7 is not officially supported at the moment because Win8+ has improved USB 3.0 support
  • The Micro Framework is not supported due to insufficient processing power

Multiple sensor applications

Since the developer program is still in progress nobody has tried to combine multiple sensors yet, but my guess is that it will support 4 sensors on one machine, like v1.

IV. Conclusion

The first version looks like a big step ahead concerning the specs, but some functionality is still unclear; time will tell.

In my next post I will tell you how you can create your first Kinect v2 application that will demonstrate all the core datastreams.

Posted in Kinect for Windows Developer Program, News | Tagged | 1 Comment

MVP Award

I started blogging about Kinect, after one year of fooling around with Kinect since Beta v2 and a Kinect for Xbox sensor, to share my pitfalls, updates, etc., and this evolved into sharing as much as I could.
In the past I have mentored students, lectured at MIC Vlaanderen, given my first sessions in the community, etc. to help people in their Kinect adventure, and I’m not thinking about stopping.

My goal is to keep assisting people in thinking about Kinect scenarios, telling them what the sensor is capable of, teaching them how to use it and helping them think of the best solution.

On the 1st of January 2014, after more than two years of Kinecting, I received an email saying that I had been granted the Microsoft ‘Most Valuable Professional’ (MVP) award for my contributions to the community in Belgium & beyond. I feel truly honored to serve next to the other MVPs!

This award is a big motivation and keeps me going to do my best to assist you in your Kinect quest.

Tom.

MVP Award

Posted in News | 4 Comments

Kinect for Windows Developer Kit – The journey continues…

This week I received a very nice gift – My Kinect for Windows developer kit!

You may have already seen a lot of blog posts on unboxing the sensor, etc., but I won’t be blogging the same. At first my focus was on creating 101 tutorials, but since everyone is doing this my plans are going in another direction.
The new sensor opens a new era for Kinect, and this made me see some possibilities that you might hear about in the future.

Note - With the arrival of the dev kit, my blog series on Kinecting the AR Drone is put on hold, but part I is already available here.

Kinect v2

Posted in General, Kinect for Windows Developer Program, Second Generation Kinect for Windows | 4 Comments