[Tutorial] Gen. II Kinect for Windows – Basics Overview

UPDATE (15/07/2014) – The sample is updated based on the public preview SDK.

After the theoretical overview it is time to get our hands dirty and start with a basic application that visualizes the basic streams – color, depth, infrared & body tracking.

What you will learn

This tutorial covers the following topics -

  • Introduction to the alpha SDK
  • Visualize the camera
  • Depth indication
  • Display the infrared stream
  • Body/Skeletal tracking on top of the camera output

[Screenshot: the tutorial sample application]

Prerequisites

In order to follow the tutorial you will need the following -

  • Windows 8/8.1
  • Visual Studio 2013
  • Basic C# & WPF knowledge
  • Kinect for Windows alpha sensor & SDK

Template

For the sake of this tutorial I’ve created a basic WPF template that we will build on; you can download it here.

I. Introduction to the new SDK

This tutorial is based on the v2 alpha version (Nov-13) of the SDK, and some core functionality has changed due to the new SDK “architecture”.

The SDK is built on top of the Kinect Core API; Xbox One applications will use a separate SDK built on top of that same core API.

[Diagram: SDK architecture]

Sensor data model

The core API uses a different, more Modern Style App-ish data model, but the SDK doesn’t support Modern Style Apps, as noted in my previous post.

In the first generation a KinectSensor raises an event, e.g. ColorFrameReady, that gives us a frame with all the data. This leaves us with one sensor that can attach one event handler for each type of stream.

[Diagram: Gen I – one sensor, one event handler per stream]
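For comparison, hooking up the color stream in Gen I looked roughly like this – a minimal sketch from the v1 SDK, trimmed to the essentials:

// Gen I: one sensor raises one event per stream
KinectSensor sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected); // requires System.Linq

sensor.ColorStream.Enable();
sensor.ColorFrameReady += (s, e) =>
{
	using (ColorImageFrame frame = e.OpenColorImageFrame())
	{
		if (frame == null) return;

		// Copy & render the frame data here
	}
};
sensor.Start();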

The second generation introduces sources & readers. Each type of input is represented as a source, e.g. ColorFrameSource, that can open multiple readers for the same source. Each reader can attach an event handler to its FrameArrived event, which exposes the frame and its data ready for processing.

[Diagram: Gen II – sources & readers]

The first generation also has an event called AllFramesReady that exposes frames for all the input types. The good news is that the second generation introduces a MultiSourceFrameReader that you can open for a combination of FrameSourceTypes, one flag for each output type you need.

Here is an example of opening a MultiSourceFrameReader for Color & Depth -

_multiFrameReader = _kinect.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Depth);

In this tutorial I will use a reader for each type of data, more about the MultiSourceFrameReader in a later post.
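To give you an idea of the shape already, processing such a multi-source frame could look roughly like this – a sketch based on the public preview API, so double-check the member names against your SDK build:

private void OnMultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
	// Acquire the container frame that bundles the requested sources
	MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();

	if (multiFrame == null) return;

	// Acquire the individual frames through their references
	using (ColorFrame colorFrame = multiFrame.ColorFrameReference.AcquireFrame())
	using (DepthFrame depthFrame = multiFrame.DepthFrameReference.AcquireFrame())
	{
		if (colorFrame == null || depthFrame == null) return;

		// Process color & depth here, knowing both belong to the same moment in time
	}
}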

Connection lost? No problem!

Losing the connection to your sensor is no longer a problem; the KinectSensor object will still be valid and our code will not crash.

KinectSensor now has an IsAvailable flag that indicates whether a physical sensor is still connected, so you can check on the state.

If the sensor is unavailable, no frames will arrive – which makes sense.
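A minimal sketch of reacting to the availability – note that the IsAvailableChanged event is taken from the public preview SDK, so treat it as an assumption if you are still on older alpha bits:

// In InitializeKinect(), after opening the sensor
_kinect.IsAvailableChanged += OnKinectAvailableChanged;

private void OnKinectAvailableChanged(object sender, IsAvailableChangedEventArgs e)
{
	// e.IsAvailable mirrors _kinect.IsAvailable
	Title = e.IsAvailable ? "Kinect - connected" : "Kinect - sensor unavailable";
}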

II. Getting started

Time to get our hands dirty and start by adding a reference to the new DLL! Note that you need to set the “Platform target” to x64, as this is a requirement.

[Screenshot: project properties – Platform target set to x64]

After that we are ready to rock and will start by calling a new method “InitializeKinect” in the CTOR of our MainWindow.

public MainWindow()
{
	InitializeComponent();

	// Initialize Kinect
	InitializeKinect();
}

In this method we will retrieve the default KinectSensor that represents our sensor and will open the sensor for usage if there is a default sensor. After that we will call four new methods to initialize the basic streams.

private KinectSensor _kinect = null;

private void InitializeKinect()
{
	// Get the default Kinect sensor
	_kinect = KinectSensor.GetDefault();

	if (_kinect == null) return;

	// Open connection
	_kinect.Open();

	// Initialize Camera
	InitializeCamera();

	// Initialize Depth
	InitializeDepth();

	// Initialize Infrared
	InitializeInfrared();

	// Initialize Body
	InitializeBody();
}

Before we move on it is important to expand the MainWindow CTOR with a Closing event handler where we will close the connection if required.

public MainWindow()
{
	...

	// Close Kinect when closing app
	Closing += OnClosing;
}

private void OnClosing(object sender, System.ComponentModel.CancelEventArgs e)
{
	// Close Kinect
	if (_kinect != null) _kinect.Close();
}

III. Visualizing the camera

It is time to visualize the camera output, and this requires some variables.

/// <summary>
/// Size of an RGB pixel in the bitmap
/// </summary>
private readonly int _bytePerPixel = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;

/// <summary>
/// FrameReader for our color output
/// </summary>
private ColorFrameReader _colorReader = null;

/// <summary>
/// Array of color pixels
/// </summary>
private byte[] _colorPixels = null;

/// <summary>
/// Color WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _colorBitmap = null;

We can now use these variables to initialize our camera, first checking whether a sensor was found. After that we request the metadata for the ColorFrameSource of our sensor as a FrameDescription object. We use this description to allocate our pixel array based on the dimensions of the color output & the number of bytes per pixel.

_colorReader represents our ColorFrameReader, bound to the ColorFrameSource, on which we hook up the FrameArrived event. The last thing we need to initialize is the WriteableBitmap that we will use to write our color data to. This WriteableBitmap is linked to the Source of our Image control.

private void InitializeCamera()
{
	if (_kinect == null) return;

	// Get frame description for the color output
	FrameDescription desc = _kinect.ColorFrameSource.FrameDescription;

	// Get the framereader for Color
	_colorReader = _kinect.ColorFrameSource.OpenReader();

	// Allocate pixel array
	_colorPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_colorBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	CameraImage.Source = _colorBitmap;

	// Hook-up event
	_colorReader.FrameArrived += OnColorFrameArrived;
}

Each FrameArrived event is processed the same way for all types of data – you get a FrameReference from the event args and use its AcquireFrame method to get the corresponding frame.

In our scenario this means that we will retrieve the FrameReference from the ColorFrameArrivedEventArgs and use this to acquire the ColorFrame.

After that the processing is as easy as it was in Gen I – Validate the data, copy it, show it.

First we check whether the size matches, based on the FrameDescription. After that we check the raw ColorImageFormat of our frame: if it is ColorImageFormat.Bgra we copy the raw data; if not, we copy the converted frame data to our output.

Last but not least, we write the pixel array to our WriteableBitmap and our Image control will be updated automatically!

private void OnColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
	// Get the reference to the color frame
	ColorFrameReference colorRef = e.FrameReference;

	if (colorRef == null) return;

	// Acquire frame for specific reference
	ColorFrame frame = colorRef.AcquireFrame();

	// It's possible that we skipped a frame or it is already gone
	if (frame == null) return;

	using (frame)
	{
		// Get frame description
		FrameDescription frameDesc = frame.FrameDescription;

		// Check if width/height matches
		if (frameDesc.Width == _colorBitmap.PixelWidth && frameDesc.Height == _colorBitmap.PixelHeight)
		{
			// Copy data to array based on image format
			if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
			{
				frame.CopyRawFrameDataToArray(_colorPixels);
			}
			else frame.CopyConvertedFrameDataToArray(_colorPixels, ColorImageFormat.Bgra);

			// Copy output to bitmap
			_colorBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_colorPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

IV. Depth indication

To provide a depth indication we also need some new variables -

/// <summary>
/// FrameReader for our depth output
/// </summary>
private DepthFrameReader _depthReader = null;

/// <summary>
/// Array of depth values
/// </summary>
private ushort[] _depthData = null;

/// <summary>
/// Array of depth pixels used for the output
/// </summary>
private byte[] _depthPixels = null;

/// <summary>
/// Depth WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _depthBitmap = null;		

This method is very similar to the initialization of the color stream, but next to the _depthPixels array that holds our output pixels we also allocate an array of ushort values that will hold the depth value for each pixel.

private void InitializeDepth()
{
	if (_kinect == null) return;

	// Get frame description for the depth output
	FrameDescription desc = _kinect.DepthFrameSource.FrameDescription;

	// Get the framereader for Depth
	_depthReader = _kinect.DepthFrameSource.OpenReader();

	// Allocate pixel array
	_depthData = new ushort[desc.Width * desc.Height];
	_depthPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_depthBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	DepthImage.Source = _depthBitmap;

	// Hook-up event
	_depthReader.FrameArrived += OnDepthFrameArrived;
}

After we’ve acquired our DepthFrame we first validate the data, after which we copy the depth values into our _depthData array. Next we read the minimum & maximum reliable distance from the frame, and we can start visualizing the distances.

We then loop through the depth data and assign a value to each output pixel.

As you can see this is done for the first three bytes while the fourth is skipped; we are in a BGRA scenario and we don’t want to assign a value to the alpha channel.

If the distance is 0 we represent this with a yellow value; if the value is out of bounds (which should rarely happen) we assign a red color.

In the last scenario we are dealing with “valid” data that we visualize in bands of 250 mm. We calculate the band for the distance, multiply it by a scale factor, here 12.75, and assign the result to the B, G & R values.

private void OnDepthFrameArrived(object sender, DepthFrameArrivedEventArgs e)
{
	DepthFrameReference refer = e.FrameReference;

	if (refer == null) return;

	DepthFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	using (frame)
	{
		FrameDescription frameDesc = frame.FrameDescription;

		if (((frameDesc.Width * frameDesc.Height) == _depthData.Length) && (frameDesc.Width == _depthBitmap.PixelWidth) && (frameDesc.Height == _depthBitmap.PixelHeight))
		{
			// Copy depth frames
			frame.CopyFrameDataToArray(_depthData);

			// Get min & max depth
			ushort minDepth = frame.DepthMinReliableDistance;
			ushort maxDepth = frame.DepthMaxReliableDistance;

			// Adjust visualisation
			int colorPixelIndex = 0;
			for (int i = 0; i < _depthData.Length; ++i)
			{
				// Get depth value
				ushort depth = _depthData[i];

				if (depth == 0)
				{
					_depthPixels[colorPixelIndex++] = 41;
					_depthPixels[colorPixelIndex++] = 239;
					_depthPixels[colorPixelIndex++] = 242;
				}
				else if (depth < minDepth || depth > maxDepth)
				{
					_depthPixels[colorPixelIndex++] = 25;
					_depthPixels[colorPixelIndex++] = 0;
					_depthPixels[colorPixelIndex++] = 255;
				}
				else
				{
					double gray = (Math.Floor((double)depth / 250) * 12.75);

					_depthPixels[colorPixelIndex++] = (byte)gray;
					_depthPixels[colorPixelIndex++] = (byte)gray;
					_depthPixels[colorPixelIndex++] = (byte)gray;
				}

				// Skip the alpha channel (BGRA)
				++colorPixelIndex;
			}

			// Copy output to bitmap
			_depthBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_depthPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

My result looks like the following (hence the distance “waves”) -

[Screenshot: depth visualization result]

NOTE - This is alpha hardware

V. Displaying the Infrared stream

Displaying the infrared stream is very similar to the depth visualization, as you will notice.

Add the following variables, which we will use to display the infrared stream.

/// <summary>
/// FrameReader for our infrared output
/// </summary>
private InfraredFrameReader _infraReader = null;

/// <summary>
/// Array of infrared data
/// </summary>
private ushort[] _infraData = null;

/// <summary>
/// Array of infrared pixels used for the output
/// </summary>
private byte[] _infraPixels = null;

/// <summary>
/// Infrared WriteableBitmap linked to our UI
/// </summary>
private WriteableBitmap _infraBitmap = null;

Initializing the infrared stream is similar to the depth initialization but based on the InfraredFrameSource; here too we allocate an array for the infrared data & one for the pixel output.

private void InitializeInfrared()
{
	if (_kinect == null) return;

	// Get frame description for the infrared output
	FrameDescription desc = _kinect.InfraredFrameSource.FrameDescription;

	// Get the framereader for Infrared
	_infraReader = _kinect.InfraredFrameSource.OpenReader();

	// Allocate pixel array
	_infraData = new ushort[desc.Width * desc.Height];
	_infraPixels = new byte[desc.Width * desc.Height * _bytePerPixel];

	// Create new WriteableBitmap
	_infraBitmap = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgr32, null);

	// Link WBMP to UI
	InfraredImage.Source = _infraBitmap;

	// Hook-up event
	_infraReader.FrameArrived += OnInfraredFrameArrived;
}

As with the depth processing we acquire our frame and validate it against the bounds of our bitmap & infrared data array.

Once we are sure that everything is valid we copy the infrared data into our infrared data array, ready to loop over.

Each cycle we read a 16-bit ushort holding the infrared value, which we bitshift down to an 8-bit byte that we assign to our infrared output.

We’re discarding the least significant bits, which has the least impact on the visual result.

private void OnInfraredFrameArrived(object sender, InfraredFrameArrivedEventArgs e)
{
	// Reference to infrared frame
	InfraredFrameReference refer = e.FrameReference;

	if (refer == null) return;

	// Get infrared frame
	InfraredFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	// Process it
	using (frame)
	{
		// Get the description
		FrameDescription frameDesc = frame.FrameDescription;

		if (((frameDesc.Width * frameDesc.Height) == _infraData.Length) && (frameDesc.Width == _infraBitmap.PixelWidth) && (frameDesc.Height == _infraBitmap.PixelHeight))
		{
			// Copy data
			frame.CopyFrameDataToArray(_infraData);

			int colorPixelIndex = 0;

			for (int i = 0; i < _infraData.Length; ++i)
			{
				// Get infrared value
				ushort ir = _infraData[i];

				// Bitshift
				byte intensity = (byte)(ir >> 8);

				// Assign infrared intensity
				_infraPixels[colorPixelIndex++] = intensity;
				_infraPixels[colorPixelIndex++] = intensity;
				_infraPixels[colorPixelIndex++] = intensity;

				++colorPixelIndex;
			}

			// Copy output to bitmap
			_infraBitmap.WritePixels(
					new Int32Rect(0, 0, frameDesc.Width, frameDesc.Height),
					_infraPixels,
					frameDesc.Width * _bytePerPixel,
					0);
		}
	}
}

This is how my infrared stream looks like -

[Screenshot: infrared stream output]

VI. Body tracking, the new skeletal tracking

Last part of the tutorial is my favorite – Body tracking, the new skeletal tracking.

There are no longer skeletons; everything is a body from now on, and the sensor fully tracks 6 bodies with 25 joints each.

We will visualize all the joints along with the state of the hand.

First things first, we will need an array of bodies and a BodyFrameReader to process the frames.

/// <summary>
/// All tracked bodies
/// </summary>
private Body[] _bodies = null;
		
/// <summary>
/// FrameReader for our body output
/// </summary>
private BodyFrameReader _bodyReader = null;

Initializing our body tracking is very easy – allocate the bodies array with the correct size based on BodyCount, open a reader and start listening for new frames.

private void InitializeBody()
{
	if (_kinect == null) return;

	// Allocate Bodies array
	_bodies = new Body[_kinect.BodyFrameSource.BodyCount];

	// Open reader
	_bodyReader = _kinect.BodyFrameSource.OpenReader();

	// Hook-up event
	_bodyReader.FrameArrived += OnBodyFrameArrived;
}

Once again we get a FrameReference from the event args that we can use to acquire a BodyFrame. With this frame we refresh our array of bodies so we can loop over them and draw the tracked ones in a new method called DrawBody.

(Note that we first clear our SkeletonCanvas, a Canvas on top of our Image control.)

private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
	// Get frame reference
	BodyFrameReference refer = e.FrameReference;

	if (refer == null) return;

	// Get body frame
	BodyFrame frame = refer.AcquireFrame();

	if (frame == null) return;

	using (frame)
	{
		// Acquire body data
		frame.GetAndRefreshBodyData(_bodies);

		// Clear Skeleton Canvas
		SkeletonCanvas.Children.Clear();

		// Loop all bodies
		foreach (Body body in _bodies)
		{
			// Only process tracked bodies
			if (body.IsTracked)
			{
				DrawBody(body);
			}
		}
	}
}

Drawing tracked joints

In our new DrawBody-method we are going to loop all the joint keys for a body and visualize them by using a new DrawJoint method.

Next to the joint we will pass in the radius, color, border width and border color for our joint.

private void DrawBody(Body body)
{
	// Draw points
	foreach (JointType type in body.Joints.Keys)
	{
		// Draw all the body joints
		switch (type)
		{
			case JointType.Head:
			case JointType.FootLeft:
			case JointType.FootRight:
				DrawJoint(body.Joints[type], 20, Brushes.Yellow, 2, Brushes.White);
				break;
			case JointType.ShoulderLeft:
			case JointType.ShoulderRight:
			case JointType.HipLeft:
			case JointType.HipRight:
				DrawJoint(body.Joints[type], 20, Brushes.YellowGreen, 2, Brushes.White);
				break;
			case JointType.ElbowLeft:
			case JointType.ElbowRight:
			case JointType.KneeLeft:
			case JointType.KneeRight:
				DrawJoint(body.Joints[type], 15, Brushes.LawnGreen, 2, Brushes.White);
				break;
			default:
				DrawJoint(body.Joints[type], 15, Brushes.RoyalBlue, 2, Brushes.White);
				break;
		}
	}
}

At first we check if the joint is tracked, otherwise we ignore it. Second we map the position from camera space to color space by using the CoordinateMapper, so that the new position lands on the correct location in our image.
Last but not least we create a WPF Ellipse control based on the specified values and add it to our SkeletonCanvas. After some extra checks we align it to the correct position on the canvas.

private void DrawJoint(Joint joint, double radius, SolidColorBrush fill, double borderWidth, SolidColorBrush border)
{
	if (joint.TrackingState != TrackingState.Tracked) return;
	
	// Map the CameraPoint to ColorSpace so they match
	ColorSpacePoint colorPoint = _kinect.CoordinateMapper.MapCameraPointToColorSpace(joint.Position);

	// Create the UI element based on the parameters
	Ellipse el = new Ellipse();
	el.Fill = fill;
	el.Stroke = border;
	el.StrokeThickness = borderWidth;
	el.Width = el.Height = radius;

	// Add the Ellipse to the canvas
	SkeletonCanvas.Children.Add(el);

	// Avoid exceptions based on bad tracking
	if (float.IsInfinity(colorPoint.X) || float.IsInfinity(colorPoint.Y)) return;

	// Align ellipse on canvas (divide by 2 because the image is rendered at 50% of its original size)
	Canvas.SetLeft(el, colorPoint.X / 2);
	Canvas.SetTop(el, colorPoint.Y / 2);
}

Drawing the hand state

Since the November release it is also possible to track the HandState of a hand, indicating one of the following states -
[Image: the possible hand states]
We will visualize the Open, Closed & Lasso states for both the left & right hand.

To do so, add two extra cases to the switch in our DrawBody method that call a new method ‘DrawHandJoint’, passing in the joint, the HandState for the corresponding hand and some UI parameters.

case JointType.HandLeft:
	DrawHandJoint(body.Joints[type], body.HandLeftState, 20, 2, Brushes.White);
	break;
case JointType.HandRight:
	DrawHandJoint(body.Joints[type], body.HandRightState, 20, 2, Brushes.White);
	break;

This new method simply switches on the supported HandStates and calls our DrawJoint method with a specific fill color for our ellipse so we get visual feedback.

private void DrawHandJoint(Joint joint, HandState handState, double radius, double borderWidth, SolidColorBrush border)
{
	switch (handState)
	{
		case HandState.Lasso:
			DrawJoint(joint, radius, Brushes.Cyan, borderWidth, border);
			break;
		case HandState.Open:
			DrawJoint(joint, radius, Brushes.Green, borderWidth, border);
			break;
		case HandState.Closed:
			DrawJoint(joint, radius, Brushes.Red, borderWidth, border);
			break;
		default:
			break;
	}
}

Conclusion

In this post we’ve learned how to implement the basic streams – Color, Depth, Infrared & Body – and visualize them for the user.
I hope you’ve noticed that each output type uses the same principles and that it is only a matter of processing the data!

Remember this – Connect, listen, acquire, process & disconnect.
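In code, that pattern is the same skeleton for every stream – a sketch using the color reader as an example, with the names from this post:

// Connect
_kinect = KinectSensor.GetDefault();
_kinect.Open();

// Listen
_colorReader = _kinect.ColorFrameSource.OpenReader();
_colorReader.FrameArrived += OnColorFrameArrived;

// Acquire & process – inside OnColorFrameArrived
using (ColorFrame frame = e.FrameReference.AcquireFrame())
{
	if (frame != null)
	{
		// Validate, copy, show
	}
}

// Disconnect – when closing the app
_kinect.Close();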

You can download my complete demo here.

Thanks for reading,

Tom.


41 Responses to [Tutorial] Gen. II Kinect for Windows – Basics Overview

  1. Ross says:

    Hi Tom,
    This is a great walk-through of setting up the Kinect, thanks for posting it. I’m trying to replicate this in a graphical programming language (Labview), but I’m stuck at the section:
    _colorPixels = new byte[desc.Width * desc.Height * _bytePerPixel]

    Exactly what does this look like. The “copyrawframedatatoarray” seems to want a 1D array input, but I can’t figure out what the array should look like. Any help you can provide would be greatly appreciated.

    • Tom_Kerkhove says:

      Thanks for reading it, I appreciate it! If 1D means a 1-dimensional array I guess this will be the same as my _colorPixels, this is also a 1D array with the size of Width x Height x BytePerPixel.

      Does this help you any further?

      • Ross says:

        Thanks Tom – I did mean 1D array, and I’ve got the camera, depth and IR working perfectly now. Thanks heaps for your help!

        I am a bit stuck on the skeleton tracking now though. This time its for the bodies, specifically:

        _bodies = new Body[_kinect.BodyFrameSource.BodyCount];

        From what I can tell the value you put into this is the body count (which is a 32-bit integer) from the reference BodyFrameSource, but the input into bodies seems to be an object and not a number. Is there any info you can give me about what the input “_bodies” is?

        I realise that this is a much tougher question than the last one – but any help you could give me would be very much appreciated.

        • Tom_Kerkhove says:

          What I do is allocate an array with the length of the max bodies that can be tracked. This is currently 6 which is the same as ‘_kinect.BodyFrameSource.BodyCount’

          • Ross says:

            Thanks for the response Tom, however unfortunately the programming system I’m using doesn’t support generics easily. What a pain!

          • Bul says:

            Hi Ross,
            I am also trying to use Kinect V2 with LabVIEW. I read some of the Microsoft forums and the only person that is doing the same encountered the same “generics” issue. Have you been able to resolve this? Would you be able to share your LabVIEW code with me?
            Thanks,
            Bul

  2. Preeti says:

    Hi Tom,

    I also have this developer preview kinect v2, but I keep getting “VVTechs SwitchVersion failed 0x80070057” when I run KinectSensor from command prompt. And in KinectStatus I can see the camera being detected, but depth sensor area is blank. :(
    Did you face any such problem?

    I suspect it is either hardware issue (unsupported laptop?) or corrupt SDK.
    Can you tell me md5sum of your setup file? Just want to make sure if my SDK setup is not corrupted.
    Any help would be greatly appreciated. Thanks.

    • Tom_Kerkhove says:

      Hi,

      I cannot answer this question since this is related to the dev kit; I suggest you post your issue on the developer program forum!

    • john says:

      did you ever find the answer to this issue? I have the same problem

  3. Pingback: //BUILD/ 2014 – Introduction to Kinect, releasing this summer and support for Unity & Windows Store apps | Kinecting for Windows

  4. Willian says:

    Hi Tom,

    The color frame resolution is pretty higher than the depth. Is there a way to capture color frames at same resolution of depth?

    Thanks.

    • Tom_Kerkhove says:

      Hi Willian,
      It is not possible to select a resolution for color neither for depth. They each have their own fixed resolution so it will not be possible

  5. Pingback: Gen. II Kinect for Windows – Comparing MultiSourceFrameReader and XSourceFrameReader | Kinecting for Windows

  6. Hi,

    Playing with Kinect 2 in my project, is there best practices about resolution 1920×1080 ?

    - The API do not provide smaller byte[]
    - Should I resize the image before doing face recognition ?
    - Or better best practice to work with large byte array

    About Audio, it still crashing with audio feature in April beta SDK, is there a workaround ?

    • Nick says:

      Hi tom and Jean.
      I would like to crop the color image into a smaller one.
      I’m loosing myself into those arrays! Any suggestion? Can u provide me an example for a cropping?
      Ty a lot

      • Tom Kerkhove says:

        Hi Nick,
        I don’t know it by heart but a lot of posts on this can be found on the internet!

        • Nick says:

          Are you so kind to give me some link? Are the kinect 2 data streams and arrays the same as the old kinect? Because i’m new in the kinect world. Thank you a lot for answering

  7. Pingback: Microsoft Virtual Academy – ‘Programming the Kinect for Windows Jump Start’ announced | Kinecting for Windows

  8. Pingback: First look at Expressions – Displaying expressions for a tracked person | Kinecting for Windows

  9. Pingback: Shahed Chowdhuri's Blog: My Experience (So Far!) With Kinect for Windows v2 | Wake Up And Code!

  10. Hi Tom,

    I used your awesome sample to build a “speech bubble” Kinect app, as an interactive lobby display. I also blogged about it, before the July SDK update: http://wakeupandcode.com/my-experience-so-far-with-kinect-for-windows-v2/

    However, I noticed that there were some compatibility issues after I installed the July 2014 SDK updates. So, I tried redownloading your latest source code again, and noticed that the code was still incompatible, e.g. I had to replace KinectSensor.Default with KinectSensor.GetDefault() to get it to work.

    I wasn’t sure if I missed any updates from you, so I took the liberty of updating my project until I got it to work, and then updated my blog post as well.

    Please take a look when you get a few minutes, thanks! :-)

    Shahed Chowdhuri
    Sr. Technical Evangelist @ Microsoft

    • Tom Kerkhove says:

      Hi,
      Thanks for reading and linking :)
      Can I ask you when you downloaded the code? It should work with Public Preview SDK since I updated it the day of the launch, check it here

      • Initially, I had downloaded your pre-July code and got it to work with the pre-July SDK. After I installed the newer SDK, I was having issues at first, even after I tried your revised code. (That’s when I posted the above comment)

        Soon after I posted the comment, I tried reinstalling the latest SDK and that seemed to have resolved my issue. Your revised project now works on my machine, and so does my speech bubble application.

        Thanks again,
        Shahed Chowdhuri
        Sr. Technical Evangelist @ Microsoft

  11. Pingback: Delivering Kinect On-Demand to a Store App with Azure Media Services & Notification Hubs – Tutorial | Kinecting for Windows

  12. Adam Li says:

    I got this error when I ran your final code…

    ‘K4W.BasicOverview.UI.vshost.exe’ (CLR v4.0.30319: K4W.BasicOverview.UI.vshost.exe): Loaded ‘C:\Users\Adam\Desktop\G2KBasicOverview-master\K4W.BasicOverview.UI\bin\x64\Debug\K4W.BasicOverview.UI.exe’. Symbols loaded.
    Step into: Stepping over non-user code ‘K4W.BasicOverview.UI.App..ctor’
    ‘K4W.BasicOverview.UI.vshost.exe’ (CLR v4.0.30319: K4W.BasicOverview.UI.vshost.exe): Loaded ‘C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Configuration\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll’. Skipped loading symbols. Module is optimized and the debugger option ‘Just My Code’ is enabled.
    Step into: Stepping over non-user code ‘K4W.BasicOverview.UI.App.Main’
    Step into: Stepping over non-user code ‘K4W.BasicOverview.UI.App.InitializeComponent’
    An unhandled exception of type ‘System.Windows.Markup.XamlParseException’ occurred in PresentationFramework.dll
    Additional information: ‘The invocation of the constructor on type ‘K4W.BasicOverview.UI.MainWindow’ that matches the specified binding constraints threw an exception.’ Line number ‘3’ and line position ‘9’.

    Anyone know what the problem is?

    • Tom Kerkhove says:

      Hi,

      Are you sure that all your references are working fine? Looks like System.Configuration is broke.

      • Adam Li says:

        How would you fix that? I downloaded your end solution file as is, so is there any extra steps I need to take? I believe my SDK was the only extra thing downloaded.

        • Tom Kerkhove says:

          Try removing & re-adding it. It’s a common issue that’s on the internet, issue is not related to Kinect but to references

  13. Pingback: Shahed Chowdhuri's Blog: Kinect v2 Speech Bubbles Enhanced | Wake Up And Code!

  14. Yubo li says:

    Hi, i am now working on the kinect 2.0. Do you know how to open multi-kinect on a same compute?

    • Tom Kerkhove says:

      Hi, this is not supported at the moment in v2 but is a highly requested feature, maybe later on but nothing on the horizon…

  15. Dominik B says:

    Hey Tom,
    thanks for this nice post!

    I have a question about the skeleton tracking, i´d like to store the joint positions to a string and then write it to a txt file (writing a string to a text file is not the problem so far). Is there any easy way or entry point e.g. while drawing the joints to send the x y z positions of the joints framewise to a string? Or is this a bigger Problem with a lot of programming work?

    best regards,
    Dominik

    • Tom Kerkhove says:

      Hi Dominik,

      While a Body object is not serializable a Joint is, so you could just serialize all the joints and save that, does that help?

      • Dominik B says:

        Hi Tom,

        thanks for your reply, it didnt gave the answer on demand but delivered a good thought impulse. I now defined the joint positions in “private void OnBodyFrameArrived” after “if (body.IsTracked)”, and have written the X,Y,Z positions to an ArrayList (Writing it directly into a txt file caused performance problems) which i wrote to a txt file after klicking a button.

        Thanks again for your tutorials,
        Dominik

  16. Wahaj Bangash says:

    Hi
    I need to know whether this kind of application will work on Windows 7 and using Kinect 1,7 SDK. Or kindly let me know how to achieve this using Windows 7 and Kinect 1.7
    Thanks

  17. Elias Faruk says:

    Hi, will your tutorial also work with Visual Studio 2012? Or Visual Studio 2013 Express?

  18. jon says:

    I need help i want to save data from kinect and use it to control a robot.
    my email is jonaep_lh@hotmail.com any help?
