Second Gen. Kinect for Windows – What’s new?

It has been a while since the alpha version of the second generation of Kinect for Windows was released. At first I wasn't going to write a 101 post, since there are already plenty of them out there, but why not? In this post I will give a theoretical overview of what is included in the November version of the new SDK.

Everything in this post is based on alpha hardware & alpha SDK, this is still a work in progress.


“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

I. Hardware

The hardware is improved in several ways, e.g. the new IR technology. The sensor now uses time-of-flight technology to calculate the distance between the sensor and objects: for each pixel of the image, it measures the time a light signal travels to an object and back to the sensor.
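As a back-of-the-envelope illustration of the time-of-flight principle (this is plain math, not SDK code), the distance is simply half the round-trip travel time multiplied by the speed of light:

```python
# Illustrative time-of-flight math (not SDK code): the sensor measures the
# round-trip time of an IR light pulse and derives distance as d = c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the object, given the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# The 4.5 m end of the depth range corresponds to a round trip of
# roughly 30 nanoseconds.
print(round(tof_distance(30e-9), 3))  # 4.497 (metres)
```

This also shows why the timing electronics have to be so precise: over the sensor's whole range the round trip stays in the tens of nanoseconds.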

Tilt motor is no more

The first thing I noticed is that there is no longer a tilt motor that can be controlled from code. You can still tilt the camera manually, but thanks to the improved field of view a motor is no longer needed.

II. Features

The focus of the first alpha SDK is on the core elements: color, depth, IR & body.
Unfortunately nothing else has been implemented yet, and there is no news on audio, face tracking, fusion & interaction so far.

For interaction you can build your own controls, but this requires some effort if you're new to Kinect.


Color

Color is now available in RGBA, BGRA or YUY2 format at 1920×1080, as noted before.
This resolution is fixed, in contrast to the first generation where you could specify the requested resolution.

The color camera is light-sensitive and will automatically reduce the frame rate when there is too little light; more on this in a later post.

Data stream Resolution Field-of-View Frames/Second
Color 1920 x 1080 +/- 85°(H) – 55°(V) 15 fps / 30 fps
Specifications gathered with SDK
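The fixed full-HD resolution has consequences for bandwidth and memory. A quick sketch (plain arithmetic, not SDK code) of what one frame costs in a 4-bytes-per-pixel format such as RGBA:

```python
# Rough size/bandwidth estimate for the fixed 1920x1080 color stream,
# assuming a 4-bytes-per-pixel format such as RGBA (illustrative only).

WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 4

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes)                            # 8294400 bytes (~7.9 MB) per frame
print(round(frame_bytes * 30 / 1024 / 1024))  # ~237 MB/s at 30 fps
```

Numbers like these make it clear why the sensor needs USB 3.0.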

Depth & IR

The depth range has been increased to 0.5 m – 4.5 m, and there is now one mode to rule them all.
This means that you no longer need to choose between near mode and default mode.

Data stream Resolution Field-of-View Frames/Second
Depth & IR 512 x 424 70° (H) – 60°(V) 30 fps
Specifications gathered with SDK
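A small sketch of working with the new depth range, assuming depth comes back as a per-pixel value in millimetres as it did in the first generation (illustrative, not SDK code):

```python
# Illustrative depth-range check (not SDK code). Depth per pixel is assumed
# to be a value in millimetres, as in the first generation.

MIN_RELIABLE_MM = 500    # 0.5 m
MAX_RELIABLE_MM = 4500   # 4.5 m

def in_reliable_range(depth_mm: int) -> bool:
    """True if the reading falls inside the sensor's reliable depth range."""
    return MIN_RELIABLE_MM <= depth_mm <= MAX_RELIABLE_MM

pixels = 512 * 424
print(pixels)  # 217088 depth pixels per frame
print([in_reliable_range(d) for d in (300, 500, 2000, 4500, 5000)])
# [False, True, True, True, False]
```

With one unified mode, a single range check like this replaces the near-mode/default-mode branching of v1.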

Body, the new skeletal tracking

Skeletal tracking has been renamed to "Body" and is now capable of fully tracking six people, each with a total of 25 joints.

The biggest improvement is that each hand now has a separate joint for the thumb and one for the other four fingers.

  • Hand tracking - Each hand of a body now has an indication of its state, e.g. Open, Closed or Lasso, where Lasso is pointing with two fingers
  • Activities - Indication of the user's facial activity, e.g. left eye closed, mouth open, etc. (more might be added later)
  • Leaning - Indication of whether the user is leaning to the left or the right
  • Appearance - Tells more about the user, e.g. whether he/she is wearing glasses (more might be added later)
  • Expressions - Expression of the current person, e.g. happy or neutral (more might be added later)
  • Engaged - Indication of whether the user is looking at the sensor

NOTE – Activities, leaning, appearance, expressions & engagement are already defined in the API but not functional yet.
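The hand states lend themselves to simple gesture logic. A hypothetical sketch of what a custom control could do with them; the state names mirror the SDK's Open/Closed/Lasso, but the mapping and action names are made up for illustration:

```python
# Hypothetical gesture mapping (not SDK code): the hand-state names mirror
# the SDK's Open / Closed / Lasso; the actions are invented for illustration.
from enum import Enum

class HandState(Enum):
    OPEN = "Open"      # all fingers extended
    CLOSED = "Closed"  # fist
    LASSO = "Lasso"    # pointing with two fingers

ACTIONS = {
    HandState.OPEN: "release",
    HandState.CLOSED: "grab",
    HandState.LASSO: "draw",
}

def action_for(state: HandState) -> str:
    """Map a tracked hand state to an application action."""
    return ACTIONS[state]

print(action_for(HandState.LASSO))  # draw
```

Mappings like this are the kind of thing you would build yourself for now, since the interaction layer isn't in the alpha SDK yet.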

III. Other

Kinect Studio

After installing the SDK I also noticed that Kinect Studio isn't available yet, but it will surely ship with the official SDK. Good to know: the most highly requested feature is offline support, for debugging purposes.

In the first generation you still need to be connected to a sensor to start simulating, which isn't always possible, e.g. when coding on an airplane.

Supported systems

Here is a small overview of the systems supported by the alpha SDK, whether on a native machine or in a VM.

  • Windows Embedded or Windows 8+ is required for the SDK. It is still not possible to create Windows Store apps, for the same reason as in v1: you can stream the data from desktop mode to your app, but your app won't pass certification because of this streaming.
  • Windows 7 is not officially supported at the moment, because Windows 8+ has improved USB 3.0 support
  • The .NET Micro Framework is not supported due to insufficient processing power

Multiple sensor applications

Since the developer program is still in progress, nobody has tried to combine multiple sensors yet, but my guess is that it will support up to 4 sensors on one machine, like v1.

IV. Conclusion

The new generation looks like a big step forward spec-wise, but some functionality is still unclear; time will tell.

In my next post I will show you how to create your first Kinect v2 application that demonstrates all the core data streams.
