Terminator Program: With The Kinect 2

I got my hands on a Kinect2 last week, so I decided to re-write the Terminator program using the Kinect2 api.  Microsoft made some major changes to the domain api (no more skeleton frame; everything now uses a body), but the underlying logic is still the same, so it was reasonably easy to port the code.  There are plenty of places in the V2 api that are not documented yet, but because I did some work with the V1 api, I could still get things done.  For example, the V2 api documentation and code samples use event handlers to work with each new frame that arrives from the Kinect.  This led to some pretty laggy code.  However, by polling on a second thread, I was able to get the performance to where it needs to be.  Also, a minor annoyance: you have to use Win8 with the Kinect 2.

So here is the Terminator application, Gen 2.  The UI is still just a series of controls:

```xml
<Window x:Class="ChickenSoftware.Terminator.Gen2.UI.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="700" Width="650" Loaded="Window_Loaded">
    <Canvas Width="650" Height="700">
        <Image x:Name="kinectColorImage" Width="640" Height="480" />
        <Canvas x:Name="bodyCanvas" Width="640" Height="480" />
        <Button x:Name="takePhotoButton" Canvas.Left="10"
                Canvas.Top="485" Height="40" Width="125" Click="takePhotoButton_Click">Take Photo</Button>
        <TextBox x:Name="facialRecognitionTextBox" Canvas.Left="10" Canvas.Top="540" Width="125" Height="40" FontSize="8" />
        <Image x:Name="currentImage" Canvas.Left="165" Canvas.Top="485" Height="120" Width="170" />
        <Image x:Name="compareImage" Canvas.Left="410" Canvas.Top="485" Height="120" Width="170" />
    </Canvas>
</Window>
```

In the code behind, I set up some class-level variables.  The only real difference is that the photo moves from 640×480 to 1920×1080:

```csharp
KinectSensor _kinectSensor = null;
Boolean _isKinectDisplayActive = false;
Boolean _isTakingPicture = false;
WriteableBitmap _videoBitmap = null;
Int32 _width = 1920;
Int32 _height = 1080;
```
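The jump to 1920×1080 matters for buffer sizes: at four bytes per BGRA pixel, each color frame is roughly 8 MB instead of the ~1.2 MB a 640×480 frame needed.  A quick sketch of the arithmetic (the constants mirror the fields above):

```csharp
using System;

class FrameMath
{
    const int Width = 1920;
    const int Height = 1080;
    const int BytesPerPixel = 4; // BGRA: one byte each for blue, green, red, alpha

    // Stride is the number of bytes per row -- the same value passed to
    // WritePixels and BitmapSource.Create later in the post.
    public static int Stride() => Width * BytesPerPixel;

    // Total size of the byte[] needed to hold one color frame.
    public static int FrameSize() => Stride() * Height;

    static void Main()
    {
        Console.WriteLine(Stride());    // 7680
        Console.WriteLine(FrameSize()); // 8294400
    }
}
```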

When the page is loaded, a new thread is spun up that handles rendering the Kinect data:

```csharp
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    SetUpKinect();
    _isKinectDisplayActive = true;
    Thread videoThread = new Thread(new ThreadStart(DisplayKinectData));
    videoThread.Start();
}
```
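The `_isKinectDisplayActive` flag is what lets the window shut the polling thread down cleanly.  Here is a minimal, Kinect-free sketch of that flag-based pattern (all names here are illustrative; marking the flag `volatile` ensures the polling thread sees the UI thread's update promptly):

```csharp
using System;
using System.Threading;

class PollingLoop
{
    // volatile: the write from the UI thread must be visible to the poller
    public static volatile bool IsActive = false;
    public static int FramesProcessed = 0;

    static void Main()
    {
        IsActive = true;
        var videoThread = new Thread(() =>
        {
            while (IsActive)
            {
                FramesProcessed++;  // stand-in for acquire-frame + render
                Thread.Sleep(1);
            }
        });
        videoThread.Start();

        Thread.Sleep(50);   // let the loop run briefly
        IsActive = false;   // e.g. from a Window_Closing handler
        videoThread.Join(); // the loop exits on its next check of the flag
        Console.WriteLine(FramesProcessed > 0); // True
    }
}
```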

Setting up the Kinect is a bit different (KinectSensor.GetDefault() instead of picking a sensor from a collection), but intuitive:

```csharp
internal void SetUpKinect()
{
    _videoBitmap = new WriteableBitmap(1920, 1080, 96, 96, PixelFormats.Bgr32, null);
    _kinectSensor = KinectSensor.GetDefault();
    _kinectSensor.Open();
}
```

The big change is in the DisplayKinectData method:

```csharp
internal void DisplayKinectData()
{
    var colorFrameSource = _kinectSensor.ColorFrameSource;
    var colorFrameReader = colorFrameSource.OpenReader();
    var bodyFrameSource = _kinectSensor.BodyFrameSource;
    var bodyFrameReader = bodyFrameSource.OpenReader();

    while (_isKinectDisplayActive)
    {
        using (var colorFrame = colorFrameReader.AcquireLatestFrame())
        {
            if (colorFrame == null) continue;
            using (var bodyFrame = bodyFrameReader.AcquireLatestFrame())
            {
                if (bodyFrame == null) continue;
                //Color
                var colorFrameDescription = colorFrame.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
                var bytesPerPixel = colorFrameDescription.BytesPerPixel;
                var frameSize = colorFrameDescription.Width * colorFrameDescription.Height * bytesPerPixel;
                var colorData = new byte[frameSize];
                if (colorFrame.RawColorImageFormat == ColorImageFormat.Bgra)
                {
                    colorFrame.CopyRawFrameDataToArray(colorData);
                }
                else
                {
                    colorFrame.CopyConvertedFrameDataToArray(colorData, ColorImageFormat.Bgra);
                }
                //Body
                var bodies = new Body[bodyFrame.BodyCount];
                bodyFrame.GetAndRefreshBodyData(bodies);
                var trackedBody = bodies.FirstOrDefault(b => b.IsTracked);

                //Update
                if (_isTakingPicture)
                {
                    Dispatcher.Invoke(new Action(() => AnalyzePhoto(colorData)));
                }
                else
                {
                    if (trackedBody == null)
                    {
                        Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData)));
                    }
                    else
                    {
                        Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData, trackedBody)));
                    }
                }
            }
        }
    }
}
```

I am using a frame reader and frame source for both the color (the video image) and the body (the old skeleton).  The method to get a frame has changed: I am now using AcquireLatestFrame().  It is nice that we are still using byte[] arrays to hold the data.

With the data in the byte[] arrays, the display is updated.  There are two UpdateDisplay methods:

```csharp
internal void UpdateDisplay(byte[] colorData)
{
    var rectangle = new Int32Rect(0, 0, _width, _height);
    _videoBitmap.WritePixels(rectangle, colorData, _width * 4, 0);
    kinectColorImage.Source = _videoBitmap;
}

internal void UpdateDisplay(byte[] colorData, Body body)
{
    UpdateDisplay(colorData);
    var drawingGroup = new DrawingGroup();
    using (var drawingContext = drawingGroup.Open())
    {
        var headPosition = body.Joints[JointType.Head].Position;
        if (headPosition.Z < 0)
        {
            headPosition.Z = 0.1f;
        }
        var adjustedHeadPosition = _kinectSensor.CoordinateMapper.MapCameraPointToDepthSpace(headPosition);
        bodyCanvas.Children.Clear();
        Rectangle headTarget = new Rectangle();
        headTarget.Fill = new SolidColorBrush(Colors.Red);
        headTarget.Width = 10;
        headTarget.Height = 10;
        Canvas.SetLeft(headTarget, adjustedHeadPosition.X + 75);
        Canvas.SetTop(headTarget, adjustedHeadPosition.Y);
        bodyCanvas.Children.Add(headTarget);
    }
}
```

This is pretty much like V1: the video byte[] is written to a WriteableBitmap and the body is drawn on the canvas.  Note that, like V1, the coordinates of the body need to be adjusted to the color frame.  The CoordinateMapper has a series of overloads that make it easy to do the translation.
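The CoordinateMapper does the real lens-aware mapping on the sensor, so it can't be demonstrated off-hardware; what can be sketched is the final, purely proportional step of fitting a point from one coordinate space onto the display canvas (the class and method names here are illustrative, not part of the Kinect api):

```csharp
using System;

class CanvasScaling
{
    // Fits a point expressed in 1920x1080 color-frame coordinates onto the
    // 640x480 display canvas by simple proportional scaling.  This is only
    // the resize step; the camera-to-color mapping itself is what the
    // CoordinateMapper overloads handle.
    public static (double X, double Y) ColorToCanvas(double colorX, double colorY)
    {
        const double colorWidth = 1920, colorHeight = 1080;
        const double canvasWidth = 640, canvasHeight = 480;
        return (colorX * canvasWidth / colorWidth,
                colorY * canvasHeight / colorHeight);
    }

    static void Main()
    {
        var p = ColorToCanvas(960, 540);   // center of the color frame
        Console.WriteLine($"{p.X},{p.Y}"); // 320,240 -> center of the canvas
    }
}
```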

With the display working, I added in taking the photo, sending it to Azure blob storage, and having Sky Biometry analyze the results.  This code is identical to V1, with the connection strings for Azure and Sky Biometry broken out into their own methods and the sensitive values placed into the app.config:

```csharp
internal void AnalyzePhoto(byte[] colorData)
{
    var bitmapSource = BitmapSource.Create(_width, _height, 96, 96, PixelFormats.Bgr32, null, colorData, _width * 4);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
    var photoImage = UploadPhotoImage(encoder);
    CompareImages(photoImage);
    _isTakingPicture = false;
}
```

```csharp
internal PhotoImage UploadPhotoImage(JpegBitmapEncoder encoder)
{
    using (MemoryStream memoryStream = new MemoryStream())
    {
        encoder.Save(memoryStream);
        var photoImage = new PhotoImage(Guid.NewGuid(), memoryStream.ToArray());

        var customerUniqueId = new Guid(ConfigurationManager.AppSettings["customerUniqueId"]);
        var connectionString = GetAzureConnectionString();

        IPhotoImageProvider provider = new AzureStoragePhotoImageProvider(customerUniqueId, connectionString);
        provider.InsertPhotoImage(photoImage);
        memoryStream.Close();
        return photoImage;
    }
}
```

```csharp
internal void CompareImages(PhotoImage photoImage)
{
    String skyBiometryUri = ConfigurationManager.AppSettings["skyBiometryUri"];
    String uid = ConfigurationManager.AppSettings["skyBiometryUid"];
    String apiKey = ConfigurationManager.AppSettings["skyBiometryApiKey"];
    String apiSecret = ConfigurationManager.AppSettings["skyBiometryApiSecret"];
    var imageComparer = new SkyBiometryImageComparer(skyBiometryUri, uid, apiKey, apiSecret);

    String basePhotoUri = GetBasePhotoUri();
    String targetPhotoUri = GetTargetPhotoUri(photoImage);
    currentImage.Source = new BitmapImage(new Uri(targetPhotoUri));
    compareImage.Source = new BitmapImage(new Uri(basePhotoUri));

    var matchValue = imageComparer.CalculateFacialRecognitionConfidence(basePhotoUri, targetPhotoUri);
    facialRecognitionTextBox.Text = "Match Value Confidence is: " + matchValue.Confidence.ToString();
}
```
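For reference, the app.config entries these lookups expect might look like the sketch below.  The key names come from the ConfigurationManager calls above; all values are placeholders, and the Azure connection string key is read inside GetAzureConnectionString(), so it is not shown here:

```xml
<configuration>
  <appSettings>
    <!-- placeholder values only; swap in your own credentials -->
    <add key="customerUniqueId" value="00000000-0000-0000-0000-000000000000" />
    <add key="skyBiometryUri" value="https://api.skybiometry.com" />
    <add key="skyBiometryUid" value="your-uid" />
    <add key="skyBiometryApiKey" value="your-api-key" />
    <add key="skyBiometryApiSecret" value="your-api-secret" />
  </appSettings>
</configuration>
```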

With the code in place, I can run the Terminator Gen 2:

[Screenshot: the Terminator Gen 2 application running]

I think I am doing the Sky Biometry recognition incorrectly so I will look at that later.  In any event, working with the Kinect V2 was fairly easy because it was close enough to the V1 that the concepts could translate.  I look forward to adding the targeting system this weekend!!!
