Terminator Program: Part 2

Following up on my last post, I decided to send the entire photograph to Sky Biometry and have it parse the photograph and identify the individual people in it.  This ability is built right into their API.  For example, if you pass in this picture, you get the following JSON back.

image

I added the red highlight to show that Sky Biometry can recognize multiple people (it returns an array of uids) and that each face tag has a center.x and center.y.  Reading the API documentation, this point is the center of the face tag, and it is expressed as a percentage of the photo's width and height.

image
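To make the units concrete: converting between SkyBiometry's percentage-based center point and pixel coordinates for a 640 x 480 photo is just a multiply or a divide. A quick sketch (the face tag values here are made up for illustration):

```csharp
// SkyBiometry reports center.x and center.y as percentages of the photo size.
// These tag values are hypothetical - yours come from the JSON response.
float centerXPercent = 47.5f;
float centerYPercent = 39.79f;

int photoWidth = 640;
int photoHeight = 480;

// Convert the percentages back into pixel coordinates.
int pixelX = (int)(centerXPercent / 100 * photoWidth);   // 304
int pixelY = (int)(centerYPercent / 100 * photoHeight);  // 190
```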

So I need to translate the center point of the skeleton from the Kinect to the equivalent center point of the Sky Biometry recognition output, and then I should be able to identify individual people within the Kinect’s field of vision.  Going back to the Kinect code, I ditched the DrawBoxAroundHead method and altered the UpdateDisplay method like so:

private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
{
    if (_videoBitmap == null)
    {
        _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
    }
    _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
    kinectColorImage.Source = _videoBitmap;
    var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
    if (selectedSkeleton != null)
    {
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
        var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

        skeletonCanvas.Children.Clear();
        Rectangle headRectangle = new Rectangle();
        headRectangle.Fill = new SolidColorBrush(Colors.Blue);
        headRectangle.Width = 10;
        headRectangle.Height = 10;
        Canvas.SetLeft(headRectangle, adjustedHeadPosition.X);
        Canvas.SetTop(headRectangle, adjustedHeadPosition.Y);
        skeletonCanvas.Children.Add(headRectangle);

        Rectangle skeletonRectangle = new Rectangle();
        skeletonRectangle.Fill = new SolidColorBrush(Colors.Red);
        skeletonRectangle.Width = 10;
        skeletonRectangle.Height = 10;
        Canvas.SetLeft(skeletonRectangle, adjustedSkeletonPosition.X);
        Canvas.SetTop(skeletonRectangle, adjustedSkeletonPosition.Y);
        skeletonCanvas.Children.Add(skeletonRectangle);

        String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

        skeletonInfoTextBox.Text = skeletonInfo;
    }
}

Notice that there are two rectangles because I was not sure if the Head.Position or the Skeleton.Position would match SkyBiometry (the red rectangle is placed at the skeleton position, the blue one at the head position).  It turns out that I want the Head.Position for SkyBiometry (besides, the Terminator would want head shots only).

image

So I ditched the Skeleton.Position.  I then needed a way to translate Head.Position.X to SkyBiometry.X and Head.Position.Y to SkyBiometry.Y.  Fortunately, I know the size of each photograph (640 x 480), so calculating the percentage is an exercise in altering UpdateDisplay:

private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
{
    Int32 photoWidth = 640;
    Int32 photoHeight = 480;

    if (_videoBitmap == null)
    {
        _videoBitmap = new WriteableBitmap(photoWidth, photoHeight, 96, 96, PixelFormats.Bgr32, null);
    }
    _videoBitmap.WritePixels(new Int32Rect(0, 0, photoWidth, photoHeight), colorData, photoWidth * 4, 0);
    kinectColorImage.Source = _videoBitmap;
    var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
    if (selectedSkeleton != null)
    {
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);

        skeletonCanvas.Children.Clear();
        Rectangle headRectangle = new Rectangle();
        headRectangle.Fill = new SolidColorBrush(Colors.Blue);
        headRectangle.Width = 10;
        headRectangle.Height = 10;
        Canvas.SetLeft(headRectangle, adjustedHeadPosition.X);
        Canvas.SetTop(headRectangle, adjustedHeadPosition.Y);
        skeletonCanvas.Children.Add(headRectangle);

        var skyBiometryX = ((float)adjustedHeadPosition.X / photoWidth) * 100;
        var skyBiometryY = ((float)adjustedHeadPosition.Y / photoHeight) * 100;

        String skeletonInfo = adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + Math.Round(skyBiometryX, 2).ToString() + " : " + Math.Round(skyBiometryY, 2).ToString();

        skeletonInfoTextBox.Text = skeletonInfo;
    }
}

And so now I have

image

The next step is to get the Kinect photo to Sky Biometry.  I decided to use Azure Blob Storage as my intermediary location.  I updated the architectural diagram like so:

image

At this point, it made sense to move the project over to F# so I could better concentrate on the work that needs to be done and also get the important code out of the UI code-behind.  I fired up an F# project in my solution and added a couple of different implementations for storing photos.  To keep things consistent, I created a data structure and an interface:

namespace ChickenSoftware.Terminator.Core

open System

type public PhotoImage (uniqueId:Guid, imageBytes:byte[]) =
    member this.UniqueId = uniqueId
    member this.ImageBytes = imageBytes

type IPhotoImageProvider =
    abstract member InsertPhotoImage : PhotoImage -> unit
    abstract member DeletePhotoImage : Guid -> unit
    abstract member GetPhotoImage : Guid -> PhotoImage

My first stop was to replicate what Miles did with the Save File Dialog box with a file system provider.  It was very much like a C# implementation:

namespace ChickenSoftware.Terminator.Core

open System
open System.IO
open System.Drawing
open System.Drawing.Imaging

type LocalFileSystemPhotoImageProvider(folderPath: string) =

    member this.GetPhotoImageUri(uniqueIdentifier: Guid) =
        let fileName = uniqueIdentifier.ToString() + ".jpg"
        Path.Combine(folderPath, fileName)

    interface IPhotoImageProvider with
        member this.InsertPhotoImage(photoImage: PhotoImage) =
            let fullPath = this.GetPhotoImageUri(photoImage.UniqueId)
            use memoryStream = new MemoryStream(photoImage.ImageBytes)
            let image = Image.FromStream(memoryStream)
            image.Save(fullPath)

        member this.DeletePhotoImage(uniqueIdentifier: Guid) =
            let fullPath = this.GetPhotoImageUri(uniqueIdentifier)
            File.Delete(fullPath)

        member this.GetPhotoImage(uniqueIdentifier: Guid) =
            let fullPath = this.GetPhotoImageUri(uniqueIdentifier)
            use fileStream = new FileStream(fullPath, FileMode.Open)
            let image = Image.FromStream(fileStream)
            use memoryStream = new MemoryStream()
            image.Save(memoryStream, ImageFormat.Jpeg)
            new PhotoImage(uniqueIdentifier, memoryStream.ToArray())

To call the save method, I altered the SavePhoto method in the C# project to use a MemoryStream and not a FileStream:

private void SavePhoto(byte[] colorData)
{
    var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
    using (MemoryStream memoryStream = new MemoryStream())
    {
        encoder.Save(memoryStream);
        PhotoImage photoImage = new PhotoImage(Guid.NewGuid(), memoryStream.ToArray());

        String folderUri = @"C:\Data";
        IPhotoImageProvider provider = new LocalFileSystemPhotoImageProvider(folderUri);

        provider.InsertPhotoImage(photoImage);
        memoryStream.Close();
    }
    _isTakingPicture = false;
}

And sure enough, it saves the photo to disk:

image

One problem that took me 20 minutes to uncover is that if you get your file system path wrong, you get this unhelpful exception:

image

This has been well-bitched about on Stack Overflow so I won’t comment further.
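For anyone else who hits it: in my case the exception surfaced from the Image.Save call inside the provider, so a cheap way to fail fast with a readable message is to validate the folder before constructing the provider. A sketch (folderUri is the same path used in SavePhoto above; the check itself is just System.IO):

```csharp
// Guard against a bad folder path up front; otherwise Image.Save
// surfaces it later as an unhelpful GDI+ exception.
String folderUri = @"C:\Data";
if (!Directory.Exists(folderUri))
{
    throw new DirectoryNotFoundException("Photo folder does not exist: " + folderUri);
}
IPhotoImageProvider provider = new LocalFileSystemPhotoImageProvider(folderUri);
```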

With the file system up and running, I turned my attention to Azure.  Like the file system provider, it is very close to a C# implementation:

namespace ChickenSoftware.Terminator.Core

open System
open System.IO
open Microsoft.WindowsAzure.Storage
open Microsoft.WindowsAzure.Storage.Blob

type AzureStoragePhotoImageProvider(customerUniqueId: Guid, connectionString: string) =

    member this.GetBlobContainer(blobClient: Blob.CloudBlobClient) =
        let container = blobClient.GetContainerReference(customerUniqueId.ToString())
        if not (container.Exists()) then
            container.CreateIfNotExists() |> ignore
            let permissions = new BlobContainerPermissions()
            permissions.PublicAccess <- BlobContainerPublicAccessType.Blob
            container.SetPermissions(permissions)
        container

    member this.GetBlockBlob(uniqueIdentifier: Guid) =
        let storageAccount = CloudStorageAccount.Parse(connectionString)
        let blobClient = storageAccount.CreateCloudBlobClient()
        let container = this.GetBlobContainer(blobClient)
        let photoUri = this.GetPhotoImageUri(uniqueIdentifier)
        container.GetBlockBlobReference(photoUri)

    member this.GetPhotoImageUri(uniqueIdentifier: Guid) =
        uniqueIdentifier.ToString() + ".jpg"

    interface IPhotoImageProvider with
        member this.InsertPhotoImage(photoImage: PhotoImage) =
            let blockBlob = this.GetBlockBlob(photoImage.UniqueId)
            use memoryStream = new MemoryStream(photoImage.ImageBytes)
            blockBlob.UploadFromStream(memoryStream)

        member this.DeletePhotoImage(uniqueIdentifier: Guid) =
            let blockBlob = this.GetBlockBlob(uniqueIdentifier)
            blockBlob.Delete()

        member this.GetPhotoImage(uniqueIdentifier: Guid) =
            let blockBlob = this.GetBlockBlob(uniqueIdentifier)
            if blockBlob.Exists() then
                blockBlob.FetchAttributes()
                use memoryStream = new MemoryStream()
                blockBlob.DownloadToStream(memoryStream)
                let photoArray = memoryStream.ToArray()
                new PhotoImage(uniqueIdentifier, photoArray)
            else
                failwith "photo not found"

And when I pop it into the WPF application,

private void SavePhoto(byte[] colorData)
{
    var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
    using (MemoryStream memoryStream = new MemoryStream())
    {
        encoder.Save(memoryStream);
        PhotoImage photoImage = new PhotoImage(Guid.NewGuid(), memoryStream.ToArray());

        Guid customerUniqueId = new Guid("7282AF48-FB3D-489B-A572-2EFAE80D0A9E");
        String connectionString =
            "DefaultEndpointsProtocol=http;AccountName=XXX;AccountKey=XXX";
        IPhotoImageProvider provider = new AzureStoragePhotoImageProvider(customerUniqueId, connectionString);

        provider.InsertPhotoImage(photoImage);
        memoryStream.Close();
    }
    _isTakingPicture = false;
}

I can now write my images to Azure.

image

With that out of the way, I can now have Sky Biometry pick up my photo, analyze it, and push the results back.  I went ahead and added in the .fs module that I had already created for this blog post.  I then added FSharp.Data via NuGet and was ready to roll.  In the SavePhoto event handler, after saving the photo to blob storage, it calls Sky Biometry to compare against a base image that has already been trained:

private void SavePhoto(byte[] colorData)
{
    var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
    PhotoImage photoImage = UploadPhotoImage(encoder);

    String skyBiometryUri = "http://api.skybiometry.com";
    String uid = "Kinect@ChickenFace";
    String apiKey = "XXXX";
    String apiSecret = "XXXX";

    var imageComparer = new SkyBiometryImageComparer(skyBiometryUri, uid, apiKey, apiSecret);
    String basePhotoUri = "XXXX.jpg";
    String targetPhotoUri = "XXXX/" + photoImage.UniqueId + ".jpg";

    currentImage.Source = new BitmapImage(new Uri(basePhotoUri));
    compareImage.Source = new BitmapImage(new Uri(targetPhotoUri));

    var matchValue = imageComparer.CalculateFacialRecognitionConfidence(basePhotoUri, targetPhotoUri);
    FacialRecognitionTextBox.Text = "Match Value is: " + matchValue.ToString();
    _isTakingPicture = false;
}

And I am getting a result back from Sky Biometry.

image

Finally, I added in the SkyBiometry X and Y coordinates for the photo and compared them to the calculated ones based on the Kinect skeleton tracking:

currentImage.Source = new BitmapImage(new Uri(basePhotoUri));
compareImage.Source = new BitmapImage(new Uri(targetPhotoUri));

var matchValue = imageComparer.CalculateFacialRecognitionConfidence(basePhotoUri, targetPhotoUri);

var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
if (selectedSkeleton != null)
{
    var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
    var adjustedHeadPosition =
        _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);

    var skyBiometryX = ((float)adjustedHeadPosition.X / 640) * 100;
    var skyBiometryY = ((float)adjustedHeadPosition.Y / 480) * 100;

    StringBuilder stringBuilder = new StringBuilder();
    stringBuilder.Append("Match Value is: ");
    stringBuilder.AppendLine(matchValue.Confidence.ToString());
    stringBuilder.Append("Sky Biometry X: ");
    stringBuilder.AppendLine(matchValue.X.ToString());
    stringBuilder.Append("Sky Biometry Y: ");
    stringBuilder.AppendLine(matchValue.Y.ToString());
    stringBuilder.Append("Kinect X: ");
    stringBuilder.AppendLine(Math.Round(skyBiometryX, 2).ToString());
    stringBuilder.Append("Kinect Y: ");
    stringBuilder.Append(Math.Round(skyBiometryY, 2).ToString());
    FacialRecognitionTextBox.Text = stringBuilder.ToString();
}

_isTakingPicture = false;

And the results are encouraging: it looks like I can use the X and Y to identify different people on the screen:

Match Value is: 53
Sky Biometry X: 10
Sky Biometry Y: 13.33

Kinect X: 47.5
Kinect Y: 39.79
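Matching a Kinect skeleton to a SkyBiometry face tag should then just be a nearest-neighbor check in that shared percentage space. A sketch of how that might look (the FaceTag type and its values are hypothetical stand-ins for the parsed SkyBiometry response):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical container for one parsed SkyBiometry face tag.
public class FaceTag
{
    public String Uid { get; set; }
    public float CenterX { get; set; }  // percent of photo width
    public float CenterY { get; set; }  // percent of photo height
}

public static class FaceMatcher
{
    // Pick the tag whose center is closest to the Kinect head position,
    // with both points expressed as percentages of the photo size.
    public static FaceTag FindClosestFace(IEnumerable<FaceTag> tags, float kinectX, float kinectY)
    {
        return tags
            .OrderBy(t => Math.Pow(t.CenterX - kinectX, 2) + Math.Pow(t.CenterY - kinectY, 2))
            .FirstOrDefault();
    }
}
```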

Up next will be pointing the laser and the target…

Terminator Program: Part 1

I am starting to work on a new Kinect application for TRINUG’s code camp.  I wanted to extend the facial recognition application I did using Sky Biometry and have the Kinect identify people in its field of view.  Then, I want to give the verbal command “Terminate XXX” where XXX is the name of a recognized person.  That would activate a couple of servos via a Netduino and point a laser pointer at that person and perhaps make a blaster sound.  The <ahem> architectural diagram </ahem> looks like this:

image

Not really worrying about how far I will get (the fun is in the process, no?), I picked up Rob Miles’s excellent book Start Here! Learn the Kinect API and plugged in my Kinect.

The first thing I did was see if I could get running video from the Kinect, which was very easy.  I created a new C#/WPF application and replaced the default markup with this:

<Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="545" Width="643"
        Loaded="Window_Loaded" Closing="Window_Closing">
    <Grid>
        <Image x:Name="kinectColorImage" Width="640" Height="480" />
    </Grid>
</Window>

And in the code-behind, I added the following code.  The only thing that is kinda tricky is that there are two threads: the main UI thread and the thread that processes the Kinect data.  Interestingly, it is easy to pass data from the Kinect thread to the main UI thread: just invoke a delegate on the Dispatcher and pass in the byte array.

Boolean _isKinectDisplayActive = false;
KinectSensor _sensor = null;
WriteableBitmap _videoBitmap = null;

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    SetUpKinect();
    Thread videoThread = new Thread(new ThreadStart(DisplayKinectData));
    _isKinectDisplayActive = true;
    videoThread.Start();
}

private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    _isKinectDisplayActive = false;
}

private void SetUpKinect()
{
    _sensor = KinectSensor.KinectSensors[0];
    _sensor.ColorStream.Enable();
    _sensor.Start();
}

private void DisplayKinectData()
{
    while (_isKinectDisplayActive)
    {
        using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
        {
            if (colorFrame == null) continue;
            var colorData = new byte[colorFrame.PixelDataLength];
            colorFrame.CopyPixelDataTo(colorData);
            Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData)));
        }
    }
    _sensor.Stop();
}

private void UpdateDisplay(byte[] colorData)
{
    if (_videoBitmap == null)
    {
        _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
    }
    _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
    kinectColorImage.Source = _videoBitmap;
}

And I have a live-feed video:

image

With that out of the way, I went to add picture taking capability.  I altered the XAML like so:

<Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="545" Width="643"
        Loaded="Window_Loaded" Closing="Window_Closing">
    <Grid>
        <Image x:Name="kinectColorImage" Width="640" Height="480" />
        <Button x:Name="takePhotoButton" Margin="0,466,435,10" Click="takePhotoButton_Click">Take Photo</Button>
    </Grid>
</Window>

And added this to the code behind:

Boolean _isTakingPicture = false;
BitmapSource _pictureBitmap = null;

private void takePhotoButton_Click(object sender, RoutedEventArgs e)
{
    _isTakingPicture = true;
    SaveFileDialog dialog = new SaveFileDialog();
    dialog.FileName = "Snapshot";
    dialog.DefaultExt = ".jpg";
    dialog.Filter = "Pictures (.jpg)|*.jpg";

    if (dialog.ShowDialog() == true)
    {
        String fileName = dialog.FileName;
        using (FileStream fileStream = new FileStream(fileName, FileMode.Create))
        {
            JpegBitmapEncoder encoder = new JpegBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(_pictureBitmap));
            encoder.Save(fileStream);
        }
    }
}


And altered the DisplayKinectData method to poll the _isTakingPicture flag:

private void DisplayKinectData()
{
    while (_isKinectDisplayActive)
    {
        using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
        {
            if (colorFrame == null) continue;
            var colorData = new byte[colorFrame.PixelDataLength];
            colorFrame.CopyPixelDataTo(colorData);
            Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData)));

            if (_isTakingPicture)
            {
                Dispatcher.Invoke(new Action(() => SavePhoto(colorData)));
            }
        }
    }
    _sensor.Stop();
}

And now I have screen capture ability.

image

With that out of the way, I needed a way of identifying the people in the Kinect’s field of vision and taking their pictures individually.  I altered the XAML like so:

<Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="545" Width="643"
        Loaded="Window_Loaded" Closing="Window_Closing">
    <Grid>
        <Image x:Name="kinectColorImage" Width="640" Height="480" />
        <Button x:Name="takePhotoButton" Margin="0,466,435,10" Click="takePhotoButton_Click">Take Photo</Button>
        <Canvas x:Name="skeletonCanvas" Width="640" Height="480" />
        <TextBox x:Name="skeletonInfoTextBox" Margin="205,466,10,10" />
    </Grid>
</Window>

And altered the SetUpKinect method like so:

private void SetUpKinect()
{
    _sensor = KinectSensor.KinectSensors[0];
    _sensor.ColorStream.Enable();
    _sensor.SkeletonStream.Enable();
    _sensor.Start();
}

And then altered the UpdateDisplay method to take in both the color byte array and the skeleton array and display the head and skeleton locations.  Note that there is a built-in function called MapSkeletonPointToColorPoint() which takes the skeleton coordinate position and translates it to the color coordinate position.  I know that it is needed, but I have no idea how it works: magic, I guess.  (Presumably the depth sensor and the color camera are physically offset, so the two coordinate spaces don’t line up.)

private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
{
    if (_videoBitmap == null)
    {
        _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
    }
    _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
    kinectColorImage.Source = _videoBitmap;
    var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
    if (selectedSkeleton != null)
    {
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
        var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

        String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

        skeletonInfoTextBox.Text = skeletonInfo;
    }
}

And the invocation of the UpdateDisplay now looks like this:

private void DisplayKinectData()
{
    while (_isKinectDisplayActive)
    {
        using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
        {
            if (colorFrame == null) continue;
            using (SkeletonFrame skeletonFrame = _sensor.SkeletonStream.OpenNextFrame(10))
            {
                if (skeletonFrame == null) continue;

                var colorData = new byte[colorFrame.PixelDataLength];
                var skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];

                colorFrame.CopyPixelDataTo(colorData);
                skeletonFrame.CopySkeletonDataTo(skeletons);

                if (_isTakingPicture)
                {
                    Dispatcher.Invoke(new Action(() => SavePhoto(colorData)));
                }
                Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData, skeletons)));
            }
        }
    }
    _sensor.Stop();
}

And the results are what you expect:

image

With the ability to identify individuals, I then wanted to take individual photos of each person and feed them to Sky Biometry.  To that end, I added a method to draw a rectangle around each person and then (somehow) take a snapshot of the contents within the rectangle.  Drawing the rectangle was a straightforward WPF exercise:

private void DrawBoxAroundHead(Skeleton selectedSkeleton)
{
    skeletonCanvas.Children.Clear();
    var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
    var shoulderCenterPosition = selectedSkeleton.Joints[JointType.ShoulderCenter].Position;

    var adjustedHeadPosition =
        _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
    var adjustedShoulderCenterPosition =
        _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(shoulderCenterPosition, ColorImageFormat.RgbResolution640x480Fps30);
    var delta = adjustedHeadPosition.Y - adjustedShoulderCenterPosition.Y;
    var centerX = adjustedHeadPosition.X;
    var centerY = adjustedHeadPosition.Y;

    Line topLine = new Line();
    topLine.Stroke = new SolidColorBrush(Colors.Red);
    topLine.StrokeThickness = 5;
    topLine.X1 = centerX + (delta * -1);
    topLine.Y1 = centerY - (delta * -1);
    topLine.X2 = centerX + delta;
    topLine.Y2 = centerY - (delta * -1);
    skeletonCanvas.Children.Add(topLine);
    Line bottomLine = new Line();
    bottomLine.Stroke = new SolidColorBrush(Colors.Red);
    bottomLine.StrokeThickness = 5;
    bottomLine.X1 = centerX + (delta * -1);
    bottomLine.Y1 = centerY + (delta * -1);
    bottomLine.X2 = centerX + delta;
    bottomLine.Y2 = centerY + (delta * -1);
    skeletonCanvas.Children.Add(bottomLine);
    Line rightLine = new Line();
    rightLine.Stroke = new SolidColorBrush(Colors.Red);
    rightLine.StrokeThickness = 5;
    rightLine.X1 = centerX + (delta * -1);
    rightLine.Y1 = centerY - (delta * -1);
    rightLine.X2 = centerX + (delta * -1);
    rightLine.Y2 = centerY + (delta * -1);
    skeletonCanvas.Children.Add(rightLine);
    Line leftLine = new Line();
    leftLine.Stroke = new SolidColorBrush(Colors.Red);
    leftLine.StrokeThickness = 5;
    leftLine.X1 = centerX + delta;
    leftLine.Y1 = centerY - (delta * -1);
    leftLine.X2 = centerX + delta;
    leftLine.Y2 = centerY + (delta * -1);
    skeletonCanvas.Children.Add(leftLine);
}

And then adding that call in UpdateDisplay:

if (selectedSkeleton != null)
{
    var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
    var adjustedHeadPosition =
        _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
    var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

    DrawBoxAroundHead(selectedSkeleton);

    String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
    skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
    skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

    skeletonInfoTextBox.Text = skeletonInfo;
}

Gives me this:

image

Which is great, but now I am stuck.  I need a way of isolating the contents of that rectangle in the byte array that I am feeding to the bitmap encoder, and I don’t know how to trim the array.  Instead of trying to learn any more WPF and graphics programming, I decided to take a different tack and send the photograph in its entirety to Sky Biometry and let it figure out the people in the photograph.  How I did that is the subject of my next blog post…
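For completeness, trimming the array turns out to be stride arithmetic: each row of the 640 x 480 Bgr32 frame is 640 * 4 bytes, so cropping means copying a sub-run of each row. Something like this untested sketch would probably do it (the crop rectangle values are placeholders):

```csharp
// Crop a rectangle out of a Bgr32 frame stored as one flat byte array.
// sourceWidth is in pixels; every Bgr32 pixel is 4 bytes.
public static byte[] CropBgr32(byte[] source, int sourceWidth,
    int cropX, int cropY, int cropWidth, int cropHeight)
{
    const int bytesPerPixel = 4;
    var cropped = new byte[cropWidth * cropHeight * bytesPerPixel];
    for (int row = 0; row < cropHeight; row++)
    {
        // Offset of the first cropped pixel in this source row.
        int sourceOffset = ((cropY + row) * sourceWidth + cropX) * bytesPerPixel;
        int targetOffset = row * cropWidth * bytesPerPixel;
        Array.Copy(source, sourceOffset, cropped, targetOffset, cropWidth * bytesPerPixel);
    }
    return cropped;
}
```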


Microsoft Language Stack Analogy

I am getting ready for my presentations at Charlotte Code Camp next Saturday.  My F# session is a business-case driven one: reasons why the average C# developer might want to take a look at F#.  I break the session down into 5 sections:  F# is integrated, fast, expressive, bug-resistant, and analytical.  In the fast piece, I am going to make the analogy of Visual Studio to a garage. 

Consider a man who lives in a nice house in a suburban neighborhood with a three-car garage.  Every morning when he gets ready for his commute to work, he opens the door that goes from his house into his garage, and there sitting in the first bay is a minivan.

image

Now there is nothing wrong with the minivan: it is dependable, all of the neighbors drive one, and it does many things pretty well.  However, consider that right next to the minivan, never been used, is a Ferrari.  Our suburban programmer has heard about the Ferrari, and has perhaps even glanced at it curiously when he pulls out in the morning, but he:

  • Doesn’t see the point of driving it because the minivan suits him just fine
  • Is afraid to try driving it because he doesn’t drive stick and taking the time to learn would slow him down
  • Doesn’t want to drive it because then he would have to explain to his project manager wife why he is driving around town in such a car

So the Ferrari sits unused.  To round out the analogy, in the third bay is a helicopter that no one in their right mind will touch.  Finally, there is a junked car around back that no one uses anymore but that he has to keep around because it is too expensive to haul to the junkyard.

image


So this is what happens to a majority of .NET developers when they open their garage called Visual Studio.  They go with the comfortable language of the C# minivan, ignoring the power and expressiveness of the F# Ferrari and certainly not touching the C++ helicopter.  I picked a helicopter for C++ because helicopters can go places cars cannot, are notoriously difficult to pilot, and when they crash, it is often spectacular and brings down others with them.  The junked car is VB.NET, which makes me sad on certain days….

Also, since C# 2.0, the minivan has tried to become more Ferrari-like.  It has added a turbo engine called LINQ, the var keyword, anonymous types, and the dynamic keyword, all in the attempt to become the one minivan that shall rule them all.

image

I don’t know much about Roslyn, but from what I have seen, I think I can remove language syntax and it will still compile.  If so, I will try to write a C# program that removes all curly braces and semicolons and replaces the var keyword with let.  Is it still C# then?

OT: can you tell which session I am doing at the Hartford Code Camp in 2 weeks?

image

(And no, I did not submit in all caps.  I guess the organizer is very excited about the topic?)