Terminator Program: Part 2

Following up on my last post, I decided to send the entire photograph to Sky Biometry and have them parse the photograph and identify individual people.  This ability is built right into their API.  For example, if you pass them this picture, you get the following JSON back.

image

I added the red highlight to show that Sky Biometry can recognize multiple people (it is an array of uids) and that each face tag has a center.x and center.y.  According to the API documentation, this point is the center of the face tag, expressed as a percentage of the photo's width and height.
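To make the percent-based coordinates concrete, here is a quick sketch (in Python, just to illustrate the arithmetic; the 45.0 and 37.5 values are made-up examples, not taken from the JSON above) of converting a face tag center back into pixel coordinates on a 640 x 480 photo:

```python
def percent_to_pixels(center_x_pct, center_y_pct, width=640, height=480):
    """Convert a Sky Biometry-style percent-based center to pixel coordinates."""
    return (center_x_pct / 100.0) * width, (center_y_pct / 100.0) * height

x, y = percent_to_pixels(45.0, 37.5)  # -> (288.0, 180.0)
```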

image

So I need to translate the center point of the skeleton from the Kinect to the equivalent center point of the Sky Biometry recognition output, and then I should be able to identify individual people within the Kinect’s field of vision.  Going back to the Kinect code, I ditched the DrawBoxAroundHead method and altered the UpdateDisplay method like so:

    private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
    {
        if (_videoBitmap == null)
        {
            _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
        }
        _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
        kinectColorImage.Source = _videoBitmap;
        var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
        if (selectedSkeleton != null)
        {
            var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
            var adjustedHeadPosition =
                _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
            var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

            skeletonCanvas.Children.Clear();
            Rectangle headRectangle = new Rectangle();
            headRectangle.Fill = new SolidColorBrush(Colors.Blue);
            headRectangle.Width = 10;
            headRectangle.Height = 10;
            Canvas.SetLeft(headRectangle, adjustedHeadPosition.X);
            Canvas.SetTop(headRectangle, adjustedHeadPosition.Y);
            skeletonCanvas.Children.Add(headRectangle);

            Rectangle skeletonRectangle = new Rectangle();
            skeletonRectangle.Fill = new SolidColorBrush(Colors.Red);
            skeletonRectangle.Width = 10;
            skeletonRectangle.Height = 10;
            Canvas.SetLeft(skeletonRectangle, adjustedSkeletonPosition.X);
            Canvas.SetTop(skeletonRectangle, adjustedSkeletonPosition.Y);
            skeletonCanvas.Children.Add(skeletonRectangle);

            String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
            skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
            skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

            skeletonInfoTextBox.Text = skeletonInfo;
        }
    }

Notice that there are two rectangles because I was not sure if the Head.Position or the Skeleton.Position would match SkyBiometry.  Turns out that I want the Head.Position for SkyBiometry (besides, the Terminator would want head shots only).

image

So I ditched the Skeleton.Position.  I then needed a way to translate the Head.Position.X to SkyBiometry.X and Head.Position.Y to SkyBiometry.Y.  Fortunately, I know the size of each photograph (640 x 480), so calculating the percent is an exercise of altering UpdateDisplay:

    private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
    {
        Int32 photoWidth = 640;
        Int32 photoHeight = 480;

        if (_videoBitmap == null)
        {
            _videoBitmap = new WriteableBitmap(photoWidth, photoHeight, 96, 96, PixelFormats.Bgr32, null);
        }
        _videoBitmap.WritePixels(new Int32Rect(0, 0, photoWidth, photoHeight), colorData, photoWidth * 4, 0);
        kinectColorImage.Source = _videoBitmap;
        var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
        if (selectedSkeleton != null)
        {
            var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
            var adjustedHeadPosition =
                _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);

            skeletonCanvas.Children.Clear();
            Rectangle headRectangle = new Rectangle();
            headRectangle.Fill = new SolidColorBrush(Colors.Blue);
            headRectangle.Width = 10;
            headRectangle.Height = 10;
            Canvas.SetLeft(headRectangle, adjustedHeadPosition.X);
            Canvas.SetTop(headRectangle, adjustedHeadPosition.Y);
            skeletonCanvas.Children.Add(headRectangle);

            var skyBiometryX = ((float)adjustedHeadPosition.X / photoWidth) * 100;
            var skyBioMetryY = ((float)adjustedHeadPosition.Y / photoHeight) * 100;

            String skeletonInfo = adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
            skeletonInfo = skeletonInfo + Math.Round(skyBiometryX, 2).ToString() + " : " + Math.Round(skyBioMetryY, 2).ToString();

            skeletonInfoTextBox.Text = skeletonInfo;
        }
    }

And so now I have

image

The next step is to get the Kinect photo to Sky Biometry.  I decided to use Azure Blob Storage as my intermediary location.  I updated the architectural diagram like so:

image

At this point, it made sense to move the project over to F# so I could better concentrate on the work that needed to be done and also get the important code out of the UI code-behind.  I fired up an F# project in my solution and added a couple of different implementations for storing photos.  To keep things consistent, I created a data structure and an interface:

    namespace ChickenSoftware.Terminator.Core

    open System

    type public PhotoImage (uniqueId:Guid, imageBytes:byte[]) =
        member this.UniqueId = uniqueId
        member this.ImageBytes = imageBytes

    type IPhotoImageProvider =
        abstract member InsertPhotoImage : PhotoImage -> unit
        abstract member DeletePhotoImage : Guid -> unit
        abstract member GetPhotoImage : Guid -> PhotoImage

My first stop was to replicate what Miles did with the Save File Dialog box, this time with a File System Provider.  It was very much like a C# implementation:

    namespace ChickenSoftware.Terminator.Core

    open System
    open System.IO
    open System.Drawing
    open System.Drawing.Imaging

    type LocalFileSystemPhotoImageProvider(folderPath: string) =

        member this.GetPhotoImageUri(uniqueIdentifier: Guid) =
            let fileName = uniqueIdentifier.ToString() + ".jpg"
            Path.Combine(folderPath, fileName)

        interface IPhotoImageProvider with
            member this.InsertPhotoImage(photoImage: PhotoImage) =
                let fullPath = this.GetPhotoImageUri(photoImage.UniqueId)
                use memoryStream = new MemoryStream(photoImage.ImageBytes)
                let image = Image.FromStream(memoryStream)
                image.Save(fullPath)

            member this.DeletePhotoImage(uniqueIdentifier: Guid) =
                let fullPath = this.GetPhotoImageUri(uniqueIdentifier)
                File.Delete(fullPath)

            member this.GetPhotoImage(uniqueIdentifier: Guid) =
                let fullPath = this.GetPhotoImageUri(uniqueIdentifier)
                use fileStream = new FileStream(fullPath, FileMode.Open)
                let image = Image.FromStream(fileStream)
                use memoryStream = new MemoryStream()
                image.Save(memoryStream, ImageFormat.Jpeg)
                new PhotoImage(uniqueIdentifier, memoryStream.ToArray())

To call the save method, I altered the SavePhoto method in the C# project to use a MemoryStream and not a FileStream:

    private void SavePhoto(byte[] colorData)
    {
        var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
        using (MemoryStream memoryStream = new MemoryStream())
        {
            encoder.Save(memoryStream);
            PhotoImage photoImage = new PhotoImage(Guid.NewGuid(), memoryStream.ToArray());

            String folderUri = @"C:\Data";
            IPhotoImageProvider provider = new LocalFileSystemPhotoImageProvider(folderUri);

            provider.InsertPhotoImage(photoImage);
            memoryStream.Close();
        }
        _isTakingPicture = false;
    }

And sure enough, it saves the photo to disk:

image

One problem that took me 20 minutes to uncover is that if you get your file system path wrong, you get the unhelpful exception:

image

This has been well-bitched about on Stack Overflow so I won’t comment further.
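A cheap way to dodge those 20 minutes next time is to validate the folder before handing the path to Image.Save.  A sketch of the guard in Python (the C# equivalent would check Directory.Exists; safe_photo_path is my made-up name, not part of the provider):

```python
from pathlib import Path

def safe_photo_path(folder, file_name):
    """Fail fast with a useful message instead of a generic save error."""
    folder_path = Path(folder)
    if not folder_path.is_dir():
        raise NotADirectoryError("Photo folder does not exist: " + str(folder))
    return folder_path / file_name
```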

With the file system up and running, I turned my attention to Azure.  Like the File System provider, it is very close to a C# implementation:

    namespace ChickenSoftware.Terminator.Core

    open System
    open System.IO
    open Microsoft.WindowsAzure.Storage
    open Microsoft.WindowsAzure.Storage.Blob

    type AzureStoragePhotoImageProvider(customerUniqueId: Guid, connectionString: string) =

        member this.GetBlobContainer(blobClient:Blob.CloudBlobClient) =
            let container = blobClient.GetContainerReference(customerUniqueId.ToString())
            if not (container.Exists()) then
                container.CreateIfNotExists() |> ignore
                let permissions = new BlobContainerPermissions()
                permissions.PublicAccess <- BlobContainerPublicAccessType.Blob
                container.SetPermissions(permissions)
            container

        member this.GetBlockBlob(uniqueIdentifier: Guid) =
            let storageAccount = CloudStorageAccount.Parse(connectionString)
            let blobClient = storageAccount.CreateCloudBlobClient()
            let container = this.GetBlobContainer(blobClient)
            let photoUri = this.GetPhotoImageUri(uniqueIdentifier)
            container.GetBlockBlobReference(photoUri)

        member this.GetPhotoImageUri(uniqueIdentifier: Guid) =
            uniqueIdentifier.ToString() + ".jpg"

        interface IPhotoImageProvider with
            member this.InsertPhotoImage(photoImage: PhotoImage) =
                let blockBlob = this.GetBlockBlob(photoImage.UniqueId)
                use memoryStream = new MemoryStream(photoImage.ImageBytes)
                blockBlob.UploadFromStream(memoryStream)

            member this.DeletePhotoImage(uniqueIdentifier: Guid) =
                let blockBlob = this.GetBlockBlob(uniqueIdentifier)
                blockBlob.Delete()

            member this.GetPhotoImage(uniqueIdentifier: Guid) =
                let blockBlob = this.GetBlockBlob(uniqueIdentifier)
                if blockBlob.Exists() then
                    blockBlob.FetchAttributes()
                    use memoryStream = new MemoryStream()
                    blockBlob.DownloadToStream(memoryStream)
                    let photoArray = memoryStream.ToArray()
                    new PhotoImage(uniqueIdentifier, photoArray)
                else
                    failwith "photo not found"

And when I pop it into the WPF application,

    private void SavePhoto(byte[] colorData)
    {
        var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
        using (MemoryStream memoryStream = new MemoryStream())
        {
            encoder.Save(memoryStream);
            PhotoImage photoImage = new PhotoImage(Guid.NewGuid(), memoryStream.ToArray());

            Guid customerUniqueId = new Guid("7282AF48-FB3D-489B-A572-2EFAE80D0A9E");
            String connectionString =
                "DefaultEndpointsProtocol=http;AccountName=XXX;AccountKey=XXX";
            IPhotoImageProvider provider = new AzureStoragePhotoImageProvider(customerUniqueId, connectionString);

            provider.InsertPhotoImage(photoImage);
            memoryStream.Close();
        }
        _isTakingPicture = false;
    }

I can now write my images to Azure.

image

With that out of the way, I can now have SkyBiometry pick up my photo, analyze it, and push the results back.  I went ahead and added in the .fs module that I had already created for this blog post.  I then added FSharp.Data via NuGet and was ready to roll.  In the SavePhoto event handler, after saving the photo to blob storage, it then calls Sky Biometry to compare against a base image that has already been trained:

    private void SavePhoto(byte[] colorData)
    {
        var bitmapSource = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, colorData, 640 * 4);
        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
        PhotoImage photoImage = UploadPhotoImage(encoder);

        String skyBiometryUri = "http://api.skybiometry.com";
        String uid = "Kinect@ChickenFace";
        String apiKey = "XXXX";
        String apiSecret = "XXXX";

        var imageComparer = new SkyBiometryImageComparer(skyBiometryUri, uid, apiKey, apiSecret);
        String basePhotoUri = "XXXX.jpg";
        String targetPhotoUri = "XXXX/" + photoImage.UniqueId + ".jpg";

        currentImage.Source = new BitmapImage(new Uri(basePhotoUri));
        compareImage.Source = new BitmapImage(new Uri(targetPhotoUri));

        var matchValue = imageComparer.CalculateFacialRecognitionConfidence(basePhotoUri, targetPhotoUri);
        FacialRecognitionTextBox.Text = "Match Value is: " + matchValue.ToString();
        _isTakingPicture = false;
    }

And I am getting a result back from Sky Biometry.

image

Finally, I added in the SkyBiometry X and Y coordinates for the photo and compared to the calculated ones based on the Kinect Skeleton Tracking:

    currentImage.Source = new BitmapImage(new Uri(basePhotoUri));
    compareImage.Source = new BitmapImage(new Uri(targetPhotoUri));

    var matchValue = imageComparer.CalculateFacialRecognitionConfidence(basePhotoUri, targetPhotoUri);

    var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
    if (selectedSkeleton != null)
    {
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);

        var skyBiometryX = ((float)adjustedHeadPosition.X / 640) * 100;
        var skyBioMetryY = ((float)adjustedHeadPosition.Y / 480) * 100;

        StringBuilder stringBuilder = new StringBuilder();
        stringBuilder.Append("Match Value is: ");
        stringBuilder.AppendLine(matchValue.Confidence.ToString());
        stringBuilder.Append("Sky Biometry X: ");
        stringBuilder.AppendLine(matchValue.X.ToString());
        stringBuilder.Append("Sky Biometry Y: ");
        stringBuilder.AppendLine(matchValue.Y.ToString());
        stringBuilder.Append("Kinect X: ");
        stringBuilder.AppendLine(Math.Round(skyBiometryX, 2).ToString());
        stringBuilder.Append("Kinect Y: ");
        stringBuilder.Append(Math.Round(skyBioMetryY, 2).ToString());
        FacialRecognitionTextBox.Text = stringBuilder.ToString();
    }

    _isTakingPicture = false;

And the results are encouraging –> it looks like I can use the X and Y to identify different people on the screen:

Match Value is: 53
Sky Biometry X: 10
Sky Biometry Y: 13.33

Kinect X: 47.5
Kinect Y: 39.79
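Assuming each Sky Biometry tag carries a percent-based center (the uids array from the first screenshot) and the Kinect head position is converted into that same percent space, matching a skeleton to a face could come down to a nearest-center search.  A sketch in Python (the uids and tag centers are invented for illustration):

```python
import math

def closest_face(head_pct, face_tags):
    """face_tags: list of (uid, (x_pct, y_pct)); head_pct: (x_pct, y_pct).
    Returns the uid whose face tag center is nearest the Kinect head position."""
    return min(face_tags, key=lambda tag: math.dist(head_pct, tag[1]))[0]

tags = [("jamie@chickenface", (10.0, 13.33)),
        ("other@chickenface", (85.0, 80.0))]
print(closest_face((47.5, 39.79), tags))  # -> jamie@chickenface
```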

Up next will be pointing the laser at the target…

 

 

 


Terminator Program: Part 1

I am starting to work on a new Kinect application for TRINUG’s code camp.  I wanted to extend the facial recognition application I did using Sky Biometry and have the Kinect identify people in its field of view.  Then, I want to give the verbal command “Terminate XXX” where XXX is the name of a recognized person.  That would activate a couple of servos via a Netduino and point a laser pointer at that person and perhaps make a blaster sound.  The <ahem> architectural diagram </ahem> looks like this:

image

Not really worrying about how far I will get (the fun is in the process, no?), I picked up Rob Miles’s excellent book Start Here: Learn The Kinect API and plugged in my Kinect.

The first thing I did was see if I could get a running video feed from the Kinect –> which was very easy.  I created a new C#/WPF application and replaced the default markup with this:

    <Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="545" Width="643"
            Loaded="Window_Loaded" Closing="Window_Closing">
        <Grid>
            <Image x:Name="kinectColorImage" Width="640" Height="480" />
        </Grid>
    </Window>

And in the code-behind, I added the following code.  The only thing that is kinda tricky is that there are two threads: the main UI thread and the thread that processes the Kinect data.  Interestingly, it is easy to pass data from the Kinect thread to the main UI thread –> just call the delegate and pass in the byte array.

    Boolean _isKinectDisplayActive = false;
    KinectSensor _sensor = null;
    WriteableBitmap _videoBitmap = null;

    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        SetUpKinect();
        Thread videoThread = new Thread(new ThreadStart(DisplayKinectData));
        _isKinectDisplayActive = true;
        videoThread.Start();
    }

    private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
    {
        _isKinectDisplayActive = false;
    }

    private void SetUpKinect()
    {
        _sensor = KinectSensor.KinectSensors[0];
        _sensor.ColorStream.Enable();
        _sensor.Start();
    }

    private void DisplayKinectData()
    {
        while (_isKinectDisplayActive)
        {
            using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
            {
                if (colorFrame == null) continue;
                var colorData = new byte[colorFrame.PixelDataLength];
                colorFrame.CopyPixelDataTo(colorData);
                Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData)));
            }
        }
        _sensor.Stop();
    }

    private void UpdateDisplay(byte[] colorData)
    {
        if (_videoBitmap == null)
        {
            _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
        }
        _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
        kinectColorImage.Source = _videoBitmap;
    }

And I have a live-feed video:

image

With that out of the way, I went to add picture taking capability.  I altered the XAML like so:

    <Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="545" Width="643"
            Loaded="Window_Loaded" Closing="Window_Closing">
        <Grid>
            <Image x:Name="kinectColorImage" Width="640" Height="480" />
            <Button x:Name="takePhotoButton" Margin="0,466,435,10" Click="takePhotoButton_Click">Take Photo</Button>
        </Grid>
    </Window>

And added this to the code behind:

    Boolean _isTakingPicture = false;
    BitmapSource _pictureBitmap = null;

    private void takePhotoButton_Click(object sender, RoutedEventArgs e)
    {
        _isTakingPicture = true;
        SaveFileDialog dialog = new SaveFileDialog();
        dialog.FileName = "Snapshot";
        dialog.DefaultExt = ".jpg";
        dialog.Filter = "Pictures (.jpg)|*.jpg";

        if (dialog.ShowDialog() == true)
        {
            String fileName = dialog.FileName;
            using (FileStream fileStream = new FileStream(fileName, FileMode.Create))
            {
                JpegBitmapEncoder encoder = new JpegBitmapEncoder();
                encoder.Frames.Add(BitmapFrame.Create(_pictureBitmap));
                encoder.Save(fileStream);
            }
        }
    }

 

And altered the DisplayKinectData method to poll the _isTakingPicture flag:

    private void DisplayKinectData()
    {
        while (_isKinectDisplayActive)
        {
            using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
            {
                if (colorFrame == null) continue;
                var colorData = new byte[colorFrame.PixelDataLength];
                colorFrame.CopyPixelDataTo(colorData);
                Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData)));

                if (_isTakingPicture)
                {
                    Dispatcher.Invoke(new Action(() => SavePhoto(colorData)));
                }
            }
        }
        _sensor.Stop();
    }

And now I have screen capture ability.

image

With that out of the way, I needed a way of identifying the people in the Kinect’s field of vision and taking their pictures individually.  I altered the XAML like so:

    <Window x:Class="ChickenSoftware.Terminiator.UI.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="545" Width="643"
            Loaded="Window_Loaded" Closing="Window_Closing">
        <Grid>
            <Image x:Name="kinectColorImage" Width="640" Height="480" />
            <Button x:Name="takePhotoButton" Margin="0,466,435,10" Click="takePhotoButton_Click">Take Photo</Button>
            <Canvas x:Name="skeletonCanvas" Width="640" Height="480" />
            <TextBox x:Name="skeletonInfoTextBox" Margin="205,466,10,10" />
        </Grid>
    </Window>

And altered the Setup method like so:

    private void SetUpKinect()
    {
        _sensor = KinectSensor.KinectSensors[0];
        _sensor.ColorStream.Enable();
        _sensor.SkeletonStream.Enable();
        _sensor.Start();
    }

And then altered the UpdateDisplay method to take in both the color byte array and the skeleton array and display the head and skeleton location.  Note that there is a built-in function called MapSkeletonPointToColorPoint() which takes the skeleton coordinate position and translates it to the color coordinate position.  I know that it is needed, but I have no idea how it works –> magic, I guess.

    private void UpdateDisplay(byte[] colorData, Skeleton[] skeletons)
    {
        if (_videoBitmap == null)
        {
            _videoBitmap = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null);
        }
        _videoBitmap.WritePixels(new Int32Rect(0, 0, 640, 480), colorData, 640 * 4, 0);
        kinectColorImage.Source = _videoBitmap;
        var selectedSkeleton = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
        if (selectedSkeleton != null)
        {
            var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
            var adjustedHeadPosition =
                _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
            var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

            String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
            skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
            skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

            skeletonInfoTextBox.Text = skeletonInfo;
        }
    }
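My guess at the “magic”: MapSkeletonPointToColorPoint is presumably doing something like a pinhole-camera projection from 3D skeleton space (meters) down to 2D color pixels.  A toy sketch of the idea in Python (the focal length is an invented illustrative value, not the Kinect’s real calibration, which also corrects for the offset between the depth and color cameras):

```python
def project_to_color(x_m, y_m, z_m, focal_px=525.0, width=640, height=480):
    """Toy pinhole projection: 3D camera-space point (meters) -> pixel coords."""
    u = (x_m / z_m) * focal_px + width / 2
    v = -(y_m / z_m) * focal_px + height / 2  # y flips: up in space is down in pixels
    return u, v

print(project_to_color(0.0, 0.0, 2.0))  # point dead ahead -> (320.0, 240.0)
```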

And the invocation of the UpdateDisplay now looks like this:

    private void DisplayKinectData()
    {
        while (_isKinectDisplayActive)
        {
            using (ColorImageFrame colorFrame = _sensor.ColorStream.OpenNextFrame(10))
            {
                if (colorFrame == null) continue;
                using (SkeletonFrame skeletonFrame = _sensor.SkeletonStream.OpenNextFrame(10))
                {
                    if (skeletonFrame == null) continue;

                    var colorData = new byte[colorFrame.PixelDataLength];
                    var skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];

                    colorFrame.CopyPixelDataTo(colorData);
                    skeletonFrame.CopySkeletonDataTo(skeletons);

                    if (_isTakingPicture)
                    {
                        Dispatcher.Invoke(new Action(() => SavePhoto(colorData)));
                    }
                    Dispatcher.Invoke(new Action(() => UpdateDisplay(colorData, skeletons)));
                }
            }
        }
        _sensor.Stop();
    }

And the results are what you expect:

image

With the ability to identify individuals, I then wanted to take individual photos of each person and feed them to Sky Biometry.  To that end, I added a method to draw a rectangle around each person and then (somehow) take a snapshot of the contents within the rectangle.  Drawing the rectangle was a straightforward WPF exercise:

    private void DrawBoxAroundHead(Skeleton selectedSkeleton)
    {
        skeletonCanvas.Children.Clear();
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var shoulderCenterPosition = selectedSkeleton.Joints[JointType.ShoulderCenter].Position;

        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
        var adjustedShoulderCenterPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(shoulderCenterPosition, ColorImageFormat.RgbResolution640x480Fps30);
        var delta = adjustedHeadPosition.Y - adjustedShoulderCenterPosition.Y;
        var centerX = adjustedHeadPosition.X;
        var centerY = adjustedHeadPosition.Y;

        Line topLine = new Line();
        topLine.Stroke = new SolidColorBrush(Colors.Red);
        topLine.StrokeThickness = 5;
        topLine.X1 = centerX + (delta * -1);
        topLine.Y1 = centerY - (delta * -1);
        topLine.X2 = centerX + delta;
        topLine.Y2 = centerY - (delta * -1);
        skeletonCanvas.Children.Add(topLine);
        Line bottomLine = new Line();
        bottomLine.Stroke = new SolidColorBrush(Colors.Red);
        bottomLine.StrokeThickness = 5;
        bottomLine.X1 = centerX + (delta * -1);
        bottomLine.Y1 = centerY + (delta * -1);
        bottomLine.X2 = centerX + delta;
        bottomLine.Y2 = centerY + (delta * -1);
        skeletonCanvas.Children.Add(bottomLine);
        Line rightLine = new Line();
        rightLine.Stroke = new SolidColorBrush(Colors.Red);
        rightLine.StrokeThickness = 5;
        rightLine.X1 = centerX + (delta * -1);
        rightLine.Y1 = centerY - (delta * -1);
        rightLine.X2 = centerX + (delta * -1);
        rightLine.Y2 = centerY + (delta * -1);
        skeletonCanvas.Children.Add(rightLine);
        Line leftLine = new Line();
        leftLine.Stroke = new SolidColorBrush(Colors.Red);
        leftLine.StrokeThickness = 5;
        leftLine.X1 = centerX + delta;
        leftLine.Y1 = centerY - (delta * -1);
        leftLine.X2 = centerX + delta;
        leftLine.Y2 = centerY + (delta * -1);
        skeletonCanvas.Children.Add(leftLine);
    }

And then adding that call in UpdateDisplay:

    if (selectedSkeleton != null)
    {
        var headPosition = selectedSkeleton.Joints[JointType.Head].Position;
        var adjustedHeadPosition =
            _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(headPosition, ColorImageFormat.RgbResolution640x480Fps30);
        var adjustedSkeletonPosition = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(selectedSkeleton.Position, ColorImageFormat.RgbResolution640x480Fps30);

        DrawBoxAroundHead(selectedSkeleton);

        String skeletonInfo = headPosition.X.ToString() + " : " + headPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedHeadPosition.X.ToString() + " : " + adjustedHeadPosition.Y.ToString() + " - ";
        skeletonInfo = skeletonInfo + adjustedSkeletonPosition.X.ToString() + " : " + adjustedSkeletonPosition.Y.ToString();

        skeletonInfoTextBox.Text = skeletonInfo;
    }

Gives me this:

image

Which is great, but now I am stuck.  I need a way of isolating the contents of that rectangle in the byte array that I am feeding to the bitmap encoder, and I don’t know how to trim the array.  Instead of trying to learn any more WPF and graphics programming, I decided to take a different tack and send the photograph in its entirety to Sky Biometry and let it figure out the people in the photograph.  How I did that is the subject of my next blog post…
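For the record, trimming the byte array is mostly row arithmetic: the frame is row-major with 4 bytes per pixel (Bgr32), so the rectangle can be sliced out one row at a time.  A sketch of the idea in Python, divorced from the Kinect/WPF specifics:

```python
def crop_bgr32(pixels, img_width, rect_x, rect_y, rect_w, rect_h):
    """pixels: flat bytes/bytearray, 4 bytes per pixel, row-major.
    Returns the rect_w x rect_h sub-image as a new bytearray."""
    bpp = 4  # Bgr32: blue, green, red, padding
    out = bytearray()
    for row in range(rect_y, rect_y + rect_h):
        start = (row * img_width + rect_x) * bpp
        out += pixels[start:start + rect_w * bpp]
    return out

# 2x2 image, crop the bottom-right 1x1 pixel
img = bytearray(range(16))
print(list(crop_bgr32(img, 2, 1, 1, 1, 1)))  # -> [12, 13, 14, 15]
```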

 

 

 

 

WPF and Images In Subdirectories

Dear Jamie of the future:

When you have a WPF project and you want to add images in a subfolder to a page, you need to do this:

    <Image x:Name="TestImage" Source="/Images/TestImage.png" />

Note the forward slashes.

Also, if you have two images and you change the location on both, you will get the blue squiggly line of approbation like this:

image

If you fix the first one, BOTH will still have the BSLA:

image

So it looks to be all or none – perhaps b/c the IDE can’t parse anything so it leaves the last BSLA in place?  In any event, fix all of them and the BSLA goes away.

Sincerely,

Jamie of the past

PS: you really should exercise more….

WPF Event Bubbling and Routed Events

Consider a User Control with a single button on it:

    <UserControl x:Class="Tff.ButtonClickBubble.MainUserControl"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                 mc:Ignorable="d" Height="100" Width="100">
        <Grid Background="Red" Margin="0,0,0,0">
            <Button x:Name="MainButton"
                    Content="Push Me" HorizontalAlignment="Left"
                    Margin="19,36,0,0" VerticalAlignment="Top"
                    Width="62" Height="25"
                    Click="MainButton_Click"/>
        </Grid>
    </UserControl>

 

In the code behind, the click event shows a dialog:

  1. public partial class MainUserControl : UserControl
  2. {
  3.     public MainUserControl()
  4.     {
  5.         InitializeComponent();
  6.     }
  7.  
  8.     private void MainButton_Click(object sender, RoutedEventArgs e)
  9.     {
  10.         MessageBox.Show(e.Source.ToString());
  11.     }
  12. }

After hitting F6 so the control will show in my toolbox, I then put an instance of that control on a basic page like so:

  1. <Window
  2.         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  3.         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  4.         xmlns:local="clr-namespace:Tff.ButtonClickBubble" x:Class="Tff.ButtonClickBubble.MainWindow"
  5.         Title="MainWindow" Height="236" Width="184">
  6.     <Grid x:Name="MainGrid" Background="AliceBlue">
  7.         <local:MainUserControl HorizontalAlignment="Left" Margin="35,44,0,0" VerticalAlignment="Top"/>
  8.     </Grid>
  9. </Window>

 

When I hit F5, I get the expected MessageBox:

image

So now I want the main window to intercept that button click and pop its own dialog box.  The Window class does not have a Click event – rather, it has a MouseDown event.  After adding the MouseDown event handler to the main window, I have this:

  1. <Window
  2.         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  3.         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  4.         xmlns:local="clr-namespace:Tff.ButtonClickBubble" x:Class="Tff.ButtonClickBubble.MainWindow"
  5.         Title="MainWindow" Height="236" Width="184" MouseDown="Window_MouseDown">

 

And in the code behind:

  1. private void Window_MouseDown(object sender, MouseButtonEventArgs e)
  2. {
  3.     MessageBox.Show(e.Source.ToString());
  4. }

 

Unfortunately, that doesn’t work.  The Button swallows the mouse event, so MouseDown only fires on the parts of the window not covered by the button:

image image

So I went to Stack Overflow and found this post that I think describes the problem.  I changed the MouseDown to the PreviewMouseDown event and, sure enough, I can handle the event from the main window:

image

But the Click event on the button no longer fires.  I then added an e.Handled = false, but that did not help:

  1. private void Window_PreviewMouseDown(object sender, MouseButtonEventArgs e)
  2. {
  3.     MessageBox.Show(e.Source.ToString());
  4.     e.Handled = false;
  5. }

 

So the answer marked as correct on Stack Overflow does not apply here.  The answer below the accepted one seemed more relevant, so I tried to add a handler to the Grid like this:

  1. public MainWindow()
  2. {
  3.     InitializeComponent();
  4.     MainGrid.AddHandler(MouseDownEvent, new MouseButtonEventHandler(MainGrid_MouseDown), true);
  5. }
  6.  
  7. private void MainGrid_MouseDown(object sender, MouseButtonEventArgs e)
  8. {
  9.     MessageBox.Show(e.Source.ToString());
  10. }

 

The problem is that it is not working either.  Fortunately, the answer below THAT one does work:

  1. <Window
  2.         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation&quot;
  3.         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml&quot;
  4.         xmlns:local="clr-namespace:Tff.ButtonClickBubble" x:Class="Tff.ButtonClickBubble.MainWindow"
  5.         Title="MainWindow" Height="236" Width="184" >
  6.     <Grid x:Name="MainGrid" Background="AliceBlue" Button.Click="MainGrid_MouseDown">
  7.         <local:MainUserControl HorizontalAlignment="Left" Margin="35,44,0,0" VerticalAlignment="Top"/>
  8.     </Grid>
  9. </Window>

 

And

  1. public partial class MainWindow : Window
  2. {
  3.     public MainWindow()
  4.     {
  5.         InitializeComponent();
  6.     }
  7.  
  8.     private void MainGrid_MouseDown(object sender, RoutedEventArgs e)
  9.     {
  10.         MessageBox.Show(e.Source.ToString());
  11.     }
  12.  
  13. }

 

The key thing is wiring up the Grid like this:  Button.Click="MainGrid_MouseDown"
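For the record, I believe the same wiring can be done from code behind via the attached Button.ClickEvent routed event (a sketch; my earlier AddHandler attempt presumably failed because I registered MouseDownEvent, and a button click never bubbles as a raw mouse event):

```csharp
public MainWindow()
{
    InitializeComponent();
    // Listen for the attached Button.Click routed event as it
    // bubbles up from the UserControl's button to the Grid.
    MainGrid.AddHandler(Button.ClickEvent,
        new RoutedEventHandler(MainGrid_MouseDown));
}
```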

Exception Handling in WPF

There have been two overriding principles since .NET 1.0, irrespective of platform:

  • Only catch exceptions you plan to handle
  • Only catch System.Exception in one place

I whipped up a WinForms project and added a button to the default form.  I then added the following code to its code behind:

    private void ThrowExceptionButton_Click(object sender, EventArgs e)
    {
        ArgumentOutOfRangeException exception = new ArgumentOutOfRangeException("New Exception");
        throw exception;
    }

When I run the app and press the button, I get this:

image

Then, I added a general try…catch around the Application.Run method in Program.cs:
    try
    {
        Application.Run(new Form1());
    }
    catch (Exception exception)
    {
        MessageBox.Show("Exception occurred: " + exception.Message);
    }

And I got this:

image

All well and good – this has been the standard for years.  I then decided to apply the same methodology to a WPF application.

The first thing I noticed is that there is no Application.Run call in sight.  Rather, there is this:

    <Application x:Class="Tff.ExceptionHandling.App"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 StartupUri="MainWindow.xaml">
        <Application.Resources>
             
        </Application.Resources>
    </Application>
    

Which means I can’t put that StartupUri in a general try…catch.  Going over to MSDN, I see that you are supposed to use Application.Current.DispatcherUnhandledException like so:

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            Application.Current.DispatcherUnhandledException += 
                new System.Windows.Threading.DispatcherUnhandledExceptionEventHandler(Current_DispatcherUnhandledException);
            base.OnStartup(e);
        }
    
        void Current_DispatcherUnhandledException(object sender, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs e)
        {
            //log(e);
            e.Handled = true;
        }
    }

All well and good.  If I run the app without debugging, the application handles the exception without shutting down.  However, if I run the application while debugging, I get this with or without the exception handling:

image

Which is different from WinForms…

So now that I have come to grips with WPF exceptions on the main UI thread, what about secondary threads?

In WinForms, I can do something like this:

    BackgroundWorker backgroundWorker = null;
    public Form1()
    {
        backgroundWorker = new BackgroundWorker();
        backgroundWorker.DoWork += new DoWorkEventHandler(backgroundWorker_DoWork);
        InitializeComponent();
    }
    
    void backgroundWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        Thread.Sleep(1000);
        ArgumentOutOfRangeException exception = new ArgumentOutOfRangeException("New Exception");
        throw exception;
    }
    
    private void ThrowExceptionOnSecondThreadButton_Click(object sender, EventArgs e)
    {
        backgroundWorker.RunWorkerAsync();
    }

Sure enough:

image

But the important thing is this: if I start the application without debugging, then nothing happens when the secondary thread raises an exception – it just ends silently.  Another interesting offshoot is that the global try…catch in Program doesn’t do anything for the background thread either; the debugger still stops when the exception is raised.
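One mitigation I know of (a sketch against the BackgroundWorker code above): BackgroundWorker traps any exception thrown in DoWork and hands it to RunWorkerCompleted in e.Error, so the silent death can at least be observed and logged:

```csharp
backgroundWorker.RunWorkerCompleted +=
    new RunWorkerCompletedEventHandler(backgroundWorker_RunWorkerCompleted);

void backgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    // Any exception thrown in DoWork surfaces here in e.Error
    // instead of crashing the process.
    if (e.Error != null)
    {
        MessageBox.Show("Background exception: " + e.Error.Message);
    }
}
```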

Hopping over to the WPF solution, the effect was exactly the same.  If I am debugging, I get this:

image

But if I am not, the thread ends silently.  And just like WinForms, that DispatcherUnhandledException doesn’t matter to the secondary thread.
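For those truly unhandled secondary-thread exceptions, the only catch-all I am aware of is AppDomain.CurrentDomain.UnhandledException.  It fires on any thread, but it only lets you log and clean up; there is no Handled flag to set.  A sketch, wired up next to the DispatcherUnhandledException subscription above:

```csharp
protected override void OnStartup(StartupEventArgs e)
{
    // Fires for unhandled exceptions on any thread, but the exception
    // cannot be marked handled here; the process still terminates
    // after this handler runs.
    AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
    {
        //log(args.ExceptionObject as Exception);
    };
    base.OnStartup(e);
}
```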