User:Nrp/Weekly updates/20080726

From OLPC

Latest revision as of 03:08, 25 July 2008

Saturday

  • Wrote a function to create a mask of the collisions between two masks.
  • Wrote wrappers for all of the code that was sitting unused in bitmask.c: fill, clear, draw, erase, and scale.
  • Fixed a potential flaw in invert and used the same code to make fill faster.
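The collision-mask idea can be sketched in plain Python by modeling a mask as a set of (x, y) coordinates (the real Mask type lives in C in bitmask.c; `collide_mask` and its `offset` parameter here are just an illustration of the concept):

```python
def collide_mask(a, b, offset=(0, 0)):
    """Return the set of points where mask a overlaps mask b.

    Masks are modeled as sets of (x, y) pixel coordinates; offset
    shifts mask b relative to mask a before intersecting.
    """
    dx, dy = offset
    shifted_b = {(x + dx, y + dy) for (x, y) in b}
    return a & shifted_b

# Two 2x2 blobs, the second shifted so they overlap in one pixel
a = {(0, 0), (1, 0), (0, 1), (1, 1)}
b = {(0, 0), (1, 0), (0, 1), (1, 1)}
print(sorted(collide_mask(a, b, offset=(1, 1))))  # [(1, 1)]
```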

Sunday

  • Changed set_controls to use keywords.
  • Changed get_controls to make way for use of non-boolean controls.
  • Looked at v4l2 spec in preparation for a function to enumerate available controls.
  • Started writing code to get the second moments of a mask, to find its angle.
  • Started working on how to separate v4l2 specific code and allow for other interfaces.
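The keyword-based control API described above can be sketched with a toy stand-in (the control names and defaults here are assumptions; the real code talks to v4l2 in C):

```python
class Camera:
    """Toy sketch of a keyword-style set_controls(); not the real
    camera code. Returning a dict from get_controls() leaves room
    for non-boolean controls such as an integer brightness."""
    def __init__(self):
        self._controls = {"hflip": False, "vflip": False, "brightness": 128}

    def set_controls(self, **controls):
        for name, value in controls.items():
            if name not in self._controls:
                raise ValueError("unknown control: %s" % name)
            self._controls[name] = value
        return dict(self._controls)

    def get_controls(self):
        return dict(self._controls)

cam = Camera()
print(cam.set_controls(hflip=True, brightness=200))
# {'hflip': True, 'vflip': False, 'brightness': 200}
```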

Monday

  • Wrote Mask.angle(), which calculates the zeroth through second moments of a mask to find the approximate orientation of an object with respect to the x axis.
  • Wrote Mask.count(), which uses the very fast bitcount() function in bitmask.c to get the total pixels set in the image.
  • Made Mask.centroid() stop returning the total pixels set, since count() exists. This is an API change that breaks some of the scripts I've written, but it's better to do it sooner rather than later.
  • Fixed a bug in Mask.get_bounding_rects() that made it calculate the width of the rect incorrectly.
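The moment computation behind Mask.angle() can be sketched in pure Python on a point set (a stand-in for the C mask; the standard formula is θ = ½·atan2(2µ₁₁, µ₂₀ − µ₀₂)):

```python
import math

def mask_angle(points):
    """Approximate orientation of a point set relative to the x axis,
    via the zeroth through second image moments. Returns degrees."""
    n = len(points)                        # zeroth moment: pixel count
    cx = sum(x for x, _ in points) / n     # first moments give the centroid
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)   # central second moments
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))

# A line of pixels along y = x should come out near 45 degrees
diag = [(i, i) for i in range(10)]
print(round(mask_angle(diag), 1))  # 45.0
```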

Tuesday

  • Made it possible to do an inverse threshold: rather than changing the color of the pixels outside the threshold, change it for those within it. This makes it possible, for example, to do a kind of "green screen" and have a person appear over a virtual background.
  • Had another attempt at threading. It turns out I misunderstood how Python does threading, and the blocking I/O call in get_image() makes threading slower. I'm going to try releasing the GIL on just the blocking call, though this may involve changing much of how the function is structured.
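The normal-versus-inverse threshold distinction can be sketched per pixel (illustrative only; `threshold` and its tolerance handling are a guess at the shape of the real C routine, which works on whole camera surfaces):

```python
def threshold(pixels, color, tolerance, inverse=False):
    """Boolean mask over a list of RGB pixels.

    With inverse=False, mark pixels *outside* the threshold; with
    inverse=True, mark those *within* it -- the "green screen" case,
    where matching pixels get replaced by a virtual background.
    """
    def within(p):
        return all(abs(pc - cc) <= tolerance for pc, cc in zip(p, color))
    return [within(p) if inverse else not within(p) for p in pixels]

green = (0, 255, 0)
frame = [(0, 250, 5), (200, 30, 40)]   # one green pixel, one not
print(threshold(frame, green, 20, inverse=True))  # [True, False]
```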

Wednesday

  • Started writing a function to trace the outline of a Mask and return it as points. This was the result of a very interesting discussion with Brian Jordan about combining physics and vision.
  • Released the GIL in get_image(), which allows threading to actually be useful.
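The effect of releasing the GIL around just the blocking call can be seen with a pure-Python stand-in: time.sleep() drops the GIL while it blocks, the same way a C call wrapped in Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS does, so several "captures" overlap instead of serializing (`fake_get_image` is hypothetical):

```python
import threading
import time

def fake_get_image():
    # Stand-in for a C capture call that releases the GIL while it
    # blocks on I/O (time.sleep() releases the GIL the same way).
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=fake_get_image) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# With the GIL released during the block, the four 0.2 s calls
# overlap, finishing well under the 0.8 s a serialized run would take.
print("elapsed: %.2f s" % elapsed)
```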

Thursday

  • Completed mask.outline().
  • Performance tested my recent code on the XO. Capturing and processing an image for use as input takes a little under 20 ms, so all but the most CPU-intensive activities should be able to run at 30 fps while using vision code. If the Activity doesn't depend on displaying live video from the camera, it's best to run the capturing and processing in a separate thread.
  • Wrote mask.from_threshold(), to avoid having to do a threshold followed by a mask.from_surface(). This saves around 8ms per frame on the XO.