GSoC 2011

Batisseur: The End?

Blog post by jrabbit on Thu, 2011-09-08 17:11

GSoC 2011 is over and I've had some time to cool off from last-minute stress. A few awesome tools for haikuporter will be coming soon, and I'm going to work on rounding them off. The builddrone project somewhat works but is not of very high quality; the queen needs love with respect to its database and/or data structures. I may revisit it later, but I'd love for someone with relevant experience to implement something better. Jenkins reporting and distributed uploading may make it into haikuporter, along with GPG signing (depending on GPG availability; last time I checked, gpg doesn't work on Haiku). There are a few interesting pieces of code I may split off into packages for others to easily depend on (Bitbucket and GitHub commit post parsing, anyone?).
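For context, "commit post parsing" means handling the JSON payloads that GitHub and Bitbucket POST to a hook URL after each push. A minimal sketch of what such a helper package could look like; the function name is mine and the exact payload fields are from memory of the 2011-era services, not Batisseur's actual code:

```python
import json


def parse_push_payload(payload_json, service):
    """Normalize a GitHub/Bitbucket push payload into (commit id, message) pairs.

    Illustrative only; the real hook payloads and Batisseur's own code
    may use different field names.
    """
    data = json.loads(payload_json)
    commits = []
    for commit in data.get("commits", []):
        if service == "github":
            commits.append((commit.get("id"), commit.get("message")))
        elif service == "bitbucket":
            # Bitbucket (circa 2011) used "node" for the short changeset id.
            commits.append((commit.get("node"), commit.get("message")))
    return commits
```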

This whole process was a real lesson in working on long-term projects, which will come in handy in a college program that focuses heavily on student-designed work. My Haiku-related work isn't over, and I'm still excited to make packaging fun and easy. One project in the back of my head is an interface layer for PyPI, Rubygems and CPAN packages, to be used by package managers so they don't duplicate work. Basically, that amounts to processing their packages into a portable format.
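The idea is still only a sketch, but roughly it would pair one adapter per upstream index with a single neutral record type that package managers consume. A hypothetical outline, with all names invented for illustration rather than taken from any existing API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class PortablePackage:
    """Index-neutral description of one upstream package."""
    name: str
    version: str
    source_url: str
    dependencies: List[str]


class IndexAdapter(ABC):
    """One adapter per upstream index (PyPI, Rubygems, CPAN)."""

    @abstractmethod
    def lookup(self, name: str) -> PortablePackage:
        """Fetch a package's metadata and convert it to the portable form."""


class PyPIAdapter(IndexAdapter):
    def lookup(self, name: str) -> PortablePackage:
        # A real adapter would query the index here; this stub only shows
        # the shape of the record a package manager would consume.
        raise NotImplementedError
```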

Camlistore and Go on Haiku will be very interesting to see. I may also be able to help with Google Code-in this fall; my schedule seems flexible enough to allow it. The Python BeAPI interface will be amazing to work with.

UVC Driver -- GSoC Midterm Report

Blog post by gabriel.hartmann on Sun, 2011-07-10 23:31

Since my last blog entry, a lot of progress has been made. I'm currently right on the cusp of producing images on screen that have been captured by my camera. Successful communication between the driver and the camera is happening in at least two different forms.

The first form of communication to be successfully implemented involves setting values within the camera that affect image capture. These are the familiar brightness, contrast, sharpness, etc. settings that most cameras support. Nearly all of the options available to my camera are now presented for manipulation by end users and successfully communicated to the camera. These values are maintained within the camera between power cycles, and this fact is communicated to the user via the available controls. The controls can be viewed and modified in the Media preferences application or the Cortex demo application.

The ParameterWeb control documentation describes a range of control styles within the continuous and discrete parameter varieties. However, it appears to me that the only discrete-value input method currently supported with an appropriate GUI is the binary on/off option. That is suitable for features like the auto setting for white balance, which can only be on or off. The power-line frequency setting, however, has three possible values and could not be represented with the appropriate discrete control, the B_RECSTATE type, which has three possible states. To simulate this capability, a continuous control was modified to allow only three values, selected by placing the slider within a +/-10 range of the desired value; the slider snaps to the available values to make this behaviour visible.

One future feature that might be desirable is controls with auto settings that indicate in real time, by their movement, what values the camera is using in its auto mode. Right now the sliders are simply frozen in their last position while auto mode is in effect. I had a brief discussion with my mentors about this feature, but it was deemed unnecessary at this stage, since a lot of work remains on actual image capture.
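A rough sketch of that snapping rule, written in Python purely for illustration (the driver itself is C++ against the media kit), with the concrete slider positions assumed for the example since the post doesn't give the actual scale:

```python
# Hypothetical slider positions for the three power-line frequency options
# (disabled / 50 Hz / 60 Hz); the real control's scale may differ.
ALLOWED_POSITIONS = [0, 50, 60]
SNAP_RANGE = 10


def snap(requested, current):
    """Map a continuous slider request onto one of the allowed positions.

    If the request lands within +/-10 of the nearest allowed position,
    snap to it; otherwise leave the control where it was.
    """
    nearest = min(ALLOWED_POSITIONS, key=lambda p: abs(p - requested))
    if abs(nearest - requested) <= SNAP_RANGE:
        return nearest
    return current
```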
