GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

February 04, 2012

GJS Improvements

Giovanni Campagna, Colin Walters and I have all been working hard to make GJS something of a competitor to PyGObject, as a full introspection stack for the GNOME Desktop Environment. Rather than give you a bunch of history, let me just give you a quick taste. There's much more to the landing than this, such as defining your own signals and properties and implementing interfaces, but it will take me a few days to come up with an exciting example that fully showcases the power you now have.

const Lang = imports.lang;

const Cogl = imports.gi.Cogl;
const Clutter = imports.gi.Clutter;
const Gio = imports.gi.Gio;

const MyClutterActor = new Lang.Class({
    Name: 'MyClutterActor',
    Extends: Clutter.Actor,

    // Return [minimum, natural] sizes; this actor always wants 100x100.
    vfunc_get_preferred_width: function(actor, forHeight) {
        return [100, 100];
    },

    vfunc_get_preferred_height: function(actor, forWidth) {
        return [100, 100];
    },

    // Paint the actor as a solid red rectangle covering its allocation.
    vfunc_paint: function(actor) {
        let alloc = this.get_allocation_box();
        Cogl.set_source_color4ub(255, 0, 0, 255);
        Cogl.rectangle(alloc.x1, alloc.y1, alloc.x2, alloc.y2);
    }
});

const MyClutterEffect = new Lang.Class({
    Name: 'MyClutterEffect',
    Extends: Clutter.DeformEffect,

    // Jitter each vertex of the deformed mesh by up to ±10 pixels.
    vfunc_deform_vertex: function(effect, width, height, vertex) {
        vertex.x += Math.random() * 20 - 10;
        vertex.y += Math.random() * 20 - 10;
    }
});

// Clutter must be initialized before any actors or stages are created.
Clutter.init(null);

let actor = new MyClutterActor();
let stage = new Clutter.Stage();
actor.animatev(Clutter.AnimationMode.EASE_IN_OUT_CUBIC, 2000, ['x', 'y'], [200, 200]);
actor.add_effect(new MyClutterEffect());
stage.add_actor(actor);
stage.show_all();

Clutter.main();

In Brussels!

I've arrived safely and soundly in Brussels for FOSDEM, despite a weird back injury (I didn't know you could get those from sneezing…). I've had a nice time talking to folks already, even if I've gotten a couple of rants about GNOME. There's been some really great positive discussion too.

I'm excited about the Legal Issues devroom I'm co-hosting with Tom, Bradley and Richard, and also about the CrossDesktop devroom. I'll also try to hang around the GNOME booth. Please come and say hi. See you there!

February 03, 2012

An unintended gem about usability


<UU> Somedays, I think why can't we have computers which just work.
<UU> But then I remember that I am a Computer Scientist.
<UU> So, yeah, I guess I understand why.
<Nirbheek> :D

Quite related to GNOME, really.

Zoom H1 firmware update 2.0 adds USB Digital Audio support in Linux

I’m so happy about this added functionality! I want to publicly thank Zoom for such a great free update.

The Zoom H1 makes super high quality recordings, and now also serves as a high quality digital audio mic while connected to a Linux computer.

Performing the Zoom H1 version 1.x to 2.0 upgrade in Linux

In short, this fails.

Something about writing the H1MAIN.bin to the FAT32 filesystem in Linux causes the very brittle upgrade process to fail. It will notice the file and begin the process, then end with “WRITE ERROR”. Thankfully it doesn't brick the device.

The solution is to:

  1. copy your recordings off the device
  2. format the card inside the device: hold the Trash button while turning it on, then confirm the format by pressing the Record button
  3. copy the H1MAIN.bin file to the root of the device’s filesystem using a Windows computer (download Zoom H1 System Software Version 2.0 and unpack)
  4. initiate the upgrade: turn on the device while holding the Play/Pause button, then confirm the upgrade by pressing the Record button (twice)

Once upgraded, the mic functionality is detected and works automatically in Ubuntu (and presumably other Linux distros), and shows up in PulseAudio as both an Input and an Output. This means you also now have two audio outputs.

It even works in the Luz Spectrum Analyzer. :) Enjoy!


Recent misc likes

Even though some of these tools have been around for years, I have only recently started using them.

* byobu - nicer than plain screen with good defaults; for example, the key binding for scrolling works like a regular terminal.
* sbuild - nicer than pbuilder: it defaults to an overlay directory instead of a tarball, hence fast by default, with nice colors and a build summary. I had heard about it for a long time, but the recent mention during Ubuntu devel week made me curious. It is friendlier now - no need for LVM snapshots. http://wiki.debian.org/mk-sbuild
* syncpackage - which now allows syncing from Debian if you have Ubuntu upload rights. No need to burden the archive team members anymore for every sync, or to go the roundabout way of downloading from Debian and then uploading manually without changes.
* Modern Debian packaging in the form of the 3.0 (quilt) source format and the new dh tools. The former allows a cleaner separation between the upstream and distro bits, while the latter makes the debian/rules file much shorter and cleaner than even CDBS, let alone the classic debhelper way.
* Twitter Bootstrap - mostly unrelated to packaging or command-line stuff, but very nice regardless. CSS+JavaScript UI elements that, for me at least, make jQueryUI superfluous, while being promoted as 'oh, just a CSS framework and style guide, not much else'.

Openismus at FOSDEM 2012

A few Openismus people will be at FOSDEM in Brussels this weekend. FOSDEM is always a great conference, but I can't be there myself, as my travel is generally limited by the need to take care of my kids.

Michael Hasselmann and Jon Nordby are both giving talks about the Maliit input method framework, as seen on the N9. We are eager to find customers who need our help to integrate and improve what is really the only serious choice for an open-source on-screen keyboard. So we hope that some people of influence take the opportunity to get to know the project and its excellent developers.

Jens Georg is also giving a talk about Rygel, used in the N9 to support UPnP and DLNA. For German speakers, video and slides are already online of a recent talk that Jens did about Rygel in Berlin for Deutsche Telekom's Developer Garden. I was amused to discover that DLNA had specified themselves into a situation where a minimum certified server and a minimum certified receiver were only able to share a small-resolution JPEG format. Apparently it's getting better, and Rygel can deal with it all.

Name that event! Poll for openSUSE Event in Florida

As promised, the poll to name the openSUSE event to be held in Orlando, FL September 21-23, 2012 is now online.  Please vote before February 11th!

Naming Poll

And as always, if you want to get involved in planning, please visit our planning page.

Thanks!

Bryen M Yunashko

FOSDEM 2012

In a few hours, I'll be flying to Brussels with Ivan, for a new edition of FOSDEM, undoubtedly the best Free Software conference in Europe.

I'm looking forward to hanging out with Debian, GNOME and #dudes people, as well as to exploring some other quiet and cool spots in the city with our hosts Raül and Vir.

I'll probably be around the CrossDistro and CrossDesktop rooms most of the time, but before that I'll be at the Delirium café not long after landing in Brussels.

For someone who doesn't enjoy cold weather that much, this is going to be a special edition… oh dear, -10℃, this is fucking crazy!

I'm going to FOSDEM 2012

What’s up?

So as usual I need an excuse for not blogging for so long. This time it's work, moving to Berlin and some other things.

Anjuta

While I haven't contributed that much code this cycle apart from minor bug-fixing, there have been a couple of nice contributions:

  • Sébastien Granjoux did amazing work to improve our project management, which is now much easier to use and more powerful
  • Marco Diego Aurélio Mesquita (what a name) improved the Glade integration by making it possible to automatically connect widgets and code
  • But I guess I should cover all this in a “What’s new in Anjuta 3.4” post pretty soon

Gdl

The often forgotten but still heavily used docking library… Inkscape forked the library into their repository and added some fixes that never magically got contributed back, and at some point (especially with the GTK+ 3.0 transition of gdl) it became very hard to merge between the projects. However, lately Alex Valavanis stepped up and ported most of the Inkscape patches back into gdl master, and hopefully Inkscape will be able to use stock gdl (or, probably better, gdlmm) really soon.

Gnucash

As I tried to organize all my banking stuff I made some contributions to the best Linux banking software, in the area of HBCI/FinTS, which is a German standard to securely initiate online transactions with your bank. I hope to find some time to actually implement SEPA (read: EU or international transactions using IBAN and BIC), at least originating from German accounts. But I have to think about how to compute (98 – (x mod 97)) for x larger than a 64-bit integer, and while I found some strategies on the web this was too much math for a late evening. Before you ask, this is part of the way an IBAN checksum is computed, and I need this checksum because, at least for Germany, the IBAN can be generated as a combination of account number and bank code.
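
(For the curious: one common strategy is to fold the decimal digits into a running remainder, so the intermediate value always stays small. A rough Python sketch of the idea, using a made-up example number:)

def mod97(digits):
    # Fold the decimal string left to right, keeping only the running
    # remainder; the intermediate value never exceeds 97 * 10 + 9.
    remainder = 0
    for d in digits:
        remainder = (remainder * 10 + int(d)) % 97
    return remainder

def check_digits(digits):
    # The 98 - (x mod 97) step used for the IBAN check digits.
    return 98 - mod97(digits)

# Works for numbers far larger than 64 bits:
x = "3214282912345698765432161182"
assert check_digits(x) == 98 - int(x) % 97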

Brno Hackfest

/me will be there saving the world or drinking beer, maybe both.

Laptop

After having been to a couple of hackfests with my much loved white netbook (read: “Oh, it's so cute…”) I thought it was time for a real (male ;) laptop. It doesn't seem very easy to get a reasonably priced laptop without a Windows license, or preferably with a preinstalled and working Linux. After some searching I ended up buying a ThinkPad Edge 320 from linux-laptop.de, which arrived pretty quickly (apart from some problems with the postal service). I ordered it preinstalled with Linux Mint after having only used Fedora for a while.

The installation was complete, but the fan was constantly running, which annoyed me; this can be fixed by installing the thinkfan utility, and now things are quiet again! I reported this back, as I kind of expect things like that to be installed when I order a laptop with an operating system.

2012-02-03: Friday

  • Up early, breakfast, onto Easy Hackery slides, nasty head cold catching me with a vengeance. Italo published a beautiful FOSDEM infographic which he has been building.

PackageKit/aptdaemon “what-provides” plugin support

PackageKit has a “WhatProvides” API for mapping distribution-independent concepts to particular package names. For example, you could ask “which packages provide a decoder for AC3 audio files?”

$ pkcon what-provides  "gstreamer0.10(decoder-audio/ac3)"
[...]
Installed   	gstreamer0.10-plugins-good-0.10.30.2-2ubuntu2.amd64	GStreamer plugins from the "good" set
Available  	gstreamer0.10-plugins-ugly-0.10.18-3ubuntu4.amd64	GStreamer plugins from the "ugly" set

This is the kind of question your video player would ask the system if it encounters a video it cannot play. In reality they of course use the D-Bus or the library API, but it's easier to demonstrate with the PackageKit command line client.

PackageKit provides a fair number of those concepts; I recently added LANGUAGE_SUPPORT for packages which provide dictionaries, spell checkers, and other language support for a given language or locale code.

However, PackageKit's apt backend does not actually implement a lot of these (only CODEC and MODALIAS), and aptdaemon's PackageKit compatibility API does not implement any. That might be because their upstreams do not know enough about how to do the mapping for a particular distro/backend, because doing so involves distro-specific code which should not go into upstreams, or simply because of the usual chicken-and-egg problem of app developers preferring to do their own thing instead of using generic APIs.

So this got discussed between Sebastian Heinlein and me, and voila, there it is: it is now very easy to provide Python plugins for “what-provides” to implement any of the existing types. For example, language-selector now ships a plugin which implements LANGUAGE_SUPPORT, so you can ask “which packages do I need for Chinese in China” (i. e. simplified Chinese)?

$ pkcon what-provides "locale(zh_CN)"
[...]
Available   	firefox-locale-zh-hans-10.0+build1-0ubuntu1.all	Simplified Chinese language pack for Firefox
Available   	ibus-sunpinyin-2.0.3-2.amd64            	sunpinyin engine for ibus
Available   	language-pack-gnome-zh-hans-1:12.04+20120130.all	GNOME translation updates for language Simplified Chinese
Available   	ttf-arphic-ukai-0.2.20080216.1-1.all    	"AR PL UKai" Chinese Unicode TrueType font collection Kaiti style
[...]

Rodrigo Moya is currently working on implementing the control-center region panel redesign in a branch. This uses exactly this feature.

In Ubuntu we usually do not use PackageKit itself, but aptdaemon and its PackageKit API compatibility shim python-aptdaemon.pkcompat. So I ported that plugin support for aptdaemon-pkcompat as well, so plugins work with either now. Ubuntu Precise got the new aptdaemon (0.43+bzr769-0ubuntu1) and language-selector (0.63) versions today, so you can start playing around with this now.

So how can you write your own plugins? This is a trivial, although rather nonsensical, example:

from packagekit import enums

def my_what_provides(apt_cache, provides_type, search):
    # Handle codec queries (and the catch-all "any" type)...
    if provides_type in (enums.PROVIDES_CODEC, enums.PROVIDES_ANY):
        return [apt_cache["gstreamer-moo"]]
    else:
        # ...and refuse everything else, so the backend knows this plugin
        # has nothing to say about other provides types.
        raise NotImplementedError('cannot handle type ' + str(provides_type))

The function gets an apt.Cache object, one of the enums.PROVIDES_* constants, and the actual search string as described in the documentation (the dummy example above does not actually use it). It then decides whether it can handle the request and returns a list of apt.package.Package objects (i.e. values in an apt.Cache map), or raises NotImplementedError otherwise.
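
To give a slightly less contrived idea, a plugin handling the new LANGUAGE_SUPPORT type might look roughly like this (an illustrative sketch only: I am assuming the constant is exposed as enums.PROVIDES_LANGUAGE_SUPPORT, and the real language-selector plugin is considerably smarter):

from packagekit import enums

def language_support_what_provides(apt_cache, provides_type, search):
    # Only handle "locale(xx_YY)" searches for the LANGUAGE_SUPPORT type
    # (or the catch-all ANY); refuse everything else.
    if provides_type not in (enums.PROVIDES_LANGUAGE_SUPPORT,
                             enums.PROVIDES_ANY):
        raise NotImplementedError('cannot handle type ' + str(provides_type))
    if not (search.startswith('locale(') and search.endswith(')')):
        raise NotImplementedError('cannot handle search ' + search)
    locale_code = search[len('locale('):-1]    # e.g. "zh_CN"
    lang = locale_code.split('_')[0].lower()   # e.g. "zh"
    # Naive heuristic: return the language packs that mention the language
    # code.  (The real language-selector plugin is of course much smarter.)
    return [pkg for pkg in apt_cache
            if pkg.name.startswith('language-pack-') and lang in pkg.name]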

You register the plugin through a Python pkg_resources entry point in your setup.py (this needs setuptools):

   setup(
       [....]

       entry_points="""[packagekit.apt.plugins]
what_provides=my_plugin_module_name:my_what_provides
""",
       [...])

You can register arbitrarily many plugins; they will all be called and their resulting package lists joined.
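
Under the hood this is just setuptools entry points; conceptually, the backend does something along these lines to collect the results (a simplified sketch, not the actual aptdaemon/PackageKit code):

import pkg_resources

def collect_what_provides(apt_cache, provides_type, search):
    # Ask every registered plugin; plugins that raise NotImplementedError
    # simply do not contribute to the result.
    packages = []
    for entry in pkg_resources.iter_entry_points('packagekit.apt.plugins',
                                                 'what_provides'):
        plugin = entry.load()
        try:
            packages.extend(plugin(apt_cache, provides_type, search))
        except NotImplementedError:
            continue
    return packages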

All this will hopefully help a bit to push distro specifics to the lowest possible levels, and to use upstream-friendly and distribution-agnostic APIs in your applications.

The license term smorgasbord: copyleft, share-alike, reciprocal, viral, or hereditary?

I microblogged (diaspora, identica, twitter) the following statement a few weeks ago:

First new year’s resolution, 10 days late: I will use ‘hereditary license’ any time I am tempted to say ‘viral license.’

Surprisingly, this generated quite a few responses (on identica and elsewhere)- some people liked it, but many people had their own alternative to propose. So here are some longer-form thoughts.

There are four primary options that I am aware of when trying to find a one-word term for open source licenses that in some way compel distributors to also distribute code- i.e., the licenses called “copyleft” by those of us who have spent too much time with this stuff. The terms:

  • Copyleft: This is the common name when speaking to other people experienced in open source, so it’s obviously the first choice when you know your audience has at least some experience in open source. But to an audience not already involved in open source (the only time I’m ever even vaguely “tempted to say viral”), the phrase is completely non-obvious. It has zero evident meaning. In fact, it can actively confuse: it could mean the reverse of copyright, which to most people probably means “no license” or anti-copyright altogether. So it’s really not a good word to use for audiences who aren’t familiar with open source- which is to say, most audiences.
  • Viral: This is another old standby. Traditionally, the objection to this term has been that it is pejorative: no one likes viruses, so ‘viral’ is often seen as (and sometimes is) a deliberate attempt to frame copyleft licenses as inherently bad. That objection is certainly accurate, but I think there is another problem with this word: it implies that, like a virus, copyleft can spread to someone without their active involvement; you can “catch” it from the digital equivalent of being in the same room with someone, or not washing your hands. This isn’t the case – there must be a strong relationship between the copylefted code and the other code that might be required to be shared. This, to me, is where “viral” really fails to communicate. It makes people think that a copyleft is something that might “happen to them” instead of it being something that they have to be actively involved with.
  • Share-alike (or the related word “reciprocal”): Oddly, neither of these is used much outside of the Creative Commons world. Neither of these are bad terms: they are reasonably value-neutral, and they both imply that there must be an actively chosen relationship between the parties, which I think is important. But to me they don’t capture the why of the relationship; they make it sound like there might be a choice in the matter, or like something you do because you’re a nice guy.
  • Hereditary: Richard Fontana traced this back to at least 2004, so it isn’t new, but without doubt this is the least used of the various terms I’m discussing here. At least for the moment, though, and for general audiences, I’m leaning towards thinking it is the best option. First, it implies that there has to be a real, derivative relationship between the two codebases; it can’t just happen at random (like viral implies). Second, it also implies that once the relationship exists, further licensing isn’t a choice- you must pass it on to the next folks who “inherit” from you. (Share-alike/reciprocal might imply that you do this because you’re a nice guy, which isn’t the case, or that you do it back to the original sharer- which also isn’t necessarily the case.) The analogy obviously isn’t perfect, most notably because a mere redistributor who hasn’t created a derivative work that “inherits” from the parent work is still bound by the license. But I think on balance that it has the fewest tradeoffs.

So there you go, for the dozen people who asked, and the hundreds, nay billions more who don’t care :)

Well hello OpenShift

I haven't blogged lately. For some reason, since the popularization of Facebook posts and tweets, my ability to write more than a few coherent sentences has greatly diminished. Perhaps it is just me getting old, but change is what change does, and a lot of change has happened recently. The biggest recent change is me getting a promotion to Senior Software Engineer and moving from the Fedora team to the OpenShift team inside of Red Hat. Yes, I have traded Beefy Miracles for Space Pandas, and I think the change has done me some good. I have wanted to transition to a more customer-driven, structured part of Red Hat without sacrificing the excitement of working with a fast moving project. OpenShift fit the bill very nicely with its agile development workflow in the emerging field of PaaS (Platform as a Service) cloud development. It is also nice having a large and growing team to work with.

My involvement with PyGObject

That being said, most of my hacking time will be spent on OpenShift related projects, and while I had already transitioned out of day-to-day PyGObject maintainership some time ago, I will no longer have any real time to dedicate to the project (I'm actually learning Ruby right now). To tell the truth, not being able to put any more serious time into the project is one of the major reasons I decided I needed a change. There are a number of other people still contributing to the project, but it is sorely in need of a lead maintainer who can do releases, keep people on schedule and ping the right people when bugs languish. I feel PyGObject is in good shape, but as it begins to get more uptake, bug fixes need to be committed, edge cases corralled and the last mile traversed. I will still hang out in #python on GIMPNet and can be persuaded to look at patches or even write a few if you ping me and are nice.

Jobs

With me leaving the Fedora team there is now an opening for someone to join the team. They are looking for an all-around FOSS rock star who can work in a number of different areas such as packaging, desktop and web development, and any number of miscellaneous skills you would encounter in any FOSS project. The main responsibilities would be maintaining, improving and integrating our infrastructure tools such as the Fedora Community Packages web app, the Bodhi update tool and the FAS accounts system, as well as developing tools to make it easier to contribute to Fedora. Most of the tools are written in Python, so being a Python expert is a big plus, as is having worked as part of a team on any major open source project. If that sounds like fun to you, send me your resume (I get a bonus if you get hired).

OpenShift is also expanding so if any of these jobs look more like your speed feel free to mail me also.


February 02, 2012

Moving Along for Florida’s openSUSE Event

As some of you may have noticed, SUSE will have their SUSECon event in Orlando, Florida this coming September.  And because SUSE recognizes the importance of the openSUSE Community, they have graciously granted space for us to have our own conference of sorts immediately after SUSECon.

So, yesterday we had our kickoff meeting to start planning our event.   We have yet to have an official name for it but we will be launching a poll tomorrow on our Connect site to see which name most people like.  Stay tuned for the announcement to that poll link!  And if you have a suggestion for a name, mention it now so I can be sure to include it in the poll.

I posted a detailed summary of what we discussed during the kickoff meeting.  What I really liked about yesterday’s meeting was the obvious interest and energy of the community in attendance.  The community has waited a long time to see such an event in the Western Hemisphere and now those voices are finally heard.  This does not however replace our main openSUSE Conference which is held every year in Europe.  The Florida event is more of a regional somewhat smaller-scale event that aims to bring together the Community in the North America region, though obviously it is open to everyone from around the globe.

It is an event that will be defined and driven by our community.  And I’ll be sure to blog whenever I can to keep you updated on the evolution of this event.

So what’s next?

This is not a community event if the community isn’t involved.  And that means we now need volunteers to step up and join the various committees.   Small or big, the tasks are varied and there’s a good chance there’s something interesting enough for you to roll up your sleeves and get involved.  Check out our planning wiki page and add your name to the volunteer section.

We’re hoping to get a formal announcement of the event (including its name and website) by early March.  But don’t let that hold you back from marking your calendar to visit Orlando, Florida September 21-23!

VLC Illegal Under ACTA?

VLC  uses libdvdcss and ignores region codes, a copyright mechanism designed for the sole purpose of fucking consumers and the primary reason why I will never ever purchase DVDs.

From the final version of ACTA:

5. Each Party shall provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors, performers or producers of phonograms in connection with the exercise of their rights in, and that restrict acts in respect of, their works, performances, and phonograms, which are not authorized by the authors, the performers or the producers of phonograms concerned or permitted by law.

Considering that region codes are an arbitrary software restriction, surely they aren’t considered an effective technological measure. Or are they?

Without prejudice to the scope of copyright or related rights contained in a Party’s law, technological measures shall be deemed effective where the use of  protected works, performances, or phonograms is controlled by authors, performers or producers of phonograms through the application of a relevant access control or protection process, such as encryption or scrambling, or a copy control mechanism, which achieves the objective of protection.

Do you notice something funny about that definition? It removes the criterion that effective technological measures actually be effective. Bypassing region codes might be legal, depending on how broadly you interpret the above, but playing encrypted DVDs on Linux is out of the question.

Stop ACTA. More here.


If you are reading this on Planet GNOME, follow my blog directly. This is my last week on PGO.

2012-02-02: Thursday

  • Up earlyish, attempted to catch the train to Cambridge, drove instead, slideware on the train to Kings Cross. More happy hacking on the Eurostar.
  • Off to meet up with some Mozilla hackers at a beautiful co-working space. Caught up with JP, Julian Seward, Taras Gleck & met a host of others. Out for dinner.

Gotta get git

Two more big weeks at GNOME, and Marina has reminded me that my internship is already halfway over.  This made me sad, until I remembered I don’t have to leave… ever!

These weeks were full:

  • Allan Day, Sri and I had a successful meeting using Google hangout, where we discussed the redesign of news.gnome.org, a restructuring of the news process, and tried on reindeer disguises. We couldn’t help but be impressed with the hangout, which worked perfectly with our 3 different operating systems. Since then, Sri has worked tirelessly to create a test space for me to start building the site. I attempted to load the gnome-grass theme to my own hosted space as a fall-back, but encountered php errors when activating the theme. I’m working on solving that, but it would probably be better to have space on existing GNOME servers. THANK YOU to Sri and Andrea Veri for all their work.
  • I started on the task of writing an article about the GNOME “Thank You Pants Award” for the Annual Report. To that end, past pants winners, Matthias Clasen and Gil Forcada have obliged me by answering several interview questions regarding this creative award given annually for service to the GNOME community. I hope to write an amusing entry for the Report by next week.
  • On Joanie Diggs’ suggestion, I have started working on updating the GNOME deployments page on the wiki. It is sorely out of date, and does not present a good or current portrait of GNOME in the real world. The very good news is that, with Marina’s help,  I have already tracked down some unmentioned projects of note, and am hopeful of finding more. Our marketing efforts depend on us proving the viability of GNOME by being able to cite real-world usability. I’m happy to contribute to that end.
  • During an OPW intern meeting, we discussed a fun new design project to improve the women’s intern application process and new intern introduction process by creating a series of comics outlining these areas for contributors. I plan to help with this once it gets going.
  • I’m happy to report that on Tuesday, I got my git privileges instated, thanks to Andrea Veri and Vinicius Depizzol, creator of the Gnome grass wordpress theme. I hope this will make me more effective helping out with the web development task load, which is hefty right now. Little tasks like updating the a11y progress banner on gnome.org can be done so quickly when you can git push!
  • Last, but not least, I’m going to start creating some of the hackergotchis that some of you have been waiting for! Look for more floating heads coming your way!!

So, I’m busy… and it’s all good.

P.S. If any of you are going to the developer conference in Brno, I’d sure love to connect with you about doing a short report from the event for GNOME news-


Aventuras en Málaga y Ronda

Last week was a blast. Spending a couple of days working on Pitivi fulltime and meeting with the awesome GStreamer folks again was a thrilling experience. Not only that, but it happened in the beautiful city of Málaga.

Coming from Montréal, I’m still a bit shocked at the sight of people wearing coats, scarves and tuques in broad daylight at +15°C.

Hell, the weather in Malaga was so consistently mild that, even though we were in the middle of January, in the morning I went outside straight from the shower with wet hair, which dried in minutes!

While we’re talking about showers again… Malagans, unlike Bostonians, got the usability of their shower handles/lever/thingie right. However, they completely fail at road signage and urban planning. I thought I’d never see the day when I’d find something worse than the province of Québec, but it seems we have a new winner here:

  • Street names written in minuscule font sizes on buildings (when they are present at all)? Check.
  • Road signs inside the ramps/exits (when it’s too late)? Check. (Québec has that too)
  • One-ways everywhere? Check.
  • Inability to get back on a highway if you take the wrong exit? Check.
  • Roundabouts where you have to yield to people outside the roundabout? Check.
  • Roundabouts with streetlights every 30 degrees? Check.
  • Tiny road signs inside the roundabout instead of on exits? Check.

I found this all quite amusing. Except when Antigoni and I had to get back to the airport and mistakenly ended up on an exit ramp on avenida de Andalucía.

Okay, back to the hackfest.

Notwithstanding the work we did on planes or in busy airports and the many discussions we had around tapas, we spent three days doing solid coding, debugging and ass-kickin' on Pitivi and GES. I'm very happy that, in the process, Antigoni learned some new tricks and knowledge to make her more comfortable with hacking on the Pitivi codebase. We got our share of laughs too. Like the fact that I ate up three hours of Edward's time trying to investigate why importing clips crunched my hard drive for many seconds… and then realizing that the culprit was not my code nor the gst discoverer, but GTK+ itself.

More precisely:

  • Thibault spent the whole time hacking on GES and answering our questions.
  • Antigoni went concrete/practical by attempting to fix undo/redo for effects, getting more familiar with GES in the process. I’m happy to have been able to answer some of her questions and being able to point out pitfalls in the code: at times, it even seemed like I knew what I was talking about, which is always a great thing!
  • Edward spent nearly the whole time grunting and swearing in French, except when the sound was muffled by his palms:

As for me, I:

  • Fixed image thumbnailing/permissions on the wiki and deleted 1846 spam image files scattered in 1539 folders.
  • Reimplemented Pitivi's clip import process using the asynchronous gst discoverer, which means that not only does the import progress bar work again, but it is blazing fast and doesn't block the UI.
  • Cleaned up code, standardized variables and fixed various bugs (such as seeking to the end of the timeline, or making the viewer check the pipeline position only when actually playing… your CPU will thank me)
  • Implemented the ability to save/export the current frame as an image file. Hey, the code was just sitting there in GES, waiting to be used!
  • Got convinced by Thibault to try implementing transitions and timeline video thumbnails myself. We’ll see how it goes.

We stayed one or two days after the end of the hackfest. Thus, on Saturday, pretty much the only day where weather sucked, the superhacker trio went on a touristic ride to Ronda, in awe at some of the alpine beauties of Andalucía:

More pics here.

This week made me realize/feel something even more strongly than before: since Thibault’s massive cleanup, hacking on Pitivi with GES is easy. No more files/modules confusion. No more “massive core” getting in your way. The code can still benefit from some cleanup/simplifications to make it feel more “pythonic” (patches welcome), but it already feels incredibly more agile and elegant. It now feels enjoyable rather than a maintenance ordeal. More like poetry, less like a thesis.

If you were hesitating to contribute to Pitivi, now is an exciting time to take a fresh look at it. We need help and there’s a lot of low hanging fruit that can be fixed. We’ll be happy to help you get started.

Survey about the usability of Virtaal

A student at the University of Mainz in Germany, Almana Mukabenova, is doing some research on the usability of the translation tool Virtaal. She prepared a questionnaire for the purposes of her master's thesis:

https://www.surveymonkey.com/s/NV9YFJR

I'm very interested to see what comes out of this, and want to encourage all users of Virtaal (past and present, serious and casual) to take part if they can.

Gstreamer HackFest 2012 in Malaga

Last week, I attended the GStreamer HackFest in Malaga and it turned out to be a very good decision to be there! First of all, I got to meet my two mentors for the OPW (Jeff and Thibault) and Edward, who first started PiTiVi, and to work closely with them for 3 days. Also, I got to meet a lot of people working on porting several applications to GStreamer 0.11. It was a great HackFest and I thank all the people that made it happen.

Working closely with the other PiTiVi team members offered me the opportunity to work in a more efficient way (by clarifying many things that seemed to confuse me) and to get a feel for the team spirit (which is a greeeat source of motivation). Meeting all these people helped me demystify exactly what the challenges and advantages of porting PiTiVi to GES are. The challenge is that the development of GES is happening almost simultaneously with the porting, so some features/functions may not be implemented yet; the advantage is that as the porting advances, the code of PiTiVi will be cleaner and smarter.

As far as the undo/redo functionality is concerned, the MediaLibrary and Timeline parts are now working fine, and I am on the Effect part, which should finally be OK very soon (hopefully today...).

Open Mobile Linux, this Saturday in FOSDEM

As mentioned in the earlier call for presentations, we're running a track on Open Mobile Linux at FOSDEM this Saturday, in room AW1.120 at the ULB campus in Brussels. From the CfP:

Our primary goal is to facilitate meetups, collaboration and awareness between different projects and communities within Open Mobile Linux and provide a place to present directions, ideas and your projects themselves.

By Open Mobile Linux we mean any open source projects revolving around typical non-desktop/server Linux, such as handsets, tablets, netbooks or other creative uses. Examples of such projects could be Qt5, Mer, MeeGo, Android, webOS, Plasma Active, Tizen, Boot to Gecko, SHR and other related efforts.

There are several exciting things happening in this space, including the recently announced Spark tablet, open sourcing of webOS's Enyo framework and continuing interest in the Maemo platform. Saturday's program includes:

If there are any last-minute announcements or happenings that people want to discuss, we may be able to squeeze in a talk or two. Contact Carsten about this.

Also, if you want to chat about other things (like PHPCR or CreateJS), I'll be around the whole weekend including the beer event. Drop me an SMS.

Looking forward to seeing as many of you there as possible!

February 01, 2012

Some Basic Thoughts on GPL Enforcement

I've had the interesting pleasure over the last 36 hours of watching people debate something that's been a major part of my life's work for the last thirteen years. I'm admittedly proud of myself for entirely resisting the urge to dive into the comment threads, and I don't think it would be all that useful to do so. Mostly, I believe my work stands on its own, and people can make their judgments and disagree if they like (as a few have) or speak out about how they support it (as even more did — at least by my confirmation-biased count, anyway :).

I was concerned, however, that some of the classic misconceptions about GPL enforcement were coming up yet again. I generally feel that I give so many talks (including releasing one as an oggcast) that everyone must by now know the detailed reasons why GPL enforcement is done the way it is, and how a plan for non-profit GPL enforcement is executed.

But, the recent discussion threads show otherwise. So, over on Conservancy's blog, I've written a basic, first-principles summary of my GPL enforcement philosophy and I've also posted a few comments on the BusyBox mailing list thread, too.

I may have more to say about this later, but that's it for now, I think.

Localization. Which tools to use

Hi all! :)

At the last OPW interns meeting I was asked which application I use for my localization activity, and why. A few weeks ago I experienced Virtaal (the application I use) hanging, so I tried installing some other applications to see how they would work for my workflow.

Below you can find a brief overview of some open-source applications available for localization.

poEdit


poEdit is a very simple gettext editor with a minimal set of features.

Pros:

  • Automatic translation is possible with a database of previous translations, with a special menu command.

Cons:

  • Not able to plug in your own glossary of terms in gettext format.
  • Translation memory can be applied only in batch mode with a special menu command, without any way to choose between several possible options from the memory.
  • You can't add an alternate language for comparison. This is a very useful feature for those translators who are fluent in several languages.


All in all, I wouldn’t use this application unless under pressure. :)


lokalize


This is an official KDE application for localization. Its history goes back to KBabel (the KDE 3.x localization tool).

Pros:

  • Translation memory supported (though setting it up is not user-friendly and required some documentation reading).

Cons:

  • Not able to plug in your own glossary of terms in gettext format (same as for poEdit). Our localization team depends much on a terminology file which is used to keep consistency between different translations. See https://raw.github.com/booxter/i18n-bel-gnome/master/terminology.po for example. Though you can create a custom glossary in the application, it uses its own internal file format and doesn’t play nice with standard Gettext terminology files.
  • Constant hangs while working with it (at least on my Ubuntu 11.04, Lokalize 1.2, KDE 4.6.5).

Though this application seems more advanced (at least it has some panes, tabs, etc.), it still lacks most of the advanced features which are really needed for efficient translation work.


gtranslator


gTranslator is an (un)official GNOME localization tool. Its development slowed down in recent years, with a 2.0 release remaining only a plan for a very long time, though lately there have been some signs of the project's revival in the git tree.

Pros:

  • Translation memory: you can plug in all the files previously translated.
  • You can add an alternate language for comparison.
  • A Dictionary plugin which allows searching through general-purpose network dictionaries (dict.org).
  • A source code view which shows messages for translation in the source code. Sometimes this feature helps a translator to get what the developers meant. Of course, it would be better if all ambiguous messages were provided with appropriate translator comments, but life is more complicated... :)
Cons:

  • Still, no way to plug in your own glossary of terms. :(

In contrast to the previous apps mentioned, this one is more suitable for daily translation work. More than that, it would actually be the best application for my personal needs if only it got a good terminology plugin.


virtaal


Virtaal is a fairly new localization tool. It seems that the main focus in its development was GUI simplicity, though it still has quite a lot of advanced features for those who need them. Its interface doesn't have multiple panes or tabs or buttons, but instead it has lots of keyboard shortcuts for efficient use. It also has some plugins to try out.

Pros:

  • This is the only program out there which actually supports plugging in custom glossaries of terms in Gettext format (which is quite a pity, indeed). Terms are highlighted in the original message to translate, and you can insert their translation from your glossary using convenient keyboard shortcuts. Multiple translation options are also supported; in that case it shows the user a drop-down list of available options.
  • Translation memory is fully supported, with the possibility to choose between multiple options, if available.
  • Translation statistics are provided not only for messages, but for words too. A similar feature was added lately to Damned Lies (the GNOME localization site) and helps a lot when planning translation tasks, because message lengths may vary greatly (from one to hundreds of words).
  • Very convenient system of shortcuts.

Cons:

  • You can't add an alternate language for comparison. Instead, users are forced to open a separate instance of Virtaal with the alternate translation.
  • It hangs sometimes (possibly due to the slow nature of Python, the language it's written in).

As you can see, each application has its drawbacks. Though in my personal opinion, only Virtaal and gTranslator can be seriously considered for active usage.

Ideally, either gTranslator would get a good glossary plugin, or Virtaal would add alternate language support. In the latter case, it may require some GUI rearrangements and may not be so easy to do without breaking the UI's simplicity.

It may also be worth considering whether multiple alternate languages could be supported by localization tools. Some translators (like me) are fluent in multiple languages, and sometimes looking at how other teams translated a message or a term can help greatly.

I hope this blog post will stimulate the gTranslator and Virtaal developers to consider adding the missing features mentioned above. :)

Please, ow, please...

Google is killing Free Software

Now that I got your attention with the catchy title, lemme rephrase that in a somewhat longer sentence:

Google is the greatest danger to the Free Software movement at the current time.

I’m not sure I should presume intent because of Hanlon’s razor, but a lot of smart people concerned about Free Software work at Google, so they should at least be aware of it.

The first problem I have with Google is that they are actively working on making the world of Free Software a worse place. The best example for this is Native Client. It’s essentially a tool that allows building web pages without submitting anything even resembling source code to the client. And thereby it’s killing the “View Source” option. (You could easily build such a tool as Free Software if instead of transmitting the binary, you’d transmit the source code. Compare HTML with Flash here.)

The second problem I have with Google is that what they do actively confuses people about Free Software. Google likes to portray itself as an Open Source company and has managed to accumulate a lot of geek cred that way (compared to Apple, Amazon or Microsoft). But unfortunately, a lot of people get introduced to Open Source and Free Software via Google. And withholding source code for a while, or shipping source code to something merely similar, is close enough to confuse unsuspecting bystanders. And I fear that this behavior is doing more harm than good to Free Software.

All that said, I don’t think Google is stupid, illegal or even illogical in what they do. Everything they do makes sense. All I’m saying is that they’re scumbags and you should be aware of that.

How to foster productive online conversations: Mozilla Conductors

When I first started using email, I had a part time job in the computer science department at Rice University. A new grad student joined the department and a few days after he started, I noticed it was his birthday. Knowing he was unlikely to know many people in town, I sent him an email that said, “HAPPY BIRTHDAY” all spelled out in big letters made out of asterisks. He wrote me back “Thanks a lot”. Now in my world, “Thanks a lot” was always said “Thanks a *lot*” with a slightly sarcastic twist to it. I emailed him right back to ask him if he really liked it or if he was being sarcastic. He said no, of course, he really liked it.

So if one happy birthday email can be that confusing, imagine what can go wrong with a complicated email about project directions and motivations … Especially when it’s going out to a mailing list that has hundreds of people on it. That’s what most of us in the open source space deal with every day. Some of us do it better than others. Some days we do it better than others. But we all work at it every day. It’s the way we communicate with our friends, peers and co-workers.

Please meet the Mozilla Conductors

A few months ago, several of us at Mozilla had a conversation about how we could best help people learn how to communicate well online. We have new people joining the project all the time and they have to learn how to communicate on mailing lists, IRC and bugzilla. Those of us helping them realize daily what a challenge it can be. As much as we don’t think about it, cc’ing the right people, quoting previous mail messages and keeping the conversation from getting argumentative are not easy things.

We were looking for a way to help everyone communicate better, exploring all sorts of crazy options like classes and consultants, and realized we had the best resources right inside our project. We have people that are really good at fostering online conversations. They’ve been doing it for years; quietly (and not so quietly) leading and directing the conversations and projects they are part of.

So we sent out a bunch of emails, came to a consensus and created the Mozilla Conductors!

Mozilla Conductors help Mozillians with difficult online conversations. We offer advice, suggestions, a listening ear, moral support and, in the case where the discussion is public, occasionally direct intervention. But the goal is to help everyone communicate effectively, not to be enforcers.  If you end up in a difficult online situation, you can reach out to Conductors via the mailing list or to any individual in the group to ask for help. Maybe you just need a sounding board or  help figuring out how to phrase a particular idea or how to make someone particularly difficult go away. The Conductors will help brainstorm, ask questions,  provide ideas and help. And where we are on mailing lists, we commit to helping  keep the conversations constructive.

We are not an officially appointed group. We are a peer nominated group. We are a group of people from across the Mozilla project. We are a Mozilla Module.

Please help us make online conversations productive and Mozilla a success!

Related posts:

  1. 10 skills to master to get things done online
  2. Best way to conquer difficult conversations: just do it!
  3. Does open source exclude high context cultures?

Procmail vs. Launchpad

Launchpad users know that it can send quite a bit of e-mail. Okay, a LOT of e-mail. There has been effort on the Launchpad side of things to add controls to set the amount of Launchpad e-mail you get. But for some of us, even getting the mail that you need results in a fair amount of Launchpad mail. In playing with my procmail config for Launchpad mail I stumbled on this little feature that I love, and thought I'd share, as it's cleaned up my mail a lot. The secret rule is:

:0:
* ^X-launchpad-bug:.*product=\/[a-z0-9-]+
.Launchpad.Bugs.${MATCH}/

Quite simply, that rule takes the project specifier on a bug mail and uses it as the folder name that the mail gets filed into. This means each project gets its own mailbox, no matter what. So even as you add groups or subscribe to new bugs, you just get new mailboxes. Works for me. Hope it helps other folks too.
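
For anyone who doesn't speak procmail fluently: the \/ operator puts everything the regexp matches after it into $MATCH, so the rule is roughly equivalent to this little Python sketch (the header value here is just a made-up example):

import re

header = 'X-Launchpad-Bug: product=launchpad; status=New'    # made-up example value
m = re.search(r'product=([a-z0-9-]+)', header)
if m:
    folder = '.Launchpad.Bugs.%s/' % m.group(1)               # ".Launchpad.Bugs.launchpad/"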


eval, that spectral hound

Friends, I am not a free man. Eval has been my companion of late, a hellhound on my hack-trail. I give you two instances.

the howl of the-environment, across the ages

As legend has it, in the olden days, Aubrey Jaffer, the duke of SCM, introduced low-level FEXPR-like macros into his Scheme implementation. These allowed users to capture the lexical environment:

(define the-environment
  (procedure->syntax
   (lambda (exp env)
     env)))

Tom Lord inherited this cursed bequest from Jaffer, when he established himself in the nearby earldom of Guile. It so affected him that he added local-eval to Guile, allowing the user to evaluate an expression within a captured local environment:

(define env (let ((x 10)) (the-environment)))
(local-eval 'x env)
=> 10
(local-eval '(set! x 42) env)
(local-eval 'x env)
=> 42

Since then, the tenants of the earldom of Guile have been haunted by this strange leakage of the state of the interpreter into the semantics of Guile.

When the Guile co-maintainer title devolved upon me, I had a plan to vanquish the hound: to compile Guile into fast bytecode. There would be no inefficient association-lists of bindings at run-time. Indeed, there would be no "environment object" to capture. I succeeded, and with Guile 2.0, local-eval, procedure->syntax and the-environment were no more.

But no. As Guile releases started to make it into distributions, and users started to update their code, there arose such a howling on the mailing lists as set my hair on end. The ghost of local-eval was calling: it would not be laid to rest.

I resisted fate, for as long as I could do so in good conscience. In the end, Guile hacker Mark Weaver led an expedition to the mailing list moor, and came back with a plan.

Mark's plan was to have the syntax expander recognize the-environment, and residualize a form that would capture the identities of all lexical bindings. Like this:

(let ((x 10)) (the-environment))
=>
(let ((x 10))
  (make-lexical-environment
   ;; Procedure to wrap captured environment around
   ;; an expression
   wrapper
   ;; Captured variables: only "x" in this case
   (list (capture x))))

I'm taking it a little slow because hey, this is some tricky macrology. Let's look at (capture x) first. How do you capture a variable? In Scheme, with a closure. Like this:

;; Capture a variable with a closure.
;;
(define-syntax-rule (capture var)
  (case-lambda
    ;; When called with no arguments, return the value
    ;; of VAR.
    (() var)
    ;; When called with one argument, set the VAR to the
    ;; new value.
    ((new-val) (set! var new-val))))

The trickier part is reinstating the environment, so that x in a local-eval'd expression results in the invocation of a closure. Identifier syntax to the rescue:

;; The wrapper from above: a procedure that wraps
;; an expression in a lexical environment containing x.
;;
(lambda (exp)
  #`(lambda (x*) ; x* is a fresh temporary var
      (let-syntax ((x (identifier-syntax
                        (_ (x*))
                        ((set! _ val) (x* val)))))
        #,exp)))

By now it's clear what local-eval does: it wraps an expression, using the wrapper procedure from the environment object, evaluates that expression, then calls the resulting procedure with the case-lambda closures that captured the lexical variable.

So it's a bit intricate and nasty in some places, but hey, it finally tames the ghostly hound with modern Scheme. We were able to build local-eval on top of Guile's procedural macros, once a couple of accessors were added to our expander to return the set of bound identifiers visible in an expression, and to query whether those bindings were regular lexicals, or macros, or pattern variables, or whatever.

"watson, your service revolver, please."

As that Guile discussion was winding down, I started to hear the howls from an unexpected quarter: JavaScript. You might have heard, perhaps, that JavaScript eval is crazy. Well, it is. But ES5 strict was meant to kill off its most egregious aspect, in which eval can introduce new local variables to a function.

Now I've been slowly hacking on implementing block-scoped let and const in JavaScriptCore, so that we can consider switching gnome-shell over to use JSC. Beyond standard ES5 supported in JSC, existing gnome-shell code uses let, const, destructuring binding, and modules, all of which are bound to be standardized in the upcoming ES6. So, off to the hack.

My initial approach was to produce a correct implementation, and then make it fast. But the JSC maintainers, inspired by the idea that "let is the new var", wanted to ensure that let was fast from the beginning, so that it doesn't get a bad name with developers. OK, fair enough!

Beyond that, though, it looks like TC39 folk are eager to get let and const into all parts of JavaScript, not just strict mode. Do you hear the hound? It rides again! Now we have to figure out how block scope interacts with non-strict eval. Awooooo!

Thankfully, there seems to be a developing consensus that eval("let x = 20") will not introduce a new block-scoped lexical. So, down boy. The hound is at bay, for now.

life with dogs

I'm making my peace with eval. Certainly in JavaScript it's quite a burden for an implementor, but the current ES6 drafts don't look like they're making the problem worse. And in Scheme, I'm very happy to provide the primitives needed so that local-eval can be implemented in terms of our existing machinery, without needing symbol tables at runtime. But if you are making a new language, as you value your life, don't go walking on the local-eval moors at night!

Bufferbloat demonstration videos

If people have heard of bufferbloat at all, it is usually just an abstraction, despite their having personal experience with it. Bufferbloat can occur in your operating system, your home router, your broadband gear, wireless, and almost anywhere in the Internet. They still think that if they experience poor Internet speed they must need more bandwidth, and they take vast speed variation for granted. Sometimes, adding bandwidth can actually hurt rather than help. Most people have no idea what they can do about bufferbloat.

So I’ve been working to put together several demos to help make bufferbloat concrete, and demonstrate at least partial mitigation. The mitigation shown may or may not work in your home router, and you need to be able to set both upload and download bandwidth.

Two  of four cases we commonly all suffer from at home are:

  1. Broadband bufferbloat (upstream)
  2. Home router bufferbloat (downstream)

Rather than attempt to show worst-case bufferbloat, which can easily induce complete failure, I decided to demonstrate these two cases of “typical” bufferbloat as shown by the ICSI data. Since the bufferbloat varies widely, as the ICSI data shows, your mileage will also vary widely.

There are two versions of the video:

  1. A short bufferbloat video, of slightly over 8 minutes, which includes both demonstrations but elides most of the explanation. Its intent is to get people “hooked” so they will want to know more.
  2. The longer version of the video clocks in at 21 minutes; it includes both demonstrations and gives a simplified explanation of bufferbloat's cause, to encourage people to dig yet further.

Since bufferbloat only affects the bottleneck link(s), and broadband and WiFi bandwidth are often similar and variable, it's very hard to predict where you will have trouble. If you understand that the bloat grows just before the slowest link in a path (including in your operating system!), you may be able to improve the situation. You have to take action where the queues grow. You may be able to artificially move the bottleneck from a link that is bloated to one that is not. The first demo moves the bottleneck from the broadband equipment to the home router, for example.

To reduce bufferbloat in the home (until the operating systems and home routers are fixed), your best bet is to ensure your actual wireless bandwidth is always greater than your broadband bandwidth (e.g., by using 802.11n and possibly multiple access points) and to use bandwidth shaping in the router to “hide” the broadband bufferbloat. You'll still see problems inside your house, but at least, if you also use the mitigation demonstrated in the demo, you can avoid problems accessing external web sites.

The most adventurous of you may come help out on the CeroWrt project, an experimental OpenWrt router where we are working on both mitigating and eventually fixing bufferbloat in home routers. Networking skills and the ability to reflash routers required!



PackageKit in Cheese

As I already mentioned in one of my last posts, the sharing functionality in Cheese uses nautilus-sendto, which is a runtime dependency, and we wanted to do our best to let the user/developer/maintainer know about it:


  2. By writing a comment in the README file.

  3. By using PackageKit to install the package at runtime: if the package is not installed and the user tries to use the sharing functionality, PackageKit will be asked to install it on the spot.

The adventure
The PackageKit bit has been quite challenging for me, and I have to admit that it took me much longer than expected :(! I have learned a lot of new things though!

The first problem I had was that I spent a few days working with dbus-glib, which I later discovered, through Dave, was deprecated. The source of the problem, and the biggest issue for me, was that both of the examples I followed, in the PackageKit FAQ and the Vala tutorial, were obsolete and used dbus-glib. I have already talked with Richard Hughes about the first example and marked the second one as obsolete in the Vala tutorial wiki.

After this discovery, I started using the C API and the org.freedesktop.PackageKit.Transaction interface, which offers the InstallPackages method. I realised, of course, that all the examples I looked at used the org.freedesktop.PackageKit.Modify interface instead, but after my previous experience, and since I did not see that interface documented anywhere in the API, I just went on working with the 'Transaction' interface. It also took me a long time to get it 'almost' working, but it never worked completely. Another failure!

While I was in the middle of this frustrating process, Dave found out that the 'Modify' interface was actually not obsolete, and pointed me to a very interesting link that took him some time to find and that explained why all the examples we saw were using the 'Modify' interface. We both wondered why this was undocumented and so hidden, and agreed that it should be somewhere more visible.

After changing the code again to use D-Bus and the 'Modify' interface instead, I talked to Richard Hughes about all these issues. He was really nice, and explained that the reason this was not documented in the API is that the API only covers the 'system interface', since the 'session interface' isn't really part of the core PackageKit project but a reference implementation that both KDE and GNOME provide. He also pointed me to another very useful link and agreed on the need for a better entry in the PackageKit FAQ, to provide users with all this information and documentation, something I will be helping with.

Gained knowledge
I can summarise what I have learned about PackageKit in a few paragraphs, which I hope will be useful for you in the future.

There are two different levels of API in PackageKit:

  - The system API: this is the 'Transaction' interface, which is lower level but lets the developer control everything, including things like GPG keys and EULAs. Everything is explained in the PackageKit reference manual.

  - The session API: these are the 'Modify' and 'Query' interfaces, which let the developer install a package without caring about the low-level details. This is the one used in Cheese. The documentation in this case is not as visible, as I mentioned before, but you can find what you need in the git.gnome.org repositories under the /src and /docs directories of gnome-packagekit.

Dave wrote a nice generic synchronous example of PackageKit usage in Vala, using the 'Modify' interface:

[DBus (name = "org.freedesktop.PackageKit.Modify")]
interface Packagekit : Object {
    public abstract void install_package_names (uint xid, string[] packages,
                                                string interaction) throws IOError;
}

int main () {
    Packagekit pk;
    try {
        pk = Bus.get_proxy_sync (BusType.SESSION, "org.freedesktop.PackageKit",
                                 "/org/freedesktop/PackageKit");

        string[] packages = { "nautilus-sendto" };
        var interaction = "hide-confirm-search,hide-finished,hide-warning";
        pk.install_package_names (0, packages, interaction);
    } catch (IOError error) {
        critical ("D-Bus error: %s", error.message);
        return 1;
    }
    return 0;
}

This is basically all you need to know if you want to install a package at runtime from your application using PackageKit and do not want to bother with the details. I have used the asynchronous version of it for Cheese.
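For the curious, here is roughly what the equivalent asynchronous call looks like from Python through GDBus. This is just a sketch I put together from the Vala example above, not the actual Cheese code (which is C); the bus name, object path, interface and method signature are the ones used in that example.

from gi.repository import Gio, GLib

loop = GLib.MainLoop()

def on_install_finished(proxy, result):
    try:
        proxy.call_finish(result)
        print("nautilus-sendto installed (or already present)")
    except GLib.Error as error:
        print("D-Bus error: %s" % error.message)
    loop.quit()

# Session interface, as in the Vala example above.
proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SESSION,
    Gio.DBusProxyFlags.NONE,
    None,
    "org.freedesktop.PackageKit",
    "/org/freedesktop/PackageKit",
    "org.freedesktop.PackageKit.Modify",
    None)

proxy.call(
    "InstallPackageNames",
    GLib.Variant("(uass)", (0, ["nautilus-sendto"],
                            "hide-confirm-search,hide-finished,hide-warning")),
    Gio.DBusCallFlags.NONE,
    GLib.MAXINT,   # installing a package can take a while, so avoid the default timeout
    None,
    on_install_finished)

loop.run()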

PackageKit in Cheese.

This has been a long post, but as you can see, it was quite an adventure to get here! I will write soon with more news!

01.02.2012 bfsync: the journey from SQLite to Berkeley DB

My software for keeping a collection of files synchronized on many computers, bfsync, is perfect in the current stable release if the number of files in the collection is small (at most a few hundred thousand files). But since I’ve always wanted to use it for backups, too, this is not enough. I blogged about my scalability issues before, and the recommendation was: use Berkeley DB instead of SQLite for the database.

And so I started porting my code to Berkeley DB. I can now report that it’s quite a journey to take, but it seems that it really solved my problem.

* Berkeley DB is not an SQL database: I first tried the Berkeley DB SQL frontend, but this was much slower than using the “native” Berkeley DB methods, so I got rid of all SQL code; this especially turned out to be a challenge since the Python part of my application used to use the SQLite python binding, but now I had to write my own binding for accessing the Berkeley DB functions.

* Berkeley DB does not have a “table data format”: all it can do is store key=value pairs, where key and value need to be passed to the database as binary data blocks. So I wrote my own code for reading/writing data types from/to binary data blocks, which I then used as keys and values.

* Database scheme optimization: While I ported the code to Berkeley DB, I changed a few things in the database scheme. For instance, the old SQLite-based code would store the information about one file’s metadata by the file’s ID. The ID was randomly generated. So if you made a backup and 100 files had been added to /usr/local, the file metadata would be stored “somewhere” in the database, that is, the database would be accessed at 100 random positions. As long as the database is small, that’s not a problem. But if the database is larger than the available cache memory, this causes seeks, and therefore is slow. The new (Berkeley DB) database scheme generates a prefix for each file ID based on its path. For our example this means that all 100 files added to /usr/local will share the same path prefix, which in turn means that the new data will be stored next to each other in the database file. This results in much better performance.
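To make the “no table data format” and path-prefix points a bit more concrete, here is a toy sketch of the idea (my own illustration, not bfsync code; the field layout and prefix length are invented): keys and values are packed into binary blocks, and the key starts with a prefix derived from the file’s directory, so entries for the same directory sort next to each other in a BTree-ordered key/value store.

import hashlib
import struct

def make_key(path, file_id):
    """Binary key: a 4-byte prefix derived from the directory, then the file ID.

    The database compares keys bytewise, so everything that shares a directory
    shares a prefix and ends up stored close together.
    """
    directory = path.rsplit("/", 1)[0]
    prefix = hashlib.sha1(directory.encode()).digest()[:4]
    return prefix + file_id

def make_value(size, mtime, mode):
    """Binary value: a fixed-size record instead of an SQL row."""
    return struct.pack(">QQI", size, mtime, mode)

# Files added under the same directory produce keys with a common prefix:
for name in ("a.conf", "b.conf", "c.conf"):
    file_id = hashlib.md5(name.encode()).digest()      # stand-in for a random ID
    print(make_key("/usr/local/etc/" + name, file_id).hex())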

I’ve designed a test which shows how much better the new code is. The tests adds 100000 files to the repository, and commits. It repeats this over and over again. You’ll see that with the old SQLite code, the time it takes for one round of the test to complete grows pretty quickly. With the Berkeley DB version you can see that more and more files can be added, without any difference in performance. Adding 100000 files takes an almost constant amount of time, regardless if the repository already contains zero or 20 million files.


It will still take some time before the Berkeley DB version of bfsync is stable enough to make a release. The code is available in the bdb-port branch of the repository, but some things remain to be done before it can be used by end users.

How the ECDSA algorithm works

By popular demand, I have decided to try and explain how the ECDSA algorithm works. I’ve been struggling a bit to understand it properly and while I found a lot of documentation about it, I haven’t really found any “ECDSA for newbies” anywhere. So I thought it would be good to explain in simple terms how it works so others can learn from my research. I have found some websites that explain the basic principles but nowhere near enough to actually understand it, others that explain things without any basics, making them incomprehensible, and others that go way too deep into the mathematics behind it.

ECDSA stands for “Elliptic Curve Digital Signature Algorithm”; it’s used to create a digital signature of data (a file for example) in order to allow you to verify its authenticity without compromising its security. Think of it like a real signature: you can recognize someone’s signature, but you can’t forge it without others knowing. The ECDSA algorithm is basically all about mathematics.. so I think it’s important to start by saying: “hey kids, don’t slack off at school, listen to your teachers, that stuff might be useful for you some day!” :) But these maths are fairly complicated, so while I’ll try to popularize it and make it understandable for non-technical people, you will still probably need some knowledge of mathematics to understand it properly. I will do this in two parts, one that is a sort of high-level explanation of how it works, and another where I dig deeper into its inner workings to complete your understanding. Note however that I’ve just recently learned this stuff, so I’m definitely not an expert on the matter.

So the principle is simple: you have a mathematical equation which draws a curve on a graph, and you choose a random point on that curve and consider that your point of origin. Then you generate a random number; this is your private key. You do some magical mathematical equation using that random number and that “point of origin” and you get a second point on the curve: that’s your public key. When you want to sign a file, you will use this private key (the random number) with a hash of the file (a unique number to represent the file) in a magical equation and that will give you your signature. The signature itself is divided into two parts, called R and S. In order to verify that the signature is correct, you only need the public key (that point on the curve that was generated using the private key) and you put that into another magical equation with one part of the signature (S), and if it was signed correctly using the private key, it will give you the other part of the signature (R). So to make it short, a signature consists of two numbers, R and S; you use a private key to generate R and S, and if a mathematical equation using the public key and S gives you R, then the signature is valid. There is no way to know the private key or to create a signature using only the public key.

Alright, now for the more in depth understanding, I suggest you take an aspirin right now as this might hurt! :P

Let’s start with the basics (which may be boring for people who know about it, but is mandatory for those who don’t) : ECDSA uses only integer mathematics, there are no floating points (this means possible values are 1, 2, 3, etc.. but not 1.5..),  also, the range of the numbers is bound by how many bits are used in the signature (more bits means higher numbers, means more security as it becomes harder to ‘guess’ the critical numbers used in the equation), as you should know, computers use ‘bits’ to represent data, a bit is a ‘digit’ in binary notation (0 and 1) and 8 bits represent one byte. Every time you add one bit, the maximum number that can be represented doubles, with 4 bits you can represent values 0 to 15 (for a total of 16 possible values), with 5 bits, you can represent 32 values, with 6 bits, you can represent 64 values, etc.. one byte (8 bits) can represent 256 values, and 32 bits can represent 4294967296 values (4 Giga).. Usually ECDSA will use 160 bits total, so that makes… well, a very huge number with 49 digits in it…

ECDSA is used with a SHA1 cryptographic hash of the message to sign (the file). A hash is simply another mathematical equation that you apply on every byte of data which will give you a number that is unique to your data. Like for example, the sum of the values of all bytes may be considered a very dumb hash function. So if anything changes in the message (the file) then the hash will be completely different. In the case of the SHA1 hash algorithm, it will always be 20 bytes (160 bits). It’s very useful to validate that a file has not been modified or corrupted, you get the 20 bytes hash for a file of any size, and you can easily recalculate that hash to make sure it matches. What ECDSA signs is actually that hash, so if the data changes, the hash changes, and the signature isn’t valid anymore.

Now, how does it work? Well Elliptic Curve cryptography is based on an equation of the form :

y^2 = (x^3 + a * x + b) mod p

First thing you notice is that there is a modulo and that the ‘y‘ is a square. This means that for any x coordinate, you will have two values of y and that the curve is symmetric on the X axis. The modulo is a prime number and makes sure that all the values are within our range of 160 bits and it allows the use of “modular square root” and “modular multiplicative inverse” mathematics which make calculating stuff easier (I think). Since we have a modulo (p), it means that the possible values of y^2 are between 0 and p-1, which gives us p total possible values. However, since we are dealing with integers, only a smaller subset of those values will be a “perfect square” (the square of an integer), which gives us N possible points on the curve where N < p (N being the number of perfect squares between 0 and p). Since each x will yield two points (positive and negative values of the square root of y^2), this means that there are N/2 possible ‘x‘ coordinates that are valid and that give a point on the curve. So this elliptic curve has a finite number of points on it, and it’s all because of the integer calculations and the modulus.

Another thing you need to know about elliptic curves is the notion of “point addition“. It is defined so that adding one point P to another point Q leads to a point S such that if you draw a line from P to Q, it will intersect the curve on a third point R which is the negative value of S (remember that the curve is symmetric on the X axis). In this case, we define R = -S to represent the symmetrical point of R on the X axis. This is easier to illustrate with an image: picture a curve of the form y^2 = x^3 + ax + b (where a = -4 and b = 0), which is symmetric on the X axis, and where P+Q is the symmetrical point through X of the point R which is the third intersection of a line going from P to Q.

In the same manner, if you do P + P, it will be the symmetrical point of R which is the intersection of the line that is a tangent to the point P. And P + P + P is the addition of the resulting point of P+P with the point P, since P + P + P can be written as (P+P) + P. This defines the “point multiplication” where k*P is the addition of the point P to itself k times; here are two examples showing this:

Here, you can see two elliptic curves, and a point P from which you draw the tangent; it intersects the curve at a third point, and its symmetric point is 2P. Then from there, you draw a line from 2P to P and it will intersect the curve, and the symmetrical point is 3P, etc. You can keep doing that for the point multiplication. You can also already guess why you need to take the symmetric point of R when doing the addition; otherwise, multiple additions of the same point would always give the same line and the same three intersections.

One particularity of this point multiplication is that if you have a point R = k*P, where you know R and you know P, there is no way to find out what the value of ‘k‘ is. Since there is no point subtraction or point division, you cannot just resolve k = R/P. Also, since you could be doing millions of  point additions, you will just end up on another point on the curve, and you’d have no way of knowing “how” you got there. You can’t reverse this operation, and you can’t find the value ‘k‘ which was multiplied with your point P to give you the resulting point R.

This thing where you can’t find the multiplicand even when you know the original and destination points is the whole basis of the security behind the ECDSA algorithm, and the principle is called a “trap door function“.
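If you want to play with these ideas, here is a tiny Python sketch of point addition and point multiplication on a toy curve, y^2 = x^3 + 2x + 2 mod 17, a classic textbook example (real curves use primes hundreds of bits long, so this is for learning only):

# Toy elliptic curve arithmetic: y^2 = x^3 + a*x + b (mod p).
# The point at infinity is represented by None.
p, a, b = 17, 2, 2
G = (5, 1)                    # a generator; its order on this curve is 19

def inverse(x, m):
    return pow(x, -1, m)      # modular multiplicative inverse (Python 3.8+)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inverse(2 * y1, p) % p   # slope of the tangent
    else:
        lam = (y2 - y1) * inverse(x2 - x1, p) % p          # slope of the chord
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p                        # mirror the third intersection
    return (x3, y3)

def scalar_mult(k, P):
    """k*P by repeated doubling and adding."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

for k in range(1, 6):
    print(k, "* G =", scalar_mult(k, G))

Computing k*G forward is fast even for huge k, but there is no known efficient way to go backwards and find k given only G and k*G; the best known attacks still take time exponential in the size of the curve.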

Now that we’ve handled the “basics”, let’s talk about the actual ECDSA signature algorithm. For ECDSA, you first need to know your curve parameters: those are a, b, p, N and G. You already know that ‘a‘ and ‘b‘ are the parameters of the curve function (y^2 = x^3 + ax + b), that ‘p‘ is the prime modulus, and that ‘N‘ is the number of points of the curve, but there is also ‘G‘ that is needed for ECDSA, and it represents a ‘reference point’ or a point of origin if you prefer. Those curve parameters are important and without knowing them, you obviously can’t sign or verify a signature. Yes, verifying a signature isn’t just about knowing the public key; you also need to know the curve parameters from which this public key is derived.

So first of all, you will have a private and a public key.. the private key is a random number (of 20 bytes) that is generated, and the public key is a point on the curve generated from the point multiplication of G with the private key. We set ‘dA‘ as the private key (random number) and ‘Qa‘ as the public key (a point), so we have : Qa = dA * G (where G is the point of reference in the curve parameters).

So how do you sign a file/message? First, you need to know that the signature is 40 bytes and is represented by two values of 20 bytes each; the first one is called R and the second one is called S, so the pair (R, S) together is your ECDSA signature. Now here’s how you can create those two values in order to sign a file: first you must generate a random value ‘k‘ (of 20 bytes), and use point multiplication to calculate the point P=k*G. That point’s x value will represent ‘R‘. Since the point on the curve P is represented by its (x, y) coordinates (each being 20 bytes long), you only need the ‘x‘ value (20 bytes) for the signature, and that value will be called ‘R‘. Now all you need is the ‘S‘ value.

To calculate S, you must make a SHA1 hash of the message, this gives you a 20 bytes value that you will consider as a very huge integer number and we’ll call it ‘z‘. Now you can calculate S using the equation :

S = k^-1 (z + dA * R) mod N

Note here the k^-1, which is the ‘modular multiplicative inverse‘ of k… it’s basically the inverse of k, but since we are dealing with integer numbers, a plain inverse is not possible, so it’s a number such that (k^-1 * k) mod N is equal to 1 (the signature math is reduced modulo N, the number of points, rather than modulo p). And again, I remind you that k is the random number used to generate R, z is the hash of the message to sign, dA is the private key and R is the x coordinate of k*G (where G is the point of origin of the curve parameters).

Now that you have your signature, you want to verify it; it’s also quite simple, and you only need the public key (and curve parameters, of course) to do that. You use this equation to calculate a point P:

P = S^-1 * z * G + S^-1 * R * Qa

If the x coordinate of the point P is equal to R, that means that the signature is valid, otherwise it’s not.

Pretty simple, huh? now let’s see why and how… and this is going to require some mathematics to verify :

We have :

P = S^-1*z*G + S^-1 * R *Qa

but Qa = dA*G, so:

P = S^-1*z*G + S^-1 * R * dA*G = S^-1 (z + dA* R) * G

But the x coordinate of P must match R and R is the x coordinate of k * G, which means that :

k*G = S^-1 (z + dA * R) *G

we can simplify by removing G which gives us :

k = S^-1(z + dA * R)

by inverting k and S, we get :

S = k^-1 (z + dA *R)

and that is the equation used to generate the signature.. so it matches, and that is the reason why you can verify the signature with it.
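Here is the whole key generation / signing / verification flow as a small Python sketch on the same toy curve as before (the point arithmetic is repeated so the sketch stands on its own; n is the order of G, playing the role of N above, and a small z stands in for the SHA1 hash):

import random

# Same toy curve as the earlier sketch; n is the order of G.
p, a, b, n = 17, 2, 2, 19
G = (5, 1)

def inv(x, m):
    return pow(x, -1, m)                     # modular inverse (Python 3.8+)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv(2 * y1, p) if P == Q
           else (y2 - y1) * inv(x2 - x1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = None
    while k:
        if k & 1: R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(z, dA):
    while True:
        k = random.randrange(1, n)           # must be fresh and unpredictable every time
        R = scalar_mult(k, G)[0] % n
        S = inv(k, n) * (z + dA * R) % n
        if R != 0 and S != 0:
            return R, S

def verify(z, R, S, Qa):
    u1 = inv(S, n) * z % n                   # P = S^-1*z*G + S^-1*R*Qa
    u2 = inv(S, n) * R % n
    P = point_add(scalar_mult(u1, G), scalar_mult(u2, Qa))
    return P is not None and P[0] % n == R

dA = random.randrange(1, n)                  # private key
Qa = scalar_mult(dA, G)                      # public key
z = 13                                       # stand-in for the hash of the message
R, S = sign(z, dA)
print("signature:", (R, S), "valid:", verify(z, R, S, Qa))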

You can note that you need both ‘k‘ (the random number) and ‘dA‘ (the private key) in order to calculate S, but you only need R and Qa (the public key) to validate the signature. And since R = k*G and Qa = dA*G, and because of the trap door function in the ECDSA point multiplication (explained above), we cannot calculate dA or k from knowing Qa and R. This makes the ECDSA algorithm secure: there is no way of finding the private key, and there is no way of faking a signature without knowing the private key.

The ECDSA algorithm is used everywhere and has not been cracked and it is a vital part of most of today’s security.

Now I’ll discuss how and why the ECDSA signatures that Sony used in the PS3 were faulty and how that allowed us to gain access to their private key.

So you remember the equations needed to generate a signature: R = k*G and S = k^-1(z + dA*R) mod N. Well, this equation’s strength is in the fact that you have one equation with two unknowns (k and dA) so there is no way to determine either one of those. However, the security of the algorithm is based on its implementation and it’s important to make sure that ‘k‘ is randomly generated and that there is no way that someone can guess, calculate, or use a timing attack or any other type of attack in order to find the random value ‘k‘. But Sony made a huge mistake in their implementation: they used the same value for ‘k‘ everywhere, which means that if you have two signatures, both with the same k, then they will both have the same R value, and it means that you can calculate k using the S values of two files with hashes z and z’ and signatures S and S’ respectively:

S – S’ = k^-1 (z + dA*R) – k^-1 (z’ + dA*R) = k^-1 (z + dA*R – z’ – dA*R) = k^-1 (z – z’)

So : k = (z – z’) / (S – S’)

Once you know k, the equation for S becomes one equation with one unknown and is then easily solved for dA:

dA = (S*k – z) / R

Once you know the private key dA, you can now sign your files and the PS3 will recognize it as an authentic file signed by Sony. This is why it’s important to make sure that the random number used for generating the signature is actually “cryptographically random”. This is also the reason why it is impossible to have a custom firmware above 3.56: since the 3.56 version, Sony have fixed their ECDSA implementation and used new keys for which it is impossible to find the private key. If there were a way to find that key, then the security of every computer, website and system relying on ECDSA could be compromised, since a lot of systems depend on it for their security, and it cannot be cracked.
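To see how little work the attack takes once two signatures share the same k, here is the recovery spelled out in a few lines of Python (toy numbers on the same n = 19 as the earlier sketches; the divisions above become multiplications by modular inverses, and no curve arithmetic is needed for the attack itself):

# Recovering k and dA from two signatures that reused the same k.
n = 19

def inv(x):
    return pow(x, -1, n)            # modular inverse (Python 3.8+)

# Secrets the attacker does NOT know, used here only to fabricate two signatures:
dA, k, R = 7, 5, 11                 # R would really be (k*G).x; any shared value works for the algebra
z1, z2 = 13, 4                      # hashes of two different signed files
S1 = inv(k) * (z1 + dA * R) % n
S2 = inv(k) * (z2 + dA * R) % n

# What the attacker does, knowing only (R, S1, z1) and (R, S2, z2):
k_found  = (z1 - z2) * inv(S1 - S2) % n       # k = (z - z') / (S - S')
dA_found = (S1 * k_found - z1) * inv(R) % n   # dA = (S*k - z) / R
print("recovered k =", k_found, "and dA =", dA_found)   # prints 5 and 7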

Finally! I hope this makes the whole algorithm clearer to many of you.. I know that this is still very complicated and hard to understand. I usually try to make things easy to understand for non technical people, but this algorithm is too complex to be able to explain in any simpler terms. After all that’s why I prefer to call it the MFET algorithm (Mathematics For Extra Terrestrials) :)

But if you are a developer or a mathematician or someone interested in learning about this because you want to help or simply gain knowledge, then I’m sure that this contains enough information for you to get started, or to at least understand the concept behind this unknown beast called “ECDSA”.

That being said, I’d like to thank a few people who helped me understand all of this, one particularly who wishes to remain anonymous, as well as the many wikipedia pages I linked to throughout this article, and Avi Kak, whose paper explaining the mathematics behind ECDSA is where I took the graph images above.

P.s: In this article, I used ’20 bytes’ in my text to talk about the ECDSA signature because that’s what is usually used as it matches the SHA1 hash size of 20 bytes and that’s what the PS3 security uses, but the algorithm itself can be used with any size of numbers. There may be other inaccuracies in this article, but like I said, I’m not an expert, I just barely learned all of this in the past week.

Back to FOSDEM

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

So it seems I’m going to FOSDEM this year (yay!), together with a bunch of other Igalians who will be attending as well, coming from different places across the globe (well, mainly from Europe this time).

I know some people will probably disagree with me on this, but for me FOSDEM is one of the greatest events of this kind, and so I’m quite happy to go there this time, especially after not being able to attend last year due to some unexpected (and unavoidable) personal matters.

Unlike on other occasions, this time I’ll be there not only as an attendee but also as a speaker, talking about WebKitGTK+, its status and the roadmap of the project towards WebKit2 (the split-process-model “flavour” of WebKit), together with my mate Philippe, on Sunday afternoon. Thus, for the first time ever, nobody will be able to accuse me of going there just because of the beer event, which wouldn’t be true anyway.

For the impatient ones, the talk will be mainly about reporting on the work done during the last months in “WebKitGTK+ land“, as well as on the stuff that is already planned for the upcoming releases. Good examples of those would be, for instance, the ongoing effort to add support for Accelerated Compositing, or just the new features related to WebKit2GTK+ such as, of course, the solution for enabling accessibility support there. Ah! And of course, we’ll try to run some demos there too… fingers crossed!

Besides, I’m of course looking forward to meeting some people I haven’t seen for a while now (I didn’t attend the latest Desktop Summit either, for very good reasons), so if you see me around and want to chat and/or meet for a while, just let me know. I might look shy, but it’s usually a matter of minutes (seconds?) for my shyness to go away…

So that’s it. Just a final line to say “thanks” to my company for fully sponsoring this thing.

See you in Brussels!

GNOME as a platform

In the previous post, I discussed platforms and their relationship to “projects” and “products”. While I was writing it, I had in mind an old blog post from Havoc. It took me a while to find it…can’t believe it’s been 6 years. Anyways, you should go and read that post before continuing. Here’s the link again.

What I’d like to argue – and most of you probably agree – is that GNOME shouldn’t explicitly take the “building block” or “platform” approach. There are multiple reasons for this, but the strongest one I think is that if we focus just on making a Free Software desktop that doesn’t suck, by side effect we will produce a platform. And in fact – that’s exactly what has happened. Think NetworkManager for example. Getting a network experience (particularly with wireless) that was remotely competitive with Windows XP required us to invent a new networking system.

If we just said “we’re a bucket of parts”, and not the ones actually out in front trying to make a networking user interface, basically there would be no obvious driver for a networking API (besides toys/tests), so it wouldn’t be tested, and in practice it wouldn’t really work. Or at least, there would be some immense lag between some third party engineer telling us problems with the API and getting them fixed.

Will third parties take the code and do things with it? Of course. And that’s allowed by the fact that GNOME is Free Software, and we want to “support” that for some values of “support”.

One thing bears mentioning – of course GNOME should be a platform for application authors. That’s in fact an important part of our place in the ecosystem. But as far as being a collection of parts versus something more, here’s the way I think of it: if you can walk up to a computer and say “Oh that’s running GNOME”, i.e. we have a coherent design and visual identity, then we’re succeeding.

GNOME is not unique in being an “end-user” focused Free Software project debating the platform versus project/product issue. See also the Mozilla platform versus Firefox. The role and relationship of those two has been a subject of (sometimes very contentious) debate in that community. And that’s fine – debating the line is good. As long as you keep producing something that doesn’t suck while debating =)


Avahi on Fedora

I have been frustrated by Avahi apparently not working properly on Fedora. It turned out I just had to disable the firewall… One more thing Fedora could polish…

January 31, 2012

How do I set up my Vala IDE in Vim

Today I want to share how I set up Vim for Vala coding. It is not a fully-loaded IDE but it works very well for me. Hopefully it will help you too. First I would like to show a screenshot of a Vim session while I’m hacking on GNOME Games: Gnomine.

It shows basically three awesome Vim plugins (Tagbar, Fuzzyfinder and Nerdtree) and the terminal multiplexer, Tmux. For those who are not familiar with the plugins: Tagbar shows the class variables, functions, etc. for easy jumping. Nerdtree is a file explorer. Fuzzyfinder is a nifty buffer switcher. Note that at the bottom of the screenshot, Tmux has three windows, named ‘src’, ‘build’ and ‘bash’, and Vim is running in the ‘src’ window now. Such a setting lets you quickly switch back and forth between the ‘src’ window for editing and the ‘build’ window for building, or the ‘bash’ window for something else.

I guess most Vim users already know all of this. So the real story I’m going to tell is how to make Tagbar work with Vala code. Tagbar is based on ctags; however, the official ctags doesn’t support Vala. Luckily I found that anjuta-tags is a ctags clone with Vala support. Check on the command line whether you have anjuta-tags available; it should be installed along with Anjuta. I then replaced the default ctags with anjuta-tags by copying anjuta-tags to a location earlier in PATH than ctags and renaming it to ctags. I guess another way is to add

let g:tagbar_ctags_bin = "anjuta-tags"

in your .vimrc file.

That is NOT enough yet. You can generate Vala tags with anjuta-tags but Tagbar still shows nothing. Now you need to edit $VIM/autoload/tagbar.vim by adding the following lines.

" Vala {{{3
 let type_vala = {}
 let type_vala.ctagstype = 'vala'
 let type_vala.kinds = [
 \ {'short' : 'c', 'long' : 'classes', 'fold' : 0},
 \ {'short' : 'd', 'long' : 'delegates', 'fold' : 0},
 \ {'short' : 'e', 'long' : 'enumerations', 'fold' : 0},
 \ {'short' : 'E', 'long' : 'error domains', 'fold' : 0},
 \ {'short' : 'f', 'long' : 'fields', 'fold' : 0},
 \ {'short' : 'i', 'long' : 'interfaces', 'fold' : 0},
 \ {'short' : 'm', 'long' : 'methods', 'fold' : 0},
 \ {'short' : 'p', 'long' : 'properties', 'fold' : 0},
 \ {'short' : 'r', 'long' : 'error codes', 'fold' : 0},
 \ {'short' : 's', 'long' : 'structures', 'fold' : 0},
 \ {'short' : 'S', 'long' : 'signals', 'fold' : 0},
 \ {'short' : 'v', 'long' : 'enumeration values', 'fold' : 0}
 \ ]
 let type_vala.sro = '.'
 let type_vala.kind2scope = {
 \ 'i' : 'interface',
 \ 'c' : 'class',
 \ 's' : 'structure',
 \ 'e' : 'enum'
 \ }
 let type_vala.scope2kind = {
 \ 'interface' : 'i',
 \ 'class' : 'c',
 \ 'struct' : 's',
 \ 'enum' : 'e'
 \ }
 let s:known_types.vala = type_vala


Also, remember to check out http://live.gnome.org/Vala/Vim for syntax highlighting :-)

A few useful Puppet snippets

I’ve been playing with Puppet lately, both on my home network and within Fedora’s Infrastructure team, and I thought some of the work I did might be useful for anyone out there stuck with a Puppet manifest or an ERB template.

Snippet #1: Make sure the user ‘foo’ is always created with its own home directory, password, shell, and full name.

class users {
    users::add { "foo":
        username        => 'foo',
        comment         => "Foo's Full Name",
        shell           => '/bin/bash',
        password_hash   => 'pwd_hash_as_you_can_see_in_/etc/shadow'
    }

define users::add($username, $comment, $shell, $password_hash) {
    user { $username:
        ensure => 'present',
        home   => "/home/${username}",
        comment => $comment,
        shell  => $shell,
        managehome => 'true',
        password => $password_hash,
    }
  }
}

Snippet #2: Make sure the user ‘foo’ gets added into /etc/sudoers.

class sudoers {

  file { "/etc/sudoers":
    owner => "root",
    group => "root",
    mode  => "440",
  }

  augeas { "addfootosudoers":
    context => "/files/etc/sudoers",
    changes => [
      "set spec[user = 'foo']/user foo",
      "set spec[user = 'foo']/host_group/host ALL",
      "set spec[user = 'foo']/host_group/command ALL",
      "set spec[user = 'foo']/host_group/command/runas_user ALL",
    ],
  }
}

Snippet #3: Make sure that openssh-server is: installed, running on Port 222 and accepting RSA authentications only.

class openssh-server {

  package { "openssh-server":
      ensure => "installed",
  }

    service { "ssh":
        ensure    => running,
        hasstatus => true,
        require   => Package["openssh-server"],
    }

augeas { "sshd_config":
  context => "/files/etc/ssh/sshd_config",
    changes => [
    "set PermitRootLogin no",
    "set RSAAuthentication yes",
    "set PubkeyAuthentication yes",
    "set AuthorizedKeysFile	%h/.ssh/authorized_keys",
    "set PasswordAuthentication no",
    "set Port 222",
  ],
 }
}

Snippet #4: Don’t apply a specific IPTABLES rule if a host is tagged as ‘staging’ in the relevant node file.

On templates/iptables.erb:

<% if environment == "production" %>

-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 25 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

<% unless defined?(staging).nil? %>
-A INPUT -s X.X.X.X -j REJECT --reject-with icmp-host-prohibited
<% end -%>

# Allow unlimited traffic on eth0
-A INPUT -i eth0 -j ACCEPT
-A OUTPUT -o eth0 -j ACCEPT

# Allow unlimited traffic from trusted IP addresses
-A INPUT -s 192.168.1.1/24 -j ACCEPT

<% end -%>

On the manifest file:

class iptables {
    package { iptables:
        ensure => installed;
    }

    service { "iptables":
        ensure    => running,
        hasstatus => true,
        require   => Package["iptables"],
    }

    file { "/etc/sysconfig/iptables":
        owner   => "root",
        group   => "root",
        mode    => 644,
        content => template("iptables/iptables.erb"),
        notify  => Service["iptables"],
    }
}

That’s all for now!

Commit Digests

After several months on hiatus, then some January evenings to process the backlog, I am happy to have the commit digests back to the present day.

What now? I'll try to get back to the weekly updates, whatever the weather.

Of course you can help: whenever you see a noteworthy commit, or whenever you make a noteworthy commit, just send me an email or ping me on IRC. This will help me, and could also bring other perspectives on what constitutes a “noteworthy” commit. And if you love the commit digests and have time on your hands, you can help extend the project to new heights; got an interest in statistics? Got an interest in interviews? There's a place for you.

Happy reading!

How to hire an Executive Director

When I told the GNOME Foundation Board of Directors that I was leaving my job as executive director, I told them my number one priority was to hire my replacement. Before I was hired, the GNOME Foundation went through a long period without an executive director and I wanted to make sure that didn’t happen again. At the Boston Summit, there was actually some discussion about whether they wanted another executive director or whether they could hire more specialized individuals for particular tasks. For numerous reasons, they opted to hire another executive director. (I was relieved – speaking as a current GNOME Foundation board member, it would be a lot of work for a volunteer board to manage more staff without an executive director.)

The most amazing thing about this process was that an all volunteer hiring committee was formed and made a recommendation to the board in just two months. We received a number of high quality candidates and we were committed to moving quickly through the interview and decision process.

Executive Director Hiring Process

Here’s the process we used to hire an executive director:

  • We put together a great hiring committee.
  • We created a mailing list and set of private wiki pages for the hiring committee.
  • We drafted and posted the job description.
  • We collected resumes, conducting phone screens as we went. We were quite excited at the number of quality candidates that we got.
    • On the wiki we tracked candidates, who was phone screened, who was set up for follow up interviews, etc.
    • The phone screener for each candidate was responsible for managing that candidate for the rest of the process.
    • All communication that involved decisions went through a GNOME board member who was also part of the hiring committee.
  • We recommended three candidates to the board.
  • The board interviewed the top candidate and negotiated an offer.
  • She accepted! To carry on the tradition, we made her write her own press release. (Actually, Luis Villa helped me with mine.)

The GNOME Executive Director Hiring Committee

The group that agreed to help out and did an awesome job is:

  • Bradley Kuhn, Executive Director at Software Freedom Conservancy. Member of the Advisory Board representing FSF, former Executive Director of FSF. Bradley offered a lot of free software and nonprofit expertise to the hiring process. Bradley has a personal friendship with Karen, which he disclosed to the committee as soon as her application arrived. Other committee members carried out the initial interviews with Karen, and Bradley recused himself on 14 March 2011 when Karen became the top candidate.
  • Dave Neary, Neary Consulting. GNOME contributor, former Director of GNOME Foundation. Dave brought us a lot of GNOME experience and understanding. He was involved in recruiting me for the job several years earlier.
  • Germán Póo-Caamaño, Director of GNOME Foundation. Germán was our board member contact. He pulled us all together and was our communication point with the board of directors. Og Maciel and Brian Cameron, two other board members, joined him midway through the process. We had board members communicate all official decisions to candidates and that turned out to be quite a bit of work. Og did great sending out a lot of emails – some fun and some hard.
  • Jonathan Blandford, Manager of the Desktop team at Red Hat. Member of the Advisory Board representing Red Hat, former Director of GNOME Foundation. Jonathan brought us not only GNOME experience but hiring experience in the open source world.
  • Kim Weins, OpenLogic. Senior VP of Marketing at OpenLogic. I invited Kim to the committee because Kim makes things happen! She brought a wealth of team building and hiring experience as well as strength in execution that kept us moving along whenever we started to stall.
  • Luis Villa, Greenberg-Traurig. Attorney at Greenberg-Traurig, formerly attorney at Mozilla, former member of the Advisory Board representing Mozilla, former Director of GNOME Foundation. Luis joined to help us part time. He did not interview candidates but lent his GNOME experience – and he’s the one that hired the former GNOME Executive Director (me!).
  • Robert Sutor, IBM. Vice President of Open System and Linux at IBM. Bob brought a history of GNOME but also ties to the greater industry and a lot of hiring experience. He also drove us to keep moving at times when volunteer orgs tend to slow down.
  • Stormy Peters, Head of Developer Engagement at Mozilla. Former Executive Director of the GNOME Foundation, former member of the Advisory Board representing HP, now Director of the GNOME Foundation (but not at the time of the hiring committee).

The timeline

Here’s the actual time line of how it worked:

  • I gave notice on October 20, 2010 and said we should work on hiring a replacement right away.
  • At the Boston Summit, the board decided to hire an executive director to replace me.
  • The board appointed Germán as the board member in charge.
  • Germán posted the job description on November 7, 2010.
  • On November 29th, Germán involved me in the hiring committee formation.
  • On December 27th, we introduced the hiring committee.
  • We started screening resumes and doing phone interviews.
  • On February 2, 2011, the hiring committee made a recommendation to the board.
  • On March 11, 2011, the board told the hiring committee they were ready to make an offer to the top candidate.
  • Discussions, clarifications, negotiations and communications.
  • On June 21, 2011, we announced that Karen Sandler would be joining the GNOME Foundation!

The process went well and I’d recommend it to others trying to hire in a virtual, global, nonprofit environment. There are parts that could have been more efficient but we learned and adjusted as we went. We talked to a large number of high quality candidates and hired a new executive director in a very efficient manner – all done by a volunteer board of directors and a volunteer hiring committee!

Related posts:

  1. I’m the new Executive Director of the GNOME Foundation!
  2. What do I do as Executive Director of GNOME?
  3. Stormy’s GNOME Update: November 7, 2010

The love of work overcomes all problems

I do not want to lie: I have had a few problems with my computer since the Gnome internship started. For instance, the other day I was working and I experienced a power failure. When I turned my computer back on it showed that I must repair system files. I panicked a bit, but I thought to myself that I am not the first person to experience this problem. I Googled the answer on my BlackBerry mobile device.

I found the answer that I should type:

“blkid followed by this command: fsck -y /dev/sda2 and then type exit”.

It worked and I got back to my work.

The conclusion is, no matter what problem I may face, the love of my work in Localization (Gnome software translation) has given me the inspiration that there is no problem that cannot be fixed. I am still continuing with my work, thirsty to learn more and excited to see more results.

Again I say the love of work overcomes all problems!

Planning week 30-Jan to 3rd February 2012




This week I will be focusing on documentation (help files). I will focus on the help files in Mallard format: the help files of the modules I have already translated.


I will focus on one document at a time because leaving a file halfway and jumping to another one does not help.


The one file whose software translation I am about to finish is Cheese, which I reserved earlier this month although I was working on some other files.


cheese master

Cheese is a program to take pictures and videos. It has fun effects and nice view of photo stream at the side.

Going to FOSDEM!

It’s that time of the year again: FOSDEM will take place in Brussels next week-end. This is one of my favourite free software events, with lots of interesting talks, lots of interesting people, and lots of energy everywhere. This year, it looks like it will be the best FOSDEM ever! More devrooms, more than 400 talks, more everything!

I've helped organize the crossdesktop devroom again. Among its talks, I can only recommend the gnome-boxes presentation that Marc-André and Zeeshan will be giving :) While I'm at it, here are a few more shameless plugs: Hans de Goede will be giving 2 SPICE talks in the Virtualization devroom, one general presentation of SPICE, and one where he will describe the USB redirection support in SPICE. And Alon Levy will present his work to interact with an X server through SPICE without using a virtual machine.

Last but not least, there will also be a GNOME booth with some goodies...

See you all there in a few days!

(Hopefully) Going to FOSDEM 2012

Most of you might have heard about the sudden “death” of the airline Spanair. I had never heard of an airline grounding its planes on the same day it announced its collapse.
Several colleagues of mine at Igalia were affected by the events, and guess with whom some of us (mostly Igalians based in Coruña) were flying to Brussels? That’s right…
Fortunately yesterday we bought new flights and decided to try to get reimbursed for the cancelled ones later.

This means that thanks to Igalia:

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I had two presentations in each of the last two editions of FOSDEM, but I didn’t really have something new to show in this year’s, so I’m attending only as a participant, which doesn’t make it any less exciting for me.

I already have some arrangements planned, such as having a beer with some folks, and you’re invited too if you wanna talk about Igalia or the projects I’m involved in.

See you in Brussels.

Gnome Gnomine has a new look

I’m very glad today since my patch for the bug has been accepted; it removes the status bar and puts everything else into the toolbar. Hopefully in the coming Gnome release you will see a new look for Gnome Gnomine. Thanks to Robert for the reviews! Here is the preview:


GNOME Shell 3.2 in wheezy: a retrospective

When you read this, GNOME Shell 3.2 will (hopefully!) have finally transitioned to Debian’s testing suite.

Planet GNOME readers might think Debian now has outdated versions of software even in its development versions, or that the distribution’s development marches at a glacial pace. Either way, wheezy GNOME users will finally have a Shell that matches the rest of their GNOME components, something that works with the Shell extensions website and has far fewer problems and limitations compared to 3.0.2.

The reality is that GNOME 3.2’s packaging was quite ready back when it was released in late September, but a number of not-so-desirable situations held GNOME Shell from transitioning to testing until today, four months later. So, what happened?

TL;DR: transitioning from GNOME 2 → GNOME 3 is not so easy if you want to keep testing in a sane state, and when you need to deal with dozens of indirectly related packages, for more than 10 architectures… but it shouldn’t take nearly a full year, either…

Let’s go back to the last months of 2010. Debian squeeze is in very deep freeze, and the release team and many Debian developers are focusing on squashing as many release critical bugs as they can, in order to make Debian 6.0 the great release it ended up being. The GNOME project has recently delayed the big launch of GNOME 3.0 again, until March 2011; Debian has already settled on GNOME 2.28 for its release, although it will end up cherry-picking many updates from the 2.30 release modules.

With most of the stabilization work being done, many Debian GNOME team members were at that time working on packaging very early versions of what would end up being GNOME 3.0 technology: GTK+3.0, GNOME Shell, Mutter… and some brave users even tried to use it via the experimental archive.

On February 6th, Debian 6.0 was released, and soon after, on April 6th, GNOME made a huge step forward with the much anticipated release of GNOME 3.0. At that time, Debian developers were busy breaking unstable as much as they could, as is tradition in the weeks following a major release, and the Debian GNOME team was able to start moving some GNOME 3.0 libraries (those which were parallel-installable with their GTK+2.0 versions) to unstable.

However, moving the bulk of GNOME 3.0 to unstable wasn’t so easy. When you start doing that, you need to be sure you’re ready to have all affected packages in a “transitionable” state as soon as possible, to minimise the chances of blocking transitions of unrelated packages via the dependencies they pick up with rebuilds. All the packages involved in a transition need to be ready to go in the same “testing run”, for all supported architectures. When you’re dealing with dozens of GNOME source packages at the same time, many of which introduce new libraries, or worse, introduce incompatible APIs that affect many more unrelated packages, things get hairy, and you need a plan.

So, Joss outlined what a sane approach to this monster transition could look like. The amount of work to do was what we call “fun” on #debian-gnome. In a nutshell, we had to deal with quite a few transitions, starting with having a newer version of libnotify in unstable, and a pre-requisite for that was making sure all the packages using libnotify1 were ready to use the source-incompatible libnotify4, and this meant preparing patches and NMUs for many of our packages, as well as many others not under our control.

Before starting a controlled transition like this one, we had to get an ACK from the release team, who was busy enough handling other huge transitions like Perl 5.12, so by the time we got our own slot, we were well into Summer.

With libnotify done in August, it was time to get our hands dirty with more exciting stuff, like getting Nautilus in testing. This meant bumping a soname and requiring all packages providing Nautilus extensions to migrate to GTK+3.0, or drop the extension entirely, as you can’t mix GTK+2.0 and GTK+3.0 symbols in the same process. However, in GNOME 3.0, automounting code had moved from Nautilus to gnome-settings-daemon, so in order to not break filesystem automounting in testing for an unreasonable amount of time, both Nautilus and g-s-d needed to go in at the same time. The fun thing is that g-s-d dragged glib2.0, gvfs, gnome-control-center, gdm3, gnome-media, gnome-session and gnome-panel into the equation, so this transition needed extra planning and a lot more work than initially expected: migrating all nautilus extensions, plus ensuring all Panel applets had migrated to GTK+3.0 and the new libpanel-applet-4 interface. In short, this was the monster transition we were trying to avoid.

By the time all this mess was sorted out, GNOME 3.2 had been released, and from what users said, it was a lot better than 3.0. We still had no more than a few bits and pieces of 3.0 in testing, and we were working hard to get 3.0 in wheezy. With all the excitement around 3.2, at times it was difficult to explain to outsiders why we were beating a dead 3.0 horse… Going back to our huge transition, it was just a matter of time before all the packages would be built and be ready to enter, on the same run, in testing.

A few weeks later, in early November and after several rounds of mass bug filings, fixing unrelated FTBFS, many NMUs, package removal requests and dealing with any possible problem that could block our transition, everything seemed to be set, and our release team magicians had everything in place for the big magic to happen. However, our first clash with the rest of Debian happened a few hours before our victory, in the form of an unannounced ruby-gnome2 upload which reset the count for everyone. It was fun to see the release team trying all sorts of black magic in an attempt to mitigate the damage. Fortunately, after a few tries they managed to fool britney (the script that handles package transitions from unstable to testing) somehow, and the hardest part of the job was done with just one day of delay.

At last, the core of GNOME 3 was in testing, and testing users found out soon after. The rest of the week saw a cascade of hate posts against GNOME 3 on Planet Debian, and personally I didn’t find that especially motivating to keep on working on the rest of the GNOME bits. With experimental cleared of GNOME 3.0 stuff, we finally were able to focus on packaging whatever GNOME 3.2 components were not already done, and preparing for what should be a plain simple transition from GNOME 3.0 to 3.2.

After our share of wait for a transition slot, as Perl 5.14, ICU and OpenSSL were in the line before us, and after dealing with a minor tracker 0.12 transition, we were ready for our next episode: evolution-data-server.

At first sight, we thought this would be a lot easier, but it still got a bit hairy due to evo-data-server massive soname bumps. We were given our slot just before Christmas, after a few weeks of wait for others to finish their migration rounds, and most of the pack entered wheezy a few days before the new year.

No rejoicing, though, as GNOME Shell 3.2 didn’t make it. First, we discovered it was FTBFS on kFreeBSD architectures, as NetworkManager had been promoted from optional to required, for apparently no good reason, leaving the BSD world in the cold, including our exotic GNU/kFreeBSD architectures. Now, let’s clarify that I’m a supporter of the Debian kFreeBSD architectures and was really happy to see it accepted as a technology preview in squeeze. However, as you know, GNOME Shell currently requires hardware acceleration to run, a requirement hardly met on kFreeBSD, unless you’re using a DRI1 X driver. We seriously doubted anyone had ever run a GNOME 3 session on kfreebsd-*. However, if it didn’t build, it was a blocker bug for GNOME Shell. We considered creating different meta-packages for kFreeBSD architectures, only to conclude it’d be a mess, so our awesome Michael Biebl ended up cooking up a patch that restored the ability to build the Shell without NetworkManager support.

With this out of our way, we just needed to upload Michael’s fix and watch the buildds do their part of the job. Or maybe not?

Enter Iceweasel 9.

In parallel, and with incredible bad timing, Iceweasel 9.0 was uploaded to Debian the very same day it was released by Mozilla. Again, it greeted us with a nasty surprise: yet another mozjs API change, which made gjs FTBFS, which meant our kFreeBSD fixes would be unusable until someone who knew Gjs’ internals well enough bit the bullet and worked around the new API changes. Again, Michael Biebl tried to be our saviour, but unfortunately wasn’t able to fix all the problems, so we tried to focus on plan B.

Mozilla had released a fork of the mozjs that is included in Firefox, so that embedders would have a bit less of a hard time with these recurrent API changes. This was based on Firefox 4, and was already being packaged by Ubuntu. Gjs would build using this older version just fine, so we just needed to get it in Debian as soon as possible. We just needed to find a sucke^Wvolunteer that would be inclined to maintain the beast. Only after a few weeks we managed to get Chris Coulson, the Ubuntu packager, to maintain the package directly through the Debian archive via package syncs. However, his package had only been auto-compiled in the three Ubuntu architectures, that is amd64, armel and i386. It’s late January 2012, and we’ve been fighting this war for 10 months.

After getting some help from Michael to get the new package in shape for Debian standards, we were excited to sponsor it for Chris. Duh, after a few days in the NEW fridge, it was rejected by the ftp-masters. The license statement was missing quite a few details, so I went ahead and sacrificed a few hours of my copious free time to get this sorted out. A few days later, mozjs was accepted, but the result was horrible. It was very red. mozjs didn’t build on half of our targets.

Mike Hommey was quick to file a bug and point us to the most obvious fuckups. As he had dealt with this in the past as the Iceweasel maintainer, all of these issues were fixed and patches were ready to be applied verbatim or with minimal changes to our sources. With mozjs finally built successfully (although with severe problems on ia64), we were finally able to rebuild Gjs against it, upload GNOME Shell with our kFreeBSD fixes and wait until today for this mess to be over. Whew.

I can’t say I’ve enjoyed all the stages of this ride. Some bumps on the road were clearly there to test our patience, but it has helped me get back in touch with non-leaf GNOME packaging, which was all I was doing for a while due to being super-busy lately with studies. It also reminds me of the privilege of working side by side with some awesome people, not only Joss, Michael, Sjoerd, Laurent or Gustavo, to name just a few Debian GNOME team members, but also the receptive release team members like Julien or Cyril, and NEW-processing record-breaking ftp-master Luca. Without them, we might be trying to figure out the Nautilus transition since last Summer.

We really hope GNOME 3.4 will be a piece of cake compared to this. ;)

Severed Fifth Release Party this Friday in San Francisco

Can’t see the video? Watch it here.

Just a quick note to let you know that this Friday, 3rd February in San Francisco we will be having the Severed Fifth CD Release Party. The new album ‘Liberate’ was funded by donations from the Severed Fifth community and will be released soon under a Creative Commons license.

As such, on Friday we will be releasing the album at Cafe Cocomo, 650 Indiana St, San Francisco, CA where we will perform a full, live set of the new record. We will also be supported by Ulysses Siren and My Victim. Not only that, but everyone who comes to the show will get a free copy of the new album on CD, and there will be plenty of give-aways and prizes.

Tickets are $10 in advance ($12 on the door). You can buy tickets for the show here, as well as on the door. Doors open at 8pm.

I would love to encourage you to come out to support Creative Commons and local music and have a great time. :-)

January 30, 2012

The ongoing fight against GPL enforcement

GPL enforcement is a surprisingly difficult task. It's not just a matter of identifying an infringement - you need to make sure you have a copyright holder on your side, spend some money sending letters asking people to come into compliance, spend more money initiating a suit, spend even more money encouraging people to settle, spend yet more money actually taking them to court and then maybe, at the end, you have some source code. One of the (tiny) number of groups involved in doing this is the Software Freedom Conservancy, a non-profit organisation that offers various services to free software projects. One of their notable activities is enforcing the license of Busybox, a GPLed multi-purpose application that's used in many embedded Linux environments. And this is where things get interesting.

GPLv2 (the license covering the relevant code) contains the following as part of section 4:

Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License.

There's some argument over what this means, precisely, but GPLv3 adds the following paragraph:

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation

which tends to support the assertion that, under V2, once the license is terminated you've lost it forever. That gives the SFC a lever. If a vendor is shipping products using Busybox, and is found to be in violation, this interpretation of GPLv2 means that they have no license to ship Busybox again until the copyright holders (or their agents) grant them another. This is a bit of a problem if your entire stock consists of devices running Busybox. The SFC will grant a new license, but on one condition - not only must you provide the source code to Busybox, you must provide the source code to all other works on the device that require source distribution.

The outcome of this is that we've gained access to large bodies of source code that would otherwise have been kept by companies. The SFC have successfully used Busybox to force the source release of many vendor kernels, ensuring that users have the freedoms that the copyright holders granted to them. Everybody wins, with the exception of the violators. And it seems that they're unenthusiastic about that.

A couple of weeks ago, this page appeared on the elinux.org wiki. It's written by an engineer at Sony, and it's calling for contributions to rewriting Busybox. This would be entirely reasonable if it were for technical reasons, but it's not - it's explicitly stated that companies are afraid that Busybox copyright holders may force them to comply with the licenses of software they ship. If you ship this Busybox replacement instead of the original Busybox you'll be safe from the SFC. You'll be able to violate licenses with impunity.

What can we do? The real problem here is that the SFC's reliance on Busybox means that they're only able to target infringers who use that Busybox code. No significant kernel copyright holders have so far offered to allow the SFC to enforce their copyrights, with the result that enforcement action will grind to a halt as vendors move over to this Busybox replacement. So, if you hold copyright over any part of the Linux kernel, I'd urge you to get in touch with them. The alternative is a strangely ironic world where Sony are simultaneously funding lobbying for copyright enforcement against individuals and tools to help large corporations infringe at will. I'm not enthusiastic about that.


Awesome details

I like small details in software. Here’s a nice one I spotted the other day:


The Amazon Kindle app for iPad changes its background depending on the hour of the day. It even has some very nice effects, for instance, when switching to the night view, a falling star flies by.

Small details and nice polish show your users that you care. Don’t be happy just because it works; go the extra mile.

A year goes past

Blimey, I've been doing these birthday posts for ten years.

Today I am a year older. This particular day will be a subdued day; last weekend I spent with Niamh and Birmingham geeks (not at the same time), the previous one with my parents, so there's not actually a lot left to do on this actual birthday day. So I'm working, heh.

When I first met Sam, I related the old joke about being able to say "eighteen happy years... and then I met her". Which was totally invalid since we were only eighteen. Today I could tell that joke legitimately. Well, except that we're not married any more, probably because of inappropriate jokes. Might give her a ring later.

It's an interesting age, this. I'm now over halfway to the days of my years (three-score and ten), and I am supremely unworried by this. At previous points in my life I've felt like I knew everything now, and it turned out there was always more to learn. Now, of course, I finally have learned everything. It's a good feeling.

(No, of course I haven't.)

Anyway, many happy returns to me. I have to get back to work now. I'm wearing the rosette that Niamh bought me, though.

Interview of ColorHug maker, Richard Hughes

For a while now, I've wanted Banu to do interviews of makers of things with free and open designs. As a fan of PingMag MAKE, it was apparent to me that there was a lot of hard work, learning, fun and satisfaction to be had in making. It's too bad that PingMag shut down, but they still inspire. So when the ColorHug comes along—an open hardware product related to graphics—there's no better time to start interviewing. Solder when the iron's hot!

The ColorHug is a colorimeter that can be used to calibrate computer displays. It was created by Richard Hughes (hughsie). It is a fully open hardware project, and the design, drivers and firmware are available on the Gitorious code hosting website. From the branches and commit logs it appears that others have taken an interest in its development too, and have begun to contribute to it.


Without further ado, here's Richard Hughes. :)

Question Who are you? What have you been doing so far? How did you get interested in computers, electronics and building stuff?

My name is Richard Hughes, and I'm a programmer in the desktop group at Red Hat. I created and now maintain the following projects: upower, PackageKit, colord, shared-color-profiles, shared-color-targets, gnome-color-manager, gnome-power-manager and gnome-packagekit. Right from when I was a little child, I was always taking things apart and making new things, and so my degree choice of Electronic Engineering shocked none of my friends or family. Whilst at University I got into writing code, and specifically open source code. I started hacking on low-level userspace bits and the kernel, and then after my masters had finished I took a job at a large UK defence contractor. It was pretty much the opposite environment to open source, and as soon as Red Hat asked if I wanted to hack on cool stuff full time I jumped at the chance. Although I'm hugely privileged to spend all day writing free software, I've always missed making things, and I figured I could do something with open hardware as a hobby.

Question How did you end up in computer graphics / color management?

When I bought a digital SLR camera, my wife paid for me to go on a course to learn how to use the camera properly. During this course I used OSX for the first time, and came to the realisation that the color stuff just worked. No messing around on the command line, no technical jargon, it just worked. Color management on Linux was in a sorry state of affairs then, and I thought I could do something about that.

Question What is the ColorHug? What is a colorimeter?

A colorimeter is a device that attaches to the screen and measures the actual colors output by the computer. It allows us to see what color is actually being shown to the user, rather than what color the computer thinks it's showing the user. Using lots of these samples, we can apply a correction profile to the screen to make images look as they ought to look. As LCD panels get older, they get yellow and dull, and even CRT monitors have phosphors that degrade over time. This means you have to calibrate every few months to ensure that the colors match what they should.

The ColorHug is an open source colorimeter. It's designed from scratch, and every part is 100% open source. All the other colorimeters you can buy in shops have proprietary code that means we have to reverse engineer the hardware to make it work on Linux, and then we can't modify the hardware to do something else, or fix bugs and add features like you can with open source hardware.

Question What information is in color profiles?

A color profile is really just a binary blob of data that describes the color response of an input or output device. You can have color profiles that just encode the curves of colors as a set of matrices, or you can have color profiles that are made up of huge lookup tables. You can also store arbitrary data in a profile, and so you normally store a lot of extra useful stuff there too, like the model number of the device or the paper type for the printer.

Question What are the tech specs of the ColorHug? Does it use USB? Does it need a battery?

The device is a small USB peripheral that connects to the host computer and takes XYZ measurements of a specified accuracy. Longer measurements lead to a more accurate result, but take more time. The ColorHug has no battery and takes the few hundred milliamps it needs from the USB bus. The device has two LEDs that can either be switched on or off, or be programmed to do a repeated flashing with specified on and off durations. The device supports up to 64 calibration matrices, and by default 3 are provided which are mapped to LCD, CRT and projector, with 3 additional reserved types.

Question Tell us about the ColorHug's design.

The ColorHug is actually a PIC microcontroller that is interfaced with a TCS3200 light-to-frequency chip. The frequency is proportional to the amount of light, and so by enabling the red, green and blue photodiodes in the sensor we can combine these with a bit of clever maths into an XYZ color value. The PIC just controls the sensor and accurately counts pulses at high speed, and then responds to requests from the USB host to do different low level things.

Question What data structures / file formats do you work with?

Bit of an odd question, the answer is LOTS. :)

Question What are some of the unique algorithms used in the ColorHug?

I'm pretty sure there's nothing unique in the device, it's really simple hardware along with some school vector and matrix maths. The ColorHug is able to store custom CCMX matrices on the device, which is something that I was surprised that no other colorimeter vendor seems to do.

Question Can you walk us through the software path through various components of a Linux stack, when using typical ColorHug facilities?

Well, just taking a sample in gcm-picker is a good example.

gcm-picker opens the ColorHug device using colord, and asks colord for a number of XYZ samples. colord opens the device, and sets up some initial parameters (e.g. sample time) and then does a write to the USB device. When the write is complete, the host then reads the data back from the USB device and transports it back to the application over DBus.
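
To make that round trip a bit more concrete, here is a rough sketch of the client side of the conversation in Python, using Gio's D-Bus proxy. The bus name, object paths, interface names and the GetSample signature are written from memory and should be treated as assumptions rather than the documented colord API; also remember that it is colord, not the client, that actually talks to the USB device.

from gi.repository import Gio, GLib

# Ask the colord daemon (names and signatures assumed, not verified) which
# sensors it knows about; it answers with D-Bus object paths.
cd = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM, Gio.DBusProxyFlags.NONE, None,
    'org.freedesktop.ColorManager', '/org/freedesktop/ColorManager',
    'org.freedesktop.ColorManager', None)
sensor_path = cd.call_sync('GetSensors', None,
                           Gio.DBusCallFlags.NONE, -1, None).unpack()[0][0]

sensor = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM, Gio.DBusProxyFlags.NONE, None,
    'org.freedesktop.ColorManager', sensor_path,
    'org.freedesktop.ColorManager.Sensor', None)

# Lock the sensor, take one reading for an LCD panel, then release it.
# colord does the USB write/read against the ColorHug and returns the XYZ values.
sensor.call_sync('Lock', None, Gio.DBusCallFlags.NONE, -1, None)
xyz = sensor.call_sync('GetSample', GLib.Variant('(s)', ('lcd',)),   # parameter type assumed
                       Gio.DBusCallFlags.NONE, -1, None)
sensor.call_sync('Unlock', None, Gio.DBusCallFlags.NONE, -1, None)
print(xyz)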

Question What hardware tools do you need to build/debug/test this device?

To build and test the device, I need:

  • An ESD workstation with temperature controlled soldering iron, microscope, surgical tweezers, flux pen, fume extractor and bright light
  • A generic multimeter and bench power supply; an oscilloscope is handy too
  • A PIC programmer with 40 pin ZIF socket and spring loaded ICSP port
  • A photospectrometer and a good display such as a DreamColor to test each device with
  • A lot of patience

Question What software tools did you use to build this?

I used gEDA to design the schematic and PCB, and speaking as someone that's used very expensive PCB design software like Mentor Graphics, it was super easy. The gEDA guys are doing a really good job. On the software side I use the "free" (but not open) Windows MPLAB compiler, and then I wrote all the firmware loader software myself.

Question What skills did you have to use in this project? Is this your first hardware project?

I used to work on designing test equipment for military flight computers, so I've got a fair bit of experience doing moderately clever things with very small components. That said, a few people have built the ColorHug board now who are just hobbyists, and it's refreshing to know the board is easy to build.

Question How did you go about building this project?

The "hello world" program is often the hardest part of a project. I probably spent about 3 weeks working with a PIC evaluation board just to turn on a LED from custom flashed firmware before I could be happy that the processor was suitable for the job. The other bits are easy, once you have that first flashing LED. I soldered the sensor onto a SOIC8DIP8 convertor board and then used some breadboard to wire it onto the evaluation board. From there I got the sensor working, and could sense the different colors.

Question What are the main components of the ColorHug? What are their roles?

  • The sensor [TCS3200], to measure the number of photons of each wavelength
  • The processor [PIC18F46J50], to deal with USB comms and to turn on and measure colors using the sensor
  • The clock, 12MHz, to keep the processor running at high speed
  • The rubber aperture, to only accept light that is 90 degrees from the screen down a small hole
  • The LEDs, to show state and any error code
  • The box, to keep the PCB safe from static and help keep out the light
  • The pads, to avoid damaging the screen when the device is pressed against a fragile glass panel

ColorHug schematic

Question How does the main IC sensor work? How does light get transformed into values you can use?

The sensor is actually an array of 64 smaller sensors, which are arranged in a grid of Red, Green, Blue and White. When the PIC reads the frequency counts for a given sample, it converts them into 3 deviceRGB numbers, which are converted using a calibration 3×3 matrix into the XYZ values. The XYZ values can also undergo another 3×3 matrix on the device, which converts the numbers to a given set of display primaries. This lets the ColorHug be calibrated against an sRGB screen and used on wide-gamut displays.
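
To illustrate the two matrix steps just described, here is a minimal sketch in Python. The numbers are invented for illustration; a real device uses the calibration matrix stored on the ColorHug itself.

import numpy as np

# Invented values, purely for illustration.
device_rgb = np.array([0.42, 0.37, 0.31])      # frequency counts scaled to dRGB

calibration = np.array([[0.9, 0.1, 0.0],       # first 3x3 matrix: dRGB -> XYZ
                        [0.2, 0.8, 0.0],
                        [0.0, 0.1, 0.9]])

ccmx = np.array([[1.02, -0.01, 0.00],          # optional second matrix: adapt to
                 [0.00,  0.98, 0.01],          # a particular display's primaries
                 [0.01,  0.00, 1.03]])

xyz = calibration @ device_rgb                 # device values to XYZ
xyz_adapted = ccmx @ xyz                       # XYZ adapted to the panel in use
print(xyz, xyz_adapted)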

TCS230D sensor

Question How do you calibrate ColorHugs?

Calibration involves taking about 500 samples of dRGB and then using the Argyll CMS program ccxxmake to crunch the numbers into the best calibration matrix. This adapts the dRGB color into an XYZ value.
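
As a rough illustration of what "crunching the numbers" means (and only an illustration; this is not how Argyll's ccxxmake actually works), a simple least-squares fit can recover a 3×3 matrix from paired samples:

import numpy as np

# Given patches measured both by the ColorHug (dRGB) and by a reference
# instrument (XYZ), solve for the 3x3 matrix M that best maps one to the other.
rgb_samples = np.random.rand(500, 3)                 # what the ColorHug reported
true_matrix = np.array([[0.9, 0.2, 0.0],
                        [0.1, 0.8, 0.1],
                        [0.0, 0.0, 0.9]])
xyz_samples = rgb_samples @ true_matrix              # what the reference instrument saw

M, _, _, _ = np.linalg.lstsq(rgb_samples, xyz_samples, rcond=None)
print(np.round(M, 3))                                # recovers something close to true_matrix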

Question How did you design the enclosure?

The enclosure is a standard black ABS box from Hammond, with two cut outs that I do with several wooden and metal jigs. When I've got a bit of profit, I'm hoping to buy some equipment to do the holes in a better and quicker way.

Question How does the hardware process commands sent via USB by the software?

From a device point of view, it just sits and waits for an input packet (a 64-byte buffer) and then parses the input. It then does whatever is required and writes out an output packet. Many more details about what kind of data is sent are available in the firmware specification header file: https://gitorious.org/colorhug/firmware/blobs/master/ColorHug.h

Here's a basic flow chart:

Client: set the sensor multiplier (the frequency divider) to 100%
Hardware: OK

Client: set the sensor integral time to 65000 processor cycles
Hardware: OK

Client: flash the green LED twice
Hardware: OK

Client: take an XYZ reading with the default XYZ matrix for an LCD display
Hardware: OK, 123.45 67.89 34.56

Client: set the sensor multiplier to 0% (to turn the sensor off)
Hardware: OK
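
For a feel of what one of those exchanges looks like on the wire, here is a hedged sketch in Python. The command ID and field layout below are placeholders; the real values live in the ColorHug.h header linked above.

import struct

CMD_TAKE_READING = 0x23            # placeholder, not the real command byte

def build_request(cmd, payload=b''):
    # Every request is a fixed 64-byte buffer: command byte, then payload, zero-padded.
    return (bytes([cmd]) + payload).ljust(64, b'\x00')

def parse_reply(reply):
    # Assumed layout: status byte, echoed command byte, then three little-endian floats.
    status, echoed_cmd = reply[0], reply[1]
    x, y, z = struct.unpack_from('<3f', reply, 2)
    return status, echoed_cmd, (x, y, z)

request = build_request(CMD_TAKE_READING, struct.pack('<B', 0))  # 0 = assumed LCD matrix index
# ...write `request` to the device's USB endpoint, then read 64 bytes back...
fake_reply = (bytes([0x00, CMD_TAKE_READING])
              + struct.pack('<3f', 123.45, 67.89, 34.56)).ljust(64, b'\x00')
print(parse_reply(fake_reply))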

Question Is ColorHug free hardware? What did you keep in mind when designing it so that it was easily hackable?

It's all 100% free. When designing the PCB, I had to keep in mind that people who wanted to build the PCB themselves were probably not experts in SMD rework. So, the PCB is larger than it could be, and I've deliberately used large 0805 components rather than the more standard 0603 size. The code is also designed to cope with accidental shorts; for instance, the spare processor pins are wired up as inputs, and there's a watchdog timer to stop the hardware getting wedged. The LEDs help enormously when debugging, as they output Morse code in the event of a hardware error.

ColorHug PCB near side ColorHug PCB far side

Question What is Hughski Limited? How did you decide to start it?

Hughski Limited is a tiny company I set up to reduce the amount of personal liability I had. I'm naturally quite a risk-averse person, and so if some crazy lawyer decided to sue the company because his ColorHug poisoned his cat and ran away with his wife then they could claim all £66 of profit in the business rather than risking my personal savings. It's also a good way to work out how much tax you've got to pay at the end of the year if you try to keep the business and private finances separate.

Question I see that it's a UK registered company. What challenges did you face in creating this company? How much work was it? Are there other employees than you?

Actually creating the company is very easy and costs only a few hundred pounds. The hard bit is setting up the tax side and getting all the permits you need to start trading. There are technically no paid employees, although I solder the boards and my wife sticks on stickers and screws the units together.

Question How do you intend to attract more people to use the ColorHug?

At the moment, there is a waiting list to buy the ColorHug, and so I'm not intending to advertise at all. I'm hoping that when people get their ColorHug and tell their friends, that will be all the advertising I need.

Question How do you source your components?

Most of the components are from China and Taiwan. Most are bought using big companies such as Mouser and Farnell, but some (like the IR cut-off filter) are custom made in China, and these are hard to obtain.

Question Do you intend to do other devices in the future?

I'm playing with the idea of making an open source three axis CNC machine, as I've found the hardest part of making the ColorHug was actually drilling the case to any kind of precision.

Question How does it feel to work on free hardware?

It's a bit unreal really. I was only going to make 50 devices, and I hoped that I would not be left with spare devices. Now I've got a few hundred orders and the community is starting to grow. That bit feels great.

Question Are the software parts entirely free? Are there any closed tools which you have to use?

I use the closed source MPLAB to compile the bootloader and firmware, but only because the open source SDCC compiler didn't work for me. At some point I'll have to put in the hours and fix SDCC, but until then I just need to get hardware out of the door.

Question Now that you have completed this project, what observations did you make in retrospect?

Well, the amount of cash it took to make a professional-looking product is huge. The main problem is that most companies have a minimum order quantity of about 10,000 units, which is a ton of money for me on a project with no precedent. I've been lucky to find local companies that will take on small orders at reasonable prices.

In hindsight, I should also have worked longer on the prototype unit before announcing it to the world, as it takes quite a long time to turn a design intended for one unit into one that can be made in batches of 100. In the Amazon-one-click world we live in, people don't like it when you announce it's going to be 12 weeks until they get hardware.

Question Did you have fun? Does the ColorHug rock?

I'm still having fun, and the hardware seems to work for people, which is the main thing. Hopefully soon I can afford to pay myself something for my time, as up until now I've been building the units for free.


Question I am a GIMP user. Assuming GIMP supports a color managed workflow, how does one configure and use ColorHug on a Linux desktop?

If you're new to all this color management stuff, just fire up gnome-control-center and click the color panel. Then click "Learn more" and read all the documentation I wrote for GNOME 3.2. The ColorHug is just another supported device, so there's really nothing special you need to do.

Richard's computer: where it all happens

Richard Hughes: GNOME developer and free hardware hacker

That is all, readers. :) We hope you enjoyed this interview. If you have someone in mind to be interviewed, or you yourself want to be interviewed about your free hardware or software project, please email me the details or tweet @banushop. The ColorHug is offered for purchase at £48 currently. If you use Linux and are into graphics or hardware hacking, buy one and support its development. Happy hacking!


My sound hardware didn’t vanish, honest

I’ve been having intermittent problems with sound not working. Usually restarting (ie, killing) PulseAudio has done the trick but today it was even worse; the sound hardware mysteriously vanished from the Sound Settings capplet. Bog knows what’s up with that, but buried in “Sound Troubleshooting” I found “Getting ALSA to work after suspend / hibernate” which contains this nugget:

The alsa “force-reload” command will kill all running programs using the sound driver so the driver itself is able to be restarted.

Huh. Didn’t know about that one. But seems reasonable, and sure enough,

$ /sbin/alsa force-reload

did the trick.

That wiki page goes on to detail adding a script to /etc/pm/sleep.d to carry this out after every resume. That seems excessive; I know that sometimes drivers don’t work or hardware doesn’t reset after the computer has been suspended or hibernated, but in my case the behaviour is only intermittent, and seems related to having docked (or not), having used an external USB headphone (or not), and having played something with Flash (which seems to circumvent PulseAudio. Bad). Anyway, one certainly doesn’t want to kill all one’s audio-using programs just because you suspended! But as a workaround for whatever it is that’s wrong today, nice.

AfC

January 29, 2012

Countdown to FOSDEM!!

The countdown has begun! Today, I am doing laundry, and cleaning out the fridge & the house just in general, in anticipation of our upcoming trip to Brussels & FOSDEM 2012! Kevin & I will be flying out of Columbus on Thursday afternoon and thus arriving in Brussels on Friday morning ~8am. Most of Friday will be spent wandering around Brussels before attending the FOSDEM Beer Party that evening, then FOSDEM on Saturday & Sunday, and then another day of exploring Belgium before flying back on Tuesday morning. It’ll be a quick, but hopefully fun & productive, trip.

I keep going over the schedule at FOSDEM, going back and forth on what I want to attend, as there are so many fascinating-sounding events! I’m pretty sure I want to attend Transifex: Localizing your application (mostly since I’m currently taking a class on multilingual translation systems), but otherwise I really have no definitive plans. I’m sure I’ll figure it all out once I’m there though, and suspect that I’m better off having few definitive plans so that I can be flexible when I hear about something especially cool. Anyhow, I am super excited – to be attending FOSDEM, to be traveling, and to be kid-free for the first time in nearly 5 yrs! (Kevin’s mom has agreed to watch our boys while we’re gone – to which I can only say Thank You Amy!! :) )

So, any suggestions? What events are you planning to attend? What do you recommend? What should we make sure to see/do in Belgium? We’re up for pretty much anything! :)


Mallard Cheat Sheet

Mallard cheat sheet:

Summary of GStreamer Hackfest

So, as I talked about in my last blog post, we had a great GStreamer hackfest. A lot of things got done and quite a few applications got an initial port over to 0.11. For instance, Edward Hervey ended up working on porting the Totem video player, or rather trying to come up with a more optimized design for the Clutter-gst sink, as the basic port was already done.

Another cool effort was by Philippe Normand from Igalia, who put a lot of effort into porting WebKit to use 0.11. His efforts were rewarded with success, as you can see in this screenshot.

Jonathan Matthew had flown up all the way from Australia and made great progress in porting Rhythmbox over to the 0.11 API, a port which became hugely more useful after Wim Taymans and Tim-Philipp Müller fixed a bug that caused mp3 playback not to work :) .

Peteris Krisjanis made huge strides in porting Jokosher to 0.11, although, like Jason DeRose from Novacut and myself on Transmageddon, he ended up spending a lot of time debugging issues related to gobject-introspection. The challenge for non-C applications like Jokosher, Novacut, Transmageddon and PiTiVi is a combination of the API having changed quite significantly due to the switch to gobject-introspection generated bindings, some general immaturity challenges with the gobject-introspection library, and finally missing or wrong annotations in the GStreamer codebase. So once all these issues are sorted, things should look a lot brighter for language bindings, but as we discovered there is a lot of heavy lifting to get there. For instance, I thought I had Transmageddon running quite smoothly before I suddenly triggered this gobject-introspection bug.

There was a lot of activity around PiTiVi too, with Jean-François Fortin Tam, Thibault Saunier and Antigoni Papantoni working hard on porting PiTiVi to 0.11 and the GStreamer Editing Services library. And knowing Jean-François Fortin I am sure there will soon be a blog with a lot more details about that :) .

Thomas Vander Stichele, who also wrote a nice blog entry about the event, was working with Andoni Morales Alastruey, both from Flumotion, on porting Flumotion to 0.11. Since some of the plugins they need have not been ported yet, most of their effort ended up going into porting those plugins in GStreamer rather than into application porting, but for those of you using plugins such as multifdsink, this effort will be of great value. Andoni also got started on porting some of the non-Linux plugins, like the directsoundsink for Windows.

Josep Torra from Fluendo ended up working with Edward Hervey on hammering out the design for the clutter-gst sink at the conference, but he also found some time to do a port of his nice little tuner tool as you can see from the screenshot below.

Tuner tool for GStreamer 0.11

George Kiagiadakis kept hammering away at the QtGStreamer bindings, working both on a new release of the bindings for the GStreamer 0.10 series and on some preparatory work for 0.11.

In addition to the application work, Wim Taymans, Tim-Philipp Müller and Sebastian Dröge from Collabora did a lot of core GStreamer clean-ups and improvements, in addition to providing a lot of assistance, bugfixing and advice for the people doing application porting. All critical items are now sorted in 0.11, although there are some nice-to-haves still on the radar, and Wim plans on putting out some new releases next week to kickstart the countdown to the first 1.0 release.

As for my own little pet project Transmageddon, it is quite far along now, with both manually configured re-encodes and profile re-encodes working. Still debugging remuxing though and I am also waiting for the deinterlacer to get ported to re-enable deinterlacing in the new version. For a screenshot take a look at the one I posted in my previous blogpost.

January 28, 2012

Python GTK Documentation

After my recent blog post about the lack of Python GTK documentation since the new era of GIR bindings, I was delighted to find this awesome online documentation.

I am certainly not presuming that this documentation was a result of someone reading my blog post; I assume I just didn't see it online before. But thank you to everyone who has contributed to it.

Getting conned: eBay/Paypal fun

After seeing this article about "How secure is Paypal for eBay sellers" in this morning's Guardian, I'll share my personal experience with you.

In October, I sold my first generation MacBook Air on eBay, and got a buyer within a day for the £500 "Buy It Now" price. "Buy It Now" requires using Paypal, and the £500 (minus commission) appeared in my Paypal account¹. After a bit of to and fro, the buyer got in contact, and suggested that he come and pick it up. Saving about £30 of shipping, and sorting out the sale faster, strike me as good ideas.

The "buyer" said he couldn't come, sent one of his "employees". A very courteous man came to pick the laptop. In hindsight, he seemed slightly uncomfortable, and looked like he was very happy to see how easy it was going to be.

The spooky thing is that within 40 minutes -- note: not 3 hours, not a day after, not the day before -- within 40 minutes of the laptop getting picked up, the holder of the eBay and Paypal accounts submitted an "unauthorised account activity claim", leading to a "payment reversal" (me owing £500 to Paypal²).

During my call to eBay's customer support, I was told that "I had nothing to worry about" (I'm guessing that would be the case as long as I repaid the £500). Paypal promptly sent a mail mentioning they needed my help, but with very few options on my side ("no courier tracking number? Give us the money now").

Surrey Police failed to find the culprits, with the 2 mobile numbers associated with the con only being pay-as-you-go phones (topped up in a little convenience store in North London that only keeps a day's worth of CCTV).

So my advice:

  • If you sell anything via eBay using Paypal, send it recorded, and keep the receipt.
  • If you bought a MacBook Air first generation with the serial W88500DJ12G, it's stolen, send me an e-mail.

And as opposed to Messrs Lodge and Reakes, Paypal didn't reimburse me anything, and I'm £500 out of pocket.

¹: I'll spare you the details about eBay referring to a closed Paypal account, which meant I got conned two days later than the "buyers" anticipated.
²: On an account that was already closed, see ¹.

Update: Added mention of eBay's ludicrously bad customer service.

January 27, 2012

Platforms as a side effect

What I want to talk about here is a simple statement that I believe to be true:

The best platforms are written by the people who are forced to invent them as they make a product.

Years ago I learned a bit about J2EE; I never actually wrote an app using it, but I learned enough to get a sense. I came away with the very strong impression that the people working on it were driven by committee, with managers in their respective contributing corporations telling them what to do. They weren’t the same people out in the field writing apps using it, day in and day out, under time pressure to produce as much as possible. On the other hand, from the Ruby on Rails Wikipedia article:

David Heinemeier Hansson extracted Ruby on Rails from his work on Basecamp, a project management tool by 37signals (now a web application company).[10]

Now, I’ve never written a Rails app either, but it’s pretty clear from the Internet which one of these wins. Another excellent example is Amazon Web Services. Amazon had built a lot of this internally because they were forced to in order to run a web shopping site, before CEO Jeff Bezos made the key decision to spin it off as a platform.

And the most topical example here – GTK+ was originally spun out of the GIMP project because Motif sucked. Anyways, some food for thought. Basically if you’re one of those people in the trenches writing a platform – you should consider asking your manager to switch to writing apps for a bit. At least hopefully this blog post reminds me later that I have a few GTK+ apps that I really should get back to hacking on…


FlattrStat, a small statistic tool for Flattr

I'm a big fan of Flattr. But I find it hard to get statistics about your things that have been flattr'd.

On my Flattr account, I receive flattrs for both my blog and Getting Things GNOME!. But I want to keep a clear separation. There are multiple people now involved in GTG and they deserve part of the money (we will use it to buy beers at FOSDEM).

Also, on my own blog, I was interested to know which posts were the most successful in terms of revenue. I knew that, so far, this post had the most clicks, but I had no idea which one had received the most money (for the curious, it is that one).

In order to do that, I quickly wrote FlattrStat, a Python script. You need to download all the CSV files from Flattr, put them in a folder, then run the script with "python flattrstat.py".

output of flattrstat

It outputs the total clicks and revenue for each domain separately and, within each domain, sorts all your things from the most successful to the least successful.

Ideally, it should download the CSV files automatically and have a nice GUI, but I don't really need that. I wrote it for my own needs, but I realize that it might be useful to someone else. So feel free to use it or to contribute; it is under the WTFPL license.
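
As a rough sketch of the idea (this is not FlattrStat's actual code, and the CSV column names below are assumptions; adjust them to Flattr's real export format):

import csv, glob
from collections import defaultdict
from urllib.parse import urlparse

# Per-domain totals, plus per-thing revenue so things can be ranked.
domains = defaultdict(lambda: {'clicks': 0, 'revenue': 0.0, 'things': defaultdict(float)})

for path in glob.glob('*.csv'):
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            d = urlparse(row['url']).netloc          # assumed 'url' column
            domains[d]['clicks'] += int(row['clicks'])
            domains[d]['revenue'] += float(row['revenue'])
            domains[d]['things'][row['url']] += float(row['revenue'])

for d, data in domains.items():
    print(d, data['clicks'], 'clicks,', round(data['revenue'], 2))
    for url, money in sorted(data['things'].items(), key=lambda kv: -kv[1]):
        print('   ', url, round(money, 2))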

FlattrStat on GitHub


Flattr our API Documentation

Previous two weeks' work on GTG


Over these two weeks I continued my work on Google Tasks syncing. In order to make it consistent with the other backends, I broke the authentication into three stages and added a PIN request when authenticating. In the first stage the backend opens the browser to ask the user to allow syncing. The browser then shows a code. After the user types the code into GTG and it is verified, the authentication succeeds and the program starts syncing.
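
In outline, the flow looks something like the sketch below. This is not GTG's actual code; AUTH_URL and exchange_code_for_token() are hypothetical stand-ins for the real OAuth endpoint and library call.

import webbrowser

AUTH_URL = 'https://example.com/oauth/authorize?client_id=...&response_type=code'  # hypothetical

def exchange_code_for_token(code):
    # Hypothetical helper: a real backend would use an OAuth library here to
    # trade the user's code for an access/refresh token pair.
    raise NotImplementedError

def authenticate():
    webbrowser.open(AUTH_URL)                                # stage 1: ask the user to allow syncing
    code = input('Paste the code shown by the browser: ')    # stage 2: the user types the code/PIN
    return exchange_code_for_token(code)                     # stage 3: verify, then start syncing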


Another thing I did was fix the problem where task content could not be shown in Google Tasks; all the task content showed up like <content></content>.
After that I did more testing and finally pushed my final version to Launchpad.

Then, after a discussion with my mentor Luca, I got my next mission: writing a Unity lens for GTG. This is a real challenge for me, because at first I didn't know what a Unity lens was. After a few days of learning I now have some basic ideas in mind, and I find it very interesting. So in the coming weeks I plan to start working on it.

Accessibility support in WebKit2GTK+

As Piñeiro already mentioned in some posts, last week a bunch of hackers attended the ATK/AT-SPI Hackfest 2012 here at the Igalia offices, in the lovely city of Coruña.

As the guy working on accessibility support for WebKitGTK+, I attended the hackfest to join some other great people representing different projects, such as Mozilla, Orca, AT-SPI, ATK, GTK+ and Qt. So, apart from helping with some “local” organizational details of the hackfest and taking some pictures, I spent some time hacking in WebKitGTK+’s accessibility code and participating in some discussions.

And from that dedication I managed to achieve some interesting things too, my favorite ones being a big refactoring of the a11y code in WebCore (so it’s now better organized and hence more readable and easier to hack on) and pushing my patch for enabling accessibility support in WebKit2GTK+, after going through a meticulous review process (see the related WK bug) which started with the patch I wrote and attached back when attending the WebKitGTK+ hackfest, as I mentioned in my previous entry in this blog.

Yeah, I know that some weeks have already passed since then, and so perhaps you’re thinking this could have been done faster… but I spent some weeks on holidays in Barcelona in December (pictures here!) and so I didn’t have much time before January to devote to this task. However, the patch got integrated faster than I expected when I proposed the first version of it, so I’m quite satisfied and happy anyway just to be able to announce it now. Hope you share my joy :-)

So, what does this mean from the point of view of accessibility in GNOME? Well, that’s an easy question to answer: from now on, every browser that uses WebKit2GTK+ will be as accessible as those using the previous version of WebKitGTK+, and this is definitely a good thing. Of course, I’m certain there will be bugs in this specific part that will need fixing (as always happens), but for the time being this achievement means “yet another thing less” preventing us from pushing to upgrade some applications to WebKit2GTK+, such as devhelp (some ongoing work already done, as my mate Carlos announced yesterday), yelp, liferea… and the mighty Epiphany browser, which is rocking more and more each day that goes by.

Last, I’d like to share with you a screenshot showing this new stuff, but as I am a little bit tired of always using MiniBrowser (the small browser we use for testing WebKit2), I decided to try instead the new branch Carlos recently pushed for devhelp, so you can check that what I mentioned before is actually true.

So here you have it (along with a couple of additions done with Gimp):

As you can see, devhelp is running and Accerciser is showing the full hierarchy of accessible objects associated with the application, starting in the UI process (the GTK+ world) and continuing in the Web process, where all the accessible objects from the WebKitGTK+ world are being exposed. As I explained in a previous post, the magic that makes the connection between the two processes possible is done by means of the AtkSocket and AtkPlug classes, also represented in the screenshot attached above.
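
Conceptually, the pairing works like the little Python sketch below. In WebKit2 these objects are created internally by the UI and Web processes rather than by application code, and plug.get_id() only returns a useful ID when the AT-SPI bridge is loaded, so treat this purely as an illustration of how the two halves reference each other.

import gi
gi.require_version('Atk', '1.0')
from gi.repository import Atk

plug = Atk.Plug()        # lives in the Web process, wraps the page's accessible tree
socket = Atk.Socket()    # lives in the UI process, sits where the WebView's accessible is

plug_id = plug.get_id()  # the Web process hands this ID to the UI process over IPC
if plug_id:              # without the AT-SPI bridge loaded, the ID may be empty
    socket.embed(plug_id)            # graft the remote tree into the UI process's hierarchy
print(socket.is_occupied())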

So, that’s it.

Losing Planet GNOME Feed

Looks like I'll be losing my Planet GNOME feed in the next few weeks based on the new policy changes. I'll still be blogging at the same frequency, but you'll have to pick up the feed from Blogspot. I've greatly enjoyed the interaction with all of you and hope that it continues.
