GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

May 10, 2013

morituri and Hidden Track One Audio

I have tomorrow (Saturday) blocked out for a whole day of morituri hacking, as I will be home alone.

One of the things a lot of morituri users are puzzled by is its relentless drive to extract every single sample of audio from the CD, even when the pre-gap is really short, most likely just an inaccurate master or burn, with no useful audio in it.

For me, that was a design goal of morituri – I want to be able to exactly reproduce a CD as is. That is to say, ripping a CD should extract *all* audio from the CD, and it should be possible to make a copy of that CD and then rip that copy, and end up with exactly the same result as from the original CD. (I’m sure there’s a fancy scientific term for that that I can’t remember right now)

A lot of other people find it annoying, though, and don't like having those small, almost empty files lying around.

So I thought I’d do something about that, and that it might be useful as well to analyze my current collection of tracks and figure out what’s in there. Maybe I can find some hidden gems that I hadn’t noticed before?

So I added a quick task to morituri that calculates the maximum sample value. (I didn't want to use my own level element in GStreamer for this, as I wanted to make sure it was actual digital zero; this should really be done in an element instead, but I preferred the five-minute hack for this one.)
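
In essence, the task just scans every decoded sample and keeps the largest absolute value. A minimal sketch of the idea, using Python's standard wave and audioop modules on a WAV file (morituri's actual task decodes the FLAC files through GStreamer):

import audioop
import wave

def max_sample(path):
    # Return the largest absolute sample value found in a WAV file.
    w = wave.open(path, 'rb')
    try:
        frames = w.readframes(w.getnframes())
        return audioop.max(frames, w.getsampwidth())
    finally:
        w.close()

# A result of 0 means the track is pure digital silence.
print(max_sample('track00.wav'))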

And then I ran:

rip debug maxsample /mnt/nas/media/audio/rip/morituri/own/album/*/00*flac

Sadly, that turned up 0 as the biggest sample for all these tracks!

Wait, what? I spent all that time on getting those secret tracks ripped just to get none? That’s not possible! I know some of those tracks!

Maybe the algorithm is wrong. Nope, it works fine on all the regular tracks.

Oh, crap. Maybe morituri has been ripping silence all this time because my CD drive can’t get that data off. Yikes, that would be a bit of egg on my face.

No, it works if I check that Bloc Party track I know about.

Ten minutes of staring at the screen to realize that, while I was printing the names from the for loop variable over my arguments, the track I was actually passing to the task was always the first one. Duh. Problem solved.
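
In other words, a classic loop bug. A hypothetical reconstruction of its shape, reusing the max_sample sketch from above:

import sys

for path in sys.argv[1:]:
    print(path)                      # prints a different file name each time...
    print(max_sample(sys.argv[1]))   # ...but always analyzes the first track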

As for what I found in my collection:

  • a cute radio jingle that brought back memories of a live bootleg I had made myself of Bloem. That's from over ten years ago, but it must have been around the time I learned about the existence of HTOA and wanted to get one in.
  • found HTOA tracks I didn't know about on Art Brut's Bang Bang Rock & Roll and Mew's Half the World Is Watching Me; not their best stuff.
  • soundscapey or stage-setting tracks on QOTSA's Songs for the Deaf and Motorpsycho's Angels and Daemons at Play and Blissard; not that worthwhile (the Blissard track was OK, but really quiet).
  • Pulp hid a single piano chord in a 2-second pre-gap on This Is Hardcore; very curious. It's not an intro to the first track, because it doesn't fit with the sound at all.
  • Damien Rice hid a demo version of 9 Crimes (the first track) in the pre-gap; instead of piano and female vocals, he plays guitar and sings all the parts.
  • Got reacquainted with my favourite HTOA tracks: the orchestral quasi-wordless medley on the Luke Haines/Das Capital disc; the first Bloc Party album with a beautiful instrumental (up there with the hidden track at the end of Placebo's first album; both bands delivering an atypical but stunning moodscape); the beautiful cover of Ben Kenobi's Theme by Arab Strap on the Cherubs EP (no idea why that landed in my album dir; that needs to be fixed); the silly Soulwax skit for their second album.

Of course, Wikipedia has the last word on everything.

I note that they think Pulp recorded a cymbal, not a piano. And now that I see the title of the QOTSA hidden track, I get the joke, I think.

In total, across my album collection of 1,564 full CDs, I have 171 HTOAs ripped; 138 of those are pure digital silence, and only about 11 are actually useful tracks.

I expected to find more gems in my collection. I'll go through EPs, singles and compilations next, just to be sure.

But with this code in hand, maybe it’s time to add something to morituri to save the silent HTOA tracks as pure .cue information.

Joining Intel

Today is my last day at Oi WiFi.

It has been a year and a half since I moved from my small city (Maceió) to the biggest, craziest Brazilian city, São Paulo. I don't regret it!

I'm lucky to have joined a great company (Vex at the time, Oi WiFi nowadays), with great people, where I learnt a lot. I'm glad for the things I helped to improve; I'm sure we have better products than before, and I'm proud to be part of that progress. I leave behind, as a legacy, the spirit of Free Software: we can (and should) contribute back to the projects we use and improve internally. Every improvement we made here was submitted back to projects like OpenWrt, busybox, glib, etc.
However, things and priorities in the company have changed a bit in the last few months. It is time to look for a new challenge in my career.

What a challenge!

At Intel I'll join the OTC – Intel Open Source Technology Center – and will work on Open Source projects such as Tizen, EFL, WebKit and hopefully GTK+ :)
The team I'll work with is formed by the former Profusion company, acquired by Intel at the beginning of the year. Profusion was a company that I admired even before it was acquired :)

I’m very excited to join Intel. It’s a great opportunity in a great company and I don’t want to disappoint them!

I hope to publish here very soon the things I’m working on under the Intel umbrella. See you!

Online event for women in India on Cultural Freedom Day

Finally, things are taking shape. We have a few very dedicated people interested in promoting FOSS among women in India. After the first two meetings, we decided to have an online event where:

  • we will have an online digital art competition
  • a MediaWiki hackathon
  • an event on localization.

Everything is in the planning stage right now. But we want to make FOSS attractive to girls/women in India, so that they first use it and then contribute to it. When they hear about this event or attend it, they should feel: yeah, it's cool, I want to do this.

Have to plan some cool promotional materials for it.

There's one big obstacle, though. People in India, especially women, think of FOSS as something “geeky”, not meant for everyone, and not easy to use.

If we do not address these myths, they will continue using third-rate proprietary software without even trying any alternative!

Please share your ideas/suggestions on how to make this event interesting and appealing to women in India.

In case you want to help, please join http://wfs-india.dreamwidth.org/ and participate in the IRC meetings.


Hackerspace in Hong Kong


In order to promote Hardware Freedom Day, held on April 20, I went to visit the only hackerspace in Hong Kong, DimSumLabs. We met some new geeky friends, including Graham, a professor at PolyU, and Manolis, web admin at DimSumLabs among many other things… We had a lot of great discussions and talked from 8pm until 2am. We ended up agreeing to host some events at PolyU together in the near future.

The following Saturday, we went for a site visit and talked some more. Of course we also met with our old friend Mathieu, who just started a hacking group in Hong Kong and had a very successful first event. His main occupation these days, apart from his job, is working on an improved input method for Hong Kongese. We also invited him to join Digital Freedom Foundation as a director, as he is helping us coordinate and run some local activities in Hong Kong, which happens to be a requirement now. All in all it was a great week, not really an HFD event but definitely big steps forward for us.


She is currently the Director and Secretary of Digital Freedom Foundation. She is also co-founder and VP of Greenboard, an NGO deploying open education solutions in poor schools in China. Previously she was President of the Beijing Linux User Group, co-founder and lead organizer of GNOME.Asia, as well as VP of Marketing for the Gdium Foundation, which strives to allow everybody to access knowledge for free.

In order to spend all this time volunteering and making a better world, Pockey works as a consultant at dao² Inc. Before that she was the CEO of Compario China and the Vice General Manager of Willsee, one of the largest manufacturers of corporate and business gifts in South China. In the early stage of her career, she focused on marketing and advertising, working at international advertising agencies in Hong Kong and Beijing.

May 09, 2013

2013-05-09: Thursday.

  • Up early, to work, mail chew; sync. with Noel; pleased by the great work the QA team are doing identifying duplicate bugs, e.g. this one; it's really great to see things cleaned up and associated into clusters like that.

Wed 2013/May/08

May 08, 2013

2013-05-08: Wednesday.

  • Up early; music practise with the babes, packed them off to school. Mail chew, poked at slides, misc. admin. Dug through the crazy gallery creation / loading code - and with David's help cleaned up lots of building oddities. Up late.

generators in v8

Hey y'all, ES6 generators have landed in V8! Excellent!

Many of you know what that means already, but for those of you that don't, a little story.

A few months ago I was talking with Andrew Paprocki over at Bloomberg. They use JavaScript in all kinds of ways over there, and as is usually the case in JS, it often involves communicating with remote machines. This happens on the server side and on the client side. JavaScript has some great implementations, but as a language it doesn't make asynchronous communication particularly easy. Sure, you can do anything you want with node.js, but it's pretty easy to get stuck in callback hell waiting for data from the other side.

Of course if you do a lot of JS you learn ways to deal with it. The eponymous Callback Hell site lists some, and lately many people think of Promises as the answer. But it would be nice if sometimes you could write a function and have it just suspend in the middle, wait for your request to the remote computer to come through, then continue on.

So Andrew asked me if we could somehow make asynchronous programming easier to write and read, maybe by implementing something like C#'s await operator in the V8 JavaScript engine. I told him that the way into V8 was through standards, but that fortunately the upcoming ECMAScript 6 standard was likely to include an equivalent to await: generators.

ES6 generators

Instead of returning a value, when you first call a generator, it returns an iterator:

// notice: function* instead of function
function* values() {
  for (var i = 0; i < arguments.length; i++) {
    yield arguments[i];
  }
}

var o = values(1, 2, 3);  // => [object Generator]

Calling next on the iterator resumes the generator, lets it run until the next yield or return, and then suspends it again, resulting in a value:

o.next(); // => { value: 1, done: false }
o.next(); // => { value: 2, done: false }
o.next(); // => { value: 3, done: false }
o.next(); // => { value: undefined, done: true }

Maybe you're the kind of person that likes imprecise, incomplete, overly abstract analogies. Yes? Well generators are like functions, with their bodies taken to the first derivative. Calling next integrates between two yield points. Chew on that truthy nugget!

asynchrony

Anyway! Suspending execution, waiting for something to happen, then picking up where you left off: put these together and you have a nice facility for asynchronous programming. And happily, it works really well with promises, a tool for asynchrony that lots of JS programmers are using these days.

Q is a popular promises library for JS. There are some 250+ packages that depend on it in NPM, Node's package manager. So cool, let's take their example of the "Pyramid of Doom" from the GitHub page:

step1(function (value1) {
    step2(value1, function(value2) {
        step3(value2, function(value3) {
            step4(value3, function(value4) {
                // Do something with value4
            });
        });
    });
});

The promises solution does at least fix the pyramid problem:

Q.fcall(step1)
.then(step2)
.then(step3)
.then(step4)
.then(function (value4) {
    // Do something with value4
}, function (error) {
    // Handle any error from step1 through step4
})
.done();

But to my ignorant eye, some kind of solution involving a straight-line function would be even better. Remember what generators do: they suspend computation, wait for someone to pass back in a result, and then continue on. So whenever you would register a callback, whenever you would chain a then onto your promise, instead you suspend computation by yielding.

Q.async(function* () {
  try {
    var value1 = yield step1();
    var value2 = yield step2(value1);
    var value3 = yield step3(value2);
    var value4 = yield step4(value3);
    // Do something with value4
  } catch (e) {
    // Handle any error from step1 through step4
  }
});

And for a super-mega-bonus, we actually get to use try and catch to handle exceptions, just as Gods and Brendan intended.

Now I know you're a keen reader, and there are two things that you probably noticed here. One is: where are the promises, and where are the callbacks? Who's making these promises anyway? Well, you probably saw already in the second example that using promises well means having functions that return promises instead of functions that take callbacks. Functions like step1 take a different form when you use promises. So in a way the comparison between the pyramid and the promise isn't quite fair, because the functions aren't quite the same. But we're all reasonable people, so we'll deal with it.

Note that it's not actually necessary that any of the stepN functions return promises. The promises library will lift a value to a promise if needed.

The second and bigger question would be, how does the generator resume? Of course you've already seen the answer: the whole generator function is decorated by Q.async, which takes care of resuming the generator when the yielded promises are fulfilled.
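
The driving loop behind such a decorator is small. Here is a minimal sketch of the pattern in Python, whose generators behave much the same way (it is the same trick Twisted's inlineCallbacks uses; it is not Q's actual implementation):

def async_(gen_fn):  # trailing underscore: async is a reserved word in newer Python
    # Drive a generator that yields promise-like objects, i.e. anything
    # with a then(on_value, on_error) method: resume the generator with
    # send() when a promise is fulfilled and throw() when it is rejected.
    def wrapper(*args):
        gen = gen_fn(*args)
        def step(resume, value):
            try:
                promise = resume(value)
            except StopIteration:
                return  # the generator ran to completion
            promise.then(lambda v: step(gen.send, v),
                         lambda e: step(gen.throw, e))
        step(gen.send, None)  # send(None) starts the generator
    return wrapper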

You don't have to use generators of course, and using them with Q does mean you have to understand more things: not only standard JavaScript, but newfangled features, and promises to boot. Well, if it's not your thing, that's cool. But it seems to me that the widespread appreciation of await in C# bodes well for generators in JS.

ES6, unevenly distributed

Q.async has been part of Q for a while now, because Firefox has been shipping generators for some time.

Note however that the current ES6 draft specification for generators is slightly different from what Firefox ships: calling next on the iterator returns an object with value and done properties, whereas the SpiderMonkey JS engine used by Firefox uses an exception to indicate that the iterator is finished.

This change was the result of some discussions at the TC39 meeting in March, and hasn't made it into a draft specification or to the harmony:generators page yet. All could change, but there seems to be a consensus.

I made a patch to Q to allow it to work both with the old SpiderMonkey-style generators as well as with the new ES6 style, and something like it should go in soon.

give it to me already!

So yeah, generators in V8! I've been working closely with V8 hackers Michael Starzinger and Andreas Rossberg on the design and implementation of generators in V8, and I'm happy to say that it's all upstream now, modulo yield* which should go in soon. Along with other future ES6 features, it's all behind a runtime flag. Pass --harmony or --harmony-generators to your V8 to enable it.

Barring unforeseen issues, this will probably see the light in Chromium 29, corresponding to V8 3.19. For now though this hack is so hot out of the fire that it hasn't even had time to cool down and make it to the Chrome Canary channel yet. Perhaps within a few weeks; whenever the V8 dependency in Chrome gets updated to the 3.19 tree.

As far as Node goes, they usually track the latest stable V8 release, and so for them it will probably also be a few weeks before it goes in. You'll still have to run Node with the --harmony flag. However if you want a sneak preview of the future, I uploaded a branch of Node with V8 3.19 that you can build from source. It might mulch your cat, but life is not without risk on the bleeding_edge.

Finally for Q, as I said ES6 compatibility should come soon; track the progress or check out your own copy here.

final notes

Thanks to the V8 team for their support in this endeavor, especially to Michael Starzinger for enduring my constant questions. There's lots of interesting things in the V8 pipeline coming out of Munich. Thanks also to Andrew Paprocki and the Bloomberg crew for giving me the opportunity to fill out this missing piece of real-world ES6. The Bloomberg folks really get the web.

This has been a hack from Igalia, your friendly neighborhood browser experts. Have fun with generators, and happy hacking!

It's all about monkeys

Yesterday evening, May 7, two Belgian user groups, MADN and DotNetHub, invited me to give a two-hour introduction session on creating multi-platform mobile applications in C# with Xamarin 2.0.

Microsoft Belgium was hosting the session, and the room was packed!

I really enjoyed that evening, and just wanted to thank you all: the attendees for their presence and interaction, MADN and DNH for the invite and the bottle of wine, Microsoft Belgium for the venue, food and drinks, and Xamarin for the giveaway licences, monkeys and t-shirts.


The Cost of Being Convinced

When debating, we usually assume that opinions merely result from being exposed to logical arguments, and from understanding them. If arguments are logical and understood, people will change their minds.

Anybody who has been connected to the internet long enough knows that this never happens. Everybody stays on their own position. But why?

The reason is simple: changing your opinion has a cost. A cost that we usually ignore. A good exercise is to try to evaluate this cost before any debate, for yourself and for your counterpart.

Let's take a music fan who was convinced that piracy hurts artists. Convincing him that this is not the case and that piracy is not immoral means to him that, firstly, he was dumb enough to be brainwashed by major labels and that, secondly, the money he spent on CDs was a complete waste.

Each time you tell him “Piracy is not hurting artists and is not immoral”, he will hear “You are stupid and you have wasted money for years”.

This is quite a high cost, but not impossible to overcome. It means that your arguments should not only convince him, but also overcome that cost.

Worse: intuitively, we take the symmetry of costs for granted.

Let's take the good old God debate.

For the atheist, the cost of being convinced is usually admitting to being wrong. This is a non-negligible cost, but sometimes an affordable one. Most non-hardcore atheists are thus quite ready to be convinced. They enter any religious debate expecting the same mindset from their opponents.

But the opposite is not true. For a religious person, believing in God is often a very important part of her life. In most cases, it is something inherited from her parents. Some life choices have been made because of her belief. The person is often engaged in activities and societies related to her belief. It can go as far as being the core foundation of her social circles.

When you say “God doesn't exist”, the religious person will hear “You are stupid, your parents were liars, you wrecked your life and you have no reason to see your friends anymore”.

It looks like a joke, right? It isn’t. But, subconsciously, it is exactly what people feel and understand. No wonder that religious debates are so emotional.

Why do you think that some religious communities fight any individual atheist? Why do you think that every religion always tries to get money or personal involvement from you? Because they want to increase the cost of not believing in them. Scammers understand this very well: they will ask you for more and more money, to increase the cost of you realizing it's a scam.

Before any argument, any debate, ask everyone to answer sincerely the questions “What will happen if I'm convinced? What will I do? What will change in my life?”.

More often than not, changing opinion is simply not an option. Which settles any debate before it starts.

And you? Which of your opinions are too costly to be changed? And what can you do to improve the situation?

 

Picture by r.nial.bradshaw


It’s all about being productive

Stuff like this makes me sad:

Also, the GitHub issue where TJ requests that everything gets rewritten in plain JavaScript: https://github.com/rethinkdb/rethinkdb/issues/766

 

We’ve been here before
Language discussions aren’t new (nor is vim vs. emacs). In the GNOME community we’ve seen a ton of them. Just recently there was a huge one at the DX Hackfest.

GNOME/Mono developers have certainly received their dose of crap thrown at them. But so have GNOME developers who preferred Vala, Python, JavaScript, or even just GObject/C. Whatever you happen to be using, it's never the right thing for someone.

Have all these years of shedding words over it solved anything? Frankly: no. We are still seeing a wide mix of languages being used, and all of those projects have good reasons for their choices.

I get TJ’s point though: by using CoffeeScript, the rethinkdb people are making it harder for the wider JS community to contribute to their project. But…

 

It really doesn’t matter
Most open-source projects (or modules) don't have a ton of contributors. It's usually a modest team of core maintainers/developers that does the bulk of the work. And that's fine: the success of a project should not be measured by the number of contributors, but by the quality of the software it produces.

This smallish team of core developers will have their own good reasons for picking up a certain language. They’ll use the language that they feel most productive with for the task at hand. And that’s a good thing, they are mostly the people that move the project forward.

The biggest barrier to contributing to a project is not the language; there are plenty of projects written in unproductive languages that get a ton of contributions. Any good programmer can pick up a new language quickly (and TJ is more than just a good programmer, he's a fantastic one, much respect). The bigger hurdle is the specific domain knowledge involved.

Let’s all agree to disagree and have some respect for each other’s opinions, they are all valid anyway.

 

PS: I’ll be heavily moderating comments that try to turn this into a flame-war. I’m writing this to find some more respect and understanding.

May 07, 2013

Introducing CallingProxy Pro – Automate Calls Like a Pro!

It's been a while since the last post. When I'm not running Wayne Enterprises (my day job), I have mostly been working on Gnucash for Android and CallingProxy Pro in the cave.

Today, I am announcing a brand new version of CallingProxy Pro for Android. The app enables you to automate call inputs by having call code presets which are automatically dialed for you either before, during or after your call.


In order to create your proxy numbers, all you need to know is the following (a short sketch of the substitution rule follows the list):

  • Commas (,) are for two-second pauses
  • Proxies must have an ‘N’ character (available on the Android keypad), which will be replaced with the actual number you dial.
  • For example, the proxy #33#N will dial #33#<number you called>.
    Or, if you want to dial a hotline and press 1 for English and 3 for tech support during the call, save the proxy as N,1,3.
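
That substitution rule is simple enough to show in a few lines. A hypothetical Python illustration (not the app's actual code):

def apply_proxy(preset, number):
    # Replace the 'N' placeholder with the number being dialed; commas
    # in the result are interpreted by the dialer as two-second pauses.
    return preset.replace('N', number)

print(apply_proxy('#33#N', '5551234'))      # -> #33#5551234
print(apply_proxy('N,1,3', '18005551234'))  # -> 18005551234,1,3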

CallingProxy Pro

New and improved features in the pro version include:

  • Support for infixes and suffixes in addition to prefixes
  • Download new proxies / Share your own proxies
  • Improved call log management
  • Contact searching support
  • New and Improved look using latest Android guidelines

You can download the app from the Google Play store.

The post Introducing CallingProxy Pro – Automate Calls Like a Pro! appeared first on Coding User.

A short introduction to TPMs

I've been working on TPMs lately. It turns out that they're moderately awful, but what's significantly more awful is basically all the existing documentation. So here's some of what I've learned, presented in the hope that it saves someone else some amount of misery.

What is a TPM?

TPMs are devices that adhere to the Trusted Computing Group's Trusted Platform Module specification. They're typically microcontrollers[1] with a small amount of flash, and attached via either i2c (on embedded devices) or LPC[2] (on PCs). While designed for performing cryptographic tasks, TPMs are not cryptographic accelerators - in almost all situations, carrying out any TPM operations on the CPU instead would be massively faster[3]. So why use a TPM at all?

Keeping secrets with a TPM

TPMs can encrypt and decrypt things. They're not terribly fast at doing so, but they have one significant benefit over doing it on the CPU - they can do it with keys that are tied to the TPM. All TPMs have something called a Storage Root Key (or SRK) that's generated when the TPM is initially configured. You can ask the TPM to generate a new keypair, and it'll do so, encrypt it with the SRK (or another key descended from the SRK) and hand it back to you. Other than the SRK (and another key called the Endorsement Key, which we'll get back to later), these keys aren't actually kept on the TPM - the running OS stores them on disk. If the OS wants to encrypt or decrypt something, it loads the key into the TPM and asks it to perform the desired operation. The TPM decrypts the key and then goes to work on the data. For small quantities of data, the secret can even be stored in the TPM's nvram rather than on disk.

All of this means that the keys are tied to a system, which is great for security. An attacker can't obtain the decrypted keys, even if they have a keylogger and full access to your filesystem. If I encrypt my laptop's drive and then encrypt the decryption key with the TPM, stealing my drive won't help even if you have my passphrase - any other TPM simply doesn't have the keys necessary to give you access.

That's fine for keys which are system specific, but what about keys that I might want to use on multiple systems, or keys that I want to carry on using when I need to replace my hardware? Keys can optionally be flagged as migratable, which makes it possible to export them from the TPM and import them to another TPM. This seems like it defeats most of the benefits, but there's a couple of features that improve security here. The first is that you need the TPM ownership password, which is something that's set during initial TPM setup and then not usually used afterwards. An attacker would need to obtain this somehow. The other is that you can set limits on migration when you initially import the key. In this scenario the TPM will only be willing to export the key by encrypting it with a pre-configured public key. If the private half is kept offline, an attacker is still unable to obtain a decrypted copy of the key.

So I just replace the OS with one that steals the secret, right?

Say my root filesystem is encrypted with a secret that's stored on the TPM. An attacker can replace my kernel with one that grabs that secret once the TPM's released it. How can I avoid that?

TPMs have a series of Platform Configuration Registers (PCRs) that are used to record system state. These all start off programmed to zero, but applications can extend them at runtime by writing a sha1 hash into them. The new hash is concatenated to the existing PCR value and another sha1 calculated, and then this value is stored in the PCR. The firmware hashes itself and various option ROMs and adds those values to some PCRs, and then grabs the bootloader and hashes that. The bootloader then hashes its configuration and the files it reads before executing them.
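
The extend operation itself is tiny. In Python terms, the scheme described above amounts to the following (a sketch of the TPM 1.2 hash chaining, not code that talks to an actual TPM):

import hashlib

def pcr_extend(pcr_value, measurement):
    # PCR_new = SHA1(PCR_old || measurement). There is no operation that
    # sets a PCR directly, so values can only ever be accumulated.
    return hashlib.sha1(pcr_value + measurement).digest()

pcr = b'\x00' * 20  # PCRs start out at zero
for component in (b'firmware', b'bootloader', b'kernel'):
    pcr = pcr_extend(pcr, hashlib.sha1(component).digest())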

This chain of trust means that you can verify that no prior system component has been modified. If an attacker modifies the bootloader then the firmware will calculate a different hash value, and there's no way for the attacker to force that back to the original value. Changing the kernel or the initrd will result in the same problem. Other than replacing the very low level firmware code that controls the root of trust, there's no way an attacker can replace any fundamental system components without changing the hash values.

TPMs support using these hash values to decide whether or not to perform a decryption operation. If an attacker replaces the initrd, the PCRs won't match and the TPM will simply refuse to hand over the secret. You can actually see this in use on Windows devices using Bitlocker - if you do anything that would change the PCR state (like booting into recovery mode), the TPM won't hand over the key and Bitlocker has to prompt for a recovery key. Choosing which PCRs to care about is something of a balancing act. Firmware configuration is typically hashed into PCR 1, so changing any firmware configuration options will change it. If PCR 1 is listed as one of the values that must match in order to release the secret, changing any firmware options will prevent the secret from being released. That's probably overkill. On the other hand, PCR 0 will normally contain the firmware hash itself. Including this means that the user will need to recover after updating their firmware, but failing to include it means that an attacker can subvert the system by replacing the firmware.

What about using TPMs for DRM?

In theory you could populate TPMs with DRM keys for media playback, and seal them such that the hardware wouldn't hand them over. In practice this is probably too easily subverted or too user-hostile - changing default boot order in your firmware would result in validation failing, and permitting that would allow fairly straightforward subverted boot processes. You really need a finer grained policy management approach, and that's something that the TPM itself can't support.

This is where Remote Attestation comes in. Rather than keep any secrets on the local TPM, the TPM can assert to a remote site that the system is in a specific state. The remote site can then make a policy determination based on multiple factors and decide whether or not to hand over session decryption keys. The idea here is fairly straightforward. The remote site sends a nonce and a list of PCRs. The TPM generates a blob with the requested PCR values, sticks the nonce on, encrypts it and sends it back to the remote site. The remote site verifies that the reply was encrypted with an actual TPM key, makes sure that the nonce matches and then makes a policy determination based on the PCR state.

But hold on. How does the remote site know that the reply was encrypted with an actual TPM? When TPMs are built, they have something called an Endorsement Key (EK) flashed into them. The idea is that the only way to have a valid EK is to have a TPM, and that the TPM will never release this key to anything else. There's a couple of problems here. The first is that proving you have a valid EK to a remote site involves having a chain of trust between the EK and some globally trusted third party. Most TPMs don't have this - the only ones I know of that do are recent Infineon and STMicro parts. The second is that TPMs only have a single EK, and so any site performing remote attestation can cross-correlate you with any other site. That's a pretty significant privacy concern.

There's a theoretical solution to the privacy issue. TPMs never actually sign PCR quotes with the EK. Instead, TPMs can generate something called an Attestation Identity Key (AIK) and sign it with the EK. The OS can then provide this to a site called a PrivacyCA, which verifies that the AIK is signed by a real EK (and hence a real TPM). When a third party site requests remote attestation, the TPM signs the PCRs with the AIK and the third party site asks the PrivacyCA whether the AIK is real. You can have as many AIKs as you want, so you can provide each service with a different AIK.

As long as the PrivacyCA only keeps track of whether an AIK is valid and not which EK it was signed with, this avoids the privacy concerns - nobody would be able to tell that multiple AIKs came from the same TPM. On the other hand, it makes any PrivacyCA a pretty attractive target. Compromising one would not only allow you to fake up any remote attestation requests, it would let you violate user privacy expectations by seeing that (say) the TPM being used to attest to HolyScriptureVideos.com was also being used to attest to DegradingPornographyInvolvingAnimals.com.

Perhaps unsurprisingly (given the associated liability concerns), there are no public and trusted PrivacyCAs yet, and even if there were, (a) many computers are still being sold without TPMs and (b) even those with TPMs often don't have the EK certificate that would be required to make remote attestation possible. So while remote attestation could theoretically be used to impose DRM in a way that would require you to be running a specific OS, practical concerns make it pretty difficult for anyone to deploy that at any point in the near future.

Is this just limited to early OS components?

Nope. The Linux kernel has support for measuring each binary run or each module loaded and extending PCRs accordingly. This makes it possible to ensure that the running binaries haven't been modified on disk. There's not a lot of distribution infrastructure for setting this up, but in theory a distribution could deploy an entirely signed userspace and allow the user to opt into only executing correctly signed binaries. Things get more interesting when you add interpreted scripts to the mix, so there's still plenty of work to do there.

So what can I actually use a TPM for?

Drive encryption is probably the best example (Bitlocker does it on Windows, and there's a LUKS-based implementation for Linux here) - while in theory you could do things like use your TPM as a factor in two-factor authentication or tie your GPG key to it, there's not a lot of existing infrastructure for handling all of that. For the majority of people, the most useful feature of the TPM is probably the random number generator. rngd has support for pulling numbers out of it and stashing them in /dev/random, and it's probably worth doing that unless you have an Ivy Bridge or other CPU with an RNG.

Things get more interesting in more niche cases. Corporations can bind VPN keys to corporate machines, making it possible to impose varying security policies. Intel use the TPM as part of their anti-theft technology on education-oriented devices like the Classmate. And in the cloud, projects like Trusted Computing Pools use remote attestation to verify that compute nodes are in a known good state before scheduling jobs on them.

Is there a threat to freedom?

At the moment, probably not. The lack of any workable general purpose remote attestation makes it difficult for anyone to impose TPM-based restrictions on users, and any local code is obviously under the user's control - got a program that wants to read the PCR state before letting you do something? LD_PRELOAD something that gives it the desired response, or hack it so it ignores failure. It's just far too easy to circumvent.

Summary?

TPMs are useful for some very domain-specific applications, for drive encryption and for random number generation. The current state of the technology doesn't make them useful for practically limiting end-user freedom.

[1] Ranging from 8-bit things that are better suited to driving washing machines, up to full ARM cores
[2] "Low Pin Count", basically ISA without the slots.
[3] Loading a key and decrypting a 5 byte payload takes 1.5 seconds on my laptop's TPM.


Whew!

I just looked, and my last blog post was March 27th; the past month-plus has been very busy. Not a lot took place that warranted screenshots or R&D updates. OpenLDAP and Zimbra deployments both occurred and took most of my time. Status updates:

OpenLDAP
The change to centralized passwords went off very well. During my prep work, I created a "Password Sandbox" application that lets you test various passwords before actually making the change. This proved to be very popular and effective. Part of my time was then allocated to monitoring all 800 of our users to ensure they all changed their passwords, and to sending out daily status reports to drive toward 100% completion. It's now all done, and our passwords have been hardened to current metrics.

Support Portal Application
I have been putting in some patches and features for our support staff. They can now see at a glance which users still haven't changed their passwords, and it's easier to create and upload LDIF records. We've reached a milestone with this functionality where it's very simple to create new employees in LDAP. We can also now auto-provision new people in the Zimbra post office with a single click. Within 10 minutes of clicking the button, the user has email.

LDAP Sync To Zimbra
Zimbra auto-provisions email accounts, and I wrote a small Python script that runs at 1am to push any updates from OpenLDAP to Zimbra. Phone numbers and job titles change, and all of this information is now synced. Support staff only have to change the information in one place, saving us time.
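
A hypothetical sketch of the shape of such a nightly job (not the production script; it assumes the python-ldap module and Zimbra's zmprov command, and the server names and attributes are illustrative):

#!/usr/bin/env python
import ldap
import subprocess

# Pull the current phone numbers and job titles out of OpenLDAP...
conn = ldap.initialize('ldap://ldap.example.com')
conn.simple_bind_s('cn=sync,dc=example,dc=com', 'secret')
entries = conn.search_s('ou=people,dc=example,dc=com', ldap.SCOPE_SUBTREE,
                        '(mail=*)', ['mail', 'telephoneNumber', 'title'])

# ...and push them into the matching Zimbra accounts.
for dn, attrs in entries:
    mail = attrs['mail'][0]
    phone = attrs.get('telephoneNumber', [''])[0]
    title = attrs.get('title', [''])[0]
    subprocess.call(['zmprov', 'modifyAccount', mail,
                     'telephoneNumber', phone, 'title', title])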

Zimbra Is Live
After a long ramp-up time of testing, we put Zimbra live and it replaced Evolution/GroupWise. It was kind of sad to move away from Evolution, but unfortunately we just could not get bug fixes from Novell/SUSE under our support contract in a timely manner. Zimbra was deployed with virtually no problems. We built one server and have about 250 concurrent connections via the web interface, plus an equal number of connections from mail-notification (IMAP) in the lower panel alerting users to new messages, for a total of 500+ connections. Zimbra is flying, and you can barely notice the difference under full load. Fast and crisp.

In the past, the server which provides Firefox would have about 100 concurrent users; now that users are accessing email from a browser, this has jumped to about 250. The server took this load without a hiccup and is working very well. With 100 concurrent users the machine would be about 10% busy; now it's running at about 15%. We scaled this machine expecting greatly increased Firefox loads, and it's working like a champ.

And The One GOTCHA
There is always one, right? :) LibreOffice (and OpenOffice) have features embedded in various places to hand off the current document, or the current document converted to another format, directly to your email software. This is a great feature because it eliminates the need for users to convert to other formats before sending documents outside. A document can be created in ODT format and retained here that way, then converted to DOC on the fly... and the DOC is never saved to the local file system. In case you didn't know it did this, here is where the magic happens:


Now the bad part: it basically does a mailto: to your email client and uses &attach, which it turns out is not officially in the RFC and is only supported by certain email software, including Evolution. Many/most web-based email software does not allow or support it. I can certainly see that this could be a bad exploit: someone could craft a mailto link that requests files from an unsuspecting user's desktop.

You cannot just tell users to drag and drop the file out of Nautilus, because in many cases it's generated as a temp file and never saved anywhere they could find it. You can't expect them to open Nautilus in /tmp and *maybe* find their file among the hundreds of other temp files from the day. Clunky, to say the least.

Zimbra supports HTML5 drag-and-drop reception, so I worked up a prototype idea. Jasper and Federico were kind enough to help me with the drag-and-drop mechanics of GTK, and we're testing this concept now. When a temp file is generated and needs to be attached to email from LibreOffice (and other applications, if it pans out), a popup window appears on the left edge with a thumbnail of the document. The end user can then grab the document and drag it into their composer window. In the shot below, the user has selected the option to "Email as PDF"; the popup appears with a thumbnail, and the Zimbra composer window opens (PURPLE). The user then drags the file into the composer (RED).



In the coming days, I'll be testing this concept and soliciting user feedback.  If it seems to be working, I'll connect it to the rest of our applications that previously auto-attached files to Evolution. 

Next up for me is creating LDAP groups and then getting Alfresco to auto-provision users and put people into those groups. We're at the breaking point around here of maintaining documents only on a file system, and we need a management system. We'll be pilot testing this in IT to see how it works.

Other issues: a 64-bit HP thin client is sitting here for me to test, various portal changes have been requested, continued Zimbra troubleshooting and support, and various Java/Flash/Firefox upgrades.

gtkmm 3.8

I didn’t get around to blogging about gtkmm 3.8 when I released it last month, partly because we had to fix a crasher bug and do a gtkmm .1 release before people noticed.

Anyway, just a little while after GNOME 3.8 was released, we managed to release gtkmm 3.8 and glibmm 2.36. There is quite a bit of work in these, almost all by José Alburquerque and Kjell Ahlstedt, as well as the usual few days of last-minute work by me to wrap remaining new API.

I spend very little time on glibmm and gtkmm these days, and don’t have much motivation to change that.

There's also a change expected in glib 2.38 that will break many installed applications that use gtkmm. We successfully begged the glib developers to add a special case for us in glib 2.36, but we have not found any way to avoid this for 2.38. So far our best option seems to be to do a new parallel-installed gtkmm (and glibmm) ABI, leaving the old one (broken with glib 2.38) behind, at least allowing applications to be changed slightly and then rebuilt.

Personally, I have no great incentive to go through that pain.

Votes for talks at open source conferences

I’ve never been a fan of voting for talks, because it tends to be poorly implemented under the guise of democracy. Of course it’s easy for me to talk, I’ve never organized anything at that scale.

I'll give two examples of why I feel this way, one of which triggered today's blog post.

First off, my colleague Marek submitted a talk to DjangoCon. The talk was about how to use feat (a toolkit we wrote for live transcoding) to serve Django pages, but in such a way that they can use Deferreds to remove the concurrency bottleneck of “1 request at a time” per process running Django.

Personally, this is one of the most irritating design choices of Django – from the ground up it was built synchronously (which could have been fine in most places). But the fact that, when you get a request, you always have to respond to it synchronously (and block every other request for that process in the meantime) is a design choice that could easily have been avoided.

In our particular use case, it was really painful. If our website has to make an API request to some other service we don't control, which can easily take 30 seconds, our process throughput suddenly becomes 2 pages per minute. All the while, the server is sitting there waiting.

Yes, you can throw RAM at the problem and start 30 times more processes; or thread out API requests; or farm them out to Celery and do some back-and-forthing to see when the call's done. Or do any number of other workarounds for a fundamental design choice.

Since we like Twisted, we preferred to throw Twisted at the problem, and ended up with something that worked.

Anyway, that’s a lot of setup to explain what the talk was about. Marek submitted the talk to DjangoCon, and honestly I didn’t expect it to get much traction because, when you’re inside Django, you think like Django, and you don’t really realize that this is a real problem. Most people who do realize it switch away to something else.

But to my surprise, Marek’s talk was the most-voted talk! I wish I could link to the results, but of course that vote site is no longer online.

I guess I expected that would mean he’d be presenting at DjangoCon this year. So I asked him today when his talk was, and he said “Oh that’s right. I did not get accepted.”

Well, that was a surprise. Of course, the organising committee reserves the right to decide on their own – maybe they just didn't like the talk. But if you ask your potential visitors to vote, you'd expect the most-voted talk to make it onto the schedule, no?

The feedback Marek got from them was surprising too, though. Their first response was that his talk was too similar to another talk, titled “How to combine JavaScript & Django in a smart way”. Now, I'm not a JavaScript expert, but from the title alone I can already tell that it's very unlikely these two talks have many similarities beyond the word ‘Django’.

After refuting that point, their second reason was that they wanted more experienced speakers (but they hadn't asked Marek about his experience), and their third reason was that the talk had been given at previous editions of DjangoCon US/EU. (It's unclear whether they meant his talk or the JavaScript one, but Marek's definitely wasn't, and we couldn't find any mention of the other talk at previous conferences. I'm also not sure why that even matters one way or the other. This email thread was in Polish, so I have to rely on Marek's interpretation of it.)

Personally, my reaction would have been to complain to the organizers or the Django maintainers. Marek's phlegmatic attitude was much better though – after such an exchange, he simply doesn't want to have anything to do with the conference.

He’s probably right – it’s hard to argue with someone who doesn’t want to invite you and is lying about the reasons.

The second example is BCNDevCon, a great conference here in Barcelona, organized by a guy who used to work at Flumotion and whom I have enormous respect for. I've never seen anyone create such a big conference in so little time.

He believes strongly in the democratic aspect, and as far as I can tell constructs the schedule solely based on the votes.

Sadly I didn't go to the last one, and the reason is simply that I felt the talks that made it were too obviously corporate. A lot of talks were about Microsoft products, and you could tell that they won votes because people's coworkers voted for their talks. I'm not saying that's necessarily wrong – given that he worked at our company and has friends here, I'm sure people working here who presented at his conference have also done vote tending. It's natural to do so. But there should be a way to balance that out.

I think the idea of voting is good, but implementation matters too. Ideally, you would only want the people who are actually going to show up to vote. I have no idea how you can ensure that, though. Do you ask people to pre-pay? Do you ask them to commit to paying if at least 50% of their votes make it into the final schedule, Kickstarter-style?

These two examples are on opposite extremes of voting. One conference simply disregards completely what people vote for; if I had voted or bought a ticket, I would feel lied to. Why waste the time of so many people? The other conference puts so much stock in the vote that I feel the final result was strongly affected. I seriously doubt all those Windows 8 voters actually showed up.

Does anyone have good experiences with conference voting that did work? Feel free to share!

DLNA, client side

While we were busy fixing the server and rendering side of DLNA with Rygel, the guys at Intel OTC have been fixing the client side of DLNA with something called dLeyna, a nice set of APIs to access and manipulate UPnP-AV and DLNA servers / renderers (such as Rygel, of course), so you can easily add DLNA support to your applications – including the obvious server browsing and renderer remote control, but also the less obvious, like media pushing, synchronization and server-side playlists. They have already prepared a cool set of demos (for example, a Firefox extension to send images from your browser to your TV).

So why is this better than using GUPnP for this? Let me show you some examples.

Controlling a renderer

Not much code to see here; you get the usual suspects of player-control functions such as start, stop, etc., as well as methods to query a device's capabilities, as there are a lot of optional things on UPnP devices.

Uploading

Well, say you want to upload a file to a server. The code for doing that in GUPnP is available in gupnp-tools, and it's not exactly pretty. With dLeyna, on the other hand, it's a few-liner:

#!/usr/bin/env python
import sys
import mediaconsole as mc

u = mc.UPNP()
# Look up the media server by its UPnP UDN (first argument)...
d = u.server_from_udn(sys.argv[1])
# ...and upload the given file to it.
d.upload_to_any(sys.argv[2], sys.argv[3])

In DLNA land, this is called “+UP+”.

Playing a file

Or maybe you want to show some media file from your device or app on a DLNA-capable TV? Korva shows how you can do that with plain GUPnP, again with lots of lines of code. dLeyna provides a nice and clean solution:

#!/usr/bin/env python
import sys
import rendererconsole as rc

m = rc.Manager()
# Look up the renderer by its UPnP UDN (first argument)...
d = m.renderer_from_udn(sys.argv[1])
# ...host the local file on dLeyna's internal HTTP server...
uri = d.host_file(sys.argv[2])
# ...then stop whatever is playing and play our file instead.
d.stop()
d.open_uri(uri)
d.play()

And this is called “+PU+” in DLNA land.

Behind the scenes, this is all GUPnP of course. Currently it consists of two D-Bus services, dleyna-renderer-service and dleyna-server-service, although other IPC mechanisms are on their way. These two services scan the network for available devices and make them available through a set of D-Bus interfaces, relieving you of the need to search for devices yourself (and, with that, providing a device cache that spares the network from UDP packet bursts), to introspect the devices for supported capabilities and methods, and so on.

If you execute the push script from above, you get a Python wrapper for the com.intel.dLeynaRenderer.Manager D-Bus interface, which then locally looks for the D-Bus path matching the given UPnP UDN and returns a Python object implementing the com.intel.dLeynaRenderer.PushHost and com.intel.dLeynaRenderer.RendererDevice interfaces.
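
You can also talk to those services directly. A minimal sketch with dbus-python – note that the well-known bus name, object path and GetRenderers method here are my assumptions about the dLeyna services, so double-check them against the service introspection:

import dbus

bus = dbus.SessionBus()
# Assumed well-known name and object path of the renderer service.
manager = dbus.Interface(
    bus.get_object('com.intel.dleyna-renderer', '/com/intel/dLeynaRenderer'),
    'com.intel.dLeynaRenderer.Manager')
for path in manager.GetRenderers():
    print(path)  # one D-Bus object path per renderer found on the network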

Then we temporarily host the file given on the command line on dLeyna's internal HTTP server, stop the currently running playback (which translates to an AVTransport:Stop SOAP call), send the URI to the renderer (AVTransport:SetAVTransportURI) and, last but not least, start the playback (AVTransport:Play), which in the end starts the HTTP streaming from dLeyna's internal HTTP server to (Rygel's) renderer.

And it doesn’t stop at the application level, there’s even integration with HTML5 through cloudeebus and cloud-dLeyna.

As a side note: you might ask how this relates to Grilo's UPnP-AV support or Korva. This is a very valid question. Grilo and Korva are doing very specific tasks, while dLeyna aims to be a more complete SDK. It should be quite easy, for example, to port Grilo's UPnP-AV support to dLeyna.

May 06, 2013

summing up 2

a more or less weekly digest of juicy stuff

Information diet weekend

As a slight sequel to my “feed reading is an open web problem” post, so far this weekend I have taken the following information diet steps:

RSS feeds: 610→339 (and counting).

Based on Google's stats, I'd probably read about a million feed items in Reader. This is just too much. The complaints about attention span in this piece and in The Information Diet [1] rang very true. Reader is a huge part of that problem for me. (Close friends will also note that I've been mostly off gchat and twitter during the work day since I started the new job, and that's been great.) So I've spent time, and will spend more time soon, pruning this list.

National news feeds: lots→~0; weekly paper news magazines: 0→2; local news feeds: small #→large #?

My friend Ed put the idea in my head a long time ago that national news is not very useful. It riles the passions, but otherwise isn't helpful: you're not making the world a better place as a result of knowing more, and you're not making yourself happier either [2]. So you're better off reading much less national political news, and much less frequently: hence the two new on-paper subscriptions to weekly news magazines.

Besides allowing you to get off the computer(!), the time saved can also be used to focus on things that either make your life better (e.g., happier) or that give you actionable information to resolve problems. To tackle both of those needs, I’d like to curate a set of local news feeds. I’ll be blogging more about this later (including what I’m already reading), but suggestions are welcome. I suspect that will make me much happier (or at least less angry), and present opportunities to actually do things, in ways that the national news obviously never can.

Moved from reader→feedly.

The impending shutdown of Reader was obviously the catalyst for all this change; feedly seems not perfect but pretty solid. I continue to keep an eye on newsblur (still a variety of issues) and feedbin.me (no mature Android client yet), since feedly is still (1) closed source and (2) has no visible business model – leaving it susceptible to the same Reader shutdown problem.

"Two young children picket for the ILGWU carrying placards including 'I Need a Healthy Diet!' outside the Kolodney and Myers Employment Office" by the Kheel Center at Cornell University, used under CC-BY 2.0.
“Two young children picket for the ILGWU carrying placards including ‘I Need a Healthy Diet!’ outside the Kolodney and Myers Employment Office” by the Kheel Center at Cornell University, used under CC-BY 2.0.

Steps still to come:

Separate the necessary from the entertaining

Joe pointed out to me that all news sources aren’t equal. There are feeds you must read in a timely manner (e.g., for me right now, changes in work-critical Wikipedia talk pages), and feeds that can be sampled instead. The traditional solution to this is folders or categories within the same app. But we’re starting to see apps that are optimized for the not-mission-critical entertainment feed stream (Joe specifically recommended Currents). I’d like to play with those apps, and use one of them to further prune my “serious feeds” list.  Recommendations happily accepted.

Improve publication

I do want to participate, in some small way, in the news stream, by creating a stream of outbound articles and commentary on them. I never used Reader’s features for this, because of the walled garden aspect. Many of our tools now make it easy to share out to places like Twitter and Facebook, but that means I’m contributing to the problem for my friends, not helping solve it. I’d like my outbound info to be less McDonalds and more Chez Panisse :) The tools for that aren’t quite there, but this morning I stumbled across readlists, which looks like it is about 90% something I’ve been looking for forever. I’ll keep keeping an eye out, so again: good suggestions for outbound curation tools happily accepted.

What else?

I hate the weasely “ask your audience” blog post ending as much as anyone, but here, I have a genuine curiosity: what else are friends doing for their info diets? I want to eventually get towards the “digital sabbath” but I’m not there yet; other tips/suggestions?

  1. capsule book review: great diagnosis of the problem, pretty poor recommendations for solutions
  2. It’s pretty much a myth that reading the news makes you a better voter: research shows even supposedly high-information voters have already decided well before they read any news, and if for some reason you’re genuinely undecided, you’re better off reading something like ballotpedia than a streaming bunch of horse-race coverage.

May 05, 2013

Introducing gocl, a gobject wrapper to OpenCL

For the past few months I have been working on this project to bring OpenCL closer to GNOME technologies, and today I'm glad to make the first public announcement. For the uninformed reader, OpenCL is a framework and language for writing programs that execute across heterogeneous hardware like CPUs, GPUs, DSPs, etc. While not applicable to every piece of software, OpenCL can unleash unparalleled performance and power efficiency on specific heavy algorithms like media decoding, cryptography, computer vision, big-data indexing and processing, physics simulation, graphics and image compositing, among others.

Gocl is a GLib/GObject based library that aims at simplifying the use of OpenCL in GNOME software. It is intended to be a lightweight wrapper that adapts OpenCL programming patterns and boilerplate, and exposes a simpler API that is familiar and comfortable to GNOME developers. Examples of such adaptations are integration with GLib’s main loop, exposing non-blocking APIs, GError based error reporting and full gobject-introspection support. It will also include convenience API to simplify code for the most common use patterns.
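
Since the library ships with full gobject-introspection support, one can already picture driving it from a language like Python. The sketch below is purely hypothetical: every Gocl class and method name in it is an assumption made for illustration, not the actual API. It is meant only to convey the programming model described above (main-loop integration, non-blocking calls, GError-based failures):

from gi.repository import GLib, Gocl  # the import name itself is an assumption

# Hypothetical names: grab a GPU context and build a trivial kernel
# asynchronously, letting failures surface as GError exceptions.
context = Gocl.Context.gpu_new()  # assumed constructor
program = context.create_program(
    "__kernel void seven(__global int *buf) { buf[get_global_id(0)] = 7; }")

loop = GLib.MainLoop()

def on_built(program, result):
    try:
        program.build_finish(result)  # assumed async/finish pair
    except GLib.Error as e:
        print("Build failed:", e.message)
    loop.quit()

program.build_async(on_built)  # non-blocking; completion arrives via the main loop
loop.run()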

Gocl started as part of the work and research we do at Igalia on HW acceleration; I decided to take a piece of it, clean it up and release it in a way that can be useful to others. OpenCL is gaining relevance and popularity as the number of implementations and supported chips has grown significantly in recent years. Soon we are going to see OpenCL running everywhere, and GNOME technologies should be ready to take advantage of it.

Full gtk-doc documentation is available, and source code is hosted at my GitHub account.

The API is very simple and limited at this stage, and should be considered very unstable. Although I’m not currently working on it full time, I do have kind of a roadmap for the API and features that I will prioritize:

  • Completing the missing asynchronous API
  • Adding API to query available OpenCL extensions
  • Providing API to expose the cl_khr_gl_sharing extension, for object sharing with OpenGL

You are welcome to suggest/request features that you would like to see in Gocl, as well as to propose changes to the API. The GitHub issue tracker at the project’s page is available for that, and also for reporting bugs.

So, do you know of a specific piece of software in GNOME that could potentially benefit from OpenCL? I would love to hear about it.

At Igalia, as part of our strong commitment to make the Web better and faster, we are already looking into ways of applying OpenCL to WebKit and its related technologies, and I’m personally interested in that line of work.

GNOME Music: Reaching the end of phase one.

TL;DR

We can now browse our albums, artists and songs (no playlists yet) and play them :D

Details:

GNOME Music application development is reaching the end of phase one (out of three).

This phase consists of:

  • Set basic infrastructure (done)
  • Implement Grilo Querying (done) 
  • Implement Albums View (done)
  • Implement Songs View (done)
  • Implement Artist View (done)
  • Implement Playback support (done)
  • Clean up and port to Glade (in progress)

If you feel like hacking along, please don’t hesitate to help out.

Thanks to everybody who has been helping out.

Thanks to Guillaume Quintard and the potential SoC students for the porting to Glade and for fixing some of the UI issues. Also, Vadim Rutkovsky started working on some unit tests (which kicks ass).

And now to leave you with some screenshots…

Screenshot from 2013-05-05 10:09:16 Screenshot from 2013-05-05 10:09:45 Screenshot from 2013-05-05 10:10:32

First thoughts on RedHat OpenShift

I’m looking for a PaaS provider that isn’t going to cost me very much (or anything at all) and supports Flask and PostGIS. Based on J5’s recommendation in my blog the other day, I created an OpenShift account.

A free OpenShift account gives you three small gears1, which are individual containers you can run an app on. You can either run an app on a single gear or have it scale to multiple gears with load balancing. You then install the components you need, which OpenShift refers to by the pleasingly retro name of cartridges. So for instance, Python 2.7 is one cartridge and PostgreSQL is another. You can either install all cartridges on one gear or on separate gears based on your resource needs2.

You choose your base platform cartridge (e.g. Python-2.6) and you optionally give it a git URL to do an initial checkout from (which means you can deploy an app that is already arranged for OpenShift very fast). The base cartridge sets up all the hooks for setting up after a git push (you get a git remote that you can push to in order to redeploy your app). The two things you need are a root setup.py containing your pip requirements, and a wsgi/application file which is a Python blob containing a WSGI object named application. For Python it uses virtualenv and all that awesome stuff. I assume for node.js you’d provide a package.json and it would use npm, similarly RubyGems for Ruby etc.
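
To make that concrete, a minimal wsgi/application file for a Flask app could look roughly like this (a sketch: the myapp module name is made up, and the path handling is an assumption rather than anything OpenShift mandates):

#!/usr/bin/python
# wsgi/application -- OpenShift looks for a WSGI callable named "application".
import os
import sys

# Make the git checkout importable; OPENSHIFT_REPO_DIR is set on the gear.
sys.path.insert(0, os.environ.get("OPENSHIFT_REPO_DIR", "."))

# A Flask app object is itself a WSGI callable, so re-exporting it is enough.
from myapp import app as application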

There’s a nifty command line tool written in Ruby (what happened to Python-only Redhat?) that lets you do all sorts of cloud managementy stuff, including reloading cartridges and gears, tailing app logs and SSHing into the gear. I think an equivalent of dbshell based on your DB cartridge would be really useful, but it’s not a big deal.

There are these deploy hooks you can add to your git repo to do things like create your databases. I haven’t used them yet, but again it would make deploying your app very fast.

There are also quickstart scripts for deploying things like WordPress, Rails and a Jenkins server onto a new gear. Speaking of Jenkins there’s also a Jenkins client cartridge which I think warrants experimentation.

So what’s a bit crap? Why isn’t my app running on OpenShift yet? Basically because the available cartridges are a little antique. The supported Python is Python 2.6, which I could port my app to; there are also community-supported 2.7 and 3.3 cartridges, so that’s fine for me (TBH, I thought my app would run on 2.6) but maybe annoying for others. There is no Celery cartridge, which is what I would have expected, ideally so you can farm tasks out to other gears; although you apparently can use it, there’s very little documentation I could find on how to get it running.

Really though, the big kick in the pants is that there is no cartridge for Postgres 9.2/PostGIS 2.0. There is a community cartridge you can use on your own instance of OpenShift Origin, but that defeats the purpose. So either I’m waiting for new Postgres to be made available on OpenShift or backporting my code to Postgres 8.4.

Anyway, I’m going to keep an eye on it, so stay tuned.

  1. small gears have 1GB of disk and 512MB of RAM allocated
  2. I think if you have a load balancing (scalable) application, your database needs to be on its own gear so all the other gears can access it.

May 04, 2013

Writing and deploying a small Firefox OS application

For the last week I’ve been using a Geeksphone Keon as my only phone. There’s been no cheating here, I don’t have a backup Android phone and I’ve not taken to carrying around a tablet everywhere I go (though its use has increased at home slightly…) On the whole, the experience has been positive. Considering how entrenched I was in Android applications and Google services, it’s been surprisingly easy to make the switch. I would recommend anyone getting the Geeksphones to build their own OS images though; the shipped images are pretty poor.

Among the many things I missed (Spotify is number 1 in that list btw), I could have done with a countdown timer. Contrary to what the interfaces of most Android timer apps would have you believe, it’s not rocket-science to write a usable timer, so I figured this would be a decent entry-point into writing mobile web applications. For the most part, this would just be your average web-page, but I did want it to feel ‘native’, so I started looking at the new building blocks site that documents the FirefoxOS shared resources. I had elaborate plans for tabs and headers and such, but it turns out all I really needed was the button style. The site doesn’t make it hugely clear that you’ll actually need to check out the shared resources yourself, which can be found on GitHub.

Writing the app was easy, except perhaps for getting things to align vertically (for which I used the nested div/”display: table-cell; vertical-align: middle;” trick), but it was a bit harder when I wanted to use some of the new APIs. In particular, I wanted the timer to continue to work when the app is closed, and I wanted it to alert you only when you aren’t looking at it. This required use of the Alarm API, the Notifications API and the Page Visibility API.

The page visibility API was pretty self-explanatory, and I had no issues using it. I use this to know when the app is put into the background (which, handily, always happens before closing it. I think). When the page gets hidden, I use the Alarm API to set an alarm for when the current timer is due to elapse to wake up the application. I found this particularly hard to use as the documentation is very poor (though it turns out the code you need is quite short). Finally, I use the Notifications API to spawn a notification if the app isn’t visible when the timer elapses. Notifications were reasonably easy to use, but I’ve yet to figure out how to map clicking on a notification to raising my application – I don’t really know what I’m doing wrong here, any help is appreciated! Update: Thanks to Thanos Lefteris in the comments below, this now works – activating the notification will bring you back to the app.

The last hurdle was deploying to an actual device, as opposed to the simulator. Apparently the simulator has a deploy-to-device feature, but this wasn’t appearing for me and it would mean having to fire up my Linux VM (I have my reasons) anyway, as there are currently no Windows drivers for the Geeksphone devices available. I obviously don’t want to submit this to the Firefox marketplace yet, as I’ve barely tested it. I have my own VPS, so ideally I could just upload the app to a directory, add a meta tag in the header and try it out on the device, but unfortunately it isn’t as easy as that.

Getting it to work well as a web-page is a good first step, and to do that you’ll want to add a meta viewport tag. Getting the app to install itself from that page was easy to do, but difficult to find out about. I think the process for this is harder than it needs to be and quite poorly documented, but basically, you want this in your app:

if (navigator.mozApps) {
  // Check whether the app that owns this page is already installed
  var request = navigator.mozApps.getSelf();
  request.onsuccess = function() {
    if (!this.result) {
      // Not installed yet: trigger installation from the app manifest
      request = navigator.mozApps.install(location.protocol + "//" + location.host + location.pathname + "manifest.webapp");
      request.onerror = function() {
        console.log("Install failed: " + this.error.name);
      };
    }
  };
}

And you want all paths in your manifest and appcache manifest to be absolute (you can assume the host, but you can’t have paths relative to the directory the files are in). This last part makes deployment very awkward, assuming you don’t want to have all of your app assets in the root directory of your server and you don’t want to setup vhosts for every app. You also need to make sure your server has the webapp mimetype set up. Mozilla has a great online app validation tool that can help you debug problems in this process.
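
If your server is Apache, for instance, setting that mimetype is a single directive in the server config or an .htaccess file:

AddType application/x-web-app-manifest+json .webapp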

Timer app screenshot

And we’re done! (Ctrl+Shift+M to toggle responsive design mode in Firefox)

Visiting the page will offer to install the app for you on a device that supports app installation (i.e. a Firefox OS device). Not bad for a night’s work! Feel free to laugh at my n00b source and tell me how terrible it is in the comments :)

May 03, 2013

Hi~ Planet GNOME

Hello! Nice to meet you. I’m so glad to introduce myself here.
My name is Joone. I work for Intel and have been contributing to WebKitGtk+ since 2010.

Actually, I’m a lazy blogger, but I will try to write about WebKitGtk+ and Tizen. Also, I’m involved in the GNOME Korea community so you will find GNOME Korea news in my new posts.

Thank you!

2nd meeting of wfs-india to promote FOSS among women in India

We will be having our second meeting on Tuesday 7 May at 9pm IST at #wfs-india channel on irc.freenode.net.

The agenda includes -

  • make plans for an event
  • divide the tasks among ourselves

Please join us if you are interested.

The minutes of the first meeting are at – http://wfs-india.dreamwidth.org/832.html


WebKit Contributors Meeting 2013

It turns out I’m writing this post at 6:00 in the morning from a hotel instead of doing it at a more reasonable time from my comfy home or a nice cafeteria in Staines. That’s already quite a new thing by itself, and the reason is not that I became crazy or something, but the fact that I’m completely jet-lagged in California right now in order to attend my second WebKit Contributors Meeting (my first time was in 2011), this time as part of the Samsung team in the UK R&D center, together with my mate Anton Obzhirov.

With regard to that, it has been a very interesting experience so far where I could meet new people I still haven’t had the chance to see in real life yet (e.g. my mates from other Samsung R&D centers or some guys from Apple I didn’t have the chance to meet in person before), as well as chat again with some friends and former mates that I haven’t seen for a while, such as Martin, Xan and Philippe from Igalia, Byungseon from LG, Nayan from Motorola or Gustavo from Collabora to mention some of them. It’s strange, and at the same time wonderful, how easily you can catch up on conversations with people that you barely see once a year (or even less) and mainly in conferences, and definitely one of my favourite parts of attending these kind of events, to be honest.

Also, from a less social point of view, I have to say I found the sessions I’ve attended so far very interesting, especially the one about “managing the differences between ports”, although the one about “build systems” was quite interesting too. I’m not sure how far we are yet in the WebKitGTK+ port from realistically switching to some kind of commonly agreed build system (cmake?), but at least it’s a good start to agree that it would be an interesting move, and some people are now pushing for it.

My only regret about this first day is that I missed Hyatt‘s talk about pagination due to some health issues I’m experiencing while in California, mostly due to the extremely hot and dry weather (anything over 25 Celsius is “unbearably hot” for me), which is causing me a little bit of cough, sore throat and fever, all well mixed with the jet lag to make it a perfect “welcome pack” for the meeting. Fortunately, I got some “interesting” medicines that seem to have relieved the pain a bit, and I could attend the rest of the sessions without much trouble, other than some occasional coughing. Not bad.

By the way, for those of you who were not lucky enough to attend the meeting but are anyway interested in the topics being discussed here, make sure you check the main TRAC page for the meeting, where you can also find transcripts for most of the sessions.

As for today, some more sessions will take place as well as a couple of hackathons, so I expect it to be very interesting too. I also hope I can find some time to work a bit on my patches to remove the nasty dependency on pango we have in WebKitGTK+ accessibility code, which is preventing us from having proper caret navigation in WebKit2GTK+ based browsers, as well as to discuss possible ways in which our lab could collaborate more actively upstream. Seems a promising day already!

Last (but not least), and in a completely unrelated and super-off-topic way, I would like to tell the world that I’m extremely happy that next week will be the end of my “lonely existence in the UK”, finally. After 4 months of living alone in Staines away from my family, with just some quick weekend trips home (every 2 weeks), I’m once and for all travelling on Thursday to my home town with a one-way plane ticket to do some final arrangements, put everything (family included!) in the car and travel to Santander, where we’ll be taking a ferry to Portsmouth (on the southern coast of England), from where we will just drive to Staines in order to start our new life, all together again.

It has been quite hard for us to live this way for so long, but I think in the end we managed to handle the situation quite well, and now it seems all our efforts are paying off because things are finally fitting into the right places: we have a lovely house in Staines, we have a place in a nearby public school for my oldest kid to start in September, most of the needed paperwork seems to be done and we have already moved all our stuff from Spain (lots of toys!), which is now waiting to be used in our new place.

I really can’t wait to live again in the noisy and chaotic atmosphere that two kids can so easily create around them. Even if that means it will probably drive me crazy every now and then and that I won’t sleep that well sometimes.

Yes. Even considering that.

Mono is life improvement for mobile developers

Being a developer myself, I’m constantly looking at how to improve my way of working. When it comes to mobile development, the best way to improve your life is by using Mono (Xamarin.iOS and Xamarin.Android).

That’s, in a nutshell, the talk I’ve given today at Apps City: an introductory tour on Xamarin.iOS and Xamarin.Android.

Slides are over here, though they’re very light on details and unlike my previous talk, I haven’t had time to annotate them.

mono-apps-city-2013

Auto-EDID Results [updated]

A couple of weeks ago I asked people to run a command which uploaded all their auto-EDID display profiles to me. This was a massive success with 1858 profiles being added to a large dataset. These were scanned by the cd-find-broken tool, and results plotted on my G+ page. As there’s been so much new data I’m updating the graphs:

edid-vendors

I’m actually using this data to make sure we show something sane in the client UIs. Some interesting vendors are not included, e.g.:

  • “System manufacturer” (3)
  • “To Be Filled By O.E.M.” (4)

edid-vendors-broken

This is a chart of vendors Doing It Wrong™ by including random data (or implausible data) as the display primaries.

edid-cmf

This shows what program created the Auto-EDID ICC profile. Unknown is probably a mixture of oyranos and also early versions of gnome-settings-daemon which didn’t set the extra metadata.

edid-vendors-noserial

Last graph, I promise. This shows a chart of all the vendors who do not populate the serial number in the EDID blob. I’ll explain why this is bad.

When we construct the device ID for colord, we use the vendor{-model}{-serial} as part of the key.  This allows you to use different ICC profiles even if you’ve got two “identical” external panels attached. Without the serial number, “lenovo-foo” looks the same as “lenovo-foo” and colord treats them as if they were the same panel. This sucks if the panels were not bought at the same time and thus don’t have identical backlight burn time. Ohh, and we can’t use the connection name (e.g. DVI-1) as it would suck if you had to reassign all your profiles if you moved the connector to DVI-2…
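
A sketch of the scheme (illustrative only, not colord’s actual code):

def make_device_id(vendor, model=None, serial=None):
    # vendor{-model}{-serial}: optional parts are simply left out
    parts = [p for p in (vendor, model, serial) if p]
    return "-".join(p.lower().replace(" ", "-") for p in parts)

print(make_device_id("Lenovo", "foo", "12345"))  # lenovo-foo-12345
print(make_device_id("Lenovo", "foo"))           # lenovo-foo -- a second identical panel collides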

This isn’t always a disaster: Laptops. We only need the make and model to ensure this is unique on the system as you can’t typically have two internal panels installed. This explains the Lenovo, Samsung, Dell and Apple entries I think, so don’t get out the pitchforks just yet. Unfortunately there’s nothing in the ICC profile that says “this is a laptop” so we can’t be more selective and hence this graph isn’t super useful. But, even on laptops, vendors should really be doing something semi-sane with the serial number, even if it’s just the batch number.

A new colord 0.1.34 was released this week. Thanks again to everyone who uploaded profiles.

A FOSS Devanagari to Bharati Braille Converter

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a Type installation at Amar Jyoti School; an institute for the visually-impaired in Delhi. You can read more about that on her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find were expensive proprietary software, or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

Now, after tons of research and design/coding work, we are proud to announce the first release of our Devanagari to Bharati Braille converter! You can read more about the converter here, and download the source code on Github.

If you know anyone who might find this useful, please tell them about it!

May 02, 2013

#GNOMEPERU2013

Hello GNOME Planet!

I have recently organized the pictures and videos we got from the GNOME FEST PERU 2013 event. We presented the GNOME community, the programs that GNOME offers, coding GNOME extensions, Gnoduino and more, according to the plan.

GNOMEFESTPERU2013

I share with you the pictures and Harlem Shake we did :)

harlem

P.S.: We are planning another event for GNOME, and this time USIL University is going to be the place! I coordinated with Fedora and IBM people in order to organize a great event.


Filed under: GNOME Tagged: GNOME FEST PERU 2013, Julita Inca, Julita Inca Chiroque

Geary crowdfunding: What went wrong?

OMG! Ubuntu ran a postmortem yesterday: Why Did Geary’s Fundraiser Fail?  It’s a question we’ve been asking ourselves at Yorba, obviously.  Quite a number of people have certainly stepped forward to offer their opinions, in forums and comment boards and social media and via email.  OMG!’s article is the first attempt I’ve encountered at a complete list of theories.

Although I didn’t singlehandedly craft and execute the entire Geary crowdfunding campaign, I organized it and held its hand over those thirty days.  Ultimately I take responsibility for the end result.  That’s why I’d like to respond to OMG’s theories as well as offer a few of my own.

Some preliminary throat-clearing

Let me state a few things before I tackle the various theories.

First, it’s important to understand that the Geary campaign was a kind of experiment.  We wanted to know if crowdfunding was a potential route for sustaining open-source development.  We weren’t campaigning to create a new application; Geary exists today and has been under development for two years now.  Unlike OpenShot and VLC, we weren’t porting Geary to Windows or the Mac; we wanted to improve the Linux experience.  And we had no plans on using the raised money as capital to later sell a product or service, which is the usual route for most crowdfunded projects.  Our pitch was simply this: donate money so we can make Geary on Linux even better than it is today.

Also, we didn’t go into this campaign thinking Yorba “deserved” the money.  We weren’t asking for the community to reward us for what we’ve done.  We were asking the community to help fund the things we wanted to develop and share.

Nor did we think that everyone in the FOSS world needed to come to our aid.  Certainly we would’ve appreciated that, but our goal was to entice current and prospective Geary users to open their wallets and donate.

I hope you keep that in mind as I go through OMG!’s list of theories.

OMG’s possible reasons

“$100,000 was too much”

This was by far the most voiced complaint about the campaign.  It was also the most frustrating because of its inexact nature — too much compared to what?  When asked to elucidate, I either never heard back from the commenter or got the hand-wavey response “It’s just too much for an email client.”

It’s important to point out — and we tried! — that Yorba was not asking you for $100,000; we were asking the community for a lot of small donations.  Your email account is your padlock on the Internet.  Just about every online account you hold is keyed to it.  It’s also the primary way people and companies will communicate with you on the Internet.  Is a great email experience — a program you will use every day, often hours every day — worth $10, $25, $50?

Another point I tried to make: How much did it cost to produce Thunderbird?  Ten thousand, fifteen thousand dollars?  According to Ohloh, Thunderbird cost $18 million to develop.  Even if that’s off by a factor of ten (and I’m not sure it is) we were asking for far less.  (Incidentally, Ohloh puts Geary’s current development cost at $380,000.  I can attest that’s not far off the mark.)  Writing a kickin’ video game, a Facebook clone, or an email client takes developer time, and that’s what costs money, whether the software is dazzling, breathtaking, revolutionary, disruptive, or merely quietly useful in your daily routine.

The last Humble Indie Bundle earned $2.65 million in two weeks.  Linux users regularly contribute 25% of the Humble Haul.  Is that too much for a few games?  Not at all.

“The Proposition Wasn’t Unique”

I agree, there are a lot of email client options for Linux.  What I don’t see are any that look, feel, or interact quite like Geary, and that’s one reason Yorba started this project.

We were in a bind on this matter.  We wanted the Geary campaign to be upbeat, positive, and hopeful.  We saw no need to tear down other projects by name.  We preferred to talk about what Geary offers and what it could grow into rather than point-by-point deficiencies in other email clients.  (Just my generic mention in the video of Geary organizing by conversations rather than “who-replied-to-whom” threading was criticized.)

That there are so many email clients for Linux does not necessarily mean the email problem is “solved”.  It may be that none of them have hit on the right combination of features and usability.  That jibes with the positive reaction we’ve received from many Geary users.

“People Consider it an elementary App”

I honestly don’t recall anyone saying this.  If this notion stopped anyone from contributing, it’s news to me.

To clarify a couple of points: OMG! states there are a few elementary-specific features in Geary.  That’s not true.  The cog wheel is not elementary’s.  We planned that feature before writing the first line of code.  Geary once had a smidgen of conditionally-compiled code specific to elementary, but it was ripped out before 0.3 shipped.

elementary has been a fantastic partner to work with and they supported Yorba throughout the campaign.  Still, Yorba’s mission is to make applications available to as many free desktop users as possible, and that’s true for Geary as well.

“They Chose the Wrong Platform”

Why did we go with Indiegogo over Kickstarter?  We had a few reasons.

A number of non-U.S. users told us that Paypal was the most widely-available manner of making international payments.  Kickstarter uses Amazon Payments in the United States and a third-party system in the United Kingdom.  We have a lot of users in continental Europe and elsewhere, and we didn’t want to make donating inconvenient for them.

Unlike Indiegogo, Kickstarter vets all projects, rejecting 25% of their applications.  And one criterion for Kickstarter is that a project must be creating something new.  We’re not doing that.  One key point we tried to stress was that Geary was built and available today.  (We even released a major update in the middle of the campaign.)  There is no guarantee they would’ve accepted our campaign.

Another point about Kickstarter: a common complaint was that we should’ve done a flexible funding campaign (that is, we take whatever money is donated) rather than the fixed funding model we elected to run, where we must meet a goal in a time period to receive anything.  Kickstarter only allows fixed funding.  A few people said we should’ve done a flexible funding campaign and then said we should’ve used Kickstarter.  It doesn’t work that way.

“Not Enough Press”

I agree with OMG!, we received ample press, but more would not have hurt.  The Tech Crunch article was a blessing, but what we really needed was more coverage from the Ubuntu-specific press.  Even if that happened, I don’t feel it would’ve bridged the gap we needed to cross to reach the $100,000 mark.

A couple OMG! missed

There are two more categories that go unmentioned in the OMG! article:

“You Should Improve Thunderbird / Evolution / (my favorite email app)”

We considered this before launching Geary.  What steered us away from this approach was our criteria for conversations over threading, fast search, and a lightweight UI and backend.

Thunderbird is 1.1 million lines of code.  It was still under development when we started Geary.  We attended a UDS meeting where the Thunderbird developers were asked point-blank about making its interface more like Gmail’s (that is, organizing by conversations rather than threads).  The suggestion was flatly rebuffed: use an extension.  For us, that’s an unsatisfying answer.

Evolution is 900,000 lines of code, and includes many features we did not want to take on.  Its fifteen years of development also bring with it what Federico Mena Quintero succinctly calls “technical debt”.

(Even if you quibble with Ohloh’s line-counting methodology, I think everyone can agree Evolution and Thunderbird are Big, Big Projects.)

In both cases, we would want to make serious changes to them.  We would also want to rip features out in order to simplify the interface and the implementation.  Most projects will flatly deny those kinds of patches.

In comparison, Geary stands at 30,000 lines of code today.

“I Use Web Mail.  No One Uses a Desktop Client Anymore”

Web mail is convenient and serves a real need.  Web mail is also, with all but the rarest exceptions, closed source.

Think of it this way: you probably don’t like the idea of installing Internet Explorer on your Linux box.  If you do, you probably would at least like to have an open-source alternative.  (Heck, even Windows users want a choice.)  Web mail locks you out of alternatives.  People are screaming about Gmail’s new compose window.  What can they do about it?  Today they can temporarily disable it.  Some time soon, even that won’t be available.

Consider the astonishingly casual way Google has end-of-lifed Google Reader.  Come July 31st, Google Reader is dust.  I don’t predict Gmail is going away any time soon — it’s too profitable — but every Gmail user should at least have a fallback plan.  And if Gmail did go away, Google would take with it all that code.  (This is why Digg, Feedly, and others are rushing to create Google Reader lookalikes rather than forking what exists today.)  Not so with open-source email clients.  That’s why asking Yorba to improve Thunderbird or Evolution is even askable.  Yorba improving Gmail?  Impossible.

That’s the pragmatic advantage of open-source over closed: code never disappears.  Even if you change your email provider, you’re not stuck relying on your new provider’s software solution, if they even have one.

The principled advantage of free software is that you’re supporting open development for applications that don’t carry riders, waivers, or provisos restricting your use of it.

My theory

That’s the bulk of the criticism we received over the course of the campaign.  However, I don’t think any or all get to the heart of what went wrong.  Jorge Castro echoes my thinking:

Lesson learned here? People don’t like their current email clients but not so much that they’re willing to pay for a new one.

All I’d add is that over one thousand people were willing to donate a collective sum of $50,000 for a new email client.  Let’s say Jorge is half-right.

I don’t intend this post to be argumentative, merely a chance to air my perspective.

Next time I’ll talk about the lessons we learned and offer advice for anyone interested in crowdfunding their open-source project.

May 01, 2013

Tue 2013/Apr/30

  • Report from the GTK+ Hackfest

    Last week we had the GTK+ Hackfest in the OLPC office in Cambridge.

    My intentions for the hackfest were to finish the merge of GtkPlacesSidebar into the master branch — it's a new public widget that will show up in GTK+ 3.10.

    GtkPlacesSidebar

    Over the past months I have worked on finishing the details of the sidebar: merging all the code from Nautilus, polishing the API with the help of the GTK+ team, writing reference documentation. During the hackfest I worked on some of the last user-visible details in the sidebar, particularly the way drag-and-drop feedback gets shown.

    After I pushed the merge into GTK+, Cosimo gave me the green light for merging this into Nautilus, and so the thing is done now! It feels good to have shared code between Nautilus and GtkFileChooser at last.
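
    For the impatient, here is a minimal sketch of what using the widget from Python through introspection should look like once GTK+ 3.10 is out (the names are as I currently understand them; the API may still shift):

    from gi.repository import Gtk

    win = Gtk.Window(title="Places")
    sidebar = Gtk.PlacesSidebar()
    sidebar.set_show_desktop(True)  # see the pending-items list below

    def on_open_location(sidebar, location, flags):
        # location is a GFile pointing at the place the user activated
        print("Open:", location.get_uri())

    sidebar.connect("open-location", on_open_location)
    win.add(sidebar)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()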

    Some things are pending:

    • In the file chooser, when you click on the Trash item in the sidebar, you get an error message saying that "trash:/// is not a local file system". Even though files in the trash live in your normal Unix file system, GIO says that trash:/// is not native and doesn't have a local path associated with it. We have to see how to get GIO/GVFS to mount the trash into a FUSE mount automatically.

    • There is still an ugly gtk_places_sidebar_set_show_desktop() API. I'll remove it and turn that into a GtkSetting, so that the sidebar can automatically adjust to the surrounding environment's policy about whether to show the Desktop folder or not (e.g. show it for XFCE, don't show it for Gnome).

    • Add a GtkSetting for showing XDG directories (Music, Photos, etc.) or not. XDG doesn't want to show them by default, and prefers to let users bookmark those folders directly; Gnome does.

    • Once GTK+ gets an API for notifications, the places sidebar should be able to notify you when a volume is being mounted (and is taking a long time to do so). Right now that code is disabled; it came from Nautilus, and it pretty much uses libnotify directly.

    Meanwhile in Cambridge

    For me going to the Boston area is a treat. It's a lovely city, with good public transportation, good restaurants, and with a manageable size.

    I had a chance to meet Steve Branam, of the fantastic Close Grain blog — a fellow woodworker and software developer. Steve started a hand-tool woodworking school a while ago, so if you are in the Boston area and are interested in learning, shoot him an email.

    Steve Branam and Federico

    Also, thanks to my friends Alán and Dori, for letting me crash at their house and keeping me well fed :)

April 30, 2013

PyGObject 3.9.1 released

Time for the first PyGObject release for GNOME 3.9.x! This release brings performance optimizations (thanks to Daniel Drake), quite a lot of internal code cleanup, and various bug fixes.

Thanks to all contributors!

  • gtk-demo: Wrap description strings at 80 characters (Simon Feltman) (#698547)
  • gtk-demo: Use textwrap to reformat description for Gtk.TextView (Simon Feltman) (#698547)
  • gtk-demo: Use GtkSource.View for showing source code (Simon Feltman) (#698547)
  • Use correct class for GtkEditable’s get_selection_bounds() function (Mike Ruprecht) (#699096)
  • Test results of g_base_info_get_name for NULL (Simon Feltman) (#698829)
  • Remove g_type_init conditional call (Jose Rostagno) (#698763)
  • Update deps versions also in README (Jose Rostagno) (#698763)
  • Drop compat code for old python version (Jose Rostagno) (#698763)
  • Remove duplicate call to _gi.Repository.require() (Niklas Koep) (#698797)
  • Add ObjectInfo.get_class_struct() (Johan Dahlin) (#685218)
  • Change interpretation of NULL pointer field from None to 0 (Simon Feltman) (#698366)
  • Do not build tests until needed (Sobhan Mohammadpour) (#698444)
  • pygi-convert: Support toolbar styles (Kai Willadsen) (#698477)
  • pygi-convert: Support new-style constructors for Gio.File (Kai Willadsen) (#698477)
  • pygi-convert: Add some support for recent manager constructs (Kai Willadsen) (#698477)
  • pygi-convert: Check for double quote in require statement (Kai Willadsen) (#698477)
  • pygi-convert: Don’t transform arbitrary keysym imports (Kai Willadsen) (#698477)
  • Remove Python keyword escapement in Repository.find_by_name (Simon Feltman) (#697363)
  • Optimize signal lookup in gi repository (Daniel Drake) (#696143)
  • Optimize connection of Python-implemented signals (Daniel Drake) (#696143)
  • Consolidate signal connection code (Daniel Drake) (#696143)
  • Fix setting of struct property values (Daniel Drake)
  • Optimize property get/set when using GObject.props (Daniel Drake) (#696143)
  • configure.ac: Fix PYTHON_SO with Python3.3 (Christoph Reiter) (#696646)
  • Simplify registration of custom types (Daniel Drake) (#696143)
  • pygi-convert.sh: Add GStreamer rules (Christoph Reiter) (#697951)
  • pygi-convert: Add rule for TreeModelFlags (Jussi Kukkonen)
  • Unify interface struct to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python interface struct to GI marshaling code (Simon Feltman) (#693405)
  • Unify Python float and double to GI marshaling code (Simon Feltman) (#693405)
  • Unify filename to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify utf8 to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify unichar to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to filename GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to utf8 GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to unichar GI marshaling code (Simon Feltman) (#693405)
  • Fix enum and flags marshaling type assumptions (Simon Feltman)
  • Make AM_CHECK_PYTHON_LIBS not depend on AM_CHECK_PYTHON_HEADERS (Christoph Reiter) (#696648)
  • Use distutils.sysconfig to retrieve the python include path. (Christoph Reiter) (#696648)
  • Use g_strdup() consistently (Martin Pitt) (#696650)
  • Support PEP 3149 (ABI version tagged .so files) (Christoph Reiter) (#696646)
  • Fix stack corruption due to incorrect format for argument parser (Simon Feltman) (#696892)
  • Deprecate GLib and GObject threads_init (Simon Feltman) (#686914)
  • Drop support for Python 2.6 (Martin Pitt)
  • Remove static PollFD bindings (Martin Pitt) (#686795)
  • Drop test skipping due to too old g-i (Martin Pitt)
  • Bump glib and g-i dependencies (Martin Pitt)
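
As a practical note on the threads_init deprecation listed above, code that used to sprinkle explicit init calls can now simply drop them (a minimal sketch):

from gi.repository import GLib

# GObject.threads_init() / GLib.threads_init() are deprecated in this
# release; threads are initialized automatically, so a plain main loop
# is all that is needed.
loop = GLib.MainLoop()
GLib.timeout_add(100, loop.quit)  # quit() returns None, so the timeout fires once
loop.run()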

April 29, 2013

summing up 1

a more or less weekly digest of juicy stuff

UI polishing in Firefox for Android

Last week, we did our very first topic-oriented hackathon focused on UI polishing bugs. The UI changes we’ve done will make a substantial difference in the experience of using Firefox on Android. Here are some of my favourite fixes and improvements.

Tabs

Details in the tabs UI can make a big difference UX-wise. We changed the tabs button icon (see image) to provide better affordance. The new icon also features a much cooler animation when tabs are added or removed.

Last but not least, we added a subtle parallax effect when you open/close the tabs panel, giving it a more fluid feel.

Address bar

As Wes has already reported, you now have the option to show URLs instead of page titles in the address bar. The domain highlight (see image) is a nice touch and gives us feature parity with Firefox on desktop.

The reader and stop buttons now have properly sized hit areas to avoid tapping other parts of the toolbar by mistake—a long overdue issue.

That’s not all

Reader Mode will get some nice style updates for serif fonts, doorhanger notifications now have a more polished animation, text selection handles have a more consistent style, favicons in the awesomescreen will look fancier, some visual glitches in the awesomescreen and toolbar were fixed, and more.

Not all these changes are in Nightly just yet but they will show up over the next few days. Firefox 23 has everything to be my favourite release ever. Download and install our Nightly build on your Android device and let us know what you think.

April 28, 2013

Share some design works for GNOME.Asia

Poster: Training session in the GNOME.Asia Summit 2013
Training session in the GNOME.Asia Summit 2013

T-Shirt: Let us to meet GNOMERS
Let us to meet GNOMERS

T-Shirt: GNOME IS MOBILE
GNOME IS MOBILE

Certificate of Training -Template
Certificate of Training


No more stuck rendering dialogs!

If you’ve tried rendering projects with Pitivi 0.15 or older, chances are you’ve encountered one of these dreadful situations where the rendering process would get stuck:

  • …at the beginning, with the progressbar saying it’s currently “estimating” — which was a lie that I corrected a little while ago.
  • …at the very end. Extra trolling points for having made you waste a huge amount of time to get a 0-byte output file (if we’re lucky, that bug is gone).
  • …somewhere in the middle, because caps negotiation failed, some elements were not linked, GStreamer thinks you ran out of available RAM, or because you’ve been very naughty.

In any such case, the rendering dialog just sat there and smiled at you, as if everything was fine in the world. Well, no more:

slap

Pitivi is going to give you the honest, brutal truth.

This is the result of a horrifying thought suddenly springing to my mind last night: “Hey, what if the code was not even checking for errors in the pipeline when rendering?”

Indeed, it wasn’t. How silly is that! I have thus prepared a simple fix to improve the situation: catch pipeline error messages, abort the render (you really don’t want to ignore a GStreamer error) and display an error dialog. This will at least let people know that something is wrong and that they should start writing patches to GStreamer instead of accusing Pitivi of hurting kittens. You’d be surprised how many people can sit for hours in front of that stuck progressbar.
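
The essence of the fix looks something like this (a simplified sketch of the approach, not the actual Pitivi patch; show_error_dialog here is a stand-in for the real GTK+ dialog):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
# Stand-in for the real rendering pipeline
pipeline = Gst.parse_launch("videotestsrc num-buffers=100 ! fakesink")
loop = GLib.MainLoop()

def show_error_dialog(message, details):
    # Placeholder for the dialog shown in the screenshot below
    print("Error While Rendering Project:", message)
    print(details)

def on_render_error(bus, message):
    error, details = message.parse_error()
    pipeline.set_state(Gst.State.NULL)  # abort: never ignore a GStreamer error
    show_error_dialog(error.message, details)
    loop.quit()

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error", on_render_error)
bus.connect("message::eos", lambda b, m: loop.quit())

pipeline.set_state(Gst.State.PLAYING)
loop.run()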

Before I commit the fix however, I would need your feedback on the usability of that dialog:

2013-04-27

This is not terribly pretty, but it’s better than nothing. A few things to consider:

  • In that screenshot, all the text except the window title (“Error While Rendering Project”) comes from the GStreamer pipeline error message (the error and the error’s details). I know that the error details look ugly, but I suspect they wouldn’t be useful to GStreamer/Pitivi developers if we didn’t have them “verbatim”. Maybe we could try to mangle the error details string (split using “:” and take only the first and last two items of the resulting list? there’s a sketch of this after the list) and encourage the user to run from a terminal to get better debug info, but that feels a bit backwards.
  • We should probably have some less-scary text to accompany the actual error details. Something that guides the user towards an action that can be done to address the problem (ex: reporting a bug). Maybe it can be placed between the header and the details (above the “qtdemux.c” line)? The problem is finding a universal text to be used.
  • If we consider the route where we suggest the user to report bugs, where should we point to? The Pitivi bugs investigation page? Pitivi bugzilla? GStreamer bugzilla? The distro’s bug tracker?
  • Let’s keep this simple, both visually and in terms of code/implementation.
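
For concreteness, that mangling strawman from the first point could look like this (just a sketch of the idea, nothing committed):

def mangle_error_details(details):
    # Keep the first field plus the last two ":"-separated fields
    fields = details.split(":")
    if len(fields) <= 3:
        return details
    return ":".join([fields[0]] + fields[-2:])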

What do you think? Is the current approach sufficient or is there something better that we can easily do?

Update: here’s an alternative dialog with some more comprehensible text, where the actual error (as seen in the previous screenshot) gets shoved under the rug by putting it in a GTK expander widget (clicking “Details” reveals the error’s details as above):

2013-04-29

April 27, 2013

Extending geoalchemy through monkeypatching

I’ve been working on the data collection part of my cycle route modelling. I’m hoping that I can, as a first output, put together a map of where people are cycling in Melbourne. A crowd-sourced view of the best places to cycle, if you will. Given I will probably be running this in the cloud1, I thought it was best to actually store the data in a GIS database, rather than lots and lots of flat files.

A quick Google turned up GeoAlchemy, which provides GIS extensions for SQLAlchemy. It gives you lots of the standard things you want to do as methods on fields, but this is only a limited set of what you can do with PostGIS. Since I’m going to want to do things like binning data, I thought it was worth figuring out how hard it would be to call other PostGIS methods.

GeoAlchemy supports subclassing to create new dialects, but you have to subclass 3 classes, and it’s basically a pain in the neck when you just want to extend the functionality of the PostGIS dialect. Probably what I should do is submit a pull request with the rest of the PostGIS API as extensions, but I’m lazy. Hence, for the second time this week, I am employing monkey patching to get the job done (and for the second time this week, kittens cry).

Functions in GeoAlchemy require two things, a method stub saying how we collect the arguments and the return (look at geoalchemy.postgis.pg_functions) and a mapping from this to the SQL function. Since we only care about one dialect, we can make this easier on ourselves by combining these two things. Firstly we monkeypatch in the method stubs:

from geoalchemy.functions import BaseFunction
from geoalchemy.postgis import pg_functions

@monkeypatchclass(pg_functions)
class more_pg_functions:
    """
    Additional PostGIS functions to support
    """

    class length_spheroid(BaseFunction):
        _method = 'ST_Length_Spheroid'

Note the _method attribute, which isn’t used by GeoAlchemy anywhere else. We can then patch in support for it:

from geoalchemy.dialect import SpatialDialect

@monkeypatch(SpatialDialect)
def get_function(self, function_cls):
    """
    Add support for the _method attribute
    """

    try:
        return function_cls._method
    except AttributeError:
        return self.__super__get_function(function_cls)

The monkeypatching functions look like this:

def monkeypatch(*args):
    """
    Decorator to monkeypatch a function into a class as a method
    """

    def inner(func):
        name = func.__name__

        for cls in args:
            # Stash the original so the patch can chain up to it later
            old = getattr(cls, name)
            setattr(cls, '__super__{}'.format(name), old)

            setattr(cls, name, func)

        return func  # keep the module-level name bound to the function

    return inner


def monkeypatchclass(cls):
    """
    Decorator to monkeypatch a class as a baseclass of @cls
    """

    def inner(basecls):
        cls.__bases__ += (basecls,)

        return basecls

    return inner

Finally we can do queries like this:

>>> track = session.query(Track).get(1)
>>> session.scalar(track.points.length_spheroid('SPHEROID["WGS 84",6378137,298.257223563]'))
6791.87502950043

Code on GitHub.

  1. your recommendations for cloud-based services please, must be able to run Flask and PostGIS and be super cheap

April 26, 2013

Fri 2013/Apr/26

  • Oh the WebKits! During the past few weeks, thanks to Igalia's collaboration with the good folks at Bloomberg, I have descended from the heights of Epiphany and WebKitGTK+ to the depths of WebCore, that obscure but cleverly assembled part of WebKit that magnificently takes care of the logic inherent to layouting, rendering, and the inner representation of HTML documents. A fascinating aspect of WebCore is that its architecture, completely decoupled from the actual implementation in the different WebKit ports, means that any change to its parts will affect all ports and browsers built upon this marvelous piece of engineering. Let me assure you, dear reader, the challenges this implies are comparable only to the joy it brings to this humble hacker, as the following will reveal!

    Among the many duties of WebCore lies controlling the logic behind user interaction with HTML documents — something that has changed considerably in recent years. While originally, most interactive editing in the web was limited to plain and boring web forms, in this brave new world of ours it is also possible to build complete HTML editors using nothing but HTML and JavaScript access to the DOM. Have you seen Wordpress' fantastic editor? Then you shall agree with me that this is an extremely powerful feature.

    But with great power comes great responsibility, as the old saying goes. And with great responsibility come bugs, says a more recent variation of the same maxim. And where bugs are to be found, relentless minds work tirelessly in order to ensure that your browsing experience never ceases to improve. This is one of the goals that Igalia, humbly but boldly, pursues with utmost seriousness. And so it has been that I, your humble servant, have spent countless hours mastering my way through the DOM and editing features of WebCore. Bugs have been fixed already — some affecting editing in Windows, others affecting editing in GNU/Linux, and others affecting all platforms equally. More will be fixed in the forthcoming weeks. I can only attempt to share my excitement through these words, for I am unable to express it in a way that would do it justice.

  • As a side note, I am a committer to the WebKit project for a little while now. This is pretty cool, as it means I get a direct chance to break your browser. Or unbreak it, shall it be the case. I try to lean towards the latter but trust me, it is not an easy task!

The changing world of adding a service

1991

You: “I’d like to install a file server for the LAN – can I have the root password for the server, please”
Admin: “You’re kidding, right? Submit a ticket, we’ll install it when we get around to it”
crickets

1996

You: “I installed a Samba file server for the LAN on my own Linux machine”
Admin: “Gah! You messed up my workgroup. What happens when you turn it off? Bloody amateurs…”

2000

You: “I’d like to install a bug tracker for the dev team”
Admin: “The existing servers are overloaded – you’ll need a hardware req. Lodge a ticket, and get management approval first.”

2005

You: “I’d like to install a bug tracker for the dev team”
Admin: “I’ll create a VM for you to use. Lodge a ticket”

2013

You: “I’d like to install a bug tracker for the dev team”
Admin: “You have a self-service account on OpenStack, don’t you? What are you talking to me for?”

 

Decorating your Xamarin.iOS code with Behaviors

Note: this is the post in which I'm getting out of the closet and making it clear that I had an affair with Silverlight. I still think about it sometimes, and when I do, this is what happens...

Every time you have to ask your user "What's your favourite colour" or "What is the air-speed velocity of an unladen swallow?" from within your iOS application, you have to ask yourself "Wait, will the field still be visible with the virtual keyboard displayed ?"

I don't know how you do it (experience sharing is welcome), but me, I do it this way:

public override void ViewDidLoad ()
{
    base.ViewDidLoad ();

    //Set Bindings and Commands
    placeField.Bind (ViewModel, "Place");
    sendButton.Command (ViewModel.SendCommand);
    busyIndicator.Bind (ViewModel, "IsBusy");

    //Slide the view on keyboard show/hide
    placeField.EditingDidBegin += (sender, e) => {
        UIView.BeginAnimations ("keyboardslide");
        UIView.SetAnimationCurve (UIViewAnimationCurve.EaseInOut);
        UIView.SetAnimationDuration (.3f);
        var frame = View.Frame;
        frame.Y = -100;
        View.Frame = frame;
        UIView.CommitAnimations ();
    };

    placeField.EditingDidEnd += (sender, e) => {
        UIView.BeginAnimations ("keyboardslide");
        UIView.SetAnimationCurve (UIViewAnimationCurve.EaseInOut);
        UIView.SetAnimationDuration (.3f);
        var frame = View.Frame;
        frame.Y = 20;
        View.Frame = frame;
        UIView.CommitAnimations ();
    };
}



It works fine, but looks messy next to readable code setting bindings or commands (those come from a very light Binding library I'm working on). Then yesterday evening, I had a realisation. It looks very similar to Silverlight Behaviors, so this code could just be like:

 placeField.Attach (new SlideOnEditBehavior (View, defaultPosition:20, alternatePosition:-100));

And the SlideOnEditBehavior is kept aside (OnDetaching implementation left out for clarity):

public class SlideOnEditBehavior : Behavior
{
    UIView view;
    int defaultPosition;
    int alternatePosition;

    public SlideOnEditBehavior (UIView view, int defaultPosition, int alternatePosition)
    {
        this.view = view;
        this.defaultPosition = defaultPosition;
        this.alternatePosition = alternatePosition;
    }

    protected override void OnAttached ()
    {
        base.OnAttached ();
        AssociatedObject.EditingDidBegin += (sender, e) => {
            UIView.BeginAnimations ("keyboardslide");
            UIView.SetAnimationCurve (UIViewAnimationCurve.EaseInOut);
            UIView.SetAnimationDuration (.3f);
            var frame = view.Frame;
            frame.Y = alternatePosition;
            view.Frame = frame;
            UIView.CommitAnimations ();
        };

        AssociatedObject.EditingDidEnd += (sender, e) => {
            UIView.BeginAnimations ("keyboardslide");
            UIView.SetAnimationCurve (UIViewAnimationCurve.EaseInOut);
            UIView.SetAnimationDuration (.3f);
            var frame = view.Frame;
            frame.Y = defaultPosition;
            view.Frame = frame;
            UIView.CommitAnimations ();
        };
    }
}

Cleaner. Simpler. Reusable. And it also supports BehaviorCollections:

placeField.Attach (new BehaviorCollection {
    new SlideOnEditBehavior (View, defaultPosition:20, alternatePosition:-100),
    //Any other behavior here
});

As expected, the code for all of this is trivial, but if you like the idea and save yourself the 30 minutes it takes to write it, it's on Github.

[UPDATE: 2013-04-26] I updated the code as per Stuart Lodge’s suggestion (of MvvmCross) to use WeakReferences to NSObjects. It doesn’t change the API at all.


DoggFooding in Glade

I’ve been meaning to write a short post showing what we’ve been able to do with Glade since we introduced composite widget templates in GTK+. The post will be as brief as possible since I’m preoccupied with other things, but here’s a rundown of what’s changed in the Dogg Food release.

Basically, after finally landing the composite template machinery (thanks to Openismus for giving me the time to do that), I couldn’t resist going the extra mile in Glade, over the weekends and such, to leverage the great new features and do some redesign work in Glade itself.

So please enjoy ! or don’t and yell very loudly about how you miss the old editor design, and make suggestions :)

Glade Preferences Dialog

Preferences Dialog Before

Preferences Dialog After

The old preferences dialog was a sort of lazy combo box, now that we have composite templates and create the interface using GtkBuilder, it was pretty easy to add the treeview and create a nicer interface for adding customized catalog paths.

There are also some new features and configurations in the dialog: since the new Dogg Food release we now have an Autosave feature, and we optionally save your old file.ui to a file.ui~ backup every time you save. There are also some configuration options for what kind of warnings to show when saving your glade file (since it can be annoying if you already know there are deprecated or unrecognized widgets and have the dialog pop up every time you save).

Glade Project Properties

Project Properties Dialog Before

Project Properties Dialog After

Refactoring the project properties dialog out into a separate source file, and implementing the UI with Glade, makes the GladeProject code more readable, and the UI benefits again: notice the not-so-clear “Execute” button has been moved to be a secondary dialog button (with a tooltip explaining what it does).

Also the new project attributes have been added to allow one to set the project’s translation domain or Composite Template toplevel widget.

Now that’s just the beginning, let’s walk through the new custom editors.

Button Editor

GtkButton Editor After

GtkButton Editor Before

Here’s where the fun starts, while we did have some custom editors before, they all had to be hand written, now I’ve added some base classes making it much easier to design the customized property editors with Glade.

First thing to notice is we have these check button property editors for some boolean properties, which we can place wherever we like in the customized property editor layout (check buttons previously didn’t make any sense in a layout where one always expects to see the title label on the left and the property control on the right, in a table or grid layout).

Entry Editor Before

GtkEntry Editor Before (top portion)

GtkEntry Editor Before (bottom portion)

Entry Editor After

GtkEntry Editor After (top portion)

GtkEntry Editor After (bottom portion)

An all-around better layout, I think. We also save space by playing tricks with the tooltip-text / tooltip-markup properties for the icons: while GTK+ in reality has separate properties, we just add a “Use Markup” check to the tooltip editor and use that to decide whether to edit the normal tooltip text property or the tooltip markup property.

Image Editor

GtkImage Editor Before

GtkImage Editor After

Here we economize on space a bit by putting the GtkMisc alignment and padding details down at the bottom. We also group the “use-fallback” property with the icon name setting, since the fallback property can only apply to images that are set by icon name.

Label Editor

GtkLabel Editor Before

GtkLabel Editor After

Like the GtkImage editor, we’ve grouped the GtkMisc properties together near the bottom. We also have generally better grouping of properties all around; hopefully this will help the user find what they are looking for more quickly. Another interesting thing is that the mnemonic widget editor is insensitive if “use-underline” is FALSE; when “use-underline” becomes selected, the mnemonic widget property can be set directly to the right of the “use-underline” property.

Widget Editor / Common Tab

Last but not least (of what we’ve done so far) is a completely new custom editor for the “Common” tab. (Perhaps we can do away with the “Common” tab altogether and use expanders where we currently have bold heading labels; now that we do it all with GtkBuilder script, the sky is the limit really.)

GtkWidget Editor Before (top portion)

GtkWidget Editor Before (bottom portion)

Widget Editor After

GtkWidget Editor After

Here again we play some tricks with the tooltip, so that we don’t have two separate property entries for “tooltip-text” and “tooltip-markup” but instead a simple switch deciding whether to set the tooltip as markup or not. The little “Custom” check button in there makes the tooltip editors insensitive and instead sets the “has-tooltip” property to TRUE, so that you can hook into the “query-tooltip” signal to create a custom tooltip window.
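
For anyone curious what hooking into that signal looks like on the application side, here is a rough sketch in C (the widget, strings, and function names are arbitrary; the “has-tooltip” property and “query-tooltip” signal are the ones described above):

#include <gtk/gtk.h>

static gboolean
on_query_tooltip (GtkWidget  *widget,
                  gint        x,
                  gint        y,
                  gboolean    keyboard_mode,
                  GtkTooltip *tooltip,
                  gpointer    user_data)
{
  /* Fill in whatever custom content the tooltip window should show */
  gtk_tooltip_set_markup (tooltip, "<b>Custom</b> tooltip content");
  return TRUE;  /* TRUE means "show the tooltip now" */
}

static void
setup_custom_tooltip (GtkWidget *widget)
{
  /* This is what the "Custom" check button toggles for you in Glade */
  g_object_set (widget, "has-tooltip", TRUE, NULL);
  g_signal_connect (widget, "query-tooltip",
                    G_CALLBACK (on_query_tooltip), NULL);
}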

Now while these are just the first iterations of the new editor designs and layouts, the really great news is that we can now use Glade to design the layouts of Glade’s widget editors, so you can expect (and even request ;-) ) better designs in the future. Also, we’re open to ideas, if you have any great ideas on how to make widget property editing more fun, more obvious, more usable… please share them with us in bugzilla or on our mailing list.

 

Extra amendment: fitting images into a blog post side by side has been a delicate exercise; it looks different in the editor, different at blogs.gnome.org, and different again on planet.gnome.org. Just goes to show that I make for a terrible poster boy, not to mention I don’t post all that often… anyway, I hope the formatting of this post is endurable at least; it’s best viewed at blogs.gnome.org, I think.

Don’t forget to submit your GUADEC talk!

You’ve got until Saturday (that’s tomorrow for many of you!) to submit your talks for GUADEC. It looks like it’s going to be really great this year. We’re in the process of confirming our first keynote speaker, which I can’t wait to announce! Plus, so much has been happening in the GNOME world – there’s sure to be a lot of great conversation. The GUADEC organizers have been hard at work and the conference is shaping up nicely!

On a personal note, I’ve never been to the Czech Republic and I’m so excited to go! I can’t wait to see you all there in person (and I’m happy that this year I can drink with you!)

April 25, 2013

Geary crowdfunding: What’s next?

Thirty days comes and goes faster than you think.  The Geary crowdfunding campaign’s 30 days are up, and unfortunately, we didn’t make our target amount.  That means Yorba will take in none of the $50,860 pledged by 1,192 generous donors over the past month, who will receive refunds.

I’d like to thank each person who pledged to the Geary campaign.  That money represented more than dollars and pennies; it represented trust in Yorba and the work we’re doing to bring high-quality software to the Free Desktop.  $50,860 is not small potatoes, and 1,192 donors in 30 days tells me we’re doing something right.  Even if we “failed”, I like to believe we succeeded in some sense.

What’s next?  In some ways, it’s back to business for Yorba.  We’re still coding.  In fact, we released new versions of Geary and Shotwell (even Valencia!) during the crowdfunding campaign, which has to be some kind of record.  We’re working now to find other sources of income to cover our costs.  All options are being considered.

That said, please consider giving directly to Yorba.  If you pledged and are going to receive a Paypal refund, you can still donate some or all of that money to Yorba knowing it will be put to good use.  Every dollar we take in is a little more oxygen for Yorba to continue developing free software.  Just follow this link to donate:

http://www.yorba.org/about/donate/

Thanks everyone!

Need for Exercises

For many years, I have learned various subjects (mostly programming related, like languages and frameworks) purely by reading a book, blog posts or tutorials on the subjects, and maybe doing a few samples.

In recent years, I "learned" new programming languages by reading books on the subject. And I have noticed an interesting phenomenon: when given a choice between using these languages on a day-to-day basis or using another language I am already comfortable with, I go for the language I am comfortable with. This, despite my inner desire to use the hot new thing, or to try out new ways of solving problems.

I believe the reason this is happening is that most of the texts I have read that introduce these languages are written by hackers and not by teachers.

What I mean by this is that these books are great at describing and exposing every feature of the language, and they show you some clever examples, but none of them actually forces you to write code in the language.

Compare this to Scheme and the book "Structure and Interpretation of Computer Programs". That book is designed with teaching in mind, so at the end of every section where a new concept has been introduced, the authors have a series of exercises specifically tailored to put the knowledge you just gained to use. Anyone who reads that book and does the exercises is going to be a solid Scheme programmer, and will know more about computing than they would from reading any other book.

In contrast, the experience of reading a modern computing book from most of the high-tech publishers is very different. Most of the books being published do not have an educator reviewing the material; at best they have an editor who will fix your English, reorder some material, and make sure the proper text is italicized and your samples are monospaced.

When you finish a chapter in a modern computing book, there are no exercises to try. Your choices are to either take a break by checking some blogs, or to keep marching in a quest to collect more facts in the next chapter.

During this process, while you amass a bunch of information, at some neurological level you have not really mastered the subject, nor gained the skills you wanted. You have merely collected a bunch of trivia, which most likely you will only put to use in an internet discussion forum.

What books involving an educator will do is include exercises that have been tailored to use the concepts that you just learned. When you come to this break, instead of drifting to the internet you can sit down and try to put your new knowledge to use.

Well-developed exercises are an application of the psychology of Flow: they ensure that the exercise matches the skills you have developed, and they guide you through a path that keeps you in an emotional state ranging from control to arousal and joy (flow).

Anecdote Time

Back in 1988, when I first got the first edition of "The C++ Programming Language", there were a couple of very simple exercises in the first chapter that took me a long time to get right, and they both proved very educational.

The first exercise was "Compile Hello World". You might think: that is an easy one, I am going to skip it. But I had decided that I was going to do each and every single one of the exercises in the book, no matter how simple. So if the exercise said "Build Hello World", I would build Hello World, even if I was already a seasoned assembly language programmer.

It turned out that getting "Hello World" to build and run was very educational. I was using the Zortech C++ compiler on DOS back then, and getting a build turned out to be almost impossible: I could not get the application to build, and all I got was some obscure error with no way to fix it.

It took me days to figure out that I had the Microsoft linker in my path before the Zortech Linker, which caused the build to fail with the obscure error. An important lesson right there.

On Error Messages

The second exercise that I struggled with was a simple class. The class was missing a semicolon at the end, but unlike modern compilers, the Zortech C++ compiler's error message at the time was less than useful. It took me a long time to spot the missing semicolon, because I was not paying close enough attention.

Doing these exercises trains your mind to recognize that "useless error message gobble gobble" actually means "you are missing a semicolon at the end of your class".

More recently, I learned in the same hard way that the F# error message "The value or constructor 'foo' is not defined" really means "You forgot to use 'rec' in your let", as in:

let foo x =
    if x = 1 then
        1
    else
        foo (x - 1)   // error: 'foo' is not defined, because 'rec' is missing above

That is a subject for another post, but the F# error message should tell me what I did wrong at a language level, as opposed to explaining to me why the compiler is unable to figure things out in its internal processing of the matter.

Plea to book authors

Nowadays we are cranking out books left and right to explain new technologies, but rarely do these books get input from teachers and professional pedagogues. So we end up accumulating a lot of information, we sound lucid at cocktail parties, and we might even engage in a pointless engineering debate over features we barely master. But we have not learned.

Coming up with ideas to try out what you have just learned is difficult. As you think of things you could do, you quickly find that you are missing knowledge (discussed in later chapters) or that your ideas are not that interesting. In my case, my mind drifts into solving other problems, and I go back to what I know best.

Please, build exercises into your books. Work with teachers to find the exercises that match the material just exposed and help us get in the zone of Flow.

Design Goings On

The GNOME 3.8 release kept me pretty busy. In the run up to UI freeze I was focusing on tracking bugs, providing guidance and testing. Then it was marketing time, and I was spending all my time writing the release notes as well as some of the website. (Kudos to the marketing team for a great 3.8 release, btw.)

With 3.8 behind me, I’ve been able to turn back to some good honest design work. I’ve been looking at quite a few aspects of GNOME 3, including Settings and GNOME Shell. However, in this post I am going to focus on some of the application design activities that I have been involved in recently. One of the nice things here is that I have found the opportunity to fill in some gaps and pay some attention to some of the long-lost applications that are in need of design love.

Contacts

I haven’t blogged about Contacts for a while. 3.8 was a great release for the application though, mostly thanks to some fantastic work by Erick Pérez Castellanos. We got a new editing UI and a new selection mode, as well as a new linked accounts dialog. Along the way many of the most prominent usability bugs were fixed. Thanks to Erick for making this happen.

The Contacts designs have been slowly evolving since they were first conceived, and they have turned into something that I am really happy with. I spent a bit more time on them recently, with some updates to the toolbar and a few other things.

Contacts

Contacts - Editing

Character Map

I use the GNOME Character Map on a fairly regular basis, and it has to be said that it could do with some love. I’ve been meaning to do a redesign for quite some time, and I finally found the opportunity a few weeks ago.

Character Map

Character Map - Filter Menu

The most important thing about the design, in my opinion, is that it provides an easy way to browse different types of characters. This alone will make a huge difference to the experience. Another nice feature is the recently used section, since I think that most people have a small set of characters that they keep going back to.

Web Apps

I think that web applications could be pretty important for GNOME in the future, and we already have a great foundation on which to build here. I recently took a look at how web apps could have toolbars of their own, which resulted in the following mockups.

Web Applications

The web app toolbars are pretty simple – back, forward, reload and close.

Cheese

Last summer I had the pleasure of mentoring Fabiana Simões, who did a great job redesigning Cheese. In the past few weeks I’ve revisited her designs and done a set of hi-resolution mockups. We’re continuing to discuss some of the details, but I’m increasingly happy with the design.

Cheese

Cheese - Video Preview

One interesting thing to note about this design is how the navigation design patterns that we’ve developed for GNOME 3 applications are able to help even with a simple application like Cheese. Having a set of patterns like this really helps to reduce the work involved in designing (or in this case, redesigning) applications, as well as leading to consistency for users.

Transfers

Transfers is a new application for GNOME 3. It’s like a download manager, but it handles other things like copy/move operations for files and file transfers from Chat contacts. In some ways it isn’t the most exciting application, but it will fill in an important blank in the content story, and will make it easy to find content that you have received from other people and places.

Transfers

These mockups are still a little rough, but they are a good place to be starting.

So what’s the story with the close buttons?

Those of us who work on GNOME design have been pushing to reduce the presence of window titlebars for some time. The main driver for this is to use screen space more efficiently. The average titlebar includes big swathes of empty space, and they take up valuable vertical pixels. We’ve already seen the result of this direction in our treatment of maximised applications, where the titlebar is hidden.

Now that Wayland and client side decorations are on their way, we are able to realise our ambitions for screen efficiency even further. So far we have only been able to hide the titlebar when windows are maximised. In the new world of Wayland, windows can permanently lose their titlebars, whatever state they are in. Not only that, but they can also present window controls – like the close button – inside the window itself. This means that we can consistently show the close button on the right side of the toolbar, whether the window is maximised or not.

One of the drivers for my recent application design work has been to test out this approach to titlebars in an array of different contexts, and the other designers and I will continue to examine how it will work in different applications as we move forward.

As always, these designs are in a process of evolution, and feedback is welcome.


GTK+ hackfest, wrapup and end

I had to take a day off from the hackfest on Tuesday to get a few things done in the office. The GTK+ hackers surprised me by collectively jumping on the bus and coming out to visit me in Westford. How sweet ! Yesterday we were back in the OLPC offices for the last day – much faster to get to the airport from Cambridge…

Discussions

In these last 2 days, we’ve discussed various non-technical topics, including Guadec planning, the 3.10 roadmap, regressions, etc. Before diving into these, here’s a little demonstration of scrolling under Wayland. This is a testcase we’ve used in evaluating drawing performance with the wip/simple-draw branch. It certainly shows that the Wayland backend is doing ok as far as drawing speed is concerned.

Scrolling, scrolling, scrolling

Our Guadec presence will include at least two or three GTK+ talks, and we’ll also have a Wayland BOF, which should touch on GTK+ topics as well. If you haven’t submitted a talk yet, do so this week !

On regressions: There are a number of problems in git master currently. This includes growing infobars, the (temporary) loss of search in the file chooser, problems when drawing to client-side decorated toplevels, as well as numerous issues with window sizes.  We are aware of these, and will hopefully have most of these addressed before too long.

Merging all of the big pieces early in the cycle gives us enough time to find and fix these problems before 3.10.

Planning

On that topic, we’ve made a list of things that we still hope to complete and merge for 3.10 (I wouldn’t quite call it a roadmap).

EggListBox:

  • Alex will add support for row containers
  • We can improve the separator API by turning separators into properties of the row container
  • Model support should not block the initial merging

Simple drawing branch:

  • This should be ready for merging soon
  • The drawing model changes are considerable, but incompatibility should not be a problem unless you are using GDK without GTK+ (and who does that ?)

Support for hi-DPI displays:

  • Alex has hardware to work on this
  • the goal is to demonstrate it working at Guadec

Wayland tasks:

  • Clipboard cleanup: Benjamin is working on moving GtkClipboard to GDK, so we can cleanly support multiple backends at the same time
  • DND: a lesser priority, but also on Benjamin’s list
  • Owen got frame synchronization working with Wayland at the hackfest, and the performance is good
  • Client-side decorations: I’m going to introduce a second window again, to make things more compatible. widget->window will go back to being just the content area.

A few other things would be nice to get landed for 3.10, but these are less certain to make it:

  • Notification API (Ryan and Lars)
  • Action descriptions (Ryan)
  • GtkBuilder / action integration (Ryan)
  • Make GtkPathBar public and share it with nautilus (Federico)

Finally, there are things that really should be worked on, but don’t have a name behind them currently:

  • Popovers: We do have a prototype for this in GtkBubbleWindow, but it needs love
  • Touch: There are many details that we currently don’t get right

If you feel like you could be interested in working on either of these, meet us on #gtk+.

I hope to see most of the GTK+ team again at Guadec, this meeting was very productive.

Auto-EDID Profiles Results

First, thanks to everyone who contributed ICC profiles. I’ve received over 800 uploads in a little under 24 hours, so I’m very grateful.

TLDR:

Total profiles scanned: 800
Profiles with invalid or unlikely primaries: 45
EDIDs are valid 94.4% of the time

This resulted in the following commit to colord:

commit 87be4ed4411ca90b00509a42e64db3aa7d6dba5c
Author: Richard Hughes <richard@hughsie.com>
Date:   Wed Apr 24 21:47:14 2013 +0100
    Do not automatically add EDID profiles with warnings to devices

I’ll explain a little about what these numbers and the commit mean. The EDID reports three primaries, i.e. what real-world XYZ colors red, green and blue map to. It also tells us the whitepoint of the display. From basic color science we know that:

  • If R=G=B, we should be displaying a black/gray/white color without significant amounts of red, green or blue showing
  • The reported gamut of colors should not be bigger on real hardware than theoretical archival spaces like ProPhotoRGB
  • If R=G=B=100%, then we should have a good approximation of D50 white
  • The whitepoint temperature of the display should not be too low (<3000K) or too high (>10000K)

There are actually 11 checks that colord now does on RGB profiles, similar to the checks above. If any of the 11 checks fail, the automatically generated profile is not used. The user can still add it manually if they want, and then of course it will be used for the monitor, but we don’t break things by default for 5.6% of users.
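
To give a flavor of what such a check can look like, here is a hypothetical sketch of the whitepoint temperature test in C (not colord’s actual code; the CdChromaticity type and function names are made up), using McCamy’s approximation to derive a correlated color temperature from the whitepoint chromaticity:

#include <stdbool.h>

typedef struct { double x, y; } CdChromaticity;   /* xy chromaticity */

/* McCamy's approximation for correlated color temperature (CCT) */
static double
cct_from_xy (CdChromaticity wp)
{
  double n = (wp.x - 0.3320) / (0.1858 - wp.y);
  return 449.0 * n * n * n + 3525.0 * n * n + 6823.3 * n + 5520.33;
}

static bool
whitepoint_is_sane (CdChromaticity wp)
{
  double cct = cct_from_xy (wp);   /* D65 comes out at about 6504K */
  return cct >= 3000.0 && cct <= 10000.0;
}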

If anyone is interested, the results were generated by this program, and the raw results are available here. My personal take-home messages from this file are:

  • Sometimes blue and green are the wrong way around (Samsung SyncMaster)
  • Vendors need to use something other than random binary data (AU Optronics)
  • If you don’t know what a whitepoint is, don’t try and guess (Sharp and Lenovo YT07)
  • Projectors generally don’t really know/care what values to use (OPTi PK301 and In Focus Systems)

There’ll be a new colord release with this feature in the next couple of weeks.

April 24, 2013

Growing up and expanding

tl;dr version: We’re hiring!

For the past year or so I’ve been working with Mirko, Sebastian and Karl at Agile Workers UG. In this time things have gone really well. From the beginning we had plans to grow the company, and now we’re there.

Mirko and I have both had experience working in software consulting companies, and thus we inevitably developed ideas and values that we wanted to feed into any new company we’d found. One of the most important of these was that employees who invest their time and dedication should be given a strong voice in, and a share of, the company. In our current form (UG) this is not possible, and it was always considered a temporary solution. Thus, we are currently taking the final steps through the German bureaucratic forest to form a stock corporation (AG), which allows us the flexibility to offer employees both a share of the company and a legally binding voice. Expect more details once we’ve finished the process.

Oh yeah, WE’RE HIRING! We’re currently looking for C/C++ developers with a strong skill set in system-level Linux. We are working on full-stack Android development and would like to build a team around that. Expect to work on the entire Android stack, from the application layer, down through system services (written in C/C++ and Java), and into the device drivers (user-space and kernel).

We’re located in Berlin and, for the time being, need people to be here on location. It’s a great time to come to Berlin. The city offers a huge variety of opportunities for the technically inclined. More immediately, the weather has just turned nice and there are few places I’d rather be than Berlin in the summer time. As for winter, let’s just say it provides ample time for hacking. ;)

You can find more information at our careers page.

April 23, 2013

Adventures in the Music Streaming World

These last couple of years have brought (along with some new wrinkles and occasional grey hairs) some interesting changes on how I manage and maintain my “digital belongings”. For a long while I used to worry about backing up and storing in a safe place all the files, photos, books, movies and music I’ve collected through the years. I have also managed to collect a variety of different external USB hard drives to keep up with this digital sprawl, where for each iteration the next device would increase in size and sometimes in speed compared to the current one. It got to a point where I got tired of the backup/restore game and found myself paying less and less attention to the things I had spent so much time maintaining.

Music was one of the last items that I eventually stopped backing up. One day last year I took my entire collection of approximately 9000 (legal) songs and uploaded them to Google Play! The very next time I re-installed my desktop I didn’t have to restore my music collection anymore. All I needed was a net connection and I was off to listening to my tunes! I also had full access to that collection via my Android phone and laptop! Triple yummy! Sure, without net access I would be out of luck, but I could always keep a smaller subset of my collection around if I wanted to listen to anything while offline.

After a while I noticed that I seldom played my own tunes, often spending more and more of my music-listening minutes on sites such as Pandora and Grooveshark to experience new types of music and genres. At one point I found myself subscribing to Pandora, Rdio and Spotify, looking for what in my opinion would provide me with the best listening experience. After about a month I think I have finally found the combination that gets me closest to that goal: Pandora + Spotify.

A little background though. I have been a Pandora (non-paying) customer for many years now and I can’t say enough about the music quality and the variety that you can get for this service! I mean, for the completely FREE option (with the occasional advertisement) you get to listen to (what feels like to be) hand picked songs that match whatever criteria you can come up with! Be it a song, a word, an artist or an album, Pandora’s matching algorithm is by far the best I’ve seen out there. Period! It is because of this plus the fact that I can access it from anywhere and any device with net access that I became a paid customer.

But how about those times when I specifically want to listen to a given song or album, or even make a playlist with some of my favorite jams? After a while I learned a nice trick that lets you sample an album from Pandora, but that wasn’t enough for what I wanted to do. So Grooveshark was definitely a great find for me, and for a while I really enjoyed the freedom and wide selection that it offered me for free. Something really simple that also made a difference for me was that I could “scrobble” the music I listened to to my Last.fm account, something that Pandora doesn’t do. But alas, I couldn’t listen to my playlists on the go or even using my phone, so I started looking for options.

Now, Rdio impressed me right away by being exactly what Grooveshark was, but with the added capability of being available on multiple platforms, and including some of the newest and latest releases! The pricing model was a bit more expensive than Pandora’s, but it did give me the ability to create my own playlists and interact with my friends via different social networks. I definitely enjoyed the experience and would have stuck with it if it weren’t for the small music collection that is available right now. I understand that Rdio tries to add as many new (and old) titles as it can, but at the end of the day, I couldn’t always find what I was looking for.

Spotify was the “dark horse” during my experimentation, mostly because it didn’t offer a first-class client for Linux. There was a “half-baked” client out there that never worked for me or crashed too many times… I even ran the Windows client via Wine for the first 2-3 weeks, but it felt “dirty” to pay for a service that would not run natively or provide decent support for my platform. The Android and iOS apps worked like a charm, but I spent the bulk of my days in front of a Fedora box, and listening to music from my phone was not going to cut it for me. The music variety is definitely much larger than what Rdio offers, and it even has its own “Radio” streaming that provides something similar to what Pandora does. But the matching algorithm is still light-years behind Pandora’s, and I often found myself wondering how certain songs and genres ended up in the “station” I was listening to.

About a month into the experiment, it looked like I was going to keep Pandora and Rdio to get great music selection and variety (Pandora), web front-end access and multi-platform support (Pandora, Rdio), and playlists (Rdio)… until a co-worker mentioned that Spotify had just announced their web-based player! All of a sudden Spotify went from being in last place to bumping Rdio out of the equation!

Spotify web player

So now I am using both Pandora and Spotify at home and on the go (Spotify lets you download your playlists for offline listening), and so far the experience has definitely been positive. I feel that the streaming quality and variety have provided me with many enjoyable hours of music while I work, and even my kids have started experimenting with Pandora as they get more exposure to the musical world. And if I ever feel like listening to some of my own music, some of which is not yet found on Spotify, I can always turn to Google Play… and I definitely enjoy not having to manage my backups anymore. :)

Wearing the red fedora

This is some news I have been wanting to share with my friends and the community for quite a while: starting on the 15th of June, I will be working for Red Hat.

I will be filling the position Christian held before he was promoted to lead the entire desktop group, which means I will be managing the engineers working on Evolution, Firefox and LibreOffice. This also means that I will be closer to the three upstreams I have cared the most about for my entire career: GNOME, Mozilla and LibreOffice (and OO.o before it), and I get to do it in one of the greatest companies of the open source ecosystem.

I will be moving to Brno, Czech Republic, to work alongside 600 redhatters in the European engineering HQ. Also, this means I don't have to figure out flights and accommodation for GUADEC this year ;-)

I can't express how thrilled I am for being given this chance. This has been, however, one of the hardest decisions I had to make in my entire life.

Moving to Dublin when I was 22 to work for Sun was certainly easy: I was fed up with college at that point, I had never left home before, and I was young enough that my decisions did not necessarily determine how the next 10 years would look. It turned out quite well: I loved working for Sun and learned a lot, my English improved quite a bit, and I spent probably one of the best years of my life (2008).

Moving to Manchester from Dublin to work for Codethink in 2009 was also somewhat easy: most of my friends had left or were about to leave Dublin, and things at Sun were starting to get a bit stiff because of the acquisition by Oracle. It was a good move after all, and I fell in love with the UK and Manchester as a place to live. I also learned a great deal alongside Rob and the rest of the Codethink bunch.

This time, however, is different. I am 29 (yeah I know, not an old fart yet, but still...), I came back to Gran Canaria and started working for Canonical a year and a half ago after 5 years abroad, I have rebuilt my social circles, I am closer to my family, I've just lost 15kg, and I am going through a very happy stage of my life, not to mention that moving to a country whose language I barely speak is certainly frightening.

It's the first time this kind of move feels like a sacrifice to me, but my gut keeps telling me that it will be worth it, and it has never failed me before on these kinds of matters.

Exciting and challenging times ahead!

http-streams 0.5.0 released

I’ve done some internal work on my http-streams package. Quite a number of bug fixes, which I’m pleased about, but two significant qualitative improvements as well.

First, we have rewritten the “chunked” transfer-encoding logic. The existing code would accept chunks from the server and feed them, as received, up to the user. The problem with this is that the server is the one deciding the chunk size, and that means you can end up being handed multi-megabyte ByteStrings. Not exactly streaming I/O. So I’ve hacked that logic so that it yields bites of at most 32 kB until it has iterated through the supplied chunk, then moves on to the next one. A slight increase in code complexity internally, but much smoother streaming behaviour for people using the library.
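
The library is Haskell, but the re-chunking idea itself is language-agnostic. Here is a generic sketch of the “feed downstream in bounded bites” loop in C (all names invented for illustration, not the library’s code):

#include <stddef.h>

#define MAX_BITE (32 * 1024)   /* never hand the consumer more than 32 kB */

typedef void (*consumer_fn) (const unsigned char *data, size_t len);

static void
yield_in_bites (const unsigned char *chunk, size_t len, consumer_fn yield)
{
  size_t offset = 0;

  /* Iterate through the server-supplied chunk, yielding bounded
   * slices; the caller then moves on to the next chunk. */
  while (offset < len)
    {
      size_t bite = len - offset;
      if (bite > MAX_BITE)
        bite = MAX_BITE;
      yield (chunk + offset, bite);
      offset += bite;
    }
}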

Secondly I’ve brought in the highly tuned HTTP header parsing code from Gregory Collins’s new snap-server. Our parser was already pretty fast, but this gave us a 13% performance improvement. Nice.

We changed the types in the openConnection functions; Hostname and Port are ByteString and Word16 now, so there’s an API version bump to 0.5.0. Literals will continue to work so most people shouldn’t be affected.

AfC

GTK+ hackfest, days 3 and 4

The GTK+ hackfest continued on Sunday and Monday. These were days full of good discussion and hacking, but we still managed to catch some of the nice spring weather outside.

Since we are meeting at the OLPC office in Cambridge, there’s plenty of lunch choices around in walking distance.

So, what have we achieved so far ? Let’s start with a few old projects.

The file chooser sidebar is now a public widget, and nautilus will use it in 3.10. This will address long-standing complaints that the file chooser dialog looks and feels subtly different from the file manager. Federico has been working on this for quite a while.

Another old project that we’re finally wrapping up this cycle is composite containers.  In a nutshell, this means (a) fewer lines of code in complex GTK+ widgets like the file chooser and (b) you get to create such complex widgets in Glade in a structured way.

Tristan and Juan worked on this for several years. Tristan wrote about it here. This branch was actually merged a few days before the hackfest (good thing too, since last-minute scheduling complication prevented Tristan from attending).

Alex has just merged his baseline alignment branch – this lets us align widgets like spin buttons, buttons, labels so that their text is at the same level, visually. The effect of this will be subtle in most places, partially because we have trained ourselves to avoid layouts where (lack of) baseline alignment would be very noticeable. I’m listing this among the old projects even though Alex’ work on this doesn’t have a long history, since it was part of the original height-for-width geometry management gsoc project long ago.

The stated goal for the hackfest is new widgets to support modern applications like gnome-documents or gnome-music.  As the patterns for these applications were developing over the past year or so, we’ve used libgd as a staging area where these new widgets could be shared between applications, but they really belong in GTK+.

So, what has been achieved ?

GtkHeaderBar was already integrated a few weeks ago as part of the client-side decoration support, and I’ve written about it here.

GtkStack and the associated GtkStackSwitcher have just appeared in GTK+ master. Taken together, these two can replace GtkNotebook in many uses (though it is not a 1-1 feature-complete replacement; e.g. GtkStack does not support tab drag-and-drop). A nice new feature of the stack widgets is that the transitions between pages can be animated in various ways. This kind of animation is reasonably easy to do in GTK+ now, with the new frame clock framework that we have since 3.8.

You can see GtkStack and the switcher in the video above. I’ve slowed down the transition there to make it very obvious, it is normally much quicker.
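
For the curious, wiring the two up takes only a few lines. A minimal sketch in C, against the API as it currently looks in GTK+ master (page contents and names are placeholders):

#include <gtk/gtk.h>

static GtkWidget *
build_stack_demo (void)
{
  GtkWidget *box, *stack, *switcher;

  stack = gtk_stack_new ();
  gtk_stack_set_transition_type (GTK_STACK (stack),
                                 GTK_STACK_TRANSITION_TYPE_SLIDE_LEFT_RIGHT);
  gtk_stack_set_transition_duration (GTK_STACK (stack), 300);

  /* Each page gets a name and the title shown by the switcher */
  gtk_stack_add_titled (GTK_STACK (stack),
                        gtk_label_new ("First page"), "one", "One");
  gtk_stack_add_titled (GTK_STACK (stack),
                        gtk_label_new ("Second page"), "two", "Two");

  switcher = gtk_stack_switcher_new ();
  gtk_stack_switcher_set_stack (GTK_STACK_SWITCHER (switcher),
                                GTK_STACK (stack));

  box = gtk_box_new (GTK_ORIENTATION_VERTICAL, 6);
  gtk_box_pack_start (GTK_BOX (box), switcher, FALSE, FALSE, 0);
  gtk_box_pack_start (GTK_BOX (box), stack, TRUE, TRUE, 0);

  return box;
}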

Another new widget that makes use of animation is GtkRevealer, which can show a child widget in an animated fashion. This is commonly used to implement in-app notifications, or for sidebars that should not appear abruptly, but smoothly. Compared to GdRevealer, the GTK+ version has been generalized a bit: the child can slide in from any direction or it can fade in. We’ve also added rtl flipping support.

The ‘is a multiple of 3′ popup in the video above is an example of the revealer in action.
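
GtkRevealer is similarly small in use. A sketch of the in-app notification case (the label text is a placeholder):

#include <gtk/gtk.h>

static GtkWidget *
build_notification (void)
{
  GtkWidget *revealer = gtk_revealer_new ();

  gtk_revealer_set_transition_type (GTK_REVEALER (revealer),
                                    GTK_REVEALER_TRANSITION_TYPE_SLIDE_DOWN);
  gtk_container_add (GTK_CONTAINER (revealer),
                     gtk_label_new ("Download finished"));
  return revealer;
}

static void
show_notification (GtkWidget *revealer)
{
  /* The child slides in smoothly instead of appearing abruptly */
  gtk_revealer_set_reveal_child (GTK_REVEALER (revealer), TRUE);
}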

What’s still cooking ?

The last big new widget on our wishlist is EggListBox. This one is not quite ready to be merged as-is, but after our discussion, we now have a list of what is missing:

We agreed that we need a row container widget – being able to add arbitrary children to the listbox is very nice, but without an intermediate container, handling selection state, focus drawing and accessibility is a bit problematic. Alex is looking into adding this to EggListBox (and EggFlowBox).

The other thing we need for scalability is a way to hook up a data model and only instantiate rows as they are needed, instead of populating the entire list or grid at once. Benjamin has prototyped this long ago, in the wip/list branch. While we eventually need this, many of the current button-box-like uses of EggListBox are working just fine without it.

Apart from new widgets, we’ve looked at all the GtkSettings and have plans for how to deal with many of them in better ways. Some will require more work (like getting rid of modules), others will be easy (like can-change-accels – just stop doing it).

Of course, plenty of other cool stuff has been hacked on on the side; some of it has already landed, and some may still find its way into GLib or GTK+ before too long:

Cosimo and I have spent some time on client-side decoration, and fixed some issues. There are no more black flashes when complex widgets are mapped, opacity works again, and the theming has been simplified.

Alex has a branch that simplifies the GDK drawing and scrolling model. Initial tests with this are very encouraging, so it will likely find its way into 3.10 after the few remaining problems have been fixed.

Ryan has committed a nice speedup to GObject instantiation, and while at it, made GLib behave nicer when running under valgrind – it is no longer necessary to set G_SLICE=always-malloc manually.

Once again, thanks to the OLPC for hosting us in their offices, and thanks to the GNOME foundation for travel assistance.

About the GNOME 3 Application Development book

Yesterday I read Danielle’s review of GNOME 3 Application Development: Beginner’s Guide and I would like to write some comments about it because my name figures in there.

I was involved in the book as a technical reviewer and I reviewed a total of 5 chapters. This was the first time I did so for a printed book.
After reviewing those chapters, I asked my contact at Packt Publishing to give me a final draft before publishing it and associating my name with it, because I really wanted some of the issues I had pointed out to be addressed.
After a while, to my surprise, I received an email from that contact announcing that the book had been published and thanking me for my contribution. Turns out they didn’t respect my request and included my name directly! When I asked the contact about our “arrangement”, he apologized and said it was his fault…

I am still waiting for my printed copy, and I was hoping that some of the issues I had reported were corrected, but after reading Danielle’s review it is pretty clear to me that they weren’t. That’s surprising to me, because I made it clear that some of the issues were very important, and I assumed it’d be the editor/publisher’s job to take care of that.

Writing and producing a book is a difficult thing, and in no way do I intend to bash the author or the publisher with this post (I haven’t even looked at the final version yet), but I wanted to clarify how my name came to be associated with it.

Hopefully next time Packt will take into consideration what their technical reviewers say, and not just use their names for the book.

April 22, 2013

Multi-part items in Smoothie

Smoothie makes it really easy to load ListView/GridView items asynchronously, off the UI thread. It handles all the complexity from gestures, threads, scrolling state, preloading, and view recycling behind a simple API.

Up until now, one of the biggest limitations of the Smoothie API has been the lack of proper support for multi-part items. What is a multi-part item? It’s a ListView/GridView item composed of multiple parts that have to be loaded asynchronously with different priorities as you scroll.

Classic example: a list of photos with items composed of the photo image and the author’s avatar—both loaded from the cloud. With the existing API, Smoothie would force you to load the whole content of each item in one go. This means you were forced to load both the main photo image and the avatar image for each item before loading the next item in the list.

What if you wanted to start loading the main photo image of all visible items before loading their respective avatars? The photos are probably the content your users are actually interested in after all. That’s what the multi-part item support is about. It allows you to split the loading of each item into multiple asynchronous operations with different global priorities.

So, how would you implement the above example assigning higher priority to the main photo image over the avatar using Smoothie? Assuming you’re already familiar with Smoothie’s API, just follow these steps:

  1. Override the getItemPartCount() method from ItemLoader. Return the number of parts the item in the given Adapter position has.
  2. Handle the new itemPart argument accordingly in loadItemPartFromMemory(), loadItemPart(), and displayItemPart(). These methods will be called once for each item part.

The item parts will have indexes starting from zero. e.g. for items with 2 parts, the part indexes will be 0 and 1. The indexes also define the relative priority between parts. Smoothie will load the part with index 0 for all visible items before loading part with index 1.

Important note: I had to break API backwards compatibility. If you don’t really need multi-part items, the only change you’ll have to make in your code is to subclass from SimpleItemLoader instead of ItemLoader. SimpleItemLoader is an ItemLoader specialized in single-part items that hides all the part-related bits from the API.

The updated documentation contains code samples and a more detailed overview of the new API. Grab the latest code while it’s hot. Feedback, bug reports, and patches are all very welcome as usual.

Why feed reading is an open web problem, and what browsers could do about it

I’ve long privately thought that Firefox should treat feed reading as a first-class citizen of the open web, and integrate feed subscribing and reading more deeply into the browser (rather than the lame, useless live bookmarks.) The impending demise of Reader has finally forced me to spit out my thoughts on the issue. They’re less polished than I like when I blog these days, but here you go – may they inspire someone to resuscitate this important part of the open web.

What? Why is this an open web problem?

When I mentioned this on twitter, an ex-mozillian asked me why I think this is the browser’s responsibility, and particularly Mozilla’s. In other words – why is RSS an open web problem? why is it different from, say, email? It’s a fair question, with two main parts.

First, despite what some perceive as the “failure” of RSS, there is obviously a demand by readers to consume web content as an automatically updated stream, rather than as traditional pages.[1] Google Reader users are extreme examples of this, but Facebook users are examples too: they’re no longer just following friends, but companies, celebrities, etc. In other words, once people have identified a news source they are interested in, we know many of them like doing something to “follow” that source, and get updated in some sort of stream of updates. And we know they’re doing this en masse! They’re just not doing it in RSS – they’re doing it in Twitter and Facebook. The fact that people like the reading model pioneered by RSS – of following a company/news source, rather than repeatedly visiting their web site – suggests to me that the widely perceived failure of RSS is not really a failure of RSS, but rather a failure of the user experience of discovering and subscribing to RSS.

Of course, lots of things are broadly felt desires and aren’t integrated into browsers – take email, for example. So why are feeds different? Why should browsers treat RSS as a first-class web citizen in a way they don’t treat other things? I think the difference is that if closed platforms (not just web sites, but platforms) become the only (or even best) way to experience “reading streams of web content”, that is a problem for the web. If my browser doesn’t tightly integrate email, the open web doesn’t suffer. If my browser doesn’t tightly integrate feed discovery and subscription, well, we get exactly what is happening: a mass migration away from consuming (and publishing!) news through the open web, and instead it being channeled into closed, integrated publishing and subscribing stacks like FB and Twitter that give users a good subscribing and reading experience.

To put it another way: Tantek’s definition of the open web (if I may grotesquely simplify it) is a web where publishing content, implementing software that consumes that content, and accessing the content are all open/decentralized. RSS[2] is the only existing way to do stream-based reading that meets these requirements. So if you believe (as I do) that reading content delivered in a stream is a central part of the modern web experience, then defending RSS is an important part of defending the open web.

So that’s, roughly, my why. Here’s a bunch of random thoughts on what the how might look like:

Discovery

When you go to CNN on Facebook, “like” – in plain English, with a nice icon – is right up there, front and center. RSS? Not so much. You have to know what the orange icon means (good luck with that!) and find it (either on the website or, back in the day, in the browser toolbar). No wonder no one uses it, when there is no good way to figure out what it means. Again, the failure is not the idea of feeds – the failure is in the way it was presented to users. A browser could do this the brute-force way (is there an RSS feed? show a notice bar offering to subscribe), but that would probably get irritating fast. It would be better to be smart about it. Have I visited nytimes.com five times today? Or five days in a row? Then give me a notice bar: “hey, we’ve noticed you visit this site an awful lot. Would you like to get updates from it automatically?” (As a bonus, implementing this makes your browser the browser that encourages efficiency. ;)

Subscription

Once you’ve figured out you can subscribe, then what? As it currently stands, someone tells you to click on the orange icon, and you do, and you’re presented with the NASCAR problem, made worse because once you click, you have to create an account. Again, more fail; again, not a problem inherent in RSS, but a problem caused by the browser’s failure to provide an opinionated, useful default.

This is not an easy problem to solve, obviously. My hunch is that the right thing to do is provide a minimum viable product for light web users – possibly by supplementing the current “here are your favorite sites” links with a clean, light reader focused on only the current top headlines. Even without a syncing service behind it, that would still be helpful for those users, and would also encourage publishers to continue treating their feeds as first-class publishing formats (an important goal!).

Obviously solving the NASCAR problem is still hard (as is building a more serious built-in app), but perhaps the rise of browser “app stores” and web intents/web activities might ease it this time around.

Other aspects

There are other aspects to this – reading, social, and provision of reading as a service. I’m not going to get into them here, because, well, I’ve got a day job, and this post is a month late as-is ;) And because the point is primarily (1) improving the RSS experience in the browser needs to be done and (2) some minimum-viable products would go a long way towards making that happen. Less-than-MVPs can be for another day :)

  1. By “RSS” and “feeds” in this post, I really mean the subscribing+reading experience; whether the underlying tech is RSS, Atom, Activity Streams, or whatever is really an implementation detail, as long as anyone can publish to, and read from them, in distributed fashion.
  2. again, in the very broad sense of the word, including more modern open specifications that do basically the same thing

April 20, 2013

GTK+ Hackfest 2013/Day 1 & 2

Day 1

it turns out that this week wasn’t the best possible to hold a hackfest in Boston and Cambridge; actually, it was the worst. what was supposed to be the first day passed with us hacking in various locations, mostly from home, or hotel lobbies. nevertheless, there were interesting discussions on experimental work, like a rework of the drawing and content scrolling model that Alex is working on.

Day 2

or Day 1 redux

with the city-wide lockdown revoked, we finally managed to meet up at the OLPC offices and start the discussion on Wayland, input, and compatibility; we took advantage of Kristian attending so we could ask questions about client-side decorations, client-side shadows, and Wayland compatibility. we also discussed clipboard, and drag and drop, and the improvements in the API that will be necessary when we switch to Wayland — right now, both clipboard and DnD are pretty tied to the X11 implementation and API.

after lunch, the topic moved to EggListBox and EggFlowBox: scalability, selection, row containers, CSS style propagation, and accessibility.

we also went over a whole set of issues, like positioning popups; high resolution displays; input methods; integrating the Gd widgets into GTK+, and various experimental proposals that I’m sure will be reported by their authors on Planet GNOME soon. :-) it was mostly high level discussion, to frame the problems and bring people up to speed with each problem space and potential/proposed solutions.

we’d all want to thank OLPC, and especially Walter Bender, for being gracious hosts at their office in Cambridge, even on a weekend, as well as the GNOME Foundation.

GNOME Music development status

Over the last weeks, a lot of volunteers have shown up to develop (GNOME) Music.

Now we can browse the albums and their content, making it our most complete view. Playback in the albums view and songs view is in development (it works, but is buggy).

Screenshot from 2013-04-20 19:40:24

Screenshot from 2013-04-20 19:39:30

We are also working heavily on the artist view, trying to match the mockups; currently the code delivers the following screenie:

Screenshot from 2013-04-20 19:41:08

There is still a lot to be done, and we created a semi-roadmap of our development plan. Phase 1 should be completed within the next 2-3 weeks. We also moved our UI development to Glade.

I am very happy with the contributor turnout (no special order):

  • Vadim Rutkovsky
  • Eslam Mostafa
  • Paolo Borelli
  • Guillaume Quintard
  • Allan Day
  • Jakub Steiner
  • Shivani Poddar
  • Sriram Ramkrishna
  • Hylke Bons

Nice to have this mix of old and new contributors working together. If you're interested, join us in #gnome-music on GIMPNet. This is where the communication happens.

Also, I would like to thank Next Tuesday for sponsoring part of my time working on GNOME Music.


Three Types of Documentation You Should Stop Writing

How do you decide what to write about? How do you organize what you’ve written? Often, your view will be shaped by the technology you’re used to. Mallard users will think of topics and guides. DITA users will turn to tasks and concepts. DocBook users will line up chapters and sections. All of these have their pros and cons, but there are three types of documents you need to stop writing.

README: Read you? I’ll read what I like, thankyouverymuch. What’s in this README file? Instructions for how to use the software? How to install it? Develop plugins for it? Start contributing to it? The answer to all of these is yes, and much more. I just don’t know what’s in there. All I know is that somebody thinks I should read it. Maybe.

TODO: I want to know what I can do with your software today, not tomorrow, and certainly not in your imagined tomorrow. By all means, keep a TODO list. Better yet, use an issue tracker. Software development is hard work, and we all need help keeping track of what we need to do. But don’t put it in your user documentation.

FAQ: I don’t care if my question is frequently asked. There are many ways you could organize information, but an FAQ isn’t a taxonomy. It’s a brain dump of answers to some questions somebody had. Worse, it’s often a brain dump of answers to questions the developers think you should ask. (Did anybody really ask how your project got its witty name?) Identify the valuable information in your FAQ and take the time to work it into the organizational structure of your documentation.

All of these are failures to identify the audience and organize information for them. A writer’s job doesn’t end with writing some words and just putting them somewhere. When writing and organizing, always think of where your reader is likely to look for the information you’re providing.
