GNOME.ORG


July 15, 2018

GUADEC 2018

I would like to begin this special blog post by congratulating everybody for contributing to a memorable GUADEC. This was my first time officially attending the GUADEC conference, after visiting some of the events held in Manchester during the 20th edition last year, and this time it was truly an amazing experience.

After countless talks and many social events that took place during the conference, I have to say my favourite event of them all would definitely have to be the newcomers lunch. At first, I regarded it as nerve-racking. Right before my eyes, the whole thing unfolded as the most frightening social nightmare: having to meet a dozen unknown people, have lunch and socialise with them, all while avoiding the embarrassment of smiling mid-lunch with a piece of spinach stuck between my teeth (imagine the horror 😬). Looking back now, I laugh every time I remember this initial thought. I had the chance to meet some of my favourite GUADEC attendees, whom I might not have otherwise met, and I have looked forward to seeing them all together ever since. It is therefore really fitting that I thank Carlos Soriano for bringing us together and mediating our nice little lunch break so wonderfully, and a thank you also goes to all of my fellow newcomers for being so easy-going about it.

Some of my other favourite events were the talks which took place during the core days. My top favourites were Building for Humans and Designing GNOME Mobile, but all of the other talks I attended were equally interesting, inspiring and easy to follow.

I also had the opportunity to deliver a lightning talk on modernising Five or More during the newcomer lightning talks time slot, and then to listen to the other newcomers talking about the interesting projects they are currently working on and the progress they have achieved thus far.

I am very lucky to have chosen to contribute to GNOME as a GSoC student, as I felt truly welcomed in the community. I am also overjoyed to have had the chance to attend GUADEC, and I will hopefully get to see you all at GUADEC in the years to come. ✨


Through this blog post, I would like to thank the organising team for the effort and dedication put into holding the GUADEC conference in the beautiful city of Almería. Without all of your hard work I would not be writing this post now.

To the women of GNOME, thank you for kindly receiving me at the women’s dinner and sharing your experiences with me. I truly appreciate it, and I will try my best to keep in touch with you all and continue to share ideas and experiences with you.

Thank you to everyone who interacted with me after my lightning talk on modernising Five or More. It really means the world to me that you came by to say hi, offered feedback, or even volunteered to help with some aspects.

Also, a big thank you goes to the GNOME Foundation for offering travel and accommodation sponsorship.

Thank you everybody, again, for making this such an unforgettable memory.


July 14, 2018

Application screenshots with Gitlab CI

The fresh new tooling used for development in the GNOME project (GitLab, Meson, Docker, Flatpak) has a lot of potential.

Some applications are providing nightly Flatpaks using GitLab CI, and while that is nice, I wanted to start with something simple: taking a screenshot of the application built with GitLab CI and Meson. I asked on #gnome-hackers, and several hackers jumped right in with ideas, but it turned out no one had done it recently.

Thanks to the suggestions and a couple of DuckDuckGo searches, I implemented screenshot-taking for gnome-calculator (which already had initial GitLab CI pipelines thanks to Robert Ancell); anyone can check out the GitLab CI config on the ci-screenshots branch.

Screenshot of Calculator from Gitlab CI
The idea is fairly simple (inspired by a GitHub repo); a sketch follows after the list:
* Build the application using meson
* Start Xvfb, a virtual framebuffer X Server
* Start the application, in this case Calculator
* Take a screenshot using the import tool from ImageMagick (using the window name you want to screenshot or the whole desktop)
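
Roughly, the CI job's script could be a handful of shell commands like the sketch below (the display number, the built binary path and the output file name are illustrative here, not the exact ci-screenshots configuration):

# build the application with meson and ninja
meson _build
ninja -C _build
# start Xvfb, a virtual framebuffer X server, and point the app at it
Xvfb :99 -screen 0 1024x768x24 &
export DISPLAY=:99
# launch the freshly built Calculator and give it a moment to draw its window
./_build/src/gnome-calculator &
sleep 5
# grab the whole virtual desktop with ImageMagick's import tool
import -window root calculator.png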

This seems to work, with some glitches:
* the fallback appmenu mode is used, which means the application icon is shown in the headerbar and reveals the appmenu on click (this is something I use, but is not the default)
* The titlebar contains all three window controls; on default Fedora there's only the close button.
* The fonts also do not look like the default Cantarell for me
* A strange border appears all around the window

All of these issues might be caused by the fact that there's no real user with default GTK/GNOME and session settings, so GitLab CI is using some sort of Fedora system defaults (I only added the screenshot stuff to the Fedora build for now). If you happen to have an idea on how to fix/improve any of the above, just let me know on #gnome-hackers (evfool) or in a comment.

[GSoC 2018] Welcome Window Integration in Pitivi – Part 4

In my last post (link), I talked about:

  1. Changing the layout of recent projects items to display meta info regarding a project such as its directory and last updated timestamp, and
  2. Displaying greeter message when there are no recent projects.

In this post, I want to introduce the two new features that I have added to the new welcome window in Pitivi:

  1. “Search” feature for easy browsing of recent projects (box marked as “1” in the screenshot below), and
  2. “Remove” feature to allow removing project(s) from recent projects list (box marked as “2” in the screenshot below).

Screenshot of the new welcome window, with the search feature marked as “1” and the remove feature marked as “2”

“Search” feature…

The Search feature is meant to allow users to easily browse through the recent projects list to quickly get to the project they want to work on. This feature can be pretty handy if there are lots of recent projects.

I have implemented it using Gtk.SearchEntry and it is focussed by default to allow users to quickly search for a project as soon as the welcome window appears.

Recent projects found for search query “demo1”
No recent project found for search query “demo12”

“Remove” feature…

The Remove feature is meant to remove unwanted or unused projects from the recent projects list. The removed projects are not deleted from disk. One use case for this feature is to unclutter the recent projects list after we have finished working on a project and no longer need it.

Remove project(s) screen
Selected “demo4” project for removal
“demo4” project removed from recent projects list

The next and last task under “Welcome Window Integration in Pitivi”, as per my GSoC project, is to integrate project thumbnails into the recent projects list. I am currently working on this task and hope to finish it by next week.

I will keep posting my progress on this blog. Until next time.

Stay tuned 🙂

The Flatpak BoF at Guadec

Here is a quick summary of the Flatpak BoF that happened last week at Guadec.

1.0 approaching fast

We started by going over the list of outstanding 1.0 items. It is a very short list, and they should all be included in an upcoming 0.99.3 release.

  • Alex wants to add information about renaming to desktop files
  • Owen will hold his OCI appstream work for 1.1 at this point
  • James is interested in getting more information from snapd to portal backends, but this also does not need to block 1.0, and can be pulled into a 1.1 portals release
  • Matthias will review the open portal issues and make sure everything is in good shape for a 1.0 portal release

1.0 preparation

Alex will do a 0.99.3 release with all outstanding changes for 1.0 (Update: this release has happened by now). Matthias will work with Allan and Bastien on the press release and other materials. Nick is very interested in having information about runtime availability, lifetime and stability easily available on the website for 1.0.

We agreed to remove the ‘beta’ label from the flathub website.

Post 1.0 plans

There was a suggestion that we should have an autostart portal. This request spawned a bigger discussion of application life-cycle control, background apps and services. We need to come up with a design for these intertwined topics before adding portals for it.

After 1.0, Alex wants to focus on tests and ci for a while. One idea in this area is to have a scriptable test app that can make portal requests.

Automatic migration on renames or EOL is on Endless’ wishlist.

Exporting repositories in local networks is a feature that Endless has, but it may end up upstream in ostree instead of flatpak.

Everybody agreed that GNOME Software should merge apps from different sources in a better way.

For runtimes, the GNOME release team aims to have the GNOME runtime built using BuildStream, on top of the freedesktop 1.8 runtime. This may or may not happen in time for GNOME 3.30.

July 13, 2018

GUADEC 2018


GUADEC, the GNOME Users And Developers European Conference, is an annual conference that takes place in Europe, and this year it was in Spain, so I had to go. I became a Foundation member this year and I have two Google Summer of Code students from the GNOME organization working on Fractal, so this year's GUADEC was an important one for me.

This year GUADEC was in Almería and I was there for the main days. I couldn't stay for the full GUADEC because I was using my holidays, so I stayed in Almería from Thursday the 5th to Sunday the 8th and then continued to Murcia to enjoy my free time at the beach.


Not my first GUADEC

Until the end of 2017 I wasn't a GNOME Foundation member, but I have been making small contributions to GNOME since 2010.

My first GUADEC was the Gran Canaria Desktop Summit, a big event that joined GUADEC and Akademy in one city, so there were a lot of desktop developers from GNOME, KDE and other open source projects. I was there with some friends; I had been working with some GNOME developers on the Guadalinex project the year before, so I had started to get to know the GNOME community.

The next year I was at the GUADEC in The Hague. That year I attended as a developer: I was working on Evince accessibility support, so my employer paid for the travel and all costs.

After that, I no longer had paid work related to GNOME and started to work with Qt, so I began going to Akademy instead, but I didn't leave the GNOME community; I was always a GNOME user and kept making small contributions.

My last GUADEC was in 2012, in A Coruña, Spain again. I'm actually in the photo in the Wikipedia GUADEC article from that year; try to find me there.


Almería

This year's GUADEC was a great event, with good social events. We had a beach party with a big paella and the famous Ice Cream Death Match.

We had a good time in the Alcazaba of Almería, with a great performance by the GNOME crew.


Then we had a flamenco show in a great place with drinks and some food.

GNOME

As I said before, this is not my first GUADEC, but this one was an important one for me, because Fractal is becoming important and this year I've been working with a lot of great people, and it was good to meet them at GUADEC.

This year I've been to the Madrid Rust hackfest and the Strasbourg Fractal hackfest. Unfortunately, I couldn't stay for the working days this time, even though there were interesting BoFs about Rust and other technologies.

It was great to meet Julian and Tobias again, and to meet in person other people that I've been working with, like Carlos Soriano and Alberto Fanjul.

guadec 2018 group photo

The most important thing from this GUADEC, for me, is the growth plan for GNOME: the new Board of Directors wants to grow the GNOME project, attracting more money and more developers.

The GNOME project started to lose interest and developers when the mobile app ecosystem began to grow, but it seems that interest in this great project is now increasing again, thanks to a lot of innovative projects like Flatpak, GNOME Builder and the Librem 5, and a lot of great changes like the GitLab migration. The GNOME Board of Directors is doing great work too, increasing the money spent on hackfests to make it easier for developers to work on GNOME projects.

I have no doubt that GNOME will grow in developers and importance during the next year. I've been around for a long time, doing a little in my free time, but I'll try to spend more time on GNOME and I'll try to find a way to get paid for it :D

See you at the 2019 GUADEC.

GUADEC 2018 thoughts

GUADEC this year was another good one; thank you to the organisers for putting on a great and welcoming conference, and to Endless for sending me.

Unfortunately I couldn’t make the first two days due to a prior commitment, but I arrived on the Sunday in time to give my talks. I’ve got a lot of catching up to do with the talks on Friday and Saturday — looking forward to seeing the recordings online!

The slides for my talk on the state of GLib are here and the notes are here (source for them is here). I think the talk went fairly well, although I imagine it was quite boring for most involved — I’m not sure how to make new APIs particularly interesting to listen to!

The slides for my talk on download management on metered connections (the ‘Mogwai’ project) are here and the notes are here (source for them is here). I think this talk also went fairly well, and I’m pleased by how many people turned up and asked insightful questions. As I said in the talk, my time to spend on this project is currently limited, but I am interested in mentoring new contributors on it. Get in touch if you’re interested.

During the birds of a feather days, I spent most of my time on GLib, clearing out old bugs. We had the GLib BoF during the GTK+ one on Monday. The notes are here. Emmanuele has already done a good writeup of the results of the BoF here; and Matthias has written up the GTK+ BoF itself here.

There were some good discussions over dinner during the BoF days about people’s niggles with GLib, which has set a few ideas in motion in my head which I will try and explore over the coming few months, once the 2.58 release is out of the way.

It was good to catch up with everyone, great to see AlmerĂ­a and sample its food and drink, and nice to finally meet some of my colleagues from Endless for the first time!

My Perspective on This Year’s GUADEC

Greetings GNOMEies

This year, I had the pleasure of attending GUADEC in Almería, Spain. Lots of things happened, and I believe some of them are important to share with the greater community.

GUADEC

This year’s GUADEC happened in Almería, Spain. It turns out Almería is a lovely city! Small and safe, locals were friendly and I managed to find pretty good vegan food with my broken Spanish.

I was particularly happy whenever locals noticed my struggle with the language, and helped and taught me some handy words. This alone was worth the entire trip!

Getting there was slightly complicated: there were no direct flights, nor single-connection routes. I ended up having to take a route with four connections, which was somewhat exhausting. Apparently other people also had troublesome journeys.

The main accommodation and the main venue could have been closer, but commuting was not a problem whatsoever because the GUADEC team scheduled a morning bus. A well-handled situation, I must say — it turns out that commuting with other GNOME folks sparked interesting discussions and we had some interesting ideas there. The downside is that, if anyone had wanted the GNOME Project to die, we were basically all in a single bus 😛

Talks

There were quite a few interesting talks this year. My personal highlights:

BoFs

To me, the BoFs were the best part of this year's GUADEC. The number of things that happened and the hard conversations we had were all extremely valuable. I think I made a good selection of BoFs to attend, because the ones I attended were interesting and valuable. Decisions were made, discussions were held, and overall it was productive.

I was particularly involved in five major areas: GNOME Shell & Mutter, GJS, GTK, GNOME Settings, and GNOME To Do.

GNOME Shell & Mutter

A big cleanup was merged during GUADEC. This will probably mean small adaptations in extensions, but I don't particularly think it's groundbreaking.

On the second BoF day, Jonas Ådahl and I dived into the Remote Desktop on Wayland work to figure out a few bugs we were having. Fortunately, PipeWire devs were present and we tracked down some deadlocks in the code. Jonas also gave a small lecture on how the KMS-based renderer of the Wayland code path works (thanks!), and I feel I'm more educated in that somewhat complex part of the code.

As of today, Carlos Garnacho's paint volume rework was merged too, after extensive months of testing. It was a high-impact piece of work, and certainly reduces Mutter's CPU usage in certain situations.

At the very last day, we talked about various ideas for further performance improvements and cleanups on Mutter and GNOME Shell.  I myself am on the last steps of working on one of these ideas, and will write about it later.

As a sidenote, I would like to add that I can only work on that because Endless is sponsoring me to do so.


Exciting times for GNOME Shell ahead!

GJS

The git master GJS received a bunch of memory optimizations. In my very informal testing, I could measure a systematic 25~33% reduction in the memory usage of every GJS-based application (Maps, Polari and GNOME Shell). However, I can't guarantee the precision of these results. They're just casual observations.

Unfortunately, this rework was making GNOME Shell crash immediately on startup. Philip Chimento tricked me into fixing that issue, and so this happened! I'm very happy with the result, and it looks like it'll be an exciting release for GJS too!

Thanks Philip for helping me deep dive into the code.

GTK

Matthias already wrote an excellent write-up about the GTK BoF, and I won’t duplicate it. Check his blog post if you want to learn more about what was discussed, and what was decided.

GNOME Settings

At last, a dedicated Settings BoF happened on the last day of the conference. Surprisingly, it had a higher number of attendees than I was expecting! A few points on our agenda that were addressed:

  • Maintainership: GNOME Settings has a shared maintainership model with different levels of power. We’ll add all the maintainers to the DOAP file so that anyone knows who to ping when opening a merge request against GNOME Settings.
  • GitLab: we want to finish the move to GitLab, so we’ll do like other big modules and triage Bugzilla bugs before moving them to GitLab. With that, the GitLab migration will be over.
  • Offloading Services to Systemd: Iain Lane has been working on starting sessions with systemd, and that means that we’ll be able to drop a bunch of code from GNOME Settings Daemon.
  • Future Plans: we’ve spent a good portion of this cycle cleaning up code. Before the final stable release, we’ll need to do some extensive testing on GNOME Settings. A bit of help from tech enthusiasts would be fantastic!

We should all thank Robert Ancell for proposing and organizing this BoF. It was important to get together and make some decisions for once! Also, thanks Bastien for being present and elucidating our problems with historical context – it certainly wouldn’t be the same without you!

GNOME To Do

Besides these main tracks, Tobias and I could finally sit down and review GNOME To Do's new layout. Delegating work to whoever knows best is a good technique:

Tobias’ GNOME To Do mockups in my engineering notebook.

I was also excited to see GNOME To Do stickers there:

Sexy GNOME To Do stickers, a courtesy of Jakub

It’s fantastic to see how GNOME To Do is gaining momentum these days. I certainly did not expect it three years ago, when I bootstrapped it as a small app to help me deal with my Google Summer of Code project on Nautilus. It’s just getting out of control.

Epilogue

Even though I was reluctant to go, this GUADEC turned out to be an excellent and productive event. Thanks to all the organizers and volunteers who worked hard on making it happen – you all deserve a drink and a hug!

I was proudly sponsored by the GNOME Foundation.


Gtk4 Flatpak example

As part of Ernestas Kulik's work on porting Nautilus to gtk4, he has created a tagged entry widget to replace the libgd tagged entry, to eventually be upstreamed to gtk proper. To make testing easy, he created a Flatpak file for building a simple app with this widget, which also serves as an example of how to create a simple app with gtk4.


So feel free to grab the YAML Flatpak file and start to test out gtk4 with your toy applications!


Fixing issues with the “New Messages” divider

Fractal is a Matrix client for GNOME and is written in Rust. Matrix is an open network for secure, decentralized communication.

This week, I’ve been working on two issues with the “New Messages” divider. This divider is placed just above the first new message when you enter a room. It looks like this:Capture du 2018-07-13 10-47-30

Show divider in rooms with new messages at startup

Previously, there were times when the divider wouldn’t show at startup. Fractal has a variable called last_viewed_messages of type HashMap<String, String> which maps a string holding the ID of a room to another string containing the ID of the last viewed message of that room. This variable was stored in Fractal’s cache in order to be saved between startups. The problem was that it wasn’t updated if the user used Matrix from somewhere else.

The solution was to enable Fractal to fetch the last viewed messages from the homeserver during the initial sync instead of storing/loading to/from the cache.

Read receipts in Matrix are ephemeral events in a room which indicate up to which event a user has read the room’s content. So in order to know what the last viewed message of a room is, you look for the event ID that is mapped to a read receipt bearing the user’s ID (see more details here). Furthermore, the ephemeral events (read receipts and typing events) are sent with the responses to sync requests (see it here). So the strategy is to update the last viewed messages by parsing the read receipts received during the initial sync.

I’ve first added a field in the `Message` model to hold the read receipts associated to a message, if any. See this commit. And I’ve implemented the extraction of the read receipts from the sync responses (see it here). Then I’ve removed the last viewed messages from the cache (in this commit). And finally, I’ve implemented the update of the last viewed messages during the initial sync (see it here).

Only show divider for messages from other people

The second issue was that the divider could be added even for your own messages. This was because we couldn’t simply compare your user ID with the ID of the sender of the last viewed message; we had to compare it with the ID of the sender of the first new message instead!

So we had to calculate the first new message which is not from the user themselves (if it exists) each time we enter a room, and add the divider just above it.

You can see the full details of what was done in this MR.

July 12, 2018

On Avoiding Conflation of Political Speech and Hate Speech

If you're one of the people in the software freedom community who is attending O'Reilly's Open Source Software Convention (OSCON) next week here in Portland, you may have seen debate about O'Reilly and Associates (ORA)'s surreptitious Code of Conduct change (and quick revocation thereof) to name “political affiliation” as a protected class. If you're going to OSCON or plan to go to an OSCON or ORA event in the future, I suggest that you familiarize yourself with this issue and the political historical context in which these events of the last few days take place.

First, OSCON has always been political: software freedom is inherently a political struggle for the rights of computer users, so any conference including that topic is necessarily political. Additionally, O'Reilly himself had stated his political positions many times at OSCON, so it's strange that, in his response this morning, O'Reilly admits that he and his staff tried to require via agreements that speakers … refrain from all political speech. OSCON can't possibly be a software freedom community event if ORA's intent … [is] to make sure that conferences put on for the exchange of technical information aren't politicized (as O'Reilly stated today). OTOH, I'm not surprised by this tack, because O'Reilly, in large part via OSCON, often pushes forward political views that O'Reilly likes, and marginalizes those he doesn't.

Second, I must strongly disagree with ORA's new (as of this morning) position that Codes of Conduct should only include “protected classes” that the laws of a particular country currently recognize. Codes of Conduct exist in our community not only as mechanism to assure the rights of protected classes, but also to assure that everyone feels safe and free of harassment and hate speech. In fact, most Codes of Conduct in our community have “including but not limited to” language alongside any list of protected classes, and IMO all of them should.

More than that, ORA has missed a key opportunity to delineate hate speech and political speech in a manner that is sorely needed here in the USA and in the software freedom community. We live in a political climate where our Politician-in-Chief governs via Twitter and smoothly co-mingles political positioning with statements that would violate the Code of Conduct at most conferences. In other words, in a political climate where the party-ticket-headline candidate is exposed for celebrating his own sexual harassing behavior and gets elected anyway, we are culturally going to have trouble nationwide distinguishing between political speech and hate speech. Furthermore, political manipulators now use that confusion to their own ends, and we must be ever-vigilant in efforts to assure that political speech is free, but that it is delineated from hate speech, and, most importantly, that our policy on the latter is zero-tolerance.

In this climate, I'm disturbed to see that O'Reilly, who is certainly politically savvy enough to fully understand these delineations, is ignoring them completely. The rancor in our current politics — which is not just at the national level but has also trickled down into the software freedom community — is fueled by bad actors who will gladly conflate their own hate speech and political speech, and (in the irony that only post-fact politics can bring), those same people will also accuse the other side of hate speech, primarily by accusing intolerance of the original “political speech” (which was, of course, from the start a mix of hate speech and political speech). (Examples of this abound, but one example that comes to mind is Donald Trump's public back-and-forth with San Juan Mayor Carmen Yulín Cruz.) None of ORA's policy proposals, nor O'Reilly's public response, address this nuance. ORA's detractors are legitimately concerned, because blanketly adding “political affiliation” to a protected class, married with an outright ban on political speech, creates an environment where selective enforcement favors the powerful, and furthermore allows the Code of Conduct to more easily become a political weapon by those who engage in the conflation practice I described.

However, it's no surprise that O'Reilly is taking this tack, either. OSCON (in particular) has a long history — on political issues of software freedom — of promoting (and even facilitating) certain political speech, even while squelching other political speech. Given that history (examples of which I include below), O'Reilly shouldn't be surprised that many in our community are legitimately skeptical about why ORA made these two changes without community discussion, only to quickly backpedal when exposed. I too am left wondering what political game O'Reilly is up to, since I recall well that Morozov documented O'Reilly's track record of political manipulation in his article, The Meme Hustler. I thus encourage everyone who attends ORA events to follow this political game with a careful eye and a good sense of OSCON history to figure out what's really going on. I've been watching for years, and OSCON is often a master class in achieving what Chomsky critically called “manufacturing consent” in politics.

For example, back in 2001, when OSCON was already in its third year, Microsoft executives went on the political attack against copyleft (calling it unAmerican and a “cancer”). O'Reilly, long unfriendly to copyleft himself, personally invited Craig Mundie of Microsoft to have a “Great Debate” keynote at the next OSCON — where Mundie would “debate” with “Open Source leaders” about the value of Open Source. In reality, O'Reilly put on stage lots of Open Source people with Mundie, but among them was no one who supported the strategy of copyleft, the primary component of Microsoft's political attacks. The “debate” was artfully framed to have only one “logical” conclusion: “we all love Open Source — even Microsoft (!) — it's just copyleft that can be problematic and which we should avoid”. It was no debate at all; only carefully crafted messaging that left out much of the picture.

That wasn't an isolated incident; both subtle and overt examples of crafted political messaging at OSCON became annual events after that. As another example, ten years later, O'Reilly did almost the same playbook again: he invited the GitHub CEO to give a very political and completely anti-copyleft keynote. After years of watching how O'Reilly carefully framed the political issue of copyleft at OSCON, I am definitely concerned about how other political issues might be framed.

And, not all political issues are equal. I follow copyleft politics because it's been my day job for two decades. But, I admit there are stakes even higher with other political topics, and having watched how ORA has handled the politics of copyleft for decades, I'm fearful that ORA is (at best) ill-equipped to handle political issues that can cause real harm — such as the current political climate that permits hate speech, and even racist speech (think of Trump calling Elizabeth Warren “Pocahontas”), as standard political fare. The stakes of contemporary politics now leave people feeling unsafe. Since OSCON is a political event, ORA should face this directly rather than pretending OSCON is merely a series of technical lectures.

The most insidious part of ORA's response to this issue is that, until the issue was called out, it seems that all political speech (particularly that in opposition to the status quo) violated OSCON's policies by default. We've successfully gotten ORA to back down from that position, but not without a fight. My biggest concern is that ORA nearly ran OSCON this year with the problematic combination of banning political speech in the speaker agreement, while treating “political affiliation” as a protected class in the Code of Conduct. Regardless of intent, confusing and unclear rules like that are gamed primarily by bad actors, and O'Reilly knows that. Indeed, just days later, O'Reilly admits that both items were serious errors, yet still asks for voluntary compliance with the “spirit” of those confusing rules.

How could it be that an organization that's been running the same event for two decades only just began to realize that these are complex issues? Paradoxically, I'm both baffled and not surprised that ORA has handled this issue so poorly. They still have no improved solution for the original problem that O'Reilly states they wanted to address (i.e., preventing hate speech). Meanwhile, they've cycled through a series of failed (and alarming) solutions without community input. Would it have really been that hard for them to publicly ask first: “We want to welcome all political views at OSCON, but we also detest hate speech that is sometimes joined with political speech. Does anyone want to join a committee to work on improvements to our policies to address this issue?” I think if they'd handled this issue in that (Open Source) way, the outcome would not have been the fiasco it's become.

A report from the Guadec GTK+ BoF

The GTK+ team had a full day planning session during the BoF days at Guadec, and we had a full room, including representatives from several downstreams, not just GNOME.

We had a pretty packed agenda, too.

GTK+ 3

We started out by reviewing the GTK+ 3 plans that we’ve outlined earlier.

In addition to what was mentioned there, we also plan to backport the new event controllers, to make porting to GTK+ 4 easier. We will also add meson build support to help with Windows builds.

The 3.24 releases will effectively be a continuation of the 3.22 branch and should be entirely safe to put out as stable updates in distributions.

We plan to release GTK+ 3.24.0 in time for GNOME 3.30.

GTK+ 4 leftovers

The bulk of the day was taken up by GTK+ 4 discussion. We’ve reviewed the list of leftover tasks on the roadmap:

  • Finish DND: Gestures on the GTK+ level, local shortcuts
  • Introduce GtkToplevel and cleanly support popovers
  • Add transformations
  • Create a shortcuts event controller to replace key bindings
  • Port GtkTextView to render nodes
  • Profile the cairo backend, make sure its performance is on par with GTK+ 3
  • Port various dependent libraries:
    • vte
    • webkit
    • libchamplain
    • gtk-vnc
    • gtk-spice

Most of these tasks have names next to them, but if you want to help with any of these tasks, by all means, contact us!

Noticeably absent from this list are a few things that were on the roadmap before:

  • Constraint-based layout (emeus)
  • Shader compiler and application provided shaders
  • Designer support

All of these can still happen if merge requests appear, but we don’t think that we should block on them. They can be developed externally to GTK+ 4, and become GTK+ 5 material.

GTK+ backends

We spent some time evaluating the state of GDK backends in GTK+ master.

The Windows backend is in OK shape. We have several people who help with maintenance and feature development for it, meson makes building it a lot easier, and we have ci for it.

The Quartz backend is in a much worse state. It has not been kept in buildable shape, nobody is providing fixes or feature development for it, and we don’t have ci. We had a macbook offered that could be used for ci, and it was suggested that we could use travis ci for OS X.

GTK+ timeline

We spent a long time on this, and did not reach a 100% consensus, but it seems realistic to aim for a GTK+ 4 release in spring of 2019, if we keep making good progress on the outstanding leftovers.

When we release GTK+ 3.96, we will also announce a date for GTK+ 4.0. We hope to be able to commit to a release before GNOME 3.32, so GNOME application developers can switch their master branches to GTK+ 4 without worrying about whether that will disrupt other development for 3.32.

Application porting

We really want feedback from application ports at this point. But we are in a bit of a difficult position, since we can’t plausibly claim to be done with major API work until the GtkToplevel and shortcuts controller work is done.

Our recommendation to app authors at this point is:

  • If you are a bit adventurous, do a port to 3.94 on a branch. It should be possible to keep it working without too much work during the remainder of GTK+ 4 development.
  • If you are not quite as adventurous, wait until 3.24 is released, use it to prepare your port, and port to GTK+ 3.96.
  • Either way, please make your port available to users for testing, either as a regular release, or as a Flatpak with a bundled GTK+.

GLib diversion

In the afternoon, we spent a while talking about GLib. We went over a laundry list of larger and smaller items. Notable highlights: GProperty may happen for 2.60 and we may be able to use g_autoptr soon.

Other ideas

We discussed a great number of other things that we could and should do.

For example, it was suggested (and generally agreed to) that we should merge gsk into gdk, since it is small and the internals are somewhat intertwined. It was also suggested to create subdirectories in gtk/, for example for the css machinery.

[GSoC 2018] Welcome Window Integration in Pitivi – Part 3

In my last post (link), I talked about Pitivi finally getting a Welcome window. In this window, the layout of the recent projects list was pretty basic – we were only showing the names of the projects.

Welcome window – “Recent Projects” list is only showing the names of projects

The next two tasks on my ToDo list were –

  1. Displaying meta info regarding a project, such as its directory and last updated timestamp, and
  2. Greeter message on Welcome window when there are no recent projects.

I have successfully completed these two tasks. The layout of the recent projects list now shows the project name, the project URI, and the project’s last-updated timestamp.

New layout of recent projects list

Note: If the project is inside the user’s home directory, we replace the home directory path with ~ (tilde). Also, the “Updated” info might not always be precise, as it does not display the project’s last-updated timestamp in a very specific manner (like 30 mins ago, 2 days ago, etc.) but in a vague manner (like Just now, Yesterday, etc.). The idea is to present this info in a nice human-readable format and not care too much about the accuracy of the timestamp, as it doesn’t matter much for a video editing app. Other GNOME apps, like Builder, display the timestamp in exactly the same manner.

This is what the empty (no recent projects) greeter screen looks like:


For now we have kept this screen very simple, as it is more of a one-time welcome screen and we don’t want to put a lot of effort into it while we have other important tasks at hand. Almost all the time, users will be dealing with the screen displaying recent projects.

The next two tasks on my ToDo list are:

  1. Adding search functionality for easy browsing of projects.
  2. Allowing removing items from “Recent Projects” list.

I will keep posting my progress on this blog. Until next time.

Stay tuned 🙂

July 11, 2018

Writing docs in a container

In February, Matthias Clasen started a series of blog posts about Fedora Atomic Workstation (now Team Silverblue) and Flatpak. I gave it a try to see how the container would work with the documentation tools.

The screenshot below shows the setup I used to submit this merge request. The buildah container is in the shell window on the right where git and Emacs operate in the /srv directory. At the same time on the Silverblue desktop, gitg and Yelp see the same files in the /var/srv directory.

Recently I launched buildah and found it wasn’t connecting to the network. It goes without saying that I needed to look no further than GUADEC for the solution (Matthias indicated that “--net=host” was now required on the command line). Now I create the container like this:

sudo chcon -R -h -t container_file_t /var/srv
sudo buildah run --net=host -v /var/srv:/srv:rslave fedora-working-container bash

Emacs bindings for Mallard are courtesy of Jaromír Hradílek.

GUADEC 2018: BoF Days

Birds of a feather flock together..

Monday went by with the Engagement BoF. I worked with Rosanna to finalize the annual report. Please help us proofread it! I have also started collecting information for the GNOME 3.30 release video. If you are a developer and you have exciting features for GNOME 3.30, please add them to the wiki. The sooner you do it, the happier I am.

Tuesday went by with the GUADEC BoF where we reflected on the conference as a whole and identified pain points and how we might improve them. Afterwards, glorious sandcastles were made at the Sandcastle BoF.

Wednesday morning I hosted the Developer Center BoF. It was a productive session where we identified what the developer experience currently consists of, the possible audiences, the variables coming into play, the challenges, the stakeholders who might be interested in its development, and the developer centers of other projects of a similar size to GNOME. I’ll write a blog post summarizing the BoF soon.

In the afternoon, Sam and I recorded audio in preparation for a possible Flatpak release video, and Britt helped master it. I also helped the GUADEC Video Editing BoF with generating intros and outros for this year’s GUADEC videos.

GUADEC is over and I am going home tomorrow. But there is a lot of stuff coming up in July for me. The GNOME annual report needs final review and publishing. We plan to have a developer center call by the end of July (if you are interested in participating, please mark your availability here). We also expect to hold a hackfest for the Developer Center after FOSDEM. And I have the GNOME release video and the Flatpak release video on my to-do list. GUADEC has been productive and I hope I can work on some of these projects in my free time (help is welcome!).

Thanks to the local team and all volunteers at GUADEC for a great conference!

News from GLib 2.58

Next September, GLib will hit version 2.58. There have been a few changes during the past two development cycles, most notably the improvement of the Meson build, which in turn led to an improved portability of GLib to platforms such as Windows, macOS, and Android. It is time to take stock of the current status of GLib, and to highlight some of the changes that will impact GLib-based code.

  • Meson – Thanks to the ongoing work of Nirbheek Chauhan and Xavier Claessens, the Meson build has been constantly improving, to the point that we can start switching to it as the default build system. The plan—as outlined on the mailing list—is to release GLib 2.58 using Meson, while keeping the Autotools build in tree and available in the release archive; then, we’ll drop the Autotools build during the following development cycle, and release GLib 2.60 without Autotools support. Linux distributors are very much welcome to start testing the Meson build in their builders (a rough build sketch follows after this list); we’ve been running the Meson build as part of our CI process for a while now, but more exposure will bring out any regressions that we missed; additionally, it would be stellar if people with toolchains other than GCC/Clang/MSVC started trying the Meson build and reported bugs. In the meantime, if you’re using GLib on macOS or Windows, we already recommend you switch to Meson to build GLib, as it’s easier and better integrated with those platforms than Autotools
  • Reliability and portability – GLib switched to GitLab alongside the rest of GNOME, which meant being able to run continuous integration outside of the GNOME Continuous builds. Now we run CI on multiple toolchains, multiple build systems, and multiple platforms for every commit and merge request, which significantly reduces the chances of a broken build. We’ve also improved the code coverage in the test suite. Of course, we could always do better; for instance, we don’t have a CI runner for macOS and the Solaris family of OSes, and more runners for the *BSD family would be greatly appreciated. We’ve issued a call for help, if you have a spare machine and some bandwidth that you can donate
  • File monitoring on *BSD – Apropos the *BSD family, the kqueue backend for file monitoring in GIO has been completely overhauled by Martin Pieuchot and Ting-Wei Lan; the new code is simpler, more robust, and passes all the tests
  • Use posix_spawn() for efficient process launching — Thanks to Daniel Drake, GLib now can use posix_spawn() under specific circumstances, if the platform’s C library supports it; this allows hitting fast paths in the kernel, compared to manually calling fork() + exec(); those fast paths are especially beneficial when running on memory constrained platforms
  • Reference counting types and allocations — GLib uses reference counting as a memory management and garbage collection mechanism in many of its types, but lacks the public API to allow other people to implement the same semantics in their own data structures; this leads to much copy-pasting and re-implementation, and typically to things like undefined behavior when it comes to saturation and thread safety. GLib 2.58 has grefcount and gatomicrefcount types, alongside their API, to reduce this duplication. Additionally, taking a cue from other languages like Rust, GLib provides a way to add reference counting semantics to memory allocations, through a low-level API that allows you to allocate structures that do not have a reference count field, and automatically add reference counting semantics to them
  • Deprecations – A few soft deprecations have become real deprecations in this last development cycle:
      • g_type_class_add_private() has finally been deprecated, five years after we introduced the instance private data macros; if you’re still using that function in your class initialization, please switch to G_DEFINE_TYPE_WITH_PRIVATE or G_ADD_PRIVATE
      • g_main_context_wait() is officially deprecated, but you should have already seen run time warnings about its use
      • gtester, the GTest harness provided by GLib, is deprecated; if you’re using Autotools, you should use the TAP harness that comes with Automake
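
For distributors who want to give the Meson build a try (as mentioned in the Meson item above), a rough sketch of the usual invocation follows; the build directory name and prefix are illustrative, and the GLib README has the authoritative instructions:

# configure, build and test GLib with Meson and Ninja
meson _build --prefix=/usr
ninja -C _build
meson test -C _build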

There have been lots of contributions in GLib, in this past cycle, thanks to the tireless efforts of Philip Withnall; he’s been instrumental in reviewing patches, triaging bugs, and implementing changes in the development process of the project. The switch over to GitLab has also improved the contribution process, with many more developers opening merge requests:

  • 2.54.0..c182cd68: 968 changesets from 143 developers, up from 412 changesets and 68 developers during the 2.53 development cycle
  • A total of 31851 lines added, 27976 removed (delta: +3875)
Developers with the most changesets
Philip Withnall 303 31.3%
Xavier Claessens 79 8.2%
Emmanuele Bassi 69 7.1%
Christoph Reiter 42 4.3%
Ting-Wei Lan 21 2.2%
Chun-wei Fan 21 2.2%
Nirbheek Chauhan 21 2.2%
Ondrej Holy 20 2.1%
Руслан Ижбулатов 20 2.1%
Mikhail Zabaluev 20 2.1%
Simon McVittie 15 1.5%
Matthias Clasen 14 1.4%
Christian Hergert 13 1.3%
Iñigo Martínez 12 1.2%
Bastien Nocera 10 1.0%
Rafal Luzynski 9 0.9%
Michael Catanzaro 9 0.9%
Will Thompson 8 0.8%
Allison Lortie 8 0.8%
Daniel Boles 8 0.8%

Make sure to test your code with GLib 2.57.2, the next development snapshot towards the 2.58.0 stable release.

July 10, 2018

GUADEC 2018

I’m feeling extremely grateful for the shot in the arm GUADEC provides by way of old friends, new friends, expert advice, enthusiasm, time-worn wisdom, and so many reminders of why we do this.

I use FreeCAD for freelance work, and build the development version from git periodically. There is a copr nightly build for recent versions of Fedora, but not for Rawhide. The first person to whom I related this experience, David King, said the software would be ideal for the Flatpak treatment. Since then I’ve been getting a tutorial on building the YAML manifest, and after four days of hard work (thanks Dave!), it’s on the very brink of completion.

On the docs front, having adapted to GitLab and getting a merge request committed to the Desktop Help in the spring, it’s time to refresh some of the topics. I’ll be starting with the Settings pages.

A couple of jokers photobomb André’s portrait session.

Thanks to Ismael Olea, Rubén Gómez and the organizing team for a spectacular event and a wonderful cultural experience! Thank you GNOME Foundation for the sponsorship.

GUADEC 2018

Today, my first GUADEC experience has come to an end, and it was great! Kudos to the organizers for a very well-planned and executed event. Being a part of the volunteer team was a fantastic experience and thanks for the nice t-shirt!

It was wonderful to meet the GNOME community in person, quite a surreal experience to say the least. The talks were a great opportunity to learn about everything going on at GNOME. I had amazing discussions with my mentors on various topics ranging from “Integrating AI in gnome applications” to “The big dilemma: Is a PhD really worth it?” and finally, some stuff about the GSoC project too.

Thank you for the beautiful memories. I look forward to meeting everyone once again, next year.

P.S.- Please prepare for the next ice cream deathmatch. I won’t be eating any more ice cream until the next GUADEC.


July 09, 2018

GUADEC 2018 Day 3

Day 2 ended with a guided tour inside the Alcazaba of Almería.

Surprisingly, the castle tour featured an exciting belly dance and a bonus theater show starring GNOME’s legendary actors.

Day 3 had plenty of talks like the other days – but I decided to spend it working with Britt on the annual report.

Lastly, lightning talks took place at the end of the day; I spoke about my experience starting Open Source Aalborg (Download PDF Slideshow).

(all pictures are CC-BY-SA 4.0, by me)

July 08, 2018

meson fails with "ERROR: Native dependency 'foo' not found" - and how to fix it

A common error when building from source is something like the error below:


meson.build:50:0: ERROR: Native dependency 'foo' not found
or a similar warning

meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Seeing that can be quite discouraging, but luckily, in many cases it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean? Dependencies are simply libraries or tools that meson needs to build the project. Usually these are declared like this in meson.build:


dep_foo = dependency('foo', version: '>= 1.1.0')
In human words: "we need the development headers for library foo (or 'libfoo') of version 1.1.0 or later". meson uses the pkg-config tool in the background to resolve that request. If we require package foo, pkg-config searches for a file foo.pc in the following directories:
  • /usr/lib/pkgconfig,
  • /usr/lib64/pkgconfig,
  • /usr/share/pkgconfig,
  • /usr/local/lib/pkgconfig,
  • /usr/local/share/pkgconfig
The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.

An important note here: in most cases, we need the development headers of said library; installing just the library itself is not sufficient. After all, we're trying to build against it, not merely run against it.
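
A quick way to check whether pkg-config can see a module at all (still using foo as a placeholder):

$> pkg-config --modversion foo
If this prints a version number, the development package is already installed; if it prints an error about the pkg-config search path, the foo.pc file cannot be found and you need to install the package as described below.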

What package provides the foo.pc file?

In many cases the package is the development version of the package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum and dnf provide a great shortcut to install any pkg-config dependency:


$> dnf install "pkgconfig(foo)"

$> yum install "pkgconfig(foo)"
will automatically search and install the right package, including its dependencies.
apt-get requires a bit more effort:

$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
foo-dev
$> apt-get install foo-dev
For those running Arch and pacman, the sequence is:

$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
extra/foo
$> pacman -S extra/foo
Once that's done you can re-run meson and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post

My version is wrong!

It's not uncommon to see the following error after installing the right package:


meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated - with more potential errors. Unless you are willing to go into the deep end, I recommend moving on and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependencies from source, and that may then require building their dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise - sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll now use cairo as an example instead of foo so you see actual data. On rpm-based distributions like Fedora, run dnf or yum:


$> dnf info cairo-devel # or yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
URL : http://cairographics.org
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
:
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.
The important field here is the URL line - go to that and you'll find the source tarballs. That should be true for most projects but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:

$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Homepage: http://cairographics.org/
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27
In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:

$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
URL : http://cairographics.org/
Licenses : LGPL MPL
....

Now to the complicated bit: In most cases, you shouldn't install the new version over the system version because you may break other things. You're better off installing the dependency into a custom folder ("prefix") and point pkg-config to it. So let's say you downloaded the cairo tarball, now you need to run:


$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again
So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config that you want it to look there as well - so you set PKG_CONFIG_PATH. If you re-run meson in the original project, pkg-config will find the new version and meson should succeed. If you have multiple packages that all require a newer version, install them into the same path and you only need to set PKG_CONFIG_PATH once. Remember you need to set PKG_CONFIG_PATH in the same shell as you are running configure from.

In the case of dependencies that use meson, you replace autotools and make with meson and ninja:


$> mkdir $HOME/dependencies/
$> tar xf foo-someversion.tar.xz
$> cd foo-someversion
$> meson builddir -Dprefix=$HOME/dependencies
$> ninja -C builddir install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again

If you keep seeing the version error the most common problem is that PKG_CONFIG_PATH isn't set in your shell, or doesn't point to the new cairo.pc file. A simple way to check is:


$> pkg-config --modversion cairo
1.13.1
Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH, just re-set it. If it still doesn't work do this:

$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc
prefix=/home/user/dependencies
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo
If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is: because you built against a newer library than the one on your system, you need to point it to use the new libraries.


$> export LD_LIBRARY_PATH=$HOME/dependencies/lib
and now you can, in the same shell, run your project.
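
To double-check that your binary really resolves the library from the new prefix rather than the system copy, ldd is handy (the binary name and paths below are only illustrative, reusing the cairo example):

$> ldd builddir/yourproject | grep cairo
	libcairo.so.2 => /home/user/dependencies/lib/libcairo.so.2 (0x00007f...)

If the path on the right still points into /usr/lib or /usr/lib64, LD_LIBRARY_PATH isn't set in the shell you're running from.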

Good luck!

July 07, 2018

Flatpak, making contribution easy

One vision that I’ve talked about in the past is that moving to flatpak could make it much easier to contribute to applications.

Fast-forward 3 years, and the vision is (almost) here!

Every application on flathub has a Sources extension that you can install just like anything else from a flatpak repo:

flatpak install flathub org.seul.pingus.Sources

This extension contains a flatpak manifest which lists the exact revisions of all the sources that went into the build. This lets you reproduce the build — if you can find the manifest!

Assuming you install the sources per-user, the manifest is here (using org.seul.pingus as an example):

$HOME/.local/share/flatpak/runtime/org.seul.pingus.Sources/x86_64/stable/active/files/manifest/org.seul.pingus.json

And you can build it like this:

flatpak-builder build org.seul.pingus.json

I said the vision is almost, but not quite there. Here is why: gnome-builder also has a way to kickstart a project from a manifest:

gnome-builder --manifest org.seul.pingus.json

But sadly, this currently crashes. I filed an issue for it, and it will hopefully work very soon. Next step, flip-to-hack!

July 06, 2018

Improving Fractal’s media viewer

Fractal is a Matrix client for GNOME and is written in Rust. Matrix is an open network for secure, decentralized communication.

This week, I have made improvements for the media viewer. I will talk about the most important of them. You can have a look at this issue to get the full details of what has been done.

A header bar in the full screen mode

I’ve added the possibility to access the header bar while in full screen mode by moving the cursor up to the top of the screen, like in Builder or Videos.

At first, I didn’t know how I could implement it, but I figured out a simple way to do it.

First, I asked in the Builder IRC channel how it was done there; someone told me to look at this page for implementing a custom header bar with a full screen toggle button in Python. It helped me figure out the first step toward implementing this feature.

The media viewer UI has two main parts:

  • `media_viewer_headerbar_box` which is the part of the media viewer interface that goes in the `GtkStack` of the main window’s header bar
  • `media_viewer_box` which is the part of the media viewer interface that goes in the `GtkStack` of the main window’s content

When in full screen mode, we only see media_viewer_box on the entire screen, so in order to view the media viewer’s header bar, it needs to be moved into this GtkBox when entering full screen mode and placed back in the header bar of the application when leaving this mode. The page I cited previously helped me figure out how to do that.

Doing this was far from enough to get the expected result, as I wanted the header bar to drop down when the cursor reaches the top of the screen. So I added a GtkRevealer in the GtkOverlay containing the navigation buttons, and the header bar is placed into it instead of being added directly to media_viewer_box. The header bar is revealed when the pointer is moved and its vertical coordinate is less than or equal to 6 pixels (i.e. when the cursor is close to the top of the screen). After that, a timer is set; when it expires and the cursor isn’t hovering over the header bar, the revealer hides it again.

Finally, I hid some buttons that could cause unexpected behavior in full screen mode (the back and close buttons), and the full screen button is replaced by another one to leave full screen mode.

Here is the MR for this and a picture of the result:

Capture du 2018-07-06 16-34-24

Add the possibility to go back in the media history without restrictions

Before this, the media viewer was navigating through a list of media which was built when we were entering the media viewer. Hence the navigation was limited to the media already present in the loaded messages.

To be able to go back beyond the available media, I needed to add a method in the backend that would fetch messages containing media starting from a certain message in the history (usually, the earliest media we currently have in the viewer). It is done by making a GET request to a homeserver at this address “/_matrix/client/r0/rooms/{roomId}/messages” with the query parameters “from” set to the previous batch ID, “dir” set to “b” (for backwards), “limit” set to 40 (the default page limit in Fractal), the user’s “access_token” and “filter” set to “{\”filter_json\”: { \”contains_url\”: true, \”not_types\”: [\”m.sticker\”] } }”. This says that we are requesting 40 messages that have a “url” field and are not of type “m.sticker” (i.e. messages that contain media), starting from the previous batch (we will see later how to get it) and going backward in the message history. The remaining problem was determining how to get the previous batch ID.

In Matrix, when a client requests a list of events, it makes a first request asking for a certain number of them. When the server responds to this query, it sends a previous batch ID with it. The client can use it in the next request to ask for the next batch of events, and so on. The problem I had was that I couldn’t use a message ID to ask for media after this message; I needed a previous batch ID if I wanted to start loading media from it, and I only have the message ID before doing the first request. So my mentor found a way to get a previous batch ID from a message ID. This is done by making a GET request to a homeserver at this address “/_matrix/client/r0/rooms/{roomId}/context/{eventId}” with the query parameter “limit” set to “0”. The “start” field in the server’s answer is then the previous batch ID associated with this event.
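
To make the two steps concrete, here is a rough, hypothetical sketch of chaining the two requests. This is not Fractal’s actual backend code: it assumes the reqwest crate (with its “blocking” and “json” features) and serde_json, and the homeserver URL, room ID and event ID are placeholders.

use serde_json::Value;

// Hypothetical sketch: resolve an event ID to a previous-batch token via the
// /context endpoint, then page backwards through media messages via /messages,
// as described above.
fn fetch_older_media(base: &str, access_token: &str, room_id: &str, event_id: &str)
    -> Result<Value, Box<dyn std::error::Error>>
{
    let client = reqwest::blocking::Client::new();

    // Step 1: /context with limit=0; the "start" field of the answer is the
    // previous batch ID associated with the event.
    let context_url = format!("{}/_matrix/client/r0/rooms/{}/context/{}", base, room_id, event_id);
    let context: Value = client
        .get(&context_url)
        .query(&[("access_token", access_token), ("limit", "0")])
        .send()?
        .json()?;
    let prev_batch = context["start"].as_str().ok_or("no start token in response")?;

    // Step 2: /messages going backwards ("dir" = "b"), using the filter quoted
    // in the text above so that only messages carrying media come back.
    let filter = r#"{"filter_json": {"contains_url": true, "not_types": ["m.sticker"]}}"#;
    let messages_url = format!("{}/_matrix/client/r0/rooms/{}/messages", base, room_id);
    let messages: Value = client
        .get(&messages_url)
        .query(&[
            ("access_token", access_token),
            ("from", prev_batch),
            ("dir", "b"),
            ("limit", "40"),
            ("filter", filter),
        ])
        .send()?
        .json()?;
    Ok(messages)
}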

After the backend method was implemented (you can view the implementation in the commit here), it was then pretty easy to integrate it within the media viewer (see this commit here). However, I needed to spend more time on the loading spinner (it is needed when you press the previous media button and Fractal is fetching the images from a homeserver, as this can take a while) because the UI was frozen while the backend was requesting more media from a server. So I made the operation asynchronous (see this commit) and then I could add the spinner (see this commit).

You can see the full details on this MR.

 

July 05, 2018

First impressions of PureOS

My new Librem 13 arrived yesterday, and it was my first opportunity to play around with PureOS. I thought I'd share a few thoughts here.

First, PureOS uses GNOME for the desktop. And not that it matters much to usability, but they picked a beautiful default desktop wallpaper:


Because it's GNOME, the desktop was immediately familiar to me. I've been a GNOME user for a long time, and I work with GNOME in testing usability of new features. So the GNOME desktop was a definite plus for me.

It's not a stock GNOME, however. PureOS uses a custom theme that doesn't use the same colors as a stock GNOME. GNOME doesn't use color very often, but I noticed this right away in the file manager. Clicking on a navigation item highlights it in a sort of rust color, instead of the usual blue.


Overall, I thought PureOS was okay. It doesn't really stand out in any particular way, and I didn't like a few choices they made. So in the end, it's just okay to me.

However, I did run into a few things that would seem odd to a new user.

What's that file icon?


When I clicked on Activities to bring up the overview, I was confused about what looked like a "file" icon in the dock.


I understood the other icons. The speaker icon is Rhythmbox, my favorite music application. The camera icon is clearly a photo application (GNOME Photos). The blue file cabinet is the GNOME file manager. And the life ring is GNOME's Help system (but I would argue the "ring buoy" icon is not a great association for "help" in 2018; better to use an international circle-"?" help symbol, but I digress).

Two icons seemed confusing. The "globe" icon was a little weird to me, but I quickly realized it probably meant the web browser. (It is.)

But the one that really confused me was the "file" icon, between the camera and the file manager icons. What was a "file" icon doing here? Was it a broken image, representing an icon that should exist but wasn't on the system? I didn't need to click on it right away, so I didn't discover until later that the "file" icon is LibreOffice. I hadn't seen that icon before, even though that's actually the LibreOffice icon. I guess I'm used to the LibreOffice Writer or LibreOffice Calc icons, which is what I launch most of the time anyway.

No updates?


I wanted to install some extra applications, so I launched GNOME Software. And from there, I realized that PureOS didn't have any updates.


Really? Linux gets updates all the time. Even if Purism updated the OS right before shipping my laptop to me, there should have been a few updates in the time FedEx took to deliver the laptop. But maybe Purism is slow to release updates, so this could be expected. It seemed odd to me, though.

Where's the extra software?


Once I was in GNOME Software, I realized the "store" was quite empty. There's not much to choose from.


If this were my first experiment with Linux, I'd probably think Linux didn't have very many applications. They don't even have the Chromium or Firefox web browsers available to install.



But really, there are a ton of applications out there for Linux. It's just that the selection of packages PureOS makes available through GNOME Software seems pretty limited.

The terminal is broken?


Finally, I'll mention the terminal emulator. PureOS doesn't use the standard GNOME Terminal package, but rather the Tilix terminal emulator. It's a fine terminal, except for the error message you see immediately upon launching it:


I wondered why a pre-configured operating system, aimed at the Linux community, would ship with a broken configuration. I clicked on the link shown, and basically the fix is to set Tilix as a login shell, or to do some other setup steps.

Presenting an error message the first time the user runs a program is very poor usability. This is the first time I've run it, so the program should be using defaults. Everything should be okay the first time I run the program. I assume things will "just work." Instead, I get an error message. If I were a novice user, this would probably turn me off PureOS.

Overall


In the end, PureOS is a GNOME desktop that runs well. But with a few confusing issues or problems here and there, it doesn't exactly stand out. To me, PureOS is just "okay." It's not bad, but it's not great.

I think my biggest concern as a longtime Linux user is that the distribution doesn't seem to have updates. I'm curious to hear from any PureOS users how often updates are pushed out. I run Fedora Linux, and I get updates pretty regularly. What should I have expected on PureOS?

Segregating views

In my last blog post I talked about how to add self-registering keys to Lua-based sources and about segregating views. Now I was finally free to add and use any keys I wanted in thegamesdb for fetching and using that metadata. Here’s how I used the feature newly added to Grilo.

4th June - 25th June

As I had already added keys like description, rating, developer and publisher, I began by adding the following remaining keys to thegamesdb and GNOME Games.

  • release date
  • coop
  • players
  • genre

These keys will later be used to display game metadata in a description view.

What happened to segregating views?

For a long time now Games has had a very basic UI, displaying only a collection of games. Having already added the developer key to Games, I chose to add a developer view, displaying the collection of games by developer. Along with the developer view, a platform view was also added to display the collection of games by platform. A GtkStackSwitcher was added to the header bar to easily navigate between these views.

Developer view

The games which do not use Grilo to fetch their metadata or the games for which the developer wasn’t available were assigned an “Unknown” developer.

Platform view

Game covers aren’t currently visible as thegamesdb has switched to a new API. I’ll soon be updating grl-thegamesdb to use this newly updated API.

I’ve already started working on description view mentioned above, and will be updating you all soon on it.

Cheers!

July 04, 2018

Starting sessions with systemd


I’ve been working on this as a project for a little while now, and it’s getting to a reasonably usable state, so I thought I’d write about it in public.

What?

When you fire up your machine and see GDM’s friendly face smiling back at you, how did it get there? Clearly it was executed, but by what? The short answer is that GDM asked gnome-session to start its UI up for you.

What you see is GNOME Shell started up in a special mode, which it was asked to do by being started in an environment where GNOME_SHELL_SESSION_MODE=gdm. Behind the scenes gnome-session also started a load of session services from the given XDG autostart directories, which gives you gnome-settings-daemon amongst other things. These services can specify at what “phase” they are started at – look for X-GNOME-Autostart-Phase in /etc/xdg/autostart/*.desktop to see this – which gives a form of ordering. The session specifies a bunch of required components, and gnome-session makes sure that these are all running before the session can be considered fully started. If any of them don’t start up, the session has failed to start. Much the same thing exists for the user session too.
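
For illustration, a stripped-down autostart entry might look something like this (the values are invented; X-GNOME-Autostart-Phase is the key gnome-session reads for the phase):

[Desktop Entry]
Type=Application
Name=Example Session Service
Exec=/usr/libexec/example-session-service
# Started in the "Initialization" phase, before the shell and regular applications.
X-GNOME-Autostart-Phase=Initialization
OnlyShowIn=GNOME;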

Hmm… this is ringing some bells…

Why?

This is quite a crude dependency mechanism. While there are some mechanisms to add extra conditions to autostart services, such as only if a certain file exists or a GSettings key is set, we now (this is not actually a new idea; the bug for this problem was filed in 2012) have a much better way of starting programs when they are needed. That’s systemd of course. If the session needs program X, and program Y needs program X, we can express this. If X isn’t ready until it claims a name on the bus, we can express that too. Restarting things on failure, executing programs at a specified time (alarms), the list goes on.

We already use this stuff for system services in fact, so why not for user ones? If starting a session is essentially executing a bunch of programs at the right time, and we have a utility to execute things in a more flexible way – why not join the two together?
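
As a rough illustration of what that buys us, here is a hypothetical user unit (the unit, bus names and paths are made up) expressing the kind of dependencies described above:

# ~/.config/systemd/user/org.example.Panel.service (hypothetical)
[Unit]
Description=Example panel that needs the example settings daemon
# Ordering and dependency: start the settings daemon first, and stop both
# together with the graphical session.
Wants=org.example.SettingsDaemon.service
After=org.example.SettingsDaemon.service
PartOf=graphical-session.target

[Service]
# Considered "ready" only once the D-Bus name is owned, so dependents wait for that.
Type=dbus
BusName=org.example.Panel
ExecStart=/usr/libexec/example-panel
Restart=on-failure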

A year or two ago some of us from Ubuntu sat down and worked up an implementation of this for the Unity session we were using back then. We in fact shipped 16.10 (yakkety) with this scheme in place. Since we’re now using GNOME Shell in Ubuntu – and because it’s the right thing to do anyway – it’s time to start working on getting this upstream. Credit goes to Martin Pitt and Sebastien Bacher for the work back then which we’re building on now.

Where are we now?

I’ve got a bunch of patches that I’m starting to push to maintainers for review this week. Most of these are prerequisites for transitioning sessions to be started in this way, but are not harmful when the current method is used either. They can be applied immediately. The actual switch is performed in two places – one to GDM to instruct gnome-session to launch a particular target, and the second to gnome-session to update the session definition files to launch targets instead of executing parts itself.

There is scope for further work here. My initial conversions don’t use particularly advanced features or dependencies. We could probably move more things out of the critical path and have them started on demand.

If you’re worrying about your third party session needing to be reworked for this, don’t be (yet). I’ve been quite careful so far to not require this scheme to be used, so non-GNOME sessions should continue to work as they do now. However, since gnome-session is now duplicating a lot of the same work that systemd is doing, it makes sense for us to talk about how we will go about removing this functionality in a future cycle, so that gnome-session becomes a much smaller thing.

I’ve written a wiki page with a few more technical details. This will be updated as patches are posted and merged.

Hmm, I guess this is kind of lame if you can’t try it. I should make a repository or something.

See you at GUADEC!

I’m currently writing this at Minneapolis airport. Having ramen and sushi, before boarding my flight to CDG and ultimately to Malaga and Almeria. GUADEC is always the most special time being able to meet absent friends, and of course the scheming, the plotting and rabble rousing and that’s just the things I’m doing! :-)

I’m presenting a community talk this year, the slides for which are being written on the flight. This will be a busy GUADEC as there are a lot of things going on: projects that I’m looking to complete, a conference that I need to promote, and a lot of conversations about all things engagement, the website, and various other things. There might even be a special blog post later! ;)

See ya there! I would like to thank the GNOME Foundation, without which I would not have been able to attend.

I’m going to GUADEC (with Ubuntu Desktop team)!

Hi Folks,

I’m writing these lines while on the flight to Almeria, where this year’s GNOME Users And Developers European Conference will take place, typing with my Thinkpad Bluetooth keyboard on my mobile phone (I have to admit that the Android physical-keyboard experience is getting awesome, allowing proper WM actions) :), as the battery of my T460p already ran out after the flight from Florence to Madrid, during which I fixed some more shell JS errors.

This will be my first GUADEC ever, and as a fresh Foundation member, I’m quite excited to finally join it.

I’m not coming alone, of course, as this year the ubuntu-desktop team will be quite crowded: Ken VanDine, Sébastien Bacher, Didier Roche, Iain Lane, James Henstridge, Robert Ancell and I will all be part of the conference, to give input and help make GNOME even better.

Soo, looking forward to meeting you all very soon (almost landed – or better – trying to, in the meantime)!

As always, I have to thank Canonical for allowing me and the desktop crew to be part of this great community reunion, and also for being one of the silver sponsors of the event.

These are the events that really matter in order to get things done.

Testbit is going static

This is the first post that is published through Iris, the static website generator used by Testbit. Over the years I fine tuned and tweaked my editing and revisioning tools like probably all programmers/writers do and needed a way to apply this to the web. First Steps I’ve always felt at odds with…

Librem 13: Review

In May, I decided it was finally time to replace my old laptop. Technically, there wasn't anything wrong with my old laptop (Lenovo Thinkpad X1 Carbon, first-gen) but after six years, I thought it was time to replace it.

Of course, I wanted my new laptop to only run Linux. After some searching, I decided on the Librem 13, from Purism. Purism laptops are designed and built for Linux, and I wanted to support a hardware vendor that aimed squarely at the Linux and Free/open source software market.

Unfortunately, I had a few problems with the Librem laptop. The Intel on-board video card "flickered" when I used the internal display, and sometimes would go to "sleep" (not sure it was really in sleep mode or just shut itself off, but when the screen goes black and the laptop is still running, that feels like "sleep" to me). I contacted Purism, and they suggested this was a hardware fault they've seen on some laptops, and they gave me an RMA to return it for repair.

A tech later emailed me to say they couldn't repair the laptop, so they sent me a new one instead. My new Librem 13 arrived today, and it's great!

System information

I've highlighted the ordered specs and the system details so they are easier to compare: memory, disk, and CPU. Here's what I ordered: (copied from my order confirmation)

  • Keyboard: English (US)
  • TPM: Include
  • Memory: 16GB (1x16GB) (+$209.00) $209.00
  • Storage (M.2 SSD): 500GB (NVMe) (+$499.00) $499.00
  • Storage (2.5" SATA 3 SSD): None -$99.00
  • AC Adapter Power Plug: US
  • Wireless: Include Wireless
  • Operating System: PureOS
  • Warranty: 1 Year

I figured I'd max out the memory. I'd like this laptop to last a long time, and memory is a good investment there. Also, I swapped out the standard SATA SSD storage for a 500GB M.2 SSD storage. The prices here reflect those changes.

And the technical details:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        2.5G        5.5G        616M        7.6G         12G
Swap:          7.9G        263M        7.6G

$ cat /proc/cpuinfo |egrep '^processor|^model name'
processor : 0
model name : Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
processor : 1
model name : Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
processor : 2
model name : Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz
processor : 3
model name : Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz

$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x46f877a1

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 940814335 938715136 447.6G 83 Linux
/dev/nvme0n1p3 940815444 976768064 35952621 17.1G 83 Linux

Update (7/4/18): The numbers don't match exactly, and that's expected. Note that free and fdisk display powers-of-two Gibibytes (GiB), while the specs from Purism use powers-of-ten Gigabytes (GB). So 500 GB ≈ 466 GiB and 16 GB ≈ 15 GiB. (see comment)

First impressions

Overall
I've been using the new laptop for a few hours now, and I'm happy so far. This is a great system.
Video flicker is fixed
I'm happy to report that the video "flicker" problem is not present on this model! So that seems to have been a hardware fault, and not a driver problem. Very pleased that ended up being a fixable hardware issue.
Wrong key code for backslash and pipe
The keyboard issue is still there. The Purism laptop uses a keyboard that sends the wrong key code for the backslash key (\). The "shift" on this key is the pipe symbol (|). Try running any commands at the Linux command line, and you'll quickly run into a problem where you can't send the output of one program into another program. You need the pipe for that. Or try escaping a character at the command line, or in program code. You need the backslash for that.

This is a known issue on the Librem, but it's easy to fix. You need to run setkeycodes 56 43 to reset the correct key codes for that key system-wide. To make the fix permanent, create a new /etc/rc.d/rc.local file that is executable (I used mode 750, but anything that's executable and owned by root should do) and has these lines:

#!/bin/bash
setkeycodes 56 43
exit 0

This fixes the problem each time the system boots. You don't need to do anything at the user level. Note that I have my Librem connected to an external display, and I'm using an external keyboard and mouse. This key code fix doesn't impact backslash or pipe on my external keyboard, so I'm good there.
Operating system
I did end up re-installing the operating system. When I first booted the Librem, it was using the pre-installed PureOS Linux distribution. I played with it for a while, and actually did some work online with it, then decided I'd rather run the Fedora Linux distribution that I'm used to. I'll post an article later with impressions about PureOS.

July 03, 2018

Going to GUADEC

Another year, another GUADEC, and here I am crossing oceans to see my fellow GNOMEies. This time, it’s going to be particularly challenging: 32 hours of travel, 4 connections, no vegan meal available. I hear GNOME folks are resilient, though; perhaps this is the proving?

I am proudly being sponsored by the GNOME Foundation.

Sponsored by the GNOME Foundation

See y’all there!

2018-07-03 Tuesday.

  • Into Cambridge for an interview with Chris in the morning - had lunch together.
  • Thrilled to see Dennis' multivariate regression work for Calc pushed & published - go Dennis.
  • CP board meeting; drove home, customer call, took H., M., Elize & Alish to Soham for the Young Musician of the Year competition / concert - good music, home. Built ESC minutes, bed.

July 02, 2018

2018-07-02 Monday.

  • Mail; admin, product team call, managed to finally get J's and my gmail / Android calendars to sync. up after another horrible scheduling conflict. Skype call with great news from Rob & Amelia.

Affiliated Vendors on the LVFS

We’ve just about to deploy another feature to the LVFS that might be interesting to some of you. First, some nomenclature:

OEM: Original Equipment Manufacturer, the user-known company name on the outside of the device, e.g. Sony, Panasonic, etc
ODM: Original Device Manufacturer, typically making parts for one or more OEMs, e.g. Foxconn, Compal

There are some OEMs where the ODM is the entity responsible for uploading the firmware to the LVFS. The per-device QA is typically done by the OEM, rather than the ODM, although it can be both. Before today we didn’t have a good story about how to handle this other than having a “fake” oem_odm@oem.com user account that was shared by all users at the ODM. The fake account isn’t a good design from a security or privacy point of view and so we needed something better.

The LVFS administrator can now mark other vendors as “affiliates” of other vendors. This gives the ODM permission to upload firmware that is “owned” by the OEM on the LVFS, and that appears in the OEM embargo metadata. The OEM QA team is also able to edit the update description, move the firmware to testing and stable (or delete it entirely) as required. The ODM vendor account also doesn’t have to appear in the search results or the vendor table, making it hidden to all users except OEMs.

This also means if an ODM like Foxconn builds firmware for two different OEMs, they also have to specify which vendor should “own” the firmware at upload time. This is achieved with a simple selection widget on the upload page, but will only be shown if affiliations have been set up. The ODM is able to manage their user accounts directly, either using local accounts with passwords, or ODM-specific OAuth which is the preferred choice as it means there is only one place to manage credentials.

If anyone needs more information, please just email me or leave a comment below. Thanks!

fwupdate is {nearly} dead; long live fwupd

If the title confuses you, you’re not the only one that’s been confused with the fwupdate and fwupd project names. The latter used the shared library of the former to schedule UEFI updates, with the former also providing the fwup.efi secure-boot signed binary that actually runs the capsule update for the latter.

In Fedora the only users of libfwupdate were fwupd and the fwupdate command line tool itself. It makes complete sense to absorb the redundant libfwupdate library interface into the uefi plugin in fwupd. Benefits I can see include:

  • fwupd and fwupdate are very similar names; a lot of ODMs and OEMs have been confused, especially the ones not so Linux savvy.
  • fwupd already depends on efivar for other things, and so there are no additional deps in fwupd.
  • Removal of an artificial library interface, with all the soname and package-induced pain. No matter how small, maintaining any project is a significant use of resources.
  • The CI and translation hooks are already in place for fwupd, and we can use the merging of projects as a chance to write lots of low-level tests for all the various hooks into the system.
  • We don’t need to check for features or versions in fwupd, we can just develop the feature (e.g. the BGRT localised background image) all in one branch without #ifdefs everywhere.
  • We can do cleverer things whilst running as a daemon, for instance uploading the fwup.efi to the ESP as required rather than installing it as part of the distro package.
The last point is important; several distros don’t allow packages to install files on the ESP and this was blocking fwupdate being used by them. Also, 95% of the failures reported to the LVFS are from Arch Linux users who didn’t set up the ESP correctly as the wiki says. With this new code we can likely reduce the reported error rate by several orders of magnitude.

Note, fwupd doesn’t actually obsolete fwupdate, as the latter might still be useful if you’re testing capsule updates on something super-embedded that doesn’t ship GLib or D-Bus. We do ship a D-Bus-less fwupdate-compatible command line in /usr/libexec/fwupd/fwupdate if you’re using the old CLI from a shell script. We’re all planning to work on the new integrated fwupd version, but I’m sure there’ll be some sharing of fixes between the projects as libfwupdate is shipped in a lot of LTS releases like RHEL 7.

All of this new goodness is available in fwupd git master, which will be the new 1.1.0 release probably available next week. The 1_0_X branch (which depends on libfwupdate) will be maintained for a long time, and is probably the better choice to ship in LTS releases at the moment. Any distros that ship the new 1.1.x fwupd versions will need to ensure that the fwup.efi files are signed properly if they want SecureBoot to work; in most cases just copying over the commands from the fwupdate package is all that is required. I’ll be updating Fedora Rawhide with the new package as soon as it’s released.

Comments welcome.

Attending GUADEC!

Just passing by to say that I am looking forward to see you all later this week in Almeria. The conference program sounds very promising and the host city is looking outstanding.

We will be hosting a Boxes BoF on the afternoon of July 9th, so make sure to swing by if you are interested in contributing to Boxes or have any questions/ideas to discuss.

Besides the normal talks schedule, I will be involved in organizing newcomers and sport activities. Stay tuned!

June 30, 2018

LMDB: Cache database in memory

Some days ago I was in a meeting about the E2E (end-to-end encryption) implementation for Fractal, to learn the current state of the implementation and to talk with other developers. This meeting was promoted by the Puri.sm people, because they want to use Fractal on the Librem 5 phone and they want to have E2E. There are people working on E2E and I think we can have it in Fractal by the end of the year, but this is not what I want to talk about today.

In this meeting there were Fractal developers, but also other people, like the nheko developer, mujx (nheko is a Qt Matrix client). During the meeting, mujx asked us about the cache storage that we're using, because for E2E it's very important not to lose any key; that would be catastrophic, as you wouldn't be able to read room messages. So it's important to have transactional database storage for this information. They are using LMDB. I didn't know anything about LMDB, so after this meeting I started to read about it.

LMDB

Lightning Memory-Mapped Database Manager (LMDB) is a key-value database; it's memory mapped, so it's fast, and it also uses filesystem storage, so we have persistence. This database has transactions, so it's safe to read/write from different threads or processes.

In Fractal we're using a simple JSON file for the cache, but this doesn't support transactions, and if the app crashes or something bad happens, we can lose data. This method is simple, but it is slow and unsafe, so using LMDB will improve Fractal in several ways.

But LMDB is in memory, and in Fractal we have a lot of interface code sharing the app state, so we're passing the state between threads with copies and complex data sharing. Using LMDB for the application's global state can simplify the interface code, because it will make this state accessible from different threads.

Testing LMDB

Fractal is written in Rust, so I wanted to write some tests before starting to use this in Fractal. There's a simple LMDB Rust crate, and I've been writing an example lib to test it.

Basically what I've done is write a simple trait that you can implement for simple Rust structs so that the struct can be stored in and recovered from the cache with a simple method:

#[derive(Serialize, Deserialize, Debug)]
struct B {
    pub id: u32,
    pub complex: Vec<String>,
}
impl Cacheable for B {
    fn db() -> &'static str { "TESTDB" }
    fn key(&self) -> String {
        format!("b:{}", self.id)
    }
}

The struct should be serializable/deserializable with serde because in this example I'm using bincode to convert structs to [u8].

The Cacheable trait only has two required methods, db and key. db is the name of the database used to store instances of this struct, and key is the key to use when storing a concrete instance.
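
The real implementation wraps LMDB; as a rough sketch (with a purely in-memory stand-in for the Cache type, and signatures that are my guess rather than the crate's actual API), the trait could look something like this:

use std::collections::HashMap;

use serde::{de::DeserializeOwned, Serialize};

// Hypothetical stand-in for the LMDB-backed Cache: the real one would open an
// LMDB environment at `path` and store the bytes there; this one keeps a map
// from (db, key) to bytes in memory, just to make the sketch self-contained.
pub struct Cache {
    path: String,
    data: HashMap<(String, String), Vec<u8>>,
}

impl Cache {
    pub fn new(path: &str) -> Result<Cache, String> {
        Ok(Cache { path: path.to_string(), data: HashMap::new() })
    }

    pub fn put(&mut self, db: &str, key: &str, bytes: Vec<u8>) {
        self.data.insert((db.to_string(), key.to_string()), bytes);
    }

    pub fn get(&self, db: &str, key: &str) -> Option<&Vec<u8>> {
        self.data.get(&(db.to_string(), key.to_string()))
    }
}

// db() names the database and key() identifies the instance; the provided
// store()/get() handle the bincode (de)serialization, so implementors only
// write the two one-liners shown above.
pub trait Cacheable: Serialize + DeserializeOwned {
    fn db() -> &'static str;
    fn key(&self) -> String;

    fn store(&self, cache: &mut Cache) -> Result<(), bincode::Error> {
        let bytes = bincode::serialize(self)?;
        cache.put(Self::db(), &self.key(), bytes);
        Ok(())
    }

    fn get(cache: &mut Cache, key: &str) -> Option<Self> {
        let bytes = cache.get(Self::db(), key)?;
        bincode::deserialize(bytes).ok()
    }
}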

With this, we can store in the cache and query from the cache, using the key:

let db = &format!("{}-basic", DB);
let mut cache = Cache::new(db).unwrap();

let b = B{ id: 1, complex: vec![] };
let r = b.store(&mut cache);
assert!(r.is_ok());

let mut b1: B = B::get(&mut cache, "b:1").unwrap();
assert_eq!(b1.id, b.id);
assert_eq!(b1.complex.len(), 0);
b1.complex.push("One string".to_string());
b1.complex.push("Second string".to_string());
b1.store(&mut cache);

let b2: B = B::get(&mut cache, "b:1").unwrap();
assert_eq!(b2.id, b.id);
assert_eq!(b2.complex.len(), 2);
assert_eq!(&b2.complex[0][..], "One string");

This is thread safe, so we can read from the cache from different threads and we'll always get the last version in the database.

LMDB is a key-value database, so we don't have relations. This is not a real problem for us because we can model the relations in the keys; for example, we can store room messages with keys like "message:ROOMID:messageID" and then iterate over all objects with the prefix "message:ROOMID", which will give us all the room messages. Something like this should work:

let prefix = format!("message:{}", room.id);
Message::iter(&mut cache, &prefix, |m| {
    // m is a Message struct fetched from the database, we can do
    // what we want here
    // ...
    Continue(true)
});

LMDB in Fractal data model

We're thinking about moving the Fractal data model from the AppOp struct to a new crate, independent of the UI, to simplify the UI code and to be able to use the same data model from different UIs (we want to split Fractal into two different apps); Julian wrote about this.

I think that we can use the LMDB cache to store this new app state that we want to have, and this will simplify our code a lot, because we will share the same state and this state will be persisted in the filesystem.

I'll start writing a new crate for Fractal and begin moving all the app state there. I think I can write a generic tool to simplify LMDB usage; maybe I'll publish another crate on crates.io and use that in Fractal, but I need to think a little more about the pattern to follow.

I write a lot of web code in my day-to-day work, and I've been working with react+redux. This LMDB cache thing in Fractal reminds me a lot of the redux store, and I want to follow a similar pattern so we can have a single app state and a single way to update that state.

June 29, 2018

NetworkManager 1.12, ready to serve your networking needs

A brand new version of NetworkManager, a standard Linux network management daemon, is likely to reach your favourite Linux distribution soon. As usual, the new version is 100% compatible with the older releases and most users can update their systems without spending much time caring about technicalities.

Nevertheless, we’ve spent significant effort improving things under the hood, addressed many bug reports and added new features. We are especially proud of the increased community contributions to NetworkManager.

Read on to learn what awaits you in the version 1.12!

Checkpoint/Restore

One of the lesser known goodies provided by NetworkManager is the checkpoint/restore functionality. It allows the user to roll back to a working network configuration if any changes render a machine inaccessible over a network.

The user needs to define a checkpoint first, then conduct the potentially dangerous changes and finally confirm that the changes didn’t disrupt connectivity. A checkpoint is essentially a snapshot of an active network configuration along with a timer. Should the changes cause a networking outage, the timer expires before the user can confirm success and the changes are reverted, hopefully restoring connectivity.

Have you ever downed a network interface on a server at the other end of the world?

While the checkpoint/restore D-Bus API is available since version 1.4, its use was limited to tools that would use D-Bus directly. Starting with NetworkManager 1.12 the functionality is accompanied by a libnm API. If you’re developing a network configuration tool that could benefit from the functionality, take a look. Remember, libnm is capable of GObject introspection, and thus it’s fairly simple to use it from scripting languages, such as Python. Check out our examples!

Yet better Wi-Fi: FILS, WoWLAN and IWD

The process of associating a Wi-Fi client to an Access Point is traditionally a complicated process that easily takes multiple seconds to finish. Not a huge problem for stations that tend to stay within the AP’s range after the initial association, but a severe inconvenience for those that move around and roam from AP to AP. Even one second of connectivity loss is likely to be unpleasant for a video call participant. To lower the link setup times is the goal of FILS, a recent amendment to Wi-Fi specification.

Actually using FILS requires AP equipment recent enough to support the 802.11ai amendment, Linux kernel 4.9 and a git snapshot of wpa_supplicant newer than the 2.6 release. We’ll likely see FILS getting more widely adopted in the near future, but a changeset contributed by Masashi Honma ensures that NetworkManager 1.12 defaults to enabling it whenever possible.

FILS is expected to improve experience for users who frequently roam across Wi-Fi Access Points

Another Wi-Fi capability that gained support in NetworkManager version 1.12 is Wake on WLAN (or simply WoWLAN). WoWLAN makes it possible for a device equipped with a sufficiently capable Wi-Fi radio to be powered on upon receiving a specially crafted network packet. The WoWLAN support used to be available as an out-of-tree patch set for Ubuntu users, but thanks to Canonical developers Alfonso Sánchez-Beato and Simon Fels the functionality will be available to all NetworkManager users.

Announcement of IWD, a new Linux wireless daemon, generated considerable interest among Linux users two years ago. Since then IWD has seen three stable versions, and Andrew Zaborowski of Intel has now contributed initial IWD support to NetworkManager. It hasn’t reached feature parity with good ol’ wpa_supplicant, which we’re defaulting to for the foreseeable future, but for most users it should do just fine. If you feel adventurous, go ahead and try it out!
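
If you do want to experiment with it, the Wi-Fi backend is selected in NetworkManager’s configuration, roughly like this (the drop-in file name is arbitrary, and iwd itself needs to be installed and running):

# /etc/NetworkManager/conf.d/wifi-backend.conf
[device]
wifi.backend=iwd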

Colorful nmcli

NetworkManager’s command line utility, nmcli, has colored its output since version 1.2. A hardcoded color palette was used when the terminal was capable of displaying colors. In version 1.12 the color palette is configurable and, perhaps more importantly to some users, colors can be disabled. The configuration mechanism is the same as was originally introduced by util-linux and documented in terminal-colors.d(5) manual.
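
For example, following the terminal-colors.d(5) scheme, dropping an empty “.disable” file named after the tool should switch nmcli’s colors off entirely:

$> sudo touch /etc/terminal-colors.d/nmcli.disable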

Diet

One of the greatest pleasures of software development is undoubtedly the removal of unused and unloved parts. With version 1.12 we’ve said our last goodbye to the ifnet settings plugin.

Our years-long quest to remove libnm-glib, a client library that has been superseded by libnm since version 1.0, is nearing an end. We’ve been reluctant to get rid of the library without being absolutely certain we’re not leaving any software package behind unported. It seems that now, with Fedora and Debian being able to disable the libnm-glib build in their packages, we’ve reached that point. libnm-glib is now disabled by default and we’re looking forward to the final removal in a future NetworkManager version.

There are not many libnm-glib users left

We’ve also noticed that we’re using only a remarkably small part of libnl, the Netlink protocol library. Netlink serves as a management interface between the kernel networking subsystem and userspace. In version 1.12, we managed to handle all Netlink interfacing completely within NetworkManager itself, getting rid of an external library. This further reduces the dependency chain of NetworkManager, hopefully to the delight of those who use NetworkManager in slimmed down installations.

Learn More

A more comprehensive overview is in the distribution’s NEWS file.

The version 1.14 is likely to arrive sooner than usual. Stay tuned!

(Thanks to Števko, Ulrich, Francesco and Thomas for correcting mistakes.)

Nautilus Tagged Entry Redux

It works!

Since my last post, the tagged entry became a subclass of GtkSearchEntry, as was the case with GdTaggedEntry (yay GTK+ 4) and the tags became GtkWidgets (instead of GtkBins). It didn’t take much effort to move from GtkBin to GtkWidget – only implementing size_allocate(), measure() and snapshot(), which are really trivial when working with actual widgets as children. That, and tweaking the appearance some more, as the move broke the styling a tad. Some perhaps questionable methods of dealing with that were employed, but nothing too nefarious.

With this out of the way now, I’ll be able to focus on porting the rest of Nautilus.

The code is available at https://gitlab.gnome.org/ernestask/tagged-entry, if you’re interested.

June 28, 2018

Summer with Maps

It's been a while since I wrote a blog post last time… and even though we've had summer weather here (more or less) for quite a while now, it seems appropriate to write a little “start of summer” post. Since last time I've amended a pretty long-standing issue we've had when running under a Wayland compositor (at least with the Mutter compositor, as used by gnome-shell): the revealer widgets we've had for showing notifications don't work in this case, as the map view is using the Clutter scene graph library and overlaying GTK+ widgets on that doesn't work under Wayland. Since Clutter is deprecated and this issue won't be fixed, and re-writing the map view library using some other backend (also making it work under the upcoming GTK+ 4) is a rather big undertaking, I've gone ahead with a few workarounds to get rid of the overlayed widgets.

Notifications used for showing e.g. the need to zoom in to add OpenStreetMap POIs, and for informing that location services are turned off (this has been an issue at least on Fedora, where user location is disabled by default for privacy reasons), have been replaced with modal dialogs (not as elegant, but better than not showing up at all).




The notifications showing when routing fails for some reason have been replaced with messages shown in place of the result list.


Furthermore the total number of “via points” for routing has been limited to 10. This has been done partly because GraphHopper imposes a limit of 30, and also the UI doesn't really cope well with too many locations anyway.

And with that Maps wishes you all a continued happy summer (or winter if you're in the southern hemisphere) 😎

Going to GUADEC: talking about the state of GLib and metered data handling in downloads

I’ll be at GUADEC in Almería this year, giving two talks on Sunday:

  • GLib: What’s new and what’s next?, which will be a general overview of recent developments in the GNOME utility library, some future plans, and some stats about what happens to contribution rates when you move to GitLab and Meson.
  • Download management on metered connections, which will be an overview of the Mogwai project, which I’ve worked on in recent months at Endless. Mogwai is a download scheduler, which your code can use to determine the best time to do a big download to avoid incurring bandwidth charges from your internet provider (if you’re on a metered connection).

I won’t be arriving until Sunday morning, but will be around until Thursday (12th) morning, and will be in the GTK+/GLib BoF on Monday in room 2 to plot the next GLib release.

Shout out to my coworker Matthew Leeds’ talk, on P2P Distribution of Flatpaks and OSTrees, which comes towards the culmination of a lot of work by him (and others in Endless, and upstream OSTree and flatpak) to introduce peer to peer support in OSTree and flatpak, so you can distribute OSs and apps using USB sticks and the LAN.

Nautilus File Operations 2.0

Following (and during) my latest blog post I was in the middle of my final exams session. While that eventually concluded and I managed to pass everything just fine, it meant less time hacking and toying around with the Nautilus operations I’ve been working on.

More work was put into the aforementioned move operation test until it was a finished product that would set the tone for the following operations as well.

With the move test out of the way, the following operations could (more or less, with some needing extra nitpicking) be approached in a similar manner, which we tried to do, also giving them a nice sense of modularity.

Lately, I’ve been focusing on finishing the copy/paste test and debugging the ins and outs of the trashing/deleting functionality. After a few fights with multiple G_BREAKPOINTs and moving the code back and forth, trying to provide an asynchronous alternative while still keeping the code clean and functional, I managed to come up with a test meant for both deleting and trashing. My work on the copy/paste component can be seen here, and the trash/delete test should be in its own merge request here.

Hopefully, by the time I finish with these, all tests should look like this:

Screenshot from 2018-06-28 16-18-45

And not like this (which happened more than I’d like to admit during the actual writing of the test):

Screenshot from 2018-06-28 16-32-46

Feeds