October 04, 2022

My journey and a beginner's guide to Open Source

Namaste, Everyone!

For quite a while I've been receiving multiple questions about how to start contributing to open source, how to get into GSoC, my journey, etc...

After repeating myself again and again, I'm writing this blog to answer some of the most asked queries 😃

Let's start reading!

My Journey -

How did I start?

Well, I don't have a definite date; you can assume the day I started coding to be my start :P I started by making some personal websites. They were pretty basic, but they encouraged me to achieve more.

I soon learned about web development, both backend and frontend. My main language until then was Java, but frameworks like Django and Flask made me more accustomed to Python. Java is still my main language for anything other than project development (CP folks hail 😛 ) 😄

How did I start Open Source?

I started with my own projects. I learned there is a platform called GitHub, where I posted the very basic sites I had made (they are gone now, you don't want to see them lol). They were nothing serious, but they gave me some idea of how Git works, though my account stayed mostly dormant. I then did courses like CS50 and CS50w, which required Git, and that made me more comfortable using it.

I then worked on a project which didn't solve any problem but strengthened my skills: my own Penguin-based OS, called Aryan Linux, made by following "Linux From Scratch". On compiling it, I understood what open source really is, and it made me so happy.

Aryan Linux

Then I finally made the switch to Penguin (Linux in common folks' terms). I used to run it in a VM, but then I realized that I could use it without issues as my main OS. It was hard to convince my brain to make the switch, and it took months of consideration to finally dual boot my system. Well, it was one of the best decisions of my life.

I started with Garuda Linux (an Indian distro :D ), and although it was great, the theme started to poke my eyeballs. So I decided to do an experiment rather than just switching back to Windows (easy peasy 😄 ): I took a theme that I liked but whose color scheme I didn't - WhiteSur - combined it with other themes, and added a bit of my own flavor to make a new theme - NewSur (innovative name, right?). And like others, I made a setup and flexed on Reddit :P Turns out there were other crazy peeps who liked it and asked me to post it, so I did, on GitHub 😌

why_colors

This made me even cozier with the community and Git. I then did some other projects - Logo Menu, Modified AndOTP, Modified LBRY, DraculaSur, etc... All of these were made for myself or someone specific, but through the power of open source, other people wanted them too, and that strengthened my love for Open Source.

I then also made a commit to the GNOME Extensions website. Though it was nothing big, it was a step in the right direction.

How GSoC?

I came to know about GSoC from my sister. At the time it was something I believed was impossible for me; on opening the website, all the organizations listed there scared me but also inspired me.

Impossible

But later on, I forgot about it. Once, during a family dinner, the topic of GSoC came up, and my sister told me that the deadline was close and I was already late.

I got a bit sad, but since I had assumed it was impossible anyway, I didn't feel too bad. I thought that even if I couldn't participate, I could at least learn from this year's GSoC and maybe crack it in 2023. The first thing I saw was that there was still time: the proposal period was going to start the next day, so I had just 16 days.

I then searched for the GNOME Foundation, as I love it, opened the ideas list, and searched through it. To my surprise, I found two "port to GTK4" projects, one of which used the snake language Python, and that gave me a ton of hope.

I instantly conveyed it to my guru sis and began drafting the proposal. I removed all thoughts of it being impossible and just worked on the proposal. The only guidance I had was from my sister and her junior, who got into GSoC'21 under Chromium. (Most probably the browser you are reading this on uses Chromium 🙂 )

I then submitted my proposal, and after some back and forth with my amazing acharya/mentor Aleb, and some PRs to Pitivi, I got the acceptance mail in May 🥳

How can you start?

Where to start

Where to contribute?

Don't contribute just for the sake of GSoC; contribute out of love for Open Source, and it will become much easier. Start using open source alternatives to your existing apps. If they lack something, file issues on the repository, and if you can, maybe fix it yourself 😁

get open source

Become active on platforms like Reddit and join some communities. (Beware, there are some toxic communities; just ignore them :) Most of the community is not like that.) There are tons of small niche projects where you can contribute a lot. Don't just run after big shiny projects; start with small projects instead.

Unable to understand code?

If you know the programming language in which the code is written, then it becomes easier. If you use the program, pick some unique text from the application and then search for it in the code. Editors like VSCode help in this regard, as you can search the whole repository.

Once you find the string, start to expand your view. You will be able to see how the string was declared and how it was added to the application, and if everything goes right, you will start to understand some of the code this way. Understanding the whole codebase at once is not easy, so don't try that; don't throw all of it at yourself at once. Make it digestible first, understand parts of it, and then start to connect the dots.
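
For example, suppose the application shows a button labelled "Export As" somewhere (a made-up label, purely for illustration). From a terminal at the root of the repository you can run:

$ grep -rn "Export As" .

The file and line number that grep prints are your entry point into the code.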

divide and conquer

But what to contribute?

Most repositories have "good first issue" labels, which are put on issues that the developers believe could be a good starting point, so start from those. In Pitivi there were some very easy issues dating back years that were still unsolved. Don't think an issue is too small or too old; if it is open and has the label, then the developer wants it to be fixed 🤯

In my case, one of them was to just change one "True" to a "False" 😁

Conclusion

Nothing is impossible if you dedicate yourself to it.

Don't just have dreams; have life goals. Dreams vanish when you wake up and are something you already assume you can't do, but goals are something you believe you can achieve and work towards.

If you still have any queries, feel free to reach out to me; hopefully I can guide you 🤗

End

That's it for this one, hope to see you in the next blog :)

October 03, 2022

Mon 2022/Oct/03

The series on the WPE port by the WebKit team at Igalia grows, with several new articles that go deep into different areas of the engine:

These articles are an interesting read not only if you're working on WebKit, but also if you are curious about how a modern browser engine works and some of the moving parts beneath the surface. So go check them out!

On a related note, the WebKit team is always on the lookout for talent to join us. Experience with WebKit or browsers is not necessarily a must, as we know from experience that anyone with a strong C/C++ background and enough curiosity will be able to ramp up and start contributing soon enough. If these articles spark your curiosity, feel free to reach out to me to find out more or to apply directly!

on "correct and efficient work-stealing for weak memory models"

Hello all, a quick post today. Inspired by Rust as a Language for High Performance GC Implementation by Yi Lin et al, a few months ago I had a look to see how the basic Rust concurrency facilities that they used were implemented.

One of the key components that Lin et al used was a Chase-Lev work-stealing double-ended queue (deque). The 2005 article Dynamic Circular Work-Stealing Deque by David Chase and Yossi Lev is a nice read defining this data structure. It's used when you have a single producer of values, but multiple threads competing to claim those values. This is useful when implementing per-CPU schedulers or work queues; each CPU pushes on any items that it has to its own deque, and pops them also, but when it runs out of work, it goes to see if it can steal work from other CPUs.

The 2013 paper Correct and Efficient Work-Stealing for Weak Memory Models by Nhat Minh Lê et al updates the Chase-Lev paper by relaxing the concurrency primitives from the original big-hammer sequential-consistency operations used in the Chase-Lev paper to an appropriate mix of C11 relaxed, acquire/release, and sequentially-consistent operations. The paper therefore has a C11 translation of the original algorithm, and a proof of correctness. It's quite pleasant. Here's a version in Rust's crossbeam crate, and here's the same thing in C.

I had been using this updated C11 Chase-Lev deque implementation for a while with no complaints in a parallel garbage collector. Each worker thread would keep a local unsynchronized work queue, which when it grew too large would donate half of its work to a per-worker Chase-Lev deque. Then if it ran out of work, it would go through all the workers, seeing if it could steal some work.

My use of the deque was thus limited to the push and steal primitives, but not take (using the language of the Lê et al paper). take is like steal, except that it takes values from the producer end of the deque, and it can't run concurrently with push. In practice, take is only used by the thread that also calls push. Cool.
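
In other words, the three operations split up like this (a hypothetical C rendering of the signatures, following the paper's naming):

/* Owner thread only; operate on the bottom (producer) end. */
void  push (struct deque *q, void *item);
void *take (struct deque *q);

/* Any thread; operates on the top end, and may run concurrently
   with push and with other steals. */
void *steal(struct deque *q);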

Well I thought, you know, before a worker thread goes to steal from some other thread, it might as well see if it can do a cheap take on its own deque to see if it could take back some work that it had previously offloaded there. But here I ran into a bug. A brief internet search didn't turn up anything, so here we are to mention it.

Specifically, there is a bug in the Lê et al paper that is not in the Chase-Lev paper. The original paper is in Java, and the C11 version is in, well, C11. The issue is.... integer overflow! In brief, push will increment bottom, and steal increments top. take, on the other hand, can decrement bottom. It uses size_t to represent bottom. I think you see where this is going; if you take on an empty deque in the initial state, you create a situation that looks just like a deque with (size_t)-1 elements, causing garbage reads and all kinds of delightful behavior.
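
Here is a minimal sketch of the arithmetic at fault, with the atomics, the buffer, and everything else elided; only the unsigned wrap-around matters:

#include <stddef.h>
#include <stdio.h>

struct deque { size_t top, bottom; };

int main(void) {
    struct deque q = { .top = 0, .bottom = 0 };  /* empty, initial state */

    /* take speculatively decrements bottom before checking top... */
    size_t b = q.bottom - 1;                     /* wraps to SIZE_MAX */
    q.bottom = b;
    size_t t = q.top;

    /* ...so the "non-empty" test t <= b passes and garbage gets read. */
    if (t <= b)
        printf("taking a garbage element at index %zu\n", b);
    return 0;
}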

The funny thing is that I looked at the proof and I looked at the industrial applications of the deque and I thought well, I just have to transcribe the algorithm exactly and I'll be golden. But it just goes to show that proving one property of an algorithm doesn't necessarily imply that the algorithm is correct.

October 02, 2022

Toolbx — running the same host binary on Arch Linux, Fedora, Ubuntu, etc. containers

This is a deep dive into some of the technical details of Toolbx and is a continuation from the earlier post about bypassing the immutability of OCI containers.

The problem

As we saw earlier, Toolbx uses a special entry point for its containers. It’s the toolbox executable itself.

$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...

This is achieved by bind mounting the toolbox executable invoked by the user on the hosts to /usr/bin/toolbox inside the containers. While this has some advantages, it opens the door to one big problem. It means that executables from newer or different host operating systems might be running against older or different run-time environments inside the containers. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.

This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.

When binaries are compiled and linked against newer run-time environments, they may start relying on symbols (i.e., non-static global variables, functions, class and struct members, etc.) that are missing in older environments. For example, glibc-2.32 (used in Fedora 33 onwards) added a new version of the pthread_sigmask symbol. If toolbox binaries built and linked against glibc-2.32 are run against older glibc versions, then they will refuse to start.

$ objdump -T /usr/bin/toolbox | grep GLIBC_2.32
0000000000000000      DO *UND*        0000000000000000  GLIBC_2.32  pthread_sigmask

This means that one couldn’t use Fedora 32 Toolbx containers on Fedora 33 hosts, or similarly any containers with glibc older than 2.32 on hosts with newer glibc versions. That’s quite the bummer.

If the executables are not ELF binaries, but carefully written POSIX shell scripts, then this problem goes away. Incidentally, Toolbx used to be implemented in POSIX shell, until it was re-written in Go two years ago, which is how it managed to avoid this problem for a while.

Fortunately, Go binaries are largely statically linked, with the notable exception of the standard C library. The scope of the problem would be much bigger if it involved several other dynamic libraries, like in the case of C or C++ programs.

Potential options

In theory, the easiest solution is to build the toolbox binary against the oldest supported run-time environment so that it doesn’t rely on newer symbols. However, it’s easier said than done.

Usually downstream distributors use build environments that are composed of components that are part of that specific version of the distribution. For example, it would be unusual for an RPM for a certain Fedora version to be deliberately built against a run-time from an older Fedora. Carlos O’Donell had an interesting idea on how to implement this in Fedora by only ever building for the oldest supported branch, adding a noautobuild file to disable the mass rebuild automation, and having newer branches always inherit the builds from the oldest one. However, this won’t work either. Building against the oldest supported Fedora won’t be enough for Fedora’s Toolbx because, by definition, Toolbx is meant to run different kinds of containers on hosts. The oldest supported Fedora hosts might still be too new compared to containers of supported Debian, Red Hat Enterprise Linux, Ubuntu etc. versions.

So, yes, in theory, this is the easiest solution, but, in practice, it requires a non-trivial amount of cross-distribution collaboration, and downstream build system and release engineering effort.

The second option is to have Toolbx containers provide their own toolbox binary that’s compatible with the run-time environment of the container. This would substantially complicate the communication between the toolbox binaries on the hosts and the ones inside the containers, because the binaries on the hosts and containers will no longer be exactly the same. The communication channel between commands like toolbox create and toolbox enter running on the hosts, and toolbox init-container inside the containers can no longer use a private and unstable interface that can be easily modified as necessary. Instead, it would have complicated backwards and forwards compatibility requirements. Other than that, it would complicate bug reports, and every single container on a host may need to be updated separately to fix bugs, with updates needing to be co-ordinated across downstream distributors.

The next option is to either statically link against the standard C library, or disable its use in Go. However, that would prevent us from using glibc’s Name Service Switch to look up usernames and groups, or to resolve host names. The replacement code, written in pure Go, can’t handle enterprise set-ups involving Network Information Service and Lightweight Directory Access Protocol, nor can it talk to host OS services like SSSD, systemd-userdbd or systemd-resolved.

It’s true that Toolbx currently doesn’t support enterprise set-ups with NIS and LDAP, but not using NSS will only make it more difficult to add that support in future. Similarly, we don’t resolve any host names at the moment, but given that we are in the business of pulling content over the network, it can easily become necessary in the future. Disabling the use of NSS will leave the toolbox binary as this odd thing that behaves differently from the rest of the OS for some fundamental operations.

An extension of the previous option is to split the toolbox executable into two. One dynamically linked against the standard C library for the hosts, and another that has no dynamic linkage to run inside the containers as their entry point. This can impact backwards compatibility and affect the developer experience of hacking on Toolbx.

Existing Toolbx containers want to bind mount the toolbox executable from the host to /usr/bin/toolbox inside the containers and run toolbox init-container as their entry point. This can’t be changed because of the immutability of OCI containers, and Toolbx simply can’t afford to break existing containers in a way where they can no longer be entered. This means that the toolbox executable needs to become a shim, without any dynamic linkage, that forwards the invocation to the right executable depending on whether it’s running on the hosts or inside the containers.

That brings us to the developer experience of hacking on Toolbx. The first thing to note is that we don’t want to go back to using POSIX shell to implement the executable that’s meant to run inside the container. Ondřej spent a lot of effort replacing the POSIX shell implementation of Toolbx, and we don’t want to undo any part of that. Ideally, we would use the same programming language (i.e., Go) to implement both executables so that one doesn’t need to learn multiple disparate languages to work on Toolbx. However, even if we do use Go, we would have to be careful not to share code across the two executables, or be aware that they may have subtle differences in behaviour depending on how they might be linked.

Then there’s the developer experience of hacking on Toolbx on Fedora Silverblue and similar OSTree-based OSes, which is what you would do to eat your own dog food. Experiences are always subjective and this one is unique to hacking Toolbx inside a Toolbx. So let’s take a moment to understand the situation.

On OSTree-based OSes, Toolbx containers are used for development, and, generally speaking, it’s better to use container-specific locations invisible to the host as the development prefixes because the generated executables are specific to each container. Executables built on one container may not work on another, and not on the hosts either, because of the run-time problems mentioned above. Plus, it’s good hygiene not to pollute the hosts.

Similar to Flatpak and Podman, Toolbx is a tool that sets up containers. This means that, unlike most other executables, toolbox must be on the hosts because, barring the init-container command, it can’t work inside the containers. The easiest way to do this is to have a separate terminal emulator with a host shell, and invoke toolbox directly from Meson’s build directory in $HOME that’s shared between the hosts and the Toolbx containers, instead of installing toolbox to the container-specific development prefixes. Note that this only works because toolbox has always been implemented in programming languages with minimal or no dynamic linking, and only if you ensure that the Toolbx containers used for hacking on Toolbx match the hosts. Otherwise, you might run into the run-time problems mentioned above.

The moment there is one executable invoking another, the executables need to be carefully placed on the file system so that one can find the other one. This means that either the executables need to be installed into development prefixes or that the shim should have special logic to work out the location of the other binary when invoked directly from Meson’s build directory.

The former is a problem because the development prefixes will likely default to container-specific locations invisible from the hosts, preventing the built executables from being trivially invoked from the host. One could have a separate development prefix only for Toolbx that’s shared between the containers and the hosts. However, I suspect that a lot of existing and potential Toolbx contributors would find that irksome. They either don’t know or want to set up a prefix manually, but instead use something like jhbuild to do it for them.

The latter requires two different sets of logic depending on whether the shim was invoked directly from Meson’s build directory or from a development prefix. At the very least this would involve locating the second executable from the shim, but could grow into other areas as well. These separate code paths would be crucial enough that they would need to be thoroughly tested. Otherwise, Toolbx hackers and users won’t share the same reality. We could start by running our test suite in both modes, and then meticulously increase coverage, but that would come at the cost of a lengthier test suite.

Failed attempts

Since glibc uses symbol versioning, it’s sometimes possible to use some .symver hackery to avoid linking against newer symbols even when building against a newer glibc. This is what Toolbox used to do to ensure that binaries built against newer glibc versions still ran against older ones. However, this doesn’t defend against changes to the start-up code in glibc, like the one in glibc-2.34 that performed some security hardening.
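
For the curious, the trick looks roughly like this: a .symver assembler directive in a C file compiled into the binary pins references to the old symbol version (GLIBC_2.2.5 is the x86_64 baseline; other architectures have different baseline versions):

__asm__(".symver pthread_sigmask, pthread_sigmask@GLIBC_2.2.5");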

Current solution

Alexander Larsson and Ray Strode pointed out that all non-ancient Toolbx containers have access to the hosts’ /usr at /run/host/usr. In other words, Toolbx containers have access to the host run-time environments. So, we decided to ensure that toolbox binaries always run against the host run-time environments.

The toolbox binary has an rpath pointing to the hosts’ libc.so somewhere under /run/host/usr, and its dynamic linker (i.e., PT_INTERP) is changed to the one inside /run/host/usr. Unfortunately, there can only be one PT_INTERP entry inside the binary, so there must be a /run/host on the hosts too for the binary to work on the hosts. Therefore, a /run/host symbolic link is also created on the hosts, pointing to the hosts’ /.
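
The same surgery can be sketched on an existing binary with patchelf; this is just an illustration, not necessarily how the Toolbx build pipeline does it:

$ patchelf \
      --set-interpreter /run/host/usr/lib64/ld-linux-x86-64.so.2 \
      --set-rpath /run/host/usr/lib64 \
      toolbox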

The toolbox binary now looks like this, both on the hosts and inside the Toolbx containers:

$ ldd /usr/bin/toolbox
    linux-vdso.so.1 (0x00007ffea01f6000)
    libc.so.6 => /run/host/usr/lib64/libc.so.6 (0x00007f6bf1c00000)
    /run/host/usr/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f6bf289a000)

It’s been almost a year and thus far this approach has held its own. I am mildly bothered by the presence of the /run/host symbolic link on the hosts, but not enough to lose sleep over it.

Other options

Recently, Robert McQueen brought up the idea of possibly using the Linux kernel’s binfmt_misc mechanism to modify the toolbox binary on the fly. I haven’t explored this in any seriousness, but maybe I will if the current set-up doesn’t work out.

September 30, 2022

#63 Experiments and Prototypes

Update on what happened across the GNOME project in the week from September 23 to September 30.

Circle Apps and Libraries

Sophie says

This week, Workbench joined GNOME Circle. Workbench lets you experiment with GNOME technologies, whether tinkering for the first time or building and testing a GTK user interface. Congratulations!

Workbench

A sandbox to learn and prototype with GNOME technologies.

sonnyp reports

Workbench 43 is out!

  • Display CSS errors inline
  • Blueprint 0.4.0
  • VTE 0.70.0
  • Use AdwAboutWindow
  • Fix responsiveness when working on large Blueprint files
  • Various bug and crash fixes
  • Use GNOME 43 platform/SDK

Make sure to check what’s new for developers in GNOME 43 and give it a try in Workbench 43.

NewsFlash feed reader

Follow your favorite blogs & news sites.

Jan Lukas reports

The 2.0 release of NewsFlash last week was followed by a quick 2.0.1 to fix a nasty database migration issue. But now development of version 2.1 has started, with more fixes and two new features already merged:

  1. Tags are now also displayed in the article list. So now you can directly see which article has which tags assigned.
  2. A simple share mechanism. Nothing fancy with logins etc. Just an auto-generated URL. But this means you can add your own share service easily.

Kooha

A simple screen recorder with a minimal interface. You can simply click the record button without having to configure a bunch of settings.

SeaDve announces

I am pleased to announce Kooha 2.2.0. This release introduces fresh new features and bug fixes from over a hundred commits. Here’s the summary of some of the most significant changes:

  • New area selection UI inspired by GNOME Shell
  • Added option to change the frame rate through the UI
  • Improved delay settings flexibility
  • Added preferences window for easier configuration
  • Added KOOHA_EXPERIMENTAL env var to show experimental (unsupported) encoders like VAAPI-VP8 and VAAPI-H264
  • Added the following experimental (unsupported) encoders: VP9, AV1, and VAAPI-VP9
  • Unavailable formats/encoders are now hidden from the UI
  • Fixed broken audio on long recordings

Gaphor

A simple UML and SysML modeling tool.

danyeaw says

Excited to announce Gaphor, the simple UML and SysML tool, version 2.12.0 is released!

  • GTK4 is now the default for Flatpak
  • Save folder is remembered across save actions
  • State machine functionality has been expanded, including support for regions
  • Resize of partition keeps actions in the same swimlane
  • Activities (behaviors) can be assigned to classifiers
  • Stereotypes can be inherited from other stereotypes
  • Many GTK4 fixes: rename, search, instant editors
  • Many translation updates

Third Party Projects

Nick reports

Tagger V2022.9.2 is finally here! This release mainly adds support for automatically downloading and applying tag metadata from MusicBrainz, with support for retrieving the album art, if available, as well!

Here’s a full changelog:

  • Added support for downloading tag metadata from MusicBrainz
  • Fixed an issue where Tagger would not allow opening more than about 1024 files
  • Fixed an issue where the chromaprint fingerprint contained an extra alien unicode character
  • Rewrote the MusicFile model used by Tagger to be faster and better support a large music library
  • Various UX improvements (Tagger should feel much snappier and more responsive)

Komikku

A manga reader for GNOME.

Valéry Febvre (valos) reports

Pleased to announce the release of version 1.0.0 of Komikku, the manga reader (but not only).

After several months of effort, the port of Komikku to GTK4 and libadwaita is finished.

  • Refreshing of the UI to follow the GNOME HIG as much as possible
  • Library has now two display modes: Grid and Compact grid
  • Faster display of the chapters list, whether there are few or many chapters
  • Full rewriting of the Webtoon reading mode
  • Modern ‘About’ window
  • [Preferences] Reader: Add ‘Landscape Pages Zoom’ setting
  • [Preferences] Reader: Add ‘Maximum Width’ setting
  • [Servers] Add Grisebouille by @gee [FR]
  • [Servers] MangaNato (MangaNelo): Update
  • [Servers] Mangaowl: Update
  • [Servers] Read Comic Online: Update
  • [L10n] Update French, German, Spanish and Turkish translations

Fractal

Matrix messaging app for GNOME written in Rust.

Julian Sparber reports

This week we tagged Fractal as 5.alpha1. This is our first release since Fractal has been rewritten to take advantage of GTK 4 and the Matrix Rust SDK. It is the result of eighteen months of work. Currently supported features are:

  • Sending and receiving messages and files
  • Sending files via Drag-n-Drop and pasting in the message entry
  • Rendering of rich formatted (HTML) messages, as well as media
  • Displaying edited messages, redacting messages
  • Showing and adding reactions
  • Tab completion of user names
  • Sending and displaying replies
  • Sharing the current location
  • Exploring the room directory
  • Sorting the rooms by category
  • Joining rooms
  • Sending and accepting invitations
  • Logging into multiple accounts at once
  • Logging in with Single-Sign On
  • Sending and reading encrypted messages
  • Verifying user sessions using cross-signing
  • Exporting and importing encryption keys
  • Managing the connected devices
  • Changing the user profile details
  • Deactivating the account

Major missing features are:

  • Notifications
  • Read markers

As the name implies, this is still considered alpha stage and is not ready for general use just yet. If you want to give this development version a try, you can get it from the [GNOME Apps Nightly flatpak repository](https://wiki.gnome.org/Apps/Nightly). A list of known issues and missing features for a 5.0 release can be found in the Fractal v5 milestone on Gitlab.

We also published a blog post about the security quick scan performed by Radically Open Security as part of the NLnet grant: https://blogs.gnome.org/jsparber/2022/09/27/fractal-security-audit/

Documentation

Julian 🍃 says

I’ve published the final section of the Libadwaita chapter in GUI development with Rust and GTK 4. It has been reviewed by Ivan Molodetskikh and Alexander Mikhaylenko.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

The Fedora Project Remains Community Driven

Introduction

Recently, the Fedora Project removed all patented codecs from their Mesa builds without the rest of the community’s input. This decision was heavily criticized by the community; some even asked the Fedora Project to remove “community driven” from its official description. I’d like to spend some time explaining why, in my opinion, this decision was completely justified, and how the Fedora Project remains community driven.

Law Compliance Cannot Be Voted On

Massive organizations like the Fedora Project must comply with laws to avoid lawsuits as much as possible. Patent trolls are really common and will target big organizations; let’s not forget that, in 2019, GNOME was sued by a patent troll. Even worse, patent trolling against open source projects has increased considerably since early this year. So this decision had to be acted on quickly, to avoid potential lawsuits as soon as possible.

Complying with laws is not up to the community to decide. For example, Arch Linux, another community driven distribution, cannot and will not redistribute proprietary software at all. And this is not something that can be voted on, but must be complied with. This doesn’t mean that Arch Linux is not community driven whatsoever; it only means that it’s legally bound, just like how the Fedora Project cannot ship these patented codecs.

Even if the Fedora Project wasn’t sued in the past years, it doesn’t mean that it will remain free of lawsuits in the future. The increase in patent trolling is a good reason for the Fedora Project to react quickly on this. If they ever get sued, is the community going to pay for the lawyers?

Community Driven

As a volunteer of the Fedora Project who is unaffiliated with Red Hat, I believe that the Fedora Project remains community driven. I am currently a part of the Fedora Websites & Apps Team with the “developer” role in the upcoming website revamp repository. This is mainly a volunteer effort, as the majority of us contributing to it are unaffiliated with Red Hat and are unpaid developers.

Since we (volunteers) are the ones in control of the decisions, we could intentionally make the website look displeasing and appalling. Of course, we care about the Fedora Project, so we want it to look appealing to potential users, contributors, and even enterprises that are willing to switch to open source.

Recently, I proposed unifying the layouts of the Silverblue, Kinoite, CoreOS, and other pages into one that looks uniform and consistent when navigating, e.g. the same navigation bar, footer, color palette, etc. Some developers are considering joining the effort, but some disagree. Of course, this is merely a proposal, but if everyone is on board, then we volunteers will be the ones leading this initiative.

This is one example from personal experience, but many initiatives were (and will be) proposed by independent contributors, who can also lead the effort. Nonlegal proposals are still democratically voted on, and surveys are still taken seriously. Currently, the Fedora Project is in the process of migrating from Pagure to GitLab, and from IRC to Matrix, because the community voted on it. I voiced my opinion and was one of the people who proposed both of those changes in the surveys.

Conclusion

I completely agree with the Fedora Project’s decision to disable patented codecs in Mesa. These decisions cannot and should not be put to the community, as this is a legal matter involving potential lawsuits. Anything that is nonlegal remains democratically voted on by the community, as long as it complies with US laws (unfortunately) and the Fedora Code of Conduct.


Edit 1: Use Arch Linux as an example instead of Gentoo Linux, as it is a binary-based distribution

September 29, 2022

Progress Update For GNOME 43

GNOME 43 is out the door now, and I want to use this post to share what I’ve done since my post about my plans.

Adaptive Nautilus

The main window of Nautilus is now adaptive, working at mobile sizes. This change required multiple steps:

  • I moved the sidebar from the old GtkPaned widget to the new AdwFlap widget.
  • I added a button to reveal the sidebar.
  • I refactored the toolbar widgetry to seamlessly support multiple toolbars (allowing me to add a bottom toolbar without code duplication).
  • I ported most of the message dialogs to the new AdwMessageDialog widget.
  • I made the empty views use AdwStatusPage.

There are a few issues left before I can call Nautilus fully adaptive, though. The biggest issue is that the Other Locations view does not scale down to mobile sizes. The Other Locations view is currently in the process of being redesigned, so that should be resolved in the near future. Next, the new Properties dialog does not get small enough vertically for landscape mode. Finally, a few message dialogs don’t use AdwMessageDialog and will require special care to port.

Screenshot of Nautilus with a narrow width

In addition to the adaptive widgetry, I also landed some general cleanups to the codebase after the GTK4 port.

Loupe

Since my post in April, Loupe has received many changes. Allan Day provided a new set of mockups for me to work from, and I’ve implemented the new look and a sidebar for the properties. There are some open questions about how the properties should be shown on mobile sizes, so for now Loupe doesn’t fit on phones with the properties view open.

Screenshot of Loupe with the properties sidebar open


I’ve also reworked the navigation and page loading. Back in April, Loupe only loaded one image at a time, and pressing the arrow keys would load the next image. This could lead to freezes when loading large images. Now Loupe uses AdwCarousel and buffers multiple images on both sides of the current image, and loads the buffered images on a different thread.

Loupe also now has code for translations in place, so that once it’s hooked up to GNOME’s translation infrastructure contributors will be able to translate the UI.

Libadwaita

Some exciting new widgets landed in libadwaita this cycle: AdwAboutWindow, AdwMessageDialog, AdwEntryRow, and AdwPasswordEntryRow. I made an effort to have these new widgets adopted in core applications where possible.
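
Adopting AdwAboutWindow is mostly a single call; here is a minimal C sketch with placeholder values, not taken from any of the apps listed below:

#include <adwaita.h>

/* Shows the adaptive about window; all the metadata here is made up. */
static void
show_about (GtkWindow *parent)
{
  adw_show_about_window (parent,
                         "application-name", "Example",
                         "application-icon", "org.example.App",
                         "version", "43.0",
                         "developer-name", "Example Developer",
                         "license-type", GTK_LICENSE_GPL_3_0,
                         NULL);
}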

I ported the following apps to use AdwAboutWindow:

  • Text Editor
  • Weather
  • Disk Usage Analyzer
  • Font Viewer
  • Characters
  • Nautilus
  • Calendar
  • Clocks
  • Calculator
  • Logs
  • Maps
  • Extensions
  • Music

Now every single core app that uses GTK4 uses AdwAboutWindow.

Screenshot of Text Editor's about window

I ported Nautilus and Maps to AdwMessageDialog where possible, and adjusted Contacts and Calendar to use AdwEntryRow. Contacts needed some extra properties on AdwEntryRow, so I implemented those.

I also started work on a new widget, AdwSpinRow. Hopefully it will land this cycle.

Calendar

In addition to the changes mentioned in the libadwaita section, I also made Calendar fit at small widths with AdwLeaflet. The app received a large redesign already, and it was only a few small changes away from being mobile-ready. There are still a few issues with fit, but those should hopefully be resolved soon.

Calendar 44 will hopefully use AdwMessageDialog and a new date selector in the event editor – I have open merge requests for both changes.

Misc. Changes

  • Minor fixups for GNOME Music’s empty state
  • Updated core app screenshots for Disk Usage Analyzer, Text Editor, Contacts, Calendar, and Nautilus
  • Ported Sound Recorder to Typescript

Conclusion

Overall I made a lot of progress, and I hope to make much more this cycle. The GNOME 43 cycle overlapped a very busy time in my life, and now things have cooled down. With your help, I would love to be able to focus more of my time on implementing things you care about.

I have three places you can support me:

That’s all for now. Thank you for reading to the end, and I look forward to reporting more progress at the end of the GNOME 44 cycle.

September 28, 2022

"Why is it that package managers are unnecessarily hard?" — or are they?

At the moment the top rated post in the C++ subreddit is Why is it that package managers are unnecessarily hard?. The poster wants to create an application that uses fmt and SDL2. After writing a lengthy and complicated (for the task) build file, installing a package manager, integrating the two, and then trying to build their code, the end result fails, leaving only incomprehensible error messages in its wake.

The poster is understandably frustrated about all this and asks a reasonable question about the state of package management. The obvious follow-up question, then, would be whether they need to be hard. Let's try to answer that by implementing the thing they were trying to do from absolute scratch using Meson. For extra challenge we'll do it on Windows to be entirely sure we are not using any external dependency providers.

Prerequisites

  • A fresh Windows install with Visual Studio
  • No vcpkg, Conan or any other third party package manager installed (more strictly, they can be installed, just ensure that they are not used)
  • Meson installed so that you can run it just by typing meson from a VS dev tools command prompt (if you set it up so that you run python meson.py or meson.py, adjust the commands below accordingly)
  • Ninja installed in the same way (you can also use the VS solution generator if you prefer in which case this is not needed)

The steps required

Create a subdirectory to hold source files.

Create a meson.build file in said dir with the following contents.

project('deptest', 'cpp',
    default_options: ['default_library=static',
                      'cpp_std=c++latest'])
fmt_dep = dependency('fmt')
sdl2_dep = dependency('sdl2')
executable('deptest', 'deptest.cpp',
   dependencies: [sdl2_dep, fmt_dep])

Create a deptest.cpp file in the same dir with the following contents:

#include<fmt/core.h>
#include<SDL.h>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO|SDL_INIT_AUDIO) != 0) {
        fmt::print("Unable to initialize SDL: {}", SDL_GetError());
        return 1;
    }
    SDL_version sdlver;
    SDL_GetVersion(&sdlver);
    fmt::print("Currently using SDL version {}.{}.{}.",
               sdlver.major, sdlver.minor, sdlver.patch);
    return 0;
}

Start a Visual Studio x64 dev tools command prompt, cd into the source directory and run the following commands.

mkdir subprojects
meson wrap install fmt
meson wrap install sdl2
meson build
ninja -C build
build\deptest

This is all you need to do to get the following output:

Currently using SDL version 2.24.0.

Most people would probably agree that this is not "unnecessarily hard". Some might even call it easy.

What Not to Recommend to Flatpak Users

Introduction

Whenever I browse the web, I find many “tips and tricks” from various blog writers, YouTubers and others who recommend that users take steps they either aren’t supposed to take, or that have better alternatives. In this article, I will go over some of those steps you should not be taking and explain why.

Setting GTK_THEME

The GTK_THEME variable is often used to force custom themes for GTK applications. For example, setting GTK_THEME=adw-gtk3-dark will set the dark variant of adw-gtk3 if installed on Flathub.

GTK_THEME is a debug variable. It is intended for testing stylesheets for GTK3 and GTK4; it is NOT intended to be used by users. libhandy and libadwaita ship additional widgets that the majority of GTK3 and GTK4 themes don’t support, as those themes are made for plain GTK3 and/or GTK4. This means that using a custom theme on a GTK4+libadwaita application may break the styling of libadwaita widgets.

Many applications are increasingly porting from GTK3 to GTK4+libadwaita. While GTK_THEME may work fine on GTK3, the application will appear broken after it gets ported to GTK4+libadwaita if GTK_THEME is kept set. The solution in that case is to unset GTK_THEME.
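
For instance, if the variable was set with a Flatpak override, it can be cleared again like this (assuming a user-level override and a flatpak version that supports --unset-env; org.example.App is a placeholder):

$ flatpak override --user --unset-env=GTK_THEME org.example.App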

When recommending GTK_THEME, ensure that the user knows they will need to unset the variable after the application gets ported to GTK4+libadwaita. Or better yet, don’t recommend debug variables to users at all. Otherwise, they will get the impression that the application itself is buggy and not working as intended, and they won’t realize that GTK_THEME caused it.

Aliasing flatpak run

A common recommendation is to alias flatpak run. When launching Flatpak applications from the terminal, it’s typical to type flatpak run $APP_ID, where $APP_ID is the application ID, for example org.mozilla.firefox for Firefox. Users find flatpak run too long to type, so they alias it to, e.g., fr.

While this works, there is a better way to improve this situation. Flatpak has its own /bin directories that we can add to PATH. For system installations, the directory is located at /var/lib/flatpak/exports/bin. For user installations, it’s located at ~/.local/share/flatpak/exports/bin.
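
A hypothetical snippet for ~/.bash_profile (adjust for your shell of choice) that adds both directories:

export PATH="$PATH:/var/lib/flatpak/exports/bin:$HOME/.local/share/flatpak/exports/bin"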

After we restart the shell, we should be able to launch applications by their IDs alone, without typing flatpak run. These directories are not added to PATH by default to avoid Flatpak’s /bin clashing with distribution packages’ binaries that follow the reverse domain name notation convention.

flatpak run is often used to temporarily add or remove permissions when running the Flatpak application. For example, flatpak run --nofilesystem=home $APP_ID denies access to filesystem=home for $APP_ID for that session specifically.

Placing Themes and Cursors in ~/.icons and ~/.themes

A major mistake users often make is overriding filesystem permissions to allow read access to ~/.icons and ~/.themes. These paths are entirely unsupported by Flatpak, as they are legacy directories.

A user who uses a cursor from ~/.icons may encounter an issue where Flatpak applications fall back to the Xlib cursor. And a user who uses ~/.themes may encounter an issue where Flatpak doesn’t automatically detect the theme and install it (if available).

Flatpak heavily relies on XDG standards and thus honors XDG compliant equivalent paths:

  • ~/.icons → ~/.local/share/icons
  • ~/.themes → ~/.local/share/themes

It is best to use these XDG compliant paths to avoid overriding permissions, as they are better supported long-term. If you use a program that installs cursors/icons or themes in legacy paths, contact the developers and kindly ask them to follow XDG standards instead!


Edit 1: Correct mistake (Credit to re:fi.64)

September 27, 2022

Fractal security audit

Projects that receive funding from NLnet are required to have their code audited for potential security issues. Ours was performed by Radically Open Security, a non-profit computer security consultancy from the Netherlands. Since Fractal, by design, doesn’t include much security-critical code, the security researcher extended the quick scan somewhat to also cover the matrix-rust-sdk.

I was in direct contact with the security researcher, and they kept me up to date about their findings. This way, I could start fixing identified security issues while the audit was still under way. Luckily, no major security issue was identified.

The issues found were addressed by us in the following way:

  • 4.1 CLN-013 — Fractal client stores images containing malware on filesystem

This is mainly a problem with the Matrix server, which doesn’t sanitize images. Images downloaded from the server are stored in the encrypted store. This was initially an issue but was resolved. Videos, on the other hand, are currently downloaded and stored in the cache unencrypted because of this issue.

  • 4.2 CLN-012 — Fractal’s markdown implementation hides URLs to possible malicious websites

To address this, we now show the full URL when the user hovers over a link in the room history. This was introduced in this merge request.

  • 4.3 CLN-011 — Fractal allows opening of .html and .htm files

This is a problem with any file downloaded from an untrusted source. The researchers suggested adding a warning dialog asking if the user is sure they want to open the file. I don’t think adding a warning is sufficient to prevent users from opening files containing malicious code, especially since users often don’t read things and just click continue, or end up confused. Also, we recommend using Fractal inside a Flatpak sandbox, which uses a portal that asks which application should be used to open the file.

Additionally, we decided to remove the open file button from the room history in this merge request, to make sure that users can’t easily open files by mistake.

  • 4.4 CLN-010 — Matrix server does not sanitize uploaded images

The Matrix server should address this; we can’t really do anything about it locally.

  • 4.5 CLN-009 — Images are stored on disk unencrypted

Now all data is stored encrypted. See issue for more details.

  • 4.6 CLN-008 — Security impact not sufficiently documented

We documented this in our README in this merge request.

  • 4.7 CLN-007 — Sensitive data can be extracted from database

Now all data is stored encrypted. See issue for more details.

  • 4.8 CLN-006 — Fractal client supports weak TLS cipher suites

This would be something nice to have, but it’s unfortunately not possible at the moment. See this issue for more details.

  • 4.9 CLN-005 — Fractal client is able to connect with insecure TLS versions

See 4.8 CLN-006 above.


You can read the full report of the security audit here.

Enforcing pull request workflow and green CI status of PRs on Flathub repositories

This blog post was originally posted on Flathub Discourse. Re-posting it here for these sweet sweet ~~fake Internet points~~ publicity.

Starting from 2022-10-04, we’re going to tighten the branch protection settings by disabling direct pushes to protected branches (i.e. master, beta, and the ones starting with branch/) and requiring status checks to pass. This means that all changes will need to go through a regular pull request workflow and require the build tests to pass (i.e. that they be green) on Buildbot before being merged.

As part of this change, we’re introducing two new checks as well. Manifests will be linted with flatpak-builder-lint to ensure compliance with the best practices we suggest during the initial review phase. If your app should be exempted from specific linter rules, please open an issue with an explanation of why.

Additionally, if a manifest contains a stanza for flatpak-external-data-checker, it will be validated to ensure update detection works correctly.

September 23, 2022

#62 Forty-three!

Update on what happened across the GNOME project in the week from September 16 to September 23.

This week we released GNOME 43!

This new major version of GNOME is full of exciting new features like a redesigned Shell quick settings menu, a modernized file manager, new device security settings - and of course much more. More information can be found in the GNOME 43 release notes.

Readers who have been following this site for a few weeks will already know some of the new features. If you want to follow the development of GNOME 44 (Spring 2023), keep an eye on this page - we’ll be posting exciting news every week!

Circle Apps and Libraries

NewsFlash feed reader

Follow your favorite blogs & news sites.

Jan Lukas says

NewsFlash 2.0 has just been released on Flathub. It has been ported to GTK4 and can now sync with Nextcloud News & FreshRSS. For more details take a look at issues 55, 56 & 57 of “This Week in GNOME”.

Dialect

Translate between languages.

Rafael Mardojai CM says

Dialect was updated to use new widgets from libadwaita 1.2, like AdwAboutWindow and AdwEntryRow; it also received a style update for a more flat look. These changes will be released in an upcoming version targeting GNOME 43, along with other minor improvements.

Apostrophe

A distraction free Markdown editor.

Manu reports

I’ve implemented some basic autocompletion for Apostrophe. It completes parentheses, brackets, unordered lists, ordered lists, and nested lists.

Third Party Projects

FineFindus reports

I’ve released version 0.3.0 of Eyedropper. This release features basic color shade generation and the ability to customize the order of the shown color formats. The app has been translated into French (by rene-coty) and German. It is now available on Flathub.

alexhuntley announces

I released version 0.7.0 of Plots, a simple graphing app for GNOME. It introduces a new color picker, a preferences dialog, and support for the system dark theme.

Plots was then ported from GTK 3 to GTK 4 and Libadwaita, with the current version 0.8.1 available on Flathub now.

Sophie announces

I’m announcing my small new project, Key Rack. Key Rack lets you browse and edit passwords, tokens, and similar things that Flatpak apps store encrypted. The app is intended for developers for debugging, and for everyone for looking up that password you forgot again. Key Rack is currently of alpha quality.

Why a new app? The TL;DR is: Currently, for technical reasons, your passwords might appear only in the Passwords app you are used to, or only in Key Rack. Maybe Key Rack will show all of them one day.

A more detailed explanation is the following: existing apps like Passwords and Keys (Seahorse) allow access to the global key storage (accessible via Secret Service). Since the global storage has no access control for Flatpaks, the recommended way to store keys in Flatpaks is for apps to use a local keyring encrypted with a key derived from the secret obtained from the secret portal. Both libsecret and oo7 provide convenience APIs that automatically use a local keyring when your app runs inside a Flatpak.

Telegrand

A Telegram client optimized for the GNOME desktop.

Marco Melorio reports

Summer is officially over, so now it’s a good time to wrap up what has been done in Telegrand in the meantime. Here’s a quick summary of the nearly 180 commits that happened since the last update:

  • Reimplemented chat search in a new panel, which can now also search for global chats
  • Show a list of recently found chats in the new search panel when no query is set
  • Added timestamp to the messages in the chat history
  • Added “sending status” and “edited” indicators to the messages in the chat history, by Marcus Behrendt
  • Added a scroll to bottom button in the chat history, by Marcus Behrendt
  • Show mini-thumbnails for media messages in the chat list, by Marcus Behrendt
  • Added sending status indicator for last messages in the chat list, by Marcus Behrendt
  • Added the ability to mark chats as read or unread in the chat list, by Marcus Behrendt
  • Use new libadwaita widgets like AdwEntryRow and AdwMessageDialog, by Marcus Behrendt
  • Show when a chat is from a deleted user account, by Carlod

More really exciting things are in the works, so stay tuned for the next updates!

Gradience

Change the look of Adwaita, with ease.

0xMRTT announces

The Gradience Team is happy to announce a new version of Gradience, 0.3.0. This release introduces many new features and improvements.

  • Added plugins support, this allows creating plugins for customizing other apps
  • Preset Manager performance is significantly enhanced; presets download much faster and the app doesn’t freeze on preset removal
  • Added search to Preset Manager
  • Community presets refactor
  • Preset Manager is attached to the main window
  • Added the Quick Preset Switcher back; with it you can switch presets with fewer clicks
  • The save dialog now shows up when you close the app with an unsaved preset
  • Currently applied preset now auto-loads on app start-up
  • Toasts now less annoying
  • Added theming warning to Welcome screen
  • Added Mini Welcome screen on update from previous version
  • Added aarch64 builds

Login Manager Settings

A settings app for the login manager GDM.

Mazhar Hussain says

Login Manager Settings v1.0 (stable) has been released.

There are not many changes compared to v1.0-beta.4. One significant change is that it utilizes blueprint-compiler v0.4.0 now instead of v0.2.0.

Changes in v1.0-beta.4 and previous beta versions for v1.0 have already been posted to TWIG in #58, #59, and #61.

If you would like to see all changes since the previous stable release, go to the GitHub Releases or Flathub page for the app.

Miscellaneous

jjardon announces

A preview of GNOME OS images for some mobile devices is available (at the moment, Pine64’s PinePhone and PinePhone Pro). They both rely on the GNOME OS infrastructure, so we have working atomic updates thanks to ostree! A preview of this work is already available; read more about it here: https://www.codethink.co.uk/articles/2022/gnome-os-mobile/

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Introducing Compiano

I previously introduced Minuit. Later I was notified that there is also a music education application for KDE named Minuet, so it was natural to yield the name. It's relatively easy to do when you haven't had a release.

I decided to rename my application Compiano, a portmanteau of Computer and Piano.

Since I last talked about it, a lot of time has passed. I ported it to GTK4, added some libadwaita support to make it more GNOME, reworked some of the UI, and, more importantly, implemented a mechanism to download the optional "soundbanks" used to implement some of the instruments, as they are the biggest data sets.

I have drawn an icon in Inkscape, which exhibits my poor artistic skills.

Icon

I am currently nearing an actual public release, at least as a preview, as I expect a situation of "it works on my machine". At least the flatpak should alleviate most of the issues, and I will be submitting it to Flathub.

Here is a screenshot:

Main window

Besides the few blockers for the release, there won't be much else going into 0.9. I have a list for the next one, up to maybe 1.0. This includes adding more instruments using LV2; I have an implementation, but it is glitchy, and I don't want to delay the release any longer.

I also made a website.

One more thing: the source code is on GNOME gitlab.

September 22, 2022

GNOME Builder 43.0

After about 5 months of keeping myself very busy, GNOME Builder 43 is out!

This is truly the largest release of Builder yet, with nearly every aspect of the application improved. It’s pretty neat to see all this come together after having spent the past couple of years doing a lot of things outside of Builder, like modernizing GTK’s OpenGL renderer, writing the new macOS GDK backend, shipping a new Text Editor for GNOME, and somehow getting married during all that.

Modern and Expressive Theming

The most noticeable change, of course, is the port to GTK 4. Builder now uses WebKit, VTE, libadwaita, libpanel, GtkSourceView, and many other libraries recently updated to support GTK 4.

Like we did for GNOME Text Editor, Builder will restyle the application window based on the syntax highlighting scheme. In practice this feels much less jarring as you use the application for hours.

a screenshot of the editor with code completion

a screenshot of the syntax color selector

The Foundry

Behind the scenes, the “Foundry” behind Builder has been completely revamped to make better use of SDKs and runtimes. This gives precise control over how processes are created and run. Such control is important when doing development inside container technologies.

Users can now define custom “Commands” which are used to run your project and can be mapped to keyboard shortcuts. This allows for the use of Builder in situations where it traditionally fell short. For example, you can open a project without a build system and use commands to emulate a build system.

a screenshot of creating a new run command

Furthermore, those commands can be used to run your application and integrate with tooling such as the GNU debugger, Valgrind, Sysprof, and more. Controlling how the debugger was spawned has been a long requested feature by users.

a screenshot of the gdb debugger integration, stopped on a breakpoint

You can control what signal is sent to stop your application. I suspect that will be useful for tooling that does cleanup on signals like SIGHUP. It took some work but this is even plugged into “run tools” so things like Sysprof can deliver the signal to the right process.

If you’re using custom run commands to build your project you can now toggle off installation-before-run and likely still get what you want out of the application. This can be useful for very large projects where you’re working on a small section and want to cheat a little bit.

application preferences

Unit Testing

In previous versions of Builder, plugins were responsible for how Unit Tests were run. Now, they also use Run Commands, which allows users to run their Unit Tests with the debugger or other tooling.

Keyboard Shortcuts

Keyboard shortcuts were always a sore spot in GTK 3. With the move to GTK 4, we redesigned the whole system to give incredible control to users and plugin authors. Similar to VS Code, Builder has gained support for a format similar to “keybindings.json”, which allows for embedding GObject Introspection API scripting. The syntax matches the template engine in Builder, which can also call into GObject Introspection.
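
As a rough illustration, an entry in such a file pairs a key trigger with an action and an optional condition. This is a hypothetical sketch; the field names here are assumptions, not Builder’s documented schema:

// hypothetical entry - field names are assumptions, not the documented schema
[
  {
    "trigger" : "<Control><Shift>k",
    "action" : "workspace.show-symbol-search",
    "when" : "inEditor()"
  }
]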

keyboard shortcuts

Command Bar and Project Search

We’ve unified the Command Bar and Project Search into one feature. Use Ctrl+Enter to display the new Global Search popover.

We do expect this feature to be improved and expanded upon in upcoming releases, as some necessary features are still to land within a future GTK release.

A screenshot of the search panel

Movable Panels and Sessions

Panels can be dragged around the workspace window and placed according to user desire. The panel position will persist across future openings of the project.

Additionally, Builder will try to save the state of various pages including editors, terminals, web browsers, directory listings, and more. When you re-open your project with Builder, you can expect to get back reasonably close to where you left off.

Closing the primary workspace will now close the project. That means that the state of secondary workspaces (such as those created for an additional monitor) will be automatically saved and restored the next time the project is launched.

A screenshot of panels rearranged in builder

GtkSourceView

Core editing features have been polished considerably as part of my upstream work on maintaining GtkSourceView. Completion feels as smooth as ever. Interactive tooltips are polished and working nicely. Snippets too have been refined and performance improved greatly.

Not all of our semantic auto-indenters have been ported to GtkSourceIndenter, but we expect them (and more) to come back in time.

There is more work to be done here, particularly around hover providers and what can be placed in hover popovers with expectation that it will not break input/grabs.

Redesigned Preferences

Preferences have been completely redesigned and integrated throughout Builder. Many settings can be tweaked at either the application-level as a default, or on a per-project basis. See “Configure Project” in the new “Build Menu” to see some of those settings. Many new settings were added to allow for more expressive control, and others were improved upon.

Use Ctrl+, to open application preferences, and Alt+, to open your project’s preferences and configurations.

A screenshot showing app preferences vs project preferences

Document Navigation

Since the early versions of Builder, users have requested tabs to navigate documents. Now that we’re on GTK 4, supporting that in a maintainable fashion is trivial, and so you can choose between tabs or the legacy “drop down” selector. Navigation tabs are enabled by default.

Some of the UI elements that were previously embedded in the document frame can be found in the new workspace statusbar on the bottom right. Additionally, controls for toggling indentation, syntax, and encoding have been added.

Switching between similar files is easy with Ctrl+Shift+O. You’ll be shown a popover with files named similarly to the open document.

The symbol tree is also still available, but moved to the statusbar. Ctrl+Shift+K will bring it up and allow for quick searching.

a screenshot of the similar file popover

A screenshot of the symbol selector

WebKit

A new web browser plugin was added allowing you to create new browser tabs using Ctrl+Shift+B. It is minimal in features but can be useful for quick viewing of information or documentation.

Additionally, the html-preview, markdown-preview, and sphinx-preview plugins have been rewritten to build upon this WebKit integration.

Integrated webkit browser within Builder

Plugin Removals

Some features have been removed from Builder due to the complexity and time necessary for a proper redesign or port. The Glade plugin (which targets GTK 3 only) has been removed for obvious reasons. A new designer will replace it and is expected as part of GNOME 44.

Devhelp has also been removed but may return after it moves to supporting GTK 4. Additionally, other tooling may supersede this plugin in time.

The code beautifier and color-picker were also removed and will likely return in a different form in future releases. However, language servers providing format capabilities can be enabled in preferences to format-on-save.

Project Templates

Project templates have been simplified and improved for GTK 4 along with a new look and feel for creating them. You’ll see the new project template workflow from the application greeter by clicking on “Create New Project”.

project creation assistant

Top Matches

Heavy users of code completion will notice a new completion result which contains a large star (★) next to it. This indicates that the proposal is a very close match for the typed text and is re-sorted to the top of the completion results. This serves as an alternative to sorting among completion providers, which is problematic due to the lack of common scoring algorithms across different data sources.

a screenshot of top matches support

Sysprof Integration

Tooling such as Sysprof went through a lot of revamp too. As part of this process I had to port Sysprof to GTK 4, which was no small task in its own right.

Additionally, I created new tooling in the form of sysprof-agent which allows us to have more control when profiling across container boundaries. Tools which need to inject LD_PRELOAD (such as memory profilers) now work when combined with an appropriate SDK.

A screenshot of sysprof integration

Language Servers

Language servers have become a part of nearly everyone’s development toolbox at this point. Builder is no different. We’ve added support for a number of new language servers including jdtls (Java), bash-language-server (Bash), gopls (Golang) and improved many others such as clangd (C/C++), jedi-language-server (Python), ts-language-server (JavaScript/Typescript), vls (Vala), rust-analyzer (Rust), blueprint, and intelephense (PHP).

Many language servers are easier to install and run given the new design for how cross-container processes are spawned.

A screenshot of the rust-analyzer language server providing completion results

Quick Settings

From the Run Menu, many new quick settings are available to tweak how the application runs as well as configure tooling.

For example, you can now toggle various Valgrind options from the Leak Detector sub-menu. Sysprof integration also follows suit here by allowing you to toggle what instruments will be used when recording system state.

To make it easier for developers to ensure their software is inclusive, we’ve added options to toggle High Contrast themes, LTR vs RTL, and light vs dark styling.

A screenshot of the build menu

Refactory

For language tooling that supports it, you can do things like rename symbols. This has been in there for years, but few knew about it. We’ve elevated the visibility a bit now in the context menu.

Renaming a symbol using clang-rename

Vim Emulation

In GTK 3, we were very much stuck with deep hacks to make something that looked like Vim work. Primarily because we wanted to share as much of the movements API as possible with other keybinding systems.

That changed with GtkSourceView 5. Part of my upstream maintainer work on GtkSourceView included writing a new Vim emulator. It’s not perfect, by any means, but it does cover a majority of what I’ve used in more than two decades as a heavy Vim user. It handles registers, marks, and tries to follow some of the same pasteboard semantics as Vim (“+y for system clipboard, for example).

I made this available in GNOME Text Editor for GNOME 42 as well. Those who wonder why we didn’t use an external engine to synchronize with can read the code to find out.

Plugins

We have been struggling with our use of PyGObject for some time. It’s a complex and difficult integration project and I felt like I spent more time debugging issues than I was comfortable with. So this port also included a rewrite of every Python-based plugin to C. We still enable the Python 3 plugin loader from libpeas (for third-party plugins), but in the future we may switch to another plugin language.

Maintainers Corner

So…

A special thanks to all those that sent me merge requests, modernized bits of software I maintain, fixed bugs, or sent words of encouragement.

I’m very proud of where we’ve gotten. However, it’s been an immense amount of work. Builder could be so much more than it is today with your help triaging bugs, designing and writing features, doing project and product management, writing documentation, maintaining plugins, improving GNOME OS, and everything in-between.

The biggest lesson of this cycle is how a strong design language is transformative. I hope Builder’s transformation serves as an example for other GNOME applications and the ecosystem at large. We can make big leaps in a short time if we have the right tooling and vision.

September 21, 2022

Came Full Circle

As mentioned in the previous post I’ve been creating these short pixel art animations for twitter and mastodon to promote the lovely apps that sprung up under the umbrella of the GNOME Circle project.

I was surprised the video actually got quite long. It’s true that a little something every day really adds up. The music was composed on the Dirtywave M8 and the composite and sequence assembled in Blender.

GNOME Circle Pixels

Please take the time to enjoy this fullscreen for that good ol’ CRT feel. If you’re a maintainer or contributor to any of the apps featured, thank you!

Status update 21/09/22

Last week I attended OSSEU 2022 in Dublin, gave a talk about BuildStream 2.0 and the REAPI, and saw some new and old faces. Good times apart from the common cold I picked up on the way — I was glad that the event mandated face-masks for everyone so I could cover my own face without being the “odd one out”. (And so that we were safer from the 3+ COVID-19 cases reported at the event).

Being in the same room as Javier allowed some progress on our slightly “skunkworks” project to bring OpenQA testing to upstream GNOME. There was enough time to fix the big regressions that had halted testing completely since last year, one being an expired API key and the other, removal of virtio VGA support in upstream’s openqa_worker container. We prefer using the upstream container over maintaining our own fork, in the hope that our limited available time can go on maintaining tests instead, but the containers are provided on a “best effort” basis and since our tests are different to openqa.opensuse.org, regressions like this are to be expected.

I am also hoping to move the tests out of gnome-build-meta into a separate openqa-tests repo. We initially put them in gnome-build-meta because ultimately we’d like to be able to do pre-merge testing of gnome-build-meta branches, but since it takes hours to produce an ISO image from a given commit, it is painfully slow to create and update the OpenQA tests themselves. Now that Gitlab supports child pipelines, we can hopefully satisfy both use cases: one pipeline that quickly runs tests against the prebuilt “s3-image” from os.gnome.org, and a second that is triggered for a specific gnome-build-meta build pipeline and validates that.

First though, we need to update all the existing tests for the visual changes that occurred in the meantime, which are mostly due to gnome-initial-setup now using GTK4. That’s still a slow process as there are many existing needles (screenshots), and each time the tests are run, the Web UI allows updating only the first one to fail. That’s something else we’ll need to figure out before this could be called “production ready”, as any non-trivial style change to Adwaita would imply rerunning this whole update process.

All in all, for now openqa.gnome.org remains an interesting experiment. Perhaps by GUADEC next year there may be something more useful to report.

Team Codethink in the OSSEU 2022 lobby

My main fascination this month besides work has been exploring “AI” image generation. It’s amazing how quickly this technology has spread – it seems we had a big appetite for generative digital images.

I am really interested in the discussion about whether such things are “art”, because I think this discussion is soon going to encompass music as well. We know that both OpenAI and Spotify are researching machine-generated music, and it’s particularly convenient for Spotify if they can continue to charge you £10 a month while progressively serving you more music that they generated in-house – and therefore reducing their royalty payments to record labels.

There are two related questions: whether AI-generated content is art, and whether something generated by an AI has the same monetary value as something a human made “by hand”. In my mind the answer is clear, but at the same time not quantifiable. Art is a form of human communication. Whether you use a neural network, a synthesizer, a microphone or a wax cylinder to produce that art is not relevant. Whether you use DALL-E 2 or a paintbrush is not relevant. Whether your art is any good depends on how it makes people feel.

I’ve been using Stable Diffusion to try and illustrate some of the sound worlds from my songs, and my favourite results so far are for Don’t Go Into The Zone:

And finally, a teaser for an upcoming song release…

An elephant with a yellow map background

September 20, 2022

Google Summer of Code 2022: It’s a wrap!

Google Summer of Code logo

Another program year is ending and we are extremely happy with the resulting work of our contributors!

This year GNOME had nine Google Summer of Code projects covering various areas, from improving apps in our ecosystem to standardizing our web presence. We hope our interns had a glimpse of our community that motivated them to stay engaged with their projects and involved with the broader GNOME ecosystem.

A special thanks goes to our mentors, who are the front line of this initiative, sharing their knowledge and introducing our community to the new contributors. Thank you so much!

We encourage interns now to contemplate their future after GSoC. If you want to continue with us, speak to your mentor about your interests and ask for some tips on how you can continue participating in the project. Also, there are employment opportunities that can help you build a career in open source.

Thanks for choosing GNOME for your internship! We were lucky to have you!

GAFAM to MAGMA

The GAFAM are evil, and the nice thing about it is that we can call them the MAGMA now (replace the F with M for Meta).

We can also call the MAGMA a form of hyper-capitalism: they are so big that they destroy any kind of competition, by either buying other companies, or creating something better. The "barrier to entry" to compete with them is just way too high.

So it's urgent that at some point the governments act to split these big corporations and stop the magma from rolling in any further.

Crediting people

Crediting people is important, and it's something that we - in the free software community - don't always apply enough. Or we unconsciously don't do so.

That's where scientific papers get it really right. And for blog posts or articles, it depends on many factors (exercise for you: try to categorize blog posts into a discrete color scale, between red, green and blue, wrt. what this blog post is all about).

So, with the exercise statement, you already know that not all texts are equal in that regard [1]. I'm thinking about somewhat quickly-written raw text, or sometimes just the title that happens to be exactly the same (with a totally different body), with a long timespan in-between.

Examples:

  • To my shame, I've been bad at crediting people over the past years. If you read this, you'll recognize yourself - oh hum, not about being bad at crediting people (oops!) - I mean to credit you :-)
  • An almost-the-same title, with as prefix: A gentle introduction to GObject. I used that title for a chapter in this getting started guide (according to my git friend, I chose the title in 2017), while I realized much later that I had read 5 years earlier a gentle introduction to gobject construction (Allison). This was either done subconsciously, or it was just a coincidence.
  • Another coincidence for the title, and here the reverse (first me and then someone else, with a totally different angle): Doing things that scale (2016 and 2020). (BTW I should republish my full article which has fallen through the cracks). As a corollary, if you choose that title, chances are that it'll be quoted, apparently :)

I know I know, there are more articles that I wrote that I would like to republish than there are subjects that I would like to write about in the near future. That being said, /me disappeared temporarily into a temporary other location [2].

Thanks for reading, and see you for my next blog post or article!

In the meantime, I know some people like beholding awe-inspiring videos (with some reflection and experiments about the spring equation, heterogeneous containers and peer review, various discussions, talking about TeX, but forget about it. With a small wink towards my more local readers). Any relationship with past events would be pure coincidence.

Feet notes

[1] I realized, again, that I'm playing Captain Obvious a bit here... Sorry.

[2] Or, in common jargon: vacation!

Even though it's not possible to write comments on this blog, don't hesitate to drop me an email. I do read them, and like to have feedback and interesting discussions. And ... I'll credit you if you agree not to be anonymous.

Handling WebAuthn over remote SSH connections

Being able to SSH into remote machines and do work there is great. Using hardware security tokens for 2FA is also great. But trying to use them both at the same time doesn't work super well, because if you hit a WebAuthn request on the remote machine it doesn't matter how much you mash your token - it's not going to work.

But could it?

The SSH agent protocol abstracts key management out of SSH itself and into a separate process. When you run "ssh-add .ssh/id_rsa", that key is being loaded into the SSH agent. When SSH wants to use that key to authenticate to a remote system, it asks the SSH agent to perform the cryptographic signatures on its behalf. SSH also supports forwarding the SSH agent protocol over SSH itself, so if you SSH into a remote system then remote clients can also access your keys - this allows you to bounce through one remote system into another without having to copy your keys to those remote systems.
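
To get a feel for that abstraction, here is a minimal Go sketch (using the golang.org/x/crypto/ssh/agent package, the same one discussed below) that connects to the agent's socket and lists the keys it currently holds:

package main

import (
        "fmt"
        "log"
        "net"
        "os"

        "golang.org/x/crypto/ssh/agent"
)

func main() {
        // The agent listens on a unix socket advertised via SSH_AUTH_SOCK.
        conn, err := net.Dial("unix", os.Getenv("SSH_AUTH_SOCK"))
        if err != nil {
                log.Fatal(err)
        }
        // List() speaks the agent protocol on our behalf.
        keys, err := agent.NewClient(conn).List()
        if err != nil {
                log.Fatal(err)
        }
        for _, key := range keys {
                fmt.Println(key.Format, key.Comment)
        }
}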

More recently, SSH gained the ability to store SSH keys on hardware tokens such as Yubikeys. If configured appropriately, this means that even if you forward your agent to a remote site, that site can't do anything with your keys unless you physically touch the token. But out of the box, this is only useful for SSH keys - you can't do anything else with this support.

Well, that's what I thought, at least. And then I looked at the code and realised that SSH is communicating with the security tokens using the same library that a browser would, except it ensures that any signature request starts with the string "ssh:" (which a genuine WebAuthn request never will). This constraint can actually be disabled by passing -O no-restrict-websafe to ssh-agent, except that was broken until this weekend. But let's assume there's a glorious future where that patch gets backported everywhere, and see what we can do with it.

First we need to load the key into the security token. For this I ended up hacking up the Go SSH agent support. Annoyingly it doesn't seem to be possible to make calls to the agent without going via one of the exported methods here, so I don't think this logic can be implemented without modifying the agent module itself. But this is basically as simple as adding another key message type that looks something like:
type ecdsaSkKeyMsg struct {
       Type        string `sshtype:"17|25"`
       Curve       string
       PubKeyBytes []byte
       RpId        string
       Flags       uint8
       KeyHandle   []byte
       Reserved    []byte
       Comments    string
       Constraints []byte `ssh:"rest"`
}
Where Type is ssh.KeyAlgoSKECDSA256, Curve is "nistp256", RpId is the identity of the relying party (eg, "webauthn.io"), Flags is 0x1 if you want the user to have to touch the key, KeyHandle is the hardware token's representation of the key (basically an opaque blob that's sufficient for the token to regenerate the keypair - this is generally stored by the remote site and handed back to you when it wants you to authenticate). The other fields can be ignored, other than PubKeyBytes, which is supposed to be the public half of the keypair.

This causes an obvious problem. We have an opaque blob that represents a keypair. We don't have the public key. And OpenSSH verifies that PubKeyBytes is a legitimate ecdsa public key before it'll load the key. Fortunately it only verifies that it's a legitimate ecdsa public key, and does nothing to verify that it's related to the private key in any way. So, just generate a new ECDSA key (ecdsa.GenerateKey(elliptic.P256(), rand.Reader)) and marshal it (elliptic.Marshal(ecKey.Curve, ecKey.X, ecKey.Y)) and we're good. Pass that struct to ssh.Marshal() and then make an agent call.
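
Putting that together, the key-loading message might be assembled like this (a sketch: buildSkKeyMsg is my own stand-in, and as noted above actually delivering the bytes requires the modified agent module):

import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"

        "golang.org/x/crypto/ssh"
)

// buildSkKeyMsg assembles the wire bytes for the key-load message.
// keyHandle is the opaque blob from the relying party.
func buildSkKeyMsg(rpId string, keyHandle []byte) ([]byte, error) {
        // Stand-in public key: it only has to be a valid ECDSA point,
        // it is never checked against the token's real key.
        ecKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
                return nil, err
        }
        msg := ecdsaSkKeyMsg{
                Type:        ssh.KeyAlgoSKECDSA256,
                Curve:       "nistp256",
                PubKeyBytes: elliptic.Marshal(ecKey.Curve, ecKey.X, ecKey.Y),
                RpId:        rpId, // eg, "webauthn.io"
                Flags:       0x1,  // require a physical touch
                KeyHandle:   keyHandle,
        }
        return ssh.Marshal(msg), nil
}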

Now you can use the standard agent interfaces to trigger a signature event. You want to pass the raw challenge (not the hash of the challenge!) - the SSH code will do the hashing itself. If you're using agent forwarding this will be forwarded from the remote system to your local one, and your security token should start blinking - touch it and you'll get back an ssh.Signature blob. ssh.Unmarshal() the Blob member to a struct like
type ecSig struct {
        R *big.Int // signature halves returned by the token
        S *big.Int
}
and then ssh.Unmarshal the Rest member to
type authData struct {
        Flags    uint8  // user presence/verification flags
        SigCount uint32 // signature counter, big endian on the wire
}
The signature needs to be converted back to a DER-encoded ASN.1 structure (eg,
var b cryptobyte.Builder
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
        b.AddASN1BigInt(ecSig.R)
        b.AddASN1BigInt(ecSig.S)
})
signatureDER, _ := b.Bytes()
, and then you need to construct the Authenticator Data structure. For this, take the RpId used earlier and generate the sha256. Append the one-byte Flags variable, and then convert SigCount to big endian and append those 4 bytes. You should now have a 37-byte structure. This needs to be CBOR encoded (I used github.com/fxamacker/cbor and just called cbor.Marshal(data, cbor.EncOptions{})).

Now base64 encode the sha256 of the challenge data, the DER-encoded signature and the CBOR-encoded authenticator data and you've got everything you need to provide to the remote site to satisfy the challenge.
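
Pulled together, those last two steps might look like this sketch (the function names are mine; cbor is github.com/fxamacker/cbor as above):

// imports: crypto/sha256, encoding/base64, encoding/binary, github.com/fxamacker/cbor

// buildAuthData assembles the 37-byte structure described above:
// sha256(RpId) || Flags || big-endian SigCount, then CBOR encodes it.
func buildAuthData(rpId string, flags uint8, sigCount uint32) ([]byte, error) {
        rpIdHash := sha256.Sum256([]byte(rpId)) // 32 bytes
        data := append([]byte{}, rpIdHash[:]...)
        data = append(data, flags) // 1 byte
        var count [4]byte
        binary.BigEndian.PutUint32(count[:], sigCount)
        data = append(data, count[:]...) // 4 bytes, big endian
        return cbor.Marshal(data, cbor.EncOptions{})
}

// encodeResponse produces the three base64 values handed to the remote site.
func encodeResponse(challenge, signatureDER, authDataCBOR []byte) (string, string, string) {
        challengeHash := sha256.Sum256(challenge)
        b64 := base64.StdEncoding.EncodeToString
        return b64(challengeHash[:]), b64(signatureDER), b64(authDataCBOR)
}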

There are alternative approaches - you can use USB/IP to forward the hardware token directly to the remote system. But that means you can't use it locally, so it's less than ideal. Or you could implement a proxy that communicates with the key locally and have that tunneled through to the remote host, but at that point you're just reinventing ssh-agent.

And you should bear in mind that the default behaviour of blocking this sort of request is for a good reason! If someone is able to compromise a remote system that you're SSHed into, they can potentially trick you into hitting the key to sign a request they've made on behalf of an arbitrary site. Obviously they could do the same without any of this if they've compromised your local system, but there is some additional risk to this. It would be nice to have sensible MAC policies that default-denied access to the SSH agent socket and only allowed trustworthy binaries to do so, or maybe have some sort of reasonable flatpak-style portal to gate access. For my threat model I think it's a worthwhile security tradeoff, but you should evaluate that carefully yourself.

Anyway. Now to figure out whether there's a reasonable way to get browsers to work with this.

September 19, 2022

Inside the GTK font chooser

I’ve written about the handling of fonts in GTK before. This post is going to focus on how to use the more advanced font (and font chooser) features in your application.

Finding fonts

The most prominent end-user feature of the font chooser is of course that you can search for fonts by name, using the search entry:

A more hidden feature is that you can filter the list by various criteria. One criterion is to show only monospace fonts, another is to only show fonts covering a certain language:

A little detail to notice here is that GTK automatically changes the preview text to match the language you are filtering by.

Less is more

The font chooser returns a PangoFontDescription which contains the full details of the selected font: family, style, size, etc. If your application only needs the family, then it is confusing to let the user select a style and size only to have them be ignored.

If this is the case for your application, you can instruct GTK about the font details you need, using gtk_font_chooser_set_level(), and the GtkFontChooserLevel flags:

typedef enum {
  GTK_FONT_CHOOSER_LEVEL_FAMILY     = 0,
  GTK_FONT_CHOOSER_LEVEL_STYLE      = 1 << 0, 
  GTK_FONT_CHOOSER_LEVEL_SIZE       = 1 << 1,
  GTK_FONT_CHOOSER_LEVEL_VARIATIONS = 1 << 2,
  GTK_FONT_CHOOSER_LEVEL_FEATURES   = 1 << 3
} GtkFontChooserLevel;

For example, after

gtk_font_chooser_set_level (chooser, 
                            GTK_FONT_CHOOSER_LEVEL_FAMILY);

the font chooser looks like this:

Much simpler!

Into the abyss

Modern fonts are complicated beasts, and there’s much that’s lurking under the surface. The GTK font chooser can make many of these font features available if you tell it to.

First, there are font variations. These let you continuously vary the characteristics of a font (as long as those characteristics are exposed as variation axes).

Typical variation axes are weight, width and slant of a font, but there can be others (such as Optical Size in this example).

The selected variations are part of the PangoFontDescription that the font chooser returns, so applications don’t have to do any extra work to apply them. Just use the font description as usual.

To enable font variation support in the GTK font chooser, use the GTK_FONT_CHOOSER_LEVEL_VARIATIONS flag:

level = level | GTK_FONT_CHOOSER_LEVEL_VARIATIONS;
gtk_font_chooser_set_level (chooser, level);

More features

Fonts contain not just the glyph contours, but lots of other data that can be applied in various ways when rendering those glyphs. This includes traditional data like kerning and ligatures, but also things like optional glyph shape or positioning variants or even color palettes. Many of these can be enabled by the user with the help of OpenType features.

Here is an example of an OpenType feature for glyph shape variations:


The feature that is toggled on here when going from left to right is called ss12. Thankfully, the font provides the more meaningful name “Single-story g” as well.

This example shows the effect of the frac feature on the display of fractions.

In the GTK font chooser, OpenType features are presented on the same page as variations. As you see, there can be quite a few of them:

Note that Pango treats OpenType features as separate from the font itself. They are not part of the font description, but have to be applied to text either with PangoAttributes or via Pango markup.

To apply the selected font features from a GTK font chooser, call gtk_font_chooser_get_font_features () and pass the returned string to pango_attr_font_features_new().

To enable OpenType features support in the GTK font chooser, use the GTK_FONT_CHOOSER_LEVEL_FEATURES flag:

level = level | GTK_FONT_CHOOSER_LEVEL_FEATURES;
gtk_font_chooser_set_level (chooser, level);

Summary

In summary, you can use the level property of GtkFontChooser to influence the granularity of font selection you offer to users of your application. If you include font features in it, don’t forget to apply the selected features, using PangoAttributes or markup.

All of this is enabled by harfbuzz providing us with a cross-platform API to fonts and all their features. It would not be possible otherwise. It is worth pointing out that this is done by accessing harfbuzz objects directly, rather than wrapping all the harfbuzz APIs in Pango.

Fedora at OpenAlt 2022

Covid stopped a lot of activities, including IT events. As things are hopefully getting back to normal, the Czech community of Fedora had its first booth at a physical event since 2019. It was also a revival for OpenAlt, a traditional open source conference in Brno, because its last edition was in 2019, too. The traditional date of OpenAlt is the first weekend in November, but to avoid any possible autumn covid waves the organizers decided to have it on Sep 17-18.

Fedora booth

Over the years the conference grew to occupy pretty much the whole venue (Faculty of Informatics of Brno Technical University) and offer 6+ tracks. This year it shrank back to its pre-2012 size.

For us from the Fedora community it was great to be back among people, talking directly to our users. We demoed the freshly released Fedora 37 Beta and we also showcased Fedora on a Pinephone (with the Phosh environment). The theme of this year’s OpenAlt turned out to be Linux on mobile phones. There were several talks on this topic, people showed up with different phones (Pinephone, Librem 5…) and different OSes on them (Fedora, Manjaro, Debian, SailfishOS…).

Fedora on Pinephone

During the last 3 years we have also accumulated a lot of Fedora swag in storage, so we had a lot to give away, and people appreciated it because apparently getting a sticker of your favorite distribution is something people were missing too.

I’d also like to thank Vojtech Trefny, Jan Beran, Ondrej Michal, and Lukas Kotek for helping me staff the booth.

Diving deeper into custom PDF and epub generation

In a previous blog post I looked into converting a custom markup text format into "proper" PDF and epub documents. The format at the time was very simple and could not do even simple things like italic text. At the time it was ok, but as time went on it seemed a bit unsatisfactory.

Ergo, here is a sample input document:

# Demonstration document

This document is a collection of sample paragraphs that demonstrate
the different features available, like /italic text/, *bold text* and
even |Small Caps text|. All of Unicode is supported: ", », “, ”.

The previous paragraph was not indented as it is the first one following a section title. This one is indented. Immediately after this paragraph the input document will have a scene break token. It is not printed, but will cause vertical white space to be added. The
paragraph following this one will also not be indented.

#s

A new scene has now started. To finish things off, here is a
standalone code block:

```code
#include<something.h>
/* Cool stuff here */
```

This is "Markdown-like" but specifically not Markdown because novel typesetting has requirements that can't easily be retrofit in Markdown. When processed this will yield the following output:

Links to generated documents: PDF, epub. The code can be found on Github.

A look in the code

An old saying goes that the natural data structure for any problem is an array, and if it is not, then change the problem so that it is. This turned out very much to be the case in this problem. The document is an array of variants (section, paragraph, scene change etc). Text is an array of words (split at whitespace) which get processed into output, which is an array of formatted lines. Each line is an array of formatted words.

For computing the global chapter justification and final PDF it turned out that we need to be able to render each word in its final formatted form, and also hyphenated sub-forms, in isolation. This means that the elementary data structure is this:

struct EnrichedWord {
    std::string text;
    std::vector<HyphenPoint> hyphen_points;
    std::vector<FormattingChange> format;
    StyleStack start_style;
};

This is "pure data" and fully self-contained. The fields are obvious: text has the actual text in UTF-8. hyphen_points lists all points where the word can be hyphenated and how. For example if you split the word "monotonic" in the middle you'd need to add a hyphen to the output but if you split the hypothetical combination word "meta–avatar" in the middle you should not add a hyphen, because there is already an en-dash at the end. format contains all points within the word where styling changes (e.g. italic starts or ends). start_style is the only trickier one. It lists all styles (italic, bold, etc) that are "active" at the start of the word and the order in which they have been declared. Since formatting tags can't be nested, this is needed to compute and validate style changes within the word.

Given an array of these enriched words the code computes another array of all possible points where the text stream can be split, both within and between words. The output of this algorithm is then yet another array. It contains all the split points. With this the final output can be created fairly easily: each output line is the text between split points n and n+1.

The one major missing typographical feature is widow and orphan control. The code merely splits the page whenever it is full. Interestingly it turns out that doing this properly is done with the same algorithm as paragraph justification. The difference is that the penalty terms are things like "widow existence" and "adjacent page height imbalance".

But that, as they say, is another story. Which I have not written yet and might not do for a while because there are other fruit to fry.

Bring Your Own Disaster

After my last post, someone suggested that having employers be able to restrict keys to machines they control is a bad thing. So here's why I think Bring Your Own Device (BYOD) scenarios are bad not only for employers, but also for users.

There's obvious mutual appeal to having developers use their own hardware rather than rely on employer-provided hardware. The user gets to use hardware they're familiar with, and which matches their ergonomic desires. The employer gets to save on the money required to buy new hardware for the employee. From this perspective, there's a clear win-win outcome.

But once you start thinking about security, it gets more complicated. If I, as an employer, want to ensure that any systems that can access my resources meet a certain security baseline (eg, I don't want my developers using unpatched Windows ME), I need some of my own software installed on there. And that software doesn't magically go away when the user is doing their own thing. If a user lends their machine to their partner, is the partner fully informed about what level of access I have? Are they going to feel that their privacy has been violated if they find out afterwards?

But it's not just about monitoring. If an employee's machine is compromised and the compromise is detected, what happens next? If the employer owns the system then it's easy - you pick up the device for forensic analysis and give the employee a new machine to use while that's going on. If the employee owns the system, they're probably not going to be super enthusiastic about handing over a machine that also contains a bunch of their personal data. In much of the world the law is probably on their side, and even if it isn't then telling the employee that they have a choice between handing over their laptop or getting fired probably isn't going to end well.

But obviously this is all predicated on the idea that an employer needs visibility into what's happening on systems that have access to their systems, or which are used to develop code that they'll be deploying. And I think it's fair to say that not everyone needs that! But if you hold any sort of personal data (including passwords) for any external users, I really do think you need to protect against compromised employee machines, and that does mean having some degree of insight into what's happening on those machines. If you don't want to deal with the complicated consequences of allowing employees to use their own hardware, it's rational to ensure that only employer-owned hardware can be used.

But what about the employers that don't currently need that? If there's no plausible future where you'll host user data, or where you'll sell products to others who'll host user data, then sure! But if that might happen in future (even if it doesn't right now), what's your transition plan? How are you going to deal with employees who are happily using their personal systems right now? At what point are you going to buy new laptops for everyone? BYOD might work for you now, but will it always?

And if your employer insists on employees using their own hardware, those employees should ask what happens in the event of a security breach. Whose responsibility is it to ensure that hardware is kept up to date? Is there an expectation that security can insist on the hardware being handed over for investigation? What information about the employee's use of their own hardware is going to be logged, who has access to those logs, and how long are those logs going to be kept for? If those questions can't be answered in a reasonable way, it's a huge red flag. You shouldn't have to give up your privacy and (potentially) your hardware for a job.

Using technical mechanisms to ensure that employees only use employer-provided hardware is understandably icky, but it's something that allows employers to impose appropriate security policies without violating employee privacy.

September 18, 2022

GSoC 2022 with GNOME: Final Report

This post marks the end of my GSoC'22 journey with GNOME. I worked on the database migration and managing the user model for the GNOME Health application. Let's take a deep dive into the project.

Project Overview

Health is a health and fitness tracking application. It helps the user track and visualize their health indicators better. That means a user can track their activities and weight progression. The project is created and maintained by Rasmus Thomsen, who is also the mentor of my GSoC project.

Attached below is the screenshot of the Health MainView:

Health

Project Goals

  • Add the database version for the better migration of the database.
  • Add the user model for the better management of the user data with the database.
  • Add the sync model to improve the synchronization of the data with the third-party sync providers.
  • Add Apple HealthKit and NextCloud Health as additional sync providers.

Project Contributions

I started my GSoC journey with the project by adding a database version to the database. The database version is used to migrate the database to the latest version. A static version is added to the code that depicts the current version of the database. A new property called `version` is added to the database which depicts the current version of the database for the user. If the versions are the same, the migration is skipped; otherwise we carry on with the migration of the database. This slight change improved the startup time of the application by a huge margin.
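
The gist of the check, as a rough illustrative sketch (in Go for brevity; Health itself is written in Rust, and the helper names here are made up):

// CurrentVersion is the static version baked into the code.
const CurrentVersion = 2

// migrateIfNeeded compares the version stored in the database with the
// version the code expects, and only runs migrations on a mismatch.
func migrateIfNeeded(storedVersion int) {
        if storedVersion == CurrentVersion {
                return // up to date: skip migration entirely, so startup stays fast
        }
        for v := storedVersion + 1; v <= CurrentVersion; v++ {
                applyMigration(v) // hypothetical per-version migration step
        }
        storeVersion(CurrentVersion) // hypothetical: persist for the next startup
}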

Then I worked on designing a new model for the users. This helped in migrating the user data from the GSettings to the database so that multiple users can be supported. More information regarding this project can be found in my previous blog.

MR Link: https://gitlab.gnome.org/World/Health/-/merge_requests/174

I'm currently working on the implementation of the sync model and it will be finished soon.

Future Work

As I mentioned earlier, I'm currently working on the implementation of the sync model. I will be adding the Apple HealthKit and NextCloud Health as additional sync providers.

Apart from that, I'm also working on a new Habit Tracking application with Rust and GTK. I will update more about it later on. Stay tuned!

GUADEC 2022

This year I got to present my project at GUADEC 2022. It was a great experience to present my project to the GNOME community. I got to learn a lot from the GNOME community and I'm really thankful to them for their support. This was my first time attending GUADEC and I hope to attend it again next year. You can find the recording of my presentation here.

Conclusion

Overall it was a great experience during the last 12 weeks. I learned a lot about the GNOME community and the GNOME development process. I'm really thankful to my mentor Rasmus Thomsen for his support and guidance throughout the project. I'm also thankful to the GNOME community for their support and feedback. I'm looking forward to contributing more to the GNOME community in the future. That will be all for now. Thank you for reading this post. See you in the next one.

Garden work

Another post which is not about software! I’ve recently, finally, finished reworking my garden, and here’s a brief writeup of what happened. It includes some ideas for low-embodied energy and ecologically friendly garden design.

The original garden

This house is on a hill. The original garden was a set of three concrete slabbed terraces going down the hill, with some wooden decking on the top terrace. There was a block paved path ramping down the garden, separating the decking from a sloping grass lawn. There were very few plants which weren’t grass or a few pots.

Problems with this included:

  • Decking was rotten
  • Lower terrace served no purpose and was devoid of life
  • There was very little biodiversity in the garden, and no space to grow anything
  • Steps in the path had been installed with uneven heights and that was surprisingly annoying

The plan

  • Get rid of the decking because it’s rotten
  • Remove the terraces because they’re just concrete, and replace them with more soil and planting area
  • Make the subdivisions of the garden less rectilinear so it feels a bit less brutal
  • Lower some areas of the terraces a bit to get a bit more privacy (the garden is overlooked)
  • Severely reduce the grass lawn area, because it requires frequent mowing and is not very ecologically diverse
  • Rebuild the path to make it curvy and add some planting area in a sunny spot by it
  • Keep the garden adaptable and don’t make anything too permanent (by cementing it in place) — I, or others, may want to rearrange things in future

Executing the plan

I started on this in 2019. Progress was slow at first, because a large part of the plan involved digging out the terraces, and there was some question about whether this would undermine the foundations of the house. That could cause the house to fall down. That would be bad.

I talked to a structural engineer, and he specified a retaining wall system I could install, which would retain the house wall and foundations, act as a raised bed, and is made out of wood so would have low embodied carbon compared to a (more standard) masonry wall (which is about 40kgCO2e/m2, see table 11 here). There was various other research and considerations about adjoining property, safety, drainage, appearance of the materials as they age, and suitability for DIY which fed into this decision. I can go into the details if anyone’s interested (get in touch if so).

What followed was about 10 months of intermittent work on it, removing the old terraces, digging a pond and sowing some wildflowers, installing the new retaining wall, fixing drainage through the clay, bringing in soil, laying a clover lawn, and rebuilding the path.

The result

I’m pretty pleased with the result. There are a few decisions in particular which I’m quite pleased worked out, as they’re not a common approach in the UK and were a bit of a gamble:

  • Clover patio. Rather than a paved patio area (as is common here), I planted clover seed on a thin bed of soil with a weed control membrane beneath. This has a significantly lower embodied carbon than paving (around 100-200kgCO2e/tonne less, if imported natural stone was used, which is the standard in the UK at the moment), and drains better, so it doesn’t contribute to flash flooding runoff. With rain becoming less frequent but more intense in the UK, runoff is going to become more of a problem. My full analysis of the options is here. I chose clover for the planting because it doesn’t require mowing, and should stay short enough to sit on. As per table 1 of this paper, I might adjust the planting in future to include other non-grass species.
  • Wooden retaining wall. I used Woodblocx, and it worked out great. It didn’t require any cementatious materials (which have high embodied carbon), just a compacted type 2 sub-base and their wooden blocks. It’s repairable, and recyclable at end of life (in about 25 years).
  • Wood chip path. This was easy to install (wood chips over a weed control membrane), doesn’t contribute to flash flooding runoff like paved paths do, and is a good environment for various insects which like damp and dark places. It will need topping up with more wood chips every few years, but no other maintenance. The path edging is made from some of the old decking planks from the original garden (the ones which weren’t rotten).
  • Water butt stands. These are all made from bits of old decking or Woodblocx offcuts, and make the water butts easier to use by bringing the tap to a more reachable level. I also made a workbench out of old decking planks.

Subjectively, now the garden’s been basically finished for a year (I finished the final few bits of it the other day), I’ve seen more insects living in it, and birds feeding on them, than I did before. Yay!

September 17, 2022

SUSE is my new distribution (new job)

This week I've started to work at SUSE. I'll be working as a Python Specialist in the packaging team, so I will go back to working on packaging and distribution after more than ten years. My first job, in 2008, was working on an Ubuntu-based local distribution, Guadalinex, so packaging and distribution work is not something new for me.

Python

Python was the first language that I fell in love with. I learned to write code with C and C++, but when I discovered Python, in 2006, I found a really nice language for creating amazing things really fast, with a great community behind it.

I'm very happy for this new opportunity to be able to contribute to the Python distribution in all the SUSE flavours, and also to be able to collaborate in the creation of one of the most famous and used Linux distributions.

Tumbleweed

As part of this job change I've also installed SUSE Tumbleweed for the first time. Tumbleweed is a rolling release distribution with the latest packages. In the past I was using other rolling release distributions like Arch, but this one looks more user friendly.

I've not spent a lot of time here, but from the point of view of a GNOME developer, I can say that it's a great distribution for development with updated packages, and it looks "stable". You can choose the desktop to use on installation and the GNOME desktop is there without any customization that I've detected, so it looks like it's a good vanilla GNOME desktop distribution.

Endless, it's not the end

I'm not working for EndlessOS now, but it's not the end. I've been working here for almost 4 years. At first I worked on the Hack Computer, and after that project didn't work out, I was working on the Endless Key.

During this time I've also collaborated a bit with the EndlessOS distribution, and I can say that's a really nice distribution to use, the ostree usage for the whole filesystem is a great idea, and the amount of content that comes with the installation is really good.

The EndlessOS Foundation's goal is to reduce the digital divide, providing content and tools for offline people, centered on kids. This is a great mission, and in the future, if I find the opportunity to help my local community, I'll try to use the EndlessOS tools and content to provide good learning content for kids without online access.

I was very happy these years at Endless, and I've learned a lot from different great people. It's incredible how many talented software engineers are related to Endless, and for me it was a real privilege to be able to share this space and mission for a few years.

The future!

So there we go, I'm excited about this change, and also sad about leaving a great project, but life is change and we should go ahead and think about the future! And my future is green now.

And if you don't know how to pronounce it, here you have a music video:

GSoC 2022: Overview

Introduction

Throughout this summer I've been working on making the New Documents feature discoverable in Nautilus, the file manager for GNOME, as part of my GSoC project. This post is an overview, with links, of the work I did together with my mentor Antonio Fernandes.

Results

For the project I was supposed to resolve the discoverability problem of this feature - when there are no templates in the Templates directory, the new document menu is not shown, and many users don't know about its existence. Below is a list of steps I've taken to fulfill that quest:

  1. Planning - first we had to establish a schedule for our work
  2. Research - then we had to do some research about how others implement this feature
  3. Design - afterwards we had to design a mockup for the improved feature
  4. Code - at last a prototype was made, and after several design iterations a final MR was submitted and merged to a feature branch (not master yet); during that phase I also presented at GUADEC

Future

While the short term solution is implemented in a feature branch, it's still not a final one - we need to keep iterating on the design, perform usability testing, and get more feedback before we implement it in the master branch and it can reach users. There is also a long term plan detailed in the mockups - it's an ambitious one that requires changes across the app ecosystem, but some day the work towards it needs to start, and I'm going to aim for it. I'm also planning on continuing to contribute to Files, as well as other core GNOME apps written in C.

Conclusion

This GSoC project has been an amazing journey - I expanded my knowledge of C, GLib, GTK, Libadwaita, and the internals of Files. I also learned how to do proper research with Boxes, use Inkscape for creating mockups, ask for designer feedback, and how to use Builder to create magic.

I would like to thank my mentor Antonio Fernandes for answering my questions and guiding me through the project, as well as Tobias Bernard, Allan Day and Michaël Bertaux for designer feedback.

GSoC Finale

It definitely was a long journey. Although I didn’t write enough blogs to outline the exact shaping of it, here’s a final report on what has been going on.

The workings of the protocol and the connection specifics have already been discussed before, so let’s keep this short.

  • Discovery using mDNS
  • Connection over TCP
  • Messages are structured in JSON and encoded using protobuf
  • All the messages have at least a sender id, destination id, namespace, payload type, and the payload itself

Last time we connected to the Chromecast device, we opened a media receiver web app and played a video on it.
Our goal, as it remains, is to cast our desktop/window on the Chromecast display. We can choose from three different streaming protocols to have what we want: DASH, HLS, and Smooth Streaming.

Implementing one of those was the primary goal until a sparse study of Chromium’s code unveiled a namespace called urn:x-cast:com.google.cast.webrtc.
We also used Google Chrome’s (or Chromium’s) tab casting feature and recorded the logs. Looking at the logs virtually revealed the entire message exchange and the steps involved, but it was no documentation by any stretch. We had no idea what the possibilities were or the error cases, for that matter.
While trying to figure out the Chromecast protocol, we noticed many log statements in the message exchange files that never appeared in Chrome's logs, even with the highest level of verbosity: DLOG and VLOG, for example, where I suspect D stands for development and V for verbose.
It would have been easier to see what exactly happens with Chrome’s logs. We tried using the dev symbols that come with the Linux distribution, but Arch Linux had some issues at the time.
Nevertheless, I don’t suppose that would’ve worked for the logs.

So the only option left was to read the code and figure it out ourselves. And that's what we did, since no one on the internet had been successful with the webrtc stream before.
We came across this directory. It contained all the knowledge there was: the proto files, the json schemas, the header files with all the constants, all the good stuff we needed. From the names of the files themselves we knew how to send the stream to Chromecast, except that we didn't (hint hint: something to do with encryption).

Let me loop back to the negotiation part of the WebRTC stream. In normal WebRTC, there is a separate entity that relays messages between the two interested parties. Here that happens to be the sender itself.
We open the app called Chrome Mirroring (app id 0F5096E8). This is for audio and video streaming. There is also one for only audio streaming. You can find a few others here.
Now, when the app is opened, we get a receiver status message, and we start communicating with the app from this point on by addressing the messages to the session id of the app, after connecting to it first.

The first thing we do is send an offer message with the necessary information for the streams we wish to send and the encryption keys, among other details. Here is an example message:

{
  "source_id": "sender-990878",
  "destination_id": "a8a79f97-a63e-4555-baec-53e3f49e6df3",
  "namespace": "urn:x-cast:com.google.cast.webrtc",
  "payload_utf8": {
    "offer": {
      "castMode": "mirroring",
      "receiverGetStatus": true,
      "supportedStreams": [
        {
          "aesIvMask": "1D20EA1C710E5598ECF80FB26ABC57B0",
          "aesKey": "BB0CAE24F76EA1CAC9A383CFB1CFD54E",
          "bitRate": 102000,
          "channels": 2,
          "codecName": "opus",
          "index": 0,
          "receiverRtcpEventLog": true,
          "rtpExtensions": "adaptive_playout_delay",
          "rtpPayloadType": 127,
          "rtpProfile": "cast",
          "sampleRate": 48000,
          "ssrc": 144842,
          "targetDelay": 400,
          "timeBase": "1/48000",
          "type": "audio_source"
        },
        {
          "aesIvMask": "1D20EA1C710E5598ECF80FB26ABC57B0",
          "aesKey": "BB0CAE24F76EA1CAC9A383CFB1CFD54E",
          "codecName": "vp8",
          "index": 1,
          "maxBitRate": 5000000,
          "maxFrameRate": "30000/1000",
          "receiverRtcpEventLog": true,
          "renderMode": "video",
          "resolutions": [{ "height": 1080, "width": 1920 }],
          "rtpExtensions": "adaptive_playout_delay",
          "rtpPayloadType": 96,
          "rtpProfile": "cast",
          "ssrc": 545579,
          "targetDelay": 400,
          "timeBase": "1/90000",
          "type": "video_source"
        }
      ]
    },
    "seqNum": 730137397,
    "type": "OFFER"
  }
}

Note: only the payload_utf8 value is structured as JSON; the other sibling keys are fields of the protobuf message.

You see that? All the details are right there in the offer message.
Other codecs are also supported, such as h264, vp9, hevc, and aac.
Although most of the fields should be self-explanatory, focus on the key ssrc in both the audio and video streams. This is the identifier for the stream.
The AES key and IV mask are passed down in the offer message itself. The algorithm used, however, is AES-CTR-128.
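
To give an idea of what that means in practice, here is a minimal AES-CTR-128 sketch using Python’s cryptography package with the key and IV mask from the offer above. The actual per-frame IV derivation (Chromium mixes the frame counter into the mask) is omitted here, so treat this purely as an illustration of the cipher setup:

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("BB0CAE24F76EA1CAC9A383CFB1CFD54E")
iv_mask = bytes.fromhex("1D20EA1C710E5598ECF80FB26ABC57B0")

def encrypt_payload(payload: bytes, iv: bytes) -> bytes:
    # AES-CTR-128: the 16-byte IV acts as the initial counter block.
    encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return encryptor.update(payload) + encryptor.finalize()

# Illustration only: a real sender derives a fresh IV per frame from iv_mask.
ciphertext = encrypt_payload(b"frame payload", iv_mask)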

We sent an offer. We must receive an answer to it.

{
  "source_id": "a8a79f97-a63e-4555-baec-53e3f49e6df3",
  "destination_id": "sender-990878",
  "namespace": "urn:x-cast:com.google.cast.webrtc",
  "payload_utf8": {
    "answer": {
      "castMode": "mirroring",
      "receiverGetStatus": true,
      "receiverRtcpEventLog": [0, 1],
      "rtpExtensions": [[], []],
      "sendIndexes": [0, 1],
      "ssrcs": [144843, 545580],
      "udpPort": 51810
    },
    "result": "ok",
    "seqNum": 730137397,
    "type": "ANSWER"
  }
}

You might find it odd that the ssrc values have been incremented compared to what we sent. Apart from that, the keys that deserve our attention are udpPort, result, and type. If the result is “ok”, we collect the port number and start streaming RTP packets and exchanging RTCP packets. For streaming RTP packets, we need a controller for the packets, either RTCP or RTSP. We found this out the hard way while messing around with the GStreamer launch command.

After these findings, we had some time left to reorganise the code and handle the edge cases well.
Daily routine was to read docs, write code, test, check logs, and repeat.

Now we had what you can call state management (just a variable and if checks) and some nice JSON string generators.
We jumped right into media handling. Already having a pipeline to process video and audio in GNOME Network Displays made things easy, but tinkering with it was no easy job until the saviour lent a helping hand. The first few tutorials were all that was needed to get up to speed.

The pipeline created for video (only) streaming looked like what follows:

Video Pipeline

You might want to zoom in a little.
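
If the image is hard to read, the gist of the graph can be approximated with a parse-launch sketch in Python. This is not the exact GNOME Network Displays pipeline; the source, caps and properties are illustrative, and the RTCP session management (rtpbin) is left out:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Test source instead of a real screen capture; ssrc, payload type and the
# UDP port come from the offer/answer exchange shown earlier.
pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert "
    "! vp8enc deadline=1 ! rtpvp8pay ssrc=545579 pt=96 "
    "! udpsink host=192.168.1.50 port=51810"  # hypothetical receiver address
)
pipeline.set_state(Gst.State.PLAYING)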

This was an unencrypted RTP stream with unencrypted RTCP messages. The only thing to add was an encryption layer over the RTP packets. The plugin srtpenc provided by GStreamer did precisely this, except that it did not do AES-CTR, only AES-ICM and AES-GCM. Going deeper, it used the library libsrtp by Cisco, which, unfortunately, only has implementations for the same two algorithms.

There are two ways to move forward:

  • Add necessary headers in srtpenc and the AES-CTR implementation in libsrtp upstream
  • Implement a GStreamer plugin that does what both srtpenc and libsrtp do with some help from openssl

Despite that, we were enticed to test the unencrypted stream as is. We had the following broadcast message (and a blank Chromecast display to go with it).

{
  "source_id": "b7319b19-5641-472e-91d8-daeff9746a68",
  "destination_id": "*",
  "namespace": "urn:x-cast:com.google.cast.media",
  "payload_utf8": {
    "requestId": 0,
    "status": [
      {
        "currentTime": 0.0,
        "disableStreamGrouping": true,
        "media": {
          "contentId": "",
          "contentType": "video/webm",
          "metadata": { "metadataType": 0, "title": "Chrome tab casting" },
          "streamType": "LIVE"
        },
        "mediaSessionId": 0,
        "playbackRate": 1.0,
        "playerState": "PLAYING",
        "supportedMediaCommands": 0,
        "volume": { "level": 0.059999998658895493, "muted": false }
      }
    ],
    "type": "MEDIA_STATUS"
  }
}

What does all of this mean for the end-user? Do you wait patiently for us to code the plugin?
Of course not.
Remember that Chromecast also provides three streaming options with the media app, specifically the Default Media Receiver app (app id CC1AD845). We went with HLS.
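
For the curious, telling the Default Media Receiver to play an HLS stream is just a LOAD message on the media namespace. A sketch reusing the hypothetical send_message helper from earlier (the URL is made up; the message shape follows the documented Cast media channel):

import json

load = {
    "type": "LOAD",
    "requestId": 1,
    "media": {
        "contentId": "http://192.168.1.10:8080/stream.m3u8",  # our HLS playlist
        "contentType": "application/x-mpegurl",
        "streamType": "LIVE",
    },
}
send_message(sock, SENDER_ID, SESSION_ID,
             "urn:x-cast:com.google.cast.media", json.dumps(load))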

Laptop screen mirrored on Chromecast

There is still some time till this lands on GitLab; some cleanup is left.
Also, the files (the playlist file and the segment files) are currently served from an nginx server (with a CORS header), so that needs to move inside the app as well. Soup is the answer to this one.

Now, you won’t be able to play competitive games over this or use it as a second display, but it surely can handle presentations well.

Huge thanks to my mentors Benjamin Berg (@bberg) and Claudio Wunder (@cwunder).
This project was practically impossible to complete without their help.

PS: In the offer message, we set castMode to mirroring. There is a second option called remoting. From what we gathered, it is a dynamic mode that does not need the codec declarations beforehand and supports RPC messages for some reason. Our best guess is that this has to do with Chrome Remote Desktop and is of no interest to us.


This isn’t the finale. This is the beginning.

September 16, 2022

GUADEC and App Organization BoF

TL;DR: I attended my second GUADEC, got awarded the Community Appreciation Award, and still have questions about the future of conferences. Also, there is a ton of stuff coming up for the organization of apps within GNOME!

Berlin Mini GUADEC

Before GUADEC there was an astonishing Covid wave in Germany, so I finally somehow caught it despite not even really leaving my flat around this time. I was still feeling quite weak around GUADEC and also had to catch up with my preparation for the BoF I was hosting. More about the BoF below. I still managed to drop by for one evening, seeing some new faces and attending Tobias’ talk in person.

Despite still being pretty much a greenhorn within the GNOME community I was awarded this year’s Community Appreciation Award (aka “Pants of Thanks”). I was very thankful that this year’s general assembly included a huge block of attributions to a lot of initiatives and contributions within the GNOME project. We have so many wonderful projects and contributors within the project that a single award is not nearly enough to cover everything that’s going on. I can’t deny, though, that I felt flattered to receive an award for my work 😊 I also had a pretty huge smile on my face when reading the hints as to who would be getting this year’s award.

    • has a passion for espresso
    • comfortable with sharp tools to do woodworking
    • deeply cares about the public face of GNOME
    • fights for the rehabilitation of a small mammal, against one of the most iconic Pokémon

The Future of Conferences

“Conferences are broken” is not really a hot take anymore and with what we know today about how different people learn and socialize it’s not really a surprise that there is not one format that fits everyone. Apart from the question about the usefulness of conferences in the current format, I wanted to boost awareness of some aspects of why in-person-only conferences can be exclusionary:

    • For minoritized community members, it is more likely that the conference environment is unsafe, despite the event having a code of conduct.
    • For some minoritized community members, travel is either not possible or much more challenging than for others.
    • Some community members don’t find it environmentally responsible to travel long distances for such an event.
    • For some disabled or neurodivergent community members, a traditional conference setting might not be feasible or very exhausting.
    • For some community members, taking the time off might not be financially viable or they are bound to a location because of care work commitments.

There are probably more that I forgot or don’t know of.

So I hope that GUADEC will at least continue to enable remote participation. And maybe, one day, someone will find the time to ask the question of what the target audience of GUADEC is, what the conference should provide, and what the best formats for the audience are to achieve this.

App Criteria and Organization BoF¹

There has been ongoing work around organization, branding, review, and much more around apps. For an attempted overview, you can check my previous blog post. With this BoF we have found a consensus on how to go forward with many things around apps within GNOME!

    • Going forward we will try to share a lot of criteria that we apply to apps when reviewing them for GNOME Core or GNOME Circle.
    • We are planning to have a new Incubation Process for GNOME Core apps. This will allow for a more transparent way of creating new apps for GNOME Core and is also intended to get more feedback from the wider community as well as from other stakeholders like distributions during the app development.
    • There will also be a clearer path on how and when apps can be removed from GNOME Core or GNOME Circle. Hopefully, this will make the process more transparent.
    • We are also planning to have regular (every two years for now) reviews of GNOME Core and GNOME Circle apps to help maintainers with quality control and to potentially find ways to help with existing problems within the projects.
    • We will finally have some more concrete and conceptual definitions of what GNOME Core Apps and GNOME Development Tools are. Those two groups will form the apps part of official GNOME software. We are also trying to define the role of GNOME Core App maintainers and the project ownership more clearly.

Most of those things are now documented in a central App Organization repository. The GNOME Circle Committee recently started relying on those new criteria. Hopefully, the Release Team can start introducing the new mechanisms to Core apps as well, soon. Of course, we will have to somewhat experiment with all of those new things as we go and might have to adjust them while we are gaining experience.

Huge thanks to everyone who has contributed to this effort so far! Recently especially Chris Davis who designed the incubation process and Michael Catanzaro who helped a lot with making the BoF a success.

¹ BoF: Usually a discussion group on a particular topic at a conference. See for example IETF BoFs.

Learning asynchronous programming in Rust

I recently found myself needing to write a dynamic reverse HTTP/websocket proxy. After some prototyping it is now time to write something real. To prepare myself for that, I devoted today’s Red Hat Day of Learning to another aspect of Rust: asynchronous programming, and learning about tokio. There is really no getting around tokio in the Rust world of networking. I started with the small book “Asynchronous Programming in Rust”. Honestly, I found it a bit hard to follow, as it quickly dives into a lot of technical details for which I don’t yet have the background.

September 15, 2022

Even Mo’ Pixels

To keep the habit alive, I continue to do a daily pixel routine, now covering almost all of the GNOME Circle apps.


I’ve been practicing the art of animation a little too in an effort to promote GNOME Circle on Twitter and Mastodon. Presenting all these GIFs would probably not be kind to Planet GNOME readers though. Perhaps I could compose a video in the future (no GIF support in Blender, strangely!). Keep grinding your (pointless) skills, kids!

Previously

Libadwaita 1.2

So, half a year after 1.1, libadwaita 1.2 has been released.

While it doesn’t contain everything I had planned (since I ended up being mostly unavailable for about half of the cycle for reasons outside my control), it still has a bunch of additions, so let’s take a look at the changes.

Entry Rows

First, we have a widget that was planned for 1.0, but didn’t make it because it still needed work. This cycle I took time to finish and land it.

Originally implemented by Maximiliano, AdwEntryRow is a new type of boxed list rows, containing an inline entry. Its title doubles as the entry placeholder.

Entry rows in Libadwaita 1.2 demo

Entry rows can require confirmation via an apply button shown when their contents are edited. Otherwise, the API is similar to GtkEntry, and entry rows can have prefix and suffix widgets like AdwActionRow.

There is also a companion widget AdwPasswordEntryRow, mirroring GtkPasswordEntry.

Message Dialogs

AdwMessageDialog with heading: Save Changes?, body text: Open document contains unsaved changes. Changes which are not saved will be permanently lost., and buttons: Cancel, Discard, Save. The Discard button has destructive appearance, the Save button has suggested appearance

AdwMessageDialog is a new widget that replaces GtkMessageDialog. At first glance, it looks more or less the same. However, it has a few important differences.

Adaptive Layout

GtkMessageDialog tends to overflow off the screen very easily. AdwMessageDialog doesn’t because it’s fully adaptive: when scaling down the window, it first restricts its own size to the parent’s size:

AdwMessageDialog with heading: Save Changes?, body text: Open document contains unsaved changes. Changes which are not saved will be permanently lost., and buttons: Cancel, Discard, Save. The Discard button has destructive appearance, the Save button has suggested appearance. When the window is shrunk down to mobile size, it becomes narrower

And if that’s not enough to fit, it arranges its buttons vertically:

AdwMessageDialog with heading: Save Changes?, body text: Open document contains unsaved changes. Changes which are not saved will be permanently lost., and buttons: Cancel, Close without Saving, Save As…. The Close without Saving button has destructive appearance, the Save As… button has suggested appearance. When the window is shrunk down to mobile size, it rearranges buttons vertically

Additionally, it always uses the vertical arrangement if the dialog would end up ridiculously wide otherwise, even when there is enough space to display it like that.

API

The second important thing is that it’s not a GtkDialog subclass and, as such, has completely new API.

  • GtkMessageDialog inherits the response API from GtkDialog. It relies on integer IDs that are provided by applications, except when they aren’t – the GtkResponseType enum provides a few predefined responses. This is a very C-centric API (as it assumes enum values being interchangeable with integers), and it’s not overly convenient to use. AdwMessageDialog replaces it with simple string IDs that are always provided by applications.

    This also allowed the response signal to be detailed, using the response ID as the detail.

    In addition, AdwMessageDialog is automatically closed on response, and doesn’t need to be closed manually. As such, common code like this:

    static void
    dialog_response_cb (GtkDialog       *dialog,
                        GtkResponseType  response,
                        MyObject        *self)
    {
      if (response == GTK_RESPONSE_ACCEPT)
        my_object_foo (self);
    
      gtk_window_destroy (GTK_WINDOW (dialog));
    }
    
    ...
    
    g_signal_connect (dialog, "response",
                      G_CALLBACK (dialog_response_cb), self);
    

    can be replaced with just:

    g_signal_connect_swapped (dialog, "response::accept",
                              G_CALLBACK (my_object_foo), self);
    
  • Just as AdwMessageDialog doesn’t have predefined responses and leaves that to apps, it doesn’t use GtkButtonsType. It seems counterproductive to have API to easily create buttons like “OK”, “Cancel”, “Yes” and “No” when we’ve been discouraging labels like this for just about two decades now, doesn’t it?
  • No direct widget access. GtkMessageDialog required apps to poke into its widgetry to pack additional widgets, add style classes to its buttons and so on. AdwMessageDialog can have a single extra child, displayed below the body text and managed with the extra-child property, and natively supports changing button appearance instead.

    AdwMessageDialog from GNOME Text Editor, with heading: Save Changes?, body text: Open document contains unsaved changes. Changes which are not saved will be permanently lost., and buttons: Cancel, Discard, Save. The Discard button has destructive appearance, the Save button has suggested appearance. It has an additional child under the body text - a boxed list with one unsaved file called "123"

  • While the intention of GtkMessageDialog is to use its primary and secondary text, it also displays the window title, and many applications get it wrong, ending up with three labels or wrong styling:

    GtkMessageDialog from LibreOffice Writer. It has three labels: title: Save Document? primary text: Save changes to document "Untitled 1" before closing? secondary text: Your changes will be lost if you don't save them.

    GtkMessageDialog from Rnote. It has title: Quit Application, and primary text: Any unsaved changes will be lost. Do you want to quit anyways?, but no secondary text, so both labels appear wrong.

    Primary text also changes its styles to look like a heading or body text, depending on whether secondary text is set.

    AdwMessageDialog does not do any of that. It just has heading and body, that’s it.

  • Finally, it’s derivable. While making GtkMessageDialog final in GTK4 was consistent with most of the other GTK widgets, it has generally resulted in vastly more convoluted code for apps with complex dialogs – for example, the save dialog in GNOME Text Editor with its list of open files.

So now app developers have no excuses for shipping unpolished message dialogs anymore. 😉
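
To put it all together, here is a minimal sketch of the new API from Python (the parent window and the callback are placeholders; names per the Libadwaita 1.2 documentation):

import gi
gi.require_version("Gtk", "4.0")
gi.require_version("Adw", "1")
from gi.repository import Adw, Gtk

def show_save_dialog(parent: Gtk.Window) -> None:
    dialog = Adw.MessageDialog.new(
        parent, "Save Changes?",
        "Open document contains unsaved changes. Changes which are not "
        "saved will be permanently lost.")
    # String response IDs instead of GtkResponseType integers.
    dialog.add_response("cancel", "Cancel")
    dialog.add_response("discard", "Discard")
    dialog.add_response("save", "Save")
    dialog.set_response_appearance("discard", Adw.ResponseAppearance.DESTRUCTIVE)
    dialog.set_response_appearance("save", Adw.ResponseAppearance.SUGGESTED)
    # The response signal is detailed with the response ID; the dialog
    # closes itself after emitting it.
    dialog.connect("response::save", lambda d, r: print("saving"))
    dialog.present()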

About Window

AdwAboutWindow from the Libadwaita 1.2 demo, demoing a made-up Typeset app

Another widget originally planned for 1.0 that didn’t make it was Adrien’s AdwAboutWindow, replacing GtkAboutDialog. This cycle I finished it up and merged it.

The new window is adaptive and natively provides a lot of details people were doing hacks with GtkAboutDialog for before:

  • Legal information for dependencies and other components;

    A screenshot of legal information from GNOME Maps, listing information for map data provider, map tile provider and search provider

  • A list of acknowledgements in addition to credits — for example, for crowdfunding backers;
  • A place to list recent changes in the application;
  • Links to the application’s website, issue tracker, support forum, as well as any additional links, as opposed to a single link in GtkAboutDialog;
  • Debugging information, along with easy ways to save or copy it.

    Screenshot of debugging information from GNOME Text Editor's AdwAboutWindow

Christopher Davis has ported core apps using libadwaita to AdwAboutWindow, so it’s also pretty consistently used by now.

Tab View and Tab Bar Additions

AdwTabView and AdwTabBar have seen quite a few changes this cycle.

The most important change is with shortcut handling. AdwTabView provides a lot of shortcuts for switching and reordering tabs, for example, Ctrl+Tab to switch to the next tab, or Alt+1 through Alt+0 to switch to the first 10 tabs.

In Libhandy, these shortcuts were added via the shortcut-widget property, with the idea that you set it to your toplevel window unless you have multiple tab views.

In Libadwaita 1.0, that property was removed, and instead the shortcuts were switched to the GTK4 shortcut engine, using global scope (really MANAGED scope, but doesn’t matter). This means that they were available for the whole window automatically.

However, this also brought issues. For example, the Ctrl+Tab shortcut stopped working, because GTK defines that shortcut itself, for changing widget focus. Similarly, GTK4 port of the VTE library handles all input on the CAPTURE propagation phase, meaning none of the tab view shortcuts could work at all in the GTK4 port of GNOME Console.

Meanwhile, in both libhandy and libadwaita 1.0.x–1.1.x, apps that wanted to disable some of the shortcuts had to do pretty terrible hacks to achieve it, for example, registering the same shortcuts on the toplevel window and redirecting input to the tab contents or whatever widget needed to handle them.

So, to fix that, AdwTabView has switched its shortcuts to the CAPTURE phase as well, and added an easy way to enable or disable shortcuts.
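
The new toggle is just a flags-based API. A sketch from Python (assuming Libadwaita 1.2’s AdwTabViewShortcuts flags):

import gi
gi.require_version("Adw", "1")
from gi.repository import Adw

tab_view = Adw.TabView()
# Disable a single conflicting shortcut while keeping the rest enabled.
tab_view.remove_shortcuts(Adw.TabViewShortcuts.CONTROL_TAB)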

Other than that, AdwTabPage now has a property to set the tooltip for its indicator icon, and AdwTabBar similarly has tooltips on its tab close buttons – not customizable this time.

Toast Additions

  • Kévin Commaille added a way to have custom widgets as toast titles. This allows, for example, having avatar+name pills inside toasts in Fractal.
  • Emmanuele Bassi added a helper function for creating a toast with formatted title.
  • Jamie added AdwToast::button-clicked signal to make it possible to have toast buttons without actions.
  • Jonathan Blandford made it possible to add or dismiss toasts multiple times to make them easier to use. Dismissing toasts that have already been dismissed does nothing now, while adding toasts that have already been added resets the timeout for toasts that are currently shown, or bumps them forward in the queue otherwise. Previously doing that resulted in criticals being printed.

Other Additions

  • AdwPropertyAnimationTarget has been added to simplify animating object properties.
  • AdwAnimation can now change targets on the fly.
  • Sophie Herold added a property to allow disabling Pango markup in boxed list row titles (as well as subtitles in case of AdwActionRow etc).
  • AdwSplitButton can now have a separate tooltip for its dropdown button.
  • AdwViewStack has gained a helper function to add a child along with a title and an icon.
  • JCWasmx86 added plumbing to AdwStyleManager to allow GNOME Builder to launch apps with light/default/dark color scheme, as well as in high contrast. This has also been backported to Libhandy, and is available in 1.8.0 (which doesn’t have a blog post as it’s the only noteworthy change there).

Styles

AdwTabBar in Libadwaita demo 1.2

A few styles have been tweaked. In particular, AdwTabBar has seen big changes, mainly to improve contrast in dark variant and make it clearer which tab is selected.

A comparison between AdwTabBar in Libadwaita 1.1 and 1.2, in dark style

The lack of full height separators has also allowed us to remove the distracting light borders from header bars and search bars in favor of dark borders using @headerbar_shade_color, and to add a proper backdrop style consistent with header bars.

In addition, GtkActionBar and AdwViewSwitcherBar now have the same style as header bars, search bars and tab bars, while boxed lists use dark separators.

Alarms page from GNOME Clocks with Libadwaita 1.2

The .large-title style class is now documented as deprecated and apps should use .title-1 instead.

There are also a few smaller changes, for example:

  • Toasts without buttons are smaller now, instead of taking a fixed width.
  • AdwActionRow child spacing is now smaller, matching toolbars and the new AdwEntryRow.
  • AdwExpanderRow arrows have changed orientation to look less like rows that open subpages on click.
  • Lots of fixes, some of them have been backported to libadwaita 1.1.x as well.

As it happens, quite a lot didn’t make it because it needed more work, or the timing just didn’t work out. This also means that there’s a lot to look forward to in the next release though, so stay tuned.

Thanks to all the contributors, and thanks to my employer, Purism, for letting me work on Libadwaita and GTK to make this release happen.

gnome-info-collect closing soon

There has been a fantastic response to gnome-info-collect since Vojtěch announced it three weeks ago. To date we’ve had over 2,200 responses. That’s amazing! Thanks to everyone who has run the tool.

We now have enough data to perform the analyses we want. As a result, it’s time to close data collection. This will happen next Monday, 19 September. On that day, the data collection server will be turned off.

If you haven’t run gnome-info-collect yet, and would like to, there’s still a little time. See the project readme for instructions on how to install and run it.

Just because we’re shutting down gnome-info-collect doesn’t mean that it doesn’t have a future. Hopefully there will be further rounds of data collection in the future, where we can look at other aspects of GNOME usage that we didn’t examine this time round.

In the mean time, we have lots of great data to process and analyse. Watch this space to learn about what we find!

September 13, 2022

Adding software to the Steam Deck with systemd-sysext

Yakuake on SteamOS

Introduction: an immutable OS

The Steam Deck runs SteamOS, a single-user operating system based on Arch Linux. Although derived from a standard package-based distro, the OS in the Steam Deck is immutable and system updates replace the contents of the root filesystem atomically instead of using the package manager.

An immutable OS makes the system more stable and its updates less error-prone, but users cannot install additional packages to add more software. This is not a problem for most users since they are only going to run Steam and its games (which are stored in the home partition). Nevertheless, the OS also has a desktop mode which provides a standard Linux desktop experience, and here it makes sense to be able to install more software.

How to do that, though? It is possible for the user to become root, make the root filesystem read-write and install additional software there, but any changes will be gone after the next OS update. Modifying the rootfs can also be dangerous if the user is not careful.

Ways to add additional software

The simplest and safest way to install additional software is with Flatpak, and that’s the method recommended in the Steam Deck Desktop FAQ. Flatpak is already installed and integrated in the system via the Discover app so I won’t go into more details here.

However, while Flatpak works great for desktop applications not every piece of software is currently available, and Flatpak is also not designed for other types of programs like system services or command-line tools.

Fortunately there are several ways to add software to the Steam Deck without touching the root filesystem, each one with different pros and cons. I will probably talk about some of them in the future, but in this post I’m going to focus on one that is already available in the system: systemd-sysext.

About systemd-sysext

This is a tool included in recent versions of systemd and it is designed to add additional files (in the form of system extensions) to an otherwise immutable root filesystem. Each one of these extensions contains a set of files. When extensions are enabled (aka “merged”) those files will appear on the root filesystem using overlayfs. From then on the user can open and run them normally as if they had been installed with a package manager. Merged extensions are seamlessly integrated with the rest of the OS.

Since extensions are just collections of files they can be used to add new applications but also other things like system services, development tools, language packs, etc.

Creating an extension: yakuake

I’m using yakuake as an example for this tutorial since the extension is very easy to create: it is an application that some users are demanding, and it is not easy to distribute with Flatpak.

So let’s create a yakuake extension. Here are the steps:

1) Create a directory and unpack the files there:

$ mkdir yakuake
$ wget https://steamdeck-packages.steamos.cloud/archlinux-mirror/extra/os/x86_64/yakuake-21.12.1-1-x86_64.pkg.tar.zst
$ tar -C yakuake -xaf yakuake-*.tar.zst usr

2) Create a file called extension-release.NAME under usr/lib/extension-release.d with the fields ID and VERSION_ID taken from the Steam Deck’s /etc/os-release file.

$ mkdir -p yakuake/usr/lib/extension-release.d/
$ echo ID=steamos > yakuake/usr/lib/extension-release.d/extension-release.yakuake
$ echo VERSION_ID=3.3.1 >> yakuake/usr/lib/extension-release.d/extension-release.yakuake

3) Create an image file with the contents of the extension:

$ mksquashfs yakuake yakuake.raw

That’s it! The extension is ready.

A couple of important things: image files must have the .raw suffix and, despite the name, they can contain any filesystem that the OS can mount. In this example I used SquashFS but other alternatives like EroFS or ext4 are equally valid.

NOTE: systemd-sysext can also use extensions from plain directories (i.e. skipping the mksquashfs part). Unfortunately we cannot use them in our case because overlayfs does not work with the casefold feature that is enabled on the Steam Deck.

Using the extension

Once the extension is created you simply need to copy it to a place where systemd-sysext can find it. There are several places where extensions can be installed (see the manual for a list) but due to the Deck’s partition layout and the potentially large size of some extensions it probably makes more sense to store them in the home partition and create a link from one of the supported locations (/var/lib/extensions in this example):

(deck@steamdeck ~)$ mkdir extensions
(deck@steamdeck ~)$ scp user@host:/path/to/yakuake.raw extensions/
(deck@steamdeck ~)$ sudo ln -s $PWD/extensions /var/lib/extensions

Once the extension is installed in that directory you only need to enable and start systemd-sysext:

(deck@steamdeck ~)$ sudo systemctl enable systemd-sysext
(deck@steamdeck ~)$ sudo systemctl start systemd-sysext

After this, if everything went fine you should be able to see (and run) /usr/bin/yakuake. The files will remain there from now on, even if you reboot the device. You can see which extensions are enabled with this command:

$ systemd-sysext status
HIERARCHY EXTENSIONS SINCE
/opt      none       -
/usr      yakuake    Tue 2022-09-13 18:21:53 CEST

If you add or remove extensions from the directory then a simple “systemd-sysext refresh” is enough to apply the changes.

Unfortunately, and unlike distro packages, extensions don’t have any kind of post-installation hooks or triggers, so in the case of Yakuake you probably won’t see an entry in the KDE application menu immediately after enabling the extension. You can solve that by running kbuildsycoca5 once from the command line.

Limitations and caveats

Using systemd extensions is generally very easy but there are some things that you need to take into account:

  1. Using extensions is easy (you put them in the directory and voilà!). However, creating extensions is not necessarily always easy. To begin with, any libraries, files, etc., that your extensions may need should be either present in the root filesystem or provided by the extension itself. You may need to combine files from different sources or packages into a single extension, or compile them yourself.
  2. In particular, if the extension contains binaries they should probably come from the Steam Deck repository or they should be built to work with those packages. If you need to build your own binaries then having a SteamOS virtual machine can be handy. There you can install all development files and also test that everything works as expected. One could also create a Steam Deck SDK extension with all the necessary files to develop directly on the Deck 🙂
  3. Extensions are not distribution packages, they don’t have dependency information and therefore they should be self-contained. They also lack triggers and other features available in packages. For desktop applications I still recommend using a system like Flatpak when possible.
  4. Extensions are tied to a particular version of the OS and, as explained above, the ID and VERSION_ID of each extension must match the values from /etc/os-release. If the fields don’t match then the extension will be ignored. This is to be expected because there’s no guarantee that a particular extension is going to work with a different version of the OS. This can happen after a system update. In the best case one simply needs to update the extension’s VERSION_ID, but in some cases it might be necessary to create the extension again with different/updated files.
  5. Extensions only install files in /usr and /opt. Any other file in the image will be ignored. This can be a problem if a particular piece of software needs files in other directories.
  6. When extensions are enabled the /usr and /opt directories become read-only because they are now part of an overlayfs. They will remain read-only even if you run steamos-readonly disable !!. If you really want to make the rootfs read-write you need to disable the extensions (systemd-sysext unmerge) first.
  7. Unlike Flatpak or Podman (including toolbox / distrobox), this is (by design) not meant to isolate the contents of the extension from the rest of the system, so you should be careful with what you’re installing. On the other hand, this lack of isolation makes systemd-sysext better suited to some use cases than those container-based systems.

Conclusion

systemd extensions are an easy way to add software (or data files) to the immutable OS of the Steam Deck in a way that is seamlessly integrated with the rest of the system. Creating them can be more or less easy depending on the case, but using them is extremely simple. Extensions are not packages, and systemd-sysext is not a package manager or a general-purpose tool to solve all problems, but if you are aware of its limitations it can be a practical tool. It is also possible to share extensions with other users, but here the usual warning against installing binaries from untrusted sources applies. Use with caution, and enjoy!

September 12, 2022

Mini GUADEC Berlin 2022

Last July I went to Berlin for the second Mini GUADEC Berlin.

It was great seeing many other GNOME contributors in person after such a long time. The conference took place at an awesome location, the c-base.

During the event I worked on an offline indicator for Fractal and improving the error handling.

I also spent some time on the Flatpak CI, adding information on how to get aarch64 builds. I had previously added automatic aarch64 builds of the CI images.

Niels De Graef and I had a discussion on how Contacts should use tel: URIs to open call apps. The spec requires the URI to be a globally unique identifier. In the case of Contacts we don’t always have a country code, thus the phone number can’t be globally unique. In my opinion we should use the phone number as entered by the user and let the calls app decide if it can dial a phone call. Android, for example, behaves like this.

Lastly, I looked at building tarballs automatically in CI after tagging a new release. The tarballs are especially useful for Flathub, because it builds apps in an environment without internet connection.

I would like to thank the GNOME Foundation for sponsoring my travel to Berlin and Sonny and Tobias for organizing the event.

September 11, 2022

Tracker 3.x, a retrospect.

Time flies, for better or for worse. The last time I bored you with ramblings on this blog was more than 2 years ago already, prepping up for Tracker 3.0. Since I’m sure you don’t need a general catch-up about these last 2 years, let’s stay on that same subject.

Nowadays, we are very close to GNOME 43, and an accompanying 3.4.0 release of the Tracker SPARQL library and data miners; that is 4 minor releases ahead! What happened since then? Most immediately after that previous blog post, the 3.0 release did roll in, the uncertainty behind all major structural changes vanished, and the transition could largely be called a success: the overhauled internals for complete SPARQL support stood their ground without large regressions; the promises of portals and data isolation delivered and to this day remain unchallenged (except for the occasional request to let more pieces of metadata through); the increased genericity and versatility kept fostering further improvements.

Overall, there have been no major regrets, and we are now sitting comfortably in 3.x. And that was all good, since we could use all that time keeping up with improvements rather than fixing fallout. Let’s revisit what happened since.

Tracker (SPARQL library)

Testing

Ever since 3.0, test coverage has been growing fairly steadily. A very good thing about SPARQL and the Tracker API is that they are very “circular” altogether: every data format used in information exchange must be both parsed and produced by the SPARQL implementation, every external request made is also an external request it should be able to serve, and so on.

This makes it fairly easy to reach all corners in testing coverage, despite the involved complexity. It has also been quite a rule for some time that SPARQL language compliance fixes come with tests. The accumulated result over the years is a fairly large collection of tests of the internal machinery; there are over 330 subtests already for the SPARQL language alone.

At the time of writing, Tracker stands at 76.4% coverage. We are getting very good at catching deviations from how the SPARQL library should behave, and decently good at catching how it should not misbehave. Simply following W3C standards and recommendations pays off here too, since it settles the direction and resolves most matters about what the right behavior is.

Developer Experience

3.0 marked the point where being able to create private Tracker databases with custom data models transitioned from an easter egg to a first-class feature. This also means that developers can now write an ontology (or data model, or schema, pick a name) that suits their data like a glove, instead of using the default Nepomuk one, which is well-trodden and literally written by academics, but will be overkill for the needs of individual applications.

And of course there is room for failure in writing those ontologies. Last year, GSoC student Abanoub Ghadban worked hard on “breaking it”, polishing the experience and trying to produce helpful warnings, so developer mistakes are easily visible and solvable. The CLI tools provided facilitate these checks, e.g. creating a temporary endpoint that loads the ontology being edited, and running queries against it.

The documentation front also got steady improvements: the API itself is 100% documented and there is now a fully fleshed out SPARQL tutorial. Also, drawing inspiration from SQLite, there are now miscellaneous docs on some implementation details like limits, a discussion of the security considerations of the implemented specs, and extensions and interpretations of the SPARQL spec. The examples have also been modernized, and are now additionally written in Python and JavaScript.

A great blunder of how Tracker tended to be used in applications was having SPARQL mixed in code, or worse, built through string manipulation. The latter got better API-wise in the past with compiled statements, but the mix of code and database logic was still prevalent. Since 3.3, there is support for loading and creating compiled statements from query files located in GResources. This neatly addresses the separation of queries and code while keeping them indissoluble from the produced binary, and preserves the benefits of compiled statements (compile once, run many times).
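
In PyGObject terms, the pattern looks roughly like this (a sketch; the resource path and query file are invented for illustration):

import gi
gi.require_version("Tracker", "3.0")
from gi.repository import Tracker

# Connect to an existing endpoint, e.g. the filesystem miner.
conn = Tracker.SparqlConnection.bus_new(
    "org.freedesktop.Tracker3.Miner.Files", None, None)

# The query lives in a bundled GResource file instead of the code.
stmt = conn.load_statement_from_gresource(
    "/org/example/App/queries/list-items.rq", None)
cursor = stmt.execute(None)
while cursor.next(None):
    print(cursor.get_string(0)[0])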

Performance

One of the benefits of the API provided by Tracker is that it gives a great amount of leeway for internal refactors without altering the surface. We are largely just constrained by backwards compatibility with the underlying database format. This has allowed further optimizations to happen under the hood since 3.0, to both database updates and queries. Databases also take less space now, especially in the presence of many blank nodes.

The greatest performance boost for data producers, though, can be obtained through the TrackerBatch API (since 3.1). Prior to it, a TrackerResource would normally be used to build RDF data, then used to produce a SPARQL update, and the SPARQL update parsed to generate and apply the RDF changes. This new API can efficiently traverse a series of TrackerResources (describing RDF already) and turn them into database modifications, skipping the SPARQL middle man altogether.
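
A sketch of that path with PyGObject (the ontology is the bundled Nepomuk one, and the inserted values are invented):

import gi
gi.require_version("Tracker", "3.0")
from gi.repository import Tracker

# Private in-memory database using the bundled Nepomuk ontology.
conn = Tracker.SparqlConnection.new(
    Tracker.SparqlConnectionFlags.NONE, None,
    Tracker.sparql_get_ontology_nepomuk(), None)

batch = conn.create_batch()
for i in range(5000):
    resource = Tracker.Resource.new("urn:example:item%d" % i)
    resource.set_uri("rdf:type", "nie:InformationElement")
    resource.set_string("nie:title", "Item %d" % i)
    # Resources go straight into the batch, no SPARQL middle man.
    batch.add_resource(None, resource)
batch.execute(None)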

In the git tree, there now is a small utility to benchmark certain uses of Tracker API, let’s see how the output looks on a modern Intel i7 for an in-memory database:

Batch size: 5000, Individual test duration: 30 sec
Opening in-memory database…
                           Test		Elements	Elems/sec	Min         	Max         	Avg
   Resource batch update (sync)		6169883.801	205662.793	4.292 usec	5.615 usec	4.862 usec
     SPARQL batch update (sync)		2430664.747	81022.158	11.889 usec	14.255 usec	12.342 usec
   Resource modification (sync)		4440988.603	148032.953	6.588 usec	8.438 usec	6.755 usec
  Resource insert+delete (sync)		3033137.552	101104.585	9.689 usec	12.669 usec	9.891 usec
Prepared statement query (sync)		8566182.714	285539.424	3.000 usec	745.000 usec	3.502 usec
            SPARQL query (sync)		1329076.956	44302.565	21.000 usec	189.000 usec	22.572 usec

After the usual disclaimer that this benchmark utility greatly relies on CPU and disk characteristics, and your mileage may vary, there’s a few things to highlight here:

  • Using modern APIs always pays off. Inserting data directly from TrackerResource (Resource batch update) is 2.5x faster than inserting data through SPARQL updates (SPARQL batch update), and querying through prepared statements (Prepared statement query) is ~6.5x faster than repeating SPARQL queries (SPARQL query).
  • Even though the tests are run synchronously on the main loop, queries can be greatly parallelized, so the actual throughput on a modern machine will be much higher in reality. Updates are single-threaded though.
  • Used the right way, Tracker code is never in the hot paths; merits there go to SQLite and the production of the data itself. You are expected to get results in the same ballpark as using SQLite directly, given similar volumes and layouts of data.

But this snapshot does not fully highlight the improvements done. For a reference baseline, a backported version of this benchmark on the same computer over Tracker 3.0.x gives:

Batch size: 5000, Individual test duration: 30 sec
Opening in-memory database…
                           Test		Elements	Elems/sec	Min         	Max         	Avg
     SPARQL batch update (sync)		1387346.192	46244.873	17.035 usec	24.290 usec	21.624 usec
   Resource modification (sync)		259923.682	8664.123	49.863 usec	122.236 usec	115.418 usec
  Resource insert+delete (sync)		707638.539	23587.951	41.593 usec	73.702 usec	42.395 usec
Prepared statement query (sync)		7729898.742	257663.291	3.000 usec	527.000 usec	3.881 usec
            SPARQL query (sync)		888896.319	29629.877	31.000 usec	180.000 usec	33.750 usec

Looking past the lack of TrackerBatch there, it’s still easy to see there have been massive improvements pretty much all over the board. As we already encourage, using the latest Tracker gives you the best Tracker.

Data serialization

SPARQL was very much thought out with the task of storing RDF data, querying into RDF data, and moving RDF data around. From the tiniest resource/value existence check, to full content dumps, everything is one query away.

What we were lacking was a consistent way to convert all that data into something that could easily be piped through, saved, processed, etc. To make this task easy, there is now API to serialize and deserialize data between a Tracker database and the popular RDF file formats. This is performed efficiently, with flat RAM usage on both ends; it is even possible to pipe these APIs together, with the RDF data never existing anywhere at once during the process.

And since this serialization to RDF formats is a builtin feature of HTTP endpoints, it has allowed us to level up our support for these. A Tracker HTTP endpoint is now entirely compliant and indistinguishable from the larger players.

The most immediately useful application of this serialization support is in the CLI tools: the import and export commands now use this API and can deal with these formats. But what is this for? Is this driven by a level of completionism that borders on sickness? Well, yes, but there are of course plans around these features; more on that later.

Tracker Miners

Performance

You might think the SPARQL library improvements above would be the largest improvement the filesystem miner could get, and you would be wrong.

Part of the raison d’être of a filesystem indexer is to stay up-to-date with filesystem changes. In the GNOME world, this catching up is usually done through a GFileMonitor, which provides a GLib-friendly way to do the dirty job of setting up and tracking an inotify handle to watch changes on an individual directory for you. What is wrong with that? Nothing, unless you do that at a large scale like indexers do. Each of those GFileMonitors is backed by a pollable FD and a GSource wrapping it, and iterating a GMainContext that has thousands of GSources attached to it massively, thoroughly sucks.

Is this a case of Tracker abusing a perfectly fine GLib API? Or on the contrary is this a case of bad GLib API design? I will let you debate on that, as I am unclear myself.

The first solution to alleviate that (since 3.1) was delegating file monitoring to a separate thread, so the GMainContext that is expensive to iterate only affects file monitors, as opposed to everything that goes on. Later on, FANotify finally gained the missing features that made it suitable for indexers (not requiring CAP_SYS_ADMIN was one of them) and Tracker Miners got an implementation for it (since 3.3). Most notably here, with this kernel API it is only necessary to poll a single FD to receive events for all FANotify marks set on the filesystem.

In what sounds like a case of miscommunication between kernel developers developing independent new features that didn’t mix well, it is unfortunately not possible nowadays to mix the bleeding edge in file monitors (FANotify) with the bleeding edge in filesystems (btrfs); for these (and other) cases Tracker Miners will still fall back on plain GLib/inotify. Hopefully the situation will be resolved at some point.

At 3.1, the filesystem indexer also implemented flow control mechanisms that allowed its RAM usage to stay mostly flat independently of the filesystem size and layout. At the peak of its activity, tracker-miner-fs-3 uses 30-40MB here (per gnome-system-monitor), and idles at 5MB. Needless to say, it is also many times faster than its past 3.0 self.

But all this was about tracker-miner-fs-3, the daemon in charge of monitoring filesystem changes and mirroring the file/folder structure into the database. What about tracker-extract-3, the daemon in charge of nitty-gritty file metadata extraction? When this step needs to happen (say, for newly indexed or modified files), it is by all accounts expensive; now, by the sheer magic of everything else shrinking, it comparatively only got worse. There is a reason we avoid it happening frequently at all costs.

But what is slow there? Roughly speaking, it should be just a loop going through files, getting the metadata and inserting it, and that should be fucking fast as per the benchmarks above, right? Right, the problem is in the “getting the metadata” step. This will wildly fluctuate depending on the files scattered in the filesystem, their mimetypes, and the libraries used to extract their metadata. The plain text extractor or the in-tree MP3 extractor are capable of opening, extracting metadata from, and closing multiple thousands of files per second. All the external libraries used for metadata extraction (yes, all of them) are slower, ranging from several times over to up to 4 orders of magnitude, also depending on the input files (I curated some infernal PDFs). The worst offenders are Poppler, GStreamer and libtiff.

As it’s evident here (You don’t need to believe me, add TRACKER_DEBUG=statistics to /etc/environment and reset/restart the miner), most libraries dealing with files and formats optimize for library resources being long lived across an application lifetime, while optimizing the creation and disposal of these library resources is often overlooked. The metadata extraction daemon faces that hard fact file after file, so its slowness is just a reflection of the slowness of these libraries in setting themselves up. If, after all of this, someone thinks the filesystem indexer is slow, that is where the money is.

Extending and maintaining metadata

Although the focus has mainly been on making things work reliably, rather than going crazy with extending the stored metadata (yet), a point worth noting here is last year’s GSoC work from student Nishit Patel, who worked on indexing file creation time (on the filesystems that support/enable it) and on allowing it to be searched all across the stack.

We also got support for a number of game file formats (mainly, retro ones), which GNOME Games (now Highscore) readily made use of. LOL, jk.

Handling and following failures

Whenever a file is broken or corrupted, or a 3rd party library crashes or produces a syscall that is caught by seccomp, the tracker-extract-3 daemon quits (with varying degrees of gracefulness) and is taught on the next restart to avoid the file that triggered the situation. This is not precisely new behavior; what is new is that these failures are now recorded and can be easily inspected over the CLI with tracker3 status. Most bugs we receive about broken extraction are reactive (e.g. “why does Music not show this file?”), so this allows for a more proactive approach to fixing metadata extraction bugs, if users happen to look there and cooperate.

There is also a slight possibility that extraction bugs are due to Tracker itself, but these are largely a thing of the past.

Coming up next…

I very much cheered when I learnt of the “Local first” initiative. In fact, I so much anticipated it that I literally anticipated it. Development of the serialization APIs started sometime last year, with a plan to provide facilities to transparently and neatly synchronize RDF data across instances on multiple machines owned by the same user.

Who wants that? Certainly not the filesystem indexer. However, there’s indeed a desire to avoid reliance on third party services for user sensitive data like their own health information, chat logs, or bookmarked sites. With some truckloads of optimism, I would hope that this becomes a cornerstone of that goal, for applications under the GNOME umbrella that need to deal with a non-trivial amount of data.

How would that work? What do we need to get there? We need a query language that supports it (check, duh), a data model that can handle the different patterns that might emerge in synchronizing data (check), a way to make these machines talk to each other (check), and a way to diff missing data (check). All the pieces are really set, so what is missing is putting them together, of course drawing inspiration from Christian Hergert’s Bonsai to make machines discover each other.

And of course there is still very much a desire to keep the heart of content applications compelling and relevant. There’s still opportunities to further extend and link the metadata stored by the filesystem indexer, perhaps with the help of the actual semantic web that lives out there. We already have a number of universal identifiers available (musicbrainz tags, IMDB IDs, game rom IDs) to interrelate and cross-reference data.

Now that the codebase features are settled and working well, we can start thinking on new fancy features, stay tuned for the next installment of this series in 2024, when I talk about Tracker 3.8.0, or perhaps Tracker 3.5.20. If you made it this far, you have my appreciation, until the next time!

Wrapping up GSoC’22!

My 12+ weeks in the Google Summer of Code program have finally come to an end. It’s incredible how quickly time passes and how things develop. It has been a great journey and I have learned a great deal about software web development and design from exploring the project.

GSoC is a global, online program focused on encouraging budding developers and designers into the world of open-source. GSoC introduced me to the amazing GNOME community. Looking back, a year ago, I was hesitant and apprehensive about even applying to GSoC. But this program has helped me grow and push through the boundaries of my technical and soft skills.

📌My Project

My project “GNOME HIG CSS Library” aims to create a CSS library for various GNOME web ecosystems, i.e. evaluating, designing, developing, testing, and documenting elements and components such as buttons, paragraphs, links, and headers, according to the revised GNOME Human Interface and Visual Identity Guidelines (HIG). GNOME’s website is the first place a user or contributor visits, so it is vital that it be welcoming to encourage more people to be a part of the community.

Goals

➢ Provide a library for developing congruent, consistent, and refreshed GNOME websites

➢ Improve the usability and accessibility of the website

➢ Shorter learning curves and easy-to-use, extensible library

Check out my previous blogs to know more about the project and my progress.

Contributions👩‍💻

I started by reading and understanding the GNOME Human Interface Guidelines. Next, I evaluated the existing websites both old and updated to understand how components and elements were designed and developed.

I created the mockups in Figma - a collaborative interface design tool. After the designs were evaluated and reviewed, I coded the elements and components and finally documented them.

1. Evaluated, designed, and implemented Links

I performed thorough research on how links should be designed to make them inclusive and accessible for everyone. Here’s the summary of my research - “How to design inclusive links”.

Merge Request: !53 (Merged)

Issue: #59 (Closed)

2. Updated lists with new designs

For this issue, old and new designs of the GNOME websites were analyzed to understand how the lists should be created. Factors such as color contrast, readability, and spacing were considered.

Merge Request: !57 (Merged)

Issue: #51 (Closed)

3. Updated button designs

Buttons have been revamped with several new colors, gradients, and animations. Several variants like squared, rounded, outlined, and dynamic-size were implemented for added flexibility during development.

Merge Request: !58 (Merged)

Issue: #66 (Closed)

Quick Links🔗

All Merge Requests

All Issues

Link to Project

GUADEC 2022 - GNOME’s biggest conference🤩

But perhaps the most exciting part of my GSoC experience was when I got a chance to present and talk about my project at GNOME’s largest conference, the GNOME Users and Developers European Conference (GUADEC). The conference brings hundreds of users, contributors, community members, and enthusiastic supporters together for a week of amazing talks and workshops✨

I absolutely enjoyed the workshops, seminars, and BOFs. I was truly fascinated by the work that has been going on. I would highly suggest everyone attend the conference. Check out some of the intriguing and informative talks that I enjoyed in my previous Blog!

Here’s the YouTube link to my Lightning Talk.

Future goals and Epilogue📝

There is still work to be done on the project before it can be deployed. Anyone interested in contributing can get in touch with me or my mentors. I’d be more than happy to help :)

Closing thoughts (for now😉)

I’m really grateful for the opportunity to work on a GNOME project with the awesome GNOME Community✨ None of this would have been possible without the support of my mentors Claudio Wunder and Caroline Henriksen. They have been exceptionally helpful and kind, always giving me constructive suggestions, guiding me, and helping me with the smallest of doubts.

I would love to continue being a part of GNOME and contribute, collaborate and learn more about GNOME’s initiatives, where I truly feel like a part of something larger than myself.

Pitivi GSoC: Final Report

Pitivi GSoC Final Report

I Love Pitivi

This project was aimed at porting Pitivi to GTK4; Pitivi used the GTK3 library and required the change.

The whole project is confined to a single MR: gitlab.gnome.org/GNOME/pitivi/-/merge_requests/442

The port required extensive changes and brainstorming. There were three main categories of changes -

  1. Component renamed
  2. Component gone but ideal replacement given
  3. Component gone with no real/simple replacement

Some code needed refactoring, of varying complexity and length; some changes were small, but some were very extensive. Take the GtkDialog run() removal, for example: some parts were easy to refactor to the callback-based system (see the sketch below), but some places still remain to be ported due to the extent of their reliance on the returned response.
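
To make the run() point concrete, here is a minimal, hypothetical PyGObject sketch of that refactor - illustrative only, not actual Pitivi code (save_changes is a made-up stand-in for whatever used the return value):

```python
# Hypothetical sketch of the GtkDialog.run() refactor - not Pitivi code.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def save_changes():
    print("saving...")  # stand-in for the code that used run()'s result

# GTK3 style (blocking; removed in GTK4):
#     response = dialog.run()
#     if response == Gtk.ResponseType.OK:
#         save_changes()
#     dialog.destroy()

# GTK4 style: present the dialog and continue in a "response" callback.
def on_response(dialog, response):
    if response == Gtk.ResponseType.OK:
        save_changes()
    dialog.destroy()

dialog = Gtk.Dialog(title="Save changes?")
dialog.add_buttons("_Cancel", Gtk.ResponseType.CANCEL,
                   "_Save", Gtk.ResponseType.OK)
dialog.connect("response", on_response)
dialog.present()
```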

After getting suggestions, I also listed the basic things I've done in the MR, for easy reference and tracking.

The last commit I did during the GSoC period is "Fix effectslibrary.ui search": gitlab.gnome.org/GNOME/pitivi/-/commit/0ad9503df5f49e3a3dbbf1f71bc3b87b706ad213?merge_request_iid=442

Work Done -

So far I have been able to port a lot of the codebase to GTK4. Most files are ported over, and with some local changes I can open the application, which has given me a chance to understand what I should focus on. Pitivi is hard to port due to its massive size, but I'm really happy that I was able to port it to the extent I did.

Progress and hardships in chronological order -

During the first update -

  • Work - Ported code to the GTK4 APIs that had been backported to GTK3+, allowing for an easier porting experience, since the application can still run at this stage.

  • Click Here for Full Blog 1

During the second update -

  • Work - Started the breaking phase, "enjoyed" the resulting errors, and ported quite a lot of code to GTK4

  • Hardships - Event controllers were a mess to understand (see the sketch after this list)

  • Click Here for Full Blog 2
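
For a taste of what the event-controller change involves, here is a minimal, hypothetical PyGObject sketch (not actual Pitivi code): widget signals like button-press-event are gone in GTK4, replaced by controller objects such as Gtk.GestureClick attached to the widget.

```python
# Hypothetical sketch of the GTK3 -> GTK4 event-controller migration -
# not Pitivi code.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

# GTK3 style (signals on the widget itself; removed in GTK4):
#     widget.connect("button-press-event", on_button_press)

# GTK4 style: create a controller and attach it to the widget.
def on_pressed(gesture, n_press, x, y):
    print(f"clicked {n_press}x at ({x:.0f}, {y:.0f})")

widget = Gtk.Label(label="click me")
click = Gtk.GestureClick()
click.connect("pressed", on_pressed)
widget.add_controller(click)
```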

During the third update -

  • Work - Still in the breaking phase; fixed most of the big errors and made significant progress.

  • Hardships - All the errors? :P Mostly event controllers, GtkContainer, GtkLayout, etc...

  • Click Here for Full Blog 3

During the fourth update -

  • Work - Finally able to run the application (with some caveats) and to solve big issues - the moment when things start to fall into place. Was also able to port most of the event controllers.

  • Hardships - Drag and drop, GtkDraw, UI nightmares, GtkLayout, GdkScreen (see the sketch after this list). I also fell ill and had to change universities, which reduced my productivity.

  • Click Here for Full Blog 4
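
The GtkDraw hardship refers to GTK4 dropping the GTK3 "draw" signal; here is a minimal, hypothetical PyGObject sketch (not actual Pitivi code) of the replacement, Gtk.DrawingArea.set_draw_func():

```python
# Hypothetical sketch of custom drawing in GTK4 - not Pitivi code.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

# GTK3 style (removed in GTK4):
#     area.connect("draw", on_draw)   # on_draw(widget, cr)

# GTK4 style: register a draw function that receives the cairo context
# plus the current width and height.
def draw_background(area, cr, width, height):
    cr.set_source_rgb(0.2, 0.2, 0.2)
    cr.rectangle(0, 0, width, height)
    cr.fill()

area = Gtk.DrawingArea()
area.set_draw_func(draw_background)
```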

Is the port complete?

No, there is still work remaining, but it is not much. Most things are done; what remains is pending either because of the amount of refactoring it would require or because it only needs cosmetic fixes.

Most of the application is ported over, but if you open it you won't be able to do much yet. The reason is that when one part of a file fails to run, it prevents the rest of the file from loading, which makes many otherwise-working features unusable (a sketch of this failure mode follows below). Currently, you can open the application; open, import, or create a new project; access the effects library and the media library (some work is still required); and do other miscellaneous things.
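
Here is a small, hypothetical Python sketch of that failure mode (not Pitivi code): one unported statement aborts loading the rest of its module, so everything defined after it disappears too.

```python
# Hypothetical sketch of the failure mode - not Pitivi code. One broken
# statement while a module loads means everything after it never gets
# defined, so otherwise-working features break too.
import types

source = """
def open_project():        # already ported, works fine
    return "ok"

raise RuntimeError("unported GTK3 call")   # one broken spot...

def export_project():      # ...means this is never defined
    return "ok"
"""

mod = types.ModuleType("pitivi_stub")
try:
    exec(source, mod.__dict__)
except RuntimeError as err:
    print("load stopped:", err)

print(hasattr(mod, "open_project"))    # True  - defined before the error
print(hasattr(mod, "export_project"))  # False - never reached
```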

The timeline still remains to be properly ported, due to the removal of GtkLayout and the new drag-and-drop event controllers; a sketch of the replacements follows below.
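
Here is a minimal, hypothetical PyGObject sketch (not actual Pitivi code) of the two replacements that port would need: Gtk.Fixed standing in for the removed GtkLayout, and a Gtk.DropTarget controller replacing the old drag-and-drop signals.

```python
# Hypothetical sketch of the GtkLayout and drag-and-drop replacements -
# not Pitivi code.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gdk, Gtk

# GtkLayout is gone in GTK4; Gtk.Fixed is the closest replacement for
# placing children at absolute coordinates.
canvas = Gtk.Fixed()
clip = Gtk.Button(label="clip")
canvas.put(clip, 120, 40)    # place at (x, y)
canvas.move(clip, 200, 40)   # reposition later

# Drag and drop is now an event controller rather than widget signals.
def on_drop(target, value, x, y):
    print(f"dropped {value} at ({x:.0f}, {y:.0f})")
    return True  # drop handled

drop_target = Gtk.DropTarget.new(Gdk.FileList, Gdk.DragAction.COPY)
drop_target.connect("drop", on_drop)
canvas.add_controller(drop_target)
```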

You cannot hope to do extensive refactoring work during GSoC, because it limits how much you can get done. At the start I was pushing commits of great quality, i.e. needing no changes later on, but that leaves less time remaining, so it is better to get a working version and push that. If you still have time before the deadline and the other work is done, then polish those commits further; otherwise, keep going, and once the work is done, focus on how to improve it further to get the best result.

At least that's what I've come to believe.

But it depends on the project's length, complexity and expectations.

My plans after GSoC -

GSoC is not what made me join open source; it is something I believe in and love being a part of. What GSoC gave me is the opportunity to work on such a big project with the support of mentors, and I appreciate all the love I've received. Thus, to conclude, this was just the beginning, and you will keep hearing from me, in a good way :D

GUADEC -

I also had an amazing time presenting my work during GUADEC; it gave me great confidence and was an awesome experience. Thanks, GNOME, for providing me the opportunity.

You can watch me present at - GUADEC YouTube

GUADEC

That will be it for this one, have a nice time everyone :)

September 09, 2022

GNOME Shell on mobile: An update

It’s been a while since the last update on GNOME Shell mobile, but there’s been a huge amount of progress during that time, which culminated in a very successful demo at the Prototype Fund Demo Day last week.

The current state of the project is that we have branches with all the individual patches for GNOME Shell and Mutter, which together comprise a pretty complete mobile shell experience. This includes all the basics we set out to cover during the Prototype Fund project (navigation gestures, screen size detection, app grid, on-screen keyboard, etc.) and some additional things we ended up adding along the way.

The heart of the mobile shell experience is the sophisticated 2D gesture navigation: The gestures to go to the overview vertically and switch horizontally between apps are fluid, interruptible, and multi-dimensional. This allows for navigation which is not only quick and ergonomic, but also intuitive thanks to an incredibly simple spatial model.

While the overall gesture paradigm we use is quite similar to what iOS and Android have, there’s one important difference: We have a single overview for both launching and switching, instead of two separate screens on iOS (home screen and multitasking) and three separate screens on Android (home screen, app drawer, multitasking).

This allows us to avoid the awkward “swipe, stop, and wait” gesture to go to multitasking that other systems rely on, as well as the confusing spatial model, where apps live both within the app icon and next to the home screen, and sometimes show up from the left when swiping… up?

Our overview is always a single swipe away, and allows instant access to both open apps and the app grid, without having to choose between launching and switching.

In case you’re wondering where the “overview” state with just the multitasking cards (like we had in previous iterations) went: after some experimentation and informal user research, we realized that it wasn’t really adding any value over the row of thumbnails in the app grid state. The smaller thumbnails are more than large enough to interact with, and more useful because you can see more of them at the same time.

We ported the shell search experience to a single-column layout for the narrower screen, which coincidentally is a direction we’re also exploring for the desktop search layout.

We completely replaced the on-screen keyboard gesture input, applying several tricks that OSKs on other mobile OSes employ, e.g. releasing the currently pressed key when another one is pressed. The heuristics for when the keyboard shows up are a lot more intuitive now and more in line with other mobile OSes.
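
As a rough illustration of the “release the current key when another is pressed” trick: GNOME Shell itself is written in JavaScript, so this is just a hypothetical Python sketch of the idea, not the actual code.

```python
# Hypothetical sketch of the OSK trick described above - not the actual
# GNOME Shell implementation (which is JavaScript).
class KeyTracker:
    """Commit a held key as soon as the next key goes down, so fast
    two-thumb typing never swallows characters."""

    def __init__(self, commit):
        self._commit = commit  # function that emits the key's character
        self._held = None      # key currently pressed, if any

    def press(self, key):
        if self._held is not None:
            self._commit(self._held)  # release the previous key early
        self._held = key

    def release(self, key):
        if self._held == key:
            self._commit(key)
            self._held = None
        # releases for already-committed keys are ignored

# Overlapping presses while typing "hi" still emit both letters:
typed = []
tracker = KeyTracker(typed.append)
tracker.press("h")
tracker.press("i")       # "h" is committed here, before its release
tracker.release("h")     # ignored; "h" was already committed
tracker.release("i")     # "i" committed
print("".join(typed))    # -> "hi"
```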

The keyboard layout was adapted to the narrower size and the emoji keyboard got a redesign. There’s also a very fancy new gesture for hiding the keyboard, and it automatically hides when scrolling the view.

The app grid layout was adapted to portrait sizes, including a new style for folders and lots of spacing and padding tweaks to make it work well for the phone use case. All the advanced re-ordering and organizing features the app grid already had before are of course available.

Luckily for us, Florian independently implemented the new Quick Settings this cycle. These work great on the phone layout, but on top of that we also added notifications to that same menu, to get a unified system menu you can open with a swipe from the top. This is not as mature as other parts of the mobile shell yet and needs further work, which we’ll hopefully get to soon as part of the planned notifications overhaul.

One interesting new feature here is that notifications can be swiped away horizontally to close, and notification bubbles can be swiped up to hide them.

Next steps

From a development perspective the next steps are primarily upstreaming all of the work done so far, starting with the new gesture API, which is used by many different parts of the mobile shell and will bring huge improvements to gestures on desktop as well. This upstreaming effort is going to require many separate merge requests that depend on each other, and will likely take most of the GNOME 44 cycle.

Beyond upstreaming what already exists there are many additional things we want or need to work on to make the mobile experience really awesome, including:

  • Calls on the lock screen (i.e. an API for apps to draw over the lock screen)
  • Emergency calls
  • Haptic feedback
  • PIN Unlock
  • Adapt the terminal keyboard layout for mobile; more custom keyboard layouts, e.g. for URLs
  • Notifications revamp, including grouping and better actions
  • Flashlight quick settings toggle
  • Workspace reordering in the overview

There are also a few rough edges visually which need lower-level changes to fix:

  • Rounded thumbnails in the overview
  • Transparent panel
  • A way for apps to draw behind the top and bottom bars and the keyboard (to allow for glitch-free keyboard showing/hiding)

Help with any of the above would be highly appreciated!

How to try it

In addition to further development work there’s also the question of getting testing images. While the current version is definitely still work in progress, it’s quite usable overall, so we feel it would make sense to start having experimental GNOME OS Nightly images with it. There’s also postmarketOS, who are working to add builds of the mobile shell to their repositories.

The hardware question

The main question we’re being asked by everyone is “What device do I have to get to start using this?”, which at this stage is especially important for development. Unfortunately there’s not a great answer to this right now.

So far we used a Pinephone Pro sponsored by the GNOME Foundation to allow for testing, but unfortunately it’s nowhere near ready in terms of hardware enablement and it’s unclear when it will be.

The original Pinephone is much further along in hardware enablement, but the hardware is too weak to be realistically usable. The Librem 5 is probably the best option in both hardware support and performance, but it still takes a long time to ship. There are a number of Android phones that sort of work, but there unfortunately isn’t one that’s fully mainlined, performant enough, and easy to buy.

Thanks to the Prototype Fund

All of this work was possible thanks to the Prototype Fund, a grant program of the German Federal Ministry of Education and Research (BMBF) supporting public interest software.