August 04, 2021

Avoid Head Spinning

inkscape

In a versatile tool like Inkscape, there are always features that aren’t for you. There are some that really get in your way though, like the recently added canvas rotation.

If you’re like me and constantly trigger it by accident (Blender’s zoom shortcut being Inkscape’s pan shortcut has something to do with it), you’ll be happy to learn it can be completely disabled. Sip on your favorite beverage and dive into the thick preferences dialog again (Edit > Preferences); this time you’re searching for Lock canvas rotation by default in the Interface section. One more thing that might throw you off: you need to restart Inkscape for the change to take effect.

If you don’t wish to go nuclear on the function, do note the rotation can also be reset from the bottom right of the status bar.

A quick update on libadwaita’s animation API

Last time we left off at the general API design. Since then I’ve been refactoring the existing animation-related code so we can reuse it for our public API. Part of that refactoring has been converting the current boxed-type Adwaita animation code into a GObject class. I’ve learned a lot about how GObject works under the hood by doing so, so I expect to be a lot quicker implementing the next milestones.

After that work, which is already merged, I started working on timed animations, moving functionality from the base class “adw-animation” into it, as well as starting to open up the API (which was completely private until now).
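
To make the shape of that refactoring concrete, here is a minimal sketch of what a boxed-type-to-GObject conversion looks like in C; the DemoAnimation names are made up for illustration and are not the actual libadwaita API:

/* demo-animation.h (hypothetical names, not the real libadwaita API) */
#include <glib-object.h>

#define DEMO_TYPE_ANIMATION (demo_animation_get_type ())
G_DECLARE_DERIVABLE_TYPE (DemoAnimation, demo_animation, DEMO, ANIMATION, GObject)

struct _DemoAnimationClass
{
  GObjectClass parent_class;
};

/* demo-animation.c: the boxed type's struct fields move into the
 * instance, and its free functions become methods */
G_DEFINE_TYPE (DemoAnimation, demo_animation, G_TYPE_OBJECT)

static void
demo_animation_class_init (DemoAnimationClass *klass)
{
}

static void
demo_animation_init (DemoAnimation *self)
{
}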

I quickly prototyped a demo page for said timed animations (which is highly WIP, from design to phrasing):

timed animations demo

August 03, 2021

Deescalating Tensions

inkscape

One of the great attributes of SVG is that its text nature lends itself to being easily version controlled. Inkscape uses SVG as its native format (and extends it using its private namespace).

Unfortunately, Inkscape uses the documents themselves to store things like canvas position and zoom state. This erases one of the benefits of easy version control, as every change instantly turns into an unsolvable conflict.

Luckily you can at least give up the ability to store the canvas position, for the greater good of not having merge conflicts, if you manage to convince your peers to change Inkscape’s defaults. Which is what this blog post is about :)

To change these defaults, you have to dive into the thick forest that is Inkscape’s preferences (Edit > Preferences). You’ll find them in the Interface > Windows section. The unfortunate default, Save and restore window geometry for each document, needs to be changed to either Don't save window geometry or Remember to use last window's geometry.

From now on, rebasing icon-development-kit won’t cause any more grey hair for you!

2021-08-03 Tuesday

  • Up early, poked at some code; call with Pranam, then Kendy, then Cor, lunch. Bit of hackery on a performance regression, more calls.
  • Liked Matthew's post on Copilot - sharing ideas, even those codified in neural networks (repeatedly counting the number of lines in an in-lined, C, doubly-linked list manipulation method to check it used to be some sort of fun); so I tend to agree - that is, unless we train a neural network to type out whole programs of existing (C) FOSS code verbatim: also possible, but not proposed. Looking forward to having time to play with Copilot myself.
  • Chat with Gokay.

August 02, 2021

2021-08-02 Monday

  • Slept poorly, up super-early; quiet time - started chewing the E-mail backlog. Caught up with Kendy, planning call, lunch with the babes, patch review, admin bits.
  • Call with Mert, who has made the online auto-typing benchmark hugely faster; got some nice profiling data from that; lush.
  • Tried to reproduce the family bathroom plumbing problem (having cut a hole in the wall on Saturday to get at the pipes) - not reproducible - bother; ordered more plumbing bits - let's see.

More on input

I’ve written about input before (here and here), and more recently, Carlos and I gave a GUADEC talk about input-related topics (slides). In those writings, I have explained how dead keys work, and how you can type

<dead_acute> A

to produce an Á character.

But input is full of surprises, and I’ve just learned about an alternative to dead keys that is worth presenting here.

Background

First, let’s recap what happens when you send the <dead_acute> A sequence to GTK.

We receive the first key event and notice that it is a dead key, so we stash it in what we call the preedit and wait for the next event. When the next key arrives, and it represents a letter (more precisely, is in one of the Unicode categories Ll, Lu, Lt, Lm or Lo), we look up the Unicode combining mark matching the dead_acute, which is U+0301 COMBINING ACUTE ACCENT, and then we flip the sequence around. So the text that gets committed is

A <combining acute>

The reason that we have to flip things around is that combining marks go after the base character, while dead keys go before.

This works, but it is a bit unintuitive for writing multi-accented characters. You have to think about the accents you want to apply from top to bottom, since they get applied backwards. For example, to create an Â with an acute accent on top, you type

<dead_acute> <dead_circumflex> A

which then gets flipped around and ends up as:

A <combining circumflex> <combining acute>
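
This is not what GTK does internally, but if you want to convince yourself that the flipped sequence really is equivalent to a single accented character, GLib can compose it for you. A small sketch, assuming a GLib development setup:

#include <glib.h>

int
main (void)
{
  /* "A" followed by U+0302 COMBINING CIRCUMFLEX ACCENT and
   * U+0301 COMBINING ACUTE ACCENT, encoded as UTF-8 */
  const char *decomposed = "A\xCC\x82\xCC\x81";

  /* NFC composition folds the sequence into the single
   * precomposed character U+1EA4 */
  char *composed = g_utf8_normalize (decomposed, -1,
                                     G_NORMALIZE_DEFAULT_COMPOSE);

  g_print ("%s\n", composed);
  g_free (composed);

  return 0;
}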

A better way

To me, it feels much more natural to specify the accents in this order:

  1. give me an A
  2. then put a ^ on top
  3. and then put an ´ on top

The good news is: we can do just that! Keyboard layouts can use any Unicode character as keysyms, so we can just use the combining marks directly, without the detour through dead keys.

For example, the “English (US,  Intl, AltGr Unicode combining)” layout contains keys for combining marks. A slight hurdle to using this layout is that it does not show up in the GNOME Settings keyboard panel by default. You have to run

gsettings set org.gnome.desktop.input-sources show-all-sources true

to make it show up.

The combining marks in this layout are placed in a “3rd level”. To use them, you need to set up a “3rd level chooser” key. In the keyboard panel, this is called the “Alternative Characters Key”. A common choice is the right Alt key.

After all these preparations, you can now type A Alt+^ Alt+' to get an Â with an ´ on top. Neat!

Career Goals

For this week’s Outreachy blog post, I’ll be talking about my personal career goals, so it’ll be less GNOME/Librsvg-focused than my recent posts.

I’m looking for work!

The end of my Outreachy internship is fast approaching, and so after August 24th I’ll be available to work either full or part time. I’m currently based in Kansas, US, and I’m open to remote positions based anywhere in the world, along with relocation within the US or internationally.

Who am I?

With that bit out of the way, who am I? What experiences do I have? Do I have to write rhetorical questions? I don’t, but it’s fun. To begin, I’ve been an Outreachy intern working with the GNOME Foundation on Librsvg for this cohort, and at the end of it I’ll have 3 months of Rust programming and remote work experience.

What experiences do I have?

In the realm of programming and along with Rust, I have experience using C# to write a video game in university (see things unsaid on madds.hollandart.io), and over the years I’ve used Java, Python, Lua, and PHP.

Aside from the aforementioned programming and remote experience, I also have the experience of using Linux for 9 years as my daily driver for desktop systems and 2 years for my personal server (including the blog you’re reading this on).

Tech has been a constant part of my life. I took a liking to it early on, so I have been the one others called for help for years, both in my jobs and at home. In this, I have learned to listen closely to others, to figure out what issues they’re having and then come up with a solution that fits their needs, whether that’s picking laptops for a 1:1 initiative in high school, or troubleshooting multiple projector systems in theatres. I get excited when I get a new challenge; it’s a chance for me to delve into a topic or new technology I may only know a little about, then use that knowledge to help someone. What this means is that I’ve spent the past few years getting better at learning new things quickly, then distilling that information down to pass on to others.

What about school?

I graduated with a bachelor’s degree in film and media studies, a minor in Japanese language (I can understand to about JLPT N3, or intermediate, level), and a global awareness certificate. I earned several departmental awards during my time in university: two for my 360 animated project Feeling Green (see madds.hollandart.io), one for my service to the department, and another for my potential in VFX.

During university I studied abroad, worked in several research positions, at a theatre, as an intern for the local Women in Film and TV group, and finally as part of the staff for the film department itself. These jobs were all technical in some way: I had to research VR and computational linguistics in my research positions, learn the wide array of film equipment and its setup and usage in my film department job, and learn how to use a lighting board and its programming language at the theatre. I sought out technical jobs where I would be challenged and pushed to learn new things while having fun using interesting pieces of technology, and Outreachy is where I ended up.

Outreachy?

If you don’t know, Outreachy is an internship program which helps get underrepresented people into open source technology, so why am I a part of it? Well, I’m LGBT and I’m neurodivergent; I understand the world from a fundamentally different place than the majority of cis straight white men, and I want to bring my unique perspective to a team and project. As an example, I literally see differently, since I have Visual Snow Syndrome: the visual style of a webpage or application (repeating high-contrast areas such as stripes, which produce a strong visual vibrating effect) can render it nearly unusable depending on how intense the effect is. That’s a consideration for UX design that doesn’t matter for most, but can make some things inaccessible for me.

What job am I looking for?

Right now, I’m looking for a job where I can contribute to and drive forward a really cool project while honing my Rust skills and learning new ones. I am interested in systems programming, similar to what I’ve gotten a taste of working with Librsvg, so I would love to start there. I’m not limited though, as many, many other things are interesting to me too, like servers, system administration, art tools development, VR / video games programming, and so much more.

Thank you!

If you would like to get in touch, send me an email: madds@hollandart.io

Or DM me on Twitter @madds_io

Introducing the GNOME Web Canary flavor

Today I am happy to unveil GNOME Web Canary which aims to provide bleeding edge, most likely very unstable builds of Epiphany, depending on daily builds of the WebKitGTK development version. Read on to know more about this.

Until recently the GNOME Web browser was available for end-users in two flavors. The primary, stable release provides the vanilla experience of the upstream Web browser. It is shipped as part of the GNOME release cycle and in distros. The second flavor, called Tech Preview, is oriented towards early testers of GNOME Web. It is available as a Flatpak, included in the GNOME nightly repo. The builds represent the current state of the GNOME Web master branch, and the WebKitGTK version they link to is the one provided by the GNOME nightly runtime.

Tech Preview is great for users testing the latest development of GNOME Web, but what if you want to test features that are not yet shipped in any WebKitGTK version? Or what if you are a GNOME Web developer and want to implement new features in Web that depend on an API that has not yet been released in WebKitGTK?

Historically, the answer was simply “you can build WebKitGTK yourself”. However, this requires some knowledge and a good build machine (or a lot of patience). Even as WebKit developer builds have become easier to produce thanks to the Flatpak SDK we provide, you would still need to somehow make Epiphany detect your local build of WebKit. Other browsers offer nightly or “Canary” builds which don’t have such requirements. This is exactly what Epiphany Canary aims to do, without you having to build WebKit yourself!

A brief interlude about the term: Canary typically refers to highly unstable builds of a project; they are named after sentinel species, canary birds having been taken into mines to warn coal miners of carbon monoxide. Chrome, for instance, has been providing Canary builds of its browser for a long time. These builds are useful because they allow early testing by end-users, and hence potentially early detection of bugs that might not have been caught by the usual automated test harness that buildbots and CI systems run.

To similar ends, a new build profile and icon were added to Epiphany, along with a new Flatpak manifest. Everything is now nicely integrated in the Epiphany project CI. WebKit builds are already done for every upstream commit using the WebKit Buildbot. As those builds are made with the WebKit Flatpak SDK, they can be reused elsewhere (x86_64 is the only arch supported for now) as long as the WebKit Flatpak platform runtime is used as well. Build artifacts are saved, compressed, and uploaded to a web server kindly hosted and provided by Igalia. The GNOME Web CI now has a new job, called canary, that generates a build manifest which installs the WebKitGTK build artifacts in the build sandbox, where they can be detected during the Epiphany Flatpak build. The resulting Flatpak bundle can be downloaded and locally installed. The runtime environment is the one provided by the WebKit SDK though, so not exactly the same as the one provided by GNOME Nightly.

Back to the two main use-cases, and who would want to use this:

  • You are a GNOME Web developer looking for CI coverage of some shiny new WebKitGTK API you want to use from GNOME Web. Every new merge request on the GNOME Web Gitlab repo now produces installable Canary bundles, that can be used to test the code changes being submitted for review. This bundle is not automatically updated though, it’s good only for one-off testing.
  • You are an early tester of GNOME Web, looking for bleeding edge version of both GNOME Web and WebKitGTK. You can install Canary using the provided Flatpakref. Every commit on the GNOME Web master branch produces an update of Canary, that users can get through the usual flatpak update or through their flatpak-enabled app-store.

Update:

Due to an issue in the Flatpakref file, the WebKit SDK flatpak remote is not automatically added during the installation of GNOME Web Canary. So it needs to be manually added before attempting to install the flatpakref:

$ flatpak --user remote-add --if-not-exists webkit https://software.igalia.com/flatpak-refs/webkit-sdk.flatpakrepo
$ flatpak --user install https://nightly.gnome.org/repo/appstream/org.gnome.Epiphany.Canary.flatpakref

As you can see in the screenshot below, the GNOME Web branding is clearly modified compared to the other flavors of the application. The updated logo, kindly provided by Tobias Bernard, has some yellow tones and the Tech Preview stripes. Also the careful reader will notice the reported WebKitGTK version in the screenshot is a development build of SVN revision r280382. Users are strongly advised to add this information to bug reports.

As WebKit developers we are always interested in getting users’ feedback. I hope this new flavor of GNOME Web will be useful for both GNOME and WebKitGTK communities. Many thanks to Igalia for sponsoring WebKitGTK build artifacts hosting and some of the work time I spent on this side project. Also thanks to Michael Catanzaro, Alexander Mikhaylenko and Jordan Petridis for the reviews in Gitlab.

GSoC Project update part II

For the previous week’s update check out my last post.

Week 4

While reading the documentation, I came across a bug that was leading to broken links. After some debugging and testing, I was able to fix the bug. It was due to a missing configuration in the documentation engine.

Issues: #317

Merge Requests: !446

Week 5

Resolved all the threads and marked the MR as ready for merge. After a few more changes the MR was merged, and with this one of the two project goals was achieved.

Issue: #158

Merge Requests: !340

Week 6

I began working towards my second milestone. I cloned the nautilus repository and spent a few days understanding the codebase. I was also working on my GUADEC presentation (hopefully I’ll write a separate blog post about it :-).

Week 7

Opened an MR for search by creation time in nautilus, and while writing tests for nautilus I discovered a bug and fixed it.

Issues: #1933

Merge Requests: !693, !697

August 01, 2021

Documenting GNOME for developers

You may have just now noticed that the GNOME developers documentation website has changed after 15 years. You may also have noticed that it contains drastically less content than it used to. Before you pick up torches and pitchforks, let me give you a short tl;dr of the changes:

  • Yes, this is entirely intentional
  • Yes, I know that stuff has been moved
  • Yes, I know that old URLs don’t work
  • Yes, some redirections will be put in place
  • No, we can’t go back

So let’s recap a bit the state of the developers documentation website in 2021, for those who weren’t in attendance at my GUADEC 2021 presentation:

  • library-web is a Python application, which started as a Summer of Code project in 2006, whose job was to take Autotools release tarballs, explode them, fiddle with their contents, and then publish files on the gnome.org infrastructure.
  • library-web relies heavily on Autotools and gtk-doc.
  • library-web does a lot of pre-processing of the documentation to rewrite links and CSS from the HTML files it receives.
  • library-web is very much a locally sourced, organic, artisanal pile of hacks that revolve very much around the GNOME infrastructure from around 2007-2009.
  • library-web is incredibly hard to test locally, even when running inside a container, and the logging is virtually non-existent.
  • library-web is still running on Python 2.
  • library-web is entirely unmaintained.

That should cover the infrastructure side of things. Now let’s look at the content.

The developers documentation is divided in four sections:

  • a platform overview
  • the Human Interface guidelines
  • guides and tutorials
  • API references

The platform overview is slightly out of date; the design team has been reviewing the HIG and using a new documentation format; the guides and tutorials still include GTK1 and GTK2 content, or how to port GNOME 2 applications to GNOME 3, or how to write a Metacity theme.

This leaves us with the API references, which are a grab bag of miscellaneous things, listed by version numbers. Outside of the C API documentation, the only other references hosted on developer.gnome.org are the C++ bindings—which, incidentally, use Doxygen and when they aren’t broken by library-web messing about with the HTML, they have their own franken-style mash up of gtkmm.org and developer.gnome.org.

Why didn’t I know about this?

If you’re asking this question, allow me to be blunt for a second: the reason you never noticed that the developers documentation website was broken is that you never actually experienced it for its intended use case. Most likely, you either just looked in a couple of well known places and never ventured outside of those; and/or you are a maintainer, and you never literally cared how things worked (or didn’t work) after you uploaded a release tarball somewhere. Like all infrastructure, it was somebody else’s problem.

I completely understand that we’re all volunteers, and that things that work can be ignored because everyone has more important things to think about.

Sadly, things change: we don’t use Autotools (that much), which means release archives do not contain the generated documentation any more; this means library-web cannot be updated, unless somebody modifies the configuration to look for a separate documentation tarball that the maintainer has to generate manually and upload in a magic location on the gnome.org file server—this has happened for GTK4 and GLib for the past two years.

Projects change the way they lay out the documentation, or gtk-doc changes something, and that causes library-web to stop extracting the right files; you can look at the ATK reference for the past year and a half for an example.

Projects bump up their API, and now the cross-referencing gets broken, like the GTK3 pages linking GDK2 types.

Finally, projects decide to change how their documentation is generated, which means that library-web has no idea how to extract the HTML files, or how to fiddle with them.

If you’re still using Autotools and gtk-doc, haven’t done an API bump in 15 years, and all you care about is copying a release archive to the gnome.org infrastructure, I’m sure all of this will come as a surprise, and I’m sorry you’re just now being confronted with a completely broken infrastructure. Sadly, the infrastructure was broken for everybody else long before this point.

What did you do?

I tried to make library-web deal with the changes in our infrastructure. I personally built and uploaded multiple versions of the documentation for GLib (three different archives for each release) for a year and a half; I configured library-web to add more “extra tarball” locations for various projects; I tried making library-web understand the new layout of various projects; I even tried making library-web publish the gi-docgen references used by GTK, Pango, and other projects.

Sadly, every change broke something else—and I’m not just talking about the horrors of the code base. As library-web is responsible for determining the structure of the documentation, any change to how the documentation is handled leads to broken URLs, broken links, or broken redirections.

The entire castle of cards needed to go.

Which brings us to the plan.

What are you going to do?

Well, the first step has been made: the new developer.gnome.org website does not use library-web. The content has been refreshed, and more content is on the way.

Again, this leaves the API references. For those, there are two things that need to happen—and are planned for GNOME 41:

  1. all the libraries that are part of the GNOME SDK run time, built by gnome-build-meta, must also build their documentation, which will be published as part of the org.gnome.Sdk.Docs extension; the contents of the extension will also be published online.
  2. every library that is hosted on gnome.org infrastructure should publish their documentation through their CI pipeline; for that, I’m working on a CI template file and image that should take care of the easy projects, and will act as model for projects that are more complicated.

I’m happy to guide maintainers to deal with that, and I’m also happy to open merge requests on various projects.

In the meantime, the old documentation is still available as a static snapshot, and the sysadmins are going to set up some redirections to bridge us from the old platform to the new—and hopefully we’ll soon be able to redirect to each project’s GitLab pages.

Can we go back, please?

Sadly, since nobody has ever bothered picking up the developers documentation when it was still possible to incrementally fix it, going back to a broken infrastructure isn’t going to help anybody.

We also cannot keep the old developer.gnome.org and add a new one, of course; now we’d have two websites, one of which broken and unmaintained and linked all over the place, and a new one that nobody knows exists.

The only way is forward, for better or worse.

What about Devhelp?

Some of you may have noticed that I picked up the maintenance of Devhelp, and landed a few fixes to ensure that it can read the GTK4 documentation. Outside of some visual refresh for the UI, I am also working on making it load the contents of the org.gnome.Sdk.Docs run time extension, which means it’ll be able to load all the core API references. Ideally, we’re also going to see a port to GTK4 and libadwaita, as soon as WebKitGTK for GTK4 is more widely available.

July 31, 2021

Looking at building O3DE with Meson, part II

After the first post, some more time was spent on building O3DE with Meson. This is the second and most likely last post on the subject. Currently the repository builds all of AzCore's basic code and a notable chunk of its Qt code. Tests are not built and there are some caveats on the existing code, which will be discussed below. The rest of the conversion would most likely be just more of the same and would probably not provide that many new things to tackle.

Code parts and dependencies

Like most projects, the code is split into several independent modules like core, testing, various frameworks and so on. The way Meson is designed is that you traverse the source tree one directory at a time. You enter it, do something, possibly recurse into subdirectories and then exit it. Once exited you can never again return to the directory. This imposes some extra limitations on project structure, such as making circular dependencies impossible, but also makes it more readable.

This is almost always what you want. However there is one exception that many projects have: the lowest layer library has no internal dependencies, the unit testing library uses that library and the tests for the core library use the unit testing library. This is not a circular dependency as such, but if the unit tests are defined in the same subdir as the core library, this causes problems as you can't return to it. This needs to be broken in some way, like the following:

subdir('AzCore')
subdir('AzTest')
subdir('AzCore/tests')

Code generation

Most large projects have a code generator. O3DE is no exception. Its code generator is called AutoGen and it's a Python script that expands XML using Jinja templates. What is strange is that it is only used in three places, only one of which is in the core code. Further, if you look at the actual XML source file it only has a few definitions. This seems like a heavyweight way to go about it. Maybe someone could summon Jason Turner to constexprify it to get rid of this codegen.

This part is not converted, I just commented out the bits that were using it.

Extra dependencies

There are several other dependencies used that seem superfluous. As an example the code uses a standalone library for MD5, but it also uses OpenSSL, which provides an MD5 implementation. As for XML parsers, there are three, RapidXML, Expat and the one from Qt (though the latter is only used in the editor).

Editor GUI

Almost all major game engines seem to write their own GUI toolkits from scratch. Therefore it was a bit surprising to find out that O3DE has gone all-in on Qt. This makes it easy to use Meson's builtin Qt 5 support, though it is not without some teething issues. First of all the code has been set up so that each .cpp file #includes the moc file generated from its header:

#include "Components/moc_DockBarButton.cpp"

Meson does things differently and builds the moc files automatically so users don't have to do things like this. They are also written in a different directory than the one the existing configuration uses, so this include could not work: the path is incorrect. The #include could be removed altogether, but since you probably need to support both at the same time (due to, for example, a transition period) you'd need to do something like this:

#ifndef MESON_BUILD
#include "Components/moc_DockBarButton.cpp"
#endif

What is more unfortunate is that the code uses Qt internal headers. For some reason or another I could not make them work properly as there were missing private symbols when linking. I suspect that this is because distro Qt libraries have hidden those symbols so they are not exported. As above I just commented these out.

The bigger problem is that O3DE seems to have custom patches in its version of Qt. At least it refers to style enum values that do not exist, and googling for the exact string produces zero relevant matches. If this is the case then the editor can not be used with official Qt releases. Further, if said patches exist, then they would need to be provided to the public as per the LGPL, since the project is providing prebuilt dependency binaries. As mentioned in the first blog post, the project does not provide original sources for their patched dependencies or, if they do, finding them is not particularly easy.

What next?

Probably nothing. It is unlikely that upstream would switch from CMake to Meson so converting more of the code would not be particularly beneficial. The point of this experiment was to see if Meson could compile O3DE. The answer for that is yes, there have not been any major obstacles. The second was to see if the external dependencies could be provided via Meson's Wrap mechanism. This is also true, with the possible exception of Qt.

The next interesting step would be to build the code on multiple platforms. The biggest hurdle here is the dependency on OpenSSL. Compiling it yourself is a bear, and there is not a Wrap for it yet. However, once this merge request is merged, you should be able to build OpenSSL as a Meson subproject transparently. Then you could build the core fully from source on any platform.

Portfolio 0.9.11

Catching up

The last couple of months have been particularly busy:

  • A few more releases down the road, the portal permissions support in Flatseal has finally matured. I took the opportunity to fix a crash in Flatpak’s permission store and to complete its permissions API, to pave the way for existing and future Flatseal-like front ends.
  • I took a short break to take care of my twenty five Sugar applications. Released a new version of the BaseApp, and updated every application to the latest upstream release and GNOME runtime.
  • On a personal note, I have been mentoring some brilliant interns at work, which is a refreshing experience after so many months of lockdown due to COVID.

What’s new in Portfolio?

In the visuals department, this new release brings a refreshed icon by @jimmac, which looks fantastic and takes it closer to the modern art style in GNOME.

Regarding features, well, there’s quite a lot. The most noticeable one is the Trash folder.

One of my goals for Portfolio is that, for the little it does, it should just work™. It shouldn’t matter what the user’s environment might be or how it’s being distributed. This imposes some technical challenges and, I imagine, is one of the reasons why a few file managers available on Flathub don’t provide feature parity with their non-Flatpak versions.

Because of this, I prototyped different Trash folder implementations. Initially, I went for the right way™ and simply relied on gvfsd. Sadly, there were a few issues with the sandbox interaction that prevented me from fulfilling my goal. Therefore, I stuck to my own implementation of freedesktop’s Trash spec. I must admit, though, that I really enjoy reading these specs for Portfolio.

But there’s more!

A common issue among users of the Flatpak version is that they can’t see the real root directory. This is understandably confusing. Therefore, Portfolio now includes a Host device shortcut, as a best-effort attempt to mitigate this.

If you have been using Portfolio on devices with slow storage, you have probably seen that loading screen a few times when opening folders. I will eventually get around to something more elaborate but, for the time being, I reduced these load times with a bit of caching.

Among other improvements, there are now proper notifications when removing devices, filtering and sorting options will persist between sessions, and the files view will restore its scroll position to the previous directory when navigating back.

As for the bugs, kudos to @craftyguy for fixing a couple that prevented Portfolio from running on postmarketOS.

Last but never least! Thanks to @lqs01, @AsciiWolf, @eson57, @Vistaus, @rffontenelle and @cho2 for helping me with translations.

July 29, 2021

How much effort would it take to convert OpenSSL's Perl source code generators to Python?

There is an ongoing discussion about writing Meson build definitions for OpenSSL so it can be added to the WrapDB and built transparently as a subproject. One major issue is that OpenSSL generates a lot of assembly during build time with Perl. Having a Perl dependency would be bad, but shipping pregenerated source files would also be bad. Having "some pregenerated asm" that comes from "somewhere" would understandably be bad in a crypto library.

The obvious third option would be to convert the generator script from Perl to Python. This is not the first time this has been proposed and the counterargument has always been that said conversion would take an unreasonable amount of effort and could never be done. Since nobody has tried to do the conversion we don't really know whether that claim is accurate or not. I converted the x86_64 AES asm generator to see how much work it would actually take. The code is here.

The code setup

The asm generator has two different levels of generators. First a Perl script generates asm in a specific format and after that a second Perl script converts it to a different asm type if needed (Nasm, At&T, Intel etc). The first script can be found here and the second one is here. In this test only the first script was converted.

At first sight the task seems formidable. The generator script is 2927 lines of code. Interestingly the asm file it outputs is only 2679 lines. Thus the script is not only larger than its output, it is also less understandable than the thing it generates both because it is mishmash of text processing operations and because it is written in Perl.

Once you get over the original hump, things do get easier. Basically you have a lot of strings with statements that look like this:

movzb `&lo("$s2")`,$acc0

This means that the string needs to be evaluated so that $s2 and $acc0 are replaced with the values of variables s2 and acc0. The backticks mean that you have to then call the lo function with the given value and substitute that in the output string. This is very perlistic and until recently would not have been straightforward to do in Python. Fortunately now we have f-strings so that becomes simply:

f'movzb {lo(s2)},{acc0}'

With that worked out the actual problem is no longer hard, only laborious. First you replace all the Perl multiline strings with Python equivalents, then change the function declarations from something like this:

sub enctransform_ref()
{ my $sn = shift;

to this:

def enctransform_ref(sn):

and then it's a matter of repeated search-and-replaces for all the replacement variables.

The absolute best thing about this is that it is almost trivial to verify that the conversion has been done without errors. First you run the original script and store the generated assembly. Then you run your own script and compare the output. If they are not identical then you know exactly where the bug is. It's like having a unit test for every print statement in the program.

How long did it take?

From scratch the conversion took less than a day. Once you know how it's done a similar conversion would take maybe a few hours. The asm type converter script seems more complicated so would probably take longer.

A reasonable port would contain these conversions for the most popular algorithms to the most popular CPU architectures (x86, x86_64, arm, aarch64). It would require a notable amount of work but it should be measured in days rather than months or years. I did browse through some of the other asm files and it seems that they have generators that work in quite different ways. Converting them might take more or less work, but probably it would still be within an order of magnitude.

July 28, 2021

It's templates all the way down - part 4

Part 1, Part 2, Part 3

After getting thoroughly nerd-sniped a few weeks back, we now have FreeBSD support through qemu in the freedesktop.org ci-templates. This is possible through the qemu image generation we have had for quite a while now. So let's see how we can easily add a FreeBSD VM (or other distributions) to our gitlab CI pipeline:


.freebsd:
  variables:
    FDO_DISTRIBUTION_VERSION: '13.0'
    FDO_DISTRIBUTION_TAG: 'freebsd.0' # some value for humans to read

build-image:
  extends:
    - .freebsd
    - .fdo.qemu-build@freebsd
  variables:
    FDO_DISTRIBUTION_PACKAGES: "curl wget"

Now, so far this may all seem quite familiar. And indeed, this is almost exactly the same process as for normal containers (see Part 1), the only difference is the .fdo.qemu-build base template. Using this template means we build an image babushka: our desired BSD image is actually a QEMU RAW image sitting inside another generic container image. That latter image only exists to start the QEMU image and set up the environment if need be; you don't need to care which distribution it runs as (Fedora for now).

Because of the nesting, we need to handle this accordingly in the script: tag of the actual test job - we need to start the image and make sure our jobs are actually built within it. The templates set up an ssh alias "vm" for this, and the vmctl script helps to do things on the vm:


test-build:
  extends:
    - .freebsd
    - .fdo.distribution-image@freebsd
  script:
    # start our QEMU image
    - /app/vmctl start

    # copy our current working directory to the VM
    # (this is a yaml multiline command to work around the colon)
    - |
      scp -r $PWD vm:

    # Run the build commands on the VM and if they succeed, create a .success file
    - /app/vmctl exec "cd $CI_PROJECT_NAME; meson builddir; ninja -C builddir" && touch .success || true

    # Copy results back to our run container so we can include them in artifacts:
    - |
      scp -r vm:$CI_PROJECT_NAME/builddir .

    # kill the VM
    - /app/vmctl stop

    # Now that we have cleaned up: if our build job before
    # failed, exit with an error
    - [[ -e .success ]] || exit 1

Now, there's a bit to unpack but with the comments above it should be fairly obvious what is happening. We start the VM, copy our working directory over and then run a command on the VM before cleaning up. The reason we use touch .success is simple: it allows us to copy things out and clean up before actually failing the job.

Obviously, if you want to build any other distribution you just swap the freebsd out for fedora or whatever - the process is the same. libinput has been using fedora qemu images for ages now.

July 27, 2021

Final Types

The type system at the base of our platform, GType, has various kinds of derivability:

  • simple derivability, where you’re allowed to create your derived version of an existing type, but you cannot derive your type any further;
  • deep derivability, where you’re allowed to derive types from other types;

An example of the first kind is any type inheriting from GBoxed, whereas an example of the second kind is anything that inherits from GTypeInstance, like GObject.

Additionally, any derivable type can be marked as abstract; an abstract type cannot be instantiated, but you can create your own derived type which may or may not be “concrete”. Looking at the GType reference documentation, you’ll notice various macros and flags that exist to implement this functionality—including macros that were introduced to cut down the boilerplate necessary to declare and define new types.

The G_DECLARE_* family of macros, though, introduced a new concept in the type system: a “final” type. Final types are leaf nodes in the type hierarchy: they can be instantiated, but they cannot be derived any further. GTK 4 makes use of this kind of types to nudge developers towards composition, instead of inheritance. The main problem is that the concept of a “final” type is entirely orthogonal to the type system; there’s no way to programmatically know that a type is “final”—unless you have access to the introspection data and start playing with heuristics about symbol visibility. This means that language bindings are unable to know without human intervention if a type can actually be inherited from or not.

In GLib 2.70 we finally plugged the hole in the type system, and we introduced the G_TYPE_FLAG_FINAL flag. Types defined as “final” cannot be derived any further: as soon as you attempt to register your new type that inherits from a “final” type, you’ll get a warning at run time. There are macros available that will let you define final types, as well.

Thanks to the “final” flag, we can also include this information in the introspection data; this will allow language bindings to warn you if you attempt to inherit from a “final” type, likely using language-native tools, instead of getting a run time warning.

If you are using G_DECLARE_FINAL_TYPE in your code you should bump up your GObject dependency to 2.70, and switch your implementation from G_DEFINE_TYPE and friends to G_DEFINE_FINAL_TYPE.
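
For reference, here is a minimal sketch of a final type declared and defined with the new macro; the MyWidgetCache name is just an example:

/* my-widget-cache.h */
#include <glib-object.h>

#define MY_TYPE_WIDGET_CACHE (my_widget_cache_get_type ())
G_DECLARE_FINAL_TYPE (MyWidgetCache, my_widget_cache, MY, WIDGET_CACHE, GObject)

/* my-widget-cache.c */
struct _MyWidgetCache
{
  GObject parent_instance;
};

/* G_DEFINE_FINAL_TYPE (GLib >= 2.70) registers the type with
 * G_TYPE_FLAG_FINAL, so attempts to derive from it warn at run time */
G_DEFINE_FINAL_TYPE (MyWidgetCache, my_widget_cache, G_TYPE_OBJECT)

static void
my_widget_cache_class_init (MyWidgetCacheClass *klass)
{
}

static void
my_widget_cache_init (MyWidgetCache *self)
{
}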

libinput and hold gestures

Thanks to the work done by José Expósito, libinput 1.19 will ship with a new type of gesture: Hold Gestures. So far libinput supported swipe (moving multiple fingers in the same direction) and pinch (moving fingers towards each other or away from each other). These gestures are well-known, commonly used, and familiar to most users. For example, GNOME 40 has recently increased its use of touchpad gestures to switch between workspaces, etc. But swipe and pinch gestures require movement, so it was not possible (for callers) to detect fingers on the touchpad that don't move.

This gap is now filled by Hold gestures. These are triggered when a user puts fingers down on the touchpad, without moving the fingers. This allows for some new interactions and we had two specific ones in mind: hold-to-click, a common interaction on older touchscreen interfaces where holding a finger in place eventually triggers the context menu. On a touchpad, a three-finger hold could zoom in, or do dictionary lookups, or kill a kitten. Whatever matches your user interface most, I guess.

The second interaction was the ability to stop kinetic scrolling. libinput does not actually provide kinetic scrolling, it merely provides the information needed in the client to do it there: specifically, it tells the caller when a finger was lifted off a touchpad at the end of a scroll movement. It's up to the caller (usually: the toolkit) to implement the kinetic scrolling effects. One missing piece was that while libinput provided information about lifting the fingers, it didn't provide information about putting fingers down again later - a common way to stop scrolling on other systems.

Hold gestures are intended to address this: a hold gesture triggered after a flick with two fingers can now be used by callers (read: toolkits) to stop scrolling.

Now, one important thing about hold gestures is that they will generate a lot of false positives, so be careful how you implement them. The vast majority of interactions with the touchpad will trigger some movement - once that movement hits a certain threshold the hold gesture will be cancelled and libinput sends out the movement events. Those events may be tiny (depending on touchpad sensitivity) so getting the balance right for the aforementioned hold-to-click gesture is up to the caller.
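
For callers, handling these events should look much like the existing gesture events. A rough sketch, assuming the new event types land as LIBINPUT_EVENT_GESTURE_HOLD_BEGIN and LIBINPUT_EVENT_GESTURE_HOLD_END and that your compositor or toolkit already runs a libinput event loop:

#include <libinput.h>

/* Toolkit-specific stand-ins for this example */
static void stop_kinetic_scrolling (void) { }
static void show_context_menu (void) { }

static void
handle_gesture_event (struct libinput_event *ev)
{
  struct libinput_event_gesture *gesture;

  switch (libinput_event_get_type (ev)) {
  case LIBINPUT_EVENT_GESTURE_HOLD_BEGIN:
    gesture = libinput_event_get_gesture_event (ev);
    /* e.g. two fingers resting on the pad after a flick:
     * stop any ongoing kinetic scrolling */
    if (libinput_event_gesture_get_finger_count (gesture) == 2)
      stop_kinetic_scrolling ();
    break;
  case LIBINPUT_EVENT_GESTURE_HOLD_END:
    gesture = libinput_event_get_gesture_event (ev);
    /* a cancelled hold means the fingers started moving, so it
     * was not a deliberate hold after all */
    if (!libinput_event_gesture_get_cancelled (gesture))
      show_context_menu ();
    break;
  default:
    break;
  }
}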

As usual, the required bits to get hold gestures into the wayland protocol are either in the works, mid-flight or merge-ready so expect this to hit the various repositories over the medium-term future.

Ignoring GtkTextTag when printing

Now that Text Editor has spell checking integrated, I needed a way to print without displaying tags such as our “misspelled word” underline squiggles. So GtkSourceView 5.2 will include gtk_source_print_compositor_ignore_tag() to do the obvious thing.

Previously, if you wanted to do this, you had to remove all your tags before printing, only to restore them afterwards. This should be a lot more convenient for people writing various GtkSourceView-based text editors. Although, I suspect many of them weren’t even doing this correctly to begin with, hence this PSA.
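
In practice that turns into a one-liner before you run the print operation. A minimal sketch, where buffer and misspelled_tag stand in for whatever your editor and spell checker provide:

#include <gtksourceview/gtksource.h>

static GtkSourcePrintCompositor *
create_print_compositor (GtkSourceBuffer *buffer,
                         GtkTextTag      *misspelled_tag)
{
  GtkSourcePrintCompositor *compositor;

  compositor = gtk_source_print_compositor_new (buffer);

  /* Skip the spell checker's "misspelled word" tag when printing;
   * gtk_source_print_compositor_ignore_tag() is new in 5.2 */
  gtk_source_print_compositor_ignore_tag (compositor, misspelled_tag);

  /* drive the GtkPrintOperation with this compositor as usual,
   * paginating and drawing pages from its callbacks */
  return compositor;
}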

July 25, 2021

GNOME Radio 12 Notes at GUADEC 2021

GUADEC 2021 took place July 21 – 25. This year’s conference was held online and lasted five days. The first two days of the conference, July 21 – 22, were dedicated to presentations. July 23 – 24 were Birds of a Feather sessions and workshops, and the last day was for social activities.

The latest release of GNOME Internet Radio Locator 12.0.1 features 4 Free Radio Transmissions from San Francisco, California (SomaFM Groove Salad, SomaFM The Trip, SomaFM Dub Step Beyond, and SomaFM DEF CON).

See my GUADEC 2021 notes on GNOME Radio 12 building and installation on Fedora Core 34 from source and x86_64 architecture packages.

July 24, 2021

Modifying Expectations

Hey everyone! Welcome to my new blog post. This post will tell you about my mid-point progress and something related to the project expectations, “actual vs. expected.” I am currently working on “Making GNOME Asynchronous!”. If you’re interested in reading more about my project, kindly read this blog post where I explained what my project is all about.

Let’s start by talking about my original internship project timeline

I had to solve two major issues in my internship project, so I created two significant tasks in the timeline, one for issue#1 and one for issue#2. Then I divided the central issue of this internship, issue#2, into multiple sub-tasks.

  • First half
    • May’21
    • June’21
      • Learn about the native syntax of the asynchronous function, GIR parser and the FUNC annotations.
      • Understand the heuristics which correspond to both the FINISH and SYNC annotations.
    • July’21
      • Implement both FINISH and SYNC annotations with and without the heuristics.
  • Second half
    • Add tests corresponding to the annotation, which is translated to a finish-func attribute in the GIR.
    • Add tests corresponding to the annotation, which is translated to a sync-func attribute in the GIR.
    • Add tests to check whether the async-func attribute is pointing in the reverse direction.
    • August’21
      • Buffer period for reviewing and adding suggested improvements
      • ***Stretch goals: like adding these changes in GJS to use this new GObject-introspection feature, if time permits.

By my 8th week of the internship, I had completed all the tasks specified for May and June ’21. In addition, I have implemented the annotations without using heuristics, and also added the tests specified in the second half of the internship timeline.

The changes I would make in the timeline if I were starting the project over.

What exactly should a timeline represent? Should it tell us all about the tasks to be completed by the end of the tenure? or should it be seen as a decision-making tool? 

I misunderstood it as the former one, but it represents the latter one. Due to this, I didn’t focus on prioritising the tasks and sub-tasks while creating the timeline. As a result, I had to prioritise the tasks while working on them, which took up a lot of my time. 

There might be times when things occur differently from what you had initially planned. Let’s look at the kinds of adaptations that we can make to a timeline. The first kind is when we do have information in the beginning but don’t use it optimally, and the other is when we uncover new information and need to adapt. Let’s understand both of these adaptations using some examples.

As seen in the “accomplished goals” section, I’m glad that even after modifying the expectations, I completed the tasks till the 8th week according to the expectations set in the timeline. And then came GUADEC’21 (the GNOME community conference). I was very enthusiastic about participating in the “Intern lightning talk” at GUADEC’21, scheduled for 23rd July ’21. I spent a whole week in preparation, which, as a result, affected my timeline, and I had to modify the expectations again. As the first kind of adaptation states, I should have taken the information about GUADEC into account while creating the timeline.

As for the second kind, the timeline I created with the given information was realistic. Still, software engineering is a constant process of uncovering new information, which we must adapt to. In my project, for example, “I didn’t realise that the annotation had to be added to girparser.py as well.” The problem with this kind of adaptation is that one cannot foresee it. So even if I were to start over the project, I would not be able to account for it in the timeline, which is totally acceptable.

There might be many aspects that can become obstacles for you to abide by your timeline. 

In my case, if I were to start over the project, I would talk to the project’s mentor to determine a smaller scope for the project and prioritise what parts are to be completed, leaving the rest unfinished. That would eliminate the timeline fluctuations and improve production time. In addition, I would also try to add GUADEC to the timeline.

My new plan for the second half of the internship

  • Mid-July’21 (from 9th week)
    • GUADEC preparation (till 23rd July’21)
    • Implementing the func using the heuristic for FUNC annotation
  • August’21
    • Adding the gobject-introspection APIs, that is, the C APIs g_callable_info_get_async_func and g_callable_info_get_finish_func, for accessing the change in GJS
    • Adding these changes in GJS to use this new GObject-introspection feature

We are making progress and will continue to make it better!

I’m almost done with my actual implementation and will be ready to move on to the Stretch goals of my internship soon enough. This project has taught me many new things. There were times when I felt exhausted, but my mentor is very supportive. Without his guidance,  I wouldn’t have achieved what I’ve accomplished so far. This internship is indeed going to be the most wonderful part of my life. I’m learning a lot and am excited to work on my next task. I am looking forward to learning more.

Have a Nice Day!

July 21, 2021

Emojent behavior

Earlier today I saw a social-media post saying, essentially, “Microsoft ought to release its new emoji font as FOSS!” with the addendum that doing so would “give some competition to Noto,” which the post-writer claimed to love. Doesn’t matter who wrote it. You do see people ostensibly in the FOSS community say stuff like that on a fairly frequent basis.

For starters, though, begging for a proprietary software vendor to re-license its product under FOSS terms is, at best, a wild misinterpretation of Why Vendors Do What They Do. Microsoft doesn’t re-license products on a whim, or even because they’re asked nicely, and they don’t decide to make something open source / free software accidentally. When they do it, it’s because a lot of internal offices have debated it and weighed the options and all that other corporate-process stuff. I think that’s fairly well-understood, so let’s skip to the font-specific parts.

That original post elicits eye-rolls in part because it undervalues fonts and emoji, as if the ONLY way that end users are going to get something of quality is if a “better” (read: proprietary) project makes it for them and then takes pity and releases into the wild. It also elicits some eye-rolls because it smacks of “ragequit Google products”, although naturally it’s hard to know if that’s really happening behind the scenes or not. I’m pretty active on Mastodon, and one of the peculiarities of the “fediversal” mindset is that there are a lot of folks with a knee-jerk reaction of hating on any software project deemed too-close-for-comfort with one of the Big Suspicious Vendors. It can be hard to adequately extract & reframe material from that pervasive context. So who can say; maybe there’s none of that.

Un-regardless, the bit in the original post that’s most obviously and demonstrably off-base is the suggestion that Noto is in want of competition on the “FOSS emoji” front in the first place. I can think of four other FOSS-emoji-font projects off the top of my head.

But it got me thinking: I wonder how many such projects there are in total, since I’m certain I’m not up-to-date on that info. A couple of years ago, I made a list, so I at least had a start, but I decided to take a few minutes to catalog them just for procrastination’s sake. Consider this a spiritual sequel/spin-off of the earlier “how many font licenses are there” post. Here’s a rough approximation, loosely grouped by size & relationship:

  1. Noto Emoji (Color) [src] — the obvious one, referred to above.
  2. Noto Emoji B&W [same repo, different build] — which you might not be as familiar with. This is an archived black-and-white branch (think “IRC and terminal”) which is still available. Interested parties could pick it back up, since it’s FOSS.
  3. Blobmoji [src] — this is another fork of Noto, in color, but which preserves the now-dropped “blob” style of smiley/person. [Side note: most emoji fonts these days are color-only; I’ll point out when they’re not. Just flagging the transition here.]
  4. Twemoji [src] — This is the other giant, corporate-funded project (developed by Twitter) which everyone ought to be familiar with.
  5. EmojiTwo [src] — This is perhaps the biggest and most active of the not-part-of-another-project projects. It’s a fork of the older EmojiOne [src] font, which in classic fashion used to be FOSS, then got taken proprietary as of its 3.0 release.
  6. EmojiOne Legacy [src] — This is the last available FOSS(ish; depending on who you ask) version of EmojiOne, said to be c. version 1.5.4. As the name implies, not being developed. If you take a liking to it, clone the repo because it could go away.
  7. EmojiOne 2.3 / Adobe [src] — This is another rescue-fork (I think we need a better word for that; hit me up) created by Adobe Fonts, around EmojiOne 2.3.
  8. FxEmojis [src] — This is a no-longer-developed font by Mozilla, originally part of the FirefoxOS project. I tested a FirefoxOS phone back in the day. It was a little ahead of its time; perhaps the emoji were as well…?
  9. Adobe Source Emoji [src] — This is a black-and-white emoji font also by Adobe Fonts, originally designed for use in Unicode Consortium documents. Does not seem to be actively updated anymore, however.
  10. Openmoji [src] — This is a pure-FOSS project on its own, which includes both color and black-and-white branches.
  11. Symbola [src] — This is an older emoji font that predates a lot of more formalized FOSS-font-licensing norms. But it is still there.
  12. GNU Unifont [src] — Last but not quite least, Unifont is not a traditional font at all, but a fallback pan-Unicode BMP font in dual-width. It does, however, technically include emoji, which is quite an undertaking.
  13. Emojidex [src] — Last and certainly least is Emojidex, a fork-by-the-same-author of an older emoji font project named Phantom Open Emoji. Both the older project (despite its name) and the new one have a hard-to-parse, not-really-free, singleton license that I suspect is unredistributable and likely self-contradictory. But it seems like the license quirks are probably more to be chalked up to being assembled by a non-lawyer and not reviewed, rather than being intentionally hard on compatibility. So who knows. If you get curious, maybe it’d be an easy sell to persuade the author to re-evaluate.

I’m sure there are more. And that’s not even getting into other symbol-font projects (which are super popular, especially among chat & social app developers for flair and reaction stickers). Just Raw Unicode Code Point stuff.

Making an emoji font is a LOT of hard work. Maintaining one is a LOT of hard work, too. The visual assets take a tremendous amount of time to design and hone for consistency and test; the engineering and font-building process is extremely difficult (with different font-compilation toolchains and different source-file editors than other fonts, not to mention the fact that there are multiple binary formats and the files themselves are utterly massive in size when compared to other font binaries).

Most of the fonts above are not packaged for Debian/Ubuntu nor, I’d be willing to wager, for many other distributions. So there’s a big, unprotected barn-side project there. The Noto Color emoji font is, because, well, it builds, thanks to the toolchain the team maintains. Want to find one from a different source and revive it or freshen & update it?

All of the above projects are under-staffed. So if you actually care about FOSS emoji fonts, they’re where you should start contributing.

Discovery Docs Part 4: Discovery

This is Part 4 in a series about the Discovery Docs initiative, which I will present about in my upcoming GUADEC talk. In Part 1: Discovering Why, I laid the groundwork for why I think we should focus our docs on discovery. In Part 2: Templates and Taxonomies, I talked about how to structure topics differently to emphasize learning. In Part 3: Voice and Style, I proposed using a more casual, direct writing style. In this post, I’ll look at increasing reader engagement.

“Nobody reads the docs.” This is a common complaint, a cliché even. It has some truth to it, but it misses the bigger picture. For this post, the more important point is that people don’t often seek out the docs. So if we’re writing interesting material, as I’ve discussed throughout this blog series, how do we reach interested people?

This post is all about how we encourage people to discover.

Help menus

I have been trying to rethink help menus for over a decade. From the venerable Help ▸ Contents, to the Help item in the deprecated app menu, to the Help item in the hamburger menu, Help has always been a blind target. What’s on the other side of that click? A helpful tutorial? A dusty manual? An anthropomorphic paperclip? Who knows.

To address this problem, I’ve worked on a design for help menus:

This design presents the users with topics that are relevant to what they’re doing right now. In these mockups, the example topics are mostly simple tasks. As I’ve discussed in this blog series, I want to move away from those. Combining this design with longer learning-based material can encourage people to explore and learn.

Learning

Speaking of learning, is “Help” even the right term anymore? That word is deeply ingrained in UI design. (Remember right-aligned Help menus in Motif applications?) And it fits well with the bite-sized tasks we currently write, probably better than it fit old manuals. But does it fit content focused on learning and discovery? Are people looking for help at all?

As we shift our focus, perhaps we should shift our language toward the word “Learn”. Use “Learn” instead of “Help” whenever it appears in the UI. Change the docs website from help.gnome.org to learn.gnome.org. Rename the app to something like “Help & Learning”.

Side note: I’ve never been a fan of the help buoy icon, and have always preferred the question mark in a blue circle. Somebody smart might be able to think of something even better for learning, although there’s also value in sticking with iconography that people know.

Web design

I mentioned the docs website. It needs more than a new URL. The current site uses an old design and is difficult to maintain. We have a documentation team initiative to redo the site using a documentation build tool designed to do these kinds of things. Here’s what it looks like at the moment:

This is extremely important for the docs team, regardless of whether we shift to learning-based content or not.

Visual presentation makes a difference in how people feel about your documentation. For comparison, imagine using GNOME 40 with the same user interaction, but using the boxy beveled aesthetics of GNOME 1. It’s just not as exciting.

To that end, it would be good to open up our designs to more people. I don’t scale when it comes to chasing design trends. The styling has been locked up in XSLT, which not many people are familiar with. One thing I did recently was to move the CSS to separate template files, which helps me at least. For a more radical change, I’ve also spent a bit of time on doing document transforms and styling entirely with JavaScript, Handlebars, and Sass. Unfortunately, I don’t have the bandwidth to finish that kind of overhaul as a free time project.

Social media

Imagine we have interesting and exciting new content that people enjoy reading. Imagine it’s all published on a visually stunning new website. Now what? Do we wait for people to stumble on it? Remember that the focus is on discovery, not stumbling.

Any well-run outreach effort meets people where they are. If you run a large-scale project blog or resource library, you don’t just quietly publish an article and walk away. You promote it. You make noise.

If we have topics we want people to discover, we should do what we can to get them in front of eyeballs. Post to Twitter. Post to Reddit. Set up a schedule of lessons to promote. Have periodic themes. Tie into events that people are paying attention to. Stop waiting for people to find our docs. Start promoting them.

Documentation is outreach.

Progress Bar in Next.js

Sometimes transitioning from one route to another takes a little time. Behind the scenes, the app may be rendering a complex page component or making an API call. In such cases, the app looks like it has frozen for a few seconds and then suddenly jumps to the next route, which makes for a poor UX. It is better to add a progress bar to our application, which gives our users a sense that something is loading.

In this tutorial, we will learn how to implement a progress bar in a Next.js application.

1. Installing NProgress

The first step is to install the nprogress npm module.

npm i --save nprogress

2. Basic Usage

In pages/_app.js, import the following modules. NProgress ships a default stylesheet that also needs to be loaded for the bar to be visible; importing global CSS like this is allowed in pages/_app.js:

import Router from 'next/router'
import NProgress from 'nprogress'
import 'nprogress/nprogress.css'

Now, we need to hook into some Router events to control the behaviour of the progress bar. Add the following code:

Router.events.on('routeChangeStart', () => NProgress.start())
Router.events.on('routeChangeComplete', () => NProgress.done())
Router.events.on('routeChangeError', () => NProgress.done())

Depending on our use case, we can also disable the loading spinner that NProgress shows by default.

NProgress.configure({ showSpinner: false })

The final code for pages/_app.js will look like this:

import Router from 'next/router'
import NProgress from 'nprogress'
import 'nprogress/nprogress.css'

// Start the progress bar when a route change begins,
// and finish it when the change completes or fails
Router.events.on('routeChangeStart', () => NProgress.start())
Router.events.on('routeChangeComplete', () => NProgress.done())
Router.events.on('routeChangeError', () => NProgress.done())

// Optional: hide the default loading spinner
NProgress.configure({ showSpinner: false })

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />
}

export default MyApp

Results

We are done with the code. Let’s see how our progress bar looks in a Next.js application.

July 20, 2021

Making Progress

A progress update for my Outreachy internship on Librsvg. Also, GUADEC is happening this week, are you registered?

Learning

My first step in this internship with Librsvg was learning what to learn about. From Rust to the internals of Librsvg itself, I had a lot of unfamiliar things thrust at me, but I used the bits of time I had in the first weeks to learn everything I could for this project. I tried to go into this with as open a mind as I could, approaching all these new things with eagerness. Largest on the to-do list was organizing what needed to be done, so I did what I generally do and made a list! I listed out in a spreadsheet a subset of the features SVG 2 had added, then Federico (my mentor, maintainer of Librsvg and GNOME co-founder, for those of you not seeing this post on Planet GNOME) and I sorted that list, removed things that weren’t applicable, and added things that were, until we got a more detailed list up on the Librsvg GitLab wiki.

First Bits of Code

Following this first step, Federico gave me some tasks to focus on, so I got to coding! The first task I worked on was implementing orient=”auto-start-reverse” as a property for the marker element. This required changing quite a few files and learning a lot about how the orientation and rendering of markers fit together. Federico walked me through the code, leaving notes on how the feature should be implemented, and after that long walkthrough I got to working on it. It was a bit rough, especially with me fighting Git to actually get everything in order, but Federico helped me along the way to finally get it done and the merge request made! Git can be a very complex and annoying machine to a newcomer, and my experience with it was (and still is) no exception.

A railway map showing the S Bahn train system in Germany, this version has arrows pointing straight from the end of the lines to show where the train lines go.

One example of auto-start-reverse is this train map; this is the fixed version.

A railway map showing the S Bahn train system in Germany, this version has arrows pointing haphazardly from the end of the lines, most of them pointing to the right side of the image, not at all where they're supposed to be pointing

The original image; see the grey arrows that point where the lines go? From Wikimedia: https://commons.wikimedia.org/wiki/File:S-Bahn_RheinNeckar2020.svg (SVG is GPLv3)
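To illustrate what the attribute enables, here is a minimal hand-written SVG sketch (illustrative only, not taken from librsvg’s test suite): a single arrowhead marker definition that points outwards at both ends of a line, because orient="auto-start-reverse" flips the start marker by 180°.

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <defs>
    <!-- With orient="auto-start-reverse", the marker placed via marker-start is
         rotated 180° relative to plain "auto", so one arrowhead can point
         outwards at both ends of the line -->
    <marker id="arrow" orient="auto-start-reverse" markerWidth="10" markerHeight="10"
            refX="8" refY="5">
      <path d="M 0 0 L 10 5 L 0 10 z"/>
    </marker>
  </defs>
  <line x1="30" y1="50" x2="170" y2="50" stroke="black" stroke-width="2"
        marker-start="url(#arrow)" marker-end="url(#arrow)"/>
</svg>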

Following that was another set of changes that required learning an entirely different part of the code: context-fill and context-stroke for the marker element. These allow an element to get its fill and stroke from the object that references it (context meaning a somewhat convoluted process of inheritance that depends on the referencing element). This also took a tour of the code to get implemented, which was exciting, as it delved more deeply into the rendering code and how it all fits together, and how elements get their final colors from the SVG file all the way to the final render. It was fascinating to learn about the rendering pipeline and the process of all the property values getting parsed and stored (something which I’m about to tackle more in depth in the future). The second half, implementing this for the use element, is still work-in-progress, but it’s close to being done!

An SVG test image, it's a green bar above a blue square with a green outline. The ends and center of the green bar have circles with white filling them, while the box has circles at the corners with blue outlining them.

This is one of the tests for context fill and stroke, with this one being the fixed version: here the circles on the green line and the corners of the blue square render how they’re supposed to.

This is the same SVG test image as the one above, but instead of the circles on the green line having the correct green outline, they are entirely white, breaking up the green line. The blue box's corners are completely black rather than blue and green.

This is how it used to render; the differences are especially noticeable on the circles on the green bar and the corners of the blue square.
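As a rough, hand-written illustration of the feature (simplified markup, not one of librsvg’s actual test files), a marker can pick up the stroke colour of whichever path references it:

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <defs>
    <!-- The circle's outline uses context-stroke, i.e. the stroke colour of
         the element that references this marker -->
    <marker id="dot" markerWidth="10" markerHeight="10" refX="5" refY="5">
      <circle cx="5" cy="5" r="4" fill="white" stroke="context-stroke"/>
    </marker>
  </defs>
  <!-- Because this path is stroked in green, the marker circles get a green outline -->
  <path d="M 20 50 L 100 50 L 180 50" fill="none" stroke="green" stroke-width="4"
        marker-start="url(#dot)" marker-mid="url(#dot)" marker-end="url(#dot)"/>
</svg>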

Learning Some More

This was the step where I learned the most about Git, and how to use it in a way that wasn’t my old ‘everything goes on the main branch and I’ll just delete it and re-download when it is too much to handle or I need to sync with the main repository’ method of making it work. Now I make a new branch, keep my main branch in sync with the upstream, and can even merge new changes to main back into the changes I made in the separate branch! It’s a wondrous boost to my ease of use and happiness.

Aside from that, I spent an evening making some scripts to run a Docker image of OpenSUSE on my Fedora machine and then run the Librsvg test suite inside of it. It was fun, as I had last worked with Docker when running Nextcloud through it, so learning how to work with it in a slightly less complex environment was quite educational. So now there’s a fairly functional set of scripts to run the test suite in an OpenSUSE, Fedora, or Debian Docker container, for all your development needs! These scripts also allowed Federico to debug a sporadic memory bug that had been crashing our GitLab CI for a while, which was eventually traced back to a bug in Pango and has been fixed upstream!

The next feature I tackled after that was implementing paint-order for text, which lets you specify whether a piece of text’s fill is painted on top of or below its stroke (outline). It’s a very useful feature, and this is the first feature whose first draft I completed without too much assistance. It was awesome seeing it working when I finished. See here:

This is one of the tests for paint-order on text, with the right ‘pizazz’ being marked to have the fill on top of the stroke.

This is how it used to render, with the right one having the fill completely hidden by the stroke.
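Here is a minimal sketch of the property in action (illustrative markup, not the actual test file): painting the stroke first keeps the letterforms readable inside a thick outline.

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="80">
  <!-- paint-order="stroke" paints the stroke first and the fill on top of it,
       instead of the default fill-then-stroke order -->
  <text x="20" y="55" font-size="40" fill="gold" stroke="black" stroke-width="6"
        paint-order="stroke">pizazz</text>
</svg>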

After that I began working on auto width and height for the image, rect, and svg elements. This feature varies depending on the element it’s applied to, but the part of the code that needed to be modified was about the same for each of them, so I was able to get it mostly done by myself with just some questions and feedback. This was also the first change where I practiced using Git to squash my mess of commits down into one to ease merging upstream, which was really satisfying to learn.
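As a rough example of the auto behaviour for the image element (illustrative only; how auto resolves differs per element under the SVG 2 geometry rules), leaving height unset lets the renderer derive it from the image’s intrinsic aspect ratio:

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
  <!-- "photo.png" is a placeholder. height keeps its initial value, auto, so it
       is expected to be derived from the image's intrinsic aspect ratio -->
  <image href="photo.png" x="10" y="10" width="200"/>
</svg>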

GUADEC & The Future

Finally, we’re at the present day! GUADEC is this week, and I’ll be participating in the intern lightning talks on Friday, so make sure to register for it and attend! Learning about so many different things and becoming a part of this community has been an amazing experience so far; I’m very thankful for the past half of the internship and so excited about the future. Thank you!

July 19, 2021

Discovery Docs Part 3: Voice and Style

This is Part 3 in a series about the Discovery Docs initiative, which I will present about in my upcoming GUADEC talk. In Part 1: Discovering Why, I laid the groundwork for why I think we should focus our docs on discovery. In Part 2: Templates and Taxonomies, I talked about how to structure topics differently to emphasize learning. In this post, I’ll talk about how we should write to be engaging, but still clear.

One of the main goals of Discovery Docs is to be more engaging and to create enthusiasm. It’s hard to create enthusiasm when you sound bored. Just as your speaking voice can either excite or bore people, so too can your writing voice affect how people feel while reading. Boring docs can leave people feeling bored about the software. And in a world of short-form media, boring docs probably won’t even be read.

This post has been the hardest in the series for me to write. I’ve been in the documentation industry for two decades, and I’ve crafted a docs voice that is deliberately boring. It has been a long learning process for me to write for engagement and outreach.

To write for engagement, we need to adopt a more casual voice that addresses the reader directly. This isn’t just a matter of using the second person. We do that already when giving instructions. This is a matter of writing directly to the reader. Think about how blog posts are often written. Think about how I’m writing this blog post. I’m talking to you, as if I’m explaining my thoughts to you over a cup of coffee.

This doesn’t mean our writing should be filled with needless filler words, or that we should use complicated sentences. Our writing should still be direct and concrete. But it can still be friendly and conversational. Let’s look at an example.

  • #1: Some users need to type accented characters that are not available on their keyboards. A number of options are available.
  • #2: You may need to type accented characters that are not available on your keyboard. There are a number of ways you can do this.
  • #3: Maybe you need to type an accented character that’s not on your keyboard. You have a few options.

#1 is stuffy. It talks about hypothetical users in the third person. It is stiff and boring, and it uses too many words. Don’t write like this.

#2 is more direct. It’s how most of our documentation is written right now. There’s nothing wrong with it, but it doesn’t feel exciting.

#3 is more casual. It uses the fewest words. It feels like something I would actually say when talking to you.

Using a more casual and direct voice helps get readers excited about the software they’re learning about. It makes learning feel less like grunt work. A combination of exciting material and an engaging writing style can create docs that people actually want to read.

What’s more, an engaging writing style can create docs that people actually want to write. Most people don’t enjoy writing dry instructions. But many people enjoy writing blogs, articles, and social media posts that excitedly show off their hard work. Let’s excitedly show off our hard work in our docs too.

July 16, 2021

Introducing “This Week in GNOME”

I have been following the “This Week in Matrix” blog series with great interest for some time now, and wondered: Why isn’t there something like this for GNOME?
To summarize the principle in a few words: A short, weekly summary in which maintainers briefly announce what they worked on for the past week.

For example, the following may be included in a weekly summary:

  • General news about a project
  • Presentation of new projects
  • New features
  • Instructions / Tutorials
  • Conferences / Meetings
  • General interesting thoughts that might be of public interest
  • … and much more! Just scroll through the Matrix blog and you’ll understand the principle very quickly.

After discussing the idea with other GNOME members, and agreeing that this is an idea with great potential, I started to implement the necessary technical requirements. We ran it as an experiment with a reduced set of maintainers. Here is our very first issue!

This Week in GNOME: #1 Scrolling in the Dark

Read through the blog post – it’s full of exciting news, and that’s just the beginning!

How does it work?

A user sends a message in the TWIG matrix room, mentioning the bot at the beginning of the message:

The bot will automatically recognize this message, and save it. In order for this message to appear in the next summary, it must be approved by an editor. This is done by adding the “⭕” emoji (only editors have this permission).

Likewise, editors can add messages to a specific section, or to a specific project.

In this example I have done the following:

  • ⭕: I have approved this message.
  • 📻: I have added the project description “Shortwave” to this message
  • 🟢: I have added this message to the “Circle Apps” section.
When a week has passed, an editor will create a new summary: a list of everything people have reported since the last summary. To issue it, an editor runs the “!render-file” command in the administration room.

All collected messages with related information will be summarized in a markdown document. This can be used to create a new blog post, using Hugo for example.

The message shown above would result in the following (raw markdown preview using Apostrophe):

The technical basis for this is hebbot – a matrix bot I developed in Rust using the matrix-rust-sdk. I tried to make this bot as generic and adaptable as possible, so that other communities can reuse it.

There have already been failed attempts to do monthly summaries, so why should it work with a weekly rhythm?

There are several reasons why it is very difficult to publish a monthly summary blog in the long term:
  • The time period is too long. A lot happens in one month. The longer the period, the more difficult (and time-consuming!) it is to summarize what has happened. Do you remember what you did in detail during this month? No? Neither do I.
  • Someone had to prepare the information so it could be shared in the form of a blog post. Either a central editor does this, or the submitter does it themselves. Either way, it’s a tedious and time-consuming process that many people don’t want to do.

TWIG has the following advantages here:

  • It’s super easy and quick to share news. You just need to open your chat client and send a short message to the TWIG room. You just finished a new feature on your project? Send a short (!) message about it, so that it will appear in the next weekly summary. A few words and maybe a screenshot/video are totally sufficient, no need to write a detailed blog post! Fire and forget!
  • The administrative workload is very low. An editor only has to approve and categorize the messages, the bot does the rest completely automatically.

Let’s show the world together what we do!

I’ve been involved in the GNOME project for quite some time now, and can say from personal experience that an outsider has absolutely no idea how much valuable work is being done behind the scenes.

  • Give the community the opportunity to share information with a large audience. GNOME Foundation members have access to the WordPress infrastructure, but there are many members who are not part of the Foundation. For TWIG, in principle, information can be shared by anyone, no matter who, as long as it is relevant to GNOME and newsworthy.
  • News first hand. We all know what happens when news / information gets out to the public via 5 different detours. Most of the time important information is lost or distorted. With TWIG there is a reliable and central source of truth.
  • Attract interested people / newcomers. The more people become aware of something / see what is happening, the more interest there will be.
Let us know what you’re working on, what cool feature you have released, or what bugs you have fixed! Join #thisweek:gnome.org and drop a message, we’ll do the rest!

GUADEC 2021 – Things you need to know!

GUADEC 2021 is less than a week away! Please make sure to register online for the conference if you have not done so yet.

GUADEC is the GNOME community’s main conference. This year’s event takes place remotely between July 21st-25th and features talks from many community members and contributors covering a range of subjects.

GUADEC 2021 also features two fantastic keynote speakers.
The first, Shauna Gordon-McKeon, programmer and community organizer, will present on July 21 at 20:30 UTC.
The second, Hong Phuc Dang, the founder of FOSSASIA, will present on July 22 at 15:00 UTC.

Don’t forget about the social events! Our GUADEC 2021 schedule is packed with post-conference social activities. You can find all the details online.

No GUADEC would be complete without a new t-shirt. The GUADEC 2021 event shirt is available for purchase on the GNOME Shop.
More information about GUADEC 2021 is available on the official event page.
Hope you enjoy it!
Participants walking outside of the GUADEC 2018 venue

July 15, 2021

GSoC 2021: Selection Editing and Window Selection

This summer I’m implementing a new screenshot UI for GNOME Shell. In this post I’ll show my progress over the past two weeks.

The new screenshot UI in the area selection mode

I spent the most time adding the four corner handles that allow you to adjust the selection. GNOME Shell’s drag-and-drop classes were mostly sufficient, save for a few minor things. In particular, I ended up extending the _Draggable class with a drag-motion signal emitted every time the dragged actor’s position changes. I used this signal to update the selection rectangle coordinates so it responds to dragging in real-time without any lag, just as one would expect. Some careful handling was also required to allow dragging the handle past selection edges, so for example it’s possible to grab the top-left handle and move it to the right and to the bottom, making it a bottom-right handle.

Editing the selection by dragging the corner handles

I’ve also implemented a nicer animation when opening the screenshot UI. Now the screen instantly freezes when you press the Print Screen button and the screenshot UI fades in, without the awkward screenshot blend. Here’s a side-by-side comparison to the previous behavior:

Comparison of the old and new opening animation, slowed down 2×

Additionally, I fixed X11 support for the new screenshot capturing. Whereas on Wayland the contents of the screen are readily available because GNOME Shell is responsible for all screen compositing, on X11 that’s not always the case: full-screen windows get unredirected, which means they bypass the compositing and go straight through the X server to the monitor. To capture a screenshot, then, GNOME Shell first needs to disable unredirection for one frame and paint the stage.

This X11 capturing works just as well as on Wayland, including the ability to capture transient windows such as tooltips—a long-requested feature. However, certain right-click menus on X11 grab the input and prevent the screenshot UI hotkey (and other hotkeys such as Super to enter the Overview) from working. This has been a long-standing limitation of the X11 session; unfortunately, these menus cannot be captured on X11. On Wayland this is not a problem as GNOME Shell handles all input itself, so windows cannot block its hotkeys.

Finally, over the past few days I’ve been working on window selection. Similarly to full-screen screenshots, every window’s contents are captured immediately as you open the screenshot UI, allowing you to pick the right window at your own pace. To capture the window contents I use Robert Mader’s implementation, which I invoke for all windows from the current workspace when the screenshot UI is opening. I arrange these window snapshots in a grid similar to the Overview and let the user pick the right window.

Window selection in action

As usual, the design is nowhere near finished or designer-approved. Consider it an instance of my “programmer art”. 😁

My goal was to re-use as much of the Overview window layout code as possible. I ended up making my own copy of the WorkspaceLayout class (I was able to strip it down considerably because the original class has to deal with windows disappearing, re-appearing and changing size, whereas the screenshot UI window snapshots never change) and directly re-using the rest of the machinery. I also made my own widget compatible with WindowPreview, which exports the few functions used by the layout code, once again considerably simplified thanks to not having to deal with the ever changing real windows.

The next step is to put more work into the window selection to make sure it handles all the different setups and edge cases right: the current implementation is essentially the first working draft that only supports the primary monitor. Then I’ll need to add the ability to pick the monitor in the screen selection mode and make sure it works fine with different setups too. I also want to figure out capturing screenshots with a visible cursor, which is currently notably missing from the screenshot UI. After that I’ll tackle the screen recording half.

Also, unrelated to the screenshot UI, I’m happy to announce that my merge request for reducing input latency in Mutter has finally been merged and should be included in Mutter 41.alpha.

That’s it for this post, see you in the next update!

GNOME Nightly Annual ABI Break

This only affects GNOME Nightly; if you are using the stable runtimes you have nothing to worry about.

It’s that time of the year again. We’ve updated the base of the GNOME Nightly Flatpak runtime to the Freedesktop-SDK 21.08 beta release.

This brings lots of improvements and updates to the underlying toolchain, but it also means that between yesterday and today, there is an ABI break and that all your Nightly apps will need to be rebuilt against the newer base.

Thankfully this should be as simple as triggering a new GitLab CI pipeline. If you merge anything, that will trigger a new build as well.

I suggest you also take the time to set up a daily scheduled CI job so that your applications keep up with runtime changes automatically, even if there hasn’t been new activity in the app for some time. It’s quite simple.

Go to your project, Settings -> CI/CD -> Schedules -> New schedule button -> Select the daily preset.

Happy hacking.

July 13, 2021

On Building Bridges

After reading “Community Power Part 4: The GNOME Way“, unlike the other articles of the series, I was left with a bittersweet taste in my mouth. Strangely, reading it triggered some intense negative feelings in me, even if I fundamentally agree with many of the points raised there. In particular, the “The Hows” and “In Practice” sections seemed to be the most intense triggers.

Reading it over and over and trying to understand the reason I had such strong reactions gave me some insights that I’d like to share. Perhaps they could be useful to more people, including the author of the article.

On Pronouns

I think one of the misleading aspects of the article is the extensive usage of “we” and “us”. I’d like to remind the reader that the article is hosted on a personal blog, and thus its content cannot be taken as an official statement of the GNOME community as a whole. I’m sure many members of the community read this “Community Power” series as “Tobias’ interpretation of the dynamics of the community”, but this may not be clear to people outside of this community.

In this particular article, I feel like the usage of these plural pronouns may have had a bad side effect. They seem to subtly imply that the GNOME community thinks and acts in a particular way – perhaps even contradicting the first part of the series – which is not a productive way to put it.

On Nuance And Bridges

The members of the GNOME community seem to broadly share some core values, yes, and these values permeate many aspects of daily interactions in varying degrees. Broad isn’t strict, though, and there actually is a surprising amount of disagreement inside the community. Most of the time, I think this is beneficial to both personal and collective growth. Ideas rarely go uncontested. There is nuance.

And nuance is precisely where I think many statements of the article fail.

Let’s look at an example:

Shell extensions are always going to be a niche thing. If you want to have real impact your time is better invested working on apps or GNOME Shell itself.

If I take the individual ideas, they make sense. Yes, contributing to GNOME Shell itself, or apps themselves, is almost always a good idea, even if it takes more time and energy. Yes, Shell extensions fill in the space for very specialized features. So, what’s the problem then?

Let me try and analyze this backwards, from how I would have written this sentence:

Shell extensions aren’t always the best route. If a particular feature is deemed important, contributing to GNOME Shell directly will have a much bigger impact. Contributors are encouraged to share their ideas and contribute upstream as much as possible.

Writing it like this, I think, gives a stronger sense of building bridges and positive encouragement while the core of the message remains the same. And I think that is achieved by getting rid of absolutes, and a better choice of words.

Compare that to the original sentence. “Niche” doesn’t necessarily convey a negative meaning, but then it is followed by “if you want to have real impact […]“, implying that niche equals unsubstantial impact. “Your time is better invested” then threateningly assumes the form of “stop wasting your time“. Not only it seems to be an aggressive way of writing these ideas, but it also seems to put down the efforts of contributors who spent time crafting extensions that help the community.

It burns bridges instead of building them.

Another example:

The “traditional desktop” is dead, and it’s not coming back. Instead of trying to bring back old concepts like menu bars or status icons, invent something better from first principles.

These are certainly bold statements! However, it raises some questions:

  • Is the “traditional desktop” really dead? I’m sure the people using Windows and Mac outnumber people using GNOME by many degrees of exponentiality. Or perhaps was Tobias only thinking about the experience side of things?
  • Is it really not coming back?
  • Are old concepts necessarily bad? Do they need to be reinvented?

I am no designer or user experience expert, evidently. I’m just playing the devil’s advocate here. These are unsubstantiated claims that do sound almost dogmatic to me. In addition to that, saying that a tradition is dead cannot be taken lightly. It is, in essence, a powerful statement, and I think it’s more generally productive to avoid it. Perhaps it could have been written in a less threatening and presumptuous way?

Let’s try and repeat the rewriting exercise above. Here’s my take:

GNOME’s focus on getting out of the way, and creating meaningful and accessible interfaces, conflicted with many elements that compose what we call the “traditional desktop”, such as menus and status icons. We set ourselves on a hard challenge to invent better patterns and improve the experience of using the desktop, and we feel like we are progressing the state of the art of the desktop experience.

My goal was to be less confrontational, and evoke the pride of working on such a hard problem with a significant degree of success. What do you, reader, think of this rewritten sentence?

Epilogue

To conclude this piece, I’m honestly upset with the original article that was discussed here. Over the past few years, I and many others have been working hard to build bridges with the extended community, especially extension developers, and it’s been extremely successful. I can clearly see more people coming together, helping the platform grow, and engaging and improving GNOME. I personally reviewed the first contribution of more than a dozen new contributors.

It seems to me that this article goes in the opposite direction: it puts down people for their contributions; it generates negativity towards certain groups of the extended GNOME community; it induces readers into thinking that it is written on behalf of the GNOME community when it is not.

Now that it is already out there, there is little I can do. I’m writing this hoping that it can undo some of the damage that I think the original article did. And again: despite using “we” and “us” extensively, the article is only Tobias’ personal interpretation of the community.

Community Power Part 4: The GNOME Way

In the first three parts of this series (part 1, part 2, part 3) we looked at how power works within GNOME and what that means for getting things done. We got to the point that to make things happen you (or someone you’ve hired) need to become a trusted member of the community, which requires understanding the project’s ethos.

In this post we’ll go over that ethos, both in terms of high level values, and what those translate to in more practical terms.

Values and Principles

GNOME is a very principled project, and there’s a fair amount of writing on this topic already.

Allan Day’s “The GNOME Way” (2017) is a great starting point, but I’d also recommend Havoc Pennington’s classic “Choosing our Preferences” (2002), and Emmanuele Bassi’s “Dev v. Ops” (2017). I’ve also written about some aspects of this in the past, including “There is no Linux Platform (2019)”. For some broader historical context also check out Emmanuele’s excellent History of GNOME podcast.

To give you an overview though, here’s my personal bullet point summary. It follows the same structure as the development process laid out in part 2 based on what areas specific values and ideas apply to. It’s not meant to be comprehensive, but rather give you an idea of the way people inside the project think.

The Why

Base motivations that inform everything we do.

  • We believe in software freedom as an inclusive, accountable model for producing technology in the commons.
  • Our software is built to be usable by everyone. We care deeply about user experience, accessibility, internationalization, and support for a diverse range of hardware.
  • Software should be structurally and aesthetically elegant, both in terms of underlying technology and user interface.

The What

What kinds of things we think are worth pursuing, and (just as important) what kinds of things should be avoided.

  • Third-party apps are the best abstraction to extend the core system with additional functionality. This is why we put a huge amount of work into empowering third party app developers to build more and better apps.
  • Every preference has a cost, and this cost rises exponentially as you add more of them. This is why we avoid preferences as much as possible, and focus on fixing the underlying problems instead.
  • Similarly, there is a direct relationship between how vertically integrated a product is and how cohesive you can make it. Every unnecessary variable you eliminate across the stack frees up time and energy, and creates opportunities for features you couldn’t otherwise build.
  • People’s attention is precious. We pride ourselves in being distraction free.

The How

Useful rules of thumb around how we go about making things.

  • We don’t do hacks. Rather than working around a problem at the wrong layer of abstraction, we believe in going to the root of the problem and fixing it for everyone, even if that means digging into lower layers (and ends up being far more difficult as a result).
  • We see design holistically, rather than as an isolated thing the design team does. It’s not just about functionality and aesthetics, but also underlying technology, and what to build in the first place. Even if you’re not contributing on the design team, developing an affinity for design will make you a more effective contributor.
  • Looking at relevant art is important, but simply copying the competition doesn’t usually produce great results. We have a proud history of inventing new paradigms that are better than the status quo.
  • As a general rule, start from the user experience you want and then go about building the technology necessary to create it, not the other way around. However: This is not an excuse for bad engineering or pursuing ideas that are conceptually impossible (e.g. multi-protocol chat clients).
  • Defer to the Expert. Everyone has different areas of expertise, such as user experience, security, accessibility, performance, or localization. Listen to the people most experienced in a given domain.
  • Design is all about trade-offs. Be wary of hard and fast rules that only look at one part of a problem (e.g. “vertical space is at a premium, therefore…”), and instead try to balance various concerns in a way that works well overall.

In Practice

Some of the above principles are quite abstract, so what do they translate to when actually building software day to day? Here are some examples of how they apply to real-world questions.

  • App developers should do their own packaging. It’s the only way to do it sustainably at scale.
  • Flatpak is the future of app distribution.
  • The “traditional desktop” is dead, and it’s not coming back (Note: I’m talking about Windows 95 era UI patterns here, not desktop vs. mobile). Instead of trying to bring back old concepts like menu bars or status icons, invent something better from first principles.
  • System-wide theming is a broken idea. If you don’t like the way apps look, contribute to them directly (or to the platform style).
  • Shell extensions are always going to be a niche thing. If you want to have real impact your time is better invested working on apps or GNOME Shell itself.
  • “Filling the available space” is rarely a good goal by itself, and an easy way to design yourself into a corner.

All of the above is of course my personal perception, and you’ll find variations on these ideas depending on who you talk to. However, in my experience most of them are shared fairly consistently by people across the community, especially given our informal structure.

Now that we’ve covered how things get done, by whom, and why, you’re in a great position to start making your mark. In the next part of this series we’ll look at practical first steps for contributing.

Until then, happy hacking!

Does free software benefit from ML models being derived works of training data?

Github recently announced Copilot, a machine learning system that makes suggestions for you when you're writing code. It's apparently trained on all public code hosted on Github, which means there's a lot of free software in its training set. Github assert that the output of Copilot belongs to the user, although they admit that it may occasionally produce output that is identical to content from the training set.

Unsurprisingly, this has led to a number of questions along the lines of "If Copilot embeds code that is identical to GPLed training data, is my code now GPLed?". This is extremely understandable, but the underlying issue is actually more general than that. Even code under permissive licenses like BSD requires retention of copyright notices and disclaimers, and failing to include them is just as much a copyright violation as incorporating GPLed code into a work and not abiding by the terms of the GPL is.

But free software licenses only have power to the extent that copyright permits them to. If your code isn't a derived work of GPLed material, you have no obligation to follow the terms of the GPL. Github clearly believe that Copilot's output doesn't count as a derived work as far as US copyright law goes, and as a result the licenses on the training data don't apply to the output. Some people have interpreted this as an attack on free software - Copilot may insert code that's either identical or extremely similar to GPLed code, and claim that there are no license obligations created as a result, effectively allowing the laundering of GPLed code into proprietary software.

I'm completely unqualified to hold a strong opinion on whether Github's legal position is justifiable or not, and right now I'm also not interested in thinking about it too much. What I think is more interesting is what the impact of either position has on free software. Do we benefit more from a future where the output of Copilot (or similar projects) is considered a derived work of the training data, or one where it isn't? Having been involved in a bunch of GPL enforcement activities, it's very easy to think of this as something that weakens the GPL and, as a result, weakens free software. That was my initial reaction, but that's shifted over the past few days.

Let's look at the GNU manifesto, specifically this section:

The fact that the easiest way to copy a program is from one neighbor to another, the fact that a program has both source code and object code which are distinct, and the fact that a program is used rather than read and enjoyed, combine to create a situation in which a person who enforces a copyright is harming society as a whole both materially and spiritually; in which a person should not do so regardless of whether the law enables him to.

The GPL makes use of copyright law to ensure that GPLed work can't be taken from the commons. Anyone who produces a derived work of GPLed code is obliged to provide that work under the same terms. If software weren't copyrightable, the GPL would have no power. But this is the outcome Stallman wanted! The GPL doesn't exist because copyright is good, it exists because software being copyrightable is what enables the concept of proprietary software in the first place.

The powers that the GPL uses to enforce sharing of code are used by the authors of proprietary software to reduce that sharing. They attempt to forbid us from examining their code to determine how it works - they argue that anyone who does so is tainted, unable to contribute similar code to free software projects in case they produce a derived work of the original. Broadly speaking, the further the definition of a derived work reaches, the greater the power of proprietary software authors. If Oracle's argument that APIs are copyrightable had prevailed, it would have been disastrous for free software. If the Apple look and feel suit had established that Microsoft infringed Apple's copyright, we might be living in a future where we had no free software desktop environments.

When we argue for an interpretation of copyright law that enhances the power of the GPL, we're also enhancing the power of giant corporations with a lot of lawyers on hand. So let's look at this another way. If Github's interpretation of copyright law holds, we can train a model on proprietary code and extract concepts without having to worry about being tainted. The proprietary code itself won't enter the commons, but the ideas it embodies will. No more worries about whether you're literally copying the code that implements an algorithm you want to duplicate - simply start typing and let the model remove the risk for you.

There's a reasonable counter argument about equality here. How much GPL-influenced code is going to end up in proprietary projects when compared to the reverse? It's not an easy question to answer, but we should bear in mind that the majority of public repositories on Github aren't under an open source license. Copilot is already claiming to give us access to the concepts embodied in those repositories. Do these provide more value than is given up? I honestly don't know how to measure that. But what I do know is that free software was founded in a belief that software shouldn't be constrained by copyright, and our default stance shouldn't be to argue against the idea that copyright is weaker than we imagined.

(Edit: this post by Julia Reda makes some of the same arguments, but spends some more time focusing on a legal analysis of why having copyright cover the output of Copilot would be a problem)


Record Live Multiple-Location Audio immediately in GNOME Gingerblue 0.6.0

GNOME Gingerblue 0.6.0 is available and builds/runs on GNOME 40 systems such as Fedora Core 34.

It supports immediate, live audio recording from the microphone/input line on a computer, or from remote audio cards connected over USB, into compressed Xiph.org Ogg Vorbis encoded audio files stored in the private $HOME/Music/ directory, through PipeWire (www.pipewire.org) with GStreamer (gstreamer.freedesktop.org) on Fedora Core 34 (getfedora.org).

See the GNOME Gingerblue project (www.gingerblue.org) for screenshots, Fedora Core 34 x86_64 RPM package and GNU autoconf installation package (https://download.gnome.org/sources/gingerblue/0.6/gingerblue-0.6.0.tar.xz) for GNOME 40 systems and https://gitlab.gnome.org/ole/gingerblue.git for the GPLv3 source code in my GNOME Git repository.

Gingerblue music recording session screen. Click “Next” to begin session.

The default name of the musician is extracted from g_get_real_name(). You can edit the name of the musician and then click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Type the name of the musical song. Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip any of the details).

Type the name of the musical instrument. The default instrument is “Guitar”. Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip any of the details).

Type the name of the audio line input. The default audio line input is “Mic” ( gst_pipeline_new("record_pipe") in GStreamer). Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Enter the recording label. The default recording label is “GNOME” (Free label). Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Enter the Computer. The default station label is a Fully-Qualified Domain Name (g_get_host_name()) for the local computer. Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Notice the immediate, live recording file. The default immediate, live recording file name falls back to the result of g_strconcat(g_get_user_special_dir(G_USER_DIRECTORY_MUSIC), "/", gtk_entry_get_text(GTK_ENTRY(musician_entry)), "_-_", gtk_entry_get_text(GTK_ENTRY(song_entry)), "_[",g_date_time_format_iso8601 (datestamp),"]",".ogg", NULL) in gingerblue/src/gingerblue-main.c

Click on “Cancel” once in GNOME Gingerblue to stop immediate recording and click on “Cancel” once again to exit the application (or Ctrl-c in the terminal).

The following Multiple-Location Audio Recording XML file [.gingerblue] is created in G_USER_DIRECTORY_MUSIC (usually $HOME/Music/ on American English systems):

<?xml version='1.0' encoding='UTF-8'?>
<gingerblue version='0.6.0'>
<musician>Wilber</musician>
<song>Gingerblue Track 0001</song>
<instrument>Piano</instrument>
<line>Mic</line>
<label>GNOME Music</label>
<station>streaming.gnome.org</station>
<filename>/home/wilber/Music/Wilber_-_Song_-_2021-07-12T21:36:07.624570Z.ogg</filename>
</gingerblue>

You’ll find the recorded Ogg Vorbis audio files along with the Multiple-Location Audio Recording XML files in g_get_user_special_dir(G_USER_DIRECTORY_MUSIC) (usually $HOME/Music/) on GNOME 40 systems configured in the American English language.

July 12, 2021

Add metadata to your app to say what inputs and display sizes it supports

The appstream specification, used for appdata files for apps on Linux, supports specifying what input devices and display sizes an app requires or supports. GNOME Software 41 will hopefully be able to use that information to show whether an app supports your computer. Currently, though, almost no apps include this metadata in their appdata.xml file.

Please consider taking 5 minutes to add the information to the appdata.xml files you care about. Thanks!

If your app supports (and is tested with) touch devices, plus keyboard and mouse, add:

<recommends>
  <control>keyboard</control>
  <control>pointing</control>
  <control>touch</control>
</recommends>

If your app is only tested against keyboard and mouse, add:

<requires>
  <control>keyboard</control>
  <control>pointing</control>
</requires>

If it supports gamepads, add:

<recommends>
  <control>gamepad</control>
</recommends>

If your app is only tested on desktop screens (the majority of cases):

<requires>
  <display_length compare="ge">medium</display_length>
</requires>

If your app is adaptive and works on mobile device screens through to desktops, add:

<requires>
  <display_length compare="ge">small</display_length>
</requires>

Or, if you’ve developed your app to work at a specific size (mostly relevant for mobile devices), you can specify that explicitly:

<requires>
  <display_length compare="ge">360</display_length>
</requires>

Note that there may be updates to the definition of display_length in appstream in future for small display sizes (phones), so this might change slightly.

Another example is what I’ve added for Hitori, which supports touch and mouse input (but not keyboard input) and which works on small and large screens.
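For reference, that combination would look roughly like the following, using the same patterns as above (a sketch only; Hitori’s actual appdata may differ in its details):

<recommends>
  <control>pointing</control>
  <control>touch</control>
  <!-- keyboard is deliberately absent, since keyboard input is not supported -->
</recommends>
<requires>
  <display_length compare="ge">small</display_length>
</requires>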

See the full specification for more unusual situations and additional examples.

July 11, 2021

A New Look for the Health app

This required creating a new view (homepage) along with 2 new widgets (a circular progress bar and an arrow) drawn with cairo.

The rationale for this design is to show important data at the top. Users can get more details (graphs and activity history) by clicking the buttons below.

Right now there are only three buttons, but we will soon add 1 more button that will show the history of calories burnt. This will then have a 2x2 grid appearance.


Gtk-rs Is Pretty Neat

I’m sure by now you’ve seen the gtk-rs bindings, which keep getting better.

Recently I put together a little demo1 taking advantage of gtk-rs to bind a small GLib/C library providing a simple object, extend that object in Rust, and ultimately consume it in GJS2.

The name ‘gtk-rs’, like ‘gtk-doc’, is a bit of a misdirect. Whilst the project’s main goal is Gtk bindings, it’s really a system for building Rust crates that wrap GLib-style libraries via gobject-introspection. Much of the work is done by the gir3 4 tool, which both generates the low-level C ffi and automates most of the high-level binding – with the notable exception of subclass support.

Fortunately, we can look at the Gtk4 bindings for help; I found layout_manager.rs to be a useful example. One gotcha to watch out for is vfuncs, especially signals, as they don’t necessarily have a default implementation.

Unfortunately gtk-rs doesn’t have any magic for generating introspection data (yet?), so we follow rsvg’s example and manually build C-ABI wrappers along with a hand-written header to use with g-ir-scanner. handle.rs is a useful resource when doing this.

Happy multi-language hacking 🙂


Footnotes

1
This was originally a minimal reproducer; turns out I was just being silly
2
Of course it doesn’t have to be GJS; PyGObject etc. should work as well
3
You could think of it as a high-level (and specialised) version of bindgen
4
The name is a tad confusing and may change

Thanks to Bilal, Jordan, Chris, Sebastian, and others for their tips/fixes and/or rubber ducking, as well as to the wider gtk-rs/g-i communities who build this tooling.

July 10, 2021

No more open tickets left in GNOME Bugzilla

It’s already been impossible for a while to create new tickets in GNOME Bugzilla, since GNOME moved to GitLab for bug reports and feature requests (and many other software development aspects, of course).
In May 2021, Bart proposed to wrap up the Bugzilla migration (very appreciated). In addition, in November 2020 and May 2021 I (had) sent emails to maintainers (listed in the DOAP files in each Git repository) of all remaining code bases with open tickets left, with instructions to file a migration request for importing tickets from Bugzilla to GitLab if wanted.

As of this week, there are finally no open tickets left in GNOME Bugzilla. All were either migrated to GNOME GitLab or mass-closed over the last weeks by Bart or me. When mass-closing, an explanatory comment was added (example) to allow contributors to understand why we closed their ticket.
While two or three authors expressed unhappiness about closing their more than 10 year old open tickets, most folks seem to be aware of the constraints of free and open source projects, aware of the need for infrastructure systems able to fulfil modern software development needs, or simply don’t care.
Creating new Bugzilla accounts has also been disabled, as (to my surprise) some folks are quite good in ignoring large banners on top of websites.

This brings GNOME closer to making its Bugzilla instance read-only, converting to static content, shutting down legacy systems that create maintenance costs.

July 08, 2021

Summer of 2021 with GNOME

GSoC project: People/Faces of GNOME

Credits: Jakub Steiner

Story of Faces of GNOME.

The GNOME Foundation is a vibrant worldwide community of amazing people involved in making GNOME, one of the most loved desktop environments. The community is not limited to the people delivering pieces of code; it also includes people helping with design, translations, documentation, management, and much more.

GNOME is built by people, and we value each and every contribution that helps create such world-class free software.

Back in 2020, the Engagement team (the frontier of GNOME) came up with a new initiative, Faces of GNOME, to celebrate all kinds of contributions, with the motive of creating a much stronger community driven by passion.

GNOME loves diversity!

GNOME has all kinds of people.

1. Designers

They design our components, apps, and guidelines, and ensure GNOME’s look and feel meets high standards.

2. Developers

They deliver the pieces of code behind GNOME. They ship our Apps and the GNOME Desktop.

3. Translators

They make GNOME more understandable and usable for people all around the world.

4. Documenters

They document and create guides for everything GNOME, from our GTK APIs to newcomer guides.

5. Engagement

They manage our Communities, Conferences, Marketing, and Events. They are the frontier of GNOME.

6. Moderators

They ensure that our Communities are healthy and secure and play a vital role in establishing the tenor of a forum.

7. Mentors

They manage and supervise student programs like Google Summer of Code, Google Season of Docs, and Outreachy.

8. Staff

They handle the internals of the GNOME Foundation and ensure our operations run smoothly.

9. Speakers

Speakers are contributors, volunteers, or sponsors who have held any kind of workshop, talk, or activity during GNOME events.

10. Outreach Interns

These are Google Summer of Code and Outreachy students who spend their summers working on some exciting project ideas.

Project abstract.

The ultimate goal of People/Faces of GNOME is to develop a website using modern site generators & JavaScript to showcase all the past and current GNOME contributors stated above. It intends to let community members create and maintain profile pages, and add custom information and markdown-supported blogs, by providing them with a personal space hosted in multiple languages.

The project includes introducing bots for automating Markdown profiles, managing SEO, setting up localization capabilities, reworking search queries, and improving web performance, alongside deploying the website using GitLab CI services.

The project aims for a full, code-complete solution, with documentation, guidelines, and all pages finished and ready to launch, which would allow Faces/People of GNOME to succeed not only as a project but also as a program.

Stay tuned for my next series of blog posts, in which I will share the challenges I faced, my experience of GUADEC, and much more. :)

July 06, 2021

How your organisation’s equipment policy can impact the environment

At the Endless OS Foundation, we’ve recently been updating some of our internal policies. One of these is our equipment policy, covering things like what laptops and peripherals are provided to employees. While updating it, we took the opportunity to think about the environmental impact it would have, and how we could reduce that impact compared to standard or template equipment policies.

How this matters

For many software organisations, the environmental impact of hardware purchasing for employees is probably at most the third-biggest contributor to the organisation’s overall impact, behind carbon emissions from energy usage (in building and providing software to a large number of users), and emissions from transport (both in sending employees to conferences, and in people’s daily commute to and from work). These both likely contribute tens of tonnes of emissions per year for a small/medium sized organisation (as a very rough approximation, since all organisations are different). The lifecycle emissions from a modern laptop are in the region of 300kgCO2e, and one target for per-person emissions is around 2.2tCO2e/year by 2030.

If changes to this policy reduce new equipment purchases by 20%, that’s roughly a 20kgCO2e/year reduction per employee.
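That figure follows from the numbers above; as a rough sketch of the arithmetic (assuming laptops would otherwise be replaced on a typical 3-year cycle):

```latex
\frac{300\,\mathrm{kgCO_2e\ (laptop\ lifecycle)}}{3\,\mathrm{years}} \approx 100\,\mathrm{kgCO_2e/year\ per\ employee},
\qquad 0.2 \times 100 \approx 20\,\mathrm{kgCO_2e/year}
```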

So, while changes to your organisation’s equipment policy are not going to have a big impact, they will have some impact, and are easy and unilateral changes to make right now. 20kgCO2e is roughly the emissions from a 150km journey in a petrol car.

What did we put in the policy?

We started with a fairly generic policy. From that, we:

  • Removed time-based equipment replacement schedules (for example, replacing laptops every 3 years) and went with a more qualitative approach of replacing equipment when it’s no longer functional enough for someone to do their job properly on it.
  • Provided recommended laptop models for different roles (currently several different versions of the Dell XPS 13), which we have checked conform to the rest of the policy and have an acceptable environmental impact — Dell are particularly good here because, unlike a lot of laptop manufacturers, they publish a lifecycle analysis for their laptops
  • But also gave people the option to justify a different laptop model, as long as it meets certain requirements:

All laptops must meet the following standards in order to have low lifetime impacts:

All other equipment must meet relevant environmental standards, which should be discussed on a case by case basis

If choosing a device not explicitly listed above, manufacturers who provide Environmental Product Declarations for their products should be preferred

  • These requirements aim to minimise the laptop’s carbon emissions during use (i.e. its power consumption), and increase the chance that it will be repairable or upgradeable when needed. In particular, having a replaceable battery is important, as the battery is the most likely part of the laptop to wear out.
  • The policy prioritises laptop upgrades and repairs over replacement: even when a laptop would typically be coming up for replacement after 3 years, it steers people towards upgrading it (a new hard drive, additional memory, a new battery, etc.) rather than replacing it.
  • When a laptop is functional but no longer useful, the policy requires that it’s wiped, refurbished (if needed) and passed on to a local digital inclusion charity, school, club or similar.
  • If a laptop is broken beyond repair, the policy requires that it’s disposed of according to WEEE guidelines (which is the norm in Europe, but potentially not in other countries).

A lot of this just codifies what we were doing as an organisation already — but it’s good to have the policy match the practice.

Your turn

I’d be interested to know whether others have similar equipment policies, or have better or different ideas — or if you make changes to your equipment policy as a result of reading this.

July 05, 2021

GtkSourceView Searching with PCRE2

Last year I did some work to make GtkSourceView use PCRE2 for syntax highlighting. The primary motivation there was to improve syntax highlighting performance by using PCRE2’s JIT capability.

However, that left us in an odd place with how GtkSourceSearchContext works for regex-enabled search. It was using GRegex, which itself uses PCRE 1. It’s pretty clear that the goal is to completely deprecate GRegex in GLib, and its days are numbered. In particular, there is a lot we can’t do to control the execution environment and protect against things like stack overflows. Making things worse, PCRE 1 doesn’t appear to be maintained these days.

So I finally got around to moving GtkSourceSearchContext over to our very-GRegex-looking PCRE2 wrapper, ImplRegex, after fixing a number of bugs in it. However, it does not use the JIT, because that seems a bit unnecessary for something meant to be interactive: we want the latency from regex creation to execution to be quite low.
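To illustrate the trade-off: ImplRegex is a C wrapper around PCRE2, not shown here, but the same choice can be sketched with the Rust pcre2 crate (the pattern and subject strings are purely illustrative). Interactive search compiles a fresh pattern on nearly every keystroke, so skipping the JIT keeps creation-to-match latency low; highlighting compiles once and matches many times, so the JIT pays off there.

```rust
use pcre2::bytes::RegexBuilder;

fn main() -> Result<(), pcre2::Error> {
    let pattern = r"\bfn\s+\w+";
    let haystack: &[u8] = b"pub fn forward_async(x: i32)";

    // Interactive search: the pattern changes constantly, so pay no JIT cost up front.
    let search = RegexBuilder::new().build(pattern)?;
    assert!(search.is_match(haystack)?);

    // Syntax highlighting: compile once, match over and over, so JIT compilation is worthwhile.
    let highlight = RegexBuilder::new().jit_if_available(true).build(pattern)?;
    assert!(highlight.is_match(haystack)?);

    Ok(())
}
```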

It will be available in nightlies soon for the 5.x series. I’m hesitant to backport it to 4.x or 3.x because it’s a rather large subsystem change and I’d like to reduce chances of breaking things I can’t reasonably test. Given the relatively few GtkSourceView 5.x applications in the wild (other than my own) I feel it’s a more acceptable change there.

July 03, 2021

GSoC 2021 and GNOME Design tools

This Google Summer of Code, I decided to work with Bilal Elmoussaoui as my mentor, the goal being to update some GNOME design tools to GTK 4, specifically Icon Library and App Icon Preview. Both apps are written in Rust and use the gtk-rs bindings for GTK.

Icon Library

Icon Library

Icon Library is an app for previewing the symbolic icons in icon-development-kit. It lets you search for icons and export them. The process of porting Icon Library (!16) was quite straightforward: the clipboard code had to be adapted to the new API and the search provider had to be tweaked.

The most interesting bit was the search feature. Because of the better support for subclassing in gtk4-rs, it was natural to use list models for displaying lists, which in turn made putting a filter on the model an obvious choice. But filtering the list model meant that every icon had to be redrawn on each search query, which is quite slow. The solution was to go back to setting the filter directly on the list widget.
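A minimal sketch of the approach that won out, filtering directly on the list widget; the layout assumed here (a GtkFlowBox of icons with the icon name stored as each child’s widget name) and the helper function are illustrative, not Icon Library’s actual code:

```rust
use gtk::prelude::*;

// Hypothetical helper: wire a search entry up to a flow box of icons by filtering
// the already-created children instead of wrapping the model in a FilterListModel.
fn wire_up_search(search: &gtk::SearchEntry, icons: &gtk::FlowBox) {
    let entry = search.clone();
    icons.set_filter_func(move |child| {
        // Children are only shown or hidden, never recreated, so nothing is
        // redrawn from scratch while the user types.
        let query = entry.text().to_lowercase();
        query.is_empty() || child.widget_name().to_lowercase().contains(&query)
    });

    // Re-run the filter whenever the query changes.
    let icons = icons.clone();
    search.connect_search_changed(move |_| icons.invalidate_filter());
}
```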

App Icon Preview

App Icon Preview

App Icon Preview is an app for designing icons that target the GNOME environment. One complex part of porting this app is its dependence on librsvg, which was recently updated for the latest gtk-rs. Another friction point is that widgets have moved away from using Cairo as their renderer, and the Paintable API is not very friendly to the clipboard or to saving widgets as images: clipboards consume gdk::Texture objects, which carry metadata, and Paintables have no methods to construct one.

After some investigation, we found a way to extract the render node and paint it into the Cairo context used to draw the texture.
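Roughly: snapshot the paintable, draw the resulting render node onto a cairo image surface, and wrap the pixels in a gdk::MemoryTexture. Below is a minimal sketch of that last surface-to-texture step, assuming the surface was created as ARGB32; it is a simplified illustration, not the exact code from the port.

```rust
use gtk::{cairo, gdk, glib};

// Wrap the pixels of a finished cairo image surface in a gdk::Texture so it can
// be handed to the clipboard or saved to disk.
fn texture_from_surface(surface: &mut cairo::ImageSurface) -> gdk::MemoryTexture {
    surface.flush(); // make sure all drawing has hit the pixel buffer
    let width = surface.width();
    let height = surface.height();
    let stride = surface.stride() as usize;
    let data = surface.data().expect("failed to borrow surface pixels");
    gdk::MemoryTexture::new(
        width,
        height,
        // cairo ARGB32 on a little-endian machine matches this layout.
        gdk::MemoryFormat::B8g8r8a8Premultiplied,
        &glib::Bytes::from(&data[..]),
        stride,
    )
}
```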

The port !62 still has some rough edges, but it is looking promising.