Free Software, Free Society!
Thoughts of the FSFE Community (English)

Saturday, 07 March 2020

Real-time communication and collaboration in a file sync & share environment - An introduction to Nextcloud Talk

At the beginning of this year I gave two presentations about Nextcloud Talk: first at the annual CS3 conference in Copenhagen and, just one week later, at FOSDEM in Brussels. Nextcloud Talk provides a full-featured real-time communication platform: completely Free Software, self-hosted and nicely integrated with all the other aspects of Nextcloud.


(This blog post contains some presentation slides; you can see them here.)

Saturday, 29 February 2020

Diving into the world of Ring Fit Adventure

Continuing with my Ring Fit Adventure adventure1, these are the first full two weeks of working out with Ring Fit Adventure.

My plan is to work out every work day, but this weekend I could not stop myself from trying out a few more mini-games. On the other hand, I skipped two days, but felt bad and missed the routine.

Anyway, onwards with the adventure²! :D

Adventure mode

These weeks, according to the heart sensor, I am getting a light to moderate workout, which given that I seem to have caught a bit of a cold, is just about right. I also continue to stick to the game’s suggestion when to stop training for the day.

The difficulty level is still at 16. But I may ramp it up a bit next week, when I feel better.

World 3

On Monday I started with World 3 and so far it brought several new elements, keeping everything still fresh:

  • (semi-)optional side-quest
  • more mini-games – it seems through the Adventure mode the game plans to gradually introduce the player to all the mini-games, which is a neat trick
  • NPCs
  • shops for equipment, consumable items and ingredients
  • equipment – changes stats
  • consumable items – regenerates health, hints at other effects in the future
  • ingredients and recipes to create consumables
  • early signs of a plot twist
  • new battle skills

World 4

This world provided a new hurdle that I needed to overcome with a new movement skill, which I had to obtain by training (surprise there!).

In addition to a nice story loop (and plenty of puns again), again there were new things introduced:

  • more ingredients and recipes – now they have additional effects like different attack buffs, better drops, or easier travel
  • another shop with new equipment and consumables – it seems like from now on there will be a new shop with new items in every world
  • optional side-quests of different kinds – one including a teleport
  • even more new battle skills
  • enemies with healing skills
  • healing skill
  • new movement skill

I even revisited World 3 and did some side-quests there. It seems like side-quests will pop up in previous worlds as one progresses to new ones, which means revisiting previous worlds will be needed for a 100% run. These quests also reward new skills and items, which is always neat.

General thoughts

So far, every world brought a new surprise, new gaming techniques and skills to master, at a very good pace. It seems to me it should not be too fast for newbies, but also does not seem to go too slow for veteran gamers.

What really needs praise is the level design. It might not be apparent at first, but once you start paying attention, you notice that it introduces not just great timing between slower and more intensive workouts, but also changes the types of workout, either explicitly within the level or implicitly through the enemy choices and the unlocking of new attack skills. If you look closely, you will even see that several levels have alternative paths.

Finally, I am honestly very positively surprised at the quality of the RPG elements in this game! Of course it is nowhere near D&D complexity, but it is much deeper and better made than it looks at face value. Again, easy to grasp, but still just meaty enough to keep veterans engaged as well.

There are only two critiques I have at this stage:

  • I would like the alarm to be done in a better way (but I do not have a great suggestion either); and
  • Some of the skills/workouts are two-part – e.g. Bow Pull, where for the first part you pull with one arm, and in the second part with the other. With these, if you defeat an enemy before you have done your full workout, you basically just trained one side of your body, asymmetrically to the other. A simple fix would be to switch which side goes first every now and again.

Tips

If you want to target specific muscle groups or do certain types of workouts, you can select the Sets tab in the Set Skills menu, where you will find pre-sets that target e.g. legs, or are good for posture, or concentrate on core muscles. I find it great that apart from just min-maxing, the game gives you a really easy way to customise your adventure play-through to best fit the workout you need or want.

Do not skip the pre-workout and the post-workout stretching – these are vital for a healthy workout. And a really cool thing Ring Fit Adventure does is that the post-workout/cooldown stretching varies depending on which muscle groups you trained during that session.

Quick Play and Custom mode

I also noticed that in Quick Play mode and Custom mode, all the workout skills are already present from the start, so if the Adventure mode2 does not appeal to you, the game does not force you to unlock them for your custom training sets.

After trying out the workout options outside the main Adventure mode a tiny bit, here is what I think of them.

Simple workouts seem to boil down to how many repetitions you can do in a given amount of time, and they do not appeal to me much. Perhaps they are fun if you want to compete with friends in an (off the) couch co-op mode.

The mini-games I find quite fun; they are as entertaining as they are challenging. It may be that the novelty will wear off, but for now I am enjoying them quite a bit.

Workout Sets that target specific muscles or muscle groups are actually good and still remain quite fun, so it seems like a good choice for when you want to concentrate a bit on just one part of the body, core muscles, posture, or endurance.

Custom mode lets you assemble your own sets of workouts from the whole range of workout skills. For now I can just say they are super easy to set up and select, also from other users on the same system. I imagine these become useful later on, when you want to have more control of what you want to train that day.

Next time: first month or so.

hook out → still unsure whether I like Ring or Tipp better … although Dracaux is also growing on me slowly


  1. Ring Fit Adventure² for short ;) 

  2. I have to say, so far I am having a blast with Adventure mode though! 

Tuesday, 25 February 2020

How to Implement a XEP for Smack.

Smack is a FLOSS XMPP client library for Java and Android app development. It takes away much of the burden a developer of a chat application would normally have to carry, so the developer can spend more time working on nice stuff like features instead of having to deal with the protocol stack.

Many (80+ and counting) XMPP Extension Protocols (XEPs) are already implemented in Smack. Today I want to bring you along with me and add support for one more.

What Smack does very well is to follow the Open-Closed-Principle of software architecture. That means while Smack’s classes are closed for modification by the developer, it is pretty easy to extend Smack to add support for custom features. If Smack doesn’t fit your needs, don’t change it, extend it!

The most important class in Smack is probably the XMPPConnection, as this is where messages come from and go to. However, even more important for the developer is what is being sent.

XMPP’s strength comes from the fact that arbitrary XML elements can be exchanged by clients and servers. Heck, the server doesn’t even have to understand what two clients are sending each other. That means that if you need to send some form of data from one device to another, you can simply use XMPP as the transport protocol, serialize your data as XML elements with a namespace that you control and send it off! It doesn’t matter which XMPP server software you choose, as the server more or less just forwards the data from the sender to the receiver. Awesome!
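For illustration (the element and namespace below are made up for this post and not part of any real protocol), such a custom payload embedded in a message could look like this:

<message from="alice@example.org" to="bob@example.net" type="chat">
  <body>Hi Bob!</body>
  <my-data xmlns="urn:example:my-data:0">
    <counter>42</counter>
  </my-data>
</message>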

So let’s see how we can extend Smack to add support for a new feature without changing (and therefore potentially breaking) any existing code!

For this article, I chose XEP-0428: Fallback Indication as an example protocol extension. The goal of Fallback Indication is to explicitly mark <body/> elements in messages as fallback. For example some end-to-end encryption mechanisms might still add a body with an explanation that the message is encrypted, so that older clients that cannot decrypt the message due to lack of support still display the explanation text instead. This enables the user to switch to a better client 😛 Another example would be an emoji in the body as fallback for a reaction.

XEP-0428 does this by adding a fallback element to the message:

<message from="alice@example.org" to="bob@example.net" type="chat">
  <fallback xmlns="urn:xmpp:fallback:0"/>  <!-- THIS HERE -->
  <encrypted xmlns="urn:example:crypto">Rgreavgl vf abg n irel ybat
gvzr nccneragyl.</encrypted>
  <body>This message is encrypted.</body>
</message>

If a client or server encounters such an element, they can be certain that the body of the message is intended to be a fallback for legacy clients and act accordingly. So how to get this feature into Smack?

After the XMPPConnection, the most important types of classes in Smack are the ExtensionElement interface and the ExtensionElementProvider class. The latter defines a class responsible for deserializing, or parsing, incoming XML into an object of the former type.

The ExtensionElement is itself an empty interface in that it does not declare anything new; instead it is composed from a hierarchy of other interfaces from which it inherits its methods. One notable ancestor is NamedElement – more on that in just a second. If we start our XEP-0428 implementation by creating a class that implements ExtensionElement, our IDE would create this class body for us:

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return null;
    }

    @Override
    public String getElementName() {
        return null;
    }

    @Override
    public CharSequence toXML(XmlEnvironment xmlEnvironment) {
        return null;
    }
}

The first thing we should do is to change the return type of the toXML() method to XmlStringBuilder, as that is more performant and gains us a nice API to work with. We could also leave it as is, but it is generally recommended to return an XmlStringBuilder instead of a boring old CharSequence.

Secondly we should take a look at the XEP to identify what to return in getNamespace() and getElementName().

<fallback xmlns="urn:xmpp:fallback:0"/>
[   ^    ]      [        ^          ]
element name          namespace

In XML, the part right after the opening bracket is the element name. The namespace follows as the value of the xmlns attribute. An element that has both an element name and a namespace is called fully qualified. That’s why ExtensionElement inherits from FullyQualifiedElement. In contrast, a NamedElement only has an element name, but no explicit namespace. In good object-oriented manner, Smack’s ExtensionElement inherits from FullyQualifiedElement, which in turn inherits from NamedElement and additionally introduces the getNamespace() method.
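To visualize that, here is a simplified sketch of the hierarchy – not the literal Smack interfaces, which declare a few more members:

// Simplified sketch: the real Smack interfaces contain additional members.
public interface NamedElement {
    String getElementName();   // e.g. "fallback"
}

public interface FullyQualifiedElement extends NamedElement {
    String getNamespace();     // e.g. "urn:xmpp:fallback:0"
}

// ExtensionElement itself adds no new methods.
public interface ExtensionElement extends FullyQualifiedElement {
}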

So let’s turn our new knowledge into code!

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.util.XmlStringBuilder;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return "urn:xmpp:fallback:0";
    }

    @Override
    public String getElementName() {
        return "fallback";
    }

    @Override
    public XmlStringBuilder toXML(XmlEnvironment xmlEnvironment) {
        return null;
    }
}

Hm, now what about this toXML() method? At this point it makes sense to follow good old test driven development practices and create a JUnit test case that verifies the correct serialization of our element.

package tk.jabberhead.blog.wow.nice;

import static org.jivesoftware.smack.test.util.XmlUnitUtils.assertXmlSimilar;
import tk.jabberhead.blog.wow.nice.FallbackIndicationElement;
import org.junit.jupiter.api.Test;

public class FallbackIndicationElementTest {

    @Test
    public void serializationTest() {
        FallbackIndicationElement element = new FallbackIndicationElement();

        assertXmlSimilar("<fallback xmlns=\"urn:xmpp:fallback:0\"/>",
element.toXML());
    }
}

Now we can tweak our code until the output of toXML() is just right and we can be sure that if at some point someone starts messing with the code the test will inform us of any breakage. So what now?

Well, we said it is better to use XmlStringBuilder instead of CharSequence, so let’s create an instance. Oh! XmlStringBuilder can take an ExtensionElement as constructor argument! Let’s do it! What happens if we return new XmlStringBuilder(this); and run the test case?

<fallback xmlns="urn:xmpp:fallback:0"

Almost! The test fails, but the builder already constructed most of the element for us. It prints an opening bracket, followed by the element name and adds an xmlns attribute with our namespace as value. This is typically the “head” of any XML element. What it forgot is to close the element. Let’s see… Oh, there’s a closeElement() method that again takes our element as its argument. Let’s try it out!

<fallback xmlns="urn:xmpp:fallback:0"</fallback>

Hm, this doesn’t look right either. It’s not even valid XML! (ノಠ益ಠ)ノ彡┻━┻ Normally you’d use such a sequence to close an element which contained some child elements, but this one is an empty element. Oh, there it is! closeEmptyElement(). Perfect!

<fallback xmlns="urn:xmpp:fallback:0"/>

The finished element class now looks like this:

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.util.XmlStringBuilder;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return "urn:xmpp:fallback:0";
    }

    @Override
    public String getElementName() {
        return "fallback";
    }

    @Override
    public XmlStringBuilder toXML(XmlEnvironment xmlEnvironment) {
        return new XmlStringBuilder(this).closeEmptyElement();
    }
}

We can now serialize our ExtensionElement into valid XML! At this point we could start sending around FallbackIndications to all our friends and family by adding it to a message object and sending that off using the XMPPConnection. But what is sending without receiving? For this we need to create an implementation of ExtensionElementProvider custom to our FallbackIndicationElement. So let’s start.

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.provider.ExtensionElementProvider;
import org.jivesoftware.smack.xml.XmlPullParser;

public class FallbackIndicationElementProvider
extends ExtensionElementProvider<FallbackIndicationElement> {
    
    @Override
    public FallbackIndicationElement parse(XmlPullParser parser,
int initialDepth, XmlEnvironment xmlEnvironment) {
        return null;
    }
}

Normally implementing the deserialization part in the form of an ExtensionElementProvider is tiring enough for me to always do that last, but luckily this is not the case with Fallback Indications. Every FallbackIndicationElement always looks the same. There are no special attributes or – shudder – nested named child elements that need special treatment.

Our implementation of the FallbackIndicationElementProvider looks simply like this:

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.provider.ExtensionElementProvider;
import org.jivesoftware.smack.xml.XmlPullParser;

public class FallbackIndicationElementProvider
extends ExtensionElementProvider<FallbackIndicationElement> {
    
    @Override
    public FallbackIndicationElement parse(XmlPullParser parser,
int initialDepth, XmlEnvironment xmlEnvironment) {
        return new FallbackIndicationElement();
    }
}

Very nice! Let’s finish the element part with another JUnit test that makes sure our provider does as it should. Obviously we wrote that before writing any code, right? We can simply put this test method into the same test class as the serialization test.

    @Test
    public void deserializationTest()
throws XmlPullParserException, IOException, SmackParsingException {
        String xml = "<fallback xmlns=\"urn:xmpp:fallback:0\"/>";
        FallbackIndicationElementProvider provider =
new FallbackIndicationElementProvider();
        XmlPullParser parser = TestUtils.getParser(xml);

        FallbackIndicationElement element = provider.parse(parser);

        assertEquals(new FallbackIndicationElement(), element);
    }

Boom! Working, tested code!

But how does Smack learn about our shiny new FallbackIndicationElementProvider? Internally Smack uses a Manager class to keep track of registered ExtensionElementProviders to choose from when processing incoming XML. Spoiler alert: Smack uses Manager classes for everything!

If we have no way of modifying Smack’s code base, we have to manually register our provider by calling

ProviderManager.addExtensionProvider("fallback", "urn:xmpp:fallback:0",
new FallbackIndicationElementProvider());

Element providers that are part of Smack’s codebase, however, are registered using a providers.xml file instead, but the concept stays the same.
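For reference, such a registration entry in a provider file looks roughly like this (treat it as a sketch of the format rather than a drop-in file; the class name is of course specific to this example):

<?xml version="1.0"?>
<smackProviders>
    <extensionProvider>
        <elementName>fallback</elementName>
        <namespace>urn:xmpp:fallback:0</namespace>
        <className>tk.jabberhead.blog.wow.nice.FallbackIndicationElementProvider</className>
    </extensionProvider>
</smackProviders>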

Now when receiving a stanza containing a fallback indication, Smack will parse said element into an object that we can acquire from the message object by calling

FallbackIndicationElement element = message.getExtension("fallback",
"urn:xmpp:fallback:0");

You should have noticed by now that the element name and namespace are used and referred to in a number of places, so it makes sense to replace all the occurrences with references to constants. We will put these into the FallbackIndicationElement class, where they are easy to find. Additionally, we should provide a handy method to extract fallback indication elements from messages.

...

public class FallbackIndicationElement implements ExtensionElement {
    
    public static final String NAMESPACE = "urn:xmpp:fallback:0";
    public static final String ELEMENT = "fallback";

    @Override
    public String getNamespace() {
        return NAMESPACE;
    }

    @Override
    public String getElementName() {
        return ELEMENT;
    }

    ...

    public static FallbackIndicationElement fromMessage(Message message) {
        return message.getExtension(ELEMENT, NAMESPACE);
    }
}

Did I say Smack uses Managers for everything? Where is the FallbackIndicationManager then? Well, let’s create it!

package tk.jabberhead.blog.wow.nice;

import java.util.Map;
import java.util.WeakHashMap;

import org.jivesoftware.smack.Manager;
import org.jivesoftware.smack.XMPPConnection;

public class FallbackIndicationManager extends Manager {

    private static final Map<XMPPConnection, FallbackIndicationManager>
INSTANCES = new WeakHashMap<>();

    public static synchronized FallbackIndicationManager
getInstanceFor(XMPPConnection connection) {
        FallbackIndicationManager manager = INSTANCES.get(connection);
        if (manager == null) {
            manager = new FallbackIndicationManager(connection);
            INSTANCES.put(connection, manager);
        }
        return manager;
    }

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
    }
}

Woah, what happened here? Let me explain.

Smack uses Managers to provide the user (the developer of an application) with easy access to functionality that the user expects. In order to use some feature, the first thing the user does is to acquire an instance of the respective Manager class for their XMPPConnection. The returned instance is unique for the provided connection, meaning a different connection would get a different instance of the manager class, but the same connection will get the same instance any time getInstanceFor(connection) is called.
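In code, acquiring the manager is a one-liner (a minimal usage sketch, assuming an already established XMPPConnection named connection):

FallbackIndicationManager manager =
        FallbackIndicationManager.getInstanceFor(connection);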

Now what does the user expect from the API we are designing? Probably being able to send fallback indications and being notified whenever we receive one. Let’s do sending first!

    ...

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
    }

    public MessageBuilder addFallbackIndicationToMessage(
MessageBuilder message, String fallbackBody) {
        return message.setBody(fallbackBody)
                .addExtension(new FallbackIndicationElement());
    }

Easy!

Now, in order to listen for incoming fallback indications, we have to somehow tell Smack to notify us whenever a FallbackIndicationElement comes in. Luckily there is a rather nice way of doing this.

    ...

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
        registerStanzaListener();
    }

    private void registerStanzaListener() {
        StanzaFilter filter = new AndFilter(StanzaTypeFilter.MESSAGE, 
                new StanzaExtensionFilter(FallbackIndicationElement.ELEMENT, 
                        FallbackIndicationElement.NAMESPACE));
        connection().addAsyncStanzaListener(stanzaListener, filter);
    }

    private final StanzaListener stanzaListener = new StanzaListener() {
        @Override
        public void processStanza(Stanza packet) 
throws SmackException.NotConnectedException, InterruptedException,
SmackException.NotLoggedInException {
            Message message = (Message) packet;
            FallbackIndicationElement fallbackIndicator =
FallbackIndicationElement.fromMessage(message);
            String fallbackBody = message.getBody();
            onFallbackIndicationReceived(message, fallbackIndicator,
fallbackBody);
        }
    };

    private void onFallbackIndicationReceived(Message message,
FallbackIndicationElement fallbackIndicator, String fallbackBody) {
        // do something, eg. notify registered listeners etc.
    }

Now that’s nearly it. One last, very important thing is left to do. XMPP is known for its extensibility (for better or worse). If your client supports some feature, it is a good idea to announce this somehow, so that the other end knows about it. That way features can be negotiated so that the sender doesn’t try to use some feature that the other client doesn’t support.

Features are announced by using XEP-0115: Entity Capabilities, which is based on XEP-0030: Service Discovery. Smack supports this using the ServiceDiscoveryManager. We can announce support for Fallback Indications by letting our manager call

ServiceDiscoveryManager.getInstanceFor(connection)
        .addFeature(FallbackIndicationElement.NAMESPACE);

somewhere, for example in its constructor. Now the world knows that we know what Fallback Indications are. We should however also provide our users with the possibility to check if their contacts support that feature as well! So lets add a method for that to our manager!

    public boolean userSupportsFallbackIndications(EntityBareJid jid) 
            throws XMPPException.XMPPErrorException,
SmackException.NotConnectedException, InterruptedException, 
            SmackException.NoResponseException {
        return ServiceDiscoveryManager.getInstanceFor(connection())
                .supportsFeature(jid, FallbackIndicationElement.NAMESPACE);
    }

Done!

I hope this little article brought you some insights into the XMPP protocol and especially into the development process of protocol libraries such as Smack, even though the demonstrated feature was not very spectacular.

Quick reminder that the next Google Summer of Code is coming soon and the XMPP Standards Foundation got accepted 😉
Check out the project ideas page!

Happy Hacking!

Monday, 17 February 2020

Smack: Some more busy nights and 12 bytes of IV

In the last months I stayed up late some nights, so I decided to add some additional features to Smack.

Among the additions is support for some new XEPs, namely:

  • XEP-0249: Direct MUC Invitations
  • XEP-0422: Message Fastening
  • XEP-0424: Message Retraction
  • XEP-0420: Stanza Content Encryption

I also started working on an implementation of XEP-0425: Message Moderation, but that one is not yet finished and needs more work.

Direct MUC invitations are a method to invite users to a group chat. Smack already had support for another similar mechanism, but this one is recommended by the XMPP Compliance Suites 2020.

Message Fastening is a generalized mechanism to add information to messages. That might be a reaction, e.g. a thumbs-up that is added to a previous message.

Message Retraction is used to retract previously sent messages. Internally it is based on Message Fastening.

The Stanza Content Encryption pull request only teaches Smack what SCE elements are, but it doesn’t yet teach it how to use them. That is partly due to no E2EE specification actually using them yet. That will hopefully change soon đŸ˜‰

Anu brought up the fact that the OMEMO XEP is not totally clear on the length of initialization vectors used for message encryption. Historically most clients use 16-byte IVs, while normally you would want to use 12 bytes. Apparently some AES-GCM libraries on iOS only support a 12-byte length, so using 12 bytes is definitely desirable. Most OMEMO implementations already support receiving 12-byte as well as 16-byte IVs.

That’s why Smack will soon also start sending OMEMO messages with a 12-byte IV.
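Purely for illustration (this is generic javax.crypto usage, not Smack’s actual OMEMO code), a 12-byte GCM IV is set up like this:

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class GcmIvExample {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];      // demo key only (all zeros), never do this in real code
        byte[] iv = new byte[12];       // 96-bit IV, the length discussed above
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, iv)); // 128-bit authentication tag
        byte[] ciphertext = cipher.doFinal("hello".getBytes("UTF-8"));
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}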

Friday, 14 February 2020

Why I am not using Grindr

Grindr is proprietary software that only runs on Android and iOS. It also depends on a centralized server infrastructure that stores data in unencrypted form. The company that hosts Grindr, Amazon, is known for violating users’ privacy. Grindr also sends data to third-party websites and is known for sharing users’ HIV status without their consent. The terms of use and privacy policy are much too long (about 50 pages), so most users don’t read them. Even a user who has read only parts of those terms should become suspicious that Grindr violates their privacy and not use the service. I think that sensitive information should be visible only to the intended recipients and not to the administrators of any servers or routers, and therefore I never use Grindr.

To share such sensitive information I could only use copylefted free software such as GNUnet, which has strong privacy guarantees. In GNUnet every communication is end-to-end encrypted and metadata leakage is minimized. This is important today, when secret services such as the NSA kill based on metadata. GNUnet provides social scalability while protecting metadata, and it allows users to have multiple unlinkable Egos. It also uses public-key cryptography, which is inherently more secure than using passwords. Systems such as Alovoa still use passwords and depend on email, which is unencrypted by default. Even when used with GPG, email leaks metadata. Since GNUnet is a peer-to-peer network, no centralized servers storing the data of millions of users are needed. It also provides a replacement for centralized identity providers such as Facebook that act as a kind of password store. When you send personal data to Facebook, the NSA gets the data anyway and can abuse it for killing people. Please do not do that.

I Love Free Software on the go: the Replicant operating system in practice

On I Love Free Software Day 2020 I’d like to pay attention to and thank the Replicant operating system, which is in active development and empowers users to use Free Software on the go. As a user with a non-technical background it was an honor and a privilege to attend the Replicant Birds of a […]

I love the hidden champions

A few days ago I sent an announcement email for today’s I Love Free Software Day to a large bunch of people. Most of the remarkably many replies have been positive and a pure joy to read, but some were a bit sceptical and critical. These came from Free Software contributors who are maintaining and helping projects that they think nobody knows and sees – not because these software projects are unused, but because they are small, a building block for other, more popular applications.

When we ask people to participate in #ilovefs (this year for the 10th time in a row!) by expressing their gratitude to contributors of their favourite Free Software projects, many think about the applications they often use and come up with obvious ones like Mozilla’s Firefox and Thunderbird, LibreOffice, their Linux-based distribution, or CMSs like WordPress and Drupal. Not that I think this is not deserved, but what about the projects that actually form the foundations for these popular suites?

I researched a bit on my own system (based on Arch Linux) and checked how many packages some of the aforementioned applications depend on (including dependencies of their dependencies)1:

  • Firefox: 221
  • Thunderbird: 179
  • LibreOffice: 185
  • GIMP: 166
  • Inkscape: 164

Phew! Looking through the list of dependencies, there are dozens of programmes and libraries whose purpose I couldn’t even begin to guess. But they make a big application, be it Firefox, Thunderbird or GIMP, actually possible. Isn’t it a bit unfair that we often don’t see these small (or sometimes huge) projects and the people who take care of them?2

I decided to change that, at least for one day! I’ve analysed which packages are most used as dependencies of other packages (a similar command for Debian/Ubuntu is in the PS below):

for p in $(pacman -Q | cut -d" " -f1); do
  echo "$(pactree -r -l $p | tail -n+2 | sort | uniq | wc -l)–$p–$(pacman -Qi $p | grep "^Description" | grep -oP '(?<=: ).*')"
done | column -t -s'–' | sort -nr

Output:

1621  iana-etc                   /etc/protocols and /etc/services provided by IANA
1620  tzdata                     Sources for time zone and daylight saving time data
1620  linux-api-headers          Kernel headers sanitized for use in userspace
1620  filesystem                 Base Arch Linux files
1619  glibc                      GNU C Library
1349  gcc-libs                   Runtime libraries shipped by GCC
1287  ncurses                    System V Release 4.0 curses emulation library
1267  readline                   GNU readline library
1261  bash                       The GNU Bourne Again shell
...

As you might expect, at the very top I found a lot of GNU and Linux sub-projects, some widely known (like bash), some of which I, as more of a user than a developer, had never heard of before (like libffi). This alone has been an interesting journey during which I learnt a lot about projects and maintainers that play a crucial role on my laptop.

In the end, I decided to express my thanks today to the following projects and people:

  • The development team behind acl/attr which controls access permissions
  • The four initial creators of argon2, Jean-Philippe, Samuel, Dmitry and Daniel, for their password hashing function
  • Jan Dittberner (who also is a FSFE supporter!) and Nathan Neulinger, developers of CrackLib which checks and enforces strong passwords
  • Reuben Thomas and Dom Lachowicz for their enchant project, a wrapper for various spell checking engines
  • Maintainers of glibc and gcc, important tools for the C library and compiler
  • The HarfBuzz team, whose library shapes glyphs from Unicode text
  • The libmnl/netfilter people, who provide tools for network-related operations
  • The contributors of libxml2 for their library and tools that are crucial for the FSFE website
  • Martin Mitáš who more or less alone maintains md4c, a Markdown parser
  • Thomas Dickey who maintains ncurses which provides a text-based interface for the command line
  • Chet Ramey as representative of readline, a library for interactive user input
  • And last but not least Lasse Collin who maintains xz, a compression tool

But of course, that’s only a small fraction of the many interesting Free Software components that enable my daily work. However, if we all do the same and think about the hidden champions – not only on #ILoveFS day but beyond – we can let the humans behind them enjoy a bit more recognition for their invaluable contributions.

Happy I Love Free Software Day everyone! ❤

PS: If you want to try the same with apt (with another separator):

for p in $(dpkg --get-selections | cut -f1 | cut -d":" -f1); do
  echo "$(apt-cache rdepends $p | tr -d '|' | tail -n+3 | sort | uniq | wc -l)*$p*$(apt-cache show $p | grep -m 1 "^Description:" | grep -oP '(?<=: ).*')"
done | column -t -s'*' | sort -nr

  1. pactree -l firefox | sort | uniq ↩︎

  2. During the writing of this blog post I remembered Matthias hugging Peter Stuge for #ilovefs 2013, who also contributes to widely used Free Software projects. ↩︎

Thursday, 13 February 2020

The beginning of my Ring Fit Adventure²

This week I finally got my Ring Fit Adventure for the Nintendo Switch.

For those who do not know it yet, the Nintendo Switch is a hybrid gaming console, which can be used either hand-held or docked and connected to a TV. Ring Fit Adventure is a fitness game for it that uses the joy-cons’ motion controls in connection with a custom pilates ring and a leg strap in order to track your movement.

The waiting

I actually liked the sound of it from the very start. It sounded like exactly what I needed to trick me into starting a regular training routine and get back into shape. I already keep a workout log, and am ashamed to admit it is depressingly empty.

Ideally I would like to properly pick up rowing, but I found that my core muscles are not on par yet to keep up with others in the Ljubljana Rowing Club. At least that is my excuse …

Why did it take me so long? Well, I seriously misjudged how popular Ring Fit Adventure would be and it sold out before I could get my hands on one.

But this week, I finally got mine!

So how did it work out?

Day 1 – Getting to know the beast

The first evening I fired it up just to check it out. Did not even bother changing clothes.

After calibration – during which I got 100% push and pull strength on the ring – and a few questions, the game set my difficulty level to 14. Playing the first level of the first world proved to be pretty easy, but I was definitely moving.

I really liked that the game wants you to stretch before and after, and even rewards you for doing so. On the first level, I got to meet the main protagonist and antagonist, but there was no fighting yet.

When you finish for the day, the game also assesses how hard the workout was for you. As it turned out, it was merely a light workout for me, so it suggested raising the difficulty a bit; the next day I would start on difficulty level 16.

Then I tried the paragliding and robot-smashing mini-games, which were surprisingly fun, but also physically engaging.

First impressions:

  • very well made, both hardware and software
  • this could be fun, yay!
  • mini-games are pretty fun as well

Day 2 – Fighting through the first world

Right, so first proper day, level 16 difficulty, and I set myself to go through the whole first world, including replaying the first level. This time I changed to gym clothes – and, boy, was it a good idea!

This time, I not only needed to run, squeeze and pull the ring, but also had to fight of monsters.

The turn-based combat, where you attack and block by performing exercises like squats, overhead squeezes of the ring, knees-to-chests, and the chair yoga position, was even more engaging than I thought – both workout- and gaming-wise.

In the end, I had fun, felt engaged and challenged, and actually worked up quite a sweat. The boss battle at the end of the first world was pretty intense.

So far Ring Fit Adventure exceeded my expectations. Let us see if it keeps me engaged.

Day 3 – Things ramp up

The next day I kept the difficulty at 16, which proved to be a good idea. If the day before had been a moderate workout, this time I got a substantially harder one.

What was not a good idea was setting the alarm to an early hour. The vibration is pretty strong and made a lot of noise on the table. I fixed it by changing it to a later hour.

I managed to play through three adventure levels of the second world, one of which was the robot-smashing mini-game, before the game asked me if I had had enough. I am somewhat ashamed to admit that I did.

What I also noticed is that the cooldown stretching at the end differs depending on which muscles you worked out most during your playthrough. That makes a lot of sense, but I honestly did not expect it.

I cannot say my muscles hurt, but I definitely feel them. So far the game seems to set its difficulty really well.

So far the story is not really super gripping, but it is good and funny enough to keep me engaged and moderately entertained.

During the weekend I will probably rest, but next week, I will start it up again.

hook out → feeling fitter by the meter

Tuesday, 11 February 2020

Working with different remotes in git

One of the things that is typical when working with gitlab/github is working with different git remotes.

This is sometimes because you don't have commit access to the original repository so you fork it into your own repository and work over there, but you still want to have the original repository around so you can rebase your changes over it.

In this blog we will see how to do that with the okular repository.

First off, we start by cloning the original repository

Since we don't know the URL by memory, we go to https://invent.kde.org/kde/okular/ and press the Clone button to get a hint. If we have commit access we can use both URLs, otherwise we have to use the https one; for the sake of this post let's assume we do not have commit access.


$ git clone https://invent.kde.org/kde/okular.git
$ cd okular


Ok, at this point we have cloned the upstream Okular repository, and we can see we only have one remote, called origin


$ git remote -v
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


Now we want to do some fixes. Since we can't commit into the main repository, we need to fork; for that we press the fork button in https://invent.kde.org/kde/okular/. Once done we end up in a fork of Okular under our name, e.g. https://invent.kde.org/aacid/okular.

Now what we want is to add our fork as a remote next to the existing one, so we press the blue Clone button on our fork to get its URL (here we use the git@ one since we can always commit to our fork)


$ git remote add aacid_fork git@invent.kde.org:aacid/okular.git
$ git remote -v
aacid_fork git@invent.kde.org:aacid/okular.git (fetch)
aacid_fork git@invent.kde.org:aacid/okular.git (push)
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


So now we have a remote called aacid_fork that points to the URL of our fork. aacid_fork is the name I chose because it's easy to remember, but we could have used any name we wanted there.

Now there's several things one may want to do

Do changes in master and push them to your fork

This is really not the recommended way, but since it's what I do and it shows how to push from one branch name to another, I'll explain it.

After doing the changes and the typical git commit, we now have to push them to our aacid_fork, so we do

git push aacid_fork master:adding_since_to_function

What this does is push the local branch master to the branch named adding_since_to_function of the aacid_fork remote

Create a branch and then push that to your fork

This is the recommended way instead. First create a branch

git branch adding_since_to_function

and then change to work on that branch

git switch adding_since_to_function

After doing the changes and the typical git commit, we now have to push them to our aacid_fork, so we do

git push aacid_fork adding_since_to_function

What this does is push the local branch adding_since_to_function to a branch with the same name on the aacid_fork remote


Get a branch from someone else's remote and push to it


Sometimes some people say "hey let's work on my branch together", so you need to push not to origin, not to your fork but to someone else's fork.

Let's say you want to work on joliveira's gsoc2019_numberFormat branch, so you would need to add his remote


$ git remote add joliveira_fork git@invent.kde.org:joliveira/okular.git
$ git remote -v
aacid_fork git@invent.kde.org:aacid/okular.git (fetch)
aacid_fork git@invent.kde.org:aacid/okular.git (push)
joliveira_fork git@invent.kde.org:joliveira/okular.git (fetch)
joliveira_fork git@invent.kde.org:joliveira/okular.git (push)
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


Then we would need to tell git, hey listen, please go and read the branches that the remote I just added has

git fetch joliveira_fork

Next we have to tell git to actually give us the gsoc2019_numberFormat branch. There are lots of ways to do that; one that works is

git checkout --track joliveira_fork/gsoc2019_numberFormat

This will create a local gsoc2019_numberFormat branch from the contents of the remote branch joliveira_fork/gsoc2019_numberFormat that also "tracks" it, which means that if someone else makes changes to it and you do git pull --rebase while on your local gsoc2019_numberFormat, you'll get them.

After doing the changes and the typical git commit, we now have to push them to the joliveira_fork, so we do

git push joliveira_fork gsoc2019_numberFormat


What you don't want to do

Don't push to the master branch of your fork; it's weird. Some people do, but it's not really recommended.

Things to remember

A git remote is just another repository that happens to have "similar" code; since it's a fork, you can push to it, check out branches from it, etc.

Every time you want to get changes from a remote, remember to git fetch remote_name, otherwise you're still on the "old" snapshot from your last fetch.
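For example, to refresh your copy of the upstream repository and rebase the branch you're working on onto its master:

$ git fetch origin
$ git rebase origin/master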

When git pushing the syntax is git push remote_name local_branch_name:remote_branch_name

Bonus track: Using git mr

As shown in my previous blog post you can use git mr to easily download the code of an MR. Let's use Okular's MR #20 as an example: https://invent.kde.org/kde/okular/merge_requests/20.

You can simply do git mr 20 and it will create a local branch named mr/20 with the contents of that MR. Unfortunately, if you want to commit changes to it, you still need to use the original remote and branch name, so if you do some changes, after the git commit you should do

git push joliveira_fork mr/20:gsoc2019_percentFormat

Sunday, 09 February 2020

20.04 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.04_Release_Schedule

Dependency freeze is in ~five weeks (March 12) and Feature Freeze a week after that, so make sure you start finishing your stuff!

Saturday, 08 February 2020

From socket(2) to .onion with pf(4)

I’ve been rebuilding my IRC bouncer setup and as part of this process I’ve decided to connect to IRC via onion services where possible. This setup isn’t intended to provide anonymity, as once I’m connected I’m going to identify to NickServ anyway. I guess it provides a little protection in that my IP address shouldn’t be visible in the gap between connecting and a cloak activating, but there are so many other ways that my identity could leak.

You might wonder why I even bothered if not for anonymity. There are two reasons:

  1. to learn more about tor(1) and pf(4), and
  2. to figure out how to get non-proxy aware software to talk to onion services.

I often would find examples of socat, torsocks, etc. but none of them seemed to fit with my goal of wanting to use an onion service as if it were just another host on the Internet. By this I mean with a socket(AF_INET, SOCK_STREAM) that didn’t also affect my ability to connect to other Internet hosts.

Onion services don’t have IP addresses. They have names that look like DNS names but that are not actually in DNS. So the first problem here is that we’re not going to be able to give an onion address to the kernel; it wants an IP address. In my setup I chose 10.10.10.0/24 as a subnet whose IP addresses, when connected to, will actually connect to onion services.

In the torrc file you can use MapAddress to encode these mappings, for example:

MapAddress 10.10.10.10 ajnvpgl6prmkb7yktvue6im5wiedlz2w32uhcwaamdiecdrfpwwgnlqd.onion # Freenode
MapAddress 10.10.10.11 dtlbunzs5b7s5sl775quwezleyeplxzicdoh3cnhm7feolxmkfd42nqd.onion # Hackint
MapAddress 10.10.10.12 awwqg2ishrohngue.onion # 2600net - broken(?)
MapAddress 10.10.10.13 darksci3bfoka7tw.onion # darkscience
MapAddress 10.10.10.14 akeyxc6hie26nlfylwiuyuf3a4tdwt4os7wiz3fsafijpvbgrkrzx2qd.onion # Indymedia

Now when tor(1) is asked to connect to 10.10.10.10 it will map this to the address of Freenode’s onion service, and connect to that instead. The next part of the problem is allowing tor(1) to receive these requests from a non-proxy aware application, in my case ZNC. This setup will also need a network interface to act as the interface to tor(1). A loopback interface will suffice and it’s not necessary to add an IP address to it:

# ifconfig lo1 up

pf is OpenBSD’s firewall, which can also perform some other related functions. One such function is called divert-to. Unfortunately, there is also divert-packet, which is completely unrelated. tor(1) supports receiving packets that have been processed by a divert-to rule and this is often used for routing all traffic from a network through the Tor network. This arrangement is known as a “transparent proxy” because the application is unaware that anything is going on.

In my setup, I’m only routing traffic for specific onion services via the Tor network, but the same concepts are used.

In the torrc:

TransPort 127.0.0.1:1338
TransProxyType pf-divert

In pf.conf(5):

pass in quick on lo1 inet proto tcp all divert-to 127.0.0.1 port 1338
pass out inet proto tcp to 10.10.10.0/24 route-to lo1

and that’s it! I’m now able to connect to 10.10.10.10 from ZNC and pf will divert the traffic to tor.

On names and TLS certificates: I’m using TLS to connect to the onion services, but I’m not validating the certificates. I’ve already verified the server identities because they have the key for the onion service; the reason I’m using TLS is because I’m then presenting a client certificate to the servers (CertFP) to log in to NickServ. The TLS is there for the server’s benefit while the onion service authentication is for my benefit. You could add entries to your /etc/hosts file with mappings from irc.freenode.org to 10.10.10.10, but that seemed like a bit of a fragile arrangement. If pf or tor stop functioning correctly, then no connection is made; but if the /etc/hosts file were to be rewritten, you’d then connect over the Internet, and you’ve disabled TLS verification because you’re relying on the onion service to do that, which you’re no longer using.

On types of transparent proxy: There are a few different types of transparent proxy supported by tor. pf-divert seemed like the most appropriate one to use in my case. It’s possible that the natd(8) “protocol” referred to in the NATDPort torrc option is actually talking about divert(4) sockets which are supported in OpenBSD, and that’s another option, but it’s not clear which would be the preferred way to do it. If I had more time I’d dig into which methods are useful and which are redundant, as removing code is often a good thing to do.

Friday, 07 February 2020

Sway and the Dock station

I just moved permanently from awesome to Sway because I can barely see any difference. Really.

The whole Wayland ecosystem has improved a LOT since last time I used it. That was last year, as I give Wayland a try once a year since 2016.

However, I had to ditch a useful daemon, dockd. It automatically disables my laptop screen when I put it in the docking station, but it relies on xrandr.

What to use then?

ACPI events.

The acpid daemon can be configured to listen to ACPI events and to trigger your custom script. You just have to define which events you are interested in (wildcards are also accepted) and which script acpid should trigger when such events occur.

I used acpi_listen to catch the events which get triggered by the physical dock/undock actions:

# acpi_listen
ibm/hotkey LEN0068:00 00000080 00004010
[...]
ibm/hotkey LEN0068:00 00000080 00004011
[...]

Then, I setup an acpid listener by creating the file /etc/acpi/events/dock with the following content:

event=ibm/hotkey
action=/etc/acpi/actions/dock.sh %e

This listener will call my script only when an event of type ibm/hotkey occurs; the script then tells Sway to disable or enable the laptop screen based on the event code. Here’s my dock.sh script:

#!/bin/sh

# Find the running sway process (if any)
pid=$(pgrep '^sway$')

if [ -z "$pid" ]; then
    logger "sway isn't running. Nothing to do"
    exit
fi

# User owning the sway session
user=$(ps -o uname= -p $pid)

# $4 is the event code (the last field passed by acpid), e.g. 00004010 or 00004011 as seen with acpi_listen
case "$4" in
  00004010)
    runuser -l $user -c 'SWAYSOCK=/run/user/$(id -u)/sway-ipc.$(id -u).$(pidof sway).sock swaymsg "output LVDS-1 disable"'
    logger "Disabled LVDS-1"
    ;;
  00004011)
    runuser -l $user -c 'SWAYSOCK=/run/user/$(id -u)/sway-ipc.$(id -u).$(pidof sway).sock swaymsg "output LVDS-1 enable"'
    logger "Enabled LVDS-1"
    ;;
esac

Don’t forget to make it executable!

chmod +x /etc/acpi/actions/dock.sh

And then start the acpid daemon:

systemctl enable --now acpid

Happy docking!

Thursday, 06 February 2020

Need help porting to Go >= 1.12

At KDE we have a GitHub mirror. One of the problems about having a mirror is that people routinely try to propose Pull Requests over there, but no one is watching, so they would go stale, which is not good for anyone.

What, no one? Actually no: we have kdeclose, a bot that goes over all Pull Requests and gracefully closes them, suggesting people move the patch over to KDE infrastructure, where we are watching.

The problem is that I'm running that code on Google AppEngine and they are cutting support for the old Go version that it's using, so I need someone to help me port the code to a newer Go version.

Can anyone help me?

P.S: No, I'm not the original author of the code, it's a fork of something else, but that has not been updated either.

Update: This is now done, thanks to Daniele (see first comment). Mega fast, thanks community!

Sunday, 02 February 2020

QCA cleanup spree

The last few weeks I've done quite a bit of QCA cleanup.

Bit of a summary:
* Moved to KDE's gitlab and enabled clazy and clang-tidy continuous integration checks
* Fixed lots of crashes when copying some of the classes; it's not a normal use case, but it's supported by the API, so it should work :)
* Fixed lots of crashes caused by assuming some of the backend libraries had more features than they actually do (e.g. we thought botan would always support a given crypto algorithm, but some versions don't; now we check whether the algorithm is supported before saying it is)
* Made all the tests succeed :)
* Dropped Qt4 support
* Use override, nullptr (Laurent), various of the "sanity" QT_* defines, etc.
* botan backend now requires botan2
* Fixed most of the compile warnings

I probably also broke the OSX and Windows builds, so if you're using QCA there you should start testing it and proposing Merge Requests.

Note: My original idea was actually to kill QCA, because I started looking at it and a lot of the code looked a bit fishy, and no one wants fishy crypto code. But then I realized we use it in too many places in KDE, and I'd rather have "fishy crypto code" in one place than in lots of different places; at least this way it's easier to eventually fix it.

Monday, 27 January 2020

The Qt Company is stopping Qt LTS releases. We (KDE) are going to be fine :)

Obvious disclaimer, this is my opinion, not KDE's, not my employer's, not my parents', only mine ;)

Big news today is that Qt Long-term-supported (LTS) releases and the offline installer will become available to commercial licensees only.

Ignoring the upcoming switch to Qt 6 for now, how bad is that for us?

Let's look at some numbers from our friends at repology.

At this point we have two Qt LTS series going on, Qt 5.9 (5.9.9 since December) and Qt 5.12 (5.12.6 since November).

How many distros ship Qt 5.9.9? 0 (there are macports and slackbuilds, but neither of those seems to provide Plasma packages, so I'm ignoring them).

How many distros ship Qt 5.12.6? 5: Adélie Linux, Fedora 30, Mageia 7, OpenSuse Leap 15.2 and PCLinux OS (ALT Linux and GNU Guix also do, but they don't seem to ship Plasma). Those are some bigger names (I'd say especially Fedora and OpenSuse).

On the other hand, Fedora 28 and 29 ship some 5.12.x version but have not updated to 5.12.6. OpenSuse Leap 15.1 has a similar issue, being stuck on 5.9.7 without updating to 5.9.9, and Mageia 6 is likewise stuck on Qt 5.9.4.

Ubuntu 19.04, 19.10 and 20.04 all ship some version of Qt 5.12 (LTS), but not the latest one.

On the other hand, a few other "big" distros don't ship a Qt LTS at all: Arch and Gentoo ship 5.14, our not-distro-distro Neon is on 5.13 and so is flatpak.

As I see it, the numbers say that while some distros are indeed shipping the latest LTS release, it's far from all of them, and it looks more like opportunistic use: the LTS branch is followed for a while in the latest release of the distro, but the previous releases get abandoned at some point, so the LTS doesn't really seem to be used to its full potential.

What would happen if there was no Qt LTS?

Hard to say, but I think some of the "newer" distros would actually be shipping Qt 5.13 or 5.14, and in my book that's a good thing, moving users forward is always good.

The "already released" distros is different story, since they would obviously not be updating from Qt 5.9 to 5.14, but as we've seen it seems that most of the times they don't really follow the Qt LTS releases to its full extent either.

So all in all, I'm going to say that not having Qt LTS releases is not that bad for KDE; we've managed for a long time (remember there have only been 4 Qt LTS releases: 4.8, 5.6, 5.9 and 5.12), so we'll do mostly fine.

But what about Qt 5.15 and Qt 6, you ask!


Yes, this may actually be a problem. If all goes to plan, Qt 5.15 will be released in May and Qt 6.0 in November; that means we will likely get up to Qt 5.15.2 or 5.15.3 and then that's it, we're moving to Qt 6.0.

Obviously KDE will have to move to Qt 6 at some point, but that's going to take a while (as an example, Plasma 5 was released when Qt was at 5.3), so let's say that for a year or two we will still be using Qt 5.15 without any bugfix releases.

That can be OK if Qt 5.15 ends up being a good release, or a problem if it's a bit buggy. If it's buggy, well, then we'll have to figure out what to do, and it'll probably involve some kind of fork somewhere, be it by KDE (Qt already had that for a while in ancient history with qt-copy) or by some other trusted source. But let's hope it doesn't get to that, since it would mean that there are two sets of people fixing bugs in Qt 5.15, The Qt Company engineers and the rest of the world, and doing the same work twice is not smart.

Sunday, 26 January 2020

git mr: easily downloading gitlab merge requests

With KDE [slowly] moving to gitlab, you will probably find yourself reviewing more gitlab-based patches.

In my opinion the web UI in gitlab is miles better; the fact that it has a "merge this thing" button makes it a game changer.

Now since we are coming from phabricator you have probably used the arc patch DXXX command to download and locally test a patch.

The gitlab web UI has a link named "You can merge this merge request manually using the command line" that, if pressed, tells you to


git fetch "git@invent.kde.org:sander/okular.git" "patch-from-kde-bug-415012"
git checkout -b "sander/okular-patch-from-kde-bug-415012" FETCH_HEAD


if you want to locally test https://invent.kde.org/kde/okular/merge_requests/80

That is *horrible*

Enter git mr, a very simple script that makes it so that you only have to type


git mr 80
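For the curious, here is a minimal sketch of what such a helper can look like; the real git-mr script may differ. It assumes the remote is called origin and that the GitLab instance exposes merge requests under refs/merge-requests/<id>/head (invent.kde.org does). Saved as git-mr somewhere in your PATH, git picks it up automatically as the mr subcommand.

#!/bin/sh
# git-mr: fetch a GitLab merge request and check it out as a local branch.
# Usage: git mr <merge-request-number>
set -e
mr="$1"
if [ -z "$mr" ]; then
    echo "usage: git mr <merge-request-number>" >&2
    exit 1
fi
# GitLab publishes every merge request under refs/merge-requests/<id>/head
git fetch origin "refs/merge-requests/$mr/head:mr/$mr"
git checkout "mr/$mr"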




P.S: If you're an archlinux user you can get it from AUR https://aur.archlinux.org/packages/git-mr

P.P.S: Unfortunately it does not support pushing, so if you want to push to that mr you'll have to do some work.

Friday, 24 January 2020

Digital Photo Creation Dates

I learned something new yesterday, that probably shouldn't have shocked me as much as it did. For legacy reasons, the "creation time" in the Exif metadata attached to digital camera pictures is not expressed in absolute time, but rather in some arbitrary expression of "local" time! This caused me to spend a long evening learning how to twiddle Exif data, and then how to convince Piwigo to use the updated metadata. In case I or someone else need to do this in the future, it seems worth taking the time to document what I learned and what I did to "make things right".

The reason photo creation time matters to me is that my wife Karen and I are currently in the midst of creating a "best of" subset of photos taken on our recently concluded family expedition to Antarctica and Argentina. Karen loves taking (sometimes award-winning) nature photos, and during this trip she took thousands of photos using her relatively new Nikon COOLPIX P900 camera. At the same time, both of us and our kids also took many photos using the cameras built into our respective Android phones. To build our "best of" list, we wanted to be able to pick and choose from the complete set of photos taken, so I started by uploading all of them to the Piwigo instance I host on a virtual machine on behalf of the family, where we assigned a new tag for the subset and started to pick photos to include.

Unfortunately, to our dismay, we noted that all the photos taken on the P900 weren't aligning correctly in the time-line. This was completely unexpected, since one of the features of the P900 is that it includes a GPS chip and adds geo-tags to every photo taken, including a GPS time stamp.

Background

We've grown accustomed to the idea that our phones always know the correct time due to their behavior on the mobile networks around the world. And for most of us, the camera in our phone is probably the best camera we own. Naively, my wife and I assumed the GPS time stamps on the photos taken by the P900 would allow it to behave similarly and all our photos would just automatically align in time... but that's not how it worked out!

The GPS time stamp implemented by Nikon is included as an Exif extension separate from the "creation time", which is expressed in the local time known by the camera. While my tiny little mind revolts at this and thinks all digital photos should just have a GPS-derived UTC creation time whenever possible... after thinking about it for a while, I think I understand how we got here.

In the early days of Exif, most photos were taken using chemical processes and any associated metadata was created and added manually after the photo existed. That's probably why there are separate tags for creation time and digitization time, for example. As cameras went digital and got clocks, it became common to expect the photographer to set the date and time in their camera, and of course most people would choose the local time since that's what they knew.

With the advent of GPS chips in cameras, the hardware now has access to an outstanding source of "absolute time". But the Nikon guys aren't actually using that directly to set image creation time. Instead, they still assume the photographer is going to manually set the local time, but added a function buried in one of the setup menus to allow a one-time set of the camera's clock from GPS satellite data.

So, what my wife needs to do in the future is remember at the start of any photo shooting period where time sync of her photos with those of others is important, she needs to make sure her camera's time is correctly set, taking advantage of the function that allows here to set the local time from the GPS time. But of course, that only helps future photos...

How I fixed the problem

So the problem in front of me was several thousand images taken with the camera's clock "off" by 15 hours and 5 minutes. We figured that out by a combination of noting the amount the camera's clock skewed by when we used the GPS function to set the clock, then noticing that we still had to account for the time zone to make everything line up right. As far as I can tell, 12 hours of that was due to AM vs PM confusion when my wife originally set the time by hand, less 1 hour of daylight savings time not accounted for, plus 4 time zones from home to where the photos were taken. And the remaining 5 minutes probably amount to some combination of imprecision when the clock was originally set by hand, and drift of the camera's clock in the many months since then.

I thought briefly about hacking Piwigo to use the GPS time stamps, but quickly realized that wouldn't actually solve the problem, since they're in UTC and the pictures from our phone cameras were all using local time. There's probably a solution lurking there somewhere, but just fixing up the times in the photo files that were wrong seemed like an easier path forward.

A Google search or two later, and I found jhead, which fortunately was already packaged for Debian. It makes changing Exif timestamps of an on-disk Jpeg image file really easy. Highly recommended!
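Before touching the files Piwigo manages, a quick dry run on a spare copy of a single image is a cheap way to confirm the offset does what you expect; sample.jpg below is just a placeholder name:

jhead sample.jpg             # print the Exif data, including the current Date/Time
jhead -ta+15:05 sample.jpg   # shift the Exif timestamps forward by 15 hours and 5 minutes
jhead sample.jpg             # confirm the new Date/Time looks right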

Compounding my problem was that my wife had already spent many hours tagging her photos in the Piwigo web GUI, so it really seemed necessary to fix the images "in place" on the Piwigo server. The first problem with that is that as you upload photos to the server, they are assigned unique filenames on disk based on the upload date and time plus a random hash, and the original filename becomes just an element of metadata in the Piwigo database. Piwigo scans the Exif data at image import time and stuffs the database with a number of useful values from there, including the image creation time that is fundamental to aligning images taken by different cameras on a timeline.

I could find no Piwigo interface to easily extract the on-disk filenames for a given set of photos, so I ended up playing with the underlying database directly. The Piwigo source tree contains a file piwigo_structure-mysql.sql used in the installation process to set up the database tables that served as a handy reference for figuring out the database schema. Looking at the piwigo_categories table, I learned that the "folder" I had uploaded all of the raw photos from my wife's camera to was category 109. After a couple hours of re-learning mysql/mariadb query semantics and just trying things against the database, this is the command that gave me the list of all the files I wanted:

select piwigo_images.path into outfile '/tmp/imagefiles' from piwigo_image_category, piwigo_images where piwigo_image_category.category_id=109 and piwigo_images.date_creation >= '2019-12-14' and piwigo_image_category.image_id=piwigo_images.id;

That gave me a list of the on-disk file paths (relative to the Piwigo installation root) of images uploaded from my wife's camera since the start of this trip in a file. A trivial shell script loop using that list of paths quickly followed:

        cd /var/www/html/piwigo
        # shift the Exif timestamps of every listed image forward by 15h05m
        for i in `cat /tmp/imagefiles`
        do
                echo "$i"
                sudo -u www-data jhead -ta+15:05 "$i"
        done

At this point, all the files on disk were updated, as a little quick checking with exif and exiv2 at the command line confirmed. But my second problem was figuring out how to get Piwigo to notice and incorporate the changes. That turned out to be easier than I thought! Using the admin interface to go into the photos batch manager, I was able to select all the photos in the folder we upload raw pictures from Karen's camera to that were taken in the relevant date range (which I expressed as taken:2019-12-14..2021), then selected all photos in the resulting set, and performed action "synchronize metadata". All the selected image files were rescanned, the database got updated...

Voila! Happy wife!

Wednesday, 22 January 2020

The story of my first job in Tech Industry

The other day I was thinking about my first ever job in this industry, as a junior software engineer at the age of 20. I was doing okay with my studies at the Athens University of Applied Sciences, but I was working outside of this industry. I had to gain some working experience in the field, so I made a decision to find part-time work in a small software house. The (bad) experience and lessons learned in those couple of weeks are still with me to this day … almost 20 years later!

Introductions

I got a flyer from the job board at school and walked a couple of kilometers to the address of the place. I didn’t have a car back then (or for the next 7 years), so I had to use public transportation (bus) or walk wherever I wanted to go. I rang the doorbell around noon and went up to the second floor. There I introduced myself and asked for an opportunity to work with them. The owner/head of the software team asked me a few things and got to the technical part of the job.

  • We are working with visual studio, but we are using HTML pages as forms for our product. In a sense we have copied the Amazon model!

Impressed that I was going to work with the next Amazon, I immediately said yes to the offer.

HTML4

  • Do you know HTML ?
  • No, but I am a quick study.

He smiled at me and gave me (I think) this 800-page book about HTML4 to read.

html4book.png

HTML-4-Bible-Bryan-Pfaffenberger

He then told me:

  • Read this book and come back when you finished it.

That was Friday noon.

I spent 10 hours quickly reading the book and keeping notes. Then I made a static demo site about Milos Island, where I had spent two weeks in the summer with my girlfriend. I had photos and material to write about, so I did that as an exercise.

Monday morning, I presented him with my homework. He didn’t believe me and spent a couple of hours talking with me about HTML4, just to check that I had really made the site by reading the book he gave me. In the end he was convinced.

Visual Studio

My next assignment was to learn about Visual Basic and Visual Studio. I had a basic idea about them, but I had never worked as a professional programmer, so he prepared a few coding exercises to get me familiar with the codebase. This was my onboarding period.

  • Take this exercise and come back when you finish it. It will take you about a week.

Next day, I was again first in the office.

  • So you came back to ask for help. That is okay. You should ask for help but you need to make an effort to do it yourself.
  • I finished it, it was easy.
  • Really? Then here is your next assignment. This is more difficult. Come back when you finished it.

Next day … I was back in the office.

  • I finished it, what is next ?
  • Okay, read this today and come back tomorrow.

Read it, returned the next day.

  • Done
  • Okay, I need you to sit here and work on the next assignments. I want to see how you are working on these coding exercises for myself.
  • Okay.

Next two days, worked there on coding exercises to get familiar with their codebase. He was impressed and I was very happy.

QA

Next day (Friday):

  • You now have access to our production code. Here are your tasks; whenever you finish something I want to see it. But before all that, here is a copy of our product. Today you will test it and report any bugs that you think we need to fix.

I took this task as my personal goal to prove myself. Worked ten hours that day and made a few comments on how to improve customer experience.

I asked if I could take the CD back home with me and test it on my personal computer.

It was a windows executable and the installer was pretty decent.

Next, next, install, done.

My Windows 98 Second Edition system didn’t have enough free space on its hard disk, and I also needed to install Oracle to work on my semester lab exercises. My 8G hard disk and the gazillion floppy disks around my home office on my Pentium III were my entire kingdom back then. So I uninstalled the application and rebooted my computer.

Then something horrible happened. My computer could not start the operating system. There were indications of missing DLLs.

I re-installed (repaired) Windows and was curious about what had happened.

I re-installed the application and uninstalled it once more.
Rebooted Windows, and again: missing DLLs.

First Conflict

On Monday morning I returned to the office and explained in detail the extreme bug I had found: when a customer removed our software, it would corrupt their operating system. The majority of our customers didn’t have the technical experience to fix this problem. So I made it very clear that this was something we needed to fix ASAP, and that we should inform every customer not to remove our application and reboot their machine. I was really proud that I had found this super bug and that we were going to save our company.

And then the owner told me:

  • Our customers are paying us for installation of our software application. They are not paying us for fixing their computer problems.
  • But this is something we introduced.
  • Do not be silly, we are professionals, we do not make mistakes.
  • But …
  • No buts, this is not our problem.

Whatttt ?

First business lesson was:

  • We do not make mistakes, customers should pay us for fixing our bugs!

Fixing Bugs

The next thing was to check the installer. We noticed that a few Windows DLLs had been marked as required for our application to run. To avoid any mistakes, we copied these DLLs from the application’s CD to our customers’ Windows installations. The uninstallation process was removing everything that had been installed, so … the Windows DLLs were gone! It was a simple mistake and easy to fix: tick the correct checkbox so those files are not removed during uninstallation.

Distribution

We needed to distribute our application to all 2.000 customers all over Greece. We had to burn 2.000 physical CDs, print 2.000 CD covers, assemble 2.000 CD cases, put them in 2.000 envelopes and write 2.000 addresses on the envelopes. Then visit the local post office, pay for stamps etc. and mail 2.000 CDs to our customers’ snail-mail addresses.

We also had to provide letters of instructions:

  • Uninstall the previous version
  • Install the new version

and under no circumstances reboot your PC till the new version is up and running. Then copy your license key into the program and connect to the internet to upload your contracts/data or sync your data from the central database to your laptop/desktop.

Money

For every patch (which meant a new CD to send), our business model was to get money from our customers for our work and for the expenses of distributing these CDs around Greece. That was the business deal with our customers. Customers were paying us for our mistakes, and it could also take a week or so for them to get the fix, depending on post office delays. License keys were valid (I am not sure, but I believe) for a year, and then there was a subscription model for the patches. If customers wanted to subscribe, they had to pay us for every CD, for every patch, for every mistake. Our business model depended on that.

Second Conflict

For some reason I had opinions about this effort. I made a suggestion to use our web server (web site) to provide the patch, so the customers could download it from the internet and install it immediately, without waiting for weeks till we sent the next CD with the latest version. Also, no need to spend extra money on the post office or on burning 2.000 CDs over the weekend. Customers would still pay for the patch (our work), so this way would be best for everybody.

The owner replied that they made more money with the current system, so there was no need to make things easier or cheaper for customers, and I should keep these innovative ideas to myself.

At that point, the thought that I wasn’t working for the next Amazon came to mind. They put this extra profit above their customers’ needs.

Coding style

Finally, after my first week as an employee, I was now writing code as a software engineer. I did impressive work fixing bugs and refactoring code, and in a sense made our product better, faster and safer. I had ideas and worked closely with the senior programmer on a few things. I was doing well, working fast, learning and providing value.

I noticed a specific coding style, so I kept to it. The senior programmer could read my code and comments (I wrote a lot of comments) and vice versa. Finally I had joy in my work as a programmer.

Third Conflict

I vividly remember a specific coding issue, even 20 years after it happened. There was a form with 10 buttons; 10 clicks were the maximum possible events on this form. So I wrote a case statement with 9 events and one default. I submitted the code and the owner/head software programmer came to the office yelling at me.

  • I’ve started reviewing your code and I can not read it. Why are you writing code like this? This is shit code. Case statements!!! No no no no. I want you to write the same code as I write, so I can read/review it.
  • But your example is a nested if-then-else for 11 events and we only have 10 events there. I made a case statement of 9 events and a default. It’s better.
  • No, this is not better, it’s shit. I can not review your code. I want you to delete everything and start from the beginning. I want to read your code and think that I was writing this code instead of you.
  • I am sorry, but I think you are wrong on this. This is better, trust me. I worked closely with our senior programmer and we believe this is better.
  • No, remove everything.

Final Discussion

after a couple of hours

  • So I need to talk with you.
  • Sure, what can I do for you?
  • I think this collaboration is not working between you and us.
  • Okay, I am really sorry about that. Can I please ask what the problems are, so that I can improve in the future? This is my first job.

The truth bomb:

  • You have all these new ideas to disturb our business model and cash flow. Using the web server to publish and distribute patches? Come on, you are too young to give me advice on how to run my business. You do not know anything.
  • You made a lot of comments and suggestions about what we are doing wrong. This should never be the case, especially if you are talking to customers. We never make mistakes and we need to be paid for every customer request. I never make mistakes. I have a master’s degree in computer science and you are still a student. If something is wrong, customers should make a request and we are going to make a patch. That’s it.
  • Finally, you are writing code that I can not read/review. I am the head software engineer and I need you to write code the way I write code. You should never introduce anything new that I can not read.

Exit

After those two weeks I felt like shit. I felt like I didn’t know anything about business, but he paid me for the whole month.

After all these years, I now believe that he was afraid of my ideas: of using the internet to help our business and reduce customers’ costs. But most of all, he was afraid of new people coming into his business and writing code that he could not understand.

I made a promise to myself that day, on that last Friday of my very first job:

  • I will try always to do my best in this industry.

Almost 20 years have passed since those two weeks. I never worked as a programmer; I chose to work as a sysadmin, mostly doing operations.

Thankfully, I think I am doing well. So here’s to the next 20 years ahead.

Thank you for reading my story.

Monday, 20 January 2020

The importance of culture

Origin Post on LinkedIn, Published on January 6, 2020

osakajapan.jpg

Being abroad in Japan the last couple of weeks, I’ve noticed that the high efficiency in almost everything they do - from crossing roads to cooking and public transportation - comes from using small queues for every step of the process, reaching maximum throughput with small effort.

The culture of small batches/queues reminds me of the core principles of #DevOps as identified in the book “The Goal: A Process of Ongoing Improvement” by Eli Goldratt and of course in the “Theory of Constraints”.

Imagine applying this culture to everything you do in your life, from work to your personal life: reducing any unnecessary extra cost, reducing waste by performing Kata. Kata is about form - from dancing to creating your cloud infrastructure with reproducible daily work or routines that focus on the process of reaching your business goals.

This truly impresses me in Japanese culture, along with the respect they show to each other. You may of course notice young people riding their bicycles in the middle of the street while watching their smartphones instead of the road 😀, but the majority of people bow their heads to show respect to other people and to other people’s work or service.

We sometimes forget this simple rule in our work. Sometimes the pressure, the deadlines or the plethora of open tickets in our Jira board (or boards) makes us cranky with our colleagues. We forget to show respect for other people’s work. We forget that we need each other to deliver business value as a team.

We forget to have fun and joy. Being productive is not about closing tickets; it is about using your creativity to solve problems, or to provide a new feature or improve an old one that can make your customers happy.

It is about the feedback you get from your customers and colleagues, about the respect for your work. It is about being happy.

For the first time in my life, I took almost 30 days off work to relax, to detox (no laptop with me) and to spend some time with family and friends. To be happy. So if any colleague from work is reading this article:

  • Domo arigato

Happy new year (2020) to everybody. I wish you all good health and happiness.

PS: I am writing this article on a super-express train going to Hiroshima, at 300 km/h

Thursday, 16 January 2020

KPatience added to flathub. Which app should be next?

This week we added KPatience to flathub.

That makes for quite a few applications from KDE already in flathub.



Which one do you think we should add next?

Sunday, 12 January 2020

Magic wormhole – easiest way to transfer a file across the Internet

  • Seravo
  • 13:29, Sunday, 12 January 2020

Transferring files between two computers on the Internet is a problem as old as the Internet itself, and surprisingly hard. Sending an attachment over e-mail involves all kinds of hassle and does not work for big files. Having both the sending and the receiving party sign up for Dropbox or a similar service, or setting up your own Nextcloud server, requires an unreasonable amount of work if you simply want to transfer just one file from one computer to another. Luckily we now live in 2020, and there is a solution: Magic Wormhole, an open source tool written in Python by Brian Warner.

All you need to do is install it on both computers (e.g. apt install magic-wormhole) and run it. No user account or any other setup is required. It works across any network; there is no need for public IP addresses or anything else.

To send a file, simply run wormhole send and the file name. To receive a file, just run wormhole receive and enter the key phrase given by the sending party.

Screenshot from sending party:

$ wormhole send Maperitive-1000.zip 
Sending 3.7 MB file named 'Maperitive-1000.zip'
On the other computer, please run: wormhole receive
Wormhole code is: 7-virginia-drumbeat

Sending (->relay:tcp:magic-wormhole-transit.debian.net:4001)..
100%|████████████████████| 3.75M/3.75M [00:01<00:00, 2.74MB/s]
File sent.. waiting for confirmation
Confirmation received. Transfer complete.

Screenshot from receiving party:

$ wormhole receive
Enter receive wormhole code: 7-virginia-drumbeat
 (note: you can use <Tab> to complete words)
Receiving file (3.7 MB) into: Maperitive-1000.zip
ok? (y/N): y
Receiving (->relay:tcp:magic-wormhole-transit.debian.net:4001)..
100%|████████████████████| 3.75M/3.75M [00:02<00:00, 1.81MB/s]
Received file written to Maperitive-1000.zip

Simple and brilliant!
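As a side note, the same workflow also works for whole directories, which wormhole packs into an archive transparently on the sending side; the path below is just an example:

$ wormhole send ~/Pictures/holiday-2019/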

Sunday, 05 January 2020

Big Distro

I like to read GNU/Linux hobbyist forums from time to time. Partially to keep up with all the changes that are constantly happening within the lovely world of Free Software, but mostly because I’m just very excited about GNU/Linux. It is quite possibly the world’s biggest international collaborative effort, and that’s just mind-bogglingly cool—the idea that people from all over the world come together to make this amazing tool for everyone to freely use. And it works! Most of the time, anyway.

There is one thing that bothers me about the hobbyist forums, however, and that is:

btw i use arch

The prevalence of Arch Linux. Now I don’t actually intensely dislike Arch Linux, and this post isn’t “Ten Reasons Arch Linux Sucks”. It’s a fine distribution that gets a lot of things right for the hobbyist crowd, and I am sure that it is a technologically sound distribution. This post isn’t even about Arch Linux specifically—it is about the host of distributions with which Arch shares a lot of attention in the popularity contest. There is no immediate pattern that binds these distributions, but among them are Manjaro, Linux Mint, elementary OS, Solus, Zorin OS, Pop!_OS, NixOS, et cetera.

The crux is that I am a little sad that these distributions win out in the popularity contest. Generally speaking, these distributions serve very specific niches: A rolling release distribution model, a focus on a certain desktop environment, an experimental package manager, or some combination thereof. These distributions distinguish themselves very clearly, but it is my opinion that the best distribution distinguishes itself not in any single category, but in its general purposeness.

Or rather, that is a half-lie. General purposeness is a direct consequence of the main trait I seek in a distribution: Size. I am talking Big Distro. This is a plea for Debian, Fedora, openSUSE, and Ubuntu.

Size and general purposeness

When I talk about size, I’m not concerned about the amount of disk space the default disk image takes up. Rather, I’m honing in on a vague metric at the intersection of market share, project size, and the amount of packages. There is something that sets Debian, Fedora, and to a slightly lesser extent openSUSE and Ubuntu apart from all the other distributions—the sheer scope of these projects.

These projects are absolutely massive with hundreds of active contributors each. And the contributions aren’t just limited to packaging; the projects have people working on internationalisation, infrastructure, support, new software development, quality assurance, outreach, documentation, design, accessibility, security, and the awe-inspiring task of coordinating all of this work.

As a result of collaboration at this massive scope, these distributions have an unmatched general purposeness. Just about anything you might want to do, you can do with these distributions, and you can be fairly certain that it’s supported.

Contrast this with other distributions, and you’ll find that they have much smaller teams supporting them. Arch Linux actually stands out here in having a sizeable contributor base, but Solus has only a handful of people actively working on it. Mind, this isn’t necessarily indicative of quality, but certainly of scope.

But why does scope matter? Surely Solus is simply just good at what it does, which is providing a high-quality Budgie desktop, and doesn’t need to do anything else.

Security

The best example of scope being important is security. You simply need people working full-time on security if you’re creating a distribution that you expect people to use for their privacy-sensitive computing. Certainly if I’m relying on an operating system, I get some peace of mind in knowing that there is a team of people that is actively trying to make sure that the whole thing is and stays secure.

Security is a daunting task, because security flaws can creep in anywhere. It isn’t sufficient to simply use the latest version of all software and rely on upstream to get things right, because security flaws can be introduced by the way that distribution makers configure, combine, or distribute the software.

Although I don’t intend to name-and-shame in this article, I think that the smaller distributions do a generally less-than-stellar job in the security department. Especially noteworthy is Linux Mint containing malware for a while because their website had been compromised. The linked LWN article is worth a read, and echoes some of the sentiments I am writing here:

The Linux Mint developers have taken a certain amount of grief for this episode, and for their approach to security in general. They do not bother with security advisories, so their users have no way to know if they are affected by any specific vulnerability or whether Linux Mint has made a fixed package available. Putting the web site back online without having fully secured it mirrors a less-than-thorough approach to security in general. These are charges that anybody considering using Linux Mint should think hard about. Putting somebody’s software onto your system places the source in a position of great trust; one has to hope that they are able to live up to that trust.

It could be argued that we are approaching the end of the era of amateur distributions. Taking an existing distribution, replacing the artwork, adding some special new packages, and creating a web site is a fair amount of work. Making a truly cohesive product out of that distribution and keeping the whole thing secure is quite a bit more work. It’s not that hard to believe that only the largest and best-funded projects will be able to sustain that effort over time, especially when faced with an increasingly hostile and criminal net.

Though, in the spirit of fairness, it goes on to add:

There is just one little problem with that view: it’s not entirely clear that the larger, better-funded distributions are truly doing a better job with security. It probably is true that they are better able to defend their infrastructure against attacks, have hardware security modules to sign their packages, etc. But a distribution is a large collection of software, and few distributors can be said to be doing a good job of keeping all of that software secure.

Linux Mint is not the only distribution that has struggled with security. Manjaro let their SSL certificate expire not once, but twice, and suggested some questionable workarounds. Frustratingly, these two distributions are often recommended to beginners and laypeople.

Accessibility

Accessibility is important, and a lot of smaller distributions fail immensely on this front. Arch Linux is nearly impossible to use if you are technologically disinclined or have a disability that makes using a TTY terminal difficult. Strangely, some people see this as a strength of Arch Linux. I disagree firmly with this. At best, Arch Linux sacrifices accessibility to enhance or enable some of their niche goals. Its developers might justify this choice because non-technical and disabled people simply aren’t their target audience.

But accessibility is important, and GNOME is the only desktop environment I can think of that takes accessibility absolutely seriously, followed by KDE Plasma. Incidentally, GNOME is the default desktop environment of three of the four Big Distros, and openSUSE ships both GNOME and KDE Plasma in their installation image.

Everything else is important, too

The other aspects of scope are a little difficult to individually highlight, but I think they are all important in a project. For example, both openSUSE and Fedora use openQA to test their distributions as a cohesive whole. This completely automated suite runs hundreds of tests, and catches bugs before humans do. At the risk of saying the obvious, quality assurance makes a distribution better, and bigger distributions have more resources to do good quality assurance.

And at the risk of repeating the obvious, X makes a distribution better, and bigger distributions have more resources to do X. Substitute X with internationalisation, infrastructure, support, outreach, documentation, design, accessibility, and so forth.

In conclusion to an earlier question: Solus is good at what it does, which is providing a high-quality Budgie desktop, but it would be a lot better if it had the resources to do everything else as well.

But it doesn’t. And unless it grows to join the list of Big Distros, it won’t.

But I’m not personally affected

An obvious retort would be that—barring perhaps security—none of that matters, because I’m happy with my favourite niche distribution! And there is little that can be said in response to that individually. If you’re happy with a distribution, then keep doing what you’re doing, and don’t pay too much attention to an opinion-haver on the internet.

But I don’t think that that retort is sufficient. You see, I want Free Software to actually succeed. I want to live in a world where Free Software has won. And towards that end, I don’t think the smaller distributions are sufficient at all. A lot of work goes into creating a cohesive, all-encompassing distribution for the masses, and the likes of Linux Mint aren’t up to that task.

It’s the difference between “what would happen if I installed Linux Mint on my grandmother’s computer?” and “what would happen if I installed Linux Mint on the computer of millions of laypeople?”. Grandma is probably going to be just fine individually, but the masses are seriously underserved by an understaffed distribution.

I see GNU/Linux as the public technological infrastructure of the future. And towards that end, I think we can do better than fractured, tiny distributions that serve hyper-specific niches.


Footnote: Linux Mint is a derivative distribution

Because Linux Mint is 99% identical to Ubuntu owing to its derivative status, one might argue that it benefits both from the scope and size of the Ubuntu project as well as the additional expertise that goes into it. That would make a lot of the above arguments null and void, because you’re basically using Ubuntu.

I want to argue instead that Linux Mint loses a lot of the benefits of the scope of Ubuntu. Linux Mint has to duplicate a lot of the effort that goes into Ubuntu. It obviously needs its own infrastructure, translations, design, and so forth. But it also needs its own quality assurance and security team. By introducing small changes to the cohesive whole, Linux Mint introduces a lot of vectors for errors and security flaws.

Moreover, Linux Mint changes the desktop environment, which is like the most important component for your average user. All of the quality assurance and accessibility work that Ubuntu and others put into GNOME does not apply to Linux Mint’s Cinnamon. So on the contrary, you are not basically using Ubuntu. You are using Ubuntu with its most important component replaced. It’s the difference between getting a car from a trusted car manufacturer, or that same car, but some hobbyists changed the entire interior.

Friday, 03 January 2020

Support for REUSE (SPDX) headers in emacs-reveal

About 18 months ago, I asked: Do you teach or educate?

I continue to use and develop emacs-reveal, a FLOSS bundle to create HTML presentations based on reveal.js as Open Educational Resources (OER) from Org mode source files in GNU Emacs. Last time, I mentioned license attribution for OER figures as a tedious challenge, which I believe is now addressed properly in emacs-reveal.

Over the last couple of days, I added functionality that generates license information in my OER HTML presentations from SPDX headers embedded in source files. The FSFE project REUSE recommends the use of SPDX headers to indicate copyright and licensing information in free software projects, and, although OER are not software, I started to make my OER source files REUSE compliant. Thus, the following two simple header lines in an Org source file (e.g., the emacs-reveal howto)

#+SPDX-FileCopyrightText: 2017-2020 Jens Lechtenbörger <https://lechten.gitlab.io/#me>
#+SPDX-License-Identifier: CC-BY-SA-4.0

result in the following HTML licensing information (as part of the final slide in the howto) with RDFa markup for machine readability:

<div class="rdfa-license" about="https://oer.gitlab.io/emacs-reveal-howto/howto.html">
  <p>Except where otherwise noted, the work 
     “<span property="dcterms:title">How to create presentations with emacs-reveal</span>”,
     <span property="dc:rights">© 
        <span property="dcterms:dateCopyrighted">2017-2020</span>
        <a rel="cc:attributionURL dcterms:creator"
           href="https://lechten.gitlab.io/#me"
           property="cc:attributionName">Jens Lechtenbörger</a></span>, 
     is published under the 
     <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">
       Creative Commons license CC BY-SA 4.0</a>.</p></div>

Previously, I created the slide with license information separately, without taking copyright and license information of source files into account. Clearly, that type of redundancy was a Bad Thing, which I have now got rid of. And, since license information is generated (currently in English or German, based on the document’s language), you as future users of emacs-reveal do not need to know anything about RDFa (and next to nothing about HTML).

Happy New Year to all of you!

Sunday, 29 December 2019

Re: The Ecosystem is Moving

Moxie Marlinspike, the creator of Signal, gave a talk at 36C3 on Saturday titled “The ecosystem is moving”.

The Fahrplan description of that talk reads as follows:

Considerations for distributed and decentralized technologies from the perspective of a product that many would like to see decentralize.

Amongst an environment of enthusiasm for blockchain-based technologies, efforts to decentralize the internet, and tremendous investment in distributed systems, there has been relatively little product movement in this area from the mobile and consumer internet spaces.

This is an exploration of challenges for distributed technologies, as well as some considerations for what they do and don’t provide, from the perspective of someone working on user-focused mobile communication. This also includes a look at how Signal addresses some of the same problems that decentralized and distributed technologies hope to solve.

https://fahrplan.events.ccc.de/congress/2019/Fahrplan/events/11086.html

Basically the talk is a reiteration of some arguments from a blog post with the same title he posted back in 2016.

In his presentation, Marlinspike basically states that federated systems have the issue of being frozen in time while centralized systems are flexible and easy to change.

As an example, Marlinspike names HTTP/1.1, which was released in 1999 and on which we have been stuck ever since. While it is true that a huge part of the internet currently runs on HTTP 1.0 and 1.1, one has to consider that its successor HTTP/2.0 was only released in 2015. Four to five years is not a long time to update the entirety of the internet, especially if you consider the fact that the big browser vendors announced they would only make their browsers work with HTTP/2.0 sites when they are TLS encrypted.

Marlinspike then goes on to list four expectations that advocates of federated systems have, namely privacy, censorship resistance, availability and control. This is pretty accurate and matches my personal expectations pretty well. He then argues that Signal, as a centralized application, can fulfill those expectations just as well as, if not better than, a decentralized system.

Privacy

Privacy is often expected to be provided by the means of data ownership, says Marlinspike. As an example he mentions email. He argues that even though he is self-hosting his emails, “each and every mail has GMail at the other end”.

I agree with this observation and think that this is a real problem. But the answer to this problem would logically be that we need to increase our efforts to change that by reducing the number of GMail accounts and increasing the number of self-hosted email servers, right? This is not really an argument for centralization, where each and every message is guaranteed to have the same service at the other end.

I also agree with his opinion that a more effective tool for gaining privacy is good encryption. He of course brings up the point that email encryption is unusable (probably hinting at PGP), totally ignoring modern approaches to email encryption like autocrypt.

Censorship resistance

Federated systems are censorship resistant. At least that is the expectation that advocates of federated systems have. Every time a server gets blocked, the user simply switches to another server. The issue that Marlinspike points out is that every time this happens, the user loses their entire social graph. While this is an issue, there are solutions to this problem, one being nomadic identities. If some server goes down, the user simply migrates to another server, taking their contacts with them. Hubzilla does this, for example. There are also import/export features present in most services nowadays, thanks to the GDPR. XMPP offers such a solution using XEP-0277.

But let’s take a look at how Signal circumvents censorship, according to Marlinspike. He proudly presents Domain Fronting as the solution. With domain fronting, the client connects to some big service which is costly for a censor to block and uses it as a proxy to connect to the actual server. While this appears to be a very elegant solution, Marlinspike conceals the fact that Google and Amazon pretty quickly intervened and stopped Signal from using their domains.

With Google Cloud and AWS out of the picture, it seems that domain fronting as a censorship circumvention technique is now largely non-viable in the countries where Signal had enabled this feature.

https://signal.org/blog/looking-back-on-the-front/

Notice that the above quote was posted by Marlinspike himself more than one and a half years ago. Why exactly he still brings this up as an argument remains a mystery to me.

Update: Apparently Signal still successfully uses Domain Fronting, just with content delivery networks other than Google and Amazon.

And even if domain fronting was an effective way to circumvent censorship, it could also be applied to federated servers as well, adding an additional layer of protection instead of solely relying on it.

But what if the censor is not a foreign nation, but instead the nation where your servers are located? What if the US decides to shut down signal.org for some reason? No amount of domain fronting can protect you from police raiding your server center. Police confiscating each and every server of a federated system (or even a considerable fraction of them), on the other hand, is unlikely.

Availability

This brings us nicely to the next point on the agenda, availability.

If you have a centralized service than you want to move that centralized service into two different data centers. And the way you did that was by splitting the data up between those data centers and you just halved your availability, because the mean time between failures goes up since you have two different data centers which means that it is more likely to have an outage in one of those data centers in any given moment.

Moxie Marlinspike in his 36c3 talk “The Ecosystem is Moving”

For some reason Marlinspike confuses a decentralized system with a centralized, but distributed system. It even reads “Centralized Service” on his slides… Decentralization does not equal distribution.

A federated system would obviously not be fault free, as servers naturally tend to go down, but an outage only causes a small fraction of the network to collapse, in contrast to a total outage of a centralized system. There are even techniques to minimize the loss of functionality further, for example distributed chat rooms in the Matrix protocol.

Control

The advocates’ argument of control says that if a service provider behaves undesirably, you simply switch to another service provider. Marlinspike rightfully asks how it can then be that many people still use Yahoo as their mail provider. Indeed, that is a good question. I guess the only answer I can come up with is that most people probably don’t care enough about their email to make the switch. To be honest, email is kind of boring anyway 😉

XMPP

Next Marlinspike talks about XMPP. He (rightfully) notes that due to XMPP’s extensibility there is a morass of XEPs and that those don’t really feel consistent.

The XMPP community has already recognized the problem that comes with having that many XEPs and tries to solve it by introducing so-called compliance suites. These are annually published documents that contain a list of XEPs that are considered vitally important for clients or servers. These suites act as maps that point a way through the XEP jungle.

Next Marlinspike states that the XMPP protocol still fails to be a suitable option for mobile devices. This statement is plain wrong and was already debunked in a blog post by Daniel Gultsch back in 2016. Gultsch develops Conversations, an XMPP client for Android which is totally usable and generally has lower battery consumption than Signal. Conversations implements all of the XEPs that the compliance suites list as required for mobile clients. This shows that implementing a decent mobile client for a federated system can be done, and there is a recipe for it.

What Marlinspike could have pointed out instead is that the XMPP community struggles to come up with a decent iOS client. That would have been a fair argument, but spreading FUD about the XMPP protocol as a whole is unfair and dishonest.

Luckily the audience of the talk didn’t fully buy into Marlinspike’s weaker arguments, as demonstrated by some entertaining questions during the Q&A afterwards.

What Marlinspike is right about, though, is that developing a federated system is harder than building a centralized service, where you as the developer have control over the whole system and consequently over the users. However, this is actually the reason why we, the community of decentralized systems and federated protocols, do what we do. In the words of J.F. Kennedy, we do these things…

…not because they are easy, but because they are hard…

… or simply because they are right.

Friday, 27 December 2019

How to create an AppImage

AppImage is a brilliant way to ship executable Linux apps to every distro, without the need to re-package or re-build them. Without getting into too many details, it uses FUSE (Filesystem in Userspace) and SquashFS to bundle the app into one file.

AppImages require FUSE to run. Filesystem in Userspace (FUSE) is a system that lets non-root users mount filesystems.

So here are my personal notes on how to turn the Mozilla Firefox 68.3.0esr binary archive into an AppImage file.

download

Let’s begin by gathering all the necessary files

export VERSION=68.3.0esr

curl -sLO https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage

curl -sL https://ftp.mozilla.org/pub/firefox/releases/$VERSION/linux-x86_64/en-US/firefox-$VERSION.tar.bz2 | tar xjf -

configuration files

we need 3 files under the firefox directory:

  • AppRun (executable shell script)
  • Icon (.png,.svg,.xpm)
  • firefox.desktop (freedesktop.org desktop file)

AppRun

this is our entry point; this file will start our application inside the AppImage mount.

#!/bin/sh
cd "$(dirname "$0")"
exec ./firefox "$@"

or

cat > firefox/AppRun <<EOF
#!/bin/sh
cd "\$(dirname "\$0")"
exec ./firefox "\$@"

EOF

Don’t forget to make it executable:

chmod +x firefox/AppRun

Icon

There is an image within the firefox directory that we can use as the firefox icon:

./firefox/browser/chrome/icons/default/default128

firefox.desktop

for more info check here: Desktop Entry Specification

[Desktop Entry]
Categories=Network;WebBrowser;
Icon=/browser/chrome/icons/default/default128
Name=Mozilla Firefox
Terminal=false
Type=Application
Version=1.0

or

cat > firefox/firefox.desktop <<EOF
[Desktop Entry]
Categories=Network;WebBrowser;
Icon=/browser/chrome/icons/default/default128
Name=Mozilla Firefox
Terminal=false
Type=Application
Version=1.0
EOF

The Icon attribute must be an absolute path, not a relative one.

Perms

Give execute permission to appimagetool

chmod +x appimagetool-x86_64.AppImage

Build your AppImage

./appimagetool-x86_64.AppImage --no-appstream firefox/

Mozilla Firefox

if everything is okay, you will see this:

ls -l Mozilla_Firefox-x86_64.AppImage

and you can run it !

./Mozilla_Firefox-x86_64.AppImage

firefoxappimage.png

if you want to run a specific profile:

./Mozilla_Firefox-x86_64.AppImage --profile $(pwd)/.mozilla/firefox/ichznbon.test/

Mount

When you are running your AppImage, you will notice that there is a new mount point in your system (fusermount)

$ mount | grep -i firefox
Mozilla_Firefox-x86_64.AppImage on /tmp/.mount_MozillshcmPB type fuse.Mozilla_Firefox-x86_64.AppImage (ro,nosuid,nodev,relatime,user_id=347,group_id=347)

and if you look really carefully, you will see that it is mounted under /tmp/ !

$ ls /tmp/.mount_MozillshcmPB
application.ini     firefox          icons               libmozsqlite3.so  libplc4.so       minidump-analyzer     Throbber-small.gif
AppRun              firefox-bin      libfreeblpriv3.chk  libmozwayland.so  libplds4.so      omni.ja               updater
browser             firefox-bin.sig  libfreeblpriv3.so   libnspr4.so       libsmime3.so     pingsender            updater.ini
chrome.manifest     firefox.desktop  liblgpllibs.so      libnss3.so        libsoftokn3.chk  platform.ini          update-settings.ini
crashreporter       firefox.sig      libmozavcodec.so    libnssckbi.so     libsoftokn3.so   plugin-container
crashreporter.ini   fonts            libmozavutil.so     libnssdbm3.chk    libssl3.so       plugin-container.sig
defaults            gmp-clearkey     libmozgtk.so        libnssdbm3.so     libxul.so        precomplete
dependentlibs.list  gtk2             libmozsandbox.so    libnssutil3.so    libxul.so.sig    removed-files

That’s it !

Your first AppImage-bundled Linux package.

Docker Notes

FUSE ¡ AppImage/AppImageKit Wiki ¡ GitHub

docker run --cap-add SYS_ADMIN --cap-add MKNOD --device /dev/fuse:mrw --rm -ti ubuntu:18.04 bash


 apt-get update

 apt-get -y install curl libfuse2 file 

 export VERSION=68.3.0esr

 curl -sLO https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage

 curl -sL https://ftp.mozilla.org/pub/firefox/releases/$VERSION/linux-x86_64/en-US/firefox-$VERSION.tar.bz2 | tar xjf -

 cat > firefox/AppRun <<EOF
#!/bin/sh
cd "\$(dirname "\$0")"
exec ./firefox "\$@"
EOF

 cat > firefox/firefox.desktop <<EOF
[Desktop Entry]
Categories=Network;WebBrowser;
Icon=/browser/chrome/icons/default/default128
Name=Mozilla Firefox
Terminal=false
Type=Application
Version=1.0
EOF

 chmod +x appimagetool-x86_64.AppImage

 ./appimagetool-x86_64.AppImage --no-appstream firefox/
appimagetool, continuous build (commit 64321b7), build 2111 built on 2019-11-23 22:20:53 UTC
WARNING: gpg2 or gpg command is missing, please install it if you want to create digital signatures
Using architecture x86_64
/firefox should be packaged as Mozilla_Firefox-x86_64.AppImage
Deleting pre-existing .DirIcon
Creating .DirIcon symlink based on information from desktop file
Generating squashfs...
Parallel mksquashfs: Using 8 processors
Creating 4.0 filesystem on Mozilla_Firefox-x86_64.AppImage, block size 131072.
[===========================================================================================================================|] 1583/1583 100%

Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
    compressed data, compressed metadata, compressed fragments,
    compressed xattrs, compressed ids
    duplicates are removed
Filesystem size 71064.05 Kbytes (69.40 Mbytes)
    36.14% of uncompressed filesystem size (196646.16 Kbytes)
Inode table size 5305 bytes (5.18 Kbytes)
    60.46% of uncompressed inode table size (8774 bytes)
Directory table size 1026 bytes (1.00 Kbytes)
    54.78% of uncompressed directory table size (1873 bytes)
Number of duplicate files found 3
Number of inodes 81
Number of files 67
Number of fragments 7
Number of symbolic links  1
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 13
Number of ids (unique uids + gids) 1
Number of uids 1
    root (0)
Number of gids 1
    root (0)
Embedding ELF...
Marking the AppImage as executable...
Embedding MD5 digest
Success

Please consider submitting your AppImage to AppImageHub, the crowd-sourced
central directory of available AppImages, by opening a pull request
at https://github.com/AppImage/appimage.github.io

final notes:

 du -h Mozilla_Firefox-x86_64.AppImage
70M Mozilla_Firefox-x86_64.AppImage

 ls -l Mozilla_Firefox-x86_64.AppImage
-rwxr-xr-x 1 root root 72962088 Dec 26 21:55 Mozilla_Firefox-x86_64.AppImage

 file Mozilla_Firefox-x86_64.AppImage
Mozilla_Firefox-x86_64.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.18, stripped

 ldd Mozilla_Firefox-x86_64.AppImage
    not a dynamic executable
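One more trick that can be handy, for example inside containers where FUSE is not available: a type 2 AppImage like the one we just built can extract itself instead of mounting. A small sketch:

# unpack the bundle into ./squashfs-root/ and start Firefox from there, no FUSE needed
./Mozilla_Firefox-x86_64.AppImage --appimage-extract
./squashfs-root/AppRun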
Tag(s): AppImage, firefox

Wednesday, 25 December 2019

doh-cli, a simple DoH client

original post on LibreOps

A couple of months ago we announced a public and free DNS service, so people can have encrypted DNS in their browsers and systems. We support both DNS over HTTPS (DoH) and DNS over TLS, and our DoH service has two endpoints: the default /dns-query and /ads, which blocks trackers and ads. You can visit our page for more info.

dns

What is DNS?

The Domain Name System, in a nutshell, is you asking for directions to find where Wikipedia lives on the internet. Your browser does not know, so it asks your computer. Your computer asks your internet provider, and your internet provider asks someone else, until they find the correct answer. In the end, your browser knows where to go, and this is how you visit Wikipedia.

You need to trust all of the above parties to give you the correct answer, and every one of them knows that you are visiting Wikipedia.

doh

What is DoH (DNS Queries over HTTPS)?

It’s the implementation of RFC 8484. This is a way for your browser to ask where to find Wikipedia without exposing to everybody that you want to visit Wikipedia! You still need someone to ask for directions, but now both your question and the answer are encrypted, so you have privacy.

let’s get technical

What is RFC 8484?

In the above RFC, your client (e.g. a browser) asks your DNS server over HTTP/2, REST style. DoH clients and servers exchange application/dns-message content (question/answer); with a GET request the DNS question is encoded as a base64url string in the dns query parameter, while the answer comes back as a raw application/dns-message body. GET is the usual method, but POST is also supported on some servers.
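To see what this looks like on the wire, here is a rough sketch of a raw RFC 8484 GET request against the default LibreDNS endpoint, done by hand with common shell tools (it assumes bash’s printf plus curl, base64, tr and xxd, and asks for the A record of example.com):

# build a wire-format DNS query for "example.com A", base64url-encode it without padding,
# send it as the ?dns= parameter and dump the binary application/dns-message answer
printf '\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\x03com\x00\x00\x01\x00\x01' \
  | base64 | tr '+/' '-_' | tr -d '=' \
  | xargs -I{} curl -s -H 'accept: application/dns-message' \
      'https://doh.libredns.gr/dns-query?dns={}' \
  | xxd

In practice you let a client do this encoding for you, which is exactly what doh-cli below is about.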

doh-cli

So, today, we introduce doh-cli, a simple command line DoH client written in Python. You can use doh-cli as a binary client on your system. We support a few public DoH servers to test against and, of course, both LibreDNS DoH endpoints.

You can see the code here:

install it

It is super easy

pip install doh-cli

or if python3 is not your default python

pip3 install doh-cli

how to use it?

Just ask your favorite DoH server (default is https://doh.libredns.gr/dns-query)

eg.

doh-cli libredns.gr A

and use --help to see all the options:

doh-cli --help

Why is the default output JSON?

With modern tools and multiline output, it is best to support a serialized format so you can use doh-cli with your other tools. But if you don’t like it:

doh-cli --output plain libredns.gr A

You can see all the options and help on the project’s page.

doh-cli

Tag(s): doh-cli, DoH, python

Tuesday, 24 December 2019

ipname - hostnames for all

A few days ago, I was introduced to xip.io.
TLDR; you can have a hostname for any IP address!

ipname.me

$ dig +short @ipname.me www.192-168-1-1-ipname.me
192.168.1.1

ipname.png

project

It uses the PowerDNS pipe backend to run a (187-line) bash script that strips the IP from the hostname and returns it. This works so well that a few services depend on xip!

I was playing with the idea of using dnsdist to do that with its embedded Lua support. The proof-of-concept result is about 10 lines of Lua code.

The project is here: ipname on github

ifconfig

Not only does it return an IP address for any (dynamic) hostname you ask for, you can also use this free & public service as a what-is-my-IP service over DNS.

$ dig +short @ipname.me googleyahoo.com
116.203.115.192

PS: The code also validates the IPv4 addresses!

Tag(s): ipname, dnsdist

Monday, 23 December 2019

Don't miss the new software freedom podcast

In October we started to publish a podcast from the FSFE. In the meantime we have three episodes: with Cory Doctorow, with Lydia Pintscher, and with Harald Welte. The plan for 2020 is to publish roughly one every month.

Software freedom podcast logo

Some years ago I myself started to listen to podcasts: not so much podcasts about technology, but rather documentaries, features, and commentary about politics. I mainly did that while travelling, or when I was not tired but wanted to relax my eyes a bit.

Earlier this year Katharina Nocun encouraged us to start a podcast for the FSFE ourselves. After some consideration we decided to give it a try and cover topics about software freedom in a monthly podcast.

In October, for the International Day Against DRM (Digital Restriction Management), I was happy that Cory Doctorow, one of my favourite writers, agreed to join us as a guest. We talked with him about the difference between books and e-books with DRM, how authors and artists can make money without DRM, the security implications of DRM, regulation of the so-called "Internet of Things", and other questions related to this issue.

In November we talked with Lydia Pintscher, vice president of KDE, about the development of the KDE community, the different KDE projects, the issues they will be tackling over the next two years, how to maintain long-term sustainability in such a large project, and how she balances her long-time volunteer commitment with her day job.

The last episode for this year was dedicated to an area that was the focus of heated discussions during 2019: what the new 5G mobile phone infrastructure should look like. Several countries have taken steps to ban specific vendors and are trying to convince others to do likewise. For this topic we got Harald Welte, Free Software developer for Osmocom ("open source mobile communications"), whom many might know from his past work for gplviolations.org and the OpenMoko. Harald gave an overview of the use of Free Software in mobile phone communication, the basics of this technology, and the Huawei ban.

You can subscribe to the podcast either through the OPUS feed or the MP3 feed, so you do not miss any new episodes next year.

If you are new to podcasts: I have heard from many people that they enjoy listening to podcasts on their mobile with AntennaPod, which you can install through F-Droid.

Friday, 13 December 2019

a simple DoH/DoT using only dnsdist

In this blog post I will describe the easiest installation of a DoH/DoT VM for personal use, using dnsdist.

Next, I will present a full installation example (from scratch) with dnsdist and PowerDNS.

Server Notes: Ubuntu 18.04
Client Notes: Archlinux

Every {{ }} is a variable you need to change.
Do NOT copy/paste without making the changes.

dohdot.png

Login to VM

and become root

$ ssh {{ VM }}
$ sudo -i

from now on, we are running commands as root.

TLDR;

dnsdist DoH/DoT

If you just need your own DoH and DoT instance, then dnsdist will forward your cleartext queries to another public DNS server with the below configuration.

cat > /etc/dnsdist/dnsdist.conf <<EOF

-- resets the list to this array
setACL("::/0")
addACL("0.0.0.0/0")

addDOHLocal('0.0.0.0', '/etc/dnsdist/fullchain.pem', '/etc/dnsdist/privkey.pem')
addTLSLocal('0.0.0.0', '/etc/dnsdist/fullchain.pem', '/etc/dnsdist/privkey.pem')

newServer({address="9.9.9.9:53"})
EOF

You will, of course, need to have your certificates beforehand.
That’s it!
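Once dnsdist is (re)started with this configuration, a quick way to verify that the DoH and DoT listeners actually came up is to look at the listening sockets; a minimal check, assuming ss from iproute2:

# 443 is the DoH listener, 853 the DoT listener (opened by addDOHLocal / addTLSLocal)
ss -tlnp | egrep ':(443|853) '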

a DoH/DoT using dnsdist and powerdns

For people who need a more in-depth article, here are my notes on how to set up an entire VM from scratch with the PowerDNS recursor and dnsdist.

Let’s Begin:

Enable PowerDNS Repos

Add key

curl -sL https://repo.powerdns.com/FD380FBB-pub.asc | apt-key add -
OK

Create PowerDNS source list

cat > /etc/apt/sources.list.d/powerdns.list <<EOF
deb [arch=amd64] http://repo.powerdns.com/ubuntu bionic-dnsdist-14 main
deb [arch=amd64] http://repo.powerdns.com/ubuntu bionic-rec-42 main
EOF

cat > /etc/apt/preferences.d/pdns <<EOF
Package: pdns-* dnsdist*
Pin: origin repo.powerdns.com
Pin-Priority: 600
EOF

Update System and Install packages

apt-get update
apt-get -qy install dnsdist pdns-recursor certbot

You may see errors from PowerDNS, like:

  failed: E: Sub-process /usr/bin/dpkg returned an error code (1)

ignore them for the time being.

PowerDNS Recursor

We are going to set up our recursor first, and let’s make it a little interesting.

PowerDNS Configuration

cat > /etc/powerdns/recursor.conf <<EOF
config-dir=/etc/powerdns
hint-file=/etc/powerdns/root.hints
local-address=127.0.0.1
local-port=5353
lua-dns-script=/etc/powerdns/pdns.lua
etc-hosts-file=/etc/powerdns/hosts.txt
export-etc-hosts=on
quiet=yes
setgid=pdns
setuid=pdns
EOF

chmod 0644 /etc/powerdns/recursor.conf
chown pdns:pdns /etc/powerdns/recursor.conf

Create a custom response

This will be handy for testing our DNS from the command line.

cat > /etc/powerdns/pdns.lua <<EOF
domainame = "test.{{ DOMAIN }}"
response  = "{{ VM_ipv4.address }}"

function nxdomain(dq)
    if dq.qname:equal(domainame) then
        dq.rcode=0 -- make it a normal answer
        dq:addAnswer(pdns.A, response)
        dq.variable = true -- disable packet cache
        return true
    end
    return false
end
EOF

chmod 0644 /etc/powerdns/pdns.lua
chown pdns:pdns /etc/powerdns/pdns.lua

AdBlock

Let’s make it more interesting: block trackers and ads.

cat > /usr/local/bin/update.stevenBlack.hosts.sh <<'EOF'
#!/bin/bash

# Get StevenBlack hosts
curl -sLo /tmp/hosts.txt https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts

touch /etc/powerdns/hosts.txt

# Get diff
diff -q <(sort -V /etc/powerdns/hosts.txt | column -t) <(sort -V /tmp/hosts.txt | column -t)
DIFF_STATUS=$?

# Get Lines
LINES=`grep -c ^ /tmp/hosts.txt`

# Check & restart if needed
if [ "${LINES}" -gt "200" -a "${DIFF_STATUS}" != "0" ]; then
    mv -f /tmp/hosts.txt /etc/powerdns/hosts.txt
    chmod 0644 /etc/powerdns/hosts.txt
    chown pdns:pdns /etc/powerdns/hosts.txt
    systemctl restart pdns-recursor
fi

# vim: sts=2 sw=2 ts=2 et
EOF

chmod +x /usr/local/bin/update.stevenBlack.hosts.sh
/usr/local/bin/update.stevenBlack.hosts.sh

Be careful with copy/paste: the heredoc delimiter is quoted ('EOF') so that the $ variables and the backticks inside the script end up in the file verbatim instead of being expanded by your current shell.

OpenNic Project

Is it possible to make it even more interesting?
Yes! By using the OpenNIC Project instead of the default root name servers.

cat > /usr/local/bin/update.root.hints.sh <<'EOF'
#!/bin/bash

# Get root hints
dig . NS @75.127.96.89 | egrep -v '^;|^$' > /tmp/root.hints

touch /etc/powerdns/root.hints

# Get diff
diff -q <(sort -V /etc/powerdns/root.hints | column -t) <(sort -V /tmp/root.hints | column -t)
DIFF_STATUS=$?

# Get Lines
LINES=`grep -c ^ /tmp/root.hints`

# Check & restart if needed
if [ "${LINES}" -gt "20" -a "${DIFF_STATUS}" != "0" ]; then
    mv -f /tmp/root.hints /etc/powerdns/root.hints
    chmod 0644 /etc/powerdns/root.hints
    chown pdns:pdns /etc/powerdns/root.hints
    systemctl restart pdns-recursor
fi

# vim: sts=2 sw=2 ts=2 et
EOF

chmod +x /usr/local/bin/update.root.hints.sh
/usr/local/bin/update.root.hints.sh
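Both helper scripts only pay off if they run regularly; here is a minimal sketch of a daily schedule, assuming cron is installed on the VM (the file name and times are just an example):

cat > /etc/cron.d/pdns-refresh <<EOF
# refresh the StevenBlack blocklist and the OpenNIC root hints once a day
0  4 * * * root /usr/local/bin/update.stevenBlack.hosts.sh
30 4 * * * root /usr/local/bin/update.root.hints.sh
EOF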

dnsdist

dnsdist is a DNS load balancer with enhanced features.

dnsdist configuration

cat > /etc/dnsdist/dnsdist.conf <<EOF
-- resets the list to this array
setACL("::/0")
addACL("0.0.0.0/0")

addDOHLocal('0.0.0.0', '/etc/dnsdist/fullchain.pem', '/etc/dnsdist/privkey.pem')
addTLSLocal('0.0.0.0', '/etc/dnsdist/fullchain.pem', '/etc/dnsdist/privkey.pem')

newServer({address="127.0.0.1:5353"})
EOF

Certbot

Now it is time to get a new certificate with the help of Let’s Encrypt.

Replace {{ DOMAIN }} with your domain

We need to create the post hook first; this is what copies the certificates under the dnsdist folder and restarts the service.

cat > /usr/local/bin/certbot_post_hook.sh <<EOF
#!/bin/bash

cp -f /etc/letsencrypt/live/{{ DOMAIN }}/*pem /etc/dnsdist/
systemctl restart dnsdist.service

# vim: sts=2 sw=2 ts=2 et
EOF

chmod +x /usr/local/bin/certbot_post_hook.sh

and of course create a certbot script.

Caveat: I have the dry-run option in the below script. When you are ready, remove it.

cat > /usr/local/bin/certbot.create.sh <<'EOF'
#!/bin/bash

certbot --dry-run --agree-tos --standalone certonly --register-unsafely-without-email \
    --pre-hook 'systemctl stop dnsdist' \
    --post-hook /usr/local/bin/certbot_post_hook.sh \
    -d {{ DOMAIN }} -d doh.{{ DOMAIN }} -d dot.{{ DOMAIN }}

# vim: sts=2 sw=2 ts=2 et
EOF

chmod +x /usr/local/bin/certbot.create.sh
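Let’s Encrypt certificates expire after roughly 90 days, so renewals should happen automatically. If your certbot package does not already ship a renewal timer or cron job, a minimal sketch could look like this; certbot renew re-uses the pre/post hooks that were stored when the certificate was first issued and only acts when the certificate is close to expiry:

cat > /etc/cron.d/certbot-renew <<EOF
# try a renewal twice a day, quietly
0 3,15 * * * root certbot renew -q
EOF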

Firewall

Now open your firewall to the below TCP Ports:

ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 853/tcp

  • TCP 80 for certbot
  • TCP 443 for dnsdist (DoH) and certbot!
  • TCP 853 for dnsdist (DoT)

Let’s Encrypt

When you are ready, run the script

/usr/local/bin/certbot.create.sh
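After the certificates have landed under /etc/dnsdist/, you can also ask dnsdist to validate its own configuration before relying on it:

# parse /etc/dnsdist/dnsdist.conf, including the DoH/DoT listeners, without starting the daemon
dnsdist --check-config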

That’s it!

Client

For this blog post, my test settings are:

Domain: ipname.me
IP: 88.99.36.45

DoT - Client

From systemd 243+ there is an option to validate certificates on DoT, but:

systemd-resolved only validates the DNS server certificate if it is issued for the server’s IP address (a rare occurrence).

so it is best to use opportunistic mode:

/etc/systemd/resolved.conf 
[Resolve]
DNS=88.99.36.45
FallbackDNS=1.1.1.1
DNSSEC=no
#DNSOverTLS=yes
DNSOverTLS=opportunistic
Cache=yes
ReadEtcHosts=yes

systemctl restart systemd-resolved

Query

resolvectl query test.ipname.me 
test.ipname.me: 88.99.36.45                    -- link: eth0

-- Information acquired via protocol DNS in 1.9ms.
-- Data is authenticated: no

DoH - Client

Firefox Settings

dohdot_01.png

Firefox TRR

dohdot_02.png
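If you prefer to script the browser side instead of clicking through the settings shown above, the same Trusted Recursive Resolver (TRR) options can be set from a user.js file in your profile; a minimal sketch, where {{ PROFILE }} is your Firefox profile directory, doh.{{ DOMAIN }} points at your VM and the DoH endpoint is reachable at /dns-query (adjust the path if you configured addDOHLocal differently):

cat >> ~/.mozilla/firefox/{{ PROFILE }}/user.js <<EOF
// mode 3 = resolve names only via the Trusted Recursive Resolver (DoH), no fallback to plain DNS
user_pref("network.trr.mode", 3);
user_pref("network.trr.uri", "https://doh.{{ DOMAIN }}/dns-query");
EOF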

dnsleak

Click on DNS leak test site to verify

dohdot_03.png

Tag(s): DoH, DoT, PowerDNS, dnsdist
