Thoughts of the FSFE Community

Saturday, 20 October 2018

sharing keyboard and mouse with synergy

Evaggelos Balaskas - System Engineer | 21:34, Saturday, 20 October 2018

Synergy

Mouse and Keyboard Sharing

aka Virtual-KVM

 

Synergy-core is the open-source core of Synergy, the keyboard and mouse sharing tool. You can find the code here:

https://github.com/symless/synergy-core

or you can use the alternative, Barrier:

https://github.com/debauchee/barrier

 

Setup

My setup looks like this:

synergy setup

I bought a docking station for the company’s laptop. I want to use a single monitor, keyboard & mouse with both my desktop PC & laptop when I am at home.

My desktop PC runs Arch Linux and the company’s laptop runs Windows 10.

The keyboard and mouse are connected to the Linux machine.

Both machines are connected to the same LAN (cables into a switch).

Host

/etc/hosts

192.168.0.11    myhomepc.localdomain    myhomepc
192.168.0.12    worklaptop.localdomain  worklaptop

 

Archlinux

The desktop PC will be my virtual-KVM server, so I need to run Synergy as a server.

Configuration

If no configuration file pathname is provided then the first of the
following to load successfully sets the configuration:

${HOME}/.synergy.conf
/etc/synergy.conf

 

vim ${HOME}/.synergy.conf
section: screens
    # two hosts named: myhomepc and worklaptop
      myhomepc:
      worklaptop:
end

section: links
    myhomepc:
        left = worklaptop
end
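With only this link defined, the pointer leaves my desktop through its left edge and appears on the laptop. If you also want an explicit link back in the opposite direction (so that moving off the right edge of the laptop screen returns to the desktop), the links section can be extended as below. This is only a sketch, not copied from my running setup, so adjust it to your own screen arrangement:

section: links
    myhomepc:
        left = worklaptop
    worklaptop:
        right = myhomepc
end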

 

Testing

Run the server in the foreground:

$ synergys --no-daemon

example output:

[2018-10-20T20:34:44] NOTE: started server, waiting for clients
[2018-10-20T20:34:44] NOTE: accepted client connection
[2018-10-20T20:34:44] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:03] INFO: switch from "myhomepc" to "worklaptop" at 1919,423
[2018-10-20T20:35:03] INFO: leaving screen
[2018-10-20T20:35:03] INFO: screen "myhomepc" updated clipboard 0
[2018-10-20T20:35:04] INFO: screen "myhomepc" updated clipboard 1
[2018-10-20T20:35:10] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:35:10] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:35:10] INFO: entering screen
[2018-10-20T20:35:14] NOTE: accepted client connection
[2018-10-20T20:35:14] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:16] INFO: switch from "myhomepc" to "worklaptop" at 1919,207
[2018-10-20T20:35:16] INFO: leaving screen
[2018-10-20T20:43:13] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:43:13] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:43:13] INFO: entering screen
[2018-10-20T20:43:16] NOTE: accepted client connection
[2018-10-20T20:43:16] NOTE: client "worklaptop" has connected
[2018-10-20T20:43:40] NOTE: client "worklaptop" has disconnected
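For completeness: if the client were another Linux machine instead of Windows, it could also be tested in the foreground, passing the server hostname as argument (hypothetical here, since my client is a Windows box):

$ synergyc --no-daemon myhomepc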

 

Systemd

To run Synergy as a systemd (user) service, copy your configuration file to the /etc directory:

sudo cp ${HOME}/.synergy.conf /etc/synergy.conf

Beware: your user must have read access to the above configuration file.
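The Arch synergy package should already ship a systemd user unit for synergys; if your distribution does not provide one, a minimal unit along these lines should work. This is a sketch with assumed paths, saved as ${HOME}/.config/systemd/user/synergys.service:

[Unit]
Description=Synergy server (virtual KVM)
After=network.target

[Service]
# keep the process in the foreground so systemd can supervise it
ExecStart=/usr/bin/synergys --no-daemon --config /etc/synergy.conf
Restart=on-failure

[Install]
WantedBy=default.target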

and then:

$ systemctl start  --user synergys
$ systemctl enable --user synergys

 

Verify

$ ss -lntp '( sport = :24800 )'
State                   Recv-Q                   Send-Q                                      Local Address:Port                                      Peer Address:Port
LISTEN                  0                        3                                                 0.0.0.0:24800                                          0.0.0.0:*                      users:(("synergys",pid=10723,fd=6))
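Optionally, before touching the Windows client, you can check from another machine on the LAN that the port is reachable. Something like the following should do, assuming OpenBSD netcat on a Linux box or PowerShell on the laptop, with the hostnames from /etc/hosts above:

$ nc -vz myhomepc 24800
PS> Test-NetConnection -ComputerName myhomepc -Port 24800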

 

Win10

On Windows 10 (the Synergy client) you just need to connect to the Synergy server!

And of course create a startup shortcut:

win10 synergy

and that’s it!

 

Thursday, 18 October 2018

No activity

Thomas Løcke Being Incoherent | 11:32, Thursday, 18 October 2018

This blog is currently dead… Catch me on Twitter.

Friday, 28 September 2018

Technoshamanism in Barcelona on October 4

agger's Free Software blog | 13:00, Friday, 28 September 2018

Technoshamanism event in Barcelona, October 4.

TECHNOSHAMANS, THE HACKERS OF THE AMAZON.

TALK AND DIY RITUAL BY THE INTERNATIONAL TECHNOSHAMANISM NETWORK AT CSOA LA FUSTERIA.

On Thursday 4 October we will hold a talk at CSOA La Fusteria with members of the Technoshamanism network, an international imaginary-producing collective of artists, biohackers, thinkers, activists, indigenous people and indigenists who try to recover ideas of the future lost in the ancestral past. The talk will be led by the writer Francisco Jota-Pérez and will be followed by a DIY ritual performance in which everyone can take part.

What does the hacker movement have in common with the struggles of indigenous peoples threatened by so-called “progress”?

Technoshamanism emerged in 2014 from the confluence of several networks that grew up around the free software and free culture movement, with the aim of promoting exchanges of technologies, rituals, synergies and sensibilities with indigenous communities. The network organises gatherings and events built around DIY rituals, electronic music, permaculture and immersive processes, mixing cosmovisions and pushing for the decolonisation of thought.

According to members of the technoshamanism network: “We still enjoy temporary autonomous zones, the invention of forms of life, of art/life; we try to think about and collaborate on the reforestation of the Earth with an ancestrofuturist imaginary. Our main exercise is to create networks of the unconscious, strengthening the desire for community, as well as proposing alternatives to the ‘productive’ thinking of science and technology.”

After two international festivals held in the south of Bahia, Brazil (produced together with the indigenous association of the Pataxó people of Aldeia Velha and Aldeia Pará, near Porto Seguro, where the first Portuguese ships arrived during colonisation), they are organising the III Festival of Technoshamanism on 5, 6 and 7 October in France. We are lucky that they are visiting Barcelona to share their projects and philosophy with us. They will talk to us about spiral (non-linear) time, about collective fictions, about how monotheism and later capitalism hijacked technology, and about what we can learn from indigenous communities in terms of both survival and resistance.

We look forward to seeing you all at 20:30 at La Fusteria.
C/J. Benlluire, 212 (El Cabanyal)

In collaboration with Láudano Magazine.
www.laudanomag.com

Tuesday, 25 September 2018

Libre Application Summit 2018

TSDgeos' blog | 22:28, Tuesday, 25 September 2018

Earlier this month I attended Libre Application Summit 2018 in Denver.

Libre Application Summit wants to be a place for all the people involved in making Free Software applications to meet and share ideas, though, being organized mostly by GNOME, it had some skew towards GNOME/flatpak. There was a good KDE presence, but personally I felt that we needed more people at least from LibreOffice, Firefox and the Ubuntu/Canonical/Snap field (don't be annoyed if I failed to mention your group).

The Summit was kicked off by a motivational talk on how to make sure we ride the wave of "Open Source has won but people don't know it". I felt the content of the talk was nice, but the speaker was hit by some issues (not being able to have the laptop in front of her because of the venue's somewhat awkward layout) that sadly made her speech a bit stumbly.

Then we had a bunch of flatpak-related talks, ranging from the new freedesktop-sdk runtime to very technical stuff about how ostree works, including a talk by our own Aleix Pol on how KDE is planning to approach the release of flatpaks. Some of us ended the day having a BBQ at the house where the Codethink people were staying. Thanks for the good time!

I kicked off the next day talking about how we (lately mostly Christoph) are doing the KDE Applications releases. We got appreciation for the good work we're doing and some interesting follow-up questions, so I think people were engaged by it.

The morning continued with talks about how to engage the "non typical" free software people, designers, students, professors, etc.

After lunch we had a few talks by the Elementary people and another talk with Aleix focused on which apps will run on your Plasma devices (hint: all of them).

The day finished with a quiz sponsored by System76; it was fun!

The last day of talks started again with me speaking, this time about how amazing Qt is and why you should use it to build your apps. There I got some questions from people worried that QtWidgets was going to die; I told them not to worry, but it's clear The Qt Company needs to improve its messaging in that regard.

After that we had a talk about a Fedora project to create a distro based exclusively on flatpaks, which sounds really interesting. The last talks of LAS 2018 were about how to fight vandalism in crowdsourced data, the status of the Librem 5 (it's still a bit far away) and a very interesting one about the status of Free Software in research.

All in all I think the conference was interesting, still a bit small and too GNOME-controlled to appeal to the general crowd, but this is only the second time it has been organized, so it will improve.

I want to thank the KDE e.V. for sponsoring my flight and accommodation so I could attend this conference. Please donate, so we can continue attending conferences like this and spreading the good work of KDE :)

Tuesday, 18 September 2018

No Netflix on my Smart TV

tobias_platen's blog | 19:50, Tuesday, 18 September 2018

When I went to the Conrad store in Altona, I saw that new Sony Smart TVs come with a Netflix button on the remote.
Since I oppose DRM, I would never buy such a thing. I would only buy a Smart TV that Respects My Freedom, but such a thing does not exist.
Instead I use a ThinkPad T400 as an external TV tuner and hard disk recorder, since my old TV set does not support DVB-C. As a DVB-C tuner I use the FRITZ!WLAN Repeater DVB‑C, which works well with the free VLC player. Since it lacks a CI+ slot, it cannot decode DRM-encumbered streams.

When Netflix was founded in 1998, it initially offered only rental DVDs. Today most DVDs can be played on GNU/Linux using libdvdcss. Even if most DVDs that Netflix offers do not contain strong DRM, Netflix is still a surveillance system that requires proprietary JavaScript. When I buy DVDs, I go to a store where I can pay with cash.

The ThinkPad T400 has no HDMI port, and its “management engine” back door has been removed by installing Libreboot. Most modern Intel systems come with an HDMI port. HDMI comes with a kind of DRM called HDCP, which was developed by Intel. On newer hardware the “management engine” is used to implement video DRM. Netflix in 4K only works on Kaby Lake processors, which implement the latest version of Intel's hardware DRM.

Wednesday, 12 September 2018

Return to Limbo

David Boddie - Updates (Full Articles) | 15:57, Wednesday, 12 September 2018

After an unsatisfactory period of employment in March this year I took some time to reflect on the technologies I use, trying to learn about more systems and languages that I had only superficially explored. In the period immediately after leaving employment I wanted to try and get back into technologies like Inferno with the idea, amongst others, of porting it to an old netbook-style device I had acquired at the end of 2017. The problem was that I was in a different country to my development system, so any serious work would have to wait until I could access it again. In the meantime I thought about my brief exposure to the world of Qt in yet another work environment – specialisation can be an advantage when finding work but can also limit your opportunities to expand your horizons – and I wondered about other technologies I had missed out on over the years. I had previously explored Android development in an attempt to see what was interesting about it, but it was time to try something different.

Ready, Steady

Being very comfortable with Python as a development language during the late 1990s and 2000s had probably made me a bit lazy when it came to learning other programming languages. However, having experimented with my own flavour of Python, and being unimpressed by the evolution of the Python language, I considered making a serious attempt to learn Go with the idea that it might be a desirable skill to have. Many years ago I was asked to be a proofreader for Mark Summerfield's Programming in Go textbook but found it difficult to find the time outside work to do that, so I didn't manage to learn much of the language. This time, I worked through A Tour of Go, which I found quite rewarding even if I wasn't quite sure how to express what I wanted in some of the exercises — I often felt that I was guessing rather than figuring out exactly how a type should be defined, for example.

When the time came to pack up and return to Norway I considered whether I wanted to continue writing small examples in Go and porting some of my Python modules. It didn't feel all that comfortable or intuitive to write in Go, though I realise that it simply takes practice to gain familiarity. Despite this, it was worth taking some time to get an overview of the basics of Go for reasons that I'll get to later.

An Interlude

As mentioned earlier, I was interested in setting up Inferno on an old netbook – an Efika MX Smartbook – and had already experimented with running the system in its hosted form on top of Debian GNU/Linux. Running hosted Inferno is a nice way to get some experience using the system and seems to be the main way it is used these days. Running the system natively requires porting it to the specific hardware in use, and I knew that I could use the existing code for U-Boot, FreeBSD and Linux as a reference at the very least. So, the task would be to take hardware-specific code for the i.MX51 platform and adapt it to use the conventions of the Inferno porting layer. Building from the ground up, there are a few ports of Inferno to other ARM devices that could be used as foundations for a new port.

One of the things that made it possible for me to consider starting a port of Inferno to the Smartbook was the existing work that had gone into porting FreeBSD to the device. This included a port of U-Boot that enabled the LCD screen to be used to show the boot console instead of requiring a debug cable that is no longer available. This made it much easier to test “bare metal” programs and gain experience with modifying U-Boot, as well as using its API to access the keyboard and screen. Slowly, I built up a set of programs to work out how I might boot into Inferno while using the U-Boot API to allow the operating system to access the framebuffer console.

As you might expect, booting Inferno involves a lot of C code and some assembly language. It also involves modules written in Limbo, a language from the C programming language family that is an ancestor of Go. At a certain point in the boot process – see lecture 10 of this course for details – Inferno runs a Limbo module to perform some system-specific initialisation, and it's useful to know how to write Limbo code beyond simply copying and pasting lines of code from other ports.

Visiting Limbo

At this point the time spent learning the basics of Go proved to be useful, though maybe only in the sense that aspects of Limbo seemed more familiar to me than they might have done if I had never looked at Go. I wrote a few lines of code to help set up devices for the booting system and check that some simple features worked. By this time I had started looking at Limbo for its own sake, not just as something I had to learn to get Inferno working.

There are a lot of existing programs written in Limbo, though it's not always obvious where to find them. The standard introductions, A Descent into Limbo by Brian Kernighan and The Limbo Programming Language by Dennis Ritchie contain example programs, but many practical programs can be found in the Inferno repository itself inside the appl directory, which is where the sources reside for Limbo applications and libraries. I linked to some other resources in my first article about Inferno.

While there are a few resources already available, I wanted to experiment a bit on my own and get a feel for the language. As a result I started to write a collection of small example programs that tested my intuitions about how certain features of the language worked. Over time it became more natural to write programs in Limbo, especially if they used features like threads and channels to delegate work and coordinate how it is performed. Since Limbo was inspired by communicating sequential processes and designed with threading in mind, threads are a built-in feature of the language, so using them is fairly painless compared to their counterparts in languages like Python and C. Using channels to communicate between threads is fairly intuitive, though it can take time to become accustomed to the syntax and control structures.
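To give a flavour of what this looks like in practice, here is a small example of the kind of program I found myself writing, reconstructed from memory rather than lifted from a tested source, so treat it as a sketch: a worker thread is spawned and the main thread exchanges values with it over a pair of channels.

implement Square;

include "sys.m";
    sys: Sys;
include "draw.m";

Square: module
{
    init: fn(ctxt: ref Draw->Context, argv: list of string);
};

# receive numbers on req and send their squares back on resp
squarer(req: chan of int, resp: chan of int)
{
    for(;;) {
        n := <-req;
        resp <-= n * n;
    }
}

init(ctxt: ref Draw->Context, argv: list of string)
{
    sys = load Sys Sys->PATH;

    req := chan of int;
    resp := chan of int;
    spawn squarer(req, resp);

    i: int;
    for(i = 1; i <= 5; i++) {
        req <-= i;
        sys->print("%d squared is %d\n", i, <-resp);
    }
}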

Outside the Inferno

While Limbo's natural environment is Inferno, the ability to run Inferno as a hosted environment in another operating system makes it possible to experiment with the language fairly conveniently. However, if you want to edit programs in Inferno's graphical environment then you might find it takes some effort to adapt to it, especially if you already have a favourite editor or integrated development environment. As a result, it might be desirable to edit programs in the host operating system and copy them into the hosted Inferno environment. To enable more rapid prototyping I created a tool to automate the process of transferring and compiling a Limbo file to a standalone application, but the way it works is a bit more complicated than that sounds.

Inspired by Chris Double's article on Bundling Inferno Applications and the resources he drew from, the tool I wrote automates the process of building a custom hosted Inferno installation. This seems excessive for the purpose of building a standalone application, though it is important to realise that Limbo programs are relying on features of the Inferno platform, such as the Dis virtual machine, even when they are running on another operating system. By controlling exactly what is built we can ensure that we only include features that a standalone application will need — we can also run strip on the executable afterwards to reduce its size even further.

With a tool to make standalone executables that can run on the host system, it becomes interesting to think about how we might combine the interesting features of Limbo (and its own packaged Inferno) with the libraries and services provided by the host operating system. However, that's a topic for another article.

Categories: Limbo, Inferno, Free Software

Tuesday, 11 September 2018

Fab Lab-enabled Humanitarian Aid in India

Blog – Think. Innovation. | 08:36, Tuesday, 11 September 2018

Since June 2018 the state of Kerala in India has endured massive flooding, as you may have read in the news. This article contains a brief summary of what the international Fab Lab Community has been doing so far (early September 2018) to help people recover.

Note 1: I am writing this from my point of view, from what I remember happening. In case you have any additions or corrections, contact me.

Note 2: Unfortunately the conversations are taking place in a closed Telegram group, making it difficult for other people to read back on what happened, study it and learn from it. An attempt was made to move the conversation to the Fab Lab Forum, but it was not successful.

It started with my search for non-chemical plywood in The Netherlands, for which I contacted many of the Fab Labs in the Benelux. In response, someone gave me an invite link to a Telegram group consisting of Fabbers from all over the world. I joined the group and started following the conversation. Soon people from Fab Labs in India reported on the flooding and asked the international Fab Lab Community for ideas and help. A separate Telegram group, “Fab for Kerala”, was created and several dozen people joined.

It was interesting to follow the progress of the conversation. At first the ideas were all over the place, as if ‘we’ were the only people going to provide assistance to the affected people in Kerala. Luckily that soon changed towards recognizing the unique strengths and position of the local Fab Labs, understanding that we are operating not only in a very complex physical situation on the flood-affected ground, but also in a complex landscape of organizations providing help. The need to get into contact with these organizations was understood and steps were taken.

There was also a call to ‘go out there and find out what is going on locally’ and to ‘get an overview of the data’ first, before anything could be done. Local Fabbers responded that it was very difficult to first find out what was going on, and that we should just get started providing for basic needs. These two strands came together, with some people putting forward ideas of practical things that could be done and others sharing what they had heard that people needed.

The ideas that came forward were amongst others:

  • Building DIY gravity lamps so people who do not have electricity can have light at night.
  • Building temporary houses for people who lost their house altogether.
  • Various water filtration systems so people can make their own drinking water.

The needs that came forward were amongst others:

  • A way to detect snakes in the houses that were flooded: as the water subsided, snakes hid in moist dark places and people got bitten.
  • Innovative ways to quickly clean a house of the mud left behind after the water subsided.
  • A quick way to give people an alternative form of shelter, as they were moving from the centralized, government-issued shelters back to their own neighbourhoods.

The idea of temporary housing and the need for alternative shelter came together, and there was a brainstorm on how to proceed. With the combined brainpower of the group, the conclusion was quickly drawn that building family-size domes was the best way to go.

Various Fab Labs in India then made practical arrangements for where to source materials, how to transport them and where to build 5 domes. A volunteer from Fab Lab Kerala got the government interested and secured a commitment that, once the domes proved a success, the government would finance building more of them.

Various volunteers from these local Fab Labs then set out to get funding for building the 5 domes. This proved to be a challenging task, as it appeared to be very difficult and costly to move money into India. I do not really understand the specifics of the difficulties, but understood that the government of India actively restricts money coming into the country. A pragmatic solution was found, one which relies a lot on trust and a ‘social contract’ between the members of the Fab Labs.

At this moment volunteers from the Vigyan Ashram Fab Lab (in India, but outside of Kerala) have constructed ‘dome kits’ for the skeletons. These kits are being transported to Fab Lab Kerala so that construction can begin. In the meantime, practical challenges are being faced, such as which material to use for the ‘walls’ of the domes.

One of the volunteers involved in building the domes has indicated that he will send out regular updates, so hopefully I can soon link to those from here.

I have been following this group in awe of the amazing willingness and organizing capacity of the international Fab Lab Community.

Hopefully this initiative will continue and people in need will get help. The shift in direction that I have seen, from being ‘all things to everyone’ towards a focus on the Fab Labs’ unique position and core competences, has, I feel, been vital. As is the intention to communicate and coordinate with the other organizations active in the area.

Maybe this could be the dawn of a new category of humanitarian aid and humanitarian development, with Fab Lab and Maker values at its core. Not to ‘disrupt and replace’, but to provide a different perspective and to bring new value, fresh ideas and solutions.

Diderik

P.S.:

Pieter van der Hijden has made an effort to gather information resources relevant to this initiative. For the complete overview and the list of resources, see the international Fab Lab forum.

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo from article “For Kerala’s flood disaster, we have ourselves to blame” under fair use.

The Fairy Tale of the Unicorn and the Zebras

Blog – Think. Innovation. | 06:58, Tuesday, 11 September 2018

Once upon a time hippies were roaming the world. They were a happy bunch, living in freedom and spiritual abundance. A small group of them lived in a place called California, or to be more specific, an area that had the appearance of a lush valley. In that valley life was good. A new kind of technology called computers meant that entrepreneurial life flourished and people had steady jobs. But the computers were big, clunky and expensive. They were exclusive to the big corporations, governments and universities who had the space for them, both physically and in their budgets.

But the hippies saw this as unfair and were convinced that such a powerful technology should be in the hands of the people. One person in particular had this vision of democratizing technology. His name was Steve Jobs, and he later became an iconic figure, maybe more famous than Robin Hood. Steve found a group of geeky hobbyists who were building their own computers from electronics available at the local store. These people were also called ‘nerds’, and Steve befriended them. One nerd called Steve Wozniak was particularly bright, although not able to talk freely, just answering specific questions that someone asked him.

Jobs saw the genius of Wozniak and the possibility to make his vision a reality. Together they created a company to put these ‘personal’ computers in the hands of the many. This company became hugely successful, changing the way we perceive and use computers for the better, forever. An entirely new phenomenon of personal computing devices was created, with countless possibilities and functions, taking over many tasks and creating new ones in every business and home on the planet. And that made Jobs and Wozniak rich men, worshipped all over the world.

Over the years this new kind of vision and entrepreneurship transformed into ‘start-up culture’ and the lush valley got the name Silicon Valley. And everybody in the world wanted to create Silicon Valley in their backyard, copying the start-up culture to breed heroic entrepreneurs who would save the world and make a lot of money while doing so.

And everybody lived happily ever after. Or so was the idea…

In the years that followed something unthinkable happened, however. Slowly but surely the hippies’ vision of a better world through technology was taken over by a dark force led by the Venture Capital Overlords. These VC Overlords were mingling among the hippies and the nerds, slowly influencing them in such a way that they did not even notice. The VC Overlords were sent by Wall Street Empire to extract capital for the already extraordinary wealthy 1%. Smartly talking about ‘radical innovation’, ‘disruption’ and the ‘free market’ the VC Overlords made everyone believe in a story that resulted in extraction and destruction. They convinced start-up founders of the necessary pursuit of the mythical Unicorn, where competition was for losers and creating a monopoly the only prize. Start-ups were meant to become Unicorns, or die trying. And start-up founders believed the myth and did everything they could to make it to the Unicorn.

But not everybody. A small group of good people saw through the false promise of the Unicorn and understood that something terrible had happened. The VC Overlords did more harm than good, and the 99% of people were not better off. So they decided to do something about it. They studied the castle the Overlords had created and pinpointed its shortcomings. They decided to do away with the Unicorn and proposed a different kind of animal for the start-up founders to pursue: the Zebra. This animal was not the mythical and rare creature that the Unicorn was, and not everything had to be sacrificed to become the Zebra. In fact, every start-up could be a Zebra, as Zebras are social animals that live in groups and benefit from each other’s presence. Zebras balance doing good and making a profit. The story of the Zebra was a much-needed alternative to the one of the VC Overlords, and the Zebra movement, called Zebras Unite, quickly gained traction.

However, something went unnoticed by the good people of Zebras Unite. In their study of the VC Overlords’ castle they overlooked something fundamental. They did not go into the dungeons of the castle to understand its foundations, a centuries-old heritage from what later became the Wall Street Empire. These foundations are debt-based fiat money and the investment-based corporate charter, resulting in the endless-growth trap. The new farm of Zebras Unite was being built on the old foundations of the Wall Street Empire and the VC Overlords. Fundamentally, the system did not change. Despite all the good people and good intentions, the Zebras could sooner or later grow a horn and start to look more and more like that damned creature, the Unicorn…

How will this story end?

Will the good people of Zebras Unite find out and correct their mistake in time, so the people can live happily ever after?

– Diderik

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo by Awesome Content.

Monday, 10 September 2018

Nextcloud 14 and Video Verification

Free Software – Frank Karlitschek_ | 21:03, Monday, 10 September 2018

Today the Nextcloud community released Nextcloud 14. This release comes with a ton of improvements in the areas of User Experience, Accessibility, Speed, GDPR compliance, Two-Factor Authentication, Collaboration, Security and many other things. You can find an overview here.

But there is one feature I want to highlight because I find it especially noteworthy and interesting. Some people ask us why we are doing more than the classic file sync and share. Why do we care about Calendar, Contacts, Email, Notes, RSS Reader, Deck, Chat, Video and audio calls and so on?

It all fits together

The reason is that I believe a lot of these features belong together. There is a huge benefit in an integrated solution. This doesn’t mean that everyone needs and wants all the features. This is why we make it possible to switch each of them off, so that you and your users have only the functionality available that you really want. But there are huge advantages to deep integration. This is very similar to the way KDE and GNOME integrate all applications together on the desktop, or how Office 365 and Google Suite integrate cloud applications.

The why of Video Verification

The example I want to talk about for this release is Video Verification. It is a solution for a problem that was unsolved until now.

Let’s imagine you have a very confidential document that you want to share with one specific person and only this person. This can be important for lawyers, doctors or bank advisors. You can send the sharing link to the email address you have for this person, but you can’t be sure that it reaches this person and only exactly this person. You don’t know whether the email is seen by the mail server admin, the kid who plays with the smartphone, the spouse, or a hacker who has hijacked the PC or the mail account of the recipient. The document is transmitted via encrypted HTTPS, of course, but you don’t know who is on the other side. Even if you send the password via another channel, you can’t have 100% certainty.

Let’s see how this is done in other cases.

TLS solves two problems for HTTPS. The data transfer is encrypted with strong encryption algorithms, but this is not enough. Additionally, certificates are used to make sure that you are actually talking to the right endpoint on the other side of the HTTPS connection. It doesn’t help to securely communicate with what you think is your bank but is actually an attacker!

With GPG-encrypted emails, the encryption is done with strong and proven algorithms. But additional key signing is needed to make sure that the key is owned by the right person.

This second part, the verification of the identity of the recipient, has been missing from file sync and share until now. Video verification solves that.

How it works

I want to share a confidential document with someone. In the Nextcloud sharing dialog I type in the email address of the person and activate the option ‘Password via Talk’; then I can set a password to protect the document.

The recipient gets a link to the document by email. Once the person clicks on the link, they see a screen that asks for a password. They can click on the ‘request password’ button, and a sidebar opens which initiates a Nextcloud Talk call to me. I get a notification about this call in my web interface and via my Nextcloud desktop client, or, most likely to get my attention, my phone rings because the Nextcloud app on my phone got a push notification. I answer my phone and have an end-to-end encrypted, peer-to-peer video call with the recipient of the link. I can verify that this is indeed the right person, maybe because I know the person or because the person holds up a personal picture ID. Once I’m sure it is the right person, I tell them the password to the document. They type in the password and have access to the document.

This procedure is of course over the top for normal, standard shares. But if you are dealing with very confidential documents, because you are a doctor, a lawyer, a bank or a whistleblower, then this is the only way to make sure that the document reaches the right person.

I’m happy and a bit proud that the Nextcloud community is able to produce innovative features like this that don’t even exist in proprietary solutions.

As always, all the server software, the mobile apps and the desktop clients are 100% open source, free software and can be self-hosted by everyone. Nextcloud is a fully open source, community-driven project without a contributor agreement, closed-source or proprietary extensions, or an enterprise edition.

You can get more information on nextcloud.com or contribute at github.com/nextcloud


Sunday, 09 September 2018

The Elephant in the Room

Paul Boddie's Free Software-related blog » English | 16:14, Sunday, 09 September 2018

I recently had my attention drawn towards a blog article about the trials of Free Software development by senior Python core developer, Brett Cannon. Now, I agree with the article’s emphasis on being nice to other people, and I sympathise with those who feel that their community-related activities are wearing them down. However, I would like to point out some aspects of his article that fall rather short of my own expectations about what Free Software, or “open source” as he calls it, should be about.

I should perhaps back up a little and mention where this article was found, which was via the “Planet Python” blog aggregator site. I do not read Planet Python, either in my browser or using a feed reader, any more. Those who would create some kind of buzz or energy around Python have somehow managed to cultivate a channel where it seems that almost every post is promoting something. I might quickly and crudely categorise the posts as follows:

  • “Look at our wonderful integrated development environment which is nice to Python (that is written in Java)! (But wouldn’t you rather use the Java-related language we are heavily promoting instead?)”
  • Stub content featuring someone’s consulting/training/publishing business.
  • Random “beginner” articles either parading the zealotry of the new convert or, of course, promoting someone’s consulting/training/publishing business.

Maybe such themes are merely a reflection of attitudes and preoccupations held amongst an influential section of the Python community, and perhaps there is something to connect those attitudes with the topics discussed below. I do recall other articles exhorting Python enthusiasts to get their name out there by doing work on “open source”, with the aim of getting some company’s attention by improving the software that company has thrown over the wall, and “Python at <insert company name>” blogging is, after all, a common Planet Python theme.

Traces of the Pachyderm

But returning to the article in question, if you read it with a Free Software perspective – that is to say that you consciously refer to “Free Software”, knowing that “open source” was coined by people who, for various reasons, wanted another term to use – then certain things seem to stand out. Most obviously, the article never seems to mention software freedom: it is all about “having fun”, attracting contributors to your projects, giving and receiving “kindnesses”, and participating in “this grand social experiment we call open source”. It is almost as if the Free Software movement and the impetus for its foundation never took place, or if it does have a place in someone’s alternative version of history, then in that false view of reality Richard Stallman was only motivated to start the GNU project because maybe he wanted to “have fun hacking a printer”.

Such omissions are less surprising if you have familiarity with attitudes amongst certain people in various Free Software communities – those typically identifying as “open source”, of course – who bear various grudges against the FSF and Richard Stallman. In the Python core development community, those grudges are sometimes related to some advice given about GPL-compatible licensing back when CPython was changing custodian and there had been concerns, apparently expressed by the entity being abandoned by the core developers, that the original “CWI licence” was not substantial enough. We might wonder whether grudges might be better directed towards those who have left CPython with its current, rather incoherent, licensing paper trail.

A Different Kind of Free

Now, as those of us familiar with the notion of Free Software should know, it is “a matter of freedom, not price”. You can very well sell Free Software, and nobody is actually obliged to distribute their Free Software works at no cost. In fact, the advice from those who formulated the very definition of Free Software is this:

Distributing free software is an opportunity to raise funds for development. Don’t waste it!

Of course, there are obligations about providing the source code for software already distributed in executable form and limitations about the fees or charges to be imposed on recipients, but these do not compel no-cost sharing, publication or distribution. Meanwhile, the Open Source Definition, for those who need an “open source” form of guidance, states the following:

The license shall not require a royalty or other fee for such sale.

This appearing, rather amusingly, in a section entitled “Free Redistribution” where “Free” apparently has the same meaning as the “Free” in Free Software: the label that the “open source” crowd were so vehemently opposed to. It is also interesting that the “Source Code” section of the definition also stipulates similar obligations to those upheld by copyleft licences.

So, in the blog article in question, it is rather interesting to see the following appear:

While open source, by definition, is monetarily free, that does not mean that the production of it is free.

Certainly, the latter part of the sentence is true, and we will return to that in a moment, but the former part is demonstrably false: the Open Source Definition states no such thing. In fact, it states that “open source” is not obliged to cost anything, which as the practitioners of logic amongst us will note is absolutely not the same thing as obliging it to always be cost-free.

Bearing the Cost

Much of the article talks about the cost of developing Free Software from the perspective of those putting in the hours to write, test, maintain and support the code. These “production” costs are acknowledged while the producers are somehow shackled to an economic model – one that is apparently misinformed, as noted above – that demands that the cost of all this work be zero to those wanting to acquire it.

So, how exactly are the production costs going to be met? One of the most useful instruments for doing so has apparently been discarded, and I imagine that a similarly misguided attitude lingers with regard to supporting Free Software produced under such misconceptions. Indeed, much of the article focuses on doing “free work”, that of responding to requests, dealing with feedback, shepherding contributions, and the resulting experience of being “stressed by strangers”.

Normally, when one hears of something of this nature taking place, when the means to live decently and to control one’s own life is being taken away from people, there is a word that springs to mind: exploitation. From what we know about certain perspectives about Free Software and “open source”, it is hardly a surprise that the word “exploitation” does not appear in the article because such words are seen by some as “political”, where “political” takes on the meaning of “something raising awkward ethical questions” that if acknowledged and addressed appropriately would actually result in people not being exploited.

But there is an ideological condition which prevents people from being “political”. According to those with such a condition, we are not supposed to embarrass those who could help us deal with the problems that trouble us because that might be “impolite”, and it might also be questioning just how they made their money, how badly they may have treated people on their way to the top, and whether personal merit had less to do with their current status than good fortune and other things that undermine the myth of their own success. We are supposed to conflate money and power with merit or to play along convincingly enough at least as long as the wallets of such potential benefactors are open.

So there is this evasion of the “political” and a pandering to those who might offer validation and maybe even some rewards for all the efforts that are being undertaken as long as their place is not challenged. And what that leaves us with is a catalogue of commiseration, where one can do no more than appeal to those in the same or similar circumstances to be nicer to each other – not a bad idea, it must be said – but where the divisive and exploitative forces at work will result in more conflict over time as people struggle even harder to keep going.

When the author writes this…

Remember, open source is done by people giving something away for free because they choose to; you could say you’re dealing with a bunch of digital hippies.

…we should also remember that “open source” is also done by people who will gladly take those things and still demand more things for free, these being people who will turn a nice profit for themselves while treating others so abominably.

Selective Sustainability

According to the article “the overall goal of open source is to attract and retain people to help maintain an open source project while enjoying the experience”. I cannot speak for those who advocate “open source”, but this stated goal is effectively orthogonal to the aim of Free Software, which is to empower users by presenting them with the means to take control of the software they use. By neglecting software freedom, the article contemplates matters of sustainability whilst ignoring crucial factors that provide the foundations of sustainability.

For instance, there seems to be some kind of distinction being drawn between Free Software projects that people are paid to work on (“corporate open source”) and those done in their own time (“community open source”). This may be a reflection of attitudes within companies: that there are some things that they may use but which, beyond “donations” and letting people spend a portion of their work time on it, they will never pay for. Maybe such software does not align entirely with the corporate goals and is therefore “something someone else can pay for”, like hospitals, schools, public services, infrastructure, and all the other things that companies of a certain size often seem to be unwilling to fund as they reduce their exposure to taxation.

Free Software, then, becomes almost like the subject of charity. Maybe the initiator of a project will get recognised and hired for complementing and enhancing a company’s proprietary product, just like the individual described in the article’s introduction whose project was picked up by the author’s employer. I find it interesting that the author notes how important people are to the sustainability of a project but then acknowledges that the project illustrating his employer’s engagement with “open source” could do just fine without other people getting involved. Nothing is said about why that might be the case.

So, with misapprehensions about whether anyone can ask for money for their Free Software work, plus cultural factors that encourage permissive licensing, “building a following” and doing things “for exposure”, and with Free Software being seen as something needing “donations”, an unsustainable market is cultivated. Those who wish to find some way of funding their activities must compete with people being misled into working for free. And it goes beyond whether people can afford the time: time is money, as they say, and the result may well be that people who have relatively little end up subsidising “gifts” for people who are substantially better off.

One may well be reminded of other exploitative trends in society where the less well-off have to sacrifice more and work harder for the causes of “productivity” and “the economy”, with the real beneficiaries being the more well-off looking to maximise their own gains and optimise their own life-enriching experiences. Such trends are generally not regarded as sustainable in any way. Ultimately, something has to give, as history may so readily remind us.

Below the Surface

It is certainly important to make sure people keep wanting to do an activity, whether that is Free Software development or anything else, but having enough people who “enjoy doing open source” is far from sufficient to genuinely sustain a Free Software project. It is certainly worthwhile investigating the issues that determine whether people derive enjoyment from such work or not, along with the issues that cause stress, dissatisfaction, disillusionment and worse.

But what good is it if no-one deals with these issues? When taking “a full month off annually from volunteering” is seen as the necessary preventative medicine to avoid “potential burnout”, and when there is even such a notion as “open source detox”, does it not indicate that the symptoms may be seeing some relief but the cause remains untreated? The author of the article seems to think that the main source of misery is the way people treat each other:

It all comes down to how people treat each other in open source.

That in itself is something of a superficial diagnosis given that some people may not experience random angry people criticising their work at all, and yet they may be dissatisfied with their situation nevertheless. Others may experience bad interactions, but these might be the result of factors that are not so readily acknowledged. I do not condone behaviour that might be offensive, let alone abusive, but when people react strongly in their interactions with others, they may be doing so as the consequence of what they perceive as ill-treatment or even a form of oppression or betrayal.

There is much talk of kindness, and I cannot exactly disagree with the recommendation that people be kind to each other. But I also have the feeling that another story is not being told, one of how people with a level of power and influence choose to discharge their responsibilities. And in the context of Python, the matter of Python 3 is never far away. People may have received the “gift” of Python, but they have invested in it, too. In a way, this goes beyond any reciprocation of a mere gift because this investment is also a form of submission to the governance of the technology, as well as a form of validation of it that persuades others of its viability and binds those others to its governance, too.

What then must someone with a substantial investment in that technology think when presented with something like the “Python 2.7 Countdown” clock? Is it a helpful tool for technological planning or a way of celebrating and trivialising disruption to widespread investment in, and commitment to, a mature technology? What about the “Python 3 Statement” with projects being encouraged to pledge to drop support for Python 2 and to deliberately not maintain any such support beyond the official “end of life” date? Is it an encouraging statement of enthusiasm or another way of passive-aggressively shaming those who would continue to use and support Python 2?

I accept that it would be unfair to demand that the Python core developers be made to continue to support Python 2. But I also think it is unfair to see destructive measures being taken to put Python 2 “beyond use”, the now-familiar campaigns of inaccurate or incorrect information to supposedly stir people into action to port their software to Python 3, the denial of the name “Python” to anyone who might step up and continue to support Python 2, the atmosphere of hostility to those who might take on that role. And, well, excuse me if I cannot really take the following statement seriously based on the strategic choices of the Python core developers:

And then there’s the fact that your change may have just made literally tons of physical books in bookstores around the world obsolete; something else I have to consider.

It is intriguing that there is an insistence that people not expect anything when they do something for the benefit of another, that “kindnesses” are more appropriate than “favours”:

I switched to using kindnesses because being kind in the cultures I’m familiar with has no expectation of something in return.

Aside from the fact that it becomes pretty demotivating to send fixes to projects and expect nothing to ever happen to them, to take an example of the article’s author, which after a while amounts to a lot of wasted time and effort, I cannot help but observe that returning the favour was precisely what the Python core developers expected when promoting Python 3. From there, one cannot help but observe that maybe there is one rule for one group and one rule for another group in the stratified realm of “open source”.

The Role of the Elephant

In developing Free Software and acknowledging it as such, we put software freedom – the elephant in this particular room – at the forefront of our efforts. Without it, as we have seen, the narrative is weaker, people’s motivations seem less convincing or unfathomable, and suggestions for improving everybody’s experience, although welcome, fail to properly grasp some of the actual causes of dissatisfaction and unhappiness. This because the usual myths of efficiency, creative exuberancy, and an idealised “gift culture” need to be conjured up to either explain people’s behaviour or to motivate it, the latter often in a deliberately exploitative way.

It is, in fact, software freedom that gives Python 2 users any hope for their plight, even though many of them may be dissatisfied and some of them may end up committing to other languages instead of Python 3 in future. By emphasising software freedom, they and others may be educated about their right to control their technological investment, and they may be reminded that in seeking assistance to exercise that control, they might be advised to pay others to sustain their investment. At no point does the narrative need to slip off into “free stuff”, “gifts” and the like.

Putting software freedom at the centre of Free Software activities might also promote a more respectful environment. When others are obliged to uphold end-user freedoms, they might already be inclined to think about how they treat other people. We have seen a lot written about interpersonal interactions, and it is right to demand that people treat each other with respect, but maybe such respect needs to be cultivated by having people think about higher goals. And maybe such respect is absent if those goals are deliberately ignored, focusing people to consider only each individual transaction in isolation and to wonder why everyone acts so selfishly.

So instead of having an environment where a company might be looking for people to do free work so that they can seal it up, sell a proprietary product to hapless end-users, treat the workers like “digital hippies”, and thus exploit everyone involved, we invoke software freedom to demand fairness and respect. A culture of respecting the rights of others should help participants realise that they have a collective responsibility, that everyone is in it together, that the well-being of others does not come at the cost of each participant’s own well-being.

I realise that some of the language used above is “political” for some, but when those who object to “political” language perpetuate ignorance of the roots of Free Software and marginalise such movements for social change, they also perpetuate a culture of exploitation, whether they have this as their deliberate goal or not. This elephant has been around for some time, and having a long memory as one might expect, it stands as a witness to the perils of neglecting the ethical imperatives for what we do as Free Software developers.

It is, of course, possible to form a more complete, more coherent picture of how Free Software development occurs and how sustainability in such endeavours might be achieved, but evidently this remains out of reach for those still wishing to pretend that there is no elephant in the room.

Monday, 03 September 2018

Sustainable Computing

Paul Boddie's Free Software-related blog » English | 20:47, Monday, 03 September 2018

Recent discussions about the purpose and functioning of the FSFE have led me to consider the broader picture of what I would expect Free Software and its developers and advocates to seek to achieve in wider society. It was noted, as one might expect, that as a central component of its work the FSFE seeks to uphold the legal conditions for the use of Free Software by making sure that laws and regulations do not discriminate against Free Software licensing.

This indeed keeps the activities of Free Software developers and advocates viable in the face of selfish and anticompetitive resistance to the notions of collaboration and sharing we hold dear. Advocacy for these notions is also important to let people know what is possible with technology and to be familiar with our rich technological heritage. But it turns out that these things, although rather necessary, are not sufficient for Free Software to thrive.

Upholding End-User Freedoms

Much is rightfully made of the four software freedoms: to use, share, study and modify, and to propagate modified works. But it seems likely that the particular enumeration of these four freedoms was inspired (consciously or otherwise) by those famously stated by President Franklin D. Roosevelt in his 1941 “State of the Union” address.

Although some of Roosevelt’s freedoms are general enough to be applicable in any number of contexts (freedom of speech and freedom from want, for instance), others arguably operate on a specific level appropriate for the political discourse of the era. His freedom from fear might well be generalised to go beyond national aggression and to address the general fears and insecurities that people face in their own societies. Indeed, his freedom of worship might be incorporated into a freedom from persecution or freedom from prejudice, these latter things being specialised but logically consequent forms of a universal freedom from fear.

But what might end-users have to fear? The list is long indeed, but here we might as well make a start. They might fear surveillance, the invasion of their privacy and of being manipulated to their disadvantage, the theft of their data, their identity and their belongings, the loss of their access to technology, be that through vandalism, technological failure or obsolescence, or the needless introduction of inaccessible or unintuitive technology in the service of fad and fashion.

Using technology has always entailed encountering risks, and the four software freedoms are a way of mitigating those risks, but as technology has proliferated it would seem that additional freedoms, or additional ways of asserting these freedoms, are now required. Let us look at some areas where advocacy and policy work fail to reach all by themselves.

Cultivating Free Software Development

Advocating for decent laws and the fair treatment of Free Software is an essential part of the work of organisations like the FSFE. But there also has to be Free Software out in the wider world to be treated fairly, and here we encounter another absent freedom. Proponents of the business-friendly interpretation of “open source” insist that Free Software happens all by itself, that somewhere someone will find the time to develop a solution that is ripe for wider application and commercialisation.

Of course, this neglects the personal experience of any person actually doing Free Software development. Even if people really are doing a lot of development work in their own time, playing out their roles precisely as cast in the “sharing economy” (which seems to be more about wringing the last drops of productivity out of the lower tiers of the economy than about anyone in the upper tiers actually participating in any “sharing”), it is rather likely that someone else is really paying their bills, maybe an employer who pays them to do something else during the day. These people squeeze their Free Software contributions in around the edges, hopefully not burning themselves out in the process.

Freedom from want, then, very much applies to Free Software development. For those who wish to do the right thing and even get paid for it, the task is to find a sympathetic employer. Some companies do indeed value Free Software enough to pay people to develop it, maybe because those companies provide such software themselves. Others may only pay people as a consequence of providing non-free software or services that neglect some of the other freedoms mentioned above. And still others may just treat Free Software as that magical resource that keeps on providing code for nothing.

Making sure that Free Software may actually be developed should be a priority for anyone seriously advocating Free Software adoption. Otherwise, it becomes a hypothetical quantity: something that could be used for serious things but might never actually be observed in such forms, easily dismissed as the work of “hobbyists” and not “professionals”, never mind that the same people can act in either capacity.

Unfortunately, there are no easy solutions to suggest for this need. It is fair to state that with a genuine “business case”, Free Software can get funded and find its audience, but that often entails a mixture of opportunism, the right connections, and an element of good fortune, as well as the mindset needed to hustle for business that many developers either do not have or do not wish to cultivate. It also assumes that all Free Software worth funding needs to have some kind of sales value, whereas much of the real value in Free Software is not to be found in things that deliver specific solutions: it is in the mundane infrastructure code that makes such solutions possible.

Respecting the User

Those of us who have persuaded others to use Free Software have not merely been doing so out of personal conviction that it is the ethically-correct thing for us and those others to use. There are often good practical reasons for using Free Software and asserting control over computing devices, even if it might make a little more work for us when things do not work as they should.

Maybe the risks of malware or experience of such unpleasantness modifies attitudes, combined with a realisation that not much help is actually to be had with supposedly familiar and convenient (and illegally bundled) proprietary software when such malevolence strikes. The very pragmatism that Free Software advocates supposedly do not have – at least if you ask an advocate for proprietary or permissively-licensed software – is, in fact, a powerful motivation for them to embrace Free Software in the first place. They realise that control is an illusion without the four software freedoms.

But the story cannot end with the user being able to theoretically exercise those freedoms. Maybe they do not have the time, skills or resources to do so. Maybe they cannot find someone to do so on their behalf, perhaps because nobody is able to make a living performing such services. And all the while, more software is written, deployed and pushed out globally. Anyone who has seen familiar user interfaces becoming distorted, degraded, unfamiliar, frustrating as time passes, shaped by some unfathomable agenda, knows that only a very well-resourced end-user would be able to swim against such an overpowering current.

To respect the user must involve going beyond acknowledging their software freedoms and also acknowledge their needs: for secure computing environments that remain familiar (even if that seems “boring”), that do not change abruptly (because someone had a brainwave in an airport lounge waiting to go to some “developer summit” or other), that allow sensible customisation that can be reconciled with upstream improvements (as opposed to imposing a “my way or the highway”, “delete your settings” ultimatum). It involves recognising their investment in the right thing, not telling them that they have to work harder, or to buy newer hardware, just to keep up.

And this also means that the Free Software movement has to provide answers beyond those concerning the nature of the software. Free Software licensing does not have enough to say about privacy and security, let alone how those things might be upheld in the real world. Yet such concerns do impact Free Software developers, meaning that some kinds of solutions do exist that might benefit a wider audience. Is it not time to deliver things like properly secure communications where people can trust the medium, verify who it is that sends them messages, ignore the time-wasters, opportunists and criminals, and instead focus on the interactions that are meaningful and important?

And is it not time that those with the knowledge and influence in the Free Software realm offered a more coherent path to achieving this, instead of all the many calls for people to “use encryption” only to be presented with a baffling array of options and a summary that combines “it’s complicated” with “you’re on your own”? To bring the users freedom from the kind of fear they experience through a lack of privacy and security? It requires the application of technical knowledge, certainly, but it also requires us to influence the way in which technology is being driven by forces in wider society.

Doing the Right Thing

Free Software, especially when labelled as “open source”, often has little to say about how the realm of technology should evolve. Indeed, Free Software has typically reacted to technological evolution, responding to the demands of various industries, but not making demands of its own. Of course, software in itself is generally but a mere instrument to achieve other things, and there are some who advocate a form of distinction between the specific ethics of software freedom and ethics applying elsewhere. For them, it seems to be acceptable to promote “open source” while undermining the rights and freedoms of others.

Our standards should be far higher than that! Although there is a logical argument to not combine other considerations with the clearly-stated four software freedoms, it should not stop us from complementing those freedoms with statements of our own values. Those who use or are subject to our software should be respected and their needs safeguarded. And we should seek to influence the development of technology to uphold our ideals.

Let us consider a mundane but useful example. The World Wide Web has had an enormous impact on society, providing people with access to information, knowledge, communication, services on a scale and with a convenience that would have been almost unimaginable only a few decades ago. In the beginning, it was slow (due to bandwidth limitations, even on academic networks), it was fairly primitive (compared to some kinds of desktop applications), and it lacked support for encryption and sophisticated interactions. More functionality was needed to make it more broadly useful for the kinds of things people wanted to see using it.

In the intervening years, a kind of “functional escalation” has turned it into something that is indeed powerful, with sophisticated document rendering and interaction mechanisms, perhaps achieving some of the ambitions of those who were there when the Web first gathered momentum. But it has caused a proliferation of needless complexity, as sites lazily call out to pull down megabytes of data to dress up significantly smaller amounts of content, as “trackers” and “analytics” are added to spy on the user, as absurd visual effects are employed (background videos, animated form fields), with the user’s computer now finding it difficult to bear the weight of all this bloat, and with that user struggling to minimise their exposure to privacy invasions and potential exploitation.

For many years it was a given that people would, even should, upgrade their computers regularly. It was almost phrased as a public duty by those who profited from driving technological progress in such a selfish fashion. As is frequently the case with technology, it is only after people have realised what can be made possible that they start to consider whether it should have been made possible at all. Do we really want to run something resembling an operating system in a Web browser? Is it likely that this will be efficient or secure? Can we trust the people who bring us these things, their visions, their competence?

The unquestioning proliferation of technology poses serious challenges to the well-being of individuals and the ecology of our planet. As people who have some control over the way technology is shaped and deployed, is it not our responsibility to make sure that its use is not damaging to its users, that it does not mandate destructive consumer practices, that people can enjoy the benefits – modest as they often are when the background videos and animated widgets are stripped away – without being under continuous threat of being left behind, isolated, excluded because their phone or computer is not this season’s model?

Strengthening Freedoms

In rather too many words, I have described some of the challenges that need to be confronted by Free Software advocates. We need to augment the four software freedoms with some freedoms or statements of our own. They might say that the software and the solutions we want to develop and to encourage should be…

  • Sustainable to make: developers and their collaborators should be respected, their contributions fairly rewarded, their work acknowledged and sustained by those who use it
  • Sustainable to choose and to use: adopters should have their role recognised, with their choices validated and rewarded through respectful maintenance and evolution of the software on which they have come to depend
  • Encouraging of sustainable outcomes: the sustainability of the production and adoption of the software should encourage sustainability in other ways, promoting longevity, guarding against obsolescence, preventing needless and frivolous consumption, strengthening society and making it fairer and more resilient

It might be said that in order to have a fairer, kinder world there is no shortage of battles to be fought. With such sentiments, the discussion about what more might be done is usually brought to a swift conclusion. In this article, I hope to have made a case that what we can be doing is often not so different from what we are already doing.

And, of course, this brings us back to the awkward matter of why we, or the organisations we support, are not always so enthusiastic about these neglected areas of concern. Wouldn’t we all be better off by adding a dimension of sustainability to the freedoms we already recognise and enjoy?

Come meet KDE in Denver - September 6-9

TSDgeos' blog | 19:39, Monday, 03 September 2018

This week, Aleix (KDE e.V. Vice President), Albert Vaca (KDE Connect maintainer) and I will be in Denver to attend the Libre Application Summit 2018.

Libre Application Summit is unfortunately not free to attend, so even though I'd urge you to come and see the amazing talks we're going to give, I can see why not everyone would want to come.

So I'm going to say that if you're in the Denver area and are a KDE fan, write a comment here and maybe we can meet for drinks or something :)

Saturday, 01 September 2018

A simple picture language for GNU Guile

Rekado | 21:00, Saturday, 01 September 2018

One thing that I really love about Racket is its picture language, which allows you to play with geometric shapes in an interactive session in Dr Racket. The shapes are displayed right there in the REPL, just like numbers or strings. Instead of writing a programme that prints "hello world" or that computes the Fibonacci numbers, one could write a programme that composes differently rotated, coloured shapes and prints those instead.

I use GNU Guile for my own projects, and sadly we don't have an equivalent of Racket's picture language or the Dr Racket editor environment. So I made something: a simple picture language for GNU Guile. It provides simple primitive procedures to generate shapes, to manipulate them, and to compose them.

Download the single Guile module containing the implementation:

mkdir ~/pict
wget -O ~/pict/pict.scm https://elephly.net/downies/pict.scm

To actually see these shapes as you play with them, you need to use a graphical instance of GNU Emacs with Geiser.

Start geiser in Emacs and load the module:

M-x run-guile
(add-to-load-path (string-append (getenv "HOME") "/pict"))
,use (pict)

Let’s play!

(circle 100)

If you see a pretty circle: hooray! Let’s play some more:

(colorize (circle 100) "red")
(disk 80)
(rectangle 50 100)

Let's compose and manipulate some shapes!

,use (srfi srfi-1)
,use (srfi srfi-26)
(apply hc-append
       (map (cut circle <>)
            (iota 10 2 4)))

(apply cc-superimpose
       (map (cut circle <>)
            (iota 10 2 4)))

(apply hc-append
       (map (cut rotate (rectangle 10 30) <>)
            (iota 36 0 10)))

(apply cc-superimpose
       (map (cut rotate (triangle 100 300) <>)
            (iota 36 0 10)))

There are many more procedures for primitive shapes and for manipulations. Almost all procedures in pict.scm have docstrings, so feel free to explore the code to find fun things to play with!

PS: I realize that it's silly to have a blog post about a picture language without any pictures. Instead of thinking about this now, get the module and make some pretty pictures yourself!

Thursday, 30 August 2018

Technoshamanism meeting in Axat, France (October 5 to 8)

agger's Free Software blog | 19:25, Thursday, 30 August 2018

 



Complete program to appear here.

First published at the technoshamanism site.


THE MEETING

We would like to invite you to the Technoshamanism meeting in Axat, south of France, at Le Dojo art space and association.

We still enjoy temporary autonomous zones, new ways of life and of art/life; we try to think and cooperate towards food self-sufficiency and interdependence, towards the reforestation of the Earth, and towards the ancestorfuturist fertilization of the imagination. Our main practice is to promote networks of the unconscious, strengthening the desire to form communities, and to propose alternatives to the “productive” thinking of science and technology.

THE NETWORK

The Technoshamanism network has existed since 2014. The network has published one book and organized two international festivals in the south of Bahia (these were produced in partnership with the Pataxó from the indigenous villages Aldeia Velha and Aldeia Pará, near Porto Seguro, where the first Portuguese caravels arrived in the year 1500). The third festival is planned for August 2019 in Denmark.

Technoshamanism arises from the confluence of several networks deriving from the free software and free culture movements. It promotes meetings, events, DIY ritual performances, electronic music, food forests and immersive processes, remixing worldviews and promoting the decolonization of thought. The network brings together artists, biohackers, thinkers, activists, indigenous people and indigenists, promoting a social clinic for the future and the meeting between technologies, rituals, synergies and sensitivities.

Keywords: free cosmogony, ancestorfuturism, noisecracy, perspectivism, earthcosmism, free technologies, decolonialism of thought, production of meaning and care.

THE MEETING

We will meet in Axat, in the south of France, at Le Dojo, an art space and association, a house that has brought together artists, activists and other passers-by since 2011, from October 5 to 8. Meals will be collective and shared among the participants. Lodging costs 2 euros per day per person. To get to Axat, take transport to one of the nearby towns (Carcassonne or Perpignan) and then a local bus for one euro. Timetable (in French):

http://axat.fr/horaires-bus/
http://axat.fr/wp-content/uploads/2015/10/ST_29072015-_Ligne_53_-Carcassonne_Axat_horaires.pdf

http://www.pays-axat.org/iso_album/ligne_100_quillan_-_perpignan_septembre_2013.pdf

THE OPEN CALL

If you are interested in getting to know the Technoshamanism network better and in going deeper into ancestry and speculative fiction, we invite you, curious people, earthlings, hackers, cyborgs and other strangers of this Anthropocene world, to join our meeting and bring your ideas, experiences, practices and so on. Send us a short bio plus a proposal of up to 300 words to the email xamanismotecnologico@gmail.com. The open call ends September 18.

Do not hesitate to contact us with any questions or comments. Welcome to the Technoshamanism network!

LINKS/LIENS

Blog https://tecnoxamanismo.wordpress.com/blog/

Questionário sobre o tecnoxamanismo / Questionnaire sur le Tecnochamanisme/ Questionnaire about Tecnoshamanism  https://tecnoxamanismo.files.wordpress.com/2018/06/questionnaire-about-echnoshamanism.pdf

Book/Livre/Livro https://issuu.com/invisiveisproducoes/docs/tcnxmnsm_ebook_resolution_1

Sobre o primeiro festival /About the first festival / Sur le premier festival                 http://tecnoxamanismo.com.br/

https://tecnoxamanismo.wordpress.com/2017/06/29/technoshamanism-in-aarhus-rethink-ancestrality-and-technology/

https://archive.org/details/@tecnoxamanismo

About the Dojo / Sur le Dojo/ Sobre o Dojo https://www.flickr.com/photos/ledojodaxat/


Switzerland/ Suisse/Suiça – 2016

Português

ENCONTRO DE TECNOXAMANISMO EM AXAT, FRANÇA (5 a 8 de outubro)

Gostaríamos de convidar vocês para o encontro da rede Tecnoxamanismo, no sul da França, na cidade de Axat, no espaço de arte e associação Le Dojo.

Ainda gostamos das zonas autônomas temporárias, da invenção de modos de vida, da arte/vida, tentamos pensar e colaborar com a interdependência alimentar, com o reflorestamento da Terra, na adubagem imaginária ancestrofuturista. Nosso principal exercício é promover redes de inconscientes, fortalecendo  o desejo de comunidade, assim como propor alternativas ao pensamento “produtivo” da ciência e tecnologia.

A REDE

A rede Tecnoxamanismo existe desde 2014, conta com livro publicado e dois festivais internacionais realizados no sul da Bahia (estes produzidos com a parceria indígena da etnia Pataxó Aldeia Velha e Aldeia Pará – nas redondezas de Porto Seguro –  onde aconteceu a primeira invasão das caravelas – a colonização portuguesa). O terceiro festival está previsto para agosto de 2019 na Dinamarca.

O Tecnoxamanismo surge a partir da confluência de várias redes oriundas do movimento do software e cultura livre. Atua promovendo encontros, eventos, acontecimentos permeando performances rituais DIY, música eletrônica, agrofloresta e processos imersivos, remixando cosmovisões e promovendo a descolonização do pensamento. A rede congrega artistas, biohackers, pensadores, ativistas, indígenas e indigenistas promovendo uma clínica social do futuro, o encontro entre tecnologias, rituais, sinergias e sensibilidades.

Palavras-chave: cosmogonia livre, ancestrofuturismo, ruidocracia, perspectivismo, terracosmismo, tecnologias livres, descolonização do pensamento, produção de sentido e de cuidado de si e do outro.

O ENCONTRO

Nos reuniremos na cidade de Axat, no sul da França, no espaço e associação Dojo, uma casa que reúne artistas, ativistas, entre outros passantes desde 2011, durante os dias 5 a 8 de outubro. As refeições serão coletivas e compartilhadas entre os participantes. O custo da casa é de 2 euros por dia por pessoa.

Para chegar em Axat, é preciso pegar um transporte até as cidades próximas (Carcassone ou Perpignam) e depois um ônibus local à um euro. Tabela de horários (em francês) :

http://axat.fr/horaires-bus/ ; http://axat.fr/wp-content/uploads/2015/10/ST_29072015-_Ligne_53_-Carcassonne_Axat_horaires.pdf http://www.pays-axat.org/iso_album/ligne_100_quillan_-perpignan_septembre_2013.pdf

A CONVOCATÓRIA

Se você tem interesse em conhecer mais do Tecnoxamanismo, se aprofundar sobre ancestralidades e ficção especulativa, convocamos os interessados, curiosos, terráqueos, hackers, cyborgs, entre outros estranhos deste mundo antropoceno à participar do nosso encontro, trazendo suas ideias, processos, experiências, práticas, etc. envie um resumo de até 300 palavras para o e-mail xamanismotecnologico@gmail.com. A chamada aberta termina dia 18 de setembro.

Fiquem à disposição para tirar dúvidas ou fazer comentários. Bem-vindos à rede do Tecnoxamanismo!

Français

RENCONTRE DE TECHNOSHAMANISME À AXAT, DANS LE SUD DE LA FRANCE

Le réseau de Tecnoshamanime a le plaisir de vous inviter pour une immersion/rencontre au sûd de la France, dans la ville d’Axat, dans l’espace d’art et association le Dojo.

On croit toujours dans les zones autonomes temporaires, qui puisse rendre compte de l’invention des modes de vie alternatifs, l’art/vie, l’indépendance alimentaire, à travers de la fomentation des mémoires ancestrales et futuristes, le reforestation de la Terre et la fertilization de l’imagination. Notre but principal c’est de créer espaces pour penser d’autres possibilités d’existence horizontal, et également pour fortifier le désir et l’idée communal, ainsi que reformuler l’idéologie de la production scientifique et technologique.

LE RÉSEAU

Le réseau du Tecnochamanisme envisage ces rencontres et projets depuis 2014, il compte avec un livre publié et deux festivals internationaux accomplis au sud de Bahia (ceux produits avec le partenariat indigène des indiens Pataxó- Aldeia Velha et Aldeia Pará – dans la rondeur de Porto Seguro – où la première invasion des caravelles est arrivée de la colonisation portugaise). Le troisième festival est prévu pendant août de 2019 au Danemark.

Tecnochamanisme apparaît commençant de la confluence de plusieurs réseaux originaires du mouvement du software et de la culture libre. Il s’agit en promouvant des rencontres, des événements et des happenings pénétrant des processus immersifs, des performances rituels DIY (faites toi-même), la musique électronique, également des projets agroforestiers, en faisant un remix de cosmo-visions et la promotion de la décolonisation de la pensée. Le réseau rassemble des artistes, biohackers, des penseurs, des activistes, indigènes et indigénistes dans la création collective d’une clinique sociale de l’avenir, le rencontre parmi des technologies, des rituels, des synergies et des sensibilités.

Mots-clés: Cosmogonie libre, ancestro-futurisme, noisecracie, perspectivisme, terrecosmisme, technologies libres, décolonisation de la pensée, production de sens et de soin, de soi et de l’autre.

LE RENCONTRE

Nous rassemblera dans la ville d’Axat, au sud de la France, dans l’espace et l’association Le Dojo, une maison qui réunit des artistes, des activistes, parmi d’autres passants depuis 2011, du 5 au 8 octobre. Les repas seront collectifs et partagés parmi les participants. Le coût de la maison c’est de 2 euros par jour par personne. Pour arriver à Axat, il est nécessaire d’arriver dans les villes proches comme Carcassone ou Perpignam et aprés, attraper un bus local à un euro. Planning des autobus locales:http://axat.fr/horaires-bus/

L’APPEL

Si vous êtes intéressé à en apprendre davantage sur Tecnoxamanism, s’approfondir sûr la fiction spéculative et l’ancestral, nous invitons les personnes intéressées, les curieux, les Terriens, les hackers, les cyborgs, entre autres étrangers de ce monde anthropocènique, à se joindre à nous. Si vous avez des idées, des pratique ou des interventions engagées envoyez à nous un résumé de 300 mots à l’email xamanismotecnologico@gmail.com. Les inscriptions doivent-elles être envoyés jusqu’au 18 septembre.

N’hésitez pas revenir vers nous pour faire n’importe quel doute ou commentaire. Soyez-les bienvenus sur le réseau du Technoshamanisme!


 

Wednesday, 29 August 2018

FrOSCon 2018

Inductive Bias | 16:34, Wednesday, 29 August 2018

A more general summary of the conference (written in German) can be found at https://tech.europace.de/froscon-2018/. Below is a more detailed summary of the keynote by Lorena Jaume-Palasi.

In her keynote "Blessed by the algorithm - the computer says no!" Lorena detailed the intersection of ethics and technology when it comes to automated decision making systems. Just as people with technical training tend to shy away from questions related to ethics, people trained in ethics often shy away from topics that involve a technical layer. However, as technology becomes more and more ingrained in everyday life, we need people who understand both technology and ethical questions.

Lorena started her talk by detailing how one typical property of human decision making is inconsistency, otherwise known as noise: where machine-made decisions can be either accurate and consistent or biased and consistent, human decisions are either inconsistent but more or less accurate, or inconsistent and biased. There is no shortage of experiments showing this level of inconsistency, ranging from time estimates for tasks differing depending on weather, mood, time of day or hunger, up to judges being influenced by similar factors in court.

One interesting aspect: while we need to know the right answer in order to measure bias, this is not necessary for measuring inconsistency. This is where monitoring decisions can help to mitigate human inconsistencies.

In order to understand the impact of automated decision making on society, one needs a framework to evaluate it, and the field of ethics provides multiple such frameworks. Ethics comes in three flavours: meta-ethics deals with what is good and what counts as an ethical claim; normative ethics deals with standards and principles; applied ethics deals with applying ethics to concrete situations.

In western societies there are some common approaches to answering ethics related questions: Utilitarian ethics asks which outputs we want to achieve. Human rights based ethics asks which inputs are permissible - what obligations do we have, what things should never be done? Virtue ethics asks what kind of human being one wants to be, what does behaviour say about one's character? These approaches are being used by standardisation groups at e.g. DIN and ISO to answer ethical questions related to automation.

For tackling ethics and automation today there are a couple of viewpoints, looking at questions like finding criteria within the context of the design and processing of data (think GDPR), algorithmic transparency, and prohibiting the use of certain data points for decision making. The importance of these questions is amplified now that automated decision making makes its way into medicine, information sharing and politics, often separating the point of decision making from the point of acting.

One key assumption in ethics is that you should always be able to state why you took a certain action; except for actions taken by mentally ill people, so far this has generally been true. Now there are many more players in the decision making process: people collecting data, coders, people preparing data, people generating data, users of the systems developed. For regulators this setup is confusing: if something goes wrong, who is to be held accountable? Often the problem isn't even in the implementation of the system but in how it is used and deployed.

This confusion leads to challenges for society: democracy does not understand collectives, it understands individuals acting. Algorithms, however, do not understand individuals, but instead base decisions on comparing individuals to collectives and inferring how to move forward from there. This property impacts individuals as well as society.

For understanding which types of biases make it into algorithmic decision making systems that are built on top of human generated training data one needs to understand where bias can come from:

The uncertainty bias is born out of a lack of training data for specific groups, which amplifies outlier behaviour as well as the risk of over-fitting. One-sided criteria can serve to reinforce a bias that is generated by society: even when gender, names and images are ruled out of hiring decisions, a focus on years of leadership experience gives an advantage to those more likely to have been exposed to leadership roles, which typically excludes people of colour and people from poorer districts. One-sided hardware can make interaction harder: think of face recognition systems that have trouble identifying non-white or non-male humans.

In the EU we focus on the precautionary principle, where launching new technology means showing that it is not harmful. This, though, proves more and more complex as technology becomes entrenched in everyday life.

What other biases do humans have? There are information biases, where humans tend to reason based on analogy and on the illusion of control (overestimating oneself, downplaying risk, downplaying uncertainty); there is an escalation of commitment (a tendency to stick to a decision even if it's the wrong one); and there are single-outcome calculations.

Cognitive biases are related to framing, criteria selection (we tend to value quantitative criteria over qualitative ones) and rationality. There are risk biases: uncertainties about positive outcomes typically aren't seen as risks, and risk tends to be evaluated by magnitude rather than by a combination of magnitude and probability. There are attitude-based biases: in experiments, senior managers considered risk taking to be part of their job, and the level of risk taken depended on the amount of positive performance feedback given to a person; the better people believe they are, the more risk they are willing to take. Uncertainty biases relate to the difference between the information I believe I need and the information actually available: in experiments, humans made worse decisions the more data and information was available to them.

General advice: Beware of your biases...

Tuesday, 28 August 2018

Repeated prompts for SSH key passphrase after upgrading to Ubuntu 18.04 LTS?

Losca | 08:02, Tuesday, 28 August 2018

This was a tricky one (for me, anyway) so posting a note to help others.

The problem was that after upgrading to Ubuntu 18.04 LTS from 16.04 LTS, I had trouble with my SSH agent. I was always being asked for the passphrase again and again, even if I had just used the key. This wouldn't have been a showstopper otherwise, but it made using virt-manager over SSH impossible because it was asking for the passphrase tens of times.

I didn't find anything on the web, and I didn't find any legacy software or obsolete configs to remove to fix the problem. I only got a hint when I tried ssh-add -l, which gave the error message ”error fetching identities: Invalid key length”. This led me onto the right track, since after a while I started suspecting the old keys in .ssh that I hadn't used for years. And sure enough: after I removed one id_dsa (!) key and one old RSA key from the .ssh directory (with GNOME's Keyring app, to be exact), ssh-add -l started working, the familiar SSH agent behaviour resumed, and I was able to use my remote VMs fine too!

Hope this helps.

ps. While on the topic, remember to upgrade your private keys' internal format from the ”worse than plaintext” format to the new OpenSSH format with the -o option: blog post – tl;dr: run ssh-keygen -p -o -f id_rsa and retype your passphrase.

Thursday, 16 August 2018

Mixed Emotions On Debian Anniversary

Bits from the Basement | 17:26, Thursday, 16 August 2018

When I woke up this morning, my first conscious thought was that today is the 25th anniversary of a project I myself have been dedicated to for nearly 24 years, the Debian GNU/Linux distribution. I knew it was coming, but beyond recognizing the day to family and friends, I hadn't really thought a lot about what I might do to mark the occasion.

Before I even got out of bed, however, I learned of the passing of Aretha Franklin, the Queen of Soul. I suspect it would be difficult to be a caring human being, born in my country in my generation, and not feel at least some impact from her mere existence. Such a strong woman, with amazing talent, whose name comes up in the context of civil rights and women's rights beyond the incredible impact of her music. I know it's a corny thing to write, but after talking to my wife about it over coffee, Aretha really has been part of "the soundtrack of our lives". Clearly, others feel the same, because in her half-century-plus professional career, "Ms Franklin" won something like 18 Grammy awards, the Presidential Medal of Freedom, and other honors too numerous to list. She will be missed.

What's the connection, if any, between these two? In 2002, in my platform for election as Debian Project Leader, I wrote that "working on Debian is my way of expressing my most strongly held beliefs about freedom, choice, quality, and utility." Over the years, I've come to think of software freedom as an obvious and important component of our broader freedom and equality. And that idea was strongly reinforced by the excellent talk Karen Sandler and Molly de Blanc gave at Debconf18 in Taiwan recently, in which they pointed out that in our modern world where software is part of everything, everything can be thought of as a free software issue!

So how am I going to acknowledge and celebrate Debian's 25th anniversary today? By putting some of my favorite Aretha tracks on our whole house audio system built entirely using libre hardware and software, and by working to find and fix at least one more bug in one of my Debian packages. Because expressing my beliefs through actions in this way is, I think, the most effective way I can personally contribute in some small way to freedom and equality in the world, and thus also the finest tribute I can pay to Debian... and to Aretha Franklin.

Thursday, 09 August 2018

PSA: Use SASL in konversation

TSDgeos' blog | 17:44, Thursday, 09 August 2018

You probably have seen that Freenode has been getting lots of spam lately.

To protect against that some channels have activated a flag that only allows authenticated users to enter the channel.

If you're using the regular "nickserv" authentication method, as I was, authentication happens in parallel with joining the channels, and you'll probably be rejected from some of them.

What you want is to use SASL, a newer IRC authentication mechanism that authenticates first and only then joins the server and channels.

More info at https://userbase.kde.org/Konversation/Configuring_SASL_authentication.

Thanks Fuchs on #kde-devel for enlightening me :)

And of course, I'm going to Akademy ;)

Wednesday, 01 August 2018

Two years of terminal device freedom

tobias_platen's blog | 19:01, Wednesday, 01 August 2018

On August 1, 2016 a new law entered into force that allows customers of German internet providers to use any terminal device they choose. Internet service providers (ISPs) are now required to give users all the information they need to connect an alternative router. In many other EU countries there is still no such law, and the Radio Lockdown Directive is compulsory in all of them. In Germany the old “Gesetz über Funkanlagen und Telekommunikationsendeinrichtungen” has now been replaced with the new “Funkanlagengesetz”.

Routers that use radio standards such as WiFi and DECT fall under the Radio Lockdown Directive, and since the European Commission has not yet passed a delegated act, there is no requirement to implement a lockdown for current hardware. Many WiFi chipsets require non-free firmware, and future generations of that non-free firmware could be used to lock down all kinds of radio equipment. Radio equipment that comes with the Respects Your Freedom hardware product certification is 2.4 GHz only in many cases, but some hardware that supports 5 GHz does exist.

Voice over IP (VoIP) is supported by most alternative routers and free software such as Asterisk. Since most ISPs and routers use SIP it is now possible to connect modern VoIP telephones directly to routers such as the FritzBox. Many compulsory routers such as the O2 Box 6431 use SIP internally, but it is not possible to connect external SIP phones with the stock firmware. So some users install OpenWRT on their box to get rid of those restrictions. Some ISPs in the cable market don’t use SIP, but an incompatible protocol called EuroPacketCable which is unsupported by most alternative routers.

Many set-top boxes used for TV streaming use Broadcom chips, which offer poor Free Software support. TV streaming could be done with free software, but many channels are scrambled and require non-free software to unscramble them. Old hardware such as the Media-Receiver Entry may become obsolete when Telekom stops offering Start TV in 2019. No ISP publishes the interface descriptions for TV streaming, even though they could do so for the DRM-free channels. It is possible to use Kodi to watch those DRM-free channels, but many features such as an Electronic Program Guide (EPG) do not work with IPTV streaming.

With this new law users now have a “freedom of choice”, but they do not have full “software freedom”, because many embedded devices still run proprietary software. Freedom-respecting terminal devices are rare, and often they do not implement all the features a user needs. Old analogue telephones sold in the 90s did not have any of these problems.

Tuesday, 31 July 2018

Building Briar Reproducible And Why It Matters

Free Software – | 15:12, Tuesday, 31 July 2018

Briar is a secure messenger, the next step in the crypto messenger evolution if you will. It is Free Software (Open Source), so everybody has the possibility to inspect and audit its source code without needing to trust third parties to have done so in secret.

However, for security critical software, just being Free Software is not enough. It is very easy to install a backdoor before compiling the source code into a binary file. This backdoor would of course not be part of the published source code, but it would be part of the file that gets released to the public.

So the question becomes: How can I be sure that the public source code corresponds exactly to what gets installed on my phone? The answer to that question is: Reproducible Builds.

A reproducible build is a process of compiling the human-readable source code into the final binary in such a way that it can be repeated by different people on different computers and still produce identical results. This way you can be reasonably sure there’s nothing else to worry about than the source code itself. Of course, there can still be backdoors or other malicious features hidden in the code, but at least then these can be found and exposed. Briar already had one security audit and hopefully more will follow.
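
As an illustration of what such a verification boils down to (the file names below are hypothetical, and this is not the Briar Reproducer tool itself), comparing a locally built APK with the published one can be as simple as comparing their cryptographic hashes, for example in Python:

import hashlib

def sha256sum(path):
    # Hash the file in chunks so large APKs do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: your own build next to the published release APK.
local = sha256sum("briar-built-from-source.apk")
published = sha256sum("briar-release.apk")
print("identical: build reproduced" if local == published else "MISMATCH: investigate before trusting")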

After working on reproducible builds for a long time, we released Briar version 1.0.12, the first version that is built deterministically, so everybody should be able to reproduce the exact same binary. To make this as easy as possible, we wrote a tool called Briar Reproducer. It automates the building and verification process, but it introduces yet another layer of things you need to trust or audit. So you are of course free to reproduce the results independently. We tried to keep things as basic and simple as possible though.

When we tag a new release in our source code repository, our continuous integration system will automatically fetch this release, build it and compare it to the published reference APK binary. It would be great, if people independent from Briar would set up an automated verification infrastructure that publishes the results for each Briar release. The more (diverse) people do this, the easier it is to trust that Briar releases haven’t been compromised!

So far, we only reproduce Briar itself and not all the libraries that it uses. However, we at least pin the hashes of these libraries with gradle-witness. There's one exception: since it is rather essential and critical, we also provide a Tor Reproducer that you can use to verify that the Tor binaries embedded in Briar correspond to the public source code as well.

Doing reproducible builds can be challenging and there are problems waiting around every corner. We hope that all future releases of Briar will continue to be reproducible. If you are interested in the challenges we have overcome so far, you are welcome to read through the tickets linked in this master ticket.

What about F-Droid?

Reproducible builds are the precondition for getting Briar into the official F-Droid repository with its official code signature. (Briar also has its own F-Droid repository for now.) Unfortunately, we can not get it into F-Droid just now, because we are still working around a reproducible builds bug and this workaround is not yet available on F-Droid’s buildserver (which still uses Debian old-stable). We hope that it will be upgraded soon, so Briar will be available to all F-Droid users without adding an extra repository.


Sunday, 29 July 2018

Should you donate to Open Source Software?

Blog – Think. Innovation. | 13:06, Sunday, 29 July 2018

In short: yes, you should, if you are a regular user and can afford it. For the longer version: read on. In this article I will explain that donating is not just “the right thing to do”, but also a practical way of supporting Open Source Software (OSS). I will show you a fair and pragmatic method that I use myself. You will see that donating does not need to cost you much (in my case less than € 25 per month, about 25% of the price of the proprietary alternatives), is easy, and gets this topic “off your mind” for the rest of the time.

Using LibreOffice as an example, you will also see that even if only the 5 governments mentioned on the LibreOffice website followed my method, this would bring in almost 10 times more than would be needed to pay all the people working on the project a decent salary, even when living in a relatively ‘expensive’ country like The Netherlands!

“When a project is valuable it will find funds”

From a user’s perspective it seems fair to donate to Open Source Software: you get utility value from using these programs and you would otherwise need to buy proprietary software. So some reciprocity simply feels like the right thing to do. For me not only the utility value matters, but the whole idea of software freedom resonates.

On occasion I raised the question: “How much should one donate?” in the Free Software Foundation Europe (FSFE) community. I was surprised to find out that this is a question of little concern there. The reply that I got was something like: “When a project is valuable it will find the necessary funds anyway.” From my user’s perspective I did not understand this reaction, until I started doing some research for writing this piece.

It boils down to this: every contributor to OSS has her own reasons for contributing, without expecting to be paid by the project. Maybe she wants to learn, is a heavy user of the program herself, is being paid for it by her employer (when this company sells products or services based on the OSS) or simply enjoys the process of creation.

The accepted fact is that donations will never be able to provide a livelihood for the contributors, as they are insufficient and vary over time. I even read that bringing in too much money could jeopardize the stability of a project, as the questions of who gets to decide what to spend the money on, and where the money is going, could result in conflict.

Contribute, otherwise donate

So, as a user should you donate to OSS at all then? My answer: yes, if you can afford it. However, the real scarcity for OSS development is not money, it is time. The time to develop, test, write documentation, translate, do community outreach and more. If you have a skill that is valuable to an OSS project and you can free up some time, then contribute with your time performing this skill.

In case you are not able or willing to contribute, then donate money if the project asks for it and you can afford it. My take is that if they ask for donations, they must have a reason to do so as part of maintaining the project. It is the project’s responsibility to decide on spending it wisely.

Furthermore, while you may have the option to go for a proprietary alternative if the need comes, many less fortunate people do not have this option. By supporting an OSS project you aid in keeping it alive, indirectly making it possible for those people to also use great quality software in freedom.

For me personally, I decided to primarily donate money. On occasion I have donated time, in the form of speaking about Open Source at conferences. For the rest, I do not have the necessary skills.

To which projects to donate?

The question then arises: to which projects should I donate? I have taken a rather crude yet pragmatic approach here, donating only to those projects that I consciously decide to install. Since every OSS project typically depends on so many other pieces of OSS (called ‘upstream packages’), donating to all projects gets very complicated very quickly, if not practically impossible. Furthermore, I see it as each project’s responsibility to decide on donating part of its incoming funds to the packages it depends on.

For example, if GIMP (photo editing software) would come pre-installed with the GNU/Linux distribution I am using, then I would not donate to GIMP. If on the other hand GIMP would not come pre-installed and I had to install it manually with the (graphical interface of the) package manager, then I would donate to GIMP. Is this arbitrary? It absolutely is, but at the moment I do not know of any pragmatic alternative approach. If you do, please share!

I also take my time in evaluating software, meaning that it could take months until it becomes apparent that I actually use a piece of software and come back to it regularly. I feel this as a major “freedom”: paying based on proven utility.

How much to donate?

After deciding to which projects to donate, the next question becomes: how much to donate? One of the many stated positive attributes of Open Source is that, while enabling freedom and being more valuable, it is also cheaper to create than its proprietary counterparts. There is no head office, no bureaucratic layers of managers, no corporate centralized value extraction, no expensive marketing campaigns and sales people, no legal enforcement costs. So for software that has many users globally, even if the software developers were paid, the costs per user would be very low.

I once read that Open Source Ecology is aiming to create the technology and know-how to be able to produce machines at 25% of the costs of proprietary commercial counterparts. Although completely arbitrary, I feel this 25% to be a reasonable assumption. It is well less than half of the proprietary solution, but certainly not nothing.

So, for example: I use LibreOffice for document writing, spreadsheets and presentations. A proprietary alternative would be Microsoft Office. The one-time price for a new license of Microsoft Office 2016 is € 279 in The Netherlands. I have no idea how long this license would last, but let's assume a period of 5 years. I choose to donate monthly to OSS, so this means that for LibreOffice I donate € 279 / 5 / 12 = € 4,65 per month.
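
For anyone who wants to apply the same method to their own list of software, here is a minimal Python sketch of the two calculations used in this article: spreading a one-time licence price over an assumed lifetime, and taking a share of a recurring subscription price. The prices and the five-year lifetime are the assumptions from the text, not fixed rules.

def monthly_from_one_time(price, lifetime_years=5):
    # Spread a one-time licence price over its assumed lifetime, in months.
    return price / (lifetime_years * 12)

def monthly_from_subscription(monthly_price, share=0.25):
    # Donate a fixed share of a recurring subscription price.
    return monthly_price * share

print(round(monthly_from_one_time(279), 2))      # Microsoft Office 2016: 4.65 per month
print(round(monthly_from_subscription(12), 2))   # Adobe Photoshop at 12 per month: 3.0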

The table below shows this calculation for all OSS I use regularly:

Open Source Software | Proprietary alternative (*)  | Proprietary price | Donate 25% monthly
Elementary OS        | Windows 10 Pro               | € 259 (5 years)   | € 4,35
LibreOffice          | Microsoft Office 2016        | € 279 (5 years)   | € 4,65
GIMP                 | Adobe Photoshop Photography  | € 12 per month    | € 3
NixNote              | Evernote Premium             | € 60 per year     | € 5 (**)
Workrave             | Workpace                     | € 82 (5 years)    | € 1,40
KeePassX             | 1Password                    | $ 3 per month     | € 2,60
Slic3r               | Simplify3D                   | $ 149 (5 years)   | € 2,15
Total monthly donation                                                  | € 23,15
Equals total per year                                                   | € 277,80

 

(*) Calculating the amount to donate based on the proprietary alternative is my pragmatic choice. Another approach could be to donate based on ‘value received’. Although I feel that this could be more fair, I do not know how to make this workable at the moment. I think it would quickly become too complex to do, but if you have thoughts on this, please share!

(**) I am using NixNote without the Evernote integration. Evernote Premium has a lot more functionality than NixNote alone. Especially being able to access and edit notes on multiple devices would be handy for me. The Freemium version of Evernote compares more to NixNote. However, Evernote Freemium is not free, it is a marketing channel for which the Premium users are paying. To keep things simple, I decided to donate 25% of Evernote Premium.

What would happen if every user did this?

Continuing from the LibreOffice example, I found that LibreOffice had over 1 million active users in 2015. The total amount of donations coming in during the last quarter of 2015 was $ 21,000, which is about € 6,000 per month, or less than 1 euro cent per active user. If instead every active user donated ‘my’ € 4,65, then over € 4.5 million would be coming in every month!

I am not saying that every user should do this, many people who are reaping the benefits of OSS cannot afford to donate and everyone else has the freedom to make up their own mind. That is the beauty of Open Source Software.

What would happen if those who can afford it did this?

Okay, so it is not fair to do this calculation based on every active user of LibreOffice. Instead, let’s do a calculation based on the active users who can afford it, minus the people who already reciprocate with a contribution in time and skills.

To start with the latter group: the number of LibreOffice developers stood at about 1,000 in November 2015. If we assume that half of the people involved are developers and half are people doing ‘everything else’ (testing, documentation writing, translating, etc.), then this translates to 0.2% of users. In other words: no significant impact on the calculations.

Then the former group: how can we know who can afford to donate? I believe we can safely assume that at least large institutions like governments can donate money they would otherwise have spent on Microsoft licenses. I found a list of governments using LibreOffice: in France, Spain, Italy, Taiwan and Brazil. They are using LibreOffice on a total of no less than 754,000 PCs! If these organizations donated that € 4,65 per PC, the project would still bring in € 3.5 million per month!

Am I missing something here? How is it possible that the LibreOffice project, one of the best-known OSS projects, making software for everyday use by pretty much anyone, is still only bringing in € 6,000 per month? Did they forget to mention “amounts are *1000” in the graph’s legend?

Or do large institutions directly pay the salaries of LibreOffice contributors instead of donating? Let’s have a look at the numbers of that (hypothetical) situation. In November 2015 LibreOffice had 80 ‘committers’, i.e. developers committing code. I imagine that these developers are not spending full working weeks on LibreOffice, so let’s assume they work half-time, 20 hours per week.

We again assume that development is half of the work being done. This translates to 80 people working full-time on the LibreOffice project at any given time. If they were employed in The Netherlands, a reasonable average salary could be something like € 4,500 per month (before tax), meaning a total cost of € 360,000 per month.

Conclusion: even if only the 5 governments mentioned on the LibreOffice website donated just 25% of the cost of the popular proprietary counterpart (Microsoft Office), this would bring in almost 10 times more than would be needed to pay all the people working on the project a decent salary, even when living in a relatively ‘expensive’ country like The Netherlands.
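
The arithmetic behind this conclusion can be checked in a few lines of Python; the figures below are simply the ones quoted above, not independently verified data.

pcs = 754_000                    # PCs at the five governments listed above
donation_per_pc = 4.65           # euro per month, the LibreOffice figure from the table
contributors = 80                # full-time equivalents estimated above
salary = 4_500                   # euro per month (before tax), assumed Dutch salary

monthly_donations = pcs * donation_per_pc    # about 3.5 million euro per month
monthly_wage_bill = contributors * salary    # 360,000 euro per month
print(round(monthly_donations / monthly_wage_bill, 1))   # roughly 9.7, i.e. almost 10 times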

How amazing is this? I almost cannot believe these numbers.

Setting up automated monthly donations

Anyway, back to my pragmatic approach for monthly donations to OSS you regularly use. Now that we know which amount to give to which projects, the task is actually doing so. The following table shows the donation options for the OSS I regularly use:

Open Source Software | Donation options
Elementary OS        | Patreon, PayPal, Goodies, BountySource
LibreOffice          | PayPal, Credit Card, Bitcoin, Flattr, Bank transfer
GIMP                 | Patreon (developers directly), PayPal, Flattr, Bitcoin, Cheque
NixNote              | Not accepting donations (*)
Workrave             | Not accepting donations (*)
KeePassX             | Not accepting donations (*)
Slic3r               | PayPal

 

(*) Interestingly not all projects accept donations. The people at Workrave suggest donation to the Electronic Frontier Foundation instead, but the EFF only accepts donations of $ 5 per month or more. For now, these projects will not receive a donation.

Donating is not something you want to do manually every month, so an automatic set-up is ideal. In the above cases I think only Patreon and PayPal allow this. Even though PayPal is cheaper than Patreon, and even though, according to Douglas Rushkoff, Patreon is becoming problematic as an extractive multi-billion-dollar valued entity with over $ 100 million invested in it, I still prefer Patreon over PayPal. The reason is that donating via Patreon gives ‘marketing value’ to the project, as the donations and donors are visible, boosting general recognition.

Even though I have not done any research (yet), I will go with Rushkoff’s view that Drip is preferable to Patreon, and will switch to it when it becomes available for these projects.

I will evaluate the list once or perhaps twice a year, as to keep the administrative burden manageable. For me personally once I get used to a piece of software and get the benefits from it, I rarely go look for something else.

How can OSS projects bring in more donations?

Presuming that a project is interested in bringing in more donations, my thoughts from my personal donating experience are:

  • Being available on Drip or otherwise Patreon, because these are also communication channels besides mere donation tools.
  • Nudging downloaders towards leaving their e-mail address, then following up after a few months. For me personally, I want to try out the software first. Once I find out I really like it and use it regularly, I am absolutely willing to pay for it.
  • Providing a suggested donation, explaining why that amount is suggested and what it will be used for. And of course nudging (potential) donors towards contributing, making it transparent and super easy to figure out how contributions can be made and provide value.

So, what do you think? Is it fair to pay for Open Source Software that you are using? If so, which approach would you advocate? And if not, if projects are asking for donations, why not donate?

– Diderik

Photo by Nathan Lemon on Unsplash

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Thursday, 26 July 2018

Happy SysAdmin Day (2018)

Ramblings of a sysadmin (Posts about planet-fsfe) | 22:00, Thursday, 26 July 2018

Just wanted to wish all my fellow system administrators a very happy sysadmin day. This one goes out to all you ninjas, who carry out their work in the shadows to ensure maximum availability, stability, performance and security. To all of you, who try to explain magic to muggles on a daily basis.

/img/posts/2018/07/27-happy-sysadmin-day-2018/wizard-penguin.thumbnail.png

Source: OpenClipArt

Sunday, 22 July 2018

Tuple Performance Optimisations in CPython and Lichen

Paul Boddie's Free Software-related blog » English | 16:56, Sunday, 22 July 2018

One of the nice things about the Python programming language is the selection of built-in data types it offers for common activities. Things like lists (collections of values) and dictionaries (key-value mappings) are very convenient and do not need much further explanation, but there is also the concept of the tuple, which is also a collection of values, like a list, but whose size and values are fixed for its entire lifespan, unlike a list. Here is what a very simple tuple looks like:

(123, "abc")

Normally, the need for a data type like this becomes apparent when programming and needing to return multiple values from a function. In languages that do not support such a convenient way of bundling things together, some extra thought is usually required to send data back to the caller, and in languages like C the technique of mutating function arguments and thus communicating such data via a function’s parameters is often used instead.
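
For instance, a function can bundle several results into a tuple and let the caller unpack them again. This is a generic illustration, not code taken from any of the projects discussed here:

def minimum_and_maximum(values):
    "Return the smallest and largest of 'values' together as a tuple."
    return (min(values), max(values))

(lowest, highest) = minimum_and_maximum([3, 1, 4, 1, 5])
print(lowest, highest)   # 1 5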

Lichen, being a Python-like language, also supports tuples. The parsing of source code, employing various existing Python libraries, involves the identification of tuple literals: occurrences of tuples written directly in the code, as seen above. For such values to have any meaning, they must be supported by a particular program representation of the tuple plus the routines that provide each tuple with their familiar characteristics. In other words, we need to provide a way of translating these values into code that makes them tangible things within a running program.

Here, Lichen differs from various Python implementations somewhat. CPython, for instance, defines practically all of the nature of its tuples in C language code (the “C” in CPython), with the pertinent file in CPython versions from 1.0 all the way to 2.7 being found as Objects/tupleobject.c within the source distribution. Meanwhile, Jython defines its tuples in Java language code, with the pertinent file being found as src/org/python/core/PyTuple.java (without the outermost src directory in very early versions). Lichen, on the other hand, implements the general form of tuples in the Lichen language itself.

Tuples All The Way Down

This seems almost nonsensical! How can Lichen’s tuples be implemented in the language itself? What trickery is involved to pull off such an illusion? Well, it might be worth clarifying what kind of code is involved and which parts of the tuple functionality are really provided by the language framework generally. The Lichen code for tuples is found in lib/__builtins__/tuple.py and has the following outline:

class tuple(sequence, hashable):
    "Implementation of tuple."

    def __init__(self, args=None):
        "Initialise the tuple."

    def __hash__(self):
        "Return a hashable value for the tuple."

    def __add__(self, other):
        "Add this tuple to 'other'."

    def __str__(self):
        "Return a string representation."

    def __bool__(self):
        "Tuples are true if non-empty."

    def __iter__(self):
        "Return an iterator."

    def __get_single_item__(self, index):
        "Return the item at the normalised (positive) 'index'."

Here, the actual code within each method has been omitted, but the outline itself defines the general structure of the data type, described by a class, representing the behaviour of each tuple. As in Python, a collection of special methods are provided to support standard operations: __hash__ supports the hash built-in function and is used when using tuples as dictionary keys; __bool__ permits the truth value testing of tuples so that they may be considered as “true” or not; and so on.
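
To make this concrete, here is how those special methods surface in ordinary code. The snippet runs under CPython, but the same protocol is what the Lichen class above provides:

t = (123, "abc")
d = {t: "found"}            # __hash__ lets the tuple serve as a dictionary key
if t:                       # __bool__ reports a non-empty tuple as true
    print(d[(123, "abc")])  # an equal tuple hashes to the same value: "found"
for value in t:             # __iter__ provides iteration over the items
    print(value)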

Since this definition of classes (data types) is something that needs to be supported generally, it makes sense to use the mechanisms already in place to allow us to define the tuple class in this way. Particularly notable here is the way that the tuple class inherits from other “base classes” (sequence and hashable). Indeed, why should the tuple class be different from any other class? It still needs to behave like any other class with regard to supporting things like methods, and in Lichen its values (or instances) are fundamentally just like instances of other classes.

It would, of course, be possible for me to define the tuple class in C (it being the language to which Lichen programs are compiled), but another benefit of just using the normal process of parsing and compiling the code written in the Lichen language is that it saves me the bother of having to work with such a lower-level representation and the accompanying need to update it carefully when changing its functionality. The functionality itself, being adequately expressed as Lichen code, would need to be hand-compiled to C: a tedious exercise indeed.

One can turn such questions around and ask why tuples are special things in various Python implementations. A fairly reasonable response is that CPython, at least, has evolved its implementation of types and objects over the years, starting out as a “scripting language” offering access to convenient data structures implemented in C and a type system built using those data structures. It was not until Python 2.2 that “type/class unification” became addressed, meaning that the built-in types implemented at the lowest levels – tuples amongst them – could then be treated more like “user-defined classes”, these classes being implemented in Python code.

Although the outline of a tuple class can be defined in the Lichen language itself, and although operations defining tuple behaviour are provided as Lichen code, this does not mean that everything can be implemented at this level. For example, programs written in Lichen do not manage the memory their objects use but instead delegate this task to “native” code. Moreover, some of the memory being managed may have representations that only sensibly exist at a lower level. We can start to investigate this by considering the method returning the size or length of a tuple, invoked when the len built-in function is called on a tuple:

    def __len__(self):
        "Return the length of the tuple."
        return list_len(self.__data__)

Here, the method delegates practically everything to another function, presenting the __data__ attribute of the instance involved in the method call (self). This other function actually isn’t implemented in Lichen: it is a native function that knows about memory and some low-level structures that support the tuple and list abstractions. It looks like this:

__attr __fn_native_list_list_len(__attr self, __attr _data)
{
    unsigned int size = _data.seqvalue->size;
    return __new_int(size);
}

And what it does is to treat the __data__ attribute as a special sequence structure, obtaining its size and passing that value back as an integer usable within Lichen code. The sequence structure is defined as part of the support library for compiled Lichen programs, along with routines to allocate such structures and to populate them. Other kinds of values are also represented at the native level, such as integers and character strings.

To an extent, such native representations are not so different from the special data types implemented in C within CPython and in Java within Jython. However, the Lichen implementation seeks to minimise the amount of native code dedicated to providing abstractions. Where functionality supporting a basic abstraction such as a tuple does not need to interact directly with native representations or perform “machine-level” operations, it is coded in Lichen, and this code can remain happily oblivious to the nature of the data passing through it.

There are interesting intellectual challenges involved here. One might wonder how minimal the layer of native code might need to be, for instance. With a suitable regime in place for translating Lichen code into native operations, might it be possible to do memory management, low-level arithmetic, character string operations, system calls, and more, all in the same language, not writing any (or hardly writing any) native code by hand? It is an intriguing question but also a distraction, and that leads me back towards the main topic of the article!

The Benchmarking Game

Quite a few years ago now, there was a project being run to benchmark different programming languages in order to compare their performance. It still exists, it would seem. But in the early days of this initiative, the programs were fairly simple translations between languages and the results relatively easy to digest. Later on, however, there seemed to be a choice of results depending on the hardware used to create them, and the programs became more complicated, perhaps as people saw their favourite language falling down the result tables and felt that they needed to employ a few tricks to boost their language’s standing.

I have been interested in Python implementations and their performance for a long time, and one of the programs that I have used from time to time has been the “binary trees” benchmark. You can find a more complicated version of this on the Python Interpreters Benchmarks site as well as on the original project’s site. It would appear that on both these sites, different versions are being run even for the same language implementation, presumably to showcase optimisations.

I prefer to keep things simple, however. As the Wikipedia page notes, the “binary trees” benchmark is presumably a test of memory allocation performance. What I discovered when compiling a modified version of this program, one that I had originally obtained without the adornments of the multiprocessing module and generator usage, was perhaps more interesting in its own right. The first thing I found was that my generated C program was actually slower than the original program run using CPython: it took perhaps 140% of the CPython running time (48 seconds versus 34 seconds).

My previous article described various realisations that I had around integer performance optimisations in CPython. But when I first tried to investigate this issue, I was at a loss to explain it. It could be said that I had spent so much effort getting the toolchain and supporting library code into some kind of working order that I had little energy left for optimisation investigations, even though I had realised one of my main objectives and now had the basis for such investigations available to me. Perhaps a quick look at the “binary trees” code is in order, so here is an extract:

def make_tree(item, depth):
    if depth > 0:
        item2 = 2 * item
        depth -= 1
        return (item, make_tree(item2 - 1, depth), make_tree(item2, depth))
    else:
        return (item, None, None)

So, here we have some tuples in action, and in the above function, recursion takes place – the function calls itself – to make the tree, hence the function name. Consequently, we have a lot of tuples being created and can now understand what the Wikipedia page was claiming about the program. The result of this function gets presented to another function which unpacks the return value, inspects it, and then calls itself recursively, too:

def check_tree(tree):
    (item, left, right) = tree
    if left is not None:
        return item + check_tree(left) - check_tree(right)
    else:
        return item

I did wonder about all these tuples, and in the struggle to get the language system into a working state, I had cobbled together a working tuple representation in which I didn’t really have too much confidence. But I wondered what the program would look like in the other languages involved in the benchmarking exercise and whether tuples (or some equivalent) were also present in whichever original version had been written for the exercise, possibly in a language like Java or C. Sure enough, the Java versions (simple version) employ class instances and not things like arrays or other anonymous data structures comparable to tuples.

So I decided to change the program to also use classes and to give these tree nodes a more meaningful form:

class Node:
    def __init__(self, item, left, right):
        self.item = item
        self.left = left
        self.right = right

def make_tree(item, depth):
    if depth > 0:
        item2 = 2 * item
        depth -= 1
        return Node(item, make_tree(item2 - 1, depth), make_tree(item2, depth))
    else:
        return Node(item, None, None)

In fact, this is a somewhat rudimentary attempt at introducing object orientation since we might also make the function a method. Meanwhile, in the function handling the return value of the above function, the tuple unpacking was changed to instead access the attributes of the returned Node instances seen above.

def check_tree(tree):
    if tree.left is not None:
        return tree.item + check_tree(tree.left) - check_tree(tree.right)
    else:
        return tree.item

Now, I expected this to be slower in CPython purely because there is more work being done, and instance creation is probably more costly than tuple creation, but I didn’t expect it to be four times slower (at around 2 minutes 15 seconds), which it was! And curiously, running the same program compiled by Lichen was actually quicker (22 seconds), which is about 65% of the original version’s running time in CPython, half the running time of the original version compiled by Lichen, and nearly a sixth of the revised version’s running time in CPython.

One may well wonder why CPython is so much slower when dealing with instances instead of tuples, and this may have been a motivation for using tuples in the benchmarking exercise, but what was more interesting to me at this point was how the code generated by the Lichen toolchain was managing to be faster for instances, especially since tuples are really just another kind of object in the Lichen implementation. So why were tuples slower, and could there be a way of transferring some of the performance of general objects to tuples?

Unpacking More Performance

The “binary trees” benchmark is supposed to give memory allocation a workout, but after the integer performance investigation, I wasn’t about to fall for the trick of blaming the allocator (provided by the Boehm-Demers-Weiser garbage collection library) whose performance is nothing I would complain about. Instead, I considered how CPython might be optimising tuple operations and paid another visit to the interpreter source code (found in Python/ceval.c in the sources for all the different releases of Python 1 and 2) and searched for tuple-related operations.

My experiments with Python over the years have occasionally touched upon the bytecode employed by CPython to represent compiled programs, each bytecode instruction being evaluated by the CPython interpreter. I already knew that some program operations were supported by specific bytecodes, and sure enough, it wasn’t long before I encountered a tuple-specific example: the UNPACK_SEQUENCE instruction (and its predecessors in Python 1.5 and earlier, UNPACK_TUPLE and UNPACK_LIST). This instruction is generated when source code like the following is used:

(item, left, right) = tree

The above would translate to something like this:

              0 LOAD_FAST                0 (tree)
              3 UNPACK_SEQUENCE          3
              6 STORE_FAST               1 (item)
              9 STORE_FAST               2 (left)
             12 STORE_FAST               3 (right)

In CPython, you can investigate the generated bytecode instructions using the dis module, putting the code of interest in a function, and running the dis.dis function on the function object, which is how I generated the above output. Here, UNPACK_SEQUENCE makes an appearance, accessing the items in the tree sequence one by one, pushing them onto the evaluation stack, CPython’s interpreter being a stack-based virtual machine. And sure enough, the interpreter capitalises on the assumption that the operand of this instruction will most likely be a tuple, testing it and then using tuple-specific operations to get at the tuple’s items.
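
For example, a minimal session with CPython’s dis module looks like this (the function name is just for illustration):

import dis

def unpack(tree):
    (item, left, right) = tree
    return item

# Prints the bytecode, including the UNPACK_SEQUENCE 3 instruction.
dis.dis(unpack)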

Meanwhile, the translation of the same source code by the Lichen toolchain was rather less optimal. In the translation code, the unpacking operation from the input program is rewritten as a sequence of assignments, and something like the following was being generated:

item = tree[0]
left = tree[1]
right = tree[2]

This in turn gets processed, rewriting the subscript operations (indicated by the bracketing) to the following:

item = tree.__getitem__(0)
left = tree.__getitem__(1)
right = tree.__getitem__(2)

This in turn was being translated to C for the output program. But this is not particularly efficient: it uses a generic mechanism to access each item in the tree tuple, since it is possible that the only thing we may generally assert about tree is that it may provide the __getitem__ special method. The resulting code has to perform extra work to eventually arrive at the code that actually extracts an item from the tuple, and it will be doing this over and over again.

So, the first thing to try was to see if there was any potential for a speed-up by optimising this unpacking operation. I changed the generated C code emitted for the operations above to use the native tuple-accessing functions instead and re-ran the program. This was promising: the running time decreased from 48 seconds to 23 seconds; I had struck gold! But it was all very well demonstrating the potential. What now needed to be done was to find a general way of introducing something similarly effective that would work automatically for all programs.

Of course, I could change the initial form of the unpacking operations to use the __getitem__ method directly, but this is what was being produced anyway, so there would be no change whatsoever in the resulting program. However, I had introduced a Lichen-specific special method, used within the standard library, that accesses individual items in a given sequence instance. (It should be noted that in Python and Lichen, the __getitem__ method can accept a slice object and thus return a collection of values, not just one.) Here is what the rewritten form of the unpacking would now look like:

item = tree.__get_single_item__(0)
left = tree.__get_single_item__(1)
right = tree.__get_single_item__(2)

Compiling the program and running it gave a time of 34 seconds. We were now at parity with CPython. Ostensibly, the overhead in handling different kinds of item index (integers or slice objects) was responsible for 30% of the original version’s running time. What other overhead might there be, given that 34 seconds is still rather longer than 23 seconds? What does this other special method do that my quick hack does not?

It is always worth considering what the compiler is able to know about the program in these cases. Looking at the __get_single_item__ method for a tuple reveals something of interest:

    def __get_single_item__(self, index):
        "Return the item at the normalised (positive) 'index'."
        self._check_index(index)
        return list_element(self.__data__, index)

In the above, the index used to obtain an item is checked to see if it is valid for the tuple. Then, the list_element native function (also used on tuples) obtains the item from the low-level data structure holding all the items. But is there a need to check each index? Although we do need to make sure that accesses do not try to read “off the end” of the collection of items, accessing items that do not exist, we do not actually need to “normalise” the index.

Such normalisation is the process of interpreting negative index values as referring to items relative to the end of the collection, with -1 referring to the last item, -2 to the next last item, and so on, all the way back to -n referring to the first item (with n being the number of items in the collection). However, the code being generated does not use negative index values, and if we introduce a test to make sure that the tuple is large enough, then we should be able to get away with operations that use the provided index values directly. So I resolved to introduce another special method for this purpose, now rewriting the code as follows:

__builtins__.sequence._test_length(tree, 3)
item = tree.__get_single_item_unchecked__(0)
left = tree.__get_single_item_unchecked__(1)
right = tree.__get_single_item_unchecked__(2)

The _test_length function will raise an exception if the length is inappropriate. Meanwhile, the newly-introduced special method is implemented in a base class of both tuples and lists, and it merely employs a call to list_element for the provided index. Compiling the code with these operations now being generated and running the result yielded a running time of 27 seconds. Some general changes to the code generation, not specific to tuples, brought this down to 24 seconds (and the original version down to 44 seconds, with the object-based version coming down to 16 seconds).
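
Expressed in ordinary Python rather than Lichen’s actual library code, the difference between the two access styles is roughly the following (the function names here are illustrative only):

def get_checked(items, index):
    "Normalise a possibly negative 'index', check it, then access the item."
    if index < 0:
        index += len(items)
    if not 0 <= index < len(items):
        raise IndexError(index)
    return items[index]

def get_unchecked(items, index):
    "Access 'index' directly, relying on an earlier test of the length."
    return items[index]

Since the generated unpacking code only ever uses small, non-negative literal indexes, a single length test up front makes the per-item checks redundant.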

So, the progression in performance looks like this:

Program Version (Lichen Strategy)         Lichen                     CPython
Objects                                   -                          135 seconds
Tuples (__getitem__)                      48 seconds / 44 seconds    -
Tuples (__get_single_item__)              34 seconds                 34 seconds
Tuples (__get_single_item_unchecked__)    27 seconds / 24 seconds    -
Objects                                   22 seconds / 16 seconds    -

Here, the added effect of these other optimisations is also shown where measured.

Conclusions

As we saw with the handling of integers in CPython, optimisations also exist to tune tuple performance in that implementation of Python, and these also exist in other implementations such as Jython (see the unpackSequence method in the org.python.core.Py class, found in the org/python/core/Py.java file). Taking advantage of guarantees about accesses to tuples that are written explicitly into the program source, the generated code can avoid incurring unnecessary overhead, thus considerably speeding up the running time of programs employing tuple unpacking operations.

One might still be wondering why the object-based version of the program is faster than the tuple-based version for Lichen. This is most likely due to the ability of the compiler to make the attribute accesses on the tree object efficient based on deductions it has performed. Fewer low-level operations are performed to achieve the same result, and time is saved as a consequence. One might also wonder why the object-based version is slower when run by CPython. That would probably be due to the flexible but costly way objects are represented and accessed in that language implementation, and this was indeed one of my motivations for exploring other language design and implementation approaches with Lichen.

Monday, 16 July 2018

KDE Applications 18.08 branches created

TSDgeos' blog | 21:07, Monday, 16 July 2018

Make sure you commit anything you want to end up in the KDE Applications 18.08 release to them :)

We're already past the dependency freeze.

The Freeze and Beta are this Thursday, 19 July.

More interesting dates
August 2: KDE Applications 18.08 RC (18.07.90) Tagging and Release
August 9: KDE Applications 18.08 Tagging
August 16: KDE Applications 18.08 Release

https://community.kde.org/Schedules/Applications/18.08_Release_Schedule

Friday, 13 July 2018

The tasting of surströmming

Hook’s Humble Homepage | 08:40, Friday, 13 July 2018

For the uninitiated, Surströmming is an infamous heavily fermented herring.

Below is my experience with it.

Preparations

I “smuggled” (more on this below) it from Sweden a few months ago and on the evening before the Swedish national day1 my brother, a brave (or naïve) soul of a schoolmate of his, and I (not to mention our dog) opened it up near the river. We chose the riverside and the night time strategically, of course.

As was advised to us by a friend, we also took a bucket of water with us. Not – as some may wrongly assume – to vomit into, but to open the tin under water. Due to the fermentation continuing in the tin, it builds up pressure and when you open the tin, it inevitably and violently discharges the bile water. The best way to avoid it spraying your clothes is to open it under water.

The tasting

Since this was an impromptu action – other than the bucket – we came only half-prepared. As condiments we brought only a little bread, a shallot and three pickled gherkins.

The hint with the bucket was greatly appreciated, as the opening of the tin was the most vile part of the whole experience. So if you plan to try it, do get a bucket! It stopped not only the bile spraying us, but also diluted most of the putrid smell that was caught in the tin.

Once opened and aired, the contents of the tin were actually quite recognisable. Fish fillets swimming in brine. The brine was already brownish and a tiny bit gelatinous, but darkness helped us get past that.

As for the taste and texture, if you ever had pickled herrings before – it is like that on steroids, married with anchovies. Very soft, but still recognisable as fish, extremely salty, and with acidity that is very similar to that of good sauerkraut.

Washing the fish in the pickle jar helped take the edge off – both in terms of smell and saltiness. The onion as well as the pickles were a great idea, and bread was a must!

In summary, it is definitely an acquired taste, but I can very much see how this was a staple in the past and how it can still be used in cuisine. As a condiment, I think it could work well even in a modern dish.

We did go grab a beer afterwards to wash it down though.

P.S. Our dog was very enthusiastic about it the whole time and somewhat sullen that he did not get any.

The smuggling

Well, I did not actually smuggle it, per se, but it took me ¾ of an hour to get it cleared at the airport and in the end the actual carrier still did not know what I was carrying in my checked luggage. The airport, security, two information desks and the main ground stewardess responsible for my flight were all in on it though. And in my defence, the actual carrier does not have a policy against Surströmming on board (most probably because they have not thought about it yet).

As for acquiring this rotten fish in the first place, I saw it in a shop in Malmö and took the least deformed tin (along with other local specialities). When I came to the cash register, grinning like a madman in a sweetshop, I asked the friendly young clerk if she had any suggestions on how to prepare it, and she replied that she had never had it and barely knew anyone of her generation who had, apart from perhaps as a challenge.

hook out → more fish soon ;)


  1. The timing was purely by chance, but fitted perfectly :) 

Wednesday, 11 July 2018

Review of some Vahdam’s Masala Chai teas

Hook’s Humble Homepage | 17:00, Wednesday, 11 July 2018

Masala chai (commonly and somewhat falsely abbreviated to just “chai”) literally means “spice mix tea” – and this is what this review is about. I got myself a selection of Vahdam’s masala chais and kept notes of each one I tried. Some came in the Chai Tea Sampler and others I either already bought before or were a free sample that came with some other order.

Classical CTC BOP

CTC BOP is usually cheaper than more delicately processed whole leaves. Although the common perception is that it is of lower quality than e.g. FTGFOP or even just FOP or OP for that matter, the fact is that it is simply a different method with a different outcome. You can get away with breaking cheaper leaves more readily than whole ones, though.

Also bear in mind that while BOP is the most common broken leaf grade, there are several more.

It makes for a stronger brew and a more robust flavour – ideal for breakfast teas. The down-side is that it can coat your tongue. But if you want to recycle it, the second steep will be much lighter.

Original Chai Spiced Black Tea Masala Chai

The quintessential masala chai – the strength of the CTC BOP, paired with the classic mix of spices. A great daily driver and a true classic, but for my personal taste a tiny bit too light on the spice.

Ingredients: CTC BOP black tea, cardamom, clove, cinnamon, black pepper

Double Spice Masala Chai Spiced Black Tea

Same as India’s Original Masala Chai above, but with a bigger amount of spice. Of the two I definitely prefer this one.

Ingredients: CTC BOP black tea, cardamom, clove, cinnamon, black pepper

Fennel Spice Masala Chai Spiced Black Tea

Due to the fennel, the overall taste reminds me a lot of Slovenian cinnamon-honey cookies1, which we traditionally bake for Christmas. The odd bit is the cookies do not include the fennel at all, but most of the other spices in a classic masala chai (minus pepper). I suppose the fennel sways it a bit to the sweet honey-like side.

In short, I really liked the fennel variation – it could become a firm winter favourite of mine.

Ingredients: CTC BOP black tea, fennel, cardamom, clove, cinnamon, black pepper

Saffron Premium Masala Chai Spiced Black Tea

When I saw the package I thought that saffron was more of a marketing gimmick and I would only find a strand or two in the whole 10g package. But no! The saffron’s pungency punches you in the face – in a good way. It felt somewhat weird to put sugar and milk into it, so strong is the aroma.

Personally, I really like it and it does present an interesting savoury twist. It is a taste that some might love and others might hate though.

Ingredients: CTC BOP black tea, cardamom, cinnamon, clove, saffron, almonds

Earl Grey Masala Chai Spiced Black Tea

I am (almost) always game for a nice spin on an Earl Grey. In this case, the standard masala complements the bergamot surprisingly well and in a way where none of the two particularly stand out too much.

The combination works so well that it would feel wrong to call it a spiced-up Earl Grey or an earl-grey’d masala chai. It is a pleasantly lightly spiced, somewhat citrusy and fresh blend that goes well with or without milk.

Ingredients: CTC BOP black tea, bergamot, cardamom, cinnamon, clove, black pepper

Cardamom Chai Masala Chai Spiced Black Tea

Now, this one is interesting because it only has two ingredients – black tea and cardamom. While not as complex in aroma as most others, it is interesting how much freshness and sweetness a quality cardamom pod can carry.

I found it equally enjoyable with milk and sugar or without any of the two.

Ingredients: CTC BOP Assam black tea, cardamom

Sweet Cinnamon Massala Chai Black Tea

Similar to their Cardamom Chai, it is a masala chai with very few ingredients. The cinnamon and cardamom get along very well and while it lacks the complexity of a full masala/spice mix, it is a very enjoyable blend.

Recommended especially if you like your masala chai not too spicy, but sweet.

Ingredients: CTC BOP Assam black tea, cardamom, cinnamon

Orthodox black

What is described as “orthodox” usually means a whole leaf grade, starting with OP. These are much weaker than CTC, but in return bring out the more delicate flavours. It is therefore a bigger challenge to make sure the spices do not push the flavour of the tea too far into the back-seat.

Because the leaves are whole, as a rule you can get more steeps out of them than of broken leaves.

Assam Spice Masala Chai Spiced Black Tea

The more refined spin on the classic masala chai – with whole leaves of a quality Assam, it brings a smoothness and mellowness that the CTC cannot achieve. Because of that the spices are a bit more pronounced, which in my opinion is not bad at all. The quality of the leaf also results in a much better second steep compared to the CTC.

Most definitely a favourite for me.

Ingredients: FTGFOP1 Assam black tea, cardamom, cinnamon, clove, black pepper

Tulsi Basil Organic Masala Chai Spiced Black Tea

I have not had the pleasure of trying tulsi2 and regarding masala chais, this is a very peculiar blend. The taste of the Assam is quite well hidden behind the huge bunch of herbs. In fact, for some reason it reminds me more of the Slovenian Mountain Tea than of a masala chai.

In the end, the combination is quite pleasant and uplifting.

What I found fascinating is that it tastes very similar both with milk and sugar, and without any of the two.

Ingredients: organic Assam black tea, tulsi basil, cinnamon, ginger, clove, cardamom, black pepper, long pepper, bay leaves, nutmeg

Darjeeling Spice Masala Chai Spiced Black Tea

As expected, the Darjeeling version is much lighter and works well also without milk, or even sugar. Still, a tiny cloud of milk does give it that extra smoothness and mellowness. It is not over-spiced, and the balance is quite good. The taste of cloves (and perhaps pepper) is just slightly more pronounced, but as a change that is quite fun. It goes very well with the muscatel of the Darjeeling.

Ingredients: SFTGFOP1 Darjeeling black tea, cardamom, cinnamon, clove, black pepper

Oolong

Maharani Chai Spiced Oolong Tea

Despite the fancy abbreviation, IMHO the oolong tea itself in this blend is not one you would pay a high price for as a stand-alone tea. Still, I found the combination interesting. If nothing else, it is interesting to have a masala chai that can be drunk just as well without milk and sugar as with them.

Personally, I found the spice in this blend a bit too strong for the subtle tea it was combined with. I actually found the second steep much more enjoyable.

Ingredients: SFTGFOP1 Oolong tea, cardamom, cinnamon, clove, black pepper

Green

Kashmiri Kahwa Masala Chai Spiced Green Tea

A very enjoyable and refreshing blend, which I enjoyed without milk or sugar. The saffron is not as heavy as in the Saffron Premium Masala Chai, but goes really well with the almonds and the rest of the spices.

When I first heard of Kashmiri Kahwa, I saw a recipe that included rose buds, so in the future I might try adding a few.

Ingredients: FTGFOP1 green tea, cardamom, cinnamon, saffron, almonds

Green Tea Chai

As is to be expected, the green variety of the Darjeeling masala chai is even lighter than its black Darjeeling counterpart. The spice is well-balanced, with cinnamon and cloves perhaps being just a bit more accentuated. This effect is increased when adding milk.

It goes pretty well without milk or sugar and can be steeped multiple times. Adding either or both works fine as well though.

Quite an enjoyable tea, but personally, in this direction, I prefer either the Kashmiri Kahwa or the “normal” Darjeeling Spice masala chais.

Ingredients: FTGFOP1 darjeeling green tea, cardamom, cinnamon, clove, black pepper

hook out → hopefully back to blogging more often


  1. The Slovenian name is “medenjaki” and the closest thing the English cuisine has to offer is probably gingerbread. 

  2. For more about tulsi – or holy basil, as they call it in some places – see its Wikipedia entry

The Invisible Hole in Doughnut Economics

Blog – Think. Innovation. | 04:29, Wednesday, 11 July 2018

Doughnut Economics is one of the best books I have read in the past few years. I believe it should be standard reading for any economics student and fills in a big gap in “normal” economics theory. Kate Raworth excellently points out how and why the traditional models and theories do not work (anymore) and even better, replaces them with new concepts and pictures.

 

The Doughnut in a Nutshell

Doughnut Economics takes the planet and the human condition as the goal and explains how the economy should be serving these, instead of the other way around as it is nowadays portrayed by both economists and politicians.

I believe the strong point is that Raworth recognizes that only explaining how and why the old models and “laws” are wrong is not enough: she also replaces them with new models and, most importantly, pictures that are fit for the future, and encourages us to take a pen and start drawing as well.

As the quote goes: “All models are wrong but some are useful” (George Box). I believe this book gives us those much needed new models in a holistic view where the planet, humans, the economy and government are all included. Raworth does not come up with all of the material herself, but cleverly compiles contemporary critiques, thoughts and models into a comprehensive and easy to read book.

In that sense Doughnut Economics does not provide a clear-cut solution, which the author strongly emphasizes. There is no 1 answer, 1 solution, in this great and complex world. Thinking or wanting to believe that there is such a thing is actually the pitfall of the old theories.

Of all the great new economic models in the book, the one economic model that encompasses all and defines economy as serving all humans and limits it to planetary boundaries is the Doughnut, shown in the following image:

The Doughnut of social and planetary boundaries (2017)
Source: kateraworth.com

As you can see, the planetary boundaries (ecological ceiling) are defined as not “overshooting” in: climate change, ocean acidification, chemical pollution, nitrogen & phosphorus loading, freshwater withdrawals, land conversion, biodiversity loss, air pollution and ozone layer depletion.

And the social foundation for humanity is defined as: housing, networks, energy, water, food, health, education, income & work, peace & justice, political voice, social equity and gender equality.

The “safe and just space for humanity” is then defined as the space between providing the social foundation for everyone while not overshooting the ecological ceiling.

I find this a very attractive model, way more inspirational and ‘manageable’ than the often-used ‘model’ of Sustainable Development Goals, shown in the following image:

The Sustainable Development Goals by the United Nations
Source: sustainabledevelopment.un.org

I also like how the Doughnut does not include a reference to “Poverty” at all, since I believe that money is only a proxy (nobody eats money or lives in it) and limits us to talking inside the current economic system, of which the flawed monetary system is a big part.

Besides debunking existing (neo-liberal) economic theories and models and presenting fitter ones, Raworth gives many examples of pioneering initiatives that pop up around the world. These examples serve as inspiration and anecdotal evidence of these fitter economic models.

As a big proponent of open collaboration and free knowledge sharing, I was very happy to read how Raworth strongly puts forward the role of Open Source Design as an essential building block for the future. She gives familiar examples like the Global Village Construction Set, the Open Building Institute and Open Source Circular Economy.

The Invisible Hole

However, I also have some critical remarks on the book. The rest of this article will elaborate on those. But before that I want to emphasize my admiration for the incredible work of Raworth and it is only because she has put in this enormous amount of time and energy that I am able to write this at all.

My two main points of critique regard two of the core assumptions of the Doughnut Economics model. Namely, in striving for a safe and just space for humanity:

1. Can we actually safely provide a social foundation within planetary resources and without creating overshoot?

2. What exactly is a just social foundation in a practical sense and who decides that?

The book does not address these issues and worse, Raworth does not mention that the Doughnut is based on these assumptions. She does not, as academic writers normally do, provide a small portion of the writing on limitations and further research. So these implicit assumptions probably go by unnoticed to most readers.

I call this problem with the Doughnut model the “Invisible Hole”, as the two issues are interrelated and can be seen as one and the author has not made them explicit to the reader.

The following paragraphs will elaborate on my points of critique regarding these ‘safe’ and ‘just’ issues.

Can we be safe?

The “Can we be safe within planetary boundaries?”-issue already came to mind the first time I saw the Doughnut in an article on-line. The model states that we should at first provide in basic human needs for all people on the planet.

The author calls this the Social Foundation, providing in: sufficient food, clean water and decent sanitation, access to energy and clean cooking facilities, access to education and to healthcare, decent housing, a minimum income and decent work and access to networks of information and social support. Furthermore, it calls for achieving these with gender equality, social equity, political voice, and peace and justice (page 45).

This is the inner circle of the Doughnut. Then, there is some “wiggle room” in providing more within planetary boundaries, before we would go into “overshoot”, taking too much from the planet. This is the outer circle of the Doughnut.

But how does the author know if we can actually provide in all these basic human needs within planetary boundaries and have this wiggle room left?

Maybe the stuff needed for the Social Foundation already causes overshoot and the inner circle should actually be outside the outer circle?

For example, do we have sufficient materials to make things that will provide a basic provision of energy to all on the planet, forever?

This question was so obvious to me and I expected a solid answer while reading the book. But that answer did not come.

Well, actually I did find only one remark, regarding food supply, which is on page 56. Given that 30% to 50% of the world’s food is wasted, she states that hunger could be ended with just 10% of the food that never gets eaten.

This seems acceptable at face value, but the current food system, especially meat production, is a major ‘overshoot’ factor. So the calculation should instead be based on a regenerative distributive kind of food system that Raworth talks about, not the current food system. Furthermore, the calculation is based on caloric intake and leaves out quality (and transportation) of food.

Who decides what is just?

The “What is just?”-issue relates to my Safe-issue: the book assumes that “safe” and “just” go hand in hand and are possible well within planetary boundaries. But what exactly is a just amount of (for example) energy for each of us? And who decides that? Or is it by definition that “just” is below the ecological ceiling?

Since Raworth does not address what “just” means in a practical way, or how we should find out, we are left to guess. Actually, I only realized that I had been carrying an implicit personal WEIRD(*) assumption of what “just” means when I had almost finished the book.

(*) WEIRD: Western, Educated, Industrialized, Rich and Democratic (page 95).

Some primary very basic questions from my personal perspective are: in order to provide “just” access to basic human needs while being “safe”, should we give up watching TV? How about dishwashers? Showering every day? A house made with concrete? Flying?

Indicators of Shortfall

Although Raworth does not define, or at least elaborate on, what she means by a just space for humanity, or how or by whom that should be determined, the book does contain a table of data with indicators of shortfall in the appendix at the end (in my opinion: too little, too late).

These data are absolutely interesting to keep an eye on and these shortfalls should be overcome to enter the just space, but I would rather say they are necessary conditions than sufficient conditions. I believe that it is therefore useful to use other indicators as well to get a more complete ‘dashboard’ of where we are in achieving the social foundation.

As food for thought the table below contains a rewording of each indicator into a “just statement” contrasted with a brief summary of my personal WEIRD view.

This exercise is meant as a basis for a further dialogue on “what is just” and which other indicators would fit there. It is not meant to say that the author is wrong and I am right or anything like that.

Basic Need | Indicator “just statement” | My WEIRD interpretation
Food | Everybody is well nourished (now 11% are not) | Everybody has access to high-quality healthy food (this will then probably become a plant-based whole foods diet that is grown locally; with meat, fish, dairy and processed food available at much higher prices which not everyone can afford).
Water | Everybody has access to improved drinking water (now 9% do not) | Everybody has direct access to clean water sufficient for drinking, bathing and washing
Sanitation | Everybody has access to improved sanitation (now 32% do not) | Everybody has direct access to clean toilets where waste is regeneratively returned to its ecosystems
Energy | Everybody has access to electricity (now 17% do not) | Everybody has sufficient electricity for their lighting, washing machine and devices
Cooking facilities | Everybody has access to clean cooking facilities (now 38% do not) | Everybody has access to clean cooking facilities (so the same)
Education | Every adult is able to read and write (now 15% cannot); every child goes to school at least until 15 years old (now 17% do not) | Everybody has access to education, free information and guidance to become a productive, informed, critical-thinking citizen, with the opportunity of self-actualization.
Healthcare | Less than 25 in 1,000 babies (live births) die before age 5 (now 46% live in countries where this number is higher); everybody has a life expectancy of over 70 years (now 39% live in countries where this is less) | Everybody has access to modern healthcare
Housing | Nobody lives in slum housing (now 24% do) | Everybody can live in a safe, comfortable house with direct access to water, sanitation, energy and cooking facilities.
Income and work | Everybody lives above the international poverty limit of $3.10 per day (now 29% live below); all young people (15-24) who seek work can find it (now 13% cannot) | Everybody can readily provide in their basic human needs regardless of income and employment
Networks of information | Everybody has Internet access (now 57% do not) | Everybody has access to reliable, uncensored and broadband Internet
Social support | Everybody has someone to count on for help in times of trouble (now 24% do not) | Everybody has the opportunity to live in a supportive community.

 

This concludes my two main points of critique: the Invisible Hole. The following paragraphs contain some other remarks I have about the book and a bit of reflection.

What about local government and communities?

The book focuses on three units of analysis for a new economic system: households, companies and national governments. I found a lack of emphasis on local government and local communities.

I believe that shifting the unit of analysis from the “rational economic man” to that of the household with its more complex dynamics hidden to traditional economics is very valuable. Also, I agree that there is a strong but different kind of role to play by government in a safe and just society (I am not an anarchist).

However, I would say that the distributive and regenerative society that Raworth proposes would depend a lot on resilient and self-reliant local communities and local government acting as a ‘partner’ as is nowadays talked about a lot.

These communities and governments would need to figure out together how to let, amongst other things, local agriculture, manufacturing and the management of the Commons flourish. The book instead pretty much leaves out the local communities in which households are embedded, and its government focus is almost completely on nation states.

What about Mobility?

Regarding the basic human needs, the one that I found to be missing is Mobility. Would it not be just if people would be able to explore a bit more of the world than just their own town? Or should we regard this a luxury only for the privileged?

Do I have a point?

Asking myself “Do I have a point?” is a bit odd, because I would not have written this article if I thought I had not. Even though this is only a relatively short write-up of my own thoughts in my little spare time, I feel that the described Invisible Hole in the model is a valid critique.

But perhaps I am misunderstanding the model, or put too much emphasis on these aspects? What do you think?

I spent some time finding and reading other reviews of Doughnut Economics and see what they are about. Many reviews are superficial and contain nothing more than a brief introduction and praise.

But I did find some more interesting ones:

  • “Doughnut Economics: a Step Forward, but Not Far Enough” by Ugo Bardi: this review is the one I found closest to the points I am making. It addresses the issue of resource depletion and how it is missing from the book (and even contemporary dialogue in general). Furthermore, I believe the author is spot on questioning the logic in the circular shape of the Doughnut. The Doughnut might just as well have been a candy bar or a Reese’s Piece. The circular design of the model does not really have a function, other than perhaps that it is visually attractive and makes people think it relates to circularity (which much of the thinking of Raworth does, but the model does not show that).
  • “Review of Doughnut Economics – a new book you will need to know about” by Duncan Green: a nice summary of some key points of the book, along with the emotional opinions of the author and links to short informative videos of the book.
  • “‘DOUGHNUT ECONOMICS’: A HUMANE, 21ST CENTURY TAKE ON THE DISMAL SCIENCE” by James O’Shea: this author starts with a nice personal story as an economics journalist and how the book created that ‘aha moment’ for him, then summarizes some key statements and findings in the book.
  • “There’s a Hole in the Middle of Doughnut Economics” by Steven Horwitz: this is an interesting one, as the review pretty much dismisses the entire book, claiming that GDP/economic growth is the only proven way of reducing poverty and having more people with sufficient food, clothing, shelter etc. My cognitive dissonance with this claim immediately makes me think of all the pages Raworth spends on explanation, including (academic) references, and the lack of these in the article of Horwitz.

– Diderik

*** Special gratitude to Jaime Arredondo for doing an excellent review of a draft version of this article. Thanks to his contribution I was able to greatly improve the article’s structure, focus on the essentials and better articulate many points. Now that you have finished reading this article, go and read his awesome work on Bold & Open!

Photo by Jez Timms on Unsplash

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Sunday, 08 July 2018

Shared-Mode Executables in L4Re for MIPS-Based Devices

Paul Boddie's Free Software-related blog » English | 21:47, Sunday, 08 July 2018

I have been meaning to write about my device driver experiments with L4Re, following on from my porting exercises, but that exercise took me along various routes and I haven’t yet got back to documenting all of them. Meanwhile, one thing that did start to bother me was how much space the software was taking up when compiled, linked and ready to deploy.

Since each of my device drivers is a separate program, and since each one may be linked to various libraries, they each started to contribute substantially to the size of the resulting file – the payload – needing to be transferred to the device. At one point, I had to resize the boot partition on the memory card used by the Letux 400 notebook computer to make the payload fit in the available space.

The work done to port L4Re to the MIPS Creator CI20 had already laid the foundations for functioning payloads, and once the final touches were put in place to support the peculiarities of the Ingenic JZ4780 system-on-a-chip, it was possible to run both the conventional “hello” example which is statically linked to its libraries, as well as a “shared-hello” example which is dynamically linked to its libraries. The latter configuration of the program results in a smaller executable program and thus a smaller payload.

So it seemed clear that I might be able to run my own programs on the Letux 400 or Ben NanoNote with similar reductions in payload size. Unfortunately, nothing ever seems to be as straightforward as it ought to be.

Exceptional Obstructions and Observations

Initially, I set about trying one of my own graphical examples with the MODE variable set to “shared” in its Makefile. This, upon powering up, merely indicated that it had not managed to start up properly. Instead of a blank screen, the viewports set up by the graphical multiplexer, Mag, were still active and showing their usual blankness. But these regions did not then change in any way when I pressed keys on the keyboard (which is functionality that I will hopefully get round to describing in another article).

I sought some general advice from the l4-hackers mailing list, but quickly realised that to make any real progress, I would need a decent way of accessing the debugging output produced by the dynamic linker. This took me on a diversion that led to my debugging capabilities being strengthened with the availability of a textual output console on the screen of my devices. I still don’t like the idea of performing hardware modifications to get access to the serial console, so this is a useful and welcome alternative.

Having switched out the “hello” program with the “shared-hello” program in the system configuration and module list demonstrating the framebuffer terminal, I deployed the payload and powered up, but I did not get the satisfying output of the program operating normally. Instead, the framebuffer terminal appeared and rewarded me with the following message:

L4Re: rom/ex_hello_shared: Unhandled exception: PC=0x800000 PFA=8d7a LdrFlgs=0

This isn’t really the kind of thing you want to see. Having not had to debug L4Re or Fiasco.OC in any serious fashion for a couple of months, I was out of practice in considering the next step, but fortunately some encouragement arrived in a private e-mail from Jean Wolter. This brought the suggestion of triggering the kernel debugger, but since this requires serial console access, it wasn’t a viable approach. But another idea that I could use involved writing out a bit more information in the routine that was producing this output.

The message in question originates in the pkg/l4re-core/l4re_kernel/server/src/region.cc file, within the Region_map::op_exception method. The details it produces are rather minimal and generic: the program counter (PC) tells us where the exception occurred; the loader flags (LdrFlags) presumably tell us about the activity of the library loader; the mysterious “PFA” is supposedly the page fault address but it actually seemed to be the stack pointer address on these MIPS-based systems.

On their own, these details are not particularly informative, but I suppose that more useful information could quickly become fairly specific to a particular architecture. Jean suggested looking at the structure describing the exception state, l4_exc_regs_t (defined with MIPS-specific members in pkg/l4re-core/l4sys/include/ARCH-mips/utcb.h), to see what else I might dig up. This I did, generating the following:

pc=0x800000
gp=0x82dd30
sp=0x8d7a
ra=0x802f6c
cause=0x1000002c

A few things interested me, thus motivating my choice of registers to dump. The global pointer (gp) register tells us about symbols in the problematic code. Way back in the era of getting the CI20 to run GCC-generated code, I had changed the L4Re sources so that another register (t9) would be initialised correctly, this being what allows the gp register to be set up correctly within programs, and so it seemed entirely possible that I had rather too enthusiastically changed something that was now causing a problem.

The stack pointer (sp) is useful to check, just to see if it was located in a sensible region of memory, and here I discovered that this seemed to be the same as the “PFA” number. Oddly, the “PFA” seems to occupy the same place in the exception structure as any “bad virtual address” featuring in an address exception, and so I started to suspect that maybe the stack pointer was referencing the wrong part of memory. But this was partially ruled out by examining the value of the stack pointer in the “hello” example, which appeared to reference broadly the same part of memory. And, of course, the “hello” example works just fine.

In fact, the cause register indicated another kind of exception entirely, and it was one I was not really expecting: a “coprocessor unusable” exception indicating that coprocessor 1, typically a floating point arithmetic unit, was being illegally requested by an instruction. Here is how I interpreted that register’s value:

hex value   binary value
1000002c == 00010000000000000000000000101100
              --                     -----
              CE                     ExcCode

=> CE == 1; ExcCode == 11 (coprocessor unusable)
=> coprocessor 1 unusable
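
For completeness, here is the same decoding written as a small, standalone C fragment using the standard MIPS Cause register field positions (ExcCode in bits 2 to 6, CE in bits 28 and 29); this is just an illustration of the manual interpretation above, not code from the system itself.

#include <stdio.h>

int main(void)
{
    unsigned long cause = 0x1000002c;
    unsigned int exc_code = (cause >> 2) & 0x1f;  /* ExcCode: bits 6..2 */
    unsigned int ce = (cause >> 28) & 0x3;        /* CE: bits 29..28 */

    /* Prints "ExcCode=11 CE=1": coprocessor 1 unusable. */
    printf("ExcCode=%u CE=%u\n", exc_code, ce);
    return 0;
}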

Now, as I may have mentioned before, the hardware involved in this exercise does not support floating point instructions itself, and this is why I have configured compilers to use “soft-float” (software-based floating point arithmetic) support. It meant that I had to find places that might have wanted to use floating point instructions and eliminate those instructions somehow. Fortunately, only code generated by the compiler was likely to contain such instructions. But now I wondered if there weren’t some instructions of this nature lurking in places I hadn’t checked.

I had also thought to check the return address (ra) register. This tells us where the processor will jump to when it has finished executing the current routine, and since this is usually a matter of “returning” somewhere, it tells us something about the code that was being executed before the problematic routine was called. I figured that the work being done before the exception was probably going to be more important than the exception itself.

Floating Point Magic

Another debugging suggestion that now became unavoidable was to inspect the erroneous instruction. I noted above that this instruction was causing the processor to signal an illegal attempt to use an unusable – actually completely unavailable – coprocessor. Writing a numeric representation of the instruction to the display provided me with the following hexadecimal (base 16) value:

464c457f

This can be interpreted as follows in binary, with groups of bits defined for interpretation according to the MIPS instruction set architecture, and with tentative interpretations of these groups provided beneath:

010001 10010  01100 01000 10101 111111
COP1   rs/fmt rt/ft rd/fs       C.ABS.NGT

The first group of bits is the opcode field which is interpreted as a coprocessor 1 (COP1) opcode. Should we then wish to consider what the other groups mean, we might then examine the final group which could indicate a comparison instruction. However, this becomes rather hypothetical since the processor will most likely interpret the opcode field and then decide that it cannot handle the instruction.
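
To make that first group a little more concrete, here is a hypothetical C fragment extracting the opcode field in the way a disassembler might, the opcode being the top six bits of a MIPS instruction word:

#include <stdio.h>

int main(void)
{
    unsigned long insn = 0x464c457f;
    unsigned int opcode = (insn >> 26) & 0x3f;  /* opcode: bits 31..26 */

    /* Prints "opcode=0x11", the COP1 opcode, explaining why hardware
       without a floating point unit raises an exception here. */
    printf("opcode=0x%02x\n", opcode);
    return 0;
}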

So, I started to look for places where the instruction might have been written, but no obvious locations were forthcoming. One peculiar aspect of all this is that the location of the instruction is at a rather “clean” location – 0x800000 – and some investigations indicated that this is where the library containing the problematic code gets loaded. I actually don’t remember precisely how I figured this out, but I think it was as follows.

I had looked at linker scripts that might give some details of the location of program objects, and one of them (pkg/l4re-core/ldscripts/ARCH-mips/main_dyn.ld) seemed to be related. It gave an address for the code of 0x400000. This made me think that some misconfiguration or erroneous operation was putting the observed code somewhere it shouldn’t be. But changing this address in the linker script just gave another exception at 0x400000, meaning that I had disrupted something that was intentional and probably working fine.

Meanwhile, emitting the t9 register’s value from the exception state yielded 0x800000, indicating that the calling routine had most likely jumped straight to that address, not to another address with execution having then proceeded normally until reaching the exception location. I decided to look at the instructions around the return address, these most likely being the ones that had set up the call to the exception location. Writing these locations out gave me some idea about the instructions involved. Below, I provide the stored values and their interpretations as machine instructions:

8f998250 # lw $t9, -32176($gp)
24a55fa8 # addiu $a1, $a1, 0x5fa8
0320f809 # jalr $t9
24844ee4 # addiu $a0, $a0, 0x4ee4
8fbc0010 # lw $gp, 16($sp)

One objective of doing this, apart from confirming that a jump instruction (jalr) was involved, with the t9 register being employed as is the convention with MIPS code, was to use the fragment to identify the library that was causing the error. A brute-force approach was employed here, generating “object dumps” from the library files and writing them out as files in a new directory:

mkdir tmpdir
for FILENAME in mybuild/lib/mips_32/l4f/* ; do
    mipsel-linux-gnu-objdump -d "$FILENAME" > tmpdir/`basename "$FILENAME"`
done

The textual dump files were then searched for the instruction values using grep, narrowing down the places where these instructions were found in consecutive locations. This yielded the following code, found in the libld-l4.so library:

    2f5c:       8f998250        lw      t9,-32176(gp)
    2f60:       24a55fa8        addiu   a1,a1,24488
    2f64:       0320f809        jalr    t9
    2f68:       24844ee4        addiu   a0,a0,20196
    2f6c:       8fbc0010        lw      gp,16(sp)

The integer operands for the addiu instructions are the same, of course, just being shown as decimal rather than hexadecimal values. Now, we previously saw that the return address (ra) register had the value 0x802f6c. When a MIPS processor executes a jump instruction, it will also fetch the following instruction and execute it, this being a consequence of the way the processor architecture is designed.

So, the instruction after the jump, residing in what is known as the “branch delay slot”, is not the instruction that will be visited upon returning from the called routine. Instead, it is the instruction after that. Here, we see that the return address from the jump at location 0x2f64 would be two locations later at 0x2f6c. This provides a kind of realisation that the program object – the libld-l4.so library – is positioned in memory at 0x800000: 0x2f6c added to 0x800000 gives the value of ra, 0x802f6c.
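
Stated as a small C calculation, with the values taken from the register dump and the object dump above, the reasoning that recovers the library’s load address looks like this:

#include <stdio.h>

int main(void)
{
    unsigned long ra = 0x802f6c;      /* return address from the exception state */
    unsigned long offset = 0x2f6c;    /* location of that instruction in the object dump */

    /* Prints "load base = 0x800000", the position of libld-l4.so in memory. */
    printf("load base = 0x%lx\n", ra - offset);
    return 0;
}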

And this means that the location of the problematic instruction – the cause of our exception – is the first location within this object. Anyone with any experience of this kind of software will have realised by now that this doesn’t sound like a healthy situation: the first location within a library is not actually going to be code because these kinds of objects are packaged up in a way that permits their manipulation by other programs.

So what is the first location of a library used for? Since such objects employ the Executable and Linkable Format (ELF), we can take a look at some documentation. And we see that the first location is used to identify the kind of object, employing a “magic number” for the purpose. And that magic number would be…

464c457f

In the little-endian arrangement employed by this processor, the stored bytes are as follows:

7f
45 ('E')
4c ('L')
46 ('F')

The value was not a floating point instruction at all, but the magic number at the start of the library object! It was something of a coincidence that such a value would be interpreted as a floating point instruction, an accidentally convenient way of signalling something going badly wrong.

Missing Entries

The investigation now started to focus on how the code trying to jump to the start of the library had managed to get this incorrect address and what it was trying to do by jumping to it. I started to wonder if the global pointer (gp), whose job it is to reference the list of locations of program routines and other global data, might have been miscalculated such that attempts to load the addresses of routines would then be failing with data being fetched from the wrong places.

But looking around at code fragments where the gp register was being calculated, I found that they seemed set to calculate the correct values based on assumptions about other registers. For example, from the object dump for libld-l4.so:

00002780 <_ftext>:
    2780:       3c1c0003        lui     gp,0x3
    2784:       279cb5b0        addiu   gp,gp,-19024
    2788:       0399e021        addu    gp,gp,t9

Assuming that the processor has t9 set to 0x2780 and then jumps to the value of t9, as is the convention, the following calculation is then performed:

gp = 0x30000 (since lui loads the "upper" half-word)
gp = gp - 19024 = 0x30000 - 19024 = 0x2b5b0
gp = gp + t9 = 0x2b5b0 + 0x2780 = 0x2dd30
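
As a quick sanity check, the same arithmetic can be reproduced in C, with the addiu immediate treated as the sign-extended value the processor would use:

#include <stdio.h>

int main(void)
{
    unsigned long t9 = 0x2780;
    unsigned long gp;

    gp = 0x3UL << 16;   /* lui gp,0x3: load the upper half-word */
    gp -= 19024;        /* addiu gp,gp,-19024 */
    gp += t9;           /* addu gp,gp,t9 */

    /* Prints "gp = 0x2dd30". */
    printf("gp = 0x%lx\n", gp);
    return 0;
}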

Using the nm tool, which tells us about symbols in program objects, it was possible to check this value:

mipsel-linux-gnu-nm -n mybuild/lib/mips_32/l4f/libld-l4.so

This shows the following at the end of the output:

0002dd30 d _gp

Also appearing somewhat earlier in the output is this, telling us where the table of symbols starts (as well as the next thing in the file):

00025d40 a _GLOBAL_OFFSET_TABLE_
00025f90 g __dso_handle

Some digging around in the L4Re source code gave a kind of confirmation that the difference between _gp and _GLOBAL_OFFSET_TABLE_ (0x2dd30 - 0x25d40 = 0x7ff0, or 32752) was to be expected. Here is what I found in the pkg/l4re-core/uclibc/lib/contrib/uclibc/ldso/ldso/mips/elfinterp.c file:

#define OFFSET_GP_GOT 0x7ff0

If gp, when recalculated in other places, ended up getting the same value, there didn’t seem to be anything wrong with it. Some quick inspections of neighbouring calculations indicated that this wasn’t likely to be the problem. But what about the values used in conjunction with gp? Might they be having an effect? In the case of the erroneous jump, the following calculation is involved:

lw t9,-32176(gp) => load word into t9 from the location at gp - 32176
                 => ...               from 0x2dd30 - 32176
                 => ...               from 0x25f80

The calculated address, 0x25f80, is after the start of _GLOBAL_OFFSET_TABLE_, which provides entries for program routines and other things, and that is a good sign. What is perhaps more troubling is how far after the start of the table such a value is. In the above output, another symbol (__dso_handle) indicates something that is located at the end of the table. Now, although its address is still greater than the one computed above, meaning that the computation does not cause us to stray off the end of the table, the computed address is suspiciously close to the end.

There was nothing else to do but to have a look at the table contents themselves, and here it was rather useful to have a way of displaying a number of values on the screen. At this point, we have to note that the addresses in use in the running system are adjusted according to the start of the loaded object, so that the table is positioned at 0x25d40 in the object dump, but in the running system we would see 0x800000 + 0x25d40 and thus 0x825d40 instead.

What I saw was that the table contained entries that varied in the expected way right up until 0x825f60 (corresponding to 0x25f60 in the object dump), this being only 0x30 (or 48 bytes, or 12 entries) before the end of the table. All remaining entries starting at 0x825f64 (corresponding to 0x25f64) then yielded a value of 0x800000, apart from 0x825f90 (corresponding to 0x25f90, right at the end of the table), which yielded itself.

Since the calculated address above (0x25f80, adjusted to 0x825f80 in the running system) lies in this final region, we now know the origin of this annoying 0x800000: it comes from entries at the end of the table that do not seem to hold meaningful values. Indeed, the object dump for the library seemed to skip over this region of the table entirely, presumably because it was left uninitialised. And using the readelf tool with the --relocs option to show “relocations”, which apply to this table, it appeared that the last entries rather confirmed my observations:

00025d34  00000003 R_MIPS_REL32
00025f90  00000003 R_MIPS_REL32

Clearly, something is missing from this table. But since something has to adjust the contents of the table to add the “base address”, 0x800000, to the entries in order to provide valid addresses within the running program, what started to intrigue me was whether the code that performed this adjustment had any idea about these missing entries, and how this code might be related to the code causing the exception situation.

Routines and Responsibilities

While considering the nature of the code causing the exception, I had been using the objdump utility with the -d (disassemble) and -D (disassemble all) options. These provide details of program sections, code routines and the machine instructions themselves. But Jean pointed out that if I really wanted to find out which part of the source code was responsible for producing certain regions of the program, I might use a combination of options: -d, -l (line numbers) and -S (source code). This was almost a revelation!

However, the code responsible for the jump to the start of the library resisted such measures. A large region of code appeared to have no corresponding source, suggesting that it might be generated. Here is how it starts:

_ftext():
    2dac:       00000000        nop
    2db0:       3c1c0003        lui     gp,0x3
    2db4:       279caf80        addiu   gp,gp,-20608
    2db8:       0399e021        addu    gp,gp,t9
    2dbc:       8f84801c        lw      a0,-32740(gp)
    2dc0:       8f828018        lw      v0,-32744(gp)

There is no function defined in the source code with the name _ftext. However, _ftext is defined in the linker script (in pkg/l4re-core/ldscripts/ARCH-mips/main_rel.ld) as follows:

  .text           :
  {
    _ftext = . ;
    *(.text.unlikely .text.*_unlikely .text.unlikely.*)
    *(.text.exit .text.exit.*)
    *(.text.startup .text.startup.*)
    *(.text.hot .text.hot.*)
    *(.text .stub .text.* .gnu.linkonce.t.*)
    /* .gnu.warning sections are handled specially by elf32.em.  */
    *(.gnu.warning)
    *(.mips16.fn.*) *(.mips16.call.*)
  }

If you haven’t encountered linker scripts before, then you probably don’t want to spend too much time looking at this, linker scripts being frustratingly terse and arcane, but the essence of the above is that a bunch of code is stuffed into the .text section, with _ftext being assigned the address of the start of all this code. Now, _ftext in the linker script corresponds to a particular label in the object dump (which we saw earlier was positioned at 0x2780) whereas the _ftext function in the code occurs later (at 0x2dac, above). After the label but before the function is code whose source is found by objdump.

So I took the approach of removing things from the linker script, ultimately removing everything from the .text section apart from the assignment to _ftext. This removed the annotated regions of the code and left me with only the _ftext function. It really did appear that this was something the compiler might be responsible for. But where would I find the code responsible?

One hint that was present in the _ftext function code was the use of another identified function, __cxa_finalize. Searching the GCC sources for code that might use it led me to the libgcc sources and to code that invokes destructor functions upon program exit. This wasn’t really what I was looking for, but the file containing it (libgcc/crtstuff.c) would prove informative.

Back to the Table

Jean had indicated that there might be a difference in output between compilers, and that certain symbols might be produced by some but not by others. I investigated further by using the readelf tool with the -a option to show almost everything about the library file. Here, the focus was on the global offset table (GOT) and information about the entries. In particular, I wanted to know more about the entry providing the erroneous 0x800000 value, located at (gp - 32176). In my output I saw the following interesting thing:

 Global entries:
   Address     Access  Initial Sym.Val. Type    Ndx Name
  00025f80 -32176(gp) 00000000 00000000 FUNC    UND __register_frame_info

This seems to tell us what the program expects to find at the location in question, and it indicates that the named symbol is presumably undefined. There were some other undefined symbols, too:

_ITM_deregisterTMCloneTable
_ITM_registerTMCloneTable
__deregister_frame_info

Meanwhile, Jean was seeing symbols with other names:

__register_frame_info_base
__deregister_frame_info_base

During my perusal of the libgcc sources, I had noticed some of these symbols being tested to see if they were non-zero. For example:

  if (__register_frame_info)
    __register_frame_info (__EH_FRAME_BEGIN__, &object);

These fragments of code appear to be located in functions related to program initialisation. And it is also interesting to note that back in the library code, after the offending table entry has been accessed, there are tests against zero:

    2f34:       3c1c0003        lui     gp,0x3
    2f38:       279cadfc        addiu   gp,gp,-20996
    2f3c:       0399e021        addu    gp,gp,t9
    2f40:       27bdffe0        addiu   sp,sp,-32
    2f44:       8f828250        lw      v0,-32176(gp)
    2f48:       afbc0010        sw      gp,16(sp)
    2f4c:       afbf001c        sw      ra,28(sp)
    2f50:       10400007        beqz    v0,2f70 <_ftext+0x7f0>

Here, gp gets set up correctly, v0 is set to the value of the table entry, which we now believe refers to __register_frame_info, and the beqz instruction tests this value against zero, skipping ahead if it is zero. Does that not sound a bit like the code shown above? One might think that the libgcc code would handle an uninitialised table entry, and maybe it is intended to do so, but the table entry ends up getting adjusted to 0x800000, presumably as part of the library loading process.

I think that the most relevant function here for the adjustment of these entries is _dl_perform_mips_global_got_relocations which can be found in the pkg/l4re-core/uclibc/lib/contrib/uclibc/ldso/ldso/ldso.c file as part of the L4Re C library code. It may well have changed the entry from zero to this erroneous non-zero value, merely because the entry lies within the table and is assumed to be valid.
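
A heavily simplified sketch of that kind of adjustment, and certainly not the actual uClibc code, shows how an entry that was never given a meaningful value ends up pointing at the start of the library:

/* Hypothetical illustration only: add the load base to each global GOT
   entry, as a dynamic linker must when relocating a library. */
void adjust_global_got(unsigned long *got, unsigned long entries,
                       unsigned long load_base)
{
    unsigned long i;
    for (i = 0; i < entries; i++)
        got[i] += load_base;  /* an uninitialised zero entry becomes 0x800000 */
}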

So, as a consequence, the libgcc code acts as if it has a genuine __register_frame_info function to call, and doing so causes the jump to the start of the library object and the exception. Maybe the code is supposed to be designed to handle missing symbols, those symbols potentially being deliberately omitted, but it doesn’t function correctly under these particular circumstances.

Symbol Restoration

However, despite identifying this unfortunate interaction between C library and libgcc, the matter of a remedy remained unaddressed. What was I to do about these missing symbols? Were they supposed to be there? Was there a way to tell libgcc not to expect them to be there at all?

In attempting to learn a bit more about the linking process, I had probably been through the different L4Re packages several times, but Jean then pointed me to a file I had seen before, perhaps before I had needed to think about these symbols at all. It contained “empty” definitions for some of the symbols but not for others. Maybe the workaround or even the solution was to just add more definitions corresponding to the symbols the program was expecting? Jean thought so.

So, I added a few things to the file (pkg/l4re-core/ldso/ldso/fixup.c):

/* Empty definitions for the symbols that the program was expecting
   but that were previously left undefined. */

void __deregister_frame_info(void);
void __register_frame_info(void);
void _ITM_deregisterTMCloneTable(void);
void _ITM_registerTMCloneTable(void);

void __deregister_frame_info(void) {}
void __register_frame_info(void) {}
void _ITM_deregisterTMCloneTable(void) {}
void _ITM_registerTMCloneTable(void) {}

I wasn’t confident that this would fix the problem. After all the investigation, adding a few lines of trivial code to one file seemed like too easy a way to fix what seemed like a serious problem. But I checked the object dump of the library, and suddenly things looked a bit more reasonable. Instead of referencing an uninitialised table entry, objdump was able to identify the jump target as __register_frame_info:

    2e14:       8f828040        lw      v0,-32704(gp)
    2e18:       afbc0010        sw      gp,16(sp)
    2e1c:       afbf001c        sw      ra,28(sp)
    2e20:       10400007        beqz    v0,2e40 <_ftext+0x7f0>
    2e24:       8f85801c        lw      a1,-32740(gp)
    2e28:       8f84803c        lw      a0,-32708(gp)
    2e2c:       8f998040        lw      t9,-32704(gp)
    2e30:       24a55fa8        addiu   a1,a1,24488
    2e34:       04111c39        bal     9f1c <__register_frame_info>

Of course, the code had been reorganised and so things were no longer in quite the same places, but in the above, (gp - 32704) is actually a reference to __register_frame_info, and although this value gets tested against zero as before, we can see that enough is now known about the previously-missing symbol that a branch directly to the location of the function has been incorporated, rather than a jump to the address stored in the table.

And sure enough, powering up the Letux 400 produced the framebuffer terminal showing the expected output:

Hi World! (shared)

It had been a long journey for such a modest reward, but thanks to Jean’s help and a bit of perseverance, I got there in the end.

Saturday, 07 July 2018

Investigating CPython’s Optimisation Trickery for Lichen

Paul Boddie's Free Software-related blog » English | 13:39, Saturday, 07 July 2018

For those of us old enough to remember how the Python programming language was twenty or so years ago, nostalgic for a simpler and kinder time, looking to escape the hard reality of today’s feature enhancement arguments, controversies, general bitterness and recriminations, it can be informative to consider what was appealing about Python all those years ago. Some of us even take this slightly further and attempt to formulate our own take on the language, casting aside things that do not appeal or that seem superfluous, needlessly confusing, or redundant.

My own Python variant, called Lichen, strips away quite a few things from today’s Python but probably wouldn’t seem so different to twentieth century Python. Since my primary objective with Lichen is to facilitate static analysis so that observations can be made about program behaviour before running the program, certain needlessly-dynamic features have been eliminated. Usually, when such statements about feature elimination are made, people seize upon them to claim that the resulting language is statically typed, but this is deliberately not the case here. Indeed, “duck typing” is still as viable as ever in Lichen.

Ancient Dynamism

An example of needless dynamism in Python can arguably be found with the __getattr__ and __setattr__ methods, introduced as far back as Python 1.1. They allow accesses to attributes via instances to be intercepted and values supposedly provided by these attributes to be computed on demand. In effect, these methods support virtual or dynamic attributes that are not really present on an object. Here’s an extract from one of the Python 1.2 demonstration programs (Demo/pdist/client.py):

        def __getattr__(self, name):
                if name in self._methods:
                        method = _stub(self, name)
                        setattr(self, name, method) # XXX circular reference
                        return method
                raise AttributeError, name

In this code, if an instance of the Client class (from which this method is taken) is used to access an attribute called hello, then this method will see if the string “hello” is found in the instance’s _methods attribute, and if so it produces a special object that is then returned as the value for the hello attribute. Otherwise, it raises an exception to indicate that this virtual attribute is not recognised. (Here, the setattr call stores the special object as a genuine attribute in order to save this method from being called again for the same attribute.)

Admittedly, this is quite neat, and it quickly becomes tempting to use such facilities everywhere – this is very much the story of Python and its development – but such things make reasoning about programs more difficult. We cannot know what attributes the instances of this Client class may have without running the program. Indeed, to find out in this case, running the program is literally unavoidable since the _methods attribute is actually populated using the result of a message received over the network!

But even in simpler cases, it can readily be intuitively understood that finding out the supported attributes of instances whose class offers such a method might involve a complicated exercise looking at practically all the code in a program. Despite all the hard work, this exercise will nevertheless produce unreliable or imprecise results. It says something about the fragility of such facilities that properties were later added to Python 2.2 to offer a more declarative alternative.

(It also says something about Python 3 that the cornucopia of special mechanisms for dynamically exposing attributes is apparently still present, despite much having been said about Python 3 remedying such Python 1 and 2 design artefacts.)

Hidden Motives

With static analysis, we might expect to be able to deduce which attributes are provided by class instances without running a program, this potentially allowing us to determine the structure of program objects and to detect errors around their use. But another objective with Lichen is to see how constraints on the language may be used to optimise the performance of programs. I will not deny that performance has always been an interest of mine with respect to Python and its implementations, and I imagine that many compiler and virtual machine implementers have been motivated by such concerns throughout the years.

The deductions made during static analysis can potentially allow us to generate executable programs that perform the same work more efficiently. For example, if it is known that a collection of method calls on an object identify that object as being of a certain type, we can then employ more efficient ways of calling those methods. So, for the following code…

        while number:
            digits.append(_hexdigits[number % base])
            number = number / base

…if we can assert that digits is a list, then considering that we might normally generate code for the append method call as something like this…

__load_via_class(digits, append)(...)

…where the __load_via_class operation has to go and find the append method via the class of digits (or, in some cases, even look for the append attribute on the object first), we might instead be able to generate code like this…

__fn_list_append(digits, ...)

…where __fn_list_append is a genuine C function and the digits instance is passed directly to it, together with the elided arguments. When we can get this kind of thing to happen, it can obviously be very satisfying. Various Python implementations and tools also attempt to make method calls efficient in their own ways, some possibly relying on run-time caches that short-circuit the exercise of finding the method.

Magic Numbers

It can be informative to compare the performance of code generated by the Lichen toolchain and the performance of the same program running in the CPython virtual machine, Python and Lichen being broadly compatible (but not identical). As I noted in my summary of 2017, the performance of generated programs was rather disheartening to see at first. I needed to employ profiling to discover where the time was being spent in my generated code that seemed not to be a comparable burden on CPython.

The practicalities of profiling are definitely beyond the scope of this article, but what I did notice was just how much time was being spent allocating space in memory for integers used by programs. I recalled that Python does some special things with integers itself, and so I set about looking for the details of its memory allocation strategies.

It turns out that integers are allocated in a simplified fashion for performance reasons, instead of using the more general allocator that is compatible with garbage collection. And not just that: a range of “small” integers is also allocated in advance when programs run, so that no time is wasted repeatedly allocating objects for numbers that would likely see common use. The details of this can be found in the Objects/intobject.c file in CPython 1.x and 2.x source distributions. Even CPython 1.0 employs this technique.
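
The general idea can be sketched as follows, although the names and range bounds here are invented for illustration and are not CPython’s own:

#include <stdlib.h>

/* Hypothetical sketch of a preallocated small-integer cache. */
typedef struct { long value; } IntObject;

#define SMALL_MIN (-5)
#define SMALL_MAX 257

static IntObject small_ints[SMALL_MAX - SMALL_MIN];

static void init_small_ints(void)
{
    long v;
    for (v = SMALL_MIN; v < SMALL_MAX; v++)
        small_ints[v - SMALL_MIN].value = v;
}

static IntObject *make_int(long value)
{
    if (value >= SMALL_MIN && value < SMALL_MAX)
        return &small_ints[value - SMALL_MIN];  /* no allocation needed */

    IntObject *obj = malloc(sizeof(IntObject)); /* simplified general case */
    if (obj)
        obj->value = value;
    return obj;
}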

At first, I thought I had discovered the remedy for my performance problems, but replicating similar allocation arrangements in my run-time code demonstrated that such a happy outcome was not to be so easily achieved. As I looked around for what other special treatment CPython does, I took a closer look at the bytecode interpreter (found in Python/ceval.c), which is the mechanism that takes the compiled form of Python programs (the bytecode) and evaluates the instructions encoded in this form.

My test programs involved simple loops like this:

i = 0
while i < 100000:
    f(i)
    i += 1

And I had a suspicion that apart from allocating new integers, the operations involved in incrementing them were more costly than they were in CPython. Now, in Python versions from 1.1 onwards, the special operator methods are supported for things like the addition, subtraction, multiplication and division operators. This could conceivably lead to integer addition being supported by the following logic in one of the simpler cases:

# c = a + b
c = a.__add__(b)

But from Python 1.5 onwards, some interesting things appear in the CPython source code:

                case BINARY_ADD:
                        w = POP();
                        v = POP();
                        if (PyInt_Check(v) && PyInt_Check(w)) {
                                /* INLINE: int + int */
                                register long a, b, i;
                                a = PyInt_AS_LONG(v);
                                b = PyInt_AS_LONG(w);
                                i = a + b;

Here, when handling the bytecode for the BINARY_ADD instruction, which is generated when the addition operator (plus, “+”) is used in programs, there is a quick test for two integer operands. If the conditions of this test are fulfilled, the result is computed directly (with some additional tests being performed for overflows not shown above). So, CPython was special-casing integers in two ways: with allocation tricks, and with “fast paths” in the interpreter for cases involving integers.

The Tag Team

My response to this was similarly twofold: find an efficient way of allocating integers, and introduce faster ways of handling integers when they are presented to operators. One option that the CPython implementers actually acknowledge in their source code is that of employing a different representation for integers entirely. CPython may have too much legacy baggage to make this transition, and Python 3 certainly didn’t help the implementers to make the break, it would seem, but I have a bit more flexibility.

The option in question is the so-called tagged pointer approach where instead of having a dedicated object for each integer, with a pointer being used to reference that object, the integers themselves are represented by a value that would normally act as a pointer. But this value is not actually a valid pointer at all since it has its lowest bit set, violating an alignment restriction that is imposed by some processor architectures and that can be self-imposed on other systems as well, merely ruling out the positioning of objects at odd-numbered addresses.

So, we might have the following example representations on a 32-bit architecture:

hex value      31..............................0 (bits)
0x12345678 == 0b00010010001101000101011001111000 => pointer
0x12345679 == 0b00010010001101000101011001111001 => integer

Clearing bit 0 and shifting the other bits one position to the right yields the actual integer value, which in the above case will be 152709948. It is conceivable that in future I might sacrifice another bit for encoding other non-pointer values, especially since various 32-bit architectures require word-aligned addresses, where words are positioned on boundaries that are multiples of four bytes, meaning that the lowest two bits would have to be zero for a pointer to be valid.
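
A minimal sketch of this representation, assuming 32-bit values and using function names of my own choosing rather than anything from the Lichen run-time, might look like this:

#include <stdint.h>
#include <stdio.h>

/* Encode an integer by shifting it left and setting the lowest bit;
   decode by shifting right again. Anything with a clear lowest bit is
   treated as a pointer. */
static uint32_t tag_int(int32_t value)   { return ((uint32_t) value << 1) | 1; }
static int32_t  untag_int(uint32_t word) { return (int32_t) word >> 1; }

int main(void)
{
    /* Prints 152709948, as computed in the text above. */
    printf("%d\n", untag_int(0x12345679));

    /* And tagging that value gives back 0x12345679. */
    printf("0x%x\n", tag_int(152709948));
    return 0;
}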

Albeit with some additional cost incurred when handling pointers, we can with such an approach distinguish integers from other types rapidly and conveniently, which makes the second part of our strategy more efficient as well. Here, we need to identify and handle integers for the arithmetic operators, but unlike CPython, where this happens to be done in an interpreter loop, we have no such loop. Instead we generate code for such operators that simply invokes some existing functions (written in the Lichen language and compiled to C code, another goal being to write as much of the language system in Lichen itself, not C).

It would be rather wasteful to generate tests for integers in addition to these operator function calls every time such a call is made, but the tests can certainly reside within those functions instead. So, here is what we might do for the addition operator:

def add(a, b):
    if is_int(a) and is_int(b):
        return int_add(a, b)
    return binary_op(a, b, lambda a: a.__add__, lambda b: b.__radd__)

This code leaves me with a bit of explaining to do! Last things first: the final statement is the general case of dispatching to the operands and calling an appropriate operator method, with the binary_op function performing the logic in conjunction with the operands and some lambda functions that defer access to the special methods until they are really needed. It is probably best just to trust me that this does the job!

Before the generic operator method dispatch, however, is the test of the operands to see if they are both integers, and this should be vaguely familiar from the CPython source code. A special function is then called to add them efficiently. Note that we couldn’t use the addition (plus, “+”) operator because this code is meant to be handling that, and it would most likely send us on an infinitely recursive loop that never gets round to performing the addition! (I don’t treat the operator as a special case in this code, either. This code is compiled exactly like any other code written in the language.)

The is_int function is what I call “native”, meaning that it is implemented using low-level operations, in this case ones that test the representation of the argument to see if it has its lowest bit set, returning a true value if so. Meanwhile, int_add is largely equivalent to the addition operation seen in the CPython source code above, just with different details involved.
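
Continuing with the hypothetical C representation sketched earlier, is_int and int_add might boil down to something like the following, these being my own illustrative equivalents rather than the actual native functions:

#include <stdint.h>

/* An object reference is either a tagged integer (lowest bit set) or a
   pointer to an object (lowest bit clear). */
typedef uint32_t ref_t;

static int is_int(ref_t ref)
{
    return ref & 1;  /* the lowest bit distinguishes integers from pointers */
}

static ref_t int_add(ref_t a, ref_t b)
{
    /* Untag both operands, add them, and retag the result; a fuller
       implementation would also detect overflow and fall back to a more
       general representation. */
    int32_t result = ((int32_t) a >> 1) + ((int32_t) b >> 1);
    return ((uint32_t) result << 1) | 1;
}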

Progress and Reflections

Such adjustments made quite a difference to the performance of my generated code. They do also make some sense, too. Integers are used a lot in programs, being used not only for general arithmetic, but also for counters, index values for things like lists, tuples, strings and other collections, plus a range of other mundane things whose performance can be overlooked until it proves to be suboptimal. Python has something of a reputation for having slow implementations, but CPython’s trickery here optimises in favour of fast results where they can be obtained, falling back on the slower, general mechanisms when these are required.

But I discovered that this is not the only optimisation trickery CPython does, as another program with interesting representation choices and some wildly varying running times was to demonstrate. More on that in the next article on this topic!

Wednesday, 20 June 2018

Qt Contributor Summit 2018

TSDgeos' blog | 22:56, Wednesday, 20 June 2018

About two weeks ago I attended Qt Contributor Summit 2018. I did so wearing my KDAB hat, but given that KDE software is based heavily on Qt, I think I'll give a quick summary of the most important topic that was handled at the Summit: Qt 6

  • Qt 6 is planned for a November 2020 release
  • Qt 5 releases will continue with the current cadence as of now with 5.15 being the last release (and also LTS)
  • The work branch for Qt 6 will be branched soon after Qt 5.12
  • Qt 6 has to be easy to migrate from Qt 5
  • Qt 6 will use C++17
  • Everything to be removed in Qt 6 should be marked as deprecated in 5.15 (ideally sooner)
  • What can be done in Qt 5 should be done in Qt 5
  • Qt 6 should be a "boring" release user feature wise, mostly cleanup and preparing for the future
  • Qt 6 should change things that break at compile time, those are easy to fix, silent runtime changes are scarier
  • Qt 6 will not use qmake as build system
  • The build system for Qt 6 is still not decided, but there are people working on a qbs build and no one working on any other alternative

On a community-related note, Tero Kojo, the Community Manager for The Qt Company, is leaving and it doesn't seem that a replacement is in sight

Of course, note that these are all plans, and as such they may already have become outdated in the 10 days since :D
