Thoughts of the FSFE Community

Sunday, 18 November 2018

Apple iOS Vs your Linux Mail, Contact and Calendar Server

Evaggelos Balaskas - System Engineer | 20:51, Sunday, 18 November 2018

The purpose of this blog post is to act as a visual guide/tutorial on how to set up an iOS device (iPad or iPhone) using the native apps against a custom Linux Mail, Calendar & Contact server.

Disclaimer: I wrote this blog post after 36 hours with an Apple device. I had never had any previous engagement with an Apple product, so it was a huge culture change & learning curve. Be aware that the below notes may not apply to your setup.

Original creation date: Friday 12 Oct 2018
Last Update: Sunday 18 Nov 2018

 

Linux Mail Server

Notes are based on the below setup:

  • CentOS 6.10
  • Dovecot IMAP server with STARTTLS (TCP Port: 143) with Encrypted Password Authentication.
  • Postfix SMTP with STARTTLS (TCP Port: 587) with Encrypted Password Authentication.
  • Baïkal as Calendar & Contact server.
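A quick way to verify that both daemons really do offer STARTTLS on the expected ports, before touching the iOS device at all, is openssl s_client (the hostname below is a placeholder; use your own mail server):

$ openssl s_client -connect mail.example.org:143 -starttls imap

$ openssl s_client -connect mail.example.org:587 -starttls smtp

Each command should complete a TLS handshake, print the server certificate chain and leave you at the IMAP or SMTP greeting.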

 

Thunderbird

Thunderbird settings for IMAP / SMTP over STARTTLS and encrypted password authentication:

mail settings

 

Baikal

Dashboard

baikal dashboard

 

CardDAV

contact URI for user Username

https://baikal.example.org/html/card.php/addressbooks/Username/default

CalDAV

calendar URI for user Username

https://baikal.example.org/html/cal.php/calendars/Username/default
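Before moving to the device, it is worth confirming that Baïkal answers on these URIs. A minimal check with curl and a WebDAV PROPFIND request (adjust Username and enter the password when prompted):

$ curl -u Username -X PROPFIND -H "Depth: 0" https://baikal.example.org/html/cal.php/calendars/Username/default

A 207 Multi-Status XML response means the calendar collection is reachable and the credentials are accepted; the same test works for the CardDAV URI above.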

 

iOS

There is a lot of online documentation, but none of it in one place: random Stack Overflow articles & posts scattered around the internet. It took me almost an entire day (and night) to figure things out. In the end, I enabled debug mode on my dovecot/postfix & apache web server. After that, through trial and error, I managed to set up both iPhone & iPad using only native apps.

 

Mail

Open Passwords & Accounts and click on New Account

iPad_iOS_mail_01

Choose Other

iPad_iOS_mail_02

iPad_iOS_mail_03

iPad_iOS_mail_04

 

Now the tricky part: you have to click Next and fill in the IMAP & SMTP settings.

 

iPad_iOS_mail_05

iPad_iOS_mail_06

iPad_iOS_mail_07

 

Now we have to go back and change the settings to enable STARTTLS and encrypted password authentication.

 

iPad_iOS_mail_08

iPad_iOS_mail_09

 

STARTTLS with Encrypted Passwords for Authentication

 

iPad_iOS_mail_10

iPad_iOS_mail_11

iPad_iOS_mail_12

iPad_iOS_mail_13

iPad_iOS_mail_14

iPad_iOS_mail_15

iPad_iOS_mail_16

 

On the home screen of the iPad/iPhone we will see that the Mail notifications have already fetched some headers.

 

iPad_iOS_mail_17

and finally, open the native mail app:

iPad_iOS_mail_18

 

Contact Server

Now we are ready to set up the contact account:

https://baikal.example.org/html/card.php/addressbooks/Username/default

iPad_iOS_mail_19

iPad_iOS_mail_20

iPad_iOS_mail_21

iPad_iOS_mail_22

iPad_iOS_mail_23

 

Opening Contact App:

 

iPad_iOS_mail_24

 

Calendar Server

https://baikal.example.org/html/cal.php/calendars/Username/default

iPad_iOS_mail_25

iPad_iOS_mail_26

iPad_iOS_mail_27

iPad_iOS_mail_28

iPad_iOS_mail_29

iPad_iOS_mail_30

iPad_iOS_mail_31

 

Cloud-init with CentOS 7

Evaggelos Balaskas - System Engineer | 14:04, Sunday, 18 November 2018

Cloud-init is the de facto multi-distribution package that handles early initialization of a cloud instance.

This article is a mini-HowTo on using cloud-init with CentOS 7 in your own libvirt qemu/kvm lab, instead of using a public cloud provider.

 

How Cloud-init works

cloud-init.png

Josh Powers @ DebConf17

How does it really work?

Cloud-init has boot stages:

  • Generator
  • Local
  • Network
  • Config
  • Final

and supports modules to extend configuration and support.

Here is a brief list of modules (sorted by name):

  • bootcmd
  • final-message
  • growpart
  • keys-to-console
  • locale
  • migrator
  • mounts
  • package-update-upgrade-install
  • phone-home
  • power-state-change
  • puppet
  • resizefs
  • rsyslog
  • runcmd
  • scripts-per-boot
  • scripts-per-instance
  • scripts-per-once
  • scripts-user
  • set_hostname
  • set-passwords
  • ssh
  • ssh-authkey-fingerprints
  • timezone
  • update_etc_hosts
  • update_hostname
  • users-groups
  • write-files
  • yum-add-repo

 

Gist

Cloud-init example using a Generic Cloud CentOS-7 on a libvirtd qemu/kvm lab · GitHub

 

Generic Cloud CentOS 7

You can find a plethora of CentOS 7 cloud images at http://cloud.centos.org/centos/7/images/

Download the latest version

$ curl -LO http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz

Uncompress file

$ xz -v --keep -d CentOS-7-x86_64-GenericCloud.qcow2.xz

Check cloud image

$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2

image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 863M
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

The default image is 8G.
If you need to resize it, check below in this article.

 

Create metadata file

The meta-data file contains data that comes from the cloud provider itself. In this example, I will use a static network configuration.

cat > meta-data <<EOF
instance-id: testingcentos7
local-hostname: testingcentos7

network-interfaces: |
  iface eth0 inet static
  address 192.168.122.228
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1

# vim:syntax=yaml
EOF

 

Create cloud-init (user-data) file

The user-data file contains data that comes from you, aka the user.

cat > user-data <<EOF
#cloud-config

# Set default user and their public ssh key
# eg. https://github.com/ebal.keys
users:
  - name: ebal
    ssh-authorized-keys:
      - `curl -s -L https://github.com/ebal.keys`
    sudo: ALL=(ALL) NOPASSWD:ALL

# Enable cloud-init modules
cloud_config_modules:
  - resolv_conf
  - runcmd
  - timezone
  - package-update-upgrade-install

# Set TimeZone
timezone: Europe/Athens

# Set DNS
manage_resolv_conf: true
resolv_conf:
  nameservers: ['9.9.9.9']

# Install packages
packages:
  - mlocate
  - vim
  - epel-release

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# Remove cloud-init
runcmd:
  - yum -y remove cloud-init
  - updatedb

# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

# vim:syntax=yaml
EOF
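A broken user-data file is a common reason for cloud-init to do nothing, so it is worth checking that the file at least parses as YAML before building the ISO. A minimal check, assuming python with PyYAML is available on your workstation:

$ python -c 'import yaml; yaml.safe_load(open("user-data")); print("user-data parses as YAML")'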

 

Create the cloud-init ISO

When using libvirt with qemu/kvm, the most common way to pass the meta-data/user-data to cloud-init is through an ISO image (cdrom).

$ genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data

or

$ mkisofs -o cloud-init.iso -V cidata -J -r user-data meta-data
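Whichever tool you use, the volume label must be cidata; that label is how cloud-init's NoCloud datasource finds the meta-data/user-data files. You can double-check the label and the contents of the image with isoinfo (shipped with genisoimage/cdrkit):

$ isoinfo -d -i cloud-init.iso | grep -i 'volume id'

$ isoinfo -l -i cloud-init.iso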

 

Provision new virtual machine

Finally run this as root:

# virt-install \
    --name centos7_test \
    --memory 2048 \
    --vcpus 1 \
    --metadata description="My centos7 cloud-init test" \
    --import \
    --disk CentOS-7-x86_64-GenericCloud.qcow2,format=qcow2,bus=virtio \
    --disk cloud-init.iso,device=cdrom \
    --network bridge=virbr0,model=virtio \
    --os-type=linux \
    --os-variant=centos7.0 \
    --noautoconsole
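Because of --noautoconsole the guest boots in the background. A couple of optional virsh commands to watch it come up (the domain name matches the --name argument above):

# virsh list --all

# virsh console centos7_test

Press Ctrl+] to leave the serial console again.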

 

The List of OS Variants

There is an interesting command to find out all the OS variants that are supported by libvirt in your lab:

e.g. CentOS

$ osinfo-query os | grep CentOS

centos6.0  |  CentOS  6.0  |  6.0  |  http://centos.org/centos/6.0
centos6.1  |  CentOS  6.1  |  6.1  |  http://centos.org/centos/6.1
centos6.2  |  CentOS  6.2  |  6.2  |  http://centos.org/centos/6.2
centos6.3  |  CentOS  6.3  |  6.3  |  http://centos.org/centos/6.3
centos6.4  |  CentOS  6.4  |  6.4  |  http://centos.org/centos/6.4
centos6.5  |  CentOS  6.5  |  6.5  |  http://centos.org/centos/6.5
centos6.6  |  CentOS  6.6  |  6.6  |  http://centos.org/centos/6.6
centos6.7  |  CentOS  6.7  |  6.7  |  http://centos.org/centos/6.7
centos6.8  |  CentOS  6.8  |  6.8  |  http://centos.org/centos/6.8
centos6.9  |  CentOS  6.9  |  6.9  |  http://centos.org/centos/6.9
centos7.0  |  CentOS  7.0  |  7.0  |  http://centos.org/centos/7.0

 

DHCP

If you are not using a static network configuration scheme, then to identify the IP of your cloud instance, type:

$ virsh net-dhcp-leases default

 Expiry Time           MAC address         Protocol   IP address           Hostname   Client ID or DUID
---------------------------------------------------------------------------------------------------------
 2018-11-17 15:40:31   52:54:00:57:79:3e   ipv4       192.168.122.144/24   -          -                  

 

Resize

The easiest way to grow/resize your virtual machine disk is via the qemu-img command:

$ qemu-img resize CentOS-7-x86_64-GenericCloud.qcow2 20G

Image resized.

$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2

image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 870M
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

You can also let cloud-init grow the root partition on first boot by adding the below lines to your user-data file:

growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false

The result:

[root@testingcentos7 ebal]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  870M   20G   5% /

 

Default cloud-init.cfg

For reference, this is the default CentOS 7 cloud-init configuration file:

# /etc/cloud/cloud.cfg 
users:
 - default

disable_root: 1
ssh_pwauth:   0

mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys:   0
ssh_genkeytypes:  ~
syslog_fix_perms: ~

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh

cloud_config_modules:
 - mounts
 - locale
 - set-passwords
 - rh_subscription
 - yum-add-repo
 - package-update-upgrade-install
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change

system_info:
  default_user:
    name: centos
    lock_passwd: true
    gecos: Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd

# vim:syntax=yaml

Sunday, 11 November 2018

Publishing Applications through F-Droid

David Boddie - Updates (Full Articles) | 17:16, Sunday, 11 November 2018

In 2016 I started working on a set of Python modules for reading and writing bytecode for the Dalvik virtual machine so that I could experiment with creating applications on Android without having to write them in Java. In the time since then, on and off, I have written a few small applications to learn about Android and explore the capabilities of the devices I own. Some of these were examples, demos and tests for the compiler framework, but others were intended to be useful to me. I could have just downloaded applications to perform the tasks I wanted to do, but I wanted minimal applications that I could understand and trust, so I wrote my own simple applications and was happy to install them locally on my phone.

In September I had the need to back up some data from a phone I no longer use, so I wrote a few small applications to dump data to the phone's storage area, allowing me to retrieve it using the adb tool on my desktop computer. I wondered if other people might find applications like these useful and asked on the FSFE's Android mailing list. In the discussion that followed it was suggested that I try to publish my applications via F-Droid.

Preparations

The starting point for someone who wants to publish applications via F-Droid is F-Droid's Contribute page. The documentation contains a set of links to relevant guides for various kinds of developers, including those making applications and those deploying the F-Droid server software on their own servers.

I found the Contributing to F-Droid document to be a useful shortcut for finding out the fundamental steps needed to submit an application for publication. This document is part of the F-Droid Data repository which holds information about all the applications provided in the catalogue. However, I was a bit uncertain about how to write the metadata for my applications. The problem being that the F-Droid infrastructure builds all the applications from source code and most of these are built using the standard Android SDK, but my applications are not.

Getting Help

I joined the F-Droid forum to ask for assistance with this problem under the topic of Non-standard development tools and was quickly offered help in getting my toolchain integrated into the F-Droid build process. I had expected it was going to take a lot of effort on my part but F-Droid contributor Pierre Rudloff came up with a sample build recipe for my application, showing how simple it could be. To include my application in the F-Droid catalogue, I just needed to know how to describe it properly.

The way to describe an application, its build process and available versions is contained in the Build Metadata Reference document. It can be a bit intimidating to read, which is why it was so useful to be given an example to start with. Having an example meant that I didn't really have to look too hard at this document, but I would be returning to it before too long.

To get the metadata into the build process for the applications hosted by F-Droid, you need to add it to the F-Droid Data repository. This is done using the familiar cycle of cloning, committing, testing, pushing, and making merge requests that many of us are familiar with. The Data repository provides a useful Quickstart guide that tells you how to set up a local repository for testing. One thing I did before cloning any repositories was to fork the Data repository so that I could add my metadata to that. So, what I did was this:

git clone https://gitlab.com/fdroid/fdroidserver.git
export PATH="$PATH:$PWD/fdroidserver"

# Clone my fork of the Data repository:
git clone https://gitlab.com/dboddie/fdroiddata.git

I added a file in the metadata directory of the repository called uk.org.boddie.android.weatherforecast.txt containing a description of my application and information about its version, dependencies, build process, and so on. Then I ran some checks using the fdroid tool from the Server repository, checking that my metadata was valid and in a suitable format for the Data repository, before verifying that a package could be built:

fdroid lint uk.org.boddie.android.weatherforecast
fdroid rewritemeta uk.org.boddie.android.weatherforecast
fdroid build -v -l uk.org.boddie.android.weatherforecast

The third of these commands requires the ANDROID_HOME environment variable to be set. I have a very old Android SDK installation that I had to refer to in order for this to work, even though my application doesn't use it. For reference, the ANDROID_HOME variable refers to the directory in the SDK or equivalent that contains the build-tools directory.
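In other words, something along these lines before running fdroid build; the path is only an example of where such an installation might live:

export ANDROID_HOME=/opt/android-sdk    # must contain a build-tools/ directory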

Build, Test, Repeat

At this point it was time to commit my new file to my fork of the Data repository and push it back up to GitLab. While logged into the site, verifying that my commit was there, I was offered the option to create a merge request for the official F-Droid Data repository, and this led to my first merge request for F-Droid. Things were moving forward.

Unfortunately, I had not managed to specify all the dependencies I needed to build my application using my own tools. In particular, I rely on PyCrypto to create digests of files, and this was not available by default in the build environment. This led to another change to the metadata and another merge request. Still, it was not plain sailing at this point because, despite producing an application package, the launcher icon was generated incorrectly. This required a change to the application itself to ensure that the SVG used for the icon was in a form that could be converted to PNG files by the tools available in the build environment. Yet another merge request updated the current version number so that the fixed application could be built and distributed. My application was finally available in the form I originally intended!

Still, not everything was perfect. A couple of users quickly came forward with suggestions for improvements and a bug report concerning the way I handled place names – or rather failed to handle them. The application was fixed and yet another merge request was created and handled smoothly by the F-Droid maintainers.

Finishing Touches

There are a few things I've not done when developing and releasing this application; there's at least one fundamental thing and at least one cosmetic enhancement that could be done to improve the experience around the installation process.

The basic thing is to write a proper changelog. Since it was really just going to be the proof of concept that showed applications written with DUCK could be published via F-Droid, the application evolved from its original form very informally. I should go back and at least document the bug fixes I made after its release.

The cosmetic improvement would be to improve its F-Droid catalogue entry to include a screenshot or two, then at least prospective users could see what they were getting.

Another meta-feature that would be useful for me to use is F-Droid's auto-update facility. I'm sure I'll need to update the application again in the future so, while this document will remind me how to do that, it would be easier for everyone if F-Droid could just pick up new versions as they appear. There seem to be quite a few ways to do that, so I think that I'll have something to work with.

In summary, it was easier than I had imagined to publish an application in the F-Droid catalogue. The process was smooth and people were friendly and happy to help. If you write your own Free Software applications for Android, I encourage you to publish them via F-Droid and to submit your own metadata for them to make publication as quick and easy as possible.

Categories: Serpentine, F-Droid, Android, Python, Free Software

Friday, 09 November 2018

KDE Applications 18.12 branches created

TSDgeos' blog | 22:31, Friday, 09 November 2018

Make sure you commit anything you want to end up in the 18.12 release to them

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, 15 November.

More interesting dates
November 29: KDE Applications 18.12 RC (18.11.90) Tagging and Release
December 6: KDE Applications 18.12 Tagging
December 13: KDE Applications 18.12 Release

https://community.kde.org/Schedules/Applications/18.12_Release_Schedule

Thursday, 08 November 2018

Another Look at VGA Signal Generation with a PIC32 Microcontroller

Paul Boddie's Free Software-related blog » English | 19:15, Thursday, 08 November 2018

Maybe some people like to see others attempting unusual challenges, things that wouldn’t normally be seen as productive, sensible or a good way to spend time and energy, things that shouldn’t be possible or viable but were nevertheless made to work somehow. And maybe they like the idea of indulging their own curiosity in such things, knowing that for potential future experiments of their own there is a route already mapped out to some kind of success. That might explain why my project to produce a VGA-compatible analogue video signal from a PIC32 microcontroller seems to attract more feedback than some of my other, arguably more useful or deserving, projects.

Nevertheless, I was recently contacted by different people inquiring about my earlier experiments. One was admittedly only interested in using Free Software tools to port his own software to the MIPS-based PIC32 products, and I tried to give some advice about navigating the documentation and to describe some of the issues I had encountered. Another was more concerned with the actual signal generation aspect of the earlier project and the usability of the end result. Previously, I had also had a conversation with someone looking to use such techniques for his project, and although he ended up choosing a different approach, the cumulative effect of my discussions with him and these more recent correspondents persuaded me to take another look at my earlier work and to see if I couldn’t answer some of the questions I had received more definitively.

Picking Over the Pieces

I was already rather aware of some of the demonstrated and potential limitations of my earlier work, these being concerned with generating a decent picture, and although I had attempted to improve the work on previous occasions, I think I just ran out of energy and patience to properly investigate other techniques. The following things were bothersome or a source of concern:

  • The unevenly sized pixels
  • The process of experimentation with the existing code
  • Whether the microcontroller could really do other things while displaying the picture

Although one of my correspondents was very complimentary about the form of my assembly language code, I rather felt that it was holding me back, making me focus on details that should be abstracted away. It should be said that MIPS assembly language is fairly pleasant to write, at least in comparison to certain other architectures.

(I was brought up on 6502 assembly language, where there is an “accumulator” that is the only thing even approaching a general-purpose register in function, and where instructions need to combine this accumulator with other, more limited, registers to do things like accessing “zero page”: an area of memory that supports certain kinds of operations by providing the contents of locations as inputs. Everything needs to be meticulously planned, and despite odd claims that “zero page” is really one big register file and that 6502 is therefore “RISC-like”, the existence of virtual machines such as SWEET16 say rather a lot about how RISC-like the 6502 actually is. Later, I learned ARM assembly language and found it rather liberating with its general-purpose registers and uncomplicated, rather easier to use, memory access instructions. Certain things are even simpler in MIPS assembly language, whereas other conveniences from ARM are absent and demand a bit more effort from the programmer.)

Anyway, I had previously introduced functionality in C in my earlier work, mostly because I didn’t want the challenge of writing graphics routines in assembly language. So with the need to more easily experiment with different peripheral configurations, I decided to migrate the remaining functionality to C, leaving only the lowest-level routines concerned with booting and exception/interrupt handling in assembly language. This effort took me to some frustrating places, making me deal with things like linker scripts and the kind of memory initialisation that one’s compiler usually does for you but which is absent when targeting a “bare metal” environment. I shall spare you those details in this article.

I therefore spent a certain amount of effort in developing some C library functionality for dealing with the hardware. It could be said that I might have used existing libraries instead, but ignoring Microchip’s libraries that will either be proprietary or the subject of numerous complaints by those unwilling to leave that company’s “ecosystem”, I rather felt that the exercise in library design would be useful in getting reacquainted and providing me with something I would actually want to use. An important goal was minimalism, whereas my impression of libraries such as those provided by the Pinguino effort is that they try and bridge the different PIC hardware platforms and consequently accumulate features and details that do not really interest me.

The Wide Pixel Problem

One thing that had bothered me when demonstrating a VGA signal was that the displayed images featured “wide” pixels. These are not always noticeable: one of my correspondents told me that he couldn’t see them in one of my example pictures, but they are almost certainly present because they are a feature of the mechanism used to generate the signal. Here is a crop from the example in question:

Picture detail from VGAPIC32 output

Picture detail from VGAPIC32 output

And here is the same crop with the wide pixels highlighted:

Picture detail with wide pixels highlighted

Picture detail with wide pixels highlighted

I have left the identification of all wide pixel columns to the reader! Nevertheless, it can be stated that these pixels occur in every fourth column and are especially noticeable with things like text, where at such low resolutions, the doubling of pixel widths becomes rather obvious and annoying.

Quite why this increase in pixel width was occurring became a matter I wanted to investigate. As you may recall, the technique I used to output pixels involved getting the direct memory access (DMA) controller in the PIC32 chip to “copy” the contents of memory to a hardware register corresponding to output pins. The signals from these pins were sent along the cable to the monitor. And the DMA controller was transferring data as fast as it could and thus producing pixel colours as fast as it could.

Pixel Output Using DMA Transfer

An overview of the architecture for pixel output using DMA transfer

One of my correspondents looked into the matter and confirmed that we were not imagining this problem, even employing an oscilloscope to check what was happening with the signals from the output pins. The DMA controller would, after starting each fourth pixel, somehow not be able to produce the next pixel in a timely fashion, leaving the current pixel colour unchanged as the monitor traced the picture across the screen. This would cause these pixels to “stretch” until the first pixel from the next group could be emitted.

Initially, I had thought that interrupts were occurring and the CPU, in responding to interrupt conditions and needing to read instructions, was gaining priority over the DMA controller and forcing pixel transfers to wait. Although I had specified a “cell size” of 160 bytes, corresponding to 160 pixels, I was aware that the architecture of the system would be dividing data up into four-byte “words”, and it would be natural at certain points for accesses to memory to be broken up and scheduled in terms of such units. I had therefore wanted to accommodate both the CPU and DMA using an approach where the DMA would not try and transfer everything at once, but without the energy to get this to work, I had left the matter to rest.

A Steady Rhythm

The documentation for these microcontrollers distinguishes between block and cell transfers when describing DMA. Previously, I had noted that these terms could be better described, and I think there are people who are under the impression that cells must be very limited in size and that you need to drive repeated cell transfers using various interrupt conditions to transfer larger amounts. We have seen that this is not the case: a single, large cell transfer is entirely possible, even though the characteristics of the transfer have been less than desirable. (Nevertheless, the documentation focuses on things like copying from one UART peripheral to another, arguably failing to explore the range of possible applications for DMA and to thereby elucidate the mechanisms involved.)

However, given the wide pixel effect, it becomes more interesting to introduce a steady rhythm by using smaller cell sizes and having an external event coordinate each cell’s transfer. With a single, large transfer, only one initiation event needs to occur: that produced by the timer whose period corresponds to that of a single horizontal “scanline”. The DMA channel producing pixels then runs to completion and triggers another channel to turn off the pixel output. In this scheme, the initiating condition for the cell transfer is the timer.

VGA Display Line Structure

The structure of each visible display line in the VGA signal

When using multiple cells to transfer the pixel data, however, it is no longer possible to use the timer in this way. Doing so would cause the initiation of the first cell, but then subsequent cells would only be transferred on subsequent timer events. And since these events only occur once per scanline, this would see a single line’s pixel data being transferred over many scanlines instead (or, since the DMA channel would be updated regularly, we would see only the first pixel on each line being emitted, stretched across the entire line). Since the DMA mechanism apparently does not permit one kind of interrupt condition to enable a channel and another to initiate each cell transfer, we must be slightly more creative.

Fortunately, the solution is to chain two channels, just as we already do with the pixel-producing channel and the one that resets the output. A channel is dedicated to waiting for the line timer event, and it transfers a single black pixel to the screen before handing over to the pixel-producing channel. This channel, now enabled, has its cell transfers regulated by another interrupt condition and proceeds as fast as such a condition may occur. Finally, the reset channel takes over and turns off the output as before.

Pixel Output Using Timed DMA Transfers

An overview of the architecture for pixel output using timed DMA transfers

The nature of the cell transfer interrupt can take various forms, but it is arguably most intuitive to use another timer for this purpose. We may set the limit of such a timer to 1, indicating that it may “wrap around” and thus produce an event almost continuously. And by configuring it to run as quickly as possible, at the frequency of the peripheral clock, it may drive cell transfers at a rate that is quick enough to hopefully produce enough pixels whilst also allowing other activities to occur between each transfer.

VGA Pixel Output (Using Transfer Timer)

Using a timer to initiate pixel transfers for the VGA signal

One thing is worth mentioning here just to be explicit about the mechanisms involved. When configuring interrupts that are used for DMA transfers, it is the actual condition that matters, not the interrupt nor the delivery of the interrupt request to the CPU. So, when using timer events for transfers, it appears to be sufficient to merely configure the timer; it will produce the interrupt condition upon its counter “wrapping around” regardless of whether the interrupt itself is enabled.

With a cell size of a single byte, and with a peripheral clock running at half the speed of the system clock, this approach is sufficient all by itself to yield pixels with consistent widths, with the only significant disadvantage being how few of them can be produced per line: I could only manage in the neighbourhood of 80 pixels! Making the peripheral clock run as fast as the system clock doesn’t help in theory: we actually want the CPU running faster than the transfer rate just to have a chance of doing other things. Nor does it help in practice: picture stability rather suffers.

A picture of the display output from timed DMA transfers

A picture of the display output from timed DMA transfers

Using larger cell sizes, we encounter the wide pixel problem, meaning that the end of a four-byte group is encountered and the transfer hangs on for longer than it should. However, larger cell sizes also introduce byte transfers at a different rate from cell transfers (at the system clock rate) and therefore risk making the last pixel produced by a cell longer than the others, anyway.

Uncovering DMA Transfers

I rather suspect that interruptions are not really responsible for the wide pixels at all, and that it is the DMA controller that causes them. Some discussion with another correspondent explored how the controller might be operating, with transfers perhaps looking something like this:

DMA read from memory
DMA write to display (byte #1)
DMA write to display (byte #2)
DMA write to display (byte #3)
DMA write to display (byte #4)
DMA read from memory
...

This would, by itself, cause a transfer pattern like this:

R____R____R____R____R____R ...
_WWWW_WWWW_WWWW_WWWW_WWWW_ ...

And thus pixel output as follows:

41234412344123441234412344 ...
=***==***==***==***==***== ... (narrow pixels as * and wide pixel components as =)

Even without any extra operations or interruptions, we would get a gap between the write operations that would cause a wider pixel. This would only get worse if the DMA controller had to update the address of the pixel data after every four-byte read and write, not being able to do so concurrently with those operations. And if the CPU were really able to interrupt longer transfers, even to obtain a single instruction to execute, it might then compete with the DMA controller in accessing memory, making the read operations even later every time.

Assuming, then, that wide pixels are the fault of the way the DMA controller works, we might consider how we might want it to work instead:

                     | ...
DMA read from memory | DMA write to display (byte #4)
                 \-> | DMA write to display (byte #1)
                     | DMA write to display (byte #2)
                     | DMA write to display (byte #3)
DMA read from memory | DMA write to display (byte #4)
                 \-> | ...

If only pixel data could be read from memory and written to the output register (and thus the display) concurrently, we might have a continuous stream of evenly-sized pixels. Such things do not seem possible with the microcontroller I happen to be using. Either concurrent reading from memory and writing to a peripheral is not possible or the DMA controller is not able to take advantage of this concurrency. But these observations did give me another idea.

Dual Channel Transfers

If the DMA controller cannot get a single channel to read ahead and get the next four bytes, could it be persuaded to do so using two channels? The intention would be something like the following:

Channel #1:                     Channel #2:
                                ...
DMA read from memory            DMA write to display (byte #4)
DMA write to display (byte #1)
DMA write to display (byte #2)
DMA write to display (byte #3)
DMA write to display (byte #4)  DMA read from memory
                                DMA write to display (byte #1)
...                             ...

This is really nothing different from the above, functionally, but the required operations end up being assigned to different channels explicitly. We would then want these channels to cooperate, interleaving their data so that the result is the combined sequence of pixels for a line:

Channel #1: 1234    1234     ...
Channel #2:     5678    5678 ...
  Combined: 1234567812345678 ...

It would seem that channels might even cooperate within cell transfers, meaning that we can apparently schedule two long transfer cells and have the DMA controller switch between the two channels after every four bytes. Here, I wrote a short test program employing text strings and the UART peripheral to see if the microcontroller would “zip up” the strings, with the following being used for single-byte cells:

Channel #1: "Adoc gi,hlo\r"
Channel #2: "n neaan el!\n"
  Combined: "And once again, hello\r\n"

Then, seeing that it did, I decided that this had a chance of also working with pixel data. Here, every other pixel on a line needs to be presented to each channel, with the first channel being responsible for the pixels in odd-numbered positions, and the second channel being responsible for the even-numbered pixels. Since the DMA controller is unable to step through the data at address increments other than one (which may be a feature of other DMA implementations), this causes us to rearrange each line of pixel data as follows:

 Displayed pixels: 123456......7890
Rearranged pixels: 135...79246...80
                   *       *

Here, the asterisks mark the start of each channel’s data, with each channel only needing to transfer half the usual amount.
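As a concrete illustration of that rearrangement (this is only a sketch under my own naming, not the project's code), the following C function splits one line of displayed pixels into the two per-channel halves, odd positions first, even positions second:

#include <stdint.h>

#define LINE_WIDTH 160  /* pixels per line; matches the 160-pixel lines mentioned earlier */

/* displayed holds the pixels in screen order; rearranged receives the
   odd-position pixels in its first half and the even-position pixels in
   its second half, ready to be handed to the two DMA channels. */
void rearrange_line(const uint8_t *displayed, uint8_t *rearranged)
{
    int i;

    for (i = 0; i < LINE_WIDTH / 2; i++)
    {
        rearranged[i] = displayed[2 * i];                      /* positions 1, 3, 5, ... */
        rearranged[LINE_WIDTH / 2 + i] = displayed[2 * i + 1]; /* positions 2, 4, 6, ... */
    }
}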

Pixel Output Using Timed Dual-Channel Transfers

The architecture involved in employing two pixel data channels with timed transfers

The documentation does, in fact, mention that where multiple channels are active with the same priority, each one is given control in turn with the controller cycling through them repeatedly. The matter of which order they are chosen, which is important for us, seems to be dependent on various factors, only some of which I can claim to understand. For instance, I suspect that if the second channel refers to data that appears before the first channel’s data in memory, it gets scheduled first when both channels are activated. Although this is not a significant concern when just trying to produce a stable picture, it does limit more advanced operations such as horizontal scrolling.

A picture of the display output from timed, dual-channel DMA transfers

A picture of the display output from timed, dual-channel DMA transfers

As you can see, trying this technique out with timed transfers actually made a difference. Instead of only managing something approaching 80 pixels across the screen, more than 90 can be accommodated. Meanwhile, experiments with transfers going as fast as possible seemed to make no real difference, and the fourth pixel in each group was still wider than the others. Still, making the timed transfer mode more usable is a small victory worth having, I suppose.

Parallel Mode Revisited

At the start of my interest in this project, I had it in my mind that I would couple DMA transfers with the parallel mode (or Parallel Master Port) functionality in order to generate a VGA signal. Certain aspects of this, particularly gaps between pixels, made me less than enthusiastic about the approach. However, in considering what might be done to the output signal in other situations, I had contemplated the use of a flip-flop to hold output stable according to a regular tempo, rather like what I managed to achieve, almost inadvertently, when introducing a transfer timer. Until recently, I had failed to apply this idea to where it made most sense: in regulating the parallel mode signal.

Since parallel mode is really intended for driving memory devices and display controllers, various control signals are exposed via pins that can tell these external devices that data is available for their consumption. For our purposes, a flip-flop is just like a memory device: it retains the input values sampled by its input pins, and then exposes these values on its output pins when the inputs are “clocked” into memory using a “clock pulse” signal. The parallel mode peripheral in the microcontroller offers various different signals for such clock and selection pulse purposes.

VGA Output Circuit (Parallel Mode)

The parallel mode circuit showing connections relevant to VGA output (generic connections are not shown)

Employing the PMWR (parallel mode write) signal as the clock pulse, directing the display signals to the flip-flop’s inputs, and routing the flip-flop’s outputs to the VGA circuit solved the pixel gap problem at a stroke. Unfortunately, it merely reminded us that the wide pixel problem also affects parallel mode output, too. Although the clock pulse is able to tell an external component about the availability of a new pixel value, it is up to the external component to regulate the appearance of each pixel. A memory device does not care about the timing of incoming data as long as it knows when such data has arrived, and so such regulation is beyond the capabilities of a flip-flop.

It was observed, however, that since each group of pixels is generated at a regular frequency, the PMWR signalling frequency might be reduced by being scaled by a constant factor. This might allow some pixel data to linger slightly longer in the flip-flop and be slightly stretched. By the time the fourth pixel in a group arrives, the time allocated to that pixel would be the same as those preceding it, thus producing consistently-sized pixels. I imagine that a factor of 8/9 might do the trick, but I haven’t considered what modification to the circuit might be needed or whether it would all be too complicated.

Recognising the Obvious

When people normally experiment with video signals from microcontrollers, one tends to see people writing code to run as efficiently as is absolutely possible – using assembly language if necessary – to generate the video signal. It may only be those of us using microcontrollers with DMA peripherals who want to try and get the DMA hardware to do the heavy lifting. Indeed, those of us with exposure to display peripherals provided by system-on-a-chip solutions feel almost obliged to do things this way.

But recent discussions with one of my correspondents made me reconsider whether an adequate solution might be achieved by just getting the CPU to do the work of transferring pixel data to the display. Previously, another correspondent had indicated that this was potentially tricky, and that getting the timings right was more difficult than letting the hardware synchronise the different mechanisms – timer and DMA – all by itself. By involving the CPU and making it run code, the display line timer would need to generate an interrupt that would be handled, causing the CPU to start running a loop to copy data from the framebuffer to the output port.

Pixel Output Using CPU-Driven Transfers

An overview of the architecture with the CPU driving transfers of pixel data

This approach puts us at the mercy of how the CPU handles and dispatches interrupts. Being somewhat conservative about the mechanisms more generally available on various MIPS-based products, I tend to choose a single interrupt vector and then test for the different conditions. Since we need as little variation as possible in the latency between a timer event occurring and the pixel data being generated, I test for that particular event before even considering anything else. Then, a loop of the following form is performed:

    for (current = line_data; current < end; current++)
        *output_port = *current;

Here, the line data is copied byte by byte to the output port. Some adornments are necessary to persuade the compiler to generate code that writes the data efficiently and in order, but there is nothing particularly exotic required and GCC does a decent job of doing what we want. After the loop, a black/reset pixel is generated to set the appropriate output level.
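For readers wondering what those adornments might look like, the usual trick is a volatile qualifier on the destination, so that GCC emits one store per pixel and keeps them in order. This is only an illustrative sketch with made-up names, not the project's actual declarations:

#include <stdint.h>

/* Copy one line of pixels to the output port as fast as the CPU allows.
   The volatile qualifier prevents the compiler from reordering, merging
   or dropping the stores to the port. */
void write_line(volatile uint8_t *output_port,
                const uint8_t *line_data, const uint8_t *end)
{
    const uint8_t *current;

    for (current = line_data; current < end; current++)
        *output_port = *current;

    *output_port = 0;   /* black/reset level after the visible pixels */
}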

One concern that one might have about using the CPU for such long transfers in an interrupt handler is that it ties up the CPU, preventing it from doing other things, and it also prevents other interrupt requests from being serviced. In a system performing a limited range of activities, this can be acceptable: there may be little else going on apart from updating the display and running programs that access the display; even when other activities are incorporated, they may accommodate being relegated to a secondary status, or they may instead take priority over the display in a way that may produce picture distortion but only very occasionally.

Many of these considerations applied to systems of an earlier era. Thinking back to computers like the Acorn Electron – a 6502-based system that formed the basis of my first sustained experiences with computing – it employs a display controller that demands access to the computer’s RAM for a certain amount of the time dedicated to each video frame. The CPU is often slowed down or even paused during periods of this display controller’s activity, making programs slower than they otherwise would be, and making some kinds of input and output slightly less reliable under certain circumstances. Nevertheless, with certain kinds of additional hardware, the possibility is present for such hardware to interrupt the CPU and to override the display controller that would then produce “snow” or noise on the screen as a consequence of this high-priority interruption.

Such issues cause us to consider the role of the DMA controller in our modern experiment. We might well worry about loading the CPU with lots of work, preventing it from doing other things, but what if the DMA controller dominates the system in such a way that it effectively prevents the CPU from doing anything productive anyway? This would be rather similar to what happens with the Electron and its display controller.

So, evaluating a CPU-driven solution seems to be worthwhile just to see whether it produces an acceptable picture and whether it causes unacceptable performance degradation. My recent correspondence also brought up the assertion that the RAM and flash memory provided by PIC32 microcontrollers can be accessed concurrently. This would actually mitigate contention between DMA and any programs running from flash memory, at least until the point that accesses to RAM needed to be performed by those programs, meaning that we might expect some loss of program performance by shifting the transfer burden to the CPU.

(Again, I am reminded of the Electron whose ROM could be accessed at full speed but whose RAM could only be accessed at half speed by the CPU but at full speed by the display controller. This might have been exploited by software running from ROM, or by a special kind of RAM installed and made available at the right place in memory, but the 6502 favours those zero-page instructions mentioned earlier, forcing RAM access and thus contention with the display controller. There were upgrades to mitigate this by providing some dedicated memory for zero page, but all of this is really another story for another time.)

Ultimately, having accepted that the compiler would produce good-enough code and that I didn’t need to try more exotic things with assembly language, I managed to produce a stable picture.

A picture of the display output from CPU-driven pixel data transfers

A picture of the display output from CPU-driven pixel data transfers

Maybe I should have taken this obvious path from the very beginning. However, the presence of DMA support would have eventually caused me to investigate its viability for this application, anyway. And it should be said that the performance differences between the CPU-based approach and the DMA-based approaches might be significant enough to argue in favour of the use of DMA for some purposes.

Observations and Conclusions

What started out as a quick review of my earlier work turned out to be a more thorough study of different techniques and approaches. I managed to get timed transfers functioning, revisited parallel mode and made it work in a fairly acceptable way, and I discovered some optimisations that help to make certain combinations of techniques more usable. But what ultimately matters is which approaches can actually be used to produce a picture on a screen while programs are being run at the same time.

To give the CPU more to do, I decided to implement some graphical operations, mostly copying data to a framebuffer for its eventual transfer as pixels to the display. The idea was to engage the CPU in actual work whilst also exercising access to RAM. If significant contention between the CPU and DMA controller were to occur, the effects would presumably be visible on the screen, potentially making the chosen configuration unusable.

Although some approaches seem promising on paper, and they may even produce promising results when the CPU is doing little more than looping and decrementing a register to introduce a delay, these approaches may produce less than promising results under load. The picture may start to ripple and stretch, and under “real world” load, the picture may seem noisy and like a badly-tuned television (for those who remember the old days of analogue broadcast signals).

Two approaches seem to remain robust, however: the use of timed DMA transfers, and the use of the CPU to do all the transfer work. The former is limited in terms of resolution and introduces complexity around the construction of images in the framebuffer, at least if optimised as much as possible, but it seems to allow more work to occur alongside the update of the display, and the reduction in resolution also frees up RAM for other purposes for those applications that need it. Meanwhile, the latter provides the resolution we originally sought and offers a straightforward framebuffer arrangement, but it demands more effort from the CPU, slowing things down to the extent that animation practically demands double buffering and thus the allocation of even more RAM for display purposes.

But both of these seemingly viable approaches produce consistent pixel widths, which is something of a happy outcome given all the effort to try and fix that particular problem. One can envisage accommodating them both within a system given that various fundamental system properties (how fast the system and peripheral clocks are running, for example) are shared between the two approaches. Again, this is reminiscent of microcomputers where a range of display modes allowed developers and users to choose the trade-off appropriate for them.

A demonstration of text plotting at a resolution of 160x128

A demonstration of text plotting at a resolution of 160x128

Having investigated techniques like hardware scrolling and sprite plotting, it is tempting to keep developing software to demonstrate the techniques described in this article. I am even tempted to design a printed circuit board to tidy up my rather cumbersome breadboard arrangement. And perhaps the library code I have written can be used as the basis for other projects.

It is remarkable that a home-made microcontroller-based solution can be versatile enough to demonstrate aspects of simple computer systems, possibly even making it relevant for those wishing to teach or to learn about such things, particularly since all the components can be connected together relatively easily, with only some magic happening in the microcontroller itself. And with such potential, maybe this seemingly pointless project might have some meaning and value after all!

Update

Although I can’t embed video files of any size here, I have made a “standard definition” video available to demonstrate scrolling and sprites. I hope it is entertaining and also somewhat illustrative of the kind of thing these techniques can achieve.

Thursday, 01 November 2018

Keep yourself organized

English on Björn Schießle - I came for the code but stayed for the freedom | 17:00, Thursday, 01 November 2018

The Zim Desktop Wiki

Over the years I have tried various tools to organize my daily work and manage my ToDo list, including Emacs org-mode, ToDo.txt, Kanban boards and simple plain text files. These are all great tools, but I was never completely happy with them: over time they always became too unstructured and crowded, or I didn’t manage to integrate them into my daily workflow.

Another tool I used throughout this time was Zim, a wiki-like desktop app with many great features and extensions. I used it for notes and to plan and organize larger projects. At some point I had the idea to try to use it for my task management and to organize my daily work as well. The great strength of Zim, its flexibility, can also be a weak spot, because you have to find your own way to organize and structure your stuff. After reading various articles like “Getting Things Done” with Zim and trying different approaches, I came up with a setup which works nicely for me.

Basic setup

Zim allows you to have multiple, completely separated “notebooks”. But I soon realized that this adds another level of complexity and makes it less likely for me to use it regularly. Therefore I decided to have just one single notebook, with the three top-level pages “FSFE”, “Nextcloud” and “Personal”. Each of these top-level pages has four sub-pages: “Archive”, “Notes”, “Ideas” and “Projects”. Every project which is done, or notes which are no longer needed, gets moved to the archive.

Additionally I created a top-level category called “Journal”. I use this to organize my week and to prepare my weekly work report. Zim has quite powerful journal functionality with an integrated calendar. You can have daily, weekly, monthly or yearly journals. I decided to use weekly journals because at Nextcloud we also write weekly work reports, so this fits quite well. Whenever I click on a date in the calendar, the journal of the corresponding week opens. I use a template so that a newly created journal looks like the screenshot at the top.

Daily usage

As you can see in the picture at the top, the template directly adds the recurring tasks like “check support tickets” or “write work report”. If one of the tasks is completed, I tick the checkbox. During a day, tasks typically come in which I can’t handle directly. In this case I quickly decide to add them to the current day or the next day, depending on how urgent the task is and whether I plan to handle it on the same day or not. Sometimes, if I already know that the task has a deadline later in the week, I add it directly to a later day, in rare situations even to the next week. But in general I try not to plan the work too far into the future.

In the past I added everything to my ToDo list, including stuff for which the deadline was a few months away or stuff I wanted to do once I had some time left. My experience is that this grows the list so quickly that it becomes highly unproductive. That’s why I decided to add stuff like this to the “Ideas” category. I review this category regularly and see if some of it could become a real project or task, or if it is no longer relevant. If it is something which already has a deadline, but only in a few months, I create a calendar entry with a reminder and add it to the task list later, once it becomes relevant again. This keeps the ToDo list manageable.

At the bottom of the screenshot you see the beginning of my weekly work report, so I can update it directly during the week and don’t have to remember at the end of the week what I have actually done. At the end of each day I review the tasks for the day, mark finished stuff as done and re-evaluate the rest: some tasks might no longer be relevant, so I can remove them completely; otherwise I move them forward to the next day. At the end of the week the list looks something like this:

Zim Journal on Wednesday

This is now the first time where I have started to feel really comfortable with my workflow. The first thing I do in the morning is open Zim and move it to my second screen, and there it stays open until the end of the day. I think the reason it works so well is that it provides a clear structure, and it integrates well with all the other stuff like notes, project handling, writing my work report and so on.

What I’m missing

Of course nothing is perfect, and this is also true for Zim. The one thing I really miss is a good mobile app to add, update or remove tasks while I’m away from my desktop. At the moment I sync the Zim notebook with Nextcloud and use the Nextcloud Android app to browse the notebook and open the files with a normal text editor. This works, but a dedicated Zim app for Android would make it perfect.

Sunday, 28 October 2018

Linux Software RAID mismatch Warning

Evaggelos Balaskas - System Engineer | 16:18, Sunday, 28 October 2018

I have been using Linux Software RAID for years now. It is reliable and stable (as long as your hard disks are reliable) with very few problems. One recent issue, which the daily cron raid-check was reporting, was this:

 

WARNING: mismatch_cnt is not 0 on /dev/md0
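For reference, the counter that the cron job complains about is exposed through the standard md sysfs interface, and a fresh check or a repair pass can be requested the same way (as root; a repair rewrites parity, so make sure your backups are in order first):

# cat /sys/block/md0/md/mismatch_cnt

# echo check > /sys/block/md0/md/sync_action

# cat /proc/mdstat

and, once you are happy that the mismatches are not a sign of failing hardware:

# echo repair > /sys/block/md0/md/sync_action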

 

Raid Environment

A few details on this specific raid setup:

RAID 5 with 4 Drives

with 4 x 1TB hard disks, and according to the online RAID calculator:

RAID Calculator

raid5-4disks

that means this setup is fault tolerant and cheap but not fast.

 

Raid Details

# /sbin/mdadm --detail /dev/md0

raid configuration is valid

/dev/md0:
        Version : 1.2
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sat Oct 27 04:38:04 2018
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ServerTwo:0  (local to host ServerTwo)
           UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
         Events : 60352

    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       6       8       48        2      active sync   /dev/sdd
       5       8        0        3      active sync   /dev/sda

 

Examine Verbose Scan

with a more detailed output:

# mdadm -Evvvvs

there are a few bad blocks logged, although it is perfectly normal for two (2) year old disks to have some. smartctl is a tool you need to use from time to time; a quick example follows the scan output below.

/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953266096 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : bdd41067:b5b243c6:a9b523c4:bc4d4a80

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 6baa02c9 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sde:
   MBR Magic : aa55
Partition[0] :      8388608 sectors at         2048 (type 82)
Partition[1] :    226050048 sectors at      8390656 (type 83)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=3504 sectors
          State : clean
    Device UUID : a90e317e:43848f30:0de1ee77:f8912610

    Update Time : Sun Oct 28 09:04:01 2018
       Checksum : 30b57195 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : ad7315e5:56cebd8c:75c50a72:893a63db

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b928adf1 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : f4e1da17:e4ff74f0:b1cf6ec8:6eca3df1

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : bbe3e7e8 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
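As mentioned above, it is also worth asking the disks themselves how they are doing. A minimal smartctl check on one of the array members (smartctl comes with the smartmontools package) could look like this:

# smartctl -H /dev/sdd
# smartctl -a /dev/sdd

The first command prints the overall SMART health self-assessment, the second the full attribute table and error log, where things like reallocated or pending sectors would show up.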

 

MisMatch Warning

WARNING: mismatch_cnt is not 0 on /dev/md0

So this is not a critical error; it rather tells us that there are a few blocks that are “not synced yet” across all disks.
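The actual counter is exported through sysfs, so it can be read directly (assuming the array is md0, as here); a value of 0 means the last check found nothing out of sync:

# cat /sys/block/md0/md/mismatch_cnt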

 

Status

Checking the Multiple Device (md) driver status:

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

We verify that no sync job is currently running on the RAID.

 

Repair

We can run a manual repair job:

# echo repair >/sys/block/md0/md/sync_action

now the status looks like this:

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=========>...........]  resync = 45.6% (445779112/976631296) finish=54.0min speed=163543K/sec

unused devices: <none>
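Instead of re-running cat by hand, the progress can also be followed continuously, refreshing every 30 seconds:

# watch -n 30 cat /proc/mdstat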

Progress

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [============>........]  resync = 63.4% (619673060/976631296) finish=38.2min speed=155300K/sec

unused devices: <none>
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [================>....]  resync = 81.9% (800492148/976631296) finish=21.6min speed=135627K/sec

unused devices: <none>

Finally

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

 

Check

After the repair it is useful to check the status of our software RAID again:

# echo check >/sys/block/md0/md/sync_action

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  check =  9.5% (92965776/976631296) finish=91.0min speed=161680K/sec

unused devices: <none>

and finally

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
Tag(s): md0, mdadm, linux, raid

Tuesday, 23 October 2018

KDE Applications 18.12 Schedule finalized

TSDgeos' blog | 22:59, Tuesday, 23 October 2018

It is available at the usual place https://community.kde.org/Schedules/Applications/18.12_Release_Schedule

Dependency freeze is in 2 weeks and Feature Freeze in 3 weeks, so hurry up!

Monday, 22 October 2018

Reinventing Startups

Blog – Think. Innovation. | 13:08, Monday, 22 October 2018

Once we have come to realize that the Silicon Valley model of startup-based innovation is not for the betterment of people and planet, but is a hyper-accelerated version of the existing growth-based capitalist Operating System increasing inequality, destroying nature and benefiting the 1%, the question arises: is there an alternative?

One group of people that have come to this realization is searching for an alternative model. They call their companies Zebras, opposing the Silicon Valley $ 1 Billion+ valued startups called Unicorns. Their movement is called Zebras Unite and is definitely worth checking out. I recently wrote a ‘Fairy Tale’ story to give Zebras Unite a warning as I believe they run the risk of leaving the root cause of the current faulty Operating System untouched. This warning is based on the brilliant work of Douglas Rushkoff.

This article proposes an alternative entrepreneurial-based innovation model that I believe has the potential to be good for people and planet by design. This model goes beyond the lip service of the hyped terms ‘collaboration’, ‘openness’, ‘sharing’ and ‘transparency’ and puts these terms in practice in a structural and practical way. The model is based on the following premises, while at the same time leaving opportunity for profitable business practices:

  • Freely sharing all created knowledge and technology under open licenses: if we are going to solve the tremendous ‘wicked’ problems of humanity, we need to realize that no single company can solve these on their own. We need to realize that getting to know and building upon each others’ innovations via coincidental meeting, liking and contract agreements is too slow and leaves too much to chance. Instead, we need to realize that there is no time to waste and all heads, hands and inventiveness is necessarily shared immediately and openly to achieve what I call Permissionless Innovation.
  • Being growth agnostic: if we are creating organizations and companies, these should not depend on growth for their survival. Instead they should thrive no matter what the size, because they provide a sustained source of value to humanity.
  • Having interests aligned with people and planet: for many corporations, especially VC-backed and publicly traded companies, doing good for owners/shareholders and doing good for people and the planet is a trade-off. It should not be that way. Providing value for people and planet should not conflict with providing value for owners/shareholders.

Starting from these premises, I propose the following structural changes for customer-focused organizations and companies:

Intellectual Property

We should stop pretending that knowledge is scarce, whereas it is in fact abundant. Putting open collaboration in practice means that sharing is the default: publish technology/designs/knowledge under Open Licenses. This results in what I call ‘Permissionless Innovation’ and favors inclusiveness, as innovation is not only for the privileged anymore. Also, not one single company can pretend to have the knowledge to solve ‘wicked problems’, we need free flow of information for that. And IP protection also creates huge amounts of ‘competitive waste’.

Sustainable Product Development

Based on abundant knowledge we can co-design products that are made to be used, studied, repaired, customized and innovated upon. We get rid of the linear ‘producer => product => consumer’ and get to a ‘prosumer’ or ‘peer to peer’ dynamic model. This goes beyond the practice of Design Thinking / Service Design and crowdsourcing and puts the power of design and production outside of the company as well as inside it (open boundaries).

Company Ownership

Worker and/or customer owned cooperatives as the standard, instead of rent-seeking hands-off investor shareholder-owners. With cooperative ownership the interests of the company are automatically aligned with those of the owners. I do not have sufficient knowledge about how this could work with start-up founders who are also owners, but at some point the founders should transition their ownership.

Funding

Companies that need outside funding should be aware of the ‘growth trap’ as described above. Investment should be limited in time (capped ROI) and loans paid back. To be sustainable and ‘circular’ the company in the long run should not depend on loans or investments. Ideally in the future even the use of fiat money should be abandoned, in favor of alternative currency that does not require continuous endless growth.

Scaling

Getting a valuable solution in the hands of those who need it, should not depend on a centralized organization having control, the supply-chain model. Putting ‘distributive’ in practice means an open dynamic ecosystem model supporting local initiative and entrepreneurship, while the company has its unique place in the ecosystem. This is similar to the franchise model, but mainly permissionless except for brands/trademarks. This dynamic model is much more resilient and can scale faster, especially relevant for non-digital products.

– Diderik

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo from Stocksnap.io.

This article first appeared on reinvent.cc.

Saturday, 20 October 2018

sharing keyboard and mouse with synergy

Evaggelos Balaskas - System Engineer | 21:34, Saturday, 20 October 2018

Synergy

Mouse and Keyboard Sharing

aka Virtual-KVM

 

Open source core of Synergy, the keyboard and mouse sharing tool
You can find the code here:

https://github.com/symless/synergy-core

or you can use the alternative barrier

https://github.com/debauchee/barrier

 

Setup

My setup looks like this:

synergy setup

I bought a docking station for the company’s laptop. I want to use a single monitor, keyboard & mouse with both my desktop PC and the laptop when I am at home.

My desktop PC runs Arch Linux and the company laptop runs Windows 10.

Keyboard and mouse are connected to linux.

Both machines are connected on the same LAN (cables on a switch).

Host

/etc/hosts

192.168.0.11   myhomepc.localdomain  myhomepc
192.168.0.12 worklaptop.localdomain  worklaptop

 

Archlinux

The desktop PC will be my virtual KVM server, so I need to run synergy as a server.

Configuration

If no configuration file pathname is provided then the first of the
following to load successfully sets the configuration:

${HOME}/.synergy.conf
/etc/synergy.conf

 

vim ${HOME}/.synergy.conf
section: screens
    # two hosts named: myhomepc and worklaptop
      myhomepc:
      worklaptop:
end

section: links
    myhomepc:
        left = worklaptop
end
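Note: as far as I know, synergy links are one-directional, so if the pointer ever gets stuck on the laptop screen, the reverse link has to be defined as well. A sketch of that section, using the same hostnames:

section: links
    myhomepc:
        left = worklaptop
    worklaptop:
        right = myhomepc
end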

 

Testing

run in the foreground

$ synergys --no-daemon

example output:

[2018-10-20T20:34:44] NOTE: started server, waiting for clients
[2018-10-20T20:34:44] NOTE: accepted client connection
[2018-10-20T20:34:44] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:03] INFO: switch from "myhomepc" to "worklaptop" at 1919,423
[2018-10-20T20:35:03] INFO: leaving screen
[2018-10-20T20:35:03] INFO: screen "myhomepc" updated clipboard 0
[2018-10-20T20:35:04] INFO: screen "myhomepc" updated clipboard 1
[2018-10-20T20:35:10] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:35:10] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:35:10] INFO: entering screen
[2018-10-20T20:35:14] NOTE: accepted client connection
[2018-10-20T20:35:14] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:16] INFO: switch from "myhomepc" to "worklaptop" at 1919,207
[2018-10-20T20:35:16] INFO: leaving screen
[2018-10-20T20:43:13] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:43:13] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:43:13] INFO: entering screen
[2018-10-20T20:43:16] NOTE: accepted client connection
[2018-10-20T20:43:16] NOTE: client "worklaptop" has connected
[2018-10-20T20:43:40] NOTE: client "worklaptop" has disconnected

 

Systemd

To run synergy as a systemd service, you need to copy your configuration file into the /etc directory:

sudo cp ${HOME}/.synergy.conf /etc/synergy.conf

Beware: Your user should have read access to the above configuration file.

and then:

$ systemctl start  --user synergys
$ systemctl enable --user synergys
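To verify that the user service really came up, the usual systemd tools can be used:

$ systemctl status --user synergys
$ journalctl --user -u synergys

The second command shows the log output of the unit, which is handy if the server refuses to start because it cannot read the configuration file.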

 

Verify

$ ss -lntp '( sport = :24800 )'
State                   Recv-Q                   Send-Q                                      Local Address:Port                                      Peer Address:Port
LISTEN                  0                        3                                                 0.0.0.0:24800                                          0.0.0.0:*                      users:(("synergys",pid=10723,fd=6))
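One more thing to keep in mind: if a firewall is active on the Linux machine, TCP port 24800 must be reachable from the laptop. A minimal iptables rule for that could look like this (only a sketch, adjust the source network to your LAN and to whatever firewall you actually use):

# iptables -A INPUT -p tcp --dport 24800 -s 192.168.0.0/24 -j ACCEPT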

 

Win10

On Windows 10 (the synergy client) you just need to connect to the synergy server!

And of course create a startup shortcut:

win10 synergy

and that’s it !

 

Thursday, 18 October 2018

No activity

Thomas Løcke Being Incoherent | 11:32, Thursday, 18 October 2018

This blog is currently dead… Catch me on Twitter.

Friday, 28 September 2018

Technoshamanism in Barcelona on October 4

agger's Free Software blog | 13:00, Friday, 28 September 2018

Technoshamanism event in Barcelona, October 4.

TECHNOSHAMANS, THE HACKERS OF THE AMAZON.

TALK AND DIY RITUAL BY THE INTERNATIONAL TECHNOSHAMANISM NETWORK AT CSOA LA FUSTERIA.

On Thursday 4 October we will hold a talk at CSOA La Fusteria with members of the Tecnoxamanisme network, an international collective producing imaginaries, made up of artists, biohackers, thinkers, activists, indigenous people and indigenists who try to recover ideas of the future that were lost in the ancestral past. The talk will be led by the writer Francisco Jota-Pérez and will be followed by a DIY ritual performance in which everyone can take part.

What does the hacker movement have in common with the struggles of indigenous peoples threatened by so-called “progress”?

Technoshamanism emerged in 2014 from the confluence of several networks born around the free software and free culture movement, to promote exchanges of technologies, rituals, synergies and sensibilities with indigenous communities. They organize gatherings and events that range across DIY rituals, electronic music, permaculture and immersive processes, mixing worldviews and pushing for the decolonization of thought.

According to the members of the technoshamans network: “We still enjoy temporary autonomous zones, the invention of forms of life, of art/life; we try to think about and collaborate with the reforestation of the Earth with an ancestrofuturist imaginary. Our main exercise is to create networks of the unconscious, strengthening the desire for community, as well as proposing alternatives to the ‘productive’ thinking of science and technology.”

After two international festivals held in the south of Bahia, Brazil (produced together with the indigenous association of the Pataxó people of Aldeia Velha and Aldeia Pará, near Porto Seguro, where the first Portuguese ships arrived during colonization), they are organizing the III Festival of Technoshamanism on 5, 6 and 7 October in France. And we are lucky that they are visiting Barcelona, so they can share their projects and philosophy with us. They will talk to us about spiral (non-linear) time, about collective fictions, about how monotheism and later capitalism hijacked technology, and about what we can learn from indigenous communities in terms of both survival and resistance.

We look forward to seeing you all at 20:30 at La Fusteria.
C/ J. Benlluire, 212 (El Cabanyal)

In collaboration with Láudano Magazine.
www.laudanomag.com

Tuesday, 25 September 2018

Libre Application Summit 2018

TSDgeos' blog | 22:28, Tuesday, 25 September 2018

Earlier this month I attended Libre Application Summit 2018 in Denver.


Libre Application Summit wants to be a place for all people involved in making Free Software applications to meet and share ideas, though, being organized largely by GNOME, it had some skew towards GNOME/flatpak. There was a good KDE presence, but personally I felt that we would have needed more people, at least from LibreOffice and Firefox, and someone from the Ubuntu/Canonical/Snap field (don't be annoyed if I failed to mention your group).

The Summit was kicked off by a motivational talk on how to make sure we ride the wave of "Open Source has won but people don't know it". I felt the content of the talk was nice, but the speaker was hit by some issues (not being able to have the laptop in front of her due to the venue being laid out a bit weirdly) that sadly made her speech a bit stumbly.

Then we had a bunch of flatpak-related talks, ranging from the new freedesktop-sdk runtime to very technical stuff about how ostree works, and also including a talk by our own Aleix Pol on how KDE is planning to approach the release of flatpaks. Some of us ended the day having a BBQ at the house the Codethink people were staying at. Thanks for the good time!


I kicked off the next day talking about how we (lately mostly Christoph) are doing the KDE Applications releases. We got appreciation for the good work we're doing and some interesting follow-up questions, so I think people were engaged by it.

The morning continued with talks about how to engage the "non typical" free software people, designers, students, professors, etc.


After lunch we had a few talks by the Elementary people and another talk with Aleix focused on which apps will run on your Plasma devices (hint: all of them).

The day finished with a quiz sponsored by System76; it was fun!


The last day of talks started again with me speaking, this time about how amazing Qt is and why you should use it to build your apps. There I got some questions from people worrying whether QtWidgets was going to die; I told them not to worry, but it's clear The Qt Company needs to improve its messaging in that regard.


After that we had a talk about a Fedora project to create a distro based exclusively on flatpaks, which sounds really interesting. The last talks of LAS 2018 were about how to fight vandalism in crowdsourced data, the status of the Librem 5 (it's still a bit far away) and a very interesting one about the status of Free Software in Research.

All in all I think the conference was interesting, still a bit small and too GNOME-controlled to appeal to the general crowd, but it's only the second time it has been organized, so it will improve.

I want to thank KDE e.V. for sponsoring my flight and accommodation to attend this conference. Please Donate! so we can continue attending conferences like this and spreading the good work of KDE :)

Tuesday, 18 September 2018

No Netflix on my Smart TV

tobias_platen's blog | 19:50, Tuesday, 18 September 2018

When I went to the Conrad store in Altona, I saw that new Sony Smart TVs come with a Netflix button on the remote.
Since I oppose DRM, I would never buy such a thing. I would only buy a Smart TV that Respects My Freedom, but such a thing does not exist.
Instead I use a ThinkPad T400 as an external TV tuner and hard disk recorder, since my old TV set does not support DVB-C. As a DVB-C tuner I use the FRITZ!WLAN Repeater DVB‑C, which works well with the free VLC player. Since it lacks a CI+ slot, it cannot decode DRM-encumbered streams.

When Netflix was founded in 1998, it initially offered only rental DVDs. Today most DVDs can be played on GNU/Linux using libdvdcss.
Even though most DVDs that Netflix offers do not contain strong DRM, Netflix itself is still a surveillance system that requires proprietary JavaScript.
When I buy DVDs, I go to a store where I can pay in cash.

The ThinkPad T400 has no HDMI port, and its “management engine” back door has been removed by installing Libreboot. Most modern Intel systems come with an HDMI port.
HDMI comes with a kind of DRM called HDCP, which was developed by Intel. On newer hardware the “management engine” is used to implement video DRM.
Netflix in 4K only works on Kaby Lake processors, which implement the latest version of Intel’s hardware DRM.

Wednesday, 12 September 2018

Return to Limbo

David Boddie - Updates (Full Articles) | 15:57, Wednesday, 12 September 2018

After an unsatisfactory period of employment in March this year I took some time to reflect on the technologies I use, trying to learn about more systems and languages that I had only superficially explored. In the period immediately after leaving employment I wanted to try and get back into technologies like Inferno with the idea, amongst others, of porting it to an old netbook-style device I had acquired at the end of 2017. The problem was that I was in a different country to my development system, so any serious work would have to wait until I could access it again. In the meantime I thought about my brief exposure to the world of Qt in yet another work environment – specialisation can be an advantage when finding work but can also limit your opportunities to expand your horizons – and I wondered about other technologies I had missed out on over the years. I had previously explored Android development in an attempt to see what was interesting about it, but it was time to try something different.

Ready, Steady

Being very comfortable with Python as a development language during the late 1990s and 2000s had probably made me a bit lazy when it came to learning other programming languages. However, having experimented with my own flavour of Python, and being unimpressed by the evolution of the Python language, I considered making a serious attempt to learn Go with the idea that it might be a desirable skill to have. Many years ago I was asked to be a proofreader for Mark Summerfield's Programming in Go textbook but found it difficult to find the time outside work to do that, so I didn't manage to learn much of the language. This time, I worked through A Tour of Go, which I found quite rewarding even if I wasn't quite sure how to express what I wanted in some of the exercises — I often felt that I was guessing rather than figuring out exactly how a type should be defined, for example.

When the time came to pack up and return to Norway I considered whether I wanted to continue writing small examples in Go and porting some of my Python modules. It didn't feel all that comfortable or intuitive to write in Go, though I realise that it simply takes practice to gain familiarity. Despite this, it was worth taking some time to get an overview of the basics of Go for reasons that I'll get to later.

An Interlude

As mentioned earlier, I was interested in setting up Inferno on an old netbook – an Efika MX Smartbook – and had already experimented with running the system in its hosted form on top of Debian GNU/Linux. Running hosted Inferno is a nice way to get some experience using the system and seems to be the main way it is used these days. Running the system natively requires porting it to the specific hardware in use, and I knew that I could use the existing code for U-Boot, FreeBSD and Linux as a reference at the very least. So, the task would be to take hardware-specific code for the i.MX51 platform and adapt it to use the conventions of the Inferno porting layer. Building from the ground up, there are a few ports of Inferno to other ARM devices that could be used as foundations for a new port.

One of the things that made it possible for me to consider starting a port of Inferno to the Smartbook was the existing work that had gone into porting FreeBSD to the device. This included a port of U-Boot that enabled the LCD screen to be used to show the boot console instead of requiring a debug cable that is no longer available. This made it much easier to test “bare metal” programs and gain experience with modifying U-Boot, as well as using its API to access the keyboard and screen. Slowly, I built up a set of programs to work out how I might boot into Inferno while using the U-Boot API to allow the operating system to access the framebuffer console.

As you might expect, booting Inferno involves a lot of C code and some assembly language. It also involves modules written in Limbo, a language from the C programming language family that is an ancestor of Go. At a certain point in the boot process – see lecture 10 of this course for details – Inferno runs a Limbo module to perform some system-specific initialisation, and it's useful to know how to write Limbo code beyond simply copying and pasting lines of code from other ports.

Visiting Limbo

At this point the time spent learning the basics of Go proved to be useful, though maybe only in the sense that aspects of Limbo seemed more familiar to me than they might have done if I had never looked at Go. I wrote a few lines of code to help set up devices for the booting system and check that some simple features worked. By this time I had started looking at Limbo for its own sake, not just as something I had to learn to get Inferno working.

There are a lot of existing programs written in Limbo, though it's not always obvious where to find them. The standard introductions, A Descent into Limbo by Brian Kernighan and The Limbo Programming Language by Dennis Ritchie contain example programs, but many practical programs can be found in the Inferno repository itself inside the appl directory, which is where the sources reside for Limbo applications and libraries. I linked to some other resources in my first article about Inferno.

While there are a few resources already available, I wanted to experiment a bit on my own and get a feel for the language. As a result I started to write a collection of small example programs that tested my intuitions about how certain features of the language worked. Over time it became more natural to write programs in Limbo, especially if they used features like threads and channels to delegate work and coordinate how it is performed. Since Limbo was inspired by communicating sequential processes and designed with threading in mind, threads are a built-in feature of the language, so using them is fairly painless compared to their counterparts in languages like Python and C. Using channels to communicate between threads is fairly intuitive, though it can take time to become accustomed to the syntax and control structures.

Outside the Inferno

While Limbo's natural environment is Inferno, the ability to run Inferno as a hosted environment in another operating system makes it possible to experiment with the language fairly conveniently. However, if you want to edit programs in Inferno's graphical environment then you might find it takes some effort to adapt to it, especially if you already have a favourite editor or integrated development environment. As a result, it might be desirable to edit programs in the host operating system and copy them into the hosted Inferno environment. To enable more rapid prototyping I created a tool to automate the process of transferring and compiling a Limbo file to a standalone application, but the way it works is a bit more complicated than that sounds.

Inspired by Chris Double's article on Bundling Inferno Applications and the resources he drew from, the tool I wrote automates the process of building a custom hosted Inferno installation. This seems excessive for the purpose of building a standalone application, though it is important to realise that Limbo programs are relying on features of the Inferno platform, such as the Dis virtual machine, even when they are running on another operating system. By controlling exactly what is built we can ensure that we only include features that a standalone application will need — we can also run strip on the executable afterwards to reduce its size even further.

With a tool to make standalone executables that can run on the host system, it becomes interesting to think about how we might combine the interesting features of Limbo (and its own packaged Inferno) with the libraries and services provided by the host operating system. However, that's a topic for another article.

Categories: Limbo, Inferno, Free Software

Tuesday, 11 September 2018

Fab Lab-enabled Humanitarian Aid in India

Blog – Think. Innovation. | 08:36, Tuesday, 11 September 2018

Since June 2018 the state of Kerala in India has endured massive flooding, as you may have read in the news. This article contains a brief summary of what the international Fab Lab community has been doing so far (early September 2018) to help people recover.

Note 1: I am writing this from my point of view, from what I remember happening. In case you have any additions or corrections, contact me.

Note 2: Unfortunately the conversations are taking place in a closed Telegram group, making it difficult for other people to read back on what happened, study it and learn from it. An attempt has been made to move the conversation to the Fab Lab Forum, but that was not successful.

It started with my search for non-chemical plywood in The Netherlands, for which I contacted many of the Fab Labs in the Benelux. In response, someone gave me an invite link to a Telegram group consisting of Fabbers from all over the world. I joined the group and started following the conversation. Soon people from Fab Labs in India reported about the flooding and asked the international Fab Lab community for ideas and help. A separate Telegram group, “Fab for Kerala”, was created and several dozen people joined.

It was interesting to follow the progress of the conversation. At first the ideas were all over the place, as if ‘we’ were the only people who were going to provide assistance to the affected people in Kerala. Luckily that soon changed towards recognizing the unique strengths and position of local Fab Labs, and understanding that we were operating not only in a very complex physical situation on the flood-affected ground, but also in a complex landscape of various organizations providing help. The need to get into contact with these organizations was understood and steps were taken.

There was also a call to ‘go out there and find out what is going on locally’ and ‘get an overview of the data’ first, before anything could be done. Local Fabbers responded that it was very difficult to find out first what was going on, and that we should just get started providing for the basic needs. These two views came together, with people putting forward ideas of practical things that could be done and people sharing what they had heard people actually needed.

The ideas that came forward were amongst others:

  • Building DIY gravity lamps so people who do not have electricity can have light at night.
  • Building temporary houses for people who lost their house altogether.
  • Various water filtration systems so people can make their own drinking water.

The needs that came forward were amongst others:

  • A way to detect snakes in the houses that were flooded: as the water subsided, snakes hid in moist, dark places and people got bitten.
  • Innovative ways to quickly clean a house of the mud that was left after the water subsided.
  • A quick way to give people an alternative form of shelter, as they were moving from the centralized government-issued shelters back to their own neighbourhoods.

The idea of temporary housing and the need for alternative shelter came together, and there was a brainstorm on how to proceed. With the combined brain power in the group the conclusion was quickly drawn that building family-size domes was the best way to go.

Various Fab Labs in India then made practical arrangements on where to source materials, how to transport them and where to build 5 domes. A volunteer from Fab Lab Kerala got the government interested and secured a commitment that, once the domes proved successful, the government would finance building more of them.

Various volunteers from these local Fab Labs then set out to get funding for building the 5 domes. This proved to be a challenging task, as it appeared to be very difficult and costly to move money into India. I do not really understand the specifics of the difficulties, but I understood that the government of India is actively preventing money from coming into the country. A pragmatic solution was found, which relies a lot on trust and a ‘social contract’ between the members of the Fab Labs.

At this moment volunteers from Vigyan Ashram Fab Lab (in India, but outside of Kerala) have constructed ‘dome kits’ for the skeletons. These kits are being transported to Fab Lab Kerala, so construction can begin. In the meantime practical challenges are being faced, such as which material to use for the ‘walls’ of the domes.

One of the volunteers involved with building the domes has indicated that he will send out regular updates. So, hopefully I can soon link to those from here.

I have been following this group in awe of the amazing willingness and organizing capacity of the international Fab Lab Community.

Hopefully this initiative will continue and people in need will get help. The shift in direction that I have seen, from “be all to everything” towards a focus on the Fab Labs’ unique position and core competences, has in my view been vital. The same goes for the intent to communicate and coordinate with other organizations active in the area.

Maybe this could be the dawn of a new category of humanitarian aid and humanitarian development, with the Fab Lab and Maker values at its core. Not to ‘disrupt and replace’, but to provide a different perspective, bring new value and fresh ideas and solutions.

Diderik

P.S.:

Pieter van der Hijden has made an effort to gather information resources relevant to this initiative. For the complete overview and the list of resources, see the international Fab Lab forum.

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo from article “For Kerala’s flood disaster, we have ourselves to blame” under fair use.

The Fairy Tale of the Unicorn and the Zebras

Blog – Think. Innovation. | 06:58, Tuesday, 11 September 2018

Once upon a time, hippies were roaming the world. They were a happy bunch, living in freedom and spiritual abundance. A small group of them lived in a place called California, or to be more specific, in an area that had the appearance of a lush valley. In that valley life was good. A new kind of technology called computers made entrepreneurial life flourish and gave people steady jobs. But the computers were big, clunky and expensive. They were exclusive to big corporations, governments and universities, which had the space for them, both physically and in their budgets.

But then the hippies saw this as unfair and became convinced that such a powerful technology should be in the hands of the people. One person in particular had this vision of democratizing technology. His name was Steve Jobs, who later became an iconic figure, maybe more famous than Robin Hood. Steve found a group of geeky hobbyists who were building their own computers from electronics available at the local store. These people were also called ‘nerds’ and Steve befriended them. One nerd called Steve Wozniak was particularly bright, although not able to talk freely, only answering specific questions that someone asked him.

Jobs saw the genius of Wozniak and the possibility to make his vision a reality. Together they created a company to put these ‘personal’ computers in the hands of the many. This company became hugely successful, changing the way we perceive and use computers for the better, forever. An entire new phenomenon was created of personal computing devices with countless possibilities and functions, that took over many tasks and created new ones in every business and home on the planet. And that made Jobs and Wozniak rich men, being worshiped all over the world.

Over the years this new kind of vision and entrepreneurship transformed into ‘start-up culture’ and the lush valley got the name Silicon Valley. And everybody in the world wanted to create Silicon Valley in their backyard, copying the start-up culture to breed heroic entrepreneurs who would save the world and make a lot of money while doing so.

And everybody lived happily ever after. Or so was the idea…

In the years that followed something unthinkable happened, however. Slowly but surely the hippies’ vision of a better world through technology was taken over by a dark force led by the Venture Capital Overlords. These VC Overlords were mingling among the hippies and the nerds, slowly influencing them in such a way that they did not even notice. The VC Overlords were sent by Wall Street Empire to extract capital for the already extraordinary wealthy 1%. Smartly talking about ‘radical innovation’, ‘disruption’ and the ‘free market’ the VC Overlords made everyone believe in a story that resulted in extraction and destruction. They convinced start-up founders of the necessary pursuit of the mythical Unicorn, where competition was for losers and creating a monopoly the only prize. Start-ups were meant to become Unicorns, or die trying. And start-up founders believed the myth and did everything they could to make it to the Unicorn.

But not everybody. A small group of good people saw through the false promise of the Unicorn and understood that something terrible had happened. The VC Overlords did more harm than good and the 99% of people were not better off. So they decided to do something about it. They studied the castle the Overlords created and pinpointed its shortcomings. They decided to do away with the Unicorn and proposed a different kind of animal for the start-up founders to pursue: the Zebra. This animal was not the mythical and rare creature that the Unicorn was, and not everything had to be sacrificed to become the Zebra. In fact, every start-up could be a Zebra, as Zebras are social animals that live in groups and benefit from each other’s presence. Zebras balance doing good and making a profit. The story of the Zebra was a much-needed alternative to the one of the VC Overlords, and the Zebra movement, called Zebras Unite, quickly gained traction.

However, something went unnoticed by the good people of Zebras Unite. In their study of the VC Overlords’ Castle they overlooked something fundamental. They did not go into the dungeons of the castle to understand its foundations, a centuries old heritage from what later became the Wall Street Empire. These foundations are of debt-based fiat money and the investment-based corporate charter resulting in the endless growth trap. The new farm of Zebras Unite was being built on the old foundations of the Wall Street Empire and VC Overlords. Fundamentally, the system did not change. Despite all good people and good intentions, the Zebras could sooner or later grow a horn and start to look more and more like that damned creature the Unicorn…

How will this story end?

Will the good people of Zebras Unite find out and correct their mistake in time, so the people can live happily ever after?

– Diderik

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo by Awesome Content.

Monday, 10 September 2018

Nextcloud 14 and Video Verification

Free Software – Frank Karlitschek_ | 21:03, Monday, 10 September 2018

Today the Nextcloud community released Nextcloud 14. This release comes with a ton of improvements in the areas of User Experience, Accessibility, Speed, GDPR compliance, 2 Factor Authentication, Collaboration, Security and many other things. You can find an overview here

But there is one feature I want to highlight because I find it especially noteworthy and interesting. Some people ask us why we are doing more than the classic file sync and share. Why do we care about Calendar, Contacts, Email, Notes, RSS Reader, Deck, Chat, Video and audio calls and so on.

It all fits together

The reason is that I believe a lot of these features belong together. There is a huge benefit in an integrated solution. This doesn’t mean that everyone needs and wants all features, which is why we make it possible to switch each of them off, so that you and your users only have the functionality available that you really want. But there are huge advantages to deep integration. This is very similar to the way KDE and GNOME integrate all applications together on the desktop, or how Office 365 and Google Suite integrate cloud applications.

The why of Video Verification

The example I want to talk about for this release is Video Verification. It is a solution for a problem that was unsolved until now.

Let’s imagine you have a very confidential document that you want to share with one specific person and only this person. This can be important for lawyers, doctors or bank advisors. You can send the sharing link to the email address you might have for this person, but you can’t be sure that it reaches this person and only exactly this person. You don’t know whether the email is seen by the mail server admin, the kid who plays with the smartphone, the spouse, or the hacker who has hijacked the PC or the mail account of the recipient. The document is transmitted via encrypted HTTPS of course, but you don’t know who is on the other side. Even if you send the password via another channel, you can’t have 100% certainty.

Let’s see how this is done in other cases.

TLS solves two problems for HTTPS. The data transfer is encrypted with strong encryption algorithms, but this is not enough. Additionally, certificates are used to make sure that you are actually talking to the right endpoint on the other side of the HTTPS connection. It doesn’t help to securely communicate with what you think is your bank but is actually an attacker!

In GPG encrypted emails the encryption is done with strong and proven algorithms. But there is an additional key signing needed to make sure that the key is owned by the right person.

This second part, the verification of the identity of the recipient, is missing in file sync and share until now. Video verification solves that.

How it works

I want to share a confidential document with someone. In the Nextcloud sharing dialog I type in the email address of the person and activate the option ‘Password via Talk’; then I can set a password to protect the document.

The recipient gets a link to the document by email. Once the person clicks on the link, they see a screen that asks for a password. They can click on the ‘request password’ button, and then a sidebar opens which initiates a Nextcloud Talk call to me. I get a notification about this call in my web interface, via my Nextcloud desktop client or, most likely to get my attention, my phone rings because the Nextcloud app on my phone got a push notification. I answer my phone and have an end-to-end encrypted, peer-to-peer video call with the recipient of the link. I can verify that this is indeed the right person, maybe because I know them or because they hold up a personal picture ID. Once I’m sure it is the right person, I tell them the password for the document. They type in the password and get access to the document.

This procedure is of course over the top for normal standard shares. But if you are dealing with very confidential documents because you are a doctor, a lawyer, a bank or a whistleblower then this is the only way to make sure that the document reaches the right person.

I’m happy and a bit proud that the Nextcloud community is able to produce such innovative features that don’t even exist in proprietary solutions.

As always all the server software, the mobile apps and the desktop clients are 100% open source, free software and can be self hosted by everyone. Nextcloud is a fully open source community driven project without a contributor agreement or closed source and proprietary extensions or an enterprise edition.

You can get more information on nextcloud.com or contribute at github.com/nextcloud

 

 

 

Sunday, 09 September 2018

The Elephant in the Room

Paul Boddie's Free Software-related blog » English | 16:14, Sunday, 09 September 2018

I recently had my attention drawn towards a blog article about the trials of Free Software development by senior Python core developer, Brett Cannon. Now, I agree with the article’s emphasis on being nice to other people, and I sympathise with those who feel that their community-related activities are wearing them down. However, I would like to point out some aspects of his article that fall rather short of my own expectations about what Free Software, or “open source” as he calls it, should be about.

I should perhaps back up a little and mention where this article was found, which was via the “Planet Python” blog aggregator site. I do not read Planet Python, either in my browser or using a feed reader, any more. Those who would create some kind of buzz or energy around Python have somehow managed to cultivate a channel where it seems that almost every post is promoting something. I might quickly and crudely categorise the posts as follows:

  • “Look at our wonderful integrated development environment which is nice to Python (that is written in Java)! (But wouldn’t you rather use the Java-related language we are heavily promoting instead?)”
  • Stub content featuring someone’s consulting/training/publishing business.
  • Random “beginner” articles either parading the zealotry of the new convert or, of course, promoting someone’s consulting/training/publishing business.

Maybe such themes are merely a reflection of attitudes and preoccupations held amongst an influential section of the Python community, and perhaps there is something to connect those attitudes with the topics discussed below. I do recall other articles exhorting Python enthusiasts to get their name out there by doing work on “open source”, with the aim of getting some company’s attention by improving the software that company has thrown over the wall, and “Python at <insert company name>” blogging is, after all, a common Planet Python theme.

Traces of the Pachyderm

But returning to the article in question, if you read it with a Free Software perspective – that is to say that you consciously refer to “Free Software”, knowing that “open source” was coined by people who, for various reasons, wanted another term to use – then certain things seem to stand out. Most obviously, the article never seems to mention software freedom: it is all about “having fun”, attracting contributors to your projects, giving and receiving “kindnesses”, and participating in “this grand social experiment we call open source”. It is almost as if the Free Software movement and the impetus for its foundation never took place, or if it does have a place in someone’s alternative version of history, then in that false view of reality Richard Stallman was only motivated to start the GNU project because maybe he wanted to “have fun hacking a printer”.

Such omissions are less surprising if you have familiarity with attitudes amongst certain people in various Free Software communities – those typically identifying as “open source”, of course – who bear various grudges against the FSF and Richard Stallman. In the Python core development community, those grudges are sometimes related to some advice given about GPL-compatible licensing back when CPython was changing custodian and there had been concerns, apparently expressed by the entity being abandoned by the core developers, that the original “CWI licence” was not substantial enough. We might wonder whether grudges might be better directed towards those who have left CPython with its current, rather incoherent, licensing paper trail.

A Different Kind of Free

Now, as those of us familiar with the notion of Free Software should know, it is “a matter of freedom, not price”. You can very well sell Free Software, and nobody is actually obliged to distribute their Free Software works at no cost. In fact, the advice from those who formulated the very definition of Free Software is this:

Distributing free software is an opportunity to raise funds for development. Don’t waste it!

Of course, there are obligations about providing the source code for software already distributed in executable form and limitations about the fees or charges to be imposed on recipients, but these do not compel no-cost sharing, publication or distribution. Meanwhile, the Open Source Definition, for those who need an “open source” form of guidance, states the following:

The license shall not require a royalty or other fee for such sale.

This appearing, rather amusingly, in a section entitled “Free Redistribution” where “Free” apparently has the same meaning as the “Free” in Free Software: the label that the “open source” crowd were so vehemently opposed to. It is also interesting that the “Source Code” section of the definition also stipulates similar obligations to those upheld by copyleft licences.

So, in the blog article in question, it is rather interesting to see the following appear:

While open source, by definition, is monetarily free, that does not mean that the production of it is free.

Certainly, the latter part of the sentence is true, and we will return to that in a moment, but the former part is demonstrably false: the Open Source Definition states no such thing. In fact, it states that “open source” is not obliged to cost anything, which as the practitioners of logic amongst us will note is absolutely not the same thing as obliging it to always be cost-free.

Bearing the Cost

Much of the article talks about the cost of developing Free Software from the perspective of those putting in the hours to write, test, maintain and support the code. These “production” costs are acknowledged while the producers are somehow shackled to an economic model – one that is apparently misinformed, as noted above – that demands that the cost of all this work be zero to those wanting to acquire it.

So, how exactly are the production costs going to be met? One of the most useful instruments for doing so has apparently been discarded, and I imagine that a similarly misguided attitude lingers with regard to supporting Free Software produced under such misconceptions. Indeed, much of the article focuses on doing “free work”, that of responding to requests, dealing with feedback, shepherding contributions, and the resulting experience of being “stressed by strangers”.

Normally, when one hears of something of this nature taking place, when the means to live decently and to control one’s own life is being taken away from people, there is a word that springs to mind: exploitation. From what we know about certain perspectives about Free Software and “open source”, it is hardly a surprise that the word “exploitation” does not appear in the article because such words are seen by some as “political”, where “political” takes on the meaning of “something raising awkward ethical questions” that if acknowledged and addressed appropriately would actually result in people not being exploited.

But there is an ideological condition which prevents people from being “political”. According to those with such a condition, we are not supposed to embarrass those who could help us deal with the problems that trouble us because that might be “impolite”, and it might also be questioning just how they made their money, how badly they may have treated people on their way to the top, and whether personal merit had less to do with their current status than good fortune and other things that undermine the myth of their own success. We are supposed to conflate money and power with merit or to play along convincingly enough at least as long as the wallets of such potential benefactors are open.

So there is this evasion of the “political” and a pandering to those who might offer validation and maybe even some rewards for all the efforts that are being undertaken as long as their place is not challenged. And what that leaves us with is a catalogue of commiseration, where one can do no more than appeal to those in the same or similar circumstances to be nicer to each other – not a bad idea, it must be said – but where the divisive and exploitative forces at work will result in more conflict over time as people struggle even harder to keep going.

When the author writes this…

Remember, open source is done by people giving something away for free because they choose to; you could say you’re dealing with a bunch of digital hippies.

…we should also remember that “open source” is also done by people who will gladly take those things and still demand more things for free, these being people who will turn a nice profit for themselves while treating others so abominably.

Selective Sustainability

According to the article “the overall goal of open source is to attract and retain people to help maintain an open source project while enjoying the experience”. I cannot speak for those who advocate “open source”, but this stated goal is effectively orthogonal to the aim of Free Software, which is to empower users by presenting them with the means to take control of the software they use. By neglecting software freedom, the article contemplates matters of sustainability whilst ignoring crucial factors that provide the foundations of sustainability.

For instance, there seems to be some kind of distinction being drawn between Free Software projects that people are paid to work on (“corporate open source”) and those done in their own time (“community open source”). This may be a reflection of attitudes within companies: that there are some things that they may use but which, beyond “donations” and letting people spend a portion of their work time on it, they will never pay for. Maybe such software does not align entirely with the corporate goals and is therefore “something someone else can pay for”, like hospitals, schools, public services, infrastructure, and all the other things that companies of a certain size often seem to be unwilling to fund as they reduce their exposure to taxation.

Free Software, then, becomes almost like the subject of charity. Maybe the initiator of a project will get recognised and hired for complementing and enhancing a company’s proprietary product, just like the individual described in the article’s introduction whose project was picked up by the author’s employer. I find it interesting that the author notes how important people are to the sustainability of a project but then acknowledges that the project illustrating his employer’s engagement with “open source” could do just fine without other people getting involved. Nothing is said about why that might be the case.

So, with misapprehensions about whether anyone can ask for money for their Free Software work, plus cultural factors that encourage permissive licensing, “building a following” and doing things “for exposure”, and with Free Software being seen as something needing “donations”, an unsustainable market is cultivated. Those who wish to find some way of funding their activities must compete with people being misled into working for free. And it goes beyond whether people can afford the time: time is money, as they say, and the result may well be that people who have relatively little end up subsidising “gifts” for people who are substantially better off.

One may well be reminded of other exploitative trends in society where the less well-off have to sacrifice more and work harder for the causes of “productivity” and “the economy”, with the real beneficiaries being the more well-off looking to maximise their own gains and optimise their own life-enriching experiences. Such trends are generally not regarded as sustainable in any way. Ultimately, something has to give, as history may so readily remind us.

Below the Surface

It is certainly important to make sure people keep wanting to do an activity, whether that is Free Software development or anything else, but having enough people who “enjoy doing open source” is far from sufficient to genuinely sustain a Free Software project. It is certainly worthwhile investigating the issues that determine whether people derive enjoyment from such work or not, along with the issues that cause stress, dissatisfaction, disillusionment and worse.

But what good is it if no-one deals with these issues? When taking “a full month off annually from volunteering” is seen as the necessary preventative medicine to avoid “potential burnout”, and when there is even such a notion as “open source detox”, does it not indicate that the symptoms may be seeing some relief but the cause remains untreated? The author of the article seems to think that the main source of misery is the way people treat each other:

It all comes down to how people treat each other in open source.

That in itself is something of a superficial diagnosis given that some people may not experience random angry people criticising their work at all, and yet they may be dissatisfied with their situation nevertheless. Others may experience bad interactions, but these might be the result of factors that are not so readily acknowledged. I do not condone behaviour that might be offensive, let alone abusive, but when people react strongly in their interactions with others, they may be doing so as the consequence of what they perceive as ill-treatment or even a form of oppression or betrayal.

There is much talk of kindness, and I cannot exactly disagree with the recommendation that people be kind to each other. But I also have the feeling that another story is not being told, one of how people with a level of power and influence choose to discharge their responsibilities. And in the context of Python, the matter of Python 3 is never far away. People may have received the “gift” of Python, but they have invested in it, too. In a way, this goes beyond any reciprocation of a mere gift because this investment is also a form of submission to the governance of the technology, as well as a form of validation of it that persuades others of its viability and binds those others to its governance, too.

What then must someone with a substantial investment in that technology think when presented with something like the “Python 2.7 Countdown” clock? Is it a helpful tool for technological planning or a way of celebrating and trivialising disruption to widespread investment in, and commitment to, a mature technology? What about the “Python 3 Statement” with projects being encouraged to pledge to drop support for Python 2 and to deliberately not maintain any such support beyond the official “end of life” date? Is it an encouraging statement of enthusiasm or another way of passive-aggressively shaming those who would continue to use and support Python 2?

I accept that it would be unfair to demand that the Python core developers be made to continue to support Python 2. But I also think it is unfair to see destructive measures being taken to put Python 2 “beyond use”, the now-familiar campaigns of inaccurate or incorrect information to supposedly stir people into action to port their software to Python 3, the denial of the name “Python” to anyone who might step up and continue to support Python 2, the atmosphere of hostility to those who might take on that role. And, well, excuse me if I cannot really take the following statement seriously based on the strategic choices of the Python core developers:

And then there’s the fact that your change may have just made literally tons of physical books in bookstores around the world obsolete; something else I have to consider.

It is intriguing that there is an insistence that people not expect anything when they do something for the benefit of another, that “kindnesses” are more appropriate than “favours”:

I switched to using kindnesses because being kind in the cultures I’m familiar with has no expectation of something in return.

Aside from the fact that it becomes pretty demotivating to send fixes to projects and to expect nothing to ever happen to them, to take an example from the article’s author, something which after a while amounts to a lot of wasted time and effort, I cannot help but observe that returning the favour was precisely what the Python core developers expected when promoting Python 3. From there, one cannot help but observe that maybe there is one rule for one group and another rule for another group in the stratified realm of “open source”.

The Role of the Elephant

In developing Free Software and acknowledging it as such, we put software freedom – the elephant in this particular room – at the forefront of our efforts. Without it, as we have seen, the narrative is weaker, people’s motivations seem less convincing or unfathomable, and suggestions for improving everybody’s experience, although welcome, fail to properly grasp some of the actual causes of dissatisfaction and unhappiness. This is because the usual myths of efficiency, creative exuberance, and an idealised “gift culture” need to be conjured up to either explain people’s behaviour or to motivate it, the latter often in a deliberately exploitative way.

It is, in fact, software freedom that gives Python 2 users any hope for their plight, even though many of them may be dissatisfied and some of them may end up committing to other languages instead of Python 3 in future. By emphasising software freedom, they and others may be educated about their right to control their technological investment, and they may be reminded that in seeking assistance to exercise that control, they might be advised to pay others to sustain their investment. At no point does the narrative need to slip off into “free stuff”, “gifts” and the like.

Putting software freedom at the centre of Free Software activities might also promote a more respectful environment. When others are obliged to uphold end-user freedoms, they might already be inclined to think about how they treat other people. We have seen a lot written about interpersonal interactions, and it is right to demand that people treat each other with respect, but maybe such respect needs to be cultivated by having people think about higher goals. And maybe such respect is absent if those goals are deliberately ignored, forcing people to consider only each individual transaction in isolation and to wonder why everyone acts so selfishly.

So instead of having an environment where a company might be looking for people to do free work so that they can seal it up, sell a proprietary product to hapless end-users, treat the workers like “digital hippies”, and thus exploit everyone involved, we invoke software freedom to demand fairness and respect. A culture of respecting the rights of others should help participants realise that they have a collective responsibility, that everyone is in it together, that the well-being of others does not come at the cost of each participant’s own well-being.

I realise that some of the language used above is “political” for some, but when those who object to “political” language perpetuate ignorance of the roots of Free Software and marginalise such movements for social change, they also perpetuate a culture of exploitation, whether they have this as their deliberate goal or not. This elephant has been around for some time, and having a long memory as one might expect, it stands as a witness to the perils of neglecting the ethical imperatives for what we do as Free Software developers.

It is, of course, possible to form a more complete, more coherent picture of how Free Software development occurs and how sustainability in such endeavours might be achieved, but evidently this remains out of reach for those still wishing to pretend that there is no elephant in the room.

Monday, 03 September 2018

Sustainable Computing

Paul Boddie's Free Software-related blog » English | 20:47, Monday, 03 September 2018

Recent discussions about the purpose and functioning of the FSFE have led me to consider the broader picture of what I would expect Free Software and its developers and advocates to seek to achieve in wider society. It was noted, as one might expect, that as a central component of its work the FSFE seeks to uphold the legal conditions for the use of Free Software by making sure that laws and regulations do not discriminate against Free Software licensing.

This indeed keeps the activities of Free Software developers and advocates viable in the face of selfish and anticompetitive resistance to the notions of collaboration and sharing we hold dear. Advocacy for these notions is also important to let people know what is possible with technology and to be familiar with our rich technological heritage. But it turns out that these things, although rather necessary, are not sufficient for Free Software to thrive.

Upholding End-User Freedoms

Much is rightfully made of the four software freedoms: to use, share, study and modify, and to propagate modified works. But it seems likely that the particular enumeration of these four freedoms was inspired (consciously or otherwise) by those famously stated by President Franklin D. Roosevelt in his 1941 “State of the Union” address.

Although some of Roosevelt’s freedoms are general enough to be applicable in any number of contexts (freedom of speech and freedom from want, for instance), others arguably operate on a specific level appropriate for the political discourse of the era. His freedom from fear might well be generalised to go beyond national aggression and to address the general fears and insecurities that people face in their own societies. Indeed, his freedom of worship might be incorporated into a freedom from persecution or freedom from prejudice, these latter things being specialised but logically consequent forms of a universal freedom from fear.

But what might end-users have to fear? The list is long indeed, but here we might as well make a start. They might fear surveillance, the invasion of their privacy and of being manipulated to their disadvantage, the theft of their data, their identity and their belongings, the loss of their access to technology, be that through vandalism, technological failure or obsolescence, or the needless introduction of inaccessible or unintuitive technology in the service of fad and fashion.

Using technology has always entailed encountering risks, and the four software freedoms are a way of mitigating those risks, but as technology has proliferated it would seem that additional freedoms, or additional ways of asserting these freedoms, are now required. Let us look at some areas where advocacy and policy work fail to reach all by themselves.

Cultivating Free Software Development

Advocating for decent laws and the fair treatment of Free Software is an essential part of the work of organisations like the FSFE. But there also has to be Free Software out in the wider world to be treated fairly, and here we encounter another absent freedom. Proponents of the business-friendly interpretation of “open source” insist that Free Software happens all by itself, that somewhere someone will find the time to develop a solution that is ripe for wider application and commercialisation.

Of course, this neglects the personal experience of any person actually doing Free Software development. Even if people really are doing a lot of development work in their own time, playing out their roles precisely as cast in the “sharing economy” (which seems to be more about wringing the last drops of productivity out of the lower tiers of the economy than about anyone in the upper tiers actually participating in any “sharing”), it is rather likely that someone else is really paying their bills, maybe an employer who pays them to do something else during the day. These people squeeze their Free Software contributions in around the edges, hopefully not burning themselves out in the process.

Freedom from want, then, very much applies to Free Software development. For those who wish to do the right thing and even get paid for it, the task is to find a sympathetic employer. Some companies do indeed value Free Software enough to pay people to develop it, maybe because those companies provide such software themselves. Others may only pay people as a consequence of providing non-free software or services that neglect some of the other freedoms mentioned above. And still others may just treat Free Software as that magical resource that keeps on providing code for nothing.

Making sure that Free Software may actually be developed should be a priority for anyone seriously advocating Free Software adoption. Otherwise, it becomes a hypothetical quantity: something that could be used for serious things but might never actually be observed in such forms, easily dismissed as the work of “hobbyists” and not “professionals”, never mind that the same people can act in either capacity.

Unfortunately, there are no easy solutions to suggest for this need. It is fair to state that with a genuine “business case”, Free Software can get funded and find its audience, but that often entails a mixture of opportunism, the right connections, and an element of good fortune, as well as the mindset needed to hustle for business that many developers either do not have or do not wish to cultivate. It also assumes that all Free Software worth funding needs to have some kind of sales value, whereas much of the real value in Free Software is not to be found in things that deliver specific solutions: it is in the mundane infrastructure code that makes such solutions possible.

Respecting the User

Those of us who have persuaded others to use Free Software have not merely been doing so out of personal conviction that it is the ethically-correct thing for us and those others to use. There are often good practical reasons for using Free Software and asserting control over computing devices, even if it might make a little more work for us when things do not work as they should.

Maybe the risks of malware or experience of such unpleasantness modifies attitudes, combined with a realisation that not much help is actually to be had with supposedly familiar and convenient (and illegally bundled) proprietary software when such malevolence strikes. The very pragmatism that Free Software advocates supposedly do not have – at least if you ask an advocate for proprietary or permissively-licensed software – is, in fact, a powerful motivation for them to embrace Free Software in the first place. They realise that control is an illusion without the four software freedoms.

But the story cannot end with the user being able to theoretically exercise those freedoms. Maybe they do not have the time, skills or resources to do so. Maybe they cannot find someone to do so on their behalf, perhaps because nobody is able to make a living performing such services. And all the while, more software is written, deployed and pushed out globally. Anyone who has seen familiar user interfaces becoming distorted, degraded, unfamiliar, frustrating as time passes, shaped by some unfathomable agenda, knows that only a very well-resourced end-user would be able to swim against such an overpowering current.

Respecting the user must involve going beyond acknowledging their software freedoms to also acknowledging their needs: for secure computing environments that remain familiar (even if that seems “boring”), that do not change abruptly (because someone had a brainwave in an airport lounge waiting to go to some “developer summit” or other), and that allow sensible customisation that can be reconciled with upstream improvements (as opposed to imposing a “my way or the highway”, “delete your settings” ultimatum). It involves recognising their investment in the right thing, not telling them that they have to work harder, or to buy newer hardware, just to keep up.

And this also means that the Free Software movement has to provide answers beyond those concerning the nature of the software. Free Software licensing does not have enough to say about privacy and security, let alone how those things might be upheld in the real world. Yet such concerns do impact Free Software developers, meaning that some kinds of solutions do exist that might benefit a wider audience. Is it not time to deliver things like properly secure communications where people can trust the medium, verify who it is that sends them messages, ignore the time-wasters, opportunists and criminals, and instead focus on the interactions that are meaningful and important?

And is it not time that those with the knowledge and influence in the Free Software realm offered a more coherent path to achieving this, instead of all the many calls for people to “use encryption” only to be presented with a baffling array of options and a summary that combines “it’s complicated” with “you’re on your own”? To bring the users freedom from the kind of fear they experience through a lack of privacy and security? It requires the application of technical knowledge, certainly, but it also requires us to influence the way in which technology is being driven by forces in wider society.

Doing the Right Thing

Free Software, especially when labelled as “open source”, often has little to say about how the realm of technology should evolve. Indeed, Free Software has typically reacted to technological evolution, responding to the demands of various industries, but not making demands of its own. Of course, software in itself is generally but a mere instrument to achieve other things, and there are some who advocate a form of distinction between the specific ethics of software freedom and ethics applying elsewhere. For them, it seems to be acceptable to promote “open source” while undermining the rights and freedoms of others.

Our standards should be far higher than that! Although there is a logical argument to not combine other considerations with the clearly-stated four software freedoms, it should not stop us from complementing those freedoms with statements of our own values. Those who use or are subject to our software should be respected and their needs safeguarded. And we should seek to influence the development of technology to uphold our ideals.

Let us consider a mundane but useful example. The World Wide Web has had an enormous impact on society, providing people with access to information, knowledge, communication, services on a scale and with a convenience that would have been almost unimaginable only a few decades ago. In the beginning, it was slow (due to bandwidth limitations, even on academic networks), it was fairly primitive (compared to some kinds of desktop applications), and it lacked support for encryption and sophisticated interactions. More functionality was needed to make it more broadly useful for the kinds of things people wanted to see using it.

In the intervening years, a kind of “functional escalation” has turned it into something that is indeed powerful, with sophisticated document rendering and interaction mechanisms, perhaps achieving some of the ambitions of those who were there when the Web first gathered momentum. But it has caused a proliferation of needless complexity, as sites lazily call out to pull down megabytes of data to dress up significantly smaller amounts of content, as “trackers” and “analytics” are added to spy on the user, as absurd visual effects are employed (background videos, animated form fields), with the user’s computer now finding it difficult to bear the weight of all this bloat, and with that user struggling to minimise their exposure to privacy invasions and potential exploitation.

For many years it was a given that people would, even should, upgrade their computers regularly. It was almost phrased as a public duty by those who profited from driving technological progress in such a selfish fashion. As is frequently the case with technology, it is only after people have realised what can be made possible that they start to consider whether it should have been made possible at all. Do we really want to run something resembling an operating system in a Web browser? Is it likely that this will be efficient or secure? Can we trust the people who bring us these things, their visions, their competence?

The unquestioning proliferation of technology poses serious challenges to the well-being of individuals and the ecology of our planet. As people who have some control over the way technology is shaped and deployed, is it not our responsibility to make sure that its use is not damaging to its users, that it does not mandate destructive consumer practices, that people can enjoy the benefits – modest as they often are when the background videos and animated widgets are stripped away – without being under continuous threat of being left behind, isolated, excluded because their phone or computer is not this season’s model?

Strengthening Freedoms

In rather too many words, I have described some of the challenges that need to be confronted by Free Software advocates. We need to augment the four software freedoms with some freedoms or statements of our own. They might say that the software and the solutions we want to develop and to encourage should be…

  • Sustainable to make: developers and their collaborators should be respected, their contributions fairly rewarded, their work acknowledged and sustained by those who use it
  • Sustainable to choose and to use: adopters should have their role recognised, with their choices validated and rewarded through respectful maintenance and evolution of the software on which they have come to depend
  • Encouraging of sustainable outcomes: the sustainability of the production and adoption of the software should encourage sustainability in other ways, promoting longevity, guarding against obsolescence, preventing needless and frivolous consumption, strengthening society and making it fairer and more resilient

It might be said that in order to have a fairer, kinder world there is no shortage of battles to be fought. With such sentiments, the discussion about what more might be done is usually brought to a swift conclusion. In this article, I hope to have made a case that what we can be doing is often not so different from what we are already doing.

And, of course, this brings us back to the awkward matter of why we, or the organisations we support, are not always so enthusiastic about these neglected areas of concern. Wouldn’t we all be better off by adding a dimension of sustainability to the freedoms we already recognise and enjoy?

Come meet KDE in Denver - September 6-9

TSDgeos' blog | 19:39, Monday, 03 September 2018

This week, Aleix (KDE e.V. Vice President), Albert Vaca (KDE Connect maintainer) and I will be in Denver to attend the Libre Application Summit 2018.

Libre Application Summit is unfortunately not free to attend, so even though I'd urge you to come and see the amazing talks we're going to give, I can see why not everyone would want to come.

So I'm going to say that if you're in the Denver area and are a KDE fan, write a comment here and maybe we can meet for drinks or something :)

Saturday, 01 September 2018

A simple picture language for GNU Guile

Rekado | 21:00, Saturday, 01 September 2018

One thing that I really love about Racket is its picture language, which allows you to play with geometric shapes in an interactive session in Dr Racket. The shapes are displayed right there in the REPL, just like numbers or strings. Instead of writing a programme that prints "hello world" or that computes the Fibonacci numbers, one could write a programme that composes differently rotated, coloured shapes and prints those instead.

I use GNU Guile for my own projects, and sadly we don't have an equivalent of Racket's picture language or the Dr Racket editor environment. So I made something: a simple picture language for GNU Guile. It provides simple primitive procedures to generate shapes, to manipulate them, and to compose them.

Download the single Guile module containing the implementation:

mkdir ~/pict
wget -O ~/pict/pict.scm https://elephly.net/downies/pict.scm

To actually see these shapes as you play with them, you need to use a graphical instance of GNU Emacs with Geiser.

Start geiser in Emacs and load the module:

M-x run-guile
(add-to-load-path (string-append (getenv "HOME") "/pict"))
,use (pict)

Let’s play!

(circle 100)

If you see a pretty circle: hooray! Let’s play some more:

(colorize (circle 100) "red")
(disk 80)
(rectangle 50 100)

Let's compose and manipulate some shapes!

,use (srfi srfi-1)
,use (srfi srfi-26)
(apply hc-append
       (map (cut circle <>)
            (iota 10 2 4)))

(apply cc-superimpose
       (map (cut circle <>)
            (iota 10 2 4)))

(apply hc-append
       (map (cut rotate (rectangle 10 30) <>)
            (iota 36 0 10)))

(apply cc-superimpose
       (map (cut rotate (triangle 100 300) <>)
            (iota 36 0 10)))

There are many more procedures for primitive shapes and for manipulations. Almost all procedures in pict.scm have docstrings, so feel free to explore the code to find fun things to play with!

PS: I realize that it's silly to have a blog post about a picture language without any pictures. Instead of thinking about this now, get the module and make some pretty pictures yourself!

Thursday, 30 August 2018

Technoshamanism meeting in Axat, France (October 5 to 8)

agger's Free Software blog | 19:25, Thursday, 30 August 2018

 


18880818073_292066bba7_o

Complete program to appear here.

First published at the technoshamanism site.


THE MEETING

We would like to invite you to the Technoshamanism meeting in Axat, south of France, at Le Dojo art space and association.

We still enjoy temporary autonomous zones and new ways of life, of art/life; we try to think and cooperate towards the goal of food self-sufficiency and interdependence, towards the reforestation of the Earth, and towards the ancestorfuturist fertilization of the imagination. Our main practice is to promote networks of the unconscious, strengthening the desire to form communities, as well as proposing alternatives to the “productive” thinking of science and technology.

THE NETWORK

The Technoshamanism network has existed since 2014. It has published one book and organized two international festivals in the south of Bahia (produced in partnership with the Pataxó from the indigenous villages Aldeia Velha and Aldeia Pará, near Porto Seguro, where the first Portuguese caravels arrived in the year 1500). The third festival is planned for August 2019 in Denmark.

Technoshamanism arises from the confluence of several networks deriving from the free software and free culture movements. It promotes meetings, events, DIY ritual performances, electronic music, food forests and immersive processes, remixing worldviews and promoting the decolonization of thought. The network brings together artists, biohackers, thinkers, activists, indigenous people and indigenists, promoting a social clinic for the future and the meeting between technologies, rituals, synergies and sensitivities.

Keywords: free cosmogony, ancestorfuturism, noisecracy, perspectivism, earthcosmism, free technologies, decolonialism of thought, production of meaning and care.

THE MEETING

We will meet from October 5-8 in Axat, in the south of France, in the space and association Le Dojo, a house that has brought together artists, activists and others since 2011. The meals will be collective and shared among the participants. Lodging costs 2 euros per day per person. To get to Axat, travel to one of the nearby towns (Carcassonne or Perpignan) and then take a local bus for one euro. Timetable (in French):

http://axat.fr/horaires-bus/;http://axat.fr/wp-content/uploads/2015/10/ST_29072015-_Ligne_53_-Carcassonne_Axat_horaires.pdf

http://www.pays-axat.org/iso_album/ligne_100_quillan_-_perpignan_septembre_2013.pdf

THE OPEN CALL

If you are interested in getting to know the Technoshamanism network better and in going deeper into ancestry and speculative fiction, we invite you, curious people, earthlings, hackers, cyborgs and other strangers of this Anthropocene world, to join our meeting and bring your ideas, experiences, practices and so on. Send us a short bio plus a proposal of up to 300 words to the email xamanismotecnologico@gmail.com. The open call ends September 18.

Do not hesitate to get back to us with any questions or comments. Welcome to the Technoshamanism network!

LINKS

Blog https://tecnoxamanismo.wordpress.com/blog/

Questionnaire about Technoshamanism https://tecnoxamanismo.files.wordpress.com/2018/06/questionnaire-about-echnoshamanism.pdf

Book https://issuu.com/invisiveisproducoes/docs/tcnxmnsm_ebook_resolution_1

About the first festival http://tecnoxamanismo.com.br/

https://tecnoxamanismo.wordpress.com/2017/06/29/technoshamanism-in-aarhus-rethink-ancestrality-and-technology/

https://archive.org/details/@tecnoxamanismo

About the Dojo https://www.flickr.com/photos/ledojodaxat/

29939385480_39a433a118_c

Switzerland – 2016



Wednesday, 29 August 2018

FrOSCon 2018

Inductive Bias | 16:34, Wednesday, 29 August 2018

A more general summary of the conference, written in German, can be found at https://tech.europace.de/froscon-2018/. Below is a more detailed summary of the keynote by Lorena Jaume-Palasi.

In her keynote "Blessed by the algorithm - the computer says no!" Lorena detailed the intersection of ethics and technology when it comes to automated decision making systems. As much as humans with a technical training shy away from questions related to ethics, humans trained in ethics often shy away from topics that involve a technical layer. However as technology becomes more and more ingrained in everyday life we need people who understand both - tech and ethical questions.

Lorena started her talk detailing how one typical property of human decision making involves inconsistency, otherwise known as noise: Where machine made decisions can be either accurate and consistent or biased and consistent, human decisions are either inconsistent but more or less accurate or inconsistent and biased. Experiments that showed this level of inconsistency are plenty, ranging from time estimates for tasks being different depending on weather, mood, time of day, being hungry or not up to judges being influenced by similar factors in court.

One interesting aspect: While in order to measure bias, we need to be aware of the right answer, this is not necessary for measuring inconsistency. Here's where monitoring decisions can be helpful to palliate human inconsistencies.

In order to understand the impact of automated decision making on society one needs a framework for evaluating it - the field of ethics provides multiple such frameworks. Ethics comes in three flavours: meta-ethics deals with what is good and what counts as an ethical demand; normative ethics deals with standards and principles; applied ethics deals with applying ethics to concrete situations.

In western societies there are some common approaches to answering ethics related questions: Utilitarian ethics asks which outputs we want to achieve. Human rights based ethics asks which inputs are permissible - what obligations do we have, what things should never be done? Virtue ethics asks what kind of human being one wants to be, what does behaviour say about one's character? These approaches are being used by standardisation groups at e.g. DIN and ISO to answer ethical questions related to automation.

For tackling ethics and automation today there are a couple of viewpoints, looking at questions like finding criteria within the context of designing and processing data (think GDPR), algorithmic transparency, and prohibiting the use of certain data points for decision making. The importance of those questions is amplified now because automated decision making makes its way into medicine, information sharing and politics - often separating the point of decision making from the point of acting.

One key assumption in ethics is that you should always be able to state why you took a certain action - except for actions taken by mentally ill people, this has so far generally been true. Now there are many more players in the decision making process: people collecting data, coders, people preparing data, people generating data, users of the systems developed. For regulators this setup is confusing: if something goes wrong, who is to be held accountable? Often the problem isn't even in the implementation of the system but in how it's being used and deployed.

This confusion leads to challenges for society: democracy does not understand collectives, it understands individuals acting. Algorithms, however, do not understand individuals, but instead base decisions on comparing individuals to collectives and inferring how to move forward from there. This property impacts individuals as well as society.

For understanding which types of biases make it into algorithmic decision making systems that are built on top of human generated training data one needs to understand where bias can come from:

The uncertainty bias is born out of a lack of training data for specific groups, amplifying outlier behaviour, as well as the risk of over-fitting. One-sided criteria can serve to reinforce a bias that is generated by society: even ruling out gender, names and images from hiring decisions, a focus on years of leadership experience gives an advantage to those more likely to have been exposed to leadership roles - typically neither people of colour nor people from poorer districts. One-sided hardware can make interaction harder - think of face recognition systems that have trouble identifying non-white or non-male humans.

In the EU we focus on the precautionary principle where launching new technology means showing it's not harmful. This though proves more and more complex as technology becomes entrenched in everyday life.

What other biases do humans have? There are information biases, where humans tend to reason based on analogy and on the illusion of control (overestimating oneself, downplaying risk, downplaying uncertainty); there is an escalation of commitment (a tendency to stick to a decision even if it's the wrong one); and there are single-outcome calculations.

Cognitive biases are related to framing, criteria selection (we tend to value quantitative criteria over qualitative criteria) and rationality. There are risk biases (uncertainties about positive outcomes typically aren't seen as risks, and risk tends to be evaluated by magnitude rather than by a combination of magnitude and probability). There are attitude-based biases: in experiments, senior managers considered risk taking as part of their job, and the level of risk taken depended on the amount of positive performance feedback given to a certain person: the better people believe they are, the more risk they are willing to take. Uncertainty biases relate to the difference between the information I believe I need and the information available - in experiments, humans made worse decisions the more data and information was available to them.

General advice: beware of your biases...

Tuesday, 28 August 2018

Repeated prompts for SSH key passphrase after upgrading to Ubuntu 18.04 LTS?

Losca | 08:02, Tuesday, 28 August 2018

This was a tricky one (for me, anyway) so posting a note to help others.

The problem was that after upgrading to Ubuntu 18.04 LTS from 16.04 LTS, I had trouble with my SSH agent. I was always being asked for the passphrase again and again, even if I had just used the key. This wouldn't have been a showstopper otherwise, but it made using virt-manager over SSH impossible because it was asking for the passphrase tens of times.

I didn't find anything on the web, and I didn't find any legacy software or obsolete configs to remove to fix the problem. I only got a hint when I tried ssh-add -l, with which I got the error message ”error fetching identities: Invalid key length”. This led me on the right track, since after a while I started suspecting my old keys in .ssh that I hadn't used for years. And sure enough: after I removed one id_dsa (!) key and one old RSA key from the .ssh directory (with GNOME's Keyring app, to be exact), ssh-add -l started working, the familiar SSH agent behavior resumed, and I was able to use my remote VMs fine too!
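For anyone hitting the same problem, here is a minimal sketch of the diagnosis and cleanup described above; the key file names are only examples (yours will differ), and you may of course prefer a GUI tool such as GNOME's Keyring app instead of the shell:

# Ask the agent to list identities; the "Invalid key length" error above
# points at a stale or unsupported key lying around in ~/.ssh.
ssh-add -l

# Move the suspect legacy keys out of the way (example file names only).
mkdir -p ~/old-ssh-keys
mv ~/.ssh/id_dsa ~/.ssh/id_dsa.pub ~/old-ssh-keys/

# With the old keys gone (you may need to log out and back in for the
# agent to forget them), the listing and the agent should behave normally.
ssh-add -l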

Hope this helps.

PS. While on the topic, remember to upgrade your private keys' internal format from the ”worse than plaintext” format to the new OpenSSH format with the -o option: blog post – tl;dr: run ssh-keygen -p -o -f id_rsa and retype your passphrase.

Thursday, 16 August 2018

Mixed Emotions On Debian Anniversary

Bits from the Basement | 17:26, Thursday, 16 August 2018

When I woke up this morning, my first conscious thought was that today is the 25th anniversary of a project I myself have been dedicated to for nearly 24 years, the Debian GNU/Linux distribution. I knew it was coming, but beyond recognizing the day to family and friends, I hadn't really thought a lot about what I might do to mark the occasion.

Before I even got out of bed, however, I learned of the passing of Aretha Franklin, the Queen of Soul. I suspect it would be difficult to be a caring human being, born in my country in my generation, and not feel at least some impact from her mere existence. Such a strong woman, with amazing talent, whose name comes up in the context of civil rights and women's rights beyond the incredible impact of her music. I know it's a corny thing to write, but after talking to my wife about it over coffee, Aretha really has been part of "the soundtrack of our lives". Clearly, others feel the same, because in her half-century-plus professional career, "Ms Franklin" won something like 18 Grammy awards, the Presidential Medal of Freedom, and other honors too numerous to list. She will be missed.

What's the connection, if any, between these two? In 2002, in my platform for election as Debian Project Leader, I wrote that "working on Debian is my way of expressing my most strongly held beliefs about freedom, choice, quality, and utility." Over the years, I've come to think of software freedom as an obvious and important component of our broader freedom and equality. And that idea was strongly reinforced by the excellent talk Karen Sandler and Molly de Blanc gave at Debconf18 in Taiwan recently, in which they pointed out that in our modern world where software is part of everything, everything can be thought of as a free software issue!

So how am I going to acknowledge and celebrate Debian's 25th anniversary today? By putting some of my favorite Aretha tracks on our whole house audio system built entirely using libre hardware and software, and work to find and fix at least one more bug in one of my Debian packages. Because expressing my beliefs through actions in this way is, I think, the most effective way I can personally contribute in some small way to freedom and equality in the world, and thus also the finest tribute I can pay to Debian... and to Aretha Franklin.

Thursday, 09 August 2018

PSA: Use SASL in konversation

TSDgeos' blog | 17:44, Thursday, 09 August 2018

You probably have seen that Freenode has been getting lots of spam lately.

To protect against that some channels have activated a flag that only allows authenticated users to enter the channel.

If you're using the regular "nickserv" authentication way as I was doing, the authentication happens in parallel to entering the channels and you'll probably be rejected from joining some.

What you want is to use SASL, a "new IRC" mechanism that authenticates first and only then joins the server/channels.

More info at https://userbase.kde.org/Konversation/Configuring_SASL_authentication.

Thanks Fuchs on #kde-devel for enlightening me :)

And of course, I'm going to Akademy ;)

Wednesday, 01 August 2018

Two years of terminal device freedom

tobias_platen's blog | 19:01, Wednesday, 01 August 2018

On August 1, 2016 a new law that allows clients of German internet providers to use any terminal device they choose entered into force. Internet service providers (ISPs) are now required to give users any information they need to connect an alternative router. In many other EU countries there is still no such law, and the Radio Lockdown Directive is compulsory in all of them. In Germany, the old “Gesetz über Funkanlagen und Telekommunikationsendeinrichtungen” has now been replaced with the new “Funkanlagengesetz”.

Routers that use radio standards such as WiFi and DECT fall under the Radio Lockdown Directive, and since the European Commission has not yet passed a delegated act, there is no requirement to implement a lockdown for current hardware. Many WiFi chipsets require non-free firmware, and future generations of that non-free firmware could be used to lock down all kinds of radio equipment. Radio equipment that comes with the Respects Your Freedom hardware product certification is 2.4 GHz only in many cases, but some hardware that supports 5 GHz does exist.

Voice over IP (VoIP) is supported by most alternative routers and free software such as Asterisk. Since most ISPs and routers use SIP it is now possible to connect modern VoIP telephones directly to routers such as the FritzBox. Many compulsory routers such as the O2 Box 6431 use SIP internally, but it is not possible to connect external SIP phones with the stock firmware. So some users install OpenWRT on their box to get rid of those restrictions. Some ISPs in the cable market don’t use SIP, but an incompatible protocol called EuroPacketCable which is unsupported by most alternative routers.

Many set-top boxes used for TV streaming use Broadcom chips, which offer poor Free Software support. TV streaming could be done with free software, but many channels are scrambled, requiring non-free software to unscramble them. Old hardware such as the Media-Receiver Entry may become obsolete when the Telekom stops offering Start TV in 2019. No ISP publishes the interface descriptions for TV streaming, even though they could do so for the DRM-free channels. It is possible to use Kodi to watch those DRM-free channels, but many features such as an Electronic Program Guide (EPG) do not work with IP TV streaming.

With this new law users now have a “freedom of choice”, but they do not have full “software freedom” because many embedded devices still use proprietary software. Freedom-respecting terminal devices are rare, and often they do not implement all the features a user needs. Old analogue telephones sold in the 90s did not have any of those problems.

Tuesday, 31 July 2018

Building Briar Reproducible And Why It Matters

Free Software – | 15:12, Tuesday, 31 July 2018

Briar is a secure messenger, the next step in the crypto messenger evolution if you will. It is Free Software (Open Source), so everybody has the possibility to inspect and audit its source code without needing to trust third parties to have done so in secret.

However, for security critical software, just being Free Software is not enough. It is very easy to install a backdoor before compiling the source code into a binary file. This backdoor would of course not be part of the published source code, but it would be part of the file that gets released to the public.

So the question becomes: How can I be sure that the public source code corresponds exactly to what gets installed on my phone? The answer to that question is: Reproducible Builds.

A reproducible build is a process of compiling the human-readable source code into the final binary in such a way that it can be repeated by different people on different computers and still produce identical results. This way you can be reasonably sure there’s nothing else to worry about than the source code itself. Of course, there can still be backdoors or other malicious features hidden in the code, but at least then these can be found and exposed. Briar already had one security audit and hopefully more will follow.

After working on reproducible builds for a long time, Briar released version 1.0.12, the first version that is built deterministically, so everybody should be able to reproduce the exact same binary. To make this as easy as possible, we wrote a tool called Briar Reproducer. It automates the building and verification process, but it introduces yet another layer of things you need to trust or audit, so you are of course free to reproduce the results independently. We tried to keep things as basic and simple as possible though.

When we tag a new release in our source code repository, our continuous integration system will automatically fetch this release, build it and compare it to the published reference APK binary. It would be great, if people independent from Briar would set up an automated verification infrastructure that publishes the results for each Briar release. The more (diverse) people do this, the easier it is to trust that Briar releases haven’t been compromised!
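As a rough illustration of what such independent verification boils down to, here is a minimal sketch assuming two parties have each built the same tagged release; the file paths are placeholders, and the actual building and comparison against the reference APK is what the Briar Reproducer automates:

# Two independent builds of the same tagged source should be byte-identical
# if the build really is reproducible; the paths below are placeholders.
sha256sum build-machine-a/briar.apk build-machine-b/briar.apk

# Matching checksums mean both parties compiled the published source code
# into exactly the same binary.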

So far, we only reproduce Briar itself and not all the libraries that it uses. However, we at least pin the hashes of these libraries with gradle-witness. There’s one exception: since it is rather essential and critical, we also provide a Tor Reproducer that you can use to verify that the Tor binaries embedded in Briar correspond to the public source code as well.

Doing reproducible builds can be challenging and there are problems waiting at every corner. We hope that all future releases of Briar will continue to be reproducible. If you are interested in the challenges we have overcome so far, you are welcome to read through the tickets linked in this master ticket.

What about F-Droid?

Reproducible builds are the precondition for getting Briar into the official F-Droid repository with its official code signature. (Briar also has its own F-Droid repository for now.) Unfortunately, we can not get it into F-Droid just now, because we are still working around a reproducible builds bug and this workaround is not yet available on F-Droid’s buildserver (which still uses Debian old-stable). We hope that it will be upgraded soon, so Briar will be available to all F-Droid users without adding an extra repository.


Sunday, 29 July 2018

Should you donate to Open Source Software?

Blog – Think. Innovation. | 13:06, Sunday, 29 July 2018

In short: yes, you should, if you are a regular user and can afford it. For the longer version: read on. In this article I will explain that donating is not just “the right thing to do”, but also a practical way of supporting Open Source Software (OSS). I will show you a fair and pragmatic method that I use myself. You will see that donating does not need to cost you much (in my case less than € 25 per month, about 25% of the cost of the proprietary alternatives), is easy, and gets this topic “off your mind” the rest of the time.

Using LibreOffice as an example you will also see that even if only the 5 governments mentioned on the LibreOffice website would follow my method, then this would bring in almost 10 times more than would be needed to pay all the people working on the project a decent salary, even when living in a relatively ‘expensive’ country like The Netherlands!

“When a project is valuable it will find funds”

From a user’s perspective it seems fair to donate to Open Source Software: you get utility value from using these programs and you would otherwise need to buy proprietary software. So some reciprocity simply feels like the right thing to do. For me, not only does the utility value matter; the whole idea of software freedom resonates with me as well.

On occasion I raised the question: “How much should one donate?” in the Free Software Foundation Europe (FSFE) community. I was surprised to find out that this is a question of little concern there. The reply that I got was something like: “When a project is valuable it will find the necessary funds anyway.” From my user’s perspective I did not understand this reaction, until I started doing some research for writing this piece.

It boils down to this: every contributor to OSS has her own reasons for contributing, without expecting to be paid by the project. Maybe she wants to learn, is a heavy user of the program herself, is being paid for it by her employer (when this company sells products or services based on the OSS) or simply enjoys the process of creation.

The accepted fact is that donations will never be able to provide a livelihood for the contributors, as they are insufficient and vary over time. I even read that bringing in too much money could jeopardize the stability of a project, as questions like who gets to decide what the money is spent on, and where it actually goes, can result in conflict.

Contribute, otherwise donate

So, as a user, should you donate to OSS at all then? My answer: yes, if you can afford it. However, the real scarcity in OSS development is not money, it is time: the time to develop, test, write documentation, translate, do community outreach and more. If you have a skill that is valuable to an OSS project and you can free up some time, then contribute your time and that skill.

In case you are not able or willing to contribute, then donate money if the project asks for it and you can afford it. My take is that if they ask for donations, they must have a reason to do so as part of maintaining the project. It is the project’s responsibility to decide on spending it wisely.

Furthermore, while you may have the option to go for a proprietary alternative if the need arises, many less fortunate people do not have that option. By supporting an OSS project you help keep it alive, indirectly making it possible for those people to also use great-quality software in freedom.

For me personally, I decided to primarily donate money. On occasion I have donated time, in the form of speaking about Open Source at conferences. For the rest, I do not have the necessary skills.

To which projects to donate?

The question then arises: to which projects should I donate? I have taken a rather crude yet pragmatic approach here, donating only to those projects that I consciously decide to install. Since every OSS project typically depends on so many other pieces of OSS (called ‘upstream packages’), donating to all projects gets very complicated very quickly, if not practically impossible. Furthermore, I see it as the project’s responsibility to decide on passing part of its incoming funds on to the packages it depends on.

For example, if GIMP (photo editing software) came pre-installed with the GNU/Linux distribution I am using, then I would not donate to GIMP. If, on the other hand, GIMP did not come pre-installed and I had to install it manually with the (graphical interface of the) package manager, then I would donate to GIMP. Is this arbitrary? It absolutely is, but at the moment I do not know of any pragmatic alternative approach. If you do, please share!

I also take my time in evaluating software, which means it can take months before it becomes apparent that I actually use a piece of software and come back to it regularly. I see this as a major “freedom”: paying based on proven utility.

How much to donate?

After deciding to which projects to donate, the next question becomes: how much to donate? One of the many stated positive attributes of Open Source is that, while enabling freedom and being more valuable, it is also cheaper to create than its proprietary counterparts. There is no head office, no bureaucratic layers of managers, no corporate centralized value extraction, no expensive marketing campaigns and sales people, no legal enforcement costs. So for software that has many users globally, even if the software developers were paid, the cost per user would be very low.

I once read that Open Source Ecology is aiming to create the technology and know-how to be able to produce machines at 25% of the cost of proprietary commercial counterparts. Although completely arbitrary, I feel this 25% is a reasonable assumption: it is well under half the proprietary price, but certainly not nothing.

So, for example: I use LibreOffice for document writing, spreadsheets and presentations. A proprietary alternative would be Microsoft Office. The one-time price for a new license of Microsoft Office 2016 is € 279 in The Netherlands. I have no idea how long this license would last, but let’s assume a period of 5 years. I choose to donate monthly to OSS, so this means that for LibreOffice I donate € 279 / 5 / 12 = € 4,65 per month.
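For those who like to tinker, here is a minimal sketch of that amortization in Python; the price and the 5-year assumption are simply the ones from the example above, not authoritative figures.

    def monthly_donation(price_eur: float, lifetime_years: float) -> float:
        """Spread a one-time license price over its assumed lifetime in months."""
        return price_eur / (lifetime_years * 12)


    # The LibreOffice example from above: a € 279 Microsoft Office 2016 license,
    # assumed to last 5 years.
    print(round(monthly_donation(279, 5), 2))  # -> 4.65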

The table below shows this calculation for all OSS I use regularly:

Open Source Software | Proprietary alternative (*) | Proprietary Price | Donate 25% monthly
Elementary OS        | Windows 10 Pro              | € 259 (5 years)   | € 4,35
LibreOffice          | Microsoft Office 2016       | € 279 (5 years)   | € 4,65
GIMP                 | Adobe Photoshop Photography | € 12 per month    | € 3
NixNote              | Evernote Premium            | € 60 per year     | € 5 (**)
Workrave             | Workpace                    | € 82 (5 years)    | € 1,40
KeePassX             | 1Password                   | $ 3 per month     | € 2,60
Slic3r               | Simplify3D                  | $ 149 (5 years)   | € 2,15
Total monthly donation                                                 | € 23,15
Equals total per year                                                  | € 277,80

 

(*) Calculating the amount to donate based on the proprietary alternative is my pragmatic choice. Another approach could be to donate based on ‘value received’. Although I feel that this could be more fair, I do not know how to make this workable at the moment. I think it would quickly become too complex to do, but if you have thoughts on this, please share!

(**) I am using NixNote without the Evernote integration. Evernote Premium has a lot more functionality than NixNote alone; especially being able to access and edit notes on multiple devices would be handy for me. The Freemium version of Evernote compares more closely to NixNote. However, Evernote Freemium is not really free: it is a marketing channel paid for by the Premium users. To keep things simple, I decided to donate 25% of Evernote Premium.

What would happen if every user did this?

Continuing from the LibreOffice example, I found that LibreOffice had over 1 million active users in 2015. The total amount of donations coming in during the last quarter of 2015 was $ 21,000, which is about € 6,000 per month, or less than 1 euro cent per active user. If instead every active user donated ‘my’ € 4,65, then over € 4.5 million would be coming in every month!
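Purely as a back-of-the-envelope check of these figures (the user count and donation totals come from the sources quoted above, not from my own research):

    active_users = 1_000_000      # LibreOffice active users in 2015
    actual_donations_eur = 6_000  # roughly € 6,000 per month in donations
    my_monthly_donation = 4.65    # the LibreOffice amount from the table above

    print(actual_donations_eur / active_users)       # 0.006 €: less than 1 cent per user
    print(active_users * my_monthly_donation / 1e6)  # 4.65: millions of € per month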

I am not saying that every user should do this: many people who are reaping the benefits of OSS cannot afford to donate, and everyone else has the freedom to make up their own mind. That is the beauty of Open Source Software.

What would happen if those who can afford it did this?

Okay, so it is not fair to do this calculation based on every active user of LibreOffice. Instead, let’s do a calculation based on the active users who can afford it, minus the people who already reciprocate with a contribution in time and skills.

To start with the latter group: the number of LibreOffice developers stood at about 1,000 in November 2015. If we assume that half of the people involved are developers and half are people doing ‘everything else’ (testing, documentation writing, translating, etc.), that makes roughly 2,000 contributors, or about 0.2% of the 1 million users. In other words: no significant impact on the calculations.

Then the former group: how can we know who can afford to donate? I believe we can safely assume that at least large institutions like governments can donate the money they otherwise would have spent on Microsoft licenses. I found a list of governments using LibreOffice: in France, Spain, Italy, Taiwan and Brazil. They are using LibreOffice on a total of no less than 754,000 PCs! If these organizations donated that € 4,65 per PC, the project would still bring in about € 3.5 million per month!

Am I missing something here? How is it possible that the LibreOffice project, one of the best-known OSS projects making software for everyday use by pretty much anyone, is still only bringing in € 6,000 per month? Did they forget to mention “amounts are *1000” in the graph’s legend?

Or do large institutions directly pay the salaries of LibreOffice contributors instead of donating? Let’s have a look at the numbers for that (hypothetical) situation. In November 2015 LibreOffice had 80 ‘committers’, that is, developers committing code. I imagine that these developers are not spending full working weeks on LibreOffice, so let’s assume they work on it half-time, 20 hours per week.

We again assume that development is half of the work being done. The 80 half-time committers then amount to 40 full-time developers, and doubling that for all the other work translates to 80 people working full-time on the LibreOffice project at any given time. If they were employed in The Netherlands, a reasonable average salary could be something like € 4,500 per month (before tax), meaning a total cost of € 360,000 per month.
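The same kind of rough check for this scenario, again using only the numbers quoted in this post:

    government_pcs = 754_000     # PCs running LibreOffice in the listed governments
    donation_per_pc = 4.65       # the monthly LibreOffice amount from the table above
    full_time_people = 80        # full-time equivalents, as estimated above
    monthly_salary_eur = 4_500   # assumed gross monthly salary in The Netherlands

    income = government_pcs * donation_per_pc     # about € 3.5 million per month
    cost = full_time_people * monthly_salary_eur  # € 360,000 per month
    print(round(income), cost, round(income / cost, 1))  # the "almost 10 times" factor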

Conclusion: even if only the 5 governments mentioned on the LibreOffice website donated only 25% of the cost of the popular proprietary counterpart (Microsoft Office), this would bring in almost 10 times more than would be needed to pay all the people working on the project a decent salary, even when living in a relatively ‘expensive’ country like The Netherlands.

How amazing is this? I almost cannot believe these numbers.

Setting up automated monthly donations

Anyway, back to my pragmatic approach for monthly donations to the OSS you regularly use. Now that we know which amount to give to which projects, the task is actually doing so. The following table shows the donation options for the OSS I regularly use:

Open Source Software | Donation options
Elementary OS        | Patreon, PayPal, Goodies, BountySource
LibreOffice          | PayPal, Credit Card, Bitcoin, Flattr, Bank transfer
GIMP                 | Patreon (developers directly), PayPal, Flattr, Bitcoin, Cheque
NixNote              | Not accepting donations (*)
Workrave             | Not accepting donations (*)
KeePassX             | Not accepting donations (*)
Slic3r               | PayPal

 

(*) Interestingly, not all projects accept donations. The people at Workrave suggest donating to the Electronic Frontier Foundation instead, but the EFF only accepts donations of $ 5 per month or more. For now, these projects will not receive a donation.

Donating is not something you want to do manually every month, so an automatic set-up is ideal. In the above cases I think only Patreon and PayPal allow this. Even though PayPal is cheaper than Patreon, and even though, according to Douglas Rushkoff, Patreon is becoming problematic as an extractive multi-billion-dollar-valued entity with over $ 100 million invested in it, I still prefer Patreon over PayPal. The reason is that donating via Patreon gives ‘marketing value’ to the project: the donations and donors are visible, which boosts general recognition.

Even though I have not done any research (yet), I will go with Rushkoff’s view that Drip is preferable to Patreon, and I will switch to it when it becomes available for these projects.

I will evaluate the list once or perhaps twice a year, so as to keep the administrative burden manageable. For me personally, once I get used to a piece of software and get the benefits from it, I rarely go looking for something else.

How can OSS projects bring in more donations?

Presuming that a project is interested in bringing in more donations, my thoughts from my personal donating experience are:

  • Being available on Drip or otherwise Patreon, because these are also communication channels, not mere donation tools.
  • Nudging downloaders towards leaving their e-mail address, and then following up after a few months. For me personally, I want to try out the software first; once I find out I really like it and use it regularly, I am absolutely willing to pay for it.
  • Providing a suggested donation, explaining why that amount is suggested and what it will be used for. And of course nudging (potential) donors towards contributing, making it transparent and super easy to figure out how contributions can be made and provide value.

So, what do you think? Is it fair to pay for Open Source Software that you are using? If so, which approach would you advocate? And if not, if projects are asking for donations, why not donate?

– Diderik

Photo by Nathan Lemon on Unsplash

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
