September 10, 2017

Steve Kemp

Debian-Administration.org is closing down

After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year, leaving only a static copy of the articles and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on Medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.

10 September, 2017 09:00PM

Adnan Hodzic

Secure traffic to ZNC on Synology with Let’s Encrypt

I’ve been using IRC since the late 1990s, and I continue to do so to this day due to it (still) being one of the driving development forces in various open source communities. Especially in Linux development … and some of my acquaintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (an IRC bouncer/proxy), to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). In this case ZNC serves as a gateway: no matter which device/client I connect from, I’m always connected to the same IRC servers/chat rooms/settings where I left off.

This is all fine and dandy, but connecting to ZNC from external networks means you hand over your ZNC credentials in plain text. That is a problem for me, even though we’re “only” talking about an IRC bouncer/proxy.

With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using Let’s Encrypt SSL certificate

For reference or a more thorough explanation of some of the steps/topics, please refer to: Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you’d like to use to connect to your ZNC and point it to your Synology NAS external (WAN) IP. For your reference, the subdomain I’ll use is: irc.hodzic.org
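
To check that the record has propagated before moving on, you can query it directly (a quick sanity check; substitute your own subdomain):

dig +short A irc.hodzic.org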

2: Create Let’s Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by:

Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e.:

Get a certificate from Let's Encrypt for irc.hodzic.org

After the certificate is created, don’t forget to configure the newly created certificate to point to the correct domain name, i.e.:

Configure Let's Encrypt Certificate

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install. This is mainly due to some problems with the package in the past, where ZNC wouldn’t start automatically on boot, which led to the creation of projects like synology-znc-autostart. In the latest version all of these problems have been fixed, and a couple of new features have been added.

ZNC can be installed using Synology’s Package Center, if community package sources are enabled. This can simply be done by adding a new package source:

Name: SynoCommunity
Location: http://packages.synocommunity.com

Enable Community package sources in Synology Package Center

To successfully authenticate the newly added source, under the “General” tab, “Trust Level” should be set to “Any publisher”.

As part of the installation process, the ZNC config will be run with the most sane/useful options, and an admin user will be created, allowing you access to the ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind our sub/domain created in “Step 1” to ZNC webadmin, and secure external access to it. This can be done by creating a reverse proxy.

As part of this, you need to know which port has been allocated for SSL in the ZNC webadmin, i.e.:

ZNC Webadmin > Global settings - Listen Ports

In this case, we can see it’s 8251.

Reverse Proxy can be created in:

DSM: Control Panel > Application Portal > Reverse Proxy > Create

Where the sub/domain created in “Step 1” needs to be pointed to the ZNC SSL port on localhost, i.e.:

Reverse proxy: irc.hodzic.org setup

The ZNC webadmin is now available via HTTPS on the external network for the sub/domain you set up as part of Step 1, or in my case:

ZNC webadmin (HTTPS)

As part of this step, I’d advise you to create, in the ZNC webadmin, the IRC servers and chatrooms you would like to connect to later.

5: Create .pem file from Let’s Encrypt certificate for ZNC to use

On Synology, Let’s Encrypt certificates are stored under:

/usr/syno/etc/certificate/_archive/

In case you have multiple certificates, you can determine which directory holds your newly generated certificate based on the date it was created, i.e.:

drwx------ 2 root root 4096 Sep 10 12:57 JeRh3Y

Once you’ve determined which certificate is the one you want to use, generate the .pem file by running the following:

sudo cat /usr/syno/etc/certificate/_archive/JeRh3Y/{privkey,cert,chain}.pem > /usr/local/znc/var/znc.pem

After this, restart ZNC:

sudo /var/packages/znc/scripts/start-stop-status stop && sudo /var/packages/znc/scripts/start-stop-status start
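
Note that Let’s Encrypt certificates are only valid for 90 days, and while DSM renews them automatically, znc.pem will not be regenerated for you. A minimal helper script you could run periodically from DSM’s Task Scheduler might look like this (a sketch; the _archive subdirectory JeRh3Y is specific to my system, so adjust it to yours):

#!/bin/sh
# Rebuild ZNC's combined .pem after a Let's Encrypt renewal, then restart ZNC.
CERTDIR=/usr/syno/etc/certificate/_archive/JeRh3Y
cat "$CERTDIR/privkey.pem" "$CERTDIR/cert.pem" "$CERTDIR/chain.pem" \
    > /usr/local/znc/var/znc.pem
/var/packages/znc/scripts/start-stop-status stop
/var/packages/znc/scripts/start-stop-status start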

6: Configure IRC client

In this example I’ll use XChat Azure on MacOS; the same procedure should be identical for HexChat/XChat clients on any other platform.

Although all information is picked up from ZNC itself, user details will need to be filled in.

In my setup I automatically connect to the freenode and oftc networks, so I created two servers for local network usage and two for external network usage; the latter is the one we’re concentrating on.

On the “General” tab of our newly created server, the hostname should be the sub/domain we set up as part of “Step 1”, the port number should be the one we defined in “Step 4”, and the SSL checkbox must be checked.

Xchat Azure: Network list - General tab

On the “Connecting” tab, the “Server password” field needs to be filled in the following format:

johndoe/freenode:password

Where “johndoe” is the ZNC username, “freenode” is the ZNC network name, and “password” is the ZNC password.

Xchat Azure: Network list - Connecting tab

“freenode” in this case must first be created as part of the ZNC webadmin configuration mentioned in “Step 4”. The same applies to the oftc network configuration.

As part of establishing the connection, information about our Let’s Encrypt certificate will be displayed, after which the connection will be established and you’ll be automatically logged into all chatrooms.

Happy hacking!

10 September, 2017 04:40PM by Adnan Hodzic

intrigeri

Can you reproduce this Tails ISO image?

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly.

We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help!

Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly.

Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them.

If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us:

Set up the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.

  1. Install the build dependencies:

    sudo apt install \
        git \
        rake \
        libvirt-daemon-system \
        dnsmasq-base \
        ebtables \
        qemu-system-x86 \
        qemu-utils \
        vagrant \
        vagrant-libvirt \
        vmdebootstrap && \
    sudo systemctl restart libvirtd
    
  2. Ensure your user is in the relevant groups:

    for group in kvm libvirt libvirt-qemu ; do
       sudo adduser "$(whoami)" "$group"
    done
    
  3. Log out and log back in to apply the new group memberships.

Build Tails 3.2~alpha2

This should produce a Tails ISO image:

git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build

Send us feedback!

No matter how your build attempt turned out we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:

sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: ${f} ---"
    cat "${f}"
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
            'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: ${c} ---"
    eval "${c}"
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
                    libvirt0:amd64
) | bzip2 > system-info.txt.bz2

Then check that the generated file doesn't contain any sensitive information you do not want to leak:

bzless system-info.txt.bz2

Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

  • set Category to Build system;
  • paste the output of rake build;
  • attach system-info.txt.bz2 (this will publish that file).

If the build succeeded

Compute the SHA-512 checksum of the resulting ISO image:

sha512sum tails-amd64-3.2~alpha2.iso

Compare your checksum with ours:

9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc
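
Instead of eyeballing the two long strings, you can let sha512sum do the comparison (adjust the path to your ISO if needed):

echo "9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc  tails-amd64-3.2~alpha2.iso" | sha512sum -c -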

If the checksums match: success, congrats for reproducing Tails 3.2~alpha2! Please send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 successful" and system-info.txt.bz2 attached. Thanks in advance! Then you can stop reading here.

Else, if the checksums differ: too bad, but really it's good news as the whole point of the exercise is precisely to identify such problems :) Now you are in a great position to help improve the reproducibility of Tails ISO images by following these instructions:

  1. Install diffoscope version 83 or higher and all the packages it recommends. For example, if you're using Debian Stretch:

    sudo apt remove diffoscope && \
    echo 'deb http://ftp.debian.org/debian stretch-backports main' \
      | sudo tee /etc/apt/sources.list.d/stretch-backports.list && \
    sudo apt update && \
    sudo apt -o APT::Install-Recommends="true" \
             install diffoscope/stretch-backports
    
  2. Download the official Tails 3.2~alpha2 ISO image.

  3. Compare the official Tails 3.2~alpha2 ISO image with yours:

    diffoscope \
           --text diffoscope.txt \
           --html diffoscope.html \
           --max-report-size 262144000 \
           --max-diff-block-lines 10000 \
           --max-diff-input-lines 10000000 \
           path/to/official/tails-amd64-3.2~alpha2.iso \
           path/to/your/own/tails-amd64-3.2~alpha2.iso
    bzip2 diffoscope.{txt,html}
    
  4. Send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 failed", attaching:

    • system-info.txt.bz2;
    • the smallest file among diffoscope.txt.bz2 and diffoscope.html.bz2, except if they are larger than 100 KiB, in which case better upload the file somewhere (e.g. share.riseup.net) and share the link in your email.

Thanks a lot!

Credits

Thanks to Ulrike & anonym who authored a draft on which this blog post is based.

10 September, 2017 02:27PM

Sylvain Beucler

dot-zed archive file format

TL;DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third party.
Interested? :)

(reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file.

In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted.

.zed archives are used as stand-alone or attached to e-mails with the help of a MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs.

In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive.

See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers.

  • The main container uses the Compound File Binary File Format (MS-CFB), which is notably used by MS Office 97-2003 .doc files. It contains several streams:

    • Metadata stream: in OLE Property Set format (MS-OLEPS), contains 2 blobs in a specific Type-Length-Value (TLV) format:

      • _ctlfile: global archive properties and access list
        It is obfuscated by means of static-key AES encryption.
        The properties include archive initial filename and a global IV.
        A global encryption key is itself encrypted in each user entry.

      • _catalog: file list
        Contains each file's metadata, indexed with a 16-byte identifier.
        Directories are supported.
        Full filename is encrypted using AES.
        File extension is (redundantly) stored in clear, and so are file metadata such as modification time.

    • Each file in the archive is compressed with zlib and encrypted with the standard AES algorithm, in a separate stream.
      Several encryption schemes and key sizes are supported.
      The file stream is split into chunks of 512 bytes, individually encrypted.

    • Optional streams contain additional metadata as well as pictures to display in the application background ("watermarks"). They are not discussed here.

Or as a diagram:

+----------------------------------------------------------------------------------------------------+
| .zed archive (MS-CFB)                                                                              |
|                                                                                                    |
|  stream #1                         stream #2                       stream #3...                    |
| +------------------------------+  +---------------------------+  +---------------------------+     |
| | metadata (MS-OLEPS)          |  | encryption (AES)          |  | encryption (AES)          |     |
| |                              |  | 512-bytes chunks          |  | 512-bytes chunks          |     |
| | +--------------------------+ |  |                           |  |                           |     |
| | | obfuscation (static key) | |  | +-----------------------+ |  | +-----------------------+ |     |
| | | +----------------------+ | |  |-| compression (zlib)    |-|  |-| compression (zlib)    |-|     |
| | | |_ctlfile (TLV)        | | |  | |                       | |  | |                       | | ... |
| | | +----------------------+ | |  | | +---------------+     | |  | | +---------------+     | |     | 
| | +--------------------------+ |  | | | file contents |     | |  | | | file contents |     | |     |
| |                              |  | | |               |     | |  | | |               |     | |     |
| | +--------------------------+ |  |-| +---------------+     |-|  |-| +---------------+     |-|     |
| | | _catalog (TLV)           | |  | |                       | |  | |                       | |     |
| | +--------------------------+ |  | +-----------------------+ |  | +-----------------------+ |     |
| +------------------------------+  +---------------------------+  +---------------------------+     |
+----------------------------------------------------------------------------------------------------+

Encryption schemes

Several AES key sizes are supported, such as 128 and 256 bits.

The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in clear along with the ciphertext.

All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical).

No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file).

The following variants were identified in the 'encryption_mode' field.

STREAM

This is the end-of-stream handler for:

  • obfuscated metadata encrypted with static AES key
  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-STREAM"
  • any AES ciphertext of size < 16 bytes, regardless of encryption mode

This end-of-stream handler is apparently specific to the .zed format, and is applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block.

The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext:

  • the second-to-last block of the ciphertext is encrypted in AES-ECB mode (i.e. block cipher encryption only, without XORing with the IV)

  • then XOR-ed with the last partial block (hence truncated to the length of the partial block)

In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block.
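
To make this concrete, here is a minimal Python sketch of the handler for the trailing partial block, using the pycryptodome library (this illustrates my understanding of the description above; it is not a verified implementation):

from Crypto.Cipher import AES

def stream_tail(key, iv, full_blocks, partial):
    # full_blocks: all complete 16-byte ciphertext blocks (may be empty);
    # partial: the trailing partial block. Since encryption and decryption
    # are identical, the same function covers both directions.
    prev = full_blocks[-16:] if full_blocks else iv  # second-to-last block, or IV
    pad = AES.new(key, AES.MODE_ECB).encrypt(prev)   # ECB = no IV involved
    return bytes(a ^ b for a, b in zip(partial, pad))  # XOR, truncated to len(partial)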

CTS

CTS or CipherText Stealing is the end-of-stream handler for:

  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-CTS".
    • exception: if the size of the ciphertext is < 16 bytes, then "STREAM" is used instead.

It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode.

Empty cleartext

Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive.

metadata stream

It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII).

The format used is OLE Property Set (MS-OLEPS).

It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties each containing an application-specific VT_BLOB (type 0x0041).

_ctlfile: obfuscated global properties and access list

This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata.

It consists of:

  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by 0100 (hexadecimal) (18 bytes total)
  • 16-byte IV
  • ciphertext
  • 1 uint32be representing the length of all the above
  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by "ZoneCentral (R)" (ASCII) and a NUL byte (32 bytes total)

The ciphertext is encrypted with AES-CBC "STREAM" mode using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV, so as to obfuscate the access list. The ciphertext is continuous and not split into chunks (unlike files), even when it is larger than 512 bytes.
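
Putting the layout and the static key together, deobfuscation might look like this in Python (a sketch reusing the stream_tail() helper from the STREAM section above; the exact slicing is my interpretation of the layout):

from Crypto.Cipher import AES

STATIC_KEY = bytes.fromhex('37F13CF81C780AF26B6A52654F794AEF')

def deobfuscate_ctlfile(blob):
    # Layout: 18-byte delimiter, 16-byte IV, ciphertext,
    # uint32be length of all the above, 32-byte trailer.
    iv = blob[18:34]
    total = int.from_bytes(blob[-36:-32], 'big')
    ciphertext = blob[34:total]
    cut = len(ciphertext) // 16 * 16
    full, partial = ciphertext[:cut], ciphertext[cut:]
    clear = AES.new(STATIC_KEY, AES.MODE_CBC, iv=iv).decrypt(full)
    return clear + stream_tail(STATIC_KEY, iv, full, partial)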

The decrypted text contains properties in a TLV format, as described in _ctlfile TLV:

  • global archive properties as a 'fileprops' structure,

  • extra archive properties as a 'archive_extraprops' structure

  • users access list as a series of 'passworduser' and 'rsauser' entries.

Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company.

_catalog: file list

This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata.

It contains a series of 'fileprops' TLV structures, one for each file or directory.

The file hierarchy can be reconstructed by checking the 'parent_directory_id' field of each file entry. If 'parent_directory_id' is 0 then the file is located at the top level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'.

TLV format

This format is a series of fields:

  • 4 bytes for Type (specified as a 4-byte hexadecimal value below)
  • 4 bytes for value Length (uint32be)
  • Value

Value semantics depend on its Type. It may contain an uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure.

Unless otherwise noted, TLV structures appear once.

Some fields are optional and may not be present at all (e.g. 'archive_createdwith').

Some fields are unique within a structure (e.g. 'files_iv'), others may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser').
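
A minimal Python sketch of walking such a TLV sequence (purely illustrative; no error handling, and values are returned raw):

import struct

def parse_tlv(blob):
    # Yield (type, value) pairs; interpreting value depends on the type.
    offset = 0
    while offset + 8 <= len(blob):
        ftype = blob[offset:offset + 4].hex()                 # e.g. '80110600'
        (length,) = struct.unpack('>I', blob[offset + 4:offset + 8])
        yield ftype, blob[offset + 8:offset + 8 + length]
        offset += 8 + length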

The following top-level types have been identified, and are detailed in the next sections:

  • 80110600: fileprops, used for the file list as well as for the global archive properties
  • 001b0600: archive_extraprops
  • 80140600: accesslist

Some additional unidentified types may be present.

_ctlfile TLV

  • 80110600: fileprops (TLV structure): global archive properties
    • 00230400: archive_pathname (UTF-16LE string): initial archive filename (past versions also leaked the full pathname of the initial archive)
    • 80270200: encryption_mode (uint32be): 103 for "AES-CBC-STREAM", 104 for "AES-CBC-CTS"
    • 80260200: encryption_strength (uint32be): AES key size, in bytes (e.g. 32 means AES with a 256-bit key)
    • 80280500: files_iv (sequence of bytes): global IV for all filenames and file contents
  • 001b0600: archive_extraprops (TLV structure): additional archive properties (optional)
    • 00c40500: archive_creationtime (FILETIME): date and time when archive was initially created (optional)
    • 00c00400: archive_createdwith (UTF-16LE string): uuid-like structure describing the application that initialized the archive (optional)
      {00000188-1000-3CA8-8868-36F59DEFD14D} is Zed! Free 1.0.188.
  • 80140600: accesslist (TLV structure): describes the users, their key encryption and their permissions
    • 80610600: passworduser (TLV structure): user identified by password (0 or more)
    • 80620600: rsauser (TLV structure): user identified by RSA key (via file or PKCS#11 token) (0 or more)
    • Fields common to passworduser and rsauser:
      • 80710400: login (UTF-16LE string): user name
      • 80720300: login_md5 (sequence of bytes): used by the application to search for a user name
      • 807e0100: priv1 (uchar): user privileges; present and set to 1 when user is admin (optional)
      • 00830200: priv2 (uint32be): user privileges; present and set to 2 when user is admin, present and set to 5 when user is marked as mandatory, e.g. for recovery keys (optional)
      • 80740500: files_key_ciphertext (sequence of bytes): the archive encryption key, itself encrypted
      • 00840500: user_creationtime (FILETIME): date and time when the user was added to the archive
    • passworduser-specific fields:
      • 80760500: pbe_salt (sequence of bytes): salt for PBE
      • 80770200: pbe_iter (uint32be): number of iterations for PBE
      • 80780200: pkcs12_hashfunc (uint32be): hash function used for PBE and PBA key derivation
      • 80790500: pba_checksum (sequence of bytes): password derived with PBA to check for password validity
      • 807a0500: pba_salt (sequence of bytes): salt for PBA
      • 807b0200: pba_iter (uint32be): number of iterations for PBA
    • rsauser-specific fields:
      • 807d0500: certificate (sequence of bytes): user X509 certificate in DER format

_catalog TLV

  • 80110600: fileprops (TLV structure): describes the archive files (0 or more)
    • 80300500: file_id (sequence of bytes): a 16-byte unique identifier
    • 80310400: filename_halfanon (UTF-16LE string): half-anonymized filename, e.g. File1.txt (leaking filename extension)
    • 00380500: filename_ciphertext (sequence of bytes): encrypted filename; may have a trailing NUL byte once decrypted
    • 80330500: file_size (uint64le): decompressed file size in bytes
    • 80340500: file_creationtime (FILETIME): file creation date and time
    • 80350500: file_lastwritetime (FILETIME): file last modification date and time
    • 80360500: file_lastaccesstime (FILETIME): file last access date and time
    • 00370500: parent_directory_id (sequence of bytes): file_id of the parent directory, 0 is top-level
    • 80320100: is_dir (uint32be): 1 if entry is directory (optional)

Decrypting the archive AES key

rsauser

The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field using DER format.

The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate.
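
With pycryptodome, that decryption might be sketched as follows (assuming the private key is available as a PEM file; illustrative only):

from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA

def decrypt_files_key_rsa(private_key_pem, files_key_ciphertext):
    key = RSA.import_key(private_key_pem)
    cipher = PKCS1_v1_5.new(key)
    # The second argument is a sentinel returned on decryption failure.
    return cipher.decrypt(files_key_ciphertext, None)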

passworduser

An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described at rfc7292 appendix B.

Note: this is not PKCS#5 (nor PBKDF1/PBKDF2), this is an incompatible method from PKCS#12 that notably does not use HMAC.

The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified:

  • 21: SHA-1
  • 22: SHA-256

PBA - Password-based authentication

The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password.

The parameters for the derivation function are:

  • ID: 3
  • 'pba_salt': the salt, typically an 8-byte random sequence
  • 'pba_iter': the iteration count, typically 200000

The derivation is checked against 'pba_checksum'.

PBE - Password-based encryption

Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function:

  • 'pbe_salt': the salt, typically an 8-byte random sequence
  • 'pbe_iter': the iteration count, typically 100000

The parameters specific to user key are:

  • ID: 1
  • size: 32

The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties.

The parameters specific to user IV are:

  • ID: 2
  • size: 16

Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding.
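
In Python, this final step might look like the following sketch (user_key and user_iv come from the PKCS#12 derivation described above, which is not reproduced here):

from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_files_key(user_key, user_iv, files_key_ciphertext, encryption_strength):
    key = user_key[:encryption_strength]         # truncate to the archive's key size
    cipher = AES.new(key, AES.MODE_CBC, iv=user_iv)
    return unpad(cipher.decrypt(files_key_ciphertext), 16)  # PKCS#7 padding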

Identifying file streams

The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal.

The reordering is:

Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15

The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string.
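
In Python, the derivation might be sketched as follows (assuming the trailing NUL byte is dropped before hex-encoding, per the observation above):

SHUFFLE = [3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15]

def stream_name(file_id):
    # Reorder the 16 file_id bytes, drop trailing NUL(s), and hex-encode
    # the result into the (usually 30-character) stream name.
    shuffled = bytes(file_id[i] for i in SHUFFLE)
    return shuffled.rstrip(b'\x00').hex()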

Decrypting files

The compressed stream is split into chunks of 512 bytes, each of them encrypted separately using AES-CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials) and the global IV (retrieved from the deobfuscated archive metadata).

The IV for each chunk is computed by:

  • expressing the current chunk number as a 16-byte little-endian value
  • XORing it with the global IV
  • encrypting with the global AES key in ECB mode (without IV).
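
As a Python sketch (whether chunk numbering starts at 0 or 1 is an assumption I leave open here):

from Crypto.Cipher import AES

def chunk_iv(global_key, global_iv, chunk_number):
    counter = chunk_number.to_bytes(16, 'little')             # chunk number, 16 bytes LE
    xored = bytes(a ^ b for a, b in zip(counter, global_iv))  # XOR with the global IV
    return AES.new(global_key, AES.MODE_ECB).encrypt(xored)   # ECB: no IV involved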

Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler.

Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce.

Decompressing files

Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format.

Test cases

Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases.

Conventions and references

Feedback

Feel free to send comments at beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F.

Copyright (C) 2017 Sylvain Beucler

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

10 September, 2017 01:50PM

Charles Plessy

Summary of the discussion on off-line keys.

Last month there was an interesting discussion about off-line GnuPG keys and their storage systems on the debian-project@l.d.o mailing list. I tried to summarise it in the Debian wiki, in particular by creating two new pages.

10 September, 2017 12:06PM

Joachim Breitner

Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussion, and since the Internet likes heated discussion, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

foo :: Baz
foo = bracket
        • some complicated code
          that is evaluated first
        • other complicated code for later
        • even more complicated code

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines (•) = id with infixl 0 • (see the sketch after this list). A dedicated syntax extension (-XArgumentBullets) is preferable:

  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 • would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.
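
For reference, the workaround mentioned above is just this (a sketch that compiles today, without any extension):

-- A low-precedence application operator, so bulleted arguments parse now:
infixl 0 •
(•) :: (a -> b) -> a -> b
(•) = id

-- With this in scope, the definition of foo above simply means
-- bracket applied to its three arguments in order.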

Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather speak parenthesised expressions more quickly and add pauses otherwise.

We can do the same in syntax!

(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys

The rule is simple: A sequence of tokens without any space is implicitly parenthesised.

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignore spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

  • What about an operator with space on one side but not on the other? I’d simply forbid that, and hence enforce readable code.
  • Do operator sections still require parenthesis? Yes, I’d say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I am not yet decided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c, otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda’s syntax! Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

10 September, 2017 10:10AM by Joachim Breitner (mail@joachim-breitner.de)

Lior Kaplan

PHP 7.2 is coming… mcrypt extension isn’t

It’s early September, about 3 months before PHP 7.2 is expected to be released (schedule here). One of the changes is the removal of the mcrypt extension, after it was deprecated in PHP 7.1. The main problem with the mcrypt extension is that it is based on libmcrypt, which has been abandoned by its upstream since 2007. That’s 10 years of keeping a library alive, moving the burden to distributions’ security teams. But this isn’t new; Remi already wrote about this two years ago: “About libmcrypt and php-mcrypt“.

But with the removal of the extension from the PHP code base (about F**king time), what was until now a recommendation made “nicely” becomes forced. And forcing people means some noise, although an alternative exists in PHP’s own openssl extension. But like many migrations that require code changes, it’s going slowly.

The goal of this post is to reach out to the PHP ecosystem, to map the components (mostly frameworks and applications) that still require/recommend mcrypt, and to pressure them to fix it before PHP 7.2 is released. I’d appreciate the readers’ help with this mapping in the comments.

For example, Laravel‘s release notes for 5.1:

In previous versions of Laravel, encryption was handled by the mcrypt PHP extension. However, beginning in Laravel 5.1, encryption is handled by the openssl extension, which is more actively maintained.

Or, on the other hand Joomla 3 requirements still mentions mcrypt.

mcrypt safe:

mcrypt dependent:

For those who really need mcrypt, it is part of PECL, PHP’s extension repository. You’re welcome to compile it at your own risk.


Filed under: Debian GNU/Linux, PHP

10 September, 2017 08:56AM by Kaplan

September 09, 2017

Russell Coker

Observing Reliability

Last year I wrote about how great my latest Thinkpad is [1] in response to a discussion about whether a Thinkpad is still the “Rolls Royce” of laptops.

It was a few months after writing that post that I realised that I omitted an important point. After I had that laptop for about a year the DVD drive broke and made annoying clicking sounds all the time in addition to not working. I removed the DVD drive and the result was that the laptop was lighter and used less power without missing any feature that I desired. As I had installed Debian on that laptop by copying the hard drive from my previous laptop I had never used the DVD drive for any purpose. After a while I got used to my laptop being like that and the gaping hole in the side of the laptop where the DVD drive used to be didn’t even register to me. I would prefer it if Lenovo sold Thinkpads in the T series without DVD drives, but it seems that only the laptops with tiny screens are designed to lack DVD drives.

For my use of laptops this doesn’t change the conclusion of my previous post. Now the T420 has been in service for almost 4 years which makes the cost of ownership about $75 per year. $1.50 per week as a tax deductible business expense is very cheap for such a nice laptop. About a year ago I installed a SSD in that laptop, it cost me about $250 from memory and made it significantly faster while also reducing heat problems. The depreciation on the SSD about doubles the cost of ownership of the laptop, but it’s still cheaper than a mobile phone and thus not in the category of things that are expected to last for a long time – while also giving longer service than phones usually do.

One thing that’s interesting to consider is the fact that I forgot about the broken DVD drive when writing about this. I guess every review has an unspoken caveat of “this works well for me but might suck badly for your use case”. But I wonder how many other things that are noteworthy I’m forgetting to put in reviews because they just don’t impact my use. I don’t think that I am unusual in this regard, so reading multiple reviews is the sensible thing to do.

09 September, 2017 10:18AM by etbe

François Marier

TLS Authentication on Freenode and OFTC

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to setup both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD <your fingerprint>

in order to add your fingerprint to the access control list.
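
To confirm the fingerprint was stored, you can list what is on file (the LIST subcommand exists at least on Atheme-based services such as freenode's; check /msg NickServ HELP CERT elsewhere):

/msg NickServ CERT LIST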

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:

<Network oftc>
    ...
            LoadModule = cert
    ...
</Network>
<Network freenode>
    ...
            LoadModule = cert
    ...
</Network>

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

{
  address = "irc.oftc.net";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

and the Freenode one to look like this:

{
  address = "chat.freenode.net";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

That's it. That's all you need to replace password authentication with a much stronger alternative.

09 September, 2017 04:52AM

September 08, 2017

Vincent Fourmond

Extract many attachement from many mails in one go using ripmime

I was recently looking for a way to extract many attachments from a series of emails. I first had a look at the AttachmentExtractor Thunderbird plugin, but it seems very old and not maintained anymore. So I've come up with another very simple solution that also works with any other mail client.

Just copy all the mails you want to extract attachments from to a single (temporary) mail folder, find out which file holds the mail folder and use ripmime on that file (ripmime is packaged for Debian). In my case, it looked like:

~ ripmime -i .icedove/XXXXXXX.default/Mail/pop.xxxx/tmp -d target-directory

A simple solution, but it saved me quite some time. Hope it helps!

08 September, 2017 09:42PM by Vincent Fourmond (noreply@blogger.com)

Sven Hoexter

munin with TLS

Primarily a note for my future self so I don't have to find out what I did in the past once more.

If you're running some smaller systems scattered around the internet, without connecting them with a VPN, you might want your munin master and nodes to communicate with TLS and validate certificates. If you remember what to do, it's a rather simple and straightforward process. To manage the PKI I'll utilize the well-known easyrsa script collection. For this special-purpose CA I'll go with a flat layout, so it's one root certificate issuing all server and client certificates directly. Some very basic docs can also be found in the munin wiki.

master setup

For your '/etc/munin/munin.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/master.key
tls_certificate /etc/munin/master.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

A node entry with TLS will look like this:

[node1.stormbind.net]
    address [2001:db8::]
    use_node_name yes

Important points here:

  • "tls_certificate" is a Web Client Authentication certificate. The master connects to the nodes as a client.
  • "tls_ca_certificate" is the root CA certificate.
  • If you'd like to disable TLS connections, for example for localhost, set "tls disabled" in the node block.

For easy-rsa the following command invocations are relevant:

./easyrsa init-pki
./easyrsa build-ca
./easyrsa gen-req master
./easyrsa sign-req client master
./easyrsa set-rsa-pass master nopass

node setup

For your '/etc/munin/munin-node.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/node1.key
tls_certificate /etc/munin/node1.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

For easy-rsa the following command invocations are relevant:

./easyrsa gen-req node1
./easyrsa sign-req server node1
./easyrsa set-rsa-pass node1 nopass

Important points here:

  • "tls_certificate" on the node must be a server certificate.
  • You've to provide the CA here as well so we can verify the client certificate provided by the munin master.

08 September, 2017 02:41PM

September 07, 2017

Steinar H. Gunderson

Licensing woes

On releasing modified versions of GPLv3 software in binary form only (quote anonymized):

And in my opinion it's perfectly ok to give out a binary release of a project, that is a work in progress, so that people can try it out and coment on it. It's easier for them to have it as binary and not need to compile it themselfs. If then after a (long) while the code is still only released in binary form, then it's ok to start a discussion. But only for a quick test, that is unneccessary. So people, calm down and enjoy life!

I wonder at what point we got here.

07 September, 2017 10:45PM

Gunnar Wolf

It was thirty years ago today... (and a bit more): My first ever public speech!

I came across a folder with the most unexpected treasure trove: The text for my first ever public speech! (and some related materials)
In 1985, being nine years old, I went to the IDESE school, to learn Logo. I found my diploma over ten years ago and blogged about it in this same space. Of course, I don't expect any of you to remember what I wrote twelve years ago about a (then) twenty years old piece of paper!

I add to this very old stuff about Gunnar the four pages describing my game, Evitamono ("Avoid the monkey", approximately). I still remember the game quite vividly, including traumatic issues which were quite common back then; I wrote that «the sprites were accidentally deleted twice and the game once». I remember several of my peers telling about such experiences. Well, that is good if you account for the second system syndrome!

I also found the amazing course material for how to program sound and graphics in C64 BASIC. That was a course taken by ten-year-old kids. Kids that understood that you had to write [255,129,165,244,219,165,0,102] (see pages 3-5) into a memory location starting at 53248 to redefine a character so it looked like the graphic element you wanted. Of course, it was done with a set of POKEs, as everything in C64. Or that you could program sound by setting the seven SID registers for each of the three voices containing low frequency, high frequency, low pulse, high pulse, wave control, wave length, wave amplitude in memory locations 54272 through 54292... And so on and on and on...

And as a proof that I did take the course:

...I don't think I could make most of my current BSc students make sense out of what is in the manual. But, being a kid in the 1980s, that was the only way to get a computer to do what you wanted. Yay for primitivity! :-D

Attachments:
  • Speech for "Evitamono" (1.29 MB)
  • Course material for sound and graphics programming in C64 BASIC (15.82 MB)
  • Proof that I was there! (4.86 MB)

07 September, 2017 06:35PM by gwolf

Lior Kaplan

FOSScamp Syros 2017 – day 3

The 3rd day should have started with a Debian sprint and then a LibreOffice one, taking advantage of my still attending, as that was my last day. But plans don’t always work out, and we started 2 hours later. When everybody arrived we got everyone together for a short daily meeting (scrum style). The people were divided into 3 teams for translating: Debian Installer, LibreOffice and Gnome. For each team we made a short list of what’s left and what to start with, and in the end decided who does what so there would be no toe stepping. I was really proud of this and felt it was time well spent.

The current translation percentage for Albanian in LibreOffice is 60%, so my recommendation to the team is to translate master only and not touch the help translation. My plan ahead would be to improve the translation as much as possible for LibreOffice 6.0, and near the branching point (set to November 20th by the release schedule) decide if it’s doable within the 6.0 lifetime or to set the goal at 6.1. In the 2nd case, we might try to backport the translation back to 6.0.

For the translation itself, I’ve pointed the team to the KeyID language pack and referred them to the nightly builds. These tools should help with keeping the translation quality high.

For the Debian team, after deciding who works on what, I’ve asked Silva to review the others’ work, as doing it myself started to take more and more of my time. It’s also good that the reviewer knows the target language and, unlike me, can catch more than just syntax mistakes. In addition, she’s more easily available to the team than I am, as I’m leaving soon, so I hope the reviewer role will stay within the team.

With the time left I mostly worked on my own tasks: packaging the Albanian dictionary, resulting in https://packages.debian.org/sid/myspell-sq , and making sure the dictionary is also part of LibreOffice, resulting in https://gerrit.libreoffice.org/#/c/41906/ . When it is accepted, I want to upload it to the LibreOffice repository so all users can download and use the dictionary.

During the voyage home (ferry, bus, plane and train), I mailed Sergio Durigan Junior, my NM applicant, with a set of questions. My first action as an AM (:

Overall, the FOSScamp results for Albanian translation were very close to the goal I set (100%):

  • Albanian (sq) level1 – 99%
  • Albanian (sq) level2 – 25% (the rest is pending at #874497)
  • Albanian (sq) level3 – 100%

That’s the result of work by Silva Arapi, Eva Vranici, Redon Skikuli, Anisa Kuci and Nafie Shehu.


Filed under: Debian GNU/Linux, i18n & l10n, LibreOffice

07 September, 2017 03:13PM by Kaplan

Thomas Lange

My recent FAI activities

During DebConf 17 in Montréal I had a FAI demo session (video), where I showed how to create a customized installation CD and how to create a diskimage using the same configuration. This diskimage is ready for use with a VM software or can be booted inside a cloud environment.

During the last weeks I was working on FAI 5.4, which will be released in a few weeks. If you want to test it, use

deb https://fai-project.org/download beta-testing koeln

in your sources.list file.

The most important new feature will be cross-architecture support. I managed to create an ARM64 diskimage on an x86 host and boot it inside Qemu. Currently I'm learning how to flash images onto my new Hikey960 board for booting my own Debian images on real hardware. The embedded world is still new to me and very different with respect to the boot process.

At DebConf, I also worked on debootstrap. I produced a set of patches which can speedup debootstrap by a factor of 2. See #871835 for details.

FAI debootstrap ARM

07 September, 2017 03:03PM

Reproducible builds folks

Reproducible Builds: Weekly report #123

Here's what happened in the Reproducible Builds effort between Sunday August 27 and Saturday September 2 2017:

Talks and presentations

Holger Levsen talked about our progress and our still-far goals at BornHack 2017 (Video).

Toolchain development and fixes

The Debian FTP archive will now reject changelogs where different entries have the same timestamps.

UDD now uses reproducible-tracker.json (~25MB) which ignores our tests for Debian unstable, instead of our full set of results in reproducible.json. Our tests for Debian unstable use a stricter definition of "reproducible" than what was recently added to Debian policy, and these stricter tests are currently more unreliable.

Packages reviewed and fixed, and bugs filed

Patches sent upstream:

Debian bugs filed:

Debian packages NMU-uploaded:

Reviews of unreproducible packages

25 package reviews have been added, 50 have been updated and 86 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Martín Ferrari (1)
  • Steve Langasek (1)

diffoscope development

Version 86 was uploaded to unstable by Mattia Rizzolo. It included previous weeks' contributions from:

  • Mattia Rizzolo
    • tests/binary: skip a test if the 'distro' module is not available.
    • Some code quality and style improvements.
  • Guangyuan Yang
    • tests/iso9660: support both cdrtools' and genisoimage's versions of isoinfo.
  • Chris Lamb
    • comparators/xml: Use name attribute over path to avoid leaking comparison full path in output.
    • Tidy diffoscope.progress a little.
  • Ximin Luo
    • Add a --tool-prefix-binutils CLI flag. Closes: #869868
    • On non-GNU systems, prefer some tools that start with "g". Closes: #871029
    • presenters/html: Don't traverse children whose parents were already limited. Closes: #871413
  • Santiago Torres-Arias
    • diffoscope.progress: Support the new fork of python-progressbar. Closes: #873157

reprotest development

Development continued in git with contributions from:

  • Ximin Luo:
    • Add -v/--verbose which is a bit more popular.
    • Make it possible to omit "auto" when building packages.
    • Refactor how the config file works, in preparation for new features.
    • chown -h for security.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb, Bernhard M. Wiedemann and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

07 September, 2017 09:54AM

John Goerzen

Switching to xmonad + Gnome – and ditching a Mac

I have been using XFCE with xmonad for years now. I’m not sure exactly how many, but at least 6 years, if not closer to 10. Today I threw in the towel and switched to Gnome.

More recently, at a new job, I was given a Macbook Pro. I wasn’t entirely sure what to think of this, but I thought I’d give it a try. I found MacOS to be extremely frustrating and confining. It had no real support for a tiling window manager, and although projects like amethyst tried to approximate what xmonad can do on Linux, they were just too limited by the platform and were clunky. Moreover, the entire UI was surprisingly sluggish; maybe that was an induced effect from animations, but I don’t think that explains it. A Debian stretch install, even on inferior hardware, was snappy in a way that MacOS never was. So I have requested to swap for a laptop that will run Debian. The strange use of Command instead of Control for things, combined with the overall lack of configurability of keybindings, meant that I was always going to be fighting muscle memory moving from one platform to another. Not only that, but being back in the world of a Free Software OS means a lot.

Now then, back to the xmonad and XFCE situation. XFCE once worked very well with xmonad. Over the years, this got more challenging. Around the jessie (XFCE 4.10) time, I had to be very careful about when I would let it save my session, because it would easily break. With stretch, I had to write custom scripts because the panel wouldn’t show up properly, and even some application icons would be invisible, if things were started in a certain order. This took much trial and error and was still cumbersome.

Gnome 3, with its tightly-coupled Gnome Shell, has never been compatible with other window managers — at least not directly. A person could have always used MATE with xmonad — but a lot of people that run XFCE tend to have some Gnome 3 apps (for instance, evince) anyhow. Cinnamon also wouldn’t work with xmonad, because it is simply another tightly-coupled shell instead of Gnome Shell. And then today I discovered gnome-flashback. gnome-flashback is a Gnome 3 environment that uses the traditional X approach with a separate window manager (metacity of yore by default). Sweet.

It turns out that Debian’s xmonad has built-in support for it, if you know the secret: apt-get install gnome-session-flashback (OK, it’s not so secret; it’s even in xmonad’s README.Debian these days). Install that, plus gnome and gdm3, and things are nice. Configure xmonad with GNOME support and poof – goodness right out of the box, selectable from the gdm sessions list.

I still have some gripes about Gnome’s configurability (or lack thereof). But I’ve got to say: this environment is the first one I’ve ever used that got external display switching very nearly right without any configuration, and I include MacOS in that. Plug in an external display, and poof – it’s configured and set up. You can hit a toggle key (Windows+P by default) to change the configurations, or use the Display section in gnome-control-center. Unplug it, and it instantly reconfigures itself to put everything back on the laptop screen. Yessss! I used to have scripts to do this in the wheezy/jessie days. XFCE in stretch had numerous annoying failures in this area which rendered the internal display completely dark until the next reboot – very frustrating. With Gnome, it just works. And, even if you have “suspend on lid closed” turned on, the system will keep running while it is powered up and hooked up to an external display with the lid closed, figuring you must be using it on the external screen. Another thing the Mac wouldn’t do well.

All in all, some pretty good stuff here. I continue to be impressed by stretch. It is darn impressive to put this OS on generic hardware and have it outshine the closed-ecosystem Mac!

07 September, 2017 02:43AM by John Goerzen

September 06, 2017

Mike Gabriel

MATE 1.18 landed in Debian testing

This is to announce that finally all MATE Desktop 1.18 components have landed in Debian testing (aka buster).

Credits

Again a big thanks to the packaging team (esp. Vangelis Mouhtsis and Martin Wimpress, but also to Jeremy Bicha for constant advice and Aron Xu for joining the Debian+Ubuntu MATE Packaging Team and merging all the Ubuntu zesty and artful branches back to master).

Fully Available on all Debian-supported Architectures

The very special thing about this MATE 1.18 release for Debian is that MATE is now available on all Debian hardware architectures. See "Buildd" column on our DDPO overview page [1]. Thanks to all the people from the Debian porters realm for providing feedback to my porting questions.

References

06 September, 2017 09:04AM by sunweaver

September 05, 2017

hackergotchi for Kees Cook

Kees Cook

security things in Linux v4.13

Previously: v4.12.

Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.

One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will make sure only that you can’t copy beyond the structure (but therefore, you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32-bits until recently.)

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.

Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.

A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, in whitelisted cross-structure casts, in refactorings (like the IPC one noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.

That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches. ;)

Anyway, the v4.14 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

05 September, 2017 11:01PM by kees

hackergotchi for Gunnar Wolf

Gunnar Wolf

Made with Creative Commons: Over half translated, yay!

An image speaks for a thousand words...

And our translation project is worth several thousand words!
I am very happy and surprised to say we have surpassed the 50% mark of the Made with Creative Commons translation project. We have translated 666 out of 1210 strings (yay for 3v1l numbers)!
I have to really thank Weblate for hosting us and allowing for collaboration to happen there. And, of course, I have to thank the people that have jumped on board and helped with the translation — we are over halfway there! Let's keep pushing!

Translation status

PS - If you want to join the project, just log in to Weblate and start translating right away, either to Spanish or other languages! (Polish, Dutch and Norwegian Bokmål are on their way.) If you translate into Spanish, *please* read and abide by the specific Spanish translation guidelines.

05 September, 2017 07:05PM by gwolf

hackergotchi for Chris Lamb

Chris Lamb

Ask the dumb questions

In the same way it is vital to ask the "smart questions", it is equally important to ask the dumb ones.

Whilst your milieu might be—say—comparing and contrasting the finer details of commission structures between bond brokers, if you aren't quite sure of the topic, learn to be bold enough to ask: I'm sorry, but what actually is a bond?

Don't consider this to be an all-or-nothing affair. After all, you might have at least some idea about what a bond is. Rather, adjust your tolerance to also ask for clarification when you are merely slightly unsure or uncertain about a concept, term or reference.


So why do this? Most obviously, you are learning something and expanding your knowledge about the world, but a clarification can avoid problems later if you were mistaken in your assumptions.

Not only that, asking "can you explain that?" or admitting "I don't follow…" is not only being honest with yourself; the vulnerability you show in admitting your ignorance also opens you up to others, leading to closer friendships and working relationships.

We clearly have a tendency to want to come across as knowledgeable or―perhaps more honestly―we don't want to appear dumb or uninformed as it will bruise our ego. But the precise opposite is true: nodding and muddling your way through conversations you only partly understand is unlikely to cultivate true feelings of self-respect and a healthy self-esteem.

Since adopting this approach I have found I've rarely derailed the conversation. In fact, speaking up not only encourages and flatters others that you care about their subject, it has invariably led to related matters which are not only more inclusive but actually novel and interesting to all present.


So push through the voice in your head and be that elephant in the room. After all, you might not be the only person thinking it. If it helps, try reframing it to yourself as helping others…

You'll be finding it effortless soon enough. Indeed, asking the dumb question is actually a positive feedback loop where each question you pose makes it easier to ask others in the future. Excellence is not an act, but a habit.

05 September, 2017 11:51AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

It's already September.

It's already September. I didn't write much code last month. I wrote a CSV parser and felt a little depressed after reading RFC 4180. None of my CSV files were in CRLF.

05 September, 2017 12:26AM by Junichi Uekawa

September 04, 2017

hackergotchi for Jonathan Dowland

Jonathan Dowland

Sortpaper: 16:9 edition

sortpaper 16:9

Back in 2011 I stumbled across a file "sortpaper.png", which was a hand-crafted wallpaper I'd created some time in the early noughties to help me organise icons on my computer's Desktop. I published it at the time in the blog post sortpaper.

Since then I rediscovered the blog post, and since I was looking for an excuse to try out the Processing software, I wrote a Processing Sketch to re-create it, but with the size and colours parameterized: sortpaper.pde.txt. The thumbnail above links to an example 1920x1080 rendering.

04 September, 2017 09:34PM

Dominique Dumont

cme: some read-write backend features are being deprecated

Hello

Config::Model and cme read and write configuration data with a set of “backend” classes, like Config::Model::Backend::IniFile. These classes are managed by Config::Model::BackendMgr.

Well, that’s the simplified view. Actually, the backend manager can handle several different backends to read and write data: read backends are tried until one of them succeeds in reading the configuration data. And the write backend can be different from the read backend, thus offering the possibility to migrate from one format to another. This feature came at the beginning of the project, back in 2005. It felt like a good idea to let users migrate from one data format to another.

More than 10 years later, this feature has never been used and is handled by a bunch of messy code that hampers further evolution of the backend classes.

So, without further ado, I’m going to deprecate the following features in order to simplify the backend manager:

  • The “custom” backend, which can easily be replaced with a more standard backend based on Config::Model::Backend::Any. This feature was deprecated with Config::Model 2.107.
  • The possibility to specify more than one backend. Soon, only the first read backend will be taken into account. This will simplify the declaration of backends. The “read_config” parameter, which is currently a list of backend specifications, will become a single backend specification. The command cme meta edit will handle the migration of existing models to the new scheme.
  • The “write_config” parameter will be removed.

Unless someone objects, actual removal of these features will be done in the next few months, after a rather short deprecation period.

All the best


Tagged: cme, config-model, Config::Model, configuration

04 September, 2017 05:39PM by dod

hackergotchi for Alessio Treglia

Alessio Treglia

MeteoSurf: a free App for the Mediterranean Sea

meteosurf

MeteoSurf is a free multi-source weather forecasting App designed to provide wind and wave conditions for the Mediterranean Sea. It is an application for smartphones and tablets, built as a Progressive Web App able to supply detailed and updated maps and data showing the heights of sea waves (and other information) in the Central Mediterranean. It is mainly targeted at surfers and wind-surfers, but anyone who needs to know the sea conditions will benefit from this app.

Data can be displayed as animated graphical maps, or as detailed table data. The maps cover the whole Mediterranean Sea, while the table data provides specific information for any of the major surf spots in the Med.

As of the current version, MeteoSurf collects its data from 3 different forecasting systems…

Read More… [by Fabio Marzocca]

04 September, 2017 10:14AM by Fabio Marzocca

Lior Kaplan

FOSScamp Syros 2017 – day 2

The morning started by taking the bus to Kini beach. After some time to enjoy the water (which was still cold in the morning), we sat down to talk about the local Debian community and how we can help it grow. The main topic was localization (l10n), but we soon started to check other options. I reminded them that l10n isn’t only translation, and we also talked about dictionaries for spell checking, fonts and local software which might be relevant (e.g. hdate for the Jewish/Hebrew calendar or Jcal for the Jalali calendar). For example, it seems that regular Latin fonts are missing two Albanian characters.

We also talked about how to use Open Labs to better work together wearing two hats – as members of the local FOSS community and also as members of various open source projects (not forgetting open content / open data projects as well). So people can cooperate on the local level, on the international level, or mix the two (using the other project’s international resources). In short: connections, connections, connections.

Another aspect I tried to push the guys toward is cooperating with local bodies about open source, whether it’s local companies, the municipality or the national government. Such cooperation can take many forms: sponsoring events, giving resources (computers, physical space or employees’ time) and of course creating more jobs for open source people, which in turn will support more people doing open source for longer periods.

One of the guys thought the local community would benefit from a mirror server, but that also requires a look at the network topology of Albania to make sure it makes sense to invest in one (resources and effort).

We continued on to how best to contribute to open source: mostly that Debian, although great, isn’t always the best target, and that they should always try to work with the relevant upstream. It’s better to translate GNOME upstream than to send the Debian maintainer a translation to be included in the package. That shortcut can work if there’s something urgent, like a really problematic typo, or something that, unless done before the release, would require a long, long wait (e.g. for the next Debian release). I gave the example that for important RTL bugs in LibreOffice I’ve asked Rene Engelhard to include the patch instead of waiting for the next release and its inclusion in Debian.

When I started the conversation I mentioned that 33% of our 12 participants were female, and that’s considered good compared to other computer/technical events, especially open source ones. To my surprise, the guys told me that in the Open Labs hackerspace the situation is the opposite: they have more female members than male (14 female to 12 male). Also, at their last OSCAL event they had 220 women and 100 men. I think there are grounds to learn what happens there, as the gals do something damn right over there. Maybe Outreachy rules for Albania should be different (:

Later that day I did another session with Redon Skikuli to be more practical, so I started to search for an Albanian spell-checking dictionary, found an old one and asked Redon to check its current status with the author, and also to check on such technical matters with the Social Sciences and Albanological Section of the Academy of Sciences of Albania, which is officially the regulator for Albanian.

In parallel, I started to check how to include the dictionary in LibreOffice, and asked Rene Engelhard to enable the Albanian language pack in Debian (as upstream already provides one). While checking the dictionaries I took the opportunity to update the Hebrew one. It took me a little longer as I needed to get the rust off my LibreOffice repositories (dictionaries live in a different repository) and also off my gerrit setup. But in the end: https://gerrit.libreoffice.org/#/c/41864/

With the talks today and starting to combine both Debian and LibreOffice work (although much of it was talking), I felt like the right person in the right place. I’m happy to be here and to contribute to two projects in parallel (:


Filed under: Debian GNU/Linux, i18n & l10n, LibreOffice

04 September, 2017 09:44AM by Kaplan

hackergotchi for Daniel Pocock

Daniel Pocock

Spyware Dolls and Intel's vPro

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May when Intel confirmed a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking

  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping in to the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?

Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

04 September, 2017 06:09AM by Daniel.Pocock

September 03, 2017

Lior Kaplan

FOSScamp Syros 2017 – day 1

During Debconf17 I was asked by Daniel Pocock if I could attend FOSScamp Syros to help with Debian’s l10n in the Balkans. I said I would be happy to, although my visit would be short (2.5 days) due to previous plans. The main idea of the camp is to have FOSS people meet for about a week near a beach. So it’s sun, water and free software. This year it takes place in Syros, Greece.

After taking the morning ferry, I met the guys at noon. I didn’t know how it would be, as it was my first time with this group/meeting, but they were very nice and welcoming. 10 minutes after my arrival I found myself sitting with two of the female attendees, starting to work on the Albanian (sq) translation of the Debian Installer.

It took me a few minutes to find where to check out the current level1 files, as I thought they weren’t in SVN anymore, but I ended up learning that the PO files are the only part of the installer still in SVN. As the girls were quick with the assigned level1 sublevels, I started to look for the level2 and level3 files, and it was annoying to have the POT files very accessible, but no links to the relevant git repositories. I do want to have all the relevant links in one central place, so people who want to help with translation can do that.

While some of the team members just used a text editor to edit the files, I suggested using either poedit or gtranslator, both of which I used a few years ago. Yaron Shahrabani also recommended Virtaal to me, but after trying it for a while I didn’t like it (except for its great feature of showing the diff for fuzzy messages). For the few people who also have Windows on their machine, both poedit and Virtaal have Windows binaries for download, so you don’t have to have Linux in order to help with translations.

In parallel, I used the “free” time to work on the Hebrew translation for level1, as it’s been a while since either I or Omer Zak worked on it. Quite soon the guys started to send me the files for review, and I did find some errors using diff, especially as not everyone uses a PO editor. I also missed a few strings during the review, which got fixed later on by Christian Perrier. Team work indeed (:

I found it interesting to see the team’s reactions and problems while working with the PO files; most projects now use some system (e.g. Pootle) for online web translation, which saves some of the headache, but also prevents doing some review and quality checking before submitting the files. It’s a good idea to explore this option for Debian as well.

A tip for those who do want to work with PO files: either use git’s diff features or use colordiff to check your changes, e.g. colordiff old.po new.po | less -R (note that less requires the -R parameter to keep the colors).

Although I met the guys only at noon, the day was very fruitful for Debian Installer l10n:

  • Albanian (sq) level1 – from 78% to 82% (Eva Vranici, Silva Arapi)
  • Albanian (sq) level2 – from 20% to 24% (Nafie Shehu)
  • Hebrew (he) level1 – from 96% to 97% (me)
  • Greek (el) level1 – from 96% to 97% (Sotirios Vrachas)

Some files are still work in progress and will be completed tomorrow. My goal is to have Albanian at 100% during the camp and ready for the next d-i alpha.

I must admit that I remember d-i having many more strings as part of the 3 levels, especially levels 2+3, which were huge (e.g. the iso codes).

Apart from all the work and FOSS-related conversations, I found a great group who welcomed me quickly, made me feel comfortable and taught me a thing or two about Greece and Syros specifically.

TIP: try the dark chocolate with red hot chili pepper in the ice cream shop.


Filed under: Debian GNU/Linux, i18n & l10n

03 September, 2017 07:09PM by Kaplan

Thorsten Alteholz

My Debian Activities in August 2017

FTP assistant

This month I accepted 217 packages and rejected 16 uploads. Though this might seem to be a low number this month, I am very pleased about the total number of packages that have been accepted: 558.

Debian LTS

This was my thirty-eighth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 20.25h. During that time I did LTS uploads of:

  • [DLA 1055-1] libgd2 security update for one CVE
  • [DLA 1060-1] libxml2 security update for two CVEs
  • [DLA 1062-1] curl security update for one CVE
  • [DLA 1063-1] extplorer security update for one CVE
  • [DLA 1065-1] fontforge security update for 8 CVEs
  • [DLA 1082-1] graphicsmagick security update for 8 CVEs

I also did the upload and sent the DLA for [DLA 1061-1] newsbeuter security update

Last but not least I also spent some time on frontdesk duties.

Other stuff

As announced last month I finished uploading all dependencies of glewlwyd and now we have an OAuth2 server available in Debian. This month I am trying to really use it and will tell you about my experiences.

As the severity of the gcc-7 bugs has been raised, I took care of: #853501, #853304, #853305, #853306, #853307, #853308 and #871088

I also uploaded a new version of duktape and now try to also provide a library that can be used in other packages.

Last but not least, my DOPOM of this month has been oysttyer. Actually it is a new package, but as ttytter has been abandoned upstream, this is the replacement. It is a fork, so you only need to get a new authorization key and can simply use it as you used ttytter before.

03 September, 2017 06:10PM by alteholz

September 02, 2017

hackergotchi for David Bremner

David Bremner

Indexing Debian's buildinfo

Introduction

Debian is currently collecting buildinfo files, but they are not very conveniently searchable. Eventually Chris Lamb's buildinfo.debian.net may solve this problem, but in the mean time, I decided to see how practical indexing the full set of buildinfo files is with sqlite.

Hack

  1. First you need a copy of the buildinfo files. This is currently about 2.6G, and unfortunately you need to be a Debian developer to fetch it.

     $ rsync -avz mirror.ftp-master.debian.org:/srv/ftp-master.debian.org/buildinfo .
    
  2. Indexing takes about 15 minutes on my 5 year old machine (with an SSD). If you index all dependencies, you get a database of about 4G, probably because of my natural genius for database design. Restricting to debhelper and dh-elpa, it's about 17M.

     $ python3 index.py
    

    You need at least python3-debian installed. (A rough sketch of one possible indexer follows this list.)

  3. Now you can do queries like

     $ sqlite3 depends.sqlite "select * from depends where depend='dh-elpa' and depend_version<='0106'"
    

    where 0106 is some adhoc normalization of 1.6
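
The full index.py is not reproduced here, so purely as an illustration (a guess at the general shape, not the actual script), a minimal indexer might look like the sketch below. It assumes python3-debian's Deb822 parser copes with the buildinfo files, a single flat depends table matching the query above, and a deliberately crude, hypothetical version normalization:

import glob
import re
import sqlite3

from debian import deb822  # from the python3-debian package

db = sqlite3.connect('depends.sqlite')
db.execute('CREATE TABLE IF NOT EXISTS depends '
           '(source TEXT, version TEXT, depend TEXT, depend_version TEXT)')

def normalize(version):
    # hypothetical ad-hoc normalization: zero-pad the numeric
    # components so that e.g. "1.6" becomes "0106"
    return ''.join('%02d' % int(p) for p in re.findall(r'\d+', version))

for path in glob.glob('buildinfo/**/*.buildinfo', recursive=True):
    with open(path) as f:
        info = deb822.Deb822(f)
    # Installed-Build-Depends is a comma-separated list of
    # "package (= version)" entries
    for dep in info.get('Installed-Build-Depends', '').split(','):
        m = re.match(r'(\S+)\s+\(=\s*([^)]+)\)', dep.strip())
        if m:
            db.execute('INSERT INTO depends VALUES (?, ?, ?, ?)',
                       (info.get('Source'), info.get('Version'),
                        m.group(1), normalize(m.group(2))))

db.commit()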

Conclusions

The version number hackery is pretty fragile, but good enough for my current purposes. A more serious limitation is that I don't currently have a nice (and you see how generous my definition of nice is) way of limiting to builds currently available e.g. in Debian unstable.

02 September, 2017 08:41PM

Antoine Beaupré

My free software activities, August 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This month I worked on a few major packages that took a long time instead of multiple smaller issues. Affected packages were Mercurial, libdbd-mysql-perl and Ruby.

Mercurial updates

Mercurial was vulnerable to two CVEs: CVE-2017-1000116 (command injection on clients through malicious ssh URLs) and CVE-2017-1000115 (path traversal via symlink). The former is an issue that actually affects many other similar software like Git (CVE-2017-1000117), Subversion (CVE-2017-9800) and even CVS (CVE-2017-12836). The latter symlink issue is a distinct issue that came up during an internal audit.

The fix, shipped as DLA-1072-1, involved a rather difficult backport, especially because the Mercurial test suite takes a long time to complete. This reminded me of the virtues of DEB_BUILD_OPTIONS=parallel=4, which sped up the builds considerably. I also discovered that the Wheezy build chain doesn't support sbuild's --source-only-changes flag which I had hardcoded in my sbuild.conf file. This seems to be simply because sbuild passes --build=source to dpkg-buildpackage, an option that is supported only in jessie or later.

libdbd-mysql-perl

I have worked on fixing two issues with the libdbd-mysql-perl package, CVE-2017-10788 and CVE-2017-10789, which resulted in the DLA-1079-1 upload. Behind this mysteriously named package sits a critical piece of infrastructure, namely the mysql commandline client which is probably used and abused by hundreds if not thousands of home-made scripts, but also all of Perl's MySQL support, which is probably used by even a larger base of software.

Through the Debian bug reports (Debian bug #866818 and Debian bug #866821), I have learned that the patches existed in the upstream tracker but were either ignored or even reverted in the latest 4.043 upstream release. It turns out that there are talks of forking that library because of maintainership issues. It blows my mind that such an important part of MySQL is basically unmaintained.

I ended up backporting the upstream patches, which was also somewhat difficult because of the long-standing issues with SSL support in MySQL. The backport there was particularly hard to test, as you need to run that test suite by hand, twice: once with a server configured with a (valid!) SSL certificate and one without (!). I'm wondering how much time it is really worth spending on trying to fix SSL in MySQL, however. It has been badly broken forever, and while the patch is an improvement, I would actually still never trust SSL transports in MySQL over an untrusted network. The few people that I know use such transports wrap their connections around a simpler stunnel instead.

The other issue was easier to fix so I submitted a pull request upstream to make sure that work isn't lost, although it is not clear what the future of that patch (or project!) will be at this point.

Rubygems

I also worked on the rubygems issues, which, thanks to the "vendoring" practice of the Ruby community, also affect the ruby1.9 package. 4 distinct CVEs were triaged here (CVE-2017-0899, CVE-2017-0900, CVE-2017-0901 and CVE-2017-0902) and I determined the latter issue didn't affect wheezy, as rubygems doesn't do its own DNS resolution there (later versions look up SRV records).

This is another package where the test suite takes a long time to run. Worse, the packages in Wheezy actually fail to build from source: the test suites just fail at various steps, particularly because of "dh key too small" errors for Rubygems, but also other errors for Ruby. I also had trouble backporting one test, which I had to simply skip for Rubygems. I uploaded and announced test packages and hopefully I'll be able to complete this work soon, although I would certainly appreciate any help on this...

Triage

I took a look at the sox, libvorbis and exiv2 issues. None had fixes available. sox and exiv2 were basically a list of fuzzing issues, which are often minor or at least of unknown severity. Those would have required a significant amount of work and I figured I would prioritize other work first.

I also triaged CVE-2017-7506, which doesn't seem to affect the spice package in wheezy, after doing a fairly thorough audit of the code. The vulnerability is specifically bound to the reds_on_main_agent_monitors_config function, which is simply not present in our older version. A hostile message would fall through the code and not provoke memory allocation or out of bounds access, so I simply marked the wheezy version as not-affected, something which usually happens during the original triage but can also happen during the actual patching work, as in this case.

Other free software work

This describes the volunteer work I do on various free software projects. This month, again, my internal reports show that I spent about the same amount of time on volunteer and paid work, but this is probably a wrong estimate because I spent a lot of time at Debconf which I didn't clock in...

Debconf

So I participated in the 17th Debian Conference in Montreal. It was great to see (and make!) so many friends from all over the world in person again, and I was happy to work on specific issues together with other Debian developers. I am especially thankful to David Bremner for fixing the syncing of the flagged tag when added to new messages (patch series). This allows me to easily sync the one tag (inbox) that is not statically assigned during notmuch new, by using flagged as a synchronization tool. That lets me use notmuch more easily across multiple machines without having to sync all tags with dump/restore or using muchsync, which wasn't working for me (although a new release came out which may fix my issues). The magic incantation looks something like this:

notmuch tag -inbox tag:inbox and not tag:flagged
notmuch tag +inbox not tag:inbox and tag:flagged

However, most of my time in the first week (Debcamp) was spent trying to complete the networking setup: configure switches, setup wiring and so on. I also configured an apt-cacher-ng proxy to serve packages to attendees during the conference. I configured it with Avahi to configure clients automatically, which led me to discover (and fix) Debian bug #870321, although there are more issues with the autodiscovery mechanism... I spent extra time to document the (somewhat simple) configuration of such a server in the Debian wiki because it was not the first time I had researched that procedure...

I somehow thought this was a great time to upgrade my laptop to stretch. Normally, I keep that device running stable because I don't use it often and I don't want to have major traumatizing upgrades every time I leave with it on a trip. But this time was special: there were literally hundreds of Debian developers to help me out if there was trouble. And there was, of course, trouble as it turns out! I had problems with the fonts on my display, because, well, I had suspended (twice) my laptop during the install. The fix was simply to flush the fontconfig cache, and I tried to document this in the fonts wiki page and my upgrades page.

I also gave a short training called Debian packaging 101 which was pretty successful. Like the short presentation I made at the last Montreal BSP, the workshop was based on my quick debian development guide. I'm thinking of expanding this to a larger audience with a "102" course that would discuss more complex packaging problems. But my secret plan (well, secret until now I guess) is to make packaging procedures more uniform in Debian by training new Debian packagers using that same training for the next 2 decades. But I will probably start by just trying to do this again at the next Debconf, if I can attend.

Debian uploads

I also sponsored two packages during Debconf: one was a "scratch an itch" upload (elpa-ivy) which I requested (Debian bug #863216) as part of a larger effort to ship the Emacs elisp packages as Debian packages. The other was an upload of diceware to build the documentation in a separate package and fix other issues I have found in the package during a review.

I also uploaded a bunch of other fixes to the Debian archive.

Signing keys rotation

I also started the process of moving my main OpenPGP certification key by adding a signing subkey. The subkey is stored in a cryptographic token so I can sign things on more than one machine without storing that critical key on all those devices physically.

Unfortunately, this meant that I need to do some shenanigans when I want to sign content in my Debian work, because the new subkey takes time to propagate to the Debian archive. For example, I have to specify the primary key with a "bang" when signing packages (debsign -k '792152527B75921E!' ...) or use inline signatures in email sent for security announcements (since that trick doesn't work in Mutt or Notmuch). I tried to figure out how to better coordinate this next time by reading up on the documentation at keyring.debian.org, but there is no fixed date for key changes on the rsync interface. There are "monthly changes" so one's best bet is to look for the last change in their git repository.

GitLab.com and LFS migration

I finally turned off my src.anarc.at git repository service by moving the remaining repos to GitLab. Unfortunately, GitLab removed support for git-annex recently, so I had to migrate my repositories to Git-LFS, which was an interesting experience. LFS is pretty easy to use, definitely simpler than git-annex. It also seems to be a good match for the use-case at hand, which is to store large files (videos, namely) as part of slides for presentations.

It turns out that their migration guide could have been made much simpler. I tried to submit those changes to the documentation but couldn't fork the GitLab EE project to make a patch, so I just documented the issue in the original MR for now. While I was there I filed a feature request to add a new reference shortcut (GL-NNN) after noticing a similar token used on GitHub. This would be a useful addition because I often have numbering conflicts between Debian BTS bug numbers and GitLab issues in packages I maintain there. In particular, I have problems using GitLab issue numbers in Monkeysign, because commit logs end up in Debian changelogs and will be detected by the Debian infrastructure even though those are GitLab bug numbers. Using such a shortcut would avoid detection and such a conflict.

Numpy-stats

I wrote a small tool to extract numeric statistics from a given file. I often do ad-hoc benchmarks where I store a bunch of numbers in a file and then try to make averages and so on. As an exercise in learning NumPy, I figured I would write such a simple tool, called numpy-stats, which probably sounds naive to seasoned Python scientists.
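
The actual tool lives in its own repository; just to illustrate the idea (this is my sketch of its likely shape, not the real numpy-stats), a minimal version could be as short as:

import json
import sys

import numpy as np

# read one number per line from the file given as the first argument
data = np.loadtxt(sys.argv[1])

print(json.dumps({
    "max": float(data.max()),
    "mean": float(data.mean()),
    "median": float(np.median(data)),
    "min": float(data.min()),
    "size": int(data.size),
    "std": float(data.std()),
}, indent=2, sort_keys=True))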

My incentive was that I was trying to figure out the distribution of password lengths in a given password generation scheme. So I wrote this simple script:

for i in $(seq 10000); do
    # print the length of each generated passphrase
    shuf -n4 /usr/share/dict/words | tr -d '\n' | wc -c
done > lengths

And then feed that data into the tool:

$ numpy-stats lengths 
{
  "max": 60, 
  "mean": 33.883293722913464, 
  "median": 34.0, 
  "min": 14, 
  "size": 143060, 
  "std": 5.101490225062775
}

I am surprised that there isn't such a tool already: hopefully I am wrong and will just be pointed towards the better alternative in the comments here!

Safe Eyes

I added screensaver support to the new SafeEyes project, which I am considering as a replacement for the workrave project I have been using for years. I really like how the interruptions basically block the whole screen: way more effective than only blocking the keyboard, because all potential distractions go away.

One feature that is missing is keystroke and mouse movement counting, and of course an official Debian package, although the latter would be easy to fix because upstream already has an unofficial build. I am thinking of writing my own little tool to count keystrokes, since the overlap between SafeEyes and such a counter isn't absolutely necessary. This is something that workrave does, but there are "idle time" extensions in Xorg that do not need to count keystrokes. There are already certain tools to count input events, but none seem to do what I want (most of them are basically keyloggers). It would be an interesting test to see if it's possible to write something that would work for both Xorg and Wayland at the same time. Unfortunately, preliminary research shows that:

  1. in Xorg, the only way to implement this is to sniff all events, i.e. to implement a keylogger

  2. in Wayland, this is completely unsupported. It seems some compositors could implement such a counter, but that would make it compositor-specific or, in other words, unportable

So there is little hope here, which brings to my mind "painmeter" as an appropriate name for this future programming nightmare.

Ansible

I sent my first contribution to the ansible project with a small documentation fix. I had an eye opener recently when I discovered a GitLab ansible prototype that would manipulate GitLab settings. When I first discovered Ansible, I was frustrated by the YAML/Jinja DSL: it felt silly to write all this code in YAML when you are a Python developer. It was great to see reasonably well-written Python code that would do things and delegate the metadata storage (and only that!) to YAML, as opposed to using YAML as a DSL.

So I figured I would look at the Ansible documentation on how this works, but unfortunately, the Ansible documentation is severely lacking in this area. There are broken links (I only fixed one page) and missing pieces. For example, the developing plugins page doesn't explain how to program a plugin at all.

I was told on IRC that: "documentation around developing plugins is sparse in general. the code is the best documentation that exists (right now)". I didn't get a reply when asking which code in particular could provide good examples either. In comparison, Puppet has excellent documentation on how to create custom types, functions and facts. That is definitely a turn-off for a new contributor, but at least my pull request was merged in and I can only hope that seasoned Ansible contributors expand on this critical piece of documentation eventually.

Misc

As you can see, I'm all over the place, as usual. GitHub tells me I "Opened 13 other pull requests in 11 repositories" (emphasis mine), which I guess means on top of the "9 commits in 5 repositories" mentioned earlier. My profile probably tells a more detailed story than what would be useful to mention here. I should also mention how difficult it is to write those reports: I basically do a combination of looking into my GitHub and GitLab profiles, the last 30 days of emails (!) and filesystem changes (!!). In no particular order, a list of changes which may be of interest:

  • font-large (and its alias, font-small): shortcut to send the right escape sequence to rxvt so it changes its font
  • fix-acer: short script to hardcode the modeline (you remember those?!) for my screen which has a broken EDID pin (so autodetection fails, yay Xorg log files...)
  • ikiwiki-pandoc-quickie: fake ikiwiki renderer that (ab)uses pandoc to generate a HTML file with the right stylesheet to preview Markdown as it may look in this blog (the basic template is missing still)
  • git-annex-transfer: a command I've often been missing in git-annex, which is a way to transfer files between remotes without having to copy them locally (upstream feature request)
  • I linked the graphics of the Debian archive software architecture in the Debian wiki in the hope more people notice it.
  • I did some tweaks on my Taffybar to introduce a battery meter, hoping to also add temperature sensors, which mostly failed. There's a pending pull request that may bring some sense into this, hopefully.
  • I made two small patches in Monkeysign to fix gpg.conf handling and multiple email output, a dumb bug I cannot believe nobody noticed or reported before now. Thanks Valerie for the bug report! The upload of this in Debian is pending a review from the release team.

02 September, 2017 08:16PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Playing with Moose and FFmpeg

As I've blogged before, I've been on and off working on SReview, a video review system. Development over the past year has been mostly driven by the need to have something up and running first for FOSDEM 2017, and then for DebConf17, and so I've cut corners left and right, which made the system, while functional, not quite entirely perfect everywhere. For instance, the backend scripts were done in ad-hoc perl, each reinventing their own wheel. Doing so made it easier for me to experiment with things and figure out where I want them to go, without immediately creating a lot of baggage that is not necessarily something I want to be stuck with. This flexibility has already paid off, in that I've redone the state machine between FOSDEM and DebConf17—and all it needed was to update a few SQL statements here and there. Well, and add a few of them, too.

It was always the intent to replace most of the ad-hoc perl with something better, however, once the time was ripe. One place where historical baggage is not so much of a problem, and where in fact abstracting away the complexity would now be an asset, is in the area of FFmpeg command lines. Currently, these are built by simple string expansion. For example, we do something like this (shortened for brevity):

system("ffmpeg -y -i $outputdir/$slug.ts -pass 1 -passlogfile ...");

inside an environment where the $outputdir and $slug variables are set by the surrounding perl code. That works, but it has some downsides; e.g., adding or removing options based on which codecs we're using is not so easy. It would be much more flexible if the command lines were generated dynamically based on requested output bandwidth and codecs, rather than being hardcoded in the file. Case in point: currently there are multiple versions of some of the backend scripts that only differ in details—mostly the chosen codec on the ffmpeg command line. Obviously this is suboptimal.

Instead, we want a way where video file formats can be autodetected, so that I can just say "create a file that uses encoder etc settings of this other file here". In addition, we also want a way where we can say "create a file that uses encoder etc settings of this other file here, except for these one or two options that I want to fine-tune manually". When I first thought about doing that about a year ago, that seemed complicated and not worth it—or at least not to that extent.

Enter Moose.

The Moose OO system for Perl 5 is an interesting way to do object orientation in Perl. I knew Perl supports OO, and I had heard about Moose, but never had looked into it, mostly because the standard perl OO features were "good enough". Until now.

Moose has a concept of adding 'attributes' to objects. Attributes can be set at object construction time, or can be accessed later on by way of getter/setter functions, or even simply functions named after the attribute itself (the default). For more complicated attributes, where the value may not be known until some time after the object has been created, Moose borrows the concept of "lazy" variables from Perl 6:

package Object;

use Moose;

has 'time' => (
    is => 'rw',
    builder => 'read_time',
    lazy => 1,
);

sub read_time {
    return localtime();
}

The above object has an attribute 'time', which will not have a value initially. However, upon first read, the 'localtime()' function will be called, the result is cached, and then (on all further calls of the same function) the cached result will be returned. In addition, since the attribute is read/write, the time can also be written to. In that case, any cached value that may exist will be overwritten, and if no cached value exists yet, the read_time function will never be called. (It is also possible to clear values if need be, so that the function would be called again.)

We use this with the following pattern:

package SReview::Video;

use Moose;
use JSON qw(decode_json);  # needed for decode_json in _probe below

has 'url' => (
    is => 'rw',
);

has 'video_codec' => (
    is => 'rw',
    builder => '_probe_videocodec',
    lazy => 1,
);

has 'videodata' => (
    is => 'bare',
    reader => '_get_videodata',
    builder => '_probe_videodata',
    lazy => 1,
);

has 'probedata' => (
    is => 'bare',
    reader => '_get_probedata',
    builder => '_probe',
    lazy => 1,
);

sub _probe_videocodec {
    my $self = shift;
    return $self->_get_videodata->{codec_name};
}

sub _probe_videodata {
    my $self = shift;
    if(!exists($self->_get_probedata->{streams})) {
        return {};
    }
    foreach my $stream(@{$self->_get_probedata->{streams}}) {
        if($stream->{codec_type} eq "video") {
            return $stream;
        }
    }
    return {};
}

sub _probe {
    my $self = shift;

    open JSON, "ffprobe -print_format json -show_format -show_streams " . $self->url . "|";
    my $json = "";
    while(<JSON>) {
        $json .= $_;
    }
    close JSON;
    return decode_json($json);
}

The videodata and probedata attributes are internal-use only attributes, and are therefore of the 'bare' type—that is, they cannot be read nor written to. However, we do add 'reader' functions that can be used from inside the object, so that the object itself can access them. These reader functions are generated, so they're not part of the object source. The probedata attribute's builder simply calls ffprobe with the right command-line arguments to retrieve data in JSON format, and then decodes that JSON file.

Since the passed JSON file contains an array with (at least) two streams—one for video and one for audio—and since the ordering of those streams depends on the file and is therefore not guaranteed, we have to loop over them. Since doing so in each and every attribute of the file we might be interested in would be tedious, we add a videodata attribute that just returns the data for the first found video stream (the actual source also contains a similar one for audio streams).

So, if you create an SReview::Video object and you pass it a filename in the url attribute, and then immediately run print $object->video_codec, then the object will

  • call ffprobe, and cache the (decoded) output for further use
  • from that, extract the video stream data, and cache that for further use
  • from that, extract the name of the used codec, cache it, and then return that name to the caller.

However, if the caller first calls $object->video_codec('h264'), then the ffprobe call and most of the caching will be skipped, and instead the h264 value will be returned as the video codec name.

Okay, so with a reasonably small amount of code, we now have a bunch of attributes that have defaults based on actual files but can be overwritten when necessary. Useful, right? Well, you might also want to care about the fact that sometimes you want to generate a video file that uses the same codec settings of this other file here. That's easy. First, we add another attribute:

has 'reference' => (
    is => 'ro',
    isa => 'SReview::Video',
    predicate => 'has_reference'
);

which we then use in the _probe method like so:

sub _probe {
    my $self = shift;

    if($self->has_reference) {
        return $self->reference->_get_probedata;
    }
    # original code remains here
}

With that, we can create an object like so:

my $video = SReview::Video->new(url => 'file.ts');
my $generated = SReview::Video->new(url => 'file2.ts', reference => $video);

Now if we ask the $generated object what the value of its video_codec setting is, without telling it ourselves first, it will use the $video object's probed data.

That only misses generating the ffmpeg command line, but that's all fairly straightforward and therefore left as an exercise to the reader. Or you can cheat, and look it up.

02 September, 2017 04:12PM

hackergotchi for Daniel Silverstone

Daniel Silverstone

F/LOSS activity, August 2017

Shockingly enough, my focus started out on Gitano once more. We managed a 1.1 release of Gitano during the Debian conference's "camp", which occurs in the week before the conference. This was a joint effort of myself, Richard Maw, and Richard Ipsum. I have to take my hat off to Richard Maw, because without his dedication to features, 1.1 would lack the ruleset support for basic readers/writers which Richard Ipsum proposed, and frankly it would be a weaker release without it.

Because of the debconf situation, we didn't have a Gitano developer day which, while sad, didn't slow us down much...

  • Once again, we reviewed our current task state
  • I submitted a series which fixed our test suite for Git 2.13 which was an FTBFS bug submitted against the Debian package for Gitano. Richard Maw reviewed and merged it.
  • Richard Maw supplied a series to add testing for dangling HEAD syndrome. I reviewed and merged that.
  • Richard Maw submitted a patch to improve the auditability of the 'as' command and I reviewed and merged that.
  • Richard Ipsum submitted a patch to add reader/writer configs to ease simple project management in Gitano. I'm not proud to say that I was too busy to look at this and ended up saying it was unlikely it'd get in. Richard Maw, quite rightly, took umbrage at that and worked on the patch, eventually submitting a new series with tests which I then felt obliged to review and I merged the series eventually.

    This is an excellent example of where just because one person is too busy doesn't mean that a good idea should be dropped, and I am grateful to Richard Maw for getting this work mergeable and effectively guilt-tripping me into reviewing/merging. This is a learning moment for me, and I hope to do better in the future.

  • During all that time, I was working on a plugin to support git-multimail.py in Gitano. This work ranged across hooks and caused me to spend a long time thinking about the semantics of configuration overriding etc. Fortunately I got there in the end, and with a massive review effort from Richard Maw, we got it merged into Gitano.
  • Finally I submitted a patch which caused the tests we run in Gitano to run from an 'install' directory which ensures that we catch bugs such as those which happened in earlier series where we missed out rules files for installation etc. Richard Maw reviewed and merged that.
  • And then we released the new version of Gitano and subsidiary libraries.

    The subsidiary releases were:

    • Luxio version 13, which switched us to readdir() from readdir_r(), thanks to Richard Ipsum.
    • Gall 1.3, which contained a bunch of build cleanups, and also a revparse_single() implementation in the C code to speed things up, thanks to Richard Maw.
    • Supple 1.0.8, which improved wrapper environment cleanups thanks to Richard Ipsum, allowed baking of paths in, which means Nix is easier to support (again thanks to Richard Ipsum), and fixed setuid handling so that Nix is easier to support (guess what? Richard Ipsum again).
    • Lace 1.4, which now verifies definition names in allow/deny/anyof/allof and also produces better error messages from nested includes.

    And, of course, Gitano 1.1 whose changes were somewhat numerous and so you are invited to read them in the Gitano NEWS file for the release.

Not Gitano

Of course, not everything I did in August was Gitano related. In fact, once I had completed the 1.1 release and uploaded everything to Debian, I decided that I was going to take a break from Gitano until the next developer day. (In fact there are even some patch series still unread on the mailing list which I will get to when I start the developer day.)

I have long been interested in STM32 microcontrollers, using them in a variety of projects including the Entropy Key which some of you may remember. Jorge Aparicio was working on Cortex-M3 support (among other microcontrollers) in Rust and he then extended that to include a realtime framework called RTFM and from there I got interested in what I might be able to do with Rust on STM32. I noticed that there weren't any pure Rust implementations of the USB device stack which would be necessary in order to make a device, programmed in Rust, appear on a USB port for a computer to control/use. This tweaked my interest.

As many of my readers are aware, I am very bad at doing things without some external motivation. As such, I then immediately offered to give a talk at a conference which should be happening in November, just so that I'd be forced to get on with learning and implementing the stack. I have been chronicling my work in this blog, and you're encouraged to go back and read them if you have similar interests. I'm sure that as my work progresses, I'll be doing more and more of that and less of Gitano, for at least the next two months.

To bring that into context as F/LOSS work, I did end up submitting some patches to Jorge's STM32F103xx repository to support a couple more clock configuration entries so that USB and ADCs can be set up cleanly. So at least there's that.

02 September, 2017 10:00AM by Daniel Silverstone

hackergotchi for Clint Adams

Clint Adams

Litigants

Bronwyn’s mom got hit by a semi. She was on the passenger side of the car, the side of impact, and she did not rebound with extreme resilience. The family sued the trucking company and came away with a settlement of roughly $10 million. The lawyers took $6.5 million of that: quite a deal.

Bronwyn learned two things from this, and neither one was about Christopher Lloyd.

Posted on 2017-09-02
Tags: mintings

02 September, 2017 01:21AM

September 01, 2017

Good night moon and spoon balloon

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

Posted on 2017-09-01
Tags: bgs

01 September, 2017 11:17PM

Bits from Debian

New Debian Developers and Maintainers (July and August 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Ross Gammon (rossgammon)
  • Balasankar C (balasankarc)
  • Roland Fehrenbacher (rfehren)
  • Jonathan Cristopher Carter (jcc)

The following contributors were added as Debian Maintainers in the last two months:

  • José Gutiérrez de la Concha
  • Paolo Greppi
  • Ming-ting Yao Wei
  • Boyuan Yang
  • Paul Hardy
  • Fabian Wolff
  • Moritz Schlarb
  • Shengjing Zhu

Congratulations!

01 September, 2017 06:30PM by Jean-Pierre Giraud

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in August 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h and during this time I did 4 days of front desk handling CVE triage (28 commits to the security tracker). I had a bit of time left and I opted to work on a package that had been lingering for a while: exiv2. It turns out the security researchers who requested the CVE did not even contact the upstream author so I opened 12 tickets on GitHub. The upstream author was unaware of those issues and is relatively unfamiliar with the general process of handling security updates. I started the work of reproducing each issue and so far they only affect the version 0.26 in experimental.

Misc Debian/Kali work

live-build and live-config. I pushed a few updates: dropping the useless xorriso --hardlinks option (as discussed in https://bugs.kali.org/view.php?id=4109), adding a .disk/mkisofs file on request of Thomas Schmitt, and fixing a severe issue with the handling of locales configuration that broke wayland sessions entirely.

open-vm-tools and vmwgfx. The switch of GNOME to Wayland by default resulted in multiple regressions reported by Kali users, in particular for VMWare users where desktop resizing was no longer working. There was a patch available but it did not work for me, so I worked with Thomas Hellstrom (of VMWare) to identify the problems and he provided me an updated patch. I submitted this patch to Debian too (bug report, pull request).

Linux 4.12 also showed another regression for VMWare users where the screen would not be refreshed/updated when you are using Wayland/KMS. I did multiple tests for Thomas and provided the requested data so that they could create a fix (which I incorporated into Kali and should come to Debian through the upstream stable tree).

Packaging. I uploaded zim 0.67 to unstable. I fixed an RC bug on shiboken to get pyside and ubertooth back into testing. I had to hack the package to use gcc-6 on mips64el because that architecture is suffering from a severe gcc bug which probably broke a large part of the code compiled since the switch to gcc-7 (and which triggered a test failure in shiboken, fortunately)… I wonder if anybody will make sure to recompile all packages that might have been misbuilt.

Infrastructure. In a discussion on debian-devel, the topic of using tracker.debian.org to store "who is maintaining what" came up again. I responded to make it known that this is something that I'd like to see done and that I have already taken measures in this direction. I wanted to make an experiment with my zim package but quickly ran into a problem with ftpmaster's lintian auto-rejects (which I reported in #871575).

The BTS is now linking to tracker.debian.org on its web interface. To continue and give a push to this move, I scanned all the files in the qa SVN repository and updated many occurrences of packages.qa.debian.org with tracker.debian.org.

I also spotted a small problem in the way we handle autoremoval mails in tracker.debian.org: we often get them twice. I filed #871683 to get this fixed on release.debian.org.

Bug reports. vmdebootstrap creates unbootable qemu images (#872999). Bugs in udebs are not shown in the view by source package (#872784). New upstream release of ethtool (#873692). Upstream bug report on systemd: support a systemd.swap=no boot command-line option.

I also shared some of my ideas/dreams in #859867 speaking of a helper tool to setup and maintain up-to-date build chroots and autopkgtest qemu images.

More bug fixes and pull requests. I created a patch to fix a build failure of systemd when /tmp is an overlayfs (#854400, the pull request has been discarded). I fixed the RC bug #853570 on ncrack and forwarded my changes upstream (here and here).

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

01 September, 2017 01:40PM by Raphaël Hertzog

Russ Allbery

Review: Regeneration

Review: Regeneration, by Julie E. Czerneda

Series: Species Imperative #3
Publisher: DAW
Copyright: 2006
ISBN: 0-7564-0345-6
Format: Hardcover
Pages: 543

This is the third book of the Species Imperative trilogy, and this is the type of trilogy that's telling a single story in three books. You don't want to read this out of order, and I'll have to be cautious about aspects of the plot to not spoil the earlier books.

Mac is still recovering from the effects of the first two books of the series, but she's primarily worried about a deeply injured friend. Worse, that friend is struggling to explain or process what's happened, and the gaps in her memory and her very ability to explain may point at frightening, lingering risks to humanity. As much as she wants to, Mac can't give her friend all of her focus, since she's also integral to the team trying to understand the broader implications of the events of Migration. Worse, some of the non-human species have their own contrary interpretations that, if acted on, Mac believes would be desperately risky for humanity and all the other species reachable through the transects.

That set of competing priorities and motivations eventually sort themselves out into a tense and rewarding multi-species story, but they get off to an awkward start. The first 150 pages of Regeneration are long on worry, uncertainty, dread, and cryptic conversations, and short on enjoyable reading. Czerneda's recaps of the previous books are appreciated, but they weren't very smoothly integrated into the story. (I renew my occasional request for series authors to include a simple plot summary of the previous books as a prefix, without trying to weave it into the fiction.) I was looking forward to this book after the excellent previous volumes, but struggled to get into the story.

That does change. It takes a bit too long, with a bit too much nameless dread, a bit too much of an irritating subplot between Fourteen and Oversight that I didn't think added anything to the book, and not enough of Mac barreling forward doing sensible things. But once Mac gets back into space, with a destination and a job and a collection of suspicious (or arrogant) humans and almost-incomprehensible aliens to juggle, Czerneda hits her stride.

Czerneda doesn't entirely avoid Planet of the Hats problems with her aliens, but I think she does better than most of science fiction. Alien species in this series do tend to be a bit all of a type, and Mac does figure them out by drawing conclusions from biology, but those conclusions are unobvious and based on Mac's mix of biological and human social intuition. They refreshingly aren't as simple as biology completely shaping culture. (Czerneda's touch is more subtle than James White's Sector General, for example.) And Mac has a practical, determined, and selfless approach that's deeply likable and admirable. It's fun as a reader to watch her win people over by just being competent, thoughtful, observant, and unrelentingly ethical.

But the best part of this book, by far, are the Sinzi.

They first appeared in the second book, Migration, and seemed to follow the common SF trope of a wise elder alien race that can bring some order to the universe and that humanity can learn from. They, or more precisely the one Sinzi who appeared in Migration, were very good at that role. But Czerneda had something far more interesting planned, and in Regeneration they become truly alien in their own right, with their own nearly incomprehensible way of viewing the universe.

There are so many ways that this twist can go wrong, and Czerneda avoids all of them. She doesn't undermine their gravitas, nor does she elevate them to the level of Arisians or other semi-angelic wise mentors of other series. Czerneda makes them different in profound ways that are both advantage and disadvantage, pulls that difference into the plot as a complicating element, and has Mac stumble yet again into a role that is accidentally far more influential than she intends. Mac is the perfect character to do that to: she has just the right mix of embarrassment, ethics, seat-of-the-pants blunt negotiation skills, and a strong moral compass. Given a lever and a place to stand, one can believe that Mac can move the world, and the Sinzi are an absolutely fascinating lever.

There are also three separate, highly differentiated Sinzi in this story, with different goals, life experience, personalities, and levels of gravitas. Czerneda's aliens are good in general, but her focus is usually more on biology than individual differentiation. The Sinzi here combine the best of both types of character building.

I think the ending of Regeneration didn't entirely work. After all the intense effort the characters put into understanding the complexity of the universe over the course of the series, the denouement has a mopping-up feel and a moral clarity that felt a bit too easy. But the climax has everything I was hoping for, there's a lot more of Mac being Mac, and I loved every moment of the Sinzi twist. Now I want a whole new series exploring the implications of the Sinzi's view of the universe on the whole history of galactic politics that sat underneath this story. But I'll settle for moments of revelation that sent shivers down my spine.

This is a bit of an uneven book that falls short of its potential, but I'll remember it for a long time. Add it on to a deeply rewarding series, and I will recommend the whole package unreservedly. The Species Imperative is excellent science fiction that should be better-known than it is. I still think the romance subplot was unfortunate, and occasionally the aliens get too cartoony (Fourteen, in particular, goes a bit too far in that direction), but Czerneda never lingers too long on those elements. And the whole work is some of the best writing about working scientific research and small-group politics that I've read.

Highly recommended, but read the whole series in order.

Rating: 9 out of 10

01 September, 2017 05:30AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppAnnoy 0.0.9

A new version 0.0.9 of RcppAnnoy, our Rcpp-based R integration of the nifty Annoy library by Erik, is now on CRAN. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

This release corrects an issue for Windows users discovered by GitHub user 'khoran', who later also suggested the fix of using binary mode. It upgrades to Annoy release 1.9.1 and brings its new Manhattan distance to RcppAnnoy. A number of unit tests were added as well, and we updated some packaging internals such as symbol registration.

And I presume I had a good streak emailing with Uwe's robots as the package made it onto CRAN rather smoothly within ten minutes of submission:

RcppAnnoy to CRAN 

Changes in this version are summarized here:

Changes in version 0.0.9 (2017-08-31)

  • Synchronized with Annoy upstream version 1.9.1

  • Minor updates in calls and tests as required by annoy 1.9.1

  • New Manhattan distance modules along with unit test code

  • Additional unit tests from upstream test code carried over

  • Binary mode is used for save (as suggested by @khoran in #21)

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

01 September, 2017 01:20AM

August 31, 2017

Paul Wise

FLOSS Activities August 2017

Changes

Issues

Review

Administration

  • myrepos: get commit/admin access from joeyh at DebConf17, add commit/admin access for other patch submitters, apply my stack of patches
  • Debian: fix weird log file issues, redirect hardware donor, cleaned up a weird dir, fix some OOB info, ask for TLS on meetings-archive.d.n, check an I/O error, restart broken stunnels, powercycle 1 borked machine
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: remove some stray cache files, whitelist 3 email domains, whitelist some email addresses, disable 1 spammer account, disable 1 account with bouncing email
  • Debian QA: apply patch to fix PTS watch file errors, deploy changes
  • Debian derivatives census: run scripts for Purism, remove some noise from logs, trigger a recheck, merge fix by Unit193, deploy changes
  • Openmoko: security updates, reboots, enable unattended-upgrades

Communication

  • Attended DebConf17 and provided some input in BoFs
  • Sent Misc Dev News #44
  • Invite Google gLinux (on IRC) to the Debian derivatives census
  • Welcome Sven Haardiek (of GreenboneOS) to the Debian derivatives census
  • Inquire about the status of Canaima

Sponsors

The samba bug report was sponsored by my employer. All other work was done on a volunteer basis.

31 August, 2017 11:45PM

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in August 2017

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):

  • Created ZeroCoolOS, a live operating system that plays the film Hackers (1995) on a continuous loop.
  • Sent a patch for pristine-tar to allow storage of detached upstream signatures. (#871809)
  • Worked more on Lintian, a static analysis tool for Debian packages, reporting on various errors, omissions and quality-assurance issues to the maintainer (previous changes):
    • Fix an apache2-unparsable-dependency false positive by allowing periods in dependency names. (#873701)
    • Ignore "repacked" packages when checking for upstream source tarball signatures as they will never match.
    • Downgrade the severity of orig-tarball-missing-upstream-signature. (#870722)
    • From a suggestion by Theodore Ts'o, expand the explanation of orig-tarball-missing-upstream-signature to include the location of where dpkg-source looks.
    • Address a number of issues in the copyright-year-in-future tag including preventing false positives in port numbers, email addresses, ISO standard numbers and street addresses (#869788), as well as "meta" or testing statements (#873323). In addition, report all violating years in a line and expand the testsuite.
    • Don't match quoted "FIXME" variants of file-contains-fixme-placeholder (#870199), avoid checking copyright_hints files (#872843) and downgrade the tag's severity.
    • Apply a patch from Alex Muntada to recommend "substr" over "substring" in mentions-deprecated-usr-lib-perl5-directory. (#871767)
    • Prevent missing-build-dependency-for-dh_-command false positives exposed by following the advice in useless-autoreconf-build-depends. (#869541)
    • Ensure readme-debian-contains-debmake-template also checks for files containing "Automatically generated by debmake".
    • Check python3-foo packages have a Section: python, not just python2-foo. (#870272)
    • Check for packages shipping compiled Java class files. (#873211)
    • Additionally consider .cljc files to avoid codeless-jar warnings. (#870649)
    • Prevent desktop-entry-lacks-keywords-entry false positives for Link and Directory-style .desktop files. (#873702)
    • Split out the Python checks from checks/scripts.pm into a new check of type source.
    • Check for python-foo without a corresponding python3-foo package. (#870681)
    • Complain about packages that Build-Depend on python-sphinx only. (#870730)
    • Warn about packages that alternatively Build-Depend on the Python 2 and Python 3 versions of Sphinx. (#870758)
    • Check for packages that depend on Python 2.x. (#870822)
    • Correct false positives in unconditional-use-of-dpkg-statoverride by detecting "if !" as a shell prefix. (#869587)
    • Alert on missing calls to dpkg-maintscript-helper(1) in maintainer scripts. (#872042)
    • Check for packages using sensible-utils without declaring a dependency after splitting from debianutils. (#872611)
    • Warn about scripts using nodejs as an interpreter now that the nodejs script provides /usr/bin/node. (#873096)
    • Remove recommendations to add a Testsuite: autopkgtest field to debian/control and emit a new tag if the package does so. (#865531)
    • Recognise autopkgtest-pkg-elpa as a valid test suite. (#873458)
    • Add note to /etc/bash_completion.d's obsolete path warning output regarding stricter filename requirements. (#814599)
    • Add 4.0.1 and 4.1.0 as known Policy standards versions.
    • Apply a patch from Maia Everett to avoid British spellings under the en_US locale. (#868897)
    • Stop emitting {maintainer,uploader}-address-causes-mail-loops for @packages.debian.org addresses. (#871575)
    • Modify Lintian::Data's all subroutine to always return keys in insertion order.
    • Apply a patch from Steve Langasek to accommodate binutils outputting symbols in a different format on the ppc64el architecture. (#869750)
    • Add an explicit test for packages including external fonts via the Google Font and TypeKit APIs. (#873434)
    • Add missing entries in internal Test-For fields to make development/testing workflow less error-prone.
  • Sent three pull requests to git-buildpackage, a tool to assist in Debian packaging from Git repositories:
    • Make pq --abbrev= configurable. (#872351)
    • Use build profiles to avoid installation of test dependencies. (#31)
    • Correct "allow to" grammar. (#30)
  • Updated travis.debian.net (my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform for testing):
    • Move away from deb.debian.org; Travis appears to be using an HTTP proxy that strips SRV records. (commit)
    • Highlight that double quotes are required for TRAVIS_DEBIAN_EXTRA_REPOSITORY. (commit)
    • Use force-unsafe-io. (commit)
    • Clarify docs when upstream already has a travis.yml file. (#46)
    • Make documentation easier to copy-paste. (commit)
  • Merged a pull request in django-slack, my library to easily post messages to the Slack group-messaging utility, where instantiation of a SlackException was failing. (#71)
  • Assigned two pull requests to the Redis key-value database store to correct "did not received" and "faield" typos. (#4216 & #4215).

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Presented a status update at Debconf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • I worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use the name attribute over path to avoid leaking the full comparison path in the output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Add a simple autopkgtest smoke test. (commit)


Debian

Patches contributed

  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)

Uploads

  • python-django 1:1.11.4-1 — New upstream release.
  • redis:
    • 4:4.0.1-3 — Drop yet more non-deterministic tests.
    • 4:4.0.1-4 — Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 — Drop even more tests with timing issues.
    • 4:4.0.1-6 — Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 — Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 — If SOURCE_DATE_EPOCH is set, use either that as a source of current dates or the UTC version of the file's modification time (#836004), don't call update-alternatives --remove in postrm, update debian/watch and Homepage, & refresh/tidy the packaging.
  • bfs 1.1.1-1 — New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 — New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 — Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 — Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) — Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.


Debian LTS


This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.

Debian bugs filed

  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer fields points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)

FTP Team


As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest.

I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

31 August, 2017 08:39PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

C, floating point, and help!

Floating point is a pain. I know this. But I recently took over the sigrok packages in Debian and, as part of updating to the latest libsigrok4 library, enabled the post compilation tests. Which promptly failed on i386. Some narrowing down of the problem leads to the following test case (which fails with both gcc-6 under Debian/Stretch and gcc-7 on Debian/Testing):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
        printf("%" PRIu64 "\n", (uint64_t)((1.034567) * (uint64_t)(1000000ULL)));
}

We expect to see 1034567 printed out. On x86_64 we do:

$ arch
x86_64
$ gcc -Wall t.c -o t ; ./t
1034567

If we compile for 32-bit the result is also as expected:

$ gcc -Wall -m32 t.c -o t ; ./t
1034567

Where things get interesting is when we enable --std=c99:

$ gcc -Wall --std=c99 t.c -o t ; ./t
1034567
$ gcc -Wall -m32 --std=c99 t.c -o t ; ./t
1034566

What? It turns out all the cXX standards result in the last digit incorrectly being 6, while the gnuXX standards (gnu11 is apparently the default) result in the correct trailing 7. Is there some postfix I can add to the value to prevent the floating point truncation taking place? Or do I just have to accept this? It works fine on armel, so it’s not a simple 32/64 bit issue.
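
A likely culprit, from reading the gcc documentation, is -fexcess-precision: I believe the strict ISO modes default to -fexcess-precision=standard on i386, which changes where the x87's 80-bit intermediate result gets rounded to double, while the gnu modes default to -fexcess-precision=fast. If the value actually wanted is the rounded product rather than the truncated one, rounding explicitly sidesteps the excess precision entirely; a sketch:

#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* llround() rounds to nearest instead of truncating, so the
         * result no longer depends on whether the product was rounded
         * to double before the conversion. Link with -lm. */
        printf("%" PRIu64 "\n", (uint64_t)llround(1.034567 * 1000000.0));
}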

31 August, 2017 04:58PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

gwolf.blog.fork()

Ohai,

I have recently started to serve as a Feature Editor for the ACM XRDS magazine. As such, I was also invited to post some general blog posts on XRDS blog — And I just started yesterday by posting about DebConf.

I'm not going to pull (or mention) each of my posts in my main blog, nor will I syndicate it to Planet Debian (where most of my readership comes from), although I did add it to my dlvr.it account (that relays my posts to Twitter and Facebook, for those of you that care about said services). This mention is a one-off thing.

So, if you want to see yet another post explaining what is DebConf and what is Debian to the wider public, well... That's what I came up with :)

[Update]: Of course, I wanted to thank Aigars Mahinovs for the photos I used on that post. Have you looked at them all? I spent a most enjoyable time going through them :-]

31 August, 2017 03:54PM by gwolf

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

Firefox slow as HEL even after boosting RAM from 4GB to 16GB

The title says it all: Firefox is the only piece of software that shows zero sign of improvement in its performance after quadrupling the amount of RAM installed on my 64-bit desktop computer. All other applications show a significant boost in performance. Yet, among all the applications I use, Firefox is the one that most needed this speed boost. To say that this is a major disappointment is an understatement. FFS, Mozilla devs!

31 August, 2017 10:33AM by Martin-Éric (noreply@blogger.com)

August 30, 2017

hackergotchi for Kees Cook

Kees Cook

GRUB and LUKS

I got myself stuck yesterday with GRUB running from an ext4 /boot/grub, but with /boot inside my LUKS LVM root partition, which meant GRUB couldn’t load the initramfs and kernel.

Luckily, it turns out that GRUB does know how to mount LUKS volumes (and LVM volumes), but all the instructions I could find talk about setting this up ahead of time (“Add GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub“), rather than what the correct manual GRUB commands are to get things running on a failed boot.

These are my notes on that, in case I ever need to do this again, since there was one specific gotcha with using GRUB’s cryptomount command (noted below).

Available devices were the raw disk (hd0), the /boot/grub partition (hd0,msdos1), and the LUKS volume (hd0,msdos5):

grub> ls
(hd0) (hd0,msdos1) (hd0,msdos5)

Used cryptomount to open the LUKS volume (but without ()s! It says it works if you use parens, but then you can’t use the resulting (crypto0)):

grub> insmod luks
grub> cryptomount hd0,msdos5
Enter password...
Slot 0 opened.

Then you can load LVM and it’ll see inside the LUKS volume:

grub> insmod lvm
grub> ls
(crypto0) (hd0) (hd0,msdos1) (hd0,msdos5) (lvm/rootvg-rootlv)

And then I could boot normally:

grub> configfile $prefix/grub.cfg

After booting, I added GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub and ran update-grub. I could boot normally after that, though I’d be prompted twice for the LUKS passphrase (once by GRUB, then again by the initramfs).

To avoid this, it’s possible to add a second LUKS passphrase, contained in a file in the initramfs, as described here; this works for Ubuntu and Debian too. The quick summary is:

Create the keyfile and add it to LUKS:

# dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
# chmod 0400 /crypto_keyfile.bin
# cryptsetup luksAddKey /dev/sda5 /crypto_keyfile.bin
*enter original password*
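
To double-check that the new key slot took before rebooting (an optional step, not part of the original recipe), list the key slots:

# cryptsetup luksDump /dev/sda5

The added passphrase shows up as an additional ENABLED key slot.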

Adjust the /etc/crypttab to include passing the file via /bin/cat:

sda5_crypt UUID=4aa5da72-8da6-11e7-8ac9-001cc008534d /crypto_keyfile.bin luks,keyscript=/bin/cat

Add an initramfs hook to copy the key file into the initramfs, keep non-root users from being able to read your initramfs, and trigger a rebuild:

# cat > /etc/initramfs-tools/hooks/crypto_keyfile <<'EOF'
#!/bin/bash
if [ "$1" = "prereqs" ] ; then
    cp /crypto_keyfile.bin "${DESTDIR}"
fi
EOF
# chmod a+x /etc/initramfs-tools/hooks/crypto_keyfile
# chmod 0700 /boot
# update-initramfs -u

This has the downside of leaving a LUKS passphrase “in the clear” while you’re booted, but if someone has root, they can just get your dm-crypt encryption key directly anyway:

# dmsetup table --showkeys sda5_crypt
0 155797496 crypt aes-cbc-essiv:sha256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 8:5 2056

And of course if you’re worried about Evil Maid attacks, you’ll need a real static root of trust instead of doing full disk encryption passphrase prompting from an unverified /boot partition. :)

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

30 August, 2017 05:27PM by kees

hackergotchi for Daniel Silverstone

Daniel Silverstone

STM32 USB and Rust - Packet Memory Area

In this, our next exciting installment of STM32 and Rust for USB device drivers, we're going to look at what the STM32 calls the 'packet memory area'. If you've been reading along with the course, including reading up on the datasheet content, then you'll be aware that as well as the STM32's normal SRAM, there's a 512 byte SRAM dedicated to the USB peripheral. This SRAM is called the 'packet memory area' and is shared between the main bus and the USB peripheral core. Its purpose is, simply, to store packets in transit: both those IN to the host (so stored, queued for transmission) and those OUT from the host (so stored, queued for the application to extract and consume).

It's time to actually put hand to keyboard on some Rust code, and the PMA is the perfect starting point, since it involves two basic structures. Packets are the obvious first structure, and they are contiguous sets of bytes which for the purpose of our work we shall assume are one to sixty-four bytes long. The second is what the STM32 datasheet refers to as the BTABLE or Buffer Descriptor Table. Let's consider the BTABLE first.

The Buffer Descriptor Table

The BTABLE is arranged in quads of 16bit words. For "normal" endpoints this is a pair of descriptors, each consisting of two words, one for transmission, and one for reception. The STM32 also has a concept of double buffered endpoints, but we're not going to consider those in our proof-of-concept work. The STM32 allows for up to eight endpoints (EP0 through EP7) in internal register naming, though they support endpoints numbered from zero to fifteen in the sense of the endpoint address numbering. As such there're eight descriptors each four 16bit words long (eight bytes) making for a buffer descriptor table which is 64 bytes in size at most.

Buffer Descriptor Table

Byte offset in PMA | Field name    | Description
-------------------|---------------|-------------------------------------------------------
(EPn * 8) + 0      | USB_ADDRn_TX  | The address (inside the PMA) of the TX buffer for EPn
(EPn * 8) + 2      | USB_COUNTn_TX | The number of bytes present in the TX buffer for EPn
(EPn * 8) + 4      | USB_ADDRn_RX  | The address (inside the PMA) of the RX buffer for EPn
(EPn * 8) + 6      | USB_COUNTn_RX | The number of bytes of space available for the RX buffer for EPn (and, once received, the number of bytes received)

The TX entries are trivial to comprehend. To transmit a packet, part of the process involves writing the packet into the PMA, putting the address into the appropriate USB_ADDRn_TX entry, and the length into the corresponding USB_COUNTn_TX entry, before marking the endpoint as ready to transmit.

To receive a packet though is slightly more complex. The application must allocate some space in the PMA, setting the address into the USB_ADDRn_RX entry of the BTABLE before filling out the top half of the USB_COUNTn_RX entry. For ease of bit sizing, the STM32 only supports space allocations of two to sixty-two bytes in steps of two bytes at a time, or thirty-two to five-hundred-twelve bytes in steps of thirty-two bytes at a time. Once the packet is received, the USB peripheral will fill out the lower bits of the USB_COUNTn_RX entry with the actual number of bytes filled out in the buffer.

Packets themselves

Since packets are, typically, a maximum of 64 bytes long (for USB 2.0) and are simply sequences of bytes with no useful structure to them (as far as the USB peripheral itself is concerned) the PMA simply requires that they be present and contiguous in PMA memory space. Addresses of packets are relative to the base of the PMA and are byte-addressed, however they cannot start on an odd byte, so essentially they are 16bit addressed. Since the BTABLE can be anywhere within the PMA, as can the packets, the application will have to do some memory management (either statically, or dynamically) to manage the packets in the PMA.
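
To illustrate the static flavour of that management, here's a minimal sketch of a bump allocator over PMA offsets. This is purely an illustration (not the actual driver code) and assumes the BTABLE sits at offset zero, occupying the full 64 bytes:

    pub struct PmaAllocator {
        // Next free byte offset within the 512 byte PMA.
        next: usize,
    }

    impl PmaAllocator {
        pub fn new() -> PmaAllocator {
            // Skip the BTABLE, assumed here to live at offset zero.
            PmaAllocator { next: 64 }
        }

        /// Reserve `len` bytes for a packet buffer, returning its PMA
        /// byte offset. Packets cannot start on an odd byte, so round
        /// the length up to the 2-byte granularity first.
        pub fn alloc(&mut self, len: usize) -> usize {
            let len = (len + 1) & !1;
            let offset = self.next;
            assert!(offset + len <= 512, "PMA exhausted");
            self.next += len;
            offset
        }
    }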

Accessing the PMA

The PMA is accessed in 16bit word sections. It's not possible to access single bytes of the PMA, nor is it conveniently structured as far as the CPU is concerned. Instead the PMA's 16bit words are spread on 32bit word boundaries as far as the CPU knows. This is done for convenience and simplicity of hardware, but it means that we need to ensure our library code knows how to deal with this.

First up, to convert an address in the PMA into something which the CPU can use we need to know where in the CPU's address space the PMA is. Fortunately this is fixed at 0x4000_6000. Secondly we need to know what address in the PMA we wish to access, so we can determine which 16bit word that is, and thus what the address is as far as the CPU is concerned. If we assume we only ever want to access 16bit entries, we can just multiply the PMA offset by two before adding it to the PMA base address. So, to access the 16bit word at byte-offset 8 in the PMA, we'd look for the 16bit word at 0x4000_6000 + (0x08 * 2) => 0x4000_6010.
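
Written out as code, that conversion is just the following (a restatement of the arithmetic above, not a function from the actual driver):

    // The PMA's base address in the CPU's address space.
    const PMA_BASE: usize = 0x4000_6000;

    /// Map a 16bit-aligned PMA byte offset to the CPU address of the
    /// 32bit slot which holds that 16bit word.
    fn pma_cpu_address(pma_offset: usize) -> usize {
        debug_assert!(pma_offset & 1 == 0);
        PMA_BASE + pma_offset * 2
    }

With that, pma_cpu_address(0x08) yields 0x4000_6010, matching the worked example.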

Bundling the PMA into something we can use

I said we'd do some Rust, and so we shall…

    // Thanks to the work by Jorge Aparicio, we have a convenient wrapper
    // for peripherals which means we can declare a PMA peripheral:
    pub const PMA: Peripheral<PMA> = unsafe { Peripheral::new(0x4000_6000) };

    // The PMA struct type which the peripheral will return a ref to
    pub struct PMA {
        pma_area: PMA_Area,
    }

    // And the way we turn that ref into something we can put a useful impl on
    impl Deref for PMA {
        type Target = PMA_Area;
        fn deref(&self) -> &PMA_Area {
            &self.pma_area
        }
    }

    // This is the actual representation of the peripheral, we use the C repr
    // in order to ensure it ends up packed nicely together
    #[repr(C)]
    pub struct PMA_Area {
        // The PMA consists of 256 u16 words separated by u16 gaps, so let's
        // represent that as 512 u16 words, of which we'll only use every other one.
        words: [VolatileCell<u16>; 512],
    }

That block of code gives us three important things. Firstly a peripheral object which we will be able to (later) manage nicely as part of the set of peripherals which RTFM will look after for us. Secondly we get a convenient packed array of u16s which will be considered volatile (the compiler won't optimise around the ordering of writes etc). Finally we get a struct on which we can hang an implementation to give our PMA more complex functionality.

A useful first pair of functions would be to simply let us get and put u16s in and out of that word array, since we're only using every other word…

    impl PMA_Area {
        pub fn get_u16(&self, offset: usize) -> u16 {
            assert!((offset & 0x01) == 0);
            self.words[offset].get()
        }

        pub fn set_u16(&self, offset: usize, val: u16) {
            assert!((offset & 0x01) == 0);
            self.words[offset].set(val);
        }
    }

These two functions take an offset in the PMA and return the u16 word at that offset. They only work on u16 boundaries and as such they assert that the bottom bit of the offset is unset. In a release build, that will go away, but during debugging this might be essential. Since we're only using 16bit boundaries, this means that the first word in the PMA will be at offset zero, and the second at offset two, then four, then six, etc. Since we allocated our words array to expect to use every other entry, this automatically converts into the addresses we desire.

If we pop (and please don't worry about the unsafe{} stuff for now):

    unsafe { (&*usb::pma::PMA.get()).set_u16(4, 64); }

into our main function somewhere, and then build and objdump our test binary we can see the following set of instructions added:

 80001e4:   f246 0008   movw    r0, #24584  ; 0x6008
 80001e8:   2140        movs    r1, #64 ; 0x40
 80001ea:   f2c4 0000   movt    r0, #16384  ; 0x4000
 80001ee:   8001        strh    r1, [r0, #0]

This boils down to a u16 write of 0x0040 (64) to the address 0x4000_6008, which is the third 32 bit word in the CPU's view of the PMA memory space (where offset 4 is the third 16bit word), which is exactly what we'd expect to see.

We can, from here, build up some functions for manipulating a BTABLE, though the most useful ones for us to take a look at are the RX counter functions:

    pub fn get_rxcount(&self, ep: usize) -> u16 {
        self.get_u16(BTABLE + (ep * 8) + 6) & 0x3ff
    }

    pub fn set_rxcount(&self, ep: usize, val: u16) {
        assert!(val <= 1024);
        let rval: u16 = {
            if val > 62 {
                assert!((val & 0x1f) == 0);
                (((val >> 5) - 1) << 10) | 0x8000
            } else {
                assert!((val & 1) == 0);
                (val >> 1) << 10
            }
        };
        self.set_u16(BTABLE + (ep * 8) + 6, rval)
    }

The getter is fairly clean and clear, we need the BTABLE base in the PMA, add the address of the USB_COUNTn_RX entry to that, retrieve the u16 and then mask off the bottom ten bits since that's the size of the relevant field.

The setter is a little more complex, since it has to deal with the two possible cases, this isn't pretty and we might be able to write some better peripheral structs in the future, but for now, if the length we're setting is 62 or less, and is divisible by two, then we put a zero in the top bit, and the number of 2-byte lumps in at bits 14:10, and if it's 64 or more, we mask off the bottom to check it's divisible by 32, and then put the count (minus one) of those blocks in, instead, and set the top bit to mark it as such.

Fortunately, when we set constants, Rust's compiler manages to optimise all this very quickly. For a BTABLE at the bottom of the PMA, and an initialisation statement of:

    unsafe { (&*usb::pma::PMA.get()).set_rxcount(1, 64); }

then we end up with the simple instruction sequence:

80001e4:    f246 001c   movw    r0, #24604  ; 0x601c
80001e8:    f44f 4104   mov.w   r1, #33792  ; 0x8400
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]

We can decompose that into a C like *((u16*)0x4000601c) = 0x8400 and from there we can see that it's writing to the u16 at 0x1c bytes into the CPU's view of the PMA, which is 14 bytes into the PMA itself. Since we know we set the BTABLE at the start of the PMA, it's 14 bytes into the BTABLE which is firmly in the EP1 entries. Specifically it's USB_COUNT1_RX which is what we were hoping for. To confirm this, check out page 651 of the datasheet. The value set was 0x8400 which we can decompose into 0x8000 and 0x0400. The first is the top bit and tells us that BL_SIZE is one, and thus the blocks are 32 bytes long. Next, if we take the 0x0400 and shift it right ten places, we get the value 1 for the field NUM_BLOCK; remembering that the setter stored the count of 32-byte blocks minus one, that means two blocks of 32 bytes, giving the 64 bytes we asked it to set as the size of the RX buffer. It has done exactly what we hoped it would, but the compiler managed to optimise it into a single 16 bit store of a constant value to a constant location. Nice and efficient.

Finally, let's look at what happens if we want to write a packet into the PMA. For now, let's assume packets come as slices of u16s because that'll make our life a little simpler:

    pub fn write_buffer(&self, base: usize, buf: &[u16]) {
        for (ofs, v) in buf.iter().enumerate() {
            self.set_u16(base + (ofs * 2), *v);
        }
    }

Yes, even though we're deep in no_std territory, we can still get an iterator over the slice, and enumerate it, getting a nice iterator of (index, value) though in this case, the value is a ref to the content of the slice, so we end up with *v to deref it. I am sure I could get that automatically happening but for now it's there.

Amazingly, despite using iterators, enumerators, high level for loops, function calls, etc, if we pop:

    unsafe { (&*usb::pma::PMA.get()).write_buffer(0, &[0x1000, 0x2000, 0x3000]); }

into our main function and compile it, we end up with the instruction sequence:

80001e4:    f246 0000   movw    r0, #24576  ; 0x6000
80001e8:    f44f 5180   mov.w   r1, #4096   ; 0x1000
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]
80001f2:    f44f 5100   mov.w   r1, #8192   ; 0x2000
80001f6:    8081        strh    r1, [r0, #4]
80001f8:    f44f 5140   mov.w   r1, #12288  ; 0x3000
80001fc:    8101        strh    r1, [r0, #8]

which, as you can see, ends up being three sequential halfword stores directly to the right locations in the CPU's view of the PMA. You have to love seriously aggressive compile-time optimisation :-)

Hopefully, by next time, we'll have layered some more pleasant routines on our PMA code, and begun a foray into the setup necessary before we can begin handling interrupts and start turning up on a USB port.

30 August, 2017 04:15PM by Daniel Silverstone

Foteini Tsiami

Internationalization, part five: documentation and release!

LTSP Manager has just been announced in the ltsp-discuss mailing list! After long hours of testing and fixing issues that were related to the internationalization process, it’s now time to make it available to a broader audience. It’s currently available for Ubuntu and Debian via a PPA, and it will shortly get uploaded to Debian experimental.

We’ve documented all the necessary steps to install and maintain LTSP using LTSP Manager in the LTSP wiki: http://wiki.ltsp.org/wiki/Ltsp-manager. The initial documentation is deliberately concise so that new users can read all of it. We’ve also included best practices about user account management in schools etc.

This concludes all the tasks outlined by my Outreachy project, but of course not my involvement with LTSP Manager. I'll keep using it in my schools and be an active part of its ecosystem. Many thanks to Debian Outreachy and to my mentors for the opportunity to work on this excellent project!


30 August, 2017 07:27AM by fottsia

hackergotchi for Wouter Verhelst

Wouter Verhelst

Re-encoding DebConf17 videos

Feedback we received after DebConf17 was that the quality of the released videos was rather low. Since I'd been responsible for encoding the videos, I investigated that, and found that I had used the video bitrate defaults of ffmpeg, which are rather low. So, I read some more documentation, and came up with better settings for the encoding to be done properly.

While at it, I found that the reason that VP9 encoding takes forever, as I'd found before, has everything to do with the same low-bitrate defaults, and is not an inherent problem with VP9 encoding. In light of that, I ran a test encode of a video with both VP8 and VP9 encoding, and found that VP9 shaves off some of the file size for about the same quality, with little to no increase in encoder time.

Unfortunately, the initial VP9 encoder settings that I used produced files that would play in some media players and in some browsers, but not in all of them. A quick email to the WebM-discuss mailinglist got me a reply explaining the problem: our cameras produce YUV422-encoded video, which VP9 supports but some media players do not, while VP8 doesn't support that format at all. The result was VP9 files with YUV422 encoding, and VP8 ones with YUV420. The fix was to add a single extra command-line parameter to force the pixel format to YUV420, telling ffmpeg to downsample the video to that.
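
Concretely, the relevant ffmpeg option is -pix_fmt. A sketch of a VP9 encode forcing 4:2:0 chroma subsampling looks like this (the file names and bitrate are placeholders, not the values from our actual encoding scripts):

ffmpeg -i input.mkv \
    -c:v libvpx-vp9 -b:v 2M \
    -pix_fmt yuv420p \
    -c:a libopus \
    output.webm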

After adding another few parameters so as to ensure that some metadata would be added to the files as well, I've now told the system to re-encode the whole corpus of DebConf17 videos in both VP8 and VP9. It would appear that the VP9 files are, on average, about 1/4th to 1/3rd smaller than the equivalent VP8 files, with no apparent loss in quality.

Since some people might like them for whatever reason, I've not dropped the old lower-quality VP8 files, but instead moved them to an "lq" directory. They are usually about half the size of the VP9 files, but the quality is pretty terrible.

TLDR: if you downloaded videos from the debconf video archive, you may want to check whether it has been re-encoded yet (which would be the case if the state on that page is "done"), and if so, download them anew.

30 August, 2017 05:52AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.960.1.2

armadillo image

A second fix-up release is needed following the recent bi-monthly RcppArmadillo release and the initial follow-up, as it turns out that OS X / macOS is so darn special that it needs an entirely separate treatment for OpenMP. Namely, to turn it off entirely...

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 384 other packages on CRAN, an increase of 54 since the CRAN release in June!

Changes in RcppArmadillo version 0.7.960.1.2 (2017-08-29)

  • On macOS, OpenMP support is now turned off (#170).

  • The package is now compiling under the C++11 standard (#170).

  • The vignette dependency is correctly set (James and Dirk in #168 and #169)

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 August, 2017 02:20AM

August 29, 2017

hackergotchi for Colin Watson

Colin Watson

env --chdir

I was recently asked to sort things out so that snap builds on Launchpad could themselves install snaps as build-dependencies. To make this work we need to start doing builds in LXD containers rather than in chroots. As a result I’ve been doing some quite extensive refactoring of launchpad-buildd: it previously had the assumption that it was going to use a chroot for everything baked into lots of untested helper shell scripts, and I’ve been rewriting those in Python with unit tests and with a single Backend abstraction that isolates the high-level logic from the details of where each build is being performed.

This is all interesting work in its own right, but it’s not what I want to talk about here. While I was doing all this refactoring, I ran across a couple of methods I wrote a while back which looked something like this:

def chroot(self, args, echo=False):
    """Run a command in the chroot.

    :param args: the command and arguments to run.
    """
    args = set_personality(
        args, self.options.arch, series=self.options.series)
    if echo:
        print("Running in chroot: %s" %
              ' '.join("'%s'" % arg for arg in args))
        sys.stdout.flush()
    subprocess.check_call([
        "/usr/bin/sudo", "/usr/sbin/chroot", self.chroot_path] + args)

def run_build_command(self, args, env=None, echo=False):
    """Run a build command in the chroot.

    This is unpleasant because we need to run it in /build under sudo
    chroot, and there's no way to do this without either a helper
    program in the chroot or unpleasant quoting.  We go for the
    unpleasant quoting.

    :param args: the command and arguments to run.
    :param env: dictionary of additional environment variables to set.
    """
    args = [shell_escape(arg) for arg in args]
    if env:
        args = ["env"] + [
            "%s=%s" % (key, shell_escape(value))
            for key, value in env.items()] + args
    command = "cd /build && %s" % " ".join(args)
    self.chroot(["/bin/sh", "-c", command], echo=echo)

(I’ve already replaced the chroot method with a call to Backend.run, but it’s easier to see what I’m talking about in the original form.)

One thing to notice about this code is that it uses several adverbial commands: that is, commands that run another command in a different way. For example, sudo runs another command as another user, while chroot runs another command with a different root directory, and env runs another command with different environment variables set. These commands chain neatly, and they also have the useful property that they take the subsidiary command and its arguments as a list of arguments. coreutils has several other commands that behave this way, and adverbio is another useful example.

By contrast, su -c is something you might call a “quasi-adverbial” command: it does modify the behaviour of another command, but it takes it as a single argument which it then passes to sh -c. Every time you have something that’s passed to a shell like this, you need a corresponding layer of shell quoting to escape any shell metacharacters that should be interpreted literally. This is often cumbersome and is easy to get wrong. My Python implementation is as follows, and I wouldn’t be totally surprised to discover that it contained a bug:

import re

non_meta_re = re.compile(r'^[a-zA-Z0-9+,./:=@_-]+$')

def shell_escape(arg):
    if non_meta_re.match(arg):
        return arg
    else:
        return "'%s'" % arg.replace("'", "'\\''")

Python >= 3.3 has shlex.quote, which is an improvement and we should probably use that instead, but it’s still another thing to forget to call. This is why process-spawning libraries such as Python’s subprocess, Perl’s system and open, and my own libpipeline for C encourage programmers to use a list syntax and to avoid involving the shell entirely wherever possible.

One thing that the standard Unix tools don’t let you do in an adverbial way is to change your working directory, and I’ve run into this annoying limitation several times. This means that it’s difficult to chain that operation together with other adverbs, for example to run a command in a particular working directory inside a chroot. The workaround I used above was to invoke a shell that runs cd /build && ..., but that’s another command that’s only quasi-adverbial, since the extra shell means an extra layer of shell quoting.

(Ian Jackson rightly observes that you can in fact write the necessary adverb as something like sh -ec 'cd "$1"; shift; exec "$@"' chdir. I think that’s a bit uglier than I ideally want to use in production code, but you might reasonably think that it’s worth it to avoid the extra layer of shell quoting.)

I therefore decided that this was a feature that belonged in coreutils, and after a bit of mailing list discussion we felt it was best implemented as a new option to env(1). I sent a patch for this which has been accepted. This means that we have a new composable adverb, env --chdir=NEWDIR, which will allow the run_build_command method above to be rewritten as something like this:

def run_build_command(self, args, env=None, echo=False):
    """Run a build command in the chroot.

    :param args: the command and arguments to run.
    :param env: dictionary of additional environment variables to set.
    """
    env_args = ["env", "--chdir=/build"]
    if env:
        for key, value in env.items():
            env_args.append("%s=%s" % (key, value))
    self.chroot(env_args + args, echo=echo)

The env --chdir option will be in coreutils 8.28. We won’t be able to use it in launchpad-buildd until that’s available in all Ubuntu series we might want to build for, so in this particular application that’s going to take a few years; but other applications may well be able to make use of it sooner.
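As a hypothetical illustration (the chroot path, variable, and build command are made up), the new adverb chains cleanly with the others, with no intervening shell and hence no extra quoting layer:

# sudo changes user, chroot changes the root directory, and env both
# changes directory and sets environment variables for the final command.
$ sudo chroot /srv/build-chroot \
    env --chdir=/build DEB_BUILD_OPTIONS=parallel=4 \
    dpkg-buildpackage -us -uc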

29 August, 2017 11:54PM by Colin Watson

hackergotchi for Holger Levsen

Holger Levsen

20170829-qubes-os

"Qubes OS from the POV of a Debian developer" and "Qubes OS user meetup at Bornhack"

I wrote the following while on my way home from Bornhack, an awesome hacking camp on the Danish island of Bornholm where about 200 people gathered for a week, with a nice beach within walking distance (and a not-too-cold Baltic Sea) and vegan and not-so-vegan grills, to give some hints why it was awesome. (Actually it was mostly awesome due to the people there, not the things, but anyway…)

And there were of course also talks and workshops, one of which was a Qubes OS user meetup, which amazingly was attended by 20 Qubes OS users (and some Qubes OS users at the camp were even missing from it). In other words: Qubes OS had a >10% user base at this event! And I even found one heads user! ;-)

At DebConf17 I gave a talk titled "Qubes OS from the POV of a Debian developer", and while the video was immediately available (thanks to the DebConf video team!), I've now also put my slides for it online.

Since then, I learned a few things:

  • I should have mentioned standalone VMs (i.e. non-template-based VMs) in the talk, as those are also possible, and sometimes handy.
  • IPv6 works with manual fiddling… (but this is undocumented)
  • I’ve given Salt (for configuration management) a short try (with the help of an experienced Salt user, thanks nodens!) and I must say I’m not impressed yet. But Qubes 4.0 will bring huge changes to this area (including the introduction of an AdminVM), so while I won’t use Salt for as long as I’m still on Qubes 3.2, I will come back and look at this later.
  • after adding a single line containing "iwldvm iwlwifi" to /rw/config/suspend-module-blacklist in the NetVM, the wireless comes back nicely (on my X230) after suspend/resume; see the snippet below.
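For reference, a minimal sketch of that change as run inside the NetVM (appending with tee is just one way to do it):

$ echo "iwldvm iwlwifi" | sudo tee -a /rw/config/suspend-module-blacklist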

I'm looking forward to more Qubes OS user meetups in future!

29 August, 2017 10:46PM

Carl Chenet

Send scheduled messages to both Twitter and Mastodon with the Remindr bot

Do you need to send messages to both Twitter and Mastodon? Use the Remindr bot! Remindr is written in Python, released under the GPLv3 license.

1. How to install Remindr

Install Remindr from PyPI:

# pip3 install remindr

2. How to configure Remindr

First, start by writing a messages.txt file with the following content:

o en Send scheduled messages to both Twitter and Mastodon with the Remindr bot https://carlchenet.com/send-scheduled-messages-to-both-twitter-and-mastodon-with-the-remindr-bot #remindr #twitter #mastodon
x en Follow Carl Chenet for great news about Free Software! https://carlchenet.com #freesoftware

The first field indicates whether the line is the next one to be considered by Remindr: ‘o’ marks the next line to be sent, while ‘x’ means the line won’t be sent. The second field is the two-letter language code of your content, in my example en or fr. The rest of the line composes the body of your messages to Mastodon and Twitter.

You need to configure the Mastodon and the Twitter credentials in order to allow Remindr to send the messages. First you need to generate the credentials. For Twitter, you need to manually create an app on apps.twitter.com. For Mastodon, just launch the following command:

$ register_remindr_app

The command will ask you for some information. At the end, two files are created, remindr_usercred.txt and remindr_clientcred.txt. You’re going to need them for the Remindr configuration below.

For the Remindr configuration itself, here is a complete example:

[mastodon]
instance_url=https://mastodon.social
user_credentials=remindr_usercred.txt
client_credentials=remindr_clientcred.txt

[twitter]
consumer_key=a6lv2gZxkvk6UbQ30N4vFmlwP
consumer_secret=j4VxM2slv0Ud4rbgZeGbBzPG1zoauBGLiUMOX0MGF6nsjcyn4a
access_token=1234567897-Npq5fYybhacYxnTqb42Kbb3A0bKgmB3wm2hGczB
access_token_secret=HU1sjUif010DkcQ3SmUAdObAST14dZvZpuuWxGAV0xFnC

[image]
path_to_image=/home/chaica/blog-carl-chenet.png

[entrylist]
path_to_list=/etc/remindr/messages.txt

Your configuration is complete! Now we have to check if everything is fine.

Read the full documentation on Readthedocs.

3. How to use Remindr

Now let’s try your configuration by launching Remindr by hand for the first time:

$ remindr -c /etc/remindr/remindr.ini

The messages should appear on both Twitter and Mastodon.

4. How to schedule the Remindr execution

The easiest way is to use your user crontab. Just add the following line to your crontab file, editing it with crontab -e:

00 10 * * * remindr -c /etc/remindr/remindr.ini

From now on, your message will be sent every day at 10:00 AM.

Going further with Remindr

… and finally

You can help me develop tools for Mastodon and other social networks by donating anything through Liberapay (also possible with cryptocurrencies). Any contribution will be appreciated. That’s a big motivation factor 😉
Donate

You also may follow my account @carlchenet on Mastodon 😉

29 August, 2017 10:00PM by Carl Chenet

Jeremy Bicha

GNOME Tweaks 3.25.91

The GNOME 3.26 release cycle is in its final bugfix stage before release.

Here’s a look at what’s new in GNOME Tweaks since my last post.

I’ve heard people say that GNOME likes to remove stuff. If that were true, how would there be anything left in GNOME? But maybe it’s partially true. And maybe it’s possible for removals to be a good thing?

Removal #1: Power Button Settings

The Power page in Tweaks 3.25.91 looks a bit empty. In previous releases, the Tweaks app had a “When the Power button is pressed” setting that nearly duplicated the similar setting in the Settings app (gnome-control-center). I worked to restore support for “Power Off” as one of its options. Since this is now in Settings 3.25.91, there’s no need for it to be in Tweaks any more.

Removal #2: Hi-DPI Settings

GNOME Tweaks offered a basic control to scale windows 2x for Hi-DPI displays. More advanced support is now in the Settings app. I suspect that fractional scaling won’t be supported in GNOME 3.26 but it’s something to look forward to in GNOME 3.28!

Removal #3: Global Dark Theme

I am announcing today that one of the oldest and most popular tweaks will be removed from Tweaks 3.28 (to be released next March). Global Dark Theme is being removed because:

  • Changing the Global Dark Theme option required closing any currently running apps and reopening them to get the correct theme.
  • It didn’t work for sandboxed apps (Flatpak and Snap).
  • It only worked for gtk3 apps (it can’t work on gtk2 apps).
  • Some themes never supported a dark variant; the switch wouldn’t do anything at all with a theme like that.

Adwaita now has a separate Adwaita Dark theme. Arc has 2 different dark variations.

Therefore, if you are a theme developer, you have about 6-7 months to offer a dark version of your theme. The dark version can be distributed the same way as your regular version.

Removal #4: Some letters from our name

In case you haven’t noticed, GNOME Tweak Tool is now GNOME Tweaks. This better matches the GNOME app naming style. Thanks Alberto Fanjul for this improvement!

For other details of what’s changed including a helpful scrollbar fix from António Fernandes, see the NEWS file.

29 August, 2017 09:52PM by Jeremy Bicha

hackergotchi for Sean Whitton

Sean Whitton

Nourishment

This semester I am taking JPN 530, “Haruki Murakami and the Literature of Modern Japan”. My department are letting me count it for the Philosophy Ph.D., and in fact my supervisor is joining me for the class. I have no idea what the actual class sessions will be like—first one this afternoon—and I’m anxious about writing a literature term paper. But I already know that my weekends this semester are going to be great because I’ll be reading Murakami’s novels.

What’s particularly wonderful about this, and what I wanted to write about, is how nourishing I find reading literary fiction to be. For example, this weekend I read

This was something sure to be crammed full of warm secrets, like an antique clock built when peace filled the world. … Potentiality knocks at the door of my heart.

and I was fed for the day. All my perceived needs dropped away; that’s all it takes. This stands in stark contrast to reading philosophy, which is almost always draining rather than nourishing—even philosophy I really want to read. Especially having to read philosophy at the weekend.

(quotation is from On Seeing the 100% Perfect Girl on a Beautiful April Morning)

29 August, 2017 03:35PM

Reproducible builds folks

Reproducible Builds: Weekly report #122

Here's what happened in the Reproducible Builds effort between Sunday August 20 and Saturday August 26 2017:

Debian development

  • "Packages should build reproducibly" was released in Debian Policy 4.1.0.0. For more background please see last week's post.
  • A patch by Chris Lamb to make Dpkg::Substvars warnings output deterministic was merged by Guillem Jover. This helps the Reproducible Builds effort as it removes unnecessary differences in logs of two package builds. (#870221)

Packages reviewed and fixed, and bugs filed

Forwarded upstream:

Accepted reproducibility NMUs in Debian:

Other issues:

Reviews of unreproducible packages

16 package reviews have been added, 38 have been updated and 48 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (37)
  • Dmitry Shachnev (1)
  • James Cowgill (1)

diffoscope development

disorderfs development

Version 0.5.2-1 was uploaded to unstable by Ximin Luo. It included contributions from:

reprotest development

Misc.

This week's edition was written — in alphabetical order — by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

29 August, 2017 03:13PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSMC 0.2.0

A new version 0.2.0 of the RcppSMC package arrived on CRAN earlier today (as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article.

This release 0.2.0 is chiefly the work of Leah South, a Ph.D. student at Queensland University of Technology, who during the last few months was a Google Summer of Code student mentored by Adam and myself. It was a pleasure to work with Leah on this and to see her progress. Our congratulations to Leah for a job well done!

Changes in RcppSMC version 0.2.0 (2017-08-28)

  • Also use .registration=TRUE in useDynLib in NAMESPACE

  • Multiple Sequential Monte Carlo extensions (Leah South as part of Google Summer of Code 2017)

    • Switching to population level objects (#2 and #3).

    • Using Rcpp attributes (#2).

    • Using automatic RNGscope (#4 and #5).

    • Adding multiple normalising constant estimators (#7).

    • Static Bayesian model example: linear regression (#10 addressing #9).

    • Adding a PMMH example (#13 addressing #11).

    • Framework for additional algorithm parameters and adaptation (#19 addressing #16; also #24 addressing #23).

    • Common adaptation methods for static Bayesian models (#20 addressing #17).

    • Supporting MCMC repeated runs (#21).

    • Adding adaptation to linear regression example (#22 addressing #18).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 August, 2017 02:38AM

August 28, 2017

hackergotchi for Norbert Preining

Norbert Preining

Gaming: The Long Dark – Wintermute Episode 1

One month ago, the Story Mode for The Long Dark was released. I finally managed to finish the first of its two chapters, and it was worth the wait.

I have played many hours in the Sandbox mode, but while the first days in story mode feel like holding a kid's hand, on the fifth day you are kicked into a brutal void with lots, and I mean *lots*, of wolves more than eager to devour you.

Targeting new players, the first four days are just practicing basic techniques: fire making, water, food, first aid, collecting materials. Finally one is allowed to leave the plane's crash site and climb out into a mountainous area, searching for shelter and, in the end, for Milton, a deserted village. But the moment one reaches a street, wolves appear again and again, and often the only way is to run from car to car and hide there, hoping not to freeze to death. It took me several tries to make it to the church and then into town to meet the Grey Mother.

The rest of episode one is dedicated to various tasks set by the Grey Mother, plus some additional (optional) side quests. And while the first encounter with the wolves was pretty grim, in the later parts I had the feeling that they became a bit easier to deal with. This might be related to one of the many patches (8 so far) that were shipped this month.

After having finished all the quests, including the side quests (and having found the very useful distress pistol), I made my way out of Milton, probably not to be seen again?! And while all the quests were finished, I had the feeling I could spend a bit more time exploring the surroundings; maybe something is still hiding out there. But shouldering the climbing rope and climbing out of Milton leads to a very beautiful last stretch, a canyon lined with waterfalls leading to a cave and on to the next episode.

Now I only need more time to play Episode 2.

28 August, 2017 05:50AM by Norbert Preining

John Goerzen

The Joy of Exploring: Old Phone Systems, Pizza, and Discovery

This story involves boys pretending to be pizza deliverymen using a working automated Strowger telephone exchange demonstrator on display in a museum, which is very old and is, to my knowledge, the only such working exhibit in the world. (Yes, I have video.) But first, a thought on exploration.

There are those that would say that there is nothing left to explore anymore – that the whole earth is mapped, photographed by satellites, and, well, known.

I prefer to look at it a different way: the earth is full of places that billions of people will never see, and probably don’t even know about. Those places may be quiet country creeks, peaceful neighborhoods one block away from major tourist attractions, an MTA museum in Brooklyn, a state park in Arkansas, or a beautiful church in Germany.

Martha is not yet two months old, and last week she and I spent a surprisingly long amount of time just gazing at tree branches — she was mesmerized, and why not, because to her, everything is new.

As I was exploring in Portland two weeks ago, I happened to pick up a nearly-forgotten book by a nearly-forgotten person, Beryl Markham, a woman who was a pilot in Africa about 80 years ago. The passage that I happened to randomly flip to in the bookstore, which really grabbed my attention, was this:

The available aviation maps of Africa in use at that time all bore the cartographer’s scale mark, ‘1/2,000,000’ — one over two million. An inch on the map was about thirty-two miles in the air, as compared to the flying maps of Europe on which one inch represented no more than four air miles.

Moreover, it seemed that the printers of the African maps had a slightly malicious habit of including, in large letters, the names of towns, junctions, and villages which, while most of them did exist in fact, as a group of thatched huts may exist or a water hold, they were usually so inconsequential as completely to escape discovery from the cockpit.

Beyond this, it was even more disconcerting to examine your charts before a proposed flight only to find that in many cases the bulk of the terrain over which you had to fly was bluntly marked: ‘UNSURVEYED’.

It was as if the mapmakers had said, “We are aware that between this spot and that one, there are several hundred thousands of acres, but until you make a forced landing there, we won’t know whether it is mud, desert, or jungle — and the chances are we won’t know then!”

— Beryl Markham, West With the Night

My aviation maps today have no such markings. The continent is covered with radio beacons, the world with GPS, the maps with precise elevations of the ground and everything from skyscrapers to antenna towers.

And yet, despite all we know, the world is still a breathtaking adventure.

Yesterday, the boys and I were going to fly to Abilene, KS, to see a museum (Seelye Mansion). Circumstances were such that we neither flew, nor saw that museum. But we still went to Abilene, and wound up at the Museum of Independent Telephony, a wondrous place for anyone interested in the history of technology. As it is one of those off-the-beaten-path sorts of places, the boys got 2.5 hours to use the hands-on exhibits of real old phones, switchboards, and also the schoolhouse out back. They decided — why not? — to use this historic equipment to pretend to order pizzas.

Jacob and Oliver proceeded to invent all sorts of things to use the phones for: ordering pizza, calling the cops to chase the pizza delivery guys, etc. They were so interested that by 2PM we still hadn’t had lunch and they claimed “we’re not hungry” despite the fact that we were going to get pizza for lunch. And I certainly enjoyed the exhibits on the evolution of telephones, switching (from manual plugboards to automated switchboards), and such.

This place was known – it even has a website, I had been there before, and in fact so had the boys (my parents took them there a couple of years ago). But yesterday, we discovered the Strowger switch had been repaired since the last visit, and that it, in fact, is great for conversations about pizza.

Whether it’s seeing an eclipse, discovering a fascination with tree branches, or historic telephones, a spirit of curiosity and exploration lets a person find fun adventures almost anywhere.

28 August, 2017 01:54AM by John Goerzen