Fedora People

Talk on Fedora 31 and 16 Years of Evolution

Posted by Bernardo C. Hermitaño Atencio on October 26, 2019 05:43 AM

On October 23rd, at the Instituto de Educación Superior Tecnológico Público Manuel Seoane Corrales, located in the San Juan de Lurigancho district of Lima, Peru, the technical week took place, with a program of activities including talks, workshops, and an exhibition of projects developed at the institution.

The program included my talk, titled “Fedora 31 y 16 años de evolución” (Fedora 31 and 16 years of evolution), which covered the following content: the software license, the releases since the project started in 2003, Red Hat's sponsorship, Fedora Workstation, Fedora Server, emerging editions such as CoreOS, Silverblue, and Fedora IoT, Fedora Spins, Fedora Labs, download alternatives, Fedora for ARM processors, the Fedora community, and the new features in Fedora 31 Beta.

During the talk I had the opportunity to interact with the participants and give away some stickers left over from previous events. Admission to the activities was free, and attendance was approximately 35 people, between teachers and students. At the end of the talk I answered some questions, and afterwards had a brief get-together with colleagues, a reunion after almost two years.

New badge: I Voted: Fedora 31 !

Posted by Fedora Badges on October 25, 2019 10:00 PM
I Voted: Fedora 31
Participated in the Fedora 31 Elections!

New badge: F31 i18n Test Day Participant !

Posted by Fedora Badges on October 25, 2019 09:54 PM
F31 i18n Test Day Participant
You helped to test Fedora 31 i18n features

FPgM report: 2019-43

Posted by Fedora Community Blog on October 25, 2019 06:57 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. Fedora 31 RC1.9 is GO and will release on Tuesday 29 October. We are currently under the Final freeze.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Help wanted

Upcoming meetings

Fedora 31

Schedule

  • 29 October — Final release target #1

Fedora 32

Changes

Announced

Submitted to FESCo

Approved by FESCo

The post FPgM report: 2019-43 appeared first on Fedora Community Blog.

New badge: Ohio LinuxFest 2019 !

Posted by Fedora Badges on October 25, 2019 11:59 AM
Ohio LinuxFest 2019
You visited Fedora at Ohio LinuxFest 2019!

ATO 2019 - Inclusion event (a report)

Posted by Susan Lauber on October 25, 2019 11:01 AM
This was the second year that ATO hosted a pre-conference track on diversity and inclusion. It was a sold out event with a free but separate registration (for booking, budgets, and accounting). I attended last year as well.

As I began writing up this report, I noticed the title of the event does not include the word diversity. According to the wayback machine, the main title was the same last year but it felt like the word diversity was included in most of the promotion of the event. Last year did have "A Conversation" as part of the title and incorporated much discussion on the definitions and differences in diversity, inclusion, and equity. This year the title was simply Inclusion in Open Source & Technology [1] and the presentations had a lot more actionable examples of how a project, organization, team, or individual can be more inclusive.

I really like the format of this event. They have a series of short talks, which this year were basically people's stories of how they felt included, or of actions they thought there should be more of so that others feel more included. Later there is a Q&A session for everyone to further explore these topics and suggestions.

This year also included a screening of the second episode of the Chasing Grace Project and a Q&A with the producer. I cannot seem to remember which event I was at when I had the opportunity to screen the first episode. I am looking forward to the complete series being available to a wider audience.

Last year I remember feeling a mix of depression and optimism. There were a lot of examples showing how those paying attention have expanded the types of diversity beyond gender and race, and how many opportunities do exist. There were also a lot of stats showing how slowly progress is happening and where it is even going backwards. In many ways I felt like I was hearing the same things I've heard all my life, and that is a tiring thought.

This year was, at least for me, a lot more positive. I think mostly because the discussions were not so much around statistics and abstract items which still need to be done, but rather a lot of examples of activities that have helped and could help:

  • The young high school student asked for more everyday role models, like parents and teachers sponsoring club activities. Representation at the C-level is important, but not as important as having someone in the room learning technology alongside the students.
  • The older but not ready to retire gentleman reminded people that, having had to change technologies so many times, older people bring a lot of experience and can still learn new things, sometimes even learning faster. Most of us also accept (even enjoy) being managed by more youthful enthusiasm, as long as we are not just dismissed as dinosaurs.
  • The consultants that help D&I committees proactively create company communities along with networking and educational opportunities.
  • The examples of how to reach out of your comfort bubble, grow your own network, and be an ally.
I came away reminded that I am where I am, and still an Open Source consultant and educator, because of the welcoming and supportive people I have gotten to work with. People who treat other people as people. People who can work as part of a team. People who want to do the right thing and give the right people the credit they deserve. These people were rarely official mentors, and many have never thought of themselves as an ally, but by being good humans, they were an ally to me.

The little things matter. They matter when they produce the thousand paper cuts that drive people away. They matter when they appear from an ally and encourage inclusion.

-SML


[1] Note: at the time of writing the URL for this event was for the current year. At some time in the future it may be replaced with the next year details. I do not know if it will be archived. I was able to submit the page to the wayback machine.

4 cool new projects to try in COPR for October 2019

Posted by Fedora Magazine on October 25, 2019 08:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.

Nu

Nu, or Nushell, is a shell inspired by PowerShell and modern CLI tools. Using a structured-data based approach, Nu makes it easy to work with commands that output data, piping it through other commands. The results are then displayed in tables that can be sorted or filtered easily, and may serve as inputs for further commands. Finally, Nu provides several built-in commands, multiple shells, and support for plugins.
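
For instance, a short pipeline might look like this (a minimal sketch; exact command names can vary between Nu releases): list the files, sort the resulting table by size, and show the largest first:

ls | sort-by size | reverse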

Installation instructions

The repo currently provides Nu for Fedora 30, 31 and Rawhide. To install Nu, use these commands:

sudo dnf copr enable atim/nushell
sudo dnf install nushell

NoteKit

NoteKit is a program for note-taking. It supports Markdown for formatting notes, and the ability to create hand-drawn notes using a mouse. In NoteKit, notes are sorted and organized in a tree structure.

Installation instructions

The repo currently provides NoteKit for Fedora 29, 30, 31 and Rawhide. To install NoteKit, use these commands:

sudo dnf copr enable lyessaadi/notekit
sudo dnf install notekit

Crow Translate

Crow Translate is a program for translating text. It can speak both the input and the result, and offers a command-line interface as well. For translation, Crow Translate uses the Google, Yandex, or Bing translate APIs.

Installation instructions

The repo currently provides Crow Translate for Fedora 30, 31 and Rawhide, and for EPEL 8. To install Crow Translate, use these commands:

sudo dnf copr enable faezebax/crow-translate
sudo dnf install crow-translate

dnsmeter

dnsmeter is a command-line tool for testing the performance of a nameserver and its infrastructure. For this, it sends DNS queries and counts the replies, measuring various statistics. Among other features, dnsmeter can use different load steps, replay payloads from PCAP files, and spoof sender addresses.

Installation instructions

The repo currently provides dnsmeter for Fedora 29, 30, 31 and Rawhide, and EPEL 7. To install dnsmeter, use these commands:

sudo dnf copr enable @dnsoarc/dnsmeter
sudo dnf install dnsmeter

PHP version 7.1.33, 7.2.24 and 7.3.11

Posted by Remi Collet on October 25, 2019 05:54 AM

RPMs of PHP version 7.3.11 are available in the remi repository for Fedora 30-31 and in the remi-php73 repository for Fedora 29 and Enterprise Linux ≥ 6 (RHEL, CentOS).

RPMs of PHP version 7.2.24 are available in the remi repository for Fedora 29 and in the remi-php72 repository for Enterprise Linux ≥ 6 (RHEL, CentOS).

RPMs of PHP version 7.1.33 are available in the remi-php71 repository for Enterprise Linux (RHEL, CentOS).

PHP version 5.6 and version 7.0 have reached their end of life and are no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as a module for Fedora 29-31 and EL-8.

These versions fix a security bug, so updating is strongly recommended.

Version 7.1 being close to its end of life (December 2019), an upgrade to a newer version is recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.2
dnf update php\*

Parallel installation of version 7.2 as Software Collection

yum install php72

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.0
  • EL-7 RPMs are built using RHEL-7.7
  • EL-6 RPMs are built using RHEL-6.10
  • EL-7 builds now use libicu62 (version 62.1)
  • EL builds now use oniguruma5 (version 6.9.3, instead of the bundled copy)
  • the oci8 extension now uses Oracle Client version 19.3 (except on EL-6)
  • a lot of new extensions are also available; see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php71 / php72 / php73)

How to prevent an old kernel (or another package) from being removed or updated on Fedora

Posted by Robbi Nespu on October 25, 2019 03:44 AM

We have three options here: exclude the package, lock its version, or block it from being replaced or updated with a more recent version.

1. Configuration file exclude

The method is simple. Open /etc/dnf/dnf.conf as root and add the exclude= parameter. For example, here is my current configuration file:

$ cat /etc/dnf/dnf.conf 
[main]
gpgcheck=1
installonly_limit=5
clean_requirements_on_remove=True
fastestmirror=true
deltarpm=true
exclude=kernel-5.2.17-200.fc30, kernel-core-5.2.17-200.fc30, kernel-devel-5.2.17-200.fc30, kernel-modules-5.2.17-200.fc30, kernel-modules-extra-5.2.17-200.fc30

Package names are separated by commas. Shell globs using wildcards (e.g. * and ?) are allowed.
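
For example, a single glob can cover all the kernel subpackages listed above:

exclude=kernel*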

2. Locking the package version with versionlock

You need to install the versionlock plugin before you can use it; simply run sudo dnf install 'dnf-command(versionlock)' in your terminal.

Then you can lock a specific package that is already installed on your system. For example:

$ sudo dnf versionlock add kernel-5.2.17-200.fc30

If you want to remove the locked version, use the delete option:

$ sudo dnf versionlock delete kernel-5.2.17-200.fc30
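
To review which locks are currently in place, the plugin also provides a list subcommand:

$ sudo dnf versionlock list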

3. DNF update command line flag --exclude

The update command accepts an --exclude flag naming packages to skip during the update. For example, you can run the following commands:

$ sudo dnf update --exclude=firefox
$ sudo dnf update --exclude=kernel*

You can use wildcards to match several packages at once and keep them from being replaced or updated.

WHY??

Sometimes we need kernel modules compiled against a specific kernel, for example for Oracle VirtualBox, which may require an older kernel than the most recent one on Fedora. In other cases, some people want to keep old, outdated software because it works fine compared to the newer version.

Anyway, this is just a personal reference. You need to be cautious about the risks of using an outdated kernel or package.

ATO 2019 - an event report

Posted by Susan Lauber on October 24, 2019 05:28 PM
ATO 2019 was a good year.

For a number of years now, each October, thousands of technical folks converge in Raleigh for All Things Open. The "all things" includes a lot of developers talking about opensource platforms, tools, stacks, and applications but it also includes topics on open hardware, open government, open education, and building communities in addition to projects and products.

For a couple of years, I felt there was too much of a programmer focus for me and I wasn't finding new things in the community tracks. It is local, though, and so with expectations set, I continued to support a great conference and enjoy the hallway track with a number of people I "see" mostly online, even though I was not previously finding a lot of talks for my sysadmin or infosec interests.

I know several local people who have not attended the past couple of years because of this trend, and I bring it up because this year was a bit different. While I attended expecting to once again find content either repetitive (of other years and other conferences) or too dev focused, I was pleasantly surprised. There were full tracks both days for Security and Linux/Infrastructure. [1]

I attended a few of the security sessions, two that stood out were:

Prepping for the Zero Day Attack 
Eric Starr discussed a CI/CD pipeline that includes checking for vulnerabilities with both source code analysis and container scanning. He shared experiences where unit tests were disabled "to speed up the deployments", which later turned into disasters. He was practical in his approach where some of the scans take hours to run: if the deployment or test cycle is shorter than a day, maybe those scans get run daily instead of with each change, but do NOT eliminate them just because they take too long! He mentioned tools that work for his project but regularly pointed out what type of tool it was and that the specific tool used is not important. I would add that the best or right tool is any one you will use, though you may be limited by what will work in your environment.

Insecurities and Vulnerabilities: How to Keep the National Vulnerability Database Current
I really enjoyed this one! Rob Tompkins shared his experience reporting CVEs as part of an open source project security team. When I teach about tools such as OpenSCAP and Red Hat Insights, which include information from the NVD and then suggest remediations, it is helpful to understand how the information gets into this database. This example, along with a talk from OSCON years ago about reporting embargoed security issues, helps me also explain how an administrator should go about reporting a suspected vulnerability with correct documentation. This is a topic I am now adding to my "write an article on this" list.

Next door, in the Linux/Infrastructure room, by title I would be interested in Getting Started with Flatpak and possibly Platform Agnostic and Self Organizing Software Packages. Also the What You Most Likely Did Not Know About Sudo… and maybe the Terminal Velocity: Work faster in your shell talks.

With these tracks, I would encourage a few of my more "Ops" friends to rethink attending this conference, especially if they are local to the area. I also have some new ideas for articles to write and possible presentations at future events.

Oh, they also have great book signings scattered across both days!

-SML

[1] Note: at the time of writing the URLs for the tracks were for the current year. At some time in the future these will be replaced with the next year tracks. I do not know if they will be archived. I was able to submit the parent tracks page for the wayback machine.

Deploying and enriching CentOS 8

Posted by Didier Fabert (tartare) on October 24, 2019 03:00 PM

With the release of CentOS 8, I needed to deploy the distribution easily and to build personal packages to enrich it. Deployment will be done through an existing Cobbler server, and RPM building with an already operational Koji system.

Cobbler

Chances are that your Cobbler does not know about CentOS 8 yet, so let's introduce them…

Download the DVD ISO and mount it to make its content accessible to Cobbler:

sudo mount -o ro CentOS-8-x86_64-1905-dvd1.iso /mnt/centos

Update the signature list:

cobbler signature update

Check that the new distribution is present:

cobbler signature report --name=redhat | grep rhel8
        rhel8

Import the DVD:

cobbler import --path=/mnt/centos --name=CentOS-8-x86_64

Declare our second profile, for the graphical installation:

cobbler profile add --name=CentOS-8-x86_64-Desktop --distro=CentOS-8-x86_64 --kickstart=/var/lib/cobbler/kickstarts/sample_end.ks --virt-file-size=12 --virt-ram=2048
cobbler profile edit --name CentOS-8-x86_64-Desktop --ksmeta="type=desktop"

Synchronize:

cobbler sync

Check the kickstart file associated with our new distribution:

cobbler profile getks --name=CentOS-8-x86_64

Tip

If you use my Docker image, created for the article Dockerisation du service cobbler, there are a few changes to make for CentOS 8:

  • The import is done with the command
    cobbler import --path=/mnt --name=CentOS-8-x86_64
  • Modify the partitioning snippet so that a 1 GB /boot partition is created by default (/var/lib/cobbler/snippets/partition_config).
    Indeed, if the distribution is CentOS 6 the /boot partition will be 200 MB, and for CentOS 7 it will be 500 MB, but nothing was planned for the other cases.

    diff --git a/snippets/partition_config b/snippets/partition_config
    index 964c122..07b8918 100644
    --- a/snippets/partition_config
    +++ b/snippets/partition_config
    @@ -8,6 +8,8 @@ clearpart --all
     part /boot --size=200 --recommended --asprimary
     #else if $el_version == "rhel7"
     part /boot --size=500 --recommended --asprimary
    +#else
    +part /boot --size=1024 --recommended --asprimary
     #end if
     part pv.01 --size=1024 --grow
     volgroup vg0 pv.01
    
  • Modify the Dockerfile so that it updates the /var/lib/cobbler/snippets/func_install_if_enabled file:
    diff --git a/Dockerfile b/Dockerfile
    index b533575..f410a4a 100644
    --- a/Dockerfile
    +++ b/Dockerfile
    @@ -67,7 +67,7 @@ RUN for kickstart in sample sample_end legacy ; \
         done
     
     # Install vim-enhanced by default and desktop packages if profile have el_type set to desktop (ksmeta)
    -RUN echo -e "@core\n\nvim-enhanced\n#set \$el_type = \$getVar('type', 'minimal')\n#if \$el_type == 'desktop'\n@base\n@network-tools\n@x11\n@graphical-admin-tools\n#set \$el_version = \$getVar('os_version', None)\n#if \$el_version == 'rhel6'\n@desktop-platform\n@basic-desktop\n#else if \$el_version == 'rhel7'\n@gnome-desktop\n#end if\n#end if\nkernel" >> /var/lib/cobbler/snippets/func_install_if_enabled
    +RUN echo -e "@core\nvim-enhanced\n#set \$el_type = \$getVar('type', 'minimal')\n#if \$el_type == 'desktop'\n@base\n@network-tools\n@graphical-admin-tools\n#set \$el_version = \$getVar('os_version', None)\n#if \$el_version == 'rhel6'\n@x11\n@desktop-platform\n@basic-desktop\n#else if \$el_version == 'rhel7'\n@x11\n@gnome-desktop\n#else if \\$el_version == 'rhel8'\n@graphical-server-environment\n#end if\n#end if\nkernel" >> /var/lib/cobbler/snippets/func_install_if_enabled
     
     COPY first-sync.sh /usr/local/bin/first-sync.sh
     COPY entrypoint.sh /entrypoint.sh
    
    

Koji

Adding the tags

koji add-tag centos-8
koji add-tag --parent centos-8 --arches 'x86_64' centos-8-build

Defining the groups

koji add-group centos-8-build build
koji add-group centos-8-build srpm-build

Adding the external repositories

koji add-external-repo -t centos-8-build -p 10 centos-8-external-baseos http://mirrors.ircam.fr/pub/CentOS/8/BaseOS/\$arch/os
koji add-external-repo -t centos-8-build -p 11 centos-8-external-appstream http://mirrors.ircam.fr/pub/CentOS/8/AppStream/\$arch/os
koji add-external-repo -t centos-8-build -p 15 centos-8-external-powertools http://mirrors.ircam.fr/pub/CentOS/8/PowerTools/\$arch/os
koji add-external-repo -t centos-8-build -p 20 centos-8-external-extra http://mirrors.ircam.fr/pub/CentOS/8/extras/\$arch/os
koji add-external-repo -t centos-8-build -p 15 centos-8-external-epel http://mirrors.ircam.fr/pub/fedora/epel/8/Everything/\$arch

Adding the target

koji add-target centos-8 centos-8-build

Adding packages to our groups

  • build
    koji add-group-pkg centos-8-build build bash bash bzip2 coreutils cpio diffutils \
    findutils gawk gcc grep sed gcc-c++ gzip info patch redhat-rpm-config \
    rpm-build shadow-utils tar unzip util-linux which make \
    redhat-release centos-release xz
  • srpm-build
    koji add-group-pkg centos-8-build srpm-build bash \
    redhat-release centos-release make redhat-rpm-config rpm-build shadow-utils

Force the regeneration of the repository:

koji regen-repo centos-8-build

All that is left is to add our packages and let the magic happen:

koji add-pkg --owner tartare centos-8 pkbs-release

Python function to generate Tor v3 onion service authentication keys

Posted by Kushal Das on October 24, 2019 04:39 AM

Here is a small Python function using the amazing Python Cryptography module to generate Tor v3 onion service authentication keys.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import x25519
import base64

def generate_tor_v3_keys():
    "Generates public, private keypair"
    private_key = x25519.X25519PrivateKey.generate()
    private_bytes = private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption())
    public_key = private_key.public_key()
    public_bytes = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw)
    public = base64.b32encode(public_bytes).replace(b'=', b'') \
                       .decode("utf-8")
    private = base64.b32encode(private_bytes).replace(b'=', b'') \
                        .decode("utf-8")
    return public, private
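
A quick usage sketch follows; the comments about where each key goes reflect the usual Tor v3 client authorization layout, so verify them against the Tor manual for your setup:

public, private = generate_tor_v3_keys()
# the public part goes into the onion service's authorized_clients/<name>.auth file
print("public :", public)
# the private part goes into the client's ClientOnionAuthDir <name>.auth_private file
print("private:", private)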

You can follow my previous blog post to setup an authenticated Onion service.

Capslock and keyboard layout indicator for plymouth's diskcrypt password screen

Posted by Hans de Goede on October 23, 2019 06:24 PM
As some of you running Fedora 31 may already have noticed, I have some good news to share. As part of my recent work on plymouth I've implemented a feature request which was first filed in 2012: support for an indicator that capslock is active while entering the disk unlock password on machines using full disk encryption. Besides the capslock indicator, I've also added support for an indicator of the configured keyboard layout, since this sometimes also causes confusion:



And here is what it looks like when capslock is pressed:



If you're running Fedora 31 with full disk encryption then you may notice that the above screenshots are slightly different from what you have now. I've pushed an update to Fedora 31 updates-testing today which implements a few small tweaks to the theme after feedback from the design team on the initial version. For those of you still on Fedora 30, this is coming to Fedora 30 too; it should show up in updates-testing with the next updates push.

FFI extension usage with PHP 7.4

Posted by Remi Collet on October 23, 2019 01:09 PM

The FFI extension (Foreign Function Interface) gives access to features of system libraries directly from PHP, without the need for an additional extension.

Here are some examples, the results of my tests of this extension.

1. Preloading

Another new feature of PHP 7.4 is the ability to preload some classes, which then become usable like internal classes of the language or of an extension.

  • File with the class definition: preload-foo.inc
  • Test file checking that the class exists: foo.php

Usage:

$ php -dopcache.preload=preload-foo.inc foo.php
Class Remi\Foo exists

So we'll use this feature with FFI.

2. ZSTD compression

Zstandard is a well known and efficient compression algorithm. The libzstd library provides a reference implementation.

The zstd for PHP extension already exists; we'll use it to check the performance of our FFI solution.

  • Library definition, copied and pasted from the library header zstd.h: preload-zstd.h
  • Remi\Zstd class definition which can be preloaded: preload-zstd.inc
  • Test script using this class and the zstd extension for benchmarking: zstd.php

Notice: if the class is not preloaded, it will be included; simple usage:

$ php zstd.php

If only the class is preloaded, headers will be loaded using FFI::load(); usage:

$ php -d opcache.preload=preload-zstd.inc zstd.php

Starting with 7.4.0RC5 (or using RPMs from my repository), headers can also be preloaded, and will be used by calling FFI::scope(); usage:

$ php -d ffi.preload=preload-zstd.h -d opcache.preload=preload-zstd.inc zstd.php

In previous versions, header preloading only works when PHP runs as a normal user, so it doesn't work with mod_php or php-fpm started under an administrative account (root).

Execution output:

PHP version 7.4.0RC4
Use preloaded class
Using FFI::scope OK

Src length           = 8673632
ZSTD_compress        = 1828461
Src length           = 1828461
ZSTD_decompress      = 8673632
Check                = OK
Using FFI  extension = 0,09"

Src length           = 8673632
ZSTD_compress        = 1828461
Src length           = 1828461
ZSTD_decompress      = 8673632
Check                = OK
Using ZSTD extension = 0,09"

For the final user, code using FFI is close to code using the zstd extension, and performance is identical (no noticeable difference).

3. Redis client

Various Redis client implementations exist, written in C or PHP; this sample uses FFI to access functions of the hiredis library.

  • Library definition, copied and pasted from the library headers hiredis/hiredis.h and hiredis/read.h: preload-redis.h
  • Remi\Redis class definition to be preloaded: preload-redis.inc
  • Test script using this class: redis.php

Output excerpt:

$ php74 -d ffi.preload=preload-redis.h -d opcache.preload=preload-redis.inc redis.php
...
+ Remi\Redis::__construct(localhost, 6379)
+ Remi\Redis::initFFI()
+ Remi\Redis::del(foo)
int(1)
+ Remi\Redis::get(foo)
NULL
+ Remi\Redis::set(foo, 2019/10/23 12:45:03)
string(2) "OK"
+ Remi\Redis::get(foo)
string(19) "2019/10/23 12:45:03"
+ Remi\Redis::__destruct

This simplistic code, written in a few hours, works and fulfills its goal.

4. Links

  • Complete and really detailed documentation: https://www.php.net/ffi
  • FFIme project by Anthony Ferrara, designed to automate some part of the work (experimental)
  • Git repository with the examples used

5. Conclusion

FFI appears to be a new way to develop directly in PHP, allowing more features without the need to create and maintain an extension written in C.

Its usage still requires good C skills, to understand the library headers and documentation, and to avoid memory leaks, but it should attract more developers and contributors.

The future will tell if FFI keeps its promises for production usage, and whether it will allow reducing the number of existing extensions that have to be maintained and adapted for each new PHP version.

Writing Summary - late summer 2019

Posted by Susan Lauber on October 23, 2019 01:06 PM
I've done some (ok, very little) writing for opensource.com in the past and I still have some notes for more articles that keep getting pushed aside. This site is almost 10 years old, community driven (with Red Hat Sponsorship), and tries to cover a variety of open topics, products, projects, and distributions.

This summer, some of the staff from that project switched over to help Red Hat start a new blog for system administrators called Enable Sysadmin. As the name implies it is focused on system administration topics and as a corporate blog it can also be a bit more Red Hat product specific. In addition to a small staff, a few part time contractors, and a number of Red Hat employee contributors, they do accept and encourage community contributions.

I have enjoyed being one of the early authors. Of course, like all my writing projects, I have plenty more ideas in my head and not enough focus to get them organized in a timely manner.

So far I have written two articles about using SSH keypairs, two articles about SELinux, and a short article about cybersecurity awareness month.

How to manage multiple SSH key pairs

Passwordless SSH using public-private key pairs

Accessing SELinux policy documentation

Four semanage commands to keep SELinux in enforcing mode

Security advice for sysadmins: Own IT, Secure IT, Protect IT

-SML

Using SSH port forwarding on Fedora

Posted by Fedora Magazine on October 23, 2019 08:00 AM

You may already be familiar with using the ssh command to access a remote system. The protocol behind ssh allows terminal input and output to flow through a secure channel. But did you know that you can also use ssh to send and receive other data securely as well? One way is to use port forwarding, which allows you to connect network ports securely while conducting your ssh session. This article shows you how it works.

About ports

A standard Linux system has a set of network ports already assigned, from 0 to 65535. Your system reserves ports up to 1023 for system use. On many systems you can't elect to use one of these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system's /etc/services file.
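
For example, you can look up which service is registered on a given port:

$ grep -w '80/tcp' /etc/services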

You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as httpd). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple common cases in the life of a web developer.

Local port forwarding

Imagine that you are doing web development on a remote system called remote.example.com. You usually reach this system via ssh but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your ssh connection. The port appears as a local port on your system (thus “local forwarding.”)

Let’s say your web app is running on port 8000 on the remote.example.com box. To locally forward that system’s port 8000 to your system’s port 8000, use the -L option with ssh when you start your session:

$ ssh -L 8000:localhost:8000 remote.example.com

Wait, why did we use localhost as the target for forwarding? It’s because from the perspective of remote.example.com, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as localhost to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the ssh session is ready, keep it open, and you can type http://localhost:8000 in your browser to see your web app. The traffic between systems now travels securely over an ssh tunnel!
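
If you prefer the command line, a quick check with curl works just as well while the ssh session stays open:

$ curl http://localhost:8000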

If you have a sharp eye, you may have noticed something. What if we used a different hostname than localhost for the remote.example.com to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the db.example.com box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t ssh to the actual db.example.com host:

$ ssh -L 3306:db.example.com:3306 remote.example.com

Now you can run MariaDB commands against your localhost and you’re actually using the db.example.com box.
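
For example, with the tunnel above in place, a client session might look like this (127.0.0.1 forces the client to connect over TCP rather than a local socket; the user name is hypothetical):

$ mysql -h 127.0.0.1 -P 3306 -u webappuser -p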

Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the remote.example.com system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your ssh connection, and make it available on the remote system. Just use the -R option when you start your ssh session:

$ ssh -R 6000:localhost:5000 remote.example.com

Now when your friend inside the corporate firewall runs their browser, they can point it at http://remote.example.com:6000 and see your work. And as in the local port forwarding example, the communications travel securely over your ssh session.

By default the sshd daemon running on a host is set so that only that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other example.com corporate hosts see your work, and they weren’t on remote.example.com itself. You’d need the owner of the remote.example.com host to add one of these options to /etc/ssh/sshd_config on that box:

GatewayPorts yes       # OR
GatewayPorts clientspecified

The first option means remote forwarded ports are available on all the network interfaces on remote.example.com. The second means that the client who sets up the tunnel gets to choose the address. This option is set to no by default.

With this option, you as the ssh client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

$ ssh -R *:6000:localhost:5000                   # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000             # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000        # single network
$ ssh -R remote.example.com:6000:localhost:5000  # single network

Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The PermitOpen option for the sshd daemon controls whether, and which, ports are available for TCP forwarding. The default setting is any, which allows all the examples above to work. To disallow any port forwarding, choose none, or choose only a specific host:port setting to permit. For more information, search for PermitOpen in the manual page for sshd daemon configuration:

$ man sshd_config

Finally, remember port forwarding only happens as long as the controlling ssh session is open. If you need to keep the forwarding active for a long period, try running the session in the background using the -N option. Make sure your console is locked to prevent tampering while you’re away from it.
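
For example, this combines those ideas in one step (a sketch: -N tells ssh not to run a remote command, and adding -f sends the session to the background after authentication):

$ ssh -fN -L 8000:localhost:8000 remote.example.com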

GNOME, and Free Software Is Under Attack

Posted by Richard Hughes on October 22, 2019 01:34 PM

A month ago, GNOME was hit by a patent troll. We’re fighting, but need some money to fund the legal defense, and counterclaim. I just donated, and if you use or develop free software you should too.

How to setup an anonymous FTP download server

Posted by Fedora Magazine on October 22, 2019 08:00 AM

Sometimes you may not need to set up a full FTP server with authenticated users who have upload and download privileges. If you are simply looking for a quick way to allow users to grab a few files, an anonymous FTP server can fit the bill. This article shows you how to set it up.

This example uses the vsftpd server.

Installing and configuring the anonymous FTP server

Install the vsftpd server using sudo:

$ sudo dnf install vsftpd

Enable the vsftpd server:

$ sudo systemctl enable vsftpd

Next, edit your /etc/vsftpd/vsftpd.conf file to allow anonymous downloads. Make sure you have the following entries.

anonymous_enable=YES

This option controls whether anonymous logins are permitted or not. If enabled, both the usernames ftp and anonymous are recognized as anonymous logins.

local_enable=NO

This option controls whether local logins are permitted.

write_enable=NO

This option controls whether any FTP commands which change the filesystem are allowed.

no_anon_password=YES

When enabled, this option prevents vsftpd from asking for an anonymous password. With this setting, the anonymous user will log straight in without one.

hide_ids=YES

Enable this option to display all user and group information in directory listings as ftp.

pasv_min_port=40000
pasv_max_port=40001

Finally, these options set the minimum and maximum port to allocate for PASV style data connections. Use them to specify a narrow port range to assist firewalling. You should choose a range of ports that aren't currently in use. This example uses ports 40000-40001 to keep the range very narrow.

Final steps

Now that you’ve set the options, add the appropriate firewall rules to allow vsftp connections along with the passive port range you specified.

$ sudo firewall-cmd --add-service=ftp --permanent
$ sudo firewall-cmd --add-port=40000-40001/tcp --permanent
$ sudo firewall-cmd --reload

Next, configure SELinux to allow passive FTP:

$ sudo setsebool -P ftpd_use_passive_mode on

And finally, start the vsftp server:

$ sudo systemctl start vsftpd

At this point you have a working FTP server. Place the content you want to offer in /var/ftp. (Typically, system administrators put publicly downloadable content under /var/ftp/pub.) Now you can connect to your server using an FTP client on another system.
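
For example, from another machine you could fetch a file anonymously with a command-line client (the host name and file here are hypothetical):

$ wget ftp://ftp.example.com/pub/somefile.txt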


Image courtesy of Tom Woodward on Flickr, CC-BY-SA 2.0.

Where are the team’s newcomers?

Posted by Fedora Community Blog on October 22, 2019 06:53 AM
Attendees create Fedora accounts during Fedora Women's Day 2018 in Lima, Peru. Organizers invited attendees to contribute through whatcanidoforfedora.org

I was wondering why, in the QA team, there are various newcomers willing to contribute, but so little interaction on the mailing list.

If a person would like to join the QA team, like many other Fedora teams, one of the first things they are supposed to do (at least as a good practice, if not as prescribed by the team SOP) is to send an introductory email to the team’s mailing list. 

And it is simple to spot that—after the introduction email and eventually being sponsored into the FAS group—in most cases the newcomers don’t send any other mail afterwards. Why?

I was wondering: is it really possible that a newcomer is so skilled that they don’t need to ask other team members for any clarification? Is it possible that the documentation we have on the wiki or on docs.fedoraproject.org is sufficient to teach a newcomer all the tasks they are supposed to perform? How things work? No doubts? No specific curiosity? Are all the processes and all the tasks so clear? Wow… or… there is something strange.

But also: people introduce themselves, they start to perform some tasks, and then what? Nobody has the need to share first-steps experiences? Nobody needs to talk with other team members? “Hey, I spotted this behavior, did you?” “Hey, the final release is approaching, which tests are more important?” No… silence.

Well, as community members we all know that people come and go. Somebody jumps into a community channel full of initiatives and ideas, then suddenly disappears. Somebody else would like to contribute in a specific area, then realizes that such an area doesn’t fit their interests. Someone else would like to contribute, but doesn’t know where. And sometimes life happens. All of that is pretty normal in a community.

But my curiosity was not satisfied, so I started to look at which data we have available, and I developed a couple of Python scripts to query datagrepper and FAS.

The goal was to answer some questions: since the start of this year, how many emails has each newcomer sent to the QA mailing list after the introductory one? Are such people still active? How many activities related to QA did they perform? OK, they don’t need to communicate on the mailing list: are they performing tasks silently? Or did they leave the team without any announcement, and are they active in other areas of the project? Or did they leave the project?

Obviously the intent is not to measure each team member’s activity, or to press newcomers into performing tasks.

My concern was: why are newcomers so silent? Could we do something to engage people? Are newcomers afraid to take the floor?

How about the results?

Without watering down the numbers (if you are still curious, you can find the results here on Pagure), the feeling is something well known in any community: people would like to contribute, but lose interest pretty fast, and we can’t do too much to hold on to them. Hopefully the recent Fedora Join workflow experiment will be helpful.

As said before, the curiosity came from looking at the little interaction from newcomers on the QA mailing list. So I was hoping that, maybe, the newcomers realized that the QA tasks just weren’t doing it for them, and they went on to contribute in other areas of the community. But no. Sadly, in most cases it seems that newcomers don’t participate in any other team (at least looking at the data available in datagrepper), and after a short time they don’t even use their FAS account anymore.
A small number of newcomers are instead still active, and they perform some team tasks without much interaction.

How to get data from datagrepper

The URL to query is https://apps.fedoraproject.org/datagrepper/raw

To get more info and examples on how to query various kinds of historical data, look at https://apps.fedoraproject.org/datagrepper/

Obviously you can use Python. Starting from the idea and the code behind the Fedora CommOps geofp tool, with my limited skills I developed the tools you can find in the qastats Pagure repo.
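
As a minimal sketch of such a query (using the requests library; the response field names shown are those datagrepper returned at the time of writing, so treat them as assumptions):

import requests

URL = "https://apps.fedoraproject.org/datagrepper/raw"
payload = {
    "rows_per_page": 10,
    "topic": "org.fedoraproject.prod.mailman.receive",
}
# ask datagrepper for the most recent matching messages
response = requests.get(URL, params=payload)
data = response.json()
print(data["total"])  # total number of matching messages
for message in data["raw_messages"]:
    print(message["msg_id"])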

I used two ways to get the messages sent to the QA mailing list.

The first one, which is the slowest, will get all the mailing list messages (not only the ones addressed to the test@fedoraproject.org mailing list) by using these parameters:

payload = { "start": start_timestamp, "end": end_timestamp, "rows_per_page": 100, “topic": "org.fedoraproject.prod.mailman.receive" }

Where start_timestamp and end_timestamp are the dates (in unix timestamp format) of the period we want to take into account.

Then, inside a loop, the script will filter by list name (keeping only the messages sent to test@fedoraproject.org), and the result will be a CSV file containing all the messages sent to the QA mailing list in this form:

sender, subject, timestamp, message date, unique ID (each message in datagrepper has a unique uid).

The other one is much faster, but limited to the last 8 months (so start_date should be less than 8 months in the past), and it makes use of the “contains” parameter. This way there is no need to loop through all the messages in the Python script:

payload = {"start": start_date, "rows_per_page": rows_per_page, "category": "mailman", "contains": contains}

“category”: “mailman” and “topic”: “org.fedoraproject.prod.mailman.receive” should query the same thing.

The result is a CSV file as well, containing the same fields as the previous one.

Then there is the script that actually parses the result of the query to datagrepper.

The logic inside this script is: for each mail sent to the mailing list, get the ones containing “intro” (actually a case-insensitive regular expression) in the subject. Then query FAS by email to get the FAS username (hopefully the email used on the mailing list is the same one used in the FAS account). Then:

  • Get the last_seen value from FAS
  • Get the additional FAS groups the user is part of
  • Count the number of emails sent to the QA mailing list starting from the timestamp of the introduction mail
  • Count the activities in these categories: 
    • bodhi, to guess the number of updates a user has tested
    • bugzilla, to count interactions on bugzilla (like reported bugs)
    • kerneltest, to count the number of kernel regression test cases performed by the user
    • wiki, in order to guess the number of performed validation tests
    • mailman, to get the total number of messages (minus the ones already counted) sent to the rest of the Fedora mailing lists (maybe the user is active in other parts of the project)

To get these activities, the script will query again datagrepper with these parameters:

{'page': 1, 'rows_per_page': 100, 'size': 'small', 'start': timestamp, 'user': user, 'category': category}

Where category is one of the categories above.

This will get the total number of messages (no need to loop here, since the result contains a field with the total value).

Even if these tools look tailored to the QA mailing list, they can easily be adapted to get data from other teams, and they could be a starting point for getting other kinds of information about community activity and for starting to play with datagrepper and FAS.

The post Where are the team’s newcomers? appeared first on Fedora Community Blog.

NBD over AF_VSOCK

Posted by Richard W.M. Jones on October 21, 2019 12:00 AM

How do you talk to a virtual machine from the host? How does the virtual machine talk to the host? In one sense the answer is obvious: virtual machines should be thought of just like regular machines so you use the network. However the connection between host and guest is a bit more special. Suppose you want to pass a host directory up to the guest? You could use NFS, but that’s sucky to set up and you’ll have to fiddle around with firewalls and ports. Suppose you run a guest agent reporting stats back to the hypervisor. How do they talk? Network, sure, but again that requires an extra network interface and the guest has to explicitly set up firewall rules.

A few years ago my colleague Stefan Hajnoczi ported VMware’s vsock to qemu. It’s a pure guest⟷host (and guest⟷guest) sockets API. It doesn’t use regular networks so no firewall issues or guest network configuration to worry about.

You can run NFS over vsock [PDF] if you want.

And now you can of course run NBD over vsock. nbdkit supports it, and libnbd is (currently the only!) client.
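
As a rough sketch of what that looks like (assuming nbdkit's --vsock flag and libnbd's nbdsh shell; CID 2 is the well-known vsock address of the host as seen from a guest):

# on the host: serve a 1G RAM disk over AF_VSOCK (default port 10809)
$ nbdkit --vsock memory 1G

# in the guest: connect back to the host and query the export size
$ nbdsh -c 'h.connect_vsock(2, 10809)' -c 'print(h.get_size())'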

Episode 166 - Every day should be cybersecurity awareness month!

Posted by Open Source Security Podcast on October 21, 2019 12:00 AM
Josh and Kurt talk about cybersecurity awareness month. What's the actionable advice we can give out? There isn't much, which is a fundamental part of the problem.

Show Notes


    Migrating from Docker to Podman

    Posted by Elliott Sales de Andrade on October 20, 2019 10:33 PM
    If you use Docker, you may or may not have already heard of Podman. It is an alternative container engine, and while I don’t have much knowledge of the details, there are a few reasons why I’m switching: Podman runs in rootless mode, i.e., it does not need a daemon running as root; Podman supports new things like cgroupsv2 (coming in Fedora 31); Docker (actually moby-engine) is difficult to keep up-to-date in Fedora (which may correlate with point 2), and people seem to complain about this (though I’ve not cared too much).

    Music with the Synthstrom Deluge

    Posted by Richard W.M. Jones on October 20, 2019 03:33 PM

    I bought a Deluge a while back, and I’ve owned synthesizers and kaossilators and all kinds of other things for years. The Deluge is several things: expensive, awkward to use, but (with practice) it can make some reasonable music. Here are some ambient tunes I’ve written with it:

    Soundscape (with Japanese TV)

    Trips

    Cookie Sunday

    Sunday Bells

    I’m not going to pretend that any of this is good music, but it’s a lot of fun to make.

    Disney+ streaming uses draconian DRM, avoid

    Posted by Hans de Goede on October 20, 2019 01:23 PM
    First of all, as always my opinions are my own, not those of my employer.

    Since I have 2 children I was happy to learn that the Netherlands would be one of the first countries to get Disney+ streaming.

    So I subscribed for the testing period; problem: all devices in my home run Fedora. I started up Firefox and was greeted with an "Error Code 83"; next I tried Chrome, same thing.

    So I mailed the Disney helpdesk about this, explaining how Linux works fine with Netflix, AmazonPrime video and even the web-app from my local cable provider. They promised to get back to me in 24 hours; they eventually got back to me in about a week. They wrote: "We are familiar with Error 83. This often happens if you want to play Disney + via the web browser or certain devices. Our IT department working hard to solve this. In the meantime, I want to advise you to watch Disney + via the app on a phone or tablet. If this error code still occurs in a few days, you can check the help center ..." this was on September 23rd.

    So I thought, OK, they are working on this, let's give them a few days. It is almost a month later now and nothing has changed. Their so-called help center does not even know about "Error Code 83", even though the internet is full of people experiencing this. Note that this error also happens a lot on other platforms; it is not just Linux.

    Someone on tweakers.net has done some digging and this is a Widevine error: "the response is: {"errors":[{"code":"platform-verification-failed","description":"Platform verification status incompatible with security level"}]}". Widevine has 3 security levels, and many devices, including desktop Linux and many Android devices, only support the lowest security setting (software encryption only). In this case e.g. Netflix will not offer full HD or 4k resolutions, but otherwise everything works fine, which is a balance between DRM and usability which I can accept. Disney+ OTOH seems to have the DRM features cranked up to maximum draconian settings and simply will not work on a lot of Android devices, nor on Chromebooks, nor on desktop Linux.

    So if you care about Linux in any way, please do not subscribe to Disney+, instead send them a message letting them know that you are boycotting them until they get their Linux support in order.

    Started a newsletter

    Posted by Kushal Das on October 20, 2019 11:31 AM

    I started a newsletter, focusing on different stories I read about privacy, security, programming in general. Following the advice from Martijn Grooten, I am storing all the interesting links I read (for many months). I used to share these only over Twitter, but, as I retweet many things, it was not easy to share a selected few.

    I also did not want to push them in my regular blog. I wanted a proper newsletter over email service. But, keeping the reader’s privacy was a significant point to choose the service. I finally decided to go with Write.as Letters service. I am already using their open source project WriteFreely. This is an excellent excuse to use their tool more and also pay them for the fantastic tools + service.

    Feel free to subscribe to the newsletter and share the link with your friends.

    AdamW’s Debugging Adventures: “dnf is locked by another application”

    Posted by Adam Williamson on October 18, 2019 08:45 PM

    Gather round the fire, kids, it’s time for another Debugging Adventure! These are posts where I write up the process of diagnosing the root cause of a bug, where it turned out to be interesting (to me, anyway…)

    This case – Bugzilla #1750575 – involved dnfdragora, the package management tool used on Fedora Xfce, which is a release-blocking environment for the ARM architecture. It was a pretty easy bug to reproduce: any time you updated a package, the update would work, but then dnfdragora would show an error “DNF is locked by another process. dnfdragora will exit.”, and immediately exit.

    The bug sat around on the blocker list for a while; Daniel Mach (a DNF developer) looked into it a bit but didn’t have time to figure it out all the way. So I got tired of waiting for someone else to do it, and decided to work it out myself.

    Where’s the error coming from?

As a starting point, I had a nice error message – so the obvious thing to do is figure out where that message comes from. The text appears in a couple of places in dnfdragora – in an exception handler and also in a method for setting up a connection to dnfdaemon. So, if we didn’t already know (I happened to), this would be the point at which we’d realize that dnfdragora is a frontend app to a backend – dnfdaemon – which does the heavy lifting.

So, to figure out in more detail how we were getting to one of these two points, I hacked both the points where that error is logged. Both of them read logger.critical(errmsg). I changed this to logger.exception(errmsg). logger.exception is a very handy feature of Python’s logging module which logs whatever message you specify, plus a traceback to the current state, just like the traceback you get if the app actually crashes.
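Here’s a tiny self-contained illustration of the difference (my sketch, not the dnfdragora code):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("demo")

try:
    1 / 0
except ZeroDivisionError:
    # logger.critical("boom") would log only the message;
    # logger.exception("boom") logs the message plus the full
    # traceback of the exception currently being handled.
    logger.exception("boom")

So by doing that, the dnfdragora log (it logs to a file dnfdragora.log in the directory you run it from) gave us a traceback showing how we got to the error: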

2019-10-14 17:53:29,436 [dnfdragora](ERROR) dnfdaemon client error: g-io-error-quark: GDBus.Error:org.baseurl.DnfSystem.LockedError: dnf is locked by another application (36)
    Traceback (most recent call last):
    File "/usr/bin/dnfdragora", line 85, in <module>
    main_gui.handleevent()
    File "/usr/lib/python3.7/site-packages/dnfdragora/ui.py", line 1273, in handleevent
    if not self._searchPackages(filter, True) :
    File "/usr/lib/python3.7/site-packages/dnfdragora/ui.py", line 949, in _searchPackages
    packages = self.backend.search(fields, strings, self.match_all, self.newest_only, tags )
    File "/usr/lib/python3.7/site-packages/dnfdragora/misc.py", line 135, in newFunc
    rc = func(*args, **kwargs)
    File "/usr/lib/python3.7/site-packages/dnfdragora/dnf_backend.py", line 464, in search
    newest_only, tags)
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 508, in Search
    fields, keys, attrs, match_all, newest_only, tags))
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 293, in _run_dbus_async
    result = self._get_result(data)
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 277, in _get_result
    self._handle_dbus_error(user_data['error'])
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 250, in _handle_dbus_error
    raise DaemonError(str(err))
dnfdaemon.client.DaemonError: g-io-error-quark: GDBus.Error:org.baseurl.DnfSystem.LockedError: dnf is locked by another application (36)
    

    So, this tells us quite a bit of stuff. We know we’re crashing in some sort of ‘search’ operation, and dbus seems to be involved. We can also see a bit more of the architecture here. Note how we have dnfdragora/dnf_backend.py and dnfdaemon/client/__init__.py included in the trace, even though we’re only in the dnfdragora executable here (dnfdaemon is a separate process). Looking at that and then looking at those files a bit, it’s quite easy to see that the dnfdaemon Python library provides a sort of framework for a client class called (oddly enough) DnfDaemonBase which the actual client – dnfdragora in our case – is expected to subclass and flesh out. dnfdragora does this in a class called DnfRootBackend, which inherits from both dnfdragora.backend.Backend (a sort of abstraction layer for dnfdragora to have multiple of these backends, though at present it only actually has this one) and dnfdaemon.client.Client, which is just a small extension to DnfDaemonBase that adds some dbus signal handling.

So now we know more about the design we’re dealing with, and we can also see that we’re trying to do some sort of search operation which looks like it works by the client class communicating with the actual dnfdaemon server process via dbus, only we’re hitting some kind of error in that process, and interpreting it as ‘dnf is locked by another application’. If we dig a little deeper, we can figure out a bit more. We have to read through all of the backtrace frames and examine the functions, but ultimately we can figure out that DnfRootBackend.Search() is wrapped by dnfdragora.misc.ExceptionHandler, which handles dnfdaemon.client.DaemonError exceptions – like the one that’s ultimately getting raised here! – by calling the base class’s own exception_handler() on them…and for us, that’s BaseDragora.exception_handler, one of the two places we found earlier that ultimately produces this “DNF is locked by another process. dnfdragora will exit” text. We also now have two indications (the dbus error itself, and the code in exception_handler()) that the error we’re dealing with is “LockedError”.
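The shape of that wrapper, paraphrased from the traceback and a skim of dnfdragora/misc.py (a sketch, not the actual source):

from dnfdaemon.client import DaemonError

def ExceptionHandler(func):
    # wraps a backend method so that any DaemonError coming up from the
    # dbus client layer is routed to the UI's exception_handler() method
    def newFunc(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except DaemonError as err:
            # for a LockedError, this is what ultimately produces
            # "DNF is locked by another process. dnfdragora will exit."
            self.exception_handler(err)
    return newFunc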

    A misleading error…

    At this point, I went looking for the text LockedError, and found it in two files in dnfdaemon that are kinda variants on each other – daemon/dnfdaemon-session.py and daemon/dnfdaemon-system.py. I didn’t actually know offhand which of the two is used in our case, but it doesn’t really matter, because the codepath to LockedError is the same in both. There’s a function called check_lock() which checks that self._lock == sender, and if it doesn’t, raises LockedError. That sure looks like where we’re at.

    So at this point I did a bit of poking around into how self._lock gets set and unset in the daemon. It turns out to be pretty simple. The daemon is basically implemented as a class with a bunch of methods that are wrapped by @dbus.service.method, which makes them accessible as DBus methods. (One of them is Search(), and we can see that the client class’s own Search() basically just calls that). There are also methods called Lock() and Unlock(), which – not surprisingly – set and release this lock, by setting the daemon class’ self._lock to be either an identifier for the DBus client or None, respectively. And when the daemon is first initialized, the value is set to None.
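Boiled down, the daemon’s locking looks something like this (my paraphrase of the real methods, not a copy of them):

class LockedError(Exception):
    pass

class DnfDaemon:
    # the real methods are exported over DBus via @dbus.service.method;
    # 'sender' is an identifier for the calling DBus client
    def __init__(self):
        self._lock = None    # a fresh daemon instance starts unlocked

    def Lock(self, sender):
        self._lock = sender

    def Unlock(self, sender):
        self._lock = None

    def check_lock(self, sender):
        # passes only when this caller holds the lock; LockedError is
        # raised both when another client holds it and when nobody does
        if self._lock != sender:
            raise LockedError('dnf is locked by another application')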

    At this point, I realized that the error we’re dealing with here is actually a lie in two important ways:

1. The message claims that the problem is the lock being held “by another application”, but that’s not really what check_lock() checks. It passes only if the caller holds the lock; it fails both when the lock is held “by another application” and when the lock is not held at all. So, given the code we’ve looked at so far, we can’t actually trust the message’s assertion that something else is holding the lock – it’s equally possible that the lock is simply not held at all.
    2. The message suggests that the lock in question is a lock on dnf itself. I know dnf/libdnf do have locking, so up to now I’d been assuming we were actually dealing with the locking in dnf itself. But at this point I realized we weren’t. The dnfdaemon lock code we just looked at doesn’t actually call or wrap dnf’s own locking code in any way. This lock we’re dealing with is entirely internal to dnfdaemon. It’s really a lock on the dnfdaemon instance itself.

    So, at this point I started thinking of the error as being “dnfdaemon is either locked by another DBus client, or not locked at all”.

    So what’s going on with this lock anyway?

My next step, now that I understood the locking process we’re dealing with, was to stick some logging into it. I added log lines to the Lock() and Unlock() methods, and I also made check_lock() log what sender and self._lock were set to before returning. Because the daemon’s __init__ sets self._lock to None, I also added a log line there that just records that we’re in it. That got me some more useful information:

    2019-10-14 18:53:03.397784 XXX In DnfDaemon.init now!
    2019-10-14 18:53:03.402804 XXX LOCK: sender is :1.1835
    2019-10-14 18:53:03.407524 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:07.556499 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    [...snip a bunch more calls to check_lock where the sender is the same...]
    2019-10-14 18:53:13.560828 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:13.560941 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:16.513900 XXX In DnfDaemon.init now!
    2019-10-14 18:53:16.516724 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is None
    

So we could see that when we started dnfdragora, dnfdaemon started up and dnfdragora locked it almost immediately. Then, throughout the whole process of reproducing the bug – run dnfdragora, search for a package to be updated, mark it for updating, run the transaction, wait for the error – there were several instances of DBus method calls where everything worked fine (we see check_lock() being called and finding sender and self._lock set to the same value, the identifier for dnfdragora). But then suddenly we see the daemon’s __init__ running again for some reason, not being locked, and then a check_lock() call that fails because the daemon instance’s self._lock is None.

    After a couple of minutes, I guessed what was going on here, and the daemon’s service logs confirmed it – dnfdaemon was crashing and automatically restarting. The first attempt to invoke a DBus method after the crash and restart fails, because dnfdragora has not locked this new instance of the daemon (it has no idea it just crashed and restarted), so check_lock() fails. So as soon as a DBus method invocation is attempted after the dnfdaemon crash, dnfdragora errors out with the confusing “dnf is locked by another process” error.

    The crash was already mentioned in the bug report, but until now the exact interaction between the crash and the error had not been worked out – we just knew the daemon crashed and the app errored out, but we didn’t really know what order those things happened in or how they related to each other.

    OK then…why is dnfdaemon crashing?

    So, the question now became: why is dnfdaemon crashing? Well, the backtrace we had didn’t tell us a lot; really it only told us that something was going wrong in libdbus, which we could also tell from the dnfdaemon service log:

    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]: dbus[226042]: arguments to dbus_connection_unref() were incorrect, assertion "connection->generation == _dbus_current_generation" failed in file ../../dbus/dbus-connection.c line 2823.
    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]: This is normally a bug in some application using the D-Bus library.
    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]:   D-Bus not built with -rdynamic so unable to print a backtrace
    

That last line looked like a cue, so of course, off I went to figure out how to build DBus with -rdynamic. A bit of Googling told me – thanks “the3dfxdude”! – that the trick is to compile with --enable-asserts. So I did that and reproduced the bug again, and got a bit of a better backtrace. It’s a long one, but by picking through it carefully I could spot – in frame #17 – the actual point at which the problem happened, which was in dnfdaemon.server.DnfDaemonBase.run_transaction(). (Note, this is a different DnfDaemonBase class from dnfdaemon.client.DnfDaemonBase; I don’t know why they have the same name, that’s just confusing).

So, the daemon’s crashing on the self.TransactionEvent('end-run', NONE) call in run_transaction(). I poked into what that does a bit, and found a design here that kinda mirrors what happens on the client side: this DnfDaemonBase, like the other one, is a framework for a full daemon implementation, and it’s subclassed by a DnfDaemon class here. That class defines a TransactionEvent method that emits a DBus signal. So…we’re crashing when trying to emit a dbus signal. That all adds up with the backtrace going through libdbus and all. But, why are we crashing?

    At this point I tried to make a small reproducer (which basically just set up a DnfDaemon instance and called self.TransactionEvent in the same way, I think) but that didn’t work – I didn’t know why at the time, but figured it out later. Continuing to trace it out through code wouldn’t be that easy because now we’re in DBus, which I know from experience is a big complex codebase that’s not that easy to just reason your way through. We had the actual DBus error to work from too – “arguments to dbus_connection_unref() were incorrect, assertion “connection->generation == _dbus_current_generation” failed” – and I looked into that a bit, but there were no really helpful leads there (I got a bit more understanding about what the error means exactly, but it didn’t help me understand *why it was happening* at all).

    Time for the old standby…

    So, being a bit stuck, I fell back on the most trusty standby: trial and error! Well, also a bit of logic. It did occur to me that the dbus broker is itself a long-running daemon that other things can talk to. So I started just wondering if something was interfering with dnfdaemon’s connection with the dbus broker, somehow. This was in my head as I poked around at stuff – wherever I wound up looking, I was looking for stuff that involved dbus.

    But to figure out where to look, I just started hacking up dnfdaemon a bit. Now this first part is probably pure intuition, but that self._reset_base() call on the line right before the self.TransactionEvent call that crashes bugged me. It’s probably just long experience telling me that anything with “reset” or “refresh” in the name is bad news. 😛 So I thought, hey, what happens if we move it?

    I stuck some logging lines into this run_transaction so I knew where we got to before we crashed – this is a great dumb trick, btw, just stick lines like self.logger('XXX HERE 1'), self.logger('XXX HERE 2') etc. between every significant line in the thing you’re debugging, and grep the logs for “XXX” – and moved the self._reset_base() call down under the self.TransactionEvent call…and found that when I did that, we got further, the self.TransactionEvent call worked and we crashed the next time something else tried to emit a DBus signal. I also tried commenting out the self._reset_base() call entirely, and found that now we would only crash the next time a DBus signal was emitted after a subsequent call to the Unlock() method, which is another method that calls self._reset_base(). So, at this point I was pretty confident in this description: “dnfdaemon is crashing on the first interaction with DBus after self._reset_base() is called”.
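In sketch form, the experiment on run_transaction() looked roughly like this (my paraphrase of the daemon code plus instrumentation, not the actual diff):

def run_transaction(self):
    # ...transaction work...
    self.logger('XXX HERE 1')                # grep the log for XXX
    self._reset_base()                       # originally right before...
    self.logger('XXX HERE 2')
    self.TransactionEvent('end-run', NONE)   # ...this call, which crashed.
    # Moving self._reset_base() below the TransactionEvent call made the
    # signal go out fine, and the crash moved to the *next* DBus signal.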

    So my next step was to break down what _reset_base() was actually doing. Turns out all of the detail is in the DnfDaemonBase skeleton server class: it has a self._base which is a dnf.base.Base() instance, and that method just calls that instance’s close() method and sets self._base to None. So off I went into dnf code to see what dnf.base.Base.close() does. Turns out it basically does two things: it calls self._finalize_base() and then calls self.reset(True, True, True).
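Stacked together, the code under suspicion looked roughly like this (again my paraphrase, not the actual source):

# dnfdaemon's skeleton server class
def _reset_base(self):
    self._base.close()    # self._base is a dnf.base.Base() instance
    self._base = None

# and what dnf.base.Base.close() boils down to
def close(self):
    self._finalize_base()
    self.reset(True, True, True)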

    Looking at the code it wasn’t immediately obvious which of these would be the culprit, so it was all aboard the trial and error train again! I changed the call to self._reset_base() in the daemon to self._base.reset(True, True, True)…and the bug stopped happening! So that told me the problem was in the call to _finalize_base(), not the call to reset(). So I dug into what _finalize_base() does and kinda repeated this process – I kept drilling down through layers and splitting up what things did into individual pieces, and doing subsets of those pieces at a time to try and find the “smallest” thing I could which would cause the bug.

    To take a short aside…this is what I really like about these kinds of debugging odysseys. It’s like being a detective, only ultimately you know that there’s a definite reason for what’s happening and there’s always some way you can get closer to it. If you have enough patience there’s always a next step you can take that will get you a little bit closer to figuring out what’s going on. You just have to keep working through the little steps until you finally get there.

    Eventually I lit upon this bit of dnf.rpm.transaction.TransactionWrapper.close(). That was the key, as close as I could get to it: reducing the daemon’s self._reset_base() call to just self._base._priv_ts.ts = None (which is what that line does) was enough to cause the bug. That was the one thing out of all the things that self._reset_base() does which caused the problem.

    So, of course, I took a look at what this ts thing was. Turns out it’s an instance of rpm.TransactionSet, from RPM’s Python library. So, at some point, we’re setting up an instance of rpm.TransactionSet, and at this point we’re dropping our reference to it, which – point to ponder – might trigger some kind of cleanup on it.

    Remember how I was looking for things that deal with dbus? Well, that turned out to bear fruit at this point…because what I did next was simply to go to my git checkout of rpm and grep it for ‘dbus’. And lo and behold…this showed up.

    Turns out RPM has plugins (TIL!), and in particular, it has this one, which talks to dbus. (What it actually does is try to inhibit systemd from suspending or shutting down the system while a package transaction is happening). And this plugin has a cleanup function which calls something called dbus_shutdown() – aha!

    This was enough to get me pretty suspicious. So I checked my system and, indeed, I had a package rpm-plugin-systemd-inhibit installed. I poked at dependencies a bit and found that python3-dnf recommends that package, which means it’ll basically be installed on nearly all Fedora installs. Still looking like a prime suspect. So, it was easy enough to check: I put the code back to a state where the crash happened, uninstalled the package, and tried again…and bingo! The crash stopped happening.

    So at this point the case was more or less closed. I just had to do a bit of confirming and tidying up. I checked and it turned out that indeed this call to dbus_shutdown() had been added quite recently, which tied in with the bug not showing up earlier. I looked up the documentation for dbus_shutdown() which confirmed that it’s a bit of a big cannon which certainly could cause a problem like this:

    “Frees all memory allocated internally by libdbus and reverses the effects of dbus_threads_init().

    libdbus keeps internal global variables, for example caches and thread locks, and it can be useful to free these internal data structures.

    You can’t continue to use any D-Bus objects, such as connections, that were allocated prior to dbus_shutdown(). You can, however, start over; call dbus_threads_init() again, create new connections, and so forth.”

    and then I did a scratch build of rpm with the commit reverted, tested, and found that indeed, it solved the problem. So, we finally had our culprit: when the rpm.TransactionSet instance went out of scope, it got cleaned up, and that resulted in this plugin’s cleanup function getting called, and dbus_shutdown() happening. The RPM devs had intended that call to clean up the RPM plugin’s DBus handles, but this is all happening in a single process, so the call also cleaned up the DBus handles used by dnfdaemon itself, and that was enough (as the docs suggest) to cause any further attempts to communicate with DBus in dnfdaemon code to blow up and crash the daemon.

    So, that’s how you get from dnfdragora claiming that DNF is locked by another process to a stray RPM plugin crashing dnfdaemon on a DBus interaction!

    FPgM report: 2019-42

    Posted by Fedora Community Blog on October 18, 2019 07:35 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Fedora 31 was declared No-Go. We are currently under the Final freeze.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements

    Help wanted

    Upcoming meetings

    Fedora 31

    Schedule

    • 29 October — Final release target #1

    Blocker bugs

Bug ID  | Blocker status              | Component      | Bug status
1747408 | Accepted (previous release) | distribution   | MODIFIED
1728240 | Accepted (Final)            | sddm           | POST
1691430 | Accepted (Final)            | dnf            | ON_QA
1762689 | Proposed (Final)            | gnome-software | ON_QA
1762751 | Proposed (Final)            | PackageKit     | NEW

    Fedora 32

    Changes

    Announced

    Submitted to FESCo

    CPE update

    Community Application Handover & Retirement Updates

• Nuancier: Maintainer(s) found. Changes discussion happening on the infrastructure mailing list
• Fedocal: Maintainer found! The admin domain is handed over, the CPE team is engaging with the new maintainer to fully transition, and a Taiga board has been created
• Elections: Blocked; the PostgreSQL database is missing in the application catalogue
• Badges: Discussion still ongoing for maintainers – please come forward if interested!
• Pastebin: Updated to point to CentOS and updated in F30 & F31

    Other Project updates

• Rawhide Gating: Still on track for early November release.
• repoSpanner: An email from one of our team detailing their discoveries during a two-week performance sprint is on the infrastructure mailing list
• JMS messaging plugin is working now
  • PR to jms upstream submitted, waiting for review from upstream maintainer
• CentOS mirror is migrated to a CentOS 7 node (ansible managed), and now fully working for CentOS Stream
• CentOS 7.7 aarch64 was retired from EPEL – it no longer works

    The post FPgM report: 2019-42 appeared first on Fedora Community Blog.

    rpminspect-0.8 released (and a new rpminspect-data-fedora)

    Posted by David Cantrell on October 18, 2019 03:25 PM
Work on the test suite continues with rpminspect, and it is finding a lot of corner-case runtime scenarios. Fixing those up in the code is nice. I welcome contributions to the test suite. You can look at the tests/test_*.py files to see what I'm doing, then work through one inspection and add the different types of checks. Look in the lib/inspect_NAME.c file for all of the add_result() calls to figure out what tests should exist in the test suite. If this is confusing, feel free to reach out via email or another means and I can provide you with a list for an inspection.

    Changes in rpminspect-0.8:

    • Integration test suite continues to grow and fix problems.

    • The javabytecode inspection will report the JAR relative path as well as the path to the embedded class file when a problem is found. (#56)

    • libmandoc 1.14.5 API support. rpminspect will continue to work with 1.14.4 and previous releases and will detect which one to use at build time. The mandoc API changed completely between the 1.14.4 and 1.14.5 release. This is not entirely their fault as we are using it built as a shared library and the upstream project does not officially do that.

    • rpminspect now exits with code 2 when there is a program error. Exit code 0 means inspections passed and exit code 1 means there was at least one inspection failure. (#57)

    • If there is a Python json module exception raised in the test suite, print the inspection name, captured stdout, and captured stderr. This is meant to help debug the integration test suite.

    • Fix the Icon file check in the desktop inspection. Look at all possible icon path trees (set in rpminspect.conf). Also honor the extensionless syntax in the desktop file.

    • Fix the Exec file check in the desktop inspection so it honors arguments specified after the program name.

    • Fix a SIGSEGV when the before and/or after arguments on the command line contain ".." in the pathspec.

• [MAJOR] Fix fundamental problems with the peer detection code, which were leading to false results. The integration test suite caught this.

• Add the IPv6 function blacklist check. The configuration file can carry a list of forbidden IPv6 functions, and rpminspect raises a failure if it finds any of those used.

Changes in rpminspect-data-fedora-0.6:

• Change bytecode version to be JDK 8
• Add desktop_icon_paths to rpminspect.conf

    Many thanks to the contributors, reporters, and testers.  I am continuing on with the test suite work and new inspections.  Keep the reports coming in.

    New badge: Fedora 32 Change Accepted !

    Posted by Fedora Badges on October 18, 2019 01:10 PM
    Fedora 32 Change AcceptedYou got a "Change" accepted into the Fedora 32 Change list

    Letting Birds scooters fly free

    Posted by Matthew Garrett on October 18, 2019 11:44 AM
(Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified this.)

    Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

    The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.

    Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

    It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

    Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".

At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

    So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

    Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.

    The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

In summary: secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret; not having verified boot on safety-critical components isn't ideal; and devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

    Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

    (Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

    [1] And some very early M365 scooters
    [2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
    [3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
    [4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward


    Managing user accounts with Cockpit

    Posted by Fedora Magazine on October 18, 2019 08:00 AM

This is the latest in a series of articles on Cockpit, the easy-to-use, integrated, glanceable, and open web-based interface for your servers. In the first article, we introduced the web user interface. The second and third articles focused on how to perform storage and network tasks respectively.

    This article demonstrates how to create and modify local accounts. It also shows you how to install the 389 Directory Server add-on (or plugin). Finally, you’ll see how 389 DS integrates into the Cockpit web service.

    Managing local accounts

    To start, click the Accounts option in the left column. The main screen provides an overview of local accounts. From here, you can create a new user account, or modify an existing account.

(Figure: Accounts screen overview in Cockpit)

    Creating a new account in Cockpit

Cockpit gives sysadmins the ability to easily create a basic user account. To begin, click the Create New Account button. A box appears, requesting basic information such as the full name, username, and password. It also provides the option to lock the account. Click Create to complete the process. The example below creates a new user named Demo User.

(Figure: Creating a local account in Cockpit)

    Managing accounts in Cockpit

    Cockpit also provides basic management of local accounts. Some of the features include elevating the user’s permissions, password expiration, and resetting or changing the password.

    Modifying an account

To modify an account, go back to the Accounts page and select the user you wish to modify. Here, we can change the full name and elevate the user’s role to Server Administrator — this adds the user to the wheel group. It also includes options for access and passwords.

The Access options allow admins to lock the account. Clicking Never lock account opens the “Account Expiration” box. From here we can choose to never lock the account, or to lock it on a scheduled date.

    Password management

Admins can choose to Set password and Force Change. The first option prompts you to enter a new password. The second option forces users to create a new password the next time they log in.

Selecting the Never change password option opens a box with two options. The first is Never expire the password. This allows the user to keep their password without the need to change it. The second option is Require password change every … days. This determines the number of days a password can be used before it must be changed.

    Adding public keys

We can also add public SSH keys from remote computers for password-less authentication. This is equivalent to the ssh-copy-id command. To start, click the Add Public Key (+) button. Then copy the public key from a remote machine and paste it into the box.

    To remove the key, click the remove (-) button to the right of the key.

    Terminating the session and deleting an account

Near the top-right corner are two buttons: Terminate Session and Delete. Clicking the Terminate Session button immediately disconnects the user. Clicking the Delete button removes the user and offers to delete the user’s files along with the account.

(Figure: Modifying and deleting a local account with Cockpit)

    Managing 389 Directory Server

Cockpit has a plugin for managing 389 Directory Server. To add the 389 Directory Server UI, run the following command using sudo:

    $ sudo dnf install cockpit-389-ds

Because 389 Directory Server has an enormous number of settings, Cockpit provides detailed configuration options for it. Some of these settings include:

    • Server Settings: Options for server configuration, tuning & limits, SASL, password policy, LDAPI & autobind, and logging.
    • Security: Enable/disable security, certificate management, and cipher preferences.
    • Database: Configure the global database, chaining, backups, and suffixes.
• Replication: Pertains to agreements, Winsync agreements, and replication tasks.
    • Schema: Object classes, attributes, and matching rules.
• Plugins: Provides a list of plugins associated with 389 Directory Server. Also gives admins the opportunity to enable/disable and edit each plugin.
• Monitoring: Shows database performance stats, such as the DB cache hit ratio and the normalized DN cache. Admins can also configure the number of tries and hits. Furthermore, it provides server stats and SNMP counters.

    Due to the abundance of options, going through the details for 389 Directory Server is beyond the scope of this article. For more information regarding 389 Directory Server, visit their documentation site.

(Figure: Managing 389 Directory Server with Cockpit)

As you can see, admins can perform quick and basic user management tasks with Cockpit. Most noteworthy, however, is the in-depth functionality of the 389 Directory Server add-on.

    The next article will explore how Cockpit handles software and services.


    Photo by Daniil Vnoutchkov on Unsplash.

    Foliate - A simple and modern ebook viewer for linux

    Posted by Robbi Nespu on October 18, 2019 12:31 AM

Looking for the best e-book viewer on Linux? Then use Foliate! This is my favourite e-book viewer!!

Foliate supports .epub, .mobi, .azw, and .azw3 files, and has a few theme modes for you: light, dark, sepia and inverted.

How to install? Luckily, they also release distribution packages for Fedora (sudo dnf install foliate), Arch, and Void Linux (xbps-install -S foliate). DEB packages for Ubuntu or Debian can be downloaded from the latest release page. For other distributions, just download the source code and build it yourself, or install it from Flatpak.

I really like the stylish interface and have more fun using it compared to other viewers. Two-page view, scrolled view, a metadata viewer and reading progress are the features that make me happy using this software.

    There, I hope you will like this software too. Adios!

    libinput and tablet pad keys

    Posted by Peter Hutterer on October 17, 2019 11:23 PM

Upcoming in libinput 1.15 is a small feature to support Wacom tablets a tiny bit better. If you look at the higher-end devices in Wacom's range, e.g. the Cintiq 27QHD, you'll notice that at the top right of the device are three hardware buttons with icons. Those buttons are intended to open the config panel, the on-screen display or the virtual keyboard. They've been around for a few years and supported in the kernel for a few releases. But in userspace, the events from those keys were ignored, cast out into the wild before eventually running out of electrons and succumbing to misery. Well, that's all changing now with a new interface being added to libinput to forward those events.

Step back a second and let's look at the tablet interfaces. We have one for tablet tools (styli) and one for tablet pads. In the latter, we have events for rings, strips and buttons. The buttons are simply numerically ordered, so button 1 is simply button 1 with no special meaning. Anything more specific needs to be handled by the compositor/client side, which is responsible for assigning e.g. keyboard shortcuts to those buttons.

The special keys however are different: they have a specific function indicated by the icon on the key itself. So libinput 1.15 adds a new event type for tablet pad keys. The events look quite similar to the button events, but they carry a button code from linux/input-event-codes.h that indicates what they are. So the compositor can start the OSD, or the control panel, or whatever, directly without any further configuration required.

    This interface hasn't been merged yet, it's waiting for the linux kernel 5.4 release which has a few kernel-level fixes for those keys.

    libinput and button scrolling locks

    Posted by Peter Hutterer on October 17, 2019 10:56 PM

    For a few years now, libinput has provided button scrolling. Holding a designated button down and moving the device up/down or left/right creates the matching scroll events. We enable this behaviour by default on some devices (e.g. trackpoints) but it's available on mice and some other devices. Users can change the button that triggers it, e.g. assign it to the right button. There are of course a couple of special corner cases to make sure you can still click that button normally but as I said, all this has been available for quite some time now.

New in libinput 1.15 is the button lock feature. The button lock removes the need to hold the button down while scrolling. When the button lock is enabled, a single button click (i.e. press and release) of that button holds that button logically down for scrolling, and any subsequent movement by the device is translated to scroll events. A second button click releases that button lock and the device goes back to normal movement. That's basically it, though there are some extra checks to make sure the button can still be used for normal clicking (though you will now need to double-click for a single logical click).

This is primarily an accessibility feature and is likely to find its way into the GUI tools under the accessibility headers.

    Riddle me this

    Posted by Benjamin Otte on October 17, 2019 10:46 PM

    Found this today while playing around, thought people might enjoy this riddle.

$> cat test.c
    typedef int foo;
    int main()
    {
      foo foo = 1;
      return (foo) +0;
    }
    $> gcc -Wall -o test test.c && ./test && echo $?

    What does this print?

    1. 0
    2. 1
    3. Some compilation warnings, then 0.
    4. Some compilation warnings, then 1.
    5. It doesn’t compile.

    I’ll put an answer in the comments.

    IBus 1.5.21 is released

    Posted by Takao Fujiwara on October 17, 2019 08:47 AM

    IBus 1.5.21 is now released and available in Fedora 31.

    # dnf update ibus

This release enhances the IBus compose features. Previously, the maximum length of a compose key sequence was 7 keys, and the output was limited to a single 16-bit character, so the latest emoji characters or custom long compose results were not supported.
The following is a demo.

(Video demo: https://www.youtube.com/watch?v=S0iFQTrBles)

IBus can load $HOME/.config/ibus/Compose, $HOME/.config/gtk-3.0/Compose, or $HOME/.XCompose, and saves the cache files in $HOME/.cache/ibus/compose/
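For illustration, entries in those files use the standard XCompose syntax; with the new code, a hypothetical sequence like this can now produce output longer than a single 16-bit character:

include "%L"
<Multi_key> <h> <e> <a> <r> <t> : "❤️"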

You can customize the compose key with gnome-tweaks on the GNOME desktop, with similar utilities on other desktops, or with setxkbmap -option on Xorg desktops.

Currently the IBus XKB engines and a few language engines support the IBus compose features. To enable IBus in text applications in the GNOME desktop, you need to enable one or more IBus engines, such as ibus-typing-booster or ibus-m17n, using the Region & Language settings in gnome-control-center. Otherwise GtkIMContextSimple is used and the compose feature is not available. On non-GNOME desktops, you can use any IBus engine by default or customize with the `ibus-setup` command.

Also, ibus-daemon now exits when its parent program dies.

IBus now also provides an ibus.its file, which allows the “longname” and “description” tags in the IBus component files under /usr/share/ibus/component/ to be internationalized.

    ibus-anthy 1.5.11 and anthy-unicode 1.0.0.20191015 are released

    Posted by Takao Fujiwara on October 17, 2019 08:06 AM

    ibus-anthy 1.5.11 is released and available in Fedora 30 or later.
    # dnf update ibus-anthy

The default input mode is now Eisu (direct) mode rather than Hiragana mode.

Eisu mode can now load a user compose file, either $HOME/.config/ibus/Compose or $HOME/.XCompose, in addition to the system compose files, which are already loaded.

    The emoji dictionary is updated for emoji 12.0 beta.

    The ibus-anthy build now uses gettext instead of intltool.

This release also supports anthy-unicode, which converts the internal EUC-JP data to UTF-8 and enhances some functions. The ibus-anthy build detects /usr/lib*/pkgconfig/anthy-unicode.pc for anthy-unicode, or /usr/lib*/pkgconfig/anthy.pc for anthy. Note that anthy-unicode is still an unofficial, testing release.

    Rclone to GDrive

    Posted by Paul Mellors [MooDoo] on October 17, 2019 07:54 AM
I have 2 TB of storage space with Google, so I wanted to sync the files from my Fedora 30 installation to GDrive. I didn't want to have to drag and drop onto a Chrome window or click the upload button; I wanted to fire and forget.

With this in mind I discovered rclone [https://rclone.org/]; it's basically rsync for cloud storage. I set mine up like this.

    dnf install rclone

    rclone config
This will allow you to set up the cloud connection

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> GDrive
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / 1Fichier
       \ "fichier"
     2 / Alias for an existing remote
       \ "alias"
     3 / Amazon Drive
       \ "amazon cloud drive"
     4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
       \ "s3"
     5 / Backblaze B2
       \ "b2"
     6 / Box
       \ "box"
     7 / Cache a remote
       \ "cache"
     8 / Dropbox
       \ "dropbox"
     9 / Encrypt/Decrypt a remote
       \ "crypt"
    10 / FTP Connection
       \ "ftp"
    11 / Google Cloud Storage (this is not Google Drive)
       \ "google cloud storage"
    12 / Google Drive
       \ "drive"
    13 / Google Photos
       \ "google photos"
    14 / Hubic
       \ "hubic"
    15 / JottaCloud
       \ "jottacloud"
    16 / Koofr
       \ "koofr"
    17 / Local Disk
       \ "local"
    18 / Mega
       \ "mega"
    19 / Microsoft Azure Blob Storage
       \ "azureblob"
    20 / Microsoft OneDrive
       \ "onedrive"
    21 / OpenDrive
       \ "opendrive"
    22 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
       \ "swift"
    23 / Pcloud
       \ "pcloud"
    24 / Put.io
       \ "putio"
    25 / QingCloud Object Storage
       \ "qingstor"
    26 / SSH/SFTP Connection
       \ "sftp"
    27 / Union merges the contents of several remotes
       \ "union"
    28 / Webdav
       \ "webdav"
    29 / Yandex Disk
       \ "yandex"
    30 / http Connection
       \ "http"
    31 / premiumize.me
       \ "premiumizeme"
    Storage> 12
    ** See help for drive backend at: https://rclone.org/drive/ **

    Google Application Client Id
    Setting your own is recommended.
    See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
    If you leave this blank, it will use an internal key which is low performance.
    Enter a string value. Press Enter for the default ("").
    client_id>
    Google Application Client Secret
    Setting your own is recommended.
    Enter a string value. Press Enter for the default ("").
    client_secret>
    Scope that rclone should use when requesting access from drive.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Full access all files, excluding Application Data Folder.
       \ "drive"
     2 / Read-only access to file metadata and file contents.
       \ "drive.readonly"
       / Access to files created by rclone only.
     3 | These are visible in the drive website.
       | File authorization is revoked when the user deauthorizes the app.
       \ "drive.file"
       / Allows read and write access to the Application Data folder.
     4 | This is not visible in the drive website.
       \ "drive.appfolder"
       / Allows read-only access to file metadata but
     5 | does not allow any access to read or download file content.
       \ "drive.metadata.readonly"
    scope> 1
    ID of the root folder
    Leave blank normally.
    Fill in to access "Computers" folders. (see docs).
    Enter a string value. Press Enter for the default ("").
    root_folder_id>
    Service Account Credentials JSON file path
    Leave blank normally.
    Needed only if you want use SA instead of interactive login.
    Enter a string value. Press Enter for the default ("").
    service_account_file>
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: <redacted>

    Log in and authorize rclone for access
    Waiting for code...

    At this point a web browser should open and you need to sign into google and authorise the app

    Got code
    Configure this as a team drive?
    y) Yes
    n) No
    y/n>n

You then get token information and a request to confirm everything is OK; you can then quit the config and all should be ready to go.

I then use the command below. There are a large number of options; these worked for me.

    rclone sync /home/paulmellors/Pictures GDrive:Pictures --progress --tpslimit 10 --bwlimit 900K

rclone sync <what you want to sync> <the connection>:<remote folder>, with --progress to show progress, --tpslimit to limit HTTP transactions per second, and --bwlimit to control the bandwidth limit.

    Seems to be working so far with my 400GB of photos :) 


    Contribute to Fedora Magazine

    Posted by Fedora Magazine on October 16, 2019 08:00 AM

    Do you love Linux and open source? Do you have ideas to share, enjoy writing, or want to help run a blog with over 60k visits every week? Then you’re at the right place! Fedora Magazine is looking for contributors. This article walks you through various options of contributing and guides you through the process of becoming a contributor.

    There are three main areas of contribution:

    1. Proposing ideas
    2. Writing articles
    3. Keeping it all running

    Proposing ideas

    Everything starts with an idea. We discuss ideas and how to turn them into articles that are interesting and useful to the Magazine’s audience.

    Everyone is welcome to submit an idea. It can be a very specific article proposal, or really just an idea. The Editorial Board discusses each proposal and decides about the next step.

Many ideas are turned into a so-called Article Spec, which is a specific description of an article to be written for the Magazine. It usually describes the desired structure and other aspects of the article.

    By submitting a proposal you’re not automatically committing to write it. It’s a separate step by design. But, of course, you’re very welcome to do both!

    Submit an idea by opening an issue in our issue tracker. To do that, you’ll need a FAS (Fedora Account System) account.

    See the docs on proposing articles for more info.

    Writing articles

    If you enjoy writing, you’re welcome to write for the Magazine! Being a good writer doesn’t necessarily mean that you also need to come up with the topic — we have a list of article specs ready to be written.

    The Editorial Board maintains a Kanban board with cards representing specific articles. Each article starts as an Article Spec, and goes through various states to the very end when it’s published. Each column on the board represents a state.

    If you want to write an article, just pick any card in the Article Spec column you like. First, assign yourself to the card of your choice, and move it to the In Progress column. That’s how you indicate to the rest of the community you’re working on it. Writing itself is done in the Magazine WordPress — log in, click new/post at the very top, and start writing.

    We strongly encourage writers to read the Tips for Writers page in the docs.

    Once you’re done writing, paste the preview URL from WordPress into the card. (You can get it using the Preview button at the top-right in the WordPress editor.) Then move the card to the Review column. An editor then reviews and moves it forward.

    In some cases, an editor might ask for certain changes. When that happens, the card is moved back to In Progress. All you need to do is to make those changes and move it to the Review column again.

    If you’re a first-time contributor, you’ll need to get access to WordPress and Taiga first. Start by introducing yourself on the Fedora Magazine mailing list, and an editor will set everything up for you.

    See what article specs are ready to be written in the Article Spec column and you can just start writing.

    Also, you can see the docs on writing articles for more info.

    Becoming an editor

    Looking for a longer-term contribution to the Magazine? Perhaps by setting the publishing schedule every week, reviewing ideas, editing articles, and attending the regular meeting? Become a member of the Editorial Board!

    There are a few ways to start:

    Help review ideas

The easiest start might be reviewing ideas and turning them into article specs. Provide feedback, suggest what should be included, and help decide what the article should look like overall. To do that, simply go to the issue tracker and start commenting.

    Sometimes, we also receive ideas on the mailing list. Engaging with people on the mailing list is also a good way to contribute.

    Attend the Editorial meeting

The Fedora Magazine editorial meeting is the place where we set the publishing schedule for the next week, and discuss various ideas regarding the Magazine.

    You are very welcome to just attend one of the Editorial meetings we have. Just say hi, and maybe volunteer to edit an article (read below), create an image (read below), or even write something when we’re short on content. 

    Article reviews

    When there is any card in the Review column on the board, that means the writer is asking for a final review. You can read their article and put a comment in the card with what you think about it. You might say it looks great, or point out specific things you believe should be changed (although that is rare).

    Design a cover image

    Every article published on the Magazine has a cover image. If you enjoy making graphics, you can contribute some. See the cover image guidelines in our docs for more info, and either ask on the list or come to one of our editorial meetings to get assigned one. 

    Recap

    Fedora Magazine is a place to share useful and interesting content with people who love Linux by people who love Fedora. And it all happens thanks to people contributing ideas, writing articles, and helping to keep the Magazine running. If you like the idea of Fedora being popular, or people using open source, Fedora Magazine is a great place for anyone to discover and learn about all of that. Join us and be a part of Fedora’s success!

    Planet Fedora

    Posted by Paul Mellors [MooDoo] on October 16, 2019 07:35 AM
I just wanted to post this onto the planet; it's really a test to see if I've edited my .planet correctly. I wasn't sure if the last post worked.

    Nothing else here yet, move along :)


    libinput's bus factor is 1

    Posted by Peter Hutterer on October 16, 2019 05:56 AM

    A few weeks back, I was at XDC and gave a talk about various current and past input stack developments (well, a subset thereof anyway). One of the slides pointed out libinput's bus factor and I'll use this blog to make this a bit more widely known.

    If you don't know what the bus factor is, Wikipedia defines it as:

    The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.
    libinput has a bus factor of 1.

    Let's arbitrarily pick the 1.9.0 release (roughly 2 years ago) and look at the numbers: of the ~1200 commits since 1.9.0, just under 990 were done by me. In those 2 years we had 76 contributors in total, but only 24 of which have more than one commit and only 6 contributors have more than 5 commits. The numbers don't really change much even if we go all the way back to 1.0.0 in 2015. These numbers do not include the non-development work: release maintenance for new releases and point releases, reviewing CI failures [1], writing documentation (including the stuff on this blog), testing and bug triage. Right now, this is effectively all done by one person.

    This is... less than ideal. At this point libinput is more-or-less the only input stack we have [2] and all major distributions rely on it. It drives mice, touchpads, tablets, keyboards, touchscreens, trackballs, etc. so basically everything except joysticks.

    Anyway, I'm largely writing this blog post in the hope that someone gets motivated enough to dive into this. Right now, if you get 50 patches into libinput you get the coveted second-from-the-top spot, with all the fame and fortune that entails (i.e. little to none, but hey, underdogs are big in popular culture). Short of that, any help with building an actual community would be appreciated too.

    Either way, lest it be said that no-one saw it coming, let's ring the alarm bells now before it's too late. Ding ding!

    [1] Only as of a few days ago can we run the test suite as part of the CI infrastructure, thanks to Benjamin Tissoires. Previously it was run on my laptop and virtually nowhere else.
    [2] fyi, xf86-input-evdev: 5 patches in the same timeframe, xf86-input-synaptics: 6 patches (but only 3 actual changes) so let's not pretend those drivers are well-maintained.

    Τι κάνεις (How are you?) FOSSCOMM 2019

    Posted by Julita Inca Chiroque on October 16, 2019 02:29 AM

    Thanks to Fedora's sponsorship, I was able to travel to Lamia, Greece from October 10 to October 14 to attend FOSSCOMM (Free and Open Source Software Communities Meeting), the pan-Hellenic conference of free and open source software communities.

    Things I did at the event:

    1.- Set up a Fedora booth

    I arranged the booth during my first hours in Lamia. Event registration started at 4:00 p.m., and thanks to the help of enthusiastic volunteers and Alex Angelo (whom I met at GUADEC 2019), the booth was ready to go from the first day of the event.

    The Fedora project sent swag directly to the University of Central Greece, and I created my own handmade decoration. I used Fedora and GNOME balloons to give the booth a nice look 🙂 Thanks to the tools provided by the university, I was able to finish what I had in mind:

    2.- Spread the Fedora word

    When students visited our Fedora booth, they were excited to take some Fedora gifts, especially the tattoo sticker. I asked how many of them used Fedora; most were using Ubuntu, Linux Mint, Kali Linux, or elementary OS. It was an opportunity to share the Fedora 30 edition and hand out the little beginner's guide book written by the Fedora community. Most of them enjoyed taking photos with the Linux frame I made in Edinburgh 💙 Alex also shared his Linux knowledge at our Fedora booth.

    3.- Give a keynote about Linux on supercomputers

    I was invited to the conference to give a talk about Linux on supercomputers. Only 9 of the 42 attendees were non-Linux users, and I am glad they attended to learn what is going on in the supercomputer world, which runs on Linux. I started by asking questions about Linux in general; some Linux users answered part of the questions, but not all of them. Professor Thanos told me that Greece has a supercomputer called Aris, and the students were also aware of GPU technologies. When I asked a question about GPUs, a female student answered correctly about their use, and she won the event t-shirt I had offered as a prize to the audience. You can watch my entire talk in the live-stream video.

    4.- Run a GTK workshop in C

    I had planned to teach the GTK library with C, Python, and Vala. However, because of time and the attendees' preference, we only worked with C. The workshop was supported by Alex Angelo, who also translated some of my expressions into Greek. I was flexible about attendees using different operating systems such as Linux Mint, Ubuntu, and Kubuntu, among other distros; only two attendees used Fedora. Almost half of the audience did not bring a laptop, so I arranged people into groups to work together. I enjoyed seeing young students eager to learn; they took their own notes and asked questions. You can watch the video of the workshop recorded by the organizers.

    My feelings about the event:

    The agenda of the event was so interesting that I was quite sad I could not attend more talks, both because I had to take care of the booth and because most of the talks were in Greek. As you can see in the pictures, a variety of technical talks were given by women. I was impressed by the Greek women: they are well prepared, and most of them are self-taught in Linux and in trending technologies such as IoT, security, programming, and bioscience.

    The authorities supported this kind of Linux event, and I think that was an important factor in its success. Miss Catherine and Mister Thanos were pictured with minorities; women and kids were very excited to be part of FOSSCOMM 2019. The local government also supported the event. Here is a post in the magazine.

    Greek people are warm and happy.  Thank you so much to everyone for the kindness!

    Food for everyone

    I was surprised by the schedule: the journey started every day at 8:00 a.m. and the talks finished at 8:00 p.m. The lunch break was set at 2:30 p.m., and a local told me that for breakfast they usually just have a cup of coffee. On the first day of the event we had a delicious and hearty dinner with professors from the Informatics and Biology departments of the University of Central Greece. Free lunch and coffee breaks were carefully served to everyone. I enjoyed the Greek food; we had a variety of salads and sweets.

    Tourist places I visited

    I only had a few hours before leaving Lamia, but I had time to visit the castle and the museum, where I learned more about Greece's ancient eras and legends.

    Special Thanks

    Thanks to Alex for being my local guide during the whole event! Thanks to Iris for the welcome, to Argiris for the invitation and the t-shirt he promised me, and to Kath for being so nice in the thousand pictures we took, for the tourist guidance, and for her help.

    Thanks to Stathis, who encouraged me to apply to FOSSCOMM, and to each volunteer for their help and effort; I know most of them live an hour and a half from the university. Thanks again to Fedora for the travel sponsorship!

    Libosinfo (Part I)

    Posted by Fabiano Fidêncio on October 16, 2019 12:00 AM

    This is the first blog post of a series which will cover Libosinfo, what it is, who uses it, how it is used, how to manage it, and, finally, how to contribute to it.

    A quick overview

    Libosinfo is the operating system information database. As a project, it consists of three different parts, with the goal of providing a single place containing all the information required to provision and manage an operating system in a virtualized environment.

    The project allows management applications to:

    • Automatically identify which operating system an ISO image or an installation tree is intended for;

    • Find the download location of installable ISO and LiveCD images;

    • Find the location of installation trees;

    • Query the minimum, recommended, and maximum CPU / memory / disk resources for an operating system;

    • Query the hardware supported by an operating system;

    • Generate scripts suitable for automating “Server” and “Workstation” installations.

    The library (libosinfo)

    The library API is written in C, taking advantage of GLib and GObject. Thanks to GObject Introspection, the API is automatically available in all dynamic programming languages with bindings for GObject (JavaScript, Perl, Python, and Ruby). Auto-generated bindings for Vala are also provided.

    As part of libosinfo, three command-line tools are provided (example invocations follow the list):

    • osinfo-detect: Used to detect an operating system from a given ISO or installation tree.

    • osinfo-install-script: Used to generate a “Server” or “Workstation” install script to perform an automated installation of an operating system.

    • osinfo-query: Used to query information from the database.
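
    For instance, a quick session with these tools might look like the sketch below. The ISO path and the OS identifier are hypothetical; run osinfo-query os to see which identifiers your database actually contains, and note that the “desktop” profile corresponds to a “Workstation”-style install.

      # Which operating system is this ISO for?
      $ osinfo-detect /path/to/Fedora-Workstation-Live.iso

      # List the operating systems in the database, filtered by vendor.
      $ osinfo-query os vendor="Fedora Project"

      # Generate an install script for an automated “Workstation” installation.
      $ osinfo-install-script --profile desktop fedora30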

    The database (osinfo-db)

    The database is written in XML and can be consumed either through the libosinfo APIs or directly by management applications’ own code.

    It contains information about the operating systems, devices, installation scripts, platforms, and datamaps (keyboard and language mappings for Windows and Linux OSes).

    The database tools (osinfo-db-tools)

    These tools are used to manage the database, which is distributed as a tarball archive (a usage sketch follows the list).

    • osinfo-db-import: Used to import an osinfo database archive.

    • osinfo-db-export: Used to export an osinfo database archive.

    • osinfo-db-validate: Used to validate the XML files in one of the osinfo database locations for compliance with the RNG schema.

    • osinfo-db-path: Used to report the paths associated with the standard database locations.
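
    A typical round-trip with these tools might look like the sketch below (the archive file name is made up):

      # Where is the system copy of the database installed?
      $ osinfo-db-path --system

      # Import a database archive into the local (/etc/osinfo) location.
      $ osinfo-db-import --local osinfo-db-20191011.tar.xz

      # Validate the imported XML files against the RNG schema.
      $ osinfo-db-validate --local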

    The consumers …

    Libosinfo and osinfo-db have management applications as their target audience. Currently the libosinfo project is consumed by big players in the virtual machine management space, such as OpenStack Nova, virt-manager, GNOME Boxes, and Cockpit Machines.

    … a little bit about them …

    • OpenStack Nova: An OpenStack project that provides a way to provision virtual machines, bare-metal servers, and (with limited support) system containers.

    • virt-manager: An application for managing virtual machines through libvirt.

    • GNOME Boxes: A simple application to view, access, and manage remote and virtual systems.

    • Cockpit Machines: A Cockpit extension to manage virtual machines running on the host.

    … and why they use it

    • Download ISOs: As libosinfo provides the ISO URLs, management applications can offer the user the option to download a specific operating system.

    • Automatically detect the ISO being used: As libosinfo can detect the operating system of an ISO, management applications can use this info to set reasonable default values for resources, to select the hardware supported, and to perform unattended installations.

    • Start tree installation: As libosinfo provides the installation tree URLs, management applications can use them to start a network-based installation without having to download the whole operating system ISO.

    • Set reasonable default values for RAM, CPU, and disk resources: As libosinfo knows the values that are recommended by the operating system’s vendors, management applications can rely on that when setting the default resources for an installation.

    • Automatically set the hardware supported: As libosinfo provides the list of hardware supported by an operating system, management applications can choose the best defaults based on this information, without taking the risk of ending up with a non-bootable guest.

    • Unattended install: As libosinfo provides unattended installation scripts for CentOS, Debian, Fedora, Fedora Silverblue, Microsoft Windows, openSUSE, Red Hat Enterprise Linux, and Ubuntu, management applications can perform unattended installations for both “Workstation” and “Server” profiles.

    What’s next?

    The next blog post will provide a “demo” of an unattended installation using both GNOME Boxes and virt-install and, based on that, explain how libosinfo is used internally by these projects.

    In doing so, we’ll cover both how libosinfo can be used and how it eases the usage of those management applications.

    Cockpit 205

    Posted by Cockpit Project on October 16, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 205.

    Firewall: UI restructuring

    The firewall page was redesigned. Instead of having separate listings for services and zones, the services are now listed per zone. This aims to make the relationship between zones and services clearer.

    Firewall Redesign

    Machines: Refactor Create VM dialog and introduce a download option

    A guest operating system can now be downloaded automatically simply by selecting its name. Memory and storage sizes will default to the recommended values for the selected OS.

    Create VM dialog

    Adjust menu to PatternFly’s current navigation design

    The pages menu now has a dark theme, the current design recommended by PatternFly after a user study.

    Searching with keywords

    Searching by page names and keywords is now enabled. It also works with translated page names and keywords. Searching by page content is not available yet.

    Dark navigation

    Software Updates: Use notifications for available updates info

    Cockpit will notify you about available updates in the navigation menu.

    Notify about available updates

    Web server security hardening

    The cockpit-tls proxy and the cockpit-ws instances now run as different system users, and the instances are controlled by systemd. This provides better isolation and robustness.

    Try it out

    Cockpit 205 is available now.

    Fedora 30 : News about python 3.8.0 and install on Linux.

    Posted by mythcat on October 15, 2019 09:09 PM
    New Python development releases arrived this week.
    The official webpage lists the new versions: Python 3.7.5 (Oct. 15, 2019) and Python 3.8.0 (Oct. 14, 2019).
    I wrote about how to install version 3.8.0 on Fedora 30.
    See the full tutorial here.
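
    For reference, a typical from-source install on Fedora looks roughly like the sketch below; the dependency list is an assumption (see the linked tutorial for the exact steps), and make altinstall is used so the system python3 binary is left untouched:

      $ sudo dnf install gcc make openssl-devel bzip2-devel libffi-devel zlib-devel
      $ curl -O https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tar.xz
      $ tar -xf Python-3.8.0.tar.xz && cd Python-3.8.0
      $ ./configure --enable-optimizations
      $ make -j"$(nproc)"
      $ sudo make altinstall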

    Extending the Minimization objective

    Posted by Fedora Community Blog on October 15, 2019 02:48 PM
    Earlier this summer, the Fedora Council approved the first phase of the Minimization objective. Minimization looks at package dependencies and tries to minimize the footprint for a variety of use cases. The first phase resulted in the development of a feedback pipeline, a better understanding of the problem space, and some initial ideas for policy improvements.

    Phase two has now been submitted to the Council for approval. In this phase, the team will select specific use cases to target and work to develop a minimized set of packages for them. You can read the updated objective in pull request #64. Please provide feedback there or on the council-discuss mailing list. The Council will vote on this in two weeks.

    The post Extending the Minimization objective appeared first on Fedora Community Blog.

    Building GDB on a freshly installed machine FAQ

    Posted by Gary Benson on October 15, 2019 01:35 PM

    So you just installed Fedora, RHEL or CentOS and now you want to build GDB from source.

    1. How do you make sure everything you need to build it is installed?
      # dnf builddep gdb
    2. Did it say, No such command: builddep? Do this, then try again:
      # dnf install dnf-plugins-core
    3. Did it say, dnf: command not found…? You’re using yum, try this:
      # yum-builddep gdb
    4. Did it say, yum-builddep: command not found…? Do this, then try again:
      # yum install yum-utils

    Thank you, you’re welcome.

    syslog-ng in two words at One Identity UNITE: reduce and simplify

    Posted by Peter Czanik on October 15, 2019 10:44 AM

    UNITE is the partner and user conference of One Identity, the company behind syslog-ng. This time the conference took place in Phoenix, Arizona, where I talked with a number of American business customers and partners about syslog-ng. They were really enthusiastic and emphasized two major reasons why they use syslog-ng or plan to introduce it into their infrastructure: it lets them reduce log data volume and greatly simplify their infrastructure by introducing a separate log management layer.

    Reduce

    Log messages are very important both for the operation and the security of a company. This is why you do not simply store them, but also feed them to SIEM and other log analysis systems that create reports and actionable alerts from your messages.

    Applications can produce a tremendous amount of log data. This is a problem for SIEM and other log analysis systems for two major reasons:

    • hardware costs, as the more data you have, the more storage space and processing power you need to analyze it

    • licensing costs, as most analysis platforms are priced on data volume

    You can easily reduce message volume by parsing and filtering your log messages and forwarding only the logs that are really necessary for analysis. Many people started using syslog-ng just for this use case, as it is really easy to create complex filters with it.

    This is why I was surprised to learn about another approach: sending all log messages, but not the whole messages, only the necessary parts. This needs a bit of extra work, as you have to figure out which parts of the log message are used by your log analysis application. But once that research is done, you can easily halve your log volume, or in some special cases even reduce it by 90%.

    Some examples are:

    • Reading the name-value pairs from the systemd journal, but forwarding only selected name-value pairs.

    • Parsing HTTP access logs and forwarding only those columns which are actually analyzed by your software.

    The syslog-ng application has powerful parsers to segment log messages into name-value pairs, after which you can use syslog-ng’s templates and template functions for such selective log delivery, as in the sketch below.
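
    As a rough illustration, a syslog-ng configuration along these lines reads the journal and forwards a trimmed, templated version of each message; the hostname, port, and selected fields are made-up examples, not a drop-in configuration:

      # Read name-value pairs from the systemd journal
      # (journal fields appear under the ".journald." prefix by default).
      source s_journal { systemd-journal(); };

      # Forward only the fields the analysis tool actually needs.
      destination d_siem {
          network("siem.example.com" port(514)
              template("${ISODATE} ${HOST} ${.journald._SYSTEMD_UNIT} ${MESSAGE}\n")
          );
      };

      log { source(s_journal); destination(d_siem); };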

    If your log analysis infrastructure is already in place, it is still worth switching to syslog-ng and reducing your log volume with these techniques. You can keep using your current log analysis infrastructure much longer without having to expand its storage and processing power.

    Simplify

    Most SIEM and log analysis solutions come with their own client applications to collect log messages. So, why bother installing a separate application from yet another vendor to collect your log messages? Installing syslog-ng as a separate log management layer does not actually complicate your infrastructure, but rather simplifies it:

    • No vendor lock-in: replacing your SIEM is pain-free and quick, as you do not have to replace all the agents as well.

    • Operations, security, and other teams in the company use different software to analyze log messages: instead of installing 3-4 or even more agents, you install only one, which can deliver the required log messages to the different solutions.

    When you collect log messages to a central location using syslog-ng, you can archive all of the messages there. If you add a new log analysis application to your infrastructure, you can just point syslog-ng at it and forward the necessary subset of log data there.

    Life becomes easier for both security and operations in your environment, as there is only a single piece of software to check for security problems and to distribute to your systems, instead of many.

    What is next?

    If you are on the technical side, I recommend reading two chapters from the syslog-ng documentation:

    They explain how to reformat your log messages using syslog-ng, giving you a way to reduce your data volume significantly by including only the necessary name-value pairs.

    If you want to learn more about this topic, our Optimize SIEM white paper explains it in more depth.

    The open source version of syslog-ng is part of most Linux distributions, but the packages might be outdated. For up-to-date packages, check the third-party binaries page.

    If you need commercial-level support and help integrating syslog-ng into your environment, start an evaluation of syslog-ng Premium Edition.


    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

    Unoon, a tool to monitor network connections from my system

    Posted by Kushal Das on October 14, 2019 01:46 PM

    I always wanted to have a tool to monitor the network connections from my laptop/desktop. I wanted to have alerts for random processes making network connections, and a way to block those (if I want to).

    Such a tool can provide peace of mind in a few cases. A reverse shell is the big one: just in case I manage to open some random malware (read: downloads) on my regular Linux system, I want to be notified about the connections it makes. The same goes for trying out any new application. I prefer to use Qubes OS based VMs for testing random binaries and applications, and Qubes OS is also my daily driver. But the search for a proper tool continued for some time.

    Introducing unoon

    Unoon main screen

    Unoon is a desktop tool that I started writing to monitor network connections on my system. It has two parts: the backend, written in Go, monitors connections and adds details to a local Redis instance (which should be password protected).

    I started writing this backend in Rust, but then rewrote it in Go because I wanted to reuse parts of my code from another project, so that I can track all DNS queries from the system. This helps to make sense of the data; otherwise, we would see random IP addresses in the UI.

    The frontend is written using PyQt5. Around 14 years ago, I released my first ever tool using PyQt, and it is still my favorite library for creating desktop applications.

    Using the development version of unoon

    The README has the build steps. You have to start the backend as a daemon; the easiest option is to run it inside a tmux session. At first, the tool shows all currently running processes in the “Current processes” tab. If you add an executable (via its absolute path) in the Edit->whitelists dialog, save, and then restart the UI app, it will show up among the whitelisted processes.

    Unoon alert

    For any new process making network calls, you will get an alert dialog. In the future, we will have the option to block hosts/IPs via this alert dialog.

    Unoon history

    The history tab shows the history of all alerts raised during the current run. Eventually, we will save this information in a local database, so that we can show better statistics to users.

    You can move between the different tabs/tables via the Alt+1, Alt+2, and Alt+3 key combinations.

    I will add more options for creating better whitelists of processes. There is also ongoing work to mark any normal process as whitelisted from the UI (by right-clicking).

    Last week, Micah and I managed to get in some late-night hotel room hacking on this tool.

    How can you help?

    You can start by testing the codebase and providing suggestions on how to improve the tool. Help with UX (a major concern) and patches are always welcome.

    A small funny story

    A few weeks back, on a Sunday late night, I was demoing a very early version of the tool to Saptak. While we were talking about it, an entry suddenly popped up in the UI: /usr/bin/ssh, connecting to a random host. A little searching showed that the IP belonged to an EC2 instance. For the next 40 minutes, we were both debugging to find out what had happened and whether the system was already compromised. Luckily, I had been talking about something else earlier and, to demo something (we totally forgot that topic), I was running Wireshark on the system. From there, we figured out that the IP belonged to github.com. It took some more time to figure out that one of my VS Code extensions was updating git data over ssh. This is when I understood that I need to show real domain names in the UI rather than random IP addresses.