September 22, 2020

hackergotchi for Steve Kemp

Steve Kemp

Using a FORTH-like language for something useful

So my previous post was all about implementing a simple FORTH-like language. Of course the obvious question is then "What do you do with it"?

So I present one possible use - turtle-graphics:

\ Draw a square of the given length/width
: square
  dup dup dup dup
  4 0 do
    forward
    90 turn
  loop
;

\ pen down
1 pen

\ move to the given pixel
100 100 move

\ draw a square of width 50 pixels
50 square

\ save the result (png + gif)
save

Exciting times!

22 September, 2020 01:00PM

Russ Allbery

Review: Unconquerable Sun

Review: Unconquerable Sun, by Kate Elliott

Series: Sun Chronicles #1
Publisher: Tor
Copyright: 2020
ISBN: 1-250-19725-2
Format: Kindle
Pages: 526

Sun is the daughter and heir of the mercurial Queen-Marshal Eirene, ruler of the Republic of Chaonia. Chaonia, thanks to Eirene and her ancestors, has carved out a fiercely independent position between the Yele League and the Phene Empire. Sun's father, Prince João, is one of Eirene's three consorts, all chosen for political alliances to shore up that fragile position. João is Gatoi, a civilization of feared fighters and supposed barbarians from outside Chaonia who normally ally with the Phene, which complicates Sun's position as heir. Sun attempts to compensate for that by winning battles for the Republic, following in the martial footsteps of her mother.

The publisher's summary of this book is not great (I'm a huge fan of Princess Leia, but that is... not the analogy that comes to mind), so let me try to help. This is gender-swapped Alexander the Great in space. However, it is gender-swapped Alexander the Great in space with her Companions, which means the DNA of this novel is half space opera and half heist story (without, to be clear, an actual heist, although there are some heist-like maneuvers). It's also worth mentioning that Sun, like Alexander, is not heterosexual.

The other critical thing to know before reading, mostly because it will get you through the rather painful start, is that the most interesting character in this book is not Sun, the Alexander analogue. It's Persephone, who isn't introduced until chapter seven.

Significant disclaimer up front: I got a reasonably typical US grade school history of Alexander the Great, which means I was taught that he succeeded his father, conquered a whole swath of the middle of the Eurasian land mass at a very young age, and then died and left his empire to his four generals, who promptly divided it into four uninteresting empires that no one's ever heard of, and that's why Rome is more important than Greece. (I put in that last bit to troll one specific person.)

I am therefore not the person to judge the parallels between this story and known history, or to notice any damage done to Greek pride, or to pick up on elements that might cause someone with a better grasp of that history to break out in hives. I did enough research to know that one scene in this book is lifted directly out of Alexander's life, but I'm not sure how closely the other parallels track. Yele is probably southern Greece and Phene is probably Persia, but I'm not certain even of that, and some of the details don't line up. If I had to hazard a guess, I'd say that Elliott has probably mangled history sufficiently to make it clear that this isn't intended to be a retelling, but if the historical parallels are likely to bother you, you may want to do more research before reading.

What I can say is that the space opera setup, while a bit stock, has all the necessary elements to make me happy. Unconquerable Sun is firmly in the "lost Earth" tradition: The Argosy fleet fled the now-mythical Celestial Empire and founded a new starfaring civilization without any contact with their original home. Eventually, they invented (or discovered; the characters don't know) the beacons, which allow for instantaneous travel between specific systems without the long (but still faster-than-light) journeys of the knnu drive. More recently, the beacon network has partly collapsed, cutting off the characters' known world from the civilization that was responsible for the beacons and guarded their secrets. It's a fun space opera history with lots of lost knowledge to reference and possibly discover, and with plot-enabling military choke points at the surviving beacons that link multiple worlds.

This is all background to the story, which is the ongoing war between Chaonia and the Phene Empire mixed with cutthroat political maneuvering between the great houses of the Chaonian Republic. This is where the heist aspects come in. Each house sends one representative to join the household of the Queen-Marshal and (more to the point for this story) another to join her heir. Sun has encouraged the individual and divergent talents of her Companions and their cee-cees (an unfortunate term that I suspect is short for Companion's Companion) and forged them into a good working team. A team that's about to be disrupted by the maneuverings of a rival house and the introduction of a new team member whom no one wants.

A problem with writing tactical geniuses is that they often aren't good viewpoint characters. Sun's tight third-person chapters, which make up a little less than half the book, advance the plot and provide analysis of the interpersonal dynamics of the characters, but they aren't the strength of the story. That lies with the interwoven first-person sections that follow Persephone, an altogether more interesting character.

Persephone is the scion of the house that is Sun's chief rival, but she has no interest in being part of that house or its maneuverings. When the story opens, she's a cadet in a military academy for recruits from the commoners, having run away from home, hidden her identity, and won a position through the open entrance exams. She of course doesn't stay there; her past catches up with her and she gets assigned to Sun, to a great deal of mutual suspicion. She also is assigned an impeccably dressed and stunningly beautiful cee-cee, Tiana, who has her own secrets and who is one of the most interesting characters in the book.

Somewhat unusually for the space opera tradition, this is a book that knows that common people exist and have interesting lives. It's primarily focused on the ruling houses, but that focus is not exclusive and the rulers do not have a monopoly on competence. Elliott also avoids narrowing the political field too far; the Gatoi are separate from the three rival powers, and there are other groups with traditions older than the Chaonian Republic and their own agendas. Sun and her Companions are following a couple of political threads, but there is clearly more going on in this world than that single plot.

This is exactly the kind of story I think of when I think space opera. It's not doing anything that original or groundbreaking, and it's not going to make any of my lists of great literature, but it's a fun romp with satisfyingly layered bits of lore, a large-scale setting with lots of plot potential, and (once we get through the confusing and somewhat tedious process of introducing rather too many characters in short succession) some great interpersonal dynamics. It's the kind of book in which the characters are in the middle of decisive military action in an interstellar war and are also near-teenagers competing for ratings in an ad hoc reality TV show, primarily as an excuse to create tactical distractions for Sun's latest scheme. The writing is okay but not great, and the first few chapters have some serious infodumping problems, but I thoroughly enjoyed the whole book and will pre-order the sequel.

One Amazon review complained that Unconquerable Sun is not a space opera like Hyperion or Use of Weapons. That is entirely true, but if that's your standard for space opera, the world may be a disappointing place. This is a solid entry in a subgenre I love, with some great characters, sarcasm, competence porn, plenty of pages to keep turning, a few twists, and the promise of more to come. Recommended.

Followed by the not-yet-published Furious Heaven.

Rating: 7 out of 10

22 September, 2020 02:58AM

September 21, 2020

hackergotchi for Kees Cook

Kees Cook

security things in Linux v5.7

Previously: v5.6

Linux v5.7 was released at the end of May. Here’s my summary of various security things that caught my attention:

arm64 kernel pointer authentication
While the ARMv8.3 CPU “Pointer Authentication” (PAC) feature had already landed for userspace, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP, which protects the saved return address, similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well.
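
As a hedged aside (my own addition, not from the post): the arm64 Kconfig option involved is, as far as I know, CONFIG_ARM64_PTR_AUTH, and signing kernel code additionally needs a toolchain that understands -mbranch-protection:

CONFIG_ARM64_PTR_AUTH=y

Because PACIASP/AUTIASP live in the hint instruction space, CPUs without ARMv8.3 treat them as NOPs, so the same kernel image still runs there.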

BPF LSM
The kernel’s Linux Security Module (LSM) API provides a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting).
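
To make the shape of this concrete, here is a minimal sketch of what a BPF LSM program can look like. This is my own illustration rather than code from the post; it assumes libbpf's helper headers plus a kernel built with CONFIG_BPF_LSM, and the hook signature follows the kernel's LSM selftests:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Attach to the file_mprotect LSM hook and refuse attempts to make a
 * mapping executable. "ret" carries the verdict of earlier LSM programs. */
SEC("lsm/file_mprotect")
int BPF_PROG(deny_exec_mprotect, struct vm_area_struct *vma,
             unsigned long reqprot, unsigned long prot, int ret)
{
    if (ret)
        return ret;
    if (prot & 0x4)     /* PROT_EXEC */
        return -1;      /* deny (-EPERM) */
    return 0;
}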

execve() deadlock refactoring
There have been a number of long-standing races in the kernel’s process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring their uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn’t be able to.

slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED‘s freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer’s location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up.

RISCV page table dumping
Following v5.6’s generic page table dumping work, Zong Li landed the RISCV page dumping code. This means it’s much easier to examine the kernel’s page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables.
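
For anyone who has not poked at this before, viewing the dump on such a debug kernel is roughly (debugfs is often already mounted at this conventional location):

mount -t debugfs none /sys/kernel/debug
cat /sys/kernel/debug/kernel_page_tables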

array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don’t do this:

int foo[5];
...
foo[8] = bar;

The long version gets complicated by the evolution of “flexible array” structure members, so we’ll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, they don’t catch it in open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to “fake” flexible array members. Before flexible arrays were standardized, GNU C supported “zero sized” array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the “header” in front of some data blob that could be addressable through the last structure member:

/* 1-element array */
struct foo {
    ...
    char contents[1];
};

/* GNU C extension: 0-element array */
struct foo {
    ...
    char contents[0];
};

/* C standard: flexible array */
struct foo {
    ...
    char contents[];
};

instance = kmalloc(sizeof(struct foo) + content_size);

Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva’s goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel’s deprecation docs.
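
As an illustration of what such a conversion tends to look like (a sketch of the pattern, not any specific kernel patch): the fake array member becomes a real flexible array, and the allocation can switch to the kernel's struct_size() helper so the size arithmetic is overflow-checked:

struct foo {
    ...
    char contents[];    /* was: char contents[1]; or char contents[0]; */
};

/* allocation flags omitted for brevity, as in the example above */
instance = kmalloc(struct_size(instance, contents, content_size));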

However, that will only catch the “visible at compile time” cases. For runtime checking, the Undefined Behavior Sanitizer has an option for adding runtime array bounds checking for catching things like this where the compiler cannot perform a static analysis of the index values:

int foo[5];
...
for (i = 0; i < some_argument; i++) {
    ...
    foo[i] = bar;
    ...
}

This bounds checking was, however, not separately selectable (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP which effectively turns these conditions into a BUG() without needing additional sysctl settings.
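
For reference, a sketch of the config knobs named above (the option names are quoted from the text; the sysctl spelling is the usual one):

CONFIG_UBSAN=y
CONFIG_UBSAN_BOUNDS=y
# either keep the WARN() reports and set the sysctl...
#   sysctl -w kernel.panic_on_warn=1
# ...or trap directly without the warning text:
CONFIG_UBSAN_TRAP=y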

Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:

/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);

The risk is that if these calls eventually walk off the end of the string buffer, it will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:

/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);

So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes.
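
For completeness, the same idiom written with scnprintf() looks like this (a sketch following the earlier examples, not a quote from any particular patch). Because the return value is capped at what was actually written, "wrote" can never exceed "available" and the size argument can no longer wrap:

wrote  = scnprintf(string, available, "Foo: %s\n", foo);
wrote += scnprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* worst case: later calls see a tiny remaining size and truncate safely */
wrote += scnprintf(string + wrote, available - wrote, "Baz: %s\n", baz);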

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

21 September, 2020 11:32PM by kees

Antoine Beaupré

PSA: Mailman used to harass people

It seems that Mailman instances are being abused to harass people with subscribe spam. If some random people complain to you that they "never wanted to subscribe to your mailing list", you may be a victim of that attack, even if you run the latest Mailman 2.

TL;DR: IKR! HOW DO I FIX THIS!?

Make sure you have SUBSCRIBE_FORM_SECRET set in your mailman configuration:

SECRET=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30)
echo "SUBSCRIBE_FORM_SECRET = '$SECRET'" >> /etc/mailman/mm.cfg

This will add a magic token to all forms in the Mailman web forms that will force the attacker to at least get a token before asking for registration. There are, of course, other ways of performing the attack then, but it's more expensive than a single request for the attacker and keeps most of the junk out.

Other solutions

I originally deployed a different fix, using referrer checks and an IP block list:

RewriteMap hosts-deny  txt:/etc/apache2/blocklist.txt
RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond %{HTTP_REFERER} !^https://lists.torproject.org/$ [NC]
RewriteRule ^/cgi-bin/mailman/subscribe/ - [F]
# see also https://www.w3.org/TR/referrer-policy/#referrer-policy-origin
Header always set Referrer-Policy "origin"

I kept those restrictions in place because they keep the spammers from even hitting the Mailman CGI, which is useful to preserve our server resources. And if "they" escalate with smarter crawlers, the block list will still be useful.

You can use this query to extract the top 10 IP addresses used for subscription attempts:

awk '{ print $NF }' /var/log/mailman/subscribe | sort | uniq -c | sort -n | tail -10  | awk '{ print $2 " " $1 }'

Note that this might include email-based registration, but in our logs those are extremely rare: only two in three weeks, out of over 73,000 requests. I also use this to keep an eye on the logs:

tail -f  /var/log/mailman/subscribe /var/log/apache2/lists.torproject.org-access.log | grep -v 'GET /pipermail/'

The server-side mitigations might also be useful if you happen to run an extremely old version of Mailman, that is pre-2.1.18, but that release is now over six years old and part of every supported Debian release out there (all the way back to Debian 8 jessie).

Why does that attack work?

Because Mailman 2 doesn't have CSRF tokens in its forms by default, anyone can send a POST request to /mailman/subscribe/LISTNAME to have Mailman send an email to the user. In the old "Internet is for nice people" universe, that wasn't a problem: all it does is ask the victim if they want to subscribe to LISTNAME. Innocuous, right?
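
To illustrate the shape of the request (a hypothetical example with a made-up list and address; the form field name is what I remember from the Mailman 2 listinfo page):

curl -X POST \
     -d 'email=victim@example.com' \
     https://lists.example.com/cgi-bin/mailman/subscribe/LISTNAME

With SUBSCRIBE_FORM_SECRET set, a blind POST like this fails because the hidden token from the form is missing.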

But in the brave, new, post-Eternal-September, "Internet is for stupid" universe, some assholes think it's a good idea to make a form that collects hundreds of mailing list URLs and spams them through an iframe. To see what that looks like, you can look at the rendered source code behind samedyfreeday.co.uk (not linking to avoid promoting it). That site does what is basically a distributed cross-site request forgery attack against Mailman servers.

Obviously, CSRF protection should be enabled by default in Mailman, but there you go. Hopefully this will help some folks...

(The latest Mailman 3 release doesn't suffer from such idiotic defaults and ships with proper CSRF protection out of the box.)

21 September, 2020 06:44PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Mainline Linux on the MikroTik RB3011

I upgraded my home internet connection to fibre (FTTP) last October. I’m still on an 80M/20M service, so it’s no faster than my old VDSL FTTC connection was, and as a result for a long time I continued to use my HomeHub 5A running OpenWRT. However the FTTP ONT meant I was using up an additional ethernet port on the router, and I was already short, so I ended up with a GigE switch in use as well. Also my wifi is handled by a UniFi, which takes its power via Power-over-Ethernet. That meant I had a router, a switch and a PoE injector all in close proximity. I wanted to reduce the number of devices, and ideally upgrade to something that could scale once I decide to upgrade my FTTP service speed.

Looking around I found the MikroTik RB3011UiAS-RM, which is a rack mountable device with 10 GigE ports (plus an SFP slot) and a dual core Qualcomm IPQ8064 ARM powering it. There’s 1G RAM and 128MB NAND flash, as well as a USB3 port. It also has PoE support. On paper it seemed like an ideal device. I wasn’t particularly interested in running RouterOS on it (the provided software), but that’s based on Linux and there was some work going on within OpenWRT to add support, so it seemed like a worthwhile platform to experiment with (what, you expected this to be about me buying an off the shelf device and using it with only the supplied software?). As an added bonus a friend said he had one he wasn’t using, and was happy to sell it to me for a bargain price.

RB3011 router in use

I did try out RouterOS to start with, but I didn’t find it particularly compelling. I’m comfortable configuring firewalling and routing at a Linux command line, and I run some additional services on the router like my MQTT broker, and mqtt-arp, my wifi device presence monitor. I could move things around such that they ran on the house server, but I consider them core services and as a result am happier with them on the router.

The first step was to get something booting on the router. Luckily it has an RJ45 serial console port on the back, and a reasonably featured bootloader that can manage to boot via tftp over the network. It wants an ELF binary rather than a plain kernel, but Sergey Sergeev had done the hard work of getting u-boot working for the IPQ8064, which meant I could just build normal u-boot images to try out.

Linux upstream already had basic support for a lot of the pieces I was interested in. There’s a slight fudge around AUTO_ZRELADDR because the network coprocessors want a chunk of memory at the start of RAM, but there’s ongoing discussions about how to handle this cleanly that I’m hopeful will eventually mean I can drop that hack. Serial, ethernet, the QCA8337 switches (2 sets of 5 ports, tied to different GigE devices on the processor) and the internal NOR all had drivers, so it was a matter of crafting an appropriate DTB to get them working. That left niggles.

First, the second switch is hooked up via SGMII. It turned out the IPQ806x stmmac driver didn’t initialise the clocks in this mode correctly, and neither did the qca8k switch driver. So I need to fix up both of those (Sergey had handled the stmmac driver, so I just had to clean up and submit his patch). Next it turned out the driver for talking to the Qualcomm firmware (SCM) had been updated in a way that broke the old method needed on the IPQ8064. Some git archaeology figured that one out and provided a solution. Ansuel Smith helpfully provided the DWC3 PHY driver for the USB port. That got me to the point I could put a Debian armhf image onto a USB stick and mount that as root, which made debugging much easier.

At this point I started to play with configuring up the device to actually act as a router. I make use of a number of VLANs on my home network, so I wanted to make sure I could support those. Turned out the stmmac driver wasn’t happy reconfiguring its MTU because the IPQ8064 driver doesn’t configure the FIFO sizes. I found what seem to be the correct values and plumbed them in. Then the qca8k driver only supported port bridging. I wanted the ability to have a trunk port to connect to the upstairs switch, while also having ports that only had a single VLAN for local devices. And I wanted the switch to handle this rather than requiring the CPU to bridge the traffic. Thankfully it’s easy to find a copy of the QCA8337 datasheet and the kernel Distributed Switch Architecture is pretty flexible, so I was able to implement the necessary support.
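
As a rough illustration of what that buys you in userspace (generic iproute2/bridge usage; the port names lan1 and lan2 are hypothetical, not the RB3011's actual interface names), a VLAN-aware bridge with a trunk port and an untagged access port looks like:

ip link add br0 type bridge vlan_filtering 1
ip link set lan1 master br0          # trunk to the upstairs switch
ip link set lan2 master br0          # single-VLAN local device
bridge vlan add dev lan1 vid 10
bridge vlan add dev lan1 vid 20
bridge vlan add dev lan2 vid 10 pvid untagged

With the DSA support in place the switch offloads this, so the tagged and untagged traffic never has to be bridged by the CPU.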

I stuck with Debian on the USB stick for actually putting the device into production. It makes it easier to fix things up if necessary, and the USB stick allows for a full Debian install which would be tricky on the 128M of internal NAND. That means I can use things like nftables for my firewalling, and use the standard Debian packages for things like collectd and mosquitto. Plus for debug I can fire up things like tcpdump or tshark. Which ended up being useful because when I put the device into production I started having weird IPv6 issues that turned out to be a lack of proper Ethernet multicast filter support in the IPQ806x ethernet device. The driver would try and setup the multicast filter for the IPv6 NDP related packets, but it wouldn’t actually work. The fix was to fall back to just receiving all multicast packets - this is what the vendor driver does.

Most of this work will be present once the 5.9 kernel is released - the basics are already in 5.8. The pieces not yet queued up, as far as I can think of, are the following:

  • stmmac IPQ806x FIFO sizes. I sent out an RFC patch for these, but didn’t get any replies. I probably just need to submit this.
  • NAND. This is missing support for the QCOM ADM DMA engine. I’ve sent out the patch I found to enable this, and have had some feedback, so I’m hopeful it will get in at some point.
  • LCD. AFAICT LCD is an ST7735 device, which has kernel support, but I haven’t spent serious effort getting the SPI configuration to work.
  • Touchscreen. Again, this seems to be a zt2046q or similar, which has a kernel driver, but the basic attempts I’ve tried don’t get any response.
  • Proper SFP functionality. The IPQ806x has a PCS module, but the stmmac driver doesn’t have an easy way to plumb this in. I have ideas about how to get it working properly (and it can be hacked up with a fixed link config) but it’s not been a high priority.
  • Device tree additions. Some of the later bits I’ve enabled aren’t yet in the mainline RB3011 DTB. I’ll submit a patch for that at some point.

Overall I consider the device a success, and it’s been entertaining getting it working properly. I’m running a mostly mainline kernel, it’s handling my house traffic without breaking a sweat, and the fact it’s running Debian makes it nice and easy to throw more things on it as I desire. However it turned out the RB3011 isn’t as perfect a device as I’d hoped. The PoE support is passive, and the UniFi wants 802.3af. So I was going to end up with 2 devices. As it happened I picked up a cheap D-Link DGS-1210-10P switch, which provides the PoE support as well as some additional switch ports. Plus it runs Linux, so more on that later…

21 September, 2020 05:53PM

Russ Allbery

Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217

Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities.

I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded.

I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures.

One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice.

The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor.

However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them.

The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation.

This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them.

The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment.

This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explains some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students.

It is hard to enroll in a university! More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them.

Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process is opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites).

It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this. But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn.

Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.

This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of risk-shifting of employers by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check.

There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork.

As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way.

I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools.

Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation.

Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed.

Rating: 8 out of 10

21 September, 2020 03:59AM

September 20, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.2: New upstream, awesome new stopwatch

Following up on the initial RcppSpdlog 0.0.1 release earlier this week, we are pumped to announce release 0.0.2. It contains upstream version 1.8.0 for spdlog which utilizes (among other things) a new feature in the embedded fmt library, namely completely automated formatting of high resolution time stamps which allows for gems like this (taken from this file in the package and edited down for brevity):

    spdlog::stopwatch sw;                                   // instantiate a stop watch

    // some other code

    sp->info("Elapsed time: {}", sw);

What we see is all there is: One instantiates a stopwatch object, and simply references it. The rest, as they say, is magic. And we get tic / toc-like behaviour in modern C++ at essentially no cost to us (as code authors). So nice. Output from the (included in the package) function exampleRsink() (again edited down just a little):

R> RcppSpdlog::exampleRsink()
[13:03:23.845114] [fromR] [I] [thread 793264] Welcome to spdlog!
[...]
[13:03:23.845205] [fromR] [I] [thread 793264] Elapsed time: 0.000103611
[...]
[13:03:23.845281] [fromR] [I] [thread 793264] Elapsed time: 0.00018203
R> 

We see that the two simple logging instances come roughly 104 and 182 microseconds into the call.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.2 (2020-09-17)

  • Upgraded to upstream release 1.8.0

  • Switched Travis CI to using BSPM, also test on macOS

  • Added 'stopwatch' use to main R sink example

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppSpdlog page.

The only sour grapes, again, are once more over the CRAN processing. And just as 0.0.1 was delayed for no good reason for three weeks, 0.0.2 was delayed by three days just because … well that is how CRAN rules sometimes. I’d be even more mad if I had an alternative but I don’t. We remain grateful for all they do but they really could have let this one through even at a one-day update delta. Ah well, now we’re three days wiser and of course nothing changed in the package.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 September, 2020 06:21PM

hackergotchi for Andy Simpkins

Andy Simpkins

Using an IP camera in conference calls

My webcam has broken, something that I have been using a lot during the last few months for some reason.

A friend of mine suggested that I use the mic and camera on my mobile phone instead.  There is a simple app ‘droidcam’ that makes the phone behave as a simple webcam; it also has a client application to run on your PC to capture the web-stream and present it as a video device.  All well and good but I would like to keep proprietary software off my PCs (I have a hard enough time accepting it on a phone but I have to draw a line somewhere).

I decided that there had to be a simple way to ingest that stream and present it as a local video device on a Linux box.  It turns out that it is a lot simpler than I thought it would be.  I had it working within 10 minutes!

Packages needed:  ffmpeg, v4l2loopback-utils

sudo apt-get install ffmpeg v4l2loopback-utils

Start the loop-back device:

sudo modprobe v4l2loopback

Ingest video and present on loop-back device:

ffmpeg -re -i <URL_VideoStream> -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

Where

-re
Read input at native frame rate. Mainly used to simulate a grab
device, or live input stream (e.g. when reading from a file). Should
not be used with actual grab devices or live input streams (where it
can cause packet loss). By default ffmpeg attempts to read the
input(s) as fast as possible. This option will slow down the reading
of the input(s) to the native frame rate of the input(s). It is
useful for real-time output (e.g. live streaming).
 -i <source>
In this case my phone running droidcam http://10.14.1.100:4747/video
-vcodec rawvideo
Select the output video codec to raw video (as expected for /dev/video#)
-pix_fmt yuv420p
Set pixel format.  In this example tell ffmpeg that the video will
use the YUV colour space with 4:2:0 chroma subsampling (planar)
-f v4l2 /dev/video0
Force the output to be in video for Linux 2 (v4l2) format bound to
device /dev/video0
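
One extra step I find handy (not part of the recipe above): sanity-check the loop-back device locally before pointing a conferencing app at it, for example with ffplay from the same ffmpeg package:

ffplay -f v4l2 /dev/video0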

20 September, 2020 09:50AM by andy

September 19, 2020

Vincent Bernat

Keepalived and unicast over multiple interfaces

Keepalived is a Linux implementation of VRRP. The usual role of VRRP is to share a virtual IP across a set of routers. For each VRRP instance, a leader is elected and gets to serve the IP address, ensuring the high availability of the attached service. Keepalived can also be used for a generic leader election, thanks to its ability to use scripts for healthchecking and run commands on state change.

A simple configuration looks like this:

vrrp_instance gateway1 {
  state BACKUP          # ❶
  interface eth0        # ❷
  virtual_router_id 12  # ❸
  priority 101          # ❹
  virtual_ipaddress {
    2001:db8:ff/64
  }
}

The state keyword in ❶ instructs Keepalived to not take the leader role when starting. Otherwise, incoming nodes create a temporary disruption by taking over the IP address until the election settles. The interface keyword in ❷ defines the interface for sending and receiving VRRP packets. It is also the default interface to configure the virtual IP address. The virtual_router_id directive in ❸ is common to all nodes sharing the virtual IP. The priority keyword in ❹ helps choose which router will be elected as leader. If you need more information around Keepalived, be sure to check the documentation.

VRRP design is tied to Ethernet networks and requires a multicast-enabled network for communication between nodes. In some environments, notably public clouds, multicast is unavailable. In this case, Keepalived can send VRRP packets using unicast:

vrrp_instance gateway1 {
  state BACKUP
  interface eth0
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

Another process, like a BGP daemon, should advertise the virtual IP address to the “network”. If needed, Keepalived can trigger whatever action is needed for this by using notify_* scripts.
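
As a sketch of what that wiring can look like (the script path is made up; notify_master, notify_backup and notify_fault are standard Keepalived directives):

vrrp_instance gateway1 {
  # …
  notify_master "/usr/local/bin/announce-vip.sh add"
  notify_backup "/usr/local/bin/announce-vip.sh del"
  notify_fault  "/usr/local/bin/announce-vip.sh del"
}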

Until version 2.21 (not released yet), the interface directive is mandatory and Keepalived will transmit and receive VRRP packets on this interface only. If peers are reachable through several interfaces, like on a BGP on the host setup, you need a workaround. A simple one is to use a VXLAN interface:

$ ip -6 link add keepalived6 type vxlan id 6 dstport 4789 local 2001:db8::10 nolearning
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::11
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::12
$ ip link set up dev keepalived6

Learning of MAC addresses is disabled and one generic entry for each peer is added in the forwarding database: transmitted packets are broadcasted to all peers, notably VRRP packets. Have a look at “VXLAN & Linux” for additional details.

vrrp_instance gateway1 {
  state BACKUP
  interface keepalived6
  mcast_src_ip 2001:db8::10
  virtual_router_id 12
  priority 101
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

Starting from Keepalived 2.21, unicast_peer can be used without the interface directive. I think using VXLAN is still a neat trick applicable to other situations where communication using broadcast or multicast is needed, while the underlying network provides no support for this.

19 September, 2020 08:02PM by Vincent Bernat

Syncing NetBox with a custom Ansible module

The netbox.netbox collection from Ansible Galaxy provides several modules to update NetBox objects:

- name: create a device in NetBox
  netbox_device:
    netbox_url: http://netbox.local
    netbox_token: s3cret
    data:
      name: to3-p14.sfo1.example.com
      device_type: QFX5110-48S
      device_role: Compute Switch
      site: SFO1

However, if NetBox is not your source of truth, you may want to ensure it stays in sync with your configuration management database1 by removing outdated devices or IP addresses. While it should be possible to glue together a playbook with a query, a loop and some filtering to delete unwanted elements, it feels clunky, inefficient and an abuse of YAML as a programming language. A specific Ansible module solves this issue and is likely more flexible.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a first simpler example.

Code

The module has the following signature and it syncs NetBox with the content of the provided YAML file:

netbox_sync:
  source: netbox.yaml
  api: https://netbox.example.com
  token: s3cret

The synchronized objects are:

  • sites,
  • manufacturers,
  • device types,
  • device roles,
  • devices, and
  • IP addresses.

In our environment, the YAML file is generated from our configuration management database and contains a set of devices and a list of IP addresses:

devices:
  ad2-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Cisco
     model: Catalyst 2960G-48TC-L
     role: net_tor_oob_switch
  to1-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Juniper
     model: QFX5110-48S
     role: net_tor_gpu_switch
# […]
ips:
  - device: ad2-p6.example.com
    ip: 172.31.115.18/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.115.33/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.254.33/32
    interface: lo0.0
# […]

The network team is not the sole tenant in NetBox. While adding new objects or modifying existing ones should be relatively safe, deleting unwanted objects can be risky. The module only deletes objects it did create or modify. To identify them, it marks them with a specific tag, cmdb. Most objects in NetBox accept tags.

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    source=dict(type='path', required=True),
    api=dict(type='str', required=True),
    token=dict(type='str', required=True, no_log=True),
    max_workers=dict(type='int', required=False, default=10)
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

It contains an additional optional argument, max_workers, defining the number of workers used to talk to NetBox and query the existing objects in parallel to speed up execution.
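
A usage sketch combining the signature shown earlier with this extra argument (values are purely illustrative):

- name: sync NetBox with the CMDB export
  netbox_sync:
    source: netbox.yaml
    api: https://netbox.example.com
    token: s3cret
    max_workers: 20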

Abstracting synchronization

We need to synchronize different object types, but once we have a list of objects we want in NetBox, the grunt work is always the same:

  • check if the objects already exist,
  • retrieve them and put them in a form suitable for comparison,
  • retrieve the extra objects we don’t want anymore,
  • compare the two sets, and
  • add missing objects, update existing ones, delete extra ones.

We code these behaviours into a Synchronizer abstract class. For each kind of object, a concrete class is built with the appropriate class attributes to tune its behaviour and a wanted() method to provide the objects we want.

I am not explaining the abstract class code here. Have a look at the source if you want.

Synchronizing tags and tenants

As a starter, here is how we define the class synchronizing the tags:

class SyncTags(Synchronizer):
    app = "extras"
    table = "tags"
    key = "name"

    def wanted(self):
        return {"cmdb": dict(
            slug="cmdb",
            color="8bc34a",
            description="synced by network CMDB")}

The app and table attributes define the NetBox objects we want to manipulate. The key attribute is used to determine how to look up existing objects. In this example, we want to look up tags using their names.

The wanted() method is expected to return a dictionary mapping object keys to the list of wanted attributes. Here, the keys are tag names and we create only one tag, cmdb, with the provided slug, color and description. This is the tag we will use to mark the objects we create or modify.

If the tag does not exist, it is created. If it exists, the provided attributes are updated. Other attributes are left untouched.

We also want to create a specific tenant for objects accepting such an attribute (devices and IP addresses):

class SyncTenants(Synchronizer):
    app = "tenancy"
    table = "tenants"
    key = "name"

    def wanted(self):
        return {"Network": dict(slug="network",
                                description="Network team")}

Synchronizing sites

We also need to synchronize the list of sites. This time, the wanted() method uses the information provided in the YAML file: it walks the devices and builds a set of datacenter names.

class SyncSites(Synchronizer):

    app = "dcim"
    table = "sites"
    key = "name"
    only_on_create = ("status", "slug")

    def wanted(self):
        result = set(details["datacenter"]
                     for details in self.source['devices'].values()
                     if "datacenter" in details)
        return {k: dict(slug=k,
                        status="planned")
                for k in result}

Thanks to the use of the only_on_create attribute, the specified attributes are not updated if they are different. The goal of this synchronizer is mostly to collect the references to the different sites for other objects.

>>> pprint(SyncSites(**sync_args).wanted())
{'sfo1': {'slug': 'sfo1', 'status': 'planned'},
 'chi1': {'slug': 'chi1', 'status': 'planned'},
 'nyc1': {'slug': 'nyc1', 'status': 'planned'}}

Synchronizing manufacturers, device types and device roles

The synchronization of manufacturers is pretty similar, except we do not use the only_on_create attribute:

class SyncManufacturers(Synchronizer):

    app = "dcim"
    table = "manufacturers"
    key = "name"

    def wanted(self):
        result = set(details["manufacturer"]
                     for details in self.source['devices'].values()
                     if "manufacturer" in details)
        return {k: {"slug": slugify(k)}
                for k in result}

Regarding the device types, we use the foreign attribute linking a NetBox attribute to the synchronizer handling it.

class SyncDeviceTypes(Synchronizer):

    app = "dcim"
    table = "device_types"
    key = "model"
    foreign = {"manufacturer": SyncManufacturers}

    def wanted(self):
        result = set((details["manufacturer"], details["model"])
                     for details in self.source['devices'].values()
                     if "model" in details)
        return {k[1]: dict(manufacturer=k[0],
                           slug=slugify(k[1]))
                for k in result}

The wanted() method refers to the manufacturer using its key attribute. In this case, this is the manufacturer name.

>>> pprint(SyncManufacturers(**sync_args).wanted())
{'Cisco': {'slug': 'cisco'},
 'Dell': {'slug': 'dell'},
 'Juniper': {'slug': 'juniper'}}
>>> pprint(SyncDeviceTypes(**sync_args).wanted())
{'ASR 9001': {'manufacturer': 'Cisco', 'slug': 'asr-9001'},
 'Catalyst 2960G-48TC-L': {'manufacturer': 'Cisco',
                           'slug': 'catalyst-2960g-48tc-l'},
 'MX10003': {'manufacturer': 'Juniper', 'slug': 'mx10003'},
 'QFX10002-36Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-36q'},
 'QFX10002-72Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-72q'},
 'QFX5110-32Q': {'manufacturer': 'Juniper', 'slug': 'qfx5110-32q'},
 'QFX5110-48S': {'manufacturer': 'Juniper', 'slug': 'qfx5110-48s'},
 'QFX5200-32C': {'manufacturer': 'Juniper', 'slug': 'qfx5200-32c'},
 'S4048-ON': {'manufacturer': 'Dell', 'slug': 's4048-on'},
 'S6010-ON': {'manufacturer': 'Dell', 'slug': 's6010-on'}}
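
How could the base class resolve such foreign references? A hypothetical approach, shown here only as a sketch, is to look the referenced object up through the target synchronizer and its key attribute, then substitute its ID before creating or updating:

def resolve_foreign(self, attributes):
    """Sketch: replace foreign values (e.g. a manufacturer name) by the ID of
    the matching NetBox object, using the synchronizer declared in `foreign`."""
    resolved = dict(attributes)
    for attribute, synchronizer_class in getattr(self, "foreign", {}).items():
        value = resolved.get(attribute)
        if value is None:
            continue
        target = synchronizer_class(module=self.module, netbox=self.netbox,
                                    source=self.source,
                                    before=self.before, after=self.after)
        resolved[attribute] = target.get(value).id
    return resolved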

The device roles are defined like this:

class SyncDeviceRoles(Synchronizer):

    app = "dcim"
    table = "device_roles"
    key = "name"

    def wanted(self):
        result = set(details["role"]
                     for details in self.source['devices'].values()
                     if "role" in details)
        return {k: dict(slug=slugify(k),
                        color="8bc34a")
                for k in result}

Synchronizing devices

A device is mostly a name with references to a role, a model, a datacenter and a tenant. These references are declared as foreign keys using the synchronizers defined previously.

class SyncDevices(Synchronizer):
    app = "dcim"
    table = "devices"
    key = "name"
    foreign = {"device_role": SyncDeviceRoles,
               "device_type": SyncDeviceTypes,
               "site": SyncSites,
               "tenant": SyncTenants}
    remove_unused = 10

    def wanted(self):
        return {name: dict(device_role=details["role"],
                           device_type=details["model"],
                           site=details["datacenter"],
                           tenant="Network")
                for name, details in self.source['devices'].items()
                if {"datacenter", "model", "role"} <= set(details.keys())}

The remove_unused attribute is a safety net: the module fails if we have to delete more than 10 devices, as this may indicate a bug somewhere, unless one of your datacenters suddenly caught fire.

>>> pprint(SyncDevices(**sync_args).wanted())
{'ad2-p6.sfo1.example.com': {'device_role': 'net_tor_oob_switch',
                             'device_type': 'Catalyst 2960G-48TC-L',
                             'site': 'sfo1',
                             'tenant': 'Network'},
 'to1-p6.sfo1.example.com': {'device_role': 'net_tor_gpu_switch',
                             'device_type': 'QFX5110-48S',
                             'site': 'sfo1',
                             'tenant': 'Network'},
[…]
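
The cleanup() step relying on remove_unused is not detailed in the article either. Assuming the objects we manage carry the cmdb tag, a rough sketch of the idea could be:

def cleanup(self):
    """Sketch: delete objects tagged "cmdb" that are no longer wanted, but
    refuse to do so when more than `remove_unused` objects would be deleted."""
    wanted = set(self.wanted())
    unused = [obj for obj in self.endpoint().filter(tag="cmdb")
              if getattr(obj, self.key) not in wanted]
    limit = getattr(self, "remove_unused", None)
    if limit is not None and len(unused) > limit:
        self.module.fail_json(
            msg=f"refusing to delete {len(unused)} {self.table}; "
                "this looks like a bug in the source of truth")
    for obj in unused:
        obj.delete()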

Synchronizing IP addresses

The last step is to synchronize IP addresses. We do not attach them to a device.2 Instead, we specify the device names in the description of the IP address:

class SyncIPs(Synchronizer):
    app = "ipam"
    table = "ip-addresses"
    key = "address"
    foreign = {"tenant": SyncTenants}
    remove_unused = 1000

    def wanted(self):
        wanted = {}
        for details in self.source['ips']:
            if details['ip'] in wanted:
                wanted[details['ip']]['description'] = \
                    f"{details['device']} (and others)"
            else:
                wanted[details['ip']] = dict(
                    tenant="Network",
                    status="active",
                    dns_name="",        # information is present in DNS
                    description=f"{details['device']}: {details['interface']}",
                    role=None,
                    vrf=None)
        return wanted

There is a slight difficulty: NetBox allows duplicate IP addresses, so a simple lookup is not enough. In case of multiple matches, we choose the best by preferring those tagged with cmdb, then those already attached to an interface:

def get(self, key):
    """Grab IP address from NetBox."""
    # There may be duplicates. We need to grab the "best".
    results = super(Synchronizer, self).get(key)
    if len(results) == 0:
        return None
    if len(results) == 1:
        return results[0]
    scores = [0]*len(results)
    for idx, result in enumerate(results):
        if "cmdb" in result.tags:
            scores[idx] += 10
        if result.interface is not None:
            scores[idx] += 5
    return sorted(zip(scores, results),
                  reverse=True, key=lambda k: k[0])[0][1]

Getting the current and wanted states

Each synchronizer is initialized with a reference to the Ansible module, a reference to a pynetbox API object, the data contained in the provided YAML file, and two empty dictionaries for the current and expected states:

source = yaml.safe_load(open(module.params['source']))
netbox = pynetbox.api(module.params['api'],
                      token=module.params['token'])

sync_args = dict(
    module=module,
    netbox=netbox,
    source=source,
    before={},
    after={}
)
synchronizers = [synchronizer(**sync_args) for synchronizer in [
    SyncTags,
    SyncTenants,
    SyncSites,
    SyncManufacturers,
    SyncDeviceTypes,
    SyncDeviceRoles,
    SyncDevices,
    SyncIPs
]]

Each synchronizer has a prepare() method whose goal is to compute the current and wanted states. It returns True in case of a difference:

# Check what needs to be synchronized
try:
    for synchronizer in synchronizers:
        result['changed'] |= synchronizer.prepare()
except AnsibleError as e:
    result['msg'] = e.message
    module.fail_json(**result)
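
prepare() itself belongs to the base class from the previous article. Conceptually, it records the current and wanted attributes for its table in the before and after dictionaries and reports whether they differ. A simplified, hypothetical version could read:

def prepare(self):
    """Sketch: fill before/after for this table and return True on difference."""
    wanted = self.wanted()
    current = {}
    for key, attributes in wanted.items():
        existing = self.get(key)    # may be overridden, as for IP addresses
        current[key] = None if existing is None else {
            k: getattr(existing, k, None) for k in attributes}
    self.before[self.table] = current
    self.after[self.table] = wanted
    # a real implementation would also need to normalize foreign references
    # and tags before comparing, and to remember objects for cleanup()
    return current != wanted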

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between these states. Each synchronizer registers the current and wanted states in sync_args["before"][table] and sync_args["after"][table] where table is the name of the table for a given NetBox object type. The diff object is a bit elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

# Compute the diff
if module._diff and result['changed']:
    result['diff'] = [
        dict(
            before_header=table,
            after_header=table,
            before=yaml.safe_dump(sync_args["before"][table]),
            after=yaml.safe_dump(sync_args["after"][table]))
        for table in sync_args["after"]
        if sync_args["before"][table] != sync_args["after"][table]
    ]

# Stop here if check mode is enabled or if no change
if module.check_mode or not result['changed']:
    module.exit_json(**result)

Each synchronizer also exposes a synchronize() method to apply changes and a cleanup() method to delete unwanted objects. Order is important due to the relation between the objects.

# Synchronize
for synchronizer in synchronizers:
    synchronizer.synchronize()
for synchronizer in synchronizers[::-1]:
    synchronizer.cleanup()
module.exit_json(**result)

The complete code is available on GitHub. Compared to using the netbox.netbox collection, the logic is written in Python instead of trying to glue Ansible tasks together. I believe this is both more flexible and easier to read, notably when trying to delete outdated objects. While I did not test it, it should also be faster. An alternative would have been to reuse code from the netbox.netbox collection, as it contains similar primitives. Unfortunately, I didn’t think of it until now. 😶


  1. In my opinion, a good option for a source of truth is to use YAML files in a Git repository. You get versioning for free and people can get started with a text editor. ↩︎

  2. This limitation is mostly due to laziness: we do not really care about this information. Our main motivation for putting IP addresses in NetBox is to keep track of the used IP addresses. However, if an IP address is already attached to an interface, we leave this association untouched. ↩︎

19 September, 2020 04:11PM by Vincent Bernat

hackergotchi for Bits from Debian

Bits from Debian

New Debian Maintainers (July and August 2020)

The following contributors were added as Debian Maintainers in the last two months:

  • Chirayu Desai
  • Shayan Doust
  • Arnaud Ferraris
  • Fritz Reichwald
  • Kartik Kulkarni
  • François Mazen
  • Patrick Franz
  • Francisco Vilmar Cardoso Ruviaro
  • Octavio Alvarez
  • Nick Black

Congratulations!

19 September, 2020 02:00PM by Jean-Pierre Giraud

Russell Coker

Burning Lithium Ion Batteries

I had an old Nexus 4 phone that was expanding and decided to test some of the theories about battery combustion.

The first claim that often gets made is that if the plastic seal on the outside of the battery is broken then the battery will catch fire. I tested this by cutting the battery with a craft knife. With every cut the battery sparked a bit and then when I levered up layers of the battery (it seems to be multiple flat layers of copper and black stuff inside the battery) there were more sparks. The battery warmed up, it’s plausible that in a confined environment that could get hot enough to set something on fire. But when the battery was resting on a brick in my backyard that wasn’t going to happen.

The next claim is that a Li-Ion battery fire will be made worse by water. The first thing to note is that Li-Ion batteries don’t contain Lithium metal (the Lithium high power non-rechargeable batteries do). Lithium metal will seriously go off if exposed to water. But lots of other Lithium compounds will also react vigorously with water (like Lithium oxide for example). After cutting through most of the center of the battery I dripped some water in it. The water boiled vigorously and the corners of the battery (which were furthest away from the area I cut) felt warmer than they did before adding water. It seems that significant amounts of energy are released when water reacts with whatever is inside the Li-Ion battery. The reaction was probably giving off hydrogen gas but didn’t appear to generate enough heat to ignite hydrogen (which is when things would really get exciting). Presumably if a battery was cut in the presence of water while in an enclosed space that traps hydrogen then the sparks generated by the battery reacting with air could ignite hydrogen generated from the water and give an exciting result.

It seems that a CO2 fire extinguisher would be best for a phone/tablet/laptop fire as that removes oxygen and cools it down. If that isn’t available then a significant quantity of water will do the job: water won’t stop the reaction (it can prolong it), but it can keep the temperature below 100C, which means it won’t burn a hole in the floor and the range of toxic chemicals released will be reduced.

The rumour that a phone fire on a plane could do a “China syndrome” type thing and melt through the Aluminium body of the plane seems utterly bogus. I gave it a good try and was unable to get a battery to burn through its plastic and metal foil case. A spare battery for a laptop in checked luggage could be a major problem for a plane if it ignited. But a battery in the passenger area seems unlikely to be a big problem if plenty of water is dumped on it to prevent the plastic case from burning and polluting the air.

I was not able to get a result that was even worthy of a photograph. I may do further tests with laptop batteries.

19 September, 2020 07:27AM by etbe

Jelmer Vernooij

Debian Janitor: Expanding Into Improving Multi-Arch

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As of dpkg 1.16.2 and apt 0.8.13, Debian has full support for multi-arch. To quote from the multi-arch implementation page:

Multiarch lets you install library packages from multiple architectures on the same machine. This is useful in various ways, but the most common is installing both 64- and 32-bit software on the same machine and having dependencies correctly resolved automatically. In general you can have libraries of more than one architecture installed together and applications from one architecture or another installed as alternatives.

The Multi-Arch specification describes a new Multi-Arch header which can be used to indicate how to resolve cross-architecture dependencies.

The existing Debian Multi-Arch hinter is a version of dedup.debian.net that compares binary packages between architectures and suggests fixes to resolve multi-arch problems. It provides hints as to what Multi-Arch fields can be set, allowing the packages to be safely installed in a Multi-Arch world. The full list of almost 10,000 hints generated by the hinter is available at https://dedup.debian.net/static/multiarch-hints.yaml.

Recent versions of lintian-brush now include a command called apply-multiarch-hints that downloads and locally caches the hints and can apply them to a package maintained in Git. For example, to apply multi-arch hints to autosize.js:

 $ debcheckout autosize.js
 declared git repository at https://salsa.debian.org/js-team/autosize.js.git
 git clone https://salsa.debian.org/js-team/autosize.js.git autosize.js ...
 Cloning into 'autosize.js'...
 [...]
 $ cd autosize.js
 $ apply-multiarch-hints
 Downloading new version of multi-arch hints.
 libjs-autosize: Add Multi-Arch: foreign.
 node-autosize: Add Multi-Arch: foreign.
 $ git log -p
 commit 3f8d1db5af4a87e6ebb08f46ddf79f6adf4e95ae (HEAD -> master)
 Author: Jelmer Vernooij <jelmer@debian.org>
 Date:   Fri Sep 18 23:37:14 2020 +0000

     Apply multi-arch hints.
     + libjs-autosize, node-autosize: Add Multi-Arch: foreign.

     Changes-By: apply-multiarch-hints

 diff --git a/debian/changelog b/debian/changelog
 index e7fa120..09af4a7 100644
 --- a/debian/changelog
 +++ b/debian/changelog
 @@ -1,3 +1,10 @@
 +autosize.js (4.0.2~dfsg1-5) UNRELEASED; urgency=medium
 +
 +  * Apply multi-arch hints.
 +    + libjs-autosize, node-autosize: Add Multi-Arch: foreign.
 +
 + -- Jelmer Vernooij <jelmer@debian.org>  Fri, 18 Sep 2020 23:37:14 -0000
 +
  autosize.js (4.0.2~dfsg1-4) unstable; urgency=medium

    * Team upload
 diff --git a/debian/control b/debian/control
 index 01ca968..fbba1ae 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -20,6 +20,7 @@ Architecture: all
  Depends: ${misc:Depends}
  Recommends: javascript-common
  Breaks: ruby-rails-assets-autosize (<< 4.0)
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - NodeJS
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,
 @@ -32,6 +33,7 @@ Package: node-autosize
  Architecture: all
  Depends: ${misc:Depends}
   , nodejs
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - Javascript
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,

The Debian Janitor also has a new multiarch-fixes suite that runs apply-multiarch-hints across packages in the archive and proposes merge requests. For example, you can see the merge request against autosize.js here.

For more information about the Janitor's lintian-fixes efforts, see the landing page.

19 September, 2020 12:45AM by Jelmer Vernooij, Perry Lorrier

September 18, 2020

hackergotchi for Daniel Lange

Daniel Lange

Fixing the Nextcloud menu to show more than eight application icons

I have been late to adopt an on-premise cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely and we use it for collaboration both in the company and in FLOSS related work at multiple nonprofit organizations.

There is a very annoying "feature" in Nextcloud though: the designers think menu items for apps at the top need to be limited to eight or fewer to prevent information overload in the header. The whole issue discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.

And of course designers think they are right. That's a feature of the trade.
And because they know better, there is no user-configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use:

Screenshot of stock Nextcloud menu

Luckily code can be changed and there are many comments floating around the Internet suggesting to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the JavaScript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally just to change one constant.

Basically

const breakpoint_mobile_width = 1024;

const resizeMenu = () => {
    const appList = $('#appmenu li')
    const rightHeaderWidth = $('.header-right').outerWidth()
    const headerWidth = $('header').outerWidth()
    const usePercentualAppMenuLimit = 0.33
    const minAppsDesktop = 8
    let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
    const isMobile = $(window).width() < breakpoint_mobile_width
    if (!isMobile) {
        availableWidth = availableWidth * usePercentualAppMenuLimit
    }
    let appCount = Math.floor((availableWidth / $(appList).width()))
    if (isMobile && appCount > minAppsDesktop) {
        appCount = minAppsDesktop
    }
    if (!isMobile && appCount < minAppsDesktop) {
        appCount = minAppsDesktop
    }

    // show at least 2 apps in the popover
    if (appList.length - 1 - appCount >= 1) {
        appCount--
    }

    $('#more-apps a').removeClass('active')
    let lastShownApp
    for (let k = 0; k < appList.length - 1; k++) {
        const name = $(appList[k]).data('id')
        if (k < appCount) {
            $(appList[k]).removeClass('hidden')
            $('#apps li[data-id=' + name + ']').addClass('in-header')
            lastShownApp = appList[k]
        } else {
            $(appList[k]).addClass('hidden')
            $('#apps li[data-id=' + name + ']').removeClass('in-header')
            // move active app to last position if it is active
            if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
                $(lastShownApp).addClass('hidden')
                $('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
                $(appList[k]).removeClass('hidden')
                $('#apps li[data-id=' + name + ']').addClass('in-header')
            }
        }
    }

    // show/hide more apps icon
    if ($('#apps li:not(.in-header)').length === 0) {
        $('#more-apps').hide()
        $('#navigation').hide()
    } else {
        $('#more-apps').show()
    }
}

gets compressed during build time to become part of one 15,000+ character line. The relevant portion reads:

var f=function(){var e=s()("#appmenu li"),t=s()(".header-right").outerWidth(),n=s()("header").outerWidth()-s()("#nextcloud").outerWidth()-(t>210?t:210),i=s()(window).width()<1024;i||(n*=.33);var r,o=Math.floor(n/s()(e).width());i&&o>8&&(o=8),!i&&o<8&&(o=8),e.length-1-o>=1&&o--,s()("#more-apps a").removeClass("active");for(var a=0;a<e.length-1;a++){var l=s()(e[a]).data("id");a<o?(s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header"),r=e[a]):(s()(e[a]).addClass("hidden"),s()("#apps li[data-id="+l+"]").removeClass("in-header"),o>0&&s()(e[a]).children("a").hasClass("active")&&(s()(r).addClass("hidden"),s()("#apps li[data-id="+s()(r).data("id")+"]").removeClass("in-header"),s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header")))}0===s()("#apps li:not(.in-header)").length?(s()("#more-apps").hide(),s()("#navigation").hide()):s()("#more-apps").show()}

Well, we can still patch that, can't we?
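
Sure. As a quick illustration (not taken from the original article, which continues below), the constant can be bumped in the minified bundle with a few lines of Python. This assumes the minified code still contains the o>8&&(o=8),!i&&o<8&&(o=8) pattern shown above; since the variable names change between builds, the script refuses to touch the file unless the pattern matches exactly once:

#!/usr/bin/env python3
# Hypothetical sketch: bump the hard-coded app menu limit in the minified bundle.
# Keep a backup and redo this after every Nextcloud upgrade, which rewrites main.js.
import re
import shutil

MAIN_JS = "core/js/dist/main.js"   # path relative to the Nextcloud root
NEW_LIMIT = 12

shutil.copy(MAIN_JS, MAIN_JS + ".orig")
with open(MAIN_JS, encoding="utf-8") as f:
    code = f.read()

# The two comparisons against minAppsDesktop, as seen in the minified output
# above; the variable names (i, o) may differ in other builds.
patched, count = re.subn(
    r"&&o>8&&\(o=8\),!i&&o<8&&\(o=8\)",
    f"&&o>{NEW_LIMIT}&&(o={NEW_LIMIT}),!i&&o<{NEW_LIMIT}&&(o={NEW_LIMIT})",
    code)
if count != 1:
    raise SystemExit("pattern not found exactly once; not patching")

with open(MAIN_JS, "w", encoding="utf-8") as f:
    f.write(patched)
print("patched", MAIN_JS)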

Continue reading "Fixing the Nextcloud menu to show more than eight application icons"

18 September, 2020 10:17AM by Daniel Lange

Sven Hoexter

Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard.

The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.

gh functions and aliases

# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"

#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"

### github support functions, most invoked with a PR ID as $1

#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}

# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}

# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}

# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}

# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}

wtfutil

I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:

github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github

The -label:dependencies is used here to filter out dependabot PRs in the dashboard.

Workflow

Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.

Security Considerations

The world is full of bad jokes. For WebUI access I have the full array of pain: SAML auth, which expires too often, and second-factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access; everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

18 September, 2020 10:05AM

September 17, 2020

Russell Coker

Dell BIOS Updates

I have just updated the BIOS on a Dell PowerEdge T110 II. The process isn’t too difficult, Google for the machine name and BIOS, download a shell script encoded firmware image and GPG signature, then run the script on the system in question.

One problem is that the Dell GPG key isn’t signed by anyone. How hard would it be to get a few well connected people in the Linux community to sign the key used for signing Linux scripts for updating the BIOS? I would be surprised if Dell doesn’t employ a few people who are well connected in the Linux community, they should just ask all employees to sign such GPG keys! Failing that there are plenty of other options. I’d be happy to sign the Dell key if contacted by someone who can prove that they are a responsible person in Dell. If I could phone Dell corporate and ask for the engineering department and then have someone tell me the GPG fingerprint I’ll sign the key and that problem will be partially solved (my key is well connected but you need more than one signature).

The next issue is how to determine that a BIOS update works. What you really don’t want is to have a BIOS update fail and brick your system! So the Linux update process loads the image into something (special firmware RAM maybe) and then reboots the system and the reboot then does a critical part of the update. If the reboot doesn’t work then you end up with the old version of the BIOS. This is overall a good thing.

The PowerEdge T110 II is a workstation with an NVidia video card (I tried an ATI card but that wouldn’t boot for unknown reasons). The Nouveau driver has some issues. One thing I have done to work around some Nouveau issues is to create a file “~/.config/plasma-workspace/env/nouveau-broken.sh” (for KDE sessions) with the following contents:

export LIBGL_ALWAYS_SOFTWARE=1

I previously wrote about using this just for Kmail to stop it crashing [1]. But after doing that I still had other problems with video and disabling all GL on the NVidia card was necessary.

The latest problem I’ve had is that even when using that configuration things don’t go well. When I run the “reboot” command I end up with a kernel message about the GPU not responding and then it doesn’t reboot. That means that the BIOS update doesn’t apply, a hard reboot signals to the system that the new BIOS wasn’t good and I end up with the old BIOS again. I discovered that disabling sddm (the latest xdm program in Debian) from starting on boot meant that a reboot command would work. Then I ran the BIOS update script and its reboot command worked and gave a successful BIOS update.

So I’ve gone from a 2013 BIOS to a 2018 BIOS! The update list says that some CVEs have been addressed, but the spectre-meltdown-checker doesn’t report any fewer vulnerabilities.

17 September, 2020 03:18PM by etbe

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.1: New and Exciting Logging Package

Very thrilled to announce a new package RcppSpdlog which is now on CRAN in its first release 0.0.1. We had tweeted once about the earliest version which had already caught the eye of Gabi upstream.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

I had meant to package this for a few years now but didn’t find an (easy, elegant) way to completely lift the stdout / stderr uses which R wants us to remove in favour of R’s own, synchronized input/output. It was only a few weeks ago that I realized I should subclass a logger (or, more concretely, a sink for a logger) which could then use R input/output. With the right idea the implementation was easy, and Gabi was most helpful in making sure R CMD check would not see one or two remaining C++ i/o operations (which we currently handle by not activating a default logger, and by substituting REprintf() in one call). So this is now clean, and a simple use is included in an example in the package which we can show here too (in slightly shorter form, minus the documentation header):

// this portmanteau include also defines the r_sink we use below, and which
// diverts all logging to R via the Rcpp::Rcout replacement for std::cout
#include <RcppSpdlog>

// [[Rcpp::export]]
void exampleRsink() {

    std::string logname = "fromR";                          // fix a name for this logger
    auto sp = spdlog::get(logname);                         // retrieve existing one
    if (sp == nullptr) sp = spdlog::r_sink_mt(logname);     // or create new one if needed

    // change log pattern (changed from [%H:%M:%S %z] [%n] [%^---%L---%$] )
    sp->set_pattern("[%H:%M:%S.%f] [%n] [%^%L%$] [thread %t] %v");

    sp->info("Welcome to spdlog!");
    sp->error("Some error message with arg: {}", 1);

    sp->warn("Easy padding in numbers like {:08d}", 12);
    sp->critical("Support for int: {0:d};  hex: {0:x};  oct: {0:o}; bin: {0:b}", 42);
    sp->info("Support for floats {:03.2f}", 1.23456);
    sp->info("Positional args are {1} {0}..", "too", "supported");
    sp->info("{:<30}", "left aligned");

}

The NEWS entry for the first release follows.

Changes in RcppSpdlog version 0.0.1 (2020-09-08)

  • Initial release with added R/Rcpp logging sink example

The only sour grapes, if any, are over the CRAN processing. This was originally uploaded three weeks ago. As a new package, it got extra attention, including some truly idiosyncratic attention to two details that were already supplied in the first uploaded version. Yet it needed two rounds of going back and forth for really no great net gain, wasting a week each time. I am not all that impressed by this, and not particularly pleased either, but I presume it is the “tax” we all pay in order to enjoy the unsurpassed richness of the CRAN repository system which continues to work just flawlessly.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 September, 2020 12:54AM

September 16, 2020

hackergotchi for Steve Kemp

Steve Kemp

Implementing a FORTH-like language ..

Four years ago somebody posted a comment-thread describing how you could start writing a little reverse-polish calculator, in C, and slowly improve it until you had written a minimal FORTH-like system:

At the time I read that comment I'd just hacked up a simple FORTH REPL of my own, in Perl, and I said "thanks for posting". I was recently reminded of this discussion, and decided to work through the process.

Using only minimal outside resources the recipe worked as expected!

The end-result is I have a working FORTH-lite, or FORTH-like, interpreter written in around 2000 lines of golang! Features include:

  • Reverse-Polish mathematical operations.
  • Comments between ( and ) are ignored, as expected.
    • Single-line comments \ to the end of the line are also supported.
  • Support for floating-point numbers (anything that will fit inside a float64).
  • Support for printing the top-most stack element (., or print).
  • Support for outputting ASCII characters (emit).
  • Support for outputting strings (." Hello, World ").
  • Support for basic stack operations (drop, dup, over, swap)
  • Support for loops, via do/loop.
  • Support for conditional-execution, via if, else, and then.
  • Load any files specified on the command-line
    • If no arguments are included run the REPL
  • A standard library is loaded, from the present directory, if it is present.

To give a flavour here we define a word called star which just outputs a single star character:

: star 42 emit ;

Now we can call that (NOTE: We didn't add a newline here, so the REPL prompt follows it, that's expected):

> star
*>

To make it more useful we define the word "stars" which shows N stars:

> : stars dup 0 > if 0 do star loop else drop then ;
> 0 stars
> 1 stars
*> 2 stars
**> 10 stars
**********>

This example uses both if to test that the parameter on the stack was greater than zero, as well as do/loop to handle the repetition.

Finally we use that to draw a box:

> : squares 0 do over stars cr loop ;
> 4 squares
****
****
****
****

> 10 squares
**********
**********
**********
**********
**********
**********
**********
**********
**********
**********

For fun we allow decompiling the words too:

> #words 0 do dup dump loop
..
Word 'square'
 0: dup
 1: *
Word 'cube'
 0: dup
 1: square
 2: *
Word '1+'
 0: store 1.000000
 2: +
Word 'test_hot'
  0: store 0.000000
  2: >
  3: if
  4: [cond-jmp 7.000000]
  6: hot
  7: then
..

Anyway if that is at all interesting feel free to take a peek. There's a bit of hackery there to avoid the use of return-stacks, etc. Compared to gforth this is actually more featureful in some areas:

  • I allow you to use conditionals in the REPL - outside a word-definition.
  • I allow you to use loops in the REPL - outside a word-definition.

Find the code here:

16 September, 2020 09:00PM

Bastian Blank

Salsa hosted 1e6 CI jobs

Today, Salsa hosted its 1,000,000th CI job. The prize for hitting the target goes to the Cloud team. The job itself was not that interesting, but it was successful.

16 September, 2020 07:00PM by Bastian Blank

hackergotchi for Bits from Debian

Bits from Debian

Debian Local Groups at DebConf20 and beyond

There are a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

There has been a session about Debian Local Teams at DebConf20 and it generated quite a bit of constructive discussion in the live stream (recording available at https://meetings-archive.debian.net/pub/debian-meetings/2020/DebConf20/), in the session's Etherpad and in the IRC channel (#debian-localgroups). This article is an attempt at summarizing the key points raised during that discussion, as well as the plans for future actions to support new or existing Debian Local Groups and the possibility of setting up a local group support team.

Pandemic situation

During a pandemic it may seem strange to discuss offline meetings, but this is a good time to be planning things for the future. At the same time, the current situation makes it more important than before to encourage local interaction.

Reasoning for local groups

Debian can seem scary for those outside. Already having a connection to Debian - especially to people directly involved in it - seems to be the way through which most contributors arrive. But if one doesn't have a connection, it is not that easy; Local Groups facilitate that by improving networking.

Local groups are incredibly important to the success of Debian since they often help with translations, making us more diverse, support, setting up local bug squashing sprints, establishing a local DebConf team along with miniDebConfs, getting sponsors for the project and much more.

The existence of Local Groups would also facilitate access to "swag" like stickers and mugs, since people do not always have the time to deal with the process of finding a supplier to actually get those made. The activity of local groups might facilitate that by organizing related logistics.

How to deal with local groups, how to define a local group

Debian gathers the information about Local Groups in its Local Groups wiki page (and subpages). Other organisations also have their own schemes, some of them featuring a map, blogs, or clear rules about what constitutes a local group. In the case of Debian there is not a predefined set of "rules", even about the group name. That is perfectly fine, we assume that certain local groups may be very small, or temporary (created around a certain time when they plan several activities, and then become silent). However, the way the groups are named and how they are listed on the wiki page sets expectations with regards to what kinds of activities they involve.

For this reason, we encourage all the Debian Local Groups to review their entries in the Debian wiki, keep them current (e.g. add a line "Status: Active (2020)"), and we encourage informal groups of Debian contributors that somehow "meet" to create a new entry in the wiki page, too.

What can Debian do to support Local Groups

Having a centralized database of groups is good (if kept up-to-date), but not enough. We'll explore other ways of propagation and increasing visibility, like organising the logistics of printing/sending swag and facilitating access to funding for Debian-related events.

Continuation of efforts

Efforts shall continue regarding Local Groups. Regular meetings are happening every two or three weeks; interested people are encouraged to explore some other relevant DebConf20 talks (Introducing Debian Brasil, Debian Academy: Another way to share knowledge about Debian, An Experience creating a local community on a small town), websites like Debian flyers (including other printed material as cube, stickers), visit the events section of the Debian website and the Debian Locations wiki page, and participate in the IRC channel #debian-localgroups at OFTC.

16 September, 2020 05:15PM by Francisco M. Neto and Laura Arjona Reina

hackergotchi for Joey Hess

Joey Hess

comically bad shipping estimates and middlemen

My inverter has unfortunately died, and I wanted to replace it with the same model. Ideally before I lose the contents of the fridge. It's a 24v inverter, which is not at all as easy to find a replacement for as a 12v inverter would be.

Somehow Walmart was the only retailer that had it available with a delivery estimate: Just 2 days.

It's the second day now, with no indication they've shipped it. I noticed the "sold and shipped by Zoro", so went and found it on that website.

So, the reality is it ships direct from China via container ship. As does every product from Zoro, which all show as 2 day delivery on Walmart's website.

I don't think this is a pandemic thing. I think it's a trying to compete with Amazon and failing thing.


My other comically bad shipping estimate this pandemic was from Amazon though. There was a run this summer on Kayaks, because social distancing is great on the water. I found a high quality inflatable kayak.

Amazon said "only 2 left in stock" and promised delivery in 1 week. One week later, it had not shipped, and they updated the delivery estimate forward 1 week. A week after that, ditto.

Eventually I bought a new model from the same manufacturer, Advanced Elements. Unfortunately, that kayak exploded the second time I inflated it, due to a manufacturing defect.

So I got in touch with Advanced Elements and they offered a replacement. I asked if, instead, they maybe still had any of the older model of kayak I had tried to order. They checked their warehouse, and found "the last one" in a corner somewhere.

No shipping estimate was provided. It arrived in 3 days.

16 September, 2020 01:06AM

September 15, 2020

Molly de Blanc

“Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” W. Quinn

There are a lot of interesting and valid things to say about the philosophy and actual arguments of the “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I’m much more attracted to the many things implied in this paper. Among them are the role of social responsibility in making moral decisions.

At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn makes include being a private life guard (and being responsible for the life of one particular person) and being a trolley driver (and your responsibility is to make sure the train doesn’t kill anyone). This is part of what has led to me brushing Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones or whether there are times when this might occur.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single use plastic cup; I shop at the store with questionable labor practices; I use Facebook.  But also, I don’t give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass to not. These things make society work as it is, and it makes me work within society.

This is a self-perpetuating, mutually-abusive, co-dependent relationship. I must tell myself stories about how it is okay that I am preferring the status quo, that I am buying into the system, because I need to do it to survive within it and that people are relying on the system as it stands to survive, because that is how they know to survive.

Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or in-actions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we’re hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don’t know if I actually feel that this is correct.

My goal was to make this only 500 words, so I am going to stop here.

15 September, 2020 01:28PM by mollydb

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.


15 September, 2020 10:01AM by Raphaël Hertzog

Russell Coker

More About the PowerEdge T710

I’ve got the T710 (mentioned in my previous post [1]) online. When testing the T710 at home I noticed that sometimes the VGA monitor I was using would start flickering when in some parts of the BIOS setup, it seemed that the horizontal sync wasn’t working properly. It didn’t seem to be a big deal at the time. When I deployed it the KVM display that I had planned to use with it mostly didn’t display anything. When the display was working the KVM keyboard wouldn’t work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the T710 also wouldn’t work with my VGA->HDMI device so I couldn’t get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to change 0 to 9 when entering numbers, that would make it easier to enter 8.8.8.8 for the DNS server. Another thing Dell should do is make the default gateway have a default value according to the IP address and netmask of the server.

When I got IDRAC going it was easy to setup a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller, and then reboot into the server image.

When I transferred the SSDs from the old server to the newer Dell server the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in so they are hanging in mid air attached only by the SATA/SAS connectors. Plugging them in took the space from the above drive, so instead of having 2*3.5″ disks I have 1*2.5″ SSD and need the extra space to get my hand in. The T710 is designed for 6*3.5″ disks and I’m going to have trouble if I ever want to have more than 3*2.5″ SSDs. Fortunately I don’t think I’ll need more SSDs.

After booting the system I started getting alerts about a “fault” in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it’s in an M.2 to regular SATA adaptor which might have some problems. The data seems fine though, a BTRFS scrub found no checksum errors. I guess I’ll have to buy a replacement SSD soon.

I configured the system to use the “nosmt” kernel command line option to disable hyper-threading (which won’t provide much performance benefit but which makes certain types of security attacks much easier). I’ve configured BOINC to run on 6/8 CPU cores and surprisingly that didn’t cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn’t need a lot of cooling when run with SSDs.

15 September, 2020 04:30AM by etbe

hackergotchi for Norbert Preining

Norbert Preining

GIMP washed out colors: Color to Alpha and layer recombination

Just to remind myself because I keep forgetting it again and again: If you get washed out colors when doing color to alpha and then recombine layers in GIMP, that is due to the new default in GIMP 2.10 that combines layers in linear RGB.

This creates problems because Color to Alpha works in perceptual RGB, and the recombination in linear creates washed out colors. The solution is to right click on the respective layer, select “Composite Space”, and there select “RGB (perceptual)”. Here is the “bug” report that has been open for 2 years.

Hoping that for next time I remember it.

15 September, 2020 12:30AM by Norbert Preining

September 14, 2020

hackergotchi for Jonathan McDowell

Jonathan McDowell

onak 0.6.1 released

Yesterday I did the first release of my OpenPGP compatible keyserver, onak, in 4 years. Actually, 2 releases because I discovered my detection for various versions of libnettle needed some fixing.

It was largely driven by the need to get an updated package sorted for Debian due to the removal of dh-systemd, but it should have come sooner. This release has a number of clean-ups for dealing with the hostility shown to the keyserver network in recent years. In particular it implements some of dkg’s Abuse-Resistant OpenPGP Keystores, and finally adds support for verifying signatures fully. That opens up the ability to run a keyserver that will only allow verifiable updates to keys. This doesn’t tie in with folk who want to run PGP based systems because of the anonymity, but for those of us who think PGP’s strength is in the web of trust it’s pretty handy. And it’s all configurable to taste; you can turn off all the verification if you want, or verify everything but not require any signatures, or even enable v3 keys if you feel like it.

The main reason this release didn’t come sooner is that I’m painfully aware of the bits that are missing. In particular:

  • Documentation. It’s all out of date, it needs a lot of work.
  • FastCGI support. Presently you need to run the separate CGI binaries.
  • SKS Gossip support. onak only supports the email syncing option. If you run a stand alone server this is fine, but Gossip is the right approach for a proper keyserver network.

Anyway. Available locally or via GitHub.

0.6.0 - 13th September 2020

  • Move to CMake over autoconf
  • Add support for issuer fingerprint subpackets
  • Add experimental support for v5 keys
  • Add read-only OpenPGP keyring backed DB backend
  • Move various bits into their own subdirectories in the source tree
  • Add support for full signature verification
  • Drop v3 keys by default when cleaning keys
  • Various code cleanups
  • Implement pieces of draft-dkg-openpgp-abuse-resistant-keystore-03
  • Add support for a fingerprint blacklist (e.g. Evil32)
  • Deprecate the .conf configuration file format
  • Drop version info from armored output
  • Add option to deny new keys and only allow updates to existing keys
  • Various pieces of work removing support for 32 bit key IDs and coping with colliding 64 bit key IDs.
  • Remove support for libnettle versions that lack the full SHA2 suite

0.6.1 - 13th September 2020

  • Fixes for compilation without nettle + with later releases of nettle

14 September, 2020 06:09PM

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Quick debugging of a Linux printer via cups command line tools

Step by step cups debugging (here with a network printer)

Which printer queue do I have configured ?
lpstat -p
printer epson is idle.  enabled since Sat Dec 24 13:18:09 2017
#here I have a printer called 'epson', doing nothing, which the cups daemon considers enabled

Which connection am I using to get to this printer ?
lpstat -v
device for epson: lpd://epson34dea0.local:515/PASSTHRU
# here the locally configured 'epson' printer queue is backed by a network device at the adress epson34dea0.local, to which I am sending my print jobs via the lpd protocol

Is my printer ready ?
lpq
epson is ready
no entries

# here my local print queue 'epson' is accepting print jobs (which does not say anything about the physical device, it might be offline)

If your local print queue 'epson' is not ready here, you can try to re-enable it in the cups system with:

sudo cupsenable epson

If you notice that the printer is disabled all the time, for instance because of a flaky network, you can edit /etc/cups/printers.conf and change the ErrorPolicy for each printer from stop-printer to retry-job.
It should also be possible to set this parameter in cupsd.conf

Finally you can print a test page with
lpr /usr/share/cups/data/testprint

14 September, 2020 05:17PM by Emmanuel Kasper (noreply@blogger.com)

Using Debian and RHEL troubleshooting containers on Kubernetes & OpenShift

You can connect to a running pod with oc rsh pod_name (or kubectl exec -it pod_name -- sh), or start a copy of a running pod with oc debug pod_name, but as best practices recommend unprivileged, slim container images, where do you get sosreport, kdump, dig and nmap for troubleshooting?

Fortunately you can start either a transient Debian troubleshooting container with:

oc run troubleshooting-pod --stdin --tty --rm --image=docker.io/library/debian:buster

or a Red Hat Enterprise Linux one:

oc run troubleshooting-pod --stdin --tty --rm --image=registry.access.redhat.com/rhel7/rhel-tools

14 September, 2020 03:18PM by Emmanuel Kasper (noreply@blogger.com)

Russ Allbery

Review: Who Do You Serve, Who Do You Protect?

Review: Who Do You Serve, Who Do You Protect?, edited by Maya Schenwar, et al.

Editor: Maya Schenwar
Editor: Joe Macaré
Editor: Alana Yu-lan Price
Publisher: Haymarket Books
Copyright: June 2016
ISBN: 1-60846-684-1
Format: Kindle
Pages: 250

Who Do You Serve, Who Do You Protect? is an anthology of essays about policing in the United States. It's divided into two sections: one that enumerates ways that police are failing to serve or protect communities, and one that describes how communities are building resistance and alternatives. Haymarket Books (a progressive press in Chicago) has made it available for free in the aftermath of the George Floyd killing and resulting protests in the United States.

I'm going to be a bit unfair to this book, so let me start by admitting that the mismatch between it and the book I was looking for is not entirely its fault.

My primary goal was to orient myself in the discussion on the left about alternatives to policing. I also wanted to sample something from Haymarket Books; a free book was a good way to do that. I was hoping for a collection of short introductions to current lines of thinking that I could selectively follow in longer writing, and an essay collection seemed ideal for that.

What I had not realized (which was my fault for not doing simple research) is that this is a compilation of articles previously published by Truthout, a non-profit progressive journalism site, in 2014 and 2015. The essays are a mix of reporting and opinion but lean towards reporting. The earliest pieces in this book date from shortly after the police killing of Michael Brown, when racist police violence was (again) reaching national white attention.

The first half of the book is therefore devoted to providing evidence of police abuse and violence. This is important to do, but it's sadly no longer as revelatory in 2020, when most of us have seen similar things on video, as it was to white America in 2014. If you live in the United States today, while you may not be aware of the specific events described here, you're unlikely to be surprised that Detroit police paid off jailhouse informants to provide false testimony ("Ring of Snitches" by Aaron Miguel Cantú), or that Chicago police routinely use excessive deadly force with no consequences ("Amid Shootings, Chicago Police Department Upholds Culture of Impunity" by Sarah Macaraeg and Alison Flowers), or that there is a long history of police abuse and degradation of pregnant women ("Your Pregnancy May Subject You to Even More Law Enforcement Violence" by Victoria Law). There are about eight essays along those lines.

Unfortunately, the people who excuse or disbelieve these stories are rarely willing to seek out new evidence, let alone read a book like this. That raises the question of intended audience for the catalog of horrors part of this book. The answer to that question may also be the publication date; in 2014, the base of evidence and example for discussion had not been fully constructed. This sort of reporting is also obviously relevant in the original publication context of web-based journalism, where people may encounter these accounts individually through social media or other news coverage. In 2020, they offer reinforcement and rhetorical evidence, but I'm dubious that the people who would benefit from this knowledge will ever see it in this form. Those of us who will are already sickened, angry, and depressed.

My primary interest was therefore in the second half of the book: the section on how communities are building resistance and alternatives. This is where I'm going to be somewhat unfair because the state of that conversation may have been different in 2015 than it is now in 2020. But these essays were lacking the depth of analysis that I was looking for.

There is a human tendency, when one becomes aware of an obvious wrong, to simply publicize the horrible thing that is happening and expect someone to do something about it. It's obviously and egregiously wrong, so if more people knew about it, certainly it would be stopped! That has happened repeatedly with racial violence in the United States. It's also part of the common (and school-taught) understanding of the Civil Rights movement in the 1960s: activists succeeded in getting the violence on the cover of newspapers and on television, people were shocked and appalled, and the backlash against the violence created political change.

Putting aside the fact that this is too simplistic of a picture of the Civil Rights era, it's abundantly clear at this point in 2020 that publicizing racist and violent policing isn't going to stop it. We're going to have to do something more than draw attention to the problem. Deciding what to do requires political and social analysis, not just of the better world that we want to see but of how our current world can become that world.

There is very little in that direction in this book. Who Do You Serve, Who Do You Protect? does not answer the question of its title beyond "not us" and "white supremacy." While those answers are not exactly wrong, they're also not pushing the analysis in the direction that I wanted to read.

For example (and this is a long-standing pet peeve of mine in US political writing), it would be hard to tell from most of the essays in this book that any country besides the United States exists. One essay ("Killing Africa" by William C. Anderson) talks about colonialism and draws comparisons between police violence in the United States and international treatment of African and other majority-Black countries. One essay talks about US military behavior overseas ("Beyond Homan Square" by Adam Hudson). That's about it for international perspective. Notably, there is no analysis here of what other countries might be doing better.

Police violence against out-groups is not unique to the United States. No one has entirely solved this problem, but versions of this problem have been handled with far more success than here. The US has a comparatively appalling record; many countries in the world, particularly among comparable liberal democracies in Europe, are doing far better on metrics of racial oppression by agents of the government and of law enforcement violence. And yet it's common to approach these problems as if we have to develop a solution de novo, rather than ask what other countries are doing differently and if we could do some of those things.

The US has some unique challenges, both historical and with the nature of endemic violence in the country, so perhaps such an analysis would turn up too many US-specific factors to copy other people's solutions. But we need to do the analysis, not give up before we start. Novel solutions can lead to novel problems; other countries have tested, working improvements that could provide a starting framework and some map of potential pitfalls.

More fundamentally, only the last two essays of this book propose solutions more complex than "stop." The authors are very clear about what the police are doing, seem less interested in why, and are nearly silent on how to change it. I suspect I am largely in political agreement with most of the authors, but obviously a substantial portion of the country (let alone its power structures) is not, and therefore nothing is changing. Part of the project of ending police violence is understanding why the violence exists, picking apart the motives and potential fracture lines in the political forces supporting the status quo, and building a strategy to change the politics. That isn't even attempted here.

For example, the "who do you serve?" question of the book's title is more interesting than the essays give it credit. Police are not a monolith. Why do Black people become police officers? What are their experiences? Are there police forces in the United States that are doing better than others? What makes them different? Why do police act with violence in the moment? What set of cultural expectations, training experiences, anxieties, and fears lead to that outcome? How do we change those factors?

Or, to take another tack, why are police not held accountable even when there is substantial public outrage? What political coalition supports that immunity from consequences, what are its fault lines and internal frictions, and what portions of that coalition could be broken off, peeled away, or removed from power? To whom, institutionally, are police forces accountable? What public offices can aspiring candidates run for that would give them oversight capability? This varies wildly throughout the United States; political approaches that work in large cities may not work in small towns, or with county sheriffs, or with the FBI, or with prison guards.

To treat these organizations as a monolith and their motives as uniform is bad political tactics. It gives up points of leverage.

I thought the best essays of this collection were the last two. "Community Groups Work to Provide Emergency Medical Alternatives, Separate from Police," by Candice Bernd, is a profile of several local emergency response systems that divert emergency calls from the police to paramedics, mental health experts, or social workers. This is an idea that's now relatively mainstream, and it seems to be finding modest success where it has been tried. It's more of a harm mitigation strategy than an attempt to deal with the root problem, but we're going to need both.

The last essay, "Building Community Safety" by Ejeris Dixon, is the only essay in this book that is pushing in the direction that I was hoping to read. Dixon describes building an alternative system that can intervene in violent situations without using the police. This is fascinating and I'm glad that I read it.

It's also frustrating in context because Dixon's essay should be part of a discussion. Dixon describes spending years learning de-escalation techniques, doing hard work of community discussion and collective decision-making, and making deep investment in the skills required to handle violence without calling in a dangerous outside force. I greatly admire this approach (also common in parts of the anarchist community) and the people who are willing to commit to it. But it's an immense amount of work, and as Dixon points out, that work often falls on the people who are least able to afford it. Marginalized communities, for whom the police are often dangerous, are also likely to lack both time and energy to invest in this type of skill training. And many people simply will not do this work even if they do have the resources to do it.

More fundamentally, this approach conflicts somewhat with division of labor. De-escalation and social work are both professional skills that require significant time and practice to hone, and as much as I too would love to live in a world where everyone knows how to do some amount of this work, I find it hard to imagine scaling this approach without trained professionals. The point of paying someone to do this work as their job is that the money frees up their time to focus on learning those skills at a level that is difficult to do in one's free time. But once you have an organized group of professionals who do this work, you have to find a way to keep them from falling prey to the problems that plague the police, which requires understanding the origins of those problems. And that's putting aside the question of how large the residual of dangerous crime that cannot be addressed through any form of de-escalation might be, and what organization we should use to address it.

Dixon's essay is great; I wouldn't change anything about it. But I wanted to see the next essay engaging with Dixon's perspective and looking for weaknesses and scaling concerns, and then the next essay that attempts to shore up those weaknesses, and yet another essay that grapples with the challenging philosophical question of a government monopoly on force and how that can and should come into play in violent crime. And then essays on grass-roots organizing in the context of police reform or abolition, and on restorative justice, and on the experience of attempting police reform from the inside, and on how to support public defenders, and on the merits and weaknesses of focusing on electing reform-minded district attorneys. Unfortunately, none of those are here.

Overall, Who Do You Serve, Who Do You Protect? was a disappointment. It was free, so I suppose I got what I paid for, and I may have had a different reaction if I read it in 2015. But if you're looking for a deep discussion on the trade-offs and challenges of stopping police violence in 2020, I don't think this is the place to start.

Rating: 3 out of 10

14 September, 2020 04:27AM

September 13, 2020

Enrico Zini

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

mlocate slowness

/usr/bin/mlocate asdf > /dev/null  23.28s user 0.24s system 99% cpu 23.536 total
/usr/local/sbin/mlocate-fast asdf > /dev/null  1.97s user 0.27s system 99% cpu 2.251 total

That difference comes just from changing mbsstr to strstr, which I guess causes trouble for EUC_JP locales or something? But guys, scanning linearly through millions of entries is sort of outdated :-(

13 September, 2020 09:05PM

hackergotchi for Jonathan Carter

Jonathan Carter

Wootbook / Tongfang laptop

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, supports up to 64GB of RAM and clearly runs Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that these are made by Tongfang and that its model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU that’s nice and compact and breaks the 16GB memory barrier, finding this one that jumps all the way to 64GB sealed the deal for me.

These are the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz Octa Core CPU (4MB L2 cache, 8MB L3 cache, 7nm process).
  • 64GB RAM (2x DDR4 2666MHz 32GB modules)
  • 1TB NVMe disk
  • 14″ 1920×1080 (16:9 aspect ratio) matte display.
  • Real ethernet port (gigabit)
  • Intel Wifi 6 AX200 wireless ethernet
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps; this one connects to my home network between 800-1000Mbps. At this speed I don’t bother connecting via cable at home.
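
For reference, once the non-free section is enabled in apt (an assumption on my part; the installer can also be fed the firmware directly), pulling in the blobs afterwards is just:

sudo apt update
sudo apt install firmware-iwlwifi firmware-amd-graphics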

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of the questions I had about this device before buying it, questions that other people in the Debian community might also have if they’re interested in this device. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (that just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1kg instead of the 1.38kg of the X250. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook; you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.

Keyboard/Touchpad

On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this keyboard very comfortably in the above average range. I’ve been typing on it a lot over the last 3 days (including this blog post) and it started feeling natural very quickly and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. These are things normal users don’t care about, but since I use mc (where insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of my keyboard between alt and ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated pgup/pgdn buttons. I use those a lot in apps to switch between tabs. At least the Fn button and the ctrl buttons are next to each other, so pressing those together with up and down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor-button on the top left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left control + keyscan code 93. The airplane mode, volume and brightness buttons work fine.
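
In case I do end up remapping things, this is a minimal sketch of the kind of change I have in mind under X11 (the keycode below is just a placeholder; xev reports the real ones for this keyboard):

xev -event keyboard                # press a key and read its keycode from the output
xmodmap -e 'keycode 93 = Prior'    # placeholder keycode, mapped to PageUp as an example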

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces; your hands don’t have to move far from the keyboard for quick pointer operations and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle click with tapping 3 fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, this keyboard and touchpad are completely adequate. Only time will tell whether the keyboard still works fine a few years from now, but I really have no serious complaints about it.

Display

The X250 had a brightness of 172 nits. That’s not very bright; I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently; my eyes are very photo-sensitive so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.

Ports

The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one HDMI port. Initially I was a bit concerned at the lack of DisplayPort (which my X250 has), but with an adapter it’s possible to convert the USB-C port to DisplayPort, and it seems like it’s possible to connect up to 3 external displays without resorting to something weird like running a display over plain USB 3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, the biggest downfall it has is really its pointing device. It doesn’t have a trackpoint and the touchpad is ok and completely usable, but not great. I use my laptop on a desk most of the time so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double what I paid for this machine. It doesn’t seem like you can really buy the perfect laptop no matter how much money you want to spend; there’s some compromise no matter what you end up choosing, but this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this and with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

13 September, 2020 08:44PM by jonathan

Russell Coker

Setting Up a Dell T710 Server with DRAC6

I’ve just got a Dell T710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step-by-step guide as the situation doesn’t lend itself to that. I think that a list of how things are broken and what to avoid is more useful.

BIOS

Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something when it’s working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run; it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so upgrading your distribution at the same time as the firmware install is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).
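
As a rough sketch of that process (the filename below is hypothetical, use whatever Dell’s support site gives you for your system):

# filename is hypothetical, run as root with both PSUs connected
chmod +x ./BIOS_T710_LN_6.6.0.BIN
./BIOS_T710_LN_6.6.0.BIN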

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the T710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP, you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge T710 recommends IE 6.

To get an ssl cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC; the only field that matters is the CN (Common Name), and the rest just have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory; the SSL key for the DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run letsencrypt the file name of the output file you want will be in the format “*_chain.pem“. You then have to upload that to IDRAC to have it used. This is a major pain given the 90 day lifetime of letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in.

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.
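
As a sketch of the certbot half of that round trip (the CSR filename is a placeholder, and depending on how you prove control of the name you may need to add an authenticator option such as --standalone):

# drac.csr is a placeholder for the CSR generated on the DRAC
certbot certonly --config-dir /etc/letsencrypt-drac --csr drac.csr
# the file to upload to IDRAC is the one matching *_chain.pem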

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The T710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won’t run this, so you download the .jnlp file and pass it as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘, and Google didn’t turn up any solutions to this. Java 8 didn’t have that problem but had a “connection failed” error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal. So I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising Serial console setup for DRAC [1]. Once you have done that, put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console remotely you can ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.
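
On the Debian side the standard way to make that kernel command line permanent is via GRUB, and you probably also want a login getty on the port:

# add to /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8"
# then regenerate the config and enable a getty on the serial port
update-grub
systemctl enable --now serial-getty@ttyS1.service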

If I was supporting many Dell systems I’d probably set up an ssh-to-JavaScript gateway to provide web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, then any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside having “racreset” being so close to “racresetcfg” (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management

deb http://linux.dell.com/repo/community/ubuntu xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going.
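Something like the following should get it going (the filename under sources.list.d is arbitrary, and the Dell repository signing key is glossed over here, you may need to import it first):

# filename under sources.list.d is arbitrary
echo 'deb http://linux.dell.com/repo/community/ubuntu xenial openmanage' > /etc/apt/sources.list.d/dell-openmanage.list
apt update
apt install srvadmin-storageservices-cli srvadmin-omacore
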

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0

RAID Control

The RAID controller is known as PERC (PowerEdge Raid Controller); the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have been built with several different library versions. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show”, which gives detailed information on controller 0, SEGVs and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are called from old libc code to work, or I could run code from a dubious source as root.

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely, mine doesn’t have that. As an aside the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find so blog posts from other sysadmins is the best source of information.

I have configured IDRAC to have all of the BIOS output go to the virtual serial console over ssh so I can see the BIOS prompt me for PERC administration, but it didn’t accept my key presses when I tried to do so. In any case this requires downtime and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3], it uses the ptrace interface to emulate vsyscall on a per process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

13 September, 2020 10:57AM by etbe

September 12, 2020

Russ Allbery

PGP::Sign 1.03

Part of the continuing saga to clean up CPAN testing failures with this module. Test reports uncovered a tighter version dependency for the flags I was using for GnuPG v2 (2.1.23) and a version dependency for GnuPG v1 (1.4.20). As with the previous release, I'm now skipping tests if the version is too old, since this makes the CPAN test results more useful to me.

I also took advantage of this release to push the Debian packaging to Salsa (and the upstream branch as well since it's easier) and update the package metadata, as well as add an upstream metadata file since interest in that in Debian appears to have picked up again.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

12 September, 2020 10:28PM

Ryan Kavanagh

Configuring OpenIKED VPNs for StrongSwan Clients

A few weeks ago I configured a road warrior VPN setup. The remote end is on a VPS running OpenBSD and OpenIKED, the VPN is an IKEv2 VPN using x509 authentication, and the local end is StrongSwan. I also configured an IKEv2 VPN between my VPSs. Here are the notes for how to do so.

In all cases, to use x509 authentication, you will need to generate a bunch of certificates and keys:

  • a CA certificate
  • a key/certificate pair for each client

Fortunately, OpenIKED provides the ikectl utility to help you do so. Before going any further, you might find it useful to edit /etc/ssl/ikeca.cnf to set some reasonable defaults for your certificates.

Begin by creating and installing a CA certificate:

# ikectl ca vpn create
# ikectl ca vpn install

For simplicity, I am going to assume that you are managing your CA on the same host as one of the hosts that you want to configure for the VPN. If not, see the bit about exporting certificates at the beginning of the section on persistent host-host VPNs.

Create and install a key/certificate pair for your server. Suppose for example your first server is called server1.example.org:

# ikectl ca vpn certificate server1.example.org create
# ikectl ca vpn certificate server1.example.org install

Persistent host-host VPNs

For each other server that you want to use, you need to also create a key/certificate pair on the same host as the CA certificate, and then copy them over to the other server. Assuming the other server is called server2.example.org:

# ikectl ca vpn certificate server2.example.org create
# ikectl ca vpn certificate server2.example.org export

This last command will produce a tarball server2.example.org.tgz. Copy it over to server2.example.org and install it:

# tar -C /etc/iked -xzpvf server2.example.org.tgz

Next, it is time to configure iked. To do so, you will need to find some information about the certificates you just generated. On the host with the CA, run

$ cat /etc/ssl/vpn/index.txt
V       210825142056Z           01      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org
V       210825142208Z           02      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org

Pick one of the two hosts to play the “active” role (in this case, server1.example.org). Using the information you gleaned from index.txt, add the following to /etc/iked.conf, filling in the srcid and dstid fields appropriately.

ikev2 'server1_server2_active' active esp from server1.example.org to server2.example.org \
	local server1.example.org peer server2.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org'

On the other host, add the following to /etc/iked.conf

ikev2 'server2_server1_passive' passive esp from server2.example.org to server1.example.org \
	local server2.example.org peer server1.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org'

Note that the names 'server1_server2_active' and 'server2_server1_passive' in the two stanzas do not matter and can be omitted. Reload iked on both hosts:

# ikectl reload

If everything worked out, you should see the negotiated security associations (SAs) in the output of

# ikectl show sa

On OpenBSD, you should also see some output on success or errors in the file /var/log/daemon.
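
If iked is not already set to start at boot on the OpenBSD hosts (an assumption on my part; these notes only cover reloading an already-running daemon), rcctl will take care of that:

# rcctl enable iked
# rcctl start iked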

For a road warrior

Add the following to /etc/iked.conf on the remote end:

ikev2 'responder_x509' passive esp \
	from 0.0.0.0/0 to 10.0.1.0/24 \
	local server1.example.org peer any \
	srcid server1.example.org \
	config address 10.0.1.0/24 \
	config name-server 10.0.1.1 \
	tag "ROADW"

Configure or omit the address range and the name-server configurations to suit your needs. See iked.conf(5) for details. Reload iked:

# ikectl reload

If you are on OpenBSD and want the remote end to have an IP address, add the following to /etc/hostname.vether0, again configuring the address to suit your needs:

inet 10.0.1.1 255.255.255.0

Put the interface up:

# ifconfig vether0 up

Now create a client certificate for authentication. In my case, my road-warrior client was client.example.org:

# ikectl ca vpn certificate client.example.org create
# ikectl ca vpn certificate client.example.org export

Copy client.example.org.tgz to client and run

# tar -C /etc/ipsec.d/ -xzf client.example.org.tgz -- \
	./private/client.example.org.key \
	./certs/client.example.org.crt ./ca/ca.crt

Install StrongSwan and add the following to /etc/ipsec.conf, configuring appropriately:

ca example.org
  cacert=ca.crt
  auto=add

conn server1
  keyexchange=ikev2
  right=server1.example.org
  rightid=%server1.example.org
  rightsubnet=0.0.0.0/0
  rightauth=pubkey
  leftsourceip=%config
  leftauth=pubkey
  leftcert=client.example.org.crt
  auto=route

Add the following to /etc/ipsec.secrets:

# space is important
server1.example.org : RSA client.example.org.key

Restart StrongSwan, put the connection up, and check its status:

# ipsec restart
# ipsec up server1
# ipsec status

That should be it.

Sources:

12 September, 2020 09:42PM


hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in August 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in September) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

teeworlds
  • I packaged a new upstream release of teeworlds, the well-known 2D multiplayer shooter with cute characters called tees to resolve a Python 2 bug (although teeworlds is actually a C++ game). The update also fixed a severe remote denial-of-service security vulnerability, CVE-2020-12066. I prepared a patch for Buster and will send it to the security team later today.
  • I sponsored updates of mgba, a Game Boy Advance emulator, for Ryan Tandy, and osmose-emulator for Carlos Donizete Froes.
  • I worked around a RC GCC 10 bug in megaglest by compiling with -fcommon.
  • Thanks to Gerardo Ballabio who packaged a new upstream version of galois which I uploaded for him.
  • Also thanks to Reiner Herrmann and Judit Foglszinger who fixed a regression (crash) in monsterz due to the earlier port to Python 3. Reiner also made fans of supertuxkart happy by packaging the latest upstream release version 1.2.

Debian Java

Misc

  • I was contacted by the upstream maintainer of privacybadger, a privacy addon for Firefox and Chromium, who dislikes the idea of having a stable and unchanging version in Debian stable releases. Obviously I can’t really do much about it, although I believe the release team would be open-minded about regular point updates of browser addons. However I don’t intend to do regular updates for all of my packages in stable unless there is a really good reason to do so. At the moment I’m willing to make an exception for ublock-origin and https-everywhere because I feel these addons should be core browser functionality anyway. I talked about this on our Debian Mozilla Extension Maintainers mailing list and it seems someone is interested in taking over privacybadger and preparing regular stable point updates. Let’s see how it turns out.
  • Finally this month saw the release of ublock-origin 1.29.0 and the creation of two different browser-specific binary packages for Firefox and Chromium. I have talked about it before and I believe two separate packages for ublock-origin are more aligned to upstream development and make the whole addon easier to maintain which benefits users, upstream and maintainers.
  • imlib2, an image library, and binaryen also got updated this month.

Debian LTS

This was my 54th month as a paid contributor and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2303-1. Issued a security update for libssh fixing 1 CVE.
  • DLA-2327-1. Issued a security update for lucene-solr fixing 1 CVE.
  • DLA-2369-1. Issued a security update for libxml2 fixing 8 CVE.
  • Triaged CVE-2020-14340, jboss-xnio as not-affected for Stretch.
  • Triaged CVE-2020-13941, lucene-solr as no-dsa because the security impact was minor.
  • Triaged CVE-2019-17638, jetty9 as not-affected for Stretch and Buster.
  • squid3: I backported the patches for CVE-2020-15049, CVE-2020-15810, CVE-2020-15811 and CVE-2020-24606 from squid 4 to squid 3.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 „Jessie“. This was my 27th month and I have been paid to work 14.25 hours on ELTS.

  • ELA-271-1. Issued a security update for squid3 fixing 19 CVE. Most of the work was already done before ELTS started, only the patch for CVE-2019-12529 had to be adjusted for the nettle version in Jessie.
  • ELA-273-1. Issued a security update for nss fixing 1 CVE.
  • ELA-276-1. Issued a security update for libjpeg-turbo fixing 2 CVE.
  • ELA-277-1. Issued a security update for graphicsmagick fixing 1 CVE.
  • ELA-279-1. Issued a security update for imagemagick fixing 3 CVE.
  • ELA-280-1. Issued a security update for libxml2 fixing 4 CVE.

Thanks for reading and see you next time.

12 September, 2020 07:26AM by apo

Jelmer Vernooij

Debian Janitor: All Packages Processed with Lintian-Brush

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

On 12 July 2019, the Janitor started fixing lintian issues in packages in the Debian archive. Now, a little over a year later, it has processed every one of the almost 28,000 packages at least once.

Graph with Lintian Fixes Burndown

As discussed two weeks ago, this has resulted in roughly 65,000 total changes. These 65,000 changes were made to a total of almost 17,000 packages. Of the remaining packages, for about 4,500 lintian-brush could not make any improvements. The rest (about 6,500) failed to be processed for one of many reasons – they are e.g. not yet migrated off alioth, use uncommon formatting that can't be preserved or failed to build for one reason or another.

Graph with runs by status (success, failed, nothing-to-do)

Now that the entire archive has been processed, packages are prioritized based on the likelihood of a change being made to them successfully.

Over the course of its existence, the Janitor has slowly gained support for a wider variety of packaging methods. For example, it can now edit the templates for some of the generated control files. Many of the packages that the janitor was unable to propose changes for the first time around are expected to be correctly handled when they are reprocessed.

If you’re a Debian developer, you can find the list of improvements made by the janitor in your packages by going to https://janitor.debian.net/m/.

For more information about the Janitor's lintian-fixes efforts, see the landing page.

12 September, 2020 06:00AM by Jelmer Vernooij, Perry Lorrier

September 11, 2020

Petter Reinholdtsen

Buster update of Norwegian Bokmål edition of Debian Administrator's Handbook almost done

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation for "The Debian Administrator's Handbook" is now almost completed. After many months of proof reading, I consider it complete enough for us to move to the next step, and have asked for the print version to be prepared and sent off to the print on demand service lulu.com. It is still not too late to report any incorrect translations via the hosted Weblate service, but it soon will be. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11 September, 2020 07:45AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Hire me!

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet [1], as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas!

It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it...

If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)


  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here. 

11 September, 2020 06:15AM by Louis-Philippe Véronneau

Reproducible Builds (diffoscope)

diffoscope 160 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 160. This version includes the following changes:

* Check that pgpdump is actually installed before attempting to run it.
  Thanks to Gianfranco Costamagna (locutusofborg). (Closes: #969753)
* Add some documentation for the EXTERNAL_TOOLS dictionary.
* Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump
  against all files that are recognised by file(1) as "data".
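
If you have not used diffoscope before, the basic invocation simply takes the two artifacts you want to compare (file names here are placeholders):

diffoscope old.deb new.deb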

You find out more by visiting the project homepage.

11 September, 2020 12:00AM

September 10, 2020

hackergotchi for Daniel Silverstone

Daniel Silverstone

Broccoli Sync Conversation

Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), myself, Lars, Mark, and Vince discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use available bandwidth (i.e. not be CPU throttled) but also gain the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that the gain from putting the effort in for 5 rather than 1 is worth it. It's interesting in that diagram that 'documents' don't compress well but then again it is notable that such documents are likely DEFLATE'd zip files. Basically if the data is already compressed then there's little hope Brotli will gain much.
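
For a rough feel of that trade-off on your own data, the brotli command line tool exposes the quality levels directly (file names here are placeholders):

brotli -q 1 -o sample.q1.br sample.tar   # fast, lower ratio
brotli -q 5 -o sample.q5.br sample.tar   # slower, better ratio
ls -l sample.q1.br sample.q5.br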

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code has unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage and that while they make it clear in the article that encryption, client identification etc are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

10 September, 2020 07:49AM by Daniel Silverstone

September 09, 2020

Reproducible Builds

Reproducible Builds in August 2020

Welcome to the August 2020 report from the Reproducible Builds project.

In our monthly reports, we summarise the things that we have been up to over the past month. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. If you’re interested in contributing to the project, please visit our main website.




This month, Jennifer Helsby launched a new reproduciblewheels.com website to address the lack of reproducibility of Python wheels.

To quote Jennifer’s accompanying explanatory blog post:

One hiccup we’ve encountered in SecureDrop development is that not all Python wheels can be built reproducibly. We ship multiple (Python) projects in Debian packages, with Python dependencies included in those packages as wheels. In order for our Debian packages to be reproducible, we need that wheel build process to also be reproducible

Parallel to this, transparencylog.com was also launched, a service that verifies the contents of URLs against a publicly recorded cryptographic log. It keeps an append-only log of the cryptographic digests of all URLs it has seen. (GitHub repo)

On 18th September, Bernhard M. Wiedemann will give a presentation in German, titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference.


Reproducible builds at DebConf20

There were a number of talks at the recent online-only DebConf20 conference on the topic of reproducible builds.

Holger gave a talk titled “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org are made from their claimed sources. It also served as a general update on the status of reproducible builds within Debian. The video (145 MB) and slides are available.

There were also a number of other talks that involved Reproducible Builds too. For example, the Malayalam language mini-conference had a talk titled എനിയ്ക്കും ഡെബിയനില്‍ വരണം, ഞാന്‍ എന്തു് ചെയ്യണം? (“I want to join Debian, what should I do?”) presented by Praveen Arimbrathodiyil, the Clojure Packaging Team BoF session led by Elana Hashman, as well as Where is Salsa CI right now? that was on the topic of Salsa, the collaborative development server that Debian uses to provide the necessary tools for package maintainers, packaging teams and so on.

Jonathan Bustillos (Jathan) also gave a talk in Spanish titled Un camino verificable desde el origen hasta el binario (“A verifiable path from source to binary”). (Video, 88MB)


Development work

After many years of development work, the compiler for the Rust programming language now generates reproducible binary code. This generated some general discussion on Reddit on the topic of reproducibility in general.

Paul Spooren posted a ‘request for comments’ to OpenWrt’s openwrt-devel mailing list asking for clarification on when to raise the PKG_RELEASE identifier of a package. This is needed in order to successfully perform rebuilds in a reproducible builds context.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Chris Lamb provided some comments and pointers on an upstream issue regarding the reproducibility of a Snap / SquashFS archive file. []

Debian

Holger Levsen identified that a large number of Debian .buildinfo build certificates have been “tainted” on the official Debian build servers, as these environments have files underneath the /usr/local/sbin directory []. He also filed a bug against debrebuild after spotting that it can fail to download packages from snapshot.debian.org [].

This month, several issues were uncovered (or their diagnosis was assisted) thanks to the efforts of reproducible builds.

For instance, Debian bug #968710 was filed by Simon McVittie, describing a problem with detached debug symbol files (required to generate a traceback) that is unlikely to have been discovered without reproducible builds. In addition, Jelmer Vernooij pointed out that the new Debian Janitor tool is using the property of reproducibility (as well as diffoscope) when applying archive-wide changes to Debian:

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds. []

56 reviews of Debian packages were added, 38 were updated and 24 were removed this month, adding to our knowledge about identified issues. Specifically, Chris Lamb added and categorised the nondeterministic_version_generated_by_python_param and the lessc_nondeterministic_keys toolchain issues. [][]

Holger Levsen sponsored Lukas Puehringer’s upload of the python-securesystemslib package, which is a dependency of in-toto, a framework to secure the integrity of software supply chains. []

Lastly, Chris Lamb further refined his merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]), so that we can test the reproducibility of the installer images on the Reproducible Builds project's own testing infrastructure, and sent a ping to the team that maintains that code.
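For context, the kind of sources.list entry this caters for looks roughly like the following (the snapshot timestamp and suite are purely illustrative); the option matters because Release files on snapshot.debian.org have long since expired:

deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20200801T000000Z/ unstable main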


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, but also provide human-readable diffs of all kinds. In August, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data from PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. []
  • Bug fixes:

    • Don’t raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. []
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) returns “data”. (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. []
    • Don’t use Python’s repr(object) output in “Calling external command” messages. []
    • Include the filename in the “… not identified by any comparator” message. []
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. []
    • Drop some unused imports [], an unnecessary dictionary comprehension [] and some unnecessary control flow [].
    • Correct typo of “output” in a comment. []
  • Release process:

    • Move generation of debian/tests/control to an external script. []
    • Add some URLs for the site that will appear on PyPI.org. []
    • Update “author” and “author email” in setup.py for PyPI.org and similar. []
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124)
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. []
    • Add an assert_diff helper that loads and compares a fixture output. [][][][]
  • Misc:

In addition, Mattia Rizzolo documented in setup.py that diffoscope works with Python version 3.8 [], while Frazer Clews applied some Pylint suggestions [] and removed some deprecated methods [].

Website

This month, Chris Lamb updated the main Reproducible Builds website and documentation to:

  • Clarify & fix a few entries on the “who” page [][] and ensure that images do not get too large on some viewports [].
  • Clarify use of a pronoun re. Conservancy. []
  • Use “View all our monthly reports” over “View all monthly reports”. []
  • Move an “is a” suffix out of the link target on the SOURCE_DATE_EPOCH page. []

In addition, Javier Jardón added the freedesktop-sdk project [] and Kushal Das added the SecureDrop project [] to our projects page. Lastly, Michael Pöhn added internationalisation and translation support with help from Hans-Christoph Steiner [].

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework to power tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • System health checks:

    • Improve the explanation of how the status and scores are calculated. [][]
    • Update and condense view of detected issues. [][]
    • Query the canonical configuration file to determine whether a job is disabled instead of duplicating/hardcoding this. []
    • Detect several problems when updating the status of reporting-oriented ‘metapackage’ sets. []
    • Detect when diffoscope is not installable [] and detect failures in DNS resolution [].
  • Debian:

    • Update the URL to the Debian security team bug tracker’s Git repository. []
    • Reschedule the unstable and bullseye distributions often for the arm64 architecture. []
    • Schedule buster less often for armhf. [][][]
    • Force the build of certain packages in the work-in-progress package rebuilder. [][]
    • Only update the stretch and buster base build images when necessary. []
  • Other distributions:

    • For F-Droid, trigger jobs by commits, not by a timer. []
    • Disable the Archlinux HTML page generation job as it has never worked. []
    • Disable the alternative OpenWrt rebuilder jobs. []
  • Misc:

Many other changes were made too, including:

Finally, build node maintenance was performed by Holger Levsen [], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].


Mailing list

On our mailing list this month, Leo Wandersleb sent a message asking how to expand his WalletScrutiny.com project (which aims to improve the security of Bitcoin wallets) from Android wallets to also monitor Linux wallets:

If you think you know how to spread the word about reproducibility in the context of Bitcoin wallets through WalletScrutiny, your contributions are highly welcome on this PR []

Julien Lepiller posted to the list linking to a blog post by Tavis Ormandy titled You don’t need reproducible builds. Morten Linderud (foxboron) responded with a clear rebuttal that Tavis was only considering the narrow use-case of proprietary vendors and closed-source software. He additionally noted that backdoors deliberately introduced into the upstream source (“bugdoors”) are decidedly (and deliberately) outside the scope of reproducible builds to begin with, so that criticism misses the point.

Chris Lamb included the Reproducible Builds mailing list in a wider discussion regarding a tentative proposal to include .buildinfo files in .deb packages, adding his remarks regarding requiring a custom tool in order to determine whether generated build artifacts are ‘identical’ in a reproducible context. []

Jonathan Bustillos (Jathan) posted a quick email to the list asking whether there was a list of “to do” tasks for Reproducible Builds.

Lastly, Chris Lamb responded at length to a query regarding the status of reproducible builds for Debian ISO or installation images. He noted that most of the technical work has been performed but “there are at least four issues until they can be generally advertised as such”. He pointed out that the privacy-oriented Tails operating system, which is based directly on Debian, has had reproducible builds for a number of years now. []



If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

09 September, 2020 06:00PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

RPi 4 + 8GB, Finally, USB-functional!

So… Finally, kernel 5.8 entered the Debian Unstable repositories. This means that I got my Raspberry image from their usual location and was able to type the following, using only my old trusty USB keyboard:

So finally, the greatest and meanest Raspberry is fully supported with a pure Debian image! (Only tarnished by the non-free raspi-firmware package.)

Oh, in case someone was still wondering — the images generated follow the stable release. Only the kernel and firmware are installed from unstable. If / when kernel 5.8 enters Backports, I will reduce the noise of adding a different suite to the sources.list.
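For the curious, a minimal sketch of what such a mixed setup can look like (the exact suites, components and pin priorities here are illustrative, not necessarily what the image builder uses):

# /etc/apt/sources.list: stable, plus a temporary unstable entry
deb http://deb.debian.org/debian buster main non-free
deb http://deb.debian.org/debian unstable main non-free

# /etc/apt/preferences.d/unstable-kernel: keep everything on stable,
# but let the kernel and firmware come from unstable
Package: *
Pin: release a=unstable
Pin-Priority: 100

Package: linux-image-* raspi-firmware
Pin: release a=unstable
Pin-Priority: 500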

09 September, 2020 02:56PM

September 08, 2020

Welcome to the family

Need I say more? OK, I will…

Still wanting some more details? Well…

I have had many cats through my life. When I was about eight years old, my parents tried to have a dog… but the experiment didn’t work, and besides those few months, I never had one.

But as my aging cats spent the final months of their last very long lives, it was clear to us that, after them, we would be adopting a dog.

Last Saturday was the big day. We had seen some photos of the mother and the nine (!) pups. My children decided on her name almost right away; they were all brownish, so the name would be corteza (tree bark; they didn’t know, of course, that dogs also have a bark! 😉).

Anyway, welcome little one!

08 September, 2020 03:43PM

Sven Hoexter

Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management, and have customers which use MITM boxes to break up TLS 1.3 connections.

Be aware that right now some heuristic rules in Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is non-human browser traffic". While technically correct - the TLS connection hitting the Cloudflare Edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare Edge, you might want to reconsider that for the moment and send a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still having some customers.
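For illustration, the kind of rule change I mean looks roughly like this in the firewall rules expression language (field and action names to the best of my knowledge; double-check against your own dashboard):

Before (hard block):
  Expression: cf.bot_management.score eq 1
  Action:     Block

Softer alternative for now:
  Expression: (cf.bot_management.score eq 1) and not cf.bot_management.verified_bot
  Action:     Challenge (Captcha)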

I've got a confirmation of this happening with Cisco WSA, but it's likely to also be the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in as more enterprises roll out modern browsers.

You can now insert here your own rant about how bad the client-server internet of 2020 is, how bad it is that some of us rely on Cloudflare, and how they have accumulated way too big a market share. But the world is as it is. :(

08 September, 2020 12:08PM

September 07, 2020

Arturo Borrero González

Debconf 2020 online, summary

Debconf2020 logo

Debconf2020 took place while I was on personal vacation time. But anyway, I’m lucky enough that my company, the Wikimedia Foundation, paid the conference registration fee for me and allowed me to take the time (after my vacation) to watch recordings from the conference.

This is my first time attending (or watching) a fully online conference, and I was curious to see first-hand how it would develop. I was greatly surprised to see that it worked pretty nicely, so kudos to the organization, video team, volunteers, etc!

What follows is my summary of the conference, from the different sessions and talks I watched (again, none of them live but recordings).

The first thing I saw was the Welcome to Debconf 2020 opening session. It is obvious the video was made with lots of love; I found it entertaining and useful. I love it :-)

Then I watched the BoF Can Free Software improve social equality. It was introduced and moderated by Hong Phuc Dang. Several participants, about 10 people, shared their visions on the interaction between open source projects and communities. I’m well aware of the interesting social advancement that FLOSS can enable in communities, but sometimes it is not so easy; it may also present challenges and barriers. The BoF was joined by many people from the Asia Pacific region, and for me it was very interesting to take a step back from the usual western vision of this topic. Anyway, about the session itself, I have the feeling the participants may have spent too much time on presentations, sharing their local stories (which are interesting, don’t get me wrong), perhaps leaving little room for actual proposal discussions or the like.

Next I watched the Bits from the DPL talk. In the session, Jonathan Carter goes over several topics affecting the project, both internally and externally. It was interesting to know more about the status of the project from a high level perspective, as an organization, including subjects such as money, common project problems, future issues we are anticipating, the social aspect of the project, etc.

The Lightning Talks session grabbed my attention. It is usually very funny to watch and not as dense as other talks. I’m glad I watched this as it includes some interesting talks, ranging from HAM radios (I love them!), to personal projects to help in certain tasks, and even some general reflections about life.

Just as I’m writing this very sentence, the video for the Come and meet your Debian Publicity team! talk has been uploaded. This team does incredible work in keeping project information flowing and social networks up-to-date and alive. Mind that the work of this team is mostly non-engineering, but it is still a vital part of the project. The folks in the session explain what the team does, and they also discuss how new people can contribute, the different challenges related to language barriers, etc.

I have to admit I also started watching a couple of other sessions that turned out not to be interesting to me (and therefore I didn’t finish the videos). Also, I tried to watch a couple more sessions that haven’t published their video recordings just yet, for example the When We Virtualize the Whole Internet talk by Sam Hartman. Will check again in a couple of days.

It is a real pleasure that the video recordings from the conference are made available online. One can join the conference anytime (like I’m doing!) and watch the sessions at any pace at any time. The video archive is big; I won’t be able to go over all of it. I won’t lie, I still have some pending videos to watch from last year’s Debconf2019 :-)

07 September, 2020 08:00AM

September 06, 2020

hackergotchi for Jonathan Carter

Jonathan Carter

DebConf 20 Online

This week… last week… last month, I attended DebConf 20 Online. It was the first DebConf to be held entirely online, but it’s the 7th DebConf I’ve attended from home.

My first one was DebConf7. Initially I mostly started watching the videos because I wanted to learn more about packaging. I had just figured out how to create binary packages by hand, and had read through the new maintainers guide, but a lot of it was still a mystery. By the end of DebConf7 my grasp of source packages was still a bit thin, but other than that I ended up learning a lot more about Debian than I had hoped for. Over the years, the quality of online participation for each DebConf has varied a lot.

I think having a completely online DebConf, where everyone was remote, helped raise awareness about how important it is to make the remote experience work well, and I hope that it will make people who run sessions at physical events in the future consider those who are following remotely a bit more.

During some BoF sessions, it was clear that some teams hadn’t talked to each other face to face in a while, and I heard at least 3 teams say “This was nice, we should do more regular video calls!”. Our usual communication methods of e-mail lists and IRC serve us quite well, for the most part, but sometimes having an actual conversation with the whole team present at the same time can do wonders for the many kinds of issues that are just always hard to deal with in text-based mediums.

There were three main languages used in this DebConf. We’ve had more than one language at a DebConf before, but as far as I know it’s the first time that we had multiple talks over 3 languages (English, Malayalam and Spanish).

It was also impressive how the DebConf team managed to send out DebConf t-shirts all around the world and in time before the conference! To my knowledge only 2 people didn’t get theirs in time due to customs.

I already posted about the new loop that we worked on for this DebConf. An unintended effect was that we ended up having lots of shout-outs, which gave this online DebConf a much warmer, more personal feel than it would have had otherwise. I’m definitely planning to keep improving on that in the future, for online and in-person events. There was also some other new stuff from the video team during this DebConf; we’ll try to co-ordinate a blog post about that once the dust has settled.

Thanks to everyone for making this DebConf special, even though it was virtual!

06 September, 2020 07:50PM by jonathan

Thorsten Alteholz

My Debian Activities in August 2020

FTP master

This month I accepted 159 packages and rejected 16. The overall number of packages that got accepted was 172.

Debian LTS

This was my seventy-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 2336-1] firejail security update for two CVEs
  • [DLA 2337-1] python2.7 security update for nine CVEs
  • [DLA 2353-1] bacula security update for one CVE
  • [DLA 2354-1] ndpi security update for one CVE
  • [DLA 2355-1] bind9 security update for two CVEs
  • [DLA 2359-1] xorg-server security update for five CVEs

I also started to work on curl but did not upload a fixed version yet. As usual, testing the package takes up some time.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-265-1 for python2.7
  • ELA-270-1 for bind9
  • ELA-272-1 for xorg-server

Like in LTS, I also started to work on curl and encountered the same problems as in LTS above.

Last but not least I did some days of frontdesk duties.

Other stuff

This month I found again some time for other Debian work and uploaded packages to fix bugs, mainly around gcc10:

I also uploaded new upstream versions of:

All packages called *osmo* are developed by the Osmocom project, which is about Open Source MObile COMmunication. They are really doing a great job, and I apologize that my uploads of new versions are mostly far behind their development.

Some of the uploads are related to new packages:

06 September, 2020 04:38PM by alteholz

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

inline 0.3.16: Now with system2()

A new minor release of the inline package just arrived on CRAN. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and stable, and can be considered to be in maintenance mode: Rcpp used it extensively in the very early days before Rcpp Attributes provided an even better alternative. Several other packages still rely on inline.

One of these packages is rstan, and Ben Goodrich updated our use of system() to system2(), allowing for better error diagnostics. We also did a bit of standard maintenance to the Travis CI setup and the README.md file.
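As a generic illustration of why that matters (this is not the actual code in inline or rstan), system2() captures standard output and standard error explicitly and attaches the exit status, so failures can be reported precisely, whereas system() does not capture stderr at all unless you redirect it yourself:

## system(): stderr goes to the console and is easy to lose
out <- system("R CMD SHLIB does-not-exist.c", intern = TRUE)

## system2(): stdout/stderr captured, exit status attached as an attribute
res <- system2("R", c("CMD", "SHLIB", "does-not-exist.c"),
               stdout = TRUE, stderr = TRUE)
status <- attr(res, "status")      # NULL on success, the exit code otherwise
if (!is.null(status)) message("build failed:\n", paste(res, collapse = "\n"))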

See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.16 (2020-09-06)

  • Maintenance updates to README.md standardizing badges (Dirk).

  • Maintenance update to Travis CI setup (Dirk).

  • Switch to using system2() for better error diagnostics (Ben Goodrich in #12).

Courtesy of CRANberries, there is a comparison to the previous release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 September, 2020 03:46PM

Molly de Blanc

NYU VPN

I needed to set up a VPN in order to access my readings for class. The instructions for Linux are located at: https://nyu.service-now.com/sp?id=kb_article_view&sysparm_article=KB0014932

After you download the VPN client of your choice (they recommend Cisco AnyConnect), connect to: vpn.nyu.edu.

It will ask for your NYU username and two passwords: your NYU password and a multi-factor authentication (MFA) code from Duo. Use the Duo code. See below for stuff on Duo.

Hit connect and voilà, you are connected to the VPN.
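If you would rather avoid the Cisco client, openconnect (packaged in most distributions) speaks the AnyConnect protocol, and something along these lines may work too; I haven't verified it against NYU's setup, so treat it as a sketch:

sudo apt install openconnect
sudo openconnect --protocol=anyconnect --user=YOUR_NETID vpn.nyu.edu
# first password prompt: your NYU password
# second password prompt: a Duo passcode (or, often, the word "push" for a push notification)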

Duo Authentication Setup

Go to: https://start.nyu.edu and follow the instructions for MFA. They’ll tell you that a smart phone is the most secure method of setting up. I am skeptical.

Install the Duo Authentication App on your phone, enter your phone number into the NYU web page (off of ) and it will send a thing to your phone to connect it.

Commentary

Okay, I have to complain at least a little bit about this. I had to guess what the VPN address was because the instructions are for NYU Shanghai. I also had to install the VPN client using the terminal. These sorts of things make it harder for people to use Linux. Boo.

06 September, 2020 01:45PM by mollydb

hackergotchi for Ben Armstrong

Ben Armstrong

Dronefly relicensed under copyleft licenses

To ensure Dronefly always remains free, the Dronefly project has been relicensed under two copyleft licenses. Read the license change and learn more about copyleft at these links.

I was prompted to make this change after a recent incident in the Red DiscordBot development community that made me reconsider my prior position that the liberal MIT license was best for our project. While on the face of it, making your license as liberal as possible might seem like the most generous and hassle-free way to license any project, I was shocked into the realization that its liberality was also its fatal flaw: all is well and good so long as everyone is being cooperative, but it does not afford any protection to developers or users should things suddenly go sideways in how a project is run. A copyleft license is the best way to avoid such issues.

In this incident – a sad story of conflict between developers I respect on both sides of the rift, and owe a debt to for what they’ve taught me – three cogs we had come to depend on suddenly stopped being viable for us to use due to changes to the license & the code. Effectively, those cogs became unsupported and unsupportable. To avoid any such future disaster with the Dronefly project, I started shopping for a new license that would protect developers and users alike from similarly losing support, or losing control of their contributions. I owe thanks to Dale Floer, a team member who early on advised me the AGPL might be a better fit, and later was helpful in selecting the doc license and encouraging me to follow through. We ran the new licenses by each contributor and arrived at this consensus: the AGPL is best suited for our server-based code, and CC-BY-SA is best suited for our documentation. The relicensing was made official this morning.

On Discord platform alternatives

You might well question what I, a Debian developer steeped in free software culture and otherwise in agreement with its principles, am doing encouraging a community to grow on the proprietary Discord platform! I have no satisfying answer to that. I explained some of the backstory when I introduced my project here, but that’s more of an account of its beginnings than a justification for it to continue on this platform. Honestly, all I can offer is a rather dissatisfying “it seemed like the right thing to do at the time”.

Time will tell whether we can successfully move off it to a freedom-respecting and privacy-respecting alternative chat platform that is both socially and technically viable to migrate to. That platform would ideally:

  • not be under the control of a single, central commercial entity running proprietary code, so their privacy is safeguarded, and they are protected from disaster, should it become unattractive to remain on the platform;
  • have a vibrant and supportive open source third party extension development community;
  • support our image-rich content and effortless sharing of URLs with previews automatically provided from the page’s content (e.g. via OpenGraph tags);
  • be effortless to join regardless of what platform/device each user uses;
  • keep a history of messages so that future members joining the community can benefit from past conversations, and existing members can catch up on conversations they missed;
  • but above all else: be acceptable and compelling to the existing community to move over onto it.

I’m intrigued by Matrix and wonder if it provides some or all of the above in its current form. Are you a developer writing bots for this platform? If so, I especially want to hear from you in the comments about your experience. Or in any case, if you’ve been there before – if you’ve faced the same issue with your community and have a success story to share, I would love to hear from you.

06 September, 2020 12:37PM by Ben Armstrong

hackergotchi for Junichi Uekawa

Junichi Uekawa

Trying to boot Debian on Chromebook Kevin.

Trying to boot Debian on Chromebook Kevin. But I haven't yet found how to get the USB serial console device to work, and the display is black, so I can't see anything.

06 September, 2020 09:26AM by Junichi Uekawa

Russ Allbery

September haul

So many good books, so little reading time.

Jairus Banaji — A Brief History of Commercial Capitalism (nonfiction)
Steven Brust — The Baron of Magister Valley (sff)
Micaiah Johnson — The Space Between Worlds (sff)
Ian McDonald — Luna: New Moon (sff)
Elizabeth Moon — Trading in Danger (sff)
Tamsyn Muir — Harrow the Ninth (sff)
Suzanne Palmer — Finder (sff)
Kit Rocha — Beyond Shame (sff)
Kit Rocha — Beyond Control (sff)
Kit Rocha — Beyond Pain (sff)
Arundhati Roy — Azadi (nonfiction)
Jeff VanderMeer — Authority (sff)
Jeff VanderMeer — Acceptance (sff)
K.B. Wagers — Behind the Throne (sff)
Jarrett Walker — Human Transit (nonfiction)

I took advantage of a few sales to get books I know I'm going to want to read eventually for a buck or two.

06 September, 2020 05:08AM

September 05, 2020

hackergotchi for Mike Gabriel

Mike Gabriel

My Work on Debian LTS (August 2020)

In August 2020, I worked on the Debian LTS project for 16 hours (8 hours planned, plus another 8 hours carried over from July).

For ELTS, I have worked for another 8 hours (of 8 hours planned).

LTS Work

  • LTS frontdesk: triage wireshark, yubico-piv-tool, trousers, software-properties, qt4-x11, qtbase-opensource-src, openexr, netty and netty-3.9
  • upload to stretch-security: libvncserver 0.9.11+dfsg-1.3~deb9u5 (fixing 9 CVEs, DLA-2347-1 [1])
  • upload to stretch-security: php-horde-core 2.27.6+debian1-2+deb9u1 (1 CVE, DLA-2348 [2])
  • upload to stretch-security: php-horde 5.2.13+debian0-1+deb9u3 (fixing 1 CVE, DLA-2349-1 [3])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u1 (fixing 1 CVE, DLA-2350-1 [4])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u2 (fixing 1 more CVE, DLA-2351-1 [5])
  • upload to stretch-security: php-horde-gollem 3.0.10-1+deb9u2 (fixing 1 CVE, DLA-2352-1 [6])
  • upload to stretch-security: freerdp 1.1.0~git20140921.1.440916e+dfsg1-13+deb9u4 (fixing 14 CVEs, DLA-2356-1 [7])
  • prepare salsa MRs for gnome-shell (for gnome-shell in stretch [8] and buster [9])

ELTS Work

  • Look into open CVEs for Samba in Debian jessie ELTS. Revisit issues affecting the Samba AD code that have previously been considered as issues.

Other security related work for Debian

  • upload to buster (SRU): libvncserver 0.9.11+dfsg-1.3+deb10u4 (fixing 9 CVEs) [10]

References

05 September, 2020 09:12PM by sunweaver

Elana Hashman

Three talks at DebConf 2020

This year has been a really unusual one for in-person events like conferences. I had already planned to take this year off from travel for the most part, attending just a handful of domestic conferences. But the pandemic has thrown those plans into chaos; I do not plan to attend large-scale in-person events until July 2021 at the earliest, per my employer's guidance.

I've been really sad to have turned down multiple speaking invitations this year. To try to set expectations, I added a note to my Talks page that indicates I will not be writing any new talks for 2020, but am happy to join panels or reprise old talks.

And somehow, with all that background, I still ended up giving three talks at DebConf 2020 this year. In part, I think it's because this is the first DebConf I've been able to attend since 2017, and I was so happy to have the opportunity! I took time off work to give myself enough space to focus on the conference. International travel is very difficult for me, so DebConf is generally challenging if not impossible for me to attend.

A panel a day keeps the FTP Team away?

On Thursday, August 27th, I spoke on the Leadership in Debian panel, where I discussed some of the challenges leadership in the project must face, including an appropriate response to the BLM movement and sustainability for volunteer positions that require unsustainable hours (such as DPL).

On Friday, August 28th, I hosted the Debian Clojure BoF, attended by members of the Clojure and Puppet teams. The Puppet team is working to package the latest versions of Puppet Server/DB, which involve significant Clojure components, and I am doing my best to help.

On Saturday, August 29th, I spoke on the Meet the Technical Committee panel. The Committee presented a number of proposals for improving how we work within the project. I was responsible for presenting our first proposal on allowing folks to engage the committee privately.

05 September, 2020 04:00AM by Elana Hashman

September 04, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.3.2: Tweaks

Another (minor) nanotime release, now at version 0.3.2. This release brings an endianness correction which was kindly contributed in a PR, switches to using the API header exported by RcppCCTZ, and tweaks test coverage a little with respect to r-devel.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by co-author Leonardo, who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.
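A quick usage sketch (illustrative only; see the package documentation for the full API):

library(nanotime)

x <- nanotime("2020-09-03T21:12:13.123456789+00:00")   # nanosecond-resolution parse
x + 1e9                                                # one second (1e9 nanoseconds) later
format(x)                                              # formatted back out via RcppCCTZ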

The NEWS snippet adds full details.

Changes in version 0.3.2 (2020-09-03)

  • Correct for big endian (Elliott Sales de Andrade in #81).

  • Use the RcppCCTZ_API.h header (Dirk in #82).

  • Conditionally reduce test coverage (Dirk in #83).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 September, 2020 09:28PM