Tech News issue #17, 2021 (April 26, 2021)

00:00, Monday, 26 2021 April UTC

Tech News issue #16, 2021 (April 19, 2021)

00:00, Monday, 19 2021 April UTC

Production Excellence #30: March 2021

13:43, Thursday, 15 2021 April UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

Incidents

2 documented incidents. That's average for this time of year, when we usually have 1-4 incidents.

Learn about recent incidents at Incident status on Wikitech, or Preventive measures in Phabricator.


Trends

In March we made significant progress on the outstanding errors of previous months. Several of the 2020 months are finally starting to empty out. But with over 30 new tasks from March itself remaining, we did not break even, and ended up slightly higher than last month. This could be reversing two positive trends, but I hope not.

Firstly, there was a steep increase in the number of new production errors that were not resolved within the same month. This runs counter to the positive trend we started in November. The past four months typically saw 10-20 errors outlive their month of discovery, and this past month saw 34 of its 48 new errors remain unresolved.

Secondly, we saw the overall number of unresolved errors increase again. This January began a downward trend for the first time in thirteen months, which continued nicely through February. But, this past month we broke even and even pushed upward by one task. I hope this is just a breather and we can continue our way downward.


Month-over-month plots based on spreadsheet data.


Outstanding errors

Take a look at the workboard and look for tasks that could use your help:

View Workboard

Summary over recent months, per spreadsheet:

Jul 2019 (0 of 18 left) ✅ Last two tasks resolved! -2
Aug 2019 (1 of 14 left) ⚠️ Unchanged (over one year old).
Oct 2019 (3 of 12 left) ⬇️ One task resolved. -1
Nov 2019 (0 of 5 left) ✅ Last task resolved! -1
Dec 2019 (0 of 9 left) ✅ Last task resolved! -1
Jan 2020 (2 of 7 left) ⬇️ One task resolved. -1
Feb 2020 (0 of 7 left) ✅ Last task resolved! -1
Mar 2020 (2 of 2 left) ⚠️ Unchanged (over one year old).
Apr 2020 (5 of 14 left) ⬇️ Four tasks resolved. -4
May 2020 (5 of 14 left) ⬇️ One task resolved. -1
Jun 2020 (6 of 14 left) ⬇️ One task resolved. -1
Jul 2020 (5 of 24 issues) ⬇️ Four tasks resolved. -4
Aug 2020 (15 of 53 issues) ⬇️ Five tasks resolved. -5
Sep 2020 (7 of 33 issues) ⬇️ One task resolved. -1
Oct 2020 (22 of 69 issues) ⬇️ Four tasks resolved. -4
Nov 2020 (9 of 38 issues) ⬇️ Two tasks resolved. -2
Dec 2020 (11 of 33 issues) ⬇️ One task resolved. -1
Jan 2021 (4 of 50 issues) ⬇️ One task resolved. -1
Feb 2021 (9 of 20 issues) ⬇️ Two tasks resolved. -2
Mar 2021 (34 of 48 issues) 34 new tasks survived and remain unresolved. +48; -14
Tally
138 issues open, as of Excellence #29 (6 Mar 2021).
-33 issues closed, of the previous 138 open issues.
+34 new issues that survived March 2021.
139 issues open, as of today (2 Apr 2021).

Thanks!

Thank you to everyone who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


Footnotes:

Incident status, Wikitech.
Wikimedia incident stats by Krinkle, CodePen.
Production Excellence: Month-over-month spreadsheet and plot.
Report charts for Wikimedia-production-error project, Phabricator.

Tech News issue #15, 2021 (April 12, 2021)

00:00, Monday, 12 2021 April UTC

Tech News issue #14, 2021 (April 5, 2021)

00:00, Monday, 05 2021 April UTC

Tracking memory issue in a Java application

13:01, Friday, 02 2021 April UTC

One of the critical pieces of our infrastructure is Gerrit. It hosts most of our git repositories and is the primary code review interface. Gerrit is written in the Java programming language, which runs in the Java Virtual Machine (JVM). For a couple of years we have been struggling with memory issues which eventually led to an unresponsive service and unattended restarts. The symptoms were the usual ones: application responses becoming slower and slower until server-side errors rendered the service unusable. Eventually the JVM terminates with:

java.lang.OutOfMemoryError: Java heap space

This post is my journey toward identifying the root cause and having it fixed by the upstream developers. Given that I barely knew anything about Java, and even less about its ecosystem and tooling, I learned more than a few things along the way and felt it was worth sharing.

Prior work

The first meaningful task was in June 2019 (T225166), which over several months led us to:

  • replace the aging underlying hardware
  • tune the memory garbage collector and switch to the G1 garbage collector
  • raise the amount of memory allocated to the JVM (the heap)
  • upgrade the Debian operating system by two major releases (Jessie → Stretch → Buster)
  • conduct a major upgrade of Gerrit (June 2020, Gerrit 2.15 → 3.2)
  • move the bots crawling the repositories to a replica
  • fix a lack of caching in a MediaWiki extension that queried Gerrit more than it should have

All of those were sane operations that are part of any application life-cycle, and some were meant to address other issues. Raising the maximum heap size (20G to 32G) definitely reduced the frequency of crashes.
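For illustration, the garbage-collector and heap tuning boils down to a handful of JVM options. The invocation below is only a sketch, not our production setup; apart from the G1 collector and the 32G heap mentioned above, the values and site path are assumptions:

# Sketch only: G1 garbage collector plus a fixed 32G heap for the Gerrit daemon
# (the site path /var/lib/gerrit is an assumption for illustration).
java -XX:+UseG1GC -Xms32g -Xmx32g -jar gerrit.war daemon -d /var/lib/gerrit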

Still, we had memory filling up over and over. The graph below shows the memory usage from September 2019 to September 2020. The increase of maximum heap usage in October 2020 is the JVM heap being raised from 20G to 32G. Each of the "little green hills" corresponds to memory filling up until we either restarted Gerrit or the JVM crashed unattended:

Zooming in on a week, one can clearly see the memory almost entirely filling up until we had to restart:

This had to stop: complaints about Gerrit being unresponsive, SRE having to respond to java.lang.OutOfMemoryError: Java heap space, or us having to "proactively" restart before a weekend were not good practices. Back fresh from vacation, I filed a new task, T263008, in September 2020 and started to tackle the problem in my spare time. Would I be able to find my way in an ecosystem totally unknown to me?

Challenge accepted!

Stuff learned

  • Routine maintenance is definitely needed
  • Don't expect things to magically resolve themselves; commit to thoroughly identifying the root cause instead of hoping.

Looking at memory

Since the JVM runs out of memory, let's look at memory allocation. The JDK provides several utilities to interact with a running JVM, be it to attach a debugger, write a copy of the whole heap, or send admin commands to the JVM.

jmap lets one take a full capture of the memory used by a Java virtual machine. It has to run as the same user as the application (we use the Unix username gerrit2) and, when multiple JDKs are installed, one has to make sure to invoke the jmap provided by the Java version running the targeted JVM.

Dumping the memory is then a matter of:

sudo -u gerrit2 /usr/lib/jvm/java-8-openjdk-amd64/bin/jmap \
  -dump:live,format=b,file=/var/lib/gerrit-202009170755.hprof <pid of java process here>

It takes a few minutes depending on the number of objects. The resulting .hprof file is a binary format, which can be interpreted by various tools.

jhat, a Java heap analyzer, is provided by the JDK alongside jmap. I ran it with tracking of object allocations disabled (-stack false) as well as references to objects (-refs false), since even with 64G of RAM and 32 cores it took a few hours and eventually crashed. That is due to the insane number of live objects. On the server I thus ran:

/usr/lib/jvm/java-8-openjdk-amd64/bin/jhat -stack false -refs false gerrit-202009170755.hprof

It spawns a web service which I can reach from my machine over ssh using port redirection, opening a web browser to it:

ssh  -C -L 8080:ip6-localhost:7000 gerrit1001.wikimedia.org &
xdg-open http://ip6-localhost:8080/

Instance Counts for All Classes (excluding native types)

2237744 instances of class org.eclipse.jgit.lib.ObjectId
2128766 instances of class org.eclipse.jgit.lib.ObjectIdRef$PeeledNonTag
735294 instances of class org.eclipse.jetty.util.thread.Locker
735294 instances of class org.eclipse.jetty.util.thread.Locker$Lock
735283 instances of class org.eclipse.jetty.server.session.Session
...

Another view shows 3.5G of byte arrays.

I got pointed to https://heaphero.io/, however the file is too large to upload and it contains sensitive information (credentials, users' personal information) which we cannot share with a third party.

Nothing was really conclusive at this point; the heap dump had been taken shortly after a restart and Gerrit was not yet in trouble.

Eventually I found that JavaMelody has a view providing the exact same information, without all the trouble of figuring out the proper set of parameters for jmap, jhat and ssh. Just browse to the monitoring page:

Stuff learned

  • jmap to issue commands to the JVM, including taking a heap dump
  • jhat to run an analysis, with some options required to make it workable
  • Use JavaMelody instead

JVM handling of out of memory error

An idea was to take a heap dump whenever the JVM encounters an out of memory error. That can be turned on by passing the extended option HeapDumpOnOutOfMemoryError to the JVM and specifying where the dump will be written to with HeapDumpPath:

java \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/srv/gerrit \
  -jar gerrit.war ...

And surely next time it ran out of memory:

Nov 07 13:43:35 gerrit2001 java[30197]: java.lang.OutOfMemoryError: Java heap space
Nov 07 13:43:35 gerrit2001 java[30197]: Dumping heap to /srv/gerrit/java_pid30197.hprof ...
Nov 07 13:47:02 gerrit2001 java[30197]: Heap dump file created [35616147146 bytes in 206.962 secs]

This resulted in a 34GB dump file, which was not convenient for a full analysis. Even with 16G of heap for the analysis and a couple of hours of CPU churning, it was not of any help.

And at this point the JVM is still around, the java process is still there and thus systemd does not restart the service for us even though we have instructed it to do so:

/lib/systemd/system/gerrit.service
[Service]
ExecStart=java -jar gerrit.war
Restart=always
RestartSec=2s

That led to our Gerrit replica being down for a whole weekend with no alarm whatsoever (T267517). I imagine the reason for the JVM not exiting on an OutOfMemoryError is to let one investigate the cause. Just like the heap dump, this behavior can be configured, via the ExitOnOutOfMemoryError extended option:

java -XX:+ExitOnOutOfMemoryError

Next time the JVM runs out of memory it will exit, systemd will notice the service went away, and it will happily restart it again.
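Combining the two extended options from this section gives an invocation along these lines (a sketch, not our actual service definition):

java \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/srv/gerrit \
  -XX:+ExitOnOutOfMemoryError \
  -jar gerrit.war ...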

Stuff learned

  • Automatic heap dumping with the JVM for future analysis
  • Be sure to have the JVM exit when running out of memory so systemd will restart the service
  • A process can be up while still not serving its purpose

Side track to jgit cache

When I filed the task, I suspected enabling git protocol version 2 (J199) on CI might have been the root cause. That eventually led me to look at how Gerrit caches git operations. Being a Java application, it does not use the regular git command but a pure-Java implementation, jgit, a project started by the same author as Gerrit (Shawn Pearce).

To speed up operations, jgit keeps git objects in memory, with various tuning settings. You can read more about it at T263008#6601490, but in the end it was of no use for this problem. @thcipriani would later point out that the jgit cache does not grow past its limit:

The investigation was not a good lead, but it did prompt us to get a better view of what is going on in the jgit cache. To do so we would need to expose historical metrics of the state of the cache.

Stuff learned

  • Jgit has in-memory caches holding frequently accessed repositories / objects in the JVM memory, speeding up access to them.

Metrics collection

We always had trouble determining whether our jgit cache was properly sized, and tuned it randomly with little information. Eventually I found out that Gerrit has a wide range of metrics available, which are described at https://gerrit.wikimedia.org/r/Documentation/metrics.html . I had always wondered how we could access them without having to write a plugin.

The first step was to add the metrics-reporter-jmx plugin. It registers all the metrics with JMX, a Java system to manage resources. That is then exposed by JavaMelody and at least lets us browse the metrics:

I had long had a task to get those metrics exposed (T184086) but never had a strong enough incentive to work on it. The idea was to expose those metrics to the Prometheus monitoring system, which would scrape them and make them available in Grafana. They can be exposed using the metrics-reporter-prometheus plugin. Some configuration is required to create an authentication token that lets Prometheus scrape the metrics, and it is then all set and collected.
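As a quick sanity check, one can fetch the metrics endpoint the same way Prometheus would. The URL path and bearer-token handling below are illustrative assumptions, not our actual configuration:

# Hypothetical check that the metrics-reporter-prometheus plugin answers
# with Prometheus-formatted metrics (adjust host, path and token).
curl -s -H "Authorization: Bearer ${TOKEN}" \
  https://gerrit.example.org/plugins/metrics-reporter-prometheus/metrics | head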

In Grafana, discovering which metrics are of interest might be daunting. For the jgit cache we are only interested in a few metrics, and crafting a basic dashboard for them is simple enough. But since we now collect all those metrics, surely we should have dashboards for anything else that could be of interest to us.

While browsing the Gerrit upstream repositories, I found an unadvertised repository: gerrit/gerrit-monitoring. The project aims at deploying to Kubernetes a monitoring stack for Gerrit composed of Grafana, Loki, Prometheus and Promtail. While browsing the code, I found out they already had a Grafana template which I could import into our Grafana instance with a few small modifications.

During the Gerrit Virtual Summit I raised that as a potentially interesting project for the whole community and surely a few days later:

In the end we have a few useful Grafana dashboards, the ones imported from the gerrit-monitoring repo are suffixed with (upstream): https://grafana.wikimedia.org/dashboards/f/5AnaHr2Mk/gerrit

And I crafted one dedicated to jgit cache: https://grafana.wikimedia.org/d/8YPId9hGz/jgit-block-cache

Stuff learned

  • Prometheus scraping system with auth token
  • Querying Prometheus metrics in Grafana and its vector selection mechanism
  • Other Gerrit administrators had already created visualizations
  • Mentioning our reuse prompted upstream to further advertise their solution, which hopefully has led to more adoption.

Despair

After a couple of months, there was no good lead. The issue had been around for a while, in a programming language I don't know, with tooling completely alien to me. I even found jcmd to issue commands to the JVM, such as dumping a class histogram, the same view provided by JavaMelody:

$ sudo -u gerrit2 jcmd 2347 GC.class_histogram
 num     #instances         #bytes  class name
----------------------------------------------
   5:      10042773     1205132760  org.eclipse.jetty.server.session.SessionData
   8:      10042773      883764024  org.eclipse.jetty.server.session.Session
  11:      10042773      482053104  org.eclipse.jetty.server.session.Session$SessionInactivityTimer$1
  13:      10042779      321368928  org.eclipse.jetty.util.thread.Locker
  14:      10042773      321368736  org.eclipse.jetty.server.session.Session$SessionInactivityTimer
  17:      10042779      241026696  org.eclipse.jetty.util.thread.Locker$Lock

That is quite handy when already in a terminal; it saves a few clicks to switch to a browser, head to JavaMelody and find the link.

But it is the last week of work of the year.

Christmas is in two days.

Kids are messing up all around the home office since we are under lockdown.

Despair.

Out of rage I just stalled the task, shamelessly hoping for the Java 11 and Gerrit 3.3 upgrades to solve this. Much like we had hoped before that the system would be fixed by upgrading.

Wait..

1 million?

ONE MILLION ??

TEN TO THE POWER OF SIX ???

WHY IS THERE A MILLION HTTP SESSIONS HELD IN GERRIT !!!!!!?11??!!??

10042773  org.eclipse.jetty.server.session.SessionData

There. Right there. It was there since the start. In plain sight. And surely 19 hours later Gerrit had created 500k sessions for 56 MBytes of memory. It is slowly but surely leaking memory.

Stuff learned

  • Everything clears up once one has found the root cause

When upstream saves you

At this point it was just an intuition, albeit a strong one. I don't know much about Java or Gerrit internals, so I went to ask the upstream developers for further assistance. But first, I had to reproduce the issue and investigate a bit more to give as many details as possible when filing a bug report.

Reproduction

I copied a small heap dump I had taken just a few minutes after Gerrit got restarted; it had a manageable size, making it easier to investigate. Since I am not that familiar with the Java debugging tools, I went with what I call a clickodrome interface, a UI that lets you interact solely with mouse clicks: https://visualvm.github.io/

Once the heap dump was loaded, I could easily inspect objects. Notably, the org.eclipse.jetty.server.session.Session objects had a property expiry=0, often an indication of no expiry at all. Expired sessions are cleared by Jetty via a HouseKeeper thread which inspects sessions and deletes expired ones. I confirmed it does run every 600 seconds, but since the sessions are set to never expire, they pile up, leading to the memory leak.

On December 24th, a day before Christmas, I filed a private security issue to upstream (now public): https://bugs.chromium.org/p/gerrit/issues/detail?id=13858

After the Christmas and weekend break, upstream acknowledged the report and I did more investigating to pinpoint the source of the issue. The sessions are created by a SessionHandler, and debug logs show dftMaxIdleSec=-1, or default maximum idle seconds set to -1, which means that by default the sessions are created without any expiry. The Jetty debug log then gave a bit more insight:

DEBUG org.eclipse.jetty.server.session : Session xxxx is immortal && no inactivity eviction

It is immortal and is thus never picked up by the session cleaner:

DEBUG org.eclipse.jetty.server.session : org.eclipse.jetty.server.session.SessionHandler
==dftMaxIdleSec=-1 scavenging session ids []
                                          ^^^ --- empty array

Our Gerrit instance has several plugins, and the leak could potentially come from one of them. I then booted a dummy Gerrit on my machine (java -jar gerrit-3.3.war), cloned the built-in All-Projects.git repository repeatedly, and observed objects with VisualVM. Jetty sessions with no expiry were created, which rules out the plugins and points at Gerrit itself. Upstream developer Luca Milanesio pointed out that Gerrit creates a Jetty session which is intended for plugins. I also narrowed down the leak to only be triggered by git operations made over HTTP. Eventually, by commenting out a single line of Gerrit code, I eliminated the memory leak, and upstream pointed at a change released a few versions ago that may have been the cause.
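For reference, the reproduction boiled down to repeatedly cloning over HTTP from the local test instance while watching the Jetty session count in VisualVM. The URL, port and loop count below are illustrative assumptions, not a recorded transcript:

# Hypothetical reproduction loop against a local development Gerrit.
for i in $(seq 1 200); do
  git clone --quiet http://localhost:8080/All-Projects /tmp/repro-$i
  rm -rf /tmp/repro-$i
done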

Upstream then went on to reproduce it on their side, took some measurements before and after commenting out that line, and confirmed the leak (750 bytes for each git request made over HTTP). Given the amount of traffic we receive from humans, systems and bots, it is not surprising we ended up hitting the JVM memory limit rather quickly.

Eventually the fix got merged and new Gerrit versions were released. We upgraded to the new release and haven't restarted Gerrit since. Problem solved!

Stuff learned

  • Even with no knowledge about a programming language, if you can build and run it, you can still debug using print or the universal optimization operator: //.
  • Quickly acknowledge upstream hints, ideas and recommendations. Even if it is to dismiss one of their leads.
  • Write a report, such as this blog post.

Thank you upstream developers Luca Milanesio and David Ostrovsky for fixing the issue!

Thank you @dancy for the added clarifications as well as typos and grammar fixes.


Tech News issue #13, 2021 (March 29, 2021)

00:00, Monday, 29 2021 March UTC

Tech News issue #12, 2021 (March 22, 2021)

00:00, Monday, 22 2021 March UTC

Tech News issue #11, 2021 (March 15, 2021)

00:00, Monday, 15 2021 March UTC

Tech News issue #10, 2021 (March 8, 2021)

00:00, Monday, 08 2021 March UTC

Production Excellence #29: February 2021

01:03, Saturday, 06 2021 March UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📈 Incidents

3 documented incidents last month, [1] which is average for the time of year. [2]

Learn about these incidents at Incident status on Wikitech, and their Preventive measures in Phabricator.

For those with NDA-restricted access, there may be additional private incident reports 🔒 available.

💡 Did you know: Our Incident reports have switched to using the ISO date format in their titles and listings, for improved readability and edit-ability (esp. when publishing on a later date). So long 20210221, and hello 2021-02-21!

📊 Trends

In February we saw a continuation of the new downward trend that began this January, which came after twelve months of continued rising. Let's make sure this trend sticks with us as we work our way through the debt, whilst also learning to have a healthy week-to-week iteration where we monitor and follow up on any new developments such that they don't introduce lasting regressions.

The recent tally (issues filed since we started reporting in March 2019) is down to 138 unresolved errors, from 152 last month. The old backlog (pre-2019 issues) also continued its 5-month streak and is down to 148, from 160 last month. If this progress continues we'll soon have fewer "Old" issues than "Recent" issues, and possibly by the start of 2022 we may be able to report and focus only on our rotation through recent issues, as hopefully by then we are balancing our work such that issues reported in a given month are addressed mostly within that same month, or otherwise later that quarter within 2-3 months. Visually that would manifest as the colored chunks having a short life on the chart, each drawn at a sharp downwards angle, instead of dragged out and building up an ever-taller shortcake. I do like cake, but I prefer the kind I can eat. 🍰

Month-over-month plots based on spreadsheet data. [3] [4]


📖 Outstanding errors

Summary over recent months:

  • ⚠️ July 2019 (2 of 18 issues left): no change.
  • ⚠️ August 2019 (1 of 14 issues): no change.
  • ⚠️ October 2019 (4 of 12 issues): no change.
  • ⚠️ November 2019 (1 of 5 issues): no change.
  • ⚠️ December 2019 (1 of 9 issues): One task resolved (-1).
  • ⚠️ January 2020 (2 of 7 issues): no change.
  • ⚠️ February 2020 (1 of 7 issues): no change.
  • ⚠️ March 2020 (2 of 2 issues): no change.
  • April 2020 (9 of 14 issues left): no change.
  • May 2020 (6 of 14 issues left): no change.
  • June 2020 (7 of 14 issues left): no change.
  • July 2020 (9 of 24 new issues): no change.
  • August 2020 (20 of 53 new issues): Two tasks resolved (-2).
  • September 2020 (9 of 33 new issues): Five tasks resolved (-5).
  • October 2020 (26 of 69 new issues): Five tasks resolved (-5).
  • November 2020 (11 of 38 new issues): Three tasks resolved (-3).
  • December 2020 (12 of 33 new issues): Seven tasks resolved (-7).
  • January 2021 (5 of 50 new issues): Two tasks resolved (-2).
  • February 2021: 11 of 20 new issues survived the month and remained unresolved (+20; -9)
Recent tally
152 issues open, as of Excellence #28 (16 Feb 2021).
-25 issues closed since, of the previous 152 open issues.
+11 new issues that survived Feb 2021.
138 issues open, as of today 5 Mar 2021.

For the on-going month of March 2021, we've got 12 new issues so far.

Take a look at the workboard and look for tasks that could use your help!

View Workboard


🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


Footnotes:

[1] Incident status, Wikitech.
[2] Wikimedia incident stats by Krinkle, CodePen.
[3] Month-over-month, Production Excellence spreadsheet.
[4] Open tasks, Wikimedia-prod-error, Phabricator.

Production Excellence #28: January 2021

23:57, Friday, 05 2021 March UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📈 Incidents

1 documented incident last month. That's the third month in a row that we are at or near zero major incidents – not bad! [1] [2]

Learn about recent incidents at Incident status on Wikitech, or Preventive measures in Phabricator.

💡 Did you know: Our Incident status page provides a green-yellow status reflection over the past ten days, with a link to the most recent incident doc if there was any during that time.

📊 Trends

This January saw a small recovery in our otherwise negative upward trend. For the first time in twelve months, more reports were closed than new reports survived the month without resolution. What happened twelve months ago? In January 2020, we also saw a small recovery during the otherwise upward trend before and after it.

Perhaps it's something about the post-December holidays that temporarily improves the quality, and/or reduces the quantity, of code changes. Only time will tell if this is the start of a new positive trend, or merely a post-holiday break. [3]

While our month-to-month trend might not (yet) be improving, we do see persistent improvements in our overall backlog of pre-2019 reports. This is in part because we generally don't file new reports there, so it makes sense that it doesn't go back up, but it's still good to see downward progress every month, unlike with reports from more recent months which often see no change month-to-month (see "Outstanding errors" below, for example).

This positive trend on our "Old" backlog started in October 2020 and has consistently progressed every month since then (refer to the "Old" numbers in red on the below chart, or the same column in the spreadsheet). [3][4]


📖 Outstanding errors

Summary over recent months:

  • ⚠️ July 2019 (2 of 18 issues left): no change.
  • ⚠️ August 2019 (1 of 14 issues): no change.
  • ✅ September 2019 (0 of 12 issues): Last two tasks were resolved (-2).
  • ⚠️ October 2019 (4 of 12 issues): One task resolved (-1).
  • ⚠️ November 2019 (1 of 5 issues): no change.
  • ⚠️ December 2019 (2 of 9 issues), Two tasks resolved (-2).
  • ⚠️ January 2020 (2 of 7 issues), no change.
  • ⚠️ February 2020 (1 of 7 issues left), One task resolved (-1).
  • March 2020 (2 of 2 issues left), no change.
  • April 2020 (9 of 14 issues left): no change.
  • May 2020 (6 of 14 issues left): One task resolved (-1).
  • June 2020 (7 of 14 issues left): no change.
  • July 2020 (9 of 24 new issues): no change.
  • August 2020 (22 of 53 new issues): One task resolved (-1).
  • September 2020 (13 of 33 new issues): One task resolved (-1).
  • October 2020 (31 of 69 new issues): Four tasks fixed (-4).
  • November 2020 (14 of 38 new issues): no change.
  • December 2020 (19 of 33 new issues) Three tasks resolved (-3)
  • January 2021: 7 of 50 new issues survived the month and remained unresolved (+50; -43)
Recent tally
160 issues open, as of Excellence #27 (4 Feb 2021).
-15 issues closed since, of the previous 160 open issues.
+7 new issues that survived January 2021.
152 issues open, as of today (16 Feb 2021).

January saw +50 new production errors reported in a single month, which is an unfortunate all-time high. However, we've also done remarkably well on addressing 43 of them within a month, when the potential root cause and diagnostics data were still fresh in our minds. Well done!

For the on-going month of February, there have been 16 new issues reported so far.

Take a look at the workboard and look for tasks that could use your help!

View Workboard


🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


Footnotes:

[1] Incident status, Wikitech.
[2] Wikimedia incident stats by Krinkle, CodePen.
[3] Month-over-month, Production Excellence spreadsheet.
[4] Open tasks, Wikimedia-prod-error, Phabricator.

Gerrit now automatically adds reviewers

10:19, Friday, 05 2021 March UTC
WARNING: 2021-03-05: the reviewers-by-blame Gerrit plugin got disabled after it was announced by this blog post. It turns out the author of a change is not necessarily an adequate reviewer suggestion in our context, and some people were being added as reviewers to a whole lot more code than they would expect. The post still has some worthwhile information as to how one can find reviewers.

Finding reviewers for a change is often a challenge, especially for a newcomer or folks proposing changes to projects they are not familiar with. Since January 16th, 2019, Gerrit automatically adds reviewers on your behalf based on who last changed the code you are affecting.

Antoine "@hashar" Musso exposes what lead us to enable that feature and how to configure it to fit your project. He will offers tip as to how to seek more reviewers based on years of experience.


When uploading a new patch, reviewers should be added automatically: that is the subject of task T91190, opened almost four years ago (March 2015). I had declined the task since we already have the Reviewer bot (see the section below), but @Tgr found a plugin for Gerrit which analyzes the code history with git blame and uses that to determine potential reviewers for a change. It took us a while to add that particular Gerrit plugin, and the first version we installed was not compatible with our Gerrit version. The plugin was upgraded yesterday (Jan 16th) and is working fine (T101131).

Let's have a look at the functionality the plugin provides, and how it can be configured per repository. I will then offer a refresher of how one can search for reviewers based on git history.

Reviewers by blame plugin

NOTE: the reviewers-by-blame plugin was removed the day after this announcement blog post was published. This section thus no longer applies to the Wikimedia Gerrit instance. It is left here for historical reasons.

The Gerrit plugin looks at the affected code using git blame and extracts the top three past authors, which are then added as reviewers to the change on your behalf. Added reviewers will thus receive a notification showing you have asked them for a code review.

The configuration is done on a per-project basis and inherits from the parent project. Without any tweaks, your project inherits the configuration from All-Projects. If you are a project owner, you can adjust the configuration. As an example, here is the configuration for operations/mediawiki-config, which shows the inherited values and an exception to not process a file named InitialiseSettings.php:

The three settings are described in the documentation for the plugin:

plugin.reviewers-by-blame.maxReviewers
The maximum number of reviewers that should be added to a change by this plugin.
By default 3.

plugin.reviewers-by-blame.ignoreFileRegEx
Ignore files where the filename matches the given regular expression when computing the reviewers. If empty or not set, no files are ignored.
By default not set.

plugin.reviewers-by-blame.ignoreSubjectRegEx
Ignore commits where the subject of the commit messages matches the given regular expression. If empty or not set, no commits are ignored.
By default not set.
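For illustration, those settings could be applied to a project's configuration roughly as follows. This is only a sketch: Gerrit keeps per-project plugin settings in project.config on the refs/meta/config branch, and the values below merely mirror the operations/mediawiki-config example described above rather than its actual contents:

# With the project's refs/meta/config branch checked out, set the plugin options:
git config -f project.config plugin.reviewers-by-blame.maxReviewers 3
git config -f project.config plugin.reviewers-by-blame.ignoreFileRegEx 'InitialiseSettings.php'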

By making past authors aware of a change to code they previously altered, I believe you will get more reviews and hopefully get your changes approved faster.

Previously we had other methods to add reviewers, one opt-in based and the others being cumbersome manual steps. They should be used to complement the Gerrit reviewers-by-blame plugin, and I give an overview of each of them in the following sections.

Gerrit watchlist

The original system from Gerrit lets you watch projects, similar to a user watch list on MediaWiki. In Gerrit preferences, one can get notified for new changes, patchsets, comments... Simply indicate a repository, optionally a search query and you will receive email notifications for matching events.

The attached image is my watched projects configuration: I receive notifications for any changes made to integration/config, as well as for changes in mediawiki/core which affect either composer.json or one of the Wikimedia deployment branches for that repo.

One drawback is that we cannot watch a whole hierarchy of projects, such as mediawiki and all its descendants, which would be helpful for watching our deployment branch. It is still useful when you are the primary maintainer of a repository, since you can keep track of all activity for the repository.

Reviewer bot

The reviewer bot was written by Merlijn van Deen (@valhallasw); it is similar to the Gerrit watched projects feature, with some major benefits:

  • watcher is added as a reviewer, the author thus knows you were notified
  • it supports watching a hierarchy of projects (eg: mediawiki/*)
  • the file/branch filtering might be easier to grasp compared to Gerrit search queries
  • the watchers are stored at a central place which is public to anyone, making it easy to add others as reviewers.

One registers reviewers on a single wiki page: https://www.mediawiki.org/wiki/Git/Reviewers.

Each repository filter is a wikitext section (eg: === mediawiki/core ===) followed by a wikitext template and a file filter using python fnmatch. Some examples:

Listen to any changes that touch i18n:

== Listen to repository groups ==
=== * ===
* {{Gerrit-reviewer|JohnDoe|file_regexp=<nowiki>i18n</nowiki>}}

Listen to MediaWiki core search related code:

=== mediawiki/core ===
* {{Gerrit-reviewer|JaneDoe|file_regexp=<nowiki>^includes/search/</nowiki>}}

The system works great, given maintainers remember to register on the page and that the files are not moved around. The bot is not that well known though and most repositories do not have any reviewers listed.

Inspecting git history

A good source of reviewers is the git history: one can easily retrieve a list of past authors, who should be good candidates to review the code. I typically use git shortlog --summary --no-merges for that (--no-merges filters out the merge commits crafted by Gerrit when a change is submitted). Example for the MediaWiki job queue system:

$ git shortlog --no-merges --summary --since "one year ago" includes/jobqueue/|sort -n|tail -n4
     3 Petr Pchelko
     4 Brad Jorsch
     4 Umherirrender
    16 Aaron Schulz

Which gives me four candidates that acted on that directory over the past year.

Past reviewers from git notes

When a patch is merged, Gerrit records in git the votes and the canonical URL of the change. They are available as git notes under refs/notes/review. Once the notes are fetched, they can be shown in git show or git log by passing --show-notes=review: for each commit, after the commit message, the notes get displayed and show votes among other metadata:

$ git fetch refs/notes/review:refs/notes/review
$ git log --no-merges --show-notes=review -n1
commit e1d2c92ac69b6537866c742d8e9006f98d0e82e8
Author: Gergő Tisza <tgr.huwiki@gmail.com>
Date:   Wed Jan 16 18:14:52 2019 -0800

    Fix error reporting in MovePage
    
    Bug: T210739
    Change-Id: I8f6c9647ee949b33fd4daeae6aed6b94bb1988aa

Notes (review):
    Code-Review+2: Jforrester <jforrester@wikimedia.org>
    Verified+2: jenkins-bot
    Submitted-by: jenkins-bot
    Submitted-at: Thu, 17 Jan 2019 05:02:23 +0000
    Reviewed-on: https://gerrit.wikimedia.org/r/484825
    Project: mediawiki/core
    Branch: refs/heads/master

And I can then get the list of authors that previously voted Code-Review +2 for a given path. Using the previous example of includes/jobqueue/ over a year, the list is slightly different:

$ git log --show-notes=review --since "1 year ago" includes/jobqueue/|grep 'Code-Review+2:'|sort|uniq -c|sort -n|tail -n5
      2     Code-Review+2: Umherirrender <umherirrender_de.wp@web.de>
      3     Code-Review+2: Jforrester <jforrester@wikimedia.org>
      3     Code-Review+2: Mobrovac <mobrovac@wikimedia.org>
      9     Code-Review+2: Aaron Schulz <aschulz@wikimedia.org>
     18     Code-Review+2: Krinkle <krinklemail@gmail.com>

User Krinkle has approved a lot of patches, even though he doesn't show up in the list of authors obtained by the previous means (inspecting git history).

Conclusion

The Gerrit reviewers-by-blame plugin acts automatically, which offers a good chance your newly uploaded patch will get reviewers added out of the box. For finer tweaking, one should register as a reviewer on https://www.mediawiki.org/wiki/Git/Reviewers, which benefits everyone. The last courses of action, based on the git log history, are meant to complement them.

For any remarks, support or concerns, reach out on the Freenode IRC channel #wikimedia-releng or file a task in Phabricator.

Thank you @thcipriani for the proofreading and English fixes.

Production Excellence #27: December 2020

18:35, Thursday, 04 2021 February UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📈 Incidents

1 documented incident in December. [1] In previous years, December typically had 4 or fewer documented incidents. [3]

Learn about recent incidents at Incident documentation on Wikitech, or Preventive measures in Phabricator.


📊 Trends

Month-over-month plots based on spreadsheet data. [4] [2]


📖 Outstanding errors

Take a look at the workboard and look for tasks that could use your help.
https://phabricator.wikimedia.org/tag/wikimedia-production-error/

Summary over recent months:

  • ⚠️ July 2019 (2 of 18 issues left): no change.
  • ⚠️ August 2019 (1 of 14 issues): no change.
  • ⚠️ September 2019 (2 of 12 issues): One task resolved (-1).
  • ⚠️ October 2019 (5 of 12 issues): no change.
  • ⚠️ November 2019 (1 of 5 issues): no change.
  • ⚠️ December 2019 (4 of 9 issues), no change.
  • ⚠️ January 2020 (2 of 7 issues), no change.
  • February 2020 (2 of 7 issues left), no change.
  • March 2020 (2 of 2 issues left), no change.
  • April 2020 (9 of 14 issues left): no change.
  • May 2020 (7 of 14 issues left): no change.
  • June 2020 (7 of 14 issues left): no change.
  • July 2020 (9 of 24 new issues): no change.
  • August 2020 (23 of 53 new issues): no change.
  • September 2020 (13 of 33 new issues): One task resolved (-1).
  • October 2020 (35 of 69 new issues): Four issues fixed (-4).
  • November 2020 (14 of 38 new issues): Five issues fixed (-5).
  • December 2020: 22 of 33 new issues survived the month and remained unresolved (+33; -22)
Recent tally
149 as of Excellence #26 (15 Dec 2020).
-11 closed of the 149 recent issues.
+22 new issues survived December 2020.
160 as of 27 Jan 2021.

🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


Footnotes:

[1] Incident documentation 2020, Wikitech.
[2] Open tasks, Wikimedia-prod-error, Phabricator.
[3] Wikimedia incident stats by Krinkle, CodePen.
[4] Month-over-month, Production Excellence spreadsheet.

Production Excellence #26: November 2020

18:34, Thursday, 04 2021 February UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📈 Incidents

Zero documented incidents in November. [1] That's the only month this year without any (publicly documented) incidents. In 2019, November was also the only such month. [3]

Learn about recent incidents at Incident documentation on Wikitech, or Preventive measures in Phabricator.


📊 Trends

The overall increase in errors was relatively low this past month, similar to the November-December period last year.

What's new is that we can start to see a positive trend emerging in the backlogs where we've shrunk issue count three months in a row, from the 233 high in October, down to the 181 we have in the ol' backlog today.

Month-over-month plots based on spreadsheet data. [4]


📖 Outstanding errors

Take a look at the workboard and look for tasks that could use your help.
https://phabricator.wikimedia.org/tag/wikimedia-production-error/

Summary over recent months:

  • ⚠️ July 2019 (2 of 18 tasks): One task closed (-1).
  • ⚠️ August 2019 (1 of 14 tasks): no change.
  • ⚠️ September 2019 (3 of 12 tasks): no change.
  • ⚠️ October 2019 (5 of 12 tasks): no change.
  • ⚠️ November 2019 (1 of 5 tasks): no change.
  • ⚠️ December 2019 (3 of 9 tasks left), no change.
  • January 2020 (3 of 7 tasks left), One task closed (-1).
  • February (2 of 7 tasks left), no change.
  • March (2 of 2 tasks left), no change.
  • April (9 of 14 tasks left): no change.
  • May (7 of 14 tasks left): no change.
  • June (7 of 14 tasks left): no change.
  • July 2020 (9 of 24 new tasks): no change.
  • August 2020 (23 of 53 new tasks): Three tasks closed (-3).
  • September 2020 (14 of 33 new tasks): One task closed (-1).
  • October 2020 (39 of 69 new tasks): Six tasks closed (-6).
  • November 2020: 19 of 38 new tasks survived the month and remain open today (+38; -19)
Recent tally
142 as of Excellence #25 (23 Oct 2020).
-12 closed of the 142 recent tasks.
+19 survived November 2020.
149 as of today, 15 Dec 2020.

The on-going month of December has 19 unresolved tasks so far.


🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


❝   The plot "thickens" as they say. Why, by the way? Is it a soup metaphor? ❞

Footnotes:

[1] Incident documentation 2020, Wikitech.
[2] Open tasks, Wikimedia-prod-error, Phabricator.
[3] Wikimedia incident stats, Krinkle, CodePen.
[4] Month-over-month, Production Excellence (spreadsheet).

Perf Matters at Wikipedia in 2015

00:33, Thursday, 31 2020 December UTC

Hello, WANObjectCache

This year we achieved another milestone in our multi-year effort to prepare Wikipedia for serving traffic from multiple data centres.

The MediaWiki application that powers Wikipedia relies heavily on object caching. We use Memcached as a horizontally scaled key-value store, and we’d like to keep the cache local to each data centre. This minimises dependencies between data centres, and makes better use of storage capacity (based on local needs).

Aaron Schulz devised a strategy that makes MediaWiki caching compatible with the requirements of a multi-DC architecture. Previously, when source data changed, MediaWiki would recompute and replace the cache value. Now, MediaWiki broadcasts “purge” events for cache keys. Each data centre receives these and sets a “tombstone”, a marker lasting a few seconds that limits any set-value operations for that key to a miniscule time-to-live. This makes it tolerable for recache-on-miss logic to recompute the cache value using local replica databases, even though they might have several seconds of replication lag. Heartbeats are used to detect the replication lag of the databases involved during any re-computation of a cache value. When that lag is more than a few seconds (a large portion of the tombstone period), the corresponding cache set-value operation automatically uses a low time-to-live. This means that large amounts of replication lag are tolerated.

This and other aspects of WANObjectCache’s design allow MediaWiki to trust that cached values are not substantially more stale than a local replica database, provided that cross-DC broadcasting of tiny in-memory tombstones is not disrupted.


First paint time now under 900ms

In July we set out a goal: improve page load performance so our median first paint time would go down from approximately 1.5 seconds to under a second – and stay under it!

I identified synchronous scripts as the single-biggest task blocking the browser, between the start of a page navigation and the first visual change seen by Wikipedia readers. We had used async scripts before, but converting these last two scripts to be asynchronous was easier said than done.

There were several blockers to this change, including the use of embedded scripts by interactive features. These were partly migrated to CSS-only solutions. For the other features, we introduced the notion of “delayed inline scripts”. Embedded scripts now wrap their code in a closure and add it to an array. After the module loader arrives, we process the closures from the array and execute the code within.

Another major blocker was the subset of community-developed gadgets that didn’t yet use the module loader (introduced in 2011). These legacy scripts assumed a global scope for variables, and depended on browser behaviour specific to serially loaded, synchronous, scripts. Between July 2015 and August 2015, I worked with the community to develop a migration guide. And, after a short deprecation period, the legacy loader was removed.


Hello, WebPageTest

Previously, we only collected performance metrics for Wikipedia from sampled real-user page loads. This is super and helps detect trends, regressions, and other changes at large. But, to truly understand the characteristics of what made a page load a certain way, we need synthetic testing as well.

Synthetic testing offers frame-by-frame video captures, waterfall graphs, performance timelines, and above-the-fold visual progression. We can run these automatically (e.g. every hour) for many urls, on many different browsers and devices, and from different geo locations. These tests allow us to understand the performance, and analyse it. We can then compare runs over any period of time, and across different factors. It also gives us snapshots of how pages were built at a certain point in time.

The results are automatically recorded into a database every hour, and we use Grafana to visualise the data.

In 2015 Peter built out the synthetic testing infrastructure for Wikimedia, from scratch. We use the open-source WebPageTest software. To read more about its operation, check Wikitech.


The journey to Thumbor begins

Gilles evaluated various thumbnailing services for MediaWiki. The open-source Thumbor software came out as the most promising candidate.

Gilles implemented support for Thumbor in the MediaWiki-Vagrant development environment.

To read more about our journey to Thumbor, read The Journey to Thumbor (part 1).


Save timing reduced by 50%

Save timing is one of the key performance metrics for Wikipedia. It measures the time from when a user presses “Publish changes” when editing – until the user’s browser starts to receive a response. During this time, many things happen. MediaWiki parses the wiki-markup into HTML, which can involve page macros, sub-queries, templates, and other parser extensions. These inputs must be saved to a database. There may also be some cascading updates, such as the page’s membership in a category. And last but not least, there is the network latency between user’s device and our data centres.

This year saw a 50% reduction in save timing. At the beginning of the year, median save timing was 2.0 seconds (quarterly report). By June, it was down to 1.6 seconds (report), and in September 2015, we reached 1.0 seconds! (report)

The effort to reduce save timing was led by Aaron Schulz. The impact that followed was the result of hundreds of changes to MediaWiki core and to extensions.

Deferring tasks to post-send

Many of these changes involved deferring work to happen post-send. That is, after the server sends the HTTP response to the user and closes the main database transaction. Examples of tasks that now happen post-send are: cascading updates, emitting “recent changes” objects to the database and to pub-sub feeds, and doing automatic user rights promotions for the editing user based on their current age and total edit count.

Aaron also implemented the “async write” feature in the multi-backend object cache interface. MediaWiki uses this for storing the parser cache HTML in both Memcached (tier 1) and MySQL (tier 2). The second write now happens post-send.

By re-ordering these tasks to occur post-send, the server can send a response back to the user sooner.

Working with the database, instead of against it

A major category of changes were improvements to database queries. For example, reducing lock contention in SQL, refactoring code in a way that reduces the amount of work done between two write queries in the same transaction, splitting large queries into smaller ones, and avoiding use of database master connections whenever possible.

These optimisations reduced chances of queries being stalled, and allow them to complete more quickly.

Avoid synchronous cache re-computations

The aforementioned work on WANObjectCache also helped a lot. Whenever we converted a feature to use this interface, we reduced the amount of blocking cache computation that happened mid-request. WANObjectCache also performs probabilistic preemptive refreshes of near-expiring values, which can prevent cache stampedes.

Profiling can be expensive

We disabled the performance profiler of the AbuseFilter extension in production. AbuseFilter allows privileged users to write rules that may prevent edits based on certain heuristics. Its profiler would record how long the rules took to inspect an edit, allowing users to optimise them. The way the profiler worked, though, added a significant slow down to the editing process. Work began later in 2016 to create a new profiler, which has since completed.

And more

Lots of small things. Including the fixing of the User object cache which existed but wasn’t working. And avoiding the caching of values in Memcached if computing them is faster than the Memcached latency required to fetch them!

We also improved latency of file operations by switching more LBYL-style coding patterns to EAFP-style code. Rather than checking whether a file exists, is readable, and then checking when it was last modified – do only the latter and handle any errors. This is both faster and more correct (due to LBYL race conditions).
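As a generic illustration of the two styles (the actual MediaWiki code is PHP; this is only a sketch of the pattern, in shell):

f="$1"  # path to inspect

# LBYL: look before you leap - check first, then act (racy if the file changes in between).
if [ -e "$f" ] && [ -r "$f" ]; then
  mtime=$(stat -c %Y "$f")
fi

# EAFP: easier to ask forgiveness than permission - just act and handle the failure.
if ! mtime=$(stat -c %Y "$f" 2>/dev/null); then
  echo "cannot stat $f" >&2
fi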


So long, Sajax!

Sajax was a library for invoking a subroutine on the server, and receiving its return value as JSON from client-side JavaScript. In March 2006, it was adopted in MediaWiki to power the autocomplete feature of the search input field.

The Sajax library had a utility for creating an XMLHttpRequest object in a cross-browser-compatible way. MediaWiki deprecated Sajax in favour of jQuery.ajax and the MediaWiki API. Yet, years later in 2015, this tiny part of Sajax remained popular in Wikimedia's ecosystem of community-developed gadgets.

The legacy library was loaded by default on all Wikipedia page views for nearly a decade. During a performance inspection this year, Ori Livneh decided it was high time to finish this migration. Goodbye Sajax!


Further reading

This year also saw the switch to encrypt all Wikimedia traffic with TLS by default.

Mentioned tasks: T107399, T105391, T109666, T110858, T55120.

Runnable runbooks

18:59, Tuesday, 15 2020 December UTC

Recently there has been a small effort on the Release-Engineering-Team to encode some of our institutional knowledge as runbooks linked from a page in the team's wiki space.

What are runbooks, you might ask? This is how they are described on the aforementioned wiki page:

This is a list of runbooks for the Wikimedia Release Engineering Team, covering step-by-step lists of what to do when things need doing, especially when things go wrong.

So runbooks are each essentially a sequence of commands, intended to be pasted into a shell by a human. Step by step instructions that are intended to help the reader accomplish an anticipated task or resolve a previously-encountered issue.

Presumably runbooks are created when someone encounters an issue, and, recognizing that it might happen again, helpfully documents the steps that were used to resolve said issue.

This all seems pretty sensible at first glance. This type of documentation can be really valuable when you're in an unexpected situation or trying to accomplish a task that you've never attempted before and just about anyone reading this probably has some experience running shell commands pasted from some online tutorials, setup instructions for a program, etc.

Despite the obvious value runbooks can provide, I've come to harbor a fairly strong aversion to the idea of encoding what are essentially shell scripts as individual commands on a wiki page. As someone whose job involves a lot of automation, I would usually much prefer a shell script, a python program, or even a "maintenance script" over a runbook.

After a lot of contemplation, I've identified a few reasons that I don't like runbooks on wiki pages:

  • Runbooks are tedious and prone to human errors.
    • It's easy to lose track of where you are in the process.
    • It's easy to accidentally skip a step.
    • It's easy to make typos.
  • A script can be code reviewed and version controlled in git.
  • A script can validate its arguments which helps to catch typos.
  • I think that command line terminal input is more like code than it is prose. I am more comfortable editing code in my usual text editor as opposed to editing in a web browser. The wikitext editor is sufficient for basic text editing, and visual editor is quite nice for rich text editing, but neither is ideal for editing code.

I do realize that MediaWiki pages are version controlled. I also realize that sometimes you just can't be bothered to write and debug a robust shell script to address some rare circumstances. The cost is high and it's uncertain whether the script will be worth such an effort. In those situations a runbook might be the perfect way to contribute to collective knowledge without investing a lot of time into perfecting a script.

My favorite web comic, xkcd, has a few things to say about this subject:

"The General Problem" xkcd #974. "Automation" xkcd #1319. "Is It Worth the Time?" xkcd #1205.

Potential Solutions

I've been pondering a solution to these issues for a long time. Mostly motivated by the pain I have experienced (and the mistakes I've made) while executing the biggest runbook of all on a regular basis.

Over the past couple of years I've come across some promising ideas which I think can help the problems I've identified with runbooks. I think that one of the most interesting is Do-nothing scripting. Dan Slimmon identifies some of the same problems that I've detailed here. He uses the term *slog* to refer to long and tedious procedures like the Wikimedia Train Deploys. The proposed solution comes in the form of a do-nothing script. You should go read that article, it's not very long. Here are a few relevant quotes:

Almost any slog can be turned into a do-nothing script. A do-nothing script is a script that encodes the instructions of a slog, encapsulating each step in a function.

...

At first glance, it might not be obvious that this script provides value. Maybe it looks like all we’ve done is make the instructions harder to read. But the value of a do-nothing script is immense:

  • It’s now much less likely that you’ll lose your place and skip a step. This makes it easier to maintain focus and power through the slog.
  • Each step of the procedure is now encapsulated in a function, which makes it possible to replace the text in any given step with code that performs the action automatically.
  • Over time, you’ll develop a library of useful steps, which will make future automation tasks more efficient.

A do-nothing script doesn’t save your team any manual effort. It lowers the activation energy for automating tasks, which allows the team to eliminate toil over time.
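To make the idea concrete, here is a minimal do-nothing script in the spirit of that article. The step names and prompts below are hypothetical stand-ins (they are not the actual train-deploy procedure); each step just prints an instruction and waits for the operator.

"""A minimal do-nothing script: each step of a slog becomes a class
that prints instructions and waits for the operator to confirm."""


def wait_for_enter():
    input("Press Enter when done: ")


class CheckBlockersStep:
    def run(self, context):
        print("Check the task tracker for open blockers.")
        wait_for_enter()


class CutBranchStep:
    def run(self, context):
        # A step like this can later be replaced by code that does the work.
        print(f"Cut the release branch for version {context['version']}.")
        wait_for_enter()


class AnnounceStep:
    def run(self, context):
        print(f"Announce that {context['version']} is rolling out.")
        wait_for_enter()


if __name__ == "__main__":
    context = {"version": input("Version being deployed: ")}
    procedure = [CheckBlockersStep(), CutBranchStep(), AnnounceStep()]
    for step in procedure:
        step.run(context)
    print("Done.")

Replacing the body of any one of these classes with real automation later is exactly the lowered activation energy the article describes.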

I was inspired by this and I think it's a fairly clever solution to the problems identified. What if we combined the best aspects of gradual automation with the best aspects of a wiki-based runbook? Others were inspired by this as well, resulting in tools like braintree/runbook, codedown and the one I'm most interested in, rundoc.

Runnable Runbooks

My ideal tool would combine code and instructions in a free-form "literate programming" style. By following some simple conventions in our runbooks, we can use a tool to parse and execute the embedded code blocks in a controlled manner. With a little bit of tooling we can gain many benefits (a rough sketch of such a tool follows the list below):

  • The tooling will keep track of the steps to execute, ensuring that no steps are missed.
  • It will ensure that errors aren't missed by carefully checking and logging the result of each step.
  • It could also provide a mechanism for inputting the values of any variables or arguments, and validate the format of user input.
  • With flexible control-flow management it could even allow resuming from anywhere in the middle of a runbook after an aborted run.
  • Manual steps can simply consist of a block of prose that gets displayed to the operator. With embedded markup we can format the instructions nicely and render them in the terminal using Rich [7]. Once the operator confirms that the step is complete, the workflow moves on to the next step.
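As a rough illustration of what such tooling might look like (this is a hypothetical sketch of the general idea, not rundoc's actual implementation or API), a driver script can pull the fenced code blocks out of a Markdown runbook and execute them one at a time, treating non-shell blocks as manual instructions and stopping on the first error:

"""Hypothetical sketch of a runnable-runbook driver: extract fenced code
blocks from a Markdown runbook and execute them one step at a time."""
import re
import subprocess
import sys


def extract_steps(markdown_text):
    """Return (language, code) pairs for each fenced code block."""
    return re.findall(r"```(\w+)\n(.*?)```", markdown_text, re.DOTALL)


def run_runbook(path):
    with open(path) as f:
        steps = extract_steps(f.read())

    for number, (lang, code) in enumerate(steps, start=1):
        print(f"--- Step {number} of {len(steps)} ({lang}) ---")
        print(code)
        if lang not in ("bash", "sh"):
            # Anything that is not shell is treated as manual instructions.
            input("Press Enter once this manual step is complete: ")
            continue
        if input("Run this step? [y/N] ").strip().lower() != "y":
            print("Skipped.")
            continue
        # Check the result of each step so that failures are never missed.
        result = subprocess.run(code, shell=True)
        if result.returncode != 0:
            sys.exit(f"Step {number} failed with exit code {result.returncode}.")


if __name__ == "__main__":
    run_runbook(sys.argv[1])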

Prior Art

I've found a few projects that already implement many of these ideas. Here are a few of the most relevant:

The one I'm most interested in is Rundoc. It's almost exactly the tool that I would have created. In fact, I started writing code before discovering rundoc, but once I realized how closely it matched my ideal solution, I decided to abandon my effort. Instead I will add a couple of missing features to Rundoc in order to get everything that I want, and hopefully I can contribute my enhancements back upstream for the benefit of others.

Demo: https://asciinema.org/a/MKyiFbsGzzizqsGgpI4Jkvxmx
Source: https://github.com/20after4/rundoc

References

[1]: https://www.mediawiki.org/wiki/Wikimedia_Release_Engineering_Team/Runbooks "runbooks"
[2]: https://wikitech.wikimedia.org/wiki/Heterogeneous_deployment/Train_deploys "Train deploys"
[3]: https://blog.danslimmon.com/2019/07/15/do-nothing-scripting-the-key-to-gradual-automation/ "Do-nothing scripting: the key to gradual automation by Dan Slimmon"
[4]: https://github.com/braintree/runbook "runbook by braintree"
[5]: https://github.com/earldouglas/codedown "codedown by earldouglas"
[6]: https://github.com/eclecticiq/rundoc "rundoc by eclecticiq"
[7]: https://rich.readthedocs.io/en/latest/ "Rich python library"

Changes and improvements to PHPUnit testing in MediaWiki

10:32, Wednesday, 25 2020 November UTC

Building off the work done at the Prague Hackathon (T216260), we're happy to announce some significant changes and improvements to the PHP testing tools included with MediaWiki.

PHP unit tests can now be run statically, without installing MediaWiki

You can now download MediaWiki, run composer install, and then composer phpunit:unit to run core's unit test suite (T89432).

The standard PHPUnit entrypoint can be used, instead of the PHPUnit Maintenance class

You can now use the plain PHPUnit entrypoint at vendor/bin/phpunit instead of the MediaWiki maintenance class which wraps PHPUnit (tests/phpunit/phpunit.php).

Both the unit tests and integration tests can be executed with the standard phpunit entrypoint (vendor/bin/phpunit) or if you prefer, with the composer scripts defined in composer.json (e.g. composer phpunit:unit). We accomplished this by writing a new bootstrap.php file (the old one which the maintenance class uses was moved to tests/phpunit/bootstrap.maintenance.php) which executes the minimal amount of code necessary to make core, extension and skin classes discoverable by test classes.

Tests should be placed in tests/phpunit/{integration,unit}

Integration tests should be placed in tests/phpunit/integration while unit tests go in tests/phpunit/unit; both are discoverable by the new test suites (T87781). It sounds obvious now to write this, but a nice side effect is that by organizing tests into these directories it's immediately clear to authors and reviewers what type of test one is looking at.

Introducing MediaWikiUnitTestCase

A new base test case, MediaWikiUnitTestCase, has been introduced with a minimal amount of boilerplate: a @covers validator, code ensuring that globals are disabled and that the tests are in the proper directory, and the default PHPUnit 4 and 6 compatibility layer. MediaWikiTestCase has been renamed to MediaWikiIntegrationTestCase for clarity.

Please migrate tests to be unit tests where appropriate

A significant portion of core's unit tests, approximately 50% of the total, has been ported to use MediaWikiUnitTestCase. We have also worked on porting extension tests to the unit/integration directories. @Ladsgroup wrote a helpful script to assist with automating the identification and moving of unit tests; see P8702. Migrating tests from MediaWikiIntegrationTestCase to MediaWikiUnitTestCase makes them faster.

Note that unit tests in CI are still run with the PHPUnit maintenance class (tests/phpunit/phpunit.php), so when reviewing unit test patches please execute them locally with vendor/bin/phpunit /path/to/tests/phpunit/unit or composer phpunit -- /path/to/tests/phpunit/unit.

Generating code coverage is now faster

The PHPUnit configuration file now resides at the root of the repository, and is called phpunit.xml.dist. (As an aside, you can copy this to phpunit.xml and make local changes, as that file is git-ignored, although you should not need to do that.) We made a modification (T192078) to the PHPUnit configuration inside MediaWiki to speed up code coverage generation. This makes it feasible to have a split window in your IDE (e.g. PhpStorm), run "Debug with coverage", and see the results in your editor fairly quickly after running the tests.

What is next?

Things we are working on:

  • Porting core tests to integration/unit.
  • Porting extension tests to integration/unit.
  • Removing legacy test suites or ensuring they can be run in a different way (passing the directory name, for example).
  • Switching CI to use the new entrypoint for unit tests, then for unit and integration tests.

Help is wanted in all areas of the above! We can be found in the #wikimedia-codehealth channel and via the phab issues linked in this post.

Credits

The above work has been done and supported by Máté (@TK-999), Amir (@Ladsgroup), Kosta (@kostajh), James (@Jdforrester-WMF), Timo (@Krinkle), Leszek (@WMDE-leszek), Kunal (@Legoktm), Daniel (@daniel), Michael Große (@Michael), Adam (@awight), Antoine (@hashar), JR (@Jrbranaa) and Greg (@greg) along with several others. Thank you!

Thanks for reading, and happy testing!

Amir, Kosta, & Máté

Production Excellence #25: October 2020

05:50, Tuesday, 24 2020 November UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📈 Incidents

2 documented incidents in October. [1] Historically, that's just below the median of 3 for this time of year. [3]

Learn about recent incidents at Incident documentation on Wikitech, or Preventive measures in Phabricator.


📊 Trends

Month-over-month plots based on spreadsheet data. [4]


📖 Outstanding errors

Take a look at the workboard and look for tasks that could use your help.
https://phabricator.wikimedia.org/tag/wikimedia-production-error/

Summary over recent months:

  • ⚠️ July 2019 (3 of 18 tasks): One task closed.
  • ⚠️ August 2019 (1 of 14 tasks): No change.
  • ⚠️ September 2019 (3 of 12 tasks): No change.
  • ⚠️ October 2019 (5 of 12 tasks): One task closed.
  • ⚠️ November 2019 (1 of 5 tasks): Two tasks closed.
  • December 2019 (3 of 9 tasks): No change.
  • January 2020 (4 of 7 tasks): No change.
  • February 2020 (2 of 7 tasks): No change.
  • March 2020 (2 of 2 tasks): No change.
  • April 2020 (9 of 14 tasks): One task closed.
  • May 2020 (7 of 14 tasks): No change.
  • June 2020 (7 of 14 tasks): No change.
  • July 2020 (9 of 24 new tasks): One task closed.
  • August 2020 (26 of 53 new tasks): Five tasks closed.
  • September 2020 (15 of 33 new tasks): Two tasks closed.
  • October 2020: 45 of 69 new tasks survived the month of October and remain open today.
Recent tally
110 as of Excellence #24 (23rd Oct).
-13 closed of the 110 recent tasks.
+45 survived October 2020.
142 as of today, 23rd Nov.

For the on-going month of November, there are 25 new tasks so far.


🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production. Thanks!

Until next time,

– Timo Tijhof


 👤  Howard Salomon:

❝   Problem is when they arrest you, you get put on the justice train, and the train has no brain. ❞  

Footnotes:

[1] Incident documentation 2020, Wikitech
[2] Open tasks in Wikimedia-prod-error, Phabricator
[3] Wikimedia incident stats by Krinkle, CodePen
[4] Month-over-month, Production Excellence (spreadsheet)

CI now updates your deployment-charts

23:46, Tuesday, 17 2020 November UTC

If you're making changes to a service that is deployed to Kubernetes, it sure is annoying to have to update the helm deployment-chart values with the newest image version before you deploy. At least, that's how I felt when developing on our dockerfile-generating service, blubber.

Over the last two months we've added

And I'm excited to say that CI can now handle updating image versions for you (after your change has merged), in the form of a change to deployment-charts that you'll need to +2 in Gerrit. Here's what you need to do to get this working in your repo:

Add the following to your .pipeline/config.yaml file's publish stage:

promote: true

The above assumes the defaults, which are the same as if you had added:

promote:
  - chart: "${setup.projectShortName}"  # The project name
    environments: []                    # All environments
    version: '${.imageTag}'             # The image published in this stage

You can specify any of these values, and you can promote to multiple charts, for example:

promote:
  - chart: "echostore"
    environments: ["staging", "codfw"]
  - chart: "sessionstore"

The above values would promote the production image published after merging to all environments for the sessionstore service, and only the staging and codfw environments for the echostore service. You can see more examples at https://wikitech.wikimedia.org/wiki/PipelineLib/Reference#Promote

If your containerized service doesn't yet have a .pipeline/config.yaml, now is a great time to migrate it! This tutorial can help you with the basics: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial#Publishing_Docker_Images

This is just one step closer to achieving continuous delivery of our containerized services! I'm looking forward to continuing to make improvements in that area.

From student to professor: Amanda Levendowski

17:23, Monday, 16 2020 November UTC

This fall, we’re celebrating the 10th anniversary of the Wikipedia Student Program with a series of blog posts telling the story of the program in the United States and Canada.

Amanda Levendowski was a law school student 10 years ago when her professor assigned her to edit a Wikipedia article as a class assignment, part of the pilot program of what is now known as the Wikipedia Student Program. She tackled the article on the FAIR USE Act, a piece of failed copyright reform legislation introduced by Rep. Zoe Lofgren. And she was hooked.

“It felt so impactful to be able to contribute to this repository of knowledge that everyone I knew was using and leave behind something valuable,” Amanda says.

When her class ended, she wasn’t done with Wikipedia. She developed an independent study in law school to create the article about revenge porn because she was writing a scholarly piece about it and noticed that there wasn’t a Wikipedia article about the problem.

“That article has been viewed more than 1 million times — it’s probably gonna have more views than any piece of scholarship I write for the rest of my life,” she says.

She continued editing herself, even appearing in a 2015 “60 Minutes” piece about editing Wikipedia. (“There was a lot of footage that was understandably left on the cutting-room floor, but I’ll always remember wryly responding to Morley Safer when he suggested that copyright law was a little outdated and maybe a little boring — I think I said something like, ‘I’m sure many of your producers who rely on fair use would disagree.’ Who says that to Morley Safer?!” she recalls.) But she attributes her ongoing dedication to Wikipedia in part to Barbara Ringer.

“The year I graduated from law school, I overhauled the article about Ringer, the lead architect of the 1976 Copyright Act, the law around which much of my professional life revolves, during a WikiCon edit-a-thon,” she explains (the hero image on this blog post is of Amanda speaking at WikiConference USA in 2014). “There is something meditative about making an article better, about sharing an untold story, that I couldn’t resist wanting to continue experiencing alongside my students. And in the process, I found this stunning quote from Ringer about how the public interest of copyright law should be ‘to provide the widest possible access to information of all kinds.’ It’s hard to hear that and not think of Wikipedia and its mission.”

And now the student has become a professor herself. Amanda’s an Associate Professor of Law and Director, Intellectual Property and Information Policy Clinic at the Georgetown University Law Center. And she assigns her students to edit Wikipedia as a class assignment, of course.

One such student is Laura Ahmed, who is interested in the intersection of intellectual property and privacy law. Laura, who graduated in spring 2020, was both excited and nervous to tackle a Wikipedia assignment, making improvements to the article on the current Supreme Court case Google v. Oracle America, about the copyrightability of APIs and fair use.

“It is almost certainly going to have a substantial impact on software development in the United States, so I think it’s important for the information that is out there about the case to be accurate. That is what made me so nervous about it; it’s such a critical issue and I wanted to be sure that anything I was saying about it was adequately supported by facts,” she says. “Amanda was really great though about helping me get started and build up my confidence to edit the page. When we were editing, COVID-19 had just caused the Supreme Court to postpone several arguments, including this case. So Amanda suggested I start there, and once I’d made that one change it felt easier to go into the substance of the case and change some of the article to better reflect the legal arguments that are being made in the case.”

While Laura found the time constraints of a class assignment challenging, she thought the assignment was critical for both Wikipedia’s readers and her own hands-on learning as a law student.

“This assignment made me really think critically about what I’ve learned in law school and how I can use that knowledge in productive, but unexpected ways,” Laura explains. “When you’re a law student, you tend to forget that a lot of legal concepts aren’t common knowledge. So a lot of cases on Wikipedia really could benefit from a first or second year law student going in and just clarifying what the court actually said or what has actually happened with a case. It’s a nice reminder that we have more to contribute than we think.”

This reflection is exactly what Amanda experienced as a student herself, and is now seeing as an instructor. She reflects back on the American Bar Association’s Model Rules of Professional Conduct: “As a member of a learned profession…a lawyer should further the public’s understanding of and confidence in the rule of law and the justice system because legal institutions in a constitutional democracy depend on popular participation and support to maintain their authority.”

“It’s hard to imagine a more powerful way to further the public’s understanding of law and justice than by empowering law students to improve Wikipedia articles about those laws: it teaches the public, but it also teaches the students the twin skillsets of editing and the value of giving knowledge back to our communities,” Amanda says. “This community isn’t perfect, but I’m so inspired by the many, many volunteers who are striving to make it better. I’m proud to include myself and my students among them, and I’m excited to see where we are another decade out.”

Image: Geraldshields11, CC BY-SA 3.0, via Wikimedia Commons

Arabic and the Web

00:00, Monday, 16 2020 November UTC


I remember a Wikipedia workshop organized by the Institute of Computer Science at the University of Oxford. The question was why, when there are around half a billion Arabic speakers, Arabic content makes up less than 5%, and of that, perhaps only a third is useful. The question found some answers and suggestions for a solution, but the road to supporting more content is still long.

And because Arabic speakers tend to master several languages, perhaps unlike American or European populations, you will always find those who master a second language, such as Algerians speaking French and Egyptians speaking English.

On the history of languages: when the mother tongue becomes a second or third language, we waste time learning language instead of learning science. Many fall behind in their knowledge if they don't master the language, and the rest do not succeed because they are not able to understand the culture of the language.

But in reality, how many people live in Algeria? How many contributors are from Algeria? And how many Algerians add encyclopedic content?

I can't answer that here, but I have retrieved the 140-page report of the study, conducted by Oxford, in which I shared my thoughts; I recommend its excellent analyses.

In summary: we need to focus on the important objectives to define, organize and direct the work on this topic.

Persistence, adaptation and repetition. I am optimistic about our future at this time.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2430912

(This text collects, with light editing, several responses from a conversation that took place on a social media page.)

Wikicite from the ground up: references

11:23, Sunday, 15 2020 November UTC
When a point is made in Wikipedia, or a statement is made in Wikidata, best practice is to include a reference. The same is true in a scholarly paper, where the references are typically found in a references section.

Wikicite is a project that brings many scholarly papers into Wikidata. As beautiful as it is, it is a top-down process. As an ordinary editor there is a lot that you can do to enrich the result.

The paper, "Can trophic rewilding reduce the impact of fire in a more flammable world?" has a DOI, the PDF includes a reference section. It takes a lot of effort to add the authors and papers it cites to Wikidata. The visibility of the paper improves and so does the visibility of the paper it cites. The Scholia shows that at this time, this paper is not used as a reference in Wikipedia. 

There is now a template that retrieves its reference data from Wikidata. It will be great when it is widely adopted, because it provides an additional pathway from Wikipedia to the cited references and the information relating to them.

So what can we do to improve the quality of the data in Wikidata? First, the processes that import the bulk of new data are crucial; they are essential and need to be appreciated as such. The next part is enabling a community to improve the data. A recent paper explained what can be done with a top-down approach; all kinds of decisions were made for us, and the result feels like a one-off project.

When ORCID is considered to be our partner, it makes sense to invite people registered at ORCID to contribute to Wikidata. Their papers can be uploaded from ORCID into Wikidata, and they can link their co-authors and references themselves. As they do this while logged in to ORCID, their known personal involvement gives us assurance, and we can use it as a reference.

The quality of such a reference is better than our current references that come with a link to an "author name string". Who knows whether the disambiguation was correct? When a paper is linked to at least one known ORCID person with public information, we have a link we can verify, and consequently it becomes a link we can trust. Once the link with a person with an ORCID identifier is established, we can ask them to acknowledge the changes that happen to their papers. Our quality is enhanced and a sense of community with ORCID is established.

Thanks, GerardM

Wikicite from the ground up: "Trophic rewilding"

11:15, Sunday, 15 2020 November UTC
In nature conservation, trophic rewilding and trophic cascades are important topics. When an animal like the howler monkey is no longer around, it no longer distributes the seeds of trees. The likely effect is that, in time, those plants are no longer part of the ecosystem. Reintroducing the howler monkey restores the relation; this is considered an example of trophic rewilding.

At Wikipedia there is no article about trophic rewilding. As someone famously said, references are the most important part of a Wikipedia article, so let's start with finding references.

There is a longstanding process of importing data about scholarly papers, all kinds of scholarly papers. Some of them have "trophic rewilding" in their title. Trophic rewilding was not yet known as a subject, so it was easy enough to look for "trophic rewilding" and add it as a subject. Slowly but surely the Scholia representation evolves. More papers mean more authors, and more authors known to have collaborated on multiple publications. More citations are found for these papers, and by inference they have a relation to the subject.

The initial set of data is already good enough to get a grasp of the subject but when you want more, you can look for missing data using Scholia, information like missing authors. The author disambiguator aids in finding papers for the missing author. With such iterations, the Scholia for trophic rewilding becomes more complete.

Another avenue to improve the coverage of a subject is to add "cites work" statements in Wikidata for a paper like this one. Not all cited works are known to Wikidata, but the effect can be impressive. N.B. the citations are often found in the PDF and not in the article.
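As a small illustration (not part of the original post), the citation data added this way can be queried back from the Wikidata Query Service; the QID below is a placeholder to replace with the paper you are interested in.

"""Sketch: list the works that a given paper cites on Wikidata
(property P2860, "cites work")."""
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
PAPER_QID = "Q00000000"  # placeholder: put the QID of your paper here

QUERY = """
SELECT ?cited ?citedLabel WHERE {
  wd:%s wdt:P2860 ?cited .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
""" % PAPER_QID

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikicite-citations-example/0.1"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["cited"]["value"], "-", row["citedLabel"]["value"])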

Slowly but surely all the scholarly references to be used for a new article become available, and you can use a template in the article to link to the (evolving) Scholia. The best bit is that you can add this template to an existing Wikipedia article as well, providing a scholarly rabbit hole for interested readers.

Thanks, GerardM


weeklyOSM 538

10:27, Sunday, 15 2020 November UTC

03/11/2020-09/11/2020

lead picture

Peruvian vaccination bases on an OSM map. 1 | © Ministerio de Salud, Perú | map data © OpenStreetMap contributors |

About us

  • Since issue #537 we have been publishing in the Polish language. We are very happy to welcome our new colleagues and hope that this service in Poland will inspire even more people to contribute to OpenStreetMap when they can read our news in their native language. Witamy, polska drużyno. 😉

Mapping

  • Pascal Neis has just updated his ‘Unmapped Places‘ using OSM data from 30 October 2020.
  • Christina Ludwig, Sascha Fendrich and Alexander Zipf report about their study on ‘Regional variations of context‐based association rules in OpenStreetMap’. This study investigates the variability of association rules extracted from OSM across different geographic regions and their dependence on different context variables, such as the number of OSM mappers.
  • The Spanish Red Cross is organising an online Mapathon (es) on Thursday 19 November from 17:00 to 19:00, to help people in Burundi vulnerable to natural disasters, armed conflict and epidemics.
  • OSM Ireland now offers a Tasking Manager that organises the simultaneous mapping of buildings close to each other, by multiple mappers, by assigning the mappers different squares to work on. Another possibility is to start your own projects anywhere in the world.
  • PoliMappers report on their effort to introduce new and interested people to the world of geospatial collaborative projects in the Politecnico di Milano campus of Piacenza.
  • Brian M. Sperlongano (ZeLonewolf) announced that his proposal boundary=special_economic_zone, to tag an area in which the business and trade laws are different from the rest of the country or state, is now open for voting until 24 November.
  • With the outbreak of COVID-19 in March 2020 the German speaking Telegram group started voting on and publishing a weekly mapping focus. Along with the German version, they recently made available an English version of their wiki page to inspire more people to contribute to the weekly changing mapping challenges. amenity=car_sharing is the current ‘Weekly Focus’. Please enter your own ideas in the ‘Focus Idea Depot’ and vote on them in the Telegram group ‘OSM de‘.

Community

  • YouthMappers has appointed a new cohort of regional ambassadors for 2020–2021.
  • You can now read the answers to questions from the AskMeAnything thread on Reddit with some of the OSM Foundation Board members.

OpenStreetMap Foundation

  • Jonathan Beliën, one of the OSMF Microgrant Program recipients, has submitted his final report for the Road Completion Project. The project focused on software for conflating open data road networks against OpenStreetMap roads in Belgium. The code is open and can be used to achieve similar results anywhere in the world.

Local chapter news

  • Due to the lockdown in place since late October, OSM Ireland has come up with a schedule for November to make focused progress on the osmIRL_buildings project. Each day, there is a different town or group of towns to be mapped using the HOT OSM task manager.
  • mapeadora reported (es) > en in her blog about the official opening of the YouthMappers chapter at the Universidad Autónoma del Estado de México (UAEM), Faculty of Geography, on the Campus Toluca.

Events

  • Code The City’s 21st hack event invites everyone to use mapping, software, open data, or programming tools on Saturday 28 November and Sunday 29 November to create, update, digitise and modernise maps of our locales. The event will take place online and coding is, despite the name, only a small part of what is planned – take a look at the agenda here.
  • The OSM Geography Awareness Week is taking place from 15 to 21 November. ‘OSMGeoWeek’ is a week when teachers, students, community groups, organisations, and map lovers around the world can join together to celebrate geography and OpenStreetMap. Consider planning a mapathon, a webinar to show off your latest project, a career panel to talk about how your organisation uses OSM, or a workshop to teach others what you know. Follow #osmgeoweek, and share your experiences using the #osmgeoweek hashtag. When editing in OpenStreetMap add the #osmgeoweek2020 hashtag to your changesets to be included in the metrics. Add your event, or find one, at osmgeoweek.org!
  • State of the Map Japan 2020 (ja) and FOSS4G Japan 2020 (ja) were jointly held on 7 and 8 November. The summary (ja) of tweets and each video are now online (SotMJ (ja), FOSS4G (ja)).
  • Following the success of the mapping party held in September, the Chair of Urban Structure and Transport Planning of the Technical University in Munich is organising another mapping party to be held on 18 November at 18:15.

Humanitarian OSM

  • The HOT Disaster Services Team offers an update on the disaster responses the team is currently supporting or preparing to support around the world, as well as detailing ways the community can help.
  • Ramani Huria has been training students in Dar es Salaam and equipping them with industrial and technical skills for the 21st century while generating vital high-precision, low-cost data for flood prediction and preparedness. Through the students’ work, Ramani Huria has mapped more than 10,000 flood data points in eight weeks.
  • Crowd2Map, which was created in 2015 by Janet Chapman, has enabled volunteers to map almost five million buildings in Tanzania. This made it possible for welfare workers to locate and save 3000 girls from female genital mutilation and bring them to safe houses, where they can also receive education.

Maps

  • [1] The Peruvian Ministry of Health has published (es) > en its vaccination data points on an OSM base map.

Open Data

  • terrestris shows us several different possibilities for visualising the SRTM-30 elevation model.
  • The European Data Portal has 2259 (and counting) open datasets for Romania.

Software

  • Jochen Topf gave an overview of the first ten years of Taginfo, a service developed and maintained by himself, and describes some of the recently added features.
  • Trufi Association has created a new multi-modal bike app. It has information about the cycling road network, the public transport network, at what times bicycles can be carried on it, and how busy the vehicles are, enabling the app to propose combination routes that no other routing engine can. Maps and POIs are based on OpenStreetMap and the routing is based on OpenTripPlanner. The app can be adapted for use in any city.
  • MIERUNE Inc. experimentally released (ja) > en an address and facility search service, ‘MIERUNE Search’ (ja). This service focuses on geo services. OSM is used (ja) > en for some data, such as POIs.
  • Dongha Hwang (LuxuryCoop) has created a Taginfo instance for South Korea.
  • Hartmut (maposmatic) added some new ‘OpenOrienteeringStyles’ to ‘OpenOrienteeringMap‘, the easy Street-O map creation tool. You can quickly and easily set a map, add controls, and create a print-ready, high quality vector PDF. If you have any comments, leave them at the end.
  • Researchers from the Federal University of Paraná (UFPR), the University of Maryland (UMD), and the University of Florida published a free e-book on QGIS (pt) in October. QGIS is a free and open-source cross-platform desktop geographic information system that supports viewing, editing, and analysing geospatial data. The book is intended for both students and professionals.

Releases

  • Walter Nordmann (wambacher) has started rebuilding the OSM Software Watchlist (a list of the current release status of OSM software products).
  • Marcus Wolschon announced the highlights of the Vespucci 15.1 Beta, released on 28 October.

Did you know …

  • … the osm-in-realtime website by James Westman?
  • … Taginfo now has a chronology tab? You can use it to see how often a tag has been used in the past.
  • … that Geofabrik is hosting Taginfo instances for each country (plus some regions) and continent, even Antarctica?

Other “geo” things

  • I Hate Coordinate Systems! has a very informative overview of common questions and pitfalls encountered when working with coordinate systems.
  • Topi Tjukanov started the #30DayMapChallenge, a daily social mapping project for every day of November 2020.
  • Hoefler&Co has compiled a cartography font collection, with 70 fonts recommended for maps and inspired by mapmaking.
  • Udo Urban, from the German Research Centre for Artificial Intelligence (DFKI), presented (de) > en the project ‘TreeSatAI – Artificial Intelligence with Earth Observation and Multi-Source Geodata’.
  • Mike Darracott reported about Yorkshire Wildlife’s use of MGISS cloud technology to map and help protect habitats.

Upcoming Events

Where What When Country
Online 2020 Pista ng Mapa 2020-11-13-2020-11-27 philippines
Cologne Bonn Airport 133. Bonner OSM-Stammtisch (Online) 2020-11-17 germany
Berlin OSM-Verkehrswende #17 (Online) 2020-11-17 germany
Lyon Rencontre mensuelle (virtuelle) 2020-11-17 france
Cologne Köln Stammtisch ONLINE 2020-11-18 germany
Munich TUM Mapping Party 2020-11-18 germany
Online Missing Maps Mapathon Bratislava #10 2020-11-19 slovakia
Online FOSS4G SotM Oceania 2020 2020-11-20 oceania
Bremen Bremer Mappertreffen (Online) 2020-11-23 germany
Derby Derby pub meetup 2020-11-24 united kingdom
Salt Lake City / Virtual OpenStreetMap Utah Map Night 2020-11-24 united states
Düsseldorf Düsseldorfer OSM-Stammtisch [1] 2020-11-25 germany
London Missing Maps London Mapathon 2020-12-01 united kingdom
Stuttgart Stuttgarter Stammtisch (online) 2020-12-02 germany
Taipei OSM x Wikidata #23 2020-12-07 taiwan
Michigan Michigan Online Meetup 2020-12-07 usa

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Climate_Ben, MatthiasMatthias, Nordpfeil, PierZen, Polyglot, Rogehm, Sammyhawkrad, TheSwavu, derFred, k_zoar, richter_fn.

Tesseract OCR web interface

09:08, Saturday, 14 2020 November UTC

I prepared a web frontend for Tesseract OCR to do optical character recognition for Malayalam: https://ocr.smc.org.in. This application uses Tesseract.js, the JavaScript port of Tesseract. You can use images with English or Malayalam content. Use the editor and the spellchecker to proofread the recognized text. Your image does not leave your browser, since the recognition is done in the browser and does not use any remote servers. Source code: https://gitlab.com/smc/tesseract-ocr-web

Fixing a bug in Malayalam ya, ra, va sign rendering

11:20, Friday, 13 2020 November UTC

In Malayalam, the Ya, Va and Ra consonant signs present an interesting problem when they appear together. The Ra sign (്ര, also known as reph) is a pre-base sign, meaning it goes to the left side of the consonant or conjunct to which it applies. The Ya sign (്യ) and the Va sign (്വ) are post-base, meaning they go to the right side of the consonant or conjunct to which they apply. So, if a Ra sign and a Ya sign follow a consonant or conjunct, the Ra sign goes to the left and the Ya sign remains to the right.
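For readers unfamiliar with the encoding, both signs are stored as a virama followed by the consonant; only the rendering position differs. A small illustrative snippet (not part of the original post) makes this visible:

"""Show how the Malayalam Ra and Ya signs are encoded as virama + consonant."""
import unicodedata

ra_sign = "\u0d4d\u0d30"  # ്ര  (rendered to the LEFT of the base consonant)
ya_sign = "\u0d4d\u0d2f"  # ്യ  (rendered to the RIGHT of the base consonant)

for label, sign in (("Ra sign", ra_sign), ("Ya sign", ya_sign)):
    names = " + ".join(unicodedata.name(ch) for ch in sign)
    print(f"{label}: {sign} = {names}")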

Open Practice in Practice

18:33, Thursday, 12 2020 November UTC

Last week I had the pleasure of running a workshop on open practice with Catherine Cronin as part of City University of London's online MSc in Digital Literacies and Open Practice, run by the fabulous Jane Secker. Both Catherine and I have run guest webinars for this course for the last two years, so this year we decided to collaborate and run a session together. Catherine has had a huge influence on shaping my own open practice, so it was really great to have an opportunity to work together. We decided from the outset that we wanted to practice what we preach, so we designed a session that would give participants plenty of opportunity to interact with us and with each other, and to choose the topics the workshop focused on.

We began with a couple of definitions of open practice, emphasising that there is no one hard and fast definition and that open practice is highly contextual and continually negotiated, and we then asked participants to suggest what open practice meant to them by writing on a shared slide. We went on to highlight some examples of open responses to the COVID-19 pandemic, including the UNESCO Call for Joint Action to support learning and knowledge sharing through open educational resources, Creative Commons' Open COVID Pledge, Helen Beetham and ALT's Open COVID Pledge for Education, and the University of Edinburgh's COVID-19 Critical Care MOOC.

We then gave participants an opportunity to choose what they wanted us to focus on from a list of four topics: 

  1. OEP to Build Community – which included the examples of Femedtech and Equity Unbound.
  2. Open Pedagogy –  including All Aboard Digital Skills in HE, the National Forum Open Licensing Toolkit, Open Pedagogy Notebook, and University of Windsor Tool Parade
  3. Open Practice for Authentic Assessment – covering Wikimedia in Education and Open Assessment Practices.
  4. Open Practice and Policy – with examples of open policies for learning and teaching from the University of Edinburgh. 

For the last quarter of the workshop we divided participants into small groups and invited them to discuss:

  • What OEP are you developing and learning most about right now?
  • What OEP would you like to develop further?

before coming back together to feed back and share their discussions.

Finally, to draw the workshop to a close, Catherine ended with a quote from Rebecca Solnit, which means a lot to both of us, and which was particularly significant for the day we ran the workshop, 3rd November, the day of the US elections.

Rebecca Solnit quote

Slides from the workshop are available under open licence for anyone to reuse and a recording of our session is also available:  Watch recording | View slides.

10 years of teaching with Wikipedia: Jonathan Obar

17:34, Thursday, 12 2020 November UTC

This fall, we’re celebrating the 10th anniversary of the Wikipedia Student Program with a series of blog posts telling the story of the program in the United States and Canada.

Jonathan Obar was teaching at Michigan State University ten years ago when he heard some representatives from the Wikimedia Foundation would be visiting. As the governance of social media was central to Jonathan’s research and teaching, he looked forward to the meeting.

“To be honest, I was highly critical of Wikipedia at the time, assuming incorrectly that Wikipedia was mainly a problematic information resource with few benefits beyond convenience,” he admits. “How my perspective changed during that meeting and in the months that followed. I was taught convincingly the distinction between Wikipedia as a tool for research, and Wikipedia as a tool for teaching. Clearly much of the controversy has always been, and remains, about the former. More to the moment, was the realization about the possibilities of the latter. Banning Wikipedia is counter-productive if teaching about the internet is the plan. The benefits of active, experiential learning via Web 2.0 are as convincing now as they were then.”

Jonathan should know: He joined the pilot program of what’s now known as the Wikipedia Student Program, and ten years later, he’s still actively teaching with Wikipedia. Jonathan incorporated Wikipedia assignments into his classes at Michigan State, the University of Toronto, the University of Ontario Institute of Technology, and now at York University, where he’s been since 2016. Not only has Jonathan taught with Wikipedia himself, he also spearheaded efforts to expand the program within Canada.

“The opportunity to work with Wikimedia and now Wiki Education continues to be one of the more meaningful academic experiences I’ve been fortunate enough to encounter these last ten years,” he says. “I’ve connected more than 15 Communication Studies courses to the Education Program, and in each course I’ve worked with students eager to learn about Wikipedia, happy when they learn how to edit, and thrilled when their work contributes to the global internet. As a Canadian recruiter for the Education Program I had the privilege to work with more than 35 different classes operating across Canada, meeting and learning with different instructors, while also sharing a fascination with Wikipedia.”

As an early instructor in the program, Jonathan experienced the evolution of our support resources, from the original patchwork of wiki pages to the now seamless Dashboard platform with built-in training modules, and he appreciates the ways it's become easier to teach with Wikipedia in the 10 years he's been doing it. The training he received as an early instructor a decade ago talked about source triangulation; now, the online information literacy environment requires these skills more than ever.

“Students consistently emphasize how Wikipedia assignments help them develop information and digital literacies, which they view as essential to developing their knowledge of the internet,” Jonathan says. “The students are correct as learning about Wikipedia and its social network helps to address many disinformation and misinformation challenges.”

Professor Jonathan Obar, at left, with student Andrew Hatelt and Writing Prize Coordinator Jon Sufrin of York University.

In 10 years, many moments stand out for Jonathan, particularly in the support he’s received and interactions he’s had with Wikipedia’s volunteer community. But he points to one student’s work as being a particular favorite: A York University student in his senior undergraduate seminar created the article on the “Digital Divide in Canada”, including passing through the “Did You Know” process to land on Wikipedia’s main page. York University also recognized the student’s work, giving him the senior undergraduate writing prize, over more than 20,000 other students across 20 departments and programs in the Faculty.

“The recognition by the university emphasizes not only that the community is starting to acknowledge the value of Wikipedia, but perhaps also that the student’s work, supported by the program, helped inform that perspective,” he says.

Jonathan is teaching two more classes this year as part of our program, one on Fake News, Fact-Finding, and the Future of Journalism and one on Information and Technology.

“After attending that meeting all those years ago, I was convinced that Wikipedia was one of the most effective tools for eLearning available (and it remains that way),” he says. “I hope to continue teaching with Wikipedia, and with the Wikipedia Student Program, for many years to come.”

Hero image credit: Alin (Public Policy), CC BY-SA 3.0, via Wikimedia Commons; In-text image credit: Jon Sufrin, on behalf of Faculty of LA&PS, York University, CC BY-SA 4.0, via Wikimedia Commons

The Listeria Evolution

09:40, Thursday, 12 2020 November UTC

My Listeria tool has been around for years now, and is used on over 72K pages across 80 wikis in the Wikimediaverse. And while it still works in principle, it has some issues, and, being a single PHP script, it is not exactly flexible enough to adapt to new requirements.

Long story short, I rewrote the thing in Rust. The PHP-based bot has been deactivated, and all edits by ListeriaBot (marked as “V2”, example) since 2020-11-12 are made by the new version.

I tried to keep the output as compatible to the previous version as possible, but some minute changes are to be expected, so there should be a one-time “wave” of editing by the bot. Once every page has been updated, things should stabilize again.

As best as I can tell, the new version does everything the old one did, but it can do more already, and has some foundations for future expansions:

  • Multiple lists per page (a much requested feature), eliminating the need for subpage transclusion.
  • Auto-linking external IDs (eg VIAF) instead of just showing the value.
  • Multiple list rows per item, depending on the SPARQL (another requested feature). This requires the new one_row_per_item=no parameter.
  • Foundation to use other SPARQL engines, such as the one being prepared for Commons (as there is an OAuth login required for the current test one, I have not completed that yet). This could generate lists for SDC queries.
  • Portability to generic Wikibase installations (untested; might require some minor configuration changes). Could even be bundled with Docker, as QuickStatements is now.
  • Foundation to use the Commons Data namespace to store the lists, then display them on a wiki via Lua. This would allow lists to be updated without editing the wikitext of the page, and no part of the list is directly editable by users (thus, no possibility of the bot overwriting human edits, a reason given to disallow Listeria edits in main namespace). The code is actually pretty complete already (including the Lua), but it got bogged down a bit in details of encoding information like sections which is not “native” to tabular data. An example with both wiki and “tabbed” versions is here.

As always with new code, there will be bugs and unwanted side effects. Please use the issue tracker to log them.