Fixing a bug in Malayalam ya, ra, va sign rendering

11:20, Friday, 13 November 2020 UTC

In Malayalam, the Ya, Va and Ra consonant signs present an interesting problem when they appear together. The Ra sign (്ര, also known as reph) is a pre-base sign, meaning it goes to the left side of the consonant or conjunct to which it applies. The Ya sign (്യ) and Va sign (്വ) are post-base signs, meaning they go to the right side of the consonant or conjunct to which they apply. So, after a consonant or conjunct, if both the Ra sign and the Ya sign are present, the Ra sign goes to the left and the Ya sign remains on the right.
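
One way to observe this reordering (not part of the original fix) is to shape a test string with HarfBuzz. Below is a minimal sketch assuming the Python uharfbuzz bindings and a locally available Malayalam font; the font path and the test string are placeholder assumptions.

import uharfbuzz as hb

FONT_PATH = "Malayalam-Regular.ttf"   # assumption: any Malayalam font file on disk
TEXT = "ക്ര്യ"                        # example cluster: ka + virama + ra + virama + ya

with open(FONT_PATH, "rb") as f:
    face = hb.Face(hb.Blob(f.read()))
font = hb.Font(face)

buf = hb.Buffer()
buf.add_str(TEXT)
buf.guess_segment_properties()        # picks up the Malayalam script automatically
hb.shape(font, buf)

# Print the shaped glyph names left to right; with correct shaping the Ra sign
# glyph comes before the base consonant and the Ya sign glyph after it.
print([font.glyph_to_string(info.codepoint) for info in buf.glyph_infos])

Comparing this output for a patched and an unpatched build of a font is a quick way to confirm that the Ra sign ends up on the left and the Ya or Va sign on the right.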

Open Practice in Practice

18:33, Thursday, 12 November 2020 UTC

Last week I had the pleasure of running a workshop on open practice with Catherine Cronin as part of City University of London’s online MSc in Digital Literacies and Open Practice, run by the fabulous Jane Secker.  Both Catherine and I have run guest webinars for this course for the last two years, so this year we decided to collaborate and run a session together.  Catherine has had a huge influence on shaping my own open practice, so it was really great to have an opportunity to work together.  We decided from the outset that we wanted to practice what we preach, so we designed a session that would give participants plenty of opportunity to interact with us and with each other, and to choose the topics the workshop focused on. 

We began with a couple of definitions of open practice, emphasising that there is no one hard and fast definition and that open practice is highly contextual and continually negotiated, and we then asked participants to suggest what open practice meant to them by writing on a shared slide.  We went on to highlight some examples of open responses to the COVID-19 pandemic, including the UNESCO Call for Joint Action to support learning and knowledge sharing through open educational resources, the Creative Commons Open COVID Pledge, Helen Beetham and ALT’s Open COVID Pledge for Education, and the University of Edinburgh’s COVID-19 Critical Care MOOC.

We then gave participants an opportunity to choose what they wanted us to focus on from a list of four topics: 

  1. OEP to Build Community – which included the examples of Femedtech and Equity Unbound.
  2. Open Pedagogy – including All Aboard Digital Skills in HE, the National Forum Open Licensing Toolkit, Open Pedagogy Notebook, and the University of Windsor Tool Parade.
  3. Open Practice for Authentic Assessment – covering Wikimedia in Education and Open Assessment Practices.
  4. Open Practice and Policy – with examples of open policies for learning and teaching from the University of Edinburgh. 

For the last quarter of the workshop we divided participants into small groups and invited them to discuss two questions:

  • What OEP are you developing and learning most about right now?
  • What OEP would you like to develop further?

The groups then came back together to feed back and share their discussions. 

Finally, to draw the workshop to a close, Catherine ended with a quote from Rebecca Solnit, which means a lot to both of us, and which was particularly significant for the day we ran the workshop, 3rd November, the day of the US elections.

Rebecca Solnit quote

Slides from the workshop are available under open licence for anyone to reuse and a recording of our session is also available:  Watch recording | View slides.

10 years of teaching with Wikipedia: Jonathan Obar

17:34, Thursday, 12 November 2020 UTC

This fall, we’re celebrating the 10th anniversary of the Wikipedia Student Program with a series of blog posts telling the story of the program in the United States and Canada.

Jonathan Obar was teaching at Michigan State University ten years ago when he heard some representatives from the Wikimedia Foundation would be visiting. As the governance of social media was central to Jonathan’s research and teaching, he looked forward to the meeting.

“To be honest, I was highly critical of Wikipedia at the time, assuming incorrectly that Wikipedia was mainly a problematic information resource with few benefits beyond convenience,” he admits. “How my perspective changed during that meeting and in the months that followed. I was taught convincingly the distinction between Wikipedia as a tool for research, and Wikipedia as a tool for teaching. Clearly much of the controversy has always been, and remains, about the former. More to the moment, was the realization about the possibilities of the latter. Banning Wikipedia is counter-productive if teaching about the internet is the plan. The benefits of active, experiential learning via Web 2.0 are as convincing now as they were then.”

Jonathan should know: He joined the pilot program of what’s now known as the Wikipedia Student Program, and ten years later, he’s still actively teaching with Wikipedia. Jonathan incorporated Wikipedia assignments into his classes at Michigan State, the University of Toronto, the University of Ontario Institute of Technology, and now at York University, where he’s been since 2016. Not only has Jonathan taught with Wikipedia himself, he also spearheaded efforts to expand the program within Canada.

“The opportunity to work with Wikimedia and now Wiki Education continues to be one of the more meaningful academic experiences I’ve been fortunate enough to encounter these last ten years,” he says. “I’ve connected more than 15 Communication Studies courses to the Education Program, and in each course I’ve worked with students eager to learn about Wikipedia, happy when they learn how to edit, and thrilled when their work contributes to the global internet. As a Canadian recruiter for the Education Program I had the privilege to work with more than 35 different classes operating across Canada, meeting and learning with different instructors, while also sharing a fascination with Wikipedia.”

As an early instructor in the program, Jonathan experienced the evolution of our support resources, from the original patchwork of wiki pages to the now seamless Dashboard platform with built-in training modules. He appreciates the ways it’s become easier to teach with Wikipedia in the 10 years he’s been doing it. The training he received as an early instructor a decade ago talked about source triangulation; now, the online information literacy environment requires those skills more than ever.

“Students consistently emphasize how Wikipedia assignments help them develop information and digital literacies, which they view as essential to developing their knowledge of the internet,” Jonathan says. “The students are correct as learning about Wikipedia and its social network helps to address many disinformation and misinformation challenges.”

Jonathan Obar with student who received award
Professor Jonathan Obar, at left, with student Andrew Hatelt and Writing Prize Coordinator Jon Sufrin of York University.

In 10 years, many moments stand out for Jonathan, particularly in the support he’s received and the interactions he’s had with Wikipedia’s volunteer community. But he points to one student’s work as a particular favorite: a York University student in his senior undergraduate seminar created the article on the “Digital Divide in Canada”, which passed through the “Did You Know” process to land on Wikipedia’s main page. York University also recognized the student’s work, awarding him the senior undergraduate writing prize ahead of more than 20,000 other students across 20 departments and programs in the Faculty.

“The recognition by the university emphasizes not only that the community is starting to acknowledge the value of Wikipedia, but perhaps also that the student’s work, supported by the program, helped inform that perspective,” he says.

Jonathan is teaching two more classes this year as part of our program, one on Fake News, Fact-Finding, and the Future of Journalism and one on Information and Technology.

“After attending that meeting all those years ago, I was convinced that Wikipedia was one of the most effective tools for eLearning available (and it remains that way),” he says. “I hope to continue teaching with Wikipedia, and with the Wikipedia Student Program, for many years to come.”

Hero image credit: Alin (Public Policy), CC BY-SA 3.0, via Wikimedia Commons; In-text image credit: Jon Sufrin, on behalf of Faculty of LA&PS, York University, CC BY-SA 4.0, via Wikimedia Commons

The Listeria Evolution

09:40, Thursday, 12 November 2020 UTC

My Listeria tool has been around for years now, and is used on over 72K pages across 80 wikis in the Wikimediaverse. And while it still works in principle, it has some issues, and, being a single PHP script, it is not exactly flexible enough to adapt to new requirements.

Long story short, I rewrote the thing in Rust. The PHP-based bot has been deactivated, and all edits by ListeriaBot (marked as “V2”, example) since 2020-11-12 are made by the new version.

I tried to keep the output as compatible with the previous version as possible, but some minute changes are to be expected, so there should be a one-time “wave” of editing by the bot. Once every page has been updated, things should stabilize again.

As best I can tell, the new version does everything the old one did, but it can already do more, and has some foundations for future expansion:

  • Multiple lists per page (a much requested feature), eliminating the need for subpage transclusion.
  • Auto-linking external IDs (e.g. VIAF) instead of just showing the value.
  • Multiple list rows per item, depending on the SPARQL query (another requested feature). This requires the new one_row_per_item=no parameter; a sketch of the kind of query involved follows this list.
  • Foundation to use other SPARQL engines, such as the one being prepared for Commons (as there is an OAuth login required for the current test one, I have not completed that yet). This could generate lists for SDC queries.
  • Portability to generic wikibase installations (untested; might require some minor configuration changes). It could even be bundled with Docker, as QuickStatements is now.
  • Foundation to use the Commons Data namespace to store the lists, then display them on a wiki via Lua. This would allow lists to be updated without editing the wikitext of the page, and no part of the list is directly editable by users (thus, no possibility of the bot overwriting human edits, a reason given to disallow Listeria edits in main namespace). The code is actually pretty complete already (including the Lua), but it got bogged down a bit in details of encoding information like sections which is not “native” to tabular data. An example with both wiki and “tabbed” versions is here.
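
To make the “multiple rows per item” point above concrete, here is a minimal sketch of the kind of query result involved. It is not taken from the Listeria code; it uses the public Wikidata SPARQL endpoint with Python and requests, and the query, item and property IDs are only illustrative.

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# One result row per (item, occupation) pair: a laureate with several
# occupations yields several rows, which one_row_per_item=no keeps as
# separate list rows instead of collapsing them into a single row per item.
QUERY = """
SELECT ?item ?itemLabel ?occupationLabel WHERE {
  ?item wdt:P166 wd:Q38104 ;    # award received: Nobel Prize in Physics
        wdt:P106 ?occupation .  # occupation
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 50
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "listeria-example-sketch/0.1"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], "|", row["occupationLabel"]["value"])

With the default setting, Listeria collapses such results into one row per item; with one_row_per_item=no, each returned row becomes its own list row.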

As always with new code, there will be bugs and unwanted side effects. Please use the issue tracker to log them.

This Month in GLAM: October 2020

22:38, Wednesday, 11 November 2020 UTC
  • AfLIA Wikipedia in African Libraries report: Wikipedia in African Libraries Project
  • Brazil report: Abre-te Código hackathon, Wikidata related events and news from our partners
  • Finland report: Postponed Hack4FI GLAM hackathon turned into an online global Hack4OpenGLAM
  • France report: Partnership with BNU Strasbourg
  • Germany report: Coding da Vinci cultural data hackathon heads to Lower Saxony
  • India report: Mapping GLAM in Maharashtra, India
  • Indonesia report: Bulan Sejarah Indonesia 2.0; Structured data edit-a-thon; Proofreading mini contest
  • Netherlands report: National History Month: East to West, Dutch libraries and Wikipedia
  • New Zealand report: West Coast Wikipedian at Large
  • Norway report: The Sámi Languages on wiki
  • Serbia report: Many activities are in our way
  • Sweden report: Librarians learn about Wikidata; More Swedish literature on Wikidata; Online Edit-a-thon Dalarna; Applications to the Swedish Innovation Agency; Kulturhistoria som gymnasiearbete; Librarians and Projekt HBTQI; GLAM Statistical Tool
  • UK report: Enamels of the World
  • USA report: American Archive of Public Broadcasting; Smithsonian Women in Finance Edit-a-thon; Black Lunch Table; San Diego/October 2020; WikiWednesday Salon
  • Calendar: November’s GLAM events

How helping others edit Wikipedia changes lives

17:07, Tuesday, 10 November 2020 UTC

This fall, we’re celebrating the 10th anniversary of the Wikipedia Student Program with a series of blog posts telling the story of the program in the United States and Canada.

When we started what is now the Wikipedia Student Program, we wanted to create support for students and instructors participating in the program. An initial plan involved supporting a new volunteer role within Wikipedia: the Campus Ambassador, who would help support participants in-person.

We sought out people who would be newbie-friendly faces on campus, helping students learn the basics of Wikipedia. Many Campus Ambassadors hadn’t edited Wikipedia prior to taking on the role; they were paired with more Wikipedia-experienced Online Ambassadors who could answer technical questions. While we’re no longer using the Ambassador model, the role itself had a profound impact on at least two people whose involvement on Wikipedia began as Campus Ambassadors in 2010: Max Klein and PJ Tabit.

Max Klein
Max Klein, today (the hero image on this post is of Max in 2011). Image courtesy Max Klein.

“That was basically the entire jumping off point for my whole career,” Max says. “I’ve made a living out of being knowledgeable about Wikipedia and contributing to the ecosystem, mostly through bots and data projects.”

Max taught a student-led class at the University of California at Berkeley that he and collaborator Matt Senate decided to build out entirely on the Wikipedia project namespace. He also served as an Ambassador for other courses. After graduating from Berkeley in 2012, Max’s first job was as a Wikimedian-in-Residence for OCLC, teaching librarians to contribute to Wikipedia. Then Wikidata became a project.

“Wikidata legitimized and exponentiated the idea that Wikipedia could be about data as well as articles,” Max says. “That is a useful way to get involved if you are more, let’s say, numerically-minded. That allowed me to get involved in a way where I could start immediately with large individual contributions. However, today I recognize that the best projects merge all the different perspectives of the users, the aesthetes, the editors, and the programmers.”

He built a bot that contributed bibliographic and authority data from the Library of Congress to Wikipedia, then helped build the WikiProject Open Access Citation Bot. In 2015, Max piloted the Wikipedia Human Gender Indicators, the first automated documentation of biography-gender representation across all language Wikipedias. He helped create an AI-powered version of HostBot to find the best newcomers. Then he supported the Citizens and Technology Lab experiment to see if wiki-thanking by other users led editors to contribute more. Today, Max is starting the project “Humaniki” to provide data and tools to assist systemic-bias-focused editing.

In other words, Max has done a lot from his initial start as an Ambassador!

“It’s defined my career and values,” he says. “Wikipedia is one of the few remaining sites that hold the promise of what we thought the internet would be at the turn of the millennium. We knew entertainment and commerce would come online, but the promise of libraries and public parks and civic-engagement coming on-line has found less of a foothold. Luckily Wikipedia is still ticking, showing what a non-commercial internet could be like. I’m motivated by the feeling of collaborating on public-good, socially important projects with humans all around the world.”

PJ Tabit
PJ Tabit in 2011, at a training for the program.

While Max branched out from his work with the program to other areas of Wikipedia work, PJ has continued to be involved with the educational efforts. He originally got involved when starting graduate school in public policy at George Washington University.

“It seemed like an exciting opportunity to work on something related to what I was studying and involving one of the most visited websites on the internet,” PJ says.

After supporting courses on campus at GW, PJ traveled to India in 2011 to support the Wikimedia Foundation’s efforts to replicate the program there. When a working group was formed to find a new home for the program, PJ volunteered. And when Wiki Education as a new organization was formed, PJ was elected to the board, initially serving as treasurer. Since 2017, PJ has been Wiki Education’s board chair.

“Simply, I think the work is critical,” PJ says. “Wikipedia stands out as a source of reliable factual information on the internet, and Wiki Education, through the Student Program, helps Wikipedia become more representative, accurate, and complete. I am extremely proud of what this organization and program accomplish.”

PJ points to the scale of Wiki Education’s program and impact as a key success marker over the last decade. He noted that when we were first starting out in 2010, we couldn’t have imagined that 20% of English Wikipedia’s new active editors would come from this program.

And his involvement over the last decade has meant a lot to PJ personally as well.

“I have made amazing friends that I likely would never have met if not for Wikipedia,” he says. “My involvement with Wiki Education and the Student Program has also given me an understanding of and deep respect for how Wikipedia gets made, which I would not have gained as just a reader of the site.”

Both Max and PJ hope to see a future in which Wikipedia reflects fewer and fewer systemic biases.

“Wiki Education has made tremendous progress toward ensuring Wikipedia is representative, accurate, and complete, but clearly there is much more to do,” PJ says. “I hope that we eventually resolve Wikipedia’s systemic biases and that it truly represents the sum of all human knowledge.”

“I hope that Wikipedia lives for another 20 years, and beyond. But I also hope that Wikipedia can be a platform for change vis-a-vis the problems of gender, economic, racial, and political justice,” Max says. “I think it’s already stepping in this direction with amazing editors who increase its coverage and fight misinformation. Obviously an encyclopedia can only do so much (although it’s quite a lot despite its medium). Still I imagine there is another project beyond Wikipedia, like Wikidata hinted at, that can utilize the pattern of collaboration that’s existed and has been so fruitful. I don’t know what it is yet, I’ve been thinking about it for 10 years, but I believe it’s there in the future.”

Natasha in St Malo earlier this year.

In October we recruited for a role that we have long known would be critical to the sustainability of Wikimedia UK’s vital work. Having a Head of Development and Communications gives us a strategic approach to our public image, fundraising, and external outreach. We wanted the role to sit in senior management, leading a new team consisting of Katie Crampton, our Communications and Governance Assistant, and another new role that we’re currently recruiting for, a Fundraising Development Coordinator. Though we had to postpone recruitment for the Head of Development and Communications due to lockdown, we’re pleased to announce that one month ago we found a candidate who we think is the perfect fit: Natasha Iles.

With a background in the corporate world, Natasha made a career change into the Third Sector over ten years ago, knowing she wanted to make a broader, more positive impact with her skills. Since her first charity role as a sole fundraiser and marketeer, she has progressed to leading both fundraising and communications functions as an active member of senior management. Natasha holds a Diploma in Fundraising and is a member of the Chartered Institute of Fundraising.

When asked about her goals while working with us, Natasha outlined her aims for our new Development and Communications team to continue to increase visibility of our amazing programmes and activities across the UK. Natasha will also work to diversify our income streams. Like us, Natasha feels that increasing our profile and the positive impact of our work is vital to ensuring we continue breaking down the barriers to accessing and contributing to free knowledge.

Though she’s only been with us a few weeks, we’ve already seen incredible work from Natasha. To say she hit the ground running is a bit of an understatement! We’re very excited for everything she’s bringing to the team.

CI now updates your deployment-charts

08:32, Tuesday, 10 November 2020 UTC

If you're making changes to a service that is deployed to Kubernetes, it sure is annoying to have to update the helm deployment-chart values with the newest image version before you deploy. At least, that's how I felt when developing on our dockerfile-generating service, blubber.

Over the last two months we've added

And I'm excited to say that CI can now handle updating image versions for you (after your change has merged), in the form of a change to deployment-charts that you'll need to +2 in Gerrit. Here's what you need to do to get this working in your repo:

Add the following to your .pipeline/config.yaml file's publish stage:

promote: true

The above assumes the defaults, which are the same as if you had added:

promote:
  - chart: "${setup.project}"           # The project name
    environments: []                    # All environments
    version: '${.imageTag}'             # The image published in this stage

You can specify any of these values, and you can promote to multiple charts, for example:

promote:
  - chart: "echostore"
    environments: ["staging", "codfw"]
  - chart: "sessionstore"

The above values would promote the production image published after merging to all environments for the sessionstore service, and only the staging and codfw environments for the echostore service. You can see more examples at https://wikitech.wikimedia.org/wiki/PipelineLib/Reference#Promote

If your containerized service doesn't yet have a .pipeline/config.yaml, now is a great time to migrate it! This tutorial can help you with the basics: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial#Publishing_Docker_Images

This is just one step closer to achieving continuous delivery of our containerized services! I'm looking forward to continuing to make improvements in that area.

Outreachy report #14: October 2020

00:00, Monday, 09 November 2020 UTC



Application review

We were able to review all applications by extending the review period. This also led to unplanned experimentation with the contribution period length, which we have been trying to tune ever since the essay questions were introduced back in 2018. There’s a fine line between making it too long for projects that require simpler contributions and too short for those that ask for more complex ones.

Communication and planning

Sage and I had a “decompress” meeting to discuss what went right and wrong during this application/review period and to set up short and long term goals for Outreachy. For the first time in my two years working on Outreachy we were able to build strategies keeping in mind the long term health of the program, and I attribute that in part to the fact Sage and I have been sharing more responsibilities. Taking a lot of weight off Sage’s shoulders and transferring it to mine directly translates into more getting done and expanding the program’s horizons as never before.

Development

I’m resuming my involvement with the development of Outreachy’s website. The project’s documentation has greatly improved since the last time I set up a local environment, as well as the automation of tests and scenarios to explore. I was able to test setting up a new environment in different systems (Fedora 32 and 33, Ubuntu 20.04 and 20.10), writing down every single dependency needed to build the environment and explore it. Thanks to Jamey and Sage upgrading dependencies, I was able to overcome a specific issue I was running into in all systems (failure to compile psycopg2, a known bug with Python 3.8).

I’ve been focusing on understanding flows related to the mentor roles, which leads us to…

Mentor interviews

Sage and I have been discussing improving the mentor documentation for a few months, and one of the best ways to start thinking about that is interviewing new Outreachy mentors to understand how and why they took an interest in the program, what the onboarding process was like in their community, what issues they ran into while becoming a mentor, and in which ways we can improve our own onboarding.

I sent an email to the mentors mailing list encouraging mentors to contact me for either an asynchronous email interview or a synchronous video or text chat. The response was better than I expected: I have 12 interviews in my schedule over the next 10 days. However, none of our volunteer interviewees are Outreachy alums, so I’ll have to email specific mentors to see if we can schedule interviews with that group too.

Promotion

I accepted two invitations for live events this November:

  • LKCAMP, a Linux kernel study group, invited me to participate in LKConf to talk specifically about Outreachy internships from an alum and organizer perspective on November 17th.
  • Casa Hacker invited me to talk about free software as a whole on November 18th – we’ll discuss concepts, ideas, and recent events. This is more of a general-interest livestream to help people understand free software communities.

Tech News issue #46, 2020 (November 9, 2020)

00:00, Monday, 09 November 2020 UTC
2020, week 46 (Monday 09 November 2020)

weeklyOSM 537

10:43, Sunday, 08 November 2020 UTC

27/10/2020-02/11/2020

lead picture

Daily updated corona incidences per county 1 | © netgis.de | map data © OpenStreetMap contributors | © RKI, DIVI Intensivregister, BKG, LVermGeo Rlp 2020

Mapping

  • TheFive reported that weeklyOSM often receives messages alerting us to map errors. In a small blog post he points people (de) > en to the map notes system and encourages people to participate.
  • Yuu Hayashi, who published (ja) > en a draft of a scheme for route mapping of Japanese historical paths, is asking how to manage a long-term mapping project with multiple members, and whether there is a form of monitoring that works better than updating mapping progress on OSM Wiki. He also asked if there is a better way to decide on a scheme for mapping a series of features than the Tag proposal process.
  • The user Vollis proposed the tag amenity=chapel_of_rest for a room or building where families and friends can say goodbye to a deceased person before his or her funeral. This proposal is now up for vote until 18 November.
  • Privatemajory, Luke proposed the tag electricity=[grid, generator, yes, no] to indicate the electricity source used in a dwelling, a general building or a settlement. After a short voting period (29 and 30 October) the voting was stopped due to a formal error and the proposal is again in RFC state and open for comments.

Community

  • The MapRoulette team pointed out, on Twitter, that a MapRoulette user box can be added to OpenStreetMap wiki user profile pages.
  • The video ‘4 tools to start with OpenStreetMap‘, by Captain Mustache, is now available under a free Creative Commons BY licence on the PeerTube OpenStreetMap France instance. (fr)
    • In another video, he answers the question, ‘OpenStreetMap? What is it?’ (fr)
  • DeBigCs examined the claim that poorer areas around Dublin are less completely mapped than wealthier ones and the reasons for this.
  • Jennings Anderson writes in his blog about ‘OSMUS Community Chronicles’, exploring the growth and temporal mapping patterns in North America.
  • Nuno Caldeira is committed to the correct attribution of maps based on OSM and he has criticised Mapbox many times about the incorrect attribution function on their map service. This time, he praises Mapbox customer Flickr, which has managed to use correct attribution, even on the smallest maps. So, small size seems to be just an excuse and one can clearly add visible attribution on any map.
  • OpenStreetMap US published its newsletter for November 2020.

OpenStreetMap Foundation

  • You can find the key dates for the upcoming OSMF Annual General Meeting 2020 here.
  • Mikel Maron would like to revitalise diversity and inclusion in the OSM Foundation; in his blog post he calls on all those who have been less represented so far not to be shy but to contact him.

Local chapter news

  • FLOSSK, the Local chapter for OSM Kosovo, signed a Memorandum of Cooperation with the LUMBARDHI Foundation. This cooperation will serve for the exchange of knowledge, capacities, and resources for digitalisation, as well as the provision of materials and publications for free use by the public. FLOSSK and LUMBARDHI will cooperate in the digitisation of Zëri newspaper, TAN newspaper, and completion of the digital archive of Rilindja newspaper, as well as Jeta e Re, Përparimi and Çevren magazines, which will also be public with free access.
  • Maggie Cawley, Martijn van Exel, and Steven Johnson report on the OpenStreetMap US Charter Project Program.
  • The OpenStreetMap France Blog described (fr) > en a cartographic portal initially developed by the OSM Cameroon Association. This interactive visualiser/downloader of OSM data (OSMdata (fr)) allows you to visualise different OSM thematic layers, defined by Jean Louis Zimmermann and grouped into 16 geothematic layers. The open source code for the portal is on Github.

Events

  • As part of National Heritage Week in Ireland, the local OSM community has decided to focus on the historic town of Clonmel. The first step was to quickly map from satellite imagery; in order to supplement this a Mapillary stream was also taken, COVID-19 compliant with mask from inside a car, using a camera attached to the window so that it could capture both sides of the road.

Humanitarian OSM

  • Russell Deffner, from HOT, is asking for assistance in mapping Izmir, Turkey. On 30 October a magnitude 7.0 earthquake struck the region encompassing south/southeast Greece and western Turkey, with the epicentre being the city of Izmir, home to about 4 million residents.

Maps

  • Sven Geggus has had trouble with the capacity tag on OpenCampingMap, wrote a blogpost about it, and is trying to engage with the community, on the tagging mailing list, to clarify ‘the meaning of the capacity tag for tourism=camp_site‘.
  • AcquaMAT is a project powered by CleaNAP, based in Naples. It is creating a crowdsourced map of drinking water points scattered in all the cities of Europe, with the aim of promoting the use of public water, thus reducing the purchase of plastic bottles for water.
    The map allows you to geolocate to see if there are points in the immediate vicinity of streets or squares of the city. Help the project by mapping water points that work and those that do not work, through a reporting form on the site.
  • [1] The map of Germany by sven_s8 (de), from the NETGIS (de) > en office in Trier, visualises the incidence of COVID-19, updated daily, as well as the intensive care bed situation (DIVI Intensive Care Register (de)) in districts or independent cities. It uses various open data interfaces and, of course, OpenStreetMap. The OSM data are imported (de) via a map of the Federal Office of Cartography and Geodesy. The application uses the UMN Mapserver and PostgreSQL/PostGIS in the backend.

switch2OSM

  • Deutsche Bahn has updated (de) their information portal about active and future construction projects. The start page shows where all of the projects are located on an OSM-based map.

Software

  • The JOSM issue tracker reached ticket #20000. The issue, a bug in the Wikipedia plugin, was fixed a few hours later.
  • Mail.ru sold MAPS.ME to Daegu Limited for 1,557 million RUB (£15.3 million). They had acquired the mobile app and its services in 2014 for 542 million RUB (£5.3 million).
    The app has been installed more than 140 million times and has ten million active users.
  • An updated version of mod_tile, the classic raster tile stack of OpenStreetMap, has been released by Felix Delattre, from the German Research Centre for Geosciences (GFZ). They packaged this software and included it as libapache2-mod-tile and renderd in Debian so that it will automatically be part of upcoming Debian and Ubuntu releases, and they are now asking for help with testing.

Releases

  • Quincy Morgan reported the updates to iD in v2.19.4 (#2931).
  • Tobias Zwick compared the download times for StreetComplete before and after he reworked the download to exclusively use the OSM API, instead of individual Overpass queries, in this chart. User mmd commented, on OSM Slack, that a similar reduction in download times might have been achieved through the performance improvements he developed for Overpass a year ago but which still haven’t been merged. The StreetComplete changes have been released in v26.0-beta1.

Did you know …

OSM in the media

  • The Times of India reported that the OSM community in Kerala has created geospatial open data maps of all local government bodies in the state, numbering over 1200.

Other “geo” things

  • The Open Geospatial Consortium (OGC) has adopted a new international standard, opening the way to a common format for cartographic description.
  • If the world were a piano roll, this is what it would sound like.
  • Marios Kyriakou created a YouTube video showing the entire changelog of QGIS 3.16 (Hannover). There is a lot to show in those 12 minutes, so it’s blazingly fast. If you prefer a slower overview you can also watch this screencast in Spanish made by Patricio Soriano from Asociación Geoinnova and QGIS.es. In this one the first 15 minutes are introduction and installation.
  • In Quantarctica, a collection of Antarctic geographical datasets, version 4 is intended to offer expanded theme coverage and newer datasets, with more capabilities. Therefore, help is needed to identify the community’s requirements. The questionnaire takes a maximum of ten minutes to complete and will be very helpful in developing the next version of Quantarctica.

Upcoming Events

Where | What | When | Country
Online | State of the Map Japan 2020 Online | 2020-11-07 | japan
Taipei | OSM x Wikidata #22 | 2020-11-09 | taiwan
Salt Lake City / Virtual | OpenStreetMap Utah Map Night | 2020-11-10 | united states
Munich | Münchner Stammtisch | 2020-11-11 | germany
Zurich | 123. OSM Meetup Zurich | 2020-11-11 | switzerland
Berlin | 149. Berlin-Brandenburg Stammtisch (Online) | 2020-11-12 | germany
Online | 2020 Pista ng Mapa | 2020-11-13 to 2020-11-27 | philippines
Cologne Bonn Airport | 133. Bonner OSM-Stammtisch (Online) | 2020-11-17 | germany
Berlin | OSM-Verkehrswende #17 (Online) | 2020-11-17 | germany
Cologne | Köln Stammtisch ONLINE | 2020-11-18 | germany
Online | FOSS4G SotM Oceania 2020 | 2020-11-20 | oceania
Derby | Derby pub meetup | 2020-11-24 | united kingdom
Salt Lake City / Virtual | OpenStreetMap Utah Map Night | 2020-11-24 | united states
Düsseldorf | Düsseldorfer OSM-Stammtisch [2] | 2020-11-25 | germany
Taipei | OSM x Wikidata #23 | 2020-11-07 | taiwan

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by AnisKoutsi, Joker234, Lejun, MatthiasMatthias, MichaelFS, Nordpfeil, PierZen, Polyglot, Rogehm, TheSwavu, YoViajo, alesarrett, derFred, richter_fn.

Moving Plants

08:13, Friday, 06 November 2020 UTC
All humans move plants, most often by accident and sometimes with intent. Humans, unfortunately, are only rarely moved by the sight of exotic plants. 

Unfortunately, the history of plant movements is often difficult to establish. In the past, the only way to tell a plant's homeland was to look at the number of related species in a region for clues to its area of origin. This idea was firmly established by Nikolai Vavilov before he was sent off to Siberia, thanks to Stalin's crank-scientist Lysenko, to meet an early death. Today, the genetic relatedness of plants can be examined by comparing the similarity of DNA sequences (although this is apparently harder than with animals, due to issues with polyploidy). Some recent studies on individual plants and their relatedness have provided insights into human history. A 2015 study establishing the East African geographical origins of baobabs in India, and a 2011 study on coconuts, are hopefully just the beginning. These demonstrate ancient human movements that have never received much attention in most standard historical accounts.
Inferred transfer routes for Baobabs - source

Unfortunately there are a lot of older crank ideas that can be difficult for untrained readers to separate out. I recently stumbled on a book by Grafton Elliot Smith, a Fullerian professor who succeeded J.B.S. Haldane but descended into crankdom. The book "Elephants and Ethnologists" (1924) can be found online and is just one among several similar works by Smith. It appears that Smith used a skewed and misapplied cousin of Dollo's Law. According to him, cultural innovations tended to occur only once and were then carried along with human migrations. Smith was subsequently labelled a "hyperdiffusionist", a disparaging term used by ethnologists. When he saw illustrations of Mayan sculpture he envisioned an elephant where others saw, at best, a stylized tapir. Not only were they elephants, they were Asian elephants, complete with mahouts and Indian-style goads, and he saw this as definite evidence for an ancient connection between India and the Americas! An idea that would please some modern-day Indian cranks and zealots.

Smith's idea of the elephant as emphasised by him.
The actual Stela in question
 "Fanciful" is the current consensus view on most of Smith's ideas, but let's get back to plants. 

I happened to visit Chikmagalur recently and revisited the beautiful temples of Belur on the way. The "Archaeological Survey of India-approved" guide at the temple did not flinch when he described an object in the hand of a carved figure as being maize. He said maize was a symbol of prosperity. Now, maize is a crop that was imported to India, by most accounts only after the Portuguese reached the Americas in 1492 and made sea incursions into India in 1498. In the late 1990s, a Swedish researcher identified similar carvings (actually another one, at Somnathpur) from 12th-century temples in Karnataka as being maize cobs. This was subsequently debunked by several Indian researchers from IARI and from the University of Agricultural Sciences, where I was then studying. An alternate view is that the object is a mukthaphala, an imaginary fruit made up of pearls.
Somnathpur carvings. The figures to the left and right hold the purported cobs in their left hands. (Photo: G41rn8)

The pre-Columbian oceanic trade ideas, however, do not end with these two cases from India. The third story (and historically the first, from 1879) is that of the sitaphal or custard apple. The founder of the Archaeological Survey of India, Alexander Cunningham, described a fruit in one of the carvings from Bharhut, a fruit that he identified as a custard apple. The custard apple and its relatives are all from the New World. The Bharhut Stupa is dated to 200 BC and the custard apple, as quickly pointed out by others, could only have been in India post-1492. The Hobson-Jobson has a long entry on the custard apple that covers the situation well. In 2009, a study raised the possibility of custard apples in ancient India. The ancient carbonized evidence is hard to evaluate unless one has examined all the possible plant seeds and what remains of their microstructure. The researchers, however, establish a date of about 2000 BC for the carbonized remains and attempt to demonstrate that they look like the seeds of sitaphal. The jury is still out.
The Hobson-Jobson has an interesting entry on the custard-apple
I was quite surprised that there are not many writings on the Internet that synthesize and comment on the history of these ideas, and, somewhat oddly, I found no mention of these three cases in the relevant Wikipedia article (naturally, fixed now with an entire new section): pre-Columbian trans-oceanic contact theories.

There seems to be value in someone putting together a collation of plant introductions to India, along with sources, dates and locations of introduction. Some of the old specimens of introduced plants may well be worthy of further study.

Introduction dates
  • Pithecellobium dulce - a Portuguese introduction from Mexico to the Philippines, and to India on the way, in the 15th or 16th century. The species was described from specimens taken from the Coromandel region (i.e. type locality outside the native range) by William Roxburgh.
  • Eucalyptus globulus? - There are some claims that Tipu planted the first of these (see my post on this topic). It appears that the first person to move eucalyptus plants (probably E. globulus) out of Australia was Jacques Labillardière. Labillardière was surprised by the size of the trees in Tasmania: the lowest branches were 60 m above the ground and the trunks were 9 m in diameter (27 m circumference). He saw flowers through a telescope and had some flowering branches shot down with guns! (original source in French) His ship was seized by the British in Java around 1795 and released in 1796. All subsequent movements seem to have been post-1800 (i.e. after Tipu's death). If Tipu Sultan did indeed plant the Eucalyptus here he must have got it via the French through the Labillardière shipment. The Nilgiris were apparently planted up starting with the work of Captain Frederick Cotton (Madras Engineers) at Gayton Park(?)/Woodcote Estate in 1843.
  • Muntingia calabura - when? - I suspect that Tickell's flowerpecker populations boomed after this, possibly with a decline in the Thick-billed flowerpecker.
  • Delonix regia - when?
  • In 1857, Mr New from Kew was made Superintendent of Lalbagh, and in the following years he introduced several Australian plants from Kew, including Araucaria, Eucalyptus, Grevillea, Dalbergia and Casuarina. Mulberry plant varieties were introduced in 1862 by Signor de Vicchy. The Hebbal Butts plantation was established around 1886 by Cameron along with Mr Rickets, Conservator of Forests, who became Superintendent of Lalbagh after New's death - rain trees, ceara rubber (Manihot glaziovii), and shingle trees(?). Apparently Rickets was also involved in introducing a variety of potato (kidney variety) which came to be named "Ricket". - from Krumbiegel's introduction to "Report on the progress of Agriculture in Mysore" (1939) [Hebbal Butts would be the current-day Air Force Headquarters]

Further reading
  • Johannessen, Carl L.; Parker, Anne Z. (1989). "Maize ears sculptured in 12th and 13th century A.D. India as indicators of pre-columbian diffusion". Economic Botany 43 (2): 164–180.
  • Payak, M.M.; Sachan, J.K.S (1993). "Maize ears not sculpted in 13th century Somnathpur temple in India". Economic Botany 47 (2): 202–205. 
  • Pokharia, Anil Kumar; Sekar, B.; Pal, Jagannath; Srivastava, Alka (2009). "Possible evidence of pre-Columbian transoceanic voyages based on conventional LSC and AMS 14C dating of associated charcoal and a carbonized seed of custard apple (Annona squamosa L.)" Radiocarbon 51 (3): 923–930. - Also see
  • Veena, T.; Sigamani, N. (1991). "Do objects in friezes of Somnathpur temple (1286 AD) in South India represent maize ears?". Current Science 61 (6): 395–397.
  • Rangan, H., & Bell, K. L. (2015). Elusive Traces: Baobabs and the African Diaspora in South Asia. Environment and History, 21(1):103–133. doi:10.3197/096734015x1418317996982 [The authors however make a mistake in using Achaya, K.T. Indian Food (1994) who in turn cites Vishnu-Mittre's faulty paper for the early evidence of Eleusine coracana in India. Vishnu-Mittre himself admitted his error in a paper that re-examined his specimens - see below]
Dubious research sources
  • Singh, Anurudh K. (2016). "Exotic ancient plant introductions: Part of Indian 'Ayurveda' medicinal system". Plant Genetic Resources. 14(4):356–369. 10.1017/S1479262116000368. [Among the claims here are that Bixa orellana was introduced prior to 1000 AD - on the basis of Sanskrit names which are assigned to that species - does not indicate basis or original dated sources. The author works in the "International Society for Noni Science"! ] 
  • The same author has rehashed this content with several references and published it in no less than the Proceedings of the INSA - Singh, Anurudh Kumar (2017) Ancient Alien Crop Introductions Integral to Indian Agriculture: An Overview. Proceedings of the Indian National Science Academy 83(3). There is a series of cherry-picked references, many of the claims of which were subsequently dismissed by others or remain under serious question. In one case there is a claim for an early occurrence of Eleusine coracana in India, to around 1000 BC. The reference cited is in fact a secondary one - the original work was by Vishnu-Mittre, and the sample was rechecked by other scientists who clearly showed that it was not even a monocot; in fact, Vishnu-Mittre himself accepted the error. The original paper was Vishnu-Mittre (1968). "Protohistoric records of agriculture in India". Trans. Bose Res. Inst. Calcutta. 31: 87–106, and the re-analysis of the samples can be found in Hilu, K. W.; de Wet, J. M. J.; Harlan, J. R. (1979). "Archaeobotanical Studies of Eleusine coracana ssp. coracana (Finger Millet)". American Journal of Botany. 66 (3): 330–333. Clearly INSA does not have great peer review and has gone with argument by claimed authority.
  • PS 2019-August. Singh, Anurudh K. (2018). Early history of crop presence/introduction in India: III. Anacardium occidentale L., Cashew Nut. Asian Agri-History 22(3):197-202. Singh has published another article claiming that cashew was present in ancient India well before the Columbian exchange - with "evidence" from J.L. Sorenson of a sketch purportedly made from a Bharhut stupa balustrade carving (the original of which is not found here) and a carving from Jambukeshwara temple with a "cashew" arising singly and placed atop a stalk that rises from below like a lily! He also claims that some Sanskrit words and translations (from texts/copies of unknown provenance or date) confirm ancient existence. I happened to ask whether he had examined his sources carefully and received a rather interesting response, which I find very useful as a classic symptom of the problems of science in India. More interestingly, I learned that John L. Sorenson is well known for his affiliation with the Church of Jesus Christ of Latter-day Saints, and apparently part of the Mormon foundations is the claim that Mesoamerican cultures were of Semitic origin, and much of the "research" of their followers has attempted to bolster support for this by various means. Below is the evidence that A.K. Singh provides for cashew in India.

Worth examining the motivation of Sorenson through the life of a close associate - here

Authorship Highlighting improvements

17:53, Thursday, 05 November 2020 UTC

We recently launched an awesome new feature to the Dashboard’s Authorship Highlighting, thanks to volunteer open source developer Bailey McKelway. Bailey is a full-stack developer who recently graduated from Fullstack Academy in New York City, and judging by the sophisticated work he’s done on the Dashboard, he’s got a strong software development career ahead of him. Here’s Bailey to explain his new feature (and he also wrote a technical post about it on his blog). – Sage Ross, Chief Technology Officer

Demonstration of scrolling to the first highlighted contribution by a student
Click the arrow to scroll to an editor’s highlighted contributions.
Demonstration of scrolling back to the top after the last contribution is reached
Once you’ve reached the last contribution, click again to scroll back to the first one.
Demonstration of switching between students
Check for different students’ contributions as you scroll through the page.

So you may have noticed there is a new feature within the Authorship Highlighting view. Now you will be able to scroll to a user’s revisions just by clicking a button.

This makes it much easier to find revisions that students have made to articles. All you have to do is click the arrow next to the user’s name at the bottom and the page will scroll to the user’s revisions!

Clicking the arrow for the first time will scroll the selected student’s first edit to the top of the page. Continuing to click the arrow will scroll to the next revision that is not currently in view.

After clicking the arrow at the last revision the page will scroll back up to the first revision.

If you switch to a different user, then the feature is smart enough to scroll to the new editor’s closest edit below the current edit. If there are no edits below the current edit then it scrolls to the first edit made by the editor.

If you click the arrow while all of the editor’s revisions are already in view, the page will “bump”, signifying there are no other edits.

Hope you all enjoy the new feature!

– Bailey McKelway

Semantic MediaWiki 3.2.0 released

16:03, Wednesday, 04 November 2020 UTC

September 7, 2020

Semantic MediaWiki 3.2.0 (SMW 3.2.0) was released today as the next version of Semantic MediaWiki.

It is a major release. Please refer to Semantic MediaWiki 3.2.0 for further information.

A Wikipedian six years in the making

17:20, Monday, 02 November 2020 UTC

In 2014, I joined Wiki Education as Program Manager for the Wikipedia Student Program. Six years later, I can now proudly call myself a real Wikipedian!

Though I had never edited Wikipedia myself before joining Wiki Education, I believed whole-heartedly in its mission of making knowledge free and accessible to all and was thrilled to be part of a team attempting to bridge the gap between Wikipedia and academia. I had come from academia myself, having completed a Ph.D. in History from UC Berkeley in 2012, and I was excited to bring my own expertise and skills to Wikipedia. Right away, I began learning the ins and outs of editing and had soon racked up edits on talk pages as Helaine (Wiki Ed). I slowly but surely became a member of the Wikipedia community, but still I had not made any content contributions in the article main space. I could speak about notability with the best of them and had shepherded thousands of students and instructors through their Wikipedia assignments, but I had yet to take those first baby steps myself.

Finally, in September of this year, I decided to take that leap. There was no better way to do so than with one of our own Wiki Scholars courses. I enrolled as a student in a course specifically devoted to improving content around COVID-19 led by my wonderful colleague Ian Ramjohn.

I chose to write about a topic near and dear to my heart: how the pandemic has affected people with disabilities. I am blind myself, and while I have fared relatively well during this tumultuous period, I wanted to make sure the world had access to information about how COVID has impacted an already vulnerable community.

As User:Hblumen I got to work, scouring the internet for scant information on how the pandemic has affected people with disabilities, and finally encountered both the challenges and the heights new editors face when contributing to Wikipedia for the first time. As a blind editor, in particular, I learned that the VisualEditor is not at all accessible with screen readers and that references are tricky as well. I was glad that I had learned wikicode all those years ago when I joined the team. I also learned that, while I had not contributed article content to Wikipedia, I already knew a great deal and mostly just needed the motivation and confidence boost to make those first edits.

By the end of the course, I was incredibly proud to have written Impact of the COVID-19 pandemic on people with disabilities. I was dismayed, though unsurprised, to find a paucity of information on the topic, but I’m hopeful that my article sparks others to think about how COVID has affected populations already at high risk for a host of physical, emotional, and socioeconomic disadvantages.

Thank you to Ian and to my fellow Wiki Scholar participants for helping this would-be Wikipedian take those final critical steps. For years I have read comments from students and instructors on the pride and satisfaction that comes with seeing your edits live on Wikipedia, and now I truly understand how gratifying it is to contribute to public knowledge.

Interested in taking a course like the one Helaine took? Visit learn.wikiedu.org to see our current course offerings.

Tech News issue #45, 2020 (November 2, 2020)

00:00, Monday, 02 November 2020 UTC
2020, week 45 (Monday 02 November 2020)

weeklyOSM 536

12:04, Sunday, 01 November 2020 UTC

20/10/2020-26/10/2020

lead picture

Wikimap with all geotagged Wikipedia articles 1 | © Louis Jencka, Wikidata, Wikipedia | map data © OpenStreetMap contributors

Mapping

  • User Darafei updated the Disaster.Ninja tool, which was developed to assist HOT in their activation process, but could also be useful for the general mapping community.
  • Robert Delmenico published a proposal to replace the existing tags man_made=* with the new tag artificial=* in order to make the language more gender-neutral. The proposal is open for comments.
  • Jeroen Hoek and Supaplex have made a proposal for parking=street_side, for tagging areas suitable or designated for parking which are directly adjacent to the carriageway of a road and can be reached directly from the roadway without having to use an access way. The proposal is now open for comments.
  • Brian Sperlongano (User ZeLonewolf) has published a proposal for a new tag boundary=special_economic_zone to map Special Economic Zones. The proposal is now open for comments.
  • Alter Geosystems explains (es) > en how OpenStreetMap data can be enriched with Wikidata knowledge. They also link to the Wikidata Query Service, a facility for running queries.
  • PanierAvide reports (fr) > en about the most successful project of the month of the French OSM community so far, the mapping of defibrillators.

Community

  • Labian Gashi has won the DINAcon 2020 Award in the category ‘Best Newcomer’ for his JOSM plugin ‘NeTEx Converter‘. Stefan Keller, from the HSR Rapperswil/Ostschweizer University of Applied Sciences, as well as specialists from SBB, supervised the project. The ‘NeTEx Converter’ converts OpenStreetMap data into the Network Timetable Exchange (NeTEx) format, (a CEN standard), which is ‘designed for the efficient exchange of complex transport data’. The plugin also checks rudimentary indoor routing within stations.
  • coolmule0 has been mapping since July of last year and has summarised a beginner’s experience of OSM in a blog post. Among other things they discuss Mapcarta and the wiki article about building=terrace.
  • DeBigC blogged that he discovered a little brother of the infamous Melbourne skyscraper in Dublin and traced it down to a typo.

OpenStreetMap Foundation

  • If you are intending to run for an OSM Foundation seat, don’t forget that the deadline is Saturday 7 November 2020.
  • Rory McCann, member of the OSMF Board of Directors, proposes an amendment to Article 91 of the OSMF Constitution. In future, it should be possible for board committees to include members who are not board members.
  • User Nakaner is proposing a resolution for the upcoming Annual General Meeting of the OSM Foundation on 12 December 2020. About 80 supporters are necessary before 4 November.
  • Some OpenStreetMap Foundation board members will host an Ask me Anything (AMA) on Reddit. All questions can be asked. The AMA will start 9 November at 16:00 CET. Questions can be raised from 2 November in the AMA thread.
  • Between concern and disappointment, Severin Menard outlined his opinion of OSMF’s development since the last elections. He refers to Christoph Hormann’s (Imagico) blog post that we covered earlier.
  • The OSMF-Talk mailing list discussed two proposals from the OSMF board of directors to harden OSMF against hostile takeovers by big/bad companies. Well-known employees of Facebook and Mapbox argued against these proposals.
    • Rory McCann proposes that the OSMF bylaws include a provision that memberships expire, and votes become invalid if a member cannot freely exercise his or her rights or is contractually bound (e.g., with the employer) in the exercise of those rights. The opponents believe that this is practically impossible to prove. Proponents believe that the non-usability of the clause does no harm and that one must assume that companies are evil.
    • Tobias Knerr proposes a member resolution on minimum requirements for new members. In the future, the board should reject membership applications if the interested party has not made significant contributions to OSM. Here, too, there is headwind from the American business environment.

Events

  • The next Geomob will take place online on 17 November 2020. Signing up for an invite (Zoom URL) is necessary.
  • A mixed physical-digital Missing Maps Mapathon is planned (de) > en for 30 November. If the epidemiological situation permits, the physical part will take place in Wabern (Bern, Switzerland) at swisstopo.
  • On Monday and Tuesday (2 and 3 November) the biennial conference GeOnG will take place for the seventh time and participants will join in over 30 live sessions around the topics of technology and information management in the humanitarian and development sector. This year’s theme is ‘People at the heart of Information Management: promoting responsible and inclusive practices’. Check the full agenda here.

Humanitarian OSM

  • Marcel Reinmuth provided a cross-sectional analysis about mapping physical access to health care for older adults in sub-Saharan Africa and the implications for the COVID-19 response.
  • Jikka Defiño reports about the collection of field data for the PhilAWARE disaster risk reduction project and training in the Philippines.
  • The Mapping Power campaign was featured in Mapillary’s Blog, explaining how students across the YouthMappers network are using augmented and volunteer mapping through Mapillary, Map With AI, TeachOSM, and HOT to improve Sierra Leone’s electrical grid and connect rural communities.

Maps

  • Diego Alonso explained (es) > en how to download Sentinel images with QGIS.
  • flo2154 provided (de) > en his first MapComplete theme, which displays benches (amenity=bench) and other elements tagged with bench=yes.
  • derstefan is looking (de) > en for beta testers for the new OpenTopoMap-Garmin maps. The temporary address is https://garmin2.opentopomap.org.
  • cquest presented (fr) a map showing the areas affected by the health curfew in France.

Open Data

  • Russian startup company Geoalert has published Urban Mapping, the first open dataset of automatically traced building footprints covering Russia. To achieve this the company used Mapbox Satellite imagery, which Mapbox has explicitly permitted others to auto-trace using machine learning algorithms. Although Mapbox coverage of Russia is quite poor in terms of image quality and timeliness, for some regions the ‘Urban Mapping’ datasets significantly surpass the current count of OSM buildings. Currently three regions are available via the links on GitHub: Chechnya, Tyva and Moscow.

Programming

  • Simon Legner reports that the Java version of Osmpbf, a library for reading and writing OSM PBF files, is now available from Maven Central.
  • Erick de Oliveira Leal explained how to enable the Strava High Resolution Layer in OpenStreetMap editors (JOSM or iD). Editor’s note: at present there is no permission to use this Strava layer for OSM mapping, and you run the risk of having your edits removed.
  • Guillaume Rischard, maintainer of the Editor Layer Index, suggests abandoning the ELI and for iD to use the background layer list from JOSM (we have covered previous discussions of this).
  • Sarah Hoffmann, aka lonvia, reports that the download server for Photon now has ready-to-use database dumps for over 200 countries.

Releases

  • QGIS 3.16.0 ‘Hannover’ has been released. It brings new options for 3D mapping, mesh generation from other data types, additional spatial analysis tools, symbology and user interface enhancements.

Did you know …

  • … the list of English exonyms for foreign toponyms?
  • [1] … Wikimap, a map showing the location of all geotagged Wikipedia articles?

Other “geo” things

  • Matthias Schwindt, from GPS Radler, presents (de) > en three models of the robust outdoor Garmin Montana 700 series in practical tests and helps you to decide which one is the right one.
  • Google AI recently launched an open-source, browser-based toolset created to enable virtual exploration of city transitions from 1800 to 2000 in a three-dimensional view.
  • Jonathan Amos, a BBC Science correspondent, reported about Norway’s funding of satellite maps of the world’s tropical forests.
  • Seán Lynch informed us of his decision to make OpenLitterMap available as open source (GPLv3).
  • David Hambling, from BBC Future, poses the question of what the world would do without GPS.
  • The Fraunhofer Institute for Industrial Engineering (FhG-IAO) offers (de), as a result of the Communal Innovation Center (KIC@bw) (de) > en, a full-text download of the practice-oriented guideline ‘Communal Data for Future-Oriented Urban Development’, which is intended to provide orientation knowledge and show fields of application, options for action and development possibilities. The data, generated by administrative digitisation or the use of digital offerings in public spaces, can help municipalities improve quality of life, reduce resource use, cut costs, improve citizen services and make administrative processes more efficient, and thus contribute significantly to municipal development.

Upcoming Events

Where | What | When | Country
Bratislava | Meeting Missing Maps CZ & SK [1] | 2020-10-31 | Slovakia
London | Missing Maps London Mapathon | 2020-11-03 | United Kingdom
Stuttgart | Stuttgarter Stammtisch (online) | 2020-11-04 | Germany
Bochum | Bochum OSM-Stammtisch (Online) [2] | 2020-11-05 | Germany
Dresden | Dresdner OSM-Stammtisch | 2020-11-05 | Germany
Online | State of the Map Japan 2020 Online | 2020-11-07 | Japan
Taipei | OSM x Wikidata #22 | 2020-11-09 | Taiwan
Salt Lake City / Virtual | OpenStreetMap Utah Map Night | 2020-11-10 | United States
Munich | Münchner Stammtisch | 2020-11-11 | Germany
Zurich | 123. OSM Meetup Zurich | 2020-11-11 | Switzerland
Berlin | 149. Berlin-Brandenburg Stammtisch (Online) | 2020-11-12 | Germany
Online | 2020 Pista ng Mapa | 2020-11-13 to 2020-11-27 | Philippines
Cologne Bonn Airport | 133. Bonner OSM-Stammtisch (Online) | 2020-11-17 | Germany
Lüneburg | Lüneburger Mappertreffen | 2020-11-17 | Germany
Berlin | OSM-Verkehrswende #17 (Online) | 2020-11-17 | Germany
Cologne | Köln Stammtisch ONLINE | 2020-11-18 | Germany
Online | FOSS4G SotM Oceania 2020 | 2020-11-20 | Oceania

Note: If you would like to see your event here, please put it into the calendar. Only events entered there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Lejun, MatthiasMatthias, MichaelFS, Nakaner, Nordpfeil, NunoMASAzevedo, PierZen, Rogehm, TheSwavu, derFred, richter_fn.

English Malayalam Translation using OpusMT

11:40, Sunday, 01 2020 November UTC

SMC has started a machine translation service at translate.smc.org.in for English-Malayalam. The system uses Hugging Face transformers with OpusMT language models for translation. OPUS-MT provides pre-trained neural translation models trained on OPUS data. These models can seamlessly run with the OPUS-MT translation servers, which can be installed from the OPUS-MT GitHub repository. The translation service is powered by the Marian neural MT engine. The quality of the machine translation depends on the availability of parallel corpora.
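For anyone who wants to try the same model family locally, here is a minimal sketch using the Hugging Face transformers library. The model name Helsinki-NLP/opus-mt-en-ml follows the usual OPUS-MT naming convention and is an assumption on my part, not necessarily the exact setup behind translate.smc.org.in.

# Minimal sketch: English -> Malayalam with an OPUS-MT model (requires PyTorch).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ml"   # assumed English->Malayalam OPUS-MT model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(text):
    # Tokenize, generate a translation, and decode it back to text.
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)

print(translate("Wikipedia is a free encyclopedia."))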

Web application for learning Malayalam writing

11:20, Sunday, 01 2020 November UTC

In my previous blog post, I wrote about an experiment using SVG path animation to help people learn to write Malayalam letters. That prototype was well received by many people, which encouraged me to host it as a proper application. The Malayalam learning application is now available at https://learn.smc.org.in (source code: https://gitlab.com/smc/mlmash). I added all the letters of Malayalam, plus a few common ligatures. Kavya helped to record and add pronunciations of these letters, with a couple of examples each.

Wikicite from the ground up - oyster reefs

09:36, Sunday, 01 2020 November UTC

 


I watched this video having looked for oysters and oyster reefs. They are a thing in the Netherlands; we don't have enough of them, and we should have them as a functioning ecosystem. 

The video starts with Prof. A. Randall Hughes wading into the water for an experiment. Prof. Hughes has been in Wikidata since 2018. Triggered by the video, adding additional information and papers is, for me, the thing to do. One of her papers is about oyster reefs; linking the paper to the item for oyster reefs includes her in the Scholia for oyster reef.

Wikicite is about citations, and one of its ambitions is to link Wikipedia references. Many referenced articles include the subject of the article, "oyster reef", but only one of them can be found in Wikidata. When you check the authors, one name that comes up often is Megan K. La Peyre, an associate research professor in the School of Renewable Natural Resources at the Louisiana State University Agricultural Center. It is cumbersome to add papers by hand; I made a stab at one of them, only to find that I had to merge two items for Prof. La Peyre because "there can be only one". 

Given that the scholarly papers among these references all have a DOI, we should have a tool that collects all the DOIs from the reference section of an article. It would then get the information from CrossRef using the DOI, include the publication in Wikidata AND, something on my wishlist, link it to the Wikipedia article where it is used as a reference.
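To make the idea concrete, here is a rough sketch of the first two steps only (this is not an existing tool, and the input file name is hypothetical): pull DOIs out of a pasted reference section and look each one up through the public CrossRef REST API.

# Sketch: extract DOIs from reference text and fetch metadata from CrossRef.
import re
import requests

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def find_dois(reference_text):
    # Deduplicate DOIs found anywhere in the pasted reference section.
    return sorted(set(DOI_PATTERN.findall(reference_text)))

def crossref_metadata(doi):
    # Public CrossRef REST API lookup; returns title, authors, etc. for a DOI.
    resp = requests.get("https://api.crossref.org/works/" + doi, timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

references = open("references.txt").read()   # hypothetical input file
for doi in find_dois(references):
    meta = crossref_metadata(doi)
    title = meta.get("title", ["(no title)"])[0]
    print(doi, "->", title)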

The objective of such a tool is not so much to expand Wikidata as to make it easy and obvious to find more information and publications on a topic through co-authors, subjects, and Wikipedia articles where the same paper is used as a reference. If references are, as some consider, the most important component of an article, it follows that it should be easy to expand from there down a whole different rabbit hole.

Thanks, GerardM


Wikicite, but from the bottom up

13:43, Saturday, 31 2020 October UTC
Wikicite is one of the most active projects in Wikidata. Its purpose is to "develop open citations and linked bibliographic data to serve free knowledge". A lot of work has been done over the years; there is only one issue: what purpose does it serve?

One of the visible parts of Wikicite is the many Scholia presentations of information: papers, authors, organisations, subjects, even combinations. There is a template that enables the inclusion of Scholia information in a Wikipedia article, like here.

One objective of Wikicite is to become the repository of all references used in Wikipedia articles. This is where progress is possible, enabling people like myself to combine the two and make it easier for Wikipedia editors to find even more sources. I spend a lot of time adding the subject "trophic cascade" to scholarly articles that include the phrase "trophic cascade" in their title, and I have attributed the papers of many a scholar as well. This is reflected in the Scholia for trophic cascade. Many of the papers in the references section of the English article are these same papers.
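As an illustration (not a description of any particular tool I use), this is roughly what adding a "main subject" statement looks like programmatically with pywikibot. The Q-ids below are placeholders, and a configured, logged-in pywikibot account with edit rights is assumed; QuickStatements or plain hand edits work just as well.

# Sketch: add a "main subject" (P921) statement to a paper item with pywikibot.
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

paper = pywikibot.ItemPage(repo, "Q00000001")   # placeholder: a scholarly article item
topic = pywikibot.ItemPage(repo, "Q00000002")   # placeholder: the "trophic cascade" item

claim = pywikibot.Claim(repo, "P921")           # P921 = "main subject"
claim.setTarget(topic)
paper.addClaim(claim)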

Referenced articles may be relevant to multiple subjects and may be part of the references of multiple articles. When Wikidata knows all the papers used as references in a Wikipedia article, we can make the information in a Scholia even more useful. 

The information on existing papers, with authors and citations, can be enriched. For references we can add new papers, and the subject of the Wikipedia article can be marked as a "main subject" for the paper as well. We weave a mighty web in this way. Our quality will be improved by flagging retracted papers, and we can flag articles for an update when new information becomes available as well.

What we do does not have to be complete. That is not the way, that is not the Wiki way. When we start with what we have, we will find that it is already really useful.

Thanks, GerardM

Happy Birthday Wikidata!

10:00, Saturday, 31 2020 October UTC

It’s Wikidata’s 8th birthday today, and we’re incredibly proud of Wikipedia’s lesser-known little sister. Twenty years ago an incredible idea was made reality in the form of a democratic encyclopedia built from the bottom up, all by volunteers with no corporate influence or advertising. Now, there are many projects related to Wikipedia that make the Internet a truly very different place than if we’d gone without them. Wikidata is like Wikipedia for computers. Collectively we’ve become aware of just how much data there is out in the world, but most of it is held by private companies for their own gain. So Wikidata stepped up: free, democratically created software that has no agenda beyond the spread of information for the betterment of human knowledge. It’s a noble goal, and seemingly a fool’s errand. But Wikipedia worked, and now, so does Wikidata.

Say you want to find out where all the paintings by Van Gogh are housed. A bit of googling and digging would be needed, and unless someone has made a specific web page listing such information, it would take you a while. What about something a little more complex, like a list of all the self-portraits by female artists? It’s questions like these that Wikidata is working towards answering with one simple search query, and that have given rise to projects like Crotos, a Wikidata-driven tool for exploring the world’s artworks from hundreds of different collections.
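For the curious, the Van Gogh question really does come down to one short query against the Wikidata Query Service. Here is a minimal sketch in Python; the property and item IDs are the standard Wikidata ones for creator (P170), location (P276) and Vincent van Gogh (Q5582), and the User-Agent string is just an example.

# Sketch: where are Van Gogh's paintings housed, via the Wikidata Query Service.
import requests

query = """
SELECT ?painting ?paintingLabel ?locationLabel WHERE {
  ?painting wdt:P170 wd:Q5582 ;      # created by Vincent van Gogh
            wdt:P276 ?location .     # current location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-birthday-example/0.1"},  # example UA string
    timeout=60,
)
for row in resp.json()["results"]["bindings"]:
    print(row["paintingLabel"]["value"], "-", row["locationLabel"]["value"])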

 

A number of our programmes use Wikidata to create something truly brilliant. Take the award-winning Scottish Witches Map. A pretty design with a sobering bit of history, the visuals showing where Scottish witches were accused, their stories, and what happened to them are an excellent example of what can be achieved with Wikidata. Equate Scotland student intern Emma Carroll worked with Ewan McAndrew, Wikimedian in Residence at the University of Edinburgh, during the summer of 2019 to geolocate the place names recorded in the Survey of Scottish Witchcraft Database (1563 to 1736) and find the places of residence of 3,141 accused Scottish witches.

Through Emma’s detective work c.500 place-names have been located using Ordnance Survey maps, place-name books, historical maps, and gazetteers. This data was uploaded into Wikidata, as linked open data and further enriched with the location of detentions, trials, place of death, and more. Richard Lawson, ISG web developer, provided the technical expertise for the new website and graphic design was contributed by Interactive Content Manager Stewart Lamb Cromar.

It builds on the university’s breakthrough work on the Scottish Witchcraft Survey which brought to life the persecution of women during the period, with many burned at the stake or drowned. Ewan McAndrew, Wikimedian in Residence at the University of Edinburgh, said: “The map is a really effective way to connect where we are now to these stories of the past.”

“The tragedy is that Scotland had five times the number of executions of women. The idea of being able to plot these on a map really brings it home. These places are near everyone.

“There does seem to be a growing movement that we need to be remembering these women, remembering what happened and understanding what happened”

Emma Carroll, Equate Scotland Careerwise Intern (or ‘Witchfinder General’) said, “not only does the project help highlight the power of data science but also shows the capability of Wikidata to aid in the making of all of the different visualisations.”

The surfacing of the witchcraft data as linked open data to Wikidata has motivated Design Informatics Masters students each year since 2017 and showed what is possible both for the teaching of data science and for furthering discovery and engagement with real world research datasets.

The Mapping the Scottish Reformation project has since been inspired by the Map of Accused Witches project and is collaborating with Ewan McAndrew and the university’s Interactive Content team to build a new map website, powered by Wikidata.

It’s a truly beautiful interactive map, with an important and harrowing bit of information that’s critical to our understanding of women and marginalised people’s history.

 

Dr Martin Poulter is a long-time Wikimedian and one of our Wikimedians in Residence, first at the University of Oxford, where he worked on a project using Wikidata to describe its library and museum collections. He is currently using the platform to describe the private collections of Sir David Khalili.

“Wikidata links the world’s cultural and scientific archives together into a web of knowledge,” Martin says. “I’ve learned things through Wikidata that otherwise would have required hundreds of different websites and databases. Anyone writing software can tap into this vast free resource with billions of facts; it has transformed how we visualise our cultural heritage. Text isn’t always the best way to share knowledge: people want something interactive they can explore and see where their curiosity takes them. Wikidata’s many graphical interfaces let them do that, and in hundreds of different languages.” – Martin Poulter.

 

The capabilities and visuals of Wikidata are truly a magnificent achievement, and that it’s run by volunteers when so many people’s mantra is ‘time is money’ only makes it more remarkable. We’re continually impressed by this fantastically clever little bit of software and the community that has built and keeps on building it. So Happy Birthday Wikidata, here’s to many more years.

The wonderful world of Wikipedia

17:40, Friday, 30 2020 October UTC

Yohanna White graduated from the University of Georgia in May 2020 with an MS in Chemistry. She recently took the Wiki Scholars Informing Citizens training to learn how she can expand representation in Wikipedia. Her past community efforts to diversify STEM workplaces for women, underrepresented populations in higher education, and undocumented students inspired her to become a Wikipedia editor. 

The Informing Citizens training course gave me the opportunity to practice Wikipedia’s motto to “Be Bold”. As a woman scientist and advocate for STEM diversity and accessibility, I have no choice but to be bold. Not when marginalized groups are disproportionately underrepresented in the world’s most popular encyclopedia. I choose to write and be bold so that I can be part of the movement that is challenging the status quo by questioning who qualifies for “Wiki worthiness”.

Before this course, I pictured Wikipedia editors as mythical beings who were all-knowing and possessed some hidden mark that deemed them qualified. This misconception only deterred me from imagining myself as an editor. Thankfully, this course corrected my naive thinking by welcoming me into a cultivated and inclusive world. It even made me realize that Wikipedia is a great platform for marginalized editors; it is oddly liberating to write under a pen name so that I don’t have to wonder whether my identity affected someone’s judgement of my work.

Anyone with knowledge and access to the internet can become a Wikipedia editor. What distinguishes someone as a Wikipedia editor is having the confidence to write for the world to change the world. I was motivated to take this course for the opportunity to change the narrative of women scientists—especially women scientists who self-identify as a person of color—and to give them recognition that is long overdue.

It was a pleasure to learn about Wikipedia’s useful features and idiosyncratic culture. My favorite feature that I learned to use was the Talk page of an article. It turns out that this is where all the behind-the-scenes discussions take place. Here, you can find people asking for suggestions, debating how to structure the article, what to include and exclude, etc.  It is amazing to see a community of well-intentioned strangers create and improve articles together for the sole benefit of providing  free and accurate information to the public. I now have a habit of checking out the Talk page whenever I’m on Wikipedia because I want to honor the group effort involved, and  it also reminds me that I don’t have to be perfect or make up excuses for why I can’t be an editor (symptoms of imposter syndrome, which, unfortunately, tends to affect marginalized people). There will always be a community of skilled editors who can fill in knowledge gaps, fix typos, and correct spelling errors. I can also showcase my strengths by revealing my WikiFauna, which is a way for editors to describe their editing style. For example, WikiFairies improve the aesthetics of an article, while a WikiJanitor eliminates vandalism from pages. There are also WikiElves, WikiHobbits, and WikiGnomes, and many other WikiFauna that roam in this virtual world. Editors may also spread some WikiLove and show appreciation for good work in the form of WikiCookies. Belonging in this positive community is the reason why I am committing to be a lifelong Wikipedia editor.

Everyone has a different reason why they choose to be bold. I choose to be bold because, as an underrepresented minority in science, being able to express my knowledge is a source of empowerment. I look forward to taking my boldness to the next level: hosting edit-a-thons to recruit and inspire potential editors! I am grateful for the opportunity to have a seat at the table. It is truly an amazing feat that a group of diverse volunteers can maintain the world’s greatest encyclopedia, considering that all it takes to make a difference is to create an account and be bold enough to click edit.

Interested in taking a course like the one Yohanna took? Visit learn.wikiedu.org to see current course offerings.

Image courtesy Yohanna White

How Wikipedia Is Preparing For The 2020 U.S. Election

15:05, Friday, 30 2020 October UTC

If the internet is the most important battleground in next week’s U.S. presidential election, then Wikipedia is the Web’s neutral zone.

Last month, U.S. federal agencies issued a public service announcement with a warning that bad actors could use the internet to spread disinformation in an effort to discredit the legitimacy of the voting process.

As we know from the 2016 U.S. presidential election and other recent events, coordinated actors have previously attempted to influence election outcomes by spreading false or misleading information. But the rising rate and sophistication of disinformation campaigns in recent years makes the threat of disinformation even more acute. And while disinformation is not new, in a year that has been rocked by a global pandemic, civil unrest, devastating climate change events, and an unsteady economy, its effects add to the volatility that many are already feeling.

As the world’s largest multilingual online encyclopedia, and one of the most consulted knowledge resources online, Wikipedia exists to provide people with reliable information about the topics, moments, and people who shape our world. As such, all of the parts of our movement have been working to help combat the spread of malicious edits and disinformation on Wikipedia in and around the U.S. presidential election.

“If the internet is the most important battleground in next week’s U.S. presidential election, then Wikipedia is the Web’s neutral zone.”

For the last 20 years, Wikipedia’s global volunteer editors have developed robust mechanisms and editorial guidelines that have made the site one of the most trusted sources of information online. We recognize that we are not perfect, and we are certainly not immune to dis- or misinformation; there’s no website that can claim that mantle. But our nonprofit, ad-free model and adherence to principles of neutrality, transparency, and citations of reliable sources have in many ways acted as an antidote to malicious information spreading on the site.

We want to protect this track record and continue to make it very difficult for bad actors to use Wikipedia in their attempts to negatively influence or discredit any election.

Over the last two months, we’ve launched initiatives that augment the work of Wikipedia’s volunteer community, and have invested in more research and product development. The Wikimedia Foundation has also invested in strengthened capacity building by creating several new positions, including anti-disinformation director and research scientist roles, and hiring industry experts that will further help us implement and spot disinformation-related trends.

Additional efforts and safeguards include:

The Foundation launched a new interdisciplinary working group with representatives from our security, product, legal, trust and safety, and communications departments. The task force aims to refine and improve our ability to assess and respond to attacks, and enhance Wikipedia volunteers’ capacity by establishing processes and clear lines of communications between the Foundation and the community to surface and address disinformation attempts.

As part of this effort, the taskforce developed a playbook, laying out scenario plans around specific incidents, and has held several simulation exercises to model potential attacks. In addition, specific members of the task force are regularly meeting with representatives from major technology companies and U.S. government agencies to share insights and discuss ways they are addressing potential disinformation issues in relation to the election.

As part of our ongoing commitment to knowledge integrity, the Foundation’s research team, in collaboration with multiple universities around the world, delivered a suite of new research projects that examined how disinformation could manifest on the site. The insights from the research led to the product development of new human-centered machine learning services that enhance the community’s oversight of the projects.

These algorithms support editors in tasks such as detecting unsourced statements on Wikipedia and identifying malicious edits and behavior trends. Some of the tools used, or soon available to be used, by editors include:

  • ORES, a set of AI tools that measure and categorize Wikipedia content. A vandalism detection API is one of its key features, allowing the community to automatically assess the quality of an edit and helping to detect possible vandalism (a minimal query sketch follows this list).
  • An algorithm that identifies unsourced statements or edits that require citation. The algorithm surfaces unverified statements; it helps editors decide if the sentence needs a citation, and, in return, human editors improve the algorithm’s deep learning ability.
  • Algorithms to help community experts to identify accounts that may be linked to suspected sockpuppet accounts.
  • A machine learning system to detect inconsistencies across Wikipedia and Wikidata, helping editors to spot contradictory content across different Wikimedia projects.
  • A daily report of articles that have recently received a high volume of traffic from social media platforms. The report helps editors detect trends that may lead to spikes of vandalism on Wikipedia, helping them identify and respond faster.
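To give a feel for how these services are consumed, here is a minimal sketch of an ORES lookup for a single edit. The revision ID is a placeholder, and the JSON path assumes the public v3 response layout.

# Sketch: ask ORES whether a specific English Wikipedia edit looks damaging.
import requests

revid = 123456789   # placeholder; substitute a real enwiki revision ID
resp = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/",
    params={"models": "damaging", "revids": revid},
    timeout=30,
)
resp.raise_for_status()
score = resp.json()["enwiki"]["scores"][str(revid)]["damaging"]["score"]
print("Predicted damaging:", score["prediction"])
print("Probability damaging:", score["probability"]["true"])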

For weeks, Wikipedia’s community has been diligently preparing and debating whether to extend protections on election-related pages and determining the threshold for citations.

The following guidelines are actively being discussed in the forum, where editors debate the contents and policies related to the 2020 U.S. Presidential Election article. For example, at least three reputable sources are needed before declaring a candidate the winner of a state; winners cannot be posted for at least 12 hours after a polling place has closed; and absolutely no original research can be used for citations.

Many of the volunteers who enforce these decisions are admins and functionaries, a type of specialized admin with extended privileges that allow them to make decisions about critical content. In a recent meeting with the Foundation task force, in which they discussed the added vigilance required around events such as the U.S. election, these volunteers agreed that they aren’t interested in ‘breaking the news,’ but rather in ensuring that Wikipedia readers now and 20 years from now have access to a reliable, well-documented, living record of what happens in the world.

Ryan Merkley (@ryanmerkley) is Chief of Staff at the Wikimedia Foundation.

Diego Sáez-Trumper (@e__migrante) is a Senior Research Scientist at the Wikimedia Foundation.

Happy 8th birthday, Wikidata!

15:44, Thursday, 29 2020 October UTC

Wikidata is 8! Let’s celebrate! In eight short years Wikidata has grown from a central repository for all language versions of Wikipedia, to an increasingly essential fixture of the internet.

Wikidata endeavors to represent the world’s knowledge as linked data. Linked data is the connective tissue that brings life and context to concepts, web pages, research, and anything else you can imagine across the internet. Many museums, libraries, and other cultural institutions are growing more interested in linked data because it is the future of data on the internet. It stands in contrast to tabular data or relational databases in many ways, but even if you’re unfamiliar with information architecture, there is a lot you can already appreciate about Wikidata. For your reading pleasure, here are eight characteristics to celebrate Wikidata:

First: Wikidata is multilingual. Having started as a project to connect all language versions of Wikipedia, Wikidata makes it easy to view and edit in any language that exists on Wikipedia. This is great not only for consumption of data, but also being able to contribute in the language of your preference (and, hey, if you can speak multiple languages, we have projects for you!).

Second: Wikidata is a hub of identifiers. What does this mean? Well, if you’re talking about a specific person or topic, there may be a unique number associated with it in a local collection somewhere (if you deal with identities, you’ll know this as authority control). Wikidata gathers as many of these unique numbers that refer to the same thing and associates them with that thing on Wikidata. In tracking thousands of identifiers, Wikidata helps with clarity and disambiguation, internet-wide.

Third: Wikidata supports an emergent ontology (ontology = the rules the database follows). If there’s a relationship missing on Wikidata, you can propose its inclusion! Many databases may adhere to very formal structures. Wikidata is adaptable, which is a trait that makes Wikidata expressive.

Fourth: Wikidata is expressive. Between multiple values, qualifiers, and ranks, Wikidata can express conflicting statements, vague representations, even incorrect data, and still not break or breed confusion. Having a system that is this expressive can help make information more explicit and improve data quality system wide.

Fifth: Wikidata is free and open. Is your collection incomplete? Check Wikidata to see if it can fill in the gaps. Download all of it. Or some of it. Or none of it. Use the Wikidata query service to get exactly what you want from Wikidata. You can do as much as you want. Data in Wikidata is licensed CC0, which means you can use it as you see fit…for free!

Sixth: Wikidata is visual! Using the Query Service, you can represent parts of Wikidata using images, maps, graphs, all with a quick click. Enamored with a data visualization? You can also embed it on a webpage for all to see!

Seventh: Wikidata plays well with machines (and humans!). Being machine readable has implications regarding teaching artificial intelligence, testing algorithms, and data science analyses, but it also means batch edits are possible. If you know what you’re doing, you can make some substantial changes to many things pretty quickly. And most important…

Eighth: Wikidata is changing. Underrepresentation on Wikidata is a real issue. Systemic bias is real. Wikidata is incomplete. Wikidata needs a diverse, well-represented community to begin to address these systemic issues. There’s room for it to grow and improve. We should feel an urgent obligation to improve these shortcomings and engage with these issues that pervade our historical and current representation of knowledge. Even though it’s flawed in its current state, there’s no better time to start contributing and join this community. Every item, every collection helps. Everyone makes Wikidata better.

Happy birthday, Wikidata. Looking forward to everything you’ll learn this year.

Interested in learning more? Join an upcoming virtual Wikidata course!

By Moriel Schottlender, Principal System Architect, The Wikimedia Foundation

A few months ago, I made the switch from Software Development to Systems Architecture. This change is exciting and an incredible opportunity to grow and learn. A new universe of possibilities and thinking has opened up; one I did not expect.

In this, the first of a series of posts, I’ll document my journey so far into the new frontier of Systems Architecture. I’ll explore my own shift in thinking from Software to Systems and what it could mean for the Wikimedia movement and future websites. Follow-up posts will delve into the more technical realms like systems design, event modeling, and the meaning and purpose of some of the exciting ongoing experiments. First, let’s start with an analogy that aims to put the journey into perspective and to provide a common frame to think about and talk about this mental shift. 

In the beginning, there was a castle in the wilderness

Twenty years ago, when the first foundations were laid for Wikipedia, the idea that the internet’s interconnectivity can enable freely available knowledge was revolutionary. Wikipedia filled a void no one even knew needed filling. 

In the wilderness of the internet, Wikipedia was a bright castle, holding within it a collaborative library of knowledge. 

Wikipedians were alone in their task, so they invented a lot of tools from scratch. The castle had to be self-sustaining in a wilderness that had very little to offer. 

We built and strengthened our foundation as more floors were added: More communities joined to create more wikis, and more features were added to accommodate them. We devised our own infrastructure and maintained it ourselves — at the time, other existing infrastructures weren’t suitable for our task.

Our language support is a good example of our ability to innovate and create our own tools. When new communities joined the Wikipedia family, we had to find ways to enable and celebrate languages that the internet wilderness didn’t know how to handle. We did. We took it further to enable translations that are collaborative. These things are still mind-blowingly unique, even in today’s internet.

Through all of that, not only did we remain strong and self-sufficient, we grew. Our castle grew to include different projects, types of media, and unique maintenance tools. We grew into the space around us and expanded the capacity inside. With fewer resources, we solved problems that other websites were only beginning to deal with.

A destination for pilgrimage: how users come to us

Successful support of Wikipedia’s communities led to a pilgrimage of readers and contributors. Curious readers came from all over the world, either casually or extensively, some becoming editors, some visiting over and over.

We developed ways to ensure they could read, edit, and curate. They began arriving using different devices, like different browsers, mobile phones, and apps. We added more doors and gateways to the castle — our web interface, mobile interface, API endpoints. We discovered that people come to us with various intents and purposes, and we made sure they were accepted and supported.

We established Wikipedia as a place where truth lives. Because of the collective wisdom of the editors, who care so much about the integrity of knowledge, Wikipedia flourished. Without our communities, the castle would have been a disintegrating ghost house.

Building collaboratively: the Jenga tower castle 

We accommodated and adjusted our castle to fit the multitudes of people who came to take part: readers from different languages and devices, editors with different technological needs, and the seemingly endless types of media and collection tools. 

We built taller and wider, adding more features. We added a Notification system and expanded the Recent Changes feed to empower contributors to monitor content. We added powerful search tools and enhanced editing accessibility with Visual Editor. We created mobile versions of the interface to enable a better reading experience on smaller screens. This is just a small sampling of the robust features we’ve added through the years. 

The castle’s simple towers branched out into a complex of structures on top of structures, and took the shape of an elaborate, interconnected, precarious Jenga tower, with impressively branched balconies and spires.

This led to another brilliant challenge to overcome. Every time we needed a new balcony or window, we had to verify the structure of the entire castle. Every new floor had to account for the walls around it, the supports underneath it, the hot-air balloons that keep it upright. Our castle’s design became more and more elaborate as we opened more and more doorways of entry and allowed more types of knowledge to exist within its library.

The design — the architecture —  also became heavily interdependent, making each decision to add a feature more and more complicated.

At some point, the castle infrastructure itself was not enough, and we began relying on tools that were created adjacent to Wikipedia, inserted into our production systems. Wikidata is a good example, as well as the iOS and Android apps, serving content and collections through different technologies, and feeding it back to Wikipedia’s collection of articles and views. The castle was no longer a single structure. It had become dependent on the tools around it, even as they lived outside its walls.

Our users built gadgets and extensions to our content, some becoming intrinsic to the workflow of the editors and curators. The castle became dependent on them. Gadgets like HotCat, NavigationPopups, and EditTop, or tools like Twinkle, became ubiquitous to the operation of many of our communities. 

We were no longer simply a castle. All the pieces had become so interconnected that it was difficult to distinguish between them. Making changes to our infrastructure became complicated, and our system became monolithic. If we wanted an additional feature, another way to view or edit — we had to adjust and accommodate not only for the structure of the castle but also for the hundreds of gadgets floating around it.

The city that replaced the wilderness: modernization efforts

Twenty years later, Wikipedia is still a beacon that people look up to in awe. Many articles have been written about the “magic” that made Wikipedia possible, and in our current political and social climate our existence matters even more. 

While we were creating a bustling hive of tools, gadgets, extensions, and features around our castle, the world around us changed too.

We are a sociotechnical system. Our technology is heavily intertwined with the social workflows of our users and communities. The emergence of new technologies changed the behavior of readers, editors, and users. They, in turn, led to changes in the technologies we’ve used to support them. This is a cycle that continues in endless loops of change.

Today, the internet is no longer a wilderness; it is a full-fledged city, with skyscrapers and highways and airports. The Wikipedia castle is no longer alone and forced to support itself — it is now surrounded by a city that has services and infrastructure to offer. The changing landscape of the internet also means our incoming traffic has changed; our castle must support people who come from farther places, languages, and cultures—people who arrive via new technology or expect to discover information in a new way.

Opportunities for modernization

Now that a bustling city has replaced the wilderness around us, new opportunities for modernization have opened. We created our own infrastructures, but the modern internet has a slew of services that we can potentially benefit from. By decoupling parts of the architecture, we can support them better, connect them more directly, and focus on their purpose. Most importantly, we can focus on our purpose, and our mission, by directing our energy and expertise to the places where they make the most impact.

Are there spaces where we can switch our own in-house code for an existing open-source tool, and contribute to it upstream? Is there an external, well-maintained library that we can utilize to decouple our monolith and provide a modern frontend framework? Can we find a way to transform our monolithic architecture in a way that will empower us and the capabilities and features we want to expose? What should we keep? What can we leave behind? What do we need to adjust as we move forward?

Dependency, Randall Munroe, xkcd.com, CC BY-NC 2.5

These are not easy questions to ask, and the answer is not necessarily clear cut, but these are questions that definitely deserve to be explored and investigated. The process itself will result in exploring where and how we can move towards modernizing ourselves for the future.

Imagine putting all our energy, expertise, and effort, full force, into building and maintaining the tools and services that we do best, without dealing with the inherent complexity of the monolith. Imagine keeping up with the incredible size and scale we’re reaching — without having to reshape the castle’s supports. Imagine freeing ourselves to focus on what we do best: delivering knowledge to a world that is dynamically changing around us. A world that relies on us for expertise and resources.

There’s another challenge that our castle is facing now that it’s surrounded by a bustling city. This one touches on the expectations of our users, the residents, and visitors of the city.

Using distribution services: enabling the TL;DR

(Or: how the way we enable access to our information matters)

People today expect information to arrive wherever they are— on their phone as push notifications, from a news aggregator, in Alexa or Google Home services, on their smartwatch, through augmented reality apps, virtual reality, and more. New technologies are coming and going at a fast pace, and that pace has created a different expectation — from the users — about where and how they digest and accept information.

We are experiencing a shift from readers and contributors coming to Wikipedia directly to search for information — to seeing external services curating our information, translating, crunching the data, and manipulating the output, to fit whatever method they use to send it to the end-user. Google’s been displaying part of articles on its sidebar. Amazon’s Alexa, Apple’s Siri, and Google Home are reading pieces of our articles — sometimes mixed in with other sources — when answering questions by users, taking over control of our content.

People don’t go to the library as often anymore; they expect the library to come to them.

If we don’t adjust to this new reality, we risk giving up control of how our content and information is presented to users, ceding it to services that make those decisions themselves.

We have an incredible opportunity to reach our users where they’re at, to deliver experiences that make use of the trusted and robust universe of knowledge that we offer—wherever they are at, whatever technology they use, whatever method they prefer.

Making the transition: from the castle to the city; from Software to Systems

Shifting into the mental models that are needed for modernization is not easy. We need to change the questions we ask ourselves. We need to move from asking how to change the entire castle so it can support several more balconies, and instead ask how we can transform into a village that thrives in the dynamic, bustling city the internet has become.

I believe this mentality-shift paves the road forward and opens doors. I am excited about the opportunities we have to serve our communities and the world in new ways.

And yet, when questions arise, I still have to remind myself to step outside of my comfort zone and look differently at the capabilities we want to ship. This is where I am at now. Accepting and embracing the unknown, the uncertainty. This is hard for me. I grew up as an Engineer inside this castle. Instinctively, I think and plan and build inside it, too. I’ve learned to be overly cautious. But the truth is that modernization needs me to open my mind further. It needs us to do so together. It’s not enough to think inside this twisty castle; we need to allow ourselves to think bigger. 

The castle needs to transform itself into a village of its own, where individual structures are easy to maintain, expand, improve upon, and scale. Where the roads and alleys allow for adding more structures we aren’t even thinking about yet. Where new people — developers and editors alike — don’t need a magical series of elaborate maps to find their way from one service to another, or, when adding a new window, don’t need to check the stability of the entire monolithic castle.

We need to modernize in a sustainable way that answers not only today’s new technology challenges but allows us to adjust and pivot 20 years from now. One guarantee is that technology will continue to change and evolve. Modernization designs with that in mind, too.

The process of modernization isn’t just technological; it’s a mental shift. It’s a cultural shift. Shifting the thinking from what we’ve been relying on for 20 years to something so different, so potentially radical — is overwhelming. It’s also incredibly exciting and opens up endless possibilities. 

Our castle, and the farmlands around it, are ready for the next stage. I am too.

About this post

Kamianets-Podilskyi Castle, Ukraine, Rbrechko, CC BY-SA 4.0

In 2015 I noticed that git fetches from our most active repositories were unreasonably slow, sometimes taking up to a minute, which hindered fast development and collaboration. You can read some of the debugging I conducted at the time on T103990. Gerrit upstream was aware of the issue and a workaround was presented, though we never implemented it.

When fetching source code from a git repository, the client and server conduct a negotiation to discover which objects have to be sent. The server sends an advertisement that lists every single reference it knows about. For a very active repository in Gerrit, that means sending references for each patchset and each change ever made to the repository, or almost 200,000 references for mediawiki/core. That is a noticeable amount of data, resulting in a slow fetch, especially on a slow internet connection.

Gerrit originated at Google and has full-time maintainers. In 2017 a team at Google set out to tackle the problem and proposed a new protocol to address the issue, working closely with the git maintainers while doing so. The new protocol makes git smarter during the advertisement phase, notably by filtering out references the client is not interested in. You can read Google's introduction post at https://opensource.googleblog.com/2018/05/introducing-git-protocol-version-2.html

Since June 28th 2020, our Gerrit has been upgraded and now supports git protocol version 2. But to benefit from faster fetches, your client also needs to know about the newer protocol and have it explicitly enabled. For git, you will want version 2.18 or later. Enable the new protocol by setting git configuration protocol.version to 2.

It can be done either on demand:

git -c protocol.version=2 fetch

Or enabled in your user configuration file:

$HOME/.gitconfig
[protocol]
    version = 2
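
Equivalently, git can write that setting for you, and a packet trace should show a 'version 2' line when the new protocol is actually negotiated:

git config --global protocol.version 2
GIT_TRACE_PACKET=1 git ls-remote origin 2>&1 | grep 'version 2'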

On my internet connection, fetching for mediawiki/core.git went from ~15 seconds to just 3 seconds. A noticeable difference in my day to day activity.

If you encounter any issue with the new protocol, you can file a task in our Phabricator and tag it with git-protocol-v2.

Five best practices for Wikipedia assignments

16:21, Wednesday, 28 2020 October UTC

Elyssa Faison has been assigning Wikipedia projects in both undergraduate and graduate classes since 2015. She is an associate professor in the Department of History at the University of Oklahoma.

Elyssa Faison
Elyssa Faison

When I was first approached several years ago by the Assistant Director of the Office of Digital Learning at my university about assigning a Wikipedia project in my class, I was skeptical. I had never edited Wikipedia myself, had little understanding of how editing was done, and harbored doubts about the reliability of information on the platform. But our digital learning specialist, who himself had participated in numerous Wikipedia edit-a-thons, explained how content is monitored and how the demographics of Wikipedia editors can lead to the over- or under-representation of certain topics. 

Student editors in my classes on Japanese history, women’s history, and environmental history (and sometimes all three at once) could help add content in areas where it was most needed, he argued; and as their instructor, I could make sure that the content they were adding would be appropriate and properly sourced. What’s more, asking students to edit on Wikipedia would make them think in different ways, and would make them accountable to a broader public. Students who are accustomed to having a singular audience for their writing (their instructor), would now be producing work that could potentially be read by thousands.

Over the years I have developed a set of best practices for my own Wikipedia Project. You might choose to do things differently, but this is what has worked for me:

First, really commit. Set up your assignment to run for eight weeks or longer. For me, that’s half a semester. For some of you, it might be nearly an entire quarter. Students will need to understand the philosophy behind Wikipedia editing and some of the (not difficult) technical aspects of the platform. Then they will need to choose a topic, conduct research, write, edit their writing, and finally move their work to the main space of Wikipedia. Short-changing this process in terms of time can make all of you frustrated, as students struggle to keep up with the many new things they are learning about form, content, and technology. Dedicate a portion of class time each week—some weeks ten minutes will be sufficient; other weeks you might need more—to talk about the Wikipedia project and see if students have questions.

Second, use as many of the tools available on the Wiki Education Dashboard as possible. The Wiki Education team has put together an incredible dashboard that allows you to set up a timeline for the entire term that includes everything your students will need to move through the process step by step. The dashboard provides training materials for students and instructors and gives suggestions about assignments you might want to add to your syllabus as part of the project. You can choose which of them you want to use, but my own experience is that “more is more” in this particular case. Use these materials and assignments as the basis of the weekly discussions you have in class.

Third, once you have decided on assignments for your Wikipedia Project and have set up your timeline through the dashboard, take the time at the beginning of the term to integrate specific Wikipedia assignments and deadlines into whatever LMS you are already using for your course. While the Wiki Education Dashboard is fantastic, I learned by trial and error that students get frustrated and lose track of what they are supposed to do if the main syllabus and content of your course is on your LMS (Canvas, Blackboard, D2L), but you are counting on them independently knowing they also need to navigate over to their Wikipedia Project timeline on the dashboard. Something as simple as listing the specific Wikipedia assignment for the week in your main syllabus or weekly module system on your LMS signals to them that they need to move to the Wikipedia dashboard to get detailed instructions and complete that work.

Fourth, pre-select several stub articles appropriate for the topic of your course that students can choose to edit. Students in my Japanese history classes generally do not come into the course with enough background to be able to generate topics that might be good to work on. Give them a list of possibilities, and let them offer up something else for you to approve if they desire.

Fifth, take full advantage of the peer review process. (You can find guidance on this on the dashboard.) Reading each other’s work fine-tunes students’ understanding of how to write a good Wikipedia article. They get to see examples of good or not-so-good use of sources, tone, and basic writing. And they learn from each other in ways they don’t always learn from us.

We all use Wikipedia. As it has become more robust over the years, it has turned into an indispensable resource to check facts and dates, or to get a quick overview of a topic. This is as true for faculty as it is for undergraduates, graduate students, and the general public. Over the years, students in my classes—undergraduate and graduate alike—have consistently commented on how the project has changed their own perceptions of Wikipedia. In the reflection papers they write after completing their articles, time and time again students leave the class saying things like “Wikipedia’s editing process was more structured than I had expected,” and “the information put on Wikipedia has to go through a lot more vetting than I had originally thought.”

In recent years, students in my classes have contributed significantly to articles related to Japanese women’s history and Japanese environmental history. They have engaged with other Wikipedians on the talk pages of the articles they are working on, and they have seen that their work really is getting an audience. They get to experience how the work of historians is important in a way that writing a term paper that only I will ever read could never teach them.

Image of Bizzell Library credit: ragesoss, CC BY-SA 2.0, via Wikimedia Commons

Scots Wiki – moving forward

16:56, Tuesday, 27 2020 October UTC
‘Stinking’ misprint, 1787 Edinburgh Edition. Poems Chiefly in the Scottish Dialect, by Robert Burns. By user Rosser1954, public domain.

By Dr Sara Thomas, Scotland Programme Coordinator

As Wikimedia UK, we work to support language communities living in, or connected to the UK. This translates to a range of projects, including Scots Wikipedia. 

Up until recently, there were only a relatively small number of regular, active editors of sco.wiki. However, as of the end of August, that has most definitely changed. And that’s the best thing that we could have hoped for. I really hope that these new editors will feel motivated to stick around, because their long-term support would be transformational for the Scots Wiki, and hopefully will have benefits for the wider Scots language community too.

With all the press coverage, a certain amount of immediate interest was inevitable. And the community has worked hard to increase their capacity to help deal with this; nominating and onboarding new Scots-speaking admins, improving on-wiki tools, organising review of articles, discussing spelling and dialect, deflecting vandalism, writing a new notability policy, deleting spam.

We’ve been heartened by the energy and proactive attitude of the existing Scots wiki community in dealing with the increased attention and participation in their project. At the same time, it was disappointing to see some of that attention fail to assume good faith on the part of the editor upon whom attention fell, and to engage in personal criticism. That’s not a behaviour we would support, and what we want to focus on here is the positive impact of the story on Scots wiki. 

With any minority language Wikipedia, community building is incredibly important; one of the ways this happens is through events like the Celtic Knot conference which Wikimedia UK have organised since 2017. Calls for content to be parachuted into the Wiki are ultimately not the most helpful, not least because a Wiki relies on its community; it needs the ongoing support and oversight of that community to survive. It needs those volunteers who look out for vandalism, who fix spelling mistakes, who create new articles, who review articles, who work on tech infrastructure – all of those kinds of things which it’s easy to take for granted if you spend most of your time on en:wiki. If you want paid editing, and an encyclopedia which remains fixed in one point in time, there are options for that. But that is not Wikipedia. 

The relationship between a chapter and a language Wikipedia is one of support, not of dictatorship. So, as Scotland Programme Coordinator for Wikimedia UK, what I've been working on for the last few months is seeing how I can support, and help to grow, that community.

In practice, what that's meant is a whole load of activity, if not behind the scenes then in the wings. User:Cobra3000 set up a two-day editathon at the end of August, for which I ran online training sessions (we've been doing a lot of that recently) using the Wikimedia UK Zoom and Eventbrite accounts, set up a Dashboard to track activity, and helped to set up an on-wiki event page. I also created and uploaded some sco.wiki-specific how-to videos to Commons, which went on the event page and were used for training, as well as in the off-wiki locations where activity was being organised: Cobra3000's Scots Language Discord server and the new Scots Wikipedia Editors Facebook group, which now has over 100 members. I've been active in both of the latter, answering questions, promoting the training, and helping with wiki-specific queries where possible.

For the editathon, we also made sure to include a range of activities for non- or lower-proficiency Scots speakers, many of whom were interested in helping out. Dr Michael Dempster of the Scots Language Centre has been very involved, including making an 8-hour introduction to Scots course available for free on YouTube. The editathon produced some quite incredible stats; they include high-volume AWB tasks, but even so, I was excited to see the enthusiasm and care that the community has for the Wiki.

We’ve also been talking to the Scots Language Centre about how we might engage the wider Scots community with Wikimedia in the future, and this will hopefully build on some existing projects which had to be shelved due to COVID. 

The second editathon was held at the end of September, focussing on places, and we hope that these editathons can become a regular event. Now that we're done with the initial firefighting period, it's time to dig in for the long term.

If you’d like to find out more about community building through events, you can get involved here, or to see more about our Scottish activities, you can browse the blog tag.

10 years of tackling Wikipedia’s equity gaps

16:38, Tuesday, 27 2020 October UTC

This fall, we’re celebrating the 10th anniversary of the Wikipedia Student Program with a series of blog posts telling the story of the program in the United States and Canada.

In 2018, Wiki Education launched a three-year strategic plan whose three pillars are equity, quality, and reach. Equity, however, has been one of the defining forces behind Wiki Education and the Wikipedia Student Program since its inception in 2010. Wikipedia aims to represent the sum total of human knowledge, but despite its more than 6 million articles, it still has a long way to go. This is especially true when it comes to topics that deal with historically underrepresented populations, such as women and minorities, as well as more academic subjects. Students have access to information that is often behind paywalls for the population at large, and they have their instructors, experts in their subject-matter areas, to guide them; the program quickly realized that students can play a critical role in making Wikipedia a more equitable space.

Knowledge equity can be an elusive concept and difficult to define. At its core, though, lie two key pillars: knowledge should be accessible both in its creation and in its dissemination. It means that knowledge should be accurate, representative, and inclusive, both for those who seek to create it and for those who seek it out. While knowledge equity has been at the core of our programs since the beginning, it has taken on new meaning and urgency in recent years with the rise of fake news and the ease with which misinformation spreads.

Equity through participation

Knowledge is only as equitable as its creators and disseminators. Filling in content gaps and correcting the historical record is a critical part of knowledge equity, but who creates knowledge matters. This is especially true on Wikipedia, a site run by a volunteer base composed overwhelmingly of people who identify as white and male. We've known since the outset of the Student Program that roughly 60% of the students we support are women. This is representative of college campuses nationwide. Based on recent survey results, we also know that about 17% of our students identify as Asian, 13% as Hispanic/Latinx, and about 6% as African-American. Additionally, 45% of the students in our program speak a language other than English, and roughly 8% identify as having a disability.

While the makeup of the students in our program is fairly representative of college campuses nationwide, it deviates quite dramatically from Wikipedia’s active editing community. We also know that 19% of all new active editors to English Wikipedia come from the Student Program. While Wikipedia strives to promote a neutral point of view in all of its content, our students bring a diversity of perspectives and experiences that ultimately make Wikipedia a more inclusive, accessible, and equitable place. Whether it’s writing about a language they speak, the town they come from, or another topic near and dear to their heart, our students bring new voices to the Wikipedia editing community.

In recent years, we've also begun to collect demographic data on the instructors who participate in our program. While our students are consistent with college campuses more broadly, our instructors deviate dramatically from academia as well as from the Wikipedia editing community. More than 60% of the instructors who participate in our program are women, a number far higher than the 30% that women make up in academia. While most of our instructors do not contribute to Wikipedia directly, they are nevertheless important members of the Wikipedia community. They use their expertise to guide their students and to ensure that their areas of subject-matter expertise are accurately represented on Wikipedia. They too influence Wikipedia's trajectory and diversify its base of knowledge creators. Whether these professors focus on Women's and Gender Studies, biographies of women, or plant biology, they are making Wikipedia more equitable.

Not content with content gaps

Whether they come to the Wikipedia assignment with knowledge equity in mind or not, all of our instructors and students participate in making Wikipedia more equitable. They do this through their very participation, but also through the content they produce. A content gap is just that, a gap in knowledge, but not all content gaps are created equal. Wikipedia has notable content gaps in subjects related to women, minorities, and other historically underrepresented populations. It also has significant gaps in more academic and obscure topics. The reasons for these gaps are complex. They arise in part because of who edits Wikipedia, but also because these subjects are often poorly covered in the written record. To a large degree, Wikipedia reflects broader societal biases. Reflection, though, need not mean reinforcement. Though it often does, Wikipedia can also be a powerful tool for correcting those inequities in content. We know that Wikipedia is not a one-way street: information flows both into and out of it. It is not simply a repository of information, but an agent of knowledge creation and dissemination.

Most notable among these content gaps is Wikipedia's gender gap. Despite the fervent efforts of WikiProject Women in Red, only slightly more than 18% of Wikipedia's biographies are of women. Women are all too often written about as someone else's wife or mother rather than for their own achievements. The same is true for other historically underrepresented populations.

To help scale our work in filling in these content gaps, we've formed several partnerships with academic organizations over the years. In 2014, we began our important work with the National Women's Studies Association (NWSA) to tackle Wikipedia's gender gap. Since then, we've worked with 405 courses whose roughly 8,600 students have added more than 6 million words to Wikipedia in the field of Women's and Gender Studies. Because of our students, the world now has access to information ranging from Mental disorders and gender to Chanda Prescod-Weinstein, American cosmologist and activist.

Equity is as much about producing content as it is about having access to that content. It's not just about correcting the record so that a female scientist gets proper recognition for her contribution to a field, but also about ensuring that other women seeking to establish themselves in that same field have a role model to whom they can turn. Six million words is certainly a lot, but we've hardly scratched the surface when it comes to filling in these critical content gaps.

Equity as a skill

While knowledge equity has been a driving force in the Student Program since its inception, we've come to realize in recent years that tackling issues of equity is a skill to be learned, honed, and practiced. We know that there is a steep learning curve when it comes to contributing to Wikipedia. To shepherd our students through this sometimes complex and confusing process, we've developed a host of training materials to ensure that students can successfully make those first edits. But just as important as learning how to cite sources or add media is learning how to identify equity gaps on Wikipedia and remedy them.

Over the past year we've made a series of updates to our resources so that students and instructors are thinking about equity at every step in the process of learning how to contribute to Wikipedia. In fact, it's our goal to make editing Wikipedia and tackling knowledge equity one and the same. Whether it's asking students to critically evaluate a Wikipedia article, to consider whether the sources they're using come from a diverse array of authors, or to check whether their classmates addressed knowledge equity in their peer review, we want to make sure that students are able to identify bias and, ultimately, to correct it where possible.

We encourage students to think more deeply about Wikipedia through a series of discussion prompts, covering topics ranging from sources and plagiarism to content gaps to thinking about Wikipedia more broadly.

It’s long been our goal to help our students develop digital literacy skills, to enable them to discern reliable information from unreliable so that they can become full digital citizens of our modern media landscape. It’s now our twin goal that our students are also able to identify bias and knowledge inequities both on and off of Wikipedia and to correct those inequities where possible. It’s a skill that is undeniably difficult to measure and assess, but we’re highly encouraged by the fact that over 60% of instructors believe that the Wikipedia assignment achieves this very goal.

It’s no surprise that many of our instructors view the Wikipedia assignment as an act of social action. As one instructor put it: “It provides an opportunity for those who have access to reliable information to share it with those who do not have access. It also inspires people who are used to only writing transactionally to shift focus and write to support the common good as part of a community of writers.”

We know that our students will continue to play a critical role in making Wikipedia a more equitable space because knowledge equity does not have a finite finish line. It's an ongoing endeavor of which we are truly proud to be a part.