Taking Care of Alex’s Plugins

On February 27, 2019, Alex Mills (aka Viper007Bond) passed away at home surrounded by his family after a two and a half year battle with leukaemia.

Alex Mills, Montmorency Falls, Quebec. 2009.

Alex was a gifted developer and a much-liked member of the WordPress community. I have always been in awe of everything he did. He contributed to core many times and is listed as a contributor to WordPress 5.1.1, which was released last month. He still has 15 open Trac tickets, so I expect to see his name on future releases too. He was the primary author of many plugins and contributed to others.

In his last blog post he expressed the wish that his plugins would be maintained and developed.

Since all of my plugins are open-source, they are free to be forked by reputable authors in the WordPress community. It would mean a lot to have my legacy go on.

Alex Mills, February 18th 2019.

I’m glad to say that a team inside Automattic will be taking care of his plugins in the future.

We will fork each of Alex’s GitHub repositories, adding them to the Automattic GitHub account. We’ll update any links in his plugin readme files so they point at the new locations.

A short paragraph about Alex will be added to the plugin readme files. You can see an example on the Regenerate Thumbnails homepage.

We’ll do our best to stay on top of support queries on the forums too. If anyone else wants to help please don’t hesitate to get in touch. I’m donncha on the WordPress.org Slack.

In times gone by authors left works of music, novels, poetry, and letters on their passing. They were static works of art frozen in time. Alex leaves behind his code that will continue to evolve and operate in a living world used by thousands (millions?) of people every day as they go about their online lives.

***

Donncha Ó Caoimh is a code wrangler with Automattic, sometimes contributor to WordPress and writer of plugins.

Launching Gutenberg Block Kit on Glitch

Today we are launching Gutenberg Block Kit on Glitch. We are hoping this experiment will help anyone with an idea for a Gutenberg block to quickly prototype and build their block.

When you click the Remix button on the Gutenberg Block Kit project, you get a full JavaScript development environment right in your browser. Nothing to install, just edit and see your changes in the live preview.

Getting Started

After you click the Remix button, you’ll be taken to your own copy of the project. You can see a live preview of the block by clicking the “Show” button at the top. As you make changes to the block, the watcher will re-build the JavaScript and update the preview window.

Take a look at src/block.js. This is the core of your block. Try making changes to the edit and save methods. You can make CSS changes in src/editor.scss (for styles that should be in the editor) and src/style.scss (for styles on the published page).
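
For orientation, here’s a minimal sketch of what a block definition generally looks like; the block name, strings and structure below are illustrative rather than the exact contents of the kit’s src/block.js:

import { registerBlockType } from '@wordpress/blocks';
import { createElement as el } from '@wordpress/element';

registerBlockType( 'my-plugin/hello', {
    title: 'Hello Block',
    icon: 'smiley',
    category: 'widgets',
    // edit() renders the block inside the editor.
    edit: () => el( 'p', { className: 'hello-block' }, 'Hello from the editor!' ),
    // save() produces the markup that is serialized into post content.
    save: () => el( 'p', { className: 'hello-block' }, 'Hello from the published page!' ),
} );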

When you are happy with your block, or you want to start developing in your own WordPress environment, you can download your block as a WordPress plugin. Click the “Download Block Plugin for WordPress” link on the preview page. The ZIP file can be uploaded to a WordPress site and used immediately.

Learn more

If you want to learn more about how to develop Gutenberg blocks, take a look at the Block Editor Handbook. There you can learn more about how Gutenberg blocks work and follow along with the block tutorial.

How it works under the hood

We are using ParcelJS, which watches the source files for changes and then transpiles, bundles, and serves your code. See server.js:7, which tells Parcel to watch src/index.html as an entry point.
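
With the Parcel 1.x API, the relevant part of a dev server entry point could look roughly like this (illustrative; the kit’s actual server.js may differ):

const Bundler = require( 'parcel-bundler' );

// Watch src/index.html (and everything it pulls in), rebuild on change,
// and serve the result so the preview window stays up to date.
const bundler = new Bundler( 'src/index.html', { outDir: 'dist' } );
bundler.serve( 3000 );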

Gutenberg is made up of a number of packages, exposed on the window.wp object. Normally WordPress ensures that these are available; however, the constraints of Glitch prevent us from running a full WordPress development environment. Instead, we bundle the Gutenberg packages offline and upload them to Glitch’s CDN, which serves them as static assets. You can see how it’s built in gutenberg.js.
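
In practice this means block code can reach the same APIs through the global instead of an npm import, for example:

// Equivalent to `import { registerBlockType } from '@wordpress/blocks';`
// when the packages are provided as a pre-built bundle on window.wp.
const { registerBlockType } = window.wp.blocks;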

Any files you put in the plugin/ directory will be added to the ZIP file for your plugin. After you download it, you can continue working on your plugin locally. See the JavaScript Build Setup guide for more about what you need on your local machine to continue developing.

If you have any ideas for improving Gutenberg Block Kit, please open an issue on the GitHub repository.

Developing WordPress, Jetpack, and Calypso on ChromeOS

Today I was travelling to meet my team in Mexico City (one of the many perks of working at Automattic), and I decided to see if I could set up a functional development environment on my new Google Pixelbook.

It’s quite a capable machine (Core i5, 8GB RAM) – not quite MacBook-Pro-level, but powerful enough. Plus it has great battery life, touch and pen support, a beautiful keyboard, Android app support and other really nice things. You can even run Android Studio on it.

So, without further ado – let’s see if we can develop Jetpack and Calypso on ChromeOS! As a bonus, we will add ngrok so that our WordPress instance can be viewed by anyone on the internet.

These instructions assume you have some familiarity with Linux and the vi editor. Feel free to adapt them to develop your own WordPress plugin or node app.

Basic Setup

Enable “Linux Mode” on ChromeOS

ChromeOS supports running Linux in a container – but only on Pixelbooks right now, and only on the “developer channel”. Just follow these instructions.

Once you’ve installed Linux, you simply get a Terminal icon in your App drawer. Clicking it boots Linux and opens a Bash shell. Woohoo!

This Linux is a lightly customized version of Debian 9 (Stretch). Most of the customizations are to support the Wayland display server for the Linux container.

As always with Linux, it’s worth doing a sudo apt-get update && sudo apt-get upgrade to make sure everything is up to date.

Also you should know that the network address for the Linux container is “penguin.linux.test”. Cute 🙂

Install an Editor

Installing Visual Studio Code

This is my preferred editor. You can follow the Debian (Stretch) instructions on the VS Code web page.

Once you have installed VS Code, you can edit the files in any directory using the command code /path/to/directory.

Developing Calypso

For the unfamiliar, Calypso is the user interface of WordPress.com. It’s a beautiful, fast and responsive replacement for wp-admin, which talks to your site via the WordPress.com APIs. It’s built using node, with a user interface powered by React.

Clone wp-calypso

I personally put my repositories in a folder called `workspace` under my home directory.

mkdir workspace
cd workspace
git clone git@github.com:Automattic/wp-calypso.git

Install NVM

We need this in order to build Calypso.

sudo apt-get install build-essential libssl-dev
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

Follow any prompts from the install script, including installing Node 10.
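
Once the script has finished (you may need to re-open the Terminal so nvm is on your PATH), installing and selecting Node 10 looks like this:

nvm install 10
nvm use 10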

Let’s run Calypso!

cd wp-calypso 
npm run start

Now open http://calypso.localhost:3000/ and behold your beautiful WordPress interface 🙂

Install WordPress and connect Jetpack

Installing WordPress

The Debian Wiki has official instructions, which I have summarized here with light customizations.

As you may know, WordPress has just three dependencies: PHP, Curl and MySQL (aka MariaDB).

sudo apt-get install wordpress curl apache2 mariadb-server
sudo mysql_secure_installation

Follow the prompts to set a MySQL root password and disable remote access.

Now let’s create an Apache configuration:

sudo vi /etc/apache2/sites-available/wp.conf

I have taken the following config from the Debian instructions, but set the ServerName to "penguin.linux.test" (the default address for the VM) and added a ServerAlias directive with something like "myspecialsite.ngrok.io". This will allow us to connect Jetpack later, when we install ngrok.

Here’s my config:

<VirtualHost *:80>
        ServerName penguin.linux.test
        ServerAlias goldsoundschrome.ngrok.io
        ServerAdmin webmaster@example.com
        DocumentRoot /usr/share/wordpress

        Alias /wp-content /var/lib/wordpress/wp-content
        <Directory /usr/share/wordpress>
            Options FollowSymLinks
            AllowOverride Limit Options FileInfo
            DirectoryIndex index.php
            Require all granted
        </Directory>
        <Directory /var/lib/wordpress/wp-content>
            Options FollowSymLinks
            Require all granted
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Now let’s disable the default site and enable our wp site in Apache:

sudo /usr/sbin/a2dissite 000-default
sudo /usr/sbin/a2ensite wp
sudo systemctl reload apache2

Note that, contrary to the Debian WP instructions, the a2dissite and a2ensite commands need to be prefixed with /usr/sbin. Since we’ll only be running one site (for now), let’s create/edit the default WordPress config file, where the WordPress Debian package expects to find it:

sudo vi /etc/wordpress/config-default.php
<?php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'password');
define('DB_HOST', 'localhost');
define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');
define('FS_METHOD', 'direct');
?>

Be sure to change the password to something more secure 🙂

Also note that we added define('FS_METHOD', 'direct'); so that we can update plugins using the regular WordPress update mechanism – otherwise WordPress will prompt you for an FTP password whenever you update.

You will also need to change the ownership of the wp-content directories so that Apache can write to them:

sudo chown -R www-data:www-data /usr/share/wordpress/wp-content

Now create a one-time SQL script for creating the DB; I saved mine as ~/wp.sql. Remember to match the password here to the one in the config file above:

CREATE DATABASE wordpress;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
ON wordpress.*
TO wordpress@localhost
IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

Now load the file to create our database:

sudo cat ~/wp.sql | sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf

The command should complete without errors.

To confirm that the DB was created, load up the MySQL command line:

sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| wordpress          |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [(none)]>

Hooray! Ok – now you should be able to go to http://penguin.linux.test in your browser and see the famous 5 minute install! Follow the prompts and you have WordPress running. Yay!

Install wp-cli

Follow the instructions at https://wp-cli.org/#installing to install wp-cli
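
At the time of writing, those instructions boil down to downloading the Phar file, making it executable and moving it onto your PATH:

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
wp --info   # confirm it runs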

Now make sure we create a www-data-owned directory for wp-cli to use as its cache; otherwise you’ll get warnings about it:

sudo mkdir /var/www/.wp-cli
sudo chown www-data:www-data /var/www/.wp-cli

Once you’re done, you should be able to cd to /usr/share/wordpress and run commands like this:

sudo -u www-data wp plugin install --activate jetpack

Reinstall default plugin and theme

At this point, we can remove the old akismet and twentyseventeen symlinks and use wp-cli to install them again (so they can be auto-updated).

sudo rm /var/lib/wordpress/wp-content/plugins/akismet
sudo rm /var/lib/wordpress/wp-content/themes/twentyseventeen
cd /usr/share/wordpress
sudo -u www-data wp theme install --activate twentyseventeen
sudo -u www-data wp plugin install --activate akismet

In order to connect Jetpack, our WordPress install needs to be accessible on the internet. To do this, I use ngrok.

Installing ngrok

Head to https://ngrok.com/download and copy the link to the latest Linux amd64 build, then unzip it.

In my case, the commands were:

wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
unzip ngrok-stable-linux-amd64.zip
sudo mv ngrok /usr/local/bin/

Now we need to activate ngrok. If you head to ngrok.io and log in, then you should be able to copy the “Connect your account” command (careful of that leading ./) and activate your ngrok install.

ngrok authtoken blahblahblahblahtokentokentoken

Assuming you successfully authenticated, you should now be able to launch ngrok using the subdomain you specified in the ServerAlias directive in the Apache config:

ngrok http -subdomain=goldsoundschrome 80

Now go to http://goldsoundschrome.ngrok.io and see your site!

There’s one wrinkle here – if you attempt to authenticate or connect to your site externally, it will redirect to penguin.linux.test.

Luckily the wp command we installed earlier gives us an easy way to change all the domain-related settings:

cd /usr/share/wordpress
wp search-replace penguin.linux.test goldsoundschrome.ngrok.io

Now head to http://goldsoundschrome.ngrok.io/wp-admin (or whatever your address is) and log in, then follow the prompts to set up Jetpack.

Woohoo!

Observations / Thoughts

This actually works pretty well. Developing on my Pixelbook feels really fast and snappy (and it’s not the highest-end model, either). I honestly didn’t notice an enormous difference between the Pixelbook and my MacBook Pro in terms of performance.

Display scaling for Linux apps needs tuning – some apps are teeny-tiny (xterm), some are middling (Visual Studio Code) and some are normal-sized (GTK apps like the Package Manager).

Overall, it’s a perfectly functional system for developing WordPress plugins, themes, and node apps. It has tons of battery life, great security, and thanks to the Linux mode it has access to all the useful software you can think of.

Enjoy!

***

Daniel Walmsley is a Team Lead at Automattic, mainly working on partnerships. He loves working from home in beautiful Nevada City.

Video of Automated Test Runs

Sometimes, automated end-to-end (e2e) tests fail and it’s difficult to tell why. Screenshots and stack traces are useful, but video of the actual test running in the CI environment would also be helpful. That’s why we recently added video to the automated tests we run in CircleCI. We are now able to see everything that occurred during the test and can quickly debug issues that would previously require more investigation.

How we’re using it

Any time we run our full automated e2e test suite, we make a video recording of every test. If the test passes, we delete the video. If it fails, we keep the video, and it’s returned as an artifact in CircleCI. We also output the path of the video file so it’s easier to find the file that corresponds to the test that you’re looking at.

We are currently only making recordings when running the full suite of tests; we don’t record the “canaries” to avoid adding additional overhead to simple test runs.

How it works

The builds we use in CircleCI are running in Linux docker images. To get video of the automated tests, we install FFmpeg in the container and use Xvfb for the display. For each test we:

  1. Start an Xvfb session on a display that is not already in use.
  2. Start a recording of the display using FFmpeg.
  3. Run the automated test in Chrome. There is a --display parameter that allows you to specify which display to run the test on.
  4. Stop the FFmpeg recording when the test is completed or has failed.
  5. Stop the Xvfb session.
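
A stripped-down shell sketch of those steps (the display number, resolution, file name and test command here are illustrative, not our actual CI scripts):

# 1. Start a virtual framebuffer on an unused display.
Xvfb :99 -screen 0 1440x1000x24 &
XVFB_PID=$!

# 2. Record that display with FFmpeg's x11grab input.
ffmpeg -y -f x11grab -video_size 1440x1000 -i :99 test-recording.mpg &
FFMPEG_PID=$!

# 3. Run the test with Chrome pointed at the virtual display.
DISPLAY=:99 npm run test:e2e

# 4. and 5. Stop the recording, then the virtual display.
kill -INT "$FFMPEG_PID"
kill "$XVFB_PID"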

Get involved!

Feel free to check out our e2e tests repository, make a fork, and give us your feedback or suggestions. Pull requests are always welcome.

***

Brent Sessions is an Excellence Wrangler for Automattic working to provide tools and technology to help developers improve product testing.

Cloud-hosted Branches for Painless Testing

Anyone familiar with developing web applications knows how tedious it is to test code changes made by your peers. Whether it’s a designer learning how to build and run a local server, or even a developer that needs to stash local changes and wait for a server rebuild, testing changes is a major workflow interruption that can mean minutes of thumb twiddling. At WordPress.com, we’re proud that we’ve made testing in-progress work a quick process that’s as easy as sharing a url.

Introducing Calypso.live

If you’ve ever seen a Calypso pull request, you might have noticed that our helpful matticbot always comments on the request with a link to calypso.live. If you follow the link, it takes you to a hosted version of that branch. If you’re the first person to access the branch, you may have to wait a couple of minutes for the app to be built, during which you’ll be greeted with this loading screen:

calypso.live loading screen

Once the server is done building, it presents you with a version of Calypso corresponding to that branch of code. This makes it easy and quick for developers to review functionality and designers to review the look and feel, all without having to manage local development servers.

One piece of good news is that you usually aren’t the first to access a branch. That’s because our automated tests actually rely on these cloud-hosted branches to test against.

How does this work technically?

Calypso.live is powered by a specialized Docker server named dserve, which was written to work for any Docker-based web application. It automatically manages thousands of versions of an app and makes each git SHA and each branch accessible by URL. An example request might go like this:

  1. Request is made for calypso.live?branch=my-branch
  2. Once dserve receives the request, it checks to see if my-branch is ready to be served
  3. Assuming a container is already running for my-branch, dserve proxies the request to the right container — very similar to how a load balancer sits in front of an array of servers and proxies the request.
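
A heavily simplified Node.js sketch of that routing logic; this is illustrative only, and lookupRunningContainer and startContainerFor stand in for dserve’s real container management:

const http = require( 'http' );
const httpProxy = require( 'http-proxy' );

const proxy = httpProxy.createProxyServer();

// Hypothetical stand-ins for dserve's real container bookkeeping.
const running = new Map(); // branch name -> { port }
function lookupRunningContainer( branch ) {
    return running.get( branch );
}
function startContainerFor( branch ) {
    // In reality: check out the branch, docker build, docker run, record the port.
    running.set( branch, { port: 8080 } );
}

http.createServer( ( req, res ) => {
    const url = new URL( req.url, 'http://calypso.live' );
    const branch = url.searchParams.get( 'branch' ) || 'master';

    const container = lookupRunningContainer( branch );
    if ( ! container ) {
        startContainerFor( branch );
        res.end( 'Building ' + branch + '; check back in a couple of minutes.' );
        return;
    }

    // Proxy the request to the container serving this branch,
    // much like a load balancer in front of an array of servers.
    proxy.web( req, res, { target: 'http://127.0.0.1:' + container.port } );
} ).listen( 3000 );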

Get involved!

Feel free to check out the dserve repository, our e2e tests repository, or wp-calypso, make a fork, and provide us with any feedback or suggestions. Pull requests are always welcome.

***

Jake Fried is a Software Engineer at Automattic, mainly working on Calypso.

How Canaries Help Us Merge Good Pull Requests

At WordPress.com we strive to provide a consistent and reliable user experience as we merge and release hundreds of code changes each week.

We run automated unit and component tests for our Calypso user interface on every commit against every pull request (PR).

We also have 32 automated end-to-end (e2e) test scenarios that, until recently, we would only automatically run across our platform after merging and deploying to production. While these e2e scenarios have found regressions fairly quickly after deploying (the 32 scenarios execute in parallel in just 10 minutes), they don’t prevent us from merging and releasing regressions to our customer experience.

Introducing our Canaries

Earlier this year we decided to identify three of our 32 automated end-to-end test scenarios that would act as our “canaries”: a minimal subset of automated tests to quickly tell us if our most important flows are broken. These tests execute after a pull request is merged and deployed to our staging environment, but before we deploy the changes to all our customers in production.

These canaries have been very successful in preventing us from deploying regressions to production. However, running them after merging to master (which automatically deploys code to staging) means we’d have to revert code changes if something was wrong. That wasn’t good enough.

Last month we took our canaries to the next level. Instead of just running canaries on merging to master, we now execute canaries against live pull requests and provide feedback to the pull request itself about the canary test status.

How does it work?

Our process is that if you’re a developer working on a pull request for Calypso and it’s ready to review, you add the “[Status] Needs Review” label to alert someone to review your code. Adding this label automatically triggers the e2e canary tests against your pull request:

The results are separate from the unit and component tests which already run against every pull request (on every push).

How does this technically work?

Our automated e2e tests are open-source, but they reside separately from our Calypso GitHub code repository. This is because the e2e scenarios represent the entire WordPress.com customer experience: they’re not just automated Calypso user interface tests. For example, our tests verify that our customers receive appropriate emails, which isn’t something the Calypso code base covers.

We “connect” our two projects using CircleCI builds and a custom “bridge” written in Node.js (which is also open-source). This bridge provides webhooks for GitHub pull requests to execute CircleCI builds using the CircleCI API. It reports the status of these builds using the GitHub status API. We do apply a little bit of cleverness: by matching branch names, we can make changes to our e2e tests that correspond to our Calypso changes. Our bridge runs on Automattic’s VIP Go platform.
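
In rough terms, the bridge glues two HTTP APIs together. Here is a sketch of the two calls, with placeholder org/repo names and environment variables rather than the bridge’s actual code:

const fetch = require( 'node-fetch' );

// Kick off a CircleCI build of the e2e tests for a given branch
// (CircleCI v1.1 "trigger a new build" endpoint).
function triggerCanaries( branch ) {
    return fetch(
        'https://circleci.com/api/v1.1/project/github/your-org/your-e2e-tests/tree/' +
            encodeURIComponent( branch ) + '?circle-token=' + process.env.CIRCLE_TOKEN,
        { method: 'POST' }
    );
}

// Report the build result back to the pull request via the GitHub status API.
function reportStatus( sha, state, buildUrl ) {
    return fetch( 'https://api.github.com/repos/your-org/your-calypso-fork/statuses/' + sha, {
        method: 'POST',
        headers: {
            Authorization: 'token ' + process.env.GITHUB_TOKEN,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify( {
            state, // 'pending', 'success', or 'failure'
            context: 'ci/e2e-canary',
            target_url: buildUrl,
        } ),
    } );
}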

A summary and what’s next?

Running our canaries on pull requests has been a great success. Developers love the confidence the canaries give them in knowing that our key end-to-end scenarios won’t regress when introducing changes rapidly.

We’d now like to expand the bridge’s scope to optionally run the full set of 32 end-to-end automated tests on pull requests that have a broader impact, changes like upgrading a dependency or refactoring a framework design pattern. This again will give our developers even greater confidence in the ability to merge code and provide a consistent and reliable experience to our customers.

Get involved!

Feel free to check out our e2e tests repository, or our bridge repository, make a fork, and provide us with any feedback or suggestions. Pull requests are always welcome.

***

Alister Scott is an Excellence Wrangler for Automattic and blogs regularly about software testing at his blog WatirMelon.

New WordPress unified API console

Since the WordPress 4.7 “Vaughan” release, each WordPress installation includes REST API endpoints to access and manipulate its content.  These endpoints will be the foundation for the next generation of WordPress websites and applications.
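
For example, on a typical self-hosted site with pretty permalinks enabled, the published posts are one request away (example.com is a placeholder):

curl https://example.com/wp-json/wp/v2/posts

The response is a JSON array of post objects that any application can consume.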

Today we’re releasing a brand new Open Source WordPress API console. You can use it to try these endpoints and explore the results.  The console works for any website on WordPress.com and also for any self-hosted WordPress installation.

Using the console with WordPress.com APIs

You can use this application today to make read and write requests to the WordPress.com API or the WordPress REST API for any website hosted on WordPress.com or using Jetpack.  Visit the new version of the application here:  https://developer.wordpress.com/docs/api/console/
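
As a quick example of the kind of request the console builds for you, a public site’s posts can be fetched from the WordPress.com API like this:

curl "https://public-api.wordpress.com/rest/v1.1/sites/en.blog.wordpress.com/posts/?number=1"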

Using the console with your self-hosted WordPress sites

To use the console with your self-hosted WordPress installation(s), you’ll need to download the application from GitHub, configure it, and run it on your local machine.  You’ll also need to install the WP REST API – OAuth 1.0a Server plugin on your WordPress site.  The Application Passwords plugin is another option, but if you use it, make sure that your site is running over HTTPS; otherwise this configuration is insecure.

Full installation and configuration instructions are on the GitHub repository.

Technical Details

The console is a React/Redux application based on create-react-app that persists its state to localStorage.
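
Persisting to localStorage in a Redux app is typically just a store subscription that serializes state on every change, plus a rehydrate on load. Here is a generic sketch of the pattern (not the console’s actual code; ./reducers is a hypothetical module):

import { createStore } from 'redux';
import rootReducer from './reducers';

const STORAGE_KEY = 'api-console-state';

function loadState() {
    try {
        const saved = window.localStorage.getItem( STORAGE_KEY );
        // Returning undefined lets the reducers supply their defaults.
        return saved ? JSON.parse( saved ) : undefined;
    } catch ( err ) {
        return undefined;
    }
}

const store = createStore( rootReducer, loadState() );

// Write the latest state back after every change.
store.subscribe( () => {
    window.localStorage.setItem( STORAGE_KEY, JSON.stringify( store.getState() ) );
} );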

What’s next?

We have a few more features planned that we think you’ll like.

  • We can use the new console application to allow you to easily generate and save OAuth2 tokens for your WordPress.com API Applications.  Compared to implementing the OAuth2 flow yourself, this will be a much easier way to obtain an API token for testing your applications.
  • We also plan to ship the console as a regular WordPress plugin, replacing the existing older plugin.
  • We can allow you to add/edit self-hosted WordPress websites on our hosted version of the console and persist them to localStorage.  This way you’ll be able to query your WordPress sites without having to install the console yourself.

Contribute

As usual, the new console is open source, and we hope this will be a tool that will benefit the entire WordPress community.

If you find a bug, think of a new feature, or want to make some modifications to the API console, feel free to look through existing issues and open a new issue or a PR on the GitHub repository.  We welcome all contributions.