The end of IPv6 will not be our fault, at least not directly.

Indirectly, and within the next 25 years or so, bots will be so ubiquitous on the modern web that bots will have bots with bots, and they will autonomously spin up both physical & virtual servers to scale their requests in a way that will eventually saturate even the IPv6 address space.

340,282,366,920,938,463,463,374,607,431,768,211,456

This is the number of available addresses in the IPv6 address space (2 to the 128th power). If I'm right, that's gonna take a lot of bots.
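If you want to sanity-check that figure yourself, a throwaway one-liner will do (this one leans on PHP's bcmath extension, purely for illustration, since native integers overflow long before 2^128):

```php
<?php
// Sanity check: the total IPv6 address space is 2^128.
// Requires the bcmath extension.
echo bcpow( '2', '128' ) . PHP_EOL;
// 340282366920938463463374607431768211456
```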

But consider that services like Akismet claim to fight spam at a rate of more than 7.5 million requests per hour, and Gravatar claims to serve more than 8.6 billion images per day. And that's me cherry-picking only 2 services that help power 25% of the web, amongst a sea of tens of thousands in the remaining 75%.

As services like these become increasingly intelligent, their computational needs will grow exponentially, and the number of independent services necessary to keep up with those needs will follow suit.

Consider an application like Slack: opening 1 application opens close to 20 individual sockets, each acting like a neurological meld between the client application on my Mac and the many servers they are no doubt wrangling to keep up with the growing number of Slack networks I'm a part of.

When you start to look at the raw numbers, the insane amount of traffic, and the ludicrous number of connections required to make the world wide web of computers interact with each other, IPv6 suddenly starts to look less big than it did originally.

If we round up Gravatar's numbers to 10 billion images per day, it will only take 100 days to hit 1 trillion (1,000,000,000,000) images. I have no idea how many physical servers (or public IP addresses) it takes to do this, but I bet it's at least a few. If a few more services the size of (and just as efficient as) Gravatar are invented, we start to double up pretty quickly.

And I have a hunch that no service will be as efficient as Gravatar is at doing what it does; anything at this scale will only grow in complexity and in the resources it requires.

It may not happen in my lifetime, but make note that if you squint far enough into the not-too-distant future, even IPv6 won’t save the internet for very long.

If you’re running WordPress Multisite in a highly scalable environment with HyperDB or LudicrousDB, you may have seen global__r errors in your logs.

Can't select global__r... yada yada yada

The “global” part of “global__r” comes from these database drop-ins defaulting to a “global” dataset when nothing is matched or explicitly passed in. The “__r” suffix means the query was routed to databases intended for reading, the ones designated as slaves, versus the “__w” suffix for master databases intended for writing.
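For context, here's roughly what the relevant part of a db-config.php registration can look like (the hostnames and the single replica below are placeholders; the 'dataset' key and the read/write flags are what the drop-in later combines into internal connection names like global__w and global__r):

```php
<?php
// db-config.php (HyperDB / LudicrousDB) -- hostnames below are placeholders.

// Master: accepts writes ('read' => 1 lets it also serve reads).
$wpdb->add_database( array(
	'host'     => 'db-master.example.com',
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 1,
	'read'     => 1,
	'dataset'  => 'global',
) );

// Replica (slave): read-only.
$wpdb->add_database( array(
	'host'     => 'db-replica.example.com',
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 0,
	'read'     => 1,
	'dataset'  => 'global',
) );
```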

So if a SELECT query is failing, why would that be?

The first and most logical reason is that the database is down. Check that it isn't by communicating with the database directly via whatever you are most comfortable with (command line, Sequel Pro, etc.).
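If you'd rather test from code, a throwaway mysqli check works too (the host and credentials below are placeholders):

```php
<?php
// Throwaway connectivity check: can we authenticate against the database at all?
mysqli_report( MYSQLI_REPORT_OFF ); // return false on failure instead of throwing

$mysqli = mysqli_connect( 'db.example.com', 'db_user', 'db_password' );

if ( ! $mysqli ) {
	echo 'Connection failed: ' . mysqli_connect_error() . PHP_EOL;
} else {
	echo 'Connected to MySQL ' . mysqli_get_server_info( $mysqli ) . PHP_EOL;
	mysqli_close( $mysqli );
}
```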

The second most logical reason is that your web server (powering the PHP part of your application) is unable to reach your database server. Check that fail2ban or some other firewall utility hasn't erroneously blocked things, then manually ping and connect from the web server to the database server to ensure you receive a good response.
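To rule out a firewall or routing problem specifically, a bare TCP check from the web server against the MySQL port is usually enough (hostname and port below are placeholders):

```php
<?php
// Bare TCP check from the web server to the database server. If this fails
// while the database itself is healthy, suspect fail2ban, iptables, security
// groups, or DNS rather than MySQL.
$host    = 'db.example.com'; // placeholder
$port    = 3306;             // default MySQL port
$timeout = 3;                // seconds

$socket = @fsockopen( $host, $port, $errno, $errstr, $timeout );

if ( ! $socket ) {
	echo "Unreachable: [{$errno}] {$errstr}" . PHP_EOL;
} else {
	echo "Reachable: {$host}:{$port}" . PHP_EOL;
	fclose( $socket );
}
```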

The final and least obvious reason is harder to track down, and I think it might be the source of your error log entries if everything else checks out and you've made it to this blog post after scouring the web for answers.

There are two queries, run inside populate_options() and upgrade_network() respectively, that try to delete all of the expired transients for a specific site and a specific network. These two queries are unlikely to be caught by the regex that HyperDB and LudicrousDB use to map a query to a table name and route it to the appropriate server. They look something like this:

DELETE a, b FROM $wpdb->options a, $wpdb->options b

and

DELETE a, b FROM $wpdb->sitemeta a, $wpdb->sitemeta b

If you search all of WordPress, these are the only two places raw queries like this are done, and they're only run under specific conditions where WordPress is cleaning up after itself during a database upgrade. This means the conditions are perfect for a surprise entry in your error logs once in a blue moon when you aren't hand-holding a huge WordPress multisite/multi-network database upgrade.
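To make the regex problem concrete, here's a simplified sketch of the kind of table-matching pattern these drop-ins rely on (an approximation for illustration, not necessarily their actual get_table_from_query() routine). Because FROM is optional in a single-table DELETE, the first identifier after DELETE is assumed to be the table name, so the multi-table form hands back the alias instead:

```php
<?php
// Simplified sketch of regex-based table extraction, similar in spirit to
// what HyperDB / LudicrousDB do to map a query to a dataset.
function sketch_get_table_from_query( $q ) {
	// In a DELETE, the FROM keyword is optional, so the first identifier
	// after DELETE (and its modifiers) is treated as the table name.
	if ( preg_match( '/^\s*DELETE(?:\s+LOW_PRIORITY|\s+QUICK|\s+IGNORE)*(?:\s+FROM)?\s+`?([\w-]+)`?/is', $q, $maybe ) ) {
		return $maybe[1];
	}
	return false;
}

// A single-table delete resolves to the real table name...
echo sketch_get_table_from_query( 'DELETE FROM wp_options WHERE option_id = 1' ) . PHP_EOL;   // wp_options

// ...but the multi-table transient cleanup captures the alias instead, which
// matches no registered table or dataset and falls through to "global".
echo sketch_get_table_from_query( 'DELETE a, b FROM wp_options a, wp_options b WHERE 1 = 1' ) . PHP_EOL; // a
```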

How do we prevent these, and what's the repercussion? The solution is probably a regex fix upstream to these plugins and/or WordPress's WPDB base class to properly match these queries. The repercussion is transients that don't get deleted, which isn't usually a huge problem unless it causes the database upgrade to continuously run; if that were the case, you'd have lots of entries in your error logs.
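If you'd rather not wait for an upstream patch, both drop-ins support dataset callbacks in db-config.php, so one possible stopgap is to catch these two statements yourself. This is a sketch, not an official fix; the 'global' return value is an assumption standing in for whichever dataset actually holds your options and sitemeta tables:

```php
<?php
// db-config.php -- a possible stopgap, not an official fix. A dataset
// callback receives the raw query and the $wpdb instance; a returned
// dataset name overrides the regex-based table matching.
function my_transient_cleanup_dataset( $query, $wpdb ) {
	// Match the multi-table transient cleanup DELETEs that run during
	// populate_options() and upgrade_network().
	if ( preg_match( '/^\s*DELETE\s+a\s*,\s*b\s+FROM\s+/i', $query ) ) {
		return 'global'; // placeholder: the dataset your options/sitemeta tables live in
	}
}
$wpdb->add_callback( 'my_transient_cleanup_dataset' );
```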

I have a hunch this issue is exacerbated by object caching plugins that store transients in memory and not in the database. In these types of installations, these raw queries are trying to delete data that never would have been there in the first place.
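If you want to confirm which storage path your install is on, core's wp_using_ext_object_cache() tells you whether a persistent object cache drop-in is active:

```php
<?php
// With a persistent object cache drop-in (Redis, Memcached, etc.) active,
// set_site_transient() writes to the cache backend and no _site_transient_*
// rows ever land in $wpdb->sitemeta, so the cleanup DELETEs above find nothing.
if ( wp_using_ext_object_cache() ) {
	error_log( 'Transients are stored in the object cache, not the database.' );
} else {
	error_log( 'Transients are stored in the options/sitemeta tables.' );
}
```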

I’ve also been going back and forth over this code for a few weeks now, and while there are a lot of moving parts, I haven't identified any data corruption or loss issues, and these queries are properly escaped and prepared, so it's unlikely HyperDB or LudicrousDB would introduce anything that might be harmful to existing data.

If you have these issues, hopefully this helps you isolate the root cause to identify whether this is a configuration issue, a caching issue, a communication issue, or an issue with the database itself. If you have more info, I’d love to hear about it in the comments below. <3