A Call to Arms: Supporting Matrix!

Hi folks,

TL;DR: if you like Matrix (and especially if you’re building stuff on it), please support us via Patreon or Liberapay to keep the core team able to work on it full-time, otherwise the project is going to be seriously impacted.  And if you’re a company who is invested in Matrix (e.g. itching for Dendrite), please get in touch ASAP if you’d like to sponsor core development work from the team.  And if you’re a philanthropic billionaire who believes in our ideals of decentralisation, encryption, and open communication as a basic human right – we’d love to hear from you too O:-)

I was expecting this blog post to be the Matrix Summer Special, focusing entirely on the incredible progress and updates we’ve made in the last few months in Matrix.  However, instead I’m going to talk about something different and literally critical to Matrix’s success.

As many people know, Matrix.org development has historically been exclusively and very generously sponsored by a large multinational telecoms infrastructure company for whom most of the core team once built telco messaging apps.  However, despite the project progressing better than ever (more on that later), we have just had our funding dramatically cut by >60%.

We seem to be suffering from a darkly amusing paradox, as the rationale from our corporate overlords is essentially: “Wow! Matrix is doing great and growing well – and you seem to have all sorts of exciting people and companies using and building on it.  But we’ve been footing the whole development bill since the outset in May 2014, and this simply doesn’t feel fair.  We’re happy to keep funding though – but only if others do too!”.  In other words, in some ways we are a victim of our own success…

So we now find ourselves in the situation that despite the project looking better than ever and having a tonne of amazing stuff in the pipeline, we are suddenly missing the funding to keep the core team working on it.  And the team is quite sizeable – reflecting the ambition and scope of Matrix: right now we have effectively 11 people working specifically on Matrix itself: 1 on Synapse, 1 on Dendrite, 1 on e2e crypto, 2 on matrix-react-sdk (which powers Riot/Web), 2 on matrix-ios-kit / matrix-ios-sdk, 2 on matrix-android-sdk, 1 on bridges, and me (Matthew) managing the overall project.  (This doesn’t count folks who overlap with the team but are working specifically on Riot.)

Over the last few years we’ve had countless people ask if they can financially support Matrix. We haven’t been able to accept it for various reasons, but now is the time for us to step towards a more independent setup, and avoid a repeat of the situation we’re currently facing by opening up to external support.

So we need help from the community to keep going!  Please head over to Patreon or Liberapay and put some money in the meter (or send some bitcoin to 1LxowEgsquZ3UPZ68wHf8v2MDZw82dVmAE). In return, you’ll get to keep Matrix evolving at a decent rate, be a member of the upcoming +supporters:matrix.org group (complete with flair badges!), and other benefits like access to #matrix-supporters:matrix.org – a new dedicated room for prioritised support, discounted goodies from Riot once paid services arrive, access to a weekly supporters-only status podcast(!), and of course receive our eternal thanks. :)

Meanwhile, if you’re a company who depends on Matrix: please get in touch. We have the option for you to sponsor core Matrix development (e.g. Dendrite) or for us to provide you with more targeted support or feature development.  We’re already talking to several organisations who want to accelerate Dendrite specifically – and the more support we have there the faster we can go.

We’d also like to thank UpCloud for sponsoring hosting for the Matrix.org synapse instances – UpCloud has been coping impressively with the massive I/O and CPU/RAM requirements we have, and we recommend them unreservedly for folks looking to run their own homeservers.

Finally, one of the longer term plans to help fund Matrix is to get sponsorship from Riot, once Riot starts offering paid services. So, if you’re an investor who’s interested in the for-profit sides of Riot (paid integrations and paid Matrix hosting) then please get in touch with the Riot team ASAP!

Moving forward we are confident that we can secure funding through sponsorship and Riot paid services, but in truth this decision caught us by surprise, so we need help both in the long term and right now!

And whatever the funding situation, we’re of course always looking for contributions of code, bug reports, or just help spreading the word about the project too! :)

Status Update

(or scroll to next section to see why this is bigger than “just” decentralised encrypted communication)

Despite the funding issue, the project really is going very well. Our vital stats (as seen through the lens of the matrix.org synapse) are looking like this:


And meanwhile, looking back at the last big update (Holiday Special 2016), we can compare our progress with our goals for 2017 thus far:

  • Getting E2E Encryption out of beta ASAP.

This has progressed massively – we haven’t really yelled about it yet, but the latest https://riot.im/develop/ now finally implements the ability to share message keys between clients, letting them decrypt older history and fix “unable to decrypt” errors (mobile coming soon).  Meanwhile various root causes of “unable to decrypt” errors have been gradually eliminated; I can’t actually remember the last time I saw one! Once key-sharing and the improved device verification UX are fully tested and tuned we should be able to declare E2E out of beta.
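For the curious, key sharing boils down to devices sending each other m.room_key_request to-device messages asking for the megolm session needed to decrypt a given chunk of history. Here’s a minimal sketch of what that looks like on the wire using python-requests; the homeserver URL, access token, user ID and session details are all placeholders, and in practice the client SDKs handle this for you:

```python
import uuid
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver
ACCESS_TOKEN = "MDAx..."                   # placeholder access token

# Ask our *other* devices to share the megolm session we're missing, so we
# can decrypt older history ("unable to decrypt" messages).
content = {
    "action": "request",
    "requesting_device_id": "MYDEVICE",        # the device doing the asking
    "request_id": str(uuid.uuid4()),
    "body": {
        "algorithm": "m.megolm.v1.aes-sha2",
        "room_id": "!someroom:example.com",
        "sender_key": "<curve25519 key of the sending device>",
        "session_id": "<id of the megolm session we can't decrypt>",
    },
}

txn_id = str(uuid.uuid4())
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/r0/sendToDevice/m.room_key_request/{txn_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"messages": {"@me:example.com": {"*": content}}},  # "*" = all my devices
)
resp.raise_for_status()
```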

 

  • Ensuring we can scale beyond Synapse – i.e. implement Dendrite

Likewise, Dendrite is on track: we’ve implemented all the Hard Stuff which forms the skeleton of Dendrite (core federation, message signing, /sync, message sending, media repository etc) – which takes us to over 50% of Phase 1. After Phase 1, we will have an initial usable release covering all the core functionality.  Synapse’s performance has also improved enormously this year.

 

  • Getting as many bots and bridges into Matrix as possible, and doing everything we can to support them, host them and help them be as high quality as possible – making the public federated Matrix network as useful and diverse as possible.

Bridges and bots continue – from the core team we have a ‘puppeting’ Telegram bridge (matrix-appservice-tg), and from the wider community we have Discord, Skype, Signal, new Rocket.Chat and more.  Getting them polished and live is certainly an area where we need more manpower though.

  • Supporting Riot’s leap to the mainstream, ensuring Matrix has at least one killer app.

Riot has been sprouting new releases every few weeks, with a huge emphasis on improving UX:

  • an entirely new streamlined sign-up process
  • the new concept of home pages
  • a user directory search that actually works
  • internationalised to 27 languages
  • compact layout
  • loads of desktop improvements
  • piwik analytics support; etc.

There is still a lot of UX work to be done, but it’s converging fast on being a great entry point into the Matrix ecosystem, driving its growth across different groups and communities.

Meanwhile, a massive update to the iOS & Android apps landed just yesterday, switching to an entirely new UI layout that separates People from Rooms, adding synchronised Read Markers, and more!

  • Adding the final major missing features:
    • Customisable User Profiles (this is almost done, actually)

This is still hovering at ‘almost done’, and will be needed for parts of the Groups implementation (see below).

  • Groups (i.e. ability to define groups of users, and perform invites, powerlevels, etc. per-group as well as per-user)

Groups are now in testing in Synapse too!  These will probably be the single biggest change to Matrix that we’ve seen since E2E encryption landed: they change the dynamic of the whole network, given users can explicitly declare allegiance to different groups, which in turn have their own home pages and directories etc.  Groups let users form communities, and declare their participation in those communities (if desired), and also let rooms be grouped together.  One of our single biggest requests has been “subrooms”, and we’re incredibly excited to see how well Groups solve this.

  • Threading

Sadly no progress on Threading so far this year.

  • Editable events (and Reactions)

We’re hoping to start looking at this (at last!) once Groups are done.

  • Maturing and polishing the spec (we are way overdue a new release)

You’ll have noticed that in the “how many people work on Matrix?” stats above, we didn’t mention anyone working on the Spec.  That’s because right now there isn’t anyone explicitly maintaining it, unfortunately; updates are done best-effort when everyone’s primary responsibilities allow it.  That said, there’s quite a lot of good stuff currently unreleased on HEAD. This is something which is obviously critical to fix once we have sustainable funding sorted again.  We can only apologise to folks like the Ruma developers who have suffered from the spec lag. :(

  • Improving VoIP – especially conferencing.

VoIP is improving lots on iOS, thanks to Denis Morozov’s GSoC project, and meanwhile we have all new conferencing powered by Jitsi on the horizon in the next few weeks too.

  • Reputation/Moderation management (i.e. spam/abuse filtering).

Lots of thinking about this (see below), but no development yet.

  • Much-needed SDK performance work on matrix-{react,ios,android}-sdk.

About 40% of the desired performance work has happened here (although not all has gone live yet).

  • …and a few other things which would be premature to mention right now. :D

All will be revealed in the next week or two – but suffice it to say that Integrations are going to be getting a Lot More Useful™. :)

Reflections

There are very very few people actually working professionally on trying to build general-purpose open communication networks and protocols.  There’s us, some XMPP, IRCv3 and GNU Social/Mastodon folks, GNU Ring, Tox, Briar, Secure Scuttlebutt, IPFS, Status.im, Ricochet… and that’s literally all the major projects I can think of (sorry if I missed you!).  There are probably only 50 developers in total working in this domain as their day job.

Meanwhile, there are literally hundreds of thousands of folks trudging away building more and more near-indistinguishable proprietary closed communication systems – trapping users inside ever more silos and fragmenting the basic ability to communicate on the ‘net.  It’s like a world where the open web was pushed into a tiny underground resistance, and everyone else was trapped in the walled gardens of AOL and Compuserve (or more contemporarily: Facebook, Twitter, WhatsApp etc).

In other words: the whole world of decentralised communication desperately needs your support.  This is a clear case of user choice and freedom: to give users the ability to pick who they trust with their data and metadata, without being forced into unilaterally trusting the Silicon Valley megacorps.  And this, dear Reader, is your chance to fix the world for the greater good. Seriously, the Matrix team is one of a handful in the world in a position to continue to push things in the right direction and avoid us falling into a permanent dystopia where communication is even more closed and proprietary than the Public Telephone Network!

Finally, there’s an even bigger issue at stake here than open communication.  As an open network, people can literally publish whatever content they like into Matrix – same as the web or the internet itself.  As a result, there’s scope for spam; abusive/malicious content; propaganda; and generally the whole spectrum of the best and worst of humanity.  Now, if we were a centralised system like Facebook, we might hire thousands of content moderators to frantically impose a rulebook on ‘acceptable’ content.  Or we might build invisible filter-bubbles for our users based on their social graph, cocooning them from scary unfamiliar content outside their social circles and reinforcing their preconceptions (whilst the resulting self-affirmation keeps them coming back, viewing more and more ads).

But we’re decentralised, and we have no absolute moral arbiter, and nor should we – on an open network it should be up to users and users alone to define and manage their own worldview and alignment.  Plus we are not fiscally obligated to keep users coming back to view more ads no matter what.  Instead, we are forced to confront the fundamental problem: building tools which empower users to curate and visualise their own content filters; letting them filter out the stuff they’re not interested in or find repellant… while still helping them be aware of their own viewpoint and the shape of the world beyond it.  We haven’t really started building this yet, but in the long term our feeling is that these tools will literally be vital for the survival of the human race (e.g. exposing anti-climate-change propaganda for what it is or helping users opt out of World War 3) – let alone the success of decentralisation.  A world where users blindly consume propaganda is doomed, and it’s a fascinating situation that the same tools which will allow Matrix users to tune out the rooms, users and conversations they’re not interested in could be directly applied to the bigger global problem.

So: Matrix needs you. Please become a supporter on Patreon or Liberapay, and help us save the world :)

– Matthew, Amandine & the whole Matrix.org team.

 

Synapse 0.22.0 released!

Hi Synapsefans,

Synapse 0.22.0 has just been released! This release lands a few interesting features:

  • The new user directory API, which lets Matrix clients provide a much more intuitive and effective user search (see the example below the list) by exposing a list of:
    • Everybody your user shares a room with, and
    • Everybody in a public room your homeserver knows about
  • New support for server admins, including a Shutdown Room API (to remove a room from a local server) and a Media Quarantine API (to render a media item inaccessible without actually deleting it)
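As an example of the user directory API, here’s a minimal sketch of a search request using python-requests (the homeserver URL and access token are placeholders; the endpoint is the standard client-server one):

```python
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder
ACCESS_TOKEN = "MDAx..."                   # placeholder

# Search the user directory: results are drawn from users you share rooms
# with, plus members of public rooms known to your homeserver.
resp = requests.post(
    f"{HOMESERVER}/_matrix/client/r0/user_directory/search",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"search_term": "alice", "limit": 10},
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["user_id"], result.get("display_name"))
```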

As always there are lots of bug fixes and performance improvements, including increasing the default cache factor size from 0.1 to 0.5 (which should improve performance for those running their own homeservers).

You can get Synapse 0.22.0 from https://github.com/matrix-org/synapse or https://github.com/matrix-org/synapse/releases/tag/v0.22.0 as normal.

Changes in synapse v0.22.0 (2017-07-06)

No changes since v0.22.0-rc2

Changes in synapse v0.22.0-rc2 (2017-07-04)

Changes:

  • Improve performance of storing user IPs (PR #2307, #2308)
  • Slightly improve performance of verifying access tokens (PR #2320)
  • Slightly improve performance of event persistence (PR #2321)
  • Increase default cache factor size from 0.1 to 0.5 (PR #2330)

Bug fixes:

  • Fix bug with storing registration sessions that caused frequent CPU churn
    (PR #2319)

Changes in synapse v0.22.0-rc1 (2017-06-26)

Features:

  • Add a user directory API (PR #2252, and many more)
  • Add shutdown room API to remove room from local server (PR #2291)
  • Add API to quarantine media (PR #2292)
  • Add new config option to not send event contents to push servers (PR #2301)
    Thanks to @cjdelisle!

Bug fixes:

  • Fix users not getting notifications when AS listened to that user_id (PR
    #2216) Thanks to @slipeer!
  • Fix users without push set up not getting notifications after joining rooms
    (PR #2236)
  • Fix preview url API to trim long descriptions (PR #2243)
  • Fix bug where we used cached but unpersisted state group as prev group,
    resulting in broken state on restart (PR #2263)
  • Fix removing of pushers when using workers (PR #2267)
  • Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!

 

Synapse 0.21.1 released!

Hi folks – we forgot to mention that Synapse 0.21.1 was released last week.  This contains an important fix to the report-stats option, which was otherwise failing to report any usage stats for folks who had the option turned on.

This is a good opportunity to note that the report-stats option is really really important for the ongoing health of the Matrix ecosystem: when raising funding to continue working on Matrix we have to be able to demonstrate how the ecosystem is growing and why it’s a good idea to support us to work on it. In practice, the data we collect is: hostname, synapse version & uptime, total_users, total_nonbridged_users, total_room_count, daily_active_users, daily_active_rooms, daily_messages and daily_sent_messages.

Folks: if you have turned off report-stats for whatever reason, please consider upgrading to 0.21.1 and turning it back on.

In return, the plan is that we’ll start to publish an official Grafana dashboard of the anonymised aggregate stats, probably embedded into the frontpage of Matrix.org, so that you and everyone else can have a view of the state of the Matrix universe. And critically, it’ll really help us continue to justify $ to spend on growing the project!

You can get Synapse 0.21.1 from https://github.com/matrix-org/synapse or https://github.com/matrix-org/synapse/releases/tag/v0.21.1 as normal.

Use you a Matrix for Great Good!

Hi all,

We’re currently looking into different ways that Matrix is being used in the wild, and an important question that has come up is whether anyone is using Matrix yet for decentralised communication in parts of the world where centralised communication poses a problem – due to bad connectivity or privacy concerns.  Similarly we’d love to hear from anyone who is seriously trialling Matrix’s end-to-end encryption for use in geographies where privacy is a particularly big issue for human rights.

So, if anyone has stories (anecdotal or otherwise) about how they’re using or planning to use Matrix to make the world a better place, in a location where that’s particularly critical, please can you let us know as soon as possible (@matthew:matrix.org or @Amandine:matrix.org).  This is fairly urgent because we’re currently looking at various options for how to prioritise effort and funding for Matrix, and if there are people out there who are depending on Matrix in this manner it would significantly help us support them!

thanks,

Matthew, Amandine & the team.

Synapse 0.21.0 is released!

Hi all,

Synapse 0.21.0 was released a moment ago. This release lands a number of performance improvements and stability fixes, plus a couple of small features.

For those of you upgrading, https://github.com/matrix-org/synapse has the details as usual.  Full changelog follows:

Changes in synapse v0.21.0 (2017-05-17)

Features:

  • Add per user rate-limiting overrides (PR #2208)
  • Add config option to limit maximum number of events requested by /sync and /messages (PR #2221) Thanks to @psaavedra!

Changes:

  • Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228, #2229)
  • Update username availability checker API (PR #2209, #2213)
  • When purging, don’t de-delta state groups we’re about to delete (PR #2214)
  • Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
  • Add an index to event_search to speed up purge history API (PR #2218)

Bug fixes:

  • Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)

Changes in synapse v0.21.0-rc2 (2017-05-08)

Changes:

  • Always mark remotes as up if we receive a signed request from them (PR #2190)

Bug fixes:

  • Fix bug where users got pushed for rooms they had muted (PR #2200)

Changes in synapse v0.21.0-rc1 (2017-05-08)

Features:

  • Add username availability checker API (PR #2183)
  • Add read marker API (PR #2120)

Bug fixes:

  • Fix nuke-room script to work with current schema (PR #1927) Thanks @zuckschwerdt!
  • Fix db port script to not assume postgres tables are in the public schema (PR #2024) Thanks @jerrykan!
  • Fix getting latest device IP for user with no devices (PR #2118)
  • Fix rejection of invites to unreachable servers (PR #2145)
  • Fix code for reporting old verify keys in synapse (PR #2156)
  • Fix invite state to always include all events (PR #2163)
  • Fix bug where synapse would always fetch state for any missing event (PR #2170)
  • Fix a leak with timed out HTTP connections (PR #2180)
  • Fix bug where we didn’t time out HTTP requests to ASes (PR #2192)

Docs:

  • Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
  • Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
  • web_client_location documentation fix (PR #2131) Thanks @matthewjwolff!
  • Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
  • Clarify setting up metrics (PR #2149) Thanks @encks!

Update on Matrix.org homeserver reliability

Hi folks,

We’ve had a few outages over the last week on the Matrix.org homeserver which have caused problems for folks using bridges or accounts hosted on matrix.org itself – we’d like to apologise to everyone who’s been caught in the crossfire.  In the interests of giving everyone visibility on what’s going on and what we’re doing about it (and so folks can learn from our mistakes! :), here’s a quick writeup (all times are UTC):

  • 2017-05-04 21:05: The datacenter where we host matrix.org performs an emergency unscheduled upgrade of the VM host where the main matrix.org HS & DB master lives.  This means a live-migration of the VM onto another host, which freezes the (huge) VM for 9 minutes, during which service is (obviously) down.  Monitoring fires; we start investigating and try to get in on the console, but by the time we’re considering failing over to the hot-spare, the box has come back and recovered fine other than a load spike as all the traffic catches up.  The clock however is off by 9 minutes due to its world having paused.
  • 2017-05-04 22:30: We step NTP on the host to fix the clock (maximum clock skew on ISC ntpd is 500ppm, meaning it would take weeks to reconverge naturally, during which time we’re issuing messages with incorrect timestamps).
  • 2017-05-05 01:25: Network connectivity breaks between the matrix.org homeserver and the DC where all of our bridges/bots are hosted.
  • 2017-05-05 01:40: Monitoring alerts fire for bridge traffic activity and are picked up.  After trying to restart the VPN between the DCs a few times, it turns out that the IP routes needed for the VPN have mysteriously disappeared.
  • 2017-05-05 02:23: Routes are manually re-added, the VPN recovers and traffic starts flowing again.  It turns out that the routes are meant to be maintained by a post-up script in /etc/network/interfaces, which assumes that /sbin/ip is on the path.  Historically this hasn’t been a problem as the DHCP lease on the host has never expired (only being renewed every 6 hours) – but the time disruption caused by the live-migration earlier means that on this renewal cycle the lease actually expires, and the routes are lost and not re-added.  Basic bridging traffic checks are done (e.g. Freenode->Matrix).
  • 2017-05-05 08:30: Turns out that whilst Freenode->Matrix traffic was working, Matrix->Freenode was wedged due to a missing HTTP timeout in the AS logic on Synapse.  Synapse is restarted and the bug fixed.
  • …the week goes by…
  • 2017-05-11 18:00: (Unconnected to the rest of this outage, an IRC DDoS on GIMPnet causes intermittent load problems and delayed messages on matrix.org; we turn off the bridge for a few hours until it subsides.)
  • 2017-05-12 02:50: The postgres partition on the matrix.org DB master fills up and postgres halts.  Monitoring alerts fire (once, via phone), but the three folks on call manage to sleep through their phones ringing.
  • 2017-05-12 04:45: Folks get woken up and notice the outage; clear up diskspace; restart postgres. Meanwhile, synapse appears to recover, other than mysteriously refusing to send messages from local users.  Investigation commences in the guts of the database to try to see what corruption has happened.
  • 2017-05-12 06:00: We realise that nobody has actually restarted synapse since the DB outage began, and the failure is probably a known issue where worker replication can fail and cause the master synapse process to fail to process writes.  Synapse is restarted; everything recovers (including bridges).
  • 2017-05-12 06:20: Investigation into the cause of the diskfill reveals it to be due to postgres replication logs (WALs) stacking up on the DB master, due to replication having broken to a DB slave during the previous networking outage.  Monitoring alerts triggered but weren’t escalated due to a problem in PagerDuty.

Lessons learned:

  • Test your networking scripts and always check your box self-recovers after a restart (let alone a DHCP renewal).
  • Don’t use DHCP in production datacenters unless you really have no choice; it just adds potential ways for everything to break.
  • We need better end-to-end monitoring for bridged traffic.
  • We need to ensure HS<->Bridge traffic is more reliable (improved by fixing timeout logic in synapse).
  • We need better monitoring and alerting of DB replication traffic.
  • We need to escalate PagerDuty phone alerts more aggressively (done).
  • We need better alerting for disk fill thresholds (especially “time until fill”, remembering to take into account the emergency headroom reserved by the filesystem for the superuser).
  • We should probably have scripts to rapidly (or even automatedly) switch between synapse master & hot-spare, and to promote DB slaves in the event of a master failure.

Hopefully this is the last we’ve seen of this root cause; we’ll be working through the todo list above.  Many apologies again for the instability – however please do remember that you can (and should!) run your own homeserver & bridges to stay immune to whatever operational dramas we have with the matrix.org instance!

Synapse 0.20.0 is released!

Hi folks,

Synapse 0.20.0 was released a few hours ago – this is a major new release with loads of stability and performance fixes and some new features too. The main headlines are:

  • Support for using phone numbers as 3rd party identifiers as well as email addresses!  This is huge: letting you discover other users on Matrix based on whether they’ve linked their phone number to their matrix account, and letting you log in using your phone number as your identifier if you so desire (see the sketch after this list).  Users of systems like WhatsApp should find this both familiar and useful ;)
  • Fixes some very nasty failure modes where the state of a room could be reset if a homeserver received an event it couldn’t verify.  Folks who have suffered rooms suddenly losing their name/icon/topic should particularly upgrade – this won’t fix the rooms retrospectively (your server will need to rejoin the room), but it should fix the problem going forwards.
  • Improves the retry schedule over federation significantly – previously there were scenarios where synapse could retry aggressively against servers which were offline.  This fixes that.
  • Significant performance improvements to /publicRooms, /sync, and other endpoints.
  • Lots of juicy bug fixes.
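As a rough sketch of what the phone-number login mentioned above looks like at the API level (the homeserver, number and password are placeholders, and the exact request shape has varied slightly across spec versions):

```python
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder

# Password login using a phone number (msisdn) as the identifier rather than
# a Matrix user ID or email address.  The number is in international format
# without the leading '+'.
resp = requests.post(
    f"{HOMESERVER}/_matrix/client/r0/login",
    json={
        "type": "m.login.password",
        "identifier": {
            "type": "m.id.thirdparty",
            "medium": "msisdn",
            "address": "447700900123",     # placeholder test number
        },
        "password": "correct horse battery staple",
    },
)
resp.raise_for_status()
print(resp.json()["user_id"])
```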

We highly recommend upgrading (or installing!) asap – https://github.com/matrix-org/synapse has the details as usual.  Full changelog follows:

Changes in synapse v0.20.0 (2017-04-11)

Bug fixes:

  • Fix joining rooms over federation where not all servers in the room saw the
    new server had joined (PR #2094)

Changes in synapse v0.20.0-rc1 (2017-03-30)

Features:

  • Add delete_devices API (PR #1993)
  • Add phone number registration/login support (PR #1994, #2055)

Changes:

  • Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
  • Reread log config on SIGHUP (PR #1982)
  • Speed up public room list (PR #1989)
  • Add helpful texts to logger config options (PR #1990)
  • Minor /sync performance improvements. (PR #2002, #2013, #2022)
  • Add some debug to help diagnose weird federation issue (PR #2035)
  • Correctly limit retries for all federation requests (PR #2050, #2061)
  • Don’t lock table when persisting new one time keys (PR #2053)
  • Reduce some CPU work on DB threads (PR #2054)
  • Cache hosts in room (PR #2060)
  • Batch sending of device list pokes (PR #2063)
  • Speed up persist event path in certain edge cases (PR #2070)

Bug fixes:

  • Fix bug where current_state_events renamed to current_state_ids (PR #1849)
  • Fix routing loop when fetching remote media (PR #1992)
  • Fix current_state_events table to not lie (PR #1996)
  • Fix CAS login to handle PartialDownloadError (PR #1997)
  • Fix assertion to stop transaction queue getting wedged (PR #2010)
  • Fix presence to fallback to last_active_ts if it beats the last sync time.
    Thanks @Half-Shot! (PR #2014)
  • Fix bug when federation received a PDU while a room join is in progress (PR
    #2016)
  • Fix resetting state on rejected events (PR #2025)
  • Fix installation issues in readme. Thanks @ricco386 (PR #2037)
  • Fix caching of remote servers’ signature keys (PR #2042)
  • Fix some leaking log context (PR #2048, #2049, #2057, #2058)
  • Fix rejection of invites not reaching sync (PR #2056)

Opening up cyberspace with Matrix and WebVR!

TL;DR: here’s the demo!

Hi everyone,

Today is a special day, the sort of day where you take a big step towards an ultimate dream. Starting Matrix and seeing it gaining momentum is already huge for us, a once in a lifetime opportunity. But one of the crazier things which drove us to create Matrix is the dream of creating cyberspace; the legendary promised land of the internet.

Whether it’s the Matrix of Neuromancer, the Metaverse of Snow Crash or the Other Plane of True Names, an immersive 3D environment where people can meet from around the world to communicate, create and share is the ultimate expression of the Internet’s potential as a way to connect: the idea of an open, neutral, decentralised virtual reality within the ‘net.

This is essentially the software developer equivalent of lying on your back at night, looking up at the stars, and wondering if you’ll ever fly among them… and Matrix is not alone in dreaming of this!  There have been many walled-garden virtual worlds over the years – Second Life, Habbo Hotel, all of the MMORPGs, Project Sansar etc.  And there have been decentralised worlds which lack the graphics but share the vision – whether it’s FidoNet, Usenet, IRC servers, XMPP, the blogosphere or Matrix as it’s used today.  And there are a few ambitious projects like Croquet/OpenCobalt, Open Simulator, JanusVR or High Fidelity which aim for a decentralised cyberspace, albeit without defining an open standard.

But despite all this activity, where is the open cyberspace? Where is the universal fabric which could weave these worlds together?  Where is the VR equivalent of The Web itself?  Where is the neutral communication layer that could connect the galaxies of different apps and users into a coherent reality?  How do you bridge between today’s traditional web apps and their VR equivalents?

Aside from cultural ones, we believe there are three missing ingredients which have been technically holding back the development of an open cyberspace so far:

  1. The hardware
  2. Client software support (i.e. apps)
  3. A universal real-time data layer to store the space

Nowadays the hardware problem is effectively solved: the HTC Vive, Oculus Rift and even Google Cardboard have brought VR displays to the general public.  Meanwhile, accelerometers and head-tracking turn normal screens into displays for immersive content without even needing goggles, giving everyone a window into a virtual world.

Client software is a more interesting story: while many custom and proprietary VR apps already exist out there, almost none of them can connect to servers other than the ones run by their own vendors, let alone to other services and apps.  An open neutral cyberspace is just like the web: it needs the equivalent of web browsers, i.e. ubiquitous client apps which can connect to any servers and services written by any vendors and hosted by any providers, communicating together via an open common language.  And while web browsers of course exist, until very recently there was no way to link them into VR hardware.

This has changed with the creation of WebVR by Mozilla – defining an API to let browsers render VR content, gracefully degrading across hardware and platforms such that you get the best possible experience all the way from a top-end gaming PC + Vive, down to tapping on a link on a simple smartphone.  WebVR is a genuine revolution: suddenly every webapp on the planet can create a virtual world! And frameworks like A-Frame, aframe-react and React VR make it incredibly easy and fun to do.

So looking back at our list, the final missing piece is nothing less than a backbone: some kind of data layer to link these apps together.  Right now, all the WebVR apps out there are little islands – each its own isolated walled garden, with no standard way to provide shared experiences.  There is no standard way for users to communicate between these worlds (or between the VR and non-VR web) – be that by messaging, VoIP, video or even VR interactions.  There is no standard way to define an avatar, its location and movement within a world, or how it might travel between worlds.  And finally, there is no standard way to describe the world’s state in general: each webapp is free to manage its scene and its content any way it likes; there is nowhere to expose the realtime scene-graph of the world such that other avatars, bots, apps, services etc. can interact with it. It’s the same situation as messaging today, where there is no standard way to exchange messages or reuse a user profile between apps: if cyberspace is taking shape as we speak, it is definitely not taking the path of openness. At least not yet.

Predictably enough, it’s this last point of the ‘missing data layer for cyberspace’ which we’ve been thinking about with Matrix: an open, neutral, decentralised meta-network powering or connecting these worlds.  To start with, we’ve made Matrix available as a generic communications layer for VR, taking WebVR (via A-Frame) and combining it with matrix-js-sdk as an open, secure and decentralised way to place VoIP and video calls and exchange instant messages, both within and between WebVR apps and the rest of the existing Matrix ecosystem (e.g. apps like Riot).
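Under the hood, ‘placing a VoIP call over Matrix’ just means exchanging the standard call-signalling events in a room, with the media itself flowing peer-to-peer over WebRTC. Here’s a rough sketch of the invite half of that exchange (all values are placeholders; in the demo matrix-js-sdk handles this for you):

```python
import uuid
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.com"  # placeholder
ACCESS_TOKEN = "MDAx..."                   # placeholder
ROOM_ID = "!lobby:example.com"             # the room the call is placed in

# m.call.invite carries the WebRTC SDP offer; the callee replies with
# m.call.answer, and ICE candidates are exchanged via m.call.candidates.
invite = {
    "call_id": str(uuid.uuid4()),
    "version": 0,
    "lifetime": 60000,                     # ms for which the invite is valid
    "offer": {"type": "offer", "sdp": "<SDP offer generated by WebRTC>"},
}

txn_id = str(uuid.uuid4())
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/r0/rooms/{quote(ROOM_ID)}"
    f"/send/m.call.invite/{txn_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=invite,
)
resp.raise_for_status()
```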

In fact, the best way to see what this means is to test it live: we’ve put together a quick demo at https://matrix.org/vrdemo to show it off, so please give it a go!

 

 

In the demo you get:

  1. a virtual lobby, providing a 1:1 WebRTC video call via Matrix through to a ‘guide’ user of your choice anywhere else in Matrix (VR or not).  From the lobby you can jump into two other apps:
  2. a video conference, calling between all the participants of a given Matrix room in VR (no interop yet with other Matrix apps)
  3. a ‘virtual tourism’ example, featuring a 1:1 WebRTC video call with a guide, superimposed over the top as you ski through 360-degree video footage.

Video calling requires a WebRTC-capable browser (Chrome or Firefox). Unfortunately no iOS browsers support it yet. If you have dedicated VR hardware (Vive or Rift), you’ll have to configure your browser appropriately to use the demo – see https://webvr.rocks for the latest details.

Needless to say, the demo’s open sourced under the Apache License like all things Matrix – you can check out the code from https://github.com/matrix-org/matrix-vr-demo.  Huge kudos to Richard Lewis, Rob Swain and Ben Lewis for building this out – and to Aidan Taub and Tom Elliott for providing the 360 degree video footage!

The demo is quite high-bandwidth and hardware intensive, so here’s a video of it in action, just in case:

 

 

Now, it’s important to understand that here we’re using Matrix as a standard communications API for VR, but we’re not using Matrix to store any VR world data (yet).  The demo uses plain A-Frame via aframe-react to render its world: we are not providing an API which exposes the world itself onto the network for folks to interact with and extend.  This is because Matrix is currently optimised for storing and synchronising two types of data structure: decentralised timelines of conversation data, and arbitrary decentralised key-value data (e.g. room names, membership, topics).
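To make that distinction concrete, here’s a sketch of how a piece of world state could be stored today as ordinary Matrix key-value data using the room state API; the event type and content are entirely made up for illustration, and the homeserver, token and room are placeholders:

```python
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.com"        # placeholder
ACCESS_TOKEN = "MDAx..."                         # placeholder
ROOM_ID = "!world:example.com"                   # a room standing in for a 'world'

# Matrix room state is a key-value store keyed by (event type, state key).
# Hypothetical custom state event describing one object in the scene:
scene_object = {
    "model": "mxc://example.com/some-asset",     # asset stored as Matrix content
    "position": [1.0, 0.5, -2.0],
    "rotation": [0.0, 90.0, 0.0],
}

resp = requests.put(
    f"{HOMESERVER}/_matrix/client/r0/rooms/{quote(ROOM_ID)}"
    "/state/org.example.vr.object/tree-01",      # made-up event type & state key
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=scene_object,
)
resp.raise_for_status()
```

This sort of thing works for coarse, slowly-changing state – but a full scene graph needs richer querying and much lower latency, which is exactly the gap discussed next.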

However, the job of storing arbitrary world data requires storing and flexibly querying it as an object graph of some kind – e.g. as a scene graph hierarchy.  Doing this efficiently whilst supporting Matrix’s decentralised eventual consistency model is tantamount to evolving Matrix into being a generic decentralised object-graph database (whilst upholding the constraints of that virtual world).  This is tractable, but it’s a bunch more work than just supporting the eventually-consistent timeline & key-value store we have today.  It’s something we’re thinking about though. :)

Also, Matrix is currently not super low-latency – on a typical busy Synapse deployment event transmission between clients has a latency of 50-200ms (ignoring network).  This is fine for instant messaging and setting up VoIP calls etc, but useless for publishing the realtime state of a virtual world: having to wait 200ms to be told that something happened in an interactive virtual world would be a terrible experience.  However, there are various fixes we can do here: Matrix itself is getting faster; Dendrite is expected to be one or two orders of magnitude faster than Synapse.  We could also use Matrix simply as a signalling layer in order to negotiate a lower latency realtime protocol to describe world data updates (much as we use Matrix as a signalling layer for setting up RTP streams for VoIP calls or MIDI sessions).

Finally, you need somewhere to store the world assets themselves (textures, sounds, Collada or glTF assets, etc).  This is no different to using attachments in Matrix today – this could be plain HTTP, the Matrix decentralised content store (mxc:// URLs), or something like IPFS.

This said, it’s only a matter of time before someone starts storing world data itself in Matrix.  We have more work to do before it’s a tight fit, but this has always been one of the long-term goals of the project and we’re going to do everything we can to support it well.

So: this is the future we’re thinking of.  Obviously work on today’s Matrix servers, clients, spec & bridges is our focus and priority right now, and we have lots of work left there – but the longer term plan is critical too.  Communication in VR is pretty much a blank canvas right now, and Matrix can be the connecting fabric for it – which is unbelievably exciting.  Right now our demo is just a PoC – we’d encourage all devs reading this to have a think about how to extend it, and how we can all build the new frontier of cyberspace together!

Finally, if you’re interested in chatting more about VR on Matrix, come hang out over at #vr:matrix.org!

– Matthew, Amandine & the Matrix team

Synapse 0.19.3 released

Hi all,

We’ve released Synapse 0.19.3-rc2 as 0.19.3 with no changes. This is a slightly unusual release, as 0.19.3-rc2 dates from March 13th and a lot of stuff has landed on the develop branch since then – however, we’ll be releasing that as 0.20.0 once it’s ready. Instead, 0.19.3 has a set of intermediary performance and bug fixes; the only new feature is a set of admin APIs kindly contributed by @morteza-araby.

The changelog follows – please upgrade from https://github.com/matrix-org/synapse or your OS packages as normal :)

Changes in synapse v0.19.3 (2017-03-20)

No changes since v0.19.3-rc2

Changes in synapse v0.19.3-rc2 (2017-03-13)

Bug fixes:

  • Fix bug in handling of incoming device list updates over federation.

Changes in synapse v0.19.3-rc1 (2017-03-08)

Bug fixes:

  • Fix synapse_port_db failure. Thanks to @Pneumaticat! (PR #1904)
  • Fix caching to not cache error responses (PR #1913)
  • Fix APIs to make kick & ban reasons work (PR #1917)
  • Fix bugs in the /keys/changes api (PR #1921)
  • Fix bug where users couldn’t forget rooms they were banned from (PR #1922)
  • Fix issue with long language values in pushers API (PR #1925)
  • Fix a race in transaction queue (PR #1930)
  • Fix dynamic thumbnailing to preserve aspect ratio. Thanks to @jkolo! (PR #1945)
  • Fix device list update to not constantly resync (PR #1964)
  • Fix potential for huge memory usage when getting devices that have changed (PR #1969)

Dendrite receives its first messages!!!

Hi all,

We hit a major milestone today on Dendrite, our next-generation golang homeserver: Dendrite received its first messages!!

Before you get too excited, please understand that Dendrite is still a pre-alpha work in progress – whilst we successfully created some rooms on an instance and sent a bunch of messages into them via the Client-Server API, most other functionality (e.g. receiving messages via /sync, logging in, registering, federation etc) is yet to exist. It cannot yet be used as a homeserver. However, this is still a huge step in the right direction, as it demonstrates the core DAG functionality of Matrix is intact, and the beginnings of a usable Client-Server API are hooked up.
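Concretely, ‘created some rooms and sent messages via the Client-Server API’ means traffic along these lines – a sketch against a hypothetical local Dendrite instance (since login and registration don’t exist yet, the access token below is purely a placeholder):

```python
import uuid
import requests
from urllib.parse import quote

BASE = "http://localhost:8008"        # hypothetical local Dendrite instance
TOKEN = "placeholder_token"           # login/registration aren't implemented yet
headers = {"Authorization": f"Bearer {TOKEN}"}

# Create a room via the client-server API...
room = requests.post(
    f"{BASE}/_matrix/client/r0/createRoom",
    headers=headers,
    json={"name": "dendrite test"},
).json()
room_id = room["room_id"]

# ...and send a message event into it.  The clientapi service relays the event
# to the roomserver, which authorises it into the room DAG and persists it.
txn_id = str(uuid.uuid4())
requests.put(
    f"{BASE}/_matrix/client/r0/rooms/{quote(room_id)}/send/m.room.message/{txn_id}",
    headers=headers,
    json={"msgtype": "m.text", "body": "hello from Dendrite's first messages!"},
)
```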

The architecture of Dendrite is genuinely interesting – please check out the wiring diagram if you haven’t already. The idea is that the server is broken down into a series of components which process streams of data stored in Kafka-style append-only logs. Each component scales horizontally (you can have as many as required to handle load), which is an enormous win over Synapse’s monolithic design. The components are also decoupled from each other by the logs, letting them run on entirely different machines as required. Please note that whilst the initial implementation uses Kafka for convenience, the actual append-only log mechanism is abstracted away – in future we expect to see configurations of Dendrite which operate entirely from within a single go executable (using go channels as the log mechanism), as well as alternatives to Kafka for distributed solutions.
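To illustrate the log-driven pattern, here is a toy sketch (in Python with kafka-python, purely to show the shape of the design – Dendrite itself is written in Go, and the topic names here are made up):

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Each component consumes an append-only log, does its piece of processing,
# and appends its output to another log.  Horizontal scaling comes from
# running more consumers in the same consumer group.
consumer = KafkaConsumer(
    "roomserver_input",                          # made-up input topic
    bootstrap_servers="localhost:9092",
    group_id="roomserver",                       # workers share one group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:                          # yields events in log order
    event = record.value
    # ... validate/authorise the event, persist it, update caches, etc ...
    producer.send("roomserver_output", value=event)  # made-up output topic
```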

The components which have taken shape so far centre on the roomserver service, which is responsible (as the name suggests) for maintaining the state and integrity of one or more rooms: authorising events into the room DAG, storing them in postgres, tracking the auth chain of events where needed, and so on. Much of the core Matrix DAG logic of the roomserver is provided by gomatrixserverlib. The roomserver receives events sent by users via the ‘client room send’ component (and the ‘federation backfill’ component, when that exists). The ‘client room send’ component (and in future also ‘client sync’) is provided by the clientapi service – which, as of today, is successfully creating rooms and events and relaying them to the roomserver!

The actual events we’ve been testing with are the history of the Matrix Core room: around 10k events. Right now the roomserver (and the postgres DB that backs it) are the main bottleneck in the pipeline rather than clientapi, so it’s been interesting to see how rapidly the roomserver can consume its log of events. As of today’s benchmark, on a generic dev workstation and an entirely unoptimised roomserver (i.e. no caching whatsoever) running on a single core, we’re seeing it ingest the room history at over 350 events per second. The vast majority of this work is going into encoding/decoding JSON or waiting for postgres: with a simple event cache to avoid repeatedly hitting the DB for every auth and state event, we expect this to increase significantly. And then as we increase the number of cores, kafka partitions and roomserver instances it should scale fairly arbitrarily(!)

For context, the main synapse process for Matrix.org currently maxes out persisting events at between 15 and 20 per second (although it is also spending a bunch of time relaying events to the various worker processes, and other miscellanies). As such, an initial benchmark for Dendrite of 350 msgs/s really does look incredibly promising.

You may be wondering where this leaves Synapse? Well, a major driver for implementing Dendrite has been to support the growth of the main matrix.org server, which currently persists around 10 events/s (whilst emitting around 1500 events/s). We have exhausted most of the low-hanging fruit for optimising Synapse, and have got to the point where the architectural fixes required are of similar shape and size to the work going into Dendrite. So, whilst Synapse is going to be around for a while yet, we’re putting the majority of our long-term plans into Dendrite, with a distinct degree of urgency as we race against the ever-increasing traffic levels on the Matrix.org server!

Finally, you may be wondering what happened to Dendron, our original experiment in Golang servers. Well: Dendron was an attempt at a strangler-pattern rewrite of Synapse, acting as a shim in front of Synapse which could gradually swap out endpoints with their golang implementations. In practice, the operational complexity it introduced, the amount of room for improvement (at the time) we still had in Synapse, and the relatively tight coupling to Synapse’s existing architecture, storage & schema meant that it was far from a clear win – and it effectively served as an excuse to learn Go. As such, we’ve finally formally killed it off as of last week – Matrix.org is now running behind a normal haproxy, and Dendron is dead. Meanwhile, Dendrite (aka Dendron done Right ;) is very much alive, progressing fast, free from the shackles of Synapse.

We’ll try to keep the blog updated with progress on Dendrite as it continues to grow!