<aside> 📃 This pre-mortem was shared with affected Linear customers on May 18th, 2021. It addresses reported performance issues and describes the steps our engineering team is taking to fix them. In the spirit of full transparency, we're making it available to all customers; the work we're doing to improve performance should have positive effects for everyone.

</aside>

We've designed Linear to be the fastest issue tracking and project management tool out there. Recently, we've become aware of significant slowness affecting some workspaces with larger issue and user counts, where loading and using the application can feel sluggish and fall below the standards we set for ourselves.

Addressing degraded application performance is our top priority. We wrote this to provide full transparency into what is causing the issues and how we're addressing them. A big part of the problem is due to faster-than-expected growth. We also want to give you a glimpse into how our application is architected and how some of those decisions can affect performance in certain instances. We will highlight what we are doing to address these issues going forward, both in the short term and in the mid to long term.

Linear's high-level architecture

To understand why workspaces might see degraded performance, we'd like to give you an understanding of Linear's current architecture.

On initial startup, the Linear client retrieves the entire database for a workspace from the backend and stores the result on the client in an IndexedDB database (this is what we call a full bootstrap). Under normal circumstances, this should happen only during the first startup of the client and infrequently after that, whenever the data schema changes.
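
To make this concrete, here is a minimal sketch of what a full bootstrap could look like, assuming a hypothetical /api/bootstrap endpoint, a single "models" object store, and illustrative record shapes; the real Linear client differs in naming and structure.

```typescript
// Open (or create) the local IndexedDB database for a workspace.
function openDatabase(name: string): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, 1);
    // Create the object store on first open.
    request.onupgradeneeded = () => {
      request.result.createObjectStore("models", { keyPath: "id" });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

interface BootstrapPayload {
  lastSyncId: number; // position in the sync log, used later for catch-up
  models: Record<string, { id: string }[]>; // records grouped by model name
}

async function fullBootstrap(workspaceId: string): Promise<number> {
  // Fetch the entire workspace dataset from the backend in one request.
  const response = await fetch(`/api/bootstrap?workspace=${workspaceId}`);
  const payload: BootstrapPayload = await response.json();

  // Persist every record locally so later loads can warm-start from disk.
  const db = await openDatabase(`linear-${workspaceId}`);
  const tx = db.transaction("models", "readwrite");
  const store = tx.objectStore("models");
  for (const [modelName, records] of Object.entries(payload.models)) {
    for (const record of records) {
      store.put({ ...record, __model: modelName });
    }
  }
  return new Promise((resolve, reject) => {
    tx.oncomplete = () => resolve(payload.lastSyncId);
    tx.onerror = () => reject(tx.error);
  });
}
```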

Once the IndexedDB is populated with the workspace data, a graph of the data is constructed in memory so that, for example, issues can be linked with teams, projects, and cycles. Once the graph is constructed, the UI finally renders.
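
A simplified sketch of that graph-building step is below, using illustrative model shapes (not Linear's actual schema); the key idea is that foreign keys such as teamId and projectId get resolved into direct object references before the UI renders.

```typescript
interface Team { id: string; issues: Issue[] }
interface Project { id: string; issues: Issue[] }
interface Issue {
  id: string;
  teamId: string;
  projectId?: string;
  team?: Team;
  project?: Project;
}

function buildGraph(teams: Team[], projects: Project[], issues: Issue[]): void {
  const teamsById = new Map(teams.map((t) => [t.id, t]));
  const projectsById = new Map(projects.map((p) => [p.id, p]));

  for (const issue of issues) {
    // Resolve foreign keys into object references in both directions.
    const team = teamsById.get(issue.teamId);
    if (team) {
      issue.team = team;
      team.issues.push(issue);
    }
    const project = issue.projectId ? projectsById.get(issue.projectId) : undefined;
    if (project) {
      issue.project = project;
      project.issues.push(issue);
    }
  }
}
```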

After this initial render pass, the client connects to our realtime sync server, which sends any updates the client might have missed since it was last online (i.e., fast-forwarding the client to the current state of the data). The sync server then pushes real-time updates whenever data for the workspace changes. If you're loading the application after the full bootstrap, you should only experience a fast warm start, where we only fetch changes your client hasn't seen yet.
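
A rough sketch of that warm-start path is shown below. The delta endpoint, WebSocket URL, and message format here are assumptions for illustration, not Linear's actual API.

```typescript
interface SyncDelta {
  syncId: number;
  model: string;
  action: "insert" | "update" | "delete";
  data: { id: string };
}

async function warmStart(workspaceId: string, lastSyncId: number): Promise<void> {
  // Ask the sync server only for changes the client hasn't seen yet.
  const response = await fetch(
    `/api/sync/delta?workspace=${workspaceId}&since=${lastSyncId}`
  );
  const deltas: SyncDelta[] = await response.json();
  deltas.forEach(applyDelta);

  // Then subscribe to live updates over a WebSocket connection.
  const socket = new WebSocket(`wss://sync.example.com/?workspace=${workspaceId}`);
  socket.onmessage = (event) => applyDelta(JSON.parse(event.data));
}

function applyDelta(delta: SyncDelta): void {
  // In a real client this would update both IndexedDB and the in-memory
  // graph; here it's just a placeholder.
  console.log(`apply ${delta.action} to ${delta.model}`, delta.data.id);
}
```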

Why you might see performance issues

At the time of writing, you might encounter performance degradations if you're using Linear in a large workspace, such as one with several hundred users and tens of thousands of issues.

Most of the performance issues are due to the sheer size of the data that needs to be transferred when your client is out of date or doesn't have a local copy of the data. This can make initial application loads after periods of inactivity feel sluggish.

There are a couple of reasons for this, but it's mostly due to the way we write to your client's IndexedDB and some of its constraints (writes are slow, but reads are fast), and the way we've currently implemented client catch-up, which can result in the client needing to do frequent full data reloads instead of more incremental fast-forward catch-ups.
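
To illustrate why the write pattern matters (purely as an example, not our actual code): committing each record in its own IndexedDB transaction forces a flush per record, whereas batching all writes into a single transaction commits once and is dramatically faster for large datasets.

```typescript
async function writeOnePerTransaction(db: IDBDatabase, records: { id: string }[]) {
  for (const record of records) {
    // One transaction (and one commit) per record: slow for large datasets.
    await new Promise<void>((resolve, reject) => {
      const tx = db.transaction("models", "readwrite");
      tx.objectStore("models").put(record);
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    });
  }
}

async function writeBatched(db: IDBDatabase, records: { id: string }[]) {
  // All puts share a single transaction and commit once: far fewer flushes.
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction("models", "readwrite");
    const store = tx.objectStore("models");
    records.forEach((record) => store.put(record));
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```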

These problems stem from trade-offs in our architecture: the reason Linear is fast is also the reason Linear becomes slow once workspaces grow in size. This didn't come as a surprise to us; we knew that the in-memory representation of the entire dataset would become a bottleneck at some point in the future. But we were caught off guard by how quickly it happened, mostly because we weren't anticipating organizations using Linear to grow so large so quickly. And so we ended up in an undesirable and regrettable place where we weren't able to re-architect our client quickly enough to prevent anyone from running into performance degradation.

What we're doing to improve performance in the near term

Making sure teams have great performance is the top priority for our entire engineering organization. We have planned short-term work that will alleviate the biggest performance degradations. We will keep updating this list as we identify larger optimizations, and we will also list smaller, visible improvements in our Changelog.

Split data up into smaller tables (Done)

Previously we stored all workspace data in one big table. While splitting it up into smaller tables doesn't increase performance directly, it enables us to undertake many of the following optimizations.
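
As a sketch of what this enables, the IndexedDB upgrade callback can create one object store per model instead of a single catch-all store, so each model can be read, written, and migrated independently. The store names below are illustrative assumptions.

```typescript
const MODEL_STORES = ["issues", "projects", "teams", "cycles", "comments"];

function openSplitDatabase(name: string, version: number): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, version);
    request.onupgradeneeded = () => {
      const db = request.result;
      for (const storeName of MODEL_STORES) {
        // One object store per model instead of one big "models" store.
        if (!db.objectStoreNames.contains(storeName)) {
          db.createObjectStore(storeName, { keyPath: "id" });
        }
      }
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```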

Don't invalidate all the data on schema changes (Done)