
· 5 min read

Relay’s approach to application authorship enables a unique combination of optimal runtime performance and application maintainability. In this post I’ll describe the tradeoffs most apps are forced to make with their data fetching and then describe how Relay’s approach allows you to sidestep these tradeoffs and achieve an optimal outcome across multiple tradeoff dimensions.

In component-based UI systems such as React, one important decision to make is where in your UI tree you fetch data. While data fetching can be done at any point in the UI tree, in order to understand the tradeoffs at play, let’s consider the two extremes:

  • Leaf node: Fetch data directly within each component that uses data
  • Root node: Fetch all data at the root of your UI and thread it down to leaf nodes using prop drilling

Where in the UI tree you fetch data impacts multiple dimensions of the performance and maintainability of your application. Unfortunately, with naive data fetching, neither extreme is optimal for all dimensions. Let’s look at these dimensions and consider which improve as you move data fetching closer to the leaves, vs. which improve as you move data fetching closer to the root.

Loading experience​

  • 🚫 Leaf node: If individual nodes fetch data, you will end up with request cascades where your UI needs to make multiple request roundtrips in series (waterfalls) since each layer of the UI is blocked on its parent layer rendering. Additionally, if multiple components happen to use the same data, you will end up fetching the same data multiple times
  • ✅ Root node: If all your data is fetched at the root, you will make a single request and render the whole UI without any duplicate data or cascading requests
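The roundtrip difference above can be sketched in plain JavaScript (a toy model with hypothetical component names; this is not Relay code):

```javascript
// Toy model: count network roundtrips under each fetching strategy.
let fetchCount = 0;
const fakeFetch = (query) => {
  fetchCount += 1; // one network roundtrip
  return { query, data: {} };
};

// Leaf-node fetching: each component fetches on render, a child can only
// fetch after its parent renders, and <Header> duplicates the user data.
function renderWithLeafFetching() {
  fakeFetch('user');  // <Profile> fetches
  fakeFetch('posts'); // <Feed> fetches, blocked on <Profile> rendering first
  fakeFetch('user');  // <Header> re-requests the same user data
}

// Root-node fetching: a single query for the whole tree, no duplicates.
function renderWithRootFetching() {
  fakeFetch('user + posts');
}

renderWithLeafFetching();
const leafRoundtrips = fetchCount; // 3 sequential roundtrips
fetchCount = 0;
renderWithRootFetching();
const rootRoundtrips = fetchCount; // 1 roundtrip
```

The waterfall cost compounds with tree depth: each extra layer of leaf-fetching components adds another serial roundtrip.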

Suspense cascades​

  • 🚫 Leaf node: If each individual component needs to fetch data separately, each component will suspend on initial render. With the current implementation of React, unsuspending results in rerendering from the nearest parent suspense boundary. This means you will have to reevaluate product component code O(n) times during initial load, where n is the depth of the tree.
  • ✅ Root node: If all your data is fetched at the root, you will suspend a single time and evaluate product component code only once.

Composability

  • ✅ Leaf node: Using an existing component in a new place is as easy as rendering it. Removing a component is as simple as not-rendering it. Similarly adding/removing data dependencies can be done fully locally.
  • 🚫 Root node: Adding an existing component as a child of another component requires updating every query that includes that component to fetch the new data and then threading the new data through all intermediate layers. Similarly, removing a component requires tracing those data dependencies back to each root component and determining if the component you removed was that data’s last remaining consumer. The same dynamics apply to adding/removing new data to an existing component.

Granular updates​

  • ✅ Leaf node: When data changes, each component reading that data can individually rerender, avoiding the need to rerender unaffected components.
  • 🚫 Root node: Since all data originates at the root, when any data updates it always forces the root component to update, forcing an expensive rerender of the entire component tree.


Relay leverages GraphQL fragments and a compiler build step to offer a more optimal alternative. In an app that uses Relay, each component defines a GraphQL fragment which declares the data that it needs. This includes both the concrete values the component will render as well as the fragments (referenced by name) of each direct child component it will render.
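As a sketch, the fragments for a parent component and its child might look like this (component and field names are hypothetical):

```graphql
# Child component declares exactly the data it renders
fragment StoryHeader_story on Story {
  title
  author {
    name
  }
}

# Parent component selects its own fields, plus the child's fragment by name
fragment Story_story on Story {
  summary
  ...StoryHeader_story
}
```

The parent never lists the child's concrete fields, so the child can add or remove fields without any edits to its ancestors.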

At build time, the Relay compiler collects these fragments and builds a single query for each root node in your application. Let’s look at how this approach plays out for each of the dimensions described above:

  • ✅ Loading experience - The compiler-generated query fetches all data needed for the surface in a single roundtrip
  • ✅ Suspense cascades - Since all data is fetched in a single request, we only suspend once, and it’s right at the root of the tree
  • ✅ Composability - Adding/removing data from a component, including the fragment data needed to render a child component, can be done locally within a single component. The compiler takes care of updating all impacted root queries
  • ✅ Granular updates - Because each component defines a fragment, Relay knows exactly which data is consumed by each component. This lets Relay perform optimal updates where the minimal set of components are rerendered when data changes


As you can see, Relay’s use of a declarative composable data fetching language (GraphQL), combined with a compiler step, allows us to achieve optimal outcomes across all of the tradeoff dimensions outlined above:

                     Leaf node   Root node   GraphQL/Relay
Loading experience   🚫          ✅          ✅
Suspense cascades    🚫          ✅          ✅
Composability        ✅          🚫          ✅
Granular updates     ✅          🚫          ✅

· 4 min read

The Relay team is happy to announce the release of Relay v15. While this release is a major version bump and includes a couple of breaking changes, we expect that most users will be unaffected and will experience a seamless upgrade. You can find the full list of changes in the v15 Release Notes.

What's new in Relay 15?​

Support for @refetchable on interfaces​

Previously it wasn't possible to add the @refetchable directive on fragment definitions on server interface types.

// schema.graphql

interface RefetchableInterfaceFoo @fetchable(field_name: "id") {
  id: ID!
}

extend type Query {
  fetch__RefetchableInterfaceFoo(id: ID!): RefetchableInterfaceFoo
}

// fragment

fragment RefetchableFragmentFoo on RefetchableInterfaceFoo
  @refetchable(queryName: "RefetchableFragmentFooQuery") {
  id
}

Persisted query improvements​

If you use URL-based persisted queries, you can now specify custom headers to send with the request that persists the query. For example, this can be used to send auth headers to your query persistence URL endpoint.

persistConfig: {
  url: '',
  headers: {
    Authorization: 'bearer TOKEN'
  }
}

For file-based persisted queries, we added a new feature flag, compact_query_text, that removes all whitespace from the persisted query text. This can make the file more than 60% smaller. This new feature flag can be enabled within your Relay config file.

persistConfig: {
  file: 'path/to/file.json',
  algorithm: 'SHA256'
},
featureFlags: {
  compact_query_text: true
}

Typesafe updates now support missing field handlers​

Typesafe updaters now support missing field handlers. Previously, if you selected node(id: 4) { ... on User { name, __typename } } in a typesafe updater, but that user was fetched in a different way (e.g. with best_friend { name }), you would not be able to access and mutate that user using the typesafe updater.

In this release, we add support for missing field handlers in typesafe updaters, meaning that if a missing field handler is set up for node (as in this example), you will be able to update the user's name with this missing field handler.

In order to support this, the signature of missing field handlers has been changed. The record argument to the handler used to receive a Record type (which is an untyped grab-bag of data). It now receives a ReadOnlyRecordProxy. Furthermore, the field argument of type NormalizationLinkedField is now CommonLinkedField, which is a type containing the properties found in both ReaderLinkedField and NormalizationLinkedField.

Flow type improvements​

Flow users will now get types inferred from graphql literals with more Relay APIs. No longer do Flow users need to explicitly type the return value of the usePreloadedQuery, useQueryLoader, useRefetchableFragment, usePaginationFragment, and useBlockingPaginationFragment API methods.

Relay Resolver improvements​

A significant portion of our development effort since our last release has gone into improving Relay Resolvers (a mechanism for exposing derived data in the graph). It is worth noting that Relay Resolvers are still experimental and API changes might occur in the future.

Terser docblock tags​

The annotation for Relay Resolver functions has been simplified. In many scenarios you can now use the ParentType.field_name: ReturnType syntax to define what new field your Relay Resolver exposes.


/**
 * @RelayResolver
 * @onType User
 * @fieldName favorite_page
 * @rootFragment myRootFragment
 */

can now be written as:

/**
 * @RelayResolver User.favorite_page: Page
 * @rootFragment myRootFragment
 */

In the above example, the Page type is a schema type. If your Relay Resolver doesn't return a schema type, you can use the fixed RelayResolverValue value as your return type:

/**
 * @RelayResolver User.best_friend: RelayResolverValue
 * @rootFragment myRootFragment
 */

Define multiple resolvers per file​

Prior to this release we only allowed a single Relay Resolver per file and required the Relay Resolver function to be the default export. In Relay 15 you're now able to define multiple Relay Resolvers per file and use named exports.

/**
 * @RelayResolver User.favorite_page: Page
 * @rootFragment favoritePageFragment
 */
function usersFavoritePage() {
  // ...
}

/**
 * @RelayResolver User.best_friend: RelayResolverValue
 * @rootFragment bestFriendFragment
 */
function usersBestFriend() {
  // ...
}

module.exports = {
  usersFavoritePage,
  usersBestFriend,
};
Happy Querying!

Β· 22 min read
Guest Post

This is a guest post written by Ernie Turner, a Staff Engineer at Coinbase. Coinbase has thoroughly adopted Relay in their applications and is a strong ally of the Relay Team. Last year they helped co-develop the Relay VSCode extension. Ernie has agreed to share this internal engineering blog post with us.

How to provide the best experience for customers during service disruptions​

In a perfect world, none of the services at Coinbase would suffer outages, and all fields in our GraphQL schema would resolve correctly all the time. As this isn't practical, Coinbase applications should be resilient to downtime and minimize the impact on customers: a single service suffering downtime should not prevent users from using or interacting with an entire app. However, it's also important that we convey issues to users when our applications aren't working as expected. Showing error messages that convey downtime with retry buttons is a better experience than confusing users with missing content or UI they can't interact with.

This post will cover the common patterns and best practices for dealing with missing data in a Relay application.

Screen Architecture and Error Boundaries​

Before we discuss handling service downtime and failures in GraphQL queries, let's first discuss broader screen architecture and how React Error Boundaries can help create a better user experience when used correctly.

Like most things in life, Error Boundaries should be used in moderation. Let's look at a common screen in the Coinbase Retail app.

Any section in the above screen could fail to get the data required to render, but it's how we approach these failures that differentiates what experience a user has with our app. For example, only using a single screen-level ErrorBoundary for any failure causes the app to be unusable when any error occurs, regardless of the significance of that error. In contrast, wrapping each component in its own ErrorBoundary can create just as bad of an experience. Lastly, omitting components with errors entirely is as bad as the other two options. There is no one-size-fits-all approach, so let's break down each of these and explain why they create poor user experiences.

Full Screen Error​

The UI above is Coinbase's full-screen error fallback that is displayed if a service is experiencing disruptions and we couldn't get the data necessary to render the components on this screen. In certain situations, this actually creates a good user experience. We may not be giving the user detailed information as to what happened, but in most situations providing the technical cause is not possible, nor would it improve the users' experience. However, we are telling them something isn't working correctly and giving them a clear Retry button to attempt to get the app working again.

If the reason we're showing this to the user is because we can't load something non-critical, like the asset price history graph or their watchlist status, we shouldn't take down the entire screen. Hiding the current price of bitcoin and preventing the user from trading, just because we can't tell them whether bitcoin is on their watchlist, is a negative user experience.

Another negative of this UI is that it hides all app navigation from the user. Even if we have a good reason to show the user a full screen error, that doesn't mean we should hide the rest of the app in the process. A user should still be able to navigate to a different screen. In practice, we should only show users a "full screen error" and not a "full app error".

Error Messages Everywhere​

The UI pictured above is, in many ways, worse. This is the opposite end of the previous experience, and showing the user a full-screen error would be preferable. Error messages for the price history graph make sense, because the user would expect that UI to be on this screen, but if the user can't even see the price of bitcoin or find the Trade button, we really ought to show them the UI in the first screenshot (but with navigation), as the core goal and purpose of this screen has been lost.

This image also demonstrates how ErrorBoundaries can be too prevalent. The entire price history graph with the time range selectors should only have a single error message, not one per time range.

Empty Fallbacks​

The UI above is just as bad as the prior example. In this case, our ErrorBoundaries fall back to empty content. For certain UI elements, this makes sense. The missing Share button next to the watchlist isn't critical for this UI, so omitting it makes sense. However, hiding the current price of bitcoin, the price history graph, and the Trade button makes the UI unusable and even somewhat misleading. Even users who don't use the app every day would know that something is off. We also aren't giving the user any option to retry any failures; the user just sees empty content with no way to recover.

What should the user see instead?​

The following two screenshots show an example of a better experience for the user. The first screenshot is what the user should see if we can't get the current price of bitcoin or if we can't determine whether the user is allowed to trade. The second screenshot would be a better experience for a user if we couldn't get the current change in the price of bitcoin or the price history.

All of this points to a need to classify sections of the UI on a screen: what is critical for the user's experience, what UI the user expects to see, and what supporting content is optional to the experience.

Critical vs Expected vs Optional UI​

Not all UI elements in an application screen are the same. Some portions of the UI are critical to the core purpose of the screen, while others are more informational and helpful to users. For application design at Coinbase, we group UI elements into three categories: Critical, Expected, and Optional.

Critical UI Elements​

The parts of a screen that define the core information or interaction a user has with the UI. Without these elements in the UI, the screen does not make sense, and if they were missing, users would be confused and/or angry, as it isn't clear why the app wasn't working as expected.

Suppose we can't load the data necessary to display these critical UI elements. In that case, we should show the user a full-screen error message explaining the problem (if possible) with a retry button that lets them easily attempt to re-request the missing data.

Letting users interact with an application that is missing critical UI elements will cause confusion, anger, and even possible loss of funds if the user is able to complete a transaction without knowing the full details of what is happening.

Examples of Critical UI elements:

  • The user's current portfolio balance on the Coinbase app home screen
  • The Asset Price, Payment Method, and total Purchase Price on the order preview screen
  • The user's lifetime earnings and earnings per asset on the Earn screen

Expected UI Elements​

Expected UI elements are the parts of a screen that might not serve the core purpose of a screen, but that most users would expect to be present. If Expected UI elements are missing from a screen, the user is likely to think that something is wrong, but this wouldn't prevent them from performing the core actions of the screen.

If we can't load the data necessary to display these expected UI elements, we should show the user a component-local error message telling them that expected UI is missing. These error messages should also be accompanied by a retry button to let the user re-request the missing data. Localized errors have a higher chance of not being seen or interacted with by the user, which is somewhat acceptable since they aren't required for the core purpose of the screen.

Letting users interact with an application that is missing expected UI elements should be acceptable but it might cause confusion about what is happening. Completely omitting these UI elements without an accompanying error message would create a worse experience.

Examples of Expected UI elements:

  • An asset's current price on the Buy Asset screen (where they enter the amount to buy)
  • The price history graph on an asset detail screen
  • A list of recent transactions on the Coinbase Card screen

Optional UI Elements​

Optional UI elements are the parts of a screen that are purely supportive to the main purpose of a screen. Some users might notice these missing elements, but others might be completely unaware that they're supposed to be present at all. In either scenario, a user wouldn't be prevented from accomplishing their main goal on the screen.

If we can't load the data necessary to display these Optional UI elements, we should instead just omit them entirely from the UI. However, this comes with the following risks:

  A. The user might not know that anything is missing
  B. There won't be a way for the user to re-request the data for this UI unless they do a full screen refresh.

Developers should consider these downsides and ensure that they do not cause a negative user experience. These failures should also be logged so that product engineers are notified when the user experience is less than ideal.

Examples of Optional UI Elements:

  • Offer cards on the asset detail screen
  • Asset category sections on the Trade screen (New on Coinbase, Top Movers, etc.)
  • News feed on the Home Screen

Let's return to the image above and classify the sections of the UI into these categories.

Element Classification Limits​

In the example above, we have a screen that has two critical components, two expected components, and one optional component. Most screens in an app should only have a handful of critical UI components on them. For some screens, the entire UI might be composed of one single critical component.

The same is true for expected elements. If we have a screen that's composed of five separate expected UI elements, we'd end up with the screenshot above with 'Try Again' buttons littered across the app. Limit the number of expected elements and retry buttons on a single screen to only one or two if possible.

Pull To Refresh​

For all of the above scenarios, users on mobile apps should be able to pull-to-refresh to retry any failed request on a screen. With Relay applications, this will usually mean retrying the full screen-level query. If a screen has any error messages or hidden components because of missing data, using pull-to-refresh should always attempt to fix all of those error conditions.

Work with your Product Managers and Designers​

All of this classification is subjective, and all of the examples above are just one opinion; a designer or PM may have different opinions on how screens should degrade. Cross-functional alignment is important when designing application UI. Teams should consult engineers, designers, and product managers to ensure seamless and on-brand screens across your entire app.

How Relay Can Help​

Once you've classified your screen into sections, the next step is to add the proper ErrorBoundaries to your app and configure your components' GraphQL fragments depending on their classification. This is where Relay can help. Based on our experience working with Relay apps, we've created several best practices around how to deal with missing data from GraphQL queries.


Our goal at Coinbase is to work with a nullable schema as recommended by the Relay team. The primary driver is that it puts the decision on how to handle service outages and missing query data in the hands of the client engineer. Without a nullable schema, the decision of what to do with missing data is made on the server (by bubbling up null values to the nearest nullable parent), and the client code has no recourse to change this decision.

This decision is buoyed by the existence of the Relay @required directive, which allows client engineers to annotate their queries and fragments with directives that tell Relay how to handle missing data at runtime. This reduces boilerplate code that engineers would be required to write otherwise. On the surface, the directive seems very simple: it only comes with three options which are all pretty straightforward. However, when attempting to use this directive for various use cases, it becomes clear that the choice of which option to pick is not always obvious, nor is the decision of whether to use the directive at all.

Locality of @required​

One great feature of the @required directive is that it only affects the fragment in which you use it. It will never change the behavior of other fragments that query the same field. This allows you to add or remove the directive without thinking about anything outside your component's scope. This is important because different components may be categorized differently, even if they get data from the same query. Being able to mark fields in fragments of the same query with different @required arguments is important to help build ideal user experiences.

Using action: LOG vs action: NONE​

The LOG and NONE actions both have the same runtime behavior, but LOG will send a message to your logging mechanism of choice, logging the full path to the field that was returned as null. For most use cases where the @required directive is needed, LOG should be used over NONE. The only time NONE should be preferred is if a field is expected to be null for some users.

While the log entry created by using action: LOG isn't likely to be actionable on its own, it can be a useful breadcrumb for future errors. Being able to look at the history of an error and see that a specific field was unexpectedly null can help track down future errors the user might encounter in a workflow.

When to use @required(action:LOG/NONE)​

The LOG/NONE actions should only be used on fields which are necessary to display Optional UI in your components. There are two distinct use cases where this shows up when designing your application:

  1. Your component is Optional UI and shouldn't be rendered at all if a field or set of fields is null
  2. A portion of your component is Optional UI and relies on an object type field where that object makes no sense without one or more of its child fields

Let's look at a fragment that encompasses both of these use cases:

fragment MyFragment on Asset {
  name @required(action: LOG)
  slug @required(action: LOG)
  supply {
    total @required(action: LOG)
    circulating @required(action: LOG)
  }
}

For this fragment, we're saying that the entire fragment is invalid if we don't get the name or slug fields. If those fields are returned from the server as null, we can't render this component at all. This fragment also shows how to use the @required(action: LOG/NONE) directive to invalidate an entire object type field. This fragment says that if we don't have either the or supply.circulating field, then the entire supply object is itself invalid and should be null. This nullability will then be used to hide an optional portion of this component's UI.

Now let's see how our component will handle the results from this query:

const asset = useFragment(
  graphql`
    fragment MyFragment on Asset {
      name @required(action: LOG)
      slug @required(action: LOG)
      supply {
        total @required(action: LOG)
        circulating @required(action: LOG)
      }
    }
  `,
  assetRef,
);

// If we couldn't get the required asset name or slug fields, hide this entire UI
if (asset === null) {
  return null;
}

// Otherwise hide certain portions of the UI if data is missing
return (
  <>
    <Title>{}</Title>
    { != null && (
      <SupplyStats total={} circulating={} />
    )}
  </>
);

The @required directive really shines here because it removes complex null checks that we'd have to write otherwise. Instead of having to check whether the or asset.slug fields are null, we can simply check if our entire fragment was nulled out and prevent rendering. The same is true when checking whether we should render the SupplyStats component. We only have to check whether the parent supply field is null in order to know that the two subfields are non-null.

When to use @required(action:THROW)​

Using @required(action: THROW) is more straightforward. This action should be used on fields that are necessary to render your Expected or Critical UI component. If these fields are returned as null from the server, your component should throw an error to the nearest ErrorBoundary and the user should see an error message.

How far up the tree your ErrorBoundary is depends on how much of the UI you want to remove if there's an error. For example, if we're showing the user an error instead of an asset price history graph, it doesn't make sense to keep the time series buttons in view; that entire UI should disappear as well. But we don't want to take out the entire screen if this happens either.

Make sure your ErrorBoundary provides a mechanism for the user to retry the failed query to see if they can get the data on a subsequent attempt. We should always pair an error message with an actionable element to let the user recover. We shouldn't rely on the user being able (or knowing) to use the pull-to-refresh to reload the screen.

A note about using @required(action: THROW) on fields in arrays​

You should almost never use the THROW action in a component that selects both an array field and fields of that array. As an example of what not to do:

function Component({ assetPriceRef }) {
  const { quotes } = useFragment(
    graphql`
      fragment ComponentFragment on AssetPriceData {
        quotes {
          # Returns an array of items
          timestamp
          price @required(action: THROW)
        }
      }
    `,
    assetPriceRef,
  );
  // ...
}

This component selects both the quotes array along with the timestamp and price fields on every item in that array. Putting THROW on the quotes field would be acceptable if we want to show the user an error if we don't get back any quotes. But, putting THROW on the price field would result in showing the user an error if even a single price field in that array was null. That's probably not the behavior we want. If we got back 23 of the 24 quotes for the past day correctly, we should probably still display the results we have and just omit the empty values instead.

Instead, we should use action: LOG/NONE so that we only invalidate a single item in the array instead of all items. We can then optionally filter out the null values in the array if needed.

function Component({ assetPriceRef }) {
  const { quotes } = useFragment(
    graphql`
      fragment ComponentFragment on AssetPriceData {
        quotes {
          # Returns an array of items
          timestamp
          price @required(action: LOG)
        }
      }
    `,
    assetPriceRef,
  );
  // removeNull filters out the quotes that @required nulled
  const validQuotes = quotes.filter(removeNull);
  // ...
}

When NOT to use @required on a field​

The unhelpful answer to this question would be "don't use @required when a field isn't required". That answer trivializes the decision of what is required and what isn't when the answer is usually more nuanced, especially when your fragment has a dozen fields or more. However, we can follow a number of best practices to decide whether to mark a field as required or not. Again, it is important that you work with your PMs and Designers to help you with these decisions.

There is also a fine line between when to omit the @required directive vs using it with the LOG/NONE action. The primary difference is that you should omit the @required directive when the UI rendered by that field is Optional UI.

Some components in your application can render a combination of different classifications of UI. For example, a single component might be responsible for displaying both the current price of an asset as well as what percent of users have bought or sold the asset over some time frame. This means the component is mixing both Critical UI (asset price) and Optional UI (buy/sell stats).

If a field is used to render optional content which can instead be omitted from the UI entirely without causing confusion for the user (remember, that's the definition of Optional UI) then you shouldn't use the @required directive on that field. Instead, you should add checks to your code to omit the UI if the field is null.

function SomeComponent({ queryRef }) {
  const { asset } = useFragment(
    graphql`
      fragment SomeComponentFragment on Query {
        asset {
          latestQuote @required(action: THROW) # Required data
          buyPercent # Optional data
        }
      }
    `,
    queryRef,
  );

  return (
    <>
      <div>Price: {asset.latestQuote}</div>
      {asset.buyPercent !== null && (
        <>
          <div>Buy Percent: {asset.buyPercent}</div>
          <div>Sell Percent: {1 - asset.buyPercent}</div>
        </>
      )}
    </>
  );
}

In this example it would be incorrect to use @required(action: LOG/NONE) on the buyPercent field because that would invalidate the entire fragment which isn't the behavior we want.

Another less common use case of when to omit the @required directive is when you can provide a safe fallback value. Providing a fallback/default value for a field can be very dangerous if done incorrectly. While there are a few cases where it's potentially safe to fall back to a default value, it's generally pretty rare and should be avoided. However, if you can provide a safe fallback value, you should avoid adding @required to that field and instead use a fallback value.

A couple of guidelines on when to provide a fallback value:

  • Fallback values for numeric fields (numbers or strings that represent numbers) should not be used.
    • Using a 0 in place of a missing value will always create more confusion for the user. Coinbase is a financial company and if we can't display accurate values to users, we shouldn't be displaying them at all. Showing a user that their account balance is $0.00 is clearly much worse than showing them an error message. That's an obvious use case, but even places such as the price change percent for an asset, APY% for Coinbase Card, or the amount a user can make via Coinbase Earn should never show 0 if we don't have the actual value.
  • Fallback values for boolean fields should be used with caution.
    • The first choice for a fallback for boolean fields is usually to set the field to false. Depending on what the boolean field represents, falling back to false can create a worse customer experience than showing the user an error. Falling back to false for a field like isEligibleForOffer is probably acceptable because that is likely showing Optional content. Falling back to false for a field like hasCoinbaseOneSubscription would not be acceptable because for a user who is a CoinbaseOne subscriber the content is Expected, and the user is going to be confused about why that UI is missing in the app.
  • Falling back to an empty array for array fields should be used with caution.
    • If you're showing the user their list of Coinbase Card transactions, falling back to an empty array is a bad idea, but if you're showing the user a list of recently added assets, it's probably okay to fall back to an empty array and omit the UI from displaying, since the component already has to deal with the case of the array being empty.
  • String fields should usually just be left as null.
    • In some cases you might want to fall back to an empty string for a string field that comes back null, but this usually produces the same code path as leaving the field null. Most string fields in a schema aren't expected to be empty, so falling back to an empty string can create negative user experiences where the user is shown a blank instead of actual content.
function SomeComponent({ queryRef }) {
  const asset = useFragment(
    graphql`
      fragment MyFragment on Asset {
        canTrade @required(action: THROW) # Required data
        hasOfferToStake # Optional data
      }
    `,
    queryRef,
  );

  const showStakeOffer = asset.hasOfferToStake ?? false;

  return (
    <>
      {asset.canTrade && <Button>Trade</Button>}
      {showStakeOffer && <Button>Stake your currency</Button>}
    </>
  );
}

If you take one thing away from this document, hopefully it's that a lot of thought needs to go into how to handle downtime and service interruptions. Handling failure states is an important part of building world-class applications. Make sure your design and PM teams are on the same page with your engineering team when scoping out new features. If they don't give you guidance on what to show the user when data is missing, push back and come to a consensus as a team on these decisions.

Relay can be a powerful tool in helping deal with application failures. Its granular control over how failures are handled might involve more work than you're used to. However, this extra effort pays off in the long run and goes a long way toward improving customers' experience with your applications.

· 12 min read

We're extremely excited to release a preview of the new, Rust-based Relay compiler to open source today (as v13.0.0-rc.1)! This new compiler is faster, supports new runtime features, and provides a strong foundation for additional growth in the future.

Leading up to this release, Meta's codebase had been growing without signs of stopping. At our scale, the time it took to compile all of the queries in our codebase was increasing at the direct expense of developer productivity. Though we tried a number of strategies to optimize our JavaScript-based compiler (discussed below), our ability to incrementally eke out performance gains could not keep up with the growth in the number of queries in our codebase.

So, we decided to rewrite the compiler in Rust. We chose Rust because it is fast, memory-safe, and makes it easy to safely share large data structures across threads. Development began in early 2020, and the compiler shipped internally at the end of that year. The rollout was smooth, with no interruptions to application development. Initial internal benchmarks indicated that the compiler performed nearly 5x better on average, and nearly 7x better at P95. We've further improved the performance of the compiler since then.

This post will explore why Relay has a compiler, what we hope to unlock with the new compiler, its new features, and why we chose to use the Rust language. If you're in a hurry to get started using the new compiler, check out the compiler package README or the release notes instead!

Why does Relay have a compiler?

Relay has a compiler in order to provide stability guarantees and achieve great runtime performance.

To understand why, consider the workflow of using the framework. With Relay, developers use a declarative language called GraphQL to specify what data each component needs, but not how to get it. The compiler then stitches these components' data dependencies into queries that fetch all of the data for a given page and precomputes artifacts that give Relay applications such a high level of performance and stability.

In this workflow, the compiler

  • allows components to be reasoned about in isolation, making large classes of bugs impossible, and
  • shifts as much work as possible to build time, significantly improving the runtime performance of applications that use Relay.

Let's interrogate each of these in turn.

Supporting local reasoning

With Relay, a component specifies only its own data requirements through the use of GraphQL fragments. The compiler then stitches these components' data dependencies into queries that fetch all of the data for a given page. Developers can focus on writing a component without worrying about how its data dependencies fit into a larger query.

However, Relay takes this local reasoning a step further. The compiler also generates files that are used by the Relay runtime to read out just the data selected by a given component's fragment (we call this data masking). So a component never accesses (in practice, not just at the type level!) any data that it didn't explicitly request.

Thus, modifying one component's data dependencies cannot affect the data another component sees, meaning that developers can reason about components in isolation. This gives Relay apps an unparalleled level of stability, makes large classes of bugs impossible, and is a key part of why Relay can scale to many developers touching the same codebase.
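To make data masking concrete, here is a loose sketch in plain JavaScript. It is illustrative only: the names and record shapes are invented, and Relay's real implementation works on generated artifacts rather than field lists.

```javascript
// Conceptual sketch of data masking (illustrative, not Relay's actual
// implementation): a component gets back only the fields its fragment
// explicitly selected, never the rest of the store record.
function maskData(storeRecord, selectedFields) {
  const masked = {};
  for (const field of selectedFields) {
    if (field in storeRecord) {
      masked[field] = storeRecord[field];
    }
  }
  return masked;
}

const record = { id: '4', name: 'Ada', email: 'ada@example.com' };
// A component whose fragment selects only `id` and `name` never sees
// `email`, even at runtime:
const masked = maskData(record, ['id', 'name']);
console.log(masked); // { id: '4', name: 'Ada' }
```

Because the masked object simply lacks unselected fields, a component that starts relying on a field it never selected fails immediately in development, rather than silently coupling itself to another component's query.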

Improved runtime performance

Relay also makes use of the compiler to shift as much work as possible to build time, improving the performance of Relay apps.

Because the Relay compiler has global knowledge of all components' data dependencies, it is able to write queries that are as good β€” and generally even better β€” than they would be if they had been written by hand. It's able to do this by optimizing queries in ways that would be impractically slow at runtime. For example, it prunes branches that can never be accessed from the generated queries and flattens identical sections of queries.
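To make "flattening" concrete, here is a loose plain-JavaScript sketch. The object shapes are invented for illustration; the real compiler transform operates on a typed intermediate representation.

```javascript
// Illustrative sketch of flattening: merge duplicate field selections so
// each field appears only once in the generated query. The real Relay
// transform works on a typed IR, not plain objects like these.
function flatten(selections) {
  const merged = new Map();
  for (const sel of selections) {
    const existing = merged.get(sel.name);
    if (!existing) {
      merged.set(sel.name, { name: sel.name, selections: sel.selections ?? null });
    } else if (sel.selections) {
      existing.selections = flatten([
        ...(existing.selections ?? []),
        ...sel.selections,
      ]);
    }
  }
  return [...merged.values()];
}

// Two fragments both select `name` on the same object; the flattened
// query selects it only once and unions the remaining fields:
const result = flatten([
  { name: 'author', selections: [{ name: 'name' }] },
  { name: 'author', selections: [{ name: 'name' }, { name: 'avatar' }] },
]);
```

Doing this merge once at build time means the server parses and validates a smaller query, and the runtime never pays for the deduplication.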

And because these queries are generated at build time, Relay applications never generate abstract syntax trees (ASTs) from GraphQL fragments, manipulate those ASTs, or generate query text at runtime. Instead, the Relay compiler replaces an application's GraphQL fragments with precomputed, optimized instructions (as plain ol' JavaScript data structures) that describe how to write network data to the store and read it back out.

An added benefit of this arrangement is that a Relay application bundle includes neither the schema nor β€” when using persisted queries β€” the string representation of the GraphQL fragments. This helps to reduce application size, saving users' bandwidth and improving application performance.

In fact, the new compiler goes further and saves users' bandwidth in another way β€” Relay can inform an application's server about each query text at build time and generate a unique query ID, meaning that the application never needs to send the potentially very long query string over users' slow networks. When using such persisted queries, the only things that must be sent over the wire to make a network request are the query ID and the query variables!
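As a rough sketch, a persisted-query request body can look like the following. The exact wire format and field names (`doc_id`, `variables` here) vary by server and are assumptions for illustration.

```javascript
// Sketch of a persisted-query network request. Only the build-time query
// ID and the variables go over the wire, never the full query text.
// The field names ('doc_id', 'variables') are illustrative; servers differ.
function persistedQueryBody(queryId, variables) {
  return JSON.stringify({ doc_id: queryId, variables });
}

// The payload stays tiny regardless of how large the query text was:
const body = persistedQueryBody('1234567890', { userID: '4' });
```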

What does the new compiler enable?

Compiled languages are sometimes perceived as introducing friction and slowing developers down when compared to dynamic languages. However, Relay takes advantage of the compiler to reduce friction and make common developer tasks easier. For example, Relay exposes high-level primitives for common interactions that are easy to get subtly wrong, such as pagination and refetching a query with new variables.

What these interactions have in common is that they require generating a new query from an old one, and thus involve boilerplate and duplication β€” an ideal target for automation. Relay takes advantage of the compiler's global knowledge to empower developers to enable pagination and refetching by adding one directive and changing one function call. That's it.

But giving developers the ability to easily add pagination is just the tip of the iceberg. Our vision for the compiler is that it provides even more high-level tools for shipping features and avoiding boilerplate, gives developers real-time assistance and insights, and is made up of parts that can be used by other tools for working with GraphQL.

A primary goal of this project was that the rewritten compiler's architecture should set us up to achieve this vision over the coming years.

And while we're not there yet, we have made significant progress on each of these fronts.

For example, the new compiler ships with support for the new @required directive, which will nullify the parent linked field or throw an error if a given subfield is null when read out. This may sound like a trivial quality-of-life improvement, but if half of your component's code is null checks, @required starts to look pretty good!

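As a plain-JavaScript sketch of the THROW behavior only (this is not Relay's implementation, and the record shape is hypothetical), the semantics look roughly like this:

```javascript
// Plain-JS sketch of @required(action: THROW) semantics (not Relay's
// implementation): reading a null required field throws, so every
// consumer downstream can skip its own null checks.
function readWithRequired(record, requiredFields) {
  for (const field of requiredFields) {
    if (record[field] == null) {
      throw new Error(`Required field '${field}' is null`);
    }
  }
  return record;
}

// Without @required, every consumer needs its own null check on `name`;
// with it, `user.name` below is guaranteed to be non-null.
const user = readWithRequired({ id: '1', name: 'Ada' }, ['name']);
console.log(user.name); // 'Ada'
```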

Next, the compiler powers an internal-only VSCode extension that autocompletes field names when you type and shows type information on hover, among many other features. We haven't made it public, yet, but we hope to at some point! Our experience is that this VSCode extension makes working with GraphQL data much easier and more intuitive.

Lastly, the new compiler was written as a series of independent modules that can be reused by other GraphQL tools. We call this the Relay compiler platform. Internally, these modules are being reused for other code generation tools and for other GraphQL clients for different platforms.

Compiler performance

So far, we've discussed why Relay has a compiler and what we hope the rewrite enables. But we haven't discussed why we decided to rewrite the compiler in 2020: performance.

Prior to the decision to rewrite the compiler, the time it took to compile all of the queries in our codebase was gradually, but unrelentingly, slowing as our codebase grew. Our ability to eke out performance gains could not keep up with the growth in the number of queries in our codebase, and we saw no incremental way out of this predicament.

Reaching the end of JavaScript

The previous compiler was written in JavaScript. This was a natural choice of language for several reasons: it was the language with which our team had the most experience, the language in which the Relay runtime was written (allowing us to share code between the compiler and runtime), and the language in which the GraphQL reference implementation and our mobile GraphQL tools were written.

The compiler's performance remained reasonable for quite some time: Node/V8 comes with a heavily-optimized JIT compiler and garbage collector, and can be quite fast if you're careful (we were). But compilation times were growing.

We tried a number of strategies to keep up:

  • We had made the compiler incremental. In response to a change, it only recompiled the dependencies that were affected by that change.
  • We had identified which transforms were slow (namely, flatten), and made the algorithmic improvements we could (such as adding memoization).
  • The official graphql npm package's GraphQL schema representation took multiple gigabytes of memory to represent our schema, so we replaced it with a custom fork.
  • We made profiler-guided micro-optimizations in our hottest code paths. For example, we stopped using the ... operator to clone and modify objects, instead preferring to explicitly list out the properties of objects when copying them. This preserved the object's hidden class and enabled the code to be better JIT-optimized.
  • We restructured the compiler to shell out to multiple workers, with each worker handling a single schema. Projects with multiple schemas are uncommon outside of Meta, so even with this, most users would have been using a single-threaded compiler.
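The spread-versus-explicit-copy change above looks like this in miniature. It is a sketch of the pattern only; the node shape is hypothetical.

```javascript
// Sketch of the micro-optimization: in hot paths, copying an object by
// listing its properties explicitly keeps the object's shape (its V8
// hidden class) stable, which the JIT optimizes better than a
// spread-based copy whose shape depends on the input.
// The node shape here is hypothetical.

// Before: spread copy (easy to write, shape depends on `node`).
function renameSpread(node, name) {
  return { ...node, name };
}

// After: explicit property list (every copy has the exact same shape).
function renameExplicit(node, name) {
  return { kind: node.kind, loc: node.loc, name };
}
```

Both functions are behaviorally equivalent for nodes with exactly these properties; the payoff is purely in how predictably the engine can optimize the hot path.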

These optimizations weren't enough to keep pace with the rapid internal adoption of Relay.

The biggest challenge was that NodeJS does not support multithreaded programs with shared memory. The best one can do is to start multiple workers that communicate by passing messages.

This works well in some scenarios. For example, Jest employs this pattern and makes use of all cores when running tests or transforming files. This is a good fit because Jest doesn't need to share much data or memory between processes.

On the other hand, our schema is simply too large to keep multiple instances of it in memory, so there was no good way to efficiently parallelize the Relay compiler with more than one thread per schema in JavaScript.

Deciding on Rust

After we decided to rewrite the compiler, we evaluated many languages to see which would meet the needs of our project. We wanted a language that was fast, memory-safe and supported concurrency β€” preferably with concurrency bugs caught at build time, not at runtime. At the same time we wanted a language that was well-supported internally. This narrowed it down to a few choices:

  • C++ met most of the criteria, but felt difficult to learn. And, the compiler doesn't assist with safety as much as we'd like.
  • Java was probably also a decent choice. It can be fast and is multi-core, but provides less low-level control.
  • OCaml is a proven choice in the compiler space, but multi-threading is challenging.
  • Rust is fast, memory-safe, and supports concurrency. It makes it easy to safely share large data structures across threads. With the general excitement around Rust, some previous experience on our team, and usage by other teams at Facebook, this was our clear top choice.

Internal rollout

Rust turned out to be a great fit! The team of mostly JavaScript developers found Rust easy to adopt. And, Rust's advanced type system caught many errors at build time, helping us maintain a high velocity.

We began development in early 2020, and rolled out the compiler internally at the end of that year. Initial internal benchmarks indicated that the compiler performed nearly 5x better on average, and nearly 7x better at P95. We've further improved the performance of the compiler since then.

Release in OSS

Today, we're excited to publish the new version of the compiler as part of Relay v13.

You can find more information about the compiler and its new features in the README and in the release notes!

We're continuing to develop features within the compiler, such as giving developers the ability to access derived values on the graph, adding support for a more ergonomic syntax for updating local data, and fully fleshing out our VSCode extension, all of which we hope to release to open source. We're proud of this release, but there's still a lot more to come!


Thank you Joe Savona, Lauren Tan, Jason Bonta and Jordan Eldredge for providing amazing feedback on this blog post. Thank you ch1ffa, robrichard, orta and sync for filing issues related to compiler bugs. Thank you to MaartenStaa for adding TypeScript support. Thank you @andrewingram for pointing out how difficult it was to enable the @required directive, which is now enabled by default. There are many others who contributed — this was truly a community effort!

· 6 min read

We are extremely excited to release Relay Hooks, the most developer-friendly version of Relay yet, and make it available to the OSS community today! Relay Hooks is a set of new, rethought APIs for fetching and managing GraphQL data using React Hooks.

The new APIs are fully compatible with the existing, container-based APIs. Though we recommend writing any new code using Relay Hooks, migrating existing containers to the new APIs is optional and container-based code will continue to work.

Although these APIs are newly released, they are not untested: the rewritten facebook.com is entirely powered by Relay Hooks, and these APIs have been the recommended way to use Relay at Facebook since mid-2019.

In addition, we are also releasing a rewritten guided tour and updated documentation that distill the best practices for building maintainable, data-driven applications that we have learned since first developing Relay.

Though we still have a ways to go before getting started with Relay is as easy as we’d like, we believe these steps will make the Relay developer experience substantially better.

What was released?

We released Relay Hooks, a set of React Hooks-based APIs for working with GraphQL data. We also took the opportunity to ship other improvements, like a more stable version of fetchQuery and the ability to customize object identifiers in Relay using getDataID (which is useful if your server does not have globally unique IDs).

See the release notes for a complete list of what was released.

What are the advantages of the Hooks APIs?

The newly released APIs improve the developer experience in at least the following ways:

  • The Hooks-based APIs for fetching queries, loading data with fragments, pagination, refetching, mutations and subscriptions generally require fewer lines of code and have less indirection than the equivalent container-based solution.
  • These APIs have more complete Flow and TypeScript coverage.
  • These APIs take advantage of compiler features to automate error-prone tasks, such as the generation of refetch and pagination queries.
  • These APIs come with the ability to configure fetch policies, which let you determine the conditions in which a query should be fulfilled from the store and in which a network request will be made.
  • These APIs give you the ability to start fetching data before a component renders, something that could not be achieved with the container-based solutions. This allows data to be shown to users sooner.
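The fetch-policy behavior mentioned above can be sketched in plain JavaScript roughly as follows. The policy names are Relay's, but the resolution logic here is a simplified illustration (Relay's real logic also handles partial store data, garbage collection, and more):

```javascript
// Simplified illustration of fetch-policy resolution.
function resolveFetchPolicy(policy, storeHasData) {
  switch (policy) {
    case 'store-only':
      return { readFromStore: true, fetchFromNetwork: false };
    case 'store-or-network':
      return { readFromStore: storeHasData, fetchFromNetwork: !storeHasData };
    case 'store-and-network':
      return { readFromStore: storeHasData, fetchFromNetwork: true };
    case 'network-only':
      return { readFromStore: false, fetchFromNetwork: true };
    default:
      throw new Error(`Unknown fetch policy: ${policy}`);
  }
}

// With data already in the store, 'store-or-network' skips the request:
console.log(resolveFetchPolicy('store-or-network', true));
// { readFromStore: true, fetchFromNetwork: false }
```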

The following examples demonstrate some of the advantages of the new APIs.

Refetching a fragment with different variables

First, let’s take a look at how we might refetch a fragment with different variables using the Hooks APIs:

type Props = {
  comment: CommentBody_comment$key,
};

function CommentBody(props: Props) {
  const [data, refetch] = useRefetchableFragment<CommentBodyRefetchQuery, _>(
    graphql`
      fragment CommentBody_comment on Comment
      @refetchable(queryName: "CommentBodyRefetchQuery") {
        body(lang: $lang) {
          text
        }
      }
    `,
    props.comment,
  );

  return (
    <>
      <CommentText text={data?.body?.text} />
      <Button
        onClick={() =>
          refetch({ lang: 'SPANISH' }, { fetchPolicy: 'store-or-network' })
        }>
        Translate
      </Button>
    </>
  );
}

Compare this to the equivalent container-based example. The Hooks-based example takes fewer lines, does not require the developer to manually write a refetch query, has the refetch variables type-checked and explicitly states that a network request should not be issued if the query can be fulfilled from data in the store.

Starting to fetch data before rendering a component

The new APIs allow developers to more quickly show content to users by starting to fetch data before a component renders. Prefetching data in this way is not possible with the container-based APIs. Consider the following example:

const UserQuery = graphql`
  query UserLinkQuery($userId: ID!) {
    user(id: $userId) {
      name
    }
  }
`;

function UserLink({ userId, userName }) {
  const [queryReference, loadQuery] = useQueryLoader(UserQuery);

  const [isPopoverVisible, setIsPopoverVisible] = useState(false);

  const maybePrefetchUserData = useCallback(() => {
    if (!queryReference) {
      // calling loadQuery will cause this component to re-render.
      // During that re-render, queryReference will be defined.
      loadQuery({ userId });
    }
  }, [queryReference, loadQuery]);

  const showPopover = useCallback(() => {
    maybePrefetchUserData();
    setIsPopoverVisible(true);
  }, [maybePrefetchUserData, setIsPopoverVisible]);

  return (
    <>
      <Button onMouseOver={maybePrefetchUserData} onPress={showPopover}>
        {userName}
      </Button>
      {isPopoverVisible && queryReference && (
        <React.Suspense fallback={<Glimmer />}>
          <UserPopoverContent queryRef={queryReference} />
        </React.Suspense>
      )}
    </>
  );
}

function UserPopoverContent({ queryRef }) {
  // The following call will Suspend if the request for the data is still
  // in flight:
  const data = usePreloadedQuery(UserQuery, queryRef);
  // ...
}

In this example, if the query cannot be fulfilled from data in the local cache, a network request is initiated when the user hovers over a button. When the button is finally pressed, the user will thus see content sooner.

By contrast, the container-based APIs initiate network requests when the component renders.

Hooks and Suspense for Data Fetching

You may have noticed that both of the examples use Suspense.

Although Relay Hooks uses Suspense for some of its APIs, support, general guidance, and requirements for usage of Suspense for Data Fetching in React are still not ready, and the React team is still defining what this guidance will be for upcoming releases. There are some limitations when Suspense is used with React 17.

Nonetheless, we released Relay Hooks now because we know these APIs are on the right trajectory for supporting upcoming releases of React. Even though parts of Relay’s Suspense implementation may still change, the Relay Hooks APIs themselves are stable; they have been widely adopted internally, and have been in use in production for over a year.

See Suspense Compatibility and Loading States with Suspense for deeper treatments of this topic.

Where to go from here

Please check out the getting started guide, the migration guide and the guided tour.


Releasing Relay Hooks was not just the work of the React Data team. We'd like to thank the contributors that helped make it possible:

@0xflotus, @AbdouMoumen, @ahmadrasyidsalim, @alexdunne, @alloy, @andrehsu, @andrewkfiedler, @anikethsaha, @babangsund, @bart88, @bbenoist, @bigfootjon, @bondz, @BorisTB, @captbaritone, @cgriego, @chaytanyasinha, @ckknight, @clucasalcantara, @damassi, @Daniel15, @daniloab, @earvinLi, @EgorShum, @eliperkins, @enisdenjo, @etcinit, @fabriziocucci, @HeroicHitesh, @jaburx, @jamesgeorge007, @janicduplessis, @jaroslav-kubicek, @jaycenhorton, @jaylattice, @JonathanUsername, @jopara94, @jquense, @juffalow, @kafinsalim, @kyarik, @larsonjj, @leoasis, @leonardodino, @levibuzolic, @liamross, @lilianammmatos, @luansantosti, @MaartenStaa, @MahdiAbdi, @MajorBreakfast, @maraisr, @mariusschulz, @martinbooth, @merrywhether, @milosa, @mjm, @morrys, @morwalz, @mrtnzlml, @n1ru4l, @Nilomiranda, @omerzach, @orta, @pauloedurezende, @RDIL, @RicCu, @robrichard, @rsmelo92, @SeshanPillay25, @sibelius, @SiddharthSham, @stefanprobst, @sugarshin, @taion, @thedanielforum, @theill, @thicodes, @tmus, @TrySound, @VinceOPS, @visshaljagtap, @Vrq, @w01fgang, @wincent, @wongmjane, @wyattanderson, @xamgore, @yangshun, @ymittal, @zeyap, @zpao and @zth.

The open source project relay-hooks allowed the community to experiment with Relay and React Hooks, and was a source of valuable feedback for us. The idea for the useSubscription hook originated in an issue on that repo. Thank you @morrys for driving this project and for playing such an important role in our open source community.

Thank you for helping make this possible!