Rethinking React Best Practices#

Over a decade ago, React rethought the best practices for client-side rendering of single-page applications.

Today, React's adoption has peaked, and it continues to face healthy criticism and scrutiny.

With React 18 and React Server Components (RSCs), React is undergoing a significant shift away from its original role as the "view" in client-side MVC.

In this article, we will attempt to understand the evolution of React from the React library to the React architecture.

The Anna Karenina principle states: "All happy families are alike; each unhappy family is unhappy in its own way."

We will start by understanding the core constraints of React and the past approaches to managing them, exploring the fundamental patterns and principles that unite happy React applications.

By the end, we will understand the evolving mental models in React frameworks like Remix and the Next.js 13 app directory.

Let’s begin by understanding the potential problems we have been trying to solve so far. This will help us contextualize the recommendations from the React core team to leverage high-level frameworks that tightly integrate server, client, and bundler.

What Problems Are Being Solved?#

In software engineering, there are typically two categories of problems: technical problems and interpersonal problems.

Architecture can be viewed as a process of finding the right constraints over time to address these issues.

Without the right constraints to address interpersonal problems, the more people collaborate, the greater the complexity, fragility, and risk of change over time. Without the right constraints to manage technical problems, the more you ship, the worse the end-user experience typically becomes.

These constraints ultimately help us manage the greatest limitations faced by humans building and interacting within complex systems—limited time and attention.

React and Interpersonal Problems#

Addressing interpersonal problems has a high leverage effect. We can increase the productivity of individuals, teams, and organizations under limited time and attention.

Teams have limited time and resources and need to deliver quickly. As individuals, our brain capacity is limited and cannot accommodate a large amount of complexity.

Most of our time is spent figuring out the status quo and how best to make changes or add new content. People need to be able to operate without loading the entire system into their minds.

The success of React is largely attributed to its performance in managing this constraint compared to existing solutions at the time. It allows teams to work in parallel to build decoupled components that can be declaratively composed together and "work smoothly" through one-way data flow.

Its component model and escape hatches allow legacy systems and integration chaos to be abstracted away behind clear boundaries. However, one effect of this decoupling and the component model is that it is easy to lose sight of the forest for the trees.

React and Technical Problems#

Compared to existing solutions at the time, React also simplified the process of implementing complex interactive features.

Its declarative model produces an n-ary tree data structure that is fed into renderers for specific platforms like react-dom. As we scaled our teams and sought ready-made packages, this tree structure quickly became very deep.

Since its rewrite in 2016, React has actively addressed the technical problems of handling large, deep trees on end-user hardware.

On the other side of the screen, users' time and attention are also limited. Expectations are rising while attention spans are shrinking. Users do not care about frameworks, rendering architectures, or state management. They want to complete tasks that need to be done without friction. Another constraint is to be fast and not make users think.

We will see that many of the best practices recommended in the next generation of React (and React-style) frameworks mitigate the cost of processing large, deep trees purely on end-user CPUs.

Revisiting the Great Divide#

So far, the tech industry has been filled with swings along different axes, such as the centralization versus decentralization of services and thin clients versus thick clients.

We swung from thick desktop clients to increasingly thin ones with the rise of the web, and then back to thicker clients with the rise of mobile computing and SPAs. Today, the dominant mental model of React is rooted in this thick client approach.

As we migrated code to the client during the great front-end/back-end split, a divide opened between "front-of-the-front-end" developers (who excel at CSS, interaction design, HTML, and accessibility patterns) and "back-of-the-front-end" developers.

In the React ecosystem, as we attempt to reconcile the best practices of these two worlds, the pendulum is swinging back toward a middle ground, where much of the "back-of-the-front-end" style code is moving back to the server.

From "View in MVC" to Application Architecture#

In large organizations, some proportion of engineers work on platform teams, baking architectural best practices into shared foundations.

These developers enable others to invest their limited time and energy into things that yield real benefits.

One effect of being constrained by limited time and attention is that we often choose the path that feels easiest. Thus, we hope these positive constraints will keep us on the right track and allow us to easily fall into the "pit of success."

A significant part of this success lies in reducing the amount of code that needs to be loaded and run on end-user devices, following the principle of downloading and running only what is necessary. This is hard to adhere to when we are limited to a purely client-side paradigm. Bundles end up including data fetching, processing, and formatting libraries (like moment.js) that could run off the main thread, on a server.
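To make the principle concrete, here is a minimal sketch of deferring a heavy dependency to the one code path that needs it. Node's built-in zlib stands in for a large client-side library like moment.js; with a bundler, a dynamic import() at the same spot would become a code-splitting point. The function name is illustrative.

```javascript
// Sketch: the heavy dependency loads only on the code path that needs it.
// Node's built-in zlib stands in for a large formatting library; with a
// bundler, a dynamic import() here would become a code-splitting point.
function maybeCompress(payload, shouldCompress) {
	if (!shouldCompress) return payload; // common path: heavy code never loads
	const zlib = require('node:zlib');   // lazy: loaded only when this branch runs
	return zlib.gzipSync(payload);       // returns a Buffer of compressed bytes
}
```

The same shape applies to any "only what is necessary" decision: keep the expensive module out of the path most users actually take.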

This is shifting in frameworks like Remix and Next, where React's one-way data flow extends to the server, combining the simple request-response mental model of MPAs with the interactivity of SPAs.

The Journey Back to the Server#

Now let’s understand what optimizations we have applied over time to this purely client-side paradigm. This requires reintroducing the server for better performance. This context will help us understand the React framework, where the server has evolved into a first-class citizen.

Here is a simple way to serve a front end for client-side rendering: a blank HTML page with many script tags.

The illustration shows the basic principles of client-side rendering.

The advantage of this approach is a fast TTFB (Time to First Byte), a simple operational model, and a decoupled backend. Combined with React's programming model, this approach simplifies many interpersonal problems.
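A minimal sketch of what that response looks like, with illustrative names: the body is an empty root node, and everything the user sees depends on the scripts downloading and running first.

```javascript
// Sketch of the document a purely client-rendered app serves: an empty
// root node plus script tags. Everything visible happens on the user's device.
function renderShell(scripts) {
	const tags = scripts
		.map((src) => `<script src="${src}" defer></script>`)
		.join('');
	return `<!doctype html><html><head>${tags}</head><body><div id="root"></div></body></html>`;
}
```

Until the scripts have downloaded, parsed, and executed, `#root` stays empty, which is exactly the startup cost discussed next.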

However, we quickly encounter technical problems as all responsibilities are handed over to user hardware. We must wait for everything to download and run before we can fetch useful content from the client.

As code accumulates, there is only one place to store it. Without careful performance management, this can lead to applications running slowly to the point of being unbearable.

Enter Server-Side Rendering#

Our first step back to the server is to try to address these slow startup times.

Instead of responding to the initial document request with a blank HTML page, we immediately start fetching data on the server and then render the component tree as HTML to respond.

In the context of client-side rendered SPAs, SSR acts like a trick to display some content first while loading JavaScript, rather than a blank white screen.
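A framework-free sketch of the idea (not react-dom's actual implementation): the server fetches data and renders markup into the document, so the response already contains content before any client JavaScript runs. fetchProducts, App, and client.js are stand-ins.

```javascript
// Stand-in for a data source the server can reach directly.
const fetchProducts = async () => ['coffee', 'tea'];

// Stand-in for rendering a React tree to markup on the server.
function App({ products }) {
	return `<ul>${products.map((p) => `<li>${p}</li>`).join('')}</ul>`;
}

// Respond to the document request with rendered content plus the
// script that will later hydrate it on the client.
async function handleDocumentRequest() {
	const products = await fetchProducts(); // fetching starts on the server
	return `<div id="root">${App({ products })}</div><script src="client.js" defer></script>`;
}
```

The user sees content as soon as the HTML arrives; interactivity still has to wait for the hydration script.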
The illustration shows the basic principles of server-side rendering and client hydration.

SSR can improve perceived performance, especially for content-rich pages. But it brings operational costs, and for highly interactive pages it may degrade the user experience, because TTI (Time to Interactive) is pushed further back.

This is known as the "uncanny valley," where users see content on the page and try to interact with it, but the main thread is locked. The problem remains excessive JavaScript.

The Demand for Speed - More Optimizations#

Thus, SSR can speed things up, but it is not a silver bullet.

There is also an inherent inefficiency: after rendering on the server, React must re-execute much of the same work on the client when it hydrates.

SSR also means a slower TTFB, so the browser sits idle after requesting the document, waiting for the response head to arrive before it knows which resources to download.

This is where streaming comes into play, bringing more parallelism to the picture.

We can imagine that if ChatGPT displayed a spinner while waiting for the entire response to complete, most people would think it was broken and close the tab. Therefore, we show whatever content we can as early as possible by streaming it to the browser as data and content are completed.

For dynamic page streaming, it is a way to start fetching data on the server early while allowing the browser to start downloading resources, all in parallel. This is much faster than the above illustration, where we wait for everything to be fetched and rendered before sending the HTML with data to the client.

More about streaming

This streaming technique depends on whether the backend server stack or edge runtime can support streaming data.

Over HTTP/2, this uses HTTP streams (a feature that allows multiple requests and responses to be multiplexed concurrently); over HTTP/1.1, it uses the Transfer-Encoding: chunked mechanism, which allows data to be sent in smaller, independent chunks.

Modern browsers have the Fetch API built-in, which can consume fetched responses as readable streams.

The body property of the response is a readable stream, allowing the client to receive data in chunks as the server provides it, rather than waiting for all chunks to download at once.
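A small sketch of that consumption pattern. A locally constructed ReadableStream (a global in modern Node and browsers) stands in for response.body from fetch(); both expose the same reader interface.

```javascript
// A locally built ReadableStream standing in for response.body from fetch().
function makeChunkedBody(chunks) {
	return new ReadableStream({
		start(controller) {
			for (const chunk of chunks) controller.enqueue(chunk); // server flushes chunks
			controller.close();
		},
	});
}

// Consume the body chunk by chunk instead of waiting for the whole response.
async function readAll(body) {
	const reader = body.getReader();
	let result = '';
	while (true) {
		const { done, value } = await reader.read();
		if (done) return result;
		result += value; // each chunk could be processed as it arrives
	}
}
```

In a real streaming SSR setup, the client would process each chunk on arrival (e.g., appending HTML) rather than accumulating a string.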

This approach requires setting up the ability to send streaming responses on the server and reading them on the client, which necessitates close collaboration between the client and server.

Streaming also has some noteworthy nuances, such as caching considerations, handling HTTP status codes and errors, and the actual end-user experience. Here, there is a trade-off between fast TTFB and layout shifts.

So far, we have optimized the startup time of the client-rendered tree by fetching data early on the server and flushing HTML early, so the browser can download code and resources in parallel.

Now let’s focus on fetching and changing data.

Data Fetching Constraints#

One constraint of a hierarchical component tree is that "everything is a component," meaning nodes often have multiple responsibilities, such as initiating fetch operations, managing loading states, responding to events, and rendering.

This often means we need to traverse the tree to know what to fetch.

In the early days, generating initial HTML via SSR often meant manually traversing the tree on the server. This involved diving deep into React internals to collect all data dependencies and fetching them in order while traversing the tree.

On the client, this "render then fetch" sequence leads to spinners and layout shifts, because traversing the tree produces a cascading network waterfall.

Thus, we need a way to fetch data and code in parallel without having to traverse the tree from top to bottom each time to know what to download.
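The difference can be sketched with plain promises, where fetchUser and fetchPosts are stand-ins for network requests:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const fetchUser = async () => { await sleep(50); return 'user'; };
const fetchPosts = async () => { await sleep(50); return 'posts'; };

// Waterfall: each child only starts fetching after its parent rendered.
async function waterfall() {
	const user = await fetchUser();   // ~50ms
	const posts = await fetchPosts(); // +50ms more, only after user resolves
	return [user, posts];             // ~100ms total
}

// Parallel: a route-level loader kicks off every request up front.
async function parallel() {
	const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]); // ~50ms total
	return [user, posts];
}
```

With real network latencies, each extra level of waterfall adds a full round trip; hoisting requests to a known boundary removes that multiplier.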

Understanding Relay#

Understanding the principles behind Relay and how it addresses challenges at Facebook scale is very useful. These concepts will help us understand the patterns we will see later.

  • Components have co-located data dependencies

    In Relay, components declaratively define their data dependencies in the form of GraphQL fragments.

    The main difference from libraries like React Query, which also have co-location features, is that components do not initiate fetch operations.

  • Tree traversal happens at build time

    The Relay compiler traverses the tree, collecting each component's data requirements and generating an optimized GraphQL query.

    Typically, this query is executed at runtime at the routing boundary (or specific entry point), allowing component code and data to be loaded in parallel as early as possible.

Co-location supports one of the most valuable architectural principles—being able to remove code. By removing a component, its data requirements are also removed, and the query will no longer include them.
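An illustrative sketch of the compile-time idea (not Relay's actual implementation): collect each component's declared fragment and emit a single query for the route. All names and fragment shapes here are made up.

```javascript
// Components declare their data dependencies alongside their code.
const components = [
	{ name: 'ProductView', fragment: 'product { id name price }' },
	{ name: 'ReviewsView', fragment: 'reviews { id rating body }' },
];

// A build-time pass walks the tree, gathers fragments, and emits one
// query that can be fired at the route boundary at runtime.
function compileRouteQuery(components) {
	const fields = components.map((c) => '  ' + c.fragment).join('\n');
	return `query ProductPageQuery {\n${fields}\n}`;
}
```

Deleting ReviewsView from the list also drops its fields from the generated query, which is the code-removal property co-location buys you.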

Relay alleviates many of the trade-offs associated with fetching resources from large tree-like data structures.

However, it can be complex, requiring GraphQL, a client runtime, and a sophisticated compiler to deliver its developer-experience benefits while maintaining high performance.

Later, we will see how React Server Components follow a similar pattern for the broader ecosystem.

The Next Best Thing#

When fetching data and code, how can we avoid traversing the tree without adopting all of that machinery?

This is where nested routing on the server in frameworks like Remix and Next comes into play.

The initial data dependencies of components can often be mapped to URLs. Here, the nested segments of the URL map to the component subtrees. This mapping allows the framework to identify the data and component code needed for a specific URL in advance.

For example, in Remix, subtrees can encapsulate their own data requirements independently of parent routes, and the compiler ensures that nested routes load in parallel.

This encapsulation also achieves graceful degradation by providing separate error boundaries for independent sub-routes. It also allows the framework to pre-load data and code by looking at the URL for faster SPA transitions.
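A simplified sketch of that mapping, with illustrative route shapes rather than Remix's real internals: each nested URL segment owns a loader, and every matched loader starts at once.

```javascript
// Each nested segment of the URL owns its own data requirements.
const routes = {
	'products': { loader: async () => 'product list' },
	'products/:id': { loader: async ({ id }) => `product ${id}` },
	'products/:id/reviews': { loader: async ({ id }) => `reviews for product ${id}` },
};

// Given the route keys matched from a URL like /products/42/reviews,
// all nested loaders start immediately and run in parallel - no waterfall.
async function runNestedLoaders(matchedKeys, params) {
	return Promise.all(matchedKeys.map((key) => routes[key].loader(params)));
}
```

Because the match comes from the URL alone, a framework can also prefetch these loaders (and the matching component code) before navigation completes.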

More Parallelization#

Let’s delve into how Suspense, concurrent mode, and streaming enhance the data-fetching patterns we have been discussing.

Suspense allows subtrees to fall back to displaying a loading interface when data is unavailable and resumes rendering when the data is ready.

This is a primitive that enables us to declaratively represent asynchronicity in what would otherwise be a synchronous tree. This allows us to achieve parallelism while fetching resources and rendering.

As we saw in streaming, we can start sending data early without waiting for everything to finish rendering.

In Remix, this pattern is expressed by using the defer function in route-level data loaders:

// Remix APIs encourage fetching data at route boundaries,
// where nested loaders load in parallel
export async function loader({ params }) {
	// not critical - start fetching, but don't block rendering
	const productReviewsPromise = fetchReview(params.id)
	// critical to display the page with this data - so we await
	const product = await fetchProduct(params.id)

	return defer({ product, productReviewsPromise })
}

export default function ProductPage() {
	const { product, productReviewsPromise } = useLoaderData()
	return (
		<>
			<ProductView product={product} />
			<Suspense fallback={<LoadingSkeleton />}>
				<Await resolve={productReviewsPromise}>
					{reviews => <ReviewsView reviews={reviews} />}
				</Await>
			</Suspense>
		</>
	)
}

In Next, RSC (React Server Components) provides a similar data-fetching pattern by using asynchronous components on the server to wait for critical data.

// Example of a similar pattern in a server component
export default async function Product({ id }) {
	// non critical - start fetching but don't block
	const productReviewsPromise = fetchReview(id)
	// critical - block rendering with await
	const product = await fetchProduct(id)
	return (
		<>
			<ProductView product={product} />
			<Suspense fallback={<LoadingSkeleton />}>
				{/* Unwrap promise inside with use() hook */}
				<ReviewsView data={productReviewsPromise} />
			</Suspense>
		</>
	)
}

The principle here is to fetch data early on the server. Ideally, this is achieved by placing loaders and RSCs close to the data source.

To avoid unnecessary waiting, we stream less critical data, allowing the page to load in stages—this becomes very simple in Suspense.

It is worth noting that RSC itself does not have a built-in API for data fetching at route boundaries. Without careful structuring, this can lead to cascading request waterfalls.

This is a line frameworks must walk between baking in best practices and providing flexibility, which leaves more surface area for missteps.

That said, when RSCs are deployed close to the data source, the impact of these server-side waterfalls is significantly smaller than that of client-side waterfalls.

These patterns indicate that RSC needs deep framework integration with a router that can map URLs to specific components.

Before we dive deeper into RSC, let’s take a moment to understand the other half of this picture.

Data Changes#

A common pattern for managing remote data in a purely client-side model is to store it in some form of normalized store (e.g., Redux store).

In this model, changes are typically optimistically updated in the in-memory client cache, followed by a network request to update the remote state on the server.
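A sketch of the bookkeeping this involves, with an illustrative cache shape: apply the update to the in-memory copy immediately, then roll it back if the server mutation fails.

```javascript
// Minimal optimistic-update cache: update local state first, send the
// mutation, and restore the previous state on failure.
function createOptimisticCache(initial) {
	let state = initial;
	return {
		read: () => state,
		async mutate(next, sendToServer) {
			const previous = state;
			state = next; // optimistic: the UI reflects the change immediately
			try {
				await sendToServer(next); // network request to update remote state
			} catch (err) {
				state = previous; // rollback so client and server stay consistent
				throw err;
			}
		},
	};
}
```

Real libraries layer retries, request deduplication, and race-condition handling on top of this core loop, which is where most of the boilerplate and bugs historically lived.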

Historically, manually managing these aspects involved a lot of boilerplate code and was prone to errors in all the edge cases we discussed in The New Wave of React State Management.

The emergence of Hooks has led to the development of tools like Redux RTK and React Query that focus on handling these edge cases.

This requires shipping code over the network to handle these concerns, with values propagated down through React context. It also makes it easy to create inefficient sequential I/O operations while traversing the tree.

So how will this existing pattern change when React's one-way data flow extends to the server?

Much of this "back-of-the-front-end" style code is actually shifting to the back end.

Below is an image from Remix's data flow that shows the trend of the framework evolving towards a request-response model in MPA (multi-page application) architecture.

This shift is from a model where everything is purely handled by the client to one where the server plays a more significant role.

You can also check out The Web's Next Transition for a deeper understanding of this shift.

This pattern also extends to RSC (React Server Component), where we will later introduce the experimental "server action functions." Here, React's one-way data flow extends to the server, adopting a simplified request-response model and progressively enhanced forms.

One benefit of this approach is the removal of code from the client. However, the primary benefit is the simplification of the mental model for data management, which in turn simplifies much of the existing client-side code.

Understanding React Server Components#

So far, we have leveraged the server only as a way to optimize a fundamentally client-side approach, and our mental model of React has remained rooted in client-rendered trees running on user machines.

RSC (React Server Component) introduces the server as a first-class citizen rather than an afterthought. React has evolved, embedding the backend into the component tree, forming a powerful outer layer.

This architectural shift has led to changes in many existing mental models regarding what React applications are and how they are deployed.

The two most apparent impacts are the support for the optimized data loading patterns we have discussed so far and automatic code splitting.

In Building and Serving Frontends at Scale part two, we discussed some key issues at scale, such as dependency management, internationalization, and optimized A/B testing.

When confined to a purely client-side environment, these issues can be challenging to address at scale. RSC and many features of React 18 provide a foundational set of tools for addressing many of these problems.

One confusing change in the mental model is that client components can render server components.

It helps to visualize the component tree when thinking about RSCs: server components render as you move down the tree, while client components fill "holes" that provide client-side interactivity.

Extending the server down into the component tree is very powerful because we can avoid sending unnecessary code down. Moreover, unlike user hardware, we have more control over server resources.

The roots of the tree are anchored in the server, the trunk traverses the network, and the leaves are pushed down to client components running on user hardware.

This extension model requires us to understand the serialization boundaries in the component tree, which are marked by the 'use client' directive.

This also re-emphasizes the importance of mastering composition, so that RSCs can render as deep into the tree as possible by being passed as children or slots of client components.
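A rough sketch of why that composition works, using plain functions and made-up names rather than React's real RSC payload format: the server renders a server component to output, and that output crosses the serialization boundary as the children of a client component reference.

```javascript
// A server component: runs only on the server, never ships to the client.
function ServerMarkdown({ text }) {
	return `<article>${text}</article>`;
}

// What crosses the serialization boundary for a client component: a
// reference to its module plus serializable props. (Illustrative shape,
// not React's actual wire format.)
function clientBoundaryPayload(name, props) {
	return { $client: name, props };
}

// The client shell receives already-rendered server output as children,
// so the server component's code and dependencies never reach the client.
const payload = clientBoundaryPayload('InteractiveShell', {
	children: ServerMarkdown({ text: 'hello' }),
});
```

This is why passing server components as children or slots lets them render deep inside interactive client components without being pulled into the client bundle.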

Server Action Functions#

As we migrate parts of the front end to the server, many innovative ideas are being explored. These provide a glimpse into a future of seamless integration between client and server.

What if we could gain the benefits of co-location with components without needing client libraries, GraphQL, or worrying about runtime inefficiencies of waterfalls?

An example of a server action can be seen in the React-style meta-framework Qwik City. Similar ideas are also being explored and discussed in React (Next) and Remix.

The Wakuwork repository also provides a proof of concept for implementing React server "action functions" for data mutations.

As with any experimental approach, there are trade-offs to consider. Concerns about security, error handling, optimistic updates, retries, and race conditions arise in client-server communication. As we have learned, these issues are often unmanageable without a framework.

This exploration also highlights that achieving the best user experience and developer experience often requires advanced compiler optimizations that increase complexity.

Conclusion#

"Software is just a tool to help people get things done - many programmers never understand this. Keep your eyes on the value delivered, not on the details of the tools." - John Carmack

As the React ecosystem evolves beyond purely client-side paradigms, it is important to understand the abstractions below and above us.

Clearly understanding the fundamental constraints we operate under enables us to make more informed trade-offs.

With each swing, we gain new knowledge and experiences to integrate into the next iteration. The advantages of previous approaches still hold. As always, it is a trade-off.

The great thing is that frameworks increasingly give developers the leverage to make nuanced trade-offs for their specific situations, where optimizing user experience meets optimizing developer experience, and the simple model of MPAs blends with the rich model of SPAs across client and server.
