The Hitchhiker’s Guide to Next.js

How Next.js evolved, and why it’s one of the best React frameworks on the market

Brandon Duffany
The Startup

--

Recently, there has been a lot of hype around Next.js, a Web framework based on React. But is the hype warranted? What improvements does Next.js provide over existing tools like the feature-rich, developer-friendly create-react-app, or the simple, tried-and-true static site generator, Jekyll?

Is Next.js just going to add more bloat to your app? Is it really the future of frontend development, or is it just a fad? What problems is it really solving?

In this article, I’ll try to answer all of these questions and more.

In this article, I don’t draw a direct comparison between Next.js and its main competitor, Gatsby. The two frameworks have very similar features, and each has its own strengths. Both are evolving rapidly and have thriving ecosystems around them, so it’s too difficult to make a comparison that would still be relevant even a month from now.

What Next.js is, and what it isn’t

Many people have heard that Next.js is basically just a “static site generator” that allows writing your code in React. Some React developers I have talked to in the past have followed the rule of thumb that if they want to create a static site, they should use a tool like Next.js or Gatsby, but if they want a dynamic site with more features, they should stick with the tried and true create-react-app.

But Next.js is a lot more than just a static site generator. It’s much more powerful than Jekyll, which is primarily designed to host simple, static sites. Next.js supports the same static-site generation features as Jekyll, in addition to all the features of create-react-app, and much, much more. And if you deploy Next.js on Vercel, you get an incredibly smooth, no-nonsense, “zero-config” developer experience.

In other words, static site generation is just one thing that Next.js is great at, but it is far from its only selling point. I wouldn’t call Next.js a static site generator, but rather an automatic static optimization framework, which is opinionated about embracing React at its core.

When a Next.js app is deployed on Vercel, it is also an excellent serverless framework, ideal for building a JAMstack application.

But what do these things mean? How does “automatic static optimization” differ from static site generation? And what do I mean by “serverless?”

On top of this, Next.js uses an opinionated file-system based routing approach, instead of requiring you to configure all the routes yourself through JavaScript, or using something like React Router. But is it flexible enough? Is this approach the right fit for a serious enterprise app that can have many complex routing scenarios, or even a medium-sized app with some dynamic functionality?

Lastly, should you use Vercel to deploy your Next.js app, or something else?

Once again, I’ll try to answer or at least touch on all of these questions in this article, and more.

How Next.js came to be: a brief history of Web development

Before I start raving about Next.js, let’s try to understand how Next.js evolved as a technology.

If you have been developing Web apps since the early days, you can skip this part without really missing out on anything.

In the early 1990s, when the Web was first introduced, most of the Web consisted of a bunch of HTML files stored on a bunch of computers in different places.

You’d make a request using a URL. The request would ultimately get delivered to the Web server hosting that page, which would then respond with an HTML file. Finally, your browser would display the HTML document to you.

This was pretty slow in those days, mainly because computers were slower, and Internet infrastructure was not as advanced as it is today. But all in all, this was a pretty solid system. It was simple, and it worked. Life was good.

There are a lot of pieces missing here, like DNS servers and Internet routers, but this is the basic idea.

This “static file” approach was very limited, though. What about pages that need to display data from a database, which is changing all the time? What about pages containing information that should only be rendered some of the time? Do we have to create separate HTML files for every possible page the user might see?

Server-side rendering

To enable more dynamic Web pages, technologies like PHP were introduced, which made it easy to return different HTML responses for the same URL, dependent on different conditions. This approach is called server-side rendering, because the server “renders” (generates) the HTML page on-the-fly for each request, and then sends it back to the user.

In reality it’s a lot more complicated than this… but hopefully you get the basic idea!

PHP was a fantastic idea. It let you write your page in plain HTML, and as soon as you realized that a section of the HTML had to be dynamically computed, you could switch into PHP mode and write the logic that decides what belongs in that chunk of the page. Then you could switch back to HTML mode and carry on with plain old HTML.

A basic PHP page. The server transforms the <?php> parts into HTML and sends it back to the user. The non-PHP parts are static (the same for every request).

The server-side rendering approach enabled tons of great Web applications (like email, online banking, and more), but at the cost of page load time. Every time the server needed to read some data from a database, that data could be traveling hundreds or even thousands of miles across the globe.

Even if data were transmitted at the speed of light, downloading an HTML page could take several seconds, especially as more requests are added and the amount of data loaded into the page increases.

But despite these drawbacks, this approach worked well, and people were happy.

JavaScript and AJAX

Server-side rendering was only one way that the Web became more interactive and interesting.

As browsers improved and computers got more powerful, interactive Web pages became more and more common, all thanks to JavaScript. Drop-down menus, popups, date range pickers, interactive charts, and even interactive maps became possible.

Using AJAX, Web pages could update themselves with fresh data without the user needing to fully reload the page. This made Web pages feel even more like desktop apps, which let you update your data and see the result almost immediately without the screen going blank and being rendered from scratch.

Client-side rendering and SPAs

As computers started getting faster and JavaScript became more and more central to Web page functionality, Web developers became sick of writing JavaScript code and PHP code and HTML code and CSS code…and having to make sure all these languages played nicely together, and having to mentor all of their team members on so many different technologies…

So they thought, “what if we ditched PHP, and rendered the entire page using only JavaScript, running right in the Web browser? Then we’d have one less programming language to worry about!”

And that is exactly what they did! Libraries like jQuery were introduced, which made it even more convenient to update a Web page dynamically using JavaScript and fetch data using AJAX, and many people thought that jQuery was the best thing since sliced bread.

Later, JavaScript frameworks like Ember.js and Angular.js evolved. (Those links point to snapshots of those sites from the early 2010s — check them out!) These frameworks were a bit more complex than jQuery, but they allowed people to be very productive when writing Web apps. They made Web development simpler by encouraging people to write most of their code on the client side, instead of worrying so much about the server side.

Around this time, the concept of a “single page application” (SPA) took shape, in which a Web page only sent one request to the server to fetch the initial HTML, and then every update to the page afterwards was done with JavaScript and AJAX, instead of completely reloading the page.

The “new world” of client-side rendering, JS frameworks, and SPAs looked something like this:

With client-side rendering, the Web gets even more complex.

Client-side rendering made Web developers and tech companies very happy. They could save on computing costs by having the user execute code on their own machines, and it was much easier to write and maintain code that ran mostly on the client side, rather than writing the code in both PHP and JavaScript.

As libraries like jQuery, Ember, Angular, React, and Vue were introduced, and tools like create-react-app and bower made it even easier to use these libraries, it became even more convenient and trendy to do 100% of the rendering right in the Web browser.

Problems with client side rendering

Client-side rendering is great for productivity, because it enables developers to focus mostly on JavaScript. But it’s not necessarily better for the user.

Why? Because downloading JavaScript, executing it, and manipulating the DOM through JavaScript are all costly.

Before, with server-side rendering, the browser just requested the final HTML response directly from the server, and maybe a few scripts would load to add a few more things to the page.

Now, with client-side rendering, the browser receives a mostly blank HTML page, then starts downloading JavaScript, then parses, compiles, and executes the JavaScript, which then generates HTML, and then the page is finally rendered.

This all takes time.

Loading time is made even worse when many scripts and other files (like images and CSS) are loaded at the same time, which is often the case for modern Web apps.

Now, you might be thinking that browsers, data centers, CPUs, disk drives, network infrastructure, caching, and RAM are all getting better and faster as time goes on, so is this extra cost even noticeable?

Well, these advancements don’t happen fast enough to keep up with the demands of modern Web apps. Worse yet, users in countries with fewer resources (where the average computer is older and less powerful, and Internet infrastructure is not upgraded as frequently), are stuck with a worse experience, with Web pages that load painfully slowly.

And there was an even more pressing issue: the mobile Web. More people were using their mobile devices to access Web pages. When smartphones and laptops first came out, they had only a fraction of the power of desktop computers, and client-rendered pages built with frameworks like Angular.js were simply too slow — the phones themselves were not powerful enough, and wireless network speeds were not as great as they are today.

In addition to being slower than server-side rendering, client-side rendering introduced another huge problem: Search Engine Optimization (SEO).

If you don’t know how SEO works, here’s a brief summary. Search engines like Google have special programs called “crawlers,” which “crawl” the Web by starting with a curated list of Web pages (e.g. wikipedia.org), then following all the clickable links from those pages, and then following the links from those pages, and so on. Eventually, they will reach every part of the “public Web.”

With client-side rendering, recall that the server sends back a mostly blank initial HTML response, which then gets updated by JavaScript. This is a problem, because search engines started getting empty pages back from servers, which now just had <script> tags in them that needed to be run in order to see the final page! So, client-side rendered pages were said to have “poor SEO.”

Naturally, search engines solved this problem by simulating a real browser and actually running all the JavaScript on the page, then scanning the final page after all the scripts were run and the page was fully “visible.”

However, this solution introduced many problems of its own. It is slower and costlier for search engines to simulate a Web browser when scanning your page, and often more error-prone. Google has even written a comprehensive guide on how to write your Web page to be SEO-friendly.

“Fantastic,” thought developers everywhere. “Even more problems for us to worry about when developing Web apps!”

The return of server-side rendering

Many Web developers who had gotten on board with client-side rendering ultimately took a step back and did some thinking when they realized their pages were too slow.

They realized that ultimately, they had to make the switch back to server-side rendering. But at the same time, they wanted to stay free of the perils of PHP.

Fortunately, Node.js was created — a technology which allows writing a Web server in JavaScript!

A simple Node.js server using express
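
The code in that screenshot isn’t reproduced here, but a minimal Express server, sketched from scratch rather than copied from the original image, looks roughly like this:

    // server.js -- a minimal Node.js web server using Express.
    const express = require('express');

    const app = express();

    // Respond to requests for the home page with a small HTML snippet.
    app.get('/', (req, res) => {
      res.send('<h1>Hello from Node.js!</h1>');
    });

    // Start accepting connections (the port number is arbitrary).
    app.listen(3000, () => {
      console.log('Listening on http://localhost:3000');
    });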

Many people were skeptical about Node at first; after all, JavaScript was slow, even when run on a powerful Web server. But at least it was faster to run JavaScript on the server than on a tiny mobile device! Plus, as JavaScript engines got better and faster, Node-based servers sped up too, just by installing software updates. The performance problem with Node.js became less and less of an issue, and many developers started to embrace JavaScript on the server.

At the same time, the npm ecosystem was thriving. More and more powerful open source JavaScript libraries became easily usable in both the browser and the server. Node.js frameworks like Meteor were created, blurring the line even more between client-side and server-side development.

Node.js and JavaScript were booming.

Meanwhile, people did not want to ditch their Web frameworks like Angular and React. Libraries like ReactDOMServer, vue-server-renderer, and Angular Universal made it even easier to render the app on a Node.js server using the same code that the client would have used, if the page were client-side rendered instead.

These frameworks introduced a hydration step to the initial page load, which allows client-side JavaScript to pick up where the server-rendered HTML left off. This approach allows the client to get the important parts of the HTML page sooner, and then progressively enhance the page with more interactivity later on, with minimal effort from the developer.
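
As a rough sketch (using real React APIs of that era, but with an assumed ./App component and /bundle.js path), the two halves of this setup might look like this:

    // server.js -- render the React app to an HTML string on the server.
    const express = require('express');
    const React = require('react');
    const ReactDOMServer = require('react-dom/server');
    const App = require('./App'); // assumed: a plain React component

    const server = express();

    server.get('*', (req, res) => {
      // Produce real HTML on the server, so the browser can show it right away.
      const html = ReactDOMServer.renderToString(React.createElement(App));
      res.send(`<div id="root">${html}</div><script src="/bundle.js"></script>`);
    });

    server.listen(3000);

    // client.js (compiled into /bundle.js) -- instead of rendering from scratch,
    // the client "hydrates" the HTML the server already sent, attaching event
    // handlers to the existing DOM nodes:
    //
    //   import React from 'react';
    //   import ReactDOM from 'react-dom';
    //   import App from './App';
    //
    //   ReactDOM.hydrate(<App />, document.getElementById('root'));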

At this point, life is pretty good. We’re now back to server-rendered pages which are much lighter on the client, and much smarter, in the sense that they deliver the initial HTML response to the client sooner.

When combined with techniques like lazy module loading, code splitting, and bundling, this new world can result in a page that loads much faster than a fully client-rendered page.

Introducing Next.js

So, at this point, it might seem like we’ve solved most of our problems, and we have a pretty good system going on.

We can write all of our code in JavaScript. The page is mostly rendered on the server. We still get to use our much-loved Web frameworks like Angular and React.

In reality, though, there were a few big problems that needed addressing:

  1. Server-side rendering with client-side hydration was not easy to set up! Even with frameworks like Meteor and modern React tools (like create-react-app), developers had to read lots of documentation to understand how server-side rendering works, why they need it, and how to wire it up. For most people trying to get something done, it was just too much work.
  2. Not all pages contain dynamic data that requires a Web server to read from a database. In fact, a huge fraction of Web pages can just be static HTML files, like they were in the very, very early days of the Web. Landing pages, help pages, contact forms, terms of service/privacy policy pages, career directories, sitemaps, and blogs are all examples of pages that could be static.
  3. Taking #2 a step further, even the pages which read from a database can still be represented as static pages! The app’s “skeleton” HTML can be sent straight to the client, and then the client can make AJAX requests to fetch data. So ideally, we don’t need a complicated Web server to host our pages! We just need a separate API server that the static HTML file can make its requests to.

With these optimizations, loading a Web page might look something like this:

“But why would we want all of our HTML files to be static? Won’t that mean that we will have less feature-rich apps?”

Well, “static” does not mean “motionless!” Static pages can load scripts of their own, containing complex JavaScript and CSS. The scripts can then do “dynamic” stuff to the page. You can even use React to create a static page. (Spoiler alert: that’s what Next.js does!)

“How does this static file approach improve anything? We still have to make a Web server in order to get the static file anyway, right? So why not just request the file directly from our cool Node.js server that will do our server-side rendering on the fly?”

Well, the answer to this is a bit complicated.

The first thing to understand is that when you visit a Website, your Web browser cannot do anything until it gets a response back from the Web server. As a result, this initial HTML request is the single biggest bottleneck in the entire page load sequence. Even if your page loads dozens of scripts and stylesheets and you have done tons of optimizations to make them all load super fast, none of that can happen until the initial HTML comes back!

So, we really want to make sure that the browser gets back the initial HTML response for our site as quickly as possible.

“So, what do static files have to do with the initial HTML response coming back quickly? Can’t we just run a bunch of copies of our server around the globe, close to where users are located? That way, requests to our servers would come back super fast.”

Sure, that’s one way to do it. But that would also be super expensive! Most people can’t afford to pay for so many servers to be running around the globe at all times.

Fortunately, we have CDNs, which are cheap, global networks of machines that are heavily optimized for storing static files around the world. If we could somehow make sure that all of our Web pages were just static HTML files, then we could leverage CDNs to cheaply distribute our app around the world, making it readily available to users.

Side note: In the latest Web jargon, you might hear that a file is served from “the edge” — that’s pretty much what this is referring to. You might have also heard of Google’s “AMP” pages — AMP leverages Google’s CDN to store and serve special types of HTML files (called AMP files), which place tight restrictions on the HTML in order to prevent it from doing anything that would slow down the initial page load.

So, with static HTML served from a CDN, we get the initial HTML to the user super quickly, and the page loads almost immediately if they are close enough to a CDN edge.

But it gets even better. If a user has already visited a page, then the HTML file might already be stored in their browser’s cache. If the cached HTML is already up to date, then the CDN server responds with 304 Not Modified, and the browser knows to serve up the HTML directly from disk. The CDN doesn’t even have to send the HTML file back to the browser!

With caching and CDNs, a Web page’s loading sequence now looks more like this:

…but can we optimize this even more?

Yes! The initial page load sequence can be optimized using tons of ingenious techniques like prefetching, preloading, preconnecting, SWR, font and stylesheet inlining, service worker caching, and much more — but I won’t go into all of those techniques here. The main speedup I want to focus on is the one gained by serving up a static HTML page so that it can be cached by CDN servers, because this single speedup has the biggest positive impact on page load time.

So, now we know that CDN technology is the primary tool at our disposal for ensuring that the initial HTML response is returned as quickly as possible.

With this in mind, here’s our new strategy for making sure our site loads as quickly as possible:

  1. If a particular page on our site is 100% static (contains no dynamic content whatsoever), then we should make sure it is available on a CDN as a static HTML file. Otherwise, the user would have to make an unnecessary request to our server, which may be far away from them, and which would incur extra cost by recomputing the exact same HTML output for every single request.
  2. If a Web page does have dynamic functionality, then we should try to make as much of the page static as possible, so that the static parts can load almost instantly, and then we should enhance the page by fetching the data that we need from our server.

OK, this seems like a great strategy. But how do we actually implement it? What code do we need to write? Do we have to maintain separate code bases for our static pages and our dynamic pages? Do we have to write the static parts in pure HTML/CSS and then use plain old JavaScript and AJAX to update the HTML (the old, inconvenient way of doing things)?

Is there a better way?

Next.js to the rescue: automatic static optimization!

So, I mentioned near the start of the article that Next.js’s killer feature is automatic static optimization. It implements exactly the strategy we talked about above: making as much of the site static as possible, and producing CDN-friendly static output for pages that are 100% static.

And it does all of this automatically!

At the same time, Next.js is not overly restrictive like AMP: it lets you write your app in the same JavaScript and React that you’re used to, so you can make use of state-of-the-art UI libraries like material-ui and react-final-form. Next.js also provides a built-in way to implement a REST API, so you can define endpoints for the UI to fetch data from (if needed).
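
As a quick illustration, any file under pages/api/ becomes an endpoint (the route name and response data here are made up):

    // pages/api/products.js -- a Next.js API route (illustrative example).
    // Requests to /api/products are handled by this function.
    export default function handler(req, res) {
      // In a real app this would read from a database or an external service.
      res.status(200).json([{ id: 1, name: 'Towel' }]);
    }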

Okay, that sounds amazing. But how is it possible?

It’s actually pretty simple! Next.js works like this:

  • Each page (“route”) in your application is associated with a single React component. For example, /home is associated with HomeComponent.
  • Pages can optionally fetch data asynchronously if they need to, using a special function called getServerSideProps, which is exported from the same file as the page component. The data it returns is passed to the component as props when the page renders (see the sketch below).
  • If getServerSideProps is used, then the page is marked “dynamic.” Otherwise, it’s marked “static.”
  • Pages that are marked “static” are built once at compile time, and they are distributed to CDNs.
What it looks like when you build a Next.js app
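
Here’s a rough sketch of what a “dynamic” page might look like; the page name and the fetchCurrentUser stand-in are invented for illustration:

    // pages/dashboard.js -- a "dynamic" page, rendered on every request
    // because it exports getServerSideProps.

    // Stand-in for whatever data source the page really needs.
    async function fetchCurrentUser() {
      return { name: 'Arthur' };
    }

    export default function Dashboard({ user }) {
      return <h1>Welcome back, {user.name}!</h1>;
    }

    // Runs on the server for every request to /dashboard.
    export async function getServerSideProps() {
      const user = await fetchCurrentUser();
      return { props: { user } };
    }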

To make a page static and still have dynamic functionality, all you have to do is avoid using getServerSideProps, and instead use a library like SWR that makes it easy to request data on the client side instead of the server side.
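
A sketch of the same kind of page done the “static” way with SWR (the /api/user endpoint is just an assumed example):

    // pages/dashboard.js -- the "static" version of the same page.
    // With no getServerSideProps, Next.js emits this page as a static HTML
    // file at build time; the data is fetched in the browser after load.
    import useSWR from 'swr';

    const fetcher = (url) => fetch(url).then((res) => res.json());

    export default function Dashboard() {
      // '/api/user' is an assumed API route for this example.
      const { data: user, error } = useSWR('/api/user', fetcher);

      if (error) return <p>Something went wrong.</p>;
      if (!user) return <p>Loading...</p>;
      return <h1>Welcome back, {user.name}!</h1>;
    }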

Incremental static generation

“But what about pages like /products/:productid? How can these types of pages be generated statically? Wouldn’t we have to generate static pages for all possible products at build time? What if we add more products?”

These are great questions, and Next.js has a great answer: incremental static generation.

The way this works is that the first time a page is requested, Next.js renders it on the server on demand, but it also saves a static copy of the result and distributes it to the CDN, so that subsequent requests can be served statically.

So, the first time someone views a product, it will be a bit slow, but then it will be super fast for everyone else that views that product afterwards.
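
In code, this is driven by getStaticProps and getStaticPaths. A rough sketch, using the fallback: 'blocking' option available in newer Next.js releases and an invented getProduct helper:

    // pages/products/[productId].js -- incremental static generation, sketched.

    // Stand-in for a real product lookup (database, CMS, etc.).
    async function getProduct(productId) {
      return { id: productId, name: `Product ${productId}` };
    }

    export default function ProductPage({ product }) {
      return <h1>{product.name}</h1>;
    }

    export async function getStaticPaths() {
      // Pre-render nothing at build time; 'blocking' tells Next.js to render
      // each product page on its first request, then reuse the static copy.
      return { paths: [], fallback: 'blocking' };
    }

    export async function getStaticProps({ params }) {
      const product = await getProduct(params.productId);
      // revalidate allows the static copy to be refreshed in the background.
      return { props: { product }, revalidate: 60 };
    }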

Check out this demo of incremental static regeneration to see it in action!

The routing system — how flexible is it?

I mentioned before that Next.js has a file-system based routing architecture, associating each route with a React component.

This architecture is somewhat controversial, and has caused many developers to shy away from Next.js entirely. Some developers wind up with the belief that requiring a file for each route is just an odd, inflexible design choice that doesn’t scale to larger apps. But how much merit is there to these kinds of claims?

Next.js’s page-based routing system is indeed very opinionated. But that’s because it is heavily optimized to make the most common use cases extremely fast. Specifically, Next.js performs automatic code splitting when it builds each page, so that the code for each page is only loaded when you request that page, and no sooner.

For those rare cases where you don’t want to deal with Next’s routing system (which will probably only happen if you are migrating a legacy app to Next.js), there is a beautifully simple escape hatch: Next.js exposes a minimal API that lets you imperatively render any page from your Node.js server. Here’s an example. But in my experience, there has been no need to use the imperative API. Even that linked example could be implemented using file-based routing by creating a file called /products/[handle].js.
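
For a sense of how little code that file-based version would take, here’s a rough sketch, reading the dynamic segment on the client with useRouter:

    // pages/products/[handle].js -- file-based dynamic routing, sketched.
    import { useRouter } from 'next/router';

    export default function ProductPage() {
      // The [handle] segment of the URL shows up in the router's query object.
      const { handle } = useRouter().query;
      return <h1>Product: {handle}</h1>;
    }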

Next.js deployment: how easy is it?

Next.js sounds fantastic. But how do you launch an app built with Next.js?

One option is to use Vercel, which allows you to launch a Next.js app and get the static files super close to your users in only a few minutes.

Vercel works like this:

  • You push your code to GitHub, then grant Vercel access to the repository.
  • Vercel automatically builds your app and deploys it to a live domain ending with .now.sh. You can also purchase a domain through their site, if you want a custom domain. Vercel automatically rebuilds your app and deploys to this URL every time you push your code to Git.
  • Vercel distributes the statically generated HTML files to its CDN network, so you don’t have to worry about how the static pages will get close to your users.
  • Vercel automatically sets up a serverless cloud function for each page that uses getServerSideProps, as well as each API route in your app. If you’d like to learn more about serverless architecture, you can read more here. If you’ve ever worked with AWS Lambda or Google Cloud Functions, you’ll quickly realize that Vercel’s approach is way easier to deal with.

For me, this experience was super smooth and saved loads of time. By contrast, I tried deploying a Next.js app on Google Cloud, and it was a hassle. I had to create a Dockerfile that builds the app and then listens on $PORT, set up a cloudbuild.yaml file to build a Docker image from the app, configure a build trigger to automatically build the app when it was pushed to GitHub, update my cloudbuild.yaml file to automatically clean up unused Docker images, and set up a billing alert to make sure I was always on the free tier.

After all that configuration, I still needed to get the CDN working, figure out whether it’s possible to get Google Cloud working with Next.js’s new incremental static generation, and figure out how to get pages and API routes set up as Firebase Cloud Functions. Overall, it doesn’t seem worth it. Services like Google Cloud and AWS are extremely flexible, but from what I have seen, they are not as optimized as Vercel is for deploying Next.js apps.

Netlify and GatsbyCloud are other services that offer a similar, “zero-config” approach to launching a Next.js app, but it’s hard to give an opinion on them that wouldn’t be outdated a month from now.

Conclusion

  • Next.js is a powerful React framework optimized for super fast page load times that can handle both static and dynamic sites extremely well.
  • To fully leverage the power of Next.js, avoid using getServerSideProps and getInitialProps unless it is absolutely necessary, and make sure that you write your dynamic pages by fetching data on the client side using a library like SWR.
  • Even for pages that accept dynamic URL parameters like /products/1234, you can use Next.js’s incremental static generation feature to generate static pages at runtime.
  • When deployed on Vercel, Next.js “just works.” Your app is automatically distributed to servers around the globe so that it loads super fast everywhere. Since Vercel automatically manages servers for you, you never have to worry that you’re paying for servers that aren’t being used, or not paying for enough servers, making your app slow.

Thanks for reading! Hopefully you enjoyed the article and have a better understanding of Next.js!
