A Small, Honest react-bun-ssr vs Next.js Benchmark on Bun and Node
A narrow production benchmark comparing react-bun-ssr with the same Next.js app running on Node and Bun across a markdown content page and a local-data SSR page.
I wanted a first benchmark for react-bun-ssr that says something real without pretending to say everything.
react-bun-ssr is a Bun-native SSR React framework with file-based routing, loaders, actions, streaming, soft navigation, and first-class markdown routes. I chose to build it on Bun because I wanted the framework to start from Bun's runtime, server model, bundler, file APIs, and markdown support instead of treating Bun as a compatibility target. If you want the longer background on that decision, I wrote about it here: Why I Built a Bun-Native SSR Framework.
So this is intentionally narrow. It is not a universal "framework X is faster than framework Y" post. It is one production-mode comparison against the same Next.js app running on both Node and Bun in two scenarios that match the current shape of react-bun-ssr especially well:
- a docs-like markdown content page
- an SSR page that reads a local JSON file and renders a non-trivial HTML list
That is the whole scope.
Why these two scenarios
The markdown route is the clearest current strength of react-bun-ssr.
The framework already treats .md files as first-class routes, and the docs site in this repository uses that model directly. If I want to measure a real differentiator instead of inventing a synthetic micro-benchmark, that is the obvious place to start.
The local-data SSR page is the secondary scenario because it exercises a different path:
- read a file at request time
- run server-side rendering
- produce a fairly large HTML response
That gives a second datapoint without collapsing the whole benchmark into a raw JSON endpoint test.
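To make that path concrete, here is a minimal TypeScript sketch of the rendering half of the /data scenario. The item shape, field names, and synthetic data are illustrative assumptions, not react-bun-ssr's actual loader API or the real fixture; in the real benchmark the HTML comes from React SSR rather than string building:

```typescript
// Illustrative item shape -- the real data-benchmark.json fixture may differ.
type CatalogItem = { id: number; name: string; price: number };

// Render a catalog list as an HTML string, one <li> per item.
function renderCatalog(items: CatalogItem[]): string {
  const rows = items
    .map((item) => `<li data-id="${item.id}">${item.name}: $${item.price.toFixed(2)}</li>`)
    .join("");
  return `<ul class="catalog">${rows}</ul>`;
}

// 100 synthetic items stand in for the real fixture data.
const items: CatalogItem[] = Array.from({ length: 100 }, (_, i) => ({
  id: i,
  name: `Item ${i}`,
  price: 9.99 + i,
}));

const html = renderCatalog(items);
console.log(html.length);
```

Even this stripped-down version produces a few kilobytes of HTML per request, which is the point: the scenario measures file I/O plus real rendering output, not a one-byte response.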
I explicitly did not make a plain JSON API benchmark the headline result. react-bun-ssr's json() helper is intentionally thin, so a JSON-only comparison would say more about runtime and serialization details than about the framework itself.
What I measured
/content
All three measured app/runtime combinations render the same authored markdown fixture:
- 2,382 words
- frontmatter
- 13 rendered h2/h3 headings
- 6 fenced code blocks
For react-bun-ssr, this is a first-class .md route.
For Next.js, this is the same App Router page importing the shared content through the official @next/mdx setup, measured once with Node and once with Bun.
/data
All three measured app/runtime combinations read the same local data-benchmark.json file on every request and render the same 100-item catalog grid on the server.
That is still a small benchmark app, but it is not a one-line hello-world route. It forces both apps to do file I/O plus HTML rendering work on each request.
Exact setup and fairness rules
These numbers were generated on March 30, 2026 on the same local machine:
- Apple M1 Pro
- react-bun-ssr built and served with Bun 1.3.10
- Next.js 15.5.14 built and served with Node v22.18.0
- the same Next.js 15.5.14 app also built and served with Bun 1.3.10
- @next/mdx 15.5.14
- autocannon 8.0.0
Benchmark rules:
- production mode only
- 5 clean builds per app
- 200 warm-up requests per route
- 5 measured runs per route
- autocannon with concurrency 50 and duration 20s
- one app running at a time on localhost
- no CDN, no compression tuning, no database, no remote fetches, no dev mode
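For readers unfamiliar with autocannon, the measured runs boil down to invocations along these lines. The port is an assumption for illustration; adjust it to wherever the app under test is serving:

```shell
# 50 concurrent connections (-c), 20 second duration (-d), one route at a time.
# Assumes the production build is already serving on localhost:3000.
autocannon -c 50 -d 20 http://localhost:3000/content
autocannon -c 50 -d 20 http://localhost:3000/data
```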
This benchmark now includes both sides of the runtime question for Next.js:
- react-bun-ssr on Bun
- Next.js on Node
- Next.js on Bun
That gives a more useful view than a single Next.js runtime number. It shows how much of the result is about framework design and how much of it shifts when the same Next.js app changes runtimes.
Results
All values below are medians from the five measured runs.
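For clarity, "median" here means the standard middle-of-sorted-values median over the five runs. A minimal TypeScript sketch of that reduction, with made-up run values for illustration:

```typescript
// Median: sort a copy, take the middle element (or average the two middles).
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five hypothetical measured runs (illustrative values, not the real data).
const runs = [4710.2, 4760.4, 4734.73, 4698.5, 4741.0];
console.log(median(runs)); // 4734.73
```

With five runs, the median simply discards the two fastest and two slowest runs, which makes it robust against a single outlier run in either direction.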
Clean build
| App | Median clean build |
|---|---|
| react-bun-ssr | 0.13s |
| Next.js on Node | 7.59s |
| Next.js on Bun | 8.74s |
Warm serve
/content
| App | req/s | avg latency | p95 latency |
|---|---|---|---|
| react-bun-ssr | 4734.73 | 10.05ms | 16ms |
| Next.js on Node | 335.23 | 148.08ms | 179ms |
| Next.js on Bun | 429.57 | 115.49ms | 143ms |
/data
| App | req/s | avg latency | p95 latency |
|---|---|---|---|
| react-bun-ssr | 831.13 | 59.54ms | 95ms |
| Next.js on Node | 109.29 | 451.86ms | 492ms |
| Next.js on Bun | 157.14 | 315.90ms | 363ms |
Interpretation
For these two scenarios, react-bun-ssr is substantially faster than both Next.js variants on this machine.
The markdown route is the strongest signal, which is not surprising. That benchmark lines up directly with a current design advantage of the framework: first-class markdown routes in a Bun-native pipeline.
The /data route is also clearly ahead, but the gap is smaller than on /content. That still makes sense. Once both apps are doing more server rendering work over a 100-item page, the benchmark shifts from "who handles content routing and content rendering better" toward "who moves through general SSR work faster."
The extra Next.js-on-Bun column matters for exactly that reason. On this machine, the same Next.js app got noticeably better warm-serve throughput and latency on Bun than on Node for both routes, but its clean build median was still slower on Bun than on Node. That is useful nuance, and it is exactly why I wanted both runtime columns instead of a single Next.js number.
It also says something broader about Bun as a framework foundation. Next.js is not designed first around Bun, and this benchmark did not add Bun-specific optimizations to the Next app. Even so, the Bun run improved the same Next.js app from 335.23 req/s to 429.57 req/s on /content, and from 109.29 req/s to 157.14 req/s on /data, while also lowering both average and p95 latency. That does not prove Bun automatically wins every workload. It does show that Bun is already a meaningful lever even for a framework that is not primarily built around it.
That is one of the biggest reasons I still think Bun is a very good idea for a framework. A Bun-first framework is not only betting on raw speed. It is betting on a simpler foundation: one runtime, one server model, one bundler, strong file primitives, built-in markdown support, and a more direct deployment story. That gives framework authors more room to turn runtime advantages into actual product features instead of spending that energy on adapter layers and Node-first compatibility seams. For react-bun-ssr, that is exactly the appeal: Bun is not just a faster engine under the hood, but the base that lets first-class markdown routes, a Bun-native SSR pipeline, and a smaller framework surface make sense together.
Next.js Bun vs Node benchmark
The short answer from this run is simple:
- the same Next.js app was faster on Bun than on Node for warm SSR in both benchmark routes
- the same Next.js app still built faster on Node than on Bun in this benchmark
Here is the direct Next.js-on-Node vs Next.js-on-Bun comparison from the same benchmark run:
| Scenario | Next.js on Node | Next.js on Bun |
|---|---|---|
| Clean build | 7.59s | 8.74s |
| /content req/s | 335.23 | 429.57 |
| /content avg latency | 148.08ms | 115.49ms |
| /content p95 latency | 179ms | 143ms |
| /data req/s | 109.29 | 157.14 |
| /data avg latency | 451.86ms | 315.90ms |
| /data p95 latency | 492ms | 363ms |
So for this particular comparison, Bun improved warm-serve throughput by roughly 28% on the markdown content page and roughly 44% on the local-data SSR page, while also lowering average and p95 latency on both routes. Node still had the better clean production build time.
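Those percentages come straight from the req/s medians in the table above; a quick TypeScript check of the arithmetic:

```typescript
// Relative warm-serve improvement when only the runtime changes.
function improvementPct(before: number, after: number): number {
  return ((after - before) / before) * 100;
}

console.log(improvementPct(335.23, 429.57).toFixed(1)); // /content: "28.1"
console.log(improvementPct(109.29, 157.14).toFixed(1)); // /data: "43.8"
```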
That is the useful part of the side-by-side result: not just "Bun is fast" as a slogan, but a concrete comparison where the framework stays the same and only the runtime changes.
That is another reason I like Bun as a framework base. If an unoptimized Next.js app already picks up a meaningful warm-serve improvement from Bun, then a framework that is intentionally designed around Bun has even more room to turn that runtime headroom into faster docs pages, landing pages, and content routes.
What this benchmark does not claim
This benchmark does not prove that react-bun-ssr is faster than Next.js for every application or every runtime setup.
It does not cover:
- remote data fetching
- database-backed loaders
- caching layers
- edge deployment
- CDN behavior
- React Server Components tradeoffs in larger app shapes
- developer experience
- dev server performance
- JSON API-only workloads
It is best read as:
react-bun-ssr is already very competitive, and in the content-heavy SSR scenarios it is currently built around, it can be extremely fast.
That is a useful claim. It is also a much smaller claim than "this framework beats Next.js everywhere."
Reproducing it
The executable benchmark suite now lives in a separate repository so it can stay isolated from the main react-bun-ssr codebase.
The public benchmark repository is here: react-bun-ssr-benchmark.
In that standalone benchmark project, the runner command is:
bun run bench:run
It writes the raw JSON report plus a Markdown-ready summary to that benchmark project's results/ directory.
That keeps this docs repository focused on the framework and the article itself, while the benchmark project can evolve independently and be rerun against published package versions.