▸ Mental model — what await compiles to
An async function is a state machine. The body up to the first await runs synchronously on the caller's stack. Everything after each await becomes a continuation handed to a .then.
You write
```js
async function f() {
  doSync();
  const v = await slow();
  useResult(v);
}
```
The engine effectively runs
```js
function f() {
  return new Promise((resolve, reject) => {
    try {
      doSync(); // sync, on caller stack
      Promise.resolve(slow()).then((v) => {
        try {
          useResult(v);
          resolve();
        } catch (e) {
          reject(e);
        }
      }, reject);
    } catch (e) {
      reject(e);
    }
  });
}
```
▸ Every await is a microtask boundary
Each await costs at least one microtask tick — even on already-resolved values. Calling an async function returns immediately; the body runs in pieces separated by these boundaries.
```js
async function f() {
  console.log('1'); // runs on caller stack
  await 0;          // ───── BOUNDARY 1 ─────
  console.log('2'); // microtask M_a
  await 0;          // ───── BOUNDARY 2 ─────
  console.log('3'); // microtask M_b
}

f();
console.log('caller'); // runs before M_a — caller wins

// Output: 1 caller 2 3
```
Rough timeline
```
stack:  [f starts] [log 1] [log caller]
microQ: → [continue f: log 2] → [continue f: log 3]
time:   ────────────────────────────────────────────────────────────►
```
▸ trap #1 Lost parallelism
The most common migration regression. Two independent calls that used to run in parallel become serial because each await waits before the next call even starts.
Naïve await migration — sequential, slow
```js
const user = await fetchUser(id);
const orders = await fetchOrders(id);
return { user, orders };
// time: ─fetchUser─►─fetchOrders─►
```
Correct — parallel, fast
```js
const [user, orders] = await Promise.all([
  fetchUser(id),
  fetchOrders(id),
]);
// time: ─fetchUser─►
//       ─fetchOrders─►
```
Alternative: lift the calls, await later
```js
const userP = fetchUser(id);     // starts now
const ordersP = fetchOrders(id); // starts now, in parallel
const user = await userP;        // then wait
const orders = await ordersP;    // already in flight
```
For every await you write, ask: "is the next line dependent on this result?" If no — you've serialised something that didn't need to be.

▸ trap #2 forEach doesn't await
Array.prototype.forEach doesn't know about promises — it discards the return value of its callback. Async callbacks fire-and-forget.
Silently broken
```js
items.forEach(async (item) => {
  await process(item);
});
console.log('done'); // fires BEFORE process() completes
```
Sequential
```js
for (const item of items) {
  await process(item);
}
console.log('done');
```
Parallel + awaited
```js
await Promise.all(
  items.map((i) => process(i))
);
console.log('done');
```
Async iterators: for await ... of
```js
for await (const chunk of stream) {
  process(chunk);
}
```
Works on anything implementing Symbol.asyncIterator — Node streams, async generators, async-iterable libraries.
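A minimal sketch of rolling your own async iterable with an async generator — `ticks` and `collect` are illustrative names, not library APIs:

```js
// Hypothetical async generator: yields 1..n, one timer tick apart.
async function* ticks(n) {
  for (let i = 1; i <= n; i++) {
    await new Promise((r) => setTimeout(r, 1)); // simulate async work
    yield i;
  }
}

// for await...of pulls each value as it settles, in order.
async function collect() {
  const out = [];
  for await (const t of ticks(3)) {
    out.push(t);
  }
  return out; // resolves to [1, 2, 3]
}
```

Any object with a `Symbol.asyncIterator` method can stand in for `ticks` here.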
.map, .filter, .reduce, .find with async callbacks return arrays of promises, not arrays of resolved values. Wrap in Promise.all if you need the values, or use a for-loop for sequencing.

▸ trap #3 Fire-and-forget rejections
Every promise created must be awaited or have a .catch attached. Otherwise its rejection becomes an unhandled rejection — logged by Node and, under the default --unhandled-rejections=throw mode, fatal to the process.
Leaks rejection
```js
async function leaks() {
  fetchUser(); // not awaited!
}
```
Awaited
```js
async function safe() {
  await fetchUser();
}
```
Explicit fire-and-forget
```js
function background() {
  fetchUser().catch(logError);
}
```
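As a last line of defence (not a replacement for awaiting), Node lets you observe rejections nothing handled. A sketch, with the `seen` array purely to make the effect visible:

```js
// Collect reasons so the handler's effect is observable; real code would log.
const seen = [];
process.on('unhandledRejection', (reason) => {
  seen.push(reason); // e.g. report to your error tracker, then decide to exit
});

// A rejection with no await and no .catch attached:
Promise.reject(new Error('leaked'));
// Once the current turn's microtasks drain, Node emits 'unhandledRejection'.
```

Registering a listener also suppresses the default crash, so treat this as diagnostics, not control flow.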
- @typescript-eslint/no-floating-promises — flags un-awaited promises.
- @typescript-eslint/no-misused-promises — flags promises in places they shouldn't be (e.g. as a boolean condition).
- @typescript-eslint/require-await — flags async functions that never await (often a sign of an accidental async).
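A sketch of enabling those three rules in a legacy `.eslintrc.cjs` setup; the promise rules need type information, so `parserOptions.project` is assumed to point at your tsconfig:

```js
// .eslintrc.cjs — assumes @typescript-eslint/parser and the plugin are installed
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: { project: './tsconfig.json' },
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-floating-promises': 'error',
    '@typescript-eslint/no-misused-promises': 'error',
    '@typescript-eslint/require-await': 'warn',
  },
};
```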
▸ trap #4 return await vs return
Two differences: stack traces and try/catch boundaries.
Inside try/catch — meaningfully different
Rejection escapes try
```js
async function f() {
  try {
    return mayReject(); // NOT caught
  } catch (e) {
    return fallback();
  }
}
```
Rejection caught locally
```js
async function f() {
  try {
    return await mayReject(); // caught here
  } catch (e) {
    return fallback();
  }
}
```
Outside try/catch — stack trace clarity
```js
// Functionally identical for the caller:
async function a() { return slow(); }       // a() not in stack trace
async function b() { return await slow(); } // b() in stack trace
```
If slow() rejects, only b() stays on the async stack trace — a plain return promise lets a() drop off the async call chain before slow() settles. Modern V8 has removed the historical extra-microtask cost of return await, so use it freely.
Use return await inside try/catch; outside, it's a style choice (lean toward return await for trace clarity).

▸ trap #5 The first await changes the function's "shape"
Code before the first await runs on the caller's stack. Code after runs on a microtask. This affects error catching.
```js
async function f() {
  throw new Error('oops'); // before any await
}

try {
  f(); // throws synchronously? NO
} catch (e) {
  // not reached — the async function converts throws to rejections
  console.log('caught', e);
}
```
Inside an async function, every throw — whether it happens before the first await or after — becomes a promise rejection. The synchronous-looking throw still rejects asynchronously.
```js
try {
  await f(); // catches both sync and async throws
} catch (e) {
  console.log('caught', e.message); // 'oops'
}
```
A function that used to throw synchronously becomes async and now rejects. Callers that wrapped the call in a try/catch without an await silently miss the error.

▸ trap #6 Argument evaluation is eager + left-to-right
Function-call arguments are evaluated one at a time, left to right. await arguments are no exception: each await blocks the next argument from starting.
Sequential — waits for a() before starting b()
```js
log(await a(), await b());
// time: ─a─►─b─►
```
Parallel — both start, then await
```js
const [ra, rb] = await Promise.all([a(), b()]);
log(ra, rb);
```
Same trap inside object literals, array literals, JSX attributes — any expression context where multiple awaits are evaluated left-to-right.
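The object-literal form of the same trap, sketched with `a()` and `b()` as stand-in async functions:

```js
const a = async () => 'A';
const b = async () => 'B';

// Serial: property values evaluate left to right,
// so b() doesn't even start until a() has settled.
async function serial() {
  return { a: await a(), b: await b() };
}

// Parallel: start both, then assemble the object.
async function parallel() {
  const [ra, rb] = await Promise.all([a(), b()]);
  return { a: ra, b: rb };
}
```

Both return the same object; only the timing differs.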
▸ trap #7 await on a non-thenable still defers
Spec says await x ≡ await Promise.resolve(x). So awaiting a plain value still costs one microtask tick.
```js
async function f() {
  console.log('1');
  await 42; // 42 isn't a promise — still yields
  console.log('2');
}

f();
console.log('3');

// Output: 1 3 2
```
Idiomatic use: yielding to the event loop
```js
for (let i = 0; i < hugeBatch.length; i++) {
  processItem(hugeBatch[i]);
  if (i % 1000 === 0) await 0; // yields only to the microtask queue
}
```
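A quick probe of what that yield actually does — `demo` and `yieldToEventLoop` are illustrative names. A pending timer survives an `await 0`, but runs once you yield through the timer queue:

```js
// Portable macrotask yield: setTimeout(..., 0) reaches the timer queue,
// so pending timers/IO callbacks get a turn before we resume.
const yieldToEventLoop = () =>
  new Promise((resolve) => setTimeout(resolve, 0));

async function demo() {
  let timerRan = false;
  setTimeout(() => { timerRan = true; }, 0);
  await 0;                  // microtask hop only: the timer has NOT run
  const before = timerRan;  // false
  await yieldToEventLoop(); // macrotask hop: the timer fires first
  return { before, after: timerRan }; // { before: false, after: true }
}
```

In Node, `setImmediate(resolve)` does the same job without the timer clamp.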
await 0 only releases to the microtask queue — other microtasks will still beat any macrotask. For true yielding to timers/IO, use await new Promise(r => setImmediate(r)).

▸ trap #8 Microtask flooding
Microtasks drain to completion before any macrotask runs. A loop that keeps queueing microtasks — including a long async function with many awaits — can starve setTimeout, I/O callbacks, and HTTP responses.
```js
async function processForever() {
  while (true) {
    const item = getNextItem(); // sync
    if (!item) break;
    await Promise.resolve(handle(item)); // microtask, but no IO yield
  }
}

// Server handling other requests gets no CPU time —
// every "await" only releases the microtask queue.
```
The fix: periodic macrotask yields
```js
function yieldToIO() {
  return new Promise((resolve) => setImmediate(resolve));
}

async function processAll() {
  let count = 0;
  while (true) {
    handle();
    if (++count % 100 === 0) await yieldToIO();
  }
}
```
▸ trap #9 Top-level await serialises module evaluation
ESM modules can await at the top level. The whole module's evaluation becomes async, and every importer waits for it.
```js
// config.mjs
export const config = await loadConfig(); // blocks all consumers
```
```js
// caller.mjs
import { config } from './config.mjs'; // waits for loadConfig
console.log(config);
```
- Only works in ESM modules — not CommonJS (require).
- The module graph is evaluated in topological order; any cycle that includes a top-level await throws.
- A slow top-level await delays every dependent module's initialisation. Use sparingly.
- Good use: capability detection, dynamic config load, polyfill checks.
- Bad use: heavy data fetching at import time.
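One alternative when a slow loader would block the graph: export the promise and await at first use. A sketch — `loadConfig` here is a stand-in for the real loader:

```js
// In config.mjs you would write:  export const configPromise = loadConfig();
const loadConfig = async () => ({ port: 8080 }); // stand-in loader
const configPromise = loadConfig();              // starts now, blocks no importer

// Consumers import instantly and await only when they need the value:
async function useConfig() {
  const config = await configPromise; // shared promise; loader runs once
  return config.port;
}
```

The import stays synchronous; the wait moves from module evaluation to first use.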
▸ wins What you gain from migration
- Readability. Straight-line code reads like sync code. No nesting, no callback pyramid.
- Unified error handling. One try/catch covers both sync throws and async rejections inside the function — sometimes spanning many awaits.
- Stack traces. Modern V8 keeps async frames on the trace ("async stack traces"). .then callbacks historically lost this — the trace stopped at the platform.
- Conditional flow. if/else, for, while, switch work naturally — no nested .then(cond ? doA() : doB()).
- Easier refactors. Extracting a chunk of code into a helper doesn't require restructuring the entire promise chain.
- Debugger UX. Step-over now actually steps over an await. With .then, the debugger jumps to the callback registration site, not the resumption.
▸ checklist Migration playbook
- Find .then chains with 2+ sequential .thens — convert to await. Linear flow becomes linear code.
- Find Promise.all([...]) — keep them. Don't unwrap into sequential awaits.
- For every new await, ask "is the next line dependent on this?" If no — lift the call earlier or wrap in Promise.all.
- Replace .catch() with try/catch. Span the try over the right scope — not too narrow, not the whole function.
- Search for .forEach(async ...). Convert to for...of for sequential, Promise.all(arr.map(...)) for parallel.
- Inside try/catch, use return await. Outside, use return await for stack clarity (no runtime cost on modern V8).
- Enable linter rules: no-floating-promises, no-misused-promises, require-await.
- Audit any pre-existing try { syncFn() } sites: if syncFn became async, the catch no longer fires unless you add await.
- For long-running loops, add a periodic await new Promise(r => setImmediate(r)) to avoid microtask flooding.
- If you introduce top-level await, profile the module graph — slow import startup compounds across consumers.
▸ Part 1 — rewritten with await
The original used setTimeout + Promise.then as scheduling primitives. The await version turns each scheduled callback into an async function body and replaces the primitives with their await equivalents.
```js
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

async function timer1() {
  await delay(0); // was: setTimeout(..., 0) — macrotask boundary
  console.log('B');
  await null;     // was: nested Promise.resolve().then(...) for C
  console.log('C');
}

async function microtask1() {
  await null;     // was: Promise.resolve().then(...)
  console.log('D');
  await delay(0); // was: nested setTimeout(..., 0) for E
  console.log('E');
}

async function microtask2() {
  await null;     // was: Promise.resolve().then(...)
  console.log('F');
  await null;     // was: nested Promise.resolve().then(...) for G
  console.log('G');
}

console.log('A');
timer1();     // fire-and-forget — mirrors original scheduling
microtask1();
microtask2();
console.log('H');
```
Verified output (run via node): A H D F G B C E — identical to the original.
Two translation moves
- await delay(0) ≡ setTimeout(cb, 0). Both add a timer to the macrotask queue and resume after it fires. The await form costs one extra microtask hop (the timer resolves delay's internal promise, which then runs the continuation as a microtask), but that's invisible in this puzzle's output.
- await null ≡ Promise.resolve().then(cb). Spec-wise await x is await Promise.resolve(x), so awaiting any non-thenable forces one microtask boundary.
The snippet calls timer1()/microtask1()/microtask2() without awaiting — fire-and-forget, mirroring the original scheduling. In production code, you'd await Promise.all([...]) them or wrap each call in .catch(logError).

▸ Part 2 — rewritten with await
Two sub-puzzles, each translated separately.
Puzzle 1 — Promise executor body → async function with no awaits
```js
async function exec() {
  console.log('2');
  console.log('3');
}

console.log('1');
exec().then(() => console.log('4'));
console.log('5');
```
Output: 1 2 3 5 4 — identical to the original.
An async function with no awaits runs its body to completion synchronously on the caller's stack — just like a Promise executor. The wrapper still returns a promise, and attaching .then(cb) to it queues cb as a microtask. That's why '4' still prints after '5': the function body is sync, but the .then is async.

Puzzle 2 — chained .thens → awaits inside async functions
```js
async function chainA() {
  console.log('A1');
  await null;
  console.log('A2');
  await null;
  console.log('A3');
}

async function chainB() {
  console.log('B1');
  await null;
  console.log('B2');
  await null;
  console.log('B3');
}

chainA();
chainB();
```
Output: A1 B1 A2 B2 A3 B3 — identical to the original. Each await is a chain step; the two functions interleave at every boundary because they share the microtask queue.
A subtle observation when you combine the two
Running both puzzles back-to-back in one file gives 1 2 3 5 A1 B1 4 A2 B2 A3 B3 — not the original Part 2's combined output (1 2 3 5 4 A1 B1 A2 B2 A3 B3). Why?
- In the original, Promise.resolve().then(() => console.log('A1')) queues A1 as a microtask after cb_4 is queued. So 4 wins the drain.
- In the await rewrite, chainA()'s body logs A1 synchronously — there's no implicit Promise.resolve() at the start. So A1/B1 beat the microtask drain entirely.
To make the await version exactly trace-equivalent to the original combined run, add an initial await null; at the top of chainA and chainB — that recreates the leading Promise.resolve() microtask hop.
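That fix can be sketched end to end — `trace` and `log` stand in for console.log so the order is checkable:

```js
const trace = [];
const log = (x) => trace.push(x); // stand-in for console.log

async function exec() { log('2'); log('3'); }

async function chainA() {
  await null; // the extra leading hop, recreating Promise.resolve()
  log('A1'); await null;
  log('A2'); await null;
  log('A3');
}

async function chainB() {
  await null; // same treatment
  log('B1'); await null;
  log('B2'); await null;
  log('B3');
}

log('1');
exec().then(() => log('4'));
log('5');
chainA();
chainB();
// drain order now matches the original combined run:
// 1 2 3 5 4 A1 B1 A2 B2 A3 B3
```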
Translating between .then and await is usually trace-preserving when each unit is considered alone — but composition can shift ordering. If two parts of your codebase depend on the exact tick at which a value becomes available, migrating one to async/await without the other can introduce subtle races.

▸ Further reading
- MDN — async function
- MDN — await operator
- V8 blog — Faster async functions and promises (covers the return await + await optimisations)
- Jake Archibald — await vs return vs return await
- TC39 — Top-level await proposal
- Node.js — Unhandled rejection modes
- Previous parts: Part 1 (timers + microtasks) · Part 2 (executor + chain interleave)
▸ quiz Predict the output

```js
async function a() {
  console.log('a1');
  await 0;
  console.log('a2');
}

async function b() {
  console.log('b1');
  await 0;
  console.log('b2');
}

a();
b();
console.log('sync');
```
Answer: a1 b1 sync a2 b2. Both async functions run sync up to their first await, then both queue their continuations as microtasks. 'sync' logs while the stack is still busy. Then the microtask queue drains in FIFO order — same interleave pattern as Part 2.