HN Today

We rewrote our Rust WASM Parser in TypeScript – and it got 3x Faster

OpenUI dramatically sped up its parser by rewriting a Rust WASM module in TypeScript, revealing that WASM's performance benefits are often negated by JavaScript interop overhead in interop-heavy workloads such as streaming parsing. The surprising finding was that direct JsValue passing from WASM was even slower than JSON serialization, due to the many internal boundary crossings required for object construction. The story resonated on HN for debunking common WASM assumptions and for showing that algorithmic improvements can outweigh language-level optimizations in real-world performance work.

Score: 13
Comments: 5
Highest Rank: #2
On Front Page: 21h
First Seen: Mar 20, 10:00 PM
Last Seen: Mar 21, 6:00 PM
Rank Over Time: (interactive chart not reproduced)

The Lowdown

The blog post from OpenUI details their journey of optimizing a crucial parser component for their openui-lang DSL, which converts LLM output into React component trees. Initially built in Rust and compiled to WASM for perceived speed, they discovered significant performance bottlenecks stemming not from Rust's execution, but from the boundary interactions between WASM and JavaScript.

  • The parser uses a six-stage pipeline and processes streaming LLM chunks, making latency critical.
  • The initial WASM implementation suffered from high overhead due to string copying into WASM memory (JS -> WASM) and JSON serialization/deserialization on the way out (Rust struct -> JSON string -> JS object).
  • An attempt to optimize by using serde-wasm-bindgen for direct JsValue passing made it 30% slower because it involved many fine-grained, invisible boundary crossings for object construction, compared to one large JSON string transfer.
  • Benchmarks showed the JSON round-trip WASM approach was 2-4x slower than a pure TypeScript implementation for one-shot parsing.
  • The biggest performance gain came from an algorithmic improvement: changing the streaming parser from an O(N^2) "re-parse everything" approach to an O(N) "statement-level incremental caching" approach.
  • This incremental caching resulted in a 2.6-3.3x speedup in full-stream total parse cost, regardless of implementation language.
  • The authors conclude that WASM is best suited to compute-bound tasks with minimal interop, or to porting existing native libraries; it is a poor fit for frequently called, small-input functions or for parsing structured text into JS objects, given serialization costs and the absence of a shared heap.
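
The statement-level incremental caching described above can be sketched in TypeScript. This is an illustrative model, not OpenUI's actual implementation: `parseStatement`, `IncrementalParser`, and the ';'-terminated statement boundary are all assumptions made for the example. The key idea matches the post: each completed statement is parsed once and cached, so each streamed chunk only re-parses the still-changing tail instead of the entire accumulated buffer.

```typescript
// Hypothetical sketch of statement-level incremental caching: instead of
// re-parsing the whole accumulated stream on every chunk (O(N^2) total work),
// cache parse results per completed statement so each chunk re-parses only
// the trailing, incomplete statement (O(N) total work).

type Node = { kind: string; text: string };

// Stand-in for a real single-statement parser.
function parseStatement(src: string): Node {
  return { kind: "statement", text: src.trim() };
}

class IncrementalParser {
  private cache = new Map<string, Node>(); // statement text -> parsed node
  private buffer = "";
  parseCalls = 0; // actual (non-cached) parses, for illustration

  // Feed the next streamed chunk; returns the parse of everything seen so far.
  push(chunk: string): Node[] {
    this.buffer += chunk;
    // Statements are ';'-terminated in this sketch; the tail may be incomplete.
    const parts = this.buffer.split(";");
    const complete = parts.slice(0, -1);
    const tail = parts[parts.length - 1];

    const nodes: Node[] = [];
    for (const stmt of complete) {
      let node = this.cache.get(stmt);
      if (!node) {
        node = parseStatement(stmt);
        this.parseCalls++;
        this.cache.set(stmt, node);
      }
      nodes.push(node);
    }
    // The incomplete tail changes with every chunk, so it is always re-parsed.
    if (tail.trim() !== "") {
      nodes.push(parseStatement(tail));
      this.parseCalls++;
    }
    return nodes;
  }
}
```

With this structure, a stream of K chunks over S statements costs roughly S + K parses rather than one full re-parse per chunk, which is where the reported full-stream speedup comes from.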

Their experience highlights the critical importance of profiling actual bottlenecks and understanding data-transfer costs across runtime boundaries; in practice, they found that fundamental algorithmic improvements yielded greater benefits than language or platform choices.
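
The counterintuitive serde-wasm-bindgen result — one large JSON transfer beating direct JsValue construction — can be modeled in pure TypeScript by counting boundary crossings. Everything here is a hypothetical simulation for intuition, not real wasm-bindgen code: `transferViaJson`, `newObject`, and `appendChild` stand in for operations that would each cross the WASM/JS boundary.

```typescript
// Illustrative (pure-TS) model of the two WASM -> JS transfer strategies the
// post benchmarks. We count "crossings": calls that would cross the WASM/JS
// boundary in a real build. All names are hypothetical.

type Tree = { tag: string; children: Tree[] };

let crossings = 0;

// Strategy 1: the WASM side serializes the whole result into one JSON string;
// JS crosses the boundary once, then parses entirely on the JS side.
function transferViaJson(json: string): Tree {
  crossings++; // one string copy out of WASM memory
  return JSON.parse(json) as Tree;
}

// Strategy 2: direct JsValue construction. Each object allocation and each
// property/child attachment is a separate boundary crossing, similar to the
// fine-grained calls serde-wasm-bindgen makes while converting a Rust value
// into a JS object graph.
function newObject(tag: string): Tree {
  crossings++;
  return { tag, children: [] };
}
function appendChild(parent: Tree, child: Tree): void {
  crossings++;
  parent.children.push(child);
}

function buildDirect(tag: string, childTags: string[]): Tree {
  const root = newObject(tag);
  for (const t of childTags) appendChild(root, newObject(t));
  return root;
}
```

In this model, returning a tree with three children costs one crossing via JSON but seven via direct construction (one per object plus one per attachment), which is the shape of the overhead the post blames for the 30% slowdown.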

The Gossip

Algorithmic Acumen vs. Language Lore

The dominant discussion centered on whether the title accurately reflected the primary source of performance gains. Many argued that the O(N^2)-to-O(N) algorithmic improvement through incremental caching was the true hero, not the switch from Rust WASM to TypeScript. On this view, rewrites often yield performance boosts simply because they provide an opportunity for deeper architectural fixes, making the language choice less central to the *observed* speedup. Some even labeled the title "clickbait" for potentially misattributing the gains.

Branding Blunders & Blog Beauty

A minor thread emerged regarding the company's choice of "OpenUI" as a name, noting its prior use by a W3C Community Group. Separately, the aesthetic qualities of the blog post's design, particularly its interactive scrollspy sidebar, were positively highlighted, with one user identifying the platform used.