Clean code vs Efficient code: a real case optimizing performance in Three.js

This week I ran into a very common dilemma (but not always an obvious one): should you prioritize clean, well-structured code, or fast, efficient code when it matters most? Spoiler: efficiency sometimes demands different decisions.
✨ I'm working on a "secret" project in Three.js where I'm building an interactive terrain based on hexagonal tiles. This week, I had to revisit the system that determines which tiles are reachable from a starting position.
And I discovered something worth sharing with anyone building real-time experiences: optimization doesn't always mean writing cleaner code.
🧪 The experiment: two functions, same purpose
I had two functions that performed the same task: determine which hex tiles are reachable from a point X, given a number of movement steps.
Both worked fine... until I tested with larger boards (300–400+ tiles). Suddenly, performance tanked.
🧠 Why was one function significantly faster than the other?
Here are the three main differences between the two functions (the code screenshots give the full context):
📸 Function 1: hexIsReachable
📸 Function 2: getNeighborsWithDistance
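Since the two functions are shared as screenshots, here's a minimal, hypothetical sketch of what this kind of reachability check can look like: a breadth-first search over each tile's precomputed neighbor links. The Tile shape and the reachableTiles name are my assumptions for illustration, not the project's actual code.

```typescript
// Hypothetical tile shape: axial coordinates plus precomputed neighbor links.
interface Tile {
  q: number;
  r: number;
  neighbors: Tile[]; // filled once when the board is built
}

// Breadth-first search: every tile reachable from `start` within `steps` moves.
function reachableTiles(start: Tile, steps: number): Set<Tile> {
  const visited = new Set<Tile>([start]);
  let frontier: Tile[] = [start];
  for (let i = 0; i < steps; i++) {
    const next: Tile[] = [];
    for (const tile of frontier) {
      for (const neighbor of tile.neighbors) {
        if (!visited.has(neighbor)) {
          visited.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next; // only the newly discovered ring expands next step
  }
  return visited;
}
```

Each movement step expands one "ring" outward, and the visited set guarantees no tile is processed twice.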
1⃣ Direct access to precomputed data
The faster version read a neighbors array stored directly in each tile (mesh). This avoided recalculating or fetching neighbors dynamically with getNeighbor(), which was called 6 times per tile per movement step.
Lesson: if you can precompute relationships in repetitive structures like maps, do it.
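As a sketch of what that precomputation can look like: link every tile to its six hex neighbors once, at board-build time, so the hot path never calls a lookup function. HEX_DIRS, linkNeighbors, and the Tile shape here are illustrative assumptions, not the project's code.

```typescript
interface Tile {
  q: number;
  r: number;
  neighbors: Tile[];
}

// The six axial-direction offsets of a hex grid.
const HEX_DIRS: ReadonlyArray<readonly [number, number]> = [
  [1, 0], [1, -1], [0, -1], [-1, 0], [-1, 1], [0, 1],
];

// Run once when the board is created; afterwards every neighbor access
// is a plain array read instead of six lookups per tile per step.
function linkNeighbors(tiles: Map<string, Tile>): void {
  for (const tile of tiles.values()) {
    tile.neighbors = [];
    for (const [dq, dr] of HEX_DIRS) {
      const neighbor = tiles.get(`${tile.q + dq},${tile.r + dr}`);
      if (neighbor) tile.neighbors.push(neighbor); // edge tiles simply have fewer
    }
  }
}
```

The build cost is paid exactly once, which is the whole point: the per-frame code becomes a pure array traversal.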
2⃣ Unnecessary validation: skip safeParse
The slower function used zod.safeParse to validate the data of each neighboring tile. While this is generally good practice, it was being executed thousands of times per frame, just to confirm data already guaranteed by design.
Result: tons of wasted time doing validations that added no real value in this case.
Lesson: schema validation is great, but avoid it inside tight loops when the data is already trusted.
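A minimal sketch of moving validation to the boundary: validate the board data once when it's loaded, and let everything inside the frame loop treat it as trusted. To keep the snippet dependency-free, a hand-rolled type guard stands in for zod.safeParse; loadBoard and TileData are hypothetical names.

```typescript
interface TileData { q: number; r: number }

// Dependency-free stand-in for a zod schema check.
function isTileData(value: unknown): value is TileData {
  return typeof value === "object" && value !== null &&
    typeof (value as TileData).q === "number" &&
    typeof (value as TileData).r === "number";
}

// Validate once, at the boundary where the data enters the app.
function loadBoard(raw: unknown[]): TileData[] {
  const tiles = raw.filter(isTileData);
  if (tiles.length !== raw.length) throw new Error("invalid tile data");
  return tiles; // downstream per-frame code never re-validates this
}
```

The same idea applies with zod itself: call safeParse once on the whole payload at load time, not per neighbor per frame.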
3⃣ Avoid unnecessary transformations
In hexIsReachable, I used a Set<string> with keys like "q,r" and later split each string back into an object { q, r }. In the optimized version, I used a Map<string, object> from the beginning, avoiding those extra conversions.
Lesson: small things matter. Repeatedly converting data adds up when looping over large datasets.
📊 Final result
With just those changes, I went from major slowdowns on medium-sized boards to handling large ones smoothly. And I didn’t change what the code did — just how it did it.
✅ Conclusion
Sometimes it’s not about “clean code” vs. “dirty code”. It’s about context. If you're building real-time experiences —like 3D graphics, games, or simulations— your top priority should be eliminating unnecessary work.
- If you can precompute it, do it.
- If you don’t need to validate it, skip it.
- If you’re iterating over hundreds of items… look closely at what you’re doing.