The 35% Myth: What AI Actually Saves a Working Archviz Studio
Someone on LinkedIn last week claimed AI had cut their archviz delivery time by 90%. Replies were full of fire emojis and people calling it the future of the industry. I read the post a couple of times to figure out if they were talking about a real project or a tutorial they ran in their kitchen. Pretty sure it was the kitchen.
We've been logging AI usage on every project for almost a year. Real billable work, real clients, real deadlines. The actual number is closer to 20-25%, depending on the project type. That's a real productivity gain. It's also nowhere near the numbers being thrown around on social media.
Here's what AI actually does for a working archviz studio like ours in 2026. And here's what it doesn't.
The numbers, by task
I'm going to skip the inspirational quote and go straight to the spreadsheet.
Entourage and figures. Two hours per exterior scene became twenty minutes. About 80% time saved. AI fills in distant pedestrians, cyclists, drivers, the messy human texture of a city. The figure libraries we used to license still exist, but we only use them in the foreground now. Anything past 50 metres is AI generated.
Background context and skies. Six to eight hours became three. About 60% saved. AI skies, distant cityscapes, atmospheric haze. The stuff that used to mean licensing HDRIs and stock photography.
Hero stills. Sixteen-hour render-and-post became fourteen hours. Maybe 10-15% saved if we're being honest, mostly in the comp stage. The client expects every pixel to be intentional. AI cannot make those decisions for you.
Animation. Forty-hour animation pipeline became thirty-six hours. About 10% saved. AI video tools (Veo, Sora, Kling) are great at generic establishing footage and useless at anything where the building has to look the same in frame 1 and frame 240.
Revisions. Zero time saved. Sometimes negative. When a client says "make the brick warmer and the sky a bit lower" you can't tell ComfyUI to do that surgically. You're regenerating an entire image and then comping pieces back in. Often slower than just doing the revision in Photoshop.
Average across our project mix last quarter: 30% time saved. If you weight by revenue, it drops to almost 20%.
That's real. That's good. It's also one third of the 90% claim.
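The gap between the simple average and the revenue-weighted one is just weighted-average arithmetic: the big-ticket projects are heavy on hero stills and animation, which is exactly where AI saves the least. A minimal sketch with made-up figures (the revenues and percentages below are illustrative, not our actual books):

```python
# Hypothetical projects: (revenue, fraction of hours saved by AI).
# Big-revenue jobs skew toward hero stills and animation, where AI
# saves the least, so the weighted number lands below the simple one.
projects = [
    (10_000, 0.50),  # concept package: mood boards, entourage, skies
    (30_000, 0.25),  # mid-size stills package
    (80_000, 0.15),  # hero stills + animation, mostly finishing work
]

simple_avg = sum(saved for _, saved in projects) / len(projects)
weighted_avg = sum(rev * saved for rev, saved in projects) / sum(
    rev for rev, _ in projects
)

print(f"simple average:   {simple_avg:.0%}")    # 30%
print(f"revenue-weighted: {weighted_avg:.0%}")  # 20%
```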
What AI is genuinely earning its keep on
The pattern is obvious. AI is great at the unglamorous middle of the pipeline. The bits that used to be tedious. The bits where "good enough" is genuinely good enough.
Entourage in the distance. Mood iterations. Sky variations. Texture starting points. Storyboard frames. Initial atmospheric tests. The stuff that used to eat junior artist time and didn't pay.
We've got a junior who used to spend his time cutting people out of stock photos and placing them into scenes. He doesn't anymore. He spends it learning to light. That's the actual transformation, and it's quieter than the LinkedIn version.
What AI is bad at
This is the section nobody wants to write because it makes you sound like a Luddite.
Multi-view consistency. The same building from three angles in three renders has to be the same building. AI cannot do this reliably. You can constrain it with ControlNet, IPAdapter, depth maps, and a custom LoRA, and it will still drift. We've never delivered a multi-image package where AI did the heavy lifting. The base is always a traditional render.
Surgical edits. Clients don't say "regenerate the image." They say "the lobby tile is too dark, the woman in the foreground should be looking at the entrance, and the tree is blocking the signage." AI workflows are bad at this. Photoshop is good at this. We still do most of our revisions in Photoshop.
Brand-safe humans. AI-generated people are a legal grey area we don't touch for any project that goes on a billboard or a website. Resemblance lawsuits are starting to happen. Our policy: AI for the distance only, licensed libraries for the foreground, custom-shot footage for anything actually marketed.
Materials at close range. AI textures look incredible in thumbnails and break down at 4K. The grain is wrong. The repeat patterns become obvious. You can use AI to start a material, but you finish it in Substance.
Anything precise. Window mullions, joint lines, brick coursing, anything where the architecture has been thought about. AI smooths these out. You either fight it constantly or you give up and model.
The ComfyUI graph we actually use
Since everyone asks: yes, we run ComfyUI. No, we don't share the JSON publicly because it has client-specific LoRAs trained on their material libraries. But the structure is not exotic.
Base model is Flux Dev for most concept work, SDXL for situations where we need a specific community LoRA. ControlNet depth is fed from a V-Ray render pass, which is the bit most people skip and which is also the bit that makes the output match the architecture instead of inventing it. IPAdapter for style transfer from a mood reference. Inpainting masks driven by render mattes for selective enhancement. Magnific or an open upscaler at the end depending on whether the client is paying for it.
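For repeatable runs we drive the graph headlessly instead of clicking through the UI. A minimal sketch of that plumbing, assuming a stock local ComfyUI server on its default port; the JSON filename, node IDs, and image paths below are hypothetical stand-ins, and the graph itself is the API-format export from the ComfyUI editor:

```python
import json
import urllib.request

def queue_workflow(workflow: dict, host: str = "http://127.0.0.1:8188") -> dict:
    """POST a workflow graph to ComfyUI's /prompt endpoint and return the reply."""
    req = urllib.request.Request(
        f"{host}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Load the graph exported from the ComfyUI editor, then swap the per-shot
# inputs before queueing. The node IDs here are hypothetical examples.
with open("studio_graph_api.json") as f:
    graph = json.load(f)

graph["12"]["inputs"]["image"] = "shot_042_depth.png"  # V-Ray depth pass -> ControlNet
graph["27"]["inputs"]["image"] = "mood_ref_dusk.jpg"   # mood reference -> IPAdapter
print(queue_workflow(graph))
```

The depth pass being an input here is the point: the graph enhances a render, it doesn't invent one.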
The thing nobody tells you: building this graph took three weeks of a senior artist's time. Maintaining it takes about a day a month as new models drop. That's a hidden cost most studios don't account for when they calculate their AI ROI.
The costs nobody talks about
Compute. Running Flux Dev at quality settings on an RTX 4090 takes 30-90 seconds per image. Across a project with 200-400 iteration images, that's a meaningful chunk of GPU time. We added a second workstation specifically for ComfyUI runs.
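Put those two ranges together and the second workstation explains itself:

```python
# Back-of-envelope GPU time per project, straight from the ranges above.
sec_per_image = (30, 90)   # Flux Dev at quality settings on an RTX 4090
images = (200, 400)        # iteration images on a typical project

low = sec_per_image[0] * images[0] / 3600
high = sec_per_image[1] * images[1] / 3600
print(f"{low:.1f} to {high:.1f} GPU-hours per project")   # 1.7 to 10.0
```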
Storage. AI projects bloat. A project used to mean 50-100GB of files. Now it's 300-600GB because every iteration is saved, every prompt has variations, every variation has alternate seeds. Buy a bigger NAS.
Training time. Three weeks per junior to be productive in ComfyUI. Two more to understand when not to use it.
QA. AI introduces bugs you don't see at first glance. The sixth finger. The wrong number of windows. The car driving the wrong way down a one-way street. We added a dedicated QA pass on every AI-touched deliverable. It catches things, every time.
Style consistency across the team. Five artists prompting the same scene gives you five different aesthetics. Locking down style references and shared LoRAs took us months to figure out and is still not solved.
Where we draw the line
MIR updated their bio recently to "Truthful Renderings made with Human Intelligence." Pedro Fernandes at Arqui9 talks about "Creativity First." These are positions, and they're worth taking seriously.
Our line is somewhere in the middle. AI is in the pipeline. It is not in the brief. We don't market AI capabilities to clients because we don't think they should care. They should care that the work is good. The tools we use are our problem.
We also don't deliver pure AI images as final work. Every hero still that leaves the studio has been rendered, comped, graded, and finished by a person. AI is sometimes one step in a 40-step process. It's never the whole process.
If a client specifically wants AI-generated concept work for a pitch deck, fine. We'll do it, and we'll label it. For everything else, the AI is invisible by the time it ships.
So who's right, the doomers or the hypers?
Neither.
The doomers are wrong because AI genuinely does save 20-30% of working hours on a typical archviz project. That's not nothing. It changes hiring, pricing, and what kind of work juniors learn. It changes what a small studio can compete for.
The hypers are wrong because the 80-90% numbers don't survive contact with a real client revision cycle. The bits AI is bad at (consistency, precision, surgical edits, finishing) are exactly the bits that take the most time on a real project. Saving 90% of the easy 30% of the work is not the same as saving 90% of the work.
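The arithmetic behind that last sentence fits in two lines, using our rough split of which hours are "easy":

```python
easy_fraction = 0.30    # share of total hours in the AI-friendly middle (rough)
claimed_saving = 0.90   # the headline "90%", applied only where it applies

print(f"{easy_fraction * claimed_saving:.0%} of total hours")   # 27%
```

27% of the total. Right where our timesheets land.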
Show us your real timesheet. Then we'll talk.