These are the real questions principals and IT directors ask after the demo. The answers aren’t fluff — they shape how the engagement actually works.
Yes. AI is junior staff — it does the work, and the licensed professional reviews and signs off. Engineers, contractors, architects, and permit expediters have always used junior staff, drafters, and specialized tools to prepare documents. AI is no different. The professional in responsible charge reviews every output before it goes out the door. That's always been the model; AI just speeds up production. Every agent surfaces its methodology, inputs, and assumptions so the professional can verify the work, not just check the format.
Engineer-stamped work is covered the same way it always has been. Your insurer treats AI-assisted output like drafter-assisted output: the engineer is responsible for the final stamp. AI is a tool, not a designer.
The opposite. AI removes the boring 80% — repetitive calcs, templating, formatting — and frees your engineers to focus on judgment, client work, and cases that actually need expertise. Firms that deploy AI well grow capacity per engineer; they don’t shrink the team.
Agents work with PDFs natively — the standard delivery format for most AEC workflows. DWG, RVT, and IFC files are handled through an export or conversion layer, so no CAD software is required on our end. The agents don't replace CAD; they work alongside it.
The custom layer of your deployment is built around your templates, output formats, jurisdictions, and standards. Every deliverable comes out looking like your firm wrote it — because the agents are configured to your firm.
The architecture is portable. We build on Claude because it’s currently the strongest, but the agents can swap to OpenAI, Google, or local models with config changes. You’re not locked to a single vendor.
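As an illustration of what "swap with config changes" means in practice, here is a minimal sketch. The names (`ModelClient`, `PROVIDERS`, the model strings) are hypothetical, not Firma AI's actual API; the point is that the vendor choice lives in configuration, not in agent code.

```python
# Hypothetical sketch of a provider-agnostic inference layer.
# Provider names and model identifiers are illustrative only.

PROVIDERS = {
    "anthropic": {"endpoint": "https://api.anthropic.com", "model": "claude-sonnet"},
    "openai":    {"endpoint": "https://api.openai.com",    "model": "gpt-4o"},
    "local":     {"endpoint": "http://localhost:8080",     "model": "llama-3-70b"},
}

class ModelClient:
    """Agents call complete(); which vendor serves it is pure config."""

    def __init__(self, provider: str):
        self.config = PROVIDERS[provider]

    def complete(self, prompt: str) -> str:
        # Real code would POST to self.config["endpoint"]; stubbed here.
        return f"[{self.config['model']}] response to: {prompt}"

# Switching vendors is a one-line config change, not a rewrite:
client = ModelClient("anthropic")
```

The agent code never names a vendor directly, which is what keeps the stack portable.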
The engineer who stamps the deliverable is responsible — same as always. AI output is treated as a draft, not a final. The QC gate catches most issues before delivery; the engineer catches the rest before stamping. Liability follows the stamp, not the tool.
2–3 weeks from signed contract to first live job, because the base stack ships pre-built. The custom layer continues to grow over the months that follow.
Nowhere it doesn’t already go. The system deploys on your infrastructure — your VM, your server, your network. Your project files, emails, and outputs never leave your environment. Your existing security stack (SSO, BitLocker, EDR) is the AI’s security stack. We don’t add new vendors, new compliance gaps, or new attack surface. The system makes two outbound calls: Anthropic’s Claude API for inference (enterprise endpoint, zero training-data retention) and a Firma AI update bridge for pushing fixes — a toggle your team controls. If your IT team approves the Claude API, they’ve approved the entire vendor surface.
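For an IT reviewer, the outbound surface above can be summarized as an allowlist of exactly two hosts, one of them toggleable. This sketch is illustrative — the hostnames and the `OutboundPolicy` class are hypothetical stand-ins, not the real deployment config.

```python
# Illustrative sketch of the two-call outbound surface described above.
# Hostnames and class names are hypothetical, for IT-review purposes only.

ALLOWED_OUTBOUND = {
    "inference": "api.anthropic.com",         # Claude API, zero training-data retention
    "updates":   "updates.firma-ai.example",  # update bridge for pushing fixes
}

class OutboundPolicy:
    """The update bridge is a toggle your team controls."""

    def __init__(self, updates_enabled: bool = True):
        self.updates_enabled = updates_enabled

    def permitted_hosts(self) -> list:
        hosts = [ALLOWED_OUTBOUND["inference"]]
        if self.updates_enabled:
            hosts.append(ALLOWED_OUTBOUND["updates"])
        return hosts

# With the update bridge toggled off, inference is the only outbound call:
policy = OutboundPolicy(updates_enabled=False)
```

Everything else — project files, emails, outputs — stays inside your network, so the firewall rule set stays this short.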
The architecture, agents, and configuration are Firma AI’s IP — but you hold an exclusive, perpetual license to run your AI ecosystem locally. The hardware it runs on is yours: your server, your infrastructure, your environment. The deliverables your system produces are yours outright. And Firma AI cannot distribute, resell, or share your firm’s specific configuration with anyone else — what we build for your firm stays with your firm.
Not broken is different from not improvable. Your current process works — the question is how much of your senior staff’s time it consumes, and what they could be doing with those hours instead. You don’t have to commit to a full AI deployment to find out. The AI Readiness Audit is one week, $2,000, and leaves you with a specific written answer: here are the 5–7 workflows where AI would have the highest impact, here are the projected hours saved per week, here is the estimated P&L effect. If the answer is “marginal — stick with what you have,” you’ll have that in writing. If the answer is “you’re leaving 15 hours per engineer per week on the table,” you’ll have that too. Make the decision with data. The audit is the on-ramp.
This is the right question to ask, and the QC architecture is designed around it. Every deliverable passes through a multi-agent QC gate before it reaches your engineer — three agents independently check completeness, methodology, and format. Discrepancies between them are flagged, not averaged. The delivery note that comes with every output itemizes engineer action items: inputs the system couldn’t verify, flags for independent review, and anything requiring judgment before stamping. Your engineer reads the delivery note, checks the flagged items, and stamps only after independent verification. It’s the same model you use with junior staff — except this junior staff never skips the checklist and never has a bad day. Liability follows the stamp, not the tool.
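The "flagged, not averaged" rule can be sketched in a few lines. The agent names and check categories below are illustrative, not the production implementation; the point is that any disagreement between the three agents becomes an engineer action item rather than being outvoted.

```python
# Minimal sketch of the "flagged, not averaged" QC rule.
# Agent names and check categories are illustrative only.

def qc_gate(results):
    """Compare independent agent verdicts per check.

    Unanimous pass -> "pass". Any disagreement, or a unanimous fail,
    is surfaced on the delivery note; verdicts are never averaged.
    """
    checks = next(iter(results.values())).keys()
    verdicts = {}
    for check in checks:
        votes = {agent: r[check] for agent, r in results.items()}
        if all(votes.values()):
            verdicts[check] = "pass"
        elif not any(votes.values()):
            verdicts[check] = "fail: independent review required"
        else:
            dissenters = [a for a, ok in votes.items() if not ok]
            verdicts[check] = "flagged: disagreement from " + ", ".join(dissenters)
    return verdicts

# Two agents pass completeness, one does not -> flagged, not outvoted:
report = qc_gate({
    "agent_a": {"completeness": True,  "methodology": True, "format": True},
    "agent_b": {"completeness": False, "methodology": True, "format": True},
    "agent_c": {"completeness": True,  "methodology": True, "format": True},
})
```

The flagged items are exactly what lands in the engineer action items on the delivery note, so nothing reaches the stamp unreviewed.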
Start with an AI Readiness Audit — $2,000 fixed fee.
One week. We map your workflows, identify 5–7 highest-value opportunities, and hand you a written report with hours saved and P&L impact. The audit fee is credited toward deployment if you proceed.
No sales reps, no account managers. The engineer who built the system is the person answering your question.
hello@firma-ai.com