Agentic Workflow Automation for a Professional Services Firm

A complex six-person manual workflow was replaced with an agentic AI system that runs 24 hours a day, with full observability and human override built in.

AI
Agentic AI Systems
AI Process Automation
3.8 hrs
Workflow turnaround time, down from 4.5 days
94%
Of reports completed autonomously, end to end
ABOUT THE PROJECT

Overview

A mid-sized management consulting firm specialising in regulatory compliance and market entry advisory was producing 140 to 180 research and due diligence reports per month for clients across financial services, healthcare, and energy. Each report followed a consistent multi-step process: regulatory landscape research across federal and state databases, competitor benchmarking from commercial and public sources, risk factor identification and scoring, financial data extraction from public filings, and final report assembly with executive summary generation. The process involved six senior analysts working sequentially, with each analyst responsible for a specific stage before passing the output to the next.

Average turnaround from client brief to delivered report was 4.5 days. The firm's largest clients were requesting 48-hour turnaround as a contract condition for their enterprise agreements — a commitment the firm was meeting on only 34% of requests, causing repeat escalations and one contract renewal conversation where turnaround performance had been cited as a risk factor.

Verttx built an agentic AI system that executes all five workflow stages autonomously for 94% of report types, with human analyst involvement reserved for the 6% of requests that require original source research or client-specific judgement beyond the agent's defined operating boundaries. The system went live 10 weeks after the initial discovery call. Average turnaround fell to 3.8 hours.

The Situation

The workflow's core problem was its sequential human dependency chain. Each of the five stages required a different analyst with different expertise — regulatory research, competitive intelligence, risk analysis, financial modelling, and report writing — and each stage could only begin after the previous one was complete. A report that arrived on Monday morning might not reach the financial modelling stage until Wednesday afternoon, not because the work itself took that long, but because each analyst was working across multiple concurrent reports and the handoff sequence introduced waiting time at every transition.

An internal time study found that of the average 4.5-day turnaround, only 11.4 hours represented active working time. The remaining 3.5 days were queue time — reports waiting for an analyst to become available at each stage of the sequence. The six analysts were not underperforming. They were each individually efficient. The process architecture — sequential, human-dependent, with no parallelisation — was the problem.

The firm had also identified a second and more strategically significant problem. The six analysts spending the majority of their time on structured research and report assembly — tasks that followed a consistent, documentable methodology — were the same people the firm needed for the high-complexity advisory work that commanded premium billing rates and drove client retention. Senior analyst time was the firm's most valuable and most constrained resource, and it was being consumed by work that was high-value but process-driven rather than genuinely requiring the judgement of a senior professional.

The Approach

Workflow mapping before agent design

Verttx spent three weeks mapping the full workflow before designing any agent architecture. Every step in every stage was documented: the data sources accessed, the decisions made, the quality criteria applied, the exception conditions that required escalation, and the specific outputs each stage produced for the next. This mapping produced a 94-step workflow specification covering all five stages across the firm's 12 report types. The specification identified which steps were fully deterministic — given the same inputs, an experienced analyst would always produce the same output — and which required genuine professional judgement that an agent should not attempt to replicate. The boundary between automatable and non-automatable steps was agreed with the firm's senior partners before any agent was built.
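The deterministic/judgement boundary described above can be encoded as a simple data structure. The sketch below is illustrative only; the field names and helper are assumptions, not the firm's actual 94-step specification format.

```python
from dataclasses import dataclass, field

# Illustrative encoding of a workflow specification step. Field names
# are assumptions for this sketch, not the firm's actual schema.

@dataclass
class WorkflowStep:
    stage: str                  # one of the five workflow stages
    description: str
    deterministic: bool         # True: same inputs always yield same output
    data_sources: list = field(default_factory=list)

def partition_spec(spec):
    """Split steps into agent-automatable vs human-judgement steps."""
    automatable = [s for s in spec if s.deterministic]
    human_only = [s for s in spec if not s.deterministic]
    return automatable, human_only
```

Classifying each step up front, and agreeing the partition with the senior partners before building anything, is what keeps the agents from drifting into judgement calls they were never scoped to make.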

Multi-agent architecture

The system is built as a pipeline of five specialised agents, each responsible for one workflow stage, orchestrated by a coordinator agent that manages sequencing, passes structured outputs between agents, monitors execution, and triggers human escalation when any agent encounters a condition outside its operating parameters. Each specialised agent has access only to the tools and data sources relevant to its stage: regulatory database APIs for the research agent, financial data feeds for the modelling agent, and the firm's internal style guide and client preference records for the report assembly agent. This enforces a least-privilege model governing what each agent can do.

The five specialised agents are:

  • Regulatory research agent — queries federal and state regulatory databases, extracts relevant regulatory requirements for the client's jurisdiction and sector, and produces a structured regulatory landscape summary with source citations
  • Competitive intelligence agent — queries commercial market intelligence APIs, public company databases, and news aggregation services to benchmark the client's competitive position against a configurable peer set
  • Risk scoring agent — applies the firm's proprietary risk framework — encoded from the senior risk analysts' documented methodology — to the regulatory and competitive outputs, producing a risk factor register with severity ratings and mitigation recommendations
  • Financial analysis agent — extracts and analyses relevant financial data from SEC EDGAR filings, commercial financial databases, and client-provided data, producing the financial summary section of the report
  • Report assembly agent — assembles the structured outputs from all four preceding agents into the firm's report template, generates an executive summary calibrated to the client's stated reading level and sector familiarity, and applies the style and formatting standards from the firm's brand guidelines
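The coordinator's role can be sketched as a simple loop over the stage agents, where each agent returns either a structured output or an escalation signal. Class names and interfaces below are hypothetical, not the deployed system's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch of the coordinator loop. All names are
# hypothetical; the real orchestration layer is the firm's own.

@dataclass
class StageResult:
    output: dict                # structured output passed to later stages
    escalate: bool = False      # True routes the report to a human analyst
    reason: str = ""            # what triggered the escalation

class Coordinator:
    def __init__(self, agents):
        # agents: ordered list of (stage_name, callable) pairs
        self.agents = agents

    def run(self, brief):
        context = {"brief": brief}
        for name, agent in self.agents:
            result = agent(context)
            if result.escalate:
                # Structured handoff: what is done, what stopped the pipeline
                return {"status": "escalated", "stage": name,
                        "reason": result.reason, "completed": context}
            context[name] = result.output
        return {"status": "complete", "report": context}
```

A strictly sequential loop mirrors the original human workflow; stages without data dependencies on each other, such as regulatory research and competitive intelligence, could equally be dispatched concurrently by the same coordinator.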

Guardrails and human-in-the-loop design

Every agent operates within explicitly defined boundaries. The coordinator agent enforces 14 escalation triggers — conditions that route the report to a human analyst rather than proceeding automatically. These include: regulatory databases returning insufficient results for the client's specific jurisdiction, a competitive peer set that cannot be constructed from available data sources, a client with publicly disclosed litigation or regulatory action that requires human assessment, and any report type that involves original primary research not covered by the agent's data source integrations. Analysts receive escalated cases pre-briefed — the coordinator produces a structured handoff document covering what the agents have completed, what triggered the escalation, and what specific human input is required — rather than a cold brief to start from scratch.
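The trigger logic can be illustrated as an ordered rule check run before the pipeline proceeds. The threshold and flag names below are assumptions modelled on the examples above; the deployed system enforces 14 such triggers inside the coordinator.

```python
# Illustrative escalation check modelled on the triggers described in
# the text. The threshold and flag names are assumptions for the sketch.

MIN_REGULATORY_RESULTS = 3  # assumed minimum for a usable landscape summary

def first_escalation_reason(outputs):
    """Return the first matching escalation reason, or None to proceed."""
    triggers = [
        (lambda o: len(o.get("regulatory_hits", [])) < MIN_REGULATORY_RESULTS,
         "insufficient regulatory results for the client's jurisdiction"),
        (lambda o: not o.get("peer_set"),
         "competitive peer set could not be constructed"),
        (lambda o: o.get("disclosed_litigation", False),
         "publicly disclosed litigation requires human assessment"),
        (lambda o: o.get("requires_primary_research", False),
         "original primary research outside agent data sources"),
    ]
    for condition, reason in triggers:
        if condition(outputs):
            return reason
    return None
```

Returning the reason string, rather than a bare boolean, is what lets the coordinator build the pre-briefed handoff document instead of a cold brief.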

All agent actions are logged in a structured audit trail. Every data source queried, every decision made, every output generated, and every tool call executed is recorded with timestamps and stored for 90 days. The audit trail is reviewed in the firm's weekly quality assurance meeting and is available for client inspection on request — a transparency commitment the firm had made to several enterprise clients as a condition of deploying AI in their reporting workflow.

The Result

Average report turnaround fell from 4.5 days to 3.8 hours for the 94% of report types that run straight through the agent pipeline without human escalation. The 4.5-day turnaround had been driven almost entirely by queue time rather than working time; removing the sequential human dependency chain eliminated that queue entirely. The firm's 48-hour turnaround commitment, previously met on 34% of enterprise requests, is now met on 97% of all requests, including the 6% that require human escalation: even escalated reports complete within 6-8 hours, because the coordinator's pre-briefing reduces analyst engagement to 45-90 minutes of focused input rather than a full-stage rebuild.

The six senior analysts who had been spending an estimated 68% of their time on structured research and assembly tasks now spend that capacity on client advisory work, new business development, and the complex analytical work that justifies the firm's billing rates. Revenue per analyst increased by 31% in the two quarters following deployment, driven by the reallocation of senior capacity from process execution to advisory delivery. The firm accepted two enterprise contracts in the quarter following deployment that it had previously declined because it lacked the analyst capacity to meet the turnaround commitments — both contracts were won citing the new turnaround performance as the decisive differentiating factor.

Report quality scores, measured through the firm's existing client satisfaction survey, which includes a specific section on research depth and accuracy, improved from a mean of 4.1 to 4.6 out of 5 in the two quarters post-deployment. The improvement was attributed to the consistency of agent-produced research (no variation in source coverage or methodology across analysts, no quality variance based on individual workload or experience level), combined with the senior analysts now having time to review and enhance agent outputs rather than produce first drafts under time pressure.

The full agentic system — all five specialised agents, the coordinator orchestration layer, all data source integrations, the escalation framework, and the complete audit logging infrastructure — was transferred to the firm's engineering team at handover with documentation precise enough for any senior engineer to understand, maintain, and extend independently.

The analysts were not slow. The process was slow. Every report was waiting at five different desks. Verttx built something that removes the waiting entirely for 94% of our work. The analysts are now doing the work we hired them for — advising clients — instead of running a research assembly line. Revenue per analyst is up 31%. That is the number that matters. — Managing Director, Professional Services Firm

RESULTS

Report turnaround fell from 4.5 days to 3.8 hours for 94% of report types. The 48-hour enterprise commitment, previously met on 34% of requests, is now met on 97%. Six senior analysts were reallocated from structured research to client advisory work. Revenue per analyst increased 31% in the two quarters following deployment. Client report quality scores improved from 4.1 to 4.6 out of 5. Two enterprise contracts that had previously been declined due to turnaround capacity constraints were accepted in the deployment quarter, both citing the new turnaround performance as the decisive factor.

3.8 hrs
Report turnaround time, down from 4.5 days
94%
Reports completed autonomously, end to end
31%
Revenue per analyst increase in two quarters
4.6 / 5
Client quality score, up from 4.1 out of 5