The Lobbi Delivery Team
Operational Systems Engineering
The ROI projection that justifies an automation investment is always produced before the build. It is based on estimated hours, estimated error rates, and estimated adoption. These estimates are useful for making go/no-go decisions. They are not measurements.
After deployment, the question changes from "what do we think this will save?" to "what did this actually save?" Most organizations never answer the second question. The automation goes live, the team moves to the next project, and the projected ROI becomes the assumed ROI - never validated, never corrected, never used to improve the next estimate.
This is a missed opportunity. Post-deployment measurement is where the actual learning happens.
What to measure
Three metrics tell the story of whether an automation is delivering.
Throughput. How many items does the automated process handle per period? This should be measured against the pre-automation baseline. If the manual process handled 200 submissions per week and the automated process handles 350, the throughput increase is measurable and attributable.
Throughput is the simplest metric and the hardest to game. Either the system is processing more work or it is not. The important nuance: measure items completed, not items started. An automation that starts 350 items but completes only 250 (with 100 falling into exception queues) has a throughput of 250, not 350.
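As a minimal sketch of that nuance (the `status` field and its values are illustrative assumptions, not a fixed schema), counting completed items rather than started items might look like:

```python
# Throughput = items completed, not items started.
# The "status" field and its values are illustrative assumptions.

def completed_throughput(items):
    """Count only items that finished; exception-queue items don't count."""
    return sum(1 for item in items if item["status"] == "completed")

# 350 items started, but 100 fell into exception queues:
week = [{"status": "completed"}] * 250 + [{"status": "exception"}] * 100
print(completed_throughput(week))  # 250, not 350
```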
Exception rate. What percentage of items require manual intervention? Every automation has exceptions - inputs that do not match expected patterns, edge cases the rules do not cover, upstream data quality issues that cause processing failures. The exception rate determines the actual labor savings.
A projected 90% automation rate with a 10% exception rate means one person reviews exceptions instead of ten people doing the whole process. If the actual exception rate is 30%, three people are still needed, and the actual labor savings are 70%, not the projected 90%.
The exception rate almost always starts higher than projected and decreases over time as edge cases are identified and handled. Measuring it weekly for the first 90 days produces the curve that shows the system learning and improving.
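The staffing arithmetic in the worked example above can be sketched as a simple linear model - a deliberate simplification, since real staffing rarely scales perfectly with exception volume:

```python
import math

# Linear staffing model: baseline headcount scaled by the exception
# rate, rounded up. A simplifying assumption for illustration only.

def reviewers_needed(baseline_staff, exception_rate):
    """People still needed to review exceptions, rounded up."""
    return math.ceil(baseline_staff * exception_rate)

print(reviewers_needed(10, 0.10))  # 1 - the projected case
print(reviewers_needed(10, 0.30))  # 3 - the actual case
```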
Cycle time. How long does an item take from receipt to completion? Cycle time improvements often deliver more business value than pure labor savings because they affect client experience, cash flow timing, and competitive positioning. A mortgage servicer that processes applications in 2 hours instead of 3 days does not just save staff time - it closes more deals.
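A minimal sketch of computing cycle time from per-item timestamps (the `received_at` and `completed_at` field names are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Per-item cycle time from receipt to completion. The field names
# "received_at"/"completed_at" are illustrative assumptions.

def average_cycle_time(items):
    deltas = [item["completed_at"] - item["received_at"] for item in items]
    return sum(deltas, timedelta()) / len(deltas)

items = [
    {"received_at": datetime(2024, 1, 1, 9, 0),
     "completed_at": datetime(2024, 1, 1, 11, 0)},   # 2 hours
    {"received_at": datetime(2024, 1, 1, 9, 30),
     "completed_at": datetime(2024, 1, 1, 12, 30)},  # 3 hours
]
print(average_cycle_time(items))  # 2:30:00
```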
When to measure
30 days post-launch. The early signal. Is the system stable? Is adoption happening? The numbers at 30 days are not the final numbers - they include the ramp-up period, the initial exception spike, and the adjustment period as staff adapt to new workflows. But they should show the right direction. If throughput has not increased and the exception rate has not begun to fall by 30 days, something structural is wrong.
60 days post-launch. Steady state. The adoption curve has flattened, the obvious exception patterns have been addressed, and the system is running as designed. The 60-day numbers are the first reliable comparison against the pre-build projection.
90 days post-launch. Validated baseline. This is the number that goes in the report to leadership. It is also the number that calibrates future projections - if the 90-day actual was 80% of the projected ROI, the next project's projection should account for that calibration.
The measurement infrastructure
Measuring post-deployment ROI requires instrumentation - the same kind of instrumentation that makes the operational system observable.
The automated workflow must emit events: item received, processing started, processing completed, exception raised, exception resolved. Each event carries a timestamp. From these events, throughput, exception rate, and cycle time can be calculated automatically.
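One possible shape for that calculation, deriving all three metrics from the event stream (the event type names and dict layout are assumptions, not a prescribed schema):

```python
from collections import defaultdict
from datetime import datetime

# Derive throughput, exception rate, and cycle time from emitted events.
# Event type names and the dict shape are illustrative assumptions.

def roi_metrics(events):
    by_item = defaultdict(dict)
    for event in events:
        by_item[event["item_id"]][event["type"]] = event["ts"]

    completed = [ts for ts in by_item.values()
                 if "processing_completed" in ts]
    excepted = sum(1 for ts in by_item.values()
                   if "exception_raised" in ts)
    cycles = [(ts["processing_completed"] - ts["item_received"]).total_seconds()
              for ts in completed if "item_received" in ts]

    return {
        "throughput": len(completed),
        "exception_rate": excepted / max(len(by_item), 1),
        "avg_cycle_seconds": sum(cycles) / max(len(cycles), 1),
    }

events = [
    {"item_id": "a", "type": "item_received",
     "ts": datetime(2024, 1, 1, 9, 0)},
    {"item_id": "a", "type": "processing_completed",
     "ts": datetime(2024, 1, 1, 10, 0)},
    {"item_id": "b", "type": "item_received",
     "ts": datetime(2024, 1, 1, 9, 0)},
    {"item_id": "b", "type": "exception_raised",
     "ts": datetime(2024, 1, 1, 9, 15)},
]
print(roi_metrics(events))
```

Because the metrics fall out of the same events the operations team already emits, the dashboard is a query over existing data rather than a new system.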
This is not additional engineering work. It is the same observability infrastructure that the operations team needs to monitor the system. The same events that trigger alerts when exception rates spike also feed the ROI dashboard. Building one without the other is building half a system.
Closing the feedback loop
The most valuable output of post-deployment measurement is not the ROI number itself. It is the calibration data for the next investment.
If a projected 80% automation rate turned out to be 65% because data quality was worse than expected, the next project in the same environment should project 65% until data quality is addressed. If cycle time improvements were 2x better than projected because the bottleneck was waiting time (not processing time), the next project should model waiting-time elimination more aggressively.
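That calibration step can be expressed as a simple multiplicative adjustment - a deliberate simplification; a fuller model would segment by environment or root cause (data quality vs. waiting time):

```python
# Scale the next projection by the observed actual/projected ratio.
# A simplifying assumption: one ratio applied uniformly.

def calibrated(next_projection, prior_projected, prior_actual):
    return next_projection * (prior_actual / prior_projected)

# Prior project: projected 80% automation, measured 65% at 90 days.
print(round(calibrated(0.80, 0.80, 0.65), 2))  # 0.65
```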
Organizations that measure post-deployment ROI systematically get better at estimating pre-build ROI over time. Their projections converge toward reality. Organizations that skip measurement keep producing projections with the same systematic biases - and keep being surprised when actuals differ.
The discipline is straightforward: define baselines before the build, instrument the automation for measurement, and check the numbers at 30, 60, and 90 days. The cost is minimal - a few hours of engineering for instrumentation, a few minutes per month to review the dashboard. The value is a compounding improvement in investment decisions.
Get a real ROI estimate
Our discovery process produces measurable baselines, not guesses.