The Lobbi Delivery Team
Operational Systems Engineering
The most common reason automation projects fail is not a bad vendor, an underpowered tool, or a budget overrun. It is that the team tried to automate a process they had not mapped. They knew roughly what they wanted the system to do. They did not know precisely what the process actually was, where it broke, or what a successful outcome looked like in measurable terms.
Ambiguity cannot be automated. It can only be encoded - at which point the system reliably does the wrong thing faster than the manual process did.
Three questions before any build
Before scoping, before tool evaluation, three diagnostic questions determine whether automation is the right move and what exactly to build.
What is the actual process?
Not the intended process - the actual one. The sequence of steps as they happen today, including every manual workaround, every exception path, every "that's just how Jessica handles it." The happy path is usually documented somewhere. The exception paths, where the operational cost lives, almost never are.
What are the failure modes?
Where does the process break, and how often? Where does work stall, get lost, get duplicated, or require re-entry? What percentage of items go through the exception path rather than the happy path? These numbers determine where automation delivers the most value - and any automation that ignores failure modes will reproduce those failures at scale.
What does done look like?
What is the specific, measurable outcome? "Faster" is not a success criterion. "Reconciliation cycle reduced from 14 days to 2 days" is. Without a defined success criterion, there is no way to scope the build, estimate ROI, or evaluate whether the engagement delivered.
Documenting current state
Answering those three questions requires a structured current-state investigation in two parts.
Stakeholder interviews are the primary input. The people doing the work know things documentation does not capture - the undocumented workarounds, the informal approval paths, the step that always breaks when a certain carrier's system is down. A structured 45-60 minute interview with each key process participant surfaces the actual workflow, not the idealized version.
The output of each interview is a swimlane diagram: a visual map of who does what, in what sequence, with explicit handoff points and documented exception paths. Swimlanes force precision. "Operations handles it" becomes three distinct steps owned by two different people, one of which has a 30% exception rate nobody had articulated before.
For each step in the diagram, time-cost estimates capture how long the step takes, how often it runs, and what happens when it fails. This produces a rough operational cost baseline - total staff-hours consumed per cycle - that becomes the denominator in any ROI calculation.
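The baseline arithmetic is simple enough to sketch. The sketch below is a minimal illustration, not a prescribed tool: the step names, field names, and numbers are all invented, and the model assumes exception handling adds hands-on time on top of the normal run.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    owner: str
    minutes_per_run: float    # hands-on time for one normal instance
    runs_per_month: int       # how often the step executes
    exception_rate: float     # fraction of runs hitting the exception path
    exception_minutes: float  # extra hands-on time when an exception occurs

def monthly_hours(step: Step) -> float:
    """Staff-hours a single step consumes per month, exceptions included."""
    base = step.minutes_per_run * step.runs_per_month
    extra = step.exception_minutes * step.runs_per_month * step.exception_rate
    return (base + extra) / 60

# Hypothetical steps pulled from a swimlane diagram
steps = [
    Step("Enter carrier invoice", "ops", 6, 400, 0.30, 15),
    Step("Approve payment", "finance", 3, 400, 0.05, 10),
]

baseline = sum(monthly_hours(s) for s in steps)
print(f"Operational cost baseline: {baseline:.1f} staff-hours/month")
```

The per-step numbers come straight out of the interviews; the sum is the denominator for every ROI claim that follows.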
Not all manual steps are equal
A process inventory with 40 steps does not mean 40 automation candidates of equal value.
Ranking each step across three dimensions separates high-value targets from noise:
Time cost - staff-hours consumed per month. High-volume repetitive steps score highest even if each individual instance is fast.
Error rate - percentage of instances requiring rework, producing downstream data quality issues, or triggering exception handling. Steps with high error rates are strong automation candidates even when time cost is modest, because error cost extends far beyond the step itself.
Compliance risk - audit, regulatory, or contractual requirements attached to the step. Manual steps with compliance implications are high-priority candidates because the cost of failure is regulatory, not just operational.
Multiplying these three dimensions produces a prioritized list. The top items are where the build should start - not the steps that are most technically interesting or that the loudest stakeholder wants automated, but the ones that score highest on the three-dimensional ranking.
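The ranking can be made mechanical. A small sketch, with invented step names and 1-5 scores standing in for real measurements:

```python
# Hypothetical scores on a 1-5 scale for each dimension.
candidates = {
    "Manual invoice re-entry":    {"time": 5, "error": 4, "compliance": 2},
    "Ad-hoc status emails":       {"time": 3, "error": 2, "compliance": 1},
    "Audit-trail reconciliation": {"time": 2, "error": 3, "compliance": 5},
}

def priority(scores: dict) -> int:
    # Multiplying rather than summing: a step weak on every dimension
    # drops fast, while one strong dimension keeps a step in contention.
    return scores["time"] * scores["error"] * scores["compliance"]

ranked = sorted(candidates, key=lambda n: priority(candidates[n]), reverse=True)
for name in ranked:
    print(name, priority(candidates[name]))
```

Note what the product does here: the low-drama re-entry step outranks the compliance-heavy reconciliation step, and the step the loudest stakeholder complains about may land at the bottom.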
The output: a prioritized roadmap
The deliverable from a current-state mapping engagement is not a list of things to automate. It is a prioritized roadmap with three elements for each candidate:
Effort estimate - based on integration complexity, API quality, and data normalization work required. Not a vague range - a specific week estimate tied to defined acceptance criteria.
ROI baseline - the projected reduction in staff-hours, error rate, and compliance exposure if the automation performs as specified. This gets measured at go-live and at 90 days post-launch.
Sequencing rationale - why this item comes before or after others. Some automation candidates are prerequisites: the downstream step cannot be automated until the upstream data quality problem is solved. Making sequencing explicit prevents the common failure mode of building automations that work in isolation but cannot connect.
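One way to make prerequisites explicit is to record them as a dependency graph and derive the build order from it. This is a sketch with invented candidate names; Python's standard-library `graphlib` does the ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite map: each candidate lists the candidates
# that must ship before it can be built.
prereqs = {
    "address normalization": [],
    "carrier invoice ingest": ["address normalization"],
    "reconciliation engine": ["carrier invoice ingest"],
    "exception dashboard": ["reconciliation engine"],
}

# Topological order: every prerequisite appears before its dependents.
order = list(TopologicalSorter(prereqs).static_order())
print(order)
```

A cycle in the map raises an error, which is itself useful: two candidates that each claim the other as a prerequisite is a scoping problem to resolve before anything gets built.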
The output is not glamorous - a scoped roadmap, not a working system. But it is the foundation that makes every subsequent build faster, more accurate, and more likely to deliver against the numbers the business actually cares about.
Teams that skip this step and start building immediately typically spend the first three months rebuilding what they built in the first month. The mapping work is not overhead. It is the project plan.