The Lobbi Delivery Team
Operational Systems Engineering
There is a conversation happening in every industry right now. It goes something like this: "We need to start using AI." Then someone buys a tool. Nobody is trained on it. It collects dust for six months. Eventually someone cancels the subscription.
I have watched this play out at insurance agencies, mortgage companies, financial advisory firms, and healthcare practices. The technology is not the problem. The problem is that nobody in the organization owns the outcome.
Companies keep treating AI and automation as tools to buy. They are not. They are capabilities to build. And capabilities require people in new roles, or existing people with expanded responsibilities.
This article is about the specific roles your team will need over the next three to five years. Not theoretical roles from some future-of-work whitepaper. Practical roles that small and mid-sized businesses in regulated industries are already starting to define. The companies that create these roles now will be the ones that actually capture the value from automation. Everyone else will keep buying tools and watching them collect dust.
The Shift That's Already Happening
The World Economic Forum's Future of Jobs Report estimates that 23% of jobs will change in the next five years, with AI and automation driving most of that change [1]. But "change" is the key word. This is not primarily a story about job elimination. It is a story about job evolution.
McKinsey's annual State of AI survey found that 72% of organizations have adopted AI in at least one business function, up from 55% the prior year [2]. That adoption rate is climbing fast. But here is the part that matters for you: the same survey found that the biggest barrier to capturing value from AI is not the technology itself. It is talent and organizational readiness.
Salesforce's Small and Medium Business Trends Report found that 91% of SMBs using AI say it has boosted revenue [3]. But the gap between companies that are using AI effectively and companies that are dabbling is enormous. The difference is not budget. It is not even technical sophistication. It is whether someone in the organization has clear ownership of making automation work.
For businesses in regulated industries such as insurance, mortgage, financial services, and healthcare, the stakes are even higher. You are not just dealing with efficiency gains. You are dealing with compliance obligations, audit trails, and fiduciary responsibilities.
When AI makes a recommendation about a client's insurance coverage or processes a mortgage document, someone needs to be responsible for the quality of that output. Not the vendor. Not the algorithm. A human being on your team.
Role 1: Automation Operations Owner
This is the most important role and the one almost nobody has created yet.
The Automation Operations Owner is not a developer. They are not an IT person. They are an operations person who owns the outcomes of your automated workflows.
Here is what this role does:
Monitors automated processes daily. When you automate a workflow, say, policy renewal processing or client onboarding document collection, someone needs to watch it. Not watch it work. Watch for when it does not work. What is the error rate? Where are items getting stuck? What changed in the environment that is causing failures?
Maintains and improves workflows. Automation is not a one-time setup. The carrier changes their portal interface. The state adds a new compliance requirement. A form field gets renamed. Someone needs to maintain the automation the same way someone maintains a vehicle. Regular inspection, adjustment, and repair.
Serves as the escalation point. When an automated process encounters something it cannot handle, where does it go? If the answer is "it just stops" or "it sends an email that nobody reads," your automation is broken. The Automation Operations Owner defines escalation paths and ensures exceptions are handled.
Tracks ROI. This person can answer the question: "Is our automation actually saving us money and reducing errors?" Not with feelings. With data. Hours saved, error rates before and after, processing times, cost per transaction.
The Federal Reserve's Small Business Credit Survey found that 67% of small businesses faced financial challenges in the prior year [4]. You cannot afford to invest in automation that does not deliver measurable returns. The Automation Operations Owner makes sure you know what you are getting for your investment.
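The ROI question above comes down to simple arithmetic. Here is a minimal sketch of the kind of monthly summary an Automation Operations Owner might keep; the function name, field names, and all figures are illustrative assumptions, not benchmarks:

```python
# Illustrative monthly ROI snapshot for one automated workflow.
# Every number below is a hypothetical placeholder.

def automation_roi(hours_saved_per_month: float,
                   loaded_hourly_rate: float,
                   monthly_tool_cost: float,
                   errors_before: int,
                   errors_after: int,
                   cost_per_error: float) -> dict:
    """Return a simple monthly value summary for an automated workflow."""
    labor_savings = hours_saved_per_month * loaded_hourly_rate
    error_savings = (errors_before - errors_after) * cost_per_error
    net = labor_savings + error_savings - monthly_tool_cost
    return {
        "labor_savings": labor_savings,
        "error_savings": error_savings,
        "net_monthly_value": net,
        "roi_multiple": round(net / monthly_tool_cost, 2) if monthly_tool_cost else None,
    }

# Hypothetical example: automated policy renewal processing
summary = automation_roi(hours_saved_per_month=40, loaded_hourly_rate=35,
                         monthly_tool_cost=500, errors_before=12,
                         errors_after=3, cost_per_error=150)
```

The point is not the math, which is trivial. The point is that someone owns the inputs: hours saved, error counts before and after, and tool cost are measured, not guessed.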
In a small company (say, 10 to 50 people), this is probably not a full-time role at first. It is a formal responsibility assigned to someone with strong operational instincts. Maybe your office manager. Maybe your operations lead. The title matters less than the clarity of ownership.
Role 2: Prompt Quality Manager
I know this one sounds trendy. Bear with me.
Generative AI is already being used in regulated industries for drafting client communications, summarizing documents, generating reports, and answering questions about policies and procedures. The NIST AI Risk Management Framework emphasizes that organizations deploying AI need structured approaches to managing output quality [5].
The Prompt Quality Manager is the person who ensures that when your team interacts with AI tools, the outputs are accurate, appropriate, and compliant.
Here is what this looks like in practice.
Maintains a library of tested prompts. When your team uses AI to draft a client email, summarize a claim, or generate a compliance report, they should not be writing prompts from scratch every time. The Prompt Quality Manager maintains a set of approved, tested prompts that produce reliable results for your specific use cases.
Tests AI outputs against quality standards. This person regularly runs your standard prompts against your AI tools and evaluates the outputs. Are they still accurate? Has the model changed in ways that affect quality? Do the outputs comply with your industry's regulatory requirements?
Defines what AI can and cannot do. In a regulated industry, there are things AI should not do without human review. The Prompt Quality Manager draws those lines. AI can draft the renewal letter, but a licensed agent reviews it before it goes to the client. AI can summarize the application, but an underwriter reviews the summary before making a decision.
Trains the team on proper AI use. Most employees figure out AI tools through trial and error. The Prompt Quality Manager provides structured guidance. Here is how to use this tool for your specific job function. Here is what to watch out for. Here is when to trust the output and when to verify.
Gartner projects that by 2025, organizations that operationalize AI transparency will see 40% improvement in adoption rates and business outcomes [6]. Transparency starts with someone owning the quality of AI interactions. That is this role.
In a small company, this might be 5 to 10 hours per week of someone's time. Often, it is the person who is most naturally curious about AI tools, the one who has already been experimenting. Give that curiosity a structure and a mandate.
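An approved-prompt library can start as something very small. Here is a minimal sketch of what that might look like in practice; the keys, templates, dates, and review flags are all hypothetical examples, not a prescribed format:

```python
# A minimal approved-prompt library sketch. Keys, templates, and
# review flags below are illustrative assumptions.

APPROVED_PROMPTS = {
    "renewal_letter_draft": {
        "template": ("Draft a policy renewal letter for {client_name}. "
                     "Policy type: {policy_type}. Renewal date: {renewal_date}. "
                     "Do not state coverage amounts; leave them as [AMOUNT]."),
        "requires_human_review": True,   # licensed agent reviews before sending
        "last_tested": "2025-01-15",
    },
    "claim_summary": {
        "template": ("Summarize the attached claim file in five bullet points. "
                     "Flag any missing documents explicitly."),
        "requires_human_review": True,   # underwriter reviews the summary
        "last_tested": "2025-01-15",
    },
}

def render_prompt(key: str, **fields) -> str:
    """Fetch an approved prompt and fill in its fields.

    An unknown key raises KeyError: if it is not in the library,
    it is not an approved prompt.
    """
    entry = APPROVED_PROMPTS[key]
    return entry["template"].format(**fields)

prompt = render_prompt("renewal_letter_draft",
                       client_name="Jane Doe",
                       policy_type="Homeowners",
                       renewal_date="2025-03-01")
```

Even a structure this simple enforces two of the role's responsibilities: the team pulls from tested prompts instead of improvising, and every prompt carries its human-review requirement with it.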
Role 3: Lightweight Governance Coordinator
"Governance" is a word that makes small business owners run for the exits. It conjures images of committees, policies, and bureaucracy. That is not what I am talking about.
I am talking about a single person who can answer three questions at any time:
- What automated processes are running in our business right now?
- Who is responsible for each one?
- When was the last time each one was reviewed?
That is lightweight governance. An inventory, an ownership map, and a review schedule.
The NIST AI Risk Management Framework outlines the importance of governance structures even for small organizations [5]. You do not need a 50-page policy document. You need a spreadsheet with three columns and someone who keeps it current.
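That three-column spreadsheet is simple enough to sketch directly. The process names, owners, dates, and the 90-day review window below are hypothetical examples, assuming a review cadence the business would set for itself:

```python
import datetime

# The three-column inventory described above, as plain data.
# Rows and the 90-day window are illustrative assumptions.

INVENTORY = [
    {"process": "Policy renewal reminders",
     "owner": "M. Ortiz", "last_reviewed": "2024-11-02"},
    {"process": "Client onboarding doc collection",
     "owner": "J. Chen", "last_reviewed": "2024-06-15"},
    {"process": "Claims intake email routing",
     "owner": "M. Ortiz", "last_reviewed": "2025-01-10"},
]

def overdue_reviews(inventory, today, max_age_days=90):
    """Return processes whose last review is older than max_age_days."""
    cutoff = today - datetime.timedelta(days=max_age_days)
    return [row["process"] for row in inventory
            if datetime.date.fromisoformat(row["last_reviewed"]) < cutoff]

stale = overdue_reviews(INVENTORY, today=datetime.date(2025, 1, 20))
```

The coordinator's job is the weekly five minutes of keeping the rows current; the staleness check is just the prompt to go do the reviews.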
Here is why this matters more than you think.
Deloitte's Tech Trends report highlights the growing risk of "shadow automation": automated processes that individuals create without organizational awareness [7]. In practice, this means someone on your team has set up automated email rules, Zapier workflows, or AI tools that the organization does not know about. Some of these might be brilliant. Some might be sending client data to unauthorized third-party services.
In insurance, every automated decision that touches a policyholder is potentially subject to regulatory scrutiny. In mortgage, automated document processing is subject to fair lending laws. In healthcare, automated communications are subject to HIPAA. Shadow automation in regulated industries is not just an operational risk. It is a compliance risk.
The Lightweight Governance Coordinator maintains visibility into what is automated, ensures each automated process has an owner, and flags anything that touches regulated data for appropriate review. In a company of 20 people, this is probably two hours per week. But those two hours prevent the audit findings that cost you thousands.
Role 4: AI Output Supervisor
This is different from the Prompt Quality Manager. The Prompt Quality Manager works on the input side, making sure your team asks AI the right questions in the right way. The AI Output Supervisor works on the output side, reviewing what AI produces before it reaches clients or affects business decisions.
Gartner predicts that through 2025, at least 30% of generative AI projects will be abandoned after proof of concept due to poor data quality, inadequate risk controls, or escalating costs [8]. The primary reason is not that the AI does not work. It is that nobody is consistently checking whether the AI is working correctly in production.
Here is what this role handles:
Spot-checks AI outputs. Not every output. A statistically meaningful sample. If your AI generates 100 policy summaries a day, this person reviews 10 of them for accuracy, completeness, and compliance.
Defines escalation criteria. What constitutes an AI error serious enough to stop the process? A typo in a client letter is different from an incorrect coverage amount. The AI Output Supervisor defines severity levels and response protocols.
Tracks error patterns. AI does not make random errors. It makes systematic errors, the same kinds of mistakes, repeatedly, in predictable situations. Tracking these patterns allows you to adjust your prompts, your training data, or your human review processes to prevent recurrence.
Reports to leadership. Once a month, the AI Output Supervisor tells the leadership team: here is how our AI tools performed. Here is the error rate. Here are the patterns. Here is what we are doing about it. This is how you build confidence in AI: not by trusting it blindly, but by measuring and verifying.
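The daily spot-check described above can be as simple as drawing a reproducible random sample. This is a minimal sketch, assuming a 10% review rate and made-up output IDs; a real program would tie the rate to volume and risk level:

```python
import random

# Sketch of pulling a daily review sample from AI outputs.
# The 10% rate and the output IDs are illustrative assumptions.

def daily_review_sample(output_ids, rate=0.10, seed=None):
    """Select a reproducible random sample of outputs for human review."""
    rng = random.Random(seed)  # seeded so the day's sample is auditable
    k = max(1, round(len(output_ids) * rate))
    return sorted(rng.sample(output_ids, k))

# Hypothetical example: 100 policy summaries generated today
todays_outputs = [f"summary-{i:03d}" for i in range(1, 101)]
to_review = daily_review_sample(todays_outputs, rate=0.10, seed=20250120)
```

Seeding the sample with the date means an auditor can later reproduce exactly which outputs were selected for review on any given day, which matters in a regulated industry.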
McKinsey's survey found that organizations with dedicated AI quality practices capture significantly more value from their AI investments [2]. This role is the difference between AI as a liability and AI as an asset.
In a small company, this role often starts as a weekly review by a senior team member. As your AI usage grows, it becomes a more significant time commitment.
Role 5: Frontline AI Trainer
Your frontline employees (the people processing claims, handling client calls, managing accounts, and preparing documents) are the ones who will use AI tools most. They are also the ones least likely to receive meaningful training.
The World Economic Forum estimates that 44% of workers' core skills will be disrupted in the next five years [1]. Disrupted does not mean eliminated. It means changed. And change without training produces anxiety, resistance, and mistakes.
The Frontline AI Trainer is someone from your team, ideally someone who does the same work as the people they are training. This person becomes the go-to resource for how to use AI tools effectively in daily work.
Develops role-specific training. Generic AI training is useless. Your claims processor does not need to know how large language models work. They need to know how to use the AI tool to summarize a claim file accurately and what to double-check before submitting the summary.
Provides ongoing coaching. Training is not a one-time event. It is a continuous process. The Frontline AI Trainer is available for questions, runs monthly refreshers, and adapts training as tools change.
Collects feedback from users. Frontline employees are the first to notice when an AI tool is not working correctly. The Frontline AI Trainer creates a channel for that feedback and routes it to the Prompt Quality Manager or AI Output Supervisor.
Reduces fear. The biggest barrier to AI adoption in most companies is not technical. It is emotional. People are afraid the AI will replace them. The Frontline AI Trainer, who is a peer, not a manager, can address those fears with honest, practical answers: "This tool is not replacing your job. It is replacing the part of your job you hate."
The Federal Reserve's survey data shows that small businesses investing in workforce development report better financial outcomes [4]. AI training is workforce development for the next decade.
How These Roles Work Together
These five roles are not five separate silos. They form a feedback loop.
The Frontline AI Trainer helps employees use AI tools correctly and collects feedback on what is not working. That feedback goes to the Prompt Quality Manager, who adjusts the prompts and guidelines. The AI Output Supervisor verifies that the adjusted outputs are accurate and tracks error rates. The Automation Operations Owner monitors the overall health of automated workflows and measures ROI. And the Lightweight Governance Coordinator maintains visibility into the entire system and ensures nothing is operating in the shadows.
In a company of 15 people, these might be five hats worn by two or three people. In a company of 100, they might be dedicated positions. The scale does not matter. The structure does.
The Winners Are Adding Ownership, Not Just Tools
Here is the pattern I see in companies that are actually getting value from AI and automation, versus those that are just spending money on subscriptions:
Companies that struggle: Buy an AI tool. Send a company-wide email saying "we now have access to [tool name]." Provide no training. Define no quality standards. Assign no ownership. Wonder why adoption is low and results are mixed.
Companies that succeed: Identify a specific operational problem. Select a tool that addresses it. Assign an owner for the outcome. Train the affected team members. Define quality standards. Measure results. Iterate.
Gartner's research on cybersecurity and data analytics consistently shows that organizations with clear role definitions around new technology see dramatically better outcomes than those without [8]. This applies equally to AI.
The Salesforce SMB report found that high-performing small businesses are 1.7 times more likely than underperformers to have adopted AI [3]. But the real differentiator is not adoption. It is structured adoption. Having a plan. Having roles. Having accountability.
Starting The Transition
You do not need to create all five roles tomorrow. Here is a practical sequence for a company in a regulated industry:
Month 1-3: Automation Operations Owner. Start here because this role applies whether you are using AI or not. Any automated workflow, even a simple email rule or a spreadsheet formula, benefits from clear ownership. Assign someone. Give them the mandate to inventory all automated processes and establish basic monitoring.
Month 3-6: Prompt Quality Manager. As your AI usage grows beyond casual experimentation, designate someone to maintain quality. Start with a library of approved prompts for your most common use cases. Test them monthly.
Month 6-9: Lightweight Governance Coordinator. Once you have multiple automated processes and AI tools in use, someone needs the master inventory. This often becomes a secondary responsibility of the Automation Operations Owner, but in a regulated industry, it may deserve its own designation for audit purposes.
Month 9-12: AI Output Supervisor. As you depend more on AI outputs for business decisions and client-facing communications, formalize the quality review process. This is when spot-checking becomes a scheduled activity with documented results.
Month 12+: Frontline AI Trainer. By this point, you have enough tools and enough experience that systematic training makes sense. Identify your best internal AI users and give them the training role.
What Happens If You Wait
The McKinsey survey found a widening gap between AI leaders and AI laggards [2]. The leaders are not just adopting more AI. They are building the organizational capabilities to use AI well. The laggards are buying tools without building capabilities, and the gap is compounding.
In regulated industries, the risk of waiting is amplified. Compliance requirements around AI are increasing, not decreasing. The longer you operate AI tools without governance, quality standards, and clear ownership, the more compliance debt you accumulate. When regulators start auditing AI use in insurance or mortgage or healthcare, and they will, the companies that have been operating without structure will face the steepest costs.
But the biggest cost of waiting is simpler than that. Every month you operate without these roles, you leave value on the table. Your AI tools underperform. Your team wastes time on trial-and-error. Your automated processes drift without anyone noticing. Problems compound.
The Path Forward
AI and automation are not going away. They are accelerating. The question for your business is not whether you will use these technologies. It is whether you will use them well.
Using them well requires people. Not more people, necessarily. But people in the right roles with clear mandates. An owner for automation outcomes. A manager for AI quality. A coordinator for governance. A supervisor for outputs. A trainer for frontline adoption.
The companies that build these roles over the next three to five years will operate at a fundamentally different level than those that do not. They will be faster, more accurate, more compliant, and more resilient. Not because they have better technology. Because they have better organizational structure around the technology.
At The Lobbi, we help regulated businesses build these capabilities: not just the tools, but the roles, the processes, and the accountability structures that make AI and automation actually work. If you are starting to think about how AI fits into your operation and who should own it, that is a good conversation to have before you buy anything.
Book a discovery call at [thelobbi.io/discovery](https://thelobbi.io/discovery).
Ready to see where the friction is?
The Lobbi's Operations Discovery maps your workflows, identifies your highest-impact bottlenecks, and gives you a clear picture of what's possible.