Change management
Agentic analytics (analytics that combine traditional platforms with AI agents that can run queries, build pipelines, and generate insights) can significantly lift the speed and quality of analysis. Realising that benefit depends as much on how you roll it out as on the technology. This part focuses on the people side: finding champions for use cases, getting executive and operational buy-in, using feature workshops to involve stakeholders in model development, and piloting in a way that builds evidence and trust.
1. Find champions for use cases
Every use case needs at least one champion: a person in the business who will own the outcome, act on the output, and defend the investment. Without a champion, analytics gets built and then ignored.
Who makes a good champion?
| Trait | Why it matters |
|---|---|
| Decision authority | They can change how their team works based on the analytics. |
| Pain they feel | They have a problem today (e.g. "we don't know which leads to prioritise") that the use case addresses. |
| Willingness to try | They're prepared to pilot, give feedback, and iterate rather than expect a perfect solution on day one. |
| Credibility with peers | When they say "this works," others in the business listen. |
How to find them. Start from the use cases you've prioritised (e.g. from TrueState 360). For each use case, name the role or person who would act on the output. Then ask: do they have the traits above? If the natural owner isn't willing or able, either pick a different use case or invest in bringing that person along (see executive and operational buy-in below). Don't build for a use case that has no champion.
Practical step: maintain a simple table with columns Use case | Champion (name/role) | Status (identified / engaged / piloting / scaled). Review it with your steering group so that "no champion" is a visible risk.
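A minimal sketch of what that tracker could look like if you keep it in code rather than a spreadsheet (the use cases, names, and statuses below are hypothetical):

```python
# Hypothetical champion tracker: use case, champion (name/role), status.
tracker = [
    {"use_case": "Collections prioritisation", "champion": "Collections ops lead", "status": "piloting"},
    {"use_case": "Weekly reporting agent", "champion": None, "status": None},
]

# "No champion" becomes a visible risk for the steering group.
no_champion = [row["use_case"] for row in tracker if row["champion"] is None]
print("Use cases with no champion:", no_champion or "none")
```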
2. Get executive buy-in
Executive buy-in gives you airtime, budget, and cover when priorities conflict. It doesn't mean the exec runs the pilot — it means they visibly support it and unblock issues.
What to ask for (concretely).
| Ask | Example |
|---|---|
| A named sponsor | "We need one exec who will show up at kick-off and pilot review, and who we can escalate to if we hit a blocker." |
| Success defined in their language | "Success for this pilot is: 20% of the team's weekly reports produced by the agent, with quality sign-off from the ops lead." |
| Permission to pilot | "We need this team's time for a 6-week pilot: 2 hours kick-off, 1 hour weekly check-in, and ad-hoc feedback. Can you make that non-negotiable?" |
| A decision rule for scale | "If we hit X adoption and Y quality by week 6, we expand to the next team; if not, we iterate before expanding." |
How to present. Lead with the business problem and the decision that will improve, not with "AI" or "agentic." For example: "The collections team doesn't know which accounts to contact first. We're piloting a model that scores each account so they can prioritise. We need your backing for a 6-week pilot with the collections lead." Bring the champion into the conversation so the exec hears both you and the person who will use the output.
Red flags. If the exec won't name a sponsor, won't define success, or won't free up the champion's time, scale back the scope or pick a use case with a more engaged sponsor. Don't build on the hope that buy-in will appear later.
3. Get operational buy-in
The people who will use the analytics day to day — analysts, ops leads, frontline managers — need to trust the output and see how their job gets better. Without operational buy-in, adoption stays low even when the exec is on board.
What operational teams care about.
| Concern | How to address it |
|---|---|
| "Is this replacing me?" | Be explicit: the agent handles repetitive work; people focus on interpretation, exceptions, and decisions. Give examples of what stays human (e.g. sign-off on external reports, handling edge cases). |
| "Can I trust the numbers?" | Show how the output is produced (sources, logic, limits). Offer a clear escalation path ("if something looks wrong, here's who to ask"). Include them in validation (e.g. spot-checks on a sample of outputs). |
| "I don't have time to learn something new." | Design for minimal new behaviour: e.g. "you get a list every Monday; the list is the same as before but now it's prioritised." Train in short, practical sessions and provide one-pagers and FAQs. |
| "We've seen tools come and go." | Commit to a pilot with a clear end date and a real decision. Share early wins and "what we're fixing" so the change feels iterative, not imposed. |
Involve them early. Bring the operational lead and a few power users into design: what would make this useful? What would make it annoying or unsafe? Their input should shape the pilot scope and the governance (e.g. when a human must review before an output is used).
4. Run feature workshops
Feature workshops are sessions where stakeholders (champions, ops, subject-matter experts) contribute to model development by suggesting features (inputs the model could use) and target variables (what you're predicting or optimising). They don't need to be data scientists — they need to know how the business works.
Why do this. The best feature ideas often come from people who live the process: "we always look at how many times they've been late in the last 6 months" or "week of month matters for our conversions." Feature workshops turn that knowledge into a list the analytics team can implement and test. They also build ownership: when stakeholders have suggested variables, they're more likely to trust and use the model.
How to run one.
| Step | What to do |
|---|---|
| Before | Define the use case and the decision (e.g. "prioritise which accounts to contact first in collections"). Invite a small group (4–8 people), including the champion, 1–2 ops users, and 1–2 subject-matter experts. Send a short brief: "We're building X; we want your input on what signals matter." |
| Opening | Restate the use case and the target (e.g. "We're predicting likelihood of resolving arrears without escalation"). Ask: "What do you look at today when you make this decision?" and "What would you want to know if you had perfect data?" |
| Features | Capture every suggestion as a potential feature (e.g. "days since first missed payment," "number of previous arrangements," "segment"). Don't judge feasibility in the room — note it for later. Group similar ideas. |
| Target variable | Agree the outcome you're predicting or optimising. Sometimes the room will disagree (e.g. "resolve in 30 days" vs "resolve without legal action"). Nail this down; it drives everything else. |
| After | Turn the list into a backlog: which features do we have data for? Which can we build first? Share the prioritised list back with the group and iterate as you build. |
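As a minimal sketch, the "after" step might look like this in code (the availability flags are illustrative; the first three feature names come from the examples above, the last is hypothetical):

```python
# Illustrative workshop output: suggested features with a data-availability check.
suggestions = [
    {"feature": "days_since_first_missed_payment", "have_data": True},
    {"feature": "number_of_previous_arrangements", "have_data": True},
    {"feature": "customer_segment", "have_data": True},
    {"feature": "call_sentiment_last_contact", "have_data": False},  # hypothetical: no source identified yet
]

# Build first what you already have data for; park the rest for later investigation.
build_first = [s["feature"] for s in suggestions if s["have_data"]]
investigate = [s["feature"] for s in suggestions if not s["have_data"]]

print("Build first:", build_first)
print("Investigate data sources for:", investigate)
```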
Pitfalls. Don't let the workshop become a generic "what would be nice" session. Keep it tied to one use case and one target. Don't promise every suggested feature will be in v1 — explain that you'll test and prioritise.
5. Pilot in a contained scope
Piloting lets you test the workflow, governance, and messaging with real users before you scale. Run one or a small number of pilots; use them to learn and refine.
Design the pilot.
| Element | What to decide |
|---|---|
| Use case | One (or at most two) clearly defined use cases. Not "all of analytics" — e.g. "weekly prioritisation list for the collections team." |
| Pilot group | A single team or a bounded set of users who have a champion and are willing to give feedback. |
| Duration | 4–8 weeks is typical. Long enough to see adoption and issues; short enough to decide and iterate. |
| Success criteria | Measurable: e.g. "80% of the team use the list at least once a week"; "quality spot-check: 95% of sampled outputs pass review." Agree these with the champion and sponsor before you start. |
| Feedback loop | Weekly or bi-weekly check-ins: what's working, what's not, what's blocking? Capture and act on it. |
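A minimal sketch of how the success criteria and the decision rule for scale could be checked at the end of the pilot (the team size, usage counts, and thresholds below are illustrative, taken from the examples above):

```python
# Illustrative end-of-pilot review: adoption and quality spot-check against agreed thresholds.
team_size = 10
weekly_active_users = 9            # used the prioritised list at least once in the week
sampled_outputs = 40
outputs_passing_review = 39

adoption_rate = weekly_active_users / team_size                   # target: >= 80%
spot_check_pass_rate = outputs_passing_review / sampled_outputs   # target: >= 95%

if adoption_rate >= 0.80 and spot_check_pass_rate >= 0.95:
    decision = "expand to the next team"
else:
    decision = "iterate before expanding"

print(f"Adoption {adoption_rate:.0%}, quality {spot_check_pass_rate:.0%} -> {decision}")
```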
What you're learning. The pilot is not only "does the model work?" It's also: Do people trust it? Do they know when to override it? Is the governance (e.g. human sign-off where needed) clear and workable? Is the training and support enough? Use the pilot to fix workflow and messaging, not only the algorithm.
After the pilot. Decide explicitly: iterate (fix and extend the pilot), expand (add the next team or use case), or pause (if adoption or quality didn't meet the bar). Share the decision and the reasoning with the pilot group and the sponsor. If you're scaling, document what you learned so the next rollout is smoother.
6. Governance and support (brief)
Beyond buy-in and piloting, you need clear guardrails and support.
- Data and access: Who can ask what? Which data can agents access? Apply the same principles you use for dashboards and reports.
- Review and approval: Where must a human review or sign off before an output is used (e.g. external reporting, credit decisions, regulatory submissions)? Define and document it.
- Transparency and escalation: Can users see how an answer was produced? Who do they contact when something looks wrong? Make it obvious.
- Training and FAQs: Role-based guidance (what analysts do differently; what business users can expect), hands-on sessions, and a central place for "how do I…?" and "who do I ask?"
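One way to keep these guardrails easy to follow is to write them down in a single, reviewable place. A minimal sketch, with hypothetical data sources, escalation contact, and helper function:

```python
# Hypothetical guardrails for a pilot, captured in one reviewable structure.
policy = {
    "agent_data_access": ["collections_accounts", "payment_history"],  # same scope as existing reports
    "human_signoff_required_for": [
        "external reporting",
        "credit decisions",
        "regulatory submissions",
    ],
    "escalation_contact": "analytics-support@example.com",  # who to ask when something looks wrong
}

def requires_signoff(output_type: str) -> bool:
    """Return True if a human must review this output before it is used."""
    return output_type in policy["human_signoff_required_for"]

print(requires_signoff("credit decisions"))            # True
print(requires_signoff("weekly prioritisation list"))  # False
```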
Governance should be clear enough to follow, not so heavy that it kills speed. Start with the minimum needed for the pilot and tighten only where risk or compliance require it.
For a structured approach to deciding which analytics to build in the first place, see TrueState 360. For the essentials of advanced analytics (algorithms, data cleaning, feature engineering), see Understanding advanced analytics.