The People Factor: Why Change Management Is the Key to Decision Intelligence Success

In 2016, the state of Arkansas started using a computer algorithm called RUGs to decide how many hours of in-home care Medicaid recipients with disabilities would receive each week. Before the switch, trained nurses made those assessments in person. After the algorithm took over, nearly half of the roughly 8,000 people in the program saw their care hours cut significantly, with some losing more than 40% of their services, even though their medical conditions had not changed at all.1

Caseworkers and healthcare advocates raised alarms quickly. A federal court later ruled that the state had violated due process requirements by failing to explain the algorithm's decisions to those affected.2 A state court eventually struck down the state's use of the algorithm, and the Arkansas legislature voted to remove the system entirely. The technology worked as designed. The transformation failed because people were excluded from the process, the system was not explainable, and no one managed the human side of the change.

This story is not unique.3 Governments around the world are trying to use decision intelligence to improve public services, but most projects struggle to stick. The technology often works well on paper, yet real adoption by real people falls short again and again.

Decision intelligence (DI) is the practice of using tools such as data analytics, machine learning, and other forms of artificial intelligence (AI) to help people or organizations make better decisions. In public policy, this might mean using software to figure out who qualifies for benefits, to predict which roads need repair first, or to identify patterns of fraud. Think of it as giving a decision-maker a very powerful assistant that can process large amounts of data very quickly and suggest a course of action.

The promise of decision intelligence in public policy is real. It can help governments work faster, more consistently, and more fairly. But the gap between launching a system and actually using it well is wide. That gap is not usually about software bugs or missing data. It is about people, specifically whether the people who work with these systems every day trust them, understand them, and feel included in the process of rolling them out.

That is where change management comes in. Change management is the structured practice of helping people understand, accept, and adapt to major organizational changes. It is the human side of any big transition. Without it, even the best decision intelligence tools end up sitting unused, worked around, or outright rejected.

Why DI Adoption Is So Hard

The statistics paint a clear picture of the challenge of implementing decision intelligence systems, especially with projects involving AI:

• only 28% of AI projects in large organizations fully succeed and meet their goals4

• 35% of U.S. workers name job displacement as a major concern about AI5

Strong change management has measurable benefits: organizations with strong change management in place are six times more likely to meet their performance goals than those without.6

Decision Intelligence Is Different from a Regular Software Upgrade

Research from McKinsey and BCG confirms the problem: only about 30% of large-scale transformation programs successfully improve performance and maintain those improvements over time. The main reasons for failure are not technical. They are cultural misalignment, poor leadership engagement, and not preparing the workforce for the change.7 Gartner also found that the top predictor of AI project success is integrating AI into existing workflows and getting full commitment from executives, not the sophistication of the technology itself.4

Four Frameworks That Help: Matching the Method to the Challenge

Kotter's 8-Step Model

This framework starts by helping leaders explain why change is necessary right now, then builds a team of champions, creates a shared vision, and works to embed that vision into everyday culture. It works well in large agencies where different departments are siloed and need a shared sense of direction before any technology rolls out.8

Prosci ADKAR

ADKAR stands for Awareness, Desire, Knowledge, Ability, and Reinforcement. Instead of thinking about the organization as a whole, it asks: what does each individual person need to move through this change successfully? Prosci research shows this model can reduce resistance by up to 60%, and it is especially useful when frontline workers need to genuinely internalize new ways of working.9

McKinsey's 7-S

This framework looks at seven elements of an organization: strategy, structure, systems, shared values, skills, style, and staff. It is particularly useful when a DI project touches many of these areas at once and leaders need to find where things are out of alignment before the rollout begins.10

Bridges Transition Model

This model focuses less on the change itself and more on how people experience it. It describes three stages: ending the old way, a "neutral zone" where nothing feels settled, and the new beginning. It is especially useful when a DI project replaces a process that workers have used for years and feel attached to.

The most successful public policy DI projects combine these frameworks rather than picking just one. A common approach: use Kotter to build urgency and a leadership coalition at the top, use ADKAR to support each frontline employee through their personal transition, and draw on Bridges when helping teams cope with the disorienting period right after launch, when the new system is live but the new habits have not formed yet.

The Difference Between Compliance and Real Acceptance

Three things are especially important for building genuine acceptance in public sector settings. First, the system needs to be explainable. Many jurisdictions now require this by law: the EU's AI Act, for example, requires that high-risk AI systems used in government be able to explain their decisions in plain terms. But beyond legal compliance, explainability is a prerequisite for trust. A caseworker who understands why the system flagged a case can act as a thoughtful reviewer rather than a rubber stamp. Change management plans must make explainability a real, accessible feature of day-to-day use, not just a legal footnote.

Second, frontline workers should help design the system. When people are involved in shaping how a tool works, what it optimizes for, and where a human should be able to override it, they stop being passive recipients of a change and start being stakeholders in its success. Prosci's research confirms this: organizations that give employees a safe space to experiment with and influence AI tools see significantly stronger long-term adoption than those that simply mandate use from the top down.11 Engaging end users in the design and development process is also a core tenet of many product management approaches, including design thinking, Jobs to be Done, and others.

Third, human override must be real and visible. Many workers fear that once an algorithm is in charge, their professional judgment will no longer matter. Agencies that explicitly design and communicate meaningful human-in-the-loop controls, and that make clear that overrides will not be penalized, see less passive resistance and more genuine engagement. Fortunately, this is where decision intelligence shines: by intentionally designing the systems around human decision-making, the DI learning loop not only includes people, it depends on them.

What Good Leadership Looks Like in a DI Rollout

Effective leadership in a DI rollout looks different from a motivational memo or a ribbon-cutting ceremony. McKinsey's own internal AI deployment offers a useful example. The firm built a dedicated adoption team that first studied how different types of users related to the tool, then offered tailored training to each group, and created communities of regular users who could share tips and raise concerns. McKinsey also found that companies investing in building genuine trust in AI are nearly twice as likely to see strong business performance compared with those that skip this step.5

The same principle applies in government. Leaders who ask hard questions about model outputs in public settings, create channels for workers to raise concerns without fear, and follow up visibly when problems are reported are the ones who build the kind of credibility that makes a rollout stick.

Keeping the Change Going

Most technology change programs treat the launch date as the finish line. For decision intelligence, it is really the starting line.

DI systems are inherently dynamic: policy environments shift, populations change, or court decisions introduce new requirements. A model trained on data from even two years ago may produce biased or inaccurate recommendations today. This means the change management effort must be designed for ongoing adaptation, not just a one-time deployment.

Practically, this means building in regular feedback loops where frontline workers can flag when the system seems to be getting things wrong. It means treating patterns of human override not as a problem to be corrected but as a key input to the system. And it means creating regular performance reviews where people at all levels, not just data scientists, can weigh in on how the system is performing.
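To make the idea concrete, here is a minimal, illustrative sketch of how an agency might log human overrides and treat a rising override rate as a review trigger rather than a compliance problem. All names and the 20% threshold are hypothetical choices for this example, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class OverrideLog:
    """Tracks how often frontline workers override the system's recommendations."""
    decisions: int = 0
    overrides: int = 0
    reasons: list[str] = field(default_factory=list)

    def record(self, overridden: bool, reason: str = "") -> None:
        """Record one decision; capture the worker's stated reason if overridden."""
        self.decisions += 1
        if overridden:
            self.overrides += 1
            if reason:
                self.reasons.append(reason)

    def override_rate(self) -> float:
        """Fraction of decisions where a human overrode the system."""
        return self.overrides / self.decisions if self.decisions else 0.0

    def needs_review(self, threshold: float = 0.2) -> bool:
        # A sustained override rate above the threshold suggests the model and
        # the frontline workers disagree; treat that as an input to model review,
        # not as evidence of worker error.
        return self.override_rate() > threshold
```

The design choice worth noticing is that the log stores the workers' stated reasons, so the review meeting starts from their judgment rather than from raw disagreement counts.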

Gartner's research on AI maturity reinforces this approach: organizations that consistently keep AI projects operational for three or more years are the ones that connect their AI governance directly to ongoing business workflows rather than treating the go-live date as the end of the project.12 Agencies that reach this stage describe a shift in how their staff relate to the system. Workers stop simply "using the tool" and start actively "governing the tool." They flag issues, suggest improvements, and feel a sense of ownership over how the system performs. That shift is the real measure of whether change management has done its job.

Culture Has to Come First

Decision intelligence has the potential to help all organizations make better decisions at a scale that was never possible before. Faster processing, more consistent outcomes, and better use of limited resources are all within reach. But none of that happens automatically when software is deployed. It happens when the people using the system trust it, when leaders take accountability seriously, and when the organizations adopting these tools treat the human transition as just as important as the technical one.

The Arkansas Medicaid case is a reminder of what is at stake when change management is skipped. An algorithm that few people understood, that no one explained, that excluded the people most affected by it, caused real harm to real people. The technology was not the problem. The approach to change was. The frameworks exist. The evidence for what works is growing. What remains is the willingness to use them, seriously and consistently, on every decision intelligence project that affects the public.

References:

1. Johnson, A. D. Suit Filed Over Computer Program Making Medicaid Cuts - Arkansas Access To Justice. https://arkansasjustice.org/2017/02/27/suit-filed-over-computer-program-making-medicaid-cuts/ (2017).

2. Brown, L. X. M. Z. What Happens When Computer Programs Automatically Cut Benefits That Disabled People Rely on to Survive. Center for Democracy and Technology https://cdt.org/insights/what-happens-when-computer-programs-automatically-cut-benefits-that-disabled-people-rely-on-to-survive/ (2020).

3. What happens when an algorithm cuts your health care. https://ihpi.umich.edu/news/what-happens-when-algorithm-cuts-your-health-care.

4. Gartner Says AI Projects in I&O Stall Ahead of Meaningful ROI Returns. Gartner https://www.gartner.com/en/newsroom/press-releases/2026-04-07-gartner-says-artificial-intelligence-projects-in-infrastructure-and-operations-stall-ahead-of-meaningful-roi-returns.

5. 5 steps for change management in the gen AI age | McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/reconfiguring-work-change-management-in-the-age-of-gen-ai.

6. Change management statistics: understanding the real numbers behind successful transformations. Change Management Hub https://www.change-management-hub.com/blog/change-management-statistics-understanding-the-real-numbers-behind-successful-transformations (2024).

7. Millerd, P. Do 70% of change initiatives really fail? | StrategyU Blog. StrategyU https://strategyu.co/do-70-percent-of-change-initiatives-really-fail/ (2026).

8. Umbrex. Kotter’s 8-Step Model. Independent Management Consultants https://umbrex.com/resources/frameworks/project-management-frameworks/kotters-8-step-model/.

9. Catalyst Report: 10 Workplace Conditions for Individual AI Adoption. https://empower.prosci.com/workplace-conditions-for-individual-ai-adoption.

10. Shah, M. 6 Change Management Strategies to Scale AI Adoption in Engineering Teams. https://www.augmentcode.com/guides/6-change-management-strategies-to-scale-ai-adoption-in-engineering-teams (2025).

11. AI Adoption: Driving Change With a People-First Approach. https://www.prosci.com/blog/ai-adoption.

12. Gartner Survey Finds 45% of Organizations With High AI Maturity Keep AI Projects Operational for at Least Three Years. Gartner https://www.gartner.com/en/newsroom/press-releases/2025-06-30-gartner-survey-finds-forty-five-percent-of-organizations-with-high-artificial-intelligence-maturity-keep-artificial-intelligence-projects-operational-for-at-least-three-years.

Feature photo by Keli Black from Pixabay.
