
Podcast: Deploying AI in payment integrity responsibly

Artificial intelligence (AI) and machine learning can help payment integrity programs achieve higher levels of value. But these technologies are not quick fixes and require proper planning and governance. Payers need to ensure that AI and machine learning are used in a way that supports their objectives and principles while improving member and provider satisfaction, not decreasing it.

On the sixth episode of Cotiviti’s Payment Integrity Insights podcast, Cotiviti’s Brett Arnold, senior vice president of product development, is joined by Anandhi Periyanan, senior vice president of R&D, to continue our conversation on the role of AI in payment integrity. Listen as Brett and Anandhi discuss these four key tenets to incorporating AI into payment integrity responsibly:

  • AI is a tool, not a solution.
  • AI should be used to improve your results.
  • AI must be used responsibly.
  • AI does not replace human expertise.

Don’t miss this opportunity to learn how AI can drive measurable value for your payment integrity program with the right inputs and principles in place. If you missed part one of the podcast, listen in to learn more about the potential for AI to improve payment integrity and reduce administrative costs.

Podcast guests

Brett Arnold
Senior Vice President, Product Development
Anandhi Periyanan
Senior Vice President, Research and Development

Podcast transcript

Anandhi: Today, we will dive into how safe and purposeful application of AI can bring real value to health plans that are trying to thrive in today's challenging environment. We'll focus this around four key values we think of as vital to incorporating AI into your payment integrity program: AI is a tool, not a solution. AI should be used to improve results. AI must be used responsibly, and AI does not replace human expertise.

Brett: I'll take a shot at that first one: AI is a tool, not a solution. Think about AI like you think about any tool: Microsoft Excel, Java, the internet. Like those tools, vendors should be using artificial intelligence to improve the value delivered for their health plan clients. And similarly, health plans will be using them in the same fashion for their own payment integrity programs.

Like any tool, AI does not work in a vacuum. It needs a large data set as an input to be trained; it has to learn from something. It often uses prior results from humans to mimic their decision making, helping make those processes more effective or more complete going forward. Real value requires not only data scientists and data, but also deep subject matter expertise. So you need to bring together the technology team, the data science team, and the subject matter experts to make this work.

As one relevant example, Cotiviti had our first machine learning model in production back in 2015. This was a model that helped augment our existing selection process for DRG reviews. It looked at claim data, using algorithms and machine learning to determine for which claims we should request a medical record to review. In its current state, it adds a lot of value, but in our first attempt we tried to do this with just data scientists and keep them separate from our experts. We were using an external partner, we were a little nervous about teaching them too much about what we did, and the result was pretty poor.

We did a pilot where we selected a couple hundred medical records. We got through the first 100 with no correct findings, and the big lesson we learned from that first attempt was that it's not just about data science and data. The technology doesn't just work by itself; we had to bring in our experts to guide the technology. Once we did that, we saw great results, improving the precision of our selection so we can deliver the most value for our clients without having to increase the medical records we're requesting.
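The selection process Brett describes can be pictured as scoring claims and requesting records only for the highest-scoring ones. The sketch below is a purely hypothetical illustration, not Cotiviti's model: the feature names, weights, and sample claims are all invented to show the ranking idea.

```python
# Hypothetical sketch: rank claims by an estimated error likelihood and
# request medical records only for the top-scoring claims, instead of
# reviewing a larger random sample. Features and weights are illustrative.

def score_claim(claim):
    """Toy linear score approximating the likelihood of an error (0..1)."""
    weights = {"length_of_stay": 0.05, "drg_weight": 0.3, "num_diagnoses": 0.02}
    raw = sum(weights[f] * claim.get(f, 0) for f in weights)
    return min(raw, 1.0)

def select_for_review(claims, budget):
    """Pick the `budget` claims most likely to contain an error."""
    ranked = sorted(claims, key=score_claim, reverse=True)
    return ranked[:budget]

claims = [
    {"id": 1, "length_of_stay": 2, "drg_weight": 1.0, "num_diagnoses": 3},
    {"id": 2, "length_of_stay": 12, "drg_weight": 2.5, "num_diagnoses": 9},
    {"id": 3, "length_of_stay": 5, "drg_weight": 1.2, "num_diagnoses": 4},
]
selected = select_for_review(claims, budget=2)
```

In a real program the score would come from a trained model validated with subject matter experts, which is the lesson of the pilot: the ranking is only as good as the expertise behind it.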

Anandhi: Now let's talk about the second value driver: AI should be used to improve your results. For health plan payment integrity programs, this means improving the medical cost savings, reducing your administrative burden, enhancing the experience, and lowering abrasion for your provider partners.

At the end of the day, we are not using AI for the sake of using AI. We are only using AI to improve the business results that matter. For example, here are a few ways AI can be used to improve payment integrity programs:

  • Increase savings value by accurately detecting true positive fraud and finding more previously unknown schemes hidden in the health plan data.
  • Improve consistency and accuracy by using natural language processing (NLP) and large language models (LLMs) to prepare medical records for human review. This allows us to capture all the relevant information from the record and display it so human reviewers can be more thorough and more consistent.
  • Develop new and innovative content by exploring ways, with the use of generative AI, to improve the processes of maintaining payment policies and to aid in the exploration of new policies.

For the most part, generative AI operates in three phases: training, tuning, and evaluation. Training creates the foundational model that can serve as a basis for multiple GenAI applications; tuning tailors foundational models to a specific GenAI application or use case; and evaluation and retuning assess the application's output and continually improve its quality and accuracy.
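As a toy illustration of those three phases, the skeleton below uses a trivial keyword "model" so the train, tune, and evaluate control flow is visible. Every function body here is a placeholder invented for this sketch; it does not reflect any real GenAI pipeline.

```python
# Illustrative-only skeleton of the three phases: train a general model,
# tune it for one use case, then evaluate its output quality.

def train_foundation(corpus):
    # Phase 1: build a general-purpose model from a large corpus
    # (here, just word counts standing in for pretraining).
    counts = {}
    for doc in corpus:
        for word in doc.split():
            counts[word] = counts.get(word, 0) + 1
    return {"counts": counts, "labels": {}}

def tune(model, labeled_examples):
    # Phase 2: specialize the foundation model to a specific task.
    for text, label in labeled_examples:
        for word in text.split():
            model["labels"][word] = label
    return model

def predict(model, text):
    votes = [model["labels"][w] for w in text.split() if w in model["labels"]]
    return max(set(votes), key=votes.count) if votes else None

def evaluate(model, cases):
    # Phase 3: measure output quality; low scores trigger another tuning pass.
    return sum(predict(model, t) == y for t, y in cases) / len(cases)

model = train_foundation(["claim denied appeal", "claim paid clean"])
model = tune(model, [("denied appeal", "error"), ("paid clean", "correct")])
accuracy = evaluate(model, [("denied", "error"), ("clean", "correct")])
```

The key point of the loop is the last step: evaluation results feed back into further tuning until quality and accuracy meet the bar.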

Brett: And the work your team did in improving our medical record selection processes is a great example. Leveraging machine learning, we were able to improve the savings for our clients while selecting fewer medical records for review. This is a rare win-win scenario. For DRG reviews, we need to acquire those medical records from hospitals to validate their accuracy, and this creates work for health plans, for hospitals, and for Cotiviti. We need to select those cases based on claim data, without access to the detailed information being reviewed: the medical record. The machine learning that Anandhi’s team built was able to increase the overall value while decreasing the administrative impact on plans and providers in supplying those medical records. Again, a rare win-win in payment integrity programs.

Anandhi: Now let's discuss the importance of responsible AI: governance, security, and privacy. As a starting point, all data used for AI is subject to the same required controls as any other data, but then we need additional controls, specific to AI, layered on top of that. Fairness and bias: there will always be some bias in any algorithm or data model, so how do we minimize and monitor it, and address concerns about an algorithm being biased? Section 1557 of the final HHS rule, which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability, applies to both doctors and insurers. Health plans acknowledge that member and provider bias may both exist.

Plans also want to know: Are their vendors using AI, and if so, how? How do plans monitor how vendors are controlling for risk, and how do they mitigate that risk? And if we are using AI, what information needs to be shared with members and providers? Do they need to know when AI is in use? Do we need to share information about how those models are trained? If so, how do we communicate this? I also want to take a moment to discuss what we are hearing from our clients: in client questionnaires, we often see that our clients are nervous about security and the use of AI.

Brett: Speaking of responsibility, let's address an elephant in the AI room: AI not replacing human expertise. As I discussed, you need prior results to train AI. Experts are even more critical to training and managing AI. So, as I mentioned before, be wary of vendors who do not have extensive knowledge of healthcare, payment integrity, and your unique needs. But it goes beyond model training. Cotiviti strongly feels that we should not replace human decisions, especially clinical decisions, with AI. Nobody wants to be in the New York Times for overstepping with AI.

We use natural language processing to prepare medical records for human review. As I mentioned above, this helps to make the reviews more productive and ensures the reviewer finds everything relevant in the medical record. This produces a more complete and consistent review, improving client results. But our expert reviewers make the determination. We are not ready and don't believe the healthcare industry is ready for AI being the final word on clinical decisions.

As we move towards the end of our discussion, let's end on a high note. What are we excited about with AI in payment integrity?

Anandhi: AI has the potential to have a transformative impact on society. However, life in the AI garden is not all rosy. We will see how AI progresses over the next few years, or even the next decade. Like any technology, AI has both pros and cons. At Cotiviti, I want to highlight how AI is helping us move more findings from postpay to prepay intervention, which by now we all know is key to avoiding waste, increasing medical cost savings, and decreasing provider and member abrasion.

Moving from postpay to prepay is not as easy as it sounds. If you want to pause a claim and delay a provider's payment, you need high confidence that there is an error. That is where AI can help. AI can improve the precision of analytics, which is key to transforming a retrospective solution into a prepayment solution. Within the operational tooling and workflows Cotiviti uses to prepare the metadata against which claims are adjudicated, GenAI can help us maintain and improve the accuracy for which we are known in the industry. What about you, Brett?
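The high-confidence requirement Anandhi describes amounts to a confidence gate: pause a claim for prepay review only when the model's estimated error probability clears a strict threshold, and otherwise pay normally (leaving the claim eligible for postpay review). The sketch below is hypothetical; the threshold value and routing labels are invented for illustration.

```python
# Hypothetical confidence gate for prepay intervention. Only claims the
# model is very sure about get paused; the rest pay on the normal schedule,
# which keeps provider abrasion low. The 0.9 cutoff is illustrative.

PREPAY_THRESHOLD = 0.9

def route_claim(claim_id, error_probability):
    """Route a claim based on the model's estimated error probability."""
    if error_probability >= PREPAY_THRESHOLD:
        return ("pause_for_prepay_review", claim_id)
    return ("pay_now", claim_id)

scored = [("A", 0.95), ("B", 0.40), ("C", 0.91)]
decisions = [route_claim(cid, p) for cid, p in scored]
```

The design choice is that a false positive here delays a legitimate payment, so the threshold is set high and precision matters more than recall; improving model precision lets more findings move safely from postpay to prepay.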

Brett: I mentioned earlier how excited I am about the ability for AI to help healthcare evolve to be more personal and predictive, but closer to home for me, there are exciting opportunities for generative AI and payment integrity as well. Our clients consistently ask for improved transparency in our programs and are consistently looking to minimize administrative impacts both on their programs and for their providers.

Where traditional AI has really helped increase the medical cost savings we produce for our clients, I believe generative AI has a chance to help us improve the experience for our clients and their providers. These technologies are helpful in separating the wheat from the chaff, helping Cotiviti and our clients focus only on the situations that matter and get better at avoiding false positives, and GenAI specifically is great at collecting and communicating information. I believe it can, over time, really help us improve both the experience and the impact of payment integrity on health plans and on providers.

About the Author

Beth Waibel has a long history of connecting health plan leaders with answers to their payment integrity questions. As director of marketing, she helps Cotiviti client partners and prospects understand the value and potential of our longstanding and new Payment Accuracy solutions in solving business challenges, lowering provider abrasion, and improving member engagement.
