In this episode, Adam Parks sits down with Shantanu Gangal of Prodigal to unpack how intelligence layer architecture for collections enables governed, explainable AI decisioning in regulated environments. Rather than focusing on tools or APIs, the conversation explores how data context, decision traceability, and compliance guardrails determine whether AI scales safely or creates risk.


Adam Parks (00:08)
Hello everybody, Adam Parks here with another episode of Receivables Podcast. Today I'm here with my good friend and someone that most of you have probably already met at a conference or seen on stage somewhere, Shantanu, joining me from Prodigal to talk to us about intelligence and activating the data within our organizations in order to leverage artificial intelligence. As we talk about AI and all of the different tools that we're trying to plug in, these tools are only as good as the data that we're feeding into them. And I think Shantanu has an interesting approach in the way they've architected their solutions to address some of the unique challenges of the receivables management industry. So Shantanu, thank you so much for joining me today and sharing your insights. I appreciate your time.

Shantanu Gangal (01:01)
Thank you so much for having me. It's good to be back and always a pleasure. I always leave these conversations with more ideas, so I look forward to today's discussion as well.

Adam Parks (01:12)
Well, thank you so much for joining us. For anyone who has not been as lucky as me to get to spend some time with you through the years, could you tell everyone a little about yourself and how you got to the seat that you're in today?

Shantanu Gangal (01:23)
Absolutely. So I've been doing some form or another of consumer finance for about 15 years now. My education is in computer science, but funnily enough, I've never earned a single paycheck writing code. I write code for fun. Right after college, though, I stepped into the financial services industry.

I was a consultant at Boston Consulting Group, then an investor at Blackstone. And since then I've been very, very laser-focused on this industry. Part of it is that technology always appealed to me, but technology was slow to come to our industry, even compared to marketing, compared to retail, and so on. But in the early 2010s, I knew that change was afoot. And with that, I had a very clear three-stage framework for how change was going to happen to this industry. Early in the 2010s, we had already seen a lot of change on the origination side. From 2013 to 2018, we saw tech make a very substantial difference in underwriting. And right around 2018 is when we started to drive change in how loan servicing and collections happen, which is actually where the best-in-class differentiate from the average lender. So a lot of thought process, a lot of philosophical understanding of the underpinnings of this industry, a lot of history lessons, both good and bad, have gotten me to this point. Excited to see what 2026 brings.

Adam Parks (02:52)
And so you've been leading Prodigal now for about seven years. Tell us a little about your organization and what it is that you do there.

Shantanu Gangal (02:59)
Yeah. So we essentially build AI agents for loan servicing and collections, but that is really the tip of the iceberg. The AI is a means to an end, and today it is at the top of what everyone talks about. What we build, and where I actually spend a lot of my time, is what is below the water level: what the eye doesn't really see, what doesn't make it onto a website. Which again goes back to a lot of the philosophical underpinnings of this industry: what is it that people care about? How have we gotten to this point as an industry? What are some of the pushes and pulls that have gotten us here? And how do we still achieve the same goals that all of us in this audience have, while acknowledging history, working to make a big difference, and incorporating the latest technology? So while we do have AI agents for various end goals, which again are the tip of the iceberg that meets the eye, there's a lot of time that we spend very deeply thinking through how technology is going to make a difference to this industry, in some ways unpacking 30, 35 years of investments in IT and rebuilding some of it altogether. So when you think of us, you think obviously of AI agents for loan servicing and collections: agents that make autonomous phone calls and carry out those conversations, agents that do autonomous texting and emailing to engage borrowers, get them to your website, and have them liquidate on the website. You also think about AI co-pilots that make your team efficient when they are between calls and more effective when they're on a call, so they are, you know, just more productive per minute, per hour. And we also actually help a lot on the back end, which is back-office quality assurance, QA monitoring, document handling, and so on.
So all of these are individual AI agents that work hand in glove because they're all based on the same substrate, which is what we like to call the Prodigal Intelligence Engine, aka PIE. And they work extremely well hand in glove with each other. They are the proverbial toppings on our pie: they have the flavor of the pie, and they also obviously contribute back to the pie.

Adam Parks (05:11)
So the intelligence engine is really the big piece that I wanted to talk about today, because I think it's an interesting approach to the architecture. We hear about tying APIs back to your system of record or other simple solutions, but is that really the be-all and end-all, and what does that start to look like in terms of the movement of data? So I think it's interesting when I hear the way that you've done this. Now, for those of you that watch a lot of my content, you've seen me talk about the siloed systems across the debt collection industry and how difficult it can be for an organization to bridge those silos. So, Shantanu, as somebody who's been at this for seven, eight years in the debt collection space, can you talk to me a little about what you saw in the marketplace and how you came to the conclusion of the architecture that you chose?

Shantanu Gangal (06:00)
Absolutely. So I'll pick up on one thing you said, which is data silos. I'll draw on a bunch of history lessons based on speaking with a lot of executives who have made some trade-offs along the way. And then I'll obviously talk about the state of the technology. Being based in the Bay Area, we get to hear the latest and the greatest, but that also gives us the opportunity to pause and think, how is this relevant to my customers? And then implement the things that are relevant and obviously tune out the things that are not.

So in all of these pieces, data silos are super important. A lot of these companies were established even, let's say, pre-internet, and to that extent those systems of record exist from the nineties even. That's one silo. Obviously you interface with your customers, which might be lenders, might be debt buyers. And a lot of those folks have their own systems and the idiosyncrasies that come with them, which you keep in SFTP or something like that. That includes media, documents, evidences, and so on. That's the second silo. The third silo is obviously everything that you use to do outreach, what we call systems of engagement: what do you use to engage your consumers? That might be a telephone system. That might be your texting or emailing platform. That might be the chatbot on your website. All of these things, unfortunately, again sit in silos.

And over time, people have drawn some strings or, you know, pulled some wires to connect a few of these things to get us to this point. That gets me to the second part, which is the history lesson. Why has that happened? Because very little texting and emailing even happened until about seven, eight years back, so all of these are mandates that have been put in place along the way. The third aspect of it is that a lot of people have come and gone through the industry, and everyone has applied their own judgment and flavor and put in place some processes. Some of it has been experiential decisions that people have used to set rules of engagement inside their team. And that is not codified; that is, you know, institutional knowledge that doesn't exist in any written-down place. And that's gotten us to this point.

And that kind of gets us to technology, which is obviously also a big driving force of what we do. I am a big believer in this concept of context graphs. It is less about databases or data lakes. It's about saying, hey, modern AI, and the future where AI is going to be an important tool in the toolkit, actually needs to work with your accounts and embrace a lot of the history, and needs to work with your people and embrace the knowledge, the behaviors, and the inertia that your team will bring to the table. And how does AI do that? I think AI does that by not just thinking about bespoke, random APIs that go from system to system. You actually think about it in many, many different layers. At the base of it, sure, you need data connectors, I get it. APIs are one form of data connector; you could also have file exchanges. But you also need decision traces. You need your AI to understand what workflows have happened.

A final note on a document doesn't capture the entirety of what led to that point. It is kind of path-agnostic, but AI can't actually be okay with that, because the knowledge your team is able to interpolate by looking at two points and figuring out how it probably got from here to there, someone needs to tell AI that very explicitly. And that is where we've seen a lot of half-baked AI approaches fail. So essentially there are these three or four things. The fourth is obviously internal policies and procedures. There is state-by-state compliance; some cities, New York City for example, have city-level compliance. All of these things need to create a substrate, a foundation, for your AI agents to really excel, and the same foundation that got you here will not get you there. I think that is our concept of the Prodigal Intelligence Engine. It is the foundational substrate that allows not one but essentially a swarm of AI agents to work with each other, to behave very consistently with one another, and to do it while leveraging the information that has gotten you to this point so far. There has been a lot of investment and a lot of thoughtful architecture on this, and it's what gets me super excited.
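To make the layered idea concrete, here is a minimal sketch of a context substrate that merges the three layers Shantanu describes: data connectors, decision traces, and applicable policies. The class names, fields, and policy scoping are illustrative assumptions, not Prodigal's actual schema.

```python
# Hypothetical sketch of a layered "context substrate" for AI agents.
# All names and fields are illustrative, not any vendor's real schema.
from dataclasses import dataclass, field

@dataclass
class AccountContext:
    account_id: str
    facts: dict = field(default_factory=dict)           # from data connectors (APIs, file exchanges)
    decision_trace: list = field(default_factory=list)  # workflow steps that led to this point
    policies: list = field(default_factory=list)        # federal/state/city rules that apply

def assemble_context(account_id, connectors, trace_store, policy_store, state, city=None):
    """Pull each layer into one substrate that every agent shares."""
    ctx = AccountContext(account_id)
    for connector in connectors:
        ctx.facts.update(connector(account_id))
    ctx.decision_trace = trace_store.get(account_id, [])
    ctx.policies = [p for p in policy_store
                    if p["scope"] in ("federal", state, city)]
    return ctx

# Usage with toy in-memory sources:
connectors = [lambda aid: {"balance": 412.50, "last_contact": "2024-01-08"}]
traces = {"A-100": ["placed", "validation_notice_sent", "email_opened"]}
policies = [{"scope": "federal", "rule": "call frequency cap"},
            {"scope": "NY", "rule": "NYC disclosure requirements"}]
ctx = assemble_context("A-100", connectors, traces, policies, state="NY", city="NYC")
print(len(ctx.policies))  # 2 -- both the federal and NY-scoped rules apply
```

The point of the sketch is that every agent reads from the same assembled context rather than wiring its own one-off API calls into each silo.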

Adam Parks (10:29)
I can tell, which I like a lot. Let me try and restate and make sure that I'm understanding. So basically the intelligence layer is connected into the various silos in order to capture context, bringing that context into a single layer on which the AI tools are built. The bot, for example, or the

Shantanu Gangal (10:43)
Yes.

Adam Parks (10:52)
swarm of bots are ultimately all operating off of the same, call it, brain layer, without adding another layer of strain on that existing IT infrastructure. The underpinning legacy systems from the nineties, for example, are not becoming the workhorse. There's a workhorse layer that is, call it, consuming the context and then using that context to create the next action and communication. Am I understanding that the right way? It's a complex method; I'm working my way through it.

Shantanu Gangal (11:24)
Yeah, that's right. So more than just the research pieces of it, I think the engineering complexity of it is also worth appreciating. We are able to lean on the latest technology, but we are able to do it while drawing upon your proprietary data, reflecting your existing workflows, your existing rules, procedures, and policies. Anyone can put a simple AI

Adam Parks (11:39)
Yeah, that's great.

Shantanu Gangal (11:59)
to use, but absent all of this context, it is going to work 95% of the time, which is really, really bad. You really need it to work 99.999% of the time for you to feel comfortable doing it. The second thing, which again a lot of people are just waking up to, is that your AI will do a bunch of things, and you need a lot of those actions to go back and be reflected in a way that the rest of your team can understand, build upon, and communicate to their customers, their lenders, and so forth. This backhaul of information is very different. AI acts in ways that are superior in some aspects, but also counterintuitive in other aspects. So how do you communicate what led to this tool invocation? How did it act when it acted? And how do you present that in a way that the rest of your team can follow along? Thinking of AI as a collaborator or colleague is, again, something we've grown to appreciate a lot, definitely over the last year, and we've put in a lot of investment to reflect that.
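The "backhaul" idea above, recording every tool invocation with its reason so a human teammate can follow along, can be sketched in a few lines. The wrapper, tool stub, and trace format here are assumptions for illustration only.

```python
# Minimal sketch of a decision trace: every tool the AI invokes is logged
# with its reason and result, so a human can audit what happened and why.
import datetime

class TracedAgent:
    def __init__(self):
        self.trace = []  # human-readable record of everything the agent did

    def invoke(self, tool, reason, **kwargs):
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": tool.__name__,
            "reason": reason,
            "args": kwargs,
        }
        result = tool(**kwargs)   # run the tool, then record its output
        entry["result"] = result
        self.trace.append(entry)
        return result

def lookup_balance(account_id):
    # Stand-in for a real system-of-record call
    return {"account_id": account_id, "balance": 412.50}

agent = TracedAgent()
agent.invoke(lookup_balance, reason="Consumer asked for current balance",
             account_id="A-100")
for e in agent.trace:
    print(f'{e["tool"]}: {e["reason"]} -> {e["result"]}')
```

A trace like this is what lets the rest of the team, and a compliance reviewer, reconstruct the path from signal to action rather than seeing only the final note.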

Adam Parks (13:07)
So when we think about decisioning and those processes, one of the things that you mentioned in the planning call was that not everything should be a black-box decision, and that we have to be able to thoughtfully capture the process or the thought behind the decision that's made. Can you help me understand what you mean by that and how you've seen it put into practice?

Shantanu Gangal (13:32)
Absolutely. So LLMs are very powerful, but they can also embellish their own progress, and that's when you run the risk of non-compliance. The way to handle that is to really ring-fence and channel their power. Think of all the civilization-changing technologies that we've had so far, be it fire, be it the printing press, be it the internet. All of these technologies have the ability to wreak heavy destruction if not channeled properly, and AI is no different. To that extent, fire helps you cook, keeps you warm, does a bunch of things, but a wildfire can also wreck the whole village. AI is no different. You need to figure out ways to channel it. So what we do is, essentially, AI gets the right context in an extremely limited form, on an as-needed basis, just in time to act and make a recommendation.

The alternative, what feels easy to demo at a conference, is to throw the kitchen sink at it and feel like, oh my God, this AI can be magical. Sure, it kind of gets it done, but you then don't understand what it did. There's a famous line from Harry Potter, incidentally, that I like: never trust a system whose brain you cannot see. And AI can actually feel that way if you're oblivious to what it is doing.

And the way to demystify it is by saying, I'm going to only give you so much power over the next five seconds, and you've got to come back to me with an answer. Don't get me wrong, I think that is going to be a far more powerful answer compared to, say, Google search or any other system that we've had in the past. But then you do ring-fence it. You give it only the necessary amount of information; you don't give it unnecessary information, so it cannot make up things on its own. You ask it for a very clear explanation of why it did what it did. Then you're able to either agree, and most often you will agree, or in the rare scenario where you don't agree, you're able to ask it to do its work again. Again, that requires a lot of engineering and very careful orchestration to channel its energy in just the right ways and not let it run amok.
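The ring-fencing described here, hand the model only whitelisted fields for the current step, and require a rationale with every recommendation, can be sketched as follows. The field names, step names, and the stand-in `recommend` function are hypothetical; a real system would call an LLM where the stub is.

```python
# Sketch of "ring-fencing": the agent receives only fields whitelisted for
# the current step, and every recommendation carries an explanation.
ALLOWED_FIELDS = {
    "pre_rpc":  {"first_name", "callback_window"},
    "post_rpc": {"first_name", "balance", "hardship_flag", "plan_options"},
}

def ring_fence(account, step):
    """Hand the model only what this step needs, nothing it could misuse."""
    return {k: v for k, v in account.items() if k in ALLOWED_FIELDS[step]}

def recommend(context):
    # Stand-in for an LLM call; note the mandatory "because" field.
    if context.get("hardship_flag"):
        return {"action": "offer_plan", "because": "hardship flag present"}
    return {"action": "request_payment", "because": "no hardship indicators"}

account = {"first_name": "Pat", "ssn_last4": "1234", "balance": 412.50,
           "hardship_flag": True, "plan_options": ["3x", "6x"],
           "callback_window": "9-5 ET"}

pre = ring_fence(account, "pre_rpc")
assert "ssn_last4" not in pre and "balance" not in pre  # minimal pre-RPC context
post = ring_fence(account, "post_rpc")
print(recommend(post))  # {'action': 'offer_plan', 'because': 'hardship flag present'}
```

Because the model never sees fields outside the whitelist, it cannot leak or invent them, and the required "because" field is what a reviewer agrees or disagrees with.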

And I think that's where the fact that we are able to work with a customer's proprietary workflows, and with the customer's proprietary policies, is super important. Many of our customers have 30 different policies across different states, different kinds of loan portfolios, different clients. Now you want to bring the AI the right policy at the right time,

Adam Parks (15:49)
You definitely do not want it to run amok.

Shantanu Gangal (16:19)
give it access to the right policy, and make sure it understands the policy correctly. We are basically rewriting a lot of our customers' policies and codifying them. We are not changing anything, but we are codifying them in our Prodigal Intelligence Engine in a way that AI agents know which policy to pull up at the right time and then still act in adherence to that policy. And that is what the Prodigal Intelligence Engine is all about.

Adam Parks (16:41)
So it's about being able to put those guardrails up and to have one source of truth, where you've pulled in this context about the communications across different platforms. You're able to reach into the various silos, bring that information together, and use it in an active way for determining the next step, the next message, whatever the account treatment is that's coming next. Now, I know that you started off with a couple of other products, like ProNotes, which was automatically taking notes on behalf of the agent who was on a live call. Have you continued to develop those products and bring them into, or on top of, the intelligence layer? How have those earlier products been integrated, and how do they function under the updated architecture?

Shantanu Gangal (17:27)
All of them have been upgraded to work on top of PIE. In fact, one of the biggest values that we see is that all of them work extremely consistently to deliver a seamless experience. If a person has had a bad experience, or has, let's say, recently verified themselves with the last four of their Social Security number, and we have that from a prior note, our ProAgent will, rules permitting, not ask them for the Social Security number again. It just improves the experience. If we see someone during the day look at an email and not act on it, look at it again in 10 minutes and not act on it, and look at it a third time during their lunch break, we think, hey, look, they're itching to take some action, but something about it is not giving them the right confidence. And subject to a bunch of other restrictions, if we can make an outbound AI call, we would make a call at that point. That's a signal for us that says, these are people who want help. And we are able to build upon the fact that they received an email from us in the morning, they've looked at it, opened it, maybe clicked it, but they've not completed the transaction. All of that context is already passed on to the ProAgent so it can carry out a far more relevant conversation. So ProAgent is the latest. We also have a checkout payment page that is far superior and very ready for the ChatGPT way of doing things. ProNotes, ProAssist, which makes your team effective on calls and efficient between calls, as well as ProInsight, are all now working on top of that same Prodigal Intelligence Engine. That means they're very coordinated, like different people on your team doing different jobs, and they all write back to the same single source of truth.

Adam Parks (19:24)
Yeah, they're working from the same source of truth and the same brain, so to speak, in order to drive the next action. What I think is interesting about that is that a lot of organizations are struggling to pull the signals together into actionable intelligence. We might know that an email was opened three times, but can we understand it in real time, and act on it within a fast enough window that the data has not decayed, become stale, and is still relevant?

Shantanu Gangal (19:52)
And this goes back to the whole silos point: if you have data in silos, it is really hard to do. If you put the customer, the end consumer, at the center of everything, then every action the consumer takes, or looks at but doesn't take, is actually telling you something. And it allows our agents to be that much more considerate, that much more empathetic.

Adam Parks (20:14)
Let's bring those signals together, and then when those signals add up to a certain point, that triggers an action, and you're able to carry that across your communication platforms. We see, for example, some groups that have fully integrated portals, and other groups that are uploading information to a portal on a daily basis via SFTP. Now, the consumer's engagement with that portal and the ability to execute on the next thing are, I think, severely limited in a lot of organizations, because they don't have that combined intelligence layer pulling the signals together for the next action, which is ultimately, I think, what we're trying to get to. And I think that's the promise that artificial intelligence has made to the commercial space: that you're going to be able to bring this data together and act on it. But for organizations maintaining very siloed data, that capability becomes limited without an intelligence engine architecture sitting over the top of those siloed systems and pulling that data into a single place. This might be a little too nerdy for some of the audience, but I'm curious myself: is the intelligence layer pulling that data in from each one of those silos, or is it using its access to those silos to kind of run that understanding in place? I'm just curious how you structured that data flow.

Shantanu Gangal (21:37)
The former, for sure. The intelligence engine will pull in a lot of this data. It has its own staging layer of sorts, and the data goes through its own refinement. The intelligence engine serves multiple use cases; there are four primary ones. One, it is the translator. It translates the customer's business context into context that an AI, or a suite of AI apps, can understand. And it translates both ways: it translates what the AI is doing, what tools it invoked, what decision traces it took, and passes all of that back to the customer's systems as needed. Think of it as the opposite side of the same coin. So that's the universal translator of sorts. The second is that it is actually super intelligent. It is not like a CRM, a dusty file cabinet sitting in a basement. It is a dynamic, actively thinking, quickly learning system. Just like when you visit a web page on the internet you might start seeing Instagram or Facebook ads for it, because those platforms have built that whole ecosystem to be more relevant and reflect your needs in your feed, PIE is similarly very responsive, very intelligent. Obviously we do it within the framework of what is allowed, but that's the second piece of real value it adds: it keeps thinking. The third is that it knows what to pull in at what time. It is effectively an orchestrator. Think of it as a puppet master that gives the right information, the right evidence, the right documents to the AI agent at the right time. You don't want to start a call by overwhelming it with too much data.

Adam Parks (23:02)
Sorry, go ahead.

Shantanu Gangal (23:28)
You only want to give it very limited data, the bare minimum needed to make an RPC, a right-party contact. Once it makes RPC, depending on the person's ability or interest to pay or not pay, you give it more data or different data. So it's an orchestrator of sorts, and it does that job really well. Part of it is orchestrating one single agent. The second part is that it has an alert system, where it says, hey, we had a signal come in from the email platform saying this person clicked on an email but hasn't actually gone ahead. It is also orchestrating signals coming in from different applications, and then maybe creating a trigger or alert to a fourth application, or to the agents, saying, hey, maybe give them a call, or maybe popping in to say, don't give them a call right now; give them a call at the end of the month.
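The signal-orchestration half of this, engagement events accumulating across channels until they cross a threshold and trigger a next action, can be sketched simply. The weights, threshold, and action names below are invented for illustration; a real orchestrator would also enforce contact-frequency and time-of-day rules.

```python
# Sketch of signal orchestration: engagement signals from different channels
# accumulate per account; past a threshold, a next action is proposed.
from collections import defaultdict

WEIGHTS = {"email_opened": 1, "email_clicked": 2, "portal_visit": 2}
THRESHOLD = 3  # illustrative; tune per portfolio

class Orchestrator:
    def __init__(self):
        self.scores = defaultdict(int)

    def ingest(self, account_id, signal, call_hours_ok=True):
        self.scores[account_id] += WEIGHTS.get(signal, 0)
        if self.scores[account_id] >= THRESHOLD and call_hours_ok:
            return {"account": account_id, "action": "outbound_ai_call",
                    "why": "repeated engagement without completion"}
        return None  # not enough signal yet

orch = Orchestrator()
orch.ingest("A-100", "email_opened")            # looked once, no action yet
orch.ingest("A-100", "email_opened")            # looked again
trigger = orch.ingest("A-100", "portal_visit")  # third look crosses the threshold
print(trigger["action"])                        # outbound_ai_call
```

The point is that no single silo sees enough to act; only the layer that ingests all three signals can decide the moment is right.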

Adam Parks (24:14)
I think that coordination is interesting. Now, as you're knee-deep in this with a variety of different customers and clients on the creditor side, debt buyers, agencies, et cetera, what do you think the next wave of challenges is going to be for the receivables management industry in leveraging artificial intelligence? We've seen some challenges as we've started to roll things out, comfort levels, et cetera. But what do you think that next wave of challenges looks like?

Shantanu Gangal (24:42)
Yeah, I'll talk about both the challenge and the opportunity in the same breath. Some of the leaders in the industry are definitely identifying the challenge and very actively turning it into an opportunity. And this centers around how the rest of the organization thinks of AI, views AI, embraces AI. Again, a lot of these organizations have been built over 30 years of doing things the same way, and it really worked.

It really made these into very, very large, important businesses in our industry. That is understood. So it is very natural for people to think, this AI today isn't good enough compared to what we've been doing, and we've had a lot of success doing things the way we've done them so far. While that argument can feel challenging, a lot of people are embracing the opportunity, thinking very aggressively about new ways to position AI within their organization and about how their team's roles will evolve as AI takes more and more of center stage. They are finding opportunities; a lot of folks have asked some of their junior paralegals or compliance experts to help us build this intelligence engine and this context graph.

By saying: all this while, you were setting rules for other team members, and you did a really good job at it. Now help us set rules for AI agents, in a way that AI agents can follow. So while formerly you would probably have been writing policy documents as Word documents and PDFs, now the question is what you can write in our internal tooling, and we have a lot of tooling for this, where they say, this is what I expect AI agents to do. That looks slightly different from a Google Doc, but it has the same essence: telling a different kind of worker what needs to happen. A great example is email and text templates. We have a bunch of internal tools that our customers use to approve or disapprove email templates, or give us suggestions to make them better.

What was formerly happening through Google Sheets and things like that now happens in a way where our internal tool itself is self-learning. The same people who were writing policies for digital outreach on pen and paper, or in a Google Doc, are now writing those policies in a way that our AI agents are naturally attuned to learning from. But that is an exception. The customers of ours who are doing it are definitely exceptions, leading the wave. Unfortunately, back to your original question, I think that remains a widespread challenge in the industry.

Adam Parks (27:16)
It's really interesting that you've got them writing it. How different is the documentation that's written for a live agent from that for an AI agent? I'm curious, as someone who's written a lot of documentation through the years, how does that look different? How would I communicate to the AI agent differently than I would to a human?

Shantanu Gangal (27:25)
I mean, it's a little bit more structured. Think of XML or JSON files that you are probably familiar with. Prima facie, at first glance, it looks very different from a Word document, for sure, I get it. But at the heart of it, it is communicating the same end goal. And obviously we have put in a good process so that it doesn't feel like you're writing JSON files. But effectively, the idea is that how you teach machines is going to be very different. Take trainers: we have personally sat through a bunch of training classes at our customers' sites. And some of those trainers are now helping us write training modules for their AI agents. Obviously, not all of this is cross-learned, but the point is their AI agents are now better off, because their AI agents have also, in effect, sat in the training class. And that kind of change is limited to people who are seeing

Adam Parks (28:14)
Yeah.

Shantanu Gangal (28:26)
and thinking ahead, but it remains a challenge for the bulk of the industry.

Adam Parks (28:33)
Really interesting. So you're taking those policies and, it sounds like, almost exporting them into a structured XML file, or whatever the input file is; the format's not important.

Shantanu Gangal (28:43)
So that in itself actually doesn't do it. There is a lot more pre-processing, and there's a lot of tooling. It is essentially the same start point and end point, but the journey through it is a lot more conversational, a lot more bespoke, because a lot of this doesn't even exist in the policy in the first place. So how we create an environment and tooling that makes this go as smoothly as it can is something we continue thinking about. And that makes the intelligence layer that much more thoughtful and relevant for the AI agents, and that much more ready to work within your existing systems, your existing business.
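To give a feel for what "policy as structured rules" might look like versus a Word document, here is a toy sketch of a codified call-frequency policy with state-level overrides. The schema, rule names, and values are hypothetical, chosen only to show how an agent could pull up the right policy at the right time.

```python
# Sketch of a policy codified as structured rules instead of prose.
# Schema and values are illustrative, not any customer's actual policy.
POLICIES = [
    {"scope": {"state": "MA"}, "rule": "max_calls_per_week", "value": 2},
    {"scope": {"state": "*"},  "rule": "max_calls_per_week", "value": 7},
]

def applicable_value(rule, state):
    """Most specific scope wins: an exact state match beats the wildcard."""
    matches = [p for p in POLICIES
               if p["rule"] == rule and p["scope"]["state"] in (state, "*")]
    matches.sort(key=lambda p: p["scope"]["state"] != state)  # exact match first
    return matches[0]["value"]

print(applicable_value("max_calls_per_week", "MA"))  # 2 -- stricter state rule applies
print(applicable_value("max_calls_per_week", "TX"))  # 7 -- falls back to the default
```

A structure like this is machine-checkable at decision time, which is exactly what a prose PDF can't offer an agent mid-call.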

Adam Parks (29:22)
I think that's really interesting. The process of training those AI bots by leveraging the people who would be training the live folks makes all the sense in the world; I can't find any flaw in that logic. It's just interesting, because you don't think about it: some of these old-school collections trainers, who have been training for 25 years, are now the ones actually teaching the bot how to do it, because it's not just listening to calls and trying to learn. And I'm sure there are some learning modules happening from that perspective. It's just that, for those of us who are not building AI tools on a day-to-day basis, it's interesting to think about how we're feeding them information, and how different the outputs between two AI bots could ultimately be based on the rule sets being put around those bots, whether it's client restrictions or whatever.

In my mind, I think about them performing somewhat equally, but that's not realistic. You know, this one's got these rules and that one's got those rules, and what is that going to look like in terms of final production? Any direction on where you've seen the AI agent bots become the most impactful for an organization?

Shantanu Gangal (30:37)
Absolutely. So I'll talk about two cases. A lot of healthcare accounts are warehoused. They are too small of a balance for anyone to collect in a profitable manner, so they're just not worked. These are balances where, even if I do liquidate one and it takes me ten minutes of a person's time, I barely break even on spending ten minutes on that account. They're smaller, so they end up not being worked. AI has completely blown that economic model wide open. Suddenly you're looking at so many millions of dollars of three or four years of warehoused inventory, and AI is actually working through it. Especially on the inbound side, it can obviously answer questions 24/7. AI is able to collect more, where some of this would never have been collected before, and it's able to do it in a way where the patient feels the patience. I think our AI is super patient compared to a lot of folks, and that actually works great. In many cases we are not even necessarily cheaper than the alternative, but that's not the point, because the ROI on the business model is just so much better. Some of our customers are actually going in and buying up warehoused, small-balance inventory, saying, look, we'll work this with AI. So that's one place where we've seen a lot of change with AI, and it's been very, very lucrative for our customers to act on, because it's not just saving you money. It's actually generating revenue that you wouldn't otherwise have gotten.

Adam Parks (32:15)
I like that. I mean, we often think about artificial intelligence as a cost saver, not necessarily as something that can become a new revenue generator, and it depends on the way we look at the different use cases. In just a couple of weeks, Shantanu, we're going to be in Las Vegas at the RMAI annual conference again, and we're going to get flooded with people trying to talk to us about artificial intelligence and tell us all about the latest and greatest new things. What advice do you have for the industry executives who are going there and hearing all of these pitches coming at them from newer organizations? What advice do you have for guys like me to help me weed through and find the right organizations to engage with?

Shantanu Gangal (33:02)
So the first thing, and this is actually very easy, is ask them: can I call a phone number right now, and will it connect me to a production system? I may not be able to get past RPC, but I can actually see it. A lot of people over-index on demo systems. They might say, oh, we have the best natural-quality voice, our AI cracks a joke, our AI giggles. All of that is fine, and actually not that impressive to me, because what it is missing is the learning, the training. Have they sat in the training class? That is a question you need to ask. How do they handle New York City differently from New York State, New Hampshire differently from Utah? Very, very different things. Which are the states in which you give an AI recording disclosure, and which are the states in which you don't? All of these will effectively allow you to parse out how real the performance is. Are they doing 10,000 calls or a million calls? That is going to set these things apart. I don't want to talk ill about anyone else in this industry, but I think the obfuscation that ends up happening is dangerous, maybe silly, and it is only a matter of time before the hot money leaves, and then you don't want to be holding the bag. That has happened before, where someone came in, offered IVA to the industry, and twelve months later they were nowhere to be seen. That kind of thing makes a very, very big difference. Do you have

Adam Parks (34:28)
If ever

Shantanu Gangal (34:42)
folks who can understand these things very, very well. One of the biggest things some of our competitors don't understand is that they assume a lot of these policies and procedures are all written up in very nicely formatted documents. The industry reality, through no fault of anyone, no fault of our customers, is that that's not the case. People assume that every account number will be a six-digit number. No, they'll be long. They'll be like

Adam Parks (35:09)
No, different creditors, different ways. It's never the same.

Shantanu Gangal (35:13)
And even your top accounts can have typos in them. X One Bank can be spelled many different ways. Humans are so awesome at looking past these errors and correcting typos in their minds. An AI is just going to collapse if you haven't built a very thoughtful way to say all of these are typos of the same vendor's name. So asking, hey, how many different typos have you seen for this lender, is actually a good question. And if that seems like a question they haven't even thought about, that actually tells you something, because this industry is ripe for typos.

Adam Parks (35:53)
It's the little things that count, right? This is a game of inches, and it's about combining all of those things and those experiences, because you didn't walk into the space going, I need to find 500 typo doppelgangers for XYZ company. You've learned it through experience, which I respect.

Shantanu Gangal (36:09)
That's a good one. And again, I wish some of our peers understood it, but they just assume it's easy. For example, we have a very elaborate module to actually identify accounts where the first name and the last name are switched in the data. Again, it happens commonly enough to trip up the AI, and if you aren't thoughtful about it, you're just basically losing money because

Adam Parks (36:28)
Yeah, that's a good one, I like that. What data are you feeding to the AI? I think that makes a lot of sense. Shantanu, I always ask the same first two questions of anybody who tells me they have an artificial intelligence tool they want me to look at in the debt collection industry. Question number one: who's your lawyer? And question number two: who's managing compliance? Those are the two core things to make sure that you have brought some experience, that you're bringing some of those historical challenges to the table, and that you're able to think through them prior to me putting money on the line.

Shantanu Gangal (37:09)
Yeah, I completely agree. I think that's definitely something you should be asking. But even on the technology side, I think there are clear gimmicks that you will spot in some of these fly-by-night operators.

Adam Parks (37:24)
Well, Shantanu, I really do appreciate you coming on and spending some time with me today. I appreciate you walking me through how you've developed the intelligence engine and how that really has had an impact on your ability to deploy the types of tools that we spend a lot of time on this channel talking about. Artificial intelligence, self-service, and all of those pieces. But thank you so much for coming on and sharing your insights.

Shantanu Gangal (37:47)
Thank you so much. You can't do AI by renting out someone else's intelligence. That's something we have held close to heart, so thank you for having me and for talking about something that is very close to my heart. Thanks, Adam.

Adam Parks (37:58)
Well, this has been a lot of fun. I'm looking forward to seeing you in a couple of weeks at the RMAI conference. For those of you watching, if you have additional questions you'd like to ask Shantanu or myself, you can leave those in the comments on LinkedIn or YouTube. Or you can just walk up to Shantanu and say hi at the Prodigal booth, or come see me over at the Receivables Info booth at RMAI. We'll be there answering questions as well. But until next time, Shantanu, I'll see you in a few weeks. Looking forward to it.

Shantanu Gangal (38:24)
Looking forward to it as well. Thanks Adam.

Adam Parks (38:26)
And thank you, everybody, for watching. I appreciate your time and attention. We'll see you all again soon.

Why Intelligence Layer Architecture for Collections Matters

Across the receivables industry, AI conversations often start and end with tools. New bots. New copilots. New integrations. But as Adam Parks points out on this episode of the Receivables Podcast, that framing misses the real issue: AI is only as effective as the intelligence layer beneath it.

That question of what actually sits beneath AI is the foundation of Adam’s conversation with Shantanu Gangal of Prodigal. Rather than focusing on features, the discussion centers on intelligence layer architecture for collections, and why data context enabling AI decisioning is now a prerequisite for scale, compliance, and trust.

Adam draws on years of observing failed implementations across agencies, debt buyers, and creditors. The pattern is familiar: disconnected systems, fragmented policies, and AI tools expected to “figure it out.” As Shantanu explains, that approach almost guarantees inconsistent decisions and unnecessary risk.

This episode matters because it reframes AI not as a labor replacement, but as a capacity design problem. For leaders responsible for compliance, audit defensibility, and portfolio performance, that distinction changes everything.

Key Takeaways from the Episode

AI Fails Without Contextual Intelligence

“AI is a means to an end. What really matters is what sits below the waterline.”

Shantanu makes it clear that AI agents operating without historical context, decision traces, and policy awareness will always underperform. Adam expands on this by noting that human collectors instinctively interpolate missing information, something AI cannot do on its own.

Key Reflection:
This is where many AI deployments break down. Organizations connect tools via APIs but never solve for shared understanding. Without an intelligence layer, AI decisions lack continuity, memory, and accountability. Context is not a luxury; it is the difference between automation and chaos.
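The continuity and accountability described above can be sketched in code. The record type, field names, and rule name below are hypothetical, illustrating one way an intelligence layer might pair every AI action with the context it saw and the policy rule that authorized it, so the "why" survives for later review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch, not any vendor's implementation: a decision record
# that captures the action, the authorizing rule, and a context snapshot.
@dataclass
class DecisionRecord:
    account_id: str
    action: str                    # e.g. "send_payment_reminder" (hypothetical)
    rule_fired: str                # policy rule that authorized the action
    context: dict = field(default_factory=dict)  # inputs at decision time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def explain(self) -> str:
        """Render a human-readable trace for compliance review."""
        return (f"Account {self.account_id}: action '{self.action}' "
                f"authorized by rule '{self.rule_fired}' at {self.timestamp}")

record = DecisionRecord(
    account_id="A-1001",
    action="send_payment_reminder",
    rule_fired="contact_frequency_ok",
    context={"last_contact_days_ago": 9, "state": "NY"},
)
print(record.explain())
```

Because every action carries its own context snapshot, an auditor can replay what the system knew at decision time instead of reconstructing it after the fact.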

Breaking Data Silos Is a Governance Problem, Not a Tech Problem

“Data silos exist because of history, not because people wanted them.”

Shantanu outlines how legacy systems, lender requirements, engagement platforms, and institutional knowledge all form separate silos. Adam reinforces that most leaders underestimate how much decision authority is fragmented across these systems.

Key Reflection:

  • Siloed data leads to inconsistent AI behavior
  • Compliance rules often live outside systems of record
  • Human judgment is rarely codified for AI use
  • Governance fails when ownership is unclear

AI Decision Governance Must Be Designed In

“Never trust a system whose brain you cannot see.”

This moment lands squarely with compliance and risk leaders. Shantanu explains how unrestricted AI creates explainability gaps and how guardrails must be intentional, not reactive.

Key Reflection:
AI that cannot explain why it acted becomes a liability. Adam highlights that decision traceability isn't about slowing innovation; it's about making AI safe to scale.

If you can’t explain the decision, you don’t control the system.
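As a rough illustration of designed-in guardrails, the sketch below checks state-specific disclosure requirements before an AI agent speaks and appends each decision to an audit log, echoing Shantanu's point that some states require an AI recording disclosure and others don't. The state list and disclosure names are placeholders, not legal guidance:

```python
# Hypothetical guardrail sketch: which disclosures must precede an AI call.
# The state set below is purely illustrative, not a statement of law.
AI_DISCLOSURE_STATES = {"CA", "NY", "NJ"}

audit_log: list[dict] = []  # every guardrail decision is recorded

def required_disclosures(state: str) -> list[str]:
    """Return the disclosures the agent must deliver before collecting."""
    disclosures = ["mini_miranda"]  # baseline disclosure on collection calls
    if state in AI_DISCLOSURE_STATES:
        disclosures.append("ai_recording_disclosure")
    audit_log.append({"state": state, "disclosures": disclosures})
    return disclosures

print(required_disclosures("NY"))  # on the illustrative disclosure list
print(required_disclosures("UT"))  # not on the list
```

The point of the pattern is that the rule fires before the action and leaves a trace, so the guardrail is intentional rather than reactive.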

Operationalizing AI Requires an Intelligence Layer

“The same foundation that got you here will not get you there.”

Shantanu describes how Prodigal rebuilt foundational layers to support autonomous agents, QA automation, and compliant engagement, all drawing from the same substrate.

Key Reflection:
This takeaway reframes AI maturity. The winners won’t be those with the most tools, but those with the strongest data infrastructure for autonomous agents. Intelligence layers turn AI from an experiment into an operating system.

The Questions That Expose AI Risk in Collections

  • Where does decision context live today?
  • Can your AI explain why it acted?
  • Are policies codified or trapped in PDFs?
  • Who owns AI decisions operationally?
  • What happens when signals conflict?
  • Can humans audit AI logic in real time?
  • Does AI adapt without breaking compliance?
  • Are you scaling intelligence—or just tools?

Industry Trends: Intelligence Layer Architecture for Collections

The industry is shifting away from surface-level AI adoption. Regulators, carriers, and creditors increasingly expect explainable, governed automation. Intelligence layer architecture is emerging as the control plane that enables innovation without sacrificing trust.

Adam predicts that within the next few years, organizations without this foundation will struggle to deploy AI at scale, regardless of how advanced their tools appear.

Key Moments from This Episode

00:00 – Introduction to Shantanu Gangal and Prodigal
03:15 – Why intelligence layer architecture matters
08:10 – Data context enabling AI decisioning
14:40 – AI decision governance and guardrails
21:30 – Decision traceability and autonomy
28:10 – Scaling AI safely in collections
34:45 – Executive takeaways

FAQs on Intelligence Layer Architecture for Collections

Q1: What is an intelligence layer in collections AI?
A: An intelligence layer governs data context, decision logic, and compliance rules before AI acts, ensuring consistency and traceability.

Q2: Why is AI decision governance critical?
A: Without governance, AI decisions become opaque, risky, and difficult to defend during audits or regulatory review.

Q3: How does breaking data silos improve AI outcomes?
A: Unified context allows AI to understand history, policy, and intent which improves accuracy and compliance.

About The Company


Prodigal

Prodigal is a financial technology company focused on helping lenders, servicers, and collections organizations use AI responsibly by grounding automation in data context, decision governance, and operational intelligence.

About The Guest


Shantanu Gangal

Shantanu Gangal is the CEO of Prodigal and a longtime fintech leader with deep experience across consulting, investing, and financial services technology. He is known for his architecture-first perspective on AI adoption in regulated environments.

Related Roundtable Videos
