Adam Parks (00:06)
Hello everybody, Adam Parks here with another episode of Receivables Podcast. Today, I'm here with my friend Anshul to talk about artificial intelligence in communication channels, and how to draw context across those channels for more effective, efficient, and intelligent communications with consumers. So Anshul, thank you so much for coming on today, joining me and sharing your insights.
Anshul (00:32)
Thank you so much, Adam. I'm glad to be here and looking forward to the conversation about context and AI agents.
Adam Parks (00:39)
Absolutely. Look, you and I have had the opportunity to meet a few times at the conferences in 2025. But for anyone who has not been as lucky as me to get to meet you in person, could you tell everyone a little about yourself and how you got to the seat that you're in today?
Anshul (00:53)
Yeah, absolutely. So, hello everyone. My name is Anshul. I'm the founder and CEO of Vodex.ai. A little bit of background: I started my career in 2010 at a company back in India, where we worked with a lot of different software companies. One assignment I got was with the BBC in London, where I had the opportunity to work on an Alexa project. That was back in 2013, 2014, when Alexa had just launched, and it was the first time I was building something for Alexa and interacting with it. That got me fascinated. I was very intrigued that you could now talk to devices. It was the very first time we were able to talk to a device and get a response back. So I purchased my own Alexa and started building my own chatbots and voicebots as a freelancer, and started getting more and more projects. At a certain point, back in 2018 or 2019, I decided this was what I wanted to do full time. So I resigned from my company and started my own, building chatbots and voicebots for different businesses. I built a lot of bots for call centers across the world, for clients in the US, Australia, New Zealand, all over. At some point we realized that a lot of clients were looking for voicebots that can make phone calls. So we started working with call centers, building voicebots that could do inbound and outbound calls. And then through a partner who was a VP at a call center, we were introduced to the world of collections. They were running human telecallers for collections, and through them we got into collections. That was back in 2022 or so.
So yeah, from there we started focusing on collections, building chatbots and voicebots specifically for collections use cases. And that's where we are right now.
Adam Parks (02:53)
And so you went through a change over the last year. You were hyper-focused on the voice communication channel specifically, and then your mind kind of opened to what it looks like to carry context across channels. Can you talk to me a little about the journey you went through in that change in focus, that pivot?
Anshul (03:11)
Yeah, absolutely. So we started building voice bots for collections, but very soon we realized that a voice bot as a standalone cannot do much. Our main focus was to give a human-like experience. The voice bot should be as good as a human telecaller, right? But in order to be as good as a human telecaller, the bot needs to know all the context. What is happening across different channels? A collection agency is coordinating with a debtor, with an account, over SMS, email, WhatsApp, different channels. All that information should be available to the voice bot. Let's say a debtor has sent an SMS saying, hey, I won't be able to make a payment this month. Now, if that information is not passed on to the voice bot, and the voice bot makes a call and says, hey, when will you make the payment? It creates a very broken experience for the customer. He'll say, hey, I just sent you a text message. Are you not reading my text messages? So it doesn't matter whether it's a human or an AI agent. Context is very important to give a human-like experience.
Adam Parks (04:18)
That's interesting. So the context is such a mission-critical piece. How did you modify your approach to it? How did you change the company, or pivot the organization, to accomplish this larger goal?
Anshul (04:30)
Exactly. So in order to solve that problem, we have built a specific product, very much focused and targeted at debt collection agencies. We are calling it DROS, the Debt Resolution Operating System. Basically, the idea is that it's a context orchestration engine. It provides everything in one place. You have all the account information, the debtor information, how much they owe. Not only that, it pulls in all the information from the different channels as well.
You can send text messages and email, you can do WhatsApp, you can do chatbots, you can do AI calling, everything in one place. So whatever is happening on one channel, that context is available to the other channels. And that is very important in today's world, because more and more channels are coming, and customers prefer to engage on any channel. Some might want to send emails. Some might want to send SMS. Some might want to engage over WhatsApp, or, you never know, Snapchat or something.
And it is very important to synchronize that context across channels, because if someone is sending a message over one channel, and you are making a call to that customer without being aware of that conversation, it creates a very broken experience. And it's not only for AI agents, it's for humans too. Even if a human is getting on a call with a debtor, he needs to know what is happening. Right now, what happens is that each collector has at least five to six different windows open. He has to go to his system of record, his payment gateway, his text or email system, multiple channels, a compliance dashboard, different places, in order to get the full picture. Why can't we have one single dashboard where he can get everything, plus an AI where he can just ask a question and get the complete picture on that particular account?
The system should even proactively tell him what is happening in the account. We saw that gap in the market, and that's exactly why we built DROS. It's a new product that we have launched. Behind the scenes, we are using the Vodex technology we built to power the AI calling. But apart from AI calling, it has SMS and email integration, other channel integrations, live chat integration, plus a complete system of record for collection agencies.
Adam Parks (06:52)
So you've got the system of record, you've got all of the omni-channel options in terms of outbound communication, at least that we're actively using today with an eye on WhatsApp, Snapchat, and some of the other communication modes that maybe some of the younger generations are using. And you're basically trying to build almost a collection agency in a box.
Anshul (07:12)
Correct, you can say that. We are working very closely with our clients, understanding their challenges, their demands, their requirements. Based on that, we are integrating with different text messaging providers, different calling providers, and different CRMs as well. Sometimes, instead of using our own system of record, we can push and pull data from other systems of record to give the complete context.
So the integrations are there, plus we are also working on building our own MCP server. MCPs are really powerful. They can fetch context from other systems very easily. So, things like that.
Adam Parks (07:51)
I think you probably need to explain MCP to our audience.
Anshul (07:55)
OK, so MCP was launched by a company called Anthropic, the company behind Claude. In simple terms, you can think of it like an advanced version of APIs. Right now, if two systems need to talk to each other, they use an API, typically a REST API, where one system pulls data from another. You can think of an API as a wire, a connection between two systems. Now, the problem with the API approach is that it's very rigid and very fragile at the same time, which means that if something changes in one system, it can easily break the connection. The schema is the design of the data you send from one system to the other, and if there is a slight change in that schema, it can break the whole connection.
It's very fragile, if you see. It doesn't happen often, but there's a real chance it happens. MCP, on the other hand, is kind of a next generation of connecting two systems, and it is good for AI agents too. Instead of sending a request in a very specific format, AI agents can simply ask. For example, let's say there is an MCP server for weather, which tells you the weather in a particular city.
The MCP server will respond to human-like requests. An AI agent can ask a question like, hey, what's the weather in Dallas? Or what's the weather in Austin? Or what's the weather in San Francisco? The server will understand that natural-language request, and it will respond with a simple answer: the weather in Dallas is 70 Fahrenheit or 50 Fahrenheit, something like that. And because it's an AI agent, it understands natural language, so it can take that response and use it, displaying it in a dashboard or whatever it is. So you see, it's a very human-like conversation between agents. That's what MCP has enabled.
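[Editor's note: to make the pattern Anshul describes concrete, here is a toy Python sketch. To be clear, this is not the actual MCP protocol or Anthropic's SDK; the weather data, the tool name, and the naive city matching are all invented for illustration. A real agent would use an LLM to interpret the question rather than string matching.]

```python
# Illustrative sketch only: NOT the real MCP protocol or SDK.
# It mimics the idea from the conversation: a server exposes a "tool",
# and an agent invokes it from a loosely phrased natural question
# instead of a rigid, schema-locked REST payload.

WEATHER = {"dallas": 70, "austin": 74, "san francisco": 58}  # hypothetical data

def weather_tool(city: str) -> str:
    """Tool handler: answers "what's the weather in <city>" style requests."""
    temp = WEATHER.get(city.strip().lower())
    if temp is None:
        return f"Sorry, I don't have weather data for {city}."
    return f"The weather in {city.title()} is {temp} Fahrenheit."

def agent_ask(question: str) -> str:
    """Stand-in for the AI agent: finds a known city in a natural question.
    A real agent would use an LLM here; a naive scan suffices for the demo."""
    q = question.lower().rstrip("?")
    for city in WEATHER:
        if city in q:
            return weather_tool(city)
    return "I couldn't find a tool to answer that."

print(agent_ask("Hey, what's the weather in Dallas?"))
# prints: The weather in Dallas is 70 Fahrenheit.
```

The point of the sketch is the loose coupling: renaming a field in `WEATHER` or rephrasing the question does not break a fixed request schema the way a REST contract change would.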
Adam Parks (09:51)
versus that static communication methodology that is the API connection, right? It's structured, it's sent, it's received, it's going back and forth, but without that natural-language capability. And I would think that the natural-language capability would be a mission-critical function in terms of bringing context together for these toolsets. Now, as you
Anshul (09:57)
Exactly.
Adam Parks (10:14)
kind of shifted from the focus on just AI voice to all of these written communications and other channels, what kind of impact have you seen for the clients where you've deployed?
Anshul (10:26)
Right, I mean the impact is huge, because the first thing is deployment time. It was taking too much time to build the voice bot, and it still wasn't giving a very good experience because of all the missing context. The bot was making calls and receiving calls, but sometimes the information stayed siloed in one system. So you would have to download the data from the AI calling engine and upload it into the other system.
Or sometimes integrations would take too much time, because you have to integrate with different systems in order to pass the context properly. What we have seen now is that when we deploy this complete system to a client, they can get started really quickly, because if you are using DROS, you have everything in one place. Your templates are already ready. You can start making outbound calls. Even for inbound calls, there are templates ready, and the system of record is already there. If you upload your Excel spreadsheet, immediately all the data is present in your dashboard, in your system. And then the AI agent has immediate access to all of it, whether it's the account balance, the previous conversation history, the reason for the charge, the reason behind this payment call. All that information is immediately available to the AI agent. Plus, it is integrated with the other communication systems like SMS and email. So if you want to start fresh with DROS, you can literally get started in less than a week, versus integrating with other systems, which can take months.
Adam Parks (12:02)
It always depends on how many different tools you're integrating with, but it sounds like the whole ecosystem is something you can deploy quickly. Now, we talked about this context going across the various channels. One of the things I've talked about with people on the podcast before is context windows. And we know that context windows have changed pretty dramatically in 2025. I don't even think we're playing the same sport anymore when it comes to what an AI bot can understand and respond to within a single response. Talk to me a little about what a context window is and how you've seen that shift over time.
Anshul (12:46)
That's a great question, and a very important one, actually. So a context window, you can think of it as working memory. Think of the LLM like a brain. In your brain you have working memory, a short-term, temporary memory where you remember things for a small time period and then just forget about them. Think of when you try to log in to some website and you get a one-time password, an OTP. When you get that OTP, you remember it for maybe a couple of minutes, but then you just forget about it. You don't need to remember what OTP you got two years back, right? So think of the context window as a short-term memory required to complete the current operation. Now, you are right, in the recent few months context windows have increased a lot. There is something called tokens. You can think of one token as a word; sometimes a token could be two words, or one word could be two tokens, but on average you can think of one word as one token. Now, some LLMs have the capability to hold up to 200,000 or 300,000 tokens in the context window. This is a good thing, but it doesn't mean you have to store everything in the context window, because there is a trade-off and you need to strike a balance. Here's why it matters: let's say you feed everything, all the previous conversations, all the account details, into the context window. It will actually make the LLM slow, because when you make a call to the LLM and wait for the response, it has to go through that whole long context window and check everything. And in a voice AI scenario, with voice AI agents, you need to get the response fast, immediately. So even though you can feed in literally everything, you should not. There needs to be a balance. And I'll give you one example to explain it more clearly.
Let's say I ask you a simple question: what did you order for lunch yesterday? Now, you don't need to remember your whole life in order to answer that question. You just need to open your DoorDash or Uber Eats app and see. And you get the context.
Now you can frame your answer and tell me. You don't need to remember everything, like what you ordered two years back. Same concept here. If I ask a question like, how much has this debtor paid in the last six months? We don't need to feed in all the context. We only need to feed the specific context into the LLM, and then the LLM will generate a response and give it back. So the context window is very important, but we need to be very careful about how we use that context when generating responses. I hope that answers it.
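[Editor's note: a minimal Python sketch of the selective-retrieval idea above. The account history, field names, and the one-word-per-token rule of thumb are invented for illustration; a real system would query its system of record and use a proper tokenizer.]

```python
from datetime import date, timedelta

# Hypothetical payment history; in a real system this lives in the
# system of record (the "long-term memory"), not in the LLM's context.
payments = [
    {"date": date(2025, 3, 10), "amount": 50.0},
    {"date": date(2025, 8, 2),  "amount": 75.0},
    {"date": date(2025, 11, 15), "amount": 40.0},
]

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb from the conversation: ~1 token per word.
    return len(text.split())

def build_context(window_days: int, today: date) -> str:
    """Feed the LLM only the slice of history the question needs,
    e.g. "how much has this debtor paid in the last six months"."""
    cutoff = today - timedelta(days=window_days)
    recent = [p for p in payments if p["date"] >= cutoff]
    lines = [f"{p['date'].isoformat()}: paid ${p['amount']:.2f}" for p in recent]
    return "Recent payments:\n" + "\n".join(lines)

ctx = build_context(180, today=date(2025, 12, 1))
print(ctx)                                   # only the Aug and Nov payments
print("approx tokens:", estimate_tokens(ctx))
```

The March payment never enters the window, so the prompt stays small and the model stays fast, which is the trade-off described above.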
Adam Parks (15:30)
So how, no, it leads me to more questions, right? When we think about the context window and its size, it sounds like some of the data we need to pull from is structured, stored data, versus living in short-term memory. The long-term memory can hold account information going back to whatever period of time, while the context window is what's being reviewed on a per-response basis. Because it's one thing to have information stored in that short-term memory; it's another thing to be able to reach into something else, pull a piece of information out, and then add it to my context window. Am I thinking about that the right way?
Anshul (16:12)
You are right. So when we start a conversation, let's say we start an AI call, at that time we pull some information from long-term memory, maybe a database or something, and we create a proper context. This is very important, and you need to pull context from multiple systems: maybe the system of record, maybe the payment system, maybe the omni-channel systems, SMS and email. You pull all that context and prepare a small context, which is initially sent as a system prompt or something like that. Then, as the conversation goes on, you keep adding to that context, and the context window keeps growing. But we need to be very careful about what we fetch from long-term memory, from the database, and what we append. We don't want a situation where the context becomes so big that the bot starts taking too much time to respond to queries. So there's a fine balance required to orchestrate that, and that's where the power of a context orchestration engine comes in: it can take context from different systems and orchestrate what is required for this conversation.
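[Editor's note: a hedged sketch of the orchestration pattern just described: assemble a compact context from several systems at call start, then append turns while trimming to stay under a token budget. All the source data, the tiny budget, and the trimming policy are invented for the demo; production systems would use real tokenizers and smarter summarization.]

```python
TOKEN_BUDGET = 60  # deliberately tiny so trimming is visible in this demo

def tokens(text: str) -> int:
    return len(text.split())  # ~1 token per word, per the conversation

def initial_context() -> str:
    # In production these facts would come from the system of record,
    # the payment gateway, and the omni-channel logs.
    facts = [
        "Account 123: balance $250.",
        "Last SMS from debtor: cannot pay this month.",
        "Preferred channel: WhatsApp.",
    ]
    return " ".join(facts)

class Conversation:
    def __init__(self):
        self.system = initial_context()
        self.turns = []

    def add_turn(self, text):
        self.turns.append(text)
        # Trim oldest turns (never the system context) to stay in budget.
        while tokens(self.system) + sum(map(tokens, self.turns)) > TOKEN_BUDGET:
            self.turns.pop(0)

    def total_tokens(self):
        return tokens(self.system) + sum(map(tokens, self.turns))

conv = Conversation()
for i in range(12):
    conv.add_turn(f"turn {i}: the agent and debtor exchange a few words here")
print(len(conv.turns), conv.total_tokens())  # only recent turns survive
```

The system context (the part pulled from long-term memory) is always kept; only conversational turns get evicted, which mirrors the "fine balance" Anshul describes.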
Adam Parks (17:20)
And how has that strategy changed? How are you monitoring the use of or how a consumer is engaging with channel A versus channel B versus channel C and bringing that together? What does that look like on the backside in terms of monitoring, understanding that feedback, and then reacting or responding?
Anshul (17:38)
Yeah, absolutely. So there are different dashboards and different APIs connected to all the different channels, and we log everything, all the conversations across every channel. Then it can be presented in a nice way to the customer, who can see the interactions for each account versus overall: which channels consumers prefer, and which channels are giving the most engagement and the most results. We track everything and present it in a dashboard.
And it is very important to measure and track that, because you can have 10 different channels, but if you are not tracking which one is more effective, you might be losing out. Let's say you figure out that WhatsApp is working really well nowadays. Then you can do more engagement, more conversation, over WhatsApp versus other channels. If you realize that certain types of people are not responding to text messages, you can change your engagement strategy: instead of sending a text, maybe send a WhatsApp message. I'm just giving one example, but it is very important to monitor how engagement is happening across multiple channels. And you can see that clearly in the DROS dashboard.
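[Editor's note: the per-channel effectiveness comparison above can be sketched in a few lines. The event log and field names here are hypothetical; a real deployment would pull these events from each channel's API.]

```python
from collections import defaultdict

# Hypothetical interaction log, one entry per outbound message.
events = [
    {"channel": "sms",      "responded": False},
    {"channel": "sms",      "responded": False},
    {"channel": "sms",      "responded": True},
    {"channel": "whatsapp", "responded": True},
    {"channel": "whatsapp", "responded": True},
    {"channel": "email",    "responded": False},
]

def engagement_by_channel(log):
    """Response rate per channel: replies divided by messages sent."""
    sent = defaultdict(int)
    replied = defaultdict(int)
    for e in log:
        sent[e["channel"]] += 1
        replied[e["channel"]] += e["responded"]
    return {ch: replied[ch] / sent[ch] for ch in sent}

rates = engagement_by_channel(events)
best = max(rates, key=rates.get)
print(best, rates)  # the channel to lean into, per the strategy above
```

With this toy data, WhatsApp comes out on top, which is exactly the signal that would drive the strategy shift Anshul describes (send more WhatsApp, fewer texts).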
Adam Parks (18:44)
So that leads me to another question, because one of the things you talked about within your platform was almost like a ChatGPT inside the system that I can query against. I can ask it questions, or the agent can ask it questions, and get responses. So you've got dashboards, you've got the ability to manage that chat. Is it preformed, or is it just ask questions of my system, ask questions about this consumer? And could you extend that context and its understanding to third-party data solutions? Meaning, if I wanted to tie it to, let's say, the TransUnion credit reports, could I potentially start tying some additional data sets together to improve its context and understanding of my query?
Anshul (19:31)
Absolutely. So to answer your first question, it is actually like ChatGPT: there is a section called Ask AI, and you ask it just like you would ask ChatGPT. But the good thing is that the system has all the information about the different accounts, how much they have paid, everything, basically the context. So think of ChatGPT answering all your questions specific to those accounts. In DROS, you can ask questions like: which channels are more productive for us? How much payment is scheduled for this month? How much payment have we received this month versus last month? You can ask all of those in natural language, and it will go through all the different records and databases, fetch the information, and give you an answer in a human-like way. Plus, it can also give you a dashboard. There are some pre-baked dashboards, but you can create your own custom dashboards too, and when you ask questions, the AI presents the information in a nice graphical way, based on the question you've asked. The second question you asked was about integration with external systems, like TransUnion, which is very, very important. In fact, I'm working with a couple of agencies to help me integrate with TransUnion and Experian, because you have only so much data in your own system. There are other details, like credit scores and previous payment history, how this debtor has been making payments on his other accounts, his other credit cards, and you need to have that information. There are some agencies, I think, that provide data like household income, how much someone has been making every month. So imagine if you have all that context, how much better you can engage. Imagine you see that a person is not earning, that he has lost his job.
If you have that context in advance, you can tailor your conversation in a very different way. And even the customer will be happier, because you are changing your conversation strategy. You cannot have the same type of conversation with every customer, every debtor. That's why context is very, very important.
Adam Parks (21:37)
Well, and a big chunk of this is not necessarily just about creating better communications or collecting more money. It's about providing a better level of customer service because now the consumer can engage across all these channels. You're providing new options in terms of how they can make payments or how they can communicate or negotiate even. And so it sounds like you've tied together multiple use cases for artificial intelligence because we've identified kind of the core six.
And you're doing a lot of these: the scoring and segmentation and being able to prioritize accounts, the voice communication, chat communication, negotiation. It seems like you're trying to wrap a lot of these use cases into a single product that would roll off the shelf fairly easily for a smaller organization as it tries to modernize itself, especially as organizations face challenges with other systems of record. Right now there are a lot of organizations sitting on technology that's being deprecated. It's not going to be carried forward, it's not going to be supported, and now they have to figure out what their options are going to be for the future. Or they're identifying that voice and chat and text and email are going to be big parts of the future of the industry, and hopefully of their organization. And this is one of those shortcut opportunities where they can roll out a single ecosystem versus trying to
Anshul (22:37)
Yeah.
Adam Parks (23:02)
bolt all these things together and build that intelligence piece that's going to be required: to collect data and actively turn it into actionable intelligence, all of these different pieces of the puzzle being laid out.
Anshul (23:20)
You summarized that very well, bang on. I will just summarize it in a very short way. The ultimate idea of building this product is to improve the experience: the experience of the customer, and the experience of your collectors too. Think of the new generation of collectors. They're not going to use a very old system that is clunky and difficult to use. They need something modern, new age. So how can we make their lives easy? A simple system, so they don't need to juggle multiple systems or complicate things. Super easy operations for collectors, and a super easy, I should say seamless, combined experience for the customer, no matter which channel they prefer. That's the ultimate goal of building this product.
Adam Parks (24:03)
I think it makes a lot of sense, because not every organization is ready to deploy all of these different technologies on their own. Some of them need a shepherd to help guide them down the path and make sure they're bringing these pieces together. And it's more complex when you're trying to drag context across multiple channels versus focusing on individual channels. So I realize the challenge is not small.
That's especially true when you're trying to bring all this information together. And although context windows are bigger, they're not infinite. As you mentioned, 200,000 or 300,000 tokens is a significant amount: you're talking 90, 100-plus pages of information that it's able to recall or understand in a given response. But the amount of time, energy, and power that goes into providing those responses can be a new and interesting challenge for our industry.
Now, for someone who's looking to deploy a toolset like this, is it strictly cloud-based, so they don't have to worry about internal servers and systems, or is there still an on-premise component that needs to be considered?
Anshul (25:13)
It is completely cloud-based. If a customer is looking for on-prem, it can be deployed on-prem, but by default it is cloud-based. All you need to do is go to the website and sign up, and you can get started. It's a completely cloud-based system.
Adam Parks (25:29)
Fantastic. I mean, the cloud is definitely where most organizations are going, although some still run hybrid setups. It has been a long and arduous road for the debt collection industry to move a lot of its core critical infrastructure into the cloud.
Anshul (25:43)
And the biggest concern is sometimes about data security. We are fully HIPAA compliant, and we are using state-of-the-art technology. We deploy all of our solutions on Google Cloud. It's SOC 2 compliant, HIPAA compliant, ISO 27001; all those compliances are done. We take care of all of that. Still, if an organization wants on-premise, that option is there. Our team is ready to work with the client and help them set everything up in their own environment.
Adam Parks (26:15)
And so this seems like an easy enough opportunity for people to deploy. But if I remember correctly from our conversations, you're not limiting users to your particular providers, meaning they're able to leverage text messaging as a technology platform, but you're provider-agnostic in terms of whether they're using service provider X, Y, or Z.
Anshul (26:43)
They can bring their own, it doesn't matter. They can bring their own text messaging provider. They can bring their own credit rating provider. They can bring their own email provider. They can bring their own chat or live chat provider. And even for calling, they can bring their own. Even though we have Vodex integrated, they can bring in, say, Vapi or Retell or any other provider, and we integrate with everything. So think of it like a platform that integrates all the different providers.
Adam Parks (26:44)
because it doesn't really matter to you.
Anshul (27:12)
And the ultimate goal is to provide all the context in one place so that your human collector and AI collector can work more efficiently.
Adam Parks (27:19)
Well, it sounds like you're onto something interesting. From all of the conversations we have, I learn at least a little bit about artificial intelligence, and you change my perspective a little. That's why I thought a conversation today about context, and carrying context across these omni-channels, was going to be such an important thing for our industry over the coming years.
Anshul (27:43)
Absolutely. I mean, yeah, the idea is to give a seamless experience to the customer, to the collector, to everyone. It's 2026; we need to do that. AI is here, and we need to upgrade and match what is happening across industries.
Adam Parks (27:59)
Well, Anshul, I really appreciate you coming on, sharing your insights with me today. I think this has been a fantastic conversation. Again, I learn a lot from every discussion, but thank you for coming on and sharing with me today.
Anshul (28:11)
Thank you so much Adam for having me. Thank you so much.
Adam Parks (28:14)
For those of you that are watching, if you have additional questions you'd like to ask Anshul or myself, you can leave those in the comments below on LinkedIn and YouTube. Or if you want to see a demo of the DROS platform, you can check out Anshul's 5 Minute Pitch on receivablesinfo.com, or go over to the Receivables Info YouTube channel and check out the platform he's built. He even did a voice AI call demo within that little 5 Minute Pitch, which really impressed me. So I hope you guys go take a look at that. But until next time, Anshul, thank you so much. I look forward to seeing you at RMAI.
Anshul (28:48)
Thank you so much, Adam. And I would just like to say, if anyone is interested, just go to dros.ai, that's dros.ai, and visit our website. Thank you so much once again, Adam, for having me here. And I'm looking forward to meeting you at RMAI, and to meeting everyone at RMAI. Thank you.
Adam Parks (29:05)
Awesome, well thank you so much and thank you everybody for watching. We appreciate your time and attention. We'll see you all again soon, bye.