Tune in for a chat with John Nokes, Chief Information Officer at National Credit Adjusters, about applying AI in receivables management. Hear about building guardrails, structured vs unstructured data, computing power and storage, AI use cases, finding efficiency & calculating ROI, and regulatory precautions for debt collection AI. Learn more with this week’s #ReceivablesPodcast, hosted by Adam Parks!
Listen to Your Favorite Podcasts
Adam Parks (00:01.612)
Hello everybody, Adam Parks here with another episode of Receivables Podcast. Today I'm here with my friend John Nokes, who is an IT professional with National Credit Adjusters and just another fellow tech guy across the space. But through the years, I've had the opportunity to have some really interesting conversations with John and his knowledge.
But John, for anybody who has not been as lucky as me to get to know you through the years, could you tell everyone a little bit about yourself and how you got to the seat that you're in today?
John Nokes (00:33.206)
Absolutely. Born and raised in New Mexico and Albuquerque, went to college at University of Arizona, got a degree in information systems. But at college, I was in ROTC, went into the Marine Corps as a lieutenant, spent six years in the Marine Corps, mostly stationed out in the West Coast, San Diego, Coronado area. After I got out of the Marine Corps, I didn't do any technology things in the Marine Corps, but after I got out, got a couple of jobs doing, you starting off as a network guy, just doing some basic stuff. Ended up, going to a company called West capital West capital back in the day was a debt buyer out of San Diego. they were acquired by Midland credit credit management, which was then they became Encore capital. So I was with Encore capital when it first started, it was there for, for, I think 10 years or so. 10, 15 years there. From there, I went to a company in Austin as the CIO for a couple of years, Collins Financial. From there, went and I did a startup doing vendor management outside of the collection space. I was sort of done with collections. Realized that startups probably aren't for me.
Adam Parks (01:54.892)
you
John Nokes (01:58.742)
It was really interesting. It was a good, really good experience, but that's probably not my personality. But I came back into National Credit Justers, brought me back into the collection space and I've been here for I think nine years now and I really enjoy working at National Credit Justers. It's a, it's had the family business feel, but it's not, not, it's bigger than a family business, but it's really, it's a really good bunch of people that I work with.
Enjoy working, coming to work every day. It's lot of fun.
Adam Parks (02:33.078)
Well, it's one of my favorite companies in the space. What an interesting background. Kind of that storyline is kind of keeps you in the technical space. But every time somebody thinks that they're going to get out of the space, they just get pulled right back in. I understand that particular predicament.
John Nokes (02:49.526)
Yeah. And collections is interesting. Receivables is interesting. When I first started, I thought it was Guido and a baseball bat going out there collecting. And it's not that at all. It's hard to explain to people that it's much more akin to a bank than anything else. And people don't realize that. And it's a fascinating space. It's a space that is needed. It has a bad rep because there's been some bad apples in the past. think generally people
Adam Parks (02:57.825)
Yeah.
John Nokes (03:19.542)
are trying to do the right thing now. know National Credit Adjusters always tries to do the right thing and you can do it and be an upstanding company and you can still make money and do it, which is good.
Adam Parks (03:31.692)
Well, the application of technology data sets, right, and being able to improve the efficiency of processes, I think has been a core functionality. But for today's discussion, I wanted to spend some time talking about artificial intelligence, because I know it's something that you really find a lot of interest in. And right before we started recording today, we were kind of talking about our kind of different technology setups between us, right? You're very much a
Microsoft setup kind of guy. I'm very much an Apple kind of guy and we were talking about the application of artificial intelligence directly into the operating systems. You've got Microsoft co -pilot, from you've got Apple intelligence that will be rolling out into the iOS platforms here and that integration down to the OS level or the operating system level I think is something that is exciting and scary at the same time. So as you've started going through the process of experimenting with this tools, what's your experience been?
John Nokes (04:27.63)
Well, it's interesting. I'm relatively new to experimenting with AI. I went to a day seminar a few months ago and really it sort of opened my eyes to what AI is, right? AI is a very ambiguous term. It means different things to different people. And depending on what you want, there's different AI. It's not the same. And one of the guys presenting explained it to me and made
Adam Parks (04:45.751)
Hehe.
John Nokes (04:57.176)
Perfect sense. All AI is is advanced statistical analysis of data. That's all it is. It makes it look like it's intelligent because it answers like intelligent. But all it is is analyzing huge data sets and saying, OK, if these words are used, then these words are what I want to come back with to make it sound like I know what I'm doing, which is fascinating. It's really cool. But that's also why you hear stories of the lawyer that used chat GPT to do a case, to do a briefing, it was citing things that didn't exist because AI makes things up. If it doesn't understand or doesn't know, it makes things up. And so it's fascinating. OK, what does that mean to collections? So we were testing a product last week where they said, give us your policies. Well, ingest your policies. You have a chat application where you can say, how do I
Adam Parks (05:26.903)
Yeah.
John Nokes (05:53.94)
A consumer is asking about bankruptcy, what do I do? Right? And it goes against our policies and says, this is what you do when a customer says this. And it came back with an intelligent answer. It was the wrong answer. It was ingesting a policy and it was taking things out of policy, but it wasn't the right things out of policy. And so it's like, okay, you know, so danger, Will Robinson.
Adam Parks (06:14.26)
Interesting.
John Nokes (06:20.682)
What do we do, right? How do you present this to a, how do you use it for a consumer to access it? If you can't be comfortable, it's going to give legal appropriate answers. And that's where I'm at right now. I'm saying, okay, I have so many things I want to do with AI that I have in my head, but how do I know that it's going to give the right answer?
Adam Parks (06:24.664)
and
Adam Parks (06:45.016)
I think that's an interesting, it's an interesting challenge. you know, as much as I hear about artificial intelligence, one, I think that there's a distinct misunderstanding in the space between robotic process automation and artificial intelligence, which would be large language model driven, right? So I think that number one, there's that piece.
Because I had somebody recently tell me that they had AI that was picking up and dropping off the files from their FTP sites and I said well, it's interesting but that's not artificial intelligence. That's just robotic process automation with maybe a little bit of selector information in there. So I think that there's some limits there. But you know when we start looking at the industry and this is the first time I've heard the policy and procedure one.
and I do kind of like the idea behind it, but again, how do you start to build the guardrails around it to ensure that it's providing you with true and accurate information for you? You don't want to be the lawyer who gets disbarred over, know, citing cases that don't exist. And so I think that the, when I start thinking about the use cases of artificial intelligence for the receivable space, you know, we start talking about generative AI.
from a communication standpoint. We start talking about scoring models. We start talking about accessing or understanding information. But with all of those different use cases, I think we still come back to where do we get these guardrails to keep the responses that we receive down the path in which it's expected? Because I think there's a certain degree of expectation to what is going to come back.
Right. And when the deviation is so strong, how do you keep those guardrails in place to kind of keep everything together?
John Nokes (08:30.666)
That's a great question and I certainly don't have the answer. I know a challenge the receivable space is going to have is
are the regulations around us and how we can, how the best way to communicate with the consumer is different than most other industries. So if you are using a generic model for a chat application or something, it's probably not going to give you great results because it's looking at all this data all across the internet, but it's not receivable information. So it's not dealing with consumers that are bankrupt fraud, you know,
don't want to pay all these things, it's not talking about, it's not using that as its training set. So it's not going to be as accurate. does a company build a receivables specific chat application or model? Maybe, but every company I've talked to, they do collections, they do receivables a little bit differently. It's not all the same. So how do you build a model? know, talking, I was talking with
my executive team and they wanted to be able to go into all of our call transcripts, all of our notes and start asking questions and getting responses. I'm like, that's a great idea, but what model do we use? You know, do we have to train our own model? You know, do we, you know, one use case we have is we get all this mail and whether it's physical mail or email, we get all this mail. So we have people that have to read the mail and say, okay, this is a saying fraud. This is saying dispute. This is saying
Whatever, they have to categorize it. They have to update the system. They have to do all the stuff. Sounds like an interesting thing for AI. Could AI read OCR, the mail, be it physical or electronic, categorize it based on things, and then pull out the address and the phone number and the social or whatever they give us, find the account, update the system. And in our testing, we find it can do it OK.
It can present to an agent the information, but they still have to validate it. Right. Because we don't have a model that says, okay, these are all the different ways to say dispute. Right. And so it's a, it's a big challenge in understanding the model. And to your point, you know, you say there's different types of AIs, right? So we, we use, some statistics for score and accounts. And I bet almost everyone in receivable industry does.
Adam Parks (11:10.42)
Yeah, of course.
John Nokes (11:11.212)
whether their own or they buy it or whatever, right? That's sort of an AI, right? A statistical analysis of the data to say, this is more likely to pay, this is less likely to pay.
Adam Parks (11:22.092)
Well, that's where the black box discussion starts to come in, right? So as we talk about the regulators and their largest concerns is what goes in and out of the black box. And so from a scoring perspective, you what you need to understand what is inside the model, right? The CFPB has been clear that they're not, you know, the black box said so is not going to be a defensible position when they come knocking on your door.
So what is going to be a defensible position, right? You're to have to understand that no demographic information, for example, is being used to drive workflows because that then starts to come into an area of violation. So I think it's interesting to see how.
Again, these and I go back to the guardrails and I don't think that there is an easy answer, but I think the challenges for our industry can kind of be broken down into, I'm gonna say two buckets, right? There's the structured data bucket and the unstructured data bucket. The mail coming in, unstructured data, right? Scoring and analytics, structured data. And I have seen some tools in the industry and I'll throw a mention in here to the Produgal Pro Notes tool.
because that was one of the few tools that I've actually seen that was actually listening to the phone call, constructing it into a structured notes format, and then adding those notes to the system. Because as you want to start being able to analyze what's within the notes, right, that becomes one of the challenges. It's an unstructured data set. Every collector has got their own abbreviation for follow up or, you know, for any term that is frequently used, right? They all have their own shorthand.
John Nokes (12:51.096)
Yeah.
Adam Parks (12:56.61)
Well, how do you start to consolidate that unless those notes are created in a structured format, right? It feels like it's a use case that requires less guardrails because it doesn't actually go out directly to the consumer. There's no consumer interaction there. It is only an internal tool set, but whether or not you need to use somebody's existing model or you can start to train a model based on those phone calls. But if you're using artificial intelligence tool sets in order to
understand, let's say from a compliance standpoint, what your risk levels are, right? Like the Cedric AI type tool sets. I think it's, I feel like that same model is ultimately going to be used in multiple ways, but how do you train that specific to the industry unless you're sitting on a data set that is ultimately millions of calls, right? Because it has to start to find how is it going to put these little pieces and parts together in order to consolidate a thought.
Does that make sense?
John Nokes (13:53.518)
It does. It does. And, you know, we have, you know, 10 years worth of recordings. We have, I don't know, we have years of recordings, right? So we could have a process that goes through all those recordings, creates a transcript, you know, as long as our checkbook is big enough, we can do that, right? And then, then we can score it. then the challenge is, and, you know, I am by no means an expert. I am starting to dip my toe in. But my understanding is to create your own model.
Adam Parks (13:59.681)
Yeah.
Adam Parks (14:10.104)
Yeah.
John Nokes (14:22.956)
You have to say, OK, here's these million calls. Here's these transcripts. Someone has to look at them and say, OK, these words, I want to categorize it like this and be able to create your model. So you have to have a bunch of people going through your data to be able to start training it before it's usable. And I know.
Adam Parks (14:33.474)
Yeah.
Adam Parks (14:44.834)
Well, you talk about the checkbook too, right? Which I think is a really interesting conversation. So I was having a private call with a former cloud services executive from one of the major three cloud services companies. And we were talking about artificial intelligence and its application to the BPO services. And one of the things they mentioned to me was that I said, I don't understand why AWS and Google and Microsoft and all these companies are pushing so hard for the artificial intelligence to be used in
these other ways without the supporting models. And they really broke it down for me in the simplest way. They said, well, they charge for computing power and storage. so AI models require a significant amount of computing power, which is why they're so expensive to run. And I don't know if this is going to be true, but at the time and we're recording this, the reports that I'm seeing in the marketplace right now is that our $20 a month ChatGDP Pro functionality
under the new, I'm not sure if it's gonna be the strawberry model or the next, you know, 5 .0 model that we're expecting to see in the next couple of months. But they're talking about adding two zeros to that price point. So going from $20 a month to $2 ,000 a month because of the amount of computing power that's ultimately required to execute. That's a...
That's a pretty big deal. You talk about a 100x price increase because of how much power is ultimately required, how much computing and processing energy is ultimately required to execute on these things.
John Nokes (16:16.754)
Yeah, and you know, I'm not a genius, but I'm an average intelligence guy trying to go to Azure AI and figure out what the cost is going to be if we create an application that scans letters and categorizes it based on AI. I told my boss, I'm like, I don't have a good feel for what that's going to cost. We're going to have to sort of move into it to figure it out because you got, well, how many tokens is it? Well, I don't know. How do you figure out a token? Well,
It's sort of this and this, you can't really tell. You can't sort of estimate the number of tokens a file is going to be. like, well, okay, I'm being charged by token. You know, if I don't know how many tokens a file is going to have, if I can't even estimate it, how the heck can I tell my boss I need this budget to be able to run this AI? How do I do an ROI saying it's cheaper to use AI to categorize documents and have a person validate them? They can go through more than just having the person do it themselves.
How do I build that ROI? That's a struggle. Yeah, exactly.
Adam Parks (17:16.502)
Yeah, how do you build a business case study for this purpose, right? But we talk about the use cases as an organization, but I feel like the cost structure to some of these things is so unpredictable at this point in time because you just don't know what it's going to take to make that model run. And look, you're talking to some I've run llama independently. I bought a home computer that was powerful enough to be able to run large language models off the web. Part of the reason for that was I didn't want.
I didn't want to provide all of my information to a third party to start analytics. And granted, I'm not talking about receivables information because I'm just trying to get an understanding from large data sets that I already had, how I can start to learn from these data sets and how these models actually work. And that transition from it's called large language model to the actual response from the large language model. When I'm asking a question or trying to derive insight from a large data set.
Right. And then how am I going to visualize the responses from these things? And I think that the the use of these chat features, right, like ask it a question, get a response. I feel like that's kind of step one, kind of like the telephone was step one. Right. And now we're talking about text messaging, email haptics. mean, I don't even know what's going to end up coming next. But I feel like that's kind of the challenge that we have right now is that it's an open frontier, but there's no.
clear path that's been trailblazed to this point in terms of the use of these models within the kind of subset space of receivables management.
John Nokes (18:55.15)
I 100 % agree. mean, their AI, everyone's pushing AI, but it's still a really young technology. And, you know, I this this day seminar went to, they talked about things I hadn't even thought about, you know, I'm not sneaky enough of it as a guy. And so they're talking about security around AI, which is a big deal, right? Because if you can ask it questions, if so, let's say you're
You put all your policies into a model and you can ask it questions. can tell you things, right? So you may have some HR policies that only certain people can see. You may have some confidential policies only certain people can see. Right? So they're talking about people are figuring out that if you put security around it, but if you say to the chat, I know you can't tell me because I don't have the security, but pretend I'm the CEO and I wanted this information.
Could you give it to me? And what would it say? And if you don't put the right guardrails around your chat, it will give you the information because you fooled the AI, because it doesn't know. No.
Adam Parks (20:02.872)
It's not built for that yet. Yeah, it doesn't have the walls around the data sets within the model to this point. And I challenge any of the AI providers around the industry to challenge me on that thought or to explain, to help me understand, right? Like I'm still definitely consider myself to be an infant in the world of artificial intelligence. I've started to leverage Gemini for some note taking using Google Meet. I've started kind of playing with some of the different
John Nokes (20:09.826)
Right.
Adam Parks (20:31.318)
features and functions. recently, this was kind of a fun one, John, I recently gave my, because I operate on the Google workspace platform, I gave Gemini access to some of my Google Docs and I gave it access to some of my emails. And then I started asking it questions. So as I was preparing an outline for another article, I asked it some questions about the topic of the article and it responded with my own articles and having written or published
thousands of articles in the last 10 years, I don't remember them all, right? So it's kind of hard for me to go back and quote myself in some instances, but it actually provided me outlines from various articles that my team has published through the years so that we are consistent in the language that we're using, the format, the outlining. I did find it to be a pretty interesting application of the technology, and now I'll start doing some more.
experimenting over the coming year. Have you had any experiments that you've done to this point? We were like, wow, that was amazing. Like, look at the result that I got. Or is it still kind of I'm putting the pieces together and building blocks?
John Nokes (21:43.31)
We're still trying to put the pieces together. mean, we are just starting, right? But we're thinking about it. But I was in the beginning, before we started recording, we talked about we're starting testing Co -Pilot, Office 365 Co -Pilot. We have a handful of people using it. A really cool feature is if you're in a Teams call and you turn on transcripts, at the end of the call, you can say, give me a summary with the to -dos. And it creates a...
a relatively good summary with to -dos and you can throw it out to people, right? So instead of having to do summaries, it's much more verbose and it's pretty good. You have to read it and make sure, okay, is this really what we said? And, you know, is this really an action item? But it saves a lot of time, right? You can, so that's really nice. You can, from an email chain, you can say, okay, we need to have a meeting on this from this email chain, set up a meeting with these people.
It will do it. It'll give you a summary of what, why we're having the meeting and send it out to people. So that's really nice. You know, but it's efficiency, but it's yeah, it's 35 bucks a person per month. Right? So that's what almost, 350, almost $400 a year per person. 10 people. That's $4 ,000. Is it that much? Are you getting $35 a month efficiency out of it?
Adam Parks (22:47.648)
That's efficiency. That's a significant efficiency.
John Nokes (23:11.694)
It goes back to the ROI in the business case. I don't know.
Adam Parks (23:12.161)
Agreed.
Adam Parks (23:16.106)
It was a hard one because you have look at the executive time you start trying to calculate backwards. But my usage Gemini is very similar, right? It's about pulling those transcripts together. So I do it as a transcript in a summary and then it automatically saves it to the count. The original calendar invitation from that meeting and now I'm trying to get it to actually feed into my CRM and do some other things so that it becomes a little bit more consistent in the way in which I would actually go to access that information in the future so.
John Nokes (23:36.749)
Yeah.
Adam Parks (23:44.504)
I think we'll see more of that kind of efficiency opportunity there, but I don't know that $4 ,000, it's gonna have to do a little more for, you know, for 4 ,000 per user per year, you talk about 100 users. It's a pretty big line item, it starts multiplying quickly for a small business.
John Nokes (23:58.432)
Yeah, yeah, exactly.
Exactly. And if you go to, you know, chat, GPT or, or any co -pilot, any of them, and you ask the same question multiple times, you get a different answer every time. And so, okay. You know, yeah. And so if you're trying, if you want to automate something, especially with a consumer, you can't have that variation. you need to be confident that when the CPV comes, you say, look, I'm
Adam Parks (24:19.862)
Where's the inconsistency and why?
John Nokes (24:34.048)
It's good. And this is what it is. Otherwise, you're going to get a class action because everything you touch is now a defendant against you if something's wrong. And that's bad.
Adam Parks (24:42.646)
Yeah, and making sure that you're only using the pieces of information to feed the model that are legal to use to feed the model and then being able to explain to a third party. I CFPB has been clear you're going to have to explain the black box. What's going in, what's going out and why? they're all I mean, and I respect what they're trying to accomplish there in terms of making sure that we're not using illegal demographics or redlining type behavior, right, against consumers. And I think that that's that's
John Nokes (24:49.656)
Yeah.
John Nokes (24:59.169)
Absolutely.
Adam Parks (25:12.704)
It's an admirable goal, but how do you get there and what level of explanation will be considered to be acceptable? So I think as we start to see them go out there and dig into that, and I don't know if that'll be this year or next year, depending on who's sitting in the White House, but I think it'll be interesting to see how they actually apply some of the broad statements that the regulators have made. Now you got California that's already starting to roll out some rules and regulations.
I'm hoping the federal government, like I have in the past and I doubt that they will, will actually put up their guardrails so that we don't end up having to deal with this mishmash of individual states doing ridiculous things. It's just going to become almost impossible to manage when you're dealing with third party model.
John Nokes (25:58.682)
I don't have much faith in the federal government being able to figure it out because they can't even figure out Facebook. Right? You have the people, the people, the majority of the lawmakers are old guys, old people, older than me, and they don't understand the technology. so they're, regardless of, of party, if you don't know the technology, you don't understand it, you can't do laws against it.
Adam Parks (26:02.487)
Neither.
John Nokes (26:24.812)
You know, how do you do laws against AI? What, how do you put guardrails around that? What's right? What's wrong? And, and you, do a lot today in two weeks, it's different. The, technology is changing so fast. It's not mature. It's going to be changing. So it's a really hard problem and I don't have an answer for it. You know, I'm looking at how can I use it to make my agents more efficient, make my processes more efficient? How can we save money? And that's what I'm looking at doing sort of the.
Adam Parks (26:34.359)
Yeah.
John Nokes (26:54.466)
the Robo process, intelligent Robo process, so you can automate things or you can streamline things. For me, that's where I want to go initially, at least till we have a much better understanding of how to use it and what it is.
Adam Parks (27:08.152)
I think that's a great roadmap, right? Starting by trying to find the efficiencies in the day -to -day operations and processes before you start getting over your skis, so to speak, and really starting to have some problems. I really like the approach that you're taking. I think we're taking pretty close to the same approach between our organizations, right? I'm very much looking at it the same way. Where can I find efficiency? I'm not quite comfortable going all the way down the path yet. wanna find.
some small incremental additions that my team can apply to where the business use case exceeds the cost. And I'm hoping that we're gonna find that. But John, I really appreciate you taking the time to come on and have a chat with me today. I hope that you'll come back again next year and give us an update on what you've learned as you've continued to dig into the world of artificial intelligence and its use cases for the receivables management.
John Nokes (28:03.084)
It's been a wonderful conversation. I'd love to come back and talk some more.
Adam Parks (28:07.094)
Absolutely. For those of you that are watching, you have additional questions you'd like to ask John or myself, you can leave those in the comments on LinkedIn and YouTube and we'll be responding to those. Or if you have additional topics you'd like to see us discuss, you can leave those in the comments below as well. And hopefully I'll get John back here at least one more time to help me continue to create great content for a great industry. But until next time everybody, thank you so much. And John, I really do appreciate your insights and knowledge. Thank you for sharing with us.
John Nokes (28:30.936)
Thank you, Adam.
Adam Parks (28:32.384)
and we'll see you all again soon. Bye everybody.
About Company
National Credit Adjusters, LLC specializes in purchasing and servicing distressed and non-performing consumer accounts receivables. Our services are rooted in our company’s mission to bring integrity, professionalism, and the highest standards of compliance to debt servicing.