In this episode of the Receivables Podcast, Prince Matharu of TEC Services Group explains a structured data experimentation framework in collections built around a simple but powerful model: hypothesize, experiment, measure, report.

Adam Parks (00:08)
Hello everybody, Adam Parks here with another episode of Receivables Podcast. Today I'm here with a fellow deep data nerd, Prince, joining us from TEC to talk about data experimentation. And the reason I asked Prince to join me for the conversation today is because in his role, he has an opportunity to build data experiments and to participate in them across a variety of products,

Adam Parks (00:34)
organizations and different variables which gives him a really broad point of view and perspective of what does it mean to build a good data experiment that's built for success and not just setting things up for failure. So Prince, thank you so much for joining me today. I really appreciate you coming on and sharing your insight.

Prince (00:56)
Thank you for having me on.

Adam Parks (00:57)
So for anyone who has not been as lucky as me to get to know you a little, could you tell everyone a little about yourself and how you got to the seat that you're in today?

Prince (01:06)
Certainly. So my journey started in the early 2000s. I'm a travel addict, so I was backpacking in Australia, and a friend of mine had an interview she couldn't attend. So she sent me to GE Money in her place, for a role collecting outstanding debt, which I had no exposure to previously, but that was my start. I was finishing my MBA in Australia at the time.

I saw an opportunity to be at an organization that had leadership programs, so I enrolled in a leadership program at GE. Fast forward: I worked there for numerous years, managed a big book of bad debt, both commercial and consumer, and ran a team of 40 collectors with other managers underneath. Then I moved to the US with that experience and found TEC, which has been my home now for 10 years. Essentially, my job at TEC is to identify how data can transform many aspects of the business, including rate of return, employee retention, and overall revenue commitments, with good data. That's the problem I solve on a day-to-day basis, so I'm excited for our discussion today.

Adam Parks (02:26)
I'm glad to have you here. Now, I know TEC does a lot of things even beyond the data. Could you give our audience an understanding of the holistic ecosystem that is TEC?

Prince (02:37)
Yes, so TEC has a few umbrellas. The one that we're perhaps most renowned for is our professional services arm, where we help with implementations and integrations. We have a deep, deep talent pool, probably the deepest talent pool across multiple software platforms, for any system, particularly in the third-party collections, or ARM, vertical. So we've got broad coverage from a talent pool standpoint. That's our professional services arm.

Then we have TEC Analytics, which is the division that I run and manage. That has, again, a very unique positioning, because we have a flagship product, which is supplemented with white-glove consulting. The technology that we built is a marketplace of vendors, a quick portal, if you will, where any client, generally a debt servicer, can access up to 120 different data points and data products. So we have vast coverage, technologically speaking, from end to end, for any company that needs to append data at any stage of the consumer life cycle. If you think about it, the general consumer life cycle begins with exclusionary scrubs like bankruptcy, deceased, and military. Then it moves on to the enrichment phase, which is most commonly your phone append and your address append, all the way to verified and specialty scrubs, which would be appending socials, appending dates of birth, and verifying places of employment. That technology is combined with deep, comprehensive reporting. We capture firsthand the results of implementing these scrubs, then tweak and manage efficiently based on where the data is being profitable and where it isn't, and right-size the relationships. So that falls under our analytics division.

Then we have TEC Solutions, which is a tech advisory wing, if you will. Again, we have very, very talented colleagues who have a vast and deep understanding of how to optimize digital strategies and telephony systems, procure systems of record, and qualify each product based on the needs of the client.
That falls under our advisory service, TEC Solutions. And then most recently, around April or May of last year, we acquired Latitude, which was formerly owned by Genesys.

Now we have probably one of the best systems of record for our scope, which is receivables management. So that is part of the TEC umbrella as well. We're dedicated to growing it and making sure that it continues to serve not only the third-party market but the first-party market as well.

Adam Parks (05:40)
A lot of different things under one umbrella. But today I want to focus on the analytics piece, because I feel like that's one of those areas where everybody's looking to do more with less. When we set out on the journey of a new data experiment, and we're going to test this new piece of data, so often we don't define what a successful experiment is upfront.

And then the goalposts continually get moved throughout the process. So we're kind of setting ourselves up for failure from minute one, because we're never going to reach the goalpost if it keeps moving. Now, I know you've been through this quite a few times, and we were talking about the cycle of hypothesize, experiment, measure, and report, right? Having that life cycle of a data test.

When an organization comes to you and says, okay, I'm ready to go test this new piece of data, what advice do you have for them and how do you help them structure that experiment so they can really understand the impact of that isolated piece of data?

Prince (06:48)
Yeah, so the very first thing that I discuss with my prospects or clients is a little bit of myth busting. The perception tends to be, and it's no one's fault, but I think we've commoditized data, especially in our vertical, that all data is created equal.

And if a company is diligently seeking advancements and investing in digital strategies, they need to place equal emphasis on a multi-threaded, multi-vendor strategy for data as well, because the myth is that buying data from a single source is going to be enough. So that's generally my starting point. From there, it follows the formula that you mentioned, beginning with the goal setting. And the goal setting would be dependent upon the use case at hand, so it's going to look different for a first-party bank versus a third-party servicer, and we can discuss what those specifics are. But that's my starting point: we need to take existing data, which is often a challenge, because I would say 70% of the clients that we work with have either very minimal reporting or no reporting. Meaning, even in this day and age of AI advancement, I still deal on a day-to-day basis with clients, clients servicing bad debt in particular, that could not tell us what their right-party contact rate is. Where is the diminishing return from calling? Because we're focused on calling, but they couldn't definitively tell us when the economies of scale hit. What's the optimal number of dials per phone, et cetera? So the point is to supplement data in the right way to then be able to uncover those things. But the level set starts with: what is the goal, what is the use case.

Adam Parks (09:03)
So starting with what you're actually trying to accomplish. But as we think about what they're trying to accomplish, and we get a better understanding of the end state they're trying to get to, how can you help guide them toward that experiment process? And this is gonna sound overly simplified, but one of the tools that I've used in the past is an eighth-grade science experiment document.

Prince (09:04)
Okay

Adam Parks (09:29)
Quite literally an eighth-grade science experiment document, just writing it down, because I find so often that once you write down the objective and clarify it into words, at least it's no longer the unspoken "it's gonna be better or worse." We're able to start using language to define what success would look like at the end of the experiment.

Prince (09:44)
Okay.

Adam Parks (09:55)
What kind of tricks have you used to kind of get everybody onto that same page and marching in the same direction?

Prince (10:02)
Yeah, so that's a good approach. I haven't used that exact approach, but I might add that to my bag of tricks. The methodology that I tend to use has two buckets of goals, if you will. And I agree with you completely that we have to crystallize this whole undertaking. This exercise of running a pilot or a champion-challenger has to mean something in the end, so that we can circle the wagons and say, okay, we did this experiment, what results were we hoping to get, and what insight are we trying to get out of this? Of the two buckets that I like to think of, one is the tangible goals, and the other is intangible, or maybe semi-tangible. The semi-tangible goals tend to be around efficiencies in the existing FTEs or resources that are being invested into a batch process. The overall cleaning up of the data can also have an intangible effect of improving customer service, because now we're reaching more consumers, with a reduction in compliance events, et cetera. Although there are and can be metrics associated with those, I still like to think of them as the intangible side effects of having good data and good testing.

That's going to come from that. The second is the tangible bucket, and this is where the use case comes in. If we stick within the third-party and first-party servicing umbrella, that would be establishing contact rates cumulatively: what kind of influence, impact, or increase can we create in contact rates by implementing a multi-vendor data strategy? So that's one. The second could be speed of liquidation. And the third is a reduction in dialing effort to get to that increase in contact rate and liquidation. So that's how I like to frame the two buckets, tangible and intangible. What I wish I could tell the market, especially in our vertical, is that data has such a multi-threaded impact on our processes. Again, there is currently such a heavy focus on digital strategies. I want to tell the market that emphasis on data is equally important, because data is the lifeblood of every piece of technology that you're implementing. But back to your question: yeah, tangible and intangible is how I like to think.

Adam Parks (12:44)
I think it's an interesting approach. As I look at the marketplace and we think our way through the data sets that we're starting to use, being able to measure out what the value is here, you're looking at two aspects: what's my forward new value, and then what am I saving in terms of reduced effort, risk, whatever the case may be. So it is kind of that seesaw: how much am I going to improve, but also measuring how much am I going to be able to save, avoid, or mitigate over that same experiment.

Prince (13:20)
Yes, yeah. The experimentation also includes, as I mentioned, that we interface with, technologically speaking, 120 different data products. So we have a vast array of available data products, and the experimentation also includes tying the right data product to the experiment or the test in question. For example, if it's a law firm that's following a very strict legal path to resolution, and they generally have greater margins, then the discussion on the scope of the experimentation would include heavy use of verified data products, versus a third-party agency that's servicing the debt.

The margins might not be there, or for other reasons the emphasis might be on a phone scrub product or an address scrub. I also recently learned of a data product that does two things: verified phone hits, which are quite unique, and warm transfers. So this provider can do a cold dial on a client's behalf, not disclosing who they're calling on behalf of, but establish contact with the right party and transfer that call directly. So it's all about understanding the first part, which is the goal setting. What is the target? Is the target to improve the contact rate, to improve liquidation, or both? Or is the target to just agitate the pool and see what comes out, which is probably not the most effective way of going about things? Different companies have different intentions. So my point being, it's not just the experimentation of going from one vendor to a multi-vendor strategy, but also looking at the array of products that are available and distinguishing between verified and unverified products. Excuse me, losing my voice a little.

Adam Parks (15:28)
Yeah, that's all right. It seems like the industry itself has been moving toward self-service and digital communication technology. And as we think about those tools, I always go back to this: these tools don't add value unless we're powering them with the right data. When we were preparing for the webinar that you and I did a couple of months ago, we started talking about data decay. And if I remember the number correctly, it was 32 or 33% of data that's going to decay on an annual basis, which means two years in, 66-plus percent of your data is no longer valuable. When you're building out these data strategies, how do you look at the data decay portion? That's kind of the first part of it, but also, are you seeing more organizations actively experimenting as they start to realize just how quickly the data they have loses value?

Prince (16:12)
Okay. Historically speaking, the method used for testing is that a single file, or a handful of accounts, whatever the deemed sample size is, gets sent to one single source of data, and then what's measured is the number of hits. So how many hits were returned, which is often misleading, because data vendors are very good at returning hits. In my experience, when we set up a test, we work with the data vendor and their inherent intelligence to refine the configuration and the criteria for a qualified return, so that we're getting hits that are going to be high caliber and high quality. And the data vendors are very gracious in that, because they want the best possible result from the test as well. So that's one shift. The second, again, if we think about how tests and experiments have historically been run, the emphasis is very high on the hits. The second portion is that those hits are then worked in some capacity, which is often undefined. Again, the approach of defining, for example in the case of a phone test, that we're going to dial each phone X number of times, isn't taken. It's not predefined. And the third thing is that rarely is the impact measured in terms of degradation or the overall impact on the contact rate. So what we do when we run a test: first of all, every test we run is across multiple vendors, so we're removing that whole idea of a single source. I've personally consulted for many data vendors, and I feel confident in saying that no single data vendor can provide optimal coverage.

So the very first thing we do in our methodology for testing is implement a multi-vendor strategy. And the way we do it so that we're being fair across the board is we split inventory and create strategies where each vendor gets a placement, in volume, in first position, meaning they're getting the first look at the accounts, and then in every other position thereafter. So it's a very comprehensive way of testing. At the end of it, the insight you end up with is that you can confidently tell, from a statistical standpoint, when vendor A was implemented and given accounts in the first position, without any interruption or other disruptions, what their coverage was, i.e., how many hits they could return. And then we enforce, well, enforce is probably a harsher word, we work with our clients to make sure that data and inventory are worked equally across the board, so that every vendor's data gets worked. That's often a missed point: vendor B provides an equal number of hits, but somehow something happened in the dialer campaign where we ran one in predictive and the other one in preview. So there are a lot of variables and variances that can impact the performance and the end result.

Adam Parks (19:21)
Yeah.

Prince (19:35)
And the third point is that we make sure we're capturing some sort of leading metric, based on the use case and on what we're driving at at the end of the exercise. What's the one takeaway that we want to walk away with? It could be as simple as an increase in liquidation, but across the board, we're looking for one key defining metric. It could be an increase in your contact rate, it could be the speed at which the liquidation happened, it could be a reduction in return mail if we're discussing addresses, et cetera. So that's how we approach it. We begin with a multi-vendor strategy, and we give equal opportunity to each vendor in every position, starting with first position. Probably an easier way to think about this: if we had 100 accounts and four vendors, we would create four different sequences, where vendors A, B, C, and D each get the first position in one of those sequences. So each vendor can, at the end of the day, get a first look at accounts, show us their coverage, and show us their capacity to provide qualified hits and results. Then we work with our clients to make sure that the effort is uniform across the board, and we measure that one defining value, that one defining metric.

Adam Parks (20:59)
Interesting. Now, I'm assuming that structure feeds pretty well into these digital strategies, because you're identifying who's going to be able to provide what at which point, you're documenting that process, you're determining what success looks like at an early stage, and then you're starting to feed these tools. Have you seen an increase in organizations' willingness to experiment with new data as you've seen the adoption of these digital channels rise? The phone numbers are important, but now we're looking at so many different email addresses and so many other things that decay even faster.

Prince (21:33)
Yeah. Yes. I think my experience has been mixed. There is a ton of excitement, and I'm getting a lot more curiosity around digital strategies coming through me. We are also implementing a lot of clients on emails and cell-only scrubs, which are probably a better choice for texting strategies, or on phone data that comes with some sort of line-type indicator so that we can separate the voice-over-IP and the cell lines, et cetera, and create digital strategies, especially texting strategies. I think the market is definitely much more curious. There is still some hesitation around emails. And again, I'm not a lawyer, and I don't play one online. But there is still some confusion around whether email can be used as an alternative to the historic methods of contacting the client and correspondence. We've had, and I've been part of, certain discussions where it's well clarified that it can be used, but there is still some hesitation. Overall, though, I think there's a lot of excitement and curiosity in the market about this, and where we've implemented email, we've seen some stunning results in terms of the delivery rates and the open rates, very, very healthy. You can see why that is the future. It has to be quickly on its way, if it's not already, to becoming table stakes, where running a digital strategy becomes fundamental to the business, because of how efficient it is, the speed at which correspondence can happen, contact can be generated, et cetera. So along with the market, I'm very excited, keeping an open mind, and testing various sources to find the best data sources for our clients.

Adam Parks (23:32)
Do you think that the increase in adoption of artificial intelligence is also going to drive more people to look at the data? Because again, just like the digital strategies, we can buy the fanciest tool, but if we're not fueling it correctly, it's like buying a race car and putting bad gas in it: we're not going to go very far, very fast.

Prince (23:54)
Yes.

Adam Parks (23:55)
How do you think the artificial intelligence deployments and adoption are going to impact what you're doing from a data perspective?

Prince (24:02)
I think the first thing I would say is, again, that around AI there is a lot of curiosity and excitement in the market. And I'd first urge the audience, or anybody entertaining the idea of implementing AI, to look for truly agentic sources of AI, because AI can be misleading if we just stick with general terms, right? What I've seen perform when implemented is true agentic AI that is not just another language model feeding scripted answers back, but one that can have some rationale behind it. So that would be my first comment. But secondly, yes, absolutely.

The age of AI is upon us. So once we implement agentic AI with rationale, then absolutely the data becomes that one-two punch. If we want to assure success, then we have to emphasize the data. And thankfully, data vendors are coming along for the ride. We have to understand that they are investing large sums of money annually into their research and development, building strong algorithms, et cetera. So thankfully, the data vendors are also responding to the upcoming trends of AI and digital strategies and are now creating products that are going to feed into those strategies, all the way from unverified to verified sources of information. We've got vendors that can track individual consumer movements, and those sources can be fed in in some instances. So again, it's a great time to be looking into and uncovering this, because the possibilities are becoming quite exciting.

Adam Parks (25:48)
New and interesting data seems to be arriving in our space. We saw license plate data get added into the mix a few years ago, and what did that mean both from a repossession perspective and from a location perspective? Then we saw a lot of changes with people working remote and the challenges that came with that. I think a lot of the world has started moving back into the office, so for the employment verifications and those types of data sets, lives have become more stable since 2020. And now we're going to start to see these data patterns play a larger role in our ability to predict the future. That's kind of what we have to do as debt collectors: try to predict when or why someone would answer the phone, right? Where are they? It's a lot of prediction in terms of, is this account collectible? But I think you're right, it's a great time to be in the business, with all of this new information and these data points that we can start turning into actionable intelligence. We've got a lot of signals, but driving the next action is our objective with a lot of these data tools. Now, for anybody who's still holding back, thinking "my data works just fine," who hasn't rebuilt their waterfall or really taken a look at it in the last

Prince (26:54)
Absolutely.

Adam Parks (27:10)
five years or so, what advice do you have for those who have not actively worked on improving their data waterfalls over the last half a decade?

Prince (27:19)
My advice, and my request, would be that it's a very high-risk strategy to keep your data on the back burner or not emphasize it. I think data is the new competitive advantage. Unless you have a multi-threaded, multi-vendor strategy that is dynamic enough, with reporting to see which data source is performing for you and which isn't, you're running a high-risk strategy, in my opinion, because you don't have that competitive advantage of accelerating liquidation, accelerating contacts, accelerating customer service, reducing compliance events, et cetera. So yeah, I urge everybody listening: if you haven't evaluated your data, especially in the last five years, because the law has changed in five years, then be curious. Curiously ask questions, reach out to your data vendor, reach out to TEC, and we'll be happy to have that dialogue: to take a deeper dive into your consumer treatment, how you're dealing with the consumer today from a touchpoint standpoint, whether it's an established contact or correspondence, and how that can be accelerated, and the efficiencies that can come from deploying multiple sources of data, increasing your coverage, increasing the footprint you can reach out to at an accelerated pace through digital strategies, and then all the end metrics that matter: the revenue, the liquidation, the reduction in compliance, the efficiency from an FTE standpoint. All of that is associated with data. There is no doubt in my mind that data plays a pivotal role in all those different segments of your business.

Adam Parks (29:07)
When you say that data is a competitive advantage going forward, I think you're right on point. And it's not just about the raw data itself, because there are so many data vendors, and even being able to go out and buy the data is not the end solution. It's about being able to interpret those signals, turn them into actionable intelligence, and then take the next action. That is where I really see that competitive advantage solidifying itself, becoming something that gets easier and easier for an organization to defend and harder for a new organization to use to penetrate the space, because they're not going to have that historic context, data set, and understanding of how these different data elements combine into the perfect recipe.

Prince (29:56)
Yeah, absolutely. Most agencies, for example, compete on a scorecard, so we're already cognizant by design of the metrics we discussed: the overall aggregation of contact rates, the speed and acceleration of liquidation, et cetera. Those are already points of competition. I think it's about making the connection that data is that one solution, or could be that one missing link, that can have impact across the board and make you successful in so many different ways.

Adam Parks (30:33)
Prince, this has been an absolutely fantastic conversation. Every time that I sit down and talk with you, I learn a little something about the data waterfalls and how I can start looking at attacking data across the debt collection industry because there's so many tools, there's so many new ways of doing things, and if we're not going to feed it with the best available data, we're never going to get the most out of these platforms.

Prince (30:57)
Agreed. Thank you for having me on, and for our listeners, I hope this was insightful. I urge everyone to be curious and keep experimenting; there is a lot of fun in it. Again, I'm a data nerd, so maybe this is too exciting for me personally, but hopefully we can infect you with some of that curiosity around data and spark some insight along the way.

Adam Parks (31:23)
For those of you that are watching, if you have additional questions you'd like to ask Prince or myself, you can leave those in the comments on LinkedIn and YouTube, and we'll be responding to those. Or if you have additional topics you'd like to see us discuss, you can leave those in the comments below as well. Hopefully I can get Prince back at least one more time to help me continue to create great content for a great industry. But until next time, Prince, thank you so much for your time. I really appreciate all your insights.

Prince (31:46)
Thank you, Adam. Thanks for having me again.

Adam Parks (31:49)
Absolutely. And thank you everybody for watching. We appreciate your time and attention. We'll see y'all again soon.

Prince (31:53)
Super.

Why How to Test New Data Vendors in Debt Collection Matters

How to test new data vendors in debt collection isn’t just a tactical question: it’s a strategic one.

In a world where agencies compete on scorecards, liquidation speed, and compliance precision, data quality can quietly determine who wins and who plateaus. Yet many organizations still evaluate new vendors based solely on hit rates.

On a recent episode of the Receivables Podcast, Prince Matharu of TEC Services Group unpacked a simple but powerful framework: hypothesize, experiment, measure, report. The discussion centered on building a data experimentation framework in collections that eliminates guesswork and produces measurable outcomes.

The core message? Testing without structure leads to noise, not insight.

For debt collection agency executives, debt buyers, and first party creditors, the ability to design statistically fair tests and measure contact rate improvement in collections is quickly becoming a competitive advantage.

And in today’s environment, data as competitive advantage in receivables is no longer optional — it’s table stakes.

Structured Data Testing in Debt Collection Starts With Clear Hypotheses

“The goal setting would be dependent upon the use case at hand.”

Many agencies jump straight into testing without defining what success actually means.

Key Reflection:

  • Define a single measurable outcome before running a pilot.
  • Separate tangible goals (contact rate, liquidation, speed) from intangible gains (efficiency, compliance reduction).
  • Document your experiment like a formal process.
  • Ensure every stakeholder agrees on the success metric before launching.

Without hypothesis clarity, performance conversations become subjective. With it, decisions become defendable.

Multi Vendor Data Strategy for Receivables Eliminates Bias

“No single data vendor can provide optimal coverage.”

This insight changes everything.

Prince outlined a rotational sequencing model where vendors receive equal opportunity in first position, ensuring statistical fairness. Too often, vendor performance is distorted by dialing strategy, campaign timing, or uneven inventory placement.

A structured multi vendor data strategy for receivables:

  • Reduces reliance on a single source
  • Increases overall coverage
  • Prevents internal operational bias
  • Produces defensible performance comparisons

The strongest data programs don’t just test vendors: they test position, process, and effort.
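The rotational sequencing described above (100 accounts, four vendors, four waterfall sequences, with each vendor taking first position exactly once) can be sketched in a few lines of Python. This is an illustrative sketch, not TEC's implementation; the vendor labels and account IDs are hypothetical.

```python
def build_rotation_sequences(vendors):
    """Return one waterfall sequence per vendor, rotated so that each
    vendor appears exactly once in every position (a Latin square)."""
    n = len(vendors)
    return [[vendors[(start + i) % n] for i in range(n)] for start in range(n)]

def split_accounts(accounts, n_groups):
    """Round-robin split so each sequence gets a comparable share of inventory."""
    groups = [[] for _ in range(n_groups)]
    for i, acct in enumerate(accounts):
        groups[i % n_groups].append(acct)
    return groups

vendors = ["A", "B", "C", "D"]      # hypothetical vendor labels
accounts = list(range(1, 101))      # the 100-account example from the episode
sequences = build_rotation_sequences(vendors)
cohorts = split_accounts(accounts, len(sequences))

for seq, cohort in zip(sequences, cohorts):
    print(seq[0], "gets first look at", len(cohort), "accounts:", " > ".join(seq))
```

Because every vendor appears once in every position, coverage differences between vendors can't be explained away by placement in the waterfall.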

Measure Contact Rate Improvement in Collections — Not Just Hits

“The emphasis is very high on the hits.”

Hit rate feels measurable. It feels tangible. But it rarely tells the full story. A data experimentation framework in collections shifts focus toward:

  • Right party contact rate
  • Speed of liquidation
  • Reduction in dialing effort
  • Return mail improvements
  • Compliance risk mitigation

Key Reflection:
Measuring outcomes instead of outputs reframes vendor conversations. It turns opinion into performance management. It also supports long-term AI readiness because predictive systems require validated input quality.

Performance metrics, not hit counts, define sustainable growth.
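To make the outputs-versus-outcomes distinction concrete, here is a minimal sketch of a vendor scorecard that reports outcome metrics alongside raw hit rate. The field names (`hits`, `rpcs`, `dials`, `collected`) are hypothetical illustrations, not an actual TEC schema.

```python
def vendor_scorecard(results):
    """Turn raw per-vendor test counts into output and outcome metrics."""
    card = {}
    for vendor, r in results.items():
        card[vendor] = {
            "hit_rate": r["hits"] / r["accounts"],            # output: what gets sold as success
            "rpc_rate": r["rpcs"] / r["accounts"],            # outcome: right party contact rate
            "dials_per_rpc": r["dials"] / max(r["rpcs"], 1),  # outcome: dialing effort
            "liquidation": r["collected"] / r["face_value"],  # outcome: dollars recovered
        }
    return card

# Hypothetical results: vendor A returns more hits, but vendor B's hits
# convert into contacts and dollars with far less dialing effort.
results = {
    "A": {"accounts": 25, "hits": 22, "rpcs": 3, "dials": 900, "collected": 1800, "face_value": 50000},
    "B": {"accounts": 25, "hits": 17, "rpcs": 6, "dials": 650, "collected": 4100, "face_value": 50000},
}
card = vendor_scorecard(results)
```

In this made-up example, a hits-only comparison would pick vendor A; the outcome metrics tell a different story.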

Data as Competitive Advantage in Receivables

“Data is the new competitive advantage.”

This statement captures the long-term implication of the episode. Executives competing on scorecards already understand metrics. The missing link is connecting structured experimentation to revenue acceleration.

When agencies implement the hypothesize experiment measure report process:

  • Waterfalls become dynamic
  • Vendor relationships become accountable
  • Performance forecasting improves
  • Digital strategy becomes more reliable

Data maturity compounds over time. Organizations that experiment systematically build internal knowledge that becomes difficult for competitors to replicate.

Data Experimentation Best Practices for Collections Leaders

  • Define one leading metric per experiment
  • Rotate vendor sequencing fairly
  • Standardize dialing effort across tests
  • Separate coverage from performance
  • Track liquidation acceleration, not just contact
  • Reevaluate waterfalls annually
  • Document every experiment for historical benchmarking
  • Treat data testing as a strategic initiative, not a vendor trial
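One lightweight way to honor the "document every experiment" practice is a structured record that forces the hypothesis, the single leading metric, and the success threshold to be written down before launch. The fields and values below are illustrative assumptions (a simple internal log, not any particular tool).

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DataExperiment:
    """A written-down experiment: hypothesize, experiment, measure, report."""
    hypothesis: str        # what we believe the data will change
    leading_metric: str    # the one defining metric
    baseline: float        # current value of that metric
    target: float          # agreed-up-front definition of success
    vendors: tuple         # every vendor in the rotation
    dials_per_phone: int   # standardized effort across cohorts
    start: date

    def succeeded(self, measured: float) -> bool:
        """Report step: compare the measured metric to the pre-agreed target."""
        return measured >= self.target

# Hypothetical experiment definition
exp = DataExperiment(
    hypothesis="A multi-vendor phone waterfall lifts right-party contact rate",
    leading_metric="right_party_contact_rate",
    baseline=0.04,
    target=0.06,
    vendors=("A", "B", "C", "D"),
    dials_per_phone=5,
    start=date(2025, 1, 6),
)
```

Freezing the record (`frozen=True`) is a small guard against moving the goalposts mid-experiment.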

Industry Trends: How to Test New Data Vendors in Debt Collection

Digital channels, AI modeling, and remote workforce shifts have accelerated the need for validated data. As organizations expand into email and mobile communication strategies, poor testing frameworks amplify risk.

Structured experimentation reduces uncertainty and strengthens compliance posture. Agencies that ignore multi vendor data strategy for receivables may unknowingly accept degraded performance over time.

The industry is moving toward disciplined analytics. The question is how quickly each organization adapts.

Key Moments from This Episode

00:08 – Introduction to Prince Matharu and TEC Services Group
06:48 – Myth busting: common vendor testing mistakes
16:12 – Multi vendor sequencing explained
19:35 – Measuring contact rate improvement in collections
27:19 – Data as competitive advantage in receivables
30:33 – Final framework recap

FAQs on How to Test New Data Vendors in Debt Collection

Q1: What is the first step in testing new data vendors?
A: Start with a defined hypothesis tied to liquidation or contact rate improvement in collections.

Q2: Why is multi vendor testing important?
A: It reduces bias and increases coverage while producing statistically valid performance comparisons.

Q3: How long should a data experiment run?
A: Long enough to capture measurable liquidation or contact rate impact, typically aligned with portfolio cycle timing.

Q4: What metric matters most?
A: The metric aligned to your business objective, often right party contact rate or speed of liquidation.

About Company

TEC Services Group

TEC Services Group is a technology and analytics firm serving the receivables industry with professional services, consulting, systems optimization, and data marketplace solutions. The company supports agencies, debt buyers, and creditors with tools and expertise designed to improve performance and operational efficiency.

About The Guest

Prince Matharu

Prince Matharu is the Director at TEC Services Group, focusing on structured data experimentation and multi-vendor data strategy for receivables. With a background in collections leadership and portfolio management, he brings both operational and analytical experience to performance optimization discussions.

Related Roundtable Videos

