In this episode of the Receivables Podcast, Adam Parks sits down with Will Turner of TEC Services to break down what data waterfalls are, how they’re built, and why performance depends on your portfolio, your goals, and your testing methodology, not just “who has the best data.”
Adam Parks (00:06)
Hello everybody, Adam Parks here with another episode of Receivables Podcast. Today I'm here with a good friend and a deep data nerd just like me, Mr. Will Turner, joining us from TEC Services to talk with us about what debt collection data waterfalls are, why they're important, and why no two waterfalls should be the same. So Will, thank you so much for joining me today. I really appreciate you spending a little time with me and sharing your insights.
Will Turner (00:33)
Yeah, thank you, Adam.
Adam Parks (00:35)
So, well, for anyone who has not been as lucky as me to become your friend through the years, could you tell our audience a little about yourself and how you got to the seat that you're in today?
Will Turner (00:45)
Yeah, so my name is Will Turner. I live and work in Omaha, Nebraska. And I got my start in the industry in 1999 at a little payment processing company called First Data, now Fiserv. There was a data and analytics division there that I used to work in, and it really focused on a few use cases as it relates to data waterfalls: fraud prevention,
Adam Parks (00:58)
No.
Will Turner (01:11)
skip tracing, and debt collections were really the three. And as it relates to using performance data, let me back up. I always felt like there was a gap in the way that both First Data as a data vendor and our customers tested third-party data providers. It was typically done with a rudimentary data test using some performance data, but it didn't have the analytic rigor that testing a predictive model, or building a predictive model, would have. And so I wanted to use customer data to improve those tests, to make them more robust, to apply the analytic rigor of a predictive model. And the rebuttal internally was always: we don't own that data. That's customer data. You can't use it. So the rest of my career has been focused on going to customers who own that data and helping them use their own data to evaluate third-party data providers.
Adam Parks (02:17)
Okay, so basically that capability of being able to analyze the value of data sources for specific use cases within an organization, and structuring those tests. Okay, wow, okay, well, we're gonna have a great conversation today, clearly. But now you're spending your time with TEC Services. Talk to me about the organization and your role there.
Will Turner (02:30)
That's right. Yep, that's exactly right. Yeah, so between First Data and TEC, what led me here was that I worked for various data companies. I've either worked for or consulted with most of the big ones: the three credit bureaus, Equifax, TransUnion, and Experian, plus LexisNexis, CBC, Innovis, the bigger data companies. And TEC in 2010 brought me on board to start a new analytical services division, which we call TEC Analytical Services, and stand up the industry's first SaaS-based data waterfall management platform.
Adam Parks (03:14)
Okay, well, that's going to bring us deep into today's conversation, so I'm really looking forward to this. Talk to us a little about what a data waterfall is. It's a terminology that we use a lot across the industry. Like, let's speak to the least sophisticated employee at a debt collection operation about what a data waterfall actually is in this industry.
Will Turner (03:38)
Yeah, for me, Adam, a data waterfall is simply a company is going to use more than one external data company. It could be a combination of internal data sources and external data sources. But I think waterfall refers to the fact that you're using more than one data source. And you're going to use those in a waterfall fashion, meaning a sequence. So it's simply that. You're trying to solve a business problem using information. And that information resides in databases or a data source. And you're simply going to waterfall through those data sources in a sequence.
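The sequence Will describes, trying data sources one after another until you get a hit, can be sketched in a few lines. The source names, the lookup data, and the ordering here are hypothetical placeholders, not any vendor's actual implementation:

```python
# A minimal sketch of "waterfalling" through data sources in sequence.
# Each source is just a callable; names and data are hypothetical.

def waterfall_lookup(account_id, sources):
    """Query each data source in order; return the first non-empty hit."""
    for source in sources:
        result = source(account_id)
        if result is not None:
            return result
    return None  # exhausted the waterfall without a hit

# Hypothetical sources, ordered cheapest / most trusted first.
def internal_crm(account_id):
    return {"A1": "555-0100"}.get(account_id)

def vendor_a(account_id):
    return {"A2": "555-0199"}.get(account_id)

print(waterfall_lookup("A2", [internal_crm, vendor_a]))  # falls through to vendor_a
```

The point of the ordering is cost control: the internal (free) source is consulted first, and paid vendors are only hit for accounts the earlier steps could not resolve.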
Adam Parks (04:18)
So you're looking at either appending new data or excluding data that maybe you had received through a source. Is it all about adding data, or are you going through both cleansing and kind of validation processes as you build?
Will Turner (04:34)
Yeah, good question. So I think it's a combination of the two, right? We know that whether it's consumer data or commercial data, business data, data degrades at a pretty fast clip. The industry averages are that 30 to 60% of a customer's customer relationship management, or CRM, application will be inaccurate or incomplete within one year. And so...
Will Turner (04:59)
Part of my job is to help... yeah, it's crazy. At two years, basically 60% of your consumer data has gone stale. And so part of my job, using internal and external databases, is to help companies validate and verify what information they already have that is correct, and then enrich that data where the information is not correct. So that could be, you know, appending phone numbers, email addresses, address information, attributes for predictive models. So it's a combination of verifying and validating information you already have, and then enriching your data with external data sources.
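The decay figures quoted above compound year over year. A quick back-of-the-envelope check (assuming, for illustration only, an independent stale rate each year) shows how a 30-40% annual rate lands near the two-year numbers cited:

```python
# Back-of-the-envelope data-decay math: if a fraction `rate` of records
# goes stale each year (assumed independent per year), the stale share
# after `years` is 1 - (1 - rate) ** years.

def stale_share(rate_per_year, years):
    return 1 - (1 - rate_per_year) ** years

print(round(stale_share(0.30, 2), 2))  # 0.51 -> about half stale after two years
print(round(stale_share(0.40, 2), 2))  # 0.64 -> close to the ~60% figure cited
```

This is only a sanity check on the arithmetic; real decay rates vary by field (phones churn faster than addresses) and by portfolio.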
Adam Parks (05:38)
So as we think about building a waterfall for those that are newer to the concept, what kind of components might we look at in the completion of a debt collection waterfall? I'm assuming you've got some scrubs and then you're moving into appends, but talk to us a little about some of the components that would make up a waterfall.
Will Turner (05:55)
So as far as the components that make up a waterfall, we would start with what business problem are we trying to solve? What performance outcome are we trying to achieve? And then based on that, we sort of work backwards. What's the problem we're trying to solve? How are we going to measure success? And
Will Turner (06:17)
then we sort of assemble what that strategy might look like, which typically involves more than one third party data provider. So for a debt collection process, as you know, Adam, there's a sequence that happens with data. Typically companies are running what we call exclusionary scrubs, things like deceased and bankruptcy, litigious debtor, more sort of compliance and regulatory scrubs that they'll run up front.
Will Turner (06:44)
then they'll run recovery scores to segment their portfolio based on the consumers that are most likely to pay. And then from there, we get into sort of the skip tracing bucket where I think the term waterfall
Adam Parks (06:56)
kind of originates from.
Will Turner (06:58)
Yeah, that's typically where it originates. In the UK and in Australia, they call it a data wash, synonymous with data waterfall. So yeah, the components... and in your question, Adam, are you referring to the components of which external data sources apply?
Adam Parks
Well, I think you've explained it pretty well in terms of the stair steps. There are the exclusions, military, deceased, and probate, where all of that is being filtered out. Then you're going through maybe cleansing some of the communication or consumer data: phone numbers, email addresses. Then, for consumers where you haven't been able to identify a right party contact, you're moving into skip tracing. So now you're starting to build on these components, where each one of those stages of the waterfall is potentially being served by multiple vendors, depending on what kind of product you're working on. Because if it's, you know, license plate data, you may be talking to DRN and those guys; there are so many different data sources. But that brings us to an interesting point: with all of these different data sources that are available to us, and the level of complexity and energy that goes into testing, I guess hiring a concierge such as yourself to walk the journey with you would make a lot of sense. I mean, even in the basics we're talking about here, within skip tracing there are going to be potentially five, six, seven, eight steps of the waterfall, with questions of which data sources belong at which levels of that stair step. Because even order of operations will change everything.
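The staged sequence discussed above (exclusionary scrubs, then scoring, then skip tracing only where contact data is still missing) can be sketched as a simple pipeline. All the scrub rules, scoring logic, and account fields here are made up for illustration:

```python
# A sketch of the staged sequence: compliance scrubs first, then
# recovery scoring, then skip tracing for accounts lacking a phone.

def run_pipeline(accounts, scrubs, score, skip_trace):
    workable = []
    for acct in accounts:
        # Stage 1: exclusionary/compliance scrubs (deceased, bankruptcy, ...)
        if any(scrub(acct) for scrub in scrubs):
            continue  # account is filtered out entirely
        # Stage 2: recovery scoring to segment the portfolio
        acct["score"] = score(acct)
        # Stage 3: skip trace only accounts without a right-party phone
        if not acct.get("phone"):
            acct["phone"] = skip_trace(acct)
        workable.append(acct)
    return workable

accounts = [
    {"id": "A1", "deceased": True},
    {"id": "A2", "deceased": False, "balance": 800.0},
    {"id": "A3", "deceased": False, "balance": 200.0, "phone": "555-0100"},
]
out = run_pipeline(
    accounts,
    scrubs=[lambda a: a["deceased"]],
    score=lambda a: a["balance"] / 1000,  # toy recovery score
    skip_trace=lambda a: "555-0199",      # stand-in vendor lookup
)
print([a["id"] for a in out])  # A1 is scrubbed out before any spend
```

Note the cost logic built into the order: excluded accounts never reach the paid scoring or skip-tracing steps, which is one reason the sequence matters as much as the vendors chosen.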
Will Turner (08:34)
Yeah, that's 100% right. And that's part of what's fun about this, Adam: obviously our customers know their business better than anybody, and fortunately we know the data side really, really well, because we have a number of customers across a bunch of different markets, all providing performance data to us. So we have about 120 different data sources connected to our platform today, and we have performance data that's helping inform us how those data vendors are performing,
Will Turner (09:04)
and then how to sequence those vendors and how to configure those products to get the most out of them. So it's a combination of our customers knowing their business better than we do and us knowing the data piece better than most. You put those two together and something special happens. It's the one-plus-one-equals-three scenario.
Adam Parks (09:11)
Yeah. Well, just even engaging in that process, I think, is a great start for most organizations. But I think people get scared, like, I don't have a data scientist; I don't know how to measure not only the value of the data, but the value of that investment in comparison to other potential data investments. And if the number one thing in mathematics is the order of operations, then the order of operations of adding and subtracting data in the waterfall process is ultimately going to have a pretty significant impact.
And then think about all the different reasons that two waterfalls are never the same, because of the level of complexity that we're talking about. In all the consulting I've ever done, and I know you've been in far more shops than I have, I've never seen two organizations running identical waterfalls, because depending on your product, your location, your strategies... I mean, there are so many different variables that go into it. Talk to me a little about the uniqueness that you see between these different types of waterfalls at different organizations.
Will Turner (10:20)
Yeah, so one of the things I find interesting, Adam, is that we get the question a lot: which data vendor is the best? That's a really hard one to answer, because it's relative to your business, your budget, your tolerance for risk, and what you define success as. What I find interesting is that we do a lot of A/B testing, or champion-challenger testing.
Adam Parks (10:29)
Such a loaded question. Ktron's the best.
Will Turner (10:46)
And you can take five different data providers and run them in a different order or sequence and get very different performance outcomes with the exact same vendors. So you asked me about the uniqueness of a data waterfall. I'd start with the customer. The customer in itself is unique. They're unique in their budget.
Will Turner (11:10)
They're unique in their risk tolerance, their philosophy. They're unique in the type of consumers that they work with. The portfolio itself varies, right? What's the makeup of this portfolio? What type of debt is it? How old is that debt? Has it been worked before? Is it prime? Is it non-prime? What are the characteristics of the consumer? How good is the data within that portfolio? Is it accurate? Is it incomplete? And we haven't even talked about
Adam Parks (11:20)
How many times has it been worked?
Will Turner (11:40)
you know, the third party element, the external data, but just the customer in itself. And so that's where I think part of the uniqueness is saying, okay, what is the business challenge or use case that we're trying to solve? What's the cost of fixing that or not fixing that? So what's the financial or performance gain or loss if we fix that problem or we don't fix that problem?
Will Turner (12:06)
How are we going to measure that? What does success look like? And then once we know that, it's time to start assembling the ingredients, the third-party data providers that we're going to use in waterfall fashion to solve for this. So I talked about the customer and the portfolio and the uniqueness of that. Now we're getting into the third-party data providers, or external data, and what's unique about them. On the surface, the simple test that a lot of people do is, you know, they'll take a small sample of accounts, run the same file of accounts out to a handful of different data providers, and look at what data is unique and what data overlaps. That just gives you a quick and dirty understanding of whether they have depth of coverage and how unique their data is. And that's typically where a lot of the data testing stops. I think that's really where ours starts.
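The "quick and dirty" test described above, running the same file past several vendors and comparing what comes back, reduces to two numbers per vendor: coverage and uniqueness. The vendor names and hit sets below are fabricated for illustration:

```python
# Coverage/overlap comparison: for each vendor, what share of the file
# did it hit, and how many of its hits did no other vendor return?

def coverage_report(vendor_hits, n_accounts):
    report = {}
    for name, hits in vendor_hits.items():
        others = set()
        for other, other_hits in vendor_hits.items():
            if other != name:
                others |= other_hits
        report[name] = {
            "coverage": len(hits) / n_accounts,  # share of file with a hit
            "unique": len(hits - others),        # hits nobody else returned
        }
    return report

hits = {"vendor_a": {"A1", "A2", "A3"}, "vendor_b": {"A3", "A4"}}
print(coverage_report(hits, n_accounts=10))
```

As Will notes, this only measures whether vendors *have* data, not whether the data performs; that requires joining these hits back to performance outcomes.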
Adam Parks (13:04)
Okay.
Adam Parks (13:05)
I mean, that makes a lot of sense, because as we think our way through building out this data waterfall, where people generally stop is before they can move that data set into actionable intelligence. We can bring all of this data, all of these signals, together, but we're going to have to convert them into some sort of actionable intelligence that allows us to move things down the path here. But as we think about how unique they are: we've talked about how the different organizations themselves drive what's unique about their waterfalls and all of those different data sources that are coming in. But each one of those data providers and vendors is also going to have a whole other set of data products. And now you're comparing different data products without truly understanding where the data is coming from. So what kind of KPIs are you able to start looking at from a data quality perspective to understand the value attributes of these different sources?
Will Turner (14:04)
Yeah, so as far as KPIs, going back to the uniqueness of the customers and what they're trying to solve for: it could be a lender or a debt buyer or a servicer. In the case of a servicer, just within the same customer, they may have a portfolio where liquidation rates are fairly high but maybe profitability isn't where they want it to be. So their goal is to maintain their liquidation rates but improve the profitability of that portfolio. They may have another portfolio where they're competing on customer scorecards and chasing market share awards. So maybe the speed at which they
Will Turner (14:50)
get to a right party contact and a payment are the important metrics for them to compete on their client scorecards. So when it comes to KPIs, I think it goes back to what specific problem we are trying to solve and then how we are going to measure success. And we break those success metrics or KPIs out into two buckets. I'll use a phone waterfall as the example because it's really simple.
Adam Parks (15:15)
Okay.
Will Turner (15:16)
In the case of a phone waterfall, when we're using external data to verify what information is correct, or to enrich or append data to get to the correct information, we look at things like how many dial attempts were made on this account. What was the result of those dials? Was it a right party contact or a wrong party contact? The dialing metrics themselves are important to inform not only how the data is performing, but how the client is actually using the data that they're purchasing. And then on the payment side, of course: did this person pay or not? What percentage of the balance did they pay? What's the average payment amount? What are your gross dollars collected? What are your fee dollars collected? What is the ROI on that information? What is the liquidation rate, or give me the liquidation dollars per account? So we break it out into operational metrics (from dialing, sending SMS text messages, emailing, whatever the communication channel is): how is the customer working that data, and what are the performance metrics there. And then there are the money metrics around liquidation rates, dollars collected, ROI, et cetera.
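The two KPI buckets Will describes can be computed from account-level rows. The field names and figures below are hypothetical stand-ins for what would actually come out of dialer logs and payment records:

```python
# Two KPI buckets from toy account rows: operational (dialing) metrics
# and money (liquidation/ROI) metrics. All fields are made-up examples.

def kpis(rows, data_cost):
    dials = sum(r["dials"] for r in rows)
    rpcs = sum(1 for r in rows if r["rpc"])       # right-party contacts
    collected = sum(r["paid"] for r in rows)
    balance = sum(r["balance"] for r in rows)
    return {
        # operational metrics
        "rpc_rate": rpcs / dials if dials else 0.0,
        # money metrics
        "liquidation_rate": collected / balance if balance else 0.0,
        "dollars_per_account": collected / len(rows),
        "roi": (collected - data_cost) / data_cost,
    }

rows = [
    {"dials": 4, "rpc": True, "paid": 100.0, "balance": 500.0},
    {"dials": 6, "rpc": False, "paid": 0.0, "balance": 500.0},
]
print(kpis(rows, data_cost=20.0))
```

Splitting the buckets this way also surfaces the usage question Will raises: good data with a low RPC rate may mean the client isn't working the appended numbers, not that the vendor's data is bad.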
Adam Parks (16:28)
And it feels like there are decent measurements, or at least some predictive measurements, within each one of those. And, you know, as you're talking about the phone number one, I start thinking about some of the skip tracing that has to be done post judgment, and really just how different the needs of the different disciplines of debt collection are. When we think about debt buyers versus agencies versus law firms and what that looks like, I would think that their data needs are wildly unique.
Will Turner (16:59)
And you're referring to maybe a lender versus a debt buyer versus a collection agency at early stage versus a legal strategy. Yeah, wildly different data needs. And even within any of those categories, right, Adam, each customer can have unique goals. Is it performance? Is it revenue? Is it profitability? Is it chasing scorecard metrics?
Will Turner (17:23)
But yeah, certainly a data waterfall or wash for a legal debt collection strategy is going to look a lot different than, you know, early stage collections for a first party collection agency, for example.
Adam Parks (17:36)
Yeah, investment capability: how much am I willing to spend early stage versus later stages, and how much money am I willing to throw after bad money? Now, with all of these different data sources that you've been playing with over time, I have to assume that measurement, testing and evolving the waterfalls for your clients, is one of the big things you're spending a lot of your time on.
How do you get an organization to set those parameters in advance of a test? It's been my experience as a consultant that when you go in, if you don't put those yardsticks on the field, the goalposts always seem to move further away. How do you go through the process of getting everybody on board in advance to make sure that we're all speaking the same language before we're spending dollars on data?
Will Turner (18:25)
Yeah, so I think I know what you're driving at there, Adam. We typically start with a complimentary data assessment. First we talk about the organization's goals, their priorities, how they measure success. Then we look at the individual data strategies that they have in place, the data vendors that they use, and the reporting that they have in place, or don't have in place, to track key performance indicators. And we make observations and recommendations that say: hey, you told me that your goal was X, Y, and Z, yet your process today appears to say otherwise. So maybe the goal is, let's say, growing revenue, but the data strategies that they have in place are really based more on data cost and ROI.
Will Turner (19:16)
Meaning that the data strategy they have in place isn't necessarily aligned with their performance goals. And so we have a dialogue around that to get alignment: these are your goals, here are the success measurements, we agree on those. And then we talk about the testing methodology that we're going to use to prove out whether a new data strategy performs better or worse than their current data strategy.
Adam Parks (19:45)
Okay. And so for an organization that feels like they have their data waterfall in order, they've been working through it and all of that, how often should an organization be revisiting that data strategy?
Will Turner (19:58)
I love this question, because our clients that have the highest performance take the philosophy of continuous improvement. You never sort of arrive at the ultimate data strategy. So we would recommend that there's always a portion of your inventory that's in an A/B testing type environment. But if I were to answer it as far as periodic testing, I mean no less than quarterly.
Will Turner (20:26)
Again, we're not advocating, nor are we building, set-it-and-forget-it type data waterfall strategies. All of us are operating, especially our customers, in very competitive environments. And everything is in flux, right? The market is in flux, consumers' lives are in flux, our customers...
Will Turner (20:49)
their customer environments are in flux. And so I think for data strategies to really perform, they require refinement over time. The way that we know how to do that is through A/B testing, trying out new data sources and new data strategies. We achieve success today, and then we say, okay, now how can we do better tomorrow?
Adam Parks (21:13)
I like that approach. How do you recommend testing new data sources or treatment paths without disrupting the current production? Because we're talking about testing again, quarterly and constantly going through this process. But how do you structure some of those tests so that you can continually run this testing, continually push the boundaries, but without interrupting the business at hand?
Will Turner (21:36)
Yeah, great question, Adam. So a couple of approaches come to mind. One is retroactive testing, where you grab a pool, a sampling, of accounts that have known performance outcomes. You take those accounts and bounce them across a data vendor or a waterfall, a group of data providers, and you determine: if you didn't know what the performance data was and you ran this account at time of placement, what would the data vendors have appended? And then you match it up to the performance results. So it's a way, without any operational impact, to identify whether there's something here with this data provider that could add value. The other way is production champion-challenger testing, where you take a small percentage of your inventory and run your champion-challenger test on that. That way, the bulk of your accounts are running your existing strategy, so it's a lower-risk way to run a new strategy on production accounts without impacting the whole population.
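The production side of the approach above comes down to routing a small random slice of inventory to the challenger strategy while the rest stays on the champion. The 10% share and the seed below are illustrative choices, not a recommendation:

```python
import random

# Sketch of a champion-challenger split: most accounts keep the current
# (champion) strategy; a small random cell gets the challenger.

def assign_test_cells(account_ids, challenger_share=0.10, seed=42):
    rng = random.Random(seed)  # seeded so the split is reproducible/auditable
    return {
        acct: "challenger" if rng.random() < challenger_share else "champion"
        for acct in account_ids
    }

cells = assign_test_cells([f"A{i}" for i in range(1000)])
n_challenger = sum(1 for v in cells.values() if v == "challenger")
print(n_challenger)  # roughly 100 of 1,000 accounts land in the test cell
```

Random assignment (rather than, say, picking one client's placements) matters here: it keeps the two cells comparable, so any performance gap can be attributed to the data strategy rather than to portfolio mix.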
Adam Parks (22:49)
So you're testing it in real time, but you're testing it within an isolated sample population to avoid any deeper disruption. And if you've already built out that waterfall, your risk level should be pretty minimal there. So as you see all these organizations going through the constant evolution of improving their data waterfalls, what's the big mistake that you're seeing organizations make again and again? As a consultant, you get such visibility into so many different landscapes. Is there a recurring theme that you're seeing in terms of a mistake or a challenge that organizations are creating for themselves?
Will Turner (23:27)
Yeah. So in no particular order, there are a few that come to mind. I think overconfidence is one of the main ones, you know, thinking that you have everything you need data-vendor-wise, with the strategies in place and the reporting in place. It's that sort of static mindset: maybe it worked when we tested it and we put it in place, and then assuming that it is still working
Adam Parks (23:28)
No particular order. I already got a list. I love it. Okay.
Will Turner (23:53)
would probably be a pretty bad assumption, because of all the flux that I talked about. Number two would be being single threaded, putting all of your eggs in one basket. Using one primary data provider would be a mistake in most situations or for most use cases. And the third one: I think for a servicer, and this would apply to a lender too, it's assuming the data that the consumer provided the original creditor is still good. We talked at the start of this podcast about the rate at which consumer data becomes inaccurate or stale, and it's somewhere between 30 and 40% per year. And so a lot of times I hear, especially from collection agencies: hey, we don't skip trace. We think that's too risky. So we only use the data that came from our customer, the creditor.
Will Turner (24:59)
You know, again, after two years, 60 to 70% of that consumer data is not correct. And so that in itself can be risky, right? So those are the three big ones that come to mind: overconfidence, being single threaded, and assuming that the consumer data is the best.
Adam Parks (25:17)
So the advice that I hear there is: remain diversified, and continue to push the boundaries like a velociraptor testing the fences; we always want to be pushing those boundaries and seeing what we're capable of doing there. But as the world continues to change around us data professionals, what do you see looking forward in the application of artificial intelligence? Clearly, data is required to run those models, to run those tool sets, and to define the success behind AI itself. But that aside, how do you think artificial intelligence will ultimately impact your role, your world, so to speak, in terms of testing, understanding data, and being able to apply it? What do you think that looks like over the next five years?
Will Turner (26:03)
Two things come to mind, Adam. So for data companies themselves, they're starting to use artificial intelligence to gain additional insights. They're building products that draw on hundreds of different data sources, and artificial intelligence is now starting to be used to get insights into that data, to figure out how to better piece that
Will Turner (26:27)
data together to solve a particular business problem, and also, on the reporting and analytics side, drawing insights from that data for the customer using artificial intelligence. The second way that I see it being applied is, as you know, Adam, our customers are starting to use things like AI-powered virtual agents to communicate with consumers in a 24/7 environment.
Will Turner (26:53)
And so there's a whole new channel of communication happening between our clients and consumers using artificial intelligence, and it comes with a whole set of new key performance indicators. It's a little bit more complex now, right? Because it used to be that somebody would send a letter or pick up the phone and make a phone call, and those were the two channels of communication. Now we have a whole bunch of ways that consumers
Adam Parks (27:06)
Yeah.
Will Turner (27:18)
can interact and engage with our customers. And so with that comes a whole new set of key performance indicators and metrics that need to be understood, I guess, to maximize the investments that our customers are making in third-party technology tools like artificial intelligence.
Adam Parks (27:39)
I think you're right on the AI use case. I think when it comes to the data, yeah, it's going to be able to, say, combine the data and maybe operationalize some of the data that's coming into organizations: getting it into their CRMs, identifying the signals from their communication channels. But it's that conversion to actionable intelligence, when we can take all of these data sets. And we also read, you know, articles and things around this.
I was reading the other day that something like 250 bad documents can corrupt an entire AI model, and that's really all it takes for that to happen. So I think the curation, the sommelier of data, ultimately still needs to be there to refine what is being fed in. I think there are some value attributes from an artificial intelligence standpoint, but it's so specific, and the more data we can feed it, obviously, the more insights or correlations we can find in the data. But I think that's the big use case over the next five years: how we're going to start activating some of that, from a data collection standpoint, into actionable intelligence where we can drive the next behavioral action.
Will Turner (28:45)
100% agree Adam
Adam Parks (28:47)
Well, it sounds like the future is now because we're starting to see some of that stuff start to happen. But well, as we think about data waterfalls and kind of our whole discussion today, is there anything that I didn't ask you that you thought I would ask you today?
Will Turner (29:00)
You didn't ask me which data source is the best, which I thought was good. It is a ridiculous question. Yeah.
Adam Parks (29:05)
Yeah, well, because that's a ridiculous question. Use cases are everything.
Will Turner (29:14)
No, I don't think so. Maybe what type of performance lift we typically see.
Adam Parks (29:21)
That seems to me too specific a question, because whatever number you give out is going to apply to one specific client that you were working with yesterday and not to everyone. That's why I try to formulate the questions the way that I do: I want to make sure that we provide an accurate representation to the industry of what's happening here. But Will, I know I'm going to have more questions for you in the next couple of months,
Will Turner (29:30)
Yeah.
Adam Parks (29:47)
just looking at everything that's happening in the economy. And once the 2025 TransUnion Debt Collection Industry Report rolls out in February, I know that we're going to have to have another discussion here. But for today, I really do appreciate you coming on, sharing your insights, spending a little time with me. This was fun. Clearly, there's a lot that I can continue to learn from you. So I look forward to our next adventure together at whatever conference that happens to be.
Will Turner (30:12)
Hey, thanks for talking to me, Adam. Always enjoy talking to you. Appreciate the opportunity.
Adam Parks (30:16)
Absolutely. For those of you that are watching, if you have additional questions you'd like to ask Will or myself, you can leave those in the comments here on LinkedIn and YouTube, and we'll be responding to those. Or if you have additional topics you'd like to see us discuss, you can leave those in the comments below as well. And I'll get Will back here at least one more time to help me continue to create great content for a great industry. But until next time, Will, thank you so much. I appreciate you. And thank you, everybody, for watching. We'll see you all again soon. Bye.
Will Turner (30:38)
Thank you, Adam.
Why Debt Collection Data Waterfalls Matter
A debt collection data waterfall isn’t just “adding more data.” It’s a deliberate sequence of internal and external sources designed to solve a specific business outcome—compliance protection, faster right-party contact, higher liquidation, or improved profitability.
Will Turner’s core message is simple: the waterfall is only “good” relative to your portfolio, goals, constraints, and how you measure success. Two shops can use the same vendors, in a different order, and get dramatically different results.
Key Takeaways from the Episode
1. Start with the outcome, then work backwards
Before picking vendors, define the business problem, success metrics, and what “better” means.
2. A waterfall includes verification and enrichment
Data goes stale quickly—so validating what you already have matters just as much as appending new phones/emails/addresses.
3. Compliance scrubs first, then segmentation, then skip tracing
A common sequence begins with exclusionary scrubs (deceased/bankruptcy), then recovery scoring, then deeper skip tracing.
4. KPIs should be split into two buckets
- Operational metrics: dials, contact outcomes, channel effectiveness
- Money metrics: dollars collected, liquidation rates, ROI, dollars per account
5. Quarterly is the minimum cadence for revisiting strategy
The best-performing organizations treat waterfall optimization as continuous improvement and always keep some inventory in testing.
Industry Trends: Waterfall Strategy Is Becoming a Competitive Advantage
As more communication channels and AI-supported tools enter collections operations, waterfall management is expanding beyond “who has the best phone numbers.” Teams now need measurement frameworks that connect data, channel behavior, and outcomes—without overfitting to yesterday’s portfolio.
Timestamps
00:00 – Intro: Will Turner + why data waterfalls matter
00:45 – Will’s background: analytics, fraud, skip tracing, collections
02:30 – TEC Services + TEC Analytical Services
03:38 – What a data waterfall really is
04:34 – Validate vs enrich (and why data goes stale)
05:55 – Waterfall components: scrubs → scores → skip tracing
10:20 – Why no two waterfalls match
14:04 – KPIs: ops vs revenue/profit
19:58 – How often to test (and why)
21:36 – Retroactive vs production testing
23:27 – Biggest mistakes to avoid
25:17 – Where AI changes the game
29:47 – Closing
FAQs on Debt Collection Data Waterfalls
Q1: What’s the biggest benefit of a debt collection data waterfall?
A: It improves performance by sequencing data sources to verify, cleanse, and enrich accounts—leading to better contactability, compliance outcomes, and collections results.
Q2: How do I know if my waterfall needs to change?
A: If portfolio mix shifts, channel strategy changes, vendors change products, or KPIs drift, it’s time to re-test. Quarterly reviews are a strong baseline.
Q3: What’s the safest way to test a new vendor?
A: Run retroactive tests on accounts with known outcomes, or test on a small percentage of live inventory using a champion-challenger approach.
About Company
TEC Services Group
TEC Services Group is a technology and professional services firm for the credit and collections industry, providing advisory, analytical, and managed services—helping organizations evaluate and optimize third-party data, improve performance, and support collections technology initiatives.



