In this episode of the Receivables Podcast, host Adam Parks sits down with Porter Heath Morgan, partner at Martin Golden Lyons Watts Morgan and author of the forthcoming book The Memory Project. The book explores AI, consciousness, and how technology could one day redefine human memory and identity.


Adam Parks (00:06)
Hello everybody, Adam Parks here with another episode of the Receivables Podcast. Today I'm here with a very well-known industry legend, Mr. Heath Morgan, joining me to talk about artificial intelligence and the book that he's been writing. He was my first guest on the AI Hub podcast, where we talked about consciousness, artificial intelligence, and what that looks like from a societal perspective.

And he let me know that he was writing this book. So as he's gotten further down the line, I wanted to get him back here and talk more about the project he's working on and what it all means for us. So Heath, thank you so much for joining me today. I really do appreciate you joining and sharing your insights.

Heath Morgan (00:52)
Yeah, Adam, thanks for having me back on. It was a fun discussion for the first-ever podcast. And it's fun to see how the journeys have progressed, both with the book and with AI technology too.

Adam Parks (01:03)
Exactly. Well, for anyone who has not been as lucky as me to get to know you through the years, can you tell everyone a little about yourself and how you got to the seat that you're in today?

Heath Morgan (01:12)
Sure, yeah, I'm a third-generation collection attorney, so I was born into this industry and grew up in it. I was a film major in college and went to law school with the idea of going into film production, and did some creative projects there. Soon after, I realized they didn't pay as well as debt collection and got back into the collection space. For 20 years I was in-house counsel, and I expanded my practice out to the firm I'm at now about six years ago. And I love the idea of being able to help multi-generational family businesses with sustainable tools, processes, technology, and strategic planning to be viable for the future.

And in that process, you know, I had another creative thought to write a book with my son back in 2020. At the time, I didn't know how much of the technology actually existed. So I kind of put it on the shelf and said, you know, one day when I retire, I'll write this book. And then when ChatGPT came out and I saw this is what we were talking about, I realized this was happening quicker.

I've spent the last three years diving in, researching AI technology and what it means for us, and ended up writing a book to help me speak not just to our industry about how we use these tools, but to other industries, parenting groups, and education groups, because this digital transformation isn't unique to our industry; it's all across society.

Adam Parks (02:47)
It feels a lot like back when the internet was first coming around. The application of the internet in the American home impacted almost every aspect of life over the last 30 years, and I think artificial intelligence is going to be something similar. So when we first started talking about consciousness and the understanding of ChatGPT, one of the most interesting comments you made on the last podcast was that someday your son would be able to refer to a bot and ask it questions, and it would respond and react like you. Well, since we had that conversation, my wife got pregnant, we had a baby, and now I'm starting to think about these same things. So when I saw the book come back to the surface again, I said, I have to have this conversation, because now I'm so interested. So could you go back and give us a little bit of a history lesson on where that concept and thought process came from for you?

Heath Morgan (03:40)
Yeah, for sure. That's exactly right. That was the conversation my son and I were having during the pandemic: you know, when I die, I'll be gone, but he will have a conversational audio-video chatbot of me that he can talk with for the rest of his life. My grandkids will have it. My great-grandkids, descendants I won't even know, will be able to engage and interact with me for generations. It's kind of that legacy mindset about what we leave behind and what information we want to put into it. That's one part of it. And then the other part is, what does the next generation want to listen to? Right? Do I get to curate this bot and be the loving father who always gave the perfect advice, or does he get to curate it and say, no, dad was kind of a jerk and I want my kids to know the truth?

You know, and the reality is there are a lot of relationships where kids may not want to interact with a legacy bot because they don't want to be judged for their decisions from the grave by their parents, right? And that's kind of when we decided it's just going to be multi-channel, with different levels. Just like channels on a TV or radio station, you're going to have different personas programmed for different purposes that you can interact with throughout your day.

Adam Parks (05:02)
Which is interesting, because we're already starting to see the beginning of that with the agentic AI bots and some of the challenges that have come with them. I think responsible adults have looked at it as a business tool, like a hammer: I can use it to build a house or I can use it to build a deck or whatever. But some of the younger generations maybe don't have that same level of maturity in those interactions. And there's been some bad advice out there, from what I've read.

Heath Morgan (05:28)
Yeah, absolutely. I mean, this is all they know. They're native speakers of this technology, and the younger they are, the more native they'll be. And yeah, you're exactly right. The bad use cases, AI psychosis, emotional attachments, emotional manipulation, even from ChatGPT, aren't far removed from this conversation.

Heath Morgan (05:55)
We're seeing more and more articles come out about it. You know, I guess when I was writing the book, I was thinking that you would assign a persona and have different enterprise personas, but it's already happening just on a public ChatGPT format, or a Meta format, or any other of these LLMs: we are assigning human characteristics to them, and then building our worth or value or identity based off the feedback we're receiving from them.

Adam Parks (06:26)
It's a scary thought, those types of interactions and where they can lead. And I think it starts with that social media comfort. The comfort level with these online communications continuing to increase generation over generation is, I think, going to cause some issues in the future. Now, as you started prepping and writing this book, talk me through a little bit of what you're trying to communicate and the underlying themes.

Heath Morgan (06:52)
Yeah, that's a good question. You know, I kind of had one idea when I started to write the book. I was looking for more of an action-adventure book. And then I started researching this technology and seeing the parallels with social media and how easy it would be to manipulate our emotions, right? Social media uses algorithms to manipulate our emotions to get us to engage longer. And then there's this emotional connection we have with a bot that's responding to us, that's affirming us, that we're sharing information with that we don't share with anybody else. It builds that deeper connection and the opportunity to manipulate and drive ad revenue. When we start introducing ads into ChatGPT, now we're selling products. Now we're selling time and selling more ads for those products, right?

Heath Morgan (07:47)
It just makes these vehicles and these tools more prone to manipulation. And if you really want to go dark, you can think about political parties. What's to stop both political parties from funding their own chatbots to engage with kids, to start talking about their ideology and their principles at an early age, to shape and mold and grow new voters for their political ideology, right? The possibilities for misuse of this are pretty great. And so with that, I shaped the story into a young-adult theme that my son could read, his friends could read, my friends and family and their kids. The idea is: how do we teach our kids to use this responsibly and not make the same mistakes we did with social media? And beyond that, it certainly is part of my consulting. When I do AI consulting, it helps to think 20, 30 years into the future and walk back to give advice about how we're going to use this technology now. But it's also so prevalent in our society that it affects the next generation and how we interact with each other, and it really can shape our society and human communication moving forward.

Adam Parks (09:08)
One of the things you and I talked about previously was that our children won't be judged on the output of their models; they'll be judged on how they input to the model. So we have to look at the dynamic of education and schooling and what that is going to start to look like, because not only can we start to use AI to teach certain subjects, and I've gone down that rabbit hole a few different times myself, just wanting to learn about whatever topic.

But then there's also the additional challenge of, you know, what are you actually grading for? Are you grading based on the quality of prompting and those types of inputs? Or are we still going to be looking at the outputs from the models and saying that's what we're going to judge based off of? And, you know, that extends beyond education into when these same people get into the real world and get jobs. How are we going to evaluate employees and team members around us as it relates to input versus output of AI models?

Heath Morgan (10:09)
Yeah, that's exactly right. And when we spoke before, we talked about how schools will have their own LLM accounts that they will assign to each student. The teachers have oversight, so they can see the prompting. They can see how students are challenging outputs, putting them back together, and curating them into a final output. That's natural language coding that we will be teaching.

And that's going to change things. You know, we talked before about how ChatGPT and LLMs are like a TI-85 calculator. Don't bother keeping up in this arms race of seeing if you have better AI to detect the AI output of a student. You've got to figure out how to teach with this being a tool that can be used. And I think a big part of that

Heath Morgan (11:05)
is, you know, how do we teach cognitive skills? How do we teach the reasoning? OK, does this output make sense? Does this align with what I was asking, what I'm going for, versus just taking the first output as gospel and turning that in? Right. And so we need to retain those critical thinking skills, because as we outsource... You know, the parallel to this from 10 or 20 years ago is Google Maps, right? How many of us don't know how to get somewhere because we rely on Google Maps? We've lost that geolocation in our minds that can take us from one place to another. Same thing with phone numbers, right? Once we have all our numbers programmed into our cell phones, how often do we use them? We have seven-digit phone numbers because, you know, scientists back when phone numbers came out said seven digits was the maximum we could hold for

Heath Morgan (12:02)
rapid recollection, right? Now it doesn't even matter anymore. How many phone numbers do we remember? And so if we just go carte blanche and rely on GPTs to do the thinking for us, what do we lose in terms of our cognitive skills and things like that? And then the next iteration: once we have your smart Ray-Ban glasses, your Meta glasses, smart contact lenses that can record and transcribe across my screen, do I really even need any kind of memory retention, or is it just about presenting the material to appear authentic and expert, right? We've seen aspects where it doesn't matter if you actually visited Brazil; it matters if you took a picture and shared it with your friends in front of different landmarks there. Right. And we can laugh about how Instagram and selfies are the distillation of that. But really, what's the meaning behind traveling to a country like Brazil? What part of the culture are you understanding? And so we're seeing it being eroded in different ways, and it will be

Heath Morgan (13:21)
magnified tenfold if schools don't change education to value the experience versus just the appearance of the experience. Both Apple and Android now have their translation tools built directly into their headsets, and so they've come up with these opportunities to cross those barriers. So now, spending six months in Brazil, how much Portuguese do I really get to speak? And I think part of that is people wanting to learn to speak English and constantly speaking to me in English, but

Heath Morgan (13:37)
Yeah, AirPods, yeah.

Adam Parks (13:53)
How much do I miss out on the opportunity to engage in that because now there's a tool set that allows me to make it easier. So do I develop the same functionality? And I say that as a person who's like a thousand days into Duolingo and still doesn't speak Portuguese.

Heath Morgan (14:08)
Well, it takes time, right? But the idea is that it's a life hack, right? You don't need Duolingo; you don't need to actually learn it yourself, because technology can do it for you. And if we opt in for those life hacks long enough, we start opting out of life.

Adam Parks (14:09)
So welcome to what I think is a very interesting underlying theme for this book. I mean, to think about how some of these things are impactful, and what you were talking about in terms of being able to, let's say, manipulate the youth through these chatbots that have a particular political ideology or whatever the case may be, and it could go in either direction. It's just very interesting to start thinking about the application of this technology to the realities of social norms.

And how much things are really changing. We started seeing it with cryptocurrency, but for the first time, for example, in Brazil, their Pix platform, their digital payments platform, will exceed credit card transactions in 2025. That's almost impossible to think about as an American, with how often you're swiping, whether it be a debit card or a credit card. But almost nobody carries cash anymore, and so you see a lot of people swiping, and now even that technology is starting to change.

So I think some of the norms that we've faced over time are gonna start to continue to evolve as well. And some of that's gonna be powered by the level of quote unquote intelligent technology that sits behind it.

Heath Morgan (15:36)
Yeah, that's absolutely right. And I think that's a good point, Adam, because it's an easy cop-out to say, you know, these first models that are out here are free, so they're collecting your data now. It's easy to say, no, that's too much of an intrusion on privacy, right? But we're not too far away from having enterprise models

Adam Parks (15:38)
You talk about Google Maps.

Heath Morgan (16:06)
that won't train on your data. They will just be for you, and that will address some of those privacy concerns. And now we can start having deeper conversations about the adoption rates and how we use them. And, you know, if I'm paying for this service to use crypto to transact, am I really getting value out of it versus it just being free? And that's where I think,

Heath Morgan (16:32)
you know, once AI matures, we'll have those conversations: we're done with the free versions of these tools. Now we're going to pay for the ones that we value, and how do we use those to interact with currency, transactions, everyday communications, that kind of thing.

Adam Parks (16:49)
Yeah, if you're not paying for the product, you are the product. So I wonder how long these things will be available for free. But the adoption rates, I think, especially at younger levels, are happening a lot faster than I would have expected. I mean, when I first started hearing about chatbots, it didn't sound that interesting that I was going to talk back and forth with a computer. But as you go down the rabbit hole, you realize some of the tasks that you're able to automate. Robotic process automation versus what we're able to do with agentic AI agents now is two very different worlds.

Heath Morgan (17:25)
It is, it is. And my example is, when I saw the first phone with a camera on it, I thought that was the dumbest thing in the world. Who wants a camera on their phone? And now we use it, and it's not just a camera. It's how we use it, right? We use it for receipts. We use it for different apps and things like that. And so it's taken us 20 years to get there, but you know,

Heath Morgan (17:54)
it's really the evolution of having that convenience. The other analogy is having a phone that tracks your location, right? That happened with the iPhone back in 2007, and it's what brought us Uber. It brought the ability to have Uber Eats food delivered to me, cars taking me around. So there are mature use cases of every technology

Heath Morgan (18:20)
that we will pay for, or that we won't pay for and instead trade our data for the convenience.

Adam Parks (18:27)
It's an interesting dynamic right now. And I think you've still got organizations that are pushing back against the use of artificial intelligence at all. Even in talking with people at the conferences, you can still hear some trepidation, although I'm seeing more voice AI companies at a conference than I ever knew existed in this world. What do you think happens with those that

Heath Morgan (18:45)
They are exploding right now.

Adam Parks (18:50)
are leading organizations that are not getting on board, whether it be personally or professionally, or even trying to experiment with it? What advice do you have for those people who are hard nos on the use of artificial intelligence?

Heath Morgan (19:01)
Yeah, you know, it's a good question, because it depends on what the hard no is. Is it "I'm overwhelmed, I'm scared, I'm scared of change," or is it "this is truly too much of a privacy risk and concern," based off either my values or federal and state laws? Right. I think either way, the answer is you still have to dive in and play with this technology at certain levels. Because if it's values, or laws about data privacy, again, these are the early versions. There will be more mature models that can offer you the privacy and confidentiality that you would need. And if you don't have some level of experience with it now, it's going to be a real hard ramp-up.

If it's a matter of being overwhelmed and uncertain about the future, the antidote to uncertainty isn't certainty, it's clarity. So how can I understand this? How can I have my team understand these tools? Again, I think it goes back to playing around with it. My wife works for a multifamily housing company.

Heath Morgan (20:17)
And she's programming bots. I mean, her background's in advertising and marketing, and she's done a lot of sales. Now these tools are so accessible, and the ability to code in natural language has opened up lots of possibilities. If you're just saying, you know, this is too much for me, you're not engaging with, one, how easy it actually may be to dive into this.

But you also could be ignoring future customers, because we do have consumers using this. I mean, you can write an AI policy that says we're never going to do AI, and then you're going to get phone calls. Those phone calls are already coming to collection agencies and creditors from AI bots on behalf of consumers. And if you don't have a plan in place, you're going to have a lot of long calls that don't go anywhere, or a lot of your human agents just hanging up on the AI, not really understanding it and not having guidance on how to handle it.

Adam Parks (21:17)
I think that's a really astute point, what's going to happen from an AI bot standpoint on the consumer side. I think we'll see the first iteration of that in the not-too-distant future with, I believe it's iOS 26 from Apple, answering calls and being able to manage and manipulate some of that information. That is, I think, really going to be a challenge for the industry in terms of right-party contact rates starting to drop. But also, from a planning standpoint, I've been using a tool for years called RoboKiller that answers spam calls and tries to keep the caller on the phone as long as possible with an AI bot. We're going to see the same thing. I know we're already seeing it now, and those challenges are going to become significantly more sophisticated over the coming years. Maybe the coming months. I don't even know if it's years anymore; I feel like I'm measuring in the wrong increments.

Heath Morgan (21:59)
Yeah, absolutely. The calls are happening now. We're seeing it more on the debt settlement side of things, from debt settlement companies. But right now, in this early iteration, the AI from the consumer doesn't have the ability to make payments or resolve the account. And so the time your human spends talking to it is wasted time on a nonproductive, nonselling call. Now,

Heath Morgan (22:40)
when we have this next iteration of these consumer bots, it won't just be "Hey Siri, hey Gemini, read this email, call this collector and negotiate my debt." It's going to be "and I authorize you to pay a max of $200 today from my Apple Pay." You launch that, and now that's a paying customer, just like a consumer. And then it's: how do we verify

Heath Morgan (23:08)
that the consumer launched and initiated that call? And, you know, we're still working that out as an industry, because it still isn't here yet. But one of the best ways I've heard is some form of multi-factor authentication: if you get an AI call saying it's authorized to make a payment, okay, I'm going to send a six-digit code to the email address associated with the consumer. Can you recite that back to me to let me know you have that authorization, and that the consumer consents to any third-party disclosure and any other concerns you have? I don't think we're that far off from needing to plan out what it would look like to text or email a multi-factor authentication code to a consumer to make sure we have authorization to speak to that AI.
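The verification flow Heath describes could be sketched roughly like this. This is a toy illustration under assumptions of my own — the function names, the in-memory "outbox" standing in for an email send, and the five-minute expiry are all hypothetical — not a production design or anything the industry has standardized:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # hypothetical 5-minute expiry for the one-time code


def issue_code(consumer_email: str, outbox: dict) -> str:
    """Generate a six-digit one-time code and 'send' it to the consumer's
    email on file (simulated here by writing to an in-memory outbox)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    outbox[consumer_email] = {"code": code, "issued_at": time.time()}
    return code


def verify_recited_code(consumer_email: str, recited: str, outbox: dict) -> bool:
    """Check the code the calling AI recites against the one sent to the
    consumer: it must exist, be unexpired, and match (constant-time compare)."""
    entry = outbox.pop(consumer_email, None)  # single-use: consumed on attempt
    if entry is None:
        return False
    if time.time() - entry["issued_at"] > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(entry["code"], recited)


# Simulated call: the agency emails a code, the consumer relays it to their
# AI assistant, and the assistant recites it back on the call.
outbox = {}
sent = issue_code("consumer@example.com", outbox)
assert verify_recited_code("consumer@example.com", sent, outbox)

# A second attempt fails because the code is single-use.
assert not verify_recited_code("consumer@example.com", sent, outbox)
```

The substance of the pattern is the same as any one-time-passcode flow: the code is short-lived, single-use, delivered over a channel tied to the consumer's identity, and compared in constant time; capturing the consumer's consent to third-party disclosure alongside it would be an additional compliance step on top.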

Adam Parks (23:55)
That's interesting. The third-party disclosure situation I think is complex, especially when models are watching models and things are stacked on top of each other from an AI perspective: independent models versus these stacked or layered models. So I think there are going to be some really interesting challenges that we're going to face as both an industry and a society, because the underlying factor here is that we service consumers, and consumers' preferences and behaviors are changing as it relates to the use of artificial intelligence. And we're going to have to continue to try and catch up. I did ask a former CFPB executive on a recent podcast: hey, look, we got Reg F 30, 40 years after we got the FDCPA, but even in Reg F, which was only four years ago, we did not anticipate the rapid growth of artificial intelligence from a consumer perspective. What's that going to look like over the next 10 years? I thought it was an interesting response: they really tried to look at the technology channels and what was realistic at the time. And I'm hoping that we do get some sort of federal clarity on it at some point, because letting the states decide how artificial intelligence is going to work is not only next to impossible, it's going to create a massive imbalance in the behavior and treatment of consumers across geographic locations, which is quite literally what we're told not to do.

Heath Morgan (25:22)
That's right. That's right. Well, it'll be up to the states right now, without federal legislation. I still think it's too early to really predict guardrails for it. But if you're looking to comply with AI technology, I think certainly having elements of transparency and keeping logs of your use of these tools are good practices.

Heath Morgan (25:47)
I think that helps with what future regulations will be. I think that helps with the bias concerns that we've seen highlighted. And I think having some sort of human in the loop, a human guardrail for either launching the tool, overseeing the tool, or taking the final output from the AI and making sure it's gone through a human first: one, I think it's good practice, but two, state legislatures who are concerned about humans losing jobs to AI are going to want some sort of element where, if you're using AI tools, you need three people to oversee it, or whatever it looks like, so they're protecting their constituents' jobs. I think having that kind of structure is a good framework as you explore these tools. But even in the state laws we're seeing right now, the definitions are way too broad. They cover things that have been in use for five, ten years. It's like the lawmaker has ChatGPT in mind: what if the financial services industry, what if the healthcare industry, uses ChatGPT?

But they're making the definitions so broad that they cover every aspect of AI technology and tools, including machine learning, automation, and tools that have been around, as you and I know, for five, seven years without incident, without issue. And so it's going to take education from our industry as well: really engage, identify what the concerns are, identify the bad use cases, and, one, see if there are existing laws already in place

Heath Morgan (27:25)
that would stop those bad practices. And if there are not, if existing law isn't mature enough for these new use cases, then let's address those new use cases.

Adam Parks (27:34)
Wow, Heath, I tell you what, I never depart a conversation with you without learning something. Every time we get on a call or run into each other at a conference, you open up my mind to new ideas, thoughts, and perspectives that maybe I didn't have before. And I really do appreciate you. I know that you still have your Kickstarter open for the new book.

So I'm going to go and make a donation today because I really want to get a character named after somebody in my life. I feel like that's right up my alley. But really, I can't wait to read the book. I'm excited.

Heath Morgan (27:56)
Yeah, thank you. The Kickstarter campaign runs through the end of the month, and then the book should be out. We'll get all the names of the new characters, put them into the book, and then look to publish probably at the end of October; that's what we're targeting. We'll also record the audiobook for those people who don't like to read. But yeah, the idea behind the book is that in this future world, human writing is essentially banned, right? It's deemed inefficient, because it's too slow to record versus just AI recording data. And it's also inaccurate, because sometimes our memories can be wrong; how I perceive a situation may be different than others. And if I've got eight surveillance cameras around me that have an accurate transcription of it, there are going to be people who say we don't need human writing anymore. And so one of the thoughts is that this is the last book to be written without AI. And I made a point of not writing it with AI, of writing it all by myself. And with that, the characters in the book are looking back to things that can't have been manipulated by AI, whether it be human writing, or a road trip to a place called Register Cliff on the Oregon Trail, where settlers carved their names into the rock, looking for indications of what can't be altered

Heath Morgan (29:25)
by an AI governance structure that can curate our past and our history. So it's kind of a fun little tribute to that, to name a character in a book like that, which won't be done again. But yeah, I appreciate it. The book will be out in October, and I hope it's a good read. I hope it helps create conversations. It doesn't necessarily take one side or the other. It's just meant to say: let's think about these things, not from how we see AI today or next year, but 10, 20, 30 years in the future. What do we value as humanity, as part of our identity? And what are we okay with accepting as we look into this post-humanism or transhumanism movement where humans are going to have computer chips in our brains? What makes us us, and what makes us a new species, so to speak?

Adam Parks (30:22)
I'm really looking forward to reading it after it comes out in October; I will be sure to put it at the top of my reading list. And hopefully you and I can have a continuation of this conversation before the end of the year, once we've had an opportunity to read and digest it, because you're approaching so many different, very relevant topics from a philosophical standpoint that we really do need to start considering

Heath Morgan (30:36)
Yeah.

Adam Parks (30:44)
in the deployment of this technology on a grander scale across our society more so than just our industry using it as a business tool. So Heath, thank you again for coming on and sharing your insights. I really do appreciate you.

Heath Morgan (30:56)
Yeah, thanks, Adam. Thanks for having me. Always a good conversation. I know we share similar values on how we feel about privacy and convenience, so it's always good to talk to a fellow convert. So thanks for having me again.

Adam Parks (31:10)
Absolutely. And for those of you who are watching, if you have additional questions you'd like to ask Heath or myself and you want to get involved in the conversation, you can leave those comments below on LinkedIn and YouTube, and we'll be responding to them. Or if you have additional topics you'd like to see us discuss, you can leave those in the comments below as well. And I'll get Heath back here at least one more time to help me continue to create great content for a great industry. But until next time, Heath, thank you so much. I really do appreciate you. And thank you everybody for watching. We'll see you all again soon. Bye.

Why Ethical Artificial Intelligence Matters

When was the last time your agency or law firm audited its use of artificial intelligence? For most compliance teams, AI has become a buzzword, generally used in marketing decks and strategy sessions, but not yet governed with the same precision as financial data.

In this new episode of the Receivables Podcast, host Adam Parks sits down with Porter Heath Morgan of Martin Golden Lyons Watts Morgan to explore a powerful idea: What if the real risk in AI isn’t automation itself, but how we choose to use it?

Heath Morgan believes the industry is at a crossroads. “The antidote to uncertainty isn’t certainty—it’s clarity,” he tells Adam Parks. As firms deploy AI across call centers, portals, and analytics platforms, clarity means understanding the ethics and legal obligations behind every algorithmic decision.

This conversation moves far beyond hype. It’s a reality check for leaders who want responsible, risk-aware AI adoption without sacrificing innovation.

The Memory Project: Exploring Humanity in the Age of AI

Beyond his work in compliance and technology, Porter Heath Morgan is also the author of the forthcoming book The Memory Project, a thought-provoking exploration of how artificial intelligence, consciousness, and legacy intersect.

Written as both a speculative and philosophical reflection, The Memory Project imagines a future where human writing has been banned, and AI governs memory, history, and even identity. The story examines what it means to preserve human thought in a digital world and challenges readers to consider how much of our “self” can survive when machines record and rewrite memory faster than we can experience it.

Morgan’s book mirrors the same themes discussed in this episode: ethical AI, accountability, and the tension between progress and humanity. It’s not just fiction but a framework for how we might navigate the coming era of cognitive automation and digital permanence.

AI Governance in Collections: Where Compliance Meets Technology

“If AI becomes part of your compliance process, who’s accountable—the code or the company?” — Porter Heath Morgan

That question defines the entire discussion. Heath reminds listeners that even when automation drives a decision, human oversight still carries the liability.

Adam Parks reflects that this tension mirrors the early internet era, when new tools reshaped every operational process before regulation caught up. Back then, data governance lagged behind digital adoption. Today, AI governance faces the same challenge, but the stakes are higher.

Key Takeaways:

  • Firms must treat AI compliance policies as extensions of existing consumer-protection laws.
  • Transparent logging and explainability are essential to avoid bias claims.
  • State-level legislation is coming faster than expected; federal alignment will follow.
  • Human-in-the-loop systems remain the gold standard for oversight.
  • AI governance is not a technology project—it’s a culture of accountability.

In short: compliance leaders can’t outsource judgment to code.

Data Privacy and Responsible AI Adoption

“If you’re not paying for the product, you are the product.” — Adam Parks

Adam’s warning hits home for every compliance officer evaluating free or “freemium” AI tools. Data privacy isn’t optional; it’s a fiduciary duty.

Adam Parks and Heath Morgan discuss how data privacy for AI enterprises must evolve from static policies to living frameworks that continuously audit inputs, prompts, and training data. For collection agencies handling sensitive consumer information, AI introduces new exposure points:

  • Chat interfaces that may inadvertently store personal data.
  • Predictive models built on third-party datasets with unclear provenance.
  • Vendor APIs that lack clear contractual boundaries around data use.

Reflections:

Compliance officers need to ask new questions: Who owns the prompt data? How long is it stored? Is it ever reused for model training?
It’s not paranoia, but rather due diligence. As Heath puts it, “clarity is the antidote to uncertainty.”
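One concrete control implied by those questions is redacting personal data before any prompt leaves the firm. Here is a minimal sketch in Python, assuming a few simple regex patterns; a production system would use a vetted PII-detection library and cover far more data formats than these.

```python
import re

# Hypothetical patterns for illustration only; real PII detection needs a vetted library.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace detected PII with typed placeholders before the prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

print(redact("Call 555-867-5309 about SSN 123-45-6789, reply to jo@example.com"))
# → Call [PHONE] about SSN [SSN], reply to [EMAIL]
```

Pairing a redaction step like this with contractual limits on vendor data retention addresses both ends of the exposure: what you send, and what the vendor keeps.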

The Future of Human Communication

“Our children won’t be judged on the output of their models—they’ll be judged on how they input the model.” — Porter Heath Morgan

This insight reframes AI literacy as the next essential compliance skill. Tomorrow’s professionals won’t simply use technology; they’ll also be responsible for teaching it how to behave.

Heath argues that schools, firms, and regulators must start defining “ethical input standards,” which include prompt quality, bias awareness, and interpretive reasoning. Adam adds that the same principles apply to collections: human empathy and communication can’t be fully automated, but AI can help measure and enhance them.

Reflection:

  • Ethical AI is about intent, not just outcomes.
  • Communication remains human; technology amplifies it rather than replacing it.
  • Compliance teams must evaluate tone, fairness, and accessibility in every AI-mediated interaction.

The future of human communication depends on designing AI that still listens.

Building a Responsible AI Roadmap for Collections

  • Audit every AI-enabled process for transparency and accountability.
  • Create an AI policy framework aligned with CFPB and state privacy laws.
  • Keep a human in the loop for all consumer-facing automation.
  • Partner with AI consultants experienced in financial-services compliance.
  • Train staff on prompt engineering and responsible input creation.
  • Regularly review vendor contracts for data usage clauses.
  • Establish an internal AI ethics committee to guide deployment.
  • Document every decision—governance loves a paper trail.

Industry Trends: Ethical Artificial Intelligence

Ethical AI has moved from theory to boardroom priority. The CFPB has hinted at examining algorithmic decisioning under existing UDAAP frameworks, while several states have introduced bills targeting automated bias. Industry conferences now dedicate entire tracks to AI governance and compliance in collections, proving that ethical frameworks are no longer optional, but the baseline for trust.

Forward-thinking firms are already integrating compliance officers into tech-selection teams, ensuring that innovation never outpaces accountability.

Key Moments from This Episode

00:00 – Introduction to Porter Heath Morgan and MGLWM
02:47 – Defining ethical artificial intelligence in collections
05:10 – Emotional manipulation and AI misuse
07:45 – Governance frameworks that protect consumers
13:00 – Teaching prompt literacy and cognitive reasoning
15:30 – Privacy and data ethics for AI enterprises
21:20 – The future of human communication
27:00 – Legal implications of AI authorship and bias
30:22 – Final thoughts and takeaways

FAQs on Ethical Artificial Intelligence

Q1: What is ethical artificial intelligence in collections?
A: It’s the responsible use of automation that ensures fairness, transparency, and consumer protection within regulated financial environments.

Q2: Why does AI governance matter for compliance teams?
A: Governance prevents bias, maintains accountability, and aligns AI processes with legal obligations, protecting both firms and consumers.

Q3: How can agencies achieve responsible, risk-aware AI adoption?
A: Start small, document every decision, and engage legal counsel familiar with both AI technology and debt-collection law.

Q4: What role does data privacy play in AI governance?
A: Data privacy sets the ethical boundary for how AI systems can access, process, and retain consumer information.

Q5: Will AI replace human collectors?
A: No. It will augment them, automating repetitive tasks so professionals can focus on empathy, negotiation, and compliance quality.

About Company

Martin Golden Lyons Watts Morgan

Martin Golden Lyons Watts Morgan is a national civil-litigation and creditors’ rights law firm representing creditors, collection agencies, and financial institutions in pre-litigation and appellate matters. The firm combines legal precision with modern compliance and technology strategy, helping clients implement automation and AI responsibly. With deep experience in regulated industries, the firm is recognized for guiding financial-services clients through emerging challenges in AI governance, data privacy, and regulatory compliance while maintaining operational integrity and consumer protection.

About The Guest

Porter Heath Morgan

Porter Heath Morgan, Partner at Martin Golden Lyons Watts Morgan, is a compliance strategist and attorney focused on the ethical adoption of technology in financial services. He advises creditors, collection agencies, and financial institutions on AI governance, data ethics, and responsible automation. With extensive experience in both legal practice and operational strategy, Heath helps organizations integrate innovation with regulatory compliance, building frameworks that ensure transparency, accountability, and consumer protection.

Related Roundtable Videos
