
AI Agents Calling Third Parties

By Sara Woggerman and Research Assistant

At a recent Research Assistant Peer Group call, we discussed the risks associated with AI agents making outbound calls, especially to third parties such as spouses and employers.

The conversation began when a member shared an interaction with an AI vendor who was enthusiastic about using AI agents to run outbound campaigns that intentionally call spouses. The group reacted immediately and negatively. The level of risk felt palpable.

Under the Fair Debt Collection Practices Act (FDCPA), a collector may communicate in connection with a debt with a spouse who is not obligated on the debt, but the contact is limited to communication only. If the spouse is not a responsible party, such as a co-signer or someone liable under the doctrine of necessities, they cannot be asked to pay the debt, threatened with legal action, or treated as a debtor. The conversation is strictly limited to information about the debt.

This raises a critical question: can an AI agent reliably distinguish between a spouse who is legally responsible and one who is merely a third party? Add to that the variation in state laws governing spousal communication, and the risk multiplies quickly.

Now layer in another issue. The call is being made using an artificial voice. Are these calls compliant with the Telephone Consumer Protection Act (TCPA), which places consent requirements on outbound calls made using artificial or prerecorded voices? If not, the exposure is significant.

If an AI-driven outbound campaign violates the FDCPA, the TCPA, or both, does this open the door to class action litigation? The answer is: absolutely.

Outbound calls are inherently more difficult than inbound calls for several reasons:

  • You are interrupting someone who was not expecting the call.
  • Right Party Contact (RPC) is harder to establish, as the called party is often less cooperative.
  • Outbound calls are more likely to reach third parties than inbound calls.
  • RPC verification questions often must be asked differently on outbound calls.

With inbound calls, consumers initiate the contact and are generally prepared to engage. They are more willing to answer questions such as:

  • “Who am I speaking with?”
  • “What is your current address?”
  • “What is your date of birth?”
  • “What are the last four digits of your Social Security number?”

Outbound calls are very different. If you ask these same questions without first identifying who you are and why you are calling, most people will hang up. Yet the collector is in a bind: you cannot disclose who you are unless the called party asks, and you cannot disclose why you are calling until RPC is verified. Consumers are far less likely to cooperate with verification questions phrased this way.
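The sequencing constraint above can be sketched as a small state guard: identity may be disclosed only once the called party asks, and the purpose of the call stays undisclosed until RPC is verified. This is a hypothetical illustration of the ordering logic, not any vendor's actual API; all class and method names here are invented for the example.

```python
from enum import Enum, auto

class CallState(Enum):
    RPC_PENDING = auto()   # still verifying we reached the right party
    RPC_VERIFIED = auto()  # purpose of the call may now be disclosed

class OutboundCallGuard:
    """Tracks what an agent may disclose at each stage of an outbound call."""

    def __init__(self) -> None:
        self.state = CallState.RPC_PENDING
        self.identity_requested = False

    def party_asked_who_is_calling(self) -> None:
        # Identity may be disclosed only after the called party asks.
        self.identity_requested = True

    def rpc_confirmed(self) -> None:
        # Right Party Contact established; purpose may now be shared.
        self.state = CallState.RPC_VERIFIED

    def may_disclose_identity(self) -> bool:
        return self.identity_requested

    def may_disclose_purpose(self) -> bool:
        # The reason for the call (the debt) stays undisclosed until RPC.
        return self.state is CallState.RPC_VERIFIED
```

An AI agent's dialogue layer could consult a guard like this before every utterance; the point is that the permissions are asymmetric and order-dependent, which is exactly where a naively scripted agent can go wrong.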

On outbound calls, offering a small piece of information often puts a person at ease: part of their address, a portion of the date of birth, and so on. The consumer then senses that you may actually know them and is more likely to provide the full information when asked. If they are reluctant to verify, stating that you are serious about protecting their privacy can help too.

However, if you reach a third party, the call changes entirely. Under the FDCPA, collectors can only request "location information" from third parties: the consumer's address, phone number, and name of employer. That's it! If the question is not phrased in a way that makes the third party feel comfortable, or if the AI agent fails to recognize when the third party has provided new or corrected information that could justify a follow-up call, the opportunity may be lost. This may be the only time we can communicate with this third party.
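That "location information" limit amounts to a whitelist: on a third-party contact, only three fields may be requested, and everything else is off-limits. A minimal sketch of that guardrail, using invented field and function names purely for illustration:

```python
# Fields a collector may request from a third party under the FDCPA's
# "location information" limit, as described above: the consumer's
# address, phone number, and name of employer. Nothing else.
THIRD_PARTY_ALLOWED_REQUESTS = {"address", "phone_number", "employer_name"}

def allowed_to_request(field: str, party: str) -> bool:
    """Return True if the agent may ask `party` for `field`.

    `party` is either "consumer" (RPC already verified) or "third_party".
    A hypothetical guardrail sketch, not legal advice or a real API.
    """
    if party == "third_party":
        return field in THIRD_PARTY_ALLOWED_REQUESTS
    # A verified consumer may be asked normal verification questions.
    return True
```

The design point is that the restriction is enforced by the calling party's status, not by the field alone: the same question ("What are the last four of your Social?") is routine with a verified consumer and prohibited with a third party.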

Make sure to properly vet your AI vendors from the beginning. How well do they understand the laws and regulations our industry must follow? What protections do they offer? How do they program their agents? How do their agents learn? What are their policies on the FDCPA and TCPA, among others? Protect yourself and think outside the box.

For these reasons, inbound calls paired with AI agents are the safest and most practical way to begin integrating AI voice technology into operations. Inbound interactions carry less risk, allow for easier verification, and result in fewer frustrated consumers and third parties.

Hey, wait! There's more where this came from.

With Research Assistant (from insideARM), industry compliance is our expertise. Our weekly peer group roundtable is the perfect place to ask a question and get timely advice from industry colleagues who are facing the same challenges you are. Try it on for size with our 1-month free trial.

Join us today.
