By Ashley Taylor, Abby Hylton, and Namrata Kang
I. Introduction
Consumer-facing businesses across the U.S. are increasingly incorporating artificial intelligence (AI) into
their decision-making processes and business models. In response, the Massachusetts attorney
general’s (AG) office issued an advisory on April 16, 2024, providing guidance on how existing consumer
protection, civil rights, and privacy laws apply to AI.1 Massachusetts joins the AGs of California,
Connecticut, Florida, Minnesota, and South Dakota, who have announced an increased focus on AI,
particularly with respect to marketing to consumers. The advisory warns that despite AI’s “tremendous
potential benefits to society” and “exciting opportunities to boost efficiencies and cost-savings in the
marketplace,” it nevertheless poses risks, such as lack of transparency, bias, and threats to privacy. The
advisory emphasizes that (1) the novelty and complexity of AI systems do not exempt them from
applicable law and (2) existing law applies to AI just as it does in any other context.
II. Relevant Laws Implicated
A. Consumer Protection Laws
First, businesses should take care to ensure that AI technology complies with applicable consumer
protection law, including the Massachusetts Consumer Protection Law.2 Indeed, legal risk is high where
consumers are not aware of AI usage and cannot “meaningfully opt out of most AI use cases.”
Furthermore, businesses that use AI should not misrepresent audio or video content generated by AI,
such as “deepfakes” and “voice cloning,” to deceive consumers about a product or service being offered
to them.
Developers that market AI tools should also ensure that an AI system functions as they claim it does,
particularly where they lack knowledge of or control over how the AI generates its results. It could be an
unfair or deceptive practice for developers to make false claims about AI systems, including the systems’
quality, value, or usability. AI systems must be usable for their advertised purpose, and developers
should take care to accurately represent the reliability and condition of the AI system.
1 Attorney General Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights,
and Data Privacy Laws to Artificial Intelligence, MASS. OFFICE OF THE ATT’Y GEN. (April 16, 2024), available at
https://www.mass.gov/news/ag-campbell-issues-advisory-providing-guidance-on-how-state-consumer-protection-
and-other-laws-apply-to-artificial-intelligence.
2 Mass. Gen. Laws Chapter 93A.
Finally, the advisory notes that the Massachusetts AG’s office can also enforce federal consumer
protection laws applicable to AI.3 For example, federal law requires covered creditors to provide
consumers with specific and accurate reasons for the denial of their loan applications, even when the
creditor uses AI models as part of its decision-making process.
B. Anti-Discrimination
AI developers and users should also be cautious with respect to civil rights laws. Where AI-based
systems are created using discriminatory inputs or generate outputs that reflect bias toward a protected
class, the Massachusetts AG warns that state and federal civil rights laws could apply. Businesses
should work to ensure that their use of AI does not have discriminatory effects, such as disfavoring or
disadvantaging persons or groups based on legally protected characteristics like race or gender.
C. Data Privacy
Finally, the advisory states that AI systems must comply with Massachusetts data privacy laws and
regulations, including the Commonwealth’s Standards for the Protection of Personal Information of
Residents of the Commonwealth.4 AI developers, suppliers, and users must safeguard the personal data
their systems use and must comply with breach notification requirements as well.
III. Implications for Debt Collectors
Debt collectors5 and other entities in the lending space need to be cognizant of the potential legal risks
associated with the use of AI. It is essential for them to ensure that their AI usage aligns with consumer
protection laws. AI, for instance, can be a powerful tool for assessing a borrower’s ability to pay before
issuing a loan or for structuring payment plans based on demographic data. As a matter of risk
management, the correct application of AI can make data analysis more efficient through automation.
However, this comes with its own set of legal risks. If the underlying data is biased, AI algorithms could
inadvertently perpetuate or amplify those biases, potentially leading to discrimination or unjust treatment.
If an AI system disadvantages legally protected groups, allegations of discrimination could arise. This risk
is particularly pertinent given the enforcement history of the Massachusetts AG, among other regulators,
against companies suspected of racial and income-based discrimination.6 The inherent lack of
transparency in AI systems adds another layer of complication. This opacity can make it challenging for
debt collectors to explain their decision-making process to consumers or regulators when necessary.
3 The advisory specifically mentions that the FTC “has taken the position that deceptive or misleading
claims about the capabilities of an AI system, and the sale or use of AI systems that cause harm to consumers”
violate the FTC Act. The FTC consistently collaborates with state AGs on various regulatory matters, and the
regulation of AI will be no exception.
4 Mass. Gen. Laws Chapter 93H.
5 Massachusetts collection laws expressly define “debt collector” to include creditors collecting their own
debts. See 940 CMR 7.03.
6 See AG Healey Reaches $600,000 Settlement with Real Estate Company Over Allegations of Racial and
Income Based Discrimination, MASS. OFFICE OF THE ATT’Y GEN. (March 22, 2019), available at
https://www.mass.gov/news/ag-healey-reaches-600000-settlement-with-real-estate-company-over-allegations-of-
racial-and-income-based-discrimination; see also AG Campbell Announces Settlement with Auto
Dealership Over Alleged Pricing Discrimination for Add-On Products, MASS. OFFICE OF THE ATT’Y GEN. (Jan. 31,
AI systems trained on flawed or incomplete data could make incorrect or unfair decisions about whom to
target for debt collection and how assertive collection tactics should be. The use of AI could also lead to
an increase in automated communications, which could result in automated harassment of debtors.
Furthermore, AI’s reliance on vast amounts of data raises privacy and data protection concerns, a critical
consideration when handling sensitive financial information. Therefore, debt collectors must ensure that
robust privacy and consumer protection measures are in place when assessing or implementing AI
systems.
IV. Conclusion
While Massachusetts encourages AI innovation that complies with the law, it also cautions companies
that develop and use AI about the legal risks associated with the technology. Debt collection companies,
routinely scrutinized for their practices, should be particularly attuned to the legal risk associated with
such programs. Further, as technology continues to evolve, companies should follow developments in the
legal landscape, including amendments to regulatory advisory opinions, state law, and federal law.