Federalist Solutions to AI Regulation

Policy analysts often worry about the possibility of states stifling AI innovation by passing a patchwork of complex and even conflicting legislation. By way of example, SB 1047, a California bill dictating the development of leading AI models, raised concerns that a couple dozen Golden State legislators could meaningfully bend the AI development curve without input from the rest of the nation. Governor Gavin Newsom vetoed the legislation, but fears remain that a handful of states could pass legislation with significant extraterritorial effects—altering the pace and nature of AI research and development. Whatever degree of government oversight and regulation is warranted, critics argue, ought to be undertaken by Congress.

Much less has been written about the role states can, should, and, in some cases, are playing in fostering AI adoption. In contrast to efforts to regulate the development of AI, efforts to shape and accelerate adoption of AI are neutral or even complementary—one state’s AI incubator program, for example, does not inhibit another state’s effort to do the same (nor the ability of the federal government to follow suit). This is the sweet spot for state involvement—not dictating AI development but rather shaping its diffusion.

A brief counterfactual analysis confirms the utility of this division of regulatory authority. Consider how state-by-state regulation of internal combustion engine development might have unfolded in the last century. Michigan might have established minimal restrictions, Tennessee could have implemented specific safety protocols, and Kansas might have required certain environmental standards. This fragmented approach would have made the Model T, for instance, more accessible in some regions than others. Manufacturers likely would have avoided certain jurisdictions where compliance costs were prohibitively high. Affluent Americans could have circumvented these inconsistencies by purchasing vehicles from less regulated states. However, most citizens would have remained in the past—traveling by horse while their rich neighbors honked at them—longer than necessary. The conclusion is clear: states should not control the development of innovative technologies because such decisions require national deliberation. Nevertheless, states fulfill their proper regulatory function by implementing policies that reflect their residents’ preferences regarding how rapidly these technologies should be adopted within their boundaries.

A comparison of AI proposals currently pending before state legislatures helps clarify this critical difference between regulating the development of a general-purpose technology and shaping its adoption. The New York Artificial Intelligence Consumer Protection Act, introduced in the state assembly and senate, serves as an example of the sort of regulation that might alter the trajectory of AI development. Among numerous other provisions, the bill would require AI developers to document how they plan to mitigate harms posed by their systems (subject to audit by the state attorney general), create a risk management policy, and document several technical aspects of their development process.

Imagine this act multiplied by 50. Developers would find themselves in an endless compliance maze: one day bending to the expectations of New York’s attorney general, the next to the whims of Washington’s Department of Commerce, and a week or so later to Idaho’s AI Regulation Panel. This task would be made all the more difficult given that states already struggle to agree on basic regulatory concepts, such as how to define artificial intelligence.

Large AI firms could likely handle these burdens, but startups may flounder. The net result is a less competitive AI ecosystem. This reality may partially explain why Apple, Alphabet, Microsoft, and Meta have swallowed up a number of AI companies over the past ten years—founders looking at the regulatory horizon likely realize that it is easier to exit than wait around for 50 auditors to come kick their tires.

We’ve seen this dynamic play out over a longer term in a related context. Researchers at the Information Technology and Innovation Foundation calculated that the patchwork approach to privacy laws could impose more than $1 trillion in compliance costs over a decade, with small businesses bearing a fifth of those costs. There’s no reason to emulate this pattern in the AI context.

States should instead stay in their regulatory lane, implementing community preferences for local technology adoption rather than dictating broader terms of technological development. Utah stands out as an example. The Utah Office of AI Policy does not impose any regulations with extraterritorial ramifications. Instead, it invites AI companies to partner with the state to develop a bespoke regulatory agreement. Any AI entity that serves Utah customers may work with the Office to develop an agreement that may include regulatory exemptions, capped penalties, cure periods to remedy any alleged violations of applicable regulations, and specifications as to which regulations do apply and how.

This variant of a regulatory sandbox avoids the potential overreach of a one-size-fits-all regulation, while still affording Utahns a meaningful opportunity to accelerate the diffusion of AI tools across their state. What’s more, this scheme avoids the pitfalls of SB 1047 look-alikes because it does not pertain to the development of the technology, just its application. This dynamic regulatory approach allows the State to deliberately think through how and when it wants to help spread certain AI tools. The Spark Act, pending before the Washington State Legislature, likewise exemplifies an adoption-oriented bill. If passed, Washington would partner with private AI companies to oversee the creation of AI tools targeting pressing matters of public policy, such as the detection of wildfires.

States can serve as laboratories for innovation by deliberately incorporating AI into their own operations. Take Maryland’s plans to rely on AI in managing its road network. Rather than mandating private adoption, Maryland demonstrates the technology’s utility by using it to identify common bottlenecks, propose new traffic flows, and generally help residents get from A to B in a safer and faster manner. This approach allows residents to witness AI’s practical benefits before deciding whether to embrace similar tools in their businesses or communities. This example reveals how states can shape adoption through demonstration rather than dictation—creating a pull rather than push dynamic.

States also play a crucial role when it comes to preparing their residents for a novel technological wave. Oklahoma’s partnership with Google to provide 10,000 residents with AI training will reduce the odds that residents fear AI and instead equip them to harness it. By ensuring diverse participation in the AI economy, Oklahoma may avoid the pitfalls of previous technological transformations that exacerbated existing inequalities—with some communities experiencing a brighter future several years, if not decades, before their neighbors. This program is another instance in which states can shape AI adoption without dictating AI development. Massachusetts provides yet another example: its AI Hub promises to empower residents to thrive in the Age of AI via workforce training opportunities.

These positive examples show that the federalist system, with its distinct spheres of authority, offers a compelling framework for AI governance. Just as our Founders envisioned a division between national and local concerns, so too should we partition responsibility for AI. Development requires the uniform hand of federal oversight, while adoption benefits from the diverse approaches that emerge from state-level experimentation. This distinction serves both innovation and democracy by allowing breakthrough technologies to emerge under consistent national standards while preserving communities’ right to determine how quickly these innovations reshape their daily lives.

Though some communities may wish to avoid the turbulence associated with incorporating any new technology—let alone one as novel as AI—into their economies, cultures, and systems, that choice is likely off the table. More than 40 percent of the working-age population already uses AI to some degree. Of that user base, 33 percent use AI nearly every day. That figure will likely increase as AI tools continue to advance and address an ever-greater set of tasks. Americans may also find that AI literacy—knowing how to use AI tools as well as the risks and benefits of those tools—is an economic necessity. Fortune 500 companies have leaned hard into AI and have expressed an interest in hiring AI-savvy employees. Though this seeming inevitability makes AI appear to be a force beyond our control, states remain the actors best suited to directing AI toward the common good (as defined by each state’s community) while leaving other states free to do the same.

In sum, the coming decade will likely witness an acceleration in the spread of AI capabilities that rivals or exceeds the rapid diffusion of the Internet in the 1990s. Then, as now, the key question is not whether to adopt these technologies but how to do so in a manner that respects community values while maximizing benefits. States that thoughtfully shape adoption—creating regulatory sandboxes, demonstrating practical applications, addressing equity concerns, and building human capital—will likely see their residents thrive in this new era. Those that overreach into development questions may unintentionally hamper innovation, while those that neglect adoption entirely risk watching from the sidelines as the future unfolds without them.

The path forward requires respecting this division of regulatory labor. Congress should establish clear, preemptive guidelines for AI development while empowering states to implement adoption strategies that reflect their unique circumstances and values. This balanced approach preserves both technological momentum and democratic choice. It would ensure that Americans collectively shape AI rather than merely being shaped by it.

Deference to AI?

As courts and administrative agencies encounter uncertainty posed by the post-Chevron era, a few foundational principles remain in place and serve as guides through doctrinal disruption. For one, courts have the authority and capacity to say what the law is. Likewise, if Congress unambiguously mandates a specific agency action, including the means to perform that act, jurists and scholars alike agree that Congress’s mandate must be followed. Absent that statutory clarity, the deference afforded to an agency’s interpretation of ambiguous language varies in light of several factors. Among legal scholars, the general thinking is that if Congress granted an agency the authority to determine the meaning of an ambiguous statute, then the judiciary should not infringe on Congress’s directive. Congress, though, rarely specifies the exact procedures agencies should use in interpreting and refining statutes. This ambiguity creates a contest for interpretative authority between courts and agencies. Whether agencies’ reliance on artificial intelligence (AI) to inform their actions and interpretations should give them an upper hand in this skirmish has so far remained an open question.

Delay in answering this question is understandable, and raising it now may seem to put the trailer before the truck. Agencies have yet to rely on AI to conduct the very important task of statutory and regulatory interpretation and drafting. My argument is that they eventually will. The combination of advances in AI capacities and AI adoption suggests that AI interpretations will become an agency norm sooner rather than later. Bruce Schneier and Nathan Sanders of Harvard contend that existing AI tools can already generate proposed regulatory changes, analyze the likely effects of that regulation, and develop strategies to increase the odds of its promulgation. Agency officials keen on accelerating their regulatory agendas would be hard-pressed not to turn to such useful tools.

There’s a case to be made that increased use of AI would generate “better” interpretations by agencies. Generative AI tools may lend agency interpretations a gloss of neutrality—a boon in a time of hyper-partisanship. On the other hand, reliance on AI tools may undermine the very purpose of delegation to experts who have acquired certain wisdom through experience administering the law. This essay argues the latter is the case.

The flaws associated with AI decision-making render it unworthy of any judicial deference when used by agencies to interpret statutes or regulations. AI interpretations detract from rather than augment any basis for respect for an agency’s understanding of a law or regulation. Before agencies test the limits of judicial tolerance for interpretations of the law predominantly informed by AI, it is important to establish a “Reality Doctrine”—AI interpretations merit no special weight and, if anything, should be subject to heightened judicial scrutiny.

Recent disruptions to the status quo in administrative law combined with ongoing advances in AI make now the right moment to address the idea of agency use of AI to interpret the law. The Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo effectively decided the interpretative war in favor of courts, while leaving some space for agencies to win individual interpretative battles via the power to persuade. Loper Bright was not revolutionary so much as a rediscovery of a prior era of administrative law. Though the Court overruled some previously foundational aspects of administrative law, it left in place more established principles. More specifically, the Court recognized that the judiciary has long accorded due respect to Executive Branch interpretations that reflected the lessons gleaned from the “practice of the government.” In this new era, agencies have better odds of earning judicial respect if they adopt and adhere to processes that allow for well-reasoned judgments and incorporate documented findings from prior administration of the law. A new tool—generative AI—may aid not only in identifying those findings but also in steering agency decisions and interpretations. What persuasive power, then, should be afforded to an agency interpretation generated or informed by AI?

Judicial deference to agency actions and interpretations turns on a finite list of theories related to the processes used by that agency. Deference can also turn on whether the question is substantive or procedural. Courts tend to be more deferential to an agency’s choice as to how to pursue an end that is within its substantive authority. With few exceptions, the means, rather than the ends, determine the persuasive power of an agency’s interpretation. A flawed policy will receive the same deference as an ideal policy so long as agency officials follow sound, deliberate procedures. This approach reflects but does not penalize the fact that bureaucrats are humans, not supercomputers.

Courts have tried to show deference to humanity in other ways, as well. Congressional intent, theoretically an aggregation of the intent of its very human members, is one theory of deference. Accountability to very human voters is yet another theory. Agency expertise, derived from the judgment, experience, and thoroughness of very human bureaucrats, is one more theory. The consistency and validity of decisions made by those bureaucrats may also affect the level of deference afforded to agency actions. What level of deference each of these theories receives and how courts rank them have changed over time, but the list itself has remained fairly fixed when referred to by scholars and relied on by jurists.

Those theories do not explicitly cover how much persuasive power, if any, should be afforded to agency interpretations made or largely informed by AI. How those deference theories apply to AI interpretations has likewise been undertheorized and understudied. Much of the scholarship on this topic has focused on less substantive issues, such as discrimination that may occur as a result of government use of biased AI systems. Sooner rather than later, though, courts will have to fill in those legal lacunae.

Agencies have already started incorporating AI into more processes and decisions, but legal scholars have not given the issue enough thought. Even before OpenAI released ChatGPT, agencies extensively relied on AI. Little evidence suggests this trend will stop or even slow. Officials in other jurisdictions, including certain states, have already started using AI to draft laws and author judicial opinions. What’s more, actors within the US government have called for increasingly substantive uses of AI by regulatory agencies. Some federal judges have already started to challenge opposition to the use of AI in drafting judicial decisions. The lid has come off AI’s Pandora’s box.

Pressure will continue to mount on agencies to join other public entities, foreign and domestic, in the incorporation of AI. These jurisdictions will pave the way in showing how AI can alleviate administrative burdens and further policy aims. They may also render the public more tolerant of an expanded role of AI in government work, as is being seen in some municipalities around the world. Congressional proposals and executive orders to ease and accelerate agency use of AI will likewise raise the odds of agencies exploring how to use AI. Shalanda Young, the director of the Office of Management and Budget, recently directed agencies to “increase their capacity to responsibly adopt AI, including generative AI, and take steps to enable sharing and reuse of AI models, code, and data.” For now, though, current and planned uses of AI by agencies are fairly insignificant. OMB itself, for example, regards AI as “a helpful tool for modernizing agency operations and improving Federal Government service to the public.”

Rather than gamble that agencies will remain immune from pressure to use AI in increasingly substantive legal ways, the better bet is that agencies will become ever more reliant on AI to take on even the most influential decisions, absent legal or policy reasons to avoid such uses of AI. Three key existing theories of deference apply to agency interpretations generated or substantively informed by AI, but based on the nature of AI, one must conclude that such interpretations hold no persuasive power.

The Reality Doctrine, on the other hand, recognizes that AI-generated interpretations lack all of the attributes that have traditionally justified judicial deference. In turn, the Doctrine demands de novo review of any AI-generated interpretation and regulation.

Bruce Schneier has noted that “when an AI takes over a human task, the task changes.” Actions taken by agencies to interpret statutes and regulations and to issue new regulations and guidance are no exception. Current AI models complete tasks in a different manner and commit errors differently than humans do. These differences have legal significance. Respect accorded to agency decisions reflects the very human process of acquiring wisdom through the trial and error of administering the law.

Absent specific direction by Congress, the theories of judicial deference to agency actions generally do not mandate deference to actions exclusively or predominantly performed by AI. Use of AI models by agencies decreases the odds of those agencies being held accountable by the people, one of the central rationales for deferring to agencies over courts. AI models used by government officials are often privately owned and designed, which limits what the public can learn about those models. Agency officials may find AI a useful scapegoat that misleads the public as to who bears responsibility for certain actions. Finally, AI works in a way that makes it difficult, if not impossible, to determine how and why it produces a certain result—a reality that exacerbates accountability concerns, may result in a head-on collision with the Administrative Procedure Act, and might undermine the rule of law.

AI models cannot apply “informed judgment,” in the sense of relying on years of expertise and experience to identify the proper regulatory response. This limitation arises from two key factors. First, AI models are only as good as the data they are trained on, and there is no guarantee that even models trained on specific information will be error-free. Second, AI models process information and make decisions in a different way than humans do.

Decisions made or significantly informed by AI also lack consistency. Unexplained and unpredictable changes characteristic of AI models conflict with the rule of law. The public may not receive adequate fair notice if regulations change at unexpected times and for unclear reasons.

Rather than wait for the moment of agency reliance on AI to interpret laws and regulations, scholars should clarify now how such AI interpretations ought to be treated by courts. The attention to administrative law following landmark Supreme Court decisions increases this sense of urgency. For a short while, administrative law is headline news. That attention should not be squandered but should instead be channeled into further efforts to clarify the law and reinforce foundational principles of administrative and constitutional law.

This short essay does not answer all the questions raised by AI interpretations. For instance, what is the line between de minimis uses of AI to inform interpretations versus substantial reliance on AI to develop those interpretations? These and other questions merit further study. The difficulties associated with answering those questions, though, merit a default adoption of the Reality Doctrine. Unchecked encouragement of agencies using AI is misguided and irresponsible unless and until all legal actors understand how those uses fit into our broader systems of administration and judicial review.

The Neo-Brandeisians Are Half Right

The Neo-Brandeisian conception of antitrust touted by Federal Trade Commission Chair Lina Khan and others can be boiled down to “big business is bad.” Their response, in short, is to develop a complex regulatory regime that prevents the ills associated with that bigness.

This approach suffers from at least two flaws: first, it assumes that regulatory costs will hit the biggest corporations the hardest; and, second, and relatedly, it neglects to consider that a larger regulatory state is also a threat to liberty and our constitutional order. Neo-Brandeisians need not give up on their strategy—they need only recognize that fighting corporate giants also requires slaying government behemoths.

A few examples illustrate the Neo-Brandeisians’ flawed focus on bigness. A soon-to-be-proposed rule on “commercial surveillance” would subject corporations to a litany of fine-sounding requirements. If and when finalized, the rule may force corporations to adhere to data collection standards, consumer consent requirements, and data security obligations. These requirements, though, would add to the preexisting patchwork of state privacy laws. Small businesses already struggle to comply with such varying standards. The addition of even more regulations will only further burden the very businesses Khan is counting on to increase competition.

Similarly, the proposed trade regulation rule on unfair or deceptive fees, when finalized, will tackle common consumer concerns around “junk fees.” As proposed, the ambiguous language in the rule renders compliance trickier and, by extension, more expensive. The latest draft of the rule includes a ban on “excessive” or “worthless” fees. Even the FTC acknowledges that such vague language would result in regulatory uncertainty and greater compliance costs. While such proposals may sound good in a law review article or op-ed, in practice, these and other regulations impose comparatively lighter costs on the big businesses they supposedly target.

American firms pay upwards of $300 billion a year to comply with the latest rules and regulations. Some firms, though, pay far more than others. Understanding the extent of the disparities in compliance costs by firm size requires thinking through how firms actually go about complying with the latest government mandate. More than 90 percent of compliance costs are tied to labor. An accurate assessment of a regulation’s compliance costs, then, should turn on analysis of the labor hours and wages required to toe the new line. Based on that framework, economists estimate that firms with around 500 employees incur nearly 50 percent more in compliance costs than smaller firms (fewer than 50 employees) and pay almost 20 percent more than large firms (more than 500 employees). If regulators took this labor-focused approach when analyzing proposed rules, the disparity might be lessened. The approach should also cause Neo-Brandeisians to pause before rushing ahead with regulations meant to bring down corporate giants that, once implemented, only serve to entrench and expand their bigness.

A more expansive administrative state benefits big businesses that can afford to capture staffers and submit comment after comment in rulemaking processes. A look back at the informal meetings held by EPA staffers from 1994 to 2009 reveals that industry groups were almost always the other attendees—in comparison to public interest outfits, industry groups tallied 170 times more informal communications with the agency. In addition to holding a near monopoly over staffers’ time, industry groups fill up an agency’s record in the rulemaking process by submitting the vast majority of comments during notice and comment periods. When the EPA sought input from the public on an air pollutants rule, industry groups filled the information void—submitting more than 80 percent of the comments received by the agency.

Increased regulation and, consequently, a larger administrative state undermine the democratic ideals that Neo-Brandeisians allegedly seek to advance. Congress alone, per Alexander Hamilton, must “prescribe[] the rules by which the duties and rights of every citizen are to be regulated.” Though Congress is far from a perfect institution, it is the institution the Framers intended to wield legislative power because its members are directly accountable to the people. Administrative agencies, in stark contrast, cannot claim to operate with the elective consent of the people.

What’s the point of encouraging people to vote and lowering barriers to the ballot if the people’s representatives are simply going to hand their legislative powers to unaccountable bureaucrats?

Neo-Brandeisians have avoided answering that question—opting instead to prioritize their policy preferences over their democratic principles. The noncompete ban recently finalized by the FTC will impact 30 million contracts and affect some of the most important industries in our economy. It’s true that the FTC afforded the public a couple of comment windows to make their voices heard, but those comment windows fall far short of the kind of participation, transparency, and accountability that should be at the foundation of our constitutional order. What’s more, that rulemaking effort—including the agency costs to draft it, refine it, and finalize it as well as the regulatory ambiguity it has already sparked—is likely all for naught. Legal scholars anticipate that the legal challenges filed immediately after the finalization of the rule will succeed. In fact, administrative and antitrust lawyers list several compelling reasons, including the Major Questions Doctrine, why the rule will not stand.

A few changes to the Neo-Brandeisian approach could avoid such harmful outcomes. The Neo-Brandeisian impulse to preserve individual liberty by fighting corporate bigness aligns with the Founders’ fear of too much power concentrating in any set of hands. Their solution—to concentrate power in the hands of unelected bureaucrats—undermines their good intentions. An alternative solution would be threefold: first, actively consider how compliance costs will affect small, medium, and large businesses (and advance only those regulations that will not entrench the dominance of corporate giants); second, eliminate existing regulations and alter current processes that benefit those giants; and third, acknowledge that a smaller government is a more accountable government by restoring Congress’s intended role as the sole legislative actor.

Justice Louis Brandeis would applaud the Neo-Brandeisians for realizing that concentrated power in the hands of any actor is problematic. He long ago warned that “experience should teach us to be most on our guard to protect liberty when the Government’s purposes are beneficent.” Adherence to Brandeis’s guidance would not only safeguard liberty but also align with the original understanding of the FTC’s purpose and function.

The FTC was never intended to be a rival to Congress. George Rublee, one of the first FTC Commissioners and the author of Section 5 of the FTC Act, regarded the agency as having significant yet finite powers. Rublee conceived of the FTC as having “broad powers of investigation and report and facilities for making an expert and impartial study of such questions” relating to unfair methods of competition. Coming from an “expert and impartial commission,” the results of those studies would “have weight and in this way progress might be made in bringing our statutory and case law into harmony with economic law.”

Today’s FTC has accumulated much more power than Rublee or anyone in 1914 could have anticipated. It can and should return to operating more as an impartial investigator and informant than as a rival policymaker. A return to this conception of the FTC via a clarifying amendment of the FTC Act would comport with its original purpose and increase the odds of the agency influencing responsive congressional activity. The alternative—continued efforts by the FTC to invade Congress’s legislative realm—is untenable.

Our constitutional order is not efficient. It was not intended to be. Difficult decisions about how to regulate the most important industries must be left to the people and their representatives. Few question that Khan has good intentions, but those intentions do not justify the FTC operating as a philosopher agency.
