This two-part series examines how enterprise-grade artificial intelligence is reshaping commercial litigation, focusing on how existing legal doctrines must evolve when AI model performance is at the center of a dispute.
Part one explained why disputes over AI performance strain traditional legal doctrines. Part two turns to how those pressures manifest in specific commercial claims—fraud, warranty, indemnity, and related theories—and what litigators should expect when prosecuting or defending them.
Fraud claims continue to be useful tools for litigants seeking damages that would normally be recoverable in breach of contract or breach of warranty claims, while avoiding contractual limitations on the recoverability of consequential damages and fee-shifting provisions. For this reason, and for those outlined below, we comment on how fraud claims are likely to be asserted in connection with enterprise-grade AI models.
At their core, fraud claims require misrepresentations of material fact made with knowledge of their falsity or with reckless disregard for their truth.¹ With respect to enterprise applications of AI, there are two key considerations.
The first involves intent. Fraud is an intentional tort because it requires knowledge or knowing disregard of a statement’s falsity. AI’s black box characteristics directly implicate the knowledge element of fraud claims—it is not difficult to imagine litigation over underperforming AI models centering on defensive arguments that the model proprietor did not or could not know of black-boxed issues affecting model performance. The black box nature of AI will, in turn, affect the issue of intent: What does intent mean when the party responsible for operating the model cannot say that they know exactly how the model generates its outputs?
The second consideration involves the nature of the representations made about the AI model. Proponents of enterprise-grade AI models are not reluctant to make claims about how their models can revolutionize workflows, speed up work processes, or exponentially increase revenue. If these representations amount to more than just marketing claims, how will a model’s compliance with those representations be evaluated (a question similar to those affecting warranty claims)? These questions are qualitative, asking whether a representation is material and factual and whether the model’s performance makes it false.
Finally, we should note the potential overlap between theories of fraud and theories of breach of contract or warranty. In one sense, all three claims can be built on the same theory: A business implemented an AI product or service, and the implementation consultant and/or the model proprietor promised (contract), warranted (warranty), or represented (fraud) that the model would provide some value or perform some task that it failed to perform as promised (breach). Where a contract exists, the economic loss rule would relegate fraud claims to backup theories behind the contractual ones.² But fraudulent inducement claims have been asserted against sellers of computer software for decades.³ Given the aggressive marketing around the benefits AI can provide to enterprises, there is every reason to expect that fraudulent inducement claims will continue to be made in this context. Thus, we provide some analysis of what those claims might look like.⁴
Fraud requires intent; that is, the actionable statements must have been made with knowledge of their falsity or with reckless disregard for their truth.⁵ But what does this mean vis-à-vis the black box problem? How does one go about proving that an implementation consultant or model proprietor knew that the model would not perform as promised when, to at least some extent, that proprietor cannot be said to know exactly how their model functions? One commentator perfectly articulated this issue:
It may be impossible to tell how an AI that has internalized massive amounts of data is making its decisions. For example, AI that relies on machine-learning algorithms, such as deep neural networks, can be as difficult to understand as the human brain. There is no straightforward way to map out the decision-making process of these complex networks of artificial neurons. Other machine-learning algorithms are capable of finding geometric patterns in higher dimensional space, which humans cannot visualize. Put simply, this means that it may not be possible to truly understand how a trained AI program is arriving at its decisions or predictions.
The implications of this inability to understand the decision-making process of AI are profound for intent and causation tests, which rely on evidence of human behavior to satisfy them. These tests rely on the ability to find facts as to what is foreseeable, what is causally related, what is planned or expected, and even what a person is thinking or knows. Humans can be interviewed or cross-examined; they leave behind trails of evidence such as e-mails, letters, and memos that help answer questions of intent and causation; and we can draw on heuristics to help understand and interpret their conduct. If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so. The AI’s thought process may be based on patterns that we as humans cannot perceive, which means understanding the AI may be akin to understanding another highly intelligent species — one with entirely different senses and powers of perception. This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make.⁶
A plausible defense here might be to claim that fraud requires a false statement built on willfulness or an intent to deceive, and that this type of intent would typically be lacking from well-meaning model developers who simply want to sell their products and services.⁷ But statements made recklessly are also actionable.⁸ And while simple negligence in making a statement is not enough to be fraudulent, conscious disregard for the truth is sufficient.⁹ Will the fact that the black box nature of AI is known to model proprietors be sufficient to constitute recklessness in the event that the model does not perform as intended? How is recklessness supposed to be measured when model drift may change the material falsity of statements over time? Again, these questions require qualitative evaluations of each specific model and will require judges and juries to evaluate the reasonableness of the implementation consultant’s or model proprietor’s actions vis-à-vis the performance of their products.¹⁰
Evaluating model governance and quality assurance and quality control (QA/QC) processes may help judges and juries answer these questions. Consideration should be given to whether the implementation consultant or model operator maintained a robust model evaluation protocol; whether there was a process in place for monitoring drift and degradation; and whether there were validation results that contradicted marketing claims but were ignored or siloed from sales and business development teams. If the implementation consultant or model operator made representations based on early test performance but failed to account for known decay patterns or retraining issues, a plaintiff may argue that the vendor should have known those representations were no longer accurate. That argument is particularly compelling in scenarios where post-deployment results showed a sharp divergence from pre-sale claims—suggesting either a lack of diligence or an unwillingness to revisit the truth of earlier representations.
To help begin answering these questions, the following issues should be considered:
Implementation consultants and model operators should have access to internal performance data, including longitudinal studies that demonstrate how the model performs over time and across various inputs. If internal metrics revealed significant performance drift, degradation, or accuracy issues—especially when measured against the same use case for which the model was being marketed—this evidence may support a finding that the consultant or operator should have known its claims were overstated. This is particularly relevant where consultants and operators continued making representations based on earlier or outdated test results that no longer reflected real-world behavior.
Constructive knowledge may also be inferred from prior customer complaints or support tickets identifying recurring problems with the model. If, for example, a consultant or operator receives repeated feedback that the model fails to handle a specific data type (e.g., handwritten invoices, scanned contracts, or low-resolution PDFs), and that feedback aligns with the model’s documented limitations, continued claims to the contrary may rise to the level of reckless disregard. Courts will be interested in whether such complaints were escalated, investigated, and resolved, or simply ignored.
An implementation consultant’s or model operator’s broader approach to product validation can also inform the constructive knowledge analysis. The absence of formal QA/QC processes—or the failure to apply them to critical representations—may be probative of liability. Conversely, if the consultant or operator has a robust validation pipeline and can show that its claims were grounded in good-faith testing, it may support a defense against fraud. Two processes are particularly important here: pre-deployment validation of the performance claims made about the model, and post-deployment monitoring for drift and degradation.
In sum, constructive knowledge will often hinge on what internal information the consultant or proprietor had access to and whether that information reasonably undermined the truth of their public-facing statements. Courts and litigators evaluating these issues will look closely at the integrity of the consultant’s or proprietor’s model governance practices, including their willingness to revisit and revise representations as new performance data becomes available. Where that governance is lacking—or where known issues are buried beneath marketing collateral—fraud may be inferred, even in the absence of direct intent to deceive.
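To make the governance discussion concrete, the sketch below illustrates, in deliberately simplified and hypothetical form, the kind of drift-monitoring record that discovery might target: periodic accuracy measurements compared against the figure used in pre-sale representations. The metric, numbers, and threshold are invented for illustration and do not depict any particular vendor’s tooling or obligations.

```python
# Hypothetical, simplified illustration of a post-deployment drift-monitoring
# check. All names, figures, and thresholds are invented for illustration and
# do not depict any particular vendor's tooling.
from dataclasses import dataclass
from statistics import mean


@dataclass
class MonthlyEval:
    month: str
    accuracy: float  # share of correct outputs on a fixed evaluation set


MARKETED_ACCURACY = 0.95  # accuracy figure used in pre-sale representations (hypothetical)
TOLERANCE = 0.03          # allowed shortfall before the divergence is flagged (hypothetical)


def flag_divergence(history: list[MonthlyEval]) -> list[MonthlyEval]:
    """Return the months in which measured accuracy fell more than
    TOLERANCE below the marketed figure."""
    return [m for m in history if MARKETED_ACCURACY - m.accuracy > TOLERANCE]


if __name__ == "__main__":
    # Hypothetical longitudinal results of the kind described above.
    history = [
        MonthlyEval("2025-01", 0.94),
        MonthlyEval("2025-02", 0.93),
        MonthlyEval("2025-03", 0.89),  # performance has drifted
        MonthlyEval("2025-04", 0.87),
    ]
    print(f"Average measured accuracy: {mean(m.accuracy for m in history):.2f}")
    for m in flag_divergence(history):
        print(f"{m.month}: measured {m.accuracy:.2f} vs. marketed {MARKETED_ACCURACY:.2f}")
```

Whether records of this kind exist, what they show, and whether marketing representations were revisited in response are precisely the questions the constructive knowledge inquiry described above will ask.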
It is not difficult to envision considerable litigation involving AI models that do not live up to aggressive representations of their supposed capabilities. For example, some model proprietors claim their models result in 99% improvements in workflows,¹¹ achieve “demonstrable outcomes,”¹² and complete “navigational and transactional tasks up to 90%.”¹³ Aside from the problems of qualitatively assessing model performance, fraud claims face a threshold problem of whether the representation is actionable at all.¹⁴ A full discussion of whether these types of statements are actionable is beyond the scope of this series, but litigants should expect disputes over actionability in addition to disputes over whether the model lived up to the representations.
Because these claims will turn on whether a given AI model lived up to representations concerning its performance or capabilities, fraud claims will have a qualitative component similar to that of the warranty claims discussed above. In a certain sense, fraud claims could be viewed as carbon copies of warranty claims, with misrepresentations over model performance or capability replacing contractually set benchmarks as to what a model will do. However, fraud claims may be hazier than contractual ones, as there may be less specificity about what was promised. Even if the fraud claim revolves around representations made in substantive marketing materials, those materials are likely to be less specific than clear contractual delineations of what a model will or will not do.¹⁵ Nevertheless, the issue of whether representations are based in fact or are expressions of opinion is not unique to AI.
At first glance, indemnity and limitation of liability provisions in AI contracts may appear to be identical to their counterparts in traditional commercial agreements. And while that is largely true, these provisions have a distinctive feature as they pertain to AI models—they illustrate the implementation consultant’s or model developer’s belief, or lack thereof, in the product they offer. The more prone a model is to hallucination, drift, or erratic behavior, the less robust the indemnities and the more stringent the damages caps one would expect (to protect the model developer). Conversely, more reliable and consistent models should be accompanied by more extensive indemnities and less restrictive damages caps.
A comparison to autonomous or self-driving cars is illustrative. As autonomous vehicles progress toward higher autonomy—moving into SAE Levels 4 and 5—the regulatory framework shifts responsibility for accidents from the human operator to the vehicle manufacturer.¹⁶ Car manufacturers are expected to stand behind the performance of their technology, particularly their representations of vehicle autonomy, by indemnifying drivers in the event of malfunction or collision. The logic is straightforward: If the system purports to eliminate human error, then the responsibility for system failure lies with its developer. Similarly, a model consultant or developer that makes strong claims about the autonomy, repeatability, and trustworthiness of its model should be expected to stand behind those claims in the form of robust indemnity obligations and minimal damages limitations.
And so, consultants and proprietors should consider what inferences regarding the overall performance or worth of their models could be drawn from the robustness of their liability limitations and indemnity provisions.
The foregoing has provided a treatment of the most common claims brought in complex commercial litigation. While claims for breach of contract, breach of warranty, and fraud are frequently asserted, they are not the only claims brought in commercial disputes. We anticipate that disputes involving commercial deployments of enterprise-grade AI will be expansive and will include claims for theft of trade secrets, negligence, and products liability, to name a few. While these claims are important, an in-depth analysis of them is beyond the scope of this series for two reasons. First, trade secret claims involving AI figure to be so complex that they warrant their own series. Second, the current legal frameworks for claims involving negligence and products liability appear, at first blush, well suited to deal with the intricacies posed by enterprise-wide AI deployments. And so, we do not believe that a similarly detailed treatment of such claims is required here. Nevertheless, because these three claims do commonly appear in commercial disputes, we provide a brief treatment.
At a fundamental level, trade secrets require secrecy to retain legal protection.¹⁷ Disclosure—whether intentional or accidental—extinguishes those protections entirely.¹⁸ One well-established defense to a claim of trade secret misappropriation is that the alleged secret was made public or disclosed in a way that eliminated its confidential status. In the context of AI, the disclosure risk takes on new and evolving forms in at least two distinct scenarios.
First, organizations may inadvertently forfeit trade secret protection when employees or contractors input confidential or proprietary information into external AI models—particularly those without contractual or technical safeguards that prevent the reuse, retraining, or dissemination of that data outside the organization.¹⁹ If a user pastes trade-secreted code, internal product roadmaps, or customer pricing models into a public-facing model without appropriate controls, such as organizational service barriers, that information may no longer meet the legal requirements for secrecy.
Second, disclosure can also occur on the model development side. If a vendor trains its model on datasets that contain third-party trade secrets—whether pulled from internal systems, confidential customer data, or improperly obtained files—and then makes that model available for commercial use by others, that act of deployment may functionally disseminate the secret, destroying the information’s status as a trade secret. If the model can regenerate or expose trade-secreted content through its outputs, the risk of unintentional disclosure becomes very real.
Perhaps more pressing than disclosure is the issue of who owns model-generated trade secrets. On the one hand, models are developed by proprietors, and one would expect that the outputs generated by those models would belong to those proprietors. On the other hand, models will be trained on sets of data belonging to the enterprise-level user for whom the proprietor has developed the model. And trade-secreted output will surely be generated in response to prompting from the enterprise user as well. In this sense, one could argue that the trade secret belongs to the user, not the proprietor. In any event, we anticipate that ownership of intellectual property generated from AI models will be handled in the parties’ contracts governing their use.
Claims for negligent model implementation or negligent misrepresentation regarding the model should closely track fraud and warranty theories, merely applying a different legal standard—in both instances, the standard will be whether the model proprietor acted reasonably under the circumstances.²⁰ As with warranty and fraud, these torts typically require a qualitative assessment of the model’s behavior against some benchmark of reasonableness. Courts will increasingly need to evaluate what constitutes sufficient testing, whether the deployment context justified additional safeguards, and how warnings or disclaimers affect the allocation of risk.²¹
One unresolved but critical question will be whether the model—or more specifically, its outputs—constitutes a “product” for purposes of strict liability.²² Traditional product liability doctrine imposes strict liability on manufacturers for defective products that cause harm, even in the absence of negligence or contractual privity.²³ If AI outputs are deemed to be “products” rather than services or intangible information, then developers and deployers could face enormous liability exposure under existing product liability statutes.
Courts have not yet reached a consensus on this question, and the outcome may turn on how the AI system is implemented, marketed, and consumed. However, should courts determine that AI outputs are “products” in the legal sense, the implications would be significant. Developers could be held liable for defective outputs—such as erroneous financial advice, false medical summaries, or misclassified compliance risks—regardless of intent or diligence. From a risk-management perspective, this possibility underscores the need to tightly draft disclaimers, clarify the nature of the offering (service vs. product), and understand how state law treats AI-generated content under existing liability frameworks.
Litigation over enterprise-grade AI deployments is still in its early stages, and the governing patterns are not yet settled. While a wholesale recasting of entire legal frameworks does not appear to be on the horizon—we anticipate reformation, not revolution—the complexities engendered by AI will require litigators to reframe how they assert claims, mindful of AI’s peculiarities such as model stochasticity and the black box problem. The unique features of AI will pressure-test how parties and trial attorneys think about concepts like causation and how they will put on evidence before the triers of fact. This process will necessarily be iterative, responding to developments in the marketplace and the courtroom. However, it is safe to say that we are far removed from any consensus on these issues, which are only now beginning to appear. And so, the foregoing has been an attempt to begin that iterative process by providing guidance to litigators on how they should think about framing claims and the types of discovery that will be necessary to successfully assert them.
This is Part II of a two-part series. Read the full series here.
Varant Yegparian is founder and principal of Yegparian PLLC. With nearly two decades of experience in high-stakes litigation, Varant represents companies and executives in complex commercial and technology-based disputes. He regularly litigates disputes involving enterprise-grade technological implementations. His practice includes matters involving AI and cloud platforms, smart infrastructure, failed software implementations, and biomedical technology. Varant approaches each case with a trial-ready mindset, focusing on strategic positioning from the outset and building cases designed to withstand scrutiny in court.
1 See, e.g., DoubleLine Capital LP v. Construtora Norberto Odebrecht, S.A., No. 17 CIV. 4576 (DEH), 2025 WL 1951864, at *14 (S.D.N.Y. July 16, 2025) (listing elements of fraud).
2 See, e.g., Bermel v. BlueRadios, Inc., 2019 CO 31, ¶ 15, 440 P.3d 1150, 1153 (Colo. 2019) (economic loss rule bars tort claims for purely economic losses absent an independent tort duty); Indem. Ins. Co. of N. Am. v. Am. Aviation, Inc., 891 So. 2d 532, 536 (Fla. 2004) (rule prevents parties in contractual privity from recasting contract losses as tort damages); Sw. Bell Tel. Co. v. DeLanney, 809 S.W.2d 493, 494 (Tex. 1991) (no tort recovery where plaintiff seeks only loss to the contract’s subject matter).
3 See, e.g., Budgetel Inns, Inc. v. Micros Sys., Inc., 8 F. Supp. 2d 1137, 1146 (E.D. Wis. 1998); Huron Tool & Eng’g Co. v. Precision Consulting Services, Inc., 209 Mich. App. 365, 375, 532 N.W.2d 541, 546 (1995).
4 Much attention will be paid to whether claims of fraud are sufficiently separate and distinct from what is covered by the parties’ contract. See Budgetel, 8 F. Supp. 2d at 1146. And where these claims are “not so interwoven” with the subject matter of the parties’ contract, separate consideration must be given to any contractual disclaimers of reliance as well. See Italian Cowboy Partners, Ltd. v. Prudential Ins. Co. of Am., 341 S.W.3d 323 (Tex. 2011).
5 See supra note 1.
6 Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 HARV. J.L. & TECH. 889, 892–93 (2018).
7 See, e.g., H1 Lincoln, Inc. v. S. Washington St., LLC, 489 Mass. 1, 19, 179 N.E.3d 545, 560 (2022).
8 PR Diamonds, Inc. v. Chandler, 364 F.3d 671, 681 (6th Cir. 2004) (holding that recklessness is a mental state distinct from negligence and akin to conscious disregard).
9 Id.
10 Id. (explaining that recklessness involves an extreme departure from ordinary care where the risk would be obvious to a reasonable person).
11 IBM, Realize the Promise of AI with watsonx, https://www.ibm.com/products/watsonx (last visited Feb. 23, 2026).
12 IBM, Finance Consulting Services, https://www.ibm.com/consulting/finance (last visited Feb. 23, 2026).
13 SAP SE, Joule from SAP: Artificial Intelligence Assistant, https://www.sap.com/products/artificial-intelligence/ai-assistant.html (last visited Feb. 23, 2026).
14 Compare Fagan Holdings, Inc. v. Thinkware, Inc., 750 F. Supp. 2d 820, 832 (S.D. Tex. 2010) (representation that software “was going to make our lives a whole lot easier” constituted non-actionable puffery) with Grouse River Outfitters, Ltd. v. Oracle Corp., 848 F. App’x 238, 243 (9th Cir. 2021) (holding trial court erred in treating statement that software had “[s]peed of deployment in months not years” as statement of opinion); PC Connection, Inc. v. Int’l Bus. Machines Corp., 687 F. Supp. 3d 227, 260 (D.N.H. 2023); Budget Rent A Car Corp. v. Genesys Software Sys., Inc., No. 96 C 0944, 1996 WL 480388, at *4 (N.D. Ill. Aug. 22, 1996).
15 This inquiry will be highly fact specific and fact intensive as courts require a holistic evaluation of all statements together. See, e.g., Oran v. Stafford, 226 F.3d 275, 286 (3d Cir. 2000); Casella v. Webb, 883 F.2d 805, 808 (9th Cir. 1989).
16 See Mark MacCarthy, Setting the Standard of Liability for Self-Driving Cars, BROOKINGS (Aug. 8, 2025), https://www.brookings.edu/articles/setting-the-standard-of-liability-for-self-driving-cars/.
17 Signet Mar. Corp. v. Nykanen, 2023 WL 7093028, at *5 (S.D. Tex. Oct. 26, 2023); Ultraflo Corp. v. Pelican Tank Parts, Inc., 926 F. Supp. 2d 935, 948 (S.D. Tex. 2013).
18 See Signet and Ultraflo, supra note 17.
19 A related issue concerns what reasonable steps a trade secret holder took to protect the information. See, e.g., Superb Motors Inc. v. Deo, 776 F. Supp. 3d 21, 76 (E.D.N.Y. 2025).
20 See, e.g., Talley v. Danek Med., Inc., 179 F.3d 154, 157–58 (4th Cir. 1999); Borman v. Brown, 59 Cal. App. 5th 1048, 1060, 273 Cal. Rptr. 3d 868, 879 (2021).
21 See Talley, supra note 20.
22 See Kostina Prifti, Is Artificial Intelligence a Product or a Service? ROBOTICS & AI LAW SOCIETY (RAILS) (May 7, 2023), https://blog.ai-laws.org/is-artificial-intelligence-a-product-or-a-service/.
23 See, e.g., TEX. CIV. PRAC. & REM. CODE § 82.001(2).