What AI Will Never Conquer in IP Analysis

There’s a narrative that floats around in tech circles, and you probably know it already. AI keeps getting better, IP analysis processes keep getting automated, and one morning the patent attorneys and licensing negotiators and IP strategists wake up to find that their jobs have been quietly eliminated. I don’t buy this version of events. Not because AI isn’t genuinely powerful — it is — but because the people telling this story keep glossing over what the technology actually cannot do.
Let me be specific.
AI tools are changing IP analysis in real ways. Prior art searches that used to take weeks can now take minutes. Competitive landscaping, feasibility checks, patent claim drafting — all of these have been meaningfully accelerated. If you’re an IP professional and you’re not using these tools yet, that’s a conversation worth having with yourself. But there is a significant gap between “AI makes this work faster” and “AI can do this work without humans,” and that gap is where all the interesting, important stuff lives.
The Question Picasso Already Answered
In 1964, Pablo Picasso told the writer William Fifield something that has held up remarkably well. As recorded in The Paris Review, he said of computing machines: “They are useless. They can only give you answers.” The modern shorthand (“Computers are useless. They can only give you answers”) preserves his meaning faithfully; Quote Investigator has traced the original context in detail.
The distinction Picasso was drawing is between question-asking and answer-giving. Computers, including the AI systems we have right now, are extraordinarily good at answers. You give them a prompt, a dataset, a comparison to run, and they return something fast. What they cannot do is independently decide what is worth asking in the first place.
This becomes concrete in patent work when you’re drafting claims for a technology that doesn’t fully exist yet. Think 6G, or quantum networking, or whatever is three generations past the current frontier. A good patent attorney working on claims for an emerging technology is not pattern-matching against existing filings. They are imagining future attack vectors, future use cases, future implementations that a competitor has not tried yet and might try eventually. They are asking questions that the patent database has never seen. An AI trained on existing patents is very good at telling you what has already been claimed. It has no capacity to tell you what should be claimed in territory that hasn’t been mapped.
This matters practically. AI tools can generate an initial list of draft patent claims based on your invention description. That is a useful starting point. But treating those claims as finished work is how you end up with gaps that a well-funded competitor walks through five years from now.
The Room You Can’t Read Remotely
Licensing negotiations for Standard Essential Patents are not a clerical task. As of early 2024, more than 100,000 cellular patent families had been declared globally (LexisNexis 5G SEP Report, 2025), and the number is growing at more than 10,000 new families per year. Identifying who holds what and calculating portfolio shares — AI can help with that, and the help is real. What happens next is something different.
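The mechanical part of that work, tallying declared families and computing each holder's portfolio share, is exactly the kind of arithmetic a machine handles well. A minimal Python sketch, using entirely hypothetical numbers rather than any real holder data:

```python
# Illustrative only: hypothetical declared-family counts, not real data.
declared_families = {
    "Company A": 12000,
    "Company B": 9500,
    "Company C": 4000,
}

total = sum(declared_families.values())

# Portfolio share = a holder's declared families / all declared families.
shares = {holder: count / total for holder, count in declared_families.items()}

for holder, share in shares.items():
    print(f"{holder}: {share:.1%}")
```

Note what this sketch cannot capture: declared families are self-reported, and raw counts say nothing about essentiality or claim strength. Those judgments are the human part of the job.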
FRAND negotiations (Fair, Reasonable, and Non-Discriminatory, for anyone who has managed to avoid the acronym) are where the economics of 5G licensing actually get resolved. The final terms of an agreement covering SEPs can determine the business viability of an entire product line. And the factors that move those negotiations in one direction or another include things that do not exist in any training dataset: whether the person across the table is genuinely close to walking away or running a known bluff, whether the relationship between the two companies creates space for a firm counteroffer or closes it off, whether the negotiator on your side has a reputation in this specific industry that is worth something.
An AI system can scan a dataset of comparable licensing deals and tell you the range of historical outcomes. That’s valuable. It cannot tell you what this particular deal, with these particular people, is likely to produce. And in negotiations where the stakes are measured in hundreds of millions of dollars, that difference matters.
The Hallucination Problem, Which Is Not Actually Minor
Current large language models make things up. The term used for this is “hallucinations,” though an Australian federal court pointed out in 2025 that “fabricated” would be more accurate (JML Rose Pty Ltd v Jorgensen, Federal Court of Australia, 2025).
Stanford researchers tested leading legal AI tools and found that even the purpose-built ones — designed specifically to avoid this problem — still produced incorrect information more than 17% of the time. Westlaw’s AI-Assisted Research hallucinated more than 34% of the time. (Stanford HAI, “Hallucination-Free?”) These are not acceptable error rates when the output is going to inform a patent prosecution strategy or a licensing opinion.
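To see why a 17% per-answer error rate is untenable, it helps to run the arithmetic at workflow scale. The sketch below assumes, simplistically, that errors are independent across queries; it is an illustration of compounding risk, not a model of any specific tool.

```python
# Hypothetical arithmetic: how a per-answer error rate compounds across
# a research session. Assumes independent errors (a simplification).

def p_at_least_one_error(per_query_rate: float, num_queries: int) -> float:
    """Probability that at least one of num_queries answers is wrong."""
    return 1 - (1 - per_query_rate) ** num_queries

# Ten queries at the 17% rate reported for purpose-built legal tools:
print(f"{p_at_least_one_error(0.17, 10):.0%}")  # roughly an 84% chance
```

At the 34% rate, the same ten queries leave a roughly 98% chance of at least one fabricated answer somewhere in the session.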
The legal profession has been learning this painfully. A researcher at HEC Paris has been tracking legal cases involving AI-generated fabricated citations. As of mid-2025, the database had identified over 1,000 cases globally, with the rate reportedly running at two to three new cases per day. Some of these involved leading law firms. Most involved lawyers who trusted AI outputs without verifying them.
The IP equivalent is obvious. An AI-generated prior art search that misrepresents the claims of an existing patent, or invents one that doesn’t exist, is a foundational error. The professional who signs off on any analysis derived from that output owns that error.
Centaur Chess
In 2005, an online chess tournament called Freestyle Chess produced a result that surprised almost everyone. The tournament allowed any combination of humans and computer assistance. The winner was not a grandmaster. It was a pair of amateur American players using three computers simultaneously, who were skilled enough at directing the machines to outperform both standalone chess engines and grandmaster-plus-computer combinations. As Kasparov described it in a 2010 essay for The New York Review of Books: a weak human with a machine and a better process beat a strong computer alone.
This model — sometimes called centaur chess or advanced chess — is worth taking seriously outside the context of chess.
The question for IP professionals is not whether AI replaces human expertise. The question is how effectively a given human can direct, interrogate, and critically evaluate what AI produces. The people who do best will not be those with the strongest aversion to these tools, nor those who defer to them uncritically. They will be the ones who understand where the tools are reliable and where they fail, and who apply their own judgment precisely at those failure points.
The division of labor is actually fairly clear. AI is good at scale, speed, and initial organization: scanning millions of patent documents, extracting concepts, flagging prior art similarities, generating first-draft claims. Humans are better at original question-framing, interpersonal judgment, high-stakes decision-making, and catching the moments when an AI output is confidently, fluently wrong.
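That division can be made concrete. Below is a deliberately minimal Python sketch of the workflow shape; every name, threshold, and score is hypothetical, not any real tool's API.

```python
# A sketch of the division of labor described above. All names and
# thresholds here are hypothetical, chosen only for illustration.

from dataclasses import dataclass

@dataclass
class PriorArtHit:
    patent_id: str
    similarity: float  # 0.0 to 1.0, as scored by some AI similarity model

def machine_triage(hits: list[PriorArtHit], floor: float = 0.6) -> list[PriorArtHit]:
    """The machine's job: scale and speed. Narrow a huge candidate set
    down to a ranked shortlist worth a professional's time."""
    kept = [h for h in hits if h.similarity >= floor]
    return sorted(kept, key=lambda h: h.similarity, reverse=True)

# The human's job: judgment. Everything that survives triage is verified
# against the actual patent text before it informs strategy; the score
# is a pointer to read, never a conclusion to sign off on.
hits = [
    PriorArtHit("US-111", 0.95),
    PriorArtHit("US-222", 0.72),
    PriorArtHit("US-333", 0.30),
]
review_queue = machine_triage(hits)
print([h.patent_id for h in review_queue])  # the shortlist a human reviews
```

The design choice worth noticing is that nothing leaves the pipeline without passing through the human gate; the machine only decides what is worth a professional's attention, never what goes into the opinion.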
A Note on AGI
The outline for this post originally cited the claim that AGI is “decades, if not centuries, away.” This is worth qualifying, because the honest answer is that expert opinion is genuinely divided. Some researchers and AI company leaders now suggest general-purpose AI capabilities could arrive within years, not decades. Others believe current architectures have hard limits that will take much longer to overcome. What is not in serious dispute, regardless of where you land on that question, is that the AI tools available today for IP work are powerful assistants, not autonomous professionals. The gap between “can generate draft claims” and “can be trusted with final strategic decisions without oversight” remains large and well-documented.
What This Means in Practice
The question of whether to use AI in IP analysis is largely settled. The more interesting question is how to use it well. That means building workflows where AI handles the fast, scalable work and humans review outputs at the points where errors are expensive, where judgment is required, and where the questions being asked haven’t been asked before.
Picasso said machines can only give you answers. That was true in 1964 and it is mostly still true now. The real work, in IP analysis as in most things, is knowing what to ask.