
Ryan Whalen: “The What, Why, and How of Automated Patent Decision-making”

The Amsterdam Law & Technology Institute’s team is inviting external faculty members to publish guest articles in the ALTI Forum. Here is the latest contribution authored by Ryan Whalen (The University of Hong Kong).

***

Introduction

The patent system is slow, expensive, and inaccurate. So much so that some refer to it as “broken.” [1: Adam B Jaffe and Josh Lerner, Innovation and Its Discontents: How Our Broken Patent System Is Endangering Innovation and Progress, and What to Do About It (Princeton University Press 2011).] Meanwhile, patent applications are increasing in number and complexity. [2: Nicolas van Zeebroeck, Bruno van Pottelsberghe de la Potterie and Dominique Guellec, ‘Claiming More: The Increased Voluminosity of Patent Applications and Its Determinants’ (2009) 38 Research Policy 1006; Ryan Whalen, ‘Boundary Spanning Innovation and the Patent System: Interdisciplinary Challenges for a Specialized Examination System’ (2018) 47 Research Policy 1334.] Examiners are overworked, which leads them to make more incorrect decisions. [3: Michael D Frakes and Melissa F Wasserman, ‘Is the Time Allocated to Review Patent Applications Inducing Examiners to Grant Invalid Patents? Evidence from Microlevel Application Data’ (2016) 99 The Review of Economics and Statistics 550; Michael Frakes and Melissa F Wasserman, ‘Does the U.S. Patent & Trademark Office Grant Too Many Bad Patents?: Evidence from a Quasi-Experiment’ [2015] Stanford University Law Review.] This has led some to advocate for increased use of technologies, and in particular patentability classifiers, at patent offices. [4: Arti K Rai, ‘Machine Learning at the Patent Office: Lessons for Patents and Administrative Law’ (2018) 104 Iowa L. Rev. 2617; Ben Dugan, ‘Mechanizing Alice: Automating the Subject Matter Eligibility Test of Alice v. CLS Bank’ (2018) 2018 University of Illinois Journal of Law, Technology & Policy 33; Laura G Pedraza-Fariña and Ryan Whalen, ‘A Network Theory of Patentability’ (2020) 87 The University of Chicago Law Review.]

This essay provides an overview of issues related to implementing automated patentability decision-making technologies within patent offices. It first briefly discusses the technologies in question, underpinned by advances in machine learning and natural language processing. The subsequent section explores why the patent system is a prime candidate for the adoption of increased automation. Finally, it turns to exploring different ways these technologies could contribute to existing patent examination processes, or enable the development of new categories of “machine examined” patent grants with distinct legal protections.

The What

Technology has long played an important role in the patent examination process. In the context of substantive examination—where the examiner compares the claimed invention to the prior art to determine patentability—technology’s most important contribution is in improved prior art search and retrieval systems. These databases and information retrieval systems help guide examiners to relevant prior art, easing their workload and helping minimize incorrect decisions. [5: Gerry J Elman, ‘Automated Patent Examination Support—A Proposal’ (2007) 26 Biotechnology Law Report 435.]

Although the modern examination system makes extensive use of technology, the final decisions as to patentability are still made by human examiners. In simplified terms, they compare applications to the prior art to assess whether they claim a useful, novel, and nonobvious invention. [6: See Article 27, Agreement on Trade-Related Aspects of Intellectual Property Rights (requiring TRIPS member states to provide patent protection for new, useful, and nonobvious inventions).] For the most part, the patentability decision-making process has not been influenced by developments in predictive analytics, artificial intelligence, machine learning, or whatever other term one might use to describe technologies that automate traditionally human-made decisions. These technologies have steadily improved in their ability to accurately and automatically make decisions that were once thought the exclusive realm of human decision-makers.

The precise design of an automated patentability decision-making system is not the focus here, so I will elide the technical details that could underpin such a system. Suffice it to say that the patentability decision-making humans currently engage in is essentially a classification task. Each claim on a patent application is assigned one of two classifications – patentable or not patentable. The “not patentable” classification is itself subject to a variety of subclassifications—or reasons for rejection—such as anticipation, obviousness, ineligible subject matter, et cetera. An automated system would therefore need to ingest an application, parse each claim, compare it to the prior art, and classify it as patentable or not patentable. Depending on the implementation approach adopted by patent offices, the system may or may not be required to provide a reason for a rejection.
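
To make the classification framing concrete, the following minimal Python sketch illustrates the pipeline just described. It is purely illustrative: the names (parse_claims, PatentabilityClassifier, prior_art_index, and so on) are hypothetical placeholders, not references to any existing patent office system, and the classifier and prior art index are assumed to be supplied elsewhere.

    import re
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional, Protocol

    class Decision(Enum):
        PATENTABLE = "patentable"
        NOT_PATENTABLE = "not patentable"

    class RejectionReason(Enum):
        ANTICIPATION = "anticipation"                 # lacks novelty over a single prior art reference
        OBVIOUSNESS = "obviousness"                   # obvious in light of the prior art
        INELIGIBLE_SUBJECT_MATTER = "ineligible subject matter"

    @dataclass
    class ClaimAssessment:
        claim_number: int
        decision: Decision
        reason: Optional[RejectionReason] = None      # populated only when a claim is rejected

    class PatentabilityClassifier(Protocol):
        # Hypothetical interface: compares one claim against retrieved prior art.
        def classify(self, claim: str, prior_art: list[str]) -> tuple[Decision, Optional[RejectionReason]]: ...

    def parse_claims(application_text: str) -> list[str]:
        # Naive segmentation on numbered claims ("1. ...", "2. ..."); real claim
        # parsing would be considerably more involved.
        parts = re.split(r"\n\s*\d+\.\s+", "\n" + application_text)
        return [p.strip() for p in parts if p.strip()]

    def examine(application_text: str, classifier: PatentabilityClassifier, prior_art_index) -> list[ClaimAssessment]:
        # Ingest the application, parse each claim, retrieve prior art, and classify.
        assessments = []
        for number, claim in enumerate(parse_claims(application_text), start=1):
            prior_art = prior_art_index.retrieve(claim)   # prior art search and retrieval step
            decision, reason = classifier.classify(claim, prior_art)
            assessments.append(ClaimAssessment(number, decision, reason))
        return assessments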

While its precise design is as yet uncertain, patent offices have a long-standing tendency to adopt new technologies that assist with application examination. It therefore seems likely that, sooner or later, as patentability classifiers become more accurate, the arguments in favour of their adoption will outweigh the hesitancy to automate examination.

The Why

There are at least two reasons that the adoption of automated patentability classifiers seems both inevitable and advisable—the large efficiency gains on offer, and the patent system’s suitability as a test bed for automated decision-making. The efficiency rationale is straightforward. The labour-intensive patent examination system currently used in many jurisdictions is expensive and slow. The budgets of major offices, such as the EPO’s €2.4 billion or the USPTO’s $3.15 billion in patent expenses, give a sense of the scale of resources required by the manual examination process. It is true that in most jurisdictions the patent examination budget is funded by user fees. However, these costs are ultimately borne by the public because they raise the costs of research and development. Efficiency gains could reduce these costs while also expediting the process, giving patent applicants greater certainty about their rights sooner.

The second justification—that the patent system offers a useful test case—relies on patent law’s focus on economic rather than personal rights. As automated decision-making technologies are increasingly integrated into legal contexts, mistakes will inevitably be made. It is therefore important to choose appropriate contexts within which to test and deploy these systems. Perhaps the most important precondition is that mistakes be remediable. The patent system’s focus on granting or denying economic rights—i.e. term-delimited intellectual property ownership—offers just such a context. Mistakes within the patent system can be remedied, whether with monetary damages or through an appeals process that allows participants to challenge decisions. [7: Rai (n 4).] Meanwhile, implementing an automated patent examination system would provide lessons for other areas about how best (or how not) to implement automated decision-making technologies.

The How

There are multiple approaches through which automated or computationally assisted decision-making could be integrated into patent office processes, each with a different set of possible implications. I will detail three of them here.

The Technologically Assisted Decision-making Approach. The simplest and most plausible way for automated patent examination to take hold in patent offices around the world is as a first-stage pre-examination conducted before the usual human examination process. Under this arrangement, automated examination software would process an application and render a recommended decision as to patentability.

Especially in early stages of development, this recommendation might be application-wide rather than claim-specific. For example, the system might recommend “this application should be rejected” or “this application may have some patentable claims.” The human examiner would then engage in the standard examination process and render their own decision as to patentability.
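
As a rough illustration of how claim-level outputs could be rolled up into this kind of application-wide recommendation, consider the following sketch. The 0–1 scores and the 0.5 threshold are assumptions made purely for illustration, not a proposal for any particular system.

    def application_recommendation(claim_scores: list[float], threshold: float = 0.5) -> str:
        # claim_scores: one patentability score per claim, between 0 and 1,
        # produced by a hypothetical upstream classifier.
        if any(score >= threshold for score in claim_scores):
            return "this application may have some patentable claims"
        return "this application should be rejected"

    # Example: a three-claim application where only the second claim looks promising.
    print(application_recommendation([0.1, 0.7, 0.2]))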

This approach is the most similar to the status quo. It retains the “human in the loop” nature of patent examination, [8: Fabio Massimo Zanzotto, ‘Human-in-the-Loop Artificial Intelligence’ (2019) 64 Journal of Artificial Intelligence Research 243.] while adding possible efficiency and accuracy gains by having an automated system supplement human judgment. By limiting the automated system’s ability to make legal decisions, this approach is precautionary in nature. [9: Kenneth R Foster, Paolo Vecchia and Michael H Repacholi, ‘Science and the Precautionary Principle’ (2000) 288 Science 979.] However, it would also be costly. Because it adds significantly to IT development requirements while leaving patent examiner human resource requirements largely unchanged, the technologically assisted decision-making approach would yield few cost savings and might actually increase overall patent system costs.

Another possible downside of the technologically assisted decision-making approach is that, despite efforts to retain the human decision-maker’s agency, patent examiners might be unduly influenced by the system’s recommendations. This sort of “automation bias” arises when humans over-rely on automated systems and fail to sufficiently monitor or double-check the system’s recommendations. [10: Kate Goddard, Abdul Roudsari and Jeremy C Wyatt, ‘Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators’ (2012) 19 Journal of the American Medical Informatics Association 121; Linda J Skitka, Kathleen L Mosier and Mark Burdick, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991.] Because of this risk, any automated patentability assessment system designed to be used only as a decision-making aid should be built to ensure that human examiners continue to critically examine application materials.

The Independent Roboexaminer Approach. A fully enabled ‘roboexaminer’ authorized to make legal patentability determinations offers a second model for the adoption of automated examination at patent offices. Under this approach, an automated patentability classifier would be fully authorized to decide whether a claim is patentable, and ultimately to grant or deny patent protection.

The primary advantage of the independent roboexaminer approach is the major efficiency gain it would offer the patent examination process. A roboexaminer could process applications almost instantaneously, saving many person-hours in patent offices and significantly increasing clarity about intellectual property rights. No longer would applicants need to wait months or years to learn whether their new invention is patentable. Similarly, the intellectual property portfolios of third parties—whether competitor firms or possible acquisition targets—would be clearer and more certain.

Despite possible efficiency gains, integrating an independent roboexaminer into the patent examination process is not without challenges. There is of course the possibility of error, although it must be noted that the current human examination system is not error-free. [11: Frakes and Wasserman, ‘Does the U.S. Patent & Trademark Office Grant Too Many Bad Patents?: Evidence from a Quasi-Experiment’ (n 3).] An automated patentability classifier would be subject to both Type I (false positive) and Type II (false negative) errors. In the context of patent examination, a Type I error occurs when an application is deemed patentable even though it is not, and a Type II error occurs when a patentable application is incorrectly rejected.
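
Treating “grant” as the positive class, the two error types can be expressed as simple rates over a set of decisions. The sketch below is illustrative only; it assumes ground-truth patentability labels that, in practice, are never directly observable.

    def error_rates(decisions: list[tuple[bool, bool]]) -> tuple[float, float]:
        # Each pair is (system_granted, truly_patentable).
        false_positives = sum(1 for granted, patentable in decisions if granted and not patentable)
        false_negatives = sum(1 for granted, patentable in decisions if not granted and patentable)
        unpatentable = sum(1 for _, patentable in decisions if not patentable)
        patentable = sum(1 for _, patentable in decisions if patentable)
        type_i = false_positives / unpatentable if unpatentable else 0.0   # unpatentable applications granted
        type_ii = false_negatives / patentable if patentable else 0.0      # patentable applications rejected
        return type_i, type_ii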

Type I and Type II errors can be addressed in different ways. In the case of a Type II error, where a patentable invention is not granted patent protection, an appeal process can help identify and correct the mistake. Such a process already exists at most patent offices, allowing applicants to request further examination of rejected applications. Where the initial decision is made by a roboexaminer, allowing subsequent appeals or requests for further examination by a human examiner could provide a backstop against machine error. Cases of Type I error, where an unpatentable invention is granted patent protection, can be reduced by allowing for post-grant review of granted patents. [12: Bronwyn H Hall and Dietmar Harhoff, ‘Post-Grant Reviews in the U.S. Patent System – Design Choices and Expected Impact’ (2004) 19 Berkeley Technology Law Journal 989; Beth Simone Noveck, ‘Peer to Patent: Collective Intelligence, Open Review, and Patent Reform’ (2006) 20 Harvard Journal of Law & Technology 123.] Post-grant review regimes allow third parties to challenge patents that they believe were inappropriately granted. These systems are imperfect, but whether the initial granting error was committed by a human examiner or an automated system has no bearing on the challenges facing a post-grant review system.

Ultimately, an automated patent examination system should be instituted only when it is reasonably certain that doing so would not substantially increase the error rate already present under human examination. This will require implementing any such technologies slowly, likely in tandem with human examiners, and providing substantial human oversight of the determinations made by any official patentability classifiers. As such, the independent roboexaminer approach may be the eventual destination of the more conservative technologically assisted decision-making approach. One path towards automation would permit automated decisions in cases where the classifier is highly confident of its determination, while shuttling less clear cases to human examiners.
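
One way to picture that confidence-based division of labour is the sketch below. The classify_with_confidence interface and the 0.95 threshold are hypothetical assumptions; in practice the threshold would be set, and regularly audited, by the patent office.

    def triage(claim: str, classifier, confidence_threshold: float = 0.95) -> tuple[str, object]:
        # classifier.classify_with_confidence is a hypothetical interface returning a
        # proposed decision together with the model's confidence in it (0 to 1).
        decision, confidence = classifier.classify_with_confidence(claim)
        if confidence >= confidence_threshold:
            return ("automated decision", decision)
        # Less clear cases are shuttled to a human examiner for full review.
        return ("routed to human examiner", None)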

The Legal Adaptation Approach. The first two approaches outlined here—using a patentability classifier either as an aid to human decision-making or as an independent examination technology—are convenient because they integrate directly into existing patent law. Under them, an improved technology simply helps make the same decisions and issue the same legal rights as the current patent examination process. A third approach requires adapting patent law more significantly, but may offer a more appropriate and efficient use of an automated patentability classifier’s capabilities. Under this approach, an automated system could independently examine applications for patents offering less protection than traditional utility patents.

Many jurisdictions offer differing levels of patent protection based on varying depths of examination. The lesser patents—so-called petty patents, short-term patents, utility models, and the like [13: John Richards, ‘Petty Patent Protection Part XI: Patents, Industrial Design and Petty Patents: Chapter 47’ (1998) 2 International Intellectual Property Law & Policy 47; Jussi Heikkilä and Michael Verba, ‘The Role of Utility Models in Patent Filing Strategies: Evidence from European Countries’ (2018) 116 Scientometrics 689.]—are often not substantively examined, but rather registered on application. They generally have shorter terms and do not enjoy the same presumption of validity as a fully examined utility patent. In theory, these short-term patents are designed to offer easier access to intellectual property protection for small firms and independent inventors. In practice, some jurisdictions have found that short-term patents are abused by large, well-resourced firms, resulting in large and complex patent thickets. [14: Adam Hyland, ‘Last Stand for the Innovation Patent’ (2021) 73 Food Australia 25.]

A patentability classifier offers the opportunity for a middle ground between short-term patents with no substantive examination and full utility patents requiring patentability assessment by a human examiner. Under this approach, an applicant could apply for a robo-examined patent, subject to patentability examination by a classifier. On grant, the applicant would enjoy patent protection with a shorter term, and a weaker presumption of validity, than a fully examined utility patent.
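
The tiered structure this implies could be summarised as configuration data along the following lines. The term lengths and the level of deference attached to the hypothetical robo-examined tier are placeholder values, since those are precisely the parameters legislators would have to set; short-term patent terms also vary by jurisdiction.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PatentTier:
        name: str
        term_years: int
        examination: str               # who, if anyone, performs substantive examination
        presumption_of_validity: str   # deference the granted right receives in later disputes

    UTILITY_PATENT = PatentTier("utility patent", 20, "human examiner", "full")
    SHORT_TERM_PATENT = PatentTier("short-term patent / utility model", 8, "registration only", "weak or none")  # term illustrative
    ROBO_EXAMINED_PATENT = PatentTier("robo-examined patent", 10, "automated classifier", "intermediate")        # values illustrative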

Conclusion

Automated patent examination technologies will have profound effects regardless of how they are integrated into the patent system. It is thus important to plan carefully for their development and implementation. When new technologies are adopted, they are often designed to fit into existing systems and complement existing practices. In the legal context, however, policymakers must remember that the legal systems within which automated decision-making technologies might be adopted are themselves products of design. We do not need to shoehorn patentability classifiers directly into the current examination process. Rather, policymakers should remain open to creative ways of integrating future technological improvements into the intellectual property system.

Ryan Whalen

***

Citation: Ryan Whalen, The What, Why, and How of Automated Patent Decision-making, ALTI Forum, May 9, 2022.

Invited by Thibault Schrepel
