The rise of generative artificial intelligence is capturing attention around the world, but one professional community has more reason than any other to pay close attention: copyright and patent professionals.
Some may recall that in 2020 the USPTO announced that artificial intelligence systems could not be named as inventors on patent applications. The statement came in response to applications for two patents on inventions generated by an AI system called DABUS, created by Stephen Thaler. Similar opinions were later issued by the UK Intellectual Property Office (IPO) and the European Patent Office (EPO), and a refusal also came from the Australian courts.
Thaler decided he wasn’t going to give up easily and launched a series of lawsuits. So far, however, every patent authority involved – in the US, the EU, the UK and Australia – has refused to grant his requests. As he fought court battles and collected rejection after rejection, GPT-3, DALL-E, Midjourney and dozens of other AI-based creative systems emerged.
Taking the bull by the horns
It is obvious that lawyers, judges, legislators and patent officers will have to take the bull by the horns. The denials cannot go on indefinitely: works created by AI systems are already being used intensively in a variety of fields, and patent law will have to change.
So far, the use of AI does not appear to pose a serious challenge to the patent system, as the technology is used as a tool to help people shape ideas rather than acting on their own, said Chris Morgan, a partner at law firm Reed Smith. However, referring to the possibility that AI systems could one day create works entirely of their own, she added: “As they currently stand, our laws do not have the tools to deal with this scenario.”
Even before that stage was reached, she and other legal experts warned that systems like ChatGPT could be used to generate large numbers of new patent applications, flooding the patent office with claims – in the hope of making a big profit for their creators.
New ideas without human intervention
Thaler’s case is emblematic, and the decisions in his lawsuits will be crucial. Thaler claims that his DABUS system is designed to suggest new ideas without any human guidance or supervision – and that means its ideas cannot be attributed to a human inventor.
“It comes to its own revelations as it thinks about the world on its own,” Thaler said in an interview with the Financial Times. The algorithm’s creator claimed that this made his system a precursor to so-called artificial general intelligence, which many AI experts believe will one day reach human-level intelligence.
Refusal to recognize the outcome of autonomous systems like this threatens to leave many valuable inventions without legal protection, according to a group of academics including Harvard Law Professor Lawrence Lessig. That would put “billions” invested in AI systems at risk, the group said in a legal brief filed with the US Supreme Court.
Even if most experts reject the idea that artificial intelligence is poised to supplant human inventors, the US Patent Office’s recent decision to begin a review of how AI affects patent rights underscores growing concern about the technology. The three-year-old thesis that AI “cannot invent or create without human intervention” is now being put to the test by the rise of generative AI.
For now, patent practitioners converge on the understanding that ideas generated by AI should be attributed to the humans who design and train the systems or interpret their output. “Any AI system can ultimately be traced back to the people who created it,” says John Villasenor, an engineer and law professor at the University of California.
That understanding will most likely prove outdated very soon. But even now, while it remains roughly adequate, the growing complexity of the technology – even short of full autonomy – poses a number of challenges, experts point out.
Corey Salsberg, head of intellectual property at drugmaker Novartis, stresses that it is not always clear which people are most directly responsible for an AI’s contribution to a patentable discovery. The software developers are often the people furthest removed from the final ideas.
Another problem stems from patent systems’ presumption that an inventor must hold a complete “conception” of the new idea in mind – how it works and how it fits into the world around it. But AIs now take on much of the “dirty work” of creating something new, be it a text, an image or a drug formula. As a result, there is a risk that no single person can claim the “conception,” says Salsberg.
Experts tend to agree that, amid all this, patent rules need to be expanded to reflect new ways of working with AI. Last but not least, it should be taken into account that patent rules are often wielded at the corporate level in so-called patent wars.
One potential outcome of such an overhaul could be new rules requiring detailed disclosure of how AI was used in the process of creating a new work. That would matter for how patent applications are drafted, says Reed Smith’s Morgan.