The Federal Circuit held in Thaler v. Vidal that an “inventor” must be a human. During the patent drafting process, the human inventors meet with the patent attorney to describe the invention. In this meeting, the patent attorney learns the bounds of the invention and, when drafting the patent application, “fills in the gaps” to produce a complete application. In the ordinary process, if this gap-filling turns out to be the novelty that gets the patent allowed during prosecution, those features would still fall within the scope the inventors contemplated and discussed in the disclosure meeting; that is part of the patent attorney's job. And even if that weren't the case for a particular patent, the patent attorney is a human and could be added as an inventor. But what happens when AI performs the gap-filling? The AI doesn't understand the limits of the invention and could easily go beyond them, adding novelty to the application that no human conceived. If the novelty of the claims was generated by AI, the patent is invalid.
During litigation, patent litigators often depose the patent attorney who drafted the application. I would not be surprised if future depositions include a question requiring the patent attorney to answer under oath whether any part of the patent application was drafted using AI and, if so, which parts. If the defendant's counsel can prove that any part of a claim was “invented” by AI, that claim would be invalid.
The patent system has a history of adding new rules that invalidate many previously filed patents (e.g., the drafting changes attorneys adopted to overcome Alice rejections cannot be used to fix pre-Alice patent applications). There is already case law saying a computer cannot be an inventor. While we don't yet have case law on whether AI-assisted patent drafting is problematic, inventors should be aware of the risks before allowing a patent attorney to use AI to draft their patent applications.