IPWENSDAYs™ Episode 1: DELVING INTO THE USPTO AI SUBJECT MATTER ELIGIBILITY GUIDANCE
by Wen Xie, Founder
The U.S. Patent and Trademark Office’s (USPTO’s) AI subject matter eligibility guidance issued in July 2024 contains three written examples, each with a set of sample claims for which the USPTO conducted its subject matter eligibility analysis. Overall, there are four key takeaways from these examples:
1) It’s not about what you claim; it’s actually about what you don’t claim.
To avoid a Section 101 rejection at the outset, the goal is to avoid reciting one of the judicial exceptions: a mathematical formula, a mental process, or a method of organizing human activity. Practitioners sometimes get bogged down in reciting enough physical structure, assuming that with enough physical structure the claim should obviously be directed to physical structure and not an abstract idea, right? That sounds right, but it’s wrong. It’s not the presence of physical structure, or positively claiming enough structure, that shields your claim from a Section 101 challenge. Rather, the guidance makes clear that if you recite both statutory and non-statutory matter in a claim, the claim is directed to NON-STATUTORY matter under the broadest reasonable interpretation (BRI). That is stated clearly in MPEP §2106, and we will see in these examples that claiming more, or claiming more narrowly, can hurt you because you’ve ended up claiming more non-statutory subject matter.
2) It’s about breadth.
Oftentimes, whether a limitation is an abstract idea comes down to breadth. The USPTO is looking for a particular way to achieve a particular solution, which goes to the practical application analysis under Step 2A, Prong Two.
3) Remember, the sample claims and their corresponding backgrounds are all hypothetical.
All of the examples are hypothetical, not real-world examples, just as the sample claims and backgrounds that came with the 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”) were hypothetical. These examples aren’t meant to be taken as doctrine; rather, they should be used for illustrative purposes, to show how the USPTO applies the law of Section 101 and BRI to the hypothetical facts of a particular claim set.
4) Find a way to argue BRI.
BRI analysis is very prominent in these samples. Practitioners should be looking for opportunities to challenge the examiner’s application of BRI when suitable. And here’s a big issue: BRI requires claim interpretation in light of the specification, the entire specification. But what we see in these examples is application of BRI to hypothetical claims based on hypothetical backgrounds that are clearly not complete disclosures; there are no exemplary embodiments and no drawings. Remember that these examples were written specifically to arrive at these outcomes for illustrative purposes.
So let’s review each one of the examples and their main takeaways.
Example 47, claim 1
This is the only sample claim found eligible on the ground that it recites no judicial exception or non-statutory subject matter at all; it is pure statutory subject matter. Here’s what you should take away from this sample claim: the broadest reasonable interpretation of the claimed ANN requires hardware because the claimed ASIC is a physical circuit. Also, the microprocessor and memory are hardware, not judicial exceptions, in this claim. Examiners sometimes call these components part of the human mind; your brain can be a microprocessor and it has memory. But here, the microprocessor and memory are claimed as part of a circuit. Therefore, it’s hardware.
Example 47, claim 2
This claim is not eligible. Steps (b)-(c) are considered mathematical concepts, while steps (d)-(f) are considered mental processes. The analysis really homed in on step (c), the training step based on an algorithm. The claim requires training by a backpropagation algorithm or a gradient descent algorithm, which are deemed to be mathematical concepts. These methods and algorithms are commonly used in the machine learning art. Note also that the phrase “by a computer” wasn’t afforded patentable weight in this claim.
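To see why the USPTO treats a training step like this as reciting math, it helps to look at what a gradient descent update actually is once the claim language is stripped away. Below is a purely illustrative sketch using a toy one-layer model with made-up data; none of it comes from the guidance or the sample claims.

```python
# Illustrative sketch only (not from the guidance): one gradient descent
# update for a tiny linear model, showing that the training step reduces
# to a sequence of arithmetic operations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))         # toy input features
y = rng.normal(size=(8, 1))         # toy target values
W = np.zeros((3, 1))                # model weights to be trained
lr = 0.1                            # learning rate

y_hat = X @ W                       # forward pass (prediction)
grad = X.T @ (y_hat - y) / len(X)   # gradient of mean squared error w.r.t. W
W = W - lr * grad                   # the gradient descent update itself
```

Apart from the setup, every line is an arithmetic operation, which is essentially why the guidance characterizes this kind of training step as a mathematical calculation.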
Example 47, claim 3
This is another method claim using an ANN. This claim recites a practical application by way of a technological improvement. In particular, the background for this sample was written so that steps (d)-(f) allow for certain technological improvements in detecting malicious network packets efficiently. Specifically, the claim recites detecting a source address associated with malicious network packets and then dropping the malicious network packets in real time. Here’s a potential problem: depending on how you’ve defined malicious network packets in your disclosure, there could be a detection issue when it comes to enforcement. A lot of times, technological improvements involve third-party action. You need to watch out for this because you might overcome Section 101 by claiming a technological improvement, but then have other problems when it comes to enforcing your claim.
Practice tip: If you find yourself in a situation in which the examiner wants you to add potentially third-party action to your claim to establish a practical application, have an interview. Explain that you’ll have a detection or enforcement issue with this kind of claiming. Try to find a workaround. Many examiners will actually work with you.
Does example 47 conflict with example 39 of the 2019 PEG?
It’s been commonly reported that Example 39 of the 2019 PEG is no longer as effective for overcoming Section 101 rejections as it once was, and the reason might be attributed to Example 47 of the new AI Section 101 guidance. The method claims in Example 47 are use claims with a training step, whereas Example 39 is a training claim comprising multiple steps. I believe these claims don’t necessarily conflict.
Practice tip: If you find that Example 39 isn’t as useful as it was in the past, this is another opportunity to conduct an interview with the examiner and argue using Example 39 during the interview. Should the examiner appear unconvinced, you can ask why and get an answer in real time. If it is in fact due to the new examples of the AI Section 101 guidance, you should then distinguish between the cases and present reasons why Example 39 should apply over the examples of the new guidance.
Example 48, claim 2 (skipped claim 1)
Here, the USPTO determined that steps (f) and (g) provide a practical application by means of a technological improvement. The reasoning was pretty cut and dried, so no further explanation here. What is interesting about this example is that the discussion goes to some length to state that step (g) is not a mathematical concept. Even though the disclosure explains that the stitching could be performed by an overlap-add method, which is a mathematical operation, the claim recites no details of how the stitching is performed. Additionally, while the claim recites variables, variables on their own are not mathematical relationships, formulas, or calculations. Therefore, the combining step is merely based on or involves a mathematical concept but does not recite a mathematical concept.
That’s interesting, because previously we saw that reciting a backpropagation algorithm or a gradient descent algorithm meant the claim recited a mathematical concept. But when a claim generically recites stitching, which could also involve performing mathematical steps, and the disclosure does not restrict the stitching step to performing a mathematical calculation, then the stitching step does not recite a mathematical concept.
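For context, here is what an overlap-add style combination can look like in code. This is my own hypothetical sketch, not anything recited in Example 48; the point is that the math lives in how the combining is performed, which the claim never specifies.

```python
# Illustrative sketch only (not from the guidance): an overlap-add style
# combination of two overlapping signal segments. The claim recites
# "combining" generically; this is just one mathematical way to do it.
import numpy as np

seg1 = np.array([1.0, 2.0, 3.0, 4.0])    # first processed segment
seg2 = np.array([5.0, 6.0, 7.0, 8.0])    # second processed segment
overlap = 2                               # samples shared between segments

out = np.zeros(len(seg1) + len(seg2) - overlap)
out[:len(seg1)] += seg1                   # place the first segment
out[len(seg1) - overlap:] += seg2         # add the second, summing the overlap
```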
Practice tip: Look for situations in which you can recite more broadly; this example shows how going broader can help. Generically claiming stitching, without restricting the stitching method to a mathematical calculation, still saved this limitation from being treated as a mathematical concept, even though the claim was also not specific as to what other methods could be used.
Example 48, claim 3
According to the guidance, step (b) requires converting a time-frequency representation of the mixed speech signal into embeddings in a feature space as a function of the mixed speech signal, which the guidance says is a mathematical equation written in text format. Step (c) requires clustering the embeddings by a k-means clustering algorithm, which the guidance says is a mathematical calculation. Step (d) obtains masked clusters by applying binary masks to the clusters, which the guidance says is also a mathematical calculation.
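To make steps (c) and (d) concrete, here is a minimal sketch of a k-means assignment step over toy embeddings, followed by binary masks built from the cluster labels. The data and dimensions are invented for illustration and have nothing to do with the actual speech-separation hypothetical; the point is simply that each step boils down to distance calculations and element-wise arithmetic.

```python
# Illustrative sketch only (not from the guidance): one k-means assignment
# step over toy "embeddings," then binary masks derived from the labels.
import numpy as np

rng = np.random.default_rng(1)
emb = rng.normal(size=(6, 4))                   # toy embeddings (6 bins, 4 dims)
centroids = emb[:2].copy()                      # initialize 2 cluster centers

# k-means assignment: each embedding goes to its nearest centroid
dists = np.linalg.norm(emb[:, None, :] - centroids[None, :, :], axis=2)
labels = np.argmin(dists, axis=1)               # cluster label per embedding

# binary masks: a 0/1 indicator per cluster, one entry per embedding
masks = np.stack([(labels == k).astype(float) for k in range(2)])
masked = masks[:, :, None] * emb[None, :, :]    # apply each mask to the embeddings
```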
This example is a bit worrisome because using a DNN, a k-means clustering algorithm, or binary masks are all well-known machine learning and data processing methods, as are the backpropagation and gradient descent algorithms presented in a previous claim. That’s the thing with these hypothetical claims: when you really take a close look and step back, it’s very clear they all seem to have novelty and obviousness issues. These claims are not directed to novel and nonobvious subject matter, which makes sense because the USPTO isn’t back there inventing novel features to claim. They are using what appear to be common limitations seen in AI claims and putting them together for illustrative purposes. But because they are so well known, there’s really no other way to describe these methods. So, if you’re claiming these features, it looks like the USPTO does consider them to be mathematical concepts. And this is where “well-understood, routine and conventional” creeps its way into the guidance even though it’s never specifically mentioned. When confronting this issue, look for a practical application, obviously, or do what we saw in the previous example, which is to find a way to claim broadly.
Practice tip: Notably, steps (e) and (f) were deemed not to recite mathematical concepts because these steps do not “require mathematical formulas, calculations or relationships.” This is good language to adopt, along with a citation to Example 48 of the guidance, when traversing a Section 101 rejection.
Example 49, claim 1
Here, we see AI claims in the field of life sciences. The background states that the applicant invented a new drug, Compound X, for treating glaucoma, and also filed an application describing how Compound X can be topically administered in eye drop form after micro stent implant surgery. The first claim is really about diagnosing a patient by assessing their risk for glaucoma using AI. Claim 1 was deemed to be directed to math, a mental process, and a law of nature because the guidance states that determining the risk for glaucoma was based on the relationship between genotype and phenotype. This example illustrates the difficulty of patenting diagnostic claims.
Example 49, claim 2
Claim 2 is eligible and requires administering Compound X, which is effectively positively claimed here. Note that the background states that Compound X is a novel compound and not part of a common treatment course. So this is what they’re saying: the AI part isn’t eligible, but positively claiming the novel compound is.
Does the guidance provide actual guidance on subject matter eligibility?
Yes. The guidance provides an explanation of what is deemed to be a mathematical concept, and it strongly appears that commonly used or well-known machine learning techniques are mathematical concepts under the guidance. Practitioners should try to establish a practical application with these features, that is, a particular way to achieve a particular solution, or a technological improvement. And if you can, go broad. Draft your specification in a way that doesn’t restrict steps or processes to requiring mathematical formulas or algorithms. It’s much harder than you think!
Are the examples hostile to AI or machine learning claims?
Whether the guidance is hostile is yet to be determined, but it is flawed because these are hypothetical claims and not real-world examples. Some of the sample claims might very well be unpatentable due to Section 102 and 103 issues, which weakens the standing of the AI limitations on the Section 101 front. But the main problem is that these hypothetical claims are supported by corresponding backgrounds that are also hypothetical and are clearly not detailed descriptions. It then becomes impossible to conduct a true application of BRI with these samples, as BRI requires examining claims in light of an entire disclosure, not excerpts of hypothetical disclosures lacking practical embodiments and drawings, among other things. The result is that these claim examples and their AI limitations are being examined outside the context of a true claim presentation.
Subscribe to IPWENSDAYs™.