Before the airplane was invented, some people were concerned that everything that could be invented had been invented. Obviously, that was not the case then, and it’s certainly not the case now. So as you create novel inventions, how do you protect them? What’s the process? And what tools can help you and your team navigate the world of patents?
Janal Kalis: It was like a black hole. Almost nothing got out of there alive. So it became slightly more possible to try and steer your application away by using magic words . . . it didn’t always work but sometimes it did.
Ginette: I’m Ginette.
Curtis: And I’m Curtis.
Ginette: And you are listening to Data Crunch.
Curtis: A podcast about how data and prediction shape our world.
Ginette: A Vault Analytics production.
Here at Data Crunch, we research how data, artificial intelligence, and machine learning are changing things. We see new applications every single day as we research, and we realize we can’t possibly keep you well enough informed with just our podcast. So to help keep you, our listeners, informed, we’ve started collecting and categorizing all of the artificial intelligence applications we see in our daily research. It’s on a website we just launched. Go explore the future at datacrunchpodcast.com/ai, and if you want to keep up with the artificial intelligence beat, you can sign up on the website for our weekly newsletter highlighting the top three to four applications we find each week. It’s an easy read, we really enjoy writing it, and we hope you’ll enjoy reading it. And now, let’s get back to today’s episode.
Curtis: Today we dive into a world filled with strategy, intrigue, and artful negotiation, a world located in the wild west of innovation.
Ginette: In this world, you fight for your right to own something you can’t touch: your ideas. You and your team ride out into this wild west to mark your territory, drawing a border with words. Sometimes during this land grab, people get a lot of what they want, but generally they don’t, so you have to negotiate with the people in charge, called examiners, to decide what you can own. But what if you’re assigned someone who isn’t fair? Or what if you want to avoid someone who isn’t fair? Is there anything you can do? Maybe, but first you need to understand how the system works. Let’s dive into the world of patents and hear from Trent Ostler, a patent practitioner at Illumina.
Trent: The kind of back and forth that goes on oftentimes is trying to get broad coverage for a particular invention, and chances are, the examiner, at least initially, will reject those claims.
Curtis: Claims define the boundaries of the invention you’re seeking to protect. It’s like buying a plot of land: there are boundaries that come with the property. These claims define how far your ownership of the invention extends, and they can be used to tell the examiner why he or she should allow, or approve, your exclusive rights to your idea, or in other words, grant you a patent.
Trent: The examiner will say that the claims are too broad, that they don’t deserve patent protection. He could say that they would have been obvious. He could say that it’s been done before, that it’s not novel. And so what this means for anyone trying to get a patent is that it’s very complex. There are thousands of pages of rules and cases that come out that further refine what it is that’s too broad or what it is that makes something obvious, and oftentimes there is a balancing act of coming close to the line to get the protection that you deserve but not going overboard.
Ginette: So there’s a back-and-forth volley between the inventor’s lawyers and the examiner. The examiner says, “hey, you don’t deserve these claims,” and gives you a sound reason or argument for it. Then you and your team try to persuade him or her otherwise and hopefully overcome those rejections by arguing why your claims are reasonable and why the examiner should allow, or approve, them.
Trent: There is always going to be a back-and-forth with the examiner, and when an examiner does have credibility and is applying reasonable rejections backed up with sound arguments, it’s obviously not going to be easy to overcome those rejections through argument alone, and so you have to resort to other strategies, such as narrowing the claims. What that means is not getting as broad of coverage for your invention but still getting some coverage.
Curtis: When an examiner rejects your application for a patent, you have options, and if everything goes well as you carefully navigate them, you just might be allowed a patent.
Trent: When the examiner rejects your application, you are entitled to appeal that decision to judges, to some sort of second set of eyes, and they assess whether that examiner was correct or not.
Ginette: When you appeal your patent case, there’s a chance that the judges could overturn the examiner’s decision and allow you to lay claim to your intellectual property.
Curtis: In this world guided by laws and the outcomes of previous patent cases, there’s often ambiguity and a lack of visibility, and Trent has developed a product, called Anticipat, that uses data to look deeper into the inner workings of the patent appeals process.
Trent: Anticipat seeks to make sense of all of the appeals decisions to shed some light on how to get a patent and to increase the efficiency of the current process. For each decision, we extract the particular ground of rejection. There can be 10 or so different grounds that the examiner can reject the application on, and I already touched on a couple of those: if it’s too broad of a claim, if it would have been obvious, novelty. We catalog all these rejections so that you can look at different trends and different patterns. Maybe there are certain examiners who are getting overturned by the judges; the higher that rate, the more that examiner is being unreasonable with their rejections.
What the Patent and Trademark Office, the PTO, does is post all of these decisions online for free. They post thousands, literally thousands, of these decisions every year: appeals in which the judges either overturn the examiner and say that the examiner was wrong in that rejection, or affirm the examiner and side with him by saying that his rejection was good.
We’re extracting all the information that the PTO has, trying to make sense of that, and tying in information that can be relevant to someone that’s trying to get a patent.
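To make the cataloging Trent describes concrete, here is a minimal sketch of the idea: each posted decision is flattened into one record per ground of rejection, with its outcome, so rates can be tallied later. The field names and record layout are our own illustration, not Anticipat’s actual schema (though deriving the tech center from the first two digits of the art unit number does match USPTO numbering).

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record layout; Anticipat's real schema is not public.
@dataclass(frozen=True)
class GroundOutcome:
    decision_id: str
    examiner: str
    art_unit: str     # e.g., "3689"
    tech_center: str  # e.g., "3600" (first two digits of the art unit)
    ground: str       # e.g., "102 novelty", "103 obviousness", "101 abstract idea"
    outcome: str      # "affirmed" or "reversed"

def catalog(decisions):
    """Flatten decisions into one row per appealed ground of rejection."""
    rows = []
    for d in decisions:
        for ground, outcome in d["grounds"].items():
            rows.append(GroundOutcome(
                decision_id=d["id"],
                examiner=d["examiner"],
                art_unit=d["art_unit"],
                tech_center=d["art_unit"][:2] + "00",
                ground=ground,
                outcome=outcome,
            ))
    return rows

# Invented sample data for illustration.
decisions = [
    {"id": "2017-001234", "examiner": "Examiner A", "art_unit": "3689",
     "grounds": {"103 obviousness": "affirmed", "101 abstract idea": "reversed"}},
    {"id": "2017-005678", "examiner": "Examiner B", "art_unit": "2128",
     "grounds": {"102 novelty": "reversed"}},
]
rows = catalog(decisions)
print(Counter((r.ground, r.outcome) for r in rows))
```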
Ginette: In addition to tracking how often examiners’ rejections are overturned, which is an important data point for understanding what may or may not be a reasonable rejection, Trent’s tool looks at overturned rejections by the group an examiner belongs to, called an art unit or art group, so named because its examiners review similar types of what the patent world terms “art.” Above an art group is what’s called a tech center, which houses several art groups. There are nine of these tech centers under the USPTO umbrella, and Trent’s tool looks at the analytics at this level as well.
Trent: We like to look at reversal rates because, from the patent-office perspective, there are two ways for bad rejections to get weeded out. Before it goes on appeal, there’s a pre-appeal conference, and sometimes, if the rejection is not good, this panel of examiners will kick it back to the examiner, who can either write another rejection or allow the case. And then a similar conference happens again later in the appeal process.
And from the patent practitioner’s perspective, it costs a lot of money and time to go through an appeal, and only 1 to 2 percent of all applications go to appeal, so those who do are invested enough in their position to go through with the whole process.
So we think that the appeal is actually a good data point for measuring whether a rejection that an examiner applies is reasonable or not, because otherwise it would have been weeded out by those two other steps. We keep track of the reversal rates for the particular examiner, for the art group, and for the tech center, which form different levels of a hierarchy, and then you can see whether there’s an anomaly at any of those levels. If an examiner is reversed at a higher rate than the group of 20 or so examiners he works in, then maybe that suggests it may not be worth the time to work with this examiner and go back and forth; it may just be easier to appeal and have a judge decide in your favor. So there is that advantage.
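As a rough sketch of the hierarchy Trent describes, the snippet below (reusing the `rows` from the earlier sketch) tallies reversal rates at the examiner, art-unit, and tech-center levels and flags any examiner whose rate sits well above his art unit’s. The 15-point margin is an arbitrary illustrative threshold, not anything Anticipat is known to use.

```python
from collections import defaultdict

def reversal_rates(rows, key):
    """Fraction of appealed grounds reversed, grouped by key(row)."""
    totals = defaultdict(int)
    reversed_counts = defaultdict(int)
    for r in rows:
        totals[key(r)] += 1
        if r.outcome == "reversed":
            reversed_counts[key(r)] += 1
    return {k: reversed_counts[k] / totals[k] for k in totals}

by_examiner    = reversal_rates(rows, lambda r: r.examiner)
by_art_unit    = reversal_rates(rows, lambda r: r.art_unit)
by_tech_center = reversal_rates(rows, lambda r: r.tech_center)

# Flag examiners reversed noticeably more often than their art unit as a whole.
unit_of = {r.examiner: r.art_unit for r in rows}
for examiner, rate in by_examiner.items():
    unit_rate = by_art_unit[unit_of[examiner]]
    if rate > unit_rate + 0.15:  # arbitrary 15-point margin for the sketch
        print(f"{examiner}: reversed {rate:.0%} vs. art-unit {unit_rate:.0%}; "
              "appealing may beat arguing")
```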
The other advantage is that the analytics page can provide you with the specific rationales that were relied on to overturn the examiner. For some of the grounds of rejection, such as obviousness, it’s a very nuanced rejection, and we actually have a newly introduced tagging system that tags every different type of rationale that can be used for obviousness. We have 27 rationales that you can use either for or against obviousness, and this is very helpful, because otherwise you have to rely on your own experience and your own memory, and sometimes go and do a lot of legal research; with this, you get a very organized structure for all the possible arguments that you could use. We also make note of the legal cases that the board relies on.
Ginette: The board is the Patent Trial and Appeal Board, or the PTAB.
Trent: Just like a practitioner responding to an examiner, the board can rely on cases that come out, on the PTO rules, which are collected in the MPEP, and on various guidelines that the PTO provides on a regular basis. It’s tough to keep track of all these legal support documents, but thankfully the board does that, and so it’s an easy way to look at the specific rationales that the board applied and the legal support that they cited.
There are a number of different panels of judges that come out with these decisions, and they all use different language. Some of the language is consistent, such as the statutory grounds of rejection, and that’s helpful, but when it comes to the rationales, that has been a challenge for us. We’ve had to rely on manual intervention to make sure that our algorithm is correctly picking out the right rationales. That’s a work in progress; we’re continuing to feed the algorithm training sets of what we call correct rationales. We’re still working on that.
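Training an algorithm on hand-labeled “correct rationales,” as Trent describes, is a standard supervised text-classification setup. Purely as a sketch of that general approach (we don’t know what Anticipat actually runs under the hood), here is how one might tag obviousness rationales with scikit-learn; the excerpts and rationale labels below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: decision excerpts hand-labeled with the obviousness
# rationale they reflect, standing in for the manually verified examples
# Trent mentions. A real system would need far more data per label.
excerpts = [
    "combining the references would have yielded predictable results",
    "the examiner relied on impermissible hindsight reasoning",
    "the prior art teaches away from the proposed combination",
    "mere substitution of one known element for another known element",
]
labels = ["predictable-results", "hindsight", "teaching-away", "known-substitution"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(excerpts, labels)

# Tag a new excerpt with the rationale the classifier thinks the panel applied.
print(model.predict(["this is a simple substitution of one known part for another"]))
```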
Curtis: The ultimate goal of this process is to offer protection to inventions that deserve it, neither over-extending nor under-extending that protection. The aim is to allow quality patents.
Trent: I think that the Patent Office is concerned about, or at least they would say that they’re concerned about, patent quality, and a big part of patent quality is good examination, having the examiners do a quality examination of each application. So I think the hope of many people in this field is that when examiners frequently get overturned, there should be some sort of concern about that.
Ginette: It’s known in the patent community that there are some examiners who have never allowed a case and brag about that fact. These examiner analytics show that some examiners are statistical outliers: while some allow no cases and brag about it, others allow more cases than the average examiner. One would think that the USPTO would rein in the outliers, both the examiners who allow few if any cases and the examiners who allow what seems like far too many compared to their peers. But as of yet, the USPTO has not recognized the value of these analytics, although they do have analytics of their own, which are somewhat skewed.
Trent: There is not a formal accountability mechanism. There is some sort of metric. The patent office does keep track of appeals outcomes, but they keep track of the outcomes in a way that skews the data towards affirmances. Oftentimes you’re not appealing only one ground of rejection. You can be appealing four or five grounds, and if one of those rejections sticks, then the patent office treats the entire appeal as affirmed, even if the other grounds of rejection are reversed. Because of that, the data skews towards showing that examiners are affirmed more often than one might suspect.
Curtis: Trent’s tool separates out each ground of rejection so users know how the board handled each individual rejection.
Trent: What this tool [Anticipat] does that has not been done before is granularize what’s being decided in each rejection. It keeps track of each individual ground separately rather than lumping them into one outcome and just saying this decision is affirmed, reversed, or affirmed in part. It goes into each of the rejections and pulls it out. And with that granularity, you can find some very interesting things: some of the grounds of rejection are overturned a lot more frequently than others, such as novelty, which is Section 102, and Section 112, which relates to the breadth of the claim and having proper support for the claim in the disclosure. These two grounds are over 50 percent reversed, which is high; for me, it was unexpectedly high. Other grounds of rejection, such as obviousness and abstract idea, are much lower. These are areas of the law where one would argue the board has more discretion to side with the examiner, because obviousness is a lot more of a nuanced argument. With novelty, either the examiner found a reference that teaches your invention or he didn’t; it’s a little bit more of a mechanical exercise.
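A small worked example makes the difference between the two bookkeeping methods plain. Under the PTO-style metric Trent criticizes, a decision counts as affirmed if any one appealed ground sticks; the per-ground view credits each reversal separately. The sample outcomes below are invented.

```python
from collections import defaultdict

# Invented sample: each appealed decision maps every ground to its outcome.
decisions = [
    {"103 obviousness": "affirmed", "102 novelty": "reversed", "112 support": "reversed"},
    {"101 abstract idea": "affirmed"},
    {"102 novelty": "reversed"},
]

# PTO-style metric: the whole decision is "affirmed" if any ground survives,
# even when every other ground is reversed.
pto_affirmed = sum(any(o == "affirmed" for o in d.values()) for d in decisions)
print(f"PTO view: {pto_affirmed}/{len(decisions)} decisions affirmed")  # 2/3

# Per-ground view: tally each ground separately, as Trent describes.
per_ground = defaultdict(lambda: {"affirmed": 0, "reversed": 0})
for d in decisions:
    for ground, outcome in d.items():
        per_ground[ground][outcome] += 1
for ground, counts in sorted(per_ground.items()):
    total = counts["affirmed"] + counts["reversed"]
    print(f"{ground}: {counts['reversed']}/{total} reversed")
```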
I think that one of the big interests we’re seeing is in Section 101, which is, if there is such a thing in patent law, a very hot item. A couple of years ago the Supreme Court came out with a case that ruled a particular patent to be an ineligible abstract idea, and many thought that would spell the end for software patents, because the opinion used language that seemed to indicate it would be very challenging to get a patent related to software . . .
Curtis: This is the famous Alice decision, which ruled that a specific patent claimed an ineligible abstract idea.
Trent: and we’ve found that that’s not the case: software is still patentable, but the decision has made it more challenging, and both examiners and practitioners are trying to find out exactly how a software application gets to be eligible for a patent. It’s not black and white at all. I think that using the board decisions, because there are so many of them deciding these types of questions, is a way for patterns and trends to be discovered, and especially a way to look for arguments that the board found persuasive, so that the practitioner can use them in his or her own practice.
Ginette: In some cases, it doesn’t matter how many statistics you and your lawyers have; they may not help you avoid a tough examiner. But they may help you with a strategy for handling that examiner. Other times, the statistics may help you steer your case away from a tough art group.
We spoke with another patent practitioner, Janal Kalis of Schwegman, Lundberg & Woessner, about her experience with the analytics and how they’ve helped.
Janal: I’m working on a case right now where the examiner’s allowance rate is way below that of his art group. In fact, it’s discouragingly low, so we are leaning towards filing an appeal a lot earlier than we normally would in order to move this case along, and without the analytics, we wouldn’t be able to do that.
It’s very expensive too, so if we can see that an examiner has a very low allowance rate and long prosecutions, then we know that filing RCEs is not likely to be very productive, and it’s going to be very expensive.
Ginette: An RCE is a request for continued examination.
Janal: Appeals are not cheap either, but in a situation like the one we’re in, they may be better value than just going the RCE route.
It’s very difficult to get any kind of control over which examiner will be examining a particular case. In the case I’ve mentioned, the overall art group has an allowance rate of 73 percent, which is pretty average for the patent office, but this particular examiner has an allowance rate of 42 percent. There’s no way we can steer a case away from this examiner. The best we can do is to have frequent interviews and call the supervisor in for every interview in the hope that the supervisor will knock some sense into the examiner, which may or may not happen.
Now, a situation where we may have had some modicum of control was, and still is, related to 101 rejections in Art Group 3600. After the Alice decision, Art Group 3600 was a killing field for software patents. It was like a black hole: almost nothing got out of there alive. In fact, usually nothing did. So there it became slightly more possible to try and steer your application away from 3600 by using magic words in the field of the invention. Again, it didn’t always work, but sometimes it did.
Ginette: While these stats can’t always steer an application away from an unwanted examiner, they can help, at least for patent lawyers, in other surprising ways.
Janal: We recently found from an Anticipat report that our firm ranks at the very top for getting 101 rejections overturned on appeal. That’s not information that used to be easy to get. Without Anticipat, it actually still is very difficult and expensive to get, but they’ve definitely streamlined the process. With that information, we were able to pull all of the briefs where we had been successful and identify the attorneys who had been successful, and we are using that information on future appeal briefs so that we can perhaps even improve our record.
Surprisingly, we have found this examiner information to be very useful in marketing, especially for existing clients, because it becomes information that we can provide to the client, along with predictions as to when we might get an allowance. If the examiner’s performance is typical and we see that the turnaround is within three years, then we can say it’s probably going to be a fairly typical prosecution. Some might be faster, some might be slower, but we have the data now to present to clients, and they really like it.
Trent: What I see this product doing is helping to further develop the technology coming in and improving the efficiency of the process. I kind of envision a Turbo Patent type of product, and I hope that this facilitates a better quality prosecution experience and also a more efficient one.
Conclusion: A huge thanks to both Trent Ostler and Janal Kalis for speaking with us, and as always, check out datacrunchpodcast.com for show notes and attribution. And if you’d like to easily learn about the latest artificial intelligence and machine learning applications every week, go sign up for our weekly newsletter at datacrunchpodcast.com/ai. Thanks for listening.