Monday, September 25, 2017

What can we learn from variation in patent examiner leniency?

Studying the effect of granting vs. rejecting a given patent application can reveal little about the ex ante patent incentive (since ex ante decisions were already made), but it can say a lot about the ex post effect of patents on things like follow-on innovation. But directly comparing granted vs. rejected applications is problematic because one might expect there to be important differences between the underlying inventions and their applicants. In an ideal (for a social scientist) world, some patent applications would be randomly granted or denied in a randomized controlled trial, allowing for a rigorous comparison. There are obviously problems with doing this in the real world—but it turns out that the real world comes close enough.

The USPTO does not randomly grant application A and reject application B, but it does often assign (as good as randomly) application A to a lenient examiner who is very likely to grant, while assigning B to a strict examiner who is very likely to reject. Patent examiner leniency can thus be used as an instrumental variable for which patent applications are granted. This approach was pioneered by Bhaven Sampat and Heidi Williams in How Do Patents Affect Follow-on Innovation? Evidence from the Human Genome, in which they concluded that, on average, gene patents appear to have had no effect on follow-on innovation.
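For readers who want to see the mechanics, here is a minimal sketch of what the strategy looks like in practice: the instrument is each examiner's leave-one-out grant rate, and the grant decision is instrumented in a simple two-stage setup. The dataset and column names are hypothetical, and this is a generic illustration rather than code from any of the papers discussed here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical application-level data; all column names are illustrative.
#   examiner_id: examiner the application was (quasi-randomly) assigned to
#   granted:     1 if the application was granted, 0 otherwise
#   outcome:     e.g., a measure of follow-on innovation or firm growth
df = pd.read_csv("applications.csv")

# Instrument: each examiner's leave-one-out grant rate, i.e., the average
# grant rate over all of that examiner's *other* applications.
counts = df.groupby("examiner_id")["granted"].transform("count")
df = df[counts > 1].copy()  # need at least one other application per examiner
grp = df.groupby("examiner_id")["granted"]
df["leniency"] = (grp.transform("sum") - df["granted"]) / (grp.transform("count") - 1)

# First stage: examiner leniency should strongly predict the grant decision.
first = sm.OLS(df["granted"], sm.add_constant(df["leniency"])).fit()
df["granted_hat"] = first.fittedvalues

# Second stage: regress the outcome on the predicted grant probability.
# (Fitting the stages manually gives the 2SLS point estimate; a real study
# would use a dedicated IV routine for valid standard errors, cluster them
# by examiner, and include art-unit-by-year fixed effects.)
second = sm.OLS(df["outcome"], sm.add_constant(df["granted_hat"])).fit()
print(second.params)
```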

Since their seminal work, I have seen a growing number of other scholars adopt this approach, including these recent papers:

  • What is a Patent Worth? Evidence from the U.S. Patent "Lottery" – Joan Farre-Mensa, Deepak Hegde, and Alexander Ljungqvist use the approach developed by Sampat and Williams on all 34,215 first-time U.S. patent applications filed by U.S. startups since 2001 that received a final decision by the end of 2013. They "find that startups that win the patent 'lottery' by drawing lenient examiners have, on average, 55% higher employment growth and 80% higher sales growth five years later. Patent winners also pursue more, and higher quality, follow-on innovation."
  • The Effect of Patent Protection on Inventor Mobility – Eduardo Melero, Neus Palomeras, and David Wehrheim "suggest that patents make human capital more specific to the employer" and use an examiner leniency instrument to find "that one additional patent granted decreases inventor mobility by approximately 25 percent. The estimated negative effect is nearly twice as large for discrete technologies (chemicals and pharmaceuticals) for which patent effectiveness is greater."
  • Who Feeds the Trolls? Patent Trolls and the Patent Examination Process – Josh Feng and Xavier Jaravel "find that non-practicing entities (NPEs) purchase patents granted by examiners that tend to issue incremental patents with vaguely worded claims. In comparison, practicing entities purchase a very different set of patents, but assert patents similar to those purchased by NPEs."
  • The Ways We’ve been Measuring Patent Scope are Wrong: How to Measure and Draw Causal Inferences with Patent Scope – I mentioned this effort by Jeffrey Kuhn and Neil Thompson to validate the first-claim-word-count measure of patent scope in July, but I didn't mention the role of examiner leniency. Whereas Sampat and Williams based their measure on the likelihood of patents being granted at all, Kuhn and Thompson construct a new instrument to measure an examiner's "scope toughness." They give an example of the use of this instrument (showing that greater scope toughness leads to a lower probability of a patent being declared standard essential). But they say their main purpose is simply to provide useful (and validated) tools for other patent scholars—so budding empirical researchers: take note!
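Kuhn and Thompson's underlying scope measure is simple to describe: the number of words in a patent's first claim (shorter first claims tend to be broader in scope). As a purely illustrative sketch, not their actual code, here is one way to compute it from raw claims text, assuming each claim starts on its own line with its number followed by a period:

```python
import re

def first_claim_word_count(claims_text: str) -> int:
    """Count the words in claim 1, given the full claims text of a patent.

    Assumes each claim starts on a new line with its number and a period.
    """
    match = re.search(
        r"^\s*1\s*\.(.*?)(?=^\s*2\s*\.|\Z)",  # claim 1 up to claim 2 or end
        claims_text,
        flags=re.MULTILINE | re.DOTALL,
    )
    return len(match.group(1).split()) if match else 0

claims = """1. A method comprising transmitting a signal over a network.
2. The method of claim 1, wherein the signal is encrypted."""
print(first_claim_word_count(claims))  # 9
```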

I asked Sampat and Williams if they had any suggestions for researchers who are planning studies using this approach. They replied:
This methodology often gets summarized as requiring random assignment of patent applications to patent examiners. Formal tests of that assumption usually fail—see, e.g., this paper by Righi and Simcoe. However, random assignment of patent applications to patent examiners is sufficient but not necessary for the strategy to be valid. Our take based on interviews with SPEs and others at the patent office (Lemley–Sampat 2012, Cockburn–Kortum–Stern 2003, Frakes–Wasserman 2014) is that it is plausible that applications were "as good as randomly assigned to examiners" from the perspective of being uncorrelated with potential outcomes. But for any given application of the method, you need to show statistical tests investigating that assumption in the particular context and sample of interest.
In our paper, and in several other papers that we have seen using this strategy, applications do not look like they are sorted across examiners in a way that is problematic, based on, e.g., measures of the value of patent applications or the characteristics of the assignees, which suggests the strategy can be valid even if random assignment in a strict sense was not used. But this isn't something that should just be assumed to be valid in all cases—it is an assumption that should be tested and validated in the specific sample of interest to a given study. And where possible it is generally a good idea to verify that the substantive conclusions of the study hold under other identification approaches as well (as we try to do in our paper, and as others have done).
In general, nailing down the causal impact of patents is hard, and we think this methodology adds to the toolkit. At the same time we recognize that the method is not a panacea, and examiners as instruments may work better for some research questions and in some contexts than in others.
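To make that advice concrete: a standard way to probe the quasi-random-assignment assumption is a balance test, regressing characteristics that were fixed before examiner assignment on the leniency instrument within the cells where routing is supposed to be as good as random. Here is a sketch with hypothetical variable names, reusing the leniency construction from the earlier sketch; if assignment is quasi-random, leniency should not predict any of these characteristics:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical data and leniency construction as in the sketch above.
df = pd.read_csv("applications.csv")
counts = df.groupby("examiner_id")["granted"].transform("count")
df = df[counts > 1].copy()
grp = df.groupby("examiner_id")["granted"]
df["leniency"] = (grp.transform("sum") - df["granted"]) / (grp.transform("count") - 1)

# Characteristics determined before examiner assignment (illustrative names).
# Under quasi-random assignment within art unit and filing year, examiner
# leniency should have no power to predict any of them.
pre_determined = ["claim_count_at_filing", "small_entity", "inventor_count"]

for var in pre_determined:
    fit = smf.ols(f"{var} ~ leniency + C(art_unit) + C(filing_year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["examiner_id"]}
    )
    print(f"{var}: coef = {fit.params['leniency']:.3f}, p = {fit.pvalues['leniency']:.3f}")
```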


July 23, 2022 Update: I've continued to see many new papers taking advantage of this instrument, so I thought it was worth adding a note to this post to (1) emphasize that it is important to show statistical tests investigating whether examiner assignment is as good as random for a given application, and (2) note that this assumption is unlikely to hold after October 1, 2020, when the USPTO adopted a new automated application routing system that eliminates any quasi-randomness. 
