Tuesday, November 14, 2017

What is Essential? Measuring the Overdeclaration of Standards Patents

Standard essential patents are a relatively hot area right now, and seem to be of growing importance in the academic literature. I find the whole issue fascinating, in large part because most of the decisions are handled through private ordering, and so most of the studies are based on breakdowns.

One such breakdown occurs when companies declare too many patents essential to a standard, claiming that patents must be practiced to implement the standard when in fact they need not be. The incentives for doing so are obvious: once a patent is declared essential, it is easier to argue for royalties or cross-licensing. But there are also important incentives against leaving patents out, for doing so may bring penalties in terms of participation in formation of the standard in the first place. Given that the incentives all align toward disclosure, it is no wonder that some companies push back against paying. That said, if portfolio theory holds true--and I think it does in most cases--it doesn't matter much whether there are 10 or 100 patents, as long as the first few are strong and essential. But that's an argument for another day.

Just how prevalent is this overdeclaration problem? One paper tries to figure that out. Robin Sitzing (Nokia), Pekka Sääskilahti (Compass Lexecon), Jimmy Royer (Analysis Group, Sherbrooke U. Economics), and Marc Van Audenrode (Analysis Group, Laval U. Economics) have posted Over-Declaration of Standard Essential Patents and Determinants of Essentiality to SSRN. Here is the abstract:
Not all Standard Essential Patents (SEPs) are actually essential – a phenomenon called over-declaration. IPR policies of standard-setting organizations require patent holders to declare any patents as SEPs that might be essential, without further SSO review or detailed compulsory declaration information. We analyze actual essentiality of 4G cellular standard SEPs. A declaration against a specific technical specification document of the standard is a strong predictor of essentiality. We also find that citations from and to SEPs declared to the same standard predict essentiality. Our results provide policy guidance and call for recognition of over-declaration in the economics literature.
This is an ambitious study. The authors used data on declared SEPs (for the ETSI 4G LTE standard, among others) that were independently judged* by technical experts. They then performed regressions to determine whether specific factors had an effect on a patent being "actually" essential. One key finding was that when a patent was declared against a specific standards document, it was much more likely to be deemed essential than if it were declared for the standard generally. My takeaway is that when the specifics are outlined, companies know what their patents cover, but when faced with a broad standard, they will contribute anything they think might be close.

They also found that patents later assigned to NPEs were no more likely to be nonessential. Similarly, while firm size and R&D investment had a statistically significant effect on the likelihood of being actually essential, the effect was so small as to be practically insignificant. Finally, they find that longer claims (which are theoretically narrower) are, in fact, less likely to be essential.

As with other papers, there is a lot of data here that is worth looking at. But the final conclusion is an interesting one, worth carrying over to other papers: the traditional measures that economists use to judge patent value (such as citations) do not predict whether a declared patent will be technically essential. This adds to a growing body of findings questioning the use of these metrics.

*The authors explain the trustworthiness of their data. I'll leave it to the reader to decide whether it holds up.

Sunday, November 12, 2017

Ryan Abbott on Machines as IP-Generators, and Dan Burk on Women as IP-Generators

This blog post addresses two different articles that might at first blush seem to be very different. The first is Ryan Abbott's new article Everything Is Obvious, which explores the implications of machine-generated IP for the nonobviousness standard of patentability. Abbott argues the inventiveness standard should be adjusted to take into account the new reality that inventors are frequently assisted by machines or, in some cases, are machines. The second article is Dan Burk's Diversity Levers, published in 2015 in the Duke Journal of Gender Law & Policy. In the article, Burk argues the standard for nonobviousness should be adjusted to take into account the unique mindset and institutional situation of female inventors. (To be clear, Burk is not coming at this issue out of the blue. He has previously written about feminism in collision with copyright, arguing that copyright can be used to suppress feminist discourse).

Abbott's thesis is that, in comparison to machines, humans are all a little less skilled, so a human-based obviousness standard will necessarily lead to too many patents if machines are commonly employed. Burk's point is that, in comparison to men, women are typically more risk-averse, so a male-based obviousness standard will necessarily lead to too few female-invented patents.

Tuesday, November 7, 2017

Tracking the Sale of Patent Portfolios

Finding out about patent sales and prices is notoriously difficult, yet critically important for patent valuation. Brian Love (Santa Clara Law), Kent Richardson, Erik Oliver, and Michael Costa (Richardson Oliver Law Group) have helped us all out by posting An Empirical Look at the "Brokered" Patent Market to SSRN. Here is the abstract:
We study five years of data on patents listed and sold in the quasi-public “brokered” market. Our data covers almost 39,000 assets, an estimated 80 percent of all patents and applications offered for sale by patent brokers between 2012 and 2016. We provide statistics on the size and composition of the brokered market, including the types of buyers and sellers who participate in the market, the types of patents listed and sold on the market, and how market conditions have changed over time. We conclude with an analysis of what our data can tell us about how to accurately value technology, the costs and benefits of patent monetization, and the brokered market’s ability to measure the impact of changes to patent law.
The article provides some really useful data about brokered patent portfolios - that is, groups of patents sold by brokers rather than "secretly." While brokered transactions are also confidential, their public offering makes them more visible than direct company-to-company transactions.

The information is quite interesting: the number of patents in each portfolio is quite small - most contain fewer than a dozen. The offering prices have dropped over the last five years (shocker). Operating companies sell a lot of these, and PAEs buy them (something I pointed out five years ago in Patent Troll Myths, and which gave rise to the LOT Network framework - in fact, Open Invention Network is now a key buyer). There is a lot more data here, and I don't want to preempt the paper by just repeating it all - it's worth a look. I will note that, as the authors point out, this isn't the whole market and they can't accurately capture sale prices, so they use a "spot check" to estimate what they expect them to be.

Having introduced the paper, I do want to ask, like every good academic, "But what about my article?" Here I'll note a couple of takeaways from the paper that bear on my own work on this subject, Patent Portfolios as Securities. First, the opening portion of that paper was dedicated to the notion that buying and selling portfolios isn't just about patent trolls. I told anecdotes and used some data, so I'm glad to see a broader-based survey provide stronger support for that assertion. Second, my argument was that treating portfolios as securities would force more transparency in sales and valuations. This paper's results support this notion in two ways. It shows how difficult it is to get any kind of transparency, even when you have brokered transactions. It also shows how easy it would be to jump from a brokered transaction to a more transparent clearinghouse that might provide the type of valuation information that market participants crave. I view this paper as a useful follow-on to my own, and hope to write more about how it might bear on the treatment of patent portfolios as assets.

Anyone interested in real-world patent market transactions should give this paper a read. It provides a view into the system that we don't often see. I found it really useful.

Tuesday, October 31, 2017

Using Experts to Prove Software Copyright Infringement

[UPDATE: It turns out that my initial thoughts mirrored EA's here, and that Antonick filed a reply brief. It's interesting enough that I took a closer look at the initial briefing (and at the District Court), and I've updated/edited my post below.]

I ran across an interesting cert. petition today that I thought I would share and discuss. The case is Antonick v. Electronic Arts, 841 F.3d 1062 (2016), and the petition (filed by David Nimmer, Peter Menell, and Kevin Green) is here. The case is interesting because it is about software copyright infringement, a topic near and dear to my heart on which I've written and blogged several times.

It's also topically relevant, because it is about Madden Football, one of the more popular sports video game franchises (it's probably the most popular, but I didn't do a search to find out). Antonick was an author of the original game, dating all the way back to the Apple II (!), and had a license providing that he would be paid for any derivative works. And so the question was whether his code was incorporated into newer versions of the software published for Sega and Super Nintendo.

The problem was that nobody could find all the source code for any of the versions to compare, and the graphic displays were not admitted into evidence. There were snippets, drafts, and binary data files. Using these, "Antonick's expert, Michael Barr, opined that Sega Madden was substantially similar to certain elements of Apple II Madden. In particular, Barr opined that the games had similar formations, plays, play numberings, and player ratings; a similar, disproportionately wide field; a similar eight-point directional system; and similar variable names, including variables that misspelled 'scrimmage.'" Based on this and other circumstantial evidence, the jury decided for infringement and the plaintiff.

But the District Court overturned the jury's verdict, granting judgment as a matter of law. The Court ruled that the jury could not decide infringement because it did not have the source code in evidence to compare, and that the expert's testimony was insufficient to show infringement.

And here is where the interesting legal issue comes into play: what is the role of expert testimony? I'll discuss more after the jump, but here's a teaser: I think the expert can play a role, and while that is the focus of the "legal" issues in this case, I am not sure that's what's driving the opinion. In other words, my sense is that the cert. petition's claim of a circuit split is in law more than it is in practice. That may be enough for a certiorari grant; I tend to think that Antonick got a raw deal here, so if his lawyers can convince the Court to take this case, more power to him. That said, my gut says that, perhaps through no fault of his own other than waiting too long to sue, the plaintiff just didn't have enough evidence here--and if he did, he couldn't convince the District or Appellate courts of it.

Sunday, October 29, 2017

Rebecca Wexler on IP in the Criminal Justice System

The protection of criminal justice technologies with trade secrets is a hot topic. Last Term, the Supreme Court called for the views of the solicitor general in Loomis v. Wisconsin on whether using proprietary software for sentencing is a due process violation, though they ultimately denied the cert petition. Last month, I described Natalie Ram's forthcoming article, which focuses on the innovation angle: Ram argues that trade secrecy protection is not necessary for efficient levels of innovation for these kinds of technologies. I just enjoyed another terrific article in this space by Yale Information Society Project Fellow Rebecca Wexler: Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, forthcoming in the Stanford Law Review.

Wexler describes the growing privatization of the criminal justice system, particularly through black-box algorithms. She explains that the importance of trade secrecy in this area is likely to grow: data-driven systems for forensics or risk assessment are more difficult to protect with patents post-Alice, whereas trends like the federal Defend Trade Secrets Act of 2016 seem to have strengthened the value of trade secrets. Wexler agrees that the innovation policy rationale for secrecy of criminal justice technologies is unconvincing and that this secrecy may raise due process concerns, but the focus of her article is on the problems with this trend as a matter of the law of evidence. She argues that the trade secrets privilege that two-thirds of states have codified in their evidence rules should not exist in criminal proceedings—rather, as for other sensitive information like medical records, courts should simply use protective orders to limit the distribution of trade secrets beyond the needs of the proceeding.

Since I am not an evidence law expert, I will not discuss these aspects of Wexler's argument in detail; in short, she explains that the trade secrets privilege is harmful and unnecessary in criminal cases, and that it does not serve the purpose of evidentiary privilege law. From an IP perspective, she also argues that none of the theoretical justifications for trade secrecy law support the privilege. She suggests that the privilege is most analogous to the controversial "inevitable disclosure" doctrine, under which some states will enjoin conduct based on a speculative concern rather than any direct evidence of threatened misappropriation. But even here, the trade secrets privilege doctrine overprotects because it is upheld without any reference to the circumstances of a particular case. Wexler also notes that "claims that secrecy will incentivize innovation are tenuous at best when the privilege shields information from criminal defendants who are unlikely to be business competitors." And despite the status quo of robust protection, a 2009 National Academy of Sciences report notes the "dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods"; as Wexler explains, greater transparency is likely to improve rather than worsen this problem.

I think there is plenty in Wexler's article to interest scholars of IP, criminal procedure, evidence, and more. But more importantly, I hope it is read by judges in criminal cases who are faced with assertions of trade secrets privilege. And judges will have opportunities since the issue is percolating through the courts in other cases, such as California v. Johnson; see the defense attorney's brief (which cites Wexler's article), as well as amicus briefs from the ACLU, EFF, Legal Aid, and Innocence Project. It seems like it is time for the uncritical acceptance of the privilege to end, and for judges and practitioners to grapple with the concerns Wexler raises.

Thursday, October 26, 2017

Virtual Copyright

I have posted to SSRN a draft of a new book chapter that I've written with my former law partner Jack Russo (Computer Law Group LLP in Palo Alto). It is coming out in The Law of Virtual and Augmented Reality (Woody Barfield and Marc Blitz, eds). The abstract of our chapter, called Virtual Copyright, is here:
This book chapter explores the development of virtual reality technology from its rudimentary roots toward its realistic depiction of the world. It then traces the history of copyright protection for computer software user interfaces (a body of law that predates virtual reality by only a few years), highlighting competing approaches toward protection and infringement. While the focus is on virtual reality, this chapter contains an exhaustive examination of the state of "look and feel" protection for software interfaces.
The chapter then considers how these competing approaches -- each of which still holds some sway in the courts -- will apply to virtual reality objects, applications, worlds, and interfaces. We posit that as VR becomes more realistic, courts will find ways to allow more reuse.
We do not expect to see traditional characters and animation treated any differently in virtual reality. Mickey Mouse is still Mickey Mouse, and Pikachu lives in trading cards, cartoons, augmented reality, and virtual reality. It is whether and how realistic depiction, gesture control, modularization and sharing fit within copyright's limiting doctrines that will create important and difficult questions for future developers, judges, juries, and appellate courts.
We wrote on this topic many, many years ago (before I even went to law school), so it was fun revisiting the topic now that the state of virtual reality and of copyright have advanced somewhat.

But that's one of the interesting things about this topic. Despite the advances, there really weren't that many...you know...advances. In the chapter, we detail some of the earliest virtual reality inventions, including gloves, goggles, and gestures. And we now have much more advanced...gloves, goggles, and gestures. To be sure, the technology is faster, cheaper, more compact, and higher quality, but we are nowhere near the Star Trek holodeck--yes, we discuss CAVEs briefly, but they had those then, too--an example we used to imagine where copyright might go.

And, despite the passage of time, there really haven't been that many advances in copyright treatment of look and feel. As I noted in my article Hidden in Plain Sight, the last really important interface case was decided by an evenly split Supreme Court more than twenty years ago. To be sure, we discuss newer cases like Oracle v. Google, Authors Guild v. Google, all of the important transformative fair use cases, and so forth, but the handwriting for these cases was on the wall some twenty to twenty-five years ago.

And, yet, we think this is an important chapter. All these years later, the courts are still divided about how to handle some of the borderline cases (just look at how difficult the Oracle v. Google API case has been), and courts are still struggling with how to manage modularization and realistic depictions (as seen in disputes about fan fiction, museum photography, and social media). These are all problems that will seep into virtual reality, and we explain the different ways courts have handled disputes and how we think they will treat particularly salient virtual reality problems in the future.

Tuesday, October 24, 2017

Experiments on Bias in Patent Litigation OR Does Everyone Hate NPEs?

Lisa has written about the importance of experiments in patents, and I agree. I read about a really good one today. Bernard Chao (Denver Law) and one of his students, Roderick O'Dorisio, conducted an experiment to simultaneously test whether there is a bias against patentees sued for declaratory relief of non-infringement and against NPEs. To do so, they created patent vignettes presenting a close, but simple, infringement case. The videos shown to subjects were identical except for whether the defendant sued first and whether the plaintiff was an NPE (in one condition, both were true). The abstract is here, for the paper forthcoming in the Federal Circuit Bar Journal:
Although everyone believes that telling a good story is an important part of jury persuasion, attorneys inevitably rely on their intuition to choose their stories. Experimental methodologies now allow us to test how effective these stories are. In this article, we rigorously test how two different narratives common to patent law affect mock jurors. First, we look at whether accused infringers can improve their chances of prevailing by being the aggressor. Prior studies have observed that accused infringers that file declaratory judgment actions to vindicate their rights win more often than those that are sued by patent holders. However, these results may simply be an artifact of the selection effects. For example accused infringers may simply be suing on stronger cases. To date, no studies have tried to control for these selection effects and determine whether it is truly the story that sways juries. Second, we looked at whether an accused infringer can influence mock jurors by making a few disparaging remarks about one kind of patentee’s business model, the non-practicing entity (NPE). NPEs, often pejoratively called patent trolls, may have a more difficult time prevailing at trial than practicing entities do.
To test how these narratives affect potential juries, we used a 2x2 between-subjects online experiment. We randomly assigned virtual mock jurors to watch one of four different scenarios of an abbreviated patent trial and render verdicts. The results showed that accused infringers that filed declaratory judgment actions prevailed more often than those where the patentee initiated the lawsuit. In addition, our study found that NPEs won less often than practicing entities. We discuss implications for strategy and policy.
The results are pretty clear - there were marked differences in favor of accused infringers who sued first and against NPE plaintiffs. And for the combined group - NPEs sued for declaratory relief - win rates were the lowest of all. I consider this to be a validating check on the findings for each of the individual treatments (though more on that later, as statistically it is not so clear).

As the title of this post implies, there are a couple of ways to read this data. The results here may show an implicit bias against NPEs. Or, NPEs may be the baseline, and the results may show a preference for practicing entities. The highest win rate was 39%, so it is not like the plaintiffs were running away with victory here. Or, it may show that taking the bull by the horns is rewarded - jurors prefer defendants who assert their "rights" by suing first to defend against infringement.

Nonetheless, the results are a bit shocking - a product-making plaintiff was more than twice as likely to win as an NPE sued for declaratory judgment of non-infringement on identical facts and presentations. This makes me think that we have to talk about more than patent quality when we talk about low NPE win rates.

About the statistics: the declaratory relief effect was significant at p<.1 (and at p<.05 if you included demographics). The NPE effect was significant at p<.01. Interestingly, despite the marked drop when both treatments were combined, when the entire model was tested, including the interaction of declaratory relief and NPE status, none of the treatments was statistically significant. This result is difficult to interpret, but my sense from eyeballing the data is that the NPE effect is doing most of the work in the combined model, and so combining the DJ effect with it confounds the model.
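A toy difference-in-differences calculation may help illustrate why the interaction term can add so little. The cell win rates below are invented for illustration (the paper reports its own figures); the point is that when the combined-treatment cell is roughly what the two main effects alone would predict, the interaction term is near zero and mostly shares variance with the main effects.

```python
# Hypothetical 2x2 cell win rates (patentee type x who filed suit).
# These numbers are illustrative only -- the paper reports its own rates.
p = {
    ("PE",  "patentee_sued"): 0.39,  # practicing entity, patentee filed suit
    ("PE",  "dj_filed"):      0.30,  # practicing entity, accused infringer filed DJ
    ("NPE", "patentee_sued"): 0.25,
    ("NPE", "dj_filed"):      0.15,
}

# Main effects, computed as simple differences in average win rates
npe_effect = ((p[("NPE", "patentee_sued")] + p[("NPE", "dj_filed")]) / 2
              - (p[("PE", "patentee_sued")] + p[("PE", "dj_filed")]) / 2)

dj_effect = ((p[("PE", "dj_filed")] + p[("NPE", "dj_filed")]) / 2
             - (p[("PE", "patentee_sued")] + p[("NPE", "patentee_sued")]) / 2)

# Interaction: does the DJ penalty differ for NPEs vs. practicing entities?
interaction = ((p[("NPE", "dj_filed")] - p[("NPE", "patentee_sued")])
               - (p[("PE", "dj_filed")] - p[("PE", "patentee_sued")]))
```

With these invented numbers, both main effects are sizable and negative while the interaction is nearly zero: the combined cell's low win rate is almost entirely explained by the two main effects, so adding an interaction term to the model spreads the same signal across collinear regressors.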

A final note on methodology - the authors use Mechanical Turk, and cite to literature that such users are reliable for research like this. They also use some techniques to ensure attention. Finally, if there are attention issues, it is unclear why they would affect one category more than any other. Nonetheless, to the extent that one is skeptical of mTurk, one might be skeptical of the results here.

Tuesday, October 17, 2017

A Deep Dive on NPE Outcomes

I glibly commented on a friend's Facebook post last week that "patent troll" academic articles are so passé, despite the growing number of articles that use that term as compared to, say, 2012. Now, I shouldn't complain; given that my most cited article is called Patent Troll Myths (2012, naturally), I'd like to think that I'm driving that trend (of course, that's what the folks who wrote in 2007 would say).

But one of the reasons I joked about trolls being so 2012 is that this is where much of the detailed data comes from, and this is when the key articles that are cited by many were published. Indeed, I've published two follow-on articles to Patent Troll Myths, each of which contains more and better data (and thus took longer to complete and was published later), but which gets only a tiny fraction of the citation love of the original article.

And so it is no surprise that the latest in a series of articles by Chris Cotropia (Richmond), Jay Kesan (Illinois), and David Schwartz (Northwestern) was released with little fanfare. The article, called Heterogeneity among Patent Plaintiffs: An Empirical Analysis of Patent Case Progression, Settlement, and Adjudication is forthcoming in Journal of Empirical Legal Studies, but a draft is on SSRN. Here is the abstract:
This article empirically studies current claims that patent assertion entities (PAEs), sometimes referred to as ‘patent trolls’ or non-practicing entities (NPEs), behave badly in litigation by bringing frivolous patent infringement suits and seeking nuisance fee settlements. The study explores these claims by examining the relationship between the type of patentee-plaintiffs and litigation outcomes (e.g., settlement, grant of summary judgment, trial, and procedural dispositions), while taking into account, among other factors, the technology of the patents being asserted and the identity of the lawyers and judges. The study finds significant heterogeneity among different patent holder entity types. Individual inventors, failed operating companies, patent holding companies, and large patent aggregators each have distinct litigation strategies largely consistent with their economic posture and incentives. These PAEs appear to litigate differently from each other and from operating companies. Accordingly, to the extent any patent policy reform targets specific patent plaintiff types, such reforms should go beyond the practicing entity versus non-practicing entity distinction and understand how the proposed legislation would impact more granular and meaningful categories of patent owners.
In my article A Generation of Patent Litigation, I presented data about how often cases settle, and how that skews our view of how long they last and who wins. This article extends the authors' earlier work on categorizing just who is filing NPE suits (in 2010 in this article), and asks when the case settles for each and every defendant. This is hard work. In most of today's cases, each defendant is sued separately, so when the defendant settles, the case is over. Analytics companies track this all the time...now.

But in 2010, a patentee could sue 100 defendants at once, and you could not tell how long each remained in the case without tracking each defendant. If you only track the end of the case, you capture the one defendant who fought it out, but you miss all the defendants who exited early. The other added value of this series of papers is tracking all plaintiffs by type, rather than one big "NPE" status. I do this in The Layered Patent System, but I only had a subset of cases over a longer period of time. They have captured all of the cases in a single year. I'll discuss what this all means after the jump.
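A minimal sketch of the measurement problem described above, using invented dates and defendant names: recording only the end of the case captures the lone holdout defendant's duration and misses the early exits.

```python
# Illustrative sketch (invented data) of why per-defendant tracking matters
# in a 2010-style multi-defendant patent case.
from datetime import date

filed = date(2010, 3, 1)  # hypothetical case filing date
exit_dates = {
    "DefA": date(2010, 6, 15),   # early nuisance settlement
    "DefB": date(2010, 9, 1),    # early settlement
    "DefC": date(2013, 11, 20),  # fought it out; this is also the case's end date
}

days_in_case = {d: (end - filed).days for d, end in exit_dates.items()}

# Case-level measurement sees only the longest duration...
case_duration = max(days_in_case.values())
# ...while per-defendant tracking reveals most defendants left quickly.
median_defendant_duration = sorted(days_in_case.values())[len(days_in_case) // 2]
```

Here the case-level number is over 1,300 days, but the median defendant exited in about six months, which is the distortion the authors' defendant-level tracking corrects.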

Wednesday, October 11, 2017

The Case for a Patent Box with Strings Attached

[This post is co-authored with Daniel Hemel, an assistant professor of law at the University of Chicago Law School, and cross-posted at Whatever Source Derived.] 

Trump administration officials are hoping that their plan for steep business tax cuts will spur economic growth. Economists are skeptical of the administration’s rosy growth projections. But there may yet be a way to reduce business taxes that accelerates growth, encourages innovation, and delivers tangible benefits to American consumers.

To achieve these objectives, administration officials and lawmakers should consider implementing a “patent box” — a reduced tax rate for revenues derived from the licensing and sale of patents. But unlike the patent box regimes that the United Kingdom and several other advanced economies have implemented, a U.S. patent box should come with strings attached. Specifically, the reduced rate on patent-related revenues should be conditional upon the patent holder agreeing to a shorter patent term.

Here’s how it could work: Right now, a patent confers exclusivity for 20 years from the date of application. If the patent is held by a U.S. corporation, the corporation pays a top tax rate of 35% on patent-related income. Under a “patent box with strings attached,” the corporation would have the option to pay no tax on patent-related income in exchange for a shorter patent life.

The system would be structured such that the net present value of the patent holder’s expected income stream — in after-tax terms — would be slightly more attractive under a patent box and a shorter patent life than under the status quo. For example, assuming a 5% interest rate and a 35% corporate tax rate, the net present value of a constant stream of tax-free payments over 11 years is slightly more than the net present value of a constant stream of taxable payments over a 20-year term. Thus, if utilizing the patent box meant accepting an 11-year term, patent holders would have an incentive to choose the patent box and relinquish the last 9 years of exclusivity. (If we assume instead that the prevailing tax rate is 20%, as the Trump administration and congressional Republican leaders have proposed, then the patent box with strings attached becomes preferable to a 20-year term plus full taxability if the patent box allows 15 years of exclusive rights.)
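For readers who want to check the arithmetic, here is a short sketch of the stylized example above, assuming a constant annual royalty normalized to 1.0, end-of-year payments, a 5% discount rate, and the 35% (or 20%) tax rate from the post.

```python
def annuity_npv(years: int, rate: float = 0.05) -> float:
    """Present value of 1.0 received at the end of each year for `years` years."""
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

tax_rate = 0.35
npv_taxed_20yr = (1.0 - tax_rate) * annuity_npv(20)  # status quo: taxed, full term
npv_taxfree_11yr = annuity_npv(11)                   # patent box: tax-free, short term

# The 11-year tax-free stream is worth slightly more (about 8.31 vs. 8.10),
# so a rational patent holder would take the box and give up 9 years of term.

# At a 20% prevailing rate, the break-even term lengthens: 15 tax-free years
# beat 20 taxable years.
npv_taxed_20yr_lowrate = 0.80 * annuity_npv(20)
npv_taxfree_15yr = annuity_npv(15)
```

The calculation confirms both trade-offs in the post: the shorter the exclusivity period offered in exchange for the tax break, the higher the prevailing tax rate must be for patent holders to opt in.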

Tuesday, October 10, 2017

Patents and Vertical Integration: A Revised Theory of the Firm

I'm a big fan of Peter Lee's work, and I'm a big fan of theory of the firm work. Imagine my joy upon seeing Prof. Lee's new article, forthcoming in Stanford Law Review, called: Innovation and the Firm: A New Synthesis. This article is a really thoughtful, really thorough re-examination of patents and the firm. The abstract is here:
Recent scholarship highlights the prevalence of vertical disintegration in high-technology industries, wherein specialized entities along a value chain transfer knowledge-intensive assets between them. Patents play a critical role in this process by lowering the cost of technology transactions between upstream and downstream parties, thus promoting vertical disintegration. This Article, however, challenges this prevailing narrative by arguing that vertical integration pervades patent-intensive fields. In biopharmaceuticals, agricultural biotechnology, information technology, and even university-industry technology transfer, firms are increasingly absorbing upstream and downstream technology providers rather than simply licensing their patents.
 This Article explains this counterintuitive development by retheorizing the relationship between innovation and the firm. Synthesizing previously disconnected lines of theory, it first argues that the challenge of aggregating tacit technical knowledge — which patents do not disclose — leads high-tech companies to vertically integrate rather than simply rely on licenses to transfer technology. Relatedly, the desire to obtain not just discrete technological assets but also innovative capacity, in the form of talented engineers and scientists, also motivates vertical integration. Due to the socially embedded nature of tacit knowledge and innovative capacity, firms frequently absorb entire pre-existing organizations and grant them significant autonomy, an underappreciated phenomenon this Article describes as “semi-integration.” Finally, strategic imperatives to achieve rapid scale and scope also lead firms to integrate with other entities rather than simply license their patents. The result, contrary to theory, is a resurgence of vertical integration in patent-intensive fields. The Article concludes by evaluating the costs and benefits of vertically integrated innovative industries, suggesting private and public mechanisms for improving integration and tempering its excesses.
The abstract does a pretty complete job of explaining the thesis and arguments here, so I'll make a few comments after the jump.

Sunday, October 8, 2017

Tejas Narechania: Is The Supreme Court Against "Patent Exceptionalism" Or In Favor of "Universality"?

Tejas Narechania's new paper, Certiorari, Universality, and a Patent Puzzle, forthcoming in the Michigan Law Review, argues that a major identifying factor for the Supreme Court's interest in patent cases is a field split: an area where a doctrine plays out differently in patent law than in the other fields of law where it is used. Narechania argues that the Court's apparent need to resolve, or at least address, these differences by taking review has to do with the Court's overarching interest in preserving "universality." "[T]he Court," he writes, "is not interested in merely eliminating exceptionalism altogether. Rather, it appears concerned for calibrating a degree of consistency across doctrinal areas in light of its underlying interests in judicial efficiency, neutrality, and legitimacy." (47).

Narechania's article, especially when read alongside recent work by Peter Lee, teaches that there are two ways to explain the Supreme Court's increased interest in patent law. One is that the Court is against what is often called "patent exceptionalism" - i.e., against the Federal Circuit's use of patent-specific rules that differ from similar doctrines used in other fields. The other, which may or may not be the same thing, is that the Court is intent on preserving universal rules across all areas of law. Narechania has insightfully reoriented the "patent exceptionalism" discussion towards the latter.

Read more at the jump.

Tuesday, October 3, 2017

Repealing Patents, Oil States, and IPRs

If you haven't read any of Chris Beauchamp's (Brooklyn Law) work on patent law and litigation history, you really should. His book, Invented by Law, on the history of Bell telephone litigation, and his Yale L.J. article on the First Patent Litigation Explosion (in the 1800's) are both engaging, thorough, and thoughtful looks at history. Furthermore, he writes as a true historian, placing his work in context even if there is no clear answer for today's disputes. He points to where we can draw lessons and where we might be too quick to draw lessons. Chris doesn't publish that often because he does so much work toiling over source materials in the national archives and elsewhere.

Prof. Beauchamp posted a new essay to SSRN last week that caught my eye, and I thought I would share it here. Repealing Patents could not be more timely given the pending Oil States case - it discusses how patent revocation worked at our nation's founding, both in England and the U.S.  Here is the abstract:
The first known patent case in the United States courts did not enforce a patent. Instead, it sought to repeal one. The practice of cancelling granted patent rights has appeared in various forms over the past two and a quarter centuries, from the earliest U.S. patent law in 1790 to the new regime of inter partes review (“IPR”) and post grant review. With the Supreme Court’s grant of cert in Oil States Energy Services v. Greene’s Energy Group and its pending review of the constitutionality of IPR, this history has taken on a new significance.
This essay uses new archival sources to uncover the history of patent cancellation during the first half-century of American patent law. These sources suggest that the early statutory provisions for repealing patents were more widely used and more broadly construed than has hitherto been realized. They also show that some U.S. courts in the early Republic repealed patents in a summary process without a jury, until the Supreme Court halted the practice. Each of these findings has implications—though not straightforward answers—for the questions currently before the Supreme Court.
As with his other work, this essay is careful not to draw too many conclusions. It cannot answer all of our questions, and he explains why.

There were a few key points that really stood out for me; things we should be thinking about when we think about the "common law" right to a jury with respect to the Seventh Amendment, and more broadly how we think of revocation of patents as public.

First, the essay points out that the first Patent Act (with repeal included) predated the Bill of Rights. So when we think of common law, we usually look to England because the U.S. adopted English law at the time of the Seventh Amendment. But, here, the U.S. broke with England and installed its own procedure. It is quite possible that English practice at the time is simply irrelevant. I don't know how this cuts for the case, frankly.

Second, the revocation action came at a time when patents were essentially registered rather than examined. Beauchamp points out that for the first three years, three cabinet members used their discretion to grant patents, but they were not conducting prior art searches and the like. In other words, revocation, which was abolished when the patent examination system was installed in 1836, was a creature of non-examination, not a way to do re-examination.

Third, there were some summary revocations, but there was a dispute about whether a jury should decide factual issues on revocation. That debate lasted until 1824, when Justice Story (for the Supreme Court) ruled that the English procedure of a jury trial should apply. This, too, is ambiguous, because the right to a jury trial was really up in the air for a while. But what struck me most about this history is something different. As I wrote in my own article America's First Patents, Justice Story had an affinity for English patent law, and apparently liked to discard American breaks from the law in favor of the English rule. In my article, it was his importation of a distrust of process patents (which gave rise to much of our patentable subject matter jurisprudence today). In this essay, it is his importation of the English revocation process, which required a jury. If it turns out that the jury rule in early American repeal proceedings is important in this case, you'll know who to thank.

Tuesday, September 26, 2017

How does trade secrecy affect patenting?

As I mention in my forthcoming book chapter on empirical methods in trade secret research, there's really a dearth of good empirical scholarship about the role of trade secrets in the economy. One scholar who has written several articles in this area is Ivan Png from the National University of Singapore. Professor Png exploits variation in the strength of trade secret protection to find causal effects on, say, innovation or worker mobility.

His latest article, called Secrecy and Patents: Theory and Evidence from the Uniform Trade Secrets Act (SSRN draft here, final paywall version here), examines how rates of patenting change when levels of protection for trade secrets change. Here is the abstract, which shares some of its findings:

How should firms use patents and secrecy as appropriability mechanisms? Consider technologies that differ in the likelihood of being invented around or reverse engineered. Here, I develop the profit-maximizing strategy: (i) on the internal margin, the marginal patent balances appropriability relative to cost of patents vis-a-vis secrecy, and (ii) on the external margin, commercialize products that yield non-negative profit. To test the theory, I exploit staggered enactment of the Uniform Trade Secrets Act (UTSA), using other uniform laws as instruments. The Act was associated with 38.6% fewer patents after one year, and smaller effects in later years. The Act was associated with larger effect on companies that earned higher margins, spent more on R&D, and faced weaker enforcement of covenants not to compete. The empirical findings are consistent with businesses actively choosing between patent and secrecy as appropriability mechanisms, and appropriability affecting the number of products commercialized.
Frankly, I think that the abstract undersells the findings a bit, as it seems targeted to the journal, "Strategy Science." The paper takes a much broader view of the model: "If trade secrets law is stronger in the sense of reducing the likelihood of reverse engineering, then businesses should adjust by (i) patenting fewer technologies and keeping more of them secret, and (ii) commercializing more products."

Like Png's other work in this area, the core of the analysis begins with an index of trade secret strength in each state, based on passage of the UTSA and variations of each state's implementation of UTSA (e.g. with respect to inevitable disclosure). In this paper, Png then obtained data about the location of company R&D facilities and patents coming out of those facilities. He also used other uniform laws passed at around the same time as an instrument, to make sure that the UTSA is not endogenous with patenting.

This is a really interesting and important paper, even if it validates what most folks probably assumed (dating back to the days of Kewanee v. Bicron): if you strengthen secrecy, there will be fewer patents. That said, there is a lot going on in this paper, and a lot of assumptions in the modeling. First and foremost, the levels of protection of trade secrets don't have many degrees of freedom. I much prefer the categories created by Lippoldt and Schultz. That said, even a binary variable might be sufficient. Second, the model and estimation are based on the assumption that the marginal patent is the one most likely to be designed around, and uses the number of technology classes to estimate patent scope (and validate the assumption). I know many folks who would disagree with using patent classes as a measure of scope.

Even with these critiques, this paper is worth a read and some attention. I'd love to see more like it.

Monday, September 25, 2017

What can we learn from variation in patent examiner leniency?

Studying the effect of granting vs. rejecting a given patent application can reveal little about the ex ante patent incentive (since ex ante decisions were already made), but it can say a lot about the ex post effect of patents on things like follow-on innovation. But directly comparing granted vs. rejected applications is problematic because one might expect there to be important differences between the underlying inventions and their applicants. In an ideal (for a social scientist) world, some patent applications would be randomly granted or denied in a randomized controlled trial, allowing for a rigorous comparison. There are obviously problems with doing this in the real world—but it turns out that the real world comes close enough.

The USPTO does not randomly grant application A and reject application B, but it does often assign (as good as randomly) application A to a lenient examiner who is very likely to grant, while assigning B to a strict examiner who is very likely to reject. Thus, patent examiner leniency can be used as an instrumental variable for which patent applications are granted. This approach was pioneered by Bhaven Sampat and Heidi Williams in How Do Patents Affect Follow-on Innovation? Evidence from the Human Genome, in which they used it to conclude that, on average, gene patents appear to have had no effect on follow-on innovation.
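For readers who want to see the intuition in action, here is a minimal numpy simulation of the examiner-leniency design (fabricated data, not Sampat and Williams's data or code). Unobserved invention quality pushes both the grant decision and follow-on innovation, so a naive comparison of granted and rejected applications is biased; a Wald/IV estimate that uses the examiner's grant propensity as an instrument recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_examiners = 20000, 200

# Applications are as-good-as-randomly assigned to examiners, who differ
# in their underlying propensity to grant ("leniency").
examiner = rng.integers(0, n_examiners, n_apps)
leniency = rng.uniform(0.2, 0.8, n_examiners)
quality = rng.normal(size=n_apps)  # unobserved invention quality

# Grant depends on examiner leniency AND on quality -> naive OLS is biased.
granted = (rng.uniform(size=n_apps) < leniency[examiner] + 0.1 * quality).astype(float)

# Follow-on innovation: true causal effect of a grant is -0.5 here, but
# quality raises both the chance of a grant and follow-on activity.
follow_on = -0.5 * granted + 1.0 * quality + rng.normal(size=n_apps)

# Naive OLS slope, cov(y, d) / var(d): contaminated by quality.
ols = np.cov(follow_on, granted)[0, 1] / np.var(granted)

# Wald/IV estimate, cov(y, z) / cov(d, z), instrumenting the grant
# decision with the assigned examiner's leniency.
z = leniency[examiner]
iv = np.cov(follow_on, z)[0, 1] / np.cov(granted, z)[0, 1]

print(f"OLS estimate: {ols:.2f}")  # biased upward by unobserved quality
print(f"IV estimate:  {iv:.2f}")   # close to the true effect of -0.5
```

In real applications, of course, leniency is not observed directly; it is typically estimated from each examiner's grant rate on other applications (a leave-one-out measure), and the true effect is unknown rather than set by the simulation.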

Since their seminal work, I have seen a growing number of other scholars adopt this approach, including these recent papers:

Monday, September 18, 2017

Tattoos, Architecture, and Copyright

In my IP seminar, I ask students to pick an article to present in class for a critical style and substance review. This year, one of my students picked an article about copyright and tattoos, a very live issue. The article was decent enough, raising many concerns about tattoos: Is human skin fixed? Is it a copy? How do you deposit it at the Library of Congress? (answer: photographs) What rights are there to modify it? To photograph it? Why is it ok for photographers to take pictures, but not ok for video game companies to emulate them? Can they be removed or modified under VARA (which protects against such things for visual art)?

It occurred to me that we ask many of these same questions with architecture, and that the architectural rules have solved the problem. You can take pictures of buildings. You can modify and destroy buildings. You register buildings by depositing plans and photographs. Standard features are not protectible (sorry, no teardrop, RIP, and Mom tattoo protection). But you can't copy building designs. If we view tattoos on the body as a design incorporated into a physical structure (the human body), it all makes sense, and solves many of our definitional and protection problems.

Clever, right? I was going to write an article about it, maybe. Except then I discovered that somebody else had. In That Old Familiar Sting: Tattoos, Publicity and Copyright, Matthew Parker writes:

Tattoos have experienced a significant rise in popularity over the last several decades, and in particular an explosion in popularity in the 2000s and 2010s. Despite this rising popularity and acceptance, the actual mechanics of tattoo ownership and copyright remain very much an issue of first impression before the courts. A series of high-priced lawsuits involving famous athletes and celebrities have come close to the Supreme Court at times, but were ultimately settled before any precedent could be set. This article describes a history of tattoos and how they might be seen to fit in to existing copyright law, and then proposes a scheme by which tattoo copyrights would be bifurcated similar to architecture under the Architectural Works Copyright Protection Act.
It's a whole article, so Parker spends more time developing the theory and dealing with topics such as joint ownership than I do in my glib recap. For those interested in this topic, it's certainly a thought-provoking analogy worth considering.

Barton Beebe: Bleistein and the Problem of Aesthetic Progress in American Copyright Law



Bleistein v. Donaldson Lithographing Co. is a well-known early twentieth-century copyright decision of the U.S. Supreme Court. In his opinion for the majority, Justice Holmes is taken to have articulated two central propositions about the working of copyright law. The first is the idea that copyright's originality requirement may be satisfied by the notion of "personality," or the "personal reaction of an individual upon nature," which was satisfied in just about every work of authorship. The second is the principle of aesthetic neutrality, according to which "[it] would be a dangerous undertaking for persons trained only to the law to constitute themselves final judges of the worth of pictorial illustrations, outside of the narrowest and most obvious limits." Both of these propositions are today understood as relating to copyright's relatively toothless originality requirement, which few works ever fail to satisfy.

In a paper recently published in the Columbia Law Review, Barton Beebe (NYU) unravels the intellectual history of Bleistein and concludes that for over a century, American copyright jurisprudence has relied on a misreading (and misunderstanding) of what Holmes was trying to do in his opinion. On the first proposition, he shows that Holmes was deeply influenced by American (rather than British or European) literary romanticism, which constructed the author in a "distinctively democratic—and more particularly, Emersonian—image of everyday, common genius." (p. 370). On the second, Beebe argues that Holmes' comments on neutrality had little to do with the originality requirement, but were instead a response to the dissenting opinion that had sought to deny protection to the work at issue (an advertisement for a circus) because it did not "promote the progress," as mandated by the Constitution. The paper then examines how this misunderstanding (about both propositions) came to influence copyright jurisprudence, and Beebe then proceeds to suggest ways in which an accurate understanding of Bleistein may be used to reform crucial aspects of modern copyright law. The paper is well worth a read for anyone interested in copyright.

Beebe's examination of Holmes' views on progress, personality and literary romanticism did however raise a question for me about the unity (or coherence) of Holmes' views, especially given that he was a polymath. Long regarded as a Legal Realist who thought about legal doctrine in largely functional and instrumental terms, Bleistein's commonly (mis)understood insights about originality comport well with Holmes' pragmatic worldview. His treatment of originality as a narrow (and normatively empty) concept, for instance, sits well with his anti-conceptualism and critique of formalist thinking. But if Holmes really did not intend for originality to be a banal and content-less standard (as Beebe suggests), how might he have squared its innate indeterminacy with his Realist thinking? Does Beebe's reading of Bleistein suggest that Holmes was not a Legal Realist after all when it came to questions of copyright law and its relationship to aesthetic progress? This of course isn't Beebe's inquiry in the paper (nor should it be, given the other important questions that it addresses), but the possibility of revising our view of Holmes intrigued me.

Wednesday, September 13, 2017

Tribal Sovereign Immunity and Patent Law

Guest post by Professor Greg Ablavsky, Stanford Law School

In Property, I frequently hedge my answers to student questions by cautioning that I am not an expert in intellectual property. I’m writing on an IP blog today because, with Allergan’s deal with the Saint Regis Mohawk Tribe, IP scholars have suddenly become interested in an area of law I do know something about: federal Indian law.

Two principles lie at the core of federal Indian law. First, tribes possess inherent sovereignty, although their authority can be restricted through treaty, federal statute, or when inconsistent with their dependent status. Second, Congress possesses plenary power over tribes, which means it can alter or even abolish tribal sovereignty at will.

Tribal sovereign immunity flows from tribes’ sovereign status. Although the Supreme Court at one point described tribal sovereign immunity as an “accident,” the doctrine’s creation in the late nineteenth century in fact closely paralleled contemporaneous rationales for the development of state, federal, and foreign sovereign immunity. But the Court’s tone is characteristic of its treatment of tribal sovereign immunity: even as the Court has upheld the principle, it has done so reluctantly, even hinting to Congress that it should cabin its scope. This language isn’t surprising. The Court hasn’t been a friendly place for tribes for nearly forty years, with repeated decisions imposing ever-increasing restrictions on tribes’ jurisdiction and authority. What is surprising is that tribal sovereign immunity has avoided this fate. The black-letter law has remained largely unchanged, narrowly surviving a 2014 Court decision that saw four Justices suggest that the doctrine should be curtailed or even abolished.

Monday, September 11, 2017

Reexamining the Private and Social Costs of NPEs

It's good to be returning from a longish hiatus. I've just taken over as the Associate Dean for Faculty Research; needless to say, it's kept me busier than I would like. But I'm back, and hope to resume regular blogging.

My first entry has been sitting on my desk (errrr, my email) for about six months. In 2011, Bessen, Meurer, and Ford published The Private and Social Costs of Patent Trolls, which was received with much fanfare. Its findings of nearly $500 billion in lost market value over a 20-year period, and losses of $80 billion a year for four years in the late 2000s, garnered significant attention; the paper has been downloaded more than 5,000 times on SSRN.

Enter Emiliano Giudici and Justin Robert Blount, both of Stephen F. Austin Business School. They have attempted to replicate the findings of Bessen, Meurer, and Ford with newer data. The results are pretty stark: they find no significant evidence of loss at all. They also attribute the findings of the prior paper to a few outliers, among other possible explanations. These are really important findings. Their paper has fewer than 50 downloads. The abstract is here: 
An ongoing debate in patent law involves the role that “non-practicing entities,” sometimes called “patent trolls” serve in the patent system. Some argue that they serve as valuable market intermediaries and other argue that they are a drain on innovation and an impediment to a well-functioning patent system. In this article, we add to the data available in this debate by conducting an event study that analyzes the market reaction to patent litigation filed by large, “mass-aggregator” NPE entities against large publicly traded companies. This study advances the literature by attempting to reproduce the results of previous event studies done in this area on newer market data and also by subjecting the event study results to more rigorous statistical analysis. In contrast to a previous event study, in our study we found that the market reacted little, if at all, to the patent litigation filed by large NPEs.
This paper is a useful read beyond the empirics. It does a good job explaining the background, the prior study, and critiques of the prior study. It is also circumspect in its critique - focusing more on the inferences to be drawn from the study than the methods. This is a key point: I'm not a fan of event studies for a variety of reasons. But that doesn't mean that I think event studies are somehow unsound methodologically. It just means that our takeaways from them have to be tempered by the limitations. And I've always been troubled that the key takeaways from Bessen, Meurer & Ford were outsized (especially in the media) compared to the method.
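For readers unfamiliar with the method, a bare-bones event study looks something like the following numpy sketch (simulated returns, not the authors' data or code): fit a market model for each defendant firm in a pre-event estimation window, compute abnormal returns in a short window around the lawsuit filing, and test whether the average cumulative abnormal return (CAR) differs from zero. Because these simulated returns contain no event effect, the t-statistic should come out statistically insignificant, the analogue of a "market reacted little, if at all" result.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, est_win, evt_win = 50, 120, 5  # trading days in each window

# One shared market return series; firms load on it with different betas.
market = rng.normal(0.0004, 0.01, est_win + evt_win)
betas = rng.uniform(0.8, 1.2, n_firms)

cars = np.empty(n_firms)
for i in range(n_firms):
    # Firm return = beta * market + idiosyncratic noise (no event effect).
    ret = betas[i] * market + rng.normal(0, 0.02, est_win + evt_win)

    # OLS market model estimated on the pre-event window.
    X = np.column_stack([np.ones(est_win), market[:est_win]])
    alpha, beta = np.linalg.lstsq(X, ret[:est_win], rcond=None)[0]

    # Abnormal returns in the event window, summed into a CAR.
    ar = ret[est_win:] - (alpha + beta * market[est_win:])
    cars[i] = ar.sum()

# Cross-sectional t-test of the mean CAR against zero.
t_stat = cars.mean() / (cars.std(ddof=1) / np.sqrt(n_firms))
print(f"mean CAR = {cars.mean():.4f}, t = {t_stat:.2f}")
```

The inferential fights in this literature are mostly about what happens around this skeleton: clustering of event dates, outlier filings, and how much of a market-value drop one can attribute to a single lawsuit.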

But Giudici and Blount embrace the event study, weaknesses and all, and do not find the same results. This, I think, is an important finding and worthy of publicity. That said, there are some critiques, which I'll note after the break.

Natalie Ram: Innovating Criminal Justice

Natalie Ram (Baltimore Law) applies the tools of innovation policy to the problem of criminal justice technology in her latest article, Innovating Criminal Justice (forthcoming in the Northwestern University Law Review), which is worth a read by innovation and criminal law scholars alike. Her dive into privately developed criminal justice technologies—"[f]rom secret stingray devices that can pinpoint a suspect’s location to source code secrecy surrounding alcohol breath test machines, advanced forensic DNA analysis tools, and recidivism risk statistic software"—provides both a useful reminder that optimal innovation policy is context specific and a worrying depiction of the problems that over-reliance on trade secrecy has wrought in this field.

She recounts how trade secrecy law has often been used to shield criminal justice technologies from outside scrutiny. For example, criminal defense lawyers have been unable to examine the source code for TrueAllele, a private software program for analyzing difficult DNA mixtures. Similarly, the manufacturer of Intoxilyzer, a breath test, has fought efforts for disclosure of its source code. But access to the algorithms and other technical details used for generating incriminating evidence is important for identifying errors and weaknesses, increasing confidence in their reliability (and in the criminal justice system more broadly), and promoting follow-on innovations. Ram also argues that in some cases, secrecy may raise constitutional concerns under the Fourth Amendment, the Due Process Clause, or the Confrontation Clause.

Drawing on the full innovation policy toolbox, Ram argues that contrary to the claims of developers of these technologies, trade secret protection is not essential for the production of useful innovation in this field: "The government has at its disposal a multitude of alternative policy mechanisms to spur innovation, none of which mandate secrecy and most of which will easily accommodate a robust disclosure requirement." Patent law, for example, has the advantage of increased disclosure compared with trade secrecy. Although some of the key technologies Ram discusses are algorithms that may not be patentable subject matter post-Alice, to the extent patent-like protection is desirable, regulatory exclusivities could be created for approved (and disclosed) technologies. R&D tax incentives for such technologies also could be conditioned on public disclosure.

But one of Ram's most interesting points is that the main advantage of patents and taxes over other innovation policy tools—eliciting information about the value of technologies based on their market demand—is significantly weakened for most criminal justice technologies, for which the government is the only significant purchaser. For example, there is little private demand for recidivism risk statistical packages. Thus, to the extent added incentives are needed, this may be a field in which the most effective tools are government-set innovation rewards—grants, other direct spending, and innovation inducement prizes—that are conditioned on public accessibility of the resulting algorithms and other technologies. In some cases, agencies looking for innovations may even be able to collaborate at no financial cost with academics such as law professors or other social scientists who are looking for opportunities to conduct rigorous field tests.

Criminal justice technologies are not the only field of innovation in which trade secrecy can pose significant social costs, though most prior discussions I have seen are focused on purely medical technologies. For instance, Nicholson Price and Arti Rai have argued that secrecy in biologic manufacturing is a major public policy problem, and a number of scholars (including Bob Cook-Deegan et al., Dan Burk, and Brenda Simon & Ted Sichelman) have discussed the problems with secrecy over clinical data such as genetic testing information. It may be worth thinking more broadly about the competing costs and benefits of trade secrecy and disclosure in certain areas—while keeping in mind that the inability to keep secrets does not mean the end of innovation in a given field.

Tuesday, September 5, 2017

Adam Mossoff: Trademarks As Property

There are two dominant utilitarian frameworks for justifying trademark law. Some view trademark protection as necessary to shield consumers from confusion about the source of market offerings, and to reduce consumers' "search costs" in finding things they want. Others view trademark protection as necessary to secure producers' incentives to invest in "quality". I personally am comfortable with both justifications for this field of law. But I have always been unclear as to how trademarks work as property. With certain caveats, I do not find it difficult to conceive of the patented and copyrighted aspects of inventions and creative writings as "property" on the theory that we generally create property rights in subject matter that we want more of. But surely Congress did not pass the Lanham Act in 1946 and codify common law trademark protection simply because Congress wanted companies to invest in catchy names and fancy logos?

In his new paper, Trademark As A Property Right, Adam Mossoff seeks to clarify this confusion and convince people that trademarks are property rights based on Locke's labor theory. In short, Mossoff's view is that trademarks are not a property right on their own; rather, trademarks are a property right derived from the underlying property right of goodwill. Read more at the jump.

Saturday, September 2, 2017

Petra Moser and Copyright Empirics

I thought this short Twitter thread was such a helpful, concise summary of some of NYU economist Petra Moser's excellent work—and the incentive/access tradeoff of IP laws—that it was worth memorializing in a blog post. You can read more about Moser's work on her website.

Monday, August 28, 2017

Dinwoodie & Dreyfuss on Brexit & IP

In prior work such as A Neofederalist Vision of TRIPS, Graeme Dinwoodie and Rochelle Dreyfuss have critiqued one-size-fits-all IP regimes and stressed the value of member state autonomy. In theory, the UK's exit from the EU could promote these autonomy values by allowing the UK to revise its IP laws in ways that enhance its national interests. But in Brexit and IP: The Great Unraveling?, Dinwoodie and Dreyfuss argue that these gains are mostly illusory: "the UK will, to maintain a robust creative sector, be forced to recreate much of what it previously enjoyed" through the EU, raising the question "whether the transaction costs of the bureaucratic, diplomatic, and private machinations necessary to duplicate EU membership are worth the candle."

The highlight of the piece for me is that Dinwoodie and Dreyfuss give numerous specific examples of how post-Brexit UK might depart from EU IP policy in ways that serve its perceived national policy interests, which nicely illustrate some of the ways in which the EU has harmonized IP law. For example, in the copyright context, it could resist the expansion in copyrightable subject matter suggested by EU Court of Justice cases; re-enact its narrow, compensation-free private copying exception; or reinstate section 52 of its Copyright, Designs and Patents Act, which limited the term of copyright for designs to the maximum term available under registered design law. In the trademark context, Dinwoodie and Dreyfuss describe how UK courts have grudgingly accepted more protectionist EU trademark policies that would not be required post-Brexit, such as limits on comparative advertising. Patent law is the area "where the UK will formally re-acquire the least sovereignty as a result of Brexit," given that it will continue to be part of the European Patent Convention (EPC) and that it still intends to ratify the Unified Patent Court Agreement—though the extent of UK involvement remains unclear.

Of course, whether such changes to copyright or trademark law would in fact further UK interests in an economic sense is highly debatable—but if UK policymakers think they would, why would they nonetheless recreate existing harmonization? I think Dinwoodie and Dreyfuss would respond that these national policy interests are outweighed by the benefits of coordination on IP, which "have been substantial and well recognized for more than a century." Their argument is perhaps grounded more in political economy than economic efficiency, as their examples of the benefits of coordination are all benefits for content producers rather than overall welfare benefits. In any case, they note that coordination became even easier within the institutional structures of the EU, and that after Brexit, "the UK will have to seek the benefits of harmonization through the same international process that has been the subject of sustained resistance as well as scholarly critique, rather than under these more efficient EU mechanisms." While it is plausible that the lack of these efficiency gains will tilt the cost-benefit balance in favor of IP law tailored to national interests, Dinwoodie and Dreyfuss suggest that a desire for continuity and commercial certainty will override autonomy concerns.

With all the uncertainties regarding Brexit (as recently reviewed by John Oliver), intellectual property might seem low on the list of things to worry about. But the companies with significant financial stakes in UK-based IP are anxiously awaiting greater clarity in this area.

Sunday, August 20, 2017

Gugliuzza & Lemley on Rule 36 Patentable-Subject-Matter Decisions

Paul Gugliuzza (BU) and Mark Lemley (Stanford) have posted Can a Court Change the Law by Saying Nothing? on the Federal Circuit's many affirmances without opinion in patentable subject matter cases. They note a remarkable discrepancy: "Although the court has issued over fifty Rule 36 affirmances finding the asserted patent to be invalid, it has not issued a single Rule 36 affirmance when finding in favor of a patentee. Rather, it has written an opinion in every one of those cases. As a result, the Federal Circuit’s precedential opinions provide an inaccurate picture of how disputes over patentable subject matter are actually resolved."

Of course, this finding alone does not prove that the Federal Circuit's Rule 36 practice is changing substantive law. The real question isn't how many cases fall on each side of the line, but where that line is. As the authors note, the skewed use of opinions might simply be responding to the demand from patent applicants, litigants, judges, and patent examiners for examples of inventions that remain eligible post-Alice. And the set of cases reaching a Federal Circuit disposition tells us little about cases that settle or aren't appealed or in which subject-matter issues aren't raised. But their data certainly show that patentees have done worse at the Federal Circuit than it appears from counting opinions.

Perhaps most troublingly, Gugliuzza and Lemley find some suggestive evidence that Federal Circuit judges' substantive preferences on patent eligibility are affecting their choice of whether to use Rule 36: Judges who are more likely to find patents valid against § 101 challenges are also more likely to cast invalidity votes via Rule 36. When both active and senior judges are included, this correlation is significant at the five-percent level. The judges on either extreme are Judge Newman (most likely to favor validity, and most likely to cast invalidity votes via Rule 36) and Chief Judge Prost (among least likely to favor validity, and least likely to cast invalidity votes via Rule 36), who also happen to be the two judges most likely to preside over the panels on which they sit. Daniel Hemel and Kyle Rozema recently posted an article on the importance of the assignment power across the 13 federal circuits; this may be one concrete example of that power in practice.

Gugliuzza and Lemley do not call for precedential opinions in all cases, but they do argue for more transparency, such as using short, nonprecedential opinions to at least list the arguments raised by the appellant. For lawyers without the time and money to find the dockets and briefs of Rule 36 cases, this practice would certainly provide a richer picture of how the Federal Circuit disposes of subject-matter issues.

Monday, August 14, 2017

Research Handbook on the Economics of IP (Depoorter, Menell & Schwartz)

Many IP professors have posted chapters of the forthcoming Research Handbook on the Economics of Intellectual Property Law. As described in a 2015 conference for the project, it "draws together leading economics, legal, and empirical scholars to codify and synthesize research on the economics of intellectual property law." This should be a terrific starting point for those new to these fields. I'll link to new chapters as they become available, so if you are interested in this project, you might want to bookmark this post. (If I've missed one, let me know!)

Volume I – Theory (Ben Depoorter & Peter Menell eds.)


Volume II – Analytical Methods (Peter Menell & David Schwartz eds.)

Patents

Wednesday, August 2, 2017

Kevin Collins on Patent Law's Authorship Screen

Numerous scholars have examined the various functionality screens that are used to prevent non-utility-patent areas of IP from usurping what is properly the domain of utility patent law (see, e.g., the terrific recent articles by Chris Buccafusco and Mark Lemley and by Mark McKenna and Chris Sprigman). But hardly anyone has asked the inverse question: How should utility patent law screen out things that should be protected by non-patent IP? In Patent Law's Authorship Screen (forthcoming U. Chi. L. Rev.), Kevin Collins focuses on the patent/copyright boundary, and he coins the term "authorship screen" as the mirror image of copyright's functionality screen. As with pretty much everything Collins writes, it is thought provoking and well worth reading.

Wednesday, July 26, 2017

Kuhn & Thompson on Measuring Patent Scope by Word Count

I've seen a number of recent papers that attempt to algorithmically measure patent scope by counting the number of words in the patent's first claim and comparing to other patents in the same technological field (with longer claims → more details → narrower scope). In their new paper, The Ways We've Been Measuring Patent Scope are Wrong: How to Measure and Draw Causal Inferences with Patent Scope, Jeffrey Kuhn (UNC) and Neil Thompson (MIT Sloan) argue that this measure is superior to prior scope measures.
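The core of the measure is simple enough to sketch in a few lines of code. This is only an illustrative toy, not the authors' implementation: the function names, the claim-numbering assumption, and the mean-based field normalization are all my own simplifications.

```python
import re
from statistics import mean

def first_claim_word_count(claims_text: str) -> int:
    """Count the words in claim 1 of a patent's claims section.

    Assumes claims are numbered "1.", "2.", ... at the start of a line,
    as in typical full-text claim listings; real patent text is messier
    and would need more robust parsing.
    """
    # Capture everything from "1." up to the start of claim 2 (or end of text).
    match = re.search(r"(?ms)^\s*1\.\s*(.*?)(?=^\s*2\.|\Z)", claims_text)
    if match is None:
        raise ValueError("could not locate claim 1")
    return len(match.group(1).split())

def relative_first_claim_length(word_count: int, field_counts: list[int]) -> float:
    """Normalize a first-claim word count by the mean count in the same
    technological field (a hypothetical normalization; the paper's exact
    construction may differ). Values below 1 indicate a shorter-than-average
    first claim, suggesting a broader patent."""
    return word_count / mean(field_counts)

claims = ("1. A widget comprising a frame and a spring.\n"
          "2. The widget of claim 1, wherein the spring is coiled.")
n = first_claim_word_count(claims)  # words in claim 1 only
```

The appeal of the measure is exactly this simplicity: unlike class counts or forward citations, it requires nothing beyond the patent's own text.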

They validate the word-count measure by comparing with survey responses from seven patent attorneys (below). In comparison, they find that previous measures of patent scope—the number of classes, the number of citations by future patents, and the number of claims—are uncorrelated or negatively correlated with their attorneys' subjective responses.


Of course, there are lots of reasons that word count is an imperfect measure, and additional validation would be helpful. (It would also be good to confirm that the attorneys in this study were blinded to the study design.) Those planning empirical patent studies should approach this variable with caution (and with good advice from patent law experts), but it is a potential scope measure that patent empiricists should at least have on their radar screens.

Wednesday, July 19, 2017

Liscow & Karpilow on Innovation Snowballing and Climate Law

Patent scholars are often skeptical of the government "picking winners," but in Innovation Snowballing and Climate Law, Zach Liscow and Quentin Karpilow (Yale Law) argue that the government should target specific technologies to address social harms like climate change.

It is well known that green technologies present a double externality problem. Both innovation and environmentally friendly goods have significant positive spillovers (and thus will be undersupplied absent government intervention), and the problem is magnified for environmentally friendly innovations. The standard policy solution is to correct each externality, such as through carbon taxes (or cap and trade) and innovation subsidies (e.g., patents, grants, and R&D tax incentives).

Liscow and Karpilow argue that this approach misses the dynamics of cumulative innovation. We know that innovators stand on the shoulders of giants, but Innovation Snowballing describes recent research on how innovators "prefer to stand on the tallest shoulders in order to get the quickest, largest financial returns." Specific "clean" technologies (like solar) thus need a big push to snowball past "dirty" technologies (like fossil fuels):

Thursday, July 13, 2017

Judge Dyk on the Supreme Court and Patent Law, with Responses

Judge Timothy Dyk of the Federal Circuit has long welcomed the Supreme Court's involvement in patent law—see, e.g., essays in 2008 and 2014. In a new Chicago-Kent symposium essay, he states that he "continue[s] to believe that Supreme Court review of our patent cases has been critical to the development of patent law and likewise beneficial to our court," such as by "reconciling [Federal Circuit] jurisprudence with jurisprudence in other areas."

Four pieces were published in response to Judge Dyk, and while Michael previously noted Greg Reilly's argument that the Supreme Court does understand patent law, the others are also worth a quick read. Tim Holbrook (Emory) argues that some of the Court's interest reflects "suspicion about the Federal Circuit as an institution" but that the result is "a mixed bag" (with some interventions having "gone off the rails"). Don Dunner (Finnegan) is even more critical of the Supreme Court's involvement, arguing that "it has created uncertainty and a lack of predictability in corporate boardrooms, the very conditions that led to the Federal Circuit's creation." And Paul Gugliuzza (BU) argues that "the Supreme Court's effect on patent law has actually been more limited" because its decisions "have rarely involved the fundamental legal doctrines that directly ensure the inventiveness of patents and regulate their scope" and because its "minimalist approach to opinion writing in patent cases frequently enables the Federal Circuit to ignore the Court's changes to governing doctrine."

Monday, July 3, 2017

USPTO Economists on Patent Litigation Predictors

Alan Marco (USPTO Chief Economist) and Richard Miller (USPTO Senior Economist) have recently posted Patent Examination Quality and Litigation: Is There a Link?, which compares the characteristics of litigated patents with various matched controls. The litigation data was from RPX, the patent data was from various USPTO datasets, and the controls were either chosen randomly from the same art unit and grant year or were chosen with propensity score matching based on various observable characteristics. They are interested in whether examination-related variables that can be controlled by the USPTO are related to later litigation, and they conclude that "some examination characteristics predict litigation, but that the bulk of the predictive power in the model comes from filing characteristics."
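For readers unfamiliar with propensity score matching, the general idea behind this kind of matched-control design can be sketched in a few lines. This is a toy illustration, not Marco and Miller's procedure: the hand-rolled logistic fit, the one-to-one nearest-neighbor step, and the function name are all my own assumptions.

```python
import numpy as np

def propensity_match(X_treated, X_control, steps=2000, lr=0.1):
    """One-to-one nearest-neighbor matching on estimated propensity scores.

    X_treated / X_control: 2-D arrays of observable characteristics
    (e.g., claim counts, pendency) for litigated and candidate control
    patents. Returns, for each treated unit, the row index of its
    matched control. Uses a small gradient-descent logistic regression
    to estimate the probability of "treatment" (here, being litigated).
    """
    X = np.vstack([X_treated, X_control]).astype(float)
    X = np.hstack([np.ones((len(X), 1)), X])            # intercept column
    y = np.r_[np.ones(len(X_treated)), np.zeros(len(X_control))]
    w = np.zeros(X.shape[1])
    for _ in range(steps):                               # minimize log-loss
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    scores = 1.0 / (1.0 + np.exp(-X @ w))                # propensity scores
    p_treat = scores[: len(X_treated)]
    p_ctrl = scores[len(X_treated):]
    return [int(np.argmin(np.abs(p_ctrl - p))) for p in p_treat]
```

Matching on the estimated propensity score, rather than on each covariate separately, lets the comparison group resemble the litigated patents along many observable dimensions at once.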

Marco and Miller report that patents filed by small entities are more than twice as likely to be litigated as those filed by large entities, and patents with longer continuation histories and application pendency are also more likely to be litigated. Government-interest patents and foreign-priority patents are much less likely to be litigated than other similar patents. Other characteristics that indicate a higher probability of subsequent litigation include having more independent claims and shorter independent claims (proxies for broader patents), being allowed by examiners with signatory authority, not being allowed on first action, and having more IDS filings or examiner interviews.

Monday, June 26, 2017

Gugliuzza & Lichtman on the Timing of Patent Litigation

Are patent cases being litigated too quickly or too slowly? Two recently posted articles tackle this problem from different angles:

Paul Gugliuzza's Quick Decisions in Patent Cases argues that patent litigation is "notoriously expensive and time consuming," but that there is a beneficial trend toward quicker decisions through practices such as pleadings-stage dismissals on patent eligibility grounds, post-grant revocation at the PTO, and heightened pleading requirements. These changes have been controversial, and Gugliuzza discusses ways each development might be further improved in terms of the overall tradeoff between accuracy and cost. But overall, Gugliuzza argues that the benefits of faster litigation resolution probably outweigh the downsides.

On the other hand, Doug Lichtman's Patient Patents begins with the provocative claim that "a large number of patent cases are today being litigated too quickly." His basic argument is straightforward: Delay is most costly in cases that potentially involve injunctions, but post-eBay, many patent plaintiffs are denied injunctive relief. In these cases, "delay takes a day for which the accused infringer would have been paying a court-ordered ongoing royalty and transforms it into a day for which the accused infringer will instead pay court-ordered backward-looking damages." Thus, these cases "are the ideal candidates for which to consider tailored, accuracy-enhancing litigation delay." This is not to say that delay is costless; perhaps most importantly, as Lichtman acknowledges, it "increases the duration of patent uncertainty." His point is simply that the optimal balance has been shifted by the increased prevalence of damages over injunctions.

Although these two articles might initially seem contradictory, they are really focused on different aspects of the cost-benefit analysis of litigation timing. Indeed, Gugliuzza notes and does not dispute Lichtman's argument, but contends that it does not affect the majority of cases because "nearly seventy-five percent of successful patentees still obtain permanent injunctions, and that figure increases to eighty percent when PAEs are excluded." I think both are worth a read.

Tuesday, June 20, 2017

More Classic Patent Scholarship

It has been a while since the last update to my Classic Patent Scholarship, so I thought I would add some works that I view as "classics" but that haven't made it onto the list yet.

First, while the body of "Beyond IP" scholarship is blossoming (see, e.g., the two Yale ISP conferences, where I got to present work with Daniel Hemel), there is a long history of work on innovation incentives beyond patents. For example, Machlup and Penrose (already on the list of classics) describe how the patents-vs-prizes debate dates back to at least the 19th century. Here are two works I would add to the classics list:
Other important works in this genre, which don't quite fit under my pre-2000 "classic" bar, include Frischmann 2000, Shavell & van Ypersele 2001, Gallini & Scotchmer 2002, and Abramowicz 2003.

As a former grant-funded university researcher (during my physics grad school days), I'm particularly interested in the role of grants and other direct funding as a non-patent incentive, and their overlap with patents through the Bayh–Dole Act. Here are some additional classics in this area:

Finally, there is now a long strand of literature on the Federal Circuit as an institution and the value of specialized patent adjudication; anyone interested in this area should start with the work of Rochelle Dreyfuss:

For other classics—including more extended commentary on them by prominent patent law professors—see the Classic Patent Scholarship page. And if you have suggestions of other pre-2000 works that should be on the list, please add them to the comments or send me an email!

Monday, June 12, 2017

Jeanne Fromer: Should We Regulate Certification Marks?

Teaching trademark law for the first time this spring, I fielded several questions from students on a lesser-known corner of trademark law: certification marks. For those who have not encountered certification marks, they are a special type of trademark, whose role is to certify that goods or services comply with a particular standard. Precisely how are certification marks obtained, students asked, and how closely does the PTO scrutinize the chosen standard? What if a company wants to certify its goods, and the mark owner refuses out of an arbitrary dislike for the seller rather than the contents or quality of its offerings? So it was fortuitous that I came across Jeanne Fromer's article The Unregulated Certification Mark(et), published in January in the Stanford Law Review. Fromer's paper answers these questions and much more.

Wednesday, June 7, 2017

More Impressions About Patent Exhaustion

Daniel Hemel and Lisa Larrimore Ouellette
Cross-posted at Whatever Source Derived

As we explained last week, the full impact of the Supreme Court’s decision in Impression Products v. Lexmark will depend on whether courts are willing to view creative patent transactions as licenses (which do not exhaust the patentee’s rights) rather than sales (which, after Impression, now do). While it is too early to answer that question, we can already anticipate answers to two related questions regarding Impression’s impact: (1) What does the decision mean for pharmaceutical prices in the United States and abroad?; and (2) How will Impression affect information costs in markets for patented products? With respect to the first question, we expect that Impression will put upward pressure on pharmaceutical prices in developing countries—and downward pressure on prices in the United States—notwithstanding the fact that the importation of drugs from abroad will remain illegal under most circumstances. As for the second question, we are skeptical that Impression will have a substantial effect on information costs in markets for patented products, notwithstanding some of the enthusiastic commentary in the technology press immediately after the decision.

Below, we explain both of these conclusions in more detail.