Almost everyone (even my parents) has seen the Crying Michael Jordan meme popping up around the internet and social media. Crying Jordan has appeared in the standard meme form of photoshopped images and gifs but has also inspired Halloween masks and even customized Air Jordan sneakers. TMZ reports that Jordan doesn’t have a problem with it, as long as no one uses it to “promote their commercial interests.” But what if he changed his mind or someone started using it for commercial gain? Could Jordan protect himself against “unauthorized memeing”?
Tweet nicely to the Twitter bot, “LnH: The Band”—a newcomer in artificial intelligence music generation—and the bot will automatically compose melodies for you. The AI-based band is “currently working on their first album,” according to LnH Music, but who will own the rights and royalties to the album? Or what about Mubert, which is touted by its creators as the world’s first online music composer, and which “continuously produces music in real-time … based on the laws of musical theory, mathematics and creative experience?” In other words, if a computer program generates a creative work—be it a song, book or other creation—is there a copyright to be owned? If so, who owns and gets to collect on the copyright?
Until recently, social media has been one of the only recourses for fashion designers and labels that have had their designs knocked off. Take the Aquazzura “Wild Thing” sandal, for example. Aquazzura is a high-end shoe brand that designed and released the $785 sandal, identifiable by its “wild” fringe on the toes. Shortly after, Ivanka Trump released the “Hettie” sandal, an almost identical shoe which, priced at $145, was more than $600 less expensive.
In “The Case of Prince, a Dancing Baby and the DMCA Takedown Notice,” we discussed the potential impact of the Ninth Circuit decision in Lenz v. Universal Music Corp., 801 F.3d 1126 (2015), a.k.a. the “dancing baby case,” in which the appeals court held that under the Digital Millennium Copyright Act (DMCA), copyright holders have a “duty to consider—in good faith and prior to sending a takedown notification—whether allegedly infringing material constitutes fair use.” However, in considering whether there is fair use, the court was “mindful of the pressing crush of voluminous infringing content that copyright holders face in a digital age.” To deal with this reality, the court affirmed that computers may be leveraged to support the fair use analysis.
We have written previously about the application of traditional discovery rules to “newer” platforms, and how social media content can be discoverable and used in litigation. What about using information from social media in jury selection? U.S. District Court Judge William Alsup says no.
At its heart, social media’s purpose is sharing content; however, fair use can only take one so far. A recent case serves as yet another reminder to exercise caution when reposting content, and that, in a litigious society, it is advisable to take the conservative approach and secure permission before reposting another’s content, even when there has been some modification of that content.
In our recent post, Living in a Nonmaterial World: Determining IP Rights for Digital Data, we discussed the potential impact of the Federal Circuit decision in ClearCorrect v. ITC, 2014-1527, in which the appeals court held that the “articles that infringe” are limited to “material things” and thus do not include “electronic transmission of digital data.” The decision limited the regulatory jurisdiction of the U.S. International Trade Commission (ITC) to articles that are considered physical products. The implications of the decision are far-reaching since the Internet of Things touches on most industry sectors. As previously noted, open-Internet advocacy groups have praised the decision, characterizing it as a “win for the Internet,” while other groups (including the dissent to the opinion) see the decision as a significant setback in the fight against overseas piracy of patented and copyrighted works.
Hours. Days. Weeks. Months. When it comes to acting on copyright infringement takedown notices, just how fast is fast enough for social media platforms? Some recent (and not-so-recent) cases reveal how difficult the question has proven for the courts.
Last month, Google announced a groundbreaking policy that may help shift the balance of power between copyright claimants and those who upload YouTube videos that may be covered by fair use. According to Google’s Public Policy Blog, users upload more than 400 hours of video every minute. Those uploads sometimes make use of existing video or music clips in new and transformative ways. When an upload transforms the original work in this way (such as a parody or critique), it adds social value beyond the value contained in the original work. In the United States, a transformative use is considered a fair use and exempted from copyright infringement liability.
As user-generated content explodes over the Internet, intellectual property disputes over posting or uploading such content without the owner’s consent continue to escalate. As we touched on in a recent post, social media platforms, hosting websites or other online service providers (OSPs) may be drawn into these disputes based on the infringing actions of users of these OSPs. In such a situation, the Digital Millennium Copyright Act (DMCA) provides a safe harbor provision to the OSP known as the Online Copyright Infringement Liability Limitation Act (OCILLA). This provision, found at 17 U.S.C. § 512(c), protects service providers from liability for “infringement of copyright by reason of the storage at the direction of a user of material that resides” on the provider’s system or network, if certain requirements are met.