Recent developments in deep learning have enabled almost anyone to superimpose facial features (including an entirely different face) onto a preexisting video with relatively minimal effort. Until very recently, editing facial features in a video was incredibly difficult. Even movie studios with access to professional video editing tools have struggled with the task as recently as 2017, when actor Henry Cavill, portraying everyone’s favorite son of Krypton, sported a mustache he was contractually unable to remove during reshoots, leading to a widely criticized post-production digital shave. Because of the inherent difficulty of manipulating video convincingly, the public has largely trusted video’s authenticity while viewing still photos more skeptically. With recent developments in artificial intelligence, that thinking must now change.
Artificial intelligence is a transformative technology (or an existential threat, depending on which futurist or sci-fi author you read) that will leave few if any industries untouched when all is said and done. Still, no matter how transformed your particular business landscape, most companies that decide they need to employ AI probably won’t be AI companies themselves, which means turning to third-party vendors. Colleagues Tim Wright and Antony Bott have prepared some pointers on how to maximize the benefits of dealing with such vendors while minimizing the downsides (and cutting through the sales hype) on Pillsbury’s Sourcing Speak blog. “How to Buy AI: Ten Top Tips for Buying Automation Technologies” looks at key best practices to adopt when structuring and negotiating your AI contracts.
Three years after Elon Musk announced in his famous “All Our Patent Are Belong To You” blog post that Tesla would be opening all of its patents to the public, he tweeted a recommendation of Max Tegmark’s recent book Life 3.0: Being Human in the Age of Artificial Intelligence, which happens to allude to a not-too-distant future in which, under current patent law, all inventions might be free and open to the public. In this story, a superhuman artificial general intelligence is secretly created by humans, and its creation marks the beginning of the end of human invention.
Under Section 7 of the National Labor Relations Act (NLRA), all employees have a right to engage in protected concerted activity, even if they are not unionized. Such activities include those performed for the mutual aid or protection of all employees, such as discussing the terms and conditions of employment. The Act prohibits an employer from interfering with, restraining or coercing employees in the exercise of their Section 7 rights. In the past decade, a number of important cases decided by the National Labor Relations Board (NLRB), the agency that protects the rights of employees to join together and improve wages and working conditions, have impacted social media policies. In fact, many of those decisions have struck down social media policies as unenforceable under the NLRA. If any provision in a social media policy is vague or overbroad and can be read as restricting activities protected by Section 7, that provision will likely be found unlawful and unenforceable by the NLRB.
A recent order by the SEC relating to an initial coin offering (ICO) by Munchee Inc. dealt a blow to the common practice of making a distinction between “utility tokens” and “security tokens.” In doing so, the SEC seems to also reject what our colleagues Daniel N. Budofsky and Robert B. Robbins refer to as the “magic frog” approach, the belief that a token can begin life as a security token (i.e., a magic frog) but at the point that the application and ecosystem go “live,” the token will be transformed into a utility token (i.e., the magic frog becomes a prince) and any securities law restrictions will no longer apply. In their recent client alert, “The SEC’s Shutdown of the Munchee ICO,” they examine this issue in greater detail and explore ways in which it is still possible to carry out an ICO that’s in compliance with the Securities Act.
In a time when “fake news” is common parlance and tensions rise in response to the smallest media slight, is it time for algorithms to take the place of humans in moderating news? The author of this New York Times article seems to think so. What role should algorithms play, and to what extent should they be used, in regulating and implementing everyday business ventures, routine government agency processes, health care management and more? Who should take responsibility for a problem or negative consequence if everything is verified by an algorithm? And, importantly, what will enhanced monitoring of algorithms do to the progress and profitability of companies whose bottom line depends on the very algorithms that can cause unforeseen, sometimes very harmful, problems?
As technology becomes increasingly advanced and complex, it seems that new software emerges every day to perform some novel function. Whether it is creating computer-generated imagery (CGI) or deciphering a code in the Bible, software developers are helping the users of their software make great strides in all types of industries. In these situations, it is commonly accepted that the developer owns the software and that the user enjoys its benefits through a license. A less clear issue has arisen in recent years, however: does the software developer own the output generated when using the software?
Niantic looks to the Potterverse for its next potential AR blockbuster, Instagram’s ToS don’t travel so well in Germany, Google gives VR and AR app developers a new tool, holograms may help our memories outlive us, and more!
Cloning is the practice of creating a video game that is heavily inspired by an existing popular video game or series. Developers have been cloning popular video games since the 1980s, with targets including Tetris, Doom, Minecraft, Bejeweled and Flappy Bird. Often, game developers create clones in an attempt to confuse users and cash in on the original game’s popularity.