Under Section 7 of the National Labor Relations Act (NLRA), all employees have the right to engage in protected concerted activity, even if they are not unionized. Such activities include those undertaken for the mutual aid or protection of all employees, such as discussing the terms and conditions of employment. The Act prohibits an employer from interfering with, restraining, or coercing employees in the exercise of their Section 7 rights. In the past decade, the National Labor Relations Board (NLRB), the agency that protects the rights of employees to join together and improve wages and working conditions, has decided a number of important cases that affect social media policies. In fact, many of those decisions have struck down social media policies as unenforceable under the NLRA. If any provision in a social media policy is vague or overbroad and can be read as restricting activities protected by Section 7, the NLRB will likely find that provision unlawful and unenforceable.
A recent order by the SEC relating to an initial coin offering (ICO) by Munchee Inc. dealt a blow to the common practice of drawing a distinction between “utility tokens” and “security tokens.” In doing so, the SEC also seems to reject what our colleagues Daniel N. Budofsky and Robert B. Robbins refer to as the “magic frog” approach: the belief that a token can begin life as a security token (i.e., a magic frog) but that, at the point the application and ecosystem go “live,” the token will be transformed into a utility token (i.e., the magic frog becomes a prince) and any securities law restrictions will no longer apply. In their recent client alert, “The SEC’s Shutdown of the Munchee ICO,” they examine this issue in greater detail and explore ways in which it is still possible to carry out an ICO that complies with the Securities Act.
At a time when “fake news” is common parlance and tensions rise in response to the smallest media slight, is it time for algorithms to take the place of humans in moderating news? The author of this New York Times article seems to think so. What role should algorithms play, and to what extent should they be used, in regulating and carrying out everyday business ventures, routine government agency processes, health care management and the like? Who should take responsibility for a problem or negative consequence if everything has been verified by an algorithm? And, importantly, what will enhanced monitoring of algorithms do to the progress and profitability of companies whose bottom line depends on the very algorithms that can cause unforeseen, sometimes very harmful, problems?
As technology becomes increasingly advanced and complex, it seems that new software emerges every day to perform some novel function. Whether it is producing computer-generated imagery (CGI) or deciphering a code in the Bible, software developers are helping the users of their software make great strides in all types of industries. In these situations, it is commonly accepted that the developer owns the software and that the user enjoys the benefits of the software through a license. A less clear issue has arisen in recent years, however: does the software developer own the output generated when using the software?
Niantic looks to the Potterverse for its next potential AR blockbuster, Instagram’s ToS don’t travel so well in Germany, Google gives VR and AR app developers a new tool, holograms may help our memories outlive us, and more!
Cloning is the practice of creating a video game that is substantially similar to, or heavily inspired by, an existing popular video game or series. Developers have been cloning popular video games, including Tetris, Doom, Minecraft, Bejeweled and Flappy Bird, since the 1980s. Often, game developers create clones in an attempt to confuse users and cash in on the original game’s popularity.
When it comes to finding ways of making money, no corner of a capitalistic society shall go unmined. This applies to obvious goods and services but also comes into play with our very thoughts and how we express them. In the age of social media, not even the framed needlepoint proverb is safe from “disruption”: behold, the framed tweet.
As we discussed recently, the Equifax data breach has inevitably brought a great deal of scrutiny and legal action against the credit reporting agency. Amidst the numerous brewing class actions and other reactions from government agencies and state AGs, it’s worth pointing out another front on which the company—and more importantly, individuals within the company—may face legal consequences.
Since September 7, 2017, Equifax, one of the three major credit reporting agencies in the United States, has been dealing with the fallout from one of the largest (known) data breaches of personal information, which put 143 million Americans (roughly 44% of the U.S. population) at risk of fraud and identity theft.
Last week, the FTC brought its first action against a social media influencer for failing to make appropriate disclosures on sponsored posts. While it had previously pursued companies that pay influencers for posts, such as Lord & Taylor and Warner Brothers, this marks the first time the FTC has taken action against an influencer directly.