A few quick social media legal notes for today.
First, two senators are continuing to press the employer Facebook password issue. This is the downside of social media–sometimes a story gets a lot of press attention even though it doesn’t deserve it. In this case a few stories from several years ago got blown out of proportion, people got outraged, and now we’re wasting time talking about potential legal action over issues that don’t really matter. Employers who ask for passwords already expose themselves to significant legal risk, and employees are always free to say no. I know it’s wishful thinking to ask if we could all just move on, but that’d be nice.
The FTC has issued its report on Protecting Consumer Privacy in an Era of Rapid Change. It’s really long, so we’ll see more about it in the weeks ahead, but the big points were no surprise–the FTC endorses Do Not Track features, wants big companies to do more, looks to Congress to establish some rules and give the FTC the authority to enforce them, etc. The biggest surprise to me was the report’s frequent endorsement of an “eraser button” that would permanently delete user posts. It’s similar to what the EU has been debating as the Right to be Forgotten, but on a smaller scale. The FTC does admit the idea might face some technical obstacles, but it’s an interesting first step toward a view that has mostly gained traction only in Europe.
Tumblr announced it will regulate posts about anorexia, cutting, and other self-harm. The balance between remaining a neutral content platform (to preserve legal protections) and policing harmful material is a tricky one. Amazon faced related pressure a while ago when it allowed books endorsing pedophilia to be published on the Kindle platform–at first Amazon said it would be a neutral platform, then it caved under media pressure. We have yet to see a major platform lose its overall protections as a neutral provider for policing this kind of harmful content, but it’s only a matter of time until it happens. Then we’ll have to decide where we want to draw the line between allowing editorial control over harmful posts and making platforms accountable for everything they host. Right now it’s easy–advocates for anorexia aren’t sympathetic plaintiffs. But what happens if a large social platform decides to delete all posts criticizing Planned Parenthood because it deems those posts harmful to women?