Last week showed how social media can be incredibly powerful on one hand and incredibly damaging on the other. In the wake of the marathon bombings in Boston, social media communities rose up to offer assistance. Some of it was valuable and helpful, like Google’s Person Finder tool, which helped people in the Boston area find each other in the hectic aftermath, or the online document that connected people in need of places to stay with generous locals willing to house stranded runners and visitors. Some of the offered assistance, though, was far less helpful.
Two communities (which I’m not going to name, but they’re easy to find) took it upon themselves to try to find the bomber(s) by combing through photos and videos taken near the scene and identifying suspects. While the sentiment behind the effort appears well intentioned, the results were anything but. The groups focused their efforts on a number of people, imputing malicious motives to the way they carried bags, their proximity to the bombing locations, or the fact that a single frame showed a person not intently watching the runners still on the marathon course. But one criterion stood out above the rest.
The communities focused on a number of criteria: carrying bags, looking distracted, being alone, appearing in later pictures without a bag, etc. But one criterion that popped up numerous times was whether the individual had dark skin–a factor that was ultimately wrong and ended up saying much more about the reviewing community than it did about the actual suspects. That thinking wasn’t confined to internet forums, either. Even the New York Post jumped in on the potentially racist, certainly irresponsible accusations, as Deadspin pointed out when it called the Post out for falsely claiming a high school runner was being sought by authorities. The real story was that the student saw his picture circulating online and turned himself in to avoid anything bad happening. One of the communities that contributed to this false identification later apologized for its part, but we should be thankful the harm was contained.
It turns out that social media may be good at reporting some facts, uploading photos, and providing video, but it’s really, really bad at forensic analysis. One could easily say dangerously bad, given the heightened emotions around this situation.
But providing information is certainly a strength of social media; the question is what we do with it afterwards. That’s what makes social media Jimmy Olsen rather than Lois Lane–we can take pictures, but collectively we might not be very good at analyzing or even accurately reporting on the information. And we certainly can’t act on it (thank goodness).
Social media is still a valuable resource in the pursuit of justice. Take, for example, the 2011 Vancouver riots, where concerned citizens provided authorities with thousands of hours of video and over a million photos taken during the riots. Analyzing videotape and pictures after a riot wasn’t a new experience, not even for Vancouver–after its 1994 riots, authorities analyzed a bit over 100 hours of video footage, but that effort took four months. In 2011, with the help of the Law Enforcement and Emergency Services Video Association (LEVA), authorities were able to analyze over 5,000 hours of footage and over a million photos in two weeks. They tagged 15,000 criminal acts, which led to just over 300 convictions–a fraction of the identified acts, but still a huge improvement over 1994, when months of analysis led to only around 100 convictions.
Social media’s application in criminal justice will only grow over time. The value of contributing photos and other information is incredible–but we must be very careful as the crowd starts to get involved beyond the purely objective. It turns out there’s a reason we have experts doing analytical work.