When Big Data goes bad
While browsing recent articles about big data, I found Joshua Klein's "When Big Data goes bad" and was drawn in by the fear-mongering title. In fact, the article is not about how big data is going to lead to the apocalypse, but about how the models computers use to interpret big data for us can contain small flaws that lead to major mistakes. Klein walks through a few examples of how this has happened, including a biology book that was listed for millions of dollars on Amazon and the mortgage crisis of 2008. His argument is that these failures, and others like them, are the result of minor mistakes in the models that computers use to translate big data into something humans can understand.

Golumbia talks about the differences between how a computer and a human understand things, and I think this is what Klein's article hinges on. A computer can only "understand" something based on the set of rules it's given in a program. A human, on the other hand, understands things through many different factors: their own assumptions, cultural context, and personal experience, among others.

Klein doesn't seem to be arguing that we ought to do away with big data or the models used to interpret it, just that we ought to take a close look at what is really going on "under the hood," so that if and when those models fail, we can understand why.
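The Amazon example is a good illustration of how two simple rules can compound into an absurd result. Below is a minimal, hypothetical sketch of that kind of repricing feedback loop: two sellers' bots each apply a sensible-sounding rule (price a bit above the competitor to look premium, or just under the competitor to stay attractive), and neither rule includes a sanity check. The multipliers and starting price here are illustrative, not the actual figures from the incident.

```python
def reprice(price_a, price_b, days):
    """Simulate two naive repricing bots reacting to each other daily.

    Bot A prices 27% above bot B (hypothetical 'premium seller' rule);
    bot B prices just under bot A (hypothetical 'undercut' rule).
    Neither bot has an upper bound, so prices escalate without limit.
    """
    for _ in range(days):
        price_a = price_b * 1.27   # A: stay well above the competitor
        price_b = price_a * 0.998  # B: stay just below the competitor
    return price_a, price_b

# A $20 book after a month of automated repricing:
a, b = reprice(20.0, 20.0, 30)
print(f"Seller A: ${a:,.2f}, Seller B: ${b:,.2f}")
```

Each bot's rule is locally reasonable; the failure only appears in their interaction, which is exactly the kind of "under the hood" behavior Klein says we need to inspect.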