I was really struck by an article last week describing research that used publicly available Facebook profiles to predict which students are likely to suffer from alcohol abuse. The article suggests toward the end that there is an open question of how appropriate it is to scan students’ public Facebook profiles for behavior suggestive of a drinking problem. To me this misses the point and distracts from more important questions: who gets missed if the predictive algorithm is wrong, and whether the algorithm ends up substituting for other, more tried-and-tested screening measures (either because it’s lower cost or perceived to be more effective in the short run).
I suspect we’ll be seeing more and more applications like this. Those in the personal healthcare arena, such as personal health tracking that predicts disease risk, will seem largely uncontroversial at first, but may pose the hardest questions about how to ensure fair access to insurance and preventive care. Applications focused more on public health or social behavior modification will, I suspect, provoke more ire, uncertainty, fear, and concern. For example, it doesn’t take much imagination to picture a world in which the social graph and one’s behavior in a social network are used to predict an individual’s likely political tendencies, or, put another way, the likelihood that one is a terrorist from the point of view of a given state.
It would be interesting to see, for a given use case, a mapping from the pre-digital algorithm to the post-digital one. That is, what inputs are used to screen for alcohol abuse today, and how do those inputs change in a digital era?