The future of health research depends on more than bio data

Last week I spent a bit of time learning about the Quantified Self movement—a group I’ve been following from the shadows for a couple of years now, but hadn’t really engaged with closely until recently. The group is an informal collection of self-trackers: people who use measurement and data collection as a means to deeper self-understanding and to improving the quality of their lives. It’s a pretty neat group, and you’ll learn a lot just from perusing the blog.

The quantified self trend is one I’ve come to think is the future of healthcare—data-driven research and self-improvement. What I find most compelling about some of the QS examples is the way in which otherwise irrelevant, mundane observations about one’s life can lead to sharp insights and improvements. Which is why I find a new project from John Wilbanks particularly interesting—as he describes it:

 The idea behind CtR is simple: make it easy for people who want to share data about themselves for scientific, medical, and health research to do so. It’s not centered on intellectual property, though it does touch on it. It’s more about privacy, and in particular, about making it possible for people to get informed about what is possible with their data and how beautiful research can emerge if enough genomes, enough biosamples, and enough other kinds of data can be shared and connected.

When we think about the future of scientific, medical, and health research it’s easy to stop at the types of medical data we collect today—blood pressure, genomes, drugs consumed, etc.—but the QS movement suggests we could learn more by going further into the mundane and unexpected sources of data.

Which brings me to Flu Trends, an indicator of the incidence of flu in a given area based on search behavior. It can sometimes feel like an overused example, especially in some of the privacy circles I move in, but it remains an important one. Flu Trends speaks to an issue at the core of the change we need to see in how we think about privacy—the idea that data must be used only for the purpose for which it was collected, otherwise known as purpose specification.

In a talk I gave at last year’s OECD roundtable on the economics of personal data I emphasized this point: improving the world in the 21st century is going to depend at least in part on drawing intelligence from vast amounts of loosely collected information. More often than not, surprisingly useful intelligence will come from the most unexpected sources of information—and we need a flexible enough policy environment to support the exploration of those sources.



You had the data, you should have known!

A couple of years ago, while I was at home with my family for Christmas, I got a call from my credit card company. Someone had tried to charge $14,000 to my account on Christmas Eve. “Was that you?” they wanted to know. It was one of the first times I felt grateful that my credit card company had a fraud detection department capable of detecting unusual charges and flagging them quickly—how smart that they could analyze the data so quickly and notice unusual activity. Over this past weekend I got a call from the fraud department for the second time ever, and this one left me wondering whether the fraud detection department has simply stopped investing in data analytics.

I’ve been traveling for a week, something I’ve been doing less and less of over the past year or so. I’d flown into Boston on Tuesday night, arriving two hours late at roughly 2am, worked from Cambridge for a couple of days, then rented a car Friday to drive to a wedding in Hanover, New Hampshire. Being the amazing advance planner that I am (ahem, or rather, not-so-amazing), I’d taken off from Cambridge and high-tailed it toward New Hampshire without thinking twice about cash or gas. Along the way I hit a toll plaza (forgot about those!) and had to apologize profusely to the toll collector while I pecked around in my coin purse to produce 96 cents, not quite the $1 he was hoping for, but he kindly let me through. I realized I may have been slightly unprepared, and was now without any cash, watching a rapidly falling gas gauge, and approaching nightfall. An hour or so later, just before 8pm and with 20 miles to go, I decided the near-empty tank wasn’t going to get me all the way there and I’d better pull off for gas before stations closed. I pulled into an isolated gas station a few miles off the highway only to find that my credit card company had disabled my account. Lucky for me, I had a different card in my wallet—so I paid and went on my merry way.

I arrived at my hotel to find a voice mail from the fraud detection department asking for a return call. After verifying my identity, they asked about a slew of what seemed like fairly obvious charges: “Did you charge $7 to United Airlines on Tuesday? $10 to the Boston T on Wednesday? $300 to Enterprise Car Rental on Friday? What about this $30 for <some random NH gas station> earlier tonight?”

Seriously, guys? I bought an airplane ticket from SFO to BOS using your credit card a month earlier. I then reserved a hotel room for Friday and Saturday nights in Hanover on your credit card. Then I made a car rental reservation from Cambridge, again with your credit card. So your detection system sounded alarms because I bought a glass of wine on a flight you knew I’d purchased, bought a subway pass in a city you knew I’d arrived in, rented a car you knew I’d reserved, and bought gas for the car you knew I’d rented? Was it really that hard to connect the dots? What exactly have you been investing in or innovating on over the past few years? Anything at all?
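To make the complaint concrete: a first line of defense is simply to cross-reference incoming charges against travel the company already knows about. Here is a minimal sketch of that idea in Python; the itinerary format, the categories, and the matching rule are all invented for illustration, and real fraud systems are obviously far more sophisticated.

from datetime import date

# Hypothetical sketch only: invented data and a toy matching rule, not any
# real issuer's system. The point is just that charges consistent with an
# itinerary the company already knows about shouldn't trip an alarm.
itinerary = [
    # (category, location, valid_from, valid_to) inferred from prior bookings
    ("airline",    "in-flight",     date(2011, 6, 14), date(2011, 6, 14)),
    ("transit",    "Boston",        date(2011, 6, 14), date(2011, 6, 17)),
    ("car_rental", "Boston",        date(2011, 6, 17), date(2011, 6, 19)),
    ("gas",        "New Hampshire", date(2011, 6, 17), date(2011, 6, 19)),
]

def consistent_with_itinerary(category, location, day):
    """True if a charge fits a trip segment we already know about."""
    return any(category == cat and location == loc and start <= day <= end
               for cat, loc, start, end in itinerary)

charges = [
    ("airline",    "in-flight",     date(2011, 6, 14),   7.00),  # in-flight wine
    ("transit",    "Boston",        date(2011, 6, 15),  10.00),  # T pass
    ("car_rental", "Boston",        date(2011, 6, 17), 300.00),  # reserved car
    ("gas",        "New Hampshire", date(2011, 6, 17),  30.00),  # gas for it
]

for category, location, day, amount in charges:
    status = "ok" if consistent_with_itinerary(category, location, day) else "ALERT"
    print(status, category, location, day, amount)

Nothing in that sketch is hard. The data was all there.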

I spend so much of my professional life working with people who worry about privacy invasions resulting from advanced data analytics. Never do I hear people saying the kind of thing I thought Friday night: oh, dear credit card company, you had the data, you should be smart enough to make my life easier by figuring out what it means. Instead, when companies do this sort of analytics well, I hear outrage over how the company could possibly have known what it did, and whether what it knew was actually accurate.

When are we as a society going to get as outraged about the opportunities lost by failing to use data analytics as we get about the potential privacy invasions that might arise from collecting and using the underlying data sets? When will the public pressure be of the form “Why didn’t you know better?” instead of “You shouldn’t have known that”?


This week’s reading: Predicting Premeditation

Last week I stumbled across an interesting summary of a recent paper in the journal Memory and Cognition: 

By setting your predictions for the future in a familiar landscape, you allow yourself to use your memories of the past to help you predict what might go wrong in the future.  If you are only able to think abstractly about the future, then you are much less likely to find specific problems that may arise.

The paper is titled “Predicting Premeditation: Future Behavior is Seen as More Intentional than Past Behavior.” I look forward to reading it in full this week.

At a high level, this led me to ask a few questions about predictions. My anecdotal observations are that in general we “freak out” a bit about predictive analytics, the idea that we might predict a future event based on recorded history. As shown in popular culture (e.g., Minority Report), we spin up all sorts of dystopian fears about what a world driven by predictive analytics might look like.

But ask a scientist who studies humans rather than machines, and apparently he’ll tell you that humans predict in basically the same way machines do—based on past history. My hypothesis is that when we think of prediction as a human capacity, it “freaks us out” a little bit less. I have one guess as to why that might be: we understand human predictions to be simply that—predictions—whereas when data and computers are involved we perceive the predictions to have a more definitive character, and we imagine our use of those predictions to be quite different.

In fact, predictions form the basis of most of the decisions we make every day. In my professional capacity I’m actually encouraged to document my predictions and the facts underlying them so that we can later review our accuracy and revise our internal (human) predictive models. What I find fascinating is this prediction of my own: as predictive analytics become more and more commonplace, we will find more and more reason to fear the outcomes and discourage their use. Yet we generally recognize that better data yields better answers—so the interesting question will be this: what do we trust more, subjective interpretations of history, or data-driven analyses of history?


Distributed healthcare or risky business?

Several years ago I worked on a project for a client examining the business opportunities in mobile healthcare—particularly in developing countries. At the time it all seemed theoretical—if you had good mobile connections and if smartphone penetration were high and if you could connect the patient to the right specialist and if and if and if…it didn’t strike me as the sort of business one would advise a Fortune 500 company to get into for the next growth market. But as with many things, I may have been wrong—distributed analytics on the mobile phone may yet be a growth area. Suffice it to say, I didn’t predict we’d wind up where we are.

Last week I came across this story about a new iPhone app that scans moles to detect likely melanoma: “It scans skin lesions, takes pictures of them, measures the diameter, and uses an algorithm to determine the likelihood of melanoma.” I found it surprising to read about (as, apparently, did Business Insider, which titled its post “HOLY MOLEY…”), but in retrospect I’m not sure I should have. It seems fairly straightforward—they always say you look for abnormal edges, unrounded moles, or very large moles to identify ones that might warrant additional medical attention. You could even imagine the application storing measurements of your moles and noting when significant changes occurred, catching the other major warning sign: moles that grow. Seems totally straightforward.
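In fact, the folk version of that screening rule fits in a few lines of code. To be clear, this is a toy illustration of the kind of heuristic I mean, with made-up features and thresholds; it is not the app’s actual algorithm, and certainly not medical advice.

# Toy "is this mole worth a doctor's look?" heuristic. The features and
# thresholds are invented for illustration; they are not clinical values.

def mole_risk_score(asymmetry, border_irregularity, diameter_mm, growth_mm_per_month):
    """Count how many of the classic warning signs a mole exhibits."""
    signs = [
        asymmetry > 0.3,               # not roughly round
        border_irregularity > 0.5,     # abnormal, ragged edges
        diameter_mm > 6.0,             # larger than a pencil eraser
        growth_mm_per_month > 0.1,     # changing in size over time
    ]
    return sum(signs)

# Growth could come from the app's stored history of past measurements:
history_mm = [4.8, 5.1, 5.9]  # one diameter measurement per month (invented)
growth = (history_mm[-1] - history_mm[0]) / (len(history_mm) - 1)

score = mole_risk_score(asymmetry=0.4, border_irregularity=0.2,
                        diameter_mm=6.5, growth_mm_per_month=growth)
print("warning signs:", score, "-> see a doctor" if score >= 2 else "-> keep watching")

So far, so mechanical. The measurement part really is the easy part.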

Except for the whole “what do you do with this little prediction” part of the equation. That’s where things get messy. Business Insider alludes to the complexity here by mentioning the possibility of future lawsuits over missed malignancies. Forget lawsuits: what about the more straightforward question of whether this is useful at a macro scale or not?

Around the world we face rising healthcare costs, and in the US we face costs that seem to be rising disproportionately to any actual improvements in quality of life or lifespan. We have here an opportunity (maybe) to put a screening mechanism in the hands of anyone with a smartphone. You could quickly get into questions of whether screening works and how well (I recall a discussion of the efficacy of breast cancer screening at some point in The Emperor of All Maladies, which I highly recommend, by the way), but setting all that aside for a minute: we’re talking about putting a screening tool in the hands of laymen.

We all have different set points of anxiety—I’m one of those fortunate souls with a very high set point. I have had to tell a doctor that self-screening for a disease I’m not tremendously likely to have seems like a raw deal to me, because it just creates anxiety and, more often than not, prompts an unnecessary trip to the doctor to allay my worry.

If my phone told me one morning that a mole might be malignant, well, I’d call my doctor. That may be a good thing, if the prediction is likely to be accurate. But the thing is, the developer of an application like this has every reason to over-communicate risk—and to bias toward false positives. As someone with Mediterranean skin who’s had her share of moles removed and possible malignancies investigated, I might wind up calling my doctor a fair bit with this tool.
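The incentive is easy to see with a back-of-the-envelope model. The numbers below are pure invention, but they capture the asymmetry: a missed malignancy could cost the developer a lawsuit, while a false alarm costs the developer almost nothing, since the anxiety and the unnecessary appointment are my problem.

# Made-up costs, from the developer's point of view.
COST_MISSED_MALIGNANCY = 1_000_000  # hypothetical lawsuit / reputation hit
COST_FALSE_ALARM       = 0          # the user and the health system absorb it

def developer_expected_cost(alarm_threshold, p_malignant=0.01):
    """Crude model: a higher alarm threshold means fewer alarms but more misses."""
    p_missed = alarm_threshold        # chance a real malignancy goes unflagged
    p_alarm  = 1 - alarm_threshold    # chance a benign mole gets flagged
    return (p_malignant * p_missed * COST_MISSED_MALIGNANCY
            + (1 - p_malignant) * p_alarm * COST_FALSE_ALARM)

# Sweep thresholds: the developer's cost is minimized by alarming on everything.
for t in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"threshold {t:.1f}: expected cost ${developer_expected_cost(t):,.0f}")

Any plausible refinement (say, a nonzero reputational cost for crying wolf) softens but doesn’t reverse that tilt.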

At some point I may delve into comparisons between this type of directed, action-oriented risk and the longer-term, wellness-oriented kind (where I might classify, for example, 23andMe)—I suspect that both the types of risk at issue and the ways predictions of risk are communicated shape the spectrum of responses available to us, and the likelihood of impact. That seems the core issue here: can this app achieve impact with its predictions, or is it likely to make poor predictions that lead to higher costs all around? And are laymen able to process and act on the predictions given to them in ways that benefit society at large?


Predictive sentencing

The upcoming issue of the Atlantic has a fascinating piece by neuroscientist David Eagleman about criminal behavior and responsibility. It’s apparently adapted from his book, Incognito: The Secret Lives of the Brain—which I am looking forward to reading on an upcoming flight to Europe. This is precisely the type of work I’m hoping to canvass and understand better in the coming year at Berkman.

Eagleman’s book appears to describe the science behind the claim that our brains are made up of rival, competing modules of activity. That alone is fascinating enough. But where it should get really interesting is the final chapter, in which Eagleman sketches out a theory of jurisprudence that bases punishment on predictions of future behavior, as opposed to factual analysis of past behavior. Note that he is talking about punishment, not the basic determination of wrongdoing.

I find the general idea a really interesting one. Could we predict—based on a person’s biology, upbringing, and the context of his life at the time—whether or not he is likely to be a repeat offender? And if we could, should we?  I think Eagleman is right when he says:

Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.

It seems likely, though, that it won’t just be neuroscience answering these questions. A tremendous amount of data is required to do the type of analysis Eagleman is referring to here—the data about upbringing and life context. It’s both neuroscience and big data analysis that are leading us down this path. There is a related set of questions, then, around access to and use of data—many of them being sorted through today, and crystallized into public policy in ways that will be more and more relevant to all aspects of our lives in the future.


20% time, new focus of my blogging efforts

As announced earlier this week, I’m going to be spending my 20% time next year at the Berkman Center researching the ethics of predictive analytics. I’m really excited about the opportunity. Though much of my work in the past two years has focused on privacy as generally conceived in contemporary discourse, it’s also touched heavily on the concept of “big data” and unknown innovations that may arise from it—including elements of predictive analytics.

I’ve yet to sort out the exact details of when I’ll be in Boston and what my work plan is for my time there, but in the meantime I hope to be blogging more often. I anticipate using this blog in part as a repository for interesting reading on various types of predictive analytics and the ethics surrounding their use. Expect to see more of that, with some of the same old contemporary commentary thrown in.


Hyperpublic vs Data-driven

I have taken quite a long break from blogging, but hope to begin again with renewed energy in the coming months. As the first foray back into writing, and the first blog post of 2011, I’m going to share the thoughts I sketched out in preparation for a panel at Hyperpublic at the Berkman Center last Friday.

Before I do, a quick comment. In these notes I use a quote from Jonathan Franzen on the interplay between public, private, and shame. I found it quite insightful and thought-provoking, but unfortunately during the panel at Hyperpublic the audience seemed to think I had asserted that privacy is only necessary for things that provoke shame. I certainly did not intend that message; clearly there are many reasons for an individual to seek privacy. I simply found the quote an interesting lens through which to examine the issue, particularly in the contrast I was painting between “hyperpublic” and “data-driven.” I recommend that anyone interested in privacy and/or publicity read Franzen’s 1998 essay, “Imperial Bedroom”—it is quite thought-provoking, and takes quite a different approach from the one we so often see in today’s discourse.

With that, my thoughts for the panel, “The Risks and Beauty of a Hyperpublic Life.”

When they first asked me to join this panel I read over the word “risk” in the title and saw only “beauty of the hyper-public life.” My immediate reaction was: I have nothing to say about that; I reject the notion. Hyper-public life to me implies a Paris Hilton-like existence, and while some of you may find that appealing, I personally don’t. Even most mega-celebrities seek privacy in their lives, and I don’t predict that will change.

But when I read the actual description of the panel, it struck me that the title doesn’t accurately represent what we’re really talking about here. When we talk about gathering unprecedented amounts of information, we’re talking about a data-driven life. That to me is a beautiful thing, albeit one that comes with risks. My claim is that the risk of a data-driven life is that it becomes a hyper-public one, but if managed correctly I believe that risk can be mitigated so that only benefits accrue.

There is a tremendous amount of data being collected about people, their behavior, and the world around them. This data may represent clicks on webpages, GPS coordinates, purchases, or communications between people. And from all that data, we can learn a tremendous amount about the world around us, things that will help us build a better world for the next generation.

The risk that many of us concerned about privacy have a natural tendency to focus on is: “what if all that data is tied to me and used to create a dossier, which could be used to exercise power over me in unjust ways?” Until recently, this risk was hard to conceptualize because it was quite theoretical. Well, sure, your credit card company knows something about your purchases, and sure, a phone company or ISP knows about your communications, and yeah, I suppose the websites you visit know what content you like to consume—but it’s all tied to different types of identifiers, held by different companies; it seemed impractical to aggregate it all together.

Meanwhile, analysis of that data provided unprecedented value, and not all of it (much of it?) easily quantifiable. The improved value in something as straightforward as Search—that Google can get you to a better result today than a year ago, and probably faster too—is the result of data analysis. Advertisers have enjoyed more efficiency in dollars spent on marketing—and consumers have enjoyed access to more free content than ever before as a result, to the tune of $100 billion in surplus according to McKinsey & Company and the IAB. But those are just the easy gains, the low-hanging fruit. What comes next?

Google has used search log data to advance the idea of “predicting the present”—learning something useful about macro-social behavior from analysis of aggregate query data. This idea was behind the now-familiar Flu Trends and the more recent launch of Dengue Trends, two tools that barely scratch the surface of the advances we might make in public health monitoring with predictive analytics. Other sectors that stand to be transformed by big data include energy, where analysis of consumption patterns can enable service operators to manage their networks, and end consumers their consumption, more efficiently. One of the most exciting areas of this type of analysis is language, where automated real-time translation and transliteration are rapidly becoming commonplace.
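The mechanics behind “predicting the present” are less exotic than they sound. The published Flu Trends work describes fitting a simple linear model between the log-odds of flu-related query share and the log-odds of flu-like doctor visits. Here is a minimal sketch of that idea with invented numbers; the real system selects its queries and validates its model far more carefully.

import math

# In the spirit of the published Flu Trends approach: fit a line between
# logit(query share) and logit(flu-like visit rate). All numbers invented.
weeks = [
    # (flu-related queries per 1,000 searches, % of doctor visits that are flu-like)
    (1.2, 0.8), (1.5, 1.0), (2.3, 1.6), (3.1, 2.2), (4.0, 2.9), (2.8, 1.9),
]

def logit(p):
    return math.log(p / (1 - p))

xs = [logit(q / 1000) for q, _ in weeks]
ys = [logit(v / 100) for _, v in weeks]
n = len(weeks)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# "Predict the present": estimate this week's flu activity from today's queries.
this_week_queries = 3.5
pred_logit = slope * logit(this_week_queries / 1000) + intercept
estimate = 1 / (1 + math.exp(-pred_logit))
print(f"estimated flu-like share of doctor visits this week: {estimate * 100:.1f}%")

The payoff is timeliness: aggregate query data is available essentially immediately, while official surveillance reports lag by a week or more.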

We often don’t notice these improvements made possible by predictive analytics because it’s not always clear how data analysis led to incremental or step-change improvements. This is, and will continue to be, a struggle for companies in the data sector: being transparent not just about data practices, but about the consequences of data practices for the user’s experience. One of the best examples of doing this well today comes from the recommendation engines—sites that make a business out of predicting what content you’ll like best. These sites have perfected subtle design features that say: hey, we think you’ll like this movie because you rated this other one highly; if we got it wrong, help us correct it by telling us what you’d like better. Those subtle design cues in no way convey the complexity of the predictive analysis happening behind the scenes, but they do convey the idea that data analysis of my past behavior is enabling this end-user benefit.
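One classic way to produce both the prediction and that “because you rated…” explanation is item-to-item collaborative filtering: recommend unseen items that resemble items you rated highly. Below is a toy sketch with invented ratings; real engines operate at vastly larger scale and blend many more signals.

import math

# Toy item-to-item collaborative filtering; users, titles, and ratings invented.
ratings = {  # user -> {movie: stars}
    "ann": {"Vertigo": 5, "Rear Window": 5, "Alien": 2},
    "bob": {"Vertigo": 4, "Rear Window": 5, "Alien": 1},
    "cat": {"Vertigo": 1, "Alien": 5, "Blade Runner": 4},
    "dan": {"Alien": 4, "Blade Runner": 5},
}

def similarity(m1, m2):
    """Cosine similarity between two movies' rating vectors over shared raters."""
    common = [u for u in ratings if m1 in ratings[u] and m2 in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][m1] * ratings[u][m2] for u in common)
    n1 = math.sqrt(sum(ratings[u][m1] ** 2 for u in common))
    n2 = math.sqrt(sum(ratings[u][m2] ** 2 for u in common))
    return dot / (n1 * n2)

def recommend(user):
    """Suggest unseen movies, each justified by the seen movie most like it."""
    seen = ratings[user]
    unseen = {m for r in ratings.values() for m in r} - set(seen)
    for movie in unseen:
        anchor = max(seen, key=lambda s: similarity(movie, s) * seen[s])
        print(f'We think {user} will like "{movie}" '
              f'because they rated "{anchor}" {seen[anchor]} stars.')

recommend("ann")  # suggests the unseen title, anchored to a highly rated one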

It’s notable that the recommendation engine space is one where the benefits accrue to the individual. I see better recommendations for me—yeah, they probably use my data to improve everyone’s recommendations, but viscerally what I’m aware of is the direct improvement of my personal experience based on my personal behavior. As an industry we struggle more with building these design cues where the improvement of my experience is derived from the aggregate analysis of other people’s behavior. This is nowhere more obvious than in Search, where lots of other users’ clicks and searches over time enable a search engine to point me to the best answer today.

In all of these examples we can start to see that there is real beauty in the data-driven life. So what is the risk? I think the risk is that it becomes the hyper-public life. We fear the day when aggregating data across contexts becomes so easy as to collapse all contexts onto one plane of existence, visible for all to see. In practical terms the concern is as simple as the risk of re-identifying an individual from a series of search queries, or of a data broker amassing data from multiple service providers, collapsing it all into a single profile, overlaying it with whatever we may have published ourselves via social networks and the web, and making it all available for sale to anyone. I don’t think this concern is far-fetched, but I also think it is nothing close to a full picture of the landscape we’re talking about.

I took a late flight out to Boston Wednesday afternoon, and was trying frantically in the airport to download to my iPad a book I’d purchased for the flight. SFO, your wifi, while free (thank you for that!), let me down. I couldn’t get a signal, and when I did, the data moved at an achingly slow pace. So I found myself on the plane with only my existing library, much of which I’d already read. But I came across a book I’d purchased a while back, How to Be Alone, a collection of essays by Jonathan Franzen.

After reading a few of these, I stumbled on his essay “Imperial Bedroom.” I was still a teenager when that essay was written and hadn’t yet overcome my general attitude toward technology (that it was a boys’ hobby, of course), so stumbling on it while en route to this workshop on publicity was a pleasant surprise, and an opportunity to glimpse how folks were thinking about these issues a decade ago. I imagine many in the room are quite familiar with it, but for those who aren’t, let me offer an interpretation of his thesis: the problem is not a loss of privacy but an injection of too much private behavior into public spaces, where it erodes the quality of the public space.

Franzen says something interesting: “without shame there can be no distinction between public and private.”

(Note the definition of shame: “Emotional distress or humiliation caused by what may be perceived as wrong or foolish behavior.”)

That somehow makes sense to me. Without having given this more thought than a few hours after work yesterday allowed, I’d posit that shame is tied to identity in a critical way. The shame one feels for wrong or foolish behavior may exist even if that behavior is known to no one, but with the potential perceptions of an entire society weighing on one’s behavior there are a multitude of things one might feel shame about.

Recall Dog Poop Girl. Had she been unidentifiable, unrecorded, she might have felt shame—but in all likelihood nothing like what she is rumored to have felt after the incident, in which her identity and behavior were broadcast to a nation.

The data-driven life is indeed a beautiful one, full of potential. If we can capture opportunities to really demonstrate the public good that arises from many individual contributions, and design that transparency into services directly, untold advances will be made. But there is also a risk that the data collected about an individual’s behavior is tied to their identity and published in a way the individual didn’t understand, expect, or desire—a risk that the data-driven life unexpectedly turns into a hyperpublic one.

A few closing thoughts. It strikes me, when I step back and look at these issues from this big picture perspective, that much of the solution that lies ahead of us rests on identity management. We need to enable users to be who they want to be, where they want to be—Alma Whitten put this quite well a few months ago.

It is my perception that many in the privacy community have not quite given up on identity as a solution, but have turned away from it for what seems a simple and obvious reason: in theory, re-identification will become so trivial within a few years that the concept of maintaining multiple identities online strikes some of us as an impossible future. On Friday at Hyperpublic, someone pointed to facial recognition as one example of the way technology seems to be funneling us into a single identity. I understand the theoretical direction folks are concerned about, and why, but I am not quite ready to give up on the idea that I can manage different facets of my identity across different contexts.

I mean this sincerely, and yet as I consider creating an OkCupid profile in the next few months, it has occurred to me how easily much of my public life can be aggregated: if I’m to post a picture on my profile, perhaps instead of creating a pseudonymous username and revealing my identity to potential dates at my own pace, I simply ought to use my real name and allow the initial judgments to form based on my public identity.

Certainly there is no easy answer in this space, but as usual only questions. That’s what makes it interesting, I suppose!


Risk management and privacy

I was in a graduate program with a bunch of engineers studying energy and environmental sciences, transportation systems, and civil engineering. When I first started the program, I didn’t see any significant parallels between my work and theirs. Over time, as with most things, I came to appreciate that unique assembly of multidisciplinary thinkers.

The other night I was thinking about the regulation of air travel. Seems like a risky business, air travel — the probability of a crash may be low, but if you do crash it seems likely you won’t make it out unscathed. People die in plane crashes, though far less frequently than we think. We don’t limit air travel to prevent crashes, though; we require that safety measures be put in place, and we do our best to mitigate a risk that seems far greater to infrequent air travelers than it actually is.

Auto travel, it turns out, is riskier. Or so they say. That makes sense: people on average travel in cars far more often than in planes, so the probability of an accident is higher for the average person. We also regulate the safety of cars, though far less stringently than planes (how would you like a backscatter screening before getting into your car?). And individually we perceive ourselves to be safer driving our cars than riding in airplanes, a perception that I believe has a lot to do with control. If you’re not flying the plane, you have to trust the guy who is. If you’re driving, however, you’re more likely to do what feels safe to you, sometimes overlooking the fact that a lot of driving safety is not in your control but in the hands of other drivers.
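The exposure point is worth a quick back-of-the-envelope check. The per-mile rates below are rough placeholders in the ballpark of commonly cited US figures, and the annual mileages are invented; the point is the arithmetic, not the exact numbers.

# Illustrative arithmetic only; rates and mileages are assumptions, not statistics.
deaths_per_vehicle_mile   = 1.1 / 100_000_000     # driving (rough placeholder)
deaths_per_passenger_mile = 0.07 / 1_000_000_000  # commercial flying (rough placeholder)

miles_driven_per_year = 12_000  # a typical commuter (assumed)
miles_flown_per_year  = 6_000   # a few round trips (assumed)

print("annual driving risk:", deaths_per_vehicle_mile * miles_driven_per_year)
print("annual flying risk: ", deaths_per_passenger_mile * miles_flown_per_year)
# Greater exposure times a higher per-mile rate makes the car the riskier
# bet for most people, even though the plane crash is what makes the news.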

In environmental regulation there has been a lot of talk about the precautionary principle: if an action is suspected of harming public welfare, the assumption is that there is a risk until proven otherwise. If dumping certain chemicals is suspected of causing cancer, well, don’t dump them until you’ve proven they don’t cause cancer. That seems to make sense where irreversible physical harm might result; we ought to figure out whether it will (though one has to wonder how we figure that out…).

So I was thinking the other night: with privacy, where are the parallels? I’ll observe just a few at a very superficial level for now. Airplane crashes make the news in part because they are so rare, but after hearing about plane crashes on the news, we as individuals seem to fear flying more. Look at the security measures taken up after 9/11, which some would argue were disproportionate to the threat posed. We want to feel safe, because we aren’t in control. Similarly, individual privacy gaffes with life-destroying outcomes make the news — is that because they are relatively rare? The teacher who loses a job because of an incriminating photo: do we hear about that because it is representative of many instances of similar behavior, or because it was one rare event worth reporting on? Do the news reports, whatever their root cause, lead people to fear invasions of privacy more after hearing about them?

And how does driving fit in here? It’s often been observed that users express concern about privacy in the abstract but do not behave in accordance with those expressions. Is that because when they feel in control they misperceive risk? If so, should we treat privacy like we treat cars: establish basic rules of the road for makers and drivers alike, approach curves with caution, and learn to drive defensively to mitigate the risk of another driver’s recklessness?

Where should the precautionary principle play a role, if at all?

Questions not answers, as usual. I’d love to see policy intellectuals delve into these types of questions with their research.


Reminders of yesterday

About once a month one of my Google calendars rises from hibernation to remind me of birthdays. I’ll never forget, as a child, going to visit my grandmother, who every week would sit down with her calendar in front of her and write birthday notes to the friends and family she cared for. My grandmother was an extremely social person, with more friends pre-Facebook than anyone else I know, so this weekly ritual took a good hour and required quite a lot of dedication. Every January she would sit down to copy the birthday calendar anew, presumably making edits where appropriate to reflect new friends and those who had drifted away. I was always impressed by this and the other rituals she relied on to demonstrate her love for others, and upon reaching adulthood I vowed to maintain some sort of ritual myself.

Of course, Facebook birthdays supplanted the thoughtfulness of a ritual like my grandmother’s. Suddenly, everyone can demonstrate such thoughtfulness and devotion just by posting a message on a wall whenever Facebook prompts them. For this reason I’ve never found Facebook “happy birthday” wall posts very personal or meaningful, and am not one to send them. Somehow a thoughtful email or card seems far more personal and far more meaningful — at least, that is how I feel when I receive them. So I send emails and cards; I don’t do wall posts.

But I still need that birthday calendar, like the one my grandmother kept. So a couple of years ago I created a Google calendar, entered all the birthdays I wanted to remember, and set reminders. Every few weeks I get an email reminding me of the birthdays ahead, the people to whom I ought to send cards or emails wishing them well. I have not gone back and audited the calendar since creating it, though, so many of these reminders increasingly bear the names of people with whom I have not spoken for well over a year, sometimes two.

We keep building these tools to simplify the effort required to do the little things that matter so much in friendships: send a birthday card, check in to see how someone is doing, share photos and news of our lives. As the cost of doing these things goes down, the gestures lose their meaning. Well, of course you remembered my birthday and said so on my Facebook wall; at some point five years ago you clicked yes on a friend request, and it was sealed then that each year you would be reminded to say happy birthday. There are a lot of commentators who claim that the social web is bringing us closer together through tacit and passive information sharing, and I don’t wholly disagree. But something gets lost when we rely on automation for the little gestures that once signaled so much. I’m not one for tradition, so I’d be very happy to leave the birthday tradition behind and accept that a new tradition is signaling the type of devotion, care, and love that my grandmother’s birthday calendar once signaled — but I’m not sure I know what new tradition is replacing the old.


Do you want a right to fuggedaboutit?

Over in Europe there is an interesting debate raging about a “right to be forgotten.” I find it a fascinating debate on many levels, and as a primer point you to this analysis written up by a colleague. At the core of this debate is the ongoing struggle between the right to control one’s reputation and the right of others to say what’s on their mind. But I want to focus on one small piece of the “right to be forgotten” as it is framed: has such a right ever existed?

I for one don’t believe any of us has a right to be forgotten, as a simple statement of reality. My memories are my own; they are etchings of experiences I have had and people I have met. They aren’t anyone else’s to decide I should forget. But, as the oft-spoken wisdom goes, even if we can’t forget we can forgive.

Take the example of a man acquitted of murder who wants to leave his alleged criminal past behind him, but cannot thanks to the Internet, and suffers poor social treatment in work, love, and life forever afterwards. He is not poorly treated later in life because someone has remembered or discovered the accusation; he is poorly treated because society cannot respect the forgiveness bestowed upon him, presumably by his government.

One of my great concerns, generally, with information policy is that we find it so easy to cast blame on the information itself as the cause of social ills. I have a gut instinct that such an approach may create a great many unintended consequences without actually solving the root of the problem. This is, I suppose, similar to Nicklas’ observation that the debate around forgetting actually has very little to do with technology. In most cases, the existence of the information is not the problem; it is what we as a human society do with that information that causes concern.

Perhaps instead of a right to be forgotten, which to me invokes things like deletion and restriction, we need to develop information institutions charged with disseminating “forgotten” information only under appropriate circumstances. Achieving that type of institutional development seems more difficult than making a proclamation, and layering it on top of the Internet seems to require a degree of architectural and technical design if it is to succeed. So perhaps instead of debating whether a right to be forgotten should be enshrined, we should hold an unconference or two focused on the types of technically supported institutions we need to forgive adeptly in the online world.
