Fascinating series at Concurring Opinions this week

Earlier this week I was honored to have been invited to participate in an online symposium revisiting the ideas laid out in Jonathan Zittrain’s book The Future of the Internet. There is a lot of fascinating material there to pore over, and I highly recommend setting aside the time to do so. Unfortunately I only had time to post twice; I won’t cross-post the pieces here, but here are the links (Lessons in Designing for Privacy, How Can we Create Even Better Incentives?).

My only disappointment is that my call for hard questions in network design appears to have fallen on deaf ears. So I will raise it here again:

I’d love to see as an outcome of this discussion a curated list of difficult policy and design questions we will face as tethered and generative systems continue their mutual march toward the future. Could we come up with a list of “Hilbert’s problems” for network design? I’ll get that list started by asking: How can we preserve the ability to remain anonymous online while reaping all the benefits that an embedded identity system can provide?

If you have any tough questions to add to such a list, throw them in the comments!

Posted in Uncategorized | 1 Comment

Determinism and privacy

I’ve been thinking about determinism lately, and wondering whether or not it matters for privacy. Specifically, if the audience of shared information is non-deterministic, are there circumstances in which that information should be considered anything other than public?

Twitter has this little feature I just learned about (though, as usual, I’m late to the party – it’s been around since at least 2008). It turns out that if you begin a tweet with an @reply, only people who follow both you and the person to whom you are @replying are fed that tweet in their stream. I haven’t tested, and can’t easily find a description of, whether the tweet is visible to everyone who clicks on your profile, or only to those jointly following both participants. Regardless, it’s an interesting feature.
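To make the rule concrete, here is a toy sketch of the filtering as I understand it. The function and data structures are entirely my own invention, not Twitter’s actual implementation:

```python
# Hypothetical sketch of the @reply stream-filtering rule described above.
# `follows` maps each user to the set of accounts they follow.

def appears_in_stream(author, text, viewer, follows):
    """Decide whether `viewer` is fed `author`'s tweet in their stream."""
    if author not in follows.get(viewer, set()):
        return False  # you never see tweets from people you don't follow
    if text.startswith("@"):
        # An @reply: the viewer must also follow the recipient.
        recipient = text.split()[0].lstrip("@").rstrip(":,")
        return recipient in follows.get(viewer, set())
    return True  # an ordinary tweet reaches all of the author's followers
```

So a follower of only one party sees the ordinary tweets but not the half-conversation, which matches the behavior described above.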

Similarly, Gmail Chat enables asymmetrical relationships. You can hide a Gmail chat user without blocking them, in which case they can still see your chat status. I’ve made a lot of interesting choices with respect to this feature over the years; most commonly I hide people whom I perceive to be invading my chat list, but whom I don’t want to outright block for fear of being rude. A few weeks or months later, I’ll run into one of these people or they will ping me unexpectedly and reference a recent chat status, which never fails to freak me out just a little. “I didn’t realize they could see my chat status!” But of course, I should have, since I made the decision to hide – not block – them from my chat list in the first place.
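The asymmetry is easy to state in code. This is a toy model of my reading of the feature, with names and data shapes of my own choosing, not Gmail’s implementation:

```python
# Hypothetical model of the hide-vs-block asymmetry described above.
# `hidden` and `blocked` map each user to the set of users they have
# hidden or blocked, respectively.

def can_see_status(viewer, target, hidden, blocked):
    """Can `viewer` see `target`'s chat status?"""
    if viewer in blocked.get(target, set()):
        return False  # blocking cuts off status visibility entirely
    # Hiding is one-directional: it only affects the hider's own list,
    # so a merely-hidden viewer still sees the target's status.
    return True

def appears_in_chat_list(owner, contact, hidden, blocked):
    """Does `contact` show up in `owner`'s chat list?"""
    return (contact not in hidden.get(owner, set())
            and contact not in blocked.get(owner, set()))
```

The “freak out” moment above is exactly the gap between these two functions: the contact vanishes from my list, but my status never vanishes from theirs.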

Both these features have a bit of non-determinism built in. It’s possible to share information not quite publicly, but to a contained audience that the user perceives as non-deterministic. My initial reaction upon hearing about the Twitter feature I describe above was: what an interesting privacy feature. Similarly, my reaction as a user to my own use of Chat has at times been the feeling that my privacy was invaded. Which raises the question: does determinism matter to privacy, and/or should non-deterministic audiences be assumed to be public?

It’s gotten me thinking about another angle, as well. Some have used privacy to describe the feeling of having one’s physical space invaded. One translation of this, which I’ve heard discussed though not at length, is the online translation to information feeds. Is it an invasion of my privacy if unwanted information surfaces in my feed? Say, spam, for example? This was of course the motivation for hiding individuals from my chat list – I didn’t desire distant connections surfacing in my chat list inside my email client. Similarly, I would guess it’s a large part of why the Twitter feature is so useful – if a conversation is taking place between two users, it’s probably annoying at best to see only one half of that conversation. But do either of these scenarios invoke privacy?

I realize this is a semi-unfinished stream of consciousness, but the thoughts are nascent at best in my mind. I’m interested to hear your comments and thoughts – perhaps someone has written about this before?

Posted in Uncategorized | 1 Comment

The rant after the calm

At some point in July my blog went down. I have been so preoccupied, I’m not even sure when exactly that happened. I discovered it July 31 and it took me a few days to get around to fixing it. It’s fixed now. Obviously, it’s a good thing I’m not a professional blogger.

When I discovered that my blog was down I was intending to post a few thoughts on defaults. Since I couldn’t, I unleashed a stream of thoughts into a doc to revisit later. I haven’t really revisited them, or taken the time to add in links (yet, perhaps I’ll update if readers send me relevant source links…hint hint). So, with that disclaimer…here they are, several days stale.

There’s this meme in the privacy community that defaults matter; if you get the defaults right you can get privacy right. It’s not necessarily a new meme, and it’s not all that different from the preference privacy advocates have for opt-in despite all its unintended consequences. But, it seems like this view of good defaults as the solution has built new strength during the age of policy driven by behavioral economics à la Cass Sunstein’s Nudge. Don’t get me wrong, I liked the ideas in Nudge as much as the next person, but I think there are some subtle distinctions worth considering. I note my bias against paternalism, in general, before continuing.

The 401k default example — which shows that if the default on HR forms is set to contribute to 401ks, people wind up saving more over time — is the oft-invoked example of why defaults matter. 401ks deal with discrete values: dollars. There is also, in many people’s view, an objective good with respect to these discrete values: more saved for retirement is better. I won’t take on the presumption that more saved is better in this blog post, but note my skepticism even of that assumption. Regardless, this idea that paternalistic defaults make the world a better place can, in the 401k case, actually be studied and understood with objective measures: dollars and time.

What privacy advocates seem to overlook when discussing defaults is that there is no objective, widely agreed-upon “better” end game to push towards. Sure, in the abstract everyone thinks more privacy is better. But how do we measure privacy? What do we even mean by privacy? And what is more of it? These questions have been the subject of endless debate for decades, and remain without even a semblance of resolution. Most often, privacy is framed against corporate profit as some sort of trade-off that must be made, and of course in that context corporate profits at the expense of an abstract normative good like privacy… well, there is an obvious answer. Of course we should choose defaults that maximize that abstract normative good, or so the argument goes.

It’s often overlooked, however, that there are plenty of other trade-offs at stake, that – really – there may even be value to sharing information with other people, with companies, with the public at large. This isn’t the value, however, that we’re maximizing for when we set paternalistic privacy defaults. But, why not?

I’ll return briefly to my skepticism about 401k defaults. No one can predict markets. No one can predict lifespan. The paternalistic 401k default isn’t really advocated for in the context of individual benefit at all. The argument is that saving in 401ks is better for society as a whole — we all save more, we can finance our retirements ourselves, there’s money invested in the markets via 401k funds, saving reduces the likelihood of bankruptcy. I could go on and on with the abstract and perceived benefits of saving for society at large; I’m just not wholly persuaded there’s an objective and certain benefit to the individual from this default 401k contribution. But, at least in the context of societal gains, I can swallow the idea that these paternalistic defaults might have important benefits for society at large.

With privacy it seems that not only is there no objective measurement — nay, not even an objective, agreed-upon definition — but the privacy-friendly defaults are advocated for in the interest of the individual, not society at large. I wonder what would happen if we were to reverse the logic and, instead of paternalistically forcing a normative default on individuals for their own sake, examine which default made sense for the greater good of society, as we do in cases such as 401ks. I wonder what the right defaults would be, and whether the privacy community (of which I am a card-carrying member, albeit an outlying skeptic on some points) would continue arguing that getting the defaults right solves privacy.

As usual: questions, not answers.
Posted in Uncategorized | Comments Off on The rant after the calm

Solitude and the social web

Anyone who thinks a lot about privacy is familiar with the paper “The Right to Privacy” by Warren and Brandeis, and the famous phrase therein that privacy is “the right to be let alone.” In the past couple of years, though, conceptualizing privacy this way has taken a back seat to notions of choice and control, in particular over who collects what information about us and how they share it with others. But I think it might be time to re-introduce this concept of being let alone into today’s privacy discourse.

There are a lot of believers in the social web here in Silicon Valley. I must admit, sometimes I wonder if “social web” is a useful way of talking about this future vision or not — which is a way of saying, everything that follows may be based on faulty assumptions and misinterpretations. Often what I start to imagine when I think about the social web is a future web in which my social life is overlaid on top of my Web experience: websites become places not publications, and in these places I find and interact with other people, some of whom I know and others of whom I do not. I believe many of the foundations of such an experience are in place today, although I imagine the interface will be changing rapidly over the coming year or two.

The true shift to a social web will occur when, as a rule, the people I interact with on websites become as much if not more of an attraction than the content itself. Think about movies. Many of us watch movies alone, but more of us watch them with other people. Movies are a broadcast medium, but their consumption is often a social experience.  Now contrast movies with books. Some of us read books aloud to each other, but more of us read books alone. We discuss them with others after the fact, but the initial readings tend to take place alone. Books are also a broadcast medium, but their consumption is often a solitary experience.

Last Tuesday I was in desperate need of solitude. I hadn’t slept well the night before, I’d had back-to-back meetings all day, and I was tired. I needed some time alone to recharge my batteries. I went home and logged on — of course. I’m not a terribly active user of the social web; I use Twitter now and then, and I occasionally log in to Facebook, but my primary mode of communication remains email. (I hear this might mean I’m …<gasp>… old.) Even then, despite being a relatively inactive social web user, I felt somewhat inundated by people when I logged on that night. Twitter, friends sharing things in my RSS feed, chat, email. And then a vision of the future social web came to mind. If I felt inundated by this web, was navigating the social web going to be like making my way through a crowded party?

Some questions worth pondering: If the web we are building is fundamentally social, will we be able to find solitude online? If books are going digital, and digital is becoming social, will reading a book ever be a solitary experience? Or will the book be magically marked up with commentary by our friends by default, as if the book club were meeting in real time while we were reading? How easy will it be to turn on a “solitude” mode?

Phrased another way: will we build the social web to easily enable a right to be let alone online?

Posted in Uncategorized | Comments Off on Solitude and the social web

Random updates

The past couple of months have been very busy but exciting ones here. In the past few weeks a couple of big projects I’d been working on made their way out the door. First, a paper my colleague and I submitted for publication 6 months (!) ago was finally published; you can read about it on the Google Public Policy blog. Second, the project I am most proud to have worked on at Google launched: Government Requests.

I’m sure I’ll write something intelligent soon…

Posted in Uncategorized | Comments Off on Random updates

Make no mistake

A couple months ago I spoke about Google PowerMeter at a conference in Canada. Those who know me know that Canada confuses me on several counts, but I am rarely confused to the point of speechlessness. After my talk a woman approached me, introduced herself, and said “You must have the most boring job in the world.”

Two things: first, really? second, she had it so wrong. I absolutely love my job.

It was a privacy conference, and her explanation for this assertion had to do with her impression of privacy generally (one might have asked why she was at a privacy conference…). The thing is (and I don’t usually like to admit it) I can totally geek out on privacy without getting bored. More to the point, I completely enjoy geeking out on Internet architecture, economics and policy. I guess the fact that some folks find this boring explains why I have such a tough time landing a date. The whole thing really left an impression.

Next week I’m talking at my high school as part of a guest speaker series, and will be speaking alongside a far more successful and impressive classmate who happens to be one of my best friends. In pondering what I might say, and knowing my friend, it seems unlikely I’ll be able to match his advice and range of experience. So instead of talking about myself, I’m thinking I’m going to say a thing or two about Kermit, the one guy important enough for me to have hung a photograph of in my cube at work.

Posted in Uncategorized | Comments Off on Make no mistake

A very quick rant on anonymity

I read yet another paper today that expounded the virtues of authored content creation and ripped anonymity to shreds, assailing it as a quality that reduces humans to inhumane form on the net. As usual the assertion was that online anonymity is the problem because in the real world we would never treat each other the way we do online. I’m frustrated by the way in which we’ve allowed anonymity to be cast as the cause of so many problems online.

We are anonymous as we go about our day-to-day activities offline also, and we aren’t mean to each other the way that commenters and bullies can be online. Anonymity is not the problem – it exists in context, and in much of our real-world context we are actually anonymous even if we leave trace identifiers behind. The problem is… I feel like I am becoming a broken record… we haven’t enabled social signaling on the web.

This drives me nuts mostly because I would hate for us to make rash decisions about the architecture of the net, or the laws that govern it, to eliminate anonymity in favor of some hoped-for eventual achievement of perceived social bliss. Anonymity – as it exists in context, not necessarily as an absolute – is a very important element of our modern society. I especially think that anonymous content consumption is essential to free expression, and to some extent anonymous content creation as well. So to end my quick rant, let’s stop assailing anonymity as the problem, and start focusing on how to build enablers of social lubricant in anonymous contexts online instead.

In search of solutions, not problems.

Posted in Uncategorized | 1 Comment

Self-representation online

Yesterday I gave a talk at an AAAI Symposium on privacy, and wanted to share a few of the thoughts I presented there.

A lot of attention has been paid in the past several years to the reputation angle of the social web, and especially the way in which this interacts with privacy. I talked today about one challenge in particular that existed even before the web became social, for which fixes may appear more readily as the web becomes more social. It is the challenge of giving users a modicum of influence over their self-representation.

My choice of words here is very deliberate. At the core of the reputation challenge as it relates to privacy lies an inherent tension between free expression and control. Historically, this tension has been managed by libel and defamation law, areas I don’t claim to be expert in. As the web has expanded access to publishing methods, increased discoverability, and enabled the wisdom of the crowds to take hold, there’s an argument to be made that these current mechanisms for handling attacks on one’s reputation just aren’t enough. I’m interested in helping people influence the representations that appear about them online.

Now, I’m not talking about situations in which the commentary being made about a person is untrue, situations that would typically be handled by defamation or libel law. Instead, I’m interested in the commentary that is entirely factual, and which has been transformed into an indelible and inescapable mark on the Web. There are cases in which it seems appropriate to have such character statements be made permanent: elected officials, for example, go on the record throughout their careers in favor of or against particular policy positions for the purposes of allowing the electorate to make informed votes. But there are countless other examples of events that would 20 years ago have been forgiven and/or forgotten with the passage of time but have today become indelible marks on a person’s record. I call these look-back-and-laugh events, the things you did as a teenager or even as an adult that everyone looks back on and laughs about a couple years later.

The Wikipedia entry on Internet vigilantism lists several of the most well known examples of these look-back-and-laugh events that were exploited by the crowds and, in some cases, had detrimental implications for the reputations of those involved. I took a look at two of these and drew some conclusions about how engineers might push the envelope in the next five years and produce some really valuable solutions in the self-representation space.

The first example I looked at was that of Stephen Fowler. Go type that into your favorite search engine. You don’t even need quotes. What did you find? The first hit says stephenfowler.sucks.com, doesn’t it? And in the top 5 somewhere there is maybe a headline (now this varies by search engine) that reads, “The worst husband in the world,” right? Poor guy. I never watched the show, but I can’t imagine anyone deserves that. Most of us are kind people by nature, but this crowd behavior doesn’t appear to be very kind.

The second example worth mentioning is the Star Wars kid. This was an Internet meme centered on a teenage boy who had videotaped himself using a golf club as if it were a light saber. The video was uploaded without his permission and became viral, embarrassing him and leading to mass media attention. If you read the Wikipedia article, you won’t find the boy’s name, but click on the first Reference link and you will be taken to a BBC article that cites his name in the first paragraph.

There are a few conclusions one might draw from these examples. First, don’t do something potentially embarrassing if there is a digital recording device anywhere nearby. (That is, I think, wholly unrealistic advice today). Second, if you do something embarrassing and it draws attention … well, tough luck.

I’m not satisfied with those conclusions. There are a couple ways we might think about the problem. In the case of Stephen Fowler, how can we make it easier for him to participate in his own self-representation? I don’t mean, how can we make it easier for Stephen Fowler to get that content removed from the Internet – no, it’s factual, and free expression is important, so short of valid legal process it ought to stay. What I mean is, can we give Stephen Fowler a way to act as his own SEO, within reason? At the moment, he is at the mercy of the crowds. People publish – via a blog, or Twitter, or Facebook for example – commentary about Stephen Fowler and what a horrible husband he is, and all that content begins to link to each other. Some subset of it draws a lot of links, and thus becomes reputable.

How can one man be expected to duplicate a network of millions of independent publishers, to write enough that his views would be noticed in that crowd? He can’t. We need to find a way to enable the subject of these look-back-and-laugh events to participate in the crowd in a vocal and reputable way, to stand up in front of everyone and tell his story (either of what happened, or how he’s changed, or what other things about him you might like to know), without being muffled by the raucous hooligans down below. In examples like this one, we need to give him a megaphone just to give him a voice. No one really knows how to do this on the Web today without distorting free expression and the actual wisdom of the crowds, but sorting through the tensions here would make a great research topic.

The additional challenge I see in all of this, and I credit the idea somewhat to Jonathan, who planted pieces of it in my mind vis-à-vis his TED talk on kindness, is to build social signals into the Web that would more naturally enable us to behave like the community of Wikipedians who chose not to name Star Wars kid in their article. I’ve written about social signaling before, and it seems like every week I see a new application for its eventual implementation. In the context of self-representation, we need to give Star Wars kid or Stephen Fowler the ability to blush online, perhaps even cry online, to show the raucous hooligans we might become in crowds that we aren’t being particularly kind anymore. Let’s be honest: the crowd sometimes needs little reminders that we are all human and deserve a degree of compassion.

Ryan Calo is doing some work that looks at the intersection of legal notice and neuroscience (I hope that’s an accurate characterization…that’s my impression of his work). The basic idea is that we are hard-wired to respond to particular anthropomorphic cues, and he’s thinking we can use that hard-wiring to rethink the law around notice. I’d like to take that thought a step further. We’re clearly hard-wired to respond to particular anthropomorphic emotional cues in particular ways; most of us see a child crying and want to comfort it, for example. How can we take that basic reality of human neuroscience and transform it into a human computer interaction that will facilitate a smoother social web? I’m absolutely fascinated by this problem, and would love to hear from (or about) anyone doing work that touches on it.

My talk touched on a couple other ideas as well, but unfortunately in 10 minutes I really couldn’t do them much justice. Perhaps in another venue I’ll be able to deliver a more polished version with adequate time. If the paper that formed the basis of this presentation, which I co-authored with my colleague Alma Whitten, appears online in the near future I’ll post a link to it here.  Until then, curious to hear your thoughts!

Posted in Uncategorized | 1 Comment

Limitations of notice

Last week I found myself engaged in a discussion with a lawyer-friend about the great advances that mobile applications have made with respect to privacy. Rather than clicking through a lengthy privacy policy, he told me, you’re now presented with a screen that asks you directly if this application can access your location. You then know this application could get your location, and have given opt-in consent up front in the case that it tracks and stores your whereabouts. Or, alternatively, you can deny that application the ability to access your location (though I’m not certain if applications retain any functionality in this case). I think this analysis misses some of the usability issues here. While I do think these forms of notice are movements forward, I have some thoughts on why they still aren’t enough.

The problem with these forms of notice is the misalignment of incentives. An application has every incentive to get that opt-in consent up front, even if it will only access your location once or not at all. It is a greedy form of opt-in, one that mitigates future risk by covering all possible scenarios. If you were an application developer, why wouldn’t you ask for this agreement upon installation? I admit to not knowing the full technical details; it is possible that a platform only enables this consent process if it knows the application will actually access location. But even then, as a developer why wouldn’t you just build into your application the ability to access location, even if you didn’t necessarily need it?

For the user, the effect is that the notice becomes meaningless. You have no idea which applications are accessing your location when, whether they are storing historical records of it, or what they are doing with your location. This request appears upon startup of a new application, and I at least click through, eager to use the application I just installed. Is this starting to sound familiar? Click through…? You might be thinking that these notices are the new EULA. I might also note that this is, I think, an almost identical structure to what happens with applications on social networking platforms like Facebook. The user consents to the application’s ability to access certain information up front, or, now with Facebook, the user generally agrees to let any and all applications (none of which may be installed on this account) access certain types of information. It’s a reduction to the lowest common denominator: all these forms of opt-in notice create incentives to be as greedy as acceptable, leaving the user with the bare minimum of notice and choice.

What I’d rather see is a log of the mobile applications that accessed my location, when, and for what purpose. And an icon that appeared along the top of the screen every time an application accessed my location. With both features, I’d notice when an application was doing something suspicious AND be able to dig into the details to see whether I liked what it was doing. In the case of Facebook, I’d like to see a list of applications that have accessed my “Publicly Available Information” that I agreed to make available simply by having an account. Those forms of notice might actually be meaningful for me and assist me in making choices about which platforms and applications to use, whereas the opt-in consent form of notice requires me to make a choice up front without requiring that more information about that choice be made available to me over time. I have a forthcoming paper that explores some challenges that opt-in consent presents, and I think the mobile and social networking application examples are useful in thinking through the consequences of this opt-in privacy architecture. More commentary on this in a month or so.
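The log I have in mind could be very simple. Here is a minimal sketch of the idea; every class and method name below is hypothetical, invented purely to illustrate the proposal, not drawn from any real platform:

```python
# Hypothetical sketch of the location-access log proposed above: every
# location read is recorded with the requesting app, a timestamp, and the
# app's stated purpose, so the user can audit behavior after the fact.
import time

class LocationAccessLog:
    def __init__(self):
        self._entries = []

    def record(self, app, purpose):
        """Called by the platform whenever an app reads the location."""
        self._entries.append(
            {"app": app, "when": time.time(), "purpose": purpose})

    def accesses_by(self, app):
        """Everything a given app has done -- the 'dig into details' view."""
        return [e for e in self._entries if e["app"] == app]

    def apps_seen(self):
        """Which apps have ever touched my location?"""
        return {e["app"] for e in self._entries}
```

The on-screen icon would simply fire alongside `record`, so a suspicious pattern catches the eye in the moment and the log supports the follow-up investigation.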

There has been research done on the effectiveness of privacy policies, much of which seems to me to be targeted at simplifying the method of communication. The hypothesis, I presume, is that lengthy written policies are too difficult for users to work through, but standard policies would be simpler. Here’s where the mobile application example is interesting. The request for location access is not lengthy or hard to understand; I’d say it’s crystal clear. But it is still a click-through policy. Users are presented with one click-through request after another, every time they install an application that wants location information. It’s a pretty standard template, and yet for me it is relatively meaningless. I can’t understand the consequences of having clicked through, because I have no awareness of when any given application is actually accessing my location, or the reasons offered for doing so.

Now, there are some real usability challenges associated with shifting the model to something closer to what I’ve just described. I don’t think there is a ready-made solution that companies just aren’t implementing; I think the movement signaled by the simpler privacy notice is a good one, and that we have years of innovation in this space ahead of us. I did find it interesting, though, that my lawyer-friend didn’t seem to agree with this analysis even after some discussion, but instead actually argued the opposite: that this greedy consent was a good thing because it was opt-in and covered all the possible risks. Seen from a contract perspective, this makes a lot of sense. But from a usability perspective, I’m not at all sure I agree.

Posted in Uncategorized | 1 Comment

May I introduce you…?

Apparently Apple might be launching a new device that promises to lure people into Apple Stores around the world in mobs, waiting in line to get their very own. Call me crazy, I’ve just never understood that … level of zeal. What’s caught my eye about the rumored tablet, though, is that it has supposedly been designed to be “intuitive to share,” including the ability to recognize users vis-à-vis a camera and a face print.

Almost 8 years ago I started out on a year-long journey of researching the intersection of biometric and surveillance technologies (pdf). Facial recognition was the main focus of the work – we had recently seen the first mass deployments of the technology in surveillance contexts in the name of security. But the technology just wasn’t that good at the time, at least not good enough to be rolled out in consumer devices.

I wonder if the rumored Apple tablet’s rumored capability will follow the same path of the fingerprint scanners that were deployed in laptops years ago. To this day, I’ve yet to see anyone actually use one of those things. I suspect two reasons why: first, it wasn’t seamless to use; second, there was an easier alternative. I can imagine that if Apple has actually done a nice job with face authentication, it might actually take off.

It’s hard for me to believe that 8 years have passed, but clearly it’s been enough time for the technology to transform dramatically. There are a number of companies offering face recognition services that run on top of social networks like Facebook. Many commentators have speculated about the use of face recognition in Google Goggles, after the product was launched explicitly without the technology in recognition of the privacy challenges ahead.

When I wrote about the technology so long ago, I had a lot of discomfort with the implications for individual privacy and our social fabric. In general, I still do. If one thing is certain, though, it is that technology marches on. Innovation will continue, and efforts to hamper it out of fear of the unknown tend to be unsuccessful. Instead, we can influence innovation such that it moves in a direction that acknowledges and adapts to a wide range of individual discomforts and desires, even the ones traditionally underrepresented among a technologically savvy crowd.

The challenge with face recognition of course is that it’s hard to keep my face private. I walk around and meet people all day, I show my face in public locations, I subject myself to photographs with friends, some of which later make their way onto the Web – I simply do not live in isolation, as it would not be much of a life to do so. But the prospect of a tablet device that automatically recognizes me has given me a new way of framing this challenge.

Computers are acquiring senses and learning how to use them. They can see (center stage: the built-in camera), they can hear (center stage: the built-in microphone), and they can even touch (center stage: touch screens). Google Voice can transcribe my voicemails, albeit with some errors, but it’s probably doing about as good a job as a young child might. It not only hears, it listens. Similarly, image recognition programs usher in the era of not only seeing, but interpreting sight.

So perhaps the science fiction of yesterday is closer to reality than most of us think. Soon, maybe in a matter of weeks if the rumors are true, we may sit down to check our email on a computer that knows us personally, in the way a friend does. That computer will in all likelihood be networked, and might very well be easily integrated into a social Web. You can imagine the computer being able to introduce you to its friends out there on the network, giving them the ability to recognize you at some later point in time, and some of these introductions might happen without you realizing it. In the world of human interaction, we generally have some control over who we meet, although since few of us live in isolation that control is limited. Still, we are generally present for and thus aware of situations in which we might be introduced to others. I wonder, how do we give individuals similar presence and awareness with respect to the computers to which they get introduced?

Privacy on the social web is going to come down largely to how well we can translate our human-to-human social cues and build them into the digital system. There are many privacy-related initiatives afoot that attempt to narrow in on this vision, many of them related to the idea of a “Creative Commons for Privacy.” I’m somewhat skeptical of privacy solutions for the social Web that begin from legal or contractual frameworks, but am a supporter of the general signaling idea. Not that long ago I saw my friend Jonathan Zittrain give a talk in which he pitched the idea of wearing a T-shirt that asks others to “please not photograph me.” A humorous idea, but perhaps a best-effort signaling solution? (It certainly might help me; for now I just dash off to the restroom when cameras appear at dinner parties!) I’ve also written about the nuanced ways in which social signals influence our understanding of privacy in the human-to-human world. To get privacy right in the next decade, we have to do more than simply offer users a choice of “privacy settings” that require constant maintenance and management.

No one seems to have quite figured out how to digitize social signals, and I wonder if that’s because the digital world expects standards. I maintain that privacy is inherently a subjective quality, one that cannot easily be standardized into a set of discrete options. The discrete approach has worked relatively well so far, but as we move forward into a world in which the components of our identity that are beyond our control – for example, our face instead of our chosen username – have been digitized and are generally available for others to recognize, we will need far more intelligent digital signals to communicate and be aware of our privacy choices.  I suspect those signals are being developed in research labs somewhere, in all likelihood with a variety of use cases in mind. I just hope the folks developing them have a broad understanding of privacy in the back of their mind as they march, unstoppably, forward.

Posted in Uncategorized | Comments Off on May I introduce you…?