Restarting from wherever I am

Over the holidays I was down in LA with family—it was the first time in three or four years that we’d all been together at the same time. We mostly succeeded in avoiding the holiday craziness of shopping malls and packed movie theaters, but I did drive my mom down to the mall one night.  Sometimes I’m in awe of the insights my mom has about technology. This line from that night will stay with me for a long time:

“The nice thing about the Maps lady I guess is that she never gets angry with you. She just restarts from wherever you are.”

So much wisdom in one simple observation. How often do we all wish we could be more like the Maps lady? The success of Lululemon would suggest the answer is, “Often — very, very often.”

Sometime in the past year or two I read Sherry Turkle’s latest book, Alone Together, and I may need to revisit it soon. Human relationships are hard. They require work, give and take, tolerance, flexibility — in ways that our relationships with computers don’t. As the Maps example illustrates, the endlessly forgiving and nonjudgmental nature of our interactions with technology can be a blessing. I suppose that was in some ways the appeal of Samantha, though she ultimately developed a deeper emotional palette. The Samantha-future is a long way off, if it exists at all. But in the near term, I wonder.

If you spent enough time with Maps-lady equivalents, would you adopt that forgiving approach to human error? Or would you simply lose the ability to cope with a judgmental human companion? You often hear that technology is making us dumber — after all, we once got by without navigation systems — but it’s too easy to forget the mundane, daily stresses it has taken out of the equation. Wouldn’t it be the greatest irony of all if the technology we blame for rushing us through life, the technology we go to yoga classes and meditation retreats to escape, turned out to be the very thing that could finally teach us to restart wherever we are?


Attention Deficits

I’ve recently been paying some attention to the commentary and research surrounding whether and how the “always on” lifestyle that now subsumes us affects our ability to maintain focus. Daniel Goleman has a new book out on the subject, reviewed by Nicholas Carr in tomorrow’s Sunday Times—expect it to be a good read.

Along the same lines, I’ve recently stumbled on a few start-ups pursuing brain science + web in interesting ways. The one I’ve spent the most time with is Lumosity, which I must admit I’m already a tad addicted to. Lumosity scores your brain performance along a couple of axes, one of which is attention. I was, well, a bit bummed to see that my initial scores were very low compared to my peers—especially my attention score.

There were several curious things here: first, how they measure attention; second, whether the scores reflect inherent brain performance or merely practiced skill in the games offered. I also wonder what the percentiles are based on—after a week of Lumosity “training,” I am still only in the 27th percentile for my age, but it’s unclear whether that compares me to other Lumosity users or to third-party research. The question interests me because there seems to be a budding data set here on which to test a variety of hypotheses about technology’s impact on our attention. Lumosity clearly gets that: they have a Human Cognition Project to facilitate future research.


Should we teach problems instead of disciplines?

A couple weeks ago I was chatting with a colleague about how we can do more to encourage STEM education (a topic that comes up a lot in Silicon Valley), and someone chimed in saying no, no, we need STEAM education. It turns out there is a movement to introduce the arts into the STEM curriculum. It’s an interesting idea, and at least on the movement’s site you can quickly see that the addition of the “A” does nothing to change the emphasis on “creation” and “building” in how we think about education.

Now, it’s not that there’s anything wrong with this, but I’ve recently been thinking a lot about literary culture, and in particular how critical it is for scientists and innovators. Fiction has been my guilty pleasure of late—I’m currently making my way through The Orphan Master’s Son (a full post on that to come at some point, I imagine; unbelievable novel), and a couple weeks ago I quite enjoyed Lahiri’s recent and somewhat dark story about immigrant Americans. But the novel that really stood out recently was Flight Behavior.

There is apparently a growing genre of fiction about the effects of climate change on our world. That this is true is perhaps not much of a surprise, and yet…Kingsolver’s contribution, at least, left an impact. Unlike the dystopian novels of yesteryear predicting the various consequences of climate change, she writes a story about what we actually are experiencing today. The thing is, the truth behind her fictional story doesn’t seem terribly well-known—I like to think of myself as reasonably well-read, but I hadn’t heard about the plight of the monarchs until reading Flight Behavior.

Thinking of these two things alongside each other—STEAM education and climate change fiction—reminded me of this brilliant post over at HBS from a year or so ago. Instead of encouraging our kids to become passionate about science, technology, engineering, math, or art, why don’t we spend more time making sure they are exposed to the really hard, really important problems society is facing?


What is it to be human?

Last night I read this utterly depressing article about organ transplantation. I used to be a huge fan of organ donation—I’ve opted in, of course, and have a little dot on my driver’s license. It just seems so … obvious. Until you read that article. The big takeaway for me in that piece is how much uncertainty there is in life—very literally. That we even have the notion of a “beating heart cadaver” illustrates just how uncertain the whole venture really is.

But there is also an interesting comparison here to the Turing test, which of course tests for “intelligence” in a machine (or more accurately, a machine’s ability to imitate human intelligence). Apparently in 1968, thirteen men at Harvard Medical School defined the criteria by which human life or death can be measured. These criteria are: unreceptivity and unresponsivity, lack of spontaneous breathing or movement, lack of reflexes, and a flat EEG. An evaluation against these criteria, by another human, is in a sense a test for human intelligence. If an individual fails to demonstrate intelligence against these measures, but the beating heart still indicates life, he is deemed brain-dead (a living human, but only partially so) and can be evaluated as a potential organ donor without the restrictions placed on a living organ donor.

So I found it particularly interesting to read Nicklas’ thoughts on Cleverbot today. Apparently Cleverbot is partially human, which Nicklas observes is an odd conclusion for the Turing test to arrive at. Not only is he right that examples throughout human history show that we often think of “other” as some partial form of our own humanity, that in dehumanizing the “other” we calm our fear somewhat, but we also think of people as partially human in the context of organ donation. In the case of organ donations, though, we create this mental construct of “partial humanity” to theoretically achieve a higher end—presumably, saving other lives that will be more fully lived than one that is only “partially” lived.

All of this of course rests against the backdrop of a society that is embracing robots as our own. One of the more interesting books I read last winter was Alone Together, in which author Sherry Turkle explores the ways humans substitute robots for people to fulfill needs that are otherwise not being met by human companions. The most fascinating example of human-robot connection (which I do not think was in Turkle’s book, but something I think I heard from Ryan Calo) is the soldier who dove in front of gunfire to protect his robot weapon.

We are unquestionably capable of emotional connection to non-humans—the family pet being the most obvious such example. Researchers are demonstrating that we are also to a degree capable of connecting to robots, some of which may be deemed “partially human.” At some point in the future, the question of robot rights will become a subject of public discourse, and I imagine at that point we will revisit this discussion of whether the Turing test adequately measures “humanity” for the purposes of conferring certain individual rights. Perhaps there will even be a similar set of “donation” criteria created for robot-part donations.

It interests me how the Turing test compares against our own criteria for determining brain-death. On the surface, the Turing test seems a higher standard to apply—and (as a human, in 2012) that seems appropriate. I wonder though if we will in my lifetime be having conversations about whether that’s a double standard, or whether the beloved robot companion deserves equal rights to a brain-dead patient before he is harvested for parts.


Habits and technology

Last month I read a great book, The Power of Habit, which explores the neurological science behind habit formation. There are a lot of interesting tidbits in the book—the most frequently cited (and somehow least interesting) one I’ve seen is around whether Target can predict you’re pregnant. The book tends to have a bit more focus on marketing than I’d like, but the angle of how marketing can be used to drive habit formation does offer some insight. The author shares an example of marketers creating a habit for millions of people to brush their teeth every day by connecting a behavior (brushing one’s teeth) to a rewarding feeling (smooth teeth, clean of film). Brush teeth, get reward—no more film!

But the most interesting insight in the book was not the need for a reward for habitual behavior, but that habit formation requires inserting the new behavior into an existing routine. The author uses Febreze as an example of how marketers (again) made this connection—the P&G marketing team was able to turn Febreze into a success once they connected a cue behavior, in this case making the bed, with the new habitual behavior, here spraying Febreze onto the sheets. In other words, they injected the new habit into an existing routine.

I may find the routine aspect of this so interesting for personal reasons—creating a routine in my own life has proven an ongoing challenge, and this may be the elusive input to creating new habits. But it also suggests something interesting to me about the limitations of technology to help.

This weekend I signed up for HealthMonth, a neat little tool from the folks at Habit Labs in Seattle. I’ve been curious about Habit Labs for a while, and figured it was time to try one of their tools. The secret to HealthMonth seems to be gamification of challenging goals—winning the game involves sticking to new behaviors, at which point you give yourself a reward. Along the way, you get little rewards as you record your progress—more points for performing against your challenges, social feedback, etc. All this is well and good, but what it is missing (at least for me) is insertion into an existing routine. I can play the game all I want, but until I find a way to make 30 minutes of daily exercise part of a daily routine, I’m going to have a hard time making it into a true habit.

Which leads me to wonder where technology could really help me create new habits. Perhaps I need technology to help me understand my routine better, so I can identify opportunities to inject new behaviors. For example, maybe if I monitored my detailed location history for a week I’d see that every morning at about 11am I wander into the microkitchen at work and grab a snack (maybe I do!?), and instead I could choose a different behavior to insert at that time. The book describes an example like this, but the individual only identifies the routine through careful manual monitoring—something that almost requires a habit of its own! So I wonder whether this is a real limitation, where technology can only do so much to help change our behavior, or just an opportunity still up for grabs.
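The kind of automated routine-spotting I’m imagining could be sketched in a few lines—here with entirely made-up location samples and an arbitrary “most days of the week” threshold, just to illustrate the idea:

```python
from collections import Counter

# Hypothetical week of timestamped location samples: (day, hour, place).
log = [
    ("Mon", 11, "microkitchen"), ("Tue", 11, "microkitchen"),
    ("Wed", 11, "microkitchen"), ("Thu", 11, "desk"),
    ("Fri", 11, "microkitchen"), ("Mon", 15, "gym"),
]

# Count how often each (hour, place) pair recurs across the week.
counts = Counter((hour, place) for _day, hour, place in log)

# A pair seen on most days is a candidate slot for inserting a new behavior.
routines = [pair for pair, n in counts.items() if n >= 4]
print(routines)
```

A real version would need actual location history and a smarter notion of “recurring,” but even this toy shows how a cue moment (the 11am microkitchen visit) could be surfaced without a week of manual note-taking.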


Compete to win. If you lose, consider creating a new game.

A student in Peter Thiel’s class on start-ups blogged notes of one of the lectures, which led to a NYT article, which led to a blog post by a colleague. The general argument being put forth is that intense competition leads to competition-for-the-sake-of-competition, instead of leading to innovation.

There are many things mixed up in this piece, among them the poorly thought-through notion that war and sports are the same type of competition. First, I need to comment on why this is just not quite right. Playing games is an infinite endeavor. If you lose, you try again; the competition changes; you hope to win sometime. If you can’t win, you can try to change the rules of the game. You may not succeed in doing so—the rules may be set in stone. But in that case, you can find a new game to play. And if there isn’t another game you like, you can always create your own. Consider American football, or the more recent invention of Frisbee golf. Or, how could I forget, Quidditch.

War, however, is not the same. War involves ultimate termination, permanent occupation, subjugation. In historical terms, it often involves the outright exploitation of the weak and disempowered (in real war, women get raped, children murdered). War is not, generally speaking, an infinite endeavor. Nor is it one that enables restarts, do-overs, learning, development, or growth—and certainly not creation of any kind. War is destructive—and it destroys absolutely.

The other thing Thiel mixes up in all this is that he sort of suggests competition does not drive innovation when he says maybe “competition isn’t as good as we’re told it is.” I think that’s wrong; competition is every bit as good as we’re told it is. Take Thiel’s own example—one could argue that losing a competition is the very thing that prompted a creative spark. Without competition, he would not have experienced the loss of not getting the clerkship, and he would not have been forced through a coping process, out of which he wound up starting PayPal. Any seasoned athlete knows why competition in games is a great metaphor for life: “the great accomplishment is not in never failing, but in rising again after you fall.” Thiel got back up when he lost. He just decided the next time to play a different game.

There is an implicit idea in all this that somehow, by “opting out” of competition for a legal clerkship, Thiel found his creative spark and stopped competing—but this is an absurd notion. Did he not compete when he started PayPal? Of course he did. In some sense, he entered an even more competitive endeavor. And he is still competing—today in the early-stage investment space. At some point, though, Thiel will decide he is done competing. He will become, as all athletes eventually do, a spectator. That point would have come eventually even had he done a clerkship, and it will come for all of us. It is part of being human.

All of which is a long way of making just two points: (1) I think the war analogy is a bad one—for work and for life—I much prefer we think in games; and, (2) I do think competition is the thing that spurs creativity, and in many cases losing itself can be the thing that ignites a creative spark to create a whole new game.

As a small tangent, a few months ago one of my oldest friends and I went back to our high school to talk to the PTSA about success. Ironically, we both talked an awful lot more about failure. The evening largely focused on how important it is that students experience failure in high school, and the parents were nodding right along with us the entire time. We need to compete, but just as importantly we need to fail.

So, on a related note, there is one final point to make on this Thiel piece. Brooks ends his NYT column thus: “Everybody worries about American competitiveness. That may be the wrong problem. The future of the country will probably be determined by how well Americans can succeed at being monopolists.” This strikes me as an unnecessary and slightly inaccurate simplification.

If I am right that competition in games is a good thing that drives creativity and innovation, then what do we need to focus on? Here I think the Thiel example teaches us two things, which many of us intuitively know: tolerance for failure and openness to new ideas are the necessary preconditions for innovation. Without those preconditions, you can’t easily decide to stop playing the game you’re losing and create a new one.



Startups aren’t the answer, data-driven system-wide innovation is

The folks over at Edge have a really interesting read on innovation, creativity and culture by Mark Pagel. I had a couple reactions to this:

(1) Is innovation always data-driven? Mark makes a rather compelling argument that innovation is rarely the result of someone seeing a process or tool and automatically knowing how to make it better—it’s usually a combination of copying and trial-and-error. This leads me to think the argument is actually that innovation involves a process of evaluating how well something is working (e.g., measuring its success), and iterating to make it work better. I don’t know how much I buy it as an absolute—that all innovation is data-driven—but I suspect that it holds for the vast majority of cases.

(2) How many of us should or will be innovators?  Mark makes a compelling enough argument that any given individual in a society relies more on copying than on innovating—which is not to say that innovation isn’t important. The argument seems to be that any given innovation scales easily, and it scales more and more easily the more connected a society becomes. With the Internet, he argues, we need even fewer individuals innovating, because an idea that pops up in southeast Asia can make its way around the world in no time. He also makes a point about language—so I think the advent of online translation tools is also part of his argument. He seems to be saying that most of us are organized to be copiers or consumers of other people’s innovations.

What interests me about these two points in combination is what that means for how we organize as a society—how we distribute resources and human capital.

On the point about how many of us should be innovators, I think Mark is both right and wrong. He’s right if when we talk about innovation we mean new idea creation, or what I’ll call isolated innovation. That’s the idea of innovation that two guys in a garage can build something revolutionary and change the world. We saw a lot of this type of innovation in the past century. But I think Mark is a little bit wrong if we want to talk about the type of innovation we need to see more of in the next century, which I’d argue is not going to be isolated but rather systemic innovation.

The biggest problems we face this century are systems problems—climate and environment, public health, sustainable urban development, etc. These types of problems won’t be solved by two guys in a garage. Instead we need data-driven innovation at scale: we need lots of well-funded scientists collecting, sharing and analyzing huge data sets about complex systems.

If you agree with me so far, I think this suggests a few different things.

  • We need to divorce the idea of innovation from startups. Innovation is as much about existing, large institutions as it is smaller, new ones. Instead of talking about start-ups, we should focus on R&D policy—and make sure that it is size-agnostic.
  • We need to be able to collect, share and analyze data across institutions for the purposes of innovation. This means creating open data standards, especially in the public sector. Proposals like the EU Open Data strategy, and related work elsewhere, are encouraging.
  • Some of the data we will need to analyze is going to be personal data, so we need mechanisms to support consent in the innovation process. This is why projects like the one John Wilbanks is leading, Consent to Research, are so important.
  • Analyzing the large sets of data that will drive a lot of this innovation will mean using the cloud. It’s just not cost-effective to expect everyone to run their own data centers for this type of computation. We need to reduce barriers to access these cloud services, such as restrictions on cross-border data flow. The APEC Pathfinder project is one encouraging effort to achieve this goal.

Just a few thoughts on a very big subject. 


To learn how the Internet works is to learn civics

*Update* This post was originally titled “Teach not coding but architecture,” which is still reflected in the permalink. I have however updated the title to reflect a far better one proposed by Jonathan Zittrain on Twitter.

My colleague Nicklas Lundblad has a good post this weekend on the virtues of teaching computer science. I felt so immediately jarred by a missing piece of the argument that I was compelled to sign in and blog for the first time in months (an absence I feel tremendous guilt about). His argument is spot-on, but misses a very simple yet critical addendum.

My own education in computer science of course influences my views on the matter. I often tell people, somewhat jokingly, that despite beginning my technical studies at an all-women’s college, I have an ex-boyfriend to thank for the decision. My high school sweetheart was desperately passionate about technology, and in my freshman year of college I made an attempt to bridge the physical distance between Boston and California by bridging an intellectual one instead. I endeavored to take the introductory CS class at Wellesley in large part because it was what he was so passionate about…but I promptly fell in love with the subject myself.

CS is a liberal art. This was a time-honored debate our professors engaged us in. Many of them came from engineering schools but wound up teaching the subject at a liberal arts school, and so had strong views on the subject. As do I, now. It is absolutely a subject that teaches you how to think—not just how to build. Building is a side effect of the discipline, a very useful one, but nonetheless a side effect.  So here, I agree 100% with Nicklas when he says:

Computer science offers a new way to understand the world, to think about it as algorithms and data structures and data sets. That is extremely powerful. So should we teach kids coding instead of teaching them to cut and paste in word processing software? It does not seem to be a very hard question does it?

Where I disagree is that I don’t think coding is enough of a prerequisite. Like advanced biology or chemistry, coding should be included in the standard curriculum. But the core class we aren’t teaching our kids that we need to be is Internet Architecture—that should be, like government or civics classes today, a prerequisite for graduation.

After college I somehow found my way into MIT’s Technology & Policy Program. Like most other new graduate students, I spent the first few weeks of my time at MIT on the job market looking for an RA or TA position to help pay the bills. In our case, the TPP program also required that we find a research advisor, which usually dovetailed with finding an RA position. I stumbled into Dave Clark’s office one day to ask for a job, completely unaware of his stature in the Internet community. He asked if I’d read his paper about the end-to-end principle, and I said no—could he tell me about it? What followed from that initial conversation was by far the most meaningful educational experience of my life.

For the next two years I would work with Dave on research topics relating to identity and net neutrality. Each week, I would sit down with him for an hour and get an individual tutorial on how the Internet worked and the history of its design. I learned early on what an IP address was and how TCP/IP worked—amazingly, I had graduated college with a CS degree without understanding those concepts. I could code when I left college, and I could think about problems through this lens—it had undoubtedly changed the way I thought about the world. But despite a college degree in CS, I fundamentally had not learned the basic governance principles of the Internet.

To appreciate the mechanisms through which information can be exchanged and manipulated is to appreciate the mechanisms through which people are able to organize and communicate. To appreciate the processes through which technical standards are agreed upon is not unlike appreciating the processes through which laws and regulations are agreed upon. We teach all high school students how a bill becomes a law. Why don’t we teach them how a packet becomes an email?
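The layering behind “how a packet becomes an email” can be sketched very loosely in code. This is only an illustration of the encapsulation idea—each layer wrapping the payload from the layer above with its own header—not the real SMTP, TCP, or IP formats, and the addresses and ports are made up:

```python
def smtp_layer(body: str) -> str:
    # Application layer: the email message itself.
    return f"From: alice@example.com\nTo: bob@example.com\n\n{body}"

def tcp_layer(data: str, src_port: int = 25, dst_port: int = 25) -> str:
    # Transport layer: ports (and, in real TCP, ordering and reliability).
    return f"[TCP {src_port}->{dst_port}]{data}"

def ip_layer(segment: str, src: str = "192.0.2.1",
             dst: str = "198.51.100.7") -> str:
    # Network layer: addressing, so routers can move the packet between hosts.
    return f"[IP {src}->{dst}]{segment}"

# Each layer wraps the one above; routers only ever look at the outermost header.
packet = ip_layer(tcp_layer(smtp_layer("Hello, Bob!")))
print(packet)
```

The design lesson a civics-style class would draw out is precisely this separation: the network layer moves packets without knowing or caring that an email is inside, which is the end-to-end principle in miniature.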

I went to college with a lot of women who were interested in politics and economics, who aspired to careers in public service, and whose interests lay in questions of how society organizes itself. The women I know who are now working in these fields are having and will absolutely continue to have an impact on the world. But I would dare any of them to name a field of public policy not currently impacted by or disrupted by the Internet. Understanding how to write code that builds an isolated piece of technology is like understanding how to read and write, or knowing the ins and outs of a particular subject like Biology. But understanding how the Internet works is like understanding the way society is governed. The architectural design should be taught in high school, the same way we teach about the design of the US Constitution.

This might seem like a radical recommendation, but there it is. Lessig wrote “Code is Law” which is basically the same idea, but had the unfortunate side effect of focusing everyone’s attention on coding. And I suppose that most engineers get to a place where they understand the architecture of the net by starting out writing code. But I don’t believe that being able to write code is a prerequisite to understanding the design of the architecture, and therefore in my view teaching the architecture is the mandatory prerequisite we don’t have yet. We should be equipping people to answer the simple question: how does the Internet work?


Using social networking behavior to predict behavior problems

I was really struck by this article last week describing research that used publicly available Facebook profiles to predict which students were likely to suffer from alcohol abuse. The article suggests toward the end that there is an open question about how appropriate it is to go scanning students’ public Facebook profiles for behavior suggestive of a drinking problem—to me, this misses the point and detracts from more important questions. The more interesting questions are who gets missed if the predictive algorithm is wrong, and whether you substitute the predictive algorithm for other, more tried-and-tested screening measures (either because it’s lower cost or because it’s perceived to be more effective in the short run).
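The “who gets missed” worry can be made concrete with a toy sketch. Everything here is hypothetical—the students, the risk scores, the 0.5 cutoff—but it shows how any threshold on a predictive score necessarily produces false negatives: people the screen fails to flag who actually needed help.

```python
# Hypothetical students: (name, model risk score, whether truly at risk).
students = [
    ("A", 0.9, True),    # flagged, truly at risk   -> true positive
    ("B", 0.7, False),   # flagged, not at risk     -> false positive
    ("C", 0.3, True),    # not flagged, at risk     -> false negative
    ("D", 0.2, False),   # not flagged, not at risk -> true negative
]

THRESHOLD = 0.5  # an arbitrary cutoff, for illustration only

# The students the predictive screen misses entirely.
missed = [name for name, score, at_risk in students
          if at_risk and score < THRESHOLD]
print(missed)
```

If a school quietly swaps this kind of screen in for counselor referrals because it is cheaper, student C is exactly the person who falls through the gap—and that trade-off, not the public-profile scanning itself, is the harder question.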

I suspect we’ll be seeing more and more applications like this. Those in the personal healthcare arena—e.g., personal health tracking that predicts risk for disease—will seem largely uncontroversial at first, but may pose the hardest questions in terms of how to ensure fair access to insurance and preventative care. Others like this that are more focused on public healthcare and/or social behavior modifications I suspect will provoke more ire, uncertainty, fear, and concern. For example, it probably doesn’t take much imagination to think of a world in which the social graph and one’s behavior in a social network are used as predictors of an individual’s likely political tendency—or, put another way, the likelihood that one is a terrorist from the point of view of a given state.

It would be interesting to see, for a given use case, a mapping of the pre-digital algorithm to the post-digital algorithm. That is, what inputs are used to screen for alcohol abuse today, and how do those inputs change in a digital era.


No time better than the present

At Google we use a technique called OKRs to set goals and measure progress, which has been written about a fair bit. Since I’m spending some time at Berkman this year, I set out OKRs for my efforts there as well—one of which was to document a bibliography of all the reading I do related to predictive analytics (or anything close). Closing out the first month of the year, I’ve at least made some progress against that OKR tonight. I’ll be updating the Readings page here as I make my way through books and articles on point (and hopefully catching up on all reading to date soonish).

On an unrelated topic: as I went to document key takeaways from Incognito tonight, I realized how poor I find the digital medium for note-taking in books and generally for any sort of non-fiction reading that you may want to flip back through. I’ve noted this observation elsewhere in the past, but was struck again by this drawback tonight.

I’d welcome suggestions on better ways and tools to document said bibliography, but for now I fear it’s just going to be a semi-disorganized list.
