An evolutionary game-theoretic model of privacy versus utility

I first thought about this while savouring a coffee-soaked madeleine with a few good friends at Tous Les Jours on a Sunday evening a few moons ago (September 29). I personally think it is a question worth investigating, and I did not want to write about it until I had worked out more details, but I have not found a peaceful stretch of time to do so. In the meantime, I will scribble about it here. Perhaps you could tell me whether it would be useful?

The seed was planted two days earlier when Seda Gurses organized a delightful meeting on the technical implications of the NSA and GCHQ revelations. In the meeting, a very intelligent acquaintance asked, “Why don’t we build a browser add-on or extension to encrypt everything we do on Facebook?”, to which I said, “But won’t this mean the end of Facebook?”

Having recently read Part 1 of Schneier’s insightful book Liars and Outliers, it occurred to me (while enjoying coffee and madeleine) that we could probably use game theory to model this relationship between privacy and utility. If Facebook is the host, then we are the little organisms that both sustain and need Facebook, just as Facebook needs us to survive. Life has seen many such examples of mutualism. Furthermore, we could study, as Maynard Smith did, evolutionarily stable strategies that sustain both Facebook and ourselves despite conflicting interests.

Let me explain what I mean by all of this. It is not a profound observation, but I believe there is a deep relationship between privacy, utility and machine learning. Suppose we could somehow measure the utility that we provide Facebook (perhaps by how well Facebook can extract meaningful statistics from our data via machine learning). We could then study how different privacy-preserving strategies affect that utility, which would, in turn, ultimately affect us. If we encrypt absolutely everything, so that Facebook cannot discern at all what we say and do, then Facebook the host will probably not survive, and we are deprived of its useful services. On the other hand, if Facebook can discern everything we say and do, then we have no privacy left, which detracts substantially from the value of its services. Our model must be able to capture both of these extremes and let us study how different privacy-preserving strategies affect both Facebook and ourselves in the long term.
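To make the trade-off concrete, here is a minimal sketch (in Python) of how one might write down both quantities. Everything in it is an assumption of mine rather than a worked-out model: the functional forms, the parameter names (ad_value, service_value, privacy_cost) and the survival threshold are placeholders, chosen only so that both extremes above fall out of the same pair of functions.

```python
# A toy model: each user encrypts a fraction e in [0, 1] of what they post.
# All functional forms and constants below are illustrative assumptions.

def facebook_utility(e, ad_value=1.0):
    """Value Facebook extracts (say, via targeted advertising) from the
    unencrypted fraction of a user's activity: everything at e = 0,
    nothing at e = 1."""
    return ad_value * (1.0 - e)

def user_utility(e, service_value=1.0, privacy_cost=1.0, host_viable=True):
    """User's payoff: the service is only worth something while the host
    survives, and privacy loss grows with the unencrypted fraction."""
    service = service_value if host_viable else 0.0
    return service - privacy_cost * (1.0 - e)

# The two extremes from the text, plus a middle point:
#   e = 1.0: Facebook extracts nothing, so it presumably does not survive,
#            and we lose the service along with it.
#   e = 0.0: Facebook thrives, but we bear the full privacy cost.
for e in (0.0, 0.5, 1.0):
    fb = facebook_utility(e)
    viable = fb > 0.3          # assumed survival threshold for the host
    print(e, fb, user_utility(e, host_viable=viable))
```

Under these made-up numbers the intermediate strategy (e = 0.5) is the best of the three for the user, which is really just restating the question: is there a partial-encryption sweet spot that keeps both parties alive?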

So I hope you can see why I think an evolutionary game-theoretic model would capture how different privacy-preserving strategies would sustain both Facebook and ourselves despite conflicting interests. What follows are some rough ideas to explore:

  • Measure utility to Facebook by studying how different privacy-preserving strategies would affect, say, the effectiveness of targeted advertising.
  • Would a budget for encryption, or only partial encryption of data, result in a sustainable relationship? (A rough simulation sketch of this follows after the list.)
  • Study different probability distributions of the risks of being identified (privacy loss). So far I have considered the utility to Facebook, and I think this might be useful for studying the privacy loss to ourselves, although I don’t yet see exactly how.
  • Impact of memory (a.k.a. “right to be forgotten”): if Facebook were forced to remember only a limited window of data, would it be able to sustain itself? Consider how memory affects the evolution of cooperation.
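Here, in the same spirit, is an equally rough replicator-dynamics sketch of the partial-encryption question, following Maynard Smith’s framing. A population of users plays fixed encryption fractions; the value of the service each user enjoys depends on whether the aggregate unencrypted data keeps Facebook above an assumed survival threshold. Every number (costs, threshold, update rule) is a placeholder I invented for illustration.

```python
import numpy as np

strategies = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # fraction of data encrypted
x = np.full(len(strategies), 1.0 / len(strategies))  # initial population shares

service_value = 1.0        # value of Facebook's services while it is viable
privacy_cost = 0.8         # cost of exposing all of one's data
survival_threshold = 0.3   # assumed minimum aggregate utility Facebook needs

for step in range(200):
    aggregate = float(np.dot(x, 1.0 - strategies))   # utility flowing to Facebook
    service = service_value if aggregate > survival_threshold else 0.0
    payoff = service - privacy_cost * (1.0 - strategies)
    mean_payoff = float(np.dot(x, payoff))
    # Discrete replicator update: strategies with above-average payoff grow.
    x = x * (1.0 + payoff - mean_payoff)
    x = np.clip(x, 1e-9, None)
    x /= x.sum()

print(dict(zip(strategies.tolist(), np.round(x, 3).tolist())))
```

With these particular toy payoffs, full encryption free-rides on everyone else’s openness: it earns the highest individual payoff while the host is alive, spreads through the population, and eventually pushes the aggregate below the survival threshold. Whether some budgeted, partial encryption strategy is instead evolutionarily stable is exactly the kind of question the model is meant to let us ask.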

If something does not make sense here, it is because the whole thing has not yet been carefully thought out. I would be interested to hear corrections or other feedback from you.
