All the dialogues about trust, computing and society: why?

18 Apr

Any glance at the contemporary intellectual landscape makes it clear that trust and computing is a topic of considerable interest. And by this I do not mean whether computers can be relied upon to do their job; that they simply have to do as they are told. If only it were as simple as that – an interface. As Douglas Rushkoff argues in his brief and provocative book, Program or Be Programmed (2010), when people rely on computers in their everyday life it is not like Miss Daisy trusting her chauffeur to take the car to the right destination. It’s not what computers are told that is the issue. It’s what computers tell us, the humans. With computing, so Rushkoff would have us believe, there is no knowing what the destination is: it is unclear what it is that humans are trusting in, or for. John Naughton, in his From Gutenberg to Zuckerberg (2012), asks similarly large questions, and here too the topic of the ‘interface’ seems inconsequential: for him we need to ask whether we can trust computing (and the internet in particular) to bring us a dystopia or a heaven – though the contrast he presents is not entirely without irony: it is the duplicitous appeal of Huxley’s Brave New World or the bleakness of Orwell’s Nineteen Eighty-Four. Meanwhile, Pariser complains in The Filter Bubble (2011) that we cannot trust search engines any more; today, in the age of the cloud and massive aggregation systems, search engine providers can hide things from us in ways we could not guess. Doing so is at once sinister and capitalistic, Pariser argues; sinister since it is disempowering, capitalistic since it places the market above the public good. Search engines take you to what companies want to sell, not to what you need to know.
One-time capitalist William Davidow is likewise agitated, though it is not salesmanship that worries him: we are now Overconnected (2011), as he argues in his eponymous book, and we cannot trust ourselves to reason properly. This is merely a list of well-known texts in the public domain; there are as many again in the more scholarly worlds of philosophy, sociology and, of course, computer science. In the first of these the journal literature is immense, whether it be Holton’s 1994 paper ‘Deciding to Trust, Coming to Believe’ or Baier’s Moral Prejudices (1994); in sociology there are at least as many, including Misztal (1996), Möllering (2006) and Gambetta’s edited collection of 1988 (including as it does some philosophers, such as Williams). In computer science and human–computer interaction (HCI) there are as many again, with Piotr Cofta’s The Trustworthy and Trusted Web of 2011 being one of the most recent. The sheer volume and scale of this discourse leads one to doubt whether any single, unified view will arise out of it, even if many of the authors in question want to offer one: Bruce Schneier, though not an academic, comes to mind with his highly readable Liars and Outliers: Enabling the Trust That Society Needs to Thrive (2012).

Navigating the domain

So what is one to make of all this? It seems to me that we have to stop rushing to answer what trust is – even if in the end we might come to seek such an answer. Rather, at the moment, and given the wealth of views currently being presented on the topic, we need to ask something about trust that is, as it were, prior to the question of what it is. We need to ask: why all the fuss about trust now? Having done this, we can inquire into how these current concerns are affecting what is treated as trust, how that trust is ‘theorised’ and the ways that evidence is brought to bear on discussions about that theorised object.

The sociologist Luhmann noted in his essay Familiarity and Trust (1988) that societies seem to make trust a topic of concern at certain historical moments – they need to arrange themselves so as to make trust a possibility and a worry. This interest does not seem to have much to do with trust itself – in its a-historic or transcendental conceptual sense (even if Luhmann had an interest in that himself). It has to do with a particular confluence of concerns that lead societies to reflect on certain things at particular times. This argument is in accord with Richard Rorty’s view about how to understand ideas and concepts in his Philosophy and the Mirror of Nature (1979). In this view, appreciating debates about some concern requires one to see them as historical (even contemporary debates). Doing so entails dissecting the links between views of other apparently disconnected concerns, to create maps of the historical topography of ideas and investigation into the performative goal or goals that lay behind the development and deployment of the ideas in question. It requires, in sum, understanding the ‘when’ of an argument and the ‘so what?’ of it – what it led to.

Let me illustrate what is meant by this in relation to arguments about trust and computing. A decade ago, the philosopher Onora O’Neill offered an account of trust in the Reith Lectures (2002). She wanted to characterise some of the essential, true elements of trust and its basis in action. Hers purported to be an a-historic view, a concern simply with the conceptual fabric of the term. She claimed that trust between people is a function of being near one another. By that she did not mean near in a moral or social sense. She meant in terms of the body. This might seem a strange argument, but bear with me. It comes down to the idea that people trust each other because they can touch each other; because they can see each other, their every movement; that people can, say, grasp another at will and be grasped back in turn: because they are altogether, in one place. Trust would seem to turn on cuddles. This is of course to paraphrase O’Neill. But, given this, a problem occurs when distances are introduced into social relations such that people can no longer cuddle. Trust is weakened if not dissolved. Mechanisms need to be developed, O’Neill argued, that make ties between bodies separated by space possible. In her lectures, she explored various answers to the question of how trust could be made.

Why did O’Neill come up with this view? It seems quite stark; almost startling, certainly, to one who has not confronted it before. If truth be told, I have simplified her case and used a colourful way of illustrating it, though I do not think I have mischaracterised it. In presenting it thus, however, one can begin to see that there might be very profound links between it and the context, the historical context, in which it was presented. This was just a decade ago, and although that seems an eternity in terms of the internet, it is the internet that I think is key to that context. And it is in light of that context that the credit one should give to O’Neill’s views lies. It seems to me that O’Neill was putting forth a view about the relationship between our bodies, our location in space, and the trust that was fostered (or not) by the use of contemporary technologies of communication, most especially internet-related ones. Her theory of the nature of trust (assuming for the moment that one can call it a theory) was created against the backdrop of the problems of trust and communication highlighted by the internet. With the latter, the human body seemed to be visibly absent and, since trust was problematic on the internet, by dint of that the body must be the seat of trust in ‘normal’ (non-internet) settings of action. Hence O’Neill’s theory.

As it happens, O’Neill did not refer very much to the internet in her lectures. The important point I want to make is that, to understand O’Neill, one does not have to accept the idea that the presence of the body in any universal sense is always essential to trust: one simply has to accept that the absence of the body in acts of communication is a problem in the context of contemporary society, in the internet society. If one places her argument in context, one sees that that is in fact what she is writing about. It is, as it were, her starting point. Something about the acts of communication we undertake on the internet makes the location of the body – its presence/absence – salient. So, following Luhmann’s and Rorty’s view, what we have in O’Neill’s lectures is a historically situated claim. Now, one could say that historicising her argument is perhaps reducing the credit it should be given. That is not my intention – though this might not be clear at the moment. One of the reasons I chose her view to illustrate my case is that her argument was quite often presented at that time. It is in this sense exemplary. As it happens, the argument continues to be made. Be that as it may, what I have thus far sought to show is the topographical relationship between O’Neill’s ideas and their socio-technical context. But one also needs to consider their performativity. Once raised, an argument can be assessed, considered, brought to bear; one also has to consider where the argument was deployed, and for whom. In my view, what O’Neill was doing in her lectures was getting the public to think about the role of philosophy, and suggesting that, despite appearances otherwise, philosophy can concern itself with everyday concerns, even ones to do with the body.
Whether she succeeded in persuading the public of the relevance of philosophy I do not know, but what one can say is that she got the argument widespread attention, even if she was not its only advocate. As Charles Ess and May Thorseth (Eds) discuss in Trust and Virtual Worlds (2011), the idea that it is the absence of the body that undermines trust came to be cultivated when new communications technologies enabled by the internet began to take off – in the nineteen-nineties – and O’Neill’s Reith Lectures are illustrative of this ‘cultural moment’. In research since, as Ess and Thorseth show, this link between body and trust can be seen to have been exaggerated. O’Neill can now be seen to have put forth too strong a case. The purpose of placing arguments in context and exploring their performative consequences, however, should be to make it clear that one ought not to judge attempts to explore trust by a simple right-or-wrong metric. In historicising a point of view, we can also see what that point of view might help create, the dialogues it led to and the richer understandings that ensued. It seems to me that O’Neill (and others who put forward her perspective at that time) helped foster discussion, analysis, debate and more nuanced understandings of the role of the body in social relations. The value of O’Neill, part of the success of her argument, is to be found in the fact that this topic was (and is still being) more thoroughly examined than it might otherwise have been.

To locate the discussion of trust, computing and society in time, in the contemporary moment, and to present and consider those arguments in terms of what they seek to attain is of course a big enterprise. There are many such arguments, and there are various goals behind them. Their topography is diverse, their performativities also. Some come from quite particular specialist areas, such as the computer science domain known as Human–Computer Interaction (HCI), which has been looking at how to design trust into systems for many years. Its criteria for success have to do with the practical use of designs, and less to do with any philosophical aspiration to define trust in a universal sense. Other arguments have their provenance in, for example, sociology, and here the topic of trust turns out to be specifically how the concept is used performatively in social action: it is not what sociologists think trust ought to be that is the topic but how people in everyday life use the concept. In addition to the sociological and the HCI perspectives, there are also philosophical points of view, where the concern is to address the topic as a species of concept, as illustrative of the stuff of philosophical inquiry. Methods and means of argument here are different from those found in, say, sociology, just as they are from those found in HCI. There are also arguments from the domain of technology itself (if not from HCI), and by that I mean from the point of view of those who engineer the systems that constitute the internet as we know it and as it is coming to be: this is the view, broadly speaking, of computer science. From this perspective – admittedly a broad camp – issues to do with distinguishing between systems trustable in engineering terms and systems whose use raises questions about the trustability (or otherwise) of users are prominent. And then we have arguments that are more in the public domain, of the type listed in the first paragraph.
These are ones that are helping constitute the narrative of our age, what society thinks it is about and what it needs to focus on.

These diverse arguments cannot be added up and a sum made. As should be clear, each needs to be understood as part of the mise-en-scène of contemporary life, and each needs to be judged in terms of its own goals. Key, above all, is to see how they variously help foster a dialogue and a sense of perspective on the large and sometimes worrisome topic that is trust, technology and society. Maybe that is the answer to my question, the question that led to this blog: why are there so many dialogues about trust, computing and society?

 

Selected Bibliography

Davidow, W. Overconnected, Headline Publishing (2011).

Cofta, P. The Trustworthy and Trusted Web, Now Publishers (2011).

Ess, C. & Thorseth, M. (Eds) Trust and Virtual Worlds, Peter Lang (2011).

Gergen, K. Relational Being, OUP. (2009).

Hollis, M. Trust within Reason, CUP (1998).

Lessig, L. Remix, Penguin (2008).

Luhmann, N. Familiarity and Trust, in Gambetta, D. (Ed.) Trust, Blackwell (1988), pp. 94–107.

Masum, H. & Tovey, M. The Reputation Society, MIT Press (2011).

Misztal, B. Trust in Modern Societies, Polity Press (1996).

Möllering, G. Trust: Reason, Routine, Reflexivity, Elsevier (2006).

Naughton, J. From Gutenberg to Zuckerberg, Quercus (2012).

Pariser, E. The Filter Bubble, Viking (2011).

Rorty, R. Philosophy and the Mirror of Nature, Princeton (1979).

O’Neill, O. The Reith Lectures, BBC (2002).

Schneier, B. Liars and Outliers: Enabling the Trust That Society Needs to Thrive, John Wiley (2012).

Rushkoff, D. Program or be Programmed, Soft Skull Press (2010).
