
Dialogues with computers?

9 Jul

At the conference on Human Computer Interaction in Paris (CHI 2013), one of the more interesting panels asked why spoken dialogues between humans and computers have not had the success predicted. Voice recognition is now good, and at many points of interaction with machines a voice-based dialogue is not only easy but often preferable for safety reasons. Using voice commands when driving a car, for example, is certainly less hazardous than keyboard data entry. Voice-based systems are quite common, too; most people can hardly say they reject them out of unfamiliarity. Finally, voice-based dialogues seem ‘natural’; ‘intuitive’, one might say.

One would think that, taken together, these reasons would make voice-based interaction, dialogue with computing, the norm. And yet it isn’t.

Many of the participants in the panel (and those who added comments from the floor) suggested that the reason for this has to do with a profound resistance amongst users to speaking with computers. Something about doing so leaves people feeling as if trust is at issue. Either users don’t trust the systems they are dialoguing with, fearing they are being misled or fobbed off with interactions designed to trap them, or they don’t trust their own participation in such interactions: they fear they are being made fools of in ways they cannot understand.

These discussions led me to reflect on my own current reading. Dialogue with computing is certainly a hot topic, though the concern in what I have been reading is not with the adequacy of the enabling technology (speech recognition engines, dialogue protocols and so forth); it has to do with the purposes and consequences of such dialogues.

For example, Douglas Rushkoff argues in his brief and provocative book, Program or be Programmed (2010), that when people rely on computers to do some job, it is not like Miss Daisy trusting her chauffeur to take her car to the right destination (an allusion to Driving Miss Daisy). It’s not what computers are told that is the issue. It’s what computers tell us, the humans, as they get on with whatever task is at hand. And this in turn implies things about who and what we are because of these dialogues with computing.

According to Rushkoff, there is no knowing what the purpose of an interaction between person and machine might be: it is certainly not as simple as a question of command and response. In his driving metaphor, what comes into doubt is rarely whether the computer has correctly heard and identified a destination. The dialogues that we have with computers lead us to doubt why some destination is chosen. This in turn leads to doubts about whether such choices should be in the hands of the human or the computer. The computer seems to ‘know’ more; why should it not decide?

John Naughton, in his From Gutenberg to Zuckerberg (2012), raises similarly large issues, again illustrated with destinations. For him, we need to ask whether we can trust computing (and the internet in particular) to lead us to heaven or to dystopia, though the contrast he presents is not entirely without irony: heaven is represented by the duplicitous appeal of Huxley’s Brave New World, dystopia by the self-evidently bleak form of Orwell’s Nineteen Eighty-Four.

Meanwhile, Pariser complains in his Filter Bubble (2011) that we cannot trust the dialogue we have with search engines: today, in the age of ‘the cloud’ and massive aggregation systems, search engine providers can hide things away from us in ways that we cannot guess. When we ask search engines something, we cannot know what the answer will be, for search engine technology now decides what we need or want, even what is good for us to know. That this is so is at once sinister and capitalistic, Pariser argues: sinister since it disempowers the human, capitalistic since it places the market above the public good. Search engines take you to what companies want to sell, not to what you need to know.

These books, subtle though they are, seem to miss something: they all assume that the issue is one of trusting either the computer or ourselves; that dialogues are between two parties, and that not both parties can be trusted, at least not all the time. And, importantly, it is not always the computer that breaks trust: sometimes a computer does know more than its human interlocutor, and so should be trusted to make the right decisions in certain circumstances. What these authors seem to miss is the question of what speaking with computers says about the value that people, and society more generally, give to speech. John Durham Peters argues in his book Speaking into the Air (1999) that one of the essential values that came out of the Old Testament was the Hebrew idea that speech distinguishes people from beasts. Or, rather, it is the capacity to speak to God that distinguishes humanity from the wild animal.

At the CHI conference I mentioned above, one of the panellists argued something similar: that people treat speaking as something hallowed, precious, a unique bond between people. It is therefore not a skill that should be debased into a method of dealing with computers. As it happens, this panellist, Professor Matt Jones of Swansea University, is a trained priest, and so this view might reflect his desire to honour the spoken word as the Old Testament does. But as I listened to the various points of view put forward, including his own, I began to think that perhaps it is the status given to speech that leads people to resist defiling it with the mere task of communicating with computers. Perhaps there is something about our capacity to talk with other people (and our Gods, if we so choose) that we want to preserve as well as honour.

This led me to think of Wittgenstein and his remark that if a lion could talk, we could not understand it. In his view, our conversations are about our human experience; about what it means and feels like to be human.

And then, as I reflected on the tribulations that voice-based dialogues with computing induce, how foolish they can make one seem as they force us to keep repeating words and phrases, I began to realise that this foolishness might be making us feel less human. It degrades our hopes for what we want to be: gifted with words and talk, talk that bonds us with each other (and, for some, like Matt Jones, with their God).

And then, as I recalled the tasks one often seeks to undertake in such dialogues, I thought there was even more credit to the idea that talk with people is special. After all, a typical use of voice dialogues is found when someone calls a company to complain about a service or product. They find their attempts to speak with someone are spurned: they end up engaged in endless and seemingly pointless dialogues with a computer!

This too, like the shame we feel when we are instructed on how to speak by computers, attests to our desire to speak to people.

Speech is not then a mere modality of interacting with computers; it’s a modality that has especial status for people: it’s the modality for being human. No wonder then that voice-based dialogues are not as popular as predicted. We really don’t want dialogues with computers.


All the dialogues about trust, computing and society: why?

18 Apr

Any glance at the contemporary intellectual landscape makes it clear that trust and computing is a topic of considerable interest. And by this I do not mean whether computers can be relied upon to do their job, simply doing as they are told. If only it were as simple as that – an interface. As Douglas Rushkoff argues in his brief and provocative book, Program or be Programmed (2010), when people rely on computers in their everyday life it is not like Miss Daisy trusting her chauffeur to take her car to the right destination. It’s not what computers are told that is the issue. It’s what computers tell us, the humans. With computing, so Rushkoff would have us believe, there is no knowing what the destination is: it is unclear what it is that the humans are trusting in, or for.

John Naughton, in his From Gutenberg to Zuckerberg (2012), asks similarly large questions, and here too the topic of the ‘interface’ seems inconsequential: for him we need to ask whether we can trust computing (and the internet in particular) to bring us dystopia or a heaven – though the contrast he presents is not entirely without irony: it is the duplicitous appeal of Huxley’s Brave New World or the bleakness of Orwell’s Nineteen Eighty-Four. Meanwhile, Pariser complains in his Filter Bubble (2011) that we cannot trust search engines anymore; today, in the age of the cloud and massive aggregation systems, search engine providers can hide things away from us in ways that we could not guess. Doing so is at once sinister and capitalistic, Pariser argues: sinister since it is disempowering, capitalistic since it places the market above the public good. Search engines take you to what companies want to sell, not to what you need to know. A one-time capitalist himself, William Davidow is likewise agitated, though it is not salesmanship that worries him: we are now Overconnected (2011), as he argues in his book of that name, and we cannot trust ourselves to reason properly.

This is merely a list of well-known texts in the public domain; there are as many again in the more scholarly worlds of philosophy, sociology and, of course, computer science. In the first, the journal literature is immense, from Holton’s ‘Deciding to Trust, Coming to Believe’ (1994) to Baier’s Moral Prejudices (1994); in sociology there are at least as many, including Misztal (1996), Möllering (2006) and Gambetta’s edited collection of 1988 (which includes some philosophers, such as Williams). In computer science and human-computer interaction (HCI) there are as many again, Piotr Cofta’s The Trustworthy and Trusted Web (2011) being one of the most recent. The sheer volume and scale of this discourse leads one to doubt whether any single, unified view will arise out of it, even if many of the authors in question want to offer one: Bruce Schneier, though not an academic, comes to mind with his highly readable Liars and Outliers: Enabling the Trust That Society Needs to Thrive (2012).

Navigating the domain

So what is one to make of all this? It seems to me that we have to stop rushing to answer what trust is, even if in the end we might come to seek such an answer. Rather, at the moment, and given the wealth of views currently being presented on the topic, we need to ask something about trust that is, as it were, prior to the question of what it is. We need to ask: why all the fuss about trust now? Having done this, we can inquire into how these current concerns are affecting what is treated as trust, how that trust is ‘theorised’, and the ways evidence is brought to bear on discussions about that theorised object.

The sociologist Luhmann noted in his essay Familiarity and Trust (1988) that societies seem to make trust a topic of concern at certain historical moments – they need to arrange themselves so as to make trust a possibility and a worry. This interest does not seem to have much to do with trust itself, in its a-historic or transcendental conceptual sense (even if Luhmann had an interest in that himself). It has to do with a particular confluence of concerns that leads societies to reflect on certain things at particular times. This argument is in accord with Richard Rorty’s view, in his Philosophy and the Mirror of Nature (1979), about how to understand ideas and concepts. In this view, appreciating debates about some concern requires one to see them as historical (even contemporary debates). Doing so entails dissecting the links between views of other apparently disconnected concerns, creating maps of the historical topography of ideas, and investigating the performative goal or goals that lie behind the development and deployment of the ideas in question. It requires, in sum, understanding the ‘when’ of an argument and the ‘so what?’ of it – what it led to.

Let me illustrate what is meant by this in relation to arguments about trust and computing. A decade ago, the philosopher Onora O’Neill offered an account of trust in the Reith Lectures (2002). She wanted to characterise some of the essential, true elements of trust and its basis in action. Hers purported to be an a-historic view, a concern simply with the conceptual fabric of the term. She claimed that trust between people is a function of being near one another. By that she did not mean near in a moral or social sense; she meant in terms of the body. This might seem a strange argument, but bear with me. It comes down to the idea that people trust each other because they can touch each other; because they can see each other, their every movement; that people can, say, grasp another at will and be grasped back in turn: because they are all together, in one place. Trust would seem to turn on cuddles. This is of course to paraphrase O’Neill. But, given this, a problem occurs when distances are introduced into social relations such that people can no longer cuddle. Trust is weakened, if not dissolved. Mechanisms need to be developed, O’Neill argued, that make ties between bodies separated by space possible. In her lectures, she explored various answers to the question of how such trust could be made.

Why did O’Neill come up with this view? It seems quite stark, almost startling, certainly to one who has not confronted it before. If truth be told, I have simplified her case and used a colourful way of illustrating it, though I do not think I have mischaracterised it. In presenting it thus, however, one can begin to see that there might be very profound links between it and the context, the historical context, in which it was presented. This was just a decade ago, and although that seems an eternity in terms of the internet, it is the internet that I think is key to that context. And it is in light of that context that the credit one should give to O’Neill’s views lies. It seems to me that O’Neill was putting forth a view about the relationship between our bodies, our location in space, and the trust that was fostered (or not) by the use of contemporary technologies of communication, most especially internet-related ones. Her theory of the nature of trust (assuming for the moment that one can call it a theory) was created against the backdrop of the problems of trust and communication highlighted by the internet. With the latter, the human body seemed to be visibly absent and, since trust was problematic on the internet, by dint of that the body must be the seat of trust in ‘normal’ (non-internet) settings of action. Hence O’Neill’s theory.

As it happens, O’Neill did not refer very much to the internet in her lectures. The important point I want to make is that, to understand O’Neill, one does not have to accept the idea that the presence of the body is, in any universal sense, always essential to trust: one simply has to accept that the absence of the body in acts of communication is a problem in the context of contemporary society, in the internet society. If one places her argument in context, one sees that that is in fact what she is writing about. It is, as it were, her starting point. Something about the acts of communication we undertake on the internet makes the location of the body, its presence or absence, salient. So, following Luhmann’s and Rorty’s view, what we have in O’Neill’s lectures is a historically situated claim.

Now, one could say that historicising her argument reduces the credit it should be given. That is not my intention, though this might not be clear at the moment. One of the reasons I chose her view to illustrate my case is that her argument was quite often presented at that time. It is in this sense exemplary. As it happens, the argument continues to be made. Be that as it may, what I have thus far sought to show is the topographical relationship between O’Neill’s ideas and their socio-technical context. But one also needs to consider the argument’s performativity. Once an argument has been raised, it can be assessed, considered, brought to bear; one also has to consider where the argument was deployed, and for whom. In my view, what O’Neill was doing in her lectures was getting the public to think about the role of philosophy, and suggesting that, despite appearances otherwise, philosophy can concern itself with everyday concerns, even ones to do with the body. Whether she succeeded in persuading the public of the relevance of philosophy I do not know, but what one can say is that she got the argument widespread attention, even if she was not its only advocate.

As Charles Ess and May Thorseth (Eds) discuss in Trust and Virtual Worlds (2011), the idea that it is the absence of the body that undermines trust came to be cultivated when new communications technologies enabled by the internet began to take off, in the nineteen-nineties; O’Neill’s Reith Lectures are illustrative of this ‘cultural moment’. In research since, as Ess and Thorseth show, this link between body and trust can be seen to have been exaggerated. O’Neill can now be seen to have put forth too strong a case. The purpose of placing arguments in context and exploring their performative consequences, however, should be to make it clear that one ought not to judge attempts to explore trust by a simple right-or-wrong metric. In historicising a point of view, we can also see what that point of view might help create, the dialogues it led to and the richer understandings that ensued. It seems to me that O’Neill (and others who put forward her perspective at that time) helped foster discussion, analysis, debate, and more nuanced understandings of the role of the body in social relations. The value of O’Neill, part of the success of her argument, is to be found in the fact that this topic was (and is still being) more thoroughly examined than it might otherwise have been.

To locate the discussion of trust, computing and society in time, in the contemporary moment, and to present and consider those arguments in terms of what they seek to attain is of course a big enterprise. There are many such arguments, and there are various goals behind them. Their topography is diverse, their performativities also. Some come from quite particular specialist areas, such as the computer science domain known as Human Computer Interaction (HCI), which has been looking at how to design trust into systems for many years. Here the criteria for success have to do with the practical use of designs, and less to do with any philosophical aspiration to define trust in a universal sense. Other arguments have their provenance in, for example, sociology, where the topic of trust turns out to be specifically how the concept is used performatively in social action: it is not what sociologists think trust ought to be that is the topic, but how people in everyday life use the concept. In addition to the sociological and HCI perspectives, there are also philosophical points of view, where the concern is to address the topic as a species of concept, as illustrative of the stuff of philosophical inquiries; the methods and means of argument here are different from those found in, say, sociology, just as they are from those found in HCI. There are also arguments from the domain of technology itself (if not from HCI), and by that I mean from the point of view of those who engineer the systems that constitute the internet as we know it and as it is coming to be: this is the view, broadly speaking, of computer science. From this perspective – admittedly a broad camp – what is prominent is the issue of distinguishing between systems that are trustable in engineering terms and systems whose use raises questions about the trustability (or otherwise) of users. And then we have arguments that are more in the public domain, of the type listed in the first paragraph. These are ones that are helping constitute the narrative of our age, what society thinks it is about and what it needs to focus on.

These diverse arguments cannot be added up and a sum made. As should be clear, each needs to be understood as part of the mise-en-scène of contemporary life, and each needs to be judged in terms of its own goals. Key, above all, is to see how they variously help foster a dialogue and a sense of perspective on the large and sometimes worrisome topic that is trust, technology and society. Maybe that is the answer to my question, the question that led to this blog: why are there so many dialogues about trust, computing and society?

 

Selected Bibliography

Cofta, P. The Trustworthy and Trusted Web, Now Publishers (2011).

Davidow, W. Overconnected, Headline Publishing (2011).

Ess, C. & Thorseth, M. (Eds) Trust and Virtual Worlds, Peter Lang (2011).

Gergen, K. Relational Being, OUP (2009).

Hollis, M. Trust within Reason, CUP (1998).

Lessig, L. Remix, Penguin (2008).

Luhmann, N. Familiarity and Trust, in Gambetta, D. (Ed) Trust, Blackwell (1988), pp. 94-107.

Masum, H. & Tovey, M. (Eds) The Reputation Society, MIT Press (2011).

Misztal, B. Trust in Modern Societies, Polity Press (1996).

Möllering, G. Trust: Reason, Routine, Reflexivity, Elsevier (2006).

Naughton, J. From Gutenberg to Zuckerberg, Quercus (2012).

O’Neill, O. The Reith Lectures, BBC (2002).

Pariser, E. The Filter Bubble, Viking (2011).

Rorty, R. Philosophy and the Mirror of Nature, Princeton (1979).

Rushkoff, D. Program or be Programmed, Soft Skull Press (2010).

Schneier, B. Liars and Outliers: Enabling the Trust That Society Needs to Thrive, John Wiley (2012).