
Enchantment with Computer Reason

4 Aug

Today, when computer systems are so ubiquitous and therefore mundane, so immensely powerful and yet taken for granted, how do programmers motivate themselves? How do they get out of bed in the mornings and say, “Yep! I can’t wait to get on the keyboard!”?

There was a time when this motivation seemed easy to explain. Think of Sherry Turkle’s book, The Second Self. There she described how exciting it was to get a machine to ‘act’ in accord with one’s own instructions. Getting one’s thoughts onto a screen, and having those thoughts make the machine function as one’s second self, was magical – something that really motivated you. But her book was written when the machines on the desk that served as one’s second self were called microcomputers, not personal computers. That gives an idea of how long ago that excitement was. So today – what enchants coders?

I do think it is enchantment that gets the coder out of bed, but I think it is of a quite different kind from that which Turkle described. Indeed, it is almost the reverse, one might say. In my view, many coders today find their enchantment in Machine Learning. They are enchanted because machine learning makes computers act in ways that they, the coders, cannot understand. It is not their own reasoning writ large on the performance of the machine that excites them or provokes a sense of wonder; it is, on the contrary, the way the machine works despite them.

The aspect of computer programming I am thinking of is a part of machine learning that is sometimes called Deep Learning. This belongs to a broader family of methods based on the notion that programmes can themselves, as it were, ‘learn’ how to represent data correctly and thus act on that data. In the approach I am thinking of, no human is required to label data as part of some training set. Rather, the machine – or rather the application – somehow ‘uncovers’ categories and features in the data (about the world, say) and then acts accordingly.

What comes to mind, particularly, are computer vision systems. Certain programmes are able to identify (to ‘see’, as it were) objects not merely as a function of ‘unsupervised learning’ – a technique whereby programmes come to recognise objects without the aid of a human expert. Such techniques still presuppose that what the system finds accords with what the human programmer can see too; the machine in this sense is only copying what the human can do, though doing so autonomously. In contrast, these new systems are identifying objects – patterns, shapes, phenomena in the visual field – that no human could see. They are, if you like, doing something beyond what the human can do.
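To give a loose, illustrative sense of what ‘unsupervised learning’ means in practice – this is a minimal sketch only, not any system discussed here, and the data and function names are invented – here is a tiny clustering routine that uncovers two groups in unlabelled numbers without ever being told what the categories are:

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: group points into k clusters, no labels given."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)          # start from k random points
    for _ in range(iters):
        # Assign each point to its nearest current centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of the points assigned to it.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two unlabelled 'blobs' of readings; the code uncovers the grouping itself.
data = [1.0, 1.2, 0.9, 1.1, 8.0, 8.3, 7.9, 8.1]
print(kmeans_1d(data))  # two centres, one near each blob
```

The point of the sketch is only that the categories emerge from the data rather than from a human labeller – which is precisely what the systems described above are then said to go beyond.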

As it happens, and in many instances, various advanced computer vision applications have been doing this for some time – though without the fanfare that has erupted recently.

Good examples of what such programmes can do can be found in the work of, for example, Graham Budgett, an artist at the University of California, Santa Barbara. The images he produces – his art, if you like – are to be seen through a browser. These images keep iterating and changing as you look. They do so as a function of the algorithms that make the images you see a transitory output. That is to say, these algorithms constantly reinterpret the objects, the shapes, the forms and the colours that Budgett provides for them in the first place. The algorithms present these as the first thing one sees, but then they start interpreting and reinterpreting those shapes, colours and forms. In each cycle of interpretation, the code starts with the same initial set of objects (whatever they might be), and the processing and interpretation of these produces new forms every time the code (or the application) is run. The code is probabilistic, not deterministic, and so comes up with different interpretations on each execution.
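As an illustration of what ‘probabilistic, not deterministic’ means here – a hypothetical sketch only, in no way Budgett’s actual code; the shape names and transformations are invented – consider a loop that starts every cycle from the same fixed objects yet produces different forms on each run:

```python
import random

SHAPES = ["circle", "square", "triangle"]       # the fixed initial objects
TRANSFORMS = ["rotate", "recolour", "scale", "mirror"]

def reinterpret(form):
    """One probabilistic 'interpretation': wrap the form in a random transform."""
    return f"{random.choice(TRANSFORMS)}({form})"

def one_cycle(shapes, depth=3):
    """Each cycle begins from the same initial set but diverges by chance."""
    forms = list(shapes)
    for _ in range(depth):                      # repeated reinterpretation
        forms = [reinterpret(f) for f in forms]
    return forms

print(one_cycle(SHAPES))   # e.g. ['scale(mirror(rotate(circle)))', ...]
```

Because no random seed is fixed, two executions over the identical starting set will almost never yield the same output – the ‘same beginning, new forms each run’ behaviour described above.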

In a sense one might say that the art here – the painting, if you prefer, though no paintbrushes are involved, only a keyboard and mouse – is being done by code. What the artist, in this case Budgett, does is select the machine learning algorithms as if they were paints for the palette. The ‘art’ comes to be in how the code interacts with its own output; thus Budgett has created art that performs without his controlling hand.

Though his examples are only of pictorial art, in important respects the pictures show something quite radical. The applications producing them are not articulating human knowledge – knowledge about shapes and objects in the world. Rather, they are creating, through the application’s interpretation, new knowledge, new forms and shapes. These are the dynamic output of algorithms. In these respects, the Turing Test has been passed in a radically impressive way, since computing is not so much mimicking human intelligence as doing something people cannot do – making things with a new kind of intelligence.

This is significant. If this is the enchantment that coders are finding today, then it is fundamentally different from the kind described by Turkle in The Second Self. If the delight she described was in getting a machine to act according to a coder’s own reasons, now the delight that coders feel is in getting machines to act in terms of reasons that the machine produces. The enchantment is no longer in the self, in how one gets a machine to act as a mirror of one’s own thoughts; it is in how an application can reason autonomously. It is as if coders want the applications they code to do something more than they can imagine themselves.

Now for many coders this seems to be an enchanting moment. Here at last is a glimpse of what they have been seeking since the term ‘AI’ was coined at the Dartmouth Conference in 1956 and made common currency.

The trick, though, is that the applications currently being sought are ones that seem to have reasons that people don’t have, that people couldn’t have – that are more than human in their intelligence. And here it is not simply that computers can process at vast speed, that they are better calculators; on the contrary, the coders think that the applications they are producing reason in ways that are beyond human reason.

This is somehow beyond what Turing imagined. Given the deity-like status this mathematician has in the pantheon of computer science, this is presumably enormously exciting to the coder. No wonder they are so keen to get out of bed. It’s not what they do that excites them; it’s what their applications will do.

All the dialogues about trust, computing and society: why?

18 Apr

Any glance at the contemporary intellectual landscape makes it clear that trust and computing is a topic of considerable interest. And by this I do not mean whether computers can be relied upon to do their job – that they simply have to do as they are told. If only it were as simple as that – an interface. As Douglas Rushkoff argues in his brief and provocative book Program or Be Programmed (2010), when people rely on computers in their everyday life it is not like Miss Daisy trusting her chauffeur to drive her to the right destination. It is not what computers are told that is the issue; it is what computers tell us, the humans. With computing, so Rushkoff would have us believe, there is no knowing what the destination is: it is unclear what it is that the humans are trusting in, or for. John Naughton, in his From Gutenberg to Zuckerberg (2012), asks similarly large questions, and here too the topic of the ‘interface’ seems inconsequential: for him, we need to ask whether we can trust computing (and the internet in particular) to bring us dystopia or heaven – though the contrast he presents is not entirely without irony: it is the duplicitous appeal of Huxley’s Brave New World or the bleakness of Orwell’s Nineteen Eighty-Four. Meanwhile, Pariser complains in The Filter Bubble (2011) that we cannot trust search engines any more; today, in the age of the Cloud and massive aggregation systems, search engine providers can hide things away from us in ways that we could not guess. Doing so is at once sinister and capitalistic, Pariser argues: sinister since it is disempowering, capitalistic since it places the market above the public good. Search engines take you to what companies want to sell, not to what you need to know.
One-time capitalist William Davidow is likewise agitated, though it is not salesmanship that worries him: we are now Overconnected (2011), as he argues in his book of that name; we cannot trust ourselves to reason properly. This is merely a list of well-known texts in the public domain; there are equally many in the more scholarly worlds of philosophy, sociology and, of course, computer science. In the first of these the journal literature is immense, whether it be Holton’s ‘Deciding to Trust, Coming to Believe’ (1994) or Baier’s Moral Prejudices (1994); in sociology there are at least as many, including Misztal (1996), Möllering (2006) and Gambetta’s edited collection of 1988 (which includes some philosophers, such as Williams). In computer science and human–computer interaction (HCI) there are as many again, with Piotr Cofta’s The Trustworthy and Trusted Web of 2011 being one of the most recent. The sheer volume and scale of this discourse leads one to doubt whether any single, unified view will arise out of it, even though many of the authors in question want to offer one: Bruce Schneier, though not an academic, comes to mind with his highly readable Liars and Outliers: Enabling the Trust That Society Needs to Thrive (2012).

Navigating the domain

So what is one to make of all this? It seems to me that we have to stop rushing to answer what trust is – even if in the end we might come to seek such an answer. Rather, at the moment, and given the wealth of views currently being presented on the topic, we need to ask something about trust that is, as it were, prior to the question of what it is. We need to ask: why all the fuss about trust now? Having done this, we can inquire into how these current concerns are affecting what is treated as trust, how that trust is ‘theorised’, and the ways in which evidence is brought to bear on discussions about that theorised object.

The sociologist Luhmann noted in his essay ‘Familiarity and Trust’ (1988) that societies seem to make trust a topic of concern at certain historical moments – they need to arrange themselves so as to make trust a possibility and a worry. This interest does not seem to have much to do with trust itself – in its a-historic or transcendental conceptual sense (even if Luhmann had an interest in that himself). It has to do with a particular confluence of concerns that leads societies to reflect on certain things at particular times. This argument is in accord with Richard Rorty’s view, in his Philosophy and the Mirror of Nature (1979), about how to understand ideas and concepts. On this view, appreciating debates about some concern requires one to see them as historical (even contemporary debates). Doing so entails dissecting the links between views of other apparently disconnected concerns, creating maps of the historical topography of ideas, and investigating the performative goal or goals that lay behind the development and deployment of the ideas in question. It requires, in sum, understanding the ‘when’ of an argument and the ‘so what?’ of it – what it led to.

Let me illustrate what is meant by this in relation to arguments about trust and computing. A decade ago, the philosopher Onora O’Neill offered an account of trust in the Reith Lectures (2002). She wanted to characterise some of the essential, true elements of trust and its basis in action. Hers purported to be an a-historic view, a concern simply with the conceptual fabric of the term. She claimed that trust between people is a function of being near one another. By that she did not mean near in a moral or social sense; she meant in terms of the body. This might seem a strange argument, but bear with me. It comes down to the idea that people trust each other because they can touch each other; because they can see each other, their every movement; because people can, say, grasp another at will and be grasped back in turn: because they are all together, in one place. Trust would seem to turn on cuddles. This is of course to paraphrase O’Neill. But, given this, a problem occurs when distances are introduced into social relations such that people can no longer cuddle. Trust is weakened, if not dissolved. Mechanisms need to be developed, O’Neill argued, that make ties between bodies separated by space possible. In her lectures, she explored various answers to the question of how such trust could be made.

Why did O’Neill come up with this view? It seems quite stark – almost startling, certainly, to one who has not confronted it before. If truth be told, I have simplified her case and used a colourful way of illustrating it, though I do not think I have mischaracterised it. In presenting it thus, however, one can begin to see that there might be very profound links between it and the context – the historical context – in which it was presented. This was just a decade ago, and although that seems an eternity in terms of the internet, it is the internet that I think is key to that context. And it is in light of that context that the credit one should give to O’Neill’s views lies. It seems to me that O’Neill was putting forth a view about the relationship between our bodies, our location in space, and the trust that was fostered (or not) by the use of contemporary technologies of communication, most especially internet-related ones. Her theory of the nature of trust (assuming for the moment that one can call it a theory) was created against the backdrop of the problems of trust and communication highlighted by the internet. With the latter, the human body seemed to be visibly absent, and, since trust was problematic on the internet, by dint of that the body must be the seat of trust in ‘normal’ (non-internet) settings of action. Hence O’Neill’s theory.

As it happens, O’Neill did not refer very much to the internet in her lectures. The important point I want to make is that, to understand O’Neill, one does not have to accept the idea that the presence of the body is, in any universal sense, always essential to trust: one simply has to accept that the absence of the body in acts of communication is a problem in the context of contemporary society, in the internet society. If one places her argument in context, one sees that that is in fact what she is writing about. It is, as it were, her starting point. Something about the acts of communication we undertake on the internet makes the location of the body – its presence or absence – salient. So, following Luhmann’s and Rorty’s views, what we have in O’Neill’s lectures is a historically situated claim. Now, one could say that historicising her argument reduces the credit it should be given. That is not my intention – though this might not be clear at the moment. One of the reasons why I chose her view to illustrate my case is that her argument was quite often presented at that time. It is in this sense exemplary. As it happens, the argument has continued to be made. Be that as it may, what I have thus far sought to show is the topographical relationship between O’Neill’s ideas and their socio-technical context. But one also needs to consider their performativity. Once an argument has been raised, it can be assessed, considered, brought to bear; one has to consider also where the argument was deployed, and for whom. In my view, what O’Neill was doing in her lectures was getting the public to think about the role of philosophy, and suggesting that, despite appearances otherwise, philosophy can concern itself with everyday concerns, even ones to do with the body.
Whether she succeeded in persuading the public of the relevance of philosophy I do not know, but what one can say is that she got the argument widespread attention, even if she was not its only advocate. As Charles Ess and May Thorseth (Eds) discuss in Trust and Virtual Worlds (2011), the idea that it is the absence of the body that undermines trust came to be cultivated when the new communications technologies enabled by the internet began to take off, in the nineteen-nineties; O’Neill’s Reith Lectures are illustrative of this ‘cultural moment’. In research since, as Ess and Thorseth show, this link between body and trust can be seen to have been exaggerated. O’Neill can now be seen to have been putting forth too strong a case. The purpose of placing arguments in context and exploring their performative consequences, however, should be to make it clear that one ought not to judge attempts to explore trust by a simple right-or-wrong metric. In historicising a point of view, we can also see what that point of view might help create – the dialogues it led to and the richer understandings that ensued. It seems to me that O’Neill (and others who put forward her perspective at that time) helped foster discussion, analysis, debate and more nuanced understandings of the role of the body in social relations. The value of O’Neill – part of the success of her argument – is to be found in the fact that this topic was (and is still being) more thoroughly examined than it might otherwise have been.

To locate the discussion of trust, computing and society in time, in the contemporary moment, and to present and consider those arguments in terms of what they seek to attain, is of course a big enterprise. There are many such arguments, and there are various goals behind them. Their topography is diverse, their performativities also. Some come from quite particular specialist areas, such as the computer science domain known as Human–Computer Interaction (HCI), which has been looking at how to design trust into systems for many years. Here, criteria for success have to do with the practical use of designs, and less to do with any philosophical aspiration to define trust in a universal sense. Other arguments have their provenance in, for example, sociology, and here the topic of trust turns out to be specifically how the concept is used performatively in social action: it is not what sociologists think trust ought to be that is the topic, but how people in everyday life use the concept. In addition to the sociological and the HCI perspectives, there are also philosophical points of view, where the concern is to address the topic as a species of concept, as illustrative of the stuff of philosophical inquiry. Methods and means of argument here are different from those found in, say, sociology, just as they are from those found in HCI. There are also arguments from the domain of technology itself (if not from HCI), and by that I mean from the point of view of those who engineer the systems that constitute the internet as we know it and as it is coming to be: this is the view, broadly speaking, of computer science. From this perspective – admittedly a broad camp – issues to do with distinguishing between systems that are trustable in engineering terms and systems whose use raises questions about the trustability (or otherwise) of users are prominent. And then we have the arguments that are more in the public domain, of the type listed in the first paragraph.
These are ones that are helping constitute the narrative of our age, what society thinks it is about and what it needs to focus on.

These diverse arguments cannot be added up and a sum made. As should be clear, each needs to be understood as part of the mise-en-scène of contemporary life, and each needs to be judged in terms of its own goals. Key, above all, is to see how they variously help foster dialogue and a sense of perspective on the large and sometimes worrisome topic that is trust, technology and society. Maybe that is the answer to my question, the question that led to this blog: why are there so many dialogues about trust, computing and society?


Selected Bibliography

Cofta, P. The Trustworthy and Trusted Web, Now Publishers (2011).

Davidow, W. Overconnected, Headline Publishing (2011).

Ess, C. & Thorseth, M. (Eds) Trust and Virtual Worlds, Peter Lang (2011).

Gergen, K. Relational Being, OUP (2009).

Hollis, M. Trust within Reason, CUP (1998).

Lessig, L. Remix, Penguin (2008).

Luhmann, N. ‘Familiarity and Trust’, in Gambetta, D. (Ed) Trust, Blackwell (1988), pp. 94–107.

Masum, H. & Tovey, M. (Eds) The Reputation Society, MIT Press (2011).

Misztal, B. Trust in Modern Societies, Polity Press (1996).

Möllering, G. Trust: Reason, Routine, Reflexivity, Elsevier (2006).

Naughton, J. From Gutenberg to Zuckerberg, Quercus (2012).

O’Neill, O. The Reith Lectures, BBC (2002).

Pariser, E. The Filter Bubble, Viking (2011).

Rorty, R. Philosophy and the Mirror of Nature, Princeton (1979).

Rushkoff, D. Program or Be Programmed, Soft Skull Press (2010).

Schneier, B. Liars and Outliers: Enabling the Trust That Society Needs to Thrive, John Wiley (2012).

Hello Grumps!

23 Jan

I wondered whether anyone might want to idle away some time reading the scribbles that don’t get into the ponderous journals and books of a professional HCI researcher in corporate life?


Well, what about the scribblings of one who finds Baudrillard funny, contemporary analytic philosophy sterile, and HCI a way of exploring what it means to be ‘human’ – even a way of thinking anew about that without recourse to the likes of Haraway – more to the likes of, let us say, oh, John Naughton?