
Sociology, Anthropology and the Social Consequences of AI

18 Jun

{This is a version of a talk I gave at the Lord Kelvin/Adam Smith Symposium Series at the University of Glasgow in May 2021. See: A Manifesto for Inclusive Digital Futures}

Sociological views of computer technology are a commonplace today. These see technological development in terms of social factors – how the structure of society, its class, gender or economic relations shape how technology is developed. And these relations are then reinforced by the technologies that result.

One can understand why such views appeal – after all, the social facts of computer technologies can be quite startling – so many people use Google, the profits Facebook makes can seem enormous, and the things one can buy on Alibaba are almost infinite. All this says something about our society, our culture, what we value. Technology seems to reflect these things, however good or bad they are, even as it helps make them anew. But sociological views tend to neglect how technological development itself occurs. Development often has more to do with particular technical problems and their solutions – there are social matters in these, but not as sociologists imagine.

Take the case of AI. This technology is often said to have sociological consequences. Indeed, one can hardly doubt this. But how is AI developed? What are the ‘requirements’ AI engineers seek to satisfy? While it might be true that AI struggles with, say, social diversity when it is used, understanding why AI has the form it does might require investigation into how computer engineers actually build it. Their concerns might not be ‘sociological’. But what are they? It is to that task I turn now.

AI is found in lots of computing technologies, but the ones I am interested in are the latest incarnation of what are called interactive ones: not those inside appliances and machines, beyond our reach, but those we ‘use’ in interaction. These computers are exemplified by desktops, laptops and mobiles by the million. They are built by engineers who have two things in mind when they ‘innovate’: they define what these interactive computers will do partly by defining what the ‘user’ will want them to do. This means that these engineers are in effect defining both what the computer will do and who the user will be. This is how they innovate. It sounds like they are doing a kind of sociology, but it doesn’t entail the kind of sociological topics I mention above. The ‘user’ these engineers have in mind is a ‘social actor’, doing things with other people, but that user, and the activities they are engaged in, are understood in particular ways which lead one away from sociological topics.

For example, aspects of the human body are used to help define the ways commands are delivered to an interactive computer, AI ones included. This is part of who the user ‘is’. A keyboard is the size it is not for reasons to do with, say, class, but for reasons to do with the size of the human hand. The design of ‘on/off’ switches and their virtual equivalents on the screen is governed by matters to do with body mechanics: there is a size-over-distance ratio and a science to understanding this ratio (viz, Fitts’s law) which means these sizes can be defined systematically. These ‘basics’ about the user are then built on by reference to larger or more encompassing features of what a user might be. A desktop computer is designed to enable certain kinds of work: office work, doings with documents of various kinds. Indeed, when Xerox invented what we now think of as the virtual desktop on the Star system, the ‘user’ the engineers had in mind was just that, an office worker. This had all sorts of consequences for what the Xerox engineers came up with. Much of it is still with us today. The things that the desktop computer offered the user were ‘documents’, as a case in point. These were represented with paper-like affordances – pages, borders, titles. Tools were developed that allowed users to format and adjust layout, to cut and paste, in ways analogous to working with paper (such analogues had their consequences, of course: see my book, Myth of the Paperless Office). All this led to what came to be called the WIMP interface: Windows, Icons (for files and such), a Mouse and a Pointer. WIMP is still the foundation of most interactive systems in use.
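To give a flavour of how systematic this ‘science of the body’ is: Fitts’s law predicts how long it takes to hit a target from its distance and its width, which is why button sizes can be derived rather than guessed at. A minimal sketch in Python, where the constants a and b are illustrative assumptions of mine (in practice they are fitted empirically for each device and user population):

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted time (in seconds) to hit a target of a given width
    at a given distance, per Fitts's law: MT = a + b * log2(D/W + 1).
    The constants a and b here are illustrative, not measured values."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target takes longer to hit than a large, near one,
# which is why on-screen buttons have minimum sizes.
slow = fitts_movement_time(distance=400, width=10)
fast = fitts_movement_time(distance=100, width=40)
```

The point of the sketch is only that target sizing is a calculable trade-off, not a matter of taste: halving a button’s width raises the predicted acquisition time in a lawful way.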

Whether this vision of the user-as-office-worker was fully worked out when Xerox devised it some fifty years ago, or whether it was somewhat simplified, the point is that engineers had a user in mind when they devised their technology, when they innovated; the two go hand in hand. Over the decades since Xerox created the desktop computer, the role of interactive computing has, of course, changed. And so too has the ‘user’ as imagined when innovation occurs.

Take mobile phones: at first these were very simple computer devices engineered to send sound shapes back and forth: the sound shapes of human words. Their design reflected a compromise between the technological prerequisites for this – microphones, speakers and codecs managed by an interactive computer – and the size of the human head: the distance between ears and mouth, analogous to the button-size problem. Gradually, these features evolved to allow more information-centric activities. At first these echoed doings with desktops – between users and documents. This was particularly so on precursors to the web and its protocols, such as WAP. Indeed, the difficulties of getting analogues to WIMP interfaces, designed for document interaction, were a major concern at that time. But computer engineers soon came to realise that the users of mobiles were different, or rather that what ‘users’ wanted to do was different. Users did not need to be provided with a mouse so as to cut and paste. What they wanted mobiles to do was interact with each other (see my paper from that time: People versus Information). Surprisingly, this was not what engineers had first thought when mobiles were being devised. In the minds of computer engineers, voice messages were information exchanges, not social acts. But when engineers began to recognise the social desires of ‘mobile users’, mobile phones were ‘reimagined’ – Nokia led the charge with its SMS functionality, and thereafter phones became devices for being social. Never mind that Apple came late to the party and ruined it for Nokia; the point is that mobiles have evolved around the pre-eminence of social interaction, with the mobile applications TikTok and Instagram exemplifying the kind of sociality in question and hence, also, the ways computers and interaction with them developed in this domain (see my book, Texture).

The most recently emerging technology is AI. This technology is also anchored in a joint understanding of what it might do shaped by imagining what a user of AI might do. Again, there is a social aspect to innovation as there always has been, but this has particular aspects to do with the technology and what, in turn, is thought a user of it might want to do. 

To begin with, many AI applications build on things devised before AI. Take search engines. Originally these had no AI in them, but with time and scale of use, options for using AI-type techniques have emerged. This also entailed determining what AI ‘users’ would want, allowing AI users to ‘emerge’ too, if you will. Google’s PageRank, an instance of AI, needs large numbers of users to develop a database of heuristics for which search target is most likely to be relevant to a given search query. It is this that suggests (via AI processing) which target is to be ‘triaged’ to the top of a SERP (Search Engine Results Page). AI search engineers assume that ‘users’ will want the most likely answer, not the right answer. Or rather, they assume that the user strikes a compromise between speed and accuracy. They would prefer a quick answer that might be wrong to a right answer that might take a long time to find. This is what engineers at Google are assuming when they enhance that search engine; this is the user they have in mind. AI-enabled search combines a notion of what a user wants (it turns out to be speediness) and how the technology can deliver that (through quickly delivered good guesses). The result is a mutual shaping: user and technology making each other.
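For readers curious about the mechanics being alluded to: the published core of PageRank is a link-analysis calculation that can be sketched as a simple power iteration, where a page’s rank is fed by the ranks of the pages linking to it. This is a minimal illustration of that published idea, not Google’s production system; the function name and parameter values are my own:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank over a dict mapping each
    page to the list of pages it links to. Each round, every page
    shares its current rank among the pages it links to; the damping
    factor models a 'random surfer' occasionally jumping anywhere."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:  # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Pages 'a' and 'b' both link to 'c', so 'c' ends up ranked highest.
ranks = pagerank({'a': ['c'], 'b': ['c'], 'c': ['a']})
```

Even this toy version shows the ‘good guess’ character of the technique: it orders pages by a structural estimate of likely relevance, not by checking whether any answer is right.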

Similarly with social behaviours: the messages people post and send each other are also content at scale. Facebook’s EdgeRank only works because of volumes – the ‘postings’ and ‘sharings’ of millions. It is these volumes that let Facebook’s AI produce outputs: recommendations about new connections, new postings, new ‘friends’. One might say that users of Facebook do not make friends as they used to, before the invention of Facebook; they now make friends through Facebook. ‘Friending’ is now part of everyday parlance, alluding to a distinction between old and new ways of friend making. Again, though, the point is that the technology and the user are mutually shaped.

These are examples of AI emerging on the back of data found on the web, instances at scale of human behaviours. AI can take large volumes of data from elsewhere, of course, and these do not need to be human. A lot of work in the medical field, for example in gene analysis, is of this kind. However, though the basic process of innovation in AI is like the one that has always applied in interactive computing, some of its consequences are worth remarking on, as they highlight differences which are consequential.

To begin with, AI, like all interactive computing, ends up encouraging the tasks it supports. Just as Xerox made documenting even more important to organisations, so AI encourages certain activities over others – take search again. Who now uses search engines to explore the web or to ‘surf’? People are less likely to do so, as they are tempted by the design and efficacy of AI tools to do other things, namely to shop, purchase and ‘consume’. No wonder that the web has become a vast emporium for purchasing. Consuming behaviours are constructed into the engineering of AI search tools, and these are the primary entry points to the web. Their shape shapes what users end up doing – finding, buying and consuming (whether these consuming behaviours are quite as expected, or whether users are directed in guaranteed ways, is perhaps another question: see Mackenzie, ‘Cookies, Pixels and Fingerprints’).

A second consequence follows from this. This kind of AI – web-enabled and web-focused – tends to focus on the behaviours rather than the intentions of people. What people do on the web matters more than what they think when they are on the web. (For a review see ‘Data Behaviourism’ by Antoinette Rouvroy.) Take search again: because people click on search results, AI tools infer that people are in need of answers, but they might not be. Their intentions might be such that they don’t mind wrong answers or even no answers at all. As mentioned above, they might want to be ‘surfing’ – though this might be too general a word for their intentions. A user might want to imagine travelling, for example, and might not want to stop their travels with an answer. They might, by way of contrast, want to waste time by idling on the web, and again not want tools that rush them to ‘complete’. Idling has no end point, after all (though it might be time limited). These intentions might or might not be best expressed by the term ‘surfing’. The point is that AI tools in search cannot see such motives – indeed, AI tools make it increasingly hard for AI itself to see any motives beyond the behaviours they measure. The better the AI tools are at enabling some doings, the less able those same tools are to see the intentions that might have led people to do other things. One can put this another way: the better AI becomes for some task, the less understanding that same AI might have of other tasks and hence of other intentions. This narrowing feeds itself: the data the AI uses is produced by AI tools that shape that data, and that increases the likelihood that users will act in certain (encouraged) ways. In short, AI tools get better by knowing less, and as they do so, users do less too.

As we move forward with a hope of imagining new areas and domains for technology, including AI, and if we want these new domains to let us do new things, then it seems to me that the problem of understanding in AI becomes greater. By understanding I mean understanding who users might be: if that understanding is limited, then what AI comes to afford will be limited too.

Take the future of home life. Here AI researchers are saying that just as AI has automated many aspects of the workplace, so too will it automate aspects of home life (see, for example, Russell). At first glance this seems appealing. After all, who would not want an automated washing machine – if that included a device that picked up the dirty laundry, loaded the machine and put the cleaned clothes back in the drawers? The problem here is that there might be some things people do want automated and others they don’t. Knowing the difference might be a question of knowing the motives in question. Consider a different labour at home: cooking. If one could devise an automated washing ‘system’, could one devise an automated cooking system? One could, at least in theory. Yet consider this: when people set up home as a couple, they will often choose to cook for each other. They do so as an expression of their love for each other. Cooking can be, if you like, love’s work. When it is, one would imagine these same people would not want AI to automate this work. Doing so would reduce the tools through which they ‘make love’, so to say. (For more discussion of this and the related concept of the smart home, see my book of the same name. See also my Connected Home.)

I am not saying that people in the future might not end up preferring machines making both their food and doing their laundry; I am saying that the way AI is developed, the manner of innovation one finds inside AI, results in judgements about what people want to do becoming increasingly important and yet also increasingly difficult to achieve: the understanding that is required can get lost in the data that AI tools are themselves creating. Many AI researchers seem completely unaware of this – Russell, co-author of one of the primary textbooks on AI at the current time, among them.

Going forward there is a need to clearly delineate and characterise what people do through understanding why they do it. It is a question of meaning, not action. And this brings me to sociology once again. Sociology has many merits, but its main concern is with the ways individuals find their actions governed – governed by their gender, their class, their ethnicity, by the society in which they live. But what we have just seen is that what things mean to people is crucial for AI going forward. A different discipline has stood up and claimed that space: anthropology. As the great anthropologist Clifford Geertz put it, this discipline is in search of the meaning people give to their lives. If that is the case, then the moral of this blog is that we need more anthropology and not more sociology. Perhaps more significantly, we need more anthropology and not more AI. If we stick with AI, we will make our lives anthropologically less. The future is for us to make, but not with AI.

Searching for the inner lives of men

26 May

I have been thinking of writing a book on the inner lives of men. By trade I am a sociologist, and so have studied many places; but I haven’t ever gone ‘in’ to the ‘inner life’ of men. There is such a thing as auto-ethnography, but this doesn’t seem to actually entail going inside either sex; it simply makes the doings of an ethnographer its subject, when those doings are anodyne and confined to some ‘social practice’ – not the felt life, the life experienced within. The term ‘inner’ points to the intimate, the private, the sensual as well as the imagined, but it might be for all that a place one ought to avoid, populating one’s own but not voyaging into another’s. But I do want to undertake such a voyage – however it might be done, even if the modes of anthropology seem wrong.

A separate trouble is that I don’t think many menfolk think that their inner lives are interesting. That always seems a bit of a risky label. What I find interesting most often, as a case in point, makes men laugh. Not because it is funny so much as because it always seems unexpected to them.

Years ago, on one of my first dates, the girl I was out with asked: ‘A penny for your thoughts?’ I was a bit startled. My mother would ask the same. The trouble was that at that moment in time I wasn’t thinking of anything. I was caught unawares, and whereas with my mother I would come up with some judicious response – ‘something about homework’ – with this girl I drew a blank. So I said the truth: ‘Nothing’. The girlfriend replied – ‘Yeah? You would have to be really clever to not be thinking about anything’. I was taken aback. I regularly seemed to think about nothing. I certainly didn’t think of myself as really clever. Nor did my school for that matter. But then I thought – maybe my girlfriend thinks I have ideas in my head. That’s pretty nifty if you don’t have anything at all going on in there. But the cheery prospect of that was soon forgotten by a greater problem: she was now annoyed. She was convinced I was lying. I was hiding what I was really thinking. I tried a smile and a jog of my head as if to shake off the misunderstanding, but she looked down. She didn’t take my hand when I sought it. In hindsight, of course, I could have begun worrying about the dilemma in the Smiths’ song – whether a girl wants you for your body or your mind. Clearly, like Morrissey, at that age I would have preferred the former. I wouldn’t be expected to ask, A penny for your thoughts? But as it was, I learnt that one has to make ideas for talk, even when one has none. One has to invent an inner life as a tool to touch the surface – the skin of the other sex.

That was then. What I know now is that while the inner life might matter in relationships, the inner lives of the male aren’t written about much in literature. Well, that’s not quite true. There are novels written about the vernacular of modern men – the echoing phrases, repetitions that are heard as a kind of charm; the ridiculous rhythms of expletives; the dance of eyes at common sights, the conversational lulls. But most of the books that deal with this – with bad language, the grunting, the dazed looks that intersperse attempts at conversation, all of which constitute men’s talk – are bad books. In my view, anyway. They don’t really hear, for example, the poetry in the swearing, the patter of it, the shared breath of complaints that make it up. The authors of these books seem to think this vernacular is proof that there is nothing going on inside men, or that if there is, it is a goings-on one doesn’t want to know about. Think of Vernon God Little. All it seems to say is that adolescent boys in the US are tongue-tied by their own offensive language and that, in turn, makes them cruel in their hearts. But that seems to me the argument of old ladies – well, at least the old ladies who used to torment me when, as a teenager, I swore in their earshot. I would be told off in public spaces and at private events, on family visits and at weddings. These ladies have obviously long since passed away; aunts, so to speak, of my mother’s generation. I am not sure whether the author of Vernon God Little, DBC Pierre, knew any of these ladies. But still, his book makes me think about them. To me they were like their English contemporary, the campaigner Mary Whitehouse – priggish, but without the delicious laugh (I mean Mrs Whitehouse’s, for those who ever heard her being interviewed. It was fabulous: the kind of laugh that seemed to be all naughtiness). But this is getting me off the point. There are too many poor books about men.
And yet there seem to be so many on the inner lives of women – and these books are often quite exhilarating. The kind of books that make you wonder at how deep and big the world can be – how many voyages can be made around someone’s thoughts. That’s what’s so odd about it.

Think about Pond by Claire-Louise Bennett. It’s an extraordinary travelogue of the inner recesses of a woman’s mind, recounting how the outer world intrudes on the inner and how the inner imposes itself on the outer in a way that creates a weird, spooky, neurotic mixed landscape: shadows from the recesses of the memory buckle the patterns of thought referring to the world at large, making it unclear what is real and what is imagined, what thoughts proceed from the outer senses and what from the inner eye. Sometimes the account in Pond is about stuffy dinner parties where the guests who come aren’t those whom the host had hoped would come, and, as a result, the narrator (and host) looks at those who are there with a view to those who are not: ‘The absentee at my party would have sat where I had planned her to and so I cannot look at that place now and see who is sat there without thinking about how the person who would have sat there would be swaying with the movement of people around her and so I can’t notice how the person who is sat there moves at all’. This is to paraphrase – I am not meaning to quote here, so much as trying to recount what one reads in Pond. But bear with me – I am trying to give its flavour. I recall other buckled sets of thoughts, having to do with when a boy walks past her in a country lane, another about past travails and how they echo in the narrator’s mind even as she talks to her landlady in the present. Pond is all such – a bouncing around of twisting thoughts in the moments of living where the living seems mocked by the imagined, the recalled, the predicted. Perhaps not everyone’s idea of the inner life and its glories, but it’s definitely an inner life.

And then think of Jenny Offill’s Dept. of Speculation. It’s not about every moment as seen with the inner eye, but the inner thoughts constitutive of selected moments. These selections are tied together through the structure and layout of chapters, with huge bits of living in-between ignored. It makes for a strange sense of reality. Though there is continuity in the narrative, it is odd, seemingly stitching together moments that would not naturally, in the order of living, sit side by side. When one starts the book, one imagines that these moments are so dissimilar that it is not the same consciousness whose moments are being recounted each time. You imagine that eventually – after the first fifty pages, say – you will discover whose moments these are – three, four persons, a whole gaggle maybe. But this never happens; you come to realise instead that these moments are one person’s alone. And you come to realise, too, that these are not descriptions of what is sensually felt so much as moments of a particular mind and its business: for these are descriptions of how a mind makes calculations of what has been done and what has been felt by its body. The moments are descriptions of how hurtings to the heart are measured by the calculations of the brain. These moments are those when the narrator’s consciousness comes to realise how, in some prior undocumented moment, the relationship she is entangled in, with the man she loves, is slowly, step by step, weave by weave, measure by measure, coming apart. You come to learn that the periods not described in the paragraphs are as important as the calculations recounted in the paragraphs, the reader having to weave their own vision of what happens in-between and thus come to understand what the calculations are about and why they are made. This makes Dept. of Speculation sound like hard work. It isn’t. It flows easily. I think it is a wonderful piece of writing.

And then there are the intellectual books – the bookish ones like The Sorrows of an American by Siri Hustvedt. This is an exploration of how her own mind assembles the world around her, taking as its sources the dialogues she has with her brothers and friends, the recounting in her mind of the tales her father would tell. Her book does not confine itself to this, though – to Cartesian moments when the locked-up inner soul reaches out to others through the dance of words. For at the same time as recounting the talk, the shifting in understanding and the movement in, for example, family relations that happens as a result, she also explores the ways in which consciousness is written about – from the most basic functionalism of cognitive science to the finessed inquiries of Wittgenstein. As she does so, she explains how her inner territory is made up of the fleeting traffic of family matters alongside the lingering insights of books read and arguments disputed in the public spaces of university life. Wittgenstein’s Philosophical Investigations echoes in her thoughts about her dad. The result is that one reads Hustvedt not to peer into her inner life, to see her soliloquy so to speak, but to see how her inner voice has vocabularies taken from the minds of those she has read as well as those she has talked to and lived with in a real, un-bookish life. Though the plot of the novel may relate to the narratives of family life, the real narrative, the actual narrative that matters in the book, is the world of her own making: a collage of the inner and the outer, made up of readings and listenings – from stuff outside brought in and there combined with read material to make something new, a shared scrapbook of wonderings and perceptions, inside and out.

What books do blokes have? What books tell their inner lives other than ones that focus on expletives? There have been one or two great books on this – recall the yearning of the young male and his inner desire for difference so beautifully evoked in Moby-Dick. But generally? And more recently? There’s Karl Ove Knausgård – but is that about an inner life? Or a mind that doesn’t distinguish – that just allows observations to transmogrify into words without an inner or an outer distinction? Does Knausgård keep anything private? Is that the distinction I am after? It used to be one that seemed especially interesting – all the more so with male authors. Take D.H. Lawrence. Though women often seem to be the focus of his novels, they are merely the instruments – in some cases quite literally – with which he explores his own ‘inner’ and how it discords with his outer. A woman’s gasp as she climaxes confirms his inner manliness, not the swelling of her clitoris, for example, and this inner manliness has something to do with his external manner – what he does – his fighting, his bonding with other men. But the topic of this distinction aside, his books are basically a world as seen from within his own skull. That world just happens to have women in it. But they are not the subject. He is. It makes for interesting and at times quite gripping narrative, but one doesn’t learn about other people; one only learns about him, D.H. Lawrence.

Even the high-brow and better-educated male authors seem to be preoccupied with their own skulls. Think of Anthony Powell’s A Dance to the Music of Time – it starts with a boys’ school (Eton) and ends with men’s ends – not with his own, but those of the other boys he first met at Eton. There are one or two exceptions in the plot – the lives of a handful of women are explored – but it seems to me only because those lives affected these boys. Really, those women are only stooges to the plot, moving it along so that the author can address what happens to men, inside men.

The many volumes of the Dance were finished long ago, of course. While the first was published in 1951, the last was in 1975. D.H. Lawrence had packed up shop long before that. Women in Love appeared in 1920. These books are about men who have long passed away, like the aunts who used to scold me for swearing. And today? Today the inner lives of men are hidden away in the landscapes of novels. Why is that? I often think it is because this is where they deserve to be. For theirs is the inner life of fools who just happen to be all of the same sex. Think of Paul Ewen’s Francis Plug: a novel about a man trying to write a novel but who has nothing interesting to put in it except his travels around book festivals. Here there is no inner life at all, but an attempt to make the public life a substitute for one. And for those who do try to write about the inner life of men, what is revealed is so appalling as to put one off. One thinks of St Aubyn’s Patrick Melrose novels. These are little more than laments for the damage that old-fashioned English public schools did to the boys who were sent to them, and the results of this in how those same boys came to be when they were parents themselves – viciously cruel, hurtful bigots with vocabulary.

In sum, if men and their inner lives are to be written about, it is to mock or parody them; and if that is not worth doing, then the inner lives in question are too appalling to investigate – unless of course you want to make money out of the horror, as does St Aubyn. I suppose being an aristocrat always entails money-making schemes to keep the old firm going.

So why would anyone want to write about the inner life of men? I wonder if part of the reason for this current lack of interest in the general literary world might be because people are just too sensitive to what might be in the inner recesses of menfolk. There are giggles when I propose the idea to mates in a pub and to my colleagues in the common rooms of the universities I work in. Is this a device on their part to help me avoid what they know would be the vacuum to be found if it were really done? Or is it a horror? One wonders, one wants to explore. I might end up giggling too.

Seeing Quickly: reasoning in the age of Trump

26 Nov

President Trump’s behaviour is often a lightning rod for contemporary issues. Take the altercation between him and the CNN reporter Jim Acosta. There is one video going the rounds on social media that shows Acosta pushing the White House intern who seeks to remove the roving microphone he is holding; he resists her efforts. This video suggests that Acosta is, indeed, bad-mannered and ‘horrible’, as Trump puts it; ‘Look at how he treats an intern!’ Democrats are saying that this video is ‘false’, based on cut segments replayed in sequence to exaggerate Acosta’s bodily actions. One can readily believe it. One only has to think of the reputation of the Alt-Right crowd. But just as that faction in American politics seems to falsify with video, think of how the left does it too. Recall the evisceration of Prime Minister May at the last general election in the UK, for example. Then the so-called Corbynistas devised video ‘memes’ that showed Mrs May saying the same words again and again like a malfunctioning robot. These too were based on splices of videoed action cut together to create a false image.

In short, truth seems to go out the door with the affordances of digital video.

Should we no longer trust video? If not, then what do we trust? Hasn’t video become the lingua franca of our times? Certainly, news reports are replete with video, with tales from the field not being written but shown; our social media feeds are populated by video too.

It is not so much that one wants video, though. It seems to me, and this is what this blog is about, that we all want to see. Seeing seems more important, better than other modes of understanding – whatever they might be (reading, listening, and so on).

Why has seeing come to have this status? Do we presuppose that what one sees should be given more credibility than, say, what one reads? Do things have to be seen if they are to be taken as true? If so, this raises questions about who sees, what they see and who it is that does the seeing for us.

Consider journalists: one might say that they act as our seeing agents; they stand in for our eyes. Given this, is it better that they capture what they see with video, rather than interpret what they see with words? Do the resulting videos stand as proxy for our own line of sight, the line of sight we would have if we were there instead of them? As I say, our news services do seem replete with video, not with written analysis.

Perhaps this is how journalism is being shaped at the current time. If this is so, then what does it say about such things as our justice system? Here, too, people act as our representatives: judges act on the public’s behalf. Are they, likewise, shifting their practices to make seeing increasingly central to their judgments?

Judges do look at the visual, that is for sure; but it seems to me that their looking has subtleties that we can readily acknowledge make it different from our own ways of seeing. Judges don’t simply look; they look and they listen to what a witness says about what something looks like, and juxtapose this account against other witnesses’ accounts. A judge puts these accounts alongside other sources of information. These may include other visual materials but also documentary phenomena, such as traces of action stored in computer systems – records of bank transactions, say – and written documents too. Visual witnessing is only part of what judges judge on. Judges subsume questions of seeing into questions of evidence and its heterogeneity – where seeing is only one mode of truth.

So, should we approach the farrago over Trump’s altercation with the CNN reporter as nothing more than an example of ‘troubles with seeing’, when all we have is a certain type of seeing – the gaze enabled by the media, by click-throughs on social media? It is not seeing in general that is at issue, but specific social practices.

Perhaps. But surely there is a deeper question here, bound up with how easy it is, with digital tools, to ‘crop’, ‘highlight’, ‘remove’, ‘insert’ and ‘recast’ the narrative arc of some ‘seen event’ – and, given this, with the power of the visual as a mode of information gathering. The visual is powerful, despite being easy to falsify. The visual can be said to anchor everyday veracity, for example. A judge treats the visual differently because he/she is not in the everyday mode. Indeed, many psychologists would say that what a person sees in everyday life functions more deeply in the processing of consciousness than, say, ideas or thoughts; in this view (no pun intended), seeing does have precedence in questions of knowledge. In my own field, human-computer interaction, this psychological view is often used to justify the merits of visual memories and recall enabled by digital tools. One consequence of these efforts is that those who have less interest in the cognitive but more in technology and its consequences, such as media studies theorists, are coming to suggest that the ‘visual’ is a measure of what we have become. The digital is making us visual, they say.

Yet these arguments from psychology and media studies miss something that, I think, is more important still. When it comes to the visual, to the use of video, it is not the ease with which what is seen can be altered and made false that is at issue; it is the efficacy of conveying messages and eliciting responses quickly. What one should note about our current world is not that falsification is now more commonplace and easily achieved with video. It is not truth that is threatened by making visual concerns the anchor of our knowledge. It is, rather, a question of the pace of our judgements. We see and judge at a glance and do all this quickly; that is the point. We do not see and ask for more information; we do not think slowly. In this regard, the use of video in debates about politics reflects a desire for quick responses, not questions of objectivity and truth. We want quick actions in politics, not slow, ponderous choice-making. Politicians must act, we can hear ourselves say, however compromised their choices as a result.

One might add that it is not just video that affords this rapidity. Think of Trump’s tweets. Whether they are true or not doesn’t matter; it’s the effect they have that does, and the effect is instant, unreflective, accusatory. They are sent quickly, read quickly, and have their effect quickly. It would appear that they are created quickly too.

And that is the point. We are all like Trump now. We might not tweet but we make judgements with a glance; we see, we believe; we judge instantly. It’s not the technologies that are undermining questions of truth, but the tempo of our thinking. And it is not just politicians who are thinking quickly. It is all of us.

Enchantment with Computer Reason

4 Aug

Today, when computer systems are so ubiquitous and therefore mundane, so immensely powerful yet taken for granted, how do programmers motivate themselves? How do they get out of bed in the mornings and say, “Yep! I can’t wait to get on the keyboard!”?

There was a time when this motivation seemed easy to explain. Think of Sherry Turkle’s book, The Second Self. There she described how exciting it was to get a machine to ‘act’ in accord with one’s own instructions. She said that getting one’s thoughts onto a screen, and having those thoughts make the machine function like one’s second self, was magical, something that really motivated you. But her book was written when the machines on the desk acting as one’s second self were called microcomputers, not personal computers. That gives an idea of how long ago that excitement was. So today – what enchants coders?

I do think it is enchantment that gets the coder out of bed, but I think it is a quite different kind from that which Turkle described. Indeed, it is almost the reverse, one might say. In my view, many coders today find their enchantment in machine learning. They are enchanted because machine learning makes computers act in ways that they, the coders, cannot understand. It is not their own reasoning writ large in the performance of the machine that excites them or provokes a sense of wonder; it is, on the contrary, how the machine works despite them.

The aspect of computer programming I am thinking of is a part of machine learning that is sometimes called Deep Learning. This is part of a broader family of methods based on the notion that programmes themselves can, as it were, ‘learn’ how to correctly represent data and thus act on that data. In the approach I am thinking of, no human is required to label data as part of some training set. Rather, the machine or rather the application somehow ‘uncovers’ categories and features in the data (about the world, say) and then acts accordingly.

What comes to mind, particularly, are computer vision systems. Here, certain programmes are able to identify (to ‘see’, as it were) objects, and not merely as a function of ‘unsupervised learning’, the technique whereby programmes come to recognise objects without the aid of a human expert. Such techniques presuppose that what the system finds accords with what the human programmer can see too – the machine in this sense is only copying what the human can do, though doing it autonomously. In contrast, these new systems are identifying objects – patterns, shapes, phenomena in the visual field – that no human could see. They are, if you like, doing something beyond what the human can do.
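None of the systems discussed here are specified in enough detail to reproduce, but the flavour of ‘uncovering’ categories without a human labelling the data can be sketched with the simplest of unsupervised methods, k-means clustering. The data, function and parameters below are all illustrative assumptions of mine, not anyone’s actual system:

```python
def kmeans(points, k, iters=10):
    """Toy unsupervised grouping of unlabelled 2-D points: no human
    labels the data; the algorithm 'uncovers' the k categories itself."""
    centres = points[:k]  # start from the first k points (kept simple)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centres[i][0]) ** 2
                                      + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each centre to the mean of the points assigned to it.
        centres = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two obvious groups, though the algorithm is never told there are two.
data = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(kmeans(data, k=2))  # centres settle near (0.1, 0.1) and (5.03, 5.0)
```

Even this miniature shows the point made above only in part: the categories it finds are ones a human could see too. The newer systems the text describes go further, finding structure no human would.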

As it happens, and in many instances, various advanced computer vision processing applications have been doing this for some time – though without the fanfare that has erupted recently.

Good examples of what such programmes can do can be found in the work of, for example, Graham Budgett, an artist at the University of California, Santa Barbara. Here, the images he produces, his art if you like, are to be seen through a browser. These images keep iterating and changing as you look. They do so as a function of the algorithms that make the images you see a transitory output. That is to say, these algorithms constantly reinterpret the objects, the shapes, the forms, the colours, that Budgett provides them with in the first place. The algorithms present these as the first thing one sees. But then they start interpreting and reinterpreting those shapes, colours and forms. In each cycle of interpretation, the code starts with the same initial set of objects (whatever they might be), and processing and interpreting them results in new forms every time the code (or the application) is run. The code is probabilistic, not deterministic, and so comes up with different interpretations each time it runs.
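Budgett’s actual algorithms are not described here, but the contrast between probabilistic and deterministic code can be sketched in miniature. In this hypothetical toy (the shapes, transforms and function names are my own assumptions), the same initial objects are reinterpreted on each cycle, and two runs from the same start diverge:

```python
import random

# The same initial 'objects' are supplied every run...
SHAPES = ["circle", "square", "triangle"]
TRANSFORMS = ["stretched", "inverted", "recoloured", "fragmented"]

def reinterpret(shapes, cycles, rng):
    """Each cycle wraps every shape in a randomly chosen transform,
    so identical starting material yields different results each run."""
    forms = list(shapes)
    for _ in range(cycles):
        forms = [f"{rng.choice(TRANSFORMS)} {form}" for form in forms]
    return forms

# ...but, the process being probabilistic rather than deterministic,
# two runs from the same starting point (almost always) diverge.
print(reinterpret(SHAPES, cycles=3, rng=random.Random()))
print(reinterpret(SHAPES, cycles=3, rng=random.Random()))
```

The design point is that the variation comes from the process itself, not from changed input: give the code a seed and it becomes deterministic again.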

In a sense one might say that the art here – the painting, if you prefer, though no paintbrushes are involved, only a keyboard and mouse – is being done by code. What the artist does, in this case Budgett, is select the machine learning algorithms as if they were paints for the palette. The ‘art’ comes to be in how the code interacts with its own output; thus Budgett has created art that performs without his controlling hand.

Though his examples are only of pictorial art, in important respects the pictures show something quite radical. The applications producing them are not articulating human knowledge, knowledge about shapes and objects in the world. Rather, they are creating, through the application’s interpretation, new knowledge, new forms and shapes. These are the dynamic output of algorithms. In these respects, the Turing Test has been passed in a radically impressive way, since computing is not so much mimicking human intelligence as doing something people cannot do – making things with a new kind of intelligence.

This is significant. If this is the enchantment that coders are finding today, then it is fundamentally different from the kind described by Turkle in The Second Self. If the delight she described was in getting a machine to act according to a coder’s own reasons, now the delight that coders feel is in getting machines to act in terms of reasons that the machine produces. The enchantment is no longer in the self, in how one gets a machine to act as a mirror of one’s own thoughts; it is in how an application can reason autonomously. It is as if the coders want the applications they code to do something more than they can imagine themselves.

Now for many coders this seems to be an enchanting moment. Here at last is a glimpse of what they have been seeking since ‘AI’ first became common currency after the Dartmouth Conference of 1956, where the term was coined.

The trick, though, is that the applications currently being sought are ones that seem to have reasons that people don’t have, that people couldn’t have, that are more than human in their intelligence. And here it is not simply that computers can process at vast speed, that they are simply better calculators; on the contrary, the coders think that the applications they are producing reason in ways that are beyond human reason.

This is somehow beyond what Turing imagined. Given the deity-like status this mathematician has in the pantheon of computer science, this is presumably enormously exciting to the coder. No wonder they are so keen to get out of bed. It’s not what they do that excites them, it’s what their applications will do.

All the dialogues about trust, computing and society: why?

18 Apr

Any glance at the contemporary intellectual landscape would make it clear that trust and computing is a topic of considerable interest. And by this I do not mean whether computers can be relied upon to do their job, simply doing as they are told. If only it were as simple as that – an interface. As Douglas Rushkoff argues in his brief and provocative book, Program or be Programmed (2010), when people rely on computers in their everyday life it is not like Miss Daisy trusting her chauffeur to take her car to the right destination. It’s not what computers are told that is the issue; it’s what computers tell us, the humans. With computing, so Rushkoff would have us believe, there is no knowing what the destination is: it is unclear what it is that the humans are trusting in, or for. John Naughton, in his From Gutenberg to Zuckerberg (2012), asks similarly large questions, and here too the topic of the ‘interface’ seems inconsequential: for him we need to ask whether we can trust computing (and the internet in particular) to bring us dystopia or heaven – though the contrast he presents is not entirely without irony: it is the duplicitous appeal of Huxley’s Brave New World or the bleakness of Orwell’s Nineteen Eighty-Four. Meanwhile, Pariser complains in his The Filter Bubble (2011) that we cannot trust search engines anymore; today, in the age of the Cloud and massive aggregation systems, search engine providers can hide things away from us in ways that we could not guess. Doing so is at once sinister and capitalistic, Pariser argues; sinister since it is disempowering, capitalistic since it places the market above the public good. Search engines take you to what companies want to sell, not to what you need to know.
One-time capitalist himself, William Davidow is likewise agitated, though it’s not salesmanship that worries him: we are now Overconnected (2011), as he argues in his eponymous book; we cannot trust ourselves to reason properly. This is merely a list of well-known texts in the public domain; there are equally many in the more scholarly worlds of philosophy, sociology and, of course, computer science. In the first of these the journal literature is immense, whether it be Holton’s ‘Deciding to Trust, Coming to Believe’ paper of 1994 or Baier’s book Moral Prejudices (1994); in sociology there are at least as many, including Misztal (1996), Möllering (2006) and Gambetta’s edited collection of 1988 (including as it does some philosophers, such as Williams). In computer science and human-computer interaction (HCI) there are as many again, with Piotr Cofta’s The Trustworthy and Trusted Web of 2011 being one of the most recent. The sheer volume and scale of this discourse leads one to doubt whether any single, unified view will arise out of it, even if many of the authors in question want to offer one: Bruce Schneier, though not an academic, comes to mind with his highly readable Liars and Outliers: Enabling the Trust That Society Needs to Thrive (2012).

Navigating the domain

So what is one to make of all this? It seems to me that we have to stop rushing to answer what trust is – even if in the end we might come to seek such an answer. Rather, at the moment, and given the wealth of views currently being presented on the topic, we need to ask something about trust that is, as it were, prior to the question of what it is. We need to ask: why all the fuss about trust now? Having done this we can inquire into how these current concerns are affecting what is treated as trust, how that trust is ‘theorised’ and the ways evidence is brought to bear on discussions about that theorised object.

The sociologist Luhmann noted in his essay Familiarity and Trust (1988) that societies seem to make trust a topic of concern at certain historical moments – they need to arrange themselves so as to make trust a possibility and a worry. This interest does not seem to have much to do with trust itself – in its a-historic or transcendental conceptual sense (even if Luhmann had an interest in that himself). It has to do with a particular confluence of concerns that leads societies to reflect on certain things at particular times. This argument is in accord with Richard Rorty’s view about how to understand ideas and concepts in his Philosophy and the Mirror of Nature (1979). In this view, appreciating debates about some concern requires one to see them as historical (even contemporary debates). Doing so entails dissecting the links between views of other apparently disconnected concerns, creating maps of the historical topography of ideas, and investigating the performative goal or goals that lay behind the development and deployment of the ideas in question. It requires, in sum, understanding the ‘when’ of an argument and the ‘so what?’ of it – what it led to.

Let me illustrate what is meant by this in relation to arguments about trust and computing. A decade ago, the philosopher Onora O’Neill offered an account of trust in the Reith Lectures (2002). She wanted to characterise some of the essential, true elements of trust and its basis in action. Hers purported to be an a-historic view, a concern simply with the conceptual fabric of the term. She claimed that trust between people is a function of being near one another. By that she did not mean near in a moral or social sense. She meant in terms of the body. This might seem a strange argument, but bear with me. It comes down to the idea that people trust each other because they can touch each other; because they can see each other, their every movement; that people can, say, grasp another at will and be grasped back in turn: because they are altogether, in one place. Trust would seem to turn on cuddles. This is of course to paraphrase O’Neill. But, given this, a problem occurs when distances are introduced into social relations such that people can no longer cuddle. Trust is weakened if not dissolved. Mechanisms need to be developed, O’Neill argued, that make ties between bodies separated by space possible. In her lectures, she explored various answers to the question of how trust could be made.

Why did O’Neill come up with this view? It seems quite stark; almost startling, certainly to one who has not confronted it before. If truth be told, I have simplified her case and used a colourful way of illustrating it, though I do not think I have mischaracterised it. In presenting it thus, however, one can begin to see that there might be very profound links between it and the context, the historical context, in which it was presented. This was just a decade ago and, although that seems an eternity in terms of the internet, it is the internet that I think is key to that context. And it is in light of that context that the credit one should give to O’Neill’s views lies. It seems to me that O’Neill was putting forth a view about the relationship between our bodies, our location in space, and the trust that was fostered (or not) by the use of contemporary technologies of communication, most especially internet-related ones. Her theory of the nature of trust (assuming for the moment that one can call it a theory) was created against the backdrop of the problems of trust and communication highlighted by the internet. With the latter, the human body seemed to be visibly absent and, since trust was problematic on the internet, by dint of that the body must be the seat of trust in ‘normal’ (non-internet) settings of action. Hence O’Neill’s theory.

As it happens, O’Neill did not refer very much to the internet in her lectures. The important point I want to make is that, to understand O’Neill, one does not have to accept the idea that the presence of the body in any universal sense is always essential to trust: one simply has to accept that the absence of the body in acts of communication is a problem in the context of contemporary society, in the internet society. If one places her argument in context, one sees that that is in fact what she is writing about. It is, as it were, her starting point. Something about the acts of communication we undertake on the internet makes the location of the body – its presence/absence – salient. So, following Luhmann and Rorty, what we have in O’Neill’s lectures is a historically situated claim. Now one could say that historicising her argument is perhaps reducing the credit it should be given. That is not my intention – though this might not be clear at the moment. One of the reasons I chose her view to illustrate my case is that her argument was quite often presented at that time. It is in this sense exemplary. As it happens, the argument has continued to be made. Be that as it may, what I have thus far sought to show is the topographical relationship between O’Neill’s ideas and their socio-technical context. But one also needs to consider their performativity. Once an argument has been raised, it can be assessed, considered, brought to bear; one has to consider also where the argument was deployed, and for whom. In my view, what O’Neill was doing in her Lectures was getting the public to think about the role of philosophy, and suggesting that, despite appearances otherwise, philosophy can concern itself with everyday concerns, ones even to do with the body.
Whether she succeeded in persuading the public of the relevance of philosophy I do not know, but what one can say is that she got the argument widespread attention, even if she was not its only advocate. As Charles Ess and May Thorseth (Eds) discuss in Trust and Virtual Worlds (2011), the idea that it is the absence of the body that undermines trust came to be cultivated when the new communications technologies enabled by the internet began to take off, in the nineteen-nineties; O’Neill’s Reith Lectures are illustrative of this ‘cultural moment’. In research since, as Ess and Thorseth show, this link between body and trust can be seen to have been exaggerated. O’Neill can now be seen to have put forth too strong a case. The purpose of placing arguments in context and exploring their performative consequences, however, should be to make it clear that one ought not to judge attempts to explore trust by a simple right-or-wrong metric. In historicising a point of view, we can also see what that point of view might help create, the dialogues it led to and the richer understandings that ensued. It seems to me that O’Neill (and others who put forward her perspective at that time) helped foster discussion, analysis, debate and more nuanced understandings about the role of the body in social relations. The value of O’Neill, part of the success of her argument, is to be found in the fact that this topic was (and is still being) more thoroughly examined than it might otherwise have been.

To locate the discussion of trust, computing and society in time, in the contemporary moment, and to present and consider those arguments in terms of what they seek to attain, is of course a big enterprise. There are many such arguments, and there are various goals behind them. Their topography is diverse, their performativities also. Some come from quite particular specialist areas, such as the computer science domain known as Human-Computer Interaction (HCI). This has been looking at how to design trust into systems for many years. Criteria for success have to do with the practical use of designs, and less to do with any philosophical aspiration to define trust in a universal sense. Other arguments have their provenance in, for example, sociology, and here the topic of trust turns out to be specifically how the concept is used performatively in social action: it is not what sociologists think trust ought to be that is the topic but how people in everyday life use the concept. In addition to the sociological and the HCI perspectives, there are also philosophical points of view, and here the concern is to address the topic as a species of concept, as illustrative of the stuff of philosophical inquiries. Methods and means of argument are different from those found in, say, sociology, just as they are from those found in HCI. There are also arguments from the domain of technology itself (if not from HCI), and by that I mean from the point of view of those who engineer the systems that constitute the internet as we know it and as it is coming to be: this is the view, broadly speaking, of computer science. From this perspective – admittedly a broad camp – issues to do with distinguishing between systems trustable in engineering terms and systems whose use raises questions about the trustability (or otherwise) of users are prominent. And then we have arguments that are more in the public domain, of the type listed in the first paragraph.
These are ones that are helping constitute the narrative of our age, what society thinks it is about and what it needs to focus on.

These diverse arguments cannot be added up and a sum made. As should be clear, they each need to be understood as part of the mise-en-scène of contemporary life. And each needs to be judged in terms of its own goals. Key, above all, is to see how they variously help foster a dialogue and a sense of perspective on the large and sometimes worrisome topic that is trust, technology and society. Maybe that is the answer to my question, the question that led to this blog: why are there so many dialogues about trust, computing and society?

 

Selected Bibliography

Cofta, P. The Trustworthy and Trusted Web, Now Publishers (2011).

Davidow, W. Overconnected, Headline Publishing (2011).

Ess, C. & Thorseth, M. (Eds) Trust and Virtual Worlds, Peter Lang (2011).

Gergen, K. Relational Being, OUP (2009).

Hollis, M. Trust within Reason, CUP (1998).

Lessig, L. Remix, Penguin (2008).

Luhmann, N. Familiarity and Trust, in Gambetta, D. (Ed) Trust, Blackwell (1988), pp 94-107.

Masum, H. & Tovey, M. (Eds) The Reputation Society, MIT Press (2011).

Misztal, B. Trust in Modern Societies, Polity Press (1996).

Möllering, G. Trust: Reason, Routine, Reflexivity, Elsevier (2006).

Naughton, J. From Gutenberg to Zuckerberg, Quercus (2012).

O’Neill, O. The Reith Lectures, BBC (2002).

Pariser, E. The Filter Bubble, Viking (2011).

Rorty, R. Philosophy and the Mirror of Nature, Princeton (1979).

Rushkoff, D. Program or be Programmed, Soft Skull Press (2010).

Schneier, B. Liars and Outliers: Enabling the Trust That Society Needs to Thrive, John Wiley (2012).

Hello Grumps!

23 Jan

I wondered whether anyone might want to idle away some time reading the scribbles that don’t get into the ponderous journals and books of a professional HCI researcher in Corporate life?

No?

Well, what about the scribblings of one who finds Baudrillard funny, contemporary analytic philosophy sterile, and HCI a way of exploring what it means to be ‘human’ – even a way of thinking anew about that without recourse to the likes of Haraway – more to the likes of, let us say, oh, John Naughton?