
Sociology, Anthropology and the Social Consequences of AI

18 Jun

{This is a version of a talk I gave at the Lord Kelvin/Adam Smith Symposium Series at the University of Glasgow in May 2021. See: A Manifesto for Inclusive Digital Futures}

Sociological views of computer technology are a commonplace today. These see technological development in terms of social factors – how the structure of society, its class, gender or economic relations shape how technology is developed. And these relations are then reinforced by the technologies that result.

One can understand why such views appeal – after all, the social facts of computer technologies can be quite startling: so many people use Google, the profits Facebook makes can seem enormous, and the things one can buy on Alibaba are almost infinite. All this says something about our society, our culture, what we value. Technology seems to reflect these things, however good or bad they are, even as it helps make them anew. But sociological views tend to neglect how technological development itself occurs. Development often has more to do with particular technical problems and their solutions – there are social matters in these, but not of the kind sociologists imagine.

Take the case of AI. This technology is often said to have sociological consequences. Indeed, one can hardly doubt this. But how is AI developed? What are the ‘requirements’ AI engineers seek to satisfy? While it might be true that AI struggles with, say, social diversity when it is used, understanding why AI has the form it does might require investigating how computer engineers actually build it. Their concerns might not be ‘sociological’. But what are they? It is to that task I turn now.

AI is found in lots of computing technologies, but the ones I am interested in are the latest incarnation of what are called interactive ones: not the computers inside appliances and machines, which are beyond our reach, but those we ‘use’ in interaction. These computers are exemplified by desktops, laptops and mobiles in their millions. They are built by engineers who have two things in mind when they ‘innovate’: they define what these interactive computers will do by defining, in part, what the ‘user’ will want them to do. This means that these engineers are in effect defining both what the computer will do and who the user will be. This is how they innovate. It sounds as though they are doing a kind of sociology, but it doesn’t entail the kind of sociological topics I mention above. The ‘user’ these engineers have in mind is a ‘social actor’, doing things with other people, but the way that user and the things they do are understood, and the activities they are engaged in, are seen in particular ways which lead one away from sociological topics.

For example, aspects of the human body are used to help define the ways commands are delivered to an interactive computer, AI ones included. This is part of who the user ‘is’. A keyboard is the size it is not for reasons to do with, say, class, but for reasons to do with the size of the human hand. The design of ‘on/off’ switches and their virtual equivalents on the screen is determined by matters of body mechanics: there is a size-over-distance ratio and a science to understanding this ratio (viz. Fitts’s law) which means these sizes can be defined systematically. These ‘basics’ about the user are then built on by reference to larger or more encompassing features of what a user might be. A desktop computer is designed to enable certain kinds of work: office work, doings with documents of various kinds. Indeed, when Xerox invented what we now think of as the virtual desktop on the Star system, the ‘user’ the engineers had in mind was just that, an office worker. This had all sorts of consequences for what the Xerox engineers came up with. Much of it is still with us today. The things that the desktop computer offered the user were ‘documents’, as a case in point. These were represented with paper-like affordances – pages, borders, titles. Tools were developed that allowed users to format and adjust layout, to cut and paste, in ways analogous to working with paper (such analogues had their consequences, of course: see my book, Myth of the Paperless Office). All this led to what came to be called the WIMP interface: Windows, Icons (for files and such), a Mouse and a Pointer. WIMP is still the foundation of most interactive systems in use.
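To give a sense of how systematic this can be, Fitts’s law predicts how long it takes to hit a target from its size and distance. The sketch below is a minimal illustration of the Shannon formulation; the constants a and b are placeholder values, since in practice they are fitted empirically to a particular device and group of users.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted time (seconds) to acquire a target, using the Shannon
    formulation of Fitts's law: MT = a + b * log2(D/W + 1).
    The constants a and b are placeholders here; in practice they are
    fitted empirically for a given device and population of users."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant on-screen button takes longer to hit than a large,
# nearby one, which is why button sizes can be chosen systematically.
print(fitts_movement_time(distance=400, width=20))  # small, far target
print(fitts_movement_time(distance=100, width=80))  # large, near target
```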

Whether this vision of the user-as-office-worker was fully worked out when Xerox devised it some fifty years ago, or whether it was somewhat simplified, the point is that engineers had a user in mind when they devised their technology, when they innovated; the two go hand in hand. Over the decades since Xerox created the desktop computer, the role of interactive computing has, of course, changed. And so too has the ‘user’ as imagined when innovation occurs.

Take mobile phones: at first these were very simple computer devices engineered to send sound shapes back and forth: the sound shapes of human words. Their design reflected a compromise between the technological prerequisites for this – microphones, speakers and codecs managed by an interactive computer – and the size of the human head: the distance between ears and mouth, analogous to the button-size problem. Gradually, these features evolved to allow more information-centric activities. At first these echoed doings with desktops – between users and documents. This was particularly so on precursors to the web and its protocols such as WAP. Indeed, the difficulty of getting analogues to WIMP interfaces, designed for document interaction, onto mobiles was a major concern at that time. But computer engineers soon came to realise that the users of mobiles were different, or rather that what ‘users’ wanted to do was different. Users did not need to be provided with a mouse so as to cut and paste. What they wanted mobiles for was to interact with each other (see my paper from that time: People versus Information). Surprisingly, this was not what engineers had first thought when mobiles were being devised. In the minds of computer engineers, voice messages were information exchanges, not social acts. But when engineers began to recognise the social desires of ‘mobile users’, mobile phones were ‘reimagined’ – Nokia led the charge with their SMS functionality, and thereafter phones became devices for being social. Never mind that Apple came in late to the party and ruined it for Nokia; the point is that mobiles have evolved around the pre-eminence of social interaction, with mobile applications like TikTok and Instagram exemplifying the kind of sociality in question and hence, also, the ways computers and interaction with them developed in this domain (see my book, Texture).

The most recently emerging technology is AI. This technology is also anchored in a joint understanding of what it might do, shaped by imagining what a user of it might do. Again, there is a social aspect to innovation, as there always has been, but here it has particular aspects to do with the technology and with what, in turn, a user of it is thought to want to do.

To begin with, many AI applications build on things devised before AI. Take search engines. Originally these had no AI in them, but with time and scale of use, options for using AI-type techniques have emerged. This has also entailed determining what AI ‘users’ would want, allowing AI users to ‘emerge’ too, if you will. Google’s PageRank, an instance of AI, needs large numbers of users to develop a database of heuristics about which search target is most likely to be relevant to some search query. It is this that suggests (via AI processing) which target is to be ‘triaged’ to the top of a SERP (Search Engine Results Page). AI search engineers assume that ‘users’ will want the most likely answer, not the right answer. Or rather, they assume that the user strikes a compromise between speed and accuracy: they would prefer a quick answer that might be wrong to a right answer that might take a long time to find. This is what engineers at Google are assuming when they enhance that search engine; this is the user they have in mind. AI-enabled search combines a notion of what a user wants (it turns out to be speediness) and how the technology can deliver that (through quickly delivered good guesses). The result is a mutual shaping: user and technology making each other.
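To make the ranking step less abstract, here is a minimal sketch of the classic link-analysis form of PageRank as originally published, run by power iteration over a tiny link graph invented purely for illustration. Whatever signals actually feed a modern ranking pipeline, the engineering question it answers is the one described above: which candidate gets triaged to the top.

```python
# A toy link graph, invented purely for illustration: each page lists
# the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["a", "c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Classic PageRank by power iteration: rank flows along links,
    with a damping factor standing in for a user who sometimes jumps
    to a random page."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Pages with many well-ranked in-links float to the top of the list a
# results page is assembled from.
print(sorted(pagerank(links).items(), key=lambda item: -item[1]))
```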

Similarly with social behaviours: the messages people post and send each other are also content at scale. Facebook’s EdgeRank only works because of volumes – the ‘postings’ and ‘sharings’ of millions. It is these volumes that let Facebook’s AI produce outputs: recommendations about new connections, new postings, new ‘friends’. One might say that users of Facebook do not make friends as they used to, before the invention of Facebook; they now make friends through Facebook. ‘Friending’ is now part of everyday parlance, alluding to a distinction between old and new ways of friend making. Again, though, the point is that the technology and the user are mutually shaped.
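A small sketch can show the shape of this. EdgeRank was publicly described, around 2010, as scoring each candidate story by the viewer’s affinity with its author, a weight for the type of content, and a decay for its age, then showing the highest-scoring stories first. The field names, numbers and decay function below are invented for illustration, not taken from Facebook’s code.

```python
def edge_score(affinity, weight, age_hours, decay=0.1):
    """Illustrative EdgeRank-style score: affinity x content-type weight x
    a simple recency decay. The decay function is an assumption made for
    this sketch."""
    return affinity * weight * (1.0 / (1.0 + decay * age_hours))

# Hypothetical candidate stories for one user's feed.
candidates = [
    {"id": "photo_from_close_friend", "affinity": 0.9, "weight": 1.5, "age_hours": 3},
    {"id": "link_from_acquaintance",  "affinity": 0.2, "weight": 1.0, "age_hours": 1},
    {"id": "old_status_update",       "affinity": 0.7, "weight": 0.8, "age_hours": 48},
]

feed = sorted(
    candidates,
    key=lambda s: -edge_score(s["affinity"], s["weight"], s["age_hours"]),
)
print([s["id"] for s in feed])
```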

These are examples of AI emerging on the back of data found on the web, instances at scale of human behaviours. AI can take large volumes of data from elsewhere, of course, and these data do not need to be human. A lot of work in the medical field, for example gene analysis, is of this kind. However, though the basic process of innovation in AI is like the one that has always applied in interactive computing, some of its consequences are worth remarking on, as they highlight differences that matter.

To begin with, AI, like all interactive computing, ends up encouraging the tasks it supports. Just as Xerox made documenting even more important to organisations, so AI encourages certain activities over others – take search again. Who now uses search engines to explore the web, or to ‘surf’? People are less likely to do so as they are tempted by the design and efficacy of AI tools to do other things: to shop, purchase and ‘consume’. Such consuming behaviours are built into the engineering of AI search tools, and these tools are the primary entry points to the web. Their shape shapes what users end up doing – finding, buying, and consuming. No wonder the web has become a vast emporium for purchasing. (Whether these consuming behaviours are quite as expected, or whether users are directed in guaranteed ways, is perhaps another question: see Mackenzie, ‘Cookies, Pixels and Fingerprints’.)

A second consequence follows on from this. This kind of AI – web-enabled and web-focused – tends to focus on the behaviours of people rather more than on their intentions. What people do on the web matters more than what they think when they are on the web. (For a review see ‘Data Behaviourism’ by Antoinette Rouvroy.) Take search again: because people click on search results, AI tools assume that people are in need of answers, but they might not be. Their intentions might be such that they don’t mind wrong answers, or even no answers at all. As mentioned above, they might want to be ‘surfing’ – though this might be too general a word for what their intentions are. A user might want to imagine travelling, for example, and might not want to stop their travels with an answer. They might, by way of contrast, want to waste time by idling on the web, and again not want tools that rush them to ‘complete’. Idling has no end point, after all (though it might be time limited). These intentions might or might not be best expressed by the term ‘surfing’. The point is that AI tools in search cannot see such motives – indeed, AI tools make it increasingly hard for AI itself to see any motives beyond the behaviours they measure. The better the AI tools are at enabling some doings, the less able those same tools are to see the intentions that might have led people to do other things. One can put this another way: the better AI becomes at some task, the less understanding that same AI might have of other tasks and hence of other intentions. This narrowing feeds itself: the data the AI uses is produced by AI tools that shape that data, and that increases the likelihood that users will act in certain (encouraged) ways. In short, AI tools get better by knowing less, and as they do so, users do less too.

As we move forward with a hope of imagining new areas and domains for technology, including AI, and if we want these new domains to let us do new things, then it seems to me that the problem of understanding in AI becomes greater. By understanding I mean understanding who users might be: if that understanding is limited, then what AI comes to afford will be limited too.

Take the future of home life. Here AI researchers are saying that just as AI has automated many aspects of the workplace, so too it will automate aspects of home life (see, for example, Russell). At first glance this seems appealing. After all, who would not want an automated washing machine – if that included a device that picked up the dirty laundry, loaded the machine and put the cleaned clothes back in the drawers? The problem here is that there might be some things people do want automated and others they don’t. Knowing the difference might be a question of knowing the motives involved. Consider a different labour at home, cooking. If one could devise an automated washing ‘system’, could one devise an automated cooking system? One could, at least in theory. Yet consider this: when people set up home as a couple, they will often choose to cook for each other. They do so as an expression of their love for each other. Cooking can be, if you like, love’s work. When it is, one would imagine these same people would not want AI to automate this work. Doing so would reduce the tools through which they ‘make love’, so to say. (For more discussion of this and the related concept of the smart home, see my book of that name. See also my Connected Home.)

I am not saying that people in the future might not end up preferring machines to make their food and do their laundry; I am saying that the way AI is developed, the manner of innovation one finds inside AI, results in judgements about what people want to do becoming increasingly important and yet also increasingly difficult to make: the understanding that is required can get lost in the data that AI tools are themselves creating. Many AI researchers seem completely unaware of this, such as Russell, co-author of one of the primary textbooks on AI at the current time.

Going forward, there is a need to clearly delineate and characterise what people do through understanding why they do it. It is a question of meaning, not action. And this brings me to sociology once again. Sociology has many merits, but its main concern is with the ways individuals find their actions governed – governed by their gender, their class, their ethnicity, by the society in which they live. But what we have just seen is that what things mean to people is crucial for AI going forward. A different discipline has stood up and claimed that space: anthropology. As the great anthropologist Clifford Geertz put it, this discipline is in search of the meaning people give to their lives. If that is the case, then the moral of this blog is that we need more anthropology, not sociology. Perhaps more significantly, we need more anthropology and not more AI. If we stick with AI, we will make our lives anthropologically less. The future is for us to make, but not with AI.
