29 November 2024

New Media, New Data and a Dark Foreboding

Surveillance as Observation, Simulation and Weapon

The last two decades have seen major changes in surveillance practices; there has been a shift in focus from state power and control to big tech corporations and monetisation. What we are currently witnessing is yet another shift, which is establishing surveillance practices as a means of hybrid warfare. Surveillance can be used as a weapon, and not just in military contexts. The AI-driven vision of accessing what people think and feel might seem harmless in comparison, but it may turn out to be a much more powerful sword.

Surveillance practices old and new

From the 18th century, surveillance was mainly a state-run endeavour, utilised for administrative purposes and to exercise control and power. Earlier surveillance technologies were limited by today’s standards: not every action was photographed or filmed, not every conversation recorded. Part of their efficiency derived from a nimbus of perceived pervasiveness, a variation of the panopticon effect that today is framed as a “chilling effect” in legal discussions: you did not know whether someone was listening in on your phone call, but the mere possibility that someone could be listening was enough to change your behaviour.

With the astonishing rise of digital platforms, there has been a considerable shift in surveillance practices: who does it, how do they do it, and why? Large corporations offer services that produce data sets that are in turn used for monetisation. The power dynamics heavily favour a form of surveillance-based capitalism. Search engines, content-providing platforms and social media – every step taken there is constituted in data and leaves data behind. As more and more parts of our lives are constituted by and lived within data, this type of surveillance is almost all-encompassing. Its main objective is not, in essence, political; the power exercised over users lies in the extraction of data and the manipulation of behaviours, with the ultimate goal of making money. Despite being aware that all communications are recorded and analysed, the majority of people accept it, ignore it or see a greater benefit in the free services they consume.

Of course, states are keen on obtaining this data, as it is readily available and tempting. Proposals by EU policy-makers to make providers automatically analyse messenger data in order to scan for illegal content are one recent example of this desire. In light of a more general drift towards democratic backsliding that can be observed in the US and in EU member states, the combination of universal datafication, the targeting of individuals and the dissolution of the state/corporate boundary in times of autocratic tendencies seems like quite the dark triad of surveillance practices.

Weaponising surveillance

This dark triad points to yet another shift, the weaponisation of surveillance. Pegasus, a spyware tool offered by the Israel-based NSO Group, is marketed as cyberintelligence that helps governments fight terrorism and crime. In practice, it works by infiltrating an individual’s phone – for example, via messenger services – enabling the spyware to harvest any data that is produced (contacts, communications, contents). Despite this framing as a tool to prevent or investigate terrorism and crime, this type of intelligence gathering can be used for targeted attacks against individuals for political gain. Using knowledge about individuals – such as journalists or political opponents – to threaten or blackmail them is an effective and low-effort strategy for weaponising surveillance. Rather than running their own intelligence efforts, state actors and agencies have found themselves in the role of customers of private companies that offer them their exclusive spying services. Given that, in democratic systems, this is conducted under the label of fighting crime and countering terrorism, these highly invasive measures can happen largely outside of legal oversight.

This is even more so the case when digital surveillance is used in open conflicts for automated information gathering and target selection, as is reportedly being done in Russia’s war of aggression against Ukraine as well as in the Israeli strikes against Gaza following the Hamas attacks of October 7, 2023. According to journalistic accounts, Israel’s Lavender system used surveillance data to identify suspected terrorists and Hamas operatives. To take another example, the Palantir system – as envisioned and promoted by the company – promises a natural language interface that aggregates all available data to boost situational awareness in conflicts, to analyse possible courses of action, and to recommend the most effective ones. Some of this functionality is being applied in Ukrainian efforts against the Russian invaders.

In more abstract terms, these practices of weaponised surveillance are also rooted in the platform-derived techniques of datafication, profiling, targeting and recommending. The core difference, however, is that the probabilistic score no longer denotes potential customers or users who are susceptible to ads, but enemy combatants or terrorists. This data-rich, networked and platform-based type of warfare is reflective of what goes under the label of hybrid warfare. What often gets overlooked is that this hybridity also entails a dissolution of established conceptual boundaries: Are the actors state, private or corporate ones? Are they military or civilian? What about the technologies they use? And, above all: At what point are nations at war with each other? How can we still discern this increasingly blurry line?

Expanding the purposes and objects of surveillance

These blurring boundaries also change the foundation of what surveillance practices are or entail. Take, for example, the idea of “autonomous weapons”, often simply envisioned as killer robots or, in more nuanced approaches, as unmanned vehicles equipped to select and engage targets without human intervention. When the conceptual basis for surveillance is conceived of as a combination of data collection, automated data analysis, pattern recognition and recommended actions, as described above, yet another boundary becomes blurry: Are we dealing with a relatively innocent practice of information gathering or intelligence, or should these types of surveillance rather be understood as “autonomous weapons” in their own right, given that the step from recommendation to actual decision can be quite a small one?

The changing purposes of surveillance have been accompanied by an increasing expansion of the objects of surveillance.

The first expansion is rooted in the belief that the future can be observed in the present. Observation practices such as intercepting phone conversations or video surveillance are based on the fairly straightforward notion of seeing what people actually do. Techniques such as profiling and probability-based extrapolations of likely future behaviours create the idea of observing what has not happened yet. In other words, surveillance becomes simulation. It no longer looks simply at the things that people do – their actual movements, search queries, contacts and communications. It is concerned with what people are assumed to be likely to do in the future, based on their statistical proximity to particular groups conveyed by certain markers, such as affiliation with the purely mathematical-fictional unit of “persons who behaved similarly in the past”. As a side effect, the individual is no longer the undividable object of surveillance. What is under surveillance is an individual’s complex entanglement with a profile: the in-dividual is split into identity markers such as gender, race or age, group affiliations or behavioural patterns.
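To make the notion of “statistical proximity” more concrete, the following is a minimal sketch of how such a probability-based extrapolation can work in principle. It is an illustrative toy model, not any actual profiling system; the feature names, numbers and labels are hypothetical assumptions made purely for the sake of the example.

```python
import numpy as np

# Hypothetical behavioural features per person, e.g. counts of certain
# search queries, visited locations and contacts within a flagged group.
past_population = np.array([
    [3, 0, 1],   # person A
    [0, 5, 2],   # person B
    [4, 1, 0],   # person C
])
# Labels assigned to these people in the past (1 = "flagged", 0 = "not flagged").
past_labels = np.array([1, 0, 1])

def predicted_risk(person, population, labels, k=2):
    """Score a person by the labels of the k most similar past profiles.

    The "persons who behaved similarly in the past" are simply the nearest
    neighbours in feature space; the resulting score is a statistical
    artefact of that comparison, not an observation of anything the
    person has actually done.
    """
    distances = np.linalg.norm(population - person, axis=1)
    nearest = np.argsort(distances)[:k]
    return labels[nearest].mean()

new_person = np.array([3, 1, 1])  # never observed doing anything "flagged"
print(predicted_risk(new_person, past_population, past_labels))  # -> 1.0
```

The point of the sketch is that the score attached to the new person is entirely derived from other people’s past labels: the simulation produces “knowledge” about a future that has not happened, attached to a person who has done nothing observable.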

The second expansion is based on developments in media and interface technologies that sell the idea of accessing what people think and how they feel. When using and navigating our smartphones, smart speakers, smart homes or smart watches, we are no longer limited to purely text-based interactions. We use our voices, gestures and facial expressions – and in doing so we inadvertently help produce unprecedented amounts and types of data. By allowing more direct interactions, the new interfaces gain access to knowledge about social dynamics or emotional states: they track natural language and non-verbal interactions with the machine and with each other, while the device conveniently keeps recording couple or family dynamics. Applications such as avatar friends or bots specifically target our social and emotional needs – and in doing so elicit ever more data in these areas.

Besides the idea of observing future behaviours, the aggregation and analysis of data that seemingly captures the social and emotional realities of those surveilled promotes a fairly recent technological imagination: accurately observing the inside of people’s minds – emotions, attitudes, beliefs (a recurring imagination, as the historical example of the polygraph shows). The consequences of this development are becoming particularly noteworthy in the field that brands itself as emotion analytics.

A dire premonition: the surveillance of what people think and feel

The premise of so-called affective computing and emotion analytics is to make human affect and emotion machine-readable, in an effort to improve human-machine interactions by paying particular attention to those elements that also play a huge role in communication and interactions between humans. Gestures and movements of the body, facial expressions or speech patterns in natural language use are converted into computable data sets. Anthropomorphic design elements, interactive bots or human-like social robots offer interfaces that cue humans to make more use of non- and paraverbal modes of communication. While these aims promise particular functional benefits in human-machine interactions, emotion analytics and affective computing – when combined with the shifting surveillance practices discussed above – also induce a sense of foreboding.

As we have seen, surveillance as simulation no longer limits itself to what people actually do, but extends to what they are virtually about to do in a probabilistic future. Computing affect and emotion creates a surveillance practice that expands to human thoughts and feelings. At least, this is the promise of emotion analytics. In reality, it further dissociates the object of surveillance, its referent, from the surveillance technique that claims to make it visible. The reason for this is that the epistemological foundations for the analysis of emotions are highly questionable. In many cases, they rely on a particular dictionary of emotions that supposedly translates what can actually be observed into the corresponding mental states behind this symbolic code. The notorious FACS model – the facial action coding system – converts visible muscle movements in a human face into corresponding emotions. It still offers one of the most popular taxonomies for the analysis of emotions based on visually traceable data, not least because it lends itself to an almost simplistic, machine-readable implementation. Yet the conversion of facial expressions into knowledge about a human’s emotions rests on shaky ground: it neglects social and cultural contexts, assumes the universality of emotional expression and is partly based on a hyperbolic, almost comical system of representation.
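To illustrate just how simplistic this machine-readable conversion can be, here is a minimal sketch of a FACS-style lookup. The action-unit combinations loosely follow commonly cited pairings (for example, cheek raiser plus lip corner puller for “happiness”), but the table, thresholds and function names are illustrative assumptions, not the coding scheme of any actual emotion-analytics product.

```python
# Hypothetical, simplified FACS-style lookup: detected facial "action units"
# (coded muscle movements) are mapped onto emotion labels via a fixed
# dictionary. Real systems add probabilities and many more units, but the
# epistemic leap - from visible movement to alleged mental state - is the same.
EMOTION_TABLE = {
    frozenset({"AU6", "AU12"}): "happiness",          # cheek raiser + lip corner puller
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU4", "AU5", "AU7", "AU23"}): "anger",
    frozenset({"AU1", "AU2", "AU5", "AU26"}): "surprise",
}

def infer_emotion(detected_action_units):
    """Translate observed muscle movements into an alleged mental state.

    Everything that makes an emotion an emotion (context, culture,
    intention, dissimulation) is absent from this lookup; the function
    simply returns whatever label the dictionary imputes.
    """
    observed = frozenset(detected_action_units)
    for pattern, label in EMOTION_TABLE.items():
        if pattern <= observed:   # all required action units are present
            return label
    return "neutral"              # an unlisted face is declared to feel nothing

print(infer_emotion(["AU6", "AU12"]))          # -> happiness
print(infer_emotion(["AU6", "AU12", "AU4"]))   # -> happiness (conflicting cue ignored)
```

Whatever sophistication is layered on top in deployed systems, the output remains a label imputed from a dictionary of this kind, which is precisely why the claim to read emotions “accurately” deserves scepticism.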

Despite the likelihood of producing a fair amount of empirical artefacts, the analysis of emotions or affects within the power dynamics of current surveillance practices might develop a powerful knowledge of its own: surveillance as simulation and imputation. If the programmers of surveillance techniques define situations as real, they become real in their consequences (here, I am corrupting the Thomas theorem a little for the sake of argument). The emotion-surveillance technique creates an affect-laden subject, along with simulated knowledge of its thoughts and feelings. This is a real risk, especially as these models even claim to detect emotions that the subject is trying to hide.

A look ahead – 1984 is near

If the current developments promoted by private companies are any indicator of what state actors will soon be eager to trace, track and potentially utilise for political reasons or policing in authoritarian systems, the mere idea of affective computing and emotion analytics is a dire premonition of what is to come. This outlook is further substantiated by the mass implementation of AI-based emotion recognition tools, which is already happening in China.

This type of surveillance produces knowledge that claims to reveal not only what people are likely to do in the future but also what they feel and think, paired with the promise of reading the actual truth behind the fake emotion – as one can surely always feign the right attitude or the required ethos. The consequences of this epistemological bending are potentially grave. The AI-powered machine-reading tool can quite easily be framed as generating impartial and objective knowledge about disloyal mindsets and attitudes that are in need of sanctioning or prosecution. This might even mark the return to a form of criminal law that is attitude-based rather than act-based. The mere thought of committing an illegal act might, after all, be something that violates the law.


SUGGESTED CITATION  Bächle, Thomas Christian: New Media, New Data and a Dark Foreboding: Surveillance as Observation, Simulation and Weapon, VerfBlog, 2024/11/29, https://healthyhabit.life/new-media-new-data-and-a-dark-foreboding/, DOI: 10.59704/8387b67c8098d0f8.

One Comment

  1. Laubeiter, Tue 3 Dec 2024 at 14:24

    First question: If the aforementioned technologies had democratic oversight and were subject to judicial review, would this sufficiently hedge their dangerousness, and should we then go ahead with them? Or do the technologies inherently violate human rights, such that they cannot be hedged? Second question: the assumption that “persons who behaved similarly in the past” is a mathematical fiction. Is it? Social media personalization relies on exactly this kind of math and is successful at predicting and steering. What about this math is fictitious, then?
