
Technology may be making us unhealthy and miserable

Senior Research Associate Dr Sarah Steele, College Postdoctoral Associate Dr Christopher Markou (2014) and PhD student Tyler Shores organised the recent Intellectual Forum event 'Is technology making you miserable?'. Here, they consider what governments could do to help.

Social media and screens are omnipresent. Many are concerned about the amount of time we – and our children – spend on devices. Soon to be a father, Prince Harry has said that “social media is more addictive than drugs and alcohol, yet it’s more dangerous because it’s normalised and there are no restrictions to it”.

But worries are not just limited to personal use. Many schools and workplaces are increasingly delivering content digitally, and even using game-playing elements like point scoring and competition with others in non-game contexts to motivate and engage people.

This “always on” lifestyle means many of us can’t simply switch off. There are now claims that many of us are at risk of burnout as we find ourselves chronically stressed by hyper-connectivity. But is there evidence that so-called “screen time” is, in fact, bad for us? Or worse: is it making us miserable?

To answer this, the UK government has reviewed what we know about the impact of technology use on children, drawing on a nascent but robust body of research exploring these questions. The Australian government has done the same, with a focus on the physical effects of screen time. Governments around the world are drawing together evidence.

We know, for example, that studies have connected screen use with poorer attention span and academic performance, delayed development in children, greater stress and depressive symptoms, and a range of other physical and psychosocial harms.

When to act

While there are clearly correlations between increased screen use and psychosocial and physical health issues, correlation doesn’t mean causation. But without definitive scientific evidence, can we afford to ignore them? Should we refrain from making recommendations or regulations until there is direct proof, as the UK’s Royal College of Paediatrics and Child Health recently suggested?

From a public health perspective, the answer is a firm no. While evidence-based public health policy remains the gold standard, we have enough information to know that action is required. Definitive scientific evidence of a causal link between technology use, or “screen time”, and negative health outcomes is unnecessary to justify appropriate action. This is because what is ultimately at stake is public safety, health, and well-being. And of course, we might never find such evidence.

The “precautionary principle” gives us a basis to act. It holds that, even without scientific consensus, governments have a duty to protect the public from harm. Policy interventions are justifiable wherever there is even a plausible risk of harm. With correlations mounting, harm is more than plausible. Some governments and regulators are already acting. But what should be done? A few obvious actions stand out.

Moving forward

YouTube, to start with, has been described as “the great radicaliser” because of how its content recommendation engine leads people towards increasingly extreme content. Its algorithms have “learned” that people are drawn towards more extreme versions of whatever they started out searching for, and we keep watching in the hope that the next video will deliver it. This problem could be addressed by regulating content recommendation systems and disabling YouTube’s “autoplay” feature by default.
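The feedback loop described above can be illustrated with a toy simulation – a deliberately simplified sketch, not any real platform’s code, with all numbers and the “extremeness” scale invented for illustration. If a recommender optimises only for watch-through, and more extreme items happen to hold attention longer, a simple bandit-style algorithm will drift towards recommending the most extreme content almost exclusively:

```python
# Toy sketch of an engagement-maximising recommender (hypothetical numbers).
import random

random.seed(0)

# Ten hypothetical videos, indexed by "extremeness" 0..9.
# Assume, for illustration only, that watch-through rises with extremeness.
def watch_probability(extremeness):
    return 0.1 + 0.08 * extremeness  # 0.10 for the mildest, 0.82 for the most extreme

counts = [0] * 10  # times each video was recommended
wins = [0] * 10    # times it was watched to the end

def recommend(epsilon=0.1):
    """Epsilon-greedy: usually pick the video with the best observed
    watch-through rate, occasionally explore at random."""
    if random.random() < epsilon or sum(counts) == 0:
        return random.randrange(10)
    rates = [wins[i] / counts[i] if counts[i] else 0.0 for i in range(10)]
    return max(range(10), key=lambda i: rates[i])

for _ in range(5000):
    video = recommend()
    counts[video] += 1
    if random.random() < watch_probability(video):
        wins[video] += 1

# The most-recommended video ends up being one of the most extreme.
print(max(range(10), key=lambda i: counts[i]))
```

Nothing in the algorithm mentions extremity: it simply maximises a measured engagement signal, and the drift towards extreme content emerges from that objective alone – which is why the article argues regulation should target the recommendation objective and defaults, not just individual pieces of content.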

We also know that tech companies use elaborate strategies to keep eyes on screens. By exploiting the brain’s reward system, they’ve mastered how to keep people scrolling, clicking, and liking – and potentially make them addicted. Online marketing and product design in effect weaponise neuroscience, using the brain’s reward system to drive continuous engagement.

The same techniques are also used in workplaces, where competition and gamified approaches like targets or step counters drive increased performance levels; Amazon’s warehouses are an oft-cited example. This is something employment and human rights law will need to address, and government should investigate, especially as children are thought to be particularly vulnerable.

A broader problem is, as tech writer Shoshana Zuboff has masterfully documented, the way big data is collected and used against us. We know that Google, Facebook, Amazon, and other tech giants constantly collect our data, and then use this data to target individuals and drive particular behaviours and responses.

With “surveillance capitalism” now the business model of the internet, there are no easy solutions. What we urgently need is courage from government to rein in big tech’s excesses and most insidious harms. Of course, tech companies will resist. Lobbying and advocacy will be their weapons of choice for influencing laws and sustaining profitability. But it is critical that politicians and professional organisations prioritise public health over industry money.

An issue for governments

Thankfully, several governments have expressed a desire to “make the online world a safer place” and to take concrete steps towards regulating which techniques big tech can use on the public. A major step will be restricting behavioural advertising, as Germany has begun to do.

Of course, given that advertising accounted for the majority of Google’s revenue in 2018, we shouldn’t expect it to respond with anything but hostility when its core business model is threatened. It is encouraging that the UK government is taking the lead, with a recent proposal calling for a new regulator and for social media bosses to be legally liable for the harms their platforms cause. This would be a bold step in the right direction.

We can also restrict what personal data can be used to sell products to people, and how advertisements are presented – allowing users greater control over what they see. A return to contextual advertising, where users only see ads that are related to what they’re searching or browsing for, would be a more modest but nonetheless important step.

We should expect these tech firms to follow the playbook established by other industries that have faced regulation, most obviously tobacco. And so transparency mechanisms and robust reporting requirements must be put in place. We must also keep discussing our options, openly and widely.

It is key that we take a precautionary approach to industry-funded research by these tech giants in the same way we have done with tobacco industry funded research and bodies. While technology is part of our lives, how we understand it and how we regulate it must be in the interests of public health at large.

The opinions expressed are those of the authors.