Ads, algorithms, and adolescents

Children’s mental health is shaped as much by the spaces they inhabit online as by the world around them. Social media, gaming platforms, and digital classrooms offer connection and learning, but they are also commercial environments engineered to capture attention, sell products, and influence behaviour. These digital harms do not appear by accident — they are embedded in systems designed for profit. This feature looks beyond the headlines to examine how commercial actors, policy gaps, and everyday environments converge to affect children’s wellbeing, highlighting opportunities to create safer, fairer spaces across Europe.

Mark Petticrew and Alice Tompson from the Faculty of Public Health & Policy, London School of Hygiene & Tropical Medicine, and May van Schalkwyk from the Centre for Pesticide Suicide Prevention, University of Edinburgh, discuss.

Mental health conditions in children and young people are common, with anxiety and depression among the most prevalent. Worldwide, 8% of children and 15% of adolescents experience a mental disorder, and suicide is the third leading cause of death in 15-29 year-olds. While the causes are complex, the World Health Organization (WHO) notes that early negative experiences in digital spaces play a role and increase the risk of mental illness.

This is not surprising; the digital world is now one of the most powerful commercial environments ever created. Among the undoubted benefits — social support, friendship networks, education, entertainment, and digital skill-building — there are well-documented harms. Exposure to harmful content and abusive online experiences, such as bullying and sexual abuse, is consistently associated with poorer mental health in children.

Commercial pressures and online harms

Viewing social media platforms through a Commercial Determinants of Health (CDoH) lens can help us understand how these harms are created — from targeted advertising to addictive algorithms — and for what commercial purposes. Harms do not simply emerge from these systems; they are built-in features, designed to extract data, attention, and profit. As such, they represent a new and fast-evolving form of the CDoH.

The Lancet Commission on the Commercial Determinants of Health defines CDoH as “the systems, practices, and pathways through which commercial actors drive health and equity.” Often, the focus of CDoH research has been on industries such as tobacco, alcohol, fossil fuels, and unhealthy food. Each of these industries uses well-documented strategies to create and normalise harm and then profit from it. These strategies are shared across industries — the so-called corporate playbook. Many of the same tactics now also appear in the digital sphere.

Harms do not simply emerge from these [social media] systems; they are built-in features, designed to extract data, attention, and profit.

Social media as a commercial determinant of health

Corporate practices that damage health often include:

  • Political practices (e.g. lobbying, self-regulation, and policy substitution)
  • Scientific practices (shaping research agendas and disseminating selective evidence)
  • Marketing practices (engineering desire and consumption)
  • Reputation management (e.g. Corporate Social Responsibility campaigns, image-repair campaigns, industry-funded charities, greenwashing, and similar activities).

Social media companies, like many other industries, use these strategies at scale. Platforms are intentionally built to maximise engagement, using addictive design, variable reward systems, and algorithmic reinforcement loops — often targeting those most vulnerable, including adolescents. These features are part of the business model. As in other industries, the greatest profits come from those being harmed most: the heaviest users.

The corporate playbook: ads, algorithms, and influence

The parallels between social media and older harmful industries are striking. The tobacco industry knew its products were addictive and caused cancer and other diseases, but denied it for decades, sowing doubt about the causes. The fossil fuel industry knew about climate change in the 1950s and sowed doubt in the same way. The opioid and gambling industries designed addictive products while marketing them as safe — and Big Tech modelled the design of its products on the machines developed by the gambling industry.

Similarly, the social media industry knows its systems cause demonstrable psychological and social harm, and has sought to obscure this through selective evidence, PR campaigns, and strategic partnerships, as Zenone and colleagues have shown. Internal Facebook documents leaked in recent years show the company’s awareness of how its algorithms exploit teenage insecurities. One internal presentation described targeting 13–17-year-old girls as they delete selfies, so that advertisers could reach them at moments of vulnerability.

If social media is a commercial determinant of health, the first step is recognising that harm is not simply a matter of individual ‘misuse.’ The problems are structural.

Lessons from history: deny, distract, delay

The social media industry is following a well-tested corporate script: deny, distract, and delay. It denies and distracts from harms, and delays regulation by advocating vague, ineffective self-regulatory measures and guidelines, and by funding academics to maintain the narrative that ‘more research is needed’ — just as the tobacco industry once did. So what can public health and policymakers do in response? There are lessons to be learned from other industries, one of which is not to repeat earlier, ineffectual attempts to persuade harmful industries to ‘Do Better.’ The evidence base tells us quite a lot about what not to do:

  • Don’t rely on self-regulation. Voluntary codes have been shown repeatedly to fail in every major harmful industry. Self-regulation and voluntary guidelines are exploited by such industries and are used to maintain business as usual.
  • Be wary of partnerships. Public-Private Partnerships with industries are frequently advocated for, including by industries themselves. These come in various guises, such as calls for multistakeholder partnerships and ‘whole of society’ approaches. The evidence shows that such collaborations with companies that are creating the harm are used to manipulate policymakers and the policy process, and to delay effective action. As Marks (2019) observes, these arrangements often normalise collaboration with corporate actors that are “causing or exacerbating the very problems that public health agencies are trying to solve.” In digital health, this can mean governments and NGOs partnering with social media platforms to promote online safety or mental health awareness — all while those same platforms continue to profit from harmful engagement algorithms, surveillance advertising, and manipulative design.
  • Do not place the emphasis on individual responsibility and educational approaches. Harmful industries always seek to shift the blame to users and consumers, away from structural solutions. In parallel, this allows them to suggest that the solution also lies with individual-level responses. In the case of social media, this means more education and digital literacy. While these are important, we need to remember that education alone cannot combat engineered, commercially driven addiction and exposure to harmful content and marketing of unhealthy products.

These are some of the lessons we have learned from analyses of other industries. Some have argued that we can go further and apply regulatory lessons learned from the Framework Convention on Tobacco Control.

In an analysis of the contribution of social media to the spread of misinformation, Denniss and Lindberg have argued that the success in addressing the impacts of tobacco suggests that international law to regulate social media may be a fruitful way forward.

Whether this is the case or not, a CDoH perspective tells us that solutions must consider the design features of harmful products, including how and why they came to be and how they contribute to harms and profits. Harmful business models must also be part of health policy.

Why education alone isn’t enough

If social media is a commercial determinant of health, the first step is recognising that harm is not simply a matter of individual ‘misuse.’ The problems are structural. Public health responses that rely solely on digital literacy or user education are insufficient and can even be counterproductive, as they shift blame away from industry and onto users — echoing the ‘responsible drinking’ or ‘responsible gambling’ campaigns of other sectors.

Instead, action should begin by stopping what we already know does not work:

  • Ending reliance on self-regulation that delays effective policy.
  • Halting harmful partnerships that legitimise exploitative practices.
  • Challenging narratives that frame harms as ‘user error.’
  • Focusing on upstream interventions that address the design of platforms and business models themselves.

We can also raise public and policy awareness of the harms and how they are caused, using established counter-marketing techniques and making the evidence base accessible and relatable through the use of various media, such as film.

If a product or service is not safe, we need to ask why it is even being marketed to users. That principle, familiar from tobacco control and food policy, applies equally to the digital domain.

Structural change over individual blame

The social media industry is not unique; it is the latest and perhaps most sophisticated manifestation of a longstanding commercial pattern. Recognising it as such allows us to move beyond blaming individuals and instead regulate systems that profit from harm. What succeeds are structural policies that reduce exposure and power imbalances, from transparency requirements to advertising restrictions and design regulation.

As public health begins to treat digital environments as part of the commercial determinants of health, the challenge — and opportunity — is to prevent history from repeating itself through a new vector of harm.

The documentary

Watch the documentary film, ‘Unmasking influence’ and learn about how commercial actors influence policy and public health.

Mark Petticrew

Mark Petticrew is Professor of Public Health at the London School of Hygiene and Tropical Medicine (LSHTM). He is Director of the NIHR Public Health Policy Research Unit.

His main research interests are in evidence-based policymaking, and health inequalities. His research has a particular focus on the commercial determinants of health – in particular, the influence of unhealthy commodity industries on health (e.g. through the promotion of tobacco, alcohol, and unhealthy foods, and gambling products). Recent research includes analyses of misinformation disseminated by industry corporate social responsibility (CSR) bodies, and the common “playbook” which is used across these and other industries, including pharma, tech and digital media industries.
