The Centre for Data Ethics and Innovation (CDEI) recently published a report on online targeting. Online targeting uses algorithmic systems to decide what content is shown to different people online, based on predictions of their preferences and behaviours. It affects a significant proportion of the information seen by people online.
These systems have caused controversy through their use in micro-targeting in political advertising, where campaigners send highly tailored material to voters based on factors such as demographics (for example age and gender), interests or physical location. As the Electoral Commission has highlighted, this raises transparency problems: only the voter, the campaigner and the platform know who has been targeted with what material, and only the campaigner knows why the voter was targeted.
Online targeting has also been used to spread disinformation – a particular concern during the current COVID-19 pandemic.
Online targeting can improve individuals’ experience of shopping or browsing online by providing faster access to the content determined to be of interest to each user. The CDEI reports that the public appreciates the value of online targeting but has deep concerns about its potential for exploitation and about the accountability of the organisations that use it, and wants greater personal control over the use of their data. Similarly, industry recognises that there are limits to self-regulation and that proportionate regulatory action is required to increase accountability, transparency and user empowerment.
The report calls for greater regulation that does not restrict innovation. To be effective, the regulator would need investigatory powers, and it would need to anticipate and respond to changes in technology with the aim of guiding its positive development. Specifically, the CDEI recommends that the UK government’s proposed independent online harms regulator should fulfil this role, with its remit and duties scoped accordingly. The government has signalled its intention to appoint the UK’s telecoms regulator, Ofcom, as the online harms regulator, so the proposed powers would fall to it.
The recommendations strike a balance between protecting users and imposing costs on online platforms while enabling responsible innovation. Their aim is to support the UK to grow as a global leader in responsible innovation in data-driven technology, creating an environment that fosters:
- Evidence-based policymaking and research – building regulatory capability to support research into the impacts of online targeting and, through that research, a better understanding of complex social issues.
- Data intermediaries – to manage the use of personal data on behalf of individual users across multiple services.
- An AI audit market – to support operators to understand and mitigate risks, and to support regulators in understanding and assessing operators’ actions.
The report makes three sets of recommendations to enable the UK to realise the potential of online targeting, while minimising the risks posed by it:
- Companies and organisations should be held accountable for the potential harms posed by the use of online targeting systems. A new code should be introduced by the online harms regulator, developed in close coordination with other regulators such as the Information Commissioner’s Office and the Competition and Markets Authority, online platforms and civil society organisations. The code should set out best practice for risk management, transparency and the protection of potentially vulnerable users, and compliance with it should be overseen by the regulator.
- Online targeting should be more transparent so that society can understand the impacts of these systems and policy responses can be built on robust evidence. Recommendations include annual transparency reports, mandatory advertising archives and academic research into issues of significant public interest, empowered by the requirement that companies provide secure access to their data.
- Online platforms should improve the information and controls they offer to users over the way they are targeted, so that such systems are better aligned with individual preferences. Political posts and content reflecting influencers’ advertising activity should be clearly labelled. Further, clear standards for the ethical use of online targeting systems should be developed, to encourage public sector confidence in using these systems.
The report is intended to help shape thinking on the Online Harms Bill and the government has committed to respond to the CDEI’s report within six months. However, the current focus on dealing with COVID-19 may see the Bill and the response pushed back until the end of the year and possibly beyond.