Empowering children online: why the CoE says 'no' to blanket social media bans
As politicians rush to impose blanket social media bans on minors, recent Council of Europe recommendations urge a smarter approach. They stress that online safety measures must protect children without sacrificing their freedom of expression or cutting off essential digital access.
H/t to Petro Petiz Viana for sharing the recommendations first here.
Many politicians around the world are rushing to “do something” about minors online. Some individual European Union member states (e.g., Greece) have already adopted age-related restrictions for children and teens using social media (to be enforced early next year), and others are considering similar measures.
Experiences from countries such as the United Kingdom and Australia have already shown that excessive age-related restrictions often lead children to circumvent the rules, and sometimes push them into less secure, fringe online environments. Rather than admitting that the policy is not working, Australian policymakers are instead opting for a “Canberra effect”, calling on others to follow in their footsteps, as reported by Politico EU.
Not long ago, a group of scientists from around the world expressed concerns about hasty decisions on an age-based social media ban. As CCIA Europe's Daniel Friedlaender shared, Save the Children argued in its 2025 statement that blanket social media bans for children may have "serious unintended consequences for children", such as preventing them from finding useful information and support online, which is especially important for marginalised children who do not have a social safety net offline.
In this context, the recent Council of Europe recommendations for EU Member States on online safety and the empowerment of users and content creators, adopted on April 8, are a breath of fresh air.
There is no major U-turn: the recommendations do include calls to either enhance existing obligations on platforms or introduce new ones. However, the CoE also urges EU policymakers to ensure that actions aimed at protecting Europeans online (both children and adults) do not diminish their right to freedom of expression, undermine other EU treaties, or result in excessive measures. They stress that any actions taken should be evidence-based, while the much-debated age-verification measures should target primarily those platforms whose focus is on products, services, and content that are already illegal for children offline.
A gentle reminder about freedom of expression
The recommendations’ Articles 12 and 18 say it loud and clear: even though freedom of expression may disturb, offend, or shock parts of the population, this does not justify the adoption of measures that restrict it; on the contrary, views that challenge the status quo are, in fact, beneficial to democratic societies.
Article 18 further states that not all online risks require restrictive measures that could diminish the right to freedom of expression. Articles 38, 44, 54, and 55 call on EU Member States not to place excessive pressure on internet intermediaries and content creators, because this can “compel them to act as censors of speech on behalf of State authorities” (Article 54).
Moreover, internet intermediaries (including platforms) should not be held liable for third-party content that “they merely give access to or which they transmit or store”, unless they fail to take proactive measures after becoming aware that the content is illegal.
Article 44 summarises the scope of what type of content should be considered legal simply and clearly: “content that is lawful offline should be lawful online”.
What can Member States do besides imposing bans?
The recommendations draw the logical conclusion that to ensure online safety, a combination of proactive measures (both online and offline) is essential (Article 9); ergo, users should not only be protected but also empowered to protect themselves.
Article 21 states that measures relating to the online space should complement and build on “broader actions taken in the offline realm”. It further explains that EU Member States need a comprehensive, coordinated strategy that addresses the underlying causes of online abuse, whether rooted in social conditions or inequalities. Suggested examples of such strategies include educational initiatives to foster digital citizenship, strengthened media and information literacy, “community empowerment initiatives” and more.
They further argue that the online environment should be safe, trustworthy, and pluralistic, while also remaining free from “unjustified interference” and ensuring “maximised autonomy” for users (Article 2). They elaborate on this in Article 5, stating that, in addition to enhancing the transparency and accountability of platforms, policymakers should also promote “empowerment in society” and strengthen users’ awareness and knowledge of online risks.
Lastly, Article 66 outlines that the empowerment of users online should be achieved through evidence-based duties for platforms, including: 1) personalised design experiences (Article 66(a)), later reiterated in Article 71; 2) transparency (Article 66(b)); and 3) fair procedures in areas such as content moderation (Article 66(c)).
Avoid actions that may compromise online safety
An important point is made in Article 22, which states that EU Member States should refrain from actions that may compromise user safety online and reduce opportunities for protection and empowerment.
In my personal opinion, this is relevant not only to age-related social media bans (which may push children to either circumvent the rules or move to fringe and less safe websites), but also to measures such as the mass scanning of private messages under the CSAM Regulation, or “Chat Control 2.0”, which could effectively put an end to encrypted communication online.
Child-related recommendations: age-based restrictions should apply only to content that is already illegal offline
Article 24 reiterates the EU’s broader ambition to assess, address, and mitigate risks affecting children, and maintains the view that platforms should do more. At the same time, it states that any measures should take into account children’s age, situations of vulnerability, and evolving capacities. Moreover, such measures must uphold their right to freedom of expression. Again, this clearly suggests that banning children from platforms per se is not a smart approach.
“24. Measures to assess and address risks for children, mitigate harms, empower children and protect them should give primary consideration to the best interests of the child and take into account children’s age, situations of vulnerability and evolving capacities. Any such measures should uphold their rights, including the rights to freedom of expression and to private life.”
The most interesting part lies in Articles 75 and 76, where the CoE representatives essentially state that platforms should have tools to mitigate risks to children online, and that these tools should be regularly updated. However, measures such as age verification should be used to protect children from products, services, and content that are already legally restricted to them offline, rather than to “protect” them from being online at all. Such tools should primarily concern platforms that “predominantly provide services or content that is legally restricted to protect children”.
Overall, this part does not exclude the idea of an EU-wide introduction of age-verification tools, but it clearly states that they should only be used to protect children from things they should not have access to, online or offline, not to prevent them from using platforms at all.
“75. In addition to other appropriate risk mitigation measures that may be adopted by platforms and in line with Recommendation CM/Rec(2018)7 on Guidelines to respect, protect and fulfil the rights of the child in the digital environment, States should require the use of effective systems of age assurance to ensure children are protected from products, services and content in the digital environment which are legally restricted with reference to specific ages. In particular, such systems should be required for platforms that predominantly provide services or content that is legally restricted to protect children. Such systems should uphold human rights and use methods that respect freedom of expression and the protection of personal data and privacy and that are consistent with the best interests of the child. When requiring the implementation of such systems, States should provide safeguards to ensure they do not result in disproportionally excluding children from online spaces and restricting their right to participate in debates on matters of public interest. Safeguards should also be provided to ensure that these systems do not create or exacerbate exclusion from the online space of people in situations of vulnerability and at risk of discrimination.”
“76. States should require the development, production and regular updating by platforms of other age-appropriate and effective tools to mitigate risks for children in the online environment. Such tools, for either children or parents as appropriate, should give primary consideration to the best interests of the child and be developed and deployed taking into account children’s evolving capacities, in accordance with their age and maturity. They should not reinforce discriminatory attitudes, infringe the right of children to privacy or their best interests, or deny children the right to freedom of expression and information.”