The rapid transformation of the digital economy has triggered a fundamental reassessment of public-order regulation within platform-driven markets. Digital intermediaries no longer function merely as technical conduits but have evolved into autonomous ecosystems in which transactions, information exchange and behavioural steering are concentrated. This development has created a new institutional reality in which private infrastructures perform public functions, ranging from the distribution of information to market access and the safeguarding of societal security. This structural shift increases the complexity of the supervisory landscape and intensifies the need for comprehensive legal, technological and governance-oriented reconsideration. The underlying challenges are significant: asymmetries in information and power, internal moderation processes with public implications, and the increasing interdependence of commercial optimisation models with democratic values and regulatory objectives.
At the same time, the digital economy is rapidly evolving towards platformisation as the dominant market architecture. Economies of scale, algorithmic personalisation and data-driven business models introduce new forms of systemic risk. These risks manifest not only in market dynamics but also in societal domains such as information integrity, safety and consumer protection. As a result, legal frameworks are shifting from purely competition-law-based doctrines to integrated regulatory instruments that combine competition law, consumer protection, data law and sector-specific obligations. The enforcement challenge therefore becomes not only legal and technical in nature but also governance-strategic: supervisory authorities must intervene in environments where autonomy, transparency and traceability are persistently under pressure. Against this background, the following sections provide an extensive elaboration of the principal supervisory themes in turn.
Platformisation as a New Enforcement Domain for Supervisory Authorities
Platformisation constitutes a structural reconfiguration of economic relationships and requires a substantial broadening of existing enforcement perspectives. Traditional supervisory models are typically designed for bilateral transactions and linear distribution chains, whereas platforms operate as multilateral ecosystems in which behavioural regulation is partially delegated to private actors. Supervisory authorities are thereby confronted with complex governance architectures in which internal policies, moderation rules and algorithmic recommendation systems influence the normative frameworks that underpin public interests. The platform structure itself introduces a new category of intermediary behaviour that must be subject to regulatory oversight.
Moreover, platformisation requires supervisory authorities to operate in digital environments where speed, scale and data dynamics far exceed those in traditional markets. Effective monitoring of platform processes necessitates access to technical interfaces, insight into data flows and the ability to interpret automated decision-making systems. This shift demands advanced expertise in data analytics, algorithmic risk assessment and digital governance structures. Without such expertise, a structural gap may emerge between supervisory capabilities and the actual complexity of platform market mechanisms.
Platformisation also raises fundamental institutional questions regarding the relationship between private rulemaking—such as internal terms of service, content moderation policies and user agreements—and public-law enforcement norms. Platforms increasingly act as regulatory bodies, and in some instances as quasi-judicial supervisory entities, by governing user behaviour through automated detection systems and moderation mechanisms. This creates a hybrid framework in which public and private regulation overlap, prompting critical considerations regarding legitimacy, legal certainty and accountability.
Balancing Innovation with the Enforcement of Public Order
The digital economy is strongly driven by innovation, yet this dynamic may conflict with the obligation to uphold public-order interests. Regulation in platform markets therefore requires a careful balancing of technological advancement with the safeguarding of societal security, market integrity and data protection. Enforcement mechanisms must be designed in a manner that does not unduly hinder innovation, while at the same time providing effective intervention where risks emerge for consumers, markets or societal stability. This balance requires nuanced and proportionate instruments that facilitate both innovation and corrective oversight.
Supervisory authorities must also anticipate the pace at which digital innovation evolves. Emerging functionalities—including generative algorithms, real-time data processing and automated recommendation systems—can rapidly reshape how information circulates, how transactions occur and how risks materialise. Enforcement cannot therefore remain exclusively reactive; supervisory models must incorporate preventive detection, systematic risk assessment and timely intervention. Maintaining an appropriate balance between innovation and public-order enforcement demands continuous recalibration of regulatory norms and supervisory strategies.
Furthermore, platform markets are characterised by structural tensions between commercial optimisation and public-order priorities. Commercial algorithms typically optimise for engagement, revenue generation or efficiency, whereas public-order objectives focus on safety, reliability and legal certainty. These interests are not always aligned, requiring enforcement mechanisms capable of ensuring that innovation does not compromise fundamental societal protection standards. Regulation in this domain therefore necessitates transparency, accountability and the ability to subject commercial optimisation choices to public-law constraints.
Risks of Market Power, Algorithmic Steering and Information Asymmetry
The concentration of market power among dominant platforms introduces structural risks to competition, market access and user autonomy. Platforms with extensive datasets and advanced algorithmic systems are able to shape market behaviour, restrict entry by competitors and grant preferential treatment to their own products or services. Network effects and high switching costs further reinforce platform dominance, making such market power not merely the result of competitive efficiency but also of structural lock-in mechanisms. Effective oversight of market power requires detailed analysis of data flows, interoperability barriers and internal preference algorithms.
Algorithmic steering adds an additional layer of complexity. Recommendation systems and optimisation models significantly influence the visibility of information, products and services. These systems often function as a form of concealed intermediation, with optimisation parameters that are opaque to users and sometimes only partially understood even within the platform itself. As a result, risks may arise such as discrimination, manipulation, competitive distortion or the amplification of information silos. Effective supervision demands deep insight into algorithmic logic, training data, feedback mechanisms and the key performance indicators used to optimise system outputs.
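To make the notion of opaque optimisation parameters more tangible, the following minimal sketch in Python illustrates how a small set of weighting choices can determine which items gain visibility. The signal names and weights are purely hypothetical and do not describe any actual platform; the sketch merely indicates why disclosure of such parameters matters for supervisory assessment.

```python
# Illustrative sketch only: a toy ranking function in which assumed weights
# steer visibility. No real platform's parameters are represented here.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # estimated probability of user interaction
    expected_revenue: float      # estimated advertising value per impression
    quality_score: float         # proxy for reliability or editorial quality


# Hypothetical optimisation weights: small changes here reshape what users see,
# which is why regulators seek insight into exactly these parameters.
WEIGHTS = {"engagement": 0.6, "revenue": 0.3, "quality": 0.1}


def rank(items: list[Item]) -> list[Item]:
    """Order items by a weighted score; the weighting itself is the steering."""
    def score(i: Item) -> float:
        return (WEIGHTS["engagement"] * i.predicted_engagement
                + WEIGHTS["revenue"] * i.expected_revenue
                + WEIGHTS["quality"] * i.quality_score)
    return sorted(items, key=score, reverse=True)
```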
Information asymmetry plays a central role in platform ecosystems. Users, business partners and public authorities typically possess significantly less information than the platform, which has full visibility over user behaviour, transaction flows, performance metrics and risk indicators. This asymmetry impedes the ability to assess conduct, detect abuse and implement effective enforcement measures. Regulation must therefore incorporate transparency obligations, supervisory access to data and safeguards for equal treatment, ensuring that market power and algorithmic influence do not result in systemic market distortions or societal harm.
Illegal Content, Disinformation and Traceability Obligations
The presence of illegal content and disinformation poses significant risks to public safety, societal stability and the integrity of the digital information environment. Platforms play a central role in the distribution and visibility of such content, making effective detection, removal and preventive mitigation essential. Legal frameworks impose obligations on platforms that range from notice-and-action mechanisms to proactive risk assessments, depending on the size of the platform and the nature of the associated risks. The key challenge lies in implementing these obligations in a manner that upholds both legal safeguards and effective content governance.
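By way of illustration, the sketch below outlines one conceivable shape of a notice-and-action workflow. The field names, decision categories and the rule that only manifestly illegal content is removed without human review are assumptions introduced for explanatory purposes; they do not reproduce any specific statutory procedure.

```python
# Minimal sketch of a notice-and-action workflow; the schema and decision rule
# are hypothetical and intended only to illustrate the mechanism.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    REMOVED = "removed"
    NO_ACTION = "no_action"


@dataclass
class Notice:
    content_id: str
    reporter: str
    alleged_violation: str   # e.g. "illegal hate speech", "counterfeit offer"
    explanation: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ActionRecord:
    notice: Notice
    decision: Decision
    statement_of_reasons: str  # the reasoning owed to the affected user
    decided_at: datetime


def handle_notice(notice: Notice, manifestly_illegal: bool) -> ActionRecord:
    """Toy decision step: remove manifestly illegal content, otherwise queue for review."""
    if manifestly_illegal:
        decision, reasons = Decision.REMOVED, "Content manifestly illegal; removed."
    else:
        decision, reasons = Decision.NO_ACTION, "Queued for human review; no interim action."
    return ActionRecord(notice, decision, reasons, datetime.now(timezone.utc))
```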
Disinformation presents a distinct challenge due to the speed at which misleading content spreads and the complexity of the underlying dissemination mechanisms. Manipulative campaigns may employ automated accounts, micro-targeting and algorithmic amplification, placing platforms in an active role with respect to both detection and mitigation. Legal frameworks therefore require ongoing monitoring, transparency over recommendation systems and detailed reporting on risk-mitigation strategies. Supervisory authorities must assess the proportionality and effectiveness of platform measures in this domain.
Traceability obligations are essential for ensuring the ability to identify harmful behaviour. Without insight into the origin of content, advertising flows or account structures, malicious actors may operate with little constraint. Supervisory frameworks must therefore impose requirements concerning verification processes, automated identification of suspicious patterns and the secure preservation of relevant data for investigative purposes. At the same time, these obligations must include robust safeguards to ensure proportionality and the protection of privacy, preventing intrusive measures that go beyond what is necessary for public-order objectives.
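As a simplified illustration of the automated identification of suspicious patterns, the sketch below flags accounts whose posting cadence exceeds a fixed threshold. The event schema and the threshold are assumptions; operational detection systems combine many more signals and must respect the proportionality and privacy safeguards described above.

```python
# Illustrative sketch, not a prescribed method: flag account clusters whose
# posting cadence suggests coordinated or automated behaviour.
from collections import Counter


def flag_suspicious_accounts(events: list[dict], max_posts_per_minute: int = 20) -> set[str]:
    """events: [{'account_id': str, 'minute': int}, ...] (hypothetical schema).
    Returns the account ids that exceed the per-minute posting threshold."""
    per_minute = Counter((e["account_id"], e["minute"]) for e in events)
    return {account for (account, _), count in per_minute.items()
            if count > max_posts_per_minute}
```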
Combined Supervision Themes: Competition Law, Consumer Protection and Data Law
Platforms operate at the intersection of multiple legal domains, making it difficult to address supervisory challenges within a single regulatory framework. Competition law, consumer protection and data law are increasingly interwoven in platform markets because market conduct, user treatment and data practices are mutually reinforcing. A single business practice may simultaneously be misleading under consumer protection law, exclusionary under competition law and inconsistent with data-minimisation principles under data law. Modern supervision must therefore take the form of an integrated framework capable of addressing cross-disciplinary risks.
This interconnectedness requires structural cooperation and information sharing among supervisory authorities. Competition regulators analyse data-driven market power, consumer protection authorities focus on fairness and transparency in user interfaces, and data regulators oversee compliance with data-processing obligations. Fragmented enforcement results in regulatory gaps and may allow systemic risks to persist unchecked. Inter-institutional coordination is therefore an essential pillar of contemporary platform supervision.
Furthermore, this integrated approach necessitates that companies develop internal governance structures in which compliance is not segmented along traditional legal categories but organised around risk domains and process integrity. Platforms must be able to identify and address the interaction between competition risks, data governance requirements and consumer-protection obligations within a single coherent compliance framework. For supervisory authorities, this shift enables a focus on systems, processes and governance structures rather than on isolated infringements. This approach facilitates structural behavioural change and contributes to lasting enforcement outcomes.
Transparency Requirements for Recommender Systems and Platform Moderation
Transparency regarding recommender systems constitutes a crucial element of modern platform regulation, as these systems largely determine which information, products and interactions become visible to users. Transparency obligations do not focus solely on disclosing general operating principles; they also encompass clarifying optimisation objectives, parameters used, prioritisation mechanisms and the ways in which feedback loops influence algorithmic outputs. Such transparency is essential to prevent recommender algorithms from creating opaque power structures that are difficult for regulators, business users and end users to assess. By providing insight into the functional architecture of these systems, transparency enables effective risk evaluation and a more thorough assessment of proportionality, consistency and potential discriminatory effects.
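One conceivable form of such transparency is a machine-readable disclosure of the system's objectives, main parameters and feedback loops. The sketch below uses assumed field names and values; it is not a format prescribed by any particular legal instrument.

```python
# Hypothetical machine-readable disclosure for a recommender system; all
# names and values are assumptions used for illustration only.
recommender_disclosure = {
    "system": "feed-ranking",
    "optimisation_objectives": ["expected engagement", "session length"],
    "main_parameters": [
        {"signal": "predicted_click_probability", "effect": "increases ranking"},
        {"signal": "content_age_hours", "effect": "decreases ranking"},
        {"signal": "source_reliability_score", "effect": "increases ranking"},
    ],
    "user_controls": ["chronological feed option", "topic muting"],
    "feedback_loops": "user interactions retrain the click-probability model weekly",
}
```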
Transparency in platform moderation additionally requires clarity on internal policies, enforcement criteria, escalation processes and the application of automated detection systems. Moderation decisions often have a substantial impact on freedom of expression, access to digital markets and reputation mechanisms; thus, adequate reasoning and clear explanations are essential. Transparency ensures that users understand the rationale behind removals, restrictions or ranking changes and strengthens legal certainty and predictability within the digital ecosystem. For regulators, such transparency forms a necessary tool for assessing consistency, compliance with legal standards and the effectiveness of internal governance processes.
Furthermore, combining transparency obligations for both algorithmic recommendation and platform moderation creates an integrated framework that enables regulators to evaluate the interplay between detection, recommendation and enforcement. By revealing how these systems influence each other – for example, when moderation rules affect algorithmic visibility – a more nuanced understanding emerges of the actual power exercised within the platform. Transparency therefore functions not only as a legal requirement but also as a structural governance instrument that supports accountability, proportionality and reviewable decision-making within platform ecosystems.
Cross-Border Liability of Platforms for User Conduct
The cross-border nature of digital platforms presents significant challenges for defining and enforcing liability for user conduct. Platforms often operate across multiple jurisdictions and serve an international audience, with content, transactions and interactions flowing freely between states that have divergent legal regimes. This results in a complex network of jurisdictional questions, applicable laws and mutual enforcement mechanisms. Regulation must provide a coherent approach that prevents malicious actors from exploiting differences between national rules while simultaneously offering clarity regarding platform responsibilities in relation to facilitating user behaviour.
Liability is closely tied to the degree of control a platform exercises over processes such as distribution, moderation, recommendation and transaction handling. When a platform exerts structural influence over the visibility or circulation of content, this may lead to enhanced responsibility for monitoring compliance with laws and regulations. Cross-border obligations therefore require platforms to implement effective detection and escalation mechanisms that can be applied consistently across jurisdictions. This applies equally to illegal content, deceptive commercial practices and harmful user behaviour facilitated by algorithmic processes or technical infrastructure.
Strong international coordination between regulatory authorities is also essential to prevent legal fragmentation and enforcement gaps. Liability can only be effectively realised when information-sharing, system interoperability and mutual recognition of enforcement measures are structurally supported. This requires harmonisation of normative principles such as transparency, proportionality and traceability, ensuring that platforms are not confronted with contradictory obligations and regulators can effectively address cross-border risks. Within this context, cross-border liability functions as a key instrument for safeguarding digital public order in a global and decentralised ecosystem.
Risk-Based Supervisory Models for Very Large Platforms
Very large platforms constitute a category with heightened systemic risks due to their scale, network structures and societal influence. Risk-based supervisory models therefore focus on identifying, assessing and prioritising risks arising from algorithmic steering, information distribution, market power and infrastructure dependence. These models require in-depth analysis of both external effects and internal governance systems. By delineating risk domains – such as content integrity, safety risks, market dynamics and data processing – a structured approach emerges that enables regulators to target interventions where societal impact is potentially greatest.
Risk-based models also require platforms to conduct detailed risk assessments and document internal mitigation structures. This includes processes such as periodic audits of algorithmic systems, stress testing of moderation and detection mechanisms and evaluating potential systemic disruptions caused by technical or organisational shortcomings. These self-assessments must be supported by reproducible methodologies, enabling regulators to determine whether risk mitigation measures are proportionate and effective. As a result, enforcement shifts from incident-driven action to structural risk management within the platform.
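A minimal sketch of such a reproducible methodology is a likelihood-impact risk register, shown below with assumed scales, domains and example entries. Actual assessment frameworks are considerably richer, but the principle that mitigation effort should track a documented score remains the same.

```python
# Sketch of a simple risk register with likelihood x impact scoring; the
# scales, domains and example entries are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Risk:
    domain: str        # e.g. "content integrity", "market dynamics"
    description: str
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (systemic)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def prioritise(risks: list[Risk]) -> list[Risk]:
    """Highest-scoring risks first, so mitigation effort tracks potential impact."""
    return sorted(risks, key=lambda r: r.score, reverse=True)


register = prioritise([
    Risk("content integrity", "amplification of illegal content", 3, 5),
    Risk("data processing", "excessive retention of user data", 4, 3),
    Risk("market dynamics", "self-preferencing in search results", 2, 4),
])
```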
Additionally, risk-based supervisory models reinforce the need for continuous monitoring and dynamic adjustment of supervisory strategies. Given the rapid pace of technological development, risks may emerge unexpectedly or increase in severity. Regulators must therefore have access to real-time information, technical systems and a range of differentiated intervention tools, from information requests to binding measures. Risk-based supervision thus becomes a future-proof instrument that aligns with the scale, complexity and societal relevance of very large platforms.
Interoperability and Governance Obligations
Interoperability is an important tool for mitigating market power, strengthening market entry opportunities and enhancing user autonomy. By requiring platforms to open interfaces and technical protocols, an environment is created in which services can interact without relying on a single dominant infrastructure. Interoperability reduces lock-in effects, enhances user choice and stimulates innovation by lowering technical barriers for new entrants. Regulation must, however, establish detailed technical and organisational standards to ensure that interoperability obligations are both effective and do not introduce security risks or unintended data flows.
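By way of illustration, the sketch below shows a versioned interoperability payload and the validation step a receiving service might apply. The schema, field names and accepted values are assumptions rather than an existing standard; they serve only to indicate the level of technical detail such obligations would need to specify.

```python
# Hypothetical interoperability payload and validation step; the schema is
# an assumption introduced for illustration, not an existing standard.
REQUIRED_FIELDS = {"schema_version", "sender_service", "recipient_id", "payload_type", "payload"}


def validate_interop_message(message: dict) -> list[str]:
    """Return a list of problems; an empty list means the message is acceptable."""
    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS - message.keys()]
    if message.get("schema_version") not in {"1.0", "1.1"}:
        problems.append("unsupported schema_version")
    if message.get("payload_type") not in {"text_message", "delivery_receipt"}:
        problems.append("unknown payload_type")
    return problems


example = {
    "schema_version": "1.0",
    "sender_service": "service-a.example",
    "recipient_id": "user-123@service-b.example",
    "payload_type": "text_message",
    "payload": {"text": "hello"},
}
assert validate_interop_message(example) == []
```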
Governance obligations provide an additional framework enabling regulators to ensure that platforms maintain proper internal structures for decision-making, risk management and transparency. Governance standards include the establishment of responsible functions, oversight of algorithmic systems, documentation of decision-making processes and escalation procedures for system risks. When governance is organised centrally, compliance no longer depends on ad-hoc measures or isolated processes; instead, governance structures become the backbone of sustainable compliance and provide regulators with tools to detect and remedy structural deficiencies.
The combination of interoperability and governance obligations creates an integrated framework in which both external and internal power structures are regulated. Interoperability reduces dependencies on dominant systems, while governance ensures that internal processes remain transparent, accountable and auditable. This dual structure enables regulators to address market dominance, information asymmetry and algorithmic opacity in a systemic manner. As such, interoperability and governance obligations form an essential foundation for a future-proof digital ecosystem that genuinely protects public-order interests.
Public–Private Cooperation in Platform Enforcement
Public–private cooperation is indispensable for effective enforcement within platform ecosystems because both public authorities and private platforms possess unique and complementary sources of information, control and intervention capabilities. Platforms have access to real-time data and technical infrastructure, while regulators provide the legal framework, enforcement priorities and assessments of proportionality. Effective cooperation therefore requires structural agreements on data-sharing, operational protocols and mutual role allocation, ensuring that enforcement actions are not hindered by fragmentation or miscommunication. Such cooperation must remain anchored in the rule of law so that enforcement retains its integrity and consistency.
Public–private cooperation also requires platforms to develop internal structures that facilitate collaboration with regulators. This includes specialised compliance teams, escalation processes for incidents and mechanisms for timely reporting of heightened risks. By institutionalising these structures, platforms can contribute to efficient and predictable enforcement dialogue. Meanwhile, regulators must develop guidelines ensuring that information exchange remains proportionate and necessary and that sensitive or personal data is adequately protected. This prevents cooperation from producing unintended risks or misuse of information.
Furthermore, public–private cooperation is essential for addressing cross-border risks such as illegal content, fraud or systemic manipulation. By establishing joint protocols – for instance on detection, removal, escalation or international enforcement requests – risks can be contained more quickly and effectively. Public–private cooperation thus becomes a strategic pillar of modern supervision by combining the speed of private detection mechanisms with the legitimacy and legal authority of public enforcement. Strengthening this cooperation delivers a robust, proportionate and future-oriented enforcement framework suitable for the scale and complexity of digital platform markets.
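As an illustration only, the structure below sketches what a platform's escalation report under such a joint protocol might contain. The fields, identifiers and values are hypothetical and are not drawn from any existing reporting format.

```python
# Hypothetical escalation report from a platform to a supervisory authority;
# all fields and values are illustrative assumptions.
escalation_report = {
    "report_id": "2025-000123",
    "risk_category": "coordinated fraud",
    "jurisdictions_affected": ["NL", "DE", "FR"],
    "detection_method": "automated pattern detection, confirmed by human review",
    "interim_measures": ["accounts suspended", "payment flows frozen"],
    "data_preserved": True,   # retained for subsequent investigative requests
    "requested_action": "cross-border information sharing with competent authorities",
}
```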

