DDoS Attacks

DDoS attacks constitute a distinct category of digital disruption incidents in which the primary objective is not covert intrusion or the theft of data, but rather the (temporary) unavailability or material destabilisation of a website, network, platform, or online service. The core mechanism involves artificially escalating the load placed on infrastructure so that capacity—bandwidth, compute resources, session tables, connection pools, or application-layer limits—is consistently exceeded. The operational manifestation can vary widely: a sudden traffic spike saturating an uplink, a stream of protocol requests pushing network components into unstable states, or an ostensibly “normal” sequence of HTTP or API calls that, due to volume and timing, targets precisely the weak points of the application layer. DDoS is therefore not a single, uniform phenomenon, but an umbrella term covering multiple attack types, each with its own technical signature, escalation risk, and evidentiary challenges.

In practice, the consequences are rarely confined to the technical layer. Downtime and severe performance degradation translate swiftly into lost revenue, failed transactions, disrupted supply-chain processes, breached service levels, and substantial mitigation and recovery costs. Reputational harm may arise through the public visibility of the outage, escalation on social media, or a diminished perception of reliability among customers, patients, passengers, or other user groups. In commercial relationships, a DDoS incident can give rise to contractual claims, for example where availability commitments are not met or where security obligations are alleged to have been inadequately discharged. In regulated sectors, additional questions may arise, including whether the incident triggers notification requirements or indicates deficiencies in organisational and technical safeguards. At the same time, a DDoS attack can form part of a broader strategy in the underlying factual context: as an extortion tool (“pay or stay down”), as a diversion during concurrent data theft, or as leverage in conflict situations ranging from competitive disputes to ideologically motivated activity.

Technical Forms and the Layered Nature of DDoS

Volumetric attacks are designed to exhaust available bandwidth or saturate upstream capacity, such that the network path to the target environment simply becomes congested. These attacks are often identifiable through exceptionally high traffic volumes and patterns inconsistent with normal user behaviour, such as sudden surges of UDP traffic or anomalous traffic directed at specific ports. The technical impact is not confined to the target itself; it can also materialise at transit providers, internet exchanges, or shared infrastructure components. In environments with shared uplinks or shared DDoS scrubbing capabilities, collateral disruption may occur, affecting other services hosted in the same environment. This chain effect makes it essential, for incident analysis purposes, to distinguish between where the traffic is observed and where the actual availability degradation occurs.
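
By way of illustration, the following simplified sketch (in Python) flags a volumetric surge by comparing per-minute byte counts against a short historical baseline; the flow-record fields, figures, and the three-sigma threshold are assumptions chosen for the example rather than an operational detection rule.

from collections import defaultdict
from statistics import mean, stdev

# Hypothetical flow records: (minute, protocol, dst_port, bytes). In practice these
# would come from netflow/sFlow exports; the field names here are illustrative only.
flows = [
    (0, "TCP", 443, 80_000_000), (1, "TCP", 443, 82_000_000),
    (2, "TCP", 443, 79_000_000), (3, "UDP", 53, 78_000_000),
    (4, "UDP", 123, 950_000_000), (5, "UDP", 123, 1_400_000_000),  # sudden UDP surge
]

# Aggregate bytes per minute, regardless of protocol.
per_minute = defaultdict(int)
for minute, proto, port, nbytes in flows:
    per_minute[minute] += nbytes

# Flag minutes whose volume exceeds the baseline mean by a chosen multiple of the
# standard deviation. The multiplier (3x) is an assumption, not an operational rule.
baseline = [per_minute[m] for m in sorted(per_minute)][:4]
threshold = mean(baseline) + 3 * stdev(baseline)

for minute in sorted(per_minute):
    if per_minute[minute] > threshold:
        print(f"minute {minute}: {per_minute[minute]:,} bytes exceeds baseline threshold")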

Protocol attacks target weaknesses or constraints in network and transport protocols, such as the depletion of stateful resources in firewalls, load balancers, or edge routers. In such scenarios, relatively modest bandwidth may still cause substantial disruption because the attack forces resource-intensive processing per packet or per session. Examples include variants of SYN floods, fragmentation patterns, or manipulation of handshake mechanisms, intended to fill session tables, trigger timeouts, or disable hardware acceleration. The forensic picture is typically more complex than with purely volumetric attacks, because the data stream is less “coarse” and legitimate traffic at packet level can sometimes closely resemble malicious traffic. Evidentiary assessments therefore often rely heavily on specific telemetry, including netflow, conntrack statistics, load balancer metrics, and time-series data.
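
A minimal sketch of reading that kind of telemetry, assuming conntrack-style counters with illustrative names and an arbitrary 30% threshold, might look as follows; it shows how a growing gap between SYNs received and connections established can be surfaced as an indicator of state-table pressure.

# Illustrative conntrack-style counters per interval; the counter names and the
# 0.3 ratio threshold are assumptions for this sketch, not vendor-defined values.
intervals = [
    {"t": "12:00", "syn_received": 12_000, "established": 11_500},
    {"t": "12:01", "syn_received": 13_000, "established": 12_400},
    {"t": "12:02", "syn_received": 480_000, "established": 14_000},  # half-open build-up
]

for row in intervals:
    half_open = row["syn_received"] - row["established"]
    ratio = half_open / row["syn_received"]
    if ratio > 0.3:
        print(f"{row['t']}: {half_open:,} half-open connections "
              f"({ratio:.0%} of SYNs) - consistent with state-table exhaustion")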

Application-layer attacks (Layer 7) are engineered to overwhelm web servers, APIs, and backend services by simulating requests that appear valid yet, in combination, consume disproportionate server-side resources. Examples include intensive search queries, repeated retrieval of dynamic pages, the triggering of complex database queries, or abuse of endpoints that generate large objects. In these cases, the attacker does not necessarily send large volumes of data but instead induces “work” on the server side. The boundary between legitimate peak demand due to marketing campaigns and malicious load can be narrow in this domain, making the interpretation of logs, request patterns, and user-agent variability critical. Moreover, contemporary attacks are frequently hybrid: an initial volumetric phase may be used to overwhelm monitoring and mitigation, followed by an application-layer phase focused on core functionality. This layered approach has direct implications for attribution, damages assessment, and the evaluation of intent and foreseeability.
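
The following sketch illustrates one such log-based heuristic, assuming simplified access-log tuples and an arbitrary choice of which endpoints count as resource-intensive; the cut-off used is illustrative and would in practice be set against a measured baseline.

from collections import Counter

# Simplified access-log tuples: (client_ip, path, status). Real analysis would parse
# full log lines; the endpoints treated as "expensive" here are assumptions.
requests = [
    ("203.0.113.7", "/search?q=a*", 200), ("203.0.113.7", "/search?q=b*", 200),
    ("203.0.113.7", "/search?q=c*", 200), ("198.51.100.2", "/product/42", 200),
    ("203.0.113.7", "/export/full-catalog", 200), ("198.51.100.9", "/", 200),
]
expensive_prefixes = ("/search", "/export")

# Count how many requests per client hit resource-intensive endpoints.
hits = Counter(ip for ip, path, _ in requests
               if path.startswith(expensive_prefixes))

# Flag clients whose share of expensive requests is disproportionate; the cut-off
# of 3 requests is purely illustrative and would be set against a real baseline.
for ip, count in hits.items():
    if count >= 3:
        print(f"{ip}: {count} requests to expensive endpoints in window")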

Botnets, Reflection, and Amplification in Practice

Many DDoS campaigns rely on botnets: networks of compromised devices ranging from IoT endpoints and routers to hijacked servers. The distribution of sources across multiple countries and networks complicates blocking and filtering, while device heterogeneity produces diverse traffic signatures. Control of a botnet may be exercised through dedicated command-and-control infrastructure, but also through more opportunistic mechanisms such as hardcoded target lists, peer-to-peer coordination, or abuse of public platforms. From a technical analysis perspective, it is therefore important to distinguish between the executing source traffic (the bots) and the orchestration or control layer (C2), as these layers present different evidentiary pathways. Traffic observations typically point to the bots, whereas proof of orchestration usually requires other sources, such as infrastructure administration artefacts, domain registrations, server images, chat logs, or payment flows.


Reflection and amplification techniques increase attack power by abusing misconfigured servers that send response traffic to the victim. The classic pattern involves a relatively small request sent with a spoofed source address (the victim’s address), prompting a much larger response that is reflected toward the victim. Services such as DNS, NTP, memcached, and certain directory or discovery protocols have historically been exploited as amplifiers when open resolvers or exposed services are left unsecured on the internet. This technique complicates attribution because the visible traffic appears to originate from legitimate servers with a reputation as “normal” infrastructure. Case files therefore often feature disputes over whether the observed source IP meaningfully indicates culpability or merely reflects misuse by an unknown third party.
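
The arithmetic behind amplification can be illustrated with order-of-magnitude figures; the request and response sizes below are assumptions chosen for the example, not measured values for any specific service.

# Rough amplification arithmetic for a reflection scenario. The request/response
# sizes below are illustrative order-of-magnitude figures, not measured values.
scenarios = {
    "DNS (ANY query, open resolver)": (60, 3_000),
    "NTP (monlist-style response)":   (234, 48_000),
    "memcached (large cached value)": (15, 750_000),
}

attacker_bandwidth_bps = 100_000_000  # 100 Mbit/s of spoofed requests (assumption)

for name, (req_bytes, resp_bytes) in scenarios.items():
    factor = resp_bytes / req_bytes
    reflected_bps = attacker_bandwidth_bps * factor
    print(f"{name}: ~{factor:.0f}x amplification, "
          f"~{reflected_bps / 1e9:.1f} Gbit/s reflected toward the victim")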

The use of reflectors and amplifiers also raises questions regarding duties of care and responsibility for infrastructure operators. Where a reflector is demonstrably misconfigured, disputes may arise concerning negligence, patch management, and response times following notification, particularly where repeated abuse occurs. At the same time, the principal allegation in a criminal context typically remains directed at the actor who initiates spoofing and orchestration, not at the unaware owner of a misused server. For evidentiary and analytical clarity, it is therefore essential to describe roles with precision: the initiator, the facilitator, the executing bots, the reflectors, and the parties providing mitigation or hosting services. Proper role delineation prevents technical causality from being conflated with legal attribution.

Impact, Damages, and Downstream Effects

The impact of a DDoS attack is often experienced as downtime or severe performance degradation, but the economic and operational consequences typically extend beyond that immediate effect. In e-commerce, even a single hour of unavailability can produce direct revenue loss and abandoned shopping carts, while in business-to-business environments disruption of API integrations can halt supply-chain processes such as inventory management, logistics tracking, or payment processing. In sectors with socially critical functions—healthcare, mobility, energy, or public services—the impact may translate into safety risks, escalation of call-centre demand, and a loss of confidence in digital channels. A disciplined damages assessment therefore requires a distinction between direct losses (mitigation costs, recovery labour, emergency capacity) and indirect losses (reputational harm, churn, penalties, contractual claims). Without that distinction, there is a material risk of double counting or asserting losses that cannot be substantiated by operational and financial records.

From a contractual perspective, a DDoS incident may engage service level agreements, uptime guarantees, and liability provisions, frequently giving rise to disputes concerning force majeure, the scope of best-efforts obligations, and the reasonableness of mitigation measures. In outsourcing arrangements, it becomes important to understand how responsibilities are allocated between the customer, hosting provider, managed security provider, and any cloud service suppliers. Where mitigation solutions are contractually contemplated—such as scrubbing, rate limiting, WAF configurations, or redundant routing—questions may arise as to whether those measures were adequately implemented and maintained. Incidents can also generate disputes regarding knowledge and notification obligations: when each party was informed, which decisions were taken, and whether escalation followed runbooks and incident procedures. In this setting, the technical timeline becomes a legally relevant artefact, where minute-by-minute developments can determine whether contractual obligations were satisfied.

Reputational harm has its own dynamics and is often accelerated by the combination of visible unavailability and unclear public communication. External stakeholders may interpret a disruption as a broader security failure, even where no data compromise has occurred. In practice, therefore, not only the technical response but also the governance of incident communications is relevant: consistent messaging, avoidance of speculation, and realistic recovery indications without unfounded certainty. Where notification duties apply, timing is likewise critical: reporting too late can prompt supervisory scrutiny, while reporting too early without a sufficient factual basis can generate unnecessary alarm. A balanced approach requires evidence-based statements, clear separation between what is known and what remains under investigation, and consistent terminology that translates technical concepts into legally reliable language.

Motives and Context: Extortion, Diversion, and Pressure Tactics

DDoS attacks are frequently deployed as an extortion mechanism, in which threats of sustained disruption are coupled with a demand for payment. The modus operandi ranges from a “proof attack” to establish credibility to immediate large-scale disruption, sometimes accompanied by communications via email, chat platforms, or even public postings. The strategic and legal tension often lies in how to respond: payment may temporarily halt attacks but can equally encourage repetition or escalation, whereas refusal to pay may expose the organisation to prolonged disruption and increased losses. In such scenarios, the factual matrix quickly expands beyond mere availability: threatening communications, payment channels, and the potential involvement of intermediaries become integral elements of the case.

A DDoS attack can also serve as a diversion, intended to overload monitoring teams, SOC capacity, and incident responders while a parallel attack line is executed, such as credential stuffing, data exfiltration, or lateral movement within a network. The risk is that attention is monopolised by the “noisy” availability incident, while subtler indicators of data compromise go unnoticed. For that reason, incident investigations often benefit from examining correlations between the DDoS timeline and other security events, including suspicious login attempts, changes to IAM configurations, anomalous data flows, or unusual DNS activity. The presence of a DDoS attack is not, in itself, proof of a concurrent intrusion, but the possibility is sufficiently realistic to warrant structured consideration as a working hypothesis during investigation.
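
A simple way to operationalise that working hypothesis is to list which other security events fall inside the availability incident window, as in the sketch below; the timestamps and event descriptions are hypothetical, and temporal overlap alone establishes nothing about causation.

from datetime import datetime

# Hypothetical DDoS window and other security events; timestamps are illustrative.
ddos_start = datetime(2024, 3, 1, 14, 5)
ddos_end   = datetime(2024, 3, 1, 15, 40)

other_events = [
    (datetime(2024, 3, 1, 13, 10), "failed admin login burst"),
    (datetime(2024, 3, 1, 14, 22), "new IAM role created"),
    (datetime(2024, 3, 1, 14, 37), "large outbound transfer to unknown host"),
    (datetime(2024, 3, 1, 18, 0),  "routine backup job"),
]

# List events that fall inside the availability incident, as candidates for the
# "diversion" working hypothesis; overlap alone proves nothing about causation.
for ts, description in other_events:
    if ddos_start <= ts <= ddos_end:
        print(f"{ts:%H:%M} within DDoS window: {description}")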

DDoS attacks may also be employed as leverage in conflict situations, including competitive disputes, disgruntled customer scenarios, activist campaigns, or internal escalations following employment disputes. In such contexts, motives can be diffuse and the actor group may include individuals with limited technical skills who “purchase” an attack through a service. The resulting case file often becomes hybrid in nature: on the one hand, technical artefacts such as logs, netflow data, and attack patterns; on the other hand, behavioural and communication indicators such as threats, claims of responsibility, and timing aligned with commercial friction. This combined picture frequently determines how intent, premeditation, and involvement are evaluated. A contextual analysis detached from the technical record risks tunnel vision, while purely technical analysis without context may miss key signals concerning motive and actor identification.

Legal Characterisation: Disruption of Automated Systems and Questions of Intent

From a legal standpoint, DDoS cases commonly centre on the intentional and unlawful disruption or impairment of automated systems or networks. In practice, classification often depends on the nature, severity, and duration of the disruption, as well as on the evidential basis for intent and the concrete role attributed to a suspect. Disputes frequently arise as to when conduct amounts to “rendering unusable” as opposed to “mere” delay or hindrance, and whether a temporary degradation suffices to meet relevant seriousness thresholds. The scale and sensitivity of the target can also influence assessment: a disruption affecting a small website may be technically similar to one affecting a critical service, yet the societal impact and quantum of damage may materially affect legal evaluation. A legally defensible characterisation therefore requires a fact-based description capable of measurement: response times, error rates, uptime statistics, user impact, and the nature of remediation steps.

Questions of intent assume a particular dimension where so-called “stresser” or “booter” services are used, enabling an individual to initiate an attack through a small number of actions without deep technical competence. In case files, it is sometimes argued that a single “click” is insufficient to establish full intent, particularly where an individual claims a testing or demonstration purpose. Countervailing considerations include that such services are typically self-evidently oriented toward disruption, that marketing and functionality frequently refer to taking targets “down,” and that the user makes conscious choices by entering a target address and selecting attack duration and methods. Legal assessment may further depend on the extent to which the user understood the scale of the attack, the likely effects, and the actual impact observed. Circumstances such as repetition, timing (for example during peak periods), and surrounding communications can also be indicative of the purpose behind the conduct.

Involvement is also broader than personally “pressing the button.” Facilitation may include offering the service, operating infrastructure, developing software, providing bulletproof hosting, recruiting customers, or handling payments. The evidential approach can then shift from proving one specific attack to demonstrating structural facilitation of disruptive conduct. This, in turn, raises questions about whether there is a sustainable business model, organisational structure, or a role that materially contributes to the commission of disruptions. In that context, technical artefacts (panel logs, API keys, server images) are often combined with financial data and communications to substantiate the scale of facilitation and the degree of blameworthiness.

Evidentiary Landscape and Technical Sources in DDoS Case Files

In practice, evidentiary reasoning in DDoS matters rests on a mosaic of technical sources, each of which illuminates only a portion of the incident. Traffic analyses can demonstrate that a target environment was flooded and can often pinpoint when the disruption began, escalated, and subsided, yet such analyses do not automatically identify the actor behind the attack. Netflow records, packet captures, firewall and load balancer logs, WAF telemetry, and host-level metrics can be combined into a timeline that quantifies technical impact in terms of bandwidth consumption, packet rates, session pressure, error codes, latency, and resource exhaustion. The evidential value of that timeline depends materially on retention periods, logging granularity, and whether mitigation was enabled during the incident in a way that filtered traffic before it reached the target environment. Where scrubbing services or CDN proxies are active, visibility into raw source data can be limited, even though precisely those raw data points are often regarded as particularly relevant in criminal proceedings for interpreting attack patterns, spoofing indicators, and source variability.
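
In practice this often amounts to merging observations from several sources into a single chronologically ordered timeline, as in the following sketch; the source names, metrics, and values are placeholders standing in for netflow exports, load balancer logs, WAF telemetry, and host metrics.

from datetime import datetime

# Each telemetry source contributes observations with its own metric; source names,
# metrics, and values are placeholders, not outputs of any particular product.
observations = [
    ("netflow",      datetime(2024, 3, 1, 14, 5),  "ingress_gbps", 42.0),
    ("loadbalancer", datetime(2024, 3, 1, 14, 6),  "5xx_rate", 0.31),
    ("waf",          datetime(2024, 3, 1, 14, 6),  "blocked_rps", 120_000),
    ("host",         datetime(2024, 3, 1, 14, 8),  "cpu_util", 0.98),
    ("loadbalancer", datetime(2024, 3, 1, 15, 45), "5xx_rate", 0.02),
]

# Merge into one chronologically ordered timeline so that impact metrics from
# different layers can be read side by side during reconstruction.
for source, ts, metric, value in sorted(observations, key=lambda o: o[1]):
    print(f"{ts:%H:%M}  {source:<13} {metric} = {value}")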

Requests to hosting providers and network operators frequently yield additional materials, such as abuse reports, IP allocation data, router logs, or records of blackhole routing actions. These materials can be valuable for reconstructing the chain of events, but they are also susceptible to interpretive error, because terminology and logging formats differ between providers and because certain measures are triggered automatically. Blackholing, for example, may have been deployed to protect a network, yet outwardly it can appear as “self-inflicted downtime” rather than “downtime caused by an attack,” a distinction that can become contentious in civil proceedings and damages analysis. There is also a recurring risk that provider data represent only a subset of relevant telemetry—for instance, edge logs only, or sampled data—meaning conclusions about volume and origin should be presented with an explicit and transparent margin of uncertainty. A legally robust analysis therefore sets out which sources are available, where material gaps exist, and what assumptions are required to move from raw data to a coherent account.
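
Sampling is a concrete example: where a provider exports one in every N packets, the extrapolated totals carry an uncertainty that should be stated rather than hidden, as the following sketch (with an assumed sampling rate and counts) illustrates.

import math

# Provider flow data is often sampled (e.g. 1-in-N packets). The sampling rate and
# counts below are assumptions; the point is that extrapolation carries uncertainty.
sampling_rate = 1000          # provider exported 1 in every 1000 packets (assumed)
sampled_packets = 52_340      # packets actually present in the provided dataset

estimated_total = sampled_packets * sampling_rate

# A crude standard-error style margin for the extrapolated count; in a report this
# uncertainty should be stated explicitly rather than presenting a single figure.
margin = sampling_rate * math.sqrt(sampled_packets)

print(f"estimated total packets: {estimated_total:,} "
      f"(+/- ~{margin:,.0f} from sampling alone)")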

Payment traces form a second pillar of the evidentiary picture, particularly where the case involves purchased attack services or the exploitation of booter infrastructure. Transactions through payment processors, crypto wallets, voucher systems, or platforms offering in-app payments can establish linkages between accounts, attack packages, and time windows. Evidential strength, however, depends on context: a payment can indicate procurement of a service, but does not necessarily demonstrate use against a particular target, especially where the service is panel-based and multiple users may operate under the same account. In addition, a payment may relate to hosting, domain registration, or advertising, requiring a careful and explicit distinction between legitimate infrastructure costs and illicit operational expenditure. Where financial data are correlated with technical logs, it is critical that time zones, timestamps, rounding practices, and log rotation are normalised with care, so that causal narratives are not constructed on the basis of spurious correlations.
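
The sketch below illustrates the normalisation step for a single payment and a single panel action, with assumed time zones and timestamps; only after both are expressed on the same time base does a “payment shortly before attack” narrative have any footing.

from datetime import datetime, timedelta, timezone

# Payment records and panel logs often arrive in different time zones. The offsets
# and records below are illustrative; in a real file they must be documented per source.
cet = timezone(timedelta(hours=1))

payment = {"account": "user-17", "amount_eur": 25,
           "ts": datetime(2024, 3, 1, 15, 2, tzinfo=cet)}                # processor time, CET
panel_action = {"account": "user-17", "action": "start attack",
                "ts": datetime(2024, 3, 1, 14, 9, tzinfo=timezone.utc)}  # panel time, UTC

# Normalise both to UTC before comparing; only then does a "payment shortly before
# attack" narrative rest on a consistent time base.
delta = panel_action["ts"] - payment["ts"].astimezone(timezone.utc)
print(f"panel action occurred {delta} after the payment (both in UTC)")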

Attribution and the Pitfall of Apparent Source Origin

Attribution in DDoS cases remains structurally complex because the visible source often does not coincide with the control layer. In botnet scenarios, source addresses commonly belong to compromised devices, meaning the traffic is technically “genuine” in the sense that it originates from those devices, while those devices function merely as executors without perpetrator intent. In reflection and amplification scenarios, the position is even more ambiguous: the observed traffic appears to come from ostensibly legitimate servers that have been abused as reflectors, while the initiating requests—often with spoofed source addresses—are not directly visible to the victim. This regularly produces misleading intuitions in case files, such as assuming that the owner of a reflector must be involved, or that a particular country of origin conveys meaningful information about the perpetrator. A defensible attribution analysis therefore states expressly that a source IP, in many scenarios, is only an indicator of where traffic came from, not of who orchestrated it.

A further complication arises from infrastructure that is shared across multiple parties, including VPS providers, VPN exit nodes, residential proxy networks, or compromised cloud accounts. In such circumstances, a single IP address may be used by different actors over short periods, or provider logs may lack sufficient granularity to separate distinct activities. Even where panel logs from attack services exist, uncertainty can arise through credential sharing, the reuse of API keys, or account takeover by third parties. For legally sound reasoning, a chain of corroboration is therefore preferable: not a single indicator, but a set of mutually reinforcing signals, such as consistency across login times, observed fingerprints, geography, device characteristics, and parallel communications or payments. Where such a chain is absent, attribution conclusions typically need to be confined to probabilities and scenario-based assessments rather than categorical assignments.

The distinction between technical likelihood and legal certainty is particularly pronounced in this domain. An incident response team may, based on heuristics and experience, infer that a known botnet or a recurring booter campaign is involved, but such inferences are not always suitable as evidence in contested proceedings. Traceability of methodology, reproducibility of analysis, and transparency regarding measurement error largely determine the evidential weight of technical conclusions. In addition, opposing parties frequently and legitimately test alternative explanations: could the traffic have been a flash crowd, was there a configuration error, or was the downtime caused by mitigation settings that were overly aggressive? Attribution in a case file that must withstand contradiction therefore requires an explicit delineation of what the data establish with certainty, what is merely plausible, and which elements remain unproven.

Source Traffic Versus Command-and-Control: An Analytical Separation

A careful assessment of DDoS incidents requires a clear separation between source traffic and command-and-control, because these layers have different characteristics and are evidenced through different means. Source traffic is the observable traffic that burdens the service: IP addresses, protocols, request patterns, header values, payload structures, and timing. Command-and-control concerns orchestration: where the attack is activated, which interface is used, what panel action is executed, and which infrastructure coordinates bots or reflection requests. In many case files, source traffic is abundant while C2 traces are absent, for example because orchestration occurs through external services, because servers are dismantled rapidly, or because logs are not preserved. In that situation, there is a material risk that source traffic is treated implicitly as a proxy for orchestration, despite the fact that such an inference is not technically justified.

C2 traces may present in multiple forms: booter panel logs, API call logs, server processes maintaining command queues, DNS infrastructure that dynamically rotates endpoints, or messaging channels through which attack instructions are shared. Where such traces exist, it is essential to correlate them with the factual incident timeline and with observed impact metrics. A panel action indicating “start attack” at a given time has probative value only where it coincides with the onset of disruption and where parameters such as target address, port, and method align with the traffic that was observed. Discrepancies may point to logging errors, time drift, or the existence of multiple simultaneous attacks by different actors. An analytical approach that makes this correlation explicit strengthens the robustness of conclusions and narrows the space for doubt about causation.
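
Such a correlation check can be made explicit, as in the following sketch; the panel entry, the observed impact fields, and the five-minute tolerance are all assumptions chosen for illustration.

from datetime import datetime, timedelta

# Hypothetical panel entry and observed impact; every field name here is illustrative.
panel_entry = {"ts": datetime(2024, 3, 1, 14, 4), "target": "198.51.100.10",
               "port": 443, "method": "HTTP flood", "duration_s": 3600}

observed = {"onset": datetime(2024, 3, 1, 14, 6), "dst_ip": "198.51.100.10",
            "dst_port": 443, "dominant_pattern": "HTTP flood"}

# Probative value depends on the panel action and the observed disruption agreeing on
# timing and parameters; the 5-minute tolerance is an assumption for the sketch.
checks = {
    "timing":  abs(observed["onset"] - panel_entry["ts"]) <= timedelta(minutes=5),
    "target":  observed["dst_ip"] == panel_entry["target"],
    "port":    observed["dst_port"] == panel_entry["port"],
    "method":  observed["dominant_pattern"] == panel_entry["method"],
}

for name, ok in checks.items():
    print(f"{name}: {'consistent' if ok else 'discrepancy - requires explanation'}")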

It is also relevant that DDoS campaigns can be multi-stage: an initial phase may operate as a decoy, followed by a second phase using different methods or different targets, for example against status pages, DNS providers, or authentication endpoints. In such scenarios, C2 may be visible at one point while source traffic is generated elsewhere or is routed via reflectors. The separation between source traffic and orchestration allows such scenarios to be modelled without internal inconsistency. Legally, this matters because a suspect’s role may relate only to a segment of the chain—such as operating a panel or supplying infrastructure—while execution is carried out by third parties. A case file that conflates source traffic and C2 risks misallocating roles, which bears directly on intent, participation, and facilitation.

Reliability, Completeness, and Interpretation of Logs

Logs are often treated as objective recordings of fact, yet in reality they are products of configuration choices, sampling, rotation, normalisation, and filtering. During a DDoS incident, logging volumes can become so large that systems drop log entries, buffers overflow, or only summary metrics are retained in order to preserve performance. Moreover, mitigation systems may block traffic at the edge, meaning the target environment sees only a portion of the attack pattern, while an upstream provider sees a different portion. In incident investigation, this means that the “absence” of certain log entries is not, by itself, proof of the “absence” of traffic. A legally sustainable analysis therefore describes the logging landscape: which components log what, at what granularity, with what retention, and under what conditions drop-offs occur.

Time synchronisation is an underestimated source of error. Where NTP is inconsistent, or where components log in different time zones without uniform normalisation, events can be misordered. A panel login may then appear to occur after an attack, or a mitigation action may appear to have been taken before the attack began. Such apparent contradictions are often amplified in proceedings and can undermine the credibility of the case file as a whole. It is therefore important to build a timeline with explicit reference to time bases, offsets, and any drift, and to perform cross-checks where possible against external sources such as provider events, monitoring dashboards, or ticketing systems. Where such checks are not available, conservative conclusions and transparent uncertainty margins are advisable.
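
The sketch below shows the effect of applying per-source clock corrections before ordering events; the offsets are assumed values, and in a real file they would have to be derived and documented per source.

from datetime import datetime, timedelta

# Measured or assumed clock offsets per source, relative to a reference clock.
# The offsets below are examples; in practice they must be derived and documented.
clock_offset = {
    "firewall":   timedelta(seconds=0),
    "panel":      timedelta(minutes=4),    # panel clock assumed 4 minutes fast
    "mitigation": timedelta(seconds=30),
}

events = [
    ("panel",      datetime(2024, 3, 1, 14, 9), "start attack action"),
    ("firewall",   datetime(2024, 3, 1, 14, 6), "traffic surge begins"),
    ("mitigation", datetime(2024, 3, 1, 14, 7), "scrubbing engaged"),
]

# Correct each timestamp to the reference clock before ordering; without this step the
# panel action would appear to post-date the surge it allegedly initiated.
corrected = sorted((ts - clock_offset[src], src, desc) for src, ts, desc in events)
for ts, src, desc in corrected:
    print(f"{ts:%H:%M:%S}  {src:<10} {desc}")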

Interpreting logs also requires an understanding of normal operational behaviour. Application-layer attacks can resemble legitimate traffic spikes, particularly where user agents vary, IP addresses are globally distributed, and requests are syntactically valid. The distinction may lie in subtle factors: unusual request frequency per session, abnormal cache-miss ratios, repetitive patterns targeting resource-intensive endpoints, or atypical header combinations. At the same time, legitimate spikes driven by campaigns or news events can display similar features. Robust interpretation therefore benefits from baseline data: historical traffic profiles, normal error rates, and known peak patterns. Without a baseline, there is a material risk that technical conclusions rest predominantly on assumptions, which is vulnerable in a legal setting.
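
A baseline comparison can be kept deliberately simple, as in the following sketch; the historical request rates and incident-day values are illustrative, and a large deviation justifies further analysis without by itself proving malicious load.

from statistics import mean, stdev

# Historical requests-per-minute for the same weekday and hour (illustrative figures),
# against which the incident-day values are compared.
baseline_rpm = [910, 870, 940, 905, 880, 925, 890]
incident_rpm = [920, 950, 8400, 9100]   # last two minutes show the anomaly

mu, sigma = mean(baseline_rpm), stdev(baseline_rpm)

# Express each incident-day value as a deviation from the historical profile; large
# deviations warrant scrutiny but can also reflect legitimate flash crowds.
for minute, value in enumerate(incident_rpm):
    z = (value - mu) / sigma
    print(f"minute {minute}: {value} rpm (z = {z:+.1f} vs. baseline)")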

Alternative Scenarios: Account Misuse, Testing Services, and Lack of Insight

In DDoS case files, it is often necessary to test alternative scenarios explicitly, precisely because the technical act can be relatively simple and because infrastructure and accounts may be accessible to multiple individuals. Account misuse is a recurring theme: credentials may be shared, leaked, guessed, or obtained through credential stuffing, after which a third party uses the service without direct involvement of the account holder. Where a panel account initiated an attack, it does not automatically follow that the registered person was the actor, particularly where MFA is absent, IP logging is limited, or login patterns are inconclusive. A serious scenario analysis therefore considers signals such as anomalous login locations, device fingerprints, sudden changes to passwords or email addresses, and the presence of earlier compromise indicators in other accounts associated with the same person.

A second alternative scenario concerns the use of public testing services or “stress tests” without sufficient understanding of scope and consequences. In certain environments, load testing tools and performance testing platforms exist for legitimate scalability testing, yet misconfiguration or lack of proper authorisation can cause disruption. In parallel, some services present themselves as “stressers” with an appearance of legitimacy, while their marketing and functionality are in fact oriented toward taking down external targets. Legal assessment of intent in such cases may depend on the extent to which contextual cues and warnings were present, the terminology used by the service, and whether the user had reasonable grounds to believe the activity related to an owned or authorised environment. This does not alter the fundamental concern that directing traffic at a third party without authorisation is inherently problematic, but the precise degree of blameworthiness and the applicable classification may vary with knowledge, purpose, and foreseeability.

Misconfiguration and human error should also be considered as hypotheses, particularly where observed disruption does not align neatly with clear attack signatures. A misconfigured rate limiter, a WAF rule that blocks legitimate requests, or an autoscaling constraint that fails during peak load can produce the same outcome as a DDoS attack: unavailability and elevated error rates. This becomes even more relevant where an organisation adopts emergency measures during an incident, such as disabling endpoints or blocking regions, thereby creating a partially self-inflicted disruption. In disputes, an opposing party may invoke such factors to challenge causation or reduce damages. A well-constructed case file therefore distinguishes between primary cause (attack or internal error), secondary causes (mitigation artefacts), and the contribution of each to the factual downtime.

Legal Attribution, Role Delineation, and Assessment of Blameworthiness

Legal attribution in DDoS matters requires a differentiated role-based approach because the attack chain comprises multiple links and involvement can take many forms. Initiation concerns the act of starting or directing the attack, for example via a panel, script, or API call. Facilitation concerns making resources available, such as botnet infrastructure, reflection capacity, hosting, software, instructions, or customer recruitment. Support may include collecting payments, supplying accounts, maintaining servers, or providing technical assistance to customers. Each role has a distinct evidential profile: initiation is often linkable to specific timestamps and target parameters, whereas facilitation tends to be evidenced by structural patterns, repetition, administrative traces, and the existence of a sustained revenue model.

The assessment of blameworthiness is influenced by whether there is structural facilitation and awareness of the misuse purpose. Where a service is primarily offered to take targets “offline,” the argument that it is a neutral tool is less persuasive, particularly where marketing, user interface terminology, or support communications expressly point to disruption. The position differs for generic infrastructure services, where customer misuse can occur without provider intent and where notice-and-takedown processes are relevant. In that context, the discussion often shifts to compliance with internal policies, responsiveness to abuse notifications, and whether adequate organisational and technical measures were implemented to limit misuse. For legal characterisation, it is important to distinguish factual findings from normative labels: what was done, what was omitted, and what knowledge existed at which point in time.

Finally, proportionality is relevant. The scale and consequences of the disruption, its duration, target selection, and the presence of repetition can shape the seriousness profile of a case file. A short-lived disruption with limited impact calls for a different approach than a prolonged attack on critical services with significant societal consequences. Professionalisation indicators—such as deploying hybrid attack vectors, switching methods, and evading mitigation—may suggest a higher organisational level and clearer intent. Where a suspect controls only a limited segment of the chain, the causal line to the disruption must be built with care to avoid imputing responsibility based on proximity rather than contribution. A case file that maintains sharp role delineation creates room for a nuanced assessment of fault, intent, and sanctionability without conflating technical complexity with legal certainty.
