Cybercrime, Incident Response & Digital Risk

You need to get one thing out of your head immediately: a hack is not a technical incident you “just” solve with a recovery plan and a few sleepless nights. A hack is a governance and legal tipping point that hits at the exact moment you least feel like thinking in terms of governance and law, namely when everything screams for speed, for action, for that primitive reflex: “restore it, get it back online, fix it.” And that is precisely where things go wrong. Because the moment your systems are touched, the clock starts ticking not only on recovery, but on notification duties, investigation duties, contractual obligations, liability questions, insurance trajectories, supplier dependencies and, most merciless of all, the clock of reputation. That reputation clock runs faster than your ticketing system, faster than your SIEM, faster than your forensic partner can even get fully operational. And the bitter part is this: the outside world doesn’t need to know exactly what happened to already have an opinion. The outside world sees only that something happened, and that is enough to start measuring you: how mature are you, how transparent are you, how controlled are you, how responsible are you? Intentions hardly count. In this world you are not weighed by what you meant to do, but by what you can demonstrate you did.

I speak to you harshly about this, not because I enjoy hurting you, but because I want to keep you standing. The first hours after an incident are not “warm-up hours”; they are dossier hours. You make decisions that, months or years later, will determine your entire position. What you preserve, what you overwrite, which logs you retain or lose, which forensic party you engage and with what mandate, who is allowed to speak internally, who is allowed to speak externally, which words you put in emails, how you capture facts and how you frame uncertainty, all of that becomes the spine of your story. And that story is not a luxury; it is your defensive line. And now comes the truth you may not want, but absolutely need: sometimes you are harmed by non-conforming conduct by others, suppliers who do not deliver what they promised, service providers who were careless, chain partners who left gaps, attackers who exploited them, contracts that looked “watertight” on paper but dripped in practice. And before the dust has even settled, you are confronted with a second layer you did not order but will receive anyway: suspicion, accusation, the suggestion that you yourself fell short. You are attacked, and at the same time you must defend yourself against the reproach that you should have locked down better, monitored better, governed better. That is the perversity of this era: nobody looks at your good intentions; they look at your measures. And measures are measurable, testable, criticisable, and above all: in hindsight always “easy” to improve.

That is why I work from a simple, confrontational idea you should tape above your desk if necessary: whoever loses the story during an incident later loses the argument, even when they were in the right. I am not only about recovery; I am about recovery with evidence discipline. Speed without evidence discipline is a victory today and a defeat tomorrow. I force a line that is legally sound, technically aligned and operationally executable at board level. I make sure you secure data and documentation without destroying your own evidence. I make sure there is one central stream of facts, not a gossip circuit, not an internal mythology breeding in Slack channels while the first external questions are already forming. And I do give you hope, but not the sugar-coated hope of “it will blow over.” I give you hard hope: if you show discipline now, you will later be able to explain what you did and why, without betraying yourself, and without being strangled by your own words.

Hacking

When you’ve been “hacked,” the temptation is enormous to use the word itself as a lightning rod. As if saying it automatically places the problem where you would prefer it: outside you, with “the perpetrator,” with “cybercrime,” with “the world being dangerous.” I understand the reflex. But I will not allow you to live inside it. Because “hacking” is not a single, self-evident fact; it is a label. And a label only becomes legally and operationally useful when you fill it with specifics: what access was obtained, by which route, with which privileges, on which systems, with which traces, and above all, what happened afterwards? It is not the moment of entry that later breaks or carries your file; it is what you do next with that reality. If you reduce the incident to “we were hacked,” you hand the outside world an empty container they will gladly fill for you. And believe me: they never fill it in your favour.

In practice I see how hacking often begins as a technical detail and ends as a governance crisis. An admin account that “briefly” wasn’t protected by MFA. An old VPN connection that remained “temporarily” in place. A service account with overly broad rights because “otherwise the integration wouldn’t work.” Segmentation that existed on paper but was hollowed out by exceptions in reality. Then comes the moment you discover someone has been watching for days or weeks, lateral movement has occurred, backups have been probed, the attacker has calmly chosen where it will hurt the most. And at that moment the first crucial choice appears: do you clean blindly and hope, or do you secure methodically and understand? I press you into the paradox: whoever “recovers” too fast without knowing what happened often recovers precisely into the state the attacker already knows. You put the door back on its hinges, but the key is still under the doormat.

My role then is to pull you out of panic logic and bring you back to manageable steps. I make sure the first decisions are not taken by the loudest voice, but by the most disciplined approach. I ensure a forensic trail is drawn that will hold: securing logs, isolating endpoints, pulling chain partners to the table, and above all documenting what you knew at which moment. Because later the question will not only be what factually happened; later it will be: when did you know it, when could you have known it, what did you do with that knowledge? And I say this without detours: the fact that you are a victim does not mean you will not be judged on your governance. Sometimes you are harmed by non-conforming conduct, by abuse, by sabotage, and yet you can quickly be treated as if you are simultaneously “the cause.” That is the world now. And that is exactly why I build with you a dossier that can carry your conduct: not by bluffing, not by shouting louder, but by taming the facts before they tame you.

Phishing

Phishing is the most underestimated attack vector precisely because everyone thinks they understand it. “An email, a link, an employee who clicked.” You can hear it in the phrasing: almost homely, almost banal. And that banality is deadly. Because phishing is rarely a stand-alone stupidity; it is often the first door in a carefully orchestrated chain. And the moment you frame it as “human error,” you make yourself vulnerable on two fronts at once. Internally you wound the employee, who is often already ashamed to the core. Externally you offer a simplistic narrative that will later be used against you: “so you didn’t have your people under control.” I refuse that frame because it is lazy and because it does violence to reality. Reality is that phishing works because organisations are complex systems under time pressure, with authority lines, financial processes, supplier contact points, and a hundred ways in which “just do it quickly” has become the norm.

I take you into what phishing actually does: it weaponises trust as an attack surface. It hijacks your organisation’s language. It copies your leadership’s tone, your invoice rhythm, your IT department’s jargon, your finance team’s tempo. It chooses a moment when someone has just stepped out of a meeting, has twenty tabs open, and must decide in a split second. Phishing is not merely “click behaviour”; it is behavioural manipulation in an environment that has been made structurally susceptible. And then comes phase two: credential harvesting, session takeover, mailbox rules, email forwarding, hidden conversations. Your own internal communication channel becomes the weapon against you. And if you are not careful, you will spread the attack inside your own organisation by sending “helpful” emails with instructions the attacker has already prepared for you.

In incident response I therefore impose a different kind of discipline: not only technical blocking, but governance control. Who communicates internally? With what message? Which facts are certain? Which uncertainties do we explicitly name? I keep you out of the swamp of contradictory internal messages, where half the organisation thinks “it’s probably fine” and the other half thinks “everything is gone.” I guard the existence of one central stream of facts. I make sure evidence is preserved: headers, mail logs, login events, MFA prompts, device data. And I protect you from quick, fatal wording. Because one sentence like “nothing was taken,” spoken while you cannot yet know that, can later become a chain around your neck. I give you hope, but in the form of control: if you treat phishing as a mature attack on trust, not as an anecdote about clicking, you can both steer recovery and later demonstrate that you took reasonable measures, even if you were the victim of a sophisticated attack and may later still face the accusation that you “should have prevented it.”
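
To make that evidence preservation tangible, here is a minimal sketch, in Python, of the kind of first triage I mean: it reads a raw message file and pulls out the header fields an investigator usually wants to see first. The file name and the selection of fields are assumptions for illustration, not a prescribed procedure.

```python
# Minimal sketch: capture the header fields most often needed as phishing
# evidence from a raw .eml export. The file name is a placeholder.
from email import policy
from email.parser import BytesParser

def extract_phishing_evidence(path: str) -> dict:
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    return {
        # Hop-by-hop delivery path; useful for spotting a spoofed origin.
        "received_chain": msg.get_all("Received", []),
        # SPF / DKIM / DMARC verdicts as recorded by the receiving server.
        "authentication_results": msg.get_all("Authentication-Results", []),
        # A mismatch between From and Reply-To is a common lure pattern.
        "from": msg.get("From"),
        "reply_to": msg.get("Reply-To"),
        "return_path": msg.get("Return-Path"),
        "message_id": msg.get("Message-ID"),
        "subject": msg.get("Subject"),
        "date": msg.get("Date"),
    }

if __name__ == "__main__":
    for field, value in extract_phishing_evidence("suspicious_message.eml").items():
        print(f"{field}: {value}")
```

The script is not the point; the point is that the original headers are captured before anyone forwards, deletes or “cleans up” the message.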

Malware

Malware is the word organisations use when they still do not dare to name what it really is. It sounds technical, almost neutral, as if it is an “infection” you can wipe away with a scanner. But malware is not a stain; it is intent written in code. It is sabotage, espionage, extortion, reconnaissance, persistence, often several at once. The question is not only “do we have malware?” The question is: what function does this malware serve in the attacker’s plan, and how much control have they already built? I speak to you confrontationally here: if you treat malware as an IT problem, you behave as if a break-in at your office is merely a problem of a broken window. You look at the glass, but not at the fact that someone has already been inside, has looked around, and may have made copies of the keys.

In practice I see malware discovered at the wrong moment: not at entry, but at escalation. You notice it when endpoints slow down, when antivirus flags something “suspicious,” when files suddenly show encryption symptoms, when accounts run unusual tasks. And then the reflex arrives: isolate, remove, reinstall, “clean.” Sometimes that is necessary. But if you clean too quickly, you also clean away your own evidence. And evidence is not a luxury here; it is your anchor. Evidence of what occurred, how long it lasted, what data may have been affected, which systems were involved, which measures you took. Without that evidence the incident becomes a story others write for you. And that story is rarely kind. People say: “you had no visibility,” “you had no logging,” “you had no segmentation,” “you had no control.” Notice how quickly judgement shifts from perpetrator to victim, from attack to reproach.

That is why, in malware incidents, I steer toward controlled forensic discipline. I ensure snapshots are taken, memory dumps and disk images are preserved properly, indicators of compromise are collected without panic, backups are not blindly restored, and you understand what “clean” actually means in your environment. I also organise the governance layer: who may decide on downtime, who may decide whether to pay or not pay, who speaks to the insurer, who speaks to suppliers, who speaks to regulators? Because malware does not only pull on cables; it pulls on governance. And in that tension between technical necessity and executive responsibility, I make sure you are not torn apart. You are wounded by the attack, sometimes also because others acted non-conformingly, and yet the question arrives quickly whether you yourself did “enough.” I build with you a dossier that makes your actions intelligible: what you knew, what you did, why you did it, and how you limited the harm. That is the hope I give you: not the hope that it will be painless, but the hope that you will not collapse because, in panic, you demolished your own foundation.
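
To show what preserving evidence without destroying it can look like in practice, here is a minimal sketch, assuming the artefacts (disk images, memory dumps, log exports) have already been copied into a dedicated folder: it hashes every file and writes a timestamped manifest so the chain of custody can be demonstrated later. The folder layout and the manifest fields are illustrative assumptions, not a forensic standard.

```python
# Minimal sketch: hash preserved artefacts and record a chain-of-custody
# manifest. All paths and field names are placeholders for illustration.
import hashlib
import json
import socket
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        # Read in chunks so large disk images do not exhaust memory.
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(evidence_dir: str, manifest_path: str, handler: str) -> None:
    entries = [
        {"file": item.name, "sha256": sha256_of(item), "size_bytes": item.stat().st_size}
        for item in sorted(Path(evidence_dir).iterdir())
        if item.is_file()
    ]
    manifest = {
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "collected_on_host": socket.gethostname(),
        "collected_by": handler,  # who handled the evidence at this step
        "items": entries,
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    write_manifest("./evidence", "./evidence_manifest.json", handler="IR analyst")
```

Whether you use a small script like this or your forensic partner’s tooling, the discipline is the same: record what was secured, when, by whom and in what state, before anything is “cleaned.”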

DDoS Attacks

With DDoS attacks I see a special form of self-deception. People say: “Nothing was hacked, it’s just traffic.” As if “just traffic” cannot be the difference between operating and collapsing, between trust and mistrust, between a business that delivers and a business that fails. A DDoS is often not data theft, but it is absolutely digital disruption. And in a world where availability equals reliability, taking availability down is an attack on your right to exist in the eyes of your market. Moreover, DDoS rarely comes alone. It can be a distraction manoeuvre, a pressure tactic, a test of your response, a prelude to extortion. If you treat it as “an annoying rainstorm,” you forget that some rainstorms are deliberately engineered to force you to open your doors.

The governance trap with DDoS is to reduce it to “scaling up” and “turning on a scrubbing service.” Yes, mitigation is crucial. But my focus is what you must do alongside it: document, analyse, communicate. Because a DDoS throws your organisation into permanent urgency: customers complain, sales shouts, leadership wants numbers, IT wants bandwidth, support wants answers. In that chaos the most dangerous sentences are born: “it’s not that bad,” “we have it under control,” “it’s only an external attack,” “no impact.” Sentences that later, if it turns out traffic was intercepted or the attack was a cover for something else, can be used against you. A DDoS forces you not only to scale technology, but to safeguard truth. One truth. Not ten half-truths spreading because everyone “just says something.”

That is why, during DDoS, I organise control: what do we know for certain, what do we suspect, what is still unclear, and how do we phrase it so we do not become trapped by our own words later? I make sure you do not narrow the incident to uptime but broaden it to obligations: contractual SLAs, notification duties where availability incidents may be relevant, supplier arrangements, evidence for any claims. I also tighten your relationship with mitigation partners: who does what, what data is shared, how is it logged, how are decisions recorded. And I tell you this: even if it is “only” disruption, your dossier may later be decisive. Because the question will not only be whether you were attacked, but how you responded, and whether you remained demonstrably careful while sometimes you are harmed by non-conforming conduct and sometimes you are accused of that same failure yourself. If you show discipline in a DDoS, staying factual, executive and communicative, you can later explain not only why you suffered disruption, but why you acted responsibly. That is hope in an era that turns incidents into immediate character tests.

Identity Theft

Identity theft is the kind of attack where the damage often becomes visible only when it already feels too late. Not because nothing can be done, but because the reality of identity theft is that it can hijack your name, your voice, your authority while you still believe you are the one speaking. It is not only about personal data in a file; it is about access, representation, legitimacy. Whoever steals an identity, of an employee, an executive, a customer, steals the ability to make actions look “valid.” And that makes every process in your organisation vulnerable: payments, contracts, account changes, customer contact, change requests. It is the silent attack on the core of trust: who is who, and who is allowed to do what?

I confront you with an uncomfortable truth: identity theft is often only taken seriously once money disappears or public damage erupts. Yet the real damage starts earlier, in the pollution of facts. When an attacker acts under a legitimate account, your logging becomes a maze. You see “just a user,” “just a session,” “just an authorisation.” Then the fight over interpretation begins. Was this an employee? A customer? “Normal behaviour” or abuse? You get tangled in your own systems, designed for efficiency, not for philosophical questions about identity. And externally it becomes harsher still: a customer says “I never did that,” and you say, “but our system shows you did.” If you are not careful, you shift from victim to opposing party. And that is exactly the second layer that can disorient you: you are harmed by non-conforming conduct and by crime, and yet you can suddenly be approached as if you are the cause.

In that situation my role is to pull the incident back to evidence and control. I help you document what truly supported the identity: which authentication was active, which MFA factors, which device fingerprints, which IP ranges, which token events, which recovery flows. I make sure you do not judge too quickly, not the customer, not the employee, not yourself, but let facts speak first. At the same time I organise your communication: you must be empathic without becoming legally reckless; you must be decisive without making promises you cannot keep later. And I offer hope through structure: identity theft is disruptive, but it is not unmanageable. If you can reduce your processes to clear decision points and secure your evidence, you can later explain why you acted as you did, and you can help those harmed while protecting your own position, precisely because in this world you are sometimes harmed by non-conforming conduct and sometimes accused of that same shortcoming.
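
As an illustration of letting the facts speak first, the sketch below walks through an exported sign-in log and flags sessions that lacked MFA or came from unfamiliar address ranges. The export format, the field names and the “known” ranges are assumptions for the example; every identity provider has its own schema.

```python
# Minimal sketch: flag suspicious sign-in events in a JSON-lines export.
# The field names (user, ip, time, mfa_satisfied) are assumed for this example.
import ipaddress
import json

# Placeholder ranges; replace with the networks you actually recognise.
KNOWN_RANGES = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def is_known_ip(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_RANGES)

def flag_suspicious(logfile: str) -> list:
    findings = []
    with open(logfile, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            reasons = []
            if not event.get("mfa_satisfied", False):
                reasons.append("no MFA on session")
            if not is_known_ip(event["ip"]):
                reasons.append(f"sign-in from unfamiliar address {event['ip']}")
            if reasons:
                findings.append({"user": event["user"], "time": event["time"], "reasons": reasons})
    return findings

if __name__ == "__main__":
    for finding in flag_suspicious("signin_events.jsonl"):  # placeholder export
        print(finding)
```

A list like this proves nothing by itself; it tells you where to look before anyone starts accusing.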

Cyberstalking and Intimidation

Cyberstalking is the kind of attack where technology is merely the vehicle and the real payload detonates psychologically, socially, and at the level of governance. You cannot dismiss it as “online nonsense” without betraying yourself. Because cyberstalking is not a disagreement that just happens to play out digitally; it is a systematic assault on someone’s freedom of movement, safety, and reputation. And the moment it touches your organization—a staff member being threatened, a director being blackmailed, a customer using your platform as an arena—it ceases to be a private matter. You are suddenly dealing with duty of care, employment-law dimensions, security realities, communication consequences and, yes, evidentiary discipline. The world has changed: intimidation has become scalable. A single perpetrator can cause maximum disruption with minimal means, and the damage unfolds in public—screenshots, shared posts, anonymous accounts that appear faster than you can report them. And then comes the extra vicious mechanism I will not spare you: sometimes you are harmed by non-conforming conduct—platforms that do not intervene, chain partners who let procedures drift, internal processes that cannot match the speed of online escalation—and sometimes you are accused of that same non-conformity, as if you created the conditions in which intimidation could thrive. That is the kind of paradox this era manufactures: you are the victim, yet you are expected to act like the perpetrator in your own defence.

I am blunt with you here: organizations that treat cyberstalking as “a human issue” and therefore “not a legal one” make exactly the mistake the perpetrator is counting on. Because the perpetrator bets on your embarrassment. On your reflex to soothe, to relativize, to hope it blows over. On your fear of drawing attention. On your hesitation to act firmly internally because “it’s such a hassle.” But cyberstalking does not shrink through silence; it grows. It feeds on ambiguity, on inconsistency, on the absence of a central truth. And if you do not organize a central truth—who says what, what we document, what we report, what we escalate—then internally you get what I see far too often: a rumor circuit, a shadow file, loose screenshots living on private phones, a tangle of well-meant actions that later turn against you because no one can explain what exactly happened. And that is where it becomes vicious: first you are harmed by non-conforming conduct—by slowness, by gaps, by half measures—and then you are accused that you had “no control,” that you were “not careful,” that you “failed to protect.” You can feel how the bar keeps moving: do nothing and you are negligent; do too much and you are oppressive; do something in between and you are inconsistent.

My role is twofold, and I will say it without softness: I protect the person and I protect the file. I impose evidentiary discipline where emotions want to drown everything. I make sure the intimidation is documented in a way that holds up: timelines, metadata, traceability, context. I help you draw boundaries: what is a threat, what is defamation, what is doxing, what is unwanted contact, what is platform abuse? And I make sure you act with governance—never panicked, always resolute. Because the hope I offer is not naive reassurance. The hope is that you, through control and discipline, can prevent the incident from poisoning your organization. That you can show you take your people seriously, that you act carefully, and that you do not get dragged into a digital lynch culture where nuance evaporates and the loudest scream wins. And I add this so you do not forget it when the pressure rises: I build a line with you that remains standing when you are later attacked with the question of why you did not act “better.” Because sometimes you are harmed by non-conforming conduct, and sometimes you are accused of having shown that same non-conforming conduct yourself. Only those who keep their facts tight survive that double bind without losing themselves.

Online Fraud

Online fraud is the elegant robbery in which the perpetrator uses your own processes as a crowbar. No battering ram, no explosion—just a neat transaction, a tidy change, an apparently legitimate order. And that is precisely the danger: in digital form, fraud often looks normal. Your systems are built to process, not to distrust. Your workflows are designed to reduce friction, not to treat every action as potentially criminal. But the world has changed: criminals have become readers of process. They study your customer journey, your invoice flow, your reset procedure, your onboarding, your supplier portal. They look for the place where you chose speed over control—and they always find that place. Not because you are stupid, but because every organization makes compromises, and compromises are openings. And then comes the double humiliation: sometimes you are harmed by non-conforming conduct—by suppliers, platforms, payment providers, chain partners who treat security promises like marketing—and sometimes you are accused of that same non-conformity, as if you “enabled” the fraud because your processes did not meet what people later call “reasonable.”

I confront you with a painful paradox: if you treat online fraud as “an incident,” you pretend it is exceptional. Yet fraud becomes structural precisely when you fail to address it structurally. One successful fraud invites repetition, imitation, variants. And what happens internally? Finance wants money back. Sales wants to keep customers. IT wants log files. Legal wants facts. Management wants reputation control. And under that pressure the classic danger returns: you start talking before you know. You start promising before you can. You start compensating before you are sure what you are admitting. You start accusing—a staff member, a customer, a supplier—before you have evidence. And here is where it turns poisonous: you have been harmed, you want to recover, you want to correct, you want to move on. But in that haste you can yourself act non-conformingly—not out of malice, but out of stress—and that is precisely when the bill arrives later: “why did you communicate it like this?” “why did you establish it like that?” “why didn’t you do it differently?” You see the point: you are not only assessed on the fraud; you are assessed on your reaction. And sometimes that assessment is harsher than the fraud itself.

That is why I steer online-fraud files toward the same core: one stream of facts, hard evidentiary preservation, controlled communication. I help you reconstruct: what actions were taken, by which accounts, from which devices, through which channels, with which authorizations, with which anomalies. I make sure you do not only look at “who did it,” but also at “which control should have stopped this”—and above all: what you do now to prevent repetition without panic measures that paralyze operations. I help you, contractually and practically, toward banks, payment providers, platforms, and insurers: not with theatrical indignation, but with evidence and timing. And I offer hope through precision: fraud is disruptive, but it is manageable if you do not let it sink into shame and chaos. Whoever gets the facts in order can both recover what is possible and prevent the outside world from treating your processes as an open buffet. And precisely because sometimes you are harmed by non-conforming conduct and sometimes you are accused of having shown that same non-conforming conduct yourself, I make sure your file is not “pretty,” but hard: traceable, explainable, defensible.

Data Theft

Data theft is the moment you realize that “data” is not an abstract asset, but the embodied trust of people who entrusted something to you. Customer data, personnel files, trade secrets, contracts, strategic documents—these are not just files; they are relationships, expectations, promises. And that is exactly why data theft is so destructive: it does not only hit your systems, it hits your credibility. In this changing world, reputation is no longer something you calmly build with marketing and customer friendliness; reputation is something you can lose in a single incident if you do not control the facts and discipline the communication. And here you must endure my hardness for a moment: whoever engages in “gut-driven communication” after data theft volunteers for the defendant’s bench. Sometimes you are harmed by non-conforming conduct—suppliers who did not live up to their security promises, chains with gaps, attackers who exploited—and sometimes you are accused of that same non-conforming conduct, as if you were the guard who fell asleep. You may feel indignation about that, but indignation is not a strategy; evidentiary discipline is.

I often see organizations shoot themselves in the foot by wanting to be definitive too early. “There is no evidence of exfiltration.” “We have no indications that personal data was taken.” Those sentences might later prove true, but at the moment you say them you often cannot substantiate them. And the world you operate in has changed: people accept fewer smoke screens, but they also accept fewer certainties that are later withdrawn. So you are handed a bizarre assignment: you must be transparent about uncertainty without sounding uncertain. You must show responsibility without admitting liability that the facts do not yet support. You must show empathy for those affected without weakening your position through panic phrasing. That is not a PR trick; that is executive maturity under pressure. And precisely under that pressure, a dangerous distortion arises: your nuance is read as evasion, or your caution as unwillingness. Then you—while harmed—are accused of non-conforming conduct because you were not “enough,” not “fast enough,” not “clear enough.”

My role then is to force you into a controlled truth. I build, with you, a fact matrix: what we know for sure, what we do not know, what we are investigating, and what the source is for every conclusion. I make sure you preserve evidence: network logs, proxy logs, cloud audit logs, endpoint data, DLP events, mail flows. I make sure forensic partners work within a tight framework: scope, chain of custody, reporting lines. And I help you limit exposure: not by minimizing, but by being precise. Sometimes you are harmed by non-conforming conduct, and yet the suspicion arrives at speed: “why didn’t you prevent this?” I then build with you an answer that holds: not defensive, but factual. Not loud, but convincing. The hope I offer you is that whoever guards the truth later does not have to choose between shame and aggression. You can remain upright if you show discipline now—even if the world wounds you and suspects you at the same time.
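
A fact matrix does not need heavy tooling; what matters is that every statement carries its status and its source. Below is a minimal sketch with illustrative categories and placeholder entries, not a template drawn from any real case.

```python
# Minimal sketch: a fact matrix as structured records, so every statement
# carries its status and its source. All entries and dates are placeholders.
from dataclasses import dataclass
from datetime import date

@dataclass
class Fact:
    statement: str
    status: str        # "confirmed", "under investigation" or "unknown"
    source: str        # log, forensic report, interview, supplier statement
    recorded_on: date
    notes: str = ""

fact_matrix = [
    Fact("Attacker accessed the file server between 02:10 and 03:40 UTC",
         status="confirmed", source="firewall and file-server audit logs",
         recorded_on=date(2024, 1, 1)),
    Fact("Personal data was exfiltrated",
         status="under investigation", source="forensic partner, interim report",
         recorded_on=date(2024, 1, 1)),
    Fact("Backups were altered",
         status="unknown", source="no evidence reviewed yet",
         recorded_on=date(2024, 1, 1)),
]

for fact in fact_matrix:
    print(f"[{fact.status.upper()}] {fact.statement} (source: {fact.source})")
```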

Cryptojacking

Cryptojacking is the kind of attack that is so treacherous because it often does not feel “dramatic.” No lockscreen, no ransom note, no immediate chaos—just systems that slow down, cloud costs that rise, CPUs that mysteriously run hot, containers that are suddenly busier than anyone planned. And precisely because it does not scream, it is often seen too late or—worse—dismissed as a performance issue. But cryptojacking is theft and abuse. It is the hijacking of your computing power, your infrastructure, your budget, your energy, your capacity. It is an attack on your integrity as an organization: your systems become a mine, and you pay the electricity bill. And here too the bitter double layer applies: sometimes you are harmed by non-conforming conduct—cloud configurations that once stayed open “temporarily,” suppliers who made security “optional,” chain agreements that were not followed—and sometimes you are accused of that same non-conformity, as if you deliberately failed to govern. In this era, negligence is sometimes confused with overload, and overload is not accepted as an excuse.

I confront you with the executive humiliation cryptojacking can be: you discover you have been financing someone’s revenue model for months without noticing. And in a changed world where transparency and accountability are enforced ever more aggressively, that immediately triggers questions. How could this go unnoticed? Where was monitoring? What about cloud governance? Who had access? Which secrets were exposed? Which images were running? Which CI/CD pipelines were vulnerable? And then comes the most dangerous moment: you start defending yourself with explanations that sound like excuses. “We were busy.” “We don’t have enough people.” “It’s complex.” All of that may be true, but it is rarely persuasive. People do not want to hear your reasons; they want to see your measures. And if those measures are missing or cannot be demonstrated, you—on top of being harmed—are accused of non-conforming conduct. The paradox is cruel, but I will not polish it away: in this world you can be right and still be found wrong if you cannot substantiate it.

That is why I treat cryptojacking not as a technical cleanup, but as an incident that exposes governance. I make sure you secure the trail: which workloads, which accounts, which API keys, which scripts, which exploit route. I help you contain the incident without blindly panicking and toppling half your production environment. And I force documentation: when it was discovered, what signals were present, what decisions you took, what measures were taken to prevent recurrence. Because later the question always comes: what did you learn, and how do you prevent your infrastructure from being hijacked again? The hope I offer you is that cryptojacking—however embarrassing—can become the moment you show maturity. Not by denying or downplaying, but by showing control, producing facts, and demonstrably improving. And precisely because sometimes you are harmed by non-conforming conduct and sometimes accused of that same failing, I build with you a narrative that does not run on emotion, but on discipline. In this time, that is the only answer that still stands when the smoke clears.
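
As a small illustration of the kind of signal that should not be dismissed as a performance issue, the sketch below scans an exported metrics file and flags hosts whose CPU stays high for a sustained stretch. The file layout, the column names and the thresholds are assumptions; the real point is that someone owns the baseline and reviews the deviations.

```python
# Minimal sketch: flag hosts whose average CPU stays unusually high for a
# sustained window, a common first signal of cryptojacking. The CSV layout
# (host, hour, avg_cpu_percent) and the thresholds are assumptions, and the
# rows are assumed to be in chronological order per host.
import csv
from collections import defaultdict

CPU_THRESHOLD = 85.0   # percent; tune to your own baseline
SUSTAINED_HOURS = 6    # how long it must persist before it is flagged

def flag_hot_hosts(metrics_csv: str) -> list:
    streaks = defaultdict(int)
    flagged = set()
    with open(metrics_csv, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            host = row["host"]
            if float(row["avg_cpu_percent"]) >= CPU_THRESHOLD:
                streaks[host] += 1
                if streaks[host] >= SUSTAINED_HOURS:
                    flagged.add(host)
            else:
                streaks[host] = 0
    return sorted(flagged)

if __name__ == "__main__":
    for host in flag_hot_hosts("hourly_cpu_metrics.csv"):  # placeholder export
        print(f"Review workloads on {host}: sustained high CPU without a known cause")
```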
