Technology, Digital Transformation & Emerging Risk Advisory

Digital transformation is still being sold in boardrooms as a seductive promise: faster, smarter, cheaper, “data-driven,” “AI-first,” and above all as if technology could suspend the gravity of responsibility. But you and I know better. Because every new connection, every cloud migration, every API, every data hub, every integration partner is not a neutral tool, but an extra vein through which your organization bleeds when things go wrong. You expand your attack surface, you replace owned control with borrowed control, you trade tangible certainty for contractual paper and dashboards designed mainly to soothe. And you call it “modernization,” while in fact you are building a complex ecosystem that stays upright only as long as nobody tests it with malice, bad luck, or managerial laziness. The world is changing, at speed, and that change is not romantic: it is merciless. The outside world does not smell incompetence only when the sirens wail, but the moment your answers soften, the moment your logs are missing, the moment your audit trail shows holes, the moment your supplier contract contains more marketing language than enforceable obligations. Then the file is born: fragmented facts, contradictory statements, internal irritation, a board that “was not fully informed,” and a regulator or counterparty that sees only one thing: a lack of control.

And here it gets uncomfortable, so let’s make it properly uncomfortable. Sometimes you have been harmed by non-conforming conduct in digital chains: an IT service provider with access that was far too broad, a software vendor postponing patching, an integrator who opens something “temporarily” and then forgets to close it. But you are also sometimes accused—rightly or wrongly—of letting it happen, of looking away, of valuing the business case above the evidentiary burden of due care. And in today’s world, that is the difference between “bad luck” and “fault”: not what you felt, but what you can prove. Not what you thought, but what you recorded. Not that you once had a policy, but that the policy actually worked when it hurt. That is why I do not position myself as the lawyer who arrives afterward with the wisdom of hindsight and explains what you should have done. I position myself as the one who forces you to think forward in evidence: what decisions must you take now so that later you can show you acted carefully, recognized risks, demanded compensating measures, corrected course, learned. Because anyone who cannot do that is not treated in this era as a victim of circumstances, but as the architect of their own weakness—and that is a verdict you cannot scrub away with good intentions.

Digital Transformation & Process Integrity

The moment you touch digital transformation, you touch the nerve of process integrity: the reliability of your numbers, the reproducibility of your transactions, the traceability of decisions, the consistency of your data across the chain. You can say “we innovate” a thousand times, but if your digital process design rattles, you are not building the future—you are building a factory for systemic errors. And systemic errors are dangerous precisely because they are quiet: they do not lie on the pavement like broken glass; they sit inside your organization like a wrong formula in a spreadsheet that everyone trusts. You notice them only when someone uses them. When an internal actor discovers that controls can be bypassed. When an external party manipulates transaction flows. When, in an investigation, you have to explain why numbers do not reconcile, why master data changed, why approvals were never logged, why traceability was “temporarily limited due to migration.” That word “temporary” is, in digital files, a confession disguised as a project plan.

I have seen—and you probably have, too—how integrity risks are born out of organizational cowardice. The business wants to go live, the project team wants a win, the CIO wants delivery, the CFO wants cost savings, and somewhere in that procession stands someone who says: “We’ll fix the controls later.” Later. As if later were a date in a calendar rather than a moral postponement. While in truth, from day one you must demand that transactions are traceable, reconciliations reproducible, exceptions visible, segregation of duties not sacrificed on the altar of “agile” and “devops.” And when you do not enforce that, a parallel reality emerges: systems run, dashboards glow green, but nobody can prove the underlying reality is correct. And if you are then harmed by non-conforming conduct—by a vendor who could do too much, by a workaround that was never rolled back—you suffer the second blow immediately: you cannot show that you could have prevented it.

That is why I confront you with something executives sometimes find irritating: discipline. Not the discipline of paper, but the discipline of demonstrability. I want you to be able to show why you made an architectural choice, which risks you explicitly accepted, which compensating controls you put in place, which test results you reviewed, which exceptions you allowed, and who signed off. I want you to prove that integrity was not “patched on” afterward, but “built in” beforehand. And yes—this means you sometimes say “no” to speed. But that “no” is not obstruction; it is insurance. It is the line that later saves you when you are accused of negligence, or when, as the harmed party, you must show the failure lay outside your control. In this changing world, the winner is not the one with the prettiest roadmap, but the one with the hardest evidentiary case for due care.

Cybersecurity & Data Security

Cybersecurity is still too often treated as technical theatre: a SOC, a few dashboards, an annual pentest, and a presentation of threat pictures to startle the board awake. But you do not need to be startled awake; you need to stay awake. Because cyber incidents are no longer merely IT problems—they are business continuity problems, reputation problems, liability problems, and increasingly evidence problems. When your systems are attacked, it is not only data that gets stolen; reality itself is tampered with. Financial transactions can be manipulated, audit trails contaminated, log sources disabled, and you are left with a management team that mostly hopes it “won’t be that bad.” And hope is not a control. Hope is an emotion. You need controls that function under stress, not in PowerPoint.

And you are not attacked only from the outside. I am sharp with you here because the discomfort is necessary: internal threats and insider abuse are structurally underestimated because they are organizationally painful. They force you to admit that trust is not a strategy. You must set hard boundaries on access rights, minimize privileges, enforce logging as if it were oxygen, and build monitoring as if you will one day have to reconstruct events under hostile questioning. Because that is what happens when you are accused: people do not ask whether you “had a policy,” they ask how you managed access, how you detected anomalies, what alerts existed, who reviewed them, and why no one intervened. And if you have been harmed by non-conforming conduct—an external provider with overly broad access, a patch delayed—you must be able to show you demanded and verified reasonable measures rather than merely “trusted.”

And then there is the GDPR—not as a buzzword, but as a reality that constrains your freedom. In today’s world, data privacy is not an appendix; it is a core risk. You can no longer pretend data is “somewhere,” that subprocessors are “fine,” that cross-border transfers are a footnote. If you do not detect and report breaches adequately, if you do not know where data goes, if you cannot evidence retention periods, you are exposed: to claims, to regulators, to reputational damage, and to the moral judgment that you lacked control. I offer you hope, but not the cheap kind: you can manage this by anchoring security in governance, drilling incident readiness as a reflex, and treating evidence safety (logs, snapshots, immutable storage) with the same seriousness as firewalls. Because in this era the question is not whether you will be tested, but how you remain standing when you are tested.

Digital Forensic Analysis & Data Integrity

You can only defend what you can reconstruct. And reconstruction is not a luxury; it is the backbone of any serious response to incidents, suspicions, and claims. Digital forensics is not only about “who did it,” but about “what is still reliable.” Because when data integrity wobbles, everything wobbles: your financial reporting, your internal controls, your stakeholder communications, your credibility. In an investigation, every gap in your log chain is an invitation to distrust. Every missing piece of evidence is filled by the other side with insinuation. And in this world, where facts circulate faster than verification, that is lethal. You cannot persuade with feelings; you persuade with verifiable traces.

I also confront you with the hard, unromantic work: securing logs, audit trails, and transaction histories as if you will have to present them tomorrow in a hostile environment. Because that is exactly what happens when pressure rises. If you have been harmed, you want to prove you depended on a third party that acted non-conformingly, that the deviation originated there, that you had—or could not have had—signals, that you had reasonable controls but still suffered a breach. If you are accused, you want to show you were not asleep: you had monitoring signals, you acted on them, you had escalation paths, you preserved what mattered. And note this: preservability is not chance; it is design. If your systems do not produce a reliable audit trail, you do not have an “incident,” you have an evidence crisis.
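What “preservability is design” means can be made concrete. As a minimal sketch—not any specific product’s API, and simplified to the bare mechanism—a hash-chained audit log commits each record to the hash of its predecessor, so that tampering with any earlier entry is detectable later, under hostile questioning included:

```python
import hashlib
import json

def append_entry(chain, event):
    """Append an event to a hash-chained audit log.

    Each entry embeds the SHA-256 hash of the previous entry, so any
    later alteration of an earlier record breaks every subsequent link.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every link; return True only if no entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"actor": "svc-integrator", "action": "config_change"})
append_entry(log, {"actor": "admin", "action": "privilege_grant"})
assert verify_chain(log)

# Simulate after-the-fact tampering with the first record:
# verification now fails, which is exactly the point.
log[0]["event"]["action"] = "harmless_read"
assert not verify_chain(log)
```

Production systems would add signing, time-stamping, and write-once storage; the sketch only shows why a reliable audit trail is a property you build in, not one you reconstruct afterward.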

That is why I force you to see forensic capability not as an external emergency service you call only when smoke pours from the roof, but as an integrated discipline. That means deciding in advance what “critical logs” are, how long you retain them, how you ensure integrity, who has access, how you organize chain of custody, how you protect privilege and confidentiality without sabotaging the investigation. It means understanding that cross-border data transfers are not only a privacy question, but also an evidence and accessibility question: can you secure data in time, can you analyze it lawfully, can you share it with the right parties without creating new violations? That sounds heavy—and it is. But here lies the hope: organizations that are forensically mature do not have to improvise during an incident. They act. They keep control. They do not let the storm decide what is “true”; they can prove what was true.

Cloud & Third-Party Technology Risks

Cloud is not a destination; it is a dependency contract disguised as flexibility. You buy scalability, but you borrow control. You buy speed, but you accept a chain of subprocessors, subcontractors, shared responsibility models, and a technical-legal reality in which nobody ever seems fully responsible—until you are. And when things go wrong, you are suddenly faced with questions that are not technical but existential: where was the data, who had access, who could change logs, what backups exist, how fast can you recover, which audit rights can you enforce, what is your exit, how do you prevent vendor lock-in from turning you into a hostage? If you do not have hard answers to those questions, you do not have a cloud strategy—you have a belief system.

I have seen files where “trust in the supplier” was the only rationale. You can feel where that ends. Suppliers are not malevolent because they are suppliers; they are rational: they optimize for their own risk and their own margin. If you do not make audit rights enforceable by contract, you will not get them when it hurts. If you do not drill incident response jointly, you discover in the crisis that your escalation path ends in a ticketing system with a response time of “within five business days.” If you do not build exit scenarios, you learn that data export is an add-on product you still need to purchase, or that your configurations are so proprietary that you can migrate only by rebuilding. And then, when you have been harmed by non-conforming conduct, you are told it was “within the SLA,” that “best effort” was provided, and that the definition of “availability” did not include what you thought it included. You thought you bought certainty; you bought language.

The hope lies in hardness, not cynicism. I want you to treat third-party risks as board-level matters: selection based on demonstrable controls, periodic reassessment, audit rights, obligations around logging and preservation, clear allocation of responsibilities, security-by-design requirements, subprocessor transparency, and above all a workable incident and recovery plan that you test together. I want you to prove you were not naïve, but professional. That you did not merely have “vendor management,” but vendor governance. Because in this changing world chains are judged by their weakest link, and that link is often not the technology but the agreement. You do not have to be perfect. But you must be defensible. And defensibility is not an accident; it is the result of choices you harden today.

Transaction Monitoring & Fraud Detection Systems

Transaction monitoring is often sold as a miracle machine: put AI on it, and fraud disappears. But you and I know that is a dangerous illusion. Monitoring is not magic; it is a system of assumptions. Which patterns do you call suspicious? Where do you set thresholds? Which data is reliable enough to serve as input? How many false positives do you tolerate, and how many false negatives can you live with? Every setting is a choice, and every choice has consequences—legal, financial, reputational. If you generate too many alerts, you drown in noise and, in practice, do nothing. If you generate too few, you are blind and you call it “efficient.” And when something goes wrong, the question you do not want to hear returns: why didn’t you see this, and why didn’t you have a reasonable detection capability?

It becomes even sharper because transaction monitoring often sits precisely at the boundary where accusation and harm collide. You may be the victim of sophisticated manipulation—by external parties, internal employees, collusion. But you may also be accused that your systems were so weak that manipulation was predictable. Inside that accusation is a toxic logic: “If you didn’t see it, you didn’t want to see it.” You feel how unfair that can be, but you also sense it sometimes lands. That is why monitoring must be legally defensible: not only technically “state of the art,” but governance-backed. You must be able to show how you validate models, how you manage changes, how you preserve explainability, how you organize alert handling, how you structure escalation to compliance and legal, and how you make the chain from signal to decision demonstrable.

I am not offering you a soft message here, but I am offering you an exit that works. If you treat monitoring as a strategic instrument, you build a defensible line. You document not only that you have a tool, but that you have a process: governance around model risk, periodic tuning, independent reviews, training for teams that assess alerts, clear decision logic showing why something was or was not escalated, and above all a board-level picture of risks that is not based on soothing averages but on sharp deviations. In the changing world where data moves faster than morality, monitoring is not merely a detection mechanism—it is your mirror. And if you dare to look into that mirror and correct course in time, you can later say: I invested not only in technology, but in demonstrable due care. That is how you remain standing when you are tested, or when you must recover damages from those who acted non-conformingly.

IT Governance & Internal Controls

If you ask me where digital transformation most often derails, I won’t point to hackers, I won’t point to the cloud, I won’t even point to the technology itself. I point to governance that lags behind reality like a board member who walks in late and still insists on steering the conversation. IT governance is not a stage prop, not a folder of “policies” you pull from a cabinet during an audit, not a set of principles you hang on the wall while the real work happens elsewhere. IT governance is how you prove you did not hand control over to speed, to suppliers, to project teams that “just sort something out,” or to a culture in which everyone assumes someone else is responsible. And you know exactly what happens then: when it goes wrong, no one owns the problem, and so you—as an executive, or as an organisation—automatically become the owner of the misery.

I see it again and again: digital programmes are run by enthusiastic teams with deadlines, KPIs, and a mandate that in practice is broader than anything ever formally documented. Change management becomes “agile,” and “agile” becomes a licence to treat controls as an inconvenience. Segregation of duties gets “temporarily” relaxed because otherwise the plan won’t hold. Exceptions are granted without anyone asking the questions that matter: what is the compensating control, where is this decision recorded, who signs it, and what is the plan to return to normal? And then reality walks in. There’s an incident, a claim, an investigation, or an allegation that your internal control environment failed. And then the truth is painfully simple: if your governance does not grow with the technology, your governance dies. You do not want to die; you want to prove you were alive.

That is why I force you to stop seeing internal controls as the CFO’s territory or as “audit fuss,” and to start seeing them as an executive survival kit. I want you to be able to show the full data lifecycle: from source to reporting, from change to approval, from incident to recovery. I want you not only to assign access rights, but to review them periodically, and to be able to explain why someone was allowed to do what they did. I want you to keep change management from drowning in tickets by tying it to risk: which change affects financial integrity, which affects privacy, which affects compliance, and which affects continuity? And yes, that means discipline, documentation, and above all a culture in which “we’ll fix it later” is not praised as decisiveness but heard for what it is: a warning. The hope is simple: governance that demonstrably works does not make you untouchable, but it does make you defensible—and in this era, defensibility is the real currency.
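The periodic access review described above reduces, at its core, to set arithmetic: compare what an account can actually do against what its role justifies, and treat the difference as findings to explain or revoke. The role names and permissions below are hypothetical, a minimal sketch rather than any real entitlement model:

```python
# Hypothetical role matrix: role -> permissions that role justifies.
ROLE_MATRIX = {
    "ap_clerk":   {"invoice.create", "invoice.view"},
    "ap_manager": {"invoice.view", "invoice.approve"},
}

def review_access(role, granted):
    """Return permissions granted beyond what the role justifies."""
    return sorted(granted - ROLE_MATRIX.get(role, set()))

# An AP clerk who can also approve invoices breaks segregation of duties:
# the review surfaces exactly that excess right as a finding.
findings = review_access(
    "ap_clerk", {"invoice.create", "invoice.view", "invoice.approve"}
)
assert findings == ["invoice.approve"]
```

Run on a schedule, with each finding either revoked or explicitly accepted and signed off, this is the evidentiary trail that lets you explain later why someone was allowed to do what they did.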

Regulatory Tech Compliance

You can no longer hide behind the excuse that regulation is “complex” and technology is “fast.” The world is moving, and precisely because it is moving, organisations that use digital processes for compliance—or that allow compliance to drown in digital chaos—are scrutinised far more aggressively. Regulatory tech compliance is not a luxury; it is a defensive line. Because the moment you run automated workflows for sanctions screening, anti-money laundering controls, reporting obligations, data processing, or risk reporting, you make a promise to the outside world: we have this under control. And if you cannot substantiate that promise, technology stops being your shield and becomes your trap. Then it is not “a small mistake,” but a structural failure. Then people say: you had systems, you had data, you had tooling—so why didn’t you see it?

I’m addressing you directly because I know exactly how this goes. Compliance asks for tooling, the business wants no friction, IT builds integrations, and everyone is satisfied as long as the dashboard shows checkmarks. But dashboards do not lie because they are malicious; they lie because you configured them to calm you down. If you do not test periodically whether your rules work, whether your lists are current, whether your matching criteria are sensitive enough, whether your exceptions are handled correctly, then you are not a compliant organisation—you are an organisation that wants to look compliant. And if you are then harmed by non-conforming conduct in the chain—a supplier delaying updates, an integration partner bypassing screening—you get the question you do not want: why didn’t you discover this, why was there no independent verification, why was there no audit trail that made the failure visible?
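Whether matching criteria are “sensitive enough” is a testable property, not an opinion. A minimal sketch using Python’s standard `difflib` (a stand-in for whatever screening engine you actually run, with an illustrative watchlist) shows how the same misspelled name passes or fails screening depending on the similarity cutoff—which is exactly why such cutoffs belong under documented, periodically re-tested governance:

```python
import difflib

# Illustrative watchlist entries, not real designations.
WATCHLIST = ["John Doe", "Acme Sanctioned Trading Ltd"]

def hits(name, cutoff):
    """Return watchlist entries that fuzzily match `name` at this cutoff."""
    return difflib.get_close_matches(
        name.lower(), [w.lower() for w in WATCHLIST], n=3, cutoff=cutoff
    )

# A slightly misspelled name is caught at a tolerant cutoff...
assert hits("Jon Doe", cutoff=0.85) == ["john doe"]
# ...and silently missed when the cutoff is tightened without re-testing.
assert hits("Jon Doe", cutoff=0.95) == []
```

A regression suite of known-good and known-bad names, run after every rule or list change, turns “our matching works” from a dashboard checkmark into something you can evidence.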

The hope lies in maturing compliance into a digital discipline: you build controls that do not merely exist, but are tested. You ensure that regulatory changes are translated into logic changes, and that those changes are documented, reviewed, and authorised. You make escalation paths work: who decides when there is doubt, how fast, with what documentation, and with what legal assessment? You distinguish between “operationally correct” and “legally defensible,” because they are not the same thing. And you record why you made certain choices, including the limitations of tools and data. Because in a world that judges more harshly, the question is not whether you once bought a tool, but whether you demonstrably governed what that tool actually did.

Data Privacy & Protection

Privacy is no longer a chapter in a compliance handbook; privacy is a permanent negotiation with reality. You process more data than you think, you share more data than you can see, you keep more data than you dare admit. And while you focus on growth, innovation, and “customer experience,” a shadow grows in the background: data flows you have not fully mapped, access rights that have grown historically, exports created “for analysis,” and sub-processors you know only from an appendix nobody reads. When it goes wrong, it is not a misunderstanding but a file. And a file has no patience for your good intentions. A file asks: where is the legal basis, where is the data mapping, where is the DPIA, where is the retention period, where is the notification, where is the evidence that you acted with due care?

Sometimes you are harmed by other parties’ non-conforming conduct: a cloud partner leaving a misconfigured storage bucket exposed, an integrator routing data outside the agreed scope, a vendor using sub-processors without transparency. But you are also sometimes accused that you failed to supervise adequately, that you did not take privacy-by-design seriously, that you let governance slip, that you underestimated the risks of cross-border transfers. And let’s be honest: in digital chains, supervision is not a moral preference, it is an executive duty. If you cannot demonstrate that you knew what was happening to data, then the privacy story becomes a story of loss of control. And loss of control is exactly what the outside world latches onto, because it signals not only vulnerability, but possible negligence.

The hope is radical clarity: you do not build privacy as a brake, but as a structure. You make data flows visible, you limit access to what is necessary, you minimise, you pseudonymise where possible, and you ensure logging that serves not only security but accountability. You organise incident response and breach notification not as something to “quickly figure out,” but as a drilled reflex with a legal backbone: who decides, who documents, who notifies, and on what basis? And you ensure that third parties do not dictate your privacy landscape, but that you enforce what they must do: transparency, audit rights, notification deadlines, and hard commitments on sub-processors. You do not have to be perfect, but you must be able to prove you were not careless.

Crisis Management & Incident Response in a Digital Context

There is a moment when the world suddenly shrinks into a small room with loud voices. An incident happens. A system is down. Data is gone. A journalist calls. A regulator asks questions. A customer threatens action. The board wants to know within an hour “what is going on” while you don’t even know what exactly has happened yet. And that is precisely where crisis management reveals itself as truth: not what you had on paper, but what you do in practice when pressure drains the blood from your fingers. Many organisations have incident response plans that look like serious documents but function in reality as set dressing—pretty, neat, useless. Because a plan you never test does not exist. It is a wish.

I am confrontational because I want to save you from the classic failure: improvisation without evidentiary discipline. In the first hours of an incident, evidence is often destroyed unintentionally: logs get overwritten, systems are reset, accounts are changed en masse, external parties are let in without a controlled chain of custody. People want to “restore,” but they forget that restoration without preservation is later interpreted as concealment. And that is the most poisonous distortion in a file: that your reasonable attempt to limit damage is reframed by the other side as an attempt to erase traces. If you have been harmed by non-conforming conduct, you want to secure traces precisely to substantiate liability. If you are accused, you want to show you acted professionally: controlled, documented, proportionate, and legally anchored.

That is why incident readiness must be a working reflex. I want you to simulate, drill, and repeat until the team no longer has to think about who calls whom, who decides, who documents, who communicates, and what must absolutely not happen. I want communication not to be surrendered to panic, but to be led by strategy: transparent where possible, restrained where necessary, and always consistent with facts you can substantiate. I want you to have agreements in place with suppliers in advance about their role, their response times, their access, their notification obligations, and their support for forensic investigation. The hope is that this not only helps you recover faster, but above all ensures you remain standing when the outside world tests you. A crisis does not have to break you—unless you keep pretending preparation is something for later.

Strategic Tech Investments & Digital Resilience

Investing in technology today is no longer a matter of “seizing opportunities”; it is a choice about what kind of organisation you dare to be in a world that punishes faster than you can explain. Every euro you spend on AI, analytics, blockchain, automation, or new platforms buys not only functionality, but risk: model risk, dependency risk, integrity risk, compliance risk, reputational risk. And if you do not treat it explicitly as such, you are effectively buying a time bomb wrapped in a nice ribbon. I know that sounds harsh, but I say it because it is true: digital resilience is not a by-product of innovation. Resilience is a design choice. And those who do not make that choice receive resilience as a punishment instead of as a strategy.

Sometimes you are harmed because technology partners or internal teams act non-conformingly: “temporary” shortcuts, untested releases, half-understood AI models, data quality sacrificed for speed. But you are also sometimes accused that you invested without governance, that you parked risks, that you placed the business case above the evidentiary burden of due care. And that is exactly why strategic investments need a legal and executive architecture. Not because you should be afraid, but because you should be realistic. If your AI model influences decisions, you must be able to explain how it works, what its boundaries are, how you prevent bias and manipulation, how you protect input data, and how you detect incidents related to model behaviour. If you cannot do that, your innovation is not progress; it is a vulnerability in a tailored suit.

The hope lies in a mature investment discipline that needs no glamour. You link every strategic tech investment to a resilience plan: backup strategies that work, disaster recovery that is tested, business continuity that is realistic, and monitoring that detects anomalies before they become headlines. You build governance around change: who may do what, when, with which controls, and with which audit trail? You demand not only glossy certificates from suppliers, but practical support during incidents, access to relevant logs, transparency about sub-processors, and an exit you can actually execute without paralysing your organisation. And you report not only success to the board, but also risk: where are we vulnerable, what do we accept, what do we mitigate, what have we learned? In this changing world, resilience is not a shield you buy. It is a posture you prove. And if you prove that posture—consistently, demonstrably, hard—then in the storm you can say something rare: I did not gamble on luck; I built on due care.
