Several EU cybersecurity regulations have come into effect or are approaching their deadlines: NIS2, the Cyber Resilience Act, the EU AI Act, and DORA. They all require organizations to implement concrete technical security measures — risk analysis, vulnerability management, incident response readiness, supply chain security, staff training, and more.
I’ve been implementing exactly these measures for many years, long before any regulation required them. My services address the technical security work that these regulations demand. What I don’t do is compliance certification, regulatory gap analysis on paper, or formal compliance sign-off — that’s the job of your legal team, your compliance function, or a dedicated GRC consultancy.
In short: these regulations tell you what security measures to implement. I help you actually implement them.
Regulations require documented risk analysis covering threats, vulnerabilities, and the effectiveness of security measures.
My Agile Threat Modeling workshops and the Attack Tree Quickstart produce documented attack trees with threat actors, attack paths, security controls, effectiveness ratings, and risk simulations. Those deliverables are usable as evidence for risk analysis requirements — because they are risk analysis, not a regulatory checklist.
Most of these regulations also require testing the effectiveness of security measures and handling vulnerabilities systematically.
The Application Pentest, API Security Check, Cloud Security Check, and Container Platform Review, and Container Platform Review all produce detailed findings reports with evidence, risk ratings, and remediation guidance. The Security Sparring Partner adds expert triage when your own scanning tools produce results that need prioritization. When attack trees are available, the effectiveness of implemented controls can additionally be verified through Micro Attack Simulations.
DORA introduces Threat-Led Penetration Testing (TLPT) for in-scope financial entities, modeled on the TIBER-EU framework.
Before any testing happens, TLPT starts with a threat intelligence phase: relevant current, sector-specific, and entity-specific threats are collected, and realistic attack scenarios are defined across physical intrusion, social engineering, and technical attack vectors. Those scenarios are then executed as a red team assessment against live production systems — a multi-week engagement requiring an accredited provider, a dedicated team, and formal oversight by the competent authority. That red team execution is not what I offer; it belongs with a specialized provider. What I can help with on that side is coordination and preparation around the exercise.
Where I do fit is the work on either side of the red team. Upfront, my Agile Threat Modeling workshops and the Attack Tree Quickstart produce exactly the kind of documented threat actors, critical functions, attack paths, and control assumptions that a TLPT scenario definition has to build on. And alongside TLPT, DORA positions Threat-Led Penetration Testing as a complement to classic asset-based pentests, not as a replacement. The Application Pentest, API Security Check, Cloud Security Check, Container Platform Review, and Attack Surface Mapping deliver the asset-level depth that DORA still expects in parallel, optimized for coverage of individual systems rather than end-to-end red team realism.
Supply chain security and secure development practices are another recurring requirement.
The Secure SDLC Process Review assesses your entire development process against established maturity models. DevSecOps Pipeline coaching helps automate security scanning — SCA + SBOM, SAST, container scanning, AI-based code review — directly in your CI/CD pipeline. Both address the supply chain and development process security requirements from NIS2 and the Cyber Resilience Act.
Regulations require security-by-design and defense-in-depth measures in system architectures.
Security Architecture consulting reviews your software and system design from a security perspective, including zero trust architecture principles, cloud security, microservice isolation, and GenAI component integration. This addresses the “security by design” requirements that come up across these regulations.
The EU AI Act introduces specific security requirements for AI systems, including adversarial robustness, accuracy validation, and cybersecurity measures for high-risk AI.
High-risk AI requirements under the EU AI Act become enforceable on August 2, 2026. Non-compliance can lead to substantial administrative fines; maximum penalties are defined as a percentage of global turnover and a fixed cap, so the stakes are high for in-scope systems.
The Agentic AI Security assessment covers these aspects: prompt injection, tool poisoning, data exfiltration, memory manipulation, and goal hijacking across LLM integrations, RAG pipelines, MCP tools, and agentic architectures.
NIS2 explicitly requires cybersecurity training for staff and board-level security awareness.
The Web Security Bootcamp and Pentesting Training build hands-on security skills in development teams. The Live Hacking Event is designed specifically for awareness at all organizational levels, including C-suite and board. Custom Focus Sessions address specific topics as needed.
Rather than engaging security expertise only when a regulation demands it, the Security Sparring Partner retainer provides ongoing access to security advisory. That’s closer to the continuous risk management posture these regulations are designed around.
None of the above constitutes legal advice or a compliance guarantee. I implement technical security measures. For regulatory compliance assessment, formal certification, or legal guidance on specific regulatory obligations, consult a qualified legal/compliance professional.