- 90% of organisations will adopt hybrid cloud by 2027, yet most fail their first compliance audit due to gaps in shared responsibility documentation. Defining who owns each security control before architecture design begins prevents the most common audit finding.
- GDPR does not mandate EU data residency, but data protection obligations travel with the data. Hybrid architectures that route data through non-EU nodes require Standard Contractual Clauses and documented Transfer Impact Assessments for every cross-border flow.
- Incomplete logging causes more audit failures than any other single control gap. Organisations that aggregate logs from all environments into a single tamper-evident platform with 12-month retention resolve 80% of auditor evidence requests within hours, not weeks.
At a Glance
Time required: 6 to 10 weeks (add 4 to 8 weeks for external certification audit)
Difficulty: Advanced (requires cloud architecture and compliance expertise)
Prerequisites: Existing cloud environment (or migration plan), identified regulatory frameworks, budget for tooling and potential external audit
Steps: 7
Outcome: A documented hybrid cloud architecture with mapped controls, audit evidence repository, and readiness for ISO 27001, GDPR, DORA, or SOC 2 assessment
What You Need Before Starting
Tools and Access
- Cloud provider console access: Administrative access to your cloud accounts (AWS, Azure, or GCP) with permissions to view network configurations, IAM policies, and logging settings
- Network architecture documentation: Current network diagrams showing on-premises infrastructure, cloud resources, and interconnections. If these do not exist, creating them is your first task.
- SIEM or log aggregation platform: Either an existing deployment (Splunk, Elastic, Azure Sentinel) or budget to deploy one. This is non-negotiable for audit readiness.
Knowledge and Permissions
- Cloud architect: Someone who understands networking, identity management, and security controls across your specific cloud platforms. General IT managers typically lack the depth auditors expect.
- Compliance stakeholder: A person who knows which regulatory frameworks apply to your organisation and can interpret control requirements. For ISO 27001 engagements, this person should have attended lead auditor or lead implementer training.
- Executive sponsor: Budget authority and the ability to enforce architectural decisions across teams. Hybrid cloud compliance requires changes to development workflows, access policies, and operational procedures.
Time Commitment
- Total estimated time: 6 to 10 weeks for design and evidence preparation
- Largest single block: Step 1 (data classification and regulatory mapping) at 1 to 2 weeks, as it requires input from legal, compliance, and technical teams
- Can be split across: 5 to 7 working sessions over 6 weeks, with implementation work between sessions
Step 1: Classify Data and Map Regulatory Requirements
What you will accomplish: A complete inventory of data types processed in your hybrid environment, each mapped to the specific regulatory obligations that govern how it must be stored, processed, and transferred.
Time required: 1 to 2 weeks
Every architectural decision in a hybrid cloud depends on knowing what data goes where and which rules apply. Without this foundation, you will build an architecture that works technically but fails at audit. Start by cataloguing every data type your systems process: customer personal data, financial records, health data, employee records, intellectual property, and operational telemetry. Each carries different obligations.
For European SMBs, the regulatory landscape typically includes GDPR for personal data, ISO 27001 for information security management, and potentially DORA if you operate in financial services or NIS2 if you provide essential services. The European Data Protection Board’s SME guidance provides a practical starting point for understanding which transfer requirements apply when data crosses borders within a hybrid architecture.
Key actions:
- Create a data inventory spreadsheet listing every data type, its classification level (public, internal, confidential, restricted), and the systems that process it
- Map each data type to applicable regulations: GDPR for personal data, DORA for financial entities' ICT risk (Article 28 governs ICT third-party arrangements), NIS2 for essential and important service operators
- Identify which data can reside in public cloud, which must stay on-premises, and which can exist in either location with appropriate controls
- Document data flows between on-premises and cloud environments, including any transit through non-EU regions
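The classification register described above can be sketched as a simple lookup that gates deployment decisions. This is a minimal illustration only: the data types, zone names, and the `deployment_allowed` helper are hypothetical placeholders, not part of any standard, so substitute your own inventory from the key actions.

```python
# Minimal sketch of a data classification register. All data types,
# classification levels, and location names below are illustrative.

REGISTER = {
    "customer_pii":      {"class": "restricted", "regs": ["GDPR"],         "locations": {"on_prem", "eu_cloud"}},
    "financial_records": {"class": "restricted", "regs": ["GDPR", "DORA"], "locations": {"on_prem"}},
    "ops_telemetry":     {"class": "internal",   "regs": [],               "locations": {"on_prem", "eu_cloud", "global_cloud"}},
}

def deployment_allowed(data_type: str, location: str) -> bool:
    """Return True if the register permits this data type in this location."""
    entry = REGISTER.get(data_type)
    if entry is None:
        # Unclassified data is itself a finding: fail loudly, not silently.
        raise KeyError(f"unclassified data type: {data_type}")
    return location in entry["locations"]
```

Feeding every architecture review through a check like this makes the register an enforced input rather than a shelf document.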
If this step fails:
- Data owners cannot classify their data: Start with a workshop format. Present the classification levels with examples from your industry. Most teams can classify 80% of their data within a 2-hour facilitated session. The remaining 20% typically requires legal input.
- Regulatory requirements are unclear: Engage a compliance consultant for a 1-day regulatory mapping exercise. The cost of a single consulting day is far less than redesigning an architecture that fails audit.
Checkpoint: You should now have a data classification register that lists every data type, its regulatory obligations, and its permitted deployment locations within your hybrid architecture.
Step 2: Define the Shared Responsibility Model Across Environments
What you will accomplish: A documented matrix showing which security controls are owned by your cloud provider, which are owned by your organisation, and which are shared, with named owners for every control.
Time required: 1 to 2 weeks
The shared responsibility model is where most hybrid cloud audit failures originate. Organisations assume their cloud provider handles security controls that are actually the customer’s responsibility. In a hybrid environment, this gap compounds because on-premises controls have no provider to share responsibility with at all. The Cloud Security Alliance’s Cloud Controls Matrix (CCM) provides a framework of 197 controls across 17 domains that explicitly maps which controls belong to the provider and which belong to the customer.
For each control domain, document the responsibility split across three environments: on-premises infrastructure (100% your responsibility), IaaS/PaaS cloud services (shared, with the split depending on service type), and SaaS applications (primarily provider responsibility, with customer controls limited to configuration and access). Your cloud provider’s compliance documentation will specify their side. Your job is to document your side and verify there are no gaps.
Key actions:
- Download your cloud provider’s shared responsibility documentation and map it against ISO 27017 cloud security controls
- Create a responsibility matrix with columns for: control name, on-premises owner, cloud provider responsibility, customer responsibility, and evidence location
- Assign a named individual (not a team or role) as owner for every customer-side control
- Identify controls that fall into gaps between environments, particularly backup, disaster recovery, and incident response that must span both
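The responsibility matrix and its gap check can be sketched in a few lines. The control names and owners here are invented for illustration; in practice the rows would come from your CCM or ISO 27017 mapping.

```python
# Sketch of a shared responsibility matrix with an ownership gap check.
# Control names and owner names are illustrative, not from any real export.

MATRIX = [
    {"control": "encryption_at_rest", "provider_side": True,  "customer_owner": "a.kowalski"},
    {"control": "backup_restore",     "provider_side": False, "customer_owner": "m.virtanen"},
    {"control": "incident_response",  "provider_side": False, "customer_owner": None},
]

def unowned_controls(matrix):
    """Customer-side controls with no named owner: the finding auditors look for."""
    return [row["control"] for row in matrix if not row["customer_owner"]]
```

Running this check before every audit cycle turns "no one wants to own controls" from a surprise finding into a tracked escalation item.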
If this step fails:
- Cloud provider documentation is vague: Request a written compliance brief from your provider’s enterprise support team. Major providers (AWS, Azure, GCP) all offer compliance-specific documentation for regulated customers. If your provider cannot clarify their control boundaries, that is itself a risk finding.
- No one wants to own controls: Escalate to your executive sponsor. Unowned controls are the exact finding auditors look for. Every control must have a named owner with authority and capacity to implement it.
Checkpoint: You should now have a complete shared responsibility matrix covering all control domains, with named owners for every customer-side control and documented provider responsibilities verified against their compliance documentation.
Step 3: Design Network Segmentation and Data Flow Controls
What you will accomplish: A network architecture where regulated data flows through controlled, encrypted pathways with access restrictions and logging at every boundary between on-premises and cloud environments.
Time required: 1 to 2 weeks
Network segmentation is the architectural control that auditors assess most rigorously in hybrid environments. The principle is straightforward: regulated data should not share network pathways with unregulated traffic, and movement between security zones must be explicitly authorised and logged. NIST SP 800-53 provides the control framework for network segmentation, including boundary protection (SC-7) and information flow enforcement (AC-4) controls that map directly to audit evidence requirements.
In a hybrid architecture, your segmentation design must address three boundary types. First, the connection between on-premises and cloud environments, typically through VPN tunnels or dedicated interconnects. Second, segmentation within the cloud environment itself, separating production from development and regulated workloads from general compute. Third, external-facing boundaries where data enters or exits your environment entirely.
For European SMBs subject to GDPR, pay particular attention to data flows that cross national borders. The European Commission’s guidance on international data transfers confirms that GDPR protections follow the data regardless of location. If your hybrid architecture routes data through cloud regions outside the EU, even temporarily during failover, you need documented legal mechanisms for that transfer.
Key actions:
- Design network zones aligned to data classification levels from Step 1: a restricted zone for regulated data, a standard zone for internal operations, and a DMZ for external-facing services
- Configure firewall rules that default to deny-all between zones, with explicit allow rules documented and justified
- Encrypt all data in transit between on-premises and cloud environments using TLS 1.2 or higher, with certificate management documented
- Deploy network flow logging at every boundary crossing and feed logs to your centralised monitoring platform
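The default-deny principle behind the firewall rules above can be expressed as a small policy function: intra-zone traffic passes, inter-zone traffic requires an explicit, justified allow rule. Zone names and justifications are illustrative assumptions, not a real rule set.

```python
# Default-deny zone policy sketch. Traffic between zones is blocked unless
# an explicit allow rule exists, each with a documented justification.

ALLOW_RULES = {
    ("standard", "restricted"): "nightly ETL load, change ref CHG-1042",  # illustrative
    ("dmz", "standard"):        "reverse proxy to internal app tier",
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Allow intra-zone traffic; require an explicit rule for inter-zone traffic."""
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone) in ALLOW_RULES
```

Keeping the justification string next to each rule gives auditors the "documented and justified" evidence in the same place as the rule itself.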
If this step fails:
- Legacy applications cannot operate within segmented networks: Create a documented exception register. For each exception, record the application, the segmentation rule it violates, compensating controls applied, the risk owner, and the remediation timeline. Auditors accept documented exceptions with compensating controls far more readily than undocumented gaps.
- Cloud provider VPC/VNet limitations constrain your segmentation design: Use service-level controls (security groups, network ACLs, private endpoints) to achieve logical segmentation where physical network separation is not available. Document the approach and its equivalence to the physical segmentation requirement.
Checkpoint: You should now have a network architecture with clearly defined security zones, encrypted inter-zone communication, documented firewall rules, and flow logging at every boundary.
Step 4: Implement Identity and Access Management Across Environments
What you will accomplish: A unified identity layer that enforces consistent authentication, authorisation, and access policies across both on-premises and cloud workloads.
Time required: 1 to 2 weeks
Identity and access management (IAM) in hybrid environments fails audits when on-premises and cloud systems use separate identity providers with inconsistent policies. An employee might have multi-factor authentication enforced in the cloud but single-factor access to on-premises systems that connect to the same data. Auditors treat this inconsistency as a control failure because it creates a path to regulated data that bypasses your strongest controls.
The solution is a single identity provider that governs both environments. For most European SMBs, this means federating on-premises Active Directory with your cloud provider’s IAM service, or migrating to a cloud-native identity platform that covers both. Whichever approach you choose, the audit requirement is the same: consistent enforcement of least-privilege access, role-based controls, and multi-factor authentication regardless of where the resource sits.
ISO 27001 Annex A controls 5.15 (Access control) and 8.5 (Secure authentication) both require documented access policies applied consistently across the information security management system (ISMS) scope. If your hybrid cloud is in scope, and it should be, inconsistent IAM is a nonconformity.
Key actions:
- Deploy or extend a single identity provider across on-premises and cloud environments (Azure AD/Entra ID, Okta, or equivalent)
- Enforce multi-factor authentication for all administrative access and all access to systems processing regulated data
- Implement role-based access control (RBAC) with roles defined by job function, not individual permissions. Document each role, its permissions, and the business justification
- Configure automated access reviews on a quarterly cycle: managers must confirm that each team member’s access is still appropriate for their current role
- Enable just-in-time (JIT) privileged access for administrative tasks: standing admin access should be the exception, not the default
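The inconsistency described earlier, MFA enforced in one environment but not another, is easy to detect mechanically once both environments report into one place. The account records below are hypothetical; real data would come from your identity provider's reporting API.

```python
# Sketch: flag accounts whose MFA enforcement differs across environments.
# Account data is illustrative; source it from your identity provider.

ACCOUNTS = [
    {"user": "admin1", "env_mfa": {"cloud": True, "on_prem": True}},
    {"user": "admin2", "env_mfa": {"cloud": True, "on_prem": False}},  # the gap
]

def mfa_inconsistencies(accounts):
    """Users with MFA enforced in some environments but not all."""
    flagged = []
    for acct in accounts:
        states = set(acct["env_mfa"].values())
        if states == {True, False}:  # mixed enforcement across environments
            flagged.append(acct["user"])
    return flagged
```

A report like this, run on the same quarterly cycle as the access reviews, doubles as operating evidence for the consistency requirement.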
If this step fails:
- Legacy on-premises systems do not support federation: Place these systems behind a reverse proxy or bastion host that does support your identity provider. Access to the legacy system passes through the proxy, which enforces MFA and logging. Document this as a compensating control.
- Teams resist removing standing admin access: Start with monitoring. Enable JIT access management alongside existing standing access for 30 days, tracking actual usage patterns. The data typically shows that 70 to 80% of admin access is used less than once per month, making the case for JIT clear.
Checkpoint: You should now have a single identity provider governing access across both environments, with MFA enforced for all privileged and regulated-data access, documented RBAC roles, and automated quarterly access reviews configured.
Step 5: Build Centralised Logging and Monitoring
What you will accomplish: A unified logging and monitoring platform that aggregates security events from all environments with tamper-evident storage, regulatory retention periods, and automated alerting.
Time required: 1 to 2 weeks
Centralised logging is the single control with the greatest impact on audit outcomes. When auditors request evidence that a control is operating, they look for logs. When they investigate an incident scenario during the assessment, they ask to see detection and response timelines. If your logs are fragmented across systems, incomplete, or retained for too short a period, every other control you have implemented loses its evidential value.
For hybrid cloud architectures, logging must cover four categories: infrastructure events (network flows, firewall decisions, resource provisioning), identity events (authentication attempts, authorisation decisions, privilege escalations), application events (data access, transactions, errors), and operational events (deployments, configuration changes, patch applications). Each category must be collected from both on-premises and cloud environments and aggregated into a single platform.
Retention requirements vary by regulation. GDPR does not prescribe a specific log retention period but requires records of processing activities. ISO 27001 typically expects 12 months of operational logs. DORA Article 17 requires financial entities to record ICT-related incidents and their handling. In practice, 12 months of hot storage with 3 years of archive satisfies most European regulatory frameworks.
Key actions:
- Deploy a SIEM or log aggregation platform (Splunk, Elastic Security, Azure Sentinel, or equivalent) with agents on all on-premises servers and cloud-native log ingestion enabled
- Configure log sources for all four categories: infrastructure, identity, application, and operational. Verify that no environment or system is excluded from collection
- Enable tamper-evident log storage: write-once storage, hash chains, or immutable blob storage that prevents log modification after collection
- Set retention policies aligned to your regulatory requirements: minimum 12 months hot, 3 years archive for most European frameworks
- Create automated alerts for critical security events: failed authentication patterns, privilege escalation, data exfiltration indicators, and configuration changes to security controls
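The hash-chain option mentioned above for tamper-evident storage works by having each record's hash cover the previous record's hash, so altering any entry after collection breaks every subsequent link. This is a minimal sketch of the idea, not a substitute for your platform's immutable storage feature.

```python
# Hash-chain sketch: each record's digest covers the previous digest,
# so editing any stored entry invalidates the rest of the chain.

import hashlib

def chain_logs(entries):
    """Return (entry, digest) pairs where each digest covers the previous one."""
    prev = "0" * 64  # genesis value for the first link
    chained = []
    for entry in entries:
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()
        chained.append((entry, digest))
        prev = digest
    return chained

def chain_intact(chained):
    """Recompute the chain and confirm no record was altered after the fact."""
    prev = "0" * 64
    for entry, digest in chained:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

Commercial SIEMs implement the same property with write-once or immutable blob storage; the audit question is the same either way: can you prove logs were not modified after collection?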
If this step fails:
- Log volume overwhelms storage budget: Implement tiered logging. Collect full verbose logs for systems processing regulated data and summary logs for non-regulated systems. Use sampling for high-volume, low-risk event streams like DNS queries. Document the tiering rationale.
- On-premises systems produce logs in non-standard formats: Deploy log normalisation at the collection layer (Logstash, Fluentd, or equivalent) that converts proprietary formats into a common schema before ingestion. This enables consistent querying across all environments.
Checkpoint: You should now have a centralised logging platform ingesting events from all on-premises and cloud systems, with tamper-evident storage, 12-month minimum retention, and automated alerting for critical security events.
Step 6: Prepare Audit Evidence and Documentation
What you will accomplish: A structured evidence repository that maps every implemented control to its regulatory requirement, with current configuration evidence and operational records ready for auditor review.
Time required: 1 to 2 weeks
The difference between passing and failing a regulatory audit is rarely about whether controls exist. It is about whether you can prove they exist, prove they operate consistently, and prove they have been reviewed. Auditors assess documentation as rigorously as technical implementation. An undocumented control is, from an audit perspective, a missing control.
Your evidence repository should be structured around the Statement of Applicability (SoA) if pursuing ISO 27001, or the relevant control framework for your target certification. Each control entry needs three types of evidence: design evidence (the policy or procedure that defines the control), implementation evidence (configuration screenshots, deployment records, or tool outputs showing the control is in place), and operating evidence (logs, review records, or test results showing the control functions over time).
For hybrid cloud specifically, the CSA Cloud Controls Matrix Auditing Guidelines provide a structured approach to collecting and organising evidence across cloud and on-premises environments. These guidelines align with ISO 27001 and SOC 2 evidence expectations, making them useful regardless of your target framework.
Key actions:
- Create a control-to-evidence matrix: one row per control, columns for regulatory requirement, control description, design evidence, implementation evidence, operating evidence, and last review date
- Collect current configuration evidence from all environments: IAM policies, network security group rules, encryption settings, logging configurations. Export these as dated screenshots or configuration dumps.
- Compile operating evidence: access review completion records, incident response test results, change management logs, and vulnerability scan reports from the past 12 months
- Create an information security policy set if one does not exist: at minimum, you need an overarching ISMS policy, acceptable use policy, access control policy, incident response procedure, and business continuity plan
- Establish a quarterly evidence refresh cycle: configuration evidence becomes stale within 90 days, and auditors discount evidence older than the review period
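The 90-day staleness rule from the key actions above lends itself to a simple automated check over the control-to-evidence matrix. The control names and review dates below are placeholders for whatever your repository actually tracks.

```python
# Sketch of the quarterly evidence refresh check: flag controls whose
# evidence is older than 90 days. Control names and dates are illustrative.

from datetime import date, timedelta

EVIDENCE = [
    {"control": "mfa_policy",     "last_review": date.today() - timedelta(days=30)},
    {"control": "firewall_rules", "last_review": date.today() - timedelta(days=120)},
]

def stale_evidence(rows, max_age_days=90):
    """Controls whose evidence auditors would discount as stale."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r["control"] for r in rows if r["last_review"] < cutoff]
```

Wiring a report like this into the compliance coordinator's quarterly cycle turns evidence refresh from a memory exercise into a standing task list.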
If this step fails:
- No policies or procedures exist: Prioritise the top 5 policies auditors always request: information security policy, access control policy, incident response procedure, change management procedure, and data protection policy. Template-based approaches can produce acceptable first drafts within 1 to 2 weeks.
- Evidence is scattered across teams and systems: Designate a single compliance coordinator who owns the evidence repository. Use a shared platform (SharePoint, Confluence, or a GRC tool) with a consistent folder structure mirroring your control framework. ISO 27001 certified partners such as HST Solutions can accelerate this process by deploying embedded engineers who build the evidence collection into your infrastructure automation from the start.
Checkpoint: You should now have a structured evidence repository with current documentation for every control in your target framework, including design, implementation, and operating evidence with a quarterly refresh schedule established.
Step 7: Run Internal Audit Simulations Before External Assessment
What you will accomplish: At least two complete internal audit cycles that identify and remediate control gaps before an external auditor encounters them.
Time required: 2 to 3 weeks (for two cycles)
No organisation passes a regulatory audit on its first attempt without rehearsal. Internal audit simulations expose the gaps between what your documentation says and what your systems actually do. The goal is not to produce a perfect result on the first internal cycle but to find every problem while you still have time to fix it.
Run your first internal audit against the complete control framework, covering every domain from data classification through logging and evidence management. This first cycle will surface the largest gaps: controls that exist on paper but are not implemented, evidence that is missing or stale, and processes that teams believe are happening but are not documented. Allow 1 to 2 weeks to remediate findings from the first cycle before running the second.
The second internal audit should simulate the external auditor’s approach. Sample controls rather than reviewing every one. Request evidence under time pressure. Test incident response scenarios by asking teams to demonstrate detection and containment for a simulated event. This cycle builds audit readiness in the team, not just in the documentation.
Key actions:
- Conduct a full-scope first internal audit: walk through every control in your Statement of Applicability or control framework, verify evidence exists, and test a sample of technical controls
- Document all findings with severity ratings (critical, major, minor, observation) and assign remediation owners with deadlines
- Remediate all critical and major findings before the second cycle. Minor findings should have a documented remediation plan even if not yet complete.
- Run a second internal audit focused on sampling and scenario testing: select 30 to 40% of controls for detailed review, and run at least one tabletop incident response exercise
- Compile a management review report summarising both cycles, residual risks, and the organisation’s readiness assessment
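The 30 to 40% sampling approach for the second cycle can be made reproducible with a fixed random seed, so the audit report can state exactly which controls were selected and why. The control IDs and the 35% fraction here are illustrative assumptions.

```python
# Sketch: reproducible sample of roughly a third of the control set for
# second-cycle detailed review. Control IDs and fraction are illustrative.

import random

def sample_controls(controls, fraction=0.35, seed=42):
    """Return a repeatable sample of roughly `fraction` of the control set."""
    k = max(1, round(len(controls) * fraction))
    rng = random.Random(seed)  # fixed seed makes the selection repeatable
    return sorted(rng.sample(controls, k))

CONTROLS = [f"A.{i}" for i in range(1, 21)]  # 20 placeholder control IDs
```

A seeded sample also mirrors how external auditors work: they document their sampling method so findings can be traced back to a defensible selection.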
If this step fails:
- First cycle reveals too many critical findings to remediate in time: Prioritise controls that affect regulated data directly: access controls, logging, encryption, and data flow controls. Defer lower-risk controls to a remediation timeline that extends beyond the initial certification. Most certification bodies accept a remediation plan for minor nonconformities.
- Teams treat the internal audit as a compliance exercise rather than a genuine test: Involve the executive sponsor in the management review. Present findings in business terms: “This control gap means an auditor will issue a nonconformity, delaying certification by 8 to 12 weeks.” Connecting audit findings to business timelines changes behaviour faster than compliance language.
Checkpoint: You should now have completed two internal audit cycles, remediated all critical findings, documented residual risks with remediation plans, and produced a management review report confirming readiness for external assessment.
Common Mistakes to Avoid
Mistake 1: Designing Architecture Before Classifying Data
What happens: Teams select cloud services, configure networks, and deploy workloads before understanding which data carries regulatory obligations. When audit preparation begins, the architecture does not align with compliance requirements, forcing expensive redesign.
How to fix it: Complete the data classification exercise in Step 1 before making any architectural decisions. Use the classification register as the input to every subsequent design choice.
How to prevent it: Make the data classification register a required input for architecture review approvals. No design decision should be approved without referencing the data it will process and the regulations that apply.
Mistake 2: Assuming the Cloud Provider Handles Compliance
What happens: Organisations select a cloud provider with ISO 27001 or SOC 2 certification and assume that certification covers their own workloads. In reality, provider certifications cover the provider’s infrastructure, not the customer’s configuration, access policies, or application-level controls.
How to fix it: Complete the shared responsibility mapping in Step 2. Verify that every customer-side control has an owner and evidence. Your provider’s certification is a foundation, not a replacement for your own compliance programme.
How to prevent it: Include a shared responsibility review in every cloud service adoption decision. Before any new cloud service goes into production, document which compliance controls it inherits from the provider and which remain your responsibility.
Mistake 3: Treating Logging as an Afterthought
What happens: Logging is configured as a final step after architecture is deployed. By that point, some systems produce logs in incompatible formats, others have logging disabled by default, and retention periods are inconsistent. Auditors find gaps that take weeks to remediate.
How to fix it: Design logging requirements as part of the architecture, not after deployment. Include log format, retention, and aggregation requirements in the architecture specification document.
How to prevent it: Make centralised log ingestion a deployment prerequisite. No system goes into production until it is confirmed to be sending logs to the central platform in the correct format with the required retention.
Mistake 4: No Evidence of Control Operation Over Time
What happens: Organisations can demonstrate that controls exist at a point in time but cannot prove they have been operating consistently. An access review policy exists, but there are no records of reviews actually being conducted. An incident response procedure exists, but it has never been tested.
How to fix it: Implement the quarterly evidence refresh from Step 6. Ensure that every control with an operational requirement (reviews, tests, scans) has a calendar reminder and a designated owner responsible for completing and recording the activity.
How to prevent it: Automate evidence collection where possible. Scheduled vulnerability scans, automated access review workflows, and configuration drift detection produce operating evidence continuously without manual effort.
Mistake 5: Building for One Framework Instead of a Control Baseline
What happens: Organisations design their hybrid cloud architecture to satisfy a single framework (e.g., ISO 27001) and then discover that a customer, partner, or regulator requires a different one (SOC 2, DORA, NIS2). The architecture does not map cleanly, requiring parallel compliance efforts.
How to fix it: Build against a comprehensive control baseline such as the CSA Cloud Controls Matrix or NIST SP 800-53, which map to multiple frameworks simultaneously. Then document how your controls satisfy each specific framework’s requirements.
How to prevent it: During Step 1, identify all frameworks that are currently required or likely to be required within 24 months. Design your control baseline to cover the union of all requirements rather than optimising for a single framework.