SOC 2 Audit Readiness Workspace: From Scoping to Internal Audit in 9 Steps

A practical guide for GRC consultants and compliance teams on using Rakenne's SOC 2 Audit Readiness workspace template to build a complete, internally consistent set of SOC 2 documentation — with tool-assisted validation at every step.

  • Intermediate
  • 25 min read
  • 2026-03-11
  • Skills

Author: Ricardo Cabral · Founder

Preparing for a SOC 2 audit is one of the most documentation-intensive compliance engagements a SaaS company can undertake. A first-time Type II engagement for a mid-sized organization typically spans 6–12 months, with the bulk of effort spent on drafting system descriptions, building risk registers, writing control narratives, generating policies, and assembling evidence — all while maintaining internal consistency across dozens of artifacts that auditors will cross-reference.

Rakenne’s SOC 2 Audit Readiness workspace template provides 9 specialized skills and over 25 validation tools that guide an LLM agent through the entire readiness lifecycle. Each skill enforces a structured workflow, loads AICPA-specific references, and uses deterministic tools to check the agent’s output — catching the kinds of errors that plain LLM drafting misses: incomplete SCSR pairs, unvalidated CUECs, orphaned risks without controls, and policies with vague language that auditors would flag.

This guide walks through all 9 skills in sequence, shows real dialog excerpts and tool outputs from a live session, and explains what makes tool-assisted SOC 2 documentation materially better than generic AI drafting.


Why plain LLMs fall short for SOC 2

A plain LLM can draft system descriptions, policies, and control narratives. Where it struggles is audit-grade validation:

| Concern | Plain LLM | Rakenne with SOC 2 skills |
| --- | --- | --- |
| SCSR pairing | May list commitments without matching system requirements | Validates every service commitment has a paired system requirement |
| TSC coverage | Can miss criteria or map to invalid IDs | Validates criterion IDs against AICPA TSC 2017 and checks coverage per category |
| Risk–control traceability | Weak without structured state | Enforces links between risk register, TSC criteria, and controls |
| CUEC specificity | Generic user responsibilities | Flags vague CUECs (under 50 characters) and checks TSC alignment |
| Policy completeness | Output varies with prompt phrasing | Checks 10 required sections, flags vague language (“appropriate”, “periodically”) |
| Self-correction | Model may claim coverage without delivering it | Validation tools return PASS/FAIL; the agent revises until checks pass |

The difference is structural: skills give the agent a spec (workflow + references + structure) and tools (deterministic checks) to verify its own output. This is what turns a draft into an auditable artifact.
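The revise-until-pass pattern the skills enforce can be sketched in a few lines. The function names and the toy SCSR check below are illustrative, not Rakenne's actual API — a minimal sketch of the loop, not the implementation:

```python
from typing import Callable

def revise_until_pass(draft: dict,
                      validators: list[Callable[[dict], list[str]]],
                      revise: Callable[[dict, list[str]], dict],
                      max_rounds: int = 5) -> dict:
    """Run deterministic validators; feed failures back until every check passes."""
    for _ in range(max_rounds):
        errors = [e for check in validators for e in check(draft)]
        if not errors:                       # all checks returned PASS
            return draft
        draft = revise(draft, errors)        # agent revises against the error list
    raise RuntimeError("validation did not converge")

# Toy validator in the spirit of the SCSR pairing check: every service
# commitment must have a paired system requirement.
def scsr_pairing(doc: dict) -> list[str]:
    paired = {r["commitment_id"] for r in doc.get("system_requirements", [])}
    return [f"[ERROR] commitment {c['id']} has no paired system requirement"
            for c in doc.get("commitments", []) if c["id"] not in paired]
```

The key property is that the checks are deterministic code, not model self-assessment — the agent cannot claim a pairing exists unless the validator finds it.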


The 9-step SOC 2 readiness journey

The workspace template installs 9 skills that map to a four-phase readiness lifecycle:

| Phase | Step | Skill | What gets validated |
| --- | --- | --- | --- |
| Scope & Context | 1 | Organization Profile | Boundary completeness, CUEC specificity, TSC category selection |
| Scope & Context | 2 | System Description | SCSR pairing, subservice org methods, CSOCs, CC1–CC9 coverage, TSC scope |
| Assess | 3 | Risk Assessment | TSC risk mapping, control gaps, residual risk thresholds |
| Assess | 4 | Gap Analysis | TSC criteria coverage, evidence artifact validation, ToE completeness |
| Implement | 5 | Control Narratives | TSC criteria referenced, narrative presence, evidence placeholders |
| Implement | 6 | Policy Generator | Section completeness, vague language detection, TSC policy coverage |
| Implement | 7 | Vendor Management | Vendor risk scoring, SOC report review, CSOC validation |
| Validate | 8 | Monitoring & Testing | Test plan coverage, evidence strength, exception tracking |
| Validate | 9 | Internal Audit | Audit plan completeness, finding severity classification, evidence organization |

Each skill reads workspace files produced by earlier steps — the organization profile feeds into every subsequent skill, the risk register feeds into gap analysis and control narratives, and testing results feed into the internal audit. The entire chain is traceable.


Step 1: Organization Profile — the foundation everything else reads

Every SOC 2 engagement starts with capturing the organizational context. The Organization Profile skill builds a structured organization-profile.json that all other skills reference — services, SCSR, TSC categories in scope, system boundaries, subservice organizations, and CUECs.

Organization Profile

You: DataVault Technologies is a B2B SaaS company providing secure document management and collaboration. We’re based in Austin, TX with 120 employees. We run on AWS (us-east-1 and eu-west-1), use Okta for IAM, GitHub for source control, Datadog for monitoring, and Stripe for billing. We have 99.9% uptime SLAs with enterprise customers and are subject to SOC 2. Sarah Chen is our CISO, Michael Torres is CTO.

Agent: I’ve drafted the organization profile. Let me run the boundary validator and CUEC mapper.

The boundary validator found 1 error and 1 warning:

  • [ERROR] No CSOCs defined — you have 3 carved-out subservice organizations (AWS, Datadog, Stripe) but haven’t documented the controls you assume are in place at each. I’ll need those before this passes.
  • [WARNING] No out-of-scope items defined. Explicitly stating exclusions helps auditors and prevents scope creep.

The CUEC mapper flagged that no CUECs are defined yet. What responsibilities do your customers have for securing their use of DataVault?

You: Customers must enforce MFA for admin accounts, decommission users when employees leave, configure IP allowlisting for restricted access, report suspected security incidents within 24 hours, and maintain the security of documents before upload.

Agent: I’ve added 5 CUECs and defined CSOCs for all three carved-out vendors. Both validators now pass — boundary validator reports PASS with all 9 checks satisfied, and the CUEC mapper confirms all 5 CUECs are specific (minimum 50 characters) and mapped to TSC categories.

I’ve saved organization-profile.json and updated the dashboard: Organization Profile complete, TSC categories in scope are Security (CC) and Availability (A), audit type is Type II.
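The CUEC specificity rule the mapper applies (minimum 50 characters, mapped to an in-scope TSC category) is easy to picture in code. The profile shape and field names below are hypothetical, not Rakenne's actual schema:

```python
# Hypothetical sketch of a fragment of organization-profile.json
profile = {
    "organization": {"name": "DataVault Technologies", "employees": 120},
    "tsc_in_scope": ["CC", "A"],                    # Security and Availability
    "subservice_orgs": ["AWS", "Datadog", "Stripe"],
    "cuecs": [
        {"id": "CUEC-01",
         "text": "Customers must enforce multi-factor authentication for all administrator accounts.",
         "tsc_category": "CC"},
    ],
}

def check_cuecs(profile: dict, min_len: int = 50) -> list[str]:
    """Flag vague CUECs (too short) or CUECs mapped to an out-of-scope TSC category."""
    findings = []
    for cuec in profile["cuecs"]:
        if len(cuec["text"]) < min_len:
            findings.append(f"[WARNING] {cuec['id']} is under {min_len} characters")
        if cuec["tsc_category"] not in profile["tsc_in_scope"]:
            findings.append(f"[ERROR] {cuec['id']} maps to an out-of-scope TSC category")
    return findings
```

A length floor is a crude proxy for specificity, but it reliably catches one-liners like "Use MFA" that would never survive auditor review.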


Step 2: System Description — the most commonly deficient document

The System Description skill produces the Section III narrative and management assertion letter per AICPA DC Section 200 description criteria. This is the document auditors review most closely — and the one most commonly flagged for deficiencies.

System Description

You: Draft the system description for our SOC 2 Type II audit. The audit period is July 1, 2025 through June 30, 2026.

Agent: I’ve loaded the organization profile and started the system description. Let me gather the remaining details for the five system components.

I need to understand your infrastructure beyond AWS — databases, caching, CDN, networking. Also your application stack, CI/CD pipeline, and data types processed.

You: AWS runs everything: RDS PostgreSQL, ElastiCache Redis, ALB, CloudFront CDN, S3 for document storage. Our app is Node.js/React deployed via GitHub Actions. We store customer documents, user profiles, collaboration metadata, and audit logs. Change management goes through PR reviews with required approvals and automated security scans.

Agent: I’ve built the system description JSON and run the completeness checker. Two errors found:

  • [ERROR] Missing CUECs in the description narrative — I can pull these from the organization profile automatically.
  • [ERROR] Control environment narrative doesn’t cover CC1–CC9 yet. I’ll draft the governance, risk assessment, information, and monitoring sections now.

After adding CUECs from the profile and drafting the CC1–CC9 control environment, the completeness checker now passes with 0 errors. The trust services scope validator confirms Security (CC) and Availability (A) are properly scoped.

I’ve saved the system description to output/datavault-soc2-system-description.md — 18 pages covering all AICPA-required sections.


Step 3: Risk Assessment — where validation earns its keep

Risk assessment is where tool-assisted validation makes the biggest difference. A plain LLM can list risks, but it cannot reliably check that every risk maps to a valid TSC criterion, that scores follow the defined methodology, or that residual risk doesn’t exceed inherent risk.

The Risk Assessment skill activates three validation tools:

  1. Risk-to-TSC mapper — validates every risk maps to valid TSC criterion IDs and flags blind spots
  2. Control gap identifier — finds TSC criteria with risks but no mapped controls
  3. Residual risk validator — ensures post-treatment scores are logically consistent
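In spirit, these checks reduce to a handful of deterministic rules over the 5×5 methodology: score = likelihood × impact, residual never exceeds inherent, and anything above the appetite threshold of 12 carries a treatment decision. A hypothetical sketch (field names are illustrative):

```python
APPETITE = 12  # per the methodology established in this session

def validate_risk(risk: dict) -> list[str]:
    """Deterministic consistency checks for one risk register entry."""
    errors = []
    l, i = risk["likelihood"], risk["impact"]
    if not (1 <= l <= 5 and 1 <= i <= 5):
        errors.append(f"[ERROR] {risk['id']}: scores must be on the 5x5 scale")
    if risk["inherent"] != l * i:
        errors.append(f"[ERROR] {risk['id']}: inherent score must equal L x I")
    if risk["residual"] > risk["inherent"]:
        errors.append(f"[ERROR] {risk['id']}: residual exceeds inherent")
    if risk["inherent"] > APPETITE and not risk.get("treatment"):
        errors.append(f"[ERROR] {risk['id']}: above appetite with no treatment decision")
    return errors

risk = {"id": "RISK-AC-001", "likelihood": 3, "impact": 5,
        "inherent": 15, "residual": 6, "treatment": "Treat"}
```

None of these rules require judgment — which is exactly why delegating them to a tool is more reliable than asking the model to self-certify.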

Risk Assessment

You: Conduct the risk assessment based on our organization profile. We should cover all TSC domains for Security and Availability.

Agent: I’ve established the 5×5 risk methodology with an appetite threshold of 12. Using the 12 TSC risk domains, I’ve identified 11 risks across Security (CC1–CC9) and Availability (A1). Let me run the validation suite.

The risk-to-TSC mapper confirmed all 11 risks map to valid criteria. However, it flagged 2 in-scope criteria with no associated risk: CC4.2 (evaluating and communicating deficiencies) and A1.3 (recovery). I’ll add risks for those.

After adding 2 more risks (13 total), the control gap identifier found 3 criteria where risks exist but no controls are mapped: CC3.4 (fraud risk), CC8.1 (change management for infrastructure), and A1.2 (capacity planning). These need controls assigned.

Once controls are mapped, the residual risk validator confirmed all entries pass — residual ≤ inherent, all risks above appetite have treatment decisions, and accepted risks have documented rationale.

Here’s a sample from the risk register the agent produced:

| ID | Risk | Threat | L | I | Score | Treatment | TSC Criteria | Controls |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RISK-AC-001 | Unauthorized access to customer documents | External attacker, insider threat | 3 | 5 | 15 | Treat | CC6.1, CC6.3 | MFA, RBAC, quarterly access review |
| RISK-SO-001 | Production system compromise via misconfiguration | Cloud misconfiguration | 4 | 4 | 16 | Treat | CC7.1, CC7.2 | Infrastructure-as-code, config scanning |
| RISK-AV-001 | Extended platform outage | Infrastructure failure, DDoS | 3 | 5 | 15 | Treat | A1.1, A1.2 | Multi-AZ, auto-scaling, DDoS protection |

Step 4: Gap Analysis — finding what’s missing

After the first three steps produce foundational artifacts, the Gap Analysis skill reviews the entire workspace to identify TSC criteria with incomplete coverage, missing evidence, and untested controls.

Gap Analysis

You: Run the gap analysis against our controls inventory and TSC criteria.

Agent: I’ve loaded the controls inventory and run the TSC mapping engine. Coverage: CC 82%, A 67%. There are 5 unmapped criteria — including CC4.2 (evaluating deficiencies), A1.2 (capacity), and A1.3 (recovery).

The evidence artifact validator found 4 implemented controls missing a Test of Effectiveness and 2 controls with no evidence artifacts at all. I’ve built the remediation roadmap:

  • Phase 1 (Immediate): Map missing CC criteria and add Tests of Effectiveness for the 4 controls
  • Phase 2 (Short-term): Address Availability gaps — capacity planning procedure and disaster recovery testing
  • Phase 3 (Pre-audit): Complete evidence collection for all controls and run full validation
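The per-category coverage figures the agent reports (CC 82%, A 67%) reduce to a set computation over in-scope criteria versus criteria with at least one mapped control. A minimal sketch, with made-up criterion sets:

```python
def cat_of(criterion: str) -> str:
    # "CC6.1" -> "CC", "A1.2" -> "A"
    return criterion.split(".")[0].rstrip("0123456789")

def coverage(in_scope: set[str], mapped: set[str]) -> dict[str, int]:
    """Percent of in-scope TSC criteria with at least one mapped control, per category."""
    report = {}
    for cat in sorted({cat_of(c) for c in in_scope}):
        scoped = {c for c in in_scope if cat_of(c) == cat}
        report[cat] = round(100 * len(scoped & mapped) / len(scoped))
    return report
```

Comparing full category prefixes (rather than `startswith`) avoids miscounting "C" (Confidentiality) criteria as "CC" (Common Criteria).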

Step 5: Control Narratives — structuring what auditors read

The Control Narrative Author skill writes detailed narratives for each TSC criterion in scope, with evidence placeholders that auditors can trace to artifacts.

Control Narratives

You: Draft control narratives for all in-scope criteria — CC1–CC9 and A1.

Agent: I’ve created soc2-control-narratives.md with sections for each criterion. For CC6 (Logical and Physical Access), I’ve described Okta SSO with MFA enforcement, RBAC via AWS IAM, quarterly access reviews, and 24-hour termination deprovisioning. Evidence placeholders reference your Okta configuration export, access review report, and HR offboarding checklist.

Let me run the TSC coverage checker. Result: all 10 major criteria referenced (CC1–CC9, A1). Two warnings — CC4 and CC7 need clearer evidence references. I’ll add the specific Datadog monitoring dashboard and the deficiency reporting procedure as evidence.

After revision, all checks pass. Each criterion has a narrative and at least one evidence reference.


Step 6: Policy Generator — tailored policies with document control

The Policy Generator skill creates 8 core policies (plus optional Privacy and Processing Integrity policies) tailored to the organization’s context. Each policy maps to specific TSC criteria, references actual systems and roles, and undergoes structural validation.

Policy Generator

You: Generate all required policies for our SOC 2 engagement.

Agent: I’ve generated 8 core policies based on your organization profile and risk register:

  1. POL-001 Information Security Policy — Governance, roles, commitments (CC1, CC2)
  2. POL-002 Access Control Policy — Logical/physical access, MFA, RBAC (CC6)
  3. POL-003 Change Management Policy — PR reviews, CI/CD, rollback (CC8)
  4. POL-004 Incident Response Policy — Detection, triage, notification (CC7)
  5. POL-005 Risk Assessment Policy — Methodology, treatment, monitoring (CC3, CC9)
  6. POL-006 Vendor Management Policy — Third-party risk, SOC reports (CC9)
  7. POL-007 Business Continuity Policy — BCP/DRP, RTO/RPO (A1)
  8. POL-008 Data Classification Policy — Classification levels, handling (CC6, C1)

The policy completeness checker ran on each: 7 passed, POL-004 had a warning — the incident notification SLA says “promptly” instead of a specific timeframe. I’ve updated it to “within 72 hours of confirmation” to match your service commitments.

The policy TSC mapper shows 100% coverage of in-scope criteria across the 8 policies. All policies reference named roles (Sarah Chen as CISO, Michael Torres as CTO) and actual systems (Okta, AWS, GitHub).

A plain LLM could draft generic policies. The skill ensures each policy is internally consistent with the organization profile, risk register, and every other artifact in the workspace.
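The vague-language detection the completeness checker applies can be approximated with a term list and a word-boundary search. The first three terms below are the ones this article cites ("appropriate", "periodically", "promptly"); the rest are illustrative additions:

```python
import re

VAGUE_TERMS = ["appropriate", "periodically", "promptly", "as needed", "timely"]

def flag_vague_language(policy_text: str) -> list[str]:
    """Return one [WARNING] per vague term found, with its occurrence count."""
    findings = []
    for term in VAGUE_TERMS:
        hits = re.findall(rf"\b{re.escape(term)}\b", policy_text, flags=re.IGNORECASE)
        if hits:
            findings.append(f'[WARNING] vague term "{term}" used {len(hits)} time(s); '
                            "replace with a specific threshold or timeframe")
    return findings
```

This is how "reported promptly" gets flagged and rewritten to "reported within 72 hours of confirmation" — the check cannot propose the fix, but it reliably forces one.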


Step 7: Vendor Management — third-party oversight per CC9.2

SOC 2 auditors specifically test vendor oversight per CC9.2. The Vendor Management skill builds a risk-tiered assessment framework, vendor register, and SOC report review validation.

Vendor Management

You: Build the vendor management program based on our organization profile.

Agent: I’ve identified 5 vendors from the organization profile and extended the list with 3 additional third parties that have data access. Running the vendor risk scorer now.

Tier classification:

  • Critical: AWS (hosts all customer data and infrastructure), Okta (controls authentication)
  • High: GitHub (source code access), Stripe (payment data processing)
  • Medium: Datadog (infrastructure monitoring, log ingestion), SendGrid (email delivery)
  • Low: Calendly (scheduling), Notion (internal documentation)

The SOC report review checker found:

  • AWS, Okta, GitHub, and Stripe all have current SOC 2 Type II reports (< 12 months) — PASS
  • Datadog has a SOC 2 report but it expires next month — WARNING, flag for renewal tracking
  • SendGrid SOC 2 report is current — PASS
  • CSOCs from the system description are validated against the vendor controls described in their reports — PASS

I’ve produced the Vendor Risk Assessment Framework, Vendor Register, and Vendor Security Requirements documents.
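The currency rule behind those PASS/WARNING results (report current if under 12 months old, warn as expiry approaches) is plain date arithmetic. A sketch with a hypothetical 60-day warning window:

```python
from datetime import date, timedelta

def review_soc_report(vendor: str, period_end: date,
                      today: date, warn_days: int = 60) -> str:
    """PASS if the report's period end is under 12 months old; WARNING near expiry."""
    expiry = period_end + timedelta(days=365)
    if today >= expiry:
        return f"[FAIL] {vendor}: SOC 2 report older than 12 months"
    if (expiry - today).days <= warn_days:
        return f"[WARNING] {vendor}: report expires soon; flag for renewal tracking"
    return f"[PASS] {vendor}: current SOC 2 Type II report"
```

Automating this check is what surfaces cases like Datadog's expiring report before CC9.2 testing does.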


Step 8: Monitoring & Testing — building the evidence machine

The Monitoring & Testing skill builds the testing infrastructure for the audit period: a control testing plan, evidence collection matrix, and exception tracker. This is critical for Type II — auditors need evidence that controls operated throughout the period, not just at a point in time.

Monitoring & Testing

You: Build the control testing plan for our Type II audit period (July 2025 – June 2026).

Agent: I’ve built the testing plan covering all 13 in-scope TSC criteria. Each control has a defined testing method, frequency, tester, and expected evidence. Let me validate.

The test plan validator confirmed all criteria have at least one tested control. One warning: CTRL-AC-003 (access reviews) is tested only by inquiry — I’ve upgraded it to inspection + reperformance (re-execute a sample of quarterly access reviews independently).

The evidence coverage checker shows 85% coverage — 3 controls are still pending evidence collection. Evidence strength is adequate: 60% inspection/reperformance, 30% observation, 10% inquiry (always combined with another method).

The exception tracker found 2 existing exceptions:

  • EXC-001 (Minor): One quarterly access review was completed 5 days late. Root cause: reviewer on PTO. Remediation: backup reviewer assigned.
  • EXC-002 (Moderate): Backup restoration test not performed in Q3. Root cause: infrastructure team transition. Remediation: scheduled for Q4 with documented procedure.

Both exceptions have remediation plans, owners, and target dates — all checks pass.
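The coverage and evidence-strength figures above can be reproduced from the evidence matrix itself. A sketch with hypothetical field names:

```python
from collections import Counter

def evidence_metrics(controls: list[dict]) -> dict:
    """Coverage = % of controls with evidence; strength = mix of evidence methods."""
    with_evidence = [c for c in controls if c["evidence"]]
    methods = Counter(m for c in with_evidence for m in c["methods"])
    total = sum(methods.values())
    return {
        "coverage_pct": round(100 * len(with_evidence) / len(controls)),
        "strength_mix": {m: round(100 * n / total) for m, n in methods.items()},
    }
```

Tracking the method mix matters for Type II: a register dominated by inquiry-only evidence is a red flag even at 100% coverage.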


Step 9: Internal Audit — the capstone readiness assessment

The Internal Audit skill is the final checkpoint before engaging the external auditor. It simulates the auditor’s approach: testing controls, classifying findings per AICPA severity guidance, organizing evidence, and producing a readiness assessment.

Internal Audit

You: Conduct the internal readiness audit for our SOC 2 Type II engagement.

Agent: I’ve loaded all prior artifacts — organization profile, system description, risk register, gap analysis, control narratives, 8 policies, vendor register, testing plan, and evidence matrix. Let me draft the audit plan and run validation.

The audit plan validator confirmed: all 13 in-scope TSC criteria are covered, audit period matches the engagement (July 2025 – June 2026), auditor independence is documented, and sampling methodology is defined.

I’ve completed control testing. Three findings identified:

FIND-001 (Backup Verification, A1.2): Significant Deficiency. DR test was not performed in Q3 2025 — the control exists but didn’t operate effectively during the period. Root cause: infrastructure team transition. Recommendation: complete DR test immediately and implement automated scheduling.

FIND-002 (Access Review Timeliness, CC6.3): Deficiency. One of four quarterly access reviews was completed 5 days late. Control operated but not within the defined timeline. Recommendation: assign backup reviewer for PTO coverage.

FIND-003 (Vendor SOC Report Tracking, CC9.2): Observation. Datadog SOC 2 report approaching expiration with no automated renewal tracking. Recommendation: implement vendor certification expiration alerts.

The finding severity classifier confirmed all classifications are consistent with AICPA guidance. The evidence organizer mapped all evidence to TSC criteria — 2 criteria have evidence gaps that need attention before the external audit.

Overall readiness: Conditionally Ready. No material weaknesses. One significant deficiency (FIND-001) has an active remediation plan. Recommend completing DR test and evidence collection before engaging the external auditor.

The audit report references actual workspace artifacts:

| Ref | Requirement | Evidence | Status |
| --- | --- | --- | --- |
| CC6.1 | Access control policy implemented? | POL-002 v1.0, Okta SSO config | Pass |
| CC6.3 | Access reviews performed? | Q1–Q4 access review reports | Pass (with exception: Q2 late) |
| A1.2 | Capacity and recovery planning? | BCP section 4.2, DR test report | Fail (Q3 DR test missing) |
| CC9.2 | Vendor oversight documented? | Vendor Register, SOC reports | Pass |

The dashboard: tracking progress across all 9 skills

As each skill completes, the agent updates the project dashboard. The dashboard provides a single view of SOC 2 readiness:

Key metrics tracked:

  • SOC 2 Readiness — percentage of skills completed (Scope & Context → Assess → Implement → Validate)
  • Current Phase — which lifecycle phase you’re in
  • Audit Type — Type I or Type II
  • Documents Produced — total output artifacts (system descriptions, policies, registers, reports)
  • Risk Distribution — breakdown by severity (Critical / High / Medium / Low)
  • TSC Criteria Coverage — percentage of in-scope criteria with mapped, tested controls
  • Gap Findings — open vs. closed findings from gap analysis
  • Evidence Coverage — percentage of controls with collected evidence
  • Exceptions — open vs. remediated control exceptions
  • Audit Readiness — Ready, Conditionally Ready, or Not Ready

The dashboard gives consultants and management a real-time view of where the engagement stands and what still needs attention.


Effort comparison: consultant time with and without tool assistance

Based on typical SOC 2 consulting effort breakdowns for first-time Type II certification of a mid-sized SaaS organization:

| Activity | % of effort | Tool-assisted acceleration |
| --- | --- | --- |
| Scoping, organization profile, system description | 15% | Organization Profile + System Description skills reduce interviews and enforce AICPA structure |
| Risk assessment + treatment | 18% | Three risk validation tools enforce TSC mapping, identify control gaps, validate residual scoring |
| Gap analysis + remediation planning | 12% | TSC mapping engine and evidence validator replace manual checklist work |
| Control narratives | 10% | Context-aware generation with TSC coverage checking and evidence placeholders |
| Policy set (8–10 policies) | 14% | Context-aware generation with completeness checker and vague-language detection |
| Vendor management | 8% | Risk scoring and SOC report review validation structure the assessment |
| Monitoring, testing, evidence | 15% | Test plan validator, evidence coverage checker, and exception tracker automate quality checks |
| Internal audit + readiness | 8% | Finding severity classification and evidence organization prepare for the external auditor |

The heaviest documentation activities (system description, risk assessment, policy generation, evidence management) are exactly where the validation tools add the most value — not by replacing consultant judgment, but by catching the mechanical errors and omissions that consume review cycles.


Getting started

  1. Create a new project in Rakenne and select the SOC 2 Audit Readiness workspace template
  2. All 9 skills and validation tools are automatically installed
  3. Start with the Organization Profile — provide your client’s details and let the agent build the structured profile
  4. Follow the four-phase sequence through all 9 steps, or jump to specific skills based on your gap analysis results
  5. Use the dashboard to track progress and identify what’s still needed

Skills can be run individually, but each reads artifacts from earlier steps, so the recommended sequence ensures every skill has the context it needs from prior outputs.


Summary

The SOC 2 Audit Readiness workspace template turns SOC 2 preparation from an ad-hoc drafting exercise into a structured, validated process. The 9 skills cover the full readiness lifecycle, and the 25+ validation tools enforce the same checks a senior GRC consultant would apply — consistently, automatically, and traceably.

The result is not a set of generic templates. It is a set of internally consistent, organization-specific, TSC-aligned artifacts that reference each other, link risks to controls to policies to evidence, and flag gaps before an auditor does.

Try it yourself

Open a workspace with the skills described in this article and start drafting in minutes.

Get Started Free — No Sign-Up
