SOC 2 Audit Readiness Workspace: From Scoping to Internal Audit in 9 Steps
A practical guide for GRC consultants and compliance teams on using Rakenne's SOC 2 Audit Readiness workspace template to build a complete, internally consistent set of SOC 2 documentation — with tool-assisted validation at every step.
Preparing for a SOC 2 audit is one of the most documentation-intensive compliance engagements a SaaS company can undertake. A first-time Type II engagement for a mid-sized organization typically spans 6–12 months, with the bulk of effort spent on drafting system descriptions, building risk registers, writing control narratives, generating policies, and assembling evidence — all while maintaining internal consistency across dozens of artifacts that auditors will cross-reference.
Rakenne’s SOC 2 Audit Readiness workspace template provides 9 specialized skills and over 25 validation tools that guide an LLM agent through the entire readiness lifecycle. Each skill enforces a structured workflow, loads AICPA-specific references, and uses deterministic tools to check the agent’s output — catching the kinds of errors that plain LLM drafting misses: incomplete SCSR pairs, unvalidated CUECs, orphaned risks without controls, and policies with vague language that auditors would flag.
This guide walks through all 9 skills in sequence, shows real dialog excerpts and tool outputs from a live session, and explains what makes tool-assisted SOC 2 documentation materially better than generic AI drafting.
Why plain LLMs fall short for SOC 2
A plain LLM can draft system descriptions, policies, and control narratives. Where it struggles is audit-grade validation:
| Concern | Plain LLM | Rakenne with SOC 2 skills |
|---|---|---|
| SCSR pairing | May list commitments without matching system requirements | Validates every service commitment has a paired system requirement |
| TSC coverage | Can miss criteria or map to invalid IDs | Validates criterion IDs against AICPA TSC 2017 and checks coverage per category |
| Risk–control traceability | Weak without structured state | Enforces links between risk register, TSC criteria, and controls |
| CUEC specificity | Generic user responsibilities | Flags vague CUECs (under 50 characters) and checks TSC alignment |
| Policy completeness | Output varies with prompt phrasing | Checks 10 required sections, flags vague language (“appropriate”, “periodically”) |
| Self-correction | Model may claim coverage without delivering it | Validation tools return PASS/FAIL; the agent revises until checks pass |
The difference is structural: skills give the agent a spec (workflow + references + structure) and tools (deterministic checks) to verify its own output. This is what turns a draft into an auditable artifact.
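To make that structure concrete, here is a minimal Python sketch of the contract a deterministic check exposes and the revise-until-pass loop the agent runs against it. The function, fields, and draft content are illustrative assumptions, not Rakenne’s actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    """What a deterministic validator returns: a hard PASS/FAIL plus findings."""
    passed: bool
    errors: list[str] = field(default_factory=list)
    warnings: list[str] = field(default_factory=list)

def check_required_sections(doc: dict, required: list[str]) -> CheckResult:
    """Fail if any required section is missing or empty."""
    errors = [f"Missing or empty section: {name}"
              for name in required if not doc.get(name)]
    return CheckResult(passed=not errors, errors=errors)

# The agent drafts, validates, revises, and re-validates until the check passes.
draft = {"system_overview": "DataVault provides...", "cuecs": ""}
result = check_required_sections(draft, ["system_overview", "cuecs", "subservice_orgs"])
print(result.passed, result.errors)   # False plus two errors, so the agent revises the draft
```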
The 9-step SOC 2 readiness journey
The workspace template installs 9 skills that map to a four-phase readiness lifecycle:
| Phase | Step | Skill | What gets validated |
|---|---|---|---|
| Scope & Context | 1 | Organization Profile | Boundary completeness, CUEC specificity, TSC category selection |
| Scope & Context | 2 | System Description | SCSR pairing, subservice org methods, CSOCs, CC1–CC9 coverage, TSC scope |
| Assess | 3 | Risk Assessment | TSC risk mapping, control gaps, residual risk thresholds |
| Assess | 4 | Gap Analysis | TSC criteria coverage, evidence artifact validation, ToE completeness |
| Implement | 5 | Control Narratives | TSC criteria referenced, narrative presence, evidence placeholders |
| Implement | 6 | Policy Generator | Section completeness, vague language detection, TSC policy coverage |
| Implement | 7 | Vendor Management | Vendor risk scoring, SOC report review, CSOC validation |
| Validate | 8 | Monitoring & Testing | Test plan coverage, evidence strength, exception tracking |
| Validate | 9 | Internal Audit | Audit plan completeness, finding severity classification, evidence organization |
Each skill reads workspace files produced by earlier steps — the organization profile feeds into every subsequent skill, the risk register feeds into gap analysis and control narratives, and testing results feed into the internal audit. The entire chain is traceable.
Step 1: Organization Profile — the foundation everything else reads
Every SOC 2 engagement starts with capturing the organizational context. The Organization Profile skill builds a structured organization-profile.json that all other skills reference — services, SCSR, TSC categories in scope, system boundaries, subservice organizations, and CUECs.
You: DataVault Technologies is a B2B SaaS company providing secure document management and collaboration. We’re based in Austin, TX with 120 employees. We run on AWS (us-east-1 and eu-west-1), use Okta for IAM, GitHub for source control, Datadog for monitoring, and Stripe for billing. We have 99.9% uptime SLAs with enterprise customers and are subject to SOC 2. Sarah Chen is our CISO, Michael Torres is CTO.
Agent: I’ve drafted the organization profile. Let me run the boundary validator and CUEC mapper.
The boundary validator found 1 error and 1 warning:
- [ERROR] No CSOCs defined — you have 3 carved-out subservice organizations (AWS, Datadog, Stripe) but haven’t documented the controls you assume are in place at each. I’ll need those before this passes.
- [WARNING] No out-of-scope items defined. Explicitly stating exclusions helps auditors and prevents scope creep.
The CUEC mapper flagged that no CUECs are defined yet. What responsibilities do your customers have for securing their use of DataVault?
You: Customers must enforce MFA for admin accounts, decommission users when employees leave, configure IP allowlisting for restricted access, report suspected security incidents within 24 hours, and maintain the security of documents before upload.
Agent: I’ve added 5 CUECs and defined CSOCs for all three carved-out vendors. Both validators now pass — boundary validator reports PASS with all 9 checks satisfied, and the CUEC mapper confirms all 5 CUECs are specific (minimum 50 characters) and mapped to TSC categories.
I’ve saved organization-profile.json and updated the dashboard: Organization Profile complete, TSC categories in scope are Security (CC) and Availability (A), audit type is Type II.
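Checks like the CUEC mapper’s specificity rule are easy to express deterministically. Below is a minimal sketch, assuming CUECs live in organization-profile.json with a description and a TSC category mapping; the field names and threshold constant are assumptions consistent with the dialog above, not Rakenne’s actual schema:

```python
MIN_CUEC_LENGTH = 50                       # specificity threshold described above
IN_SCOPE_CATEGORIES = {"CC", "A"}          # Security and Availability for this engagement

def check_cuecs(profile: dict) -> list[str]:
    """Flag missing, vague, or unmapped CUECs; an empty list means PASS."""
    findings = []
    cuecs = profile.get("cuecs", [])
    if not cuecs:
        findings.append("No CUECs defined")
    for cuec in cuecs:
        if len(cuec.get("description", "")) < MIN_CUEC_LENGTH:
            findings.append(f"{cuec['id']}: description under {MIN_CUEC_LENGTH} characters")
        if not set(cuec.get("tsc_categories", [])) & IN_SCOPE_CATEGORIES:
            findings.append(f"{cuec['id']}: not mapped to an in-scope TSC category")
    return findings

profile = {
    "cuecs": [
        {"id": "CUEC-01", "tsc_categories": ["CC"],
         "description": "Customers must enforce multi-factor authentication for all administrator accounts."},
        {"id": "CUEC-02", "tsc_categories": [], "description": "Report incidents."},
    ]
}
print(check_cuecs(profile) or "PASS")      # CUEC-02 is flagged twice in this example
```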
Step 2: System Description — the most commonly deficient document
The System Description skill produces the Section III narrative and management assertion letter per AICPA DC Section 200 description criteria. This is the document auditors review most closely — and the one most commonly flagged for deficiencies.
You: Draft the system description for our SOC 2 Type II audit. The audit period is July 1, 2025 through June 30, 2026.
Agent: I’ve loaded the organization profile and started the system description. Let me gather the remaining details for the five system components.
I need to understand your infrastructure beyond AWS — databases, caching, CDN, networking. Also your application stack, CI/CD pipeline, and data types processed.
You: AWS runs everything: RDS PostgreSQL, ElastiCache Redis, ALB, CloudFront CDN, S3 for document storage. Our app is Node.js/React deployed via GitHub Actions. We store customer documents, user profiles, collaboration metadata, and audit logs. Change management goes through PR reviews with required approvals and automated security scans.
Agent: I’ve built the system description JSON and run the completeness checker. Two errors found:
- [ERROR] Missing CUECs in the description narrative — I can pull these from the organization profile automatically.
- [ERROR] Control environment narrative doesn’t cover CC1–CC9 yet. I’ll draft the governance, risk assessment, information, and monitoring sections now.
After adding CUECs from the profile and drafting the CC1–CC9 control environment, the completeness checker now passes with 0 errors. The trust services scope validator confirms Security (CC) and Availability (A) are properly scoped.
I’ve saved the system description to output/datavault-soc2-system-description.md — 18 pages covering all AICPA-required sections.
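SCSR pairing, flagged in the comparison table earlier, is another purely mechanical rule: every service commitment needs at least one paired system requirement. A sketch under the assumption that commitments and requirements are stored as lists keyed by commitment ID (hypothetical field names):

```python
def check_scsr_pairing(description: dict) -> list[str]:
    """Every service commitment must reference at least one system requirement."""
    requirements_by_commitment = {
        req["commitment_id"] for req in description.get("system_requirements", [])
    }
    return [
        f"Commitment {c['id']} ('{c['summary']}') has no paired system requirement"
        for c in description.get("service_commitments", [])
        if c["id"] not in requirements_by_commitment
    ]

description = {
    "service_commitments": [
        {"id": "SC-01", "summary": "99.9% uptime SLA"},
        {"id": "SC-02", "summary": "Encryption of customer documents at rest"},
    ],
    "system_requirements": [
        {"commitment_id": "SC-01", "text": "Multi-AZ deployment with automated failover"},
    ],
}
print(check_scsr_pairing(description))   # SC-02 is unpaired and gets flagged
```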
Step 3: Risk Assessment — where validation earns its keep
Risk assessment is where tool-assisted validation makes the biggest difference. A plain LLM can list risks, but it cannot reliably check that every risk maps to a valid TSC criterion, that scores follow the defined methodology, or that residual risk doesn’t exceed inherent risk.
The Risk Assessment skill activates three validation tools:
- Risk-to-TSC mapper — validates every risk maps to valid TSC criterion IDs and flags blind spots
- Control gap identifier — finds TSC criteria with risks but no mapped controls
- Residual risk validator — ensures post-treatment scores are logically consistent
You: Conduct the risk assessment based on our organization profile. We should cover all TSC domains for Security and Availability.
Agent: I’ve established the 5×5 risk methodology with an appetite threshold of 12. Using the 12 TSC risk domains, I’ve identified 11 risks across Security (CC1–CC9) and Availability (A1). Let me run the validation suite.
The risk-to-TSC mapper confirmed all 11 risks map to valid criteria. However, it flagged 2 in-scope criteria with no associated risk: CC4.2 (evaluating and communicating deficiencies) and A1.3 (recovery). I’ll add risks for those.
After adding 2 more risks (13 total), the control gap identifier found 3 criteria where risks exist but no controls are mapped: CC3.4 (fraud risk), CC8.1 (change management for infrastructure), and A1.2 (capacity planning). These need controls assigned.
Once controls are mapped, the residual risk validator confirmed all entries pass — residual ≤ inherent, all risks above appetite have treatment decisions, and accepted risks have documented rationale.
Here’s a sample from the risk register the agent produced:
| ID | Risk | Threat | L | I | Score | Treatment | TSC Criteria | Controls |
|---|---|---|---|---|---|---|---|---|
| RISK-AC-001 | Unauthorized access to customer documents | External attacker, insider threat | 3 | 5 | 15 | Treat | CC6.1, CC6.3 | MFA, RBAC, quarterly access review |
| RISK-SO-001 | Production system compromise via misconfiguration | Cloud misconfiguration | 4 | 4 | 16 | Treat | CC7.1, CC7.2 | Infrastructure-as-code, config scanning |
| RISK-AV-001 | Extended platform outage | Infrastructure failure, DDoS | 3 | 5 | 15 | Treat | A1.1, A1.2 | Multi-AZ, auto-scaling, DDoS protection |
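The residual risk validator’s rules are exactly the kind of consistency constraints an LLM asserts but rarely verifies. Here is a minimal sketch applied to register entries shaped like the rows above; the appetite threshold of 12 mirrors the dialog, while the field names are assumptions:

```python
RISK_APPETITE = 12   # from the 5x5 methodology established above

def check_residual_risk(risks: list[dict]) -> list[str]:
    """Flag logically inconsistent risk entries; an empty list means PASS."""
    findings = []
    for r in risks:
        inherent = r["likelihood"] * r["impact"]
        residual = r.get("residual_score", inherent)
        if residual > inherent:
            findings.append(f"{r['id']}: residual ({residual}) exceeds inherent ({inherent})")
        if inherent > RISK_APPETITE and not r.get("treatment"):
            findings.append(f"{r['id']}: above appetite but no treatment decision")
        if r.get("treatment") == "Accept" and not r.get("acceptance_rationale"):
            findings.append(f"{r['id']}: accepted risk without documented rationale")
    return findings

register = [
    {"id": "RISK-AC-001", "likelihood": 3, "impact": 5, "residual_score": 8, "treatment": "Treat"},
    {"id": "RISK-SO-001", "likelihood": 4, "impact": 4, "treatment": None},   # score 16, no treatment: flagged
]
print(check_residual_risk(register))
```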
Step 4: Gap Analysis — finding what’s missing
After the first three steps produce foundational artifacts, the Gap Analysis skill reviews the entire workspace to identify TSC criteria with incomplete coverage, missing evidence, and untested controls.
You: Run the gap analysis against our controls inventory and TSC criteria.
Agent: I’ve loaded the controls inventory and run the TSC mapping engine. Coverage: CC 82%, A 67%. There are 5 unmapped criteria — including CC4.2 (evaluating deficiencies), A1.2 (capacity), and A1.3 (recovery).
The evidence artifact validator found 4 implemented controls missing a Test of Effectiveness and 2 controls with no evidence artifacts at all. I’ve built the remediation roadmap:
- Phase 1 (Immediate): Map missing CC criteria and add Tests of Effectiveness for the 4 controls
- Phase 2 (Short-term): Address Availability gaps — capacity planning procedure and disaster recovery testing
- Phase 3 (Pre-audit): Complete evidence collection for all controls and run full validation
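Coverage figures like “CC 82%, A 67%” are a straightforward roll-up of which in-scope criteria have at least one mapped control. A sketch with a deliberately truncated criterion list (not the full TSC 2017 catalog):

```python
from collections import defaultdict

def coverage_by_category(in_scope: list[str], controls: list[dict]) -> dict[str, float]:
    """Percentage of in-scope criteria per TSC category with at least one mapped control."""
    mapped = {crit for ctrl in controls for crit in ctrl.get("tsc_criteria", [])}
    totals, covered = defaultdict(int), defaultdict(int)
    for criterion in in_scope:
        category = criterion.split(".")[0].rstrip("0123456789")  # "CC6.1" -> "CC", "A1.2" -> "A"
        totals[category] += 1
        covered[category] += criterion in mapped
    return {cat: round(100 * covered[cat] / totals[cat]) for cat in totals}

in_scope = ["CC4.2", "CC6.1", "CC6.3", "A1.1", "A1.2", "A1.3"]   # truncated example
controls = [
    {"id": "CTRL-AC-001", "tsc_criteria": ["CC6.1", "CC6.3"]},
    {"id": "CTRL-AV-001", "tsc_criteria": ["A1.1"]},
]
print(coverage_by_category(in_scope, controls))   # {'CC': 67, 'A': 33}
```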
Step 5: Control Narratives — structuring what auditors read
The Control Narrative Author skill writes detailed narratives for each TSC criterion in scope, with evidence placeholders that auditors can trace to artifacts.
You: Draft control narratives for all in-scope criteria — CC1–CC9 and A1.
Agent: I’ve created
soc2-control-narratives.mdwith sections for each criterion. For CC6 (Logical and Physical Access), I’ve described Okta SSO with MFA enforcement, RBAC via AWS IAM, quarterly access reviews, and 24-hour termination deprovisioning. Evidence placeholders reference your Okta configuration export, access review report, and HR offboarding checklist.Let me run the TSC coverage checker. Result: all 10 major criteria referenced (CC1–CC9, A1). Two warnings — CC4 and CC7 need clearer evidence references. I’ll add the specific Datadog monitoring dashboard and the deficiency reporting procedure as evidence.
After revision, all checks pass. Each criterion has a narrative and at least one evidence reference.
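The coverage check for narratives is essentially a presence test: does each in-scope criterion have a section, and does that section reference at least one evidence artifact? A sketch that scans a markdown narrative; the heading and [Evidence: ...] placeholder conventions are assumptions, not the skill’s actual output format:

```python
import re

def check_narratives(markdown: str, in_scope: list[str]) -> list[str]:
    """Flag criteria with no narrative section or no evidence placeholder."""
    findings = []
    # Assumed convention: one "## CC6.1 ..." heading per criterion,
    # with evidence placeholders written as "[Evidence: ...]".
    sections = re.split(r"^## ", markdown, flags=re.MULTILINE)[1:]
    by_criterion = {s.split()[0]: s for s in sections if s.strip()}
    for criterion in in_scope:
        section = by_criterion.get(criterion)
        if section is None:
            findings.append(f"{criterion}: no narrative section")
        elif "[Evidence:" not in section:
            findings.append(f"{criterion}: narrative has no evidence reference")
    return findings

narrative = """## CC6.1 Logical access controls
Okta SSO with MFA enforcement... [Evidence: Okta configuration export]
## CC6.3 Access removal
Quarterly access reviews and 24-hour termination deprovisioning...
"""
print(check_narratives(narrative, ["CC6.1", "CC6.3"]))   # CC6.3 lacks an evidence reference
```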
Step 6: Policy Generator — tailored policies with document control
The Policy Generator skill creates 8 core policies (plus optional Privacy and Processing Integrity policies) tailored to the organization’s context. Each policy maps to specific TSC criteria, references actual systems and roles, and undergoes structural validation.
You: Generate all required policies for our SOC 2 engagement.
Agent: I’ve generated 8 core policies based on your organization profile and risk register:
- POL-001 Information Security Policy — Governance, roles, commitments (CC1, CC2)
- POL-002 Access Control Policy — Logical/physical access, MFA, RBAC (CC6)
- POL-003 Change Management Policy — PR reviews, CI/CD, rollback (CC8)
- POL-004 Incident Response Policy — Detection, triage, notification (CC7)
- POL-005 Risk Assessment Policy — Methodology, treatment, monitoring (CC3, CC9)
- POL-006 Vendor Management Policy — Third-party risk, SOC reports (CC9)
- POL-007 Business Continuity Policy — BCP/DRP, RTO/RPO (A1)
- POL-008 Data Classification Policy — Classification levels, handling (CC6, C1)
The policy completeness checker ran on each: 7 passed, POL-004 had a warning — the incident notification SLA says “promptly” instead of a specific timeframe. I’ve updated it to “within 72 hours of confirmation” to match your service commitments.
The policy TSC mapper shows 100% coverage of in-scope criteria across the 8 policies. All policies reference named roles (Sarah Chen as CISO, Michael Torres as CTO) and actual systems (Okta, AWS, GitHub).
A plain LLM could draft generic policies. The skill ensures each policy is internally consistent with the organization profile, risk register, and every other artifact in the workspace.
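Vague-language detection is a lexical scan at heart. A minimal sketch with a flag list of terms auditors most often push back on; the exact term list and context window Rakenne’s checker uses are not shown here:

```python
import re

VAGUE_TERMS = ["appropriate", "periodically", "promptly", "as needed", "regularly", "timely"]

def flag_vague_language(policy_text: str) -> list[str]:
    """Return each vague term found, with a snippet of surrounding context."""
    findings = []
    for term in VAGUE_TERMS:
        for match in re.finditer(rf"\b{re.escape(term)}\b", policy_text, re.IGNORECASE):
            start = max(match.start() - 30, 0)
            snippet = policy_text[start:match.end() + 30].replace("\n", " ")
            findings.append(f"'{term}': ...{snippet}...")
    return findings

policy = "Security incidents are reported promptly to the CISO. Access is reviewed periodically."
for finding in flag_vague_language(policy):
    print(finding)
```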
Step 7: Vendor Management — third-party oversight per CC9.2
SOC 2 auditors specifically test vendor oversight per CC9.2. The Vendor Management skill builds a risk-tiered assessment framework, vendor register, and SOC report review validation.
You: Build the vendor management program based on our organization profile.
Agent: I’ve identified 5 vendors from the organization profile and extended the list with 3 additional third parties that have data access. Running the vendor risk scorer now.
Tier classification:
- Critical: AWS (hosts all customer data and infrastructure), Okta (controls authentication)
- High: GitHub (source code access), Stripe (payment data processing)
- Medium: Datadog (infrastructure monitoring, log ingestion), SendGrid (email delivery)
- Low: Calendly (scheduling), Notion (internal documentation)
The SOC report review checker found:
- AWS, Okta, GitHub, and Stripe all have current SOC 2 Type II reports (< 12 months) — PASS
- Datadog has a SOC 2 report but it expires next month — WARNING, flag for renewal tracking
- SendGrid SOC 2 report is current — PASS
- CSOCs from the system description are validated against the vendor controls described in their reports — PASS
I’ve produced the Vendor Risk Assessment Framework, Vendor Register, and Vendor Security Requirements documents.
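The SOC report review reduces largely to report age and renewal tracking. A sketch of that freshness rule, with a 12-month validity window and a 60-day renewal warning as assumed thresholds consistent with the dialog above:

```python
from datetime import date, timedelta

def review_soc_report(vendor: str, report_date: date, today: date) -> str:
    """PASS if the report is under 12 months old, WARNING if it expires within 60 days."""
    expires = report_date + timedelta(days=365)
    if today > expires:
        return f"{vendor}: FAIL, SOC report older than 12 months"
    if today > expires - timedelta(days=60):
        return f"{vendor}: WARNING, SOC report expires soon, flag for renewal tracking"
    return f"{vendor}: PASS"

today = date(2025, 7, 15)   # illustrative review date
for vendor, report_date in [("AWS", date(2025, 1, 10)), ("Datadog", date(2024, 8, 1))]:
    print(review_soc_report(vendor, report_date, today))
```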
Step 8: Monitoring & Testing — building the evidence machine
The Monitoring & Testing skill builds the testing infrastructure for the audit period: a control testing plan, evidence collection matrix, and exception tracker. This is critical for Type II — auditors need evidence that controls operated throughout the period, not just at a point in time.
You: Build the control testing plan for our Type II audit period (July 2025 – June 2026).
Agent: I’ve built the testing plan covering all 13 in-scope TSC criteria. Each control has a defined testing method, frequency, tester, and expected evidence. Let me validate.
The test plan validator confirmed all criteria have at least one tested control. One warning: CTRL-AC-003 (access reviews) is tested only by inquiry — I’ve upgraded it to inspection + reperformance (re-execute a sample of quarterly access reviews independently).
The evidence coverage checker shows 85% coverage — 3 controls are still pending evidence collection. Evidence strength is adequate: 60% inspection/reperformance, 30% observation, 10% inquiry (always combined with another method).
The exception tracker found 2 existing exceptions:
- EXC-001 (Minor): One quarterly access review was completed 5 days late. Root cause: reviewer on PTO. Remediation: backup reviewer assigned.
- EXC-002 (Moderate): Backup restoration test not performed in Q3. Root cause: infrastructure team transition. Remediation: scheduled for Q4 with documented procedure.
Both exceptions have remediation plans, owners, and target dates — all checks pass.
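The exception tracker’s completeness rule is tedious to eyeball and trivial to automate. A sketch assuming each exception record carries severity, root cause, and remediation fields; the owners and dates below are illustrative placeholders:

```python
REQUIRED_FIELDS = ["severity", "root_cause", "remediation_plan", "owner", "target_date"]

def check_exceptions(exceptions: list[dict]) -> list[str]:
    """Every logged exception needs severity, root cause, and a complete remediation plan."""
    return [
        f"{exc['id']}: missing {field}"
        for exc in exceptions
        for field in REQUIRED_FIELDS
        if not exc.get(field)
    ]

exceptions = [
    {"id": "EXC-001", "severity": "Minor", "root_cause": "Reviewer on PTO",
     "remediation_plan": "Backup reviewer assigned", "owner": "IT manager", "target_date": "2025-10-15"},
    {"id": "EXC-002", "severity": "Moderate", "root_cause": "Infrastructure team transition",
     "remediation_plan": "DR test scheduled for Q4", "owner": "Infrastructure lead", "target_date": "2025-12-15"},
]
print(check_exceptions(exceptions) or "PASS")   # any missing owner or target date would be listed here
```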
Step 9: Internal Audit — the capstone readiness assessment
The Internal Audit skill is the final checkpoint before engaging the external auditor. It simulates the auditor’s approach: testing controls, classifying findings per AICPA severity guidance, organizing evidence, and producing a readiness assessment.
You: Conduct the internal readiness audit for our SOC 2 Type II engagement.
Agent: I’ve loaded all prior artifacts — organization profile, system description, risk register, gap analysis, control narratives, 8 policies, vendor register, testing plan, and evidence matrix. Let me draft the audit plan and run validation.
The audit plan validator confirmed: all 13 in-scope TSC criteria are covered, audit period matches the engagement (July 2025 – June 2026), auditor independence is documented, and sampling methodology is defined.
I’ve completed control testing. Three findings identified:
FIND-001 (Backup Verification, A1.2): Significant Deficiency. DR test was not performed in Q3 2025 — the control exists but didn’t operate effectively during the period. Root cause: infrastructure team transition. Recommendation: complete DR test immediately and implement automated scheduling.
FIND-002 (Access Review Timeliness, CC6.3): Deficiency. One of four quarterly access reviews was completed 5 days late. Control operated but not within the defined timeline. Recommendation: assign backup reviewer for PTO coverage.
FIND-003 (Vendor SOC Report Tracking, CC9.2): Observation. Datadog SOC 2 report approaching expiration with no automated renewal tracking. Recommendation: implement vendor certification expiration alerts.
The finding severity classifier confirmed all classifications are consistent with AICPA guidance. The evidence organizer mapped all evidence to TSC criteria — 2 criteria have evidence gaps that need attention before the external audit.
Overall readiness: Conditionally Ready. No material weaknesses. One significant deficiency (FIND-001) has an active remediation plan. Recommend completing DR test and evidence collection before engaging the external auditor.
The audit report references actual workspace artifacts:
| Ref | Requirement | Evidence | Status |
|---|---|---|---|
| CC6.1 | Access control policy implemented? | POL-002 v1.0, Okta SSO config | Pass |
| CC6.3 | Access reviews performed? | Q1–Q4 access review reports | Pass (with exception: Q2 late) |
| A1.2 | Capacity and recovery planning? | BCP section 4.2, DR test report | Fail (Q3 DR test missing) |
| CC9.2 | Vendor oversight documented? | Vendor Register, SOC reports | Pass |
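The evidence organizer’s gap check inverts that mapping: rather than listing the evidence that exists, it lists in-scope criteria with nothing mapped to them. A sketch with a truncated criterion list and illustrative artifact names:

```python
def find_evidence_gaps(in_scope: list[str], evidence: list[dict]) -> list[str]:
    """Return in-scope TSC criteria with no evidence artifact mapped to them."""
    covered = {crit for artifact in evidence for crit in artifact.get("tsc_criteria", [])}
    return [criterion for criterion in in_scope if criterion not in covered]

in_scope = ["CC6.1", "CC6.3", "CC9.2", "A1.2", "A1.3"]            # truncated example
evidence = [
    {"name": "Okta SSO configuration export", "tsc_criteria": ["CC6.1"]},
    {"name": "Q1-Q4 access review reports", "tsc_criteria": ["CC6.3"]},
    {"name": "Vendor register and SOC reports", "tsc_criteria": ["CC9.2"]},
]
print(find_evidence_gaps(in_scope, evidence))   # ['A1.2', 'A1.3'] would need attention pre-audit
```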
The dashboard: tracking progress across all 9 skills
As each skill completes, the agent updates the project dashboard. The dashboard provides a single view of SOC 2 readiness:
Key metrics tracked:
- SOC 2 Readiness — percentage of skills completed (Scope & Context → Assess → Implement → Validate)
- Current Phase — which lifecycle phase you’re in
- Audit Type — Type I or Type II
- Documents Produced — total output artifacts (system descriptions, policies, registers, reports)
- Risk Distribution — breakdown by severity (Critical / High / Medium / Low)
- TSC Criteria Coverage — percentage of in-scope criteria with mapped, tested controls
- Gap Findings — open vs. closed findings from gap analysis
- Evidence Coverage — percentage of controls with collected evidence
- Exceptions — open vs. remediated control exceptions
- Audit Readiness — Ready, Conditionally Ready, or Not Ready
The dashboard gives consultants and management a real-time view of where the engagement stands and what still needs attention.
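Most of these metrics are roll-ups of state the skills have already written to the workspace. A sketch of the readiness percentage and current-phase computation, using the phase-to-skill mapping from the table earlier; the state shape is an assumption:

```python
PHASES = {
    "Scope & Context": ["Organization Profile", "System Description"],
    "Assess": ["Risk Assessment", "Gap Analysis"],
    "Implement": ["Control Narratives", "Policy Generator", "Vendor Management"],
    "Validate": ["Monitoring & Testing", "Internal Audit"],
}

def dashboard_summary(completed: set[str]) -> dict:
    """Roll skill completion up into the two headline dashboard metrics."""
    all_skills = [skill for skills in PHASES.values() for skill in skills]
    current_phase = next(
        (phase for phase, skills in PHASES.items() if not set(skills) <= completed),
        "Complete",
    )
    return {
        "soc2_readiness_pct": round(100 * len(completed & set(all_skills)) / len(all_skills)),
        "current_phase": current_phase,
    }

print(dashboard_summary({"Organization Profile", "System Description", "Risk Assessment"}))
# {'soc2_readiness_pct': 33, 'current_phase': 'Assess'}
```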
Effort comparison: consultant time with and without tool assistance
Based on typical SOC 2 consulting effort breakdowns for first-time Type II certification of a mid-sized SaaS organization:
| Activity | % of effort | Tool-assisted acceleration |
|---|---|---|
| Scoping, organization profile, system description | 15% | Organization Profile + System Description skills reduce interviews and enforce AICPA structure |
| Risk assessment + treatment | 18% | Three risk validation tools enforce TSC mapping, identify control gaps, validate residual scoring |
| Gap analysis + remediation planning | 12% | TSC mapping engine and evidence validator replace manual checklist work |
| Control narratives | 10% | Context-aware generation with TSC coverage checking and evidence placeholders |
| Policy set (8–10 policies) | 14% | Context-aware generation with completeness checker and vague-language detection |
| Vendor management | 8% | Risk scoring and SOC report review validation structure the assessment |
| Monitoring, testing, evidence | 15% | Test plan validator, evidence coverage checker, and exception tracker automate quality checks |
| Internal audit + readiness | 8% | Finding severity classification and evidence organization prepare for external auditor |
The heaviest documentation activities (system description, risk assessment, policy generation, evidence management) are exactly where the validation tools add the most value — not by replacing consultant judgment, but by catching the mechanical errors and omissions that consume review cycles.
Getting started
- Create a new project in Rakenne and select the SOC 2 Audit Readiness workspace template
- All 9 skills and validation tools are automatically installed
- Start with the Organization Profile — provide your client’s details and let the agent build the structured profile
- Follow the four-phase sequence through all 9 steps, or jump to specific skills based on your gap analysis results
- Use the dashboard to track progress and identify what’s still needed
Each skill can be run on its own, but every skill reads artifacts produced by earlier steps. Following the recommended sequence ensures each one has the context it needs from prior outputs.
Summary
The SOC 2 Audit Readiness workspace template turns SOC 2 preparation from an ad-hoc drafting exercise into a structured, validated process. The 9 skills cover the full readiness lifecycle, and the 25+ validation tools enforce the same checks a senior GRC consultant would apply — consistently, automatically, and traceably.
The result is not a set of generic templates. It is a set of internally consistent, organization-specific, TSC-aligned artifacts that reference each other, link risks to controls to policies to evidence, and flag gaps before an auditor does.
Try it yourself
Open a workspace with the skills described in this article and start drafting in minutes.
Get Started Free — No Sign-Up