CLAUDE CODE COURSE · LEVEL 3 / 3

Claude Code · Advanced
ten modules for org deployment

For EMs, CTOs, and engineering leads taking Claude Code to 30–200 people. Sixty to ninety minutes per module, ten to fifteen hours total. Team vs Enterprise plans, SSO/SCIM/RBAC, Bedrock Japan residency, Agent SDK, ZDR contracts, internal plugin marketplace, Routines, CI/CD, and the OKRs you'll need to prove it worked.

Level 3 · 10 modules · 10–15 hours · EM or CTO

Who this is for

This course is for managers who already use Claude Code well on their own. Audience: engineering managers, CTOs, engineering leads, platform team leads. The goal is to be able to design the rollout, operations, and governance for an organization of 30–200 people.

The beginner and intermediate courses were about personal productivity. This one is about the design calls and operational patterns when you put Claude Code in front of a full team. Ten modules, sixty to ninety minutes each, ten to fifteen hours of self-study.

Prerequisites. Six months of daily Claude Code CLI or Desktop use, Git and GitHub Actions basics, admin on your IdP (Okta / Entra ID / Google Workspace), admin on an AWS account. Without those, the modules will take much longer.

Module shape

Every module follows the same six sections. This isn't a feature tour — each module asks what to decide, how to build it, and how to tell success from failure.

  • Goal — the state you should reach.
  • Content — topics covered.
  • Design calls — options and criteria for your org's context.
  • Implementation — the actual build order.
  • Success signals — measurable proof it's working.
  • Failure patterns — the pits people fall into.

Module 1 · Team vs Enterprise + organization policy (60 min)

Goal

Pick Team or Enterprise for your company with a clear rationale — based on size, industry, and infosec requirements. Write a one-page policy for what's OK to put into Claude.

Content

  • Team ($20/seat + Code Premium option) vs Enterprise.
  • Enterprise-only: mandatory SSO, SCIM, custom retention, audit log export, custom contracts (ZDR, BAA).
  • Decision rule of thumb: under 30 → Team, 50+ → Enterprise, in between depends.
  • Organization policy: classify information (Public / Internal / Confidential / Secret).
  • Matrix: what each class allows, what each class forbids.

Design calls

  • Headcount 30–50? Look at the growth curve and plan seats six months out.
  • Legal or InfoSec requires ZDR or BAA? Enterprise, no debate.
  • Department-by-department adoption turns into a merger headache later. Company-wide from day one.
  • Decide up front whether non-engineering functions (strategy, legal, sales) get seats.

Implementation

  1. Draft the information classes on one page (4 classes × 3 examples each).
  2. Review with InfoSec.
  3. Send an RFP to Anthropic sales, get a formal quote.
  4. Legal checks DPA / ZDR / BAA requirements.
  5. Prepare the plan-selection memo for executive sign-off.
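
The classification matrix from steps 1 and 2 can be sketched as a fail-closed lookup. The class names follow the module; the tool names and permissions here are illustrative placeholders, not Anthropic's actual product matrix.

```python
# Sketch of the information-class -> allowed-tool matrix from step 1.
# Tool names and the allow/deny values are illustrative assumptions.
POLICY = {
    "Public":       {"claude_code": True,  "console_ui": True},
    "Internal":     {"claude_code": True,  "console_ui": True},
    "Confidential": {"claude_code": True,  "console_ui": False},
    "Secret":       {"claude_code": False, "console_ui": False},
}

def is_allowed(info_class: str, tool: str) -> bool:
    """Return whether a tool may receive data of the given class.
    Unknown classes or tools default to forbidden (fail closed)."""
    return POLICY.get(info_class, {}).get(tool, False)
```

Fail-closed matters: a typo'd or brand-new class should deny by default, not allow.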

Success signals

  • 90%+ of users classify information correctly (spot check).
  • You can explain the plan choice in five minutes at a board meeting.
  • Acceptable Use Policy fits on one page and is in the new-hire onboarding pack.

Failure patterns

  • Thinking "Enterprise fixes everything" and punting policy design.
  • No Acceptable Use Policy — "just use common sense" tossed to the field.
  • Some departments on Team, others on Enterprise. Audit and contracts go sideways.

Module 2 · SSO + SCIM + RBAC (90 min)

Goal

Get IdP-based SSO and SCIM running in production. Implement RBAC as Group → Role mapping. Pipe audit logs to a SIEM.

Content

  • Choosing an IdP (Okta / Entra ID / Google Workspace).
  • SAML 2.0 vs OIDC — when each wins.
  • SCIM-based auto-provisioning.
  • Group → Role mapping (Admin / PowerUser / Reader / Auditor).
  • Session handling and timeouts.
  • Audit: who did what, exported to SIEM (Splunk / Datadog / Elastic).
  • Hands-on in a sandbox (Entra ID free tier) with a real integration.

Design calls

  • If you already have an IdP, keep it. If you're installing one, Entra ID or Okta are the defaults.
  • Cap roles at four. Any more and nobody can keep the inventory tidy.
  • If you already run a SIEM, match Claude's log format to your existing dashboards.

Implementation

  1. Create a Claude app in the IdP.
  2. Upload SAML 2.0 metadata to Anthropic's admin console.
  3. Turn on the SCIM endpoint and test group sync.
  4. Create Admin / PowerUser / Reader / Auditor groups in the IdP.
  5. Map groups to roles.
  6. Configure audit log export (S3 / Splunk HEC / Datadog API).
  7. Test the offboarding flow — delete in IdP, access vanishes from Claude.

Success signals

  • IdP removal cuts access within five minutes.
  • Audit logs reach the SIEM in real time (under ten-minute lag).
  • Zero manual user add/remove work.

Failure patterns

  • Skipping SCIM — departed employees keep access.
  • Ten-plus roles in the system, drifting from reality.
  • Audit logs exported but nobody looks at them (no dashboard).

Module 3 · Japan residency on AWS Bedrock (75 min)

Goal

Get API traffic contained inside ap-northeast-1 in a working configuration. Prove "data doesn't leave for the US" through both routing and logs.

Content

  • Why Japan residency matters (APPI, customer requirements, BCP clauses).
  • Direct API (api.anthropic.com) vs Bedrock (ap-northeast-1) — how they differ.
  • Point Claude Code at Bedrock: env vars, IAM role.
  • Price comparison: Direct API vs Bedrock.
  • Use Opus 4.7 in ap-northeast-1 (supported since 2026-04-20).
  • Hands-on: enable Bedrock in your AWS account, run Claude Code CLI against it.

Design calls

  • Direct API is simple but US-routed. Bedrock stays in Japan but adds IAM / VPC / billing work.
  • Cross-region inference profiles are handy, but the data path can escape to the US. Disable them explicitly.
  • Bedrock costs more than Direct API. Treat the delta as the price of meeting residency requirements.

Implementation

  1. Request and get approval for Claude model access in ap-northeast-1 on Bedrock.
  2. Create a dedicated IAM role, least privilege only.
  3. Push env vars: CLAUDE_CODE_USE_BEDROCK=1, AWS_REGION=ap-northeast-1.
  4. Pin the inference profile to ap-northeast-1 only. No cross-region.
  5. Decide whether to use VPC Endpoint / PrivateLink.
  6. Monitor CloudTrail for a week and confirm every call's destination region.
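
The step-6 CloudTrail check is easy to automate. This sketch filters exported CloudTrail records for Bedrock calls outside the allowed region; the sample events are fabricated, but eventSource and awsRegion are real CloudTrail record fields.

```python
# Scan CloudTrail events for Bedrock calls that left ap-northeast-1.
ALLOWED_REGION = "ap-northeast-1"

def out_of_region_calls(events: list[dict]) -> list[dict]:
    """Return Bedrock invocations whose awsRegion is not the allowed one."""
    return [
        e for e in events
        if e.get("eventSource") == "bedrock.amazonaws.com"
        and e.get("awsRegion") != ALLOWED_REGION
    ]

# Fabricated sample records, shaped like CloudTrail events.
events = [
    {"eventSource": "bedrock.amazonaws.com", "awsRegion": "ap-northeast-1"},
    {"eventSource": "bedrock.amazonaws.com", "awsRegion": "us-east-1"},
    {"eventSource": "s3.amazonaws.com",      "awsRegion": "us-east-1"},
]
leaks = out_of_region_calls(events)  # flags only the out-of-region Bedrock call
```

Run something like this daily against the exported trail during the one-week soak; an empty result for seven straight days is the evidence Legal and InfoSec will ask for.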

Success signals

  • Every API call stays in ap-northeast-1 (CloudTrail confirms).
  • Legal and InfoSec have signed off on the residency setup.
  • Monthly cost sits inside the forecast range.

Failure patterns

  • Cross-region inference profile misconfigured — data leaks to the US.
  • Direct API and Bedrock mixed — a few users still on old keys.
  • Bedrock costs estimated like Direct API costs — budget overrun.

Cross-link: deep-security.html

Module 4 · Build a custom agent on the Agent SDK (90 min)

Goal

Ship at least one autonomous agent with the Agent SDK (Python or TypeScript). Know when to pick the SDK over a Claude Code extension.

Content

  • Agent SDK (Python / TypeScript) — what it is, when you actually need it.
  • Building autonomous workflows in the API rather than as a Claude Code extension.
  • Built-in tools: file / shell / web.
  • Calling MCP servers from the SDK.
  • Protocol: loop / stop conditions / error handling / monitoring.
  • Sample: a Slack-triggered Q&A bot over internal docs.
  • Hands-on: one mini bot, end to end.

Design calls

  • Claude Code enough, or do you need the SDK? Decide based on interactivity, persistence, and trigger source.
  • If a slash command or a Skill does the job, don't reach for the SDK. This is rule one.
  • The SDK is for workflows that run outside a Claude Code session.

Implementation

  1. Pick one problem (e.g. Slack message → internal wiki search → reply).
  2. Create a Python or TypeScript project using the Agent SDK.
  3. Wire in the minimum built-in tools required.
  4. Write down explicit stop conditions (max steps / max tokens / timeout).
  5. Design the error-path behavior.
  6. Log in structured JSON.
  7. Monitor: failure rate, average tokens, average latency on a dashboard.
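
Steps 4–6 (stop conditions, error path, structured logs) can be sketched as a single control loop. Here run_step stands in for whatever the Agent SDK call actually looks like; nothing below is the SDK's real API, only the shape of the loop around it.

```python
import json
import time

# Explicit stop conditions (step 4). Tune per workload.
MAX_STEPS, MAX_TOKENS, TIMEOUT_S = 20, 50_000, 120

def run_agent(run_step, log=print) -> str:
    """Run an agent loop until done or a stop condition fires.
    run_step() is a stand-in returning {"tokens": int, "done": bool}."""
    steps = tokens = 0
    deadline = time.monotonic() + TIMEOUT_S
    while True:
        if steps >= MAX_STEPS:
            reason = "max_steps"; break
        if tokens >= MAX_TOKENS:
            reason = "max_tokens"; break
        if time.monotonic() > deadline:
            reason = "timeout"; break
        try:
            result = run_step()  # error path (step 5): stop, don't retry blindly
        except Exception as exc:
            log(json.dumps({"event": "error", "detail": str(exc)}))
            reason = "error"; break
        steps += 1
        tokens += result["tokens"]
        # Structured JSON log per step (step 6).
        log(json.dumps({"step": steps, "tokens": tokens, "done": result["done"]}))
        if result["done"]:
            reason = "completed"; break
    log(json.dumps({"event": "stop", "reason": reason, "steps": steps, "tokens": tokens}))
    return reason
```

Because every exit path emits a structured stop record, the step-7 dashboard (failure rate, average tokens) is just an aggregation over these log lines.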

Success signals

  • Autonomous tasks adapt to new workflows with minimal prompt edits.
  • Token usage lands within ±30% of forecast.
  • Failures alert a human.

Failure patterns

  • Loose stop conditions — token blowup.
  • Writing SDK code for tasks Claude Code could handle, adding ops load.
  • Text logs instead of structured logs — failure diagnosis drags on.

Module 5 · ZDR contracts and enterprise security (60 min)

Goal

Know ZDR's scope and the contract process. Sign it in a form InfoSec accepts. Set up operations that prevent "ZDR-covered vs. not" confusion internally.

Content

  • How ZDR works and what it covers (data deleted after API response).
  • ZDR covers: Messages API / Token API / Claude Code (via API key or Enterprise).
  • ZDR doesn't cover: Console / Consumer / Teams UI (anything besides Claude Code).
  • Contract process: contact Anthropic sales, eligibility review, org-level activation.
  • Features that get turned off when ZDR is enabled (conversation history retention).

Design calls

  • If you sign ZDR, either ban Console and Consumer internally, or restrict them to Public-class data.
  • No conversation history means no "continue last chat." Train users alongside rollout.
  • Regulated industries (medical, financial) typically sign ZDR + BAA together.

Implementation

  1. Agree with Legal and InfoSec that ZDR is needed.
  2. Apply for ZDR with Anthropic sales.
  3. Communicate internally that usage has to go through covered endpoints (Messages API / Claude Code).
  4. Lock down Console and Consumer UI at the IdP, so people don't type confidential data into them.
  5. Post-activation, verify on one test that conversation history is actually gone.

Success signals

  • ZDR contract exists, InfoSec has signed off.
  • Zero confidential inputs to non-ZDR UIs (spot-audited).

Failure patterns

  • Signing ZDR but continuing to put confidential data into the Console UI.
  • Not telling users that history will disappear โ€” chaos on day one.

Module 6 · Internal plugin marketplace (75 min)

Goal

Build an internal distribution system for plugins (skills + commands + MCP config). Design the versioning, signing, and approval process.

Content

  • Distribute plugins (skills + commands + MCP config) inside the org.
  • Use an internal GitHub repo as the marketplace.
  • Distribute extraKnownMarketplaces via settings.json.
  • Plugin versioning, signing, approval.
  • Hands-on: build and distribute an internal "weekly-report" plugin.

Design calls

  • One marketplace only. Two or more breaks versioning.
  • Approval: PR review required, plus a security sign-off.
  • Strict Semantic Versioning. Breaking change bumps major.

Implementation

  1. Create a dedicated repo in your internal GitHub org (e.g. claude-marketplace).
  2. README documents the submission process.
  3. CODEOWNERS locks down reviewers.
  4. Distribute a settings.json template (with extraKnownMarketplaces preset).
  5. First plugin "weekly-report" ships to one team for dogfooding.
  6. Track usage on a dashboard (installs, runs).
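
The template distributed in step 4 is just JSON. The key name extraKnownMarketplaces comes from the module content above, but the nested source shape and field names below are assumptions to verify against the current plugin docs; the marketplace name and repo are placeholders. Building it in Python keeps the template valid by construction.

```python
import json

# Sketch of the settings.json template from step 4.
# "acme-internal" and the repo path are placeholders; the nested
# {"source": {...}} shape is an assumption, not confirmed schema.
settings_template = {
    "extraKnownMarketplaces": {
        "acme-internal": {
            "source": {"source": "github", "repo": "acme/claude-marketplace"}
        }
    }
}

template_json = json.dumps(settings_template, indent=2)
```

Generating rather than hand-editing the template means a malformed settings.json never reaches users' machines.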

Success signals

  • Ten or more people internally run the same /your-command as a standard.
  • Plugins follow Semantic Versioning.
  • Nothing unreviewed runs in production.

Failure patterns

  • No versioning — one breaking change, the whole company's work stops.
  • Skipping approvals — quality varies and trust erodes.
  • Multiple marketplaces compete, nobody knows which is canonical.

Module 7 · Running Routines in production (75 min)

Goal

Build the ops base for Routines (trigger design, failure alerts, run-count tracking). Automate three recurring tasks across schedule / GitHub / API triggers.

Content

  • Routines — Claude Code tasks run automatically via schedule / trigger / API.
  • Schedule: weekly review generated Monday 10:00.
  • GitHub trigger: PR merged → related docs auto-updated.
  • API trigger: external systems fire jobs.
  • Execution limits per plan (Plan / Max / Team / Enterprise).
  • Failure alert design (Slack / PagerDuty).
  • Hands-on: design three routines (schedule / GitHub / API).

Design calls

  • Start Schedule routines on non-critical work. If it fails, nobody's day is ruined.
  • GitHub triggers fit "doc update" and "changelog append" post-merge.
  • API triggers open the door to internal chatbot / ticketing integration.

Implementation

  1. Pick three recurring tasks to automate.
  2. Match each to its right trigger.
  3. Write the routine and dry-run it.
  4. Wire up Slack / PagerDuty for failure.
  5. Soak for two weeks, measure failure rate.
  6. Check that run counts sit inside the plan limits.
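
The step-5 soak reduces to a per-routine failure-rate report. The 5% threshold mirrors this module's success signal; the run-record shape is invented for the sketch.

```python
# Two-week soak report: failure rate per routine, and whether the
# rate crosses the paging threshold. Record shape is illustrative.
FAILURE_RATE_LIMIT = 0.05  # matches the "under 5%" success signal

def soak_report(runs: list[dict]) -> dict:
    """runs: [{"routine": str, "ok": bool}, ...] -> per-routine stats."""
    report = {}
    for r in runs:
        stats = report.setdefault(r["routine"], {"total": 0, "failed": 0})
        stats["total"] += 1
        stats["failed"] += 0 if r["ok"] else 1
    for name, s in report.items():
        s["rate"] = s["failed"] / s["total"]
        s["page"] = s["rate"] > FAILURE_RATE_LIMIT
    return report
```

Note the report answers "is this routine healthy over two weeks"; the per-failure Slack/PagerDuty alert from step 4 still fires on every individual failure.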

Success signals

  • Three or more recurring tasks no longer require manual work.
  • Failure rate under 5%, and every failure pages someone.
  • Run counts stay under 70% of the limit — room to grow.

Failure patterns

  • No visibility on errors — silent failures go on for weeks.
  • Jumping straight to critical workflows — the first failure hurts badly.
  • Ignoring the execution cap, hitting the wall at month-end.

Module 8 · Parallel Sessions across multiple repos (60 min)

Goal

Use Parallel Sessions for multi-repo feature work. Design the context handoff between sessions explicitly.

Content

  • Big orgs split repos (backend / frontend / infra / docs).
  • Parallel Sessions in Claude Code Desktop handle multiple repos at once.
  • Cross-session info patterns (clipboard vs. shared doc vs. session summary export).
  • Code review: diff written in session A reviewed by Claude in session B.

Design calls

  • Context is NOT shared across sessions automatically. Build for that.
  • Keep a one-page "feature note" shared across sessions and have each session read it first.
  • Cross-repo refactors don't belong in a single session — the review becomes impossible.

Implementation

  1. Build a feature-note template (purpose, affected repos, API contract).
  2. Each session reads the note before starting.
  3. Export session A's diff and have session B review it.
  4. End every session with a written handoff note.

Success signals

  • One feature gets worked in three repos concurrently.
  • Review reveals no cross-session inconsistencies.

Failure patterns

  • Assuming session B inherits knowledge from session A.
  • Skipping feature notes, explaining things verbally per session — nothing reproduces.

Module 9 · Claude Code in CI/CD (75 min)

Goal

Call Claude Code from GitHub Actions. Ship automated PR review and CI-failure analysis in production. Manage API keys securely via OIDC.

Content

  • Call Claude Code from GitHub Actions.
  • Automatic code review on PR open.
  • On CI failure, Claude comments with the likely cause.
  • CI triggered from Routines (and vice versa).
  • API key management (GitHub Secrets vs. OIDC federation).
  • Hands-on: wire up automated PR review via GitHub Actions.

Design calls

  • API keys through OIDC + AWS Secrets Manager / Vault. Don't store long-lived keys in GitHub Secrets.
  • Start review as "suggesting," not "blocking." Accept some noise.
  • CI failure analysis stays at "comment with context." No automatic retries.

Implementation

  1. Build the workflow that fetches the API key dynamically via OIDC.
  2. Create a workflow that triggers Claude on PR opened.
  3. Use GITHUB_TOKEN for posting comments, least privilege.
  4. Codify false-positive patterns in the prompt so they stop appearing.
  5. Soak for two weeks. Collect feedback from PR authors and reviewers.
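
One way to do step 4 is to keep the known false positives as a pattern list and drop matching findings before they're posted (or paste the same list into the review prompt). The finding structure and the patterns themselves are illustrative, not a real API.

```python
import re

# Known false-positive patterns collected from reviewer feedback
# (step 4). Patterns and the finding dict shape are illustrative.
FALSE_POSITIVE_PATTERNS = [
    re.compile(r"unused import", re.IGNORECASE),            # linter already covers this
    re.compile(r"consider adding a docstring", re.IGNORECASE),
]

def filter_findings(findings: list[dict]) -> list[dict]:
    """Drop findings whose message matches a known false-positive pattern."""
    return [
        f for f in findings
        if not any(p.search(f["message"]) for p in FALSE_POSITIVE_PATTERNS)
    ]

findings = [
    {"file": "app.py", "message": "Unused import 'os'"},
    {"file": "db.py",  "message": "Possible SQL injection in query builder"},
]
kept = filter_findings(findings)  # only the substantive finding survives
```

Keeping the list in the repo means every suppressed pattern has a PR trail, which is exactly the feedback loop the two-week soak in step 5 needs.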

Success signals

  • Human review load drops; early fixes go up.
  • False-positive rate is inside what the team tolerates.
  • Zero API key leaks.

Failure patterns

  • Not codifying false positives — the team loses patience.
  • Storing API keys in GitHub Secrets long-term, plain text.
  • Making review blocking — CI is red all the time.

Module 10 · Internal enablement and OKRs (90 min)

Goal

Write a 30/60/90-day rollout plan with audience-specific enablement and OKRs. Be ready to make the ROI case to the board at the six-month mark.

Content

  • 30/60/90-day rollout inside the company.
  • Audience-specific training (engineers / PMs / strategy / legal / IR).
  • Selecting and training champions.
  • OKRs for measuring impact.
  • Maintenance owners for operational docs.
  • Playbooks for when things go sideways (low utilization, unclear ROI).

Design calls

  • No company-wide rollout on day one. Champions (5) → wider (20) → company-wide, three stages.
  • Different content per audience. What engineers and strategy teams need are not the same.
  • OKRs should weight outcomes (time saved, quality) over outputs (PR count). Don't measure only the easy things.

Implementation

  1. Days 0โ€“30: pick five champions, run focused training.
  2. Days 30โ€“60: champions roll out to their teams, weekly retro.
  3. Days 60โ€“90: company-wide rollout, audience-specific onboarding docs shipped.
  4. After 90: monthly KR reads, quarterly review.

Example OKR

  • KR1: 70%+ active users.
  • KR2: X% of monthly PRs involve Claude.
  • KR3: XX hours / person / month saved by six months in.
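
KR3 only lands at the board if the ROI arithmetic is explicit. A back-of-envelope sketch; every constant is a placeholder assumption to replace with your actual seat price and loaded hourly cost.

```python
# All numbers are placeholder assumptions, not real pricing.
SEAT_COST_MONTHLY = 60     # USD per seat per month, assumed
LOADED_HOURLY_COST = 80    # USD per engineer-hour, assumed

def monthly_roi(seats: int, hours_saved_per_person: float) -> float:
    """(value of hours saved - seat spend) / seat spend."""
    value = seats * hours_saved_per_person * LOADED_HOURLY_COST
    spend = seats * SEAT_COST_MONTHLY
    return (value - spend) / spend

# 50 seats saving 10 hours/person/month under these assumptions
roi = monthly_roi(50, 10)
```

If the measured hours saved can't clear the seat cost at your loaded rate, KR3 needs rework before the board ever sees it.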

Success signals

  • At six months, ROI is defensible at the board.
  • Five or more champions are active.
  • Audience-specific onboarding docs are maintained.

Failure patterns

  • Distribute and forget. No training, no measurement.
  • OKRs built on outputs only — outcomes invisible.
  • One or two champions carrying everything — bus factor of one.

Beyond advanced

After ten modules, the organization moves from "can operate Claude Code" to "can use Claude Code as a competitive weapon." Next steps:

  • Productize via Agent SDK — decide which internal tools can become customer-facing features.
  • Publish MCP servers externally — expose your product to LLMs.
  • Apply for Anthropic partner programs — case studies, beta access.
  • Stand up an internal AI ethics body — authority to make hard calls on AI use.
  • Work through the deep dives — deep-security.html / deep-ecosystem.html / deep-mcp.html.

Enterprise rollout checklist — 20 items

Before any large rollout (30+ people), every one of these has to be done. One unchecked box means you pause the rollout and go back.

  • Team / Enterprise contract signed
  • DPA / ZDR approved by Legal
  • SSO / SCIM live in production
  • Audit log exported to SIEM
  • Bedrock Japan residency configuration
  • API key rotation in operation (90-day cycle)
  • Internal plugin marketplace
  • Five or more champions trained
  • Onboarding docs per audience
  • Utilization dashboard
  • Incident response playbook
  • Acceptable Use Policy
  • Information class → allowed tool mapping
  • OKRs defined
  • Quarterly review cadence
  • ROI formula fitted to your company
  • Routines monitoring and alerting
  • Access revocation on departure
  • Access provisioning for new hires
  • Six-month refresher training plan

Operating principle. This checklist isn't one-and-done. Walk it again every six months and update as the organization shifts. "Access revocation on departure" and "API key rotation" slip through the cracks most often — quarterly review must touch both.

This page reflects what was current as of 2026-04-22. Anthropic ships fast; features, prices, and contract terms change without notice.

Last verified: 2026-04-22