FACILITATOR SCRIPT · 90 MINUTES

The 90-minute running script.

Every line, every click, every "what do I say when a CFO asks me that?" moment. Read this aloud the night before. On the day, run it next to the Driver View.

90 min · 5 sections · 15 Q&A entries

How to read this

Five block types. That's it:

"Line to say" — read it out. Bolded fragments are optional but recommended.
What you click or type on your own laptop.
What you point at on screen โ€” where the executives should look.
Questions execs tend to ask, and the prepared answer.
What to do when something breaks.
Three posture notes for the room:
  1. Drop the perfectionism. If Claude is slow, or it misfires, stay calm and say "that's part of it". You're running a live Research Preview. That's the reality, not a bug.
  2. Don't fill the silence. When Claude is thinking in Plan mode for 5–10 seconds, stay quiet. Match the exec's tempo, not your own nerves.
  3. Use numbers, not adjectives. Not "fast" — "three minutes". Not "efficient" — "104 hours back per year". That's the language the room speaks.

0:00 – 0:05 OPENING (5 min)

Set expectations. Three pillars, how today runs, prep check. About 300 words of lines — roughly 1.5 min of speaking and 3.5 min of back-and-forth.

0:00 — 1 min: Welcome + goal

Hit Space in Driver View to start the timer. Put index.html on the main display.
Thanks for carving out the 90 minutes. What we're going to do is put the latest Claude products — as of April 2026 — in your hands. The goal is narrow on purpose: tomorrow morning, on your own laptop, you try at least one task with Claude yourself. Nothing more, nothing less.

0:01 — 2 min: Where Claude is right now

Pull up the "3 PILLARS" section on index.html.
Anthropic, the company behind Claude, has shipped three big things in the last two months. One, Claude Code Desktop — you run a dev agent straight from an app tab, no terminal. Two, Claude Design — you talk to it, it produces pitch decks and one-pagers. Figma's stock dropped 7% on launch day. Three, Claude Cowork — an autonomous agent that actually operates on your desktop. We'll hit all three today, in that order.
Q: How's this different from ChatGPT?
A: Today isn't "which is better". It's "where does Claude reach places ChatGPT doesn't". Keep using ChatGPT. Add Claude on top. More on that in Claude vs ChatGPT vs Gemini (JP deep dive).

0:03 — 1 min: How today runs

I'll drive from my laptop. You follow along on yours. If something doesn't work on your machine, don't try to fix it mid-session — just jot it down, I'll follow up afterwards one-on-one.

0:04 — 1 min: Quick prep check

Check with each exec: (1) Claude Desktop open? (2) Max plan logged in? (3) Wi-Fi OK?
All set? Good. The next 40 minutes are the main course. We're going to make Claude Code Desktop build a MIXI executive tool from scratch.

0:05 – 0:45 PART 1 — Build AI Executive from zero (40 min · main event)

The section with the most time. The moment you're after: executives watching an app get built in front of them. Five beats — hand over spec, approve the plan, configure keys, run it, add a feature.

0:05 – 0:07 (1) Tour of the Code tab (2 min)

Open Claude Desktop. Click the Code tab. Don't do anything yet.
This is the Code tab inside Claude Desktop. Big redesign shipped GA on April 14, 2026. Left panel is Parallel Sessions — you can have several tasks running at once. The middle is empty right now. That's where the chat, file edits, and diff review all land. Integrated terminal is at the bottom, preview pane on the right. Works the same on Mac and Windows.
Point at the "+" on Parallel Sessions, the Plan mode toggle, the Model setting, and the terminal toggle (Cmd+`). Three seconds each.
You don't have to open a terminal ever again. Today, this is the only place we work.

0:07 – 0:12 (2) Hand over the spec, get the plan (5 min)

In Code tab, create a new workspace at ~/workshop-demo/ai-executive/. Copy P1-01 from Driver View, paste into Code. Turn Plan mode ON. Send.
I just handed Claude the spec for our AI Executive tool. Three agents — CEO, CFO, CTO — evaluating the same agenda in parallel, each from its own angle. Plan mode is on, which means: don't start writing files yet. Show me the plan first.
Wait 5–10 seconds. Read the plan aloud for about 15 seconds.
Look at this. Eight files, around 200 lines of code. One app.py on the backend, one index.html on the frontend, the rest is config and Docker. That's the whole thing. Approving.
Approve the plan. Generation runs. Scroll the Visual Diff briefly so the room sees the files landing.
Q: How does it build this accurately from plain English?
A: Two reasons. One, the spec is structured, and Plan mode forces an explicit plan before any file is written. Two, Opus 4.7 can hold the whole 1M-token context and keep internal consistency across files. More in the Code Desktop deep dive (JP).
If no plan comes back, or the model says "Plan mode not supported" — pick Opus 4.7 from the model selector in the top right and resend.

0:12 – 0:20 (3) 🔑 API key configuration lesson (8 min · the heart of Part 1)

This is the most important eight minutes of the day. AI tools need API keys to run. And if you mishandle those keys, the damage is real — audit violations, leaks, surprise bills. But we're not going to do the old engineer move where you SSH in and edit a dotfile. We're making the app itself expose a settings page, and entering the keys through the browser. Same flow on Windows or Mac.
Copy P1-02 from Driver View, paste into Code tab, send.
Claude creates app/settings.py (the API) and static/settings.html (the UI). Verify it doesn't ask you to paste keys into chat.
Notice something. Claude doesn't ask me "what are your key values". Because I'm going to type them into password fields in the browser in a moment. If I pasted them in chat, they could end up in a model log. They'd also be sitting in anyone's screen recording. That's principle one.
Point at the "6 Principles Quick Ref" in the right column of Driver View, or open the API Key Hygiene deep dive (JP) to the "Web UI vs dotfile" section on a second tab.
The six principles: (1) never paste in chat, (2) go through .env, (3) check .gitignore, (4) rotate on a schedule, (5) least privilege, (6) separate prod and dev keys. Today's app enforces all six through the UI. Password inputs mask the value. The app writes .env for you. It checks .gitignore. There's a rotate button. And the keys I'm using now? I issued them just for this workshop, and I'll revoke them within 24 hours of finishing. That's number six.
Open http://localhost:8000 in the browser (app is already running). First run redirects automatically to /settings?first_run=true. Paste the four pre-prepared keys into the password fields, one at a time (about 30 seconds). Password fields don't render the value on screen share, but be safe — don't focus any other field while pasting.
Click "Validate & Save". The app hits OpenAI with a test call for each key (2–3 seconds). On success, a green "saved" toast and a "Go to evaluation" button appear.
In the integrated terminal, run git check-ignore -v .env and confirm it returns .gitignore:line:.env. That's principle three, proven.
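The part of app/settings.py that matters for the lesson fits in a few lines. This is a minimal sketch, not the code Claude generates live — save_keys and its signature are illustrative — but it enforces the same three mechanics named above: keys go to .env, the file gets owner-only permissions, and .gitignore is verified before the save counts.

```python
import stat
from pathlib import Path

def save_keys(workdir: Path, keys: dict[str, str]) -> Path:
    """Write API keys to .env with chmod 600 and ensure git ignores it."""
    env_path = workdir / ".env"
    env_path.write_text("\n".join(f"{k}={v}" for k, v in keys.items()) + "\n")
    # Owner read/write only — the chmod 600 from the lesson.
    env_path.chmod(stat.S_IRUSR | stat.S_IWUSR)

    # .env must be git-ignored before we consider the save complete.
    gitignore = workdir / ".gitignore"
    ignored = gitignore.read_text().splitlines() if gitignore.exists() else []
    if ".env" not in ignored:
        with gitignore.open("a") as f:
            f.write(".env\n")
    return env_path
```

The point to make in the room: none of this relies on the facilitator remembering to do it. The app does it on every save.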
Q: Is putting it in a password field actually safe?
A: Safe in the "doesn't hit chat" sense, yes. The key still lands in .env eventually, so file permissions (chmod 600), .gitignore, and rotation all need to be handled. The app handles all three for you. So it's not "the password field is the security" — it's "human error gets designed out". That's the accurate read.
Q: Is this level of paranoia really necessary?
A: A key accidentally pushed to a public GitHub repo gets picked up by attacker scanners in under 60 seconds on average. Thousands of keys leak from public repos per year. And audit regimes increasingly want a log of "which key was rotated when". So yes.
Q: Are the keys themselves expensive?
A: The key itself costs nothing. The API calls you make through it do. If someone steals your key, they spend your money. That's the sting.
Q: Why build a settings page for an internal tool?
A: It's becoming the floor, not the ceiling. Notion, Slack, Datadog — they all manage keys through a Web UI. Your internal tools can't be worse than that anymore. Building it this way is partly about learning where the bar has moved.

0:20 – 0:30 (4) Run an agenda, watch it work (10 min)

Right after saving, the app sends us back to the evaluation screen. Now we put it through a real agenda.
Don't paste P1-03 into Code tab — read it as an operator runbook. Paste the test agenda into the browser's agenda input, hit "Evaluate".
SSE streams in: Phase 1 (10–15s) → Phase 2 (10–15s) → Phase 3 (10–15s). Cards light up in sequence. About 40–45 seconds total.
Phase 1 — the three execs are each evaluating the agenda independently, without seeing each other's take. We want the three pure perspectives first.
Phase 2 now. Each exec reads the other two and responds. This is where the tool earns its keep. The CFO calls out numbers the CEO glossed over. The CTO pushes back on the CEO's optimism with tech-debt reality. Blind spots that never appear in a solo evaluation get pulled into daylight.
Phase 3 — the Facilitator pulls it all together. Consensus points, conflicts, three executable actions, and a final call: GO, NO-GO, or conditional GO.
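If an exec asks what's actually happening behind the cards, the three phases reduce to one orchestration loop. A hedged sketch of the shape only — evaluate_agenda, Take, and ask are hypothetical names standing in for whatever the generated app.py really does:

```python
from dataclasses import dataclass

ROLES = ["CEO", "CFO", "CTO"]

@dataclass
class Take:
    role: str
    text: str

def evaluate_agenda(agenda: str, ask) -> dict:
    """Three phases: independent takes, cross-review, synthesis.
    `ask(role, prompt)` is whatever function calls the model once."""
    # Phase 1: each exec evaluates alone, without seeing the others.
    takes = [Take(r, ask(r, f"Evaluate this agenda as {r}: {agenda}"))
             for r in ROLES]

    # Phase 2: each exec reads the other two and responds.
    rebuttals = []
    for t in takes:
        others = "\n".join(o.text for o in takes if o.role != t.role)
        rebuttals.append(Take(t.role, ask(t.role, f"Respond to your peers:\n{others}")))

    # Phase 3: a facilitator agent pulls consensus, conflicts, and the call.
    summary = ask("Facilitator", "Synthesize a GO / NO-GO call from:\n"
                  + "\n".join(r.text for r in rebuttals))
    return {"phase1": takes, "phase2": rebuttals, "phase3": summary}
```

The key talking point is in the structure: Phase 1 deliberately withholds the other takes, which is why the perspectives stay pure.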
Click "Copy to clipboard" — show that the full Markdown is ready to paste into the meeting notes.
That drops straight into board minutes. Think about next week's board meeting — run this against the agenda beforehand, and 45 seconds of compute puts the likely objections on the page before you walk in.
App won't start and you're stuck → tell Claude "there's an error, fix it" (it usually self-heals). Ten minutes in and still broken → click [A] Copy pre-built backup folder in Driver View, run the commands shown. Backup lives at ~/workshop-backup/ai-executive/. Keys aren't set in the backup, but you just drop them through /settings and you're back.

0:30 – 0:37 (5) Add a feature, in plain English (7 min)

Watch this. I'm going to add a feature without calling an engineer.
Copy P1-04 from Driver View, paste into Code tab, send.
Claude finds the right files, proposes the change as a Visual Diff, waits for you to Accept.
Green is added, red is removed, and the right side shows the diff. I review it. Looks right. Accept. What that just added is a fourth agent that synthesizes a set of concrete quarterly actions. Refresh and we'll see it.
Refresh the browser. The new "Three actions for next quarter" section appears.
Q: Doesn't it worry you — it could slip in weird code?
A: Every change is reviewable in Visual Diff before you Accept. And if you turn Plan mode on first, you can see the intent before any file gets touched. The runaway risk is visibly lower here than with other AI coding tools.

0:37 – 0:40 (6) Wrap (3 min)

Three things to take away from Part 1:
(1) The Code tab in Claude Desktop is the one place you work.
(2) Keys live in .env, never in chat, always revoked after.
(3) You can modify in plain English. Visual Diff is your safety net.
ROI, rough number: if Code Desktop saves one exec two hours a week, that's 104 hours a year reclaimed. Roughly three planners' worth of research cycles.

0:40 – 0:45 Buffer / Q&A (5 min)

Any questions on Part 1?
(Wait 5 seconds. If nothing comes.) OK — Part 2.

0:45 – 1:02 PART 2 — Claude Design (17 min)

A Research Preview that launched five days ago. Two exec scenarios to feel the raw speed of it.

💡 The Part 2 slides can all be generated by Claude Design itself: the prompts (P0–P20) for the deck you use in Part 2 live in the Claude Design slide prompt pack (JP). Generate the deck in 30 minutes during the day-before rehearsal. Opening the section with "by the way, I made these slides with Claude Design" lands well.

0:45 – 0:47 (1) Where Design fits (2 min)

Open claude.ai/design in a new tab. Advance Driver View timer to PART 2 (press N).
Claude Design is a Research Preview, released April 17, 2026 — that's five days ago. Figma's stock fell 7% on launch day. Built on Opus 4.7. You talk to it, it produces design artifacts. We'll run two exec scenarios.

0:47 – 0:55 (2) New-business one-pager (8 min)

Copy P2-01 from Driver View, paste into Claude Design, run.
The generation preview appears. Takes 30–60 seconds to finish.
Generating a one-pager for "AI Executive Talent Academy", A4 portrait. Seven sections — problem, solution, target, differentiation, monetization, roadmap, team. What normally takes two days in the company is happening in under a minute.
After the first version, refine inline one or two times ("make the roadmap more concrete" etc.).
Here's the real point: this is a zero-to-0.8 accelerator. The 0.8-to-0.95 finishing pass still goes through Figma or Canva. But the time to a first draft collapses. That's what matters.
Top right, Export → PPTX. Show the download.

0:55 – 1:00 (3) Monthly review, 5 slides (5 min)

Copy P2-02 from Driver View, open a fresh Design session, paste, run.
Five slides generate in about a minute — executive summary, segment performance, highlights, concerns, focus for next month.
Use case: 10 minutes before the morning exec meeting. Plug in the numbers, generate, tweak by hand. From zero-to-ready in 10 minutes instead of 3 hours.

1:00 – 1:02 (4) Export and what it can't yet do (2 min)

Exports as PPTX, PDF, or Canva. And — it's a Research Preview; it's not finished. Japanese typography is sometimes a bit rough. But even so, a fast first draft beats no draft. More in the Design deep dive (JP).
Design not responding → log in again in another browser, or hit [B] Play pre-recorded demo in Driver View.

1:02 – 1:20 PART 3 — Claude Cowork: research + mini live (18 min)

Assume the facilitator has zero production experience with Cowork. That's fine. Bring in the research, run one live attempt. If it works, great. If it fails, it becomes the teaching moment.

1:02 – 1:04 (1) What Cowork is, in three lines (2 min)

Advance Driver View timer to PART 3. Open the Cowork tab in Claude Desktop.
Cowork went GA on February 24, 2026. It's an autonomous agent for non-engineers. It works across local files, Gmail, and Google Drive, handling multi-step tasks on its own. On OSWorld, the benchmark, it scores 72.5% — humans hit 87%. Six months ago the score was under 15%. Five times better in six months.

1:04 – 1:10 (2) Five exec use cases (6 min)

On the main display, open the Cowork deep dive (JP) and scroll to the "20 exec use cases" table.
Five I'd flag for your roles:
(1) Summarizing the board packet: 2 hours → 10 minutes
(2) Earnings Q&A drafting: 4 hours → 30 minutes
(3) Cross-document DD analysis: half a day → 1 hour
(4) Gmail + Drive weekly brief: 1 hour → automated
(5) Contract review: 3 hours → 20 minutes
All five are in production somewhere overseas already.

1:10 – 1:14 (3) MIXI IR PDF — live mini demo (4 min)

Prep: drop a published MIXI IR PDF at ~/Downloads/mixi-ir.pdf.
In the Cowork tab, copy P3-01 from Driver View and run.
Cowork reads the PDF, writes a 3-line summary to ~/Desktop/summary.md. Let the room watch.
(If it works) 30 seconds to read the whole thing and give us three lines. That's Cowork.
(If it fails) That didn't behave the way we planned. And that's also Cowork — a Research Preview, an autonomous agent, running in front of you. The real skill isn't the tool. It's knowing where to point it. That's 80% of the adoption decision.
Works either way. Success or failure, it's teaching material. Don't flinch on a failure.

1:14 – 1:18 (4) How MIXI might actually roll this out (4 min)

If you wanted to try it at MIXI, Corporate Planning, Legal, and IR are the three departments to start with.
Corporate Planning: weekly brief automation.
Legal: first-pass contract risk triage.
IR: earnings-call Q&A prep.
RBAC for exec-only access. Everything logged. Specifics in the Cowork deep dive (JP), under "MIXI rollout scenarios".

1:18 – 1:20 (5) Five adoption traps (2 min)

Five things to watch:
(1) Japan data residency: Cowork itself runs on US infrastructure. Confidential data needs a separate setup through Bedrock Tokyo.
(2) Training data: Enterprise plan + ZDR keeps your data out of training.
(3) Local permissions: least-privilege is non-negotiable.
(4) Mis-action risk: run in confirm-required mode.
(5) Cost: set a budget ceiling.
Full notes in the security guide (JP).
Q: When could we actually start at MIXI?
A: A 30-day pilot — tomorrow. Five people in Corporate Planning, Pro plan. Decide internally over the next two weeks.

1:20 – 1:30 CLOSING (10 min)

1:20 – 1:22 (1) Three things to try tomorrow (2 min)

Advance Driver View timer to CLOSING. Return to index.html.
Three things. Pick one, commit to doing it tomorrow:
(1) Code Desktop — once a week, throw something from your own job at it in plain English. Summarize a report. Research a market. Pick anything.
(2) Claude Design — for the next exec document you need to make, generate the first draft there. Stop starting from a blank page.
(3) Open the Cowork pilot conversation. Five people in Corporate Planning. 30 days.

1:22 – 1:27 (2) Q&A (5 min)

Anything that's been bugging you, anything you didn't get to ask — now's the time.
Q: Do we keep ChatGPT too?
A: Yes. Keep your existing ChatGPT investment. Add Claude for contract review, code, exec-doc drafts, and long-form processing. That's the current best answer.
Q: What about company-wide rollout?
A: Start with a few execs based on what you saw today. Measure ROI. Then expand to departments. Licensing: Team Premium at $20/user/month, plus Code and Cowork add-ons. Enterprise makes sense once you cross 50 seats.
Q: Who signs off on the security review?
A: Legal and InfoSec. I'll send you the security guide (JP) — it covers SOC 2, ISO, ZDR, and APPI.
Q: Can I be proficient in 14 days?
A: There's a 14-day learning path. 30 minutes a day. At the end, you're modifying your own tools.
Q: Can I keep the ai-executive tool we built today?
A: Yes. It stays at ~/workshop-demo/ai-executive/. Today's API keys get revoked, so to keep using it just swap in your own internal keys through /settings.

1:27 – 1:30 (3) Next steps (3 min)

What I'll send you:
(1) Today's full resource URL (mixi-exec-ai-workshop.mocco.team) — bookmark it.
(2) The 14-day learning path email. Arrives tomorrow morning.
(3) A Cowork pilot rollout proposal. Within two weeks.
That's it. Thanks for the 90 minutes.
Stop the Driver View timer. Hit "Export notes as .md" โ€” save the Q&A and observations as your operations log.

What to cut when you're running late

If the clock is against you, this is the order to trim:

# | What to cut | Saves | Risk
1 | Part 2 (3) monthly review, 5 slides | 5 min | Low — the one-pager already sells Design
2 | Part 1 buffer at 0:40–0:45 | 5 min | Low — move Q&A to CLOSING
3 | Part 1 (5) Visual Diff walkthrough detail | 2–3 min | Medium — the plain-English-modification moment still lands
4 | Part 3 (5) five adoption traps | 2 min | Medium — send in the follow-up email
5 | Part 3 (2) use cases — just show the table | 3 min | Medium — deep dive (JP) catches the rest
Never cut: the API key 6-principles lesson (0:12–0:20). That's the highest-value eight minutes in the room for executives.

2-minute insert when you have time (optional)

A bonus track if Part 1 or Part 2 finished faster than expected. 2 minutes introducing Claude's Artifacts, Projects, and Memory.

We've got a little time — one more thing worth seeing. Claude has three distinctive pieces:
Artifacts — in the Chat tab, say "make me this page in HTML", and the right panel renders a working HTML/CSS/JS preview in place. A 30-second prototype of today's slides is a fair demo.
Projects — teach Claude a reusable context (MIXI exec docs, your department's KPIs) as a "project", and you stop re-explaining yourself every session. Saves every knowledge worker five minutes a day.
Memory — Claude remembers key points across conversations. "Pick up where we left off last week" starts to actually work.
If time allows, demo Artifacts once in claude.ai Chat โ€” e.g., "build me a simple internal survey form in HTML".
Q: I want to try this too.
A: It's in Day 1 and Day 3 of the 14-day learning path. Also in the glossary under Artifacts / Projects / Memory.

Night-before rehearsal checklist

  1. Read this script aloud end to end (time it — it should run 90 minutes)
  2. Open Driver View. Confirm Space / N / B all work
  3. In Code tab, confirm Plan mode and Opus 4.7 are both selectable
  4. Issue the four workshop API keys in OpenAI console (prefix workshop-YYYYMMDD-*)
  5. Pre-build ai-executive at ~/workshop-backup/ai-executive/ as fallback. Confirm docker compose up -d starts it cleanly
  6. Place a public MIXI IR PDF at ~/Downloads/mixi-ir.pdf
  7. At the venue: confirm Claude Desktop, claude.ai, and Cowork all run on the Wi-Fi
  8. Pre-record a demo video and place at assets/fallback-demo.mp4 (for Cowork failure)
  9. HDMI to the external display, screen-share quality check
  10. Notifications off — Slack, email, personal apps, all of them

People-logistics Q&A (anticipate)

Typical operational scenarios during a 90-minute session, and how to handle them without losing your nerve.

Q: An exec shows up 10 minutes late.
A: OPENING is short. When they walk in, give a 30-second summary and bring them in at the top of Part 1. Part 1 is self-contained, so the damage is limited. Also DM them the index.html URL on the way so they can skim during their commute.
Q: An exec has to leave around 1:00 for an emergency.
A: Part 1 + Part 2 (0:05–1:02) is the core. Part 3 Cowork works as self-study through the learning pages. As they step out, say "don't worry about Cowork, I'll send it over". Heavy follow-up in the T+1 email.
Q: Someone asks for a break mid-session.
A: There's a natural break point at 0:45 (Part 1 → Part 2). 3 minutes for water works. 90 minutes straight is the default, but don't force it. Best if you ask upfront: "do you want a short break around the 45-minute mark?"
Q: What about recording / screen capture?
A: Recommend no recording. Three reasons: (1) the API key section has password fields that still risk exposure, (2) execs get more cautious when they know every word is on tape — the conversation flattens, (3) raw recordings sent internally can burn social trust in ways that can't be edited out. If a record is genuinely needed, use the Claude Code Markdown report and facilitator notes.
Q: API cost and budget worry.
A: $0.01–0.02 per agenda on gpt-4o-mini. $0.25–0.50 on gpt-4o. Set OpenAI Console → Usage limits → Hard limit at $50/month. No matter what, spending stops at $50. Even running 100 agendas stays under $2. Set a $30 Monthly Budget Alert and nothing sneaks up on you.
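The arithmetic behind that answer can be shown rather than asserted. A quick sanity check using the prices quoted above — monthly_cost is an illustrative helper for the room, not part of the app:

```python
def monthly_cost(agendas: int, cost_per_agenda: float, hard_limit: float = 50.0) -> float:
    """Spend for a month of runs, capped by the console's hard limit."""
    return min(agendas * cost_per_agenda, hard_limit)

# 100 agendas on gpt-4o-mini at the upper quoted price: stays under $2.
assert monthly_cost(100, 0.02) == 2.0
# A runaway loop on gpt-4o still stops at the $50 cap.
assert monthly_cost(10_000, 0.50) == 50.0
```

The second assertion is the one to say out loud: even the worst case is bounded by the limit you set, not by how the demo behaves.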
Q: What if an exec turns up without being logged into Claude Desktop?
A: It's on the T-1 checklist, but it can still happen morning-of. Login from a Max subscription takes 2–3 minutes, so fold it into the first 3 minutes of OPENING — do it as a group, confirm everyone's logged in. Push the Part 1 start to 0:08. It's a fair trade.
Q: Venue Wi-Fi is flaky.
A: Three cases:
(1) Slow but alive (under 10 Mbps): expect 10–20 second latency, fill the gaps with commentary.
(2) Drops and reconnects: switch to your iPhone tethering (check your data plan beforehand).
(3) Completely down: hit [B] in Driver View, switch to the pre-recorded video. Say "we're going to offline mode for the rest" without over-apologizing, and narrate over the video. The deep dive (JP) pages carry the study material either way — the learning value mostly survives.
Q: If you keep hitting Approve, don't you eventually just rubber-stamp it?
A: Yes, and you're right to worry. "Approval fatigue" is real — people who check everything start accepting without reading after 50 rounds. Don't solve it with willpower. Solve it with structure. Read Plan mode carefully once at the start, so the per-file approvals are reduced to "does this match what we planned". Configure settings.json so read-only is auto and writes are ask. The number of approvals drops, which is the only sustainable fix. More in the security guide (JP): "approval fatigue".
Q: Cowork is great. But how far should we automate?
A: This is the hardest question in the room. Useful frame: Trust Levels. Classify every action as Watched (human in the loop always), Batched (review the output in chunks), or Autonomous (trust it unsupervised). Sending email = Watched. Reading a file = Autonomous. Decide per action type, not per tool. Second test: "what happens if this fires wrong 100 times in a row?" — blast radius thinking. Don't automate payments. Good line to hold. Policy template in the Cowork deep dive (JP): "automation scope framework".
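If an exec wants the frame in writing, Trust Levels reduce to a small lookup table. A sketch with illustrative action names only; every org fills in its own entries:

```python
from enum import Enum

class Trust(Enum):
    WATCHED = "human approves every action"
    BATCHED = "human reviews output in chunks"
    AUTONOMOUS = "trusted unsupervised"

# Decided per action type, not per tool. Hypothetical example entries.
POLICY = {
    "read_file": Trust.AUTONOMOUS,
    "draft_summary": Trust.BATCHED,
    "send_email": Trust.WATCHED,
    "make_payment": None,  # not automated at any trust level
}

def allowed(action: str) -> bool:
    """An action may run only if it has an assigned trust level."""
    return POLICY.get(action) is not None
```

The None entry is the "don't automate payments" line made mechanical: some actions simply have no trust level at all.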

Five unexpected failure patterns

Symptom | Fix
Claude Desktop got logged out | Log back in. Verify auth state beforehand.
Can't select Opus 4.7 | Recheck the Max plan. Sonnet 4.6 works as a substitute (also 1M ctx).
Docker port 8000 collision | Change docker-compose.yml ports to 8001:8000, use http://localhost:8001.
Wi-Fi unstable | Switch to phone tethering. Test in advance.
Exec drops an unexpectedly deep technical question | "That's in the deep dive — I'll send it over afterwards." Close the loop politely.
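For reference, the port-collision fix above is a one-line change in docker-compose.yml. The service name is illustrative; your generated file may name it differently:

```yaml
services:
  app:
    ports:
      - "8001:8000"   # host 8001 -> container 8000; open http://localhost:8001
```

After editing, rerun docker compose up -d so the new mapping takes effect.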

Facilitator self-check (right after)

  1. Export Session Notes from Driver View (Markdown)
  2. Immediately revoke the four workshop API keys
  3. Append any issues or improvements you noticed to Session Notes → feeds the next revision
  4. Within 24 hours: thank-you email to the execs + resource URL + 14-day learning path
  5. Within 2 weeks: Cowork pilot proposal goes to the exec meeting
Last verified: 2026-04-22