Give them the tools and they will build it. That's the problem.
I Wish I Had Claude Code at Work
I keep hitting moments at work where I catch myself thinking: Claude Code would solve this in minutes.
A data analysis problem that would take me an hour to untangle in a spreadsheet. A quick tool to help work through a third-party risk assessment. A way to talk through a problem out loud and have something useful come back. I know what Claude Code could do because I use it outside of work. The productivity gain is real.
But I cannot just install it on my work laptop. Not because the technology is not ready. Because I have not answered my own questions about how to do it safely.
And I suspect I am not alone in this.
The Questions I Keep Asking Myself
Every time I think about enabling Claude Code in an enterprise setting, I come back to the same set of concerns.
What happens to the data? Claude Code could read, process and generate content based on what it sees. That is the whole point. But without controls, data could leak outside the organisation or be exposed to people internally who should not see it. And what about the code it helps write? That is the organisation's intellectual property. Where does it go? Who owns it?
Who is accountable when it makes a bad call? Claude Code could analyse data, identify patterns and make recommendations. But should it approve access requests? Flag transactions as fraudulent? Prioritise incidents? The moment you let AI make decisions without human oversight, you have transferred accountability to something that cannot be held accountable. And when it goes wrong (not if, when), you are the one explaining to the board what happened.
How do I prove we are in control? If a regulator asks how we are governing AI tools, I need an answer. That means audit trails. Logging. Evidence that humans stayed in the loop for anything consequential.
Where My Thinking Has Landed
I do not have a finished framework. But I keep coming back to three pillars.
Data and IP protection. Treat Claude Code like any other data processor. Clear policies on what it can access, where outputs go, who owns what. If you would not let someone email that data to themselves, you should not let them paste it into an AI tool without controls.
Human oversight on consequential decisions. AI could inform. AI could recommend. AI should not decide. The governance needs bright lines around what requires human sign-off. I think of this as the automation boundary: where does helpful automation end and dangerous delegation begin?
Ethical guardrails and audit trails. Acceptable use policies. Training. Mechanisms to flag misuse. And logging everything, because if something goes wrong in 18 months, you will wish you had the evidence.
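To make that boundary concrete, here is a rough sketch of the pattern in Python. It is purely illustrative: the function names, the action list and the log format are all invented for this example, and none of it is a Claude Code feature. The idea is simply that the AI's output gets recorded, and anything consequential stops until a named human signs off.

```python
# Purely illustrative: none of these names come from Claude Code or any
# Anthropic API. The AI may inform and recommend, but consequential actions
# need a named human, and every step leaves an audit record.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")   # append-only evidence trail

# Hypothetical set of actions that sit past the automation boundary.
CONSEQUENTIAL_ACTIONS = {"approve_access", "flag_fraud", "prioritise_incident"}

def record(event: dict) -> None:
    """Append an audit record, so the evidence exists if questions come later."""
    event["timestamp"] = time.time()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def handle_recommendation(action: str, recommendation: str, reviewer: str | None = None) -> str:
    """Accept AI recommendations, but block consequential actions without human sign-off."""
    record({"action": action, "recommendation": recommendation, "reviewer": reviewer})
    if action in CONSEQUENTIAL_ACTIONS and reviewer is None:
        return "blocked: requires human sign-off"
    return "recorded"

# The AI can suggest, but nobody gets access until a human approves.
print(handle_recommendation("approve_access", "grant read access to finance share"))
print(handle_recommendation("approve_access", "grant read access to finance share", reviewer="j.smith"))
```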
One Thing I Find Interesting
Claude Code supports a CLAUDE.md file that defines rules and constraints for how it operates. It is essentially policy-as-code for AI behaviour. You could create an organisational baseline that every project must use. Prohibited actions. Required behaviours. Security requirements baked in.
It is not a technical control (users could ignore it or modify it), but it is a new governance primitive. Version control it. Audit it. Treat it like infrastructure.
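For illustration only, an organisational baseline might look something like the following. The specific rules are mine, not anything Anthropic publishes, and as above the file guides behaviour rather than enforcing it.

```markdown
# Organisational baseline (include in every repository)

## Prohibited actions
- Do not read, summarise, or transmit files containing customer data, credentials, or keys.
- Do not call services outside the approved internal allowlist.
- Do not commit, push, or deploy without explicit human confirmation.

## Required behaviours
- State which files and data informed each recommendation so a reviewer can verify it.
- Flag any output that may contain personal data before it is shared.

## Security requirements
- Treat all repository content as company intellectual property.
- Escalate anything that looks like a consequential decision to a human owner.
```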
The Bigger Question
The governance question is not whether to adopt Claude Code. Organisations will adopt it, one way or another. The question is whether security leaders are ready to enable it safely, or whether we will be reacting to shadow AI the same way we reacted to shadow IT when applications first moved to the cloud.
I would rather be ahead of this one.
How are others thinking about it?