The AI Sabbatical
"In the beginner's mind there are many possibilities, but in the expert's there are few." — Shunryū Suzuki
Here’s a pattern I keep seeing, in myself and on my team and in conversations I have with other people trying to bring AI into their organizations.
Someone gets excited about a new tool. They read about it, watch a demo, maybe install it. Then they sit down at their desk on a Monday morning with a Jira ticket and think, should I try doing this with the new thing?
Almost every time, the answer is no. The ticket has a deadline. The code has tests that need to pass. There are stakeholders waiting. The familiar path is faster and safer today, even if the new path might be much faster in a month. So they do the work the way they always have, and the new tool stays in the tab they never open.
Multiply that across every person on every team, every week, and you get an organization that has “adopted” AI on paper while almost no one is actually using it.
How I did this for myself
I know this pattern well because I lived it.
A year and a half ago I left a job running a game studio. Between that job and the next one, I had a stretch of time without a team to lead, without a roadmap to hit, and without anyone waiting on my deliverables. I spent almost all of it getting up to speed with the state of the art in LLMs and AI-driven development. It was one of the most energizing stretches of my career. I read papers, built things, tried every tool I could get my hands on, and followed my curiosity wherever it led. Ideas compounded on each other. I built a personal website, a portal for vibe coders, a multiplayer game, a tool for generating animated 3D avatars, and an entirely new rendering engine for hybrid 2D/3D scenes. I could feel whole categories of work getting easier in my head in a way I hadn’t felt since I was first learning to program.
Looking back on it now, nothing about that stretch would have been possible inside my old job. The kind of exploration I was doing didn’t look anything like the output a studio head is supposed to produce. It was open-ended, unpredictable, and hard to justify on any normal dashboard. The free time without consequences was the key ingredient. Everything I can do now with these tools traces back to that window, and so does the shift in my own career, from running a game studio to working on the forefront of AI-driven game development.
I didn’t call it a sabbatical at the time, but that’s what it was. A deliberate stretch of time set aside for learning, with no other deliverable attached to it. It worked, and it transformed my career. The thing I want to argue in this piece is that you shouldn’t have to quit your job to get that window. Your employer should be the one handing it to you.
Training isn’t the bottleneck
The default response to this is training. Courses, lunch-and-learns, internal certification programs, microlearning curricula. Some of it works. Slack ran an experiment where ten minutes a day of AI microlearning raised participants’ comfort level with AI tools from 43% to 72%, and programs like that have their place.
But they’re not the bottleneck.
The bottleneck is that learning a new tool requires making mistakes, and making mistakes at work is expensive. This isn’t a new observation. Amy Edmondson has been writing about psychological safety for twenty years, and the short version is that teams only learn when people feel safe enough to try things that might not work. AI adoption is a learning problem, and learning requires looking dumb in public for a while before you look smart.
Most workplaces aren’t structured to make that feel safe. If I spend three hours trying to get Claude to refactor a module and the output is worse than what I would’ve written myself, that’s three hours of unproductive time on someone’s dashboard. If I break the build because I trusted an agent’s fix without reading it carefully (which, uh, I have done), that’s a visible, embarrassing failure with real consequences.
So what actually happens is one of two things. People use AI tools in secret and don’t share what they learn, or they avoid the tools entirely under the cover of “being careful.” Neither builds a team that can actually use this stuff.
What I mean by an AI sabbatical
An AI sabbatical is a sanctioned, time-boxed break from your normal work duties for the express purpose of learning to use AI. Not a training course. A stretch of time, a week or two or a month depending on what you can afford, during which the expectation is that you’re not shipping your regular work. You’re experimenting. You’re building throwaway things, breaking stuff, and trying the tool on problems that don’t matter.
I chose the word “sabbatical” on purpose. A sabbatical isn’t a vacation. You’re still working, sometimes intensely, but you’ve been released from the ordinary performance frame. Academics take them to write books that might not pan out. Engineers sometimes take them to explore a technology their team will need later. The shared idea is that stepping back from producing is what lets you become someone who can produce differently.
The Zen tradition has a word for this: shoshin, beginner's mind. A sabbatical is a way of deliberately putting yourself back in that seat, where you don't know what the tool is good at, and that not-knowing is what makes you willing to try everything.
That’s the shape of the AI learning problem. You can’t learn to delegate to an agent while you’re under pressure to deliver, or learn what Claude is actually good at while you’re afraid of what happens if it’s bad at it, or build taste about when to trust model output when every interaction carries real consequences. You need low-stakes reps, and you need a lot of them.
The neighboring ideas
A few adjacent practices are already out there:
AI hackathons. A team gets two days to build something with AI. These work fine for what they are. STIM, a Swedish music licensing company, ran a two-day hackathon that produced three working AI systems, one of which went to production. But two days is a hackathon, not a sabbatical. You learn a specific thing; you don’t rewire your instincts.
“Friday AI playtime” and curiosity sessions. Some teams are pushing this: carve out a few hours a week for people to mess around with AI tools. It’s directionally right, but a few hours a week is a trickle. For most people, it isn’t enough to escape the gravity of their real work.
Microlearning. Ten minutes a day, consistent over weeks. Measurably effective at moving comfort levels. But comfort is not fluency, and fluency is what we actually need.
20% time. Google’s old idea. Still practiced here and there. Same direction, but 20% time usually ends up funneled into a side project with its own deliverables, so you’re still performing, just on a different stage.
An AI sabbatical is the biggest-dose version of this family. The shift we’re asking people to make isn’t a ten-minutes-a-day shift; it’s a rewiring of how they approach work, and rewiring takes time and permission to be unproductive.
What makes one actually work
A few things have to be true or it doesn’t work.
It has to be explicitly sanctioned. Not “if you have spare cycles, poke at this.” Sanctioned means your manager knows, your team knows, your quarterly goals reflect it, and nobody is silently judging your commit graph. If you can’t tell your skip-level you spent last Thursday trying to get Claude to build a Chrome extension for your cat, it isn’t really a sabbatical.
It has to be long enough for you to screw up, understand what happened, and take another swing at it. You need to do that loop a few times before anything really sinks in. My rough number is about a week minimum. The first two or three days you’re still thinking like someone trying to be efficient. Around day four you start taking bigger swings, building things you’d never greenlight for work, trying agent patterns that feel overly ambitious, letting the model drive for a while to see what happens. That’s when the learning actually kicks in.
It has to be oriented around projects you picked, not a curriculum someone else wrote. The whole point is to build things that don’t matter so you develop judgment about what does. Pick something you’re personally curious about. Build a silly app, rewrite an old side project, try the tool on a corner of your life it was never meant for. Build a game! Games are great for this because they have clear rules, immediate feedback, and nobody dies if the AI screws up. You’ll probably throw most of what you build away, which is fine, because what you’re really building underneath all of it is judgment about what these tools can and can’t do.
When you come back, you come back with something that’s harder to measure than a deliverable. You have a feel for the tool. You know when to reach for it, when not to, what a bad prompt looks like, when to trust the output versus verify it line by line. That’s the thing that makes the daily application stick.
When a whole team does it together
Everything I’ve said so far works for an individual, and the payoff is real. One person comes back with sharper instincts and starts using the tools more effectively, and over time some of that diffuses to the people around them.
But something different happens when an entire team takes the sabbatical together.
I’ve watched this play out a few times now, and the pattern is consistent. You take a team of five or six engineers, clear their calendars for a week, tell them the week is for learning, and something catalytic happens in the group that doesn’t happen in isolation. People see each other’s experiments. Someone figures out a prompting pattern in the morning, shares it in Slack at lunch, and by the afternoon three other people have tried it on their own problems. Somebody else’s dead end saves you from hitting it yourself; somebody else’s breakthrough becomes the starting point for your next attempt. The learning compounds inside the group in a way it just doesn’t when one person is off doing it alone.
The productivity shift on the other side is the part that surprised me the first time I saw it. It isn’t a linear improvement. The team doesn’t come back 10% faster or 20% faster. They come back working differently. They reach for agents on tasks they wouldn’t have considered automatable two weeks earlier, build small tools for themselves instead of filing tickets, and start having conversations about architecture that begin with “what if we let the agent own this part” instead of “who’s going to write this part.” The ceiling on what the team thinks is possible has moved, and that’s the thing that cascades into real throughput gains over the following months.
Why organizations should pay for this
The standard enterprise approach to AI adoption is to buy licenses, roll out training, and measure seat utilization. Six months later, leadership wonders why the productivity numbers haven’t moved.
The answer is structural. You’ve given people a tool and told them to use it in the same environment where using it badly is penalized and using it well requires skills they haven’t built yet. Of course adoption stalls.
A sabbatical changes the structure. For that week, nobody is expected to ship anything to production. The whole point is to learn. A week of an engineer’s time is cheap compared to the delta between an engineer who knows how to work with AI agents and one who doesn’t.
The usual advice for adopting a new tool is “just use it on your real work.” For most tools in most eras, that was fine. The learning curve was gentle enough that you could pay it down inside your normal job.
AI tools aren’t like that. Using them well means changing how you approach the work itself, not just learning a new interface. You have to figure out which parts of a problem to hand off, and how much of what comes back is worth keeping. That’s a different kind of learning, and it doesn’t happen under deadline pressure. You need room to be a beginner again.
The real payoff shows up on the Monday after. You sit down at your desk, look at the same Jira ticket that would have taken three days, and you already know where the agent will save you a day and where it will trip you up. You know which prompts to start with, when to let it run, when to drive it yourself. The hesitation is gone. The ticket ships faster, the next one ships faster still, and the tools you didn’t have time to get comfortable with three weeks ago are now just part of how you work.