The Craft Doesn't Leave You

One of the quieter struggles of leading engineering teams is that the time to practice the craft shrinks. You're in meetings. You're reviewing designs. You're hiring, mentoring, unblocking. When you get home, there's family. The long, uninterrupted sessions you had earlier in your career get harder to come by.

But reading about engineering and actually practicing it are two very different things. I make the time because I think it's essential. Not because leaders need to prove they can code, but because understanding how things are actually built helps you guide engineers and make better decisions. When you've felt the friction of a bad abstraction or debugged a race condition yourself, you lead differently than someone reading about it in a status update.

Jumping Into Unfamiliar Territory

Recently I took on a project that required building a feature to analyze patterns in how children were interacting with AI. The data was complex, the stakes were high, and the tech stack was entirely new to me. I had never used Databricks. I had never written PySpark or Spark SQL. It was unfamiliar territory across the board.

I built it anyway.

What became clear very quickly was the power of tools like Cursor. I didn't know the syntax of the platform, but I understood how to architect systems and how to build things step by step. That distinction turned out to matter a lot.

What Transfers and What Doesn't

Strong engineering foundations transfer across tools and languages. Syntax does not.

Just because I didn't know PySpark didn't mean I didn't know how to construct the solution. I approached it the way I would approach any system: break the problem down, prototype pieces, test assumptions, iterate. The language was new. The thinking was not.

There was a moment early on that humbled me. I saw PySpark code that looked something like this:

df = spark.sql("SELECT * FROM usage_data")
filtered = df.filter(df.age < 18)

My SQL brain lit up. I got on my high horse immediately. I've been writing queries for 20 years, and I know better than this. Why would you select everything and then filter? Just put the WHERE clause in the query. This is basic stuff.

Cursor put me right back in my place. That's not how PySpark works. The lazy evaluation model means those operations get optimized together at execution time. Spark builds an execution plan and handles the filtering efficiently regardless of where you write it. My instinct was right in SQL and completely wrong here.

The lesson was clear: confidence in your principles is good. But showing up thinking you know better than the platform? That's a great way to learn you don't.

This is where I see some junior engineers struggle. There's a belief that engineering skill is tied to knowing a specific language or framework. But real engineering is about structuring problems and building systems that work reliably. If you can do that, you can learn any stack. The tools just accelerate how quickly you get there.

The Human Still Drives

Here's the thing about AI coding tools that gets lost in the hype: they don't make decisions for you. They accelerate execution. The human still drives.

Throughout this project, I was the one telling Cursor to use the newer chat response API instead of the legacy completions endpoint. I told it to use the SDK instead of raw HTTP calls. I made the call to introduce multithreading for performance. I chose the validation library for schema enforcement.
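To make the multithreading decision concrete: LLM API calls are I/O-bound, so a thread pool overlaps their network latency even under Python's GIL. This is a hedged sketch of the shape of that decision, not the project's real code. `score_transcript` is a hypothetical stand-in for the actual SDK call.

```python
# Illustrative sketch: fan I/O-bound API calls out across a thread pool.
# score_transcript is a placeholder for a real SDK call; the names are
# assumptions, not the project's actual code.
from concurrent.futures import ThreadPoolExecutor

def score_transcript(transcript: str) -> dict:
    # In the real system this would call the chat response API via the SDK.
    return {"transcript": transcript, "flagged": "risk" in transcript}

def score_all(transcripts: list[str], max_workers: int = 8) -> list[dict]:
    # pool.map preserves input order, which keeps downstream joins simple.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(score_transcript, transcripts))

results = score_all(["hello there", "possible risk phrase"])
```

The choice of a thread pool over multiprocessing is itself an architecture decision: the work is waiting on the network, not the CPU, so threads are the cheaper fit.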

These aren't syntax decisions. These are architecture decisions.

And they require someone who understands systems, stays current with the landscape, and knows what "good" looks like.

Staying up to date with AI tooling and the platforms around it is its own skill now. We're in a stage where things change fast, and the gap between someone who's keeping up and someone who isn't shows up quickly in the quality of what gets built.

Prototype Fast, Then Rebuild Right

My process ended up following a pattern I'd recommend to anyone working with AI tools:

First, build a fast prototype. Deliberately ignore most engineering principles. Move quickly just to prove the idea works.

Once it does, treat that prototype as a blueprint. Not as production code. As a proof of concept that tells you the shape of the real system.

From there, write a plan for how the system should actually be structured. Read the plan. Refine the plan. Then build it properly with real engineering rigor.

That sequencing was extremely effective. The prototype gave me confidence in the approach. The rebuild gave me confidence in the system.

One thing worth noting: a prototype proving the idea works is not the same as the idea being shippable. It's tempting to see a working demo and assume it's ready. It's not. The prototype shows you the shape of the problem. The real build is where you solve it properly.

Tests as Thinking Tools

I'm not a "write tests first" person. I never have been. But one of the more surprising realizations during this work was how valuable tests became when AI is generating code quickly.

When code is being written fast and you're not authoring every line yourself, the tests become your sanity check. Not just for whether the code runs, but for whether the inputs and outputs of each function actually match what you intended.

Reading the tests became a faster way to verify intent than reading all the code.

In a stack I'd never worked in, that mattered even more. The tests forced me to think clearly about what the system was actually supposed to do. They were documentation for intent in a codebase that was growing faster than I could review line by line.
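A small example of what that looked like in spirit. `bucket_age` is an illustrative helper, not the project's real code; the point is that each assertion states an input/output contract you can read at a glance, which is faster to audit than the implementation.

```python
# Illustrative only: a tiny function plus a test that documents its intent.
def bucket_age(age: int) -> str:
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return "child"
    if age < 18:
        return "teen"
    return "adult"

def test_bucket_age_states_the_contract():
    # Reading these five lines tells you the intended boundaries faster
    # than reading the function body, especially when AI wrote the body.
    assert bucket_age(0) == "child"
    assert bucket_age(12) == "child"
    assert bucket_age(13) == "teen"
    assert bucket_age(17) == "teen"
    assert bucket_age(18) == "adult"

test_bucket_age_states_the_contract()
```

When the AI regenerates a function, the test is the contract that survives the rewrite.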

The Cross-Functional Reality

This project also required coordination across product, engineering, data science, data engineering, and clinical teams. From the engineering side, we were thinking about efficiency, performance, and cost. From the clinical side, the focus was on making sure the reasoning and conclusions were sound.

Working with large language models adds a layer of responsibility. You can't just assume outputs are correct. We iterated on prompts, reviewed outputs carefully, and evaluated results against human-labeled data to build confidence in the system. Throughout all of that, I was still responsible for the full scope of leadership work: reviewing designs, coordinating across teams, and making sure the feature came together correctly.
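The evaluation loop itself can be simple. This is a minimal sketch under assumed labels, not the project's actual rubric: compare model outputs against human-labeled data and report raw agreement as a first confidence signal.

```python
# Illustrative sketch: measure agreement between model outputs and human
# labels. The label values and data here are assumptions for the example.
def agreement(model_labels: list[str], human_labels: list[str]) -> float:
    assert len(model_labels) == len(human_labels), "label lists must align"
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(human_labels)

model = ["safe", "flag", "safe", "flag"]
human = ["safe", "flag", "flag", "flag"]
rate = agreement(model, human)  # 3 of 4 agree -> 0.75
```

Raw agreement is a starting point; in practice you also want to know where the disagreements cluster, because that's where prompt iteration pays off.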

The Craft Is Changing

What stood out to me afterward was that staying connected to the craft still matters, but the craft itself is evolving.

You don't need to write every line of code anymore. What matters is understanding how to structure problems, reason about systems, and use tools effectively. Anyone who claims they aren't using AI tools today is either hiding it, avoiding it, or putting themselves at a disadvantage.

I've spent years building teams, scaling systems, and shipping products. I've done it multiple times at different stages of growth. But I've never stopped wanting to build things myself. The best part of where we are right now is that the tools have caught up to the ambition. A leader who understands architecture, knows how to hire and grow a team, and can contribute real code from day one?

That's not a contradiction. That's the job.

Strong engineering principles matter more than ever. And the leaders who still practice the craft, even when their calendar says they shouldn't, are the ones who will build the best teams and the best products.