The Joy of Not Coding (Part 1): Fast Apps, False Starts, and Finding My AI Teammate

It had been a while since I'd written any code.

A common, wistful thought from those who love to code, but also love leading teams and building products. Sure, we get our rays of sunshine: sitting in on design reviews, jumping into incidents, anticipating thorny issues, nudging productivity along... but coding always takes a backseat.

Something unexpected has started to bring my two worlds together: AI coding assistants.

I've been curious to try them out and, more so, to get first-hand experience of where they shine, where they don't, where be dragons, and what might shape the new age of development. Just like nobody likes an ivory-tower architect, nobody likes one who simply says, “Just use AI and ship faster!”
Time to try it myself.

This post reflects what I call Stage One: hands-on exploration, with only the teensiest bias that experience must count for something.

Enter AI Assistant

I started in a fairly lazy fashion:

“Hey AI, build me a Spring Boot application that uses my song graph to provide music recommendations. Use Spring Data Neo4j.”

It wrote code really fast. The structure looked standard. The node entities and their relationships were all there. On the surface, it was impressive.

Then I asked it to expose a few REST endpoints so I could test the graph interactions. Controllers, services: all present. So far, so good.
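To give a flavour, the controller was along these lines (a sketch from memory rather than the generated code itself; the endpoint paths and the SongService methods are illustrative, not the assistant's exact output):

    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RequestMapping("/songs")
    public class SongController {

        private final SongService songService;

        public SongController(SongService songService) {
            this.songService = songService;
        }

        // Look up a single song by its id.
        @GetMapping("/{id}")
        public Song findById(@PathVariable String id) {
            return songService.findById(id);
        }

        // Recommendations derived from the song graph.
        @GetMapping("/{id}/recommendations")
        public List<Song> recommendations(@PathVariable String id) {
            return songService.recommendationsFor(id);
        }
    }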

But when I ran the application, things began to unravel.

Convincing, but not really

The app wouldn’t start. Neo4j was up and the connection properties were fine – they pointed to my Neo4j AuraDB instance. But the assistant went into a tailspin. Unable to connect, it decided I must want to run Neo4j in Docker. Before I could say stop, it went off and outfitted my project with docker-compose files and accessories. The real fix? A one-line correction in the configuration bean. That took me under a minute. Cleaning up the mess (manually) took much longer, and by then, I’d already paid the usage bill.
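For context, the kind of configuration in play looks roughly like this (a minimal sketch, not my actual one-line fix; the URI is a placeholder, and a bean like this is only one way to wire the driver, since Spring Boot's spring.neo4j.* properties cover the same ground):

    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class Neo4jConfig {

        // AuraDB needs the secure neo4j+s:// scheme; getting a detail like this
        // wrong in one line is enough to stop the application from starting.
        @Bean
        public Driver neo4jDriver() {
            return GraphDatabase.driver(
                    "neo4j+s://<your-instance>.databases.neo4j.io",
                    AuthTokens.basic("neo4j", System.getenv("NEO4J_PASSWORD")));
        }
    }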

Once the app was finally running, I began testing it and was puzzled by the apparent correlation between song IDs and Spotify URLs. After reading the code (cluttered with pointless JavaDoc), I found that it had invented the idea that the song ID was derived from the Spotify URL, or automatically inserted into it. Nonsense. My existing song graph had no such assumptions. Fixing it involved multiple cycles of tests, edits, more tests and, you guessed it, even more JavaDoc.
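For reference, the shape my graph actually expects is closer to this (a simplified sketch; Artist and the property and relationship names are illustrative stand-ins, not my real model):

    import java.util.Set;
    import org.springframework.data.neo4j.core.schema.Id;
    import org.springframework.data.neo4j.core.schema.Node;
    import org.springframework.data.neo4j.core.schema.Relationship;

    @Node("Song")
    public class Song {

        // The id is a value in its own right; it is neither derived from
        // nor embedded in the Spotify URL.
        @Id
        private String id;

        private String title;

        // Stored alongside the id, but the two are unrelated.
        private String spotifyUrl;

        // Relationships like this are what the assistant was asked to build on.
        @Relationship(type = "PERFORMED_BY")
        private Set<Artist> artists;

        // getters and setters omitted for brevity
    }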

Then came a case-sensitivity bug. Looking into the repository class, I found that it had written Cypher queries for all the findBys. Because I'm quite comfortable with Spring Data Neo4j, I knew those Cypher statements didn't need to be there. Had I been unfamiliar with the framework, I might've just tweaked the Cypher and moved on, unknowingly baking in a maintenance headache: keeping the queries in sync with every change to my entities.
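This is what I mean, reusing the Song sketch above: the method name alone is enough for Spring Data Neo4j to derive the query, case-insensitive matching included, with no hand-written Cypher to keep in sync:

    import java.util.List;
    import org.springframework.data.neo4j.repository.Neo4jRepository;

    public interface SongRepository extends Neo4jRepository<Song, String> {

        // Derived from the method names; no @Query or Cypher needed,
        // and the queries follow the entity as it evolves.
        List<Song> findByTitleIgnoreCase(String title);

        List<Song> findByTitleContainingIgnoreCase(String fragment);
    }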

What I had was a teammate who looked senior, sounded confident, and was subtly (but dangerously) wrong. It reminded me of my first Graphish post.

Slowing down with specs

By now it was clear: one does not simply build an application by issuing vague instructions to an AI assistant between meetings.

Around this time, I signed up for a free trial of Kiro. The idea of reviewing a spec was appealing. Now wiser, I started small: one entity at a time, with clear instructions about how much JavaDoc was actually useful. I reviewed the code at each step, skimmed the tests (which looked good), and gradually moved faster. Over time, I slipped into LGTM mode (there's only so much markdown one can continuously read) and soon realised my tiny app had 300 tests and suspiciously near-100% code coverage.

Proper code bloat! I wasn’t going to maintain that many tests or accept longer and longer build times. I asked Kiro to clean it up, and it dutifully refactored them into a three-level-deep abstraction hierarchy that I’m quite sure will fall apart as soon as the domain model evolves.

On the one hand, I had an application. And honestly, I wouldn't have had the time or focus to write all this code between meetings. But on the other hand, I was left wanting. The experience had stripped away one of the deepest joys of coding: I hadn't written any of it. Where was the frustration of trying to get something to work, and the satisfaction of finally fixing it? I also found that AI-generated code is not an ideal path for me when learning a new framework or trying out new features. It's too noisy.

So, what do I think now?

I’m impressed by the speed, the confidence, and the sheer volume of code these assistants can produce so nonchalantly. However, there is much to oversee and think about, especially in the medium to long term: code bloat, suboptimal use of frameworks, the cost of producing the code and the future cost of maintaining it, performance considerations, and the illusion of progress.

Judgement still matters. Experience still matters. And yes, productivity can be boosted, especially when these tools are intentionally deployed for repetitive or low-value tasks.

Stage Two begins

Having scratched the itch and tried out recent Spring AI features by reading, coding (myself), failing, trying again and succeeding, I’m now expanding my hobby team: I’m going to partner with an artificially intelligent engineer who is enthusiastic, energetic, and sometimes a little too sure of themselves.

As I move into this next phase, I’ll be looking through two lenses:

Professional: to identify opportunities to improve productivity, spot risks early, and gain clarity about how we use AI in development without falling into hype or debt.

Personal: to use the little time I have to write code for myself again, and enjoy it, with a little help from a friend.

What has your experience been like? I'd love to hear from you in the comments. And don't forget to subscribe to have the next part delivered straight to your inbox.

So needless to say
I'm odds and ends
But I'll be stumblin' away
Slowly learnin' that life is okay
Say after me
It's no better to be safe than sorry

Take on Me (a-ha)