Liked this one! The question comes down to: "when writing code no longer takes most of your time, how valuable are you?"
That's the right question, and the answer separates people who were valuable because they could type fast from people who were valuable because they could think well. The latter group is doing fine.
AI changes how we write code, not why it exists.
The "why" and "what should this do" haven't changed at all. AI just speeds up the translation from intent to implementation.
Which is why the people who understand the why are more valuable now, not less.
The new rules are pretty clear to me. Senior people are now judged less on recall and more on judgement, framing and whether they can spot when an answer looks wrong, even if a machine produced it. That shift is overdue. What lands well here is the distinction between not memorising syntax and not understanding what’s actually happening underneath. Those are very different things, and conflating them is where teams get into trouble.
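To make that concrete, here's a minimal sketch (Python, with a hypothetical `add_tag` helper) of the kind of answer that looks right but isn't: it's idiomatic, it runs clean, and only someone who understands how Python evaluates default arguments will spot the leaked state.

```python
# Looks idiomatic and runs without errors, but it's subtly wrong:
# the mutable default list is created once, at function definition,
# and shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("urgent"))   # ['urgent']
print(add_tag("billing"))  # ['urgent', 'billing']  <- leaked state

# Catching and fixing this is about understanding evaluation
# semantics, not recalling syntax:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("urgent"))   # ['urgent']
print(add_tag_fixed("billing"))  # ['billing']
```

No syntax quiz catches that; a conversation about what the code actually does will.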
Worth noting that Stack Overflow's 2024 developer survey found roughly three quarters of developers now use or plan to use AI coding tools, while debugging and system design remain the hardest skills to hand off.
Interviewing for thinking rather than theatre feels like a healthier bar for everyone involved. How far do you think hiring processes will realistically move this year, or will most firms cling to the old tests a bit longer?
Good question on timing. I think we'll see a split.
Forward-thinking teams (especially those feeling talent shortages) will adapt quickly because they have to. They're already losing good candidates to interview processes that feel disconnected from actual work.
But most large orgs are probably 12-18 months behind. Interview processes have institutional inertia. They're tied to competency frameworks, legal review, interviewer training, the whole apparatus. Plus there's genuine uncertainty about what "good" looks like when you're testing for AI-augmented work.
The early movers will be smaller teams and companies where one hiring manager can just decide "we're doing this differently now" without needing committee approval. Those experiments will produce the case studies that convince everyone else.
What I'm particularly watching for is whether the industry develops shared language around these capabilities. Right now, "can you work effectively with AI?" means different things to different interviewers. Once we get consensus on what we're actually evaluating, adoption will accelerate.
The ironic part is that evaluating architecture and mentorship is easier than running syntax tests. Just have real conversations about trade-offs and past decisions. We've been making interviews harder than they need to be.