“Stop Learning SQL and Python”
Why Senior Analysts Are Right (And Wrong)
AI has changed the game. The question now is: what are the new rules?
“I hate leet coding interviews.”
My friend, a senior analyst, told me that over coffee last month. Five years in, crushing it at her job, but dreading the thought of ever having to switch roles because of the interview gauntlet. And honestly? I get it. When you’re spending your days designing data strategies and managing stakeholders, being asked to write a flawless binary search on a whiteboard feels absurd. Especially when Copilot’s sitting right there in VS Code, ready to handle that instantly.
So yeah, the “stop learning SQL and Python” crowd has a point. Just not the one they think they’re making.
📌 Quick note: This is a free article on concepts. Paid subscribers get implementation guides midweek, every week.
Why Everyone’s Fed Up
Two things are happening at once. First, as you get more senior, your job changes. You’re not writing queries all day anymore. You’re figuring out what questions to ask, which systems to build, and how to structure teams around data. Second, AI assistants have gotten really good at syntax. I don’t keep the exact parameters for window functions memorized anymore, and I don’t need to. That’s what documentation, and now GitHub Copilot, are for.
These tools aren’t the problem. Most interview processes are still stuck in 2015, testing whether you’ve memorized edge cases instead of whether you can actually solve problems. That’s what’s driving people up the wall.
But Here’s Where It Gets Tricky
There’s a huge difference between “I don’t need to memorize syntax” and “I don’t need to understand how this works.” Some people are conflating the two, and that’s dangerous.
AI is incredible at generating code. It’s also incredibly bad at knowing whether that code is right for your specific situation. I’ve watched Copilot suggest a JOIN that would absolutely wreck performance on our data model. If I didn’t understand what was happening under the hood, I’d have shipped it and spent the next week debugging production issues.
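Here’s a toy sketch of the kind of JOIN problem I mean (the tables and numbers are made up for illustration, not my actual data model): joining a one-row-per-order table to a many-rows-per-order table before aggregating silently duplicates each amount once per matching row, and the total comes out inflated. The SQL looks perfectly plausible, which is exactly why you need to understand what’s happening.

```python
import sqlite3

# Hypothetical schema for illustration: orders has one row per order,
# order_items has many rows per order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, amount REAL);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 100.0), (2, 50.0);
    INSERT INTO order_items VALUES (1, 'A'), (1, 'B'), (1, 'C'), (2, 'A');
""")

# Plausible-looking but wrong: the join fans out each order's amount
# once per item, so order 1's 100.0 gets counted three times.
wrong_total = conn.execute("""
    SELECT SUM(o.amount)
    FROM orders o
    JOIN order_items i ON o.order_id = i.order_id
""").fetchone()[0]

# Correct: aggregate the table that's already one row per order.
right_total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(wrong_total, right_total)  # 350.0 150.0
```

Nothing about the wrong query errors out. It just quietly reports 350 instead of 150, and that’s the failure mode you can’t catch without understanding the data model.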
What hasn’t changed is that you still need to debug things. You still need to spot when something’s wrong. And if you’re senior, you’re definitely mentoring people and making architectural calls. You can’t do any of that without understanding the underlying concepts. The tool that writes the code might be new, but the requirement to know what good code looks like isn’t going anywhere.
What Matters Now
So if mindless memorization of code snippets is out, what’s in? I think about it as four core things.
First: problem framing. Can you take a messy stakeholder request and turn it into something concrete? That’s half the battle right there.
Second: critical thinking. When your model spits out results, can you tell if they make sense? Can you reason about uncertainty, edge cases, and what might be missing?
Third: working effectively with AI. Writing good prompts, integrating outputs, and knowing when to trust the suggestion and when to override it. This is a skill now.
Fourth: technical judgment. Whether a human or an LLM wrote it, can you review the code and evaluate whether it’s efficient, maintainable, and correct? Can you explain why one approach is better than another?
These aren’t easier skills than syntax memorization. They’re harder, but they’re the ones that actually matter.
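On that fourth point, here’s a toy example of the judgment call I mean (both functions are hypothetical, but the pattern shows up constantly in generated code). Both deduplicate a list while preserving order, both are correct, and only one of them survives contact with a large table:

```python
def dedupe_slow(items):
    # Reads fine in review, but `x not in seen` on a list is a linear
    # scan, so the whole function is O(n^2).
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_fast(items):
    # Same behavior, but membership checks against a set are O(1),
    # making this O(n) overall.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedupe_fast([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

An LLM will happily hand you either version, and both pass the unit tests. Knowing which one to ship is the part that’s still on you.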
Where This Is Heading
The good news is that some teams are figuring this out. I’ve seen interviews shift from “write this algorithm from scratch” to “here’s an ambiguous business problem, walk me through how you’d approach it.” You can use whatever tools you want. The interviewer wants to see how you think, how you break down problems, and what trade-offs you consider.
That’s not a lower bar. It’s a higher one. It’s just measuring the right things.
And honestly, that’s the direction all of us need to be moving into, whether we’re hiring or getting hired. Let AI handle the mechanical stuff. That’s what it’s good at. Focus your energy on the conceptual work, the strategic thinking, the judgment calls. That’s where humans still have the advantage, and probably will for a while.
Here’s what paid subscribers got this past month:
Advanced Model Drift Detection - A technical deep dive into statistical methods (PSI, Wasserstein, KL divergence) for catching data distribution shifts before they tank your models, plus a diagnostic framework that takes you from alert to root cause without manual investigation.
MLOps on a $50 Monthly Budget - The complete architecture for running production ML inference at founder-friendly costs using serverless GPUs, DuckDB querying S3, and automated shutdown scripts that prevent runaway cloud bills.
AI Agent Starter Kit - Production-grade agent infrastructure with built-in cost tracking, compliance engines for EU AI Act/GDPR, and structured logging that makes debugging agents forensic analysis instead of guesswork.
DIY Data Catalog Template - A working metadata management system you can test locally in under 2 hours, with validation scripts, tier-based governance, and a Google Sheets backend that proves the pattern before you invest in vendor platforms.
Not a paid subscriber yet?
AI changes how we write code, not why it exists.