If AI Does the Junior Work, Who Becomes Senior?

The Shift Brief | Week of February 16th, 2026

Most AI automation right now is aimed at junior-level tasks: summarizing filings, pulling data, drafting first passes, building basic models, and scanning documents. On the surface, this makes sense. Remove the grunt work, increase leverage, let people focus on higher-value thinking. But there's a deeper question underneath that I don't think we're talking about enough: if AI does the junior work, how do people become senior?

Recent reporting has pointed out that AI is already reshaping entry-level roles by automating the modular, repeatable tasks that used to be how people learned. That shift isn't theoretical. It's happening, and firms are seeing real efficiency gains. But in many organizations, those "junior tasks" weren't just busywork. They were the training ground: where pattern recognition developed, where judgment started forming, where people built the intuition that eventually shows up as senior-level conviction.

In investment teams especially, expertise doesn't appear fully formed. It comes from repetition. From digging through messy disclosures. From reconciling numbers that don't quite line up. From writing drafts that aren't very good at first. From noticing something small that feels off and pulling the thread. Sometimes the alpha isn't in the final summary. It's in the friction of doing the work. It's in the footnote you would've skipped if you were only reviewing a polished output.

Take a simple example: preparing a first-pass sector overview used to be a rite of passage for analysts. In doing it manually, you learned how the industry worked. You saw how companies disclosed risk. You developed a feel for how language changed across cycles. If AI now generates that overview instantly and the human only edits it, where does that learning actually happen? Where do the reps come from?

This isn't an argument against automation. The productivity gains are real, and removing mechanical repetition is a clear win. Copying numbers into spreadsheets was never the source of insight; wrestling with complexity was. There's a difference between eliminating friction and eliminating formation. If we automate too much without redesigning how people develop judgment, we risk building teams that are very good at reviewing AI outputs but haven't built the internal models to know when those outputs are incomplete or wrong.

The real leadership challenge isn't deciding whether to use AI. It's deciding which layers of work build capability and which layers are just overhead. The firms that get this right won't slow down adoption. They'll redesign training and workflow so that automation increases leverage without hollowing out development. Efficiency compounds. But so does capability. The question is whether we're being intentional about building both.

– Ryan

About Shift
Investment research shouldn't be this hard. Shift turns your firm's scattered knowledge into powerful insights with AI built for how you actually work. We're a team of builders and finance experts based in Charlottesville, VA.