You glance at your org chart: data engineers holding pipelines together with duct tape, analysts drowning in one-off asks, and a lone ML ops specialist clinging to the last thread of model uptime. Meanwhile, a cheerful GenAI chatbot promises to “summarize any dataset in seconds.”
You could slash the team, buy licenses, and hope for the best. Or dig in and defend the headcount, risking the label of pre-AI relic. Neither choice screams “career longevity.”
But here’s what you should be asking:
What do we want humans to own? What can we safely automate? And how do we design a team for a future that keeps shifting?
Replacing people with machines isn’t the goal. AI gives you the opportunity to redesign how human judgment scales. Before someone else does it for you.
Why Now?
Even just a year ago, GenAI could still be viewed as a novelty demo. Today, it’s a line item on your competitor’s roadmap. Shadow AI is already creeping into workflows via unvetted tools. Compliance risk is rising. Your smartest people are experimenting (without telling IT).
Meanwhile, budget season is upon us. And someone in finance is asking, “Can’t we do all this with fewer people now?”
This isn’t task automation. This is a strategic reallocation of cognitive effort. We automate the work of synthesis and translation so we can amplify the human work of true discovery and strategic reasoning. That’s corporate board-speak for, “We use AI to do the grunt work (e.g., summarizing, rewording, connecting dots) so humans can spend their brainpower on the big stuff.”
The cost of inaction: AI adoption decisions made outside your org. Talent drain when high-performers see no path forward. Siloed AI churning out impressive-sounding nonsense.
And the myths won’t help:
- “GenAI can replace the analysts.” Not really. It’s great at clear-cut tasks. But when the problem’s fuzzy or new? That still takes a person.
- “We just need a full-stack unicorn.” Last I checked, burnout wasn’t a business model.
- “No-code plug-and-play software means no more data engineering.” Sure, if your idea of self-service is pretty dashboards built on dirty data and no context.
It’s time to get organized before someone upstairs decides to reorganize, if you catch my drift.
We Need Humans for Divergent Thinking
Let’s talk Introduction to Psychology. We need to understand how people think, how they look at the world, and how they solve problems. The distinction between convergent and divergent tasks is a core concept in the psychology of creativity, and it’s now critically important for designing data teams in the age of AI.
Convergent Thinking is about finding a single, correct, or established answer to a well-defined problem. It’s logical, systematic, and involves applying known rules and knowledge.
Divergent Thinking is about generating many different options, ideas, or solutions for a problem that is often ambiguous or has no single “right” answer. It’s creative, exploratory, and spontaneous.
| Convergent Thinking | Divergent Thinking |
| --- | --- |
| Solve, classify, define | Imagine, brainstorm, hypothesize |
| One correct answer | Many possible ideas |
| Works with clear constraints | Thrives in ambiguity |
| Follows known rules | Breaks or rewrites the rules |
Examples:
- Calculate Q2 churn rate → Convergent
- Brainstorm reasons churn spiked → Divergent
- Write code to return lifetime value → Convergent
- Design a new customer analytics platform → Divergent
AI excels at convergence. It summarizes, translates, classifies, and codes at scale. But the spark—the strategy, the novel “what if”—still requires a human.
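To make the distinction concrete, here is a minimal sketch of the first convergent example above, calculating a quarterly churn rate. The function name and the customer figures are illustrative assumptions, not from the article; the point is that the task has one well-defined formula and one correct answer.

```python
# Convergent task: well-defined inputs, known formula, single correct answer.
def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Quarterly churn rate = customers lost / customers at start of quarter."""
    if customers_at_start == 0:
        raise ValueError("Cannot compute churn with zero starting customers")
    return customers_lost / customers_at_start

# Hypothetical Q2 figures
q2_churn = churn_rate(customers_at_start=10_000, customers_lost=450)
print(f"Q2 churn rate: {q2_churn:.1%}")  # prints "Q2 churn rate: 4.5%"
```

The divergent counterpart, brainstorming *why* churn spiked, has no such formula, which is exactly why it stays with a human.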
Here’s what humans do better:
- Frame the problem. Decide what’s worth analyzing.
- Pose the hypothesis. Come up with the “why.”
- Make the leap. Link insight to a real-world business action.
- Navigate ambiguity. Choose a direction when there’s no map.
The best model is a loop: A human defines the problem → AI handles the convergent analysis → a human interprets and acts on the result. That’s the hybrid advantage.
Why Can’t AI Handle the Divergent Tasks?
AI is trained on a massive collection of human-generated work. A large language model learns to predict the most likely next word, which is itself a convergent task; it is not rewarded for finding a completely novel and unexpected way to finish a sentence.
Psychologists often rely on a framework called the Big Five personality traits: Openness (to new experience), Conscientiousness, Extraversion, Agreeableness, and Neuroticism (e.g., anxiety, self-doubt).
Openness to new experiences has a well-documented link to divergent thinking, but not to convergent thinking, which makes sense. And humans can have new experiences, while AI can’t.
On the flip side, conscientiousness correlates negatively with divergent thinking and positively with convergent thinking. Generative AI systems are built to be conscientious: precise, rule-following, and helpful.
Give AI the tasks it handles best… and give humans the ones where they still outperform. And, yes, I do have a framework to help you do that!
The R.A.I.S.E. Framework: Future-Proof Your Data Org
Here’s the framework that keeps your team ahead without getting ejected out the airlock: R.A.I.S.E.
- Reskill
- Automate
- Integrate
- Specialize
- Evaluate
Each pillar connects to real-world decisions, budget needs, and KPIs… because strategy without execution is just a TED Talk.
Reskill: Humans Don’t Vanish. Their Tasks Evolve
| Decisions | Investment | Measurement | Actions |
| --- | --- | --- | --- |
| Which roles get copilots? Which get technical upskilling? | Internal “AI guilds”; hands-on workshops; dedicated time for tool experimentation | % of team trained on key tools; average turnaround time per task | Map task types to automation potential; launch a skills heatmap to guide L&D; pair analysts with prompt engineers |
This isn’t just skilling up. The goal is offloading low-value cognition so humans can focus on causal inference, strategic synthesis, and hypothesis generation.
Automate: Target Tasks, Not Titles
| Decisions | Investment | Measurement | Actions |
| --- | --- | --- | --- |
| Which repeatable workflows (QA, documentation, summary generation) are ready for automation? | Prompt libraries; secure AI platforms; audit tools | Hours saved per analyst per month; pre/post error rates in automated deliverables; % of workflows with human-in-the-loop guardrails | Run a “task inventory” across teams; pilot a report-writing bot with QA sign-off; track rework rates before full rollout |
GenAI is great at “convergent” tasks like pattern recognition, translation, and summarization. Humans still own abstraction, ambiguity, and the leap from “what happened” to “what now?”
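A “task inventory” can start as something as simple as the sketch below: score each repeatable task on how convergent it is and how often it recurs, then rank by a naive automation-potential score. The tasks, scores, and scoring rule are all illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical task inventory: convergence rated 1 (open-ended) to 5 (rote),
# multiplied by monthly frequency to get a rough automation-potential score.
tasks = [
    {"name": "Weekly KPI summary", "convergence": 5, "runs_per_month": 4},
    {"name": "Ad hoc churn deep-dive", "convergence": 2, "runs_per_month": 1},
    {"name": "Pipeline QA checks", "convergence": 4, "runs_per_month": 20},
]

for t in tasks:
    t["automation_score"] = t["convergence"] * t["runs_per_month"]

# Highest-scoring tasks are the best automation candidates.
for t in sorted(tasks, key=lambda t: t["automation_score"], reverse=True):
    print(f'{t["name"]}: {t["automation_score"]}')
```

Even a rough ranking like this gives the team a shared, defensible starting point for choosing pilots, rather than automating whatever is loudest.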
Integrate: AI Into Workflows, Not on Top of Them
| Decisions | Investment | Measurement | Actions |
| --- | --- | --- | --- |
| Where in the data lifecycle does GenAI drive actual leverage? | API integrations; model monitoring; workflow orchestration | Adoption of AI-augmented tools; SLA adherence for AI-supported outputs | Embed copilots directly into tools analysts already use; require AI output to pass through existing QA protocols; monitor stakeholder trust quantitatively |
Bake it in. The team barely even realizes GenAI is helping. For example, instead of alt-tabbing between Excel and ChatGPT, they get inline suggestions in their BI tool.
Specialize: Kill the Unicorn. Build the Squad
| Decisions | Investment | Measurement | Actions |
| --- | --- | --- | --- |
| Which niche roles create leverage now (e.g., ML ops, data PMs, prompt engineers)? | Clear job architecture; role definitions; internal mobility | Time to fill open roles; analyst retention in high-burnout positions | Rewrite job postings to reflect real scope (not wish lists); create “data product” owner roles to bridge tech and strategy; use skills assessments to drive project staffing |
Specialists don’t slow you down; they make your generalists faster and your output more reliable.
Evaluate: Stress-Test Org Design Like It’s Q4 Every Quarter
| Decisions | Investment | Measurement | Actions |
| --- | --- | --- | --- |
| When and how should you revisit your org and tooling assumptions? | Org health checks; external capability benchmarks | Time-to-Insight (TtI): the elapsed time between a stakeholder’s business question and a vetted, actionable recommendation; % of projects delivered with clear business ownership; satisfaction gap between data team and business stakeholders | Schedule quarterly R.A.I.S.E. reviews; benchmark your team’s capability mix vs. industry peers; survey stakeholders for clarity on ownership and value |
Quarterly evaluation ensures your AI copilots remain helpful partners and don’t evolve into a HAL 9000: confidently wrong and locking humans out of the loop.
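Time-to-Insight is easy to instrument once requests and deliveries are timestamped. Here is a minimal sketch under the assumption that each ticket records when the question was asked and when a vetted recommendation shipped; the ticket data below is invented for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical (asked, delivered) timestamps pulled from a ticketing system.
tickets = [
    ("2024-04-01 09:00", "2024-04-03 17:00"),
    ("2024-04-02 10:30", "2024-04-02 16:30"),
    ("2024-04-05 08:00", "2024-04-11 12:00"),
]

fmt = "%Y-%m-%d %H:%M"
hours = [
    (datetime.strptime(done, fmt) - datetime.strptime(asked, fmt)).total_seconds() / 3600
    for asked, done in tickets
]

# Median is more robust than the mean when a few projects drag on.
print(f"Median Time-to-Insight: {median(hours):.1f} hours")
```

Tracking the median quarter over quarter shows whether your AI investments are actually shortening the path from question to decision.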
What Good Looks Like
Here’s what a high-functioning, future-ready data org looks like: Your analysts run point on strategic questions, not just report formatting. The AI copilots handle the grunt work like summarizing, formatting, and even suggesting next steps.
Meanwhile, data PMs triage business requests, ensuring those “quick asks” don’t become giant time sucks. And engineers focus on platform resilience and governance, not ad hoc fixes.
The GenAI is embedded in process, not bolted on.
Yes, it may take some discipline, but it’s not magic. It’s design.
Pitfalls & Pushback: What to Expect (and Say Back)
- “We’ll just hire prompt engineers.”
That’s an important skill, but their impact is capped by your data foundation. A Formula 1 driver can’t win in a rental car. Invest in both the expert and the engine.
- “Automation will kill morale.”
Not if the team chooses what to automate. Nobody misses tedious QA logs.
- “We’re already too late.”
Not at all. But the window is closing.
Start with a 90-day “catch-up plan”:
- Audit shadow AI usage (it’s happening)
- Identify 2 quick wins (1 internal, 1 external)
- Present a rollout plan with metrics, guardrails, and training
- “AI ownership is political.”
What else is new? So is marketing. So is HR. Do it anyway.
Define your data products. Assign owners. Set SLAs. Ship work.
- “No-code means no more engineers.”
Actually, it means engineers now build something better: The data products that power self-serve, AI-driven decision-making. They go from dashboard jockeys to “force multipliers” for every analyst in the org.
Final Word
The next time a board member slides that McKinsey deck across the table and asks for HAL 9000, you’ll answer calmly: “We’ve already RAISE’d the team. Humans lead, hybrids scale, and no one’s locked out of the pod bay.”
🧭 Your Executive Checklist
✅ Ask: Which analyst tasks are automation-ready right now?
✅ Champion: A GenAI-aligned reskilling and skills heatmap
✅ Set: A comprehensive AI governance playbook
✅ Build: Specialist roles instead of chasing full-stack unicorns
✅ Measure: Time-to-Insight and stakeholder satisfaction
✅ Review: Team structure and AI ROI… every 6 months
For more columns from Michael Bagalman’s Data Science for Decision Makers series, click here (from All Things Innovation) and here (from All Things Insights).
Contributor
Michael Bagalman brings a wealth of experience applying data science and analytics to solve complex business challenges. As VP of Business Intelligence and Data Science at STARZ, he leads a team leveraging data to inform decision-making across the organization. Bagalman has previously built and managed analytics teams at Sony Pictures, AT&T, Publicis, and Deutsch. He is passionate about translating cutting-edge techniques into tangible insights executives can act on. Bagalman holds degrees from Harvard and Princeton and teaches marketing analytics at the university level. Through his monthly column, he aims to demystify important data science concepts for leaders seeking to harness analytics to drive growth.