Ben Lorica and Evangelos Simoudis on AI and Layoffs, Systems Thinking, Verification, and the Future of Knowledge Work
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
In this episode, host Ben Lorica is joined by Evangelos Simoudis to explore how artificial intelligence is reshaping the modern workplace and what knowledge workers must do to adapt. They discuss the critical shift from routine execution to “systems thinking,” emphasizing the need for rapid experimentation, AI orchestration, and deep domain expertise. Tune in to learn how to future-proof your career, navigate flatter organizational structures, and leverage AI as a collaborative tool rather than viewing it as a threat.
Related content:
- A video version of this conversation is available on our YouTube channel.
- The “Data Center Rebellion” is here
- Data Engineering in 2026: What Changes?
- Ben Lorica and Evangelos Simoudis → When AI Eats the Bottom Rung of the Career Ladder
- Ben Lorica and Evangelos Simoudis → Stop Piloting, Start Shipping: A Playbook for Measurable AI
- Ben Lorica and Evangelos Simoudis → Is Waymo Actually Profitable? The Real Cost of the Robotaxi Revolution
Support our work by subscribing to our newsletter📩
Transcript
Below is a polished and edited transcript.
Ben Lorica: All right, so we’re back with my good friend Evangelos Simoudis. His blog is at corporateinnovation.co. We’re recording this on the morning of March 6, 2026. Episode notes will be on thedataexchange.media. Hit the subscribe button on YouTube and subscribe wherever you get your podcasts. So I’m just going to jump in. Our topic today is AI and layoffs, and there are actually many facets to this topic. The first is: is AI really causing a lot of these layoffs? We’re going to skip that for now. The second is: how is AI changing the nature of work? We might cover that in more depth in future podcasts. But for today, we’ll focus on what knowledge workers can do to make themselves more attractive to companies with the rise of AI.
So with that, the first set of advice that I’ll dispense—and hopefully Evangelos will either agree or push back on—is around ways of working. One thing you can do is learn to work with AI as a collaborator, not just as a tool that’s threatening you or something to resist. The premise here is that people who develop the skills to direct, validate, and experiment with AI will be more valuable than those who ignore it, try to compete with it, or rely solely on pure execution speed.
The second thing you can do is shift away from routine execution and move yourself more towards problem definition. Evangelos, I’m seeing this especially among developers, including myself. You used to write a lot of code, but now you’re more into this kind of spec-driven development, where you spend a lot of time really defining what you want to do, the libraries you want to use, and basically sketching out the problem you want to solve for the AI model.
And finally, the last thing in this area: be an authority on complex edge cases. The idea here is that AI at this moment cannot handle everything. The people who can handle the edge cases that AI cannot will be much better positioned—at least for now. So I’ll stop there, and we’ll cover other categories. But what do you think, Evangelos? These are new ways of working.
Evangelos Simoudis: Let me start by saying, good to see you again in our monthly meeting here. A lot of what I’m about to say in response to the framework you’re laying out is driven by the experience we have through our corporate advisory work, which stems from two levels: advising corporations, and doing some development and prototyping—either in advance of what corporations will ask for, or in response to what they’re asking for.
The first thing I would say is that we’re starting to distinguish among three types of corporations. One is the digital natives—companies like Airbnb, Netflix, eBay. From the beginning, they embed AI into their business processes, and their employees are very much tuned into how to keep optimizing and how AI can help them with that. The second, which to me is the most interesting case, I’ll call the incumbent innovators. These are companies that have been around for a while—examples could be Walmart, JPMorgan Chase, Intuit—but they’re starting to deliberately introduce AI into their business processes. The third category is the traditionalists, who tend to be late adopters or market laggards, if you want to use less generous terminology. In my mind, today they are deluged by pilots without being able to make clear decisions.
When I look at the employees across those companies—and I'll start with the last group, because I think that's what your statement points to—I'm seeing a lot of people using AI tools as a replacement for search, as opposed to really experimenting with a breadth of tools and starting to determine how to incorporate them into their daily workflow and the business processes they participate in. The second observation, particularly in these traditionalist companies, is that their AI efforts tend to be driven by individual employees as opposed to being deliberate, top-down decisions about what the corporation wants to do. Again, the digital natives have the right DNA, and so do their employees.
The last point, and I'll stop here, is that going forward, it may not be unreasonable for what we broadly call knowledge workers to think about—I won't call it gig work—but working for more than one company. I think the right application of AI will have employment implications. Even if somebody embraces the direction and advice you're providing, which I think is very sound, they may still find themselves needing to work at more than one company in order to piece together full-time employment.
Ben Lorica: Yeah. And I’ll actually talk about that towards the end as well. So the second area I want to call out is: what are some skills you might want to build or become comfortable with?
The first one is rapid experimentation. These AI tools really allow you to build prototypes quickly and test ideas. You need to be comfortable with quicker action. Traditionally, people may have moved more slowly with a lot more deliberation, but with these AI tools, you can test several ideas all at once. You just have to be comfortable with the notion of experimentation.
Second—and this is something that's already happening in software development, just because the tools are more mature there—you will probably need to learn how to orchestrate several agents at once. I think this is a learnable skill. Programmers are learning it now; they have several Claude Code instances going, tackling a problem simultaneously. These tools are currently associated with software development only because software developers are the tip of the spear: they're learning how to orchestrate multi-agent workflows first, but it's coming to other fields. In fact, I'm about to release a newsletter a few weeks after this podcast goes out, imagining what agents would look like in two other domains: accounting and legal analysis.
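The fan-out pattern Ben describes, one worker directing several agents on a problem at once, can be sketched in a few lines. This is a hedged illustration, not any vendor's API: `run_agent` is a placeholder for a real agent call (for example, an SDK request to a coding agent), and only the concurrency structure is the point.

```python
import asyncio

# Hypothetical stand-in for a real agent invocation (e.g., an API call to a
# coding or research agent). Replace with your tool's actual SDK call.
async def run_agent(task: str) -> str:
    await asyncio.sleep(0.01)  # simulate the agent doing work
    return f"result for: {task}"

async def orchestrate(tasks: list[str]) -> dict[str, str]:
    # Fan out: launch one agent per task concurrently, then wait for all.
    results = await asyncio.gather(*(run_agent(t) for t in tasks))
    return dict(zip(tasks, results))

if __name__ == "__main__":
    tasks = ["refactor auth module", "write unit tests", "update docs"]
    for task, result in asyncio.run(orchestrate(tasks)).items():
        print(task, "->", result)
```

The orchestrator's job, in this framing, is less about doing each task and more about decomposing the problem, dispatching the pieces, and reviewing what comes back.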
Another thing you might want to become comfortable with is verification. AI right now is not perfect. You want to be someone who is adept at this core skill of verification—learning when not to delegate to AI and learning how to spot where AI is weak.
And then, going back to something I covered earlier: invest in domain expertise. Especially the kind you can’t just glean from a manual or a book—the kind that derives from experience, from working within your domain, from being able to spot subtle errors and guide AI. There’s domain knowledge that cannot be captured and written down precisely. That’s the kind of domain knowledge I’m talking about.
So in summation: get comfortable with experimentation. Learn to orchestrate multiple agents at once. Treat verification as an important skill. And invest in deep domain expertise, especially the type that cannot be written down in books or manuals.
Evangelos Simoudis: I think you’re making some very good points, which I believe are absolutely relevant to the software engineering audience we have—particularly this notion of experimentation and managing multiple agents.
Ben Lorica: And like I said, it’s mostly because the agent tools are mature there. But looking ahead, even if you’re not in software engineering, similar agent tools may show up in your domain within the next 18 months.
Evangelos Simoudis: So again, just to broaden what you’re saying: I think what has become very important—and I’ve written about this, I wrote a couple of pieces defining a framework for levels of autonomy, as I call it. I took some lessons from my work in the mobility industry and transferred them more broadly to AI. One of the observations from constructing that framework—people can read it on my blog—was that we are shifting and starting to place a lot of importance on people who are systems thinkers. In other words, they don’t think at the component level; they’re able to think at the system level.
Ben Lorica: Yeah, it’s similar to what I alluded to earlier: get away from routine execution.
Evangelos Simoudis: Right. As a systems thinker, you are forced to understand the operation of an entire system or an entire business process. If we move away from software engineering and ask how we can use those skills more broadly, people who routinely work in specific business processes—like paying vendors in a large corporation or dealing with resumes in HR—really need to understand the entire process they’re working within. Today, their work may be on a small portion of an entire process, but as we move beyond just software engineering, everyone needs to become much more familiar with the business process where they’re operating and understand where AI can play a role.
In some corporations—what I call the incumbent innovators—you’ll see a lot more design thinking from the top, with people articulating how AI can enhance the business process. But in the traditionalist, laggard-category corporations I mentioned earlier, it’s the employee who is expected to introduce AI. The ones who will differentiate themselves in these environments will be the ones who make an attempt to understand the entire process. They’re the ones who, through experimentation and through their reading, are introducing the right AI into the right step of that process. Determining what is the right step and what is the right AI involves a lot of experimentation. And today, we’re at a point where it’s very possible to do it. Many tools are becoming available, and the companies offering those tools are willing to give you free trials so you can determine how a tool can fit into your process.
Having that flexibility, being able to think at a system level, and then orchestrating tools as part of a business process—that’s what systems thinking creates: a very unique skill. And we don’t have a lot of systems thinkers. I talk to a lot of our portfolio companies, which are all startups developing some type of AI tool or application. We have a lot of people who may be experts in programming, experts in using specific packages or developing models, but they’re not very good at thinking at the systems level. I think that will be a very big differentiator as we move forward.
Ben Lorica: Yeah. And in general, your work is going to shift from actually producing work to directing systems.
Evangelos Simoudis: Right.
Ben Lorica: And there is something to be said about someone who knows a business process from end to end. Like I said, this is where that hard-to-write-down domain knowledge comes in—it’s just harder for an AI to replicate. If you can’t write it down, it’s harder for an AI to do it. Although, there is this notion that I’ve heard people talk about, tied to the black-box nature of these systems: we have the traditional concept of human tacit knowledge—knowledge that Evangelos may have just from years of experience advising companies that’s just in his head. And people are speculating that there’s an emergent machine tacit knowledge, because these systems are black boxes and we don’t quite know how they’re arriving at some of their decisions.
Evangelos Simoudis: Which is scary, by the way. I’ve recently listened to some interviews with people from AI labs, and their responses are very disconcerting. It would appear to me that it’s very easy for them to lose control of things. But that’s a whole different story, beyond today’s scope.
Ben Lorica: So in terms of skills to build: this whole notion of getting comfortable with rapid experimentation is partly just the reality of what’s happening. Routine tasks are getting compressed in terms of time. What used to take you eight hours is now ten minutes. Clearly, you’re going to have to be comfortable experimenting and trying new things quickly.
Evangelos Simoudis: The one thing to say, though—and I’m seeing it with two of my portfolio companies these days—is that there is a fine line between trying to understand a new domain, say something around biotech or logistics, and pretending that you’re an expert in that domain. What I’ve been advising these portfolio companies, and actually the employees at their customer companies that they’re working with, is: find the people to partner with. If you’re in a corporation, find the expert with whom you can partner, where the expert provides the deep domain knowledge and you bring in the experimentation culture and the AI knowledge. These kinds of combinations—adding the systems thinking we talked about earlier—will definitely be very important. I don’t think we’ll find them broadly, so people who are able to achieve them will become extremely valuable assets for their organizations. We’re not asking people to become experts in biotech or agriculture or logistics if they’re coming from a different area. We talk about upskilling and reskilling, but there are limits to how you can upskill or reskill an employee. The most successful outcomes are where you bring these competencies together in order to achieve, as a corporation, the results you’re striving for through AI.
Ben Lorica: So Evangelos mentioned this notion of systems thinking, and I also mentioned verification as a core skill. I want to make sure people understand what I mean by verification, because it's in many ways tied to systems thinking. The workers who are good at telling AI what it's doing right or wrong will, by definition, be more valuable than those who are sloppier or who miss errors in the AI system. That's how you develop a systems engineering practice: at each step, you need someone who can actually verify what's going well and what's going wrong.
Evangelos Simoudis: Right. I think we are amazed on a daily basis by the capabilities these systems—particularly generative AI systems—are exhibiting. But we need to understand how to use them effectively. Experimentation is one way. Orchestrating multiple instances of them is another skill. And making sure you’re not propagating hallucinations or other kinds of errors is critical.
Ben Lorica: Yeah. So this verification is kind of like attention to detail.
Evangelos Simoudis: Right. And attention to detail is a skill. We have our own employees using two or three different systems on the same problem, trying to establish what the ground truth is. Trying to establish, for a particular domain—whether you're dealing with logistics or ad tech—what combination of tools gives you answers or solutions that are verifiably relevant and verifiably correct. Because these systems are so eloquent, less involved individuals who use AI on a more occasional basis often find it harder to see the need for verification.
Ben Lorica: So this notion of systems thinking that you introduced—I think it’s very much related to what I’m trying to nail down, which is domain expertise that’s hard to write down. If it’s something you can write down in a manual or book, then chances are an AI can do that well.
Evangelos Simoudis: You need to verify it, but I’ll agree with that.
Ben Lorica: It’s the kind of domain knowledge that comes over time—where you know that this is an edge case, or this clause needs to be redlined in this contract.
Evangelos Simoudis: Yeah. In some domains you can call them heuristic optimizations that come from your experience. You know, for example, that you can string together A and B, even though no book says you should do that, but you know from experience that you can. That type of knowledge is something that…
Ben Lorica: …will make you very valuable. Yeah.
Evangelos Simoudis: Right. And a lot of times it comes from experimentation and your willingness to say, “Well, let me try this. And let me try it in 20 different environments.” One of the things we’ve been developing is a scenario analyzer for mobility situations. What we found generative AI to be particularly good at—and again, this ties back to the experimentation point—is that if I give it a scenario, it can generate 20 other scenarios that are slight variations. Then I can look at the answers it provides for each variation, combine the original with the 20 the system generated, and ask: can I create a better understanding of a particular problem, context, or situation? It took us a while to even accept that the tools could help us expand our horizon—go from one scenario to 21—and then synthesize the 21 answers into something much more comprehensive that can help our clients address the problem they wanted to address.
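The expand-then-synthesize loop Evangelos describes, going from one scenario to 21 and combining the answers, has a simple shape. The sketch below is purely illustrative: `vary_scenario` and `analyze` are hypothetical placeholders for generative-model calls, and only the structure of the loop reflects the workflow described above.

```python
def vary_scenario(scenario: str, n: int) -> list[str]:
    # Placeholder: a real implementation would prompt a generative model
    # for n slight variations of the input scenario.
    return [f"{scenario} (variation {i + 1})" for i in range(n)]

def analyze(scenario: str) -> str:
    # Placeholder for per-scenario analysis by a model or simulation tool.
    return f"analysis of {scenario}"

def expand_and_synthesize(scenario: str, n: int = 20) -> dict:
    # Combine the original scenario with n generated variations,
    # analyze each, and bundle the results for a final synthesis step
    # (which, in practice, would itself be a model call).
    scenarios = [scenario] + vary_scenario(scenario, n)
    answers = [analyze(s) for s in scenarios]
    return {"scenarios": scenarios, "answers": answers}

result = expand_and_synthesize("vehicle merges in heavy rain")
print(len(result["scenarios"]))  # 1 original + 20 variations = 21
```

The value, as described in the conversation, comes from the synthesis across all 21 answers rather than from any single scenario.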
Ben Lorica: All right, so the last part is a bit more high-level: positioning yourself for the labor market. Evangelos alluded to one thing, which is that you might need to get comfortable holding multiple positions at once. Maybe your skill is such that you can help several companies simultaneously.
The second thing—and this maybe comes from me being much more financially conservative than most people, Evangelos—is that I think you should prepare for a little more instability in the job market. That might mean building a larger cash buffer. Preparing for more volatility and choppiness. Maybe there will be more layoffs in your career compared to previous generations.
I also think you should pay attention to how your output is captured and reused. Pay closer attention to how your contributions are being captured, used, and maybe used to improve a model. I’m not sure exactly what you’re going to do with that information, but understanding it is key to understanding how your career may play out. If you think what you’re doing is slowly getting automated away, that’s a signal for you to act.
And then this goes back to something we’ve talked about quite a bit: Evangelos noted systems thinking, but another thing you can do is build skills that compound with experience and not just repetition. Skills like system-level judgment, customer intuition, or if you’re a software engineer, debugging in very messy environments. These are the kinds of skills that come with experience and are very difficult for an AI to capture quickly.
So to summarize: prepare for more instability in the job market, which might mean saving more. Pay attention to how your work is being used or automated. And build skills that compound with experience, not just repetition.
Evangelos Simoudis: A few years ago, I gave a speech at a European Union meeting about what will characterize the employees of the next few decades. This was before generative AI burst onto the scene, but after the Transformer paper came out. I made three points that I want to reiterate today in response to how you’ve laid things out, Ben.
The first was that you really need to think about employees in three stages: as they enter their career, at mid-career, and towards the end of their career. AI will have a different impact on each of these broad categories. There are many subcategories, but let’s stay at the higher level today.
The second is that regardless of category, what will define successful people is flexibility and adaptability. We talked about one form of flexibility: being able to deal with multiple assignments at the same time. You may work for two different companies—20 hours for each, or 30 for one and 10 for the other. That flexibility will be an important characteristic. This notion of “I work 40 hours a week every week and take X number of weeks of vacation” could become a thing of the past. As you said, we’re in an intermediate stage where things are changing but haven’t settled yet.
The last thing I would say is: this is a great time to assess your entrepreneurial desires. A lot of people may want to be entrepreneurs, may want to start something on their own, but they haven’t explored that yet. Because of the impact AI is having and will continue to have, people will start thinking about whether they need to be working for a large organization—government or corporation—or whether they can be entrepreneurs. Entrepreneur doesn’t mean you’re on your own, by the way. In a sense, an Uber driver is an entrepreneur. It can mean teaming up with three or four other people towards a common goal. This is a great time to examine whether you have the skillset, the stamina, and the desire to explore entrepreneurship. I think it will become a very important differentiator, particularly for those early or in the middle of their careers. Those were the three points I made in that presentation, and I think they apply even more today than when I first made them.
Ben Lorica: One more thing struck me as you were talking, Evangelos, and I’d like to get your reaction. I’m still thinking this through. If you think of the modern company structure, it’s kind of like a pyramid: you have the base of entry-level positions, then middle management, then executives. But people are projecting that because of AI, companies might look more like diamonds. The base is smaller because a lot of entry-level tasks are automated, which means to enter the company you may have to be more senior. And you might not need that next level of expertise either, because that’s also handled by AI. I bring this up to say that besides being an entrepreneur, if you do decide to work for a large company, be prepared for a different hierarchical structure. It could be much flatter than what we’re seeing today.
Evangelos Simoudis: I think we’ll see two organizational changes as we move forward. The first is smaller teams. As AI tools and applications become trusted co-workers and helpers, the need for very large teams won’t be there.
Ben Lorica: Because each team member is actually managing a team of agents.
Evangelos Simoudis: Exactly. That’s the point of the orchestrator. The second thing is that—and we see this today in our firm, and with these incumbent innovators I keep referring to, which I really like as a group of companies—they are realizing that with a flatter organization that has the characteristics we talked about—experimentation, systems thinking, using AI as a co-pilot—they can accomplish more with smaller, flatter organizations. As they rethink their business processes, either to enhance them with AI or to completely reimagine them as AI-first, a flatter organization achieves much more. Companies like JPMorgan and Walmart that fit in that incumbent innovator group—we’re seeing them move in that direction, toward flatter organizations comprising smaller teams. I think that’s an important transformation the corporate world is starting to undertake right now.
Ben Lorica: So let’s crystallize this last topic about flatter organizations. In your mind, what does that mean for someone listening? Does that mean avoid becoming a middle manager? What else?
Evangelos Simoudis: Well, I would say that every member of such a team needs to have a purpose. We had this paradigm—and many times we used to make fun of it—of “shuffling paper,” moving things around. That type of role in an organization will start being eliminated. You need to be able to say: here is my purpose, here is why I’m here, here is the contribution I’m making by using AI, and here is the result that my work, with the help of AI, is providing to the organization. You need to be able to articulate your contribution to the ROI of your organization. In this way, you’re showing a reason for your existence—what you’re contributing, how you’re helping the corporation or government advance.
Ben Lorica: And you’re going to have to learn how to use these tools. How you collaborate with your employees, how your employees’ agents collaborate with your agents—you will need to learn how to use these tools. I don’t think we will have people whose job is to manage people only and never use these tools.
Evangelos Simoudis: Actually, I will tell you—I’m programming these days a lot more, in the way you described.
Ben Lorica: And it’s not quite vibe coding; it’s a lot deeper than that. It’s spec-driven development.
Evangelos Simoudis: But I’m doing a lot more, and I’m having fun with it, by the way. In our work—between managing and investing in startups, advising corporations, and developing IP, the three things we do within Synapse—we’re using multiple AI tools, and we’re requiring every employee to use these multiple AI tools. In fact, one of the questions we’re starting to get from corporations, specifically these incumbent innovators, is: how do I change my culture to change the DNA, to make my employees embrace AI a lot more than they are now? These corporations don’t want to see employees say, “Oh yes, I’m using ChatGPT or Gemini on a daily basis to do searches.” They want to hear, “Here’s how these AI tools enable me to do my HR job or procurement job more effectively, and here’s how I’m contributing to the corporation.” That’s a big step.
Ben Lorica: So very quickly—I’ll give you the final word. We talked about a lot of things: domain expertise, verification skills, systems thinking. Unfortunately, Evangelos, a lot of these things assume you already have some experience. So what say you, in closing, to people who are trying to get that first job, that entry-level position?
Evangelos Simoudis: Entry-level positions have become a real sore spot right now.
Ben Lorica: Yeah. But give us some practical advice.
Evangelos Simoudis: I would say: come in as an AI expert. Don’t just play with AI—do meaningful things with AI.
Ben Lorica: And that might tie to one of your earlier pieces of advice: how do you demonstrate that you’re an AI expert? Because you were an entrepreneur.
Evangelos Simoudis: Yeah. You did side projects even when you were in school.
Ben Lorica: And with that, thank you, Evangelos.
