Chen Goldberg on Reinforcement Learning, AI Infrastructure, Goodput, and The Memory Supply Chain.
Subscribe: Apple • Spotify • Overcast • Pocket Casts • YouTube • AntennaPod • Podcast Addict • Amazon • RSS.
Chen Goldberg, EVP of Engineering at CoreWeave, joins the podcast to discuss the critical infrastructure shifts required as companies move AI from pilot to production. She explores the necessity of specialized AI clouds versus general-purpose hyperscalers, the importance of optimizing for “Goodput,” and the realities of the current GPU and memory supply chain. The conversation also covers the emergence of reinforcement learning in the enterprise, the complexities of agentic workflows, and how engineering roles are evolving in the AI era.
Interview highlights – key sections from the video version:
- Setting the stage: pilots vs production—what’s real in enterprise AI?
- From experiments to production: differentiation, unknowns, and choosing partners
- Reality check: are these insights beyond Silicon Valley?
- How non-SV companies approach AI: leverage unique data, people, and customers
- When to consider an AI-first cloud vs hyperscalers or DIY
- Best-of-breed strategy: CoreWeave’s partnership model and developer tool acquisitions
- Who’s training models today: finance, retail, health—and emerging workloads like video
- Inference in production: why AI workloads break old cloud assumptions (multi-node, reliability, cost)
- “Arena” for real-workload benchmarking: testing infra choices without guesswork
- Measuring efficiency with “Goodput”: pipeline bottlenecks and Mission Control visibility
- Ops + constraints: telemetry-driven troubleshooting, memory supply pressure, and infra planning
- Agents and model choices: infra needs, security/sandboxes, and open-weights trade-offs
- Closing: early days, technical debt, and workforce skills—why depth still matters
Related content:
- A video version of this conversation is available on our YouTube channel.
- The PARK Stack: The LAMP Stack of the AI Era
- Zhen Lu → The Infrastructure for Production AI
- Lior Gavish → Why Traditional Observability Falls Short for AI Agents
- Your agents need runbooks, not bigger context windows
- The “Data Center Rebellion” is here
- Data Engineering in 2026: What Changes?
Support our work by subscribing to our newsletter📩
Transcript
Below is a polished and edited transcript.
Ben Lorica: All right. Today we have a great guest, Chen Goldberg, SVP of Engineering at CoreWeave. The tagline is “The essential cloud for AI,” or “The force multiplier for AI,” trusted by the world’s leading AI pioneers. She is an industry veteran who formerly ran big engineering teams at Google. And so with that, Chen, welcome to the podcast.
Chen Goldberg: Thank you so much, Ben. I’m excited to be here. And maybe one correction before we kick off, my name is pronounced “Hen.”
Ben Lorica: All right. So for some context, I’m sure a lot of our listeners already know this, but CoreWeave is, as I said, a cloud platform focused on AI. It does have some strategic partners, including NVIDIA, which is both an investor and obviously a supplier of chips, and then OpenAI, which is also one of their larger customers.
So with that out of the way, I guess my first question to you is… You probably have seen the same surveys I have, which say that a lot of people have AI pilots, but they don’t go to production. But I actually think a lot of those surveys are a bit overblown because I do talk to a lot of people and they do have AI in production. So as best as you can tell, how would you describe the state of reality in terms of pilots and production?
Chen Goldberg: I think how quickly the industry moved from experimentation to production—and even these kinds of surveys—is really a testament to how AI is impacting the industry as a whole. It’s just yet another example. We can talk about how engineers—I lead engineering teams at CoreWeave—how our work has changed in just the past three months as models have improved.
So, what we are hearing from customers has definitely changed. And furthermore, it’s not just moving from experimentation to production. There are still a lot of unknowns and a lot of challenges around: How do I differentiate? What is my unique asset if I’m an enterprise or a startup? So there are just a lot of things that folks have to think about, including which cloud provider they are partnering with.
Ben Lorica: So, for our listeners, to frame your response moving forward: You are talking to people that are not just inside the Silicon Valley bubble, right? You’re talking to people in regular companies across different industries. Is that correct?
Chen Goldberg: Correct. And again, that change in the level of interest also happened pretty quickly. It’s similar, but probably on steroids, to what we saw 10 years ago or so with cloud-native and the move to the cloud, when the evolution of SaaS and how we create experiences really impacted every industry.
We are seeing the same thing right now. And what I see companies doing, definitely outside of Silicon Valley, is really looking inside and thinking, “Okay, what kind of assets do we have? What kind of data do I have? What kind of people, what kind of customers do I have?” And then thinking about how they take that and innovate further.
Ben Lorica: And as you mentioned, regarding the cloud computing journey for a lot of companies, they’ve had years and years of experience. So either they’re working with one of these hyperscalers or maybe they have their own cloud setup. But for AI, a lot of companies have a lot less experience. And they already have relationships with existing cloud platforms. So at what point does it make sense for a company to look at something like CoreWeave, one of these AI-first cloud platforms? How much AI am I already doing before I even look at something like CoreWeave? Or is that the right question?
Chen Goldberg: It is definitely the right question. And regarding specialized tools, I think everybody should consider that. A company like ours, yes, we are of course specializing with our infrastructure, but we are also building developer tools with our Weights & Biases platform. I think that like any other area that you want to differentiate in and to have an edge, you need to make sure that you get yourself the best support you can.
With that in mind, I think that everybody should explore the “best of breed” approach and what can really help them accelerate. If you just do whatever everyone else is doing, then you won’t have that edge, assuming you need it or you want it for your business or it’s critical.
Specifically, when we think about our customers for our cloud services: anyone that is either doing any type of training, or using inference for real production workloads—which means they care about security, reliability, and scale, whether that’s latency or performance—those are the kinds of customers that talk to us.
Chen Goldberg: And what we are doing, where we are unique, is we think about ourselves like a “Tiger Team,” like the experts in this space. So when someone comes to us, one thing that our customers are telling us is that this is beyond just, “I’m a vendor, give me something, I’m going to pay you.” It’s really a partnership in essence, and we take pride in that. This is very similar to the early days of cloud, where you were looking to those trusted partners that can help you accelerate.
I’ll just give an example of a startup company. They actually took a significant amount of the money they raised and decided to use it to partner with us. And when talking with them, they said, “Hey, we don’t want to waste our resources on things that you already solved and we know you’re the best at.” So that’s the way I think about that.
Ben Lorica: By the way, I want to give a shout out to another open source project that you folks acquired, Marimo.
Chen Goldberg: Yes. Are you a fan of Marimo?
Ben Lorica: Yes, yes. I’ve had them on this podcast actually. And I’m friends with Lukas as well of Weights & Biases. So great, great tools that you folks have brought in.
Chen Goldberg: I think that really speaks to that strategy of looking for the best tools for the job. And that’s been our strategy and where we’re focusing. And definitely knowing folks like Lukas and Shawn from Weights & Biases or Akshay from Marimo, it’s about innovating. It’s about taking a different approach to this problem and where we think it really creates the right results.
Ben Lorica: So you mentioned the two things that people tend to do: training and inference. So the stereotype is training is mostly for Silicon Valley giants and labs. You can probably give us examples of otherwise. So are there companies that are actually doing training that are not one of these tech companies?
Chen Goldberg: Yes. I will give a couple of examples. Definitely anyone in the financial world when we think about risk as an example, is something that we see customers using training with their own data already.
Ben Lorica: Really? From scratch? So this is like a foundation model?
Chen Goldberg: Building their own models. They’ve been doing ML before; there is nothing new about that. So it’s not necessarily building their own LLMs, but just the work of training models. We are also seeing companies on the retail side investing in unique experiences, on the expectation that the way we as consumers research and consume will change. And definitely early on, in drug discovery and research around health, we’ve seen some of those customers running things at scale in this space.
And there are some other use cases that are fairly common—well, maybe it’s Silicon Valley, maybe I shouldn’t say very common—such as call center services and video generation.
Ben Lorica: Yeah, the video generation is interesting because it’s something that the foundation model builders can do, right? So Gemini can do it, OpenAI can do it. But for some reason there’s still an opening for startups that really specialize in this, right?
Chen Goldberg: Yeah. I think it’s probably too early to talk about disruption in this space, because in parallel to what we are seeing around the strength of foundation models, we also see an evolution of tools: how easy it is to train, to do inference at scale, or to use your own data.
Ben Lorica: Fine-tuning has become so easy now, right?
Chen Goldberg: And we are just at the beginning, right? If you think about where we were six months ago, so much has evolved. Actually, we acquired another company last year called OpenPipe, and what they are doing is making reinforcement learning easier. The idea is that they make it simpler for you to navigate that trade-off between best, cheap, and fast. It’s always that trade-off. Can they help automate some of those metrics and get you to a “good enough” result? And again, it depends on what task we are trying to solve for.
So, too early. There is a new model coming up every day. It’s hard to keep up.
Ben Lorica: Almost literally.
Chen Goldberg: Yeah. Not… yes. Sometimes two. No, just kidding.
Ben Lorica: And I’m going to come back to reinforcement learning in a little bit. But on the inference side, what’s the biggest rude awakening people have when they move to a real production application? At first, of course, they think, “We can just use a traditional cloud or do this ourselves.” But then at some point, what causes them to move to something like CoreWeave?
Chen Goldberg: So I think that’s a great opportunity to talk about the origin story or why I joined CoreWeave. So for folks that don’t know, I spent almost a decade at Google Cloud.
Ben Lorica: Kubernetes.
Chen Goldberg: Yes, part of the founding team of Kubernetes. And really what Kubernetes did in the space of cloud is create an abstraction layer, right? Making cloud resources feel the same everywhere. And actually creating the opportunity for portability of workloads and multi-cloud becoming a reality.
And part of that world was based on the fundamental assumption that all resources are the same. And if I’m a user, I can declare what kind of resources I need. I don’t need to go into too much detail and I can let this orchestrator deal with things on my behalf.
Ben Lorica: Which for our younger listeners was a pain in the “you know where” before Kubernetes.
Chen Goldberg: It was really hard. It was really, really hard. And there was no need for it to be hard. And then what happened with AI workloads is that some of those assumptions that we had in the past were no longer true.
For example, the idea that I can easily move workloads from one node to another, from one machine to another. That’s actually no longer true because most workloads, especially when you run things at scale, are running on multiple nodes, on multiple machines. And that just makes that orchestration and how you think about highly available systems and reliability different.
The cost and complexity of these systems became really significant. Once you move from a single-node to a multi-node environment, and you add the cost and complexity of the systems on top of that, the risk of one underperforming point dragging down the entire system has huge consequences. Huge consequences. For some customers and companies, it’s tens of millions of dollars.
And also it’s an advantage from a company perspective. If I cannot move fast enough or if my service is down, it will impact my business. Period.
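Chen’s point that one underperforming node drags down the whole system can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative, not CoreWeave figures: in a synchronous multi-node job, every node must be healthy at the same time, so per-node reliability compounds multiplicatively.

```python
def cluster_uptime(node_reliability: float, nodes: int) -> float:
    """Probability a tightly coupled multi-node job sees no node-level
    problem: in synchronous training, every node must be healthy at once,
    so per-node reliability compounds multiplicatively."""
    return node_reliability ** nodes

if __name__ == "__main__":
    # A node that is healthy 99.9% of the time sounds great in isolation,
    # but a synchronous 1,000-node job inherits the weakest link.
    for n in (1, 100, 1000):
        print(f"{n:>5} nodes: {cluster_uptime(0.999, n):.1%} chance of a clean run")
```

At 0.999 per-node reliability, a 1,000-node synchronous job completes cleanly only about a third of the time, which is why checkpointing, health-checking, and fast node replacement dominate the reliability story at scale.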
Ben Lorica: The user experience is so critical.
Chen Goldberg: Correct. And that’s where companies like CoreWeave came about. And I joined CoreWeave because I saw some of those challenges while I was working at Google. Two things: One, as an infrastructure person, not being part of this infrastructure replatforming felt to me like a missed opportunity. It’s a great time.
And I realized that in order to meet the demand of those workloads, we need to do things differently. We have the opportunity to do things differently. And that’s what CoreWeave is about. We have the privilege and luxury to really focus on a smaller set of problems. We can build simpler solutions for that. And we are doing it across the stack.
The other point is that we’ve learned it’s not one thing. It’s not just your GPU or ASIC. It’s always a combination of things. Of how you get that sustainable differentiation, that sustainable advantage. And that’s what we do. We optimize different things across the stack.
Ben Lorica: One of the things that I came across that I found interesting is this thing that you call “Arena.”
Chen Goldberg: Yes.
Ben Lorica: Which is almost like a digital twin for people running large AI workloads, right? So it’s a simulation environment, right?
Chen Goldberg: Actually, what’s unique about it, it’s not a simulation environment.
Ben Lorica: Oh.
Chen Goldberg: It’s real infrastructure. Real production. It’s our real cloud.
Ben Lorica: So is it like, here’s my workload, and then you have like an exact copy and then you simulate the…
Chen Goldberg: Yes. Some of the decisions that people are asked to make right now—and you asked before about inference—are decisions around the infrastructure. Because as we said before with Kubernetes, if I’m running on H100s or H200s or GB200s, the system has to be different. The architecture, the deployment—things will change and perform differently between those systems.
And people are asked to make big decisions, where the stakes are really high, without having the data. The benchmarks that are available—like “test this model”—are actually not a good mirror of what will happen in real time or with real production workloads.
What we are offering—an experience that we know is valued by all of our customers—is the opportunity to partner with us with your real workloads, and we bring the expertise of real infrastructure. It gives you the opportunity to benchmark: What kind of environment? What kind of storage solution should I use? What kind of networking? What kind of data do I have? Let’s do troubleshooting. How do I make it highly available?
There are a lot of questions where we are helping customers accelerate the decision making instead of guessing—and avoid getting into long-term commitments without having any answers, which I think is what the industry expects people to do, and it’s not really reasonable.
Ben Lorica: So it basically allows me to stress test beyond “what GPUs do I have”?
Chen Goldberg: Stress the system and learn. We all talk about experimentation.
Ben Lorica: So then, as you were describing it, I imagine like almost like many, many dimensions. Almost like a multi-armed bandit that I have to optimize the configuration, right?
Chen Goldberg: Exactly. I think that’s exactly right. You’re spot on. It’s not one dimension. It’s not just which model I’m using, but also how I’m deploying it, how the network is configured, what kind of data I’m measuring, and where my data flows. There are a lot of things to consider.
And you know, we talk a lot about performance and reliability. There is another aspect that is really critical to our customers, which is security. Which also relates to of course instrumentation and observability. So having a system where you can partner with experts on all of those dimensions and make some quick decisions is key.
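Ben’s multi-armed-bandit framing can be sketched in a few lines. Everything here is hypothetical: the configuration names and “true” goodput numbers are invented stand-ins for whatever a real Arena benchmark would measure. The point is only that an epsilon-greedy loop concentrates trials on the best-performing configuration without exhaustively testing every combination.

```python
import random

# Hypothetical arms: infrastructure configurations with an unknown "true"
# mean goodput that benchmarking gradually reveals (numbers are invented).
TRUE_GOODPUT = {
    "H100 + Ethernet":   0.78,
    "H100 + InfiniBand": 0.88,
    "H200 + InfiniBand": 0.93,
}

def run_benchmark(config: str, rng: random.Random) -> float:
    """Stand-in for one real-workload benchmark run: true mean plus noise."""
    return min(1.0, max(0.0, rng.gauss(TRUE_GOODPUT[config], 0.01)))

def epsilon_greedy(trials: int = 300, epsilon: float = 0.1, seed: int = 0):
    """Explore a random configuration with probability epsilon; otherwise
    exploit the configuration with the best observed mean so far."""
    rng = random.Random(seed)
    counts = dict.fromkeys(TRUE_GOODPUT, 0)
    means = dict.fromkeys(TRUE_GOODPUT, 0.0)
    for _ in range(trials):
        if rng.random() < epsilon or min(counts.values()) == 0:
            config = rng.choice(list(TRUE_GOODPUT))   # explore
        else:
            config = max(means, key=means.get)        # exploit
        observed = run_benchmark(config, rng)
        counts[config] += 1
        means[config] += (observed - means[config]) / counts[config]  # running mean
    return means, counts
```

In a real Arena engagement the “reward” would be whatever the customer optimizes for (goodput, latency, cost per token), and each pull is an actual run on production hardware rather than a simulated draw.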
Ben Lorica: And you folks also introduced this notion of efficiency which… I don’t know if I’ve heard this term before: “Goodput.”
Chen Goldberg: Actually it starts from Google as well.
Ben Lorica: Okay. So it’s the actual time GPUs spend performing useful work. So tell us why that’s something that people should start tracking.
Chen Goldberg: People are already tracking a lot around efficiency: because compute is so in demand and so expensive, people are looking for solutions to get the best out of their infrastructure. And by the way, I think this is another big difference between a general-purpose cloud and an AI cloud.
What we have learned—and we’ve seen this as we submit our results to the different MLPerf benchmarks and hear feedback from our customers—is that there are different types of bottlenecks along the way. So for example, one of the things we’ve done in order to improve Goodput is make sure we accelerate the data, in both volume and throughput, that gets into the GPU. Anything we can do to make that processing more effective is preferred.
So we built a specific caching mechanism on top of our CoreWeave AI Object Storage that accelerates that. So that will be one example.
Another thing we’ve been doing is giving customers transparency into the system. That means that when a job is not performing well, I can more quickly find the root cause. If my job is underperforming, I want to know why. Do I need to restart? Do I need to fix something? Is it something that will resolve on its own? When the stack is very complicated, that kind of data is really hard to get.
So what we’ve been doing, it’s a solution that we call “Mission Control.” And we are thinking about the stack as one. So no longer thinking about storage separately, network separately, workloads, orchestration and so on. But how do we bring all the data together and make informed decisions. So that’s another way our customers are able to improve Goodput.
And of course from reliability perspective, we have our own proprietary IP that talks about how we test and validate the infrastructure, both proactive and reactive. And those kind of things are all accumulating together to improve efficiency.
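As a concrete illustration of the metric, Goodput can be computed from a timeline of cluster intervals: the fraction of allocated wall-clock time spent on useful training work, as opposed to failures, restarts, or checkpoint recovery. The interval categories below are invented for the sketch; a real system like Mission Control would derive them from telemetry.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # seconds since the job was allocated
    end: float
    kind: str     # e.g. "train", "checkpoint_restore", "node_failure"

def goodput(intervals: list[Interval], useful: tuple[str, ...] = ("train",)) -> float:
    """Fraction of total allocated wall-clock time spent on useful work."""
    total = max(iv.end for iv in intervals) - min(iv.start for iv in intervals)
    useful_time = sum(iv.end - iv.start for iv in intervals if iv.kind in useful)
    return useful_time / total

# A job that trained for 8 hours, lost a node, then spent time recovering
# ran at 80% Goodput even though the GPUs were "busy" throughout.
timeline = [
    Interval(0, 28_800, "train"),                    # 8 h of forward/backward passes
    Interval(28_800, 30_600, "node_failure"),        # 30 min of failed node
    Interval(30_600, 36_000, "checkpoint_restore"),  # 1.5 h restore + catch-up
]
```

Tracking this number per job makes it obvious whether time is being lost to data loading, restarts, or stragglers rather than to the model itself.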
Ben Lorica: By the way, one of the interesting things about IT Ops and DevOps is the telemetry and the amount of data that’s being generated. In fact, it was always the laboratory for all these massive time series analysis tools and databases. So I imagine you folks are innovating around… it’s not just collecting all this telemetry, but making sense of it. So you must be using AI to diagnose AI, right?
Chen Goldberg: Of course. Yes.
Ben Lorica: So, anything to share there?
Chen Goldberg: I think what’s definitely interesting is leveraging the amount of data that we have to accelerate troubleshooting even further, which helps us as a team be more effective and watch trends. But I imagine that’s not unique to us. So that’s just us using AI with the data asset that we have as a platform running at scale.
Ben Lorica: But you must have a team of people who are building AI tools or time series, massive time series analysis tools or something like that, right?
Chen Goldberg: Yes. And every day that goes by, there are more and more people that do that. Because maybe going back to the discussion we had at the beginning when you said like, “Hey, do you use AI at scale?” Of course, like every other company, we’re also consumers of other AI technologies and we are leveraging that significantly.
Ben Lorica: So obviously a lot of listeners have heard about GPUs and GPU shortages and so on. But the other thing that’s been in the headlines recently is memory.
Chen Goldberg: Yeah.
Ben Lorica: So anything to report on DRAM? DRAM and memory constraints, DRAM supply, rising prices?
Chen Goldberg: I’m not an expert from a supply chain perspective.
Ben Lorica: Well, even your high level is a lot more than what we know.
Chen Goldberg: So I think fundamentally the demand for AI infrastructure continues to rise. And it has implications on the entire supply chain. And we are seeing it across the industry. And that’s of course, on one hand, a great opportunity. Not just for building more capacity, but also finding innovative ways to make the infrastructure more efficient. And those are the kind of two things that we are optimizing for.
So internally within the team, of course, we are working with our vendors and working on improving the supply chain. And if anyone is in that space, I’m sure everybody is familiar with that challenge right now. But in addition to that, we are also looking at how we can help our customers optimize and find solutions. I think at the end of the day, that is how we partner with our customers.
Ben Lorica: Because memory is a critical component as well, right? For everything. Because the model sizes, right?
Chen Goldberg: Memory is… and it’s not just memory, but yes. Almost everything in the supply chain is impacted at the moment.
Ben Lorica: So you brought up the topic of reinforcement learning earlier. I’m a long time reinforcement learning fan, cheerleader in some ways, but it’s always been one of these things that’s always just around the corner, beyond the grasp of regular teams, right? So always the province of advanced teams. So you seem to be hinting that it’s getting closer to being much more accessible and usable. You’re seeing more people at least trying it out.
Chen Goldberg: We are seeing more people trying it out. And again, I think that this is where folks are moving to more of those production use cases and the need to make that trade-off.
Ben Lorica: So just to clarify, we’re not talking Silicon Valley bubble companies here. So you’re seeing RL beyond the usual tech suspects? Is that what you’re hinting at?
Chen Goldberg: Tech companies. Not just startups.
Ben Lorica: Not just startups. Okay.
Chen Goldberg: Not just startups. But definitely tech companies that are investing in AI right now in order to innovate and differentiate are the kind of companies that are definitely investing in this space.
Ben Lorica: So for reinforcement learning, maybe you need more CPUs as well, right?
Chen Goldberg: Yes, of course.
Ben Lorica: So then is that part of what you folks are trying to understand? Okay, so if reinforcement learning grows, we need to add CPU capacity? Is that…
Chen Goldberg: Look, I think that some folks, when they think about CoreWeave, just think about GPUs in large clusters. That’s not the case.
Ben Lorica: That’s the stereotype.
Chen Goldberg: Yes. The way people should think about it is that CoreWeave, as the AI Hyperscaler, gives you all the tools and infrastructure you need to run your AI workloads. Period. That means, of course, that we have file systems and object storage and CPUs. And we have a cloud console and API and Terraform and security guarantees and SLAs and SLOs. And with Kubernetes…
Ben Lorica: Sorry. Vector databases?
Chen Goldberg: We are helping customers manage their own. We don’t yet have a managed vector database service. But someone consuming our platform will find a lot of things that are familiar from other clouds. You know, we’ve launched our serverless RL offering. So of course we have an inference service.
So just think about the set of tools and services—and you yourself mentioned like Marimo and Weights & Biases—so that’s our approach. So it’s a simpler stack, highly optimized. And of course if there is a need for CPU and other things, then of course it’s available as part of our offering.
Ben Lorica: Do you folks have Ray? I’m an advisor to Anyscale.
Chen Goldberg: We have customers using Ray on CoreWeave as well. If you look at our documentation, there is some Helm charts and examples of how you can run Ray on CKS, CoreWeave Kubernetes Service.
Ben Lorica: Awesome. So as I mentioned right at the beginning, right, so NVIDIA and CoreWeave are kind of joined at the hip. But I don’t know how much you can answer this question, but there’s also other alternatives to NVIDIA, right? So other GPUs, custom silicon. So that’s not something you folks pay attention to at all because you’re all in on NVIDIA? Or do you monitor the rise of alternatives?
Chen Goldberg: We monitor the entire industry all the time. And we partner with different vendors in the industry based on what we are hearing from our customers. Okay? That’s really our North Star. And that’s where the partnership with NVIDIA has been so great. You know, we work with them as a partner and as a customer, of course.
And it’s important to mention that our differentiation goes beyond the GPUs, of course, right? Because we talked before about the multiple dimensions, and we are investing across the stack. There is a place for innovation in this space, but at the moment we don’t see demand or a need to diversify beyond working with NVIDIA. And NVIDIA already has a portfolio of accelerators, which we make available.
Ben Lorica: And obviously one of NVIDIA’s main secrets, of course, is their software stack: it’s so much more mature, with a much bigger ecosystem, than the other hardware platforms.
Chen Goldberg: Yeah.
Ben Lorica: So let’s see. I wanted to ask you about the other buzzword, which is “Agents.”
Chen Goldberg: Okay.
Ben Lorica: Obviously with agents, the computational patterns are slightly different than just straight up inference. So what changes for a customer that’s doing a lot of agents and what does that mean for their infrastructure?
Chen Goldberg: First of all, as you mentioned, when you’re running agents at scale there are definitely different requirements for compute: GPUs, and CPUs for tools.
Ben Lorica: Yeah, and then agents also have… they might do reasoning, but they might also do these loops. They’re calling tools, they’re calling other agents, right?
Chen Goldberg: Exactly. When we are talking with customers, there are multiple things that are very important. One of course is where they have the data. And data gravity. Agents also create… and we start seeing more and more people talking about it… security considerations. There’s a lot of innovation in this space. You know, we see for example a need for sandboxes as part of that, right? Like how do I run maybe untrusted code as part of those systems. And definitely the scale.
So those are the kinds of things we are seeing customers do. There are also, of course, folks using existing off-the-shelf systems for agents, and I expect we’ll see more of that. We are also customers of other systems internally in some areas.
Ben Lorica: So your sense is people are really using agents? It’s real? And what percentage of people are using… of these workloads are now agentic, you would say? It’s impossible to answer, but…
Chen Goldberg: It’s impossible, but it’s non-trivial. My best insight probably comes from thinking about our own product and engineering team, where I can see it directly. And I imagine that’s true across the industry, but I’m not an expert in other areas.
We’re having conversations about agentic coding, for example: the quality, how it changes code contribution, how you do code review, the size of PRs. So those are conversations about quality, about how much you need a human in the loop and when, and about the expertise humans need to demonstrate in order to be effective. Those conversations really evolve every day.
Ben Lorica: So the best models are still the proprietary closed models, but the open weights models are starting to get better. They’re cheaper or they’re faster, they’re cheaper to run, you can customize them. But the downside is they’re from China. And some enterprises, some industries, that’s kind of a no-fly zone. So what’s your sense of adoption of these Chinese open weights models?
Chen Goldberg: I don’t have the data. But maybe the key point is what you said: open-weights models are becoming better, and then you talked about those dimensions. So in the end it’s a trade-off, right? And it depends on the task. Where RL, for example, can be useful is in making those models better—maybe cheaper and more accurate. When we say better, those are the kinds of things we are looking into. It’s probably too early to know exactly what people are using, because that keeps changing very frequently.
Ben Lorica: So in closing, we’re still in the early days of AI.
Chen Goldberg: Early days.
Ben Lorica: But people should probably start thinking ahead and worry about technical debt even at this point. So what kind of technical debt are you seeing people incur and what should they be avoiding at this point?
Chen Goldberg: So first of all, yes, it’s early days for AI, but it’s moving really fast. And I think that that’s something that if people are not experimenting heavily right now, they’re probably behind. Even for your own personal productivity, for your product, for your teams. So I think that will be the first thing.
The second thing is understanding the implications of that. The market is still constrained. We talked about supply chain, we talked about capacity. Talent will be another. So folks need to make some strategic decision with some unknowns. I know some companies will feel uncomfortable with that, but I think that’s part of experimenting with a new technology.
And when you think about technical debt, I’ve been around for enough decades to know that everything never just changes at once, right? We will always stay with some legacy. So I think the opportunity is to take these new tools and also apply them to the less sexy problems, where they can probably be very useful. Some of the things I’ve mentioned are around troubleshooting, engineering excellence, and productivity. That’s definitely where we see value already today.
Ben Lorica: Actually one thing that people don’t realize is that, so obviously there’s a lot of talk about AI for programming and writing code. But there’s also a lot of AI and agents being used in data engineering pipelines, Ops, DevOps. So there’s a lot of things already happening that the consumer may not be realizing is being powered by AI. And that’s on the technical side.
And I just… in the last few months, I’m getting more and more worried for knowledge workers. This technology is going to be very disruptive. So as Chen mentioned, the onus is really on you to use these tools to upskill, to constantly learn.
Chen Goldberg: Yes. One of my other passions is mentoring, again, part of being in the industry for so long. And if there is one thing that hasn’t changed in what I tell folks, whether they are early in their career or later on, it’s the importance of being experts. I don’t see the tools as a way to avoid depth or expertise or knowing what you do. So maybe that’s the other thing people should think about, especially when there is a new technology: What are the things I know best where I can apply my tools?
I was just talking with a friend, for example, who’s in the marketing domain, and she said she feels like she has superpowers now. Right? She has so much experience, and she can do things she couldn’t do before in much less time. That’s the way I think people should think about it.
Ben Lorica: But that means though that there might be fewer… you might end up needing fewer. Because one thing that seems like the stats are bearing out is the entry level jobs for recent college grads, that’s really soft at the moment. So there’s a bit of a hollowing out of that pipeline, right? So you need those entry level people to become mid, to become senior inside your company, right? So if you kind of slow down the hiring of that entry level, you lose that.
Chen Goldberg: Yeah. I can’t speak to the correlation. I do believe… and you know, we had these kinds of concerns in the past with different technology changes. The advice I would give to junior folks early in their career: The technology is actually accessible. There’s a lot you can start doing and building expertise in.
And that’s something, you know, maybe I will just bring it back to Kubernetes. One of the things that I loved when we just started is that there was no one with 10 years experience with Kubernetes back then. Or with cloud-native. It was all new. This is the time. Okay? There is this new technology. And that’s the opportunity for folks to lean in.
Ben Lorica: And with that, thanks for joining us.
Chen Goldberg: Thank you for having me.
