The Data Exchange

The Junior Data Engineer is Now an AI Agent

Matthew Glickman on Data Engineering Automation, Agent Workflows, Confidence Checks, and Institutional Knowledge.

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Matthew Glickman, Co-founder and CEO of Genesis Computing and former VP of Product at Snowflake, joins Ben Lorica to discuss the emergence of AI data agents tailored for the data engineering persona. They explore why many enterprise AI projects hit a wall during the “last 10%” of development, how agents are successfully automating complex pipelines and legacy migrations, and why the “build vs. buy” equation has fundamentally changed in the era of Generative AI.

Transcript

Below is a polished and edited transcript.

Ben Lorica: Today we have Matthew Glickman, co-founder and CEO of a new startup called Genesis Computing, which you can find at genesiscomputing.ai. The taglines from the website are “Enterprise ready AI data agents” and “Delivers AI data agents with advanced built-in skills to securely automate your most critical data workflows. No building, no custom coding, just results from day one.” With that, welcome to the podcast, Matthew.

Matthew Glickman: Awesome, Ben. Great to see you again. It’s been 10 years since we last did one of these, so it’s great to be back.

Ben Lorica: To establish your credentials here, you were at Goldman Sachs for many years, and then you went to Snowflake, where you headed product for many years. We’ll mostly be talking about what Matthew is doing at Genesis, but more broadly, data agents in general. To start off, after you left Snowflake, there were lots of things you could have been doing or building. What specific problem—no pun intended—was the genesis of Genesis Computing?

Matthew Glickman: That’s perfect; the name works. Great question. We could have done anything, but I believe life presents you with opportunities, and you either take them or ignore them. I believe in not ignoring them. What drove me to leave Goldman and join Snowflake was seeing the cloud tsunami coming. I saw an opportunity to take what I had learned running quant data teams at Goldman and bring that expertise to Snowflake. I was actually the person who brought both Databricks and Snowflake into Goldman—one of the first financials for both.

Similarly, as the AI wave started to sweep over us, particularly in the enterprise space, I saw an opportunity. Every customer I met with at Snowflake was struggling. They saw these glimmers of opportunity to unlock their data teams using GPT-4. They would make these incredible demos and then hit a wall. Everyone was trying to build a framework around these powerful models to unlock their data teams, who were the bottleneck in data projects, data engineering builds, and providing data products for business users. My co-founder Justin Langseth and I realized that given what we knew about the space, why not build this once in a way that all these enterprises could use to really take advantage of this new AI technology?

Ben Lorica: So GPT-4… I guess this is post-ChatGPT, right?

Matthew Glickman: Right.

Ben Lorica: So by that point, enterprises and financial services were already playing around with some notion of automation.

Matthew Glickman: Right. They were playing around with it and had immense pressure from their CEOs and leadership to produce something with AI. That “something” was ill-defined, but they were all creating these killer demos like, “I can ask my data this question” or “I can transform my business with this dashboard that comes up on the fly.” These were great demos, but they would work three times out of five. Or they would give wrong answers. They would all hit this wall. They saw something powerful there, but you know the expression that the last 10% of a project is always the hardest? In AI, it actually gets very steep, very fast.

Ben Lorica: When you read headlines or see survey results saying, “Hey, there’s a lot of AI usage, but people get stuck and never deploy to production,” that resonates with you?

Matthew Glickman: That is exactly the problem. I think we all get fooled—I’ve been there—because these demos are so compelling. You feel like, given where I’ve reached with this demo, it can’t be that hard to get to the finish line. But because this technology is non-deterministic, it’s incredibly hard to get that last 10% of value.

The second thing we see is that everyone focuses on the “sexy” use case—putting an AI in front of a customer or CEO to ask open-ended questions. The problem with that use case is that it has a very asymmetric risk profile. If it works, great. But if it doesn’t, it’s a disaster. Even if it performs well 99 out of 100 times, that tail is a disaster. I get why everyone focuses there—it seems like the sexy problem, like creating an AI financial analyst.

Ben Lorica: Or the AI investment banker.

Matthew Glickman: Exactly. Sounds like a great idea.

Ben Lorica: Just to interrupt here for a second, but regarding that last 10%… if you were to describe that to our listeners, would that be mainly accuracy? What is that last 10% if you were to break it down?

Matthew Glickman: I think it’s reliability.

Ben Lorica: Or predictability, right?

Matthew Glickman: Predictability. Right.

Ben Lorica: So I guess there’s that whole thing where I don’t mind if it’s only 80% accurate as long as I know the 20% where it’s not accurate.

Matthew Glickman: Right, and knowing when it’s going to happen. The problem is, you could test it, and these models can fool you because it seems like they get it so well. It’s almost like observing a brilliant child prodigy. It wows you, so you say, “I got this, this is going to work.” But you don’t realize that because it’s probabilistic, as soon as it matters, it can go off the rails. The classic example was the Canadian travel company where an agent on their website gave away free flights. They never did it in testing, but it did it in that scenario.

I think the real opportunity is in enterprise use cases where the user is happy to have even 60% or 70% of their time back and is willing to handle the cases when it doesn’t perform. However—and we’ve spent a lot of time on this—for that 60% to 70%, you need a system around these models that can confidently tell you, “In this case, we are highly confident that it has produced the right answer.” Instead of rolling the dice and leaving it to the end user to figure out if it’s right. If I still have to review every result, you haven’t saved me time. What I want to be able to do is say, “Here is a task. Here is how you should judge if the task is correct. Now go off, and with guardrails, figure out what was correct. If you’re not confident, come back to me.”

Ben Lorica: A listener might look at your LinkedIn profile and say, “Well, of course he’s working on data-related applications because he came from Snowflake.” Let’s set aside the AI investment associate and focus on data applications, particularly in financial services. You guys started this company focused on data applications of AI. What’s been the response in the market?

Matthew Glickman: It’s been huge. We are focusing specifically on the Data Engineer persona. We’re not trying to replace the investment banker, the equity trader, or the claims adjuster.

Ben Lorica: What is the difference between the data engineering persona and those other personas at a high level?

Matthew Glickman: The comical part is that I’ve never met a data engineer—and I’ve been one, sold to them, and have many friends in the field—who doesn’t want to give you their job to do.

Ben Lorica: Or is not fiddling around with automation.

Matthew Glickman: Exactly. There is something about this space—a cross between coding, data semantic understanding, and business understanding. There’s a great meme of a character wearing a shirt that says, “I’m the data engineer and no one knows my name.” The only way people find out who the data engineer is, is when things break. Because of that persona, this hits every single time. The only feedback we get when we’re introduced to customers is, “If this works, this is a no-brainer.” Conversely, financial analysts want that job; it’s in high demand, and they don’t want to be replaced. Data engineers are happy to move on to other things because they always have more to do than they have time for.

Ben Lorica: A listener might say, “Well, that may not be a great idea because you’re building tools and selling them to the people who may get displaced by the tools.” But the reality is, as you said, data engineers have more to do than they have time in the day for.

Matthew Glickman: And they’d rather do more business-impacting tasks. Trying to track down why a pipeline is breaking, figuring out how to add a new cleansing or entity resolution, fiddling with tools… these tasks bog them down. They want to focus on things like helping the CMO run more campaigns to reduce customer churn. They don’t want to focus on the mechanics of figuring out how to code with a new framework or how to optimize twelve different ways of calculating revenue into one.

They also want to focus on building the AIs that will face off to their business users. The expression that always drove me crazy—and I think Databricks and Snowflake both use this term—is that you have to have a solid data platform and data strategy before you can do AI. I think that’s a completely false statement. You should be using AI to get your data in good shape so you can do more AI.

Ben Lorica: People associate data engineers mainly with pipelines. Years ago, I started thinking about injecting software engineering rigor into data engineering, and now people like the folks at Bauplan are talking about it. But now you’re talking about automation and “vibe coding” to some extent. Doesn’t that introduce a lot of new personalities into the data engineering space who may not have that software engineering rigor? What are some things people have to be careful about as they introduce more of these AI capabilities?

Matthew Glickman: I don’t think data engineers are going away. If anything, this is just going to give them superpowers. The customers we work with see this effectively as hiring a team of junior data engineers.

Ben Lorica: So it’s not like I can take an analyst and turn them into a data engineer because of your tools?

Matthew Glickman: Absolutely not. That is a completely false narrative. We sell to data engineers; we empower data engineers; we make them rock stars.

Ben Lorica: Let’s make it concrete. I start using Genesis and I want to build a pipeline. How does it work?

Matthew Glickman: You describe what you’re trying to do as if you were assigning it to a new junior engineer.

Ben Lorica: Like a prompt.

Matthew Glickman: You provide a prompt or a document saying, “This is what I’m trying to do.” Typically, a business user might say, “I have this report. I want to produce this new Tableau dashboard from my campaigns.”

Ben Lorica: And now Databricks and Snowflake will go, “Okay, that’s great, but you need our catalog.”

Matthew Glickman: Right. You need the catalog, you need to ingest the data, you need to rationalize it. But you start with the business requirement. At that point, we have an agent who onboards the project. Her name is Eve—Genesis Eve. We have a set of pre-built “blueprints.” Think of them as learned skills.

Ben Lorica: Based on my data platform? So it has access to my catalog, my lakehouse, and warehouses?

Matthew Glickman: It does. But think about the blueprint as the skill that’s independent of your specific data. For my company, what toolset do I want to use? Databricks, Snowflake? Do I want to use DBT or Spark pipelines?

Ben Lorica: Oh, so you specify that.

Matthew Glickman: You specify that in a document. This document becomes the playbook the system uses to take business requirements and reliably guide these AIs through multi-step processes, with checkpoints along the way where the AI has to prove to the Genesis system that it has actually completed that step. Where most people fall down is taking a relatively simple set of steps and assuming you can extrapolate.
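For illustration, a playbook of the kind Matthew describes might look roughly like the sketch below; the field names and steps are invented for this example and are not the actual Genesis blueprint format.

```python
# Hypothetical blueprint: a multi-step plan where every step carries an
# explicit verification gate the agent must satisfy before moving on.
blueprint = {
    "toolchain": {"warehouse": "snowflake", "transform": "dbt", "bi": "tableau"},
    "steps": [
        {
            "name": "ingest_campaign_data",
            "instruction": "Load the latest campaign exports into staging tables.",
            "verify": "staging row counts match the source extracts",
        },
        {
            "name": "build_revenue_model",
            "instruction": "Consolidate the existing revenue calculations into one dbt model.",
            "verify": "model output reconciles with last quarter's reported revenue",
        },
        {
            "name": "publish_dashboard",
            "instruction": "Refresh the Tableau campaign dashboard from the new model.",
            "verify": "dashboard totals match the dbt model output",
        },
    ],
    "on_low_confidence": "pause and escalate to the data engineer",
}
```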

Ben Lorica: But I’ll need to provide Genesis access to my catalog, my lakehouse, my warehouse.

Matthew Glickman: Yes. Part of the onboarding is installing Genesis into your data environment—Snowflake, Databricks, AWS, wherever. You connect them to your data sources, catalogs, and external systems.

Ben Lorica: So it knows everything that a new data engineer would know, but maybe not an experienced data engineer who knows every nook and cranny and business rule.

Matthew Glickman: Exactly. But where it gets interesting is that the acquired knowledge you’re alluding to will be absorbed by the system over time. This isn’t unique to data engineers, but the institutional knowledge required to keep these enterprise data plants running is often in multiple people’s heads. It’s a terrible liability. I saw this firsthand with a customer where the head of data engineering was burnt out. We were there the day he was leaving, passing it off to the remaining person on the team. The guy leaving was the happiest I’ve ever seen him, but I knew all this knowledge was walking out the door. We saw the terror on the face of the guy inheriting this, trying to absorb by osmosis what happens every other Thursday when a specific feed doesn’t run right.

That is one of the real value propositions: being able to, in an ambient way, acquire knowledge out of people’s heads and get it into the system. The next time a project starts, we don’t have to ask Mary for the specifics on how to set up particular pipelines. The system has acquired it.

The last piece of this—and this is critical—is that we don’t run as SaaS. We install our system inside the customer’s environment. The Genesis system gets smarter for that customer, but that knowledge isn’t leaking to us as Genesis the company. Our customers win because not only has this been documented out of people’s heads and into a system they can read in natural language, but it securely becomes an asset they can use for other projects. We’ve had customers say that if all we did was produce documents for their existing systems, we had them at that.

Ben Lorica: There’s the notion of using your tool moving forward for pipelines. I get that. But regarding your example of the guy retiring—unless he had it written down, how much could you possibly absorb on that one day?

Matthew Glickman: Which they never do. Let’s be serious. Even in bigger teams, information tends to be dispersed. Each person has a piece of it, and when someone leaves, you probably let it go until something breaks. But this ability to acquire knowledge by reading the code and documentation is key. When the system proposes something and a human says, “No, this is the way we define revenue here,” that becomes part of the acquired knowledge.

In the end state, this corpus of knowledge truly becomes an asset. It can find things because the knowledge isn’t dispersed. It can say, “Wait a second, why are we calculating sales numbers between these two regions in three different ways?” Or, “We have code here, but it’s not consistent with the data in the database.” A human being would never get around to doing that because they’re just trying to do the minimum.

I was talking to an investor about this. At Goldman, there wasn’t a specific role for data engineers. You just had very highly paid quants who spent 70% of their job doing data engineering because they were the only ones who understood the data. Of course, they were left doing the grunt work and monitoring jobs. It’s like a part-time job for people that they aren’t even paid for; it’s just something they have to do to keep the lights on.

Ben Lorica: Two other examples come to mind. One might be more specific to financial services: legacy systems like COBOL and Fortran. The people maintaining those systems are dying off, and migration isn’t going to happen right away. Does your system handle that?

Matthew Glickman: Yes, it’s absolutely a killer use case. We had a healthcare company that had already signed a traditional consulting firm for a massive migration project from SAP, HANA, Oracle, Informatica—a whole bunch of legacy systems. The consultant was at it for six months, spinning their wheels because no one understood the technology anymore. They brought us in to see what the Genesis system could make of it.

They gave us a subset of the problem, our system just read the code, and it came back a week later. It found—I didn’t know this, but evidently the AI did—that SAP abbreviates certain field names in German. It uncovered a piece of logic, used to classify customers, that the humans working on it for six months had completely missed. It’s not only going to do the things we don’t want to do or replace people in scarce supply; it’s going to do it better. Humans don’t want to read every piece of code and translate everything.

What makes these migrations hard is that you don’t really want to migrate line-by-line.

Ben Lorica: Right. You want to know what it is doing, and that’s what you want to migrate.

Matthew Glickman: Yes. You want to migrate and re-architect it with the latest and greatest way of doing things. Don’t do the same row-by-row transformation; do the entire thing in bulk. Or, as happens often, they had the same logic implemented three different ways producing different results. They wanted a rationalization project to produce one set of right results.

The important thing is that because we can install in the cloud or on-prem, the agents can connect to the source system as part of the migration. They interrogate the source systems to see, “Now that I’ve rewritten this in this new way, does it match the results of the old system?”

We’re never converting code-for-code. The system reads the code and documentation, then produces human-readable system-of-record documentation used to produce the new code. Unlike any human documentation I’ve ever seen, this documentation will always be in sync with the code. The plan creates documents explaining how we’re going to calculate things. Humans—specifically business users like the CFO—can read this and verify if it’s correct. The coding is the easy part. Having AI generate the right code based on a well-documented, golden source of truth that humans can opine on is the key.
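A simplified sketch of that parity check, assuming placeholder query functions for the legacy system and the rewritten pipeline (the real agents would issue these queries through their database tools):

```python
# Illustrative parity check between a legacy system and its rewritten pipeline.
def legacy_monthly_revenue(month: str) -> float:
    # Placeholder: in practice, query the legacy SAP/Oracle logic here.
    return 1_000_000.0

def migrated_monthly_revenue(month: str) -> float:
    # Placeholder: in practice, query the re-architected pipeline here.
    return 1_000_000.0

def revenues_match(month: str, tolerance: float = 0.001) -> bool:
    old, new = legacy_monthly_revenue(month), migrated_monthly_revenue(month)
    drift = abs(old - new) / max(abs(old), 1e-9)  # relative difference, guarding against zero
    return drift <= tolerance

# Any month that fails goes back to the agent, and ultimately a human, to explain.
mismatches = [m for m in ["2024-01", "2024-02", "2024-03"] if not revenues_match(m)]
```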

Ben Lorica: Let’s go under the covers a bit. For that spec your system produces, are you relying on one model or multiple models? And do you use your own models or commercial APIs?

Matthew Glickman: We use commercial models. For reasons I still don’t understand, these frontier models continue to improve for this use case more than anything else we’re seeing. In the consumer space, they may plateau, but every single release from the top frontier models—Anthropic, OpenAI, and it will be interesting to see if Google is now in that mix—is a leap.

Ben Lorica: Probably because it’s something they use themselves. You have natural validators—you can run the code and make sure it works—and there’s a lot of dogfooding involved.

Matthew Glickman: I think you’re right. But even more than regular engineering, data engineering requires context. The agents understand they are going to acquire knowledge by running a query or reading a document, then making a change and testing a pipeline to understand the impact. Traditional software engineering is a stateless problem: run the program, see results, make a change, rerun. In the data world, there are side effects.

Ben Lorica: That’s the challenge. That’s why there was this desire to inject software engineering rigor into this problem.

Matthew Glickman: It’s a great goal, and I think the way you get there is by AI taking us there. You’re not going to change human behavior in a space under so much pressure. AI can take us there by inheriting the current environment and, as it becomes more aware, taking us on that path. We can delegate that rigor to these AI systems.

Ben Lorica: One challenge with data engineering pipelines is that you tend to touch multiple distributed systems—sensors, Kafka, Spark. There’s a lot of infrastructure you need just to mock up and test, and it’s complicated to mimic production 100%. Going back to my earlier question: how many models do you use?

Matthew Glickman: We let customers choose from the set of top-tier models we have tested.

Ben Lorica: And they have their own accounts?

Matthew Glickman: They have their own accounts—directly to Anthropic or OpenAI, or through Bedrock, Azure, Databricks, or Snowflake. We don’t have to be a trusted party in that. But we won’t let them use a model that’s not top-tier because they just won’t get the performance.

Ben Lorica: So you’re constantly evaluating.

Matthew Glickman: Constantly. We’re doing our own evals now that we’ll publish soon. Benchmarks are gamed these days, so we’re testing real workflows.

Ben Lorica: So you may support Gemini at some point?

Matthew Glickman: Yes, we’ll definitely support Gemini. In our early testing, it’s very different. Our system has to understand how to behave differently for each model. For example, Anthropic’s Claude 3.5 models are much more chatty in explaining why they make a tool call.

Ben Lorica: And much more expensive, too.

Matthew Glickman: Much more expensive because of that. GPT-4o is much less wordy and less expensive, but you give up knowing why it made a decision. In our initial testing, Gemini seems much more self-critical. We built hundreds of custom tools, and if tools do not behave reliably, AIs struggle. Claude is much more adaptable if it doesn’t get a result it needs. Gemini is not. We’ve seen cases where Gemini says, “I called this function, it didn’t do what I expected,” and it starts to go a little insane because it read the documentation and knew what the function was supposed to do. We’ll have to have different controls around Gemini. But having the three big model providers banging on each other is amazing for us.

Ben Lorica: Claude Code… my experience is that it will brute force things.

Matthew Glickman: “Brute force” is exactly the right word.

Ben Lorica: It might cost you. You’re happy you got a result, then you look at the bill. Do people ask you about open weights, particularly the Chinese models? Are Chinese open weights models effectively a “no-fly zone” for enterprise in the US?

Matthew Glickman: At this point, people are just looking for performance. The big impact open weight models, particularly from China, will have is putting even more pressure on the other three to keep going.

Ben Lorica: Downward pressure on price, too.

Matthew Glickman: Downward pressure on price and upward pressure on performance. To justify the price, they have to really outperform. Right now, we’re seeing zero demand for anything other than the top tier. Everyone wants the latest—o1, 4o, Gemini.

The important thing—and this goes for the entire AI space—is that it really is a race in the enterprise to be there first. We want our system to become so knowledgeable about an enterprise that if someone else comes along, they have to be better than us not just feature-for-feature, but better than us with all the acquired knowledge we’ve learned about the enterprise. AI systems, if designed correctly, get better over time about the enterprise they’re deployed in. That’s a competitive advantage that is really hard to compete against.

Ben Lorica: Matthew, there’s another class of startups going after the business analyst persona. Their entry point is going into the warehouse or lakehouse, learning the catalog, and helping analysts or business people write their own reports. Over time, those startups might say, “We need to help build pipelines because some ad hoc reports require custom pipelines.” Do you see a future where there’s a convergence of these two? You’re starting from the data engineer, they’re starting from the analyst.

Matthew Glickman: It’s possible. I challenge them to do that.

Ben Lorica: You know the class of startups I’m describing, right?

Matthew Glickman: Yeah, the “BI flavor of the month” that starts with an AI prompt. I think that’s a very tough space to start from. It’s getting harder to differentiate.

Ben Lorica: Their assumption is that in a big enterprise like JP Morgan, they use everything—Databricks, Snowflake, BigQuery. They want to build a universal catalog to better serve the analyst persona. You’re doing the same thing for the data engineer persona.

Matthew Glickman: They may try. However, starting with the understanding of these complex environments and working towards the business user is much more compelling than trying to work backward.

Ben Lorica: It’s a harder lift to go from standard reports to bespoke pipelines. Whereas once you understand the pipelines, you can generate the reports.

Matthew Glickman: Exactly. I can make it easy for AI systems that will likely talk to your data APIs and be monetized in the data platforms. Databricks, Snowflake, AWS—they’re all going to have “talk-to-your-data” functionality built in. BI tools like Tableau will have it built in because their chance of surviving is leveraging their install base and visualization experience.

People don’t just want universality; they want good visualizations that produce the right answer and don’t give the CEO the wrong revenue number. All the BI tools had visions of building into the pipeline space, and they all failed because it’s a different skillset. We’re in a good position because with things like Iceberg, Databricks and Snowflake are providing a cleaned-up view of the world. We can feed that demand and pull in legacy systems. We have a customer with a pile of Perl code they can’t find engineers to maintain anymore. Solving the problem of complex marketing data pipelines pulling from inbound/outbound systems, Salesforce, and warehouses—that is work human beings were never built for. We’re lucky there’s now something that can make this maintainable. Let the AIs do that stuff.

Ben Lorica: When you first come into a team, you describe usage as “human-in-the-loop.” Why do you use the word “agent” then?

Matthew Glickman: That’s a great question. We’re not selling agents; our system uses agentic technology to initiate those steps. We give the agent tools—run a query, build a pipeline, run DBT—and a blueprint of guardrails, and say “go.”

Ben Lorica: And then there’s a human that interacts with this.

Matthew Glickman: There’s a human, but importantly, the system escalates to the human when necessary. The human loop has to be requested. The system shouldn’t say “give me work to do,” but rather “stop and ask for help when needed.”
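As a rough sketch of that escalate-only-when-needed pattern (tool names and the review queue are illustrative, not a real Genesis interface):

```python
# Sketch of "escalate only when needed": the agent works through its plan with
# its tools and interrupts a human only when a guardrail trips or a tool fails.
TOOLS = {
    "run_query": lambda sql: None,         # would execute SQL against the warehouse
    "run_dbt": lambda model: None,         # would build or refresh a dbt model
    "deploy_pipeline": lambda spec: None,  # would roll out a pipeline change
}

def execute_step(step: dict, human_queue: list):
    """Run one blueprint step; append to human_queue only if help is needed."""
    try:
        result = TOOLS[step["tool"]](step["arg"])
        if not step["verify"](result):     # verification gate from the blueprint
            human_queue.append((step, "verification failed"))
        return result
    except Exception as exc:               # unknown territory: stop and ask for help
        human_queue.append((step, str(exc)))
```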

Ben Lorica: The whole point is to increase productivity, not create work.

Matthew Glickman: Exactly. I want time back. Time is the measure of success here. How much time are you freeing up so people can do other things? This is why I think the MIT study that struggled to find productivity gains was using a flawed metric. The only reliable measurement will be time saved. You’ll see this in hiring trends. A JP Morgan won’t be the same size company in five years. We’re not selling agents or tools; we’re selling time.

Ben Lorica: I’ll give you something else to sell besides time: capability expansion. I’m giving your data engineers superpowers.

Matthew Glickman: Yes. I like that. Capability expansion is great.

Ben Lorica: What is the reaction from data engineers when you show them? If there’s any pushback, what is it?

Matthew Glickman: The only pushback we’ve gotten—and I’m being truthful here—is skepticism that it’s going to work. When we come in, it sounds too good to be true. They say, “This can’t possibly work. My environment is super complex. I have legacy systems, data in Salesforce, etc.”

But if there’s an API and a human can be trained to do it… even if these models don’t continue to improve (which they clearly will), at the current capability level, we’re already at the level of a junior data engineer. Assuming that’s the case, of course I’m going to take this. It’s cheaper than hiring a person, works all night, and I can run an army of them.

The risk—and I’ve spoken about this on LinkedIn—is that the jobs it’s going to take out are the entry-level data engineer jobs. Those jobs are going to disappear.

Ben Lorica: Which means you lose that pipeline of entry-level people becoming senior people.

Matthew Glickman: Exactly. The pipeline is going to be missing. There’s a social contract that if you study hard and get internships, there will be jobs. The worst part is that schools aren’t teaching AI as a core skill. Finance and humanities programs aren’t teaching it because it’s seen as cheating. Students will graduate without necessary AI skills, but the entry-level jobs are disappearing. The remaining jobs will be senior ones requiring you to manage an army of agents.

We have to get this into education systems so AI becomes a core skill. It’s a capability expansion—a data engineer with this skill becomes a super data engineer. We have a responsibility to force people to use this. You have to use it to learn what it means to become better.

Ben Lorica: One key challenge is that the instructors themselves aren’t familiar with it. Let me close with this: there’s a subtle pushback you’ve probably felt. You present Genesis, they say “cool,” and after you leave, the data engineers say, “Man, that’s great, but we can build that ourselves. Wouldn’t it be cool to build it ourselves?” Obviously, the reason you don’t is maintenance.

Matthew Glickman: You have to maintain it. I’ve always seen this “build vs. buy” disease. If you can build a product and make money from it, build it. But if you’re building it internally for cost, someone building it for revenue will always build it better.

I remember at Goldman, before they embraced the cloud, they had an internal private cloud. A quant waited three months to get the form to apply for his machines. He saw duplicate mount points, deleted one he didn’t recognize, and hit apply. He figured if it didn’t work, it was only his 10 machines. Well, no one had ever done that before. He hit the button and took out half of Goldman.

I went up to him and said, “If it’s any consolation, I heard AWS had a similar problem recently.” He asked, “Really?” I said, “Of course not.” Because for AWS, every second of downtime is lost revenue. Everything is optimized.

It is a disease, but we actually don’t see it that often. People view it as, “Even if you have ideas of how to do this, learn from us. Worst case, you’ll learn about the obstacles you’d hit on your own.”

Ben Lorica: To the managers listening, this is the old build vs. buy conversation. But particularly now, people want to learn how to use these AI tools. You need to discern if someone is volunteering to build something just to learn the tools and then leave your team.

Matthew Glickman: Exactly. Just think: Is it core to my business? That has always been the test, but even more so in the AI age: build only the things that are core to your business. For everything else, find someone for whom it is core to their business and pay them.

You should hire the company where the people are experts in that space. Hire Harvey for law because the people behind Harvey are lawyers. Hire Genesis because we’ve lived in the data space. Build what gives you a differentiated advantage, but realize it has to be supported forever.

Ben Lorica: And with that, thank you, Matthew.

Matthew Glickman: Awesome. Thank you, Ben.
