Is AI a Utility? Defining Usability and Public Trust

Ben Lorica and Evangelos Simoudis on AI Utility Characteristics, Section 230 Reform, and Algorithmic Amplification.

Subscribe: AppleSpotify OvercastPocket CastsAntennaPodPodcast AddictAmazon •  RSS.

In this episode, Ben Lorica and Evangelos Simoudis of Synapse Partners explore the complex reality behind AI-driven layoffs, from automation and upskilling gaps to strategic shifts in R&D. They also dive into the massive capital investments in AI, discussing the growing pressure for ROI and the emergence of LLMOps as a form of financial management. The conversation highlights practical strategies for enterprises, emphasizing the need to break down organizational silos to succeed in the AI era.

Interview highlights – key sections from the video version:

Jump to transcript


Subscribe to the Gradient Flow Newsletter


Related content:


Support our work by subscribing to our newsletter📩


Transcript

Below is a polished and edited transcript.

Ben Lorica: All right, so we’re back for my regular conversation with my friend Evangelos Simoudis of Synapse Partners. He has a blog which you can find at corporateinnovation.co. We are recording this on the morning of December 12th, 2025. We’re going to talk about a lot of things, and the links to the resources will be found in the episode notes on dataexchange.media. If you are on YouTube, please hit the subscribe button. Otherwise, subscribe wherever you get your podcasts.

Ben Lorica: Topic number one: We want to drill down on the notion of AI as a utility. Is it or is it not a utility? The context here is we’re discussing this the week after President Trump signed a controversial executive order aimed at dismantling state-level regulations for governing AI, with the goal of creating a single national regulatory framework. They’re arguing that this unified front is necessary to compete with China. All right. So with that long-winded intro, Evangelos… utility. Give me one property of a utility, and then tell me whether or not AI meets this property.

Evangelos Simoudis: The first one would be whether it is broadly usable. What I mean by that is, can everybody in the population use this capability, or do they need special training before they can do it? My short answer is that today we’re finding that yes, people do need special training.

We have on one extreme the approach that Middle Eastern countries are taking, where they’re starting to introduce AI in the early stages of their children’s curriculum. We have other examples of enterprises finally realizing that their employees need that kind of training, and they’re starting to put programs in place. But even though LLMs and chatbots have shown tremendous capability, “broadly usable” has to be one of the characteristics.

Ben Lorica: One can say, for example, that electricity is broadly usable. I can just go up to an outlet and plug something in. To some extent, maybe listeners will say AI is broadly usable because I can go to a chatbot and start talking to it. Whether or not I’m using it properly is a secondary thing, but it’s the same thing with an electricity outlet.

Evangelos Simoudis: That goes to my second characteristic actually, which is accessibility. I separate usability—what it takes for me to interact with a system and make good use of it—versus accessibility, which to me is like the dial tone.

Ben Lorica: Let me ask you about usability. Some bullish listeners may argue, “Well, it’s usable because I can just use it.”

Evangelos Simoudis: Yes, but the question is, in order to make effective use of it, do you need better education? Frankly, from our firm’s perspective, when we talk to enterprises—and this is how a lot of this conversation started—we’re finding that they recognize that in order for their employees to be effective users of AI and make a difference to their business (productivity-wise, cost-wise), these employees will need to have the right education. Whether that education is a two-hour course or a longer curriculum, it doesn’t really matter to me. But the fact is that despite the easy interaction with a chatbot, in order to make it usable, my argument is you need certain things.

Same thing, by the way, if you take electricity. You don’t want your child to put their finger in the outlet. You need to tell them that this is a bad thing. Or you need to teach the employee the difference between 110 volts and 220 volts. There are differences like that. That’s why I’m talking about usability. I’m separating that from accessibility, which to me is the notion of a dial tone or electricity provision. We’re not there yet. We’re getting there, but to me, these are two distinct characteristics rather than one.

Ben Lorica: So regarding accessibility… where does AI fall?

Evangelos Simoudis: I don’t think we are there yet. We’re striving to do that. Look at the numbers reported by OpenAI; they say, “We have 900 million monthly users.” Think of how many people have a mobile phone around the world. Think of what is necessary for somebody to access and make good use of even a chatbot, let alone other AI tools. So we’re getting there, but we’re not there yet.

My argument is that if we want to get there, the question is: what kind of investments will be necessary and who will be making these investments? This is the difference between utility versus infrastructure. In the case of a utility, take any utility in the world, they are required to make certain investments to provide access. Think of what we did when we wanted to provide internet in rural areas in the US. I think that if we want to make AI a utility, we need to be thinking along those lines.

Ben Lorica: I see what you’re saying. Basically, even broadband access in the US is still wanting.

Evangelos Simoudis: Exactly. And again, the question is where do we want to place AI in this national imperative?

Ben Lorica: All right. Property three.

Evangelos Simoudis: Affordability. Today, if I want to have continuous interaction with a chatbot, I need to pay. If I want a free interaction, then the quality of the output I receive is impacted. Think of the various plans.

Ben Lorica: What about electricity?

Evangelos Simoudis: With electricity, I have rates. It’s not that I get lower quality electricity. I get paid on use, but the quality is exactly the same. Here we’re saying that with AI, we have a difference in quality based on how much you’re willing to pay.

Ben Lorica: So it’s service-oriented delivery.

Evangelos Simoudis: It’s very similar to the internet. Today, I pay a higher price to my provider in order to get higher speeds or better bandwidth. I think we need to think about what that means for AI. I’m not saying that we cannot get there, but depending on how we want to think of AI, we need to be thinking about these issues. I recognize that all of these companies providing this capability are losing a lot of money—think of how much money OpenAI is using as a poster child of this. But if we want to think of it as a “utility” for the population—to make them better, to make them more intelligent—then we need to be thinking about affordability.

Ben Lorica: So in this dimension, AI is in the same boat as the internet?

Evangelos Simoudis: Yes, it is.

Ben Lorica: And mobile phones, right?

Evangelos Simoudis: Mobile phones in what way?

Ben Lorica: Mobile phone access. Depending on my plan, I may have a better data plan than you.

Evangelos Simoudis: Yes, but the quality of the responses is not impacted. It is impacted regarding how much data I can access, or where I have it accessible. Maybe I don’t have accessible mobile telephony when I go to another country because I haven’t paid for that plan. But the quality of my reception is not being impacted by how much I pay.

Ben Lorica: Right. Any other essential characteristics of a utility?

Evangelos Simoudis: I think the issue is trust. I trust that my electricity provider is going to be giving me the right service.

Ben Lorica: The voltage is as advertised.

Evangelos Simoudis: Right. And here, because of the data and because of user-generated content, the trust is not there. Who is going to establish that trust is a whole different story. Actually, I will say that for cloud computing and the internet service, I trust it. When my provider tells me, “This is the bandwidth you’re going to have, this is the uptime,” there is trust there.

Ben Lorica: But isn’t that trust location and service-provider related? If I’m in a different part of the country or world where the internet is flaky, and I schedule a Zoom meeting with you, I don’t trust that I’ll be able to meet you.

Evangelos Simoudis: Are you talking about quality of service, or are you talking about the quality of the output? Because here with AI, we have two issues. There is the quality of service…

Ben Lorica: Quality of service is one component.

Evangelos Simoudis: And there I think we’re okay. Whenever I log into ChatGPT, I can have an interaction. But to me, it’s more an issue of: do I believe the result?

Ben Lorica: I see what you’re saying. Assuming that I can connect to the electricity provider, I know I’m going to get the right voltage. In this case, I don’t know if I’m going to get the right answer.

Evangelos Simoudis: That’s right. Because I do not know whether the data used to create that model was vetted enough, scrutinized enough, or whether the model is creating behaviors that were completely unanticipated. Again, this is not an uptime issue. The model is up all the time.

Ben Lorica: Especially right now when the foundation model providers are in this very competitive race. They’re all pushing models quickly to get ahead of the competition, adding features, and it’s not clear how much red teaming, QA, or testing they’re doing.

Evangelos Simoudis: Exactly.

Ben Lorica: But Evangelos, in the electricity market, do I have a choice as to who my electricity provider is? Yes, in San Francisco. Maybe there’s competition, and all of them behave equally well in terms of the output dimension that you described.

Evangelos Simoudis: The more I was thinking about this, Ben, I saw a spectrum. I mentioned earlier countries like the UAE, Saudi Arabia, and even China on the other side of the world; the government is taking a very hands-on approach to how they’re going to deal with AI. I think the United States is on the other extreme, where it’s letting the market decide, not unlike what we’ve done with other technologies. And then Europe is somewhere in between. I think as we decide how we want to utilize AI, we will need to decide where we fall in this spectrum.

Ben Lorica: Any other features?

Evangelos Simoudis: No, I think that is good for a start. I’m not advocating that prices be set by some governmental body, like with electricity. But assuming that we want everybody in the population to have access to this capability, we may need to impose some contributions or discounts so that it will not be a case of “haves and have-nots.” That will require some deeper thinking.

Ben Lorica: One question on the output regarding the electricity analogy. We’re talking about standardized, commoditized output. But we aren’t saying commoditized in the sense that if I sign up for Gemini, I want Gemini to have a predictable output, but I don’t want to force it to have exactly the same output as OpenAI’s models.

Evangelos Simoudis: Exactly. The same thing applies to other things we take for granted, whether it is electricity, internet provision, or mobile telephony—we make choices. The choices could be quality of service related or place of residence related. Even with electricity, in many places, you have choices.

Ben Lorica: It’s just that if we’re both customers of the same AI provider, we should get roughly the same service.

Evangelos Simoudis: Exactly.

Ben Lorica: In the utility space, there’s the notion of a natural monopoly in the sense that there is high CapEx and high fixed costs to get into the market, and then the margins are low. In many ways, there are few players in each market. Is that what you’re anticipating in AI as well? Look at the cost of these foundation models, at least now. Who knows, there might be a breakthrough in models and the cost of entry might be lower.

Evangelos Simoudis: I actually think that we are benefiting from having multiple providers. I think that what it costs to become a legitimate provider has certain characteristics of what the electricity providers do. Think of the conversation we’re having in this country about whether to use nuclear energy and how much it costs to build a nuclear power plant. We compare ourselves to China and say, “Right now China is building 30 nuclear reactors and we’re building one or two.”

These are country-wide considerations and decisions. From that perspective, I agree that we need to have a national policy, and maybe that executive order is the beginning of that. We cannot have one state deciding to provide one type of AI and another state providing another if we want to see that as important for the population—just as we saw the internet, cloud computing, and the web as important for the population.

Ben Lorica: So explain to listeners why we have state-level regulation for electricity?

Evangelos Simoudis: I will go back to another area that I’m very involved in: autonomous vehicles. In many instances like that, the lack of decision-making by the federal government forced states to start legislating or regulating on a local basis. When such large investments are involved—as with autonomous vehicles or artificial intelligence—the companies making the investments want uniformity. They want to be able to build to one set of rules.

This had not happened with AI in the same way it has not happened with autonomous vehicles. From that perspective, I think that under the right objectives, having a federal umbrella of rules is the right thing to do. It points to the fact that we do see AI as something that the entire population of the country needs to have access to and utilize—not only rich corporations, but every company no matter how small, and every citizen no matter how poor. That is what we have to be striving for, and I think that will help us as a nation.

Ben Lorica: One last feature I want to throw out. I think you addressed this, but: passive consumption. In the sense that when I plug my appliance into the wall, I don’t need to know how electricity works. So, I shouldn’t need to know how AI works when I use it. Is that a fair requirement for a utility?

Evangelos Simoudis: No, actually I think you need to understand how it works. That’s the example we were using about not wanting your child to put their fingers in the outlet. You don’t need to understand how a neural network produces an answer or how a neuro-symbolic system reasons. But you do want to understand the implications of asking the question.

Ben Lorica: The basics.

Evangelos Simoudis: Exactly.

Ben Lorica: All right. Topic number two is Section 230. For our non-US listeners, Section 230 of the Communications Decency Act protects online platforms from liability for user-generated content. Basically, the idea is they’re distributors, not publishers, so they’re not liable.

So what is Section 230 in the age of Generative AI? Historically, people associate this with social media—Facebook and similar platforms. But on the other hand, people are interacting with Meta.ai, Grok, ChatGPT, or Claude. The chatbots are generating content clearly themselves; it’s not user-generated. Evangelos, what’s your take? Are they publishers?

Evangelos Simoudis: When you brought this up, I thought it would be a fascinating discussion we may need to revisit. But let me start by saying this: unfortunately, I have to break it into a few cases. When we were thinking about social networks, we could think about it in one set, but with what we’re doing right now, I think we need to start creating layers.

Let me tell you about a couple of cases I’m thinking about. Let’s assume I download Llama (provided by Meta) to my computer. I have the ability to do that. Now I use it, and maybe I even fine-tune it with some of my data. That’s a very different use case than if I use Llama on Meta’s platform.

Ben Lorica: Or if Meta.ai integrates so that you can actually generate the thing inside the Facebook interface.

Evangelos Simoudis: I think that will put them into an even more precarious position regarding whether Section 230 will apply.

Ben Lorica: Is that user-generated? Because the user came up with the prompt.

Evangelos Simoudis: But it is Meta that trained the model to create that. So in a sense, Meta is also a creator. From that perspective, Section 230 as we know it today should not apply there. It should not give them immunity.

Ben Lorica: Even though they might say, “Well the user was the one who came up with the prompt, we just gave them tools.” It’s like a camera, right? Here’s a camera, you take the picture and then you post it on our platform. But the camera is something that we made.

Evangelos Simoudis: But here is the difference now, Ben. Let me stay with the Llama use case. If I download Llama to my computer and fine-tune it with my data, that is very different than if I use Llama on Meta’s computers. When we think about how to refine Section 230, we really need to start thinking about these kinds of cases. It’s not as simple as it was when we just had social networks.

Another use case: I use GPT-5 to create some content and then I put it onto Facebook or TikTok. That’s a whole different case because at that point, TikTok is just the platform. In that case, they should have the same immunity as they have under Section 230 today. It’s when you unify the two—which OpenAI has signaled they might be interested in doing—that it gets complicated.

It’s not easy anymore. Whoever wants to think of the next iteration of Section 230 does not have an easy task because they need to start distinguishing among all of these cases. I’m sure this will go to the courts, and I pity the judges that will have to think about all of these corner cases: who is doing what, what data was used to create the model, was the model fine-tuned or used out of the box, and on whose computers was the content created?

Ben Lorica: By the way, we’re having this discussion the week after the Trump administration released a National Security Strategy which, among many things, expressed skepticism about content moderation. That’s additional context.

Evangelos Simoudis: Yeah. Think of the following: what happens if I download a Chinese model, fine-tune it, and then put the content on X? It gets very complex and hairy very quickly.

Ben Lorica: This is one of my pet peeves about Section 230. There’s the notion that I’m just a distributor, not a publisher. But once you have a recommendation engine, you’re exercising editorial oversight. Isn’t that a publisher?

Evangelos Simoudis: Yes, absolutely.

Ben Lorica: Look at something like TikTok. One of the big reveals in the TikTok book that came out this year is that TikTok—at least in the early days and I think up to now—has something called the ability to do “heating.” Basically, a TikTok employee goes, “Oh, this is an interesting piece of content, let’s make sure this gets a lot of views.”

Evangelos Simoudis: Amplify it, yes.

Ben Lorica: That’s editorial. That’s a publisher. And a recommendation engine is editorial.

Evangelos Simoudis: Yes.

Ben Lorica: The companies that will be on much shakier ground moving forward are the companies that will unify Generative AI with a publishing platform. You can imagine Substack starting to provide Generative AI tools natively inside Substack. Because Substack also has the equivalent of tweets, they will also fall under this.

Evangelos Simoudis: Exactly. The point you and I are making is that as we move further into the utilization of AI in every aspect of our life—information creation, dissemination, entertainment, productivity, research—we need to rethink the regulations we created for social networks. They need to be reconsidered and updated to make sure the right parties assume their responsibilities. This is far more complicated than straightforward social networks, even if you take the recommendation engine and amplification capability out.

Ben Lorica: By the way, this unification of Generative AI with social media is essentially happening now.

Evangelos Simoudis: Exactly. These companies are always striving for convenience to support their business models. To me, having the ability to do all this from within the same platform is a convenience.

Ben Lorica: Why should I copy and paste over?

Evangelos Simoudis: Exactly. But convenience has implications. These implications can be pretty complex, and we need to make sure we understand them so that we will not be faced with unpleasant surprises in the not-too-distant future.

Ben Lorica: Content moderation itself is increasingly being done by AI. So, AI is on every step of this pipeline.

Evangelos Simoudis: We are trying to incorporate AI along that entire stack under the guise of productivity improvement. There are implications for making these decisions, and we need to consider them as opposed to just accepting them.

Ben Lorica: There are even simpler examples. Imagine YouTube Shorts or TikTok short videos. These platforms may not necessarily help you generate video from scratch, but they can take something you film and enhance it with Generative AI technology. In the meantime, who knows what they’re adding? Maybe there are copyright violations or other objectionable things. Who’s liable? Is it the content poster, the model that created it, or the platform that distributed it?

Evangelos Simoudis: And when you say “the model that created it,” the point I wanted to underline is the data that was used to create the model.

Ben Lorica: But then they’ll say, “Well sure, we helped Evangelos enhance his video, but he ultimately was the one who decided to post it. We’re just distributors, not publishers.”

Evangelos Simoudis: I think that we cannot make a decision that applies across everything. We need to look at this new stack on a layer-by-layer basis and determine what applies to Section 230 as it stands today, what needs to be enhanced, and what needs to be created—maybe “Section 231.” That’s all I’m saying today. You need to explore it in that way.

Ben Lorica: Now that I verbalize it, I can already anticipate their response: “We’re still distributors.”

Evangelos Simoudis: I’m not going to argue this here, Ben. I’m just telling you how I view it. This is going to be a long process in several levels of courts. That’s why I said earlier: I pity the judges that will have to understand the issue and render a decision.

Ben Lorica: I guess to me, Evangelos, in closing, I go back to my main pet peeve which is still amplification. The recommendation is still the most important thing here. Once you do that, you’re actually a publisher. That’s the one that actually needs more revisiting than the content generation. If you generate content in reverse chronological order, no one might see it. But if it gets amplified… that’s clearly editorial.

Evangelos Simoudis: There are some issues here that are very subjective regarding how each perceives the role of this and the content generation. If you are an adversary that wants to create problems for another party—whether it is a government, a group, or an individual—you will obviously want the lightest degree of interference. On the other hand, if you’re thinking about national security or harm to people, you will need to be thinking very carefully about how you approach each of these layers.

I’ll end with this example: Think of what Australia just instituted, forbidding access to social media for individuals under 16 years of age. Whether you like it or not as an American, the Australian government decided to take action because they felt that in this way they protect their population. I think the same kind of decisions will need to be made here. All I’m saying is that they cannot be done in one fell swoop. I think they need to be done on a layer-by-layer basis, no matter how hard that is.

Ben Lorica: I still think we need to focus on algorithms. With reverse chronological feeds, we might be lighter touch, but once you have an algorithm, you’re a publisher. And with that, thank you, Evangelos.