Bringing AI and computing closer to data sources

The Data Exchange Podcast: Bruno Fernandez-Ruiz on the current state of edge computing.


Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.

In this episode of the Data Exchange I speak with Bruno Fernandez-Ruiz, CTO and co-founder of Nexar, a startup whose dash cams power vision-based applications that improve driving and logistics. The company also provides data services to city planners and companies interested in near real-time “street view” updates covering road and traffic conditions.

Download the 2021 Trends Report: Data, Machine Learning, AI to learn about emerging technologies for data management, data engineering, machine learning, and AI.

Prior to co-founding Nexar, Bruno held a series of technical leadership roles at Yahoo! and led teams that implemented complex, large-scale data applications. Nexar poses a different set of challenges, including ones that fall under edge computing. Loosely speaking, edge computing involves bringing compute resources as close to data sources as possible in order to reduce latency and bandwidth requirements.

Ever since my discussion with Google’s Pete Warden last year (“Why TinyML will be huge”), I have wanted a follow-up episode dedicated to edge computing. Given his background at Yahoo! and now at Nexar, Bruno was the perfect guest to explain why edge computing is both challenging and interesting.

One of the things Bruno highlighted is the difference between what insiders describe as the consumer and provider edge:

    ❛ On the consumer side, what you find is that you have all these embedded architectures, where you are trying to really apply all these techniques of porting, you know, using tools like Apache TVM or OctoML. Porting the architectures, pruning the architectures, quantizing the architectures all the way down to 8-bit, looking at which layers and which operators are supported. That’s the consumer edge: you are working within the wattage, the power, the heat, all these constraints that you have on your chip.

    Now when you go to the provider edge, surprisingly, people think, well, that should be like a region in AWS, and it’s not. This is prime real estate. You know, these are huge data centers in the middle of San Francisco. So what you have is relatively high compute density, not a lot of it, but very high compute density, and very low storage. It is designed really to address mostly the low-latency use case today, and with 5G it has started to address the high-bandwidth use case. 5G should really not be about just people streaming Netflix. So on the provider edge, what you find is that all that optimization that you have done on the consumer edge, on the consumer device, it actually makes sense to use exactly the same techniques on the provider edge, because you want to squeeze every watt of power out of that prime real estate, you want to extract as much as you can. Because it’s super costly, and you want to really get a return on your money.
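
To make the consumer-edge optimization Bruno describes more concrete, here is a minimal sketch of post-training 8-bit quantization in PyTorch. This is just one of several toolchains for the job (Bruno mentions Apache TVM and OctoML), and the toy model is hypothetical; it stands in for whatever trained network you want to shrink.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained float32 model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the listed module types
# are converted from float32 to 8-bit integers (qint8), cutting weight
# storage roughly 4x and speeding up CPU inference; this is the
# "quantizing the architectures all the way down to 8-bit" step from
# the quote above.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model keeps the same interface as the original.
x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Compiler stacks such as Apache TVM take this further, lowering the optimized graph for a specific target chip and reporting which operators each backend supports, which is the per-layer audit Bruno alludes to.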


[Image from Storyblocks.]