The combination of the right software and commodity hardware will prove capable of handling most machine learning tasks

The Data Exchange Podcast: Nir Shavit on multicore software, neurobiology, and deep learning.


Subscribe: iTunes, Android, Spotify, Stitcher, Google, and RSS.

In this episode of the Data Exchange I speak with Nir Shavit, Professor of EECS at MIT, and cofounder and CEO of Neural Magic, a startup that is creating software to enable deep neural networks to run on commodity CPUs (at GPU speeds or faster). Their initial products are focused on model inference, but they are also working on similar software for model training.

 

Ray Summit has been postponed until the Fall. In the meantime, enjoy an amazing series of virtual conferences beginning in mid-May on the theme “Scalable machine learning, scalable Python, for everyone.” Go to anyscale.com/events for details.

Our conversation spanned many topics, including:

    • Connectomics, a branch of neurobiology, and how it intersects with Nir’s other research area, multicore software.
    • Why he believes the combination of the right software and commodity CPUs will prove capable of handling many deep learning tasks (a rough sketch of the sparsity argument appears after this list).
    • Speed is not the only factor: the effectively unlimited memory available to CPUs can unlock larger problems and architectures.
    • Neural Magic’s initial offering is in inference; model training on CPUs is also on the horizon.
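
The sparsity argument behind that claim can be made concrete with a minimal sketch. The layer size, the 90% pruning level, and the use of NumPy/SciPy here are illustrative assumptions on my part, not Neural Magic’s implementation; the point is simply that a pruned weight matrix leaves a CPU far fewer multiply-accumulates to perform per inference:

```python
import numpy as np
from scipy import sparse

# Hypothetical fully connected layer: 4096 inputs -> 4096 outputs.
# (Layer size and 90% sparsity are illustrative assumptions.)
n_in, n_out = 4096, 4096
rng = np.random.default_rng(0)
dense_w = rng.standard_normal((n_out, n_in)).astype(np.float32)

# Prune 90% of the weights by zeroing the smallest-magnitude entries,
# then store the survivors in compressed sparse row (CSR) format.
threshold = np.quantile(np.abs(dense_w), 0.90)
pruned_w = np.where(np.abs(dense_w) >= threshold, dense_w, 0.0)
sparse_w = sparse.csr_matrix(pruned_w)

x = rng.standard_normal(n_in).astype(np.float32)

# Same layer shape either way, but the sparse product only touches
# the ~10% of weights that survived pruning.
y_dense = dense_w @ x
y_sparse = sparse_w @ x

dense_macs = n_out * n_in   # multiply-accumulates for the dense layer
sparse_macs = sparse_w.nnz  # one multiply-accumulate per stored weight
print(f"dense MACs:  {dense_macs:,}")
print(f"sparse MACs: {sparse_macs:,} ({sparse_macs / dense_macs:.0%} of dense)")
```

In practice the speedup also depends on memory bandwidth and cache behavior, but cutting the arithmetic per layer is the basic lever that sparsity-aware CPU runtimes pull.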

(Full transcript of our conversation is below.)

Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.


Download a complete transcript of this episode by filling out the form below:

Short excerpt:

Ben: Let’s start with your research in neurobiology. What is the connection between this line of research and AI in industry?

Nir: In recent years, I’ve been at MIT working in a field called connectomics. In connectomics, we take tiny slivers of brain, the size of a grain of salt, from mammals like mice. Our colleagues at Harvard slice these tiny grains of salt 30,000 times, then image the slices with an electron microscope. A cubic millimeter of mouse brain gives you about two petabytes of data, which you then run machine learning (ML) algorithms on to extract the connectivity of the tissue. So we’re learning what connectivity looks like in the brain. The thing I’ve learned in this research is that our brain is essentially an extremely sparse computing device.

I’ve taken that understanding into my other area of research—multi-core computing—to try to come up with an understanding of where we should be going with the design of computer hardware for machine learning, what things should look like in the future, based on learnings from my work in connectomics.

Ben: There are people in AI who say that, while it may be interesting to learn how the brain works, they’re only interested in getting results. What do you say to people who argue that understanding the intricacies of the brain is only useful to the extent that it helps us build better AI systems?

Nir: Well, first of all, I’m with them. That’s fine. That’s a valid goal. I think the first lesson we can take from understanding how the brain works is that we don’t really need to build these kinds of massive, tens-of-petaflops devices to solve the problems our brain solves. Our brain is an extremely sparse computing device. Essentially, if you do the calculation, your cortex does about the same compute as a cell phone. But it does it on a very large graph, a graph that is petabyte sized. In the hardware devices we build, it’s the opposite: a petaflop of compute on a cell phone’s worth of memory, 16 or 32 gigabytes. If we want to mimic our brain, we should be building something that is like a cell phone of compute on a petabyte of memory, not a petaflop of compute on a cell phone’s worth of memory.

This is the understanding I’ve taken from this research: basically, we are solving the problem the wrong way. And there is a reason for it: we don’t know the algorithm. We really don’t know what the graph looks like. Typically in computer science, when you don’t know what the problem is, you end up throwing a lot of compute at it. That’s the stage machine learning is really in right now. But it’s a temporary phase.
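
To make the compute-versus-memory point explicit, here is a back-of-envelope comparison. The figures are order-of-magnitude assumptions drawn from the conversation (roughly a cell phone’s worth of compute, a petabyte-scale connectivity graph, a petaflop-class accelerator with about 32 GB of memory), not measurements:

```python
# Back-of-envelope comparison of compute-to-memory ratios, using
# order-of-magnitude figures assumed from the conversation, not measured.

brain_flops = 1e11               # cortex: very roughly "a cell phone" of FLOP/s
brain_memory_bytes = 1e15        # connectivity graph: roughly a petabyte

accelerator_flops = 1e15         # a petaflop-class deep learning device
accelerator_memory_bytes = 32e9  # with ~32 GB of on-device memory

brain_ratio = brain_flops / brain_memory_bytes
accel_ratio = accelerator_flops / accelerator_memory_bytes

print(f"brain:       {brain_ratio:.1e} FLOP/s per byte of memory")
print(f"accelerator: {accel_ratio:.1e} FLOP/s per byte of memory")
print(f"ratio:       ~{accel_ratio / brain_ratio:.0e}x more compute per byte in the accelerator")
```

The two ratios differ by many orders of magnitude, which is the sense in which current deep learning hardware is built “the wrong way around” relative to the brain.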




[Image by Pete Linforth from Pixabay]