The Data Exchange Podcast: Viral Shah on Julia’s many advantages when it comes to building industrial applications involving optimization, simulations, and mathematical modeling.
This week I have my annual check-in on the state of Julia with Viral Shah, Co-founder and CEO of Julia Computing. Since we spoke last year, Julia continues to make inroads and grow its user base, and Julia Computing closed their $24M Series A round in July.
There were a lot of areas I wanted to catch up on, including:
- Plans – technology, product, hiring, growth – in light of their Series A milestone.
- The Julia community: users of the Julia language, package creators, and the rise of subcommunities like the ones that formed around Flux, Turing, and SciML.
- The growing number of packages that have reached version 1.0 – including DataFrames and Plots, with the CSV package about to follow.
- Julia’s many advantages when it comes to industrial applications of optimization, simulations, and mathematical modeling.
- Plans for how to grow Julia from its current base of roughly a million avid users.
- Julia’s multiple dispatch feature.
… this concept of differentiable programming. I think we spoke a little bit about it the last time. And I’m thrilled to say that we’ve come a long way since then. If you think of optimization, what is basically under the hood? Calculating derivatives, and then stepping in the direction that optimizes your loss function. So the key is to compute derivatives. Today, Julia has differentiable programming as part of the ecosystem. We have a huge ecosystem of chain rules and things like that, where you can automatically compute derivatives of lots and lots of mathematical functions.
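The idea Shah describes – compute the derivative of a loss function, then step in the direction that reduces it – can be shown with a minimal gradient-descent sketch. This is a generic Python illustration with a hand-written derivative; in Julia's ecosystem, packages such as Zygote.jl and ChainRules.jl compute that derivative automatically.

```python
def loss(x):
    # A simple convex loss with its minimum at x = 3.
    return (x - 3.0) ** 2

def dloss(x):
    # Hand-written derivative; an automatic-differentiation
    # system would derive this from loss() itself.
    return 2.0 * (x - 3.0)

def minimize(x, lr=0.1, steps=100):
    # Repeatedly step against the gradient to reduce the loss.
    for _ in range(steps):
        x -= lr * dloss(x)
    return x

x_min = minimize(x=0.0)
print(round(x_min, 4))  # converges toward 3.0
```

The derivative is the only problem-specific ingredient here, which is why a language-wide system of chain rules – letting derivatives flow through arbitrary functions – makes the whole ecosystem optimizable.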
… What our users want is the ability to work with data, the ability to bring in science and simulation and the ability to apply machine learning, in combination with science. You know, many of us computer scientists have the hubris of not caring about the domain information. I have machine learning. I will just learn everything, give me enough data. If you can bring your domain knowledge into the simulation, you can be immensely faster and immensely cheaper than anything else. Those are the people who are going to win – the people who can combine the best of all these different worlds, as opposed to one or the other.
Download a complete transcript of this episode by filling out the form below:
- A video version of this conversation is available on our YouTube channel.
- “Applications of Reinforcement Learning: Recent examples from large US companies”
- Viral Shah: “A programming language for scientific machine learning and differentiable programming”
- Max Pumperla: “Connecting Reinforcement Learning to Simulation Software”
- Matthew Honnibal: “Building open source developer tools for language applications”
- Travis Addair: “The Future of Machine Learning Lies in Better Abstractions”
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
[Image by New York Mets Scoreboard Control from PublicDomainPictures.net.]