The Data Exchange

Best Practices for Building LLM-Backed Applications

Waleed Kadous on open source LLMs, fine-tuning, RAG, and productionizing LLM applications.

Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.

Waleed Kadous, Chief Scientist at Anyscale[1], is one of my go-to experts on best practices for building applications that leverage large language models. He has authored several pivotal articles that I regularly reference.


 

Interview highlights – key sections from the video version:

  1. Open Source LLMs: when and how to use them
  2. Code Llama vs. GitHub Copilot
  3. Deploying open source LLMs
  4. Fine-tuning LLMs
  5. Using GPT to create fine-tuning datasets
  6. Retrieval augmented generation (RAG)
  7. RAG at scale and the role of LLMs in RAG
  8. Evaluating RAG and experimenting with different RAG configurations and settings
  9. Reimagining “AutoML” in the age of LLMs
  10. Mixture of experts
  11. AMD and other hardware options for LLM inference
  12. Supply of open source LLMs
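Several of the highlights above (items 6–8) center on retrieval augmented generation. For readers new to the pattern, here is a minimal sketch of the core RAG loop: retrieve the documents most relevant to a query, then assemble a prompt that grounds the LLM's answer in them. The corpus, the word-overlap retriever, and all names here are toy illustrations, not Anyscale's production stack (which the episode discusses at much larger scale).

```python
# Toy RAG sketch: retrieve relevant documents, then build a grounded prompt.
# Real systems use embedding-based retrieval and a vector store; the naive
# word-overlap scorer below just keeps the example self-contained.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Pack the retrieved context into a prompt for the LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Ray Serve deploys Python models as scalable services.",
    "Fine-tuning adapts a base LLM to a narrower task.",
    "RAG augments an LLM prompt with retrieved documents.",
]

# The assembled prompt would then be sent to an LLM of your choice.
prompt = build_prompt("What is RAG?", retrieve("What does RAG do?", corpus))
print(prompt)
```

Swapping the retriever (keyword, embedding, hybrid) and the prompt template is exactly the kind of configuration experimentation the evaluation discussion in item 8 refers to.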

 

Types of RAG applications, by Waleed Kadous (image by Ben Lorica).

Related content:


If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:


[1] Ben Lorica is an advisor to Anyscale and other startups.
