Using LLMs to Build AI Co-pilots for Knowledge Workers

Steve Hsu on building AI assistants that focus on your private data, stored securely in a separate memory module outside of the LLM.

Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.

Steve Hsu wears many hats; most recently, he is co-founder of SuperFocus, a startup building LLM-backed knowledge co-pilots for enterprises. A significant challenge for large language models (LLMs) is "hallucination": the models generate non-factual or inaccurate information. The problem persists despite advances in machine learning, even in larger, more capable models like GPT-4. Solving it is critical for reliable AI interactions across applications from education to enterprise operations, because hallucination undermines the credibility, trustworthiness, and ultimate effectiveness of these tools.
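The core idea of keeping private data in a memory module outside the LLM can be sketched as a retrieval-then-answer loop: the assistant consults the store first and abstains when nothing supports an answer. This is a minimal illustrative sketch, not SuperFocus's actual system; all names (`MemoryStore`, `Passage`, `answer`) are hypothetical, and the keyword scoring stands in for the embedding-based retrieval a production system would use.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str

class MemoryStore:
    """Private corpus held outside the LLM's weights (hypothetical)."""
    def __init__(self, passages):
        self.passages = passages

    def retrieve(self, query, k=2):
        # Toy keyword-overlap scoring; real systems use vector search.
        terms = set(query.lower().split())
        scored = [(len(terms & set(p.text.lower().split())), p)
                  for p in self.passages]
        scored = [(s, p) for s, p in scored if s > 0]
        scored.sort(key=lambda sp: sp[0], reverse=True)
        return [p for _, p in scored[:k]]

def answer(query, store):
    hits = store.retrieve(query)
    if not hits:
        # Abstain instead of hallucinating when memory has no support.
        return "I don't know — nothing in memory supports an answer."
    context = " ".join(p.text for p in hits)
    # A real system would pass context + query to the LLM with
    # instructions to answer only from the supplied passages.
    return f"Based on memory: {context}"

store = MemoryStore([Passage("hr-1", "Vacation policy grants 20 days per year.")])
print(answer("What is the vacation policy?", store))
print(answer("Who won the 2022 World Cup?", store))
```

The key design choice is that the refusal path is explicit: questions outside the private corpus never reach the generation step ungrounded.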


Interview highlights – key sections from the video version:

Learn how to build practical, robust and safe AI applications by attending the AI Conference in San Francisco (Sep 26-27). Use the discount code FriendsofBen18 to save 18% on your registration.

Related content:

If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:

[Image: Knowledge Co-pilots by Ben Lorica, using images and icons from Infogram.]