Jackie Brosamer and Brad Axen on Block’s AI Agent, MCP Integration, Developer Workflows & Incident Response.
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Jackie Brosamer and Brad Axen from Block discuss codename goose (Goose), their open-source AI agent designed to automate complex engineering and knowledge work. They explore Goose’s architecture, its integration with the Model Context Protocol (MCP) for connecting tools and models, and its diverse use cases within Block, from developer assistance to incident response. The conversation also covers practical applications, managing AI limitations like hallucinations, and the future of AI-assisted engineering.
Interview highlights – key sections from the video version:
- Open-Source Licensing and Internal Use
- Why Build Another AI Coding Agent
- Scaling Goose Adoption at Block
- Balancing Multiple AI Tools & Onboarding New Engineers
- “Vibe Coding” Culture and Mentorship Evolution
- Choosing and Switching Between Frontier Models
- From Local Coding to Cloud Dependence & Defining Agents
- Model Context Protocol (MCP): Benefits, Critiques & Security
- Automating Workflows and Machine Learning with Goose
- Goose for Developers, Notebooks, IDEs & Non-Developer Users
- Managing Context, Hallucinations & Validation Strategies
- External Adoption and Data-Driven Use Cases
- Multi-Agent Protocols, Reasoning Costs & Workforce Impact
Related content:
- A video version of this conversation is available on our YouTube channel.
- Vibe Coding and CHOP: What You Need to Know About AI-Driven Development
- The Rise of the AI-Powered Developer
- Steve Yegge → Vibe Coding and the Rise of AI Agents: The Future of Software Development is Here
- Structure Is All You Need
- Vaibhav Gupta → Unleashing the Power of BAML in LLM Applications
- Mars Lan → The Security Debate: How Safe is Open-Source Software?
Support our work by subscribing to our newsletter📩
Transcript
Below is a heavily edited excerpt, in Question & Answer format.
Introduction & Overview
What exactly is Goose? Goose is an open-source, on-machine AI agent designed to automate complex engineering and knowledge-work tasks from start to finish. Developed by Block (formerly Square), it combines large language model reasoning with tool integrations to create a flexible automation platform. The project was publicly released in January 2025 under an MIT license after about nine months of internal development.
Why did Block build yet another AI copilot? The team initially set out to leverage LLMs for developer tasks, recognizing they had become genuinely useful tools for building code. However, they quickly realized the potential extended far beyond developer workflows. Because LLMs are general-purpose, an agent with the right tools can automate tasks for design teams, support agents, and many other roles. Goose was created to be this flexible agent platform capable of handling diverse automation needs across the organization.
How widely is Goose used within Block? Approximately 5,000 people at Block use Goose weekly, including both developers and non-developers. Block runs the same open-source version internally (a practice known as “dogfooding”), adding only the proprietary authentication and security connectors required for its corporate environment.
Architecture & Technical Integration
How central is the Model Context Protocol (MCP) to Goose? While Goose predated MCP, the team integrated the protocol soon after it was released. MCP is powerful because it provides a standard way for Goose to connect with any model (Anthropic, OpenAI, or open-source options) and integrate with diverse data sources like GitHub, Slack, Google Calendar, and custom internal systems. Goose is essentially a “blank slate” until connected to tools via MCP – its capabilities depend entirely on these connections. This makes it highly customizable without changing code.
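To give a flavor of what such a connection looks like from the tool side, here is a minimal MCP server sketch using the official MCP Python SDK’s FastMCP helper. The `ticket_lookup` tool and its data source are hypothetical stand-ins for an internal system; any MCP-compatible agent, Goose included, could discover and call a tool exposed this way.

```python
# Minimal MCP server exposing one tool an agent like Goose could call.
# Requires the MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-tools")  # server name shown to the connecting agent

# Hypothetical data source, standing in for a real internal ticketing system.
TICKETS = {"OPS-101": "Elevated error rate on the payments gateway."}

@mcp.tool()
def ticket_lookup(ticket_id: str) -> str:
    """Return the summary for an internal ticket ID."""
    return TICKETS.get(ticket_id, f"No ticket found for {ticket_id}")

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```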
What models work with Goose, and which are most popular? Goose supports any LLM, and users can hot-swap models mid-conversation. Currently, users gravitate toward “frontier models” – the latest and most capable options. The Anthropic Sonnet family and OpenAI’s reasoning models see the most use. Interestingly, users often employ different models for different tasks within the same conversation: one model for planning and design, then Sonnet for execution. Gemini 2.5 Pro, with its 1-million-token context window, handles tasks requiring large amounts of content. The proactiveness of the Sonnet family is a key reason for its popularity.
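Mechanically, hot-swapping comes down to keeping the conversation history in a provider-neutral shape so the backend can change between turns. The toy sketch below illustrates the idea with stubbed “models”; it is not Goose’s actual provider layer.

```python
from typing import Callable

# Provider-neutral message history: the same list is replayed to any backend.
Message = dict[str, str]

def planner_model(history: list[Message]) -> str:
    return f"[planner] drafted a plan from {len(history)} prior messages"

def executor_model(history: list[Message]) -> str:
    return f"[executor] wrote code based on {len(history)} prior messages"

def turn(history: list[Message], user: str,
         model: Callable[[list[Message]], str]) -> str:
    history.append({"role": "user", "content": user})
    reply = model(history)  # any backend that accepts the shared history
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[Message] = []
turn(history, "Design a migration plan", planner_model)   # planning model
turn(history, "Now implement step one", executor_model)   # swapped mid-conversation
print(*[m["content"] for m in history], sep="\n")
```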
How does Goose handle context window limitations? The team considers this a fundamental challenge requiring multiple strategies:
- Smart summarization over long context windows
- Selective context retrieval using RAG to identify which tools are relevant for specific queries
- Enabling the agent to navigate information through tool-calling (like using a knowledge graph)
- Multi-turn searching that outperforms simple semantic search
- Having the agent iteratively search codebases using tools like ripgrep rather than dumping all results into the prompt (see the sketch below)

The goal is feeding the right tokens into the context window, not all available tokens.
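To make the ripgrep bullet concrete, here is a minimal, hypothetical sketch of the pattern (not Goose’s actual implementation): search in two passes, first finding candidate files, then pulling only a bounded set of snippets, so the model receives a few relevant excerpts rather than every match.

```python
import subprocess

def rg(args: list[str], cwd: str) -> list[str]:
    """Run ripgrep and return its output lines; empty list if no matches."""
    result = subprocess.run(["rg", *args], cwd=cwd,
                            capture_output=True, text=True)
    return result.stdout.splitlines()

def iterative_search(symbol: str, repo: str, max_files: int = 5) -> str:
    """Two-pass search: locate candidate files, then fetch small snippets,
    instead of dumping every hit into the prompt."""
    # Pass 1: which files mention the symbol at all?
    files = rg(["--files-with-matches", symbol], repo)[:max_files]
    # Pass 2: pull a short, line-numbered context window from each candidate.
    snippets = []
    for path in files:
        lines = rg(["--context", "3", "--line-number", symbol, path], repo)
        snippets.append(f"## {path}\n" + "\n".join(lines[:20]))
    return "\n\n".join(snippets)  # this, not the raw dump, goes to the model

if __name__ == "__main__":
    print(iterative_search("handle_tool_call", "."))
```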
Can Goose work with local models? Yes, hobbyists in the open-source community are seeing success running Goose with local models like Llama. While local models don’t yet solve all enterprise-scale coding problems, they’re becoming increasingly capable for many practical tasks.
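For example, a local Ollama install exposes an HTTP API that any agent can target. The stdlib-only sketch below makes a single non-streaming completion call; it assumes `ollama serve` is running and that the named model (here `llama3`, an assumption) has already been pulled.

```python
import json
import urllib.request

# Ollama's local generate endpoint (default port 11434).
URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",  # assumed model name; substitute whatever you pulled
    "prompt": "Summarize what an AI agent is in one sentence.",
    "stream": False,    # ask for a single JSON response instead of a stream
}).encode()

req = urllib.request.Request(URL, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```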
