
What is MCP? The Model Context Protocol explained for agile teams

Modern editorial illustration of a developer at a laptop with translucent connection lines flowing from a chat window into stylized icons for a backlog board, a planning poker deck, and a retro board, representing an AI assistant orchestrating real tools across a sprint workflow
Kelly Lewandowski

Last updated 27/04/2026 · 9 min read

If you have used Claude, Cursor, or ChatGPT in the last few months, you have probably seen MCP show up in the settings panel without much explanation. Anthropic released the spec in late 2024, the major coding assistants shipped support through 2025, and by early 2026 every SaaS tool with an API is racing to publish an MCP server. The short version: MCP is how an AI assistant stops being a chatbot and starts being a teammate that can do real work in your tools. For agile teams that already live in Jira, GitHub, Linear, and Slack all day, that shift is bigger than it sounds.

What MCP actually is

The Model Context Protocol (MCP) is an open standard, originally published by Anthropic, that defines how an AI assistant talks to an external system. Instead of every AI vendor building a one-off integration with every SaaS tool, the tool publishes one MCP server and any compliant client can use it. You can think of it as USB-C for AI tools. The host (Claude Desktop, Cursor, ChatGPT, your IDE) speaks the same protocol to any MCP server it is pointed at, the way any USB-C cable works with any USB-C port.
MCP server: A program (usually run by the SaaS vendor) that exposes a list of tools an AI can call, plus the auth and transport for calling them.
MCP client: The AI app that consumes the server. Claude Desktop, Cursor, Zed, ChatGPT, and Claude Code all ship MCP clients today.
Tool: A single named action with a JSON schema. Things like retro_create_item, planning_poker_cast_vote, or standup_submit_answers.
Resource: A piece of context the server can hand back, like a file, a board, or a sprint summary. The model can read these without taking any action.
When a user prompts the assistant, the model decides which tools to call, the client makes those calls over MCP, the server runs them against the underlying API, and the results flow back into the conversation. The user sees a single response. Behind the scenes, anywhere from one to a few dozen tool calls happened.
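Under the hood, these exchanges are JSON-RPC 2.0 messages. A minimal sketch of the round trip, using the retro_create_item tool mentioned above (the board ID, column name, and item text here are invented for illustration, and the transport layer is omitted):

```python
# Sketch of the MCP wire format (JSON-RPC 2.0). Only message shapes are
# shown; a real client sends these over stdio or HTTP to the server.

# 1. The client asks the server what it can do.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server answers with typed tool definitions, each carrying a
#    JSON schema for its arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "retro_create_item",
                "description": "Add an item to a retro board column",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "board_id": {"type": "string"},
                        "column": {"type": "string"},
                        "text": {"type": "string"},
                    },
                    "required": ["board_id", "column", "text"],
                },
            }
        ]
    },
}

# 3. When the model decides to act, the client issues a tools/call
#    with arguments matching the schema. (Values are hypothetical.)
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "retro_create_item",
        "arguments": {
            "board_id": "sprint-47",
            "column": "went-well",
            "text": "CI pipeline got faster",
        },
    },
}
```

The model never sees the transport; it sees the tool list from step 2 and emits the intent behind step 3, and the client handles the rest.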

How MCP is different from a REST API or a webhook

Agile tools have had REST APIs and webhooks for over a decade. So why does MCP need to exist at all? The answer is that REST APIs and webhooks are for developers, not models. They expect a human to read the docs, decide what to call, write code, and ship it. MCP is built for an AI to discover and use a tool at runtime, without anyone writing integration code first.

Editorial illustration comparing three approaches in three vertical lanes: a developer typing code labeled REST API, an automated arrow firing on an event labeled webhook, and a chat interface where an AI is selecting tools from a menu labeled MCP, all rendered in flat vibrant colors
| Concern | REST API | Webhook | MCP |
| --- | --- | --- | --- |
| Direction | You call it | It calls you | The model calls it on your behalf |
| Who writes the integration | A developer, ahead of time | A developer, ahead of time | The model, at runtime, from a tool list |
| Discovery | Read the docs | Read the docs | Server returns a typed list of tools |
| Auth model | API keys, OAuth | Signed payloads | OAuth 2.1 + PKCE + Dynamic Client Registration |
| Best for | Apps and scripts | Reacting to events | Conversational and agent workflows |
A webhook is push. A REST API is pull. MCP is a model picking up the phone, asking "what can you do for me right now?", and only making the calls it needs for this specific prompt. The other thing that matters is that it returns typed tool definitions, so the model knows which arguments are required, which are optional, and what the response will look like. No more guessing from a docs page.
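A toy validator shows what "typed" buys the model: required versus optional arguments are machine-checkable before any call is made. The planning-poker schema below is hypothetical, and real clients use a full JSON Schema validator rather than this sketch:

```python
def check_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return a list of problems with a proposed tool call.

    Minimal sketch covering only required/unknown keys; a real
    validator also checks types, formats, and nested objects.
    """
    problems = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in arguments:
            problems.append(f"missing required argument: {name}")
    for name in arguments:
        if name not in props:
            problems.append(f"unknown argument: {name}")
    return problems


# Hypothetical schema for a planning poker tool: room_name required,
# deck optional.
schema = {
    "type": "object",
    "properties": {
        "room_name": {"type": "string"},
        "deck": {"type": "string"},
    },
    "required": ["room_name"],
}

print(check_arguments(schema, {"deck": "fibonacci"}))
# → ['missing required argument: room_name']
```

Because the model receives the schema up front, it can fill in required arguments on the first try instead of discovering them through failed calls.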

The auth flow: OAuth 2.1, PKCE, and DCR in plain English

This is the part most people skim, and it is the part that actually makes MCP usable inside a company. Four pieces fit together, and each one solves a specific problem.
  1. Dynamic Client Registration (DCR, RFC 7591)
    Old-school OAuth required every client to be pre-registered by hand: a developer logs into the SaaS admin panel, fills out a form, copies a client ID and secret, and pastes them somewhere. That does not scale when "the client" is every Claude install in the world. DCR lets the MCP client introduce itself to the server programmatically ("I am Claude Desktop on this user's laptop, here is my redirect URI") and the server hands back a fresh client ID. No admin involvement.
  2. OAuth 2.1 authorization code flow
    Once it has a client ID, the assistant opens your browser, redirects to the SaaS provider's sign-in page, and asks you to approve the connection. This is the same flow you use to "Sign in with Google" on a third-party site. You see exactly which org you are connecting and which scopes (read, write) the assistant is asking for. The SaaS provider hands back an authorization code, which the client trades for an access token.
  3. PKCE (Proof Key for Code Exchange)
    PKCE prevents the authorization code from being stolen mid-flight. The client generates a one-time secret, hashes it with SHA-256, and sends the hash up front. When it later redeems the code, it has to send the original secret. An attacker who intercepts the code cannot redeem it without that secret. The MCP spec mandates PKCE with the S256 method, with no fallback to weaker variants.
  4. Resource indicators (RFC 8707)
    The 2026 MCP spec also requires the client to name the specific server it intends to call when it requests a token. That stops a token issued for one MCP server from being replayed against a different one. The token is bound to a single audience.
The result, from the user's point of view, is one click in their browser. From the security team's point of view, it is OAuth done correctly: per-user, scoped, audience-bound, revocable, and with no shared secrets sitting in config files.
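The PKCE piece is small enough to show in full. A sketch of both sides of the exchange, using Python's standard library and the S256 method the spec mandates:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    # The client keeps the verifier secret and sends only the
    # SHA-256 challenge with the initial authorization request.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge


def server_verifies(challenge: str, presented_verifier: str) -> bool:
    # When the client redeems the authorization code, the server
    # re-hashes the presented verifier and compares it to the
    # challenge stored from the initial request.
    digest = hashlib.sha256(presented_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge


verifier, challenge = make_pkce_pair()
assert server_verifies(challenge, verifier)            # the real client succeeds
assert not server_verifies(challenge, "stolen-guess")  # an interceptor without the verifier fails
```

An attacker who steals the authorization code in transit only ever saw the hash, and cannot produce the preimage the server demands at redemption time.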

Why agile teams specifically should care

Most early MCP coverage focuses on developers using AI to read code or query databases. That undersells it. Agile teams sit in a unique spot: a lot of your meeting prep is aggregating context from five tools (PRs, tickets, incidents, releases, customer feedback) and then writing it into one tool (the standup, the retro board, the planning poker room). That is exactly the shape of work MCP is good at.
🌅Standups draft themselves

Your assistant pulls yesterday's PRs and ticket activity from your dev tools, formats your three answers, and submits them to today's standup before you open Slack.

🃏Planning poker from the backlog

"Pull the next 10 unestimated tickets, open a planning poker room called Sprint 47, and add each ticket as a round." One prompt, sprint room ready.

🪞Retros pre-seeded with real data

Walk into the retro with this sprint's incidents, releases, and customer reports already on the board, grouped by column. The team discusses, doesn't transcribe.

Action items with context

The assistant turns retro discussion into action items, assigns owners, and links each one back to the source thread or ticket. No copy-paste tax.

The deeper shift is that the scrum master role starts to look different. A lot of the meeting plumbing (collecting updates, finding stragglers, summarising last sprint) is work an AI can do reliably given the right tool surface. The human time goes back into the conversations that actually need a human: facilitating the difficult retro, coaching the team on a recurring impediment, helping a new joiner write better stories. For more on what this looks like in practice, see our piece on AI agents changing how we estimate and AI-assisted backlog refinement.

What you can actually do today

The current MCP-supporting clients are Claude Desktop, Claude Code, Cursor, Zed, ChatGPT, and a growing list of agent frameworks. On the server side, GitHub, Linear, Sentry, PagerDuty, Notion, Slack, Atlassian, and most modern SaaS tools either ship an MCP server or have one in beta.

Editorial illustration of a sprint board floating in the foreground with stylized chat bubbles streaming into it from the left, each bubble carrying a small icon for code, ticket, incident, or message, vibrant flat colors on a soft gradient background

Kollabe ships an MCP server too. Connect it to your AI client of choice and you get the full set of standup, retro, planning poker, and action item tools, with the same auth model described above, scoped to one organization, and a one-click revoke if you change your mind. The whole thing lives at kollabe.com/mcp along with copy-paste config snippets for each client.

If you would rather build something more bespoke than a chat session, the same endpoints are exposed through Kollabe's public REST API using personal access tokens. Same auth model, same shapes, no MCP client required. Useful for a Claude Skill that runs your team's exact standup format, a CI job that opens a planning poker room when a sprint kicks off, or a cron job that closes a retro every other Friday.
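As a sketch of the personal-access-token route: constructing (not sending) an authenticated request with the standard library. The URL, path, payload, and token below are placeholders, not Kollabe's actual API surface; check the published API docs for real routes:

```python
import urllib.request

# Hypothetical values for illustration only.
token = "pat_example_token"

req = urllib.request.Request(
    "https://example.com/api/standups",  # placeholder endpoint
    data=b'{"answers": ["yesterday", "today", "blockers"]}',
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# The request object is only constructed here, never sent.
```

The point is the shape: one bearer token in a header, plain JSON in the body, no OAuth dance, which is what makes this route practical for CI jobs and cron scripts.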

A reasonable starting point

If your team has not touched MCP yet, the lowest-risk on-ramp is to pick one ritual you find tedious and connect one MCP server. Standups are the usual first win because the value is obvious within a single morning. From there, the next sprint usually finds a second use case on its own. Try our retrospective template generator for a non-MCP starting point if you want to see how AI fits agile rituals before wiring up a server, then come back to this post when you are ready to plug an assistant into your real tools.

Frequently asked questions

Is MCP an Anthropic-only thing?

No. Anthropic published the spec, but it is open. OpenAI, Google, and several open-source clients implement it, and any model can sit behind a compliant client. The protocol does not care which model is on the other end.

Do you need to be a developer to use MCP?

To use it, no. Connecting an MCP server to Claude Desktop or Cursor is a few clicks in settings plus an OAuth approval. To build a server, yes; that part is still developer territory, but most teams will only ever consume servers their vendors publish.

How is MCP different from function calling or ChatGPT plugins?

Function calling is a feature of one model. ChatGPT plugins were OpenAI-specific. MCP is a transport-and-auth standard that any model and any tool can implement, so a server you publish once works in every compliant client. The interoperability is the point.

What happens to our existing API integrations?

They keep working. MCP usually sits on top of the same internal API your web app uses. You are not migrating anything; you are exposing a new surface for AI clients while your dashboards and scripts keep hitting the API as before.

Is it safe to let an AI assistant write to our tools?

With OAuth 2.1, PKCE, scoped tokens, and a clear revoke path, the security posture is the same as connecting any third-party app. The bigger risk is bad prompts, not bad protocol design. Start with read-only scopes, watch what the assistant does for a sprint, and grant write access once you trust the workflow.