Fiddler Feature Verification
Analyze the traffic generated by a feature run, decide whether the observed HTTP behavior
looks correct, and produce a grouped-by-endpoint summary with likely issues.
Operating rules
- This skill is MCP-first. Use Fiddler Everywhere MCP tools for traffic analysis whenever they are available in the current session.
- Do not fall back to shell tools, workspace file scans, or exported session dumps to inspect traffic if the Fiddler MCP tools are available.
- Use the filtering tool whenever it helps narrow a large or noisy capture to the traffic that matters for the feature verification.
- Keep the analysis practical. The goal is to verify whether the feature appears to work, not to produce an exhaustive packet-level audit.
- If the Fiddler MCP tools are not available in the current session, stop and tell the user to make them available first, then retry.
- Never manually probe the server or hand-craft raw MCP protocol requests when the runtime already exposes Fiddler MCP tools.
- Use only the tool names that the host advertises in the current session. Never invent or assume tool names beyond the ones available.
Prerequisites check
- Verify that Fiddler Everywhere is installed.
- Verify that the Fiddler Everywhere MCP tools are available.
Useful tools and how to use them
Status tool. Use this first to confirm that Fiddler is reachable and in a usable state.
What it helps verify:
- Whether the user is logged in
- Whether Fiddler appears to be capturing traffic
- Whether there are browser or terminal instances attached
- Whether HTTPS inspection prerequisites look healthy
Capture overview tool. Use this as a fast sanity check before deeper analysis.
What it helps verify:
- Whether anything has been captured at all
- Whether the user likely ran the feature recently enough to analyze it
Session list tool. This is the main tool for verification. Use it to pull the captured session list, then narrow the traffic locally in memory.
Use it to:
- Find the requests most likely related to the feature run
- Identify the order of requests
- Spot failures, retries, redirects, preflights, and slow calls
- Build endpoint groups for the final summary
When narrowing the list, prefer clues from the user's request such as:
- Hostname
- URL path or path fragment
- HTTP method
- Feature name or keyword
- A rough time window such as "just now" or "after clicking Save"
If the session list is already manageable, narrowing locally in memory is usually enough; a minimal narrowing sketch follows below. If the capture is large or noisy, use the filtering tool to focus Fiddler on the host, endpoint family, method, or failure pattern that matters.
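The sketch below shows one way to do that in-memory narrowing. It is a minimal illustration that assumes each session is a dict with "url", "method", and "timestamp" fields; those field names are assumptions for the example, not the actual MCP response schema.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse

def narrow_sessions(sessions, host=None, path_fragment=None,
                    method=None, window_minutes=None):
    """Shortlist captured sessions relevant to the feature run.

    Assumes each session is a dict with "url", "method", and
    "timestamp" (ISO 8601 with timezone) keys; these field names
    are illustrative, not the actual MCP response schema.
    """
    cutoff = None
    if window_minutes is not None:
        cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)

    shortlist = []
    for s in sessions:
        parsed = urlparse(s["url"])
        if host and parsed.hostname != host:
            continue
        if path_fragment and path_fragment not in parsed.path:
            continue
        if method and s["method"].upper() != method.upper():
            continue
        if cutoff and datetime.fromisoformat(s["timestamp"]) < cutoff:
            continue
        shortlist.append(s)
    return shortlist

# Example: recent traffic for a "Save" feature on an assumed host.
# narrow_sessions(all_sessions, host="api.example.com",
#                 path_fragment="/save", window_minutes=5)
```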
Session details tool. Use this after the session list tool identifies the interesting sessions.
Good candidates for detail inspection:
- Any session with a status code >= 400
- Any session with an empty or missing response
- The slowest session for an endpoint
- A representative successful request for an important endpoint
- OPTIONS preflight requests and the request immediately after them when CORS might be involved
Use the details to inspect:
- Request headers and response headers
- Request and response bodies
- Redirect targets
- Content length and content type
- Auth headers, cookies, validation messages, and error payloads
Rate limit: avoid firing more than 5 detail calls in rapid succession.
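The following sketch is one way to pick those candidates while respecting the cap. The session fields it reads ("id", "status", "endpoint", "duration_ms") are illustrative assumptions about the shortlist shape, not the real schema.

```python
def pick_detail_candidates(sessions, max_calls=5):
    """Choose sessions worth a detail fetch, capped at `max_calls`.

    Assumes each session dict carries "id", "status" (int or None),
    "endpoint" (host + normalized path), and "duration_ms" keys;
    the field names are illustrative, not the real schema.
    """
    candidates = []

    # Failures and sessions that never got a response.
    candidates += [s for s in sessions
                   if s.get("status") is None or s["status"] >= 400]

    # The slowest session for each endpoint.
    slowest = {}
    for s in sessions:
        best = slowest.get(s["endpoint"])
        if best is None or s["duration_ms"] > best["duration_ms"]:
            slowest[s["endpoint"]] = s
    candidates += slowest.values()

    # One representative success per endpoint.
    covered = set()
    for s in sessions:
        if 200 <= (s.get("status") or 0) < 300 and s["endpoint"] not in covered:
            covered.add(s["endpoint"])
            candidates.append(s)

    # Deduplicate while preserving order, then cap for the rate limit.
    unique, seen = [], set()
    for s in candidates:
        if s["id"] not in seen:
            seen.add(s["id"])
            unique.append(s)
    return unique[:max_calls]
```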
Filtering tool. Use this when filtering will make the analysis faster or more reliable.
Possible use cases:
- Show only traffic for one host
- Show only failing requests
- Focus the UI on a particular endpoint family
- Reduce a very large capture to the recent feature run you actually need to inspect
- Isolate retries, auth failures, or a single request method such as POST
It is often useful when the session list tool returns too much unrelated traffic.
Suggested workflow
This workflow is intentionally flexible. Adapt it to the feature and the amount of captured traffic.
- Understand the feature scope.
- Extract any useful clue from the user's request: action performed, host, path fragment, method, or expected endpoint.
- If the request is vague, analyze the most recent traffic and say that the result is based on the recent capture.
- Pull the session list with the session list tool.
- Shortlist sessions that match the feature scope.
- If no clear clue is available, focus on the most recent burst of related sessions rather than the entire capture history.
- If the capture is too noisy to reason about comfortably, use the filtering tool to narrow the visible traffic before continuing.
- Group traffic by endpoint.
- Group by host + normalized path.
- Strip query strings for grouping.
- Treat numeric IDs and UUID-like segments as path variables when useful, so paths like /orders/123 and /orders/456 are understood as the same endpoint family (a normalization sketch follows this step).
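A minimal normalization sketch, assuming plain string URLs; the regex heuristics below are one reasonable choice, not the only valid one.

```python
import re
from urllib.parse import urlparse

UUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$")
NUMERIC_RE = re.compile(r"^\d+$")

def endpoint_key(method, url):
    """Build a grouping key: strip the query string and collapse
    numeric-ID and UUID-like path segments into placeholders."""
    parsed = urlparse(url)
    segments = []
    for seg in parsed.path.split("/"):
        if NUMERIC_RE.match(seg):
            segments.append("{id}")
        elif UUID_RE.match(seg):
            segments.append("{uuid}")
        else:
            segments.append(seg)
    return f"{method.upper()} {parsed.hostname} {'/'.join(segments)}"

# endpoint_key("GET", "https://api.example.com/orders/123?expand=items")
# and endpoint_key("GET", "https://api.example.com/orders/456")
# both yield "GET api.example.com /orders/{id}".
```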
- Review the sequence.
- Check whether the request flow looks plausible for the feature.
- Look for expected follow-up calls such as create then fetch, preflight then actual request, upload then status poll, or save then refresh.
- If a needed follow-up call is absent, call that out as a possible issue rather than a certainty unless the evidence is strong; one way to check for this is sketched below.
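The sketch below is one hedged way to run such a check: it flags a trigger request whose expected follow-up never appears later in the capture. The expected pairs are feature-specific assumptions you would supply, and the session field names are illustrative.

```python
def missing_followups(sessions, expected_pairs):
    """Flag trigger requests that were never followed by their expected call.

    `sessions` must be in capture order; each dict is assumed to have
    "method" and "endpoint" keys (illustrative names). `expected_pairs`
    maps a trigger (method, endpoint) to the follow-up (method, endpoint)
    the feature is expected to make afterwards.
    """
    issues = []
    for i, s in enumerate(sessions):
        trigger = (s["method"], s["endpoint"])
        if trigger not in expected_pairs:
            continue
        follow = expected_pairs[trigger]
        later = [(t["method"], t["endpoint"]) for t in sessions[i + 1:]]
        if follow not in later:
            issues.append(f"Possible issue: {trigger} was not followed by {follow}")
    return issues

# Example (assumed flow): saving should refresh the list afterwards.
# missing_followups(shortlist, {
#     ("POST", "api.example.com /orders"): ("GET", "api.example.com /orders"),
# })
```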
- Inspect representative details.
- Fetch details for failures, slow calls, mixed-status endpoints, and one or two key successful endpoints.
- Use response bodies and headers as evidence when explaining whether the feature appears healthy.
- Decide whether the feature appears to work properly.
- A healthy feature run usually shows the expected endpoints, mostly successful status codes, reasonable latency, and no repeated failures.
- If the traffic is incomplete or ambiguous, say so directly. A rough verdict heuristic is sketched below.
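As a rough illustration only, the heuristic below maps those signals to the verdict labels used in the report; the thresholds are arbitrary assumptions and should yield to feature-specific judgment.

```python
def rough_verdict(sessions, expected_endpoints):
    """Derive a tentative verdict from a shortlist of sessions.

    Assumes each session dict has "status" and "endpoint" keys
    (illustrative names). Thresholds are arbitrary starting points.
    """
    if not sessions:
        return "Inconclusive"

    seen = {s["endpoint"] for s in sessions}
    missing = set(expected_endpoints) - seen
    failures = [s for s in sessions if (s.get("status") or 0) >= 400]
    failure_rate = len(failures) / len(sessions)

    if not missing and failure_rate == 0:
        return "Feature appears healthy"
    if missing and failure_rate > 0.5:
        return "Feature likely failed"
    return "Feature appears partially successful"
```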
Output format
Do not dump raw JSON. Write a plain-language verification report with these sections.
Feature Verification
Overall verdict: [Feature appears healthy / Feature appears partially successful / Feature likely failed / Inconclusive]
Traffic window: [what part of the capture you analyzed]
Endpoint summary:
- METHOD HOST /normalized/path
Calls: [N]
Statuses: [e.g. 200 x3, 401 x1]
Timing: [avg X ms, max Y ms]
What happened: [plain-language summary of what this endpoint appears to do]
Evidence: [optional header/body/status detail when useful]
- METHOD HOST /another/path
Calls: [N]
Statuses: [...]
Timing: [...]
What happened: [...]
Possible issues:
- ⚠️ [Endpoint] [Issue name] — [what looks wrong and why it matters]
- ⚠️ [Endpoint] [Issue name] — [supporting evidence]
Conclusion:
- [Short answer on whether the feature appears to work properly]
Output requirements
- Group the summary by endpoint, not by raw session ID.
- Include status-code distribution and timing for each endpoint group.
- If there are no obvious issues, say so explicitly:
No obvious issues detected in the analyzed traffic.
- If there are issues, prefix each issue with ⚠️, name it clearly, and explain what it appears to be.
- If the capture is ambiguous or incomplete, say that the conclusion is tentative.