# AgentFirstTools

> Independent guidance, checklists, benchmarks, and audit services for choosing tools that AI agents can use reliably.

AgentFirstTools helps agent operators, builders, founders, engineers, and technical teams choose and use tools that autonomous AI agents can inspect, call, verify, retry, and recover from. The site focuses on practical evidence: benchmarks, checklists, implementation patterns, and tool-selection support.

## Start here

- [Home](https://agentfirsttools.com/): Rankings, benchmark work, comparisons, and updates for teams choosing software their AI agents can use reliably.
- [Benchmark hub](https://agentfirsttools.com/benchmarks/): Current and upcoming benchmark tracks for agent-tool categories, starting with search APIs for agent tasks.
- [Agent tool selection hub](https://agentfirsttools.com/tools/): Scorecards, checklists, implementation patterns, benchmarks, and audit support for choosing tools agents can use.
- [Best search APIs for AI agents](https://agentfirsttools.com/tools/search-apis-for-ai-agents/): Buyer guide comparing SerpAPI, Brave Search API, and Tavily for official-documentation retrieval using the May 2026 benchmark evidence.
- [Official docs retrieval for AI agents](https://agentfirsttools.com/guide/official-docs-retrieval-for-ai-agents/): Playbook for helping agents find, cite, and verify official documentation before acting on code, API, or infrastructure tasks.
- [What makes a tool agent-first?](https://agentfirsttools.com/guide/what-makes-a-tool-agent-first/): Practical criteria for inspectability, scriptability, safety, state, recovery, and workflow fit.
- [Agent-first tool checklists](https://agentfirsttools.com/checklists/): API, CLI, and MCP server evaluation checklists.
- [Agent-first audit](https://agentfirsttools.com/services/agent-first-audit/): Evidence-backed audit for teams deciding whether a tool is suitable for AI-agent workflows.
- [Sample AI-agent tool audit report](https://agentfirsttools.com/services/agent-first-audit/sample-report/): Preview the structure, evidence excerpts, scorecard summary, and recommendation format of an audit deliverable.

## Checklists and scorecards

- [Agent-first tool scorecard](https://agentfirsttools.com/tools/agent-first-scorecard/): Scorecard for evaluating inspectability, scriptability, bounded action, verification, recovery, and composability. Includes reusable [Markdown](https://agentfirsttools.com/assets/agent-first-scorecard-template.md) and [CSV](https://agentfirsttools.com/assets/agent-first-scorecard-template.csv) worksheets.
- [Best search APIs for AI agents](https://agentfirsttools.com/tools/search-apis-for-ai-agents/): Practical comparison of SerpAPI, Brave Search API, and Tavily for search tasks where agents need official source URLs rather than plausible summaries.
- [Agent-first API checklist](https://agentfirsttools.com/checklists/agent-first-api-checklist/): Checklist for APIs agents need to inspect, call, verify, retry, and recover from safely.
- [Agent-first CLI checklist](https://agentfirsttools.com/checklists/agent-first-cli-checklist/): Checklist for command-line tools agents need to discover, run, verify, retry, and recover from safely.
- [Agent-first MCP server checklist](https://agentfirsttools.com/checklists/agent-first-mcp-server-checklist/): Checklist for MCP servers agents need to discover, call, verify, recover from, and operate safely.

## Benchmarks and evidence

- [Official docs search API benchmark — May 2026](https://agentfirsttools.com/benchmarks/official-docs-search-api-may-2026/): 30-task cohort comparing Brave Search API, SerpAPI, and Tavily on official-documentation retrieval with Success@k, MRR, latency, and downloadable evidence tables.
- [Benchmark cost guide](https://agentfirsttools.com/benchmarks/benchmark-costs-for-agent-tools/): Budget ranges, cost drivers, and planning guidance for benchmarking tools AI agents need to use reliably.
- [Search API benchmark pilot note — May 2026](https://agentfirsttools.com/benchmarks/search-api-pilot-may-2026/): Low-profile lab note from an early search API benchmark pilot. This is not a provider ranking or buying recommendation.

## Implementation patterns

- [Agent action receipts](https://agentfirsttools.com/patterns/agent-action-receipts/): Pattern for returning durable receipts from agent-facing tools: IDs, status URLs, diffs, audit events, and next actions.

## Contact and data handling

- [Privacy note](https://agentfirsttools.com/privacy/): How update-list and audit-inquiry submissions, privacy-friendly measurement, server logs, and data retention are handled.

## Full context

- [llms-full.txt](https://agentfirsttools.com/llms-full.txt): Expanded machine-readable overview of AgentFirstTools pages, positioning, and evaluation criteria.