Each API solves a different shape of problem. See each API's landing page for code samples, pricing context, and how it pairs with the other endpoints.
Ask a question in natural language; get a source-backed answer and optional structured JSON.
Use when
- Grounding LLMs on live web data
- Enrichment when the input is a question, not a URL list
You get
An answer object with validated fields, citations, and NOT_FOUND when the web cannot support a claim — not just raw page text.
Tip: Use Search when you need a list of links to explore; use Answers when you need a single synthesized response.
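The NOT_FOUND behavior lends itself to a simple client-side guard. A minimal Python sketch, assuming a hypothetical payload with `answer` and `citations` fields and a `NOT_FOUND` sentinel (the real schema may differ):

```python
def grounded_or_none(payload):
    """Return (answer, citations), or None when the web could not support the claim."""
    # "answer", "citations", and the NOT_FOUND sentinel are assumptions
    # modeled on the description above, not a confirmed schema.
    if payload.get("answer") == "NOT_FOUND":
        return None
    return payload["answer"], payload.get("citations", [])

grounded_or_none({"answer": "NOT_FOUND", "citations": []})  # None
```

Treating NOT_FOUND as `None` keeps unsupported claims out of downstream prompts instead of passing them along as text.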
Semantic web search from a plain-English query — deduplicated links with titles and descriptions.
Use when
- Discovery without maintaining your own SERP scrapers
- Feeding candidate URLs into Scrapes or Batches
You get
A search object with result.links (url, title, description) plus optional hosted JSON for the full result set.
Tip: Search returns links across the web for a query; Map returns URLs discovered for one site you already chose.
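Feeding Search into Scrapes or Batches mostly means collecting URLs from `result.links`. A sketch over a made-up response (the envelope shape is an assumption; only `result.links` with `url`/`title`/`description` comes from the description above):

```python
def candidate_urls(search):
    """Collect unique URLs from result.links in order, e.g. to submit as a batch.
    Dedupe is defensive, for runs that merge links from several queries."""
    seen, urls = set(), []
    for link in search["result"]["links"]:
        if link["url"] not in seen:
            seen.add(link["url"])
            urls.append(link["url"])
    return urls

# Illustrative response shape, not a real API payload.
search_result = {"result": {"links": [
    {"url": "https://example.com/a", "title": "A", "description": "..."},
    {"url": "https://example.com/a", "title": "A again", "description": "..."},
    {"url": "https://example.org/b", "title": "B", "description": "..."},
]}}
candidate_urls(search_result)  # ["https://example.com/a", "https://example.org/b"]
```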
Extract content from a URL you already know — markdown, HTML, text, JSON, screenshots, parsers, and LLM-based extraction.
Use when
- One page or a small set of known URLs
- Rich formats and structured extraction per page
You get
Per-URL content (markdown_content, html_content, json_content, etc.) ready for RAG, agents, or storage.
Tip: Use Batch when you have hundreds to thousands of URLs and want one job with predictable wall-clock time.
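Since each result can carry several content fields, a common pattern is to take the richest format available per URL. A sketch assuming the field names listed above sit on a plain dict:

```python
def best_content(result):
    """Pick the first populated content field, in order of preference."""
    # Field names follow the description above; order of preference is a choice,
    # not something the API mandates.
    for field in ("markdown_content", "html_content", "json_content"):
        if result.get(field):
            return field, result[field]
    return None, None

best_content({"markdown_content": "# Title\n\nBody", "html_content": None})
# ("markdown_content", "# Title\n\nBody")
```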
Discover URLs for a single domain — sitemaps, links, filters, and cursor pagination.
Use when
- Site audits
- Building URL lists before crawl or batch jobs
You get
A list of URLs on or under the target site, filterable with include/exclude patterns.
Tip: Map discovers URLs only; Crawl fetches content across many pages of a site.
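The include/exclude semantics can be mirrored client-side for local post-filtering of a mapped URL list. A sketch using shell-style wildcards via Python's `fnmatch`; the pattern syntax the API itself accepts is an assumption and may differ:

```python
from fnmatch import fnmatch

def filter_urls(urls, include=None, exclude=None):
    """Keep URLs matching any include pattern and no exclude pattern."""
    kept = []
    for url in urls:
        if include and not any(fnmatch(url, p) for p in include):
            continue
        if exclude and any(fnmatch(url, p) for p in exclude):
            continue
        kept.append(url)
    return kept

urls = ["https://example.com/docs/a",
        "https://example.com/blog/x",
        "https://example.com/docs/b"]
filter_urls(urls, include=["*/docs/*"])  # keeps only the two /docs/ URLs
```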
Walk a site from a start URL with depth and page limits; poll until completed; list pages and retrieve content.
Use when
- Multi-page documentation or blogs on one site
- Exploration and content retrieval in one workflow
You get
Crawl status, per-page retrieve_ids, then markdown/HTML/JSON via /v1/retrieve — similar to Batch but site-shaped.
Tip: Use Batch when your URLs come from anywhere (not one crawl frontier); use Crawl for hierarchical site walks.
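The poll-until-completed loop can be sketched independently of any HTTP client. The status values and field names here (`status`, `pages`, `retrieve_id`) are assumptions modeled on the description above; a real loop would fetch the crawl's status endpoint each iteration:

```python
import time

def poll_crawl(fetch_status, interval=0.0, max_attempts=50):
    """Poll until the crawl reports completed, then return its retrieve_ids."""
    for _ in range(max_attempts):
        state = fetch_status()  # in practice: one GET against the crawl status endpoint
        if state["status"] == "completed":
            return [page["retrieve_id"] for page in state["pages"]]
        time.sleep(interval)
    raise TimeoutError("crawl did not complete within max_attempts polls")

# Simulated status sequence standing in for successive API responses.
statuses = iter([
    {"status": "running"},
    {"status": "completed", "pages": [{"retrieve_id": "r1"}, {"retrieve_id": "r2"}]},
])
poll_crawl(lambda: next(statuses))  # ["r1", "r2"]
```

A real client would set `interval` to a few seconds and bound `max_attempts` against expected crawl size.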
Process large URL lists in one job — predictable completion time, parsers, webhooks, metadata, cursor over items.
Use when
- Thousands of arbitrary URLs (catalogs, SERPs, exports)
- Pipelines that need webhooks and batch-level parsers
You get
Batch status, paginated items, then per-item retrieve — ideal for scale without hand-rolling concurrency.
Tip: Parallel Scrapes are often faster for tiny sets; Batch wins for large lists and operational simplicity.
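Cursoring over items follows a familiar loop: fetch a page, collect its items, continue while a next cursor is returned. A sketch with a simulated pager; the `items` and `next_cursor` names are assumptions, not the confirmed response schema:

```python
def all_items(fetch_page):
    """Walk cursor pagination to exhaustion and return every item."""
    cursor, items = None, []
    while True:
        page = fetch_page(cursor)  # in practice: GET the batch items endpoint with ?cursor=...
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if not cursor:
            return items

# Two simulated pages keyed by cursor, standing in for API responses.
pages = {
    None: {"items": [{"id": 1}, {"id": 2}], "next_cursor": "c1"},
    "c1": {"items": [{"id": 3}], "next_cursor": None},
}
all_items(lambda cursor: pages[cursor])  # [{"id": 1}, {"id": 2}, {"id": 3}]
```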