Olostep REST APIs

Choose the right API endpoint for your job.

Olostep exposes focused HTTP APIs: Answers and Search for questions and discovery; Scrapes for known URLs; Maps for site URL inventory; Crawls for multi-page site walks; Batches for large arbitrary URL lists. Each row below links to a dedicated page with examples, FAQs, and documentation.

Official docs: Get started · Playground

Quick decisions

  • I have a URL and need page content (markdown, HTML, screenshot, JSON, …) → Open Scrapes
  • I need all discoverable URLs for one domain → Open Maps
  • I need to crawl many pages on a site from a start URL → Open Crawls
  • I have a large list of URLs to process in one job → Open Batches
  • I need an answer grounded on the live web (with optional JSON schema) → Open Answers
  • I need semantic web search — ranked links for a natural-language query → Open Search

All endpoints at a glance

Each API solves a different shape of problem. Follow each endpoint's dedicated page for code samples, pricing context, and guidance on how it pairs with other endpoints.

Answers

POST /v1/answers

Ask a question in natural language; get a source-backed answer and optional structured JSON.

Use when

  • Grounding LLMs on live web data
  • Enrichment when the input is a question, not a URL list

You get

An answer object with validated fields, citations, and NOT_FOUND when the web cannot support a claim — not just raw page text.

Tip: Use Search when you need a list of links to explore; use Answers when you need a single synthesized response.

Search

POST /v1/searches

Semantic web search from a plain-English query — deduplicated links with titles and descriptions.

Use when

  • Discovery without maintaining your own SERP scrapers
  • Feeding candidate URLs into Scrapes or Batches

You get

A search object with result.links (url, title, description) plus optional hosted JSON for the full result set.

Tip: Search returns links across the web for a query; Map returns URLs discovered for one site you already chose.
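The `result.links` shape (url, title, description) comes from the description above; the request field name `query` is an assumption. This sketch pulls candidate URLs out of a search response so they can be fed into Scrapes or Batches:

```python
# Build a POST /v1/searches body and extract URLs from the response.
# "query" is an assumed request field; result.links with url/title/description
# matches the response shape described above.
def build_search_payload(query: str) -> dict:
    return {"query": query}

def extract_urls(search_response: dict) -> list[str]:
    links = search_response.get("result", {}).get("links", [])
    return [link["url"] for link in links]

sample = {
    "result": {
        "links": [
            {"url": "https://example.com/a", "title": "A", "description": "..."},
            {"url": "https://example.com/b", "title": "B", "description": "..."},
        ]
    }
}
urls = extract_urls(sample)  # candidate URLs for Scrapes or Batches
```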

Scrapes

POST /v1/scrapes

Extract content from a URL you already know — markdown, HTML, text, JSON, screenshots, parsers, LLM extract.

Use when

  • One page or a small set of known URLs
  • Rich formats and structured extraction per page

You get

Per-URL content (markdown_content, html_content, json_content, etc.) ready for RAG, agents, or storage.

Tip: Use Batch when you have hundreds to thousands of URLs and want one job with predictable wall-clock time.
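A stdlib-only sketch of constructing (not sending) a Scrapes request. The payload field names (`url_to_scrape`, `formats`) and the base URL are assumptions; the `markdown_content`-style response fields are from the description above.

```python
import json
import urllib.request

API_BASE = "https://api.olostep.com/v1"  # assumed base URL

def build_scrape_request(api_key: str, url: str, formats: list[str]) -> urllib.request.Request:
    """Build (but do not send) a POST /v1/scrapes request.
    Field names below are illustrative; confirm against the official docs."""
    payload = {"url_to_scrape": url, "formats": formats}
    return urllib.request.Request(
        f"{API_BASE}/scrapes",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("YOUR_API_KEY", "https://example.com", ["markdown"])
# urllib.request.urlopen(req) would send it; the JSON response carries
# per-URL content fields such as markdown_content.
```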

Maps

POST /v1/maps

Discover URLs for a single domain — sitemaps, links, filters, and cursor pagination.

Use when

  • Site audits
  • Building URL lists before crawl or batch jobs

You get

A list of URLs on or under the target site, filterable with include/exclude patterns.

Tip: Map discovers URLs only; Crawl fetches content across many pages of a site.
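Since Maps uses cursor pagination, collecting a full URL inventory means looping until no cursor comes back. The field names (`urls`, `cursor`) are assumptions; the loop pattern is the point. `fetch_page` here is a stand-in for a real POST /v1/maps call:

```python
# Cursor-pagination sketch for Maps. Response field names ("urls", "cursor")
# are illustrative assumptions; pass the cursor back until the API stops
# returning one.
def collect_all_urls(fetch_page) -> list[str]:
    """fetch_page(cursor) -> dict with "urls" and an optional "cursor"."""
    urls: list[str] = []
    cursor = None
    while True:
        page = fetch_page(cursor)
        urls.extend(page.get("urls", []))
        cursor = page.get("cursor")
        if not cursor:
            return urls

# Stand-in for the API: two pages, the first carrying a cursor.
_pages = {
    None: {"urls": ["https://example.com/", "https://example.com/docs"], "cursor": "p2"},
    "p2": {"urls": ["https://example.com/blog"]},
}
all_urls = collect_all_urls(lambda cursor: _pages[cursor])
```

The resulting list is exactly what a follow-up Crawl or Batch job would take as input.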

Crawls

POST /v1/crawls

Walk a site from a start URL with depth and page limits; poll until completed; list pages and retrieve content.

Use when

  • Multi-page documentation or blogs on one site
  • Exploration + content in one workflow

You get

Crawl status, per-page retrieve_ids, then markdown/HTML/JSON via /v1/retrieve — similar to batch but site-shaped.

Tip: Use Batch when your URLs come from anywhere (not one crawl frontier); use Crawl for hierarchical site walks.
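The poll-until-completed step above can be sketched as a small loop. The status values and field names (`status`, `pages`, `retrieve_id`) are assumptions modeled on the description; `get_status` stands in for a real GET on the crawl id:

```python
import time

# Poll a crawl until it reaches a terminal state. Status strings and field
# names are illustrative assumptions; substitute a real status request.
def wait_for_crawl(get_status, poll_seconds: float = 0.0, max_polls: int = 100) -> dict:
    """get_status() -> dict with a "status" field."""
    for _ in range(max_polls):
        state = get_status()
        if state.get("status") in ("completed", "failed"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("crawl did not finish within max_polls")

# Stand-in for the API: pending, then in progress, then completed.
_states = iter([
    {"status": "pending"},
    {"status": "in_progress"},
    {"status": "completed", "pages": [{"retrieve_id": "r1"}, {"retrieve_id": "r2"}]},
])
final = wait_for_crawl(lambda: next(_states))
# Each retrieve_id would then go to /v1/retrieve for markdown/HTML/JSON.
```

In production you would set `poll_seconds` to a few seconds and add backoff.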

Batches

POST /v1/batches

Process large URL lists in one job — predictable completion time, parsers, webhooks, metadata, cursor over items.

Use when

  • Thousands of arbitrary URLs (catalogs, SERPs, exports)
  • Pipelines that need webhooks and batch-level parsers

You get

Batch status, paginated items, then per-item retrieve — ideal for scale without hand-rolling concurrency.

Tip: Parallel Scrapes are often faster for tiny sets; Batch wins for large lists and operational simplicity.

Which endpoint should I use?

Short answers for common integration choices.

I have a single URL — which endpoint?

Use POST /v1/scrapes (Scrapes) to extract markdown, HTML, JSON, screenshots, or parser output from that page.

I need every URL on a website — which endpoint?

Use POST /v1/maps (Maps) for domain-level URL discovery with optional path filters and cursor pagination.

I need to walk a site and get many pages of content — which endpoint?

Use POST /v1/crawls (Crawls) when you want a multi-page crawl from a start URL, then list pages and retrieve content per page.

I have thousands of arbitrary URLs — which endpoint?

Use POST /v1/batches (Batches) for large lists with predictable job time, optional parsers, webhooks, and metadata.

I need a direct answer from the web — which endpoint?

Use POST /v1/answers (Answers) for agentic Q&A with validation and optional structured JSON — not just a link list.

I need ranked links for a question — which endpoint?

Use POST /v1/searches (Search) for semantic web search that returns deduplicated links with titles and descriptions; then scrape selected URLs if you need full page content.

When are parallel Scrapes better than a Batch?

For a small number of URLs, parallel /v1/scrapes calls often finish faster. Batches shine for large lists and operational patterns (webhooks, single job id, item cursoring).
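The parallel-Scrapes pattern for tiny sets can be sketched with a thread pool; `scrape_one` here is a stand-in for a real HTTP call to POST /v1/scrapes, and the `markdown_content` field name follows the Scrapes description above:

```python
from concurrent.futures import ThreadPoolExecutor

# Fire /v1/scrapes calls in parallel threads for a handful of URLs.
# scrape_one is a stand-in; replace its body with a real API call.
def scrape_one(url: str) -> dict:
    return {"url": url, "markdown_content": f"# content of {url}"}

def scrape_parallel(urls: list[str], max_workers: int = 8) -> list[dict]:
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(scrape_one, urls))  # preserves input order

results = scrape_parallel(["https://example.com/a", "https://example.com/b"])
```

Past a few hundred URLs, this hand-rolled concurrency is exactly what a single Batch job replaces.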

Build on the full API

500 free credits to try — explore the playground or read the docs.