WebPageSnap

200+ Edge Nodes
<50ms Response
95%+ Cache Hit
100K Free/Day

Live Demo

https://webpagesnap.com/api/scrape?url=https://example.com&format=json

API Documentation

01 Smart Cache: KV storage with a 7-day TTL and a 95%+ hit rate

02 Global Edge: 200+ edge nodes serve each request from the nearest location

03 Multi Format: JSON structured data or raw HTML

04 Smart Redirect: automatically follows JavaScript redirects to the final page

05 Anti-Bot Bypass: realistic browser simulation

Endpoint

GET https://webpagesnap.com/api/scrape

Parameters

| Param   | Type    | Required | Description                        |
|---------|---------|----------|------------------------------------|
| url     | string  | yes      | Target webpage URL                 |
| format  | string  | no       | Output: json (default) or html     |
| nocache | boolean | no       | Skip cache and force a fresh fetch |
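Given these parameters, a request URL can be assembled like this (a minimal Python sketch; the helper name `build_scrape_url` is ours, not part of the API):

```python
from urllib.parse import urlencode

API_BASE = "https://webpagesnap.com/api/scrape"

def build_scrape_url(target, fmt="json", nocache=False):
    # urlencode percent-encodes the target URL, as the `url` parameter requires
    params = {"url": target, "format": fmt}
    if nocache:
        params["nocache"] = "true"
    return f"{API_BASE}?{urlencode(params)}"

print(build_scrape_url("https://example.com"))
# https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fexample.com&format=json
```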

Claude Code SKILL

Install this SKILL to enable Claude Code to automatically scrape web pages using the WebPageSnap API. Once installed, simply ask Claude Code to fetch any webpage and it will use this API.

1. Create the skill directory (Terminal):

   mkdir -p ~/.claude/skills/web-scraper

2. Create ~/.claude/skills/web-scraper/SKILL.md with the following content:
---
name: web-scraper
description: |
  Scrape and extract structured content from any web page URL. Use when:
  - User wants to fetch/scrape/crawl a webpage
  - User needs to extract page metadata (title, description, OG tags, Twitter cards)
  - User wants to analyze a website's content
  - User provides a URL and asks "what's on this page" or similar
  - User needs HTML body content from a URL
---

# Web Scraper

Fetch and parse web page content via the WebPageSnap API.

## API Usage

```bash
curl "https://webpagesnap.com/api/scrape?url=<URL_ENCODED>&format=json"
```

**Parameters:**
- `url`: URL-encoded target webpage (required)
- `format`: Response format; use `json` (the API default, and what this skill expects)

## Response Structure

```json
{
  "success": true,
  "url": "https://example.com/",
  "finalUrl": "https://example.com/",
  "format": "json",
  "header": {
    "title": "Page Title",
    "description": "Meta description",
    "keywords": "keyword1, keyword2",
    "author": "Author Name",
    "charset": "utf-8",
    "viewport": "width=device-width, initial-scale=1",
    "ogTitle": "Open Graph Title",
    "ogDescription": "Open Graph Description",
    "ogImage": "https://example.com/og-image.png",
    "ogUrl": "https://example.com",
    "twitterCard": "summary_large_image",
    "twitterTitle": "Twitter Card Title",
    "twitterDescription": "Twitter Card Description",
    "twitterImage": "https://example.com/twitter-image.png"
  },
  "body": "<html>...</html>"
}
```

## Workflow

1. URL-encode the target URL
2. Call the API with curl or WebFetch tool
3. Parse the JSON response
4. Extract relevant data from `header` (metadata) or `body` (HTML content)
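The steps above can be sketched in Python (an illustrative sketch; `scrape` and `pick_metadata` are hypothetical helper names, and the network call assumes the endpoint is reachable):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def scrape(target):
    # Steps 1-3: URL-encode the target, call the API, parse the JSON response
    api = ("https://webpagesnap.com/api/scrape?url="
           + quote(target, safe="") + "&format=json")
    with urlopen(api) as resp:
        return json.load(resp)

def pick_metadata(result):
    # Step 4: extract relevant data from the `header` block
    header = result.get("header", {})
    return {
        "title": header.get("title"),
        "description": header.get("description"),
        "final_url": result.get("finalUrl"),
    }
```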

## Example

```bash
curl "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fgithub.com&format=json"
```
How to Use

After installation, simply ask Claude Code to scrape or fetch any webpage. For example: "Fetch the content from https://example.com" or "What's on this page: https://github.com". Claude Code will automatically use this SKILL to call the WebPageSnap API.

Frequently Asked Questions

What is a web scraper API?

A web scraper API is a service that programmatically extracts content from websites. Our web scraper API provides structured data extraction with JSON and HTML output formats, making it easy to integrate web scraping into your applications.

How does this web scraper API handle JavaScript pages?

Our web scraper API automatically detects and follows JavaScript redirects. The API simulates real browser behavior to ensure you get the final page content, even for JavaScript-heavy websites.

Is the web scraper API free to use?

Yes, our web scraper API offers a generous free tier with 100,000 requests per day. The API includes smart caching to maximize your quota efficiency, with a 95%+ cache hit rate for frequently accessed pages.

What output formats does the web scraper API support?

The web scraper API supports two output formats: JSON for structured data with metadata extraction, and HTML for raw page content. Choose JSON format to get title, description, Open Graph tags, and body content in a structured response.

How fast is the web scraper API?

Our web scraper API delivers responses in approximately 50ms for cached content. The web scraper API uses Cloudflare's global edge network with 200+ nodes worldwide, ensuring fast response times regardless of your location.

Does the web scraper API extract metadata?

Yes, the web scraper API automatically extracts metadata including page title, meta description, Open Graph tags, Twitter cards, and canonical URLs. This makes our web scraper API ideal for link previews and content aggregation.
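As an example of the link-preview use case, a preview card can be assembled from the documented `header` fields (a sketch; `link_preview` is our illustrative helper, and it assumes the JSON response shape shown in the API documentation above):

```python
def link_preview(result):
    # Prefer Open Graph fields, falling back to the plain meta tags
    h = result.get("header", {})
    return {
        "title": h.get("ogTitle") or h.get("title"),
        "description": h.get("ogDescription") or h.get("description"),
        "image": h.get("ogImage"),
        "url": h.get("ogUrl") or result.get("finalUrl"),
    }
```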

Can I use the web scraper API for commercial projects?

Absolutely! The web scraper API is designed for both personal and commercial use. Our web scraper API provides enterprise-grade reliability with smart caching and global CDN distribution for production applications.