Live Demo
https://webpagesnap.com/api/scrape?url=https://example.com&format=json
- KV-backed caching with a 7-day TTL and a 95%+ hit rate
- 200+ edge nodes serve responses from the nearest location
- JSON structured data or raw HTML output
- Auto-follows JavaScript redirects to the final page
- Realistic browser simulation
https://webpagesnap.com/api/scrape
Parameters: `url`, `format` (`json`, default, or `html`), `nocache`

Install this SKILL to enable Claude Code to automatically scrape web pages using the WebPageSnap API. Once installed, simply ask Claude Code to fetch any webpage and it will use this API.

mkdir -p ~/.claude/skills/web-scraper

Then save the skill definition below as SKILL.md in that directory:
---
name: web-scraper
description: |
  Scrape and extract structured content from any web page URL. Use when:
  - User wants to fetch/scrape/crawl a webpage
  - User needs to extract page metadata (title, description, OG tags, Twitter cards)
  - User wants to analyze a website's content
  - User provides a URL and asks "what's on this page" or similar
  - User needs HTML body content from a URL
---
# Web Scraper
Fetch and parse web page content via the WebPageSnap API.
## API Usage
```bash
curl "https://webpagesnap.com/api/scrape?url=<URL_ENCODED>&format=json"
```
**Parameters:**
- `url`: URL-encoded target webpage (required)
- `format`: Response format, use `json` (required)
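For example, the target URL can be percent-encoded on the command line before the request. A minimal sketch, assuming `jq` is installed (any URL-encoding tool works):

```bash
# Percent-encode the target URL, then call the API (hypothetical target page)
TARGET="https://example.com/docs?page=2"
ENCODED=$(printf '%s' "$TARGET" | jq -sRr '@uri')
curl "https://webpagesnap.com/api/scrape?url=${ENCODED}&format=json"
```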
## Response Structure
```json
{
  "success": true,
  "url": "https://example.com/",
  "finalUrl": "https://example.com/",
  "format": "json",
  "header": {
    "title": "Page Title",
    "description": "Meta description",
    "keywords": "keyword1, keyword2",
    "author": "Author Name",
    "charset": "utf-8",
    "viewport": "width=device-width, initial-scale=1",
    "ogTitle": "Open Graph Title",
    "ogDescription": "Open Graph Description",
    "ogImage": "https://example.com/og-image.png",
    "ogUrl": "https://example.com",
    "twitterCard": "summary_large_image",
    "twitterTitle": "Twitter Card Title",
    "twitterDescription": "Twitter Card Description",
    "twitterImage": "https://example.com/twitter-image.png"
  },
  "body": "<html>...</html>"
}
```
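The `header` object can be filtered down to just the fields you need. A quick sketch, assuming `jq` is available:

```bash
# Fetch a page and keep only selected metadata plus the resolved final URL
curl -s "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fexample.com&format=json" \
  | jq '{title: .header.title, description: .header.description, ogImage: .header.ogImage, finalUrl: .finalUrl}'
```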
## Workflow
1. URL-encode the target URL
2. Call the API with curl or WebFetch tool
3. Parse the JSON response
4. Extract relevant data from `header` (metadata) or `body` (HTML content)
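Put together, the four steps look roughly like this (a sketch with a hypothetical target URL; `jq` assumed for encoding and parsing):

```bash
# 1. URL-encode the target
TARGET="https://example.com/"
ENCODED=$(printf '%s' "$TARGET" | jq -sRr '@uri')

# 2. Call the API
RESPONSE=$(curl -s "https://webpagesnap.com/api/scrape?url=${ENCODED}&format=json")

# 3. Parse the JSON and pull metadata from `header`
echo "$RESPONSE" | jq -r '.header.title, .header.description'

# 4. ...or save the raw HTML from `body` for further processing
echo "$RESPONSE" | jq -r '.body' > page.html
```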
## Example
```bash
curl "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fgithub.com&format=json"
```
After installation, simply ask Claude Code to scrape or fetch any webpage. For example: "Fetch the content from https://example.com" or "What's on this page: https://github.com". Claude Code will automatically use this SKILL to call the WebPageSnap API.
**What is a web scraper API?**
A web scraper API is a service that programmatically extracts content from websites. WebPageSnap provides structured data extraction with JSON and HTML output formats, making it easy to integrate web scraping into your applications.

**Does it handle JavaScript redirects?**
Yes. The API automatically detects and follows JavaScript redirects, simulating real browser behavior so you get the final page content even on JavaScript-heavy websites.
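You can confirm a redirect was followed by comparing the requested and resolved URLs in the response. A small sketch, assuming `jq` and a hypothetical redirecting page:

```bash
# `url` is what you requested; `finalUrl` is where the page actually landed
curl -s "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fexample.com%2Fold-page&format=json" \
  | jq '{url: .url, finalUrl: .finalUrl}'
```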
**Is there a free tier?**
Yes. The free tier includes 1,000 requests per day, and smart caching helps stretch that quota, with a 95%+ cache hit rate for frequently accessed pages.

**Which output formats are supported?**
Two: JSON for structured data with metadata extraction, and HTML for raw page content. Choose JSON to get the title, description, Open Graph tags, and body content in a structured response.
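The same endpoint serves both formats; only the `format` query parameter changes:

```bash
# Structured response: metadata in `header`, page HTML in `body`
curl "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fexample.com&format=json"

# Raw page content
curl "https://webpagesnap.com/api/scrape?url=https%3A%2F%2Fround-lake.dustinice.workers.dev%3A443%2Fhttps%2Fexample.com&format=html"
```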
**How fast is it?**
Cached content is served in approximately 50ms. The API runs on Cloudflare's global edge network with 200+ nodes worldwide, so response times stay fast regardless of your location.

**Does it extract metadata?**
Yes. The API automatically extracts the page title, meta description, Open Graph tags, Twitter cards, and canonical URLs, which makes it well suited to link previews and content aggregation.

**Can it be used for commercial projects?**
Absolutely. The API is designed for both personal and commercial use, with enterprise-grade reliability, smart caching, and global CDN distribution for production applications.