
πŸ”₯πŸ•·οΈ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper


Crawl4AI v0.2.6 πŸ•·οΈπŸ€–


Crawl4AI simplifies web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. πŸ†“πŸŒ

Try it Now!

  • Use as REST API: Open In Colab
  • Use as Python library: Open In Colab

✨ Visit our Documentation Website

Features ✨

  • πŸ†“ Completely free and open-source
  • πŸ€– LLM-friendly output formats (JSON, cleaned HTML, markdown)
  • 🌍 Supports crawling multiple URLs simultaneously
  • 🎨 Extracts and returns all media tags (Images, Audio, and Video)
  • πŸ”— Extracts all external and internal links
  • πŸ“š Extracts metadata from the page
  • πŸ”„ Custom hooks for authentication, headers, and page modifications before crawling
  • πŸ•΅οΈ User-agent customization
  • πŸ–ΌοΈ Takes screenshots of the page
  • πŸ“œ Executes multiple custom JavaScript snippets before crawling
  • πŸ“š Various chunking strategies: topic-based, regex, sentence, and more
  • 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
  • 🎯 CSS selector support
  • πŸ“ Passes instructions/keywords to refine extraction

Cool Examples πŸš€

Quick Start

from crawl4ai import WebCrawler

# Create an instance of WebCrawler
crawler = WebCrawler()

# Warm up the crawler (load necessary models)
crawler.warmup()

# Run the crawler on a URL
result = crawler.run(url="https://www.nbcnews.com/business")

# Print the extracted content
print(result.markdown)

Extract Structured Data from Web Pages πŸ“Š

Crawl all OpenAI models and their fees from the official page.

import os
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

url = 'https://openai.com/api/pricing/'
crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url=url,
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4",
        api_token=os.getenv('OPENAI_API_KEY'),
        instruction="Extract all model names and their fees for input and output tokens."
    ),
)

print(result.extracted_content)
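When an LLM extraction succeeds, result.extracted_content is typically a JSON string. A hedged sketch of consuming it follows; the field names and fee values below are illustrative assumptions (the actual structure depends on the instruction and the model's output), not guaranteed output of the library.

```python
import json

# Illustrative payload only; real output varies with the instruction and model.
extracted_content = (
    '[{"model_name": "gpt-4", '
    '"input_fee": "example input fee", '
    '"output_fee": "example output fee"}]'
)

models = json.loads(extracted_content)
for entry in models:
    print(f"{entry['model_name']}: in={entry['input_fee']}, out={entry['output_fee']}")
```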

Execute JS, Filter Data with CSS Selector, and Clustering

from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy

js_code = [
    """
    const loadMoreButton = Array.from(document.querySelectorAll('button'))
        .find(button => button.textContent.includes('Load More'));
    loadMoreButton && loadMoreButton.click();
    """
]

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    js=js_code,
    css_selector="p",
    extraction_strategy=CosineStrategy(semantic_filter="technology")
)

print(result.extracted_content)
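CosineStrategy groups text chunks by embedding similarity. The core measure is cosine similarity, sketched here in pure Python on toy vectors; this illustrates the metric only and is not the library's code, which uses real embedding models.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embedding vectors: semantically similar chunks point in similar directions.
tech_chunk = [0.9, 0.1, 0.2]
other_tech = [0.8, 0.2, 0.1]
sports_chunk = [0.1, 0.9, 0.0]

print(cosine_similarity(tech_chunk, other_tech))   # high (~0.99): likely same cluster
print(cosine_similarity(tech_chunk, sports_chunk))  # low (~0.21): different cluster
```

A semantic_filter such as "technology" works the same way: chunks whose embeddings score too far from the filter's embedding are dropped.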

Documentation πŸ“š

For detailed documentation, including installation instructions, advanced features, and API reference, visit our Documentation Website.

Contributing 🀝

We welcome contributions from the open-source community. Check out our contribution guidelines for more information.

License πŸ“„

Crawl4AI is released under the Apache 2.0 License.

Contact πŸ“§

For questions, suggestions, or feedback, feel free to reach out.

Happy Crawling! πŸ•ΈοΈπŸš€
