Showing 129 open source projects for "crawl"

  • 1
    X-Crawl

    Flexible Node.js AI-assisted crawler library

    A high-performance web crawling and scraping framework for Node.js, designed for large-scale data extraction.
    Downloads: 0 This Week
  • 2
    Dungeon Crawl: Stone Soup

    A game of dungeon exploration, combat and magic

    Crawl sports a number of systems and mechanics that diverge from more traditional roguelikes. For example, there is less emphasis on character classes (or backgrounds, in Crawl parlance); your character is defined more by their skills, species, and choice of deity than by their background. Crawl includes an in-game tutorial and manual, with the aim of providing something playable without the aid of a guide or wiki.
    Downloads: 3 This Week
  • 3
    Firecrawl

    Turn entire websites into LLM-ready markdown or structured data

    Firecrawl is an API service that takes a URL, crawls it, and converts it into clean, LLM-ready markdown or structured data. It crawls all accessible subpages and returns clean data for each, with powerful scraping, crawling, and data extraction capabilities built in. Built by Mendable.ai and the Firecrawl community.
    Downloads: 4 This Week
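
    To picture the API described above: one HTTP call in, clean markdown out. A minimal Python sketch follows; the endpoint path, payload fields, and response keys are assumptions based on the description (not confirmed documentation), and YOUR_API_KEY is a placeholder.

        import requests  # third-party: pip install requests

        # Hypothetical call shape for the hosted scrape endpoint; the URL,
        # payload fields, and response keys are assumptions, not confirmed docs.
        resp = requests.post(
            "https://api.firecrawl.dev/v1/scrape",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"url": "https://example.com", "formats": ["markdown"]},
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.json()["data"]["markdown"][:500])  # first 500 chars of markdown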
  • 4
    Spatie Crawler

    An easy to use, powerful crawler implemented in PHP

    Spatie Crawler is a PHP library that allows developers to crawl websites and extract information efficiently. It can be used for web scraping, link checking, or automated testing of web pages. The library is simple to use and supports customizable crawling strategies, including controlling crawl depth and handling redirects. It’s suitable for building crawlers that navigate large or dynamically generated websites.
    Downloads: 2 This Week
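
    Spatie Crawler itself is a PHP package; as a language-neutral sketch of the depth-limited crawling strategy described above, here is a minimal breadth-first crawler in Python using only the standard library. All names are illustrative, and a real crawler would add politeness delays and robots.txt handling.

        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class LinkParser(HTMLParser):
            """Collect href values from anchor tags."""
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links.extend(v for k, v in attrs if k == "href" and v)

        def crawl(start_url, max_depth=2):
            """Breadth-first crawl up to max_depth, visiting each URL once."""
            seen, queue = {start_url}, deque([(start_url, 0)])
            while queue:
                url, depth = queue.popleft()
                try:
                    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except OSError:
                    continue  # skip unreachable pages
                yield url
                if depth < max_depth:
                    parser = LinkParser()
                    parser.feed(html)
                    for link in parser.links:
                        absolute = urljoin(url, link)
                        if absolute.startswith("http") and absolute not in seen:
                            seen.add(absolute)
                            queue.append((absolute, depth + 1))

        for page in crawl("https://example.com", max_depth=1):
            print(page)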
  • 5
    Heritrix

    Internet Archive's open-source, web-scale, web crawler project

    ...Since our crawler seeks to collect and preserve the digital artifacts of our culture for the benefit of future researchers and generations, this name seemed apt. Heritrix is designed to respect robots.txt exclusion directives and META nofollow tags. Please consider the load your crawl will place on seed sites and set politeness policies accordingly. Also, always identify your crawl with contact information in the User-Agent, so sites that may be adversely affected by it can contact you or adapt their server behavior.
    Downloads: 5 This Week
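
    The politeness advice above applies to any crawler, not just Heritrix. A minimal Python sketch of identifying a crawl and pacing its requests (the User-Agent string, contact details, and delay are placeholder values, not Heritrix configuration):

        import time
        from urllib.request import Request, urlopen

        # Identify the crawl with contact info so affected site operators
        # can reach you; every value here is a placeholder.
        USER_AGENT = "MyCrawler/1.0 (+https://example.org/crawl; mailto:ops@example.org)"
        CRAWL_DELAY = 2.0  # seconds between requests (politeness policy)

        def polite_fetch(urls):
            for url in urls:
                req = Request(url, headers={"User-Agent": USER_AGENT})
                with urlopen(req, timeout=10) as resp:
                    yield url, resp.read()
                time.sleep(CRAWL_DELAY)  # keep the load on seed sites low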
  • 6
    Nebula libp2p DHT

    A libp2p DHT crawler, monitor, and measurement tool

    ...The crawler can store its results as JSON documents or in a Postgres database; the --dry-run flag prevents it from doing either, in which case Nebula will print a summary of the crawl at the end instead. A crawl takes roughly 5-10 minutes, depending on your internet connection. You can also specify the network you want to crawl by appending, e.g., --network FILECOIN, and limit the number of peers to crawl by providing the --limit flag.
    Downloads: 0 This Week
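
    Combining the flags named above, a dry-run invocation might look like the following; the crawl subcommand is an assumption based on the description, so check the tool's --help output for the real syntax:

        nebula crawl --dry-run --network FILECOIN --limit 1000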
  • 7
    Laravel Sitemap

    Create and generate sitemaps with ease

    ...This works by crawling your entire site. The generator can execute JavaScript on each page, so links injected into the DOM by JavaScript will be crawled as well. The easiest way is to crawl the given domain and generate a sitemap with all found links. The destination of the sitemap should be specified by $path. If you don't want a crawled link to appear in the sitemap, just don't return it in the callable you pass to hasCrawled. You can also instruct the underlying crawler not to crawl some pages by passing a callable to shouldCrawl. ...
    Downloads: 0 This Week
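
    The hasCrawled/shouldCrawl hooks above are PHP callables; as a sketch of the same callback-driven filtering idea (not the package's API), here is a small Python function that builds sitemap XML from crawled URLs, keeping only those a user-supplied predicate accepts:

        from xml.sax.saxutils import escape

        def build_sitemap(urls, should_include=lambda url: True):
            """Emit sitemap XML for every crawled URL the predicate accepts."""
            entries = "\n".join(
                f"  <url><loc>{escape(u)}</loc></url>"
                for u in urls if should_include(u)
            )
            return (
                '<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                f"{entries}\n</urlset>"
            )

        # Example: exclude admin pages from the sitemap.
        print(build_sitemap(
            ["https://example.com/", "https://example.com/admin"],
            should_include=lambda url: "/admin" not in url,
        ))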
  • 8
    The Web MCP

    A powerful Model Context Protocol (MCP) server

    Bright Data’s Web MCP server gives AI assistants robust, real-time web capabilities through an MCP interface designed to avoid blocks, rate limits, and CAPTCHAs. It presents search, crawl, navigate, and extraction tools that agents can call directly, replacing brittle scraping prompts with typed operations. The README markets it as a “gateway” to the live web so assistants don’t fall back to stale training data. Bright Data also advertises a getting-started tier with a free monthly allotment, plus options for remote or self-hosted operation depending on governance needs. ...
    Downloads: 0 This Week
  • 9
    Douyin TikTok Download API

    ...You can deploy or adapt this project yourself to add more functionality, call scraper.py directly from your own project, or install the existing pip package as a parsing library to crawl data with ease. Supports inputting a Douyin or TikTok user homepage link to crawl the author's homepage video data: watermark-free links, liked-video list (the list must be public), video comment data, background-music video list data, and more.
    Downloads: 2 This Week
  • 10
    Search1API MCP

    A Model Context Protocol (MCP) server

    The Search1API MCP Server is a Model Context Protocol server that provides search and crawl functionality using Search1API. It enables web and news searches, content extraction, and sitemap retrieval, integrating seamlessly with MCP clients.
    Downloads: 0 This Week
  • 11
    Puppeteer

    Headless Chrome Node.js API

    ...However, it can also be set to run full (non-headless) Chrome or Chromium; simply set the headless option when launching a browser. Many of the things you can do manually in the browser you can also do with Puppeteer, such as generating page screenshots and PDFs, crawling a Single-Page Application, testing Chrome extensions, and more.
    Downloads: 13 This Week
  • 12
    Scrapy

    A fast, high-level web crawling and web scraping framework

    Scrapy is a fast, open-source, high-level framework for crawling websites and extracting structured data from their pages. Portable and written in Python, it can run on Windows, Linux, macOS and BSD. Scrapy is powerful, fast and simple, and also easily extensible. Simply write the rules to extract the data, and add new functionality if you wish without having to touch the core. Scrapy does the rest, and can be used in a number of applications. It can be used for data mining, monitoring...
    Downloads: 17 This Week
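
    "Write the rules to extract the data" looks like this in practice: a minimal spider (the site URL and CSS selectors below are illustrative):

        import scrapy

        class QuotesSpider(scrapy.Spider):
            """Minimal spider: crawl pages and yield structured items."""
            name = "quotes"
            start_urls = ["https://quotes.toscrape.com/"]

            def parse(self, response):
                # Extraction rules: CSS selectors decide what becomes data.
                for quote in response.css("div.quote"):
                    yield {
                        "text": quote.css("span.text::text").get(),
                        "author": quote.css("small.author::text").get(),
                    }
                # Follow pagination so the crawl continues across the site.
                next_page = response.css("li.next a::attr(href)").get()
                if next_page:
                    yield response.follow(next_page, callback=self.parse)

    Save it as quotes_spider.py and run it with: scrapy runspider quotes_spider.py -o quotes.json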
  • 13
    Web-Check

    All-in-one OSINT tool for analysing any website

    ...Get an insight into the inner workings of a given website: uncover potential attack vectors, analyse server architecture, view security configurations, and learn what technologies a site is using. Currently the dashboard will show: IP info, SSL chain, DNS records, cookies, headers, domain info, search crawl rules, page map, server location, redirect ledger, open ports, traceroute, DNS security extensions, site performance, trackers, associated hostnames, and carbon footprint. Stay tuned, as I'll add more soon. The aim is to help you easily understand, optimize, and secure your website.
    Downloads: 1 This Week
  • 14
    BettaFish

    Public opinion analysis system

    BettaFish is an open-source, multi-agent public opinion analysis system built to automate the collection, deep analysis, and reporting of social media data at scale through conversational queries. It uses a modular architecture of specialized agents that collaborate to crawl mainstream platforms, extract multimodal content like text and short video, and synthesize insights through both statistical and large language model techniques. With a design that lets users pose questions in natural language and receive structured reports, charts, and visualizations, the system aims to break information cocoons and provide comprehensive views of trends and public sentiment. ...
    Downloads: 3 This Week
  • 15
    Deep Research

    Use any LLMs (Large Language Models) for Deep Research

    Deep Research is a local-first research agent that orchestrates multiple LLMs to generate in-depth reports in minutes. It combines “thinking” and “task” model roles with live internet access to plan, search, read, and synthesize findings into structured outputs. The project emphasizes privacy: processing and storage happen locally, avoiding server-side retention of your queries and notes. A simple web UI lets you enter topics and configure models, while the backend streams progress as...
    Downloads: 2 This Week
  • 16
    crwlr

    Library for Rapid (Web) Crawler and Scraper Development

    This library provides a framework and a lot of ready-to-use, so-called steps that you can use as building blocks to build your own crawlers and scrapers. Before diving into the library, let's have a look at the terms crawling and scraping. For most real-world use cases, those two things go hand in hand, which is why this library helps with and combines both. A (web) crawler is a program that (down)loads documents and follows the links in them to load those as well. A crawler...
    Downloads: 3 This Week
  • 17
    StringZilla

    10x faster string search, split, sort, and shuffle for long strings

    ...The contents of that file would remain immutable, and the mapping can be shared by multiple Python processes simultaneously. A standard dataset pre-processing use case would be to map a sizeable textual dataset like Common Crawl into memory, spawn child processes, and split the job between them.
    Downloads: 1 This Week
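
    A minimal standard-library sketch of the pattern just described: memory-map a large file and let child processes scan disjoint slices of it. This uses mmap and multiprocessing to show the idea, not StringZilla's own API, and the file name is a placeholder.

        import mmap
        import os
        from multiprocessing import Pool

        PATH = "commoncrawl_shard.txt"  # placeholder for a large text file

        def count_newlines(span):
            """Worker: map the file read-only and scan only its assigned slice."""
            start, end = span
            with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                return mm[start:end].count(b"\n")

        if __name__ == "__main__":
            size = os.path.getsize(PATH)
            workers = 4
            step = size // workers + 1
            spans = [(i, min(i + step, size)) for i in range(0, size, step)]
            with Pool(workers) as pool:
                # The OS page cache backs every mapping, so the file's bytes
                # are shared between processes rather than copied per worker.
                print(sum(pool.map(count_newlines, spans)))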
  • 18
    SiteOne Crawler

    SiteOne Crawler is a website analyzer and exporter

    SiteOne Crawler is a very useful and easy-to-use tool you'll ♥ as a Dev/DevOps, website owner, or consultant. Works on all popular platforms - Windows, macOS, and Linux (x64 and arm64 too). It will crawl your entire website in depth, analyze and report problems, show useful statistics and reports, generate an offline version of the website, generate sitemaps, or send reports via email. Watch a detailed video with a sample report for the Astro.build website. This crawler can be used as a command-line tool (see releases and video), or you can use a multi-platform desktop application with a graphical interface (see a video about the app).
    Downloads: 4 This Week
  • 19
    crawley

    The unix-way web crawler

    ...Found URLs are streamed to stdout and guaranteed to be unique (with fragments omitted). Scan depth (limited by starting host and path; 0 by default) can be configured. Crawl rules and sitemaps can be pulled from robots.txt. Brute mode scans HTML comments for URLs (this can lead to bogus results). Makes use of the HTTP_PROXY / HTTPS_PROXY environment variables and handles proxy auth. Offers a directory-only scan mode (aka fast-scan).
    Downloads: 0 This Week
  • 20
    Roach

    The complete web scraping toolkit for PHP

    Roach is a complete web scraping toolkit for PHP. It is a shameless clone heavily inspired by the popular Scrapy package for Python. Roach allows us to define spiders that crawl and scrape web documents. But wait, there’s more. Roach isn’t just a simple crawler, but includes an entire pipeline to clean, persist and otherwise process extracted data as well. It’s your all-in-one resource for web scraping in PHP. Roach doesn’t depend on a specific framework. Instead, you can use the core package on its own or install one of the framework-specific adapters. ...
    Downloads: 1 This Week
  • 21
    RobotsTxt

    The repository contains Google's robots.txt parser

    ...It also focuses on correctness: edge cases like overlapping patterns and longest-match resolution are handled consistently. Consumers integrate it to decide whether a specific URL may be fetched by a particular bot name, and to respect crawl-delay or sitemap hints where applicable. The library serves both search-scale crawlers and smaller tools that need a reliable decision engine for polite crawling.
    Downloads: 0 This Week
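
    Google's parser is a C++ library; to illustrate the same yes/no decision it provides, Python's standard library ships urllib.robotparser (shown here as a concept demonstration, not Google's API):

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()  # fetch and parse the live robots.txt

        # May this bot name fetch this specific URL?
        print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))
        print(rp.crawl_delay("MyCrawler"))  # crawl-delay hint, if declared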
  • 22
    PHPScraper

    A universal web-util for PHP

    ...In some cases there is an option to get a simple or detailed version. PHPScraper can assist in collecting feeds such as RSS feeds, sitemap.xml entries, and static search indexes. This can be useful when deciding on the next page to crawl or when building up a list of pages on a website.
    Downloads: 0 This Week
  • 23
    VFS for Git

    Virtual file system for Git, enable Git at enterprise scale

    ...This ensures that operations like status and checkout are as fast as possible. Git struggles to handle enterprise-scale repositories. Operations like cloning will slow to a crawl when you have millions of files in a repository, and even something as simple as getting your repository status will leave you waiting.
    Downloads: 0 This Week
  • 24
    Python-Spider

    Python3 web crawler practice

    ...As part of the author’s public learning-path repositories, python-spider likely includes examples of HTTP requests, HTML parsing, and possibly concurrency or scheduling to crawl multiple pages, along with techniques for handling common web-scraping issues. For people wanting to get hands-on with building scrapers, collecting data, or learning how to navigate web programming in Python, this repository acts as a didactic reference or starting point. Because it’s published publicly under an open license, users are free to fork and adapt the code.
    Downloads: 0 This Week
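
    In the spirit of the practice examples described above, a minimal Python 3 fetch-and-parse exercise using only the standard library (the URL is illustrative):

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class TitleParser(HTMLParser):
            """Capture the text inside the page's <title> tag."""
            def __init__(self):
                super().__init__()
                self.in_title = False
                self.title = ""

            def handle_starttag(self, tag, attrs):
                if tag == "title":
                    self.in_title = True

            def handle_endtag(self, tag):
                if tag == "title":
                    self.in_title = False

            def handle_data(self, data):
                if self.in_title:
                    self.title += data

        html = urlopen("https://example.com", timeout=10).read().decode("utf-8", "replace")
        parser = TitleParser()
        parser.feed(html)
        print(parser.title.strip())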
  • 25
    Snap Lens Web Crawler

    Crawl and download Snap Lenses from lens.snapchat.com with ease.

    This crawler is a dependency of Snap Camera Server: https://snap-camera-server.sourceforge.io
    Downloads: 3 This Week