Focused on faster, higher-quality video generation, driving innovation, empowering creators, and enabling commercial impact.
Neza (Subnet 99) is a fully decentralized network for AI-powered video generation, built on cutting-edge open-source models. Our mission is to create a resilient, peer-to-peer platform that makes high-quality AI video content accessible to everyone.
Neza shards advanced generation models like WAN2.1 across the network. Validators publish workflows, miners execute them to generate videos, and validators then assess output quality and distribute rewards—together forming a scalable, trustworthy ecosystem for on-demand AI video production.
Note: By downloading or running this software, you agree to comply with the terms and conditions outlined in our agreement.
- Overview
- Decentralization
- Hardware Requirements
- How to Run Neza
- Workflow Mechanism
- Miner Optimization Directions
Decentralization is at the heart of Neza: we don’t depend on any central API—instead, each miner and validator runs its own model.
Miners run open-source video models through ComfyUI, processing workflows dispatched by validators. They earn rewards based on:
- Video Quality: Rated against benchmark models using ImageBind.
- Speed: Measured in seconds per frame.
Higher quality and faster performance yield greater rewards!
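The quality/speed trade-off above can be sketched as a simple scoring function. The weighting below is purely illustrative (the actual reward curve used by Neza validators is not specified here); it only shows that higher ImageBind quality and fewer seconds per frame both raise the reward.

```python
def miner_reward(quality: float, seconds_per_frame: float,
                 quality_weight: float = 0.7) -> float:
    """Illustrative miner reward (weights are hypothetical, not Neza's).

    quality: ImageBind-style quality score in [0, 1].
    seconds_per_frame: generation speed; lower is better.
    """
    # Map speed to (0, 1]: faster generation -> score closer to 1.
    speed_score = 1.0 / (1.0 + seconds_per_frame)
    return quality_weight * quality + (1.0 - quality_weight) * speed_score
```

With this shape, a miner that improves either axis without regressing the other always earns more.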
Validators act as decentralized gateways: they define workflows, dispatch them to miners, and use ImageBind to assess the quality of each video.
Each validator runs its own transparent scoring system—rewarding miners based on video quality, speed, and reliability. There’s no central authority; all validation happens on the validators’ hardware.
Miners' GPUs should meet or exceed:
- GeForce RTX 4090 (24 GB GDDR6X): only for 480P video generation.
- NVIDIA A100 40 GB HBM2: supports 480P and 720P at moderate speed.
System requirements:
- Storage: 200 GB SSD.
- System RAM: 32 GB.
- CPU: 8 cores.
Validators' GPUs should meet or exceed:
- NVIDIA A100 40 GB HBM2
System requirements:
- Storage: 500 GB SSD.
- System RAM: 64 GB.
- CPU: 16 cores.
- Network: ≥ 100 Mbps.
Validators run multiple video validation tasks in parallel and execute the compute-intensive ImageBind model for quality checks, so they require more powerful hardware to keep the network running smoothly.
Please refer to the Miner Installation Guide.
Miners implement a task queue to handle multiple concurrent workflows and expose task status queries. Upon receiving a workflow from a validator, the miner will:
- Enqueue the task
- Execute the workflow via the ComfyUI API
- Upload the generated video to the designated public-cloud storage
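The three steps above can be sketched as a minimal miner-side task queue. Class and method names here are hypothetical; the ComfyUI call and cloud upload are injected as callables so the sketch stays self-contained.

```python
import queue
import threading

class MinerTaskQueue:
    """Sketch of a miner task queue with status queries (names hypothetical)."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._status = {}          # task_id -> "queued" | "running" | "done"
        self._lock = threading.Lock()

    def enqueue(self, task_id, workflow):
        """Step 1: enqueue a workflow received from a validator."""
        with self._lock:
            self._status[task_id] = "queued"
        self._tasks.put((task_id, workflow))

    def status(self, task_id):
        """Answer a validator's task-status query."""
        with self._lock:
            return self._status.get(task_id, "unknown")

    def run_next(self, execute, upload):
        """Steps 2-3: run one workflow (e.g. via the ComfyUI API) and upload it."""
        task_id, workflow = self._tasks.get()
        with self._lock:
            self._status[task_id] = "running"
        video_path = execute(workflow)   # stand-in for the ComfyUI API call
        upload(video_path)               # stand-in for the public-cloud upload
        with self._lock:
            self._status[task_id] = "done"
```

A real miner would drain the queue from worker threads; the lock keeps status queries consistent while tasks move between states.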
Please refer to the Validator Installation Guide.
Validators use the VideoVerifier (built on ImageBind) to assess generated videos through the following steps:
- Retrieve or define workflow configurations
- Dispatch workflows to miners
- Await video generation results
- Evaluate video quality with ImageBind
- Score miners on quality and response time
- Normalize scores to determine subnet weights
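The last two steps, scoring miners and normalizing into subnet weights, can be sketched as follows. The quality/timeliness weights and the timeout are illustrative assumptions, not Neza's actual parameters.

```python
def score_miner(quality: float, response_seconds: float,
                timeout: float = 600.0) -> float:
    """Hypothetical per-miner score: quality-weighted, late results penalized.

    quality: ImageBind-style quality score in [0, 1].
    response_seconds: time until the video result arrived.
    """
    if response_seconds >= timeout:
        return 0.0                        # missed deadline earns nothing
    timeliness = 1.0 - response_seconds / timeout
    return 0.8 * quality + 0.2 * timeliness

def normalize_weights(scores: dict) -> dict:
    """Normalize raw scores to sum to 1, the usual shape for subnet weights."""
    total = sum(scores.values())
    return {uid: (s / total if total else 0.0) for uid, s in scores.items()}
```

After normalization the validator would submit the weight vector on-chain; higher-scoring miners receive proportionally more of the emission.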
Neza supports two types of workflows: synthetic workflows and organic workflows.
Synthetic workflows are preset test tasks defined by validators to benchmark miners—measuring video quality, processing speed, and reliability.
Organic workflows come from real user requests. These requests are written straight into the database, where validators pick them up and dispatch them to miners in real time. This ensures miners not only run benchmark tests but also fulfill actual user needs.
To earn higher rewards in Neza, miners can optimize in the following aspects:
- GPU Upgrade: Using higher-performance GPUs (such as H200, B200) can significantly improve video generation speed
- High-Speed Storage: Use NVMe SSDs to store models and temporary files, reducing I/O bottlenecks
- Memory Optimization: Increase system memory to 64 GB or more, reducing memory swapping and improving stability
- Network Bandwidth: Upgrade to gigabit or higher bandwidth to ensure fast video uploads
- Task Scheduling System: Implement an efficient task scheduling system that dynamically allocates resources based on task priority and resource availability
- Multi-Instance Deployment: Deploy multiple ComfyUI instances, each connected to different GPUs
- Load Balancing: Implement intelligent load balancing, assigning tasks to the most suitable GPU
- Queue Optimization: Optimize task queue management to reduce waiting time
- Warm-up Mechanism: Keep models in a warm-up state in memory to reduce cold start time
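The multi-instance and load-balancing points above amount to routing each task to the least-loaded GPU. A minimal sketch, assuming one ComfyUI instance per GPU and a pending-task counter per instance (all names hypothetical):

```python
def pick_gpu(gpu_loads: dict) -> str:
    """Pick the GPU/ComfyUI instance with the fewest pending tasks."""
    return min(gpu_loads, key=gpu_loads.get)

def dispatch(task, gpu_loads: dict) -> str:
    """Assign a task to the least-loaded instance and bump its counter."""
    gpu = pick_gpu(gpu_loads)
    gpu_loads[gpu] += 1        # decremented when the task finishes
    return gpu

# Example: three ComfyUI instances, each bound to a different GPU.
loads = {"cuda:0": 3, "cuda:1": 1, "cuda:2": 2}
chosen = dispatch({"prompt": "a sunset"}, loads)
```

A production balancer would weigh GPU memory headroom and model warm-up state as well, so a warm instance is preferred even at slightly higher load.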