Neysa Nebula
Nebula lets you deploy and scale your AI projects quickly, easily, and cost-efficiently on robust, on-demand GPU infrastructure. Train your models and run inference securely on the Nebula cloud, powered by the latest on-demand NVIDIA GPUs, and create and manage containerized workloads through Nebula's user-friendly orchestration layer. Access Nebula's MLOps and low-code/no-code engines to build AI use cases for business teams and deploy AI-powered applications swiftly and seamlessly with little to no coding. Choose between the Nebula containerized AI cloud, your on-prem environment, or any cloud of your choice. Build and scale AI-enabled business use cases in weeks, not months, with the Nebula Unify platform.
Learn more
Pangolin
Pangolin is an open source, identity-aware tunneled reverse-proxy platform that lets you securely expose applications from any location without opening inbound ports or requiring a traditional VPN. It uses a distributed architecture of globally available nodes to route traffic through encrypted WireGuard tunnels, enabling devices behind NATs or firewalls to serve applications publicly via a central dashboard. Through the unified dashboard, you can manage sites and resources across your infrastructure, define granular access-control rules (such as SSO, OIDC, PINs, geolocation, and IP restrictions), and monitor real-time health and usage metrics. The system supports self-hosting (Community or Enterprise editions) or a managed cloud option, and works by installing a lightweight agent on each site while using the central control server to handle ingress, routing, authentication, and failover.
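The granular access-control rules mentioned above (PINs, IP restrictions, and so on) can be sketched in a few lines; note that the rule set and the request_allowed helper below are hypothetical illustrations of the concept, not Pangolin's actual API or rule engine.

```python
import ipaddress
from typing import Optional

# Hypothetical rule set illustrating the kinds of checks a resource's
# access-control policy might combine (names are illustrative only).
RULES = {
    "allowed_networks": ["10.0.0.0/8", "203.0.113.0/24"],
    "required_pin": "4821",
}

def request_allowed(client_ip: str, pin: Optional[str]) -> bool:
    """Return True if the client passes both the IP and PIN checks."""
    ip = ipaddress.ip_address(client_ip)
    # IP restriction: the client must come from an allowed network.
    in_network = any(
        ip in ipaddress.ip_network(net) for net in RULES["allowed_networks"]
    )
    if not in_network:
        return False
    # PIN gate: if a PIN is configured, the client must supply it.
    if RULES["required_pin"] is not None and pin != RULES["required_pin"]:
        return False
    return True
```

In a real deployment these checks run on the ingress path before traffic is forwarded through the tunnel, so a denied request never reaches the backend application.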
Learn more
Nebula
Nebula is the home of smart, thoughtful videos, podcasts, and classes from your favorite creators: a place for experimentation and exploration, with exclusive originals, bonus content, and no ads in sight. Nebula is creator-owned and operated, and you can watch offline in our mobile apps. Subscribe to get access to all of our premium content, including Nebula Originals, Nebula Plus bonus content, Nebula First early releases, and Nebula Classes.
Learn more
Headscale
Headscale is an open-source, self-hosted implementation of the Tailscale control server, letting users keep full ownership of their private tailnets while still using the standard Tailscale clients. It supports registering users and nodes, issuing pre-authentication keys, advertising subnet routes and exit-node capabilities, enforcing access-control lists, and integrating with OIDC/SAML identity providers for user authentication. The server can be deployed via Debian/Ubuntu packages or standalone binaries, is configured through a YAML file, and is managed via its CLI or REST API. Headscale tracks each node, route, and user in its database and supports route-approval workflows, subnet routing, exit-node designation, and node-to-node mesh connectivity within the tailnet. Being self-hosted, it gives organizations and hobbyists full control over their private network endpoints, encryption keys, and traffic flows, rather than depending on a commercial control plane.
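The YAML-based configuration described above can be sketched as a minimal fragment; the key names below follow recent Headscale releases but may differ in your version, and every value (URLs, credentials) is a placeholder, so consult the example config shipped with your release.

```yaml
# Minimal illustrative headscale config.yaml.
# All values are placeholders; key names may vary between releases.
server_url: https://headscale.example.com
listen_addr: 0.0.0.0:8080

# Optional OIDC integration for user authentication.
oidc:
  issuer: https://idp.example.com
  client_id: headscale
  client_secret: replace-me
```

With a config like this in place, users and pre-authentication keys are then managed through the CLI or REST API rather than the config file itself.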
Learn more