Data Science Tools for Linux

Browse free open source Data Science tools and projects for Linux below. Use the toggles on the left to filter open source Data Science tools by OS, license, programming language, and project status.

  • 1
    RStudio

    RStudio is an integrated development environment (IDE) for R

    RStudio is a powerful, full-featured integrated development environment (IDE) tailored primarily for the R programming language but increasingly supportive of other languages like Python and Julia. It brings together console, editor, plotting, workspace, history, and file-management panes into a unified interface, helping data scientists, statisticians, and analysts to work more productively. The IDE is cross-platform: there are desktop versions for Windows, macOS and Linux, as well as a server version for remote or multi-user deployment via a web browser. In addition to code editing and execution, RStudio offers extensive support for reproducible research via R Markdown, notebooks, and integration with version control systems like Git and SVN. Package development is built in, with tooling for building, checking, and testing R packages, plus integration with documentation tools, CRAN submission workflows, and project templates.
    Downloads: 80 This Week
  • 2
    ggplot2

    An implementation of the Grammar of Graphics in R

    ggplot2 is a system for declaratively creating graphics, written in R. It is based on The Grammar of Graphics, which describes and constructs visualizations in a structured, layered manner. With ggplot2 you provide the data, tell it how to map variables to aesthetics and which graphical primitives to use, and it takes care of the rest. ggplot2 is over 10 years old and is used by hundreds of thousands of people all over the world for plotting. In most cases, using ggplot2 starts with supplying a dataset and an aesthetic mapping (with aes()); then adding layers (like geom_point() or geom_histogram()), scales (like scale_colour_brewer()), and faceting specifications (like facet_wrap()); and finally, coordinate systems. ggplot2 has a rich ecosystem of community-maintained extensions for those who need more. ggplot2 is part of the tidyverse, an ecosystem of R packages designed for data science.
    Downloads: 23 This Week
  • 3
    Milvus

    Vector database for scalable similarity search and AI applications

    Milvus is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured-data search more accessible and provides a consistent user experience regardless of the deployment environment. Milvus 2.0 is a cloud-native vector database with storage and computation separated by design; all components in this refactored version are stateless to enhance elasticity and flexibility. Average latency is measured in milliseconds even on trillion-vector datasets. Rich APIs are designed for data science workflows, the user experience is consistent across laptop, local cluster, and cloud, and you can embed real-time search and analytics into virtually any application. Milvus’ built-in replication and failover/failback features ensure that data and applications maintain business continuity in the event of a disruption, and component-level scalability makes it possible to scale up and down on demand.
    Downloads: 9 This Week
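
    To make the "rich APIs" claim concrete, here is a minimal sketch of a search workflow from Python, assuming the pymilvus MilvusClient interface (pymilvus 2.4+) backed by a local Milvus Lite file; the collection name and random vectors are purely illustrative:

    ```python
    # A minimal sketch of embedding similarity search with pymilvus,
    # assuming the MilvusClient API and a local Milvus Lite file.
    import random
    from pymilvus import MilvusClient

    client = MilvusClient("demo.db")  # Milvus Lite stores data in a local file
    client.create_collection(collection_name="docs", dimension=8)

    # Insert a few random 8-dimensional vectors with integer ids
    vectors = [[random.random() for _ in range(8)] for _ in range(10)]
    client.insert(
        collection_name="docs",
        data=[{"id": i, "vector": v} for i, v in enumerate(vectors)],
    )

    # Approximate nearest-neighbour search for the first vector
    hits = client.search(collection_name="docs", data=[vectors[0]], limit=3)
    print(hits)
    ```

    The same client code can point at a local cluster or a cloud deployment by changing the connection target, which is the consistent-experience point the description makes.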
  • 4
    Quadratic

    Data science spreadsheet with Python & SQL

    Quadratic enables your team to work together on data analysis to deliver better results, faster. You already know how to use a spreadsheet, but you’ve never had this much power before. Quadratic is a web-based spreadsheet application that runs in the browser and as a native app (via Electron). The goal is a spreadsheet that lets you pull data from its source (SaaS, database, CSV, API, etc.) and then work with it using the most popular data science tools of today (Python, Pandas, SQL, JS, Excel formulas, etc.). Quadratic has no environment to configure: the grid runs entirely in the browser with no backend service, which makes grids completely portable and very easy to share. Python library support is built in, bringing the latest open-source tools directly to your spreadsheet. Quickly write code and see the output in full detail; no more squinting into a tiny terminal to see your data output.
    Downloads: 8 This Week
  • 5
    XGBoost

    Scalable and Flexible Gradient Boosting

    XGBoost is an optimized distributed gradient boosting library, designed to be scalable, flexible, portable, and highly efficient. It supports regression, classification, ranking, and user-defined objectives, and runs on all major operating systems and cloud platforms. XGBoost works by implementing machine learning algorithms under the gradient boosting framework, offering parallel tree boosting (also known as GBDT, GBRT, or GBM) that can quickly and accurately solve many data science problems. XGBoost has interfaces for Python, Java, Scala, R, C++, and more. It can run on a single machine or on Hadoop, Spark, Dask, Flink, and most other distributed environments, and is capable of solving problems with billions of examples.
    Downloads: 7 This Week
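
    As a rough illustration of the Python interface mentioned above, a minimal sketch of training a binary classifier with the native DMatrix/train API on synthetic data (the parameter values here are illustrative, not recommendations):

    ```python
    # A minimal sketch of XGBoost's native Python API on synthetic data.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # toy binary target

    dtrain = xgb.DMatrix(X, label=y)  # XGBoost's optimized data container
    params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.3}
    booster = xgb.train(params, dtrain, num_boost_round=20)

    preds = booster.predict(dtrain)  # predicted probabilities in [0, 1]
    print(preds[:5])
    ```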
  • 6
    marimo

    A reactive notebook for Python

    marimo is an open-source reactive notebook for Python: reproducible, git-friendly, executable as a script, and shareable as an app. marimo notebooks are extremely interactive, designed for collaboration (git-friendly!), deployable as scripts or apps, and fit for the modern Pythonista. Run one cell and marimo reacts by automatically running the affected cells, eliminating the error-prone chore of managing notebook state. marimo's reactive UI elements, like data frame GUIs and plots, make working with data feel refreshingly fast, futuristic, and intuitive. Version with git, run as Python scripts, import symbols from a notebook into other notebooks or Python files, and lint or format with your favorite tools; you'll always be able to reproduce your collaborators' results. Notebooks are executed in a deterministic order with no hidden state: delete a cell and marimo deletes its variables while updating the affected cells.
    Downloads: 6 This Week
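
    A hand-written sketch of what a marimo notebook file can look like; it is plain Python, which is what makes it git-friendly and runnable as a script. Cell-function naming conventions vary across marimo versions, so treat this as illustrative:

    ```python
    # hello.py, a sketch of a marimo notebook: each cell is a function,
    # and a cell's parameters declare which other cells' outputs it reads.
    import marimo

    app = marimo.App()

    @app.cell
    def _():
        x = 40
        return (x,)

    @app.cell
    def _(x):
        # Changing the cell that defines x automatically re-runs this one
        y = x + 2
        print(y)
        return (y,)

    if __name__ == "__main__":
        app.run()
    ```

    Such a file can be run directly with python, opened interactively with marimo edit hello.py, or served read-only as an app with marimo run hello.py.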
  • 7
    Positron

    Positron, a next-generation data science IDE

    Positron is a next-generation integrated development environment (IDE) created by Posit PBC (formerly RStudio Inc) specifically tailored for data science workflows in Python, R, and multi-language ecosystems. It aims to unify exploratory data analysis, production code, and data-app authoring in a single environment so that data scientists move from “question → insight → application” without switching tools. Built on the open-source Code-OSS foundation, Positron provides a familiar coding experience along with specialized panes and tooling for variable inspection, data-frame viewing, plotting previews, and interactive consoles designed for analytical work. The IDE supports notebook and script workflows, integration of data-app frameworks (such as Shiny, Streamlit, Dash), database and cloud connections, and built-in AI-assisted capabilities to help write code, explore data, and build models.
    Downloads: 5 This Week
  • 8
    cuDF

    GPU DataFrame Library

    Built on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data. cuDF provides a pandas-like API that will be familiar to data engineers and data scientists, so they can use it to easily accelerate their workflows without going into the details of CUDA programming. For additional examples, browse the complete API documentation or check out the more detailed notebooks. cuDF can be installed with conda (Miniconda or the full Anaconda distribution) from the rapidsai channel. cuDF is supported only on Linux, with Python versions 3.7 and later. The RAPIDS suite of open-source software libraries aims to enable the execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
    Downloads: 4 This Week
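
    A minimal sketch of the pandas-like API described above; this assumes a CUDA-capable GPU and a RAPIDS installation (for example via conda from the rapidsai channel), and the column names are illustrative:

    ```python
    # A minimal sketch of cuDF's pandas-like API, executed on the GPU.
    import cudf

    df = cudf.DataFrame(
        {"key": ["a", "b", "a", "b", "a"], "val": [1, 2, 3, 4, 5]}
    )

    # Same groupby/aggregate idiom as pandas, but the work runs on the GPU
    out = df.groupby("key").agg({"val": "sum"})
    print(out)
    ```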
  • 9
    Rodeo

    A data science IDE for Python

    Rodeo is an open-source Python IDE from the folks at Yhat: a development environment that is lightweight, intuitive, and yet customizable to its very core. It serves as your own personal home base for the exploration and interpretation of data, is aimed at data scientists, and answers the question, "Is there anything like RStudio for Python?" Rodeo makes it very easy to explore what you create, and to inspect, interact with, and compare data frames, plots, and much more. It is an IDE built especially for data science and machine learning in Python; you can think of it as a lightweight alternative to the IPython Notebook.
    Downloads: 3 This Week
  • 10
    Data Science at the Command Line

    Data science at the command line

    Data Science at the Command Line, by Jeroen Janssens, was published by O’Reilly Media in October 2021. Obtain, scrub, explore, and model data with Unix power tools. This repository contains the full text, data, and scripts used in the second edition of the book. This thoroughly revised guide demonstrates how the flexibility of the command line can help you become a more efficient and productive data scientist. You’ll learn how to combine small yet powerful command-line tools to quickly obtain, scrub, explore, and model your data. To get you started, the author provides a Docker image packed with over 100 Unix power tools, useful whether you work with Windows, macOS, or Linux. You’ll quickly discover why the command line is an agile, scalable, and extensible technology. Even if you’re comfortable processing data with Python or R, you’ll learn how to greatly improve your data science workflow by leveraging the command line’s power.
    Downloads: 2 This Week
  • 11
    DearPyGui

    Graphical User Interface Toolkit for Python with minimal dependencies

    Dear PyGui is an easy-to-use, dynamic, GPU-accelerated, cross-platform graphical user interface (GUI) toolkit for Python, built on top of Dear ImGui. Features include traditional GUI elements such as buttons, radio buttons, and menus, along with various methods to create a functional layout. Additionally, DPG has an incredible assortment of dynamic plots, tables, drawings, debuggers, and multiple resource viewers. DPG is well suited to creating simple user interfaces as well as developing complex and demanding graphical interfaces, and it offers a solid framework for scientific, engineering, gaming, data science, and other applications that require fast and interactive interfaces. The tutorials provide a great overview and link to each topic in the API reference for more detailed reading. It offers complete theme and style control, GPU-based rendering, and efficient C/C++ code.
    Downloads: 2 This Week
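
    A minimal sketch of a Dear PyGui app under the 1.x API, showing the context/viewport lifecycle and a button callback; the labels and window sizes are illustrative:

    ```python
    # A minimal sketch of a Dear PyGui window with a button callback.
    import dearpygui.dearpygui as dpg

    dpg.create_context()

    with dpg.window(label="Demo", width=300, height=150):
        dpg.add_text("Hello from Dear PyGui")
        dpg.add_button(label="Click me", callback=lambda: print("clicked"))

    dpg.create_viewport(title="Demo", width=320, height=220)
    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()  # blocks until the window is closed
    dpg.destroy_context()
    ```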
  • 12
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows; you can use it to simplify the process of building, training, and deploying ML models. To train a model, you include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models; if you use a prebuilt SageMaker Docker image for training, this library may already be included. The workflow is to write a training script (e.g., train.py), then define a container with a Dockerfile that includes the training script and any dependencies.
    Downloads: 2 This Week
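
    To make that workflow concrete, a hypothetical minimal train.py of the kind the toolkit executes inside the container. SM_MODEL_DIR and SM_CHANNEL_TRAINING are environment variables SageMaker injects for the model output directory and the training-data channel; the "model" written here is a placeholder:

    ```python
    # train.py, a minimal sketch of a SageMaker-compatible training script.
    import json
    import os

    def main():
        # SageMaker injects these paths into the training container
        model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
        train_dir = os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training")

        # ... load data from train_dir and fit a real model here ...

        # Whatever is written to model_dir is uploaded as the model artifact
        os.makedirs(model_dir, exist_ok=True)
        with open(os.path.join(model_dir, "model.json"), "w") as f:
            json.dump({"weights": [0.0]}, f)  # placeholder artifact

    if __name__ == "__main__":
        main()
    ```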
  • 13
    DAT Linux

    The data science OS

    DAT Linux is a Linux distribution for data science (https://datlinux.com). It brings together all your favourite open-source data science tools and apps into a ready-to-run desktop environment. It's based on Lubuntu, so it’s easy to install and use. The custom DAT Linux Control Panel provides a centralised one-stop-shop for running and managing dozens of data science programs. DAT Linux is perfect for students, professionals, academics, or anyone interested in data science who doesn’t want to spend endless hours downloading, installing, configuring, and maintaining applications from a range of sources, each with different technical requirements and set-up challenges.
    Downloads: 37 This Week
  • 14
    MCPower

    MCPower — simple Monte Carlo power analysis for complex models

    MCPower-GUI is a desktop application that provides a graphical interface for the MCPower Monte Carlo power analysis library. It guides users through the full workflow across three tabs: Model setup (formula input with live parsing, CSV data upload with auto-detected variable types, effect size sliders, and correlation editing), Analysis configuration (find power for a given sample size or find the minimum sample size for a target power, with multiple testing correction and scenario analysis), and Results (interactive charts, exportable tables, and auto-generated Python replication scripts). Supports both standard linear models and mixed-effects models. Additional features include analysis history, configurable scenarios, and built-in documentation.
    Downloads: 15 This Week
  • 15
    CSAPP-Labs

    Solutions and Notes for Labs of Computer Systems

    CSAPP-Labs is a repository that organizes and provides practical lab exercises corresponding to the famous textbook Computer Systems: A Programmer’s Perspective (CS:APP), helping students deepen their understanding of how computer systems work at the machine level. The exercises cover core topics such as data representation, assembly language, processor architecture, cache behavior, memory hierarchy, linking, and concurrency, contextualizing abstract concepts from the book in real code and experiments. Each lab is structured to include test programs, Makefiles, harnesses, and step-by-step instructions that guide students through hands-on interaction with low-level programming and system behavior. By actually building and debugging code that runs close to hardware, learners acquire intuition about performance trade-offs, bit-level manipulation, stack frame layout, and how compilers and OS features influence execution.
    Downloads: 1 This Week
  • 16
    Data Science Specialization

    Course materials for the Data Science Specialization on Coursera

    The Data Science Specialization Courses repository is a collection of materials that support the Johns Hopkins University Data Science Specialization on Coursera. It contains the source code and resources used throughout the specialization’s courses, covering a broad range of data science concepts and techniques. The repository is designed as a shared space for code examples, datasets, and instructional materials, helping learners follow along with lectures and assignments. It spans essential topics such as R programming, data cleaning, exploratory data analysis, statistical inference, regression models, machine learning, and practical data science projects. By providing centralized resources, the repo makes it easier for students to practice concepts and replicate examples from the curriculum. It also offers a structured view of how multiple disciplines—programming, statistics, and applied data analysis—come together in a professional workflow.
    Downloads: 1 This Week
  • 17
    Deep Learning course

    Slides and Jupyter notebooks for the Deep Learning lectures

    Slides and Jupyter notebooks for the Deep Learning lectures at Master Year 2 Data Science, Institut Polytechnique de Paris. This course is taught as part of the Master Year 2 Data Science program at IP Paris. Note: press "P" to display the presenter's notes, which include some comments and additional references. This lecture series is built and maintained by Olivier Grisel and Charles Ollion.
    Downloads: 1 This Week
  • 18
    Deep Learning with PyTorch

    Latest techniques in deep learning and representation learning

    This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. The prerequisites include DS-GA 1001 Intro to Data Science or a graduate-level machine learning course. To follow the exercises, you will need a laptop with Miniconda (a minimal version of Anaconda) and several Python packages installed. The following instructions work as-is for Mac and Ubuntu Linux users; Windows users need to install and work in the Git BASH terminal. JupyterLab has a built-in selectable dark theme, so you only need to install something if you want to use the classic notebook interface.
    Downloads: 1 This Week
  • 19
    DeepLearningProject

    An in-depth machine learning tutorial

    This tutorial tries to do what most machine learning tutorials available online do not. It is not a 30-minute tutorial that teaches you how to "train your own neural network" or "learn deep learning in under 30 minutes". It is a full pipeline of the kind you would need if you actually work with machine learning, introducing you to all the parts and all the implementation decisions and details that need to be made. The dataset is not one of the standard sets like MNIST or CIFAR; you will make your very own dataset. Then you will go through a couple of conventional machine learning algorithms before finally getting to deep learning! In the fall of 2016, I was a Teaching Fellow (Harvard's version of a TA) for the graduate class "Advanced Topics in Data Science (CS209/109)" at Harvard University. I was in charge of designing the class project given to the students, and this tutorial has been built on top of the project I designed for the class.
    Downloads: 1 This Week
  • 20
    Great Expectations

    Always know what to expect from your data

    Great Expectations helps data teams eliminate pipeline debt through data testing, documentation, and profiling. Software developers have long known that testing and documentation are essential for managing complex codebases; Great Expectations brings the same confidence, integrity, and acceleration to data science and data engineering teams. Expectations are assertions for data. They are the workhorse abstraction in Great Expectations, covering all kinds of common data issues. Expectations are a great start, but it takes more to get to production-ready data validation. Where are Expectations stored? How do they get updated? How do you securely connect to production data systems? How do you notify team members and triage when data validation fails? Great Expectations supports all of these use cases out of the box. Instead of building these components yourself over weeks or months, you can add production-ready validation to your pipeline in a day.
    Downloads: 1 This Week
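
    A minimal sketch of expectations as assertions for data, assuming the classic pandas-backed API from pre-1.0 releases; newer releases organize validation around a Data Context instead, so treat this as illustrative only. The frame and column names are made up:

    ```python
    # A sketch of the classic pandas-backed Great Expectations API.
    import great_expectations as ge
    import pandas as pd

    pdf = pd.DataFrame({"user_id": [1, 2, 3], "age": [34, 51, 28]})
    df = ge.from_pandas(pdf)  # wraps the frame with expect_* methods

    # Each expectation is an assertion about the data and returns a result
    result = df.expect_column_values_to_not_be_null("user_id")
    print(result.success)

    result = df.expect_column_values_to_be_between("age", min_value=0, max_value=120)
    print(result.success)
    ```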
  • 21
    Metaflow

    A framework for real-life data science

    Metaflow is a human-friendly Python library that helps scientists and engineers build and manage real-life data science projects. Metaflow was originally developed at Netflix to boost productivity of data scientists who work on a wide variety of projects from classical statistics to state-of-the-art deep learning.
    Downloads: 1 This Week
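
    A minimal sketch of a Metaflow flow: steps are methods, and instance attributes assigned in one step are persisted as artifacts for later steps. The flow and file names are illustrative:

    ```python
    # hello_flow.py, a minimal sketch of a Metaflow flow.
    from metaflow import FlowSpec, step

    class HelloFlow(FlowSpec):

        @step
        def start(self):
            # Attributes set on self are persisted as artifacts between steps
            self.message = "Hello from Metaflow"
            self.next(self.end)

        @step
        def end(self):
            print(self.message)

    if __name__ == "__main__":
        HelloFlow()
    ```

    Run it locally with python hello_flow.py run; Metaflow tracks each run and its artifacts.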
  • 22
    Nuclio

    High-Performance Serverless event and data processing platform

    Nuclio is an open source and managed serverless platform used to minimize development and maintenance overhead and automate the deployment of data-science-based applications. It delivers real-time performance, running up to 400,000 function invocations per second, and is portable across laptops, edge devices, on-prem, and multi-cloud deployments. It is the first serverless platform supporting GPUs for optimized utilization and sharing, with automated deployment to production in a few clicks from a Jupyter notebook. Deploy one of the example serverless functions or write your own. When running outside an orchestration platform (e.g., Kubernetes or Swarm), the dashboard is simply deployed to the local Docker daemon. The Getting Started With Nuclio On Kubernetes guide has a complete step-by-step walkthrough of using Nuclio serverless functions on Kubernetes.
    Downloads: 1 This Week
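
    A minimal sketch of a Nuclio Python handler; the platform invokes handler(context, event) for each trigger event, and functions can be deployed from the dashboard or with the nuctl CLI. The reply text is illustrative:

    ```python
    # handler.py, a minimal sketch of a Nuclio serverless function.
    def handler(context, event):
        # context carries platform services such as a structured logger
        context.logger.info("Handling an event")
        return "Hello from Nuclio"
    ```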
  • 23
    PySyft

    Data science on data without acquiring a copy

    Most software libraries let you compute over the information you own and see inside of machines you control. However, this means that you cannot compute on information without first obtaining (at least partial) ownership of that information. It also means that you cannot compute using machines without first obtaining control over those machines. This is very limiting to human collaboration and systematically drives the centralization of data, because you cannot work with a bunch of data without first putting it all in one (central) place. The Syft ecosystem seeks to change this system, allowing you to write software which can compute over information you do not own on machines you do not have (total) control over. This not only includes servers in the cloud, but also personal desktops, laptops, mobile phones, websites, and edge devices. Wherever your data wants to live in your ownership, the Syft ecosystem exists to help keep it there while allowing it to be used privately.
    Downloads: 1 This Week
  • 24
    Self-learning-Computer-Science

    Resources to learn computer science in your spare time

    Self-learning Computer Science is a curated, open-source guide repository designed to help learners independently study computer science topics using high-quality university-level resources. The author (an undergraduate CS student) assembled links to courses from institutions like MIT, UC Berkeley, Stanford, etc., covering mathematics, programming, data structures/algorithms, computer architecture, machine learning, software engineering and more. It’s aimed at learners who find traditional course structures restrictive and want a flexible, self-paced path through CS, with a focus on building depth and breadth rather than shortcut exam skills. The repository provides a roadmap, references, teaching materials, and sometimes the author’s own project examples, offering both guidance and community support. Because the CS field is broad, the structure helps learners allocate study time, avoid duplication, and benefit from “best in class” resources instead of randomly browsing.
    Downloads: 1 This Week
  • 25
    gophernotes

    The Go kernel for Jupyter notebooks and nteract

    gophernotes is a Go kernel for Jupyter notebooks and nteract. It lets you use Go interactively in a browser-based notebook or desktop app. Use gophernotes to create and share documents that contain live Go code, equations, visualizations, and explanatory text. These notebooks, with the live Go code, can then be shared with others via email, Dropbox, GitHub, and the Jupyter Notebook Viewer. Go forth and do data science, or anything else interesting, with Go notebooks! This project utilizes a Go interpreter called gomacro under the hood to evaluate Go code interactively. The gophernotes logo was designed by the brilliant Marcus Olsson and was inspired by Renee French's original Go Gopher design. If you have the JUPYTER_PATH environment variable set, or if you are using an older version of Jupyter, you may need to copy the kernel config to another directory.
    Downloads: 1 This Week