
Xorq documentation

Write your ML pipelines once. Run them anywhere. Xorq handles caching, lineage, and multi-engine execution automatically.

GET STARTED → VIEW ON GITHUB
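The tagline above maps onto a small amount of code: you describe a pipeline as a deferred expression and execute it against whichever engine you connect to. The sketch below is illustrative only, assuming xorq's Ibis-style expression API (connect, read_parquet, execute) as introduced in the Quickstart; the file and column names are made up.

```python
# Illustrative only: assumes xorq's Ibis-style expression API
# (connect / read_parquet / execute, as shown in the Quickstart).
# File and column names are hypothetical.
import xorq as xo

con = xo.connect()                            # embedded engine
events = con.read_parquet("events.parquet")   # register a table lazily

# Building the expression is deferred -- nothing is read or computed yet.
totals = events.group_by("user_id").agg(total=events.amount.sum())

df = totals.execute()                         # execution happens only here
```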

Getting started

New to Xorq? Start here to build your first pipeline.

Cache expression results

Speed up workflows with automatic caching

Defer query execution

Master deferred execution

Install Xorq

Get Xorq running on your machine

Introduction

Learn what Xorq is and why it exists

Quickstart

Get a first taste of Xorq

Switch between backends

Run expressions on any backend

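The caching and backend-switching cards above combine naturally: an expression built against one backend can be handed to another, and its results cached between runs. The following is a rough sketch under the assumption that xorq exposes into_backend() and cache() on expressions, as the tutorials describe; confirm the exact signatures in the linked pages.

```python
# Rough sketch of "Cache expression results" + "Switch between backends".
# Assumes xorq exposes duckdb.connect(), into_backend(), and cache();
# exact names and arguments may differ -- see the tutorials above.
import xorq as xo

source = xo.duckdb.connect()                  # source backend (assumption)
target = xo.connect()                         # xorq's embedded backend

orders = source.read_parquet("orders.parquet")   # hypothetical file

summary = (
    orders.group_by("region")
    .agg(revenue=orders.amount.sum())
    .into_backend(target)                     # move the expression to another engine
    .cache()                                  # reuse results across repeated runs
)

print(summary.execute())
```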

ML tutorials

Train models, split data, and deploy predictions with Xorq’s ML workflow.

Compare model performance

Compare models systematically

Deploy your first model

Deploy models to production

Split data for training

Partition your data into training and test sets

Train your first model

Start your ML journey

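The splitting tutorial above rests on a simple idea: assign each row to train or test deterministically, so the split stays stable across runs. The snippet below illustrates that idea in plain pandas with a hash of a key column; it is not xorq's API, whose own split helpers are covered in "Split data for training".

```python
# Library-agnostic illustration of deterministic train/test splitting:
# hash a key column so the same row always lands in the same split.
# This is plain pandas, not xorq's API -- use xorq's split helpers in
# real pipelines (see "Split data for training").
import zlib
import pandas as pd

def hash_split(df: pd.DataFrame, key: str, test_size: float = 0.25):
    """Route rows to train/test based on a stable hash of the key column."""
    bucket = df[key].astype(str).map(lambda k: zlib.crc32(k.encode()) % 100)
    in_test = bucket < int(test_size * 100)
    return df[~in_test], df[in_test]

frame = pd.DataFrame({"user_id": range(1_000), "target": range(1_000)})
train, test = hash_split(frame, key="user_id")
print(len(train), len(test))
```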

Explore the docs

AI tutorials

Call LLMs, build MCP tools, and process data with AI

Analytics tutorials

Query across engines, write UDFs, and build analytics workflows

Guides

Production-ready patterns for deploying and scaling your pipelines

Concepts

Deep dives into deferred execution, caching, and architecture

CLI reference

Every command you need to build, run, and serve pipelines

Python API

Full reference for Xorq’s Python API

Troubleshooting

Resolve common issues and errors when using Xorq

Release notes

Version history, breaking changes, and new features


This page is built with Quarto.