Documentation
Getting Started
- Design
- Quickstart
- Tutorial: Loading a file
- Tutorial: Loading stream data from Kafka
- Tutorial: Loading a file using Hadoop
- Tutorial: Loading stream data using HTTP push
- Tutorial: Querying data
Further tutorials
- Tutorial: Rollup
- Tutorial: Configuring retention
- Tutorial: Updating existing data
- Tutorial: Compacting segments
- Tutorial: Deleting data
- Tutorial: Writing your own ingestion specs
- Tutorial: Transforming input data
- Clustering
Data Ingestion
- Ingestion overview
- Data Formats
- Tasks Overview
- Ingestion Spec
- Schema Design
- Schema Changes
- Batch File Ingestion
- Stream Ingestion
- Compaction
- Updating Existing Data
- Deleting Data
- Task Locking & Priority
- Task Reports
- FAQ
- Misc. Tasks
Querying
- Overview
- Timeseries
- TopN
- GroupBy
- Time Boundary
- Segment Metadata
- DataSource Metadata
- Search
- Select
- Scan
- Components
- SQL
- Lookups
- Joins
- Multitenancy
- Caching
- Sorting Orders
- Virtual Columns
Design
- Overview
- Storage
- Node Types
- Dependencies
Operations
- API Reference
- Including Extensions
- Data Retention
- Metrics and Monitoring
- Alerts
- Updating the Cluster
- Different Hadoop Versions
- Performance FAQ
- Dump Segment Tool
- Insert Segment Tool
- Pull Dependencies Tool
- Recommendations
- TLS Support
- Password Provider
Configuration
- Configuration Reference
- Recommended Configuration File Organization
- JVM Configuration Best Practices
- Common Configuration
- Coordinator
- Overlord
- MiddleManager & Peons
- Broker
- Historical
- Caching
- General Query Configuration
- Configuring Logging
Development
- Overview
- Libraries
- Extensions
- JavaScript
- Build From Source
- Versioning
- Integration
- Experimental Features