Custom Indexing Framework
The custom indexing framework exposes specific interfaces that you implement to define your data processing logic. Some common APIs include:
- `process()`: Transform raw checkpoint data (transactions, events, object changes) into your desired database rows. This is where you extract meaningful information, filter relevant data, and format it for storage.
- `commit()`: Store your processed data to the database with proper transaction handling. The framework calls this with batches of processed data for efficient bulk operations.
- `prune()`: Clean up old data based on your retention policies (optional). Useful for managing database size by removing outdated records while preserving recent data.
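The shape of these interfaces can be sketched in plain Rust. This is a minimal, self-contained illustration of the process/commit/prune flow, not the framework's actual API: the `Pipeline` trait, `Checkpoint`, and `DigestRow` types below are hypothetical stand-ins, and a real indexer would commit to a database rather than an in-memory `Vec`.

```rust
// Stand-in for raw checkpoint data delivered by the framework (hypothetical).
struct Checkpoint {
    sequence: u64,
    tx_digests: Vec<String>,
}

// A processed row destined for storage (hypothetical).
#[derive(Debug, Clone)]
struct DigestRow {
    checkpoint: u64,
    digest: String,
}

// Illustrative trait mirroring the process/commit/prune shape described above.
trait Pipeline {
    type Row;
    /// Transform raw checkpoint data into database rows.
    fn process(&self, checkpoint: &Checkpoint) -> Vec<Self::Row>;
    /// Persist a batch of processed rows; returns the number written.
    fn commit(&mut self, rows: &[Self::Row]) -> usize;
    /// Remove rows from checkpoints older than `keep_from`; returns the number removed.
    fn prune(&mut self, keep_from: u64) -> usize;
}

struct TxDigestPipeline {
    store: Vec<DigestRow>, // in-memory stand-in for a database table
}

impl Pipeline for TxDigestPipeline {
    type Row = DigestRow;

    fn process(&self, checkpoint: &Checkpoint) -> Vec<DigestRow> {
        checkpoint
            .tx_digests
            .iter()
            .map(|d| DigestRow {
                checkpoint: checkpoint.sequence,
                digest: d.clone(),
            })
            .collect()
    }

    fn commit(&mut self, rows: &[DigestRow]) -> usize {
        self.store.extend_from_slice(rows);
        rows.len()
    }

    fn prune(&mut self, keep_from: u64) -> usize {
        let before = self.store.len();
        self.store.retain(|r| r.checkpoint >= keep_from);
        before - self.store.len()
    }
}

fn main() {
    let mut p = TxDigestPipeline { store: Vec::new() };
    for cp in [
        Checkpoint { sequence: 1, tx_digests: vec!["AAA".into()] },
        Checkpoint { sequence: 2, tx_digests: vec!["BBB".into(), "CCC".into()] },
    ] {
        let rows = p.process(&cp);
        let written = p.commit(&rows);
        println!("checkpoint {}: wrote {} rows", cp.sequence, written);
    }
    let removed = p.prune(2);
    println!("pruned {} rows, {} remain", removed, p.store.len());
}
```

The division of labor is the key point: `process()` is pure transformation, `commit()` handles batched persistence, and `prune()` enforces retention, which is what lets the framework batch, retry, and parallelize each stage independently.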
Sequential and concurrent pipeline types and their trade-offs are detailed in Pipeline Architecture.
What Are Custom Indexers?
The `sui-indexer-alt-framework` is a powerful Rust framework for building high-performance, custom blockchain indexers on Sui. It provides customizable, production-ready components for data ingestion, processing, and storage.
Indexer Pipeline Architecture
The `sui-indexer-alt-framework` provides two distinct pipeline architectures: sequential and concurrent. Understand their differences to decide which best suits your project's needs.
Build a Custom Indexer
Build a custom indexer using the `sui-indexer-alt-framework`. The example indexer demonstrates a sequential pipeline that extracts transaction digests from Sui checkpoints and stores them in a local PostgreSQL database.
Bring Your Own Store (BYOS)
Implement a custom storage backend for the custom indexer framework.
Integrate Data Sources
Learn how to integrate custom data sources and storage systems with Sui indexers. Covers checkpoint data sources, custom store implementations, and Move event deserialization for building flexible indexing solutions.
Optimize Runtime and Performance
Learn how to optimize Sui custom indexer performance through runtime configuration, resource monitoring, and debugging tools. Covers ingestion settings, database tuning, Tokio console debugging, Prometheus metrics, and data pruning strategies.