---
title: "Rust SDK (alpha)"
description: "Full SDK guide for using PowerSync in Rust applications."
sidebarTitle: "SDK Reference"
---

import SdkFeatures from '/snippets/sdk-features.mdx'; import RustInstallation from '/snippets/rust/installation.mdx'; import RustWatchQuery from '/snippets/rust/basic-watch-query.mdx'; import GenerateSchemaAutomatically from '/snippets/generate-schema-automatically.mdx'; import LocalOnly from '/snippets/local-only-escape.mdx';

This SDK is currently in [**alpha**](/resources/feature-status), intended for external testing and public feedback. Expect breaking changes and instability as development continues.

The SDK is distributed via crates.io. Additional resources:

  • Source code: the `powersync-native` repo on GitHub.
  • Full API reference for the SDK.
  • Gallery of example projects/demo apps built with Rust and PowerSync.
  • Changelog for the SDK.

SDK Features

<SdkFeatures />

Installation

<RustInstallation />

Getting Started

Prerequisites: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the Setup Guide).

1. Define the Client-Side Schema

The first step is to define the client-side schema. This is the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The client-side schema is typically derived mainly from your backend source database schema and your Sync Streams (or legacy Sync Rules), but it can also include other tables, such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the PowerSync protocol: schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using SQLite views to allow structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).

The types available are `text`, `integer` and `real`. These should map directly to the values produced by your Sync Streams (or legacy Sync Rules). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see Types.

Example:

```Rust
use powersync::schema::{Column, Schema, Table};

pub fn app_schema() -> Schema {
    let mut schema = Schema::default();
    let todos = Table::create(
        "todos",
        vec![
            Column::text("list_id"),
            Column::text("created_at"),
            Column::text("completed_at"),
            Column::text("description"),
            Column::integer("completed"),
            Column::text("created_by"),
            Column::text("completed_by"),
        ],
        |_| {},
    );

    let lists = Table::create(
        "lists",
        vec![
            Column::text("created_at"),
            Column::text("name"),
            Column::text("owner_id"),
        ],
        |_| {},
    );

    schema.tables.push(todos);
    schema.tables.push(lists);
    schema
}
```
**Note**: There's no need to declare a primary key `id` column; PowerSync creates it automatically.

2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your Sync Streams (or legacy Sync Rules). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.

Process setup

PowerSync is based on SQLite, and statically links a SQLite extension that needs to be enabled for the process before the SDK can be used. The SDK offers a utility to register the extension, and we recommend calling it early in main():

```Rust
use powersync::env::PowerSyncEnvironment;

mod schema;

fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");

    // TODO: Start database and your app
}
```

Database setup

For maximum flexibility, the PowerSync Rust SDK can be configured with different asynchronous runtimes and HTTP clients used to connect to the PowerSync Service. These dependencies are configured through the `PowerSyncEnvironment` struct, which wraps:

  1. An HTTP client (implementing the `powersync::http::HttpClient` trait). When enabling the `reqwest` feature on the `powersync` crate, that trait is implemented for `reqwest::Client`.
  2. An asynchronous pool giving out leases to SQLite connections.
  3. A timer implementation allowing the sync client to implement delayed retries on connection errors. This is typically provided by async runtimes like Tokio.

To configure PowerSync, begin by opening a connection pool:

<Tabs>
<Tab title="File-based">

Use `ConnectionPool::open` to open a database file with multiple connections configured in WAL mode:

```Rust
use powersync::{ConnectionPool, error::PowerSyncError};
use powersync::env::PowerSyncEnvironment;

fn open_pool() -> Result<ConnectionPool, PowerSyncError> {
    ConnectionPool::open("database.db")
}
```

</Tab>
<Tab title="In-memory">

Alternatively, wrap a single existing `rusqlite` connection, e.g. for an in-memory database:

```Rust
use powersync::ConnectionPool;
use powersync::env::PowerSyncEnvironment;
use powersync::error::PowerSyncError;
use rusqlite::Connection;

fn open_pool() -> Result<ConnectionPool, PowerSyncError> {
    let connection = Connection::open_in_memory()?;
    Ok(ConnectionPool::single_connection(connection))
}
```

</Tab>
</Tabs>

Next, create a database and start the asynchronous tasks used by the sync client when connecting.
To be compatible with different executors, the SDK uses a model based on long-lived actors instead of
spawning tasks dynamically. All asynchronous processes are exposed through `PowerSyncDatabase::async_tasks()`;
these tasks must be spawned before connecting.

<Tabs>
<Tab title="Tokio">

Ensure you depend on `powersync` with the `tokio` feature enabled.

```Rust
#[tokio::main]
async fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");

    let pool = open_pool().expect("open pool");
    let env = PowerSyncEnvironment::custom(
        reqwest::Client::new(),
        pool,
        PowerSyncEnvironment::tokio_timer(),
    );

    let db = PowerSyncDatabase::new(env, schema::app_schema());
    db.async_tasks().spawn_with_tokio();
}
```

</Tab>
<Tab title="smol">

Ensure you depend on `powersync` with the `smol` feature enabled.

```Rust
async fn start_app() {
    let pool = open_pool().expect("open pool");
    let env = PowerSyncEnvironment::custom(
        reqwest::Client::new(),
        pool,
        // Use the async_io crate to implement timers in PowerSync
        PowerSyncEnvironment::async_io_timer(),
    );

    let db = PowerSyncDatabase::new(env, schema::app_schema());
    // TODO: Use a custom multi-threaded executor instead of the default
    let tasks = db.async_tasks().spawn_with(smol::spawn);
    for task in tasks {
        // The task will automatically stop once the database is dropped, but we
        // want to keep it running until then.
        task.detach();
    }
}

fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");
    smol::block_on(start_app());
}
```

</Tab>
<Tab title="Custom runtimes">

PowerSync is executor-agnostic and supports all async Rust runtimes. You need to provide:

  1. A future that delays execution by scheduling its waker through a timer.
  2. A way to spawn futures as a task that is polled independently.

PowerSync uses the `Timer` trait for timers; it can be installed by creating a `PowerSyncEnvironment` with `PowerSyncEnvironment::custom` and passing your custom timer implementation.

Spawning tasks is only necessary once after opening the database. All tasks necessary for the sync client are exposed through `PowerSyncDatabase::async_tasks`. You can spawn these by providing a function turning these futures into independent tasks via `AsyncDatabaseTasks::spawn_with`.

</Tab>
</Tabs>
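As a rough illustration of what a custom runtime has to provide, the sketch below builds a minimal spawner from the standard library alone: a tiny `block_on` that parks the current thread between polls, and a `spawn_on_thread` helper that turns a long-lived future into its own dedicated thread. The helper names are hypothetical and not part of the PowerSync API; a real integration would hand a function like `spawn_on_thread` to `AsyncDatabaseTasks::spawn_with`.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

/// Wakes the executor thread by unparking it.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

/// Drive a future to completion on the current thread,
/// parking between polls until the waker fires.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

/// A `spawn_with`-style adapter: each future becomes an independent
/// task polled on its own dedicated thread.
fn spawn_on_thread<F>(fut: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    thread::spawn(move || block_on(fut));
}
```

Any executor that can turn a `Future` into an independently polled task fits the same shape; the sketch only makes the requirement concrete.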

Finally, instruct PowerSync to sync data from your backend:

```Rust
// MyBackendConnector is defined in the next step...
db.connect(SyncOptions::new(MyBackendConnector {
    db: db.clone(),
})).await;
```

3. Integrate with your Backend

Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database. It is used to:

  1. Retrieve an auth token to connect to the PowerSync instance.
  2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.

Accordingly, the connector must implement two methods:

  1. `fetch_credentials` - This method is automatically invoked by the PowerSync Client SDK to obtain authentication credentials. The SDK caches credentials internally and only calls this method when needed (e.g. on initial connection or when the token is near expiry). See When fetchCredentials() is Called for details, and Authentication Setup for instructions on how the credentials should be generated.
  2. `upload_data` - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app's backend API. You need to implement how those writes are processed and uploaded in this method. See When uploadData() is Called for details on triggers, throttling, and retry behavior, and Writing Client Changes for considerations on the app backend implementation.

Example:

```Rust
struct MyBackendConnector {
    db: PowerSyncDatabase,
}

#[async_trait]
impl BackendConnector for MyBackendConnector {
    async fn fetch_credentials(&self) -> Result<PowerSyncCredentials, PowerSyncError> {
        // implement fetchCredentials to obtain the necessary credentials to connect to your backend
        // See an example implementation in https://github.com/powersync-ja/powersync-native/blob/508193b0822b8dad1a534a16462e2fcd36a9ac68/examples/egui_todolist/src/database.rs#L119-L133

        Ok(PowerSyncCredentials {
            endpoint: "[Your PowerSync instance URL or self-hosted endpoint]".to_string(),
            // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
            token: "An authentication token".to_string(),
        })
    }

    async fn upload_data(&self) -> Result<(), PowerSyncError> {
        // Implement uploadData to send local changes to your backend service
        // You can omit this method if you only want to sync data from the server to the client
        // See an example implementation under Usage Examples (sub-page)
        // See https://docs.powersync.com/handling-writes/writing-client-changes for considerations.
        let mut local_writes = self.db.crud_transactions();
        while let Some(tx) = local_writes.try_next().await? {
            todo!("Inspect tx.crud for local writes that need to be uploaded to your backend");
            tx.complete().await?;
        }

        Ok(())
    }
}
```

Using PowerSync: CRUD functions

Once the PowerSync instance is configured, you can start using the SQLite DB functions.

The most commonly used CRUD functions to interact with your SQLite data are:

  • reader - run statements reading from the database.
  • watch_statement - execute a read query every time source tables are modified.
  • writer - write to the database.

Reads

To obtain a connection suitable for reads, call and await `PowerSyncDatabase::reader()`. The returned connection lease can be used as a `rusqlite::Connection` to run queries.

```Rust
async fn find(db: &PowerSyncDatabase, id: &str) -> Result<(), PowerSyncError> {
    let reader = db.reader().await?;
    let mut stmt = reader.prepare("SELECT * FROM lists WHERE id = ?")?;
    let mut rows = stmt.query(params![id])?;
    while let Some(row) = rows.next()? {
        let id: String = row.get("id")?;
        let name: String = row.get("name")?;

        println!("Found todo list: {id}, {name}");
    }

    Ok(())
}
```

Watching Queries

The `watch_statement` method executes a read query whenever a change to a dependent table is made:

<RustWatchQuery />

Mutations

Local writes on tables are automatically captured with triggers. To obtain a connection suitable for writes, use the PowerSyncDatabase::writer method:

The execute method executes a write query (INSERT, UPDATE, DELETE) and returns the number of rows affected.

```Rust
async fn insert_customer(
    db: &PowerSyncDatabase,
    name: &str,
    email: &str,
) -> Result<(), PowerSyncError> {
    let writer = db.writer().await?;
    writer.execute(
        "INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
        params![name, email],
    )?;
    Ok(())
}
```

If you're looking for transactions, use the `transaction` method from `rusqlite` on the writer connection.

Configure Logging

The Rust SDK uses the `log` crate internally, so you can configure it with any logging backend, e.g. with `env_logger` (which enables output at runtime via the `RUST_LOG` environment variable):

```Rust
fn main() {
    env_logger::init();
    // ...
}
```

Additional Usage Examples

For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the Usage Examples page.

ORM / SQL Library Support

The Rust SDK does not currently support any higher-level SQL libraries, but we're investigating support for Diesel and sqlx. Please reach out to us if you're interested in these or other integrations.

Troubleshooting

See Troubleshooting for pointers to debug common issues.

Supported Platforms

See Supported Platforms -> Rust SDK.

Upgrading the SDK

To update the PowerSync SDK, run `cargo update powersync` (or `cargo update -p powersync` on older Cargo versions), or manually bump the version in your `Cargo.toml` to the latest release.