Uploading
One of the main operations in rac-delta is uploading new versions of your builds or directories: only the changed chunks are uploaded, and obsolete chunks are removed from remote storage.
You can use rac-delta to update a build or to upload a completely new build to your storage.
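The core idea behind a delta upload can be sketched outside the SDK: split a build into fixed-size chunks, hash each chunk, and upload only the hashes the remote side does not already have. The sketch below is purely illustrative and uses made-up helper names; it is not rac-delta's actual chunking or hashing implementation.

```typescript
import { createHash } from 'node:crypto';

// Split a buffer into fixed-size chunks and hash each one.
// Illustrative only: rac-delta's real chunking/hashing may differ.
function chunkHashes(data: Buffer, chunkSize: number): string[] {
  const hashes: string[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    const chunk = data.subarray(offset, offset + chunkSize);
    hashes.push(createHash('sha256').update(chunk).digest('hex'));
  }
  return hashes;
}

// Only chunks whose hashes the remote does not know need uploading.
function missingChunks(local: string[], remote: Set<string>): string[] {
  return local.filter((h) => !remote.has(h));
}

const v1 = Buffer.from('aaaabbbbcccc');
const v2 = Buffer.from('aaaaXbbbcccc'); // one byte changed in the second chunk
const remote = new Set(chunkHashes(v1, 4));
const toUpload = missingChunks(chunkHashes(v2, 4), remote);
console.log(toUpload.length); // 1: only the changed chunk is re-uploaded
```

With a 1 MiB chunk size (as in the client configuration examples below), a small edit in a large build re-uploads only the affected chunks instead of the whole directory.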
Upload pipeline
For this, the rac-delta SDK provides an upload pipeline that already implements all the steps needed to upload new builds to your storage automatically.
- Node.js
- Rust
Basic pipeline usage:
const remoteIndexToUse = undefined;
await racDeltaClient.pipelines.upload.execute('path/to/build', remoteIndexToUse, {
requireRemoteIndex: false,
force: false,
ignorePatterns: undefined,
onStateChange: (state) => {
console.log(state);
},
onProgress: (type, progress, speed) => {
console.log(type, progress.toFixed(1), speed?.toFixed(1));
},
});
Parameters:
| Name | Type | Description |
|---|---|---|
| path | string | The path to your local build that will be uploaded (relative or absolute) |
| remote rd-index | RDIndex | The rd-index.json as an RDIndex object to use as the remote index; if none is provided, the pipeline will try to download it from your storage |
| upload options | UploadOptions | Optional settings and callbacks for the upload (see the example above) |
Basic pipeline usage:
let remote_index_to_use: Option<RDIndex> = None;
match client.pipelines.upload {
UploadPipelineBundle::Hash(pipeline) => {
pipeline
.execute(
Path::new("my/dir"),
remote_index_to_use,
Some(UploadOptions {
require_remote_index: Some(false),
force: Some(false),
ignore_patterns: None,
on_state_change: Some(std::sync::Arc::new(|state| {
println!("Upload state: {:?}", state);
})),
on_progress: Some(std::sync::Arc::new(|phase, progress, speed| {
println!(
"Phase: {:?}, progress: {:.1}%, speed: {}",
phase,
progress * 100.0,
speed
.map_or("unknown".to_string(), |s| format!("{:.1} bytes/s", s))
);
})),
}),
)
.await?;
}
UploadPipelineBundle::Url(_p) => {
// none for SSH
}
}
Parameters:
| Name | Type | Description |
|---|---|---|
| path | Path | The path to your local build that will be uploaded (relative or absolute) |
| remote rd-index | Option&lt;RDIndex&gt; | The rd-index.json as an RDIndex object to use as the remote index; if none is provided, the pipeline will try to download it from your storage |
| upload options | Option&lt;UploadOptions&gt; | Optional settings and callbacks for the upload (see the example above) |
This will automatically generate your local rd-index.json, fetch the remote rd-index.json if none was provided, compare both indexes, generate a DeltaPlan, upload the new chunks, and clean obsolete chunks from the storage configured in the rac-delta client.
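The compare step can be pictured as two set differences over chunk hashes: chunks present locally but not remotely must be uploaded, and chunks present remotely but not locally are obsolete. The index and plan shapes below are assumed for illustration; rac-delta's real RDIndex and DeltaPlan types may differ.

```typescript
// Assumed, simplified index shape: a flat list of chunk hashes.
type Index = { chunks: string[] };

// Assumed plan shape; rac-delta's real DeltaPlan may differ.
type Plan = { upload: string[]; delete: string[] };

function compareForUpload(local: Index, remote: Index): Plan {
  const remoteSet = new Set(remote.chunks);
  const localSet = new Set(local.chunks);
  return {
    // Present locally, missing remotely: must be uploaded.
    upload: local.chunks.filter((h) => !remoteSet.has(h)),
    // Present remotely, gone locally: obsolete, can be deleted.
    delete: remote.chunks.filter((h) => !localSet.has(h)),
  };
}

const plan = compareForUpload(
  { chunks: ['a', 'b', 'c'] },
  { chunks: ['b', 'c', 'd'] },
);
console.log(plan); // { upload: ['a'], delete: ['d'] }
```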
Pipeline helpers
To upload a directory correctly with rac-delta, the upload pipeline relies on internal helper methods that use rac-delta services for uploading, index comparison, deletion of obsolete chunks, and so on.
If you don't want to use the default execute method, you can create your own pipeline using those helpers and services.
- Node.js
- Rust
Example usage of pipeline helpers:
const racDeltaClient = await RacDeltaClient.create({
chunkSize: 1024 * 1024,
maxConcurrency: 6,
storage: {
type: 'ssh',
host: 'localhost',
pathPrefix: '/root/upload',
port: 2222,
credentials: {
username: 'root',
password: 'password',
},
},
});
const remoteIndex = await (await fetch('my/api/or/my/storage/rd-index.json')).json();
// Generate local rd-index.json (you could use racDeltaClient.delta.createIndexFromDirectory too)
const localIndex = await racDeltaClient.pipelines.upload.scanDirectory('my/build');
// Generate a deltaPlan comparing both indexes
const deltaPlan = await racDeltaClient.delta.compareForUpload(localIndex, remoteIndex);
// Upload new chunks (uses maxConcurrency from client)
await racDeltaClient.pipelines.upload.uploadMissingChunks(deltaPlan, 'my/build', false);
//... Delete obsolete chunks, upload new rd-index... etc
Example usage of pipeline helpers:
let config = RacDeltaConfig {
chunk_size: 1024 * 1024,
max_concurrency: Some(6),
storage: StorageConfig::SSH(SSHStorageConfig {
base: BaseStorageConfig {
path_prefix: Some("/root/upload".to_string()),
},
host: "localhost".to_string(),
port: Some(2222),
credentials: SSHCredentials {
username: "root".to_string(),
password: Some("password".to_string()),
private_key: None,
},
}),
};
let client: RacDeltaClient = RacDeltaClient::new(config).await?;
let remote_index: Option<RDIndex> = None; // TODO: fetch from your API or storage
// Generate local rd-index.json (you could use client.delta.create_index_from_directory too)
let local_index: Option<RDIndex> = match client.pipelines.upload {
UploadPipelineBundle::Hash(ref pipeline) => {
Some(pipeline.scan_directory(Path::new("my/dir"), None).await?)
}
UploadPipelineBundle::Url(ref _p) => None,
};
// Generate a DeltaPlan comparing both indexes
let delta_plan: DeltaPlan = client
.delta
.compare_for_upload(&local_index.unwrap(), remote_index)
.await?;
// Upload new chunks (uses max_concurrency from client)
match client.pipelines.upload {
UploadPipelineBundle::Hash(ref pipeline) => {
pipeline
.upload_missing_chunks(&delta_plan, Path::new("my/dir"), false, None)
.await?
}
UploadPipelineBundle::Url(ref _p) => (),
};
//... Delete obsolete chunks, upload new rd-index... etc
For Rust, pipelines are always split into Hash and Url variants because UrlPipeline's execute differs from HashPipeline's. An enum resolves this partially, but the project is open for enhancements!
Note: in almost every case you will use the Hash pipeline; Url is only for the URL storage type.
For a full list of Upload Pipeline helpers, see pipelines. Also see DeltaPlan.