Note: This describes the Starfish API.
Starfish uses Bearer authentication, also called token authentication.
Although Bearer authentication is commonly used with OAuth 2.0, the Starfish API does not use OAuth 2.0.
To use the API you must first obtain a token from the auth endpoint.
The token is valid for 16 hours by default.
The validity period can be configured with the auth.auth_token_timeout_secs config option.
The token consists of 3 parts separated by colons: token_ver:token_id:token_secret,
for example sf-api-v1:lr9Cnex0za:AVj8w19TMhMjjVbEHse3EPeeq1TnuLuCXK6IHxzCzls.
token_ver is the version of the API; for now it is sf-api-v1.
token_id identifies the token; in the example above it is lr9Cnex0za.
token_secret is known only to the owner of the token; in this case it is AVj8w19TMhMjjVbEHse3EPeeq1TnuLuCXK6IHxzCzls.
The token then needs to be passed in the Authorization header of every API call:
POST /api/... HTTP/1.1
Host: starfish.com
Authorization: Bearer TOKEN
Content-Type: application/json
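For example, a client might wrap this in a small helper (a sketch; the token shown is the placeholder example from above, not a real credential):

```python
def auth_header(token: str) -> dict:
    """Build the Authorization/Content-Type headers expected by the Starfish API."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

# Placeholder token from the example above:
headers = auth_header("sf-api-v1:lr9Cnex0za:AVj8w19TMhMjjVbEHse3EPeeq1TnuLuCXK6IHxzCzls")
```

These headers can then be passed to any HTTP client when calling the API.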
Returns an archive job object. A single archive job may contain multiple low-level jobs.
| volume_and_path | string Volume and path, in volume:path form (for example usr:path) |
| archive_target_name | string Archive target name defined with /api/archive/target API |
| dest_path | string Destination dir path appended to archive_target dest_path |
| migrate | boolean Default: false Remove files from source after copy to archive. Alias: remove_source. |
object (job.options.response) | |
| remove_source | boolean Default: false Alias to migrate. |
| remove_empty_dirs | boolean Default: false For each removed file, also remove its parent directory if it becomes empty; on success this recurses up to the job root (the job root will not be removed if it is the volume root). Using this option without migrate has no effect. |
| generate_manifest | boolean or null Generate a manifest file for an upload/copy job. If not defined, the value for upload/copy command will be used (true by default). If set, it overwrites settings per upload/copy command and global default set by dispatcher.generate_manifest. |
| query | string If defined here, overrides query filters from URL params |
| compression_type | string Enum: "gzip" "xz" During upload to object store, compress file contents. |
| compression_level | integer Set compression level. Defaults to 1 for xz and 6 for gzip. Ignored if used without compression_type. |
| overwrite | string Enum: "never" "success_if_identical" "older" "always" Allowed only when archiving to another volume. |
| storage_class | string Allowed only for S3 and Azure archive targets. Override 'storage_class' parameter from archive targets. |
| parallel_upload_count | integer When the size of the object being uploaded exceeds the Starfish multi-part upload size (default ~100MB), the upload is split across this many parallel part uploads. |
| no_sparse | boolean Allowed only for volume archive targets. Restore sparse files as non-sparse. This is default on Windows volumes, because restoring sparse files is not supported there. |
| inplace | boolean Allowed only for volume archive targets. |
| command_verbose | boolean Allowed only for object store archive target. Run low level command (upload) with debug log level. |
| dedup | boolean Default: false Deduplicate files with the same contents to object store (uses the md5 of file content as the uploaded object name). This option works for cloud storage only and is exclusive with 'tar'. |
| part_size | string Minimum part size for multipart uploads to object store. Files larger than this value are uploaded using multipart upload. Files with size up to "part_size" may be uploaded with a single request and, depending on other config parameters, the whole file may be loaded into memory. |
| min_part_size | string Alias to 'part_size' for backward compatibility. Please use 'part_size' instead. |
| tar | boolean Default: false Upload a tar.gz archive of the input files to object store instead of individual files. This option is exclusive with 'dedup'. |
| verbose | boolean Default: false Run job command with DEBUG log level |
| prescan_enabled | boolean Default: true Enable filesystem prescanning |
| prescan_type | string Default: "diff" Enum: "diff" "sync" "mtime" Change prescan type |
| from_scratch | boolean Default: false Force job to run archive on all matching entries, even if they are already archived |
| job_name | string Use this name for the copy job. A copy job with the same job_name will run only on changed entries unless --from-scratch is given. New results will override results from the previous job with the same name. |
| workers_per_agent | Array of strings Number of workers to run on each agent that can run this job. An element in this list can be a plain number, which applies as the default to all agents, or an agent-specific string. |
| entries_from_file | string or null (entries_from_file_enum) Enum: "paths" "sfids" Determines the type of entries passed in the file. Supported by the restore job; other jobs expect paths only. |
| hard_links | boolean Default: false Preserve hardlinks when copying files between volumes (ignored for other types of archive targets). Linux only. |
{
  "volume_and_path": "usr:path",
  "archive_target_name": "fake-s3",
  "dest_path": "my/dir",
  "migrate": true,
  "options": {
    "archive_target_name": "string",
    "dst_allow_empty_dir": true,
    "archive_target_id": 78,
    "dst_path": "string",
    "dst_volume": "string",
    "dst_volume_id": 0
  },
  "remove_source": false,
  "remove_empty_dirs": false,
  "generate_manifest": true,
  "query": "string",
  "compression_type": "gzip",
  "compression_level": 0,
  "overwrite": "never",
  "storage_class": "string",
  "parallel_upload_count": 0,
  "no_sparse": true,
  "inplace": true,
  "command_verbose": true,
  "dedup": false,
  "part_size": "10MiB",
  "min_part_size": "string",
  "tar": false,
  "verbose": false,
  "prescan_enabled": true,
  "prescan_type": "diff",
  "from_scratch": false,
  "job_name": "string",
  "entries_from_file": "paths",
  "hard_links": false
}

{
  "href": "/api/archive/job/123",
  "volume_and_path": "projects:dir1/dir2",
  "target_id": 0,
  "target_name": "string",
  "target_info": {"bucket_name": "destination-bucket", "dst_path": "common-subdir"},
  "query": "string",
  "archiving_options": {"migrate": true, "remove_empty_dirs": true},
  "low_level_jobs": {"UPLOADING_FILES": [123], "REMOVING_SOURCE": [129]},
  "stats": {
    "UPLOADING_FILES": {"matched_entries": 126, "done_entries": 123, "failed_entries": 3, "tmp_errors": 6},
    "PINNING_TAGS": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 0},
    "REMOVING_SOURCE": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 6}
  },
  "id": 0,
  "status": "STARTING",
  "state": {"name": "string", "display_name": "string", "is_running": true, "is_successful": false, "is_failed": false, "is_final": false},
  "creation_time": 1593093530.123456,
  "creation_time_hum": "2020-06-25 15:58:50",
  "end_time": 1593093600.123456,
  "end_time_hum": "2020-06-25 16:00:00",
  "duration": 70,
  "duration_hum": "1m10s",
  "created_by_id": 1,
  "created_by": {"system_id": 12, "username": "Alice"},
  "created_by_hum": "Alice (uid=12)"
}

| status | Array of strings Job status(es), either a list of statuses or a single status as a string. Cannot be used together with running. |
| running | boolean If set to true, only running jobs are returned. |
| requested_by | Array of strings Created by a given entity, either a list of entities or a single entity as a string. For example 'gui', 'client', 'scheduler' etc. |
| creation_time | string Supports FROM-TO and RELATIVE formats. FROM-TO: '# hour|day|week|month|year(s) ago' or 'YYYYMMDD[HHMM[SS]]' or 'now' or 'inf'. RELATIVE: '[+|-]N[y|m|w|d|h]', meaning a number of years, months, weeks, days (default) or hours. |
| end_time | string The same as creation_time, but matching the job end time. |
| sort_by | string Enum: "archive_target_name" "command" "creation_time" "dest_path" "end_time" "id" "path" "phase" "query" "reason" "requested_by" "status" "target_name" "volume_and_path" "volume_id" "volume_name" Example: sort_by=archive_target_name +command Sort by given fields. Multiple fields should be separated with whitespace or commas. Each field can be prefixed with '+' or '-' to sort ascending or descending (default is ascending). By default, results are sorted by id, but the limit is applied descending. If limit is also specified, results are sorted first and then the limit is applied. |
| limit | integer Maximum number of returned jobs |
| paging_offset | integer Paging offset: the number of entries that have already been printed on previous pages. A paged result includes the field next_page_params, which can be passed to fetch the next page. |
| add_paging_params_to_response | boolean Default: false A flag specifying whether to include paging params in the response. |
| confidential | boolean Default: false If enabled, fields that may contain confidential info will be obfuscated in the response. |
| created_by_username | Array of strings Only jobs created by a user with the given username will be taken into account. The request may specify more than one name. |
| created_by_uid | Array of strings Only jobs created by a user with the given UID will be taken into account. The request may specify more than one user id. |
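The paging parameters above can be combined into a loop that walks all result pages. A minimal sketch, where fetch_page stands in for whatever HTTP call performs the GET with the given query params:

```python
def iter_archive_jobs(fetch_page, page_size=50):
    """Yield archive jobs page by page using limit/paging_offset.

    fetch_page(params) must return the decoded JSON response, which
    contains 'archive_jobs' and 'matched_archive_jobs_count'.
    """
    offset = 0
    while True:
        page = fetch_page({
            "limit": page_size,
            "paging_offset": offset,
            "add_paging_params_to_response": True,
        })
        jobs = page["archive_jobs"]
        yield from jobs
        offset += len(jobs)
        if not jobs or offset >= page["matched_archive_jobs_count"]:
            break
```

Each iteration passes the number of entries already consumed as paging_offset, matching the parameter description above.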
{
  "archive_jobs": [
    {
      "href": "/api/archive/job/123",
      "volume_and_path": "projects:dir1/dir2",
      "target_id": 0,
      "target_name": "string",
      "target_info": {"bucket_name": "destination-bucket", "dst_path": "common-subdir"},
      "query": "string",
      "archiving_options": {"migrate": true, "remove_empty_dirs": true},
      "low_level_jobs": {"UPLOADING_FILES": [123], "REMOVING_SOURCE": [129]},
      "stats": {
        "UPLOADING_FILES": {"matched_entries": 126, "done_entries": 123, "failed_entries": 3, "tmp_errors": 6},
        "PINNING_TAGS": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 0},
        "REMOVING_SOURCE": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 6}
      },
      "id": 0,
      "status": "STARTING",
      "state": {"name": "string", "display_name": "string", "is_running": true, "is_successful": false, "is_failed": false, "is_final": false},
      "creation_time": 1593093530.123456,
      "creation_time_hum": "2020-06-25 15:58:50",
      "end_time": 1593093600.123456,
      "end_time_hum": "2020-06-25 16:00:00",
      "duration": 70,
      "duration_hum": "1m10s",
      "created_by_id": 1,
      "created_by": {"system_id": 12, "username": "Alice"},
      "created_by_hum": "Alice (uid=12)"
    }
  ],
  "next_page_params": {
    "limit": 10,
    "sort_by": 1,
    "paging_offset": 51,
    "add_paging_params_to_response": true
  },
  "matched_archive_jobs_count": 70
}

| volume_id required | integer Volume id that archive jobs are related to |
[
  {
    "href": "/api/archive/job/123",
    "volume_and_path": "projects:dir1/dir2",
    "target_id": 0,
    "target_name": "string",
    "target_info": {"bucket_name": "destination-bucket", "dst_path": "common-subdir"},
    "query": "string",
    "archiving_options": {"migrate": true, "remove_empty_dirs": true},
    "low_level_jobs": {"UPLOADING_FILES": [123], "REMOVING_SOURCE": [129]},
    "stats": {
      "UPLOADING_FILES": {"matched_entries": 126, "done_entries": 123, "failed_entries": 3, "tmp_errors": 6},
      "PINNING_TAGS": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 0},
      "REMOVING_SOURCE": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 6}
    },
    "id": 0,
    "status": "STARTING",
    "state": {"name": "string", "display_name": "string", "is_running": true, "is_successful": false, "is_failed": false, "is_final": false},
    "creation_time": 1593093530.123456,
    "creation_time_hum": "2020-06-25 15:58:50",
    "end_time": 1593093600.123456,
    "end_time_hum": "2020-06-25 16:00:00",
    "duration": 70,
    "duration_hum": "1m10s",
    "created_by_id": 1,
    "created_by": {"system_id": 12, "username": "Alice"},
    "created_by_hum": "Alice (uid=12)"
  }
]

| archive_job_id required | integer ID of the archive job |
{
  "href": "/api/archive/job/123",
  "volume_and_path": "projects:dir1/dir2",
  "target_id": 0,
  "target_name": "string",
  "target_info": {"bucket_name": "destination-bucket", "dst_path": "common-subdir"},
  "query": "string",
  "archiving_options": {"migrate": true, "remove_empty_dirs": true},
  "low_level_jobs": {"UPLOADING_FILES": [123], "REMOVING_SOURCE": [129]},
  "stats": {
    "UPLOADING_FILES": {"matched_entries": 126, "done_entries": 123, "failed_entries": 3, "tmp_errors": 6},
    "PINNING_TAGS": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 0},
    "REMOVING_SOURCE": {"matched_entries": 123, "done_entries": 123, "failed_entries": 0, "tmp_errors": 6}
  },
  "id": 0,
  "status": "STARTING",
  "state": {"name": "string", "display_name": "string", "is_running": true, "is_successful": false, "is_failed": false, "is_final": false},
  "creation_time": 1593093530.123456,
  "creation_time_hum": "2020-06-25 15:58:50",
  "end_time": 1593093600.123456,
  "end_time_hum": "2020-06-25 16:00:00",
  "duration": 70,
  "duration_hum": "1m10s",
  "created_by_id": 1,
  "created_by": {"system_id": 12, "username": "Alice"},
  "created_by_hum": "Alice (uid=12)"
}

The file should consist of an arbitrary number of file/dir paths separated with \0. Paths should be relative to the archive job root_path. The entries_from_file argument passed in the archive job start request must be "paths". The job will start only after the entries file is uploaded, and the status will be set to timeout if the file is not uploaded or the upload takes too long. Note that SFIDs are not yet supported in this case.
| archive_job_id required | integer ID of the archive job |
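The \0-separated entries file described above can be built like this (a sketch; the paths are illustrative and must be relative to the job's root_path):

```python
def pack_entries(paths):
    """Encode a list of paths (relative to the archive job root_path)
    into the \\0-separated byte format expected by the entries file."""
    return b"\x00".join(p.encode("utf-8") for p in paths)

# The resulting bytes are uploaded as the entries file body:
payload = pack_entries(["dir1/file_a", "dir1/file_b", "dir2"])
```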
Returns the id of the created target
| name required | string |
| type required | string (target_type) Enum: "azure" "s3" "swift" "volume" Type of storage |
| dst_path | string Destination path under which files will be stored |
required | storage_azure (object) or storage_s3 (object) or storage_swift (object) or storage_volume (object) (one_of_storages) |
| verify | boolean Default: true If service should verify connection to object store |
{
  "verify": false
}

{
  "id": 71,
  "href": "/api/archive/target/71"
}

| obfuscate | boolean Default: true If enabled, fields like passwords or secret keys will be obfuscated in the response. (Behavior differs for backwards compatibility when the deprecated Basic HTTP authorization is used.) |
| confidential | boolean Default: false If enabled, fields that may contain confidential info will be obfuscated in the response. |
| name | string Default: "" Name filter. If provided, only the archive target with this exact name will be returned. |
[
  {
    "id": 71,
    "href": "/api/archive/target/71"
  }
]

| target_id required | integer ID of the archive target |
| obfuscate | boolean Default: true If enabled, fields like passwords or secret keys will be obfuscated in the response. (Behavior differs for backwards compatibility when the deprecated Basic HTTP authorization is used.) |
| confidential | boolean Default: false If enabled, fields that may contain confidential info will be obfuscated in the response. |

{
  "id": 71,
  "href": "/api/archive/target/71"
}

This call is dangerous, as changing storage parameters can make already archived data impossible to restore.
| target_id required | integer ID of the archive target |
| verify | boolean Default: true If service should verify connection to object store |
required | storage_azure (object) or storage_s3 (object) or storage_swift (object) or storage_volume (object) (one_of_storages) |

{
  "verify": true,
  "params": {
    "bucket_name": "new_bucket_name_123"
  }
}

{
  "id": 71,
  "href": "/api/archive/target/71"
}

Deletes an archive target. Archive jobs and job results related to the specified target are also deleted. Data archived to the deleted archive target is not deleted, but is no longer accessible through Starfish (data archived to a volume is an exception: it remains restorable as long as the volume exists).
| target_id required | integer ID of the archive target |
| username | string Username that exists in the Starfish DB. If specified, only matches tokens for this user. |
| show_auto_generated | boolean Default: false If true, also displays auto-generated tokens (not displayed by default). |
| show_expired | boolean Default: false If true, also displays expired tokens (not displayed by default). |
[
  {
    "auto_generated": false,
    "creation_time": 1656979234,
    "creation_time_hum": "2022-07-05 02:00:34",
    "description": "alice's api key",
    "id": 62,
    "public_key": "FGzQiO4leR",
    "user": {"system_id": "1234", "username": "alice"},
    "valid_until": 1703977203,
    "valid_until_hum": "2023-12-31 00:00:03"
  }
]

Credentials are verified against PAM. In order to get access, the user has to be a member of a group specified by the auth.user_group_name or auth.admin_group_name config property (default: starfish-users and starfish, respectively). Members of the latter group are given superuser access to Starfish.
The token is valid for 16 hours by default.
The validity period can be configured with the auth.auth_token_timeout_secs config option.
| username required | string |
| password required | string |
| token_timeout_secs | integer |
| token_description | string |
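A sketch of building the request body from the fields above (only the field names come from the table; the endpoint path and HTTP client are whatever your deployment uses):

```python
import json

def login_payload(username, password, token_timeout_secs=None, token_description=None):
    """Build the JSON body for the token-obtaining auth request.
    Optional fields are omitted when not given."""
    body = {"username": username, "password": password}
    if token_timeout_secs is not None:
        body["token_timeout_secs"] = token_timeout_secs
    if token_description is not None:
        body["token_description"] = token_description
    return json.dumps(body)

# POST this body to the auth endpoint; the response contains the token.
payload = login_payload("alice", "s3cret", token_timeout_secs=3600)
```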
{
  "username": "string",
  "password": "string",
  "token_timeout_secs": 0,
  "token_description": "string"
}

{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9",
  "superuser": true,
  "zone_manager": true
}

| token_id required | string Example: FGzQiO4leR Token ID / public key |
{
  "auto_generated": false,
  "creation_time": 1656979234,
  "creation_time_hum": "2022-07-05 02:00:34",
  "description": "alice's api key",
  "id": 62,
  "public_key": "FGzQiO4leR",
  "user": {"system_id": "1234", "username": "alice"},
  "valid_until": 1703977203,
  "valid_until_hum": "2023-12-31 00:00:03"
}

| token_id required | string Example: FGzQiO4leR Token ID / public key |
| valid_until | number <float> |

{
  "valid_until": 1656949329.642527
}

{
  "valid_until": 1656949329.642527
}

JSON Patch is described in RFC 6902 (https://datatracker.ietf.org/doc/html/rfc6902/).
The copy and move operations have not been implemented.
A JSON Patch request body is a JSON document that represents an array of objects. Each object represents a single operation to be applied to the target JSON document.
| user_param_name required | string Example: favourite_paths User parameter name. |
| op | string Operation, one of 'replace', 'add', 'remove' or 'test' |
| path | string A string containing a JSON-Pointer value, described in RFC 6901 (https://datatracker.ietf.org/doc/html/rfc6901/), that references a location within the target document (the "target location") where the operation is performed. Allowed characters are restricted to upper- and lower-case letters, numbers, and the underscore. The single character "-" is used to append new elements to a JSON array. |
| value | object Object relevant to json path |
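To make the semantics of the four supported operations concrete, here is an illustrative pure-Python applier. This is a sketch for understanding only, not the server's implementation, and it handles just the operations listed above:

```python
import copy

def apply_json_patch(doc, ops):
    """Apply a subset of RFC 6902 (replace, add, remove, test).
    Paths are '/'-separated tokens; '-' appends to an array; '' is the whole doc."""
    doc = copy.deepcopy(doc)
    for op in ops:
        tokens = [t for t in op["path"].split("/") if t != ""]
        if not tokens:  # path "" refers to the whole document
            if op["op"] in ("add", "replace"):
                doc = copy.deepcopy(op["value"])
            elif op["op"] == "test" and doc != op["value"]:
                raise ValueError("test failed at path ''")
            continue
        parent = doc
        for t in tokens[:-1]:  # walk to the container holding the target
            parent = parent[int(t)] if isinstance(parent, list) else parent[t]
        key = tokens[-1]
        if isinstance(parent, list):
            if op["op"] == "add":
                parent.append(op["value"]) if key == "-" else parent.insert(int(key), op["value"])
            elif op["op"] == "remove":
                parent.pop(int(key))
            elif op["op"] == "replace":
                parent[int(key)] = op["value"]
            elif op["op"] == "test" and parent[int(key)] != op["value"]:
                raise ValueError(f"test failed at {op['path']}")
        else:
            if op["op"] in ("add", "replace"):
                parent[key] = op["value"]
            elif op["op"] == "remove":
                del parent[key]
            elif op["op"] == "test" and parent[key] != op["value"]:
                raise ValueError(f"test failed at {op['path']}")
    return doc
```

Running the example request body from this section through this function reproduces the documented behavior: the replace sets the whole value, /0/zones/- appends, /0/zones/0 removes the first zone, and the final test verifies the result.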
[
  {"op": "replace", "path": "", "value": [{"path": "vol:dir", "zones": [1, 2]}]},
  {"op": "add", "path": "/0/zones/-", "value": 3},
  {"op": "remove", "path": "/0/zones/0"},
  {"op": "test", "path": "", "value": [{"path": "vol:dir", "zones": [2, 3]}]}
]

{
  "name": "favourite_paths",
  "value": {"path": "vol:path", "zones": [1, 2]}
}

JSON Schema is the vocabulary that enables JSON data consistency, validity, and interoperability (https://json-schema.org/).
| user_param_name required | string Example: favourite_paths User parameter name. |
{
  "name": "gui_general",
  "json_schema": {
    "contentMediaType": "application/json",
    "contentSchema": { },
    "title": "GuiGeneral",
    "type": "string"
  }
}

| obfuscate | boolean Default: true If enabled, fields like passwords or secret keys will be obfuscated in the response. (Behavior differs for backwards compatibility when the deprecated Basic HTTP authorization is used.) |
| do_not_obfuscate_fields | string Default: "loki_password" Example: do_not_obfuscate_fields=loki_password Comma-separated list of fields to exclude from obfuscation. If the user has no permission to obtain a listed field, it will be obfuscated anyway. |
| confidential | boolean Default: false If enabled, fields that may contain confidential info will be obfuscated in the response. |
{
  "secret_key": "password",
  "internal_logrotate_backup_days_count": 2,
  "temp": { },
  "agent": {
    "autodiscovery_on_start": false,
    "update_disk_usage_on_start": true,
    "update_mount_opts_on_start": true,
    "exclude_volumes": "/mnt/disk1,/mnt/disk2",
    "exclude_fstypes": "proc,sysfs,nfsd,rootfs,ramfs,initramfs,mqueue,hugetlbfs,selinuxfs,configfs,autofs, fuse.lxcfs,bdev,cpuset,sockfs,usbfs,pipefs,anon_inodefs,inotifyfs,tracefs",
    "exclude_dirs": ".snapshot*,~snapshot*,.zfs,.isilon,.ifsvar",
    "autodiscovery_roots": "/",
    "autodiscovery_timeout": 1,
    "enable_serve_file": false,
    "initial_scan": true,
    "run_init_volume": true,
    "default_cost_per_gb": 0.0244
  },
  "crawler": {
    "chunk_size": 3145728,
    "periodic_flush_interval_sec": 300,
    "queue_wait_interval": 10,
    "progress_log_interval": 60,
    "max_queue_size": 10000
  },
  "config": { },
  "volumes": { },
  "scans": { },
  "dispatcher": { },
  "cron": { },
  "gateway": {"use_x_sendfile": false},
  "client": {"http_timeout": 7, "http_max_retries": 3},
  "archive": { },
  "archive:alt_metadata:XXX": {
    "custom-oct-mode": "{{ \"0o{:o}\".format(stat.st_mode % 512) }}",
    "custom-uid": "{{stat.st_uid}}",
    "afm-atime": "{{stat.st_atime_ns | strftime_ns(\"%Y-%m-%dT%H:%M:%SZ\")}}"
  },
  "auth": {"pam_service_file": "starfish"},
  "pg": { },
  "pgloader": { },
  "GUI": { },
  "templates": {
    "hash": "job start \"hasher --algorithm md5 sha1\"",
    "hash-quick": "job start --job-name hash-quick --size 100K-100P \"hasher --quick\"",
    "mtime": "scan start --type mtime",
    "diff": "scan start --type diff",
    "sync": "scan start --type sync"
  },
  "log_level": "DEBUG"
}

{
  "auth": {
    "bind_port": 30013,
    "port": 30013,
    "saml_enabled": true
  }
}

Excludes are divided into a list of directory name patterns and a list of file name patterns.
| volume_name required | string name of volume |
| path required | string |
| dir_excludes.add | Array of strings list of name patterns which should be added to list of blocked directory names |
| dir_excludes.set | Array of strings list of name patterns which should overwrite current list of blocked directory names |
| dir_excludes.delete | Array of strings list of name patterns which should be removed from list of blocked directory names |
| file_excludes.add | Array of strings list of name patterns which should be added to list of blocked file names |
| file_excludes.set | Array of strings list of name patterns which should overwrite current list of blocked file names |
| file_excludes.delete | Array of strings list of name patterns which should be removed from list of blocked file names |
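A sketch of assembling a partial update body from these fields; only the variants you pass are included, so unrelated lists are left untouched (the pattern strings are illustrative):

```python
def excludes_update(dir_add=(), dir_delete=(), file_set=None):
    """Build a partial excludes-update body from the dotted field names
    documented above."""
    body = {}
    if dir_add:
        body["dir_excludes.add"] = list(dir_add)
    if dir_delete:
        body["dir_excludes.delete"] = list(dir_delete)
    if file_set is not None:
        body["file_excludes.set"] = list(file_set)
    return body

payload = excludes_update(dir_add=[".snapshot*"], file_set=["*.tmp"])
```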
{
  "dir_excludes.add": ["string"],
  "dir_excludes.set": ["string"],
  "dir_excludes.delete": ["string"],
  "file_excludes.add": ["string"],
  "file_excludes.set": ["string"],
  "file_excludes.delete": ["string"]
}

{ }

Job and first incarnation definition
object (incarnation) | |
object (job.input) |
{
  "incarnation": {
    "cmd_line": ["sf", "job", "start", "--no-prescan", "echo %PATH%", "home:"],
    "from_scratch": false,
    "prescan_enabled": false,
    "created_by_id": 1
  },
  "job": {
    "name": "string",
    "allow_overlapping_job": false,
    "batch_per_dir": false,
    "batch_fields": {"jobs.hash.result.md5": "md5", "jobs.hash.mt": "md5_mtime"},
    "cmd_output_format": "text",
    "command": ["string"],
    "entries_from_file": "paths",
    "generate_manifest": false,
    "ignore_results": false,
    "post_verification": true,
    "pre_verification": true,
    "prescan_type": "diff",
    "query_str": "ext jpg not uid 5",
    "path_passing_method": "arg",
    "requested_by": "string",
    "root_path": "string",
    "volume": "string",
    "agent_fail_fast": true,
    "agent_fail_fast_min_batches": 100,
    "agent_fail_fast_threshold": 100,
    "snapshot_glob": "string",
    "sort_by": ["ino"],
    "group_by": ["ino"],
    "retry_entries": "grouped",
    "limit": 0,
    "options": {
      "archive_volume_id": 0,
      "dst_path": "string",
      "mounted_src_volume_required": true,
      "archive_target_id": 78,
      "dst_volume": "string",
      "dst_volume_id": 0
    }
  }
}

{
  "id": 1,
  "long_id": "j_cmd_generating_f_20181010_1052_3",
  "heartbeat": 1593093530.123456,
  "ended_at": 1593093530.123456,
  "started_at": 1593093421.123456,
  "created_at": 1593093420.123456,
  "duration": 10,
  "duration_hum": "1m10s",
  "avg_file_size": 8191.8,
  "avg_file_size_hum": "8K",
  "bandwidth": "string",
  "batches_in_progress": [
    {
      "alive_timestamp": 1631182228.0658422,
      "batch_size_bytes": 18326979,
      "batch_size_bytes_hum": "17,5MiB",
      "items_in_batch": 1000,
      "pid": 1312,
      "started_at": 1589205125.19,
      "started_at_hum": "2020-05-11 15:52:05 +0200"
    }
  ],
  "incarnations": [
    {
      "progress_history": [
        {
          "stats_time": 1589204901,
          "stats": {"fs_bytes_done": 69870585, "fs_entries_pushed": 200, "fs_entries_done": 7373, "max_workers_number": 3, "fs_entries_error_stats": { }, "fs_entries_failed": 50, "fs_entries_temp_error": 250, "fs_entries_timedout": 1, "fs_entries_unprocessed": 0}
        }
      ],
      "created_by_hum": "Alice (uid=12)",
      "created_by": {"system_id": 12, "username": "Alice"},
      "cmd_line": ["sf", "job", "start", "--no-prescan", "echo %PATH%", "home:"],
      "from_scratch": false,
      "prescan_enabled": false,
      "created_by_id": 1
    }
  ],
  "options": {
    "archive_target_name": "string",
    "dst_allow_empty_dir": true,
    "archive_target_id": 78,
    "dst_path": "string",
    "dst_volume": "string",
    "dst_volume_id": 0
  },
  "status": "starting",
  "state": {"name": "string", "display_name": "string", "is_running": true, "is_successful": false, "is_failed": false, "is_final": false},
  "runtime": 0,
  "runtime_hum": "string",
  "est_total_bytes": 0,
  "est_total_entries_num": 0,
  "last_dispatcher_operation": "string",
  "pause_interval": 0,
  "reason_code": "string",
  "reason_msg": "string",
  "tag_results_with": "string",
  "volume_id": 0,
  "rerun_from_job_id": 0,
  "query": [["string"]],
  "throttle_interval": 0,
  "retry_count": 0,
  "throttle_cmd": "string",
  "prefetch_size": 0,
  "prefetch_parallel_threads": 0,
  "items_in_batch": 0,
  "batch_size_bytes": 0,
  "prescan_id": 0,
  "pre_verify_ctime": true,
  "post_verify_ctime": true,
  "manifest_status": "not-requested",
  "manifest_query_ids": ["string"],
  "manifest_loc": "string",
  "cmd_name": "string",
  "current_incarnation": {
    "progress_history": [
      {
        "stats_time": 1589204901,
        "stats": {"fs_bytes_done": 69870585, "fs_entries_pushed": 200, "fs_entries_done": 7373, "max_workers_number": 3, "fs_entries_error_stats": { }, "fs_entries_failed": 50, "fs_entries_temp_error": 250, "fs_entries_timedout": 1, "fs_entries_unprocessed": 0}
      }
    ],
    "created_by_hum": "Alice (uid=12)",
    "created_by": {"system_id": 12, "username": "Alice"},
    "cmd_line": ["sf", "job", "start", "--no-prescan", "echo %PATH%", "home:"],
    "from_scratch": false,
    "prescan_enabled": false,
    "created_by_id": 1
  },
  "first_incarnation": {
    "progress_history": [
      {
        "stats_time": 1589204901,
        "stats": {"fs_bytes_done": 69870585, "fs_entries_pushed": 200, "fs_entries_done": 7373, "max_workers_number": 3, "fs_entries_error_stats": { }, "fs_entries_failed": 50, "fs_entries_temp_error": 250, "fs_entries_timedout": 1, "fs_entries_unprocessed": 0}
      }
    ],
    "created_by_hum": "Alice (uid=12)",
    "created_by": {"system_id": 12, "username": "Alice"},
    "cmd_line": ["sf", "job", "start", "--no-prescan", "echo %PATH%", "home:"],
    "from_scratch": false,
    "prescan_enabled": false,
    "created_by_id": 1
  },
  "fs_bytes_done": 69870585,
  "fs_entries_pushed": 200,
  "fs_entries_done": 150,
  "fs_entries_failed": 50,
  "fs_entries_temp_error": 250,
  "fs_entries_timedout": 1,
  "fs_entries_unprocessed": 0,
  "bandwidth_seconds": 16399744.619680852,
  "source": "home:user/projects",
  "snapshot": "string",
  "name": "string",
  "allow_overlapping_job": false,
  "batch_per_dir": false,
  "batch_fields": {"jobs.hash.result.md5": "md5", "jobs.hash.mt": "md5_mtime"},
  "cmd_output_format": "text",
  "command": ["string"],
  "entries_from_file": "paths",
  "generate_manifest": false,
  "ignore_results": false,
  "post_verification": true,
  "pre_verification": true,
  "prescan_type": "diff",
  "query_str": "ext jpg not uid 5",
  "path_passing_method": "arg",
  "requested_by": "string",
  "root_path": "string",
  "volume": "string",
  "agent_fail_fast": true,
  "agent_fail_fast_min_batches": 100,
  "agent_fail_fast_threshold": 100,
  "snapshot_glob": "string",
  "sort_by": ["ino"],
  "group_by": ["ino"],
  "retry_entries": "grouped",
  "limit": 0
}

| name | string List only jobs with the specified job name. If the name contains * or ?, then all jobs matching the basic shell glob are returned (* matches any substring, ? matches a single char). |
| status | Array of strings Job status(es), either a list of statuses or a single status as a string. Cannot be used together with |
| running | boolean if set to |
| requested_by | Array of strings created by a given entity, either a list of entities or a single entity as a string. 'archive' ('restore') returns all low-level jobs started by archive (restore) jobs. To return low-level jobs started by a specific archive (restore) job with id=4, 'archive#4' ('restore#4') should be passed. Cannot be used together with |
| requested_by_archive_or_restore | boolean if set to |
| created_at | string Supports FROM-TO and RELATIVE formats; FROM-TO: '# hour|day|week|month|year(s) ago' or 'YYYYMMDD[HHMM[SS]]' or 'now' or 'inf', for example:
RELATIVE: '[+|-]N[y|m|w|d|h]', meaning a number of years, months, weeks, days (default) or hours, for example:
|
| ended_at | string the same as |
| long_id | Array of strings long id of the job |
| num_id | Array of integers numeric id of the job |
string or Array of strings source volume(s) name(s) | |
| src_volume_id | integer ID of job's source volume |
| root_path | string root path on source volume |
| sort_by | string Enum: "cmd_line" "command" "created_at" "current_incarnation_id" "dst_volume_id" "estimated_total_bytes" "estimated_total_entries_num" "heartbeat" "id" "incarnation_id" "name" "reason_code" "reason_msg" "requested_by" "started_at" "status" "tag_entry_failure_count" "volume_id" "volume_name" Example: sort_by=created_at -name Sort by given fields. Multiple fields should be separated with some whitespace or comma. Each field could be prefixed with '+' or '-' to sort ascending or descending (default is ascending). By default results are sorted by 'created_at' but the limit is applied descending. If limit is also specified, results are sorted first and then the limit is applied. |
| limit | integer Maximum number of returned jobs |
| paging_offset | integer Parameter that describes paging offset. It should be equal to number of entries that have been already printed on the previous pages. For example:
With paged result comes field |
| add_paging_params_to_response | boolean Default: false A flag specifying whether to include paging params in response. |
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
| created_by_username | Array of strings Only jobs created by a user with the given username will be taken into account. The request may specify more than one name. |
| created_by_uid | Array of strings Only jobs created by a user with the given UID will be taken into account. The request may specify more than one user id. |
| force_cache_reload | boolean forces cached volumes to be reloaded |
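As a client-side illustration of the documented `sort_by` syntax (fields separated by whitespace or commas, each optionally prefixed with '+' or '-'), here is a minimal sketch; the parser and its name are ours, not part of the API:

```python
import re

def parse_sort_by(sort_by: str) -> list[tuple[str, str]]:
    """Parse a sort_by string into (field, direction) pairs.

    '+' or no prefix means ascending (the default), '-' means descending.
    """
    pairs = []
    for token in re.split(r"[,\s]+", sort_by.strip()):
        if not token:
            continue
        direction = "desc" if token[0] == "-" else "asc"
        field = token.lstrip("+-")
        pairs.append((field, direction))
    return pairs

# The example from the table above, "sort_by=created_at -name":
print(parse_sort_by("created_at -name"))
# [('created_at', 'asc'), ('name', 'desc')]
```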
{- "jobs": [
- {
- "id": 1,
- "long_id": "j_cmd_generating_f_20181010_1052_3",
- "heartbeat": 1593093530.123456,
- "ended_at": 1593093530.123456,
- "started_at": 1593093421.123456,
- "created_at": 1593093420.123456,
- "duration": 10,
- "duration_hum": "1m10s",
- "avg_file_size": 8191.8,
- "avg_file_size_hum": "8K",
- "bandwidth": "string",
- "batches_in_progress": [
- {
- "alive_timestamp": 1631182228.0658422,
- "batch_size_bytes": 18326979,
- "batch_size_bytes_hum": "17,5MiB",
- "items_in_batch": 1000,
- "pid": 1312,
- "started_at": 1589205125.19,
- "started_at_hum": "2020-05-11 15:52:05 +0200"
}
], - "incarnations": [
- {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}
], - "options": {
- "archive_target_name": "string",
- "dst_allow_empty_dir": true,
- "archive_target_id": 78,
- "dst_path": "string",
- "dst_volume": "string",
- "dst_volume_id": 0
}, - "status": "starting",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "runtime": 0,
- "runtime_hum": "string",
- "est_total_bytes": 0,
- "est_total_entries_num": 0,
- "last_dispatcher_operation": "string",
- "pause_interval": 0,
- "reason_code": "string",
- "reason_msg": "string",
- "tag_results_with": "string",
- "volume_id": 0,
- "rerun_from_job_id": 0,
- "query": [
- [
- "string"
]
], - "throttle_interval": 0,
- "retry_count": 0,
- "throttle_cmd": "string",
- "prefetch_size": 0,
- "prefetch_parallel_threads": 0,
- "items_in_batch": 0,
- "batch_size_bytes": 0,
- "prescan_id": 0,
- "pre_verify_ctime": true,
- "post_verify_ctime": true,
- "manifest_status": "not-requested",
- "manifest_query_ids": [
- "string"
], - "manifest_loc": "string",
- "cmd_name": "string",
- "current_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "first_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 150,
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0,
- "bandwidth_seconds": 16399744.619680852,
- "source": "home:user/projects",
- "snapshot": "string",
- "name": "string",
- "allow_overlapping_job": false,
- "batch_per_dir": false,
- "batch_fields": {
- "jobs.hash.result.md5": "md5",
- "jobs.hash.mt": "md5_mtime"
}, - "cmd_output_format": "text",
- "command": [
- "string"
], - "entries_from_file": "paths",
- "generate_manifest": false,
- "ignore_results": false,
- "post_verification": true,
- "pre_verification": true,
- "prescan_type": "diff",
- "query_str": "ext jpg not uid 5",
- "path_passing_method": "arg",
- "requested_by": "string",
- "root_path": "string",
- "volume": "string",
- "agent_fail_fast": true,
- "agent_fail_fast_min_batches": 100,
- "agent_fail_fast_threshold": 100,
- "snapshot_glob": "string",
- "sort_by": [
- "ino"
], - "group_by": [
- "ino"
], - "retry_entries": "grouped",
- "limit": 0
}
], - "next_page_params": {
- "limit": 10,
- "sort_by": 1,
- "paging_offset": 51,
- "add_paging_params_to_response": true
}, - "matched_jobs_count": 70
}
| target_id | integer ID of the archive target |
| src_volume_id | integer ID of job's source volume |
| dst_volume_id | integer ID of job's destination volume |
{ }
| job_id required | string Example: j_hash_20170821_1254_117 Id of the job |
{- "id": 1,
- "long_id": "j_cmd_generating_f_20181010_1052_3",
- "heartbeat": 1593093530.123456,
- "ended_at": 1593093530.123456,
- "started_at": 1593093421.123456,
- "created_at": 1593093420.123456,
- "duration": 10,
- "duration_hum": "1m10s",
- "avg_file_size": 8191.8,
- "avg_file_size_hum": "8K",
- "bandwidth": "string",
- "batches_in_progress": [
- {
- "alive_timestamp": 1631182228.0658422,
- "batch_size_bytes": 18326979,
- "batch_size_bytes_hum": "17,5MiB",
- "items_in_batch": 1000,
- "pid": 1312,
- "started_at": 1589205125.19,
- "started_at_hum": "2020-05-11 15:52:05 +0200"
}
], - "incarnations": [
- {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}
], - "options": {
- "archive_target_name": "string",
- "dst_allow_empty_dir": true,
- "archive_target_id": 78,
- "dst_path": "string",
- "dst_volume": "string",
- "dst_volume_id": 0
}, - "status": "starting",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "runtime": 0,
- "runtime_hum": "string",
- "est_total_bytes": 0,
- "est_total_entries_num": 0,
- "last_dispatcher_operation": "string",
- "pause_interval": 0,
- "reason_code": "string",
- "reason_msg": "string",
- "tag_results_with": "string",
- "volume_id": 0,
- "rerun_from_job_id": 0,
- "query": [
- [
- "string"
]
], - "throttle_interval": 0,
- "retry_count": 0,
- "throttle_cmd": "string",
- "prefetch_size": 0,
- "prefetch_parallel_threads": 0,
- "items_in_batch": 0,
- "batch_size_bytes": 0,
- "prescan_id": 0,
- "pre_verify_ctime": true,
- "post_verify_ctime": true,
- "manifest_status": "not-requested",
- "manifest_query_ids": [
- "string"
], - "manifest_loc": "string",
- "cmd_name": "string",
- "current_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "first_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 150,
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0,
- "bandwidth_seconds": 16399744.619680852,
- "source": "home:user/projects",
- "snapshot": "string",
- "name": "string",
- "allow_overlapping_job": false,
- "batch_per_dir": false,
- "batch_fields": {
- "jobs.hash.result.md5": "md5",
- "jobs.hash.mt": "md5_mtime"
}, - "cmd_output_format": "text",
- "command": [
- "string"
], - "entries_from_file": "paths",
- "generate_manifest": false,
- "ignore_results": false,
- "post_verification": true,
- "pre_verification": true,
- "prescan_type": "diff",
- "query_str": "ext jpg not uid 5",
- "path_passing_method": "arg",
- "requested_by": "string",
- "root_path": "string",
- "volume": "string",
- "agent_fail_fast": true,
- "agent_fail_fast_min_batches": 100,
- "agent_fail_fast_threshold": 100,
- "snapshot_glob": "string",
- "sort_by": [
- "ino"
], - "group_by": [
- "ino"
], - "retry_entries": "grouped",
- "limit": 0
}
| job_id required | string Example: j_hash_20170821_1254_117 Id of the job |
| remove_agents | Array of strings |
| id | integer Job id generated by DB |
| long_id | string More verbose version of job id. |
| heartbeat | number Job last update timestamp |
| ended_at | number job's |
| started_at | number or null equal to |
| created_at | number equal to first incarnation's |
| duration | integer difference between |
Array of strings or integers[ items ] Number of workers to be run on an agent that can run that job. Each element is a pair of a worker name and a number. The value for the default worker is declared explicitly. | |
| duration_hum | string job duration as a humanized string, '-' if job is still running |
| avg_file_size | number Average size of completed entries when there is at least one completed entry, 0 otherwise. |
| avg_file_size_hum | string Human readable form of avg_file_size. It translates 8192 to '8K' and 1048576 to '1M' |
| bandwidth | string Computed average bandwidth of running job. |
Array of objects (batch_in_progress) | |
Array of objects (incarnation.output) | |
object (job.options.response) | |
| status | string Enum: "starting" "resuming" "tagging" "prescan" "preparing" "in_progress" "stopping" "done" "timeout" "stopped" "failed" job status |
object (job_state) state for the corresponding status | |
| runtime | integer difference between |
| runtime_hum | string Human readable form of runtime |
| est_total_bytes | number or null Estimated number of bytes (sum of file sizes) to be processed by the job. |
| est_total_entries_num | number or null Estimated number of files to be processed by job. |
| last_dispatcher_operation | string Last operation executed by the dispatcher service for this job. |
| pause_interval | number Total number of seconds job was paused. |
| reason_code | string or null Reason why the job has failed. |
| reason_msg | string or null Human readable message associated with job failure. |
| tag_results_with | string or null Name of tag to be assigned to successful entries. |
| volume_id | number Id of source volume. |
| rerun_from_job_id | number or null Id of rerun job (started with "sf job rerun"). NULL if job was not rerun. |
| query | Array of strings[ items ] Parsed query that was used to run this job. |
| throttle_interval | number Number of seconds the job was throttled for. |
| retry_count | number How many times a single entry should be retried. A value of 1 means the command will be executed up to 2 times per entry. |
| throttle_cmd | string or null Command used to check if job should be throttled. By default jobs are not throttled. |
| prefetch_size | number Number of bytes to prefetch from disk before executing a command. |
| prefetch_parallel_threads | number How many threads will be used for prefetching. |
| items_in_batch | number Maximum number of files and directories in batch, set by "--batch-size-entries" cli option. |
| batch_size_bytes | number Maximum size of batch in bytes, set by "--batch-size-bytes" cli option. |
| prescan_id | number or null Id of a scan that was run as prescan for this job or null if there was no prescan. |
| pre_verify_ctime | boolean or null Set to False if job ignores ctime of entries during pre verification. |
| post_verify_ctime | boolean or null Set to False if job ignores ctime of entries during post verification. |
| manifest_status | string Enum: "not-requested" "generating" "uploading" "upload-failed" "failed" "done" Status of job manifest. By default job manifests are not generated in which case the value is "not-requested". |
| manifest_query_ids | Array of strings Ids of queries run in order to generate job manifests. |
| manifest_loc | string or null Location where manifest file will be stored on agent running the job. |
| cmd_name | string First part of command job parameter. |
object (incarnation.output) | |
object (incarnation.output) | |
| fs_bytes_done | number Total number of bytes done for given job |
| fs_entries_pushed | number Total number of entries pushed to the job entry queue. |
| fs_entries_done | number Total number of files and dirs successfully processed for given job |
| fs_entries_failed | number Total number of files and dirs failed for given job. |
| fs_entries_temp_error | number Total number of errors encountered while processing the job. There can be multiple errors per entry if retries are enabled. |
| fs_entries_timedout | number Total number of entries that have timed out. |
| fs_entries_unprocessed | number Total number of entries that have not been processed. This may happen when the job was stopped while in progress. |
| bandwidth_seconds | number Time in seconds used to calculate job bandwidth. |
| source | string Source volpath for this job. |
| snapshot | string Snapshot path selected for a job. If snapshot_glob contains pattern that matches many directories, then this field contains the one selected for the job. |
| name required | string Jobs with the same name will be run only on changed entries unless --from-scratch is given. When --from-scratch is given, new results will override results from the previous job with the same name. Entries can be queried based on job names. Every job must have a name. The job name can be provided in the command config, in which case there is no need to provide it via the API. |
| allow_overlapping_job | boolean Default: false When starting a job, the dispatcher checks that no other job with a common subdirectory is running. This option disables that verification. It is highly recommended to also disable prescan, as the job will fail if the prescan fails with an overlapping-scan error. |
| batch_per_dir | boolean Default: false When enabled, the dispatcher creates one batch per directory containing all of its entries (in that case 'batch_size_entries' and 'batch_size_bytes' are ignored). When using batch_per_dir the user is not allowed to add query filters, to ensure that the worker receives all entries from a directory. This option is used by jobs that create tar archives. It is mutually exclusive with the ignore_results option, as combining both would result in running the job n times, where n is the number of retries of the job. |
| batch_fields | object This field contains mapping dictionary of params which should be
read from FsEntry and passed to command. For example value
|
| cmd_output_format | string Enum: "text" "json" |
| command required | Array of strings |
| entries_from_file | string or null (entries_from_file_enum) Enum: "paths" "sfids" Determines the type of entries passed in the file. Supported by the restore job; other jobs expect paths only. |
| generate_manifest | boolean or null Generate a manifest file for a job. This value overwrites settings per command and global default set by dispatcher.generate_manifest. |
| ignore_results | boolean Default: false When a job is started with this option, its generated job results are not stored in the database.
This option is mutually exclusive with |
| post_verification | boolean Default: true Turns on verification that the entry on which the command was executed is the same version it was before the command ran. When enabled, prevents attaching job results to an entry that has changed since the command was executed on it. |
| pre_verification | boolean Default: true Turns on verification that the entry on which the command is about to be executed is the same version as recorded in the database. When enabled, prevents running the command on an entry whose version on the filesystem differs from the version in the database. |
| prescan_type | string Enum: "diff" "mtime" "sync" |
| query_str | string |
| path_passing_method | string Enum: "arg" "stdin" "stdin_json" The way the command receives entries:
|
| requested_by | string |
| root_path required | string |
| volume required | string |
| agent_fail_fast | boolean or null Whether the job should fail before processing all entries when some entries fail. |
| agent_fail_fast_min_batches | number or null How many entry batches must fail before the whole job is marked as failed and aborted. |
| agent_fail_fast_threshold | number or null What percentage of entries must fail before the whole job is marked as failed and aborted. |
| snapshot_glob | string or null Path to the snapshot, relative to the volume root; may contain '*' or '.' that will be expanded. If many snapshots match the pattern, the last one in alphabetical order is used. Note that prescan is disabled by default when using this option. |
| sort_by | Array of strings Items Enum: "ino" "parent_id" "parent_path" "fn" "ext" "depth" "mode" "uid" "gid" "username" "groupname" "usersid" "groupsid" "ct" "mt" "at" "size" "blck" "volume_id" "volume" "jobs" "fs" List of columns that will be used to sort entries. To sort by json field (e.g. job result or custom argument) column can be defined with comma usage, for example "jobs.hash.result.md5" or "fs.win.dacl". |
| group_by | Array of strings Items Enum: "ino" "parent_id" "parent_path" "fn" "ext" "depth" "mode" "uid" "gid" "username" "groupname" "usersid" "groupsid" "ct" "mt" "at" "size" "blck" "volume_id" "volume" "jobs" "fs" The set of fields for which entries will be grouped into a single batch. For example, if group_by is set to ['ino'], all entries with the same inode will be processed in the same batch. If used together with sort_by then needs to be a prefix of the sort_by parameter. |
| retry_entries | string Enum: "grouped" "only_failed" "whole_batch" Decides how entries are retried.
|
| limit | integer or null Limits the entries processed by the job to N elements. NOTE: causes non-deterministic order when run multiple times (due to the nature of the underlying database). |
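As a rough illustration of how humanized fields such as `avg_file_size_hum` relate to their numeric counterparts (the table above documents 8192 -> '8K' and 1048576 -> '1M'), here is a minimal sketch; the exact server-side formatting may differ:

```python
def humanize_size(n: float) -> str:
    """Approximate the documented humanized size format (1024-based units)."""
    for unit in ("", "K", "M", "G", "T"):
        if abs(n) < 1024:
            # Drop a trailing '.0' so 8.0 renders as '8', matching '8K'
            s = f"{n:.1f}".rstrip("0").rstrip(".")
            return f"{s}{unit}"
        n /= 1024
    return f"{n:.1f}P"

print(humanize_size(8191.8))   # '8K', as in the avg_file_size example above
print(humanize_size(1048576))  # '1M'
```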
{- "remove_agents": [
- "string"
], - "id": 1,
- "long_id": "j_cmd_generating_f_20181010_1052_3",
- "heartbeat": 1593093530.123456,
- "ended_at": 1593093530.123456,
- "started_at": 1593093421.123456,
- "created_at": 1593093420.123456,
- "duration": 10,
- "duration_hum": "1m10s",
- "avg_file_size": 8191.8,
- "avg_file_size_hum": "8K",
- "bandwidth": "string",
- "batches_in_progress": [
- {
- "alive_timestamp": 1631182228.0658422,
- "batch_size_bytes": 18326979,
- "batch_size_bytes_hum": "17,5MiB",
- "items_in_batch": 1000,
- "pid": 1312,
- "started_at": 1589205125.19,
- "started_at_hum": "2020-05-11 15:52:05 +0200"
}
], - "incarnations": [
- {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}
], - "options": {
- "archive_target_name": "string",
- "dst_allow_empty_dir": true,
- "archive_target_id": 78,
- "dst_path": "string",
- "dst_volume": "string",
- "dst_volume_id": 0
}, - "status": "starting",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "runtime": 0,
- "runtime_hum": "string",
- "est_total_bytes": 0,
- "est_total_entries_num": 0,
- "last_dispatcher_operation": "string",
- "pause_interval": 0,
- "reason_code": "string",
- "reason_msg": "string",
- "tag_results_with": "string",
- "volume_id": 0,
- "rerun_from_job_id": 0,
- "query": [
- [
- "string"
]
], - "throttle_interval": 0,
- "retry_count": 0,
- "throttle_cmd": "string",
- "prefetch_size": 0,
- "prefetch_parallel_threads": 0,
- "items_in_batch": 0,
- "batch_size_bytes": 0,
- "prescan_id": 0,
- "pre_verify_ctime": true,
- "post_verify_ctime": true,
- "manifest_status": "not-requested",
- "manifest_query_ids": [
- "string"
], - "manifest_loc": "string",
- "cmd_name": "string",
- "current_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "first_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 150,
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0,
- "bandwidth_seconds": 16399744.619680852,
- "source": "home:user/projects",
- "snapshot": "string",
- "name": "string",
- "allow_overlapping_job": false,
- "batch_per_dir": false,
- "batch_fields": {
- "jobs.hash.result.md5": "md5",
- "jobs.hash.mt": "md5_mtime"
}, - "cmd_output_format": "text",
- "command": [
- "string"
], - "entries_from_file": "paths",
- "generate_manifest": false,
- "ignore_results": false,
- "post_verification": true,
- "pre_verification": true,
- "prescan_type": "diff",
- "query_str": "ext jpg not uid 5",
- "path_passing_method": "arg",
- "requested_by": "string",
- "root_path": "string",
- "volume": "string",
- "agent_fail_fast": true,
- "agent_fail_fast_min_batches": 100,
- "agent_fail_fast_threshold": 100,
- "snapshot_glob": "string",
- "sort_by": [
- "ino"
], - "group_by": [
- "ino"
], - "retry_entries": "grouped",
- "limit": 0
}{- "id": 1,
- "long_id": "j_cmd_generating_f_20181010_1052_3",
- "heartbeat": 1593093530.123456,
- "ended_at": 1593093530.123456,
- "started_at": 1593093421.123456,
- "created_at": 1593093420.123456,
- "duration": 10,
- "duration_hum": "1m10s",
- "avg_file_size": 8191.8,
- "avg_file_size_hum": "8K",
- "bandwidth": "string",
- "batches_in_progress": [
- {
- "alive_timestamp": 1631182228.0658422,
- "batch_size_bytes": 18326979,
- "batch_size_bytes_hum": "17,5MiB",
- "items_in_batch": 1000,
- "pid": 1312,
- "started_at": 1589205125.19,
- "started_at_hum": "2020-05-11 15:52:05 +0200"
}
], - "incarnations": [
- {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}
], - "options": {
- "archive_target_name": "string",
- "dst_allow_empty_dir": true,
- "archive_target_id": 78,
- "dst_path": "string",
- "dst_volume": "string",
- "dst_volume_id": 0
}, - "status": "starting",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "runtime": 0,
- "runtime_hum": "string",
- "est_total_bytes": 0,
- "est_total_entries_num": 0,
- "last_dispatcher_operation": "string",
- "pause_interval": 0,
- "reason_code": "string",
- "reason_msg": "string",
- "tag_results_with": "string",
- "volume_id": 0,
- "rerun_from_job_id": 0,
- "query": [
- [
- "string"
]
], - "throttle_interval": 0,
- "retry_count": 0,
- "throttle_cmd": "string",
- "prefetch_size": 0,
- "prefetch_parallel_threads": 0,
- "items_in_batch": 0,
- "batch_size_bytes": 0,
- "prescan_id": 0,
- "pre_verify_ctime": true,
- "post_verify_ctime": true,
- "manifest_status": "not-requested",
- "manifest_query_ids": [
- "string"
], - "manifest_loc": "string",
- "cmd_name": "string",
- "current_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "first_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 150,
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0,
- "bandwidth_seconds": 16399744.619680852,
- "source": "home:user/projects",
- "snapshot": "string",
- "name": "string",
- "allow_overlapping_job": false,
- "batch_per_dir": false,
- "batch_fields": {
- "jobs.hash.result.md5": "md5",
- "jobs.hash.mt": "md5_mtime"
}, - "cmd_output_format": "text",
- "command": [
- "string"
], - "entries_from_file": "paths",
- "generate_manifest": false,
- "ignore_results": false,
- "post_verification": true,
- "pre_verification": true,
- "prescan_type": "diff",
- "query_str": "ext jpg not uid 5",
- "path_passing_method": "arg",
- "requested_by": "string",
- "root_path": "string",
- "volume": "string",
- "agent_fail_fast": true,
- "agent_fail_fast_min_batches": 100,
- "agent_fail_fast_threshold": 100,
- "snapshot_glob": "string",
- "sort_by": [
- "ino"
], - "group_by": [
- "ino"
], - "retry_entries": "grouped",
- "limit": 0
}
Stops all jobs or jobs matching the given criteria. If no filters are provided, all jobs are stopped. Returns the list of the stopped jobs.
| created_by_username | Array of strings Only jobs created by a user with the given username will be taken into account. The request may specify more than one name. |
| created_by_uid | Array of strings Only jobs created by a user with the given UID will be taken into account. The request may specify more than one user id. |
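A rough sketch of what the `created_by_username` / `created_by_uid` filters select; the job dictionaries here are a simplified, hypothetical stand-in (the real job objects carry creator info inside their incarnations), and the helper is ours, not part of the API:

```python
def filter_jobs(jobs, usernames=None, uids=None):
    """Keep only jobs whose creator matches one of the given usernames/UIDs."""
    def match(job):
        creator = job.get("created_by", {})
        ok = True
        if usernames:
            ok = ok and creator.get("username") in usernames
        if uids:
            ok = ok and str(creator.get("system_id")) in {str(u) for u in uids}
        return ok
    return [j for j in jobs if match(j)]

jobs = [
    {"id": 1, "created_by": {"system_id": 12, "username": "Alice"}},
    {"id": 2, "created_by": {"system_id": 7, "username": "Bob"}},
]
print([j["id"] for j in filter_jobs(jobs, usernames=["Alice"])])  # [1]
```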
[- {
- "id": 1,
- "long_id": "j_cmd_generating_f_20181010_1052_3",
- "heartbeat": 1593093530.123456,
- "ended_at": 1593093530.123456,
- "started_at": 1593093421.123456,
- "created_at": 1593093420.123456,
- "duration": 10,
- "duration_hum": "1m10s",
- "avg_file_size": 8191.8,
- "avg_file_size_hum": "8K",
- "bandwidth": "string",
- "batches_in_progress": [
- {
- "alive_timestamp": 1631182228.0658422,
- "batch_size_bytes": 18326979,
- "batch_size_bytes_hum": "17,5MiB",
- "items_in_batch": 1000,
- "pid": 1312,
- "started_at": 1589205125.19,
- "started_at_hum": "2020-05-11 15:52:05 +0200"
}
], - "incarnations": [
- {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}
], - "options": {
- "archive_target_name": "string",
- "dst_allow_empty_dir": true,
- "archive_target_id": 78,
- "dst_path": "string",
- "dst_volume": "string",
- "dst_volume_id": 0
}, - "status": "starting",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "runtime": 0,
- "runtime_hum": "string",
- "est_total_bytes": 0,
- "est_total_entries_num": 0,
- "last_dispatcher_operation": "string",
- "pause_interval": 0,
- "reason_code": "string",
- "reason_msg": "string",
- "tag_results_with": "string",
- "volume_id": 0,
- "rerun_from_job_id": 0,
- "query": [
- [
- "string"
]
], - "throttle_interval": 0,
- "retry_count": 0,
- "throttle_cmd": "string",
- "prefetch_size": 0,
- "prefetch_parallel_threads": 0,
- "items_in_batch": 0,
- "batch_size_bytes": 0,
- "prescan_id": 0,
- "pre_verify_ctime": true,
- "post_verify_ctime": true,
- "manifest_status": "not-requested",
- "manifest_query_ids": [
- "string"
], - "manifest_loc": "string",
- "cmd_name": "string",
- "current_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "first_incarnation": {
- "progress_history": [
- {
- "stats_time": 1589204901,
- "stats": {
- "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 7373,
- "max_workers_number": 3,
- "fs_entries_error_stats": { },
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0
}
}
], - "created_by_hum": "Alice (uid=12)",
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "cmd_line": [
- "sf",
- "job",
- "start",
- "--no-prescan",
- "echo %PATH%",
- "home:"
], - "from_scratch": false,
- "prescan_enabled": false,
- "created_by_id": 1
}, - "fs_bytes_done": 69870585,
- "fs_entries_pushed": 200,
- "fs_entries_done": 150,
- "fs_entries_failed": 50,
- "fs_entries_temp_error": 250,
- "fs_entries_timedout": 1,
- "fs_entries_unprocessed": 0,
- "bandwidth_seconds": 16399744.619680852,
- "source": "home:user/projects",
- "snapshot": "string",
- "name": "string",
- "allow_overlapping_job": false,
- "batch_per_dir": false,
- "batch_fields": {
- "jobs.hash.result.md5": "md5",
- "jobs.hash.mt": "md5_mtime"
}, - "cmd_output_format": "text",
- "command": [
- "string"
], - "entries_from_file": "paths",
- "generate_manifest": false,
- "ignore_results": false,
- "post_verification": true,
- "pre_verification": true,
- "prescan_type": "diff",
- "query_str": "ext jpg not uid 5",
- "path_passing_method": "arg",
- "requested_by": "string",
- "root_path": "string",
- "volume": "string",
- "agent_fail_fast": true,
- "agent_fail_fast_min_batches": 100,
- "agent_fail_fast_threshold": 100,
- "snapshot_glob": "string",
- "sort_by": [
- "ino"
], - "group_by": [
- "ino"
], - "retry_entries": "grouped",
- "limit": 0
}
]The file should consist of an arbitrary number of file/dir paths separated with \0. Paths should be relative to the job root_path. The job will start only after the entries file is uploaded; its status will be set to timeout if the file is not uploaded or the upload takes too long. In order to receive job entries, the job should be started with entries_from_file = paths. Note that SFIDs are not yet supported in this case.
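A minimal sketch of building this payload, assuming only what is stated above (NUL-separated paths, relative to the job root_path); uploading is then a matter of sending these bytes as the request body with the usual Authorization: Bearer header.

```python
# Build the \0-separated entries payload for a job started with
# entries_from_file = paths. Paths must be relative to the job root_path.
def build_entries_payload(paths):
    # Join the relative paths with NUL separators and encode as bytes.
    return "\0".join(paths).encode("utf-8")

payload = build_entries_payload(["dir1/file_a", "dir1/file_b", "dir2"])
# payload is b"dir1/file_a\x00dir1/file_b\x00dir2"
```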
| job_id required | string Example: j_hash_20170821_1254_117 Id of the job |
| is_data_completed | boolean Default: true Example: is_data_completed=true if set to |
| job_id required | integer Job id for which manifest should be created or fetched. |
| archive_job | boolean Default: false Value |
| low_level_job | boolean Default: false Value |
| csv | boolean Default: false When |
{- "query_id": "20180606_100701_7359bb_volume_name_",
- "location": "/api/v1/async/query/20180606_100701_7359bb_volume_name_"
}| job_id required | integer Job id for which manifest should be created or fetched. |
| archive_job | boolean Default: false Value |
| low_level_job | boolean Default: false Value |
| csv | boolean Default: false When |
{- "license": {
- "username": "starfish",
- "expires": "2018-07-13",
- "expires_timestamp": 1634075999.999,
- "expired": false,
- "proof_of_concept": false,
- "zones_count": 0
}, - "features": {
- "backup": false,
- "excludes": true,
- "fs-monitor": true,
- "fsentry": false,
- "job": true,
- "query": false,
- "reports": false,
- "reports:basic": false,
- "reports:analytics": false,
- "scan": true,
- "scheduling": false,
- "tag": true,
- "volume": false
}, - "comment": "string",
- "licenses": {
- "/opt/starfish/etc/license": {
- "all_enabled": false,
- "comment": "string",
- "disabled": [
- "string"
], - "enabled": [
- "string"
], - "expires": "2018-07-13",
- "proof_of_concept": false,
- "username": "starfish",
- "zones_count": 0
}
}, - "features_expirations": {
- "backup": null,
- "fsentry": 1626253529.999,
- "job": null,
- "tag": 1626253529.999,
- "query": 1626253529.999,
- "reports": 1626253529.999,
- "reports:basic": 1626253529.999,
- "reports:analytics": null,
- "excludes": 1626253529.999,
- "fs-monitor": 1626253529.999,
- "scan": 1626253529.999,
- "scheduling": 1626253529.999,
- "volume": 1626253529.999
}
}Results are based on entries found when crawling a volume; this means that only users that have ever owned an entry are guaranteed to be listed.
| volume_name | string Volume name. Response will contain only users/groups for a given volume. |
| user | string Response will contain only users with a given name(s). Request may specify more than one user. |
| uid | string Response will contain only users with a given user id(s). Request may specify more than one user id. |
[- {
- "name": "user_name_1",
- "uid": 1,
- "volume": "volume_name_1"
}
]Results are based on entries found when crawling a volume; this means that only users and groups that have ever owned an entry are guaranteed to be mapped.
| volume_name | string Volume name. Response will contain only users/groups for a given volume. |
| user | string Response will contain only users with a given name(s). Request may specify more than one user. |
| uid | string Response will contain only users with a given user id(s). Request may specify more than one user id. |
[- {
- "uid": 1,
- "name": "user 1",
- "volume": "volume1",
- "gids": [
- 1,
- 2
]
}
]Results are based on entries found when crawling a volume; this means that only groups that have ever owned an entry are guaranteed to be listed.
| volume_name | string Volume name. Response will contain only users/groups for a given volume. |
| group | string Response will contain only groups with a given name(s). Request may specify more than one group. |
| gid | string Response will contain only groups with a given group id(s). Request may specify more than one group id. |
[- {
- "gid": 1,
- "name": "group 1",
- "volume": "volume1"
}
]Results are based on entries found when crawling a volume; this means that only groups and users that have ever owned an entry are guaranteed to be mapped.
| volume_name | string Volume name. Response will contain only users/groups for a given volume. |
| group | string Response will contain only groups with a given name(s). Request may specify more than one group. |
| gid | string Response will contain only groups with a given group id(s). Request may specify more than one group id. |
[- {
- "gid": 1,
- "name": "group 1",
- "uids": [
- 1
], - "volume": "volume1"
}
]NOTE
The asynchronous query endpoint (/async/query) should be used in most cases,
since the synchronous query is limited by HTTP timeouts.
This synchronous endpoint is appropriate only for queries that return in 30 seconds or less.
Starfish uses the async query internally, including for the sf query command.
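A synchronous query is a single GET request whose parameters are passed in the query string. A minimal sketch of composing such a URL; the host is illustrative and the /api/v1/query/ path is an assumption here (use the exact path of this endpoint):

```python
from urllib.parse import urlencode

def build_query_url(base, params):
    # doseq=True repeats array-valued params (e.g. zones=a&zones=b),
    # matching how multiple zones are passed in the query string.
    return base + "/api/v1/query/?" + urlencode(params, doseq=True)

url = build_query_url(
    "https://starfish.example.com",
    {
        "volumes_and_paths": "home:projects/starfish",
        "query": "type=f size=0-1024",
        "zones": ["sample_zone_name", "other_zone_name"],
        "limit": 100,
    },
)
```

The request must also carry the `Authorization: Bearer TOKEN` header described in the authentication section.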
| volumes_and_paths | string Example: volumes_and_paths=home:projects/starfish Name of the volume and path as |
| zones | Array of arrays Example: zones=sample_zone_name&zones=other_zone_name Names of the zones. Multiple zones are supported in query strings |
| query | string Example: query=type=f size=0-1024 Query filters. All filters supported by
|
| format | string Default: "parent_path fn type size blck ct mt at uid gid mode tags_explicit tags_inherited" Space-separated list of fields that should be returned. Note: with output_format 'json' the output may contain additional fields even if they are not specified here; for 'csv' this is the list of columns. |
| sort_by | string Example: sort_by=parent_path,-ct,+size Sort by given fields. Multiple fields should be comma-separated; you
can prefix each field with Allowed keys:
|
| group_by | string Example: group_by=volume,username Group result by given fields. Multiple fields should be comma-separated. Allowed keys:
|
| limit | integer Default: 1000 Limit the number of returned entries |
| force_tag_inherit | boolean Default: false Inherit tags even if they are in non-inheritable tagset |
| size_unit | string Default: "B" Determines size unit in which size-related fields will be returned. Allowed values: B,K,Ki,M,Mi,G,Gi,T,Ti,P,Pi,E,Ei,Z,Zi,Y,Yi. |
| size_unit_precision | integer Determines the number of decimal places returned in size-related fields. Works only if the 'size_unit' option is also used. By default, size is rounded to 1 decimal place. |
| hum_size_precision | integer Determines the number of decimal places in human-readable size-related fields. By default, human-readable sizes are rounded to 1 decimal place. |
| type_hum_format | string Determines what type_hum values are returned. The value of this parameter should be pairs of <filetype>=<value> separated by ";" where "filetype" is d for directory, f for regular file, l for symbolic link, b for block device, c for character device, s for socket, p for FIFO pipe and "value" is the desired type_hum value for that filetype. Example: "f=regular file;d=directory;l=symbolic link" For filetypes not listed in the format, default value is used. |
| humanize_nested | boolean Default: false Show nested fields such as aggrs, rec_aggrs or jobs with additional human-readable fields. |
| without_private_tags | boolean Default: false Do not show tags from private tagsets when this flag is set. |
| mount_agent | string Default: "None" Show mount path of volume for specified agent address. If an agent is specified that is not associated with a given volume, the API will return the mount_path of the default agent. This option supports single and multiple volume queries. |
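The type_hum_format grammar described above (";"-separated `<filetype>=<value>` pairs) can be parsed in a few lines; this helper is purely illustrative and not part of the API:

```python
# Parse a type_hum_format value such as
# "f=regular file;d=directory;l=symbolic link"
# into a dict mapping filetype letter -> display value.
def parse_type_hum_format(spec):
    mapping = {}
    for pair in spec.split(";"):
        if not pair:
            continue  # tolerate trailing or doubled separators
        ftype, _, value = pair.partition("=")
        mapping[ftype] = value
    return mapping

fmt = parse_type_hum_format("f=regular file;d=directory;l=symbolic link")
# fmt["d"] == "directory"; filetypes not listed keep their default value.
```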
[- {
- "_id": 0,
- "fn": "file_name",
- "size": 32768,
- "volume": "volume_name",
- "inode": 247145,
- "full_path": "full/path/to/the/file_name",
- "size_unit": "KiB",
- "zones": [
- {
- "id": 1,
- "name": "zone_name",
- "relative_path": "dir/inside/zone"
}
]
}
]See the description of the GET /query/ endpoint. Parameters sent in the body that are not described here have the same meaning as in the GET version of the endpoint.
| volumes_and_paths | Array of strings List of volume names and paths as |
| zones | Array of strings Names of the zones. Multiple zones are supported in query strings |
| query | string |
| format | string |
| sort_by | string |
| group_by | string |
| limit | integer |
| force_tag_inherit | boolean |
| size_unit | string |
| size_unit_precision | integer |
| hum_size_precision | integer |
| type_hum_format | string |
| humanize_nested | boolean |
| without_private_tags | boolean |
| mount_agent | string |
{- "volumes_and_paths": [
- "home:projects/starfish",
- "backup:projects/starfish/"
], - "zones": [
- "sample_zone_name",
- "other_zone_name"
], - "query": "string",
- "format": "string",
- "sort_by": "string",
- "group_by": "string",
- "limit": 0,
- "force_tag_inherit": true,
- "size_unit": "string",
- "size_unit_precision": 0,
- "hum_size_precision": 0,
- "type_hum_format": "string",
- "humanize_nested": true,
- "without_private_tags": true,
- "mount_agent": "string"
}[- {
- "_id": 0,
- "fn": "file_name",
- "size": 32768,
- "volume": "volume_name",
- "inode": 247145,
- "full_path": "full/path/to/the/file_name",
- "size_unit": "KiB",
- "zones": [
- {
- "id": 1,
- "name": "zone_name",
- "relative_path": "dir/inside/zone"
}
]
}
]NOTE
The asynchronous query endpoint (/async/query) should be used in most cases,
since the synchronous query is limited by HTTP timeouts.
This synchronous endpoint is appropriate only for queries that return in 30 seconds or less.
Starfish uses the async query internally, including for the sf query command.
| volumes_and_paths required | string Example: home:projects%2Fstarfish/backup:projects%2Fstarfish/ Name of the volume and path as |
| zones | Array of arrays Example: zones=sample_zone_name&zones=other_zone_name Names of the zones. Multiple zones are supported in query strings |
| query | string Example: query=type=f size=0-1024 Query filters. All filters supported by
|
| format | string Default: "parent_path fn type size blck ct mt at uid gid mode tags_explicit tags_inherited" Space-separated list of fields that should be returned. Note: with output_format 'json' the output may contain additional fields even if they are not specified here; for 'csv' this is the list of columns. |
| sort_by | string Example: sort_by=parent_path,-ct,+size Sort by given fields. Multiple fields should be comma-separated; you
can prefix each field with Allowed keys:
|
| group_by | string Example: group_by=volume,username Group result by given fields. Multiple fields should be comma-separated. Allowed keys:
|
| limit | integer Default: 1000 Limit the number of returned entries |
| force_tag_inherit | boolean Default: false Inherit tags even if they are in non-inheritable tagset |
| size_unit | string Default: "B" Determines size unit in which size-related fields will be returned. Allowed values: B,K,Ki,M,Mi,G,Gi,T,Ti,P,Pi,E,Ei,Z,Zi,Y,Yi. |
| size_unit_precision | integer Determines the number of decimal places returned in size-related fields. Works only if the 'size_unit' option is also used. By default, size is rounded to 1 decimal place. |
| hum_size_precision | integer Determines the number of decimal places in human-readable size-related fields. By default, human-readable sizes are rounded to 1 decimal place. |
| type_hum_format | string Determines what type_hum values are returned. The value of this parameter should be pairs of <filetype>=<value> separated by ";" where "filetype" is d for directory, f for regular file, l for symbolic link, b for block device, c for character device, s for socket, p for FIFO pipe and "value" is the desired type_hum value for that filetype. Example: "f=regular file;d=directory;l=symbolic link" For filetypes not listed in the format, default value is used. |
| humanize_nested | boolean Default: false Show nested fields such as aggrs, rec_aggrs or jobs with additional human-readable fields. |
| without_private_tags | boolean Default: false Do not show tags from private tagsets when this flag is set. |
| mount_agent | string Default: "None" Show mount path of volume for specified agent address. If an agent is specified that is not associated with a given volume, the API will return the mount_path of the default agent. This option supports single and multiple volume queries. |
[- {
- "_id": 0,
- "fn": "file_name",
- "size": 32768,
- "volume": "volume_name",
- "inode": 247145,
- "full_path": "full/path/to/the/file_name",
- "size_unit": "KiB",
- "zones": [
- {
- "id": 1,
- "name": "zone_name",
- "relative_path": "dir/inside/zone"
}
]
}
]Use this method instead of the synchronous one for most operations, since the
synchronous query is limited by HTTP timeouts. The async_after_sec parameter is
strongly recommended, since it shortens the feedback loop, especially for small queries.
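The recommended flow can be sketched as follows. The `fetch` callable is injected so the sketch stays transport-agnostic; treating any non-200 start response as "deferred to async" is an assumption here, while the `is_done` and `location` fields follow the query status response documented below.

```python
import time

# Start an async query; if it finishes within async_after_sec the server
# returns the result directly with HTTP 200, otherwise poll the returned
# location until is_done. `fetch(path)` returns (status_code, parsed_body).
def wait_for_async_query(fetch, start_path, poll_interval=1.0):
    status, body = fetch(start_path)
    if status == 200:
        return body  # finished within async_after_sec
    # Deferred: the response body carries a location to poll.
    location = body["location"]
    while True:
        status, body = fetch(location)
        if body.get("is_done"):
            return body
        time.sleep(poll_interval)
```

Once `is_done` is true, the results are fetched by query_id from the result endpoint documented below.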
| volumes_and_paths | string Example: volumes_and_paths=home:projects/starfish Name of the volume and path as |
| zones | Array of arrays Example: zones=sample_zone_name&zones=other_zone_name Names of the zones. Multiple zones are supported in query strings |
| queries | Array of strings Default: [] Example: queries=type=d rec_aggrs.size=0-1024&queries=type=f size=0-1024 List of separate queries. Each query is processed separately, so entries are not sorted between queries. The result is the sorted output of the first query, then of the next one, etc. When passing multiple queries the result is only partially sorted. By design, an entry that satisfies filters from multiple queries will appear more than once. EXAMPLE: Assume a database with 4 entries: Query with
|
| format | string Default: "parent_path fn type size blck ct mt at uid gid mode tags_explicit tags_inherited" Space-separated list of fields that should be returned. Note: with output_format 'json' the output may contain additional fields even if they are not specified here; for 'csv' this is the list of columns. |
| sort_by | string Example: sort_by=parent_path,-ct,+size Sort by given fields. Multiple fields should be comma-separated; you
can prefix each field with Allowed keys:
|
| group_by | string Example: group_by=volume,username Group result by given fields. Multiple fields should be comma-separated. Allowed keys:
|
| force_tag_inherit | boolean Default: false Inherit tags even if they are in non-inheritable tagset |
| output_format | string Default: "json" Enum: "csv" "json" "txt" Allow to change output format from json to csv, example: output_format=csv |
| delimiter | string Default: "," Column delimiter if output_format is csv |
| escape_paths | boolean Default: false escape \t and \n characters if output_format is csv |
| print_headers | boolean Default: true Print column names in the first line if output_format is csv |
| line_delimiter | string Example: line_delimiter= Line delimiter if output_format is txt ( |
| size_unit | string Default: "B" Determines size unit in which size-related fields will be returned. Allowed values: B,K,Ki,M,Mi,G,Gi,T,Ti,P,Pi,E,Ei,Z,Zi,Y,Yi. |
| size_unit_precision | integer Determines the number of decimal places returned in size-related fields. Works only if the 'size_unit' option is also used. By default, size is rounded to 1 decimal place. |
| hum_size_precision | integer Determines the number of decimal places in human-readable size-related fields. By default, human-readable sizes are rounded to 1 decimal place. |
| type_hum_format | string Determines what type_hum values are returned. The value of this parameter should be pairs of <filetype>=<value> separated by ";" where "filetype" is d for directory, f for regular file, l for symbolic link, b for block device, c for character device, s for socket, p for FIFO pipe and "value" is the desired type_hum value for that filetype. Example: "f=regular file;d=directory;l=symbolic link" For filetypes not listed in the format, default value is used. |
| humanize_nested | boolean Default: false Show nested fields such as aggrs, rec_aggrs or jobs with additional human-readable fields. |
| without_private_tags | boolean Default: false Do not show tags from private tagsets when this flag is set. |
| mount_agent | string Default: "None" Show mount path of volume for specified agent address. If an agent is specified that is not associated with a given volume, the API will return the mount_path of the default agent. This option supports single and multiple volume queries. |
| limit | integer Limit the number of returned entries |
| async_after_sec | number <float> Default: 5 If passed, the async query waits async_after_sec seconds before switching to async mode. If it finishes within async_after_sec, the result is returned immediately with HTTP status code 200 |
[- {
- "_id": 0,
- "fn": "file_name",
- "size": 32768,
- "volume": "volume_name",
- "inode": 247145,
- "full_path": "full/path/to/the/file_name",
- "size_unit": "KiB",
- "zones": [
- {
- "id": 1,
- "name": "zone_name",
- "relative_path": "dir/inside/zone"
}
]
}
]| query_id required | string Query ID that was returned by start async query operation |
{- "is_done": true,
- "query_id": "string",
- "location": "/api/v1/async/query/20180606_100701_7359bb_volume_name:"
}| query_id required | string Query ID that was returned by start async query operation |
[- {
- "_id": 0,
- "fn": "file_name",
- "size": 32768,
- "volume": "volume_name",
- "inode": 247145,
- "full_path": "full/path/to/the/file_name",
- "size_unit": "KiB",
- "zones": [
- {
- "id": 1,
- "name": "zone_name",
- "relative_path": "dir/inside/zone"
}
]
}
]Returns the restore job object. A single restore job may contain multiple low-level jobs. A restore job also restores permissions of files and directories. Existing directories' permissions will not be overridden unless the restore_permissions option is provided.
| query | string Example: query=type=f size=0-1024 Query filters. All filters supported by
|
| src_volume_and_path | string the original location of files before archiving; this parameter will become optional once restoring from multiple volumes in a single job is supported |
| dst_volume_and_path | string volume:path to which files are to be restored |
| query | string If defined here will overwrite query filters from url params. |
object All of these options can also be defined in top level dictionary | |
| entries_from_file | string or null (entries_from_file_enum) Enum: "paths" "sfids" Determines the type of entries passed in the file. Supported by the restore job; other jobs expect paths only. |
object Properties dedicated for restore job. All of these options can also be defined in top level dictionary. |
{- "src_volume_and_path": "src_volume_name:path/to/dir",
- "dst_volume_and_path": "dst_volume_name:path/to/dir",
- "query": "latest-copied-version",
- "command_options": {
- "overwrite": "never",
- "check_file_fields": "mode,gid,uid,mtime,size",
- "check_dir_fields": "mode,gid,uid,mtime",
- "no_permissions": false,
- "permissions": "restore_none",
- "permissions_only": false,
- "no_create_dest_path": false,
- "no_sparse": false,
- "inplace": false,
- "remove_and_forget": false,
- "hard_links": false
}, - "entries_from_file": "paths",
- "restore_options": {
- "from_targets": ""
}
}{- "href": "/api/restore/job/123",
- "src_volume_and_path": "projects:dir1/dir2",
- "dst_volume_and_path": "projects:dir1/dir2",
- "entries_from_file": "paths",
- "low_level_jobs": {
- "CREATING_DIR_TREE": [
- 199
], - "DOWNLOADING_FILES": [
- 201,
- 209
], - "FIXING_DIR_METADATA": [
- 211
]
}, - "stats": {
- "CREATING_DIR_TREE": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}, - "DOWNLOADING_FILES": {
- "done_entries": 123,
- "failed_entries": 3,
- "matched_entries": 126,
- "not_archived_files": 0
}, - "FIXING_DIR_METADATA": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}
}, - "id": 0,
- "status": "STARTING",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "creation_time": 1593093530.123456,
- "creation_time_hum": "2020-06-25 15:58:50",
- "end_time": 1593093600.123456,
- "end_time_hum": "2020-06-25 16:00:00",
- "duration": 70,
- "duration_hum": "1m10s",
- "created_by_id": 1,
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "created_by_hum": "Alice (uid=12)"
}| status | Array of strings Job status(es), either a list of statuses or a single status as a string. Cannot be used together with |
| running | boolean if set to |
| requested_by | Array of strings created by a given entity, either a list of entities or a single entity as a string. For example 'gui', 'client', 'scheduler' etc. |
| creation_time | string Supports FROM-TO and RELATIVE formats; FROM-TO: '# hour|day|week|month|year(s) ago' or 'YYYYMMDD[HHMM[SS]]' or 'now' or 'inf', for example:
RELATIVE: '[+|-]N[y|m|w|d|h]', meaning a number of years, months, weeks, days (default) or hours, for example:
|
| end_time | string the same as |
| sort_by | string Enum: "creation_time" "dst_path" "dst_volume_id" "end_time" "id" "phase" "query" "reason" "requested_by" "src_path" "src_volume_id" "status" Example: sort_by=creation_time -dst_path Sort by given fields. Multiple fields should be separated with some whitespace or comma. Each field could be prefixed with '+' or '-' to sort ascending or descending (default is ascending). By default, results are sorted by id, but the limit is applied descending. If limit is also specified, results are sorted first and then the limit is applied. |
| limit | integer Maximum number of returned jobs |
| paging_offset | integer Parameter that describes the paging offset. It should be equal to the number of entries that have already been printed on the previous pages. For example:
With paged result comes field |
| add_paging_params_to_response | boolean Default: false A flag specifying whether to include paging params in response. |
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
| created_by_username | Array of strings Only jobs created by a user with the given username will be taken into account. Request may specify more than one name. |
| created_by_uid | Array of strings Only jobs created by a user with the given UID will be taken into account. Request may specify more than one user id. |
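The RELATIVE time format accepted by the creation_time and end_time filters above, '[+|-]N[y|m|w|d|h]', can be parsed with a short helper; this is purely illustrative, not part of the API:

```python
import re

# '[+|-]N[y|m|w|d|h]': an optional sign, a number, and an optional unit
# (years, months, weeks, days or hours). Days are the default unit.
_REL = re.compile(r"^([+-]?)(\d+)([ymwdh]?)$")

def parse_relative(value):
    m = _REL.match(value)
    if not m:
        raise ValueError("not a RELATIVE expression: %r" % value)
    sign, n, unit = m.groups()
    return (sign or "+", int(n), unit or "d")

# parse_relative("-2w") -> ("-", 2, "w")
# parse_relative("3")   -> ("+", 3, "d")   # default sign and unit
```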
{- "restore_jobs": [
- {
- "href": "/api/restore/job/123",
- "src_volume_and_path": "projects:dir1/dir2",
- "dst_volume_and_path": "projects:dir1/dir2",
- "entries_from_file": "paths",
- "low_level_jobs": {
- "CREATING_DIR_TREE": [
- 199
], - "DOWNLOADING_FILES": [
- 201,
- 209
], - "FIXING_DIR_METADATA": [
- 211
]
}, - "stats": {
- "CREATING_DIR_TREE": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}, - "DOWNLOADING_FILES": {
- "done_entries": 123,
- "failed_entries": 3,
- "matched_entries": 126,
- "not_archived_files": 0
}, - "FIXING_DIR_METADATA": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}
}, - "id": 0,
- "status": "STARTING",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "creation_time": 1593093530.123456,
- "creation_time_hum": "2020-06-25 15:58:50",
- "end_time": 1593093600.123456,
- "end_time_hum": "2020-06-25 16:00:00",
- "duration": 70,
- "duration_hum": "1m10s",
- "created_by_id": 1,
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "created_by_hum": "Alice (uid=12)"
}
], - "next_page_params": {
- "limit": 10,
- "sort_by": 1,
- "paging_offset": 51,
- "add_paging_params_to_response": true
}, - "matched_restore_jobs_count": 70
}| restore_job_id required | integer Id of the restore job |
{- "href": "/api/restore/job/123",
- "src_volume_and_path": "projects:dir1/dir2",
- "dst_volume_and_path": "projects:dir1/dir2",
- "entries_from_file": "paths",
- "low_level_jobs": {
- "CREATING_DIR_TREE": [
- 199
], - "DOWNLOADING_FILES": [
- 201,
- 209
], - "FIXING_DIR_METADATA": [
- 211
]
}, - "stats": {
- "CREATING_DIR_TREE": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}, - "DOWNLOADING_FILES": {
- "done_entries": 123,
- "failed_entries": 3,
- "matched_entries": 126,
- "not_archived_files": 0
}, - "FIXING_DIR_METADATA": {
- "matched_dirs": 40,
- "done_dirs": 40,
- "failed_dirs": 0
}
}, - "id": 0,
- "status": "STARTING",
- "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "creation_time": 1593093530.123456,
- "creation_time_hum": "2020-06-25 15:58:50",
- "end_time": 1593093600.123456,
- "end_time_hum": "2020-06-25 16:00:00",
- "duration": 70,
- "duration_hum": "1m10s",
- "created_by_id": 1,
- "created_by": {
- "system_id": 12,
- "username": "Alice"
}, - "created_by_hum": "Alice (uid=12)"
}The file should consist of an arbitrary number of file/dir paths or SFIDs separated with \0. Paths should be relative to the restore job root_path. The choice of paths vs. SFIDs should match the entries_from_file argument passed in the restore job start request. The job will start only after the entries file is uploaded; its status will be set to timeout if the file is not uploaded or the upload takes too long.
| restore_job_id required | integer Id of the restore job |
| agent_address | string Default: "https://localhost:30002" Agent address where the volume is present. The default agent address is stored in the Starfish configuration file. |
object | |
| loading_priority | integer Default: 0 Priority to load chunks. Value 100 means pgloader should load chunks immediately. |
| overlapping_check_disabled | boolean Default: false If set, the scan is started without checking whether it overlaps an already pending scan; in rare cases using this parameter can lead to a broken tree structure in the database, which can be fixed only by performing a sync scan; if really necessary, it is recommended to use this option only for small jobs (e.g. refreshing directories in the UI: depth = 0) |
| requested_by required | string Enum: "gui" "client" "scheduler" "dispatcher" "internal" "monitor" |
| type required | string Enum: "diff" "mtime" "sync" |
| volume required | string |
{- "crawler_options": {
- "depth": 0,
- "ignore_dirs": [
- "string"
], - "ignore_files": [
- "string"
], - "num_workers": 0,
- "posix_acl": true,
- "samqfs": true,
- "startpoint": "",
- "startpoints": [
- ""
], - "win_acl": true,
- "win_attr": true,
- "win_backup_privilege": "string",
- "xattrs": true,
- "xattrs_regex": "string"
}, - "loading_priority": 0,
- "overlapping_check_disabled": false,
- "requested_by": "gui",
- "type": "diff",
- "volume": "volume_name"
}{- "agent_state": "active",
- "agent_heard": 0,
- "status": "done",
- "crawler": {
- "added_dirs": 50,
- "added_files": 100,
- "broken_entries": {
- "item_empty_file_name": 3,
- "iten_non_utf8_name": 2,
- "ignored_huge_dir": 1,
- "scandir_timeout": 2,
- "warning_large_dir": 3,
- "item_path_too_long": 2,
- "item_bad_symlink": 1,
- "item_startpoint_no_a_dir": 2,
- "posix_acl_error": 3,
- "windows_acl_error": 2,
- "samfs_error": 1,
- "ENOENT": 1,
- "ESTALE": 2
}, - "state_durations": {
- "preparing": 3
}, - "broken_entry_count": 25
}, - "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "crawler_options": {
- "depth": 0,
- "ignore_dirs": [
- "string"
], - "ignore_files": [
- "string"
], - "num_workers": 0,
- "posix_acl": true,
- "samqfs": true,
- "startpoint": "",
- "startpoints": [
- ""
], - "win_acl": true,
- "win_attr": true,
- "win_backup_privilege": "string",
- "xattrs": true,
- "xattrs_regex": "string"
}, - "loading_priority": 0,
- "overlapping_check_disabled": false,
- "requested_by": "gui",
- "type": "diff",
- "volume": "volume_name"
}| sort_order | integer Enum: -1 1 Use -1 for descending order and 1 for ascending |
| limit | integer Max number of returned scans. Use 0 for an unlimited response; the default is 10000. |
| paging_offset | integer Parameter that describes the paging offset. It should be equal to the number of entries that have already been printed on the previous pages. For example:
With paged result comes field |
| running | boolean if set to |
| volume | string name of the volume |
| long_id | Array of strings long id of the scan |
| num_id | Array of integers numeric id of the scan |
| vol_paths | Array of strings List of volume names and paths joined with ':' (e.g. "vol1", "vol2:path"); lists scans that have at least one startpoint equal to any of the given vol:path values |
| overlapping_vol_paths | Array of strings List of volume names and paths joined with ':' (e.g. "vol1", "vol2:path"); lists scans that potentially overlap with any of the given vol:path values |
| state | Array of strings Scan state(s), either a list of states or a single state as a string. Cannot be used together with |
| type | string May be used more than once in the query; scans of any of the given types will be returned |
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
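The paging_offset mechanics above can be sketched as a loop; `fetch` is an injected callable returning the parsed response body, and the field names (scans, matched_scans_count) follow the sample response shown below.

```python
# Walk all pages of the scan listing: each request advances paging_offset
# by the number of entries already received, until the matched count is
# reached. `fetch(params)` returns the parsed JSON body for one page.
def iter_all_scans(fetch, limit=10):
    offset = 0
    while True:
        body = fetch({
            "limit": limit,
            "paging_offset": offset,
            "add_paging_params_to_response": True,
        })
        scans = body["scans"]
        if not scans:
            return
        yield from scans
        offset += len(scans)
        # Stop once every matched scan has been seen (if the count is
        # absent, stop after the current page).
        if offset >= body.get("matched_scans_count", offset):
            return
```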
{- "scans": [
- {
- "agent_state": "active",
- "agent_heard": 0,
- "status": "done",
- "crawler": {
- "added_dirs": 50,
- "added_files": 100,
- "broken_entries": {
- "item_empty_file_name": 3,
- "iten_non_utf8_name": 2,
- "ignored_huge_dir": 1,
- "scandir_timeout": 2,
- "warning_large_dir": 3,
- "item_path_too_long": 2,
- "item_bad_symlink": 1,
- "item_startpoint_no_a_dir": 2,
- "posix_acl_error": 3,
- "windows_acl_error": 2,
- "samfs_error": 1,
- "ENOENT": 1,
- "ESTALE": 2
}, - "state_durations": {
- "preparing": 3
}, - "broken_entry_count": 25
}, - "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "crawler_options": {
- "depth": 0,
- "ignore_dirs": [
- "string"
], - "ignore_files": [
- "string"
], - "num_workers": 0,
- "posix_acl": true,
- "samqfs": true,
- "startpoint": "",
- "startpoints": [
- ""
], - "win_acl": true,
- "win_attr": true,
- "win_backup_privilege": "string",
- "xattrs": true,
- "xattrs_regex": "string"
}, - "loading_priority": 0,
- "overlapping_check_disabled": false,
- "requested_by": "gui",
- "type": "diff",
- "volume": "volume_name"
}
], - "next_page_params": {
- "filters": {
- "type": [
- "monitor",
- "mtime"
]
}, - "limit": 10,
- "sort_order": 1,
- "paging_offset": 51
}, - "matched_scans_count": 70
}| scan_id required | integer |
| reason | string Enum: "some_chunks_not_applied" "invalid_chunk_file" "invalid_mapping_file" "invalid_result_file" "volume_deleted" "crawler_failed" "event_monitor_failed" "timed_out" "unknown_scan_type" "unexpected_exception" "job_stopped" "service_stop" "manual" the reason for scan stop |
| ignore_unavailable_agent | boolean Default: false Stop the scan even if the running agent is not available |
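The request body for this call is small enough to validate client-side against the documented enum before sending. A minimal sketch, assuming only the two fields shown here (the endpoint path is not shown in this excerpt, so the HTTP call itself is omitted):

```python
import json

# Allowed `reason` values, copied from the enum above.
STOP_REASONS = frozenset({
    "some_chunks_not_applied", "invalid_chunk_file", "invalid_mapping_file",
    "invalid_result_file", "volume_deleted", "crawler_failed",
    "event_monitor_failed", "timed_out", "unknown_scan_type",
    "unexpected_exception", "job_stopped", "service_stop", "manual",
})

def build_stop_scan_body(reason, ignore_unavailable_agent=False):
    """Return the JSON body for the stop-scan call, rejecting reasons
    that are not part of the documented enum."""
    if reason not in STOP_REASONS:
        raise ValueError(f"unknown stop reason: {reason!r}")
    return json.dumps({
        "reason": reason,
        "ignore_unavailable_agent": ignore_unavailable_agent,
    })
```

`build_stop_scan_body("service_stop")` produces the same body as the example below.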
{
  "reason": "service_stop",
  "ignore_unavailable_agent": false
}
{- "agent_state": "active",
- "agent_heard": 0,
- "status": "done",
- "crawler": {
- "added_dirs": 50,
- "added_files": 100,
- "broken_entries": {
- "item_empty_file_name": 3,
- "item_non_utf8_name": 2,
- "ignored_huge_dir": 1,
- "scandir_timeout": 2,
- "warning_large_dir": 3,
- "item_path_too_long": 2,
- "item_bad_symlink": 1,
- "item_startpoint_not_a_dir": 2,
- "posix_acl_error": 3,
- "windows_acl_error": 2,
- "samfs_error": 1,
- "ENOENT": 1,
- "ESTALE": 2
}, - "state_durations": {
- "preparing": 3
}, - "broken_entry_count": 25
}, - "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "crawler_options": {
- "depth": 0,
- "ignore_dirs": [
- "string"
], - "ignore_files": [
- "string"
], - "num_workers": 0,
- "posix_acl": true,
- "samqfs": true,
- "startpoint": "",
- "startpoints": [
- ""
], - "win_acl": true,
- "win_attr": true,
- "win_backup_privilege": "string",
- "xattrs": true,
- "xattrs_regex": "string"
}, - "loading_priority": 0,
- "overlapping_check_disabled": false,
- "requested_by": "gui",
- "type": "diff",
- "volume": "volume_name"
}
{- "agent_state": "active",
- "agent_heard": 0,
- "status": "done",
- "crawler": {
- "added_dirs": 50,
- "added_files": 100,
- "broken_entries": {
- "item_empty_file_name": 3,
- "item_non_utf8_name": 2,
- "ignored_huge_dir": 1,
- "scandir_timeout": 2,
- "warning_large_dir": 3,
- "item_path_too_long": 2,
- "item_bad_symlink": 1,
- "item_startpoint_not_a_dir": 2,
- "posix_acl_error": 3,
- "windows_acl_error": 2,
- "samfs_error": 1,
- "ENOENT": 1,
- "ESTALE": 2
}, - "state_durations": {
- "preparing": 3
}, - "broken_entry_count": 25
}, - "state": {
- "name": "string",
- "display_name": "string",
- "is_running": true,
- "is_successful": false,
- "is_failed": false,
- "is_final": false
}, - "crawler_options": {
- "depth": 0,
- "ignore_dirs": [
- "string"
], - "ignore_files": [
- "string"
], - "num_workers": 0,
- "posix_acl": true,
- "samqfs": true,
- "startpoint": "",
- "startpoints": [
- ""
], - "win_acl": true,
- "win_attr": true,
- "win_backup_privilege": "string",
- "xattrs": true,
- "xattrs_regex": "string"
}, - "loading_priority": 0,
- "overlapping_check_disabled": false,
- "requested_by": "gui",
- "type": "diff",
- "volume": "volume_name"
}
| volume_name required | string name of volume |
| agent_address | string |
| disable_full_scan | boolean |
| extra_monitor_args | Array of strings |
| start_scan_on_agent | boolean |
{
  "agent_address": "string",
  "disable_full_scan": true,
  "extra_monitor_args": [
    "string"
  ],
  "start_scan_on_agent": true
}
| volume_name required | string name of volume |
| action required | string Enum: "resume" "pause" "stop" |
| agent_address | string |
{
  "action": "resume",
  "agent_address": "string"
}
| sort_by | string Enum: "cron" "next_run_timestamp" "path" "template" "volume_name" Example: sort_by=cron Sort by the given fields. Separate multiple fields with whitespace or commas. Prefix a field with '+' or '-' to sort ascending or descending (ascending by default). |
| vol_and_path_list | string Example: vol_and_path_list=vol1:/path/on/vol1,vol2:/path/on/vol2 Filter the output by returning only the entries with matching volumes and paths |
| confidential | boolean Default: false If enabled, fields that may contain confidential info will be replaced either with |
[
  {
    "volume_name": "foo",
    "fs_entry_path": "path/to/dir",
    "schedule": {
      "template": "diff",
      "next_run_timestamp": 1558724400,
      "cron": "0 21 * * *"
    }
  }
]
| volume_name required | string name of volume |
| path | string Path to the directory whose cron entries should be listed. Defaults to '', which means the root of the volume. Entries from the whole subtree are returned. |
[
  {
    "volume_name": "foo",
    "fs_entry_path": "path/to/dir",
    "schedule": {
      "template": "diff",
      "next_run_timestamp": 1558724400,
      "cron": "0 21 * * *"
    }
  }
]
| volume_name required | string name of volume |
| template required | string Name of the Starfish template to start when the schedule fires. |
| cron required | string Cron expression describing when the template should run. |
| path | string Path to the directory for which to set the cron entry. Defaults to '', the root of the volume. |
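Putting the three parameters above together, a client can assemble the create-schedule body as a small sketch. The 5-field check is a client-side sanity test only, based on the `0 21 * * *` example in this reference; the server's accepted cron syntax may be wider:

```python
import json

def build_schedule_body(template, cron, path=""):
    """JSON body for creating a cron entry: `template` and `cron` are
    required; `path` defaults to '' (the root of the volume)."""
    if len(cron.split()) != 5:
        raise ValueError("expected a 5-field cron expression, e.g. '0 21 * * *'")
    return json.dumps({"template": template, "cron": cron, "path": path})
```

For example, `build_schedule_body("diff", "0 21 * * *")` matches the schedule shown in the example below (the server computes `next_run_timestamp` itself).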
{
  "template": "diff",
  "next_run_timestamp": 1558724400,
  "cron": "0 21 * * *"
}
| volume_name required | string name of volume |
| path | string Path to the directory whose cron entries should be removed. Defaults to '', the root of the volume. Schedules set on subdirectories of this path are not removed. |
| template | string Name of the template to remove. If multiple entries use a template with the same name, this call removes all of them. |
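The remove-schedule call takes its optional filters as parameters; a minimal sketch of building them, assuming they are sent as query parameters (the endpoint path and parameter location are not shown in this excerpt):

```python
from urllib.parse import urlencode

def build_schedule_delete_query(path="", template=None):
    """Query parameters for removing cron entries on a volume.

    Omitting `template` removes matching entries regardless of template
    name; entries on subdirectories of `path` are never touched.
    """
    params = {"path": path}
    if template is not None:
        params["template"] = template
    return urlencode(params)
```

For example, `build_schedule_delete_query(path="a/b", template="diff")` removes only `diff` schedules set directly on `a/b`.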
| full_config | boolean Default: false If false, configuration for queries, commands and upload is trimmed from reported config. |
| no_obfuscation | boolean Default: false If |
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
| custom_timeout | integer or null If defined, status_pool_timeout, request_timeout and status_timeout will be replaced. |
{- "now_ts": 1679573885.295482,
- "now_hum": "2023-03-23 13:18:05 +0100",
- "sfhome": "string",
- "client_version": "string",
- "license": {
- "out_data": {
- "license": {
- "username": "starfish",
- "expires": "2018-07-13",
- "expires_timestamp": 1634075999.999,
- "expired": false,
- "proof_of_concept": false,
- "zones_count": 0
}, - "features": {
- "backup": false,
- "excludes": true,
- "fs-monitor": true,
- "fsentry": false,
- "job": true,
- "query": false,
- "reports": false,
- "reports:basic": false,
- "reports:analytics": false,
- "scan": true,
- "scheduling": false,
- "tag": true,
- "volume": false
}, - "comment": "string",
- "licenses": {
- "/opt/starfish/etc/license": {
- "all_enabled": false,
- "comment": "string",
- "disabled": [
- "string"
], - "enabled": [
- "string"
], - "expires": "2018-07-13",
- "proof_of_concept": false,
- "username": "starfish",
- "zones_count": 0
}
}, - "features_expirations": {
- "backup": null,
- "fsentry": 1626253529.999,
- "job": null,
- "tag": 1626253529.999,
- "query": 1626253529.999,
- "reports": 1626253529.999,
- "reports:basic": 1626253529.999,
- "reports:analytics": null,
- "excludes": 1626253529.999,
- "fs-monitor": 1626253529.999,
- "scan": 1626253529.999,
- "scheduling": 1626253529.999,
- "volume": 1626253529.999
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "loaded_config": {
- "out_data": {
- "secret_key": "***",
- "ssl_certificate_file": "/opt/starfish/etc/sf-ssl.crt",
- "ssl_private_key_file": "/opt/starfish/etc/sf-ssl.key",
- "temp": { },
- "config": { },
- "volumes": { },
- "scans": { },
- "dispatcher": { },
- "auth": {
- "pam_service_file": "starfish"
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "services": {
- "config": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
], - "unofficial_properties": [
- "string"
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "volumes": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "scans": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "archive": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "gateway": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "auth": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "cron": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
], - "last_checkpoint": 0
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "pgloader": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "dirs_per_scan": { },
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "dispatcher": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "temp": {
- "out_data": {
- "url": "string",
- "version": "string",
- "gather_time_sec": 0,
- "status": {
- "storage_path": "string",
- "usage": {
- "total": 0,
- "used": 0,
- "free": 0,
- "percent": 0
}, - "partition": {
- "device": "string",
- "mountpoint": "string",
- "fstype": "string",
- "opts": "string"
}, - "dirs_per_scan": {
- "number_of_files": 0,
- "size": 0
}, - "storage_free_disk_space_gib_alert_threshold": 0,
- "storage_free_disk_space_percentage_alert_threshold": 0
}
}, - "url": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}
}, - "loki": {
- "out_data": {
- "status": {
- "name": "[service name]",
- "status": "UP",
- "started_at": 1622183490.7162066,
- "service_pid": [
- 19842
]
}, - "metrics": {
- "loki_panic_total": 0,
- "process_cpu_seconds_total": 15.55,
- "process_resident_memory_bytes": 26107904,
- "process_start_time_seconds": 1622183514.39
}, - "url": "string",
- "status_message": "string"
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "agents": {
- "property1": {
- "out_data": {
- "url": "string",
- "version": "string",
- "status": {
- "name": "string",
- "status": "string",
- "started_at": 0,
- "service_pid": [
- 0
], - "service_statuses": { },
- "agent_address": "string",
- "unofficial_properties": [
- "string"
], - "loaded_config": {
- "secret_key": "***",
- "ssl_certificate_file": "/opt/starfish/etc/sf-ssl.crt",
- "ssl_private_key_file": "/opt/starfish/etc/sf-ssl.key",
- "temp": { },
- "config": { },
- "volumes": { },
- "scans": { },
- "dispatcher": { },
- "auth": {
- "pam_service_file": "starfish"
}
}
}, - "gather_time_sec": 0
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "property2": {
- "out_data": {
- "url": "string",
- "version": "string",
- "status": {
- "name": "string",
- "status": "string",
- "started_at": 0,
- "service_pid": [
- 0
], - "service_statuses": { },
- "agent_address": "string",
- "unofficial_properties": [
- "string"
], - "loaded_config": {
- "secret_key": "***",
- "ssl_certificate_file": "/opt/starfish/etc/sf-ssl.crt",
- "ssl_private_key_file": "/opt/starfish/etc/sf-ssl.key",
- "temp": { },
- "config": { },
- "volumes": { },
- "scans": { },
- "dispatcher": { },
- "auth": {
- "pam_service_file": "starfish"
}
}
}, - "gather_time_sec": 0
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}
}, - "volumes": {
- "out_data": [
- {
- "id": 1,
- "vol": "foo",
- "display_name": "/mnt/foo/",
- "inode": 657,
- "store_win_acl": null,
- "store_posix_acl": false,
- "total_capacity": 31231231237654,
- "capacity_set_manually": false,
- "free_space": 333222111000,
- "free_space_set_manually": true,
- "mounts": {
- "http://agent1:30002": "/media/foo",
- "http://agent2:30002": "/mnt/foo"
}, - "mount_opts": {
- "http://agent1:30002": "rw,relatime",
- "http://agent2:30002": "rw,relatime,vers=3.0,username=nfsuser,addr=1.2.3.4"
}, - "dir_excludes": [
- ".snapshot*",
- "~snapshot*",
- ".zfs"
], - "file_excludes": [ ],
- "ignored_dir_stat_fields": [
- "st_mtime"
], - "ignored_file_stat_fields": [
- "st_mtime"
], - "user_params": { },
- "type": "Linux",
- "last_full_diff_scan_date": 1657670400
}
], - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "disk": {
- "out_data": {
- "partitions": [
- {
- "filesystem": "string",
- "size": 0,
- "used": 0,
- "avail": 0,
- "use_perc": 0,
- "fstype": "string",
- "mount": "string"
}
]
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "host": {
- "out_data": {
- "cpu_count_physical": 4,
- "cpu_count_logical": 4,
- "cpu_model": "Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz",
- "memory_total_gib": 15.56
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "postgres": {
- "pg_db": {
- "db_partitioning": {
- "out_data": true,
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "db_stats": {
- "out_data": {
- "name": "string",
- "size": 0,
- "pg_version": {
- "raw": "string",
- "major": 0,
- "major_sub": 0,
- "minor": 0
}, - "started_at": 0
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "schemas_stats": {
- "out_data": {
- "sf": {
- "size": 373252096,
- "index_size": 222322688,
- "toast_size": 253952,
- "live_tuples": 300249,
- "dead_tuples": 180849
}, - "sf_scans": {
- "size": 1073152,
- "index_size": 180224,
- "toast_size": 49152,
- "live_tuples": 792,
- "dead_tuples": 210
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "tables_stats": {
- "out_data": {
- "sf.file_current": {
- "size": 246128640,
- "index_size": 182255616,
- "toast_size": 8192,
- "live_tuples": 245680,
- "dead_tuples": 128246,
- "reltuples": 245680,
- "relpages": 7792
}, - "sf.dir_current": {
- "size": 91504640,
- "index_size": 31145984,
- "toast_size": 8192,
- "live_tuples": 23051,
- "dead_tuples": 52603,
- "reltuples": 23051,
- "relpages": 7363
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "tables_vacuum_times": {
- "out_data": {
- "sf.dir_current": {
- "n_mod_since_analyze": 0,
- "last_vacuum_time": null,
- "last_analyze_time": 1622183509.45088,
- "running_vacuum_start_time": null
}, - "sf.dir_history": {
- "n_mod_since_analyze": 0,
- "last_vacuum_time": null,
- "last_analyze_time": 1622183510.28933,
- "running_vacuum_start_time": null
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "tables_vacuum_progress": {
- "out_data": {
- "sf.file_current": {
- "phase": "vacuuming indexes",
- "heap_blks_total": 270855803,
- "heap_blks_scanned": 270855803,
- "heap_blks_vacuumed": 0,
- "index_vacuum_count": 0,
- "max_dead_tuples": 178956969,
- "num_dead_tuples": 151344321
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "pg_invalid_indexes": {
- "out_data": [
- "string"
], - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "pg_invalid_constraints": {
- "out_data": [
- "string"
], - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "pg_migrations": {
- "out_data": [
- {
- "pid": "6048",
- "state": "idle",
- "query": "COMMIT",
- "application_name": "migrator-6045",
- "xact_start": "",
- "query_start": "2021-04-23 10:32:24.353323+02",
- "now": "2021-04-23 10:32:28.230993+02"
}, - {
- "pid": "20444",
- "state": "idle",
- "query": "COMMIT",
- "application_name": "migrator-20438",
- "xact_start": "",
- "query_start": "2021-04-22 14:58:24.43139+02",
- "now": "2021-04-23 10:32:28.231029+02"
}
], - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "hostname": {
- "out_data": "string",
- "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}
}, - "pg_metrics": {
- "out_data": {
- "meta": { },
- "backends": { },
- "tablespaces": { },
- "tables": { },
- "indexes": { },
- "sequences": { },
- "extensions": { },
- "statements": { },
- "system": {
- "cpu_model": "string",
- "num_cores": 0,
- "loadavg": 0,
- "memused": 0,
- "memfree": 0,
- "membuffers": 0,
- "memcached": 0,
- "swapused": 0,
- "swapfree": 0,
- "hostname": "string",
- "memslab": 0
}, - "settings": { },
- "locks": [
- { }
], - "locations": {
- "property1": {
- "mountpoint": "string",
- "real_location": "string",
- "local": true,
- "disk_free": 0,
- "disk_free_percent": 0,
- "disk_used": 0,
- "disk_total": 0,
- "inodes_used": 0,
- "inodes_total": 0,
- "disk_free_alert": 0,
- "disk_free_percent_alert": 0
}, - "property2": {
- "mountpoint": "string",
- "real_location": "string",
- "local": true,
- "disk_free": 0,
- "disk_free_percent": 0,
- "disk_used": 0,
- "disk_total": 0,
- "inodes_used": 0,
- "inodes_total": 0,
- "disk_free_alert": 0,
- "disk_free_percent_alert": 0
}
}, - "gather_time_sec": { }
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "pg_backrest": {
- "version": {
- "out_data": {
- "raw": "string",
- "major": 0,
- "major_sub": 0,
- "minor": 0
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "stanzas": {
- "out_data": {
- "property1": {
- "archive": [
- {
- "database": {
- "id": 0
}, - "id": "string",
- "max": "string",
- "min": "string"
}
], - "backup": [
- {
- "archive": {
- "start": "string",
- "stop": "string"
}, - "backrest": {
- "format": 0,
- "version": "string"
}, - "database": {
- "id": "string",
- "repo-key": "string"
}, - "info": {
- "delta": 0,
- "repository": {
- "delta": null,
- "size": null
}, - "size": 0
}, - "label": "string",
- "prior": "string",
- "reference": "string",
- "timestamp": {
- "start": 0,
- "stop": 0
}, - "type": "string"
}
], - "cipher": "string",
- "db": [
- { }
], - "status": {
- "code": 0,
- "lock": {
- "backup": {
- "held": true
}
}, - "message": "string"
}
}, - "property2": {
- "archive": [
- {
- "database": {
- "id": 0
}, - "id": "string",
- "max": "string",
- "min": "string"
}
], - "backup": [
- {
- "archive": {
- "start": "string",
- "stop": "string"
}, - "backrest": {
- "format": 0,
- "version": "string"
}, - "database": {
- "id": "string",
- "repo-key": "string"
}, - "info": {
- "delta": 0,
- "repository": {
- "delta": null,
- "size": null
}, - "size": 0
}, - "label": "string",
- "prior": "string",
- "reference": "string",
- "timestamp": {
- "start": 0,
- "stop": 0
}, - "type": "string"
}
], - "cipher": "string",
- "db": [
- { }
], - "status": {
- "code": 0,
- "lock": {
- "backup": {
- "held": true
}
}, - "message": "string"
}
}
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}, - "gather_time_sec": 0.059895
}, - "error": {
- "module": "concurrent.futures._base",
- "class": "TimeoutError",
- "descr": "Requesting data timed out after 10 seconds",
- "traceback": [
- "Traceback (most recent call last):",
- " File \"src/sfutils/http.py\", line 426, in send",
- " File \"src/sfutils/http.py\", line 179, in retry_http_function",
- " File \"src/sfutils/http.py\", line 393, in _send_request_and_check_response",
- "sfutils.http_errors.CommunicationError: Could not connect to agent: [Errno 113] No route to host",
- ""
]
}
}
}
}{- "version": "1.0",
- "items": [
]
}| volume_name required | string name of volume |
{- "version": "1.0",
- "items": [
- {
- "ATime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "Basename": "dir_0",
- "CTime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "File-Mode": "drwxrwxrwx",
- "Group-Id": 0,
- "Mode": 511,
- "MTime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "Size": 4096,
- "Type": 16384,
- "User-Id": 0,
- "Tags-Explicit": [ ],
- "Tags-Inherited": [ ]
}, - {
- "ATime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "Basename": "file_0",
- "CTime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "File-Mode": "-rw-rw-rw-",
- "Group-Id": 1000,
- "Mode": 438,
- "MTime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "Size": 100,
- "Type": 32768,
- "User-Id": 1000,
- "Tags-Explicit": [ ],
- "Tags-Inherited": [ ]
}
]
}| volume_name required | string name of volume |
| path required | string |
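Each listing entry carries POSIX metadata numerically: `Mode` holds the permission bits (511 is octal 0777, 438 is 0666) and `Type` holds the `st_mode` file-type bits (16384 is `S_IFDIR`, 32768 is `S_IFREG`). A minimal sketch of decoding these two fields back into the `File-Mode` string, using Python's standard `stat` module:

```python
import stat

def describe_entry(entry: dict) -> str:
    """Render the numeric Mode/Type fields of a listing entry
    as an ls-style string such as 'drwxrwxrwx'."""
    # Type carries the file-type bits and Mode the permission bits;
    # OR-ing them reconstructs a full st_mode value for stat.filemode().
    return stat.filemode(entry["Type"] | entry["Mode"])

print(describe_entry({"Type": 16384, "Mode": 511}))  # drwxrwxrwx
print(describe_entry({"Type": 32768, "Mode": 438}))  # -rw-rw-rw-
```

This mirrors the `dir_0` and `file_0` entries in the response example for this endpoint.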
{- "version": "1.0",
- "items": [
- {
- "ATime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "Basename": "dir_0",
- "CTime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "File-Mode": "drwxrwxrwx",
- "Group-Id": 0,
- "Mode": 511,
- "MTime": "Mon, 25 Apr 2016 13:30:00 GMT",
- "Size": 4096,
- "Type": 16384,
- "User-Id": 0,
- "Tags-Explicit": [ ],
- "Tags-Inherited": [ ]
}, - {
- "ATime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "Basename": "file_0",
- "CTime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "File-Mode": "-rw-rw-rw-",
- "Group-Id": 1000,
- "Mode": 438,
- "MTime": "Mon, 25 Apr 2016 13:35:00 GMT",
- "Size": 100,
- "Type": 32768,
- "User-Id": 1000,
- "Tags-Explicit": [ ],
- "Tags-Inherited": [ ]
}
]
}| volume_name required | string name of volume |
| path required | string |
| tag.add | Array of strings list of tags which will be added to entry |
| tag.set | Array of strings list of tags which will overwrite tags currently attached to entry |
| tag.delete | Array of strings list of tags which will be removed from entry |
{- "tag.add": [
- "string"
], - "tag.set": [
- "string"
], - "tag.delete": [
- "string"
]
}{ }Retrieve a summary for a given volume and paths, grouped by the given column, for each defined action_tag
| volumes_and_paths | Array of strings Default: [] Name of the volume and path as |
| query | string Default: "" List of filters which should be applied to summarize query |
| group_by | string (group_by_single_simple_key) Enum: "at" "blck" "ct" "depth" "ext" "fn" "gid" "groupname" "groupsid" "ino" "mode" "mt" "parent_id" "parent_path" "size" "uid" "username" "usersid" "volume" "volume_id" Group result by a single field. |
{- "volumes_and_paths": [
- "vol1:path2",
- "vol2:path2"
], - "query": "type=f size=10K-999P",
- "group_by": "username"
}{- "summarize_id": "summarize_19700101_010000"
}Check if summarize query is already done.
| summarize_id required | string Id of started summarize query |
{- "summarize_id": "summarize_19700101_010000",
- "is_done": true
}Get results of finished summarize query
| summarize_id required | string Id of started summarize query |
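Summarize is asynchronous: one call starts the query, a second endpoint reports whether it is done, and a third returns the rows. A sketch of the resulting poll loop; `post` and `get` stand in for authenticated HTTP calls (e.g. thin wrappers around `requests` with the Bearer token set), and the exact endpoint paths used below are assumptions for illustration:

```python
import time

def run_summarize(post, get, payload, poll_interval=1.0, timeout=60.0):
    """Drive the three-step summarize workflow:
    start the query, poll until is_done, then fetch the results.
    `post(path, body)` and `get(path)` are injected HTTP callables,
    so the flow can be exercised without a live server."""
    summarize_id = post("/api/summarize/", payload)["summarize_id"]
    deadline = time.monotonic() + timeout
    while not get(f"/api/summarize/{summarize_id}/is_done")["is_done"]:
        if time.monotonic() > deadline:
            raise TimeoutError(f"summarize {summarize_id} did not finish")
        time.sleep(poll_interval)
    return get(f"/api/summarize/{summarize_id}/results")
```

The injected callables also make the loop trivial to unit-test with canned responses shaped like the examples in this section.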
[- {
- "action_tag": "TO_DELETE",
- "count": 12,
- "size_sum": 12345,
- "group_by_value": "txt"
}
]| with_tagset | boolean If set to true, tags are returned with tagsets in the format {tagset}:{tag} (the default tagset is printed as an empty string, so all tags in the default tagset are printed as ':{tag}'). By default tags are returned without tagset; in that case tags with the same name are returned as one. |
| in_tagset | string List only tags from given tagset. To list tags in default tagset provide empty string. If not provided, tags from all tagsets are returned. |
| with_private | string Also list tags that are in private tagsets (tagsets whose name starts with '__'). This flag is true by default. |
| limit | integer Maximum number of returned tags. |
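When `with_tagset` is enabled, each returned tag is a `{tagset}:{tag}` string, with the default tagset rendered as an empty string. A small sketch (hypothetical helper, not part of the API) for splitting such entries back into their parts:

```python
def split_tag(entry: str) -> tuple[str, str]:
    """Split a '{tagset}:{tag}' entry into (tagset, tag).
    A bare name with no colon belongs to the default tagset,
    which this API renders as an empty string."""
    tagset, sep, tag = entry.partition(":")
    if not sep:
        # No colon at all: a plain tag in the default tagset.
        return "", entry
    return tagset, tag

print(split_tag("tagset1:tag1"))  # ('tagset1', 'tag1')
print(split_tag(":tag2"))         # ('', 'tag2')
print(split_tag("tag3"))          # ('', 'tag3')
```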
{- "tags": [
- "tag1",
- "tag2",
- "tag3",
- "tagset1:tag1"
]
}Which tags to add to which paths. If the list of paths is empty, the tag is created without attaching it to any path.
| paths | Array of strings (volume_and_path) list of paths as |
| tags | Array of strings list of tags |
| strict | boolean Default: false If set to true, the request fails if any of the requested paths to tag is nonexistent. If set to false and some of the requested paths to tag are not found, the number of paths that were found in the database is returned in 'existing_paths_count'. |
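With `strict` set to false, a partially successful request still succeeds; the counters in the response tell you how much actually happened. A sketch (hypothetical helper) that turns a tag-attach response like the example in this section into a readable summary:

```python
def interpret_tag_response(resp: dict, requested_paths: int) -> str:
    """Summarize a non-strict tag-attach response.
    existing_paths_count reports how many of the requested paths
    were actually found in the database."""
    found = resp.get("existing_paths_count", requested_paths)
    missing = requested_paths - found
    parts = [f"added {resp.get('added_tags_count', 0)} tags to {found} paths"]
    if missing > 0:
        parts.append(f"{missing} requested paths not found")
    if resp.get("errors"):
        parts.append(f"{len(resp['errors'])} errors reported")
    return "; ".join(parts)

resp = {"errors": ["projects: volume not found", "dir1/dir2: no volume name"],
        "existing_paths_count": 5, "added_tags_count": 7}
print(interpret_tag_response(resp, requested_paths=7))
# added 7 tags to 5 paths; 2 requested paths not found; 2 errors reported
```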
{- "paths": [
- "projects:dir1/dir2",
- "usr:foo/bar"
], - "tags": [
- "tag1",
- "tag2",
- "tagset1:tag1"
], - "strict": false
}{- "errors": [
- "projects: volume not found",
- "dir1/dir2: no volume name"
], - "existing_paths_count": 5,
- "added_tags_count": 7
}Set of tags to be detached and set of paths from which to detach. Both sets should be non-empty.
| paths | Array of strings (volume_and_path) list of paths as 'volume:path' |
| tags | Array of strings list of tags |
{- "paths": [
- "projects:dir1/dir2",
- "usr:foo/bar"
], - "tags": [
- "tag1",
- "tag2",
- "tagset1:tag1"
]
}{- "errors": [
- "projects: volume not found",
- "dir1/dir2: no volume name"
], - "untagged_count": 5
}If the volume name is not given, or is null, the tag is removed from entries on all volumes and will no longer be available unless reintroduced. If the tag name is not given, or is null, all tags in the given volume are removed.
Which tags to remove from which volumes. volume and tag cannot both be null at the same time.
| volume | string or null volume name |
| tag | string or null tag name |
{- "volume": "projects",
- "tag": "tag1"
}{- "purged_count": 3,
- "removed_tags_count": 1
}Please note that, unlike POST /tag/ on some/dir/ (which adds an explicit tag only to some/dir/ itself, while all entries under some/dir/ inherit that tag), POST /tag/pin on some/dir/ ensures that each entry in some/dir/ gains an explicit tag for every inherited tag it had before. Therefore, after POST /tag/ a_tag on some/dir/ followed by POST /tag/pin on some/dir/, querying explicit tags on some/dir/some/file will return a_tag; without pinning it will not.
| volume required | string |
| path required | string path to subtree |
| set_tagset | string name of the tagset that new explicit tags should receive |
| query | string query used to filter entries which should have tags pinned |
{- "volume": "vol1",
- "path": "path/to/some/dir",
- "set_tagset": "some_new_tagset",
- "query": "string"
}12If the tagset is not provided, the default one is assumed. Currently, combining tags is not implemented, i.e. the new tag cannot already exist.
Tag to be renamed and a new tag name
| tag required | string Tag to be renamed in |
| new_tag required | string New tag name in |
{- "tag": "tag1",
- "new_tag": "tagset:tag2"
}{- "new_tag": "tagset:tag2"
}If id is not provided, returns a list of all volumes with detailed info about crons and size info. If an id is provided, returns a single volume.
| id | integer numerical id of volume |
| add_cron_info | boolean Default: false If enabled and querying a list of volumes then also add a "cron" field with detailed information about cron entries attached to the volume. |
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
| sort_by | string Enum: "display_name" "free_space" "name" "path" "volume_id" Example: sort_by=display_name Sort by given fields. Multiple fields should be separated with some whitespace or comma. Each field could be prefixed with '+' or '-' to sort ascending or descending (default is ascending). |
| with_disk_usage | boolean Default: false If enabled and querying a list of volumes then add also "volume_size_info" field
with details about disk size. |
| with_mount_opts | boolean Default: false If enabled and querying a single volume then update the "mount_opts" field with details about mount options of the volume.\ |
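The `sort_by` parameter accepts several fields separated by whitespace or commas, each optionally prefixed with '+' or '-'. A sketch (hypothetical helper) for parsing such a value into (field, ascending) pairs on the client side:

```python
import re

def parse_sort_by(spec: str) -> list[tuple[str, bool]]:
    """Parse a sort_by value into (field, ascending) pairs.
    Fields are separated by whitespace or commas; a '+' or '-'
    prefix selects ascending/descending (ascending is the default)."""
    pairs = []
    for token in re.split(r"[\s,]+", spec.strip()):
        if not token:
            continue
        if token[0] in "+-":
            pairs.append((token[1:], token[0] == "+"))
        else:
            pairs.append((token, True))
    return pairs

print(parse_sort_by("-free_space, display_name"))
# [('free_space', False), ('display_name', True)]
```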
[- {
- "id": 5,
- "vol": "foo",
- "display_name": "/mnt/foo/",
- "inode": 657,
- "inode_str": "657",
- "store_win_acl": null,
- "store_posix_acl": false,
- "total_capacity": 31231231237654,
- "capacity_set_manually": false,
- "free_space": 333222111000,
- "free_space_set_manually": true,
- "cron": [
- {
- "vol": "home",
- "path": "/",
- "cron": "0 2 * * wed#2",
- "template": "diff",
- "next_run_timestamp": 1657670400,
- "next_run_hum": "2022-07-13 02:00:00"
}
], - "mounts": {
- "http://agent1:30002": "/media/foo",
- "http://agent2:30002": "/mnt/foo"
}, - "mount_opts": {
- "http://agent1:30002": "rw,relatime",
- "http://agent2:30002": "rw,relatime,vers=3.0,username=nfsuser,addr=1.2.3.4"
}, - "dir_excludes": [
- ".snapshot*",
- "~snapshot*",
- ".zfs"
], - "file_excludes": [ ],
- "ignored_dir_stat_fields": [
- "st_mtime"
], - "ignored_file_stat_fields": [
- "st_mtime"
], - "user_params": { },
- "type": "Linux",
- "volume_size_info": {
- "number_of_files": 0,
- "number_of_dirs": 1,
- "sum_of_logical_sizes": 4096,
- "sum_of_logical_sizes_div_nlinks": 4096,
- "sum_of_blocks": 8,
- "sum_of_blocks_div_nlinks": 8
}, - "cron_service_up": true,
- "number_of_files": 0,
- "number_of_dirs": 1,
- "sum_of_logical_sizes_div_nlinks": 4096,
- "sum_of_logical_sizes_no_nlinks": 4096,
- "sum_of_physical_sizes_div_nlinks": 4096,
- "sum_of_blocks_div_nlinks": 8,
- "sum_of_physical_sizes_no_nlinks": 4096,
- "sum_of_blocks": 8,
- "sum_of_logical_sizes": 4096,
- "sum_of_physical_sizes": 4096
}
]| vol | string New volume name. Renaming a volume is possible only when no scan and no job is pending on the volume; redash reports also cannot be calculated at that time. |
| agent_address | string Agent address to be added to the volume (required when adding a new volume). When adding a new or modifying an existing agent, the address will be normalized (possibly enhanced with schema and/or port number). When removing an agent, a part of the address is also accepted as long as it uniquely identifies the agent. |
| root | string Path where the volume is mounted on the agent (required when adding new volume) |
| no_cron | boolean Default: false If set to true then the default daily scan cron job will not be added for the volume |
| display_name | string User-friendly name that may also contain characters that are forbidden in |
| default_agent_address | string The agent that will be used to scan volume when no agent provided in scan request. |
| dir_excludes | Array of strings directories (glob patterns allowed) to be excluded during scanning |
| file_excludes | Array of strings filenames (glob patterns are allowed) to be excluded during scanning |
| ignored_dir_stat_fields | Array of strings If only fields from this list differ between db and fs, CHANGE event is not triggered. Applies to directories. These values are set per given volume, in addition to the global values in config, which apply to all volumes. Possible fields are: st_mode, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, st_blocks, st_nlink, st_ino |
| ignored_file_stat_fields | Array of strings If only fields from this list differ between db and fs, CHANGE event is not triggered. Applies to non-directories. These values are set per given volume, in addition to the global values in config, which apply to all volumes. Possible fields are: st_mode, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, st_blocks, st_nlink, st_ino |
| store_win_acl | boolean Default: true Only applies to Windows volumes - cannot be set on Linux volumes. If enabled will store also Windows access control lists when scanning this volume. |
| store_win_attr | boolean Default: false Only applies to Windows volumes - cannot be set on Linux volumes. If enabled will store also Windows file attributes (read-only, hidden, etc.) when scanning this volume. |
| store_posix_acl | boolean Default: false Store also POSIX access control lists when scanning this volume. This may be slow. |
| total_capacity | number Capacity of the volume. This will be ignored if |
| capacity_set_manually | boolean If set to false then |
| free_space | number free space of the volume. This will be ignored if |
| free_space_set_manually | boolean If set to false then |
| user_params | object Map of user parameters. This is a good place to store properties about the volume. Users
may define any string-to-string pair. Some of them are used by SF internally (example: |
| type | string OS on which volume should be mounted ('Linux' or 'Windows') |
{- "vol": "foo",
- "agent_address": "string",
- "root": "string",
- "no_cron": false,
- "display_name": "string",
- "default_agent_address": "string",
- "dir_excludes": [
- "string"
], - "file_excludes": [
- "string"
], - "ignored_dir_stat_fields": [
- "string"
], - "ignored_file_stat_fields": [
- "string"
], - "store_win_acl": true,
- "store_win_attr": false,
- "store_posix_acl": false,
- "total_capacity": 0,
- "capacity_set_manually": true,
- "free_space": 0,
- "free_space_set_manually": true,
- "user_params": {
- "cost_per_gb": "0.0244",
- "location": "West Coast"
}, - "type": "string"
}{- "volume": {
- "id": 1,
- "vol": "foo",
- "display_name": "/mnt/foo/",
- "inode": 657,
- "inode_str": "657",
- "store_win_acl": null,
- "store_posix_acl": false,
- "total_capacity": 31231231237654,
- "capacity_set_manually": false,
- "free_space": 333222111000,
- "free_space_set_manually": true,
- "mounts": {
- "http://agent1:30002": "/media/foo",
- "http://agent2:30002": "/mnt/foo"
}, - "mount_opts": {
- "http://agent1:30002": "rw,relatime",
- "http://agent2:30002": "rw,relatime,vers=3.0,username=nfsuser,addr=1.2.3.4"
}, - "dir_excludes": [
- ".snapshot*",
- "~snapshot*",
- ".zfs"
], - "file_excludes": [ ],
- "ignored_dir_stat_fields": [
- "st_mtime"
], - "ignored_file_stat_fields": [
- "st_mtime"
], - "user_params": { },
- "type": "Linux"
}, - "created": true
}| volume_name required | string name of volume |
| add_cron_info | boolean Default: false If enabled and querying a list of volumes then also add a "cron" field with detailed information about cron entries attached to the volume. |
| with_disk_usage | boolean Default: false If enabled and querying a list of volumes then add also "volume_size_info" field
with details about disk size. |
| with_mount_opts | boolean Default: false If enabled and querying a single volume then update the "mount_opts" field with details about mount options of the volume.\ |
{- "id": 5,
- "vol": "foo",
- "display_name": "/mnt/foo/",
- "inode": 657,
- "inode_str": "657",
- "store_win_acl": null,
- "store_posix_acl": false,
- "total_capacity": 31231231237654,
- "capacity_set_manually": false,
- "free_space": 333222111000,
- "free_space_set_manually": true,
- "cron": [
- {
- "vol": "home",
- "path": "/",
- "cron": "0 2 * * wed#2",
- "template": "diff",
- "next_run_timestamp": 1657670400,
- "next_run_hum": "2022-07-13 02:00:00"
}
], - "mounts": {
- "http://agent1:30002": "/media/foo",
- "http://agent2:30002": "/mnt/foo"
}, - "mount_opts": {
- "http://agent1:30002": "rw,relatime",
- "http://agent2:30002": "rw,relatime,vers=3.0,username=nfsuser,addr=1.2.3.4"
}, - "dir_excludes": [
- ".snapshot*",
- "~snapshot*",
- ".zfs"
], - "file_excludes": [ ],
- "ignored_dir_stat_fields": [
- "st_mtime"
], - "ignored_file_stat_fields": [
- "st_mtime"
], - "user_params": { },
- "type": "Linux",
- "volume_size_info": {
- "number_of_files": 0,
- "number_of_dirs": 1,
- "sum_of_logical_sizes": 4096,
- "sum_of_logical_sizes_div_nlinks": 4096,
- "sum_of_blocks": 8,
- "sum_of_blocks_div_nlinks": 8
}, - "cron_service_up": true,
- "number_of_files": 0,
- "number_of_dirs": 1,
- "sum_of_logical_sizes_div_nlinks": 4096,
- "sum_of_logical_sizes_no_nlinks": 4096,
- "sum_of_physical_sizes_div_nlinks": 4096,
- "sum_of_blocks_div_nlinks": 8,
- "sum_of_physical_sizes_no_nlinks": 4096,
- "sum_of_blocks": 8,
- "sum_of_logical_sizes": 4096,
- "sum_of_physical_sizes": 4096
}| volume_name required | string name of volume |
| skip_check_on_agent | boolean Default: false Do not request the agent to verify that agent_address and root are valid. |
| vol | string New volume name. Renaming a volume is possible only when no scan and no job is pending on the volume; redash reports also cannot be calculated at that time. |
| agent_address | string Agent address to be added to the volume (required when adding a new volume). When adding a new or modifying an existing agent, the address will be normalized (possibly enhanced with schema and/or port number). When removing an agent, a part of the address is also accepted as long as it uniquely identifies the agent. |
| root | string Path where the volume is mounted on the agent (required when adding new volume) |
| no_cron | boolean Default: false If set to true then the default daily scan cron job will not be added for the volume |
| display_name | string User-friendly name that may also contain characters that are forbidden in |
| default_agent_address | string The agent that will be used to scan volume when no agent provided in scan request. |
| dir_excludes | Array of strings directories (glob patterns allowed) to be excluded during scanning |
| file_excludes | Array of strings filenames (glob patterns are allowed) to be excluded during scanning |
| ignored_dir_stat_fields | Array of strings If only fields from this list differ between db and fs, CHANGE event is not triggered. Applies to directories. These values are set per given volume, in addition to the global values in config, which apply to all volumes. Possible fields are: st_mode, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, st_blocks, st_nlink, st_ino |
| ignored_file_stat_fields | Array of strings If only fields from this list differ between db and fs, CHANGE event is not triggered. Applies to non-directories. These values are set per given volume, in addition to the global values in config, which apply to all volumes. Possible fields are: st_mode, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime, st_blocks, st_nlink, st_ino |
| store_win_acl | boolean Default: true Only applies to Windows volumes - cannot be set on Linux volumes. If enabled will store also Windows access control lists when scanning this volume. |
| store_win_attr | boolean Default: false Only applies to Windows volumes - cannot be set on Linux volumes. If enabled will store also Windows file attributes (read-only, hidden, etc.) when scanning this volume. |
| store_posix_acl | boolean Default: false Store also POSIX access control lists when scanning this volume. This may be slow. |
| total_capacity | number Capacity of the volume. This will be ignored if |
| capacity_set_manually | boolean If set to false then |
| free_space | number free space of the volume. This will be ignored if |
| free_space_set_manually | boolean If set to false then |
| user_params | object Map of user parameters. This is a good place to store properties about the volume. Users
may define any string-to-string pair. Some of them are used by SF internally (example: |
| type | string OS on which volume should be mounted ('Linux' or 'Windows') |
{- "skip_check_on_agent": true,
- "vol": "foo",
- "agent_address": "string",
- "root": "string",
- "no_cron": false,
- "display_name": "string",
- "default_agent_address": "string",
- "dir_excludes": [
- "string"
], - "file_excludes": [
- "string"
], - "ignored_dir_stat_fields": [
- "string"
], - "ignored_file_stat_fields": [
- "string"
], - "store_win_acl": true,
- "store_win_attr": false,
- "store_posix_acl": false,
- "total_capacity": 0,
- "capacity_set_manually": true,
- "free_space": 0,
- "free_space_set_manually": true,
- "user_params": {
- "cost_per_gb": "0.0244",
- "location": "West Coast"
}, - "type": "string"
}{- "id": 1,
- "vol": "foo",
- "display_name": "/mnt/foo/",
- "inode": 657,
- "inode_str": "657",
- "store_win_acl": null,
- "store_posix_acl": false,
- "total_capacity": 31231231237654,
- "capacity_set_manually": false,
- "free_space": 333222111000,
- "free_space_set_manually": true,
- "mounts": {
- "http://agent1:30002": "/media/foo",
- "http://agent2:30002": "/mnt/foo"
}, - "mount_opts": {
- "http://agent1:30002": "rw,relatime",
- "http://agent2:30002": "rw,relatime,vers=3.0,username=nfsuser,addr=1.2.3.4"
}, - "dir_excludes": [
- ".snapshot*",
- "~snapshot*",
- ".zfs"
], - "file_excludes": [ ],
- "ignored_dir_stat_fields": [
- "st_mtime"
], - "ignored_file_stat_fields": [
- "st_mtime"
], - "user_params": { },
- "type": "Linux"
}| volume_name required | string name of volume |
| remove_reports | boolean Default: false Flag that determines if redash reports data should be deleted together with volume |
{ }Returns zone object.
| name | string Zone name; a non-empty string which may contain only letters |
Array of objects (user) list of managers | |
Array of objects (zonegroup) list of managing groups | |
| paths | Array of strings (volume_and_path) list of paths as 'volume:path' |
| user_params | object Default: {} Optional dictionary of user-defined key-value pairs |
{- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "paths": [
- "projects:dir1/dir2"
], - "user_params": {
- "cost_per_gb": 0.0123,
- "purpose": "keep users data",
- "location": "2.3.b"
}
}{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Returns list of zone objects.
| confidential | boolean Default: false If enabled then fields that may contain confidential info will be replaced either with |
[- {
- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}
]Updates zone object. See "Request body" for list of fields that can be updated.
| zone_id required | integer ID of zone |
| name | string Zone name; a non-empty string which may contain only letters |
Array of objects (user) list of managers | |
Array of objects (zonegroup) list of managing groups | |
| paths | Array of strings (volume_and_path) list of paths as 'volume:path' |
| user_params | object Default: {} Optional dictionary of user-defined key-value pairs |
{- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "paths": [
- "projects:dir1/dir2"
], - "user_params": {
- "cost_per_gb": 0.0123,
- "purpose": "keep users data",
- "location": "2.3.b"
}
}{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Returns zone object.
| zone_id required | integer ID of zone |
{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Updates the user params in a zone object.
| zone_id required | integer ID of zone |
Dictionary of user-defined key-value pairs
{- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Adds or updates a single user param in a zone object.
| zone_id required | integer ID of zone |
| user_param_name required | string name (key) of the zone user parameter |
| value required | string |
{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Deletes a single user param from a zone object.
| zone_id required | integer ID of zone |
| user_param_name required | string name (key) of the zone user parameter |
{- "id": 0,
- "name": "zone_name",
- "managers": [
- {
- "system_id": 12,
- "username": "Alice"
}
], - "managing_groups": [
- {
- "system_id": 1001,
- "groupname": "admins"
}
], - "restore_managers": [
- [
- "alice",
- "bob"
]
], - "restore_managing_groups": [
- [
- "managers",
- "students"
]
], - "paths": [
- "projects:dir1/dir2"
], - "tagsets": [
- {
- "name": "tagset_name",
- "tag_names": [
- "tag1",
- "tag2"
]
}
], - "user_params": {
- "cost_per_gb": "0.0123",
- "purpose": "keep users data",
- "location": "2.3.b"
}
}Add, set or delete zone managers and/or managing groups that can restore within the zone. To allow or forbid only selected zone managers or managing groups to restore, add a user: or group: prefix to the username or groupname, respectively. To allow or forbid all zone managers and managing groups to restore within the zone, pass the zone-managers constant as a list element. When no zone manager or managing group can restore within the zone, only the SF admin can restore inside it.
| zone_id required | integer ID of zone |
| restoring_managers.add | Array of strings Add selected (or all) zone managers/managing groups as restoring managers/managing groups. Cannot be used with |
| restoring_managers.delete | Array of strings Delete selected (or all) zone managers/managing groups as restoring managers/managing groups. Cannot be used with |
| restoring_managers.set | Array of strings Set selected (or all) zone managers/managing groups as restoring managers/managing groups. If |
{- "restoring_managers.add": [
- "user:alice",
- "user:bob",
- "group:admins"
], - "restoring_managers.delete": [
- "group:managers"
]
}[- "user:alice",
- "user:bob",
- "group:admins"
]