WORK IN PROGRESS | Services are in active development and are subject to change.


The Remote CLI provides a means to run jobs on Charity Engine compute resources using simple command line tools. It can be used as a standalone command interface or via GNU Parallel (Parallel lets you manage batches of work with a single command line tool).

See Computing with Charity Engine for a contextual overview of the Charity Engine ecosystem (including additional interfaces such as Remote API, Ethereum Smart Contract, and Ethereum Dapp).


General Usage

To submit and monitor jobs using the CLI, download the latest version of the Charity Engine Remote CLI tool for your platform. Then, run the ce-cli binary, setting options as appropriate:

ce-cli [args]
OPTIONS
--version // Show version number [boolean]
--app // Application name (e.g. "charityengine:wolframengine"), docker image from Docker Hub (e.g. "docker:image-name") or a custom docker image URL (e.g. "docker:image-name https://example.com/file") [string] [required]
--auth // Authorization key [string] [required]
--batch // Arbitrary data that will be linked to a job and returned with results. Used to categorize jobs into batches, limited to 200 bytes [string]
--cache-inputs // Configures input file persistence. If enabled, input files may be cached on the compute node side for repeat computations. Should be disabled for dynamically changing data. Accepts strings "all" and "none", or a zero-indexed list of input files to cache (e.g. 0 2 would cache input files zero and two). [array] [default: "all"]
--checkpoint // Saves the state of running jobs and resumes them upon restart of the CLI. Prevents jobs with exactly identical parameters from running more than once, even after restarts of the CLI [boolean] [default: false]
--checkpointfile // Location and filename of the checkpoint file [string]
--commandline // Command line to execute. If using Docker images, command line should not include the command to execute Docker. It should be the command that will run inside the Docker container [string] [required]
--copies // Number of identical copies to execute [number] [default: 1]
--debug // Enables debug messages [boolean] [default: false]
--env // List of additional environment variables as key-value pairs to be passed to the job [array of string]
--eula // If running proprietary applications, marks whether end-user licence agreement of the application is accepted. Must be set to a string "accepted" for the jobs to be accepted into the system [string]
--exitafterstart // Exits the CLI after starting a job or checking job status. Useful for running multiple jobs in parallel without a spike in memory usage. Requires checkpointing to be enabled. After jobs are created, the CLI must be run again with the same parameters to retrieve results [boolean] [default: false]
--filechunksize // File part size to use when staging input files, in bytes [number] [default: 16384]
--hours // Maximum execution time allowed, in hours [number] [default: 1]
--inputfile // One or more input files as a local filename or a URL. If local files are specified, they will be staged to remote public URLs [array of string]
--instancetype // Sets the instance type to use for the job [string] [default: C.2x2]
--outputdir // Folder to place output files in. If not specified, output files will be placed in the working directory. Wildcard %JOBKEY% will be replaced with the job ID [string]
--pollfrequency // RPC polling frequency in seconds [number] [default: 30]
--resubmit // Resubmits jobs if checkpointing is used and jobs are already completed [boolean] [default: false]
--result-storage // Location where output files should be stored. Options are "temporary" (free for up to two weeks), or "estuary" (Filecoin; requires an Estuary API key to be specified in the --result-storage-config option) [string] [default: temporary]
--result-storage-config // Specifies any parameters that may be required when using non-default result storage destination [string]
--tag // Arbitrary data that will be linked to a job and returned with results. Limited to 1KB [string]
--useowndevices // Do not use the public network for job execution, but run on local devices instead. Useful for testing and debugging [boolean] [default: false]
--help // Show help [boolean]
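
As a minimal end-to-end example, the following submits a single one-hour job and downloads its output when it completes (the alpine image and command line are illustrative placeholders; substitute your own authorization key):

ce-cli --app "docker:alpine" --commandline "echo hello > /local/output/hello.txt" --auth [KEY] --hours 1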

Input and output file handling

Input Files. Input files are specified using the --inputfile parameter and are expected to be either URLs of files that are publicly available online, or paths to local files. If local file paths are specified, the files will be automatically staged to Charity Engine servers and made available online. Staged files are automatically deduplicated by splitting them into parts, so that only unique chunks are uploaded. The chunk size can be set with the --filechunksize parameter.
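
For instance, a single job can mix a public URL with a local file that will be staged automatically, and a larger chunk size can be used when staging large inputs (the image, command, and filenames below are placeholders):

ce-cli --app "docker:containername" --commandline "./run" --auth [KEY] --inputfile https://example.com/reference-data.bin local_config.json --filechunksize 1048576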

Caching. Input files are cached on compute hosts for a period of 14 days from last use. This caching is handled automatically when local files are given (i.e. already-existing files are not re-transferred). When files are passed via URL, however, it is important that any updated files be given different URLs from the prior versions submitted to the network (e.g. by including hashes or version numbers in the URL), to prevent stale cached copies on compute nodes from being used.
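
As a sketch, a versioned URL ensures updated data is fetched, while caching can be disabled outright for inputs that change on every run (the URLs are placeholders):

ce-cli --app "docker:containername" --commandline "./run" --auth [KEY] --inputfile https://example.com/data-v2.csv ...
ce-cli --app "docker:containername" --commandline "./run" --auth [KEY] --inputfile https://example.com/live-feed.csv --cache-inputs none ...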

Output Files. Output files from the computations will be automatically downloaded to the computer running the Remote CLI. Files will be placed in the location specified using the --outputdir command line parameter, or in the local working directory if that option is not configured.
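
For example, to keep each job's results in a separate folder, the %JOBKEY% wildcard can be included in the path (the directory name is illustrative):

ce-cli --app "docker:containername" --commandline "./run" --auth [KEY] --outputdir "results/%JOBKEY%" ...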

More info. See the overview of Input + Output Files for additional details on passing computing data in and out of Charity Engine.

Batch Capabilities

Jobs are automatically scheduled and executed on Charity Engine compute resources following submission via the Remote CLI. Simply feed jobs into the system and then retrieve the results. Instances do not need to be procured in advance and jobs do not need to be packed into time slots, as the integrated batch scheduler will match jobs to available resources and manage their execution without any compute cycles being wasted.

Integrated job copy scheduling

If many jobs use the same input file, or no input file at all, up to 500 jobs can be queued using a single CLI run by specifying the --copies parameter:

ce-cli --app "docker:containername" --commandline "./run filename" --inputfile local_inputfile --copies 500 ...

This will instantiate 500 identical containers, but each container will have a distinct environment variable named CE_JOB_COPY, counting up from zero.
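
Each copy can use that variable to select its own slice of the work. A minimal sketch, assuming the command line is interpreted by a shell inside the container (the single quotes prevent the local shell from expanding the variable before submission):

ce-cli --app "docker:containername" --commandline './run input_$CE_JOB_COPY.dat' --copies 500 ...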

GNU Parallel

Batches of jobs can be run using GNU Parallel (see Examples), or by using the --exitafterstart option to quickly launch all jobs, then running additional commands to retrieve the results. An example of the latter approach in a shell script is:

ce-cli --app "docker:containername" --commandline "./run filename1" --inputfile local_inputfile1 --checkpoint true --exitafterstart true ...
ce-cli --app "docker:containername" --commandline "./run filename2" --inputfile local_inputfile2 --checkpoint true --exitafterstart true ...
ce-cli --app "docker:containername" --commandline "./run filename3" --inputfile local_inputfile3 --checkpoint true --exitafterstart true ...
# Jobs have been created at this point, retrieve results
ce-cli --app "docker:containername" --commandline "./run filename1" --inputfile local_inputfile1 --checkpoint true ...
ce-cli --app "docker:containername" --commandline "./run filename2" --inputfile local_inputfile2 --checkpoint true ...
ce-cli --app "docker:containername" --commandline "./run filename3" --inputfile local_inputfile3 --checkpoint true ...

Testing + Troubleshooting

Testing via Command Line

To write a simple “2+2” string to a text file, launch the Remote CLI as follows:

ce-cli --app "docker:node" --commandline "echo \"2+2\" > /local/output/out.txt" --auth [KEY] --inputfile http://example.com/demo.txt local-file.txt

This will create a job that launches a Docker container from the image called node on Docker Hub and then runs echo "2+2" > /local/output/out.txt inside that container.

Testing via GNU Parallel

This simple concept can also be expanded to create 25 text files concurrently, one for each string in the series [ 1+1, 1+2, … 5+4, 5+5 ] by using GNU Parallel:

parallel -j 10000 ce-cli --app "docker:node" --commandline "echo \"{1}+{2}\" \> /local/output/out.txt" --auth [KEY] ::: 1 2 3 4 5 ::: 1 2 3 4 5

Troubleshooting

For details on what the CLI is doing throughout the job execution process, use the --debug flag. This will show the point in the process at which problems occur.
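
For example, adding the flag to the earlier test job shows what the CLI is doing at each step:

ce-cli --app "docker:node" --commandline "echo \"2+2\" > /local/output/out.txt" --auth [KEY] --debug true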

If trouble occurs on the execution node, it can also be helpful to set the --useowndevices flag. This causes the job to be given to a node operated by the account that submitted the job, rather than sending it out to other qualified nodes in the Charity Engine device pool.

The --useowndevices flag requires at least one of your devices to be running a Charity Engine client, attached with the same authorization key used to access the Remote CLI. For instructions on setting up the Charity Engine client, see Installing Charity Engine on Administered Resources.
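
For example, to route the test job from above to your own attached device:

ce-cli --app "docker:node" --commandline "echo \"2+2\" > /local/output/out.txt" --auth [KEY] --useowndevices true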
