RQ Launcher plugin


The RQ Launcher plugin provides a launcher for distributed execution and job queuing based on Redis Queue (RQ).

The RQ launcher allows parallelizing across multiple nodes and scheduling jobs in queues. Usage of this plugin requires a Redis server. When parallelization on a single node is all that is needed, the Joblib launcher may be preferable, since it works without a database.


pip install hydra-rq-launcher --upgrade


Note that RQ does not support Windows.


Once installed, add hydra/launcher=rq to your command line. Alternatively, override hydra/launcher in your config:

- hydra/launcher: rq
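
For example, a minimal `config.yaml` selecting the launcher through its defaults list might look like this (`task` is a hypothetical application parameter, not part of the plugin):

```yaml
defaults:
  - hydra/launcher: rq
  - _self_

task: 1
```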

The configuration packaged with the plugin is defined here. The default configuration is as follows:

$ python your_app.py hydra/launcher=rq --cfg hydra -p hydra.launcher
# @package hydra.launcher
_target_: hydra_plugins.hydra_rq_launcher.rq_launcher.RQLauncher
job_timeout: null # maximum runtime of the job before it's killed (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
ttl: null # maximum queued time before the job is discarded (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
result_ttl: null # how long successful jobs and their results are kept (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
failure_ttl: null # specifies how long failed jobs are kept (e.g. "1d" for 1 day, units: d/h/m/s), default: no limit
at_front: false # place job at the front of the queue, instead of the back
job_id: null # job id, will be overridden automatically by a uuid unless specified explicitly
description: null # description, will be overridden automatically unless specified explicitly
queue: default # queue name
host: ${env:REDIS_HOST,localhost} # host address via REDIS_HOST environment variable, default: localhost
port: ${env:REDIS_PORT,6379} # port via REDIS_PORT environment variable, default: 6379
db: ${env:REDIS_DB,0} # database via REDIS_DB environment variable, default: 0
password: ${env:REDIS_PASSWORD,} # password via REDIS_PASSWORD environment variable, default: no password
mock: ${env:REDIS_MOCK,False} # switch to run without redis server in single thread, for testing purposes only
stop_after_enqueue: false # stop after enqueueing by raising custom exception
wait_polling: 1.0 # wait time in seconds when polling results
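
The duration strings accepted by `job_timeout`, `ttl`, `result_ttl`, and `failure_ttl` combine a number with a unit suffix (d/h/m/s). The sketch below shows how such a string maps to seconds; `parse_duration` is a hypothetical helper for illustration only, the actual parsing is handled by the plugin:

```python
# Illustrative only: hypothetical helper showing how duration strings
# such as "1d" or "30m" map to seconds. The plugin does its own parsing;
# this is not its implementation.
UNITS = {"d": 86400, "h": 3600, "m": 60, "s": 1}

def parse_duration(spec: str) -> int:
    """Convert a duration like '1d', '2h', '30m', or '45s' to seconds."""
    value, unit = spec[:-1], spec[-1]
    if unit not in UNITS:
        raise ValueError(f"unknown unit {unit!r}, expected one of d/h/m/s")
    return int(value) * UNITS[unit]

print(parse_duration("1d"))   # 86400
print(parse_duration("30m"))  # 1800
```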

The plugin uses environment variables to store Redis connection information. The environment variables REDIS_HOST, REDIS_PORT, REDIS_DB, and REDIS_PASSWORD are used for the host address, port, database, and password of the server, respectively.

For example, they might be set as follows when using bash or zsh as a shell:

export REDIS_HOST="localhost"
export REDIS_PORT="6379"
export REDIS_DB="0"
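
The `${env:VAR,default}` interpolations in the configuration fall back to the stated defaults when a variable is unset. That fallback behaviour can be mimicked with plain `os.environ.get`; this is a sketch for illustration, not how the plugin reads its configuration:

```python
import os

# Sketch of how the ${env:VAR,default} resolver falls back to defaults
# when a variable is unset; the plugin resolves these through its
# configuration system, not via code like this.
host = os.environ.get("REDIS_HOST", "localhost")
port = int(os.environ.get("REDIS_PORT", "6379"))
db = int(os.environ.get("REDIS_DB", "0"))
password = os.environ.get("REDIS_PASSWORD", "")  # empty string: no password

print(f"redis://:{password}@{host}:{port}/{db}")
```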

Assuming configured environment variables, workers connecting to the Redis server can be launched using:

rq worker --url redis://:$REDIS_PASSWORD@$REDIS_HOST:$REDIS_PORT/$REDIS_DB

An example application using this launcher is provided in the plugin repository.

Starting the app with python my_app.py --multirun task=1,2,3,4,5 will enqueue five jobs to be processed by worker instances:

$ python my_app.py --multirun task=1,2,3,4,5
[HYDRA] RQ Launcher is enqueuing 5 job(s) in queue : default
[HYDRA] Sweep output dir : multirun/2020-06-15/18-00-00
[HYDRA] Enqueued 13b3da4e-03f7-4d16-9ca8-cfb3c48afeae
[HYDRA] #1 : task=1
[HYDRA] Enqueued 00c6a32d-e5a4-432c-a0f3-b9d4ef0dd585
[HYDRA] #2 : task=2
[HYDRA] Enqueued 63b90f27-0711-4c95-8f63-70164fd850df
[HYDRA] #3 : task=3
[HYDRA] Enqueued b1d49825-8b28-4516-90ca-8106477e1eb1
[HYDRA] #4 : task=4
[HYDRA] Enqueued ed96bdaa-087d-4c7f-9ecb-56daf948d5e2
[HYDRA] #5 : task=5
[HYDRA] Finished enqueuing
[HYDRA] Polling job statuses every 1.0 sec

Note that any dependencies of the application need to be installed in the Python environment used to run the RQ worker. Jobs are serialized with cloudpickle.
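
cloudpickle extends the standard `pickle` module to cover objects that commonly appear in task functions, such as lambdas and closures. A small sketch of the round-trip, assuming `cloudpickle` is available in the environment (it is installed as a dependency of the plugin):

```python
import cloudpickle

# cloudpickle can serialize constructs the standard pickle module cannot,
# e.g. a lambda closing over local state -- which is how jobs survive being
# enqueued in one process and executed in a separate worker process.
offset = 10
job = lambda x: x + offset

payload = cloudpickle.dumps(job)      # bytes, suitable for storing in Redis
restored = cloudpickle.loads(payload)
print(restored(5))  # 15
```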

The RQ documentation provides further information on job monitoring, which can be done via console or web interfaces, as well as patterns for worker and exception handling.

Last updated by Omry Yadan