init-script

1. First download the two init scripts from generic-init.d; my project has periodic tasks, so both celeryd and celerybeat are needed.
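
A minimal sketch of fetching both scripts, assuming they live under extra/generic-init.d in the Celery GitHub repository (the branch/path below is an assumption; adjust it to your Celery version):

$ wget https://raw.githubusercontent.com/celery/celery/master/extra/generic-init.d/celeryd
$ wget https://raw.githubusercontent.com/celery/celery/master/extra/generic-init.d/celerybeat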

2. Create the configuration file

$ cat /etc/default/celeryd
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1" # work 任务节点
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/prod/softwares/python/bin/celery" # absolute path to the celery command
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="wfstar" # app instance
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/prod/deploys/wfstar" # project directory
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8" # command-line options, see the man page for more
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="prod"
CELERYD_GROUP="prod"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1 # create the log and pid directories automatically
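
As the comments above say, the account named in CELERYD_USER must already exist; here an existing prod user is reused. If a dedicated unprivileged user were wanted instead, a minimal sketch (the celery user name is hypothetical):

$ sudo useradd -r -s /usr/sbin/nologin celery # hypothetical dedicated account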

3. Copy the downloaded celeryd and celerybeat scripts into /etc/init.d and make them executable.
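
A sketch of the copy step, assuming the two scripts sit in the current directory:

$ sudo cp celeryd celerybeat /etc/init.d/
$ sudo chmod 755 /etc/init.d/celeryd /etc/init.d/celerybeat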

Note that parts of the scripts need to be adjusted, such as the user they run as:

$ cat /etc/init.d/celeryd
...
DEFAULT_USER="prod"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
...
$ cat /etc/init.d/celerybeat
...
CELERY_BIN=${CELERY_BIN:-"celery"}
DEFAULT_USER="prod"
DEFAULT_PID_FILE="/var/run/celery/beat.pid"
DEFAULT_LOG_FILE="/var/log/celery/beat.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_CELERYBEAT="$CELERY_BIN beat"
...
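
To have both services come up at boot on a Debian/Ubuntu system, the scripts can also be registered with update-rc.d (a sketch):

$ sudo update-rc.d celeryd defaults
$ sudo update-rc.d celerybeat defaults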

4. Start the worker and beat

$ sudo /etc/init.d/celeryd start
$ sudo /etc/init.d/celerybeat start
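
If a script prints OK but the process exits right away, the Celery docs suggest skipping the daemonization step to see the real error:

$ C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

Once both are up, the running worker can be pinged with the celery binary configured above (a sketch, run from the project directory):

$ /home/prod/softwares/python/bin/celery -A wfstar status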

supervisord

1. Install

$ sudo apt-get install supervisor # my project runs on Python 3, so I did not install it with pip

2. Configure

Create two new configuration files under /etc/supervisor/conf.d: celery_wfstar_worker.conf and celery_wfstar_beat.conf.

$ cat /etc/supervisor/conf.d/celery_wfstar_worker.conf
[program:wfstar_worker]
command=/home/prod/deploys/wfstart_env/bin/celery -A wfstar worker -l info # command to run
directory=/home/prod/deploys/wfstar # working directory
user=prod # run as this user
numprocs=1 # number of processes to start
stdout_logfile=/home/prod/deploys/wfstar/logs/celery.log # stdout log file
redirect_stderr=true # write stderr into the stdout log file
autostart=true # start the program automatically when supervisord starts
autorestart=true # restart automatically if it exits
startsecs=10 # the start counts as successful if still running after 10s
stopwaitsecs = 600
killasgroup=true
priority=998 # priority
stopsignal=QUIT # signal used to stop the program
$ cat /etc/supervisor/conf.d/celery_wfstar_beat.conf
[program:wfstar_beat]
command=/home/prod/deploys/wfstart_env/bin/celery -A wfstar beat -l info
directory=/home/prod/deploys/wfstar
user=prod
numprocs=1
stdout_logfile=/home/prod/deploys/wfstar/logs/celerybeat.log
redirect_stderr=true
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=999
stopsignal=QUIT
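
supervisord does not create a missing log directory for child processes, so the logs directory referenced by stdout_logfile above should exist before starting; a small sketch, assuming the paths from the configs:

$ mkdir -p /home/prod/deploys/wfstar/logs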

For reference configuration files, see the celery supervisord examples on GitHub.

3. Start

$ sudo supervisorctl reread
$ sudo supervisorctl update
$ sudo supervisorctl start all
$ sudo supervisorctl status
wfstar_beat RUNNING pid 16752, uptime 0:00:34
wfstar_worker RUNNING pid 16751, uptime 0:00:34
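
Individual programs can also be restarted or have their logs followed through supervisorctl, for example:

$ sudo supervisorctl restart wfstar_worker
$ sudo supervisorctl tail -f wfstar_worker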

References

How to run celery as a daemon

Celery 后台运行 (Running Celery in the background)