I followed the suggested solution for this question and came up with the following Dockerfile:
FROM ubuntu:16.04
ADD write_time.py /
USER root
RUN apt-get update && \
    apt-get install -y python cron && \
    chmod +x /write_time.py && \
    (crontab -l 2>/dev/null; echo "* * * * * cd / && /usr/bin/python /write_time.py >> test.out") | crontab -
The write_time.py script is:
#!/usr/bin/env python
import datetime
time = datetime.datetime.now()
time = time.strftime("%Y-%m-%dT%H:%M:%S.%f")
print(time)
with open("time.txt", "a") as f:
    f.write(time + "\n")
After I build and run it with the commands below,
docker build . -t se
docker run -it se
I exec into the container to check whether either test.out or time.txt has been created at /, but I do not see either (I have waited for more than 2 minutes).
Am I doing anything wrong here?
Thanks!
Solved it.
The Docker CMD needs to start the cron daemon.
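For reference, a minimal sketch of that change, keeping the rest of the Dockerfile as posted (untested):
# ...same FROM/ADD/USER/RUN lines as above...
# Run the cron daemon in the foreground as the container's main process,
# so the container stays alive and the crontab entry actually fires.
CMD cron -f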
I'm running a Python job which logs to a file:
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', filename='/app/logs/ups_tracking.log')
self.logger = logging.getLogger('TRACK-UPS')
When running the job manually, the log files are created and updated with new entries as expected.
When running through crontab (syntax below), the logs are not written as expected.
### TRACKING UPS ###
* * * * * python /app/UPS/parcels.py
root@91067d2217e7:/app/logs# service cron status
[ ok ] cron is running.
I'm running the whole thing in a Docker container, with the Dockerfile below:
#Create the flask custom image
FROM python:latest
# Place your flask application on the server
COPY ./back /app/
WORKDIR /app
# Install requirements.txt
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
RUN apt-get update && apt-get install -y netcat cron
COPY ./config/init.sh /tmp/init.sh
RUN chmod +x /tmp/init.sh
# Copy crontab_file file to the cron.d directory
COPY ./config/crontab_file /etc/cron.d/crontab_file
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontab_file
# Apply cron job
RUN crontab /etc/cron.d/crontab_file
# Start CRON service
RUN service cron start
EXPOSE 8889
ENTRYPOINT ["/tmp/init.sh"]
Am I missing something here?
Thanks!
# Start CRON service
RUN service cron start
means the cron daemon is only running during that RUN stage of the build, not in the final container.
Seeing that, I wonder: are you also starting cron in /tmp/init.sh?
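If not, a minimal init.sh that both starts cron and keeps the container alive could look roughly like this (a sketch; the application start command is an assumption, since init.sh isn't shown):
#!/bin/bash
# Hypothetical /tmp/init.sh: start the cron daemon inside the running container,
# then hand over to the actual application as the foreground process.
service cron start
exec python3 /app/app.py   # placeholder for the app's real start command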
I'm running a Python script inside a Docker container using crontab. I also set some environment variables (such as the database host, password, etc.) in a .env file in the project's directory. If I run the script manually inside the container (python3 main.py) everything works properly, but when the script is run by crontab the environment variables are not found (None).
I have the following setup:
Dockerfile
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get -y install cron
RUN apt-get install -y python3-pip python-dev
WORKDIR /home/me/theservice
COPY . .
RUN chmod 0644 theservice-cron
RUN touch /var/log/theservice-cron.log
RUN chmod +x run.sh
ENTRYPOINT ./run.sh
run.sh
#!/bin/bash
crontab theservice-cron
cron -f
docker-compose.yml
version: '3.7'
services:
  theservice:
    build: .
    env_file:
      - ./.env
theservice-cron
HOME=/home/me/theservice
* * * * * python3 /home/me/theservice/main.py >> /var/log/theservice-cron.log 2>&1
#* * * * * cd /home/me/theservice && python3 main.py >> /var/log/theservice-cron.log 2>&1
I assumed that the cron job runs in another directory where the environment variables set in /home/me/theservice/.env are not accessible, so I tried adding the HOME=/home/me/theservice line to the theservice-cron file, or cd-ing into /home/me/theservice before running the script, but it didn't help.
In the Python script, I use os to access the environment variables:
import os
print(os.environ['db_host'])
How can I fix this problem?
I had a similar problem.
I fixed it using the following:
CMD printenv > /etc/environment && cron && tail -f /var/log/theservice-cron.log
According to https://askubuntu.com/questions/700107/why-do-variables-set-in-my-etc-environment-show-up-in-my-cron-environment, cron reads env vars from /etc/environment.
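Applied to the run.sh from the question above, the fix would look roughly like this (a sketch, using the same file names):
#!/bin/bash
# Dump the docker-compose environment where cron jobs can read it,
# then install the crontab and keep cron in the foreground.
printenv > /etc/environment
crontab theservice-cron
cron -f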
For those fighting to get ENV variables from docker-compose into Docker: simply have a shell script run as the ENTRYPOINT in your Dockerfile, containing
printenv > /etc/environment
Again, the name "/etc/environment" is crucial!
And then in your crontab, have it call a shell script:
* * * * * bash -c "sh /var/www/html/cron_php.sh"
The script simply does:
#!/bin/bash
cd /var/www/html
php whatever.php
You will now have the docker-compose environment variables in your PHP cron application. It took me a full day to figure this out. Hope I save someone some trouble!
UPDATE:
In Azure Docker (Web app) the mechanism doesn't seem to work. A small tweak is needed:
In the ENTRYPOINT sh script of the Dockerfile, write a file /etc/environments.sh (and chmod it to execution rights, e.g. chmod 770) using this command:
eval $(printenv | awk -F= '{print "export " $1 "=\"" $2 "\"" }' >> /etc/environments.sh)
Then, in the shell script that your crontab calls to execute PHP, do this:
#!/bin/bash
. /etc/environments.sh
php whatever.php
Notice the "." instead of source. Even though the Docker container is Linux running bash, source did not do the trick, but . did work.
Note: in my local Windows Docker, the first solution using /etc/environment worked fine. I was baffled to find out that the second fix was needed on Azure.
I've been trying to figure out how to best call a script with cron jobs and am unable to figure it out. Either I go with a custom command, where I use the following in an .ebextensions config file:
container_commands:
  01_some_cron_job:
    command: "cat .ebextensions/some_cron_job.txt > /etc/cron.d/mycron && chmod 644 /etc/cron.d/mycron"
    leader_only: true
some_cron_job.txt:
* * * * * root source /opt/python/run/venv/bin/activate && source /opt/python/current/env && /usr/bin/python /opt/python/current/app/manage.py cron_command >> /var/log/myjob.log 2>&1
This works when I run the command locally, but after uploading it to EB I get the following error:
File "/opt/python/current/app/manage.py", line 18
) from exc
^ SyntaxError: invalid syntax
Or I could call the script directly:
* * * * * root source /opt/python/run/venv/bin/activate && source /opt/python/current/env && /usr/bin/python /opt/python/current/app/api/cron.py >> /var/log/myjob.log 2>&1
But I am then getting import errors when trying to import a function from another file in the same directory:
ImportError: attempted relative import with no known parent package
I'm quite lost and would appreciate any help.
I managed to find a working solution where I instead used:
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      0/10 * * * * root source /opt/python/current/env && /opt/python/run/venv/bin/python3 /opt/python/current/app/manage.py cron_command >> /var/log/newjob.log 2>&1

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/mycron.bak"
I think the problem came about because of some Python version problems in the virtual environment.
I am trying to put a cronjob inside my docker container, which runs a python script.
My Dockerfile:
FROM python:3.6.2
RUN apt-get update && apt-get -y install cron
ADD . /dir
WORKDIR /dir
RUN pip install -r requirements.txt
RUN chmod 0644 /dir
ADD crontab /dir
ENV NAME loader
CMD cron -f
My crontab file:
* * * * * root /loader.py
As I run:
$ docker run -t -i loader:latest
I get nothing even after 10 minutes. The script writes out "Hello world" when it runs. I removed cron from the Docker image and the script works and writes out "Hello world".
To simplify things, I tried leaving the script out so that cron only writes out something like "hi" every minute, like this:
* * * * * root echo "hi"
but nothing happens.
There are a couple of problems.
First, as @charlesduffy mentioned, you need to install your crontab file where cron will find it (for example, /etc/crontab). For testing purposes, I used the following crontab file:
* * * * * root date > /tmp/flagfile
I built an image using your Dockerfile, but I added a syslog service to see the log messages from cron. My Dockerfile looks like:
FROM python:3.6.2
RUN apt-get update && apt-get -y install cron busybox
ADD . /dir
WORKDIR /dir
RUN chmod 0644 /dir
COPY crontab /etc/crontab
ENV NAME loader
CMD busybox syslogd && cron -f
Once the container is running, I see the following in /var/log/messages:
Dec 11 19:49:54 1d77fad4bf9d cron.info cron[9]: (CRON) INFO (pidfile fd = 3)
Dec 11 19:49:54 1d77fad4bf9d cron.info cron[9]: (*system*) INSECURE MODE (group/other writable) (/etc/crontab)
Dec 11 19:49:54 1d77fad4bf9d cron.info cron[9]: (CRON) INFO (Running #reboot jobs)
It looks like cron is unhappy with the permissions on /etc/crontab. I modified my Dockerfile to fix that:
FROM python:3.6.2
RUN apt-get update && apt-get -y install cron busybox
ADD . /dir
WORKDIR /dir
RUN chmod 0644 /dir
COPY crontab /etc/crontab
RUN chmod 600 /etc/crontab
ENV NAME loader
CMD busybox syslogd && cron -f
Now if I run the container, after about a minute I see /tmp/flagfile show up, and in /var/log/messages I see:
Dec 11 19:57:16 dda8a21d48a4 cron.info cron[10]: (CRON) INFO (pidfile fd = 3)
Dec 11 19:57:16 dda8a21d48a4 cron.info cron[10]: (CRON) INFO (Running #reboot jobs)
Dec 11 19:58:01 dda8a21d48a4 authpriv.err CRON[71]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Dec 11 19:58:01 dda8a21d48a4 authpriv.info CRON[71]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 11 19:58:01 dda8a21d48a4 cron.info CRON[72]: (root) CMD (date > /tmp/flagfile)
Dec 11 19:58:01 dda8a21d48a4 authpriv.info CRON[71]: pam_unix(cron:session): session closed for user root
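To check this from the host, something along these lines works (a sketch; the image tag and container name are arbitrary):
docker build -t loader .
docker run -d --name loader-test loader
# wait a minute or two, then look for the flag file and cron's log output
docker exec loader-test cat /tmp/flagfile
docker exec loader-test tail /var/log/messages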
I have been struggling with this for a while and I don't know what I am doing wrong. I am trying to run a shell script inside a container, and the shell script reads a python script from the directory where the shell script is located. But I am getting this error: `python: can't open file 'get_gene_length_filter.py': [Errno 2] No such file or directory`.
Here is my Dockerfile:
FROM ubuntu:14.04.3
RUN apt-get update && apt-get install -y g++ \
    make \
    git \
    zlib1g-dev \
    python \
    wget \
    curl \
    python-matplotlib \
    python-numpy \
    python-pandas
ENV BINPATH /usr/bin
ENV EVO2GIT https://upendra_35@bitbucket.org/upendra_35/evolinc_docker.git
RUN git clone $EVO2GIT
WORKDIR /evolinc_docker
RUN chmod +x evolinc-part-I.sh && cp evolinc-part-I.sh $BINPATH
RUN wget -O- http://cole-trapnell-lab.github.io/cufflinks/assets/downloads/cufflinks-2.2.1.Linux_x86_64.tar.gz | tar xzvf -
RUN wget -O- https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz | tar xzvf -
ENV PATH /evolinc_docker/cufflinks-2.2.1.Linux_x86_64/:$PATH
ENV PATH /evolinc_docker/TransDecoder-2.0.1/:$PATH
ENTRYPOINT ["/usr/bin/evolinc-part-I.sh"]
CMD ["-h"]
Here is the shell script from my git repo:
#!/bin/bash
# Create a directory to move all the output files
mkdir output
# Extracting classcode u transcripts, making fasta file, removing transcripts > 200 and selecting protein coding transcripts
grep '"u"' $comparefile | gffread -w transcripts_u.fa -g $referencegenome - && python get_gene_length_filter.py transcripts_u.fa \
transcripts_u_filter.fa && TransDecoder.LongOrfs -t transcripts_u_filter.fa
And this is how I'm running it:
docker run --rm -v $(pwd):/working-dir -w /working-dir ubuntu/evolinc -c AthalianaslutteandluiN30merged.gtf -g TAIR10_chr.fasta
I'm going to take a guess and assume that get_gene_length_filter.py is in /evolinc_docker, the working directory declared in the Dockerfile. Unfortunately, when you run docker run ... -w /working-dir ..., the working directory will be /working-dir, and so Python will be looking for get_gene_length_filter.py in /working-dir, where it apparently is not found. Edit your shell script to refer to get_gene_length_filter.py by its full absolute path: python /evolinc_docker/get_gene_length_filter.py.
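Applied to the script in the question, the relevant line would become something like this (a sketch):
grep '"u"' $comparefile | gffread -w transcripts_u.fa -g $referencegenome - && python /evolinc_docker/get_gene_length_filter.py transcripts_u.fa \
transcripts_u_filter.fa && TransDecoder.LongOrfs -t transcripts_u_filter.fa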
Your statement
the shell script reads a python script from the directory where the shell script is located.
is wrong.
In your shell script, when you call python get_gene_length_filter.py, the get_gene_length_filter.py file is not assumed to be in the same directory as the shell script, but is instead assumed to be in the current working directory.
To describe paths relative to the current shell script directory, use a variable set as follows:
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
and then you would call your python script with:
python $SCRIPT_DIR/get_gene_length_filter.py
This way, your shell script will work whatever the working directory is.