I have a Dockerfile which automates the building of an image.
I am using Docker Cloud, connected to DigitalOcean as the server.
Within my Dockerfile I install the software I need and add the relevant GitHub repository containing the Python scripts I wish to run. I then start the cron scheduler and register the scripts at the appropriate times. For example:
The cron_files.txt file looks like this:
0 12 * * * /usr/bin/python /home/dir/run_check.py
0 15 * * * /usr/bin/python /home/dir/run_push.py
In my dockerfile, I do the following:
RUN service cron start
RUN service cron status
RUN crontab -u root cron_files.txt
In the log files, I can see that cron is successfully started.
Edit: thanks to r0manarmy for this - How to run a cron job inside a docker container?
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
How do I edit the above to create the crontab file from the cron_files.txt rather than the above example?
I've tried ADD crontab cron_files.txt /etc/cron.d/feeds
But this returns:
ADD crontab cron_files.txt /etc/cron.d/feeds
lstat crontab: no such file or directory
P.S. I am using FROM debian:jessie.
You probably want to set cron as the CMD.
On Alpine Linux, for example, you would do this with the crond command.
Take a look at this example for ideas:
https://gist.github.com/mhubig/a01276e17496e9fd6648cf426d9ceeec
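For reference, here is an untested sketch adapting that gist to the cron_files.txt from the question on debian:jessie (the image and paths are the question's own; note that schedules dropped into /etc/cron.d need a user field, e.g. root, in each line, unlike a file loaded with crontab):
FROM debian:jessie
# Install cron and python (assumption: nothing else is needed for this sketch)
RUN apt-get update && apt-get install -y cron python
# ADD takes one source and one destination; listing both "crontab" and
# "cron_files.txt" made Docker look for a file named "crontab", hence the lstat error
ADD cron_files.txt /etc/cron.d/feeds
# Give the schedule the permissions cron expects and create a log file to tail
RUN chmod 0644 /etc/cron.d/feeds && touch /var/log/cron.log
# Start cron when the container starts; RUN service cron start only runs at build time
CMD cron && tail -f /var/log/cron.log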
Related
I have installed python3 following the DigitalOcean guide. To run my Python scripts from the command line, this is what I type in bash, and it works:
~$ source /home/username/python_projects/project_name/bin/activate
~$ python3 /home/username/python_projects/project_name.py
~$ deactivate
If I put those commands in the crontab in the same order, nothing happens:
0 7 * * * source /home/username/python_projects/project_name/bin/activate
0 7 * * * python3 /home/username/python_projects/project_name.py
0 7 * * * deactivate
What am I doing wrong?
The cron service is active and running:
~$ systemctl status cron
The Python file has these permissions:
-rw-rw-r-- 1 user_name user_name 17075 Feb 7 02:30 python_projects/project_name.py
The activate job runs, then exits, and then the next cron job knows nothing about what it did. You want to run them all in the same process (though running deactivate at the end is pointless, as everything that activate did will be wiped when the job ends anyway).
In practice, you can run the Python interpreter from within the virtual environment directly.
For what it's worth, the error message about "no MTA installed" means that the output from cron was discarded because the system could not figure out how to send it by email. You'll probably want to change the rule to write the output to a file.
0 7 * * * cd python_projects && ./project_name/bin/python3 project_name.py >>project_name.log 2>&1
Notice also that cron jobs start in your home directory, so you don't have to spell out /home/username (provided, of course, that username is the user whose crontab this runs from).
There are a few things that could be causing your cronjob not to run:
Environment variables: the source command in your cron job sets up the virtual environment, but each crontab line runs as its own process, so that environment never reaches the line that runs python3. You should specify the full path to the Python executable in your virtual environment, e.g. /home/username/python_projects/project_name/bin/python3.
Permission issues: Ensure that the cron user has permission to execute the script and access the virtual environment.
Output: By default, cron does not send any output to the terminal. If your script produces output, you may want to redirect it to a file for debugging purposes. You can do this by adding > /tmp/output.log 2>&1 to the end of your crontab entry.
You can check the system logs for any error messages related to your cron job. The logs are usually located in /var/log/syslog or /var/log/messages.
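Putting those points together, a single crontab entry along these lines should be enough (the paths are the ones from the question; /tmp/output.log is just an example location):
0 7 * * * /home/username/python_projects/project_name/bin/python3 /home/username/python_projects/project_name.py > /tmp/output.log 2>&1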
I hope this helps!
I have a modified AWS basicPubSub function that transfers data to AWS IoT Core, and I want the script to run at start-up.
I have placed the script in /etc/init.d, made it executable, and updated the init.d links:
chmod 755 LOMAWS.sh
sudo update-rc.d LOMAWS.sh defaults
But the script does not start, how can I make it run from start up?
#!/bin/sh
clear
echo "LOM AWS Script starting"
cd /home/pi/Documents/awsiot/aws-iot-device-sdk-python/samples/basicPubSub
sudo python basicPubSub.py -e "XXXXXXXX-ats.iot.us-east-2.amazonaws.com" -r root_CA.crt -c XXXXXXXX-certificate.pem.crt -k XXXXXXX-private.pem.key
Have you tried UserData?
By default, user data scripts and cloud-init directives run only during the first boot cycle when an instance is launched.
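If the machine is an EC2 instance, a user-data script along these lines would launch the same command on first boot (this is only a sketch reusing the command from the question; user data runs as root, so sudo is not needed):
#!/bin/bash
cd /home/pi/Documents/awsiot/aws-iot-device-sdk-python/samples/basicPubSub
python basicPubSub.py -e "XXXXXXXX-ats.iot.us-east-2.amazonaws.com" -r root_CA.crt -c XXXXXXXX-certificate.pem.crt -k XXXXXXX-private.pem.key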
I am trying to run a subprocess command to do a git pull.
The cwd of the Git repository is /home/ubuntu/Ingest.
The id_rsa that I'm using with GitHub is located at /home/ubuntu/.ssh/id_rsa.
How would I run a subprocess call to do the following?
import shlex, subprocess
subprocess.call(shlex.split('git pull origin master'), cwd='/home/ubuntu/Ingest')
The log looks like:
movies_ec2.py:43#__init__ [INFO] Version not up to date...Doing a git pull and exiting...
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The script is running from cron and is not picking up the id_rsa. (Note: I am not looking to use GitPython.) What do I need to change in my cron job or script so that this will work? My cron job is currently:
# sudo crontab -e
*/1 * * * * STAGE=production /home/ubuntu/Ingest/ingest/movies_ec2.py > /home/ubuntu/test.log 2>&1
The following answer addresses this question quite well: How to specify in crontab by what user to run script?. In short, this can be accomplished by setting the user field in the system crontab to ubuntu, the user that owns the repository, the files, and the SSH key.
$ sudo vim /etc/crontab
*/1 * * * * ubuntu /home/ubuntu/Ingest/ingest/movies_ec2.py > /home/ubuntu/test.log 2>&1
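As an alternative sketch (not the fix above, just another option), the key can also be named explicitly from the Python side via GIT_SSH_COMMAND, which Git honours since version 2.3; the paths are the ones from the question:
import os
import shlex
import subprocess

# Point git at the key that cron was not picking up
env = dict(os.environ, GIT_SSH_COMMAND='ssh -i /home/ubuntu/.ssh/id_rsa -o IdentitiesOnly=yes')
subprocess.call(shlex.split('git pull origin master'),
                cwd='/home/ubuntu/Ingest',
                env=env)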
I have a cron job set up like this:
0 0 * * * cd /home/path/to/script && sudo -u myuser ./thescript.sh
This script builds a Docker image running a simple Python app that dumps a test validation report to a file. When I run the script in a terminal everything works fine (the generated file goes in /home/myuser). Unfortunately, when I run it from cron, the file is created but empty. It must have something to do with root owning the cron job, but I can't figure out how to fix it.
Any clue?
sudo normally needs a TTY to prompt for a password, and cron doesn't run commands with a TTY. You need to run the cron job as the root user instead.
This can be done using
sudo crontab -e
Then, in that crontab, don't use sudo:
0 0 * * * cd /home/path/to/script && ./thescript.sh
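If the file still comes out empty after that, it can also help to capture the job's own output somewhere readable (the log path here is only an example):
0 0 * * * cd /home/path/to/script && ./thescript.sh >> /var/log/thescript.log 2>&1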
I am creating a cron job to execute a Python script:
hello.py
a = 'a cron job was executed here'
text_file = open('output_hello.txt', 'w')
text_file.write(a)
text_file.close()
It works fine if I execute it from the terminal. I am on Ubuntu 15.10.
My cron job file is:
* * * * * /usr/bin/python /home/rohit/hello.py
(excluding the #)
I am the root user and am creating the job in /var/spool/cron.
The issue is that it is not executing the script. I don't know why.
One does not simply modify the crontab; you run the command:
crontab -e
and edit from there. Execute the above command using sudo if you want it to run as root.
Assuming your paths are correct, your script may not have the right environment or it may not be executable. Ensure your script starts with:
#!/usr/bin/python
And also give the script execute permission:
chmod a+x hello.py
Ensure you use crontab -e, and if you have any doubts about the syntax, you can find more info here:
https://help.ubuntu.com/community/CronHowto
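One more thing worth checking, as a suggestion rather than a definitive diagnosis: hello.py writes output_hello.txt relative to whatever directory cron starts in (normally the user's home), so the job may in fact be running and leaving the file somewhere you are not looking. Using an absolute path removes the ambiguity (the /home/rohit prefix is taken from the crontab line above):
# Same script, but writing to an absolute path
a = 'a cron job was executed here'
with open('/home/rohit/output_hello.txt', 'w') as text_file:
    text_file.write(a)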