My problem is that the cron job seems to run fine, but the code within the .sh files is not executed properly; please see below for details.
I type crontab -e to edit the crontab:
In that file:
30 08 * * 1-5 /home/user/path/backup.sh
45 08 * * 1-5 /home/user/path/runscript.sh >> /home/user/cronlog.log 2>&1
backup.sh:
#!/bin/sh
if [ -e "NEW_BACKUP.sql.gz" ]
then
    mv "NEW_BACKUP.sql.gz" "OLD_BACKUP.sql.gz"
fi
mysqldump -u username -ppassword db --max_allowed_packet=99M | gzip -9c > NEW_BACKUP.sql.gz
runscript.sh:
#!/bin/sh
python /home/user/path/uber_sync.py
uber_sync.py:
import keyword_sync
import target_milestone_sync
print "Starting Sync"
keyword_sync.sync()
print "Keyword Synced"
target_milestone_sync.sync()
print "Milestone Synced"
print "Finished Sync"
The problem is, it seems to do the print statements in uber_sync, but not actually execute the code from the import statements... Any ideas?
Also note that keyword_sync and target_milestone_sync are located in the same directory as uber_sync, namely /home/user/path
Thank you for any help.
Your import statements fail because Python cannot locate your modules. Add their directory to your search path and then import your modules, like this (add this to uber_sync.py):
import sys
sys.path.append("/home/user/path")
import keyword_sync
import target_milestone_sync
Python looks for modules in the current directory (the directory the interpreter was started from), in the directories listed in the $PYTHONPATH environment variable, and in paths set by configuration files. All of this ends up in sys.path, which can be edited like any list object. If you want to learn more about why a certain module does or does not get imported, I suggest also looking into the standard module imp (superseded by importlib in modern Python).
In your case you tested your code in /home/user/path via python uber_sync.py and it worked, because your modules were in the current directory. But when you execute it from some/other/dir via python /home/user/path/uber_sync.py, the current directory becomes some/other/dir and your modules are not found.
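A variant that avoids hard-coding the path (a sketch of mine, not the original answer's code) derives the directory from the script's own location, so it keeps working no matter where cron starts it from:

```python
import os
import sys

def add_script_dir_to_path():
    """Prepend this script's directory to sys.path so sibling modules
    (e.g. keyword_sync, target_milestone_sync) resolve from any cwd."""
    script_dir = os.path.dirname(os.path.abspath(__file__))
    if script_dir not in sys.path:
        sys.path.insert(0, script_dir)

add_script_dir_to_path()
# import keyword_sync  # now resolvable next to this script
```

With this at the top of uber_sync.py, the hard-coded "/home/user/path" string is no longer needed.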
Related
What I want
I'm using Visual Studio Code and Python 3.7.0, and I'm just trying to import another Python file, from another folder, into my Python file.
Details
Here is my folder structure
root/
    dir1/
        data.txt
        task11.py
        task12.py
    dir2/
        data.txt
        task21.py
        task22.py
    Helpers/
        FileHelper/
            ReadHelper.py
So a short explanation:
I use the same function in every "task"-file
Instead of putting the function in every "task"-file, I've created a helper file where the function exists
I want to import the helper file "ReadHelper.py" into my task files
What I've tried
e.g. in the file task11.py:
from Helpers.FileHelper.ReadHelper import *
and:
import os, sys
parentPath = os.path.abspath("../../")
if parentPath not in sys.path:
    sys.path.insert(0, parentPath)
from Helpers.FileHelper.ReadHelper import *
and:
import os, sys
sys.path.append('../../')
from Helpers.FileHelper.ReadHelper import *
None of the above solutions works as I always end up with the error:
ModuleNotFoundError: No module named 'Helpers'
I've also tried:
from ..Helpers.FileHelper.ReadHelper import *
But it ends up with the error: ValueError: attempted relative import beyond top-level package
So how can I import the file ReadHelper.py to my task files?
P.S. There are some similar questions to this, but they are really old and the answers have not helped me.
Update 1
There is an option in Visual Studio Code to run the file in an interactive window; if I run it there with the import from Helpers.FileHelper import ReadHelper, then no errors are generated and the code executes perfectly.
One downside is that this interactive window is slow at starting and it cannot handle inputs.
I tried the answer of @Omni as well:
$> python -m root.dir1.task11
And it worked! But as he said, there is a downside: it is slow to type in the terminal.
So I tried to create a task in Visual Studio Code that could execute the above shell command for the file that I'm currently in, but did not succeed.
Do you know how to create a task in vscode to run the above command?
I've also tried adding __init__.py files under every directory so they would be seen as packages (Python 3 tutorial - 6.4 Packages), but this didn't help and the same error occurred.
Update 2
I came up with a way to make it really easy to have a folder structure like this and get the imports to work correctly in the terminal.
Basically what I did was:
created a Python script
created a task in visual studio code
With this, I can now run my python files, with the imports, by only pressing cmd + shift + B.
Explanation
The visual studio task:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run python file",
            "type": "shell",
            "command": "python3 /PATH_TO_ROOT_FOLDER/run_python_file.py ${file}",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "presentation": {
                "reveal": "always",
                "panel": "new",
                "focus": true
            }
        }
    ]
}
The part that we want to focus on is this one:
"command": "python3 /PATH_TO_ROOT_FOLDER/run_python_file.py ${file}",
This part runs the new Python file I created in the root folder, passing the path of the currently active file as a parameter.
The python script:
import os, sys

# This is an argument given through a shell command
PATH_TO_MODULE_TO_RUN = sys.argv[1]
ROOT_FOLDER = "root/"

def run_module_gotten_from_shell():
    # Here I take only the part of the path that is needed
    relative_path_to_file = PATH_TO_MODULE_TO_RUN.split(ROOT_FOLDER)[1]
    # Creating the shell command I want to run
    shell_command = createShellCommand(relative_path_to_file)
    os.system(shell_command)

# Returning "python3 -m PATH.TO.MODULE"
def createShellCommand(relative_path_to_file):
    part1 = "python3"
    part2 = "-m"
    # Here I change the string "dir1/task11.py" => "dir1.task11"
    part3 = relative_path_to_file.replace("/", ".")[:-3]
    shell_command = "{:s} {:s} {:s}".format(part1, part2, part3)
    return shell_command

run_module_gotten_from_shell()
This Python script receives the path to the active file as a parameter. It then builds a shell command from that path (the shell command is like the one in @kasper-keinänen's answer) and runs it.
With these modifications, I can run any file inside the root directory with imports from any file inside the root directory.
And I can do it by only pressing cmd + shift + B.
You could try running the script with the -m option, which allows modules to be located using the Python module namespace (see docs.python.org).
If you run the task11.py script then:
$ python3 -m dir1.task11
And in the task11.py do the import like:
from Helpers.FileHelper.ReadHelper import *
Adding the full absolute path to the sys.path variable should make it work.
import sys
sys.path.append('/full/path/to/Helpers/FileHelper/')
from ReadHelper import *
If you're only trying to do this in VSCode, and not during normal runtime, you can add the path in .vscode/settings.json:
{
    "python.analysis.extraPaths": [
        "${workspaceFolder}/webapp"
    ]
}
NOTE: This does not solve standard Python importing. My use case was specific to a monolithic project where all editor config files were in the root, and thus I couldn't open webapp/ as a workspace itself.
a) Execute the task modules as scripts within an environment that already knows about the helper functions. That way the code in the task modules does not have to know anything about the package structure; it imitates the builtins of the Python interpreter.
# cli argument #1 is the task module to execute
import sys
task_to_execute = sys.argv[1]
from Helpers.FileHelper.ReadHelper import *
exec(open(task_to_execute).read())
b) Use relative imports correctly. In order to do so, you have to execute the task code via the following command (which might be a disadvantage of this solution):
$> python -m root.dir1.task11
The problem is your file/folder structure. I would suggest creating a sort of 'control' file in your root folder, which can then work from the top down to reference all your other modules.
So let's say you had a file in your root folder called MasterTask.py; it could look like this:
from dir1.task11 import *
from dir1.task12 import *
from dir2.task21 import *
from dir2.task22 import *
from Helpers.FileHelper.ReadHelper import *

class Master:
    # Do your task work here
    pass
One other option would be to move the Helpers folder into your Python37\Lib\site-packages folder, which would also allow the use of from Helpers.FileHelper.ReadHelper import * as is, assuming that you are not planning on this being used on machines other than your own.
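If you're unsure where that site-packages folder lives on your machine, Python can tell you directly (a generic diagnostic of mine, assuming a standard CPython install rather than certain older virtualenvs):

```python
import site

# Global site-packages directories this interpreter searches.
for p in site.getsitepackages():
    print(p)
# The per-user site-packages directory.
print(site.getusersitepackages())
```

Anything placed in one of the printed directories is importable without sys.path tricks.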
I'm trying to get a job in crontab to run twice per day at different times. It is a Python script that calls other Python scripts and a bash script as functions. All of the scripts are located in the path given in the crontab. The crontab looks like this:
PATH=/home/test/Desktop/UntitledFolder/ContinuousTest
0 08 * * 1,2,3,4,5 /home/test/Desktop/UntitledFolder/ContinuousTest/automated.py
46 10 * * * /home/test/Desktop/UntitledFolder/ContinuousTest/automated.py
The code looks like this:
#!/usr/bin/env python
import curses
import os

def Move():
    os.system("cd /home/test/Desktop/UntitledFolder/ContinuousTest")

def Upgrade():
    os.system("python upgrade.py")
    os.system("python upgrade.py")

def Setup():
    os.system("python setup.py")
    os.system("python setup2.py")

def Throughput():
    os.system("./test.sh")

def Sleep():
    os.system("sleep 320")

Move()
Setup()
Upgrade()
Sleep()
Throughput()
When the script is run from the cron job, I get this error:
/usr/bin/env: python: No such file or directory
What could be the problem?
/usr/bin/env must search PATH to find the python executable to run. Since you completely replace PATH with only a single directory, and don't include the usual /bin and /usr/bin paths, env cannot find python to run.
The solution is to either set PATH=/bin:/usr/bin:/home/test/Desktop/UntitledFolder/ContinuousTest, or just dispense with env altogether and put #!/usr/bin/python (or python3 if that is the intention) at the top of your script.
Another reasonable solution would be to not set PATH in your crontab, but put PATH modifications inside the script as necessary instead - that might lead to fewer surprises down the road if you add additional jobs to your crontab.
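For example, the first fix might look like this in the crontab (a sketch using the paths from the question):

```
PATH=/bin:/usr/bin:/home/test/Desktop/UntitledFolder/ContinuousTest
0 08 * * 1,2,3,4,5 /home/test/Desktop/UntitledFolder/ContinuousTest/automated.py
46 10 * * * /home/test/Desktop/UntitledFolder/ContinuousTest/automated.py
```

With /bin and /usr/bin back on PATH, /usr/bin/env can locate python again.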
I am struggling to run a Python script as a cron job.
I am logged in as root.
The permissions for the Python script are:
-rwxr-xr-x 1 root root 2374 Mar 1 22:49 k_collab_spark.2.py
I am starting the script with:
#!/usr/bin/env python
I tested the Python script: if I do ./k_collab_spark.2.py it works fine.
In the crontab I have set the job as:
15 12 * * * /opt/lampp/htdocs/testme/SPARK/k_collab_spark.2.py >> /var/log/kspark.log
I do not see any message in the log file.
Once I added 2>&1 it gives an error:
Traceback (most recent call last):
  File "/opt/lampp/htdocs/kabeer/SPARK/k_collab_spark.2.py", line 2, in <module>
    import requests
ImportError: No module named requests
But if I execute the script manually it works fine.
I tried defining the path, but still the same issue:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
import requests
ImportError: No module named requests
Any idea what I am missing? Appreciate any help around this.
Try running the script with another first line:
#!/usr/bin/python
If it executes successfully, the problem is the Python interpreter: when you have several versions of Python installed, /usr/bin/env will pick the first one on your environment's $PATH, which I guess has no requests lib.
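To confirm which interpreter cron is actually using, a quick diagnostic (an addition of mine, not part of the original script) is to log the interpreter path at the top of the script and compare the output under cron with an interactive run:

```python
import sys

# Which binary is executing this script, and its version;
# under cron this may differ from what your login shell uses.
print(sys.executable)
print(sys.version.split()[0])
```

If the two runs print different paths, you have found the mismatch.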
Can you add python explicitly before the script name?
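For example (assuming the interpreter that has requests installed is /usr/bin/python), the crontab entry would become:

```
15 12 * * * /usr/bin/python /opt/lampp/htdocs/testme/SPARK/k_collab_spark.2.py >> /var/log/kspark.log 2>&1
```

Naming the interpreter explicitly bypasses the shebang's PATH lookup entirely.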
At the end of the crontab line, add 2>&1, which redirects error messages to the log file as well. See this question for a detailed description: In the shell, what does "2>&1" mean?
There is also a possibility that your current user and root run different versions of Python.
I used a shell script to call the Python script. The Anaconda install on the box was causing the trouble:
export PATH=/opt/anaconda3/bin:$PATH
/opt/anaconda3/bin/python /opt/lampp/htdocs/scriptme.py >/opt/lampp/htdocs/scriptme.log 2>&1
Add the following lines of code to your script and edit the crontab:
from distutils.sysconfig import get_python_lib
print(get_python_lib())
Now check the cron log; you will get some path, e.g. "/usr/lib/python2.7/dist-packages".
cd (change directory) to the above path and ls (list directory) to check whether the package exists; if not:
sudo pip3 install requests -t . # dot indicates current directory
or else if you have a requirements.txt file then you could try:
sudo pip3 install -r requirements.txt -t "/usr/lib/python2.7/dist-packages"
#try this from the directory where "requirements.txt" file exists
Now run your scripts.
I'm facing a strange issue, and after a couple of hours of research I'm looking for help / an explanation.
It's quite simple: I wrote a CGI server in Python and I'm working with some libs, including pynetlinux for instance.
When I start the script from a terminal with any user, it works fine: no bug, no dependency issue. But when I try to start it using a script in rc.local, the following code produces an error.
import sys, cgi, pynetlinux, logging
It produces the following error:
Traceback (most recent call last):
  File "/var/simkiosk/cgi-bin/load_config.py", line 3, in <module>
    import cgi, sys, json, pynetlinux, loggin
ImportError: No module named pynetlinux
Other dependencies produce similar issues. I suspect a few things, like the user executing the script in rc.local (normally root), and I have tried some stuff found on the web without success.
Can somebody help me?
Thanks in advance.
Regards.
Ollie314
First of all, you need to make sure the module you want to import is installed properly. You can check whether the name of the module appears in pip list.
Then, in a python shell, check what the paths are where Python is looking for modules:
import sys
sys.path
In my case, the output is:
['', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages']
Finally, append those paths to the $PYTHONPATH variable in /etc/rc.local. Here is an example of my rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing
export PYTHONPATH="$PYTHONPATH:/usr/lib/python3.4:/usr/lib/python3.4/plat-x86_64-linux-gnu:/usr/lib/python3.4/lib-dynload:/usr/local/lib/python3.4/dist-packages:/usr/lib/python3/dist-packages"
# Do stuff
exit 0
The path where your modules are installed is probably normally sourced by .bashrc or something similar, but .bashrc doesn't get sourced when the shell is not interactive. /etc/profile is one place where you can put system-wide path changes. Depending on the Linux version/distro it may use /etc/profile.d/, in which case /etc/profile runs all the scripts in /etc/profile.d; add a new shell script there with execute permissions and a .sh extension.
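As a sketch (the filename and the dist-packages path are assumptions, matching the paths discussed above), such a script could look like:

```shell
# /etc/profile.d/pythonpath.sh -- sourced by login shells at startup.
# Extend Python's module search path system-wide.
export PYTHONPATH="$PYTHONPATH:/usr/local/lib/python3.4/dist-packages"
```

Note that rc.local itself does not source profile scripts, so for rc.local specifically you may still need the export inside the script.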
I want to implement a userland command that will take one of its arguments (a path) and change the directory to that dir. After the program completes I would like the shell to be in that directory. So I want to implement a cd command, but as an external program.
Can it be done in a Python script, or do I have to write a bash wrapper?
Example:
tdi@bayes:/home/$> python cd.py tdi
tdi@bayes:/home/tdi$>
Others have pointed out that you can't change the working directory of a parent from a child.
But there is a way you can achieve your goal -- if you cd from a shell function, it can change the working dir. Add this to your ~/.bashrc:
go() {
    cd "$(python /path/to/cd.py "$1")"
}
Your script should print the path to the directory that you want to change to. For example, this could be your cd.py:
#!/usr/bin/python
import sys, os.path
if sys.argv[1] == 'tdi':
    print(os.path.expanduser('~/long/tedious/path/to/tdi'))
elif sys.argv[1] == 'xyz':
    print(os.path.expanduser('~/long/tedious/path/to/xyz'))
Then you can do:
tdi@bayes:/home/$> go tdi
tdi@bayes:/home/tdi$>
That is not going to be possible. Your script runs in a sub-shell spawned by the parent shell where the command was issued. Any cd-ing done in the sub-shell does not affect the parent shell.
cd is implemented as a shell built-in command because no external program can change the parent shell's CWD.
As codaddict writes, what happens in your sub-shell does not affect the parent shell. However, if your goal is to present the user with a shell in a different directory, you could always have Python use os.chdir to change the sub-shell's working directory and then launch a new shell from Python. This will not change the working directory of the original shell, but will leave the user with one in a different directory.
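A minimal sketch of that approach (the helper name is mine; it assumes a POSIX shell named in $SHELL, falling back to /bin/sh):

```python
import os
import subprocess

def shell_in(path):
    """Change this process's working directory, then hand the user an
    interactive child shell there. The parent shell's cwd is untouched;
    the user simply lands in a new shell at the target location."""
    os.chdir(path)
    subprocess.call([os.environ.get("SHELL", "/bin/sh")])

# Example (interactive): shell_in(os.path.expanduser("~"))
```

When the user exits the child shell, they return to the original shell in its original directory.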
As explained by mrdiskodave in Equivalent of shell 'cd' command to change the working directory?, there is a hack to achieve the desired behavior in pure Python. I made some modifications to the answer from mrdiskodave to make it work in Python 3:
The pipes.quote() function has moved to shlex.quote().
To mitigate the issue of user input during execution, you can delete any previous user input with the backspace character "\x08".
So my adaption looks like the following:
import fcntl
import shlex
import termios
from pathlib import Path
def change_directory(path: Path):
    quoted_path = shlex.quote(str(path))
    # Remove up to 32 characters entered by the user.
    backspace = "\x08" * 32
    cmd = f"{backspace}cd {quoted_path}\n"
    for c in cmd:
        fcntl.ioctl(1, termios.TIOCSTI, c)
I shall try to show how to set a Bash terminal's working directory to whatever path a Python program wants, in a fairly easy way.
Only Bash can set its own working directory, so routines are needed on both the Python and Bash sides. The Python program has a routine defined as:
fob=open(somefile,"w")
fob.write(dd)
fob.close()
"Somefile" could for convenience be a RAM disk file. Bash "mount" would show tmpfs mounted somewhere like "/run/user/1000", so somefile might be "/run/user/1000/pythonwkdir". "dd" is the full directory path name desired.
The Bash file would look like:
#!/bin/bash
# pysync --- the command ". pysync" will set the bash dir to what Python recorded
cd "$(cat /run/user/1000/pythonwkdir)"