I have a Python package that I have uploaded to PyPI. The main script calls two additional R scripts when it runs. I have verified that the required R scripts are also uploaded to PyPI (by physically downloading the latest version and seeing them present in the directory). I can also successfully install and run the main Python script.
However, I am having trouble figuring out how to call the R scripts from within the Python script. That is, what directory structure do I use? Here is the command I use to run:
$ python_script -f file1.txt -g file2.txt
and I get this error:
Fatal error: cannot open file 'script.r': No such file or directory
In the Python script, here is how I am calling the R script:
cmd = [ 'Rscript', 'python_script/Rscript.r' ]
output = subprocess.Popen(cmd, stderr=subprocess.PIPE).communicate()
result = output[1].decode('utf-8')
But nothing I try works: I've tried just 'Rscript.r' and './Rscript.r'
I'm at a loss as to how to correctly call this script. It is in the same directory as the main python_script I am running.
The path here would be relative to where you're invoking python_script from, but your R scripts exist in a directory relative to where your package has been installed.
You can use __file__ to determine the full path to the file which is being executed. By splitting this, you can get a path to the directory where the package was installed, and then add any additional directories/filenames to get a full path to your R script:
import os
import subprocess

# this_dir is the directory containing this file, i.e. where the package is installed
this_dir, this_filename = os.path.split(__file__)
RSCRIPT_PATH = os.path.join(this_dir, "Rscript.r")

cmd = ['Rscript', RSCRIPT_PATH]
output = subprocess.Popen(cmd, stderr=subprocess.PIPE).communicate()
result = output[1].decode('utf-8')
Note: Best practice to ensure cross-platform compatibility here is to use os.path.join('path', 'to', 'file.txt') to generate a path instead of path/to/file.txt, since not all platforms use / as a path separator.
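As an aside, on Python 3.4+ pathlib handles the separator for you as well; here is a minimal sketch of the same approach (assuming the same Rscript.r filename):
from pathlib import Path
import subprocess

# Directory containing this module, i.e. where the package is installed
THIS_DIR = Path(__file__).resolve().parent
RSCRIPT_PATH = THIS_DIR / "Rscript.r"

output = subprocess.Popen(['Rscript', str(RSCRIPT_PATH)], stderr=subprocess.PIPE).communicate()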
I have an exe located at C:\Users\srinivast6\Documents\Cipia\Cipia\DriverSenseCLI-v7.4.3-win64.
I can run this exe in a terminal (non-admin mode, Windows OS) without any problem.
I am calling it programmatically using subprocess.Popen as below.
process = subprocess.Popen(['myApplication.exe'])
But it is giving the following error; it is not able to read the license file. What might be the cause of this? Do I need to run it in admin mode?
Cannot open license file : license.dat
License 284
Error initializing library:
license is not valid
EDIT1: After changing the directory from which I was running the Python script to the directory where the exe is located, I no longer see any error; it launches as expected. But I am packaging this Python script as a standalone executable, so the user might run that executable from any directory to launch myApplication.exe. I cannot actually put that restriction on the user.
So is it possible to set the current working directory programmatically to the path where myApplication.exe is located?
Some more information needs to be shared, but a guess at what is happening: you are executing the Python script from a directory other than the one where you normally run myApplication.exe, and the call that opens the license file inside myApplication uses a relative path (which seems to be the case here). That means when you launch myApplication with Popen, it inherits the working directory of wherever you executed the Python script, and the relative path is resolved against that. If so, either execute the script from the same directory where you would usually run myApplication.exe, or change the path passed to the file-open call in myApplication to an absolute path.
Example:
Directory Structure
C:\Users\user
|--popen.py
|--somedir
   |--license.dat
   |--myApplication.exe
   |--myApplication.py
Contents of popen.py:
process = subprocess.Popen(['myApplication.exe'])
Contents of myApplication.py (I realize python files don't get compiled to executables, but it is only for the sake of an example):
f = open('license.dat', 'r')
Now, this wouldn't work:
cwd: C:\Users\user
$ python popen.py # File not found error.
Either execute the script from somedir:
cwd: C:\Users\user\somedir
$ python ..\popen.py
Or alternatively, change the path passed to open in myApplication.py:
f = open(r'C:\Users\user\somedir\license.dat', 'r')
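A third option, since you cannot control where the user launches your script from (per your EDIT1): subprocess.Popen accepts a cwd argument, so popen.py can set the child's working directory itself. A minimal sketch using the example layout above:
import subprocess

# Run the exe with its own folder as the working directory, so its
# relative 'license.dat' lookup resolves correctly no matter where
# this script is launched from.
exe_dir = r'C:\Users\user\somedir'
process = subprocess.Popen([r'C:\Users\user\somedir\myApplication.exe'], cwd=exe_dir)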
How can I write a Python program that runs all Python scripts in the current folder? The program should run in Linux, Windows and any other OS in which python is installed.
Here is what I tried:
import glob, importlib
for file in glob.iglob("*.py"):
    importlib.import_module(file)
This returns an error: ModuleNotFoundError: No module named 'agents.py'; 'agents' is not a package
(here agents.py is one of the files in the folder; it is indeed not a package and not intended to be a package - it is just a script).
If I change the last line to:
importlib.import_module(file.replace(".py",""))
then I get no error, but also the scripts do not run.
Another attempt:
import glob, os
for file in glob.iglob("*.py"):
    os.system(file)
This does not work on Windows - it tries to open each file in Notepad.
You need to specify that the file should be run with the Python interpreter. To do this, prefix the file name with python3. The following code should work:
import os
import glob
for file in glob.iglob("*.py"):
os.system("python3 " + file)
If you are using a version other than python3, just change the argument from python3 to python
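If you would rather not hard-code the interpreter name at all, here is a sketch that reuses whichever interpreter is running the outer script, via sys.executable:
import glob
import subprocess
import sys

for file in glob.iglob("*.py"):
    # sys.executable is the full path of the interpreter running this script,
    # so this works whether it is installed as "python" or "python3"
    subprocess.run([sys.executable, file])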
Maybe you can make use of the subprocess module; this question shows a few options.
Your code could look like this:
import os
import subprocess

base_path = os.getcwd()
print('base_path', base_path)

# TODO: this might need to be 'python3' in some cases
python_executable = 'python'
print('python_executable', python_executable)

py_file_list = []
for dir_path, _, file_name_list in os.walk(base_path):
    for file_name in file_name_list:
        if file_name.endswith('.py'):
            # add full path, not just file_name
            py_file_list.append(
                os.path.join(dir_path, file_name))

print('PY files that were found:')
for i, file_path in enumerate(py_file_list):
    print(' {:3d} {}'.format(i, file_path))
    # call script
    subprocess.run([python_executable, file_path])
Does that work for you?
Note that the docs for os.system() even suggest using subprocess instead:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.
If you have control over the content of the scripts, perhaps you might consider using a plugin technique; this would bring the problem more into the Python domain and thus make it less platform-dependent. Take a look at pyPlugin as an example.
This way you could run each "plugin" from within the original process, or, using the multiprocessing library, you could still seamlessly use sub-processes.
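If the scripts are plain standalone scripts rather than plugins, another in-process option from the standard library (a sketch, not tied to pyPlugin) is runpy:
import glob
import runpy

for file in glob.iglob("*.py"):
    # Executes the file as if it were run as a script (__name__ == "__main__"),
    # but inside the current interpreter process.
    runpy.run_path(file, run_name="__main__")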
I connected yesterday using the SSH protocol to another computer and tried to load, through Python, an .so file (which would be compiled C). Here is what I got in the CLI:
The file being requested (libLMR_Demodulator.so) next to "OSError:" is in the same directory as the file I want to load (libDemodulatorJNI_lmr.so).
The Python code (v3.5.2) is the following:
import ctypes
sh_obj = ctypes.cdll.LoadLibrary('./libLMR_Demodulator.so')
actual_start_frequency = sh_obj.getActualStartFrequency(ctypes.c_long(0))
print('The Current Actual Frequency Is: ' + str(actual_start_frequency))
@Charles Duffy is right. The issue comes from dependencies. You can verify this with the command:
ldd libLMR_Demodulator.so
You have several ways to fix this issue:
Put all the libs into /lib or /usr/lib, or install them directly on your system.
Add the libs' path to the /etc/ld.so.conf file, then run ldconfig to refresh the cache.
Use LD_LIBRARY_PATH to add the libs' path, then try to run your script:
LD_LIBRARY_PATH=[..path] python [script.py]
or
export LD_LIBRARY_PATH=[..path]
python [script.py]
You can check the manual for dlopen to get more details.
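Another workaround, if you know which dependency the loader is asking for (assuming here it is the libLMR_Demodulator.so named in the error), is to preload it by absolute path with ctypes before loading the library you actually want; libraries that are already loaded are found without a filesystem search. A sketch:
import ctypes
import os

lib_dir = os.path.dirname(os.path.abspath(__file__))

# Preload the dependency by absolute path; RTLD_GLOBAL makes its symbols
# visible to libraries loaded after it.
ctypes.CDLL(os.path.join(lib_dir, 'libLMR_Demodulator.so'), mode=ctypes.RTLD_GLOBAL)
sh_obj = ctypes.CDLL(os.path.join(lib_dir, 'libDemodulatorJNI_lmr.so'))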
I got here looking for how to ensure that a module/package with a .so file was able to load another .so file that it depends on. Changing the current directory to the location of the first .so file (i.e., the directory where the module is) seems to work for me:
import os,sys,inspect
cwd = os.getcwd()
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
os.chdir(currentdir)
import _myotherlib
os.chdir(cwd) # go back
might also work for the OP case?
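A small variation on the same idea (just a sketch) wraps the chdir in a context manager, so the original directory is restored even if the import fails:
import contextlib
import os

@contextlib.contextmanager
def working_directory(path):
    """Temporarily change the working directory, restoring it on exit."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(prev)

# with working_directory(currentdir):
#     import _myotherlib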
I am trying to run this tool within a lambda function: https://github.com/nicolas-f/7DTD-leaflet
The tool depends on Pillow, which depends on imaging libraries not available in the AWS Lambda container. To try and get round this, I've run pyinstaller to create a binary that I can hopefully execute. This file is named map_reader and sits at the top level of the lambda zip package.
Below is the code I am using to try and run the tool:
command = 'chmod 755 map_reader'
args = shlex.split(command)
print subprocess.Popen(args)
command = './map_reader -g "{}" -t "{}"'.format('/tmp/mapFiles', '/tmp/tiles')
args = shlex.split(command)
print subprocess.Popen(args)
And here is the error, which occurs on the second subprocess.Popen call:
<subprocess.Popen object at 0x7f08fa100d10>
[Errno 13] Permission denied: OSError
How can I run this correctly?
You may have been misled into what the issue actually is.
I don't think that the first Popen ran successfully. I think that it just dumped a message in standard error and you're not seeing it. It's probably saying that
chmod: map_reader: No such file or directory
I suggest you try either of these two:
Extract the map_reader from the package into /tmp. Then reference it with /tmp/map_reader.
Do it as recommended by Tim Wagner, General Manager of AWS Lambda who said the following in the article Running Arbitrary Executables in AWS Lambda:
Including your own executables is easy; just package them in the ZIP file you upload, and then reference them (including the relative path within the ZIP file you created) when you call them from Node.js or from other processes that you’ve previously started. Ensure that you include the following at the start of your function code:
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT']
The above code is for Node JS but for Python, it's like the following
import os
os.environ['PATH'] = os.environ['PATH'] + ':' + os.environ['LAMBDA_TASK_ROOT']
That should make the command './map_reader <arguments>' work.
If they still don't work, you may also consider running chmod 755 map_reader before creating the package and uploading it (as suggested in this other question).
I know I'm a bit late for this, but if you want a more generic way of doing this (for instance, if you have a lot of binaries and might not use them all), this is how I do it, provided you put all your binaries in a bin folder next to your py file, and all the libraries in a lib folder:
import shutil
import time
import os
import subprocess
LAMBDA_TASK_ROOT = os.environ.get('LAMBDA_TASK_ROOT', os.path.dirname(os.path.abspath(__file__)))
CURR_BIN_DIR = os.path.join(LAMBDA_TASK_ROOT, 'bin')
LIB_DIR = os.path.join(LAMBDA_TASK_ROOT, 'lib')
### In order to get permissions right, we have to copy them to /tmp
BIN_DIR = '/tmp/bin'
# This is necessary as we don't have permissions in /var/task/bin where the lambda function is running
def _init_bin(executable_name):
    start = time.time()
    if not os.path.exists(BIN_DIR):
        print("Creating bin folder")
        os.makedirs(BIN_DIR)
    print("Copying binaries for " + executable_name + " in /tmp/bin")
    currfile = os.path.join(CURR_BIN_DIR, executable_name)
    newfile = os.path.join(BIN_DIR, executable_name)
    shutil.copy2(currfile, newfile)
    print("Giving new binaries permissions for lambda")
    os.chmod(newfile, 0o775)
    elapsed = (time.time() - start)
    print(executable_name + " ready in " + str(elapsed) + 's.')

# then if you're going to call a binary in a cmd, for instance pdftotext:
_init_bin('pdftotext')
cmdline = [os.path.join(BIN_DIR, 'pdftotext'), '-nopgbrk', '/tmp/test.pdf']
subprocess.check_call(cmdline, shell=False, stderr=subprocess.STDOUT)
There were two issues here. First, as per Jeshan's answer, I had to move the binary to /tmp before I could properly access it.
The other issue was that I'd run pyinstaller on Ubuntu, creating a single file. I saw elsewhere some comments about being sure to compile on the same architecture as the lambda container runs, so I ran pyinstaller on EC2 based on the Amazon Linux AMI. The output was multiple .so files, which, when moved to /tmp, worked as expected.
from shutil import copyfile
import os
copyfile('/var/task/yourbinary', '/tmp/yourbinary')
os.chmod('/tmp/yourbinary', 0o555)
Moving the binary to /tmp and making it executable worked for me
There is no need to copy the files to /tmp. You can just use ld-linux to execute any file, including those not marked executable.
So, for running a non-executable on AWS Lambda, you use the following command:
/lib64/ld-linux-x86-64.so.2 /opt/map_reader
P.S. It would make more sense to add the map_reader binary, or any other static files, to a Lambda Layer, hence the /opt folder.
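From Python, that call might look like the following sketch (paths and arguments taken from the question; adjust them to your layout):
import subprocess

# Use the dynamic loader to run a binary that is not marked executable.
subprocess.check_call([
    '/lib64/ld-linux-x86-64.so.2',
    '/opt/map_reader',
    '-g', '/tmp/mapFiles',
    '-t', '/tmp/tiles',
])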
Like the docs mention for Node.js, you need to update the $PATH, else you'll get command not found when trying to run the executables you added at the root of your Lambda package. In Node.js, that's:
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT']
Now, the same thing in Python:
import os
# Make the path stored in $LAMBDA_TASK_ROOT available to $PATH, so that we
# can run the executables we added at the root of our package.
os.environ["PATH"] += os.pathsep + os.environ['LAMBDA_TASK_ROOT']
Tested OK with Python 3.8.
(As a bonus, here are some more env variables used by Lambda.)
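Once $PATH includes the task root, a sketch of invoking the bundled binary by name (map_reader and its arguments here are just the ones from the question):
import os
import subprocess

os.environ["PATH"] += os.pathsep + os.environ['LAMBDA_TASK_ROOT']

# The executable at the root of the deployment package can now be found by name.
subprocess.run(["map_reader", "-g", "/tmp/mapFiles", "-t", "/tmp/tiles"], check=True)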
I am creating a Python program that runs a jar file. The jar file and some support files are placed in a different location than the Python program's directory. I tried adding the jar file's path to sys.path, but it's still unable to access the file, even though the path is added to sys.path correctly. How can I get this working?
jar file location: E:\data
python file location: C:\Users\user\Desktop
I am using subprocess to call the jar file, the code looks like:
import os
import sys
import subprocess as sp
class abc():
    def __init__(self):
        sys.path.append(r'E:\data')

    def run(self):
        print sys.path
        env = dict(os.environ)
        env['JAVA_OPTS'] = '-Xms256m -Xmx256m -Xss1024k'
        sp.call(['java', '-jar', 'file.jar'], env=env)

if __name__ == '__main__':
    o = abc()
    o.run()
After running above code, I get an error saying:
Error: Unable to access jarfile file.jar
What if you just change your working directory:
import os
cwd = os.getcwd() #current directory
os.chdir('path/to/jar')
... # run file
...
os.chdir(cwd)
sys.path and PYTHONPATH are used when importing Python modules.
When executing commands, the operating system looks up the command in its system path (%PATH% on Windows).
There is no lookup path for data/filenames passed as arguments.
When using sp.call(), relative file names are resolved against whatever directory the script was launched from. So you need to either change directory to E:\data or use the absolute path:
sp.call(['java', '-jar', r'E:\data\file.jar'], env=env)
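Alternatively, subprocess lets you set the working directory for just this call with the cwd argument, so the relative jar name keeps working (a sketch using the path from the question):
# Run java with E:\data as the working directory so 'file.jar' resolves there
sp.call(['java', '-jar', 'file.jar'], env=env, cwd=r'E:\data')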
There are plenty of environment variables on Windows: https://en.wikipedia.org/wiki/Environment_variable#Default_values