I'm trying to execute a command-line program from Django using subprocess.check_output(). I first tried it with simple commands like:
subprocess.check_output(['ls', '-l'])
That works fine, but now I'm trying it with a command-line program which I have already put in the root folder of my project (so if I run 'ls -l' it appears there), but Django throws 'OSError: [Errno 2] No such file or directory'.
Does the program need to be somewhere in particular? This is how I'm calling it right now:
output = subprocess.check_output(['kmersFreq', 'sequence.fasta', '2', '0'])
print output
Add the shell=True argument in your subprocess call.
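A minimal sketch of that suggestion (echo stands in here for kmersFreq, which isn't available outside the project); note that with shell=True the command is passed as one string, and a program living in the project directory still needs a ./ prefix, since the shell does not search the current directory by default:

```python
import subprocess

# shell=True hands the whole string to the shell. 'echo' is a stand-in
# for kmersFreq; the real call would look like
#     subprocess.check_output('./kmersFreq sequence.fasta 2 0', shell=True)
# -- the './' matters because the shell does not look in the current
# directory for executables unless told to.
output = subprocess.check_output('echo sequence.fasta 2 0', shell=True)
print(output.decode().strip())  # -> sequence.fasta 2 0
```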
I am trying to automate a Python script to continuously capture and write network packets to files. The script relies heavily on Wireshark's dumpcap CLI command, and uses subprocess.Popen to run the command. During normal execution, the script creates a new directory and begins capturing network data to that new directory. The script runs to completion when executed manually on the command line; however, when I have cron execute the job, the script terminates before the capture begins with the following error:
Traceback (most recent call last):
  File "./netcapture.py", line 19, in <module>
    dumpcap = subprocess.Popen(['dumpcap', '-i', '3', '-a', 'duration:30', '-b', 'duration:10', '-w', filepath], cwd=r'/var/tmp/traces/')
  File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.6/subprocess.py", line 1238, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
The filepath variable is
netdata_02_24_2021/trace.pcap
which is the name of the newly created directory and a filename & type for dumpcap to write the network data to. I am running this script with Python 2.6 on RHEL 6 for reference. The cron job syntax is as follows:
0 0 * * * cd /var/tmp/traces/ && python ./netcapture.py &
Things I know:
- cron is able to find and execute the script
- the current working directory the script executes in is correct
- subprocess.Popen works when executed by cron; it is used before the error is thrown to successfully make a mkdir -p netdata_02_24_2021 call in the current working directory
- the pathing is correct for the capture write; I am able to execute this script manually and it recognizes the new directory and writes to it
Things I have tried:
- providing the full path for all directories
- switching the subprocess.Popen() calls to subprocess.check_call() calls
- using the shell=True syntax in my subprocess.Popen() calls
At this point, I'm not sure what could be causing this execution issue with cron. Any help would be greatly appreciated.
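For what it's worth, cron usually exports a very short PATH (often just /usr/bin:/bin), and dumpcap commonly lives in /usr/sbin, which would produce exactly this OSError; that is an assumption here, not something shown in the traceback. A small probe that searches PATH plus a few likely directories and fails loudly (resolve and extra_dirs are illustrative names):

```python
import os

def resolve(cmd, extra_dirs=('/usr/sbin', '/usr/local/bin')):
    # Search PATH plus a few directories that interactive shells often
    # have but cron does not; return the first executable match.
    dirs = os.environ.get('PATH', '').split(os.pathsep) + list(extra_dirs)
    for d in dirs:
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    raise OSError('%s not found in %s' % (cmd, os.pathsep.join(dirs)))

# e.g. '/bin/ls'; pass the resolved absolute path to subprocess.Popen
# instead of the bare command name.
print(resolve('ls'))
```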
I'm running a Laravel project on an Nginx server, in which I call a Python file with the command below, passing arguments:
$result = exec("python3 path/to/file.py $data");
In the Python file, there are these lines of code:
from subprocess import Popen, PIPE

font_file = base_path + '/fonts/LiberationMono-Bold.ttf'
cmd = ["ttf2cxf_stream",
       "",
       "-s", "5.0",
       font_file, "STDOUT"]
p = Popen(cmd, stdout=PIPE, stderr=PIPE)
This gives an error that it can't open the font file, which is present at that location. The project is owned by ubuntu:www-data and the font file is inside the project. I also tried giving it 777 permissions, but still no luck.
Now, when I run the same command in the terminal:
python3 path/to/file.py "data"
It successfully runs without the font file access error.
What could be the issue?
I faced a similar issue while using the ttf2cxf_stream library to open TTF font files. Check whether ttf2cxf_stream exists in the /usr/bin directory; if not, try copying it there from /usr/local/bin/ and see if you can run the .py file through the PHP code.
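A quick way to check this from the same environment the script actually runs in; a sketch using the standard library (shutil.which is Python 3.3+):

```python
import os
import shutil

# Print the PATH the current process sees and where (if anywhere)
# ttf2cxf_stream resolves on it. Run this under the web server's user:
# www-data often gets a much shorter PATH than an interactive shell,
# so a binary in /usr/local/bin can silently "disappear".
print('PATH =', os.environ.get('PATH', ''))
print('ttf2cxf_stream ->', shutil.which('ttf2cxf_stream'))
```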
I'm trying to make a logger for my Python script using a '.bat' runner that executes the script and saves an output file, without me having to do it manually.
When I ran my Python script, script.py, passing 20 as an argument and redirecting the output to log_file.txt using the Windows command prompt, it worked just fine and the log file was created.
~The cmd command:
python script.py 20 >> log_file.txt
But when I tried to run the same code using the runner ".bat" file it didn't work.
~The code I've written inside "runner.bat" is as follows:
python script.py 20 >> log_file.txt
pause
~But the execution command actually run by the .bat file was (as seen on screen):
C:\Users\dahom\Desktop\folder>python script.py 1>>log_file.txt
I expected the ".bat" file to behave the same as the cmd terminal and save the log_file.
But when I ran the .bat file it didn't redirect the output to log_file.txt.
It does seem to run the script, though; the only indication is that it takes some time while the script runs.
note: both the batch file and the script are in the same folder/dir/path.
HERE is an image showing everything.
TRY:
#echo off
"C:\Users\dahom\AppData\Local\Programs\Python\Python37-32\python.exe" "C:\Users\dahom\Desktop\new.py" >> "C:\Users\dahom\Desktop\log_file.txt" & type "C:\Users\dahom\Desktop\log_file.txt"
pause
NEW.py:-
print("Echo Fox")
OUTPUT OF THE BATCH SCRIPT:-
Echo Fox
Press any key to continue . . .
WORKING:-
Just provide the full path of each file used in the command (Python executable, Python script, text file, etc.). When the command's output gets piped to a file, use & type "file_path" to display the contents of the file after writing it.
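If the .bat wrapper stays fragile, the same redirection can also be done from Python itself; a self-contained sketch that generates a throwaway stand-in for script.py (with the real files, only the two names from the question would be needed):

```python
import os
import subprocess
import sys
import tempfile

# Equivalent of `python script.py 20 >> log_file.txt 2>&1`, done from
# Python. A stand-in script is created here so the sketch runs anywhere.
workdir = tempfile.mkdtemp()
script = os.path.join(workdir, 'script.py')
with open(script, 'w') as f:
    f.write("import sys\nprint('got argument:', sys.argv[1])\n")

log_path = os.path.join(workdir, 'log_file.txt')
with open(log_path, 'a') as log:          # mode 'a' appends, like >>
    subprocess.call([sys.executable, script, '20'],
                    stdout=log, stderr=log)

with open(log_path) as log:
    print(log.read().strip())  # -> got argument: 20
```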
Running on Windows system, I run bash.exe using subprocess.call().
Following is the code
def predict():
    os.system('notepad cmnd.txt')
    subprocess.call(['C:/Windows/System32/bash.exe'])
    print(file_contents)
    label = Label(master, text=file_contents)
    #subprocess.call(['c:/users/hp/open.py'])
    label.pack()
The handle passes to bash, so a couple of commands are not executed.
cd commands that run fine when entered interactively return a 'Missing Directory' error.
The ls command returns a 'cannot run binary file' error.
What should I do?
I'm not really sure what you want here, but if you want to run bash commands in a Windows environment, you can try using subprocess.check_output():
from subprocess import check_output

bash_commands = ['ls', 'pwd']
for command in bash_commands:
    output = check_output(['bash', '-c', command]).decode()
    print(output)
In this example, that lists all files in the current directory and prints out the current working directory.
I'm running a Python script on a remote server using nohup.
First, I connected to the remote machine over VPN and SSH.
Second, I ran the Python script with the following command:
nohup python backmap.py mpirun -np 48 &
The python script contains the following lines:
frame = []
file_in = open("Traj_equil_prot.pdb", "r")
for line in file_in:
    if line.startswith('TITLE'):
        frame.append(line[127:134])

import os
for fileNum in range(631, 29969):
    os.system("./initram-v5.sh -f Traj_equil_prot_frame" + str(fileNum) + ".pdb -o Traj_equilprot_aa_frame" + str(frame[fileNum]) + ".gro -to amber -p topol.top")
The script ran just fine the whole day, but then it crashed, and when I try to re-launch it I get the following error:
Traceback (most recent call last):
  File "", line 1, in
IOError: [Errno 5] Input/output error
The file is in the working directory. I tried disconnecting and reconnecting, but the problem persists. I don't know what I'm missing. Any help, please?
I had the same problem, I used to run my script using this command:
python MY_SCRIPT.py &
The script runs in the background and its output is displayed in the terminal until you log out or exit.
After logging out, the script keeps running, but its output has nowhere to be displayed; exceptions are therefore raised when the script tries to display something (e.g. by calling print).
Solution:
I've piped the output to somewhere other than the actual terminal display:
python MY_SCRIPT.py >/dev/null &
Checkout this link for more details:
https://stackoverflow.com/a/38238281/6826476
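The failure mode can be approximated locally: writing to a stream that has gone away raises an exception (a closed file raising ValueError stands in here for the EIO of a hung-up terminal), while pointing output at os.devnull, which is what > /dev/null does at the shell level, keeps writes safe:

```python
import os

# Simulate the vanished terminal with a closed stream: printing raises.
gone = open(os.devnull, 'w')
gone.close()
try:
    print('hello', file=gone)
    raised = False
except ValueError:  # "I/O operation on closed file"
    raised = True
print('write to closed stream raised:', raised)  # -> True

# The fix from the answer: send output to a sink that stays open.
with open(os.devnull, 'w') as sink:
    print('hello', file=sink)  # discarded without error
```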
I finally fixed the problem by opening the file "file_in", modifying it (just adding a dot in the REMARK line, for example) and saving the changes.