I have the following static method in Python.
import subprocess

class ProcessUtility:
    @staticmethod
    def execute_command(url):
        process = subprocess.check_output(["phantomas " + url + " --har=test.har"])
        return process
The command basically writes its output to a test.har file. I have created test.har in the same directory as the script and given it read, write, and execute permissions.
Upon executing the script I get this error:
OSError: [Errno 2] No such file or directory
Any ideas why I keep getting this?
Most subprocess functions expect a list of arguments rather than a single string when shell=False (the default). Try:
process = subprocess.check_output(['phantomas', url, '--har=test.har'])
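For reference, a minimal sketch of the corrected static method, assuming phantomas is installed and on the PATH and that the working directory is writable:

import subprocess

class ProcessUtility:
    @staticmethod
    def execute_command(url):
        # Pass the program and each argument as separate list items so that
        # subprocess looks for an executable named "phantomas" rather than a
        # file literally named "phantomas <url> --har=test.har".
        return subprocess.check_output(["phantomas", url, "--har=test.har"])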
Related
I have a batch file which runs a Python script, and in the Python script I have a subprocess call.
I have tried subprocess.check_output, subprocess.run, and subprocess.Popen; all of them return an empty string, but only when the script is run from the batch file.
If I run it manually or from an IDE, I get the response correctly. Below is the code for subprocess.run:
response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
self.writeInLog(' Command Response: \t' + str(response))
The response comes back with stdout=b''.
When run from the batch file or Task Scheduler:
Command Response: CompletedProcess(args='tableau refreshextract
--config-file "Z:\XXX\tableau_config\SampleSuperStore.txt"',
returncode=0, stdout=b'', stderr=b'')
When run manually or from the IDE:
Command Response: CompletedProcess(args='tableau refreshextract
--config-file "Z:\XXX\tableau_config\SampleSuperStore.txt"',
returncode=0, stdout=b'Data source refresh completed.\r\n0 rows uploaded.\r\n', stderr=b'')
Batch file which runs the Python program (parameters are passed to the Python application):
SET config=SampleSuperStore.txt
CALL C:\XXX\AppData\Local\Continuum\anaconda3\Scripts\activate.bat
C:\XXX\AppData\Local\Continuum\anaconda3\python.exe Z:\XXX\pMainManual.py "%config%"
Why is that??
Complete Python code:
try:
    from pWrapper import wrapper
    import sys
except Exception as e:
    print(str(e))

class main:
    def __init__(self):
        self.tableauPath = 'C:\\Program Files\\Tableau\\Tableau 2018.3\\bin\\'
        self.tableauCommand = 'tableau refreshextract --config-file'

    def runJob(self, argv):
        self.manual_sProcess(argv[1])

    def manual_sProcess(self, tableauConfigFile):
        new_wrapper = wrapper()
        new_wrapper.tableauSetup(self.tableauPath, self.tableauCommand)
        if new_wrapper.tableauConfigExists(tableauConfigFile):
            new_wrapper.tableauCommand(tableauConfigFile)

if __name__ == "__main__":
    new_main = main()
    new_main.runJob(sys.argv)
Wrapper class:
def tableauCommand(self, tableauConfigFile):
    command = self.setTableauExeDirectory + ' ' + self.refreshConfigCommand + ' "' + tableauConfigFile + '"'
    self.new_automateTableauExtract.runCommand(tableauConfigFile, command, self.refreshConfigCommand, self.tableauFilePath, self.setTableauExeDirectory)
Automate Class:
def runCommand(self, pConfig, pCommand, pRefreshConfigCommand, pFilePath, pSetTableauExeDirectory):
    try:
        fileCommand = pRefreshConfigCommand + ' "' + pFilePath + '"'
        response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
        self.writeInLog(' Command Response: \t' + str(response))
    except Exception as e:
        self.writeInLog('Exception in function runCommand: ' + str(e))
UPDATE: I initially thought the bat file was causing this issue, but it works when the batch file is run manually; it only fails when it is run from Task Scheduler.
Updated
First of all, if you need to run an Anaconda prompt by calling an activate.bat file, you can do so as follows:
import subprocess

def call_anaconda_venv():
    subprocess.call('python -m venv virtual.env')
    subprocess.call('cmd.exe /k /path/venv/Scripts/activate.bat')

if __name__ == "__main__":
    call_anaconda_venv()
The result of the above code would be a running instance of anaconda-prompt as required.
Now, as the problem seems to be:
I have a batch file which is running a python script and in the python script, I have a subprocess function which is being run.
I have implemented the same kind of program. Suppose we have:
- Batch file script.bat, which includes a command to run the Python script, i.e. test.py.
- Python script test.py, which includes a method to run commands using subprocess.
- Batch file sys_info.bat, which includes a command that gives the system information of my computer.
First, script.bat includes a command that runs the required Python script, as given below:
python \file_path\test.py
pause
Here, the pause command is used to prevent the console from closing automatically after execution. Next we have test.py, a Python script which uses subprocess to run the required commands and capture their output.
from subprocess import check_output

class BatchCommands:

    @staticmethod
    def run_commands_using_subprocess(commands):
        print("Running commands from File: {}".format(commands))
        value = check_output(commands, shell=True).decode()
        return value

    @staticmethod
    def run():
        commands_from_file = "\file-path\sys_info.bat"
        print('##############################################################')
        print("Shell Commands using >>> subprocess-module <<<")
        print('##############################################################')
        values = BatchCommands.run_commands_using_subprocess(commands_from_file)
        print(values)

if __name__ == '__main__':
    BatchCommands.run()
Finally, I have a sys_info.bat file which contains the commands I want to run (for example, to print system information or renew my computer's IP address). The contents of sys_info.bat are as follows:
systeminfo
If you place multiple commands in the sys_info.bat file, you can run them all at once, for example:
ipconfig/all
ipconfig/release
ipconfig/reset
ipconfig/renew
ipconfig
Before using the files, set all the directory paths, then run the batch file (i.e. script.bat) from the command prompt as follows:
Run the command prompt or terminal as an administrator.
run \file_path\script.bat
Here is the result after running the batch file in the terminal.
This is happening because your IDE is not running in a shell that works the way your subprocess call is expecting.
If you set shell=False and specify the absolute path to the batch file, it will run.
You might still need the cwd argument if the batch file requires a particular working directory.
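Following that advice, a minimal sketch of the call with shell=False and an absolute path; the executable location and config path below are assumptions based on the question, so adjust them to your installation:

import subprocess

# Assumed paths, taken from the question; adjust to your setup.
tableau_exe = r"C:\Program Files\Tableau\Tableau 2018.3\bin\tableau.exe"
config_file = r"Z:\XXX\tableau_config\SampleSuperStore.txt"

# shell=False (the default) with an argument list avoids relying on the
# environment that Task Scheduler provides to cmd.exe.
response = subprocess.run(
    [tableau_exe, "refreshextract", "--config-file", config_file],
    capture_output=True)
print(response.returncode, response.stdout, response.stderr)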
I've been trying to run a Java program and capture its STDOUT output to a file from a Python script. The idea is to run test files through my program and check if the output matches the answers.
Per this and this SO questions, using subprocess.call is the way to go. In the code below, I am doing subprocess.call(command, stdout=f) where f is the file I opened.
The resulting file is empty and I can't quite understand why.
import glob
import subprocess

test_path = '/path/to/my/testfiles/'
class_path = '/path/to/classfiles/'
jar_path = '/path/to/external_jar/'
test_pattern = 'test_case*'
temp_file = 'res'

tests = glob.glob(test_path + test_pattern)  # find all test files
for i, tc in enumerate(tests):
    with open(test_path + temp_file, 'w') as f:
        # cd into directory where the class files are and run the program
        command = 'cd {p} ; java -cp {cp} package.MyProgram {tc_p}'.format(
            p=class_path,
            cp=jar_path,
            tc_p=test_path + tc)
        # execute the command and direct all STDOUT to file
        subprocess.call(command.split(), stdout=f, stderr=subprocess.STDOUT)
    # diff is just a lambda func that uses os.system('diff')
    exec_code = diff(answers[i], test_path + temp_file)
    if exec_code == BAD:
        scream(':(')
I checked the docs for subprocess and they recommended using subprocess.run (added in Python 3.5). The run method returns a CompletedProcess instance, which has a stdout field. I inspected it and stdout was an empty string, which explained why the file f I tried to create was empty.
Even though the exit code from subprocess.call was 0 (success), it didn't mean that my Java program actually got executed.
If you notice, I initially tried to cd into the correct directory and then execute the Java program, all in one command. I ended up removing cd from the command and calling os.chdir(class_path) instead, so the command contained only the string to run the Java program. That did the trick.
So, the code looked like this:
good_code = 0

# Assume the same variables defined as in the original question
os.chdir(class_path)  # get into the class files directory first
for i, tc in enumerate(tests):
    with open(test_path + temp_file, 'w') as f:
        # run the program
        command = 'java -cp {cp} package.MyProgram {tc_p}'.format(
            cp=jar_path,
            tc_p=test_path + tc)
        # runs the command and redirects it into the file f
        # stores the instance of CompletedProcess
        out = subprocess.run(command.split(), stdout=f)
        # you can access useful info now
        assert out.returncode == good_code
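As an aside, an equivalent sketch that avoids changing the script's global working directory is to pass cwd= to subprocess.run, which runs only the child process in class_path (variable names follow the question):

with open(test_path + temp_file, 'w') as f:
    command = 'java -cp {cp} package.MyProgram {tc_p}'.format(
        cp=jar_path, tc_p=test_path + tc)
    # cwd= applies class_path to the child process only; no os.chdir() needed
    out = subprocess.run(command.split(), stdout=f, cwd=class_path)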
I am writing a script in which I am trying to open a .exe file using a class and print the process ID. It looks as follows:
import subprocess

DETACHED_PROCESS = 0x00000008

class hello:
    def open_exe(self, file):
        self.file = file
        print "file name", self.file
        process = subprocess.Popen([r"self.file"], creationflags=DETACHED_PROCESS, shell=True)
        if process.pid is None:
            return 0
        else:
            print "Process id is: ", process.pid
            return 1

hello1 = hello()
hello1.open_exe("D:\\development\\learning\\score\\testing.exe")
When I run it, it does not open the file "testing.exe", but the process ID does get printed. So help me figure out why the specified .exe is not getting opened. If I pass the complete path of the exe directly to the function open_exe() it does get opened, but I want to open it in the same manner as in the code above. Kindly suggest an appropriate solution.
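One thing that stands out: the call passes the literal string r"self.file" to Popen instead of the attribute's value. A minimal sketch of the call using the attribute itself (this is an assumption about the intent, not a verified fix):

# Pass the attribute's value, not the literal text "self.file",
# so Popen receives the actual path stored on the instance.
process = subprocess.Popen([self.file], creationflags=DETACHED_PROCESS)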
I wrote a Python script to run a command called gtdownload on a bunch of files with multiprocessing. The download function is where I am having trouble.
#!/usr/bin/env python
import os, sys, subprocess
from multiprocessing import Pool

def cghub_dnld_file(file1, file2, file3, np):
    <open files>
    <read in lines>
    p = Pool(int(np))
    map_args = [(line.rstrip(), name_lines[i].rstrip(), bar_lines[i].rstrip()) for i, line in enumerate(id_lines)]
    p.map(download_wrapper, map_args)

def download(id, name, bar):
    <check if file has been downloaded, if not download>
    <.....>
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    subprocess.call(dnld_cmd, shell=True)

def download_wrapper(args):
    return download(*args)

def main():
    <read in arguments>
    <...>
    cghub_dnld_file(file1, file2, file3, threads)

if __name__ == "__main__":
    main()
If the file does not exist in the database, gtdownload quits, which also kills my Python job with the following error:
Traceback (most recent call last):
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 102, in <module>
main()
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 98, in main
cghub_dnld_file(file1,file2,file3,threads)
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 22, in cghub_dnld_file
p.map(download_wrapper,map_args)
File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
OSError: [Errno 2] No such file or directory
The actual error message from gtdownload :
Welcome to gtdownload-3.8.5a.
Ready to download
Communicating with GT Executive ...
Headers received from the client: 'HTTP/1.1 100 Continue
HTTP/1.1 404 Not Found
Date: Tue, 29 Jul 2014 18:49:57 GMT
Server: Apache/2.2.15 (Red Hat and CGHub)
Strict-Transport-Security: max-age=31536000
X-Powered-By: PHP/5.3.3
Content-Length: 669
Connection: close
Content-Type: text/xml
'
Error: You have requested to download a uuid which either does not exist within the system, or has not yet reached the 'live' state. The requested action will not be performed. Please double check the supplied uuid or contact thelpdesk for further assistance.
I would like the script to skip the file that does not exist and start gtdownload on the next one. I tried to send the stderr of subprocess.call to a pipe and check whether it contains the word "error", but the script seems to stop at the subprocess.call line itself. The same thing happens with os.system.
I made an MCVE without the multiprocessing, and subprocess did not kill the main process at all. It looks like multiprocessing messes things up, although I ran it with just 1 thread for testing.
#!/usr/bin/env python
import subprocess
#THis is the id that gtdownload had problem with
id = "df1e073f-4485-4d5f-8659-cd8eaac17329"
link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
print dlnd_cmd
subprocess.call(dlnd_cmd,shell=True)
print "done"
Clearly multiprocessing conflicts with subprocess.call, but it is not clear to me why.
What is the best way to avoid the failure of subprocess killing the main process?
Handle the exception in some appropriate way and move on, of course.
try:
    subprocess.call(dlnd_cmd)
except OSError as e:
    print 'failed to download: {!r}'.format(e)
However, this may not be appropriate here. The kinds of exceptions that subprocess.call raises are usually not transient things that you can just log and work around; if it's not working now, it will continue to not work forever until you fix the underlying problem (a bug in your script, or gtdownload not being installed right, or whatever).
For example, if the code you showed us is your actual code:
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
subprocess.call(dlnd_cmd)
… then this is guaranteed to raise an OSError for the reason explained in dano's answer: call (without shell=True) will try to take that entire string—spaces, shell-redirection, etc.—as the name of an executable program to find on your $PATH. And there is no such program. So it will raise an OSError(errno.ENOENT). (Which is exactly what you're seeing.) Just logging that doesn't do you any good; it's a good thing that your entire process is exiting, so you can debug that problem.
subprocess.call should not kill the main process. Either something else is wrong with your script, or your conclusions about the script's behaviour are wrong. Did you try printing some trace output after the subprocess call?
You have to use shell=True to use subprocess.call with a string for an argument (and with shell redirection):
subprocess.call(dlnd_cmd, shell=True)
Without shell=True, subprocess tries to treat your entire command string like a single executable name, which of course doesn't exist, and leads to the No such file or directory exception.
See this answer for more info on when to use a string vs. when to use a sequence with subprocess.
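Alternatively, a minimal sketch that drops shell=True by passing an argument list and letting Python open the redirect targets itself; link is assumed to be defined as in the question, and os.path.expanduser stands in for the shell's ~ expansion:

import os
import subprocess

with open("gt.out", "w") as out, open("gt.err", "w") as err:
    returncode = subprocess.call(
        ["gtdownload",
         "-c", os.path.expanduser("~/.cghub.key"),  # the shell would normally expand ~
         "--max-children", "4",
         "-vv",
         "-d", link],
        stdout=out, stderr=err)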
I am trying to use shell commands inside my Python script to locate a specific directory and then grep a given file inside that directory. The catch is that I only have part of the directory name, so I need to use the find command to get the rest of the directory name (names are unique and will only ever return one folder).
The code I have so far is as follows:
def get_tag(part_of_foldername):
    import subprocess
    import os
    p1 = subprocess.Popen(["find", "/path/to/directory", "-maxdepth", "1", "-name", "%s.*" % part_of_foldername, "-type", "d"], stdout=subprocess.PIPE)
    directory = p1.communicate()[0].strip('\n')
    os.chdir(directory)
    p2 = subprocess.Popen(["grep", "STUFF_", ".hgtags"], stdout=subprocess.PIPE)
    tag = p2.comminucate()[0].strip('\n')
    return tag
Here is what's really strange: this code works when you enter it line by line in the interactive interpreter, but not when it's run through a script. It also works when you import the script file into the interactive interpreter and call the function, but not when it's called by the main function. The traceback I get from running the script directly is as follows:
Traceback (most recent call last):
File "./integration.py", line 64, in <module>
main()
File "./integration.py", line 48, in main
tag = get_tag(folder)
File "./integration.py", line 9, in get_date
os.chdir(directory)
OSError: [Errno 2] No such file or directory: ''
And it's called in the main function like this:
if block_dict[block][0] == '0':
    tag = get_tag(folder)
with "folder" being previously defined as a string.
Please note we use Python 2.6, so unfortunately I can't use check_output.
Have you tried using the glob module as opposed to find?
import glob
glob.glob("/path/to/directory/*/SomeDir/path/*")
You can look across multiple directories using **:
glob.glob("/path/**/SomeDir/path/*")
and that would match /path/to/your/SomeDir/path/file. (Note that recursive ** matching requires glob.glob(pattern, recursive=True) on Python 3.5+; on older versions ** behaves like a single *.)
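Applied to the question's function, a rough sketch; the directory path and pattern are placeholders following the question's example:

import glob
import os
import subprocess

def get_tag(part_of_foldername):
    # glob returns a list of matching paths; the question says names are
    # unique, so at most one directory should match.
    matches = glob.glob("/path/to/directory/*%s*" % part_of_foldername)
    if not matches:
        return ''
    os.chdir(matches[0])
    p = subprocess.Popen(["grep", "STUFF_", ".hgtags"], stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return out.rstrip('\n')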
Evidently p1.communicate()[0].strip('\n') is returning an empty string. Are you really using the hardcoded value "/path/to/directory" as in your example?
Check the result of p1.communicate()[0]. It may be an empty string.
The . in "%s.*" % part_of_foldername seems to be the cause.
UPDATE
Found typo: comminucate -> communicate
import os
import subprocess

def get_tag(part_of_foldername):
    p1 = subprocess.Popen(["find", "/path/to/directory", "-maxdepth", "1", "-name", "*%s*" % part_of_foldername, "-type", "d"], stdout=subprocess.PIPE)
    out, err = p1.communicate()
    directory = out.split('\n')[0]
    p1.wait()
    if directory:
        os.chdir(directory)
        p2 = subprocess.Popen(["grep", "STUFF_", ".hgtags"], stdout=subprocess.PIPE)
        out, err = p2.communicate()
        p2.wait()
        return out.rstrip('\n')