I am using Python's subprocess to email a logfile to the user who runs the script. However, each time the user runs the script, the logfile gets overwritten. Here is the Unix command I run through subprocess inside the Python code:
subprocess.Popen("mail -s 'logfile.log attached' -r az12@abc.com -a logfile.log $USER@abc.com &> /dev/null", shell=True)
How could I make the logfile name unique? Maybe increment the name as logfile1.log, logfile2.log, and so on?
The trick is: how do I achieve this inside subprocess?
You can also do this with the datetime module:
import datetime
import subprocess

filename = "logfile-%s.log" % datetime.datetime.today().isoformat()
command = "mail -s '{0} attached' -r az12@abc.com -a {0} $USER@abc.com &> /dev/null".format(filename)
subprocess.Popen(command, shell=True)
The log file name will look like logfile-2015-03-13T21:37:14.927095.log.
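If the colons that isoformat() puts in the time portion are a problem for whatever tool handles the file, strftime gives a filesystem-friendlier stamp. A small sketch (the format string is one choice among many):

```python
import datetime

# strftime avoids the colons that isoformat() produces in the time
# portion, which some tools and filesystems handle badly
timestamp = datetime.datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
filename = "logfile-%s.log" % timestamp
print(filename)  # e.g. logfile-2015-03-13_21-37-14.log
```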
Try using a timestamp to generate the logfile name. As for using it in subprocess: the command is just a string, so it is as simple as
import subprocess
import time

fileName = "logfile." + str(time.time()) + ".log"  # use your own logic to generate the logfile name
command = "mail -s '%s attached' -r az12@abc.com -a %s $USER@abc.com &> /dev/null" % (fileName, fileName)
subprocess.Popen(command, shell=True)
Python 3.10.6
Windows 10
I have a Python function that executes a DXL script using subprocess.run() or os.system() (whichever works best, I guess). The problem is that when I run a custom command from Python it does not work, but when I paste the same command into the command prompt, it works. I should also clarify that the command prompt is not the MS Store Windows Terminal (it cannot run IBM DOORS commands for some reason); it is the original cmd prompt.
I need to use both python and IBM Doors for the solution.
Here is a summarized version of my code (obviously, the access values are not real):
@staticmethod
def run_dxl_importRTF():
    dquotes = chr(0x22)  # ASCII --> "
    module_name = "TEST_TEMP"
    script_path = "importRTF.dxl"
    script_do_nothing_path = "doNothing.dxl"
    user = "user"
    password = "pass"
    database_config = "11111@11.11.1111.0"
    doors_path = dquotes + r"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe" + dquotes
    file_name = "LIBC_String.rtf"

    # Based On:
    # "C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe" -dxl "string pModuleName = \"%~1\";string pFilename = \"%~2\";#include <importRTF.dxl>" -f "%TEMP%" -b "doNothing.dxl" -d 11111@11.11.1111.0 -user USER -password PASSWORD
    script_arguments = f"{dquotes}string pModuleName=\{dquotes}{module_name}\{dquotes};string pFileName=\{dquotes}{file_name}\{dquotes};#include <{script_path}>{dquotes}"
    command = [doors_path, "-dxl", script_arguments, "-f", "%TEMP%", "-b", script_do_nothing_path, '-d', database_config, '-user', user, '-password', password]

    res = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    print(f"COMMAND:\n{' '.join(res.args)}")
    print(f"STDERR: {repr(res.stderr)}")
    print(f'STDOUT: {res.stdout}')
    print(f'RETURN CODE: {res.returncode}')
    return
PYTHON SCRIPT OUTPUT:
COMMAND:
"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe" -dxl "string pModuleName=\"TEST_TEMP\";string pFileName=\"LIBC_String.rtf\";#include <importRTF.dxl>" -f %TEMP% -b doNothing.dxl -d 11111@11.11.1111.0 -user USER_TEMP -password PASS_TEMP
STDERR: 'The system cannot find the path specified.\n'
STDOUT:
RETURN CODE: 1
When I run the same command in the command prompt, it works (dxl script is compiled).
I identified the problem, which is the script_arguments variable. Meaning that when I just try to enter the IBM DOORS server without compiling a DXL script, it works from both Python and the command prompt.
The Python script needs to be dynamic, meaning that all of the initially declared variables can change value and may contain a path string. I am also trying to avoid .bat files; they did not work with dynamic path values either.
Thanks for your time
I tried:
Changing CurrentDirectory (cwd) to IBM Doors
os.system()
Multiple workarounds
Tried the IBM DOORS path without double quotes (it doesn't work because of the whitespace)
.bat files
When calling subprocess.run with a command list and shell=True, Python will expand the command list into a string, adding more quoting along the way. The details are OS-dependent (on Windows, the list is always expanded into a command string), but you can see the result via the subprocess.list2cmdline() function.
Your problem is these extra escapes. Instead of using a list, build a shell command string that already contains the escaping you want. You can also use ' for quoting strings so that internal " needed for shell quoting can be entered literally.
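You can see the extra escaping for yourself. A minimal sketch using list2cmdline on an argument list in the same style (the values here are illustrative, not the real DOORS invocation):

```python
import subprocess

# list2cmdline shows how subprocess flattens a list into a single
# Windows command string -- note the backslash escapes it adds
# around the embedded double quotes
args = ["doors.exe", "-dxl", 'string pModuleName="TEST_TEMP";']
print(subprocess.list2cmdline(args))
# doors.exe -dxl "string pModuleName=\"TEST_TEMP\";"
```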
Putting it all together (and likely messing something up here), you would get
@staticmethod
def run_dxl_importRTF():
    module_name = "TEST_TEMP"
    script_path = "importRTF.dxl"
    script_do_nothing_path = "doNothing.dxl"
    user = "user"
    password = "pass"
    database_config = "11111@11.11.1111.0"
    doors_path = r"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe"
    file_name = "LIBC_String.rtf"

    script_arguments = (rf'string pModuleName=\"{module_name}\";'
                        rf'string pFileName=\"{file_name}\";'
                        rf'#include <{script_path}>')
    command = (f'"{doors_path}" -dxl "{script_arguments}" -f "%TEMP%"'
               f' -b "{script_do_nothing_path}" -d {database_config}'
               f' -user {user} -password {password}')

    res = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    print(f"COMMAND:\n{res.args}")  # res.args is already a string here
    print(f"STDERR: {repr(res.stderr)}")
    print(f'STDOUT: {res.stdout}')
    print(f'RETURN CODE: {res.returncode}')
I'm trying to find and kill all other running instances of the same Python script. I found an edge case where the script path is not in psutil's cmdline list: when the process is started with ./myscript.py rather than python ./myscript.py.
The script's content is (note the shebang):
#!/bin/python
import os
import psutil
import sys
import time

for proc in psutil.process_iter():
    if "python" in proc.name():
        print("name", proc.name())
        script_path = sys.argv[0]
        proc_script_path = sys.argv[0]
        if len(proc.cmdline()) > 1:
            proc_script_path = proc.cmdline()[1]
        else:
            print("there's no path in cmdline")
        if script_path.startswith("." + os.sep) or script_path.startswith(".." + os.sep):
            script_path = os.path.normpath(os.path.join(os.getcwd(), script_path))
        if proc_script_path.startswith("." + os.sep) or proc_script_path.startswith(".." + os.sep):
            proc_script_path = os.path.normpath(os.path.join(proc.cwd(), proc_script_path))
        print("script_path", script_path)
        print("proc_script_path", proc_script_path)
        print("my pid", os.getpid())
        print("other pid", proc.pid)
        if script_path == proc_script_path and os.getpid() != proc.pid:
            print("terminating instance ", proc.pid)
            proc.kill()
time.sleep(300)
how can I get the script path of a python process when it's not in psutil's cmdline?
When invoking a Python script like this, the check if "python" in proc.name() is the problem. For the scripts in question, name() will not show python or python3; it will show the script name instead. Try the following:
import psutil

for proc in psutil.process_iter():
    print('Script name: {}, cmdline: {}'.format(proc.name(), proc.cmdline()))
You should see something like:
Script name: myscript.py, cmdline: ['/usr/bin/python3', './myscript.py']
Hope this helps.
When the process is started with ./relative/or/absolute/path/to/script.py rather than python /relative/or/absolute/path/to/script.py, psutil.Process.name() is script.py, not python.
To get the list of process paths running your script.py:
ps -eo pid,args|awk '/script.py/ && $2 != "awk" {print}'
To get the list of process paths running your script.py while excluding any path containing psutil, replace script.py and psutil in the following script:
ps -eo pid,args|awk '! /psutil/ && /script.py/ && $2 != "awk" {print}'
explanation:
ps -eo pid,args list all processes, providing process id and process path (args)
! /psutil/ match all process paths not having psutil in the path.
&& /script.py/ and match all process paths having script.py in the path.
&& $2 != "awk" and not wanting this awk process.
{print} output the matched lines.
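The same filtering can be sketched in pure Python, which may be easier to embed in the watcher script itself. The ps output lines below are illustrative, not real process data:

```python
# Pure-Python version of the filtering the awk one-liner performs,
# applied to sample `ps -eo pid,args` output lines
sample = [
    "1234 ./script.py --verbose",
    "5678 /usr/bin/python3 ./script.py",
    "9012 awk /script.py/",
    "3456 /usr/bin/python3 watcher_psutil.py script.py",
]

matches = [
    line for line in sample
    if "script.py" in line        # path contains script.py
    and "psutil" not in line      # ...but not psutil
    and line.split()[1] != "awk"  # ...and is not the awk process itself
]
print(matches)
```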
I'm running a Bash system command from within Python and have encountered this problem when using read to define a here-document in the command:
import os
text = "~|||-:this is text:-|||~"
fileName = "sound.wav"
command =\
"IFS= read -d \'\' text <<EOF\n" +\
text + "\n" +\
"EOF\n" +\
"echo \"${text}\" | text2wave -scale 1 -o " + fileName
os.system(command)
Could you help me figure out how to fix this?
Here is a slightly simplified version:
import os
text = "~|||-:this is text:-|||~"
command =\
"IFS= read -d \'\' text <<EOF\n" +\
text + "\n" +\
"EOF\n" +\
"echo \"${text}\""
os.system(command)
I wanted to make clear in the one above that I'd be using pipes. When I run this, I get the following error:
sh: 1: read: Illegal option -d
There's no reason to do all that in the shell. Python can write directly to the standard input of the process that will run text2wave. Here's an example using the subprocess module:
p = subprocess.Popen(["text2wave", "-scale", "1", "-o", filename], stdin=subprocess.PIPE)
p.stdin.write((text + "\n").encode())  # encode() needed on Python 3
p.stdin.close()  # send EOF so text2wave can finish
p.wait()
A more Pythonic way to do this would be to use subprocess, either by passing stdin to check_call or by using Popen.communicate.
Also, -d might not be a valid option in your version of read. What does help read tell you?
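On Python 3, the whole pattern collapses into a single subprocess.run call with input=. A sketch using cat as a stand-in for text2wave (which may not be installed):

```python
import subprocess

text = "~|||-:this is text:-|||~"

# `cat` stands in for text2wave here; the same pattern applies to any
# command that reads its data from stdin, so no here-document is needed
res = subprocess.run(
    ["cat"],
    input=text + "\n",
    capture_output=True,
    text=True,
)
print(res.stdout)
```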
I seem to be having a problem. I have a view that allows staff users to download the MySQL database for the program; however, it is not working at all. I get an error which says [Errno 2] No such file or directory: '/usr/local/src/djcode/c2duo_mms/backup.gz'.
I don't know why I get the error, but the likely answer is that I can't dump the database properly. It can't find backup.gz because the step that is supposed to dump the file does not work.
views.py
@login_required
def dbbackup(request):
    if not (request.user.is_authenticated() and request.user.is_staff):
        raise http.Http404
    os.popen3("mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql")
    os.popen3("gzip -c /usr/local/src/djcode/c2duo_mms/backup.sql > /usr/local/src/djcode/c2duo_mms/backup.gz")
    dataf = open('/usr/local/src/djcode/c2duo_mms/backup.gz', 'r')
    return HttpResponse(dataf.read(), mimetype='application/x-gzip')
EDIT: I have tried running a small Python script. The following file works (it saves a file named backup.gz in the c2duo_mms directory). So why can I not do the same thing from my views.py file?
#!/usr/bin/env python
import os
os.popen3("mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql")
os.popen3("gzip -c /usr/local/src/djcode/c2duo_mms/backup.sql > /usr/local/src/djcode/c2duo_mms/backup.gz")
Use a full path here:
os.popen3("mysqldump --add-drop-table -u " + settings.DATABASE_USER + " -p" + settings.DATABASE_PASSWORD + " " + settings.DATABASE_NAME + " > backup.sql")
i.e. where you are saving the SQL dump.
Try something like this:
import subprocess
command = "mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql"
p = subprocess.Popen(command, shell=True, bufsize=0, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
p.wait()
output = p.stdout.read()
p.stdout.close()
The var "output" will give you access to any error messages from the command.
Popen opens a process, but it does not create a shell around it. Since there is no intermediate shell, the redirects in the command string are not interpreted.
Popen does return file handles to the various streams in/out of the process; without redirects, stdout is what you get.
If you read and then store the content from those pipe handles, you can do the redirect inside the Python code.
Perhaps you could consider the subprocess module - http://docs.python.org/library/subprocess.html - which lets you run the command through a shell, and the shell can then interpret the redirects.
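For example, a redirect done in Python rather than by a shell: pass an open file object as stdout, so no shell is needed to interpret ">". A sketch where echo stands in for mysqldump and the output path is illustrative:

```python
import os
import subprocess
import tempfile

# Pass an open file as stdout to redirect the child's output without
# involving a shell; `echo` stands in for mysqldump here
path = os.path.join(tempfile.gettempdir(), "backup.sql")
with open(path, "w") as out:
    subprocess.run(["echo", "dump placeholder"], stdout=out)

with open(path) as f:
    print(f.read())  # dump placeholder
```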
The webserver was running as a different user than the owner of that folder (they need to match), so it did not have permission to save there. I changed the ownership of the folder I wanted to save to, which has fixed it:
chown -R "apache" c2duo_mms
I usually execute a Fortran file in Linux (manually) as:
Connect to the server
Go to the specific folder
ifort xxx.for -o xxx && ./xxx (where 'xxx.for' is my Fortran file and 'xxx' is Fortran executable file)
But I need to call my fortran file (xxx.for) from python (I'm a beginner), so I used subprocess with the following command:
cmd = ["ssh", sshConnect, "cd %s;"%(workDir), Fortrancmd %s jobname "%s -o %s" exeFilename "%s && %s ./ %s%s"%(exeFilename)]
But I get an error, and I'm not sure what's wrong. Here's the full code:
import string
import subprocess as subProc
from subprocess import Popen as ProcOpen
from subprocess import PIPE
import numpy
import subprocess

userID = "pear"
serverName = "say4"
workDir = "/home/pear/2/W/fortran/"
Fortrancmd = "ifort"
jobname = "rad.for"
exeFilename = "rad"
sshConnect = userID + "@" + serverName
cmd=["ssh", sshConnect, "cd %s;"%(workDir), Fortrancmd %s jobname "%s -o %s" exeFilename "%s && %s ./ %s%s"%(exeFilename)]
# command to execute fortran files in Linux
# ifort <filename>.for -o <filename> && ./<filename> (press enter)
# example: ifort xxx.for -o xxx && ./xxx (press enter)
print cmd
How can I write a python program that performs all 3 steps described above and avoids the error I'm getting?
There are some syntax errors...
original:
cmd=["ssh", sshConnect, "cd %s;"%(workDir), Fortrancmd %s jobname "%s -o %s" exeFilename "%s && %s ./ %s%s"%(exeFilename)]
I think you mean:
cmd = [
    "ssh",
    sshConnect,
    "cd %s;" % (workDir,),
    "%s %s -o %s && ./%s" % (Fortrancmd, jobname, exeFilename, exeFilename),
]
A few notes:
a tuple with one element requires a trailing comma, as in (workDir,), to be interpreted as a tuple (vs. simple order-of-operations parens)
it is probably easier to construct your fortran command with a single string-format operation
PS - For readability it is often a good idea to break long lists into multiple lines :)
my advice
I would recommend looking at this stackoverflow thread for ssh instead of using subprocess
For the manual part you may want to look into pexpect or, for Windows, wexpect. These allow you to run subprocesses and pass them input under interactive conditions.
However, most of what you're doing sounds like it would work well in a shell script. For simplicity, you could put a shell script on the server side for the server-side operations, and then plug its path into the ssh statement:
ssh user@host "/path/to/script.sh"
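From Python, that ssh invocation can be built as an argument list, which sidesteps shell quoting entirely. A sketch with placeholder user, host, and script path:

```python
import subprocess

# Building the ssh invocation as an argument list keeps quoting simple;
# user, host, and the remote script path below are placeholders
user = "user"
host = "host"
remote_script = "/path/to/script.sh"

cmd = ["ssh", f"{user}@{host}", remote_script]
print(cmd)
# To actually run it (requires key-based ssh access):
# subprocess.run(cmd, capture_output=True, text=True)
```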
one error:
You have an unquoted %s in your list of args, so your string formatting will fail.
Here is a complete example of using the subprocess module to run a remote command via ssh (a simple echo in this case) and grab the results, hope it helps:
>>> import subprocess
>>> proc = subprocess.Popen(("ssh", "remoteuser@host", "echo", "1"), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> stdout, stderr = proc.communicate()
Which in this case returns: ('1\n', '')
Note that to get this to work without requiring a password you will likely have to add your local user's public key to ~remoteuser/.ssh/authorized_keys on the remote machine.
You could use fabric for steps 1 and 2.
This is the basic idea:
from fabric.api import *

env.hosts = ['host']
dir = '/home/...'

def compile(file):
    with cd(dir):
        run("ifort %s.for -o %s" % (file, file))
        run("./%s > stdout.txt" % file)
Create a fabfile.py with the above, and run fab compile:filename.
Do you have to use Python?
ssh user@host "command"