How can Python wait for an SGE batch script to finish execution?

I have a problem I'd like help solving.
I am working in Python and I want to do the following:
call an SGE batch script on a server
check whether it completed successfully
do something with the result
What I do now is approximately the following:
import subprocess

try:
    # qsub arguments elided in the original question
    tmp = subprocess.call(["qsub", ...])
    if tmp != 0:
        error_handler_1()
    else:
        correct_routine()
except:
    error_handler_2()
My problem is that once the job is submitted to SGE, my Python script interprets the submission as a success and keeps going as if the job had already finished.
Do you have any suggestions on how I could make the Python code wait for the actual result of the SGE script?
By the way, I tried using qrsh, but I don't have permission to use it on the SGE cluster.
Thanks!

From your code, it looks like you want the program to wait for the job to finish and check its return code. If so, qsub's -sync option is likely what you want:
http://gridscheduler.sourceforge.net/htmlman/htmlman1/qsub.html
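As a minimal sketch (assuming a placeholder job script myjob.sh and the handler functions from your snippet), submitting with -sync y makes qsub block until the job completes and propagate its exit status:
import subprocess

# -sync y makes qsub wait for the job to finish; its exit code then
# reflects the job's exit status. "myjob.sh" is a placeholder name.
ret = subprocess.call(["qsub", "-sync", "y", "-b", "n", "myjob.sh"])
if ret == 0:
    correct_routine()
else:
    error_handler_1()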

Additional answer, for easier processing:
You can use the Python drmaa module, which allows more complete interaction with SGE.
A working example from the documentation is below (provided you put a sleeper.sh script in the same directory).
Please note that the -b n option is needed to execute a .sh script; otherwise SGE expects a binary by default, as explained here.
import drmaa
import os

def main():
    """Submit a job.

    Note: needs a file called sleeper.sh in the current directory.
    """
    s = drmaa.Session()
    s.initialize()
    print('Creating job template')
    jt = s.createJobTemplate()
    jt.remoteCommand = os.getcwd() + '/sleeper.sh'
    jt.args = ['42', 'Simon says:']
    jt.joinFiles = False
    jt.nativeSpecification = "-m abe -M mymail -q so-el6 -b n"
    jobid = s.runJob(jt)
    print('Your job has been submitted with id ' + jobid)
    retval = s.wait(jobid, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print('Job: {0} finished with status {1}'.format(retval.jobId, retval.hasExited))
    print('Cleaning up')
    s.deleteJobTemplate(jt)
    s.exit()

if __name__ == '__main__':
    main()
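To branch on success like the original qsub snippet, you could check the JobInfo fields returned by s.wait (a sketch, assuming the error handlers from the question):
# inside main(), right after retval = s.wait(...):
if retval.hasExited and retval.exitStatus == 0:
    correct_routine()
else:
    error_handler_1()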

Related

How to login to IBM 5250 emulator using Python/Java/.NET? [duplicate]

I need to log in to an IBM i system using Python without entering the username and password manually.
I used the py3270 library, but it is not able to detect the wc3270 emulator. The emulator I use has a .hod extension and opens with IBM i Launcher.
Can anyone help me with this? What could be a possible solution?
os.system() is a blocking call: it stops further Python code from executing until whatever os.system() started has completed. This problem needs us to spawn a separate process, so that the Windows process executing the ACS software runs at the same time as the rest of the Python code. subprocess is one Python library that can handle this.
Here is some code that opens an ACS 5250 terminal window and pushes the user and password onto that window. There's no error checking, and the code assumes some ACS setup details on my system that yours may not match.
# the various print() statements are for looking behind the scenes
import sys
import time
import subprocess
from pywinauto.application import Application
import pywinauto.keyboard as keyboard
userid = sys.argv[1]
password = sys.argv[2]
print("Starting ACS")
cmd = r"C:\Users\Public\IBM\ClientSolutions\Start_Programs\Windows_x86-64\acslaunch_win-64.exe"
system = r'/system="your system name or IP goes here"'
# Popen takes the command and each parameter as separate list items
result = subprocess.Popen([cmd, r"/plugin=5250", system], shell=True)
print(result)
# wait at least long enough for Windows to get past the splash screen
print("ACS starting - pausing")
time.sleep(5)
print("connecting to Windows process")
ACS = Application().connect(path=cmd)
print(ACS)
# debugging
windows = ACS.windows()
print(windows)
dialog = ACS['Signon to IBM i']
print(dialog)
print("sending keystrokes")
keyboard.send_keys(userid)
keyboard.send_keys("{TAB}")
keyboard.send_keys(password)
keyboard.send_keys("{ENTER}")
print('Done.')
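To run it, pass the user ID and password as command-line arguments (the script name acs_login.py is hypothetical):
python acs_login.py MYUSER MYPASS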
Currently, I am facing the same issue. I was able to launch IBM i (ACS); however, once it ran, my Python script stopped executing, as if the app were blocking Python from running. Generally speaking, the app does not seem to detect the script. But once I closed the app, my Python script continued to work. I put in some indicators, e.g. time.sleep, but as mentioned, the script only reaches that line of code once IBM i is closed. A few more lines still need to be added to move the selection to 5250 and inject the credentials.
I tried with pyautogui and faced the same issue, so now I am trying pywinauto's keyboard module.
import os
import sys
import time
import pywinauto.keyboard as keyboard

# Variables
dir = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]
x = dir.split("\\")
print(x[-1])
command = r"cd \ && cd Users/Public/Desktop && " + '"' + x[-1] + '"'
print(command)
os.system(command)
# ------ FROM THIS LINE ONWARDS, IT STOPPED RUNNING ONCE IBM WAS LAUNCHED ------
print('TIME START')
time.sleep(5)
print('TIME END')
keyboard.send_keys(username)
keyboard.send_keys(password)
keyboard.send_keys("{ENTER}")
print('Done.')
Appreciate your help to look into this matter. Thanks
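A minimal sketch of the fix, based on the answer above and reusing the command, username, and password variables from this snippet: replace the blocking os.system() call with subprocess.Popen, which returns as soon as the process is spawned.
import subprocess
import time
import pywinauto.keyboard as keyboard

# Popen returns immediately, unlike os.system(), which waits for the
# launched program to exit; "command" is the string built above.
proc = subprocess.Popen(command, shell=True)

time.sleep(5)  # give the emulator time to get past startup
keyboard.send_keys(username)
keyboard.send_keys(password)
keyboard.send_keys("{ENTER}")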

How can I find the Command Line of a Process with Python?

I am trying to get the command line of a running process with Python.
I managed to do it with PowerShell, but I need to do it in Python, which is my problem.
Working Powershell Code:
Get-CimInstance Win32_Process -Filter "name = 'process.exe'" | select CommandLine
I tried everything I found but still can't do it...
It would be very nice if someone could help me.
This is answered here: https://stackoverflow.com/a/20721781/18505766
import psutil

for process in psutil.process_iter():
    # cmdline() is a method; it returns the process's argument list
    cmdline = process.cmdline()
    if "main.py" in cmdline and "testarg" in cmdline:
        # do something
        pass
EDIT: Sorry, actually this is a bit different.
But if you're on Windows, you can try the following:
import wmi

w = wmi.WMI()
name = "cmd.exe"
args = None  # stays None if no matching process is found
for process in w.Win32_Process():
    if process.Name == name:
        tmp1 = process.CommandLine
        # assumes the command line contains at least one argument
        tmp2 = tmp1.split(' ', 1)
        args = tmp2[1]
        break
print(args)
EDIT 2: improved the code. Important: only the first matching process's arguments are stored in the variable args!
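As an alternative that mirrors the PowerShell one-liner more directly, psutil can filter by process name (a sketch; 'process.exe' stands in for the real name):
import psutil

# Equivalent in spirit to:
# Get-CimInstance Win32_Process -Filter "name = 'process.exe'" | select CommandLine
for proc in psutil.process_iter(['name', 'cmdline']):
    if proc.info['name'] == 'process.exe':
        print(' '.join(proc.info['cmdline']))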

Subprocess to open python file and return data

I am trying to use Python to run another Python file. This file is going to start up a socket and create threads for listening for additional connections, plus threads for sending/receiving data. The main thread will not return.
However, if the socket setup fails, I want to return an error code to the other Python script that executed the subprocess.
main.py

import subprocess

py3output = subprocess.check_output(['python3', 'py3.py'])
print('py3 said:' + str(py3output))

py3.py

def returnme():
    return 10

returnme()
When I run this, it prints:
py3 said:b''
I am just trying to figure out how to get the return value back to the main calling program.
To return an exit code n back to the OS, you need sys.exit(n). But it seems you do not want the exit code but the stdout output, so your program needs to be rewritten to:
def returnme():
    return 10

print(returnme())
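If you did want the exit code instead, a minimal sketch (assuming py3.py calls sys.exit(10) rather than printing, and Python 3.5+ for subprocess.run):
import subprocess

# py3.py would end with sys.exit(10); returncode then carries that value
result = subprocess.run(['python3', 'py3.py'])
print('py3 exited with code', result.returncode)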
You can only pass data back as a string over standard output, using code like the following:
sample.py

import sys

def returnme():
    sys.stdout.write(str(10))
    sys.stdout.flush()

returnme()

main.py

from subprocess import check_output

output = check_output(['python', 'sample.py'])
# check_output returns bytes, so decode before concatenating
print('Sample.py says :' + output.decode())

python-daemon to start an independent process but let the main application continue?

Okay, I'm officially out of ideas after running every sample I could find on Google, up to the 19th page. I have a "provider" script. The goal of this Python script is to start up other services that keep running indefinitely even after the "provider" has stopped. Basically: start the process, forget about it, but let the script continue rather than stopping.
My problem: python-daemon. I have actions (web-service calls to start/stop/get status of the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a Java process, a long-running service that will be stopped some time later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here.
    # The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
        # This part never runs. Even a simple print statement here never
        # appears. Debugging in PyCharm shows my script returns 0 on "with context".
        with open(pidfile, 'r') as pf:
            pid = pf.read()
        return pid
From here on, in my caller of this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that will be used to stop this process in another request).
What happens? After "with context", my application exits with status 0, nothing is returned, no JSON response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process, and I need to know its PID in order to stop it later on. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with a somewhat more promising result:
def start(self, command):
    import psutil  # or subprocess; both provide Popen
    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time on the return statement, everything works great! The service is running, the pid is there, and I can stop it later on. Then I ran the provider without debugging: it returns the pid, but the process is not running. It seems Popen has no time to start the service because the whole provider exits first.
Update:
Using os.fork:
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            # write() expects a string; writing the bare int raised
            # TypeError in the child, which is why the pidfile stayed empty
            pf.write(str(proc.pid))

def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. After the main script exits (because I'm not fast enough):
ps auxf | grep
No instances are running...
Checking the pidfile; sure, it's there, it was created:
cat /application.pid
EMPTY, 0 bytes
From multiple partial tips I got, I finally managed to get it working:
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        # This was strange, as I needed to use the name of the shell script
        # twice: command = [argv0, args...]. Using ksh as the command gave a nice error.
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid
That, together, solved the issue. The started process is no longer tied to the running Python script and won't stop when the script terminates.
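For reference, a similar detachment can often be achieved without a manual fork. A sketch using subprocess.Popen with start_new_session=True (Python 3), reusing command and working_directory from the snippet above:
import subprocess

# start_new_session=True calls setsid() in the child, detaching it from
# this script's session so it keeps running after the script exits.
proc = subprocess.Popen(command, cwd=working_directory,
                        start_new_session=True)
with open('application.pid', 'w') as pf:
    pf.write(str(proc.pid))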
Have you tried os.fork()?
In a nutshell, os.fork() clones the current process; in the parent it returns the PID of the newly created child process, and in the child it returns 0.
You could do something like this:
#!/usr/bin/env python
import os
import subprocess
import sys
import time

command = 'ls'               # YOUR COMMAND
working_directory = '/etc'   # YOUR WORKING DIRECTORY

def child(command, directory):
    print("I'm the child process, will execute '%s' in '%s'" % (command, directory))
    # Change working directory
    os.chdir(directory)
    # Execute command
    cmd = subprocess.Popen(command,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print(output.decode())
    # Exiting
    print('Child process ending now')
    sys.exit(0)

def main():
    print("I'm the main process")
    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print('A subprocess was created with PID: %s' % pid)
        # Do stuff here ...
        time.sleep(5)
        print('Main process ending now.')
        sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related-question: Regarding The os.fork() Function In Python

Script to capture everything on screen

So I have this Python 3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team; however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command script. However, I cannot find an automated method of starting script and executing my test inside it.
I have a command in /usr/bin/
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is redirecting the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
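Note that in your original command the semicolon means python3.5 only runs after the interactive script session exits, which is why it appears to stall. If you do want to use script, its -c option (in util-linux versions of script) runs a single command non-interactively:
script -c 'python3.5 /home/centos/scripts/test.py' log_name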
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
I actually managed to do it in Python 3. It took a lot of work, but here is the Python solution:
import paramiko
from subprocess import Popen, PIPE

# Assumed setup (not shown in the original): the log path and the ssh client
LOG_RUN_OUTPUT = 'run_output.log'
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    proc = Popen(cmd.encode("utf8"), shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except:
            input("Paused, because " + command + " failed to run.\nPlease verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            # exec_command's streams return bytes; decode before formatting
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
