I am using Expect for automation, and I want to execute a Python script from it. But it is not working... This is what I have tried so far:
#!/usr/bin/expect
spawn "./os_fun"
and
#!/usr/bin/expect
spawn "./os_fun.py"
and
#!/usr/bin/expect
spawn python "./os_fun(.py)"
The "os_fun.py" contains the simple code:
#!/usr/bin/python
import os
print os.getcwd()
I should also mention that I must use Expect (not Bash) for the automation part, and I am not supposed to use Pexpect.
With Expect, you always have to expect something so that Expect will wait for it; otherwise, it just proceeds. Simply spawning a process on its own does not make sense, as Expect does not wait for it, which in turn means the user never sees the output either.
In your case, you just have to run the script and watch the output until the program completes. I hope my understanding is correct.
#!/usr/bin/expect
spawn python os_fun.py
expect eof; # will wait till 'eof' is seen
Here, the expect command will wait until it sees the spawned program close.
The default timeout is 10 seconds, which can be changed as follows:
set timeout 60; # timeout value of 1 min
Related
I am running some Python code using a SLURM script on a remote server accessed through SSH. At some point, issues related to licenses on the SLURM platform may happen, generating errors in Python and ending the subprocess. I want to use try-except to let the Python subprocess wait until the issue is fixed, after that it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is to keep Python inside a loop if the error occurs and let it read a file every X seconds. When I finally fix the error and want it to keep running from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a handler for the USR1 signal to your Python script.
In the signal handler function, you can set a global variable, send a message, or set a threading.Event that the main process is waiting on.
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
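A minimal sketch of that idea (the names resume_event and handle_usr1 are illustrative, not from your script):

import signal
import threading

resume_event = threading.Event()  # the main loop blocks on this while the license issue persists

def handle_usr1(signum, frame):
    # Runs asynchronously when SIGUSR1 arrives; unblock the main loop.
    resume_event.set()

signal.signal(signal.SIGUSR1, handle_usr1)

# In the main loop, when a license error is caught:
# resume_event.clear()
# resume_event.wait()  # blocks until `kill -USR1 <PID>` is received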
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and your program waiting in a loop with a 1s sleep for that file to be removed. This way you can tell which process id is blocked.
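A sketch of that file-based variant (os.getpid() mirrors the shell's $$, so the marker name is per-process):

import os
import time

marker = "/tmp/blocked.%d" % os.getpid()  # one marker file per blocked process
open(marker, "w").close()                 # mark this process as blocked
while os.path.exists(marker):             # resume once someone removes the file
    time.sleep(1)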
I want to store the output of the terminal command top into a file, using Python.
In the terminal, when I type top and hit enter, I get an output that is real time, so it keeps updating. I want to store this into a file for a fixed duration and then stop writing.
file=open("data.txt","w")
file.flush()
import os,time
os.system("top>>data.txt -n 1")
time.sleep(5)
exit()
file.close()
I have tried to use time.sleep() and then exit(), but it doesn't work; the only way top can be stopped is in the terminal, with Ctrl+C.
The process keeps running and the data is continuously written to the file, which is not ideal, as one would guess.
For clarity: I know how to write the output to the file; I just want to stop writing after a fixed period.
os.system will wait for the end of the child process. If you do not want that, the Pythonic way is to use the subprocess module directly:
import subprocess

timeout = 60  # let top run for one minute
file = open("data.txt", "w")
# -b (batch mode) lets top write to something that is not a terminal
top = subprocess.Popen(["top", "-b", "-n", "1"], stdout=file)
try:
    top.wait(timeout)  # wait at most timeout seconds
except subprocess.TimeoutExpired:
    top.terminate()    # and terminate the child
The paranoid way (which is highly recommended for robust code) would be to use the full path of top. I have not done so here, because it may depend on the actual system...
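For example, a small sketch of that lookup (shutil.which is available from Python 3.3):

import shutil

top_path = shutil.which("top")  # e.g. "/usr/bin/top"; None if top is not on PATH

You would then pass top_path to Popen instead of the bare name.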
The issue you are facing is that os.system runs the command synchronously: the rest of your script will not run until the command you started has completed execution.
I think what you want to do is execute your console command on another thread, so that the thread running your Python script can continue while the command runs in the background. See run a python program on a new thread for more info.
I'd suggest something like (this is untested):
import os
import time
import multiprocessing

myThread = multiprocessing.Process(target=os.system, args=("top -n 1 >> data.txt",))
myThread.start()
time.sleep(5)
myThread.terminate()
That being said, you may need to consider the thread safety of os.system(); if it is not thread safe, you'll need to find an alternative that is.
Something else worth noting (and that I know little about) is that it may not be ideal to terminate threads this way; see some of the answers here: Is there any way to kill a Thread?
I'm working on a tool for data entry at my job where it basically takes a report ID number, opens a PDF to that page of that report, allows you to input the information and then saves it.
I'm completely new to instantiating new processes in Python; this is the first time that I've really tried to do it. So basically, I have a relevant function:
def get_report(id):
    path = report_path(id)
    if not path:
        raise NameError
    page = get_page(path, id)
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
In order to open the report in Adobe Acrobat and be able to input information while the report is still open, I determined that I had to use multiprocessing. So, in the main loop of the program, where it iterates through the data and gets the report ID, I have this:
for row in rows:
    print 'Opening report for {}'.format(ID)
    arg = ID
    proc = Process(target=get_report, args=(arg,))
    proc.start()
    row[1] = raw_input('Enter the desired value: ')
    rows.updateRow(row)
    while proc.is_alive():
        pass
This way, one can enter data without the program hanging on the subprocess.Popen() command. However, if it simply continues on to the next record without closing the Acrobat window that pops up, then it won't actually open the next report. Hence the while proc.is_alive():, as it gives one a chance to close the window manually. I'd like to kill the process immediately after 'enter' is hit and the value entered, so it will go on and just open the next report with even less work. I tried several different things, ways to kill processes through the pid using os.kill(); I tried killing the subprocess, killing the process itself, killing both of them, and also tried using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
What am I missing here? How do I kill the process and close the window that it opened? Is this even possible? Like I said, I have just about zero experience with processes in Python. If I'm doing something horribly wrong, please let me know!
Thanks in advance.
To kill/terminate a subprocess, call proc.kill()/proc.terminate(). It may leave grandchild processes running; see subprocess: deleting child processes in Windows.
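If grandchildren are the issue on Windows, one common workaround is to kill the whole process tree with the stock taskkill utility (a sketch; proc is the Popen object from above):

import subprocess

subprocess.call(["taskkill", "/F", "/T", "/PID", str(proc.pid)])  # /T kills the tree, /F forces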
This way, one can enter data without the program hanging on the subprocess.Popen() command.
Popen() starts the command. It does not wait for the command to finish. There is a .wait() method, and there are convenience functions such as call().
Even if Popen(command).wait() returns, i.e., the corresponding external process has exited, it does not necessarily mean that the document is closed in the general case (the launcher app is done, but the main application may persist).
That is, the first step is to drop the unnecessary multiprocessing.Process and call Popen() in the main process instead.
The second step is to make sure to start an executable that owns the opened document, i.e., one where killing it means the corresponding document won't stay open: AcroRd32.exe might already be such a program (test it: see whether call([r'..\AcroRd32.exe', ..]) waits for the document to be closed), or it might have a command-line switch that enables such behavior. See How do I launch a file in its default program, and then close it when the script finishes?
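A minimal sketch of those two steps combined, reusing the names from the question (rows, page, path as set up above) and assuming that terminating AcroRd32.exe does close the window, which the caveat above says you should verify:

import subprocess

for row in rows:
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
    row[1] = raw_input('Enter the desired value: ')
    rows.updateRow(row)
    proc.terminate()  # close the viewer once the value has been entered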
I tried killing the subprocess, killing the process itself, killing both of them, and also tried using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
If kill() and Popen() behave the same in your case, then either you've made a mistake (they don't behave the same; you should create a minimal standalone code example with a dummy PDF that demonstrates the problem, and describe in words what you expect to happen, step by step, and what happens instead), or AcroRd32.exe is just a launcher app as described above (it opens the document and immediately exits without waiting for the document to be closed).
I have several Python scripts that turn my TV on and off. Sometimes the TV does not respond the first time, so I use a while loop to continue sending the command until the "success" response is sent, up to 10 times.
I need to check whether one of these programs is already running when any of them is started, and kill the first process.
This answer uses domain locks, and I think this could work, but I don't really understand what's happening there:
https://stackoverflow.com/a/7758075/2005444
What I don't know is what the process_name would be. The scripts are titled tvon.py, tvoff.py, and tvtoggle.py. Is it just the title? Would it include the extension? How do I get the PID so I can kill the process?
This is running on Ubuntu 14.04.1
EDIT: all I really need is to check first whether any of these scripts is running. Also, instead of killing the process, maybe I could just wait for it to finish: I could loop and break out once none of those processes is running.
The reason I need to do this is that if the TV is off and the off script is run, it will never succeed; the TV won't respond if it is already off, which is why I built in the limit of 10 commands. It never really takes more than 4, so 10 is overkill. The problem is that if the off command is trying to run and I turn the TV on using the tvon script, the TV will turn on and back off. Although the TV limits how often commands can be accepted, which reduces the chance of this happening, I still want this to work as cleanly as possible.
EDIT:
I found that I cannot kill the process, because doing so can lock up the tty port, which requires a manual restart. So I think the smarter way is to have the second process wait until the first is done, or to find a way to tell the first process to stop at a specific point in its loop so I know it's not transmitting.
If you have a socket, use it. Sockets provide full-blown bidirectional communication. Just write your script to kill itself if it receives anything on the socket. This can be most easily done by creating a separate thread which tries to do a socket.recv() (for SOCK_DGRAM) or socket.accept() (for SOCK_STREAM/SOCK_SEQPACKET), and then calls sys.exit() once that succeeds.
You can then use socket.send() (SOCK_DGRAM) or socket.connect() (SOCK_STREAM/SOCK_SEQPACKET) from the second script instance to ask the first instance to exit.
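A minimal sketch of the SOCK_DGRAM variant (the port number is arbitrary; note that from a helper thread, os._exit() is what actually ends the whole process, where a plain sys.exit() would only end that thread):

import os
import socket
import threading

PORT = 50007  # arbitrary local port, shared by all three scripts

def exit_listener():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", PORT))
    s.recv(1)    # block until any datagram arrives
    os._exit(0)  # end the whole process

t = threading.Thread(target=exit_listener)
t.daemon = True
t.start()

# From the second instance, ask the running one to exit:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(b"quit", ("127.0.0.1", PORT))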
This function can kill a Python script by name on *nix systems. It looks through the list of running processes, finds the PID of the one associated with your script, and issues a kill command.
import os
import subprocess

def killScript(scriptName):
    # get running processes with the ps aux command
    res = subprocess.check_output(["ps", "aux"], stderr=subprocess.STDOUT)
    for line in res.split("\n"):
        # if one of the lines lists our process
        if line.find(scriptName) != -1:
            info = []
            # split the information into info[]
            for part in line.split(" "):
                if part.strip() != "":
                    info.append(part)
            # the PID is in the second slot
            PID = info[1]
            # skip our own entry so the script does not kill itself
            if PID == str(os.getpid()):
                continue
            # kill the PID
            subprocess.check_output(["kill", PID], stderr=subprocess.STDOUT)
At the beginning of your tv script you could run something like:
killList = ["tvon.py", "tvoff.py", "tvtoggle.py"]
for script in killList:
    killScript(script)
I have a Python IRC bot which I start up as root by doing /etc/init.d/phenny start. Sometimes it dies, though, and it seems to happen overnight.
What can I do to inspect it and see the status of the process in a text file?
If you know it's still running, you can pstack it to see its stack trace. I'm not sure how useful that will be, because you will see the call stack of the interpreter. You could also try strace or ltrace, as someone else mentioned.
I would also make sure that in whatever environment the script runs in, you have set ulimit -c unlimited, so that a core file is generated in case Python is outright crashing.
Another thing I might try is to have this job executed by a parent that does not wait for its child. This should cause the process table entry to stick around as a zombie even when the underlying job has exited.
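A sketch of that idea (launching the bot directly rather than via the init script, which daemonizes; the command line here is a guess):

import subprocess
import time

child = subprocess.Popen(["python", "phenny"])  # hypothetical command; deliberately never wait() on it
while True:
    time.sleep(3600)  # keep the parent alive; after the child dies, ps shows it as <defunct>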
If you're interested in really low-level process activity, you can run the Python interpreter under strace with standard error redirected to a file.
If you're only interested in inspecting the Python code when your bot crashes, and you have the location in the source where the crash happens, you can wrap that location with try/except and break into the debugger in the except clause:
import pdb; pdb.set_trace()
You'll probably need to run your bot in non-daemon mode for that to work, though.
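For example, a sketch (handle_message is hypothetical, standing in for whatever call actually crashes):

try:
    handle_message(line)  # hypothetical: the call where the bot dies
except Exception:
    import pdb; pdb.set_trace()  # drop into the debugger at the failure point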
You might want to try the Python psutil library; it is something that I have used, and it works.
A cheap way to get some extra clues about the problem would be to start phenny with
/etc/init.d/phenny start 2>/tmp/phenny.out 1>&2
When it crashes, check the tail of /tmp/phenny.out for the Python traceback.
If you only need to verify that the process is running, you could just run a script that checks the output of the command
ps ax | grep [p]henny
every few seconds. If it's empty, then obviously the process is dead.
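A minimal watchdog along those lines (the 5-second interval is arbitrary; the '|| true' keeps check_output from raising when grep finds nothing):

import subprocess
import time

while True:
    out = subprocess.check_output("ps ax | grep [p]henny || true", shell=True)
    if not out.strip():
        print("phenny is not running")  # restart or alert here
    time.sleep(5)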