There's a little app named logview that I'm writing a script to monitor, along with some other tasks. In the main while loop (which will exit when the app I'm most concerned about closes), I check to see if logview needs restarting. The code I have at present is roughly as follows:
# a good old global
logview = "/usr/bin/logview"

# a function that starts logview:
port = "100"  # Popen arguments must be strings
log_file = "/foo/bar"
logview_process = subprocess.Popen([logview, log_file, port],
                                   stdout=subprocess.DEVNULL,
                                   stderr=subprocess.STDOUT)
# a separate function that monitors in the background:
while True:
    time.sleep(1)
    logview_status = 0
    try:
        logview_status = psutil.Process(logview_process.pid).status()
    except psutil.NoSuchProcess:
        pass
    if (logview_status == psutil.STATUS_STOPPED or
            logview_status == psutil.STATUS_ZOMBIE or
            logview_status == psutil.STATUS_DEAD or
            logview_status == 0):
        print("Logview died; restarting")
        logview_cli_list = [logview]
        logview_cli_list.extend(logview_process.args)
        logview_process = subprocess.Popen(logview_cli_list,
                                           stdout=subprocess.DEVNULL,
                                           stderr=subprocess.STDOUT)
    if some_other_condition:
        break
However, if I test-kill logview, the condition triggers and I do see the printed message, but then I see it again, and again, and again. It seems that the condition triggers every single iteration of the loop if logview does die. And, it never does get restarted properly.
So clearly... I'm doing something wrong. =)
Any help (or better methods!) would be greatly appreciated.
I don't know your logview program but the problem is here:
logview_cli_list = [logview]
logview_cli_list.extend(logview_process.args)
When you're creating the argument list, you're putting logview in the command twice, because logview_process.args already contains the name of the launched command. The program probably fails immediately because of the bad arguments, and so it is restarted again and again...
The fix is then obvious:
logview_cli_list = logview_process.args
A better fix would be to create the process inside the loop whenever a given flag is set, and to set that flag once at the start. When the process dies, set the flag again to trigger the process creation. That would also have avoided this copy/almost-paste mistake.
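For illustration only, a minimal sketch of that flag-based approach might look like this (the needs_restart flag is a hypothetical name, and some_other_condition is kept as a placeholder from the question):

import subprocess
import time

import psutil

logview = "/usr/bin/logview"
log_file = "/foo/bar"
port = "100"

logview_process = None
needs_restart = True  # set the flag at the start so the loop creates the process

while True:
    time.sleep(1)
    if needs_restart:
        # the process is created (and re-created) in exactly one place
        logview_process = subprocess.Popen([logview, log_file, port],
                                           stdout=subprocess.DEVNULL,
                                           stderr=subprocess.STDOUT)
        needs_restart = False
    try:
        status = psutil.Process(logview_process.pid).status()
    except psutil.NoSuchProcess:
        status = None
    if status in (psutil.STATUS_STOPPED, psutil.STATUS_ZOMBIE,
                  psutil.STATUS_DEAD, None):
        print("Logview died; restarting")
        needs_restart = True
    if some_other_condition:  # placeholder from the original loop
        break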
Related
I have the following Python program running in a Docker container.
Basically, if the Python process exits gracefully (e.g. when I manually stop the container) or if the Python process crashes (while inside some_other_module.do_work()), then I need to do some cleanup and ping my DB to tell it that the process has exited.
What's the best way to accomplish this? I saw one answer where they did a try catch on main(), but that seems a bit odd.
My code:
def main():
    some_other_module.do_work()

if __name__ == '__main__':
    main()
I assume that the additional cleanup will be done by a different process, since the main process has likely crashed in a non-recoverable way (that is how I understood the question).
The simplest way would be that the main process sets a flag somewhere (maybe creates a file in a specified location, or a column value in a database table; could also include the PID of the main process that sets the flag) when it starts and removes (or un-sets) that same flag if it finishes gracefully.
The cleanup process just needs to check the flag:
if the flag is set but the main process has ended already (the flag could contain the PID of the main process, so the cleanup process uses that to find if the main process is still running or not), then a cleanup is in order.
if the flag is set and the main process is running, then nothing is to be done.
if the flag is not set, then nothing is to be done.
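A minimal sketch of that idea, assuming a hypothetical flag-file location and a do_cleanup() helper on the cleanup side (neither is from the question):

import os

FLAG_PATH = "/tmp/main_process.flag"  # hypothetical location for the flag

# --- in the main process ---
def run_main():
    with open(FLAG_PATH, "w") as f:
        f.write(str(os.getpid()))  # record our PID when we start
    some_other_module.do_work()
    os.remove(FLAG_PATH)  # only reached if do_work() returned normally

# --- in the cleanup process ---
def pid_is_running(pid):
    try:
        os.kill(pid, 0)  # signal 0 only checks that the process exists (POSIX)
        return True
    except OSError:
        return False

def check_and_cleanup():
    if not os.path.exists(FLAG_PATH):
        return  # flag not set: nothing to do
    with open(FLAG_PATH) as f:
        pid = int(f.read())
    if not pid_is_running(pid):
        do_cleanup()  # flag set but the process is gone: cleanup is in order
        os.remove(FLAG_PATH)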
A try/except around main() seems simplest, but it doesn't (or may not) work in many cases. You can always catch specific exceptions:
def main():
    some_other_module.do_work()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:  # <INSERT GRACEFUL INTERRUPT HERE>
        # finished gracefully
        pass
    except Exception as e:
        print(e)
        # crash
Use a try/except
def thing_that_crashes():
    exit()

try:
    thing_that_crashes()
except:
    print('oh and by the way, that thing tried to kill me')
I think it is impossible to catch a process with advanced suicidal behaviour (sending a SIGKILL to itself, say), so if you need your main process to survive whatever happens, maybe run the other code in a subprocess.
You could wrap your script in another script that runs it as a subprocess and checks the return code. Inspired by this relevant question.
from subprocess import Popen

script = Popen(["python", "abspath/to/your/script.py"])
script.communicate()
if script.returncode != 0:
    # something went wrong
    # do something about it
    pass
I want to start a function after a timeout in a while True loop, but the code doesn't execute anything and jumps out of the loop, and I don't know why :/
Here is my code:
import requests
from threading import Timer

def timeout(flag):
    print("New Request")
    statuscode = requests.get("http://adslkfhdsjf.de").status_code
    if statuscode == 200 and flag == 0:
        print("Service available")
        # for testing purposes
        print("Flag: ", flag)
        flag = 0
        # post result to backend
    elif statuscode == 200 and flag == 1:
        print("Service is available now")
        print("Flag: ", flag)
        flag = 0
        # email to user
        # post request
    elif statuscode != 200 and flag == 0:
        print("Service is not available")
        # for testing purposes
        print("Flag: ", flag)
        flag = 1
        # email to user
        # post request
    else:
        print("Service is not available")
        # for testing purposes
        print("Flag: ", flag)
        # post request
    Timer(10, timeout, flag)

timeout(0)
I want timeout to be executed every 10 seconds, for example, so that every 10 seconds one of the conditions in the function timeout() is evaluated.
But it's not working so far; there is no console output :/
Your first problem is just that you're not calling main(). And normally, I'd just add a comment to tell you that and close the question as a typo, but you don't want to fix that until you first fix your bigger problem.
Your code tries to create and call a new timeout function over and over, as fast as possible. And the first thing that timeout function does is to create a new Timer object. Which is a new thread.
So you're spawning new threads as fast as Python will let you, which means in a very short time you're going to have more threads than your OS can handle. If you're lucky, that will mean you get an exception and your program quits. If you're unlucky, that will mean your system slows to a crawl as the kernel starts swapping thread stacks out to disk, and, even after you manage to kill the program, it may still take minutes to recover.
And really, there's no reason for the while loop here. Each Timer schedules the next Timer, so it will keep running forever. And there's only ever 2 threads alive at a time that way.
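Just as a hedged sketch (not the asker's exact code), that self-rescheduling pattern, with the flag handed along as an argument, would look roughly like this:

from threading import Timer

def timeout(flag):
    # ... do the request and compute the new flag value here ...
    new_flag = flag  # placeholder for whatever the if/elif chain decides
    # schedule the next run; args must be a tuple, and start() must be called
    Timer(10, timeout, args=(new_flag,)).start()

timeout(0)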
But there's not even a reason for a Timer in the first place. You don't want to do anything while waiting 10 seconds between requests, so why not just sleep?
import time
import requests

def main():
    flag = 0
    while True:
        print("New Request")
        statuscode = requests.get("http://google.de").status_code
        if statuscode == 200 and flag == 0:
            print("Service available")
            # etc.
        time.sleep(10)

main()
Your code had another problem: you're defining a local variable named flag in timeout, but then you're trying to use it, in that flag == 0 check, before you ever assign to it. That would raise an UnboundLocalError. The fact that you happen to also have a local variable named flag in main doesn't make a difference. To fix this, you'd have to do one of these:
Pass flag in as an argument for Timer to pass to each timeout call as a parameter. (Probably best.)
Add a nonlocal flag declaration to timeout, so it becomes a closure cell shared by all of the timeout functions you define. (Not bad, but not the most idiomatic solution.)
Add a global flag declaration to both functions, so it becomes a global variable shared by everyone in the universe. (Probably fine for a program this simple, but at the very least not a good habit to get into.)
But, once we've gotten rid of the thread, we've also gotten rid of the function, so there's just the one local flag, so the problem doesn't come up in the first place.
I have a thread that monitors user input which looks like this:
def keyboard_monitor(temp):  # temp is sys.stdin
    global flag_exit
    while True:
        keyin = temp.readline().rstrip().lower()
        if keyin == "exit":
            flag_exit = True
            print("Stopping...")
        if flag_exit == True:
            break
If I type exit the flag is properly set and all the other threads terminate. If another one of my threads sets the flag, this thread refuses to finish because it's hanging on the user input. I have to input something to get it to finish. How do I change this code so that the program finishes when the flag is set externally?
It's hard to tell exactly what is going wrong without more of your code, but as an easy solution you could call exit(), which is a Python built-in, or sys.exit(). This should reliably terminate the application.
From wim's comment:
You can use the atexit module to register clean up handlers
import atexit

def cleanup():
    pass  # TODO cleanup

atexit.register(cleanup)
I have a program that includes a -p --pause flag, with value 1 for pause and 2 for resume. The purpose of this is to allow a user to pause the program manually (it's intended to run as a daemon). Functionally, it works (it pauses and resumes when need be; the kill function also works, though it is unrelated to this question now that I look at my code), but I can't get my messages to show up. Basically, I would like the original parent program to print to the console when it is being paused or resumed from another place. I think this can be done with signals, but I'm not really sure how, especially in Python (I'm just learning it).
Thanks!
if __Kill:
    os.system("kill %s" % old_pid)
    sys.exit()

if int(__Pause) == 1:
    pause_proc(old_pid)
elif int(__Pause) == 2:
    resume_proc(old_pid)

def pause_proc(pid):
    print "Pausing Procedure"
    os.kill(pid, signal.SIGSTOP)

def resume_proc(pid):
    os.kill(pid, signal.SIGCONT)
    print "Resuming Procedure"
I've tried posting this in the reverse-engineering stack-exchange, but I thought I'd cross-post it here for more visibility.
I'm having trouble switching from debugging one thread to another in pydbg. I don't have much experience with multithreading, so I'm hoping that I'm just missing something obvious.
Basically, I want to suspend all threads, then start single stepping in one thread. In my case, there are two threads.
First, I suspend all threads. Then, I set a breakpoint on the location where EIP will be when thread 2 is resumed. (This location is confirmed by using IDA). Then, I enable single-stepping as I would in any other context, and resume Thread 2.
However, pydbg doesn't seem to catch the breakpoint exception! Thread 2 seems to resume, and even though it MUST hit that address, there is no indication that pydbg is catching the breakpoint exception. I added a print "HIT BREAKPOINT" inside pydbg's internal breakpoint handler, and that never seems to be called after resuming Thread 2.
I'm not too sure about where to go next, so any suggestions are appreciated!
dbg.suspend_all_threads()
print dbg.enumerate_threads()[0]
oldcontext = dbg.get_thread_context(thread_id=dbg.enumerate_threads()[0])
if (dbg.disasm(oldcontext.Eip) == "ret"):
    print disasm_at(dbg, oldcontext.Eip)
    print "Thread EIP at a ret"
    addrstr = int("0x" + (dbg.read(oldcontext.Esp + 4, 4))[::-1].encode("hex"), 16)
    print hex(addrstr)
    dbg.bp_set(0x7C90D21A, handler=Thread_Start_bp_Handler)
    print dbg.read(0x7C90D21A, 1).encode("hex")
    dbg.bp_set(oldcontext.Eip + dbg.instruction.length, handler=Thread_Start_bp_Handler)
    dbg.set_thread_context(oldcontext, thread_id=dbg.enumerate_threads()[0])
    dbg.context = oldcontext
    dbg.resume_thread(dbg.enumerate_threads()[0])
    dbg.single_step(enable=True)
return DBG_CONTINUE
Sorry about the "magic numbers", but they are correct as far as I can tell.
One of your problems is that you try to single step through Thread2 and you only refer to Thread1 in your code:
dbg.enumerate_threads()[0] # <--- Return handle to the first thread.
In addition, the code that you posted does not reflect the complete structure of your script, which makes it hard to judge whether you have other errors or not. You also try to set a breakpoint within the sub-branch that disassembles your instructions, which does not make a lot of sense to me logically. Let me try to explain what I know, and lay it out in an organized manner. That way you might look back at your code, re-think it and correct it.
Let's start with basic framework of debugging an application with pydbg:
Create debugger instance
Attach to the process
Set breakpoints
Run it
Breakpoint gets hit - handle it.
This is how it could look:
from pydbg import *
from pydbg.defines import *
# This is maximum number of instructions we will log
MAX_INSTRUCTIONS = 20
# Address of the breakpoint
func_address = 0x7C90D21A
# Create debugger instance
dbg = pydbg()
# PID to attach to
pid = int(raw_input("Enter PID: "))
# Attach to the process with debugger instance created earlier.
# Attaching the debugger will pause the process.
dbg.attach(pid)
# Let's set the breakpoint and handler as thread_step_setter,
# which we will define a little later...
dbg.bp_set(func_address, handler=thread_step_setter)
# Let's set our "personalized" handler for Single Step Exception
# It will get triggered if execution of a thread goes into single step mode.
dbg.set_callback(EXCEPTION_SINGLE_STEP, single_step_handler)
# Setup is done. Let's run it...
dbg.run()
Now having the basic structure, let's define our personalized handlers for the breakpoint and for single stepping. The code snippet below defines our "custom" handlers. What will happen is that when the breakpoint hits, we will iterate through the threads and set them to single-step mode. That will in turn trigger the single-step exception, which we will handle by disassembling up to MAX_INSTRUCTIONS instructions:
def thread_step_setter(dbg):
    dbg.suspend_all_threads()
    for thread_id in dbg.enumerate_threads():
        print "Single step for thread: 0x%08x" % thread_id
        h_thread = dbg.open_thread(thread_id)
        dbg.single_step(True, h_thread)
        dbg.close_handle(h_thread)
    # Resume execution, which will pass control to the step handler
    dbg.resume_all_threads()
    return DBG_CONTINUE

def single_step_handler(dbg):
    # total_instructions is assumed to be initialized to 0 at module level
    global total_instructions
    if total_instructions == MAX_INSTRUCTIONS:
        dbg.single_step(False)
        return DBG_CONTINUE
    else:
        # Disassemble the instruction
        current_instruction = dbg.disasm(dbg.context.Eip)
        print "#%d\t0x%08x : %s" % (total_instructions, dbg.context.Eip, current_instruction)
        total_instructions += 1
        dbg.single_step(True)
        return DBG_CONTINUE
Disclaimer: I do not guarantee that the code above will work if copied and pasted. I typed it out and haven't tested it. However, once the basic understanding is acquired, any small syntactical errors can be fixed easily. I apologize in advance if there are any; I don't currently have the means or time to test it.
I really hope it helps you out.