How can I stop a program's execution with python?

I use a program (on Windows), whose name I won't disclose, that can be opened from the command line without going through any authentication. I'm trying to create some security to prevent others from accessing it this way.
I plan on replacing the built-in binary for this program with a batch file with a first line that points to my own authentication system (implemented in python, compiled to .exe with py2exe), and the second line to the command that opens the program.
My original plan was to have my script (auth.py) stop the batch file from executing the second line if authentication failed with something like this:
if not authenticated:
    print('Authentication failed')
    sys.exit(1)
else:
    print('Successfully authenticated!')
I had counted on sys.exit(1) to do this, and I didn't bother testing it out until I was done developing the script. Now I realize that sys.exit only exits the python process.
I either need a way to stop the batch process FROM the python script, or a way for the batch to detect the exit code from auth.py (1 if failed, 0 if passed) and execute the other program only if the exit code is 0 from there.
If anyone could give me any suggestions or help of any sort, I would really appreciate it.
Thanks!

Use subprocess to call the program on successful authentication. So your python script would launch the program, not a batch file.
if not authenticated:
    print('Authentication failed')
    sys.exit(1)
else:
    print('Successfully authenticated!')
    proc = subprocess.Popen([program])
Please note: if the user has permission to start the program from within a python or batch script, nothing is stopping them from accessing it directly. This will prevent no one from accessing the program, save perhaps the extremely un-technical.
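A slightly fuller sketch of that answer, assuming Python 3.5+; the program path is a placeholder, and subprocess.run is used instead of Popen so the script waits for the program to exit and can report its exit code:

```python
import subprocess


def launch_if_authenticated(authenticated, command):
    """Launch `command` only when authentication succeeded.

    Returns the program's exit code, or 1 if authentication failed.
    `command` is an argument list; any real path would go there,
    e.g. [r'C:\App\program.exe'] (placeholder).
    """
    if not authenticated:
        print('Authentication failed')
        return 1
    print('Successfully authenticated!')
    # subprocess.run waits for the program to finish and
    # reports its exit status.
    return subprocess.run(command).returncode
```

Because the Python script itself launches the program, there is no second batch line left to suppress.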

You could do something really complicated to try to find the parent PID of the Python process and kill that, or you could just check %ERRORLEVEL% in your batch file. Something like:
python auth.py
if %ERRORLEVEL% neq 0 exit /B 1

I found these two methods; I hope they might help:
http://metazin.wordpress.com/2008/08/09/how-to-kill-a-process-in-windows-using-python/
http://code.activestate.com/recipes/347462-terminating-a-subprocess-on-windows/

Related

Make python script write output in .txt file after force quit

I have a Python script running on a server through SSH with the following command:
nohup python3 python_script.py >> output.txt
It was running for a very long time and (probably) created useful output, so I want to force it to stop now. How do I make it write the output it has so far, into the output.txt file?
The file was automatically created when I started running the script, but the size is zero (nothing has been written in it so far).
As Robert said in his comment, check that the output you are expecting to go to the file is actually making it there and not to stderr. If the process has been running for a long time without any response or writes to your output file, I think there are 3 options:
It is generating output but it's not going where you are expecting (Robert's response)
It is generating output but it's buffered and for some reason has yet to be flushed
It hasn't generated any output
Option 3 is easy: wait longer. Options 1 & 2 are a little trickier. If you are expecting a significant amount of output from this process, you could check the memory consumption of the Python instance running your script and see if it's growing or very large. You could also use lsof to see if it has the file open and to get some idea of what it's doing with it.
If you find that your output appears to be going somewhere like /dev/null, take a look at this answer on redirecting output for an existing process.
In the unlikely event that you have a huge buffer that hasn't been flushed, you could try using ps to get the PID, then kill -STOP [PID] to pause the process, and see how far you can get with GDB.
Unless it would be extremely painful, I would probably just restart the whole thing, but add periodic flushing to the script, and maybe some extra status reporting so you can tell what is going on. It might help too if you could post your code, since there may be other options available in your situation depending on how the program is written.
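The "periodic flushing" idea above could be sketched like this; `log` is a hypothetical helper, and any file-like object works:

```python
def log(f, msg):
    # Write and flush immediately so the line survives a forced kill;
    # without flush(), output can sit in Python's buffer indefinitely.
    f.write(msg + '\n')
    f.flush()


with open('output.txt', 'a') as f:
    log(f, 'step 1 done')
    # ... long-running work would go here ...
    log(f, 'step 2 done')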

writing to file does not complete even when using python with statement

I am using a simple python with statement such as below to write to a log file.
with open(filename, 'a+') as f:
    do_stuff1()
    f.write('stuff1 complete. \n')
    do_stuff2()
    f.write('stuff2 complete. \n')
    do_stuff3()
    f.write('stuff3 complete. \n')
I am finding that my script fails intermittently at do_stuff2(); however, the log file does not contain the line "stuff1 complete." that I would expect if the file had been closed correctly, as should happen when using with. The only reason I know the script is failing at do_stuff2() is that this function calls an API that does its own logging, and that other log file tells me do_stuff2() started executing even though it did not complete.
My question is what sort of error would have to occur inside the with statement that would not only stop execution but also prevent the file from being closed correctly?
Some additional information:
The script is a scheduled task that runs late at night.
I have never been able to reproduce by running the process interactively.
The problem occurs once in every 2-3 nights.
I do see errors in the Windows event logs that point to a dll file, .NET Framework and a 0xC0000005 error which is a memory violation. The API used by do_stuff2() does use this DLL which in turn uses the .NET Framework.
Obviously I am going to try to fix the problem itself but at this point my question is focused on what could happen inside the with (potentially some number of layers below my code) that could break its intended functionality of closing the file properly regardless of whether the content of the with is executed successfully.
The with statement can only close the file while the Python interpreter is still in control. If there is a segfault inside an extension, no Python exception is raised and the process dies without giving Python a chance to close the file. You can try calling f.flush() at several places to force Python to write out to the file.
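A sketch of that suggestion applied to the question's code; the do_stuff functions are stand-ins, and os.fsync additionally asks the OS to put the data on disk rather than just in its own cache:

```python
import os


def do_stuff1():  # stand-in for the question's real work
    pass


def do_stuff2():  # stand-in for the crashing API call
    pass


with open('run.log', 'a') as f:
    do_stuff1()
    f.write('stuff1 complete. \n')
    f.flush()               # move Python's buffer to the OS
    os.fsync(f.fileno())    # ask the OS to write it to disk now
    do_stuff2()
    f.write('stuff2 complete. \n')
```

With the flush before do_stuff2(), the "stuff1 complete." line survives even a 0xC0000005 crash inside the DLL.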

Set a python process name

My python script needs to be killed every hour and then restarted. I need to do this because sometimes (I take screenshots) a browser window hangs due to a user login popup or something similar. Anyway, I created 2 files, 'reload.py' and 'screenshot.py', and I run reload.py via cronjob.
I thought something like this would work
# kill process if still running
try:
    os.system("killall -9 screenshotTaker")
except:
    print 'nothing to kill'
# reload or start process
os.execl("/path/to/script/screenshots.py", "screenshotTaker")
The problem is, and it is also what I read, that the second argument of execl (the given process name) doesn't work. How can I set a process name so the killall can do its work?
Thanks in advance!
The first argument to os.execl is the path to the executable. The remaining arguments are passed to that executable as if they were typed on the command line.
If you want "screenshotTaker" to become the name of the process, it is "screenshots.py"'s responsibility to set it. Do you do anything special to that end in that script?
BTW, a more common approach is to keep track (in /var/run/ usually) of the PID of the running program and kill it by PID. This can be done from Python using os.kill. At system level, some distributions have helpers for that exact purpose; for example, on Debian there is start-stop-daemon. Here is an excerpt of the man page:
start-stop-daemon(8) dpkg utilities start-stop-daemon(8)
NAME
start-stop-daemon - start and stop system daemon programs
SYNOPSIS
start-stop-daemon [options] command
DESCRIPTION
start-stop-daemon is used to control the creation and termination of
system-level processes. Using one of the matching options,
start-stop-daemon can be configured to find existing instances of a
running process.
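The PID-file approach mentioned above could be sketched like this; the pidfile path is hypothetical, and SIGTERM is a polite shutdown request rather than the kill -9 used in the question:

```python
import os
import signal


def stop_previous(pidfile):
    """Terminate the instance recorded in pidfile, if any."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)  # ask the old instance to exit
    except (OSError, ValueError):
        pass  # no pidfile, stale contents, or process already gone


def record_pid(pidfile):
    """Record the current process so the next run can find it."""
    with open(pidfile, 'w') as f:
        f.write(str(os.getpid()))

# In reload.py: stop_previous('/var/run/screenshotTaker.pid'), then start
# screenshots.py, which would call record_pid() on startup.
```

This sidesteps the process-name problem entirely, since the kill targets an exact PID instead of matching by name.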

Program does not exit. How to find out what python is doing?

I have a python script which is working fine so far. However, my program does not exit properly. I can step through it in the debugger up to the final return, but the program keeps running.
main.main() does a lot of stuff: it downloads (http, ftp, sftp, ...) some csv files from a data provider, converts the data into a standardized file format and loads everything into the database.
This works fine. However, the program does not exit. How can I find out where the program is "waiting"?
There is more than one provider, and the script terminates correctly for all providers except one (sftp download; I'm using paramiko).
if __name__ == "__main__":
    main.log = main.log2both
    filestoconvert = []
    #filestoconvert = glob.glob(r'C:\Data\Feed\ProviderName\download\*.csv')
    main.main(['ProviderName'], ['download', 'convert', 'load'], filestoconvert)
I'm happy for any thoughts and ideas!
If your program does not terminate, it most likely means you have a thread still working.
To list all the running threads you can use:
threading.enumerate()
This function lists all Thread objects that are currently alive (see documentation).
If this is not enough, you might need a bit of scripting along with that function (see documentation):
sys._current_frames()
So to print the stack trace of all alive threads you would do something like:
import sys, traceback, threading

thread_names = {t.ident: t.name for t in threading.enumerate()}
for thread_id, frame in sys._current_frames().items():
    print("Thread %s:" % thread_names.get(thread_id, thread_id))
    traceback.print_stack(frame)
    print()
Good luck!
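On Python 3.3+, the standard faulthandler module can dump every thread's stack without the custom loop above; a minimal sketch:

```python
import faulthandler
import sys

# Dump the stack trace of all threads to stderr in one call.
faulthandler.dump_traceback(file=sys.stderr, all_threads=True)

# Or register a signal handler so an already-hung process can be
# inspected from outside with `kill -USR1 <pid>` (Unix only):
# import signal
# faulthandler.register(signal.SIGUSR1)
```

Registering the signal handler at startup is handy for exactly this situation, where the hang only appears in production.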
You can involve the python debugger for a script.py with
python -m pdb script.py
You find the pdb commands at http://docs.python.org/library/pdb.html#debugger-commands
You'd better use GDB, which allows you to pinpoint hung processes, much like jstack in Java.
This question is 10 years old, but I post my solution for someone with a similar issue with a non-finishing Python script like mine.
In my case, the debugging process didn't help; all debugging outputs showed only one thread. But the suggestion by @JC Plessis that some work must still be going on helped me find the cause.
I was using Selenium with the Chrome driver, and I was finishing the Selenium process after closing the only open tab with
driver.close()
But later I changed the code to use a headless browser, and the Selenium driver wasn't closed after driver.close(); the python script was stuck indefinitely. It turns out that the right way to shut down the Selenium driver was actually:
driver.quit()
That solved the problem, and the script finally finished again.
You can use sys.settrace to pinpoint which function blocks. Then you can use pdb to step through it.
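A minimal sketch of the sys.settrace suggestion; `suspect` stands in for the code that hangs, and the last name recorded shows how far execution got before the program stalled:

```python
import sys

calls = []  # record of every function entered while tracing


def tracer(frame, event, arg):
    if event == 'call':
        calls.append(frame.f_code.co_name)
    return None  # only call events are needed, not line events


def suspect():  # stand-in for the code that hangs, e.g. main.main(...)
    return 42


sys.settrace(tracer)
suspect()
sys.settrace(None)

print(calls[-1])  # the last entry shows where execution got to
```

In the real script, the settrace call would wrap main.main(['ProviderName'], ...), and the tail of the recorded calls would point at the function blocking inside paramiko.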

process closing while saving a file - Python - Windows XP

I'm working on a project for school where e-mails will be pulled from an inbox and downloaded to different locations depending on how things are parsed. The language I'm writing in is Python, and the environment it will be run on is Windows XP. The idea is that the program will run in the background with no interaction from the user until they basically shutdown their computer. A concern I had is what this will mean if they shut it down while a file is in the process of being saved, and what I can do to handle it.
Will it just be a file.part thing? Will the shutdown throw the "Waiting to close X application" message and finish saving before terminating on its own?
use atexit module
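A minimal sketch of the atexit suggestion; note the handler runs on normal interpreter shutdown but not if the process is killed outright:

```python
import atexit


def cleanup():
    # Finish or discard any partially saved file here.  This runs when
    # the interpreter exits normally (including sys.exit), but NOT on
    # a hard kill such as TerminateProcess or a power loss.
    print('finishing partial saves')


atexit.register(cleanup)
```

Because a forced Windows shutdown may not give the process time to run handlers, atexit is best combined with the temp-file rename approach below it.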
You should really check this out: Link (How Windows Shuts Down)
An easy cross-platform/cross-language way of handling partial file saving:
save to a temporary filename like "file.ext.part"
after you're done saving, rename it to "file.ext"
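That recipe could be sketched in Python like this; os.replace (Python 3.3+) is atomic on the same filesystem, so readers never see a half-written file:

```python
import os


def atomic_save(path, data):
    tmp = path + '.part'      # write under a temporary name first
    with open(tmp, 'w') as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes reach the disk
    os.replace(tmp, path)     # atomically swap into place


atomic_save('message.txt', 'downloaded e-mail body')
```

If the machine shuts down mid-save, only the ".part" file is corrupt; the previous "message.txt", if any, is untouched.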
