Random "pythonw.exe has stopped working" crashing - python

SO,
The code in question is below, however the crash can happen at random with other scripts too (I don't think the error lies in the code).
For some reason, completely at random, it sometimes crashes and pops up "pythonw.exe has stopped working". It could be after 5 hours, 24 hours or 5 days... I can't figure out why it's crashing.
from datetime import date, timedelta
from sched import scheduler
from time import time, sleep, strftime
import random
import traceback

s = scheduler(time, sleep)
random.seed()

def periodically(runtime, intsmall, intlarge, function):
    currenttime = strftime('%H:%M:%S')

    with open('eod.txt') as o:
        eod = o.read().strip()
    if eod == "1":
        EOD_T = True
    else:
        EOD_T = False

    while currenttime >= '23:40:00' and currenttime <= '23:59:59' or currenttime >= '00:00:00' and currenttime <= '11:30:00' or EOD_T:
        if currenttime >= '23:50:00' and currenttime <= '23:59:59':
            EOD_T = False
        currenttime = strftime('%H:%M:%S')
        print currenttime, "Idling..."
        sleep(10)
        open("tca.txt", 'w').close
        open("tca.txt", 'w').close

    runtime += random.randrange(intsmall, intlarge)
    s.enter(runtime, 1, function, ())
    s.run()
def execute_subscripts():
    st = time()
    print "Running..."

    try:
        with open('main.csv'):
            CSVFile = True
    except IOError:
        CSVFile = False

    with open('eod.txt') as eod:
        eod = eod.read().strip()
    if eod == "1":
        EOD_T = True
    else:
        EOD_T = False

    if CSVFile and not EOD_T:
        errors = open('ERROR(S).txt', 'a')
        try:
            execfile("SUBSCRIPTS/test.py", {})
        except Exception:
            errors.write(traceback.format_exc() + '\n')
            errors.write("\n\n")
        errors.close()

    print """ %.3f seconds""" % (time() - st)

while True:
    periodically(15, -10, +50, execute_subscripts)
Does anyone know how I can find out why it's crashing, or what the cause is and a way to fix it?
Thanks
- Hyflex

I don't know, but it may be related to the two lines that do this:
open("tca.txt", 'w').close
Those aren't doing what you intend them to do: they leave the file open. You need to call the close method (not merely retrieve it):
open("tca.txt", 'w').close()
^^
But that's probably not it. CPython will automatically close the file object when it becomes garbage (which happens immediately in this case - refcount hits 0 as soon as the statement ends).
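If the goal is simply to truncate tca.txt, a with statement makes the close explicit on any interpreter; a minimal sketch (my suggestion, not from the original post):

with open("tca.txt", 'w'):
    pass  # opening in 'w' mode truncates the file; leaving the block closes it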
Maybe you should move to a Linux system ;-)
Idea: would it be possible to run this with python.exe instead, from a DOS box (cmd.exe) you leave open and ignore? A huge problem with debugging pythonw.exe deaths is that there's no console window to show any error messages that may pop up.
Which leads to another question: what's this line doing?
print "Running..."
If you're running under pythonw.exe, you never see it, right? And that can cause problems, depending on exactly which versions of Python and Windows you're running. Standard input and standard output don't really exist, under pythonw, and I remember tracking down one mysterious pythonw.exe death to the Microsoft libraries blowing up when "too much" data was written to sys.stdout (which print uses).
One way to tell: if you run this under python.exe instead from a DOS box, and it runs for a year without crashing, that was probably the cause ;-)
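If you do need to stay on pythonw.exe, one defensive workaround (my own suggestion, not something from this answer) is to point stdout and stderr at a real file early in the script, so print never touches the phantom console:

import sys

# Hypothetical guard: under pythonw.exe there is no usable console, so send
# print output to a log file instead of the non-existent stdout/stderr.
if sys.executable.lower().endswith("pythonw.exe"):
    log = open("script_output.log", "a", 1)  # 1 = line-buffered
    sys.stdout = log
    sys.stderr = log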
Example
Here's a trivial program:
i = 0
while 1:
    i += 1
    with open("count.txt", "w") as f:
        print >> f, i
    print "hi!"
Using Python 2.7.6 under 32-bit Windows Vista, it has radically different behavior depending on whether python.exe or pythonw.exe is used to run it.
Under python.exe:
C:\Python27>python yyy.py
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
...
That goes on forever, and the value in count.txt keeps increasing. But:
C:\Python27>pythonw yyy.py
C:\Python27>
That is, there's no visible output. And that's expected: pythonw runs its program disconnected from the console window.
After a very short time, pythonw.exe silently dies (use Task Manager to see this) - vanishes without a trace. At that point:
C:\Python27>type count.txt
1025
So MS's libraries are still crapping out when "too much" is written to stdout from a disconnected program. Take out the print "hi!", and it runs "forever".
Python 3
This is "fixed" in Python 3, via the dubious expedient of binding sys.stdout to None under its pythonw.exe. You can read the history of this mess here.

Related

Python multiprocessing process hangs at the end of it

I have a Python script driving me crazy.
Right now I have an infinite loop that checks a value and calls a function in a specific situation.
while True:
    time.sleep(1)
    peso = getPeso()
    if peso > 900:
        processo = multiprocessing.Process(target=processo_automatico)
        processo.start()
        processo.join()
        andare()  # this is where it should go after the process.
The function works most of the time, but nondeterministically it hangs at the end of the process (it literally finishes and then hangs there forever. How do I know this? I checked with logs).
So at the beginning I tried to terminate it with exit codes:
processo.join(timeout=10)
if processo.exitcode is None:
    errore_cicalino()  # this is just a warning for me
    processo.kill()
elif processo.exitcode != 0:
    errore_cicalino()
    processo.kill()
but this never worked. NEVER.
So I tried without the join(). Tried with is_alive().
time.sleep(10)
if processo.is_alive():
    processo.terminate()
    errore_cicalino()
and even like this, it never entered the if.
This is driving me crazy. I accept the fact that the process could fail, but after the timeout I should be able to terminate it and carry on with the script.
The script is running on a Raspberry Pi 4 2 GB.
Any idea?
Minimal example:
while True:
    time.sleep(10)
    processo = multiprocessing.Process(target=processo_automatico)
    processo.start()
    processo.join()
The code randomly hangs at the end of the started process and cannot be terminated in any way.
processo_automatico() is a function where the script gets a picture from a camera and uploads it to a DB via another module.
def processo_automatico():
    now = str(datetime.now())
    now = now.replace(":", "-")
    foto = CAMERA.read_camera()
    cv2.imwrite("/var/www/html/" + now + ".png", foto)
    DB.insert("/var/www/html/" + now + ".png", peso)
It doesn't raise exceptions, and I already tried adding a log line at the end of the function (it gets executed even when the code hangs).
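(For reference, the standard watchdog shape for what is being attempted above is roughly the following -- a sketch using the question's names, with arbitrary timeouts. If even this pattern never fires, the hang is happening outside the child process, which is exactly what the answer below found.)

import multiprocessing

processo = multiprocessing.Process(target=processo_automatico)
processo.start()
processo.join(timeout=30)        # arbitrary upper bound for one cycle
if processo.is_alive():          # child still running: it is the one that is stuck
    processo.terminate()         # polite stop (SIGTERM)
    processo.join(5)
    if processo.is_alive():
        processo.kill()          # force kill (SIGKILL, Python 3.7+)
    errore_cicalino()            # warning helper from the question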
Solved.
What was that?
Well, my focus was on the end of the process while the issue was right at the start.
See the function getPeso()? That one just reads values from the serial line.
After a few hours of reading those values, the Raspberry Pi just starts to see random garbage and you have to reboot to fix it.
I wasn't prepared for that. My function just became a recursive infinite function with no way to break out.
My tip: do not use infinite loops unless you really need to, or at least think about where they could get stuck and check for it.
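A defensive sketch of that tip (getPeso is the serial-reading function from the question; the retry count and plausibility range are assumptions for illustration):

import time

MAX_RETRIES = 5

def read_peso_safely():
    # Return a plausible reading from the scale, or give up after a few tries
    # instead of looping (or recursing) forever on garbage values.
    for _ in range(MAX_RETRIES):
        peso = getPeso()           # serial read from the question
        if 0 <= peso <= 2000:      # assumed plausible range for the sensor
            return peso
        time.sleep(1)              # brief pause before retrying
    raise RuntimeError("serial line keeps returning garbage; needs a reboot")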

print information in a corner when using python/ipython REPL -- and quitting it while thread is running

I am using a Python library I wrote to interact with a custom USB device. The library needs to send and receive data. I also need to interactively call methods. At the moment I use two shells, one receiving only and the other sending only. The latter is the (i)python REPL. It works, but it is clumsy, so I want to consolidate the two things into a single shell, which will have the advantage of access to the data structures from both sides in one context. That works fine. The problem is in the UI.
In fact, the receiving part needs to asynchronously print some information. So I wrote something like the following:
#!/usr/bin/python
import threading
import time
import blessings

TOTAL = 5

def print_time(thread_id, delay):
    count = 0
    t = blessings.Terminal()
    while count < TOTAL:
        time.sleep(delay)
        count += 1
        stuff = "Thread " + str(thread_id) + " " + str(time.ctime(time.time())) + " -- " + str(TOTAL - count) + " to go"
        with t.location(t.width - len(stuff) - 1, thread_id):
            print(stuff, end=None)
        print("", end="")  # just return the cursor

try:
    t1 = threading.Thread(target=print_time, args=(1, 2,))
    t1.start()
    print("Thread started")
except:
    print("Error: unable to start thread")
That is my __init__.py file for the module. It somewhat works, but it has two problems:
1. While the thread is running, you cannot exit the REPL with either CTRL-D or sys.exit() (that is the reason I am using TOTAL=5 above, so your life is easier if you try this code). This is a problem since my actual thread needs to be an infinite loop. I guess one solution could be to exit via a custom call which breaks out of that infinite loop, but is there anything better?
2. The cursor does not return correctly to its earlier position:
   - if I remove the end="" in the line with the comment # just return the cursor, it sort of works, but obviously prints an unwanted newline where the cursor was (which messes up other input and/or output which might be happening there, in addition to adding that unwanted newline);
   - if I leave the end="" it does not return the cursor, not even if I add something to print, e.g. print(".", end="") -- the dots . are printed at the right place, but the blinking cursor and the input end up at the top.
I know these are two unrelated problems and I could have asked two separate questions, but I need an answer to both, otherwise it's a moot point. Alternatively, I am open to other solutions. I thought of a separate GTK window, and that might work, but it's a subpar solution, since I really would like this to work in a CLI only (to keep it usable in an ssh-without-X-tunneling setup).
Using blessed instead of blessings does not have the problem with the cursor not returning to the previous position, even without anything outside of the with context.
Making the thread a daemon solves the other problem.
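For reference, the daemon change is a one-liner before the thread is started (same names as the question's code):

import threading

t1 = threading.Thread(target=print_time, args=(1, 2))
t1.daemon = True  # daemon threads don't keep the interpreter alive, so CTRL-D exits the REPL
t1.start()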

Stopping a program in if statement [duplicate]

How do I exit a script early, like the die() command in PHP?
import sys
sys.exit()
details from the sys module documentation:
sys.exit([arg])
Exit from Python. This is implemented by raising the
SystemExit exception, so cleanup actions specified by finally clauses
of try statements are honored, and it is possible to intercept the
exit attempt at an outer level.
The optional argument arg can be an integer giving the exit status
(defaulting to zero), or another type of object. If it is an integer,
zero is considered “successful termination” and any nonzero value is
considered “abnormal termination” by shells and the like. Most systems
require it to be in the range 0-127, and produce undefined results
otherwise. Some systems have a convention for assigning specific
meanings to specific exit codes, but these are generally
underdeveloped; Unix programs generally use 2 for command line syntax
errors and 1 for all other kind of errors. If another type of object
is passed, None is equivalent to passing zero, and any other object is
printed to stderr and results in an exit code of 1. In particular,
sys.exit("some error message") is a quick way to exit a program when
an error occurs.
Since exit() ultimately “only” raises an exception, it will only exit
the process when called from the main thread, and the exception is not
intercepted.
Note that this is the 'nice' way to exit. #glyphtwistedmatrix below points out that if you want a 'hard exit', you can use os._exit(*errorcode*), though it's likely OS-specific to some extent (it might not take an errorcode under Windows, for example), and it definitely is less friendly since it doesn't let the interpreter do any cleanup before the process dies. On the other hand, it does kill the entire process, including all running threads, while sys.exit() (as it says in the docs) only exits if called from the main thread, with no other threads running.
A simple way to terminate a Python script early is to use the built-in quit() function. There is no need to import any library, and it is efficient and simple.
Example:
# do stuff
if this == that:
    quit()
Another way is:
raise SystemExit
You can also use simply exit().
Keep in mind that sys.exit(), exit(), quit(), and os._exit(0) kill the Python interpreter. Therefore, if it appears in a script called from another script by execfile(), it stops execution of both scripts.
See "Stop execution of a script called with execfile" to avoid this.
While you should generally prefer sys.exit because it is more "friendly" to other code, all it actually does is raise an exception.
If you are sure that you need to exit a process immediately, and you might be inside of some exception handler which would catch SystemExit, there is another function - os._exit - which terminates immediately at the C level and does not perform any of the normal tear-down of the interpreter; for example, hooks registered with the "atexit" module are not executed.
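A quick way to see that difference (the atexit hook here is only for demonstration):

import atexit
import os
import sys

atexit.register(lambda: sys.stderr.write("cleanup ran\n"))

# sys.exit(0)  # would raise SystemExit and let the atexit hook run on the way out
os._exit(0)    # hard exit: the interpreter never prints "cleanup ran"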
I've just found out that when writing a multithreaded app, raise SystemExit and sys.exit() both kill only the running thread. On the other hand, os._exit() exits the whole process. This was discussed in "Why does sys.exit() not exit when called inside a thread in Python?".
The example below has two threads, Kenny and Cartman. Cartman is supposed to live forever, but Kenny is called recursively and should die after 3 seconds. (Recursive calling is not the best way, but I had other reasons.)
If we also want Cartman to die when Kenny dies, Kenny should go away with os._exit, otherwise, only Kenny will die and Cartman will live forever.
import threading
import time
import sys
import os

def kenny(num=0):
    if num > 3:
        # print("Kenny dies now...")
        # raise SystemExit  # Kenny will die, but Cartman will live forever
        # sys.exit(1)  # Same as above
        print("Kenny dies and also kills Cartman!")
        os._exit(1)
    while True:
        print("Kenny lives: {0}".format(num))
        time.sleep(1)
        num += 1
        kenny(num)

def cartman():
    i = 0
    while True:
        print("Cartman lives: {0}".format(i))
        i += 1
        time.sleep(1)

if __name__ == '__main__':
    daemon_kenny = threading.Thread(name='kenny', target=kenny)
    daemon_cartman = threading.Thread(name='cartman', target=cartman)
    daemon_kenny.setDaemon(True)
    daemon_cartman.setDaemon(True)
    daemon_kenny.start()
    daemon_cartman.start()
    daemon_kenny.join()
    daemon_cartman.join()
from sys import exit
exit()
As a parameter you can pass an exit code, which will be returned to the OS. The default is 0.
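For example (the status value 2 here is arbitrary; a calling shell can read it back from $?):

from sys import exit

# Exit with a non-zero status so the calling shell can detect the failure.
exit(2)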
I'm a total novice but surely this is cleaner and more controlled
def main():
    try:
        Answer = 1/0
        print Answer
    except:
        print 'Program terminated'
        return
    print 'You wont see this'

if __name__ == '__main__':
    main()
...
Program terminated
than
import sys

def main():
    try:
        Answer = 1/0
        print Answer
    except:
        print 'Program terminated'
        sys.exit()
    print 'You wont see this'

if __name__ == '__main__':
    main()
...
Program terminated Traceback (most recent call last): File "Z:\Directory\testdieprogram.py", line 12, in
main() File "Z:\Directory\testdieprogram.py", line 8, in main
sys.exit() SystemExit
Edit
The point being that the program ends smoothly and peacefully, rather than "I'VE STOPPED !!!!"
Problem
In my practice, there was even a case when it was necessary to kill an entire multiprocess application from one of its processes.
The following calls work well if your application uses only the main process. But none of them worked in my case, as the application had many other processes still alive.
quit()
exit(0)
os._exit(0)
sys.exit(0)
os.kill(os.getppid(), 9) - where os.getppid() is the PID of the parent process
The last one killed the main process and itself, but the rest of the processes were still alive.
Solution
I had to kill it with an external command and finally found the solution using pkill.
import os

# This can be called even from a worker process and will kill the
# whole application, including correlated processes as well
os.system(f"pkill -f {os.path.basename(__file__)}")
In Python 3.5, I tried to incorporate similar code without using modules (e.g. sys, Biopy) other than what's built in, to stop the script and print an error message to my users. Here's my example:
## My example:
if "ATG" in my_DNA:
    pass  ## <Do something & proceed...>
else:
    print("Start codon is missing! Check your DNA sequence!")
    exit()  ## as most folks said above
Later on, I found it is more succinct to just throw an error:
## My example revised:
if "ATG" in my_DNA:
    pass  ## <Do something & proceed...>
else:
    raise ValueError("Start codon is missing! Check your DNA sequence!")
My two cents.
Python 3.8.1, Windows 10, 64-bit.
sys.exit() does not work directly for me.
I have several nested loops.
First I declare a boolean variable, which I call immediateExit.
So, in the beginning of the program code I write:
immediateExit = False
Then, starting from the most inner (nested) loop exception, I write:
immediateExit = True
sys.exit('CSV file corrupted 0.')
Then I go to the immediate continuation of the outer loop, and before anything else is executed by the code, I write:
if immediateExit:
    sys.exit('CSV file corrupted 1.')
Depending on the complexity, sometimes the above statement needs to be repeated also in except sections, etc.
if immediateExit:
    sys.exit('CSV file corrupted 1.5.')
The custom message is for my personal debugging, and the numbers are for the same purpose - to see where the script really exits (e.g. 'CSV file corrupted 1.5.').
In my particular case I am processing a CSV file, which I do not want the software to touch if it detects it is corrupted. Therefore for me it is very important to exit the whole Python script immediately after detecting the possible corruption.
And following the gradual sys.exit-ing from all the loops I manage to do it.
Full code (some changes were needed because it is proprietary code for internal tasks):
immediateExit = False
start_date = '1994.01.01'
end_date = '1994.01.04'
resumedDate = end_date
end_date_in_working_days = False

while not end_date_in_working_days:
    try:
        end_day_position = working_days.index(end_date)
        end_date_in_working_days = True
    except ValueError:  # try statement from end_date in workdays check
        print(current_date_and_time())
        end_date = input('>> {} is not in the list of working days. Change the date (YYYY.MM.DD): '.format(end_date))
        print('New end date: ', end_date, '\n')
        continue

csv_filename = 'test.csv'
csv_headers = 'date,rate,brand\n'  # not real headers, this is just for example

try:
    with open(csv_filename, 'r') as file:
        print('***\nOld file {} found. Resuming the file by re-processing the last date lines.\nThey shall be deleted and re-processed.\n***\n'.format(csv_filename))
        last_line = file.readlines()[-1]
        start_date = last_line.split(',')[0]  # assigning the start date to be the last line's date
        resumedDate = start_date

        if last_line == csv_headers:
            pass
        elif start_date not in working_days:
            print('***\n\n{} file might be corrupted. Erase or edit the file to continue.\n***'.format(csv_filename))
            immediateExit = True
            sys.exit('CSV file corrupted 0.')
        else:
            start_date = last_line.split(',')[0]  # assigning the start date to be the last line's date
            print('\nLast date:', start_date)
            file.seek(0)  # setting the cursor at the beginning of the file
            lines = file.readlines()  # reading the file contents into a list
            count = 0  # nr. of lines with last date
            for line in lines:  # cycling through the lines of the file
                if line.split(',')[0] == start_date:  # cycle for counting the lines with last date in it
                    count = count + 1
            if immediateExit:
                sys.exit('CSV file corrupted 1.')
            for iter in range(count):  # removing the lines with last date
                lines.pop()
            print('\n{} lines removed from date: {} in {} file'.format(count, start_date, csv_filename))
            if immediateExit:
                sys.exit('CSV file corrupted 1.2.')
            with open(csv_filename, 'w') as file:
                print('\nFile', csv_filename, 'open for writing')
                file.writelines(lines)
            print('\nRemoving', count, 'lines from', csv_filename)
            fileExists = True
except:
    if immediateExit:
        sys.exit('CSV file corrupted 1.5.')
    with open(csv_filename, 'w') as file:
        file.write(csv_headers)
    fileExists = False

if immediateExit:
    sys.exit('CSV file corrupted 2.')
In Python 3.9, you can also use: raise SystemExit("Because I said so") - the message is printed to stderr and the process exits with status 1.
Just put quit() at the end of your code, and that should close the Python script.
Use exit() and quit() in plain .py files,
and sys.exit() in code that will be frozen into an exe (the site-provided exit/quit helpers may not be available there).

Print line by line when running a process in python?

I recently worked on using multithreading but ran into an issue where it seems that multiprocessing would be the better way to go. When I run a simple loop counter function as a process, why doesn't it iterate through the loop and print out the output? Instead the code waits for a set amount of time before producing the output. Is there a way this can be solved or am I stuck dealing with processes this way?
import multiprocessing, time

def loop_process(process_name):
    loopCnt = 0
    print "\nstarting {}".format(process_name)
    for loopCnt in range(15):
        print("value of loopCnt = {}".format(loopCnt))
        loopCnt += 1
        time.sleep(1)
    print('stopping {}'.format(process_name))

if __name__ == '__main__':
    L00P_process = multiprocessing.Process(target=loop_process, args=('L00P_process',))
    L00P_process.start()
    L00P_process.join()
    print('processes stopped')
    print "Exiting Main"
I'm not clear on what you're seeing. On a Windows box just now, running the program from an interactive console ("DOS box"), I saw value of loopCnt = ... once per second until the program ended. That's what I expected.
Which OS are you running under, how are you running the program, and what exactly are you seeing?
On most (all?) machines, standard output (stdout) is line-buffered if it's attached to an interactive terminal. Which means output is forced to display each time a line boundary is hit. For other kinds of output device, it may use other kinds of buffering, and that could account for a delay.
Something to try: first add
import sys
near the top, then add
sys.stdout.flush()
after your
print("value of loopCnt = {}".format(loopCnt))
That should tell us whether you are, or are not, experiencing a problem with output buffering.
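For completeness, here is the loop with the flush in place (a sketch in the question's Python 2 syntax; on Python 3 you could instead pass flush=True to print, or run the interpreter with -u):

import sys
import time

def loop_process(process_name):
    print "\nstarting {}".format(process_name)
    for loopCnt in range(15):
        print("value of loopCnt = {}".format(loopCnt))
        sys.stdout.flush()  # push the line out even if stdout is block-buffered
        time.sleep(1)
    print('stopping {}'.format(process_name))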

Drop into an Interpreter anytime in Python

I know how to drop into an interpreter with pdb and IPython, but this requires knowing beforehand exactly where I want to stop. However, I often run number-crunching scripts that take minutes to hours, and I would like to know exactly what their progress is. One solution is to simply put lots of logging statements everywhere, but then I either inundate myself with too much information or fail to log exactly what I want to know.
Is there a way to initialize a listener loop that under some key combination will drop me into the code wherever it currently is? Think CTRL+Z but leaving me in Python rather than Bash.
You can use the signal module to set up a handler that will launch the debugger when you hit control-C or control-Z or whatever... SIGINT, SIGTSTP.
For example, define a module instant_debug.py that overrides SIGQUIT,
import signal
import pdb

def handler(signum, frame):
    pdb.set_trace()

signal.signal(signal.SIGQUIT, handler)
Then make a script
import instant_debug
import time

for i in xrange(1000000):
    print i
    time.sleep(0.1)
At any point during execution, you can jump into the code by typing CTRL+\, examine the stack with u and d as in normal pdb, then continue with c as if nothing ever happened. Note that you will only jump in at the end of the next "atomic" operation -- that means no stopping in the middle of a giant C module.
You could do this
import code
import time

def main():
    i = 1000
    while True:
        print "Count Down %s" % i
        time.sleep(1)
        i -= 1

try:
    main()
except KeyboardInterrupt:
    pass  # Swallow ctrl-c
finally:
    code.interact("Dropped into interpreter", local=globals())
