Splitting a string causing GUI to crash? - python

I have the following code, which for some reason is causing the Python GUI to crash at the print line:
urlstr = str(URLS)
a = urlstr.split('=', 2)
print('a=', a)
URLS is a URL generated from a dictionary, which is in turn built from the values of a series of text files. I am making a string from URLS and then trying to split it at the second equals sign.
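(For reference, str.split with a maxsplit of 2 cuts at the first two separators and yields at most three pieces; an illustrative example, not the actual URLS value:)
url = "https://example.com/query?a=1&b=2&c=3"   # illustrative stand-in for URLS
parts = str(url).split('=', 2)
print(parts)   # ['https://example.com/query?a', '1&b', '2&c=3']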
If I remove the print statement the code runs fine, but the conditional logic that a is passed to does not produce a value.
Can anyone see what the issue is? I am really confused.
Thanks

I think you are using the IDLE Python shell together with threads; correct me if I am wrong.
Sometimes this combination causes Tkinter/IDLE to crash.
Here is some code I wrote in 2008 to prevent the Python Shell from crashing when using threads.
## this must be copied into idlelib.PyShell, as the last lines of the class
############################# additions by nicco kunzmann #########################
__nk___init__ = __init__
def __init__(self, *args, **kw):
    self.__nk___init__(*args, **kw)
    self.__nk_writelist = []
    self.__nk_act_act_write()
__nk_write = write
def write(self, *args):
    self.__nk_writelist.append(args)
def __nk_act_act_write(self):
    try:
        while self.__nk_writelist:
            args = self.__nk_writelist.pop(0)
            try:
                self.__nk_write(*args)
            except:
                traceback.print_exception(*sys.exc_info())
                print args
    finally:
        self.text.after(1, self.__nk_act_act_write)
############################# additions by n.k. - the end #########################
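The same buffering idea can also be used directly in your own code: have worker threads hand their text to a queue and let the Tk main loop drain it, rather than touching the widget (or IDLE's shell) from another thread. A minimal sketch with illustrative names, not part of the hotfix above:
import queue
import threading
import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()
messages = queue.Queue()

def worker():
    # Worker threads only put strings on the queue; they never touch Tk.
    for i in range(5):
        messages.put("line %d\n" % i)

def drain():
    # Runs in the Tk main loop, so it is safe to update the widget here.
    while not messages.empty():
        text.insert("end", messages.get_nowait())
    root.after(100, drain)

threading.Thread(target=worker).start()
drain()
root.mainloop()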

Related

Queue empty for some reason, even with wrapper function to put arguments in

I have a project where a device collects information from a scale once a mixing process is done, and I'm having some trouble with a threading process. This code runs the function while a window is open, telling the person using the device what is happening:
def recipe_info_gather():
    fourth_win.destroy()
    time = int(2.5 * 1000) + 500 * process.get_number_of_components()
    print(time)
    global recipe_window
    recipe_window = Tk()
    recipe_window.title("Getting Recipe Info")
    rec_inter = window(recipe_window)
    rec_inter.my_canvas.create_text(rec_inter.width / 2, rec_inter.height / 2,
                                    text="Please wait as the pi gets the recipe information from the scale",
                                    font=("Arial", 20), fill='white')
    press_OK(ser)
    t = threading.Thread(target=get_recipe_info, args=(ser,))
    t.start()
    recipe_window.after(time, finish_or_continue)
    recipe_window.mainloop()
and this is the function that the thread is calling
def store_in_queue(func):
    # this is solely for the get_recipe_info function, for running while the window is open
    def wrapper(*args):
        my_queue.put(func(*args))
    return wrapper

#store_in_queue
def get_recipe_info(ser):
    from get_name import process
    r = {}
    r['name'] = (get_stable_weight(ser)).decode("utf-8")
    time.sleep(0.5)
    r['total'] = (get_stable_weight(ser)).decode("utf-8")[7:14]
    time.sleep(0.5)
    r['fact'] = (get_stable_weight(ser)).decode("utf-8")
    time.sleep(0.5)
    r['recalcs'] = (get_stable_weight(ser)).decode("utf-8")
    time.sleep(0.5)
    r['ignores'] = (get_stable_weight(ser)).decode("utf-8")
    time.sleep(0.5)
    r['status'] = (get_stable_weight(ser)).decode("utf-8")
    time.sleep(0.5)
    # for i in range(1, process.get_number_of_components() + 1):
    #     r['wgt{}'.format(str(i))] = (get_stable_weight(ser))[6:12].decode("utf-8")
    #     if i != process.get_number_of_components():
    #         time.sleep(0.5)
    r['wgt1'] = (get_stable_weight(ser))[6:12].decode("utf-8")
    time.sleep(0.5)
    r['wgt2'] = (get_stable_weight(ser))[6:12].decode("utf-8")
    time.sleep(0.5)
    r['wgt3'] = (get_stable_weight(ser))[6:12].decode("utf-8")
    for i in range(1, 4):
        print(r['wgt{}'.format(str(i))])
    return r
I also have a line my_queue = queue.Queue() to store the recipe information in, and I have another function that's supposed to extract the resulting dictionary from this variable after the recipe_info_gather() function is done executing, with this line: recipe = my_queue.get_nowait(). I had this working before, but now, all of a sudden, when I execute this line I get an error saying the queue is empty, even though I have the wrapper function to put arguments in it. It also looks like the get_recipe_info() function actually executes after the get_nowait() line for some reason, so it seems the thread isn't running concurrently while that window is open.
I was wondering if anyone had intuition as to why this is happening and how to fix it?
Edit: I got a little bit further and found out that wrapper is being returned as None, but even then, I'm not sure why that's happening, so some input would be good.
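For context, a minimal self-contained version of the put-result-on-a-queue pattern being described (the names and values here are illustrative, not the poster's actual code):
import queue
import threading

my_queue = queue.Queue()

def store_in_queue(func):
    # Decorator: run func and put its return value on the queue.
    def wrapper(*args):
        my_queue.put(func(*args))
    return wrapper

@store_in_queue
def worker(n):
    # Stand-in for the real data-gathering function.
    return {'value': n * 2}

t = threading.Thread(target=worker, args=(21,))
t.start()
t.join()                    # wait until the worker has put its result
result = my_queue.get_nowait()
print(result)               # {'value': 42}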

Python put previous written line into variable

I am using a module that, when it hits an error, just prints that error and continues the script. I would like to put that error into a variable, but because of this behaviour I can't just do except Exception as e, so I'm looking for a way to put the previously printed line into a variable.
Note: I tried looking in the module for where it prints this, but couldn't find it.
Well, it ain't pretty, but you could (hoping the write does not do anything funny) try to hijack sys.stdout.write() (or stderr, depending on where your script writes to).
This could be one way to do so, as ugly as it may be:
import sys

class Wrap:
    def __init__(self):
        self.last_line = None
        self._next_line = ''

    def write(self, text, *args, **kwargs):
        sys.__stdout__.write(text, *args, **kwargs)
        self._next_line += text
        try:
            self.last_line = self._next_line.split('\n')[-2]
            self.next_line = self._next_line.split('\n')[-1]
        except IndexError:
            # We did not have \n yet and _next_line did not split
            # into at least two items
            pass

save_stdout = sys.stdout
sys.stdout = Wrap()
print('xxx\nzzz')  # This was that function you wanted to call
last_line = sys.stdout.last_line
sys.stdout = save_stdout
print(last_line)
This will give you zzz as output, i.e. the last line printed (without the trailing newline) while sys.stdout was our wrapper.
You can of course just write a function wrapper and use that to formalize the hack a bit.
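For instance, one way to formalize it, as a sketch that reuses the Wrap class from above, is a small context manager:
import sys
from contextlib import contextmanager

@contextmanager
def capture_last_line():
    # Temporarily replace sys.stdout with the Wrap object,
    # restore it afterwards, and expose the wrapper to the caller.
    wrapper = Wrap()
    saved = sys.stdout
    sys.stdout = wrapper
    try:
        yield wrapper
    finally:
        sys.stdout = saved

# Usage:
with capture_last_line() as out:
    print('xxx\nzzz')      # the call that prints the error
last_line = out.last_line  # 'zzz'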

Can I insert deletable characters in python input buffer?

I want to automatically indent the next line of a console app, but the user needs to be able to remove it. sys.stdout.write and print produce undeletable characters, and I can't write to sys.stdin (as far as I know). I'm essentially trying to get smart indenting, but I can only nest deeper and deeper. Any ideas on how to climb back out?
Edit: I should have noted that this is part of a Windows program that uses IronPython. While I could do something much fancier (and might in the future), I am hoping to quickly get a reasonably pleasant experience with very little effort as a starting point.
The cmd module provides a very simple interface for creating a command line interface to your program. It might not be able to put buffer characters in front of the next line, but if you're looking for an obvious way to let your users know that the command has returned, it can provide a shell-like prompt at the beginning of each line. If you already have functions defined for your program, integrating them into a command processor is a matter of writing a handler that accesses the function:
import cmd
import math

def findHypot(length, height):
    return math.sqrt(length ** 2 + height ** 2)

class MathProcessor(cmd.Cmd):
    prompt = "Math>"

    def do_hypot(self, line):
        x = raw_input("Length:")
        y = raw_input("Height:")
        if x and y:
            try:
                hypot = findHypot(float(x), float(y))
                print "Hypot:: %.2f" % hypot
            except ValueError:
                print "Length and Height must be numbers"

    def do_EOF(self, line):
        return True

    def do_exit(self, line):
        return True

    def do_quit(self, line):
        return True

if __name__ == "__main__":
    cmdProcessor = MathProcessor()
    cmdProcessor.cmdloop()
Things to consider when writing an interactive shell using cmd
The name after do_ is the command your users will type, so in this example the available commands are hypot, exit, quit, and help.
Without overriding do_help, calling help will give you a list of the available commands.
Any command that should quit the program must return True.
If you want to process arguments given with the command itself, say you wanted to handle a call like "hypot 3 4", you can use the line argument that is passed to the handler, as sketched below.
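For instance, a sketch of that variant reusing the MathProcessor example above (only do_hypot changes; it falls back to prompting when no arguments are given):
# inside MathProcessor:
def do_hypot(self, line):
    # line holds everything typed after the command name,
    # e.g. "3 4" for the input "hypot 3 4"
    parts = line.split()
    if len(parts) == 2:
        x, y = parts
    else:
        x = raw_input("Length:")
        y = raw_input("Height:")
    try:
        print("Hypot:: %.2f" % findHypot(float(x), float(y)))
    except ValueError:
        print("Length and Height must be numbers")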

Python clean way to wrap individual statements in a try except block

I'm currently doing some Python automation of Excel with com. It's fully functional, and does what I want, but I've discovered something surprising. Sometimes, some of the Excel commands I use will fail with an exception for no apparent reason. Other times, they will work.
In the VB equivalent code for what I'm doing, this problem is apparently considered normal, and is plastered over with a On Error Resume Next statement. Python does not have said statement, of course.
I can't wrap the whole set in a single try/except block, because it could "fail" halfway through and not complete properly. So, what would be a pythonic way to wrap several independent statements in a try/except block? Specifically, something cleaner than:
try:
    statement
except:
    pass

try:
    statement
except:
    pass
The relevant code is the excel.Selection.Borders bit.
def addGridlines(self, infile, outfile):
    """convert csv to excel, and add gridlines"""
    # set constants for excel
    xlDiagonalDown = 5
    xlDiagonalUp = 6
    xlNone = -4142
    xlContinuous = 1
    xlThin = 2
    xlAutomatic = -4105
    xlEdgeLeft = 7
    xlEdgeTop = 8
    xlEdgeBottom = 9
    xlEdgeRight = 10
    xlInsideVertical = 11
    xlInsideHorizontal = 12
    # open file
    excel = win32com.client.Dispatch('Excel.Application')
    workbook = excel.Workbooks.Open(infile)
    worksheet = workbook.Worksheets(1)
    # select all cells
    worksheet.Range("A1").CurrentRegion.Select()
    # add gridlines; sometimes some of these fail, so we have to wrap each in a try/except block
    excel.Selection.Borders(xlDiagonalDown).LineStyle = xlNone
    excel.Selection.Borders(xlDiagonalUp).LineStyle = xlNone
    excel.Selection.Borders(xlDiagonalUp).LineStyle = xlNone
    excel.Selection.Borders(xlEdgeLeft).LineStyle = xlContinuous
    excel.Selection.Borders(xlEdgeLeft).Weight = xlThin
    excel.Selection.Borders(xlEdgeLeft).ColorIndex = xlAutomatic
    excel.Selection.Borders(xlEdgeTop).LineStyle = xlContinuous
    excel.Selection.Borders(xlEdgeTop).Weight = xlThin
    excel.Selection.Borders(xlEdgeTop).ColorIndex = xlAutomatic
    excel.Selection.Borders(xlEdgeBottom).LineStyle = xlContinuous
    excel.Selection.Borders(xlEdgeBottom).Weight = xlThin
    excel.Selection.Borders(xlEdgeBottom).ColorIndex = xlAutomatic
    excel.Selection.Borders(xlEdgeRight).LineStyle = xlContinuous
    excel.Selection.Borders(xlEdgeRight).Weight = xlThin
    excel.Selection.Borders(xlEdgeRight).ColorIndex = xlAutomatic
    excel.Selection.Borders(xlInsideVertical).LineStyle = xlContinuous
    excel.Selection.Borders(xlInsideVertical).Weight = xlThin
    excel.Selection.Borders(xlInsideVertical).ColorIndex = xlAutomatic
    excel.Selection.Borders(xlInsideHorizontal).LineStyle = xlContinuous
    excel.Selection.Borders(xlInsideHorizontal).Weight = xlThin
    excel.Selection.Borders(xlInsideHorizontal).ColorIndex = xlAutomatic
    # refit data into columns
    excel.Cells.Select()
    excel.Cells.EntireColumn.AutoFit()
    # save new file in excel format
    workbook.SaveAs(outfile, FileFormat=1)
    workbook.Close(False)
    excel.Quit()
    del excel
Update:
Perhaps a bit of explanation on the error bit is required. Two identical runs on my test machine, with identical code, on the same file, produce the same result. One run throws exceptions for every xlInsideVertical line. The other throws exceptions for every xlInsideHorizontal. Finally, a third run completes with no exceptions at all.
As far as I can tell Excel considers this normal behavior, because I'm cloning the VB code built by excel's macro generator, not VB code produced by a person. This might be an erroneous assumption, of course.
It will work with each line wrapped in its own try/except block; I just wanted something shorter and more obvious, because 20 lines each wrapped in their own try/except block is just asking for trouble later.
Update2:
This is a scrubbed CSV file for testing: gist file
Conclusion:
The answer provided by Vsekhar is perfect. It abstracts away the exception suppression, so that later, if and when I have time, I can actually deal with the exceptions as they occur. It also allows logging the exceptions so they don't disappear, does not stop other exceptions, and is small enough to be easily manageable six months from now.
Consider abstracting away the suppression. And to Aaron's point, do not swallow exceptions generally.
class Suppressor:
    def __init__(self, exception_type):
        self._exception_type = exception_type

    def __call__(self, expression):
        try:
            exec expression
        except self._exception_type as e:
            print 'Suppressor: suppressed exception %s with content \'%s\'' % (type(self._exception_type), e)
            # or log.msg('...')
Then, note in the traceback of your current code exactly what exception is raised, and create a Suppressor for just that exception:
s = Suppressor(excel.WhateverError) # TODO: put your exception type here
s('excel.Selection.Borders(xlDiagonalDown).LineStyle = xlNone')
This way you get line-by-line execution (so your tracebacks will still be helpful), and you are suppressing only the exceptions you explicitly intended. Other exceptions propagate as usual.
Exceptions never happen "for no apparent reason". There is always a reason and that reason needs to be fixed. Otherwise, your program will start to produce "random" data where "random" is at the mercy of the bug that you're hiding.
But of course, you need a solution for your problem. Here is my suggestion:
Create a wrapper class that implements all the methods that you need and delegates them to the real Excel instance.
Add a decorator to each method which wraps the method in a try/except block and logs the exception. Never swallow exceptions.
Now the code works for your customer, which buys you some time to find out the cause of the problem. My guess is that a) Excel doesn't produce a useful error message, or b) the wrapper code swallows the real exception, leaving you in the dark, or c) the Excel method returns an error code (like "false" for "failed") and you need to call another Excel method to determine the cause of the problem.
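A minimal sketch of that wrapper-plus-decorator idea (the class, method names and logging calls here are illustrative, not an existing API):
import functools
import logging

def log_exceptions(method):
    # Decorator: run the wrapped method, log any exception, and carry on.
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except Exception:
            logging.exception("Excel call %s failed", method.__name__)
    return wrapper

class ExcelWrapper(object):
    """Delegates to the real Excel COM object, logging failures instead of raising."""
    def __init__(self, excel):
        self._excel = excel

    @log_exceptions
    def set_border(self, border_index, line_style):
        self._excel.Selection.Borders(border_index).LineStyle = line_style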
[EDIT] Based on the comment below, which boils down to "My boss doesn't care and there is nothing I can do": You're missing a crucial point: it's your boss's duty to make the decision, but it is your duty to give her a list of options along with pros/cons so that she can make a sound decision. Just sitting there saying "I can't do anything" will get you into the trouble that you're trying to avoid.
Example:
Solution 1: Ignore the errors
Pro: Least amount of work
Con: There is a chance that the resulting data is wrong or random. If important business decisions are based on it, there is a high risk that those decisions will be wrong.
Solution 2: Log the errors
Pro: Little amount of work, users can start to use the results quickly, buys time to figure out the source of the problem
Con: "If you can't fix it today, what makes you think you will have time to fix it tomorrow?" Also, it might take you a long time to find the source of the problem because you're no expert
Solution 3: Ask an expert
Find an expert in the field and have him/her look at and improve the solution.
Pro: Will get a solution much more quickly than learning the ins and outs of COM yourself
Con: Expensive but high chance of success. Will also find problems that we don't even know about.
...
I think you see the pattern. Bosses make wrong decisions because we (willingly) let them. Any boss in the world is happy to get hard facts and input when they have to make a decision (well, those who don't shouldn't be bosses, so this is a surefire way to know when to start looking for a new job).
If you select solution #2, go for the wrapper approach. See the docs on how to write a decorator (example from IBM). It's just a few minutes of work to wrap all the methods and it will give you something to work with.
The next step is to create a smaller example which sometimes fails and then post specific questions about Python, Excel and the COM wrapper here to figure out the reason for the problems.
[EDIT2] Here is some code that wraps the "dangerous" parts in a helper class and makes updating the styles simpler:
class BorderHelper(object):
    def __init__(self, excel):
        self.excel = excel

    def set(self, type, LineStyle=None, Weight=None, Color=None):
        border = self.excel.Selection.Borders(type)
        try:
            if LineStyle is not None:
                border.LineStyle = LineStyle
        except:
            pass  # Ignore if a style can't be set
        try:
            if Weight is not None:
                border.Weight = Weight
        except:
            pass  # Ignore if a style can't be set
        try:
            if Color is not None:
                border.Color = Color
        except:
            pass  # Ignore if a style can't be set
Usage:
borders = BorderHelper( excel )
borders.set( xlDiagonalDown, LineStyle = xlNone )
borders.set( xlDiagonalUp, LineStyle = xlNone )
borders.set( xlEdgeLeft, LineStyle = xlContinuous, Weight = xlThin, Color = xlAutomatic )
...
This just wraps function calls, but you can extend it to handle attribute access as well, proxying the results of nested attribute accesses and finally wrapping __setattr__ in your try/except block.
It might be sensible to swallow only some specific exception types in your case (as #vsekhar says).
def onErrorResumeNext(wrapped):
    class Proxy(object):
        def __init__(self, fn):
            self.__fn = fn

        def __call__(self, *args, **kwargs):
            try:
                return self.__fn(*args, **kwargs)
            except:
                print "swallowed exception"

    class VBWrapper(object):
        def __init__(self, wrapped):
            self.wrapped = wrapped

        def __getattr__(self, name):
            return Proxy(eval('self.wrapped.' + name))

    return VBWrapper(wrapped)
Example:
exceptionProofBorders = onErrorResumeNext(excel.Selection.Borders)
exceptionProofBorders(xlDiagonalDown).LineStyle = xlNone
exceptionProofBorders(xlDiagonalUp).LineStyle = xlNone
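As a sketch of the attribute-access extension mentioned above (illustrative only, not part of the answer's code), the object returned by the proxied call can itself be wrapped so that attribute assignment is guarded too:
class SafeSetattr(object):
    """Wraps an object so that failed attribute assignments are swallowed too."""
    def __init__(self, obj):
        object.__setattr__(self, '_obj', obj)

    def __getattr__(self, name):
        return getattr(self._obj, name)

    def __setattr__(self, name, value):
        try:
            setattr(self._obj, name, value)
        except Exception as e:
            print("swallowed exception while setting %s: %s" % (name, e))

# Hypothetical usage: guard both the call and the assignment.
# SafeSetattr(excel.Selection.Borders(xlDiagonalDown)).LineStyle = xlNone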
You can zip the arguments from three lists and do the following:
for border, attr, value in myArgs:
    i = 0
    while True:
        try:
            setattr(excel.Selection.Borders(border), attr, value)
        except:
            i += 1
            if i > 100:
                break
        else:
            break
If your exceptions are truly random, this will retry until success (with a limit of 100 tries). I don't recommend this.

How to programmatically exit pdb started in eval() or exec() without showing output

In my Python code I have these lines:
try:
    result = eval(command, self.globals, self.locals)
except SyntaxError:
    exec(command, self.globals, self.locals)
The command variable can be any string. Hence the Python debugger pdb may be started in eval/exec and still be active when eval/exec returns. What I want to do is make sure normal program execution is resumed when returning from eval/exec. To give you an idea, this is approximately the behavior I want:
try:
    result = eval(command, self.globals, self.locals)
    try: self.globals['pdb'].run('continue')
    except: pass
except SyntaxError:
    exec(command, self.globals, self.locals)
    try: self.globals['pdb'].run('continue')
    except: pass
However, the try line is shown in the debugger before it is executed, and I don't want the debugger to show my code at all. Also, it doesn't really work... The reason I repeat code is to minimize debugging in my code; otherwise I could just do it after the except block.
So how can I do this?
As a sidenote:
If you try to enter the following lines into the IPython or bpython interpreters you'll see that they have the same problem and you are able to step into their code.
import pdb
pdb.set_trace()
next
However if you do this in the standard cpython interpreter you are returned to the python prompt. The reason for this is obviously because the two former are implemented in python and the last one is not. But my wish is to get the same behavior even when all code is python.
While I'm somewhat concerned that you are eval/exec'ing a string that you don't control, I'll assume you've thought that bit through.
I think the simplest thing would be to persuade pdb to check the stack frame on each step and resume automatically when you return to the desired level. You can do that with a simple bit of hotfixing. In the code below I've simplified it down to a simple eval since all you are really asking is to have pdb resume automatically on return to a specific function. Call Pdb().resume_here() in the function that you don't want traced. N.B. the resumption is global and there's only one resumption point but I'm sure you can modify that if you wanted.
If you run the code then you'll enter the debugger in function foo() and you can then single step but as soon as you return to bar() the code continues automatically.
e.g.
import sys
from pdb import Pdb

def trace_dispatch(self, frame, event, arg):
    if frame is self._resume_frame:
        self.set_continue()
        return
    return self._original_trace_dispatch(frame, event, arg)

def resume_here(self):
    Pdb._resume_frame = sys._getframe().f_back

# hotfix Pdb
Pdb._original_trace_dispatch = Pdb.trace_dispatch
Pdb.trace_dispatch = trace_dispatch
Pdb.resume_here = resume_here
Pdb._resume_frame = None

def foo():
    import pdb
    pdb.set_trace()
    print("tracing...")
    for i in range(3):
        print(i)

def bar():
    Pdb().resume_here()
    exec("foo();print('done')")
    print("returning")

bar()
