I am using pyvisa to communicate via USB with an instrument. I am able to control it properly. Since it is a high voltage source, and it is dangerous to accidentally leave the high voltage turned on, I wanted to implement the __del__ method in order to turn off the output when the code execution finishes. So basically I wrote this:
import pyvisa as visa

class Instrument:
    def __init__(self, resource_str='USB0::1510::9328::04481179::0::INSTR'):
        self._resource_str = resource_str
        self._resource = visa.ResourceManager().open_resource(resource_str)

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

    def __del__(self):
        self.set_voltage(0)

instrument = Instrument()
instrument.set_voltage(555)
The problem is that it does not work, and in the terminal I get:
$ python3 comunication\ test.py
Exception ignored in: <function Instrument.__del__ at 0x7f4cca419820>
Traceback (most recent call last):
File "comunication test.py", line 12, in __del__
File "comunication test.py", line 9, in set_voltage
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 197, in write
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 157, in write_raw
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/resource.py", line 190, in session
pyvisa.errors.InvalidSession: Invalid session handle. The resource might be closed.
I guess that what is happening is that pyvisa is being "deleted" before the __del__ method of my object is being called. How can I prevent this? How can I tell Python that pyvisa is "important" for objects of the Instrument class so it is not unloaded until all of them have been destroyed?
In general, you cannot assume that __del__ will be called. If you're coming from an RAII (resource acquisition is initialization) language such as C++, Python makes no similar guarantee of destructors.
To ensure some action is reversed, you should consider an alternative such as context managers:
from contextlib import contextmanager

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    ...
    try:
        ...  # yield something
    finally:
        ...  # set voltage of resource to 0 here
You would use it like
with instrument(<something>) as inst:
    ...
# guaranteed by here to set to 0.
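Filled in for the Instrument class from the question, a minimal sketch might look like this (the SCPI command and resource string are the ones from the question; error handling is elided):

from contextlib import contextmanager
import pyvisa as visa

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    resource = visa.ResourceManager().open_resource(resource_str)
    try:
        yield resource
    finally:
        # runs on normal exit and on exceptions alike
        resource.write(':SOURCE:VOLT:LEV 0')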
I believe Ami Tavory's answer is generally considered to be the recommended solution, though context managers aren't always suitable depending on how the application is structured.
The other option would be to explicitly call the cleanup functions when the application is exiting. You can make it safer by wrapping the whole application in a try/finally, with the finally clause doing the cleanup. Note that if you don't include an except clause, the exception will automatically be re-raised after the finally block executes, which may be what you want. Example:
app = Application()
try:
    app.run()
finally:
    app.cleanup()
Be aware, though, that an exception may have just been thrown. If the exception happened mid-communication, for example, then you may not be able to send the command to reset the output, as the device could be expecting you to finish what you had already started.
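Applied to the Instrument class from the question, a sketch of the same pattern might look like:

instrument = Instrument()
try:
    instrument.set_voltage(555)
    # ... rest of the measurement ...
finally:
    instrument.set_voltage(0)  # runs whether or not an exception occurred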
Finally I found my answer here using the package atexit. This does exactly what I wanted to do (based on my tests up to now):
import pyvisa as visa
import atexit

class Instrument:
    def __init__(self, resource_str):
        self._resource = visa.ResourceManager().open_resource(resource_str)
        # Configure a safe shutdown for when the class instance is destroyed:
        def _atexit():
            self.set_voltage(0)
        atexit.register(_atexit)  # https://stackoverflow.com/a/41627098

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

instrument = Instrument(resource_str='USB0::1510::9328::04481179::0::INSTR')
instrument.set_voltage(555)
The advantage of this solution is that it is user-independent: no matter how the user instantiates the Instrument class, in the end the high voltage will be turned off.
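Note that atexit.register also accepts arguments for the callable, so the inner function could presumably be replaced with a direct registration of the bound method; be aware that either form keeps a reference to the instance alive until the interpreter exits:

atexit.register(self.set_voltage, 0)  # same effect, no closure needed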
I faced the same kind of safety issue with another type of connected device. I could not safely predict the behavior of the __del__ method, as discussed in questions like I don't understand this python __del__ behaviour.
I ended up using a context manager instead. It would look like this in your case:
def __enter__(self):
    """
    Nothing to do.
    """
    return self

def __exit__(self, type, value, traceback):
    """
    Set back to zero voltage.
    """
    self.set_voltage(0)

with Instrument() as instrument:
    instrument.set_voltage(555)
Related
Long story short, I am writing Python code that occasionally causes an underlying module to print complaints in the terminal that I want my code to respond to. My question is whether there is some way I can capture all terminal output as a string while the program is running, so that I can parse it and execute some handler code. It's not errors that crash the program entirely, and not a situation where I can simply do a try/except. Thanks for any help!
Edit: Running on Linux
There are several solutions to your need. The easiest would be to use a shared buffer of sorts and send all your package's output there instead of to stdout (with a regular print), thus keeping your personal streams under your package's control.

Since you probably already have some code with print, or you want it to work with minimal change, I suggest using contextlib.redirect_stdout with a context manager. Give it a shared io.StringIO instance and wrap all your methods with it. You can even create a decorator to do it automatically. Something like:
# decorator
from contextlib import redirect_stdout
import io
import functools

SHARED_BUFFER = io.StringIO()

def std_redirecter(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        with redirect_stdout(SHARED_BUFFER):
            print('foo')
            print('bar')
            func(*args, **kwargs)
    return inner

# your files
@std_redirecter
def writing_to_stdout_func():
    print('baz')

# invocation
writing_to_stdout_func()
string = SHARED_BUFFER.getvalue()  # 'foo\nbar\nbaz\n'
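Since the original goal was to react to the captured output, a hypothetical follow-up could scan the shared buffer and dispatch a handler (check_for_complaints and handle_complaint are illustrative names, not part of any library):

import re

def check_for_complaints():
    output = SHARED_BUFFER.getvalue()
    # the pattern is a placeholder for whatever complaints you expect
    if re.search(r'warning|error', output, re.IGNORECASE):
        handle_complaint(output)  # hypothetical handler for the matched output

writing_to_stdout_func()
check_for_complaints()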
Have a look at the following MWE.
import sys
from PyQt5.QtWidgets import QMainWindow, QPushButton, QApplication

class MainWindow(QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.button = QPushButton('Bham!')
        self.setCentralWidget(self.button)
        self.button.clicked.connect(self.btnClicked)

    def btnClicked(self):
        print(sys.excepthook)
        raise Exception

# import traceback
# sys.excepthook = traceback.print_exception

if __name__ == '__main__':
    app = QApplication(sys.argv)
    mainWindow = MainWindow()
    mainWindow.show()
    app.exec_()
I have a number of questions. I don't know if they are all related (I guess so), so forgive me if they are not.
When I run the above code from the terminal, all is fine. The program runs, if I click the button it prints the traceback and dies. If I run it inside an IDE (I tested Spyder and PyCharm), the traceback is not displayed. Any idea why? Essentially the same question was raised in other posts also on SO, here and here. Please don't mark this as a duplicate of either of those; please read on.
By adding the commented lines, the traceback is again displayed properly. However, they also have the nasty side effect that the app does no longer terminate on unhandled exceptions! I have no idea why this happens, as AFAIK excepthook only prints the traceback, it cannot prevent the program from exiting. At the moment it is called, it's too late for rescue.
Also, I don't understand how Qt comes into play here, as exceptions that are not thrown inside a slot still crash the app as I would expect. No matter if I change excepthook or not, PyQt does not seem to override it as well (at least the print seems to suggest so).
FYI, I am using Python 3.5 with PyQt 5.6, and I am aware of the changes in exception handling introduced in PyQt 5.5. If those are indeed the cause of the behaviour above, I would be glad to hear some more detailed explanations.
When an exception happens inside a Qt slot, it's C++ which called into your Python code. Since Qt/C++ know nothing about Python exceptions, you only have two possibilities:
Print the exception and return some default value to C++ (like 0, "" or NULL), possibly with unintended side effects. This is what PyQt < 5.5 does.
Print the exception and then call qFatal() or abort(), causing the application to immediately exit inside C++. That's what PyQt >= 5.5 does, except when you have a custom excepthook set.
The reason Python still doesn't terminate is probably because it can't, as it's inside some C++ code. The reason your IDE isn't showing the stack is probably because it doesn't deal with the abort() correctly - I'd suggest opening a bug against the IDE for that.
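For illustration, a minimal custom excepthook along those lines (a sketch, not the only sensible policy) could print the traceback and then ask Qt to leave the event loop instead of aborting:

import sys
import traceback
from PyQt5.QtWidgets import QApplication

def excepthook(exc_type, exc_value, exc_tb):
    traceback.print_exception(exc_type, exc_value, exc_tb)
    QApplication.quit()  # leave the event loop instead of abort()ing in C++

sys.excepthook = excepthook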
Whilst @the-compiler's answer is correct in explaining why it happens, I thought I might provide a workaround if you'd like these exceptions to be raised in a more Pythonic way.

I decorate any slots with this decorator, which catches any exceptions in the slot and saves them to a global variable:
import sys

exc_info = None

def pycrash(func):
    """Decorator that quits the qt mainloop and stores sys.exc_info. We will then
    raise it outside the qt mainloop; this is a cleaner crash than Qt just aborting as
    it does if Python raises an exception during a callback."""
    def f(*args, **kwargs):
        global exc_info
        try:
            return func(*args, **kwargs)
        except Exception:
            if exc_info is None:  # newer exceptions don't replace the first one
                exc_info = sys.exc_info()
            qapplication.exit()
    return f
Then just after my QApplication's exec(), I check the global variable and raise if there's anything there:
qapplication.exec_()
if exc_info is not None:
    type, value, traceback = exc_info
    raise value.with_traceback(traceback)
This is not ideal because quitting the mainloop doesn't stop other slots higher in the stack from still completing, and if the failed slot affects them, they might see some unexpected state. But IMHO it's still much better than PyQt just aborting with no cleanup.
When the twisted reactor is running and an exception occurs within a deferred that isn't caught, "Unhandled Error" is printed to the terminal along with a traceback and the exception. Is it possible to handle/intercept these exceptions (e.g., set a callback or override a method)?
EDIT: I'm aware that I can catch a failure by adding an errback to a deferred. What I want to know is if there is a way to intercept an unhandled failure/exception that has traversed its way up the chain to the reactor.
EDIT: Essentially, I'm wondering if the twisted reactor has a global error handler or something that can be accessed. I wonder because it prints the traceback and error from the failure.
Example:
Unhandled Error
Traceback (most recent call last):
File "/var/projects/python/server.py", line 359, in run_server
return server.run()
File "/var/projects/python/server.py", line 881, in run
reactor.run()
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 1162, in run
self.mainLoop()
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 1171, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 793, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/var/projects/python/server.py", line 524, in monitor
elapsed = time.time() - info.last
exceptions.NameError: global name 'info' is not defined
Because these tracebacks are written using a call to twisted.python.log.deferr() (in Twisted 10.2 anyway), it is possible to redirect them using a log observer. This is the most common thing to do with these stack traces. I can't find any base class for log observers (surprisingly) but there are a couple built in:
twisted.python.log.PythonLoggingObserver - Anything logged goes to the standard Python logging module. (I use this in my application.)
twisted.python.log.FileLogObserver - Anything logged goes to a file.
Both of these will catch stack traces reported by the reactor. All you have to do is construct the log observer (no arguments) and then call the object's start() method.
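For instance, a minimal sketch using PythonLoggingObserver, which routes Twisted's log (including those reactor tracebacks) into the standard logging module:

import logging
from twisted.python import log

logging.basicConfig(level=logging.INFO)  # configure stdlib logging first
observer = log.PythonLoggingObserver()   # constructed with no arguments
observer.start()                         # reactor tracebacks now go to logging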
(Side note: there's also a StdioOnnaStick class that you can construct and assign to sys.stdout or sys.stderr if you want. Then anything you print goes to the Twisted log.)
To really, truly intercept these calls, so the stack traces never get logged at all, you could either:
Subclass twisted.internet.SelectReactor and override its runUntilCurrent() method. That is what logs the stack traces. You would need to study the source of twisted.internet.base.ReactorBase before doing this.
After you have done all twisted.* imports, set twisted.python.log.deferr to a function of your choosing, that is compatible with the prototype def err(_stuff=None, _why=None, **kw).
You can add an errback to the deferred; unhandled exceptions are automatically converted to twisted.python.failure.Failure.
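A sketch of that, where some_deferred_call stands in for whatever returns your Deferred:

def on_error(failure):
    # 'failure' is a twisted.python.failure.Failure wrapping the exception
    failure.trap(NameError)  # re-raises anything that is not a NameError
    print('handled: %s' % failure.getErrorMessage())

d = some_deferred_call()  # hypothetical function returning a Deferred
d.addErrback(on_error)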
Answering to your comment:
Essentially, I'm wondering if the twisted reactor has a global error handler or something that can be accessed. I wonder because it prints the traceback and error from the failure.
The response is "not in a proper way".
First, the reactor has nothing to do with deferreds, actually, the whole deferred module should be placed in the twisted.python package, but this cannot be done yet because of some dependencies. Back to your question...
Digging into the twisted code (more precisely, the twisted.internet.defer module) you can outline the following event flow:
When the callback method is called with a result, the deferred instance begins to run its callbacks through the _runCallbacks method;
If one of the callbacks throws an exception, it is wrapped into Failure (line 542);
If the callback chain is exhausted and the last result was a failure, the current result is assigned to the failResult property of a DebugInfo instance (line 575);
If the deferred instance, and thus its DebugInfo instance, are garbage collected and there is still an active failure as the result, the DebugInfo.__del__ method is called and the traceback is printed out.
Given these premises, one of the simplest solutions would be to monkey patch the DebugInfo class:
from twisted.internet.defer import DebugInfo
del DebugInfo.__del__ # Hides all errors
I have a class that looks like this:
class A:
    def __init__(self, filename, sources):
        # gather info from file
        # info is updated during lifetime of the object
        ...

    def close(self):
        # save info back to file
        ...
Now, this is in a server program, so it might be shut down without prior notice by a signal. Is it safe to define this to make sure the class saves its info, if possible?
def __del__(self):
    self.close()
If not, what would you suggest as a solution instead?
Waiting until later is just not a strategy for making something reliable. In fact, you have to go in the complete opposite direction: as soon as you know something should be persistent, you need to take action to persist it. If you want to make it reliable, you first need to write to disk the steps needed to recover from the failure that might happen while you are trying to commit the change. In pseudo-Python:
class A:
    def __init__(self, filename, sources):
        self.recover()
        # gather info from file
        # info is updated during lifetime of the object

    def update_info(self, info):
        # append 'info' to recovery_log
        # recovery_log.flush()
        # write 'info' to file
        # file.flush()
        # append 'info-SUCCESS' to recovery_log
        # recovery_log.flush()

    def recover(self):
        # open recovery_log
        # skip to last 'info-SUCCESS'
        # read 'info' from recovery_log
        # write 'info' to file
        # file.flush()
        # append 'info-SUCCESS' to recovery_log
        # recovery_log.flush()
The important bit is that recover() happens every time, and that every step is followed by a flush() to make sure data makes it out to disk before the next step occurs. Another important thing is that only appends ever occur on the recovery log itself; nothing is overwritten in such a way that the data in the log can become corrupted.
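A minimal runnable sketch of that append-and-flush step (the file name is illustrative; os.fsync pushes the data past the OS cache toward the physical disk):

import os

def append_and_flush(f, record):
    f.write(record + '\n')
    f.flush()              # flush Python's userspace buffer to the OS
    os.fsync(f.fileno())   # ask the OS to flush its cache to the disk

with open('recovery.log', 'a') as log_file:
    append_and_flush(log_file, 'info')
    # ... write 'info' to the real file here, flushed the same way ...
    append_and_flush(log_file, 'info-SUCCESS')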
No. You are NEVER safe.
If the operating system wants to kill you without prior notice, it will. You can do nothing about it. Your program can stop running after any instruction, at any time, and have no opportunity to execute any additional code.
There is just no way of protecting your server from a killing signal.
You can, if you want, trap lesser signals and manually delete your objects, forcing the calls to close().
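A sketch of trapping those lesser signals (the instance a is hypothetical, and SIGKILL can never be trapped):

import signal
import sys

a = A('state.dat', [])  # hypothetical instance of the class above

def shutdown(signum, frame):
    a.close()    # persist the info while we still can
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)  # plain kill <pid>
signal.signal(signal.SIGINT, shutdown)   # Ctrl-C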
For orderly cleanup you can use the atexit hooks. Register a function there that calls your close method. The destructor of an object may not be called at exit.
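For example (a sketch, assuming an instance a of the class above):

import atexit

a = A('state.dat', [])
atexit.register(a.close)  # runs at normal interpreter exit, though not on SIGKILL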
The __del__ method is not guaranteed to ever be called for objects that still exist when the interpreter exits.
Even if __del__ is called, it can be called too late. In particular, it can occur after the modules it wants to call have been unloaded. As pointed out by Keith, atexit is much safer.
I came across the Python with statement for the first time today. I've been using Python lightly for several months and didn't even know of its existence! Given its somewhat obscure status, I thought it would be worth asking:
1. What is the Python with statement designed to be used for?
2. What do you use it for?
3. Are there any gotchas I need to be aware of, or common anti-patterns associated with its use? Any cases where it is better to use try..finally than with?
4. Why isn't it used more widely?
5. Which standard library classes are compatible with it?
I believe this has already been answered by other users before me, so I only add it for the sake of completeness: the with statement simplifies exception handling by encapsulating common preparation and cleanup tasks in so-called context managers. More details can be found in PEP 343. For instance, the open function acts as a context manager in itself, which lets you open a file, keep it open as long as the execution is in the context of the with statement where you used it, and close it as soon as you leave the context, no matter whether you have left it because of an exception or during regular control flow. The with statement can thus be used in ways similar to the RAII pattern in C++: some resource is acquired by the with statement and released when you leave the with context.
Some examples are: opening files using with open(filename) as fp:, acquiring locks using with lock: (where lock is an instance of threading.Lock). You can also construct your own context managers using the contextmanager decorator from contextlib. For instance, I often use this when I have to change the current directory temporarily and then return to where I was:
from contextlib import contextmanager
import os

@contextmanager
def working_directory(path):
    current_dir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(current_dir)

with working_directory("data/stuff"):
    ...  # do something within data/stuff
# here I am back again in the original working directory
Here's another example that temporarily redirects sys.stdin, sys.stdout and sys.stderr to some other file handle and restores them later:
from contextlib import contextmanager
import sys

@contextmanager
def redirected(**kwds):
    stream_names = ["stdin", "stdout", "stderr"]
    old_streams = {}
    try:
        for sname in stream_names:
            stream = kwds.get(sname, None)
            if stream is not None and stream != getattr(sys, sname):
                old_streams[sname] = getattr(sys, sname)
                setattr(sys, sname, stream)
        yield
    finally:
        for sname, stream in old_streams.iteritems():
            setattr(sys, sname, stream)

with redirected(stdout=open("/tmp/log.txt", "w")):
    # these print statements will go to /tmp/log.txt
    print "Test entry 1"
    print "Test entry 2"

# back to the normal stdout
print "Back to normal stdout again"
And finally, another example that creates a temporary folder and cleans it up when leaving the context:
from tempfile import mkdtemp
from shutil import rmtree
#contextmanager
def temporary_dir(*args, **kwds):
name = mkdtemp(*args, **kwds)
try:
yield name
finally:
shutil.rmtree(name)
with temporary_dir() as dirname:
# do whatever you want
I would suggest two interesting reads:

PEP 343 - The "with" Statement
effbot - Understanding Python's "with" statement
1.
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2.
You could do something like:
with open("foo.txt") as foo_file:
data = foo_file.read()
OR
from contextlib import nested

with nested(A(), B(), C()) as (X, Y, Z):
    do_something()
OR (Python 3.1)
with open('data') as input_file, open('result', 'w') as output_file:
    for line in input_file:
        output_file.write(parse(line))
OR
import threading

lock = threading.Lock()
with lock:
    ...  # critical section of code
3.
I don't see any antipatterns here.
Quoting Dive into Python:
try..finally is good. with is better.
4.
I guess it's related to programmers' habit of using try..catch..finally statements from other languages.
The Python with statement is built-in language support of the Resource Acquisition Is Initialization idiom commonly used in C++. It is intended to allow safe acquisition and release of operating system resources.
The with statement creates resources within a scope/block. You write your code using the resources within the block. When the block exits the resources are cleanly released regardless of the outcome of the code in the block (that is whether the block exits normally or because of an exception).
Many resources in the Python library obey the protocol required by the with statement and so can be used with it out-of-the-box. However, anyone can make resources that can be used in a with statement by implementing the well-documented protocol: PEP 0343
Use it whenever you acquire resources in your application that must be explicitly relinquished such as files, network connections, locks and the like.
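As a sketch of that protocol (the class name here is illustrative), any object that implements __enter__ and __exit__ can be used in a with statement:

class ManagedResource:
    def __enter__(self):
        print('acquire')  # acquire the OS resource here
        return self       # this is what 'as' binds to

    def __exit__(self, exc_type, exc_value, traceback):
        print('release')  # always runs, whether the block raised or not
        return False      # do not suppress exceptions

with ManagedResource() as res:
    print('use the resource')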
Again for completeness I'll add my most useful use-case for with statements.
I do a lot of scientific computing, and for some activities I need the Decimal library for arbitrary precision calculations. In some parts of my code I need high precision, and for most other parts I need less precision.
I set my default precision to a low number and then use with to get a more precise answer for some sections:
from decimal import localcontext

with localcontext() as ctx:
    ctx.prec = 42  # perform a high precision calculation
    s = calculate_something()
s = +s  # round the final result back to the default precision
I use this a lot with the hypergeometric test, which requires the division of large numbers resulting from factorials. When you do genomic scale calculations you have to be careful of round-off and overflow errors.
An example of an antipattern might be to use with inside a loop when it would be more efficient to have the with outside the loop. For example:
for row in lines:
    with open("outfile", "a") as f:
        f.write(row)
vs
with open("outfile","a") as f:
for row in lines:
f.write(row)
The first way opens and closes the file for each row, which may cause performance problems compared to the second way, which opens and closes the file just once.
See PEP 343 - The 'with' statement; there is an example section at the end.

... new statement "with" to the Python language to make it possible to factor out standard uses of try/finally statements.
Points 1, 2, and 3 being reasonably well covered:

4: It is relatively new, only available in Python 2.6+ (or Python 2.5 using from __future__ import with_statement).
The with statement works with so-called context managers:
http://docs.python.org/release/2.5.2/lib/typecontextmanager.html
The idea is to simplify exception handling by doing the necessary cleanup after leaving the 'with' block. Some of the python built-ins already work as context managers.
Another example of out-of-the-box support, and one that might be a bit baffling at first when you are used to the way the built-in open() behaves, are the connection objects of popular database modules such as:
sqlite3
psycopg2
cx_oracle
The connection objects are context managers and as such can be used out-of-the-box in a with-statement, however when using the above note that:
When the with-block is finished, either with or without an exception, the connection is not closed. In case the with-block finishes with an exception, the transaction is rolled back; otherwise, the transaction is committed.
This means that the programmer has to take care to close the connection themselves, but it allows you to acquire a connection and use it in multiple with-statements, as shown in the psycopg2 docs:
conn = psycopg2.connect(DSN)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL1)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL2)

conn.close()
In the example above, you'll note that the cursor objects of psycopg2 also are context managers. From the relevant documentation on the behavior:
When a cursor exits the with-block it is closed, releasing any resource eventually associated with it. The state of the transaction is not affected.
In Python, the with statement is generally used to open a file, process the data present in the file, and close the file without calling a close() method. The with statement makes exception handling simpler by providing cleanup activities.

General form of with:

with open("file name", "mode") as file_var:
    ...  # processing statements

Note: there is no need to close the file by calling file_var.close().
The answers here are great, but just to add a simple one that helped me:
with open("foo.txt") as file:
data = file.read()
open returns a file object.
Since 2.6, Python added the methods __enter__ and __exit__ to file objects.
with is like a for loop that calls __enter__, runs the block once, and then calls __exit__.
with works with any instance that has __enter__ and __exit__.
On some platforms a file is locked and not re-usable by other processes until it's closed; __exit__ closes it.
source: http://web.archive.org/web/20180310054708/http://effbot.org/zone/python-with-statement.htm