I have a Python class that inherits from Popen:

from subprocess import Popen

class S(Popen):
    def exit(self):
        self.stdin.close()
        return self.wait()

This works fine, except that when I call the exit() method from my Python unit test (using the built-in unittest framework), the following warning comes up when running the test:
/usr/lib/python3.5/unittest/case.py:600: ResourceWarning: unclosed file <_io.TextIOWrapper name=5 encoding='UTF-8'>
  testMethod()
Here's the test code:
import unittest

class TestS(unittest.TestCase):
    def test_exit(self):
        s = S()
        self.assertTrue(s.exit() == 0)
I know it's triggered by the return self.wait() line, because no other files are opened and the warning goes away if that line is replaced by return 0.
Is there something else that needs to be done for proper clean-up, perhaps something equivalent to pclose() in C? I found a similar question, but it doesn't really help solve this issue. The test passes, but I'd rather not suppress the warning without understanding the cause.
Some things I already tried, with no success:
Using a with S() as s block
Same as above, with self.exit() being called from a def __exit__ method (S used as a context manager)
Thanks in advance!
I believe the warning might refer to the stdout/stderr of the subprocess, particularly if you were using subprocess.PIPE for either of them.
I had the same issue myself, and it went away after I added calls to proc.stdout.close() and proc.stderr.close() once wait() returned.
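For the class from the question, that might look like the following sketch. It assumes the process was created with subprocess.PIPE for stdout/stderr; the None checks keep it safe when it wasn't:

from subprocess import Popen

class S(Popen):
    def exit(self):
        self.stdin.close()
        returncode = self.wait()
        # Close any remaining pipe wrappers explicitly; leaving them to the
        # garbage collector is what triggers the ResourceWarning.
        if self.stdout is not None:
            self.stdout.close()
        if self.stderr is not None:
            self.stderr.close()
        return returncode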
I am using pyvisa to communicate via USB with an instrument. I am able to control it properly. Since it is a high voltage source, and it is dangerous to forget it with high voltage turned on, I wanted to implement the __del__ method in order to turn off the output when the code execution finishes. So basically I wrote this:
import pyvisa as visa

class Instrument:
    def __init__(self, resource_str='USB0::1510::9328::04481179::0::INSTR'):
        self._resource_str = resource_str
        self._resource = visa.ResourceManager().open_resource(resource_str)

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

    def __del__(self):
        self.set_voltage(0)

instrument = Instrument()
instrument.set_voltage(555)
The problem is that it is not working, and in the terminal I get:
$ python3 comunication\ test.py
Exception ignored in: <function Instrument.__del__ at 0x7f4cca419820>
Traceback (most recent call last):
File "comunication test.py", line 12, in __del__
File "comunication test.py", line 9, in set_voltage
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 197, in write
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 157, in write_raw
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/resource.py", line 190, in session
pyvisa.errors.InvalidSession: Invalid session handle. The resource might be closed.
I guess that what is happening is that pyvisa is being "deleted" before the __del__ method of my object is called. How can I prevent this? How can I tell Python that pyvisa is "important" for objects of the Instrument class, so that it is not unloaded until all of them have been destroyed?
In general, you cannot assume that __del__ will be called. If you're coming from an RAII (resource acquisition is initialization) language such as C++, be aware that Python makes no similar guarantee about destructors.
To ensure some action is reversed, you should consider an alternative such as context managers:
from contextlib import contextmanager

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    inst = Instrument(resource_str)  # the Instrument class from the question
    try:
        yield inst
    finally:
        inst.set_voltage(0)  # set voltage of resource back to 0 here
You would use it like:

with instrument(<something>) as inst:
    ...
# guaranteed by here to be set back to 0
I believe Ami Tavory's answer is generally considered to be the recommended solution, though context managers aren't always suitable depending on how the application is structured.
The other option is to explicitly call the cleanup functions when the application is exiting. You can make this safer by wrapping the whole application in a try/finally, with the finally clause doing the cleanup. Note that if you don't include an except clause, the exception will automatically be re-raised after the finally block executes, which may be what you want. Example:
app = Application()
try:
    app.run()
finally:
    app.cleanup()
Be aware, though, that at that point an exception may just have been thrown. If the exception happened mid-communication, for example, you may not be able to send the command to reset the output, since the device could be expecting you to finish what you had already started.
I finally found my answer here, using the atexit package. This does exactly what I wanted to do (based on my tests up to now):
import pyvisa as visa
import atexit

class Instrument:
    def __init__(self, resource_str):
        self._resource = visa.ResourceManager().open_resource(resource_str)
        # Configure a safe shutdown for when the program exits:
        def _atexit():
            self.set_voltage(0)
        atexit.register(_atexit)  # https://stackoverflow.com/a/41627098

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

instrument = Instrument(resource_str='USB0::1510::9328::04481179::0::INSTR')
instrument.set_voltage(555)
The advantage of this solution is that it is user-independent: no matter how the user instantiates the Instrument class, the high voltage will be turned off in the end.
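One refinement worth considering on top of this answer: if the class also grows an explicit shutdown method, atexit.unregister() can drop the hook so the voltage isn't reset twice. A sketch; the close() method here is a hypothetical addition, not part of the original answer:

import atexit
import pyvisa as visa

class Instrument:
    def __init__(self, resource_str):
        self._resource = visa.ResourceManager().open_resource(resource_str)
        self._atexit = lambda: self.set_voltage(0)
        atexit.register(self._atexit)

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

    def close(self):
        # Hypothetical explicit shutdown: reset the output, remove the
        # atexit hook so it doesn't fire again at interpreter exit, and
        # release the VISA session.
        self.set_voltage(0)
        atexit.unregister(self._atexit)
        self._resource.close()

Note that registering a callback that closes over self keeps the instance alive until the interpreter exits, which is exactly what makes the approach user-independent.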
I faced the same kind of safety issue with another type of connected device. I could not safely predict the behavior of the __del__ method, as discussed in questions like
"I don't understand this python __del__ behaviour".
I ended up with a context manager instead. It would look like this in your case:
def __enter__(self):
    """Nothing to do."""
    return self

def __exit__(self, type, value, traceback):
    """Set back to zero voltage."""
    self.set_voltage(0)

with Instrument() as instrument:
    instrument.set_voltage(555)
I'm writing a Python script that should behave like a typical shell, providing some self-written functions.
It is already working quite well, but it always exits after a successful command, so that it has to be started again to perform a second task.
How can I make it return to the shell, awaiting new input, instead of finishing with exit code 0? And how would I then implement exit methods?
The following example always exits after typing print-a or print-b:
import click
import click_repl
from prompt_toolkit.history import FileHistory
import os

@click.group(invoke_without_command=True)
@click.pass_context
def cli(ctx):
    if ctx.invoked_subcommand is None:
        ctx.invoke(repl)

@cli.command()
def print_a():
    print("a")

@cli.command()
def print_b():
    print("b")

@cli.command()
def repl():
    prompt_kwargs = {
        'history': FileHistory(os.path.expanduser('~/.repl_history'))
    }
    click_repl.repl(click.get_current_context(), prompt_kwargs)

def main():
    while True:
        cli(obj={})

if __name__ == "__main__":
    main()
(And a bonus question: in the cmd package it is possible to customize the > prompt tag. Is this possible with click too, so that it's something like App> instead?)
Use the standalone_mode argument; try this:
rv = cli(obj={}, standalone_mode=False)
When parsing fails, the code above will throw a UsageError. When --help was passed, rv will be the integer 0. In most other cases the return value of the function that handles the command is returned, although there are a bunch of exceptions, and the behavior in general is quite complex; more explanation here:
https://click.palletsprojects.com/en/master/commands/#command-return-values
The advantage of this approach is that you can use the return values of command handlers. The disadvantage is that you lose the pretty-printed help message when parsing fails (maybe there is a way to restore it?).
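Putting that together with the loop from the question, a sketch; catching click.UsageError ourselves and calling e.show() (the standard ClickException printing method) may restore at least part of the lost output:

import click

def main():
    while True:
        try:
            rv = cli(obj={}, standalone_mode=False)
            # rv is 0 when --help was passed; otherwise it is whatever the
            # invoked command handler returned.
        except click.UsageError as e:
            e.show()  # print the error instead of letting click exit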
Another option is to not use standalone_mode and instead wrap your call to cli in a try/except block where you catch a SystemExit:
try:
    cli(obj={})
except SystemExit as e:
    if e.code != 0:
        raise
By catching SystemExit you can stop the program-exit process initiated by click. If the command parsed successfully, then SystemExit(0) is caught. Note again that parsing --help also counts as a 'successful' parse, and therefore also results in SystemExit(0).
The disadvantage of this approach is that you cannot use the return value of a command handler, which makes it more difficult to know when --help was passed. The upside is that all help messages to the console are restored.
I should also note that SystemExit inherits from BaseException but not from Exception. So to actually catch SystemExit you can either catch it directly or catch BaseException.
You could check out click-shell, which is a wrapper for click and the python cmd module. It supports auto completion and help from docstrings out of the box.
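For illustration, a minimal sketch of what click-shell usage looks like, based on its documented shell decorator; treat the exact parameters as an assumption. It also addresses the bonus question, since the decorator accepts a prompt argument:

import click
from click_shell import shell

# The shell decorator replaces click.group and starts a cmd-style REPL;
# prompt customizes the "App> " tag asked about in the bonus question.
@shell(prompt='App> ', intro='Starting the app...')
def cli():
    pass

@cli.command()
def print_a():
    print("a")

if __name__ == '__main__':
    cli()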
Alongside click-shell, yet another option is click_repl.
I am using Luigi to run several tasks, and then I need to bulk transfer the output to a standardized file location. I've written a WrapperTask with an overridden complete() method to do this:
import luigi
from luigi.task import flatten

class TaskX(luigi.WrapperTask):
    date = luigi.DateParameter()
    client = luigi.s3.S3Client()

    def requires(self):
        yield TaskA(date=self.date)
        yield TaskB(date=self.date)

    def complete(self):
        tasks_complete = all(r.complete() for r in flatten(self.requires()))
        # at the end of everything, batch copy the files
        if tasks_complete:
            self.client.copy('current-old', 'current')
            return True
        else:
            return False

if __name__ == "__main__":
    luigi.run()
but I'm having trouble getting the conditional part of complete() to be called when the process is actually finished.
I assume this is because of the asynchronous behavior pointed out by others, but I'm not sure how to fix it.
I've tried running Luigi with these command-line parameters:
$ PYTHONPATH="" luigi --module x TaskX --worker-retry-external-task
But that doesn't seem to be working correctly. Is this the right approach to handle this type of task?
Also, I'm curious: has anyone had experience with the --worker-retry-external-task command? I'm having some trouble understanding it.
In the source code,
def _is_external(task):
    return task.run is None or task.run == NotImplemented
is called to determine whether or not a Luigi task has a run() method, which a WrapperTask does not. Thus, I'd expect the --retry-external-task flag to retry complete() for this task until it's complete, thereby performing the action. However, just playing around in the interpreter leads me to believe that:
>>> import luigi_newsletter_process
>>> task = luigi_newsletter_process.Newsletter()
>>> task.run
<bound method Newsletter.run of Newsletter(date=2016-06-22, use_s3=True)>
>>> task.run()
>>> task.run == None
False
>>> task.run() == None
True
This code snippet is not doing what it thinks it is.
Am I off-base here?
I still think that overriding .complete() should in theory have been able to do this, and I'm still not sure why it doesn't work. But if you're just looking for a way to bulk-transfer files after running a process, a workable solution is to have the transfer take place within a .run() method:
def run(self):
    logger.info('transferring into current directory')
    self.client.copy('current-old', 'current')
I'm on a project using txrdq, and am testing (using trial) a case where a queued job may fail. Trial marks the test case as failed whenever it hits a failure in an errback.
The errback is normal behaviour, since a queued job may fail to launch. How can I test this case using trial without failing the test?
Here's an example of the test case:
from twisted.trial.unittest import TestCase
from txrdq.rdq import ResizableDispatchQueue
from twisted.python.failure import Failure

class myTestCase(TestCase):
    def aFailingJob(self, a):
        return Failure("This is a failure")

    def setUp(self):
        self.queue = ResizableDispatchQueue(self.aFailingJob, 1)

    def tearDown(self):
        pass

    def test_txrdq(self):
        self.queue.put("Some argument", 1)
It seems likely that the exception is being logged, since the error handler just raises it. I'm not exactly sure what the error handling code in txrdq looks like, so this is just a guess, but I think it's a pretty good one based on your observations.
Trial fails any unit test that logs an exception, unless the test cleans that exception up after it's logged. Use TestCase.flushLoggedErrors(exceptionType) to deal with this:
def test_txrdq(self):
    self.queue.put("Some argument", 1)
    self.assertEqual(1, len(self.flushLoggedErrors(SomeException)))
Also notice that you should never do Failure("string"). This is analogous to raise "string": string exceptions have been deprecated in Python for a long time. Always construct a Failure with an exception instance:
class JobError(Exception):
    pass

def aFailingJob(self, a):
    return Failure(JobError("This is a failure"))
This makes JobError the exception type you'd pass to flushLoggedErrors.
Make sure that you understand whether queue processing is synchronous or asynchronous. If it is synchronous, your test (with the flushLoggedErrors call added) is fine. If it is asynchronous, your error handler may not have run by the time your test method returns. In that case, you're not going to be testing anything useful, and the errors might be logged after the call to flush them (making the flush useless).
Finally, if you're not writing unit tests for txrdq itself, then you might not want to write tests like this. You can probably unit test your txrdq-using code without using an actual txrdq. A normal Queue object (or perhaps another more specialized test double) will let you more precisely target the units in your application, making your tests faster, more reliable, and easier to debug.
This issue has now (finally!) been solved, by L. Daniel Burr. There's a new version (0.2.14) of txRDQ on PyPI.
By the way, in your test you should add from txrdq.job import Job, and then do something like this:
d = self.queue.put("Some argument", 1)
return self.assertFailure(d, Job)
Trial will make sure that d fails with a Job instance. There are a couple of new tests at the bottom of txrdq/test/test_rdq.py that illustrate this kind of assertion.
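Folded into the test case from the question, that might look like this sketch; it assumes the from txrdq.job import Job import mentioned above:

def test_txrdq(self):
    d = self.queue.put("Some argument", 1)
    # Trial waits for a returned Deferred; assertFailure passes the test
    # only if d fails with a Job instance.
    return self.assertFailure(d, Job)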
I'm sorry this problem caused so much head scratching for you - it was entirely my fault.
Sorry to see you're still having a problem. I don't know what's going on here, but I have been playing with it for over an hour trying to...
The queue.put method returns a Deferred. You can attach an errback to it to do the flush as @exarkun describes, and then return the Deferred from the test. I expected that to fix things (having read @exarkun's reply and gotten a comment from @idnar in #twisted). But it doesn't help.
Here's a bit of the recent IRC conversation, mentioning what I think could be happening: https://gist.github.com/2177560
As far as I can see, txRDQ is doing the right thing. The job fails and the deferred that is returned by queue.put is errbacked.
If you look in _trial_temp/test.log after you run the test, what do you see? I see an error that says Unhandled error in Deferred, and the error is a Failure with a Job in it. So it seems likely to me that the error is somewhere in txRDQ: there is a deferred that's failing, and it's passing on the failure just fine to whoever needs it, but also returning the failure, causing trial to complain. But I don't know where that is. I put a print into the __init__ of the Deferred class just out of curiosity to see how many deferreds were made during the running of the test. The answer: 12!
Sorry not to have better news. If you want to press on, go look at every deferred made by the txRDQ code. Is one of them failing with an errback that returns the failure? I don't see it, and I've put print statements all over the place to check that things are right. I guess I must be missing something.
Thanks, and thanks to @exarkun.
In my Python code I have these lines:

try:
    result = eval(command, self.globals, self.locals)
except SyntaxError:
    exec(command, self.globals, self.locals)
The command variable can be any string. Hence the Python debugger pdb may be started in eval/exec and still be active when eval/exec returns. What I want to do is make sure normal program execution resumes after returning from eval/exec. To give you an idea, this is approximately the behavior that I want:
try:
    result = eval(command, self.globals, self.locals)
    try: self.globals['pdb'].run('continue')
    except: pass
except SyntaxError:
    exec(command, self.globals, self.locals)
    try: self.globals['pdb'].run('continue')
    except: pass
However, the try line is shown in the debugger before it is executed, and I don't want the debugger to show my code at all. Also, it doesn't really work... The reason I repeat the code is to minimize debugging of my own code; otherwise I could just do it once after the except block.
So how can I do this?
As a side note: if you enter the following lines into the IPython or bpython interpreters, you'll see that they have the same problem, and you are able to step into their code.
import pdb
pdb.set_trace()
next
However, if you do this in the standard CPython interpreter, you are returned to the Python prompt. The reason is obviously that the former two are implemented in Python and the latter is not. But my wish is to get the same behavior even when all code is Python.
While I'm somewhat concerned that you are eval/exec'ing a string that you don't control, I'll assume you've thought that bit through.
I think the simplest thing would be to persuade pdb to check the stack frame on each step and resume automatically when you return to the desired level. You can do that with a simple bit of hotfixing. In the code below I've simplified things down to a plain eval, since all you are really asking is for pdb to resume automatically on return to a specific function. Call Pdb().resume_here() in the function that you don't want traced. N.B. the resumption point is global and there is only one, but I'm sure you can modify that if you want.
If you run the code, you'll enter the debugger in the function foo() and you can single-step from there; but as soon as you return to bar(), the code continues automatically. For example:
import sys
from pdb import Pdb

def trace_dispatch(self, frame, event, arg):
    if frame is self._resume_frame:
        self.set_continue()
        return
    return self._original_trace_dispatch(frame, event, arg)

def resume_here(self):
    Pdb._resume_frame = sys._getframe().f_back

# hotfix Pdb
Pdb._original_trace_dispatch = Pdb.trace_dispatch
Pdb.trace_dispatch = trace_dispatch
Pdb.resume_here = resume_here
Pdb._resume_frame = None

def foo():
    import pdb
    pdb.set_trace()
    print("tracing...")
    for i in range(3):
        print(i)

def bar():
    Pdb().resume_here()
    exec("foo();print('done')")
    print("returning")

bar()