I need to pickle an object (a wxPython frame object) and send it as a parameter to the apply_async function of the multiprocessing Pool module. Could someone provide an example of how I can do it?
I tried the following and get an error message:
myfile = open(r"C:\binary.dat", "wb")
pickle.dump(self, myfile)
myfile.close()
self.my_pool.apply_async(fun, [i, myfile])

def fun(i, self_object):
    window = pickle.load(self_object)
    wx.CallAfter(window.LogData, msg)
Could someone tell me what the problem could be? In case it gives some indication, below is the last error message I get:
File "C:\Python26\lib\copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.name
TypeError: can't pickle PySwigObject objects
You cannot serialize a widget for use in another process. I guess you want to change the GUI content from another process that is started by the multiprocessing module. In that case, you should define a callback function in the parent process that gets called when the result of the sub-process is ready. For that you can use the "callback" parameter of apply_async.
Something like:
def fun(i):
    # do something in this sub-process and then return a log message
    return "finished doing something"

def cb(resultFromFun):
    wx.CallAfter(window.LogData, resultFromFun)

my_pool.apply_async(fun, [i], callback=cb)
I don't believe that wxPython objects can be pickled. They are just wrappers around C objects, which contain lots of pointers and other stateful stuff. The pickle module doesn't know enough about them to be able to restore their state afterwards.
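A minimal repro of the failure (assuming wxPython is installed; the exact exception wording varies by wxPython version):

import pickle
import wx

app = wx.App()
frame = wx.Frame(None, title='demo')
try:
    pickle.dumps(frame)
except TypeError as exc:
    print(exc)  # e.g. "can't pickle PySwigObject objects" on older wxPython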
In the code below, when I call .get() in get_query_result I get an ImportError as described in the function's comment. The actual result in do_query is a large dictionary, possibly in the tens of MB. I've verified that this large object can be pickled and that adding it to the output queue raises no errors. Simpler objects work just fine in place of result.
Of course I've distilled this code down from its actual form, but it seems like there may be some issue with the size of result; perhaps it just isn't deserializable when read off the queue? I've even tried adding a sleep before calling .get(), in case the internal mechanisms of SimpleQueue need time to get all the bytes into the queue, but it made no difference. I've also tried using multiprocessing.Queue and pipes, with the same result each time. Also, the module name in the error message is slightly different each time, which makes me think the object in the queue is being incorrectly serialized or something.
Your help is greatly appreciated.
import multiprocessing
from multiprocessing import queues
import pickle

def do_query(recv_queue, send_queue):
    while True:
        item = recv_queue.get()
        result = long_running_function(item)  # stand-in for the real work; returns the large dictionary
        try:
            test = pickle.loads(pickle.dumps(result))
            # this works and prints this large object out just fine
            print(test)
        except pickle.PicklingError:
            print('obj was not pickleable')
        try:
            send_queue.put([result])
        except Exception as e:
            # this also never happens
            print('Exception when calling .put')
class QueueWrapper:
    def __init__(self):
        self._recv_queue = queues.SimpleQueue()
        self._send_queue = queues.SimpleQueue()
        self._proc = None

    def send_query(self, query):
        self._send_queue.put(query)

    def get_query_result(self):
        '''
        When calling .get() I get "ImportError: No module named tmp_1YYBC\n"
        If I replace 'result' in do_query with something like ['test'] it works fine.
        '''
        return self._recv_queue.get()

    def init(self):
        self._proc = multiprocessing.Process(target=do_query, args=(self._send_queue, self._recv_queue))
        self._proc.start()
if __name__ == '__main__':
    queue_wrapper = QueueWrapper()
    queue_wrapper.init()
    queue_wrapper.send_query('test')
    # import error raised when calling below
    queue_wrapper.get_query_result()
Edit 1: If I change the code to pickle result myself and then send the pickled bytes through the queue, I can successfully call .get on the other end. However, when I go to unpickle that item, I get the same error as before. To recap: I can pickle and unpickle the object within the process running do_query just fine, and if I pickle it myself I can send the bytes between processes just fine, but unpickling them manually on the receiving side fails. It almost seems like I'm reading off the queue before it's done being written to, but that shouldn't be possible if I understand .get and .put correctly.
Edit 2: After some more digging I see that type(result) returns <class tmp_1YYBC._sensor_msgs__Image>, which is not correct; it should be just sensor_msgs.msg._Image.Image. Interestingly, that weird prefix is exactly the module name that appears in my error message. If I construct a new Image, copy all the data from result into it, and send that newly created object through the queue, I get the exact same error message... It seems like pickle, or the other process in general, has trouble knowing how to construct this Image object.
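A workaround sketch that fits this diagnosis: pickle records the class's module and name and re-imports them on load, so a class living in a dynamically generated module (here tmp_1YYBC) cannot be unpickled in another process. Converting the message to plain built-in types before the .put sidesteps the import entirely; the field names below are assumptions based on sensor_msgs/Image, adjust them to the real type:

def image_to_plain(msg):
    # copy only built-in types, which pickle can always reconstruct
    return {
        'height': msg.height,
        'width': msg.width,
        'encoding': msg.encoding,
        'data': bytes(msg.data),
    }

# in do_query, instead of send_queue.put([result]):
# send_queue.put([image_to_plain(result)])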
Hello :) I'm a complete beginner when it comes to COM objects; any help is appreciated!
I'm working on a Python program supposed to read incoming MS Word documents in a client/server fashion, i.e. the client sends a request (one or multiple MS Word documents) and the server reads specific content from those requests using pythoncom and win32com.
Because I want to minimize waiting time for the client (the client needs a status message from the server), I do not want to open an MS Word instance for every request. Hence, I intend to keep a pool of running MS Word instances from which the server can pick and choose. This, in turn, means I have to reuse those instances from the pool in different threads, and this is what causes trouble right now. After reading Using win32com with multithreading, my dummy code for the server looks like this:
import pythoncom, win32com.client, threading, psutil, os, queue, time, datetime

appPool = {'WINWORD.EXE': queue.Queue()}

def initAppPool():
    global appPool
    wordApp = win32com.client.DispatchEx('Word.Application')
    appPool["WINWORD.EXE"].put(wordApp)  # for testing purposes I only use one MS Word instance currently

def run_in_thread(appid, path):
    # open doc, read some stuff, close it and reattach the MS Word instance to the pool
    pythoncom.CoInitialize()
    wordApp = win32com.client.Dispatch(pythoncom.CoGetInterfaceAndReleaseStream(appid, pythoncom.IID_IDispatch))
    doc = wordApp.Documents.Open(path)
    time.sleep(3)  # read out some content ...
    doc.Close()
    appPool["WINWORD.EXE"].put(wordApp)

if __name__ == '__main__':
    initAppPool()
    pathOfFile2BeRead1 = r'C:\Temp\file4.docx'
    pathOfFile2BeRead2 = r'C:\Temp\file5.doc'

    # treat first request
    wordApp = appPool["WINWORD.EXE"].get(True, 10)
    pythoncom.CoInitialize()
    wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)
    readDocjob1 = threading.Thread(target=run_in_thread, args=(wordApp_id, pathOfFile2BeRead1), daemon=True)
    readDocjob1.start()

    # wait here until readDocjob1 is done
    wait = True
    while wait:
        try:
            wordApp = appPool["WINWORD.EXE"].get(True, 1)
            wait = False
        except queue.Empty:
            print(f"[{datetime.datetime.now()}] error: appPool empty")
        except BaseException as err:
            print(f"[{datetime.datetime.now()}] error: {err}")
So far everything works as expected, but when I start a second request similar to the first one:
(x) wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)
    readDocjob2 = threading.Thread(target=run_in_thread, args=(wordApp_id, pathOfFile2BeRead2), daemon=True)
    readDocjob2.start()
I receive the following error message for the line marked (x): "The application called an interface that was marshaled for a different thread."
I thought that was exactly why I have to use pythoncom.CoGetInterfaceAndReleaseStream to jump between threads with the same COM object? And besides that, why does it work the first time but not the second time?
I searched for different solutions on StackOverflow that use CoMarshalInterface instead of CoMarshalInterThreadInterfaceInStream, but they all gave me the same error. I'm really confused right now.
EDIT:
After fixing the error as mentioned in the comments, I ran into mysterious behavior.
When the second job is executed:
wordApp_id = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, wordApp)
readDocjob2 = threading.Thread(target=run_in_thread, args=(wordApp_id, pathOfFile2BeRead2), daemon=True)
readDocjob2.start()
The function run_in_thread terminates immediately without executing a single line; it seems that pythoncom.CoInitialize() is not working properly.
The script finishes without any error messages, though.
def run_in_thread(instance, appid, path):
    # open doc, read some stuff, close it and reattach the MS Word instance to the pool
    pythoncom.CoInitialize()
    wordApp = win32com.client.Dispatch(pythoncom.CoGetInterfaceAndReleaseStream(appid, pythoncom.IID_IDispatch))
    doc = wordApp.Documents.Open(path)
    time.sleep(3)  # read out some content ...
    doc.Close()
    instance.flag = True
What happens is that you put back into the appPool a COM reference that you got from CoGetInterfaceAndReleaseStream. But that reference was created specifically for the new thread, and you then call CoMarshalInterThreadInterfaceInStream on this new reference.
That's what is wrong.
You must always use the original COM reference, from the thread that created it, to be able to call CoMarshalInterThreadInterfaceInStream repeatedly.
So, to solve the problem, you must change how your app pool works: use some kind of "in use" flag, but don't touch the original COM reference.
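A sketch of that pool change (the WordSlot class and its in_use flag are illustrative names, not a standard API): the main thread keeps the original reference and marshals a fresh stream from it for every job, and the worker thread never puts its thread-local proxy back into the pool:

import pythoncom, win32com.client, threading

class WordSlot:
    def __init__(self):
        self.app = win32com.client.DispatchEx('Word.Application')  # original reference, main thread only
        self.in_use = False

def run_in_thread(slot, stream, path):
    pythoncom.CoInitialize()
    try:
        # unmarshal a proxy that is only valid inside this thread
        wordApp = win32com.client.Dispatch(pythoncom.CoGetInterfaceAndReleaseStream(stream, pythoncom.IID_IDispatch))
        doc = wordApp.Documents.Open(path)
        # ... read out some content ...
        doc.Close()
        del doc, wordApp           # drop the thread-local proxy; never re-queue it
    finally:
        pythoncom.CoUninitialize()
        slot.in_use = False        # free the slot; the original reference stays untouched

if __name__ == '__main__':
    pythoncom.CoInitialize()
    slot = WordSlot()
    for path in (r'C:\Temp\file4.docx', r'C:\Temp\file5.doc'):
        slot.in_use = True
        # marshal a fresh stream from the ORIGINAL reference for each job
        stream = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, slot.app)
        worker = threading.Thread(target=run_in_thread, args=(slot, stream, path), daemon=True)
        worker.start()
        worker.join()              # real code would wait on the in_use flag instead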
Long story short, I am writing Python code that occasionally causes an underlying module to spit out complaints in the terminal that I want my code to respond to. My question is whether there is some way I can capture all terminal output as a string while the program is running, so that I can parse it and execute some handler code. These are not errors that crash the program entirely, and it's not a situation where I can simply use a try/except. Thanks for any help!
Edit: Running on Linux
There are several solutions to your need. The easiest would be to use a shared buffer of sorts and send all your package's output there instead of to stdout (with a regular print), thus keeping your personal streams under your package's control.
Since you probably already have some code with print, or you want it to work with minimal change, I suggest using contextlib.redirect_stdout with a context manager.
Give it a shared io.StringIO instance and wrap all your methods with it.
You can even create a decorator to do it automatically.
Something like:
# decorator
from contextlib import redirect_stdout
import io
import functools

SHARED_BUFFER = io.StringIO()

def std_redirecter(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        with redirect_stdout(SHARED_BUFFER):
            print('foo')
            print('bar')
            func(*args, **kwargs)
    return inner

# your files
@std_redirecter
def writing_to_stdout_func():
    print('baz')

# invocation
writing_to_stdout_func()
string = SHARED_BUFFER.getvalue()  # 'foo\nbar\nbaz\n'
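If the module writes its complaints to stderr rather than stdout, the standard library's contextlib.redirect_stderr works the same way; a minimal sketch:

from contextlib import redirect_stderr
import io, sys

buffer = io.StringIO()
with redirect_stderr(buffer):
    print('some complaint', file=sys.stderr)
print(buffer.getvalue())  # 'some complaint\n'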
I am using pyvisa to communicate via USB with an instrument, and I am able to control it properly. Since it is a high-voltage source and it is dangerous to forget it with the high voltage turned on, I wanted to implement the __del__ method to turn off the output when the code execution finishes. So basically I wrote this:
import pyvisa as visa

class Instrument:
    def __init__(self, resource_str='USB0::1510::9328::04481179::0::INSTR'):
        self._resource_str = resource_str
        self._resource = visa.ResourceManager().open_resource(resource_str)

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

    def __del__(self):
        self.set_voltage(0)

instrument = Instrument()
instrument.set_voltage(555)
The problem is that it is not working and in the terminal I get
$ python3 comunication\ test.py
Exception ignored in: <function Instrument.__del__ at 0x7f4cca419820>
Traceback (most recent call last):
File "comunication test.py", line 12, in __del__
File "comunication test.py", line 9, in set_voltage
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 197, in write
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 157, in write_raw
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/resource.py", line 190, in session
pyvisa.errors.InvalidSession: Invalid session handle. The resource might be closed.
I guess what is happening is that pyvisa is being "deleted" before the __del__ method of my object is called. How can I prevent this? How can I tell Python that pyvisa is "important" for objects of the Instrument class, so that it is not unloaded until all of them have been destroyed?
In general, you cannot assume that __del__ will be called. If you're coming from an RAII (resource acquisition is initialization) language such as C++, Python makes no similar guarantee about destructors.
To ensure some action is reversed, you should consider an alternative such as a context manager:
from contextlib import contextmanager

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    ...
    try:
        ...  # yield something
    finally:
        ...  # set voltage of resource to 0 here
You would use it like
with instrument(<something>) as inst:
    ...
# guaranteed by here to be set to 0.
I believe Ami Tavory's answer is generally considered the recommended solution, though context managers aren't always suitable, depending on how the application is structured.
The other option is to explicitly call the cleanup functions when the application is exiting. You can make it safer by wrapping the whole application in a try/finally, with the finally clause doing the cleanup. Note that if you don't include an except clause, the exception will automatically be re-raised after executing the finally, which may be what you want. Example:
app = Application()
try:
    app.run()
finally:
    app.cleanup()
Be aware, though, that you potentially just threw an exception. If the exception happened, for example, mid-communication then you may not be able to send the command to reset the output as the device could be expecting you to finish what you had already started.
Finally I found my answer here, using the atexit package. Based on my tests up to now, this does exactly what I wanted:
import pyvisa as visa
import atexit

class Instrument:
    def __init__(self, resource_str):
        self._resource = visa.ResourceManager().open_resource(resource_str)
        # configure a safe shutdown for when the class instance is destroyed:
        def _atexit():
            self.set_voltage(0)
        atexit.register(_atexit)  # https://stackoverflow.com/a/41627098

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

instrument = Instrument(resource_str='USB0::1510::9328::04481179::0::INSTR')
instrument.set_voltage(555)
The advantage of this solution is that it is user-independent: no matter how the user instantiates the Instrument class, in the end the high voltage will be turned off.
I faced the same kind of safety issue with another type of connected device. I could not safely predict the behavior of the __del__ method, as discussed in questions like I don't understand this python __del__ behaviour.
I ended up with a context manager instead. It would look like this in your case:
class Instrument:
    ...  # __init__ and set_voltage as in the question

    def __enter__(self):
        """
        Nothing to do.
        """
        return self

    def __exit__(self, type, value, traceback):
        """
        Set back to zero voltage.
        """
        self.set_voltage(0)

with Instrument() as instrument:
    instrument.set_voltage(555)
I have a question about when Python closes files, based on testing the following code:
1.
def file_close_test():
    f = open('/tmp/test', 'w+')

if __name__ == '__main__':
    file_close_test()
    # wait to see whether the file is closed.
    import time
    time.sleep(30)
2.
def file_close_on_exc_test():
    f = open('/tmp/test', 'w+')
    raise Exception()

def exception_wrapper():
    try:
        file_close_on_exc_test()
    except:
        pass
    # wait to see whether the file is closed.
    import time
    time.sleep(10)

if __name__ == '__main__':
    exception_wrapper()
    import time
    time.sleep(30)
The file object is closed when file_close_test exits, because there is no longer any reference to it.
After the exception is raised, the file object is not closed, so I think the related stack data is not released.
After exception_wrapper exits, the file is closed automatically.
Can you explain this for me? Thanks.
The exception includes a traceback object which can be used to access all of the local variables in any of the stack frames active when the exception was thrown. That means you can still access the file until the exception context is cleared.
Even after the sleep() at the end of exception_wrapper you could use sys.exc_info to get at the open file like this:
import sys

tb = sys.exc_info()[2]
print tb.tb_next.tb_frame.f_locals['f']
All of this is of course specific to the particular Python implementation you are using. Other implementations may not implicitly close files at all until they are garbage collected.
The bottom line is you should never depend on Python's reference counting or garbage collection to clean up resources like open files, always do it explicitly.
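For example, a with statement closes the file deterministically on every implementation, whether or not an exception is raised:

def file_close_on_exc_test():
    with open('/tmp/test', 'w+') as f:
        raise Exception()  # f is guaranteed to be closed before the exception propagates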