pyqtgraph : I want to execute pyqtgraph in new process - python

Dear pyqtgraph masters,
I want to execute pyqtgraph in a newly created process.
In my project there is a Python module: trading.py. This module creates a new process with this code:
p = Process(target = realDataProcess.realDataProcessStart, args=(self.TopStockList, self.requestCodeList, self.account))
And as you know, to keep the pyqtgraph window displayed on the monitor, we have to run the Qt event loop, like below:
QApplication.instance().exec_()
But in the new process, the above code doesn't seem to work. My graph pops up and then suddenly disappears...
Is there any solution to this? Please help me out.

My experience with multiprocessing and pyqtgraph is that you can't create a new pyqtgraph window from a new process.
Therefore, you can only use pyqtgraph in your main process.
I think there was an explanation of this somewhere on the net.
If you want to create additional processes to do something besides pyqtgraph, put your pyqtgraph code below if __name__ == '__main__':
Otherwise, you will have as many windows as you have processes.
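For what it's worth, here is a minimal sketch of that layout (the worker function and data are made up for illustration): the plotting stays in the main process under the __main__ guard, and the extra process only does non-GUI work.

from multiprocessing import Process
import pyqtgraph as pg

def worker(data):
    # heavy, non-GUI work only; no pyqtgraph calls in here
    print(sum(data))

if __name__ == '__main__':
    p = Process(target=worker, args=([1, 2, 3],))
    p.start()
    # all pyqtgraph / Qt code stays in the main process
    app = pg.mkQApp()
    pg.plot([1, 3, 2, 4], title="main process plot")
    app.exec_()
    p.join()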

You may want to use the class RemoteGraphicsView, which uses the Multiprocessing utility library.
Multiprocessing utility library
This library provides:
- simple mechanism for starting a new python interpreter process that can be controlled from the original process (this allows, for example, displaying and manipulating plots in a remote process while the parent process is free to do other work)
- proxy system that allows objects hosted in the remote process to be used as if they were local
- Qt signal connection between processes
- very simple in-line parallelization (fork only; does not work on windows) for number-crunching
You can actually use this class to make a graph that executes in a new process, in a second window, if you want.
Take a look at these two examples: examples/RemoteGraphicsView.py and examples/RemoteSpeedTest.py.
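Here is a minimal sketch along the lines of examples/RemoteGraphicsView.py (written against a reasonably recent pyqtgraph; check the bundled example for the exact API of your version):

import pyqtgraph as pg
from pyqtgraph.widgets.RemoteGraphicsView import RemoteGraphicsView

app = pg.mkQApp()

# the rendering for this view happens in a separate python process
view = RemoteGraphicsView()
view.show()

# view.pg is a proxy to the pyqtgraph module living in the remote process
plt = view.pg.PlotItem()
view.setCentralItem(plt)
plt.plot([1, 4, 2, 3, 6, 5], clear=True)

app.exec_()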

Related

Python Subprocess Calls into Matplotlib on Windows

I'm running Python 3.5 under Windows 10, and I'm using matplotlib.pyplot to generate PGF files, which are images I use in LaTeX.
I'm running a front-end GUI that gives the end user configuration options and then makes calls into matplotlib.pyplot.savefig(), which generates and saves the image.
The problem I have is that the matplotlib backend used (backend_pgf.py) makes a subprocess.Popen() call that forces a Windows console window (cmd) to pop up in order to do the required LaTeX processing. Visually it's distracting to the user and should be hidden.
Here's that code fragment:
latex = subprocess.Popen([str(self.texcommand), "-halt-on-error"],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         cwd=self.tmpdir)
What I want to do is prevent that console window from displaying. I know I can use subprocess.STARTUPINFO() to set dwFlags and prevent this console window from displaying (or pass in shell=True).
I could override the class in question, but that class is nested deep in other classes and modules, so you can imagine the complexity of managing the code base for a simple function change.
My question then is... how to make this change in a logically deep package like matplotlib?
Thanks much.
Rich
If you are in full control of the app you are writing, and never want the terminal window to appear (I assume you don't), you can resort to monkey-patching the subprocess.Popen call itself so it always sets that flag.
It is relatively painless - just call a function like this in the initialization code for your app:
def patch():
    import subprocess
    original_popen = subprocess.Popen
    def Popen(*args, **kwargs):
        # create a STARTUPINFO object that hides the console window
        startupinfo = subprocess.STARTUPINFO()
        startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        kwargs["startupinfo"] = startupinfo
        return original_popen(*args, **kwargs)
    subprocess.Popen = Popen
It does not matter that this does not change the call where it is deeply nested inside matplotlib - as long as this function is called before matplotlib itself is initialized. Even then, it would only fail if the matplotlib module did from subprocess import Popen (so that it held an independent reference to the original Popen). But if that is happening, so much the better: just patch the Popen name in that matplotlib submodule instead.
For problems like this where the changes would be inconsequential to the regular functioning of the library, I often just monkey-patch the offending function/method. In your case it would be something like this
from matplotlib.backends.backend_pgf import LatexManager

def __init_patched__(self):
    # copy/paste the original source here and make your changes

LatexManager.__init__ = __init_patched__
Of course, you will need to update the patched code if the matplotlib source changes

hijacking terminal stdin from python

Is there a way in python to hijack the terminal stdin? Unix only solutions will do fine.
I'm currently writing a small wrapper around top, as I want to be able to monitor named processes, e.g. all running python instances. Basically I'm calling pgrep to get process ids and then running top with the -p option.
Overall this script has worked satisfactorily for a few years now (well, with the caveat that top -p only accepts 20 pids...). However, I would now like to adjust the script to update the call to top whenever new processes matching the name pattern are born. This also works relatively nicely, but... any options set interactively in top get lost every time I update the pid list, by natural causes, since I stop and restart top. Therefore I would like to hijack the terminal stdin somehow, to be able to track whatever settings are in effect so I can restore them after updating the pid list, or even halt updating if necessary (e.g. if top is awaiting more instructions from the user).
Now perhaps what I'm trying to achieve is just silly and there are better ways to do it; if so, I'd highly appreciate enlightenment.
(Oh, the tag ps was used as the tag top does not exist and I'm too new here to define new tags; after all, the two utilities are related.)
thanks \p
What you are doing sounds like a bit of a hack. I would just write a Python script using psutil that does exactly what you want. Whatever information you are interested in, psutil should give it to you - and more.
Quick and dirty:
import psutil
import time

while True:
    processes = [p for p in psutil.process_iter() if 'python' in p.name()]
    for p in processes:
        # print out whatever information interests you
        print(
            p.pid,
            p.name(),
            p.cpu_percent(),
            p.io_counters().read_bytes,
            p.io_counters().write_bytes
        )
    time.sleep(10)
Link to Documentation: http://pythonhosted.org/psutil/

How to load from disk, process, then store data in a common hdf5 concurrently with python, pyqt, h5py?

Premise:
I've created a main window. One of the drop-down menus has a 'ProcessData' item. When it's selected, I create a QProgressDialog. I then do a lot of processing in the main loop and periodically update the label and percentage in the QProgressDialog.
My processing looks like: read a large amount of data from a file (numpy memmapped array), do some signal processing, write the output to a common h5py file. I iterate over the available input files, and all of the output is stored in a common h5py hdf5 file. The entire process takes about two minutes per input file and pins one CPU to 100%.
Goal:
How do I make this process non-blocking, so that the UI is still responsive? I'd still like my processing function to be able to update the QProgressDialog and its associated label.
Can I extend this to process more than one dataset concurrently and retain the ability to update the progressbar info?
Can I write into h5py from more than one thread/process/etc.? Will I have to implement locking on the write operation?
Software Versions:
I use Python 3.3+ with numpy/scipy/etc. The UI is in PyQt4 4.11 / Qt 4.8, although I'd be interested in solutions that use Python 3.4 (and therefore asyncio) or PyQt5.
This is quite a complex problem to solve, and this format is not really suited to providing complete answers to all your questions. However, I'll attempt to put you on the right track.
How do I make this process non-blocking, so that the UI is still responsive? I'd still like my processing function to be able to update the QProgressDialog and its associated label.
To make it non-blocking, you need to offload the processing into a Python thread or QThread. Better yet, offload it into a subprocess that communicates progress back to the main program via a thread in the main program.
I'll leave you to implement (or ask another question on) creating subprocesses or threads. However, you need to be aware that only the MainThread can access GUI methods. This means you need to emit a signal if using a QThread, or use QApplication.postEvent() from a Python thread (I've wrapped the latter up into a library for Python 2.7 here; Python 3 compatibility will come one day).
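As a rough sketch of the QThread-plus-signal approach (PyQt4, matching your versions; the class and signal names here are only illustrative):

from PyQt4 import QtCore

class ProcessingThread(QtCore.QThread):
    progress = QtCore.pyqtSignal(int, str)   # percent done, label text

    def __init__(self, input_files, parent=None):
        super(ProcessingThread, self).__init__(parent)
        self.input_files = input_files

    def run(self):
        n = len(self.input_files)
        for i, path in enumerate(self.input_files):
            # ... memmap the input, do the signal processing, write to hdf5 ...
            self.progress.emit(int(100.0 * (i + 1) / n), path)

# in the main window, roughly:
#   thread = ProcessingThread(files)
#   thread.progress.connect(progress_dialog.setValue)   # runs in the GUI thread
#   thread.start()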
Can I extend this to process more than one dataset concurrently and retain the ability to update the progressbar info?
Yes. One example would be to spawn many subprocesses. Each subprocess can be configured to send messages back to an associated thread in the main process, which communicates the progress information to the GUI via the method described for the above point. How you display this progress information is up to you.
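A rough sketch of that layout (everything here is illustrative, not from your code): each worker process pushes progress onto a shared Queue, and a thread in the main process drains it and forwards it to the GUI.

import threading
from multiprocessing import Process, Queue

def worker(job_id, progress_q):
    for pct in range(0, 101, 20):
        # ... process one slice of the dataset ...
        progress_q.put((job_id, pct))

def relay(progress_q, report):
    # daemon thread in the main process; `report` should be a thread-safe
    # way into the GUI, e.g. the emit method of a Qt signal
    while True:
        job_id, pct = progress_q.get()
        report(job_id, pct)

if __name__ == '__main__':
    q = Queue()
    threading.Thread(target=relay, args=(q, print), daemon=True).start()
    procs = [Process(target=worker, args=(i, q)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()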
Can I write into h5py from more than one thread/process/etc.? Will I have to implement locking on the write operation?
You should not write to an hdf5 file from more than one thread at a time. You will need to implement locking. I think possibly even read access should be serialised.
A colleague of mine has produced something along these lines for Python 2.7 (see here and here), you are welcome to look at it or fork it if you wish.
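If you do end up writing from several processes, a minimal sketch of serialising the writes with a shared lock (file and dataset names are made up) could look like this; the heavy work happens outside the lock and only the h5py write is guarded:

import h5py
from multiprocessing import Process, Lock

def process_one(h5_path, dataset_name, data, lock):
    result = [x * x for x in data]          # stand-in for the real processing
    with lock:                              # one writer at a time
        with h5py.File(h5_path, 'a') as f:
            f.create_dataset(dataset_name, data=result)

if __name__ == '__main__':
    lock = Lock()
    jobs = [Process(target=process_one,
                    args=('output.h5', 'set_%d' % i, range(10), lock))
            for i in range(4)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()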

Python multiprocessing, PyAudio, and wxPython

I have a wxPython GUI, and would like to use multiprocessing to create a separate process which uses PyAudio. That is, I want to use PyAudio, wxPython, and the multiprocessing module, but although I can use any two of these, I can't use all three together. Specifically, if from one file I import wx, and create a multiprocessing.Process which opens PyAudio, PyAudio won't open. Here's an example:
file: A.py
import wx
import time

use_multiprocessing = True
if use_multiprocessing:
    from multiprocessing import Process as X
else:
    from threading import Thread as X

import B

if __name__ == "__main__":
    p = X(target=B.worker)
    p.start()
    time.sleep(5.)
    p.join()
file: B.py
import pyaudio

def worker():
    print "11"
    feed = pyaudio.PyAudio()
    print "22"
    feed.terminate()
In all my tests I see 11 printed, but the problem is that I don't see 22 for the program as shown.
If I only comment out import wx, I see 22 and pyaudio loads.
If I only set use_multiprocessing=False so I use threading instead, I see 22 and pyaudio loads.
If I do something else in worker, it runs (only pyaudio doesn't run).
I've tried this with Python 2.6 and 2.7; PyAudio 0.2.4, 0.2.7, and 0.2.8; and wx 3.0.0.0 and 2.8.12.1; and I'm using OSX 10.9.4
There are two reasons this can happen, but they look pretty much the same.
Either way, the root problem is that multiprocessing is just forking a child. This could be either causing CoreFoundation to get confused about its runloop*, or causing some internal objects inside wx to get confused about its threads.**
But you don't care why your child process is deadlocking; you want to know how to fix it.
The simple solution is to, instead of trying to fork and then clean up all the stuff that shouldn't be copied, spawn a brand-new Python process and then copy over all the stuff that should.
As of Python 3.4, there are actually two variations on this. See Contexts and start methods for details, and issue #8713 for the background.
But you're on 2.6, so that doesn't help you. So, what can you do?
The easiest answer is to switch from multiprocessing to the third-party library billiard. billiard is a fork of Python 2.7's multiprocessing, which adds many of the features and bug fixes from both Python 3.x and Celery.
I believe new versions have the exact same fix as Python 3.4, but I'm not positive (sorry, I don't have it installed, and can't find the docs online…).
But I'm sure that it has a similar but different solution, inherited from Celery: call billiard.forking_enable(False) before calling anything else on the library. (Or, from outside the program, set the environment variable MULTIPROCESSING_FORKING_DISABLE=1.)
* Usually, CF can detect the problem and call __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__, which logs an error message and fails. But sometimes it can't, and will end up waiting forever for an event that nobody can send. Google that string for more information.
** See #5527 for details on the equivalent issue with threaded Tkinter, and the underlying problem. This one affects all BSD-like *nixes, not just OS X.
If you can't solve the problem by fixing or working around multiprocessing, there's another option. If you can spin off the child process before you create your main runloop or create any threads, you can prevent the child process from getting confused. This doesn't always work, but it often does, so it may be worth trying.
That's easy to do with Tkinter or PySide or another library that doesn't actually do anything until you call a function like mainloop or construct an App instance.
But with wx, I think it does some of the setup before you even touch anything beyond the import. So, you may have to do something a little hacky and move the import wx after the p.start().
In your real app, you probably aren't going to want to start doing audio until some trigger from the GUI. This means you'll need to create some kind of sync object, like an Event. So, you create the Event, then start the child process. The child initializes the audio, and then waits on the Event. And then, where you'd like to launch the child from the GUI, you instead just signal the Event.
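A rough sketch of that Event pattern (Python 2 style to match your example; the names are made up): the child sets up PyAudio first, the GUI is only imported and built afterwards, and the Event is what tells the child to actually start.

import multiprocessing

def audio_worker(start_event):
    import pyaudio                    # audio is initialised in the child only
    feed = pyaudio.PyAudio()
    start_event.wait()                # block until the GUI says go
    # ... open a stream and play/record here ...
    feed.terminate()

if __name__ == "__main__":
    start_event = multiprocessing.Event()
    p = multiprocessing.Process(target=audio_worker, args=(start_event,))
    p.start()                         # child is forked before wx does any setup

    import wx                         # only now touch the GUI library
    app = wx.App(False)
    # build your Frame here and bind a handler that calls start_event.set()
    app.MainLoop()
    p.join()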

Python Multiprocessing module processes raising runtime error and do nothing

I am trying to use the multiprocessing module in Python 2.7 to create a GUI with wxPython that calls a separate module in a process that will graph things with matplotlib. However, every time it calls that module, the GUI instead just opens a copy of itself. Next I tried using the multiprocessing module in a simple example. In IDLE it appears to start the processes fine, but the processes don't actually run. When I run the code from the command line an AttributeError is raised, yet the code works fine when I switch all multiprocessing.Process to threading.Thread.
Here's the command line code:
http://imgur.com/QuCUWRD
I've tested this module before and it seemed to have worked, so I am probably just doing something silly, however I can't figure out my error at all!
EDIT:
In my GUI changing the line from
queue_thread = multiprocessing.Process(
    target=simple_queue_model.main_func, args=(self.inputs,))
to:
queue_thread = multiprocessing.Process(
    target=simple_queue_model.main_func(self.inputs))
causes the process to be called; however, the main GUI window freezes until the process finishes running, and a new GUI window is opened again, which I don't understand.
EDIT 2:
The previous change just causes my GUI to call main_func directly, not as a separate process. The line queue_thread.start() is what causes a new GUI to spawn, so overall this module isn't working at all for me.
When you start a new process with multiprocessing on Windows, a whole new virgin Python process is started, which then imports the various modules you need and passes variables using pickle. In this case, you have defined your functions in the __main__ namespace of the interactive session. To get them to run, save them to a module that can be imported from a new process. Be sure to consult the guidelines.
Conversely, threads can share memory and are directly passed the function definitions from the current namespace.
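In other words, something like this minimal layout (file and function names are made up) should behave on Windows: the worker lives in an importable module and the process is only started under the __main__ guard.

# worker_module.py
def main_func(inputs):
    print(inputs)

# run_it.py
import multiprocessing
import worker_module

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker_module.main_func,
                                args=('hello',))
    p.start()
    p.join()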
