Python Subprocess Calls into Matplotlib on Windows

I'm running Python 3.5 under Windows 10, and I'm using matplotlib.pyplot to generate PGF files, which are images for use in LaTeX.
I'm running a front-end GUI that gives the end user configuration options and then makes calls into matplotlib.pyplot.savefig(), which generates and saves the image.
The problem I have is that the matplotlib backend used (backend_pgf.py) makes a subprocess.Popen() call that forces a Windows console window (cmd) to pop up in order to do the required LaTeX processing. Visually it's distracting to the user and should be hidden.
Here's that code fragment:
latex = subprocess.Popen([str(self.texcommand), "-halt-on-error"],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         cwd=self.tmpdir)
What I want to do is prevent that console window from displaying. I know I can use subprocess.STARTUPINFO() to set dwFlags and prevent this console window from displaying (or pass in shell=True).
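For reference, a minimal sketch of that STARTUPINFO approach for a single call might look like this (Windows only; "pdflatex" here is just a stand-in for the actual TeX command):
import subprocess

si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW  # honor the wShowWindow field
si.wShowWindow = subprocess.SW_HIDE            # hide the console window

latex = subprocess.Popen(["pdflatex", "-halt-on-error"],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         startupinfo=si)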
I could override the class in question, but that class is nested deep in other classes and modules, so you can imagine the complexity of managing the code base for a simple function change.
My question then is... how to make this change in a logically deep package like matplotlib?
Thanks much.
Rich

If you are in full control of the app you are writing, and never want the terminal window to appear (I assume you don't), you can resort to monkey-patching the subprocess.Popen call itself to always set that flag.
It is relatively painless: just call a function like this in the initialization code for your app:
def patch():
    import subprocess
    original_popen = subprocess.Popen

    def Popen(*args, **kwargs):
        # create a STARTUPINFO object that hides the console window
        startupinfo = subprocess.STARTUPINFO()
        startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        startupinfo.wShowWindow = subprocess.SW_HIDE
        kwargs["startupinfo"] = startupinfo
        return original_popen(*args, **kwargs)

    subprocess.Popen = Popen
It does not matter that this is not changing the call deep inside matplotlib, as long as this function is called before matplotlib itself is initialized. Even then, it would only fail if the module in matplotlib did from subprocess import Popen (so that it held an independent reference to the original Popen). But if that is happening, so much the better: just patch the Popen name in that matplotlib submodule instead.
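A hedged sketch of that fallback; since patch() above has already replaced subprocess.Popen with the wrapper, rebinding the submodule's name points it at the wrapper too:
import subprocess
import matplotlib.backends.backend_pgf as backend_pgf

# only needed if the submodule did "from subprocess import Popen"
if hasattr(backend_pgf, "Popen"):
    backend_pgf.Popen = subprocess.Popen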

For problems like this, where the changes would be inconsequential to the regular functioning of the library, I often just monkey-patch the offending function or method. In your case it would be something like this:
from matplotlib.backends.backend_pgf import LatexManager

def __init_patched__(self):
    # copy/paste the original __init__ source here and make your changes
    ...

LatexManager.__init__ = __init_patched__
Of course, you will need to update the patched code if the matplotlib source changes.

Related

pyqtgraph: I want to execute pyqtgraph in a new process

Dear pyqtgraph masters,
I want to execute pyqtgraph in a newly created process.
In my project there is a Python module, trading.py. This module creates a new process using this code:
p = Process(target=realDataProcess.realDataProcessStart, args=(self.TopStockList, self.requestCodeList, self.account))
As you know, to keep the pyqtgraph window displayed on the monitor, we have to run the Qt event loop, like below:
QApplication.instance().exec_()
But in the new process, the code above doesn't seem to work. My graph pops up and then suddenly disappears.
Is there any solution to this? Please help me out.
My experience with multiprocessing and pyqtgraph is that you can't create a new pyqtgraph window from a new process; you can only use pyqtgraph in your main process.
I think there was an explanation for this somewhere on the net.
If you want to create additional processes to do something besides pyqtgraph, put your pyqtgraph code below if __name__ == '__main__':, as shown in the sketch below.
Otherwise, you will have as many windows as you have processes.
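A minimal sketch of that layout (the worker body is a hypothetical placeholder):
from multiprocessing import Process
import pyqtgraph as pg

def worker():
    pass  # non-GUI work for the child process goes here

if __name__ == '__main__':
    app = pg.mkQApp()
    p = Process(target=worker)
    p.start()
    pg.plot([1, 3, 2, 4])  # the plot window lives only in the main process
    app.exec_()
    p.join()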
You may want to use the class RemoteGraphicsView, which uses the Multiprocessing utility library.
Multiprocessing utility library
This library provides:
- a simple mechanism for starting a new python interpreter process that can be controlled from the original process (this allows, for example, displaying and manipulating plots in a remote process while the parent process is free to do other work)
- a proxy system that allows objects hosted in the remote process to be used as if they were local
- Qt signal connection between processes
- very simple in-line parallelization (fork only; does not work on Windows) for number-crunching
You can actually use this class to make a graph that executes in a new process, in a second window, if you want.
Take a look at these two examples: examples/RemoteGraphicsView.py and examples/RemoteSpeedTest.py.
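Roughly, along the lines of the bundled example (a sketch; check the examples that ship with your pyqtgraph version for the exact API):
import pyqtgraph as pg
from pyqtgraph.widgets.RemoteGraphicsView import RemoteGraphicsView

app = pg.mkQApp()
view = RemoteGraphicsView()       # spawns and manages a second process
view.show()
plt = view.pg.PlotItem()          # proxy to a PlotItem living in the remote process
view.setCentralItem(plt)
plt.plot([1, 4, 2, 3, 6, 2], pen='g')  # executed remotely via the proxy
app.exec_()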

Python script to interact with fdisk prompts automatically

This Python program enters fdisk. I see the output. fdisk is an interactive program. How do I get the Python program to pass an "m" to the first prompt and press Enter?
import subprocess

a = "/dev/sda"
x = subprocess.call(["fdisk", a])
print x
I'd rather not import a new module/library, but I could. I've tried different syntax with subprocess.call() and extra parameters in the above, but nothing seems to work; I get different errors. I've reviewed the Python documentation. I want to feed input and press Enter in the subsequent, interactive menu options of fdisk.
Check out the pexpect library (I know you didn't want an extra module, but you want to use the best tool for the job). It's pure Python, with no compiled submodules, so installation is a snap. Basically, it does the same thing in Python as the classic Unix utility expect - spawns child applications, controls them, and responds to expected patterns in their output. It's great for automation, and especially application testing, where you can quickly feed the newest build of a command-line program a series of inputs and guide the interaction based on what output appears.
In case you just don't want another module at all, you can always fall back on the subprocess module's Popen() constructor. It spawns and creates a connection to a child process, allowing you to communicate with it as needed, and in fact pexpect relies a great deal on it. I personally think using pexpect is more intuitive than subprocess.Popen(), but that's just me. YMMV.
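A minimal sketch of the pexpect approach, assuming the stock fdisk prompt text (adjust the expected strings to match your fdisk's actual output):
import pexpect

child = pexpect.spawn('fdisk /dev/sda')
child.expect_exact('Command (m for help): ')
child.sendline('m')                    # send "m" plus Enter
child.expect_exact('Command (m for help): ')
print child.before                     # everything the "m" command printed
child.sendline('q')                    # quit without writing changes
child.close()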

Python multiprocessing, PyAudio, and wxPython

I have a wxPython GUI, and would like to use multiprocessing to create a separate process which uses PyAudio. That is, I want to use PyAudio, wxPython, and the multiprocessing module, but although I can use any two of these, I can't use all three together. Specifically, if from one file I import wx, and create a multiprocessing.Process which opens PyAudio, PyAudio won't open. Here's an example:
file: A.py
import wx
import time

use_multiprocessing = True
if use_multiprocessing:
    from multiprocessing import Process as X
else:
    from threading import Thread as X

import B

if __name__ == "__main__":
    p = X(target=B.worker)
    p.start()
    time.sleep(5.)
    p.join()
file: B.py
import pyaudio

def worker():
    print "11"
    feed = pyaudio.PyAudio()
    print "22"
    feed.terminate()
In all my tests I see 11 printed, but the problem is that I don't see 22 for the program as shown.
If I only comment out import wx, I see 22 and pyaudio loads.
If I only set use_multiprocessing=False, so I use threading instead, I see 22 and pyaudio loads.
If I do something else in worker, it runs (only pyaudio doesn't run).
I've tried this with Python 2.6 and 2.7; PyAudio 0.2.4, 0.2.7, and 0.2.8; and wx 3.0.0.0 and 2.8.12.1; and I'm using OSX 10.9.4.
There are two reasons this can happen, but they look pretty much the same.
Either way, the root problem is that multiprocessing is just forking a child. This could be either causing CoreFoundation to get confused about its runloop*, or causing some internal objects inside wx to get confused about its threads.**
But you don't care why your child process is deadlocking; you want to know how to fix it.
The simple solution is to, instead of trying to fork and then clean up all the stuff that shouldn't be copied, spawn a brand-new Python process and then copy over all the stuff that should.
As of Python 3.4, there are actually two variations on this. See Contexts and start methods for details, and issue #8713 for the background.
But you're on 2.6, so that doesn't help you. So, what can you do?
The easiest answer is to switch from multiprocessing to the third-party library billiard. billiard is a fork of Python 2.7's multiprocessing, which adds many of the features and bug fixes from both Python 3.x and Celery.
I believe new versions have the exact same fix as Python 3.4, but I'm not positive (sorry, I don't have it installed, and can't find the docs online…).
But I'm sure that it has a similar but different solution, inherited from Celery: call billiard.forking_enable(False) before calling anything else on the library. (Or, from outside the program, set the environment variable MULTIPROCESSING_FORKING_DISABLE=1.)
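A sketch of that usage, assuming billiard is installed (pip install billiard) and that your billiard version still exposes forking_enable:
import billiard
billiard.forking_enable(False)   # must run before anything else touches the library

def worker():
    import pyaudio               # imported fresh in the spawned child
    feed = pyaudio.PyAudio()
    feed.terminate()

if __name__ == "__main__":
    p = billiard.Process(target=worker)
    p.start()
    p.join()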
* Usually, CF can detect the problem and call __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__, which logs an error message and fails. But sometimes it can't, and will end up waiting forever for an event that nobody can send. Google that string for more information.
** See #5527 for details on the equivalent issue with threaded Tkinter, and the underlying problem. This one affects all BSD-like *nixes, not just OS X.
If you can't solve the problem by fixing or working around multiprocessing, there's another option. If you can spin off the child process before you create your main runloop or create any threads, you can prevent the child process from getting confused. This doesn't always work, but it often does, so it may be worth trying.
That's easy to do with Tkinter or PySide or another library that doesn't actually do anything until you call a function like mainloop or construct an App instance.
But with wx, I think it does some of the setup before you even touch anything beyond the import. So, you may have to do something a little hacky and move the import wx after the p.start().
In your real app, you probably aren't going to want to start doing audio until some trigger from the GUI. This means you'll need to create some kind of sync object, like an Event. So, you create the Event, then start the child process. The child initializes the audio, and then waits on the Event. And then, where you'd like to launch the child from the GUI, you instead just signal the Event.
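A sketch of that flow under the answer's assumptions (names like audio_worker are placeholders; the wx GUI code is elided):
from multiprocessing import Process, Event

def audio_worker(start_event):
    import pyaudio                   # import inside the child only
    feed = pyaudio.PyAudio()         # initialize audio before any GUI exists
    start_event.wait()               # block until the GUI signals us
    # ... start recording/playback here ...
    feed.terminate()

if __name__ == "__main__":
    start_event = Event()
    p = Process(target=audio_worker, args=(start_event,))
    p.start()                        # spawn the child before touching wx
    import wx                        # now bring up the GUI
    app = wx.App()
    # ... build the frame; have the button handler call start_event.set() ...
    app.MainLoop()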

Getting a list of cameras or render globals from a Maya scene file without opening the scene in Maya

Is it possible with Python to read a Maya scene file, without opening it, and get a list of cameras or render globals settings, or perform some operations on it?
I need this so that when there are multiple scene files I don't have to open each Maya scene; I can just tweak it from a Python script. I would really appreciate it if someone could show me how to start; the rest I will do myself.
It's possible, it may just not be practical. A Maya ASCII file is pretty trivial to parse; a Maya binary file, not so much. But it may be a bit premature to discuss these matters.
First things first. Maya does offer a batch mode, and the batch mode does not use up a full license. It still loads a full Maya environment, just with no GUI. This is the perfect platform to inject scripts. It doesn't really fulfill your request of not opening Maya, but it fulfills all the other requirements. There are a few ways of using this batch mode, but mostly it boils down to two general methods.
Method 1, using mayabatch: The mayaXX/bin/ directory contains a command called mayabatch. You can simply call this from a cmd/shell batch file/shell script. This is an extremely simple way of calling existing scripts, assuming mayabatch is in your environment's path and your script is in the Maya script path. Let's assume for simplicity that your script looks as follows (demo.py):
import maya.cmds as cmd

def func():
    print cmd.ls()

# we want this to execute on import
func()
Now calling this for one file under Windows would look as follows:
mayabatch -command "python(""import demo"") " -file filename.ma
Calling this over all the Maya files in the current folder would then simply be:
for %i in (*.ma) do mayabatch -command "python(""import demo"") " -file %i
The quoting rules are a bit different on Mac and Linux, but should be easy to find; refer to the manual of your shell.
Method 2, using mayapy and standalone:
You can also directly call the Python script with mayapy, located in the same directory as mayabatch. The standalone can be called from other Python interpreters too, if you include the Maya modules in the system path. Now the script must be changed a bit so it does the same thing (demo_direct.py):
# init maya
import maya.standalone
maya.standalone.initialize(name='python')

import maya.cmds as cmd
import glob

def func():
    print cmd.ls(shapes=True)

for file in glob.glob('*.m[ab]'):
    cmd.file(file, o=True)
    func()
Call this from the command line with:
mayapy demo_direct.py
Neither of these methods loads Maya's graphical user interface, so you can not call things like playblast that rely on a GUI (and thus a full Maya license). Nothing says you can not do a similar load loop in the GUI, as above.
Then for the methods without actually loading Maya: this is a bit tricky, and not necessarily worth it, since batch nodes are pretty fast because they don't evaluate everything on load. For a sample script that parses a Maya ASCII file for frame ranges, see the following post; you might find it useful. For Maya binary files there is a Python module in cgkit that will help you read Maya binary chunks. Unfortunately it doesn't do anything for understanding the data.
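As a starting point, here is a minimal sketch of parsing a Maya ASCII (.ma) file for camera nodes without Maya at all; it assumes the standard createNode syntax and will miss anything more exotic:
import re

CAMERA_NODE = re.compile(r'createNode\s+camera\s+-n\s+"([^"]+)"')

def list_cameras(ma_path):
    cameras = []
    with open(ma_path) as f:
        for line in f:
            m = CAMERA_NODE.search(line)
            if m:
                cameras.append(m.group(1))
    return cameras

print list_cameras("scene.ma")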

non-interactive Python command history

I know the Python interpreter and IPython have an easy way to browse through the history of commands; that is, in interactive Python programming.
My problem/question:
I have a GUI-based Python tool that lets me click and enter values in fields before hitting the "PLOT" button, and I get a plot on screen. What I am looking for is a way to access a "minimum script" that exactly reproduces the plot.
So I was wondering if there is a way to request a backlog of all the commands a non-interactive Python instance went through.
If it is not built in, could someone advise a way to automatically dump function calls to a file as they are run?
The simplest way to do that would be to Pickle your plot object. Then you can just reload the pickle file and the object will be in memory just as it was when dumped.
It should only take a couple of lines to implement a dump and reload feature in your program.
This of course doesn't give you a list of commands or anything like that to regenerate the figure, but it does give you the exact state of the object.
If you are using matplotlib to do the plotting, then the image itself is not picklable, but you could create a class that contains all the information you entered that is passed to the matplotlib routines, and pickle that, again saving the state; for example, along the lines of the sketch below.
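A minimal sketch of that idea; PlotState and its fields are hypothetical stand-ins for whatever your GUI actually collects:
import pickle

class PlotState(object):
    """Holds the user's inputs rather than the rendered figure."""
    def __init__(self, x, y, title, style):
        self.x, self.y = x, y
        self.title, self.style = title, style

    def render(self, ax):
        # replay the stored settings through matplotlib
        ax.plot(self.x, self.y, self.style)
        ax.set_title(self.title)

state = PlotState([0, 1, 2], [3, 1, 4], "demo", "o-")
with open("plot.pkl", "wb") as f:
    pickle.dump(state, f)

# later, or in another session:
with open("plot.pkl", "rb") as f:
    restored = pickle.load(f)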
