External executable crashes when being launched from Python script - python

I am currently having an issue with an external executable crashing when it is launched from a Python script. So far I have tried various subprocess calls, as well as older methods such as os.system and os.startfile.
The exe doesn't have this issue when I run it normally from the command line or by double-clicking it in an Explorer window. I've looked around to see whether other people have had a similar problem. As far as I can tell, the closest likely cause is that the child process hangs because the piped I/O exceeds roughly 65K. So I've tried using Popen without PIPEs, and I have also pointed stdout and stdin at temporary files to try to alleviate the problem, but unfortunately none of this has worked.
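To be concrete, the temporary-file variant I attempted was along these lines (the executable name and arguments are placeholders, since I can't share the real code):

# Sketch of the temp-file redirection attempt; "tool.exe" and "config.xml"
# are placeholders, not the real executable or arguments.
import subprocess
import tempfile

with tempfile.TemporaryFile() as out, tempfile.TemporaryFile() as err:
    proc = subprocess.Popen(["tool.exe", "config.xml"],
                            stdout=out, stderr=err)
    returncode = proc.wait()
    out.seek(0)
    print(out.read())  # inspect whatever the exe managed to write before crashing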
What I eventually want to do is autorun this executable several times to produce various outputs, driven by XMLs. Everything else is pretty much in place, including the XML modifications that the executable requires. I have also tested the XML-modification portion of the code as a standalone script to make sure that this isn't the issue.
Due to the nature of the script I am a bit reluctant to put any actual code up on the net, as the company I work for is quite strict when it comes to showing code. I would ask my colleagues, but unfortunately I'm the only one here who has actually used Python.
Any help would be much appreciated.
Thanks.

As I've not had any response, I've gone down a different route with this. Rather than relying on the subprocess module to call the exe, I have moved that logic out into a batch file. The XMLs are still modified by the Python script, and most of the logic is still handled in the script. It's not what I would ideally have liked from the program, but it will have to do.
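Concretely, the Python side now just hands off to the batch file, something like this sketch (run_tool.bat and config.xml are placeholder names):

# Sketch of the hand-off to the batch file; file names are placeholders.
import subprocess

returncode = subprocess.call(["cmd", "/c", "run_tool.bat", "config.xml"])
if returncode != 0:
    print("batch run failed with exit code %d" % returncode)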
Thanks to anybody who gave this some thought and at least tried to look for an alternative, even if nobody answered.

Related

Run python in background

I've been searching a lot for this problem, but I didn't find any useful answer.
I want to make a script (let's say it is a library) which runs some functions at reboot. Inside my library, there will be a function like:
def randomfunction():
    print("randomtext")
After loading this function, every call to randomfunction() from any Python run (I will use .py files as CGI scripts) should return "randomtext".
Is that possible, or am I missing something?
It works in Python IDLE if I use exec, but I want this to live on the system. This would be on a Linux OS.
Don't you need some kind of Interprocess Communication for this?
Might be worth taking a look at these docs: Python IPC
Also, this SO post might help you. I think it offers a solution to what you are looking for.
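If IPC is the route you take, one minimal sketch is a small daemon started at reboot that serves the function over multiprocessing.connection; the address, authkey, and file names below are all assumptions for illustration:

# server.py -- hypothetical long-running helper, started once at boot
# (e.g. from a cron @reboot entry); it serves randomfunction() to any caller.
from multiprocessing.connection import Listener

def randomfunction():
    return "randomtext"

HANDLERS = {"randomfunction": randomfunction}

listener = Listener(("localhost", 6000), authkey=b"change-me")
while True:
    conn = listener.accept()
    try:
        name = conn.recv()                        # e.g. "randomfunction"
        handler = HANDLERS.get(name)
        conn.send(handler() if handler else None)
    finally:
        conn.close()

# client.py -- any CGI script can then ask the daemon to run the function.
from multiprocessing.connection import Client

conn = Client(("localhost", 6000), authkey=b"change-me")
conn.send("randomfunction")
print(conn.recv())                                # -> "randomtext"
conn.close()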

Python starts extremely slowly the first time after I reboot Windows

I apologize for not having a reproducible example. My problem is with a large base of proprietary code and I don't have an extract that shows the same behavior. Even better, it isn't my software and I know about 2% of how it works.
Simply, this Python program I'm dealing with takes about 80 seconds to complete its entire setup and get to the point where all its Flask code is running and the web server it creates is up and able to respond to requests. BUT that's only the first time I run it on Windows after rebooting. On subsequent runs of the Python script in question, it takes more like 10 seconds.
And the nutty part is, in a workgroup of 10 people, mine is the only computer that has the problem.
Things I can say:
Python 2.7.11, Windows 7, git bash version 2.9.0.windows.1.
It doesn't appear to matter whether I invoke my python program from the git bash command line or the Windows command line.
However, in git bash, saying "python" gets no response forever until I hit Ctrl-C, but saying "winpty python" opens an interactive python session as it should. I mention this because for a while I thought my main problem was related to the git bash shell bug (https://stackoverflow.com/a/32599341/5593532). But point 2 above would seem to contradict this. No such weirdness occurs in invoking a bare python interactive session from the Windows command line.
I've had trouble getting meaningful profiling output, partly because of multi-threading or child processes or something. And the web server doesn't have an exit event per se, thus I can only stop it by smacking it with a Ctrl-C in the command-line window where I ran the script, which seems to kill the part of the process that would save the profiling data.
From the fragmentary profiling info I was able to produce (with gratitude to https://ymichael.com/2014/03/08/profiling-python-with-cprofile.html), I am suspicious that something weird is happening in loading a large number of imported packages, and perhaps especially the alembic and/or werkzeug packages. (And maybe even sqlalchemy.) The profiling output didn't have much tottime in those packages, but it did have rather a lot of cumtime there.
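One thing I still plan to try for getting a complete profile despite the Ctrl-C problem is a small wrapper that dumps the cProfile stats from an atexit handler, which should still fire after a KeyboardInterrupt; myapp.run_server and the output file name below are stand-ins for the real entry point and path:

# profile_wrapper.py -- sketch only; "myapp.run_server" and the output file
# name are placeholders.
import atexit
import cProfile

profiler = cProfile.Profile()

def _dump_stats():
    profiler.disable()
    profiler.dump_stats("startup_profile.prof")  # read later with pstats/snakeviz

atexit.register(_dump_stats)

profiler.enable()
try:
    from myapp import run_server   # hypothetical import of the slow-starting app
    run_server()
except KeyboardInterrupt:
    pass   # fall through so the atexit handler still saves the stats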
My sys.path inside Python doesn't seem meaningfully different from anyone else's nearby. I might have one or two different items in the list, or three .egg files on the path when they've only got one, but it's mostly the same list in the same order. So much for the idea that it's taking a long time to hunt and learn where packages are and then re-using the information later.
I've got PyCharm Community Edition able to run the script and its associated pieces in IDE mode, so I can set breakpoints and follow execution to a degree, in case that would answer a noteworthy question you could raise for me.
Anyone got a wild notion what's up? (he asked quite unreasonably)

Python & Windows: Spawning a process, then dying without showing the cmd prompt

I've got a problem with a python script which is responsible for syncing changes from a VCS (among other things) which may include changes to itself / libraries it depends on. In such cases, I would like to be able to detect if the sync touched anything I depend on, and restart if that was the case.
On POSIX platforms this is easy: exec(), done. On Windows it's incredibly annoying. I can Popen or exec*() and then die, but the problem is that my user will see the cmd prompt show a new line, which messes up the flow of output, not to mention it completely breaks any future input on stdin and lets them run commands while my script is still running, which is undesirable.
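For reference, the POSIX "exec(), done" approach is roughly this sketch: os.execv replaces the current process image in place, so the PID and controlling terminal are kept and no new prompt appears (this is the part with no clean Windows equivalent):

# Sketch of the POSIX self-restart; re-runs the same interpreter with the
# same arguments, swapping the process image without creating a new process.
import os
import sys

def restart_self():
    os.execv(sys.executable, [sys.executable] + sys.argv)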
I've searched high and low without finding anything approaching an answer to this (and yes, I know there are alternative ways to accomplish the same goal, but it'd be nice to have this work). Seemingly the problem is telling the console that the new process is now the "owner" of the console, but I can't find a way to do that. I tried calling https://msdn.microsoft.com/en-us/library/windows/desktop/ms681952(v=vs.85).aspx through ctypes, and that didn't work either.
Any help much appreciated. Thanks!

Subprocess crashes unexpectedly from PyDev, works fine from double click in windows explorer

I've spent quite a bit of time trying to figure this out. I'm trying to invoke the following to run Abaqus (an FEA program):
popen = subprocess.Popen(callCommand, cwd=workDir, creationflags=subprocess.CREATE_NEW_CONSOLE)
popen.wait()
When double-clicking on the .py file everything works fine. However, when running it from Eclipse, Abaqus crashes with:
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
Then later I also get "SMAPython.exe has stopped working".
I've played around with admin privilege settings but to no avail. I don't have the rep to tag this with Abaqus.
The solution (which I came across after writing a draft of this question) was found here:
http://sourceforge.net/p/pydev/discussion/293649/thread/94a76ecb/
Basically, PyDev adds some environment variables that don't play well with Abaqus, so to remove them the following code can be used:
import os

# PyDev sets PYTHONIOENCODING, which doesn't play well with Abaqus;
# drop it from the environment before launching the solver.
try:
    os.environ.pop('PYTHONIOENCODING')
except KeyError:
    pass
# now call abaqus...
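An equivalent variant (a sketch, not from the linked thread) is to strip the variable from a copy of the environment and pass that copy to Popen via its env argument, leaving the parent's os.environ untouched:

# callCommand and workDir are placeholders standing in for the values used
# in the question; replace them with the real command and working directory.
import os
import subprocess

callCommand = ["abaqus", "job=my-job"]      # placeholder command
workDir = r"C:\work\my-analysis"            # placeholder working directory

cleaned_env = {k: v for k, v in os.environ.items() if k != 'PYTHONIOENCODING'}
popen = subprocess.Popen(callCommand, cwd=workDir, env=cleaned_env,
                         creationflags=subprocess.CREATE_NEW_CONSOLE)
popen.wait()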
Hopefully this is of use to someone; I've spent almost two days fixing this. It is a bit of a niche use of PyDev (I'm not a programmer, I'm a Civil Engineer), but I think it is much more powerful to have Eclipse take care of all the source files. Abaqus CAE files are all binary and proprietary, so source control and custom edits are a pain otherwise.
I guess in any case the way to trace a problem like this is to strip bits away and check what works and what the differences are.

Writing a kernel mode profiler for processes in python

I would like to seek some guidance on writing a "process profiler" which runs in kernel mode. The reason I am asking for a kernel-mode profiler is that I run loads of applications and I do not want my profiler to be swapped out.
By "process profiler" I mean something that would monitor resource usage by a process, including usage of threads and their statistics.
I wish to write this in Python. Please point me to some modules or other helpful resources, and provide any guidance/suggestions for doing it.
Thanks,
Edit: I would like to add that currently my interest is to write this only for Linux; however, after I build it I will have to support Windows.
It's going to be very difficult to do the process monitoring part in Python, since the python interpreter doesn't run in the kernel.
I suspect there are two easy approaches to this:
use the /proc filesystem if you have one (you don't mention your OS)
Use dtrace if you have dtrace (again, without the OS, who knows.)
Okay, following up after the edit.
First, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever.
That said, though, if your purpose is to profile Python, there are a lot of Python tools available to get information from the Python interpreter at run time.
If instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them.
If you have to write kernel code, you'll almost certainly need to write it in C or C++.
Don't try to get Python running in kernel space!
You would be much better off using an existing tool and getting it to spit out XML that can be pulled into Python. I wouldn't want to port the Python interpreter to kernel mode (it sounds grim just writing that).
The /proc option does sound good.
Here is some code that reads /proc information to determine memory usage and such; it should get you going:
http://www.pixelbeat.org/scripts/ps_mem.py reads memory information for processes in Python via /proc/smaps, as charlie suggested.
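As a starting point, a minimal Linux-only sketch of pulling per-process memory figures straight out of /proc (in the same spirit as ps_mem.py, but reading /proc/<pid>/status rather than smaps) might look like:

# proc_status.py -- sketch; reads VmRSS/VmSize (in kB) for a given pid.
import os

def memory_kb(pid):
    result = {}
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith(("VmRSS:", "VmSize:")):
                key, value = line.split(":", 1)
                result[key] = int(value.strip().split()[0])  # value looks like "1234 kB"
    return result

if __name__ == "__main__":
    print(memory_kb(os.getpid()))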
Some of your comments on other answers suggest that you are a relatively inexperienced programmer. Therefore I would strongly suggest that you stay away from kernel programming, as it is very hard even for experienced programmers.
Why would you want to write something that
is a very complex system (just look at existing profiling infrastructures and how complex they are)
cannot be done in Python (I don't know of any kernel that would allow execution of Python in kernel mode)
already exists (oprofile on Linux)
have you looked at PSI? (http://www.psychofx.com/psi/)
"PSI is a Python module providing direct access to real-time system and process information. PSI is a Python C extension, providing the most efficient access to system information directly from system calls."
It might give you what you are looking for, or at least a starting point.
Edit 2014:
I'd recommend checking out psutil instead:
https://pypi.python.org/pypi/psutil
psutil is actively maintained and has some nifty process monitoring features. PSI seems to be somewhat dead (last release 2009).
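For process monitoring in user space (not kernel mode), a small hedged sketch with psutil shows the kind of per-process resource and thread statistics asked about:

# monitor.py -- sketch; requires "pip install psutil".
import os
import psutil

def snapshot(pid):
    p = psutil.Process(pid)
    return {
        "name": p.name(),
        "cpu_percent": p.cpu_percent(interval=0.1),  # sampled over 0.1 s
        "rss_bytes": p.memory_info().rss,
        "num_threads": p.num_threads(),
        "threads": p.threads(),                      # per-thread CPU times
    }

if __name__ == "__main__":
    print(snapshot(os.getpid()))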
