there may very well be an answer to this question, but it's really hard to google for.
you can add commands to gdb by writing them in python. I'm interested in debugging one of those python scripts while it's running in a gdb session.
my best guess is to run gdb on gdb, execute the user-added command, and somehow magically break on the python program's code?
has anybody done anything like this before? I don't know the mechanism by which gdb calls python code, so if it's not in the same process space as the gdb that's calling it, I don't see how I'd be able to set breakpoints in the python program.
or do I somehow get pdb to run in gdb? I guess I can put pdb.set_trace() in the python program, but here's the extra catch: I'd like to be able to do all this from vscode.
so I guess my question is: what do I need to run, and in what order, to be able to debug a python script initiated by gdb from vscode?
anybody have any idea?
thanks.
so I figured it out. it's kinda neat.
you run gdb to debug your program as normal, then in another window you attach to a running python program.
in this case the running python program is the gdb process.
once you attach, you can set breakpoints in the python program. then, when you run commands in the first window where the gdb session is, if one of them hits a breakpoint in the python code, it pops up in the second window.
the tipoff was that when you run gdb, there's no separate python process that's a child of gdb (or related to it) anywhere. so gdb must dynamically link against a python library, meaning the python interpreter is running inside the gdb process space. I figured I'd try attaching to that, and it worked.
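One way to wire this up from VS Code (a minimal sketch of my own, not from the original posts; it assumes the debugpy package is importable by the Python that gdb embeds and that port 5678 is free):

# put this near the top of the gdb Python script (the custom command)
import debugpy

debugpy.listen(5678)        # start a debug server on localhost:5678
debugpy.wait_for_client()   # block until VS Code attaches
debugpy.breakpoint()        # stop here once the client is connected

Then attach from VS Code with a "Python: Attach" configuration pointed at localhost:5678, or attach to the gdb process ID directly as described above.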
Here is the situation (as an example): I ran a machine-learning python script (which I wrote) for a long time, but I didn't add any functionality to save its weights.
My question is whether it's possible to somehow intercept the running python interpreter and execute a command within the program's context.
For example, since I have a model_seq global variable inside the running program, I would like to execute:
model_seq.save_weights("./model_weights")
Inside the process.
I've heard this is somewhat possible with gdb.
(I know for sure this can be done with a C program and gdb, but since Python is both compiled to bytecode and interpreted, the steps are a bit unclear to me, and I'm not sure whether I would actually need a special python3 build or whether the default Ubuntu one I have running will work.)
Sorry if I'm a little unclear.
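For illustration, here is a sketch of the widely described gdb-injection approach (my own summary, not something from this thread; the PID and file paths are placeholders, and it assumes gdb can resolve CPython's API symbols, which the stock Ubuntu python normally exports):

# payload.py -- one line to be executed inside the running interpreter;
# PyRun_SimpleString runs it in __main__, which is where model_seq lives
model_seq.save_weights("./model_weights")

# then, from a shell:
#   gdb -p <pid-of-the-running-python-process>
#   (gdb) call (int) PyGILState_Ensure()
#   (gdb) call (int) PyRun_SimpleString("exec(open('/tmp/payload.py').read())")
#   (gdb) call PyGILState_Release($1)   # $1 = value returned by PyGILState_Ensure above
#   (gdb) detach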
I'm writing a Python module whose basic functions are run and shut; they start and terminate other specified python scripts, respectively.
run is implemented with os.popen(), but I couldn't find any way to implement the shut command. Is there a built-in function in python that allows you to terminate other scripts?
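A minimal sketch of one possible approach (my own, not from the thread), assuming it's acceptable to switch from os.popen() to subprocess so you keep a handle to the child process:

import subprocess
import sys

def run(script_path):
    # start the script with the current interpreter and return the handle;
    # the caller must keep it in order to shut the script down later
    return subprocess.Popen([sys.executable, script_path])

def shut(proc):
    # ask the child to exit (SIGTERM on POSIX, TerminateProcess on Windows),
    # then wait so it doesn't linger as a zombie
    proc.terminate()
    proc.wait()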
I'd like to call a separate non-child python program from a python script and have it run externally in a new shell instance. The original python script doesn't need to be aware of the instance it launches, shouldn't block while the launched process is running, and shouldn't care if it dies. This is what I have tried; it returns no error but seems to do nothing...
import subprocess
python_path = '/usr/bin/python'
args = [python_path, '&']
p = subprocess.Popen(args, shell=True)
What should I be doing differently?
EDIT
The reason for doing this is that I have an application with a built-in version of python. I have written some python tools that should be run separately alongside this application, but there is no assurance that the user will have python installed on their system outside of the built-in version I'm using. Because of this, I can get the python binary path from the built-in version programmatically, and I'd like to launch an external instance of that built-in python. This eliminates the need for the user to install python themselves. So, in essence, I need a simple way to programmatically call an external python script using my currently running version of python.
I don't need to capture any output in the original program; in fact, once launched, I'd like the script to have nothing to do with the original program.
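For reference, one common way to get the running interpreter's path programmatically is sys.executable (a small sketch of my own; note that in some embedded builds it can be empty, so it's worth checking):

import sys

python_path = sys.executable   # path of the interpreter running this code
if not python_path:
    raise RuntimeError("embedded interpreter did not expose its executable path")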
EDIT 2
It seems my original question was very unclear, so here are more details; I think I was trying to oversimplify it:
I'm running OS X, but the code should also work on Windows machines.
The main application that has a built-in version of CPython is a compiled C++ application that ships with a python framework it uses at runtime. You can launch the embedded python by running this in a Terminal window on OS X:
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python
From my main application, I'd like to run a command in the embedded python that launches an external copy of a python script using that same python version, just as if I ran the following command in a Terminal window. The newly launched orphan process should have its own Terminal window so the user can interact with it.
/my_main_app/Contents/Frameworks/Python.framework/Versions/2.7/bin/python my_python_script
I would like the child python instance not to block the main application, and I'd like it to have its own terminal window so the user can interact with it. The main application doesn't need to be aware of the child in any way once it's launched. The only reason I'm doing this is to automate launching an external application in a Terminal for the user.
If you're trying to launch a new terminal window to run a new Python in (which isn't what your question asks for, but from a comment it sounds like it's what you actually want):
You can't. At least not in a general-purpose, cross-platform way.
Python is just a command-line program that runs with whatever stdin/stdout/stderr it's given. If those happen to be from a terminal, then it's running in a terminal. It doesn't know anything about the terminal beyond that.
If you need to do this for some specific platform and some specific terminal program—e.g., Terminal.app on OS X, iTerm on OS X, the "DOS prompt" on Windows, gnome-terminal on any X11 system, etc.—that's generally doable, but the way to do it is by launching or scripting the terminal program and telling it to open a new window and run Python in that window. And, needless to say, they all have completely different ways of doing that.
And even then, it's not going to be possible in all cases. For example, if you ssh in to a remote machine and run Python on that machine, there is no way it can reach back to your machine and open a new terminal window.
On most platforms that have multiple possible terminals, you can write some heuristic code that figures out which terminal you're currently running under by walking up the parent-process chain (starting from os.getppid()) until you find something that looks like a terminal you know how to deal with (and if you get to init/launchd/etc. without finding one, then you weren't running in a terminal).
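As a rough illustration (a sketch under my own assumptions, not part of the original answer; the script path is a placeholder and only two platforms are handled), the platform-specific launching might look something like this:

import subprocess
import sys

script = "/path/to/my_python_script.py"   # placeholder

if sys.platform == "darwin":
    # OS X: ask Terminal.app to open the (executable) script in a new window;
    # the script needs a shebang line and the executable bit set
    subprocess.call(["open", "-a", "Terminal", script])
elif sys.platform == "win32":
    # Windows: start the interpreter in a brand-new console window
    subprocess.Popen([sys.executable, script],
                     creationflags=subprocess.CREATE_NEW_CONSOLE)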
The problem is that you're running Python with the argument &. Python has no idea what to do with that. It's like typing this at the shell:
/usr/bin/python '&'
In fact, if you ran those args without shell=True, you'd almost certainly get something like this through your stderr:
python: can't open file '&': [Errno 2] No such file or directory
… which is exactly what you'd get from doing the equivalent at the shell.
What you presumably wanted was the equivalent of this shell command:
/usr/bin/python &
But the & there isn't an argument at all, it's part of sh syntax. The subprocess module doesn't interpret sh syntax in an argument list, and passing shell=True together with a list (as in your code) doesn't help either: the extra list items just become positional parameters of the shell rather than part of the command. So nobody ever treats that & as "run this in the background".
You could pass the whole thing to subprocess as a single command string, so a shell can interpret that & for you:
cmdline = '{} &'.format(python_path)
p = subprocess.Popen(cmdline, shell=True)
But really, there's no good reason to. Just opening a subprocess and not calling communicate or wait on it already effectively "puts it in the background", just like & does on the shell. So:
args = [python_path]
p = subprocess.Popen(args)
This will start a new Python interpreter that sits there running in the background, trying to use the same stdin/stdout/stderr as your parent. I'm not sure why you want that, but it's the same thing that using & in the shell would have done.
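If the goal (per the question's edits) is a child that is fully detached from the parent's stdio and terminal, a hedged variant might look like this (the DEVNULL redirection and session handling are my additions, not part of the original answer; this is Python 3 syntax, while on 2.7 you'd open os.devnull and use preexec_fn=os.setsid):

import subprocess

python_path = '/usr/bin/python'
script = '/path/to/my_python_script.py'   # placeholder

# launch the script detached from our stdio and in its own session,
# so it isn't tied to this process or its controlling terminal
p = subprocess.Popen(
    [python_path, script],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)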
Actually, I think there might be a solution to your problem; I found a useful solution at another question here.
This way, subprocess.Popen starts a new python shell instance and runs the second script from there. It worked perfectly for me on Windows 10.
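For what it's worth, one common Windows-only way to get that behaviour (a rough sketch of my own, not necessarily what the linked answer does; the script name is a placeholder) uses the start command so the second script gets its own console window:

import subprocess

# 'start' is a cmd built-in, so this needs shell=True; cmd /k keeps the
# new window open after the script finishes (Windows only)
subprocess.Popen('start cmd /k python other_script.py', shell=True)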
You can try using the screen command.
With screen you can start the script in a detached session, so it keeps running in the background while you get your shell prompt back.
# screen -d -m python script1.py
After running the above command you're back at a shell prompt where you can run another script, while script1.py keeps running in the background (re-attach to it later with screen -r).
Hope it helps.
I am trying to get a home-made Fiji script to run inside Python by calling Fiji, but there's little documentation on how to do it.
What I need is something like this:
def myfijiscript():
[CODE]
and then in Python:
fiji(myfijiscript)
is there a way to do this?
Python (or, to be precise, Jython) scripts within Fiji are executed using the org.python.util.PythonInterpreter class (see source code).
It doesn't make much sense to run a Jython script within a Java instance that is started from within Python, but have a look at those two questions concerning how to run external commands in Python. You can save your script in a file myscript.py and then do:
call(["./ImageJ-linux64", "myscript.py"])
using the ImageJ launcher from the command line.
The other way is to use ImageJ as a library and just import the classes you need for your script, as others have suggested:
from ij import IJ
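For example, a minimal Jython-side sketch building on that import (the image path is a placeholder, and this assumes the script is run through Fiji, e.g. its Script Editor, so the ij classes are on the classpath):

from ij import IJ

# open an image, log its title, and display it
imp = IJ.openImage("/path/to/image.tif")   # placeholder path
IJ.log("Opened: " + imp.getTitle())
imp.show()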