Save and restore Python interpreter state

Is there a way to store the state of the Python interpreter embedded in a C program (not the terminal interpreter or a notebook) and restore it later, resuming execution where it left off?
Other questions and answers I found on this topic revolved around saving the state of the interactive shell or a Jupyter notebook, or around debugging. My goal, however, is to freeze execution and restore it after a complete restart of the program.
A library which achieves a similar goal for the Lua language is called Pluto; however, I don't know of any similar libraries or built-in ways to achieve the same thing in an embedded Python interpreter.

No. Because the CPython interpreter is C code, there is absolutely no way of storing its entire state, other than dumping the entire memory of the C program to a file and resuming from that. That would also mean you couldn't restart the C program independently of the Python program running in the embedded interpreter, which is presumably not what you want.
In more limited cases it can be possible to pickle or marshal some objects, but not all objects are picklable (open files, sockets, and so on). In the general case the Python program must actively cooperate with freezing and restoring.
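As a rough illustration of what that cooperation could look like, here is a minimal sketch; the state file name and the idea of filtering the namespace are assumptions for illustration, not a standard mechanism:

import pickle

STATE_FILE = "state.pkl"  # hypothetical file name

def save_state(namespace):
    # Keep only the values that can actually be pickled;
    # open files, sockets, modules, etc. are silently skipped.
    saved = {}
    for name, value in namespace.items():
        try:
            pickle.dumps(value)
        except Exception:
            continue
        saved[name] = value
    with open(STATE_FILE, "wb") as f:
        pickle.dump(saved, f)

def restore_state(namespace):
    # Merge previously saved values back in, if a state file exists.
    try:
        with open(STATE_FILE, "rb") as f:
            namespace.update(pickle.load(f))
    except FileNotFoundError:
        pass  # first run, nothing to restore

The program would call restore_state(globals()) at startup and save_state(globals()) at its freeze points; anything unpicklable still has to be rebuilt by hand.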

Related

Embedding Python in C++: persistence of interpreter across execution

Using Py_Initialize(), we can start the python interpreter in a C++ program.
However, as the function does not return anything, we cannot use the same interpreter in a different program.
Is there any way of calling Py_Initialize() in one C++ program, making the interpreter persistent, and using it in a different C++ program (without calling Py_Initialize() again)?
Edit: To be more specific, is there a way to get hold of an instance of a Python interpreter, pass it to another execution as a parameter, and use it to run Python scripts?
No. The CPython interpreter itself does not work like that. There is no distinct interpreter object, but rather a floating set of globals with a stateful API. What's worse, Python code can load arbitrary other libraries, whose state can definitely not be persisted (in general).
What you can do is pickle the existing variables. That can sometimes get you somewhere close. That is not really a hosting problem but a Python problem. Naturally, though, you could make sure that your C code hosting Python executes the serialization steps after the "real" Python code has finished executing. Something like How can I save all the variables in the current python session? might be a starting point.
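If installing a third-party package is an option, the dill library extends pickle and provides session dumping. A minimal sketch (file name assumed), which the hosting C code could trigger, e.g. via PyRun_SimpleString, after the main script finishes:

import dill  # third-party: pip install dill

# at the end of the "real" Python code:
dill.dump_session("session.pkl")

# ...and in the next execution, before resuming work:
dill.load_session("session.pkl")

This still inherits pickle's limitations: objects backed by external resources (open files, sockets, C library state) cannot be round-tripped this way.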

Standalone Python interpreter

I want to run a python program without any underlying OS.
I have read articles on running Python on small microcontrollers, but I want it on a bigger processor (Intel, ARM).
My criteria are:
It could be directly run as binary.
The Python interpreter could be loaded, onto which I can run my program.
At worst, tell me an extremely small, basic OS I can run it on.
Note: I want to use my program like a minimalistic operating system. I should be able to load it like any other OS, and it should be able to access memory and have basic I/O.
Note 2: Will there be limitations in terms of python's functions?
Note: this post describes x86 exclusively, as, next to ARM, requested by the OP.
It could be directly run as binary.
Binary? Python is not compiled to native machine code, so no binary is produced. I think you mean just "run a Python program directly" here.
You could implement an additional compilation step, so that Python source files are compiled to bytecode prior to being executed, of course.
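For example, the standard library's py_compile module performs exactly that ahead-of-time step on a development machine (file names here are placeholders):

import py_compile

# Compile a source file to a bytecode file ahead of time;
# the interpreter then only needs to load and execute program.pyc.
py_compile.compile("program.py", cfile="program.pyc")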
The Python interpreter could be loaded, onto which I can run my program.
"loaded" is a problem here. You need software to load the interpreter, displaying a chicken-egg problem. Intel x86 solves the problem by using a so-called BIOS (Basic I/O System), which starts further, user-defined programs. This "user-defined" program would be your Python interpreter then.
On more modern machines, UEFI is used instead of the legacy BIOS.
I want to use my program like a minimalistic operating system. I should be able to load it like any other OS, and it should be able to access memory and have basic I/O.
The aforementioned BIOS provides, as the acronym says, basic I/O functionality like reading/writing from/to disks and the screen. Either use these basic routines and build abstractions on top of them, or circumvent them and rewrite everything from scratch. That includes graphics drivers (a basic VGA driver will suffice), disk drivers (for loading Python files from disk), and a filesystem (simple FAT16 is sufficient).
After all, you need to write not only a Python interpreter but a whole runtime environment from scratch.
Will there be limitations in terms of python's functions?
It depends on what you implement. For networking you need the appropriate drivers; for file access, a filesystem plus a secondary-storage driver. You are the ultimate master of the system you create, so it is up to you how limited or unlimited your Python environment will be.

Why doesn't GASP work when I run it from IDLE?

I made this script:
from gasp import *
begin_graphics()
Circle((200, 200), 60)
Line((100, 400), (580, 200))
Box((400, 350), 120, 100)
update_when('key_pressed')
end_graphics()
When I start it from the terminal, it works perfectly. When I run it from IDLE, it doesn't work: I get no response (the shell prompt (>>>) disappears, but nothing happens).
In general, you can't run GUI apps in the embedded Python interpreter in IDLE, unless the library you're using is designed to integrate with IDLE. Or, worse, it may work on one machine and not on another. I'll explain why below, but first just take that on faith.
As far as I can tell, gasp's documentation doesn't address this, but similar libraries either warn you that they may not work in IDLE (easygui, early versions of graphics, etc.) or come with special instructions for how to use them in IDLE (e.g., later versions of graphics).
Now, maybe gasp should be designed to integrate with IDLE, given that it's specifically aimed at novices, and many of those novices will be using the IDE that comes bundled with Python. Or maybe not. But even if that should be true, that's something for gasp to deal with. File a bug or feature request, but you'll need some way to keep working until someone gets around to writing the code.
The simplest solution here is to use a different IDE, one that runs its interactive Python interpreter in a completely separate process, exactly the same as you get when running it yourself in the terminal. There are lots of good options out there that are at least free (as in beer) for non-commercial use (PyCharm, Komodo, Eclipse PyDev, emacs with your favorite collection of packages, etc.). Although Stack Overflow is not a good place for advice on picking the best one for you (if googling isn't sufficient, try asking on a mailing list or forum), almost any of them will work.
Another option: instead of using an interpreter built into an IDE, you might want to consider running an enhanced interpreter environment (like ipython-gtk, or emacs with a smaller set of packages) alongside your IDE. Of course they'll no longer be tightly integrated (the "I" in "IDE"), but in my experience, even working in an environment where the whole team uses PyCharm or PyDev, I still end up doing most of my interactive testing in ipython; you may find you prefer that as well. Or you may not, but give it a try and see.
So, why is there a problem in the first place?
First, if you don't understand what an "event loop" or "runloop" or "mainloop" is, please read either Why your GUI app freezes or the Wikipedia page or some other introduction to the idea.
Normally, when you run the interactive Python interpreter (e.g., by typing python at the bash or C: prompt in your terminal), it runs in its own process. So, it can start up a runloop and never return (until you quit), and the terminal won't get in your way.
But when you run the interactive Python interpreter inside IDLE, it's actually running in the same process as IDLE, which has its own runloop. If you start up a runloop and never return, IDLE's runloop doesn't get to run. That means it doesn't get to respond to events from the OS, like "refresh your window" or "prepare for a new window to open", so from the user's (your) point of view, IDLE and your app are both just frozen.
One way to get around this in your code is to spawn another thread for your runloop, instead of taking over the main thread. (This doesn't work with all GUI libraries, but it works with some. This is how graphics solved the problem.) Another way is to spawn a whole new child process to run your GUI. (This works with all GUI libraries, but it's a lot more work: now you have to deal with inter-process communication. One of the novice-friendly matplotlib wrappers does this.) Finally, you can integrate your runloop with IDLE's Tkinter runloop by having one drive the other. (Not all GUI libraries can be driven this way, but Tkinter's can, and IDLE can be monkeypatched to work this way; graphics used to do this.) But none of these are even remotely simple. And they're probably things that gasp itself should be doing, not your code.
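For libraries that tolerate it, the thread approach looks roughly like this. Here make_app is a placeholder for your GUI setup, and note that many toolkits (Tkinter included) insist on running their runloop in the main thread, so this is a sketch of the idea rather than a universal recipe:

import threading

def run_gui():
    app = make_app()  # hypothetical: build the window(s)
    app.mainloop()    # enter the library's blocking runloop

# Run the runloop in a background thread so the main thread
# (and therefore IDLE's own runloop) stays responsive.
gui_thread = threading.Thread(target=run_gui, daemon=True)
gui_thread.start()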

Debug crashing C Library used in Python project

complete Python noob here (and rusty in C).
I am using a Mac with Lion OS. I am trying to use NFCpy, which uses USBpy, which uses libUSB. libUSB is crashing due to a null pointer, but I have no idea how to debug that since there are so many parts involved.
Right now I am using Xcode to view the code with syntax highlighting, but I run everything from bash. I can switch to Windows or Linux if this would somehow be easier in a different environment.
Any suggestions on how to debug this would be much appreciated ;-)
PS: It would be just fine if I could see the prints I put in C in the bash where I run the Python script
You should already be seeing the printf() output from your C code in your terminal; if you don't, something is already wrong here. Are you sure that you're using the latest compiled library? To be certain, instead of a print, you can use assert(0) (you need to include assert.h).
Anyway, you can debug your software using gdb:
gdb --args python yourfile.py
# type "run" to start the program
# if you put assert() in your code, gdb will stop at the assert, or you can put
# manual breakpoint by using "b filename:lineno" before "run"
Enable core dumps (ulimit -Sc unlimited) and crash the program to produce a core file. Examine the core file with gdb to learn more about the conditions leading up to the crash. Inspect the functions and local variables on the call stack for clues.
Or run the program under gdb to begin with and inspect the live process after it crashes and gdb intercepts the signal (SIGSEGV, SIGBUS, whatever).
Both of these approaches will be easier if you make sure all relevant native code (Python, libUSB, etc) have debugging symbols available.
Isolating the problem in a program which is as small as you can manage to make it, as Tio suggested, will also make this process easier.
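One standard-library aid worth knowing about: the faulthandler module (in the stdlib since Python 3.3; in the Python 2 era it existed as a separate pip-installable backport) prints the Python-level traceback when the process takes a fatal signal, which at least tells you which Python call triggered the crash down in libUSB:

import faulthandler
import sys

# Dump the Python traceback on SIGSEGV, SIGBUS, SIGFPE, ...
faulthandler.enable(file=sys.stderr)

# ...then exercise the code path that crashes, e.g. the nfcpy calls.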
PS: It would be just fine if I could see the prints I put in C in the bash where I run the Python script
You didn't mention anything about adding prints "in C" elsewhere in your question. Did you modify libUSB to add debugging prints? If so, did you rebuild it? What steps did you take to ensure that your new build would be used instead of the previously available libUSB? You may need to adjust your dylib-related environment variables to get the dynamic linker to prefer your version over the system version. If you did something else, explain what. :)

Writing a kernel-mode profiler for processes in Python

I would like to seek some guidance on writing a "process profiler" which runs in kernel mode. The reason I am asking for a kernel-mode profiler is that I run loads of applications and I do not want my profiler to be swapped out.
By "process profiler" I mean something that would monitor resource usage by a process, including usage of threads and their statistics.
And I wish to write this in Python. Please point me to some modules or helpful resources, and provide guidance/suggestions for doing it.
Thanks.
Edit: I would like to add that currently my interest is to write this only for Linux; however, after I build it I will have to support Windows.
It's going to be very difficult to do the process-monitoring part in Python, since the Python interpreter doesn't run in the kernel.
I suspect there are two easy approaches to this:
Use the /proc filesystem if you have one (you don't mention your OS).
Use dtrace if you have dtrace (again, without knowing the OS, who knows).
Okay, following up after the edit.
First, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever.
That said, though, if your purpose is to profile Python, there are a lot of Python tools available to get information from the Python interpreter at run time.
If instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them.
If you have to write kernel code, you'll almost certainly need to write it in C or C++.
Don't try to get Python running in kernel space!
You would be much better off using an existing tool and getting it to spit out XML that can be sucked into Python. I wouldn't want to port the Python interpreter to kernel mode (it sounds grim even writing that).
The /proc option does sound good.
Here is some code that reads /proc information to determine memory usage and such, which should get you going:
http://www.pixelbeat.org/scripts/ps_mem.py reads memory information of processes using Python through /proc/smaps like charlie suggested.
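As a user-space (not kernel-mode) starting point, reading /proc directly is only a few lines. A minimal sketch for Linux; the helper name is mine:

def rss_kib(pid):
    # Resident set size in KiB, parsed from /proc/<pid>/status (Linux only).
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # field is reported in kB
    return None  # kernel threads have no VmRSS line

print(rss_kib(1))  # e.g. init/systemd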
Some of your comments on other answers suggest that you are a relatively inexperienced programmer. Therefore I would strongly suggest that you stay away from kernel programming, as it is very hard even for experienced programmers.
Why would you want to write something that
is a very complex system (just look at existing profiling infrastructures and how complex they are)
cannot be done in Python (I don't know of any kernel that would allow execution of Python in kernel mode)
already exists (oprofile on Linux)
Have you looked at PSI? (http://www.psychofx.com/psi/)
"PSI is a Python module providing direct access to real-time system and process information. PSI is a Python C extension, providing the most efficient access to system information directly from system calls."
It might give you what you are looking for, or at least a starting point.
Edit 2014:
I'd recommend checking out psutil instead:
https://pypi.python.org/pypi/psutil
psutil is actively maintained and has some nifty process monitoring features. PSI seems to be somewhat dead (last release 2009).
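A small usage sketch based on psutil's documented API (exact availability of fields varies by platform and psutil version):

import psutil

p = psutil.Process()  # current process; pass a pid for another one
print(p.name(), p.num_threads())
print(p.cpu_percent(interval=1.0))  # % CPU over a 1-second sample
print(p.memory_info().rss)          # resident set size in bytes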
