My research requires processing memory traces of applications. For C/C++ programs this is easy using Intel's Pin library. However, as suggested in Use Intel Pin to instrument Python scripts, I may need to instrument the Python runtime itself, and I'm not sure that will represent the true memory behavior of a given Python script, due to interpreter overhead (if this is not the case, please comment). The existing Python memory profilers only report runtime memory "usage", e.g. heap space consumption.
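For concreteness, this is all a heap-level profiler gives me; a minimal sketch using the standard tracemalloc module (the workload is made up), which reports allocation sizes per source line, not the sequence of addresses an instruction-level trace would capture:

    import tracemalloc

    tracemalloc.start()

    # made-up workload standing in for a real script
    data = [bytes(1024) for _ in range(1000)]

    # report heap usage per source line -- sizes, not accessed addresses
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:3]:
        print(stat)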
I ended up building an executable from my Python script using PyInstaller and running my Pintool over it. However, I'm not sure this is the right approach.
Is there any way (a library, or a hack into the Python runtime) to get the memory traces accessed by a Python script?
Recently I wrote a Python program that requires a lot of memory. The computer then runs out of memory, and the program blows up.
It is known that the operating system can use part of the hard disk as virtual memory, which can work around insufficient RAM. Changing the operating system's virtual-memory settings would solve the insufficient-memory problem for Python programs, but the scope of that change is too wide.
Can Python implement program-level virtual memory? That is, when memory is insufficient, map the hard disk into the program's memory.
I need to run a Python program with large memory consumption.
Using disk space as memory is usually called swapping.
It is usually simpler to do this yourself than to have a script do it for you. But if you insist on the script doing it, one way is simply to execute the same commands you would run manually.
Here is a tutorial on how to add swap to a Linux system (first result on Google): https://linuxize.com/post/create-a-linux-swap-file/
Take each command in that tutorial and run it using subprocess, and you will get the desired result, as in the sketch below.
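Here is a minimal sketch of that idea, wrapping the tutorial's commands (the swap file path and size are just the tutorial's examples; the script must run as root, and this is Linux-only):

    import subprocess

    # commands taken from the linked tutorial; path and size are examples
    commands = [
        ["fallocate", "-l", "1G", "/swapfile"],  # allocate the backing file
        ["chmod", "600", "/swapfile"],           # restrict its permissions
        ["mkswap", "/swapfile"],                 # format it as swap space
        ["swapon", "/swapfile"],                 # enable it
    ]

    for cmd in commands:
        subprocess.run(cmd, check=True)  # stop immediately if a step fails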
If you are on Windows (which you did not say), the same idea applies, but I could not quickly find an easy way to do it with commands.
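As for the program-level mapping the question describes, the closest built-in analogue I know of is Python's standard mmap module, which backs a buffer with a file on disk so the OS pages it in and out on demand. A minimal sketch (file name and size are made up):

    import mmap

    SIZE = 1 << 30  # 1 GiB of disk-backed "memory"; adjust as needed

    # create a sparse backing file of the desired size
    with open("backing.bin", "wb") as f:
        f.truncate(SIZE)

    # map it into the process; pages are loaded from disk on demand
    with open("backing.bin", "r+b") as f:
        buf = mmap.mmap(f.fileno(), SIZE)
        buf[0:5] = b"hello"   # reads/writes go through the page cache
        print(buf[0:5])
        buf.close()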
I am a beginner learning to program in Python using VS Code, so my knowledge of both VS Code and the Python extension is limited. I am facing two very annoying problems.
Firstly, when the Python extension starts, the memory usage of VS Code jumps from ~300 MB to 1-1.5 GB. If I have anything else open, everything gets extremely sluggish. This seems a bit abnormal to me. I have tried disabling all other extensions, but the memory consumption remains the same. Is there a way (or some settings I can change) to reduce the memory consumption?
Secondly, the IntelliSense autocomplete takes quite a while (sometimes 5-10 minutes) before it kicks in, and sometimes it stops working completely. Any pointers on what could be causing that?
PS: I am using VS Code version 1.50 (September update) and Python from Anaconda 4.8.3.
Beyond the memory VS Code itself occupies as a code editor, it needs to load the corresponding language services and language extensions, and these occupy additional memory.
For memory, I recommend uninstalling unnecessary third-party extensions and duplicate language services. In addition, it is a good habit to use virtual environments in VS Code: the virtual environment's folder lives inside the project, so installed packages are stored with the project instead of consuming system-wide resources.
As for automatic completion, this feature is provided by the corresponding language service and extension. Please try reloading VS Code and waiting for the language service to load before editing code.
You can also try the "Pylance" extension, which provides outstanding language-service features, including automatic completion.
At least for IntelliSense, you could try setting
"python.jediEnabled": false
in your settings.json file. This will let you use a newer version of IntelliSense, but it might need to download first.
But beyond that, I'd suggest using PyCharm instead. It's quite snappy, and it has a free version.
I want to run a Python program without any underlying OS.
I have read articles about running Python on small microcontrollers, but I want it on a bigger processor (Intel, ARM).
My criteria are:
It could be run directly as a binary.
The Python interpreter could be loaded, on which I can then run my program.
At worst, tell me about an extremely small, basic OS I can run it on.
Note: I want to use my program like a minimalistic operating system. I should be able to load it like any other OS, and it should be able to access memory and have basic I/O.
Note 2: Will there be limitations in terms of Python's functions?
Note: this answer covers x86 exclusively, one of the two architectures (alongside ARM) the OP asked about.
It could be run directly as a binary.
Binary? Python is not compiled to machine code, so no binary is produced. I think you just mean "run a Python program directly" here.
You could, of course, implement an additional compilation step so that Python source files are compiled to bytecode prior to being executed.
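On an ordinary hosted Python, that step looks like this; a minimal sketch using the standard py_compile module (file names are placeholders):

    import py_compile

    # compile a source file to a bytecode (.pyc) file ahead of time;
    # the interpreter can later execute the .pyc without the source
    py_compile.compile("script.py", cfile="script.pyc")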
The Python interpreter could be loaded, on which I can then run my program.
"loaded" is a problem here. You need software to load the interpreter, displaying a chicken-egg problem. Intel x86 solves the problem by using a so-called BIOS (Basic I/O System), which starts further, user-defined programs. This "user-defined" program would be your Python interpreter then.
On more modern machines, UEFI is used instead of the legacy BIOS.
I want to use my program like a minimalistic operating system. I should be able to load it like any other OS, and it should be able to access memory and have basic I/O.
The aforementioned BIOS provides, as the acronym says, basic I/O functionality: reading/writing from/to disk, reading/writing from/to the screen, etc. You can either use these basic routines and build abstractions on top of them, or bypass them and rewrite everything from scratch. That includes a graphics driver (a basic VGA driver will suffice), a disk driver (for loading Python files from disk), and a filesystem driver (a simple FAT16 is sufficient).
After all, you would be writing not only a Python interpreter but a whole runtime environment from scratch.
Will there be limitations in terms of Python's functions?
It depends on what you implement. For networking you need the appropriate drivers; for file handling, a filesystem plus a secondary-storage driver. You are the ultimate master of the system you create, so it is up to you how limited (or unlimited) your Python environment will be.
I am targeting an embedded platform running linux_rt and would like to compile CPython for it. I am not asking whether Python is appropriate for realtime, or about its latency. I AM asking about compiling under platform constraints.
I would like an interpreter embedded in a C shared library, but will also accept an executable binary if need be.
Any C compiling I've done has been for mainstream OS deployment, and I usually just hit make install. I'm not afraid to get a little dirty, but I am afraid of long-term maintenance and repeatability.
To avoid as much memory overhead as possible, are there any compiler configurations that can be changed from the defaults? Can I easily strip out the sections of the standard library I know will not be needed?
The target platform has a 600 MHz Celeron and 256 MB of RAM. The required firmware is built for a v2.6 kernel (might be 2.4). The default OS image uses BusyBox, and most standard system libraries are minimally available. The root filesystem is around 100 MB (flash), although I will have an external memory card mounted and can extend root onto it.
Python should have 70% CPU and 128 MB of RAM at most times, although I can imagine sloppy execution of the interpreter at times, and on RT Linux that could start to add up. I'm just trying to take precautions before I dive in.
I'm looking for simple dos and don'ts. References to similar projects would be great, but I really want to stick with CPython where possible.
I do not have the target platform in the shop yet, so I cannot post any tests. I will have the unit in two weeks and will update this post then, if needed.
Make a VM with the target configuration to help you get started, using VirtualBox or QEMU. If you don't have a root FS, one place to start is TinyCore, which is very small and configurable, but can also run on your laptop: http://www.linuxjournal.com/article/11023
My team is incorporating the Python 2.4.4 runtime into our project in order to leverage some externally developed functionality.
Our platform has a 450 MHz SH4 application core and limited memory for use by the Python runtime and application.
We have ported Python, but initial testing has highlighted the following hurdles:
a) start-up times for the Python runtime can be as bad as 25 seconds (when importing the libraries concerned, and in turn their dependencies)
b) Python never seems to release memory to the OS during garbage collection; the only recourse is to close the runtime and restart it (incurring the start-up delays noted above, which is often impractical)
If we can mitigate these issues, our use of Python would be substantially improved. Any guidance from the SO community would be very valuable, especially from anyone with knowledge of the internals of the Python execution engine.
Perhaps it is hard to believe, but CPython version 2.4 never releases memory to the OS. This was allegedly fixed in Python 2.5.
In addition, performance (processor-wise) was improved in Python 2.5, and further in Python 2.6.
See the C API section in What's New in Python 2.5 and look for the item called "Evan Jones's patch to obmalloc".
Alex Martelli (whose advice should always at least be considered) says multiprocessing is the only way to go to free memory. If you cannot use the multiprocessing module (added in Python 2.6), os.fork is at least available. Using os.fork in the most primitive manner (fork one worker process at the beginning, wait for it to finish, fork a new one...) is still better than relaunching the interpreter and paying 25 seconds each time.
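A minimal sketch of that primitive fork-per-job pattern (run_job is a stand-in for the real memory-hungry work; Unix-only, since it relies on os.fork):

    import os

    def run_job(job):
        big = [None] * (10 ** 6)  # stand-in for memory-hungry work
        # ... process `big` ...

    for job in range(3):          # one forked child per unit of work
        pid = os.fork()
        if pid == 0:
            run_job(job)          # everything the child allocated is
            os._exit(0)           # returned to the OS when it exits
        os.waitpid(pid, 0)        # the parent stays small and just waits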