I am a computational chemist and use Python (through Jupyter Notebook) to analyze my systems. Today, while doing Principal Component Analysis and trying to plot some results, I got a MemoryError.
I tried to find the cause by googling; one suggestion was to check whether I am running a 32-bit Python build, but I am sure that this is not my problem (and besides, I am using a PC with Linux). Another suggestion was to free up space, so I deleted some larger files that were no longer needed, but that didn't help.
I then found some suggestions that were specific to other people's tasks, and those did not apply to my problem.
In particular, the MemoryError occurs when using the MDAnalysis package for PCA with matplotlib inline.
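Since the full frames-by-coordinates matrix is usually what exhausts memory in PCA, one workaround is to accumulate the covariance chunk by chunk and only eigendecompose the small n_features × n_features matrix. This is a plain-NumPy sketch, not the MDAnalysis API; the function and variable names are illustrative:

```python
import numpy as np

def pca_from_chunks(chunks, n_features):
    """Accumulate mean and covariance one chunk of frames at a time, so the
    full (n_frames, n_features) coordinate matrix never has to fit in memory."""
    n = 0
    s = np.zeros(n_features)                 # running sum of rows
    ss = np.zeros((n_features, n_features))  # running sum of outer products
    for x in chunks:
        x = np.asarray(x, dtype=np.float64)
        n += x.shape[0]
        s += x.sum(axis=0)
        ss += x.T @ x
    mean = s / n
    cov = ss / n - np.outer(mean, mean)      # population covariance
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]          # re-sort descending
    return evals[order], evecs[:, order], mean

# Example: stream 10 chunks of 50 synthetic "frames" with 6 coordinates each
rng = np.random.default_rng(0)
chunks = (rng.normal(size=(50, 6)) for _ in range(10))
evals, evecs, mean = pca_from_chunks(chunks, n_features=6)
```

With a real trajectory you would feed this coordinate blocks from a frame iterator. It can also help to downsample arrays before plotting, since rendering very large inline figures can itself raise a MemoryError.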
I'm attempting to set up UART communication between an ESP32 and an STM32 using the Espressif IDF through VS Code on a MacBook Pro with an M1 processor. Below I have provided an image of one of the problems I've been encountering, and I can't seem to find anything on the internet describing a similar situation. I previously had this example working, but that no longer seems to be the case, and I haven't made any changes to the code since its last use.
Another error I'm receiving seems to be related to my processor, I believe, but I have no idea how to go about fixing it. I can't seem to build any projects in VS Code or even run Python scripts that previously worked. Any help would be greatly appreciated.
I've tried uninstalling VS Code from my Mac and looked through a ridiculous number of forums and web pages, but found nothing specific enough to my problem to help.
After uninstalling my anaconda3 files and reinstalling, I'm now receiving this error.
I am trying to execute Python code in VS Code with Jupyter Notebook execution enabled. Repeatedly, the execution screen turns gray, which makes the output and headers invisible; the code remains executable.
Any suggestions for recovering from this issue? Copying everything to another notebook and rerunning each time does not solve it.
I also have this issue with VS Code and Jupyter Notebook. In my case it only happens when the overall size of the notebook is large (more than 150 MB), which is caused by keeping the output of the cells (in my case, high-quality figures); this makes the notebook crash and gray out all the outputs. The solution I have found so far is to clear the output, after which it won't crash again. There is also a solution suggested by the developers here, which is to remove the code cache. I would suggest breaking long notebooks into smaller notebooks, or clearing the output.
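If the notebook is too large to open comfortably, you can strip the outputs from the file directly. This is a minimal standard-library sketch (the file paths are placeholders); `jupyter nbconvert --clear-output` does essentially the same thing from the command line:

```python
import json

def strip_outputs(path_in, path_out):
    """Remove all cell outputs and execution counts from a .ipynb file,
    which can drastically shrink notebooks bloated by inline figures."""
    with open(path_in) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path_out, "w") as f:
        json.dump(nb, f, indent=1)
```

Writing to a separate output path keeps the original notebook intact in case anything goes wrong.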
Update
I frequently had this issue with notebooks of any size. One of the solutions was to remove the code cache on my Windows machine (Mac users will have to find the equivalent application-data location on their system and remove the cache there).
The easiest way to access the cache folder on Windows is to open a Run window, enter the following path, and delete as much of the cache as you can:
%APPDATA%\Code - Insiders\Code Cache
It has helped me so far. Please let me know if it worked for you too, or if you found any other solutions.
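For anyone who wants to script this, here is a hedged cross-platform sketch. The Windows path matches the one above; the macOS and Linux locations are the usual Electron app-data directories and are an assumption here, so check them on your machine before deleting anything:

```python
import os
import platform
import shutil

def vscode_code_cache_dir(product="Code"):
    """Return the Code Cache directory for VS Code (product="Code") or the
    Insiders build (product="Code - Insiders"). The macOS/Linux bases are
    the typical Electron app-data locations; adjust if your install differs."""
    system = platform.system()
    if system == "Windows":
        base = os.environ["APPDATA"]
    elif system == "Darwin":
        base = os.path.expanduser("~/Library/Application Support")
    else:
        base = os.path.expanduser("~/.config")
    return os.path.join(base, product, "Code Cache")

def clear_code_cache(product="Code"):
    """Delete the cache folder if it exists; VS Code recreates it on launch."""
    path = vscode_code_cache_dir(product)
    if os.path.isdir(path):
        shutil.rmtree(path)
    return path
```

Close VS Code before clearing the cache so no files are held open.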
I'm pretty new to both Swift and Stack Overflow, but I have a problem implementing PythonKit. To provide a short summary: the kit allows you to use Python inside Swift (using certain Swift functions to save variables and call functions, and it can even execute the code of a .py file).
In the process of doing this, however, this error pops up:
Fatal error: Python library not found. Set the PYTHON_LIBRARY environment variable with the path to a Python library.: file /Users/****/Library/Developer/Xcode/DerivedData/Y******L-
So I tried updating my Python installation to 3.0 or above (I'm not quite sure; I think it was 3.7), but when I run python it reports version 2.7. This confused me a bit, but it wasn't until the error appeared that I noticed something was wrong. Why does that error show up? What does it mean?
(I did find a similar post with a similar error on Stack Overflow, but I don't have enough reputation to comment, and I don't think the answer addressed what I wanted, so I'm going to continue writing here.)
The Swift code itself has been updated (Xcode 11), and following the steps, a lot of the functions have changed within Xcode. What did I do wrong to cause the error? Following the link, it seems I have to use Pyto? Checking that link, Pyto's functions are different from PythonKit's: with PythonKit you can use Python within your Swift code, while Pyto seems to be a Python integration for iOS?
Any advice? Please ask if anything needs clarification, or tell me if there's a different solution. Thanks, Vince
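One way to find a value for the PYTHON_LIBRARY environment variable the error asks for is to ask the Python you actually want embedded where its shared library lives. This is a hedged sketch using the standard library; run it with the interpreter you intend to use (e.g. `python3`, since plain `python` on macOS is often the system 2.7), and note that either config variable can be unset on some builds:

```python
import os
import sysconfig

# Directory containing the libpython shared library, e.g. .../lib
libdir = sysconfig.get_config_var("LIBDIR")
# The library filename, e.g. libpython3.7m.dylib on macOS
libname = sysconfig.get_config_var("LDLIBRARY")

print(os.path.join(libdir or "", libname or ""))
```

The printed path can then be set as PYTHON_LIBRARY in the Xcode scheme's environment variables (or exported in the shell that launches the app).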
I'm trying to start working with this: https://github.com/deepsound-project/samplernn-pytorch
I've installed all the library dependencies through the Anaconda console, but I'm not sure how to run the Python training scripts.
I guess I just need general help with getting a Git-hosted RNN working in Python? I've found a lot of tutorials that work from Jupyter notebooks or even from scratch, but I can't find ones that work from Python code files.
I'm sorry if my terminology is backward; I'm an architect attempting to code, not a software engineer.
There are instructions for getting the SampleRNN implementation working in a terminal on the Git page. All of the commands listed on the page are for calling the Python scripts from the terminal, not from a Jupyter Notebook. If you've installed all the correct dependencies, then in theory all you should need to do is run those terminal commands to try it out.
FYI, it took me a while to find a combination of parameters with which this model would train without running into memory errors, but I was working with my own dataset, not the one provided. It's also very intensive: the default training time is 1000 epochs, which even on my relatively capable GPU was prohibitively long, so you might want to reduce that value considerably just to reach the end of a training cycle, unless you have a sweet setup :)
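To make the "script from the terminal" pattern concrete, here is a toy skeleton of how such entry points are typically structured. The flag names (`--dataset`, `--epochs`) are illustrative only, not SampleRNN's actual options; the real commands and flags are in the repository README:

```python
# minimal_train.py -- an illustrative skeleton, NOT the actual SampleRNN
# entry point. It shows the argparse pattern that terminal-run scripts use.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="toy training entry point")
    parser.add_argument("--dataset", required=True,
                        help="folder containing the training data")
    parser.add_argument("--epochs", type=int, default=10,
                        help="number of passes over the dataset")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    for epoch in range(args.epochs):
        pass  # the real training loop would go here
    return args

# Parsing an explicit argument list, as the shell would supply it:
args = main(["--dataset", "example_data", "--epochs", "2"])
```

Saved as a file with an `if __name__ == "__main__": main()` guard, you would run it from a terminal as `python minimal_train.py --dataset example_data --epochs 2`, which is the same shape as the commands on the SampleRNN page.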
We run a corporate-level forum developed in Python (with the Django framework). We are intermittently observing memory-usage spikes on our production setup and wish to track down the cause.
These incidents occur at random and are not directly related to load (per our study so far).
I have browsed the internet, and especially Stack Overflow, for suggestions, but was not able to find a similar situation.
Yes, I was able to locate a lot of profiler utilities, such as the Python memory profiler, but these require code-level inclusion of the modules, and since this happens in production, profilers are not a great help (we plan to review our implementation in the next release).
We wish to investigate this issue as it occurs.
Thus, I wish to know whether there is any tool we can use to create a dump for offline analysis (just like heap dumps in Java).
Any pointers?
Is gdb the only option?
OS: Linux
Python: 2.7 (we currently do not plan to upgrade unless that would help fix this issue)
Cheers!
AJ
Maybe you can try using Valgrind. It's a bit tricky, but you can follow up here if you are interested:
How to use valgrind with python?
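Valgrind is best for native-level leaks (C extensions, the interpreter itself). For a Python-level view without modifying the application code, if you can get code running inside the process (for example by attaching with gdb or an injection tool such as pyrasite, both external tools with their own risks in production), a small snippet like this acts as a poor man's heap dump and works on both 2.7 and 3.x:

```python
import gc
from collections import Counter

def top_object_types(n=10):
    """Count live objects tracked by the garbage collector, grouped by
    type name, and return the n most common types with their counts."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(n)

for name, count in top_object_types():
    print("{:>8}  {}".format(count, name))
```

Capturing this summary during normal operation and again during a spike, then diffing the counts, often points at the object type that is accumulating.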