I have a neural network implemented with numpy (Python 2.7) and a more powerful machine to test it on faster. Lately my code freezes on this machine, but if I test it on my notebook (less CPU, RAM, etc.) it runs without problems (only slower).
What could the problem be? I thought it was my code, but since it works on the slower PC, I now suspect the faster machine has an issue.
Edit: Also, it sometimes works without problems.
Edit 2: Both PCs run Ubuntu 16.04.
Edit 3: It happens even with the same input and parameters.
If it doesn't always occur and is confined to one machine, it could very well be a hardware problem.
The trouble is that hardware problems are often hard to test for, because they generally leave little evidence in the way of log files.
Try testing the RAM (for example with memtest86+).
If that doesn't turn up errors, try logging the CPU temperatures to check that the machine isn't overheating.
Also, log the various voltages. It could be that the power supply is on its way out.
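A minimal logging sketch, assuming psutil is installed; its sensors_temperatures() call is Linux-only, which matches the Ubuntu machines here, while voltages generally have to be read with lm-sensors instead:

    import time
    import psutil

    # append CPU load and the hottest sensor reading every 5 seconds
    with open("hw_log.csv", "w") as log:
        log.write("timestamp,cpu_percent,max_temp_c\n")
        while True:
            temps = psutil.sensors_temperatures()  # empty dict if no sensors are exposed
            readings = [t.current for entries in temps.values() for t in entries]
            max_temp = max(readings) if readings else float("nan")
            log.write("%.0f,%.1f,%.1f\n" % (time.time(), psutil.cpu_percent(), max_temp))
            log.flush()
            time.sleep(5)

If the log shows a temperature spike or a sudden load drop right before a freeze, that points at hardware rather than the numpy code.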
Try compiling your code on the same machine where it freezes. Each machine (more precisely, each microprocessor model) has a slightly different instruction set, and flaws in the silicon are sometimes patched via microcode updates. That could be where the problem lies.
I am working on translating a rather complex script from Matlab to Python, and the results are fine.
However, Matlab takes around 5 seconds to complete, whereas Python takes over 2 minutes for the same starting conditions.
Surprised by Python's poor performance, I took a look at CPU usage and noticed that Python does not use more than 1% while executing. CPU usage is around 0.2% and barely changes whether I'm running the script or not. My system has 8 logical cores, so this does not appear to be a multi-core issue. Also, no other hardware shows any high usage (at least, that is what Task Manager is telling me).
I am running my program with Python 3.9.10 through the IPython console in Spyder 5.3.0. The Python installation is fairly fresh, and I did not change much except for installing a few standard modules.
I did not post any code because it would be a bit much. Essentially it is just a big chunk of "basic" operations on scalars and vectors, plus a function that gets solved with scipy.optimize.minimize; nothing in the program relies on other software or inputs.
So, all in all, my question is: where does this limitation come from, and are there ways to "tell" Python to use more resources?
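For what it's worth, low CPU usage combined with long runtimes in code translated from Matlab often comes from element-wise Python loops that Matlab's JIT would have compiled away; a minimal sketch of the usual fix, vectorizing with numpy (the arithmetic here is purely illustrative):

    import numpy as np

    x = np.linspace(0.0, 10.0, 1_000_000)

    # loop version: one interpreted Python iteration per element
    y_loop = np.empty_like(x)
    for i in range(x.size):
        y_loop[i] = x[i] ** 2 + 3.0 * x[i]

    # vectorized version: the same arithmetic as a few C-level array operations
    y_vec = x ** 2 + 3.0 * x

    assert np.allclose(y_loop, y_vec)

Profiling the script (for example with the standard-library cProfile) would confirm whether such loops, or the scipy.optimize.minimize call, dominate the runtime.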
I use a Jupyter notebook to execute a Python script. The script calls the association_rules function from the mlxtend framework, and when it does, RAM usage literally explodes from 500 MB to over 32 GB. But that alone would not be the problem. The problem is: if I execute the script locally on my Windows 10 PC, the RAM maxes out but everything keeps running. When I do the same on a Unix server (Xfce), the server crashes. Is there something I can do to prevent the server from crashing and to guarantee that the script continues?
Update:
I basically missed the fact that Windows was swapping RAM the whole time; the only difference is that Windows does not crash. I'm quite sure this could be solved on Linux by fixing the swap configuration. So basically the question is obsolete.
Update 2:
I had made some wrong assumptions. The Windows PC was already swapping too, and its swap partition ran out of space as well. So the same problem appeared on all machines, and all of them crashed. In the end it was a mistake in the data preprocessing. Sorry for the inconvenience, and please consider this question no longer relevant.
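For reference, one standard-library way to keep a runaway script from taking a Linux server down with it is to cap the process's own address space, so oversized allocations fail with a MemoryError inside the script instead of driving the machine into swap; a minimal sketch (the 8 GB cap is an arbitrary example):

    import resource

    # cap this process's virtual address space at 8 GB (Unix only)
    limit = 8 * 1024 ** 3
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    # any allocation pushing past the cap now raises MemoryError,
    # which the script can catch and handle gracefully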
Run the script using the nice command to assign it a lower priority.
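A minimal sketch of the same idea from inside the script itself, assuming it runs on a Unix machine (os.nice is not available on Windows):

    import os

    # equivalent to launching with: nice -n 10 python script.py
    os.nice(10)  # a positive increment lowers CPU scheduling priority

    # ... the rest of the script now runs at the lower priority ...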
I'm trying to start work with this: https://github.com/deepsound-project/samplernn-pytorch
I've installed all the library dependencies through the Anaconda console, but I'm not sure how to run the Python training scripts.
I guess I just need general help with getting an RNN from a Git repository working in Python? I've found a lot of tutorials that show working from notebooks in Jupyter, or even from scratch, but I can't find any that work from Python code files.
I'm sorry if my terminology is backward; I'm an architect attempting coding, not a software engineer.
There are instructions for getting the SampleRNN implementation working in the terminal on the Git page. All of the commands listed there call the Python scripts from a terminal, not from a Jupyter notebook. If you've installed all the correct dependencies, then in theory all you should need to do is run those terminal commands to try it out.
FYI, it took me a while to find a combination of parameters with which this model would train without running into memory errors, but I was working with my own dataset, not the one provided. It's also very intensive: the default training time is 1000 epochs, which even on my relatively capable GPU was prohibitively long, so you might want to reduce that value considerably just to reach the end of a training cycle, unless you have a sweet setup :)
I'm trying to profile a Python application in PyCharm. However, when the application terminates and the profiler results are displayed, PyCharm consumes all 16 GB of RAM that I have, which makes it unusable.
Said Python application does reinforcement learning, so it takes a while to run (~10 minutes or so), but while running it does not require large amounts of RAM.
I'm using the newest version of PyCharm on Ubuntu 16.04, and PyCharm uses cProfile for profiling.
I'd be very glad if one of you knows a solution.
EDIT: It seems this was an issue within PyCharm, which has since been fixed (as of 2017/11/21)
It's a defect within PyCharm: https://youtrack.jetbrains.com/issue/PY-25768
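Until that fix reaches your installation, one workaround is to run the profiler outside PyCharm with the standard library and only print the top entries; a minimal sketch (main() is a stand-in for the real training run):

    import cProfile
    import pstats

    def main():
        # stand-in for the real training run
        return sum(i * i for i in range(10 ** 6))

    # dump raw stats to disk instead of holding the results in the IDE
    # (a shell equivalent: python -m cProfile -o train.prof your_script.py)
    cProfile.run("main()", "train.prof")

    # print only the 20 most expensive entries by cumulative time
    pstats.Stats("train.prof").sort_stats("cumulative").print_stats(20)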
I am running a Python script using Spyder 2.3.9. I have a fairly large script, and when running it with 300x600 iterations (a loop inside another loop), everything appears to work fine and takes approximately 40 minutes. But when I increase it to 500x600 iterations, after 2 hours the output yields:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
I've been trying to go through the code but don't see anything that might be causing this in particular. I am using 64-bit Python 2.7.12, Qt 4.8.7, PyQt4 (API v2) 4.11.4 (Anaconda2-4.0.0-MacOSX-x86_64).
I'm not entirely sure what additional information is pertinent, but if you have any suggestions or questions, I'd be happy to read them.
https://github.com/spyder-ide/spyder/issues/3114
An issue for this has been opened on the Spyder GitHub repository; judging by the project's track record, it should be addressed soon.
Some possible solutions:
If possible, it may help to modify your script for faster convergence; very often, for most practical purposes, the incremental value of iterations beyond a certain point is negligible (see the sketch after this list).
An upgrade or downgrade of the Spyder environment may help.
Check your local firewall for blocked connections to 127.0.0.1 from pythonw.exe.
If nothing works, try using Spyder on Ubuntu.
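A generic sketch of that convergence idea, where compute_step and the tolerance are illustrative stand-ins rather than anything from the original script:

    def compute_step(i):
        # stand-in for one pass of the real computation
        return 1.0 / (i + 1)

    tol = 1e-8
    prev = None
    for i in range(500 * 600):
        value = compute_step(i)
        # stop once successive results stop changing meaningfully
        if prev is not None and abs(value - prev) < tol:
            print("converged after %d iterations" % i)
            break
        prev = value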