I am running a script on a DigitalOcean 1 GB memory droplet. When I start the program, memory usage stays under 150 MB, but suddenly the program stops, and if I run any command I get a -bash: fork: can't allocate memory error. What is causing the issue?
Memory usage when the program is running: [screenshot omitted]
Sounds like your interactive shell or the script you're running is consuming all the memory it is allowed to get.
You should inspect the output of ulimit -a, inspect the output of dmesg, and inspect the program you're running for a loop; if it is a shell script, run it with the -x option, as in bash -x yourprogram, and see whether the output looks like a never-ending loop to you.
ulimit -a will list the resource limits for your user environment. dmesg will tell you if the system has decided your program is behaving badly and needs some reaping to keep it from dragging everything down. bash -x will show all commands as the script executes them.
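For concreteness, a sketch of those three checks (the script name is a placeholder):
# Show the resource limits in effect for your shell session
ulimit -a
# Look for OOM-killer or other kernel messages about misbehaving processes
dmesg | tail -n 20
# Trace a shell script command by command to spot a runaway loop
bash -x yourprogram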
One of those should lead you to the problem. Good luck.
I am attempting to write a Python script that calls rsync to synchronize code between my VM and my local machine. I had done this successfully using subprocess.call to write from my local machine to my VM, but when I tried to write a command that would sync from my VM to my local machine, subprocess.call seemed to erroneously duplicate text or modify the command in a way that doesn't make sense to me.
I've seen a number of posts about subprocess relating to whether a shell is expected or being used, but I tried running the command with both shell=True and shell=False and saw the same weird command modification.
Here is what I am running:
command = ['rsync', '-avP', f'{user}#{host}:/home/{user}/Workspace/{remote_dir}', os.getcwd()]  # attempting to sync from a dir on my remote machine to my current directory on my local machine
print('command: ' + ' '.join(command))
subprocess.call(command)
Here is the output (censored):
command: rsync -avP benjieg#[hostname]:/home/benjieg/Workspace/maige[dirname] /Users/benjieg/Workspace
building file list ...
rsync: link_stat "/Users/benjieg/Workspace/benjieg#[hostname]:/home/benjieg/Workspace/[dirname]" failed: No such file or directory (2)
8383 files to consider
sent 181429 bytes received 20 bytes 362898.00 bytes/sec
total size is 217164673 speedup is 1196.84
rsync error: some files could not be transferred (code 23) at /AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/rsync/rsync/main.c(996) [sender=2.6.9]
If I run the command directly in my terminal, it works just fine. The mysterious thing I'm seeing is that subprocess appears to prepend some extra path before my user name in the command. You can see this in the link_stat line in the output. I'm guessing this is why it's unable to find the directory. But I really can't figure out why it would do that.
Further, I tried moving out of the Workspace directory on my local machine to home, and even weirder behavior ensued. It seemed to begin looking at all of the files on my local machine, even prompting Dropbox to ask me if iTerm2 had permission to access it. Why would this happen? Why would whatever subprocess is doing cause a search of my whole local file system? I'm really mystified by this seemingly random behavior.
I'm on macOS 13.0.1 (22A400) using Python 3.10.4.
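For reference, rsync expects a user@host:path spec for a remote source, and a spec it does not recognize as remote can be treated as a local path relative to the current directory, which would match the link_stat error above. A sketch with placeholder host and directory names:
rsync -avP benjieg@remote.example.com:/home/benjieg/Workspace/[dirname] /Users/benjieg/Workspace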
I was calling a Python script (2.7) from a console (Ubuntu 14.04) using the command python script_name.py. At some point, I wanted to stop the running script by pressing Ctrl-C. However, when I checked Ubuntu System Monitor, the memory used by the Python script was not freed up (I monitored Ubuntu System Monitor before I called the script, during the run, and after I pressed Ctrl-C to stop the script). I tried to free up the memory using a command explained at http://www.upubuntu.com/2013/01/how-to-free-up-unused-memory-in.html , but it didn't work (I mean, the memory usage did not change).
However, if I used PyCharm to run and stop the script, the memory was freed up directly once I pressed the Stop button. For some reasons (such as running over SSH or just testing from the console), I want to run my script from the console (without using PyCharm or any other IDE).
My question is: what is the command, or how do I stop a running Python script and directly free up the memory it used, when I run the script from the console?
Many thanks in advance.
Those commands did not work because what you're trying to achieve is not what they do. How did you check the memory being used by your Python script? I use top to see the memory and CPU used by each process (sorted by CPU usage by default). You may have checked before the system had time to register that the Python process was killed. I've used this a lot and I've never run into the OS not getting memory back after a process has been killed with Ctrl-C.
PyCharm is probably doing some cleanup when you stop the program from it, versus just having to wait for the OS to reclaim memory when you signal a process from a shell (Ctrl-C sends SIGINT).
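If you want to double-check from the console, a quick sketch (assuming a typical Linux userland, as on Ubuntu 14.04):
# Show the ten processes using the most resident memory (plus the header row)
ps aux --sort=-rss | head -n 11
# Or watch memory usage live, sorted by the %MEM column
top -o %MEM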
I am a newbie in Fabric and want to run one command in the background. It is written in a shell script, and I have to run that command via Fabric, so let's assume I have a command in a shell script:
#!/bin/bash
java &
Consider that this is a file named myfile.sh.
Now in Fabric I am using this code to run my script:
put('myfile.sh', '/root/temp/')
sudo('sh /root/temp/myfile.sh')
Now this should start the Java process in the background, but when I log in to the machine and check with the jobs command, nothing is listed.
Where is the problem? Please shed some light.
Use it with
run('nohup PATH_TO_JMETER/Jmetercommand & sleep 5; exit 0')
Maybe the process exits before you return. When you type java with no arguments, it normally just prints a help message and exits. Try a sleep statement or something that lingers. And if you want to run it in the background, you could also append & to the sudo call.
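If the goal is a long-running JVM that survives the Fabric session, here is a hedged sketch of what myfile.sh could look like (the jar path and log locations are placeholders, not from the original post):
#!/bin/bash
# Detach from the terminal so the JVM keeps running after the SSH session closes
nohup java -jar /root/temp/app.jar > /root/temp/app.log 2>&1 &
# Record the PID so the process can be checked or stopped later
echo $! > /root/temp/app.pid
Also note that jobs only lists background jobs of the current shell, so a process started from a different SSH session will never show up there; check with ps aux | grep java instead.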
I use run("screen -d -m sh /root/temp/myfile.sh",pty=False). This starts a new screen session in detached mode, which will continue running after the connection is lost. I use the pty=False option because I found that when connecting to several hosts, the process would not be started in all of them without this option.
I am trying to run tests in Jenkins for a Python package which uses PyQt4, and the tests create windows. Since I'm running the tests in Jenkins, I need to redirect the graphical output, so I'm using xvfb-run. Most of the time, this works, but a fraction of the time, the testing will randomly fail with:
/usr/bin/xvfb-run: line 171: kill: (27375) - No such process
If I re-run the tests, it works fine most of the time (so it's just a one-off problem).
Has anyone encountered this issue before? Do you have any ideas for workarounds to improve the stability of the testing?
A workaround is to find the Xvfb process and kill it:
ps auwx | grep "Xvfb" | grep -v grep
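Equivalently, pkill (from the procps tools) can find and signal the process in one step:
pkill -x Xvfb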
If your copy of xvfb-run is the same as mine, I can confirm I've seen this too.
In my case, the target process caused Xvfb to crash. This means the wrapper script itself fails at line 171 when tearing down the no-longer-running Xvfb. To work around it, I wrapped kill $XVFBPID in a set +e/set -e block. It also helps to specify --error-file= so that xvfb-run saves the asynchronous standard error output from Xvfb while your target process is running, so you can get the underlying cause fixed.
Workaround:
# Kill Xvfb now that the command has exited.
# Ignore failure of kill since we want to be forgiving of Xvfb itself crashing
set +e
kill $XVFBPID
set -e
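As mentioned above, passing --error-file= also helps capture Xvfb's own stderr for diagnosis; a sketch (the test command is a placeholder):
xvfb-run --auto-servernum --error-file=/tmp/xvfb-error.log python run_tests.py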
I've looked at some questions about profiling memory usage in Python programs, but so far haven't been able to get anything to work. My program must run as root (it opens a TUN/TAP device).
First, I tried heapy; unfortunately this didn't work for me. Every time my code tried to execute hpy().heap(), the program froze. Not wanting to waste too much time, I decided to try Valgrind.
I tried valgrind with massif:
# valgrind --tool=massif ./my_prog.py --some-options value
I think the issue is related to profiling Python programs. I tried my program (which runs as root) and no massif output file was generated. I also wasn't able to generate an output file with another Python program (which doesn't run as root). However, a simple C test program worked fine and produced the massif file.
What are the issues preventing Valgrind and massif from working correctly with Python programs?
Instead of having the script launch the interpreter via its shebang, invoking the interpreter directly under Valgrind, with the script as a parameter, solves the problem:
valgrind --tool=massif python my_script.py
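The run writes a file named massif.out.<pid>, which can then be rendered with the ms_print tool that ships with Valgrind (the number below is a placeholder):
ms_print massif.out.12345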