Starting and stopping a server script - Python

I've been building a performance test suite to exercise a server. Right now I run it by hand, but I want to automate it. On the target server I run a Python script that logs server metrics and halts when I hit Enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps, naming logs, and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my Python script via ssh, but how do I halt it when a test is done? If I can do it over ssh then I don't need to mess with the existing configuration, and that is a big win. The Python script is quite simple and I don't mind rewriting it if that helps.

The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP is a popular choice; see the Python signal module documentation.
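A rough sketch of a signal-driven version of the logging script, assuming SIGHUP or SIGTERM as the shutdown signal (the metric-writing step is a placeholder):

import signal
import sys
import time

shutting_down = False

def handle_shutdown(signum, frame):
    # Just flag the main loop; cleanup happens after it exits.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGHUP, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

while not shutting_down:
    # placeholder for the real metric-logging work
    time.sleep(1)

# close log files / flush buffers here, then exit cleanly
sys.exit(0)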
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
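For the pidfile part, a minimal sketch (the path is an assumption; /var/run usually requires root, so a location the script can write to is used here):

import os

PIDFILE = "/tmp/metrics_logger.pid"  # assumed location; /var/run typically needs root

# Record our pid at startup so the tester machine can find it later.
with open(PIDFILE, "w") as f:
    f.write(str(os.getpid()))

The bash script on the tester machine can then stop the logger with something like ssh target 'kill -HUP $(cat /tmp/metrics_logger.pid)' once a JMeter run finishes.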
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
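If you do go the fifo route, a sketch might look like this (the pipe path and the command names are illustrative; in the real script the metric-logging loop would run in a separate thread, since reading the fifo blocks):

import os

FIFO_PATH = "/tmp/metrics_ctl"  # hypothetical control pipe

if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

running = True
while running:
    # Opening a fifo for reading blocks until something writes to it.
    with open(FIFO_PATH) as fifo:
        for line in fifo:
            command = line.strip()
            if command == "quit":
                running = False
                break
            # other commands ("flush", "rotate", ...) could be handled here

The tester side would then send ssh target 'echo quit > /tmp/metrics_ctl' when a test is done.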

Related

How to run snakemake on a Linux backend without output in PuTTY?

I try to run all the rules with the following commands:
touch scripts/*.py
snakemake --cores <YOUR NUMBER>
The problem is that my local internet connection is unstable. I can submit the command through PuTTY to the Linux computation platform, but output keeps coming back to the PuTTY interface, so when my local connection is interrupted, the running code is interrupted as well.
Is there any way to let the code just run on the Linux backend, with the output written to a log file at the end?
This could be a very basic question.
This is a common problem (not just for snakemake), and there are several options, at least the following:
use a program that persists across multiple connections: popular options are screen and tmux. The workflow would look like this: log on to the server, launch screen or tmux, launch the code you want to run inside that session, log off; the next time you log in to the server, you can reconnect to the previous session and observe the computations that were done in the meantime. I recommend tmux; see a tmux tutorial to get started.
use nohup: this launches the computation in the background, and it will continue running on the server if you disconnect:
nohup snakemake --cores <YOUR NUMBER> &
Note that with this option, if you want to see the progress of computation, you will need to watch the appropriate .log inside the .snakemake folder.

Best practice for an infinite loop Python script that runs on Windows as a Service

I have a Python script that reads data from an OPC DA server and then pushes it to InfluxDB.
So basically it connects to the OPC DA server using the OpenOPC library and to InfluxDB using the InfluxDB Python client, then starts an infinite while loop that runs every 5 seconds to read and push data to the database.
I have installed the script as a service using NSSM. What is the best practice to ensure that the script runs 24/7? How do I avoid crashes?
Should I daemonize the script?
Thank you in advance,
Bnjroos
I suggest at least adding logging at the script level. You could also use custom exit codes from Python so NSSM knows to report a failure. The failure would most likely happen when connecting to your services (the network being down or something similar), so you could catch those exceptions and exit with a code that tells NSSM to restart the service. If it's running every 5 seconds you would probably know very soon.
Ensuring availability and avoiding crashes is about your code more than infrastructure, hence the above recommendations.
I believe using NSSM (for scheduling, restarting, and such) is better than daemonizing, since by daemonizing you're basically re-implementing NSSM's functionality in your script and potentially adding more code that may fail.
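A minimal sketch of the logging-plus-exit-code suggestion above (read_opc_data and write_to_influx are hypothetical stand-ins for the OpenOPC and InfluxDB calls, and the exit code 2 is an arbitrary value you would configure NSSM to treat as a restartable failure):

import logging
import sys
import time

logging.basicConfig(
    filename="collector.log",  # assumed log location
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def read_opc_data():
    # stand-in for the OpenOPC read
    raise NotImplementedError

def write_to_influx(data):
    # stand-in for the InfluxDB client write
    raise NotImplementedError

try:
    while True:
        data = read_opc_data()
        write_to_influx(data)
        time.sleep(5)
except ConnectionError:
    logging.exception("Lost connection to OPC or InfluxDB")
    sys.exit(2)  # non-zero exit code so NSSM restarts the service
except Exception:
    logging.exception("Unexpected error")
    sys.exit(1)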

Killing a subprocess started via sudo

In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (reparented to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question; though I do not think this is a good thing to do, it is what you asked for. I would wrap that child process in a small program that listens on stdin. You can then run that wrapper via sudo; it can start the process without a further sudo, it will know the pid, and it will have the rights needed to kill the process when you ask it to through stdin.
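A rough sketch of such a wrapper (the command path and the "quit" keyword are illustrative; the wrapper itself is the thing you run via sudo):

#!/usr/bin/env python
import subprocess
import sys

# This wrapper already runs under sudo, so no further sudo is needed here.
child = subprocess.Popen(['/usr/bin/somecommand'])

# Wait for a shutdown request from the parent on stdin.
while True:
    line = sys.stdin.readline()
    if not line or line.strip() == 'quit':
        # Either an explicit "quit" or EOF because the parent closed the
        # pipe (e.g. it exited); clean up the child in both cases.
        break

child.terminate()
child.wait()

The parent would start it with something like subprocess.Popen(['sudo', './wrapper.py'], stdin=subprocess.PIPE) and either write b'quit\n' or simply let the pipe close when it exits.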
However, such a situation generally means passwordless sudo and poor security. The more common technique is to lower your program's privileges, not to elevate them. In that case you create a runner program that is started by the superuser; it then starts your main program with lowered privileges and listens on a pipe for communication. When a command needs to be run, your main program tells the runner, and the runner does the job. When the command needs to be terminated, you again tell the runner via the pipe (a rough sketch follows the Apache example below).
The common rules are:
If you need superuser rights, you should give them to the very parent process.
If a child process needs to do a privileged operation, it asks the top-level process to do that for it.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more holes in security it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and preforked worker processes that do not run as root/wheel/whatever the superuser account is called.
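To make the runner idea above concrete, here is a rough sketch under several assumptions: the runner itself is started as root; main.py, the "nobody" account, and the start/stop protocol are all illustrative; and the main program writes its requests to stdout, one per line, and flushes them:

#!/usr/bin/env python
import os
import pwd
import subprocess

def demote(user="nobody"):
    info = pwd.getpwnam(user)
    def set_ids():
        # Runs in the child between fork and exec: drop group first, then user.
        os.setgid(info.pw_gid)
        os.setuid(info.pw_uid)
    return set_ids

# Start the unprivileged main program; its stdout serves as the command pipe.
main = subprocess.Popen(
    ["python", "main.py"],
    stdout=subprocess.PIPE,
    preexec_fn=demote(),
    universal_newlines=True,
)

privileged_child = None
for command in main.stdout:
    command = command.strip()
    if command == "start" and privileged_child is None:
        privileged_child = subprocess.Popen(["/usr/bin/somecommand"])
    elif command == "stop" and privileged_child is not None:
        privileged_child.terminate()
        privileged_child = None

# The main program exited and closed its stdout: clean up anything left.
if privileged_child is not None:
    privileged_child.terminate()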
This will raise OSError: [Errno 1] Operation not permitted on the last line:
p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to", stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()

remotely start Python program in background

I need to use a fabfile to remotely start a program on remote boxes from time to time and get the results. Since the program takes a long while to finish, I want to make it run in the background so I don't need to wait. So I tried os.fork() to make it work. The problem is that when I ssh to the remote box and run the program with os.fork() there, it works in the background fine, but when I tried to use fabric's run or sudo to start the program remotely, os.fork() did not work; the program just died silently. So I switched to python-daemon to daemonize the program. For a good while it worked perfectly. But now that my program reads some Python shelve dicts, python-daemon no longer works. It seems that if you use python-daemon, the shelve dicts cannot be loaded correctly, and I don't know why. Besides os.fork() and python-daemon, is there anything else I can try to solve my problem?
If I understand your question right, I think you're making this far too complicated. os.fork() is for multiprocessing, not for running a program in the background.
Let's say for the sake of discussion that you wanted to run program.sh and collect what it sends to standard output. To do this with fabric, create locally:
fabfile.py:
from fabric.api import run

def runmyprogram():
    run('./program.sh > output 2> /dev/null < /dev/null &')
Then, locally, run:
fab -H remotebox runmyprogram
The program will execute remotely, but fabric will not wait for it to finish. You'll need to harvest the output files later, perhaps using scp. The "&" makes this run in the background on the remote machine, and output redirection is necessary to avoid a hung fabric session.
If you don't need to use fabric, there are easier ways of doing this. You can ssh individually and run
nohup ./program.sh > output &
then come back later to check output.
If this is something that you'll do on a regular basis, this might be the better option, since you can just set up a cron job to run every so often, and then collect the output whenever you want.
If you'd rather not harvest the output files later, you can use:
fabfile.py:
from fabric.api import run

def runmyprogram():
    run('./program.sh')
Then, on your local machine:
fab -H remotebox runmyprogram > output &
The jobs will run remotely, and put all their output back into the local output file. This runs in the background on your local machine, so you can do other things. However, if the connection between your local and remote machines might be interrupted, it's better to use the first approach so the output is always safely stored on the remote machines.
For those who come across this post in the future: python-daemon can still work. Just be sure to open the shelve dicts within the daemonized process. Previously the shelve dicts were loaded in the parent process, and when python-daemon spawned the child process the dict handles were not passed along correctly. Once we fixed this, everything worked again.
Thanks to those who offered valuable comments on this thread!
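For illustration, a minimal sketch of that fix, assuming the python-daemon package and a hypothetical data.shelve file: open the shelf after entering the DaemonContext, not before.

import shelve

import daemon  # the python-daemon package

def main():
    # Open the shelve inside the daemonized process, not in the parent,
    # so the underlying file handles belong to the child process.
    db = shelve.open("data.shelve")  # hypothetical path
    try:
        # ... long-running work using db ...
        pass
    finally:
        db.close()

with daemon.DaemonContext():
    main()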

running a python script on a remote computer

I have a Python script and am wondering: is there any way to ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up again automatically instead of having to remote-desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
There are many ways. In the case of Windows, even a simple looping batch file would probably do: just have it start the script in a loop (whenever the script crashes, control returns to the shell and the script gets restarted).
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you are asking by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
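A rough sketch of that idea (the port and the get_status function are made up for illustration): the remote script exposes a small control endpoint, typically from a background thread, alongside its main work:

from xmlrpc.server import SimpleXMLRPCServer

def get_status():
    # stand-in for whatever state the script should report
    return "running"

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(get_status)
server.serve_forever()

From another machine, xmlrpc.client.ServerProxy("http://remotebox:8000/").get_status() would then return the reported state.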
Depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX only; you can clone a subset of its functionality though.
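A minimal sketch of the catch-and-restart approach (do_work is a hypothetical stand-in for the script's real job; a separate supervisor process would wrap a subprocess call in a similar loop instead):

import logging
import time

logging.basicConfig(filename="worker.log", level=logging.INFO)

def do_work():
    # stand-in for the script's real job
    raise NotImplementedError

while True:
    try:
        do_work()
    except Exception:
        # Log the failure and restart after a short pause instead of dying.
        logging.exception("Worker crashed, restarting in 10 seconds")
        time.sleep(10)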
