How to run snakemake on a Linux backend without output in PuTTY?

I try to run all the rules with the following commands:
touch scripts/*.py
snakemake --cores <YOUR NUMBER>
The problem is that my local internet connection is unstable. I can submit the command through PuTTY to the Linux computation platform, but output keeps coming back to the PuTTY interface, so when my local connection is interrupted, the running code is interrupted too.
Is there a way to let the code run on the Linux backend by itself, with the output written to a log file at the end?
This could be a very basic question.

This is a common problem (not just for snakemake), and there are several options, at least the following:
use a program whose sessions persist across multiple connections: popular options are screen and tmux. The workflow looks like this: log on to the server, launch screen or tmux, and once inside the program launch the code you would like to run; log off, and the next time you log in to the server you can reconnect to the previous session and observe the computations that were done in the meantime. I recommend tmux; see this tmux tutorial and the command sketch after this list.
use nohup, which launches the computation in the background so that it continues running on the server after you disconnect:
nohup snakemake --cores <YOUR NUMBER>
Note that with this option, if you want to see the progress of computation, you will need to watch the appropriate .log inside the .snakemake folder.
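For the tmux route, the whole cycle looks roughly like this (a sketch; the session name smk is an arbitrary choice):
tmux new -s smk                    # start a named session on the server
snakemake --cores <YOUR NUMBER>    # run the workflow inside tmux
# detach with Ctrl-b d, log off; later, after logging back in:
tmux attach -t smk                 # reattach and check progress
With nohup you can also redirect the output to a log file of your choosing explicitly (snakemake.log is just an example name):
nohup snakemake --cores <YOUR NUMBER> > snakemake.log 2>&1 &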

Related

How to access terminal Python process running on server from other script

Consider the situation:
I have an Ubuntu server with Python, tensorflow and other libs installed.
My code is a Python script that loads several models, some of them pretrained .bin vector files, some files from server folders, etc.
When I run the script in a terminal it launches an interactive session where I input some text and the script answers me back (like a chatbot). While answering, it calls my AI models (Tensorflow, Keras).
Question: how do I access this running session from another Python script? I mean, I want to use it like a function: send text and receive the answer back.
And of course I need to run this terminal session in the background for a long time.
I read this and similar answers, but I'm not sure that is the right solution (it seems incomplete):
In Linux, how to prevent a background process from being stopped after closing SSH client
What I am asking for is commonly done with a REST server that exposes an API, which is then called from external code. But the API route isn't working here: Tensorflow throws errors when run via Flask (I was not able to fix that).
If you want your script to stay up after closing the ssh session, add & disown at the end of your execution command and it will run in the background.
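For example (chatbot.py is a stand-in for your actual script name):
python chatbot.py & disown
Note that this only keeps the process alive; to send text to it from another script you still need some inter-process channel (a socket, a named pipe, or an HTTP API), since a backgrounded process's interactive stdin is not reachable from outside.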

Unable to access pi camera through web browser

I am writing a Python CGI script that I want to run on my laptop's browser. This script will SSH into two Pis, and give the command to take a photo. The server hosting this script is on one of the Pis that I want to SSH into, and that Pi also is acting as an access point for the other Pi and my laptop to connect to (everything is a LAN, not connected to the Internet).
I am able to run this script from my laptop's browser to execute simple commands like ls -l on both Pis and print the results in the browser. But I ultimately want to give the raspistill command to both Pis. When I do this, only the Pi hosting the server takes the image; the other Pi does not. I assume it's because permissions aren't set properly for the server (I tried running the commands with sudo but still no luck). However, if I run the same script in a Python IDLE it works fine. Can somebody help me identify the issue?
Here is my script:
#! /usr/bin/env python3
from pssh import ParallelSSHClient
import cgi
print("Content-Type: text/plain\r\n")
print("\r\n ")
host = ['172.24.1.1','172.24.1.112']
user = 'XXXX'
password = 'XXXX'
client = ParallelSSHClient(host, user, password)
output = client.run_command('raspistill -o test.jpg', sudo=True)
# AMENDMENT:
for line in output['172.24.1.1'].stdout:  # works as well with '172.24.1.112'
    print(line)
AMENDMENT:
Apparently, if I output anything from the stdout it works fine. Why is this the case? Is it just waiting for me to flush the output or something? I suspect this might be an issue with the pssh package I am using.
On your Pi, go into the terminal, type sudo raspi-config, navigate with the arrow keys to the camera option, and enable it. This will restart your Pi.
From https://www.raspberrypi.org/documentation/configuration/camera.md:
Use the cursor keys to move to the camera option, and select 'enable'.
On exiting raspi-config, it will ask to reboot. The enable option
will ensure that on reboot the correct GPU firmware will be running
with the camera driver and tuning, and the GPU memory split is
sufficient to allow the camera to acquire enough memory to run
correctly.
After this, go into sudo raspi-config again and enable ssh (which is another option, just like the camera).
After thoroughly reading through the documentation for the pssh module, I found that my problem had to do with exit codes and how they are handled.
The documentation about run_command states that:
function will return after connection and authentication establishment and after commands have been sent to successfully established SSH channels.
And as a result:
Because of this, exit codes will not be immediately available even for
commands that exit immediately.
Initially, I was just blindly running run_command and expecting the commands to finish, but it turns out I need to get the exit codes to truly finish the processes the commands are running. The documentation states a couple of ways to do this:
At least one of iterating over stdout/stderr to completion, or calling client.join(output), is necessary to cause parallel-ssh to wait for commands to finish and be able to gather exit codes.
This is why in my amendment to the code, where I was outputting from stdout, the commands seemed to work properly.
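A minimal sketch of the fix, using the same (older) parallel-ssh API as the script above:
output = client.run_command('raspistill -o test.jpg', sudo=True)
client.join(output)  # block until the remote commands actually finish
for host in output:
    print(host, output[host].exit_code)  # exit codes are now available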

Python code crashes with "cannot connect to X server" when detaching ssh+tmux session

I run Python code on a remote machine (which I ssh into) and then use tmux. The code runs fine UNTIL I disconnect from the remote machine. The whole point of using tmux is so that the code continues to run even when I'm not connected to the remote machine. When I reconnect later, I get the error message:
: cannot connect to X server localhost:11.0
Does anyone have an idea why this is happening or how I can stop it?
cannot connect to X server localhost:11.0
...means that your code is trying (and failing) to connect to an X server -- a GUI environment -- presumably being forwarded over your SSH session. tmux provides session continuity for terminal applications; it can't emulate an X server.
If you want to stop it from being able to make any GUI connection at all (and perhaps, if the software is thusly written, from even trying), unset the DISPLAY environment variable before running your code.
If this causes an error or exception, the code generating that is the same code that's causing your later error.
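For instance, if the GUI connection comes from matplotlib (a common culprit, though the question doesn't name the library), you can select a non-interactive backend explicitly instead of relying on DISPLAY at all:
import matplotlib
matplotlib.use("Agg")        # render to files; no X server required
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
plt.savefig("plot.png")      # works even with DISPLAY unset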
If you want to create a fake GUI environment that will still be present, you can do that too, with Xvfb.
Some Linux distributions provide the xvfb-run wrapper, to automate setting this up for you:
# prevent any future commands in this session from connecting to your real X environment
unset DISPLAY XAUTHORITY
# run yourcode.py with a fake X environment provided by xvfb-run
xvfb-run python yourcode.py
By the way, see the question xvfb-run unreliable when multiple instances invoked in parallel for notes on a bug present in xvfb-run, and a fix available for same.
If you want an X server you can actually detach from and reattach to later, letting you run GUI applications with similar functionality to what tmux gives you for terminal applications, consider using X11vnc or a similar tool.

Is it Possible to Run a Python Code Forever?

I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps running, because I have specified repeated tasks inside the script and I don't want to stop it either. But since it is on my local machine, the script may get stopped when my internet connection is off or at night, so I can't keep it running on my PC all day.
So is there any way, website, or method where I could deploy my script and have it execute forever? I have heard of CRON jobs in cPanel, which can help with repeated tasks, but in my case I want to keep my script running on the machine until I close it myself.
Are there any such solutions? Most Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. This is what I want to know: how is that possible?
As mentioned by Jon and Vincent, it's better to run the code from a cloud service. But either way, I think what you're looking for is what to type into the terminal so that the code keeps running even after you close the terminal. This is what worked for me:
nohup python code.py &
You can add a systemd .service file (see the sketch after this list), which can have the added benefits of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance
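A minimal unit file might look like this (the service name, description, and paths are assumptions, not from the question):
[Unit]
Description=Twitter automation bot
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/bot/file.py
Restart=on-failure
PrivateTmp=true
ProtectHome=true

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/twitterbot.service, then enable and start it with sudo systemctl enable --now twitterbot. Logs are then available via journalctl -u twitterbot.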

Starting and stopping server script

I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my Python script via ssh, but how do I halt it when a test is done? If I can do it over ssh then I don't need to mess with the existing configuration, and that is a big win. The Python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP might be a popular choice; see the Python signal module docs.
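A minimal sketch of the signal-handling approach (the metrics-logging body is a placeholder):
import signal
import time

running = True

def handle_shutdown(signum, frame):
    global running
    running = False          # let the main loop exit and clean up

signal.signal(signal.SIGHUP, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

while running:
    # ... log server metrics here ...
    time.sleep(1)

print("shutting down cleanly")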
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
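For example, if the script writes its pid to /var/run/perftest.pid on startup (the path is an assumption), the bash driver can stop it over ssh with:
ssh target-server 'kill -HUP $(cat /var/run/perftest.pid)'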
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
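A sketch of the fifo approach (the pipe path and the quit command are assumptions):
import os

FIFO = '/tmp/perftest.ctl'
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

while True:
    with open(FIFO) as ctl:      # blocks until a writer opens the pipe
        cmd = ctl.read().strip()
    if cmd == 'quit':
        break
    # ... handle other commands here ...
# ... flush metrics and clean up ...
The bash driver then ends a test with: ssh target-server 'echo quit > /tmp/perftest.ctl'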
