I am writing a Python CGI script that I want to run from my laptop's browser. This script will SSH into two Pis and issue the command to take a photo. The server hosting this script is on one of the Pis I want to SSH into, and that Pi also acts as an access point for the other Pi and my laptop to connect to (everything is on a LAN, not connected to the Internet).
I can successfully run this script from my laptop's browser to execute simple commands like ls -l on both Pis and print out the results in the browser. But I ultimately want to issue the raspistill command to both Pis. When I do this, only the Pi hosting the server takes the image; the other Pi does not. I assumed permissions weren't set properly for the server (I tried running the commands with sudo, but still no luck). However, if I run the same script in Python IDLE, it works fine. Can somebody help me identify the issue?
Here is my script:
#! /usr/bin/env python3
from pssh import ParallelSSHClient
import cgi
print("Content-Type: text/plain\r\n")
print("\r\n ")
host = ['172.24.1.1','172.24.1.112']
user = 'XXXX'
password = 'XXXX'
client = ParallelSSHClient(host, user, password)
output = client.run_command('raspistill -o test.jpg', sudo=True)
# AMENDMENT:
for line in output['172.24.1.1'].stdout:  # works as well with '172.24.1.112'
    print(line)
AMENDMENT:
Apparently, if I output anything from stdout, it works fine. Why is this the case? Is it just waiting for me to flush the output or something? I suspect this might be an issue with the pssh package I am using.
On your Pi, go into the terminal and type sudo raspi-config, then navigate with the arrow keys to the camera option and enable it. This will restart your Pi.
From https://www.raspberrypi.org/documentation/configuration/camera.md:
Use the cursor keys to move to the camera option, and select 'enable'.
On exiting raspi-config, it will ask to reboot. The enable option
will ensure that on reboot the correct GPU firmware will be running
with the camera driver and tuning, and the GPU memory split is
sufficient to allow the camera to acquire enough memory to run
correctly.
After this, go into sudo raspi-config again and enable SSH (another option in the same menu, just like the camera). Link for this here.
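If you prefer to script this instead of navigating the menus, recent versions of raspi-config also have a non-interactive mode (an assumption about your raspi-config version; 0 means "enable" here):
sudo raspi-config nonint do_camera 0
sudo raspi-config nonint do_ssh 0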
After thoroughly reading through the documentation for the pssh module, my problem had to do with the exit codes and how they are handled.
The documentation about run_command states that:
function will return after connection and authentication establishment and after commands have been sent to successfully established SSH channels.
And as a result:
Because of this, exit codes will not be immediately available even for
commands that exit immediately.
Initially, I was just blindly running run_command and expecting the commands to finish, but it turns out I need to gather the exit codes to truly finish the processes the commands are running. The documentation states a couple of ways to do this:
At least one of:
- iterating over stdout/stderr to completion, or
- calling client.join(output)
is necessary to cause parallel-ssh to wait for commands to finish and be able to gather exit codes.
This is why in my amendment to the code, where I was outputting from stdout, the commands seemed to work properly.
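For illustration, here is a minimal sketch of the join-based fix, using the same hosts and credentials as the script above (the exit_code attribute follows the parallel-ssh API that the question's code already uses):
from pssh import ParallelSSHClient

host = ['172.24.1.1', '172.24.1.112']
client = ParallelSSHClient(host, user='XXXX', password='XXXX')
output = client.run_command('raspistill -o test.jpg', sudo=True)

# Block until the remote commands have actually finished, rather than
# returning as soon as they have been sent to the SSH channels.
client.join(output)

for h in host:
    print(h, 'exit code:', output[h].exit_code)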
Related
I try to run all the rules with the following commands:
touch scripts/*.py
snakemake --cores <YOUR NUMBER>
The problem is that my local Internet connection is unstable. I can submit the command through PuTTY to the Linux computation platform, but output is continuously returned to the PuTTY interface, so when my local Internet connection is interrupted, the running code is interrupted as well.
Is there any method to let the code just run on the Linux backend, with the output written to a log file at the end?
This could be a very basic question.
This is a common problem (not just for snakemake), and there are several options, at least the following:
use a program whose sessions persist across multiple connections: popular options are screen and tmux. The workflow looks like this: log on to the server, launch screen or tmux, launch the code you would like to run from inside that session, and log off; the next time you log in to the server, you can reattach to the previous session and observe the computation that was done in the meantime (see the sketch after this list). I recommend tmux; see this tmux tutorial.
use nohup, which launches the computation in the background so that it continues running on the server after you disconnect:
nohup snakemake --cores <YOUR NUMBER> &
Note that with this option, if you want to see the progress of the computation, you will need to watch the appropriate .log file inside the .snakemake folder.
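For illustration, the tmux workflow from the first option might look like this (default key bindings; the session name is arbitrary):
tmux new -s snakemake
snakemake --cores <YOUR NUMBER>
# detach with Ctrl-b d and log off; later, after logging back in:
tmux attach -t snakemake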
I want to run the following command in a terminal: sudo service motion start.
If I do that myself, it works fine: the camera streams on a specific port. But if I run it from Python with os.system("sudo service motion start"), it still starts the service on that port but won't show the camera stream; it shows a gray picture with a counting date and time.
Some other options, such as subprocess.call(), didn't work either.
I would like to run that particular command from Python. I don't mind if a terminal window opens (and closes afterwards), but ideally it wouldn't open and the command would simply run somewhere in the background.
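For reference, the subprocess.call() equivalent of the os.system line above is roughly this (a sketch; it inherits the same environment as os.system, so it may show the same gray-picture symptom):
import subprocess

# Equivalent of os.system("sudo service motion start"), without going
# through a shell.
subprocess.call(["sudo", "service", "motion", "start"])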
I need to execute a Python script on a remote server (accessed through PuTTY), but I don't have a stable Internet connection, and every time I execute the script I run into problems after several minutes because my Internet disconnects.
How do I execute the script remotely without staying connected to the server?
(e.g. I connect to server, run script, and can logout while executing)
You can use GNU Screen: it opens a background terminal and keeps a shell active even through network disruptions.
Start it by typing screen in your terminal and execute your script there; even if you lose the connection, it won't kill the process.
Here you will find a well-explained how-to for this program. I use it for my regular day-to-day work on remote machines.
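For illustration, a typical screen session looks like this (default key bindings; your_script.py is a placeholder):
screen                    # start a new session
python your_script.py     # run the script inside it
# detach with Ctrl-a d and disconnect; after logging back in:
screen -r                 # reattach to the session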
Try this:
nohup your_script >/dev/null 2>&1 &
The program will keep running in the background.
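If you want to keep the output instead of discarding it, redirect it to a log file and inspect it after reconnecting:
nohup your_script > output.log 2>&1 &
tail -f output.log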
I have a remote server through Blue Host that's intended to run a server based on Twisted for Python. The only access I have to it is over SSH, so to keep Python running after I log out I tried using nohup python server.py & and screen -dm python server.py, getting the same results for each. Everything works fine until I log out of SSH - even though Python is running in the background as expected, once I've logged out, my client can no longer communicate with the server. The strange part is that if I log back in over SSH and check the running processes with ps aux, I see Python running and my client can successfully communicate with the server again. Even if I don't type anything at all once I log back in, everything works as expected. But, of course, as soon as I log back out, it's as if the server is gone.
I've contacted support for the hosting service in case this is some oddity on their end, but hopefully this is something that can be resolved on my end instead.
Edit: Looks like Blue Host doesn't want me doing server-y stuff without buying the VPS upgrade so it looks like that's the big problem.
Edit 2: Okay, so in case anybody ends up having a similar problem, here's what the main issue turned out to be. I was mistaken in my original description: I was able to connect to the server, but I was getting kicked off immediately because of what turned out to be a MySQL error. Apparently, trying to connect to a localhost database with no active connection somehow causes problems, so I changed the MySQL connection command to connect to my site's IP address instead, even though it is the same IP as the server. That seemed to do the trick for my main issue.
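For illustration, the change described above amounts to something like this (hypothetical credentials; MySQLdb as the client library; replace the address with your site's actual IP):
import MySQLdb

# Connecting via localhost triggered the immediate disconnect described above:
# db = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb")

# Connecting via the site's IP address, even though it is the same machine,
# worked reliably:
db = MySQLdb.connect(host="203.0.113.10", user="me", passwd="secret", db="mydb")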
Don't use these methods to keep the server process running. Instead, try supervisor (apt-get install supervisor). It allows you to daemonize your process and gives you the ability to stop/restart it, etc.
Here's a sample config entry (/etc/supervisor/supervisord.conf):
[program:my_server]
command=python /path/to/server/server.py
directory=/path/to/server/
autostart=true
autorestart=true
stdout_logfile=/var/log/server.log
stderr_logfile=/var/log/server_error.log
user=your_linux_user_name
After you edit your config, do
sudo service supervisor stop
sudo service supervisor start #need to do this - doing a `restart` doesn't reload the config file!
Your server should now be running properly. You can manage its lifecycle via sudo supervisorctl.
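Once it is running, some useful lifecycle commands for the program defined above (the name my_server comes from the [program:my_server] section):
sudo supervisorctl status my_server
sudo supervisorctl restart my_server
sudo supervisorctl tail my_server     # view its captured stdout log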
I am using Python with wxPython for the GUI. I am trying to open an SSH tunnel; after connecting over SSH, I want a new terminal to open so I can continue my work on the local machine. How can I achieve this?
I tried subprocess, pexpect, and paramiko; all of them can connect over SSH, but none opens the new terminal.
Below is the code I tried with pexpect:
import time
import sys
import pexpect
c = pexpect.spawn("ssh -Y -L xxxx:localhost:xxxx user@host.com")
time.sleep(0.1)
c.expect("[pP]assword")
c.sendline("xxxxxx")
time.sleep(0.2)
c.interact()
c.expect(r"\[user@host\.com ~\]\$")
# here, after it connects to ssh, the command won't be executed
c.sendline("xfce4-terminal")
Edit (24/04/2013):
I am able to open a new terminal, but when the new terminal opens, control from the GUI doesn't go to it. Any help?
Opening a new local terminal and connecting an existing process to it is a little complicated. There are at least three approaches:
Open the terminal before you start connecting, and run all the code that tries to establish the connection from within it (see the sketch after this list). This is the simplest approach. The main drawback is that the terminal will appear even if the connection fails, which might be what you want to avoid.
Run the connection attempt within a session of tmux or screen, and if you detect that it succeeded, reattach that session in a new terminal.
Make your Python program provide a pty that the terminal can attach to; your program will need to hang around and pass input and output between the remote connection and the pty.
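For illustration, here is a minimal sketch of the first approach, assuming xfce4-terminal is installed (any terminal emulator with an -e flag works similarly). The new terminal runs the ssh command itself, so the password prompt and all interaction happen inside that window:
import subprocess

# Launch a terminal whose job is the ssh session itself; the ports and
# host are the placeholders from the question above.
subprocess.Popen([
    "xfce4-terminal",
    "-e", "ssh -Y -L xxxx:localhost:xxxx user@host.com",
])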