I think I may have been googling this wrong, but I was wondering if I could have my Raspberry Pi execute a command after I connect to it via SSH.
Workflow:
1) SSH into Pi via terminal
2) Once logged in, the Pi executes a command to display the current temperature (I already know the command)
The Pi already outputs:
Linux raspberrypi 3.10.25+ #622 PREEMPT Fri Jan 3 18:41:00 GMT 2014 armv6l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Jul 11 15:11:35 2014
I could be misunderstanding this altogether; perhaps the command could even be executed and its output shown in the banner above.
What you're looking for is the motd (message of the day), which is quite common on various Linux distros. It is not written in Python (but can be). The motd runs one or more commands on login via SSH and constructs a message which it displays to the user. More information on this (which actually has the temperature listed) can be found here: Raspberry Pi Welcome Message. The exact setup will likely differ slightly between Linux distros. A good git repo with a nice message can also be found here: Raspberry Pi Motd
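If you only need the temperature line, a minimal sketch of the same idea is a small Python script called from ~/.profile (or wrapped in a snippet under /etc/profile.d) so it runs at every interactive login. It assumes the SoC temperature is exposed at /sys/class/thermal/thermal_zone0/temp, which is common on the Raspberry Pi; adjust the path for your board.
#!/usr/bin/env python
# Minimal login-message sketch: print the SoC temperature.
# Assumes the kernel exposes it at the path below.

def cpu_temperature_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

if __name__ == "__main__":
    print("CPU temperature: %.1f C" % cpu_temperature_c())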
Yup, this can be done. The method I am aware of uses subprocess (https://docs.python.org/2/library/subprocess.html). As long as you know the name of the command and its arguments (if any), you can pass them into subprocess. Here is an example of how to run a command over SSH from a Python script (taken from http://python-for-system-administrators.readthedocs.org/en/latest/ssh.html):
import subprocess
import sys

HOST = "www.example.org"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND = "uname -a"

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print >>sys.stderr, "ERROR: %s" % error
else:
    print result
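The snippet above is Python 2. On Python 3, the same idea can be written a bit more compactly with subprocess.run (a sketch; capture_output needs Python 3.7+):
import subprocess

HOST = "www.example.org"
COMMAND = "uname -a"

# Run the remote command and capture its output as text
proc = subprocess.run(["ssh", HOST, COMMAND], capture_output=True, text=True)
if proc.returncode != 0:
    print("ERROR:", proc.stderr)
else:
    print(proc.stdout)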
Just append the command to the ssh command.
ssh user@server "echo test"
"echo test" is then executed on the remote machine.
You can execute a command from bash without actually logging into the other computer by placing the command after the ssh command:
$ ssh pi@pi_addr touch wat.txt
would create the text file ~/wat.txt on the Pi.
This is a little cumbersome for automation, however, since a password must be provided. You can set up a public/private RSA key pair on your computer so you can log in to your Pi remotely without a password. Simply do the following:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter the same passphrase again:
$ ssh pi@pi_addr mkdir -p .ssh
$ cat .ssh/id_rsa.pub | ssh pi@pi_addr 'cat >> .ssh/authorized_keys'
Don't enter a passphrase and leave everything at the defaults when running ssh-keygen. Now you will be able to run ssh pi@pi_addr without entering a password.
Example python file:
import subprocess

SERVER = "pi@pi_addr"
# Pass the command as a list so no local shell is involved
subprocess.call(["ssh", SERVER, "touch wat.txt"])
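To tie this back to the original question, here is a rough sketch (untested) that runs a temperature command on the Pi over the now password-less connection; vcgencmd measure_temp is only an assumed example, so substitute whatever command you already use:
import subprocess

SERVER = "pi@pi_addr"               # placeholder address as above
TEMP_CMD = "vcgencmd measure_temp"  # assumption: replace with your own temperature command

# Passing a list avoids invoking a local shell
output = subprocess.check_output(["ssh", SERVER, TEMP_CMD])
print(output.decode().strip())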
Related
I am trying to SSH into another host from within a python script and run a command that requires sudo.
I'm able to ssh from the python script as follows:
import subprocess
import sys
import json
HOST="hostname"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND="sudo command"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print(error)
else:
    print(result)
But I want to run a command like this after sshing:
extract_response = subprocess.check_output(['sudo -u username internal_cmd',
                                            '-m', 'POST',
                                            '-u', 'jobRun/-/%s/%s' % (job_id, dataset_date)])
return json.loads(extract_response.decode('utf-8'))[0]['id']
How do I do that?
Also, I don't want to provide the sudo password every time I run this sudo command, so I have added this command (i.e., internal_cmd from above) at the end of the sudoers file (via visudo) on the host I'm trying to ssh into. But still, when typing this command directly in the terminal like this:
ssh -t hostname sudo -u username internal_cmd -m POST -u/-/1234/2019-01-03
I am being prompted to give the password. Why is this happening?
You can pipe the password in by using the -S flag, which tells sudo to read the password from standard input.
echo 'password' | sudo -S [command]
You may need to play around with how you fit this into the ssh command, but it should do what you need.
Warning: you may know this already, but never store your password directly in your code, especially if you plan to push the code to something like GitHub. If you are unaware of this, look into using environment variables or storing the password in a separate file.
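For completeness, a hedged sketch of how the -S idea could look from Python, reading the password from an environment variable (the name SUDO_PASS is just an assumption) rather than hard-coding it; the host and command are placeholders from the question:
import os
import subprocess

HOST = "hostname"                             # placeholder
COMMAND = "sudo -S -u username internal_cmd"  # -S: sudo reads the password from stdin
sudo_pass = os.environ["SUDO_PASS"]           # assumed to be set outside the script

proc = subprocess.Popen(["ssh", HOST, COMMAND],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate(input=(sudo_pass + "\n").encode())
print(out.decode())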
If you don't want to worry about where to store the sudo password, you might consider adding the script's user to the sudoers file with passwordless sudo access restricted to only the command you want to run. See the sudoers(5) man page.
You can further restrict command access by prepending a command= option to the beginning of your authorized_keys entry (a rough example of both is sketched below). See the sshd(8) man page.
If you can, disable SSH password authentication so that only key authentication is allowed. See the sshd_config(5) man page.
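A rough sketch of those two pieces, with made-up names and paths that you would adjust to your setup:
# /etc/sudoers.d/internal_cmd  -- edit with visudo
username ALL=(ALL) NOPASSWD: /usr/local/bin/internal_cmd

# ~/.ssh/authorized_keys on the remote host -- key is pinned to one command
command="/usr/local/bin/internal_cmd" ssh-rsa AAAA... user@client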
I'm trying to install a piece of software (it's basically a shell script) on a remote Linux machine using paramiko.
I know that the software, once run, will prompt for end-user license acceptance (y or n).
So I wrote the script as below:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('remoteLinuxHost', username=r'user', password='pass')
stdin, stdout, stderr = ssh.exec_command(r'/full/path/to/software')
stdin.write("y\n")
stdin.flush()
print "output ", stdout.readlines()
This works absolutely fine on a Red Hat machine, but the same code hangs forever on SUSE Linux! (It just prints "output" and the script keeps running.)
NOTE:
1. When it hangs on SUSE, I manually checked
ps -ef | grep 'software', which displayed:
root 10941 10938 1 21:10 ? 00:00:00 /bin/sh software
root 11050 11031 0 21:10 ? 00:00:00 more softwareEndUserLicense.txt
This confirmed that the prompt (y or n) is not being answered here.
I've tried using the ssh.invoke_shell() method with the same result, and have also cross-checked the permissions of the software, run it with sudo, and preceded it with /bin/bash; nothing works.
Please suggest.
Thank you.
I've got a USB GPIO electronic gizmo attached to a desktop PC running Linux Mint 17 "Mate"; in this environment the gizmo appears as /dev/ttyACM0. I've written a GUI Python 2.7/Tkinter program to control the gizmo via the pySerial module. The program works when run from the console using sudo.
Being a GUI program, I want to be able to run it from the "Mate" desktop - but I can't, because accessing the gizmo as a serial device requires root privileges obtained via sudo, which has to be invoked from a terminal.
# here's the offending code
import serial
numa = serial.Serial("/dev/ttyACM0", 19200, timeout=1)
....
How do I invoke the "Enter your password..." routine from within the Python program so a raw user doesn't have to open a Terminal to enter the password?
Thanks for any advice you can provide!
I can't answer your question, but instead I'm going to solve your problem.
When you list the device file, you'll see something like this:
$ ls -l /dev/ttyACM0
crw-rw---- 1 root dialout 188, 0 Apr 4 11:22 /dev/ttyACM0
Both the owner (root) and the owner group (dialout) have read-write access (rw-), while everybody else cannot access the device at all (---). Therefore, instead of giving the program root access to your system, you can simply add the user(s) to the dialout group:
$ sudo usermod -aG dialout <username>
Logging out and back in will be necessary, but afterwards your script will be able to both read and write to the serial interface without needing a root password.
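If you want the script itself to fail with a clearer message when the user has not been added to the group yet, a small sanity check along these lines could help (a sketch; it assumes the group is called dialout as above):
import grp
import os

def in_group(group_name):
    # True if the current process runs with the group's GID
    gid = grp.getgrnam(group_name).gr_gid
    return gid == os.getgid() or gid in os.getgroups()

if not in_group("dialout"):
    raise SystemExit("Add your user to the 'dialout' group "
                     "(sudo usermod -aG dialout <username>) and log back in.")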
Use gksudo rather than sudo:
from subprocess import call
call('gksudo -D "Program requires root privilege" your-command-here', shell=True)
For example:
call('gksudo -D "Override sudo message" cp /etc/hosts /home/new.hosts', shell=True)
I previously wrote a program for a Linux environment which automatically runs the sshfs binary as a user and enters a stored SSH private key passphrase (the public half is already on the remote server). I had this working with simple pexpect commands on one server (Ubuntu Server 14.04, ssh version 6.6, sshfs version 2.5), but this single piece of the program is proving to be an issue now that the application has been moved to a Red Hat machine (RHEL 6.5, ssh version 5.3, sshfs version 2.4). This simple step has been driving me crazy all day, so now I turn to this community for support. My original code (simplified) looked like this:
proc = pexpect.spawn('sshfs %s@%s:%s...')  # many options, unrelated
proc.expect([pexpect.EOF, 'Enter passphrase for key.*', pexpect.TIMEOUT], timeout=30)
if proc.match_index == 1:
    proc.sendline('thepassphrase')
This runs as expected on Ubuntu but not on RHEL. I have also tried the fallback method of piping to subprocess, without much success either:
proc = subprocess.Popen('sshfs %s@%s:%s...', shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.stdin.write('thepassphrase' + '\n')
proc.stdin.flush()
Of course I have tried many slight variations of this without success, and of course the command runs fine when I run it manually.
Update 3/3:
I have also today manually compiled and installed ssh 6.6 in rhel to see if that was causing the issue, but the issue persists even with the new ssh binary.
Update 3/9:
Today I have found one particular solution which works, but I am not happy with the fact that many other different solutions did not work, and I am still looking for the answer as to why. Here is the best I could do so far:
proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P %s -N '' -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True)
time.sleep(2)
proc = subprocess.Popen(cmd, shell=True)
proc.communicate()
time.sleep(1)
proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P '' -N %s -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True)
This removes the passphrase from the key, mounts the drive, and then re-adds the passphrase. Obviously I don't like this solution, but it will have to do until I can get to the bottom of this.
Update 3/23:
Due to my stupidity I did not see the immediate problem with this method until now, and I am back to the drawing board. While this workaround does work the first time the connection is made, the -o reconnect option obviously fails because sshfs does not know the passphrase to reconnect. This means the solution is no longer viable, and I would really appreciate it if anyone knows how to get the pexpect version working.
After talking to the developer personally, I have determined that this is now a known bug and that there is no proper way to run the command in the style specified in the question. However, the developer quickly came forward with an equally useful method which involves spawning a separate shell process.
passphrase = 'some passphrase'
cmd = "sudo -H -u somebody sshfs somebody#somewhere:/somewhere-else /home/somebody/testmount -o StrictHostKeychecking=no -o nonempty -o reconnect -o workaround=all -o IdentityFile=/somekey -o follow_symlinks -o cache=no"
bash = pexpect.spawn('bash', echo=False)
bash.sendline('echo READY')
bash.expect_exact('READY')
bash.sendline(cmd)
bash.expect_exact('Enter passphrase for key')
bash.sendline(passphrase)
bash.sendline('echo COMPLETE')
bash.expect_exact('COMPLETE')
bash.sendline('exit')
bash.expect_exact(pexpect.EOF)
I have tested this solution and it has worked as a good workaround without very much extra overhead. You may view his full response here.
A little context is in order for this question: I am making an application that copies files/folders from one machine to another in Python. The connection must be able to go through multiple machines; I quite literally have the machines connected in series, so I have to hop through them until I get to the correct one.
Currently, I am using Python's subprocess module (Popen). As a very simplistic example, I have:
import subprocess
# need to set strict host checking to no since we connect to different
# machines over localhost
tunnel_string = "ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string.split())
# Do work, copy files etc. over ssh on localhost with port 9999
proc.terminate()
My question:
When doing it like this, I cannot seem to get agent forwarding to work, which is essential in something like this. Is there a way to do this?
I tried using the shell=True keyword in Popen, like so:
tunnel_string = "eval `ssh-agent` && ssh-add && ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string, shell=True)
# etc
The problem with this is that the names of the machines are given by user input, meaning someone could easily inject malicious shell code. A second problem is that I then have a new ssh-agent process running every time I make a connection.
I have a nice function in my bashrc which identifies already-running ssh-agents, sets the appropriate environment variables, and adds my SSH key, but of course subprocess cannot reference functions defined in my bashrc. I tried setting executable="/bin/bash" together with shell=True in Popen, to no avail.
You should give Fabric a try.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
The program below will give you a test run.
First install Fabric with pip install fabric, then save the code below in fabfile.py:
from fabric.api import *

env.hosts = ['server url/IP']  # change to your server
env.user = 'username'          # username for the server
env.password = 'password'      # password for the server

def run_interactive():
    with settings(warn_only=True):
        cmd = 'clear'
        while cmd != 'stop fabric':
            run(cmd)
            cmd = raw_input('Command to run on server: ')
Change to the directory containing your fabfile and run fab run_interactive; each command you enter will then be run on the server.
I tested your first simplistic example and agent forwarding worked. The only thing I can see that might cause problems is if the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are not set correctly in the shell you execute your script from. You might use ssh -v to get a better idea of where things are breaking down.
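One way to make that dependency explicit in the script is to check for the agent socket before spawning the tunnel; a sketch based on the example from the question:
import os
import subprocess

# ssh locates the agent via SSH_AUTH_SOCK; fail early if it is missing
if "SSH_AUTH_SOCK" not in os.environ:
    raise SystemExit("SSH_AUTH_SOCK is not set - is ssh-agent running?")

tunnel_string = ("ssh -v -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 "
                 "-ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN")
# env=os.environ.copy() is the default behaviour; it is spelled out here only to
# make the dependency on the agent variables visible
proc = subprocess.Popen(tunnel_string.split(), env=os.environ.copy())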
Try setting up an SSH config file: https://linuxize.com/post/using-the-ssh-config-file/
I frequently need to tunnel through a bastion server, and I use a configuration like the one below in my ~/.ssh/config file. Just change the host and user names. This also presumes that you have entries for these host names in your hosts file (/etc/hosts).
Host my-bastion-server
    Hostname my-bastion-server
    User user123
    AddKeysToAgent yes
    UseKeychain yes
    ForwardAgent yes

Host my-target-host
    HostName my-target-host
    User user123
    AddKeysToAgent yes
    UseKeychain yes
I then gain access with syntax like:
ssh my-bastion-server -At 'ssh my-target-host -At'
And I issue commands against my-target-host like:
ssh my-bastion-server -AT 'ssh my-target-host -AT "ls -la"'
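With that config in place, the same nested command can also be issued from Python without interactive prompts; a sketch reusing the host aliases above (capture_output needs Python 3.7+):
import subprocess

# The host aliases come from ~/.ssh/config; the inner ssh runs on the bastion
result = subprocess.run(
    ["ssh", "my-bastion-server", "-AT", 'ssh my-target-host -AT "ls -la"'],
    capture_output=True, text=True)
print(result.stdout)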