I need to automate some startup processes on a remote machine running ROS. To do this, I'm trying to use paramiko to log into the remote machine via SSH and run a launch file.
The issue I'm having is that my ~/.bashrc file is not sourced.
I can source /opt/ros/noetic/setup.bash and get roscore to work, but I then can't find any of my launch files because my workspace is not sourced:
command = 'source /opt/ros/noetic/setup.bash && roscore'
My ~/.bashrc file contains both source /opt/ros/noetic/setup.bash and source /home/ben/catkin_ws/devel/setup.bash, but whenever I source this file as before, I can't even get roscore to work -
command = 'source ~/.bashrc && roscore'
Connected to 192.168.XX.XX
bash: roscore: command not found
A minimal working example -
#! /usr/bin/env python3
import paramiko
import numpy as np
import os


class Paramiko():
    def __init__(self, hostname, username, password, port):
        self.hostname = hostname
        self.username = username
        self.password = password
        self.port = port
        paramiko.util.log_to_file("paramiko.log")

    def ExecuteCommand(self, command):
        try:
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(self.hostname, username=self.username, password=self.password)
            print("Connected to %s" % self.hostname)
        except paramiko.AuthenticationException:
            print("Failed to connect to %s due to wrong username/password" % self.hostname)
            exit(1)
        except Exception as e:
            print(e)
            exit(2)

        try:
            stdin, stdout, stderr = ssh.exec_command(command)
        except Exception as e:
            print(e)

        err = ''.join(stderr.readlines())
        out = ''.join(stdout.readlines())
        final_output = str(out) + str(err)
        print(final_output)
        return final_output


def main():
    hostname = "192.168.XX.XX"
    username = "ben"
    password = "lol_nice_try"
    port = 22
    command = 'source ~/.bashrc && roslaunch some_package some_launchfile.launch'
    para = Paramiko(hostname, username, password, port)
    answer = para.ExecuteCommand(command)


if __name__ == "__main__":
    main()
I'm considering using a bash script to do it, or maybe os.system, but that would be a fresh start and have its own problems.
Open to ideas. From some reading, I'm led to believe that the paramiko SSH session isn't actually a login session?
I've tried setting get_pty=True when calling exec_command, as per Problems with python interpertor after ssh with paramiko into a remote machine, but that doesn't do anything. I'm not even sure what that option actually does, as the paramiko documentation doesn't appear to say much about it.
Another comment on that thread says something about having a dedicated profile, but isn't that what ~/.bashrc is?
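For reference, one quick way to see what kind of shell exec_command gives you is to ask bash directly; a small sketch reusing the Paramiko class above (host details are the same placeholders):

# Sketch: check whether the remote shell is interactive and/or a login shell.
# Uses the Paramiko wrapper class from the example above; values are placeholders.
para = Paramiko("192.168.XX.XX", "ben", "lol_nice_try", 22)
para.ExecuteCommand('echo $-; shopt -q login_shell && echo "login shell" || echo "not a login shell"')
# Typically this prints something like "hBc" (no "i", so non-interactive)
# followed by "not a login shell".

On a typical Debian-style setup that would mean neither ~/.bashrc (guarded against non-interactive shells) nor the profile files are read, which matches the behaviour described above.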
Not an answer to the original question, but a workaround is to use pexpect and pxssh -
from pexpect import pxssh

try:
    s = pxssh.pxssh()
    hostname = '192.168.XX.XX'
    username = 'ben'
    password = 'cmon_guy'
    s.login(hostname, username, password)
    s.sendline('roslaunch system_diagnostics system_diagnostics_example_node.launch')  # run a command
    s.prompt()       # match the prompt
    print(s.before)  # print everything before the prompt
    s.logout()
except pxssh.ExceptionPxssh as e:
    print("pxssh failed on login.")
    print(e)
I'm not quite sure how to get it to output the ROS_INFO messages as they're sent to the terminal without closing the connection, but that's a question for another day at this point.
Run the command through a login shell.
Modify the command like below:
command = 'bash --login -c "roscore"'
and then execute using paramiko.
From man bash:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
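Applied to the original question, the wrapped roslaunch call might look like this sketch (reusing the question's Paramiko class; the package and launch file names are placeholders):

# Sketch: wrap the ROS command in a login shell so /etc/profile and
# ~/.bash_profile / ~/.profile are read before the command runs.
command = 'bash --login -c "roslaunch some_package some_launchfile.launch"'
para = Paramiko("192.168.XX.XX", "ben", "lol_nice_try", 22)
para.ExecuteCommand(command)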
You could alternatively use systemd and create a service for roscore and for every launch file you want to start and stop. Then you can run sudo systemctl start roscore.service to start your roscore, and do the same for the other launch files.
Here is a ROS service template you could follow:
[Unit]
Description=navigation
After=NetworkManager.service time-sync.target

[Service]
Type=forking
User=youruser
ExecStart=/bin/bash -c "bashfile_location/bashscript.bash & while ! echo exit > /dev/null; do sleep 1; done"
Restart=on-failure

[Install]
WantedBy=multi-user.target
And you should create a bash file to start roscore, or really any other ROS node or launch file:
#!/bin/bash
source /opt/ros/noetic/setup.bash
source /home/username/catkin_ws/devel/setup.bash
source ~/.bashrc
roslaunch package launchfile
And don't forget to chmod +x the bash file.
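To tie this back to the original question, the service could then be started remotely from the paramiko side with something like this sketch (it assumes the service is installed as roscore.service and that the SSH user can run systemctl via sudo without a password, e.g. through a NOPASSWD sudoers rule):

# Sketch: start a systemd service remotely over SSH. Assumes passwordless
# sudo for systemctl on the remote machine; host details are placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.168.XX.XX", username="ben", password="lol_nice_try")
stdin, stdout, stderr = ssh.exec_command("sudo systemctl start roscore.service")
print(stdout.read().decode(), stderr.read().decode())
ssh.close()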
Sorry if you find my answer disorganized.
Related
I have this script made in Python with paramiko:
# -*- coding:Utf8 -*-
import sys
import paramiko
from scp import SCPClient

# Connect function
def createSSHClient(server="ec2-35-180-205-148.eu-west-3.compute.amazonaws.com", port=22, user="admin", key="/home/user/Téléchargements/aws_webforce.pem", password=""):
    "createSSHClient returns an SSHClient object from the paramiko.client class and connects to server on port using user and key"
    ssh = paramiko.client.SSHClient()
    ssh.load_host_keys(key)
    # ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # to avoid raising SSHException when server XY is not found in known_hosts
    ssh.connect(hostname=server, port=port, username=user, key_filename=key)
    return ssh

# Connecting
ssh = createSSHClient()

# Define progress callback that prints the current percentage completed for the file
def progress(filename, size, sent):
    sys.stdout.write("%s's progress: %.2f%% \r" % (filename, float(sent)/float(size)*100))

# SCPClient takes a paramiko transport and progress callback as its arguments.
scp = SCPClient(ssh.get_transport(), progress=progress)

# Use scp with python: upload the 'corrige_exercice1_docker_compose' directory with its content
# to the /tmp remote directory
scp.put('corrige_exercice1_docker_compose', recursive=True, remote_path='/tmp')
scp.close()

for cmd in ["sudo apt update",
            "sudo apt install docker.io",
            "sudo apt install docker-compose",
            "cd /tmp/corrige_exercice1_docker_compose && sudo docker-compose up -d",
            "echo 'done'"]:
    stdin, stdout, stderr = ssh.exec_command(cmd)
    stdout_list = stdout.readlines()
    for txt in stdout_list:
        print(txt)

ssh.close()
I first didn't do cd /tmp && and instead ran cd /tmp as a separate command. The docker-compose didn't run correctly (I didn't have it running on the server).
However, my paramiko script didn't halt or show an error for this command.
How can I make paramiko halt when a command's exit code is not 0?
I have tried checking the docs at https://docs.paramiko.org/en/stable/api/client.html but didn't find anything.
Thank you.
You can get the return code of the executed command from stdout's channel and stop the loop if it is greater than 0:
for cmd in list_of_cmds:
    stdin, stdout, stderr = ssh.exec_command(cmd)
    return_code = stdout.channel.recv_exit_status()
    if return_code > 0:
        print(f"Command '{cmd}' was not successful")
        break
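If you also want to see why a command failed, the same loop can additionally read stderr before breaking; a small sketch building on the answer above:

# Sketch: same loop, but also print the command's stderr on failure.
for cmd in list_of_cmds:
    stdin, stdout, stderr = ssh.exec_command(cmd)
    return_code = stdout.channel.recv_exit_status()  # blocks until the command exits
    if return_code > 0:
        print(f"Command '{cmd}' was not successful (exit code {return_code})")
        print(''.join(stderr.readlines()))           # show what went wrong
        break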
I am trying to SSH into another host from within a python script and run a command that requires sudo.
I'm able to ssh from the python script as follows:
import subprocess
import sys
import json

HOST = "hostname"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND = "sudo command"

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print(error)
else:
    print(result)
But I want to run a command like this after SSHing:
extract_response = subprocess.check_output(['sudo -u username internal_cmd',
                                            '-m', 'POST',
                                            '-u', 'jobRun/-/%s/%s' % (job_id, dataset_date)])
return json.loads(extract_response.decode('utf-8'))[0]['id']
How do I do that?
Also, I don't want to provide the sudo password every time I run this sudo command, so I have added this command (i.e., internal_cmd from above) at the end of the sudoers file (via visudo) on the host I'm trying to SSH into. But still, when typing this command directly in the terminal like this:
ssh -t hostname sudo -u username internal_cmd -m POST -u/-/1234/2019-01-03
I am being prompted to give the password. Why is this happening?
You can pipe the password in by using the -S flag, which tells sudo to read the password from standard input:
echo 'password' | sudo -S [command]
You may need to play around with how you work this into the ssh command, but it should do what you need.
Warning: you may know this already... but never store your password directly in your code, especially if you plan to push code to something like Github. If you are unaware of this, look into using environment variables or storing the password in a separate file.
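One way to fold that into the question's subprocess call might be the following sketch (internal_cmd, its arguments, and the hostname are the question's placeholders; the warning above about hardcoded passwords still applies):

# Sketch: pipe the sudo password over the existing ssh subprocess call.
# The remote shell runs the echo | sudo -S pipeline.
import subprocess

HOST = "hostname"
# internal_cmd and its arguments are the question's placeholders
COMMAND = "echo 'password' | sudo -S -u username internal_cmd -m POST"

ssh = subprocess.Popen(["ssh", HOST, COMMAND],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
out, err = ssh.communicate()
print((out or err).decode())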
If you don't want to worry about where to store the sudo password, you might consider adding the script user to the sudoers list with sudo access to only the command you want to run along with the no password required option. See sudoers(5) man page.
You can further restrict command access by prepending a "command" option to the beginning of your authorized_keys entry. See sshd(8) man page.
If you can, disable ssh password authentication to require only ssh key authentication. See sshd_config(5) man page.
Here is my code:
import subprocess

HOST = 'host_name'
PORT = '111'
USER = 'user_name'
CMD = 'sudo su - ec2-user; ls'

process = subprocess.Popen(['ssh', '{}@{}'.format(USER, HOST),
                            '-p', PORT, CMD],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
result = process.stdout.readlines()
if not result:
    print "Im an error"
    err = process.stderr.readlines()
    print('ERROR: {}'.format(err))
else:
    print "I'm a success"
    print(result)
When I run this I receive the following output in my terminal:
dredbounds-computer: documents$ python terminal_test.py
Im an error
ERROR: ['sudo: sorry, you must have a tty to run sudo\n']
I've tried multiple things but I keep getting that error "sudo: sorry, you must have a tty to run sudo". It works fine if I just do it through the terminal manually, but I need to automate this. I read that a workaround might be to use '-t' or '-tt' in my ssh call, but I haven't been able to implement this successfully in subprocess yet (terminal just hangs for me). Anyone know how I can fix my code, or work around this issue? Ideally I'd like to ssh, then switch to the sudo user, and then run a file from there (I just put ls for testing purposes).
sudo is prompting you for a password, but it needs a terminal to do that. Passing -t or -tt provides a terminal for the remote command to run in, but now it is waiting for you to enter a password.
process = subprocess.Popen(['ssh', '-tt', '{}@{}'.format(USER, HOST),
                            '-p', PORT, CMD],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
process.stdin.write("password\r\n")
Keep in mind, though, that the ls doesn't run until after the shell started by su exits. You should either log into the machine as ec2-user directly (if possible), or just use sudo to run whatever command you want without going through su first.
You can tell sudo to work without requiring a password. Just add this to /etc/sudoers on the remote server host_name.
user ALL = (ec2-user) NOPASSWD: ls
This allows the user named user to execute the command ls as ec2-user without entering a password.
This assumes you change your command to look like this, which seems more reasonable to me:
CMD = 'sudo -u ec2-user ls'
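Put together, the call might look something like this sketch (reusing the question's placeholders, and assuming the NOPASSWD rule above is in place and that sudo's requiretty option isn't forcing a terminal):

# Sketch: run a single command as ec2-user over SSH, relying on the
# NOPASSWD sudoers rule so no password prompt is needed.
import subprocess

HOST = 'host_name'
PORT = '111'
USER = 'user_name'
CMD = 'sudo -u ec2-user ls'

process = subprocess.Popen(['ssh', '{}@{}'.format(USER, HOST), '-p', PORT, CMD],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
print((out or err).decode())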
I'm trying to write a small script to mount a VirtualBox shared folder each time I execute the script. I want to do it with Python, because I'm trying to learn it for scripting.
The problem is that I need privileges to run the mount command. I could run the script with sudo, but I'd prefer the script to handle the sudo on its own.
I already know that it is not safe to write your password into a .py file, but we are talking about a virtual machine that is not critical at all: I just want to click the .py script and get it working.
This is my attempt:
#!/usr/bin/env python
import subprocess
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
subprocess.Popen('sudo -S' , shell=True,stdout=subprocess.PIPE)
subprocess.Popen(sudoPassword , shell=True,stdout=subprocess.PIPE)
subprocess.Popen(command , shell=True,stdout=subprocess.PIPE)
My python version is 2.6
Many answers focus on how to make your solution work, while very few suggest that your solution is a very bad approach. If you really want to "practice to learn", why not practice using good solutions? Hardcoding your password is learning the wrong approach!
If what you really want is a password-less mount for that volume, maybe sudo isn't needed at all! So may I suggest other approaches?
Use /etc/fstab as mensi suggested. Use options user and noauto to let regular users mount that volume.
Use Polkit for passwordless actions: Configure a .policy file for your script with <allow_any>yes</allow_any> and drop at /usr/share/polkit-1/actions
Edit /etc/sudoers to allow your user to use sudo without typing your password. As @Anders suggested, you can restrict such usage to specific commands, thus avoiding unlimited passwordless root privileges in your account. See this answer for more details on /etc/sudoers.
All of the above allow passwordless root privileges, and none require you to hardcode your password. Choose any approach and I can explain it in more detail.
As for why it is a very bad idea to hardcode passwords, here are a few good links for further reading:
Why You Shouldn’t Hard Code Your Passwords When Programming
How to keep secrets secret
(Alternatives to Hardcoding Passwords)
What's more secure? Hard coding credentials or storing them in a database?
Use of hard-coded credentials, a dangerous programming error: CWE
Hard-coded passwords remain a key security flaw
import os

sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
p = os.system('echo %s|sudo -S %s' % (sudoPassword, command))
Try this and let me know if it works. :-)
And this one:
os.popen("sudo -S %s"%(command), 'w').write('mypass')
To pass the password to sudo's stdin:
#!/usr/bin/env python
from subprocess import Popen, PIPE
sudo_password = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'.split()
p = Popen(['sudo', '-S'] + command, stdin=PIPE, stderr=PIPE,
          universal_newlines=True)
sudo_prompt = p.communicate(sudo_password + '\n')[1]
Note: you could probably configure passwordless sudo or SUDO_ASKPASS command instead of hardcoding your password in the source code.
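For the SUDO_ASKPASS route, the idea is roughly the sketch below; /usr/local/bin/my-askpass is a hypothetical helper program that prints the password (for example a small wrapper around ssh-askpass or zenity), not something provided by sudo itself:

# Sketch: let sudo obtain the password from an askpass helper instead of a tty.
# /usr/local/bin/my-askpass is a hypothetical executable that writes the password
# to stdout when sudo runs it with the -A (--askpass) option.
import os
import subprocess

env = dict(os.environ, SUDO_ASKPASS='/usr/local/bin/my-askpass')
subprocess.call(['sudo', '-A', 'mount', '-t', 'vboxsf', 'myfolder', '/home/myuser/myfolder'],
                env=env)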
Use the -S option in the sudo command, which tells it to read the password from stdin instead of the terminal device.
Tell Popen to take stdin from a PIPE.
Send the password to the stdin PIPE of the process by passing it as an argument to the communicate method. Do not forget to add a newline character, '\n', at the end of the password.
from subprocess import Popen, PIPE

# cmd is the full sudo -S command string; _user_pass holds the sudo password
sp = Popen(cmd, shell=True, stdin=PIPE)
out, err = sp.communicate(_user_pass + '\n')
subprocess.Popen creates a process and opens pipes and stuff. What you are doing is:
Start a process sudo -S
Start a process mypass
Start a process mount -t vboxsf myfolder /home/myuser/myfolder
which is obviously not going to work. You need to pass the arguments to Popen. If you look at its documentation, you will notice that the first argument is actually a list of the arguments.
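A corrected sketch of the original attempt, with everything in a single Popen call and the password written to sudo's stdin (same placeholder password and mount command as the question):

# Sketch: one Popen call; sudo -S reads the password from stdin.
from subprocess import Popen, PIPE

sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'

p = Popen(['sudo', '-S'] + command.split(), stdin=PIPE, universal_newlines=True)
p.communicate(sudoPassword + '\n')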
I used this for Python 3.5, with the subprocess module. Using the password like this is very insecure.
The subprocess module takes the command as a list of strings, so either create a list beforehand using split() or pass the whole list later. Read the documentation for more information.
#!/usr/bin/env python
import subprocess
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'.split()
cmd1 = subprocess.Popen(['echo',sudoPassword], stdout=subprocess.PIPE)
cmd2 = subprocess.Popen(['sudo','-S'] + command, stdin=cmd1.stdout, stdout=subprocess.PIPE)
output = cmd2.stdout.read().decode()
Sometimes a trailing newline is required:
os.popen("sudo -S %s"%(command), 'w').write('mypass\n')
Please try the pexpect module. Here is my code:
import time
import pexpect

remove = pexpect.spawn('sudo dpkg --purge mytool.deb')
remove.logfile = open('log/expect-uninstall-deb.log', 'w')
remove.logfile.write('try to dpkg --purge mytool\n')
if remove.expect(['(?i)password.*']) == 0:
    # print "successful"
    remove.sendline('mypassword')
    time.sleep(2)
    remove.expect(pexpect.EOF, 5)
else:
    raise AssertionError("Fail to uninstall deb package!")
To limit what you run as sudo, you could run
python non_sudo_stuff.py
sudo -E python -c "import os; os.system('sudo echo 1')"
without needing to store the password. The -E parameter passes your current user's env to the process. Note that your shell will have sudo privileges after the second command, so use with caution!
I know it is always preferred not to hardcode the sudo password in the script. However, if for some reason you have no permission to modify /etc/sudoers or change the file owner, Pexpect is a feasible alternative.
Here is a Python function sudo_exec for your reference:
import platform, os, logging
import subprocess, pexpect

log = logging.getLogger(__name__)

def sudo_exec(cmdline, passwd):
    osname = platform.system()
    if osname == 'Linux':
        prompt = r'\[sudo\] password for %s: ' % os.environ['USER']
    elif osname == 'Darwin':
        prompt = 'Password:'
    else:
        assert False, osname
    child = pexpect.spawn(cmdline)
    idx = child.expect([prompt, pexpect.EOF], 3)
    if idx == 0:  # if prompted for the sudo password
        log.debug('sudo password was asked.')
        child.sendline(passwd)
        child.expect(pexpect.EOF)
    return child.before
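A quick usage example (the command and password are placeholders):

# Sketch: run a privileged command and capture its output.
output = sudo_exec('sudo ls /root', 'mypassword')
print(output.decode())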
This works in Python 2.7 and 3.8:
from subprocess import Popen, PIPE
from shlex import split
proc = Popen(split('sudo -S %s' % command), bufsize=0, stdout=PIPE, stdin=PIPE, stderr=PIPE)
proc.stdin.write((password +'\n').encode()) # write as bytes
proc.stdin.flush() # need if not bufsize=0 (unbuffered stdin)
Without .flush(), the password will not reach sudo if stdin is buffered.
In Python 2.7, Popen used bufsize=0 by default, so stdin.flush() was not needed.
For more secure use, create the password file in a protected directory:
mkdir --mode=700 ~/.prot_dir
nano ~/.prot_dir/passwd.txt
chmod 600 ~/.prot_dir/passwd.txt
At the start of your Python script, read the password from ~/.prot_dir/passwd.txt:
import os

with open(os.environ['HOME'] + '/.prot_dir/passwd.txt') as f:
    password = f.readline().rstrip()
import os
os.system("echo TYPE_YOUR_PASSWORD_HERE | sudo -S TYPE_YOUR_LINUX_COMMAND")
Open your IDE and run the above code. Change TYPE_YOUR_PASSWORD_HERE and TYPE_YOUR_LINUX_COMMAND to your Linux admin password and your desired Linux command, then run your Python script. Your output will show on the terminal. Happy coding :)
You can use SSHScript. Below is an example:
## filename: example.spy
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
$$echo #{sudoPassword} | sudo -S #{command}
Or, simply one line (almost the same as running it on the console):
## filename: example.spy
$$echo mypass | sudo -S mount -t vboxsf myfolder /home/myuser/myfolder
Then, run it on console
sshscript example.spy
Where "sshscript" is the CLI of SSHScript (installed by pip).
The solution I'm going with, because a password in plain text in an env file on a dev PC is OK, and the variable in the repo and GitLab runner is masked:
Use dotenv: put the password in .env on the local machine, and DON'T COMMIT .env to git.
Add the same variable as a GitLab CI variable.
.env file has:
PASSWORD=superpass
import os
import subprocess
from dotenv import load_dotenv

load_dotenv()
subprocess.run(f'echo {os.getenv("PASSWORD")} | sudo -S rm /home//folder/filetodelete_created_as_root.txt', shell=True, check=True)
This works locally and in GitLab; no plain password is committed to the repo.
Yes, you can argue that running a sudo command with shell=True is kind of crazy, but if you have files written to the host from a Docker container running as root, and you need to programmatically delete them, this is functional.
My scenario is that I need to log in to a remote machine, sudo to another account (sudo su anotheract), and then run the required command.
I am able to successfully connect to the remote machine using the script below, but the script hangs at the line where I execute the sudo command (sudo su anotheract).
Can you please help me find the fix for this code?
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='XX.XXX.XX.XX', port=22, username='myname', password='XXXXX')
ssh.exec_command("sudo su anotheract")
stdin, stdout, stderr = ssh.exec_command("java -jar /usr/share/XXX/LogR.jar")
print stdout.readlines()
One (not very safe) way to do it is to pipe the password in. The caveat is that the user that you are using to connect to the box using paramiko should have sudo rights.
For example:
supass = 'some_pass'
stdin, stdout, stderr = ssh.exec_command('echo %s | sudo -S anotheract' % supass)
Again, this is not a very safe implementation but gets the job done in a jiffy.
import pxssh
ssh = pxssh.pxssh()
ssh.login('host', 'user', 'password')
ssh.sendline("sudo su anotheract")
ssh.expect('password')           # wait for sudo's password prompt
ssh.sendline('yourrootpassword')
ssh.prompt()
And in paramiko, on most Linux systems, you can't run sudo commands directly; that's because sudo expects a tty, and it doesn't even raise an exception. You could try the invoke_shell method, but I used paramiko many years ago and I don't remember what was wrong with it. If you want to send various commands to a shell, you could use pxssh.
It can hang because sudo waits for a password. Try adding a NOPASSWD: statement to /etc/sudoers:
user ALL = NOPASSWD: /bin/true
Also, it is not possible to change user using su and then continue doing something after su has finished. When su finishes, you are back in the original shell of the original user.
So you need to run all commands with sudo:
stdin, stdout, stderr = ssh.exec_command("sudo -u anotheract java -jar /usr/share/XXX/LogR.jar")