How to automatically input ssh private key passphrase with pexpect - python

I previously wrote a program for a Linux environment which automatically runs the SSHFS binary as a user and inputs a stored ssh private key passphrase (the public half is already on the remote server). I had this working with simple pexpect commands on one server (Ubuntu Server 14.04, ssh version 6.6, sshfs version 2.5), but this single piece of the program is proving to be an issue now that the application has been moved to a Red Hat machine (RHEL 6.5, ssh version 5.3, sshfs version 2.4). This simple step has been driving me crazy all day, so now I turn to this community for support. My original code (simplified) looked like this:
proc = pexpect.spawn('sshfs %s@%s:%s...')  # many options, unrelated
proc.expect([pexpect.EOF, 'Enter passphrase for key.*', pexpect.TIMEOUT], timeout=30)
if proc.match_index == 1:
    proc.sendline('thepassphrase')
This runs as expected on Ubuntu but not on RHEL. I have also tried the fallback method of piping through subprocess, without much success either.
proc = subprocess.Popen('sshfs %s@%s:%s...', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.stdin.write('thepassphrase'+'\n')
proc.stdin.flush()
Of course I have tried many slight variations of this without success, and of course the command runs fine when I run it manually.
Update 3/3:
I have also today manually compiled and installed ssh 6.6 in rhel to see if that was causing the issue, but the issue persists even with the new ssh binary.
Update 3/9:
Today I have found one particular solution which works, but I am not happy with the fact that many other different solutions did not work, and I am still looking for the answer as to why. Here is the best I could do so far:
proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P %s -N '' -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True)
time.sleep(2)
proc = subprocess.Popen(cmd, shell=True)
proc.communicate()
time.sleep(1)
proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P '' -N %s -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True)
Removes the passphrase from the key, mounts the drive, and then re-adds the key. Obviously I don't like this solution, but it will have to do until I can get to the bottom of this.
Update 3/23:
Well, due to my stupidity, I did not see the immediate problem with this method until now, and I am back to the drawing board. While this workaround does work the first time the connection is made, the -o reconnect option obviously fails because sshfs does not know the passphrase to reconnect. This means this solution is no longer viable, and I would really appreciate it if anyone knows how to get the pexpect version working.

After talking to the developer personally, I have determined that this is now a known bug and that there is no proper way to run the command in the style specified in the question. However, the developer quickly came forward with an equally useful method which involves spawning another terminal process.
passphrase = 'some passphrase'
cmd = "sudo -H -u somebody sshfs somebody#somewhere:/somewhere-else /home/somebody/testmount -o StrictHostKeychecking=no -o nonempty -o reconnect -o workaround=all -o IdentityFile=/somekey -o follow_symlinks -o cache=no"
bash = pexpect.spawn('bash', echo=False)
bash.sendline('echo READY')
bash.expect_exact('READY')
bash.sendline(cmd)
bash.expect_exact('Enter passphrase for key')
bash.sendline(passphrase)
bash.sendline('echo COMPLETE')
bash.expect_exact('COMPLETE')
bash.sendline('exit')
bash.expect_exact(pexpect.EOF)
I have tested this solution and it has worked as a good workaround without much extra overhead. You may view his full response here.
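As a small extra safeguard, here is a hedged sketch of verifying that the mount actually appeared before moving on (the mountpoint path is the one from the example above):
import os

# sanity check: confirm the sshfs mount actually came up (path from the example)
if not os.path.ismount('/home/somebody/testmount'):
    raise RuntimeError('sshfs mount did not come up')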

How to control host from docker container?

For example, how to execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
and check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe.
Now open another terminal window and run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one running tail -f): it should display "hello world".
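If you prefer to drive this test from Python, here is a minimal sketch (same pipe path as above); note that opening a FIFO for writing blocks until a reader such as tail -f is attached:
# minimal writer sketch; blocks until a reader has the pipe open
with open('/path/to/pipe/mypipe', 'w') as pipe:
    pipe.write('hello world\n')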
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will execute whatever is sent through the pipe as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after the ls -l output is displayed, the host stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands, one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is that if the host reboots, the "while" loop will stop running.
To handle reboot, here's what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount mypipe's parent folder as /hostpipe in your container.
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
    - /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs") // needed for the file-system calls below

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else if (fs.existsSync(outputPath)) {
        // output.txt exists, so read it
        clearInterval(myLoop)
        const data = fs.readFileSync(outputPath).toString()
        fs.unlinkSync(outputPath) // delete the output file
        console.log(data) // log the output of the command
    }
}, 300)
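For readers working in Python rather than Node.JS, a roughly equivalent hedged sketch (same pipe and output paths as above) could look like this:
import os
import time

pipe_path = "/hostpipe/mypipe"
output_path = "/hostpipe/output.txt"

# delete previous output, then send the command through the pipe
if os.path.exists(output_path):
    os.remove(output_path)
with open(pipe_path, 'w') as pipe:  # blocks until the host loop reads the pipe
    pipe.write("pwd && ls -l\n")

# poll for output.txt for up to 10 seconds
deadline = time.time() + 10
while time.time() < deadline:
    if os.path.exists(output_path):
        with open(output_path) as f:
            print(f.read())  # the output of the command, captured on the host
        os.remove(output_path)
        break
    time.sleep(0.3)
else:
    print("timed out")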
Use a named pipe.
On the host OS, create a script that loops and reads commands from the pipe, then calls eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but it restricts you properly to the host device, which is probably better. Plus, you don't have to pass around authentication information.
My only warning is to be cautious about why you are doing this. It's a reasonable thing to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way is to pass that in as args/volumes into docker. Also, be cautious about the fact that you are eval-ing, so give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically, since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, allowed only to execute that script as sudo (that can be done from the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was only the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes. They seem to be a better solution.
However, nobody there mentioned anything about security. The script that will evaluate the commands sent through the pipe (the script that calls eval) must actually not use eval on the whole pipe output, but should handle specific cases and call the required commands according to the text sent; otherwise, any command that can do anything can be sent through the pipe.
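For illustration, a hedged host-side sketch of that idea: instead of eval, map a few allowed message names to fixed commands (the message names and commands here are hypothetical):
import subprocess

# hypothetical whitelist: message name -> fixed argv; nothing else gets executed
ALLOWED = {
    'list-root': ['ls', '-l', '/'],
    'disk-usage': ['df', '-h'],
}

while True:
    with open('/path/to/pipe/mypipe') as pipe:  # blocks until a writer appears
        for line in pipe:
            cmd = ALLOWED.get(line.strip())
            if cmd:
                subprocess.run(cmd)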
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command then provides an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. To send the request, run curl or write code that makes an HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        (output, err) = p.communicate()  # communicate() waits for the process
        p_status = p.returncode
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
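A hedged client-side sketch for testing this server from the container (assumes the requests package is installed; plain urllib would work just as well):
import requests

# POST a JSON body, matching the curl example above
resp = requests.post('http://localhost:8080', json={'foo': 'bar'})
print(resp.status_code, resp.text)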
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
    ci:
        command: ...
        image: ...
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container, the docker server running on your host will see the request and provision the sibling container.
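Besides the docker CLI, the mounted socket can also be driven from Python; here is a hedged sketch using the Docker SDK for Python (pip install docker), which talks to the mounted /var/run/docker.sock by default:
import docker

client = docker.from_env()  # picks up the mounted /var/run/docker.sock
# run a sibling container on the host and print its output
print(client.containers.run('alpine:3.7', 'echo hello from a sibling container').decode())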
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part is --network host, as it will execute from the host's network context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7 sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get output from the host's network context.
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container, like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario, I just ssh into the host (via the host IP) from within the container, and then I can do anything I want on the host machine.
I found the answers using named pipes awesome, but I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat /path/to/pipe/exec_in)" > /path/to/pipe/exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, and I created this simple Ruby method:
def fifo_exec(cmd)
exec_in = '/path/to/pipe/exec_in'
exec_out = '/path/to/pipe/exec_out'
%x[ echo #{cmd} > #{exec_in} ]
%x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
It uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the example below, this occurs in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

SSHing from within a python script and run a sudo command having to give sudo password

I am trying to SSH into another host from within a python script and run a command that requires sudo.
I'm able to ssh from the python script as follows:
import subprocess
import sys
import json
HOST="hostname"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND="sudo command"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print(error)
else:
    print(result)
But I want to run a command like this after sshing:
extract_response = subprocess.check_output(['sudo -u username internal_cmd',
                                            '-m', 'POST',
                                            '-u', 'jobRun/-/%s/%s' % (job_id, dataset_date)])
return json.loads(extract_response.decode('utf-8'))[0]['id']
How do I do that?
Also, I don't want to provide the sudo password every time I run this sudo command; for that, I have added this command (i.e., internal_cmd from above) at the end of the sudoers file (via visudo) on the new host I'm trying to ssh into. But still, when typing this command directly in the terminal like this:
ssh -t hostname sudo -u username internal_cmd -m POST -u/-/1234/2019-01-03
I am being prompted to give the password. Why is this happening?
You can pipe the password by using the -S flag, which tells sudo to read the password from standard input:
echo 'password' | sudo -S [command]
You may need to play around with how you put in the ssh command, but this should do what you need.
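For instance, a hedged Python sketch combining ssh with sudo -S (host name, user, and command are placeholders from the question; without -t, ssh forwards stdin to the remote command, so the password reaches sudo):
import subprocess

# placeholders from the question; -S makes sudo read the password from stdin
proc = subprocess.Popen(
    ['ssh', 'hostname', 'sudo', '-S', '-u', 'username', 'internal_cmd', '-m', 'POST'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate(b'password\n')
print(out.decode())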
Warning: you may know this already... but never store your password directly in your code, especially if you plan to push code to something like Github. If you are unaware of this, look into using environment variables or storing the password in a separate file.
If you don't want to worry about where to store the sudo password, you might consider adding the script user to the sudoers list with sudo access to only the command you want to run along with the no password required option. See sudoers(5) man page.
You can further restrict command access by prepending a "command" option to the beginning of your authorized_keys entry. See sshd(8) man page.
If you can, disable ssh password authentication to require only ssh key authentication. See sshd_config(5) man page.
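For illustration, those three pieces might look like this (user names, key, and paths are hypothetical):
# /etc/sudoers.d/scriptuser -- edit with visudo; no password for this one command
scriptuser ALL=(root) NOPASSWD: /usr/local/bin/the_command

# ~/.ssh/authorized_keys -- this key may only run the forced command
command="/usr/local/bin/the_command" ssh-rsa AAAA... scriptuser@client

# /etc/ssh/sshd_config -- key authentication only
PasswordAuthentication no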

Filter output of a process with `grep` while keeping the return value

I think this is not a Python question, but in order to provide the context I'll explain what exactly I'm doing.
I run a command on a remote machine using ssh -t <host> <command> like this:
if os.system('ssh -t some_machine [ -d /some/directory ]') != 0:
    do_something()
(note: [ -d /some/directory ] is only an example; it could be replaced by any command which returns 0 when everything goes fine)
Unfortunately, ssh prints "Connection to some_machine closed." every time I run it.
Stupidly I tried to run ssh -t some_machine <command> | grep -v "Connection" but this returns the result of grep of course.
So in short: in Python I'd like to run a process via ssh and evaluate its return value while filtering away some unwanted output.
Edit: this question suggests something like
<command> | grep -v "bla"; return ${PIPESTATUS[0]}
Indeed, this might be an approach, but it seems to work with bash only; at least with zsh, PIPESTATUS seems not to be defined.
Use subprocess, and connect the two commands in Python rather than a shell pipeline.
from subprocess import Popen, PIPE, call
p1 = Popen(["ssh", "-t", "some_machine", "test", "-d", "/some/directory"],
stdout=PIPE)
if call(["grep", "-v", "Connection"], stdin=p1.stdout) != 0:
# use p1.returncode for the exit status of ssh
do_something()
Taking this a step further, try to avoid running external programs when unnecessary. You can examine the output of ssh directly in Python without using grep; for example, using the re library to examine the data read from p1.stdout yourself. You can also use a library like Paramiko to connect to the remote host instead of shelling out to run ssh.
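A hedged sketch of that suggestion (Python 3.7+ for capture_output): capture ssh's output, filter it in-process, and keep ssh's own exit status; do_something is the function from the question:
import subprocess

p = subprocess.run(['ssh', '-t', 'some_machine', '[', '-d', '/some/directory', ']'],
                   capture_output=True, text=True)
# drop the unwanted "Connection to some_machine closed." noise
for line in p.stderr.splitlines():
    if 'Connection' not in line:
        print(line)
if p.returncode != 0:
    do_something()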

Using subprocess to execute shell script with sudo [duplicate]

I'm trying to write a small script to mount a VirtualBox shared folder each time I execute the script. I want to do it with Python, because I'm trying to learn it for scripting.
The problem is that I need privileges to launch the mount command. I could run the script with sudo, but I would prefer the script to do the sudo on its own.
I already know that it is not safe to write your password into a .py file, but we are talking about a virtual machine that is not critical at all: I just want to click the .py script and get it working.
This is my attempt:
#!/usr/bin/env python
import subprocess
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
subprocess.Popen('sudo -S' , shell=True,stdout=subprocess.PIPE)
subprocess.Popen(sudoPassword , shell=True,stdout=subprocess.PIPE)
subprocess.Popen(command , shell=True,stdout=subprocess.PIPE)
My python version is 2.6
Many answers focus on how to make your solution work, while very few suggest that your solution is a very bad approach. If you really want to "practice to learn", why not practice using good solutions? Hardcoding your password is learning the wrong approach!
If what you really want is a password-less mount for that volume, maybe sudo isn't needed at all! So may I suggest other approaches?
Use /etc/fstab as mensi suggested. Use options user and noauto to let regular users mount that volume.
Use Polkit for passwordless actions: configure a .policy file for your script with <allow_any>yes</allow_any> and drop it at /usr/share/polkit-1/actions.
Edit /etc/sudoers to allow your user to use sudo without typing your password. As @Anders suggested, you can restrict such usage to specific commands, thus avoiding unlimited passwordless root privileges in your account. See this answer for more details on /etc/sudoers.
All the above allow passwordless root privilege, none require you to hardcode your password. Choose any approach and I can explain it in more detail.
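For example, the /etc/fstab approach from the first option might look like this (share name and mount point taken from the question; vboxsf assumed available on the guest):
# /etc/fstab: "user" lets a regular user mount it, "noauto" skips mounting at boot
myfolder /home/myuser/myfolder vboxsf user,noauto 0 0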
As for why it is a very bad idea to hardcode passwords, here are a few good links for further reading:
Why You Shouldn’t Hard Code Your Passwords When Programming
How to keep secrets secret
(Alternatives to Hardcoding Passwords)
What's more secure? Hard coding credentials or storing them in a database?
Use of hard-coded credentials, a dangerous programming error: CWE
Hard-coded passwords remain a key security flaw
import os

sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
p = os.system('echo %s|sudo -S %s' % (sudoPassword, command))
Try this and let me know if it works. :-)
And this one:
os.popen("sudo -S %s"%(command), 'w').write('mypass')
To pass the password to sudo's stdin:
#!/usr/bin/env python
from subprocess import Popen, PIPE
sudo_password = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'.split()
p = Popen(['sudo', '-S'] + command, stdin=PIPE, stderr=PIPE,
          universal_newlines=True)
sudo_prompt = p.communicate(sudo_password + '\n')[1]
Note: you could probably configure passwordless sudo or SUDO_ASKPASS command instead of hardcoding your password in the source code.
Use the -S option in the sudo command, which tells sudo to read the password from stdin instead of the terminal device.
Tell Popen to read stdin from PIPE.
Send the password to the stdin PIPE of the process by passing it as an argument to the communicate method. Do not forget to add a newline character, '\n', at the end of the password.
sp = Popen(cmd, shell=True, stdin=PIPE)
out, err = sp.communicate(_user_pass + '\n')
subprocess.Popen creates a process and opens pipes and stuff. What you are doing is:
Start a process sudo -S
Start a process mypass
Start a process mount -t vboxsf myfolder /home/myuser/myfolder
which is obviously not going to work. You need to pass the arguments to Popen. If you look at its documentation, you will notice that the first argument is actually a list of the arguments.
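A hedged rewrite of the original attempt along those lines: one Popen call, the arguments as a list, and the password sent over stdin (values taken from the question):
from subprocess import Popen, PIPE

# sudo -S reads the password from stdin; the command is a single argv list
p = Popen(['sudo', '-S', 'mount', '-t', 'vboxsf', 'myfolder', '/home/myuser/myfolder'],
          stdin=PIPE, universal_newlines=True)
p.communicate('mypass\n')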
I used this with Python 3.5, using the subprocess module. Using the password like this is very insecure.
The subprocess module takes the command as a list of strings, so either create a list beforehand using split() or pass the whole list later. Read the documentation for more information.
#!/usr/bin/env python
import subprocess
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'.split()
cmd1 = subprocess.Popen(['echo',sudoPassword], stdout=subprocess.PIPE)
cmd2 = subprocess.Popen(['sudo','-S'] + command, stdin=cmd1.stdout, stdout=subprocess.PIPE)
output = cmd2.stdout.read().decode()
Sometimes a trailing newline is required:
os.popen("sudo -S %s"%(command), 'w').write('mypass\n')
Please try module pexpect. Here is my code:
import time
import pexpect

remove = pexpect.spawn('sudo dpkg --purge mytool.deb')
remove.logfile = open('log/expect-uninstall-deb.log', 'w')
remove.logfile.write('try to dpkg --purge mytool\n')
if remove.expect(['(?i)password.*']) == 0:
    # print "successful"
    remove.sendline('mypassword')
    time.sleep(2)
    remove.expect(pexpect.EOF, 5)
else:
    raise AssertionError("Failed to uninstall deb package!")
To limit what you run as sudo, you could run
python non_sudo_stuff.py
sudo -E python -c "import os; os.system('sudo echo 1')"
without needing to store the password. The -E parameter passes your current user's env to the process. Note that your shell will have sudo privileges after the second command, so use with caution!
I know it is always preferred not to hardcode the sudo password in the script. However, for some reason, if you have no permission to modify /etc/sudoers or change file owner, Pexpect is a feasible alternative.
Here is a Python function sudo_exec for your reference:
import platform, os, logging
import subprocess, pexpect

log = logging.getLogger(__name__)

def sudo_exec(cmdline, passwd):
    osname = platform.system()
    if osname == 'Linux':
        prompt = r'\[sudo\] password for %s: ' % os.environ['USER']
    elif osname == 'Darwin':
        prompt = 'Password:'
    else:
        assert False, osname
    child = pexpect.spawn(cmdline)
    idx = child.expect([prompt, pexpect.EOF], 3)
    if idx == 0:  # if prompted for the sudo password
        log.debug('sudo password was asked.')
        child.sendline(passwd)
        child.expect(pexpect.EOF)
    return child.before
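Hypothetical usage of the function above:
# run a privileged command; everything printed before EOF is returned
output = sudo_exec('sudo ls /root', 'mypassword')
print(output)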
It works in python 2.7 and 3.8:
from subprocess import Popen, PIPE
from shlex import split
proc = Popen(split('sudo -S %s' % command), bufsize=0, stdout=PIPE, stdin=PIPE, stderr=PIPE)
proc.stdin.write((password + '\n').encode())  # write as bytes
proc.stdin.flush()  # needed if bufsize != 0 (buffered stdin)
Without .flush(), the password will not reach sudo if stdin is buffered.
In Python 2.7, Popen used bufsize=0 (unbuffered stdin) by default, so stdin.flush() was not needed.
For more secure use, create a password file in a protected directory:
mkdir --mode=700 ~/.prot_dir
nano ~/.prot_dir/passwd.txt
chmod 600 ~/.prot_dir/passwd.txt
At the start of your py-script, read the password from ~/.prot_dir/passwd.txt:
with open(os.environ['HOME'] + '/.prot_dir/passwd.txt') as f:
    password = f.readline().rstrip()
import os
os.system("echo TYPE_YOUR_PASSWORD_HERE | sudo -S TYPE_YOUR_LINUX_COMMAND")
Open your IDE and run the above code. Change TYPE_YOUR_PASSWORD_HERE and TYPE_YOUR_LINUX_COMMAND to your Linux admin password and your desired Linux command, then run your Python script. Your output will show in the terminal. Happy coding :)
You can use SSHScript. Below is example code:
## filename: example.spy
sudoPassword = 'mypass'
command = 'mount -t vboxsf myfolder /home/myuser/myfolder'
$$echo #{sudoPassword} | sudo -S #{command}
or simply one line (almost the same as running it on the console):
## filename: example.spy
$$echo mypass | sudo -S mount -t vboxsf myfolder /home/myuser/myfolder
Then, run it on console
sshscript example.spy
Where "sshscript" is the CLI of SSHScript (installed by pip).
The solution I'm going with, because a password in plain text in an env file on the dev PC is OK, and the variable in the repo and GitLab runner is masked:
Use python-dotenv: put the password in .env on the local machine, and DON'T COMMIT .env to git.
Add the same variable as a GitLab CI/CD variable.
The .env file has:
PASSWORD=superpass
import os
import subprocess
from dotenv import load_dotenv

load_dotenv()
subprocess.run(f'echo {os.getenv("PASSWORD")} | sudo -S rm /home//folder/filetodelete_created_as_root.txt', shell=True, check=True)
This works locally and in GitLab; no plain password is committed to the repo.
Yes, you can argue that running a sudo command with shell=True is kind of crazy, but if you have files written to the host from a docker container as root, and you need to programmatically delete them, this is functional.

Execute a command after a successful SSH connection

I think I may have been googling this stuff wrong; I was wondering if I could have my Raspberry Pi execute a command after I connect to it via SSH.
Workflow:
1) SSH into Pi via terminal
2) Once logged in, the Pi executes a command to display the current temperature (I already know of this command)
The pi already outputs
Linux raspberrypi 3.10.25+ #622 PREEMPT Fri Jan 3 18:41:00 GMT 2014 armv6l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Jul 11 15:11:35 2014
I could be misunderstanding this altogether; perhaps the command could even be executed and shown in the output above.
What you're looking for is the motd, which is quite common on various Linux distros. This is not in Python (but can be). The motd runs multiple commands on login via SSH and constructs a message which it outputs to the user. More information on this (which actually has temperature listed) can be found here: Raspberry Pi Welcome Message. The problem is that this will likely change slightly depending on the Linux distro. A good git repo which has a nice message can also be found here: Raspberry Pi Motd
Yup, this can be done. The method that I am aware of utilizes subprocess https://docs.python.org/2/library/subprocess.html. As long as you know the name of the python script and the arguments (if any), you can pass them into subprocess. Here is an example of how to connect via a python ssh script (taken from http://python-for-system-administrators.readthedocs.org/en/latest/ssh.html):
import subprocess
import sys
HOST="www.example.org"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND="uname -a"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print >>sys.stderr, "ERROR: %s" % error
else:
    print result
Just append the command to the ssh command.
ssh user@server "echo test"
"echo test" is executed on the remote machine.
You can execute a command from bash without actually logging into the other computer by placing the command after the ssh command:
$ ssh pi@pi_addr touch wat.txt
This would create the text file ~/wat.txt.
This is a little cumbersome for automation, however, since a password must be provided. To be able to log in to your Pi remotely without a password, you can set up a public/private RSA key pair on your computer. Simply do the following:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter the same passphrase again:
$ ssh pi@pi_addr mkdir -p .ssh
$ cat .ssh/id_rsa.pub | ssh pi@pi_addr 'cat >> .ssh/authorized_keys'
Don't enter a passphrase, and leave everything default when running ssh-keygen. Now you will be able to run ssh pi@pi_addr without entering a password.
Example python file:
import subprocess

SERVER = "pi@pi_addr"
subprocess.call(["ssh", SERVER, "touch wat.txt"])
