I would like to start a Docker container from a Python script. When I call the Docker image through my code, I am unable to start the container.
import subprocess
import docker
from subprocess import Popen, PIPE

def kill_and_remove(ctr_name):
    for action in ('kill', 'rm'):
        p = Popen('docker %s %s' % (action, ctr_name), shell=True,
                  stdout=PIPE, stderr=PIPE)
        if p.wait() != 0:
            raise RuntimeError(p.stderr.read())

def execute():
    ctr_name = 'sml/tools:8'  # docker image file name
    p = Popen(['docker', 'run', '-v', '/lib/modules:/lib/modules',
               '--cap-add', 'NET_ADMIN', '--name', 'o-9000', '--restart',
               'always', ctr_name, 'startup', ' --base-port',
               9000, ' --orchestrator-integration-license',
               ' --orchestrator-integration-license', 'jaVl7qdgLyxo6WRY5ykUTWNRl7Y8IzJxhRjEUpKCC9Q=',
               '--orchestrator-integration-mode'],
              stdin=PIPE)
    out = p.stdin.write('Something')
    if p.wait() == -20:  # happens on timeout
        kill_and_remove(ctr_name)
    return out
The following are the Docker container details, for reference:
dev@dev-VirtualBox: sudo docker ps -a
[sudo] password for dev:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
79b3b9d215f3 sml/tools:8 "/home/loadtest/st..." 46 hours ago Up 46 hours pcap_replay_192.168.212.131_9000_delay_dirty_1
Could someone suggest why I cannot start my container through my program?
docker-py (https://github.com/docker/docker-py) should be used to control Docker via Python.
This will start an Ubuntu container running sleep infinity.
import docker
client = docker.from_env()
client.containers.run("ubuntu:latest", "sleep infinity", detach=True)
Have a look at https://docker-py.readthedocs.io/en/stable/containers.html for more details (capabilities, volumes, ..).
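For reference, the docker run flags from the question map onto keyword arguments of containers.run(). Here is a minimal, untested sketch along those lines (image name, container name, capability, and volume are taken from the question; the command string is an assumption, and the license arguments are omitted):

import docker

client = docker.from_env()

# sketch of the question's `docker run` invocation via docker-py;
# the startup command and its arguments are assumed, not verified
container = client.containers.run(
    'sml/tools:8',
    'startup --base-port 9000',
    volumes={'/lib/modules': {'bind': '/lib/modules', 'mode': 'rw'}},
    cap_add=['NET_ADMIN'],
    name='o-9000',
    restart_policy={'Name': 'always'},
    detach=True,
)
print(container.logs())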
Related
I am writing a program in Python on Ubuntu to execute the command ls -l on a Raspberry Pi connected over the network.
Can anybody guide me on how to do that?
Sure, there are several ways to do it!
Let's say you've got a Raspberry Pi on a raspberry.lan host and your username is irfan.
subprocess
It's the standard-library Python module for running commands.
You can make it run ssh and do whatever you need on a remote server.
scrat has it covered in his answer. You definitely should do this if you don't want to use any third-party libraries.
You can also automate the password/passphrase entering using pexpect.
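For instance, a minimal pexpect sketch might look like this (assuming password authentication and that the remote prompt ends with "password:"):

import pexpect

# spawn ssh and answer the password prompt automatically
child = pexpect.spawn('ssh irfan@raspberry.lan ls -l')
child.expect('password:')
child.sendline('my_strong_password')
child.expect(pexpect.EOF)
print(child.before.decode())  # everything printed after the password prompt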
paramiko
paramiko is a third-party library that adds SSH-protocol support, so it can work like an SSH client.
Example code that connects to the server, executes ls -l, and grabs its results would look like this:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('raspberry.lan', username='irfan', password='my_strong_password')

stdin, stdout, stderr = client.exec_command('ls -l')
for line in stdout:
    print(line.strip('\n'))

client.close()
fabric
You can also achieve it using fabric.
Fabric is a deployment tool which executes various commands on remote servers.
It's often used to run things on a remote server, so you could easily deploy the latest version of your web application, restart a web server, and so on, all with a single command. Actually, you can run the same command on multiple servers, which is awesome!
Though it was made as a deployment and remote-management tool, you can still use it to execute basic commands.
# fabfile.py
from fabric.api import *

def list_files():
    with cd('/'):  # change the directory to '/'
        result = run('ls -l')  # run a 'ls -l' command
        # you can do something with the result here,
        # though it will still be displayed in fabric itself
It's like typing cd / and ls -l on the remote server, so you'll get the listing of your root directory.
Then run in the shell:
fab list_files
It will prompt for a server address:
No hosts found. Please specify (single) host string for connection: irfan@raspberry.lan
A quick note: You can also assign a username and a host right in a fab command:
fab list_files -U irfan -H raspberry.lan
Or you could put a host into the env.hosts variable in your fabfile, as sketched below.
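A minimal sketch of the env.hosts approach (assuming Fabric 1.x, where fabric.api is available):

# fabfile.py
from fabric.api import env, run, cd

# with env.hosts set, `fab list_files` no longer prompts for a host string
env.hosts = ['irfan@raspberry.lan']

def list_files():
    with cd('/'):
        run('ls -l')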
Then you'll be prompted for an SSH password:
[irfan@raspberry.lan] run: ls -l
[irfan@raspberry.lan] Login password for 'irfan':
And then the command will be run successfully.
[irfan@raspberry.lan] out: total 84
[irfan@raspberry.lan] out: drwxr-xr-x 2 root root 4096 Feb 9 05:54 bin
[irfan@raspberry.lan] out: drwxr-xr-x 3 root root 4096 Dec 19 08:19 boot
...
Simple example from here:
import subprocess
import sys

HOST = "www.example.org"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND = "uname -a"

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print("ERROR: %s" % error, file=sys.stderr)
else:
    print(result)
It does exactly what you want: connects over SSH, executes the command, and returns the output. No third-party library needed.
You may use the method below with the ssh command built into Linux/Unix.
import os
os.system('ssh username@ip bash < local_script.sh >> /local/path/output.txt 2>&1')
os.system('ssh username@ip python < local_program.py >> /local/path/output.txt 2>&1')
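If you want error handling on top of that, here is a sketch of the same idea with subprocess.run (Python 3.5+; paths as in the example above):

import subprocess

# feed the local script to a remote bash over ssh, appending all output
# to a local log file; check=True raises CalledProcessError on failure
with open('local_script.sh', 'rb') as script, open('/local/path/output.txt', 'ab') as log:
    subprocess.run(['ssh', 'username@ip', 'bash'],
                   stdin=script, stdout=log, stderr=subprocess.STDOUT,
                   check=True)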
The paramiko module can be used to run multiple commands by invoking a shell. Here is a class I created to invoke an SSH shell:
import logging
import select
import sys
import time

import paramiko

logger = logging.getLogger(__name__)

class ShellHandler:

    def __init__(self, host, user, psw):
        logger.debug("Initialising instance of ShellHandler host:{0}".format(host))
        try:
            self.ssh = paramiko.SSHClient()
            self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.ssh.connect(host, username=user, password=psw, port=22)
            self.channel = self.ssh.invoke_shell()
        except Exception:
            logger.error("Error creating ssh connection to {0}".format(host))
            logger.error("Exiting ShellHandler")
            return
        self.psw = psw
        self.stdin = self.channel.makefile('wb')
        self.stdout = self.channel.makefile('r')
        self.host = host
        time.sleep(2)
        while not self.channel.recv_ready():
            time.sleep(2)
        self.initialprompt = ""
        while self.channel.recv_ready():
            rl, wl, xl = select.select([self.stdout.channel], [], [], 0.0)
            if len(rl) > 0:
                tmp = self.stdout.channel.recv(24)
                self.initialprompt = self.initialprompt + str(tmp.decode())

    def __del__(self):
        self.ssh.close()
        logger.info("closed connection to {0}".format(self.host))

    def execute(self, cmd):
        cmd = cmd.strip('\n')
        self.stdin.write(cmd + '\n')
        # self.stdin.write(self.psw + '\n')
        self.stdin.flush()
        time.sleep(1)
        while not self.stdout.channel.recv_ready():
            time.sleep(2)
            logger.debug("Waiting for recv_ready")
        output = ""
        while self.channel.recv_ready():
            rl, wl, xl = select.select([self.stdout.channel], [], [], 0.0)
            if len(rl) > 0:
                tmp = self.stdout.channel.recv(24)
                output = output + str(tmp.decode())
        return output
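For example, the class could be used like this (host and credentials below are placeholders):

# hypothetical usage of ShellHandler
handler = ShellHandler('raspberry.lan', 'irfan', 'my_strong_password')
print(handler.execute('ls -l'))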
If creating a different shell each time does not matter to you, then you can use a method like the one below.
    def run_cmd(self, cmd):
        try:
            cmd = cmd + '\n'
            # self.ssh.settimeout(60)
            stdin, stdout, stderr = self.ssh.exec_command(cmd)
            while not stdout.channel.eof_received:
                time.sleep(3)
                logger.debug("Waiting for eof_received")
            out = ""
            while stdout.channel.recv_ready():
                err = stderr.read()
                if err:
                    print("Error: ", my_hostname, str(err))
                    return False
                out = out + stdout.read()
            if out:
                return out
        except Exception:
            error = sys.exc_info()
            logger.error(error)
            return False
I am having problems displaying the output of a command run with Python subprocess.Popen inside a docker container. The terminal just hangs while the process is running and only prints the output at the end.
I have a Python script like this (simplified):
def test():
    print('start')
    process = subprocess.Popen('pytest', stdout=subprocess.PIPE, universal_newlines=True)
    for line in iter(process.stdout.readline, ''):
        print(">>> {}".format(line.rstrip()))

def docker():
    client = docker.from_env()
    command = './this_script --test'
    generator = client.containers.run('python:3', command, remove=True, init=True,
                                      working_dir='/test', stdout=True, stderr=True,
                                      stream=True)
    for log in generator:
        print('>>> {}'.format(log))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    ...
    # if called with --test, the script calls test()
    # if called with --docker, the script calls docker()
Even the print('start') at the beginning of test() is not printed until the end of the process.
How can I force the stdout of the subprocess to be displayed real time?
I am on Ubuntu 18.04, using Python 3.6.7, Docker version 18.09.6, build 481bc77.
EDIT:
So the problem is that the "run" command hangs and does not return until the container process has ended.
I found a way to make it work by running the container in detached mode:
Added the -u flag in the shebang
Updated the docker() function to start the container in detached mode:
def docker():
    client = docker.from_env()
    command = 'python -u this_script.py --test'
    container = client.containers.run('python:3', command, remove=True, init=True,
                                      working_dir='/test', detach=True)
    for log in container.logs(stdout=True, stderr=True, stream=True):
        print('>>> {}'.format(log))
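As a side note, instead of the -u flag in the shebang, unbuffered output can also be requested through the PYTHONUNBUFFERED environment variable; a sketch using docker-py's environment parameter:

def docker():
    client = docker.from_env()
    # PYTHONUNBUFFERED=1 has the same effect as running `python -u`
    container = client.containers.run('python:3', 'python this_script.py --test',
                                      environment={'PYTHONUNBUFFERED': '1'},
                                      remove=True, init=True, working_dir='/test',
                                      detach=True)
    for log in container.logs(stdout=True, stderr=True, stream=True):
        print('>>> {}'.format(log))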
I am trying to use Python to set up a reverse SSH tunnel. Some software that starts with the system is going to manage it, killing it or starting it based on commands it receives.
I have written a class to manage the reverse tunnel as follows:
# imports omitted for brevity

class SshProcess():

    def __init__(self):
        self.process = None

    def start(self, port):
        if self.process is not None:
            return None

        command = [
            # 'sudo',
            'ssh',
            '-R {port}:127.0.0.1:22'.format(port=port),
            '{username}@{host}'.format(username=config.USERNAME, host=config.HOST),
            '-o StrictHostKeyChecking=no'
        ]

        def threaded_popen():
            self.process = subprocess.Popen(
                (' '.join(command)),  # command, # shlex.split(command),
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                shell=True
            )
            self.process.wait()
            logger.info('Reverse SSH to {username}@{host} has exited'.format(
                username=config.USERNAME, host=config.HOST))

        logger.debug('command raw: {command}'.format(command=command))
        logger.debug('command joined: {command}'.format(command=(' '.join(command))))
        self.thread = Thread(target=threaded_popen)
        self.thread.start()

    def stop(self):
        if self.process is not None:
            try:
                self.process.communicate(input="exit\n")
                self.process.terminate()
            except (ValueError, OSError) as e:
                logger.warning('Closing reverse SSH raised {error}'.format(
                    error=e.__class__.__name__))
                logger.warning(e)
        self.process = None
        if self.thread is not None:
            self.thread.join()
Now, whenever I call start(), I receive the following log statements:
2017-06-28 14:32:46,343 - module - DEBUG - command raw: ['ssh', '-R 4000:127.0.0.1:22', 'tich@192.168.0.88', '-o StrictHostKeyChecking=no']
2017-06-28 14:32:46,344 - module - DEBUG - command joined: ssh -R 4000:127.0.0.1:22 tich@192.168.0.88 -o StrictHostKeyChecking=no
2017-06-28 14:32:46,797 - module - INFO - Reverse SSH to tich@192.168.0.88 has exited
The issue is that the SSH tunnel exits nearly instantly after starting. Running a simple pidof ssh in Linux gives no output, as if the process does not even exist.
I have also tried using communicate() after starting the process, and you can see it establishes the connection and receives output. However, shortly after the function exits, the subprocess exits as well.
I have set up RSA keypairs for both the root and the regular user. Copying and pasting the command into a terminal does not produce the instant-exit bug.
The purpose is setting up a reverse SSH session so a remote user can log in, but I have not yet found an existing packaged solution that offers this functionality.
You have put together a somewhat unusual ssh invocation. My advice is to use paramiko, a great SSH package.
On the other hand, you are only subprocessing a single Linux command, so if you prefer to keep it that way:
Install sshpass (via yum install or apt-get), then:
sshpass -p your_password ssh user@hostname
and change this setting instead of passing the flag on the command line:
change ssh_config
vi /etc/ssh/ssh_config
Change the key below from "ask" to "no":
StrictHostKeyChecking no
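If you want to drive that from Python, here is a minimal sketch (assuming sshpass is installed and a password on the command line is acceptable for your setup):

import subprocess

# run a remote command through sshpass; capture_output requires Python 3.7+
result = subprocess.run(['sshpass', '-p', 'your_password',
                         'ssh', '-o', 'StrictHostKeyChecking=no',
                         'user@hostname', 'uname -a'],
                        capture_output=True, text=True)
print(result.stdout)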
Following this post (Running Sudo Command with paramiko) I was able to run commands as sudo remotely.
I can execute sudo pkill -2 pure-ftpd successfully, but when I try to execute sudo service pure-ftpd start I can't see any effect on the server, although the output in stdout and stderr looks correct.
Here is my code:
class RemoteCmdSender(object):

    def __init__(self, host, usr=None, passwd=None):
        self.host = host
        self.usr = usr
        self.passwd = str(passwd)

    def send_cmd_as_bash(self, cmd):
        client = SSHClient()
        client.set_missing_host_key_policy(AutoAddPolicy())
        client.connect(hostname=self.host, username=self.usr,
                       password=self.passwd)
        transport = client.get_transport()
        session = transport.open_session()
        session.get_pty('bash')
        session.exec_command(cmd)
        stdin = session.makefile('wb', -1)
        stdin.write(self.passwd.strip() + '\n')
        stdin.flush()
        stdout = session.makefile('rb', -1).read()
        stderr = session.makefile_stderr('rb', -1).read()
        client.close()
        return stdout, stderr
and the execution:
print(cmd_sender.send_cmd_as_bash("sudo service pure-ftpd"))
output:
Starting ftp server: Running: /usr/sbin/pure-ftpd -l pam -l puredb:/etc/pure-ftpd/pureftpd.pdb -E -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -u 1000 -B\r\n
This is consistent with the output that I get if I log in to the server using ssh and run sudo service pure-ftpd start in bash.
PS: I want to make clear that both commands work correctly when run from an ssh session using bash.