print stdout of a subprocess command in a docker container - python

I am having problems displaying the output of a command run with Python's subprocess.Popen inside a Docker container. The terminal just hangs while the process is running and only prints the output at the end.
I have a Python script like this (simplified):
import argparse
import subprocess

import docker

def test():
    print('start')
    process = subprocess.Popen('pytest', stdout=subprocess.PIPE, universal_newlines=True)
    for line in iter(process.stdout.readline, ''):
        print(">>> {}".format(line.rstrip()))

def docker():
    client = docker.from_env()
    command = './this_script --test'
    generator = client.containers.run('python:3', command, remove=True, init=True, working_dir='/test', stdout=True, stderr=True, stream=True)
    for log in generator:
        print('>>> {}'.format(log))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    ...
    # if called with --test, the script calls test()
    # if called with --docker, the script calls docker()
Even the print('start') at the beginning of test() is not printed until the end of the process.
How can I force the stdout of the subprocess to be displayed in real time?
I am on Ubuntu 18.04, using Python 3.6.7, Docker version 18.09.6, build 481bc77.
EDIT:
So the problem is that the "run" command hangs and does not return until the container process has ended.
I found a way to make it work by running the container in detached mode:
Added the -u flag in the shebang
Updated the docker() function to start the container in detached mode:
def docker():
    client = docker.from_env()
    command = 'python -u this_script.py --test'
    container = client.containers.run('python:3', command, remove=True, init=True, working_dir='/test', detach=True)
    for log in container.logs(stdout=True, stderr=True, stream=True):
        print('>>> {}'.format(log))
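Note that the log stream yields raw bytes, so each entry prints as a bytes literal. Decoding keeps the output readable (a small sketch, assuming UTF-8 output from the container):

for log in container.logs(stdout=True, stderr=True, stream=True):
    print('>>> {}'.format(log.decode('utf-8').rstrip()))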

Related

paramiko equivalent of "cat File.gz | ssh address script.sh" in python 3.7

The command I'm trying to run using paramiko in Python 3.7:
Windows:
type file.ext4.gz | ssh user@address sudo update.sh
Mac:
cat file.ext4.gz | ssh user@address sudo update.sh
From cmd / the terminal and from .bat / .sh scripts this works, after entering the password. I've been working on a simple Python GUI (PySimpleGUI) to allow the user to do this, but without the need to enter the password (it is saved from the initial connection).
I've tried:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(config["IP_ADDRESS"], username=config["USERNAME"], password=config["PASSWORD"], timeout=5)
a = client.open_sftp()
a.put(file_location, "sh update.sh", callback=sent)
While this works to send the file, it doesn't run it and gives the error:
OSError: Failure
I don't want to do this with subprocess, as the point of this tool is to keep the "end user" out of the terminal.
I've been beating my head against this for 2 days now. Thank you.
EDIT:
Here is the STDIO Code:
import functools

read_size = 32768  # assumed chunk size; not defined in the original snippet

def send_ssh(value, input=None):
    # 'client' is the connected paramiko SSHClient from the snippet above
    if input:
        transport = client.get_transport()
        channel = transport.open_session()
        channel.exec_command(value)
        with open(input, "rb") as file:
            for chunk in iter(functools.partial(file.read, read_size), b''):
                if channel.send_ready():
                    channel.sendall(chunk)
                if channel.recv_ready():
                    print(channel.recv(1024).decode().strip())
                if channel.recv_stderr_ready():
                    print(channel.recv_stderr(1024).decode().strip())
        while not channel.exit_status_ready():
            if channel.recv_ready():
                print(channel.recv(1024).decode().strip())
            if channel.recv_stderr_ready():
                print(channel.recv_stderr(1024).decode().strip())
    else:
        w, r, e = client.exec_command(value, get_pty=True)
        error = e.read().strip().decode()
        if error != "":
            return error
        else:
            return r.read().strip().decode()
Once the file is piped to the script, it is verified by the script. I worked around this by using SFTP to send the file and then running
cat file | sudo script.sh
This works, but it requires transferring a 600 MB file (thankfully always over a local LAN connection) each time. The above code does transfer the file, but it doesn't complete. If I just try sending it via for line in file: the file gets corrupted.
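For reference, the workaround described above might look like this (a sketch; the remote path and script name are assumptions, not from the original post):

sftp = client.open_sftp()
sftp.put("file.ext4.gz", "/tmp/file.ext4.gz")  # hypothetical remote path
# run the same pipeline remotely; get_pty=True matches the else-branch above
stdin, stdout, stderr = client.exec_command("cat /tmp/file.ext4.gz | sudo ./script.sh", get_pty=True)
print(stdout.read().decode())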
Keeping things simpler, below we're using threading to allow synchronous APIs to be used rather than needing to write explicit asynchronous code:
import shutil
import sys
from threading import Thread

from paramiko import SSHClient

client = SSHClient()
client.load_system_host_keys()
client.connect('address', username='user')

# here's the important part: we're using the file handles returned by exec_command()
update_stdin, update_stdout, update_stderr = client.exec_command('sudo update.sh')

# copy stdout and stderr from the remote process to our own process's stdout and stderr
# (paramiko's channel files yield bytes, so write to the underlying binary buffers)
t_out = Thread(target=shutil.copyfileobj, args=[update_stdout, sys.stdout.buffer]); t_out.start()
t_err = Thread(target=shutil.copyfileobj, args=[update_stderr, sys.stderr.buffer]); t_err.start()

# write your local file to the remote stdin, in the foreground: we don't exit until done.
shutil.copyfileobj(open('file.ext4.gz', 'rb'), update_stdin)
update_stdin.close()

# optional, but let's be graceful: wait for the threads to exit, and collect exit status
t_out.join(); t_err.join()
result = update_stdout.channel.recv_exit_status()
print(f"Remote process exited with status {result}")

Starting docker container using python script

I would like to start a Docker container from a Python script. When I call the Docker image through my code, I am unable to start the container.
import subprocess
import docker
from subprocess import Popen, PIPE

def kill_and_remove(ctr_name):
    for action in ('kill', 'rm'):
        p = Popen('docker %s %s' % (action, ctr_name), shell=True,
                  stdout=PIPE, stderr=PIPE)
        if p.wait() != 0:
            raise RuntimeError(p.stderr.read())

def execute():
    ctr_name = 'sml/tools:8'  # docker image file name
    p = Popen(['docker', 'run', '-v', '/lib/modules:/lib/modules',
               '--cap-add', 'NET_ADMIN', '--name', 'o-9000', '--restart',
               'always', ctr_name, 'startup', ' --base-port',
               9000, ' --orchestrator-integration-license',
               ' --orchestrator-integration-license', 'jaVl7qdgLyxo6WRY5ykUTWNRl7Y8IzJxhRjEUpKCC9Q=',
               '--orchestrator-integration-mode'],
              stdin=PIPE)
    out = p.stdin.write('Something')
    if p.wait() == -20:  # Happens on timeout
        kill_and_remove(ctr_name)
    return out
The following are the docker container details, for your reference:
dev@dev-VirtualBox: sudo docker ps -a
[sudo] password for dev:
CONTAINER ID   IMAGE         COMMAND                  CREATED        STATUS        PORTS   NAMES
79b3b9d215f3   sml/tools:8   "/home/loadtest/st..."   46 hours ago   Up 46 hours           pcap_replay_192.168.212.131_9000_delay_dirty_1
Could someone suggest why I could not start my container through my program?
docker-py (https://github.com/docker/docker-py) should be used to control Docker via Python.
This will start an Ubuntu container running sleep infinity.
import docker
client = docker.from_env()
client.containers.run("ubuntu:latest", "sleep infinity", detach=True)
Have a look at https://docker-py.readthedocs.io/en/stable/containers.html for more details (capabilities, volumes, ..).
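For the specific docker run flags used in the question, docker-py exposes matching keyword arguments. A sketch of the equivalent call (argument values copied from the question; an illustration, not tested against the sml/tools image):

import docker

client = docker.from_env()
container = client.containers.run(
    'sml/tools:8',
    ['startup', '--base-port', '9000',
     '--orchestrator-integration-license',
     'jaVl7qdgLyxo6WRY5ykUTWNRl7Y8IzJxhRjEUpKCC9Q=',
     '--orchestrator-integration-mode'],
    name='o-9000',
    cap_add=['NET_ADMIN'],                                             # --cap-add NET_ADMIN
    volumes={'/lib/modules': {'bind': '/lib/modules', 'mode': 'rw'}},  # -v /lib/modules:/lib/modules
    restart_policy={'Name': 'always'},                                 # --restart always
    detach=True,
)
print(container.name)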

SSH command exiting instantly instead of staying alive as python subprocess

I am trying to use Python to set up a reverse SSH tunnel. Some software that starts with the system is going to manage it, killing it or starting it based on commands it receives.
I have written a class to manage the reverse tunnel as follows:
# imports omitted for brevity

class SshProcess():
    def __init__(self):
        self.process = None

    def start(self, port):
        if self.process is not None:
            return None
        command = [
            # 'sudo',
            'ssh',
            '-R {port}:127.0.0.1:22'.format(port=port),
            '{username}@{host}'.format(username=config.USERNAME, host=config.HOST),
            '-o StrictHostKeyChecking=no'
        ]

        def threaded_popen():
            self.process = subprocess.Popen(
                (' '.join(command)),  # command, # shlex.split(command),
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                shell=True
            )
            self.process.wait()
            logger.info('Reverse SSH to {username}@{host} has exited'.format(username=config.USERNAME, host=config.HOST))

        logger.debug('command raw: {command}'.format(command=command))
        logger.debug('command joined: {command}'.format(command=(' '.join(command))))
        self.thread = Thread(target=threaded_popen)
        self.thread.start()

    def stop(self):
        if self.process is not None:
            try:
                self.process.communicate(input="exit\n")
                self.process.terminate()
            except (ValueError, OSError) as e:
                logger.warning('Closing reverse SSH raised {error}'.format(error=e.__class__.__name__))
                logger.warning(e)
            self.process = None
        if self.thread is not None:
            self.thread.join()
Now whenever I call start I receive the following log statements:
2017-06-28 14:32:46,343 - module - DEBUG - command raw: ['ssh', '-R 4000:127.0.0.1:22', 'tich@192.168.0.88', '-o StrictHostKeyChecking=no']
2017-06-28 14:32:46,344 - module - DEBUG - command joined: ssh -R 4000:127.0.0.1:22 tich@192.168.0.88 -o StrictHostKeyChecking=no
2017-06-28 14:32:46,797 - module - INFO - Reverse SSH to tich@192.168.0.88 has exited
The issue is that the SSH tunnel exits almost instantly after starting. Running a simple pidof ssh on Linux gives no output, as if the process does not even exist.
I have also tried using communicate() after starting the process, and you can see it establishes the connection and receives output. However, shortly after the function exits, the subprocess exits as well.
I have set up RSA keypairs for both the root and the regular user. Copying and pasting the command into a terminal does not produce the instant exit bug.
The purpose is setting up a reverse SSH session so a remote user can log in. But I currently have not found an existing packaged solution that offers this functionality.
You have set up the SSH connection in a somewhat unusual way. My advice is to use paramiko, a great SSH package.
On the other hand, you are only subprocessing a Linux command, so if you prefer to keep it that way:
Install sshpass (via yum install or apt-get), then:
sshpass -p your_password ssh user@hostname
And instead of the flag you passed, change the setting in ssh_config:
vi /etc/ssh/ssh_config
Change the key below from "ask" to "no":
StrictHostKeyChecking no
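Separately, the instant exit in the question may come down to how the command is launched: joining the argument list into a string with shell=True changes how ssh parses it, and with no remote command ssh can exit as soon as its session ends. A minimal sketch of an alternative launch (an assumption, not a verified fix; host and port are taken from the question's logs):

import subprocess

command = [
    'ssh',
    '-N',                              # no remote command: just hold the tunnel open
    '-T',                              # no pseudo-terminal needed
    '-o', 'StrictHostKeyChecking=no',
    '-o', 'ExitOnForwardFailure=yes',  # exit with an error if the remote port is taken
    '-R', '4000:127.0.0.1:22',
    'tich@192.168.0.88',
]
process = subprocess.Popen(command)    # pass the list directly: no shell=True, no join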

subprocess for executing unix commands

I basically want to connect to my simulator and execute a few commands from there.
From the Unix shell I connect to my simulator with the command "gmake CONFD_NUMBER=1 nthconfdcli", but when I run the script below, my code hangs.
import subprocess
import pexpect

def Simulator():
    command = "gmake CONFD_NUMBER=50 nthconfdcli"
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
    (output, err) = p.communicate()
    p.expect("#")
    p.sendline('show test cli')
    p.expect(['#', pexpect.EOF])
    show = p.before
    print show
    p.sendline('exit')
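The snippet mixes two APIs: expect(), sendline(), and before belong to pexpect's spawn objects, not to subprocess.Popen, and communicate() blocks until the process exits, which is why the script hangs. A sketch of the interactive session done entirely with pexpect (assuming pexpect is available and the "#" prompt from the original):

import pexpect

def simulator():
    child = pexpect.spawn('gmake CONFD_NUMBER=50 nthconfdcli')
    child.expect('#')                 # wait for the CLI prompt
    child.sendline('show test cli')
    child.expect(['#', pexpect.EOF])  # wait for the next prompt or EOF
    print(child.before.decode())      # everything printed before the prompt
    child.sendline('exit')
    child.close()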

How can I start another process required by my Selenium tests

I'm using django-selenium to add Selenium testing functionality to existing unittests.
My Selenium tests rely on a web server running on my machine, which is started by running our Django app like so: main.py -a
So the first thing I want to do in my Selenium test is start this server, which I set up like so:
def start_server():
    path = os.path.join(os.getcwd(), 'main.py -a')
    server_running = is_server_running()
    if server_running is False:
        server = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        stdout, stderr = server.communicate(input='%s\n' % path)
        print 'Server error:\n{0}\n'.format(stderr)
        server_running = is_server_running()
    return server_running
However, when I do this, the web server takes over execution of the Django test process in the command line. I assume the way I should be doing this is to launch the command prompt in a separate process and then trigger the main.py -a command in that process.
Is this the right idea, and if so, how can I modify that function to spawn a new process and launch my command? I was trying to run 'cmd.exe' using Process(target=path) but I couldn't get it to work. Thanks :)
The way I have gone with this is a much simpler launch method:
startServer.py
def run():
    path = os.path.join(os.getcwd(), 'main.py')
    server_running = is_server_running()
    if server_running is False:
        subprocess.Popen(['python', path, '-a'])

if __name__ == '__main__':
    run()
Which I can then start and stop in my tests' setUp and tearDown, like so:
def setUp(self):
    self.server = Process(target=startServer.run)
    self.server.start()

def test(self):
    # run test process
    pass

def tearDown(self):
    utils.closeBrowser(self.ff)
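If the server process should also be stopped explicitly, multiprocessing.Process provides terminate(); an extended tearDown might look like this (a sketch, untested against this setup; it may also avoid the forcibly closed socket error mentioned below):

def tearDown(self):
    utils.closeBrowser(self.ff)
    self.server.terminate()  # stop the helper process started in setUp
    self.server.join()       # wait for it to finish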
There may well be a better way of doing things & something here may not be 'as it should be' but it works (with a socket forcibly closed error) :)
My only outstanding issue is the tests starting before the database tables have been created :(
