Docker: Python server not listening

I'm trying to write a simple Python server that writes a message (from JSON) to a file. When I deploy my Docker container, nothing happens. When I stop the container (keyboard interrupt), all console output is written at once and the container shuts down.
My python code:
import socketserver
import json

class PoCServer(socketserver.BaseRequestHandler):
    def handle(self):
        addr = self.client_address[0]
        print("[{}] incoming connection...".format(addr))
        buff = bytes()
        while True:
            rawdata = self.request.recv(256)
            if not rawdata: break
            buff = buff + rawdata
        data = json.loads(buff.decode())
        with open("data/" + data["name"] + ".txt", "w") as f:
            f.write(data["msg"])
        print("[{}] file ".format(addr) + data["name"] + ".txt written...")

server = socketserver.ThreadingTCPServer(("localhost", 10000), PoCServer)
print("[+] server listening...")
server.serve_forever()
My Dockerfile:
FROM python
WORKDIR /app
RUN mkdir /app/data
COPY server.py /app
EXPOSE 10000
CMD ["python", "server.py"]
Thank you!

Since the "server listening" message only appears after the keyboard interrupt, the code is working normally but its output is being buffered; it is all displayed at once when the program exits.
Running your code with the -u flag should solve this issue. According to the Python help page:
-u : unbuffered binary stdout and stderr;
which matches the symptom. So in your Dockerfile replace the CMD line with CMD ["python", "-u", "server.py"].
This will print the output without buffering, but you should still take care to expose the right port and map it to a port on the local system so you can actually send requests to and receive responses from the server.
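If you prefer to leave the Dockerfile unchanged, a couple of Python-side alternatives have the same effect for this script (a sketch; sys.stdout.reconfigure requires Python 3.7+):
import sys

# flush stdout after every line instead of relying on the -u flag
sys.stdout.reconfigure(line_buffering=True)

# or flush individual prints
print("[+] server listening...", flush=True)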

Related

Port mapping in docker doesn't work for python server

I want to build a server with SageMath: it should take in code, execute it and send back the result. SageMath has a Python interface with which I thought this could be achieved. I don't really know Python that well, but I found a good starting point here https://ask.sagemath.org/question/23431/running-sage-from-other-languages-with-higher-performance/. The problem is that I want to run this in a Docker container and just map the port, but this doesn't seem to be working.
I changed the python file to adjust it to the newer version:
import socket
import sys
from io import StringIO
from sage.all import *
from sage.calculus.predefined import x
from sage.repl.preparse import preparse

SHUTDOWN = False
HOST = 'localhost'
PORT = 8888
MAX_MSG_LENGTH = 102400

# Create socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
print('Socket created')

# Bind socket to localhost and port
try:
    s.bind((HOST, PORT))
except (socket.error , msg):
    print('Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1])
    sys.exit()
print('Socket bind complete')

# Start listening on socket
s.listen(10)
print('Socket now listening')

# Loop listener for new connections
while not SHUTDOWN:
    # Wait to accept a new client connection
    conn, addr = s.accept()
    print('Connected with ' + addr[0] + ':' + str(addr[1]))

    # Receive message from client
    msg = conn.recv(MAX_MSG_LENGTH)
    if msg:
        if msg == "stop":
            SHUTDOWN = True
        else:
            parsed = preparse(msg)
            if parsed.startswith('load') or parsed.startswith('attach'):
                os.system('sage "' + os.path.join(os.getcwd(), parsed.split(None, 1)[1]) + '"')
            else:
                # Redirect stdout to my stdout to capture into a string
                sys.stdout = mystdout = io.StringIO()
                # Evaluate msg
                try:
                    eval(compile(parsed, '<cmdline>', 'exec'))
                    result = mystdout.getvalue()  # Get result from mystdout
                except Exception as e:
                    result = "ERROR: " + str(type(e)) + " " + str(e)
                # Restore stdout
                sys.stdout = sys.__stdout__
                # Send response to connected client
                if result == "":
                    conn.sendall("Empty result, did you remember to print?")
                else:
                    conn.sendall(result)

    # Close client connection
    conn.close()

# Close listener
s.close()
then I created a script to run it
#!/bin/bash
/usr/bin/sage -python /var/app/sage-server.py
afterwards I created the Dockerfile to install SageMath
FROM ubuntu:20.04
COPY src/ /var/app
RUN apt-get update \
&& DEBIAN_FRONTEND="noninteractive" TZ="Europe/London" apt-get -y install tzdata \
&& apt-get install sagemath -y
WORKDIR /var/app
RUN ["chmod", "+x", "/var/app/server.sh"]
CMD ["/var/app/server.sh"]
and finally a docker-compose file to automate the mapping and to include it into the main project
version: '3'
services:
  sage-server:
    build: .
    volumes:
      - ./src:/var/app
    ports:
      - "9999:8888"
All parts execute without any error (on Ubuntu). I can even see that the port is in use when I go into the container, but port 9999 is not in use on the host system.
Does anyone have an idea?
As suggested by @David Maze, you need to bind to 0.0.0.0 for the server to be reachable from outside the container:
HOST = '0.0.0.0'
Then you run into a few more issues:
preparse expects a str, so you need to decode the received bytes:
msg = conn.recv(MAX_MSG_LENGTH).decode('utf-8')
StringIO was imported directly, so there is no need to prefix it with io.:
sys.stdout = mystdout = StringIO()
you need to send back bytes, but result is a str, so you need to encode it:
conn.sendall(result.encode('utf-8'))
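For a quick end-to-end check from the host once the container publishes port 9999, a minimal client sketch (the payload is just an example):
import socket

# connect to the published port on the host and send a snippet for the server to evaluate
with socket.create_connection(("localhost", 9999)) as s:
    s.sendall(b"print(1 + 1)")
    print(s.recv(102400).decode("utf-8"))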
The Dockerfile can be improved (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/):
FROM ubuntu:20.04
# First install dependencies so next builds will reuse docker build cache
# Use `--no-install-recommends` to minimize the installed packages
# Clean apt data
RUN apt-get update \
&& DEBIAN_FRONTEND="noninteractive" TZ="Europe/London" apt-get -y install tzdata \
&& apt-get install --no-install-recommends sagemath -y \
&& rm -rf /var/lib/apt/lists/*
# Copy changing data as late as possible to use docker build cache
COPY src/ /var/app
WORKDIR /var/app
# Document the listening port
EXPOSE 8888
# No need for a wrapper script for a simple command (this also avoids the chmod issue)
ENTRYPOINT ["/usr/bin/sage", "-python", "/var/app/sage-server.py"]
Since the build-time copy of the source code is overridden by the volume mount at run time, the permissions have to be correct on the host side (the RUN chmod +x at build time has no effect on the files mounted at run time).

Gracefully abort remote Windows command executed over SSH from Windows Python Paramiko script when Ctrl+C is pressed

I have a follow up question that builds off the question I asked here: Run multiple commands in different SSH servers in parallel using Python Paramiko, which was already answered.
Thanks to the answer on the link above, my python script is as follows:
# SSH.py
import paramiko
import argparse
import os

path = "path"
python_script = "worker.py"

# definitions for ssh connection and cluster
ip_list = ['XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX']
port_list = [':XXXX', ':XXXX', ':XXXX']
user_list = ['user', 'user', 'user']
password_list = ['pass', 'pass', 'pass']
node_list = list(map(lambda x: f'-node{x + 1} ', list(range(len(ip_list)))))
cluster = ' '.join([node + ip + port for node, ip, port in zip(node_list, ip_list, port_list)])

# run script on command line of local machine
os.system(f"cd {path} && python {python_script} {cluster} -type worker -index 0 -batch 64 > {path}/logs/'command output'/{ip_list[0]}.log 2>&1")

# loop for IP and password
stdouts = []
clients = []
for i, (ip, user, password) in enumerate(zip(ip_list[1:], user_list[1:], password_list[1:]), 1):
    try:
        print("Open session in: " + ip + "...")
        client = paramiko.SSHClient()
        client.connect(ip, user, password)
    except paramiko.SSHException:
        print("Connection Failed")
        quit()
    try:
        path = f"C:/Users/{user}/Desktop/temp-ines"
        stdin, stdout, stderr = ssh.exec_command(
            f"cd {path} && python {python_script} {cluster} -type worker -index {i} -batch 64 > "
            f"C:/Users/{user}/Desktop/{ip}.log 2>&1 &"
        )
        clients.append(ssh)
        stdouts.append(stdout)
    except paramiko.SSHException:
        print("Cannot run file. Continue with other IPs in list...")
        client.close()
        continue

# Wait for commands to complete
for i in range(len(stdouts)):
    print("hello")
    stdouts[i].read()
    print("hello1")
    clients[i].close()
    print("hello2")

print("\n\n***********************End execution***********************\n\n")
This script, which is run locally, is able to SSH into the servers and run the command (i.e., run a python script called worker.py and log the command output to a log file). I.e., it is able to go through the first for loop with no issues.
My issue is related to the second for loop. Please see the print statements I added in the second for loop to be clear. When I run SSH.py locally, this is what I observe:
As you can see, I SSH into each of the servers and then get stuck reading the command output of the first server. The worker.py script can take 30 minutes or so to complete, and the command output is the same on each server -- so it takes 30 minutes to read the command output of the first server, then its SSH connection is closed, a couple of seconds to read the command output of the second server (as it is the same as the first and has already been printed in full), then its SSH connection is closed, and so on. Please see below some of the command line output, if this helps.
Now, my question is, what if I don't want to wait until the worker.py script finishes, i.e., those entire 30 mins? I cannot/do not know how to raise a KeyboardInterrupt exception. What I have tried is quitting the local SSH.py script. However, as you can see from the print statements, this will not close the SSH connections although the training, and thus the log files, will stop logging info. In addition, after I quit the local SSH.py script, if I try to delete any of the log files, I get an error saying "cannot delete file because it is being used in cmd.exe" -- this only happens sometimes and I believe it is because of not closing the SSH connections?
First run in the Python console:
It hangs: the local Python process runs and its log file is saved, but there are no print statements and no Python process or log file is run/saved on the servers.
I run it again so a second process starts:
Now the first process no longer hangs (Python runs and log files are saved on the servers), and I can close this second run/process. It is as if the second run/process resolves the hang of the first one.
If I were to run python SSH.py in the terminal, it would just hang.
This was not happening before.
If you know that SSHClient.close cleanly closes the connection and aborts the remote command, call it in response to KeyboardInterrupt.
For this you cannot use the simple solution with stdout.read, as it blocks and prevents handling of Ctrl+C on Windows.
Use the waiting code from my answer to Run multiple commands in different SSH servers in parallel using Python Paramiko (the while any(x is not None for x in stdouts): snippet).
And wrap it in try: ... except KeyboardInterrupt:.
try:
    while any(x is not None for x in stdouts):
        for i in range(len(stdouts)):
            stdout = stdouts[i]
            if stdout is not None:
                channel = stdout.channel
                # To prevent losing output at the end, first test for exit,
                # then for output
                exited = channel.exit_status_ready()
                while channel.recv_ready():
                    s = channel.recv(1024).decode('utf8')
                    print(f"#{i} stdout: {s}")
                while channel.recv_stderr_ready():
                    s = channel.recv_stderr(1024).decode('utf8')
                    print(f"#{i} stderr: {s}")
                if exited:
                    print(f"#{i} done")
                    clients[i].close()
                    stdouts[i] = None
        time.sleep(0.1)
except KeyboardInterrupt:
    print("Aborting")
    for i in range(len(clients)):
        print(f"#{i} closing")
        clients[i].close()
If you do not need to separate the stdout and stderr, you can greatly simplify the code by using Channel.set_combine_stderr. See Paramiko ssh die/hang with big output.

Run multiple commands in different SSH servers in parallel using Python Paramiko

I have an SSH.py with the goal of connecting to many servers over SSH to run a Python script (worker.py). I am using Paramiko, but am very new to it and learning as I go. On each server I SSH into, I need to keep the Python script running -- this is for training a model in parallel, so the script needs to run on all machines in order to update model parameters/train jointly. The Python script on the servers needs to keep running, so either none of the SSH connections can close, or I have to figure out a way for the Python script on the servers to keep running even if I close the connection.
From extensive googling, it looks like you can achieve this with nohup or:
client = paramiko.SSHClient()
client.connect(ip_address, username, password)
transport = client.get_transport()
channel = transport.open_session()
channel.exec_command("python worker.py > /logs/'command output' 2>&1")
However, what is unclear to me is how to close/exit all the SSH connections. I am running the SSH.py file from cmd.exe; would closing cmd.exe be enough for all the remote processes to close?
In addition, is my use of client.close() correct for my purposes?
Please see below what I have for my code.
# SSH.py
import paramiko
import argparse
import os

path = "path"
python_script = "worker.py"

# definitions for ssh connection and cluster
ip_list = ['XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX']
port_list = [':XXXX', ':XXXX', ':XXXX']
user_list = ['user', 'user', 'user']
password_list = ['pass', 'pass', 'pass']
node_list = list(map(lambda x: f'-node{x + 1} ', list(range(len(ip_list)))))
cluster = ' '.join([node + ip + port for node, ip, port in zip(node_list, ip_list, port_list)])

# run script on command line of local machine
os.system(f"cd {path} && python {python_script} {cluster} -type worker -index 0 -batch 64 > {path}/logs/'command output'/{ip_list[0]}.log 2>&1")

# loop for IP and password
for i, (ip, user, password) in enumerate(zip(ip_list[1:], user_list[1:], password_list[1:]), 1):
    try:
        print("Open session in: " + ip + "...")
        client = paramiko.SSHClient()
        client.connect(ip, user, password)
        transport = client.get_transport()
        channel = transport.open_session()
    except paramiko.SSHException:
        print("Connection Failed")
        quit()
    try:
        channel.exec_command(f"cd {path} && python {python_script} {cluster} -type worker -index {i} -batch 64 > {path}/logs/'command output'/{ip_list[i]}.log 2>&1", timeout=30)
        client.close()  # here I am closing connection but above command should be running, my question is can I safely close cmd.exe on which I am running SSH.py?
    except paramiko.SSHException:
        print("Cannot run file. Continue with other IPs in list...")
        client.close()
        continue
The code is based on Running process of remote SSH server in the background using Python Paramiko
Edit: It seems like the channel.exec_command() is not executing the command
f"cd {path} && python {python_script} {cluster} -type worker -index {i} -batch 64 > {path}/logs/'command output'/{ip_list[i]}.log 2>&1"
So I wonder if it is because of client.close()? What would happen if I comment out all the lines with client.close()? Would this help? Is this dangerous? When I quit my local Python script, would this close all my SSH connections and hence, no need for client.close()?
Also all my machines have Windows OS.
Indeed, the problem is that you close the SSH connection. As the remote process is not detached from the terminal, closing the terminal terminates the process. On Linux servers, you can use nohup. I do not know whether there is a Windows equivalent.
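For illustration only (this is not needed for the solution below), on a Linux server the nohup variant could look roughly like this, reusing the names from your script:
# hypothetical Linux-only variant: nohup plus a trailing '&' detaches the remote
# process, so the SSH connection can be closed without killing it
command = (
    f"cd {path} && nohup python {python_script} {cluster} -type worker -index {i} "
    f"-batch 64 > {path}/logs/'command output'/{ip_list[i]}.log 2>&1 &"
)
client.exec_command(command)
client.close()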
Anyway, it seems that you do not need to close the connection. I understood that you are OK with waiting for all the commands to complete.
stdouts = []
clients = []

# Start the commands
commands = zip(ip_list[1:], user_list[1:], password_list[1:])
for i, (ip, user, password) in enumerate(commands, 1):
    print("Open session in: " + ip + "...")
    client = paramiko.SSHClient()
    client.connect(ip, user, password)
    command = \
        f"cd {path} && " + \
        f"python {python_script} {cluster} -type worker -index {i} -batch 64 " + \
        f"> {path}/logs/'command output'/{ip_list[i]}.log 2>&1"
    stdin, stdout, stderr = client.exec_command(command)
    clients.append(client)
    stdouts.append(stdout)

# Wait for commands to complete
for i in range(len(stdouts)):
    stdouts[i].read()
    clients[i].close()
Note that the above simple solution with stdout.read() works only because you redirect the command's output to a remote file. Were you not, the commands might deadlock.
Without that (or if you want to see the command output locally) you will need a code like this:
while any(x is not None for x in stdouts):
    for i in range(len(stdouts)):
        stdout = stdouts[i]
        if stdout is not None:
            channel = stdout.channel
            # To prevent losing output at the end, first test for exit,
            # then for output
            exited = channel.exit_status_ready()
            while channel.recv_ready():
                s = channel.recv(1024).decode('utf8')
                print(f"#{i} stdout: {s}")
            while channel.recv_stderr_ready():
                s = channel.recv_stderr(1024).decode('utf8')
                print(f"#{i} stderr: {s}")
            if exited:
                print(f"#{i} done")
                clients[i].close()
                stdouts[i] = None
    time.sleep(0.1)
If you do not need to separate the stdout and stderr, you can greatly simplify the code by using Channel.set_combine_stderr. See Paramiko ssh die/hang with big output.
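A rough sketch of that simplification (not the linked answer verbatim; it assumes client and command as above):
# merge stderr into the stdout stream so one sequential read is enough
channel = client.get_transport().open_session()
channel.set_combine_stderr(True)
channel.exec_command(command)
for line in channel.makefile('r'):   # combined stdout + stderr
    print(line, end='')
status = channel.recv_exit_status()  # wait for the command to finish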
Regarding your question about SSHClient.close: If you do not call it, the connection will be closed implicitly when the script finishes, once the Python garbage collector cleans up the pending objects. That's bad practice. And even if Python does not do it, the local OS will terminate all connections of the local Python process. That's bad practice too. In either case, the remote processes will be terminated along with them.
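If you want that cleanup to be explicit even when the script fails or is interrupted, one simple pattern is a try/finally; a sketch, where start_and_wait_for_commands is a hypothetical stand-in for the start and wait loops above:
try:
    start_and_wait_for_commands()   # hypothetical: the start and wait loops shown above
finally:
    for client in clients:
        client.close()              # always release the SSH connections explicitly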

paramiko equivalent of "cat File.gz | ssh address script.sh" in python 3.7

Command I'm trying to run using Paramiko in Python 3.7:
Windows:
type file.ext4.gz | ssh user@address sudo update.sh
Mac:
cat file.ext4.gz | ssh user@address sudo update.sh
From the cmd / terminal and from .bat / .sh this works, after entering the password. I've been working on a simple Python GUI (PySimpleGUI) to allow the user to do this, but without the need to enter the password (this is saved from the initial connection).
I've tried:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(config["IP_ADDRESS"], username=config["USERNAME"], password=config["PASSWORD"], timeout=5)
a = client.open_sftp()
a.put(file_location, "sh update.sh", callback=sent)
While this works to send the file, it doesn't run it and gives the error:
OSError: Failure
I don't want to do this in subprocess, as this tool is to prevent the use of terminal for the "end user"
I've been beating my head against this for 2 days now. Thank you.
EDIT:
Here is the STDIO Code:
def send_ssh(value, input=None):
    if input:
        transport = client.get_transport()
        channel = transport.open_session()
        channel.exec_command(value)
        with open(input, "rb") as file:
            for chunk in iter(functools.partial(file.read, read_size), b''):
                if channel.send_ready():
                    channel.sendall(chunk)
                if channel.recv_ready():
                    print(channel.recv(1024).decode().strip())
                if channel.recv_stderr_ready():
                    print(channel.recv_stderr(1024).decode().strip())
        while not channel.exit_status_ready():
            if channel.recv_ready():
                print(channel.recv(1024).decode().strip())
            if channel.recv_stderr_ready():
                print(channel.recv_stderr(1024).decode().strip())
    else:
        w, r, e = client.exec_command(value, get_pty=True)
        error = e.read().strip().decode()
        if error != "":
            return error
        else:
            return r.read().strip().decode()
Once the file is piped (cat) to the script, it is verified by the script. I worked around this by just using SFTP to send the file and then running
cat file | sudo script.sh
This works, but it does require transferring a 600 MB file each time (thankfully always over a local connection (LAN)). The above code does transfer the file, but it doesn't complete. If I just try sending it via for line in file: it corrupts the file.
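For reference, that workaround looks roughly like this (remote path and the sudo setup are placeholders):
# upload the image over SFTP, then run the update script against it remotely
sftp = client.open_sftp()
sftp.put("file.ext4.gz", "/tmp/file.ext4.gz")   # ~600 MB, but only over the LAN
sftp.close()
stdin, stdout, stderr = client.exec_command("cat /tmp/file.ext4.gz | sudo update.sh")
print(stdout.read().decode())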
Keeping things simpler, below we're using threading to allow synchronous APIs to be used rather than needing to write explicit asynchronous code:
import sys
import shutil
from threading import Thread
from paramiko import SSHClient

client = SSHClient()
client.load_system_host_keys()
client.connect('address', username='user')   # placeholders for the real host and user
# here's the important part: we're using the file handles returned by exec_command()
update_stdin, update_stdout, update_stderr = client.exec_command('sudo update.sh')
# copy stdout and stderr from the remote command to our own process's stdout and stderr
# (write to the binary buffers, since the channel files yield bytes)
t_out = Thread(target=shutil.copyfileobj, args=[update_stdout, sys.stdout.buffer]); t_out.start()
t_err = Thread(target=shutil.copyfileobj, args=[update_stderr, sys.stderr.buffer]); t_err.start()
# write your local file to the remote stdin, in the foreground: we don't exit until done.
shutil.copyfileobj(open('file.ext4.gz', 'rb'), update_stdin)
update_stdin.close()
# optional, but let's be graceful: wait for the threads to exit, and collect exit status
t_out.join(); t_err.join()
result = update_stdout.channel.recv_exit_status()
print(f"Remote process exited with status {result}")

How to list current directory files through socket API?

So right now I have a simple FTP system to transfer files.
But I am confused about how I would run commands on the server machine from a client machine.
How would I open a terminal on the server machine from my client machine to use commands such as ls or mkdir or cd? Or can I do this straight from socket programming?
You could use the python module subprocess. (https://pymotw.com/2/subprocess/)
For example, assuming you have a client/server 'dialogue' set up using sockets, you could do something like this:
client.py
# assume 's' is your socket already connected to the server
# prompt the user for a command to send
cmd = input("user > ")   # (raw_input on Python 2)
s.send(cmd.encode())     # send your command to the server as bytes
# let's say you input 'ls -la'
You could put the above code inside a loop that only breaks when you enter 'quit' or something, to continually send and receive commands. You would need a loop or something similar on the server side too, to continually accept and return the output from your commands. You could also use threads.
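A minimal sketch of that client-side loop (Python 3; s is the already-connected socket):
# keep sending commands until the user types 'quit'
while True:
    cmd = input("user > ")
    if cmd.strip() == "quit":
        break
    s.send(cmd.encode())          # send the command to the server
    print(s.recv(4096).decode())  # print whatever the server sends back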
server.py
# on the server side do this
# s is again your socket bound to a port
# but we're on the server side this time!
from subprocess import Popen, PIPE

cmd = s.recv(1024).decode()
# cmd now has 'ls -la' assigned to it
# parse it a bit
cmd = cmd.split()  # to get ['ls', '-la']
# now we execute the command on the server with subprocess
p = Popen(cmd, stdout=PIPE)
result = p.communicate()
# result[0] is, in this case, the listing of files in the current directory (as bytes)
s.send(result[0])
# you now make sure to receive your result on the client
Note: I think there is also subprocess32, a backport of the newer subprocess module, but all the methods are the same as far as I remember.
