I've been trying to write a Python script to control the starting and stopping of a Minecraft server. I've got it to accept commands through input(), but I also want the server's log output printed to the console (or processed in some way). Since the server process never ends, readline() hangs whenever the server finishes outputting text, and no further input can be performed. Is there a way to let stdin and stdout work simultaneously, or is there a way to time out readline() so I can continue?
The code I've got so far:
import subprocess
from subprocess import PIPE
import os

minecraft_dir = r"D:\Minecraft Server"
executable = r'java -Xms4G -Xmx4G -jar "D:\Minecraft Server\paper-27.jar" java'
process = None

def server_command(cmd):
    # Forward an arbitrary command to the server's stdin.
    if process is not None:
        cmd = cmd + '\n'
        cmd = cmd.encode("utf-8")
        print(cmd)
        process.stdin.write(cmd)
        process.stdin.flush()
    else:
        print("Server is not running.")

def server_stop():
    if process is None:
        print("Server is not running.")
    else:
        process.stdin.write("stop\n".encode("utf-8"))
        process.stdin.flush()

while True:
    command = input()
    command = command.lower()
    if command == "start":
        if process is None:
            os.chdir(minecraft_dir)
            process = subprocess.Popen(executable, stdin=PIPE, stdout=PIPE)
            print("Server started.")
        else:
            print("Server Already Running.")
    elif command == "stop":
        server_stop()
        process = None
    else:
        server_command(command)
I've mentioned processing the server log one way or another because I don't really need it on the console; I can always read from the log file it generates. But this particular server I'm running needs the stdout=PIPE argument, or it throws:
java.io.IOException: ReadConsoleInputW failed
        at org.fusesource.jansi.internal.Kernel32.readConsoleInputHelper(Kernel32.java:816)
        at org.fusesource.jansi.internal.WindowsSupport.readConsoleInput(WindowsSupport.java:99)
        at org.jline.terminal.impl.jansi.win.JansiWinSysTerminal.processConsoleInput(JansiWinSysTerminal.java:112)
        at org.jline.terminal.impl.AbstractWindowsTerminal.pump(AbstractWindowsTerminal.java:458)
        at java.lang.Thread.run(Unknown Source)
and I think it breaks the pipe? No further input is directed to the process (process.stdin.write has no effect), yet the process is still running.
Any help on either of these issues would be greatly appreciated.
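For what it's worth, a common way around this (a sketch of one possible approach, not something from the original post; the helper name pump_output is made up) is to drain the server's stdout on a background daemon thread, so the input() loop stays responsive:

import threading

def pump_output(proc):
    # Read the server log line by line until the pipe closes.
    for line in iter(proc.stdout.readline, b''):
        print(line.decode('utf-8', errors='replace'), end='')

# After starting the server:
# threading.Thread(target=pump_output, args=(process,), daemon=True).start()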
Related
I am making a Python program that communicates with a Minecraft server and tells it to start, stop, or run any other command. The program reads an email and executes the command given in the email. When the program enters the command to start the server, the Python program freezes until the Minecraft server is stopped.
I have tried having the program open a batch file that starts the server, but then I don't have any way to turn it off, type in commands, or read the console.
if data[0]+data[1]+data[2]+data[3]+data[4]+data[5] == 'start ':
    comm = data.replace('start ', '')
    try:
        mb = int(float(comm) * 1024)
        # calls the command to start the server
        call('java -Xmx' + str(mb) + 'M -Xms' + str(mb) + 'M -jar server.jar nogui')
        # program freezes here until server is stopped
    except Exception:
        call('java -Xmx1024M -Xms1024M -jar server.jar nogui')
    print("started server")
elif data == "stop server" or data == "stop":
    call('stop')
elif data[0]+data[1]+data[2]+data[3]+data[4] == 'call ':
    comm = data.replace('call ', '')
    call(comm)
    print("called command")
elif data[0]+data[1]+data[2] == 'op ':
    comm = data
    call(comm)
    print("op player")
else:
    print("not a command")
deletemail(mail, i)
print("deleted item")
I expected the program to continue and respond to emails, but it froze instead. The program carried on normally after the server was stopped. I know this isn't an error, but is there a way to get the Python program to continue while the server is running?
It is better practice to use subprocess to handle this. You can use Popen, which allows you to continue running your script and send in commands. Here is a good answer on using Popen that shows how to send in commands.
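A minimal sketch of that approach (the java command line is taken from the question above; treat the details as assumptions, not a tested setup):

import subprocess

# Popen returns immediately, so the email loop keeps running.
proc = subprocess.Popen(
    'java -Xmx1024M -Xms1024M -jar server.jar nogui',
    stdin=subprocess.PIPE)

# Later, send a command to the server console through its stdin:
proc.stdin.write(b'stop\n')
proc.stdin.flush()
proc.wait()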
This is really interesting.
I have the following scripts on my Linux machine:
sleep.py
import time
from datetime import datetime
print(datetime.now())
time.sleep(20)
print('Over!')
print(datetime.now())
loop.py
import time

for i in range(20):
    time.sleep(1)
    print(i)
I can terminate them directly with Ctrl+C if I log in through PuTTY or git-bash.
But when I try to run the Python script below on a Windows console:
test.py
import logging
import paramiko

def ssh_pty_command(cmd, ip, username, passwd=None, key_filename=None):
    """Run ssh.exec_command with realtime output and return exit_code."""
    try:
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        logging.debug('Connecting to remote server {}...'.format(ip))
        ssh.connect(ip, 22, username, password=passwd,
                    key_filename=key_filename, timeout=5)
        stdin, stdout, stderr = ssh.exec_command(cmd, get_pty=True)
        logging.info('ssh_In: {}'.format(cmd))
        # print '$: {}'.format(cmd)
        for line in iter(stdout.readline, ""):
            logging.info('ssh_Out: {}'.format(
                line.rstrip('\n').encode('utf-8')))
        for err in iter(stderr.readline, ""):
            logging.error('ssh_Error: {}'.format(
                err.rstrip().encode('utf-8')))
        exit_code = stdout.channel.recv_exit_status()
        logging.debug('Task exit with code {}.\n'.format(exit_code))
        return exit_code
    except Exception as err:
        logging.error('*** Caught SSH exception: %s: %s' %
                      (err.__class__, err))
        raise
    finally:
        ssh.close()

ssh_pty_command('python loop.py', ip, username)
ssh_pty_command('python sleep.py', ip, username)
When I press Ctrl+C, loop.py terminates immediately, but sleep.py waits until the time.sleep(20) has finished and only then terminates.
How can I terminate sleep.py immediately?
Note that I did try get_pty=True in the exec_command call in my function, but it didn't help.
I guess it has something to do with the signal sent by Paramiko, but I'm not sure where to dig in...
Ctrl+C signals an interrupt. The Python interpreter checks for the interrupt regularly and raises a KeyboardInterrupt exception when it detects one.
When Paramiko is waiting for incoming data on a socket, the checking for interrupts is probably suspended (just guessing). So if the remote command is not producing any output, you cannot break the local script.
Your loop.py produces output using print(i), so the local Python script can be interrupted at the moment it's processing that output.
In any case, it's not the remote script that cannot be interrupted, it's the local one. So it probably has nothing to do with time.sleep as such.
See also:
Stopping python using ctrl+c
https://docs.python.org/3/library/exceptions.html#KeyboardInterrupt
If you actually do not want to wait for the command to finish, your question is not really about Python (nor Paramiko), but about Linux. See Execute remote commands, completely detaching from the ssh connection.
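If you do want to keep collecting the output but stay interruptible, a minimal sketch (my assumption, using Paramiko's Channel API; the function name is mine) is to poll the channel instead of blocking in readline():

import time

def read_interruptible(stdout):
    # Poll instead of blocking, so the local interpreter gets a chance
    # to raise KeyboardInterrupt on Ctrl+C.
    channel = stdout.channel
    output = b''
    while not channel.exit_status_ready() or channel.recv_ready():
        if channel.recv_ready():
            output += channel.recv(4096)
        else:
            time.sleep(0.1)  # brief sleep; Ctrl+C can land here
    return output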
I'm trying to implement a TCP 'echo server'.
Simple stuff:
Client sends a message to the server.
Server receives the message
Server converts message to uppercase
Server sends modified message to client
Client prints the response.
It worked well, so I decided to parallelize the server: make it able to handle multiple clients at a time.
Since most Python interpreters have a GIL, multithreading won't cut it.
I had to use multiprocessing... and boy, this is where things went downhill.
I'm using Windows 10 x64 and the WinPython suite with Python 3.5.2 x64.
My idea is to create a socket, initialize it (bind and listen), create subprocesses, and pass the socket to the children.
But for the love of me... I can't make this work; my subprocesses die almost instantly.
Initially I had some issues 'pickling' the socket...
So I googled a bit and thought this was the issue.
So I tried passing my socket through a multiprocessing Queue, through a Pipe, and, as a last attempt, 'forkpickling' it and passing it as a bytes object during process creation.
Nothing works.
Can someone please shed some light here?
Tell me what's wrong?
Maybe the whole idea (sharing sockets) is bad... and if so, PLEASE tell me how I can achieve my initial objective: enabling my server to ACTUALLY handle multiple clients at once (on Windows) (don't tell me about threading, we all know Python's threading won't cut it ¬¬).
It's also worth noting that no files are created by the debug function.
No process lived long enough to run it, I believe.
The typical output of my server code is (the only difference between runs is the process numbers):
Server is running...
Degree of parallelism: 4
Socket created.
Socket bound to: ('', 0)
Process 3604 is alive: True
Process 5188 is alive: True
Process 6800 is alive: True
Process 2844 is alive: True
Press ctrl+c to kill all processes.
Process 3604 is alive: False
Process 3604 exit code: 1
Process 5188 is alive: False
Process 5188 exit code: 1
Process 6800 is alive: False
Process 6800 exit code: 1
Process 2844 is alive: False
Process 2844 exit code: 1
The children died...
Why god?
WHYYyyyyy!!?!?!?
The server code:
# Imports
import socket
import packet
import sys
import os
from time import sleep
import multiprocessing as mp
import pickle
import io

# Constants
DEGREE_OF_PARALLELISM = 4
DEFAULT_HOST = ""
DEFAULT_PORT = 0

def _parse_cmd_line_args():
    arguments = sys.argv
    if len(arguments) == 1:
        return DEFAULT_HOST, DEFAULT_PORT
    else:
        raise NotImplementedError()

def debug(data):
    pid = os.getpid()
    with open('C:\\Users\\Trauer\\Desktop\\debug\\' + str(pid) + '.txt', mode='a',
              encoding='utf8') as file:
        file.write(str(data) + '\n')

def handle_connection(client):
    client_data = client.recv(packet.MAX_PACKET_SIZE_BYTES)
    debug('received data from client: ' + str(len(client_data)))
    response = client_data.upper()
    client.send(response)
    debug('sent data from client: ' + str(response))

def listen(picklez):
    debug('started listen function')
    pid = os.getpid()
    server_socket = pickle.loads(picklez)
    debug('acquired socket')
    while True:
        debug('Sub process {0} is waiting for connection...'.format(str(pid)))
        client, address = server_socket.accept()
        debug('Sub process {0} accepted connection {1}'.format(str(pid),
              str(client)))
        handle_connection(client)
        client.close()
        debug('Sub process {0} finished handling connection {1}'.
              format(str(pid), str(client)))

if __name__ == "__main__":
    # Since most python interpreters have a GIL, multithreading won't cut
    # it... Oughta bust out some process, yo!
    host_port = _parse_cmd_line_args()
    print('Server is running...')
    print('Degree of parallelism: ' + str(DEGREE_OF_PARALLELISM))
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('Socket created.')
    server_socket.bind(host_port)
    server_socket.listen(DEGREE_OF_PARALLELISM)
    print('Socket bound to: ' + str(host_port))
    buffer = io.BytesIO()
    mp.reduction.ForkingPickler(buffer).dump(server_socket)
    picklez = buffer.getvalue()
    children = []
    for i in range(DEGREE_OF_PARALLELISM):
        child_process = mp.Process(target=listen, args=(picklez,))
        child_process.daemon = True
        child_process.start()
        children.append(child_process)
        while not child_process.pid:
            sleep(.25)
        print('Process {0} is alive: {1}'.format(str(child_process.pid),
              str(child_process.is_alive())))
    print()
    kids_are_alive = True
    while kids_are_alive:
        print('Press ctrl+c to kill all processes.\n')
        sleep(1)
        exit_codes = []
        for child_process in children:
            print('Process {0} is alive: {1}'.format(str(child_process.pid),
                  str(child_process.is_alive())))
            print('Process {0} exit code: {1}'.format(str(child_process.pid),
                  str(child_process.exitcode)))
            exit_codes.append(child_process.exitcode)
        if all(exit_codes):
            # Why do they die so young? :(
            print('The children died...')
            print('Why god?')
            print('WHYYyyyyy!!?!?!?')
            kids_are_alive = False
edit: fixed the signature of listen(). My processes still die instantly.
edit2: User cmidi pointed out that this code does work on Linux, so my question becomes: how can I make this work on Windows?
You can directly pass a socket to a child process. multiprocessing registers a reduction for this, for which the Windows implementation uses the following DupSocket class from multiprocessing.resource_sharer:
class DupSocket(object):
    '''Picklable wrapper for a socket.'''
    def __init__(self, sock):
        new_sock = sock.dup()
        def send(conn, pid):
            share = new_sock.share(pid)
            conn.send_bytes(share)
        self._id = _resource_sharer.register(send, new_sock.close)

    def detach(self):
        '''Get the socket. This should only be called once.'''
        with _resource_sharer.get_connection(self._id) as conn:
            share = conn.recv_bytes()
            return socket.fromshare(share)
This calls the Windows socket share method, which returns the protocol info buffer from calling WSADuplicateSocket. It registers with the resource sharer to send this buffer over a connection to the child process. The child in turn calls detach, which receives the protocol info buffer and reconstructs the socket via socket.fromshare.
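As a minimal illustration of share/fromshare (my own Windows-only sketch, not code from the multiprocessing source; port and host are placeholders):

import socket
import multiprocessing as mp

def worker(conn):
    share = conn.recv()               # the WSADuplicateSocket protocol info
    server = socket.fromshare(share)  # rebuild the listening socket here
    client, address = server.accept()
    client.sendall(client.recv(1024).upper())
    client.close()

if __name__ == '__main__':
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 50007))
    server.listen(1)
    parent_conn, child_conn = mp.Pipe()
    child = mp.Process(target=worker, args=(child_conn,))
    child.start()
    parent_conn.send(server.share(child.pid))  # share() needs the target pid
    child.join()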
It's not directly related to your problem, but I recommend that you redesign the server to instead call accept in the main process, which is the way this is normally done (e.g. in Python's socketserver.ForkingTCPServer). Pass the resulting (conn, address) tuple to the first available worker over a multiprocessing.Queue, which is shared by all of the workers in the process pool. Or consider using a multiprocessing.Pool with apply_async.
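A rough sketch of that design (the port number and worker count are placeholders of mine):

import multiprocessing as mp
import socket

def worker(jobs):
    while True:
        conn, address = jobs.get()   # multiprocessing pickles the socket for us
        conn.sendall(conn.recv(1024).upper())
        conn.close()

if __name__ == '__main__':
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('', 50007))
    server.listen(5)
    jobs = mp.Queue()
    for _ in range(4):
        mp.Process(target=worker, args=(jobs,), daemon=True).start()
    while True:
        jobs.put(server.accept())    # only the parent calls accept()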
def listen(), the target of your child processes, does not take any arguments, but you are providing the serialized socket as an argument (args=(picklez,)) to the child process. This causes an exception in the child process, which exits immediately:
TypeError: listen() takes no arguments (1 given)
def listen(picklez) should solve the problem; this provides the one argument your child processes' target expects.
I've successfully implemented Paramiko using exec_command; however, the command I'm running on the remote machine(s) can sometimes take several minutes to complete.
During this time my Python script has to wait for the remote command to complete and receive stdout.
My goal is to let the remote command "run in the background" and allow the local Python script to continue once it has sent the command via exec_command.
I'm not concerned with stdout at this point; I'm just interested in bypassing the wait for stdout so the script can continue while the command runs on the remote machine.
Any suggestions?
Current script:
import paramiko

def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    try:
        stdin, stdout, stderr = ssh_object.exec_command(command)
        stdout.readlines()
    except Exception:
        pass  # do something else
Thank you!
Use a separate thread to run the command. Usually threads should be cleaned up by joining them (the exception being daemon threads, which you expect to run until your program exits). Exactly how you do that depends on what else your program is doing. But an example is:
import threading
import paramiko

def ssh_exec_thread(ssh_object, command):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    stdout.readlines()

def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    thread = threading.Thread(target=ssh_exec_thread, args=(ssh_object, command))
    thread.start()
    # ...do something else...
    thread.join()
You can make this fancier by passing a Queue to ssh_exec_thread and putting the result on the queue for processing by your program later.
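For example (a variation of the code above; the results queue is my addition):

import queue
import threading

def ssh_exec_thread(ssh_object, command, results):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    results.put(stdout.readlines())  # hand the output back to the caller

results = queue.Queue()
thread = threading.Thread(target=ssh_exec_thread,
                          args=(ssh_object, command, results))
thread.start()
# ...do something else...
thread.join()
output = results.get()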
I want to log into a remote computer using the Python library Paramiko, then start a daemon process using the python-daemon library which, after the program terminates, keeps running as a kind of job queue.
This is my code so far:
(in this example the daemon will just open a file and print some random numbers into it)
# client.py
import paramiko

def main():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('machine1', username='user1')
    command = 'python server_daemon.py'
    stdin, stdout, stderr = ssh.exec_command(command)
    ssh.close()

if __name__ == "__main__":
    main()
# server_daemon.py
import time
import daemon

def main():
    with daemon.DaemonContext():
        s = [str(x) + "\n" for x in range(1000)]
        for i in s:
            with open("test.txt", "a") as f:
                f.write(i)
            time.sleep(0.4)
        while True:
            pass

if __name__ == "__main__":
    main()
Unfortunately this doesn't seem to do the trick.
If I remove the daemonizing context from the script, it seems to work, but then I have to wait for the server to finish.
I also tried redirecting the output to /dev/null, and that didn't work either.
Thanks for any suggestions.