I want to log into a remote computer using the Python library Paramiko, and then use the python-daemon library to start a daemon process that keeps running as a kind of job queue after the program terminates.
This is my code so far (in this example the daemon just opens a file and writes some numbers into it):
# client.py
import paramiko

def main():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('machine1', username='user1')
    command = 'python server_daemon.py'
    stdin, stdout, stderr = ssh.exec_command(command)
    ssh.close()

if __name__ == "__main__":
    main()
# server_daemon.py
import time
import daemon

def main():
    with daemon.DaemonContext():
        s = [str(x) + "\n" for x in range(1000)]
        for i in s:
            with open("test.txt", "a") as f:
                f.write(i)
            time.sleep(0.4)
        while True:
            pass

if __name__ == "__main__":
    main()
Unfortunately this doesn't seem to work: if I remove the daemonizing context from the script it works, but then I have to wait for the server to finish. I also tried redirecting the output to /dev/null, and that didn't work either.
Thanks for any suggestions.
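One common workaround (a sketch, not from this thread, so treat it as an assumption): fully detach the remote process from the SSH session with nohup, so closing the connection does not kill it and exec_command returns as soon as the shell has forked off the process:

command = 'nohup python server_daemon.py > /dev/null 2>&1 &'
stdin, stdout, stderr = ssh.exec_command(command)
stdout.channel.recv_exit_status()  # returns quickly: the shell exits right after backgrounding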
Related
I wrote this code in Paramiko:
from paramiko import SSHClient, AutoAddPolicy
import time

# hostname, user and passwd are defined elsewhere
ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect(hostname, username=user, password=passwd, timeout=3)
session = ssh.invoke_shell()
session.send("\n")
session.send("echo step 1\n")
time.sleep(1)
session.send("sleep 30\n")
time.sleep(1)
while not session.recv_ready():
    time.sleep(2)  # time.wait does not exist; time.sleep is meant here
output = session.recv(65535)
session.send("echo step 2\n")
time.sleep(1)
output += session.recv(65535)
I'm trying to execute several commands on my Linux server. The problem is that my Python code does not wait for a command to finish executing: if I run sleep 30, for example, Python does not wait 30 seconds before sending the next command. How can I resolve this? I tried a while recv_ready() loop, but it still does not wait.
Use exec_command, which runs a single command and lets you wait for it to finish: http://docs.paramiko.org/en/1.16/api/channel.html
stdin, stdout, stderr = ssh.exec_command("my_long_command --arg 1 --arg 2")
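For example (a minimal sketch, reusing the connected ssh client from the question): reading the output blocks until the remote command finishes, so a sleep 30 really takes 30 seconds before the script continues:

stdin, stdout, stderr = ssh.exec_command("echo step 1; sleep 30; echo step 2")
output = stdout.read()  # blocks until the remote command exits
retcode = stdout.channel.recv_exit_status()  # the command's exit status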
The following code works for me:
from paramiko import SSHClient, AutoAddPolicy
import time

ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect('111.111.111.111', username='myname', key_filename='/path/to/my/id_rsa.pub', port=1123)

sleeptime = 0.001
outdata, errdata = b'', b''  # channel reads return bytes in Python 3
ssh_transp = ssh.get_transport()
chan = ssh_transp.open_session()
# chan.settimeout(3 * 60 * 60)
chan.setblocking(0)
chan.exec_command('ls -la')
while True:  # monitor the process
    # read from the output streams while data is available
    while chan.recv_ready():
        outdata += chan.recv(1000)
    while chan.recv_stderr_ready():
        errdata += chan.recv_stderr(1000)
    if chan.exit_status_ready():  # command completed
        break
    time.sleep(sleeptime)
retcode = chan.recv_exit_status()
ssh_transp.close()
print(outdata.decode())
print(errdata.decode())
Please note that the history command cannot be executed over ssh like this.
See the example here: https://superuser.com/questions/962001/incorrect-output-of-history-command-of-ssh-how-to-read-the-timestamp-info-corre
In case you do not need to read stdout and stderr separately, you can use much more straightforward code:
stdin, stdout, stderr = ssh_client.exec_command(command)
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
readlines reads until the command finishes and returns the complete output.
In case you need the outputs separately, do not be tempted to remove the set_combine_stderr call and read stdout and stderr with separate readlines calls. That might deadlock; see Paramiko ssh die/hang with big output.
For correct code that reads the outputs separately, see Run multiple commands in different SSH servers in parallel using Python Paramiko.
Obligatory warning: Do not use AutoAddPolicy – you lose protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
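A minimal sketch of the safer alternative (assuming the server's key is already in your known_hosts): load the system host keys and keep the default RejectPolicy, so unknown hosts are refused:

from paramiko import SSHClient, RejectPolicy

ssh = SSHClient()
ssh.load_system_host_keys()  # reads ~/.ssh/known_hosts
ssh.set_missing_host_key_policy(RejectPolicy())  # the default: refuse unknown hosts
ssh.connect('111.111.111.111', username='myname')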
I've been trying to write a Python script to control starting and stopping a Minecraft server. I've got it to accept commands through input(), but I also want the server's log to be printed to the console (or processed in some way). Since the server process never ends, readline hangs once the server has finished outputting text, and no further input can be performed. Is there a way to let stdin and stdout work simultaneously, or a way to time out readline so I can continue?
The code I've got so far:
import subprocess
from subprocess import PIPE
import os

# raw strings avoid backslash escape issues in Windows paths
minecraft_dir = r"D:\Minecraft Server"
executable = r'java -Xms4G -Xmx4G -jar "D:\Minecraft Server\paper-27.jar" java'
process = None

def server_command(cmd):
    if process is not None:
        cmd = cmd + '\n'
        cmd = cmd.encode("utf-8")
        print(cmd)
        process.stdin.write(cmd)
        process.stdin.flush()
    else:
        print("Server is not running.")

def server_stop():
    if process is None:
        print("Server is not running.")
    else:
        process.stdin.write("stop\n".encode("utf-8"))
        process.stdin.flush()

while True:
    command = input()
    command = command.lower()
    if command == "start":
        if process is None:
            os.chdir(minecraft_dir)
            process = subprocess.Popen(executable, stdin=PIPE, stdout=PIPE)
            print("Server started.")
        else:
            print("Server Already Running.")
    elif command == "stop":
        server_stop()
        process = None
    else:
        server_command(command)
I mentioned processing the server log some other way because I don't really need it on the console; I can always read it from the file the server generates. But this particular server needs the stdout=PIPE argument, or it throws:
java.io.IOException: ReadConsoleInputW failed
at org.fusesource.jansi.internal.Kernel32.readConsoleInputHelper(Kernel32.java:816)
at org.fusesource.jansi.internal.WindowsSupport.readConsoleInput(WindowsSupport.java:99)
at org.jline.terminal.impl.jansi.win.JansiWinSysTerminal.processConsoleInput(JansiWinSysTerminal.java:112)
at org.jline.terminal.impl.AbstractWindowsTerminal.pump(AbstractWindowsTerminal.java:458)
at java.lang.Thread.run(Unknown Source)
and I think that breaks the pipe? No further input is directed to the process (process.stdin.write stops working), yet the process is still running.
Any help on either of these issues would be greatly appreciated.
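One common approach (a sketch, not from the original thread; drain_output is my own name): read the server's stdout on a background daemon thread, so readline only ever blocks there and the input() loop stays responsive:

import threading

def drain_output(proc):
    # readline blocks only inside this thread, not in the main input() loop
    for line in iter(proc.stdout.readline, b""):
        print(line.decode("utf-8", errors="replace"), end="")

# after starting the server:
# threading.Thread(target=drain_output, args=(process,), daemon=True).start()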
I am not familiar with subprocesses and I would like some help with the following problem.
I have 3 apps. Let's say I am running them with commands like this:
python manage.py app1
python manage.py app2
python manage.py app3
I want to make a main script to control them, with commands like run_app1 or stop_app1.
Everything runs in Linux.
My apologies for my poor explanation. I have a condition called dyslexia, also known as reading disorder. It is sometimes hard for me to write down what I am thinking.
Thank you for reading and for any help.
Using the subprocess module, a first step could be something like this:
# Master
from multiprocessing.connection import Listener
from subprocess import Popen, PIPE
import sys

port = 10000
lstn = Listener(('localhost', int(port)), authkey=b'secret')
proc = Popen((sys.executable, 'worker.py', str(port)), stdout=PIPE, stderr=PIPE)
conn = lstn.accept()
conn.send([1, 'Brian', None])
print(proc.stdout.readline())

# Worker
from multiprocessing.connection import Client
import sys

port = int(sys.argv[1])
conn = Client(('localhost', port), authkey=b'secret')
while True:
    try:
        msg = conn.recv()
        print('Received: %s' % msg)
        sys.stdout.flush()
    except EOFError:
        break
The master process initializes a listener and then opens the worker process. Messages can be sent to the worker via the connection object, and the worker's stdout and stderr go back to the master process.
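As a hypothetical extension (the run_app/stop_app helpers are my own names, not part of the answer above, and reuse the Popen/sys imports from the master), the master could keep one Popen per app and terminate it on request:

procs = {}

def run_app(name):
    # e.g. run_app('app1') starts "python manage.py app1"
    if name not in procs:
        procs[name] = Popen((sys.executable, 'manage.py', name))

def stop_app(name):
    proc = procs.pop(name, None)
    if proc is not None:
        proc.terminate()  # sends SIGTERM on Linux
        proc.wait()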
I've successfully implemented Paramiko using exec_command; however, the command I'm running on the remote machine(s) can sometimes take several minutes to complete.
During this time my Python script has to wait for the remote command to complete and receive stdout.
My goal is to let the remote command "run in the background" and allow the local Python script to continue once it has been sent via exec_command.
I'm not concerned with stdout at this point; I'm just interested in bypassing the wait for stdout so the script can continue while the command runs on the remote machine.
Any suggestions?
Current script:
import paramiko

def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    try:
        stdin, stdout, stderr = ssh_object.exec_command(command)
        stdout.readlines()
    except:
        pass  # do something else
Thank you!
Use a separate thread to run the command. Threads should usually be cleaned up with join (the exception is daemon threads, which you expect to run until your program exits). Exactly how you do that depends on what else your program is doing, but an example is:
import threading
import paramiko

def ssh_exec_thread(ssh_object, command):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    stdout.readlines()

def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    thread = threading.Thread(target=ssh_exec_thread, args=(ssh_object, command))
    thread.start()
    # ...do something else...
    thread.join()
You can make this fancier by passing a Queue to ssh_exec_thread and putting the result on the queue for processing by your program later.
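A sketch of that Queue variant (reusing the names from the example above):

import queue
import threading

def ssh_exec_thread(ssh_object, command, q):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    q.put(stdout.readlines())  # hand the output back to the main thread

results = queue.Queue()
thread = threading.Thread(target=ssh_exec_thread, args=(ssh_object, command, results))
thread.start()
# ...do something else...
output = results.get()  # blocks until the worker thread puts the result
thread.join()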