Piping Python output to NetCat - time.sleep() breaks output? - python

I am attempting to dump a Python script's output to netcat using WSL on Windows. In one terminal, I have my Python program and netcat set up in listener mode. In another terminal, I connect to this listener and expect to see my Python output. The code below works and I am able to do this successfully, but when I add time.sleep(1) in between function calls I do not receive any Python output. Why is this?
I have tried using the print() function instead of sys.stdout.write(). I've tried different nc configurations.
Python producer script:
import time
from sys import stdout

def generate_stream_data():
    stdout.write('some text\n')

if __name__ == '__main__':
    iterations = 10000
    while iterations > 0:
        generate_stream_data()
        time.sleep(1)  # this causes nothing to get piped to nc
        iterations -= 1
Piping the python output in WSL:
python above_code.py | nc -lk localhost 9999
In another WSL terminal:
nc -v localhost 9999
Commenting out time.sleep(1) lets me see my output in my second terminal. I have tested the standalone Python program, and it successfully prints 'some text' to the console with the 1-second delay. Why does adding the sleep cause me to not see anything in netcat?
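A likely explanation (not confirmed anywhere in this post) is stdout block-buffering: when stdout is a pipe rather than a terminal, Python buffers writes in blocks of several kilobytes, so one short line per second can sit in the buffer for a very long time, while the no-sleep version fills the buffer almost instantly. A minimal sketch of the writer with an explicit flush follows; running the unmodified script with python3 -u above_code.py | nc -lk localhost 9999 should have the same effect:
import time
from sys import stdout

def generate_stream_data():
    stdout.write('some text\n')
    stdout.flush()  # push each line through the pipe immediately, instead of waiting for the block buffer to fill

if __name__ == '__main__':
    for _ in range(10000):
        generate_stream_data()
        time.sleep(1)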

Related

print execution to std output in python

I am trying to port some simple scripts that I have in tcl to python.
Using tcl/expect, we can see every executed command on the standard output. For example,
spawn ssh admin@$IP
send "ls\r"
would yield by default an output like this:
ssh admin@10.10.10.10
ls
....
In Python, the only way I saw was to decode the child.before or child.after outputs.
Is there a way Python can output everything it runs to the console or a file?
This is what I am doing now:
#!/usr/bin/env python3
import pexpect
shellprompt = "] # "
child = pexpect.spawn('ssh admin@XYZ')
child.sendline('ls')
child.expect(shellprompt)
ls_out = child.before.decode()
print(ls_out)
This is run on a Linux machine, doing ssh to another Linux machine.
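If the goal is simply to see everything the spawned session prints as it happens, pexpect's logfile attributes may be what is needed. A minimal sketch, assuming key-based login (host and prompt are placeholders, not taken from the post above):
import sys
import pexpect

shellprompt = "] # "
# text-mode spawn so the log can be written straight to sys.stdout
child = pexpect.spawn('ssh admin@10.10.10.10', encoding='utf-8')
child.logfile_read = sys.stdout   # mirror everything read from the child as it arrives
child.expect(shellprompt)
child.sendline('ls')
child.expect(shellprompt)
child.sendline('exit')
child.expect(pexpect.EOF)
Using logfile instead of logfile_read would also echo what is sent to the child, which is closer to the tcl/expect behaviour described above.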

Call python script as module with input from bash script

From a bash function, I want to call a Python script which prompts for input, and I need to run that script as a module using python -m.
Here is select_pod.py
# above this will be a print out of pods
pod = input('Pick pod')
print(pod)
Here is the bash function:
function foo() {
    POD=$(python3 -m select_pod)
    kubectl exec $POD --stdin --tty bash
}
I can't get the input to work, i.e. "Pick pod" is not printed to the terminal.
When you do POD=$(python3 -m select_pod), the POD=$(...) means that any output printed to stdout within the parentheses is captured in the POD variable instead of being printed to the screen. Simply echoing POD out afterwards is no good either, as that only happens once the Python script has finished.
What you need to do is duplicate the output of the Python program. Assuming Linux/POSIX, this can be done using e.g.
POD=$(python3 -m select_pod | tee /dev/stderr)
Because your terminal shows both stdout and stderr, duplicating the output from stdout to stderr makes the text show up.
Hijacking the error channel for this might not be ideal, e.g. if you want to later sort the error messages using something like 2> .... A different solution is to just duplicate it directly to the tty:
POD=$(python3 -m select_pod | tee /dev/tty)
You can change sys.stdout before calling input():
import sys
save_sys_stdout = sys.stdout
sys.stdout = sys.stderr
pod = input('Pick pod')
sys.stdout = save_sys_stdout
print(pod)
That way POD=$(python3 -m select_pod) will work and you don't need to do any splitting afterwards.
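Another variant (a sketch, not from either answer above) is to write the prompt straight to the controlling terminal, so the command substitution only ever captures the final result:
# select_pod.py, variant: prompt on /dev/tty so POD=$(...) captures only the answer
import sys

def ask(prompt):
    try:
        with open('/dev/tty', 'w') as tty:   # bypass the captured stdout
            tty.write(prompt)
            tty.flush()
    except OSError:
        # no controlling terminal; fall back to stderr like the snippet above
        print(prompt, end='', file=sys.stderr, flush=True)
    return input()

pod = ask('Pick pod: ')
print(pod)  # this is the only thing POD=$(...) sees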

Run script to send in-game Terraria server commands

In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. The server should be powered on 24/7, without any GUI, and only accessed by SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
Everything is working fine, but I really struggle to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it stops the command at the space character, outputting this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution for now; I'll try it with bash instead, it seems more logical to do it that way.
Edit 2:
I found that the "terrariad" script expects just one argument, but Popen is splitting my argument into two no matter which method I use, since my input string has a space character in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But I get this from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter if I try to pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or use a backslash before the space character, it always splits the argument when it reaches a space.
Edit 3:
Looking at the bash script I could understand what is going on when I send a command... Basically it uses the "stuff" command, from the screen program, to send characters to the Terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it gives a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen approach.
It seems to me that Popen isn't the best way to run bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with this solution, using a fifo file:
First, create a fifo to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit the scripts and save and load maps to the server from another computer; it is shared with Samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a Python script to read from the screen output and write to the pipe file (I know, there are probably other ways to do this):
import shlex, os

# open the fifo that the Terraria server reads as its stdin
outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")

while 1:
    # each line here is one line of the server's console output, piped in
    line = input()
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        # everything after the '&' marker is treated as a server command
        cmd = slice(cmdPosition + 1, len(line))
        cmdText = line[cmd]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from TerrariaServer, and redirected the errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the Samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.
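For debugging, the fifo can also be fed by hand, without going through the chat parser. A small sketch under the same paths as above, assuming the server is running and reading the fifo (opening it write-only blocks until a reader exists):
import os

# open the same fifo the server reads as stdin and push one command into it
fd = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
os.write(fd, bytes("say testing the pipe\r\r", 'utf-8'))
os.close(fd)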

Freeze stdin when in the background, unfreeze it when in the foreground

I am trying to run a script in the background:
nohup script.py > out 2> err < /dev/null &
The script (Python 3.4) does at some point:
answer = input('? ')
(it has a menu running in one of the threads)
And the nohup call is crashing with:
EOFError: EOF when reading a line
Because of the /dev/null redirection of stdin I imagine. If I run it without stdin redirection:
nohup script.py > out 2> err &
It crashes with:
OSError: [Errno 9] Bad file descriptor
If I run it with:
script.py > out 2> err
It works, but blocks my terminal (it is in the foreground)
If I run it with:
script.py > out 2> err &
It runs in the background alright, but it gets stopped as soon as the input call is reached.
What I would like is:
be able to redirect stdout and stderr to the filesystem
be able to put the script in the background
be able to move it to the foreground and interact with the menu normally (so stdin must be enabled somehow). stdout and stderr would still be redirected to the filesystem, but stdin would behave normally.
the script must run fine in the background and in the foreground (of course, the menu is not working in the background, because stdin is "frozen")
Basically, what I would like is that when it is in the background, stdin is kind of "frozen", and whenever it comes to the foreground it works again normally.
Is this possible? The solution does not need to involve nohup.
What you want (given how input works and fails on EOF under Python) with an interactive menu means that you cannot safely pass a file as stdin when invoking your program. This means your only option is to invoke it like so:
$ script.py > out 2> err &
As a demonstration, this is my script:
from time import sleep
import sys

c = 0
while True:
    sleep(0.001)
    c += 1
    if c % 1000 == 0:
        print(c, flush=True)
    if c % 2000 == 0:
        print(c, file=sys.stderr, flush=True)
    if c % 10000 == 0:
        answer = input('? ')
        print('The answer is %s' % answer, flush=True)
Essentially, every second it will write to stdout, every two seconds it will write to stderr, and every ten seconds it will wait for input. If I run this, wait a couple of seconds (to allow the disk flush), and chain it together like so:
$ python script.py > out 2> err & sleep 2.5; cat out err
[1] 32123
1000
2000
2000
$
Wait at least 10 seconds and try cat out err again:
$ cat out err
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
? 2000
4000
6000
8000
10000
[1]+ Stopped python script.py > out 2> err
$
Note that the prompt generated by input is also written to stdout, and the program effectively continued running up to the point where it expects stdin to give it data. You simply have to bring the process back into the foreground with %, start feeding it the required data, then suspend it with ^Z (Ctrl-Z) and keep it running in the background again with %&. Example:
$ %
python script.py > out 2> err
Test input
^Z
[1]+ Stopped python script.py > out 2> err
$ %&
[1]+ python script.py > out 2> err &
$
Now cat out again, after waiting another ten seconds:
$ cat out
1000
...
10000
? The answer is Test input
11000
...
20000
?
[1]+ Stopped python script.py > out 2> err
$
This is essentially a basic crash course in how standard processes typically function in both the foreground and the background, and things simply work as intended if the code handles standard IO correctly.
Lastly, you can't really have it both ways. If the application expects stdin and none is provided, the clear outcome is failure. If stdin is provided but the application is sent to the background and keeps running, it will be Stopped when it expects further input. If this stopped behaviour is unwanted, the application is at fault; there is nothing to be done but to change the application so that it does not error out when EOF is encountered with /dev/null as its stdin. If you want to keep stdin as is, with the application somehow able to keep running while in the background, you cannot use the input function, as it will block when stdin is empty (resulting in the process being stopped).
Now that you have clarified via the comment below that your "interactive prompt" is running in a thread, and since input reads directly from stdin and you seem unwilling to modify your program (you are asking for the general case) but expect a utility to do this for you, the simple solution is to execute it within a tmux or screen session, as they fully implement a pseudo tty that is independent of whichever console started it (so you can disconnect and send the session to the background, or start other virtual sessions; see the manual pages), which will provide the stdio the program expects.
Finally, if you actually want your application to support this natively, you cannot simply use input as is; you should either check whether input can safely be called (perhaps making use of select), or check whether the process is currently in the foreground or the background (an example you could start from is How to detect if python script is being run as a background process, although you might want to check using sys.stdin instead) to determine whether input can safely be called (however, if the user suspends the task just as input arrives, it would still hang while input waits), or use Unix sockets for communication.
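As a rough illustration of that last idea (a sketch, not a drop-in fix for the threaded menu), the process can check whether it owns the terminal's foreground process group before calling input:
import os
import sys

def in_foreground():
    # compare the terminal's foreground process group with our own;
    # if there is no controlling terminal at all, report background
    try:
        return os.tcgetpgrp(sys.stdin.fileno()) == os.getpgrp()
    except OSError:
        return False

if in_foreground():
    answer = input('? ')
else:
    answer = None  # in the background: skip the prompt instead of blocking or stopping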

Print to terminal window in python

I am trying to debug my code and see whether I am entering a loop when I should. I am running this on a Linux server; the Python script gets executed via a website. I want to print something to the terminal window to let me know I entered the loop. I have been trying:
proc = subprocess.Popen(["echo" + "Statement"])
How do you print to the terminal window through a Python script?
A casual print() should do it, in case you run the server from the terminal window.
Otherwise, you need to pipe the output somewhere, probably to a file, like this
import sys
sys.stdout = open('/tmp/server.log', 'w')
and then in your terminal window do
tail -f /tmp/server.log
which will follow the file and print any new lines, giving the effect you wanted.
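A variant of the same idea that avoids reassigning sys.stdout for the whole process (the path is only an example) is a small helper that appends debug lines to the log file, which you then watch with the same tail -f:
def debug(msg, path='/tmp/server.log'):
    # append and flush so `tail -f` sees the line immediately
    with open(path, 'a') as f:
        print(msg, file=f, flush=True)

debug('entered the loop')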
