Related to Does Python logging write to stdout or stderr by default?, but I want to know WHY it defaults to stderr instead of WHAT the default is.
For me it isn't really obvious why stderr is the default. I noticed that something was wrong when I ran python script.py | tee out.log and ended up with an empty log file. Now I know that it can be solved either by python script.py 2>&1 | tee out.log or by using the stream parameter:
logging.basicConfig(stream=sys.stdout)
After that experience it seems reasonable to me to change the default stream to stdout in each script to avoid being surprised again. Is that a good practice? Will I miss something?
You probably don't want to change the default to stdout.
The reason is that stderr is meant as the output for all messages that are internal and/or debugging information, such as logs and errors, rather than the actual output, i.e. the result of the program.
A computer program basically takes data as input and creates data as output.
While a program is running, one may want to be informed of its actions through monitoring, without altering its output data. To that end, computer programs traditionally have two outputs: one standard output (stdout) for data, and another one for monitoring (logging and errors): stderr.
This is especially useful when your program is pipeable: you wouldn't want error messages in between video pixels, for example, because they would break your data.
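To see the separation concretely, here is a small sketch that runs a child Python process and captures the two streams separately (the child's messages are made up for illustration):

```python
import subprocess
import sys

# Demonstrates the split: logging goes to stderr by default, while
# the program's real result goes to stdout, so a pipe only sees the data.
child_code = (
    "import logging\n"
    "logging.basicConfig(level=logging.INFO)\n"
    "logging.info('progress message')\n"
    "print('actual result')\n"
)

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
)

print("stdout:", repr(proc.stdout))  # only the program's result
print("stderr:", repr(proc.stderr))  # the log message
```

A `| tee out.log` on such a program captures only the result, while the progress messages still reach the terminal via stderr.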
I have a Python script running on a server through SSH with the following command:
nohup python3 python_script.py >> output.txt
It was running for a very long time and (probably) created useful output, so I want to force it to stop now. How do I make it write the output it has so far, into the output.txt file?
The file was automatically created when I started running the script, but the size is zero (nothing has been written in it so far).
As Robert said in his comment, check that the output you are expecting to go to the file is actually making it there and not to stderr. If the process has already been running for a long time without any response or writes to your output file, I think there are 3 options:
It is generating output but it's not going where you expect (Robert's response)
It is generating output but it's buffered and for some reason has yet to be flushed
It hasn't generated any output
Option 3 is easy: wait longer. Options 1 and 2 are a little trickier. If you are expecting a significant amount of output from this process, you could check the memory consumption of the Python instance running your script and see if it's growing or very large. You could also use lsof to see if it has the file open and get some idea of what it's doing with it.
If you find that your output appears to be going somewhere like /dev/null, take a look at this answer on redirecting output for an existing process.
In the unlikely event that you have a huge buffer that hasn't been flushed, you could try using ps to get the PID and then kill -STOP [PID] to pause the process and see what you can find with GDB.
Unless it would be extremely painful, I would probably just restart the whole thing, but add periodic flushing to the script, and maybe some extra status reporting so you can tell what is going on. It might help too if you could post your code, since there may be other options available in your situation depending on how the program is written.
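For the restart route, here is a minimal sketch of the periodic flushing mentioned above; the file name output.txt matches the question, but the work loop itself is hypothetical (you could also run the script with python3 -u to disable buffering entirely):

```python
import time

# Sketch of the restart-with-flushing approach: write progress to the
# log file and flush after each step so the file fills up while the
# program runs, not only when it exits.
with open("output.txt", "w") as log:
    for step in range(3):
        # ... do some real work here ...
        log.write("step %d done\n" % step)
        log.flush()  # push Python's buffer to the OS right away
        # os.fsync(log.fileno())  # optionally force it to disk as well
        time.sleep(0.01)
```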
I want to write a Python script that opens an *.exe file (a CMD console application) and
communicates with it by sending input and reading output (for example via stdin, stdout) many times.
I tried it with communicate(), but it closes the pipe after I send the first input (communicate(input='\n')),
so it works for me only once.
Then I tried it again via p.stdin.readline(), but I can only read line by line. When I read a newline, the process
terminates (which is not what I need).
I just want to start a program, read the output and send an input to it, then wait until the next output and send
a new input to it, and so on....
Is there a good way to do it? Does anybody have an example or a similar problem that is solved?
I need the same thing as you. I am actually trying to use pexpect:
https://pexpect.readthedocs.io/en/stable/index.html
after having no success with subprocess.
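For what it's worth, repeated request/response exchanges can also work with subprocess alone, as long as neither side closes the pipe and both flush after writing. A minimal sketch, where the echoing child is hypothetical and stands in for the real console application:

```python
import subprocess
import sys

# A hypothetical child that answers one line per input line, standing
# in for the real console application.
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write('got: ' + line)\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

replies = []
for msg in ("hello", "world"):
    p.stdin.write(msg + "\n")
    p.stdin.flush()  # send without closing the pipe (unlike communicate())
    replies.append(p.stdout.readline().rstrip("\n"))

p.stdin.close()
p.wait()
print(replies)
```

The catch is that this only works when the child flushes its own output and answers a predictable amount per request; for programs that don't, pexpect (which uses a pseudo-terminal) is the more robust tool.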
I read How do you create a daemon in Python? and also this topic, and tried to write a very simple daemon :
import daemon
import time

with daemon.DaemonContext():
    while True:
        with open('a.txt', 'a') as f:
            f.write('Hi')
        time.sleep(2)
Doing python script.py works and returns immediately to terminal (that's the expected behaviour). But a.txt is never written and I don't get any error message. What's wrong with this simple daemon?
daemon.DaemonContext() has a working_directory option whose default value is /, so your program probably doesn't have permission to create a new file there.
The problem described here is solved by J.J. Hakala's answer.
Two additional (important) things :
Sander's code (mentioned here) is better than python-daemon. It is more reliable. Just one example: try to start the same daemon twice with python-daemon: big ugly error. With Sander's code: a nice notice, "Daemon already running."
For those who want to use python-daemon anyway: DaemonContext() only makes a daemon. DaemonRunner() makes a daemon plus a control tool, allowing you to run python script.py start, stop, etc.
One thing that's wrong with it is that it has no way to tell you what's wrong with it :-)
A daemon process is, by definition, detached from the parent process and from any controlling terminal. So if it's got something to say – such as error messages – it will need to arrange that before becoming a daemon.
From the python-daemon FAQ document:
Why does the output stop after opening the daemon context?
The specified behaviour in PEP 3143 includes the requirement to
detach the process from the controlling terminal (to allow the process
to continue to run as a daemon), and to close all file descriptors not
known to be safe once detached (to ensure any files that continue to
be used are under the control of the daemon process).
If you want the process to generate output via the system streams
‘sys.stdout’ and ‘sys.stderr’, set the ‘DaemonContext’'s ‘stdout’
and/or ‘stderr’ options to a file-like object (e.g. the ‘stream’
attribute of a ‘logging.Handler’ instance). If these objects have file
descriptors, they will be preserved when the daemon context opens.
Set up a working channel of communication, such as a log file. Ensure the files you open aren't closed along with everything else, using the files_preserve option. Then log any errors to that channel.
I'm making a webshell in Python, so actually, the user will use his favourite shell through a web server. My idea is to create a subprocess.Popen with bash -i and to write two functions, read and write, in the webapp that, respectively, read the stdout or write to the stdin of the subprocess.
I start the shell with:
p = subprocess.Popen(["script","-f","-c","bash -i -l"],stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
The writing is OK, but when I read the standard output I don't see what the user is typing:
while select.select([p.stdout],[],[],0)[0]!=[] or select.select([p.stderr],[],[],0)[0]!=[]:
    if select.select([p.stdout],[],[],0)[0]!=[]: data+=p.stdout.read(1)
    if select.select([p.stderr],[],[],0)[0]!=[]: data+=p.stderr.read(1)
I could force the echoing by adding the user input to the output, but that's not very elegant because, if the user runs a program that prevents echoing (like a password prompt), the input would still be shown in the web page.
So, is there a way, like an option in the bash parameters, to force it to add the input to the output?
PS: If you wonder why I'm using script to run bash, it's because running bash alone causes python to stop itself with
[1]+ Stopped python RPCServer.py
Although I've not figured out WHY it happens, I've found how to prevent it in this question: Main Python Process is stopped using subprocess calls in SocketServer
So, is there a way, like an option in the bash parameters, to force it to add the input to the output?
Yes: you can use the -v command-line option ("Print shell input lines as they are read.").
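A quick way to see the effect, assuming bash is available (here run non-interactively with a canned command): with -v, each input line is echoed back on stderr as the shell reads it, so it stays separate from the command's own output.

```python
import subprocess

# Run bash with -v so each input line is printed back (to stderr)
# as the shell reads it, separate from the command's real output.
p = subprocess.run(
    ["bash", "-v"],
    input="echo hello\n",
    capture_output=True,
    text=True,
)

print("stdout:", repr(p.stdout))  # the command's output
print("stderr:", repr(p.stderr))  # the echoed input line
```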
I am writing a Python program which runs a virtual terminal. Currently I am launching it like so:
import pexpect, thread

def create_input(child, scrollers, textlength=80, height=12):
    while 1:
        newtext = child.readline()
        print newtext

child = pexpect.spawn("bash", timeout=30000)
thread.start_new_thread(create_input,(child))
This works, and I can send commands to it via child.send(command). However, I only get entire lines as output. This means that if I launch something like Nano or Links, I don't receive any output until the process has completed. I also can't see what I'm typing until I press enter. Is there any way to read the individual characters as bash outputs them?
You would need to change the output of whatever program bash is running to be unbuffered instead of line-buffered. A good number of programs have a command-line option for unbuffered output.
The expect project has a tool called unbuffer that looks like it can give you all bash output unbuffered. I have personally never used it, but there are other answers here on SO that also recommend it: bash: force exec'd process to have unbuffered stdout
The problem lies in something else. If you open an interactive shell, normally a terminal window is opened that runs bash, sh, csh or whatever. Note the word terminal!
In the old days, we connected a terminal to a serial port (telnet does the same but over IP); again, the word terminal.
Even a dumb terminal responds to ESC codes, to report its type and to set the cursor position, colors, clear the screen, etc.
So you are starting a subprocess with interactive output, but in this setup there is no way to tell the shell and its subprocesses that they are attached to a terminal, other than with bash startup parameters, if any exist.
I suggest you enable telnetd, but only on localhost (127.0.0.1).
Within your program, make a socket and connect to localhost:telnet, and look up how to emulate a proper terminal. If a program is in line mode you are fine, but if you go to full-screen editing, somewhere you will need an array of 80x24 or 132x24 or whatever size you want, to store its characters and colors. You also need to be able to shift lines up in that array.
I have not looked, but I cannot imagine there is no telnet client example in Python, and a terminal emulator must exist too!
Another great thing is that telnet sessions clean up if the IP connection is lost, eliminating ghost processes.
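A lighter-weight alternative on a Unix host is the standard-library pty module: give the child a pseudo-terminal and it behaves as if attached to a real terminal (interactive buffering, echo, ESC codes), without any telnetd. A minimal sketch with a trivial child command standing in for bash:

```python
import os
import pty
import subprocess

# Give the child its own pseudo-terminal so it believes it is talking
# to a real terminal (Unix only; the child command here is trivial).
master, slave = pty.openpty()
p = subprocess.Popen(
    ["echo", "hi from a pty"],
    stdin=slave,
    stdout=slave,
    stderr=slave,
    close_fds=True,
)
os.close(slave)   # keep only the master end in the parent
p.wait()

data = os.read(master, 1024).decode()
os.close(master)
print(repr(data))  # note the \r\n: the pty translates newlines
```

Your webapp would then read from and write to the master end; for full-screen programs you still need the screen-array emulation described above.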
Martijn