Make Python script write output to .txt file after force quit

I have a Python script running on a server through SSH with the following command:
nohup python3 python_script.py >> output.txt
It was running for a very long time and (probably) created useful output, so I want to force it to stop now. How do I make it write the output it has so far, into the output.txt file?
The file was automatically created when I started running the script, but the size is zero (nothing has been written in it so far).

As Robert said in his comment, check that the output you are expecting to go to the file is actually making it there and not to stderr. If the process is already running and has been for a long time without any response or writes to your output file, I think there are three options:
1. It is generating output, but it's not going where you expect it (Robert's response).
2. It is generating output, but it's buffered and for some reason hasn't been flushed yet.
3. It hasn't generated any output.
Option 3 is easy: wait longer. Options 1 and 2 are a little trickier. If you are expecting a significant amount of output from this process, you could check the memory consumption of the Python instance running your script and see if it's growing or very large. You could also use lsof to see if it has the file open and to get some idea of what it's doing with it.
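For the lsof check, a quick sketch you could run from another session (it assumes lsof is installed and that the redirect target is named output.txt):

import subprocess

# list any processes that currently have output.txt open (requires lsof);
# an empty result means nothing is holding the file open right now
result = subprocess.run(["lsof", "output.txt"], capture_output=True, text=True)
print(result.stdout or "no process has output.txt open")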
If you find that your output appears to be going somewhere like /dev/null, take a look at this answer on redirecting output for an existing process.
In the unlikely event that you have a huge buffer that hasn't been flushed, you could try using ps to get the PID and then use kill -STOP [PID] to pause the process and see where you can get using GDB.
Unless it would be extremely painful, I would probably just restart the whole thing, but add periodic flushing to the script, and maybe some extra status reporting so you can tell what is going on. It might help too if you could post your code, since there may be other options available in your situation depending on how the program is written.
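If you do restart it, here is a minimal sketch of the flushing idea (do_work() below is just a hypothetical placeholder for whatever the real script computes):

import sys
import time

def do_work(step):
    # placeholder for the real computation
    return "finished step %d" % step

step = 0
while True:
    print(do_work(step))
    # flush periodically so `nohup python3 script.py >> output.txt`
    # actually sees the results, even if the process is killed later
    sys.stdout.flush()
    step += 1
    time.sleep(60)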


Why does Python logging write to stderr by default?

This is related to Does Python logging write to stdout or stderr by default?, but I want to know WHY it defaults to stderr rather than WHAT the default is.
For me it isn't really obvious why stderr is the default. I noticed that something was wrong when I ran python script.py | tee out.log and ended up with an empty log file. Now I know that it can be solved either with python script.py 2>&1 | tee out.log or by using the stream parameter:
logging.basicConfig(stream=sys.stdout)
After that, it seems reasonable to me to change the default stream to stdout in every script to avoid being surprised again. Is that good practice? Will I miss something?
You probably don't want to change the default to stdout.
The reason is that stderr is meant to carry messages that are internal and/or debugging information, such as logs and errors, rather than the actual output, i.e. the result of the program.
A computer program basically takes data as input and creates data as output.
While a program is running, you may want to be informed of its actions through monitoring, without altering its output data. To allow this, computer programs traditionally have two output streams: one standard output for data, and another one for monitoring (logging and errors): stderr.
This is especially useful when your program is pipeable: you wouldn't want error messages in between video pixels, for example, because they would break your data.
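As a minimal illustration of that split (the values here are made up): keep logging on stderr and write only the program's real output to stdout, so piping still works.

import logging
import sys

# diagnostics stay on stderr (logging's default), data goes to stdout,
# so `python script.py | tee out.log` receives only the actual results
logging.basicConfig(level=logging.INFO)  # stream defaults to sys.stderr

logging.info("starting run")   # monitoring -> stderr
sys.stdout.write("42\n")       # program output -> stdout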

Python daemon shows no output for an error

I read How do you create a daemon in Python? and also this topic, and tried to write a very simple daemon:
import daemon
import time
with daemon.DaemonContext():
    while True:
        with open('a.txt', 'a') as f:
            f.write('Hi')
        time.sleep(2)
Doing python script.py works and returns immediately to the terminal (that's the expected behaviour). But a.txt is never written and I don't get any error message. What's wrong with this simple daemon?
daemon.DaemonContext() has a working_directory option whose default value is /, i.e. your program probably doesn't have permission to create a new file there.
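For example (a sketch; the directory below is just a placeholder), pass an explicit working_directory, or open the file with an absolute path:

import daemon
import time

# explicit working_directory (placeholder path) so a.txt isn't created under /
with daemon.DaemonContext(working_directory='/home/user/daemon-test'):
    while True:
        with open('a.txt', 'a') as f:
            f.write('Hi')
        time.sleep(2)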
The problem described here is solved by J.J. Hakala's answer.
Two additional (important) things:
Sander's code (mentioned here) is better than python-daemon. It is more reliable. Just one example: try starting the same daemon twice with python-daemon: big ugly error. With Sander's code: a nice notice, "Daemon already running."
For those who want to use python-daemon anyway: DaemonContext() only makes a daemon. DaemonRunner() makes a daemon plus a control tool, allowing you to run python script.py start, stop, etc.
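A rough sketch of the DaemonRunner pattern, assuming the older daemon.runner module is available and using placeholder paths throughout:

import time
from daemon import runner

class App(object):
    def __init__(self):
        # all paths below are placeholders
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/null'
        self.stderr_path = '/dev/null'
        self.pidfile_path = '/tmp/mydaemon.pid'
        self.pidfile_timeout = 5

    def run(self):
        while True:
            time.sleep(10)   # the daemon's real work goes here

# `python script.py start`, `stop`, `restart` are dispatched by do_action()
runner.DaemonRunner(App()).do_action()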
One thing that's wrong with it is that it has no way to tell you what's wrong with it :-)
A daemon process is, by definition, detached from the parent process and from any controlling terminal. So if it's got something to say – such as error messages – it will need to arrange that before becoming a daemon.
From the python-daemon FAQ document:
Why does the output stop after opening the daemon context?
The specified behaviour in PEP 3143 includes the requirement to
detach the process from the controlling terminal (to allow the process
to continue to run as a daemon), and to close all file descriptors not
known to be safe once detached (to ensure any files that continue to
be used are under the control of the daemon process).
If you want the process to generate output via the system streams
‘sys.stdout’ and ‘sys.stderr’, set the ‘DaemonContext’'s ‘stdout’
and/or ‘stderr’ options to a file-like object (e.g. the ‘stream’
attribute of a ‘logging.Handler’ instance). If these objects have file
descriptors, they will be preserved when the daemon context opens.
Set up a working channel of communication, such as a log file. Ensure the files you open aren't closed along with everything else, using the files_preserve option. Then log any errors to that channel.
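A minimal sketch of that setup (the log path is a placeholder):

import logging
import daemon

handler = logging.FileHandler('/tmp/mydaemon.log')   # placeholder path
logger = logging.getLogger('mydaemon')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# keep the log file's descriptor open across daemonization
with daemon.DaemonContext(files_preserve=[handler.stream]):
    try:
        logger.info('daemon started')
        # ... real work goes here ...
    except Exception:
        logger.exception('daemon failed')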

How to check if PDF printing is finished on the Linux command line

I have a bunch of files that I need to print via a PDF printer, and after a file is printed I need to perform additional tasks, but only once printing has actually completed.
So to do this from my Python script I call the command "lpr path/to/file.doc -P PDF"
But this command returns 0 immediately, and I have no way to track when the printing process has finished, whether it was successful, etc.
There is an option to send an email when printing is done, but waiting for an email after I start printing looks very hacky to me.
Do you have some ideas how to get this done?
Edit 1
There are plenty of ways to check whether the printer is printing something at the current moment. So right after I start printing, I run the lpq command every 0.5 seconds to find out whether it is still printing. But this doesn't look like the best way to do it to me. I want to be alerted, or something similar, when the actual printing process has finished, and to know whether it was successful, etc.
If you have CUPS, you can use the System V-compatible lp instead of lpr. This prints, on stdout, a job id, e.g.
request id is PDF-5 (1 file(s))
(this is for the virtual printer cups-pdf). You can then grep for this id in the output of lpstat:
lpstat | grep '^PDF-5 '
If that produces no output, then your job is done. lpstat -l produces more status information, but its output will also be a bit harder to parse.
Obviously, there are cleaner Python solutions than running this actual shell code. Unfortunately, I couldn't find a way to check the status of a single job without plowing through the list of jobs.
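For what it's worth, here is a rough Python sketch of that polling approach (it assumes a CUPS destination named PDF and the "request id is ..." output format shown above):

import subprocess
import time

def wait_for_job(job_id, poll_interval=0.5):
    """Poll lpstat until the given job id no longer shows up."""
    while True:
        jobs = subprocess.run(["lpstat"], capture_output=True, text=True).stdout
        if not any(line.startswith(job_id + " ") for line in jobs.splitlines()):
            return
        time.sleep(poll_interval)

# submit the job with lp and parse "request id is PDF-5 (1 file(s))"
submit = subprocess.run(["lp", "-d", "PDF", "path/to/file.doc"],
                        capture_output=True, text=True)
job_id = submit.stdout.split()[3]          # e.g. "PDF-5"
wait_for_job(job_id)
print("printing of %s finished" % job_id)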
You can check the state of the printer using the lpstat command (man lpstat). To wait for a process to finish, get the PID of the process and pass it to the wait command as an argument.

How do I read terminal output in real time from Python?

I am writing a Python program which runs a virtual terminal. Currently I am launching it like so:
import pexpect, thread
def create_input(child, scrollers, textlength=80, height=12):
    while 1:
        newtext = child.readline()
        print newtext
child = pexpect.spawn("bash", timeout=30000)
thread.start_new_thread(create_input,(child))
This works, and I can send commands to it via child.send(command). However, I only get entire lines as output. This means that if I launch something like Nano or Links, I don't receive any output until the process has completed. I also can't see what I'm typing until I press enter. Is there any way to read the individual characters as bash outputs them?
You would need to change the output of whatever program bash is running to be unbuffered instead of line buffered. A good number of programs have a command-line option for unbuffered output.
The expect project has a tool called unbuffer that looks like it can give you all bash output unbuffered. I have personally never used it, but there are other answers here on SO that also recommend it: bash: force exec'd process to have unbuffered stdout
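For illustration, a rough sketch that wraps the command with unbuffer (assuming the expect package is installed) and reads whatever bytes are available, via read_nonblocking, instead of whole lines:

import threading
import pexpect

# wrap bash with `unbuffer` so child programs don't line-buffer their output
child = pexpect.spawn("unbuffer bash", timeout=30000)

def create_input(child):
    while True:
        try:
            # grab up to 1024 bytes as soon as they arrive (0.1 s poll)
            print(child.read_nonblocking(size=1024, timeout=0.1))
        except pexpect.TIMEOUT:
            continue    # nothing new yet, keep polling
        except pexpect.EOF:
            break       # bash exited

threading.Thread(target=create_input, args=(child,), daemon=True).start()
child.send("echo hello\n")   # commands can still be sent from the main thread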
The problem lies in something else. If you open an interactive shell, normally a terminal window is opened that runs bash, sh, csh or whatever. Note the word terminal!
In the old days, we connected a terminal to a serial port (telnet does the same, but over IP); again, the word terminal.
Even a dumb terminal responds to ESC codes, to report its type and to set the cursor position, colors, clear the screen, etc.
So you are starting a subprocess with interactive output, but in this setup there is no way to tell the shell and its subprocesses that they are connected to a terminal, other than via bash startup parameters, if there are any.
I suggest you enable telnetd, but only on localhost (127.0.0.1).
Within your program, open a socket and connect to localhost:telnet, and look up how to emulate a proper terminal. If a program is in line mode you are fine, but if you go to full-screen editing, you will need an array of 80x24 or 132x24 or whatever size you want, to store its characters and colors. You also need to be able to shift lines up in that array.
I have not looked, but I cannot imagine there is no telnet client example in Python, and a terminal emulator must exist too!
Another great thing is that telnet sessions clean up if the IP connection is lost, eliminating ghost processes.
Martijn

What would happen if I abruptly close my script while it's still doing file I/O operations?

Here's my question: I'm writing a script to check whether my website is running all right. The basic idea is to get the server response time and similar stuff every 5 minutes or so, and the script will log the info each time after checking the server status. I know it's no good to close the script while it's in the middle of checking/writing logs, but I'm curious: if there are lots of servers to check and you have to do the file I/O pretty frequently, what would happen if you abruptly closed the script?
OK, here's an example:
import time

while True:
    DemoFile = open("DemoFile.txt", "a")
    DemoFile.write("This is a test!")
    DemoFile.close()
    time.sleep(30)
If I accidentally close the script while the line DemoFile.write("This is a test!") is running, what would I get in DemoFile.txt? Do I get "This i" (an incomplete line), the complete line, or the line not even added?
Hopefully somebody knows the answer.
According to the Python io documentation, buffering is controlled by the buffering parameter of the open function.
The default behavior in this case would be either the device's block size or io.DEFAULT_BUFFER_SIZE if the block size can't be determined. This is probably something like 4096 bytes.
In short, that example will write nothing. If you were writing something long enough that the buffer was written once or twice, you'd have multiples of the buffer size written. And you can always manually flush the buffer with flush().
(If you specify buffering as 0 and the file mode as binary, you'd get "This i". That's the only way, though)
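For completeness, a small sketch of the variant that survives an abrupt kill more gracefully: flush (and optionally fsync) right after each write.

import os
import time

while True:
    with open("DemoFile.txt", "a") as f:
        f.write("This is a test!\n")
        f.flush()               # push Python's buffer down to the OS
        os.fsync(f.fileno())    # optionally force it onto the disk as well
    time.sleep(30)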
As #sven pointed out, python isn't doing the buffering. When the program is terminated, all open file descriptors are closed and flushed by the operating system.
