Linux tee is not working with Python?

I made a Python script which communicates with a web server in an infinite loop.
I want to log all of the communication data to a file and also monitor it from the terminal at the same time, so I used the tee command like this:
python client.py | tee logfile
However, I got nothing in the terminal and nothing in the logfile.
The Python script itself is working fine.
What is happening here?
Am I missing something?
Any advice would be appreciated.
Thank you in advance.

From man python:
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
So what you can do is:
/usr/bin/python -u client.py >> logfile 2>&1
Or using tee:
python -u client.py | tee logfile
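To see why -u matters here: the buffering problem only appears when stdout is a pipe (as it is with tee). The sketch below is a hypothetical stand-in for client.py, since the real script's contents aren't shown in the question; it just illustrates why nothing reaches tee without -u.
# client.py -- hypothetical stand-in for the script in the question
import time

while True:
    # With stdout connected to a terminal this line appears immediately
    # (line buffering), but with stdout connected to a pipe
    # (python client.py | tee logfile) CPython block-buffers it, so the
    # text sits in the buffer until it fills or the process exits --
    # which an infinite loop never does.
    print("got data from the web server")
    time.sleep(1)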

Instead of making it fully unbuffered, you can make it line-buffered (as it normally is when writing to a terminal) with sys.stdout.reconfigure(line_buffering=True) (after import sys, of course).
This was added in Python 3.7; docs: https://docs.python.org/3/library/io.html#io.TextIOWrapper.reconfigure
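A minimal sketch of that approach (Python 3.7+), assuming the script writes with plain print calls:
import sys

# Put stdout back into line-buffered mode even when it is a pipe,
# so each completed line reaches tee (and the logfile) immediately.
sys.stdout.reconfigure(line_buffering=True)

print("this line shows up in the terminal and in logfile right away")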

Related

How to set file buffering parameters?

I am running a long, time-consuming number-crunching process in the shell with a Python script. In the script, to indicate progress, I have inserted occasional print commands like
#!/usr/bin/env python3
#encoding:utf-8
print('Stage 1 completed')
I am triggering the script in the shell by
user@hostname:~/WorkingDirectory$ chmod 744 myscript.py && nohup ./myscript.py&
It redirects the output to nohup.out, but I cannot see the output until the entire script is done, probably because of stdout buffering. So in this scenario, how do I adjust the buffering parameters to check the progress periodically? Basically, I want zero buffering, so that as soon as a print command is issued in the Python script, it will appear in nohup.out. Is that possible?
I know it is a rookie question, and in addition to the exact solution, any easy-to-follow reference to the relevant material (which will help me master the buffering aspects of the shell without getting into deeper kernel or hardware levels) will be greatly appreciated too.
If it is important, I am using #54~16.04.1-Ubuntu on x86_64
Python is optimised for reading in and printing out lots of data, so the standard input and output of the Python interpreter are buffered by default.
We can override this behavior in a few ways:
Use the python interpreter with the -u option.
From man python:
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
Run script in shell:
nohup python -u ./myscript.py&
Or modify the shebang line of the script to #!/usr/bin/python -u and then run:
nohup ./myscript.py&
Use the shell command stdbuf to turn off stream buffering.
See man stdbuf.
Set an unbuffered stream for output:
stdbuf --output=0 nohup ./myscript.py&
Set unbuffered streams for output and errors:
stdbuf -o0 -e0 nohup ./myscript.py&
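For completeness (this is not part of the answer above, just a common pure-Python alternative): you can also flush explicitly inside the script itself, so each progress line reaches nohup.out immediately regardless of how the script is launched.
#!/usr/bin/env python3
# encoding: utf-8

# flush=True pushes the progress line out of the stdout buffer right away,
# even when stdout is redirected to nohup.out (available since Python 3.3).
print('Stage 1 completed', flush=True)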

Python, subprocess.check_call() and pipes redirection

Why am I getting a list of files when executing this command?
subprocess.check_call("time ls &>/dev/null", shell=True)
If I paste
time ls &>/dev/null
into the console, I just get the timings.
OS is Linux Ubuntu.
On debian-like systems, the default shell is dash, not bash. Dash does not support the &> shortcut. To get only the subprocess return code, try:
subprocess.check_call("time ls >/dev/null 2>&1", shell=True)
To get subprocess return code and the timing information but not the directory listing, use:
subprocess.check_call("time ls >/dev/null", shell=True)
Minus, of course, the subprocess return code, this is the same behavior that you would see on the dash command prompt.
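If you want to confirm which shell shell=True actually uses on your system before relying on &>, a quick check (assuming a Debian/Ubuntu-like layout, where /bin/sh is typically a symlink):
import subprocess

# With shell=True, subprocess runs the command through /bin/sh.
# On Debian/Ubuntu this usually resolves to dash, which does not
# understand bash's &> shortcut.
subprocess.check_call('readlink -f /bin/sh', shell=True)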
The Python version is running under sh, but the console version is running in whatever your default shell is, which is probably either bash or dash. (Your sh may actually be a different shell running in POSIX-compliant mode, but that doesn't make any difference.)
Your interactive shell (most likely bash) has a builtin time, but sh doesn't, so under sh you get /usr/bin/time, which is a normal external program. The most important difference this makes is that the builtin time is not running as a subprocess with its own independent stdout and stderr.
Also, sh, bash, and dash all have different redirection syntax.
But what you're trying to do seems wrong in the first place, and you're just getting lucky on the console because two mistakes are canceling out.
You want to get rid of the stdout of ls but keep the stderr of time, but that's not what you asked for. You're trying to redirect both stdout and stderr: that's what &> means on any shell that actually supports it.
So why are you still getting the time stderr? Either (a) your default shell doesn't support &>, or (b) you're using the builtin instead of the program, and you're not redirecting the stderr of the shell itself, or maybe (c) both of the above.
If you really want to do exactly the same thing in Python, with the exact same bugs canceling out in the exact same way, you can run your default shell manually instead of using shell=True. Depending on which reason it was working, that would be either this:
subprocess.check_call([os.environ['SHELL'], '-c', 'time ls &> /dev/null'])
or this:
subprocess.check_call('{} -c "time ls &> /dev/null"'.format(os.environ['SHELL']), shell=True)
But really, why are you doing this at all? If you want to redirect stdout and not stderr, write that:
subprocess.check_call('time ls > /dev/null', shell=True)
Or, better yet, why are you even using the shell in the first place?
subprocess.check_call(['time', 'ls'], stdout=subprocess.DEVNULL)
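If all you need is the elapsed time and not the shell's time output at all, another option (not from the original answer, just a sketch) is to measure it in Python and discard the listing:
import subprocess
import time

start = time.perf_counter()
# Throw away the listing; we only care about how long it takes.
subprocess.check_call(['ls'], stdout=subprocess.DEVNULL)
print('ls took {:.3f} seconds'.format(time.perf_counter() - start))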

How to issue a command from the command line of a process running on Linux

Let's say I issue a command from the Linux command line. This will cause Linux to create a new process, and let's say that the process expects to receive commands from the user.
For example, I will run a Python script test.py which will accept commands from the user.
$python test.py
TEST>addController(192.168.56.101)
Controller added
TEST>
The question I have is: can I write a script which will go to the command line (TEST>) and issue a command? As far as I know, if I write a script to run multiple commands, it will wait for the first process to exit before running the next command.
Regards,
Vinay Pai B.H.
You should look into expect. It's a tool that is designed to automate user interaction with commands that need it. The man page explains how to use it.
Seems like there is also pexpect, a Python version of similar functionality.
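A minimal pexpect sketch, assuming the prompt is literally "TEST>" as shown above (the command string is taken from the question; adjust both to the real script):
import pexpect

# Start the interactive script and wait for its prompt.
child = pexpect.spawn('python test.py', encoding='utf-8')
child.expect('TEST>')

# Type a command at the prompt, exactly as a user would.
child.sendline('addController(192.168.56.101)')
child.expect('TEST>')
print(child.before)   # everything printed before the next prompt, e.g. "Controller added"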
Assuming the Python script is reading its commands from stdin, you can pass them in with a pipe or a redirection:
$ python test.py <<< 'addController(192.168.56.101)'
$ echo $'addController(192.168.56.101)\nfoo()\nbar()\nbaz()' | python test.py
$ python test.py <<EOF
addController(192.168.56.101)
foo()
bar()
baz()
EOF
If you don't mind waiting for the calls to finish (one at a time) before returning control to your program, you can use the subprocess library. If you want to start something running and not wait for it to finish, you can use the multiprocessing library.
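A sketch of that subprocess route, assuming test.py reads its commands from stdin as in the answer above (command strings copied from the question; Python 3.7+ shown):
import subprocess

# Feed the commands to the script's stdin and wait for it to finish.
result = subprocess.run(
    ['python', 'test.py'],
    input='addController(192.168.56.101)\nfoo()\nbar()\n',
    capture_output=True,
    text=True,
)
print(result.stdout)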

Executing an xmgrace batch with subprocess.Popen()

I have to make graphs from several files with data. I already found a way to run a simple command
xmgrace -batch batch.bfile -nosafe -hardcopy
in which batch.bfile is a text file with grace commands to print the graph I want. I already tried it manually and it works perfectly. To do this with several files I just have to edit one parameter inside batch.bfile and run the same command every time I make a change.
I have already written a Python script which edits batch.bfile and goes through all the data files with a for loop. In each iteration I want to run the above command directly from the command line.
After searching a bit I found two solutions, one with os.system() and another with subprocess.Popen() and I could only make subprocess.Popen() work without giving any errors by writing:
subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)
The problem is, this doesn't do anything in practice, i.e., it just isn't the same as running the command directly on the command line. I already tried giving the full path to batch.bfile, but nothing changed.
I am using Python 2.7 and Mac OS 10.7
Have you checked running xmgrace from the command line using sh? (i.e. invoke /bin/sh, then run xmgrace... which should be the same shell that Popen is using when you set shell=True).
Another solution would be to create a shell script (create a file like myscript.sh, and run chmod +x from the terminal). In the script call xmgrace:
#!/bin/bash
xmgrace -batch batch.bfile -nosafe -hardcopy
You could then test that myscript.sh works, which ought to pick up any environment variables set in your profile that might differ from Python's environment. If this works, you could call the script from Python with subprocess.Popen('./myscript.sh'). You can check which environment variables are set in Python for subprocess by running:
import os
os.environ
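Related to the environment point, a sketch only (the working directory below is a placeholder, not from the question): Popen starts in the Python process's current directory, so if batch.bfile lives somewhere else it won't be found. You can pass cwd, and an explicit env, if needed:
import os
import subprocess

# Run xmgrace from the directory that actually contains batch.bfile,
# with an explicitly inherited environment.
p = subprocess.Popen('xmgrace -batch batch.bfile -nosafe -hardcopy',
                     shell=True,
                     cwd='/path/to/batch/files',   # hypothetical path
                     env=os.environ.copy())
p.wait()   # block until xmgrace has finished before editing batch.bfile again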
You may want to check out http://sourceforge.net/projects/graceplot/
When you use Popen, you can capture the application's stdout and stderr output and print it within your application; this way you can see what is happening:
from subprocess import Popen, PIPE

# reportParameters is the command to run, as a list, for example:
# reportParameters = ['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy']
ps = Popen(reportParameters, bufsize=512, stdout=PIPE, stderr=PIPE)
while 1:
    stdout = ps.stdout.readline()
    stderr = ps.stderr.readline()
    exitcode = ps.poll()
    if (not stdout and not stderr) and (exitcode is not None):
        break
    if stdout:
        stdout = stdout[:-1]   # drop the trailing newline
        print(stdout)
    if stderr:
        stderr = stderr[:-1]
        print(stderr)

Can I control PSFTP from a Python script?

I want to run and control PSFTP from a Python script in order to get log files from a UNIX box onto my Windows machine.
I can start up PSFTP and log in, but when I try to run a command remotely, such as 'cd', it isn't recognised by PSFTP and is just run in the terminal when I close PSFTP.
The code which I am trying to run is as follows:
import os
os.system("<directory> -l <username> -pw <password>" )
os.system("cd <anotherDirectory>")
I was just wondering if this is actually possible, or if there is a better way to do this in Python.
Thanks.
You'll need to run PSFTP as a subprocess and speak directly with the process. os.system spawns a separate subshell each time it's invoked so it doesn't work like typing commands sequentially into a command prompt window. Take a look at the documentation for the standard Python subprocess module. You should be able to accomplish your goal from there. Alternatively, there are a few Python SSH packages available such as paramiko and Twisted. If you're already happy with PSFTP, I'd definitely stick with trying to make it work first though.
Subprocess module hint:
import subprocess

# The following line spawns the psftp process and binds its standard input
# to p.stdin and its standard output to p.stdout
p = subprocess.Popen('psftp -l testuser -pw testpass'.split(),
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)  # text mode, so str can be written
# Send the 'cd some_directory' command to the process as if a user were
# typing it at the command line
p.stdin.write('cd some_directory\n')
p.stdin.flush()  # make sure the command actually reaches psftp
This has sort of been answered in: SFTP in Python? (platform independent)
http://www.lag.net/paramiko/
The advantage of the pure Python approach is that you don't always need psftp installed.
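If you go the paramiko route, here is a minimal sketch of fetching a log file over SFTP (hostname, credentials, and paths are placeholders, not from the question):
import paramiko

# Connect to the UNIX box.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('unix-box.example.com', username='<username>', password='<password>')

# Open an SFTP session and pull the remote log file down to the Windows machine.
sftp = client.open_sftp()
sftp.get('/var/log/myapp.log', r'C:\logs\myapp.log')

sftp.close()
client.close()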
