Write and save a file with nano using subprocess - python

How can I write/append to a file by calling nano using subprocess and have it saved automatically? For example, I have a file and I want to open it and append something at the end of it, so I write:
>>> import tempfile
>>> f = tempfile.NamedTemporaryFile(mode='a')
>>> example = f.name
>>> f.close()
>>> import subprocess
>>> subprocess.call(['nano', example])
Now, once the last line gets executed, the file opens in nano and I can write anything and then save it by hitting Ctrl+O and Ctrl+X.
Instead, I want to send the input through a stdin PIPE and have the file saved by itself, i.e. some mechanism that hits Ctrl+O and Ctrl+X automatically.
Can anyone help me solve this?

A ctrl-O is just a character, same as any other. You can send it by writing '\x0f' (or, in Python 3, b'\x0f').
However, that probably isn't going to do you any good. Most programs that provide an interactive, full-screen UI in the terminal, like nano, cannot be driven by stdin. They need to take control of the terminal, and to do that, they will either check that stdin isatty and then tcsetattr it, or just open /dev/tty directly.
You can deal with this by creating a pseudo-terminal with os.openpty, os.forkpty, or pty.
But it's often easier to use a library like pexpect to deal with interactive programs, GUI or otherwise.
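For instance, here is a minimal pexpect sketch for the nano case (the filename, and the exact prompt text nano prints, are assumptions; prompts vary between nano versions):
import pexpect

child = pexpect.spawn('nano example.txt')   # pexpect gives nano a real pty
child.send('text to append')                # typed straight into the buffer
child.sendcontrol('o')                      # Ctrl+O: write out
child.expect('File Name to Write')          # nano asks to confirm the filename
child.sendcontrol('m')                      # Enter: accept the default name
child.sendcontrol('x')                      # Ctrl+X: exit
child.expect(pexpect.EOF)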
And it's even easier to not try to drive an interactive program in the first place. For example, unlike nano, ed is designed to be driven in "batch mode" by a script, and sed even more so.
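To illustrate, a sketch of appending one line with ed in batch mode (the filename is a placeholder):
import subprocess

# ed script: $a appends after the last line, . ends input, w writes, q quits
script = b'$a\nappended line\n.\nw\nq\n'
subprocess.run(['ed', '-s', 'example.txt'], input=script, check=True)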
And it's even easier to not try to drive a program at all when you're trying to do something that can be just as easily done directly in Python. The easiest way to append something to a file is to open it in 'a' mode and write to it. No need for an external program at all. For example:
new_line = input('What do you want to add?')
with open(fname, 'a') as f:
    f.write(new_line)
If the only reason you were using nano is because you needed something to sudo… there's really no reason for that. You can sudo anything else—like sed, or another Python script—just as easily. Using nano is just making things harder for yourself for absolutely no reason.
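For example, one common pattern is to append through sudo tee -a instead of an editor; this is only a sketch, and the target path is a placeholder:
import subprocess

# tee -a appends its stdin to the file; sudo supplies the write permission
p = subprocess.Popen(['sudo', 'tee', '-a', '/etc/example.conf'],
                     stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
p.communicate(b'new line\n')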
The big question here is: why do you have a file that's not writable by your Python script, but which you want arbitrary remote users to be able to append to? That sounds like a very bad system design. You make files non-writable because you want to restrict normal users from modifying them; if you want your Python script to be able to modify it on behalf of your remote users, why isn't it owned by the same user that the script runs as?

In the (unlikely) event that you still find that you need to control nano or some other interactive program from a Python process, I'm going to suggest the same thing here that I suggested for this question: Using python subprocess.call() to launch an ncurses process ...
... don't use subprocess for controlling curses/full-screen interactive processes. Use pexpect. That's what it's for.
(On the other hand, I also agree with the many comments here regarding better ways to work around the permissions issue. Write some sort of script (in Python, bash, sed, or whatever) which can be run under sudo and which can make the in-place edits or appends to your data file directly.)

Related

Vim (macvim): Alternately read input from keyboard and external program

I've got a python program that reads input from a midi device and produces text output derived from the incoming MIDI messages. As a simple example, let's say that it's simply mapping MIDI Note On events to note names e.g. note_on(60) --> 'C'. I'd like to capture the output in real time to a GVIM (actually MacVim) window without losing the ability to edit the output with a computer keyboard, i.e. I need for MacVim to read from both an external program and from the computer keyboard.
What's the cleanest general way to implement that under the assumption that the MIDI reader will never generate output while I'm trying to type and vice-versa? I'd prefer to be able to give the python script a filename and have it start MacVim with that file open, but doing it with shell commands or connecting from within MacVim would also be acceptable.
Based on the answers to How do I read and write repeatedly from a process in vim?, it looks like Vim does not easily support two asynchronous input sources. I'll leave the question open in case someone happens to know an elegant solution, but for now it seems like the best approach is to have my Python program write to a normal file, use 'tail -f' for real-time viewing, and edit afterwards.

Linux: write to stdin of python interpreter process and have that process evaluate input as code

I am running gnu linux (Linux Mint to be specific). The following is my desired workflow:
I open vim in (say) process 1000 and then start up a python interpreter in process 1001.
I write some code in vim and then select certain lines and then write those lines to the file /proc/1001/fd/0.
At this point I would like the python interpreter to interpret this as code and execute it as if it were typed in directly.
This does not work as desired. Instead the text is displayed on the interpreter's screen, but it is not executed (similar to when error messages of subprocesses are displayed in bash). I presume this has something to do with the fact that my workflow isn't playing well with readline (or some sort of equivalent library). Or my problem might just be that the python interpreter was never designed to be used this way (for presumably security and other reasons).
I understand there are many IDEs with similar functionality, but I was hoping that something simple might work. I'm curious if it's something that can be fixed or if there is something fundamental that I'm misunderstanding.
It exists and it's called vim-slime
The only requirement is that you run the Python interpreter inside tmux or screen, or even better: byobu
Installing the vim-slime plugin is easy if you're using vim-pathogen:
cd ~/.vim/bundle
git clone git://github.com/jpalardy/vim-slime.git
See the vim-slime page for configuration details, but if you're using tmux, simply add the following to your .vimrc and re-start Vim:
let g:slime_target = "tmux"
Trying it out
Type in some Python code inside Vim:
def fib():
    a, b = 0, 1
    while 1:
        yield a
        a, b = b, a + b
Then press Ctrl-c-Ctrl-c to tell vim-slime to send the contents of your current buffer to another window. The first time you run it, vim-slime will ask you which screen/tmux window to send it to, but after that, press the key-sequence and it will send it wherever you told it to the first time.
vim-slime is visual-mode aware, too! If you only want to send a few lines to Python, enter visual-line mode with V, highlight the lines you want, and press the same Ctrl-c-Ctrl-c key sequence to send just those lines.

Can I save a text file in python without closing it?

I am writing a program in which I would like to be able to view a log file before the program is complete. I have noticed that, in Python (2.7 and 3), file.write() does not immediately save the file; file.close() does. I don't want to create a million little log files with unique names, but I would like to be able to view the updated log file before the program is finished. How can I do this?
To be clear, I am scripting using Ansys Workbench (trying to batch some CFX runs). Here's a link to a tutorial that shows what I'm talking about. They appear to have wrapped Python, and by running the script I can send commands to the various modules. When the script is running there is no console onscreen, and it appears to be eating all of the print statements, so the only way I can report what's happening is via a file. Also, I don't want to bring a console window up, because eventually I will just run the program in batch mode (no interface). But the simulations take a long time to run, and I can't wait for the program to finish before checking on what's happening.
You would need this:
import os

file.flush()              # push Python's internal buffer out to the OS
os.fsync(file.fileno())   # then force the OS to write its buffers to disk
Check this: http://docs.python.org/2/library/stdtypes.html#file.flush
file.flush()
Flush the internal buffer, like stdio's fflush(). This may be a no-op on some file-like objects.
Note flush() does not necessarily write the file’s data to disk. Use flush() followed by os.fsync() to ensure this behavior.
EDITED: See this question for detailed explanations: what exactly the python's file.flush() is doing?
Does file.flush() after each write help? - Hannu
This will write the file to disk immediately:
file.flush()
os.fsync(file.fileno())
According to the documentation https://docs.python.org/2/library/os.html#os.fsync
Force write of file with filedescriptor fd to disk. On Unix, this calls the native fsync() function; on Windows, the MS _commit() function.
If you’re starting with a Python file object f, first do f.flush(), and then do os.fsync(f.fileno()), to ensure that all internal buffers associated with f are written to disk.
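As a sketch, a writer that makes each log line visible to other programs immediately might look like this (the filename is a placeholder):
import os

with open('run.log', 'a') as log:
    for step in range(3):
        log.write('step %d done\n' % step)
        log.flush()               # move Python's internal buffer to the OS
        os.fsync(log.fileno())    # force the OS to write its cache to disk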

subprocess.call does not wait for the process to complete

Per the Python documentation, subprocess.call should block and wait for the subprocess to complete. In this code I am trying to convert a few xls files to a new format by calling LibreOffice on the command line. I assumed that the call to subprocess.call was blocking, but it seems I need to add an artificial delay after each call, otherwise I miss a few files in the out directory.
What am I doing wrong? And why do I need the delay?
from subprocess import call
for i in range(0, len(sorted_files)):
    args = ['libreoffice', '-headless', '-convert-to', 'xls',
            "%s/%s.xls" % (sorted_files[i]['filename'], sorted_files[i]['filename']),
            '-outdir', 'out']
    call(args)
    var = raw_input("Enter something: ")  # if I comment this line out, I don't get all the files in the out directory
EDIT: It might be hard to find the answer in the comments below. I ended up using unoconv for document conversion, which is blocking and easy to work with from a script.
It's likely that libreoffice is implemented as some sort of daemon/intermediary process. The "daemon" will (effectively[1]) parse the command line and then farm the work off to some other process, possibly detaching it so that it can exit immediately. (Based on the -invisible option in the documentation, I suspect strongly that this is indeed the case here.)
If this is the case, then your subprocess.call does do what it is advertised to do -- it waits for the daemon to complete before moving on. However, it doesn't do what you want, which is to wait for all of the work to be completed. The only option you have in that scenario is to look to see if the daemon has a -wait option or similar.
[1] It is likely that we don't have an actual daemon here, only something which behaves similarly. See the comments by abarnert.
The problem is that the soffice command-line tool (which libreoffice is either just a link to, or a further wrapper around) is just a "controller" for the real program, soffice.bin. It finds a running copy of soffice.bin and/or creates one, tells it to do some work, and then quits.
So, call is doing exactly the right thing: it waits for libreoffice to quit.
But you don't want to wait for libreoffice to quit, you want to wait for soffice.bin to finish doing the work that libreoffice asked it to do.
It looks like what you're trying to do isn't possible to do directly. But it's possible to do indirectly.
The docs say that headless mode:
… allows using the application without user interface.
This special mode can be used when the application is controlled by external clients via the API.
In other words, the app doesn't quit after running some UNO strings/doing some conversions/whatever else you specify on the command line, it sits around waiting for more UNO commands from outside, while the launcher just runs as soon as it sends the appropriate commands to the app.
You probably have to use that above-mentioned external control API (UNO) directly.
See Scripting LibreOffice for the basics (although there's more info there about internal scripting than external), and the API documentation for details and examples.
But there may be an even simpler answer: unoconv is a simple command-line tool written using the UNO API that does exactly what you want. It starts up LibreOffice if necessary, sends it some commands, waits for the results, and then quits. So if you just use unoconv instead of libreoffice, call is all you need.
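A sketch of the loop above rewritten around unoconv (assuming unoconv is installed and on the PATH):
from subprocess import call

for entry in sorted_files:
    name = entry['filename']
    # unoconv only returns once the conversion has actually finished
    call(['unoconv', '-f', 'xls', '-o', 'out', '%s/%s.xls' % (name, name)])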
Also notice that unoconv is written in Python, and is designed to be used as a module. If you just import it, you can write your own (simpler, and use-case-specific) code to replace the "Main entrance" code, and not use subprocess at all. (Or, of course, you can tear apart the module and use the relevant code yourself, or just use it as a very nice piece of sample code for using UNO from Python.)
Also, the unoconv page linked above lists a variety of other similar tools, some that work via UNO and some that don't, so if it doesn't work for you, try the others.
If nothing else works, you could consider, e.g., creating a sentinel file and using a filesystem watch, so at least you'll be able to detect exactly when it's finished its work, instead of having to guess at a timeout. But that's a real last-ditch workaround that you shouldn't even consider until eliminating all of the other options.
If libreoffice is using an intermediary (daemon), as mentioned by @mgilson, then one solution is to find out what program it's invoking, and then invoke it directly yourself.

running a python script indefinitely (as a process, pretty much)

I have tests that can take up to 15 minutes at a time. During these 15 minutes, a log file is periodically written to; however, most of the content is useless.
In response, I have a Python script that parses out the useless text and displays the relevant data.
What I'm trying to achieve is similar to what tail -f log_file does: constantly updating the terminal with the newest additions to the file. I was thinking that if a Python script ran as a process, it could parse the log file whenever the tests write to it, then go to sleep until the log file is written to again.
Any ideas how one can achieve this?
I already have a script that does the parsing; I just don't know how to make it do so continually and efficiently.
You could just have the script filter standard input, and pipe tail -f through it. When you're waiting on stdin, your script will sleep, so it's plenty efficient.
E.g.
python long_running_script.py & tail -f log_file | python filter_logs.py
Your script can be something like
import sys

while True:
    line = sys.stdin.readline()
    if not line: break               # stop at EOF
    if filter_line(line): print line
Looks like you need something like "pytailer":
http://code.google.com/p/pytailer/
While I never used it myself, the last example looks like what you want.
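Something along these lines, going by pytailer's examples (the log filename is a placeholder):
import tailer

# follow() blocks until new lines are appended, like tail -f
for line in tailer.follow(open('log_file')):
    print line   # or feed the line to your existing parser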
Any ideas how one can achieve this?
This should be pretty easy to do. Most of what you want is already part of your OS.
python test.py | python log_parser.py
Be sure your tests write their log to stdout instead of some other file. This is often easy to do with small changes to the logging configuration.
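For example, a minimal sketch of pointing Python's standard logging at stdout so the output can be piped:
import logging
import sys

# send all log records to stdout instead of a file
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.info('test started')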
Having implemented almost this exact tool, I had great success using the inotify capability in Twisted.
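For reference, a minimal sketch using Twisted's inotify support (Linux-only; the filename is a placeholder):
from twisted.internet import inotify, reactor
from twisted.python import filepath

def changed(ignored, path, mask):
    print 'log updated:', path.path   # re-parse the new data here

notifier = inotify.INotify()
notifier.startReading()
notifier.watch(filepath.FilePath('log_file'), callbacks=[changed])
reactor.run()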
