Vim (macvim): Alternately read input from keyboard and external program - python

I've got a Python program that reads input from a MIDI device and produces text output derived from the incoming MIDI messages. As a simple example, say it simply maps MIDI Note On events to note names, e.g. note_on(60) --> 'C'. I'd like to capture the output in real time in a GVIM (actually MacVim) window without losing the ability to edit the output with the computer keyboard, i.e. I need MacVim to read from both an external program and the computer keyboard.
What's the cleanest general way to implement that, under the assumption that the MIDI reader will never generate output while I'm typing and vice versa? I'd prefer to be able to give the Python script a filename and have it start MacVim with that file open, but doing it with shell commands or connecting from within MacVim would also be acceptable.

Based on the answers to How do I read and write repeatedly from a process in vim?, it looks like Vim does not easily support two asynchronous input sources. I'll leave the question open in case someone happens to know an elegant solution, but for now it seems the best approach is to have my Python program write to a normal file, use 'tail -f' for real-time viewing, and edit afterwards.
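A minimal sketch of that workaround, assuming the MIDI reader exposes a callback such as on_note(name) (the callback and the file name notes.txt are hypothetical, not part of the original program): append each note name to the file so 'tail -f' (or a later :e! in MacVim) picks it up immediately.
OUTPUT_PATH = "notes.txt"  # illustrative output file

def on_note(name):
    # one note name per line; closing the file flushes it for `tail -f`
    with open(OUTPUT_PATH, "a") as f:
        f.write(name + "\n")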

Related

Python: call and run a process with input files

I am looking for a way to run a process with input files in Python.
In my script I call a process using subprocess:
import subprocess as sp
sp.call([r'C:\EnergyPlusV8-8-0\EP-Launch.exe'])
So the program I would like to launch opens, but then I need to choose 2 input files and press the "Simulate.." button to execute the program (EnergyPlus).
Comment:
I mean, after those code lines the interface of the program (EnergyPlus) is open; then I choose in that window which input files the program has to use. After that, in the same interface, I start the simulation. I want to do these steps purely in the Python code, without interacting with the EnergyPlus interface. I hope that clears up the ambiguities.
I would like to do those last steps automatically (knowing the input file locations) in the Python code.
How can I do this?
You won't be able to do this unless EnergyPlus provides some kind of API, or you are prepared to write UI-manipulation code, which really depends on the type of application it is. Without more information I'm going to have to say that what you want to do is not possible.

How to send command in separate python window

Searching isn't pulling up anything useful, so perhaps my verbiage is wrong.
I have a Python application that I didn't write, which takes user input and performs tasks based on that input. The other script, which I did write, watches the serial traffic for a specific match condition. Both scripts run in different windows. What I want to do is, when my script detects a match condition, send a command to the other script. Is there a way to do this with Python? I am working on Windows and want to send the output to a different window.
Since you can start the script within your script, you can just follow the instructions in this link: Read from the terminal in Python
old answer:
I assume you can modify the code in the application you didn't write. If so, you can tell the code to "print" what it's putting on the window to a file, and your other code could constantly monitor that file.
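A rough sketch of that file-monitoring idea, assuming the first program is modified to append its output to a shared log file (the file name bridge.log and the poll interval are made up):
import time

def follow(path):
    # read new lines as they are appended, roughly like `tail -f`
    with open(path, "r") as f:
        f.seek(0, 2)              # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)   # nothing new yet; wait and retry
                continue
            yield line.rstrip("\n")

for command in follow("bridge.log"):
    print("received:", command)  # react to the other script's output here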

Displaying reports for files using Python

This is a question about what's generally possible using Python; I don't really know enough about programming to know whether it's doable, and if so, how I'd go about it.
I have a simple desktop program which you load files into. The program can then output various properties of the thing that's in the file, and depending on what you ask it to do will produce a report. It outputs the report as text, but not as a file; it just displays the report in the program itself. Like this:
My question is that, to get this text output for a large number of files, I'm currently manually loading the files individually into the program, making the report, copying it to a text file, and saving the text file.
Basically I want to know whether it's extremely difficult to get Python to do this for me, or not. If it is doable, are there resources available for me to read about how it might be done? Are there restrictions on being able to run my program and various commands from Python?
Hope my question's clear enough. Sorry if it's a bit garbled.
The tricky part here is
The program can then output various properties of the thing that's in the file, and depending on what you ask it to do will output a report.
Basically, if the desktop application you use has a command-line interface, it is possible and relatively easy.
If the program has a command-line option to open a document and output a report in any format (print the report to standard output, write it to a file on disk, etc.), you can call that command from a Python script for each file in a list.
If your software doesn't have a CLI (command-line interface), it might still be possible but more difficult. In that case, you have to automate the actions by using a library that emulates clicks on the window of your software (1. click Open, 2. click, click, click to select the file to load, 3. click the button to generate a report, etc.). It's a pain, but it can be considered.
You will find plenty of resources to teach yourself how to write a Python script. You will probably need to learn about lists, loops, file manipulation, and perhaps the subprocess library, which lets you call any command from your Python script.
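For instance, here is a minimal sketch assuming the tool offers a hypothetical command such as reporttool --report input.dat that prints the report to standard output (the program name and flag are made up; check the real tool's documentation):
import subprocess
from pathlib import Path

input_files = ["a.dat", "b.dat", "c.dat"]           # illustrative file list

for name in input_files:
    # hypothetical CLI invocation; capture the report printed on stdout
    result = subprocess.run(["reporttool", "--report", name],
                            capture_output=True, text=True, check=True)
    Path(name).with_suffix(".txt").write_text(result.stdout)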
I suggest you start with Python 3 instead of Python 2 because it has better Unicode support, which could quickly become an issue if you have non-ASCII characters in your input files or in the reports from your software.
Good luck ;)
If the only way you can get the report is by selecting and copy/pasting it from the program's GUI, the situation just begs for AutoIt instead of Python.
With Python it would be much more difficult. Unless you want to improve your Python knowledge, of course...
By simulating keypresses you can open a specific file in the program (by sending Ctrl+O, or Alt and navigating the file menu). Simulating a mouse click or keypress, start report generation. Then simulate a mouse click in the text area, and perform something like:
(just a skeleton of a script; it will probably need to be modified to suit your situation and needs)
send("^{a}^{c}") ; to select all and copy (if these keys are supported in this program
$text = ClipGet() ; get contents of clipboard
$fout = FileOpen("somefile.txt",2)
FileWrite($fout,$text)
FileClose($fout)
To fully automate the task, the script can get a list of source files in a specific folder and run this macro for each of them, automatically naming the resulting txt files.

Write and save a file with nano using subprocess

How can I write/append to a file by calling nano using subprocess and get it saved automatically? For example, I have a file and I want to open it and append something at the end of it, so I write:
>>> import tempfile
>>> file = tempfile.NamedTemporaryFile(mode='a')
>>> example = file.name
>>> file.close()
>>> import subprocess
>>> subprocess.call(['nano', example])
Now once the last line is executed, the file is opened and I can write anything, then save it by hitting Ctrl+O and Ctrl+X.
Instead, I want to send the input through a stdin PIPE and have the file saved by itself, i.e. there could be some mechanism that hits Ctrl+O and Ctrl+X automatically.
Can anyone help me solve this issue?
A ctrl-O is just a character, same as any other. You can send it by writing '\x0f' (or, in Python 3, b'\x0f').
However, that probably isn't going to do you any good. Most programs that provide an interactive GUI in the terminal, like nano, cannot be driven by stdin. They need to take control of the terminal, and to do that, they will either check that stdin isatty and then tcsetattr it, or just open /dev/tty.
You can deal with this by creating a pseudo-terminal with os.openpty, os.forkpty, or pty.
But it's often easier to use a library like pexpect to deal with interactive programs, GUI or otherwise.
And it's even easier to not try to drive an interactive program in the first place. For example, unlike nano, ed is designed to be driven in "batch mode" by a script, and sed even more so.
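As a quick illustration of that batch-mode idea (a sketch, not the answer's original code; the file name is a placeholder), ed can append a line and write the file entirely from a script:
import subprocess

fname = "example.txt"                       # placeholder file name
ed_script = "$a\nappended line\n.\nw\nq\n"  # append after the last line, write, quit
subprocess.run(["ed", "-s", fname], input=ed_script, text=True, check=True)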
And it's even easier to not try to drive a program at all when you're trying to do something that can be just as easily done directly in Python. The easiest way to append something to a file is to open it in 'a' mode and write to it. No need for an external program at all. For example:
new_line = input('What do you want to add?')
with open(fname, 'a') as f:
    f.write(new_line)
If the only reason you were using nano is because you needed something to sudo… there's really no reason for that. You can sudo anything else—like sed, or another Python script—just as easily. Using nano is just making things harder for yourself for absolutely no reason.
The big question here is: why do you have a file that's not writable by your Python script, but which you want arbitrary remote users to be able to append to? That sounds like a very bad system design. You make files non-writable because you want to restrict normal users from modifying them; if you want your Python script to be able to modify it on behalf of your remote users, why isn't it owned by the same user that the script runs as?
In the (unlikely) event that you still find that you need to control nano or some other interactive program from a Python process, I'm going to suggest the same thing here that I suggested for this question: Using python subprocess.call() to launch an ncurses process ...
... don't use subprocess for controlling curses/full-screen interactive processes. use pexpect. That's what it's for.
(On the other hand, I also agree with the many comments here regarding better ways to work around the permissions issue: write some sort of script (in Python, bash, sed, or whatever) which can be run under sudo and which can make the in-place edits or appends to your data file directly.)
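If you really do need to script nano itself, a rough pexpect sketch might look like this (the file name and typed text are placeholders, and the prompts nano prints can vary between versions):
import pexpect

child = pexpect.spawn("nano example.txt")   # placeholder file name
child.expect("GNU nano")                    # wait until the editor has started
child.send("some text")                     # type text into the buffer
child.sendcontrol("o")                      # Ctrl+O: write out
child.expect("File Name to Write")          # nano asks for the file name
child.sendline("")                          # accept the default name
child.sendcontrol("x")                      # Ctrl+X: exit
child.expect(pexpect.EOF)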

Dynamic python user input to a separate C program

I have a Python GUI written in Tkinter. The main point of the program is to set up many different variables for a final calculation on a hyperspectral image (geography stuff). Now some further requirements have emerged: the user would like to be able to actively input parameters for groups of pixels to be smoothed. This information would be entered in the Python GUI, and the C programs that handle the image modifications need it as input. Since the images can be giant, I want to avoid re-running the C program each time (which involves memory allocation, reading a giant file, etc.) with a call such as
os.system("./my_C_Program param1 param2 param3 ...")
I'd prefer a setup where, once I've started my_C_Program, it can sit waiting for more input after having loaded all the resources into memory. I was thinking something involving getchar() would be what I want, but I don't know how to get the output from Python over to my_C_Program. I've seen a few similar questions about this on SO, but I wasn't able to determine how those scenarios apply to mine.
If getchar() is the answer, can someone please explain how stdout works with multiple terminals open?
As well, I'm trying to keep this program easily portable across Linux/Mac/Windows.
To summarize, I want the following functionality:
User selects certain input from python GUI
That input becomes the input for a C program
That C program can handle more input without having to be run again from the start (avoiding file I/O, etc).
The first thing you should probably do is start using Python's subprocess module rather than os.system. Once you've done that, you can change it so the C program's stdin is something you can write to from Python, rather than inheriting the Python script's stdin.
After that, you could just have Python send data that the C program can interpret. For example, you might use a series of JSON chunks, one per line, like Twitter's streaming API1; the Python script builds a request dictionary, serializes it with json.dump, and then writes a newline. The C program reads a line, parses the JSON, and handles the request.
1 Upon reading the documentation, it looks like their implementation is a little more complex. You could adopt how they do it or just do it like I described.
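A small sketch of the Python side of that idea, assuming the C program (called ./my_C_Program here, as in the question) reads one JSON object per line from stdin:
import json
import subprocess

# started once; the C program keeps its data loaded in memory between requests
proc = subprocess.Popen(["./my_C_Program"], stdin=subprocess.PIPE, text=True)

def send_request(params):
    # serialize one request dictionary and hand it to the C program
    proc.stdin.write(json.dumps(params) + "\n")
    proc.stdin.flush()   # make sure the line reaches the C side right away

send_request({"action": "smooth", "pixels": [[10, 12], [11, 12]]})  # illustrative request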
icktoofay and JasonFruit have suggested decent approaches; I'm going to suggest something to decouple the two programs a little further.
If you write your C program as a server that listens for requests and replies with answers on a TCP socket, you can more easily change clients, support multiple simultaneous clients, perform near-seamless upgrades of clients or servers without necessarily modifying the other, or move the C program to more powerful hardware with only slight configuration changes.
Once you have opened a listening socket and accepted a connection, the rest of your program could continue as if you were just interacting over standard input and standard output. This works well enough, but you might prefer to encode your data in some standardized format such as JSON or ASN.1, which can save some manual string handling.
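For illustration, the Python/GUI side of such a setup could look roughly like this (the host, port, and newline-delimited JSON protocol are assumptions, not something specified in the answer):
import json
import socket

def send_request(params, host="127.0.0.1", port=5555):   # assumed address and port
    # open a connection, send one JSON request line, read one reply line
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(params) + "\n").encode("utf-8"))
        reply = sock.makefile("r", encoding="utf-8").readline()
    return reply.strip()

print(send_request({"action": "smooth", "region": [0, 0, 64, 64]}))  # illustrative call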
Could you do something with pexpect? It lets you provide input to a command-line program, waiting for specified prompts from it before continuing. It also lets you read the intervening output, so you could respond to that as needed.
If you're on Windows (as I note from your comment that you are), you could try winpexpect, which is similar.
