I started to play with python-mode in Emacs (latest Emacs, latest python-mode.el)
When I try to send one line of code to the process via py-execute-line, or a function definition via py-execute-def-or-class, it grabs the whole buffer, saves it in a temporary file, and sends an exec(compile(open(some_temp_file_name).read()...) string to the process for execution.
My question is: why does it have to be that way?
Why can't we just (comint-send-string proc string) to the process, where the string is one line of code or a block (or at least avoid saving a temp file every time)?
Can't reproduce with current trunk.
Please file a complete bug-report at:
https://gitlab.com/python-mode-devs/python-mode/issues
Related
I want to write a Python script that opens an *.exe file (a CMD console application)
and communicates with it by sending input and reading output (for example via stdin and stdout) many times.
I tried communicate(), but it closes the pipe after I send the first input (communicate(input='\n')),
so it works for me only once.
Then I tried p.stdout.readline(), but I can only read line by line. When I read a newline, the process
terminates (which is not what I need).
I just want to start a program, read its output, and send input to it, then wait for the next output, send
new input, and so on...
Is there a good way to do this? Does anybody have an example, or a similar problem that has been solved?
I need the same thing as you. I am actually trying pexpect:
https://pexpect.readthedocs.io/en/stable/index.html
after having no success with subprocess.
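If the child program line-buffers (or flushes) its output, the back-and-forth described in the question can also be done with plain subprocess.Popen, writing one line and reading one line at a time. This is only a sketch: the small echo child below is a stand-in for the real console *.exe, and the key point is to flush stdin after each write and never call communicate(), which closes the pipe.

```python
import subprocess
import sys

# Stand-in for the real console *.exe: echoes each input line back, flushed.
child_code = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('got: ' + line)\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def ask(line):
    """Send one line and wait for one line of output (no communicate())."""
    proc.stdin.write(line + "\n")
    proc.stdin.flush()                         # push the line to the child now
    return proc.stdout.readline().rstrip("\n")

first = ask("hello")    # 'got: hello'
second = ask("again")   # 'got: again'

proc.stdin.close()      # signal EOF only when the dialogue is over
proc.wait()
```

If the real application does not flush its output when talking to a pipe, this approach stalls on readline(), and pexpect (which uses a pseudo-terminal) is the more robust choice.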
I'm looking for the best way to invoke a Python script on a remote server, poll its status, and fetch the output. Preferably the implementation would also be in Python. The script takes a very long time to execute and I want to poll intermittently to see what actions are being performed. Any suggestions?
There are many options. Polling is generally a bad idea, as it keeps the CPU occupied.
You could have your script send you status changes.
You could have your script write its actual status into a (remote) file (either overwriting it or appending to a log file) and look into that file. This is probably the easiest way; you can monitor the file with tail -f file over the link.
And there are many more, and more complicated, options.
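A minimal sketch of the status-file idea (the file name and messages here are made up): the long-running script appends timestamped status lines, and you follow them remotely with tail -f over SSH.

```python
import datetime

STATUS_FILE = "job_status.log"  # hypothetical path on the remote server

def report(message):
    """Append a timestamped status line that can be tailed remotely."""
    with open(STATUS_FILE, "a") as f:
        f.write("%s %s\n" % (datetime.datetime.now().isoformat(), message))

report("step 1/3: loading data")
report("step 2/3: processing")
report("step 3/3: done")
```

Because each report call opens, writes, and closes the file, every status line is visible to a remote tail almost immediately.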
I'm writing files from one process using open and write (i.e. direct kernel calls). After the write, I simply close and exit the application without flushing. The application is started from a Python wrapper, which reads the files immediately after the application exits. Sometimes, however, the Python wrapper reads incorrect data, as if it were still seeing an old version of the file (i.e. the wrapper reads stale data).
I thought that no matter whether the file metadata and contents had been written to disk, the user-visible contents would always be valid and consistent (i.e. buffers get flushed to memory at least, so subsequent reads return the same content, even if it is not yet committed to disk). What's going on here? Do I need to sync on close in my application, or can I simply issue a sync command after running my application from the Python script to guarantee that everything has been written correctly? This is running on ext4.
On the Python side:
import subprocess

# Called for lots of files
o = subprocess.check_output(['./App.BitPacker', inputFile])  # writes indices.bin and dict.bin
indices = open('indices.bin', 'rb').read()
dictionary = open('dict.bin', 'rb').read()
with open('output-file', 'wb') as output:
    output.write(dictionary)  # invalid content in output-file ...
# 'output-file' is a placeholder; one output file per inputFile, of course
I've never had your problem and always found a call to close() to be sufficient. However, from the man entry on close(2):
A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes. It is not common for a file system to flush the buffers when the stream is closed. If you need to be sure that the data is physically stored use fsync(2). (It will depend on the disk hardware at this point.)
As, at the time of writing, you haven't included the code for the writing process, I can only suggest adding a call to fsync in that process and seeing if this makes a difference.
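As a sketch, this is what that sequence looks like using os-level calls (shown in Python; the original writer is presumably C, where the same fsync(2)-before-close(2) pattern applies; the file name and payload are placeholders):

```python
import os

# Open, write, fsync, close: fsync blocks until the kernel has pushed
# the data to the storage device, not just into the page cache.
fd = os.open("data.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"\x01\x02\x03\x04")
os.fsync(fd)   # force the data out before closing
os.close(fd)
```

After fsync returns, a subsequent reader in another process sees the full content (modulo any write cache inside the disk hardware itself, as the man page notes).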
I have a Python app running on Linux. It is called every minute from cron. It checks a directory for files and, if it finds one, processes it, which can take several minutes. I don't want the next cron job to pick up the file currently being processed, so I lock it using the code below, which calls portalocker. The problem is it doesn't seem to work: the next cron job manages to get a file handle returned for the file that is already being processed.
import sys
import portalocker

def open_and_lock(full_filename):
    file_handle = open(full_filename, 'r')
    try:
        portalocker.lock(file_handle, portalocker.LOCK_EX
                         | portalocker.LOCK_NB)
        return file_handle
    except IOError:
        sys.exit(-1)
Any ideas what I can do to lock the file so no other process can get it?
UPDATE
Thanks to @Winston Ewert I checked through the code and found that the file handle was being closed well before the processing had finished. It seems to be working now, except that the second process blocks on portalocker.lock rather than throwing an exception.
After fumbling with many schemes, this works in my case. I have a script that may be executed multiple times simultaneously. I need these instances to wait their turn to read/write to some files. The lockfile does not need to be deleted, so you avoid blocking all access if one script fails before deleting it.
import fcntl

def acquireLock():
    ''' acquire exclusive lock file access '''
    locked_file_descriptor = open('lockfile.LOCK', 'w+')
    fcntl.lockf(locked_file_descriptor, fcntl.LOCK_EX)
    return locked_file_descriptor

def releaseLock(locked_file_descriptor):
    ''' release exclusive lock file access '''
    locked_file_descriptor.close()

lock_fd = acquireLock()

# ... do stuff with exclusive access to your file(s)

releaseLock(lock_fd)
You're using the LOCK_NB flag which means that the call is non-blocking and will just return immediately on failure. That is presumably happening in the second process. The reason why it is still able to read the file is that portalocker ultimately uses flock(2) locks, and, as mentioned in the flock(2) man page:
flock(2) places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock(2) and perform I/O on the file.
To fix it you could use the fcntl.flock function directly (portalocker is just a thin wrapper around it on Linux) and catch the exception it raises when the lock cannot be acquired (in Python, a failed non-blocking flock raises rather than returning an error code).
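For example, a sketch with the same file locked through two separate handles (the lock-file name is arbitrary): the second, non-blocking attempt raises while the first handle still holds the lock.

```python
import fcntl

# First handle acquires the exclusive lock.
fh = open("some.lock", "w")
fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second open file description on the same file must fail to lock it.
fh2 = open("some.lock", "w")
try:
    fcntl.flock(fh2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = True
except OSError:   # raised immediately because of LOCK_NB
    locked = False

fcntl.flock(fh, fcntl.LOCK_UN)   # release the lock
fh.close()
fh2.close()
```

Dropping LOCK_NB instead makes the second caller block until the lock is released, which matches the behaviour described in the question's update.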
Don't use cron for this. Linux has inotify, which can notify applications when a filesystem event occurs. There is a Python binding for inotify called pyinotify.
Thus, you don't need to lock the file -- you just need to react to IN_CLOSE_WRITE events (i.e. when a file opened for writing was closed). (You also won't need to spawn a new process every minute.)
An alternative to using pyinotify is incron which allows you to write an incrontab (very much in the same style as a crontab), to interact with the inotify system.
What about manually creating an old-fashioned .lock file next to the file you want to lock?
Just check if it's there: if not, create it; if it is, exit prematurely. After finishing, delete it.
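One caveat with this scheme: the check and the create must happen atomically, or two processes can both see "no lock file" and proceed. os.O_CREAT | os.O_EXCL does both in a single call. A sketch (the lock-file name is a placeholder):

```python
import os

LOCK_PATH = "mydata.lock"  # placed next to the file being protected

def try_lock():
    """Atomically create the lock file; return True if we got the lock."""
    try:
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:   # someone else created it first
        return False

got = try_lock()       # first caller wins
second = try_lock()    # a concurrent caller would exit prematurely

if got:
    # ... do the work ...
    os.remove(LOCK_PATH)   # delete the lock when finished
```

The downside, as the earlier answer notes, is that a crash before os.remove leaves a stale lock file that blocks everyone until it is cleaned up by hand.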
I think fcntl.lockf is what you are looking for.
Here's my question: I'm writing a script to check whether my website is running all right. The basic idea is to get the server response time and similar stats every 5 minutes or so, and the script logs the info after each check. I know it's no good to close the script while it's in the middle of checking and writing logs, but I'm curious: if there are lots of servers to check and the file I/O happens frequently, what would happen if you abruptly close the script?
OK, here's an example:
import time

while True:
    DemoFile = open("DemoFile.txt", "a")
    DemoFile.write("This is a test!")
    DemoFile.close()
    time.sleep(30)
If I accidentally close the script while the line DemoFile.write("This is a test!") is running, what would I get in DemoFile.txt? Would I get "This i" (an incomplete line), the complete line, or would the line not even be added?
Hopefully somebody knows the answer.
According to the python io documentation, buffering is handled according to the buffering parameter to the open function.
The default behavior in this case would be either the device's block size or io.DEFAULT_BUFFER_SIZE if the block size can't be determined. This is probably something like 4096 bytes.
In short, if the script is killed before close() runs, that example will write nothing: the string is far smaller than the buffer, so it never reaches the file. If you were writing something long enough that the buffer was flushed once or twice, you'd have multiples of the buffer size written. And you can always flush the buffer manually with flush().
(If you specify buffering as 0 and open the file in binary mode, you'd get "This i". That's the only way, though.)
As @sven pointed out, Python isn't the only layer doing buffering. When the program exits normally, all open files are flushed and closed; if it is killed abruptly, the operating system closes the file descriptors, but any data still sitting in the userspace buffer is lost, while data already handed to the kernel survives.
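If each log line must survive even an abrupt kill, a sketch of the defensive version of the loop body: flush Python's userspace buffer, then ask the kernel to commit the data, rather than relying on close().

```python
import os

with open("DemoFile.txt", "a") as f:
    f.write("This is a test!\n")
    f.flush()              # userspace buffer -> kernel page cache
    os.fsync(f.fileno())   # kernel page cache -> disk
```

After flush() the line is visible to other processes; after fsync() it also survives a power loss. For a monitoring script writing one short line every few minutes, the cost of doing both on every write is negligible.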