Exception while handling generator.send() in try...except block - python

import traceback

filename = 'tempfile'

def tail(filename):
    fd = open(filename)
    while True:
        line = fd.readline()
        if not line:
            continue
        else:
            if filename != 'uh':
                yield line
            else:
                print 'Returning f to close the file'
                yield fd

try:
    genObj = tail(filename)
    valfromgen = genObj.next()
    while valfromgen:
        print valfromgen
        valfromgen = genObj.next()
except:
    traceback.print_exc()

try:
    fd_Got_Back = genObj.send('uh')
    fd_Got_Back.close()
except:
    traceback.print_exc()
Intention of the code: I have opened the file in the generator function only, not outside it, but I want to close that file outside the generator function, probably by using send.
What I am trying to do: replicate tail -f from Unix.
How I am trying to do it:
Open a tempfile in read mode.
Whenever the tempfile has one new line written in it (which I'll keep writing manually and saving with Notepad), yield the newly written line.
Problem:
The problem is that I'm trying to check how I can close the opened tempfile from this Python code if I press Ctrl + C while it runs in the command prompt (Ctrl + C raises KeyboardInterrupt; SIGTERM is a different signal). To emulate this, I have opened the tempfile in the tail function, and whenever an exception is raised (as the interpreter does when I press Ctrl + C), control should go to the first except block. From there, I'm trying to send the value uh to the generator function tail, so that it yields the file object of the opened file, which I can then use to close the opened tempfile.
PS: I expect a solution where the file is opened in the generator function only, not outside it.

I think you're misunderstanding how send works. send(value) resumes the generator and makes the yield expression at which it is paused evaluate to that value; the generator then runs until its next yield. It does not change the value of the original parameters. You can then use the sent value for some purpose inside the generator. So you could make your code:
import traceback

filename = 'tempfile'

def tail(filename):
    fd = open(filename)
    while True:
        line = fd.readline()
        if not line:
            continue
        else:
            x = (yield line)
            if (x == 'uh'):
                print 'Returning f to close the file'
                yield fd

try:
    genObj = tail(filename)
    valfromgen = genObj.next()
    while valfromgen:
        print valfromgen
        valfromgen = genObj.next()
except:
    traceback.print_exc()

try:
    genObj.send('uh').close()
except:
    traceback.print_exc()
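To make the mechanics concrete, here is a minimal, self-contained sketch (Python 3 syntax, names invented for illustration) of how a sent value shows up inside the generator:

def echo():
    received = None
    while True:
        # send(x) resumes the generator here and makes `yield` evaluate to x
        received = yield received

gen = echo()
next(gen)               # advance to the first yield
print(gen.send('uh'))   # prints 'uh': the value travelled in and back out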

I have figured out where I was stuck and have come up with this solution:
When I press Ctrl + C (on Windows), the KeyboardInterrupt is actually raised inside fd.readline(). So I placed a try...except there, so that the generator function yields the file object whenever Ctrl + C is hit. If there is no KeyboardInterrupt, it just yields the newly read line from tempfile.
The yielded value is checked with isinstance() in the main body, and if it turns out to be a file, I close both the file and the generator.
PS: Ctrl + C also raises KeyboardInterrupt on Linux (the underlying signal is SIGINT, not SIGTERM). To catch any kind of interruption, replace the KeyboardInterrupt handler with a plain except.
import sys, traceback

filename = 'tempfile'

def tail(filename):
    fd = open(filename)
    while True:
        try:
            line = fd.readline()
        except KeyboardInterrupt:
            print 'keyboard interrupt here'
            yield fd
        if not line:
            continue
        else:
            yield line

try:
    genObj = tail(filename)
    valfromgen = genObj.next()
    while valfromgen:
        if isinstance(valfromgen, file):
            print 'Closing this file now as `tail` yielded a file descriptor'
            valfromgen.close()
            genObj.close()
            break
        print 'Yielded line: ', valfromgen
        valfromgen = genObj.next()
    print 'Just in order to check that things are in order, the following line will raise StopIteration. If it raises, it means we are good.'
    print genObj.next()
except:
    traceback.print_exc()
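For completeness: generators already have a cleanup hook that avoids yielding the file object at all. When a generator is closed (via genObj.close(), or when an exception propagates out of it), any enclosing finally block inside it runs. A hedged sketch of that alternative, in Python 3 syntax with invented names:

import time

def tail(filename):
    fd = open(filename)
    try:
        while True:
            line = fd.readline()
            if not line:
                time.sleep(0.1)   # avoid a busy loop on empty reads
                continue
            yield line
    finally:
        # Runs when the generator is closed or when an exception
        # (e.g. a KeyboardInterrupt inside readline) propagates out
        fd.close()

gen = tail('tempfile')
try:
    for line in gen:
        print(line, end='')
finally:
    gen.close()   # harmless if the generator already cleaned up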

Related

os.rename, os.replace and shutil.move errors on windows 10

I'm trying to implement simple file locking using renaming on Windows 10. I've got the following test program that renames a file to lock it, then opens and reads it, and renames it back to unlock it. However, I'm seeing intermittent errors when I run two of these simultaneously with different arguments (e.g. test.py 1, test.py 2).
import sys
import os
from time import sleep
import shutil

def lockFile():
    while True:
        try:
            os.replace("testfile", "lockfile"+sys.argv[1])
            if(os.path.exists("lockfile"+sys.argv[1])):
                print("successfully locked", flush=True)
                print(os.stat("lockfile"+sys.argv[1]))
            else:
                print("failed to lock", flush=True)
                raise BaseException()
            return
        except:
            print("sleeping...", flush=True)
            sleep(1)

def unlockFile():
    while True:
        try:
            os.replace("lockfile"+sys.argv[1], "testfile")
            if(os.path.exists("testfile")):
                print("successfully unlocked", flush=True)
            else:
                print("failed to unlock", flush=True)
                raise BaseException()
            return
        except:
            print("sleeping...", flush=True)
            sleep(1)

while True:
    lockFile()
    if(os.path.exists("lockfile"+sys.argv[1])):
        print("file is available", flush=True)
    else:
        print("file is not available", flush=True)
    with open(("lockfile"+sys.argv[1])) as testFile:
        contents = testFile.read()
        print(contents.rstrip(), flush=True)
    unlockFile()
What I'm seeing is that occasionally the rename/replace/move doesn't throw an exception, os.path.exists says the locked file is present, I can stat the locked file, and then suddenly the locked file is gone and I can't open it:
successfully locked
os.stat_result(st_mode=33206, st_ino=9288674231797231, st_dev=38182903, st_nlink=1, st_uid=0, st_gid=0, st_size=12, st_atime=1536956584, st_mtime=1536956584, st_ctime=1536942815)
file is not available
Traceback (most recent call last):
File "test.py", line 41, in <module>
with open(("lockfile"+sys.argv[1])) as testFile:
FileNotFoundError: [Errno 2] No such file or directory: 'lockfile2'
I think part of the problem is that os.path.exists lies:
Directories cache file name to file handle mappings. The most common problems with this are:
•You have an opened file, and you need to check if the file has been replaced by a newer file. You have to flush the parent directory's file handle cache before stat() returns the new file's information and not the opened file's.
◦Actually this case has another problem: the old file may have been deleted and replaced by a new file, but both files may have the same inode. You can check this case by flushing the open file's attribute cache and then seeing if fstat() fails with ESTALE.
•You need to check if a file exists, for example a lock file. The kernel may have cached that the file does not exist, even if in reality it does. You have to flush the parent directory's negative file handle cache to see if the file really exists.
So sometimes when your lockFile() function checks whether the path exists, it doesn't actually exist.
OK, based on the post linked above saying that os.path lies, I cobbled together a solution. This may still just be lucky timing, and it is Windows-only at this point. If I change the subprocess.Popen back to rename/replace, or omit the os.stat before the os.path.exists check, then it doesn't work. But this code doesn't seem to hit the problem. Tested with 5 simultaneous scripts running, and without sleep calls.
import subprocess

def lockFile():
    while True:
        try:
            p = subprocess.Popen("rename testfile lockfile"+sys.argv[1], shell=True,
                                 stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            result = p.wait()
            statresult = os.stat("lockfile"+sys.argv[1])
            if(os.path.exists("lockfile"+sys.argv[1])):
                print("successfully locked", flush=True)
                print(os.stat("lockfile"+sys.argv[1]), flush=True)
            else:
                print("failed to lock", flush=True)
                raise BaseException()
            return
        except BaseException as err:
            print("sleeping...", flush=True)
            #sleep(1)

def unlockFile():
    while True:
        try:
            p = subprocess.Popen("rename lockfile"+sys.argv[1] + " testfile", shell=True,
                                 stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            result = p.wait()
            statresult = os.stat("testfile")
            if(os.path.exists("testfile")):
                pass
            else:
                print("failed to unlock", flush=True)
                raise BaseException()
            return
        except BaseException as err:
            print("sleeping...", flush=True)
            #sleep(1)
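An aside that goes beyond the original thread: rename-based locking can be replaced by exclusive creation, which is atomic on both Windows and POSIX. os.open with O_CREAT | O_EXCL either creates the lock file or raises FileExistsError in a single step, so there is no window in which os.path.exists can lie about a half-moved file. A hedged sketch (file names invented for illustration):

import os
import time

LOCK = 'testfile.lock'   # hypothetical lock-file name

def lock_file():
    while True:
        try:
            # O_CREAT | O_EXCL: create the file or fail atomically
            fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            time.sleep(0.1)   # another process holds the lock; retry

def unlock_file():
    os.remove(LOCK)

lock_file()
try:
    with open('testfile') as f:   # the data file itself never moves
        print(f.read().rstrip(), flush=True)
finally:
    unlock_file()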

How to exit function with signal on Windows?

I have the following code written in Python 2.7 on Windows. I want to check for updates to the current Python script and, if there is an update, fetch the new version from an FTP server preserving the filename, then execute the new script after terminating the current one via os.kill with SIGTERM.
I went with the exit-function approach, but I read that on Windows this only works with the atexit library and default Python exit methods. So I used a combination of atexit.register() and a signal handler.
***necessary libraries***

filematch = 'test.py'
version = '0.0'
checkdir = os.path.abspath(".")
dircontent = os.listdir(checkdir)
r = StringIO()

def exithandler():
    try:
        try:
            if filematch in dircontent:
                os.remove(checkdir + '\\' + filematch)
        except Exception as e:
            print e
        ftp = FTP(ip address)
        ftp.login(username, password)
        ftp.cwd('/Test')
        for filename in ftp.nlst(filematch):
            fhandle = open(filename, 'wb')
            ftp.retrbinary('RETR ' + filename, fhandle.write)
            fhandle.close()
        subprocess.Popen([sys.executable, "test.py"])
        print 'Test file successfully updated.'
    except Exception as e:
        print e

ftp = FTP(ip address)
ftp.login(username, password)
ftp.cwd('/Test')
ftp.retrbinary('RETR version.txt', r.write)

if(r.getvalue() != version):
    atexit.register(exithandler)
    somepid = os.getpid()
    signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
    os.kill(somepid, signal.SIGTERM)
print 'Successfully replaced and started the file'
Using the:
signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
I get:
Traceback (most recent call last):
File "C:\Users\STiX\Desktop\Python Keylogger\test.py", line 50, in <module>
signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
NameError: name 'SIGTERM' is not defined
Otherwise this gets the job done without a problem, except that when I use the same code in a more complex script it gives me the same error but terminates right away for some reason.
On the other hand, if I write it the correct way, signal.SIGTERM, the process goes straight to termination and the exit function is never executed. Why is that?
How can I make this work on Windows and get the outcome that I described above successfully?
What you are trying to do seems a bit complicated (and dangerous from an infosec perspective ;-). I would suggest handling the reload-file-when-updated part of the functionality by adding a controller class that imports your current Python script as a module, starts it, and reloads it when it is updated (based on a function return or another technique) - look this way for inspiration - https://stackoverflow.com/a/1517072/1010991
Edit - what about exe?
Another hacky technique for manipulating the file of the currently running program is the shell ping trick. It can be used from any programming language. The trick is to send a shell command that is not executed until after the calling process has terminated. ping provides the delay, and the other commands are chained with &. For your use case it could be something like this:
import subprocess
subprocess.Popen("ping -n 2 -w 2000 1.1.1.1 > Nul & del hack.py & rename hack_temp.py hack.py & hack.py ", shell=True)
Edit 2 - Alternative solution to original question
Since Python does not block write access to the currently running script, an alternative way to solve the original question would be:
import subprocess
print "hello"
a = open(__file__,"r")
running_script_as_string = a.read()
b = open(__file__,"w")
b.write(running_script_as_string)
b.write("\nprint 'updated version of hack'")
b.close()
subprocess.Popen("python hack.py")
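Building on the observation above that Python does not keep the running script file locked, here is a minimal hedged sketch of the swap-and-restart step (Python 3 syntax; NEW_COPY is a hypothetical path where the update was already downloaded):

import os
import subprocess
import sys

NEW_COPY = 'test_new.py'             # hypothetical downloaded update
SCRIPT = os.path.abspath(__file__)

def restart_with_update():
    os.replace(NEW_COPY, SCRIPT)                 # swap in the new version
    subprocess.Popen([sys.executable, SCRIPT])   # start the new copy
    sys.exit(0)                                  # let this process end normally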

Script writes in reverse

Sorry if I asked this wrong or formatted it wrong, this is my first time here.
Basically, this script is a very, very simple text editor. The problem is, when it writes to a file, I want it to write:
Hi, my name
is bob.
But, it writes:
is bob.
Hi, my name
How can I fix this?
The code is here:
import time
import os

userdir = os.path.expanduser("~\\Desktop")
usrtxtdir = os.path.expanduser("~\\Desktop\\PythonEdit Output.txt")

def editor():
    words = input("\n")
    f = open(usrtxtdir, "a")
    f.write(words + '\n')
    nlq = input('Line saved. "/n" for new line. "/quit" to quit.\n$ ')
    if(nlq == '/quit'):
        print('Quitting. Your file was saved on your desktop.')
        time.sleep(2)
        return
    elif(nlq == '/n'):
        editor()
    else:
        print("Invalid command.\nBecause Brendan didn't expect for this to happen,\nthe program will quit in six seconds.\nSorry.")
        time.sleep(6)
        return

def lowlevelinput():
    cmd = input("\n$ ")
    if(cmd == "/edit"):
        editor()
    elif(cmd == "/citenote"):
        print("Well, also some help from internet tutorials.\nBut Brendan did all the scripting!")
        lowlevelinput()

print("Welcome to the PythonEdit Basic Text Editor!\nDeveloped completley by Brendan*!")
print("Type \"/citenote\" to read the citenote on the word Brendan.\nType \"/edit\" to begin editing.")
lowlevelinput()
Nice puzzle. Why are the lines coming out in reverse? Because of output buffering:
When you write to a file, the system doesn't immediately commit your data to disk. This happens periodically (when the buffer is full), or when the file is closed. You never close f, so it is closed for you when f goes out of scope... which happens when the function editor() returns. But editor() calls itself recursively! So the first call to editor() is the last one to exit, and its output is the last to be committed to disk. Neat, eh?
To fix the problem, it is enough to close f as soon as you are done writing:
f = open(usrtxtdir,"a")
f.write(words + '\n')
f.close() # don't forget the parentheses
Or the equivalent:
with open(usrtxtdir, "a") as f:
f.write(words + '\n')
But it's better to fix the organization of your program:
Use a loop to run editor(), not recursive calls.
An editor should write out the file at the end of the session, not with every line of input. Consider collecting the user input in a list of lines and writing everything out in one go at the end (a sketch follows below).
If you do want to write as you go, you should open the file only once, write repeatedly, and close it when done.
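For illustration, a hedged sketch of that reorganization (Python 3, keeping the original prompts): a plain while loop collects the lines in memory, and a single with block writes them out at the end:

import os

usrtxtdir = os.path.expanduser("~\\Desktop\\PythonEdit Output.txt")

def editor():
    lines = []
    while True:
        lines.append(input("\n"))
        nlq = input('Line saved. "/n" for new line. "/quit" to quit.\n$ ')
        if nlq != '/n':
            break
    # One write at the end; the with block closes (and flushes) the file
    with open(usrtxtdir, "a") as f:
        f.write('\n'.join(lines) + '\n')

editor()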
You need to close your file after writing, before you try to open it again. Otherwise your writes will not be finalized until the program is closed.
def editor():
    words = input("\n")
    f = open(usrtxtdir, "a")
    f.write(words + '\n')
    nlq = input('Line saved. "/n" for new line. "/quit" to quit.\n$ ')
    f.close()  # your missing line!
    if(nlq == '/quit'):
        print('Quitting. Your file was saved on your desktop.')
        time.sleep(2)
        return
    elif(nlq == '/n'):
        editor()
    else:
        print("Invalid command.\nBecause Brendan didn't expect for this to happen,\nthe program will quit in six seconds.\nSorry.")
        time.sleep(6)
        return
If you replace:
f = open(usrtxtdir,"a")
f.write(words + '\n')
with:
with open(usrtxtdir, "a") as f:
    f.write(words + '\n')
It comes out in order. Pretty much always use with open() for file access. It handles closing the file for you automatically, even in the event of a crash. You might also consider keeping the text in memory and writing it only upon quit, but that's not really part of the problem at hand.
Python's file.write() documentation states: "Due to buffering, the string may not actually show up in the file until the flush() or close() method is called"
Since you're recursively reopening the file and writing to it before closing it (or flushing the buffer), the outer value ('Hi, my name') isn't yet written when the inner frame (where you write 'is bob.') completes, which appears to automatically flush the write buffer.
You should be able to add file.flush() to correct it like this:
import time
import os

userdir = os.path.expanduser("~\\Desktop")
usrtxtdir = os.path.expanduser("~\\Desktop\\PythonEdit Output.txt")

def editor():
    words = input("\n")
    f = open(usrtxtdir, "a")
    f.write(words + '\n')
    f.flush()  # <----- ADD THIS LINE HERE -----< #
    nlq = input('Line saved. "/n" for new line. "/quit" to quit.\n$ ')
    if(nlq == '/quit'):
        print('Quitting. Your file was saved on your desktop.')
        time.sleep(2)
        return
    elif(nlq == '/n'):
        editor()
    else:
        print("Invalid command.\nBecause Brendan didn't expect for this to happen,\nthe program will quit in six seconds.\nSorry.")
        time.sleep(6)
        return

def lowlevelinput():
    cmd = input("\n$ ")
    if(cmd == "/edit"):
        editor()
    elif(cmd == "/citenote"):
        print("Well, also some help from internet tutorials.\nBut Brendan did all the scripting!")
        lowlevelinput()

print("Welcome to the PythonEdit Basic Text Editor!\nDeveloped completley by Brendan*!")
print("Type \"/citenote\" to read the citenote on the word Brendan.\nType \"/edit\" to begin editing.")
lowlevelinput()
Also, don't forget to close your file after you're done with it!

python: tail file in background [duplicate]

I'd like to make the output of tail -F or something similar available to me in Python without blocking or locking. I've found some really old code to do that here, but I'm thinking there must be a better way or a library to do the same thing by now. Anyone know of one?
Ideally, I'd have something like tail.getNewData() that I could call every time I wanted more data.
Non Blocking
If you are on Linux (Windows does not support calling select on files), you can use the subprocess module along with the select module.
import time
import subprocess
import select

f = subprocess.Popen(['tail', '-F', filename],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = select.poll()
p.register(f.stdout)

while True:
    if p.poll(1):
        print f.stdout.readline()
    time.sleep(1)
This polls the output pipe for new data and prints it when it is available. Normally the time.sleep(1) and print f.stdout.readline() would be replaced with useful code.
Blocking
You can use the subprocess module without the extra select module calls.
import subprocess
f = subprocess.Popen(['tail', '-F', filename],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    line = f.stdout.readline()
    print line
This will also print new lines as they are added, but it will block until the tail program is closed, probably with f.kill().
Using the sh module (pip install sh):
from sh import tail
# runs forever
for line in tail("-f", "/var/log/some_log_file.log", _iter=True):
    print(line)
[update]
Since sh.tail with _iter=True is a generator, you can:
import sh
tail = sh.tail("-f", "/var/log/some_log_file.log", _iter=True)
Then you can "getNewData" with:
new_data = tail.next()
Note that if the tail buffer is empty, it will block until there is more data (from your question it is not clear what you want to do in this case).
[update]
This works if you replace -f with -F, but in Python it would be locking. I'd be more interested in having a function I could call to get new data when I want it, if that's possible. – Eli
A container generator placing the tail call inside a while True loop and catching eventual I/O exceptions will have almost the same effect as -F.
def tail_F(some_file):
    while True:
        try:
            for line in sh.tail("-f", some_file, _iter=True):
                yield line
        except sh.ErrorReturnCode_1:
            yield None
If the file becomes inaccessible, the generator will return None. However, it still blocks until there is new data if the file is accessible. It remains unclear to me what you want to do in this case.
Raymond Hettinger's approach seems pretty good:
import os

def tail_F(some_file):
    first_call = True
    while True:
        try:
            with open(some_file) as input:
                if first_call:
                    input.seek(0, 2)
                    first_call = False
                latest_data = input.read()
                while True:
                    if '\n' not in latest_data:
                        latest_data += input.read()
                        if '\n' not in latest_data:
                            yield ''
                            if not os.path.isfile(some_file):
                                break
                            continue
                    latest_lines = latest_data.split('\n')
                    if latest_data[-1] != '\n':
                        latest_data = latest_lines[-1]
                    else:
                        latest_data = input.read()
                    for line in latest_lines[:-1]:
                        yield line + '\n'
        except IOError:
            yield ''
This generator will return '' if the file becomes inaccessible or if there is no new data.
[update]
The second to last answer circles around to the top of the file it seems whenever it runs out of data. – Eli
I think the second one will output the last ten lines whenever the tail process ends, which with -f happens whenever there is an I/O error. The tail --follow --retry behavior is not far from this for most cases I can think of in Unix-like environments.
Perhaps if you update your question to explain what your real goal is (the reason why you want to mimic tail --retry), you will get a better answer.
The last answer does not actually follow the tail and merely reads what's available at run time. – Eli
Of course, tail will display the last 10 lines by default... You can position the file pointer at the end of the file using file.seek; I will leave a proper implementation as an exercise for the reader (a sketch follows below).
IMHO the file.read() approach is far more elegant than a subprocess based solution.
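A hedged sketch of that exercise, in case it helps: seek to the end of the file once, then read forward from there, so nothing that existed before the call is re-emitted (function name invented):

import time

def follow_from_end(path):
    with open(path) as f:
        f.seek(0, 2)            # whence=2: position relative to the end
        while True:
            line = f.readline()
            if not line:        # nothing new yet
                time.sleep(0.1)
                continue
            yield line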
Purely pythonic solution using non-blocking readline()
Adapting Ijaz Ahmad Khan's answer to yield lines only when they are completely written (i.e. they end with a newline) gives a pythonic solution with no external dependencies:
import time
from typing import Iterator

def follow(file, sleep_sec=0.1) -> Iterator[str]:
    """Yield each line from a file as it is written.
    `sleep_sec` is the time to sleep after empty reads."""
    line = ''
    while True:
        tmp = file.readline()
        if tmp:  # readline() returns '' when no new data is available
            line += tmp
            if line.endswith("\n"):
                yield line
                line = ''
        elif sleep_sec:
            time.sleep(sleep_sec)

if __name__ == '__main__':
    with open("test.txt", 'r') as file:
        for line in follow(file):
            print(line, end='')
The only portable way to tail -f a file appears to be, in fact, to read from it and retry (after a sleep) if the read returns 0. The tail utilities on various platforms use platform-specific tricks (e.g. kqueue on BSD) to efficiently tail a file forever without needing sleep.
Therefore, implementing a good tail -f purely in Python is probably not a good idea, since you would have to use the least-common-denominator implementation (without resorting to platform-specific hacks). Using a simple subprocess to open tail -f and iterating through the lines in a separate thread, you can easily implement a non-blocking tail operation in Python.
Example implementation:
import threading, Queue, subprocess

tailq = Queue.Queue(maxsize=10)  # buffer at most 10 lines

def tail_forever(fn):
    p = subprocess.Popen(["tail", "-f", fn], stdout=subprocess.PIPE)
    while 1:
        line = p.stdout.readline()
        tailq.put(line)
        if not line:
            break

threading.Thread(target=tail_forever, args=(fn,)).start()

print tailq.get()         # blocks
print tailq.get_nowait()  # throws Queue.Empty if there are no lines to read
All the answers that use tail -f are not pythonic.
Here is the pythonic way (using no external tool or library):
import time

def follow(thefile):
    while True:
        line = thefile.readline()
        if not line or not line.endswith('\n'):
            time.sleep(0.1)
            continue
        yield line

if __name__ == '__main__':
    logfile = open("run/foo/access-log", "r")
    loglines = follow(logfile)
    for line in loglines:
        print(line, end='')
So, this is coming quite late, but I ran into the same problem again, and there's a much better solution now. Just use pygtail:
Pygtail reads log file lines that have not been read. It will even
handle log files that have been rotated. Based on logcheck's logtail2
(http://logcheck.org)
Ideally, I'd have something like tail.getNewData() that I could call every time I wanted more data
We've already got one, and it's very nice. Just call f.read() whenever you want more data. It will start reading where the previous read left off, and it will read through to the end of the data stream:
f = open('somefile.log')
p = 0
while True:
    f.seek(p)
    latest_data = f.read()
    p = f.tell()
    if latest_data:
        print latest_data
        print str(p).center(10).center(80, '=')
For reading line by line, use f.readline(). Sometimes the file being read will end with a partially written line. Handle that case by using f.tell() to find the current file position and f.seek() to move the file pointer back to the beginning of the incomplete line. See this ActiveState recipe for working code.
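A minimal sketch of that tell()/seek() idea (not the ActiveState recipe itself; function name invented): if readline() returns a chunk without a trailing newline, rewind to where the line began and try again later:

import time

def tail_lines(path):
    with open(path) as f:
        while True:
            where = f.tell()
            line = f.readline()
            if line.endswith('\n'):
                yield line            # a complete line
            else:
                f.seek(where)         # partial or empty: re-read it later
                time.sleep(0.1)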
You could use the 'tailer' library: https://pypi.python.org/pypi/tailer/
It has an option to get the last few lines:
import tailer

# Get the last 3 lines of the file
tailer.tail(open('test.txt'), 3)
# ['Line 9', 'Line 10', 'Line 11']
And it can also follow a file:
# Follow the file as it grows
for line in tailer.follow(open('test.txt')):
    print line
If one wants tail-like behaviour, it seems to be a good option.
Another option is the tailhead library, which provides Python versions of both the tail and head utilities, plus an API that can be used in your own module.
Originally based on the tailer module, its main advantage is the ability to follow files by path, i.e. it can handle the situation where the file is recreated. Besides that, it has some bug fixes for various edge cases.
Python is "batteries included" - it has a nice solution for it: https://pypi.python.org/pypi/pygtail
Reads log file lines that have not been read. Remembers where it finished last time, and continues from there.
import sys
from pygtail import Pygtail
for line in Pygtail("some.log"):
    sys.stdout.write(line)
You can also use the awk command.
See more at: http://www.unix.com/shell-programming-scripting/41734-how-print-specific-lines-awk.html
awk can be used to print the last line, the last few lines, or any specific line in a file.
It can be called from Python; a sketch follows below.
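For example, a small hedged sketch of calling it from Python via subprocess (file name invented; assumes awk is on the PATH):

import subprocess

# awk's END block runs after the last record, when $0 still holds the
# final line, so this prints the last line of the file
last = subprocess.check_output(['awk', 'END { print }', 'access.log'],
                               text=True)
print(last, end='')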
If you are on Linux, you can get a non-blocking implementation in Python in the following way:
import subprocess
subprocess.call('xterm -title log -hold -e \"tail -f filename\"&', shell=True, executable='/bin/csh')
print "Done"
# -*- coding:utf-8 -*-
import sys
import time


class Tail():
    def __init__(self, file_name, callback=sys.stdout.write):
        self.file_name = file_name
        self.callback = callback

    def follow(self, n=10):
        try:
            # Open in binary mode: Python 3 text files only allow seeking
            # from the end with an offset of zero
            with open(self.file_name, 'rb') as f:
                self._file = f
                self._file.seek(0, 2)
                # Remember the file length in bytes
                self.file_length = self._file.tell()
                # Print the last n lines
                self.showLastLine(n)
                # Keep reading the file and print whatever is appended
                while True:
                    line = self._file.readline()
                    if line:
                        self.callback(line.decode('UTF-8'))
                    time.sleep(1)
        except Exception as e:
            print('Could not open the file; check that it exists and that you have permission to read it')
            print(e)

    def showLastLine(self, n):
        # Guess that a line is roughly 100 bytes; 1 or 1000 would work too
        len_line = 100
        # n defaults to 10 and can also be passed in through follow()
        read_len = len_line * n
        # last_lines stores the lines that will finally be emitted
        while True:
            # If we are about to read more bytes than the file holds,
            # read the whole file and stop
            if read_len > self.file_length:
                self._file.seek(0)
                last_lines = self._file.read().decode('UTF-8', 'replace').split('\n')[-n:]
                break
            # Otherwise read read_len bytes from the end and count the newlines
            self._file.seek(-read_len, 2)
            last_words = self._file.read(read_len)
            # count is the number of newline characters in the chunk
            count = last_words.count(b'\n')
            if count >= n:
                # Enough newlines: keep the last n lines and stop
                last_lines = last_words.decode('UTF-8', 'replace').split('\n')[-n:]
                break
            # Fewer than n newlines: re-estimate the bytes per line
            else:
                if count == 0:
                    # No newline at all: treat the whole chunk as one line
                    len_perline = read_len
                else:
                    # e.g. 4 newlines in the chunk means read_len / 4 bytes per line
                    len_perline = read_len // count
                # Enlarge the read length accordingly and try again
                read_len = len_perline * n
        for line in last_lines:
            self.callback(line + '\n')


if __name__ == '__main__':
    py_tail = Tail('test.txt')
    py_tail.follow(1)
A simple tail function from the PyPI package tailread.
You can also get it via pip install tailread.
Recommended for tail access of large files.
from io import BufferedReader

def readlines(bytesio, batch_size=1024, keepends=True, **encoding_kwargs):
    '''bytesio: file path or BufferedReader
    batch_size: size to be processed
    '''
    path = None
    if isinstance(bytesio, str):
        path = bytesio
        bytesio = open(path, 'rb')
    elif not isinstance(bytesio, BufferedReader):
        raise TypeError('The first argument to readlines must be a file path or a BufferedReader')

    bytesio.seek(0, 2)
    end = bytesio.tell()

    buf = b""
    for p in reversed(range(0, end, batch_size)):
        bytesio.seek(p)
        lines = []
        remain = min(end - p, batch_size)
        while remain > 0:
            line = bytesio.readline()[:remain]
            lines.append(line)
            remain -= len(line)
        cut, *parsed = lines
        for line in reversed(parsed):
            if buf:
                line += buf
                buf = b""
            if encoding_kwargs:
                line = line.decode(**encoding_kwargs)
            yield from reversed(line.splitlines(keepends))
        buf = cut + buf

    if path:
        bytesio.close()

    if encoding_kwargs:
        buf = buf.decode(**encoding_kwargs)
    yield from reversed(buf.splitlines(keepends))


for line in readlines('access.log', encoding='utf-8', errors='replace'):
    print(line)
    if 'line 8' in line:
        break

# line 11
# line 10
# line 9
# line 8

How to print telnet response line by line?

Is it possible to print a telnet response line by line, when a command executed over telnet keeps responding to the console?
Example: I have executed a command (to collect logs), and it keeps displaying logs in the console window. Can we read the response line by line and print it, without missing a single line?
The snippet below writes the log, but only after a certain specified time. If I stop the service/script (Ctrl-C) in between, it doesn't write anything at all.
import sys
import telnetlib
import time

orig_stdout = sys.stdout
f = open('output.txt', 'w')
sys.stdout = f

try:
    tn = telnetlib.Telnet(IP)
    tn.read_until(b"pattern1")
    tn.write(username.encode('ascii') + b"\n")
    tn.read_until(b"pattern2")
    tn.write(command1.encode('ascii') + b"\n")
    z = tn.read_until(b'abcd\b\n', 600)
    array = z.splitlines()
except:
    sys.exit("Telnet failed for " + IP)

for i in array:
    i = i.strip()
    print(i)

sys.stdout = orig_stdout
f.close()
You can use tn.read_until(b"\n") in a loop in order to read one line at a time during execution of your telnet command:
while True:
    line = tn.read_until(b"\n")  # Read one line
    print(line)
    if b'abcd' in line:  # last line, no more to read
        break
You can use the read_very_eager, read_eager, read_lazy, and read_very_lazy functions specified in the documentation to read your stream byte by byte. You can then handle the "until" logic in your own code and at the same time write the read lines to the console.
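A hedged sketch of that idea (host is a placeholder): poll with read_very_eager(), which returns whatever is already buffered without blocking, and split into lines yourself so each complete line prints as soon as it arrives:

import telnetlib
import time

tn = telnetlib.Telnet('192.0.2.1')    # placeholder host
buf = b''
while True:
    buf += tn.read_very_eager()       # returns b'' when nothing is pending
    while b'\n' in buf:
        line, buf = buf.split(b'\n', 1)
        print(line.decode('ascii', errors='replace'))
    time.sleep(0.1)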
