Here is a simplified version of my function that, after sending a few web requests and making a few checks, modifies a certain .txt file whose name is contained in the variable filename_partners:
async def add_partner(session, user_id):
    user_premium_request = await session.get(SOME_REQUEST)
    if (await user_premium_request.content.read())[-145:-141] == b"true":
        with open(filename_partners, "r+") as partners_library:
            file = partners_library.read()
        if (str(user_id) + ":") in file:
            pass
        else:
            user_item_id_list = []
            user_item_request = await session.get(SOME_REQUEST)
            user_items_list = json.loads(await user_item_request.text())["data"]
            for item in user_items_list:
                user_item_id_list.append(item["assetId"])
            if len(user_item_id_list) >= 1:
                with open(filename_partners, "a+") as partners_library:
                    partners_library.write("\n" + str(user_id) + ":" + str(time.time()))
    else:
        with open(filename_partners, "r+") as partners_library:
            file = partners_library.read()
        if (str(user_id) + ":") in file:
            lines = file.splitlines()
            cursor_token = lines[0]
            with open(filename_partners_temp, "a+") as partners_library_temp:
                partners_library_temp.write(cursor_token)
                for line in lines[1:]:
                    if (str(user_id) + ":") in line:
                        partners_library_temp.write(
                            "\n"
                            + str(user_id)
                            + ":"
                            + str(time.time() + 2592000)
                        )
                    else:
                        partners_library_temp.write("\n" + line)
                partners_library_temp.seek(0)
                new_file = partners_library_temp.read()
            with open(filename_partners, "w+") as partners_library_2:
                partners_library_2.write(new_file)
            os.remove(filename_partners_temp)
The issue is that my program has to call this function thousands of times, which can take a very long time. To combat that, I decided to call it concurrently:
# seller_ids is a list of 200-300 items
tasks = []
for user_id in seller_ids:
    tasks.append(add_partner(session, user_id))
await asyncio.gather(*tasks)
However, doing so causes my program to freeze/hang after 2-20 minutes of running. No errors are raised, and I am fairly certain the cause is that I am calling a lot of .txt-modifying functions concurrently. To test my theory, I tried calling them consecutively:
for user_id in seller_ids_clear:
    await add_partner(session, user_id)
After running my program that way, it was no longer freezing/hanging; however, it was running about 10 times slower, which is an issue in my case.
I am sure the file operations are to blame here, but I am not sure how calling that async function concurrently hundreds of times can cause my program to freeze with no errors being displayed. If anyone here has more experience with file operations and asyncio, please let me know your suggestions and theories!
UPDATE: It seems the program as a whole is not freezing; only the add_partner() function freezes/hangs, causing a big part of the program to wait indefinitely for it to finish, while other parts of the program, not connected with add_partner() whatsoever, continue to function normally.
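For anyone who wants to test the file-contention theory, here is a minimal sketch (not part of my original program, and untested) that keeps the web requests concurrent but serializes only the file access with a single asyncio.Lock shared by every task:
import asyncio

file_lock = asyncio.Lock()  # one lock shared by all add_partner tasks

async def add_partner(session, user_id):
    # ... the web requests and checks stay concurrent, exactly as before ...
    async with file_lock:
        # Only one task at a time may read or rewrite filename_partners here,
        # so concurrent calls can no longer interleave their file writes.
        with open(filename_partners, "r+") as partners_library:
            file = partners_library.read()
        # ... the rest of the file logic from above ...
That way the slow part (the requests) still overlaps, and only the comparatively fast file section runs one task at a time.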
I'm making a Python file, and I want a different screen to show up for someone opening the file for the very first time than for a returning visitor. Like so:
if USER_NEW:
    print('New user screen')
else:
    print('other screen')
Exactly how would I go about this?
You would need to store the information in another file that you can read later, when your script is run again. When your script runs, read this file: if it contains a value saying the visitor has already looked at the file, show the returning-visitor screen; otherwise show the new-user screen and write that value so the next visit is recognized.
For example:
# 'a+' creates user.txt if it doesn't exist; rewind before reading,
# because 'a+' starts with the position at the end of the file
with open('user.txt', 'a+') as file:
    file.seek(0)
    if file.read() == '':
        print('New user screen')
        file.write('visited')  # mark the visit so the next run takes the other branch
    else:
        print('other screen')
You can save program information in a state file. Where to put that file depends on the operating system, and even then you'll find disagreement. On Windows you can use the environment variable %LOCALAPPDATA%. On most unixy systems you can use ~/.myapp or perhaps ~/.config/myapp (I like the second much better). I don't know the conventions on other systems.
You also need to establish a convention for the format of the file. Here I'm just going to check whether a marker file exists.
import platform
import os

if platform.system() == "Windows":
    my_app_path = os.path.expandvars("%LOCALAPPDATA%/myapp")
elif platform.system() == "Linux":
    my_app_path = os.path.expanduser("~/.config/myapp")
else:
    exit(2)

first_run_file = os.path.join(my_app_path, "first_run.txt")
if not os.path.exists(first_run_file):
    first_run = True
    os.makedirs(my_app_path, exist_ok=True)  # tolerate an existing directory
    with open(first_run_file, "w"):
        pass  # just create the empty marker file
else:
    first_run = False
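With first_run set, the two-screen check from the question drops straight in:
if first_run:
    print('New user screen')
else:
    print('other screen')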
So I created this lab assignment for class. It seemed fine, except that it doesn't run at all in IDLE: IDLE opens and then immediately closes. It ran fine in PowerShell and in the IDE I was using, but it would not run at all for my professor.
The program just opens randomNumbers.txt and then lists the values inside. I have had no problems with any of my programs before this one, and this one seems to be the simplest. Is there a simple mistake I'm overlooking? Also, if you have any suggestions for ways to optimize this code, I'd love to hear them; I've been using Python for 2 months now.
Sorry if this post was a bit long, I'm just really confused.
import time, sys

def main():
    global file
    file = open("randomNumber.txt","r")
    prepArray()
    print("\n-----------\n# | Value\n-----------")
    printArray()
    file.close()
    closeInput = input("\nPress ENTER to exit")
    print("Closing...")

def prepArray():
    global numberSplit
    global file
    openFile = input("Open randomNumber.txt (Y/N): ")
    print("\n")
    if openFile.lower() == "y":
        try:
            f = open("randomNumber.txt","r")
        except IOError:
            print("Error opening file: Did you run the generator first?")
            main()
    elif openFile.lower() == "n":
        sys.exit()
    else:
        print("\nInvalid input, enter either (Y for yes, N for no)\n")
        main()
    numberSplit = file.readline()
    numberSplit = numberSplit.split(",")
    numberSplit = numberSplit[:-1]

def printArray():
    global numberSplit
    lineCount = 1
    totalCount = 0
    for item in numberSplit:
        print(lineCount,"-",item)
        lineCount += 1
        totalCount += float(item)
    print("\nTotal:",round((totalCount),2))

main()
randomNumbers.txt just contains
119.18,470.54,159.89,360.56,47.15,489.77,242.54,
I was testing your code and it works fine! I'm sure your problem is that you don't have randomNumbers.txt in the same folder. Please try putting your .txt file in the same folder as your script and it'll work! ;-)
I'd like to make the output of tail -F or something similar available to me in Python without blocking or locking. I've found some really old code to do that here, but I'm thinking there must be a better way or a library to do the same thing by now. Anyone know of one?
Ideally, I'd have something like tail.getNewData() that I could call every time I wanted more data.
Non-blocking
If you are on Linux (Windows does not support calling select on files), you can use the subprocess module along with the select module.
import time
import subprocess
import select

f = subprocess.Popen(['tail', '-F', filename],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = select.poll()
p.register(f.stdout)

while True:
    if p.poll(1):
        print(f.stdout.readline())
    time.sleep(1)
This polls the output pipe for new data and prints it when it is available. Normally the time.sleep(1) and print(f.stdout.readline()) would be replaced with useful code.
Blocking
You can use the subprocess module without the extra select module calls.
import subprocess

f = subprocess.Popen(['tail', '-F', filename],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    line = f.stdout.readline()
    print(line)
This will also print new lines as they are added, but it will block until the tail program is closed, probably with f.kill().
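For example, stopping it from elsewhere in the program could look like this (a sketch, assuming the Popen handle f from above is still in scope):
f.kill()  # terminate tail; readline() then returns an empty byte string
f.wait()  # reap the child process so it does not linger as a zombie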
Using the sh module (pip install sh):
from sh import tail

# runs forever
for line in tail("-f", "/var/log/some_log_file.log", _iter=True):
    print(line)
[update]
Since sh.tail with _iter=True is a generator, you can:
import sh
tail = sh.tail("-f", "/var/log/some_log_file.log", _iter=True)
Then you can "getNewData" with:
new_data = next(tail)
Note that if the tail buffer is empty, it will block until there is more data (from your question it is not clear what you want to do in this case).
[update]
This works if you replace -f with -F, but in Python it would be locking. I'd be more interested in having a function I could call to get new data when I want it, if that's possible. – Eli
A container generator that places the tail call inside a while True loop and catches eventual I/O exceptions will have almost the same effect as -F.
def tail_F(some_file):
    while True:
        try:
            for line in sh.tail("-f", some_file, _iter=True):
                yield line
        except sh.ErrorReturnCode_1:
            yield None
If the file becomes inaccessible, the generator will return None. However, it still blocks until there is new data if the file is accessible. It remains unclear to me what you want to do in this case.
Raymond Hettinger's approach seems pretty good:
import os

def tail_F(some_file):
    first_call = True
    while True:
        try:
            with open(some_file) as input:
                if first_call:
                    input.seek(0, 2)
                    first_call = False
                latest_data = input.read()
                while True:
                    if '\n' not in latest_data:
                        latest_data += input.read()
                        if '\n' not in latest_data:
                            yield ''
                            if not os.path.isfile(some_file):
                                break
                            continue
                    latest_lines = latest_data.split('\n')
                    if latest_data[-1] != '\n':
                        latest_data = latest_lines[-1]
                    else:
                        latest_data = input.read()
                    for line in latest_lines[:-1]:
                        yield line + '\n'
        except IOError:
            yield ''
This generator will return '' if the file becomes inaccessible or if there is no new data.
[update]
The second to last answer circles around to the top of the file it seems whenever it runs out of data. – Eli
I think the second will output the last ten lines whenever the tail process ends, which with -f is whenever there is an I/O error. The tail --follow --retry behavior is not far from this for most cases I can think of in unix-like environments.
Perhaps if you update your question to explain what is your real goal (the reason why you want to mimic tail --retry), you will get a better answer.
The last answer does not actually follow the tail and merely reads what's available at run time. – Eli
Of course, tail will display the last 10 lines by default... You can position the file pointer at the end of the file using file.seek; I will leave a proper implementation as an exercise to the reader.
IMHO the file.read() approach is far more elegant than a subprocess-based solution.
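For readers who want the shape of that exercise, here is a minimal sketch (the log file name is a placeholder): position the pointer at the end with seek, then poll for new data.
import os
import time

with open('some_log_file.log') as f:
    f.seek(0, os.SEEK_END)  # skip the existing contents
    while True:
        line = f.readline()
        if line:
            print(line, end='')
        else:
            time.sleep(0.5)  # no new data yet; poll again shortly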
Purely pythonic solution using non-blocking readline()
I am adapting Ijaz Ahmad Khan's answer to only yield lines when they are completely written (lines end with a newline char), which gives a pythonic solution with no external dependencies:
import time
from typing import Iterator

def follow(file, sleep_sec=0.1) -> Iterator[str]:
    """Yield each line from a file as it is written.
    `sleep_sec` is the time to sleep after empty reads."""
    line = ''
    while True:
        tmp = file.readline()
        if tmp:  # readline() returns '' (never None) when there is no new data
            line += tmp
            if line.endswith("\n"):
                yield line
                line = ''
        elif sleep_sec:
            time.sleep(sleep_sec)

if __name__ == '__main__':
    with open("test.txt", 'r') as file:
        for line in follow(file):
            print(line, end='')
The only portable way to tail -f a file appears to be, in fact, to read from it and retry (after a sleep) if the read returns 0. The tail utilities on various platforms use platform-specific tricks (e.g. kqueue on BSD) to efficiently tail a file forever without needing sleep.
Therefore, implementing a good tail -f purely in Python is probably not a good idea, since you would have to use the least-common-denominator implementation (without resorting to platform-specific hacks). Using a simple subprocess to open tail -f and iterating through the lines in a separate thread, you can easily implement a non-blocking tail operation in Python.
Example implementation:
import threading, queue, subprocess

tailq = queue.Queue(maxsize=10)  # buffer at most 10 lines

def tail_forever(fn):
    p = subprocess.Popen(["tail", "-f", fn], stdout=subprocess.PIPE)
    while 1:
        line = p.stdout.readline()
        tailq.put(line)
        if not line:
            break

threading.Thread(target=tail_forever, args=(fn,)).start()

print(tailq.get())  # blocks
print(tailq.get_nowait())  # raises queue.Empty if there are no lines to read
All the answers that use tail -f are not pythonic.
Here is the pythonic way (using no external tool or library):
import time

def follow(thefile):
    while True:
        line = thefile.readline()
        if not line or not line.endswith('\n'):
            time.sleep(0.1)
            continue
        yield line

if __name__ == '__main__':
    logfile = open("run/foo/access-log", "r")
    loglines = follow(logfile)
    for line in loglines:
        print(line, end='')
So, this is coming quite late, but I ran into the same problem again, and there's a much better solution now. Just use pygtail:
Pygtail reads log file lines that have not been read. It will even
handle log files that have been rotated. Based on logcheck's logtail2
(http://logcheck.org)
Ideally, I'd have something like tail.getNewData() that I could call every time I wanted more data
We already have one, and it's very nice. Just call f.read() whenever you want more data. It will start reading where the previous read left off, and it will read through to the end of the data stream:
f = open('somefile.log')
p = 0
while True:
    f.seek(p)
    latest_data = f.read()
    p = f.tell()
    if latest_data:
        print(latest_data)
        print(str(p).center(10).center(80, '='))
For reading line-by-line, use f.readline(). Sometimes the file being read will end with a partially written line. Handle that case by using f.tell() to find the current file position and f.seek() to move the file pointer back to the beginning of the incomplete line. See this ActiveState recipe for working code.
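A minimal sketch of that partial-line handling (the file name is assumed; the ActiveState recipe has the complete version):
f = open('somefile.log')
while True:
    pos = f.tell()  # remember where this read started
    line = f.readline()
    if not line:
        break  # no new data at the moment
    if not line.endswith('\n'):
        f.seek(pos)  # partial line: rewind and retry on a later pass
        break
    print(line, end='')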
You could use the 'tailer' library: https://pypi.python.org/pypi/tailer/
It has an option to get the last few lines:
# Get the last 3 lines of the file
tailer.tail(open('test.txt'), 3)
# ['Line 9', 'Line 10', 'Line 11']
And it can also follow a file:
# Follow the file as it grows
for line in tailer.follow(open('test.txt')):
    print(line)
If one wants tail-like behaviour, that one seems to be a good option.
Another option is the tailhead library, which provides Python versions of both the tail and head utilities, plus an API that can be used in your own module.
Originally based on the tailer module, its main advantage is the ability to follow files by path, i.e. it can handle the situation where a file is recreated. Besides that, it has some bug fixes for various edge cases.
Python is "batteries included" - it has a nice solution for it: https://pypi.python.org/pypi/pygtail
Reads log file lines that have not been read. Remembers where it finished last time, and continues from there.
import sys
from pygtail import Pygtail

for line in Pygtail("some.log"):
    sys.stdout.write(line)
You can also use the awk command.
See more at: http://www.unix.com/shell-programming-scripting/41734-how-print-specific-lines-awk.html
awk can be used to tail the last line, the last few lines, or any line in a file, and it can be called from Python.
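For instance, a minimal sketch of calling awk from Python (the log file name is assumed): awk's END block runs after the last record, so 'END { print }' prints the final line of the file.
import subprocess

last_line = subprocess.check_output(
    ["awk", "END { print }", "access.log"], text=True
)
print(last_line, end='')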
If you are on Linux, you can implement a non-blocking version in Python in the following way.
import subprocess

subprocess.call('xterm -title log -hold -e \"tail -f filename\"&', shell=True, executable='/bin/csh')
print("Done")
# -*- coding:utf-8 -*-
import sys
import time


class Tail():
    def __init__(self, file_name, callback=sys.stdout.write):
        self.file_name = file_name
        self.callback = callback

    def follow(self, n=10):
        try:
            # Open the file in binary mode: in Python 3, text-mode files
            # cannot seek backwards relative to the end of the file
            with open(self.file_name, 'rb') as f:
                self._file = f
                self._file.seek(0, 2)
                # Remember the length of the file in bytes
                self.file_length = self._file.tell()
                # Print the last n lines
                self.showLastLine(n)
                # Keep reading the file and print whatever gets appended
                while True:
                    line = self._file.readline()
                    if line:
                        self.callback(line.decode('utf-8'))
                    time.sleep(1)
        except Exception as e:
            print('Failed to open the file; check that it exists and that you have permission to read it')
            print(e)

    def showLastLine(self, n):
        # Assume a line is roughly 100 characters; 1 or 1000 would also work
        len_line = 100
        # n defaults to 10 and can also be passed in through follow()
        read_len = len_line * n
        # last_lines will hold the lines we finally print
        while True:
            # If we are about to read more bytes than the file contains,
            # read the whole file and break
            if read_len > self.file_length:
                self._file.seek(0)
                last_lines = self._file.read().decode('utf-8').split('\n')[-n:]
                break
            # Read read_len bytes from the end and count the newlines
            self._file.seek(-read_len, 2)
            last_words = self._file.read(read_len).decode('utf-8', errors='replace')
            # count is the number of newline characters
            count = last_words.count('\n')
            if count >= n:
                # Enough newlines: simply take the last n lines
                last_lines = last_words.split('\n')[-n:]
                break
            # Fewer than n newlines
            else:
                # If there was no newline at all, treat the whole chunk as one line
                if count == 0:
                    len_perline = read_len
                # Otherwise estimate the average line length from what we read,
                # e.g. 4 newlines in 1000 bytes means roughly 250 bytes per line
                else:
                    len_perline = read_len // count
                # Grow the read length accordingly and try again
                read_len = len_perline * n
        for line in last_lines:
            self.callback(line + '\n')


if __name__ == '__main__':
    py_tail = Tail('test.txt')
    py_tail.follow(1)
A simple tail function from the PyPI package tailread. You can install it via pip install tailread. Recommended for tail access of large files.
from io import BufferedReader

def readlines(bytesio, batch_size=1024, keepends=True, **encoding_kwargs):
    '''bytesio: file path or BufferedReader
    batch_size: size to be processed
    '''
    path = None
    if isinstance(bytesio, str):
        path = bytesio
        bytesio = open(path, 'rb')
    elif not isinstance(bytesio, BufferedReader):
        raise TypeError('The first argument to readlines must be a file path or a BufferedReader')

    bytesio.seek(0, 2)
    end = bytesio.tell()

    buf = b""
    for p in reversed(range(0, end, batch_size)):
        bytesio.seek(p)
        lines = []
        remain = min(end - p, batch_size)
        while remain > 0:
            line = bytesio.readline()[:remain]
            lines.append(line)
            remain -= len(line)
        cut, *parsed = lines
        for line in reversed(parsed):
            if buf:
                line += buf
                buf = b""
            if encoding_kwargs:
                line = line.decode(**encoding_kwargs)
            yield from reversed(line.splitlines(keepends))
        buf = cut + buf

    if path:
        bytesio.close()

    if encoding_kwargs:
        buf = buf.decode(**encoding_kwargs)
    yield from reversed(buf.splitlines(keepends))


for line in readlines('access.log', encoding='utf-8', errors='replace'):
    print(line)
    if 'line 8' in line:
        break

# line 11
# line 10
# line 9
# line 8
I'm starting to work on problems for Google's Code Jam. However, there seems to be a problem with my submission. Whenever I submit, I am told "Your output should start with 'Case #1: '". My output comes from the print statement print "Case #%s: %s" % (y + 1, p), which prints Case #1: etc. when I run my code.
I looked into it and it said "Your output should start with 'Case #1: ': If you get this message, make sure you did not upload the source file in place of the output file, and that you're outputting case numbers properly. The first line of the output file should always start with "Case #1:", followed by a space or the end of the line."
So what is an output file and how would I incorporate it into my code?
Extra info: this is my code; I'm saving it as GoogleCode1.py and submitting that file. I wrote it in IDLE.
import string

firstimput = raw_input("cases ")
for y in range(int(firstimput)):
    nextimput = raw_input("imput ")
    firstlist = string.split(nextimput)
    firstlist.reverse()
    p = ""
    for x in range(len(firstlist)):
        p = p + firstlist[x] + " "
    p = p[:-1]
    print "Case #%s: %s" % (y + 1, p)
Run the script in a shell, and redirect the output.
python GoogleCode1.py > GoogleCode1.out
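Since the script reads its data with raw_input, the downloaded input file can be redirected in the same way (the input file name here is only an example):
python GoogleCode1.py < A-small.in > GoogleCode1.out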
I/O redirection aside, the other way to do this would be to read from and write to files directly. Look up file handling in Python:
input_file = open('/path/to/input_file')
output_file = open('/path/to/output_file', 'w')

for case_number, line in enumerate(input_file, 1):
    answer = myFunction(line)
    output_file.write("Case #%d: %s\n" % (case_number, answer))

input_file.close()
output_file.close()
Cheers
Make sure you're submitting a file containing what your code outputs -- don't submit the code itself during a practice round.