Is Python (Flask) not closing the file correctly?

I'm new to Python. I have a separate Flask web application that reads the 'log.txt' file in real time (via AJAX).
The expected result is that it shows "Message 1" and, 5 seconds later, "Message 2". I have the scripts below to write into the file:
As a standalone Python script, the following works (I see the first message and the second one 5 seconds later):
f = sys.stdout = sys.stderr = open('static/log.txt', 'w')
print "Message 1"
f.close()
time.sleep(5)
f = sys.stdout = sys.stderr = open('static/log.txt', 'a')
print "Message 2"
f.close()
time.sleep(5)
When I put it into a web app and the function is called as below, both messages appear together after 10 seconds instead of one after the other.
@app.route('/mywriter/')
def myApp():
    f = sys.stdout = sys.stderr = open('static/log.txt', 'w')
    print "Message 1"
    f.close()
    time.sleep(5)
    f = sys.stdout = sys.stderr = open('static/log.txt', 'a')
    print "Message 2"
    f.close()
    time.sleep(5)
I should point out at this stage that I was using flush() and fsync(), but from what I understand f.close() does both of these anyway. Besides, it works as a standalone script, just not as a method inside the web app.

Use the debugger in your IDE. Set a breakpoint at the first time.sleep() and step through line by line; you should be able to diagnose the issue much more precisely.
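As an aside, one way to take stdout redirection and its buffering out of the picture entirely is to write to the log file through an explicit helper and flush each message. A minimal sketch (log() is a hypothetical name; the path is the one from the question):

```python
# Hypothetical helper: append one line and flush it immediately, without
# ever touching sys.stdout/sys.stderr.
def log(path, message):
    with open(path, 'a') as f:
        f.write(message + '\n')
        f.flush()  # push Python's buffer to the OS right away

# usage with the path from the question:
# log('static/log.txt', 'Message 1')
# time.sleep(5)
# log('static/log.txt', 'Message 2')
```

If the messages still arrive together, the buffering is happening on the reading side (the AJAX polling or a caching layer), not in the writer.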

Related

Is it possible to get another thread's print content?

I execute two methods, get and post, at the same time in different threads. The first method, get, lasts a long time (100 seconds), and the second, post, repeats every 5 seconds. The get request prints a lot, as follows:
import time

def get():
    for i in range(100):
        print(i)
        print("\n")
        time.sleep(1)
What post needs to do is collect all the print output from get and append it to a txt file every time it executes (if the file exists, new content is added). Here is the pseudocode:
def post(request):
    print_from_get = sys.stdout
    with open("output.txt", "w") as text_file:
        text_file.write(print_from_get)
My question is whether we can collect the print output from get, and if so, how should I do it?
I present at the end a way of doing what is requested, but I do not recommend using bare prints for this. There are better tools, and if you still want to stick with print, you can use the file keyword argument to specify where the output should go:
import time
import sys

def get(out_stream=sys.stdout):
    for i in range(100):
        print(i, file=out_stream)
        print("\n", file=out_stream)
        time.sleep(1)
You can pass a file object opened with open as the argument, and the output will go to that file. Alternatively, you can use an io.StringIO as @Barmar suggested, and write its contents to a file once get finishes.
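A minimal sketch of that io.StringIO variant (the simplified get drops the sleeps and runs only a few iterations for brevity): contextlib.redirect_stdout sends every bare print() into an in-memory buffer that can be written to a file afterwards. Note that this rebinds sys.stdout process-wide, so it captures output from all threads.

```python
import io
from contextlib import redirect_stdout

# Simplified stand-in for the question's get(): sleeps and the 100
# iterations are dropped so the sketch runs instantly.
def get(n=3):
    for i in range(n):
        print(i)

buf = io.StringIO()
with redirect_stdout(buf):  # every bare print() inside now writes to buf
    get()

captured = buf.getvalue()
# once get() has finished, dump the captured text to a file:
# with open("output.txt", "w") as text_file:
#     text_file.write(captured)
```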
All other options mess with sys.stdout, the default file object that print uses (it is the default value of the file keyword argument). Replacing it affects everything that writes to sys.stdout, which can be more than the print statements you intended.
If you need to redirect bare print statements (those not using print's file keyword), you must replace sys.stdout. The most obvious replacement is an output file object:
from threading import Thread
import time
import sys

def get():
    for i in range(100):
        print(i)
        time.sleep(1)

if __name__ == "__main__":
    old_stdout = sys.stdout
    with open("output.txt", "a") as text_fil:
        sys.stdout = text_fil
        get_thread = Thread(None, get)
        get_thread.start()
        get_thread.join()
    sys.stdout = old_stdout
If you need one writing thread and one reading thread, exactly as the question states, you can use queues, but a simpler approach uses pipes (I added a dummy thread, postthread, that calls post to mimic the OP's use):
from threading import Thread
import time
import sys
import os

def get():
    for i in range(100):
        print(i)
        time.sleep(1)

def post(read_pipe):
    with open("output.txt", "ab") as text_fil:  # os.read yields bytes
        try:
            data = os.read(read_pipe, 65535)  # arbitrary (but large) number
            text_fil.write(data)
        except Exception:
            return True  # the read descriptor was closed: quit
        return len(data) == 0  # the write descriptor was closed: quit

def postthread(read_pipe):
    for _ in range(21):
        time.sleep(5)
        if post(read_pipe):
            return

if __name__ == "__main__":
    old_stdout = sys.stdout
    r_fd, w_fd = os.pipe()
    with os.fdopen(w_fd, 'wt') as write_pipe:  # the write end as a file object
        read_pipe = r_fd
        sys.stdout = write_pipe
        get_thread = Thread(None, get)
        post_thread = Thread(None, postthread, args=(read_pipe,))
        post_thread.start()
        get_thread.start()
        get_thread.join()
        sys.stdout = old_stdout
    # closing the write pipe (and its file descriptor) causes read to see EOF
    post_thread.join()
    os.close(r_fd)

Python Run Code When Thread Finishes

I am currently writing a simple file server using Python and the Socket library. I have most of it finished, and now I'm writing a way to push an update to the server. Here's what I expect it to do:
1. Client sends the new Python file to the server
2. Server deletes the previous Python file
3. Server downloads the new Python file
4. Server runs the new code
5. Server with the old code quits
As of right now, my program can do steps 1-3, but it still cannot run the new code and quit the program. Since the server program is about 100 lines long, here's a link to the pastebin with the code. If you want to use the uploader too, here it is.
Looking at my code, the problem either lies in my update code:
class updServer(Thread):
    def __init__(self, s):
        Thread.__init__(self)
        self.s = s

    def run(self):
        os.remove(os.path.realpath(__file__))
        fn = self.s.recv(2048)
        with open(fn, 'wb') as f:
            still_data = True
            while still_data:
                data = self.s.recv(2048)
                if not data:
                    still_data = False
                if data == '':
                    still_data = False
                f.write(data)
            f.flush()
            os.fsync(f.fileno())
            f.close()
        global name_of_file
        name_of_file = fn
        return None
Or in the way that I'm trying to run the new program under it:
elif file_name_sent == b'to update server':
    new_t = updServer(conn)
    threads.append(new_t)
    new_t.start()
    global name_of_file
    execfile(name_of_file)
    os._exit(0)
Note: I have also tried to use os.command and subprocess.call, and I also know that python is on my PATH. Instructions for the uploader are here if you want them.
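Not part of the posted code, but a common way to handle steps 4-5 is to start the freshly downloaded file in a new interpreter and let the old process exit, rather than calling execfile (which is Python 2 only) inside the old, partly deleted process. A sketch, with run_new_code as a hypothetical helper:

```python
import subprocess
import sys

# Hypothetical helper: run the freshly downloaded script in a brand-new
# interpreter process; the old server can then simply exit.
def run_new_code(script_path):
    return subprocess.Popen([sys.executable, script_path])

# in the server, instead of execfile(name_of_file) followed by os._exit(0):
# run_new_code(name_of_file)
# os._exit(0)
```

Starting a fresh process avoids inheriting the old server's open sockets and module state into the new code.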

How to print telnet response line by line?

Is it possible to print the telnet response line by line, when a command executed over telnet keeps responding on the console?
Example: I have executed a command (to collect logs) that keeps displaying logs in the console window. Can we read the response line by line and print it, without missing a single line?
The snippet below writes the log, but only after a certain specified time. If I stop the service/script (Ctrl-C) in between, it doesn't write anything.
import sys
import telnetlib
import time

orig_stdout = sys.stdout
f = open('output.txt', 'w')
sys.stdout = f
try:
    tn = telnetlib.Telnet(IP)
    tn.read_until(b"pattern1")
    tn.write(username.encode('ascii') + b"\n")
    tn.read_until(b"pattern2")
    tn.write(command1.encode('ascii') + b"\n")
    z = tn.read_until(b'abcd\b\n', 600)
    array = z.splitlines()
except:
    sys.exit("Telnet Failed to %s" % IP)
for i in array:
    i = i.strip()
    print(i)
sys.stdout = orig_stdout
f.close()
You can use tn.read_until(b"\n") in a loop in order to read one line at a time during the execution of your telnet command:
while True:
    line = tn.read_until(b"\n")  # read one line
    print(line)
    if b'abcd' in line:  # last line, no more to read
        break
You can use the read_very_eager, read_eager, read_lazy, and read_very_lazy functions specified in the documentation to read your stream byte by byte. You can then handle the "until" logic in your own code and write the read lines to the console at the same time.
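A sketch of that manual "until" loop, using telnetlib's read_some() (which blocks until at least one byte is available); tn stands for an already connected telnetlib.Telnet object, and the b'abcd' stop marker is taken from the question:

```python
# tn is assumed to be an already connected telnetlib.Telnet object.
def read_lines_until(tn, stop=b'abcd'):
    buf = b''
    while True:
        buf += tn.read_some()  # blocks until at least one byte arrives
        while b'\n' in buf:
            line, buf = buf.split(b'\n', 1)
            print(line.decode('ascii', 'replace'))  # emit each completed line
            if stop in line:
                return
```

Because lines are printed as soon as they complete, interrupting the script mid-stream still leaves everything read so far on the console.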

Python : Handling multiple telnet sessions to same IP

I have a list of commands (any number) which have to be executed over telnet on one particular IP/host, with the output stored in a separate file per command. These commands are specific to log collection.
I need to start all of the required commands at once (enabling log collection), with one telnet session per command. Some time later (not a timed activity), another script should stop all of them, with the logs stored in separate files (based on the list of commands executed).
So far I have only managed to do this for one command, and only for a short interval of time.
I hope the details are clear; please let me know if not. Please help me in this regard.
import sys
import telnetlib
import time

orig_stdout = sys.stdout
f = open('out.txt', 'w')
sys.stdout = f
try:
    tn = telnetlib.Telnet(IP)
    tn.read_until(b"login: ")
    tn.write(username.encode('ascii') + b"\n")
    tn.read_until(b"# ")
    tn.write(command1.encode('ascii') + b"\n")
    # time.sleep(30)
    z = tn.read_until(b'abcd\b\n', 4)  # just a random pattern, so that it reads for a long duration
    # z = tn.read_very_eager()
    output = z.splitlines()
except:
    sys.exit("Telnet Failed to %s" % IP)
for i in output:
    i = i.strip().decode("utf-8")
    print(i)
sys.stdout = orig_stdout
f.close()
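One possible shape for the multi-session version (a sketch, not a tested solution): give each command its own Telnet session and a thread that keeps draining pending output into that command's own file, until a shared threading.Event is set. The helper below covers only the draining loop; read_chunk would be that session's telnetlib.Telnet.read_very_eager bound method:

```python
import threading

# Sketch of the per-session draining loop: read_chunk is a callable that
# returns whatever bytes have arrived so far (e.g. a Telnet object's
# read_very_eager), outfile is the file dedicated to this command, and
# stop_event is a threading.Event shared by all sessions.
def stream_to_file(read_chunk, outfile, stop_event, poll=1.0):
    with open(outfile, 'wb') as f:
        while not stop_event.is_set():
            f.write(read_chunk())  # drain whatever has arrived so far
            stop_event.wait(poll)  # poll again after a short pause
        f.write(read_chunk())      # final drain after stop is requested
```

Each thread would be created with something like threading.Thread(target=stream_to_file, args=(tn.read_very_eager, 'cmd1.log', stop_event)) after logging in and sending its command; calling stop_event.set() ends all the writer threads at once.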

Not able to redirect shared library output to /dev/null in python

I'm trying to redirect the output of the constructor of an object that is created by a shared library to /dev/null. The side effect of the construction is the printing of lots of junk that I don't need.
The code is as follows:
f = open("/dev/null", 'w')
tmpErr = sys.stderr
tmpOut = sys.stdout
sys.stderr = f
sys.stdout = f
foo = Foo(param1, param2)
sys.stderr = tmpErr
sys.stdout = tmpOut
f.close()
If I replace the function call with a simple print (print "hello", for example) or a call to a local function, the redirection seems to work.
Also, using the ">&" operator in the shell (tcsh) I managed to redirect all the output perfectly.
What am I missing here?
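What's likely missing: assigning to sys.stdout/sys.stderr only redirects Python-level writes, which is why a plain print is silenced. A C shared library writes directly to file descriptors 1 and 2, so the redirection has to happen at the descriptor level, e.g. with os.dup2. A sketch (Foo stands for the noisy constructor from the question):

```python
import os

# Redirect the C-level stdout/stderr (file descriptors 1 and 2) to
# /dev/null, returning saved duplicates so they can be restored.
def silence_fds():
    saved = (os.dup(1), os.dup(2))
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 1)
    os.dup2(devnull, 2)
    os.close(devnull)
    return saved

def restore_fds(saved):
    os.dup2(saved[0], 1)
    os.dup2(saved[1], 2)
    os.close(saved[0])
    os.close(saved[1])

# saved = silence_fds()
# foo = Foo(param1, param2)  # the library's chatter now goes to /dev/null
# restore_fds(saved)
```

This is also why the shell's ">&" works: the shell redirects the process's file descriptors before the program starts, which affects C and Python output alike.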
