Global file initialization in Flask - python

I have been googling to find out how to create a global file that stays open until my application has finished. I need to write the output of all modules in a view to a single file, so that users can download it as a report from the front end once the application has finished running. This is the class I have created:
import time

class FileOperations:
    def __init__(self):
        self.current_time = time.strftime('%Y-%m-%d_%H-%M-%S')
        self.outfile = open("reports/username_" + self.current_time + ".txt", 'w')
        self.outfile.write("Final Report \n")
        self.outfile.write("*****************")
I want this file to be generated when the application starts running, and it should be available to all modules.

A context manager is a way to safely handle operations such as writing to a file. It also lets you trace more easily when the file opens and closes.
I suggest you record the time when the application starts and reuse that file, as I take it you intended. That's probably "safer" than keeping the file open.
import time

def get_time():
    global start_time
    start_time = time.strftime('%Y-%m-%d_%H-%M-%S')

def write_to_file():
    # append, so repeated calls from different modules accumulate in one report
    with open('reports/username_{}.txt'.format(start_time), 'a') as f:
        f.write("Final Report \n")
        f.write("*****************")

if 'start_time' not in globals():
    get_time()
The conditional runs when the module is imported. By checking whether start_time is already defined in the module scope, we make sure it is only set once. (Python caches imported modules, so the module body normally executes only on the first import; the guard also protects against an explicit reload.)
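For illustration, here is a minimal sketch of how another module could use this, assuming the code above is saved as a module named report.py (the module name is hypothetical):
# some_view.py - hypothetical consumer of the report module
import report

def my_view():
    # every importer sees the same start_time, so all writes
    # land in the same report file
    report.write_to_file()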

Importing variable from called script repeats the entire calling script

I have a main.py file
import common_functions as conf
print("Main File:")
filename = conf.testing()
from TC import TC
and I want to assign the return value of the function below to the variable "filename".
common_functions.py
def testing():
    print("This should only print once!")
    return "awesome file"
I then want to be able to access this variable in another file that I am importing (TC):
TC.py
from main import filename
print("TC File:")
print(f"Filename is: {filename}")
however, currently, if I do this, then the output is:
Main File:
This should only print once!
Main File:
This should only print once!
TC File:
Filename is: awesome file
I am really struggling with this one. I am trying to pass a variable into the called scripts, but that variable is only assigned by another function, so it seems as though every time it's called, the function kicks off again.
I would like to be able to set the variable filename in the main file from the function it is calling, and then in the called file (TC.py) I would like to be able to access that variable as a string, not rerun everything.
Is that possible?
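For context, the function runs twice because main.py executes once as the __main__ script and again when TC.py does from main import filename, which imports it a second time as the module main. One standard pattern that avoids this (a sketch, not from the original thread) is to move the shared value into a third module:
# config.py - hypothetical shared module
import common_functions as conf

# executes once; later imports reuse the module cached in sys.modules
filename = conf.testing()
Both main.py and TC.py can then do from config import filename, and testing() prints only once.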

File atomicity with luigi python library

Do I need to worry about file atomicity in luigi with the following code, which pickles a dataframe and returns it as the output of a task? I don't get the atomicity part, as I would hope luigi would just wait for the task to finish writing the file before stating that the task is complete.
import pickle

import luigi
import pandas as pd
from luigi import format

class readSQLtoPickle(luigi.Task):
    sql = luigi.Parameter()
    pickle = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(self.pickle, format=format.Nop)

    def run(self):
        data = pd.read_sql(self.sql, ariel)  # ariel: the asker's DB connection
        with self.output().open('w') as f:
            pickle.dump(data, f)

class grabData(luigi.Task):  # standard Luigi Task class
    sql = luigi.Parameter(default="SELECT * FROM DIM_DRUG_PRODUCT")
    pickle = luigi.Parameter(default="drug_product.pkl")

    def requires(self):
        # we need to read the log file before we can process it
        return readSQLtoPickle(sql=self.sql, pickle=self.pickle)

    def run(self):
        with self.input().open('r') as f:
            df = pickle.load(f)
        print(type(df))
        print(df.head(100))
        print(len(df))
Writing to a LocalTarget is atomic. Behind the scenes, luigi first writes to a temp file and then moves the temp file to your actual target. Look for atomic_file in the source code.
I don't get the atomicity part, as I would hope luigi would just wait for the task to complete writing a file before stating the task is complete.
If you use a local scheduler to run your task (--local-scheduler) and have only one worker, then you should be fine.
It becomes a problem if you have several workers working on the same tasks, each trying to identify which tasks are now available to run.
In your example, one worker could be checking whether grabData is ready to run and see that the file is available, while another worker is in the middle of readSQLtoPickle, still writing the file.
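For illustration, the write-then-rename trick that luigi's atomic_file relies on looks roughly like this (a minimal sketch of the pattern, not luigi's actual code; atomic_write and its parameters are hypothetical):
import os
import tempfile

def atomic_write(path, data):
    # write to a temp file in the same directory, then rename it over the
    # target; a reader can never observe a half-written file
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp_path)
        raise
Until the rename happens, other workers see no file at all, which is exactly what makes the completeness check reliable.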

Sublime plugin for executing a command

I've been writing Markdown files lately, and have been using the awesome table-of-contents generator (github-markdown-toc) tool/script on a daily basis, but I'd like the TOC to be regenerated automatically each time I press Ctrl+S, right before the .md file is saved in my Sublime Text 3 environment.
What I have done till now was to generate it from the shell manually, using:
gh-md-toc --insert my_file.md
So I wrote a simple plugin, but for some reason I can't see the result I wanted.
I see my print, but the TOC is not generated.
Does anybody have any suggestions? What's wrong?
import sublime, sublime_plugin
import subprocess

class AutoRunTOCOnSave(sublime_plugin.EventListener):
    """ A class to listen for events triggered by ST. """

    def on_post_save_async(self, view):
        """
        This is called after a view has been saved. It runs in a separate thread
        and does not block the application.
        """
        file_path = view.file_name()
        if not file_path:
            return
        NOT_FOUND = -1
        pos_dot = file_path.rfind(".")
        if pos_dot == NOT_FOUND:
            return
        file_extension = file_path[pos_dot:]
        if file_extension.lower() == ".md":
            print("Markdown TOC was invoked: handling with *.md file")
            subprocess.Popen(["gh-md-toc", "--insert ", file_path])
Here's a slightly modified version of your plugin:
import sublime
import sublime_plugin
import subprocess

class AutoRunTOCOnSaveListener(sublime_plugin.EventListener):
    """ A class to listen for events triggered by ST. """

    def on_post_save_async(self, view):
        """
        This is called after a view has been saved. It runs in a separate thread
        and does not block the application.
        """
        file_path = view.file_name()
        if not file_path:
            return
        if file_path.split(".")[-1].lower() == "md":
            print("Markdown TOC was invoked: handling with *.md file")
            # full path to the script, and "--insert" without the stray trailing space
            subprocess.Popen(["/full/path/to/gh-md-toc", "--insert", file_path])
I changed a couple of things, along with the name of the class. First, I simplified your test for determining whether the current file is a Markdown document (fewer operations means less room for error). Second, you should include the full path to the gh-md-toc command, as subprocess.Popen may not find it on the default path. I also dropped the trailing space from "--insert "; with the space, the literal argument "--insert " is passed and the script will not recognize the flag.
I figured it out: since gh-md-toc is a bash script, I replaced the following line:
subprocess.Popen(["gh-md-toc", "--insert ", file_path])
with:
subprocess.check_call("gh-md-toc --insert %s" % file_path, shell=True)
So now it works well, on each save.
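One caveat worth adding (my note, not part of the original answers): interpolating file_path into a shell command breaks on paths containing spaces or shell metacharacters. Quoting the argument keeps the shell=True approach but makes it safe:
import shlex
import subprocess

# same call, but the path is quoted before being handed to the shell
subprocess.check_call("gh-md-toc --insert %s" % shlex.quote(file_path), shell=True)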

Python NamedTemporaryFile - ValueError When Reading

I am having an issue writing to a NamedTemporaryFile in Python and then reading it back. The function downloads a file via tftpy to the temp file, reads it, hashes the contents, and then compares the hash digest to the original file. The function in question is below:
def verify_upload(self, image, destination):
    # create a tftp client
    client = TftpClient(self.target_ip, 69, localip=self.binding_ip)
    # generate a temp file to hold the download info
    if not os.path.exists("temp"):
        os.makedirs("temp")
    with NamedTemporaryFile(dir="temp") as tempfile, open(image, 'r') as original:
        try:
            # attempt to download the target image
            client.download(destination, tempfile, timeout=self.download_timeout)
        except TftpTimeout:
            raise RuntimeError("Could not download {0} from {1} for verification".format(destination, self.target_ip))
        # hash the original file and the downloaded version
        original_digest = hashlib.sha256(original.read()).hexdigest()
        uploaded_digest = hashlib.sha256(tempfile.read()).hexdigest()
        if self.verbose:
            print("Original SHA-256: {0}\nUploaded SHA-256: {1}".format(original_digest, uploaded_digest))
        # return the hash comparison
        return original_digest == uploaded_digest
The problem is that every time I try to execute the line uploaded_digest = hashlib.sha256(tempfile.read()).hexdigest(), the application errors out with ValueError: I/O operation on closed file. Since the with block has not ended, I am struggling to understand why the temp file would be closed. The only possibility I can think of is that tftpy is closing the file after the download, but I cannot find any point in the tftpy source where this would happen. Note that I have also tried inserting the line tempfile.seek(0) to put the file back in a proper state for reading; however, this also gives me the ValueError.
Is tftpy closing the file? I read that there may be a bug in NamedTemporaryFile causing this problem. Why is the file closed before the reference defined by the with block goes out of scope?
TFTPy is closing the file. When you were looking at the source, you missed the following code path:
class TftpClient(TftpSession):
    ...
    def download(self, filename, output, packethook=None, timeout=SOCK_TIMEOUT):
        ...
        self.context = TftpContextClientDownload(self.host,
                                                 self.iport,
                                                 filename,
                                                 output,
                                                 self.options,
                                                 packethook,
                                                 timeout,
                                                 localip=self.localip)
        self.context.start()
        # Download happens here
        self.context.end()  # <--
TftpClient.download calls TftpContextClientDownload.end:
class TftpContextClientDownload(TftpContext):
    ...
    def end(self):
        """Finish up the context."""
        TftpContext.end(self)  # <--
        self.metrics.end_time = time.time()
        log.debug("Set metrics.end_time to %s", self.metrics.end_time)
        self.metrics.compute()
TftpContextClientDownload.end calls TftpContext.end:
class TftpContext(object):
    ...
    def end(self):
        """Perform session cleanup, since the end method should always be
        called explicitely by the calling code, this works better than the
        destructor."""
        log.debug("in TftpContext.end")
        self.sock.close()
        if self.fileobj is not None and not self.fileobj.closed:
            log.debug("self.fileobj is open - closing")
            self.fileobj.close()  # <--
and TftpContext.end closes the file.
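A workaround that follows from this (a sketch, not part of the original answer; client and destination come from the question's code): create the temp file with delete=False so it survives tftpy's close(), then reopen it by name to hash it:
import hashlib
import os
from tempfile import NamedTemporaryFile

tmp = NamedTemporaryFile(dir="temp", delete=False)  # survives tftpy's close()
try:
    client.download(destination, tmp, timeout=30)   # tftpy closes tmp here
    with open(tmp.name, 'rb') as f:                 # reopen by name to hash
        uploaded_digest = hashlib.sha256(f.read()).hexdigest()
finally:
    os.unlink(tmp.name)                             # clean up manually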

Read from a log file as it's being written using python

I'm trying to find a nice way to read a log file in real time using python. I'd like to process lines from a log file one at a time as it is written. Somehow I need to keep trying to read the file until it is created and then continue to process lines until I terminate the process. Is there an appropriate way to do this? Thanks.
Take a look at this PDF starting at page 38, ~slide I-77 and you'll find all the info you need. Of course the rest of the slides are amazing, too, but those specifically deal with your issue:
import time

def follow(thefile):
    thefile.seek(0, 2)  # Go to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)  # Sleep briefly
            continue
        yield line
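Driving the generator might look like this (the log file name is hypothetical):
logfile = open("app.log")    # hypothetical log file
for line in follow(logfile):
    print(line, end='')      # handle each new line as it arrives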
You could try something like this:
import time

# assumes `logfile` is an already-open file object for the log being written
while True:
    where = logfile.tell()
    line = logfile.readline()
    if not line:
        time.sleep(1)
        logfile.seek(where)
    else:
        print(line, end='')  # line already has a newline
Example was extracted from here.
As this question is tagged with Python and logging, there is another possibility.
I assume the log is written by a Python logger, i.e. something logging.Handler based.
You can create a class that grabs the (named) logger instance and overrides the emit function to push each record to a GUI (if you also need console output, just add a console handler alongside the file handler).
Example:
import logging

class log_viewer(logging.Handler):
    """ Class to redistribute python logging data """

    # have a class member to store the existing logger
    logger_instance = logging.getLogger("SomeNameOfYourExistingLogger")

    def __init__(self, *args, **kwargs):
        # Initialize the Handler
        logging.Handler.__init__(self, *args)
        # optionally take a format
        # setFormatter function is derived from logging.Handler
        for key, value in kwargs.items():
            if key == "format":
                self.setFormatter(value)
        # make the logger send data to this class
        self.logger_instance.addHandler(self)

    def emit(self, record):
        """ Overload of logging.Handler method """
        record = self.format(record)
        # ---------------------------------------
        # Now you can send it to a GUI or similar
        # "Do work" starts here.
        # ---------------------------------------
        # just as an example what e.g. a console
        # handler would do:
        print(record)
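Hooking it up is just a matter of instantiating the class, since __init__ registers the handler with the named logger (reusing the hypothetical logger name from the class above):
import logging

viewer = log_viewer(format=logging.Formatter("%(asctime)s %(message)s"))

log = logging.getLogger("SomeNameOfYourExistingLogger")
log.warning("hello")  # arrives in log_viewer.emit and gets printed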
I am currently using similar code to add a TkinterTreectrl.Multilistbox for viewing logger output at runtime.
Aside: the logger only gets data from the moment it is initialized, so if you want all your data to be available, you need to initialize it at the very beginning. (I know this is what you'd expect, but I think it is worth mentioning.)
Maybe you could do a system call to tail -f using os.system().
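A minimal sketch of that idea (the log path is hypothetical); note that os.system blocks until tail exits, and the output goes straight to the terminal rather than back into Python:
import os

os.system("tail -f /var/log/app.log")  # blocks; prints new lines as they appear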
