Get last execution time of a Python script [closed]

I'm writing a script in Python that takes a timestamp as an input. For the first execution, it should be something like the value of now(). For subsequent executions, however, the input parameter should be the last execution time of the same script. This is to avoid getting duplicate results.
Can anyone give me a clue please?

As far as I know, there is no "last executed" attribute for files. Some operating systems have a "last accessed" attribute, but even if that's updated on execution, it would also be updated any time the file was read, which is probably not what you want.
If you need to be able to store information between runs, you'll need to save that data somewhere. You could write it to a file, save it to a database, or write it to a caching service, such as memcached.
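For example, here is a minimal sketch of the file-based approach (the state file name and the ISO timestamp format are just assumptions):

import os
from datetime import datetime, timezone

STATE_FILE = "last_run.txt"  # hypothetical location of the saved timestamp

def read_last_run():
    """Return the previous run's timestamp, or now() on the very first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return datetime.fromisoformat(f.read().strip())
    return datetime.now(timezone.utc)

def write_last_run(ts):
    """Persist this run's timestamp for the next execution to pick up."""
    with open(STATE_FILE, "w") as f:
        f.write(ts.isoformat())

if __name__ == "__main__":
    since = read_last_run()
    # ... fetch only records newer than `since` to avoid duplicate results ...
    write_last_run(datetime.now(timezone.utc))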

Related

Re-run a python script after a few days if it fails [closed]

Assuming a script fails and the failure is captured in a try/except, how do I run the Python script again after a few days?
I could use sleep for this, but I don't think it will work because the server restarts every day. What is the best solution to this problem?
Typically you want to address this with a cron job.
I would probably do the following:
When the Python file runs, write a log file with the status and date/time.
Set up a cron job on the server that runs, say, once every 24 hours, checks that log file, and either does nothing or runs the Python file again.
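A minimal sketch of that idea (the log location, status format, and schedule are assumptions, not part of the original answer): the job appends a line such as "OK 2023-01-01T03:00:00" or "FAILED ..." to a log on every run, and cron runs a small checker daily that reruns the job only after a failure.

# checker.py -- a rough sketch; cron would run it daily, e.g.
#   0 3 * * * /usr/bin/python3 /opt/jobs/checker.py
import subprocess
import sys
from pathlib import Path

LOG_FILE = Path("/opt/jobs/last_status.log")     # assumed log location
JOB = [sys.executable, "/opt/jobs/main_job.py"]  # assumed script path

def last_status():
    """Return the last status line the job wrote, or None if it never ran."""
    if not LOG_FILE.exists():
        return None
    lines = LOG_FILE.read_text().strip().splitlines()
    return lines[-1] if lines else None

if __name__ == "__main__":
    status = last_status()
    # Rerun the job if it never ran or its last recorded run failed.
    if status is None or status.startswith("FAILED"):
        subprocess.run(JOB, check=False)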

Dashboard for monitoring the results of an iterative program [closed]

I run a Python code in which an iterative process is done. Every few minutes an iteration is performed and the results are stored in a file. Currently, after each iteration I have to run another Python script to plot the recent results to monitor the progress. I want to have a dashboard which plots the recent results whenever the results file is updated.
It's not entirely clear what your question is, but it sounds like you want to monitor the output file for changes and plot them when the file is changed.
If you're using Linux (as the tag suggests), then I'd suggest using inotify, a Linux API that lets you monitor filesystem events (like file writes!).
There is a Python wrapper around this, also named inotify: https://pypi.org/project/inotify/. You should be able to add a watch on your log file and run your plotting function when it's modified (perhaps by watching for the IN_CLOSE_WRITE event).
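A rough sketch using that wrapper (the results file name and the plotting placeholder are assumptions): watch the directory holding the results file and replot whenever the file is closed after a write.

import inotify.adapters

RESULTS_FILE = "results.csv"   # assumed name of the results file
WATCH_DIR = "."                # directory that contains it

def plot_results():
    """Placeholder for your existing plotting script."""
    pass

def main():
    watcher = inotify.adapters.Inotify()
    watcher.add_watch(WATCH_DIR)
    # Fires each time a writer closes the results file after writing to it.
    for _, type_names, _, filename in watcher.event_gen(yield_nones=False):
        if filename == RESULTS_FILE and "IN_CLOSE_WRITE" in type_names:
            plot_results()

if __name__ == "__main__":
    main()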

How do I set a dataflow window that will continually retrigger for more data after all records have been written to bigquery? [closed]

We have a streaming pipeline reading from pub/sub and writing to bigquery. It wasn't working without adding a window function, because a default global window only fires once and doesn't know when to re-trigger. There is no GroupBy or combine.
We tried to add a Beam Window with a trigger, but there are some problems. If we use a global window, it runs really slowly and sometimes throws null pointer exceptions. If we use a fixed window, it's fast, but it doesn't always seem to acknowledge the Pub/Sub messages.
What we'd really want is a pipeline that reads from pub/sub, gets a batch of however many it could get, writes to bigquery, and once everything is written and the pubsub messages are acknowledged, retrigger the read-from-pubsub. Is this possible?
I think you are looking for a composite trigger: Repeatedly.forever, which you can combine with AfterCount.
Something like this, which triggers after every 1000 elements read:
Repeatedly.forever(AfterCount(1000))
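For reference, a hedged sketch of how the same trigger might be attached in the Beam Python SDK (the snippet above uses the Java names; the "messages" PCollection and the element count below are assumptions):

import apache_beam as beam
from apache_beam.transforms import trigger, window

# Re-fire the global window each time roughly 1000 new elements have arrived.
windowed = (
    messages  # the PCollection read from Pub/Sub
    | "Window" >> beam.WindowInto(
        window.GlobalWindows(),
        trigger=trigger.Repeatedly(trigger.AfterCount(1000)),
        accumulation_mode=trigger.AccumulationMode.DISCARDING,
    )
)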

How to verify it's the same computer [closed]

I was wondering if there's any Python code I can run that, assuming the Python code has not been tampered with, will return an ID specific to that computer that cannot be spoofed. My initial thought was to use MAC addresses, but I know those can be easily spoofed. How about CPU serial number? How would I go about doing that?
Thanks.
This is an impossible problem, as it's equivalent to effective DRM. Every identifier can be spoofed. Remember, the user can tamper with your Python code (any compiling/obfuscating/encrypting you do can be reversed, so don't bother) to return whatever identifier they want. (And even if your code were absolutely read-only, they could change the Python runtime or the OS to do whatever they want.)

Two functions that work on their own create an error when called together [closed]

I have code that downloads files, unzips them, searches them, and uploads the results to AWS S3.
There is one function to download, another to unzip, another to do the searches, and another to upload the files that were found; finally, all of these functions are called from a single function.
The problem is that at some point (specifically in the last function) the execution hits an error.
The code is 1684 lines long and can take up to 4 hours to execute.
If an error occurs inside a function, the try/except blocks guarantee the final return.
I've tried calling every function on its own, and each one works.
If I call all of the functions except the last one, it still works.
If I call only the last function (the upload to S3), it works.
I believe it could be related to RAM; trust me, the code is huge.

Basically, my problem was solved by creating a ".sh" script that calls all the functions sequentially.
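For illustration only, here is the same idea expressed in Python instead of the .sh file the poster used (the stage script names are assumptions): run each stage as a separate process so that memory held by one stage is freed before the next one starts.

import subprocess
import sys

# Assumed stage scripts; each runs in its own process so its memory is
# released before the next stage starts.
STAGES = ["download.py", "unzip.py", "search.py", "upload_s3.py"]

for stage in STAGES:
    result = subprocess.run([sys.executable, stage])
    if result.returncode != 0:
        print(f"{stage} failed with exit code {result.returncode}")
        break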
