Dashboard for monitoring the results of an iterative program [closed] - python

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 2 years ago.
I run a Python script in which an iterative process is performed. Every few minutes an iteration completes and the results are stored in a file. Currently, after each iteration I have to run another Python script to plot the recent results and monitor progress. I want a dashboard that plots the recent results whenever the results file is updated.

It's not entirely clear what your question is, but it sounds like you want to monitor the output file for changes and plot them when the file is changed.
If you're using Linux (as the tag suggests), then I'd suggest using inotify, a Linux API that lets you monitor filesystem events (like file writes!).
There is a Python wrapper around this, also named inotify: https://pypi.org/project/inotify/. You should be able to add a watch on your log file and run your plotting function when it's modified (perhaps by watching for the IN_CLOSE_WRITE event).
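A minimal sketch using that inotify package, assuming the results file is named results.txt and that plot_results() stands in for your existing plotting script (both are placeholders for your actual paths and code):

import inotify.adapters

def plot_results():
    ...  # re-read results.txt and redraw the plot here

watcher = inotify.adapters.Inotify()
watcher.add_watch("results.txt")  # hypothetical path to the results file

# event_gen yields (header, type_names, watch_path, filename) tuples
for _header, type_names, _path, _filename in watcher.event_gen(yield_nones=False):
    if "IN_CLOSE_WRITE" in type_names:
        plot_results()  # the file was written and closed, so refresh the plot

Note that if the iterative program replaces the file (write to a temp file, then rename) rather than writing in place, you may need to watch the containing directory instead of the file itself.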

Related

Re-run a python script after a few days if it fails [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 1 year ago.
Assuming a script fails and the failure is caught in a try/except block, how can I run the Python script again after a few days?
I could use sleep here, but I don't think it will work because the server restarts every day. What is the best solution to this problem?
Typically you want to address this with a cron job.
I would probably do the following:
When the Python script runs, save a log file with its status and the date/time.
Set up a cron job on the server that runs, say, once every 24 hours, checks that log file, and either does nothing or runs the Python script again, as in the sketch below.
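A minimal sketch of the checker side, assuming the main script writes a line like "ok,2024-01-01T06:00:00" (or "failed,...") to a status file when it finishes; the paths, script names, and the three-day retry window are all hypothetical:

# checker.py - run daily by cron, e.g. with a crontab entry like:
#   0 6 * * * python /path/to/checker.py
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

LOG = Path("/var/log/myjob_status.txt")   # written by the main script
RETRY_AFTER = timedelta(days=3)           # how long to wait before retrying

def last_status():
    """Return (status, timestamp) from the log, or (None, None) if absent."""
    if not LOG.exists():
        return None, None
    status, timestamp = LOG.read_text().strip().split(",", 1)
    return status, datetime.fromisoformat(timestamp)

status, when = last_status()
if status != "ok" and (when is None or datetime.now() - when >= RETRY_AFTER):
    subprocess.run(["python", "/path/to/main_script.py"], check=False)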

How do I set a dataflow window that will continually retrigger for more data after all records have been written to bigquery? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 2 years ago.
We have a streaming pipeline reading from pub/sub and writing to bigquery. It wasn't working without adding a window function, because a default global window only fires once and doesn't know when to re-trigger. There is no GroupBy or combine.
We tried to add a Beam window with a trigger, but there are some problems. If we use a global window, it runs really slowly and sometimes throws null pointer exceptions. If we use a fixed window, it's fast, but it sometimes doesn't seem to acknowledge the pub/sub messages.
What we'd really want is a pipeline that reads from pub/sub, gets a batch of however many it could get, writes to bigquery, and once everything is written and the pubsub messages are acknowledged, retrigger the read-from-pubsub. Is this possible?
I think you are looking for a composite trigger named Repeatedly.forever, which you can combine with AfterCount.
Something like this, which fires after every 1000 elements read:
Repeatedly.forever(AfterCount(1000))
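That snippet uses the Java SDK's trigger names; since the question is tagged python, here is a minimal sketch of the equivalent trigger in the Beam Python SDK. The subscription, table name, and the assumption that messages are JSON-encoded are all placeholders:

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import trigger, window

options = PipelineOptions(streaming=True)  # streaming mode for Pub/Sub

with beam.Pipeline(options=options) as p:
    (p
     | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/my-sub")
     | "Parse" >> beam.Map(json.loads)  # assumes JSON-encoded messages
     | "Retrigger" >> beam.WindowInto(
           window.GlobalWindows(),
           trigger=trigger.Repeatedly(trigger.AfterCount(1000)),
           accumulation_mode=trigger.AccumulationMode.DISCARDING)
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery("my-project:my_dataset.my_table"))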

Get last execution time of script python [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
I'm writing a script in Python that takes a timestamp as input. For the first execution, it should be something like the now() function. However, for further executions, the input parameter should be the last execution time of the same script. This is done to avoid getting duplicate results.
Can anyone give me a clue please?
As far as I know, there is no "last executed" attribute for files. Some operating systems have a "last accessed" attribute, but even if that's updated on execution, it would also be updated any time the file was read, which is probably not what you want.
If you need to be able to store information between runs, you'll need to save that data somewhere. You could write it to a file, save it to a database, or write it to a caching service, such as memcached.
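For example, a minimal sketch of the file-based approach, where the state file name and its location next to the script are hypothetical:

from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("last_run.txt")  # hypothetical location of the saved timestamp

def read_last_run():
    """Return the previous run's timestamp, or now() on the very first run."""
    if STATE_FILE.exists():
        return datetime.fromisoformat(STATE_FILE.read_text().strip())
    return datetime.now(timezone.utc)

def write_last_run(ts):
    STATE_FILE.write_text(ts.isoformat())

last_run = read_last_run()
# ... process everything newer than last_run ...
write_last_run(datetime.now(timezone.utc))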

Using Task Scheduler vs Multithreading [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I needed to perform a heavy operation while using a Tkinter GUI, so the GUI would stop responding as soon as the operation began. I had two choices (or that's what I think, as I'm new to Python and to programming in general): multithreading or Schtasks.
I chose the easier of the two, i.e. Schtasks, as I'm working on a deadline (and I don't know much about multithreading).
What I'm doing is accessing a Python file from a different project.
I have Schtasks run batch files that live in this other project (the one containing the Python file I need to run).
Now the constraint is that a batch file can only run this Python file as a whole, not a particular method inside it (isn't it?), and I need to call only a particular method.
So, my question is:
Is the approach I'm using correct? If not, what do you suggest would be better? Or should I just switch to multithreading?
Your question opens a huge topic: what you are trying to do is generally not simple and can run into large problems that you cannot even foresee if you don't know the topic of multitasking very well. One issue, for example, is synchronizing access to the file you mention across different threads, processes, or tasks.
However, if you want to start somewhere and just want to write something which separates your GUI code from your computation code, I recommend you start here: http://docs.python.org/2/library/multiprocessing.html .
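A minimal sketch of that separation with multiprocessing and Tkinter, where heavy_computation() is a hypothetical stand-in for your actual work: the GUI starts a worker process and polls a queue with after(), so the main loop never blocks.

import multiprocessing as mp
import queue
import tkinter as tk

def heavy_computation(result_queue):
    # Placeholder for the long-running work.
    total = sum(i * i for i in range(10_000_000))
    result_queue.put(total)

def start_work():
    result_queue = mp.Queue()
    mp.Process(target=heavy_computation, args=(result_queue,), daemon=True).start()
    poll(result_queue)

def poll(result_queue):
    try:
        result = result_queue.get_nowait()
    except queue.Empty:
        root.after(100, poll, result_queue)  # check again in 100 ms
    else:
        label.config(text=f"Result: {result}")

if __name__ == "__main__":
    root = tk.Tk()
    label = tk.Label(root, text="Idle")
    label.pack()
    tk.Button(root, text="Start", command=start_work).pack()
    root.mainloop()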

Python R/W to text file network [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 9 years ago.
What could happen if multiple users run copies of the same Python script, which is designed to read/write data to a single text file stored on a network device, at the same time?
Will the processes stop working?
If so, what could be the solution?
Many bad things can happen. I don't think the processes will stop working, at least not because of concurrent access to a file, but you could end up with inconsistent file contents: for example, if one process writes hello while another writes to the file at the same time, you might get a line like hhelllolo.
One solution, as suggested, is to use a database; another is to create a mechanism for locking the file against concurrent access (which might be cumbersome because you're working over a network, not on the same computer).
Another solution I can think of is to create a simple server-side script that handles the requests and locks the file against concurrent access. But that is almost the same as using a database: you'd be creating a storage system from scratch, so why bother :)
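For the locking route, here is a minimal sketch using advisory locks with fcntl.flock (Linux/Unix only); the share path is a placeholder, and note that advisory locks may not be honored reliably on network filesystems such as NFS or SMB mounts, so test on the actual share before relying on this.

import fcntl

def append_line(path, line):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold an exclusive lock
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_line("/mnt/share/data.txt", "hello")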
Hope this helps!
