I am trying to write a script to automate my backups under Linux, and I would like some kind of system tray notification (KDE) that a backup is running.
After reading this other SE post and doing some research, I cannot seem to find a D-Bus library for bash. Instead, I am thinking of tweaking the python script from his answer and turning it into an add-on for my main backup script: my bash backup script would repeatedly call the python notification script to create and update the notification, and remove it when the backup is done.
However, I'm not quite sure how to implement this on the python side, since if I were to just call python3 notify.py argument1 argument2 from bash, it would create a new instance of the python script every time.
Essentially, here's what I'm trying to do in my bash script:
#!/bin/bash
# awesome backup script
./notification.py startbackup    # creates a new instance of the python script and sets up the
                                 # KDE progress bar, possibly returning an ID that is reused later?

# do backup things here.....

# periodically:
./notification.py updateProgress 10%
./notification.py updateProgress 20%
# etc...

# finish the backup...
./notification.py endbackup      # set the progress bar to complete and do cleanup
Since I haven't done anything like this before and am not sure what to search for: how would I go about implementing something like this in the python/bash setup I have now?
I.e. if I were to make a bash variable to store an instance ID returned from the first call to the python script, and pass it back on each subsequent call, how would I have to write my python script so that it acts on the notification created originally, rather than creating new ones?
Either keep the process running and send commands to it through a pipe, or use a file to store the instance ID.
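A minimal sketch of the first approach, assuming dbus-python is installed; the FIFO path, app name, and icon are arbitrary placeholders. It relies on the freedesktop notification spec's replaces_id mechanism: Notify() returns an id, and passing that id back updates the existing notification instead of creating a new one.

#!/usr/bin/env python3
# notification.py -- long-running helper; reads commands from a named
# pipe and keeps reusing one notification via replaces_id.
import os
import dbus

FIFO = "/tmp/backup-notify.fifo"   # arbitrary path

bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notifications = dbus.Interface(proxy, "org.freedesktop.Notifications")

def show(body, replaces_id=0, percent=None):
    # The 'value' hint is drawn as a progress bar by some notification
    # servers (Plasma among them) and ignored elsewhere.
    hints = {} if percent is None else {"value": dbus.Int32(percent)}
    return notifications.Notify("backup", replaces_id, "drive-harddisk",
                                "Backup", body, [], hints, 0)

if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

notification_id = 0
done = False
while not done:
    with open(FIFO) as fifo:               # blocks until bash writes a line
        for line in fifo:
            command, _, arg = line.strip().partition(" ")
            if command == "startbackup":
                notification_id = show("Backup started...")
            elif command == "updateProgress":
                notification_id = show("Progress: " + arg,
                                       notification_id,
                                       int(arg.rstrip("%")))
            elif command == "endbackup":
                show("Backup complete.", notification_id, 100)
                done = True

The bash side then starts ./notification.py & once and just writes lines into the pipe, e.g. echo "updateProgress 20%" > /tmp/backup-notify.fifo; no instance ID ever has to cross the bash/python boundary, because the single python process remembers it.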
I am currently using Airflow to run a DAG (say dag.py) which has a few tasks and then a python script to execute (done via bash_operator). The python script (say report.py) basically reads data from a cloud (s3) location into a dataframe, does a few transformations, and then sends it out as a report over email.
But the issue I'm having is that Airflow is basically running this python script, report.py, every time Airflow scans the repository for changes (i.e. every 2 mins). So the script is being run every 2 mins (and hence the email is being sent out every two minutes!).
Is there any workaround for this? Can we use something other than a bash operator (bear in mind that we need to do a few dataframe transformations before sending out the report)?
Thanks!
Just make sure you do everything serious in the tasks, not in the python script itself. The script will be executed often by the scheduler, but it should simply create tasks and build dependencies between them. The actual work is done in the 'execute' methods of the tasks.
For example, rather than sending the email in the script, you should add an 'EmailOperator' as a task with the right dependencies, so the execute method of the operator runs not when the file is parsed by the scheduler, but when all of its dependencies (the other tasks) have completed.
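A rough sketch of that layout, assuming Airflow 2.x import paths; the dag id, schedule, callable body, and email fields are all placeholders:

# dag.py -- parsing this file only builds the task graph; the heavy
# work happens inside the operators' execute() methods at run time.
from datetime import datetime

from airflow import DAG
from airflow.operators.email import EmailOperator
from airflow.operators.python import PythonOperator

def transform_report(**context):
    # read from s3, transform the dataframe, stage the report here --
    # this runs only when the scheduler triggers an actual DAG run
    ...

with DAG(
    dag_id="report_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(
        task_id="transform_report",
        python_callable=transform_report,
    )
    send_report = EmailOperator(
        task_id="send_report",
        to="team@example.com",            # placeholder recipient
        subject="Daily report",
        html_content="Report attached.",
    )
    transform >> send_report              # email only after the transform

The scheduler still parses the module every couple of minutes, but parsing now only constructs operators and dependencies; nothing touches s3 and no mail goes out until a scheduled run actually executes the tasks.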
I'm trying to use an rpyc server via a Progress script, and have the script do different tasks based on values it gets back from the client.
I'm using an rpyc server to automate some tasks on demand from users, and trying to implement the client in a Progress script this way:
1. The Progress script launches.
2. The Progress script calls the rpyc client via cmd, to run a function that checks whether the server is live, and returns some sort of value to indicate whether it is (it doesn't really matter to me what kind of indication is used; I guess chars like 0 = live, 1 = not live would be preferable).
3. Based on the value returned in the last step, it either notifies the user that the server is down and quits, or proceeds to the rest of the code.
The part I'm struggling with is stage 2: how to call the client in a way that stores the value it should return, and how to actually return that value properly to the script.
I thought about using the -param command, but couldn't figure out how to use it in my scenario, where the value I'm trying to return goes to a script that is already mid-run, rather than just calling another Progress script with the value.
The code of the client that I use for checking if the server is up is:
import rpyc

host = "myserver"   # placeholder for the real server address

def client_check():
    c = rpyc.connect(host, 18812)
    c.close()

if __name__ == "__main__":
    try:
        client_check()
        print("0")   # server is live
    except Exception:
        print("1")   # server is down
For the Progress script, as mentioned, I haven't really managed to figure out the right way to call the client and store a value the way I'm trying to.
I guess I could make the server create a file to use as an indicator of its status, and check for the file at the start of the script, but I don't know if that's the right way to do it, and I'd prefer to avoid this if possible.
I am guessing you mean that you are shelling out from the Progress script to run your rpyc script as an external process?
In that case something along these lines will read the first line of output from that rpyc script:
define variable result as character no-undo format "x(30)".
input through value( "myrpycScript arg1 arg2" ). /* this runs your rpyc script */
import unformatted result. /* this reads the result */
input close.
display result.
I am creating a test automation which uses an application without any interfaces. However, the application calls a batch script when it changes modes, so I am able to catch the mode transitions.
What I want is to have the batch script give an input to my python script (I have a state machine running in python) during runtime, so that I can monitor the state of the application with python instead of the batch file.
I am using a state machine similar to the one by Karn Saheb:
https://dev.to/karn/building-a-simple-state-machine-in-python
However, instead of changing states statically like:
device.on_event('event')
I want the python script to do something similar to:
while(True):
device.on_event(input()) # where the input is passed from the batch script:
REM state.bat
set CurrentState=%1
"magic code to pass CurrentState to python input()" %CurrentState%
I see that one solution would be to start the python script from the batch file every time it is called with the "event", and then save the current event in another file upon termination of the python script... But I want to avoid such handling and instead evaluate this at runtime.
Thank you in advance!
A reasonably portable way of doing this without ugly polling on temporary files is to use a socket: have the main process listen and have the batch file(s) start a small program that connects to the server and writes a message.
There are security considerations here: you can start by listening only to the loopback interface, with further authentication if the local machine should not be trusted.
If you have more than one of these processes, or if you need to handle the child dying before it issues its next report, you’ll have to use threads or something like select to unify the news from different input channels (e.g., waiting on the child to exit vs. waiting on news from the next batch file).
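A minimal sketch of that arrangement, assuming a placeholder port on the loopback interface; 'device' is the state machine from the question:

# listener.py -- the long-running python side
import socket

HOST, PORT = "127.0.0.1", 5555            # loopback only, per the note above

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen()

while True:
    conn, _ = server.accept()             # one short connection per event
    with conn:
        event = conn.recv(1024).decode().strip()
        device.on_event(event)            # feed the reported state in

# send_event.py -- the small program state.bat starts, e.g.:
#   python send_event.py %CurrentState%
import socket
import sys

with socket.create_connection(("127.0.0.1", 5555)) as s:
    s.sendall(sys.argv[1].encode())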
Summary: I have a python script which collects tweets using the Twitter API, and a PostgreSQL database in the backend which stores all the streamed tweets. I have custom code which overcomes the rate-limit issue, and I have had it running 24/7 for months.
Issue: Sometimes the streaming breaks and sleeps for the given seconds, but that does not help. I do not want to check it manually.
def on_error(self, status):   # tweepy StreamListener callback
    self.mailMeIfError(['me <me@localhost>'],
                       'listen.py <root@localhost>',
                       'Error occurred in on_error method',
                       str(status))
    time.sleep(300)
    return True
Assume mailMeIfError is a method which takes care of sending me a mail.
I want a simple cron script which always checks the process and restarts the python script if it is not running, has errored, or breaks. I have gone through some answers on stackoverflow that use the process ID. In my case the process ID still exists, because the script sleeps on error.
Thanks in advance.
Using Process ID is much easier and safer. Try using watchdog.
This can all be done in your one script. Cron would need to be configured to start your script periodically, say every minute. The start of your script then just needs to determine whether it is the only copy of itself running on the machine: if it spots that another copy is running, it silently terminates; otherwise it continues to run.
This behaviour is called the Singleton pattern. There are a number of ways to achieve it, for example Python: single instance of program.
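A minimal sketch of the lock-file variant, with an arbitrary lock path; an advisory flock is released automatically when its process dies, which is what lets cron restart a crashed copy:

# top of listen.py -- exit silently if another copy holds the lock
import fcntl
import sys

# keep a reference to the file object for the life of the process,
# otherwise the lock is dropped when it is garbage-collected
_lock_file = open("/tmp/listen.py.lock", "w")
try:
    fcntl.flock(_lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit(0)   # another copy is already running; terminate silently

# ... the rest of the streaming script runs in the single survivor ...

A crontab entry such as * * * * * /usr/bin/python3 /path/to/listen.py then becomes a no-op whenever a healthy copy is already running.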
I am working on a django web application.
A function 'xyz' (which updates a variable) needs to be called every 2 minutes.
I want one http request to start the daemon, which keeps calling xyz (every 2 minutes) until I send another http request to stop it.
Appreciate your ideas.
Thanks
Vishal Rana
There are a number of ways to achieve this. Assuming the correct server resources, I would write a python script "outside" of your django directory (though importing the necessary stuff) that calls function xyz, but only if /var/run/django-stuff/my-daemon.run exists. Get cron to run this every two minutes.
Then, for your django views, the start function creates the above-mentioned file if it doesn't already exist, and the stop function removes it.
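A minimal sketch of that split; the flag path is the one above, and the module that houses xyz is a hypothetical placeholder:

# run_xyz.py -- run by cron every two minutes, outside the django tree
import os

FLAG = "/var/run/django-stuff/my-daemon.run"

if os.path.exists(FLAG):
    from myproject.tasks import xyz       # hypothetical home of xyz
    xyz()

# views.py -- the start/stop views just create and remove the flag
import os

from django.http import HttpResponse

FLAG = "/var/run/django-stuff/my-daemon.run"

def start(request):
    open(FLAG, "a").close()               # create the flag if missing
    return HttpResponse("started")

def stop(request):
    if os.path.exists(FLAG):
        os.remove(FLAG)
    return HttpResponse("stopped")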
As I say, there are other ways to achieve this. You could have a python script in a loop waiting approximately 2 minutes... etc. In either case, you're up against the fact that two python scripts running in two different invocations of cpython (no idea if this is the case with mod_wsgi) cannot share state directly, so IPC between python scripts is not simple; you need some sort of formal IPC (semaphores, files, etc.) rather than shared variables (which won't work).
Probably a little hacky, but you could try this:
Set up a crontab entry that runs a script every two minutes. This script checks for some sort of flag on disk (file existence, contents of a file, etc.) to decide whether to run a given python module. The problem with this is that it could take up to 1:59 to run the function the first time after it is started.
I think that if you started a daemon in the view function, it would keep the httpd worker process alive, as well as the connection, unless you figure out how to close the connection without terminating the django view function. This could be very bad if you want to do this in parallel for different users. Also, to kill the function this way you would have to somehow know which python and/or httpd process to kill later, so you don't kill all of them.
The real way to do it would be to code an actual daemon in whatever language and just make a system call to "/etc/init.d/daemon_name start" and "... stop" in the django views. For this, you need to make sure your web server user has permission to execute the daemon.
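The views themselves then reduce to a system call; a sketch, reusing the placeholder init script name from above:

# views.py -- start/stop by shelling out to the init script
import subprocess

from django.http import HttpResponse

def start(request):
    subprocess.run(["/etc/init.d/daemon_name", "start"], check=True)
    return HttpResponse("daemon started")

def stop(request):
    subprocess.run(["/etc/init.d/daemon_name", "stop"], check=True)
    return HttpResponse("daemon stopped")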
If the easy solutions (loop in a script, crontab signaled by a temp file) are too fragile for your intended usage, you could use Twisted facilities for process handling, scheduling, and networking. Your Django app (using a Twisted client) would simply communicate via TCP (locally) with the Twisted server.
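A rough sketch of the Twisted server side, assuming a placeholder port and that xyz is importable from your project; the Django views would just open a local TCP connection and send "start" or "stop":

from twisted.internet import protocol, reactor, task

from myproject.tasks import xyz           # hypothetical home of xyz

loop = task.LoopingCall(xyz)

class Control(protocol.Protocol):
    def dataReceived(self, data):
        command = data.decode().strip()
        if command == "start" and not loop.running:
            loop.start(120)               # call xyz every 2 minutes
        elif command == "stop" and loop.running:
            loop.stop()

class ControlFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Control()

reactor.listenTCP(9999, ControlFactory(), interface="127.0.0.1")
reactor.run()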