How do I run a command only when a file is open - Python

I'm trying to rename a file for as long as another program is running,
and when I exit the running program I want the renamed file to get its original name back.
The code I used only seems to open the program and ignore the commands under it.
I would really appreciate the help.
import os
import time

if os.startfile(r'C:\Users\Michael\Desktop\test\file.exe'):
    time.sleep(3)
    os.rename(r'C:\Users\Michael\Desktop\test\name.txt', r'C:\Users\Michael\Desktop\test\name2.txt')

I don't have a Windows computer handy so I can't test this. However, the Python documentation for os.startfile (https://docs.python.org/3/library/os.html) doesn't specify anything regarding the return value. This makes me suspect that the return value is None or something like that. If so, that would explain why your code block isn't being run.
This line from that documentation should be helpful:
startfile() returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application’s exit status.
Therefore, since you're trying to do something (i.e., revert the file name) once the process terminates, you want a different function from os.startfile. I'm not that familiar with Windows, so perhaps someone else can point you in the right direction.
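For instance, subprocess.Popen can launch the program and then wait for it to exit; here is a minimal sketch (the paths come from the question, and the rename-back step is my reading of the intended behaviour):

import os
import subprocess

exe = r'C:\Users\Michael\Desktop\test\file.exe'
original = r'C:\Users\Michael\Desktop\test\name.txt'
renamed = r'C:\Users\Michael\Desktop\test\name2.txt'

proc = subprocess.Popen([exe])  # launch the program
os.rename(original, renamed)    # rename the file while it runs
proc.wait()                     # block until the program exits
os.rename(renamed, original)    # restore the original name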

Something like this might work, but it probably isn't the best idea:
EDIT: added prints and time.sleep to check every 10 seconds
import os
import psutil
import time

process_name = 'some_process'
file_name = 'some_file_path'
replacement = 'some_replacement_file_name'

while True:
    print(f'looking for {process_name}')
    time.sleep(10)
    if process_name in (p.name() for p in psutil.process_iter()):
        print(f'{process_name} started')
        os.rename(file_name, replacement)
        while True:
            time.sleep(10)
            if process_name not in (p.name() for p in psutil.process_iter()):
                print(f'{process_name} stopped')
                os.rename(replacement, file_name)
                break

Related

Detecting application start up with python

I've been working on a Windows Python program and I need it to run once I open an app. Is it possible? If so, how would I implement it?
We need some more information about what you want to do.
Do you want to know whether a process has started, and then continue your Python script? Then you can do this:
import time

import psutil

def is_process_running(processName):
    for process in psutil.process_iter():  # iterates through all processes on your OS
        try:
            if processName.lower() in process.name().lower():  # lower the names, because "programm" != "proGramm"
                return True  # if the process is found, return True
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):  # some failures that could occur
            pass
    return False

while not is_process_running('nameOfTheProcess'):  # only continue as long as the return is False
    time.sleep(10)  # wait 10 seconds
For further information:
psutil-docs
This can be achieved in multiple ways; the following is one of them.
import subprocess

# The subprocess library, being part of Python, doesn't require any
# additional installation and can do all the work.
# For the Windows operating system we can use 'tasklist'
# in place of 'ps aux' as the command
# (I haven't worked with Windows lately, but this is close to something I remember).
# check_output, called with shell=True, returns the list of running processes.
running_processes = subprocess.check_output('ps aux', shell=True)

# Check whether the application you need is running; for brevity I am
# using firefox. If it is, fire off another Python script.
if bytes('firefox', encoding='utf-8') in running_processes:
    subprocess.call(['python3', '/path/to/application.py'])
else:
    print('it is not running')
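A rough sketch of the Windows variant those comments hint at, assuming the process appears as firefox.exe in the tasklist output (the script path is hypothetical):

import subprocess

# 'tasklist' is the Windows counterpart of 'ps aux'; check_output
# returns its stdout as bytes.
running_processes = subprocess.check_output('tasklist')
if b'firefox.exe' in running_processes:
    subprocess.call(['python', r'C:\path\to\application.py'])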

Use while loop to listen for an event?

I am trying to write a small clipboard logger (for linux) that listens for an event (the change of the clipboard content) and writes the clipboard content to a file (on-change).
What I have come up with is a simple while loop with the clipboard module pyperclip:
import pyperclip

recent_value = ""
while True:
    tmp_value = pyperclip.paste()
    if tmp_value != recent_value:
        recent_value = tmp_value
        with open(".clipboard_history.txt", "a") as f:
            f.write(recent_value + "\n")
So my first question is, can I actually run a while True loop to 'listen' or will this consume too much memory or be generally inefficient or bad practice?
And the second question is, how can I run this in the background like a shell job control (ampersand)?
Should I go for a daemon like suggested here or some kind of event loop or threading magic?
I basically want something that sits in the background and listens for an event (clipboard content changes), reacts on it (writes to a file) and waits again.
============================
edit: Thanks for the input! + new question: Would I still need the sleep method if I used threading?
Running your current loop will drain the CPU. Import time and use time.sleep(1), or something else that puts the program to sleep for a little while (ideally ~0.1-2 seconds if you think the user will be copying/pasting quickly).
You don't need to thread if this is all your program is going to be doing.
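A minimal sketch of the loop with the suggested sleep added (the one-second interval is just an example):

import time

import pyperclip

recent_value = ""
while True:
    time.sleep(1)  # yield the CPU between polls
    tmp_value = pyperclip.paste()
    if tmp_value != recent_value:
        recent_value = tmp_value
        with open(".clipboard_history.txt", "a") as f:
            f.write(recent_value + "\n")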
First of all, you need to fix your indentation as follows:
while True:
    tmp_value = pyperclip.paste()
    if tmp_value != recent_value:
        recent_value = tmp_value
        with open(".clipboard_history.txt", "a") as f:
            f.write(recent_value + "\n")
Now if your script has more code, and you want this loop to keep running in the background, you can use threading and define a function for the loop:
from threading import Thread

def your_func():
    # your loop goes here
    ...

t = Thread(target=your_func, args=[])
t.start()
So my first question is, can I actually run a while True loop to
'listen' or will this consume too much memory or be generally
inefficient or bad practice?
It is inefficient. You probably want to add at the end of your loop
time.sleep(0.1)
Better practice would be to run your script every time the clipboard is written to. This discussion is also relevant to you: Trigger an event when clipboard content changes
And the second question is, how can I run this in the background like
a shell job control (ampersand)?
Refer to here.
To run a Python file with no console, the extension should be .pyw; for example, logger.pyw will run without opening a console window.
Hope this answered your question.

enable a script to stop itself when a command is issued from terminal

I have a script runReports.py that is executed every night. Suppose for some reason the script takes too long to execute, I want to be able to stop it from terminal by issuing a command like ./runReports.py stop.
I tried to implement this by having the script to create a temporary file when the stop command is issued.
The script checks for existence of this file before running each report.
If the file is there the script stops executing, else it continues.
But I am not able to find a way to make the issuer of the stop command aware that the script has stopped successfully. Something along the following lines:
$ ./runReports.py stop
Stopping runReports...
runReports.py stopped successfully.
How to achieve this?
For example, if your script runs in a loop, you can catch a signal (http://en.wikipedia.org/wiki/Unix_signal) and terminate the process:
import signal

class SimpleReport(BaseReport):  # BaseReport is the asker's own report class
    def __init__(self):
        ...
        self.is_running = True

    def _signal_handler(self, signum, frame):
        # Must set the instance attribute; a bare local assignment
        # would not be visible to run().
        self.is_running = False

    def run(self):
        signal.signal(signal.SIGUSR1, self._signal_handler)  # set signal handler
        ...
        while self.is_running:
            print("Preparing report")
        print("Exiting ...")
To terminate the process, just call kill -SIGUSR1 procId.
You want to achieve inter-process communication. You should first explore the different ways to do that: System V IPC (in memory; very versatile, possibly baffling API), sockets, including Unix domain sockets (in memory; more limited, clean API), and the file system (persistent on disk, almost architecture independent), then choose yours.
As you are asking about files, there are still two ways to communicate using files: either using file content (feature rich, harder to implement), or simply file presence. But the problem with using files is that if a program terminates because of an error, it may not be able to write its ended status to the disk.
IMHO, you should clearly define your requirements before choosing file-system-based communication (testing the end of a program is not really what it is best at), unless you also need architecture independence.
To directly answer your question: if you use file system communication, the only reliable way to know whether a program has ended is to browse the list of currently active processes, and the simplest way is IMHO to use ps -e in a subprocess, as sketched below.
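A minimal sketch of that check, assuming the script shows up under the name runReports in the ps output:

import subprocess

def report_script_running(name='runReports'):
    # 'ps -e' lists all running processes, one per line.
    processes = subprocess.check_output(['ps', '-e']).decode()
    return name in processes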
Instead of having a temporary file, you could have a permanent file (config.txt) that has some tags in it, and check whether the tag 'running = True' is set.
Achieving this is quite simple: if your code has a loop in it (I imagine it does), just make a function/method that checks a condition read from this file.
def continue_running():
    with open("config.txt") as f:
        for line in f:
            tag, condition = line.split(" = ")
            # strip() removes the trailing newline; without it the
            # comparison with "True" would always fail
            if tag == "running" and condition.strip() == "True":
                return True
    return False
In your script you will do this:
while True:  # or your termination condition
    if continue_running():
        # your regular code goes here
        ...
    else:
        break
So all you have to do to stop the loop in the script is change the 'running' value to anything but "True".

Python: Reread contents of a file

I have a file that an application updates every few seconds, and I want to extract a single number field in that file, and record it into a list for use later. So, I'd like to make an infinite loop where the script reads a source file, and any time it notices a change in a particular figure, it writes that figure to an output file.
I'm not sure why I can't get Python to notice that the source file is changing:
#!/usr/bin/python

import re
from time import gmtime, strftime, sleep

def write_data(new_datapoint):
    output_path = '/media/USBHDD/PythonStudy/torrent_data_collection/data_one.csv'
    outfile = open(output_path, 'a')
    outfile.write(new_datapoint)
    outfile.close()

forever = 0
previous_data = "0"

while forever < 1:
    input_path = '/var/lib/transmission-daemon/info/stats.json'
    infile = open(input_path, "r")
    infile.seek(0)
    contents = infile.read()
    uploaded_bytes = re.search('"uploaded-bytes":\s(\d+)', contents)
    if uploaded_bytes:
        current_time = strftime("%Y-%m-%d %X", gmtime())
        current_data = uploaded_bytes.group(1)
        if current_data != previous_data:
            write_data("," + current_time + "$" + uploaded_bytes.group(1))
            previous_data = uploaded_bytes.group(1)
        infile.close()
        sleep(5)
    else:
        print "couldn't write" + strftime("%Y-%m-%d %X", gmtime())
        infile.close
        sleep(60)
As it is now, the (messy) script writes once correctly, and then I can see that although my source file (stats.json) is changing, my script never picks up on any changes. It keeps running, but my output file doesn't grow.
I thought that an open() and a close() would do the trick, and then tried throwing in a .seek(0).
What file method am I missing to ensure that python re-opens and re-reads my source file, (stats.json)?
Unless you are implementing some synchronization mechanism or can somehow guarantee atomic reads and writes, I think you are asking for race conditions and subtle bugs here.
Imagine the "reader" accessing the file while the "writer" hasn't completed its write cycle. There is a risk of reading incomplete/inconsistent data. On "modern" systems you could also hit the cache and not see the file modifications "live" as they are appended.
I can think of two possible solutions: a proper synchronization mechanism (e.g. a lock file), or atomic writes on the producer's side (write to a temporary file, then rename it over the old one).
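As an illustration of the second option, a sketch of an atomic update on the writer's side (os.replace is atomic when both paths are on the same filesystem; the file names are just examples):

import json
import os

def write_stats_atomically(stats, path='stats.json'):
    tmp_path = path + '.tmp'
    with open(tmp_path, 'w') as f:
        json.dump(stats, f)
    # Readers now see either the old file or the new one, never a partial write.
    os.replace(tmp_path, path)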
You forgot the parentheses on the close in the else branch of the infinite loop:
infile.close --> infile.close()
Alternatively, the program that is changing the JSON file may not be closing the file, and therefore the file is not actually changing on disk.
Two problems I see:
Are you sure your file is really updated on the filesystem? I do not know what operating system you are playing with your code on, but caching may kick your a$$ in this case if the file is not flushed by the producer.
For your problem it is worth considering a pipe instead of a file; however, I cannot guarantee what transmission will do if it gets stuck writing to the pipe because your consumer is dead.
Answering your problems, consider using one of the following:
pyinotify
watchdog
watcher
These modules are intended to monitor changes on the filesystem and then call the proper actions. The method in your example is primitive, has a big performance penalty, and has a couple of other problems already mentioned in other answers; a watchdog sketch follows below.
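For instance, a minimal sketch using watchdog (the watched path follows the question; what you do inside the callback is up to you):

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class StatsChangedHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Called from the observer thread whenever something in the
        # watched directory changes.
        if event.src_path.endswith('stats.json'):
            print('stats.json changed; re-read it here')

observer = Observer()
observer.schedule(StatsChangedHandler(), '/var/lib/transmission-daemon/info')
observer.start()
try:
    observer.join()  # keep the main thread alive
except KeyboardInterrupt:
    observer.stop()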
Ilya, would it help to check (via os.path.getmtime) whether stats.json changed before you process the file? Something like the sketch below.
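A rough sketch of that mtime check (the path is taken from the question):

import os
from time import sleep

path = '/var/lib/transmission-daemon/info/stats.json'
last_mtime = 0

while True:
    mtime = os.path.getmtime(path)
    if mtime != last_mtime:
        last_mtime = mtime
        # the file changed on disk: re-read and process it here
    sleep(5)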
Moreover, I'd suggest taking advantage of the fact that it's a JSON file:
import json
import os

dir_name = '/home/klaus/.config/transmission/'
# stats.json of the daemon might be elsewhere
file_name = 'stats.json'
full_path = os.path.join(dir_name, file_name)

with open(full_path) as fp:
    data = json.load(fp)  # load the file once; a second json.load(fp) would fail at EOF

print data['uploaded-bytes']
Thanks for all the answers; unfortunately my error was in the shell, not in the Python script.
The cause of the problem turned out to be the way I was putting the script in the background. I was doing Ctrl+Z, which I thought would put the task in the background. But it does not: Ctrl+Z only suspends the task and returns you to the shell; a subsequent bg command is necessary for the script to run in an infinite loop in the background.

Restarting a self-updating python script

I have written a script that will keep itself up to date by downloading the latest version from a website and overwriting the running script.
I am not sure of the best way to restart the script after it has been updated.
Any ideas?
I don't really want to have a separate update script.
Oh, and it has to work on both Linux and Windows too.
In Linux, or any other form of unix, os.execl and friends are a good choice for this -- you just need to re-exec sys.executable with the same parameters it was executed with last time (sys.argv, more or less) or any variant thereof if you need to inform your next incarnation that it's actually a restart. On Windows, os.spawnl (and friends) is about the best you can do (though it will transiently take more time and memory than os.execl and friends would during the transition).
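A minimal sketch of that re-exec idiom (assuming the script was started as python myscript.py ...):

import os
import sys

def restart():
    # Replace the current process with a fresh interpreter running
    # the same script with the same arguments.
    os.execl(sys.executable, sys.executable, *sys.argv)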
The CherryPy project has code that restarts itself. Here's how they do it:
args = sys.argv[:]
self.log('Re-spawning %s' % ' '.join(args))
args.insert(0, sys.executable)
if sys.platform == 'win32':
    args = ['"%s"' % arg for arg in args]
os.chdir(_startup_cwd)
os.execv(sys.executable, args)
I've used this technique in my own code, and it works great. (I didn't bother to do the argument-quoting step on Windows above, but it's probably necessary if arguments could contain spaces or other special characters.)
I think the best solution would be something like this:
Your normal program:
...
# ... the part that downloads the newest files and puts them into the "newest" folder

from subprocess import Popen
Popen("/home/code/reloader.py", shell=True)  # start the reloader
exit("exit for updating all files")
The update script (e.g. /home/code/reloader.py):
from shutil import copy2, rmtree
from subprocess import Popen
from sys import exit

# maybe you could do this automatically:
copy2("/home/code/newest/file1.py", "/home/code/")  # copy file
copy2("/home/code/newest/file2.py", "/home/code/")
copy2("/home/code/newest/file3.py", "/home/code/")
...
rmtree('/home/code/newest')  # deletes the folder itself
Popen("/home/code/program.py", shell=True)  # go back to your program
exit("exit to restart the true program")
I hope this will help you.
The cleanest solution is a separate update script!
Run your program inside it, and have it report back (when exiting) that a new version is available, as in the sketch below. This allows your program to save all of its data, the updater to apply the update and run the new version, which then loads the saved data and continues. To the user this can be completely transparent, as they just run the updater-shell which runs the real program.
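A sketch of such an updater shell; the exit-code protocol and the file layout here are assumptions, not part of the original answer:

import shutil
import subprocess
import sys

UPDATE_REQUESTED = 42  # hypothetical exit code meaning "update me and restart"

def apply_update():
    # Hypothetical update step: copy the downloaded tree over the install
    # (dirs_exist_ok requires Python 3.8+).
    shutil.copytree('newest', '.', dirs_exist_ok=True)

while True:
    # Run the real program and wait for it to exit.
    code = subprocess.call([sys.executable, 'program.py'])
    if code != UPDATE_REQUESTED:
        break  # normal exit: stop the wrapper too
    apply_update()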
To additionally support script calls with Python's "-m" parameter, the following can be used (based on Alex's answer; Windows version):
os.spawnl(os.P_WAIT, sys.executable, *([sys.executable] +
(sys.argv if __package__ is None else ["-m", __loader__.name] + sys.argv[1:])))
sys.exit()
Main File:
if __name__ == '__main__':
    if os.path.isfile('__config.py'):
        print 'Development'
        push.update_server()
    else:
        e = update.check()
        if not e: sys.exit()
Update File:
def check():
    e = 1
    # ... perform checks; if something needs updating, set e = 0
    if not e:
        os.system("python main.pyw")
    return e
Here's the logic:
The main program calls the update function.
1) If the update function needs to update, it updates and calls a new instance of "main"; the original instance of "main" then exits.
2) If the update function does not need to update, "main" continues to run.
Wouldn't it just be easier to do something like the following?
It's very simple, needs nothing beyond the os import, and is compatible with any OS depending on what you put in the os.system call:

import os

def restart_program():
    print("Restarting Now...")
    os.system('your program here')
You can use reload(module) (importlib.reload(module) in Python 3) to reload a module.
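For example (the module name is a placeholder):

import importlib

import mymodule  # hypothetical module whose source has changed on disk

importlib.reload(mymodule)  # re-executes the module's code in place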
