I wrote a Python 3 script to measure the running time of a process, since it keeps dying and I'm interested in how long it will keep actively running. But it's running on my laptop, and I realized the statistics will be skewed by periods when I've put it to sleep.
It's on Linux, so I'm sure there's a log file I could parse (it used to be pm-suspend.log before systemd), but I'm wondering if there's a more general/direct way.
Can a process ask to be notified of suspend events?
I should note that I'm interested in the wallclock time the process was running, not the actual CPU execution time. So time.process_time() won't work.
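For reference, one way to measure this on Linux is to compare two clocks: CLOCK_MONOTONIC stops ticking while the system is suspended, while the Linux-only CLOCK_BOOTTIME keeps counting through suspend. The sketch below assumes Linux and Python 3.7+ (where `time.CLOCK_BOOTTIME` is exposed); the function names are my own.

```python
import time

def start_clocks():
    """Snapshot both clocks at the start of the run."""
    return (time.clock_gettime(time.CLOCK_BOOTTIME),
            time.clock_gettime(time.CLOCK_MONOTONIC))

def elapsed_excluding_suspend(start):
    """Wallclock seconds the process was actively running (suspend excluded)."""
    boot0, mono0 = start
    return time.clock_gettime(time.CLOCK_MONOTONIC) - mono0

def suspended_seconds(start):
    """Seconds spent suspended during the run: boottime delta minus monotonic delta."""
    boot0, mono0 = start
    boot1 = time.clock_gettime(time.CLOCK_BOOTTIME)
    mono1 = time.clock_gettime(time.CLOCK_MONOTONIC)
    return (boot1 - boot0) - (mono1 - mono0)

start = start_clocks()
# ... long-running work here ...
print(elapsed_excluding_suspend(start), suspended_seconds(start))
```

This avoids parsing logs entirely, though it only reports how much suspend time occurred, not when.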
It includes the suspended periods, but as far as I know it is the only simple option for finding the process's running time. Moreover, using this you can even find the running time of individual lines of code.
from time import time

t0 = time()
# your code here
print(time() - t0)
I have a program which I want to synchronize with a script (currently Python). Right now the execution of the program is asynchronous to the script, which fetches the current states and controls its inputs. The program is deterministic, but during execution it is a matter of chance whether I fetch a given state (memory value) slightly earlier or later at a specific time. If I had more control over the program flow, the results would be more deterministic and reproducible.
Is there even the slightest possibility to stop a program at a time point, make some actions and start it again, everything from an external script? A little bit like a debugger is able to stop a program during the execution and then resume it again.
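On POSIX systems, the stop-and-resume idea above can be sketched with SIGSTOP/SIGCONT, which is essentially what a debugger's pause does; this is an assumption about the platform, and the child command here is purely illustrative:

```python
import os
import signal
import subprocess
import sys
import time

# Launch a stand-in for the target program: a loop that runs forever.
child = subprocess.Popen(
    [sys.executable, "-c", "import time\nwhile True: time.sleep(0.01)"]
)

time.sleep(0.2)
os.kill(child.pid, signal.SIGSTOP)   # freeze the program
# ... read its state here while nothing moves ...
os.kill(child.pid, signal.SIGCONT)   # resume execution

child.terminate()
child.wait()
```

If the target is launched by the script itself (as here), the PID is known; for an already-running program you would have to look it up first.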
Another option would be a way to at least slow down the program's execution (even better, to a specific speed), like a "CPU killer" that slows down the computer, but only for that program. The script would then have a larger (maybe even predictable) time slot in which to react.
The script wouldn't have to be Python either.
Thanks in advance!
Use the time module to slow down the execution speed. I would use a boolean flag in a loop to check whether execution should continue or pause:

import time

wait = False  # toggled externally (e.g. by another thread) to pause the work

def run_logic():
    pass  # placeholder for the real work

while True:
    if not wait:
        run_logic()
    else:
        time.sleep(1)  # paused: yield the CPU while waiting
I am running a program which is attempting to open 1000 threads using Python's ThreadPoolExecutor, which I have configured to allow a maximum of 1000 threads. On a Windows machine with 4GB of memory, I am able to start ~870 threads before I get a runtime error: can't start new thread. With 16GB of memory, I am able to start ~870 threads as well, though the runtime error, can't start new thread, occurs two minutes later. All threads are running a while loop, which means that they will never complete their tasks. This is the intention.
Why is PyCharm/Windows/Python, whichever may be the culprit, failing to start more than 870 out of the 1000 threads which I am attempting to start, with that number being invariable despite a significant change in the RAM? This leaves me to conclude that hardware limitations are not the problem, which also leaves me completely and utterly confused.
What could be causing this, and how do I fix it?
It is very hard to say without all the details of your configuration and your code, but my guess is that Windows is being starved of certain kinds of memory. I suggest looking into the details in this article:
I attempted to duplicate your issue with PyCharm and Python 3.8 on my Linux box, and I was able to make 10000 threads with the code below. Note that I have every thread sleep for quite a while upon creation; otherwise the thread creation process slows way down, as the main thread of execution, which is trying to create the threads, becomes CPU-starved. I have 32GB of RAM, and I am able to make 10000 threads with a ThreadPoolExecutor on Linux.
from concurrent.futures import ThreadPoolExecutor
import time

def runForever():
    time.sleep(10)
    while True:
        for i in range(100):
            a = 10

t = ThreadPoolExecutor(max_workers=10000)
for i in range(10000):
    t.submit(runForever)
    print(len(t._threads))
print(len(t._threads))
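If thread creation fails because each thread's stack reserves too much address space, one commonly suggested mitigation (my assumption here, not something from the answer above) is lowering the per-thread stack size before creating threads:

```python
import threading

# Request a smaller stack per thread (256 KiB instead of the platform
# default, which can be 1 MiB or more). Must be set before threads start.
threading.stack_size(256 * 1024)

def worker():
    pass  # trivial stand-in task

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(threads))
```

Whether this helps depends on the platform; a too-small stack will crash threads that recurse deeply, so treat it as a tuning knob, not a default.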
I'm writing a program to control GPIOs on my Raspberry Pi. I would like my program to ask me how long to keep a GPIO on before it turns off.
Is it possible to have it stay on for one hour and then turn off? The problem I'm having is that while it's on for an hour, I can't issue any other commands to turn on other GPIOs, because time.sleep is still being processed. I'd like to set multiple GPIOs for different durations at the same time.
There are many ways to solve the problem. Conceptually, instead of sleeping 30 seconds and then doing something, you can sleep one second, do a bunch of stuff, check the time, lather, rinse, repeat. And "sleep one second" could just as easily be a tenth of a second or five seconds, whatever seems reasonable to you.
Another solution is to create a second thread (or process) for this sleep command so that your main thread (or process) runs unabated.
The choice depends on what all you need to do, how accurate you need the delay to be, what other things are running on the system, and so on.
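The second-thread approach can be sketched with threading.Timer, which runs a callback on its own thread after a delay, leaving the main thread free to take more commands. gpio_off() here is a stand-in for the real GPIO call (e.g. RPi.GPIO.output), and the pin numbers and short delays are illustrative:

```python
import threading

events = []  # records which pins were switched off, in order

def gpio_off(pin):
    events.append(pin)  # real code would drive the pin low here

def turn_off_later(pin, delay_seconds):
    # Each timer runs on its own thread, so delays don't block each other.
    timer = threading.Timer(delay_seconds, gpio_off, args=(pin,))
    timer.start()
    return timer

# Schedule two pins independently with different durations.
t1 = turn_off_later(17, 0.2)
t2 = turn_off_later(27, 0.1)
t1.join()
t2.join()
print(events)
```

A timer can also be cancelled with timer.cancel() before it fires, which maps nicely onto "turn it back off early" commands.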
Your current script only has one thread running; sleep() puts that thread into sleep mode, which blocks further commands.
time.sleep(secs) Python Doc
time.sleep(secs)
Suspend execution of the current thread for the given number of seconds.
You will need one more thread in the background which keeps the timer for you. In the meantime, the thread in the foreground can still take other commands.
I recommend you read this page
threading – Manage concurrent threads
Across the internet, I've seen several examples of handling a service which waits for messages from clients in an endless loop that includes a short sleep. Example:
while True:
    check_messages_stack()
    sleep(0.1)
What's the point of the sleep there? Is it supposed to save resources? If it does save some resources, would it be a meaningful amount?
sleep, as the others have said, relaxes CPU usage; in addition, if the loop is doing something like accessing network/web data, the sleep is also needed so you don't hammer the host server and get banned.
While your PC is running, your CPU needs to execute a lot of processes (just to keep the PC running).
Because CPUs are extremely fast, they can appear to do multiple tasks at the same time, but they don't really (nowadays we have multi-core CPUs and multithreading, but set that aside for this explanation).
They just execute part of one process for a certain amount of time, then part of another process, then go back to the first, and so on.
Put simply, the CPU is allowed to switch from one process to another when it is not being used by the process it is currently running, for instance when that process does some I/O or waits for user interaction.
When you write a while True loop, it starts the next iteration as soon as it finishes executing the body. Here "as soon as" really means as soon as, so no other process gets a chance to do anything in between two iterations, because the CPU is continuously busy.
When you put in a sleep, you allow the CPU to let other processes execute. How long the sleep lasts hardly matters, because a typical CPU can perform billions of operations per second.
So in your question, the sleep(0.1) allows your CPU to execute many millions of operations in between two check_messages_stack() calls.
For more information, look up "CPU scheduling".
sleep doesn't use CPU resources, but constantly executing check_messages_stack() might (depending on what it does), so if you don't need to run it constantly, it is good practice to give the CPU some time off.
If the function doesn't wait for anything, the loop will consume 100% of a CPU core; adding that sleep gives the CPU time to execute other processes.
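The difference is easy to measure: compare the CPU time consumed by a busy polling loop with one that sleeps between checks. check() below is a stand-in for check_messages_stack(), and the durations are shortened for illustration:

```python
import time

def check():
    pass  # stand-in for check_messages_stack()

def poll(duration, pause):
    # Run the polling loop for `duration` wallclock seconds and return
    # how much CPU time (process_time) it consumed.
    cpu0, wall0 = time.process_time(), time.monotonic()
    while time.monotonic() - wall0 < duration:
        check()
        if pause:
            time.sleep(0.01)
    return time.process_time() - cpu0

busy = poll(0.2, pause=False)   # burns CPU for the whole 0.2 s
idle = poll(0.2, pause=True)    # CPU time is only a small fraction
print(busy, idle)
```

The busy variant's CPU time is close to its wallclock time, while the sleeping variant's is a small fraction of it.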
I've got a Python process that is running as a daemon, using daemon runner, in the background. Its purpose is to query for some information from another computer on the network every 15 minutes, do some processing, and then send it somewhere else for logging. However, every so often, the processing bit takes much longer and the CPU usage for the process spikes for an extended period of time. Is there any way to figure out what might be happening during that time? I do have the daemon source.
The best thing to do is instrument the daemon with logging statements (using either the logging module or print statements with timestamps), and redirect the output to a log file. Then you can watch the logfile (perhaps using multitail) and note the output when you see the CPU spike.