I have a program that I want to synchronize with a script (currently Python). Right now the program executes asynchronously to the script, which fetches its current state (memory values) and controls its inputs. The program itself is deterministic, but it is a matter of chance whether I fetch a given state a little earlier or later at a specific point in the execution. If I were able to exert more control over the program flow, the results would be more deterministic and reproducible.
Is there any possibility at all of stopping a program at a given point in time, performing some actions, and then starting it again, all from an external script? A little like how a debugger can stop a program during execution and then resume it.
Another approach would be to at least slow down the program's execution (ideally to a specific speed), like a "CPU Killer" that slows down the whole computer, but applied only to this one program. The script would then have more time to react within a (perhaps even fixed) larger time slot.
The script wouldn't have to be Python either.
Thanks in advance!
Use the time module to slow down the execution speed. I would use a loop with a boolean flag to check whether execution should continue or not:
import time

wait = False  # toggled by the controlling script

while True:
    if not wait:
        run_logic()  # placeholder for one step of the program's work
    else:
        time.sleep(1)  # idle until the controller releases us
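If you cannot modify the program itself, an external script can also freeze and thaw the whole process, much like a debugger does. A minimal sketch, assuming a POSIX system and a known PID (on Windows, psutil's Process.suspend()/resume() offers the same idea):
import os
import signal
import time

pid = 12345  # hypothetical PID of the program to control

os.kill(pid, signal.SIGSTOP)   # freeze the process, like a debugger break
# ... fetch states / set inputs while it is frozen ...
time.sleep(0.1)
os.kill(pid, signal.SIGCONT)   # resume execution
Alternating SIGSTOP and SIGCONT in a duty cycle also effectively slows the process down to a chosen fraction of its normal speed, which covers the "CPU Killer" idea as well.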
How can I make cmds.duplicate execute immediately when called in Maya, instead of waiting for the entire script to run and then executing in batches? For example, with the script below, all of the duplicates appear at once only after the whole script has finished:
import time
import pymel.core as pm
import maya.cmds as cmds

for i in range(1, 6):
    pm.select("pSphere{}".format(i))
    time.sleep(0.5)
    cmds.duplicate()
I have tried Python multithreading, like this:
import threading
import time
import maya.cmds as cmds

def test():
    for i in range(50):
        cmds.duplicate('pSphere1')
        time.sleep(0.1)

thread = threading.Thread(target=test)
thread.start()
# thread.join()
Sometimes this succeeds, but sometimes it crashes Maya. If I join the thread from the main thread, it doesn't achieve the effect either. When I perform a large number of cmds.duplicate calls, memory consumption becomes very high and the program runs more and more slowly.

In addition, all of the duplicate results appear together only after the entire Python script has run, so I suspect that when I call cmds.duplicate, Maya does not finish executing and outputting the command right away, but temporarily puts the results into a container of variable capacity. As my calls accumulate, the dynamic resizing of this container makes the program slower and slower, and memory consumption also increases dramatically. Since I have seen other plug-ins display command results in real time, I assume there is a proper way to do this that I just haven't found yet.
Your assumptions are not correct. Maya does not need to display anything to complete a tool. If you want to see the results in between, you can try to use:
pm.refresh()
but this will not change the behaviour in general. I suspect your memory problems have a different source. You could check whether it helps to temporarily turn off construction history or the undo queue.
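For example, a minimal sketch of temporarily disabling the undo queue around a heavy batch (assuming it runs inside a Maya session; cmds.refresh() forces a redraw between duplicates):
import maya.cmds as cmds

cmds.undoInfo(state=False)      # stop recording undo steps
try:
    for i in range(50):
        cmds.duplicate('pSphere1')
        cmds.refresh()          # redraw so results show up as they happen
finally:
    cmds.undoInfo(state=True)   # always restore undo recording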
And of course Ennakard is right with the answer that most Maya commands are not thread-safe unless the docs say otherwise. Every node creation and modification has to be done in the main thread.
The simple answer is that you don't: Maya commands in general, and most interactions with Maya, are not thread-safe.
Threading is usually used for data manipulation before the data gets used to modify anything in Maya, but once you start creating nodes, setting attributes, or making any other Maya modification, no threading.
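If the heavy preparation genuinely belongs on a worker thread, one common pattern is to hand the actual scene edits back to the main thread. A minimal sketch using maya.utils (assuming it runs inside an interactive Maya session):
import threading
import maya.utils
import maya.cmds as cmds

def prepare_and_dispatch():
    # Maya-free computation may live on the worker thread...
    names = ['pSphere1'] * 10
    for name in names:
        # ...but each scene modification is marshalled to the main thread.
        maya.utils.executeDeferred(cmds.duplicate, name)

threading.Thread(target=prepare_and_dispatch).start()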
I am fairly new to programming with Python, so forgive me if this is trivial.
I know that when programming microcontrollers it is possible to interrupt the main program (e.g. on a button press or due to a timer). The interrupt jumps to code outside of the main program, which is then executed; afterwards, the main program continues executing. The interrupt handler remembers where it interrupted the main program and returns to that exact point in the code. Is it possible to implement that in Python as well?
I looked into the threading library, but it doesn't seem to fit, since I don't want several tasks running in parallel. With threads, it seems like I would have to check for an event on every second line of my main code to ensure that it really interrupts the program immediately.
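To illustrate what I mean, here is a rough sketch of that polling pattern (do_one_step and run_baseline_task are placeholders for my actual code):
import threading

interrupt = threading.Event()
threading.Timer(60.0, interrupt.set).start()  # fire after 60 s

while True:
    do_one_step()             # one small chunk of the main program
    if interrupt.is_set():    # has to be polled again and again
        run_baseline_task()   # the 6-9 s baseline task
        interrupt.clear()
        threading.Timer(60.0, interrupt.set).start()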
If you need some context:
I am implementing a program using the "PsychoPy Coder" (PsychoPy v2021.2.3) on Windows 10.
I expect the program (when finished) to run for at least an hour, depending on the user. I want this program to be interrupted every 60 to 90 seconds for a "baseline task" the user has to solve. This baseline task will last for about 6 to 9 seconds and the actual program should continue afterwards. Also, I want the user to be able to abort the program with a specific button at anytime.
I would be very thankful for any hint on an elegant way of programming this :) Have a nice day!
I wrote a Python 3 script to measure the running time of a process, since it keeps dying and I'm interested in how long it stays actively running. But it's running on my laptop, and I realized the statistics will be skewed by periods when I've put the laptop to sleep.
It's on Linux, so I'm sure there's a log file I could parse (it used to be pm-suspend.log before systemd), but I'm wondering if there's a more general/direct way.
Can a process ask to be notified of suspend events?
I should note that I'm interested in the wallclock time the process was running, not the actual CPU execution time. So time.process_time() won't work.
This includes the suspended periods, but as far as I know it is the only option for finding the process's running time. Moreover, with this approach you can even measure the running time of individual lines of code:
from time import time

t0 = time()
# your code here
print(time() - t0)
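If the suspended periods do need to be separated out, here is a Linux-only sketch (requires Python 3.7+). CLOCK_BOOTTIME keeps counting during suspend while CLOCK_MONOTONIC does not, so their drift is exactly the time spent suspended:
import time

boot0 = time.clock_gettime(time.CLOCK_BOOTTIME)
mono0 = time.clock_gettime(time.CLOCK_MONOTONIC)

# ... your code here ...

boot = time.clock_gettime(time.CLOCK_BOOTTIME) - boot0
mono = time.clock_gettime(time.CLOCK_MONOTONIC) - mono0
print("awake: %.1f s, suspended: %.1f s" % (mono, boot - mono))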
I had a program that ran recursively, and while 95% of the time it wasn't an issue, sometimes I would hit the recursion limit if something took too long. In my efforts to convert it to iterative code, I decided to try something along these lines:
import time

while True:
    do_something()       # placeholder for the actual work
    # check if the task is done
    if done:
        print('ALL DONE')
        break
    else:
        time.sleep(600)  # wait ten minutes before checking again
I've tested my code and it works fine, but I was wondering whether there is anything inherently wrong with this method. Will it eat up RAM or crash the box if it's left to run for too long?
Thanks in advance!
EDIT:
The "do something" I refer to is checking a log file for certain keywords periodically, as data is constantly being written to the log file. Once these lines are written, which happens at varying length of times, I have the script perform certain tasks, such as copying specific lines to a separate files.
My original program had two functions, one called itself periodically until it found keywords, which would then call the 'dosomething' function. The do something function upon completion would then call original function, and this would happen until the task was finished or I hit the recursion limit
There is nothing inherently wrong with this pattern. I have used the daemon function in init.d to start a very similar Python script. As long as "do something" doesn't leak, it should be able to run forever.
I think that either way time.sleep() will not prevent you from hitting the recursion limit, because sleep only pauses execution and doesn't free any kind of memory. See the time.sleep() description at https://docs.python.org/2/library/time.html: it suspends the operation, but it does not do any memory optimization.
The pattern you describe is easy to implement, but usually not the best way to do things. If the task completes just after you check, you still have to wait out the full sleep interval (ten minutes in your code) before processing resumes. However, sometimes there is little choice but to do this; for example, if the only way to detect that the task is complete is to check for the existence of a file, you may have to do it this way. In such cases the choice of interval needs to balance the CPU consumed by the "spin" against the wait time.
Another pattern that is also fairly easy is to simply block while waiting on the task to complete. Whether this is easy or not depends on the particular API you are using. But this technique does not scale because all processing must wait for a single activity to complete. Imagine not being able to open a new browser tab while a page is loading.
Best practice today generally uses one of several models of asynchronous processing. Much like writing event handlers for mouse clicks in a website or GUI, you write a callback function that handles the result of processing, and pass that callback to the task. No CPU is wasted and the result is handled immediately, without busy-waiting. Many frameworks support this model today; in the standard library, asyncio (originally codenamed Tulip) provides it via an event loop and coroutines.
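A minimal sketch of the callback style using the standard-library asyncio (the task and callback names here are just illustrative):
import asyncio

async def long_task():
    await asyncio.sleep(2)          # stand-in for the real work
    return "task finished"

def on_done(task):                  # runs as soon as the task completes
    print(task.result())

async def main():
    task = asyncio.ensure_future(long_task())
    task.add_done_callback(on_done)
    await asyncio.sleep(3)          # the main program keeps running meanwhile

asyncio.run(main())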
Specifically regarding the recursion limit, I don't think your sleep loop is responsible for hitting the stack frame limit. Maybe it was something happening within the task itself.
I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or do just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem: I can't figure out how to stop it. I have tried Thread._stop(), but it frequently doesn't work. I end up with a thread I can't control, hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing.
In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time.
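A rough sketch of that warm-worker approach (the queue protocol and the names here are my own; exec stands in for whatever evaluation the worker actually does):
import multiprocessing as mp

def worker(inbox):
    for source in iter(inbox.get, None):  # None is a shutdown sentinel
        exec(source, {})                  # evaluate the code in isolation

def spawn():
    inbox = mp.Queue()
    proc = mp.Process(target=worker, args=(inbox,), daemon=True)
    proc.start()
    return proc, inbox

if __name__ == '__main__':
    proc, inbox = spawn()
    inbox.put("print('hello from the worker')")
    # If the code misbehaves, kill the worker and pre-warm a replacement
    # immediately so the next request sees no start-up delay:
    proc.terminate()
    proc, inbox = spawn()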