I have a Fabric script to manage our deployments. I need it to run in parallel mode so it finishes in a reasonable amount of time, but I need one command to run only once, not multiple times as it would in parallel mode.
Don't specify the hosts before calling the function that you want to execute only once.
Inside that function, you can set the env.hosts variable to the machines you want the parallel task to run on.
For example,
from fabric.api import env, execute, parallel

def task():
    init()
    execute(main_job)

def init():
    # do some initialization
    # set the host list for the parallel task
    env.hosts = ['192.168.5.11', '192.168.5.12']

@parallel
def main_job():
    # main job code...
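If you'd rather not rely on env.hosts being mutated as a side effect, Fabric's execute() also accepts an explicit hosts list; a small variant of the task above (same placeholder addresses, init() rewritten to return the list):

def task():
    hosts = init()                  # init() returns the host list instead of setting env.hosts
    execute(main_job, hosts=hosts)  # main_job still runs in parallel on every host

def init():
    # do some initialization, then decide which machines to target
    return ['192.168.5.11', '192.168.5.12']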
I tried to run multiple processes on one AWS EC2 instance and to share an environment variable between them by adding a new variable in the first run.
The first process sets a new environment variable with the following line in Python:
os.environ["access_token"] = "token"
but when the next process runs I get:
print("access_token" in os.environ)
-> False
How can I set a global environment variable in the first run and use it in the subsequent processes?
I need a different variable for each instance, so I can't set a global environment variable for all instances before the first run.
If you want to make this work (without involving your process supervisor), you need to make your first process directly start the second process itself; this can be repeated for as many processes as need be, invoking them all through the same wrapper.
Let's say you have a service definition like so:
[Service]
ExecStartPre=/usr/bin/set-my-environment-variable
ExecStart=/usr/bin/service-that-needs-environment-variable
As you've noticed, this doesn't work: The environment variables that set-my-environment-variable sets are all gone when it exits, so service-that-needs-environment-variable can't see them.
One thing you can try that does work is the following:
ExecStart=/usr/bin/set-my-environment-variable /usr/bin/service-that-needs-environment-variable
...if you change set-my-environment-variable to be like the following:
#!/usr/bin/env python
import os, sys

# Add the variable to this process's environment, then replace this process with
# the target program; the exec'd program inherits the modified environment.
os.environ["access_token"] = "token"
os.execv(sys.argv[1], sys.argv[1:])
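Alternatively, if one Python process is responsible for launching the others, it can set the variable once and let the children inherit it. This is a minimal sketch, assuming two hypothetical worker scripts worker_a.py and worker_b.py:

#!/usr/bin/env python
import os
import subprocess

# Set the variable once in the parent; child processes inherit the parent's
# environment by default, so every worker started below can read it.
os.environ["access_token"] = "token"

workers = [
    ["python", "worker_a.py"],  # hypothetical worker scripts
    ["python", "worker_b.py"],
]
procs = [subprocess.Popen(cmd) for cmd in workers]
for p in procs:
    p.wait()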
I have a function that checks for sniffing tools, and I want it to run constantly in the background of my Python script:
def check():
    unwanted_programmes = []  # to be added to
    for p in psutil.process_iter(attrs=['pid', 'name']):
        for item in unwanted_programmes:
            if item in str(p.info['name']).lower():
                webhook_hackers(str(p.info['name']))  # send the programme to a webhook, which is in another function
                sys.exit()
    time.sleep(1)
I want this to run right from the start, and then the rest of the script.
I have written code like so:
if __name__ == "__main__":
    check = threading.Thread(target=get_hackers())
    check.start()
    check.join()

    threading.Thread(target=startup).start()
    # startup function just does some prints and inputs before running other functions
However, this code only runs check once and then startup. I want check to keep running in the background while startup runs just once. How would I do this?
Your check function does what you want it to, but it only does it once, and that's the behavior you're seeing; the thread finishes running the function and then cleanly exits. If you place everything inside the function in a while True: block, the function will loop forever and the thread will never exit, which sounds like what you want.
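A minimal sketch of that change, keeping the structure of the check function from the question (unwanted_programmes, webhook_hackers and startup are the question's own placeholders):

import sys
import threading
import time

import psutil

def check():
    unwanted_programmes = []  # to be added to
    while True:  # loop forever so the background thread keeps scanning
        for p in psutil.process_iter(attrs=['pid', 'name']):
            for item in unwanted_programmes:
                if item in str(p.info['name']).lower():
                    webhook_hackers(str(p.info['name']))  # defined elsewhere in the question
                    sys.exit()
        time.sleep(1)  # pause between scans

if __name__ == "__main__":
    threading.Thread(target=check, daemon=True).start()  # pass the function itself, don't call it
    startup()  # from the question: runs once while check keeps scanning in the background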
I have a multiprocessing program in Python which spawns several sub-processes and manages them (restarting them if the children identify problems, etc.). Each subprocess is unique and its setup depends on a configuration file. The general structure of the master program is:
def main():
    messageQueue = multiprocessing.Queue()
    errorQueue = multiprocessing.Queue()
    childProcesses = {}

    for required_children in configuration:
        childProcesses[required_children] = MultiprocessChild(errorQueue, messageQueue, *args, **kwargs)
    for child_process in childProcesses:
        childProcesses[child_process].start()

    while True:
        # This is to check if the configuration file for processes has changed, e.g. check every 5 minutes
        if local_uptime > configuration_check_timer:
            reload_configuration()
            killChildProcessIfConfigurationChanged()
            relaunchChildProcessIfConfigurationChanged()

        # We want to relaunch errored processes immediately (so a while statement).
        # Errors are not always crashes. Sometimes other system parameters change that
        # require a relaunch with different, ChildProcess-specific configurations.
        while not errorQueue.empty():
            _error_, _childprocess_ = errorQueue.get()
            killChildProcess(_childprocess_)
            relaunchChildProcess(_childprocess_)
            print(_error_)

        # Messages are allowed to lag if a configuration_timer is going to trigger
        # or errorQueue gets something (so an if statement).
        if not messageQueue.empty():
            print(messageQueue.get())
Is there a way to prevent the contents of the infinite while True loop from taking up 100% CPU? If I add a sleep at the end of the loop (e.g. sleep for 10 s), then errors will take up to 10 s to correct and messages will take up to 10 s to flush.
If, on the other hand, there were a way to time.sleep() for the duration of the configuration_check_timer while still running code as soon as messageQueue or errorQueue receive something, that would be nice.
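No answer appears in the thread, but one common way to get what the last paragraph describes is to block on the error queue with a timeout instead of polling and sleeping. A rough sketch of the loop body, reusing the (undefined here) names from the question:

import queue  # only for the Empty exception raised by multiprocessing.Queue.get

while True:
    if local_uptime > configuration_check_timer:
        reload_configuration()
        killChildProcessIfConfigurationChanged()
        relaunchChildProcessIfConfigurationChanged()

    try:
        # Block for up to one second instead of spinning; errors are still handled
        # almost immediately, and the CPU stays idle while waiting.
        _error_, _childprocess_ = errorQueue.get(timeout=1)
    except queue.Empty:
        pass
    else:
        killChildProcess(_childprocess_)
        relaunchChildProcess(_childprocess_)
        print(_error_)

    if not messageQueue.empty():
        print(messageQueue.get())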
Hi everyone. I have a Python file (for example named run.py). This program takes some parameters (python run.py param1 param2 ...) and each tuple of parameters is a setting. Now I have to run many settings simultaneously to finish them all as soon as possible. I wrote a file run.sh as follows:
python run.py setting1 &
python run.py setting2 &
#more setting
...
wait
This file will execute all the processes simultaneously, right? I run it on a machine with a 64-core CPU. I have some questions:
Will each process run on one core or not?
If not, how can I make that happen?
If I can run one process per core, will the running time of setting1 equal the running time when I run it as an individual process: python run.py setting1?
Did you try to use the multiprocessing module?
Assuming you want to execute some function work(arg1, arg2) multiple times in parallel, you'd end up with something like this:
import multiprocessing

p = multiprocessing.Pool(multiprocessing.cpu_count())
results = p.starmap(work, [(arg11, arg12), (arg21, arg22), ...])
# do something with the list of results
If your functions all look very different from each other, you can get away with writing a function wrapper, like so:
def wrapper(dict_args, inner_function):
    return inner_function(dict_args)

# then launch the multiprocessing mapping
p.starmap(wrapper, [({'arg1': v1, 'arg2': v2}, job1), ({'foo': bar}, job2), ...])
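As a concrete, runnable variant of the Pool/starmap pattern above (the work function and its arguments here are made up purely for illustration):

import multiprocessing

def work(setting_name, value):
    # stand-in for whatever run.py does for one setting
    return "%s -> %d" % (setting_name, value * 2)

if __name__ == "__main__":
    settings = [("setting1", 1), ("setting2", 2), ("setting3", 3)]
    # One worker per CPU core; each (name, value) tuple is unpacked into work()
    with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
        results = pool.starmap(work, settings)
    print(results)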
So far I've used nosetests with just one process and everything works fine.
To ensure my setUp is only executed once, I'm using a boolean variable.
def setUp(self):
    if not self.setupOk:
        selTest.setupOk = True
        # start selenium
        # do other stuff which will be needed for all other tests to be able to run
Now I would like to run nosetests with the option --processes=5.
How can I ensure that setUp(self) is only executed by one process (while the other processes wait)?
I've tried to work with:
def setUp(self):
    lock = multiprocessing.Lock()
    lock.acquire()
    if not self.setupOk:
        selTest.setupOk = True
        # start selenium
        # do other stuff which will be needed for all other tests to be able to run
    lock.release()
but this doesn't seem to work.
setUp will be called before every test is run. If you want a method to execute just once, you can use setUpClass:
@classmethod
def setUpClass(cls):
    print("do stuff which needs to be run once")