Hey guys, I am creating an application which takes in requests from a user. The main class on the server side is the Controller. During init I spawn a thread which keeps actively listening for requests from the client (I need to spawn a thread here).
Once I get a request, I look at the type of the request and call a function to handle it.
In that function, I want to create multiple processes to utilise my 8 cores effectively.
Here is the code:
import multiprocessing as mp

from ryu.base import app_manager
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3


class Controller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        self.datapaths = {}
        # Spawn the listener thread during init.
        self.monitor_thread = hub.spawn(self._monitor)
        super(Controller, self).__init__(*args, **kwargs)

    def _monitor(self):
        global connstream
        while True:
            # Get a request from the client.
            data = connstream.read(15000)
            data = eval(data)
            print "Received a request from the client:-", data
            for key, value in data.iteritems():
                type = int(key)
                request = value
                if type == 4:
                    self.get_route(type, request, connstream)

    def get_route(self, type, request, connection):
        global get_route_result
        cities = request['Cities']
        number_of_cities = request['Number_of_Cities']
        city_count = 0
        processes = []
        pool = mp.Pool(processes=8)
        for city, destination_ip in cities.iteritems():
            args = (type, destination_ip)
            processes.append(args)
            city_count = city_count + 1
            if city_count == number_of_cities:
                break
        pool.map(self.get_route_process, processes)

    def get_route_process(self, HOST, destination):
        # Do something
        pass
But the error I get is:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
AttributeError: '_MainThread' object has no attribute '_state'
So, in a nutshell: I create a thread, which tries to create multiple processes, but the code fails.
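For context, a minimal sketch of the workaround I have seen suggested elsewhere, assuming the failure comes from building the multiprocessing.Pool inside the hub-spawned green thread (Ryu's hub monkey-patches threading): create the pool in the real main thread, before any threads are spawned, and only submit work to it from the listener thread. The example below uses a plain thread and illustrative data rather than the Ryu classes above:

import multiprocessing as mp
import threading

def get_route_process(args):
    # Module-level function: bound methods are not picklable in Python 2.
    request_type, destination_ip = args
    return (request_type, destination_ip)

def monitor(pool):
    # The pool is only *used* here; it was created in the main thread.
    results = pool.map(get_route_process, [(4, '10.0.0.1'), (4, '10.0.0.2')])
    print results

if __name__ == '__main__':
    pool = mp.Pool(processes=8)  # created in the main thread
    t = threading.Thread(target=monitor, args=(pool,))
    t.start()
    t.join()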
I am using Telenium to automate testing of a Kivy application.
https://github.com/tito/telenium/blob/master/README.md
import os
import threading

def enable_server():
    def start_server():
        os.system('python server.py')
    t1 = threading.Thread(target=start_server, daemon=True)
    t1.start()
My skeleton of the Telenium test cases looks like this:
class UITestCase(TeleniumTestCase):
    cmd_entrypoint = ["main.py"]

    def first_test(self):
        """code to test"""

    def second_test(self):
        enable_server()
        """code to test"""

    def third_test(self):
        enable_server()
        """code to test"""
Since two of the tests need enable_server(), the application does not execute third_test completely and fails. I am not sure why this is happening.
Error:
Traceback (most recent call last):
File "node_sim.py", line 63, in <module>
loop.run_until_complete(start_server)
File "/home/user/.pyenv/versions/3.7.3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/home/user/.pyenv/versions/3.7.3/lib/python3.7/asyncio/tasks.py", line 603, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/user/.pyenv/versions/3.7.3/lib/python3.7/site-packages/websockets/legacy/server.py", line 1071, in __await_impl__
server = await self._create_server()
File "/home/user/.pyenv/versions/3.7.3/lib/python3.7/asyncio/base_events.py", line 1378, in create_server
% (sa, err.strerror.lower())) from None
OSError: [Errno 98] error while attempting to bind on address ('0.0.0.0', 5000): address already in use
You should put enable_server in the init method; it will run once before the tests of each UITestCase.
Each new TeleniumTestCase will close and restart the application, so you always run from a clean app. If you always need to do something before starting the tests, you can overload init. This will be executed once before any tests in the class start (see the Telenium doc).
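A minimal sketch of what that could look like, assuming the init hook described in the Telenium README and the enable_server helper from the question:

class UITestCase(TeleniumTestCase):
    cmd_entrypoint = ["main.py"]

    def init(self):
        # Executed once before any test in this class starts,
        # so the server thread is created exactly once.
        enable_server()

    def second_test(self):
        """code to test"""

    def third_test(self):
        """code to test"""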
I would like to pass messages out from my function running in a process pool while the function is still running.
My application uses asyncio and multiprocessing queues to receive and distribute messages to a worker pool using asyncio.run_in_executor(). I manually created the pool so I could provide an initializer.
The problem I have is that I would like the functions that are running in the executor pool to be able to send messages out to the asyncio loop. This is how I started my new application process:
self._application = Application(self.outgoing_queue, self.incoming_queue, application_cores, log_level=logging.INFO)
self._application_process = mp.Process(target=self._application.run)
self._application_process.start()
The queues are created with:
self.outgoing_queue = mp.Queue()
self.incoming_queue = mp.Queue()
I can't use my asyncio queue or my multiprocessing queue, since neither can be passed to the process by this method:
async def run_operation():
    kwargs = {
        'out_queue': self._work_pool_queue
    }
    func = functools.partial(attribute, *args, **kwargs)
    result = await self._loop.run_in_executor(self._work_pool, func)
    result_msg = common.messages.MessageResult(result, msg.reply_id, msg.cpu_cost)
    await self.outgoing_send(result_msg)

asyncio.create_task(run_operation())
My self._work_pool is created with:
self._work_pool = concurrent.futures.ProcessPoolExecutor(max_workers=self._cores, initializer=_work_pool_init)
Passing the queue that way fails with the following traceback:
Task exception was never retrieved
future: <Task finished coro=<Application.message_router.<locals>.run_operation() done, defined at c:\users\brian\gitlab\rf-applications\rfapplications\common\application.py:95> exception=RuntimeError('Queue objects should only be shared between processes through inheritance')>
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "c:\Program Files\Python37\lib\multiprocessing\queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "c:\Program Files\Python37\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "c:\Program Files\Python37\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "c:\Program Files\Python37\lib\multiprocessing\context.py", line 356, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
"""
I was looking at using a Manager().Queue() (https://docs.python.org/3/library/multiprocessing.html#managers), since those can be sent to the process pool (Python multiprocessing Pool Queues communication). However, these queues seem to open up the possibility of remote connections, which I would like to avoid (so far I use secure websockets to communicate between remote machines).
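For reference, a minimal sketch of the Manager().Queue() approach (the names here are illustrative, not from the application above): a manager queue proxy can be pickled and passed as an argument to pool workers, unlike a plain mp.Queue().

import concurrent.futures
import multiprocessing as mp

def worker(out_queue, item):
    # Report progress while still running, then return a result.
    out_queue.put(('progress', item))
    return item * 2

if __name__ == '__main__':
    manager = mp.Manager()
    out_queue = manager.Queue()  # picklable proxy, unlike mp.Queue()
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as pool:
        future = pool.submit(worker, out_queue, 21)
        print(out_queue.get())   # ('progress', 21)
        print(future.result())   # 42

As far as I can tell, a default Manager() listens on a local pipe or Unix domain socket guarded by an authkey rather than on a TCP port, which may soften the remote-connection concern.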
I am trying to read meta info from a Celery task in case of a timeout (i.e. if the task is not finished in the given time). I have 3 Celery workers. When I execute tasks on the 3 workers serially, my timeout logic (getting meta info from the Redis backend) works fine. But when I execute tasks in parallel using threads, I get the error AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'.
Main script:
from threading import Thread
from util.tasks import app
from celery.exceptions import TimeoutError
# from celery.task.control import revoke
from celery.result import AsyncResult

def run(cmd, workerName, async=False, timeout=9999999):
    print "Executing Celery cmd: ", cmd
    ret = app.send_task(workerName + '.run_cmd', args=[cmd], kwargs={}, queue=workerName)
    if async:
        return ret
    else:
        try:
            return ret.get(timeout=timeout)
        except TimeoutError:
            task = AsyncResult(ret.task_id)
            # print task.info
            out = task.info['PROGRESS']
            # stop_task(ret.task_id)
            print 'TIMEOUT', out
            return 'TIMEOUT', out

cmd = r'ping 10.10.10.10'
threads = []

# this block works
print "This block works"
run(cmd, 'MH_VTF203', timeout=10)
run(cmd, 'MH_VTF1661', timeout=10)
run(cmd, 'MH_VTF106', timeout=10)

# this block errors
print "This block errors"
for vtf in ['MH_VTF203', 'MH_VTF1661', 'MH_VTF106']:
    t = Thread(target=run, args=[cmd, vtf], kwargs={'timeout': 10})
    t.start()
    threads.append(t)
for t in threads:
    t.join()
util/tasks.py:
from celery import Celery
import subprocess

app = Celery('tasks', backend='redis://', broker='redis://localhost:6379/0')
app.conf.CELERY_IGNORE_RESULT = False
app.conf.CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

@app.task()
def run_cmd(*args, **kwargs):
    cmd = " ".join(args)
    print "executing command :", cmd
    try:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out = ""
        while p.poll() is None:
            l = p.stdout.readline()
            print l
            out += l
            run_cmd.update_state(
                state='PROGRESS',
                meta={'PROGRESS': out}
            )
        l = p.stdout.read()
        print l
        out += l
        return out
    except subprocess.CalledProcessError, e:
        print 'Error executing command: ', cmd
        return str(e)
Output:
C:\Python27\python.exe C:/Users/mkr/Documents/work/New_RoD/testing/run.py
This block works
Executing Celery cmd: ping 10.10.10.10
TIMEOUT
Pinging 10.10.10.10 with 32 bytes of data:
Request timed out.
Request timed out.
Executing Celery cmd: ping 10.10.10.10
TIMEOUT
Pinging 10.10.10.10 with 32 bytes of data:
Request timed out.
Request timed out.
Executing Celery cmd: ping 10.10.10.10
TIMEOUT
Pinging 10.10.10.10 with 32 bytes of data:
Request timed out.
Request timed out.
This block errors
Executing Celery cmd: ping 10.10.10.10
Executing Celery cmd: ping 10.10.10.10
Executing Celery cmd: ping 10.10.10.10
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/Users/mkr/Documents/work/New_RoD/testing/run.py", line 18, in run
out = task.info['PROGRESS']
File "C:\Python27\lib\site-packages\celery\result.py", line 356, in result
return self._get_task_meta()['result']
File "C:\Python27\lib\site-packages\celery\result.py", line 339, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "C:\Python27\lib\site-packages\celery\backends\base.py", line 292, in get_task_meta
meta = self._get_task_meta_for(task_id)
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/Users/mkr/Documents/work/New_RoD/testing/run.py", line 18, in run
out = task.info['PROGRESS']
File "C:\Python27\lib\site-packages\celery\result.py", line 356, in result
return self._get_task_meta()['result']
File "C:\Python27\lib\site-packages\celery\result.py", line 339, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "C:\Python27\lib\site-packages\celery\backends\base.py", line 292, in get_task_meta
meta = self._get_task_meta_for(task_id)
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/Users/mkr/Documents/work/New_RoD/testing/run.py", line 18, in run
out = task.info['PROGRESS']
File "C:\Python27\lib\site-packages\celery\result.py", line 356, in result
return self._get_task_meta()['result']
File "C:\Python27\lib\site-packages\celery\result.py", line 339, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "C:\Python27\lib\site-packages\celery\backends\base.py", line 292, in get_task_meta
meta = self._get_task_meta_for(task_id)
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
Process finished with exit code 0
Using app.AsyncResult worked for me.
This works for me, as suggested by https://stackoverflow.com/users/2682417/mylari in one of the comments above:
celery1 = Celery('mytasks', backend='redis://localhost:6379/1', broker='redis://localhost:6379/0')

def t_status(id):
    c = celery1.AsyncResult(id)
    return c
Calling method:
@app.route("/tasks/<task_id>", methods=["GET"])
def get_status(task_id):
    task_result = t_status(task_id)
    result = {
        "task_id": task_id,
        "task_status": task_result.status,
        "task_result": task_result.result
    }
    return jsonify(result), 200
Celery operations are not thread-safe - you probably want to wrap the call to task.info in a lock.
Also, mixing Celery and threads like that is a little odd.
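A minimal sketch of that suggestion (the lock and helper names are illustrative, not from the question's code):

import threading

# One lock shared by every thread that reads AsyncResult state.
task_info_lock = threading.Lock()

def get_task_info(task):
    # Serialize access to task.info across threads.
    with task_info_lock:
        return task.info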
Try this:
from celery.result import AsyncResult
from iota_celery.app_iota import app as celery_app
AsyncResult(x, app=celery_app).revoke(terminate=True, signal='SIGKILL')
celery.AsyncResult works for me:
celery = make_celery(flask_app)
task_result = celery.AsyncResult(task_id)
I'm running a Flask app that internally uses a library written in Node.js, which I access through ZeroRPC (the actual Node process is managed by Circus). This works fine on its own; I can unit test it with no issues. But when I start the Flask app as a listening process and call into a REST API which uses this library, the program throws an exception when trying to start the service. The code to start the service is as follows:
import os

import zerorpc
from circus.watcher import Watcher
from circus.arbiter import ThreadedArbiter
from circus.util import (DEFAULT_ENDPOINT_DEALER, DEFAULT_ENDPOINT_SUB,
                         DEFAULT_ENDPOINT_MULTICAST)

class Node(object):

    # {... omitted code that initializes self._arbiter and self._client ...}

    def start(self):
        if self._arbiter and self._client:
            return
        port = 'ipc:///tmp/inlinejs_%s' % os.getpid()
        args = 'lib/server.js --port %s' % port
        watcher = Watcher('node', '/usr/local/bin/node', args,
                          working_dir=INLINEJS_DIR)
        self._arbiter = ThreadedArbiter(
            [watcher], DEFAULT_ENDPOINT_DEALER,
            DEFAULT_ENDPOINT_SUB,
            multicast_endpoint=DEFAULT_ENDPOINT_MULTICAST)
        self._arbiter.start()
        self._client = zerorpc.Client()
        self._client.connect(port)
This function returns, but shortly afterwards, in a separate thread, I get this error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/python/lib/python2.7/site-packages/circus/_patch.py", line 21, in _bootstrap_inner
self.run()
File "/python/lib/python2.7/site-packages/circus/arbiter.py", line 647, in run
return Arbiter.start(self)
File "/python/lib/python2.7/site-packages/circus/util.py", line 319, in _log
return func(self, *args, **kw)
File "/python/lib/python2.7/site-packages/circus/arbiter.py", line 456, in start
self.initialize()
File "/python/lib/python2.7/site-packages/circus/util.py", line 319, in _log
return func(self, *args, **kw)
File "/python/lib/python2.7/site-packages/circus/arbiter.py", line 427, in initialize
self.evpub_socket.bind(self.pubsub_endpoint)
File "socket.pyx", line 432, in zmq.core.socket.Socket.bind (zmq/core/socket.c:4022)
File "checkrc.pxd", line 21, in zmq.core.checkrc._check_rc (zmq/core/socket.c:5838)
ZMQError: Address already in use
I have no idea why this is happening, especially since it doesn't happen in unit tests. Can anyone shed any light?
In general, when you get this type of "address in use" error, it means that your program is trying to bind to an IP port that something else got to first.
I am not familiar with this library, but since the error is raised by evpub_socket.bind, I am going to guess that you have a conflict with the port number specified by the constant DEFAULT_ENDPOINT_SUB. From the circus source code I see these constants:
DEFAULT_ENDPOINT_DEALER = "tcp://127.0.0.1:5555"
DEFAULT_ENDPOINT_SUB = "tcp://127.0.0.1:5556"
DEFAULT_ENDPOINT_STATS = "tcp://127.0.0.1:5557"
Check your system (netstat) and see if any process is listening on ports 5555, 5556, 5557. Or perhaps you are running this program twice and you forgot about the first one.
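If something else does own those ports, one option (a sketch only; the port numbers are arbitrary) is to replace the defaults in the start() method above with explicit, unused endpoints:

# Sketch: bind the arbiter to explicit, unused ports instead of the
# circus defaults on 5555/5556, so another process cannot collide.
self._arbiter = ThreadedArbiter(
    [watcher],
    'tcp://127.0.0.1:6555',   # dealer endpoint
    'tcp://127.0.0.1:6556',   # pub/sub endpoint
    multicast_endpoint=DEFAULT_ENDPOINT_MULTICAST)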
I'm trying to make a worker run only one task at a time, then shut down. I've got the shutdown part working correctly (some background here: celery trying shutdown worker by raising SystemExit in task_postrun signal but always hangs and the main process never exits), but when it shuts down I'm getting the error:
[2013-02-13 12:19:05,689: CRITICAL/MainProcess] Couldn't ack 1, reason:AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 104, in ack_log_error
self.ack()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 99, in ack
self.channel.basic_ack(self.delivery_tag)
File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/channel.py", line 1742, in basic_ack
self._send_method((60, 80), args)
File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/abstract_channel.py", line 75, in _send_method
self.connection.method_writer.write_method(self.channel_id,
AttributeError: 'NoneType' object has no attribute 'method_writer'
Why is this happening? Not only does it fail to ack, it also purges all of the other tasks that are left in the queue (a big problem).
How do I fix this?
UPDATE
Below is the stack trace with everything updated (pip install -U kombu amqp amqplib celery):
[2013-02-13 11:58:05,357: CRITICAL/MainProcess] Internal error: AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 372, in process_task
req.execute_using_pool(self.pool)
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 219, in execute_using_pool
timeout=task.time_limit)
File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 137, in apply_async
**options)
File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 27, in apply_target
callback(target(*args, **kwargs))
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 333, in on_success
self.acknowledge()
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 439, in acknowledge
self.on_ack(logger, self.connection_errors)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 98, in ack_log_error
self.ack()
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 93, in ack
self.channel.basic_ack(self.delivery_tag)
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1562, in basic_ack
self._send_method((60, 80), args)
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 57, in _send_method
self.connection.method_writer.write_method(
AttributeError: 'NoneType' object has no attribute 'method_writer'
Exiting in task_postrun is not recommended, as task_postrun is executed outside of the "task body" error handling.
Exactly what happens when a task calls sys.exit is not well defined, and actually it depends on the pool being used. With multiprocessing the child process will simply be replaced by a new one. In other pools the worker will shut down, but this is something that is likely to change so that it's consistent with multiprocessing behavior. Calling exit outside of the task body is regarded as an internal error (crash). The "task body" is whatever executes at task.__call__().
I think a better solution for this would be to use a custom execution strategy:
from celery.worker import strategy
from functools import wraps

@staticmethod
def shutdown_after_strategy(task, app, consumer):
    default_handler = strategy.default(task, app, consumer)

    def _decorate_to_exit_after(fun):
        @wraps(fun)
        def _inner(*args, **kwargs):
            try:
                return fun(*args, **kwargs)
            finally:
                raise SystemExit()
        return _inner

    return _decorate_to_exit_after(default_handler)

@celery.task(Strategy=shutdown_after_strategy)
def shutdown_after():
    print('will shutdown after this')
This isn't exactly beautiful, but the execution strategy is there to optimize task execution and not to be easily extendable (the worker "precompiles" the execution path for each task type by caching Task.Strategy).
In Celery 3.1 you can extend the worker and consumer using "bootsteps", so likely there will be a prettier solution then.
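For reference, a minimal skeleton of a worker bootstep as sketched in the Celery 3.1 docs (the step below does nothing task-specific; it only shows the extension point):

from celery import Celery
from celery import bootsteps

class ExampleWorkerStep(bootsteps.StartStopStep):
    """Hooks into worker startup and shutdown."""

    def start(self, worker):
        print('worker is starting')

    def stop(self, worker):
        print('worker is stopping')

app = Celery('tasks', broker='redis://localhost:6379/0')
app.steps['worker'].add(ExampleWorkerStep)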