Weird error with django-celery or python

I'm having trouble running tasks. When I run ./manage.py celeryd -B -l info, it correctly loads all tasks into the registry.
The error happens when any of the tasks runs - the task starts, does its thing, and then I get:
[ERROR/MainProcess] Thread 'ResultHandler' crashed: ValueError('Octet out of range 0..2**64-1',)
Traceback (most recent call last):
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 221, in run
return self.body()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 458, in body
on_state_change(task)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 436, in on_state_change
state_handlers[state](*args)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 413, in on_ack
cache[job]._ack(i, time_accepted, pid)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 1016, in _ack
self._accept_callback(pid, time_accepted)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 424, in on_accepted
self.acknowledge()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 516, in acknowledge
self.on_ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/consumer.py", line 405, in ack
message.ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/kombu-2.1.0-py2.7.egg/kombu/transport/base.py", line 98, in ack
self.channel.basic_ack(self.delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/channel.py", line 1740, in basic_ack
args.write_longlong(delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/serialization.py", line 325, in write_longlong
raise ValueError('Octet out of range 0..2**64-1')
ValueError: Octet out of range 0..2**64-1
I must also note that this worked on my previous Lion install, and even with a blank virtualenv and some test code, the same error occurs when a task runs.
This happens with Python 2.7.2 and 2.6.4.
Django==1.3.1
amqplib==1.0.2
celery==2.4.6
django-celery==2.4.2

It appears there is some bug with the Homebrew-installed Python. I've now switched to the native Lion one (2.7.1) and it works.
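As a quick sanity check (a diagnostic sketch, not part of the original fix), you can confirm from inside the virtualenv which interpreter it is actually built against - Homebrew builds typically live under /usr/local, the native Lion one under /System/Library/Frameworks/Python.framework:

import sys
# Prints the interpreter this virtualenv resolved to and its full version;
# this is how you can tell a Homebrew 2.7.2 apart from the system 2.7.1.
print(sys.executable)
print(sys.version)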

Related

Error in Airflow Scheduler: it keeps re-launching

Hi, I'm running Apache Airflow 2.2.3 and recently started to get this error where the scheduler keeps re-launching, throwing the warning "DagFileProcessorManager (PID=xxxx) exited with exit code 1 - re-launching".
The Error:
[2022-11-15 14:19:45,922] {manager.py:318} WARNING - DagFileProcessorManager (PID=9176) exited with exit code 1 - re-launching
[2022-11-15 14:19:45,943] {manager.py:163} INFO - Launched DagFileProcessorManager with pid: 9182
[2022-11-15 14:19:45,963] {settings.py:52} INFO - Configured default timezone Timezone('UTC')
Process ForkProcess-6051:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/dag_processing/manager.py", line 287, in _run_processor_manager
processor_manager.start()
File "/usr/local/lib/python3.8/dist-packages/airflow/dag_processing/manager.py", line 520, in start
return self._run_parsing_loop()
File "/usr/local/lib/python3.8/dist-packages/airflow/dag_processing/manager.py", line 530, in _run_parsing_loop
self._refresh_dag_dir()
File "/usr/local/lib/python3.8/dist-packages/airflow/dag_processing/manager.py", line 683, in _refresh_dag_dir
[
File "/usr/local/lib/python3.8/dist-packages/airflow/dag_processing/manager.py", line 686, in <listcomp>
if might_contain_dag(info.filename, True, z)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/file.py", line 221, in might_contain_dag
with zip_file.open(file_path) as current_file:
File "/usr/lib/python3.8/zipfile.py", line 1535, in open
raise BadZipFile("Bad magic number for file header")
zipfile.BadZipFile: Bad magic number for file header
This error prevents the Airflow UI from loading new DAGs or modifications to existing DAG code.
At first I thought it might be related to the Python code I have in my "includes" folder inside the dags folder, so I added that folder to my .airflowignore file to skip those files, but the scheduler keeps re-launching.
The only way I've managed to reload new DAGs or DAG modifications is by executing the "airflow db init" command, but this is not optimal.
Please help!
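The traceback shows the processor dying inside might_contain_dag while reading a .zip file in the DAGs folder (Airflow treats zip archives there as packaged DAGs), so one way to narrow it down is to check every archive the scheduler would try to open. A minimal diagnostic sketch, assuming a typical dags path (adjust DAGS_FOLDER to your setup):

import zipfile
from pathlib import Path

DAGS_FOLDER = Path("/opt/airflow/dags")  # assumption: your dags folder

# Any file with a .zip extension that is corrupt, truncated, or not really
# a zip archive will raise BadZipFile, matching the scheduler's traceback.
for path in DAGS_FOLDER.rglob("*.zip"):
    try:
        with zipfile.ZipFile(path) as zf:
            zf.testzip()
    except zipfile.BadZipFile as exc:
        print("bad archive: %s (%s)" % (path, exc))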

Python KeyError: 'flags', when running BlueBorne script

I am trying to run the file l2cap_infra.py with Python 2, but I am getting the following error:
Traceback (most recent call last):
File "l2cap_infra.py", line 524, in <module>
main(*sys.argv[1:])
File "l2cap_infra.py", line 508, in main
l2cap_loop, _ = create_l2cap_connection(src_hci, dst_bdaddr, pcap_path=pcap_path)
File "l2cap_infra.py", line 489, in create_l2cap_connection
handle_information_negotiation_process(l2cap_loop)
File "l2cap_infra.py", line 425, in handle_information_negotiation_process
l2cap_loop.send(info_req)
File "l2cap_infra.py", line 142, in send
self._sock.send(packet)
File "l2cap_infra.py", line 213, in send
self.send_fragment(Raw(str(l2cap)[i:i+L2CAP_DEFAULT_MTU]), i == 0)
File "l2cap_infra.py", line 223, in send_fragment
hci = HCI_Hdr() / HCI_ACL_Hdr(handle=scapy_handle, flags=scapy_flags) / frag
File "/usr/local/lib/python2.7/dist-packages/scapy/base_classes.py", line 227, in __call__
i.__init__(*args, **kargs)
File "/usr/local/lib/python2.7/dist-packages/scapy/packet.py", line 135, in __init__
self.fields[f] = self.get_field(f).any2i(self, v)
File "/usr/local/lib/python2.7/dist-packages/scapy/packet.py", line 170, in get_field
return self.fieldtype[fld]
KeyError: 'flags'
This might be a version conflict; I had a similar problem and I had to edit a file in /usr/local/lib/python2.7/.
What code do I have to change in that linked file or in one of my pip libraries to make this code work?
It seems it's a compatibility issue between BlueBorne and Scapy.
You (most likely) have installed the latest Scapy version (v2.4.0), which dropped the flags kwarg from scapy.layers.bluetooth.HCI_ACL_Hdr's initializer, while BlueBorne (l2cap_infra.py, and possibly other files) was not updated (or branched) accordingly.
The latest version that still has it is v2.3.3 ([GitHub]: secdev/scapy - (v2.3.3) scapy/scapy/layers/bluetooth.py).
Possible solutions:
Uninstall your current Scapy version (pip uninstall scapy) and install v2.3.3 (pip install scapy==2.3.3). This is probably the simplest (and most suitable) option for you ([PyPI]: scapy 2.3.3)
Submit a bug to BlueBorne and wait for them to add support for newer Scapy versions
Fix it yourself (converting the "flags" (v2.3.3) kwarg to the "PB" + "BC" (v2.4.0) kwargs), and maybe submit a patch :) - a sketch of this conversion follows below
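A hedged sketch of that last option, assuming the old 4-bit flags value carries the packet-boundary bits in its low two bits and the broadcast bits above them (verify the bit layout against your Scapy sources before relying on it):

from scapy.layers.bluetooth import HCI_ACL_Hdr, HCI_Hdr

scapy_handle = 0x0b  # hypothetical connection handle, for illustration only
scapy_flags = 0x02   # old-style flags value as used in l2cap_infra.py

# Scapy <= 2.3.3 accepted the combined field:
#   hci = HCI_Hdr() / HCI_ACL_Hdr(handle=scapy_handle, flags=scapy_flags)
# Scapy >= 2.4.0 splits it into PB (packet boundary) and BC (broadcast):
hci = HCI_Hdr() / HCI_ACL_Hdr(handle=scapy_handle,
                              PB=scapy_flags & 0b11,
                              BC=(scapy_flags >> 2) & 0b11)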

PyDev 3.9.2 - Cannot attach the debugger to gevent-based processes

I'm having problems attaching to gevent-based processes with PyDev's debugger under PyDev 3.9.2.
It works fine in PyDev 3.9.1 and LiClipse 3.9.1, just not with 3.9.2, which is the latest PyDev right now (6 Feb 2015).
It also works properly when a piece of code is run directly under PyDev's debugger rather than being attached to externally.
Note that it doesn't depend on whether there are breakpoints set (enabled or not) - the mere fact of attaching to a process suffices for the exception to be raised.
Here is a sample module to reproduce the behaviour, along with the two resulting exceptions - one from PyDev's point of view and the other from the gevent code's.
Can anyone please shed any light on it? Thanks a lot.
from gevent.monkey import patch_all
patch_all()

import threading
import gevent

def myfunc():
    t = threading.current_thread()
    print(t.name)

while True:
    gevent.spawn(myfunc)
    gevent.sleep(1)
Debug Server at port: 5678
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx",...
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx", line 112, in gevent._semaphore.Semaphore.acquire (gevent/gevent._semaphore.c:3004)
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 330, in switch
switch_out()
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 334, in switch_out
raise AssertionError('Impossible to call blocking function in the event loop callback')
AssertionError: Impossible to call blocking function in the event loop callback

Scrapy: Unhandled Error

My scraper runs fine for about an hour, and then I start seeing these errors:
2014-01-16 21:26:06+0100 [-] Unhandled Error
Traceback (most recent call last):
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 93, in start
self.start_reactor()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 130, in start_reactor
reactor.run(installSignalHandlers=False) # blocking call
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1192, in run
self.mainLoop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1201, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 824, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/utils/reactor.py", line 41, in __call__
return self._func(*self._a, **self._kw)
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 106, in _next_request
if not self._next_request_from_scheduler(spider):
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 132, in _next_request_from_scheduler
request = slot.scheduler.next_request()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 64, in next_request
request = self._dqpop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 94, in _dqpop
d = self.dqs.pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/pqueue.py", line 43, in pop
m = q.pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/squeue.py", line 18, in pop
s = super(SerializableQueue, self).pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/queue.py", line 157, in pop
self.f.seek(-size-self.SIZE_SIZE, os.SEEK_END)
exceptions.IOError: [Errno 22] Invalid argument
What could possibly be causing this? My version is 0.20.2. Once I get this error, Scrapy stops doing anything. Even if I stop it and run it again (using a JOBDIR directory), it still gives me these errors. I have to delete the job directory and start over to get rid of them.
Try this:
Ensure that you're running the latest Scrapy version (0.24 at the time of writing)
Search inside the resumed JOBDIR folder and back up the file requests.seen
After backing it up, remove the Scrapy job folder
Start the crawl again, resuming with the JOBDIR= option
Stop the crawl
Replace the newly created requests.seen with the previously backed-up one
Start the crawl again (a sketch of these steps follows below)
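A minimal sketch of the file juggling above, assuming a hypothetical JOBDIR of crawls/job1 (substitute your own path):

import os
import shutil

jobdir = "crawls/job1"                       # assumption: your JOBDIR
seen = os.path.join(jobdir, "requests.seen")
backup = "requests.seen.bak"

shutil.copy(seen, backup)   # back up requests.seen
shutil.rmtree(jobdir)       # remove the whole job folder
# ... start the crawl with the same JOBDIR, then stop it ...
shutil.copy(backup, seen)   # overwrite the fresh requests.seen with the backup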

Parallel Python - RuntimeError: Communication pipe read error

I'm using Parallel Python to run multiple dynamic simulations using a module called OrcFxAPI. The program works perfectly if it is run as a Python program on my machine; however, if I convert it to an exe file using py2exe and then run it, I get the following error:
Traceback (most recent call last):
File "Analysis.pyc", line 500, in multiprocessor
File "pp.pyc", line 342, in __init__
File "pp.pyc", line 506, in set_ncpus
File "pp.pyc", line 140, in __init__
File "pp.pyc", line 152, in start
File "pptransport.pyc", line 140, in receive
RuntimeError: Communication pipe read error
It is failing at this line in my program:
job_server = pp.Server(ppservers=ppservers)
but I think it might have something to do with the path used to import the OrcFxAPI module when submitting the job:
job = job_server.submit(max_seastate, (gui_vars, case_list, case, line_info, output_vars), (), ("OrcFxAPI",), callback=finished, callbackargs=(case_no, no_of_cases,))
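One hedged way to test that theory (an assumption, not a confirmed fix): drop "OrcFxAPI" from the modules tuple and import it explicitly inside a small wrapper, so a failing import on a worker surfaces as an ordinary ImportError instead of a pipe error:

def max_seastate_wrapper(*args):
    # Imported on the worker side; fails loudly if the frozen exe
    # cannot locate the OrcFxAPI module on its path.
    import OrcFxAPI
    return max_seastate(*args)

job = job_server.submit(max_seastate_wrapper,
                        (gui_vars, case_list, case, line_info, output_vars),
                        (max_seastate,),  # ship the real function as a dependency
                        (),               # no auto-imported modules
                        callback=finished,
                        callbackargs=(case_no, no_of_cases))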
