I'm following the instructions at https://github.com/jorilallo/celery-flower-heroku to deploy the Flower Celery monitoring app to Heroku.
After configuring and deploying my app, I see the following in the Heroku logs:
Traceback (most recent call last):
File "/app/.heroku/python/bin/flower", line 9, in <module>
load_entry_point('flower==0.7.0', 'console_scripts', 'flower')()
File "/app/.heroku/python/lib/python2.7/site-packages/flower/__main__.py", line 11, in main
flower.execute_from_commandline()
File "/app/.heroku/python/lib/python2.7/site-packages/celery/bin/base.py", line 306, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flower/command.py", line 99, in handle_argv
return self.run_from_argv(prog_name, argv)
File "/app/.heroku/python/lib/python2.7/site-packages/flower/command.py", line 75, in run_from_argv
**app_settings)
File "/app/.heroku/python/lib/python2.7/site-packages/flower/app.py", line 40, in __init__
max_tasks_in_memory=max_tasks)
File "/app/.heroku/python/lib/python2.7/site-packages/flower/events.py", line 60, in __init__
state = shelve.open(self._db)
File "/app/.heroku/python/lib/python2.7/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/app/.heroku/python/lib/python2.7/shelve.py", line 223, in __init__
Shelf.__init__(self, anydbm.open(filename, flag), protocol, writeback)
File "/app/.heroku/python/lib/python2.7/anydbm.py", line 85, in open
return mod.open(file, flag, mode)
File "/app/.heroku/python/lib/python2.7/dumbdbm.py", line 250, in open
return _Database(file, mode)
File "/app/.heroku/python/lib/python2.7/dumbdbm.py", line 71, in __init__
f = _open(self._datfile, 'w')
IOError: [Errno 2] No such file or directory: 'postgres://USERNAME:PASSWORD#ec2-HOST.compute-1.amazonaws.com:5432/DBNAME.dat'
Notice the .dat suffix there? I have no idea where it comes from; it's not present in my DATABASE_URL env variable.
Furthermore, the error above is with Flower 0.7. I also tried installing 0.6, which gets further (the DB is correctly recognized and a connection established), but I then get the following errors once Flower starts:
2014-06-19T15:14:02.464424+00:00 app[web.1]: [E 140619 15:14:02 state:138] Failed to inspect workers: '[Errno 104] Connection reset by peer', trying again in 128 seconds
2014-06-19T15:14:02.464844+00:00 app[web.1]: [E 140619 15:14:02 events:103] Failed to capture events: '[Errno 104] Connection reset by peer', trying again in 128 seconds.
Loading Flower in my browser does show a few tabs of stuff, but there is no data.
How do I resolve these issues?
Flower doesn't support database persistence. It saves its state to file(s) using Python's shelve module, so its persistence option expects a local file path, not a database URL.
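That also explains the .dat suffix: per the traceback, shelve falls back (via anydbm) to the dumb-dbm backend, which appends .dat and .dir to whatever filename it is given. A minimal sketch, using dbm.dumb, the Python 3 name for the dumbdbm module seen in the traceback:

import dbm.dumb  # Python 2: import dumbdbm

# dumbdbm stores the data in "<filename>.dat" plus an index in "<filename>.dir".
db = dbm.dumb.open('flower_state', 'c')
db[b'worker1'] = b'online'
db.close()  # leaves flower_state.dat and flower_state.dir on disk

# Handing it DATABASE_URL as the "filename" is why it tries to create
# 'postgres://...DBNAME.dat' as a local file and raises IOError.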
Related
I am trying some Python code in which I use pyarrow to connect to a Hadoop server with fs.HadoopFileSystem(host=host_value, port=port_value), but every time I get this error message:
self.parquet_writer = HDFSWriter(host_value='hdfs://10.110.8.239',port_value=9000)
File "/app/aerial_server.py", line 54, in __init__
self.hdfs_client = fs.HadoopFileSystem(host=host_value, port=port_value)
File "pyarrow/_hdfs.pyx", line 89, in pyarrow._hdfs.HadoopFileSystem.__init__
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: HDFS connection failed
Environment variables:
PYTHON_VERSION=3.7.13
HADOOP_OPTS=-Djava.library.path=/app/hadoop-3.3.2/lib/nativ
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
HADOOP_INSTALL=/app/hadoop-3.3.2
ARROW_LIBHDFS_DIR=/app/hadoop-3.3.2/lib/native
HADOOP_MAPRED_HOME=/app/hadoop-3.3.2
HADOOP_COMMON_HOME=/app/hadoop-3.3.2
HADOOP_HOME=/app/hadoop-3.3.2
HADOOP_HDFS_HOME=/app/hadoop-3.3.2
PYTHON_PIP_VERSION=22.0.4
CLASSPATH=/app/hadoop-3.3.2/bin/hdfs classpath --glob
HADOOP_COMMON_LIB_NATIVE_DIR=/app/hadoop-3.3.2/lib/native
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/app/hadoop-3.3.2/sbin:/app/hadoop-3.3.2/bin
_=/usr/bin/env
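Two things stand out in that dump: CLASSPATH holds the literal command string rather than its output, and the HADOOP_OPTS path reads .../lib/nativ, which may be a typo. pyarrow's libhdfs binding needs the expanded classpath. A minimal sketch of the usual setup, not a verified fix; the hdfs binary path, host, and port are taken from the question:

import os
import subprocess
from pyarrow import fs

# CLASSPATH must contain the *output* of `hdfs classpath --glob`,
# not the command string itself.
os.environ['CLASSPATH'] = subprocess.check_output(
    ['/app/hadoop-3.3.2/bin/hdfs', 'classpath', '--glob']).decode().strip()

# The host is normally the bare hostname/IP; the hdfs:// scheme is not
# needed when host and port are passed separately.
hdfs_client = fs.HadoopFileSystem(host='10.110.8.239', port=9000)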
I tried to get all the active/scheduled/reserved tasks in Redis:
from celery.task.control import inspect
inspect_obj = inspect()
inspect_obj.active()
inspect_obj.scheduled()
inspect_obj.reserved()
But I was greeted with the list of errors below.
My virtual environment is HubblerAPI, and I am running this from the EC2 console.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 81, in active
return self._request('dump_active', safe=safe)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 71, in _request
timeout=self.timeout, reply=True,
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 316, in broadcast
limit, callback, channel=channel,
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 771, in default_channel
self.connection
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 756, in connection
self._connection = self._establish_connection()
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 711, in _establish_connection
conn = self.transport.establish_connection()
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 111] Connection refused
My celery config file is as follows:
BROKER_TRANSPORT = 'redis'
BROKER_TRANSPORT_OPTIONS = {
    'queue_name_prefix': 'dev-',
    'wait_time_seconds': 10,
    'polling_interval': 30,
    # The polling interval decides the number of seconds to sleep
    # between unsuccessful polls.
    'visibility_timeout': 3600 * 5,
    # If a task is not acknowledged within the visibility_timeout, the
    # task will be redelivered to another worker and executed.
}
CELERY_MESSAGES_DB = 6
BROKER_URL = "redis://%s:%s/%s" % (AWS_REDIS_ENDPOINT, AWS_REDIS_PORT, CELERY_MESSAGES_DB)
What am I doing wrong here? The error log suggests that it's not using the Redis broker.
It looks like your Python code doesn't pick up your config, since it is attempting to use RabbitMQ's AMQP protocol instead of the configured Redis broker.
I suggest the following:
https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html
Your config looks like Django-style Celery settings, yet it doesn't seem you are using Celery with Django.
https://docs.celeryq.dev/en/latest/django/first-steps-with-django.html
The issue was using "BROKER_URL" instead of "CELERY_BROKER_URL" in settings.py. Celery wasn't finding the URL and was defaulting to the RabbitMQ port instead of the Redis port.
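For reference, a minimal sketch of the wiring from the Django guide linked above; "proj" is a placeholder for the actual project package:

# settings.py
CELERY_BROKER_URL = "redis://%s:%s/%s" % (AWS_REDIS_ENDPOINT, AWS_REDIS_PORT, CELERY_MESSAGES_DB)

# celery.py (next to settings.py)
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
# namespace='CELERY' makes Celery read settings prefixed with CELERY_,
# so CELERY_BROKER_URL is picked up, while a bare BROKER_URL is ignored
# and Celery falls back to the amqp://localhost default from the traceback.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()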
I am new to the Neo4j world. I have used it successfully on my MacBook. Now I am deploying it on a remote Linux machine with the same setup, but I keep getting this ProtocolError. What causes this issue, and how do I fix it? I have been banging my head against this error for days.
Traceback (most recent call last):
File "/root/dev/knowledgeGraphH/knowledge/media_entity_mapper.py", line 31, in <module>
main()
File "/root/dev/knowledgeGraphH/knowledge/media_entity_mapper.py", line 28, in main
map_media_to_entities()
File "/root/dev/knowledgeGraphH/knowledge/media_entity_mapper.py", line 7, in map_media_to_entities
data_manager = DataManager()
File "/root/dev/knowledgeGraphH/knowledge/data_manager/data_manager.py", line 13, in __init__
self.graphDB = Neo4jManager()
File "/root/dev/knowledgeGraphH/knowledge/neo4j_manager.py", line 10, in __init__
self.session = self.driver.session()
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/session.py", line 148, in session
session = Session(self)
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/session.py", line 461, in __init__
self.connection = connect(driver.host, driver.port, driver.ssl_context, **driver.config)
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 465, in connect
return Connection(s, der_encoded_server_certificate=der_encoded_server_certificate, **config)
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 237, in __init__
self.fetch()
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 326, in fetch
self.acknowledge_failure()
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 273, in acknowledge_failure
fetch()
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 311, in fetch
raw.writelines(self.channel.chunk_reader())
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 169, in chunk_reader
chunk_header = self._recv(2)
File "/root/dev/knowledgeGraphH/env/lib/python2.7/site-packages/neo4j/v1/connection.py", line 152, in _recv
raise ProtocolError("Server closed connection")
neo4j.v1.exceptions.ProtocolError: Server closed connection
This seems to be a port issue. Is the Bolt port open, and do you have access to it?
Check the output of the following command:
lsof -i tcp:7687
Change the port number if you have changed the Bolt port.
It turns out it was because I used the wrong credentials for this connection.
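For anyone landing here, a minimal sketch of passing credentials with the 1.x driver from the traceback (neo4j.v1); the host and password are placeholders:

from neo4j.v1 import GraphDatabase, basic_auth

# With this driver generation, a failed auth handshake can surface as
# "Server closed connection" rather than a clean authentication error.
driver = GraphDatabase.driver("bolt://your-remote-host:7687",
                              auth=basic_auth("neo4j", "your-password"))
session = driver.session()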
I'm currently facing this problem when batch uploading videos with youtube-upload from a .sh script. What can I do to prevent this? Can anyone show me how to handle this error from the .sh script? Should I retry the last row of the script, or something else?
Traceback (most recent call last):
File "/usr/local/bin/youtube-upload", line 5, in <module>
main.run()
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 214, in run
sys.exit(lib.catch_exceptions(EXIT_CODES, main, sys.argv[1:]))
File "/Library/Python/2.7/site-packages/youtube_upload/lib.py", line 35, in catch_exceptions
fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 211, in main
run_main(parser, options, args)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 153, in run_main
video_id = upload_youtube_video(youtube, options, video_path, len(args), index)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 121, in upload_youtube_video
request_body, progress_callback=progress.callback)
File "/Library/Python/2.7/site-packages/youtube_upload/upload_video.py", line 37, in upload
RETRIABLE_EXCEPTIONS, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/youtube_upload/lib.py", line 71, in retriable_exceptions
raise exc
socket.error: [Errno 54] Connection reset by peer
This is what I'm using in the .sh file; I've repeated this row 20 times for different videos.
youtube-upload --title="" --client-secrets=client_secrets.json --description="" --tags="" --thumbnail="" --playlist="" --privacy="unlisted" /users/desktop/video/4.mp4
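One option, since a connection reset is usually transient: wrap each upload in a retry loop. A hedged sketch in Python; the flags mirror the command above (trimmed for brevity), and max_tries/delay are arbitrary values to tune:

import subprocess
import time

def upload_with_retry(path, max_tries=3, delay=60):
    cmd = ['youtube-upload', '--client-secrets=client_secrets.json',
           '--privacy=unlisted', path]
    for attempt in range(1, max_tries + 1):
        if subprocess.call(cmd) == 0:
            return True  # exit code 0: upload succeeded
        print('attempt %d failed, retrying in %ds' % (attempt, delay))
        time.sleep(delay)
    return False

upload_with_retry('/users/desktop/video/4.mp4')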
I am using Django 1.2 and Python 2.6 with a MySQL server.
After working for a while, selecting and updating records, I got this error:
Exception in thread Thread-269:
Traceback (most recent call last):
File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
File "dispatcher.py", line 42, in run
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 80, in __len__
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 271, in iterator
File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 677, in results_iter
File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 731, in execute_sql
File "/usr/lib/python2.6/site-packages/django/db/backends/__init__.py", line 75, in cursor
File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 297, in _cursor
File "/usr/lib64/python2.6/site-packages/MySQLdb/__init__.py", line 81, in Connect
File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 187, in __init__
OperationalError: (2001, "Can't create UNIX socket (24)")
Here are lines 41 and 42 of my dispatcher.py:
dataList = Mydata.objects.filter(date__isnull=True)[:chunkSize]
print '%s - DB worker finished reading %s entries' % (datetime.now(), len(dataList))
Any clue why I get this error? I tried googling but could not find an answer.
I am connecting to the DB through Django, on localhost.
On my machine, errno 24 is defined as:
#define EMFILE 24 /* Too many open files */
This means you are running out of file descriptors: your app is "leaking" them by opening files (and not closing them) again and again.
Or maybe you're not forgetting to close files, but simply have too many open at the same time.
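A quick way to confirm the leak, plus the usual fixes, as a hedged sketch; the /proc path is Linux-specific and the code assumes it runs inside your Django app:

import os
from django.db import connection

def open_fd_count():
    # Each entry in /proc/self/fd is one descriptor held by this process.
    return len(os.listdir('/proc/self/fd'))

print('open fds: %d' % open_fd_count())

# Close files deterministically instead of waiting for garbage collection.
with open('somefile.log') as f:
    data = f.read()

# Each MySQL connection is a file descriptor too; Django will not close its
# per-thread connection for you inside hand-rolled worker threads.
connection.close()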