I'm trying to build a queue system using Celery + SQS.
In my local environment with LocalStack, the worker doesn't receive any messages; it just doesn't show anything. There is a similar question from a while ago, but my configuration already matches what it suggests. All my other SQS/SNS operations work fine from other functions; it just doesn't work from Celery.
My current setup looks like this:
Docker config:
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=sqs,sns
      - HOSTNAME=localstack
      - HOSTNAME_EXTERNAL=localstack
    ports:
      - '4566:4566'
    networks:
      - platform_default
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
And the Celery instantiation is below; I used get_queue_url directly to be sure the queue link is right.
return Celery(
    "server",
    task_default_queue=config.sqs_celery.queue_name,
    broker="sqs://",
    broker_url=f"sqs://{config.sqs_celery.aws_access_key_id}:{config.sqs_celery.aws_secret_access_key}@{config.sqs_celery.broker_site_port}",
    # in my case: sqs://localstack:localstack@localhost:4566
    broker_transport_options={
        "region": config.sqs_celery.region,
        "predefined_queues": {
            config.sqs_celery.queue_name: {
                "url": get_queue_url(config.sqs_celery.queue_name),
                # in my case: http://localstack:4566/000000000000/tasks
                "region": config.sqs_celery.region,
            }
        },
    },
)
Maybe you have some ideas of where to start, because I've lost half a day trying to figure out what is wrong.
SOLVED:
The queue URL couldn't be resolved, so you have to check every host in the configuration. I set it to "localhost" and it worked.
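For reference, a hedged sketch of the shape that ended up working for me; the credentials, region, and queue name ("tasks") below are placeholders from my setup, and the key point is that both the broker host and the predefined queue URL must be reachable from wherever the worker runs (localhost when the worker runs on the host machine against the published 4566 port):

from celery import Celery

# Sketch only; the values below are placeholders from my local setup.
app = Celery(
    "server",
    task_default_queue="tasks",
    broker_url="sqs://localstack:localstack@localhost:4566",
    broker_transport_options={
        "region": "us-east-1",  # placeholder region
        "predefined_queues": {
            "tasks": {
                # The worker polls this URL directly, so its host has to resolve
                # from the worker's point of view; "localhost" worked when the
                # worker ran on my machine against the published 4566 port.
                "url": "http://localhost:4566/000000000000/tasks",
                "region": "us-east-1",
            }
        },
    },
)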
I am very new to working on OCP. I have a task to schedule a curl statement via crontab, but I can't figure out where to pass the curl statement.
I'm not sure how to even start; I looked up some examples but didn't see anything that matches my requirement.
OCP is based on Kubernetes. In Kubernetes you have the CronJob resource, which is what you're looking for here: it lets you run a job on a specific schedule.
Since you need curl, you can use the curlimages/curl image, which has the curl binary included:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-curl-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: curl-job
              image: curlimages/curl
              imagePullPolicy: IfNotPresent
              args:
                - "http://your.url.you.want.to.curl"
          restartPolicy: Never
I have a really big problem with Channels.
When I run the ASGI server in production the problems come up, but there is no problem when running it from the terminal.
First, let me show you a little code:
class LogConsumer(AsyncConsumer):
    async def websocket_connect(self, event):
        print('before')
        await self.send({
            "type": "websocket.accept",
            "text": "hellow"
        })
        print('after')

    async def websocket_disconnect(self, event):
        print(event)
There are more handlers, but I commented them out to see whether the problem would go away, and guess what...
application = ProtocolTypeRouter({
    'websocket': AllowedHostsOriginValidator(
        AuthMiddlewareStack(
            URLRouter(
                [
                    url(r"^ws/monitoring/$", LogConsumer),
                ]
            )
        ),
    )
})
ASGI_APPLICATION = "fradmin_mainserver.routing.application"

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
    },
}

ASGI_THREADS = 1000
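A quick sanity check worth running before blaming supervisor: talk to the channel layer directly from a Django shell and confirm Redis is reachable (a minimal sketch, assuming the CHANNEL_LAYERS settings above):

# Run inside `python manage.py shell` with the settings above loaded.
import channels.layers
from asgiref.sync import async_to_sync

channel_layer = channels.layers.get_channel_layer()
# Send a test message and read it back; if this hangs or raises,
# the problem is the Redis layer rather than the consumer.
async_to_sync(channel_layer.send)("test_channel", {"type": "hello"})
print(async_to_sync(channel_layer.receive)("test_channel"))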
supervisor config
[fcgi-program:asgi]
socket=tcp://localhost:8008
environment=PYTHONPATH=/home/datis/.pyenv/versions/cv/bin/python
User=datis
environment=HOME="/home/datis",USER="datis"
# Directory where your site's project files are located
directory=/home/datis/PycharmProjects/fradmin_mainserver/
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "django_chanels.asgi" to match your project name
command=/home/datis/.pyenv/versions/cv/bin/daphne -u /run/uwsgi/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers fradmin_mainserver.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=1
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/var/log/uwsgi/asgi.log
redirect_stderr=true
OK, those are the configurations.
When I use
daphne fradmin_mainserver.asgi:application --bind 0.0.0.0 --port 8008 --verbosity 1
there is no problem, but when I run it under supervisor the only thing I get is:
2021-04-13 11:45:27,015 WARNING Application instance <Task pending coro=<SessionMiddlewareInstance.__call__() running at /home/datis/.pyenv/versions/3.6.8/envs/cv/lib/python3.6/site-packages/channels/sessions.py:183> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f02e2222d38>()]>> for connection <WebSocketProtocol client=['127.0.0.1', 46234] path=b'/ws/monitoring/'> took too long to shut down and was killed.
I even tried to start it as a systemd service with the current code:
[Unit]
Description=daphne daemon
After=network.target
[Service]
PIDFile=/run/daphne/pid
User=root
Group=root
WorkingDirectory=/home/datis/PycharmProjects/fradmin_mainserver/
Environment="DJANGO_SETTINGS_MODULE=fradmin_mainserver.settings"
ExecStart=/home/datis/.pyenv/versions/cv/bin/daphne --bind 0.0.0.0 --port 8008 --verbosity 0 fradmin_mainserver.asgi:application
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-abort
PrivateTmp=true
StandardOutput=file:/var/log/daphne/access.log
StandardError=file:/var/log/daphne/access.log
[Install]
WantedBy=multi-user.target
but the result was the same:
it's as if websocket_connect() is never called.
I tried creating the consumer with SyncConsumer as well, but the problem was the same.
But when I stop supervisorctl, it all works:
192.168.7.100:0 - - [13/Apr/2021:14:38:24] "WSCONNECTING /ws/monitoring/" - -
192.168.7.100:0 - - [13/Apr/2021:14:38:24] "WSCONNECT /ws/monitoring/" - -
before
192.168.7.100:0 - - [13/Apr/2021:14:39:25] "WSDISCONNECT /ws/monitoring/" - -
{'type': 'websocket.disconnect', 'code': 1001}
192.168.7.100:0 - - [13/Apr/2021:14:39:25] "WSCONNECTING /ws/monitoring/" - -
192.168.7.100:0 - - [13/Apr/2021:14:39:25] "WSCONNECT /ws/monitoring/" - -
before
192.168.7.100:0 - - [13/Apr/2021:14:39:27] "WSDISCONNECT /ws/monitoring/" - -
versions:
python:3.6.8
django: 2.2.6
channels:2.4.0
channels_redis: 2.4.2
daphne : 2.5.0
Please help me; this is a real production project and I don't know what to do anymore. I've tried everything and read every related line on Stack Overflow, GitHub, etc.
Change AsyncConsumer to AsyncWebsocketConsumer.
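For completeness, a sketch of the consumer from the question rewritten against AsyncWebsocketConsumer (names kept from the question; the accept handshake is handled by accept()):

from channels.generic.websocket import AsyncWebsocketConsumer

class LogConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print('before')
        # AsyncWebsocketConsumer performs the websocket.accept handshake here.
        await self.accept()
        print('after')

    async def disconnect(self, close_code):
        print(close_code)

The routing entry can stay as url(r"^ws/monitoring/$", LogConsumer) on Channels 2.4, since class-based consumers are passed directly in that version.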
I'm creating a basic project to test Flask + Celery + RabbitMQ + Docker.
For some reason that I don't know, when I call the Celery task, it seems to reach RabbitMQ, but it stays in the PENDING state forever and never changes to another state. I tried to use task.get(), but the code freezes. Example:
The Celery worker (e.g. worker_a.py) is something like this:
from celery import Celery

# Initialize Celery
celery = Celery('worker_a',
                broker='amqp://guest:guest@tfcd_rabbit:5672//',
                backend='rpc://')

[...]

@celery.task()
def add_nums(a, b):
    return a + b
While docker-compose.yml is something like this:
[...]
  tfcd_rabbit:
    container_name: tfcd_rabbit
    hostname: tfcd_rabbit
    image: rabbitmq:3.8.11-management
    environment:
      - RABBITMQ_ERLANG_COOKIE=test
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - tfcd

  tfcd_worker_a:
    container_name: tfcd_worker_a
    hostname: tfcd_worker_1
    image: test_flask_celery_docker
    entrypoint: celery
    command: -A worker_a worker -l INFO -Q worker_a
    volumes:
      - .:/app
    links:
      - tfcd_rabbit
    depends_on:
      - tfcd_rabbit
    networks:
      - tfcd
[...]
The repository with all the files and instructions to run it can be found here.
Would anyone know what might be going on?
Thank you in advance.
After a while, a friend of mine discovered the problem:
The queue name was missing when creating the task, so Celery was publishing to its default queue "celery" instead of the worker_a queue the worker consumes from.
The final code is this:
[...]

@celery.task(queue='worker_a')
def add_nums(a, b):
    return a + b
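A hedged usage sketch from the calling side: with queue='worker_a' set on the task, a plain .delay() now routes the message to the same queue the worker was started with (-Q worker_a), so the result no longer sits in PENDING:

# Hypothetical caller (e.g. in the Flask view) importing the task from worker_a.py.
from worker_a import add_nums

result = add_nums.delay(2, 3)   # published to the "worker_a" queue
print(result.get(timeout=10))   # -> 5, returned via the rpc:// result backend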
I'm pretty new to Airflow. I've read through the documentation several times, torn through numerous S/O questions and many random articles online, but have yet to fix this issue. I have a feeling it's something super simple I'm doing wrong.
I have Docker for Windows and I pulled the puckel/docker-airflow image and ran a container with ports exposed so I can hit the UI from my host. I have another container running mcr.microsoft.com/mssql/server on which I restored the WideWorldImporters sample db. From the Airflow UI, I have been able to successfully create the connection to this db and can even query it from the Data Profiling section. Check images below:
Connection Creation
Successful Query to Connection
So while this works, my DAG fails at the second task, sqlData. Here is the code:
from airflow.models import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.operators.mssql_operator import MsSqlOperator
from datetime import timedelta, datetime
copyData = DAG(
dag_id='copyData',
schedule_interval='@once',
start_date=datetime(2019,1,1)
)
printHelloBash = BashOperator(
task_id = "print_hello_Bash",
bash_command = 'echo "Lets copy some data"',
dag = copyData
)
mssqlConnection = "WWI"
sqlData = MsSqlOperator(sql="select top 100 InvoiceDate, TotalDryItems from sales.invoices",
task_id="select_some_data",
mssql_conn_id=mssqlConnection,
database="WideWorldImporters",
dag = copyData,
depends_on_past=True
)
queryDataSuccess = BashOperator(
task_id = "confirm_data_queried",
bash_command = 'echo "We queried data!"',
dag = copyData
)
printHelloBash >> sqlData >> queryDataSuccess
Initially the error was:
[2019-02-22 16:13:09,176] {{logging_mixin.py:95}} INFO - [2019-02-22 16:13:09,176] {{base_hook.py:83}} INFO - Using connection to: 172.17.0.3
[2019-02-22 16:13:09,186] {{models.py:1760}} ERROR - Could not create Fernet object: Incorrect padding
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 171, in get_fernet
_fernet = Fernet(fernet_key.encode('utf-8'))
File "/usr/local/lib/python3.6/site-packages/cryptography/fernet.py", line 34, in __init__
key = base64.urlsafe_b64decode(key)
File "/usr/local/lib/python3.6/base64.py", line 133, in urlsafe_b64decode
return b64decode(s)
File "/usr/local/lib/python3.6/base64.py", line 87, in b64decode
return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
I noticed that this has to do with cryptography, so I went ahead and ran pip install cryptography and pip install airflow[crypto]; both returned the exact same result, informing me that the requirement was already satisfied. Finally, I found something that said I just need to generate a fernet_key. The default key in my airflow.cfg file was fernet_key = $FERNET_KEY. So from the CLI in the container I ran:
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
That command gave me a key, which I used to replace $FERNET_KEY. I restarted the container, re-ran the DAG, and now my error is:
[2019-02-22 16:22:13,641] {{models.py:1760}} ERROR -
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/cryptography/fernet.py", line 106, in _verify_signature
h.verify(data[-32:])
File "/usr/local/lib/python3.6/site-packages/cryptography/hazmat/primitives/hmac.py", line 69, in verify
ctx.verify(signature)
File "/usr/local/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 73, in verify
raise InvalidSignature("Signature did not match digest.")
cryptography.exceptions.InvalidSignature: Signature did not match digest.
From an initial scan of the cryptography docs, this seems to have something to do with key compatibility.
I'm at a loss now and decided to ask this question to see if I'm going down the wrong path trying to resolve this. Any help would be greatly appreciated, as Airflow seems awesome.
Thanks to some side communication from @Tomasz I finally got my DAG to work. He recommended I try using docker-compose, which is also listed in the puckel/docker-airflow GitHub repo. I ended up using the docker-compose-LocalExecutor.yml file instead of the Celery Executor, though. There was some small troubleshooting and more configuration I had to go through as well.

To begin, I took my existing MSSQL container that had the sample db in it and turned it into an image using docker commit mssql_container_name. The only reason I did this was to save time having to restore the backup sample dbs; you could always copy the backups into the container and restore them later if you want. Then I added my new image to the existing docker-compose-LocalExecutor.yml file like so:
version: '2.1'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
  mssql:
    image: dw:latest
    ports:
      - "1433:1433"
  webserver:
    image: puckel/docker-airflow:1.10.2
    restart: always
    depends_on:
      - postgres
      - mssql
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
    # volumes:
    #   - ./dags:/usr/local/airflow/dags
    # Uncomment to include custom plugins
    #   - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
Mind you, dw is what I named the new image that was based off of the mssql container. Next, I renamed the file to just docker-compose.yml so that I could easily run docker-compose up (not sure if there is a command to point directly to a different YAML file). Once everything was up and running, I navigated to the Airflow UI and configured my connection. Note: since you are using docker-compose you don't need to know the IP addresses of the other containers, since they use DNS service discovery, which I found out about here.

Then, to test the connection, I went to Data Profiling to do an ad-hoc query, but the connection wasn't there. This is because the puckel/docker-airflow image doesn't have pymssql installed. So just bash into the container with docker exec -it airflow_webserver_container bash and install it with pip install pymssql --user. Exit the container and restart all services using docker-compose restart. After a minute everything was up and running. My connection showed up in Ad hoc Query and I could successfully select data.

Finally, I turned my DAG on, the scheduler picked it up, and everything was successful! Super relieved after spending weeks of googling. Thanks to @y2k-shubham for helping out, and some super huge appreciation to @Tomasz, who I actually reached out to initially after his awesome and thorough post about Airflow on the r/datascience subreddit.
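If you prefer checking the connection from code rather than the Ad hoc Query page, here is a hedged sketch that can be run inside the webserver container; it assumes the WWI connection's host is simply the compose service name mssql and that pymssql is already installed:

# Hypothetical smoke test for Airflow 1.10, run inside the webserver container.
from airflow.hooks.mssql_hook import MsSqlHook

hook = MsSqlHook(mssql_conn_id="WWI")  # connection host is just "mssql"
rows = hook.get_records(
    "select top 5 InvoiceDate, TotalDryItems from sales.invoices"
)
print(rows)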
I'm using Docker, Selenium, and Django.
I just realized I was running my tests against my production database, while I wanted to test against the database that StaticLiveServerTestCase generates itself.
I tried to follow this tutorial:
import socket

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import override_settings
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    host = '0.0.0.0'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.host = socket.gethostbyname(socket.gethostname())
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()


class MyTest(BaseTestCase):
    def test_simple(self):
        self.selenium.get(self.live_server_url)
I get no error connecting to the chrome hub, but when I print my page_source, I'm not on my Django app but on a Chrome error page. Here is a part of it:
<div class="error-code" jscontent="errorCode" jstcache="7">ERR_CONNECTION_REFUSED</div>
I'm using docker-compose. Here is selenium.yml:
chrome:
  image: selenium/node-chrome:3.11.0-dysprosium
  volumes:
    - /dev/shm:/dev/shm
  links:
    - hub
  environment:
    HUB_HOST: hub
    HUB_PORT: '4444'

hub:
  image: selenium/hub:3.11.0-dysprosium
  ports:
    - "4444:4444"
  expose:
    - "4444"

app:
  links:
    - hub
I guess I did something wrong in my docker-compose file, but I can't figure out what.
Thanks in advance!
PS : live_server_url = http://localhost:8081
You need to put the container_name of the container that is running Django/the tests as the host when using docker-compose, i.e.
host = 'app'
For a more detailed discussion see this question
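In other words, the base class from the question would become something like the following sketch, where 'app' is whatever the Django/test service is called in docker-compose:

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import override_settings
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    # Compose service name of the container running Django/the tests, so the
    # selenium node can reach live_server_url over the compose network.
    host = 'app'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()  # do not overwrite cls.host afterwards
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()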