In my app I use Celery and RabbitMQ.
On localhost everything works fine:
I send tasks to a few workers, they calculate them and return the results to call.py (I use groups).
The problem starts here:
On my laptop (MacBook) I have RabbitMQ; on my desktop (PC, Windows) the Celery workers. I start call.py on the laptop, it sends data to my desktop (to the workers), they receive and calculate the tasks, and in the end (when all tasks have succeeded) my laptop doesn't receive any response from the workers.
No errors, nothing.
My laptop's IP is 192.168.1.14. I use this IP in the broker and backend parameters when I create the Celery instance.
In rabbitmq-env.conf:
NODE_IP_ADDRESS=192.168.1.14
On my router I forward port 5672 to 192.168.1.14.
So, if the whole app runs on localhost and I use my public IP (5.57.N.N.), everything works.
If I run the workers on another host (pointing at 192.168.1.14), I don't get any response from them (the calculated results).
How can I fix this?
Thanks!
Are you using the default user guest:guest? If so, that user can only connect via localhost. See https://www.rabbitmq.com/access-control.html#loopback-users
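If you create a dedicated user instead (e.g. with rabbitmqctl add_user and rabbitmqctl set_permissions on the RabbitMQ host), the Celery instance would be pointed at it roughly like this. This is only a sketch: newuser/newpass and the rpc:// backend are placeholders, not your actual settings.

from celery import Celery

# Sketch only: broker pointed at RabbitMQ on the laptop with a non-guest user;
# 'newuser'/'newpass' are placeholders for a user you create yourself.
app = Celery('tasks',
             broker='amqp://newuser:newpass@192.168.1.14:5672//',
             backend='rpc://')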
I have a running Flask application which caters to REST calls from another application, using a specific port and endpoints.
I want to configure nginx for the application with gunicorn (existing) to tackle DoS and **gunicorn freezing** due to the same.
However, I do not want to hide the port. In other words, only if we hit IP:port/ should we be able to access the application endpoints, because there are other applications already running on the same IP, on other ports, which call this application on its own port.
Hence I do not want to hit XX.YY.ZZ.AA and get routed to XX.YY.ZZ.AA:5000.
Rather, I want to reach XX.YY.ZZ.AA:5000 only if I hit XX.YY.ZZ.AA:5000.
I could not find a way to do this, as I mostly see nginx discussed in the context of port masking and reverse proxying.
Please help!
Similarly to Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable, I cannot start my container because it does not (need to) listen on a port. It is a Discord bot that just needs outbound connections for the APIs.
Is there a way I can get around this error? I've tried listening on 0.0.0.0:8080 using the socket module with
import socket

# Bind a TCP socket to all interfaces on port 8080 and start listening
s = socket.socket()
s.bind(("0.0.0.0", 8080))
s.listen()
Cloud Run is oriented toward request-driven tasks, and this explains Cloud Run's listen requirement.
Generally (!) clients make requests to your Cloud Run service endpoint triggering the creation of instances to process the requests and generate responses.
Generally (!) if there are no outstanding responses to be sent, the service scales down to zero instances.
Running a bot, you will need to configure Cloud Run artificially to:
Always run (and pay for) at least one instance (so that the bot isn't terminated)
Respond (!) to incoming requests (on one thread|process)
Run your bot (on one thread|process)
To do both #2 and #3 you'll need to consider Python multithreading|multiprocessing.
For #2, the code in your question is insufficient. You can use low-level sockets, but the server must actually respond to incoming requests, so you would need to implement a server loop. It would be simpler to use e.g. Flask, which gives you an HTTP server with very little code.
And this server code exists only to satisfy the Cloud Run requirement; it is not required for your bot.
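A minimal sketch of that idea, assuming Flask and the standard threading module; run_bot() is a placeholder for your actual Discord client code, not something Cloud Run provides:

import os
import threading
from flask import Flask

app = Flask(__name__)

@app.route("/")
def health():
    # Exists only so Cloud Run sees responses on the required port
    return "ok", 200

def run_bot():
    # Placeholder: start your Discord client here, e.g. client.run(TOKEN)
    pass

if __name__ == "__main__":
    # Bot in a background thread; Flask serves the Cloud Run port requirement
    threading.Thread(target=run_bot, daemon=True).start()
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))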
If I were you, I'd run the bot on a Compute Engine VM. You can do this for Free.
If your bot is already packaged using a container, you can deploy the container directly to a VM.
I deployed Celery for some tasks that need to be performed at my workplace. These tasks are huge, and I bought a few high-spec machines to perform them. Before I detail my issue, let me briefly describe what I've deployed:
RabbitMQ broker on a remote server
Producer that pushes tasks on another remote server
Workers on 3 machines deployed at my workplace
Now, when I started, the whole process was as smooth as in my tests and everything worked just great!
The problem
Unfortunately, I forgot to consult my network guy about a fixed IP address, and at our location we do not have a fixed IP address from our ISP. So, upon a network disconnect, my Celery workers freeze and do nothing, even after the network comes back, because the IP address has changed and the connection to the broker is not recreated, nor does the worker retry the connection. I have tried configuration like BROKER_CONNECTION_MAX_RETRIES = 0 and BROKER_HEARTBEAT = 10 (see the sketch below), but I had no option left except to post it out here and look for experts on this matter!
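For reference, a minimal sketch of how I applied those settings (celeryconfig style; the broker URL, user and vhost here are placeholders, not my real ones):

# Placeholder broker URL; the real one points at the remote RabbitMQ server
BROKER_URL = 'amqp://user:password@broker-host:5672/myvhost'

# Keep retrying the broker connection (0 means no retry limit for this setting)
BROKER_CONNECTION_RETRY = True
BROKER_CONNECTION_MAX_RETRIES = 0

# AMQP heartbeat every 10 seconds so dead connections get noticed
BROKER_HEARTBEAT = 10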
PS: I cannot restart the workers manually with kill -9 every time the network changes the IP address.
Restarting the app using:
sudo rabbitmqctl stop_app
sudo rabbitmqctl start_app
solved the issue for me.
Also, since I had a virtual host set up, I needed to reset that too.
I'm not sure why that was needed, or in fact whether any of the above was needed, but it did solve the problem for me.
The issue was that I did not understand the nature of the AMQP protocol and RabbitMQ.
When a Celery worker starts, it opens a channel to RabbitMQ. Upon any network change this channel tries to reconnect, but the port/socket previously opened for the channel is registered with a different public IP address of the client. As a result, the negotiation between the Celery worker (client) and RabbitMQ (server) cannot resume because the client's address has changed; a new connection and channel need to be established whenever the client's public IP address changes.
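To illustrate what "establishing a new connection and channel" means, here is a rough sketch using kombu (the AMQP library underneath Celery); the URL and credentials are placeholders:

from kombu import Connection

# Sketch only: negotiate a brand-new AMQP connection and open a fresh channel
with Connection('amqp://user:password@broker-host:5672//', heartbeat=10) as conn:
    conn.ensure_connection(max_retries=3)  # full reconnect/handshake
    channel = conn.channel()               # new channel on the new connection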
The answer by @qreOct above didn't address this, either because I was unable to express the question properly or because of the difference in our perceptions. Still, thanks a lot for taking the time!
I have a few servers that require executing commands on other servers. For example, a Bitbucket Server post-receive hook executing a git pull on another server. Another example is the CI server pulling a new Docker image and restarting an instance on another server.
I would normally use ssh for this, creating a user/group specifically for the job with limited permissions.
A few downsides with ssh:
A synchronous ssh call means a git push will have to wait until it completes.
If a host is not contactable for whatever reason, the ssh command will fail.
Maintaining keys, users, and sudoers permissions can become unwieldy.
A few possibilities:
Find an open-source, out-of-the-box solution (I have tried, with no luck so far)
Set up a REST API on each server that accepts calls with some type of authentication, e.g. POST https://server/git/pull/?apikey=a1b2c3
Set up Python/Celery to execute tasks on a different queue for each host. This means a Celery worker on each server that can execute commands, and possibly a service that accepts REST API calls, converting them to Celery tasks (a rough sketch follows below).
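To make the third possibility concrete, here is a rough sketch; the task name, queue name, repo path, and broker URL are all hypothetical:

import subprocess
from celery import Celery

app = Celery('remote_exec', broker='amqp://user:password@broker-host//')

@app.task
def git_pull(repo_path):
    # Executed by whichever host's worker consumes the queue this was sent to
    result = subprocess.run(['git', '-C', repo_path, 'pull'],
                            capture_output=True, text=True)
    return result.stdout

# Caller side: route the task to the queue consumed only by the target host
# git_pull.apply_async(args=['/srv/myrepo'], queue='host-web01')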
Is there a nice solution to this problem?
Defining the problem
You want to be able to trigger a remote task without waiting for it to complete.
This can be achieved in any number of ways, including with SSH. You can execute a remote command without waiting for it to complete by closing or redirecting all I/O streams, e.g. like this:
ssh user@host "/usr/bin/foobar </dev/null >/dev/null 2>&1"
You want to be able to defer the task if the host is currently unavailable.
This requires a queuing/retry system of some kind. You will also need to decide whether the target hosts will be querying for messages ("pull") or whether messages will be sent to the target hosts from elsewhere ("push").
You want to simplify access control as much as possible.
There's no way to completely avoid this issue. One solution would be to put most of the authentication logic in a centralized task server. This splits the problem into two parts: configuring access rights in the task server, and configuring authentication between the task server and the target hosts.
Example solutions
Hosts attempt to start tasks over SSH using method above for asynchrony. If host is unavailable, task is written to local file. Cron job periodically retries sending failed tasks. Access control via SSH keys.
Hosts add tasks by writing commands to files on an SFTP server. Cron job on target hosts periodically checks for new commands and executes them if found. Access control managed via SSH keys on the SFTP server.
Hosts post tasks to REST API which adds them to queue. Celery daemon on each target host consumes from queue and executes tasks. Access managed primarily by credentials sent to the task queuing server.
Hosts post tasks to an API which adds them to a queue. Task consumer nodes pull tasks off the queue and send requests to an API on the target hosts. Authentication managed by a cryptographic signature of the sender appended to the request, verified by the task server on the target host (see the sketch after this list).
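As a sketch of the signed-request idea in that last example (all names here are hypothetical, and the secret would be shared out of band):

import hashlib
import hmac

SHARED_SECRET = b'replace-with-a-real-secret'

def sign(payload: bytes) -> str:
    # Sender attaches this signature to the request
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # Target host recomputes the signature and rejects the request on mismatch
    return hmac.compare_digest(sign(payload), signature)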
You can also look into tools that do some or all of the required functions out of the box. For example, some Google searching came up with Rundeck which seems to have some job scheduling capabilities and a REST API. You should also consider whether you can leverage any existing automated deployment or management tools already present in your system.
Conclusions
Ultimately, there's no single right answer to this question. It really depends on your particular needs. Ask yourself: How much time and effort do you want to spend creating this system? What about maintenance? How reliable does it need to be? How much does it need to scale? And so on, ad infinitum...
Scenario:
My server/application needs to handle multiple concurrent requests, and for each request the server opens an SSH link to another machine, runs a long command, and sends the result back.
1 HTTP request comes → server starts 1 SSH connection → waits long time → sends back the SSH result as HTTP response
This should happen simultaneously for more than 200 HTTP and SSH connections in real time.
Solution:
The server has just one job: open an SSH connection for each HTTP request and keep the connection open. I can't even write the code in an asynchronous way, as there is just that one task: SSH. The IOLoop would get blocked for each request. Callbacks and deferred functions don't provide an advantage, because the SSH task runs for a long time. Threading sounds like the only way out, combined with an event-driven technique.
I have been going through tornado examples in Python but none suit my particular need:
tornado with twisted
gevent/eventlet pseudo multithreading
python threads
established HTTP servers like Apache
Environment:
Ubuntu 12.04, high RAM & net speed
Note:
I am bound to use Python for coding, and please keep answers limited to my scenario. Opening multiple SSH links while keeping HTTP connections open sounds like entirely asynchronous work, but I want it to look like a synchronous activity.
If by ssh you mean actually running the ssh process, try tornado.process.Subprocess. If you want something more integrated into the server, Twisted has an asynchronous SSH implementation. If you're using something like paramiko, threads are probably your best bet.
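A minimal sketch of the first option, assuming a reasonably recent Tornado with native coroutine support; the host and command are placeholders:

import tornado.ioloop
import tornado.web
from tornado.process import Subprocess

class SSHHandler(tornado.web.RequestHandler):
    async def get(self):
        # Run ssh as a child process so the IOLoop is never blocked
        proc = Subprocess(["ssh", "user@otherhost", "long_running_command"],
                          stdout=Subprocess.STREAM)
        output = await proc.stdout.read_until_close()
        await proc.wait_for_exit()
        self.write(output)

if __name__ == "__main__":
    tornado.web.Application([(r"/run", SSHHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()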