Release a message back to SQS - python

I have some EC2 servers pulling work off of an SQS queue. Occasionally, they encounter a situation where they can't finish the job. I have the process email me about the condition. As it stands now, the message stays "in flight" until it times out. I would like the process to immediately release the message back to the queue after the email is sent, but I'm not sure how to accomplish this. Is there a way? If so, can you please point me to the call or post a code snippet?
I'm using Python 2.7.3 and Boto 2.5.2.

If you have read a message and decide, for whatever reason, that you do not want to process it and would rather make it immediately available to other readers of the queue, you can simply set that message's visibility timeout to zero using the change_visibility method of the Message object in boto. See The SQS Developer's Guide for details.
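A minimal sketch of that approach with boto 2; the region and queue name are placeholders, and process_job and send_failure_email are hypothetical helpers standing in for your worker logic:
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')   # placeholder region
queue = conn.get_queue('work-queue')             # placeholder queue name

for message in queue.get_messages(num_messages=1):
    try:
        process_job(message.get_body())          # hypothetical job handler
        queue.delete_message(message)
    except Exception:
        send_failure_email(message)              # hypothetical notifier
        # Make the message immediately visible to other readers again
        message.change_visibility(visibility_timeout=0)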

Related

How to open a new pyghmi Session via pyghmi.ipmi.command.Command after the previous one has timed out?

I'm having some issues with the pyghmi Python library, which is used for sending IPMI commands from Python scripts. My goal is to implement an HTTP API to send IPMI commands through HTTP requests.
I am already able to create a Session and send a few commands with the library, but if the Session stays idle for 30 seconds, it logs itself out.
Once the Session is logged out, I can't create a new one: I get a "Session is logged out" error, or a deadlock.
How can I keep a server up permanently and create a Session whenever it receives a request, if I can't create a new Session once the previous one is logged out?
What I've tried:
from pyghmi.ipmi import command
ipmi = command.Command(ip, user, passwd)
res = ipmi.get_power()
print(res)
# wait 30 seconds
res2 = ipmi.get_power() # get "Session logged out" error
ipmi2 = command.Command(ip, user, passwd) # Deadlock if wait < 30 seconds, else no error
res3 = ipmi2.get_power() # get "Session logged out" error
# Impossible to create new command.Command() Session, every command will give "logged out" error
The other problem is that I can't use the asynchronous approach of passing an "onlogon" callback to the command.Command() call, because I need the callback's return value in the caller, and that's not possible with this sort of thread behavior.
Edit: I already tried some of the examples provided here, but they are always one-shot scripts, whereas I'm looking for something that can stay "up" forever.
So I finally achieved a sort of solution. I emailed Pyghmi's main contributor, and he said that this lib is not suited for a multi-session, reusable-Session implementation (there is currently an open "Session reuse" issue on the Pyghmi repository).
First "solution": use processes
My goal was to create an HTTP API. To avoid the Session timeout issue, I create a new Process (not Thread) for every new request. That works fine, but I did not keep this solution because it is too heavy and consumes too many sockets. Because each request runs in its own process, the memory used by Pyghmi is not shared between processes (that's the point of processes), so every Session use is a creation rather than a reuse.
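A minimal sketch of that per-request process approach; get_power_worker and handle_request are illustrative names, not part of Pyghmi:
from multiprocessing import Process, Queue

from pyghmi.ipmi import command

def get_power_worker(ip, user, passwd, results):
    # Runs in a fresh process, so Pyghmi's module-level session state
    # is created from scratch and never reused across requests.
    ipmi = command.Command(ip, user, passwd)
    results.put(ipmi.get_power())

def handle_request(ip, user, passwd):
    results = Queue()
    worker = Process(target=get_power_worker, args=(ip, user, passwd, results))
    worker.start()
    response = results.get()   # block until the worker has an answer
    worker.join()
    return response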
Second "solution" : use Confluent
Confluent is a tool developed by Lenovo that allow to control hardware via HTTP. It uses a sort of patched version of Pyghmi as backend for IPMI calls. Confluent documentation here.
Once installed and configured on a server, Confluent worked well to control IPMI devices via HTTP. I packaged it in a Docker image along with an ipmi_simulator for testing purposes : confluent dockerized.
The solution today is to run Command.eventloop() after creating the connection. It is documented in ipmi/command.py, which has a very trivial Housekeeper class that, in the current version (1.5.53), is actually just a renamed Thread class with no additional features. It merely runs the eventloop.
The implementation looks like this. One of the mentioned housekeeping tasks is sending keepalive messages, which is enabled by default and can be controlled via the keepalive argument at Command instantiation:
class Housekeeper(threading.Thread):
    """A Maintenance thread for housekeeping

    Long lived use of pyghmi may warrant some recurring asynchronous behavior.
    This stock thread provides a simple minimal context for these housekeeping
    tasks to run in. To use, do 'pyghmi.ipmi.command.Maintenance().start()'
    and from that point forward, pyghmi should execute any needed ongoing
    tasks automatically as needed. This is an alternative to calling
    wait_for_rsp or eventloop in a thread of the callers design.
    """

    def run(self):
        Command.eventloop()
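A minimal usage sketch of the setup described above; ip, user, and passwd are placeholders:
from pyghmi.ipmi import command

ipmi = command.Command(ip, user, passwd, keepalive=True)  # keepalive is the default
command.Housekeeper().start()  # runs Command.eventloop() in a background thread
# With the housekeeping thread sending keepalives, the session should
# survive the 30-second idle logout described in the question.
res = ipmi.get_power()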

AWS/Python: Peeking SQS message

There is a receive function at https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.receive_message to get SQS messages.
Is there a function that lets me just peek at SQS messages without actually receiving them? If I receive the messages, they will be deleted from the queue, but I want the messages to stay in the queue after peeking.
You can check
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
The AWS SQS SDKs (and the client libraries written on top of them) don't delete messages by default, but they do have a "visibility timeout", which is 30 seconds by default. That means that after you read a message, it won't be visible to other consumers for 30 seconds. It is up to the client to delete it within that time frame so that no one else ever gets that message.
So you can reduce that visibility timeout to something really small, like 1 second: you can download the message, and within 1 second it will be available to other consumers again. You can even set it to 0 so everyone can read the message at any point.
But this still means you will receive the message. SQS is a pretty simple queue system; you might want to look at other queue systems like Kafka, or at a different system design, such as using a notification service like SNS.
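A minimal boto3 sketch of that short-visibility-timeout "peek"; the queue URL is a placeholder:
import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    VisibilityTimeout=1,   # messages become visible to others again after 1 second
)
for message in response.get('Messages', []):
    print(message['MessageId'], message['Body'])
    # No delete_message call, so each message returns to the queue.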

How to know if message was published to a queue using rabbitmq routing features

I've been working on a project which uses RabbitMQ to communicate. Recently we discovered that it would be much more scalable if we used RabbitMQ's routing feature. So basically we bind the queue to several routing keys and use an exchange of type direct.
It works like publish/subscribe: it's possible to bind and unbind the queue to different events, so consumers/subscribers only receive messages they're interested in.
Of course, the producer/publisher now uses the binding key (event name) as the routing_key to pass to the pika implementation. However, when it publishes something for a binding that doesn't exist, the message is lost, i.e. when nobody has bound a queue for event foo, but some publisher calls pika.basic_publish(..., routing_key='foo').
So my question is:
Is it possible to know if the message was actually published in a queue?
What I've tried:
Checking the return value of pika.basic_publish. It always returns None.
Checking whether an exception is raised when publishing for a binding that doesn't exist. There is none.
Using an additional queue for out-of-band control (since all subscribers are run by the same process). This approach doesn't feel ideal to me.
Additional info
Since I'm using this routing feature, the queue names are generated by RabbitMQ. I don't have any problem if the new approach has to name the queue itself.
If a suggested approach requires binding to exchanges instead of queues, I would still like to hear about it, but I would prefer to avoid it, as exchange-to-exchange bindings are not actually part of AMQP and are an extension implemented by RabbitMQ.
pika version is 0.9.5
rabbitmq version is 2.8
Thanks a lot
I believe the answer to your problem is the Mandatory flag in RabbitMQ:
This flag tells the server how to react if a message cannot be routed to a queue. Specifically, if mandatory is set and after running the bindings the message was placed on zero queues then the message is returned to the sender (with a basic.return). If mandatory had not been set under the same circumstances the server would silently drop the message.
This basically means: enqueue a message, and if it can't be routed, return it to me. Take a look at basic_publish in the specification to see how to turn it on.
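A minimal sketch with a pika newer than the 0.9.5 in the question (modern pika surfaces returned messages as UnroutableError when delivery confirmations are enabled); the exchange and routing key are placeholders:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.confirm_delivery()  # required so unroutable messages surface as errors

try:
    channel.basic_publish(
        exchange='events',      # placeholder exchange
        routing_key='foo',
        body='payload',
        mandatory=True,         # return the message if no queue is bound for 'foo'
    )
    print('message was routed to at least one queue')
except pika.exceptions.UnroutableError:
    print('no queue bound for this routing key; the broker returned the message')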
It may be possible to use a dead letter exchange to store messages that have not been consumed: http://www.rabbitmq.com/dlx.html
I am not sure this is exactly what you are looking for, but it could be used as part of a solution.
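For reference, a hedged sketch of a dead-letter setup in pika; note that a DLX catches rejected or expired messages rather than unroutable ones, and all names here are placeholders:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Exchange and queue that will collect the dead letters.
channel.exchange_declare(exchange='dlx', exchange_type='fanout')
channel.queue_declare(queue='dead-letters')
channel.queue_bind(queue='dead-letters', exchange='dlx')

# Messages rejected (or expired) in this queue are re-published to 'dlx'.
channel.queue_declare(
    queue='events-queue',
    arguments={'x-dead-letter-exchange': 'dlx'},
)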

How to peek at messages in the queue

I don't want the message to count as "read", but I'd like to know what's in the queue. The documentation:
http://boto.s3.amazonaws.com/ref/sqs.html#module-boto.sqs
isn't very straightforward about what absorbs a message and what doesn't. The dump method seems close, but I'd rather do this in memory than to a file.
The FAQ:
http://aws.amazon.com/articles/1343#12
has a sketchy solution:
How do I peek at a message?
With version 2008-01-01, the PeekMessage action has been removed from Amazon SQS. This functionality was used mainly to debug small systems — specifically to confirm a message was successfully sent to the queue or deleted from the queue. To do this with version 2008-01-01, you can log the message ID and the receipt handle for your messages and correlate them to confirm when a message has been received and deleted.
Has anyone had any luck with this? It seems like very basic queue functionality and I'd be shocked if there wasn't a clean way to do this.
Right-click no longer works in the new SQS console.
To view queue messages in the SQS console, you now need to click into a queue > Send and receive messages > Poll for messages.
There is no longer a true peek function available in SQS, but you can probably accomplish what you want by using get_messages and setting the visibility_timeout quite low. As long as you don't delete the messages you have read, they will reappear on the queue after the visibility_timeout has expired and will be available for reading again. The only tricky part is figuring out how long the timeout should be: if you have lots and lots of messages in the queue, you will have to make multiple calls to get_messages to retrieve them all, and you probably don't want previously read messages reappearing while you are still peeking.
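A minimal boto 2 sketch of that approach (matching the boto docs the question links to); the region and queue name are placeholders:
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')   # placeholder region
queue = conn.get_queue('my-queue')               # placeholder queue name

# Read with a short visibility timeout and never delete, so the messages
# reappear on the queue almost immediately after peeking.
for message in queue.get_messages(num_messages=10, visibility_timeout=1):
    print(message.get_body())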
Update 11/11/2020
Right-clicking no longer works in the new SQS console.
See #marmor's answer
Original Answer (old dashboard)
If you have access to Amazon's AWS Console, on the queue list page you can right-click on a queue.
Then select View/Delete Messages from the pop-up menu.
This will open a window where you can start polling for messages in the queue.
I have created a desktop application named SQSCLI, a graphical desktop application available for Windows and Mac.
It has long-polling support and uses the AWS profiles available on your local machine.
Video Demo of this application
https://www.youtube.com/watch?v=ALNHHvts9oo
More details on this link
https://www.middlewareinventory.com/blog/sqscli-app/
In the new AWS Console:
Visit the main queue page at https://console.aws.amazon.com/sqs
Click on the name of your queue - this will take you to its details page
Click on the "Send and Receive Messages" button (top right)
Click on the "Poll for Messages" button
Click on the message id to view message details

How does Amazon's SQS notify one of my "worker" servers whenever there is something in the queue?

I'm following this tutorial: http://boto.s3.amazonaws.com/sqs_tut.html
When there's something in the queue, how do I assign one of my 20 workers to process it?
I'm using Python.
Unfortunately, SQS lacks some of the semantics we've often come to expect in queues. There's no notification or any sort of blocking "get" call.
Amazon's related SNS/Simple Notification Service may be useful to you in this effort. When you've added work to the queue, you can send out a notification to subscribed workers.
See also:
http://aws.amazon.com/sns/
Best practices for using Amazon SQS - Polling the queue
This is (now) possible with long polling on an SQS queue.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Query_QueryReceiveMessage.html
Long poll support (integer from 1 to 20) - the duration (in seconds) that the ReceiveMessage action call will wait until a message is in the queue to include in the response, as opposed to returning an empty response if a message is not yet available.
If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait.
Type: Integer from 0 to 20 (seconds)
Default: The ReceiveMessageWaitTimeSeconds of the queue.
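A minimal long-polling worker loop in boto 2 (matching the tutorial in the question), assuming a boto release new enough to support wait_time_seconds; the region, queue name, and handle_job function are placeholders:
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')   # placeholder region
queue = conn.get_queue('work-queue')             # placeholder queue name

while True:
    # Block for up to 20 seconds waiting for a message instead of
    # hammering the queue with empty receives.
    for message in queue.get_messages(num_messages=1, wait_time_seconds=20):
        handle_job(message.get_body())           # hypothetical worker function
        queue.delete_message(message)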
To further point out a problem with SQS: you must poll for new messages, and there is no guarantee that any particular poll will return an event that exists in the queue (this is due to the redundancy of their architecture). This means you need to consider the possibility that a poll didn't return a message that existed (which for me meant increasing the polling rate).
All in all, I found too many limitations with SQS (as I've found with some other AWS tools, such as SimpleDB). But that's just my opinion.
Actually, if you don't require low latency, you can try this:
Create a CloudWatch alarm on your queue, e.g. on the number of messages visible or messages received being > 0.
As the alarm action, send a message to an SNS topic, which can then push the message to your workers via an HTTP/S endpoint.
Normally this kind of approach is used for autoscaling.
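A hedged boto3 sketch of that alarm wiring; the alarm name, queue name, topic ARN, and threshold are placeholders:
import boto3

cloudwatch = boto3.client('cloudwatch')
cloudwatch.put_metric_alarm(
    AlarmName='work-queue-has-messages',          # placeholder name
    Namespace='AWS/SQS',
    MetricName='ApproximateNumberOfMessagesVisible',
    Dimensions=[{'Name': 'QueueName', 'Value': 'work-queue'}],
    Statistic='Maximum',
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator='GreaterThanThreshold',
    # The SNS topic pushes to the workers' HTTP/S subscriptions.
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:work-available'],
)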
There is now a JMS wrapper for SQS from Amazon that will let you create listeners that are automatically triggered when a new message is available.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/jmsclient.html#jmsclient-gsg
