I'm using ROS in my project and I need to send a message from time to time. I have this function:
void RosNetwork::sendMessage(string msg, string channel) {
    _mtx.lock();
    ros::Publisher chatter_pub = _n.advertise<std_msgs::String>(channel.c_str(), 10);
    ros::Rate loop_rate(10);
    std_msgs::String msgToSend;
    msgToSend.data = msg.c_str();
    chatter_pub.publish(msgToSend);
    loop_rate.sleep();
    cout << "Message Sent" << endl;
    _mtx.unlock();
}
And I have this in python:
def callbackFirst(data):
    # rospy.loginfo(rospy.get_caller_id() + "I heard %s", data.data)
    print("Received message from first filter")

def callbackSecond(data):
    # rospy.loginfo(rospy.get_caller_id() + "I heard %s", data.data)
    print("Received message from second filter")

def listener():
    rospy.Subscriber("FirstTaskFilter", String, callbackFirst)
    print("subscribed to FirstTaskFilter")
    rospy.Subscriber("SecondTaskFilter", String, callbackSecond)
    print("subscribed to SecondTaskFilter")
    rospy.spin()
The listener runs in a Python thread.
The code reaches sendMessage (I see "Message Sent" in the terminal many times), but the Python script never receives the message.
Update: I tested the Python callback with rostopic pub /FirstTaskFilter std_msgs/String "test" and it works perfectly.
Any thoughts?
You are re-advertising the publisher every time and then immediately using it to publish something.
This is problematic because subscribers need some time to subscribe to a newly advertised publisher. Any messages you publish before a subscriber has finished doing so will not arrive.
To avoid this problem, do not advertise a new publisher every time; do it only once, in the constructor of your class, and store the publisher in a member variable. Your code could look something like this:
RosNetwork(string channel) {  // pass the channel in, it is not in scope otherwise
    _chatter_pub = _n.advertise<std_msgs::String>(channel.c_str(), 10);
    ros::Duration(1).sleep(); // optional, to make sure no message gets lost
}

void RosNetwork::sendMessage(string msg, string channel) {
    ...
    _chatter_pub.publish(msgToSend);
    ...
}
The one-second sleep after advertise makes sure that all existing subscribers can subscribe before you start publishing messages. This is only necessary if it is important that not a single message gets lost; in most practical cases it can be omitted.
The proper solution to your problem is to use pub.getNumSubscribers() and wait until it is > 0. Then publish.
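As a rough sketch of that approach, shown here on the rospy side (where the roscpp getNumSubscribers() corresponds to get_num_connections()), assuming the topic name from the question:

import rospy
from std_msgs.msg import String

rospy.init_node('sender')
pub = rospy.Publisher('FirstTaskFilter', String, queue_size=10)

# Block until at least one subscriber has connected,
# so the first message is not silently dropped.
while pub.get_num_connections() == 0 and not rospy.is_shutdown():
    rospy.sleep(0.1)

pub.publish(String(data='test'))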
I'm trying to understand whether a gRPC server using streams is able to wait for all client messages to be read in before sending its responses.
I have a trivial application where I send in several numbers I'd like to add and have the result returned.
I've set up a basic proto file to test this:
syntax = "proto3";

message CalculateRequest {
    int64 x = 1;
    int64 y = 2;
}

message CalculateReply {
    int64 result = 1;
}

service Svc {
    rpc CalculateStream (stream CalculateRequest) returns (stream CalculateReply);
}
On the server side I have implemented the following code, which returns each reply as the corresponding request is received:
from concurrent import futures
import logging

import grpc

import contracts_pb2
import contracts_pb2_grpc

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        for request in request_iterator:
            resultToOutput = request.x + request.y
            yield contracts_pb2.CalculateReply(result=resultToOutput)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    contracts_pb2_grpc.add_SvcServicer_to_server(
        CalculatorServicer(), server)
    server.add_insecure_port('localhost:9000')
    server.start()
    server.wait_for_termination()

if __name__ == '__main__':
    print("We're up")
    logging.basicConfig()
    serve()
I'd like to tweak this to first read in all the numbers and then send the results out at a later stage, something like the following:
class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        listToReturn = []
        for request in request_iterator:
            listToReturn.append(request.x + request.y)
        # ...
        # do some other stuff first before returning
        for item in listToReturn:
            yield contracts_pb2.CalculateReply(result=item)
Currently, my implementation of writing out later doesn't work: the code at the bottom is never reached. Is it by design that the connection seems to "close" before reaching there?
The grpc.io website suggests that this should be possible with bidirectional streaming:
for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes.
Thanks in advance for any help :)
The issue here is the definition of "all client messages." At the transport level, the server has no way of knowing whether the client has finished, other than the client closing its side of the connection.
You need to add some indication that the client has finished sending requests to the protocol itself: either add a bool field to the existing CalculateRequest, or add a top-level oneof with one of the options being something like a StopSendingRequests message.
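As a minimal sketch of the first option, assuming a bool field named done has been added to CalculateRequest and that the client sets it on its final message (both the field and that convention are assumptions here):

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        results = []
        for request in request_iterator:
            # The assumed 'done' field marks the client's last message.
            if request.done:
                break
            results.append(request.x + request.y)
        # All requests have been read; now stream the replies back.
        for result in results:
            yield contracts_pb2.CalculateReply(result=result)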
I'm trying to send data to my Data Lake with a while loop.
Basically, the intention is to continually loop through the code and send data to my Data Lake whenever data is received from my Azure Service Bus, using the following code.
This code receives a message from my Service Bus:
import json
from azure.servicebus import ServiceBusClient

def myfunc():
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        # max_wait_time specifies how long the receiver should wait with no
        # incoming messages before stopping receipt. Default is None, i.e. receive forever.
        with client.get_queue_receiver(QUEUE_NAME, session_id=session_id, max_wait_time=5) as receiver:
            for msg in receiver:
                # print("Received: " + str(msg))
                themsg = json.loads(str(msg))
                # complete the message so that the message is removed from the queue
                receiver.complete_message(msg)
                return themsg
This code assigns the message to a variable:
result = myfunc()
The following code sends the message to my data lake:
rdd = sc.parallelize([json.dumps(result)])
spark.read.json(rdd) \
    .write.mode("overwrite").json('/mnt/lake/RAW/FormulaClassification/F1Area/')
I would like help looping through the code so that it continually checks for messages and sends the results to my data lake.
I believe the solution involves a while loop, but I'm not sure.
Just because you're using Spark doesn't mean you cannot loop.
First of all, you're only returning the first message from your receiver, so it should look like this:
with client.get_queue_receiver(QUEUE_NAME, session_id=session_id, max_wait_time=5) as receiver:
    msg = next(receiver)
    # print("Received: " + str(msg))
    themsg = json.loads(str(msg))
    # complete the message so that the message is removed from the queue
    receiver.complete_message(msg)
    return themsg
To answer your question:
while True:
    result = json.dumps(myfunc())
    rdd = sc.parallelize([result])
    # you should use rdd.toDF() here instead
    spark.read.json(rdd) \
        .write.mode("overwrite").json('/mnt/lake/RAW/FormulaClassification/F1Area/')
Keep in mind that the output file names aren't consistent, and you might not want them to be overwritten on every iteration; see the sketch below.
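As one hedged way to avoid the overwrite, each batch could be written to its own path (the timestamped path scheme here is only an illustration):

import time

while True:
    result = json.dumps(myfunc())
    rdd = sc.parallelize([result])
    # write each batch to a fresh, timestamped folder instead of overwriting
    path = '/mnt/lake/RAW/FormulaClassification/F1Area/batch-%d' % int(time.time())
    spark.read.json(rdd).write.json(path)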
Alternatively, you should look into writing your own Source / SparkDataStream class that defines a Spark SQL source, so that you don't need a loop in your main method and the polling is handled natively by Spark.
I want to publish a message from a Python-based microservice (using Pika) to a RabbitMQ queue consumed by a .NET-based microservice (using MassTransit as the event bus).
Sample code for the (Python-based) publisher is as follows:
connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@rabbitmq:5672/%2F'))
channel = connection.channel()
channel.queue_declare(queue='queue_net', durable=True)
channel.basic_publish(exchange='', routing_key='queue_net', body='message',
                      properties=pika.BasicProperties(
                          content_type='text/plain',
                          delivery_mode=2  # make messages persistent
                      ))
Instead of being sent to the queue_net queue, the message ends up in a queue_net_error queue (as seen on the RabbitMQ Management screen).
For completeness, sample code for the (.NET-based) consumer is as follows:
In Startup, ConfigureServices:
services.AddMassTransit(config =>
{
    config.AddConsumer<SampleConsumer>();
    config.UsingRabbitMq((ctx, cfg) =>
    {
        cfg.Host("amqp://rabbitmq:5672", h =>
        {
            h.Username("guest");
            h.Password("guest");
        });
        cfg.ReceiveEndpoint("queue_net", ep =>
        {
            ep.ConfigureConsumer<SampleConsumer>(ctx);
        });
    });
});
where SampleConsumer is defined as:
public class SampleConsumer : IConsumer<string>
{
    public Task Consume(ConsumeContext<string> context)
    {
        Console.WriteLine(context);
        return Task.CompletedTask;
    }
}
What am I doing wrong? I am able to do the opposite, i.e. publish from .NET and consume in Python.
Many thanks.
You can either use a supported message format and content type so that the message can be consumed by MassTransit, or you can use the raw JSON message deserializer in MassTransit to consume messages from other languages that send raw JSON.
Messages end up in the _error queue when the message is unable to be processed, which could be a serialization error or an exception thrown by a consumer. Messages end up in the _skipped queue when they are able to be deserialized but no consumer actually consumed the message (due to a mismatched message type, etc.).
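For illustration, here is a sketch of the second option on the Python side, assuming the MassTransit receive endpoint is configured to use the raw JSON deserializer and that the consumer is bound to a message contract with a matching text property (the contract shape here is an assumption):

import json
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@rabbitmq:5672/%2F'))
channel = connection.channel()
channel.queue_declare(queue='queue_net', durable=True)

# Publish a raw JSON body with a matching content type so the raw JSON
# deserializer on the MassTransit side can bind it to the message contract.
channel.basic_publish(
    exchange='',
    routing_key='queue_net',
    body=json.dumps({'text': 'message'}),
    properties=pika.BasicProperties(
        content_type='application/json',
        delivery_mode=2))  # make messages persistent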
I have a plain, simple Python function which should dead-letter a message if it does not match a few constraints. Currently I'm raising an exception and everything works fine (I mean the message is being dead-lettered), but I would like to understand whether there is a "clean" way to dead-letter the message without raising an exception.
import json

import azure.durable_functions as df
import azure.functions as func

async def function_handler(message: func.ServiceBusMessage, starter: str):
    for msg in [message]:
        client = df.DurableOrchestrationClient(starter)
        message_body = msg.get_body().decode("utf-8")
        msg = json.loads(message_body)
        if 'valid' in msg:
            instance_id = await client.start_new('orchestrator', None, json.dumps(message_body))
        else:
            raise Exception(f'not found valid {msg["id"]}')
This is part of host.json, which should indicate that I'm working with version 2.0 of Azure Functions:
"extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
},
Suggestions are welcome.
At the time of writing, it is not possible in Python to programmatically move a message to the dead-letter queue.
I found out that autoComplete=false is only supported for C#.
This basically means that the only way to dead-letter a message is to raise an exception, just like I was doing in my code.
Thanks to @GauravMantri for pointing me the right way (i.e. to have a look at how to use the autoComplete configuration parameter).
Azure Service Bus queues have a Max Delivery Count property that you can make use of. Considering you only want to process a message exactly once and then dead-letter it in case the Function is unable to process it, you can set the max delivery count to 1. That way the message will be automatically dead-lettered after the first delivery attempt.
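For example, a sketch of setting that property from Python with the azure-servicebus administration client (CONNECTION_STR and QUEUE_NAME are assumed to match your setup):

from azure.servicebus.management import ServiceBusAdministrationClient

with ServiceBusAdministrationClient.from_connection_string(CONNECTION_STR) as admin_client:
    queue = admin_client.get_queue(QUEUE_NAME)
    # Dead-letter the message automatically after the first failed delivery.
    queue.max_delivery_count = 1
    admin_client.update_queue(queue)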
By default, the Functions runtime tries to auto-complete the message if there is no exception in processing it. You do not want the runtime to do that, so you would need to set the autoComplete setting to false. However, if the message is processed successfully you will want to delete it from the queue, so you will need to complete the message manually.
Something like:
if 'valid' in msg:
    instance_id = await client.start_new('orchestrator', None, json.dumps(message_body))
    # complete the message here...
else:
    # do nothing and the message will be dead-lettered
    pass
I have sleekXMPP for Python, and I have used the API to create functions to send messages, but when I researched receiving them I couldn't find anything. Can someone please help me work this out, or confirm that it isn't possible? Thanks.
Below is the code I used to send messages, if it's any help.
to = config.get('usermap', to[4:])
gmail_from_user = config.get('auth', 'email')
gmail_from_secret = config.get('auth', 'secret')

sys.stdout = stdouttmp
sys.stderr = stderrtmp

print "Sending chat message to " + to
xmpp = SendMsgBot(gmail_from_user, gmail_from_secret, to, message)
xmpp.register_plugin('xep_0030')  # Service Discovery
xmpp.register_plugin('xep_0199')  # XMPP Ping

sys.stdout = stdouttmp
if xmpp.connect(('talk.google.com', 5222)):
    xmpp.process(block=True)
else:
    sys.stdout = stdouttmp
    print("Unable to connect.")

sys.stdout = stdouttmp
sys.stderr = stderrtmp
By the way, I'm using a .cfg text file for the user's email and password, along with some contacts, which is then parsed in.
I see that you're using the send_client.py example. The intent of that example is how to reliably log in, send a single message, and then log out. Your use case is to both send and receive messages, so you would be better served looking at the echo_client.py example.
Notably, in order to receive a message you would do:
# in your __init__ method:
def __init__(...):
    # ...
    self.add_event_handler('message', self.recv_message)

def recv_message(self, msg):
    # You'll probably want to ignore error and headline messages.
    # If you want to handle group chat messages, add 'groupchat' to the list.
    if msg['type'] in ('chat', 'normal'):
        print "%s says: %s" % (msg['from'], msg['body'])
Again, you will need to switch from using the SendMsgBot example because it automatically disconnects after sending its message.
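For reference, here is a minimal receive-capable client modeled on the echo_client.py example, reusing the credentials and connection details from your sending code (the class and handler names are just placeholders):

import sleekxmpp

class RecvBot(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password):
        sleekxmpp.ClientXMPP.__init__(self, jid, password)
        self.add_event_handler('session_start', self.start)
        self.add_event_handler('message', self.recv_message)

    def start(self, event):
        # Announce availability and fetch the roster so chats are delivered.
        self.send_presence()
        self.get_roster()

    def recv_message(self, msg):
        if msg['type'] in ('chat', 'normal'):
            print('%s says: %s' % (msg['from'], msg['body']))

xmpp = RecvBot(gmail_from_user, gmail_from_secret)
xmpp.register_plugin('xep_0030')  # Service Discovery
xmpp.register_plugin('xep_0199')  # XMPP Ping
if xmpp.connect(('talk.google.com', 5222)):
    xmpp.process(block=True)  # keep processing until disconnected
else:
    print('Unable to connect.')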
Don't forget that there is the sleek#conference.jabber.org chat room if you need any help.
-- Lance