gRPC Python - how to add idle time for client

I'm using gRPC to call a service in client. After I set up channel:
channel = grpc.insecure_channel('server_url:service_port')
stub = Client.Stub(channel)
It works well. However, if the client sits idle for about five minutes without sending a request, the next request fails with this error:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, Stream removed)>

Unfortunately, gRPC's built-in retry functionality for broken channels is still a work in progress and not fully available yet. One workaround is to write a client interceptor that retries automatically when it sees such an error.
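A minimal sketch of such an interceptor (the retryable status codes and attempt count here are assumptions, not part of the question):

import grpc

class RetryOnRpcErrorInterceptor(grpc.UnaryUnaryClientInterceptor):
    """Retries a unary-unary RPC when it fails with a retryable status code."""

    def __init__(self, max_attempts=3,
                 retryable=(grpc.StatusCode.UNKNOWN, grpc.StatusCode.UNAVAILABLE)):
        self._max_attempts = max_attempts
        self._retryable = retryable

    def intercept_unary_unary(self, continuation, client_call_details, request):
        call = continuation(client_call_details, request)
        attempts = 1
        # call.code() blocks until the RPC completes; it is OK on success
        while call.code() in self._retryable and attempts < self._max_attempts:
            call = continuation(client_call_details, request)
            attempts += 1
        return call

Then wrap the channel before creating the stub:

channel = grpc.intercept_channel(
    grpc.insecure_channel('server_url:service_port'),
    RetryOnRpcErrorInterceptor())
stub = Client.Stub(channel)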


VerneMQ single publish messages lost when client is offline

I am quite new to MQTT and brokers, but I am having an issue with VerneMQ not delivering offline messages to clients. Here is my setup. I have a backend written in Python which uses the Paho MQTT library's single() method to send messages to a connected client. The client, a virtual machine on my development station, runs a program written in Go that uses paho.mqtt.golang to connect to the broker and subscribe.
The call to single() on the backend looks like this:
import paho.mqtt.publish as mqtt  # assumed import: single() lives in paho.mqtt.publish

def send_message(device_id, payload):
    token = get_jwt('my_token').decode()
    mqtt.single(
        f'commands/{device_id}',
        payload=payload,
        qos=2,
        hostname=MESSAGING_HOST,
        port=8080,
        client_id='client_id',
        auth={'username': 'username', 'password': f'Bearer {token}'},
        transport='websockets'
    )
On the client, the session is established with the following options:
func startListenerRun(cmd *cobra.Command, args []string) {
    // mqtt.DEBUG = log.New(os.Stdout, "", 0)
    mqtt.ERROR = log.New(os.Stdout, "", 0)
    opts := mqtt.NewClientOptions().AddBroker(utils.GetMessagingHost()).SetClientID(utils.GetClientId())
    opts.SetKeepAlive(20 * time.Second)
    opts.SetDefaultPublishHandler(f)
    opts.SetPingTimeout(5 * time.Second)
    opts.SetCredentialsProvider(credentialsProvider)
    opts.SetConnectRetry(false)
    opts.SetAutoReconnect(true)
    opts.WillQos = 2 // exported field; lowercase willQos would not compile
    opts.SetCleanSession(false)
I am not showing all the code, but hopefully enough to illustrate how the session is being set up.
I am running VerneMQ as a docker container. We are using the following environment variables to change configuration defaults in the Dockerfile:
ENV DOCKER_VERNEMQ_PLUGINS.vmq_diversity on
ENV DOCKER_VERNEMQ_VMQ_DIVERSITY.myscript1.file /etc/vernemq/authentication.lua
ENV DOCKER_VERNEMQ_VMQ_ACL.acl_file /etc/vernemq/vmq.acl
ENV DOCKER_VERNEMQ_PLUGINS.vmq_acl on
ENV DOCKER_VERNEMQ_RETRY_INTERVAL=3000
As long as the client has an active connection to the broker, the server's published messages arrive seamlessly. However, if I manually close the client's connection to the broker, and then publish a message on the backend to that client, when the client's connection reopens, the message is not resent by the broker. As I said, I am new to MQTT, so I may need to configure additional options, but so far I've yet to determine which. Can anyone shed any light on what might be happening on my setup that would cause offline messages to not be sent? Thanks for any information.
As thrashed out in the comments: messages will only be queued for an offline client that has subscribed at greater than QoS 0. More details can be found here.
You need to set the QoS to 1 or 2, depending on your requirements. You can also use the retain flag, which is quite useful: a retained message is redelivered to new subscribers irrespective of any failure, so you can always learn the last status of the device. Check this http://www.steves-internet-guide.com/mqtt-retained-messages-example/
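A minimal sketch of both sides in Python with paho-mqtt (hostname, topic and client id are placeholders): the subscriber must connect with a persistent session and subscribe at QoS > 0 for the broker to queue messages while it is offline, and the publisher must also send at QoS > 0.

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

# Subscriber: clean_session=False keeps the session (and any queued QoS>0
# messages) across disconnects; the QoS of the subscription matters too.
client = mqtt.Client(client_id='device-1', clean_session=False)
client.connect('broker.example.com', 1883)
client.subscribe('commands/device-1', qos=1)  # QoS 0 subscriptions are never queued

# Publisher: QoS>0 so the broker queues it for the offline subscriber;
# retain=True additionally stores the last message for future subscribers.
publish.single('commands/device-1', payload='reboot', qos=1, retain=True,
               hostname='broker.example.com')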

How to create a node js API for which users can subscribe to listen to events?

I am trying to create a Node.js API that users can subscribe to in order to receive event notifications.
I created the API below and was able to call it using Python; however, it's not clear to me how people can subscribe to it.
How can people subscribe to this API to get notified that a new root build was released? What do I need to change?
node.js API
app.get("/api/root_event_notification", (req, res, next) => {
    console.log(req.query.params)
    var events = require('events');
    var eventEmitter = new events.EventEmitter();

    //Create an event handler:
    var myEventHandler = function () {
        console.log('new_root_announced!');
        res.status(200).json({
            message: "New root build released!",
            posts: req.query.params
        });
    }
    // note: myEventHandler is never registered on eventEmitter or fired
})
Python call

import requests

input_json = {'BATS': '678910', 'root_version': '12A12'}
url = 'http://localhost:3000/api/root_event_notification?params=%s' % input_json
response = requests.get(url)
print(response.text)

Output:
{"message":"New root build released!","posts":"{'root_version': '12A12', 'BATS': '678910'}"}
You can't just postpone sending an HTTP response for an arbitrary amount of time. Both client and server (and sometimes the hosting provider's infrastructure) will time out the HTTP request after some number of minutes. There are various tricks to try to keep the HTTP connection alive, but all have limitations.
Using web technologies, the usual options for clients to receive updated server data are:
HTTP polling (the client regularly polls the server). There's also a long-polling variation that attempts to improve efficiency a bit.
WebSocket. The client makes a webSocket connection to the server, which is a lasting, persistent connection. Then either client or server can send data/events over this connection at any time, allowing the server to efficiently send notifications to the client whenever it wants.
Server-Sent Events (SSE). This is a newer HTTP technology that allows one-way notification from server to client over a long-lived HTTP connection.
Since a server typically cannot connect directly to a client because of firewall and public IP address issues, the usual mechanism for server-to-client notification is either a persistent webSocket connection (over which either side can send packets) or the newer SSE (over which the server can push events to the client); see the sketch below.
The client can also "poll" the server repeatedly, but this is not really an event notification system (and not particularly efficient or timely) so much as it is some state that the client can check.
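For what it's worth, here is a hedged sketch of the SSE approach from the client's side in Python (the endpoint URL and event payload are assumptions; the Node.js route would have to be rewritten to hold the connection open and write text/event-stream frames):

import requests

# Open a streaming GET to a hypothetical SSE endpoint; the server keeps the
# connection open and writes one "data: ..." frame per new root build.
with requests.get('http://localhost:3000/api/root_event_notification',
                  stream=True) as response:
    for line in response.iter_lines(decode_unicode=True):
        # SSE is plain text; payload lines are prefixed with "data:"
        if line and line.startswith('data:'):
            print('new build event:', line[len('data:'):].strip())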

Mapping gRPC error codes to HTTP error codes

My web-application makes HTTP requests to an Extensible Service Proxy (ESP), which in turn delegates to a gRPC server (written in Python). Ignoring Android and iOS clients, the architecture is: web client → ESP → gRPC server.
The ESP is an nginx reverse proxy.
The gRPC server ("Your code" in the reference architecture) may raise an exception, in which case I use context.abort to raise an exception and terminate the RPC with a non-OK status:
try:
    # Do something that could fail.
    ...
except ValueError as e:
    context.abort(grpc.StatusCode.DATA_LOSS, str(e))
While it is possible to use set_code and set_details, they still result in an HTTP status of 200 OK.
There are two problems:
1) The gRPC status codes are translated by the ESP container (nginx proxy) to a generic 500 Internal Server Error.
2) The accompanying details are stripped out.
1) and 2) combined mean the web client gets, at most, a generic 500 Internal Server Error for every exception raised by the gRPC server.
Ultimately, I don't understand how more informative (ideally, custom) errors can be returned to web-clients.
The gRPC status code DATA_LOSS is translated to HTTP code 500. The code is here.
The gRPC status details (status code and error message) are sent back in the response body in JSON format. The code is here.
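So one way to get more informative HTTP errors is to abort with a gRPC status code whose standard HTTP mapping matches what the web client should see. A hedged sketch (the handler, request fields and _store lookup are hypothetical; the mapping, e.g. INVALID_ARGUMENT → 400 and NOT_FOUND → 404, follows the standard gRPC-to-HTTP translation the proxy applies):

import grpc

def GetWidget(self, request, context):
    if not request.id:
        # the proxy should surface this as HTTP 400, message in the JSON body
        context.abort(grpc.StatusCode.INVALID_ARGUMENT, 'id is required')
    widget = self._store.get(request.id)  # _store is a hypothetical lookup
    if widget is None:
        # the proxy should surface this as HTTP 404
        context.abort(grpc.StatusCode.NOT_FOUND,
                      'widget %s does not exist' % request.id)
    return widget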

Connect timeout and retry, what happens with old request?

What if I send off a GET/POST request, get hit by a connect timeout (not a read timeout), and retry the request afterwards?
Will the old request also be cancelled on the server, or might it still arrive at the server at a later time and be executed there?
Also, if we are not hit by a connect timeout but the response just takes longer, that should mean the request arrived at the server and is probably still being executed there, right? So we should wait until the response is received, since we established the connection for sure?
Thank you in advance!
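A hedged sketch of the distinction in Python requests (the URL is a placeholder): a connect timeout fires before the TCP connection is even established, so the request never reached the server and retrying cannot duplicate work there; a read timeout fires after the connection is up, so the server may still be executing the request even though the client gave up.

import requests

try:
    # timeout=(connect timeout, read timeout), both in seconds
    response = requests.get('https://api.example.com/job', timeout=(3, 30))
except requests.exceptions.ConnectTimeout:
    pass  # nothing reached the server; a retry is safe
except requests.exceptions.ReadTimeout:
    pass  # the server may still finish the old request; retry with care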

Python Session 10054 Connection Aborted Error

I wrote a web scraper using the requests module. I open a session and send subsequent requests through this session. It has two phases:
1) Scrape page by page and collect ids in an array.
2) Get details about each id in the array by sending requests to an ajax server on the same host.
The scraper works fine on my Linux machine. However, when I run the bot on Windows 10, phase 1 completes just fine, but after a couple of requests in phase 2 Python throws this exception:
File "c:\python27\lib\site-packages\requests\adapters.py", line 453, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(10054, 'Varolan bir ba\xf0lant\xfd uzaktaki bir ana bilgisayar taraf\xfdndan zorla kapat\xfdld'))
What is different between two OS's which causes this? How can I overcome this problem?
Modifying my request code as below with the retrying module had no positive effect. Now the script doesn't throw exceptions but simply hangs, doing nothing.
from retrying import retry  # the decorator comes from the 'retrying' package

@retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_attempt_number=7)
def doReq(self, url):
    time.sleep(0.5)
    response = self.session.get(url, headers=self.headers)
    return response
I still don't know why this problem occurs only on Windows. However, the retrying decorator seems to have fixed the socket error. The script was hanging because the server stopped responding to a request: by default, requests waits forever for a response. With a timeout value, requests throws a timeout exception, which the retry decorator catches before trying again. I know this is a workaround rather than a solution, but it's the best I've got right now.
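Putting the two pieces together, a sketch of the working version (the timeout value is an assumption; anything over the server's usual response time should do):

import time
from retrying import retry

class Scraper(object):  # sketch of the surrounding class from the question
    @retry(wait_exponential_multiplier=1000, wait_exponential_max=10000,
           stop_max_attempt_number=7)
    def doReq(self, url):
        time.sleep(0.5)
        # timeout= makes a silent server raise instead of hanging forever;
        # the retry decorator then catches the exception and tries again
        response = self.session.get(url, headers=self.headers, timeout=10)
        return response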
