How to log request and response activity on a python gRPC server? - python

I went through the gRPC tutorial and got a successful response from the server.
However, the server did not log anything to the command line. I was expecting to see something like "request received - response sent with this status code", similar to the Django dev server or a local http.server:
Serving HTTP on 127.0.0.1 port 8000 (http://127.0.0.1:8000/) ...
127.0.0.1 - - [27/Jun/2022 13:06:08] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [27/Jun/2022 13:06:08] code 404, message File not found
What I tried with my gRPC server:
GRPC_VERBOSITY=DEBUG python greeter_server.py
That just gave strange output. I tried with INFO and received nothing when requests came in.
I saw on grpc.server there was the interceptor parameter:
interceptors: An optional list of ServerInterceptor objects that observe
and optionally manipulate the incoming RPCs before handing them over to
handlers. The interceptors are given control in the order they are
specified. This is an EXPERIMENTAL API.
How can I log requests and responses to the cli?

In gRPC, since the complete implementation of the server and client is in your hands, it is up to the developer to create context-rich logging on both sides.
You can use the Python logging module to print logs to the CLI. This is better than a plain print() in many ways.
Ideally, one can make the server print logs such as:
The interface on which the server is started (ip/hostname:port).
When an RPC call is received for a particular method.
Any intermediate info or debug logs inside the method, etc.
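A minimal sketch of that kind of logging using Python's stdlib logging module (the SayHello name and greeting come from the gRPC hello-world tutorial; the grpc server plumbing itself is omitted here):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("greeter_server")

def say_hello(name):
    # Stand-in for Greeter.SayHello(self, request, context) from the
    # tutorial servicer: log on entry and before returning, which gives
    # you access-log-style lines on the CLI for every RPC.
    log.info("RPC SayHello received: name=%r", name)
    reply = "Hello, %s!" % name
    log.info("RPC SayHello responding: %r", reply)
    return reply
```

Calling the real servicer method from a client would then produce one "received" and one "responding" line per RPC on the server's terminal.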
In case you are looking for gRPC's built-in logging, which is mainly useful during development and not much help in production, you can explore gRPC traces (the GRPC_TRACE environment variable, used together with GRPC_VERBOSITY), with which you can control logging of the different gRPC sub-components.

Related

Implementing WebSockets with Sony's Audio Control API in Python

Sony's website provides an example of using WebSockets to work with their API in Node.js:
https://developer.sony.com/develop/audio-control-api/get-started/websocket-example#tutorial-step-3
It worked fine for me. But when I try to implement it in Python, it does not seem to work.
I use websocket-client:
import ssl
import websocket

ws = websocket.WebSocket()
ws.connect("ws://192.168.0.34:54480/sony/avContent", sslopt={"cert_reqs": ssl.CERT_NONE})
gives
websocket._exceptions.WebSocketBadStatusException: Handshake status 403 Forbidden
but in their example code, there is not any kind of authorization or authentication.
I recently had the same problem. Here is what I found out:
Normal HTTP responses can contain Access-Control-Allow-Origin headers to explicitly allow other websites to request data. Otherwise, web browsers block such "cross-origin" requests, because the user could be logged in there for example.
This "same-origin-policy" apparently does not apply to WebSockets and the handshakes can't have these headers. Therefore any website could connect to your Sony device. You probably wouldn't want some website to set your speaker/receiver volume to 100% or maybe upload a defective firmware, right?
That's why the audio control API checks the Origin header of the handshake. It always contains the website the request is coming from.
The Python WebSocket client you use sets http://192.168.0.34:54480/sony/avContent as the Origin by default in your case. However, it seems that the API ignores the content of the Origin header and just checks whether it is present at all.
The WebSocket#connect method has a parameter named suppress_origin which can be used to exclude the Origin header.
TL;DR
The Sony audio control API doesn't accept WebSocket handshakes that contain an Origin header.
You can fix it like this:
ws.connect("ws://192.168.0.34:54480/sony/avContent",
           sslopt={"cert_reqs": ssl.CERT_NONE},
           suppress_origin=True)
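For completeness, here is a sketch of sending a request once connected. The JSON-RPC-style envelope (method, id, params, version) matches the shape used in Sony's own examples, but the specific method name and version below are illustrative assumptions; check the API reference for your device and service:

```python
import json

def make_payload(method, params, version="1.0", req_id=1):
    # Sony's Audio Control API speaks a JSON-RPC-style protocol over
    # the WebSocket: each request is one JSON object with these keys.
    return json.dumps({
        "method": method,
        "id": req_id,
        "params": params,
        "version": version,
    })

payload = make_payload("getPlayingContentInfo", [])
# ws.send(payload)    # after ws.connect(..., suppress_origin=True)
# print(ws.recv())    # the reply arrives as a JSON string
```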

Using twisted to selectively reverse proxy to different servers

I'm using Twisted (well twistd actually) to serve content like this currently :
twistd -n -o web --path=./foo/
That's fine but I want to send some requests to another server - like this.
When the client requests
localhost/something.html
I want the request to be handled by the twistd server.
But when the client requests
localhost/api/somedata
I want the request to be reverse proxied to another server.
So in summary, if the URL contains the string "api", I want the request reverse proxied elsewhere.
I can see that Twisted has a built in Reverse Proxy but I don't know how to use that so that I can filter requests made in such a way that some requests would get sent off to the alternative server and some wouldn't.
ReverseProxyResource is a resource, so you can place it into a resource hierarchy (note that on Python 3, putChild takes bytes for the path segment):
from twisted.web.resource import Resource
from twisted.web.proxy import ReverseProxyResource

root = Resource()
root.putChild(b"something.html", SomethingHTML())
root.putChild(b"api", ReverseProxyResource("localhost", 8081, b"/api"))
Here "localhost", 8081, and b"/api" stand in for the host, port, and path prefix of the other server. This is just one example of an arrangement of the resource hierarchy; you can combine ReverseProxyResource with other resources in any of the ways supported by IResource.

Obtain the AWS SQS Message Receipt Handle from the forwarded aws-sqsd daemon Elastic Beanstalk requests

Does anyone know how to get the message receipt handle from the aws-sqsd worker daemon (when the request is forwarded to the application)? Basically, aws-sqsd drives the work-item pickup, and I then want to use boto to potentially extend the message visibility timeout. Since I don't have the message object itself in boto, m.receipt_handle does not help. The headers of the aws-sqsd request provide almost all of that info, except the receipt handle.
We are trying to extend the current solution we have in place for SQS message handling. The worker instances in our Elastic Beanstalk environments are set up to use the aws-sqsd daemon for message retrieval and forwarding to our application.
We are currently trying to determine the best means of obtaining the message receipt handle from the aws-sqsd daemon, to then allow us to extend the message visibility timeout using boto after the fact (which requires the receipt handle).
As it stands, aws-sqsd provides the following information in the request headers, but sadly the message receipt handle is not among them:
Headers:
Host: localhost
X-Aws-Sqsd-Sent-At: 2015-12-02T03:46:35Z
User-Agent: aws-sqsd/2.0
X-Aws-Sqsd-Queue:
X-Aws-Sqsd-Path:
X-Forwarded-For: 127.0.0.1
X-Aws-Sqsd-First-Received-At: 2015-12-02T03:46:35Z
X-Aws-Sqsd-Msgid:
X-Aws-Sqsd-Receive-Count: 1
Content-Type: application/x-www-form-urlencoded
X-Aws-Sqsd-Sender-Id:
Content-Length: 298
X-Real-Ip: 127.0.0.1
It would be very helpful to also include:
X-Aws-Sqsd-Message-Receipt-Handle
Currently using:
X-Aws-Sqsd-Queue
X-Aws-Sqsd-Msgid
How could I go about getting that important piece of information?
This touches on an overall limitation I have been encountering: it is very hard to obtain metadata for instances/environments from within an instance. For example, getting the 'region' requires using the ec2-metadata tool or boto.utils.get_instance_metadata(), pulling out the placement/availability zone, and then doing string manipulation to derive the region. Why is 'region' not provided directly? The same goes for 'worker_queue_url', which has to be parsed out of the message headers themselves.
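The string manipulation mentioned above can be sketched like this (assuming the usual AWS naming convention, where an availability zone is the region name plus a single trailing zone letter):

```python
def region_from_az(availability_zone):
    # An availability zone such as "us-east-1a" is the region name
    # ("us-east-1") plus a trailing zone letter, so dropping the last
    # character yields the region.
    return availability_zone[:-1]
```

You would feed it the placement/availability-zone value pulled from the instance metadata (e.g. via boto.utils.get_instance_metadata()).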

How do I pass internet traffic through localhost?

I have created a local server using Python and I am trying to pass traffic to see the HTTP requests and responses.
I've tried inputting the following into my browser and only get a 404:
http://127.0.0.1:8080/http://www.google.com
For clarification sake I am trying to get the following:
Starting simple_httpd on port: 8080
127.0.0.1 - - [05/Jan/2015 15:12:33] code 404, message File not found
127.0.0.1 - - [05/Jan/2015 15:12:33] "GET /http://www.google.com HTTP/1.1" 404 -
If you'd like the browser to send HTTP requests through a proxy server you've developed running on localhost, you need to configure its proxy settings to point at (in your case) "127.0.0.1:8080."
On Chrome, for example, you can do this under Settings | Advanced Settings (depending on OS, it will probably just bring you to the appropriate system-wide proxy settings for your computer).
Another option is to edit your computer's hosts file to point the desired domain(s) at 127.0.0.1. If you go this way, you'll need to run your server on port 80.
Note that writing a completely functional proxy server with good compatibility can be rather involved, but getting the basics up for experimentation isn't so bad. Also note that you'll generally be out of luck for SSL traffic.
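To illustrate "the basics", here is a bare-bones sketch of such a logging proxy using only the stdlib (plain HTTP only, GET only; the port and the decision to forward just Content-Length are simplifications, not a production design):

```python
import http.server
import urllib.request

class LoggingProxy(http.server.BaseHTTPRequestHandler):
    """Minimal logging forward proxy for plain-HTTP GET requests."""

    def do_GET(self):
        # When the browser is configured to use a proxy, the request
        # line carries the absolute URL, so self.path is the full
        # target URL (e.g. "http://www.google.com/").
        url = self.path
        self.log_message("forwarding GET %s", url)
        try:
            with urllib.request.urlopen(url) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

# To run it:
#   http.server.HTTPServer(("127.0.0.1", 8080), LoggingProxy).serve_forever()
```

With the browser pointed at 127.0.0.1:8080 as its HTTP proxy, each page fetch then shows up as a "forwarding GET ..." log line plus the usual access-log line on the terminal.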
BTW if you are just interested in seeing the request/response traffic, you might be better served with your browser's developer tools, or the excellent Firebug, which generally allow pretty good inspection of HTTP traffic in the "network" tab.

Pylons 0.9.6 Get Current Server Name

In my Pylons config file, I have:
[server:main1]
port = 9090
...config here...
[server:main2]
port = 9091
...config here...
These are run using:
paster serve --server-name=main1 ...(more stuff)...
paster serve --server-name=main2 ...(more stuff)...
Now, using HAProxy and Stunnel, I have all HTTP requests going to main1 and all HTTPS requests going to main2. I would like some of my controllers to react a little differently based on whether they are being requested over http or https, but pylons.request.scheme always thinks it is http even when it is not.
Seeing as main2 is always the one handling HTTPS requests, is there a way for the controller to determine which server name or id it was run under?
I got around this by changing the workflow so it does not have to react differently based on the protocol. It doesn't look like there's a way to pass a unique arbitrary identifier to each separate process that it can read.
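For reference, a common alternative (sketched here under the assumption that your proxy can inject request headers, e.g. HAProxy's "http-request set-header X-Forwarded-Proto https" on the TLS frontend) is to tag forwarded requests and read that header in the controller instead of trusting the socket's scheme:

```python
def request_scheme(headers, default="http"):
    # If the TLS-terminating proxy adds "X-Forwarded-Proto: https" to
    # every request it forwards, the app can recover the original
    # scheme even though it only ever sees plain http locally.
    return headers.get("X-Forwarded-Proto", default)
```

A controller would call this with the WSGI/Pylons request headers and branch on the result.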
