I'm trying to get OpenTelemetry tracing working with FastAPI and Requests. Currently, my setup looks like this:
import requests
from opentelemetry.baggage.propagation import W3CBaggagePropagator
from opentelemetry.propagators.composite import CompositePropagator
from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.propagate import set_global_textmap
from opentelemetry.propagators.b3 import B3MultiFormat
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
set_global_textmap(CompositePropagator([B3MultiFormat(), TraceContextTextMapPropagator(), W3CBaggagePropagator()]))
app = FastAPI()
FastAPIInstrumentor.instrument_app(app)
RequestsInstrumentor().instrument()
@app.get("/")
async def get_things():
    r = requests.get("http://localhost:8081")
    return {
        "Hello": "world",
        "result": r.json(),
    }
The / endpoint just does a GET to another service that looks basically like this one, just with some middleware to log the incoming headers.
If I send a request like this (httpie format),
http :8000 'x-b3-traceid: f8c83f4b5806299983da51de66d9a242' 'x-b3-spanid: ba24f165998dfd8f' 'x-b3-sampled: 1'
I expect the downstream service, i.e. the one being requested by requests.get("http://localhost:8081"), to receive headers that look something like:
{
    "x-b3-traceid": "f8c83f4b5806299983da51de66d9a242",
    "x-b3-spanid": "xxxxxxx",  # some generated value from the upstream service
    "x-b3-parentspanid": "ba24f165998dfd8f",
    "x-b3-sampled": "1"
}
But what I'm getting is basically exactly what I sent to the upstream service:
{
    "x-b3-traceid": "f8c83f4b5806299983da51de66d9a242",
    "x-b3-spanid": "ba24f165998dfd8f",
    "x-b3-sampled": "1"
}
I must be missing something obvious, but can't seem to figure out exactly what.
Sending a W3C traceparent header results in exactly the same situation (just with traceparent in the headers received downstream). Any pointers would be appreciated.
EDIT - I'm not using any exporters, since in our environment Istio is configured to export the traces. So we just care about the HTTP traces for now.
The B3MultiFormat propagator doesn't consider the parent span ID field while serialising the context into HTTP headers, since X-B3-ParentSpanId is an optional header (see https://github.com/openzipkin/b3-propagation#multiple-headers). You can expect X-B3-TraceId and X-B3-SpanId to always be present, but not the remaining ones.
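For illustration, here is a minimal sketch of what B3MultiFormat injects from the current context; only the trace ID, span ID, and sampling flag are serialised, never a parent span ID:

from opentelemetry.propagators.b3 import B3MultiFormat

# Minimal sketch: inspect what B3MultiFormat serialises from the
# current context (assumes an active, recording span exists here).
carrier = {}
B3MultiFormat().inject(carrier)
print(carrier)  # at most: x-b3-traceid, x-b3-spanid, x-b3-sampled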
Edit:
Are you setting a concrete tracer provider? It doesn't look like it from the shared snippet, but I don't know if this is your actual application code. Everything is a no-op if you do not set the SDK tracer provider, i.e. no recording spans are created in the FastAPI service. Please do the following:
...
from opentelemetry.trace import set_tracer_provider
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource

set_tracer_provider(TracerProvider(
    resource=Resource.create({"service.name": "my-service"})
))
...
Another edit:
OpenTelemetry does not store the parent span ID in the SpanContext (see https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#spancontext). The context propagation client libraries from OTel can only serialise and pass on what the context holds, so I don't think you can have the parentSpanId propagated.
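You can verify this from the API itself; a quick sketch of the fields a SpanContext exposes:

from opentelemetry import trace

# SpanContext carries only trace_id, span_id, is_remote, trace_flags
# and trace_state; there is no parent span ID field to propagate.
ctx = trace.get_current_span().get_span_context()
print(ctx.trace_id, ctx.span_id, ctx.is_remote, ctx.trace_flags)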
I've been trying to figure out how to properly do a POST with FastAPI.
I'm currently doing a POST with Python's requests module, passing some JSON data as shown below:
import requests

json_data = {"user": "MrMinty", "pass": "password"}  # JSON data
endpoint = "https://www.testsite.com/api/account_name/?access_token=1234567890"  # endpoint
print(requests.post(endpoint, json=json_data).content)
I don't understand how to do the same POST using just FastAPI's functions, and how to read the response.
The requests module is not a FastAPI function, but it is everything you need.
First you need to have your FastAPI server running, either on your computer or on an external server that you have access to.
Then you need to know the IP or domain of your server, and you make the request:
import json
import requests

some_info = {'info': 'some_info'}
head = 'http://192.168.0.8:8000'  # IP and port of your server
# maybe in your case the IP is localhost

# "send_some_info" is the path of your FastAPI endpoint
requests.post(f'{head}/send_some_info', data=json.dumps(some_info))
Your FastAPI script would look like this, and it should be running while you make your request:
from fastapi import FastAPI

app = FastAPI()

@app.post("/send_some_info")
async def test_function(payload: dict[str, str]):
    # do something with payload
    return 'Success'
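If you just want to exercise the endpoint without a separate client process, FastAPI also bundles a test client; a small sketch (depending on your FastAPI version it is backed by the requests or httpx package):

from fastapi.testclient import TestClient

client = TestClient(app)
# Posts the dict as the JSON request body; no running server needed
response = client.post("/send_some_info", json={"info": "some_info"})
print(response.status_code, response.json())  # 200 'Success'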
I am working with multiple applications that communicate asynchronously using Kafka. These applications are managed by several departments, and contract testing is appropriate to ensure that the messages used during communication follow the expected schema and evolve according to the contract specification.
It sounded like the pact library for Python is a good fit, because it helps create contract tests for HTTP and message integrations.
What I wanted to do is send an HTTP request and then listen on the appropriate, dedicated Kafka topic immediately after. But it seems that the test is forcing me to specify an HTTP status code, even though what I am expecting is a message from a queue with no HTTP status code at all. Furthermore, it seems that the HTTP request is being sent before the consumer is listening. Here is some sample code:
import atexit
import unittest

from pact.consumer import Consumer as p_Consumer
from pact.provider import Provider as p_Provider
from confluent_kafka import Consumer as k_Consumer

pact = p_Consumer('Consumer').has_pact_with(p_Provider('Provider'))
pact.start_service()
atexit.register(pact.stop_service)

config = {'bootstrap.servers': 'server', 'group.id': 0, 'auto.offset.reset': 'latest'}
consumer = k_Consumer(config)
consumer.subscribe(['usertopic'])
def user():
    while True:
        msg = consumer.poll(timeout=1)
        if msg is None:
            continue
        else:
            return msg.value().decode()
class ContractTesting(unittest.TestCase):
    expected = {
        'username': 'UserA',
        'id': 123,
        'groups': ['Editors']
    }

    def test_user(self):
        (pact
         .given('UserA exists and is not an administrator')
         .upon_receiving('a request for UserA')
         .with_request(method='GET', path='/user/')
         .will_respond_with(200, body=self.expected))

        with pact:
            result = user()
            self.assertEqual(result, self.expected)
How would I carry out contract testing in Python using Kafka? It feels like I am jumping through a lot of hoops to carry out this test.
With Pact message it's a different API that you write tests against. You don't use the standard HTTP one; in fact, the transport itself is ignored altogether, and it's just the payload - the message - that we're interested in capturing and verifying. This allows us to test any queue without having to build specific interfaces for each one.
See this example: https://github.com/pact-foundation/pact-python/blob/02643d4fb89ff7baad63e6436f6a929256c6bf12/examples/message/tests/consumer/test_message_consumer.py#L65
You can read more about message pact testing here: https://docs.pact.io/getting_started/how_pact_works#non-http-testing-message-pact
And finally here are some Kafka examples for other languages that may be helpful: https://docs.pactflow.io/docs/examples/kafka/js/consumer
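Adapted to your case, a message-pact consumer test could look roughly like the following sketch (handle_user_message is a hypothetical stand-in for whatever your application does with a consumed Kafka payload):

from pact import MessageConsumer, Provider

pact = MessageConsumer('UserConsumer').has_pact_with(Provider('UserProvider'))

expected = {'username': 'UserA', 'id': 123, 'groups': ['Editors']}

(pact
 .given('UserA exists and is not an administrator')
 .expects_to_receive('a user message for UserA')
 .with_content(expected))

with pact:
    # Run your real message handler against the expected payload; Pact
    # records the interaction and writes the contract file on success.
    handle_user_message(expected)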
I am running Pyramid as my API server. Recently we started getting query string parameters out of order when handed to the RESTful API server. For example, a GET to /v1/finishedGoodRequests?exact=true&id=39&join=OR&exact=false&name=39
is logged by the RESTful API module upon init as request.url:
v1/finishedGoodRequests?join=OR&name=39&exact=true&exact=false&id=39
with request.query_string: join=OR&name=39&exact=true&exact=false&id=39
I process the query params in order to qualify the search, in this case id exactly 39, or 39 anywhere in the name. What kind of server setting or bug could have crept into the server code to cause such a thing? It is still a MultiDict...
As a simple example, the following works fine for me, and the MultiDict has always preserved the order, so I suspect something is getting rewritten by something you're using in your stack.
from pyramid.config import Configurator
from pyramid.view import view_config
from waitress import serve

@view_config(renderer='json')
def view(request):
    return list(request.GET.items())

config = Configurator()
config.scan(__name__)
app = config.make_wsgi_app()
serve(app, listen='127.0.0.1:8080')
$ curl http://localhost:8080\?join=OR\&name=39\&exact=true\&exact=false\&id=39
[["join", "OR"], ["name", "39"], ["exact", "true"], ["exact", "false"], ["id", "39"]]
Depending on which WSGI server you are using, you can often inspect environ vars to see the original URL, which may be handy. Waitress does not provide this, so instead put something high up in the pipeline (WSGI middleware) that logs out environ['QUERY_STRING'], and see whether it matches what you observe lower down in your stack.
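For example, a few lines of WSGI middleware are enough to capture the query string at the top of the pipeline; a sketch (wrap your app wherever it is constructed):

import logging

def query_string_logger(app):
    def middleware(environ, start_response):
        # Log the raw query string before anything downstream can touch it
        logging.warning("QUERY_STRING=%r", environ.get('QUERY_STRING', ''))
        return app(environ, start_response)
    return middleware

app = query_string_logger(config.make_wsgi_app())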
I am seeing the following error in the JavaScript console:
VM31:1 XMLHttpRequest cannot load '<some-url>'. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<my-url>' is therefore not allowed access.
How do I enable Cross-Origin Resource Sharing with Google App Engine (Python) to allow access?
You'll have to use the Access-Control-Allow-Origin HTTP header in your yaml configuration:
handlers:
- url: /
  ...
  http_headers:
    Access-Control-Allow-Origin: http://my-url
Find more under CORS Support in the docs
For a Python script, you can add the following line near your other self.response.headers lines.
self.response.headers['Access-Control-Allow-Origin'] = '*'
This worked for me. The idea was taken from a PHP issue listed in the notes of another answer.
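In context, that line would sit in a request handler along these lines; a sketch assuming the webapp2 framework of the first-generation Python App Engine runtime:

import webapp2

class CorsHandler(webapp2.RequestHandler):
    def get(self):
        # Allow any origin to read this response
        self.response.headers['Access-Control-Allow-Origin'] = '*'
        self.response.write('ok')

app = webapp2.WSGIApplication([('/', CorsHandler)])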
For those who are wondering how to allow all origins for an App Engine instance in Spring Boot:
use the @CrossOrigin(origins = "*") annotation on the @RestController classes your project has,
or use the same annotation on any of your specific resource methods that has one of the @GetMapping, @PostMapping, etc. annotations.
There is no need to set any of the handlers in the app.yaml. Actually, it didn't work for me when changing the app.yaml file as explained in the docs.
...
...
...
@SpringBootApplication
@RestController
@CrossOrigin(origins = "*") // <--- here
public class SpringbootApplication {
    ...
    ...
    @GetMapping("/")
    @CrossOrigin(origins = "*") // <--- or here
    public String hello() {
        .....
    }
}
If you want to serve a script, you cannot use Jeffrey Godwyll's answer, unfortunately. The documentation, somewhat hidden in the second sentence of the http_headers section, states: "If you need to set HTTP headers in your script handlers, you should instead do that in your app's code."
Another possibility is to let your app handle pre-flight requests by "prematurely" returning the headers. GOTCHA: if you are building a POST endpoint, make it return the Access-Control-Allow-Origin headers on everything BUT your desired request method, since sometimes there may be a pre-flight GET as well (for some odd reason):
from flask import Flask, request

HEADERS = {
    "Access-Control-Allow-Origin": "*",
}

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def main():
    if request.method != "POST":
        return ("", 204, HEADERS)
    return "After the POST"
If you are building an app for GET only, you can instead write if request.method == "OPTIONS": ..., as in the Cloud Functions documentation.
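That GET-only variant might look like the following sketch, following the same pattern as above (the /data route is illustrative):

@app.route("/data", methods=["GET", "OPTIONS"])
def get_data():
    if request.method == "OPTIONS":
        # Answer the pre-flight request with the CORS headers only
        return ("", 204, HEADERS)
    return ("the actual GET response", 200, HEADERS)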
I am trying to create a jsonrpc2 server that will accept JSON over HTTP, process the data, and return JSON back to the requesting client.
I am quite new to RPC servers and WSGI, and have only used them as part of a web framework like Django.
I am attempting to follow the example given in the jsonrpc2 documentation. The first step is creating a file hello.py:
def greeting(name):
    return dict(message="Hello, %s!" % name)
The next step involves starting the service
runjsonrpc2 hello
runserver :8080
I know the service is working, since when I use a browser on a remote machine and browse to http://myip.dydns.org:8080, it responds with "405 Method Not Allowed" and I see debug information in my server shell:
DEBUG:root:jsonrpc
DEBUG:root:check method
The next step is what I am having a hard time understanding: I want to know how to create a Python client to send JSON to the service and get a response.
What I tried is:
>>> from httplib import HTTPConnection
>>> h = HTTPConnection("myip.dydns.org:8080")
>>> from json import JSONEncoder
>>> call_values = {'jsonrpc':'2.0', 'method':'greeting', 'id':'greeting'}
What are the steps involved in getting the response from the web service using Python?
Sadly, the jsonrpc2 documentation only uses a TestApp from the webtest library to test on localhost.
I could not find any sample Python code that creates a client from a remote machine and gets a response for the greeting function.
Can someone help me get started?
edit: I got a little further, but I still cannot get the contents of the response:
>>> from httplib import HTTPConnection
>>> con = HTTPConnection("myip.dyndns.org:8080")
>>> import json
>>> con.request('POST', '/', json.dumps({"jsonrpc": "2.0", "method": "casoff_jsonrpc2.greeting", "id":1.0,"params":{"name":"harijay"}},ensure_ascii=False).encode('utf-8'), {'Content-Type': 'application/json;charset=utf-8'})
I then see the server echo to its shell:
DEBUG:root:jsonrpc
DEBUG:root:check method
DEBUG:root:check content-type
DEBUG:root:response {"jsonrpc": "2.0", "id": 1.0, "result": {"message": "Hello, harijay!"}}
But on the client, I don't know how to get the result.
edit2: I finally solved this.
All I had to do was:
>>> con.getresponse().read()
Try the excellent requests package.
If you intend to do anything with HTTP clients in Python, I would highly recommend learning requests - it is an order of magnitude easier to learn and use than any other HTTP-related module in Python, and for me it became a sort of Swiss army knife for experimenting over HTTP.
An example of how to use it for JSON-RPC is here:
https://stackoverflow.com/a/8634905/346478
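Applied to your greeting service, the whole client collapses to a few lines; a sketch reusing the payload from your edit:

import requests

payload = {
    "jsonrpc": "2.0",
    "method": "casoff_jsonrpc2.greeting",
    "params": {"name": "harijay"},
    "id": 1,
}
# json= serialises the payload and sets the Content-Type header for you
response = requests.post("http://myip.dyndns.org:8080/", json=payload)
print(response.json()["result"])  # {'message': 'Hello, harijay!'}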