Python - Read headers inside controller?

I'm building a controller between a source control system and Odoo, so that a specific integrated source control system (like Bitbucket or GitHub) can send payload data as JSON. Reading the actual payload data works; what I'm struggling with is reading the header data inside the controller.
I need the header data so I can identify which system the payload came from (for example, the data structure might differ between Bitbucket and GitHub). If I could read that header, I would know which system sent the data and how to parse it properly.
So my controller looks like this:
from odoo import http
from odoo.http import request


class GitData(http.Controller):
    """Controller responsible for receiving git data."""

    @http.route(['/web/git.data'], type='json', auth="public")
    def get_git_data(self, **kwargs):
        """Get git data."""
        # How to read headers inside here??
        data = request.jsonrequest
        # do something with data
        return '{"response": "OK"}'
Now, for example, I can call this route with:

import requests
import json

url = 'http://some_url/web/git.data'
headers = {
    'Accept': 'text/plain',
    'Content-Type': 'application/json',
    'type': 'bitbucket'}
data = {'some': 'thing'}

r = requests.post(url, data=json.dumps(data), headers=headers)
Now it looks like the controller reads the headers automatically, because it understands that the request is of JSON type. But what if I need to manually check specific header data, like headers['type'] (in my example it was bitbucket)?
I tried looking into dir(self) and dir(request), but did not see anything related to headers. Also, **kwargs is empty, so there are no headers there.
Note: the request object is actually:

# Thread local global request object
_request_stack = werkzeug.local.LocalStack()

request = _request_stack()
"""
A global proxy that always redirect to the current request object.
"""

# (This is taken from the Odoo 10 source)

So basically it is part of werkzeug.
Maybe someone has more experience with werkzeug or controllers in general, so could point me in the right direction?
P.S. I also did not find any example in Odoo itself that reads headers the way I want. It looks like the only place headers are used (actually setting them rather than reading them) is after the fact, when building the response back.

Within your controller handling your specific path, you can access the request headers using the code below. (Confirmed on Odoo 8 and Odoo 10; probably works for Odoo 9 as well.)

from openerp.http import request

headers = request.httprequest.headers
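
For completeness, a minimal sketch (assembled from the question's controller, not taken from the Odoo source) of how that headers mapping could be used to branch on the custom 'type' header:

from odoo import http
from odoo.http import request


class GitData(http.Controller):

    @http.route(['/web/git.data'], type='json', auth="public")
    def get_git_data(self, **kwargs):
        # request.httprequest is the underlying werkzeug request,
        # so .headers behaves like a case-insensitive dict.
        headers = request.httprequest.headers
        source = headers.get('type')  # e.g. 'bitbucket' or 'github'
        data = request.jsonrequest
        # parse `data` differently depending on `source`
        return {'response': 'OK', 'source': source}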

Related

How to retrieve form data from Playwright Request object?

I am adding a custom route handler to a Playwright page and I am trying to inspect the request passed into the handler. For context, here is the relevant code snippet:

def handler(route: Route, request: Request):
    # Do things with `request`
    ...

await page.route('**/*', handler=handler)

For POST/PUT requests with a Content-Type of application/json, I have been able to successfully inspect the payload by using request.post_data_buffer. However, when the Content-Type is multipart/form-data, I have not been able to locate where I can get the form data. All of the post_data, post_data_buffer, and post_data_json properties have a value of None, and I couldn't see anything else in the documentation which could contain the form data.
The issue had nothing to do with any of the details in my original post. The issue was that I was using Chromium, and it is a known bug there that post_data does not contain file/blob data.
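
For reference, a rough sketch (not from the original post) of a handler that branches on the content type, assuming Playwright's async API:

from playwright.async_api import Route, Request

async def handler(route: Route, request: Request):
    content_type = request.headers.get('content-type', '')
    if 'application/json' in content_type:
        print(request.post_data_json)    # parsed JSON payload
    else:
        print(request.post_data_buffer)  # raw bytes; may be None on Chromium
                                         # for multipart file/blob bodies
    await route.continue_()

await page.route('**/*', handler)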

Post request with multiple parameters using Twisted Web Client

I would like to send a POST request with multiple parameters using the Twisted Web Client:

image: an image
metadata: a JSON document with metadata

I need to use pure Twisted, without external libraries like Treq or requests.
At the moment I can only send one parameter, and I have tried a few ways without success.
Does someone know how to change the body to achieve this goal?
from __future__ import print_function

from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers

from bytesprod import BytesProducer

agent = Agent(reactor)
body = BytesProducer(b"hello, world")
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['text/x-greeting']}),
    body)

def cbResponse(ignored):
    print('Response received')
d.addCallback(cbResponse)

def cbShutdown(ignored):
    reactor.stop()
d.addBoth(cbShutdown)

reactor.run()
You need to specify how you would like the parameters encoded. If you want to submit them like a browser form, you need to application/x-www-form-urlencoded or multipart/form-data encode the data. The former is generally for short data, and since one of your parameters is "image", it probably isn't short. So you should multipart/form-data encode the data.
Once you have, you just declare this in the request headers and include the encoded data in the body.
For example,
body = multipart_form_encoded_body_producer(your_form_fields)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['multipart/form-data']}),
    body)
Conveniently, treq provides a multipart/form-data encoder.
So multipart_form_encoded_body_producer(...) probably looks something like:

MultiPartProducer([
    ("image", image_data),
    ("metadata", some_metadata),
    ...
])
You mentioned that you can't use Treq, but you didn't mention why. I recommend using Treq, or at least finding another library that can do the encoding for you. If you can't do that for some unreasonable reason, you'll have to implement multipart/form-data encoding yourself. It is reasonably well documented, and of course there are multiple implementations you can use as references and as interoperability testing tools.
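
If Treq genuinely can't be used, a rough sketch of hand-rolling the multipart/form-data encoding with only Twisted and the standard library might look like the following; the field names, file path, and boundary value are placeholders, not taken from the question:

import json
from io import BytesIO

from twisted.internet import reactor
from twisted.web.client import Agent, FileBodyProducer
from twisted.web.http_headers import Headers

boundary = b'----twisted-form-boundary'

def encode_multipart(fields):
    """Encode (name, content_type, payload_bytes) triples into one multipart body."""
    parts = []
    for name, content_type, payload in fields:
        parts.append(b'--' + boundary + b'\r\n')
        # A real file field would normally also carry a filename= parameter here.
        parts.append(b'Content-Disposition: form-data; name="' + name + b'"\r\n')
        parts.append(b'Content-Type: ' + content_type + b'\r\n\r\n')
        parts.append(payload + b'\r\n')
    parts.append(b'--' + boundary + b'--\r\n')
    return b''.join(parts)

body_bytes = encode_multipart([
    (b'image', b'application/octet-stream', open('image.png', 'rb').read()),
    (b'metadata', b'application/json', json.dumps({'key': 'value'}).encode()),
])

agent = Agent(reactor)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({b'User-Agent': [b'Twisted Web Client Example'],
             b'Content-Type': [b'multipart/form-data; boundary=' + boundary]}),
    FileBodyProducer(BytesIO(body_bytes)))

d.addCallback(lambda _: print('Response received'))
d.addBoth(lambda _: reactor.stop())
reactor.run()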

How should I include csv data in a python put request using aiohttp

I'm trying to use the Salesforce Bulk API 2.0 to upsert some data, and it only accepts CSV data. In this documentation, step 2 says to create the CSV file, and step 4 says to upload the CSV data. I have code that doesn't throw any errors, but the record is not processed, which makes me think I am doing something wrong.
So I have the following as my csv_string:
csv_string = "Id, Name__c\n,\"Doe, John\""
Here is how I am currently sending the data:

headers = {'Content-Type': 'text/csv', 'Accept': 'application/json'}
data = {'file': csv_string}

async with self.session.put(upload_url, data=data, headers=headers) as response:
    r = await response.text()
    print(r)
According to the documentation, I am supposed to get a "response that includes the job ID, with a job state of Open", but it just prints an empty line.
Then, when I do step 16 (check the job status and results), it successfully returns JobComplete, and response.text() returns the following: "sf__Id","sf__Created",file=Id%2C+Name__c%0A%2C+%22Doe%2C+John%22, which is basically a URL-encoded version of my csv_string. There is no change to the data in Salesforce, so the upsert fails. The fact that an empty line is printed out makes me believe that I am not passing the CSV in correctly.
I've tried using aiohttp's FormData, but that changes the content type to multipart encoding, which is not accepted. I've also tried passing data=csv_string, which makes Salesforce return an error. I was thinking maybe I need to pass it in as binary data, for example like when you open a file using open("file_name", "rb"), but I don't know how to convert this existing string to binary data. Can someone give an example of how to pass CSV data in a request using aiohttp? Or maybe tell me how to convert this string to binary data so I can try passing it in that way?
Thanks @identigral. This was one of the issues.
One major thing that helped me debug was going to Setup -> Bulk Data Load Jobs. If you click on a specific job and hover over the "state message", it gives you the reason why the job failed. Although Salesforce has an API for getting a failed job's records here, which is supposed to return an error message, it did not work for me, which is why I felt kind of stuck and led me to believe I wasn't passing in the CSV correctly.
So I had a few errors:

Like identigral pointed out, I used "CRLF" as the line ending because I thought I was on Windows, but since I type out the string myself in the code, I had to use "LF". I believe that if I read in a CSV file created with Excel, I would probably have to use "CRLF", although I haven't tested that yet.
Salesforce doesn't like the space in front of "Name__c": although I had a field with that name on my object, it said field "Name__c" not found.
The documentation I linked said that after uploading the CSV, "You should get a response that includes the job ID still in the Open state." However, that is not the case. The PUT request to upload the CSV will have an empty response body and only return status 201 if the request was successful. This is documented here: link
I realized this was the correct way, as this documentation gives an example of passing in data of type text/plain by doing data='Привет, Мир!', so I figured text/csv should work the same way.
So the final code to send the CSV that ended up working is as follows (self.session is an instance of aiohttp.ClientSession(), and I had already included the bearer token in the default headers when initializing the session):
csv_string = "Id,Name__c\n,\"Doe,John\""
headers = {'Content-Type': 'text/csv', 'Accept': 'application/json'}

async with self.session.put(upload_url, data=csv_string, headers=headers) as response:
    assert response.status == 201  # data was successfully received
The following is how I defined the request body when creating the job (replace MyObject__c with the API name of the object in Salesforce):

body = {'object': 'MyObject__c',
        'contentType': 'CSV',
        'operation': 'upsert',
        'lineEnding': 'LF',
        'externalIdFieldName': 'Id'}

Python Requests Programmatically get Dev Tools Form Data pre-formatted as a dictionary

I am trying to update an already saved form on a system using HTTP requests. Due to the server configuration of the third-party app we use, updating by POST requires sending a fully filled out payload every single time.
I want to get round this by recovering the form data already present on the server and converting it into a dictionary, then changing any values I need and re-posting to make the changes server side.
The application we use sends a POST request when the save button is clicked for a particular form.
Here I send a POST request with no payload.
[This simulates pressing the save button and is also the point where Dev Tools shows me the payload I want to capture.]

post_test = self.session.post(url_to_retrieve_from)
I thought that now I should be able to print the output, which should resemble what the Google Dev Tools Form Data panel captures:

print(post_test.text)

This just gives me the HTML found on the webpage.
If Dev Tools can get this from the server then I should also be able to?
Example of the data I am trying to get via requests:
[screenshot of the Dev Tools Form Data panel]
If Dev Tools can get this from the server then I should also be able to?

Yes, of course. In requests you pass form data in the data keyword:

import requests

url = 'http://www.example.com'
data = {
    'name': 'value',
}

response = requests.post(url, data=data)
You can get the data you sent with a request from the response in this way:

import requests

response = requests.post('http://your_url', data=data)  # send request
body = response.request.body
parsed_data = dict(field.split('=') for field in body.split('&'))  # parse request body
Here you can find more information about the data argument.
In the documentation, in the requests.Response class, we can find the attribute:

request = None
    The PreparedRequest object to which this is a response.

In the requests.PreparedRequest class we can read:

body = None
    Request body to send to the server.
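
As a side note (not part of the original answers), the manual split above will leave URL-encoded characters in place; urllib.parse from the standard library decodes them for you:

from urllib.parse import parse_qsl

body = 'name=John+Doe&city=New%20York'  # example body, same shape as response.request.body
parsed_data = dict(parse_qsl(body))     # {'name': 'John Doe', 'city': 'New York'}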

Yggdrasil authentication with Python

I decided to try to make an automated login script for Minecraft. However, the new authentication API is stumping me. I can't find any mentions of the new functionality of the API on here. This is my code as it stands:
import requests
import json

data = json.dumps({"agent": {"name": "Minecraft", "version": 1},
                   "username": "abcdef",
                   "password": "abcdef",
                   "clientToken": ""})
headers = {'Content-Type': 'application/json'}

r = requests.post('https://authserver.mojang.com', data=data, headers=headers)
print(r.text)
Unfortunately, this returns:
{"error":"Method Not Allowed","errorMessage":"The method specified in the request is not allowed for the resource identified by the request URI"}
According to this resource on request format, this error means that I didn't correctly send a POST request. However, I clearly declared requests.post(), so my first question is: how am I incorrect, and what is the correct way to go about this?
My second question is: since I'm relatively new to Python and JSON, how would I replace the username and password fields with my own data, stored in variables?
You haven't specified an endpoint in your POST request, for example:
https://authserver.mojang.com/authenticate
The root of the website probably does not accept POST requests
http://wiki.vg/Authentication#Authenticate
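
Putting this together, a small sketch (with placeholder credentials, not the original poster's) that posts to the /authenticate endpoint and fills the username and password fields from variables:

import requests

username = 'your_email@example.com'  # placeholder credentials
password = 'your_password'

payload = {
    'agent': {'name': 'Minecraft', 'version': 1},
    'username': username,
    'password': password,
    'clientToken': '',
}

# json= serializes the dict and sets the Content-Type header automatically.
r = requests.post('https://authserver.mojang.com/authenticate', json=payload)
print(r.status_code, r.text)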
