Python Requests API sequential run

I'm fairly new to Python. I was able to run a request, grab the session token, and save it to a variable. Now I'm trying to pass that session to a new request, but I'm not sure how to run the API requests sequentially, one right after the other.
This is my request:
url = "https://1.1.1.1/jsonrpc"
payload = json.dumps(
{
"session": 1,
"id": 1,
"method": "exec",
"params": [
{
"url": "sys/login/user",
"data": [
{
"user": "admin",
"passwd": "password"
}
]
}
]
}
)
response = requests.request("POST", url, data=payload, verify=False)
s = (response.json())
print (s['session'])
Now I want to pass the 's' variable to a new API request in the same .py file, but I'm not sure how to run them right after each other.
url = "https://1.1.1.1/jsonrpc"
payload = json.dumps({
"session": s
"id": 1,
"method": "set",
"params": [
{
"url": "/dvmdb/adom",
"data": [
{
"name": "NEW_ADOM"
}
]
}
]
})
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)

If you are using s in two separate functions, you can declare the s variable outside those two functions and modify it via the global keyword:
s = None

def get_session():
    global s
    ...
    s = response.json()['session']
Then you can use s in another function.
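For the original two-request flow, here is a minimal sketch of running them back to back in one script, assuming (as in the question) that the login response carries the token under the 'session' key and that certificate verification is deliberately disabled for this lab IP:

import json
import requests

url = "https://1.1.1.1/jsonrpc"

# 1. Log in and capture the session token
login_payload = json.dumps({
    "id": 1,
    "method": "exec",
    "params": [{"url": "sys/login/user",
                "data": [{"user": "admin", "passwd": "password"}]}]
})
login_response = requests.post(url, data=login_payload, verify=False)
session_token = login_response.json()['session']

# 2. Reuse the token in the very next request
adom_payload = json.dumps({
    "session": session_token,
    "id": 2,
    "method": "set",
    "params": [{"url": "/dvmdb/adom",
                "data": [{"name": "NEW_ADOM"}]}]
})
adom_response = requests.post(url, data=adom_payload, verify=False)
print(adom_response.text)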

Related

Netsuite Rest API returns "No Content" status (204) when completed successfully

I use the requests library. How can this be the default behavior? Is there any way to return the ID of the item created?
def create_sales_order():
    # client, headers and url_account are assumed to be set up elsewhere (authenticated session)
    url = f"https://{url_account}.suitetalk.api.netsuite.com/services/rest/record/v1/salesOrder"
    data = {
        "entity": {
            "id": "000"
        },
        "item": {
            "items": [
                {
                    "item": {
                        "id": 25
                    },
                    "quantity": 3,
                    "amount": 120
                }
            ]
        },
        "memo": "give me money",
        "Department": "109"
    }
    body = json.dumps(data)
    response = client.post(url=url, headers=headers, data=body)
    print(response.text)
OK, so it turns out that the 204 empty response carries a link to the created item in its headers (Location is the header name), which is sufficient to make another GET request and have all the info returned.
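A minimal sketch of that follow-up, assuming client is the same authenticated session used for the POST and that the response exposes the Location header as described above:

response = client.post(url=url, headers=headers, data=body)
if response.status_code == 204:
    # the 204 body is empty; the link to the new record is in the response headers
    new_record_url = response.headers["Location"]
    created = client.get(new_record_url, headers=headers)
    print(created.json())  # full details of the created sales order, including its id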

400 Bad Request POST request

I'm writing an API application in Python, using Postman and a Bearer token. I already receive the token and can do some GET requests with a successful response.
But when inserting a record I get a 400 Bad Request error. This is the code I'm using to add the record:
import requests
from requests.structures import CaseInsensitiveDict

def add_identity(token, accountid, newIdentity):
    end_point = f"https://identityservice-demo.clearid.io/api/v2/accounts/{accountid}/identities/"
    headers = CaseInsensitiveDict()
    headers["Content-type"] = "application/json; charset=utf-8"
    headers["Authorization"] = f"Bearer {token}"
    response = requests.request("POST", end_point, data=newIdentity, headers=headers)
    print(f"{response.reason} - {response.status_code}")
The variable newIdentity has the following data:
nID = {
    "privateData": {
        "birthday": "1985-30-11T18:23:27.955Z",
        "employeeNumber": "99999999",
        "secondaryEmail": "",
        "cityOfResidence": "Wakanda",
        "stateOfResidence": "Florida",
        "zipCode": "102837",
        "phoneNumberPrimary": "(999)-999-999)",
        "phoneNumberSecondary": "+5-(999)-999-9999"
    },
    "companyData": {
        "approvers": [
            {
                "approverId": ""
            }
        ],
        "supervisorName": "Roger Rabbit",
        "departmentName": "Presidency",
        "jobTitle": "President",
        "siteId": "string",
        "companyName": "ACME Inc",
        "workerTypeDescription": "",
        "workerTypeCode": ""
    },
    "systemData": {
        "hasExtendedTime": "true",
        "activationDateUtc": "2022-03-16T18:23:27.955Z",
        "expirationDateUtc": "2022-03-16T18:23:27.955Z",
        "externalId": "999999",
        "externalSyncTimeUtc": "2022-03-16T18:23:27.955Z",
        "provisioningAttributes": [
            {
                "name": ""
            }
        ],
        "customFields": [
            {
                "customFieldType": "string",
                "customFieldName": "SSNO",
                "customFieldValue": "9999999"
            }
        ]
    },
    "nationalIdentities": [
        {
            "nationalIdentityNumber": "0914356777",
            "name": "Passport",
            "issuer": "Wakanda"
        }
    ],
    "description": "1st Record ever",
    "status": "Active",
    "firstName": "Bruce",
    "lastName": "Wayne",
    "middleName": "Covid",
    "displayName": "Bruce Wayne",
    "countryCode": "WK",
    "email": "bruce.wayne#wakanda.com",
    "creationOnBehalf": "ACME"
}
What could solve the problem?
The Swagger for the API is:
https://identityservice-demo.clearid.io/swagger/index.html#/Identities/get_api_v2_accounts__accountId__identities
Thanks for your help in advance
Since you set a Content-type of application/json, data has to be a JSON string rather than a dict: import json and pass data=json.dumps(newIdentity) (or simply use the json= argument).
If it keeps returning 400, check carefully that all the parameters are accepted by the API by recreating the request in Postman or any other request editor, and if the API has a web interface, check what reason it gives for the 400.
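A minimal sketch of that fix inside the question's add_identity function, assuming the endpoint and headers from the question; the json= argument serializes the dict and sets the Content-Type header for you:

import requests

def add_identity(token, accountid, newIdentity):
    end_point = f"https://identityservice-demo.clearid.io/api/v2/accounts/{accountid}/identities/"
    headers = {"Authorization": f"Bearer {token}"}
    # json= serializes newIdentity to a JSON string and sets Content-Type: application/json
    response = requests.post(end_point, json=newIdentity, headers=headers)
    print(f"{response.reason} - {response.status_code}")
    return response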

How to get the latest commit date of the file along with content details from GitHub API call

I have used the below GitHub API and I'm able to get the file details for the path.
https://github.***.com/api/v3/repos/exampleowner-Management/examplerepo/contents/Compile/Teradata/Tables?access_token=*****
The result of this API call is:
[
  {
    "name": ".DS_Store",
    "path": "Compile/Tables/test",
    "sha": "1cef8efa8694678e3b7ab230a6a891afa1a1996d",
    "size": 8196,
    "url": "***",
    "html_url": "***",
    "git_url": "***",
    "download_url": "***",
    "type": "file",
    "_links": {
      "self": "***",
      "git": "***",
      "html": "***"
    }
  }
]
I need to get the commit date details for the sha in this response.
"sha": "1cef8efa8694678e3b7ab230a6a891afa1a1996d"
I have tried using another API, which is:
https://github.***.com/api/v3/repos/exampleowner-Management/examplerepo/commits/1cef8efa8694678e3b7ab230a6a891afa1a1996d?access_token=*****
but the response of this API for this sha is:
{
  "message": "Not Found",
  "documentation_url": "https://developer.github.com/enterprise/2.14/v3/repos/commits/#get-a-single-commit"
}
How can we get commit date details along with GitHub content details by using API calls?
Finally got the expected result by using GraphQL. Here is the complete code:
import requests

def run_query(query):  # A simple function to use requests.post to make the API call. Note the json= section.
    try:
        request = requests.post('https://api.github.***.com/graphql', json={'query': query}, headers=headers)
        return request.json()
    except requests.exceptions.RequestException:
        return '404'

# ownerVal, repoVal, branchVal, folderVal, data['name'] and access_token are set earlier in the script
# (data comes from the contents API response shown above)
query = """
{
  repository(owner: \""""+ownerVal+"""\", name: \""""+repoVal+"""\") {
    object(expression: \""""+branchVal+"""\") {
      ... on Commit {
        blame(path: \""""+folderVal+"/"+data['name']+"""\") {
          ranges {
            commit {
              committedDate
            }
          }
        }
      }
    }
  }
}
"""
headers = {"Authorization": "Bearer "+access_token}
result = run_query(query)
commit_date = result["data"]["repository"]["object"]["blame"]["ranges"][0]["commit"]["committedDate"]
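For reference, the same date can usually be obtained without GraphQL: the list-commits REST endpoint accepts a path filter, so the most recent commit touching a file can be requested directly. A sketch, assuming the enterprise instance exposes the standard v3 commits endpoint and that the owner, repo, file path and token below are filled in from your own setup:

import requests

api_base = "https://github.***.com/api/v3"
commits_url = f"{api_base}/repos/exampleowner-Management/examplerepo/commits"
file_path = "Compile/Teradata/Tables/.DS_Store"  # the "path" of the file from the contents response
params = {"path": file_path, "per_page": 1}      # only the newest commit for that file
headers = {"Authorization": "token *****"}

resp = requests.get(commits_url, params=params, headers=headers)
latest = resp.json()[0]
print(latest["commit"]["committer"]["date"])  # committed date of the latest commit touching the file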

Creating a JSON post request with python

I am experimenting with the Rapaport Technet API, and want to hit an endpoint which expects the following JSON:
{
  "request": {
    "header": {
      "username": "my_username",
      "password": "my_password"
    },
    "body": {}
  }
}
Code:
url = 'https://technet.rapaport.com:449/HTTP/JSON/Prices/GetPriceChanges.aspx'
headers = {'username': 'my_username', 'password': 'my_password'}
r = requests.post(url, headers)
I get this response:
{
  "response": {
    "header": {
      "error_code": 1001,
      "error_message": "Invalid format"
    },
    "body": {}
  }
}
Any idea what the problem could be?
According to this example from the Rapaport Technet API docs, that whole JSON is sent as the data of the POST request. So simply do the same, as shown in the Requests docs:
json_data = {
    "request": {
        "header": {
            "username": "my_username",
            "password": "my_password"
        },
        "body": {}
    }
}
r = requests.post(url, json=json_data)
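Passing json= makes requests serialize the dict and set the Content-Type: application/json header automatically. The original call sent the credentials as form data rather than the nested JSON structure the API expects, which is why it answered with "Invalid format".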

Making a request to a RESTful API using Python

I have a RESTful API that I have exposed using an implementation of Elasticsearch on an EC2 instance to index a corpus of content. I can query the search by running the following from my terminal (MacOSX):
curl -XGET 'http://ES_search_demo.com/document/record/_search?pretty=true' -d '{
  "query": {
    "bool": {
      "must": [
        {
          "text": {
            "record.document": "SOME_JOURNAL"
          }
        },
        {
          "text": {
            "record.articleTitle": "farmers"
          }
        }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 50,
  "sort": [],
  "facets": {}
}'
How do I turn the above into an API request using python/requests or python/urllib2 (not sure which one to go for; I have been using urllib2, but hear that requests is better)? Do I pass it as a header or otherwise?
Using requests:
import requests

url = 'http://ES_search_demo.com/document/record/_search?pretty=true'
data = '''{
  "query": {
    "bool": {
      "must": [
        {
          "text": {
            "record.document": "SOME_JOURNAL"
          }
        },
        {
          "text": {
            "record.articleTitle": "farmers"
          }
        }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 50,
  "sort": [],
  "facets": {}
}'''
response = requests.post(url, data=data)
Depending on what kind of response your API returns, you will then probably want to look at response.text or response.json() (or possibly inspect response.status_code first). See the quickstart docs here, especially this section.
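As a small follow-up sketch of that inspection step, assuming the standard Elasticsearch response layout where matching documents sit under hits.hits (the hit["_source"] field is the usual Elasticsearch location of the stored document, not something defined in the question):

if response.status_code == 200:
    results = response.json()
    for hit in results["hits"]["hits"]:  # each hit is one matching record
        print(hit["_source"])            # the stored document itself
else:
    print("Search failed:", response.status_code, response.text)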
Using requests and json makes it simple.
Call the API.
Assuming the API returns JSON, parse the JSON object into a Python dict using the json.loads function.
Loop through the dict to extract information.
The requests module also provides useful helpers for checking success and failure:
if(Response.ok): will help you determine if your API call was successful (response code 200).
Response.raise_for_status() will raise an exception for any HTTP error code returned by the API.
Below is sample code for making such API calls. It can also be found on GitHub. The code assumes that the API uses digest authentication; you can skip this or use another appropriate authentication module to authenticate the client invoking the API.
#Python 2.7.6
#RestfulClient.py

import requests
from requests.auth import HTTPDigestAuth
import json

# Replace with the correct URL
url = "http://api_url"

# It is good practice not to hardcode the credentials, so ask the user to enter them at runtime
myResponse = requests.get(url, auth=HTTPDigestAuth(raw_input("username: "), raw_input("Password: ")), verify=True)
#print (myResponse.status_code)

# For a successful API call, the response code will be 200 (OK)
if(myResponse.ok):
    # Loading the response data into a dict variable
    # json.loads takes in only binary or string variables so using content to fetch binary content
    # Loads (Load String) takes a JSON string and converts it into a Python data structure (dict or list, depending on the JSON)
    jData = json.loads(myResponse.content)

    print("The response contains {0} properties".format(len(jData)))
    print("\n")
    for key in jData:
        print key + " : " + jData[key]
else:
    # If the response code is not ok (200), print the resulting HTTP error code with description
    myResponse.raise_for_status()
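Note that on current versions of requests the same dict can be obtained directly with myResponse.json() instead of json.loads(myResponse.content); the digest-auth and raise_for_status pattern is unchanged under Python 3, with input() in place of raw_input() and print() used as a function.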
Below is a program to execute a REST API call in Python:
import requests

url = 'https://url'
data = '{ "platform": { "login": { "userName": "name", "password": "pwd" } } }'
response = requests.post(url, data=data, headers={"Content-Type": "application/json"})
print(response)
sid = response.json()['platform']['login']['sessionId']  # extract the session id from the response
print(response.text)
print(sid)
So you want to pass data in the body of a GET request; it would be better to do that in a POST call, but you can achieve it with requests either way.
Raw Request
GET http://ES_search_demo.com/document/record/_search?pretty=true HTTP/1.1
Host: ES_search_demo.com
Content-Length: 183
User-Agent: python-requests/2.9.0
Connection: keep-alive
Accept: */*
Accept-Encoding: gzip, deflate

{
  "query": {
    "bool": {
      "must": [
        {
          "text": {
            "record.document": "SOME_JOURNAL"
          }
        },
        {
          "text": {
            "record.articleTitle": "farmers"
          }
        }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 50,
  "sort": [],
  "facets": {}
}
Sample call with Requests
import requests

def consumeGETRequestSync():
    data = '''{
      "query": {
        "bool": {
          "must": [
            {
              "text": {
                "record.document": "SOME_JOURNAL"
              }
            },
            {
              "text": {
                "record.articleTitle": "farmers"
              }
            }
          ],
          "must_not": [],
          "should": []
        }
      },
      "from": 0,
      "size": 50,
      "sort": [],
      "facets": {}
    }'''
    url = 'http://ES_search_demo.com/document/record/_search?pretty=true'
    headers = {"Accept": "application/json"}
    # call the search service with the query as the request body
    response = requests.get(url, data=data, headers=headers)
    print("code: " + str(response.status_code))
    print("******************")
    print("headers: " + str(response.headers))
    print("******************")
    print("content: " + str(response.text))

consumeGETRequestSync()
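One caveat with the sketch above: requests will happily send a body with a GET, but not every server or proxy honours it, which is why the earlier answers switch to requests.post for the same Elasticsearch query.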
