I'm new to using the Google Cloud suite and I'm trying to build a simple Python script that calls the Vision API for document text extraction on a set of files. To do so, I have followed the instructions found here:
https://cloud.google.com/vision/docs/ocr#vision_text_detection-drest
Currently my python script looks something like this:
key = <my_api_key>
url = 'https://vision.googleapis.com/v1/images:annotate?key=' + key
access_token = <my_access_token>
headers = {'Authorization': 'Bearer ' + access_token,
           'Content-Type': 'application/json; charset=utf-8'}
where access_token is determined by
$ gcloud auth application-default print-access-token
(Normally, when running with curl in bash, I would replace access_token with $(gcloud auth ...).) Next,
import base64
import json
import requests

with open(file, 'rb') as f:
    encoding = base64.b64encode(f.read()).decode('ascii')

request = {'requests': [{'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
                         'image': {'content': encoding},
                         'imageContext': {'languageHints': ['en']}}]}

with open('request.json', 'w') as r:
    r.write(json.dumps(request))

with open('request.json') as d:
    response = requests.post(url=url, data=d, headers=headers)
i.e. I convert 'file' to base64, create the request.json file, then POST it.
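(As an aside, the intermediate file isn't strictly necessary; requests can serialize the dict itself. A minimal equivalent, assuming the same url and headers as above:)
# Equivalent POST without writing request.json to disk;
# requests serializes the dict to JSON itself.
response = requests.post(url, json=request, headers=headers)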
I'm not very familiar with authentication, so here is my question: at the moment the only credentials I have are, from what I can tell, an API key and a service account. I used the service account .json file to set GOOGLE_APPLICATION_CREDENTIALS, which allows me to call
$ gcloud auth application-default print-access-token
The only issue I'm facing is that the token expires. So I have to go back to bash, set GOOGLE_APPLICATION_CREDENTIALS, call the above command again, then copy and paste the token into my code. Is there an out-of-the-box solution that gives me a static token, or a static way to run my script?
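A minimal sketch of one way to avoid the copy-paste cycle, assuming the google-auth package is installed and GOOGLE_APPLICATION_CREDENTIALS is set:
import google.auth
import google.auth.transport.requests

# Load Application Default Credentials (reads GOOGLE_APPLICATION_CREDENTIALS)
# and refresh them to mint a fresh access token on demand.
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())
access_token = credentials.token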
Thanks to those who commented - it seems the easiest way to authenticate is to point the client library at the service account .json file directly, rather than going through GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import vision
service_account_json = <my_service_account_json>
client = vision.ImageAnnotatorClient.from_service_account_json(service_account_json)
Parameters can still be passed in by sending the request as a dict (equivalent to the usual JSON).
def annotate(filename):
    with open(filename, 'rb') as f:
        content = f.read()  # raw bytes; the client library handles the base64 encoding
    request = {'image': {'content': content},
               'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
               'image_context': {'language_hints': ['en']}}
    response = client.annotate_image(request=request)
    return response
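For example (the file name here is a hypothetical placeholder):
# Print the full detected text from the response.
result = annotate('scanned_page.png')
print(result.full_text_annotation.text)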
Related
I have an Openshift 3.11 cluster with the default installation of Prometheus and Alertmanager.
I want to write a Python script to scrape the Alertmanager API endpoint so that I can parse the data and pass it on to a third-party monitoring tool for our ops team.
My problem is that to get to the API I need to authenticate against OAuth. How can I do this within Python?
I don't know if this is any different for Alertmanager compared with the Reddit API, but when I set up a bot for Reddit, I first had to register it on their OAuth page. Then I used the codes it gave me there, together with the rest of my user data (login and password, plus the application's name), to request an access token. With that token I could contact the API endpoint.
Here is the function I made to request the access token from Reddit:
import requests
import requests.auth

def AuthRequest():
    client_auth = requests.auth.HTTPBasicAuth('Application ID code', 'OAuth secret code')
    post_data = {"grant_type": "password", "username": "Your_Username", "password": "Your_Password"}
    headers = {"User-Agent": "Name_Of_Application"}
    response = requests.post("https://www.reddit.com/api/v1/access_token",
                             auth=client_auth, data=post_data, headers=headers)
    return response.json()
And here is the code that contacts the endpoint and takes the data from it:
import requests

# 'auth' is the dict returned by AuthRequest() above
auth = AuthRequest()
headers = {"Authorization": auth['token_type'] + " " + auth['access_token'],
           "User-Agent": "Your_Application_Name"}
mentions = requests.get("https://oauth.reddit.com/message/unread.json", headers=headers).json()
Note that 'auth' here is simply the JSON response from the token request. I hope that helps; I don't really know how this differs for Alertmanager, as I've never had to use it.
I found the fix for me.
I needed to create a service account
oc create serviceaccount <serviceaccount name> -n <your-namespace>
Then create a cluster role binding for it
oc create clusterrolebinding <name for your role> \
--clusterrole=cluster-monitoring-view \
--serviceaccount=<your-namespace>:<serviceaccount name>
Get a token from the service account and then use it as the Bearer token in your requests (a Python sketch follows below)
oc sa get-token <serviceaccount name> -n <your-namespace>
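With that token in hand, the scrape itself is plain requests. A minimal sketch, where the route hostname is an assumption (use the Alertmanager route exposed in your cluster) and verify=False is only a stop-gap for self-signed certs:
import requests

# Hypothetical values: substitute your Alertmanager route and the token
# printed by `oc sa get-token`.
TOKEN = '<serviceaccount token>'
ALERTMANAGER = 'https://alertmanager-main-openshift-monitoring.apps.example.com'

headers = {'Authorization': 'Bearer ' + TOKEN}
resp = requests.get(ALERTMANAGER + '/api/v1/alerts', headers=headers, verify=False)
for alert in resp.json()['data']:
    print(alert['labels'].get('alertname'), alert['status']['state'])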
Is it possible to generate a downloadable link for a private file uploaded to Google Drive?
I have tried the Files API and generated the 'webContentLink', but it is accessible only to the owner and the shared users.
(Expecting a public link that can be shared with anyone)
def file_info_drive(access_token, file_id):
    headers = {'Authorization': 'Bearer ' + access_token, 'content-type': 'application/json'}
    response = requests.get('https://www.googleapis.com/drive/v2/files/{file_id}'.format(file_id=file_id),
                            headers=headers)
    response = response.json()
    link = response['webContentLink']
    return link
You want to let anyone download the file using webContentLink.
You want to use Drive API v2.
You want to achieve this using the 'requests' module with Python.
You have already been able to upload and download the file using the Drive API.
If my understanding is correct, how about this modification?
Modification points:
In order to let anyone download the file using webContentLink, the file must be shared publicly.
In this modified script, the file is publicly shared with the permission {'role': 'reader', 'type': 'anyone', 'withLink': True}. With this, anyone who knows the URL can download the file.
Modified script:
When your script is modified, it becomes as follows.
def file_info_drive(access_token, file_id):
    headers = {'Authorization': 'Bearer ' + access_token, 'content-type': 'application/json'}
    # Share the file publicly first. After this, anyone with the link can download it.
    payload = {'role': 'reader', 'type': 'anyone', 'withLink': True}
    requests.post('https://www.googleapis.com/drive/v2/files/{file_id}/permissions'.format(file_id=file_id),
                  json=payload, headers=headers)
    response = requests.get('https://www.googleapis.com/drive/v2/files/{file_id}'.format(file_id=file_id),
                            headers=headers)
    response = response.json()
    link = response['webContentLink']
    return link
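For example (both values here are hypothetical placeholders):
# Hypothetical usage: the token and file ID come from your own OAuth flow and Drive.
access_token = '###'
file_id = '###'
print(file_info_drive(access_token, file_id))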
Note:
In this case the POST method is used, so if a scope error occurs, please add https://www.googleapis.com/auth/drive to your scopes.
Reference:
Permissions: insert
If I misunderstood your question and this was not the direction you want, I apologize.
I am trying to create an RFC in Cherwell using the REST API in Python. I tried it first in Swagger UI and got it working there; I am able to create an RFC successfully. Then, following that curl request, I tried the same thing in Python with the requests module and got a 401. I found out why I am getting a 401: in Authorization I am using a Bearer token, which is temporary and lives only for 10 minutes. If I make a request after 10 minutes I get a 401. Bearer is a compulsory field; I can't make a request without it. I tried to pass a username and password instead of the Bearer token, but that didn't work. Below is my request:
import json
import requests

with open(r'C:\Cherwell\payload.json') as file:  # raw string so backslashes aren't escapes
    payload = json.load(file)

header = {"Authorization": "Bearer XXXXXXXX"}
r = requests.post("https://URL/CherwellAPI/api/V1/savebusinessobject?api_key=XXXX-XXXX-XXXX-XXXX",
                  auth=('user', 'pass'), headers=header,
                  data=json.dumps(payload))
print(r)
It would be great if anyone who has done this before could help. Please advise!
Appreciate any help!
Found this solution that I used to address a similar problem. It's a function that requests a token from /CherwellAPI/token and returns a properly formatted Bearer token. You then pass this token along in API requests as the Authorization header; the value should look like 'Bearer <access_token>'.
import json
import requests

configFile = 'config.json'
with open(configFile) as cf:
    config_data = json.load(cf)

def getCherwellToken():
    params = {'apikey': config_data['cherwell']['client_id']}
    data = {'grant_type': 'password',
            'client_id': config_data['cherwell']['client_id'],
            'username': config_data['cherwell']['username'],
            'password': config_data['cherwell']['password']}
    url = 'https://.cherwellondemand.com/CherwellAPI/token'  # instance subdomain omitted
    # The API key goes in the query string; the credentials go in the form body.
    session = requests.post(url=url, data=data, params=params)
    if not session:  # falsy on 4xx/5xx responses
        return None
    token = json.loads(session.text)
    return 'Bearer ' + token['access_token']
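For example, the token can then be attached to the save call from the question (the payload and URL below are placeholders):
# Hypothetical usage: attach the Bearer token to the save call from the question.
payload = {}  # the business object payload, as in the question's payload.json
headers = {'Authorization': getCherwellToken(),
           'Content-Type': 'application/json'}
r = requests.post('https://.cherwellondemand.com/CherwellAPI/api/V1/savebusinessobject',
                  params={'api_key': config_data['cherwell']['client_id']},
                  headers=headers, data=json.dumps(payload))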
Using another call, Get Token, you can get an access token and use that to request the creation of a new ticket. This worked for me.
I'm trying to make a Python webapp write to the Firebase DB using the HTTP API (I'm using the new version of Firebase presented at Google I/O 2016).
My understanding so far is that the specific type of write I'd like to accomplish is made with a POST request to a URL of this type:
https://my-project-id.firebaseio.com/{path-to-resource}.json
What I'm missing is the auth part: if I got it correctly, a JWT should be passed in the HTTP Authorization header as Authorization: Bearer {token}.
So I created a service account, downloaded its private key and used it to generate the JWT, added it to the request headers, and the request successfully wrote to the Firebase DB.
Now the JWT has expired and any similar request to the Firebase DB fails.
Of course I should generate a new token, but here is the thing: I wasn't expecting to handle token generation and refresh myself. Most HTTP APIs I'm used to require just a static API key to be passed in the request, so my webapps could be kept relatively simple by adding the static API key string to the request.
If I have to take care of token generation and expiration, the webapp logic needs to become more complex (because I'd have to store the token, check whether it is still valid, and generate a new one when it isn't), or I could just generate a new token for every request (but does this really make sense?).
I'd like to know if there's a best practice to follow in this respect or if I'm missing something from the documentation regarding this topic.
Thanks,
Marco
ADDENDUM
This is the code I'm currently running:
import requests
import json
from oauth2client.service_account import ServiceAccountCredentials

_BASE_URL = 'https://my-app-id.firebaseio.com'
_SCOPES = [
    'https://www.googleapis.com/auth/userinfo.email',
    'https://www.googleapis.com/auth/firebase.database'
]

def _get_credentials():
    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        'my_service_account_key.json', scopes=_SCOPES)
    return credentials.get_access_token().access_token

def post_object(title, alert):
    url = _BASE_URL + '/path/to/write/to.json'
    headers = {
        'Authorization': 'Bearer ' + _get_credentials(),
        'Content-Type': 'application/json'
    }
    payload = {
        'title': title,
        'message': alert
    }
    return requests.post(url, data=json.dumps(payload), headers=headers)
Currently for every request a new JWT is generated. It doesn't seem optimal to me. Is it possible to generate a token that doesn't expire?
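One way I can think of to avoid minting a token per request is to cache the token and reuse it until shortly before expiry; a rough sketch of what I mean (my own idea, not from the Firebase docs):
import time

_cached = {'token': None, 'expires_at': 0}

def _get_credentials():
    # Reuse the cached token while it is still valid (60 s safety margin).
    if time.time() < _cached['expires_at'] - 60:
        return _cached['token']
    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        'my_service_account_key.json', scopes=_SCOPES)
    info = credentials.get_access_token()  # has .access_token and .expires_in
    _cached['token'] = info.access_token
    _cached['expires_at'] = time.time() + info.expires_in
    return _cached['token']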
Thanks for the code example. I got it working better by using the credentials.authorize function, which creates an authenticated wrapper for Http.
from oauth2client.service_account import ServiceAccountCredentials
from httplib2 import Http
import json

_KEY_FILE_PATH = 'my_service_account_key.json'  # path to your service account key
_BASE_URL = 'https://my-app-id.firebaseio.com'
_SCOPES = [
    'https://www.googleapis.com/auth/userinfo.email',
    'https://www.googleapis.com/auth/firebase.database'
]

# Get the credentials to make an authorized call to Firebase
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    _KEY_FILE_PATH, scopes=_SCOPES)

# Wrap the http in the credentials. All subsequent calls are authenticated,
# and tokens are refreshed automatically when they expire.
http_auth = credentials.authorize(Http())

def post_object(path, objectToSave):
    url = _BASE_URL + path
    resp, content = http_auth.request(
        uri=url,
        method='POST',
        headers={'Content-Type': 'application/json'},
        body=json.dumps(objectToSave),
    )
    return content

objectToPost = {
    'title': "title",
    'message': "alert"
}
print(post_object('/path/to/write/to.json', objectToPost))
I have some code where I am trying to authenticate against Azure's Resource Manager REST API.
import json
import requests

tenant_id = "TENANT_ID"
app_id = "CLIENT_ID"
password = "APP_SECRET"

token_endpoint = 'http://login.microsoftonline.com/%s/oauth2/token' % tenant_id
management_uri = 'https://management.core.windows.net/'

payload = {'grant_type': 'client_credentials',
           'client_id': app_id,
           'client_secret': password
           }

auth_response = requests.post(url=token_endpoint, data=payload)
print(auth_response.status_code)
print(auth_response.reason)
This returns:
200
OK
However, when I print auth_response.content or auth_response.text, I get back an HTML error page with a 400 code and an error message.
HTTP Error Code: 400
Sorry, but we’re having trouble signing you in.
We received a bad request.
I am able to get back the correct information using Postman, however, with the same URI and payload. I used the "Generate Code" option in Postman to export my request as a Python requests script and tried running that, but I get the same errors.
Anybody have any idea why this is happening?
Just modify your token_endpoint to use the HTTPS protocol, e.g.:
token_endpoint = 'https://login.microsoftonline.com/%s/oauth2/token' % tenant_id.
You can refer to https://msdn.microsoft.com/en-us/library/azure/dn645543.aspx for more details.
Meanwhile, you can leverage the Microsoft Azure Active Directory Authentication Library (ADAL) for Python to acquire the access token with ease.
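A minimal sketch with ADAL (assuming pip install adal and the same tenant_id, app_id, and password as in the question):
import adal

# Acquire a token for the ARM resource via the client_credentials flow.
authority_url = 'https://login.microsoftonline.com/' + tenant_id
context = adal.AuthenticationContext(authority_url)
token = context.acquire_token_with_client_credentials(
    'https://management.core.windows.net/',  # the resource to access
    app_id,
    password)
print(token['accessToken'])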
You should use HTTPS instead of HTTP for token_endpoint, and you should specify API version too. Here is what you should use.
token_endpoint = 'https://login.microsoftonline.com/%s/oauth2/token?api-version=1.0' % tenant_id
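For completeness, a hedged sketch of the corrected call; note that the v1 endpoint commonly also expects a resource parameter for the client_credentials grant (management_uri from the question would serve), so it is included here:
# The question's payload with a 'resource' field added for the v1 endpoint.
payload = {'grant_type': 'client_credentials',
           'client_id': app_id,
           'client_secret': password,
           'resource': management_uri}

auth_response = requests.post(url=token_endpoint, data=payload)
print(auth_response.json()['access_token'])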