I need to connect to an API on azurewebsites using Python to download a JSON file automatically.
I can access the website and download a JSON file manually.
I tried to connect using:
url = 'https://myplatformconnectiot.azurewebsites.net/swagger/index.html'
r = requests.get(
    url,
    headers={"Authentication": " application/json"},
    cookies={},
    auth=('user#example.com', 'password'),
)
r.json()
Do you know how to download a JSON file from an azurewebsites app using Python?
You need to use the Kudu console URL to download a particular file from a web app.
Using the Python code below, you can download the file from the web app:
import json
import requests
url = 'https://<webappname>.scm.azurewebsites.net/wwwroot/wwwroot/css/site.css'
r = requests.get(url, auth=('username', 'urlpassword'))
with open(r'C:\Users\name.json', 'wb') as f:
    f.write(r.content)
username & password will be from publish profile credentials file of a web app. you can get the publish profile credentials from portal as shown in below image
Kudu is the engine behind a number of features in Azure App Service related to source-control-based deployment, as well as other deployment methods like Dropbox and OneDrive sync.
For more information about Kudu, refer to the documentation.
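As a variant, here is a minimal sketch that uses Kudu's VFS API to fetch a JSON file instead; the file name data.json and its location under site/wwwroot are assumptions for illustration:

import requests

# Kudu's VFS API exposes the app's file system under /api/vfs/.
# The username/password are the publish profile credentials, as above.
url = 'https://<webappname>.scm.azurewebsites.net/api/vfs/site/wwwroot/data.json'
r = requests.get(url, auth=('username', 'password'))
r.raise_for_status()

with open('data.json', 'wb') as f:
    f.write(r.content)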
I want to send requests to an app deployed on Cloud Run with Python, but inside the test file I don't want to hardcode the endpoint. How can I get the URL of the deployed app from a Python script inside the test file, so that I can send requests to that URL?
You can use gcloud to fetch the URL of the service like this:
gcloud run services describe SERVICE_NAME \
  --format="value(status.url)"
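If gcloud is available in the environment where the tests run, a minimal sketch of invoking it from Python (the service name and region are placeholders):

import subprocess

def get_service_url(service_name, region):
    # Ask gcloud for only the status.url field of the deployed service.
    result = subprocess.run(
        ['gcloud', 'run', 'services', 'describe', service_name,
         '--region', region, '--format=value(status.url)'],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(get_service_url('my-service', 'us-central1'))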
For a pure Python approach, you can use Google's API Client Library for Cloud Run.
To my knowledge, there isn't a Cloud Client Library for it.
The method is namespaces.services.get, and it is documented in the APIs Explorer under namespaces.services.get.
One important fact with Cloud Run is that the API endpoint differs by Cloud Run region.
See service endpoint. You will need to override the client configuration (using ClientOptions) with the correct (region-specific) api_endpoint.
The following is from memory! I've not run this code, but it should be (nearly) correct:
import google.auth
import os

from googleapiclient import discovery
from google.api_core.client_options import ClientOptions

creds, project = google.auth.default()

REGION = os.getenv("REGION")
SERVICE = os.getenv("SERVICE")

# Must override the default run.googleapis.com endpoint
# with the region-specific endpoint
api_endpoint = "https://{region}-run.googleapis.com".format(region=REGION)

options = ClientOptions(api_endpoint=api_endpoint)

service = discovery.build(
    "run", "v1",
    client_options=options,
    credentials=creds,
)

name = "namespaces/{namespace}/services/{service}".format(
    namespace=project,
    service=SERVICE,
)

rqst = service.namespaces().services().get(name=name)
resp = rqst.execute()
The resp will be a Service object, and you can grab the url from its status (ServiceStatus).
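For example, assuming the response follows the Knative Service schema, the URL can be read like this:

# The service URL lives in the status block of the Service resource.
url = resp['status']['url']
print(url)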
I am trying to make a Python API so that a user can upload a PDF file and the API sends it directly to Azure Storage. What I found is that I must have a local file path, i.e.:
from azure.storage.blob import ContainerClient

container_client = ContainerClient.from_connection_string(conn_str=conn_str, container_name='mycontainer')
with open('mylocalpath/myfile.pdf', "rb") as data:
    container_client.upload_blob(name='myblockblob.pdf', data=data)
Another solution is to store the file on the VM first and then use that local path, but I don't want to fill up my VM.
If you want to upload directly from the client side to an Azure Storage blob instead of receiving the file in your API, you can use a shared access signature (SAS) for your storage account. In your API, write a function that generates a pre-signed URL using the shared access signature service and returns that URL to your client; the client can then upload the file to your blob via that URL.
To generate the URL, you can follow the code below:
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

account_name = "<accountname>"
account_key = "<accountkey>"  # get this from the Access keys section of the storage account.
container_name = "<containername>"

def getpushurl(filename):
    # Create a short-lived SAS token that only allows writing this blob.
    token = generate_blob_sas(
        account_name=account_name,
        container_name=container_name,
        account_key=account_key,
        permission=BlobSasPermissions(write=True),
        expiry=datetime.utcnow() + timedelta(seconds=100),
        blob_name=filename,
    )
    url = f"https://{account_name}.blob.core.windows.net/{container_name}/{filename}?{token}"
    return url

pdfpushurl = getpushurl("demo.pdf")
print(pdfpushurl)
After generating this URL, give it to the client. The client can then send the file to the received URL with a PUT request, and it will be uploaded directly to Azure Storage.
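For illustration, a minimal client-side sketch using the requests library (the local file name demo.pdf is an assumption):

import requests

with open('demo.pdf', 'rb') as data:
    # The x-ms-blob-type header is required when creating a blob
    # with a plain PUT against a SAS URL.
    resp = requests.put(
        pdfpushurl,
        data=data,
        headers={'x-ms-blob-type': 'BlockBlob'},
    )
resp.raise_for_status()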
You can generate a SAS token with write permission for your users, so that they can upload .pdf files directly from their side without storing them on the server. For details, please see my previous post here.
Try the code below to generate a SAS token with container write permission:
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, ContainerSasPermissions, generate_container_sas

storage_connection_string = ''
container_name = ''

block_blob_service = BlobServiceClient.from_connection_string(storage_connection_string)
container_client = block_blob_service.get_container_client(container_name)

sasToken = generate_container_sas(
    account_name=container_client.account_name,
    container_name=container_client.container_name,
    account_key=container_client.credential.account_key,
    # grant write permission only
    permission=ContainerSasPermissions(write=True),
    start=datetime.utcnow() - timedelta(minutes=1),
    # 1 hour valid time
    expiry=datetime.utcnow() + timedelta(hours=1),
)

print(sasToken)
After you have returned this SAS token to your user, see the official guide on uploading files from an HTML page; it should be helpful if you are developing a web app.
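Alternatively, the client can pass the container SAS token to the SDK directly; a minimal sketch (the account, container, and blob names are placeholders):

from azure.storage.blob import BlobClient

# The container SAS token generated above serves as the credential.
blob = BlobClient(
    account_url='https://<accountname>.blob.core.windows.net',
    container_name='<containername>',
    blob_name='upload.pdf',
    credential=sasToken,
)
with open('upload.pdf', 'rb') as data:
    blob.upload_blob(data)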
I am working to create a pipeline with the Spotify API that logs my streaming history. I am planning to automate it by uploading it as a Lambda function and scheduling it to run every few hours. I have everything mostly in order, except that on the first run the API requires web authentication. Here is my code:
import spotipy
import spotipy.util as util
import urllib3
un = USERNAME
scope = 'user-read-recently-played'
cid = CLIENT_ID
csid = CLIENT_SECRET_ID
redr = r'http://localhost:8888/callback/'
token = util.prompt_for_user_token(un, scope, cid, csid, redr)
When this is run for the first time, this message pops up:
User authentication requires interaction with your
web browser. Once you enter your credentials and
give authorization, you will be redirected to
a url. Paste that url you were directed to to
complete the authorization.
Opened <LINK HERE> in your browser
Enter the URL you were redirected to:
And then I have to copy the link from my browser into that space. I can get the URL that I need to paste using urllib3:
req_adr = ADDRESS_IT_OPENS_IN_BROWSER
http = urllib3.PoolManager()
resp = http.request('GET', req_adr)
redrurl = resp.geturl()
But I don't know how to pass it into the input prompt that util.prompt_for_user_token produces.
Any suggestions would be very welcome.
So it turns out there is a workaround. You can run it one time on a local machine, and that generates a file called .cache-USERNAME. If you include that file in your deployment package, you don't have to copy/paste the URL, and it can be automated with a Lambda function in AWS.
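For example, a minimal sketch that points spotipy at the bundled cache file explicitly via cache_path (the /var/task path is an assumption about where Lambda unpacks the deployment package):

import spotipy
import spotipy.util as util

un = USERNAME
scope = 'user-read-recently-played'

# Reuse the token cache shipped with the deployment package so that
# no browser interaction is needed inside Lambda.
token = util.prompt_for_user_token(
    un, scope, CLIENT_ID, CLIENT_SECRET_ID,
    'http://localhost:8888/callback/',
    cache_path=f'/var/task/.cache-{un}',
)
sp = spotipy.Spotify(auth=token)
recent = sp.current_user_recently_played()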
I'm trying to use Python to download an Excel file that is hosted on a SharePoint site that is part of the Microsoft Azure platform. I tried to retrieve the file with requests ('HTTP for Humans') by doing:
r = requests.get(url)
But my requests keep getting denied (even though r.status_code returns 200) because I need to log in to a valid account before trying to access the file. I do have a valid account and password, and I can access my account and the Excel file via the browser. But I have no idea how to deal with the Azure authentication procedure. And apparently it is not as easy as just doing:
auth = HTTPBasicAuth('email#somewhere.com', 'pass1234')
r = requests.post(url=url, auth=auth)
It is my understanding that there's a flow to follow, but when I try to read the documentation, it just goes over my head (I'm an engineer, and I don't have experience with this kind of environment).
Can someone guide me in the process of how to login and download the file?
Try the Office365-REST-Python-Client (O365) library. It supports SharePoint Online authentication and allows you to download/upload a file, as demonstrated below:
# Import paths as in recent versions of the library.
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File

ctx_auth = AuthenticationContext(url)
ctx_auth.acquire_token_for_user(username, password)
ctx = ClientContext(url, ctx_auth)

response = File.open_binary(ctx, "/Shared Documents/User Guide.docx")
with open("./User Guide.docx", "wb") as local_file:
    local_file.write(response.content)
You can install the latest version using the command below:
pip install git+https://github.com/vgrem/Office365-REST-Python-Client.git
For further reference, please see the repository linked above.
Hope it helps.
I can't seem to get the EMBED-API Server-side Authorization demo to work:
https://ga-dev-tools.appspot.com/embed-api/server-side-authorization/
In the demo it says the following:
Once the library is installed you can add the following python module
to your project and invoke the get_access_token() method to get an
access token that you can use to authorize the Embed API.
# service-account.py

from oauth2client.service_account import ServiceAccountCredentials

# The scope for the OAuth2 request.
SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'

# The location of the key file with the key data.
KEY_FILEPATH = 'path/to/json-key.json'

# Defines a method to get an access token from the ServiceAccount object.
def get_access_token():
    return ServiceAccountCredentials.from_json_keyfile_name(
        KEY_FILEPATH, SCOPE).get_access_token().access_token
I've successfully done all the previous steps, but this one I just can't get my head around. Where do I put this code? It seems as if it should be put in a .py file.
Can someone please help?
It depends on your implementation, but basically you want to run your service account code on your server, and have the access token passed to your client application so it can make authorized requests from the browser.
The whole app is open sourced and you can see where the service account code is in the source code.
As in the demo, if you are using Django or App Engine, it is easy to add Python server code to your site that returns the token and fills in the value in the template code.
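For example, a minimal sketch that serves the token from a small Flask endpoint (Flask is an assumption here; the demo itself runs on App Engine):

from flask import Flask, jsonify
from oauth2client.service_account import ServiceAccountCredentials

SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
KEY_FILEPATH = 'path/to/json-key.json'

app = Flask(__name__)

def get_access_token():
    return ServiceAccountCredentials.from_json_keyfile_name(
        KEY_FILEPATH, SCOPE).get_access_token().access_token

@app.route('/token')
def token():
    # The client-side Embed API code fetches this value and passes it
    # to gapi.analytics.auth.authorize().
    return jsonify(token=get_access_token())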
Add the demo code above to a service-account.py file and upload it to your server using FTP. I saved the code using Dreamweaver, updated the path, and added the following line at the end of the service-account.py file:
print(get_access_token())
Then upload the .json key file to the same directory and run the command python service-account.py to get the access_token.