GCP authentication for Python with loaded JSON, not a file - python

To authenticate a pipeline in my Python project I'm using this:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/the/json/key.json"
How can I do the same thing but with the JSON already loaded into memory (without using a path to the JSON file)?

There are a few ways to authenticate against a project in Google Cloud.
Please take a look at: https://googleapis.dev/python/google-api-core/latest/auth.html as well as the best practices described here: https://cloud.google.com/docs/authentication/best-practices-applications
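If the key is already loaded as a dict (for example, read from a secret manager or an environment variable), google-auth can build credentials directly from it, and the client libraries accept a credentials object. A minimal sketch, assuming the google-auth and google-cloud-storage packages; the SERVICE_ACCOUNT_JSON environment variable is a hypothetical source of the raw JSON text:

import json
import os

from google.cloud import storage
from google.oauth2 import service_account

# The key JSON is assumed to already be in memory (hypothetical env variable).
key_json = os.environ["SERVICE_ACCOUNT_JSON"]
key_dict = json.loads(key_json)

# Build credentials from the in-memory dict instead of a file path.
credentials = service_account.Credentials.from_service_account_info(key_dict)

# Pass the credentials explicitly to any Google Cloud client library.
client = storage.Client(credentials=credentials, project=key_dict["project_id"])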

Related

Accessing a file from Google Drive within Python

I am working on a machine learning task and have saved a Keras model that I want to deploy to GitHub (so that I can host a web demo using Streamlit and/or Flask). However, the model file is so large (> 1 GB) that I cannot upload it to GitHub for free.
My thought process regarding an alternative is to upload it to a cloud service such as Google Drive (or Dropbox, Box, etc.) and then use some sort of Python module to access it from there.
So my question is, can I upload a pickle file containing a pickled Keras model to Google Drive and then access that object from a Python script? If so, how would I go about doing so?
Thank you!
I believe you can. You'll need to pip install oauth2client and gspread. To access the data you would need to enable the API manager for your Google Drive and get credentials in the form of a JSON file. Then you would need to share the file with the email address in the credentials, giving it permission. You could then port over the information as you need to; I'm not sure how Keras works, but this would be the first step.
Another important factor is that the Google API is very touchy when it comes to requests that arrive too fast. To overcome this, put sleep commands between each request, but if you do that this method may become far too slow for your idea.
import gspread
from oauth2client.service_account import ServiceAccountCredentials

scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/spreadsheets",
         "https://www.googleapis.com/auth/drive.file", "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name("Your json file here.json", scope)
client = gspread.authorize(creds)
sheet = client.open("your google sheets name or whatever").sheet1  # open the spreadsheet
data = sheet.get_all_records()  # pulls every row as a list of dicts
I understand that you require a way to upload and download large files* from Drive using Python. If I understood your situation correctly, you can achieve your goals easily by using the Drive API, as @TimothyChen commented. First, I highly recommend you follow the Drive API Python Quickstart tutorial to create a working example. Later, you could modify it to use Files.create() and Files.get() to upload/download files as needed. Don't hesitate to ask more questions if you have doubts.
*Please, keep in mind that there is a 5 TB size limit in Drive.
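For reference, a rough sketch of the upload and download calls with the google-api-python-client library. It assumes a service-account JSON key for brevity (credentials from the Quickstart's OAuth flow work the same way), and the file names are placeholders:

import io

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload

# Credentials from a service-account key file (placeholder name).
creds = service_account.Credentials.from_service_account_file(
    "key.json", scopes=["https://www.googleapis.com/auth/drive"])
service = build("drive", "v3", credentials=creds)

# Upload the pickled model; a resumable upload helps with files this large.
media = MediaFileUpload("model.pkl", mimetype="application/octet-stream", resumable=True)
uploaded = service.files().create(
    body={"name": "model.pkl"}, media_body=media, fields="id").execute()
file_id = uploaded["id"]

# Download it again into a local file.
request = service.files().get_media(fileId=file_id)
with io.FileIO("model_downloaded.pkl", "wb") as fh:
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while not done:
        status, done = downloader.next_chunk()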

Uploading CSV files to an Azure container using a SAS URI in Python?

I am trying to upload files to Azure using only the SAS URI. I found ways to do it using C#, but I didn't find a solution using Python. The only Python solution I found is to pass the account name and account key as parameters to BlockBlobService. Here is an example: Upload image to azure blob storage using python, but I am trying to avoid that approach. Is there a specific way to upload CSV files to Azure using only the SAS URI? Thanks for your help :)
If you're using the latest Python blob SDK, azure-storage-blob 12.4.0, then you can use code like the below (please feel free to modify it as per your needs):
from azure.storage.blob import BlobClient

upload_file_path = "d:\\a11.csv"
sas_url = "https://xxx.blob.core.windows.net/test5/a11.csv?sastoken"

client = BlobClient.from_blob_url(sas_url)

with open(upload_file_path, 'rb') as data:
    client.upload_blob(data)

print("**file uploaded**")
This might help:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#upload-blobs-to-a-container
The example there is shown using the Python SDK for Azure Storage.
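If the SAS token you have is scoped to the container rather than to a single blob, a similar pattern works with ContainerClient. A sketch, assuming azure-storage-blob 12.x; the container SAS URL, file path, and blob name are placeholders:

from azure.storage.blob import ContainerClient

# Container-level SAS URL (placeholder); the blob name is supplied at upload time.
container_sas_url = "https://xxx.blob.core.windows.net/test5?sastoken"
container_client = ContainerClient.from_container_url(container_sas_url)

with open("d:\\a11.csv", "rb") as data:
    container_client.upload_blob(name="a11.csv", data=data)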

Using Boto3 to upload a file to Amazon WorkDocs

According to the Amazon WorkDocs SDK page, you can use Boto3 to migrate your content to Amazon WorkDocs. I found the entry for the WorkDocs client in the Boto3 documentation, but every call seems to require an "AuthenticationToken" parameter. The only information I can find on AuthenticationToken is that it is supposed to be an "Amazon WorkDocs authentication token".
Does anyone know what this token is? How do I get one? Are there any code examples of using the WorkDocs client in Boto3?
I am trying to create a simple Python script that will upload a single document to WorkDocs, but there seems to be little to no information on how to do this. I was easily able to write a script that can upload/download files from S3, but this seems like something else entirely.
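For reference, a rough sketch of the usual WorkDocs upload flow with boto3. When the call is made with AWS administrator (IAM) credentials, the AuthenticationToken parameter can be omitted; it is only needed when acting on behalf of a WorkDocs user. The region, folder ID, and file names below are placeholders, and the requests library is used to PUT the bytes to the pre-signed upload URL:

import boto3
import requests

workdocs = boto3.client("workdocs", region_name="us-east-1")

# 1. Ask WorkDocs for an upload URL (ParentFolderId is a placeholder folder ID).
resp = workdocs.initiate_document_version_upload(
    ParentFolderId="YOUR_FOLDER_ID",
    Name="report.pdf",
    ContentType="application/pdf",
)
upload_url = resp["UploadMetadata"]["UploadUrl"]
signed_headers = resp["UploadMetadata"]["SignedHeaders"]
doc_id = resp["Metadata"]["Id"]
version_id = resp["Metadata"]["LatestVersionMetadata"]["Id"]

# 2. PUT the file bytes to the returned URL with the signed headers.
with open("report.pdf", "rb") as f:
    requests.put(upload_url, data=f, headers=signed_headers)

# 3. Mark the version as active so it becomes visible in WorkDocs.
workdocs.update_document_version(
    DocumentId=doc_id, VersionId=version_id, VersionStatus="ACTIVE"
)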

Is it possible to connect to and query a BigQuery table from Google App-Engine (Python) without the OAuth2 authentication dialog?

I'm working on a Google App-Engine project which stores around 100K entities in the Datastore. Since I have to search the string properties of those entities, I have to find an effective way to do it.
After some research I found Google's BigQuery service, which looks perfect for me. I have already imported the entities into BigQuery via the web interface, but I cannot connect to and run a query on BigQuery from the App-Engine code.
My App-Engine project has no web interface. It generates only JSON outputs, which are consumed by mobile applications.
So my question is this: is it possible to connect and run a query from the App-Engine Python code without the OAuth2 authentication dialog?
Yes. Simply use what's known as a "service account" as described here. Then, some simple Python code once you've exported GOOGLE_APPLICATION_CREDENTIALS to point to the credential file:
from google.cloud import bigquery

client = bigquery.Client(project='PROJECT_ID')
for dataset in client.list_datasets():
    do_something_with(dataset)
More info here too.
Just a quick caution: this is case-sensitive, so check the capitalisation of 'Client'.
my_bigquery_client = bigquery.client(project='my_project')
- this fails with the error "TypeError: 'module' object is not callable".
my_bigquery_client = bigquery.Client(project='my_project')
- this works.
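To actually run a search-style query once the client is authenticated, here is a sketch; the credentials can also be passed explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS, and the dataset, table, and column names are placeholders:

from google.cloud import bigquery
from google.oauth2 import service_account

# Explicit service-account credentials (alternative to the env variable).
credentials = service_account.Credentials.from_service_account_file("key.json")
client = bigquery.Client(project="PROJECT_ID", credentials=credentials)

# Placeholder dataset/table/column names for a simple string search.
query = """
    SELECT *
    FROM `PROJECT_ID.my_dataset.my_entities`
    WHERE LOWER(name) LIKE @term
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("term", "STRING", "%foo%")]
)
rows = client.query(query, job_config=job_config).result()
for row in rows:
    print(dict(row))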

Azure storage container API request authentication failing with Django app

I am trying to sync the static files of my Django application to Azure Storage, and I am getting an error when I try to write static files to the storage container while running the manage.py collectstatic command.
The error is: "The MAC signature found in the HTTP request is not the same as any computed signature."
The common cause of this error is mismatched time signatures on the two servers, but that is not the problem in my case.
I am using the django packages django-azure-storage and azure-sdk-for-python to format the request.
Here is a gist of the http request and responses generated when trying to connect to the azure storage container.
Is there anything that seems wrong from these outputs?
I have downloaded the Django packages and the Azure SDK following your description. I have written a sample to reproduce this issue, but it works fine on my side. Below are the steps that I followed:
Set up the environment: Python 2.7 and Azure SDK (0.10.0).
1. Trying to use django-azure-storage
It was a bit frustrating that I couldn't import it into my project successfully, since this is the first time I have used it; usually I leverage the Azure Python SDK directly. This time I copied storage.py as the AzureStorage class into my project.
# need to import the Django ContentFile type
from django.core.files.base import ContentFile
# import the AzureStorage class from my project
from DjangoWP.AzureStorage import AzureStorage

# my local image path
file_path = "local.png"

# upload the local file to my Azure storage blob container
def djangorplugin():
    azurestorage = AzureStorage(myaccount, mykey, "mycontainer")
    stream = open(file_path, 'rb')
    data = stream.read()
    # need to convert the file to a ContentFile
    azurestorage.save("Testfile1.png", ContentFile(data))
2. You may want to know how to use the Azure SDK for Python directly; see the code snippet below for reference:
from azure.storage.blobservice import BlobService

# my local image path
file_path = "local.png"

def upload():
    blob_service = BlobService(account_name=myaccount, account_key=mykey)
    stream = open(file_path, 'rb')
    data = stream.read()
    blob_service.put_blob("mycontainer", "local.png", data, "BlockBlob")
If you have any further concerns, please feel free to let us know.
I was incorrectly using the setting DEFAULT_FILE_STORAGE instead of STATICFILES_STORAGE to override the storage backend used while syncing static files. Changing this setting solved this problem.
I was also encountering problems when trying to use django-storages, whose documentation says to use the DEFAULT_FILE_STORAGE setting. However, using STATICFILES_STORAGE with that package also fixed the issue I was having.
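For reference, a minimal settings sketch of the fix, assuming the django-storages Azure backend; the account name, key, and container values are placeholders:

# settings.py (sketch)
# Use STATICFILES_STORAGE so collectstatic writes to the Azure backend;
# DEFAULT_FILE_STORAGE only affects user-uploaded media files.
STATICFILES_STORAGE = "storages.backends.azure_storage.AzureStorage"

AZURE_ACCOUNT_NAME = "myaccount"   # placeholder
AZURE_ACCOUNT_KEY = "mykey"        # placeholder
AZURE_CONTAINER = "static"         # placeholder container name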
