Firestore in Python filling fields with random data - python

I have a question about Python and Firestore. I have connected Python to Firestore and I can add data, but when I try to fill in a field in my database using a variable, all I get in the Firebase console is a random string.
My code
import urllib
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
link = "example.com/test1"
f = urllib.urlopen(link)
uw = f.read()
linky = "Example.com/test.txt"
fa = urllib.urlopen(linky)
uname = fa.read()
print(uname)
unname=uname
# Use a service account
uname="hello"
cred = credentials.Certificate('key.json')
firebase_admin.initialize_app(cred)
#uw="hello"
db = firestore.client()
doc_ref = db.collection(u'treasure').document(uw)
doc_ref.set({
    u'found_by': uname
    #u'last': u'Lovelace',
    #u'born': 1815
})
#print(uuname)
print(uname)
Here is my Firebase console. Sorry, I don't have the needed reputation to embed an image, but here is the link.
Note: I am trying to load the data to put into the database from a server; however, I have verified this is not the issue. The first urllib request gets the name of the document, and that part works well. The second request is where I get the field data, but the problem is not with loading it off a server. Thanks!

The data is being base64-encoded.
>>> import base64
>>> s = 'aGVsbG8='
>>> base64.b64decode(s)
b'hello'
I don't use Firebase/Firestore but if the data is being automatically base64-encoded on write then most likely it will be automatically decoded when read.
If you need to manually decode it, note that base64.b64decode returns bytes, so you need to call .decode() on the bytes to get a str.
This comment on GitHub suggests that adding the u prefix to string literals will make Firestore encode as UTF-8 instead of base64.
So in your example:
uname = u'Hello'
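Since a urllib response's read() returns bytes in Python 3, another thing worth trying (a minimal sketch, reusing the fa, db and uw objects from the question's code) is decoding the server response to a str before writing it, so Firestore stores text rather than a base64-encoded blob:
# Sketch only: decode the bytes from the server response so the field value is a str.
uname = fa.read().decode('utf-8')

doc_ref = db.collection(u'treasure').document(uw)
doc_ref.set({
    u'found_by': uname  # stored as a plain UTF-8 string in the console
})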

Related

JSON in PUT request to Python Flask app fails to decode

I'm working on building an SQLite3 database to store aperture flux measurements of stars in any given astronomical image. I have one file (star_database.py) containing a Flask app running with two routes that handle selecting from and inserting into that database. There is a separate script (aperture_photometry.py) that will call those routes when incoming images need photometric processing. The crux of my problem is in the interaction between the function to insert data into the SQLite database and the aperture photometry script tasked with passing data to the Flask app. Here are the relevant functions:
# Function in aperture_photometry.py that turns Star object data into a dictionary
# and passes it to the Flask app
from astropy.io import fits
import requests
import json

def measure_photometry(image, bandpass):
    df = fits.getdata(image)
    hf = fits.getheader(image, ext=1)
    date_obs = hf['DATE-OBS']
    ra, dec = get_center_coords(hf)
    response = requests.get(f'http://127.0.0.1:5000/select/{ra}/{dec}/').content
    star_json = json.loads(response)
    if star_json is not None:
        stars = json_to_stars(star_json)
        get_raw_flux(df, df*0.01, hf, stars)  # Error array will be changed
        star_json = []
        # Critical section
        for star in stars:
            star_json.append({"star_id": star.star_id, "source_id": star.source_id,
                              "flux": star.flux, "flux_err": star.flux_err})
        response = requests.put('http://127.0.0.1:5000/insert/',
                                data={'stars': star_json, 'bandpass': bandpass, 'dateobs': date_obs})
        print(response.content)
    else:
        print("ERROR: Could not get star objects from database.")
# Function in star_database.py that handles incoming flux measurements from the
# aperture photometry script, and then inserts data into the SQLite database
from flask import Flask, request

@app.route('/insert/', methods=['PUT'])
def insert_flux_rows():
    conn = create_connection(database)
    if conn is not None:
        c = conn.cursor()
        body = request.get_json(force=True)
        print(body)
        # More comes after this, but it is not relevant to the question
After running the Flask app and calling aperture_photometry.py, the PUT request's response.content line prints a 400 Bad Request error with the message Failed to decode JSON object: Expecting value: line 1 column 1 (char 0). I think the problem is either in the way I have tried to format the star object data as it is passed into the PUT request in measure_photometry, or, if not, there is something wrong with doing body = request.get_json(force=True). It is also worth mentioning that the print(body) statement in insert_flux_rows does not print anything to stdout. For all intents and purposes the two scripts must remain separate, i.e. I cannot combine them and remove the requests dependency.
I would really appreciate some help with this, as I have been trying to fix it all day.
Based on the top answer from this question, it seems like the data variable in your measure_photometry function may not be properly convertible to JSON.
You should try to test it out (maybe run json.dumps on it) to see if a more detailed error message is provided. There's also the jsonschema package.
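As a rough sketch of that test (assuming the same payload shape as in the question), serialize the dict yourself and, if that succeeds, send it explicitly as JSON. Note that requests only JSON-encodes the body when you use the json= argument; data= with a dict form-encodes it, which would explain Flask failing to decode a JSON object:
import json
import requests

# Hypothetical payload mirroring the question's PUT request.
payload = {'stars': star_json, 'bandpass': bandpass, 'dateobs': date_obs}

# json.dumps raises TypeError here if anything in the payload is not JSON-serializable.
body = json.dumps(payload)

# Sending with json= lets Flask's request.get_json() see an actual JSON body.
response = requests.put('http://127.0.0.1:5000/insert/', json=payload)
print(response.content)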

Setting ["GOOGLE_APPLICATION_CREDENTIALS"] from a dict rather than file path

I'm trying to set the environment variable from a dict but am getting an error when connecting.
#service account pulls in airflow variable that contains the json dict with service_account credentials
service_account = Variable.get('google_cloud_credentials')
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]=str(service_account)
error
PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details></Error>'
When reading, if I instead point to a file, there are no issues:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/file/path/service_account.json"
I'm wondering if there is a way to convert the dict object to an os.PathLike object? I don't want to store the JSON file on the container, and the Airflow/Google documentation isn't clear at all.
Python's StringIO lets you create a file-like object backed by a string, but that won't help here because the consumer of this environment variable expects a file path, not a file-like object. I don't think it's possible to do what you're trying to do. Is there a reason you don't want to just put the credentials in a file?
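If writing a file at runtime is acceptable, one workaround along those lines (just a sketch, not from the original answer) is to dump the dict to a temporary file and point the variable at that path:
import json
import os
import tempfile

# Hypothetical: sa_dict is the parsed credentials dict
# (json.loads the string from Variable.get first if needed).
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    json.dump(sa_dict, f)
    key_path = f.name

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = key_path  # now a real file path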
There is a way to do it, but the Google documentation is terrible. So I wrote a GitHub gist to document the recipe that I and a colleague (Imee Cuison) developed to use the key securely. Sample code below:
import json
from google.oauth2.service_account import Credentials
from google.cloud import secretmanager

def access_secret(project_id: str, secret_id: str, version_id: str = "latest") -> str:
    """Return the secret in string format"""
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(name=name)
    # Return the decoded payload.
    return response.payload.data.decode('UTF-8')

def get_credentials_from_token(token: str) -> Credentials:
    """Given an authentication token, return a Credentials object"""
    credential_dict = json.loads(token)
    return Credentials.from_service_account_info(credential_dict)

credentials_secret = access_secret("my_project", "my_secret")
creds = get_credentials_from_token(credentials_secret)
# And now you can use the `creds` Credentials object to authenticate to an API
Putting the service account key into the repository is not good practice. As a best practice, you should use authentication propagated from the default Google auth within your application.
For instance, using Google Cloud Kubernetes you can use the following Python code:
import google.auth
import google.auth.transport.requests
from google.cloud.container_v1 import ClusterManagerClient

credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform', ])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)

Trouble with Google Application Credentials

Hi there, first and foremost, this is my first time using Google's services. I'm trying to develop an app with the Google AutoML Vision API (custom model). I have already built a custom model and generated the API keys (I hope I did it correctly, though).
After many attempts at developing via Ionic & Android and failing to connect to the API, I have now taken the prediction modelling code provided in Python (on Google Colab), and even with that I still get an error message saying Could not automatically determine credentials. I'm not sure where I have gone wrong. Please help. Dying.
#installing & importing libraries
!pip3 install google-cloud-automl

import sys
from google.cloud import automl_v1beta1
from google.cloud.automl_v1beta1.proto import service_pb2

#import key.json file generated by GOOGLE_APPLICATION_CREDENTIALS
from google.colab import files
credentials = files.upload()

#explicit function given by Google accounts
#https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-python
def explicit():
    from google.cloud import storage
    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(credentials)
    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)

#import image for prediction
from google.colab import files
YOUR_LOCAL_IMAGE_FILE = files.upload()

#prediction code from modelling
def get_prediction(content, project_id, model_id):
    prediction_client = automl_v1beta1.PredictionServiceClient()
    name = 'projects/{}/locations/uscentral1/models/{}'.format(project_id, model_id)
    payload = {'image': {'image_bytes': content}}
    params = {}
    request = prediction_client.predict(name, payload, params)
    return request  # waits till request is returned

#print function substitute with values
content = YOUR_LOCAL_IMAGE_FILE
project_id = "REDACTED_PROJECT_ID"
model_id = "REDACTED_MODEL_ID"
print(get_prediction(content, project_id, model_id))
Error message when running the last line of code:
credentials = files.upload()
storage_client = storage.Client.from_service_account_json(credentials)
These two lines are the issue, I think.
The first one actually loads the contents of the file, but the second one expects a path to a file, not the contents of one.
Let's tackle the first line first:
I see that just passing the credentials you get after calling credentials = files.upload() will not work, as explained in the docs for it. Done this way, credentials doesn't actually contain the contents of the file directly, but rather a dictionary mapping filenames to contents.
Assuming you're only uploading the one credentials file, you can get its contents like this (taken from this SO answer):
from google.colab import files
uploaded = files.upload()
credentials_as_string = uploaded[list(uploaded.keys())[0]]
So now we actually have the contents of the uploaded file as a string; the next step is to create an actual credentials object out of it.
This answer on GitHub shows how to create a credentials object from a string converted to JSON.
import json
from google.oauth2 import service_account
credentials_as_dict = json.loads(credentials_as_string)
credentials = service_account.Credentials.from_service_account_info(credentials_as_dict)
Finally we can create the storage client object using this credentials object:
storage_client = storage.Client(credentials=credentials)
Please note I've not tested this though, so please give it a go and see if it actually works.
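And since the question is ultimately about the AutoML prediction client rather than Storage, the same credentials object should, I believe, be usable there in the same way (untested sketch):
# Hypothetical: pass the explicit credentials instead of relying on the environment.
prediction_client = automl_v1beta1.PredictionServiceClient(credentials=credentials)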

GSpread pass credentials from Python not JSON

I'm using gspread and trying to pass the content of my JSON file (Google API service account credentials) as a Python dictionary in my script. I'm trying not to carry a JSON file around wherever I take my script.
I get the following error when I try to pass a dictionary instead of a JSON file on the following line:
credentials = ServiceAccountCredentials.from_json_keyfile_name(auth_gdrive(), scope)
TypeError: expected str, bytes or os.PathLike object, not set
# auth_gdrive() returns a dictionary like this:
def auth_gdrive():
    dic = {
        "type": "miauuuuuu",
        "pass": "miauuuu"
    }
I'm not allowed to show what's really in the dict.
Since I wanted to pass the credentials details from within my application, and not from a JSON file, I couldn't use:
ServiceAccountCredentials.from_json_keyfile_name()
from_json_keyfile_name() expects a JSON file. But looking into the docs I found the following:
ServiceAccountCredentials.from_json_keyfile_dict()
This expects a dict object, which is all I needed.
Link:
https://oauth2client.readthedocs.io/en/latest/source/oauth2client.service_account.html
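A minimal sketch of how that might look (assuming auth_gdrive() returns the full service account dict and using the same scope as the original line):
import gspread
from oauth2client.service_account import ServiceAccountCredentials

scope = ['https://spreadsheets.google.com/feeds']

# from_json_keyfile_dict takes the parsed key dict instead of a file path.
credentials = ServiceAccountCredentials.from_json_keyfile_dict(auth_gdrive(), scope)
gc = gspread.authorize(credentials)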
Thank you everyone again
Additional tip: I am using the Google API to read Google Drive files, but also using AWS. I stored the service account credentials in AWS Secrets Manager so that I did not need a file. I copy-pasted each key-value pair from the downloaded JSON file into AWS Secrets Manager, but I kept getting the error:
Traceback (most recent call last):
  File "./copy_from_google_drive_to_s3.py", line 301, in <module>
    sys.exit(main())
  File "./copy_from_google_drive_to_s3.py", line 96, in main
    keyfile_dict=keyDict, scopes=scopes,
  File "/usr/local/lib/python3.7/site-packages/oauth2client/service_account.py", line 253, in from_json_keyfile_dict
    revoke_uri=revoke_uri)
  File "/usr/local/lib/python3.7/site-packages/oauth2client/service_account.py", line 185, in _from_parsed_json_keyfile
    signer = crypt.Signer.from_string(private_key_pkcs8_pem)
  File "/usr/local/lib/python3.7/site-packages/oauth2client/_pure_python_crypt.py", line 182, in from_string
    raise ValueError('No key could be detected.')
ValueError: No key could be detected.
I had to convert the string representation of newline back into newline:
import json
from oauth2client.service_account import ServiceAccountCredentials

# Last part of using AWS Secrets Manager, returns json string.
sa_creds = get_secret_value_response['SecretString']

# Convert JSON string to dict.
sa_creds = json.loads(sa_creds)

# In the private key, the 1-char newline got replaced with the 2-char '\n'
sa_creds['private_key'] = sa_creds['private_key'].replace('\\n', '\n')

credentials = ServiceAccountCredentials.from_json_keyfile_dict(
    keyfile_dict=sa_creds,
    scopes=['https://www.googleapis.com/auth/drive.readonly', ]
)
My solution is close to Bob McCormick's. The difference is that it uses the credentials method for service account info instead of a JSON file.
Here I'm using Google's Secret Manager to import the service account information so that my code can connect to a different GCP project:
import json

from google.cloud import secretmanager
from google.oauth2 import service_account

# Create the Secret Manager client.
secret_client = secretmanager.SecretManagerServiceClient()

# Build the resource name of the secret version.
name = f"projects/{project_id}/secrets/{secret_name}/versions/latest"

# Access the secret version.
secret_response = secret_client.access_secret_version(request={"name": name})

# Getting the secret data
secret_payload = json.loads(secret_response.payload.data.decode("UTF-8"))

# Applying the credentials as INFO instead of JSON
credentials = service_account.Credentials.from_service_account_info(
    secret_payload,
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
Since you're using ServiceAccountCredentials, I'm assuming you're using OAuth2 for authorization. You can skip the json file by using oauth2client.SignedJwtAssertionCredentials to create the appropriate credentials object and pass that to gspread.authorize.
import gspread
from oauth2client.client import SignedJwtAssertionCredentials
credentials = SignedJwtAssertionCredentials(service_account_name, private_key.encode(),
                                            ['https://spreadsheets.google.com/feeds'])
gclient = gspread.authorize(credentials)
UPDATE: It appears that oauth2client.SignedJwtAssertionCredentials has been deprecated in favor of oauth2client.service_account.ServiceAccountCredentials, which only supports json and p12 keyfiles.

How to log in to the sandbox using the Salesforce Bulk API

I'm trying to use Python to connect to the Salesforce Bulk API. However, I don't want to test my code on real Salesforce data; I want to test with my sandbox. However, I don't know how to connect to the sandbox only... I've tried adding sandbox=True but it doesn't work...
import salesforce_bulk

bulk = salesforce_bulk.SalesforceBulk(username="username", password="password")
The advice here may be a bit dated. I was able to get bulk uploads working by combining the Salesforce (simple_salesforce) and SalesforceBulk libraries. Note the domain that I am passing to the API, as well as the sandbox name that needs to be appended to the username.
from simple_salesforce import Salesforce
from salesforce_bulk import SalesforceBulk
from salesforce_bulk.util import IteratorBytesIO
from urllib.parse import urlparse
from time import sleep
import json

USER = "user@domain.com.<sandbox_name>"
PASS = "pass"
SEC_TOKEN = "token"
DOMAIN = "<domain>--<sandbox_name>.<instance>.my"

sf = Salesforce(username=USER, password=PASS, security_token=SEC_TOKEN, domain=DOMAIN)
bulk = SalesforceBulk(sessionId=sf.session_id, host=sf.sf_instance)

job = bulk.create_query_job("table", contentType='JSON')
batch = bulk.query(job, "select Id,LastName from table limit 5000")
bulk.close_job(job)

while not bulk.is_batch_done(batch):
    sleep(10)

for result in bulk.get_all_results_for_query_batch(batch):
    result = json.load(IteratorBytesIO(result))
    for row in result:
        print(row)
Old question but I had the same problem today, so maybe this will help someone.
This is a complete hack, but it works - probably a better hack would be to do this using salesforce-oauth-request (which does have a "sandbox=True" option), but I was logging in via beatbox anyway, so tried this first.
The gist is that you log in to the sandbox using beatbox (which lets you specify your serverUrl) and then use that sessionId and instance_url to log in through salesforce_bulk.
import beatbox
from salesforce_bulk import SalesforceBulk
# log in to sandbox using beatbox
service = beatbox.PythonClient()
service.serverUrl = 'https://test.salesforce.com/services/Soap/u/20.0'
user = 'user@user.com'
password = 'secret'
token = '12345'
service.login(user, password+token)
# the _Client_serverUrl has the instance url + some
# SOAP stuff, so we need to strip that off
groups = service._Client__serverUrl.split('/')
instance_url = '/'.join(groups[:3])
# now we can use the instance_url and sessionId to
# log into Salesforce through SalesforceBulk
bulk = sfdc_bulk_connect(instance_url, service.sessionId)
Have you checked that the package is installed?
This library will use the salesforce-oauth-request package (which you must install) to run the Salesforce OAuth2 web flow and return an access token.
And for the password, did you include the security token or not?
