I'm attempting to run a local web server that has remote API access to a remote datastore, using the remote_api_stub method ConfigureRemoteApiForOAuth.
I have been using the following Google doc for reference but find it rather sparse:
https://cloud.google.com/appengine/docs/python/tools/remoteapi
I believe I'm missing the authentication bit, but can't find a concrete resource to guide me. What would be the easiest way, given the following code example, to access a remote datastore while running dev_appserver.py?
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub

class Topic(ndb.Model):
    created_by = ndb.StringProperty()
    subject = ndb.StringProperty()

    @classmethod
    def query_by_creator(cls, creator):
        return cls.query(cls.created_by == creator)

class MainPage(webapp2.RequestHandler):
    def get(self):
        remote_api_stub.ConfigureRemoteApiForOAuth(
            '#####.appspot.com',
            '/_ah/remote_api')
        topics = Topic.query_by_creator('bill')
        self.response.headers['Content-Type'] = 'text/html'
        self.response.out.write('<html><body>')
        self.response.out.write('<h1>TOPIC SUBJECTS:</h1>')
        for topic in topics.fetch(10):
            self.response.out.write('<h3>' + topic.subject + '</h3>')
        self.response.out.write('</body></html>')

app = webapp2.WSGIApplication([
    ('/', MainPage)
], debug=True)
This gets asked a lot, simply because you can't use App Engine's libraries outside of the SDK. However, there is also an easier way to do it from within the App Engine SDK.
I would use gcloud for this. Here's how to set it up:
If you want to interact with Google Cloud services inside or outside of the App Engine environment, you can use gcloud (https://googlecloudplatform.github.io/gcloud-python/stable/) to do so.
You need a service account on your application, as well as its downloaded JSON credentials file. You do this in the console under the authentication section: create the service account, then download its key. Call it client_secret.json or something.
With those, once you install the proper packages for gcloud with pip, you'll be able to make queries as well as write data.
Here is an example of authenticating yourself to use the library:
import os

from gcloud import datastore

# the location of the JSON credentials file on your local machine
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/location/client_secret.json"

# project ID from the Developers Console
projectID = "THE_ID_OF_YOUR_PROJECT"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID

client = datastore.Client(dataset_id=projectID)
Once that's done, you can make queries like this:
results = client.query(kind='Model').fetch()
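Writing data works the same way once the client is authenticated. Here's a minimal sketch (the kind name and property values are just placeholders, and it assumes the client object created above):
from gcloud import datastore

# partial key; the datastore assigns an ID on save
key = client.key('Model')
entity = datastore.Entity(key=key)
entity['name'] = 'example'
client.put(entity)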
It's actually super easy. Anyhow, that's how I would do it! Cheers.
I'm using the Firebase Admin Python SDK to read/write data to Firestore. I've created a service account with the necessary permissions and saved the credentials .json file in the source code (I know this isn't the most secure, but I want to get the thing running before fixing security issues). When testing the integration locally, it works flawlessly. But after deploying to GCP, where our service is hosted, calls to Firestore don't work properly and retry for a while before throwing 503 Deadline Exceeded errors. However, SSHing into a GKE pod and calling the SDK manually works without issues. It's just when the SDK is used in code flow that causes problems.
Our service runs in Google Kubernetes Engine in one project (call it Project A), but the Firestore database is in another project (call it Project B). The service account that I'm trying to use is owned by Project B, so it should still be able to access the database even when it is initialized from inside Project A.
Here's how I'm initiating the SDK:
from firebase_admin import get_app
from firebase_admin import initialize_app
from firebase_admin.credentials import Certificate
from firebase_admin.firestore import client
from google.api_core.exceptions import AlreadyExists
credentials = Certificate("/path/to/credentials.json")

try:
    app = initialize_app(credential=credentials, name="app_name")
except ValueError:
    app = get_app(name="app_name")

client = client(app=app)
Another wrinkle is that another part of our code is able to successfully use the same service account to produce Firebase Access Tokens. The successful code is:
import firebase_admin
from firebase_admin import auth as firebase_admin_auth
if "app_name" in firebase_admin._apps:
# Already initialized
app = firebase_admin.get_app(name="app_name")
else:
# Initialize
credentials = firebase_admin.credentials.Certificate("/path/to/credentials.json")
app = firebase_admin.initialize_app(credential=credentials, name="app_name")
firebase_token = firebase_admin_auth.create_custom_token(
uid="id-of-user",
developer_claims={"admin": is_admin, "site_slugs": read_write_site_slugs},
app=app,
)
Any help appreciated.
Turns out that the problem here was a conflict between gunicorn's gevent workers and the SDK's use of gRPC. Something related to sockets. I found the solution here. I added the following code to our Django app's settings:
import grpc.experimental.gevent as grpc_gevent

# Patch gRPC so it cooperates with gevent's monkey-patched sockets
grpc_gevent.init_gevent()
I want to send requests to an app deployed on Cloud Run with Python, but inside the test file I don't want to hardcode the endpoint. How can I get the URL of the deployed app from a Python script inside the test file, so that I can send requests to that URL?
You can use gcloud to fetch the URL of the service like this:
gcloud run services describe SERVICE_NAME \
    --format="value(status.url)"
In a pure Python way, you can use Google's API Client Library for Run; to my knowledge, there isn't a Cloud Client Library for it.
The method is namespaces.services.get, and it is documented in the APIs Explorer under namespaces.services.get.
One important fact with Cloud Run is that the API endpoint differs by Cloud Run region.
See service endpoint. You will need to override the client configuration (using ClientOptions) with the correct (region-specific) api_endpoint.
The following is from-memory! I've not run this code but it should be (nearly) correct:
import google.auth
import os
from googleapiclient import discovery
from google.api_core.client_options import ClientOptions
creds, project = google.auth.default()
REGION = os.getenv("REGION")
SERVICE = os.getenv("SERVICE")
# Must override the default run.googleapis.com endpoint
# with region-specific endpoint
api_endpoint = "https://{region}-run.googleapis.com".format(
region=REGION
)
options = ClientOptions(
api_endpoint=api_endpoint
)
service = discovery.build("run", "v1",
client_options=options,
credentials=creds
)
name = "namespaces/{namespace}/services/{service}".format(
namespace=project,
service=SERVICE
)
rqst = service.namespaces().services().get(name=name)
resp = rqst.execute()
The resp will be a Service resource, and you can grab the URL from its ServiceStatus field.
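For example, assuming the request above succeeds, the URL can be read from the response dict like this:
# the service URL lives under status.url in the Service resource
url = resp["status"]["url"]
print(url)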
I have a Flask app which looks like this:
from flask import Flask
import boto3

application = Flask(__name__)

@application.route("/")
def home():
    return "Server successfully loaded"

@application.route("/app")
def frontend_from_aws():
    s3 = boto3.resource("s3")
    frontend = s3.Object(bucket_name="my_bucket", key="frontend.html")
    return frontend.get()["Body"].read()

if __name__ == "__main__":
    application.debug = True
    application.run()
Everything works perfectly when I test locally, but when I deploy the app to Elastic Beanstalk the second endpoint gives an internal server error:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I didn't see anything alarming in the logs, though I'm not completely sure I'd know where to look. Any ideas?
Update: As a test, I moved frontend.html to a different bucket and modified the "/app" endpoint accordingly, and mysteriously it worked fine. So apparently this has something to do with the settings for the original bucket. Does anybody know what the right settings might be?
I found a quick and dirty solution: IAM policies (AWS console -> Identity & Access Management -> Policies). There was an existing policy called AmazonS3FullAccess, and after I attached it to the aws-elasticbeanstalk-ec2-role my app was able to read and write to S3 at will. I'm guessing that more subtle access management can be achieved by creating custom roles and policies, but this was good enough for my purposes.
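If you'd rather script it than click through the console, a minimal sketch with boto3 (assuming the role name matches the default Elastic Beanstalk instance role) would look like this:
import boto3

iam = boto3.client("iam")

# attach the managed AmazonS3FullAccess policy to the instance role
iam.attach_role_policy(
    RoleName="aws-elasticbeanstalk-ec2-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)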
Did you set up your AWS credentials on your Elastic Beanstalk instance as they are on your local machine (i.e. in ~/.aws/credentials)?
I'm trying to use the Google remote API on my App Engine project to upload local files to my app's default cloud storage bucket.
I have configured my app.yaml to have remote API on, and I'm able to access my bucket and upload/access files from it. I run my local Python console and try to write to the bucket with the following code:
from google.appengine.ext.remote_api import remote_api_stub
from google.appengine.api import app_identity
import cloudstorage

def auth_func():
    return ('user@gmail.com', '*******')

remote_api_stub.ConfigureRemoteApi('my-app-id', '/_ah/remote_api', auth_func,
                                   'my-app-id.appspot.com')

filename = "/" + app_identity.get_default_gcs_bucket_name() + "/myfile.txt"
gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain',
                             options={'x-goog-meta-foo': 'foo',
                                      'x-goog-meta-bar': 'bar'})
I see the following response:
WARNING:root:suspended generator urlfetch(context.py:1214) raised DownloadError(Unable to fetch URL: http://None/_ah/gcs/my-app-id.appspot.com/myfile.txt)
Notice the
http://None/_ah/gcs.....
I don't think None should be part of the URL. Is there an issue with the GoogleAppEngineCloudStorageClient, v1.9.0.0? I'm also using Google App Engine 1.9.1.
Any ideas?
The Google Cloud Storage client does not respect remote_api_stub and assumes you are running the script locally. Setting
os.environ['SERVER_SOFTWARE'] = 'Development (remote_api)/1.0'
or even
os.environ['SERVER_SOFTWARE'] = ''
will help.
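Putting that together with the question's snippet, a minimal sketch would set the environment variable before opening the file (the app ID, credentials, and filename are the question's placeholders):
import os

from google.appengine.ext.remote_api import remote_api_stub
from google.appengine.api import app_identity
import cloudstorage

def auth_func():
    return ('user@gmail.com', '*******')

remote_api_stub.ConfigureRemoteApi('my-app-id', '/_ah/remote_api', auth_func,
                                   'my-app-id.appspot.com')

# make the GCS client treat this as a remote_api run, not a local stub run
os.environ['SERVER_SOFTWARE'] = 'Development (remote_api)/1.0'

filename = '/' + app_identity.get_default_gcs_bucket_name() + '/myfile.txt'
gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain')
gcs_file.write('hello from the remote API\n')
gcs_file.close()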
Here is the function that checks your environment, from the library's common.py:
def local_run():
    """Whether we should hit GCS dev appserver stub."""
    server_software = os.environ.get('SERVER_SOFTWARE')
    if server_software is None:
        return True
    if 'remote_api' in server_software:
        return False
    if server_software.startswith(('Development', 'testutil')):
        return True
    return False
If I understand correctly, you want to upload a local text file to a specific bucket. I do not think what you're doing will work.
The alternative would be to ditch the remote API and upload the file using the Cloud Storage API directly.
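For instance, a minimal sketch using the standalone google-cloud-storage client (the project, bucket, and file names follow the question's placeholders), which talks to GCS directly and needs no remote API at all:
from google.cloud import storage

# uses Application Default Credentials, e.g. a service-account JSON
# pointed to by GOOGLE_APPLICATION_CREDENTIALS
client = storage.Client(project='my-app-id')
bucket = client.bucket('my-app-id.appspot.com')
blob = bucket.blob('myfile.txt')
blob.upload_from_filename('/local/path/to/myfile.txt')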
How do I get App Engine to generate the URL of the server it is currently running on?
If the application is running on development server it should return
http://localhost:8080/
and if the application is running on Google's servers it should return
http://application-name.appspot.com
You can get the URL that was used to make the current request from within your webapp handler via self.request.url, or you can piece it together using the self.request.environ dict (which you can read about in the WebOb docs; webapp's request inherits from webob).
You can't "get the url for the server" itself, as many urls could be used to point to the same instance.
If your aim is really to just discover wether you are in development or production then use:
'Development' in os.environ['SERVER_SOFTWARE']
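Combining the two ideas, here is a minimal handler sketch (webapp2 assumed, as elsewhere in this thread; the handler name is just illustrative):
import os
import webapp2

class WhereAmI(webapp2.RequestHandler):
    def get(self):
        # scheme + host of the URL used for the current request,
        # e.g. http://localhost:8080 or http://application-name.appspot.com
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write(self.request.host_url + '\n')
        if 'Development' in os.environ.get('SERVER_SOFTWARE', ''):
            self.response.out.write('running on the dev server\n')
        else:
            self.response.out.write('running in production\n')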
Here is an alternative answer.
from google.appengine.api import app_identity
server_url = app_identity.get_default_version_hostname()
On the dev appserver this would show:
localhost:8080
and on appengine
your_app_id.appspot.com
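Note that get_default_version_hostname() returns just the hostname, so you have to prepend a scheme yourself. A small sketch (assuming HTTP on the dev server and HTTPS in production):
import os
from google.appengine.api import app_identity

hostname = app_identity.get_default_version_hostname()
scheme = 'http' if 'Development' in os.environ.get('SERVER_SOFTWARE', '') else 'https'
server_url = '%s://%s/' % (scheme, hostname)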
If you're using webapp2 as your framework, chances are that you're already using URI routing in your web application.
http://webapp2.readthedocs.io/en/latest/guide/routing.html
app = webapp2.WSGIApplication([
webapp2.Route('/', handler=HomeHandler, name='home'),
])
When building URIs with webapp2.uri_for(), just pass the _full=True argument to generate an absolute URI including the current domain, port, and protocol according to the current runtime environment.
uri = uri_for('home')
# /
uri = uri_for('home', _full=True)
# http://localhost:8080/
# http://application-name.appspot.com/
# https://application-name.appspot.com/
# http://your-custom-domain.com/
This function can be used in your Python code or directly from your templating engine (if you register it) - very handy.
Check webapp2.Router.build() in the API reference for a complete explanation of the parameters used to build URIs.
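Inside a handler, the same thing is available as self.uri_for(). A minimal sketch reusing the 'home' route from above:
import webapp2

class HomeHandler(webapp2.RequestHandler):
    def get(self):
        # absolute URI for the 'home' route, scheme and host included
        full_home = self.uri_for('home', _full=True)
        self.response.write(full_home)

app = webapp2.WSGIApplication([
    webapp2.Route('/', handler=HomeHandler, name='home'),
])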