From within an App Engine app, is there a way to determine the project ID a GAE (App Engine) instance is running on?
I want to access a BigQuery table in the same project that the App Engine instance is running in. I'd rather not hard-code it or include it in another config file if possible.
Edit: forgot to mention that this is from Python
This is the "official" way:
from google.appengine.api import app_identity
GAE_APP_ID = app_identity.get_application_id()
See more here: https://developers.google.com/appengine/docs/python/appidentity/
You can get a lot of info from environment variables:
import os
print os.getenv('APPLICATION_ID')
print os.getenv('CURRENT_VERSION_ID')
print os.environ
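Note that on the first-generation runtime the APPLICATION_ID value may carry a partition prefix such as s~ (for example s~my-project), so here is a rough sketch for stripping it down to the bare project ID:
import os

app_id = os.getenv('APPLICATION_ID', '')
# 's~my-project' -> 'my-project'
project_id = app_id.split('~')[-1]
print(project_id)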
I tried the other approaches in 2019 using Python 3. As far as I can tell, those approaches are for Python 2 (and one is for Java).
I was able to accomplish this in Python3 using:
import os
app_id = os.getenv('GAE_APPLICATION')
print(app_id)
project_id = os.getenv('GOOGLE_CLOUD_PROJECT')
print(project_id)
source: https://cloud.google.com/appengine/docs/standard/python3/runtime
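Since the original question was about reaching BigQuery in the same project, here is a minimal sketch of wiring that variable into a client, assuming the google-cloud-bigquery package is installed and listed in requirements.txt:
import os
from google.cloud import bigquery

project_id = os.getenv('GOOGLE_CLOUD_PROJECT')
client = bigquery.Client(project=project_id)  # queries run against the same project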
For Java, SystemProperty gives the same information; I also added the app version, in case you need it too.
import com.google.appengine.api.utils.SystemProperty;
String appId = SystemProperty.applicationId.get();
String appVersion = SystemProperty.applicationVersion.get();
I am trying to use OpenAI's API to play with some of the examples they have. However, when I go to load my API key, I get errors. I created a ".env" file and did:
OPENAI_API_KEY=XYZ-123
and then in Python I have the following:
import os
import openai
openai.api_key_path = ".env"
openai.api_key = os.getenv("OPENAI_API_KEY")
print(openai.Model.list())
Every time, it tells me my API key is malformed. I can also remove the third line and I get the same malformed-key error, even though I copied the key directly into the .env file from the website. Also, if I set the key directly in Python, it seems to work just fine:
openai.api_key = "XYZ-123"
But for security, I would prefer I don't see the key in my Python code. Any suggestions on how to resolve this?
Create a .properties file and put only your API key in it, without quotation marks or anything else. The API key should be the only text in the file. Pass this file's path as the value of openai.api_key_path and it should work.
Remember that the value expects the path from the root directory. If you put the file in the root directory, just pass in ".properties". If you put it in a subdirectory called backend, for example, pass in "backend/.properties".
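For example, a hypothetical sketch (the file name and key are placeholders, and this targets the pre-1.0 openai package, where api_key_path exists):
# .properties contains exactly one line, the raw key, e.g.:
# sk-XXXXXXXXXXXXXXXXXXXX
import openai

openai.api_key_path = ".properties"  # the library reads the key from this file
print(openai.Model.list())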
Hope this helps.
I suggest using dotenv for this purpose:
pip install python-dotenv
Usage example:
import dotenv
import openai

config = dotenv.dotenv_values(".env")
openai.api_key = config['OPENAI_API_KEY']
It is pretty flexible and works well whenever storing secrets in .env files comes up. Don't forget to add them to .gitignore or use .env.local (ignored by default)!
Setting the openai.api_key_path did not seem to work for me. Once I deleted its value, my code started working.
Does not work
The library probably checks api_key_path first, even when api_key is set, and then throws an error.
openai.api_key_path = '.env'
openai.api_key = os.getenv("OPENAI_API_KEY")
Works
Set api_key_path back to None:
openai.api_key_path = None
openai.api_key = os.getenv("OPENAI_API_KEY")
I believe your .env file OpenAI key needs to be in the format:
OPENAI_API_KEY="XYZ-123"
I know that you can use the EMAIL_BACKEND setting, and I think I have written a working mutt backend, but I can't set my EMAIL_BACKEND to my class because it apparently has to be the string import path, not the name of the class. The local path (emails) doesn't work because the current directory apparently isn't in the Python import path. And I can't use local package imports (from . import) because, of course, it has to be a simple string.
I got it working by copying my module into /usr/local/lib/python3.7/, but that's such a terrible long-term solution that it isn't even worth it.
My project directory structure is like: django/project/app/, with emails.py under app/, alongside settings.py and the others. The project/app structure didn't make a lot of sense to me (I only have one app), but I got the impression that it was the intended way to set up Django, so I did that.
It shouldn't be relevant, but BTW my mutt backend code is:
import subprocess
from django.core.mail.backends.base import BaseEmailBackend
class MuttBackend(BaseEmailBackend):
    def send_messages(self, email_messages):
        for m in email_messages:
            self.send(m)

    def send(self, message):
        print(message.subject, message.from_email, message.to, message.body)
        mutt = subprocess.Popen(args=['/usr/local/bin/mutt', *message.to,
                                      '-s', message.subject,
                                      '-e', f'set from="{message.from_email}"'],
                                stdin=subprocess.PIPE)
        mutt.stdin.write(bytes(message.body, 'utf-8'))
        mutt.stdin.close()
How can I set EMAIL_BACKEND to a class without using its import path, or find another workaround? I did some googling but couldn't find anyone else who had gotten anything like this to work.
I figured it out. The default config assumed uWSGI was running in project/, not project/app/, so the import path I needed was app.emails.MuttBackend.
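For reference, the corresponding settings.py line (assuming emails.py lives in the app package) looks like:
# settings.py
EMAIL_BACKEND = 'app.emails.MuttBackend'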
We are working on an add-on that writes to a log file and we need to figure out where the default var/log directory is located (the value of the ${buildout:directory} variable).
Is there an easy way to accomplish this?
In the past I had a similar use case.
I solved it by declaring the path inside the zope.conf:
zope-conf-additional +=
    <product-config pd.prenotazioni>
        logfile ${buildout:directory}/var/log/prenotazioni.log
    </product-config>
See the README of this product:
https://github.com/PloneGov-IT/pd.prenotazioni/
This zope configuration can then be interpreted with this code:
from App.config import getConfiguration
product_config = getattr(getConfiguration(), 'product_config', {})
config = product_config.get('pd.prenotazioni', {})
logfile = config.get('logfile')
See the full example here: https://github.com/PloneGov-IT/pd.prenotazioni/blob/9a32dc6d2863b5bfb5843d441e652101406d9a2c/pd/prenotazioni/init.py#L17
Worth noting: the early return avoids setting up logging more than once if the init function is mistakenly called multiple times.
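A rough sketch of what that pattern boils down to (the logger name and the guard are illustrative, not the exact code from the linked file):
import logging

from App.config import getConfiguration

product_config = getattr(getConfiguration(), 'product_config', {})
logfile = product_config.get('pd.prenotazioni', {}).get('logfile')

logger = logging.getLogger('pd.prenotazioni')
if logfile and not logger.handlers:  # the early return avoids adding the handler twice
    logger.addHandler(logging.FileHandler(logfile))
    logger.setLevel(logging.INFO)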
Anyway, if you do not want to play with buildout and custom zope configuration, you may want to get the default event log location.
It is specified in the zope.conf. You should have something like this:
<eventlog>
  level INFO
  <logfile>
    path /path/to/plone/var/log/instance.log
    level INFO
  </logfile>
</eventlog>
I was able to obtain the path with this code:
from App.config import getConfiguration
import os
eventlog = getConfiguration().eventlog
logpath = eventlog.handler_factories[0].instance.baseFilename
logfolder = os.path.split(logpath)[0]
Probably, by looking at the App module code you will find a more straightforward way of getting this value.
Another possible (IMHO weaker) solution would be to store the logfile path in an environment variable (through buildout or your preferred method).
You could let buildout set an environment variable in parts/instance/etc/zope.conf:
[instance]
recipe = plone.recipe.zope2instance
environment-vars =
    BUILDOUT_DIRECTORY ${buildout:directory}
Check it in Python code with:
import os
buildout_directory = os.environ.get('BUILDOUT_DIRECTORY', '')
By default you already have the INSTANCE_HOME environment variable, which might be enough.
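A rough sketch using it (the relative layout below assumes a default buildout where the instance lives in parts/instance, so treat it as an approximation):
import os

instance_home = os.environ.get('INSTANCE_HOME', '')
# parts/instance -> ../../var/log in a default buildout layout
logdir = os.path.normpath(os.path.join(instance_home, '..', '..', 'var', 'log'))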
I was trying to do a bit of testing on a piece of code, but I get ImportError: Start directory is not importable. Below is my code. I would really appreciate it if someone could help. I am using Python 2.7.5 and also PyCharm.
This is the command I execute in my virtualenv:
manage.py test /project/tests.py
from django.test import TestCase
# Create your tests here.
from project.models import Projects
from signup.models import SignUp


class ProjectTestCase(TestCase):
    def SetUp(self):
        super(ProjectTestCase, self).SetUp()
        self.john = SignUp.objects.get(email='john@john.com')
        self.project = Projects.objects.get(pk=1)

    def test_project_permission(self):
        self.assertFalse(self.john.has_perm('delete', self.project))
        self.assertTrue(self.john.has_perm('view', self.project))
        self.assertTrue(self.john.has_perm('change', self.project))
You should verify that you have an __init__.py in your project directory (it can be an empty file)
You're not calling test correctly. Assuming you have an app known as project, the way to call the tests in that app is manage.py test project
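For example, from the directory containing manage.py (the dotted label below is just an illustration of targeting a single test case on newer Django versions):
python manage.py test project
python manage.py test project.tests.ProjectTestCase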
[Edit for additional comment which is really getting into separate question territory]
I would suggest adding this to settings.py (at the bottom)
import sys

if 'test' in sys.argv:
    DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}
    SOUTH_TESTS_MIGRATE = False
This:
A. Uses SQLite (or, I believe, an in-memory database that behaves like SQLite) for the test run. I've noticed issues when I use Postgres while trying to keep my database permissions from being excessively liberal, and when a test aborts.
B. Disables South migrations, so the test database is built cleanly from whatever the models currently say.
I've extracted several of my SQLAlchemy models into a separate, installable package (../lib/site-packages) to use across several applications. So I only need to:
from models_package import MyModel
from any application needing access to these models.
Everything is OK so far, except I cannot find a satisfactory way of getting at several application-dependent config variables used by some of the models, which may vary from application to application. So some models need to be aware of certain variables, for which I previously relied on the application they lived in.
Neither
current_app.config['XYZ']
nor
config = LocalProxy(lambda: current_app.config['XYZ'])
has worked ("working outside of application context" errors), so I'm stuck right now. Maybe this is poor programming and/or design on my behalf, so how do I clear this up? There must be some way, but I haven't reasoned my way toward it yet.
SOLUTION:
As long as I avoid evaluating these at module load time (for example, a module-level constant holding an API key), both of the approaches above should work, and they do. Anything that reads them outside the context of the models being used inside an application will of course error; methods that return the values you need when called are fine.
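For illustration, a rough sketch of the distinction ('XYZ' and get_api_key are placeholder names):
from flask import current_app
from werkzeug.local import LocalProxy

# Fine at module level: the lambda is only evaluated when the proxy is used
config_xyz = LocalProxy(lambda: current_app.config['XYZ'])

def get_api_key():
    # Fine: called later, inside an application context
    return current_app.config['XYZ']

# Fails at import time: no application context exists yet
# API_KEY = current_app.config['XYZ']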
If you are using a configuration pattern utilising classes and inheritance as described here, you could simply import your config classes with their respective properties and access them anywhere you want:
class Config(object):
    IMPORT = 'ME'
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
Now, in your foo.py:
from config import Config
print(Config.IMPORT) # prints 'ME'
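And if the Flask app itself should pick up the same class, the usual pattern (assuming the classes live in a config.py that is importable as config) is:
import flask

app = flask.Flask(__name__)
app.config.from_object('config.ProductionConfig')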
Well, since current_app is only a proxy for your Flask application once things are wired up at run time (e.g. when a blueprint is registered), you can't use it at import time in your models_package modules.
(The app tries to import models_package, and models_package requires the app's configs to initialize things, so the import fails.)
One option would be doing circular imports, assuming everything is in the 'App' package:

__init__.py:

import flask

application = flask.Flask(__name__)
# load configs here, e.g. application.config.from_pyfile(...)
import models_package

models_package.py:

from App import application

config = application.config
Or create your own config object, but that somewhat doubles the complexity:

models_package.py:

import flask

# Config requires a root_path as its first argument; '.' is just a placeholder
config = flask.config.Config('.', defaults=flask.Flask.default_config)
# pick one of these and apply the same config initialization as you do in
# your __init__.py
config.from_pyfile(..)  # or
config.from_object(..)  # or
config.from_envvar(..)