I had been successfully running the local development server daily and made no changes, except that I ran "gcloud components update" just before it stopped working. Now I get:
..snip... <<PATH TO MY SDK>>/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/application_configuration.py", line 518, in _parse_configuration
with open(configuration_path) as f:
IOError: [Errno 2] No such file or directory: 'app.yaml'
Of course app.yaml hasn't moved.
Any thoughts?
It looks like there's an active issue on Google's issue tracker (opened on Oct 2, 2018) pertaining to this:
After updating to the Python (2.7) extensions for GAE to version
1.9.76, I am no longer able to run my code with dev_appserver.py
As of Oct 3, a fix appears to be in the works, but for now they suggest downgrading Google Cloud SDK to version 218.0.0:
It seems like you are affected by a known issue where ‘dev_appserver.py’ breaks in Google Cloud SDK version [219.0.1]. App Engine specialists are currently working to resolve it; however, there is no ETA at this moment. As a workaround you can downgrade the Google Cloud SDK version using this command:
gcloud components update --version 218.0.0
The assignee of the issue will post an update on that issue when it has been resolved.
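To confirm which SDK version you ended up on after the downgrade, you can run:
gcloud version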
UPDATE (OCT 9, 2018): Cloud SDK version 220.0.0, which fixes the dev_appserver.py issue, is now available. I updated (via gcloud components update) and verified that it works. (Note: there are already a couple of complaints on the Issue Tracker that dev_appserver.py takes too long to load now. I didn't notice a significant difference from version 218, but I didn't compare timings.)
You may create a Makefile and have something like this in it (note that the recipe line under run: must be indented with a tab):
export SDK=dev_appserver.py
export APP_PATH=${CURDIR}
run:
	$(SDK) $(APP_PATH)/path-to/app.yaml
Then just use it with make run so you don't have to worry about paths.
cd to the directory with app.yaml in it and try again
On Windows, dev_appserver.py %CD% is enough if your .yaml file has the default app.yaml name; otherwise use dev_appserver.py %CD%/your-file-name.yaml.
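On Linux or macOS the equivalent is something like the following (the path is just a placeholder for wherever your app.yaml actually lives):
cd /path/to/your/app
dev_appserver.py app.yaml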
This worked for me: in app.yaml, change
runtime: go
to
runtime: go111
I'm in the process of upgrading a python2.7 GAE standard app to python3.7.
Everything is great except my indexes just won't work.
I have a simple-looking index.yaml file:
indexes:
- kind: Response
  ancestor: yes
  properties:
  - name: __key__
    direction: desc
And when I run certain commands I get this:
google.api_core.exceptions.FailedPrecondition: 400 no matching index found. recommended index is:
- kind: Response
  ancestor: yes
  properties:
  - name: __key__
    direction: desc
The command I'm using to run dev_appserver is: dev_appserver.py --application=my_project_id app.yaml.
My index.yaml file lives in the same directory as my app.yaml file.
Nothing else is running. The application itself is a Flask API; the error comes up when I curl one of the endpoints.
What I've tried
After poking around the docs and SO for a bit, it seems like I might need to run the datastore emulator locally. So I made sure that my gcloud components were up to date, then:
gcloud beta emulators datastore env-init
# which gave me:
export DATASTORE_DATASET=firestore-datastore-280307
export DATASTORE_EMULATOR_HOST=::1:8608
export DATASTORE_EMULATOR_HOST_PATH=::1:8608/datastore
export DATASTORE_HOST=http://::1:8608
export DATASTORE_PROJECT_ID=firestore-datastore-280307
# then
gcloud beta emulators datastore start --project=my_project_id
# which gave me
stuff...
[datastore] API endpoint: http://::1:8679
[datastore] If you are using a library that supports the DATASTORE_EMULATOR_HOST environment variable, run:
[datastore]
[datastore] export DATASTORE_EMULATOR_HOST=::1:8679
[datastore]
[datastore] Dev App Server is now running.
So by combining these outputs it looks like my environment should look like this:
export DATASTORE_DATASET=my_project_id
export DATASTORE_EMULATOR_HOST=::1:8679
export DATASTORE_EMULATOR_HOST_PATH=::1:8679/datastore
export DATASTORE_HOST=http://::1:8679
export DATASTORE_PROJECT_ID=my_project_id
Cool. So I leave the emulator running and try to get my dev_appserver to connect to it:
export DATASTORE_DATASET=my_project_id
export DATASTORE_EMULATOR_HOST=::1:8679
export DATASTORE_EMULATOR_HOST_PATH=::1:8679/datastore
export DATASTORE_HOST=http://::1:8679
export DATASTORE_PROJECT_ID=my_project_id
dev_appserver.py --application=my_project_id app.yaml
It starts up but when I curl my endpoint I get the same index error.
So then I kill dev_appserver and try it like this:
# same env vars as before
dev_appserver.py --support_datastore_emulator=true --application=my_project_id app.yaml
Then I get a new error:
RuntimeError: Cannot use the Cloud Datastore Emulator because the packaged grpcio is incompatible to this system. Please install grpcio using pip
I pip installed grpcio in a python2.7 env to get over that error. Now it looks like everything is running. But I'm still getting the missing index error.
And another strange thing: if I go to http://localhost:8000 and try navigating to anything to do with datastores, I get errors like:
ConnectionError: Cannot connect to Cloud Datastore Emulator on ::1:{THE_PORT}
Which is very weird.
I'm considering going back to 2.7.
Don't go back to 2.7! If you have this problem in production, go to your developer console and check your indexes: https://console.cloud.google.com/.....
See if they are still building. It does take some time for the indexes to build.
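If the indexes were never deployed at all, you can create them from your index.yaml with something like this (assuming the gcloud CLI is authenticated against the right project):
gcloud datastore indexes create index.yaml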
If this only happens in development:
dev_appserver is buggy on Windows. I couldn't tell if you are on Windows or not. I had problems using dev_appserver in a virtual environment even on Mac when porting an app to Python 3.7.
You state you are using Flask. Try using the Flask development server instead of dev_appserver; that is what worked for me. There is good documentation on this. You will start it with something like:
cd /Users/myname/venv37
source ./bin/activate
export FLASK_APP=/Users/myname/path_to_app
FLASK_ENV=development flask run --port 5000
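For reference, a minimal sketch of the module that FLASK_APP points at might look like this (the route and names here are assumptions, not your actual app):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # served by the Flask dev server instead of dev_appserver
    return "Hello from the Flask dev server"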
edit:
ndb is not compatible with Python 3.7. They have developed a new library, Google Cloud NDB, which makes the old ndb data usable: https://cloud.google.com/appengine/docs/standard/python3/migrating-to-cloud-ndb
New apps should use Cloud Datastore or Firestore. But legacy ndb apps can migrate to Google Cloud NDB.
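As a rough sketch of what Cloud NDB code looks like (the Response model here is an assumption based on your index.yaml, not your real schema):
from google.cloud import ndb

class Response(ndb.Model):
    text = ndb.StringProperty()

client = ndb.Client()
# every NDB operation must run inside a client context
with client.context():
    keys = Response.query().fetch(keys_only=True)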
I'm trying to deploy a Flask web app with mysql connectivity. It's my first time using Azure, and coming off Linux it all seems pretty confusing.
My understanding is that one lists the required packages in requirements.txt. When I build the default Flask app from Azure the file looks like this:
Flask<1
At this stage the site loads fine.
If I then include an additional line
https://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.1.14.tar.gz
As per this answer https://stackoverflow.com/a/34489738/2697874
Then in my views.py file (which seems to be broadly synonymous with my old app.py file) I include: import mysql.connector
I then restart and reload my site, which returns the error The page cannot be displayed because an internal server error has occurred.
Error logging spits out a load of HTML (which seems a pretty weird way to deliver error logs, so I must be missing something here). When I save it as HTML and load it up I get this...
How can I include the mysql.connector library within my Flask web app?
Per my experience, the resource https://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.1.14.tar.gz is for Linux, not for Azure Web Apps based on Windows, and the link no longer seems to be available.
I used the command pip search mysql-connector to list the related packages. Then I tried to use mysql-connector instead of mysql-connector-python via pip install, and tried to import mysql.connector in a local Python interpreter, which worked fine.
So please use mysql-connector==2.1.4 instead of mysql-connector-python in your project's requirements.txt file (using your IDE), then re-deploy the project on Azure and try again. The package will be installed automatically, as the official doc says below.
Package Management
Packages listed in requirements.txt will be installed automatically in the virtual environment using pip. This happens on every deployment, but pip will skip installation if a package is already installed.
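For illustration, a minimal connection test with that package might look like this (the connection parameters are placeholders, not your real values):
import mysql.connector  # provided by mysql-connector==2.1.4 in requirements.txt

conn = mysql.connector.connect(
    host="your-mysql-host",
    user="your-user",
    password="your-password",
    database="your-database",
)
print(conn.is_connected())  # True if the connection succeeded
conn.close()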
If there is any update, please feel free to let me know.
After I recently updated the gcloud components with gcloud components update to version 108.0.0, I noticed the gcloud preview app deploy app.yaml command has started taking too long every time (about 15 minutes) for my project. Before this it only used to take about a minute to complete.
I figured out that gcloud preview app deploy --verbosity info app.yaml displays the progress of the deployment process, and I noticed every file in the source code is being uploaded on every deploy, including the files in the lib directory, which has a number of packages installed (about 2000 files), so this is where the delay is coming from. Since I am new to App Engine, I don't know if this is normal.
The project exists inside a folder of a git repo, and I noticed that after every deploy, 2 files in the default directory, source-context.json and source-contexts.json, are created and contain information about the git repo. I feel that can somehow be relevant.
I went through a number of relevant questions here but couldn't figure out the issue. It would be great if this can be resolved, because it's a big inconvenience having to wait 15 minutes for every deploy.
I only started using Google App Engine a month ago, so please don't mind if the question is incorrect. Please let me know if additional info is needed to resolve this. Thanks.
UPDATE: I am using the gcloud SDK on Ubuntu 14.04 LTS.
Yes, this is the expected behaviour: each deployment is standalone, no assumption is made about anything being "already deployed", and all of the app's artifacts are uploaded at every deployment.
Update: Kekito's comment suggests different tools may actually behave differently. My answer applies to the Linux version of the Python SDK, regardless of whether you deploy a new version or re-deploy the same version.
Yesterday this code was working fine both in local and production servers:
import logging

import cloudstorage

def filelist(self):  # handler method; takes self, not Handler
    gs_bucket_name = "/bucketname"
    files = cloudstorage.listbucket(gs_bucket_name)  # renamed from 'list' to avoid shadowing the built-in
    logging.warning(files)
    self.write(files)
    for e in files:
        self.write(e)
        self.write("<br>")
Between yesterday and today I upgraded GAE Launcher and changed the billing options (I was using a free trial and now have a paid account); not sure if that has anything to do with it, but just to give extra information.
But today the code stopped working in local (works fine in production)
This is the beginning of the error log
WARNING 2015-02-20 09:50:21,721 admin.py:106] <cloudstorage.cloudstorage_api._Bucket object at 0x10ac31e90>
ERROR 2015-02-20 09:50:21,729 api_server.py:221] Exception while handling service_name: "app_identity_service"
method: "GetAccessToken"
request: "\n7https://www.googleapis.com/auth/devstorage.full_control"
request_id: "WoMrXkOyfe"
The warning shows a bucket object, but as soon as I try to iterate over the list I get the exception in the identity service.
What is happening? It seems that I need to authorize the local dev server's GCS mockup, but I'm not sure how.
Remember this is only happening in devserver, not in production.
Thanks for your help
This is a problem with the latest release (1.9.18). For now, until it gets fixed, you can downgrade to 1.9.17 by downloading the installer from here and just running it: https://storage.googleapis.com/appengine-sdks/featured/GoogleAppEngineLauncher-1.9.17.dmg
As per the answer below, 1.9.18 has been patched with a fix for this. If you still want to install the 1.9.17 version, please follow this link: https://storage.googleapis.com/appengine-sdks/deprecated/1917/GoogleAppEngineLauncher-1.9.17.dmg
It's a known bug in the dev_appserver, where the SDK does not cope with credentials already persisted by earlier versions. For me (on Ubuntu 15.10 with SDK 1.9.33) it helped to simply remove a file:
rm ~/.config/gcloud/application_default_credentials.json
as suggested in the bug issue filed by Jari Wiklund.
Update: As of March 5th, 2015, this was fixed in the public release of 1.9.18, which may be a simpler way to get the fix.
Note: while the fix was in Python, the issue can also surface in Java, PHP and Go, because they use the Python local dev server code.
I do believe that the cause of the problem is a bug in the recently released local dev server (GoogleAppEngineLauncher).
I'm experiencing something similar in the PHP runtime: GloudStorage fails locally
I am successfully able to get BQ data from one project to another following the advice in this answer. However, this only works when deployed on my development/staging instance, not on my local development server on Google App Engine.
My findings are that it works in production because you include:
libraries:
- name: pycrypto
  version: "latest"
in app.yaml. However, these libraries are not accessible from the dev server. I have tried installing everything locally (PyCrypto, oauth2client, openSSL) after digging through some docs and tracing the error, but still cannot get it to work. I have tried installing through pip and manually doing the build/install from the raw files, to no avail. Any advice on getting these queries to work on the local django server? I'm working on Ubuntu, if that matters; perhaps it's looking in the wrong spot for the libraries?
If it's just the libs that are missing, follow this answer https://stackoverflow.com/a/11405769/3877822 to install pycrypto to the root of your project.
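As a rough sketch of that vendoring approach for the python2.7 standard runtime (the lib directory name is a convention, and the pip command is one way to populate it):
# first, e.g.: pip install pycrypto -t lib/
# then, in appengine_config.py at the project root:
from google.appengine.ext import vendor

# make everything installed in ./lib importable by the app and the dev server
vendor.add('lib')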
As @Udi suggests in the comment below, the following command also installs pycrypto and can be used in a virtualenv as well:
easy_install http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe
Note: choose the relevant link for your setup from this list.