Updating browser cache on flask - python

I have a small Flask server that I run mostly for experimenting and for tools I'm developing for personal use (on my home network). It runs in development mode on a Raspberry Pi and is configured to launch on startup via rc.local:
sudo -H -u pi /home/pi/Server/start.sh &
and the start.sh file reads
#!/bin/bash
cd /home/pi/Server
source /home/pi/Server/venv/bin/activate
export FLASK_APP=/home/pi/Server/app.py
export FLASK_ENV=development
export FLASK_RUN_HOST=192.168.1.104
export FLASK_RUN_PORT=5001
flask run
For the first couple of days everything ran fine, but now I get the following error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/lib/python3/dist-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/usr/lib/python3/dist-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3/dist-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/lib/python3/dist-packages/flask/app.py", line 2291, in wsgi_app
ctx.push()
File "/usr/lib/python3/dist-packages/flask/ctx.py", line 377, in push
self.app, self.request
File "/usr/lib/python3/dist-packages/flask/sessions.py", line 343, in open_session
data = s.loads(val, max_age=max_age)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 643, in loads
.unsign(s, max_age, return_timestamp=True)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 466, in unsign
return value, self.timestamp_to_datetime(timestamp)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 404, in timestamp_to_datetime
return datetime.utcfromtimestamp(ts + EPOCH)
OverflowError: timestamp out of range for platform time_t
From what I see here, this is a browser-cache issue. How can I tell Flask to cope with it?

It looks like you're using sessions/cookies, so try looking into that; maybe the stored date is improper or invalid. Try clearing the session with session.clear(), or use a shorter expiration date. I've also had issues after upgrading from Python 2 to 3 that messed up the cookies; if you've done that, you need to clear your browser cache so Python 3 date/time cookies can be set.
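The overflow itself happens when itsdangerous converts the cookie's signed timestamp back to a datetime (the last frame of the traceback). A minimal sketch of that failure mode, assuming the EPOCH offset older itsdangerous versions used and a deliberately corrupt timestamp:

```python
from datetime import datetime

EPOCH = 1293840000  # itsdangerous' signing epoch (2011-01-01) in older versions

def timestamp_to_datetime(ts):
    # mirrors what itsdangerous' timestamp_to_datetime does at the bottom
    # of the traceback above
    return datetime.utcfromtimestamp(ts + EPOCH)

bad_ts = 2 ** 62  # hypothetical corrupted timestamp from a stale cookie
try:
    timestamp_to_datetime(bad_ts)
    result = "accepted"
except (OverflowError, OSError, ValueError):
    # on a 32-bit Raspberry Pi this surfaces as
    # "OverflowError: timestamp out of range for platform time_t"
    result = "rejected"
print(result)
```

Flask only hits this while decoding an already-stored cookie, which is why clearing the stale cookie (or the session) makes the error go away.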

This seems to be the error that occurs when the returned time is 0, as in this Adafruit CircuitPython NTP issue. A direct approach would be to patch the relevant Flask dependencies with a PR.
However, this seems more likely to be an error with your cache age. Try reducing it to a short time:
@app.after_request
def after_request(response):
    response.headers["Cache-Control"] = "max-age=300"  # in seconds
    return response

Related

Python SocketIO KeyError: 'Session is disconnected'

On a small Flask web server running on a Raspberry Pi with about 10-20 clients, we periodically get this error:
Error on request:
Traceback (most recent call last):
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 270, in run_wsgi
execute(self.server.app)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 258, in execute
application_iter = app(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask_socketio/__init__.py", line 43, in __call__
start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/middleware.py", line 47, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/socketio/server.py", line 360, in handle_request
return self.eio.handle_request(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 291, in handle_request
socket = self._get_socket(sid)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 427, in _get_socket
raise KeyError('Session is disconnected')
KeyError: 'Session is disconnected'
The error is generated automatically from inside python-socketio. What does this error really mean and how can I prevent or suppress it?
As far as I can tell, this usually means the server can't keep up with supplying data to all of the clients.
Some possible mitigation techniques include disconnecting inactive clients, reducing the amount of data sent where possible, sending live data in larger chunks, or upgrading the server. If you need a lot of data throughput, there may also be a better option than SocketIO.
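The first of those mitigations, disconnecting inactive clients, can be sketched library-agnostically; `prune_idle` and the last-seen dict below are hypothetical stand-ins, not flask_socketio API:

```python
import time

def prune_idle(last_seen, max_idle, now=None):
    """Drop clients silent for longer than `max_idle` seconds.

    `last_seen` maps session id -> timestamp of the client's last message,
    so the server stops pinging sessions that will never answer.
    """
    now = time.time() if now is None else now
    return {sid: t for sid, t in last_seen.items() if now - t <= max_idle}

clients = {"a": 100.0, "b": 160.0}
active = prune_idle(clients, max_idle=30, now=170.0)
print(sorted(active))  # only "b" is recent enough to keep
```

In a real app you would update `last_seen` in your message handlers and call something like `socketio.server.disconnect(sid)` for the pruned ids.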
I have been able to reproduce it by setting a really high ping rate and low timeout in the socketIO constructor:
from flask_socketio import SocketIO
socketio = SocketIO(engineio_logger=True, ping_timeout=5, ping_interval=5)
This means the server has to do a lot of messaging to all of the clients and they don't have long to respond. I then open around 10 clients and I start to see the KeyError.
Further debugging of our server found a process that was posting lots of live data; it ran fine with only a few clients but started to raise the occasional KeyError once we got up to about a dozen.

OSError: MoviePy error: the file guitar.mp4 could not be found

I'm working on a video-to-audio converter with React and Flask/Python.
I have received a 500 with this error:
raise IOError(("MoviePy error: the file %s could not be found!\n"
OSError: MoviePy error: the file guitar.mp4 could not be found!
Please check that you entered the correct path.
EDIT: As stated in the comments, moviepy's VideoFileClip is looking for a path. Per suggestion, I am now attempting to write the incoming video file to a temp directory housed in the backend of the app. The updated stack trace shows the file path printing; however, when it's presented to VideoFileClip, it is still unhappy.
The following snippet is the onSubmit for the video file upload:
const onSubmit = async (e) => {
    e.preventDefault()
    const data = new FormData()
    console.log('hopefully the mp4', videoData)
    data.append('mp3', videoData)
    console.log('hopefully a form object with mp4', data)
    const response = await fetch('/api/convert', {
        method: "POST",
        body: data
    })
    if (response.ok) {
        const converted = await response.json()
        setMp3(converted)
        console.log(mp3)
    } else {
        window.alert("something went wrong :(")
    }
}
Here is a link to an image depicting the console output of my file upload
from within __init__.py
app = Flask(__name__)
app.config.from_object(Config)
app.register_blueprint(convert, url_prefix='/api/convert')
CORS(app)
from within converter.py
import os
from flask import Blueprint, jsonify, request, send_from_directory
from werkzeug.utils import secure_filename
import imageio
from moviepy.editor import *

convert = Blueprint('convert', __name__)

@convert.route('', methods=['POST'])
def convert_mp4():
    if request.files['mp3'].filename:
        os.getcwd()
        filename = request.files['mp3'].filename
        print('hey its a file again', filename)
        safe_filename = secure_filename(filename)
        video_file = os.path.join("/temp/", safe_filename)
        print('hey its the file path', video_file)
        video_clip = VideoFileClip(video_file)
        print('hey its the VideoFileClip', video_clip)
        audio_clip = video_clip.audio
        audio_clip.write_audiofile(os.path.join("/temp/", f"{safe_filename}-converted.mp3"))
        video_clip.close()
        audio_clip.close()
        return jsonify(send_from_directory(os.path.join("/temp/", f"{safe_filename}-converted.mp3")))
    else:
        return {'error': 'something went wrong :('}
In the stack trace below you can see the file name of the video printing. My only other thought on why this may not be working was that the file was getting lost in the POST request; however, the fact that it prints after my if check leaves me pretty confused.
hey its a file again guitar.mp4
hey its the file path /temp/guitar.mp4
127.0.0.1 - - [22/Apr/2021 12:12:15] "POST /api/convert HTTP/1.1" 500 -
Traceback (most recent call last):
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/jasondunn/projects/audioconverter/back/api/converter.py", line 20, in convert_mp4
video_clip = VideoFileClip(video_file)
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 270, in ffmpeg_parse_infos
raise IOError(("MoviePy error: the file %s could not be found!\n"
OSError: MoviePy error: the file /temp/guitar.mp4 could not be found!
Please check that you entered the correct path.
thanks in advance for taking a look/future advice. First official post on Stack Overflow :)
Looks like python cannot find guitar.mp4 :(
It appears that you need to save the file contents to disk before processing. Looking at the docs for MoviePy, you need to pass the file name or absolute path into the VideoFileClip constructor; this object will open the file on disk and handle processing after instantiation.
Saving the file within the request should be simple enough. The code below should be able to handle this
file.save(os.path.join("/path/to/some/dir", filename))
Now you can give VideoFileClip a proper URI to the file.
video_clip = VideoFileClip(os.path.join("/path/to/some/dir", filename))
This is what I would write for convert_mp4 although it isn't tested.
@convert.route('', methods=["POST"])
def convert_mp4():
    if request.files.get("mp3"):
        # clean the filename
        safe_filename = secure_filename(request.files["mp3"].filename)
        # save the file to some directory on disk
        request.files["mp3"].save(os.path.join("/path/to/some/dir", safe_filename))
        video_clip = VideoFileClip(os.path.join("/path/to/some/dir", safe_filename))
        audio_clip = video_clip.audio  # extract the audio track
        # you may need to change the name or save to a different directory
        audio_clip.write_audiofile(os.path.join("/path/to/some/dir", f"{safe_filename}.mp3"))
        # close resources, maybe use a context manager for better readability
        video_clip.close()
        audio_clip.close()
        # respond with data from a specific file
        return send_from_directory("/path/to/some/dir", f"{safe_filename}.mp3")
    else:
        return jsonify(error="File 'mp3' does not exist!")
Whenever you are saving data to disk through Flask, you should use secure_filename, which comes from the werkzeug project that Flask is built on. This function cleans the input name so attackers cannot craft malicious filenames.
I would suggest even going a bit further, maybe create 2 endpoints. One for submitting the data to process, and the second for retrieving it. This keeps your requests fast and allows flask to handle other requests in the meantime (however you will need some background process to handle the conversion).
Update April 30th, 2021
I know we solved this on Discord but I wanted to document the solution.
Your MP4 data is not being saved to disk using the save method (see this). You can check the above code that implements this.
Once that is done, we now know where this data is and can instantiate the VideoFileClip object using the known file path, this will allow the conversion to take place and you will need to then save the converted MP3 file on a location within your filesystem.
Once the MP3 is saved to disk, you can use the flask send_from_directory function to send the data back within your response. This response cannot contain JSON content as the content type is already set to audio/mpeg based on the MP3 file contents.
I think the issue is with how you are using file = request.files['mp3'].filename.
The value assigned to file isn't a pointer to the uploaded file; it is just the name of the file, a string. request.files['mp3'] itself is an instance of the werkzeug.datastructures.FileStorage class documented here.
The libraries you are passing that string to are interpreting it as a path to the file they are supposed to open.
Since you haven't saved the file anywhere, the library isn't finding anything.
I'm not familiar with the libraries you are using, but they might have a way to accept the in-memory file data directly, without having to save the file and then open it up again.
If not, then you will probably want to save the file to some temporary location and then have the library open and read the file.
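That save-then-open flow can be sketched without Flask; the Upload class below is a hypothetical stand-in for werkzeug's FileStorage (whose save() method likewise writes the upload to a path on disk):

```python
import os
import tempfile

class Upload:
    """Hypothetical stand-in for werkzeug.datastructures.FileStorage."""
    def __init__(self, filename, data):
        self.filename, self.data = filename, data

    def save(self, path):
        # FileStorage.save() likewise streams the upload to a path on disk
        with open(path, "wb") as f:
            f.write(self.data)

upload = Upload("guitar.mp4", b"\x00\x01\x02")
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, upload.filename)

upload.save(path)            # without this step, VideoFileClip(path) fails
print(os.path.exists(path))  # the file now exists for MoviePy to open
```

The question's original code skipped the save() step entirely, which is exactly why the path it printed pointed at nothing.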

Authentication Error when using Flask to connect to ParseServer

What I am trying to achieve is pretty simple.
I want to use Flask to create a web app that connects to a remote Server via API calls (specifically ParseServer).
I am using a third-party library to achieve this, and everything works perfectly when I run my code in a stand-alone script. But when I add my code to the Flask app, I suddenly can't authenticate with the server.
Or, to be more precise, I get an 'unauthorized' error when executing an API call.
It seems to me that in Flask, the registration method used by the API library is not remembered.
I tried putting the registration and initialization code in many different places in Flask; nothing worked.
I asked a similar question in the Github of the Library with no help.
So I guess I have two questions that could help me solve this:
1) Where should I put the registration method and the imports from this library?
2) What can I do to identify the issue specifically, e.g. to know precisely what's wrong?
Here's some code
The Flask code is here
@app.route('/parseinsert')
def run_parse_db_insert():
    """The method for testing implementation and design of the Parse Db
    """
    pc = ParseCommunication()
    print(pc.get_all_names_rating_table())
    return 'done'
The ParseCommunication is my Class that deals with Parse. If I run ParseCommunication from that script, with the same code as above in the main part, everything works perfectly.
I run the Flask app with dev_appserver.py from Google App Engine.
My folder structure
/parseTest
    /aplication
        views.py
    app.yaml
    run.py
My run.py code
import os
import sys
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'lib'))
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'application'))
import aplication
Let me know what else I could provide to help out.
Thank you in Advance
EDIT:
A stack trace as requested.
It's mostly related to the library (from what I can say?)
ERROR 2016-09-28 06:45:50,271 app.py:1587] Exception on /parseinsert [GET]
Traceback (most recent call last):
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/views.py", line 34, in run_parse_db_insert
name = pc.get_user('testuser1')
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/parseCommunication.py", line 260, in get_user
return User.Query.get(username=uname)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 58, in get
return self.filter(**kw).get()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 150, in get
results = self._fetch()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 117, in _fetch
return self._manager._fetch(**options)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 41, in _fetch
return [klass(**it) for it in klass.GET(uri, **kw).get('results')]
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 108, in GET
return cls.execute(uri, 'GET', **kw)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 102, in execute
raise exc(e.read())
ResourceRequestLoginRequired: {"error":"unauthorized"}
Parse requires keys and env variables. Check this line:
API_ROOT = os.environ.get('PARSE_API_ROOT') or 'https://api.parse.com/1'
Your error is in line 102 at:
https://github.com/milesrichardson/ParsePy/blob/master/parse_rest/connection.py
Before you can parse, you need to register:
from parse_rest.connection import register
APPLICATION_ID = '...'
REST_API_KEY = '...'
MASTER_KEY = '...'
register(APPLICATION_ID, REST_API_KEY, master_key=MASTER_KEY)

python appengine-gcs-client demo with local devserver hitting AccessTokenRefreshError(u'internal_failure',)

I'm having trouble getting the python appengine-gcs-client demo working using the 1.9.40 (latest presently) SDK's dev_appserver.py.
I followed the Setting Up Google Cloud Storage and the App Engine and Google Cloud Storage Sample instructions.
I created the default bucket for a paid app, with billing enabled and a non-zero daily spending limit set. I successfully uploaded a file to that bucket using the developer console.
I cloned the GoogleCloudPlatform/appengine-gcs-client repo from github. I copied the python/src/cloudstorage dir into the python/demo dir, which now looks like this:
dancorn-laptop.acasa:/home/dancorn/src/appengine-gcs-client/python> find demo/ | sort
demo/
demo/app.yaml
demo/blobstore.py
demo/cloudstorage
demo/cloudstorage/api_utils.py
demo/cloudstorage/api_utils.pyc
demo/cloudstorage/cloudstorage_api.py
demo/cloudstorage/cloudstorage_api.pyc
demo/cloudstorage/common.py
demo/cloudstorage/common.pyc
demo/cloudstorage/errors.py
demo/cloudstorage/errors.pyc
demo/cloudstorage/__init__.py
demo/cloudstorage/__init__.pyc
demo/cloudstorage/rest_api.py
demo/cloudstorage/rest_api.pyc
demo/cloudstorage/storage_api.py
demo/cloudstorage/storage_api.pyc
demo/cloudstorage/test_utils.py
demo/__init__.py
demo/main.py
demo/main.pyc
demo/README
This is how I executed the devserver and the errors reported when trying to access http://localhost:8080 as instructed:
dancorn-laptop.acasa:/home/dancorn/src/appengine-gcs-client/python> /home/usr_local/google_appengine_1.9.40/dev_appserver.py demo
INFO 2016-08-04 01:07:51,786 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2016-08-04 01:07:51,982 sdk_update_checker.py:257] The SDK is up to date.
INFO 2016-08-04 01:07:52,121 api_server.py:205] Starting API server at: http://localhost:50355
INFO 2016-08-04 01:07:52,123 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-08-04 01:07:52,124 admin_server.py:116] Starting admin server at: http://localhost:8000
INFO 2016-08-04 01:08:03,461 client.py:804] Refreshing access_token
INFO 2016-08-04 01:08:05,234 client.py:827] Failed to retrieve access token: {
"error" : "internal_failure"
}
ERROR 2016-08-04 01:08:05,236 api_server.py:272] Exception while handling service_name: "app_identity_service"
method: "GetAccessToken"
request: "\n7https://www.googleapis.com/auth/devstorage.full_control"
request_id: "ccqdTObLrl"
Traceback (most recent call last):
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 247, in _handle_POST
api_response = _execute_request(request).Encode()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 186, in _execute_request
make_request()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/tools/devappserver2/api_server.py", line 181, in make_request
request_id)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub.py", line 131, in MakeSyncCall
method(request, response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/app_identity/app_identity_defaultcredentialsbased_stub.py", line 192, in _Dynamic_GetAccessToken
token = credentials.get_access_token()
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 689, in get_access_token
self.refresh(http)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 604, in refresh
self._refresh(http.request)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 775, in _refresh
self._do_refresh_request(http_request)
File "/home/usr_local/google_appengine_1.9.40/lib/oauth2client/oauth2client/client.py", line 840, in _do_refresh_request
raise AccessTokenRefreshError(error_msg)
AccessTokenRefreshError: internal_failure
WARNING 2016-08-04 01:08:05,239 tasklets.py:468] suspended generator _make_token_async(rest_api.py:55) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator get_token_async(rest_api.py:224) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator urlfetch_async(rest_api.py:259) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator run(api_utils.py:164) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,240 tasklets.py:468] suspended generator do_request_async(rest_api.py:198) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
WARNING 2016-08-04 01:08:05,241 tasklets.py:468] suspended generator do_request_async(storage_api.py:128) raised RuntimeError(AccessTokenRefreshError(u'internal_failure',))
ERROR 2016-08-04 01:08:05,241 main.py:62] AccessTokenRefreshError(u'internal_failure',)
Traceback (most recent call last):
File "/home/dancorn/src/appengine-gcs-client/python/demo/main.py", line 43, in get
self.create_file(filename)
File "/home/dancorn/src/appengine-gcs-client/python/demo/main.py", line 89, in create_file
retry_params=write_retry_params)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/cloudstorage_api.py", line 97, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/storage_api.py", line 697, in __init__
status, resp_headers, content = self._api.post_object(path, headers=headers)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 82, in sync_wrapper
return future.get_result()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/storage_api.py", line 128, in do_request_async
deadline=deadline, callback=callback)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 198, in do_request_async
follow_redirects=False)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/api_utils.py", line 164, in run
result = yield tasklet(**kwds)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 259, in urlfetch_async
self.token = yield self.get_token_async()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 224, in get_token_async
self.scopes, self.service_account_id)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/home/dancorn/src/appengine-gcs-client/python/demo/cloudstorage/rest_api.py", line 55, in _make_token_async
token, expires_at = yield rpc
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
result = rpc.get_result()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/app_identity/app_identity.py", line 519, in get_access_token_result
rpc.check_success()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/usr_local/google_appengine_1.9.40/google/appengine/api/apiproxy_rpc.py", line 157, in _WaitImpl
self.request, self.response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/remote_api/remote_api_stub.py", line 201, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/home/usr_local/google_appengine_1.9.40/google/appengine/ext/remote_api/remote_api_stub.py", line 235, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
RuntimeError: AccessTokenRefreshError(u'internal_failure',)
INFO 2016-08-04 01:08:05,255 module.py:788] default: "GET / HTTP/1.1" 200 249
I was surprised when I saw the attempt to contact a Google server; I was expecting it to use a faked, local filesystem-based emulation, based on these notes from the App Engine and Google Cloud Storage Sample instructions:
Using the client library with the development app server:
You can use the client library with the development server.
**Note**: Files saved locally are subject to the file size and naming conventions imposed by the local filesystem.
app.yaml walkthrough:
You specify the project ID in the line application: your-app-id,
replacing the value your-app-id. This value isn't used when running
locally, but you must supply a valid project ID before deploying: the
deployment utility reads this entry to determine where to deploy your
app.
Deploying the Sample, step 5:
In your browser, visit https://.appspot.com; the
application will execute on page load, just as it did when running
locally. Only this time, the app will actually be writing to and
reading from a real bucket.
I even placed my real app's ID into the app.yaml file, but that didn't make any difference.
I've checked the known GAE issues and only found this potentially related one, but on a much older SDK version:
Issue 11690 GloudStorage bug in GoogleAppEngineLanucher development server
I checked a few older SDK versions I have around (1.9.30, 1.9.35), just in case - no difference either.
My questions:
How can I make the GCS client operate locally (w/ faked GCS based on the local filesystem) when it's used with dev_appserver.py?
Since it's mentioned that it should work with the real GCS even when used with dev_appserver.py, what do I need to do to achieve that? (less important, more of a curiosity)
Actually the reason was what is IMHO a rather silly bug: an inability to read the credentials from a local file written by an earlier version of the SDK (or a related package?), combined with a failure to fall back to a more sensible action, which leads to a rather misleading traceback that throws the investigation off course.
Credit goes to this answer: https://stackoverflow.com/a/35890078/4495081 (though the bug mentioned in that post was for something else, it ultimately triggers a similar end result).
After removing the ~/.config/gcloud/application_default_credentials.json file, the demo completed successfully using the local filesystem. My real app worked fine as well.
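For what it's worth, that manual deletion can be scripted; this is just a sketch, with the directory parameterized so it can be pointed anywhere (the real file lives under ~/.config/gcloud):

```python
import os

def remove_stale_adc(config_dir):
    """Remove gcloud's cached application-default credentials, if present.

    Automates the manual fix above; returns True when a file was deleted.
    """
    cred = os.path.join(config_dir, "application_default_credentials.json")
    if os.path.exists(cred):
        os.remove(cred)
        return True
    return False

# typical invocation against the real gcloud config directory:
# remove_stale_adc(os.path.expanduser("~/.config/gcloud"))
```

A fresh credentials file gets written the next time gcloud authentication runs, so deleting the stale one is safe.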
My 2nd question stands, but I'm not too worried about it - personally I don't see a great value in using the real GCS storage with the local development server - I have to do testing on a real staging GAE app anyways for other reasons.

"Random" SocketError/Connection Refused errors on py2neo queries

Hullo, hope this doesn't end up being too trivial.
The relevant parts of my stack are Gunicorn/Celery, neomodel (0.3.6), and py2neo (1.5). Neo4j version is 1.9.4, bound on 0.0.0.0:7474 (all of this is on linux, Ubuntu 13.04 I think)
So my gunicorn/celery servers are fine most of the time, except occasionally, I get the following error:
ConnectionRefusedError(111, 'Connection refused')
Stacktrace (most recent call last):
File "flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/_compat.py", line 33, in reraise
raise value
File "flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "Noomsa/web/core/util.py", line 156, in inner
user = UserMixin().get_logged_in()
File "Noomsa/web/core/util.py", line 117, in get_logged_in
user = models.User.index.get(username=flask.session["user"])
File "neomodel/index.py", line 50, in get
nodes = self.search(query=query, **kwargs)
File "neomodel/index.py", line 41, in search
return [self.node_class.inflate(n) for n in self._execute(str(query))]
File "neomodel/index.py", line 28, in _execute
return self.__index__.query(query)
File "py2neo/neo4j.py", line 2044, in query
self.__uri__, quote(query, "")
File "py2neo/rest.py", line 430, in _send
raise SocketError(err)
So, as you can see, I make a call to User.index.get (the first call in handling the request) and get a socket error. Sometimes. Most of the time, it connects fine. The error occurs across all Flask views/Celery tasks that use the neo4j connection (and not just User.index.get ;)).
So far, the steps I've taken have involved monkey-patching the neomodel connection function to check that the GraphDatabaseService object is created per thread, and to automatically reconnect (and authenticate) to the neo4j server every 30 seconds or so. This may have reduced the frequency of the errors, but they still occur.
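That reconnect-and-retry patch boils down to a generic retry-with-backoff wrapper; here is a sketch with a deliberately flaky stand-in for the py2neo call:

```python
import time

def with_retries(fn, attempts=3, delay=0.01, retry_on=(ConnectionError,)):
    """Call fn(), retrying with exponential backoff on transient errors."""
    for i in range(attempts):
        try:
            return fn()
        except retry_on:
            if i == attempts - 1:
                raise  # out of attempts: let the error propagate
            time.sleep(delay * (2 ** i))

calls = {"n": 0}
def flaky():
    # stand-in for e.g. User.index.get(...); fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection refused")
    return "ok"

result = with_retries(flaky)
print(result)
```

This papers over intermittent refusals but does not explain them; if the errors correlate with worker count, the server's connection handling is still worth investigating.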
Looking for the error online, it seems to be mostly people trying to connect to the wrong interface/ip/port. However, given that the majority of my requests go through, I don't feel like that is the case here.
Any ideas? I don't think it's related, but my database seems to have 38k orphaned nodes; that's probably worthy of another question in its own right.
EDIT: I should add, this seems to disappear when running gunicorn/celery with workers=1, instead of workers=$CPU_N. Can't see why it should matter, as apparently neo4j is set up to handle $N_CPU*10 connections by default.
This looks like a networking or web stack configuration problem so I don't think I can help from a py2neo perspective. I'd recommend upgrading to py2neo 1.6 though as the client HTTP code has been completely rewritten and it might handle a reconnection more gracefully.
