Hullo, hope this doesn't end up being too trivial.
The relevant parts of my stack are Gunicorn/Celery, neomodel (0.3.6), and py2neo (1.5). The Neo4j version is 1.9.4, bound to 0.0.0.0:7474 (all of this on Linux; Ubuntu 13.04, I think).
So my gunicorn/celery servers are fine most of the time, except occasionally, I get the following error:
ConnectionRefusedError(111, 'Connection refused')
Stacktrace (most recent call last):
File "flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/_compat.py", line 33, in reraise
raise value
File "flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "Noomsa/web/core/util.py", line 156, in inner
user = UserMixin().get_logged_in()
File "Noomsa/web/core/util.py", line 117, in get_logged_in
user = models.User.index.get(username=flask.session["user"])
File "neomodel/index.py", line 50, in get
nodes = self.search(query=query, **kwargs)
File "neomodel/index.py", line 41, in search
return [self.node_class.inflate(n) for n in self._execute(str(query))]
File "neomodel/index.py", line 28, in _execute
return self.__index__.query(query)
File "py2neo/neo4j.py", line 2044, in query
self.__uri__, quote(query, "")
File "py2neo/rest.py", line 430, in _send
raise SocketError(err)
So, as you can see, I do a call to User.index.get (The first call in the request response), and get a socket error. Sometimes. Most of the time, it connects fine. The error occurs amongst all Flask views/Celery tasks that use the neo4j connection (and not just doing User.index.get ;)).
So far, the steps I've taken have involved monkey patching the neomodel connection function to ensure the GraphDatabaseService object is created per thread, and to automatically reconnect (and authenticate) to the neo4j server every 30 seconds or so. This may have reduced the frequency of the errors, but they still occur.
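Roughly, the per-thread caching part of that patch looks like this (a sketch only; `factory` is a placeholder for whatever builds the GraphDatabaseService, not a neomodel API):

```python
import threading

# One connection per thread, created lazily on first use.
_local = threading.local()

def get_connection(factory):
    """Return this thread's cached connection, creating it via factory() on first use."""
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = factory()
        _local.conn = conn
    return conn
```

Each Gunicorn/Celery worker thread then gets its own connection object instead of sharing one across threads.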
Searching for the error online, it mostly seems to affect people connecting to the wrong interface/IP/port. However, given that the majority of my requests go through, I don't think that's the case here.
Any ideas? I don't think it's related, but my database seems to have 38k orphaned nodes; that's probably worthy of another question in its own right.
EDIT: I should add, this seems to disappear when running gunicorn/celery with workers=1 instead of workers=$N_CPU. I can't see why that should matter, as neo4j is apparently set up to handle $N_CPU*10 connections by default.
This looks like a networking or web stack configuration problem so I don't think I can help from a py2neo perspective. I'd recommend upgrading to py2neo 1.6 though as the client HTTP code has been completely rewritten and it might handle a reconnection more gracefully.
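As a stopgap while you debug the networking side, a small retry wrapper around the flaky call is a common pattern (a hedged sketch; `with_retries` and its parameters are invented here, not part of py2neo):

```python
import time

def with_retries(fn, attempts=3, delay=0.2, exceptions=(OSError,)):
    """Call fn(), retrying a few times with a short pause on socket-level errors."""
    for i in range(attempts):
        try:
            return fn()
        except exceptions:
            if i == attempts - 1:
                raise  # out of attempts; surface the original error
            time.sleep(delay)
```

This only papers over intermittent refusals; if the refusals are systematic (e.g. a connection-limit issue), retries will just delay the failure.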
Related
I'm going through the Flask-based tutorial "Learn Flask Framework" by Matt Copperwaite and am now stuck on the following error.
After adding Flask-Admin I started to build an admin dashboard with it. I tried to add the FileAdmin module to manage static files:
from flask_admin.contrib.fileadmin import FileAdmin
Now I'm getting the following error when trying to access the corresponding web form:
[2021-12-15 14:14:32,563] ERROR in app: Exception on /admin/blogfileadmin/ [GET]
Traceback (most recent call last):
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/base.py", line 69, in inner
return self._run_view(f, *args, **kwargs)
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/base.py", line 368, in _run_view
return fn(self, *args, **kwargs)
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/contrib/fileadmin/__init__.py", line 812, in index_view
delete_form = self.delete_form()
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/contrib/fileadmin/__init__.py", line 495, in delete_form
delete_form_class = self.get_delete_form()
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/contrib/fileadmin/__init__.py", line 425, in get_delete_form
class DeleteForm(self.form_base_class):
File "/home/demino/WebProjects/blog/blog/lib/python3.9/site-packages/flask_admin/contrib/fileadmin/__init__.py", line 426, in DeleteForm
path = fields.HiddenField(validators=[validators.Required()])
AttributeError: module 'wtforms.validators' has no attribute 'Required'
I've already run into a bunch of problems caused by the book being published in 2015, but so far I've managed to solve them quite fast. Now I'm stuck and can't find information that could help.
Thanks in advance.
Edit1: Solved - manually changed the validators.Required() call to validators.DataRequired() in flask-admin's fileadmin __init__.py. The two were distinguished in WTForms v1.0.2 (https://wtforms.readthedocs.io/en/stable/changes/#version-1-0-2). Not sure yet whether Data- or Input- requirement is correct here. Will see.
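Rather than editing the installed flask-admin package, one alternative is a small compatibility shim run before flask-admin is imported. This is a sketch under assumptions (it presumes the only missing name is the removed `Required` alias, and falls back to a stand-in namespace so the pattern stays self-contained when wtforms isn't installed):

```python
import types

try:
    import wtforms.validators as validators
except ImportError:
    # Stand-in so the pattern is demonstrable without wtforms installed.
    validators = types.SimpleNamespace(DataRequired=object)

# Newer WTForms removed the Required alias; restore it so older callers
# (like this flask-admin version) keep working unmodified.
if not hasattr(validators, "Required"):
    validators.Required = validators.DataRequired
```

The upside is that reinstalling or upgrading flask-admin won't revert your fix, since nothing in site-packages is edited.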
On a small Flask webserver running on a RaspberryPi with about 10-20 clients, we periodically get this error:
Error on request:
Traceback (most recent call last):
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 270, in run_wsgi
execute(self.server.app)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 258, in execute
application_iter = app(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask_socketio/__init__.py", line 43, in __call__
start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/middleware.py", line 47, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/socketio/server.py", line 360, in handle_request
return self.eio.handle_request(environ, start_response)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 291, in handle_request
socket = self._get_socket(sid)
File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 427, in _get_socket
raise KeyError('Session is disconnected')
KeyError: 'Session is disconnected'
The error is generated automatically from inside python-socketio. What does this error really mean and how can I prevent or suppress it?
As far as I can tell, this usually means the server can't keep up with supplying data to all of the clients.
Some possible mitigation techniques include disconnecting inactive clients, reducing the amount of data sent where possible, sending live data in larger chunks, or upgrading the server. If you need a lot of data throughput, there may also be a better option than Socket.IO.
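The first of those mitigations, disconnecting inactive clients, can be sketched independently of Socket.IO itself (the names `touch` and `idle_clients` are invented for illustration; in a real app you would call `touch` from your event handlers and `disconnect()` each sid this returns):

```python
import time

last_seen = {}  # sid -> timestamp of last activity

def touch(sid, now=None):
    """Record activity for a client (call from your Socket.IO event handlers)."""
    last_seen[sid] = time.time() if now is None else now

def idle_clients(timeout, now=None):
    """Return the sids that have been quiet for longer than `timeout` seconds."""
    now = time.time() if now is None else now
    return [sid for sid, t in last_seen.items() if now - t > timeout]
```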
I have been able to reproduce it by setting a really high ping rate and a low timeout in the SocketIO constructor:
from flask_socketio import SocketIO
socketio = SocketIO(engineio_logger=True, ping_timeout=5, ping_interval=5)
This means the server has to do a lot of messaging to all of the clients and they don't have long to respond. I then open around 10 clients and I start to see the KeyError.
Further debugging of our server found a process that was posting lots of live data; it ran fine with only a few clients but started to issue the occasional KeyError once I got up to about a dozen.
I'm guessing this is a really simple question but I don't know the terminology for it; hopefully it's a quick answer and close!
Info on the project: It's a Python 3.8 project which is based on Google Cloud Platform, using Cloud Functions, BigQuery, Secret Manager, PubSub, Scheduler and uses a service account (not the project default) for authentication. Aforementioned service account has appropriate permissions for everything it does - works great when triggered in isolation.
I've got a couple of Google Cloud Functions apps I'd like to execute at exactly the same time. I'm using a single service account for authentication between both apps in the same GCP project; that keeps things nice and simple, as I already have to set up quite a few things whenever I make another instance. Unfortunately, when I execute one of my apps while the other one is running, the initial one fails, presumably because the newer invocation now holds the service account's authentication.
Is there any way to fix this so both apps are able to continue to be authenticated using the same credentials? Would it be as easy as using different secret versions within Google Secret Manager?
Normally I'd just delay it but I'm mostly looking for a way to make my app scalable and not conflict with other apps which are running.
Update:
Looking at the logs, there might be an issue with how my BigQuery functions specifically are being called. It seems to be quota-related, specifically to do with table updates (the failure surfaces in polling.py), but I'm not sure what's causing it. Seems unusual.
Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/functions_framework/__init__.py", line 149, in view_func
function(data, context)
File "/workspace/main.py", line 1016, in main
primaryRequest(ga_sessions_1g1, reportName='ga_sessions_1g1')
File "/workspace/main.py", line 1003, in primaryRequest
write_to_bq(reportName)
File "/workspace/main.py", line 958, in write_to_bq
job.result() # Waits for the job to complete.
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/cloud/bigquery/job/base.py", line 631, in result
return super(_AsyncJob, self).result(timeout=timeout, **kwargs)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/api_core/future/polling.py", line 134, in result
raise self._exception
google.api_core.exceptions.Forbidden: 403 Exceeded rate limits: too many table update operations for this table. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
Here's the function which contains line 958 as above:
def write_to_bq(bqLookup):
    client = bigquery.Client(credentials=scopedBqCredentials, project=bqCredentials.project_id)
    bqTableLookup = bqTableDict.get(bqLookup)
    table_id = f'{gcpProject}.{bqDataset}.{bqTableLookup}'
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV, skip_leading_rows=0, autodetect=False,
    )
    with open('/tmp/result.csv', "rb") as source_file:
        job = client.load_table_from_file(source_file, table_id, job_config=job_config)
    job.result()  # Waits for the job to complete.
    table = client.get_table(table_id)  # Make an API request.
    print("Loaded {} rows and {} columns to {}".format(
        table.num_rows, len(table.schema), table_id))
You are performing a load job with the method client.load_table_from_file(). With BigQuery you are limited to 1,500 load jobs per table per day (roughly one per minute in a constant flow).
Note: the 403 error misleadingly suggests a permission problem. A better error code could be used by the BigQuery API (429, for example) when a quota is exceeded.
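One way to stay under that limit is to space load jobs out per table with a simple in-memory guard. This is a sketch for illustration only (`MIN_INTERVAL` and the module-level dict are assumptions, and batching rows into fewer, larger load jobs is usually the better fix):

```python
import time

MIN_INTERVAL = 60.0  # seconds between load jobs per table (~1 per minute)
_last_load = {}      # table_id -> timestamp of the last load we started

def may_load(table_id, now=None):
    """Return True if enough time has passed to start another load job for this table."""
    now = time.time() if now is None else now
    last = _last_load.get(table_id)
    if last is not None and now - last < MIN_INTERVAL:
        return False
    _last_load[table_id] = now
    return True
```

Note this only guards a single process; concurrent Cloud Functions instances would each keep their own state, so for true coordination you'd need something shared (e.g. a queue in front of the loads).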
I have a small Flask server that I'm running mostly for experimenting and for tools I'm developing for personal use (on my home network). It runs in development mode on a Raspberry Pi and is configured to launch on startup via rc.local:
sudo -H -u pi /home/pi/Server/start.sh &
and the start.sh file reads
#!/bin/bash
cd /home/pi/Server
source /home/pi/Server/venv/bin/activate
export FLASK_APP=/home/pi/Server/app.py
export FLASK_ENV=development
export FLASK_RUN_HOST=192.168.1.104
export FLASK_RUN_PORT=5001
flask run
For the first couple of days everything ran fine, but now I get the following error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/lib/python3/dist-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/usr/lib/python3/dist-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3/dist-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/lib/python3/dist-packages/flask/app.py", line 2291, in wsgi_app
ctx.push()
File "/usr/lib/python3/dist-packages/flask/ctx.py", line 377, in push
self.app, self.request
File "/usr/lib/python3/dist-packages/flask/sessions.py", line 343, in open_session
data = s.loads(val, max_age=max_age)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 643, in loads
.unsign(s, max_age, return_timestamp=True)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 466, in unsign
return value, self.timestamp_to_datetime(timestamp)
File "/usr/lib/python3/dist-packages/itsdangerous.py", line 404, in timestamp_to_datetime
return datetime.utcfromtimestamp(ts + EPOCH)
OverflowError: timestamp out of range for platform time_t
From what I see here, this is an issue with the browser's cached session cookie. How can I tell Flask to cope with this?
Looks like you're using sessions/cookies? Try looking into that; maybe the date in the cookie is malformed or invalid. Try clearing it with session.clear(), or use a shorter expiration date. I've also had issues after upgrading from Python 2 to 3 that messed up the cookies; if you've done that, you need to clear your cache so Python 3 date/time cookies can be set.
This seems to be an error that occurs when the time returned is 0, as in this Adafruit CircuitPython NTP issue. A direct approach would be to patch some Flask dependencies with a PR.
However, this seems more likely to be an issue with your cache age. Try reducing it to a short time:
@app.after_request
def after_request(response):
    response.headers["Cache-Control"] = "max-age=300"  # in seconds
    return response
What I am trying to achieve is pretty simple.
I want to use Flask to create a web app that connects to a remote Server via API calls (specifically ParseServer).
I am using a third-party library to achieve this, and everything works perfectly when I run my code as a stand-alone script. But when I move the code into Flask, I suddenly can't authenticate with the server; to be more precise, I get an 'unauthorized' error when executing an API call.
It seems to me that in Flask, the registration performed by the API library is not remembered.
I tried putting the registration and initialization code in many different places in the Flask app; nothing worked.
I asked a similar question on the library's GitHub with no luck.
So I guess I have two questions that could help me solve this:
1) Where should I put the registration call and the imports from this library?
2) What can I do to identify the issue specifically, e.g. to know precisely what's wrong?
Here's some code
The Flask code is here
@app.route('/parseinsert')
def run_parse_db_insert():
    """The method for testing implementation and design of the Parse Db"""
    pc = ParseCommunication()
    print(pc.get_all_names_rating_table())
    return 'done'
ParseCommunication is my class that deals with Parse. If I run parseCommunication.py as a script, with the same code as above in its main block, everything works perfectly.
I run the Flask app with dev_appserver.py from Google App Engine.
My folder structure
/parseTest
    /aplication
        views.py
    app.yaml
    run.py
My run.py code
import os
import sys
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'lib'))
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'application'))
import aplication
Let me know what else I could provide to help out.
Thank you in advance.
EDIT:
A stack trace as requested.
It's mostly related to the library (from what I can tell).
ERROR 2016-09-28 06:45:50,271 app.py:1587] Exception on /parseinsert [GET]
Traceback (most recent call last):
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/views.py", line 34, in run_parse_db_insert
name = pc.get_user('testuser1')
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/parseCommunication.py", line 260, in get_user
return User.Query.get(username=uname)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 58, in get
return self.filter(**kw).get()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 150, in get
results = self._fetch()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 117, in _fetch
return self._manager._fetch(**options)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 41, in _fetch
return [klass(**it) for it in klass.GET(uri, **kw).get('results')]
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 108, in GET
return cls.execute(uri, 'GET', **kw)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 102, in execute
raise exc(e.read())
ResourceRequestLoginRequired: {"error":"unauthorized"}
Parse requires keys and env variables. Check this line:
API_ROOT = os.environ.get('PARSE_API_ROOT') or 'https://api.parse.com/1'
Your error is in line 102 at:
https://github.com/milesrichardson/ParsePy/blob/master/parse_rest/connection.py
Before you can parse, you need to register:
from parse_rest.connection import register
APPLICATION_ID = '...'
REST_API_KEY = '...'
MASTER_KEY = '...'
register(APPLICATION_ID, REST_API_KEY, master_key=MASTER_KEY)
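In a Flask app the key point is that register() must run in every worker process before the first API call, e.g. once at module import time in views.py rather than inside a view. A register-once guard can be sketched like this (`ensure_registered` and `do_register` are invented names; `do_register` stands in for the register() call above):

```python
_registered = False

def ensure_registered(do_register):
    """Run the registration callable exactly once per process."""
    global _registered
    if not _registered:
        do_register()
        _registered = True
```

Calling ensure_registered() at the top of each view (or once at import) guarantees the connection is authenticated no matter which route is hit first.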