I've spent 3+ hours a day on this for 18 of the last 21 days. Please, someone, tell me what I'm misunderstanding!
TL;DR: My code is repeatedly sending the db charset to PyMySQL as a string, while PyMySQL expects an object with an attribute called "encoding"
Background
This is Python code running in a Docker container. A second container houses the database. The database address is stored in a .env variable called ENGINE_URL:
ENGINE_URL=mysql+pymysql://root:@database/starfinder_development?charset=utf8
I'm firing off Alembic and Flask-Alembic commands using click commands in the CLI. All of the methods below are used in CLI commands.
Models / Database Setup (works)
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

flask_app = Flask(__name__)
db_engine = SQLAlchemy(flask_app)
from my_application import models

def create_database():
    db_engine.create_all()
At this point I can open up the database container and use the MySQL CLI to see that all of my models have now been converted into tables with columns and relationships.
Attempt 1: Alembic
Create Revision Files with Alembic (works)
import os
from alembic.config import Config

def main():  # fires prior to any CLI command
    filepath = os.path.join(os.path.dirname(__file__), "alembic.ini")
    alembic_config = Config(file_=filepath)
    alembic_config.set_main_option("sqlalchemy.url", ENGINE_URL)
    alembic_config.set_main_option("script_location", SCRIPT_LOCATION)
    migrate_cli(obj={"alembic_config": alembic_config})

def revision(ctx, message):
    alembic_config = ctx.obj["alembic_config"]
    alembic_command.revision(alembic_config, message)
At this point I have a migration file that was created exactly as expected. Then I need to upgrade the database using that migration...
Running Migrations with Alembic (fails)
def upgrade(ctx, migration_revision):
    alembic_config = ctx.obj["alembic_config"]
    migration_revision = migration_revision.lower()
    _dispatch_alembic_cmd(alembic_config, "upgrade", revision=migration_revision)
Firing this off with cli_command upgrade head causes a failure, which I've included at the bottom because its stack trace is identical to my second attempt's.
Attempt 2: Flask-Alembic
This attempt finds me completely rewriting my main and revision commands, but it doesn't get as far as using upgrade.
Create Revision Files with Flask-Alembic (fails)
from flask_alembic import Alembic

def main():  # fires prior to any CLI command
    alembic_config = Alembic()
    alembic_config.init_app(flask_app)
    migrate_cli(obj={"alembic_config": alembic_config})

def revision(ctx, message):
    with flask_app.app_context():
        alembic_config = ctx.obj["alembic_config"]
        print(alembic_config.revision(message))
This results in an error that is identical to the error from my previous attempt.
The stack trace in both cases:
(Identical failure using alembic upgrade & flask-alembic revision)
File "/Users/MyUser/.pyenv/versions/3.6.2/envs/sf/lib/python3.6/site-packages/pymysql/connections.py", line 678, in __init__
self.encoding = charset_by_name(self.charset).encoding
AttributeError: 'NoneType' object has no attribute 'encoding'
In response, I went into the above file and added a print on L677, immediately prior to the error:
print(self.charset)
utf8
Note: If I modify my ENGINE_URL to use a different ?charset=xxx, that change is reflected here.
So now I'm stumped
PyMySQL calls charset_by_name(self.charset) and expects it to return an object with an encoding attribute, but the lookup is returning None even though self.charset is a normal-looking string. How can I change this to behave as expected?
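For what it's worth, the failing lookup can be reproduced outside the connection code. A minimal sketch, assuming PyMySQL is importable in the same environment:

from pymysql.charset import charset_by_name

print(charset_by_name("utf8"))    # a Charset object whose .encoding is "utf8"
print(charset_by_name("utf8 "))   # None: an unknown name (or stray whitespace) breaks the lookup

If charset_by_name returns None for the exact string PyMySQL prints, then the name it receives is not a name it knows.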
Help?
A valid answer would be an alternative process, though the "most correct" answer would be to help me resolve the charset/encoding problem.
My primary goal here is simply to get migrations working on my flask app.
Using Python 3.8, CDK 2.19.0
I want to create an A Record against a hosted zone that's already in my AWS account.
I am doing the following:
hosted_zone = route53.HostedZone.from_hosted_zone_attributes(self, "zone",
    zone_name="my.awesome.zone.",
    hosted_zone_id="ABC12345DEFGHI"
)

route53.ARecord(self, "app_record_set",
    target=self.lb.load_balancer_dns_name,  # this is declared above, and works fine.
    zone=hosted_zone,
    record_name="test-cdk.my.awesome.zone"
)
Inside my app.py I have:
env_EU = cdk.Environment(account="12345678901112", region="eu-west-1")
app = cdk.App()
create_a_record = DomianName(app, "DomianName", env=env_EU)
When I run cdk synth I get the following error:
➜ cdk synth
jsii.errors.JavaScriptError:
Error: Expected object reference, got "${Token[TOKEN.303]}"
File ".../.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 326, in send
...(full traceback)
Subprocess exited with error 1
I've tried from_lookup (rather than from_hosted_zone_attributes), plus Python 3.9 and Node 17/16/12 (just in case), but nothing helps. I get the same error every time.
If I comment out the A Record creation, then the synth completes as expected.
cdk.context.json also has the correct hosted zone cached, but that only happens if I comment out the A record creation.
The ARecord target expects a RecordTarget, but you are passing a string (a token). Wrap the load balancer in a LoadBalancerTarget and pass it through RecordTarget.from_alias:
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_route53_targets as targets

# zone: route53.HostedZone
# lb: elbv2.ApplicationLoadBalancer
route53.ARecord(self, "AliasRecord",
    zone=zone,
    target=route53.RecordTarget.from_alias(targets.LoadBalancerTarget(lb))
)
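Adapted to the snippet in the question (a sketch reusing the question's names, with hosted_zone and self.lb as defined above):

import aws_cdk.aws_route53_targets as targets

route53.ARecord(self, "app_record_set",
    zone=hosted_zone,
    record_name="test-cdk.my.awesome.zone",
    target=route53.RecordTarget.from_alias(targets.LoadBalancerTarget(self.lb))
)

from_alias creates a Route 53 alias record, so the record keeps resolving to the load balancer's current addresses rather than a fixed DNS string.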
Every time I try to launch my notebook I get the error below.
For context: I'm new on the project, and the file config.py was created before I joined the team.
Does anyone know how to resolve it, please?
The code involved is below.
Requirements.txt
psycopg2==2.7.3.2
SQLAlchemy==1.2.2
pandas==0.21.0
docker==3.3.0
python-json-logger
sshtunnel==0.1.4
jupyter
jupytext==1.2
geopy==2.2.0
Error detail
~/SG/notebooks/config.py in <module>
1 # Using jupytext
----> 2 c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
3 c.ContentsManager.default_jupytext_formats = "ipynb,py"
NameError: name 'c' is not defined
Code
The line causing the error in the notebook is
from src.util.connect_postgres import postgres_connexion
The content of the file connect_postgres:
from sqlalchemy import create_engine
from config.util.database import TARGET_TEST_HOST, TARGET_PROD_HOST, \
    TARGET_TEST_DB, TARGET_PROD_DB, TARGET_TEST_USER, TARGET_PROD_USER, SG_PROD_USER, SG_PROD_HOST
from config.secrets.passwords import TARGET_PROD_PWD, TARGET_TEST_PWD, SG_PROD_PWD
from sshtunnel import SSHTunnelForwarder
import psycopg2

def _create_engine_psg(user, db, host, port, pwd):
    """ Returns a connection object to PostgreSQL """
    url = build_postgres_url(db, host, port, pwd, user)
    return create_engine(url, client_encoding='utf8')

def build_postgres_url(db, host, port, pwd, user):
    url = 'postgresql://{}:{}@{}:{}/{}'.format(user, pwd, host, port, db)
    return url

def postgres_connexion(env):
    if env == 'prod':
        return create_engine_psg_with_tunnel_ssh(TARGET_PROD_DB,
                                                 TARGET_PROD_USER, TARGET_PROD_PWD, SG_PROD_PWD,
                                                 SG_PROD_USER,
                                                 SG_PROD_HOST, TARGET_PROD_HOST)
    else:
        raise ValueError("'env' parameter must be 'prod'.")
config.py
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
c.ContentsManager.default_jupytext_formats = "ipynb,py"
I read that I can generate the file and then edit it.
But when I try to create jupyter_notebook_config, it always ends up in my personal directory:
/Users/marczhr/.jupyter/jupyter_notebook_config.py
I want it somewhere I can push to git.
Hope that I'm clear ^^
Thank you!
Don't run the notebook from the directory with the configuration file.
The reason is that the listed code imports a config module or package. By launching the notebook from the directory containing the configuration file, Python imports that Jupyter configuration file instead of the correct package or module, with the resulting error.
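You can see the clash directly with a quick check, run from the ~/SG/notebooks directory named in the traceback:

python -c "import config"
# NameError: name 'c' is not defined
# Same failure as the notebook: ./config.py shadows the intended config package,
# because the current directory comes first on sys.path.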
Instead, run it from somewhere else, or put the configuration file elsewhere.
Or perhaps best, take the two configuration lines and add them to the end of your /Users/marczhr/.jupyter/jupyter_notebook_config.py file, then remove the 2-3 line config.py file.
In the latter case, you can now launch the notebook server from anywhere, and you don't need to specify any configuration file, since Jupyter will automatically use the generated (with added lines) one.
If you want to keep the config.py file, then launch the Jupyter notebook server from another directory, and simply specify the full path, like
jupyter notebook --config=$HOME/SG/notebooks/config.py
All in all, this is a classic name clash on import, caused by identically named files/directories. Always be wary of that.
(I've commented on some other potential problems in the comments: that still stands, but is irrelevant to the current problem here.)
I have a Python script (list.py) which is used to interact with a PostgreSQL database.
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine(os.getenv("postgresql://postgres:nehal@localhost:5432/lecture3"))
db = scoped_session(sessionmaker(bind=engine))

def main():
    flights = db.execute("SELECT origin, destination, duration FROM flights").fetchall()
    for flight in flights:
        print(f"{flight.origin} to {flight.destination}, {flight.duration} minutes.")

if __name__ == "__main__":
    main()
I have PostgreSQL installed on Ubuntu 16.04 with lecture3 as the database. When I execute the code as python list.py, I get the following error:
Traceback (most recent call last):
File "list.py", line 5, in <module>
engine = create_engine(os.getenv("postgresql://postgres:nehal@localhost:5432/lecture3"))
File "/home/nehal/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/__init__.py", line 424, in create_engine
return strategy.create(*args, **kwargs)
File "/home/nehal/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 52, in create
plugins = u._instantiate_plugins(kwargs)
AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'
postgres is the PostgreSQL username and nehal is the password.
How do I correct the error?
os.getenv is used to get the value of an environment variable, and returns None by default if that variable doesn't exist. You're passing it your connection string, which (almost certainly) doesn't exist as an environment variable. So it's returning None, which is given to create_engine, which fails because it's expecting a connection string. Just pass your connection string in directly:
engine = create_engine("postgresql://postgres:nehal#localhost:5432/lecture3")
I would try to run this without the getenv, which seems unnecessary here and may return None:
create_engine("postgresql://postgres:nehal#localhost:5432/lecture3")
This works for me:
export DATABASE_URL="postgres://localhost/lecture3"
export DATABASE_URL="postgres://localhost:5432/lecture3"
export DATABASE_URL="postgres:///lecture3"
You should set this environment variable before running $ python list.py, where:

username: your psql username
password: your psql password
server: localhost or a remote server
port: 5432
database: your psql database

(flask) $ export DATABASE_URL="postgresql://username:password@localhost:5432/database"
Verify:
(flask) $ echo $DATABASE_URL
Run:
$ python list.py
For me, the below worked perfectly without any issues:
engine = create_engine("postgres://postgres:password@localhost:5432")
where "password" should be replaced with the PostgreSQL password you chose while installing PostgreSQL.
If you are using Windows, setting it up as a system variable resolves the issue. For example, on Windows 7: click Start and type "computer" to open the Computer program, then click "System properties". In the System Properties window, click the "Advanced" tab, then "Environment Variables"; this opens a dialog with user variables and system variables. Click "New" under system variables, type "DATABASE_URL" as the variable name and "postgres+psycopg2://postgres:password@localhost:5432/postgres" as the variable value, then click "OK".
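If you prefer the command line, the same variable can be set persistently with setx. A sketch; note that setx only affects consoles opened afterwards, not the one you run it in:

setx DATABASE_URL "postgres+psycopg2://postgres:password@localhost:5432/postgres"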
Open a new command prompt using cmd and change directory to wherever your list.py program lives. In my case it is in c:\cs50\src3, so after opening cmd, cd c:\cs50\src3, then execute python list.py and you will see your results.
I tried following the documentation for running ZEO on a ZODB database, but it isn't working the way they say it should.
I can get a regular ZODB running fine, but I would like to make the database accessible by several processes for a program, so I am trying to get ZEO to work.
I created this script in a folder with a subfolder zeo, which will hold the "database.fs" files created by the make_server function in a different parallel process:
CODE:
from ZEO import ClientStorage
import ZODB
import ZODB.config
import os, time, site, subprocess, multiprocessing

# make the server for the database in a separate process with a windows command
def make_server():
    runzeo_path = site.getsitepackages()[0] + "\Lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\\runzeo.py"
    filestorage_path = os.getcwd() + '\zeo\database.fs'
    subprocess.call(["python", runzeo_path, "-a", "127.0.0.1:9100", "-f", filestorage_path])

if __name__ == "__main__":
    server_process = multiprocessing.Process(target=make_server)
    server_process.start()
    time.sleep(5)

    storage = ClientStorage.ClientStorage(('localhost', 9100), wait=False)
    db = ZODB.DB(storage)
    connection = db.open()
    root = connection.root()
The program just blocks at the ClientStorage line if wait=False is not given.
If wait=False is given, it produces this error:
Error Message:
Traceback (most recent call last):
File "C:\Users\cbrown\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\2 - database\zeo.py", line 17, in <module>
db = ZODB.DB(storage)
File "C:\Python27\lib\site-packages\zodb-4.0.0-py2.7.egg\ZODB\DB.py", line 443, in __init__
temp_storage.load(z64, '')
File "C:\Python27\lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\ClientStorage.py", line 841, in load
data, tid = self._server.loadEx(oid)
File "C:\Python27\lib\site-packages\zeo-4.0.0-py2.7.egg\ZEO\ClientStorage.py", line 88, in __getattr__
raise ClientDisconnected()
ClientDisconnected
Here is the output from the cmd prompt for my process which runs a server:
------
2013-12-06T21:07:27 INFO ZEO.runzeo (7460) opening storage '1' using FileStorage
------
2013-12-06T21:07:27 WARNING ZODB.FileStorage Ignoring index for C:\Users\cab0008
\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\2 - database\zeo\databas
e.fs
------
2013-12-06T21:07:27 INFO ZEO.StorageServer StorageServer created RW with storage
s: 1:RW:C:\Users\cab0008\Google Drive\EclipseWorkspace\NewSpectro - v1\20131202\
2 - database\zeo\database.fs
------
2013-12-06T21:07:27 INFO ZEO.zrpc (7460) listening on ('127.0.0.1', 9100)
What could I be doing wrong? I just want this to work locally right now so there shouldn't be any need for fancy web stuff.
You should use proper process management and simplify your life. You likely want to look into supervisor, which can be responsible for running/starting/stopping your application and ZEO.
Otherwise, you need to look at the double-fork trick to daemonize ZEO -- but why bother when a process management tool like supervisor does this for you.
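For example, a minimal supervisord program entry might look like the sketch below. The paths are assumptions; runzeo and its -a/-f options are the same ones the question's make_server invokes:

[program:zeo]
command=runzeo -a 127.0.0.1:9100 -f /path/to/zeo/database.fs
autorestart=true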
If you are savvy with relational database administration and already have a relational database at your disposal, you can also consider RelStorage as a very good low-level ZODB storage backend.
On Windows you should use a double \\ instead of a single \ in paths. An easy and portable way to accomplish this is the os.path.join() function, e.g. os.path.join(os.getcwd(), 'zeo', 'database.fs'). Otherwise, similar code worked fine for me.
I had the same error on Windows; on Linux everything was OK...
Your code is OK. To make this work, change the following:
C:\Python33\Lib\site-packages\ZEO-4.0.0-py3.3.egg\ZEO\zrpc\trigger.py ln:235
self.trigger.send(b'x')
C:\Python33\Lib\site-packages\ZEO-4.0.0-py3.3.egg\ZEO\zrpc\client.py ln:458:459 - comment them out
Here are those lines:
if socktype != socket.SOCK_STREAM:
    continue
I have a Python script that sets up several Gearman workers. They call into some methods on SQLAlchemy models I have that are also used by a Pylons app.
Everything works fine for an hour or two, then the MySQL thread gets lost and all queries fail. I cannot figure out why the thread is getting lost (I get the same results on 3 different servers) when I am defining such a low value for pool_recycle. Also, why wouldn't a new connection be created?
Any ideas of things to investigate?
import gearman
import json
import ConfigParser
import sys
from sqlalchemy import create_engine

class JSONDataEncoder(gearman.DataEncoder):
    @classmethod
    def encode(cls, encodable_object):
        return json.dumps(encodable_object)

    @classmethod
    def decode(cls, decodable_string):
        return json.loads(decodable_string)

# get the ini path and load the gearman server ips:ports
try:
    ini_file = sys.argv[1]
    lib_path = sys.argv[2]
except Exception:
    raise Exception("ini file path or anypy lib path not set")

# get the config
config = ConfigParser.ConfigParser()
config.read(ini_file)
sqlachemy_url = config.get('app:main', 'sqlalchemy.url')
gearman_servers = config.get('app:main', 'gearman.mysql_servers').split(",")

# add anypy include path
sys.path.append(lib_path)
from mypylonsapp.model.user import User, init_model
from mypylonsapp.model.gearman import task_rates

# sqlalchemy setup, recycle connection every hour
engine = create_engine(sqlachemy_url, pool_recycle=3600)
init_model(engine)

# Gearman Worker Setup
gm_worker = gearman.GearmanWorker(gearman_servers)
gm_worker.data_encoder = JSONDataEncoder()

# register the workers
gm_worker.register_task('login', User.login_gearman_worker)
gm_worker.register_task('rates', task_rates)

# work
gm_worker.work()
I've seen this across the board for Ruby, PHP, and Python, regardless of the DB library used. I couldn't find how to fix this the "right" way, which is to use mysql_ping, but there is a SQLAlchemy solution, explained better in this thread: http://groups.google.com/group/sqlalchemy/browse_thread/thread/9412808e695168ea/c31f5c967c135be0
As someone in that thread points out, setting the recycle option to True is equivalent to setting it to 1. A better solution might be to find your MySQL connection timeout value and set the recycle threshold to 80% of it.
You can get that value from a live server by looking up this variable: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_connect_timeout
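A sketch of that suggestion, reusing the engine setup from the question and the engine-level execute() of SQLAlchemy versions from that era (SHOW VARIABLES returns (name, value) rows; the 80% factor is the rule of thumb above):

from sqlalchemy import create_engine

# look up the server timeout once, then build the real engine with
# pool_recycle at roughly 80% of it
probe = create_engine(sqlachemy_url)
row = probe.execute("SHOW VARIABLES LIKE 'connect_timeout'").fetchone()
engine = create_engine(sqlachemy_url, pool_recycle=int(int(row[1]) * 0.8))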
Edit:
It took me a bit to find the authoritative documentation on using pool_recycle:
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html?highlight=pool_recycle