define_table does not create table in database - python

I am running define_table in the recommended way:
db = DAL('postgres://user:XXXX@localhost:5432/mydb', migrate_enabled=False, auto_import=False, lazy_tables=True)
db.define_table('auth_user',
    Field('email', unique=True),
    Field('password', length=512, type='password', readable=False, label='Password'),
    ...)
This executes without errors, but no table is created in the database. Whenever I try to insert a new user, I get:
relation "auth_user" does not exist
What can be going on? Once the tables are created (manually, for example), the application works fine. I am using a postgres backend, and this happens no matter what value I give to lazy_tables.
EDIT
This is the full test script:
from gluon import DAL
from gluon import Field
db = DAL('postgres://user:pass@localhost:5432/mydb', migrate_enabled=False)
db.define_table(
    'auth_user',
    Field('email', type='string', unique=True),
    Field('password', type='password'),
    Field('registration_key', type='string', length=512, writable=False, readable=False, default=''),
    Field('reset_password_key', type='string', length=512, writable=False, readable=False, default=''),
    Field('registration_id', type='string', length=512, writable=False, readable=False, default=''),
)
db.commit()
print db.tables
db.auth_user.insert(email='g@b.c')
And I get the following output:
['auth_user']
Traceback (most recent call last):
File "xxx.py", line 19, in <module>
db.auth_user.insert(email='g@b.c')
File "/tmp/web2py/gluon/dal.py", line 9293, in insert
ret = self._db._adapter.insert(self, self._listify(fields))
File "/tmp/web2py/gluon/dal.py", line 1361, in insert
raise e
psycopg2.ProgrammingError: relation "auth_user" does not exist
LINE 1: INSERT INTO auth_user(reset_password_key,registration_id,reg...
The table is somehow "created" (in memory?), but it is not really in the postgres database. What does this mean?

Simply remove migrate_enabled=False, which turns off migrations and therefore prevents the creation or modification of database tables. There is also no need to explicitly set auto_import=False as that is already the default.
If the above doesn't help, it is possible that web2py did successfully create such a table previously and it was removed without web2py knowing about it. If the application's /databases folder includes a file with a name like *_auth_user.table, delete that file and try again.
If that's not the issue, check the /databases/sql.log file and confirm that web2py attempted to create the table. Most likely, something in your system configuration is preventing the table from being created.
UPDATE: From your edit, it appears you are using the DAL outside of a web2py application. Because you have not specified the folder argument to the DAL() constructor, it will save the *.table migration files in the current working directory, and it will not create a sql.log file. In this case, it is best to create a separate folder for the migration and log files:
DAL('postgres://user:pass@localhost:5432/mydb', folder='/path/to/folder')
In that case, it will save all of the *.table migration files and the sql.log file in the specified folder.
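Putting the two fixes together (migrations left enabled, explicit folder), here is a minimal standalone sketch; the credentials and folder path are placeholders, not values from the question:
import os
from gluon import DAL, Field

folder = '/path/to/dal_folder'  # placeholder; will hold the *.table files and sql.log
if not os.path.exists(folder):
    os.makedirs(folder)         # create it first so the DAL can write there

# migrate_enabled is left at its default (True) so define_table can CREATE TABLE
db = DAL('postgres://user:pass@localhost:5432/mydb', folder=folder)
db.define_table('auth_user', Field('email', unique=True))
db.commit()
print db.tables  # ['auth_user'], and the table now exists in postgres as well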

Related

SQL INSERT error during Apache Superset init

I'm setting up Superset using Python 3.7.7 + Debian 10 in a Docker container and am getting an error when running superset init. Expected result: Superset loads example data and initial configuration into my Postgresql 11 database.
The problem looks to be related to Superset's attempt at loading information about its example database. Specifically, some of the values it's attempting to insert are being read as NoneType, which sqlalchemy is rejecting because they're not bytes-like. Here are the most relevant portions of the trace:
2020-05-20 22:07:50,651:INFO:root:Creating database reference for examples
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/cryptography/utils.py", line 36, in _check_byteslike
memoryview(value)
TypeError: memoryview: a bytes-like object is required, not 'NoneType'
...
File "/usr/local/lib/python3.7/site-packages/superset/cli.py", line 51, in init
utils.get_example_database()
File "/usr/local/lib/python3.7/site-packages/superset/utils/core.py", line 976, in get_example_database
return get_or_create_db("examples", db_uri)
File "/usr/local/lib/python3.7/site-packages/superset/utils/core.py", line 968, in get_or_create_db
db.session.commit()
...
sqlalchemy.exc.StatementError: (builtins.TypeError) data must be bytes-like
[SQL: INSERT INTO dbs (created_on, changed_on, verbose_name, database_name, sqlalchemy_uri, password, cache_timeout, select_as_create_table_as, expose_in_sqllab, allow_run_async, allow_csv_upload, allow_ctas, allow_dml, force_ctas_schema, allow_multi_schema_metadata_fetch, extra, perm, impersonate_user, created_by_fk, changed_by_fk) VALUES (%(created_on)s, %(changed_on)s, %(verbose_name)s, %(database_name)s, %(sqlalchemy_uri)s, %(password)s, %(cache_timeout)s, %(select_as_create_table_as)s, %(expose_in_sqllab)s, %(allow_run_async)s, %(allow_csv_upload)s, %(allow_ctas)s, %(allow_dml)s, %(force_ctas_schema)s, %(allow_multi_schema_metadata_fetch)s, %(extra)s, %(perm)s, %(impersonate_user)s, %(created_by_fk)s, %(changed_by_fk)s) RETURNING dbs.id]
[parameters: [{'database_name': 'examples', 'sqlalchemy_uri': 'postgresql://superset:[redacted]@[redacted]:5432/superset', 'password': '[redacted]', 'perm': None, 'verbose_name': None, 'cache_timeout': None, 'force_ctas_schema': None}]]
Full trace here. The funny business starts here, when Superset attempts to record the metadata about its example database but apparently doesn't pass parameters for all the required fields. It attempts to instantiate the Database model here.
I get the same error with superset load_examples. I don't think it's a database connection issue, as I'm able to access via psql and can see that data has been populated in the user table.
Obviously this command typically works fine in other environments, so I'm wondering if perhaps there's some kind of incompatibility in my setup I'm not aware of. Package versions: apache-superset 0.35.2, cryptography 2.7, sqlalchemy 1.3.16, sqlalchemy-utils 0.36.1. My superset_config.py is here, and the Dockerfile is a duplicate of this one.

Python SQLAlchemy: psycopg2.ProgrammingError relation already exists?

I have repeatedly tried to create a table MYTABLENAME with SQLAlchemy in Python. I deleted all tables through my SQL client DBeaver, but I am getting an error that the table already exists:
Traceback (most recent call last):
File "/home/hhh/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/hhh/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
psycopg2.ProgrammingError: relation "ix_MYTABLENAME_index" already exists
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "ix_MYTABLENAME_index" already exists
[SQL: 'CREATE INDEX "ix_MYTABLENAME_index" ON "MYTABLENAME" (index)']
Creating the tables and inserting into them succeeds with a unique name, but the second time around I get the error despite having deleted the tables in DBeaver.
Small example
from datetime import date
from sqlalchemy import create_engine
import numpy as np
import pandas as pd

def storePandasDF2PSQL(myDF_):
    # Store results as Pandas DataFrame to PostgreSQL database.
    #
    # Example:
    # df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
    # dbName = date.today().strftime("%Y%m%d")+"_TABLE"
    # engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
    # df.to_sql(dbName, engine)
    df = myDF_
    dbName = date.today().strftime("%Y%m%d")+"_TABLE"
    engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
    # ERROR: NameError: name 'table' is not defined
    # table.declarative_base.metadata.drop_all(engine)  # Drop all tables
    # TODO: This step is causing errors because SQLAlchemy thinks the
    # TODO: table still exists even though it was deleted
    df.to_sql(dbName, engine)
What is the proper way to clean up the backend such as some hanging index in order to recreate the table with fresh data? In other words, how to solve the error?
The issue might be on the SQLAlchemy side: it still believes the index exists, because the tables were deleted in DBeaver and SQLAlchemy was never notified of the deletion. There is a SQLAlchemy way of deleting the tables; with a declarative Base it is:
Base.metadata.drop_all(engine)
This keeps SQLAlchemy informed about the deletions (here Base is the declarative base your models derive from, not the literal table.declarative_base from the commented-out line, which is why that line raised a NameError).
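If the tables were created by df.to_sql rather than by declarative models, no declarative Base knows about them. In that case you can reflect whatever actually exists in postgres and drop it, indexes included; a minimal sketch using the placeholder connection string from the question:
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
meta = MetaData()
meta.reflect(bind=engine)   # load every table that really exists in the database
meta.drop_all(bind=engine)  # drop them all; their indexes are dropped with them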
This answer does not address reusing the same table names, so it is not about cleaning up the SQLAlchemy metadata. Instead of reusing table names, append the execution time to the end of the table name:
import time
dbName = date.today().strftime("%Y%m%d")+"_TABLE_"+str(time.time())
dbTableName = dbName
That way your SQL development environment, e.g. an SQL client locking the connection or specific tables, does not matter that much. Closing DBeaver while running the Python/SQLAlchemy code can also help.

Error: trying to redefine a primary key as non-primary key

I'm using the dataset library to attempt to back up a postgres database into an sqlite file. The code I'm running goes as follows:
local_db = "sqlite:///backup_file.db"
with dataset.connect(local_db) as save_to:
    with dataset.connect(postgres_db) as download_from:
        for row in download_from['outlook']:
            save_to['outlook'].insert(row)
If I print one row of the table, it looks like this:
OrderedDict([
('id', 4400),
('first_sighting', '2014-08-31'),
('route', None),
('sighted_by', None),
('date', None)
])
However, when I get to the line save_to['outlook'].insert(row) I get an error with the following stack trace:
Traceback (most recent call last):
File "/home/anton/Development/Python/TTC/backup_db.py", line 25, in <module>
save_to['outlook'].insert(dict(row))
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 79, in insert
row = self._sync_columns(row, ensure, types=types)
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 278, in _sync_columns
self._sync_table(sync_columns)
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/dataset/table.py", line 245, in _sync_table
self._table.append_column(column)
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 681, in append_column
column._set_parent_with_dispatch(self)
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/base.py", line 431, in _set_parent_with_dispatch
self._set_parent(parent)
File "/home/anton/.virtualenvs/flexity/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 1344, in _set_parent
self.key, table.fullname))
sqlalchemy.exc.ArgumentError: Trying to redefine primary-key column 'id' as a non-primary-key column on table 'outlook'
Any ideas as to what I'm doing wrong? I've tried this in python 2.7.14 and 3.6.3
Assuming you have a schema and table made for "outlook": did you make a PK field yourself, or did you let sqlite decide which field to make the PK? It is highly likely that you are trying to insert id twice: once inserted by sqlite itself, and once coming from the other table's records.
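A quick way to test that theory (my sketch, not part of the original answer): strip the id from each row before inserting and let sqlite assign its own:
for row in download_from['outlook']:
    row = dict(row)        # OrderedDict -> plain dict we can modify
    row.pop('id', None)    # drop the postgres id; sqlite's auto-increment PK takes over
    save_to['outlook'].insert(row)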
I figured it out! So, the trick is that by default the dataset library makes tables with an auto-incrementing integer primary key. But my data already has an 'id' column. To avoid this problem, I should define my table before I try to add rows to it, and define it with no primary key, as follows:
with dataset.connect(local_db) as save_to:
    with dataset.connect(postgres_db) as download_from:
        table_to_save_to = save_to.create_table('outlook', primary_id=False)
        for row in download_from['outlook']:
            table_to_save_to.insert(row)
By doing .create_table(table_name, primary_id=False) I can make sure that I can insert my own id values into the table.
I found this solution by reading the docs.

SQLAlchemy IntegrityError

I'm having a problem using SQLAlchemy with PySide (PyQt). I'm trying to pop up a QtGui.QDialog, but when I do, SQLAlchemy throws an exception:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\preo\preodb\dbviewandmodel.py", line 32, in rowCount
return len(self.rows())
File "C:\Python27\lib\site-packages\preo\preodb\dbviewandmodel.py", line 30, in rows
return self.tableobj.query.all()
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\query.py", line 1579, in all
return list(self)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\query.py", line 1688, in __iter__
self.session._autoflush()
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 862, in _autoflush
self.flush()
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 1388, in flush
self._flush(objects)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\session.py", line 1469, in _flush
flush_context.execute()
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\unitofwork.py", line 302, in execute
rec.execute(self)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\unitofwork.py", line 446, in execute
uow
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\orm\mapper.py", line 1878, in _save_obj
execute(statement, params)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1191, in execute
params)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1271, in _execute_clauseelement
return self.__execute_context(context)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1302, in __execute_context
context.parameters[0], context=context)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1401, in _cursor_execute
context)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\base.py", line 1394, in _cursor_execute
context)
File "C:\Python27\lib\site-packages\sqlalchemy-0.6.6-py2.7.egg\sqlalchemy\engine\default.py", line 299, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (IntegrityError) ('23000', "[23000] [Microsoft][ODBC
SQL Server Driver][SQL Server]Violation of UNIQUE KEY
constraint 'UQ__users__F3DBC5720DAF0CB0'. Cannot insert duplicate key in
object 'dbo.users'. (2627) (SQLExecDirectW); [01000] [Microsoft][ODBC SQL Server
Driver][SQL Server]The statement has been terminated. (3621)") u'INSERT INTO users
(username, fullname, email, passwordmd5) OUTPUT inserted.id VALUES (?, ?, ?, ?)'
(None, None, None, None)
This is particularly troubling because I have no code, anywhere, that even attempts to insert records into SQL; I am only ever attempting to query data from the database. In fact, my DB model is read-only with respect to what PySide/PyQt are doing (i.e., I'm using a QtGui.QTableView model/view and there is no insertRows function in that model).
I have no idea what's going on or how to solve it - again, I have no code to modify SQL records at all, yet SQLAlchemy still attempts to insert blank records into one of my SQL tables. All I can see is that, in the background, the QTableView data model is querying the database A LOT. It just seems that when I pop up this QDialog (which does have some code in it to query some table columns) this error is thrown. Oddly, it isn't consistent: sometimes the popup appears before the exception, sometimes after. Under normal circumstances the QTableView data model works great, just not when I pop up this dialog (and ironically, the popup isn't using any QTableView at all, just standard widgets like QLineEdit, QTextEdit, etc.).
If it helps, I'm using Python 2.7 with SQLAlchemy 0.6.6 (also with Elixir 0.7.1), and PySide 1.0.0 (and PyQt4 4.8.3). I'm on Windows 7 using SQL Server 2008 R2 (Express). And yes, I've tried rebooting the PC, but the problem still occurs after a reboot. I'm reluctant to post more code because there is a lot of it in this project and I can't nail the problem down to anything specific.
I'm hoping someone might know of oddities in SQLAlchemy and/or PyQt that might be related to this. I'm also hoping I can continue using SQLAlchemy as I have a large data model built; I'm reluctant, at this point, to abandon this and use PyQt's SQL features.
I've managed to make this problem go away, but it's still not really clear to me why SQLAlchemy was trying to insert rows in my database - that really bothers me, but it's not happening anymore.
At any rate, what I think was happening is related to my SQLAlchemy data model and the way I was accessing it. Here is a snippet of that model:
from elixir import *

metadata.bind = 'mssql+pyodbc://username:password/dbname'
metadata.bind.echo = False

class Users(Entity):
    using_options(tablename='users')
    username = Field(String(50), unique=True)
    fullname = Field(String(255))
    email = Field(String(255))
    passwordmd5 = Field(String(32))

    def __repr__(self):
        return "<Users ({})({})({})>".format(self.username, self.fullname, self.email)

    def prettyname(self):
        return {'username': 'User Name', 'fullname': 'Full Name', 'email': 'Email Address', 'passwordmd5': '$hidden$'}
In my code I needed a way of getting 'pretty' label names for a GUI without having to hard-code them in the GUI (I've been trying to create a dynamic way of building GUI forms). So I added the prettyname method to my data model to give me some application-specific metadata. All it does is return a dictionary of items.
I had a secondary problem in that sometimes I needed to get this data from the class instance for Users and sometimes for a query result for Users (for example, Users.get_by(id=1)). As it turned out, retrieving this data had to be done in two ways. In the class instances I had to get the value this way:
prettyname = Users().prettyname()['username']
But when I was using query results it was:
prettyname = queryresult.prettyname()['username']
SQLAlchemy seems to have a real problem with the former method (the class-instance method): it was in use every time I saw the crash. With the latter method I never saw a crash. Still, I needed access to that metadata from the class instance.
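My best guess at why the former method crashes (an inference from the traceback, not something I found documented): instantiating an Elixir Entity registers a new, pending row with the session as a side effect, so the next query's autoflush tries to INSERT it with every column still None, which matches the (None, None, None, None) parameters in the error above:
# Creates a *pending* Users row in the session as a side effect:
prettyname = Users().prettyname()['username']
# A later query triggers autoflush, which attempts roughly:
#   INSERT INTO users (username, fullname, email, passwordmd5) VALUES (NULL, NULL, NULL, NULL)
# and that violates the UNIQUE constraint once such a blank row already exists.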
The fix, or should I say what turned out to fix this, came from another Stack Overflow article (thank you, everyone at Stack Overflow - I'd be nothing without you). I changed the structure of the db model:
class Users(Entity):
    using_options(tablename='users')
    username = Field(String(50), unique=True, info={'prettyname': 'User Name'})
    fullname = Field(String(255), info={'prettyname': 'Full Name'})
    email = Field(String(255), info={'prettyname': 'Email Address'})
    passwordmd5 = Field(String(32), info={'hidden': True})

    def __repr__(self):
        return "<Users ({})({})({})>".format(self.username, self.fullname, self.email)
This allows me to use a common method of introspection to get the dictionary data in the info argument, regardless of whether I'm looking at a class instance or a query result. In either case I use the .table attribute of the class or query result, get the column I need (via .c or .columns), and read that column's .info dictionary.
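For example (a sketch of that introspection; the exact attribute path is from memory of the Elixir API of that era, so treat it as an assumption):
# Works for the class and for a query result alike,
# since both expose the underlying SQLAlchemy table:
label = Users.table.columns['username'].info['prettyname']  # 'User Name'
user = Users.get_by(id=1)
label = user.table.columns['email'].info['prettyname']      # 'Email Address'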
Whatever the case, now SQLAlchemy no longer tries to arbitrarily insert rows in the database anymore.

PostgreSQL problem in Django

I have a Django application and I'm using postgres. I try to execute the following line in one of my tests:
print BillingUser.objects.all()
And I get the following error:
"current transaction is aborted, commands ignored until end of transaction block."
My postgresql log:
ERROR: duplicate key value violates unique constraint "billing_rental_wallet_id_key"
STATEMENT: INSERT INTO "billing_rental" ("wallet_id", "item_id", "end_time", "time", "value", "index", "info") VALUES (61, 230, E'2010-02-11 11:01:01.092336', E'2010-02-01 11:01:01.092336', 10.0, 1, NULL)
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: INSERT INTO "billing_timeable" ("creation_date", "update_date") VALUES (E'2010-02-01 11:01:01.093504', E'2010-02-01 11:01:01.093531')
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT "billing_timeable"."id", "billing_timeable"."creation_date", "billing_timeable"."update_date", "billing_billinguser"."timeable_ptr_id", "billing_billinguser"."username", "billing_billinguser"."pin", "billing_billinguser"."sbox_id", "billing_billinguser"."parental_code", "billing_billinguser"."active" FROM "billing_billinguser" INNER JOIN "billing_timeable" ON ("billing_billinguser"."timeable_ptr_id" = "billing_timeable"."id") LIMIT 21
How can I fix that?
Thanks, Arshavski Alexander.
Ok... looking at the PostgreSQL log, it does look like you are doing an invalid insert that aborts the transaction. Now, looking at your code, I think the problem lies at lines 78-81:
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
    wallet = Wallet(user=user, currency=currency)
    wallet.save()
You create a wallet for the current user, but then on lines 87-88 you wrote:
user.wallet.amount = 12.0
user.wallet.save()
However, since you save the wallet after retrieving the user, the user object does not know that you had already created a wallet for it; with a OneToOne relationship, this causes the error you're seeing. I think what you should do is add a line after line 81:
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
    wallet = Wallet(user=user, currency=currency)
    wallet.save()
    user.wallet = wallet
That should solve the issue....
You insert data in some of your test functions. After an invalid insert, the DB connection is in a failed state; you need to roll back the transaction or turn transaction management off completely. See the Django docs on transactions and on testing with them.
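For the Django 1.x releases this question dates from, that looks roughly like the sketch below; the rental object stands in for whatever your test inserts (the billing_rental table in the log), so treat the names as assumptions:
from django.db import transaction, IntegrityError

try:
    rental.save()               # the insert that violates the unique constraint
except IntegrityError:
    transaction.rollback()      # clear the aborted-transaction state on the connection
print BillingUser.objects.all() # queries on this connection work again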
From the log it looks like you are trying to insert an item with a duplicate ID, which throws an error, and after that the rest of your code can't access the DB anymore. Fix that query and it should work.
