I created a column column1 in Postgres for table table1. My query uses SQLAlchemy's session.query. I then renamed column1 to column2 using psql, but now the SQLAlchemy query throws:
DBAPIError: (UndefinedColumn) column table1.column1
Am I missing something here? I tried to understand how the session query works, but found nothing.
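For what it's worth, session.query builds SQL from the Python model, not from the live database schema, so a rename done in psql has to be mirrored in the mapped class. A minimal sketch of an updated model (class and column names assumed from the question):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Table1(Base):
    __tablename__ = "table1"
    id = sa.Column(sa.Integer, primary_key=True)
    # was: column1 = sa.Column(sa.String) -- renamed to match the DB
    column2 = sa.Column(sa.String)

engine = sa.create_engine("sqlite://")  # in-memory DB for illustration
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Table1(id=1, column2="x"))
    session.commit()
    result = session.query(Table1).one()
print(result.column2)  # → x
```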
Below is the query I used to execute manually; I tried to construct it with SQLAlchemy but failed already at compiling.
"DELETE FROM Table WHERE col1 LIKE 'xxx' AND TRUNC(TO_DATE(date, 'yyyy-mm-dd HH24:MI:SS')) LIKE TRUNC(CURRENT_DATE-" + str(days) + ") AND id IN (" + Dataframe + ")"
The values str(days) and Dataframe need to be dynamic: one subtracts from CURRENT_DATE, the other checks whether an id appears in a DataFrame.
Below is my attempt to solve it with SQLAlchemy in Python; as I already fail at the syntax, I thought I might ask for a best-practice solution.
# sa is the imported SQLAlchemy package
Table.delete().where(
    sa.and_(
        sa.and_(
            Table.c.col1 == "xxx",
            sa.func.trunc(sa.func.to_date(Table.c.date, 'yyyy-mm-dd HH24:MI:SS'))
                .like(sa.func.trunc(sa.func.current_date() - days)),
        ),
        Table.c.id.in_(Dataframe),
    )
)
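A self-contained sketch of how such a delete can be built with SQLAlchemy Core (table and column names are illustrative); note that the Python-side method is in_ with a trailing underscore, and the DataFrame's ids should be passed as a plain list:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
# illustrative table mirroring the columns used in the question
table = sa.Table(
    "my_table", metadata,
    sa.Column("col1", sa.String),
    sa.Column("date", sa.String),
    sa.Column("id", sa.Integer),
)

days = 7            # dynamic value
ids = [1, 2, 3]     # e.g. dataframe["id"].tolist()

stmt = table.delete().where(
    sa.and_(
        table.c.col1.like("xxx"),
        sa.func.trunc(sa.func.to_date(table.c.date, "yyyy-mm-dd HH24:MI:SS"))
            .like(sa.func.trunc(sa.func.current_date() - days)),
        table.c.id.in_(ids),
    )
)
print(stmt)  # compiles the statement without touching a database
```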
I wrote code to connect SQLAlchemy to ClickHouse. The problem is that when I print query.statement, the table name appears twice in it. What exactly does SQLAlchemy do that causes this? Has anyone had this problem before?
My code:
session.query(className)
query.statement:
select a,b from table_name, table_name
I have connected to a database and iterate through the metadata to get the table names, dropping each table in turn. However, I get the error message:
pyodbc.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC Microsoft Access Driver] Table 'MSysAccessStorage' does not exist. (-1305) (SQLExecDirectW)")
This doesn't make sense, as I am getting the table name from the database, so it must exist. My connection must be working, since I can retrieve the table names, and other parts of my code, such as INSERT INTO, work. Here is my code:
for row in cursor.tables():
    if str(row.table_name) != "pricesBackup" and str(row.table_name) != "recipesBackup":
        sqlLine = "DROP TABLE costingDB1.accdb." + row.table_name
        print(sqlLine)
        cursor.execute(sqlLine)
        conn.commit()
This seems very odd to me, and I am wondering how to fix it. Thank you in advance.
Tables whose names start with "MSys" are internal system tables (row.table_type == "SYSTEM TABLE") that cannot be dropped. You'll need to restrict your deletions to tables where row.table_type == "TABLE".
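The filter can be kept in a small helper; a sketch using a namedtuple to stand in for the rows returned by cursor.tables() (real pyodbc rows expose the same table_name and table_type attributes):

```python
from collections import namedtuple

# stand-in for a row from pyodbc's cursor.tables()
TableRow = namedtuple("TableRow", ["table_name", "table_type"])

def droppable_tables(rows, keep=("pricesBackup", "recipesBackup")):
    """Names of user tables that are safe to DROP: skip system
    tables and any table we explicitly want to keep."""
    return [
        row.table_name
        for row in rows
        if row.table_type == "TABLE" and row.table_name not in keep
    ]

rows = [
    TableRow("MSysAccessStorage", "SYSTEM TABLE"),
    TableRow("prices", "TABLE"),
    TableRow("pricesBackup", "TABLE"),
]
print(droppable_tables(rows))  # → ['prices']
```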
I have found the solution: instead, I built a list of the table names, excluding the MSys tables and ~TMC tables, leaving just mine, and then used this list to drop the tables. I believe the issue was that the two SQL queries in the loop were clashing: the DROP TABLE command was running while the cursor was still enumerating the tables.
I am trying to insert a DataFrame into an existing Django database model using the following code:
database_name = settings.DATABASES['default']['NAME']
database_url = 'sqlite:///{database_name}'.format(database_name=database_name)
engine = create_engine(database_url)
dataframe.to_sql(name='table_name', con=engine, if_exists='replace', index=False)
After running this command, the database schema changes, eliminating the primary key and leading to the following error: django.db.utils.OperationalError: foreign key mismatch
Note: The pandas column names and the database columns are matching.
It seems that the problem comes from the if_exists='replace' parameter of the to_sql method. The pandas documentation says the following:
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
How to behave if the table already exists.
fail: Raise a ValueError.
replace: Drop the table before inserting new values.
append: Insert new values to the existing table.
With 'replace', pandas drops the existing table and recreates it from a schema inferred from the DataFrame. In your case it replaces the table created by the Django migration with a plain table, thus losing the primary key, foreign keys and so on. Try replacing 'replace' with 'append'.
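A runnable sketch of the 'append' behaviour, using an in-memory SQLite database (table and column names are illustrative): the table is created with a primary key, as a migration would do, and 'append' inserts the rows without touching that schema:

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # in-memory DB for illustration

# emulate a table created by a Django migration, with a primary key
with engine.begin() as conn:
    conn.execute(sa.text(
        "CREATE TABLE table_name (id INTEGER PRIMARY KEY, value TEXT)"
    ))

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
# 'append' inserts into the existing table; 'replace' would drop and
# recreate it from a schema inferred from the DataFrame
df.to_sql("table_name", con=engine, if_exists="append", index=False)

with engine.connect() as conn:
    rows = conn.execute(sa.text("SELECT id, value FROM table_name")).fetchall()
print([tuple(r) for r in rows])  # → [(1, 'a'), (2, 'b')]
```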
I'm new to sqlalchemy and am wondering how to do a union of two tables that have the same columns. I'm doing the following:
table1_and_table2 = sql.union_all(self.tables['table1'].alias("table1_subquery").select(),
self.tables['table2'].alias("table2_subquery").select())
I'm seeing this error:
OperationalError: (OperationalError) (1248, 'Every derived table must have its own alias')
(Note that self.tables['table1'] returns a sqlalchemy Table with name table1.)
Can someone point out the error or suggest a better way to combine the rows from both tables?
First, can you post the SQL that is generated when the problem occurs? You should be able to get it by setting echo=True in your create_engine call.
Second, and this is just a hunch, try rearranging your subqueries to this:
table1_and_table2 = sql.union_all(self.tables['table1'].select().alias("table1_subquery"),
self.tables['table2'].select().alias("table2_subquery"))
If my hunch is right, your version creates the aliases first, then runs the query, and the resulting query results are re-aliased and clash.
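For reference, in SQLAlchemy 1.4+ the same union can be written directly against select() constructs with no manual aliasing at all; a sketch using an in-memory SQLite database (table names are illustrative):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()
table1 = sa.Table("table1", metadata,
                  sa.Column("id", sa.Integer), sa.Column("name", sa.String))
table2 = sa.Table("table2", metadata,
                  sa.Column("id", sa.Integer), sa.Column("name", sa.String))
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(table1.insert(), [{"id": 1, "name": "a"}])
    conn.execute(table2.insert(), [{"id": 2, "name": "b"}])
    # union_all accepts the select() constructs directly
    union = sa.union_all(sa.select(table1), sa.select(table2))
    rows = conn.execute(union).fetchall()
print(sorted(tuple(r) for r in rows))  # → [(1, 'a'), (2, 'b')]
```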