Working on a Postgres DB within Python using psycopg2, with peewee as the ORM. I created the initial tables using peewee, and then I needed to perform an ALTER statement:
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # connection details as in the original setup (assumed)
cur = conn.cursor()
cur.execute("ALTER TABLE Test_Table ADD COLUMN filename VARCHAR(100)")
conn.commit()
After executing this, I do a select * from Test_Table and the new column is present.
However, when I do a select using the peewee ORM, the filename column does not exist on Test_Table.
What do I need to do in order for that ALTER statement to show up using peewee?
Peewee models are not dynamically created based on the state of the database schema; they are declarative.
So if you add a column to your database, you also add a corresponding field instance to the model class. Typically this is done offline (e.g., not while your application is running).
Refer here for docs on Peewee's schema migration utilities:
http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate
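A hedged sketch of both steps (the field name and length mirror the ALTER above; database and BaseModel stand in for your existing peewee database handle and model base class):

from peewee import CharField
from playhouse.migrate import PostgresqlMigrator, migrate

# 1. Declare the new field on the model so peewee knows about it.
class TestTable(BaseModel):
    filename = CharField(max_length=100, null=True)

# 2. Apply the matching schema change with the migration utilities
#    (only needed if the column was not already added by hand, as in
#    the question).
migrator = PostgresqlMigrator(database)
migrate(
    migrator.add_column('test_table', 'filename', CharField(max_length=100, null=True)),
)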
Related
I'm trying to find a way to dynamically create a table in a database (SQLite3). What I'm trying to do is get a file from the user and, based on the rows in the file, create a table with the same number of columns. Everything I found is 6-10 years old, so it doesn't work anymore.
I suspect that you are not finding many examples because Django abstracts away raw SQL.
Have you looked at using raw SQL queries, specifically Executing custom SQL directly?
from django.db import connection

table_name = 'Your_Name'
with connection.cursor() as cursor:
    cursor.execute(f"create table if not exists {table_name} ( id integer PRIMARY KEY )")
This will create a table called 'Your_Name' with a single column called id. You will still need to read the CSV; there is an example of how to do that here. If you follow that example, you could add the DDL into views.py.
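A rough sketch of the combined idea, assuming the first CSV row is a header and that one TEXT column per header field is acceptable (identifiers cannot be bound as query parameters, so they are whitelisted before being interpolated into the DDL):

import csv
from django.db import connection

def create_table_from_csv(path, table_name):
    with open(path, newline='') as f:
        header = next(csv.reader(f))
    # Only keep header fields that are plain Python identifiers, since
    # table and column names cannot be passed as placeholders.
    columns = ', '.join(f'"{h}" text' for h in header if h.isidentifier())
    with connection.cursor() as cursor:
        cursor.execute(
            f'CREATE TABLE IF NOT EXISTS "{table_name}" (id integer PRIMARY KEY, {columns})'
        )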
I'm working on a very complex raw SQL statement that has to be run with SQLAlchemy. One part of the statement is a JOIN with a temporary table that is filled with data from a CSV file.
The select is looking as follows:
(SELECT * FROM (VALUES ('1','xyz@something.com','+99123456798')) AS t (id, email, phone))
To prevent any SQL injection, I cannot simply copy-paste everything from the CSV into the SELECT.
I know that SQLAlchemy has the option of inserting values with :x placeholders and then passing the actual value in the execute method, but I have A LOT of values and substituting them by hand would be impossible.
Is there a way to build this temporary table from the csv with the necessary sql injection protection using sqlalchemy?
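For illustration, the :x placeholders mentioned above do not have to be written by hand; they can be generated per row. A hedged sketch, assuming an existing engine and a CSV with id, email, and phone columns:

import csv
from sqlalchemy import text

with open('contacts.csv', newline='') as f:
    rows = list(csv.DictReader(f))

# Build one placeholder triple per row; the bind-parameter dict is
# filled in the same loop, so no CSV value is ever pasted into the SQL.
values, params = [], {}
for i, row in enumerate(rows):
    values.append(f"(:id{i}, :email{i}, :phone{i})")
    params[f"id{i}"] = row['id']
    params[f"email{i}"] = row['email']
    params[f"phone{i}"] = row['phone']

sql = text("SELECT * FROM (VALUES " + ", ".join(values) + ") AS t (id, email, phone)")
with engine.connect() as conn:
    result = conn.execute(sql, params)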
I'm looking to write tests for my application. I would like to work with a clean database for all my tests. For various reasons, I cannot create a separate test database.
What I currently do is run everything in a transaction and never commit to the db. However some tests read from the db, so I'd like to delete all rows at the start of the transaction and start from there.
The problem I am running into is with foreign key constraints. Currently I just go through each table and do
cursor.execute("DELETE FROM %s" % tablename)
which gives me
IntegrityError: (1451, u'Cannot delete or update a parent row: a
foreign key constraint fails (`testing`.`app_adjust_reason`,
CONSTRAINT `app_adjust_reason_ibfk_2` FOREIGN KEY (`adjust_reason_id`)
REFERENCES `adjust_reason` (`id`))')
edit: I would like something generic that could be applied to any database. Otherwise I would specifically drop the constraints
A more general approach is to create a database from scratch before the test run and drop it afterwards, using CREATE DATABASE db_name and DROP DATABASE db_name. This way you always start with a clean database state, and you do not have to worry about foreign key or other constraints.
Note that you would also need to create your table schema (and possibly test data) after you create the database.
As a real-world example, this is what Django does when you run your tests: the table schema is recreated by the Django ORM from your models, then the fixtures and/or the schema and data migrations are applied.
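A minimal sketch of the create/drop-per-run idea, assuming MySQL (which the 1451 error above suggests), the MySQLdb driver, and a schema.sql dump that recreates the tables; all names and credentials are illustrative:

import MySQLdb

DB_NAME = 'testing'

conn = MySQLdb.connect(user='root', passwd='secret')
cur = conn.cursor()
cur.execute('DROP DATABASE IF EXISTS %s' % DB_NAME)
cur.execute('CREATE DATABASE %s' % DB_NAME)
cur.execute('USE %s' % DB_NAME)

# Recreate the schema; a naive split on ';' is enough for a simple dump.
with open('schema.sql') as f:
    for statement in f.read().split(';'):
        if statement.strip():
            cur.execute(statement)

# ... run the tests against DB_NAME ...

cur.execute('DROP DATABASE %s' % DB_NAME)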
I would like to use Pyramid and SQLAlchemy with an already existing MySQL database.
Is it possible to automatically create the models from the MySQL tables? I do not want to write them all by hand.
This could be done either by retrieving the tables and structure from the server or by using a MySQL "CREATE TABLE ..." script, which contains all the tables.
In SQLAlchemy you can reflect your database like this:
from sqlalchemy import create_engine, MetaData
engine = create_engine(uri)
meta = MetaData(bind=engine)
meta.reflect()
Then meta.tables is a dictionary of the reflected tables, keyed by table name.
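A short usage sketch (the 'users' table name is just an assumption):

users = meta.tables['users']
with engine.connect() as conn:
    for row in conn.execute(users.select()):
        print(row)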
By the way, it is described here: http://docs.sqlalchemy.org/en/latest/core/reflection.html
To generate the code based on the database tables there are packages such as https://pypi.python.org/pypi/sqlacodegen and http://turbogears.org/2.0/docs/main/Utilities/sqlautocode.html , but I haven't used them.
I added a url column in my table, and now sqlalchemy is saying 'unknown column url'.
Why isn't it updating the table?
There must be a setting when I create the session?
I am doing:
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
Is there something I am missing?
I want it to update any table that is missing a column I added to my Table definition in my Python code.
I'm not sure SQLAlchemy supports schema migration that well (at least the last time I touched it, it wasn't there).
A couple of options.
Don't manually specify your tables. Use the autoload feature to have SQLAlchemy automatically read the columns from your database (see the sketch after this list). This will require tests to make sure it works, but you get the general idea. DRY.
Try SQLAlchemy migrate.
Manually update the table after you change the model specification.
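A hedged sketch of the autoload option, assuming an existing engine and an illustrative 'users' table:

from sqlalchemy import Table, MetaData

meta = MetaData()
# Columns are read from the live database instead of being declared
# (autoload_with is the SQLAlchemy 1.4+ spelling; older releases use
# autoload=True together with autoload_with).
users = Table('users', meta, autoload_with=engine)
print(users.columns.keys())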
Note that metadata.create_all(bind=engine) is safe to call repeatedly: it only creates tables that do not exist yet.
However it will NOT add new columns to an existing table, modify existing columns, or remove columns from the DB schema if you remove them from the SQLAlchemy definition; for those changes you need a migration tool (or a manual ALTER TABLE).
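A minimal sketch of the create_all behaviour, assuming SQLAlchemy 1.4+ (the model and connection string are illustrative):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Page(Base):
    __tablename__ = 'pages'
    id = Column(Integer, primary_key=True)
    url = Column(String(255))

engine = create_engine('sqlite:///app.db')
# Creates the pages table if it is missing; an existing pages table is
# left untouched, even if its columns differ from the model.
Base.metadata.create_all(bind=engine)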