I would like to use Pyramid and SQLAlchemy with an already existing MySQL database.
Is it possible to automatically create the models from the MySQL tables? I do not want to write them all by hand.
This could be done either by retrieving the tables and their structure from the server, or by using a MySQL "CREATE TABLE ..." script that contains all the tables.
In SQLAlchemy you can reflect your database like this:
from sqlalchemy import create_engine, MetaData

engine = create_engine(uri)
meta = MetaData()
meta.reflect(bind=engine)  # load all table definitions from the live database
Then meta.tables is a dictionary mapping table names to Table objects. Reflection is described in the SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/core/reflection.html
To generate model code from the database tables there are packages such as https://pypi.python.org/pypi/sqlacodegen and http://turbogears.org/2.0/docs/main/Utilities/sqlautocode.html, but I haven't used them myself.
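If you want mapped classes rather than bare Table objects, reflection can also be paired with SQLAlchemy's automap extension. A rough sketch for SQLAlchemy 1.4+ (the connection URL and the users table are placeholders, and note that automap only generates classes for tables that have a primary key):

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# Placeholder URL; adjust the driver and credentials for your server.
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

# Reflect the schema and generate one mapped class per table.
Base = automap_base()
Base.prepare(autoload_with=engine)

User = Base.classes.users  # hypothetical table named "users"

with Session(engine) as session:
    print(session.query(User).count())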
Related
I have googled for tutorials/examples on how to retrieve data from a PostgreSQL database that already has tables, but most of them use models and are about making a DB from scratch.
But what about a situation where the database already exists? How is retrieval achieved?
Is it achieved by using psycopg2's cursor?
I have a PostgreSQL database with existing tables. I wish to:
Create a set of Python models (plain classes, SQLAlchemy models or other) based on the existing database
Then manage changes in these models with a migrations tool.
The second part, I think, is easy to achieve as long as I manage to get my initial schema created. How can the first part be achieved?
So for anyone willing to use SQLAlchemy, I found these two solutions:
Straight with SQLAlchemy reflection and automapping
With sqlacodegen (a usage sketch follows below)
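For the sqlacodegen route, the tool is run from the command line against the existing database and writes model classes to a file. A rough sketch (the connection URL is a placeholder; check sqlacodegen --help for the options your installed version supports):

pip install sqlacodegen
sqlacodegen postgresql://user:password@localhost/mydb --outfile models.py

The generated models.py can then serve as the starting point for a migrations tool such as Alembic, which covers the second part of the question.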
I'm trying to make a complete copy of a Snowflake DB into a PostgreSQL DB (every table/view, every row). I don't know the best way to go about accomplishing this. I've tried using a package called pipelinewise, but I could not get the access needed to convert a Snowflake view to a PostgreSQL table (it needs a unique id). Long story short, it just would not work for me.
I've now moved on to using the snowflake-sqlalchemy package. So I'm wondering what the best way is to make a complete copy of the entire DB. Is it necessary to make a model for each table, given that this is a big DB? I'm new to SQLAlchemy in general, so I don't know exactly where to start. My guess is reflection, but when I try the example below I'm not getting any results.
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine, MetaData

engine = create_engine(URL(
    account="xxxx",
    user="xxxx",
    password="xxxx",
    database="xxxxx",
    schema="xxxxx",
    warehouse="xxxx"
))
engine.connect()

metadata = MetaData(bind=engine)
for t in metadata.sorted_tables:
    print(t.name)
I'm pretty sure the issue is not the engine, because I ran the validate.py example and it returns the version as expected. Any advice on why my code above is not working, or a better way to accomplish my goal of making a complete copy of the DB, would be greatly appreciated.
Try this: I got it working on mine, but I have a few helper functions that I use for my SQLAlchemy engine, so it might not work as is:
import sqlalchemy as sa
from snowflake.sqlalchemy import URL

engine = sa.create_engine(URL(
    account="xxxx",
    user="xxxx",
    password="xxxx",
    database="xxxxx",
    schema="xxxxx",
    warehouse="xxxx"
))

# Ask the inspector for the table names instead of an un-reflected MetaData.
inspector = sa.inspect(engine)
schema = inspector.default_schema_name
for table_name in inspector.get_table_names(schema):
    print(table_name)
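To go from listing tables to actually copying them, one possible approach is to reflect the source schema, recreate it on the target, and move the rows. This is an untested sketch: the Postgres URL is a placeholder, snowflake_url stands in for the URL(...) object above, Snowflake-specific column types may need adjusting by hand before create_all, and for a big DB you would batch rows instead of loading each table into memory:

import sqlalchemy as sa

src_engine = sa.create_engine(snowflake_url)  # the URL(...) shown above
dst_engine = sa.create_engine("postgresql://user:password@localhost/target_db")

# Reflect every table in the source schema.
metadata = sa.MetaData()
metadata.reflect(bind=src_engine)

# Recreate the tables on the target; types may need tweaking by hand.
metadata.create_all(bind=dst_engine)

# Copy rows table by table, respecting foreign-key order.
with src_engine.connect() as src, dst_engine.begin() as dst:
    for table in metadata.sorted_tables:
        rows = [dict(r._mapping) for r in src.execute(table.select())]
        if rows:
            dst.execute(table.insert(), rows)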
Working on a Postgres DB within Python using psycopg2, with peewee as the ORM. I created the initial tables using peewee, and I needed to perform an ALTER statement:
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # connection details were elided in the original
cur = conn.cursor()
cur.execute("ALTER TABLE Test_Table ADD COLUMN filename VARCHAR(100)")
conn.commit()
After executing this, I do a select * from Test_Table and the new filename column is present.
However, when I do a select using the peewee ORM, that filename column does not exist on Test_Table.
What do I need to do for that ALTER statement to show up in peewee?
Peewee models are not dynamically created based on the state of the database schema. They are declarative.
So if you are adding a column to your database, you would add a corresponding field instance to the model class. Typically this is done offline (i.e., not while your application is running).
Refer here for docs on Peewee's schema migration utilities:
http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate
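For illustration, assuming a model named TestTable bound to a PostgresqlDatabase (the class, table, and connection names here are hypothetical), the field declaration would look like the following; the playhouse migration call is only needed when the column does not already exist in the database:

from peewee import PostgresqlDatabase, Model, CharField
from playhouse.migrate import PostgresqlMigrator, migrate

db = PostgresqlDatabase("test")  # placeholder connection

class TestTable(Model):
    # Field mirroring the ALTER TABLE statement above.
    filename = CharField(max_length=100, null=True)

    class Meta:
        database = db
        table_name = "test_table"

# Only needed if the column is not already in the database:
migrator = PostgresqlMigrator(db)
migrate(migrator.add_column("test_table", "filename",
                            CharField(max_length=100, null=True)))

Since you already added the column with raw SQL via psycopg2, declaring the field on the model should be enough in your case.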
I want to come up with a minimal set of queries/lines of code that extracts the table metadata within a database, across as many database versions as possible. I'm using PostgreSQL and trying to do this with Python, but I have no clue how, as I'm a Python newbie.
I appreciate your ideas/suggestions on this issue.
You can ask your database driver, in this case psycopg2, to return some metadata about a database connection you've established. You can also ask the database directly about some of its capabilities or schemas, but this is highly dependent on the version and type of database you're connecting to.
Here's an example taken from http://bytes.com/topic/python/answers/438133-find-out-schema-psycopg for PostgreSQL:
>>> import psycopg2 as db
>>> conn = db.connect('dbname=billings user=steve password=xxxxx port=5432')
>>> curs = conn.cursor()
>>> curs.execute("""SELECT table_name FROM information_schema.tables
...                 WHERE table_schema = 'public' AND table_type = 'BASE TABLE'""")
>>> curs.fetchall()
[('contacts',), ('invoicing',), ('lines',), ('task',), ('products',), ('project',)]
However, you would probably be better served using an ORM like SQLAlchemy. It creates an engine that you can ask about the database you're connected to, and it normalizes how you connect to varying database types.
If you need help with SQLAlchemy, post another question here! There's TONS of information already available by searching the site.
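For example, a version-agnostic way to pull the same table metadata through SQLAlchemy's inspector (the connection URL is a placeholder) might look like this:

from sqlalchemy import create_engine, inspect

# Placeholder DSN; swap in your own credentials.
engine = create_engine("postgresql+psycopg2://steve:xxxxx@localhost:5432/billings")
inspector = inspect(engine)

# List every table in the public schema along with its columns.
for table_name in inspector.get_table_names(schema="public"):
    print(table_name)
    for column in inspector.get_columns(table_name, schema="public"):
        print("   ", column["name"], column["type"])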