I am writing a basic gui for a program which uses Peewee. In the gui, I would like to show all the tables which exist in my database.
Is there any way to get the names of all existing tables, let's say in a list?
Peewee has the ability to introspect Postgres, MySQL and SQLite for the following types of schema information:
Table names
Columns (name, data type, null?, primary key?, table)
Primary keys (column(s))
Foreign keys (column, dest table, dest column, table)
Indexes (name, sql*, columns, unique?, table)
You can get this metadata using the following methods on the Database class:
Database.get_tables()
Database.get_columns()
Database.get_indexes()
Database.get_primary_keys()
Database.get_foreign_keys()
So, instead of using a cursor and writing some SQL yourself, just do:
from peewee import PostgresqlDatabase
db = PostgresqlDatabase('my_db')
tables = db.get_tables()
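To drill into each table, the other metadata methods follow the same pattern; a hedged sketch (the exact metadata shapes can vary slightly by backend):
for table in tables:
    columns = db.get_columns(table)    # ColumnMetadata entries (name, type, ...)
    pks = db.get_primary_keys(table)   # primary-key column names
    print(table, [col.name for col in columns], pks)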
For even more craziness, check out the reflection module, which can actually generate Peewee model classes from an existing database schema.
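For instance, a minimal sketch using the generate_models helper from playhouse.reflection (the SQLite database file here is a placeholder):
from peewee import SqliteDatabase
from playhouse.reflection import generate_models
db = SqliteDatabase('my_app.db')  # placeholder database file
models = generate_models(db)      # dict mapping table name -> generated Model class
print(list(models))               # the table names again, via the generated models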
To get a list of the tables in your schema, make sure that you have established your connection and cursor and try the following:
cursor.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='public'")
mytables = cursor.fetchall()
mytables = [x[0] for x in mytables]
I hope this helps.
I have this pattern for deletion of all rows in a Postgresql table and subsequent insertion with SQLAlchemy:
db = create_engine("postgresql://...", echo=False).connect()
metadata = MetaData(db)
my_table = Table('my_table', metadata, autoload_with=db)
...
db.execute(my_table.delete())
db.execute(my_table.insert(), values)
where values is a list.
I can't understand why I get a psycopg2.errors.UniqueViolation when trying to insert.
The data which is inserted is not duplicated, so I guess the problem is that the delete is not committed?
I don't use a Session: what do I need to do to get this simple pattern working correctly?
I found the solution by completely disabling automatic SQLAlchemy transactions (which are not needed in my case of bulk deletions/insertions) with the supported DBAPI isolation_level="AUTOCOMMIT":
db = create_engine("postgresql://...", echo=False).connect().execution_options(isolation_level="AUTOCOMMIT")
See https://docs.sqlalchemy.org/en/14/core/connections.html#setting-transaction-isolation-levels-including-dbapi-autocommit
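For reference, a minimal sketch of the full pattern with autocommit enabled (the URL, table name, and rows are placeholders):
from sqlalchemy import MetaData, Table, create_engine
db = create_engine("postgresql://user:pass@localhost/mydb", echo=False).connect().execution_options(isolation_level="AUTOCOMMIT")
metadata = MetaData(db)
my_table = Table('my_table', metadata, autoload_with=db)
values = [{'name': 'a'}, {'name': 'b'}]  # placeholder rows
db.execute(my_table.delete())            # commits immediately under AUTOCOMMIT
db.execute(my_table.insert(), values)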
I am trying to access tables from a database using python. There was some code on the website: https://rnacentral.org/help/public-database
import psycopg2
import psycopg2.extras

def main():
    conn_string = "host='hh-pgsql-public.ebi.ac.uk' dbname='pfmegrnargs' user='reader' password='NWDMCE5xdipIjRrp'"
    conn = psycopg2.connect(conn_string)
    cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    # retrieve a list of RNAcentral databases
    query = "SELECT * FROM rnc_database"
    cursor.execute(query)
    for row in cursor:
        print(row)
When I run this code, I get back a list of databases.
I want to access tables from one of these databases, but I don't know the schema of those tables or what the values in each returned list represent. I have been looking at 'postgresql to python' resources, but all of them are about accessing tables when you already know the names of the tables and the columns within. Is there code for how I can access the table names from the database?
Thank you.
Edit: sorry, I thought I linked the website before.
The dataset you want to use has a schema diagram here: https://rnacentral.org/help/public-database
For general-purpose exploration I would use a tool like https://dbeaver.io/; it will show you all the schemas in the db, the tables inside each schema, and so forth. In the DBeaver connection settings, use the same host, database name, user, and password as in the Python script above.
If you want to keep using a Python script to explore the db, this SQL query should help you:
SELECT *
FROM pg_catalog.pg_tables
WHERE schemaname != 'pg_catalog' AND
      schemaname != 'information_schema';
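Wired into the question's psycopg2 connection, that query looks like this (a sketch; I select just the schema and table name columns from pg_tables):
cursor.execute("""
    SELECT schemaname, tablename
    FROM pg_catalog.pg_tables
    WHERE schemaname != 'pg_catalog'
      AND schemaname != 'information_schema'
""")
for schema, table in cursor.fetchall():
    print(schema, table)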
I have a function in my code that generates a bunch of tables on an API call. It looks somewhat like this:
def create_tables():
    rows = connection.execute(sqlcmd)
    for i, row in enumerate(rows):
        # Do some work here
        t = Table(f"data_{i}", metadata, *columns)
    metadata.create_all()
I need another function that iterates over the tables created in the above function and dumps records into each table from another API. Since I'm not using declarative mapping or models in SQLAlchemy, how do I identify these tables in my database and write data to a specific table?
You can use the reflection system:
meta.reflect(bind=someengine)
# now all located tables are present within the MetaData object’s
# dictionary of tables
table1 = meta.tables['data_1']
someengine.execute(table1.insert().values(...))
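Applied to the question, a minimal sketch (the engine URL and row payloads are placeholders; the data_N table names come from the question's create_tables):
from sqlalchemy import MetaData, create_engine
engine = create_engine("mysql://user:pass@localhost/mydb")  # placeholder URL
meta = MetaData()
meta.reflect(bind=engine)
for name, table in meta.tables.items():
    if not name.startswith('data_'):
        continue
    rows = [{'column1': 'value1'}]  # placeholder records from the other API
    engine.execute(table.insert(), rows)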
I'm using SQLAlchemy for MySQL.
The common SQLAlchemy workflow is:
Defining model classes from the table structure (class User(Base))
Migrating them to the database with db.create_all (or alembic, etc.)
Importing the model class and using it (db.session.query(User))
But what if I want to use a raw SQL file instead of defined model classes?
I did read that automap does something similar, but I want to get the mapper objects from a raw SQL file, not from an already created database.
Is there any best practice to do this?
This is an example of DDL
-- ddl.sql
-- This is just an example, so please ignore some issues related to a grammar
CREATE TABLE `card` (
`card_id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'card',
`card_company_id` bigint(20) DEFAULT NULL COMMENT 'card_company_id',
PRIMARY KEY (`card_id`),
KEY `card_ix01` (`card_company_id`),
KEY `card_ix02` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='card table'
And I want to do something like this:
Base = raw_sql_base('ddl.sql') # Some kinda automap_base but from SQL file
# engine, suppose it has two tables 'user' and 'address' set up
engine = create_engine("mysql://user#localhost/program")
# reflect the tables
Base.prepare(engine)
# mapped classes are now created with names by sql file
Card = Base.classes.card
session = Session(engine)
session.add(Card(card_id=1, card_company_id=1))
session.commit() # Insert
SQLAlchemy is not an SQL parser, but the exact opposite; its reflection works against existing databases only. In other words you must execute your DDL and then use reflection / automap to create the necessary Python models:
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# engine, suppose it has two tables 'user' and 'address' set up
engine = create_engine("mysql://user@localhost/program")
# execute the DDL in order to populate the DB
with open('ddl.sql') as ddl:
    engine.execute(ddl.read())
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# mapped classes are now created with names by sql file
Card = Base.classes.card
session = Session(engine)
session.add(Card(card_id=1, card_company_id=1))
session.commit() # Insert
This of course may fail, if you have already executed the same DDL against your database, so you would have to handle that case as well. Another possible caveat is that some DB-API drivers may not like executing multiple statements at a time, if your ddl.sql happens to contain more than one CREATE TABLE statement etc.
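One hedged workaround for the multiple-statements caveat: split the file yourself and execute the statements one at a time (naive splitting on ';' breaks if a string literal contains a semicolon, but works for a simple DDL file):
with open('ddl.sql') as ddl:
    for statement in ddl.read().split(';'):
        if statement.strip():
            engine.execute(statement)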
...but I want to get the mapper objects from a raw SQL file.
Ok, in that case what you need is the aforementioned parser. A cursory search produced two candidates:
sqlparse: Generic, but the issue tracker is a testament to how nontrivial parsing SQL is. It is often confused; for example, it parses ... COMMENT 'card', `card_company_id` ... as a keyword and an identifier list, not as a keyword, a literal, punctuation, and an identifier (or, even better, the column definitions as their own nodes).
mysqlparse: A MySQL specific solution, but with limited support for just about anything, and it seems abandoned.
Parsing would be just the first step, though. You'd then have to convert the resulting trees to models.
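To see what such a parser actually gives you before any model conversion, here is a small sqlparse sketch over the ddl.sql above (the parse call is real sqlparse API; interpreting the tokens is the hard part):
import sqlparse
with open('ddl.sql') as f:
    statement = sqlparse.parse(f.read())[0]
# print the flat token stream; mapping these tokens back to column
# definitions is exactly the nontrivial part described above
for token in statement.flatten():
    if not token.is_whitespace:
        print(token.ttype, repr(token.value))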
I'm writing a python script that would reset the database to an initial state (some hardcoded entries in every table). The db consists of multiple tables with primary and foreign keys.
Every time I would run the script, it should remove all the old entries in all of the tables, reset the primary key counter and insert the sample entries.
Currently I am trying to achieve this like this:
# Delete all the entries from the tables
cursor.execute("DELETE FROM table1")
cursor.execute("DELETE FROM table2")
cursor.execute("DELETE FROM table3")
# Reset the primary key counter and insert sample entries
cursor.execute("ALTER TABLE table1 AUTO_INCREMENT = 1")
cursor.execute("INSERT INTO table1(username, password) VALUES('user01', '123')")
cursor.execute("ALTER TABLE table2 AUTO_INCREMENT = 1")
cursor.execute("INSERT INTO table2(column1, column2) VALUES('column1_data', 'column2_data')")
This isn't working due to the presence of foreign keys in some tables (it won't let me delete them).
I generate the tables using a models.py script (I also use Django), so I thought I could solve this the following way:
remove the database programmatically and create a new one with the same name
call the models.py script to generate empty tables in the db
insert sample data using the script I wrote
Is this a good solution or am I overlooking something?
I use scripts monthly to purge a transaction table, after archiving the contents.
Try using the 'truncate' command, i.e.:
truncate table [tablename];
It resets the counter (auto-increment) for the primary key automatically.
Then use your insert statements to populate base info.
Also, this preserves all of the table's base settings (keys, indexes, etc.).
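A minimal sketch of the reset script rewritten around TRUNCATE (the table and column names are the question's own examples; note that MySQL also refuses to TRUNCATE a table referenced by a foreign key, so the checks are disabled around it):
cursor.execute("SET FOREIGN_KEY_CHECKS = 0")
cursor.execute("TRUNCATE TABLE table1")
cursor.execute("TRUNCATE TABLE table2")
cursor.execute("TRUNCATE TABLE table3")
cursor.execute("SET FOREIGN_KEY_CHECKS = 1")
# TRUNCATE resets AUTO_INCREMENT, so the ALTER TABLE statements are no
# longer needed; just repopulate the sample data
cursor.execute("INSERT INTO table1(username, password) VALUES('user01', '123')")
cursor.execute("INSERT INTO table2(column1, column2) VALUES('column1_data', 'column2_data')")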