When using mclient it is possible to list all tables in the database by issuing the command '\d'. I'm using the python-monetdb package and I don't know how the same can be accomplished. I've seen an example like "SELECT * FROM TABLES;", but I get an error that the "tables" table does not exist.
In your query you need to specify that you are looking for the tables table that belongs to the default sys schema, i.e. sys.tables. The SQL query that returns the names of all non-system tables in MonetDB is:
SELECT t.name FROM sys.tables t WHERE t.system=false
In Python this should look something like:
import monetdb.sql

connection = monetdb.sql.connect(username='<username>', password='<password>', hostname='<hostname>', port=50000, database='<database>')
cursor = connection.cursor()
cursor.execute('SELECT t.name FROM sys.tables t WHERE t.system=false')
tables = cursor.fetchall()
print(tables)
If you are looking for tables only in a specific schema, you will need to extend your query, specifying the schema:
SELECT t.name FROM sys.tables t WHERE t.system=false AND t.schema_id IN (SELECT s.id FROM sys.schemas s WHERE name = '<schema-name>')
where <schema-name> is the name of your schema, surrounded by single quotes.
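For example, a minimal sketch of running the schema-filtered query from Python, reusing the cursor from above (the schema name 'my_schema' is just a placeholder, and it is formatted directly into the query here only for brevity):

schema = 'my_schema'  # placeholder schema name
cursor.execute(
    "SELECT t.name FROM sys.tables t "
    "WHERE t.system=false AND t.schema_id IN "
    "(SELECT s.id FROM sys.schemas s WHERE s.name = '{}')".format(schema)
)
print([row[0] for row in cursor.fetchall()])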
The task is to identify whether an SQL statement is a DML/DDL statement or not.
So I used an array, pushed all the DML/DDL patterns into it, and searched for them by iterating.
Below is a simple code snippet where:
I send an SQL query as a parameter,
check if it contains update, alter, drop, or delete,
and print a message.
import re

def check_dml_statement(self, sql):
    actions = ['update', 'alter', 'drop', 'delete']
    for action in actions:
        if re.search(action, sql.lower()):
            print("This is a DML statement")
But there are some edge cases that I need to handle.
To consider:
Update table table_name where ...
alter table table_name
create table table_name
delete * from table_name
drop table_name
Not to consider:
select * from table where action='drop'
So, the task is to identify only SQL statements which modify, drop, create, or alter tables, etc.
One specific idea is to check whether an SQL statement starts with one of the above array values, using the startswith function.
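A rough sketch of that startswith idea, assuming statements are separated by semicolons (the function name and keyword list are made up for illustration):

DDL_DML_KEYWORDS = ('update', 'alter', 'drop', 'delete', 'create', 'insert', 'truncate')

def looks_like_ddl_or_dml(sql):
    # Test only the first word of each statement, ignoring leading whitespace and case.
    for statement in sql.split(';'):
        if statement.strip().lower().startswith(DDL_DML_KEYWORDS):
            return True
    return False

print(looks_like_ddl_or_dml("select * from table where action='drop'"))  # False
print(looks_like_ddl_or_dml("drop table_name"))                          # True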
You can use python-sqlparse for that:
import sqlparse
query = """
select * from table where action='delete';
delete from table where name='Alex'
"""
parsed = sqlparse.parse(query)
for p in parsed:
    print(p.get_type())
This will output:
SELECT
DELETE
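To tie this back to the original task, a small sketch that flags a statement as DML/DDL based on get_type(); the set of types below is an assumption and can be extended as needed:

import sqlparse

# Assumed set of statement types to treat as DML/DDL.
MODIFYING_TYPES = {'INSERT', 'UPDATE', 'DELETE', 'CREATE', 'ALTER', 'DROP'}

def is_ddl_or_dml(sql):
    return any(stmt.get_type() in MODIFYING_TYPES for stmt in sqlparse.parse(sql))

print(is_ddl_or_dml("select * from table where action='delete'"))  # False
print(is_ddl_or_dml("delete from table where name='Alex'"))        # True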
I am trying to access tables from a database using Python. There was some code on the website https://rnacentral.org/help/public-database:
import psycopg2.extras

def main():
    conn_string = "host='hh-pgsql-public.ebi.ac.uk' dbname='pfmegrnargs' user='reader' password='NWDMCE5xdipIjRrp'"
    conn = psycopg2.connect(conn_string)
    cursor = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    # retrieve a list of RNAcentral databases
    query = "SELECT * FROM rnc_database"
    cursor.execute(query)
    for row in cursor:
        print(row)
When I run this code, I get back a list of databases.
I want to access tables from one of these databases, but I don't know what the schema for those tables is or what the values in each returned list represent. I have been looking at 'postgresql to python' resources, but all of them are about accessing tables when you already know the table names and the columns within them. Is there code for how I can access the table names from the database?
Thank You
Edit: sorry, I thought I had linked the website before.
The dataset you want to use has a schema diagram here: https://rnacentral.org/help/public-database
For general-purpose exploration I would use a tool like https://dbeaver.io/ ; it will show you all the schemas in the database, the tables inside each schema, and so forth. To connect, point DBeaver at the same host, database, user, and password used in the Python snippet above.
If you want to keep using a Python script to explore the database, this SQL query
SELECT *
FROM pg_catalog.pg_tables
WHERE schemaname != 'pg_catalog' AND
schemaname != 'information_schema';
should help you.
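For example, reusing the connection details from the question, a minimal sketch:

import psycopg2

conn = psycopg2.connect("host='hh-pgsql-public.ebi.ac.uk' dbname='pfmegrnargs' user='reader' password='NWDMCE5xdipIjRrp'")
cursor = conn.cursor()
cursor.execute(
    "SELECT schemaname, tablename FROM pg_catalog.pg_tables "
    "WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema'"
)
for schemaname, tablename in cursor.fetchall():
    print(schemaname, tablename)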
I usually use R to do SQL queries by using ODBC to link to a SQL database. The code generally looks like this:
library(RODBC)
ch<-odbcConnect('B1P HANA',uid='****',pwd='****')
myOffice <- c(0)
office_clause = ""
if (myOffice != 0) {
office_clause = paste(
'AND "_all"."/BIC/ZSALE_OFF" IN (',paste(myOffice, collapse=", "),')'
)
}
a <- sqlQuery(ch, paste(' SELECT "_all"."CALDAY" AS "ReturnDate" FROM "SAPB1P"."/BIC/AZ_RT_A212" "_all"
WHERE "_all"."CALDAY"=20180101
', office_clause, '
GROUP BY "_all"."CALDAY"
'))
The workflow is:
odbcConnect links R to the SQL database using ODBC.
myOffice is a vector of values from R; those values are used as filter conditions in the SQL WHERE clause.
a stores the query result from the SQL database.
So, how do I do all of this in Python, i.e., run SQL queries in Python by using ODBC to link the SQL database and Python? I am new to Python. All I know is something like:
import pyodbc
conn = pyodbc.connect(r'DSN=B1P HANA;UID=****;PWD=****')
Then I do not know how to continue, and I cannot find an overall example online. Could anyone help by providing a comprehensive example, from connecting to the SQL database in Python to retrieving the result?
Execute SQL from Python
Instantiate a Cursor and use the execute method of the Cursor class to execute any SQL statement.
cursor = cnxn.cursor()
Select
You can use fetchall, fetchone, and fetchmany to retrieve rows returned from SELECT statements:
import pyodbc
cnxn = pyodbc.connect('DSN=myDSN;UID=***;PWD=***')
cursor = cnxn.cursor()
cursor.execute("SELECT Col1, Col2 FROM MyTable WHERE Col1= 'SomeValue'")
rows = cursor.fetchall()
for row in rows:
    print(row.Col1, row.Col2)
You can provide parameterized queries in a sequence or in the argument list:
cursor.execute("SELECT Col1, Col2, Col3, ... FROM MyTable WHERE Col1 = ?", 'SomeValue',1)
Insert
INSERT commands also use the execute method; however, you must subsequently call the commit method after an insert or you will lose your changes:
cursor.execute("INSERT INTO MyTable (Col1) VALUES ('SomeValue')")
cnxn.commit()
Update and Delete
As with an insert, you must also call commit after calling execute for an update or delete:
cursor.execute("UPDATE MyTable SET Col1= 'SomeValue'")
cnxn.commit()
Metadata Discovery
You can use the getinfo method to retrieve data such as information about the data source and the capabilities of the driver. The getinfo method passes through input to the ODBC SQLGetInfo method.
cnxn.getinfo(pyodbc.SQL_DATA_SOURCE_NAME)
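Putting the pieces together, here is an end-to-end sketch mirroring the R workflow from the question; the DSN and credentials come from the question, while the table and column names (MyTable, OfficeId, Col1, Col2) are placeholders:

import pyodbc

# Connect through the ODBC data source name, as in the question.
cnxn = pyodbc.connect('DSN=B1P HANA;UID=****;PWD=****')
cursor = cnxn.cursor()

# Build a parameterized IN (...) clause from a Python list, similar to myOffice in R.
offices = [1, 2, 3]
placeholders = ', '.join('?' for _ in offices)
cursor.execute(
    "SELECT Col1, Col2 FROM MyTable WHERE OfficeId IN (%s)" % placeholders,
    offices
)

rows = cursor.fetchall()
for row in rows:
    print(row.Col1, row.Col2)

cnxn.close()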
I have 2 tables, "vector" and "vocab." I'm trying to do this:
c.execute('SELECT value FROM vector WHERE word IN (SELECT word FROM vocab)')
I'm getting the error sqlite3.OperationalError: no such table: vocab. Of course, this is because I haven't connected to the vocab table; I only connected to the vector table before:
dbname = "/Users/quantumjuker/NLP/vector.db"
conn = sqlite3.connect(dbname)
c = conn.cursor()
How can I connect to the vocab table as well so I don't receive an error?
Thanks!
SQLite3 has the ability to read additional SQLite data files. This is done using the ATTACH command. The nice thing about it is that it is issued as an SQL query, so you do something like:
c.execute("ATTACH 'vocab.db' AS 'vocabulary'");
Note that the AS aliases the database to a name, not the table. Once this is done, you can run your query against the vocab table as well.
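A minimal sketch of the full flow, assuming the vocab table lives in a separate vocab.db file next to vector.db (the vocab.db path is an assumption):

import sqlite3

conn = sqlite3.connect("/Users/quantumjuker/NLP/vector.db")
c = conn.cursor()

# Attach the second database file under the alias 'vocabulary'.
c.execute("ATTACH '/Users/quantumjuker/NLP/vocab.db' AS vocabulary")

# An unqualified table name works as long as it is unique across the attached databases;
# otherwise qualify it as vocabulary.vocab.
c.execute('SELECT value FROM vector WHERE word IN (SELECT word FROM vocab)')
rows = c.fetchall()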
I am writing a basic GUI for a program which uses Peewee. In the GUI, I would like to show all the tables which exist in my database.
Is there any way to get the names of all existing tables, let's say in a list?
Peewee has the ability to introspect Postgres, MySQL and SQLite for the following types of schema information:
Table names
Columns (name, data type, null?, primary key?, table)
Primary keys (column(s))
Foreign keys (column, dest table, dest column, table)
Indexes (name, sql*, columns, unique?, table)
You can get this metadata using the following methods on the Database class:
Database.get_tables()
Database.get_columns()
Database.get_indexes()
Database.get_primary_keys()
Database.get_foreign_keys()
So, instead of using a cursor and writing some SQL yourself, just do:
from peewee import PostgresqlDatabase

db = PostgresqlDatabase('my_db')
tables = db.get_tables()
For even more craziness, check out the reflection module, which can actually generate Peewee model classes from an existing database schema.
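For illustration, a small sketch of that reflection module (the database name 'my_db' and the table name 'some_table' are placeholders):

from peewee import PostgresqlDatabase
from playhouse.reflection import generate_models

db = PostgresqlDatabase('my_db')

# Build Peewee model classes from the existing schema, keyed by table name.
models = generate_models(db)
print(list(models))               # table names
SomeTable = models['some_table']  # placeholder table name -> generated model class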
To get a list of the tables in your schema, make sure that you have established your connection and cursor and try the following:
cursor.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='public'")
myables = cursor.fetchall()
mytables = [x[0] for x in mytables]
I hope this helps.