This question already has answers here:
SQLAlchemy support of Postgres Schemas
(12 answers)
SQLAlchemy: engine, connection and session difference
(3 answers)
Closed 5 years ago.
I am working with a Postgres database. This database has three schemas (schema_1, schema_2, public). If I run a simple query, the public schema will be queried by default:
from sqlalchemy import create_engine
con = create_engine('postgresql+pg8000://usr:pwd@server/db')
con.execute('select count(*) from the_table')
I cannot access the tables in schema_1 or schema_2, unless I specify the path in the query:
con.execute('select count(*) from schema_1.the_table')
Is there any way to specify the default path of the query to schema_1 without the need of specifying the full path in the query itself?
I tried with:
con.execute('SET search_path TO "schema_1";')
but this does not seem to work:
insp = inspect(con)
print(insp.default_schema_name)
# 'public'
I believe I am not executing the SET search_path TO "schema_1" statement correctly, because the same command works in other Postgres clients (like pgAdmin).
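The likely explanation is that each engine-level execute() checks a connection out of the pool and returns it immediately afterwards, so the SET search_path only affects one short-lived connection, and inspect() looks at a different one. One way around this, sketched against the connection details from the question (with @ assumed in the URL), is to keep a single connection open and run both statements on it; on SQLAlchemy 2.x the raw strings would additionally need to be wrapped in sqlalchemy.text():

```python
from sqlalchemy import create_engine

engine = create_engine('postgresql+pg8000://usr:pwd@server/db')

# Keep one connection checked out so the search_path change and the
# query run on the same underlying Postgres session.
with engine.connect() as con:
    con.execute('SET search_path TO schema_1')
    count = con.execute('select count(*) from the_table').scalar()
```

This is a sketch and requires the live database from the question; the key point is only that both statements share one connection.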
This question already has answers here:
Not all parameters were used in the SQL statement (Python, MySQL)
(5 answers)
Closed last year.
I am trying to create a Flask web application in which I want to store input values from an HTML web page in a MySQL database, but I am getting the error "Not all parameters were used in the SQL statement" when calling my insert function.
Here is my code:
import mysql.connector as conn
def insert_records(name, email, location, salary, band):
    con = conn.connect(host='localhost', user='root', password='my password', database="my database")
    cur = con.cursor()
    sql = """insert into emp1.emp(Name,Email,Location,Salary,Band) values (%s,%s,%s,%d,%s)"""
    val = (name, email, location, salary, band)
    cur.execute(sql, val)
    con.commit()
    con.close()
You need to use %s for every argument per the documentation - they use %s for strings, integers, datetimes, etc.
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html
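Concretely, that means the %d in the question's query should become %s; a corrected sketch of insert_records, keeping the connection details from the question:

```python
import mysql.connector as conn

def insert_records(name, email, location, salary, band):
    con = conn.connect(host='localhost', user='root',
                       password='my password', database='my database')
    cur = con.cursor()
    # mysql.connector uses %s as the placeholder for every parameter
    # type, numbers included, so Salary is bound with %s rather than %d.
    sql = ("insert into emp1.emp (Name, Email, Location, Salary, Band) "
           "values (%s, %s, %s, %s, %s)")
    cur.execute(sql, (name, email, location, salary, band))
    con.commit()
    con.close()
```

This requires the live MySQL server from the question; the only substantive change is the five uniform %s placeholders.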
This question already has answers here:
Sqlite insert query not working with python?
(2 answers)
Closed 1 year ago.
I am currently learning SQL for one of my projects, and the site that I am learning from advised me to use DB Browser to see my database content. However, I can't see the data inside the database. This is what my code looks like: I'm creating a table and then trying to write some values into it. It creates the DB successfully, but the data doesn't show up.
import sqlite3 as sql
connection = sql.connect("points.db")
cursor = connection.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS servers (server_id TEXT, name TEXT, exp INTEGER)")
cursor.execute("INSERT INTO servers VALUES ('848117357214040104', 'brknarsy', 20)")
Can you check that your data is inserted?
Something like this at the end:
cursor.execute("SELECT * FROM servers")
r = cursor.fetchall()
for i in r:
    print(i)
Perhaps SQLite browser just needs a refresh
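Note also that sqlite3 does not commit automatically: without a connection.commit(), the inserted row is discarded when the program exits, which would explain why DB Browser shows an empty table. A minimal self-contained sketch, with an in-memory database standing in for points.db:

```python
import sqlite3 as sql

connection = sql.connect(":memory:")  # stand-in for "points.db"
cursor = connection.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS servers (server_id TEXT, name TEXT, exp INTEGER)")
cursor.execute("INSERT INTO servers VALUES ('848117357214040104', 'brknarsy', 20)")
connection.commit()  # without this, the insert is never written to disk

cursor.execute("SELECT * FROM servers")
rows = cursor.fetchall()
print(rows)  # [('848117357214040104', 'brknarsy', 20)]
```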
This question already has answers here:
Executing multiple statements with Postgresql via SQLAlchemy does not persist changes
(3 answers)
Closed 3 years ago.
I am trying to execute a SQL script from a file using SQLAlchemy.
When I run the queries, I get a result saying that they affected x rows, but when I check the DB, nothing is actually inserted into the tables.
Here is my current code:
def import_to_db(df, table_name):
    df.to_sql(
        table_name,
        con=engine,
        schema='staging',
        if_exists='replace',
        index=False,
        method='multi'
    )
    print('imported data to staging.{}'.format(table_name))

    with open('/home/kyle/projects/data_pipelines/ahc/sql/etl_{}.sql'.format(table_name)) as fp:
        etl = fp.read()
    result = engine.execute(etl)
    print('moved {} rows to public.{}'.format(result.rowcount, table_name))
When I run the .sql scripts manually, they work fine. I even tried making stored procedures, but that didn't work either. Here is an example of one of the SQL files I'm executing:
--Delete Id's in prod table that are in current staging table
DELETE
FROM public.table
WHERE key IN
(SELECT key FROM staging.table);
--Insert new/old id's into prod table and do any cleaning
INSERT INTO
public.table
SELECT columna, columnb, columnc
FROM staging.table;
Found a solution, although I don't fully understand it.
I added BEGIN; at the top of my script and COMMIT; at the bottom.
This works, but my row count now says -1, so it doesn't help me much for logging.
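The BEGIN;/COMMIT; workaround appears to succeed because SQLAlchemy 1.x's autocommit relies on inspecting the statement text, and a multi-statement script (especially one starting with a comment) can defeat that detection, leaving the changes to be rolled back. The more idiomatic fix is an explicit transaction via engine.begin(), which commits when the block exits cleanly. A minimal runnable sketch, with an in-memory SQLite engine standing in for the Postgres one:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the Postgres engine

# engine.begin() opens a transaction and commits it when the block
# exits without an error, so the changes actually persist.
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (key INTEGER)"))
    conn.execute(text("INSERT INTO t VALUES (1)"))

with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()
print(count)  # 1
```

With psycopg2 (unlike SQLite's DBAPI), the whole multi-statement file can be passed as a single conn.execute(text(etl)) inside the engine.begin() block.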
This question already has answers here:
Execute a PL/SQL function (or procedure) from SQLAlchemy
(2 answers)
stored procedures with sqlAlchemy
(8 answers)
Closed 4 years ago.
I wrote this function to take in three parameters and a database connection, execute a function, and then pull from a view that is populated by the function.
def get_customer_data(start_date, end_date, segment, con):
    """Get customer data."""
    # This executes a process in the database that populates
    # my_view, which we will subsequently pull from.
    sql = """EXEC PROCESS_POPULATE_VIEW ('%s','%s','%s');
    """ % (str(start_date), str(end_date), segment)
    con.execute(sql)
    sql = "SELECT * FROM MY_VIEW"
    return pd.read_sql(sql, con)
This is what I get back:
DatabaseError: (cx_Oracle.DatabaseError) ORA-00900: invalid SQL statement [SQL: "EXEC PROCESS_POPULATE_VIEW ('11-JUN-2018','13-JUN-2018','Carrier');\n "] (Background on this error at: http://sqlalche.me/e/4xp6)
Am I not allowed to call the EXEC command from sqlalchemy? Is there a workaround for this?
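EXEC is SQL*Plus client shorthand, not actual SQL, which is why the server rejects it with ORA-00900. The usual workaround is to wrap the call in an anonymous PL/SQL block, and, while at it, use bind parameters instead of % string formatting. A sketch using the procedure and argument values from the question (the execute call itself is commented out because it needs the live Oracle connection):

```python
# Anonymous PL/SQL block: this is real SQL that Oracle will accept,
# unlike the SQL*Plus-only "EXEC" shorthand.
plsql = (
    "BEGIN "
    "PROCESS_POPULATE_VIEW(:start_date, :end_date, :segment); "
    "END;"
)
params = {"start_date": "11-JUN-2018",
          "end_date": "13-JUN-2018",
          "segment": "Carrier"}
# con.execute(plsql, params)  # requires the Oracle connection from the question
print(plsql)
```

Alternatively, a raw cx_Oracle cursor can invoke the procedure directly with cursor.callproc('PROCESS_POPULATE_VIEW', [start_date, end_date, segment]).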
This question already has answers here:
List of tables, db schema, dump etc using the Python sqlite3 API
(12 answers)
Closed 7 years ago.
I have a problem getting data out of an sqlite3 database: I can't find out the names of the tables or their encoding. When I open the DB in sqlitebrowser, the table names are just unreadable characters.
The connection to the DB is fine.
conn = sqlite3.connect('my.db')
conn_cursor = conn.cursor()
conn.text_factory = str
But how can I get the names of tables and their encoding?
You can use this query to get the table names:
res = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
for name in res.fetchall():
    print(name[0])
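For the encoding half of the question, SQLite exposes the database's text encoding through PRAGMA encoding. A self-contained sketch, with an in-memory database standing in for my.db:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for 'my.db'
cur = conn.cursor()
cur.execute("CREATE TABLE demo (x TEXT)")

# The text encoding the database stores strings in (UTF-8 by default).
encoding = cur.execute("PRAGMA encoding;").fetchone()[0]
print(encoding)

# Table names, as in the query above.
names = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table';")]
print(names)  # ['demo']
```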