I'm working with PySide. I have an SQLite database as the default database:
from PySide import QtSql

db1 = QtSql.QSqlDatabase.addDatabase("QSQLITE")
db1.setDatabaseName(path_of_db)
db1.open()
and a second database with connection name "second_db":
db2 = QtSql.QSqlDatabase.addDatabase("QSQLITE", "second_db")
db2.setDatabaseName(path_of_db)
db2.open()
query = QtSql.QSqlQuery(db2)  # pass the QSqlDatabase object; a plain string is treated as a query on the default connection
query.exec_("SELECT * FROM table_name")
Now I want to insert records from a table in db1 into a table in db2. I have a model for db1, and I know I can insert record by record through the models. I also thought about writing the records from db1 to a file or variable and then inserting them into db2.
Is there a simpler or quicker solution, maybe a single SQL query? How can I solve this?
Thank you for your help ;-)
I'd probably try row-by-row insertion and see how fast it is; it might be all you need. A few tips on bulk insertions are here; basically, if you wrap the inserts in BEGIN and END TRANSACTION and use an in-memory journal, you greatly speed them up. Oh, and create indexes after you're done, not before.
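For illustration, a minimal sketch of that row-by-row copy wrapped in a single transaction, using the two connections from the question (the column names col_a and col_b are placeholders):

# Sketch only: copy all rows of table_name from db1 into db2 in one transaction.
src = QtSql.QSqlQuery(db1)
src.exec_("SELECT col_a, col_b FROM table_name")
db2.transaction()  # BEGIN TRANSACTION on the target connection
dest = QtSql.QSqlQuery(db2)
dest.prepare("INSERT INTO table_name (col_a, col_b) VALUES (?, ?)")
while src.next():
    dest.addBindValue(src.value(0))
    dest.addBindValue(src.value(1))
    dest.exec_()
db2.commit()  # END TRANSACTION

Alternatively, since both connections point at SQLite files, a single SQL statement can do the whole copy via ATTACH ('/path/to/second.db' is a placeholder for db2's file path):

attach = QtSql.QSqlQuery(db1)
attach.exec_("ATTACH DATABASE '/path/to/second.db' AS other")
attach.exec_("INSERT INTO other.table_name SELECT * FROM table_name")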
I am currently learning SQL for one of my projects, and the site I'm learning from advised me to use DB Browser to look at my database contents. However, I can't see the data inside the database. This is what my code looks like: I create a table and then try to write some values into it. It creates the database successfully, but the data doesn't show up.
import sqlite3 as sql
connection = sql.connect("points.db")
cursor = connection.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS servers (server_id TEXT, name TEXT, exp INTEGER)")
cursor.execute("INSERT INTO servers VALUES ('848117357214040104', 'brknarsy', 20)")
Can you check that your data is actually inserted?
Something like this at the end:

cursor.execute("SELECT * FROM servers")
rows = cursor.fetchall()
for row in rows:
    print(row)

Perhaps SQLite browser just needs a refresh.
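If the rows do show up here but still not in DB Browser, the likely cause is that the transaction was never committed: sqlite3 opens an implicit transaction, and uncommitted changes are visible inside your own connection but not persisted to the file, so external tools can't see them. A one-line sketch of the fix:

connection.commit()  # persist the INSERT so DB Browser can see it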
I have an SQLite database in Python (sqlite3) and I'm trying to delete rows from it depending on a selected value.
After executing this code I get the result I want while I'm still inside the function. But once I leave the function and print the database, the delete query has not taken effect, although the insert has. Can anyone see why?
def datamanip():
    selected = SecTree.focus()
    values = SecTree.item(selected, 'text')
    conn = sqlite3.connect('DataStore.db')
    c = conn.cursor()
    query = 'DELETE FROM Limits WHERE TypeCard=(?)'
    c.execute(query, (values,))
    c.execute("INSERT INTO Limits VALUES (:TypeCard,:CreaseMaxC,:CreaseMinC,:CreaseMaxA,:CreaseMinA,:WidthMaxC,:WidthMinC,:WidthMaxA,:WidthMinA)",
              {'TypeCard': values,
               'CreaseMaxC': w2data,
               'CreaseMinC': wdata,
               'CreaseMaxA': w4data,
               'CreaseMinA': w3data,
               'WidthMaxC': w6data,
               'WidthMinC': w5data,
               'WidthMaxA': w8data,
               'WidthMinA': w7data
               }
              )
    c.execute('SELECT * FROM Limits')
    records = c.fetchall()
    print(records)
EDIT:
The connection must be committed after the deletion;
conn.commit()
solved the problem.
I am trying to execute an SQL query from a file using SQLAlchemy.
When I run the queries, I get a result saying that x rows were affected, but when I check the database nothing was actually inserted into the tables.
Here is my current code:
def import_to_db(df, table_name):
    df.to_sql(
        table_name,
        con=engine,
        schema='staging',
        if_exists='replace',
        index=False,
        method='multi'
    )
    print('imported data to staging.{}'.format(table_name))
    with open('/home/kyle/projects/data_pipelines/ahc/sql/etl_{}.sql'.format(table_name)) as fp:
        etl = fp.read()
    result = engine.execute(etl)
    print('moved {} rows to public.{}'.format(result.rowcount, table_name))
When I run the .sql scripts manually, they work fine. I even tried making stored procedures, but that didn't work either. Here is an example of one of the SQL files I'm executing:
-- Delete IDs in the prod table that are in the current staging table
DELETE
FROM public.table
WHERE key IN
(SELECT key FROM staging.table);
-- Insert new/old IDs into the prod table and do any cleaning
INSERT INTO
public.table
SELECT columna, columnb, columnc
FROM staging.table;
Found a solution, although I don't fully understand it.
I added BEGIN; at the top of my script and COMMIT; at the bottom.
This works, but my row count now says -1, so it doesn't help me much for logging.
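For what it's worth, a sketch of an alternative that avoids editing every .sql file: run the script inside an explicit SQLAlchemy transaction, which commits on exit (this assumes the same pre-2.0 engine.execute style as the question):

# Sketch only: let SQLAlchemy manage the transaction instead of BEGIN;/COMMIT;
with engine.begin() as conn:  # opens a transaction, commits on success
    result = conn.execute(etl)
    print('moved {} rows to public.{}'.format(result.rowcount, table_name))

Note that rowcount still reflects only the last statement of the script, so the logging caveat remains.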
Does anybody know a good way, in Python, to copy a large number of tables (around 100) from one database to another in SQL Server?
I ask about Python because, due to restrictions at my place of employment, I cannot copy tables across databases inside SQL Server itself.
Here is a simple Python script that copies one table from one database to another. I am wondering if there is a better way to write it if I want to copy 100 tables.
print('Initializing...')
import pandas as pd
import sqlalchemy
import pyodbc
db1 = sqlalchemy.create_engine("mssql+pyodbc://user:password@db_one")
db2 = sqlalchemy.create_engine("mssql+pyodbc://user:password@db_two")
print('Writing...')
query = '''SELECT * FROM [dbo].[test_table]'''
df = pd.read_sql(query, db1)
df.to_sql('test_table', db2, schema='dbo', index=False, if_exists='replace')
print('(1) [test_table] copied.')
SQLAlchemy is actually a good tool for creating identical tables in the second database:

from sqlalchemy import MetaData, Table

metadata = MetaData()
table = Table('test_table', metadata, autoload=True, autoload_with=db1)
table.create(bind=db2)  # recreate the reflected table on the second engine
This method will also reproduce the correct keys, indexes, and foreign keys. Once the needed tables are created, you can move the data either with select/insert, if the tables are relatively small, or with the bcp utility to dump each table to disk and then load it into the second database (much faster, but more work to get right).
If using select/insert, it is better to insert in batches of 500 records or so, as sketched below.
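A minimal sketch of that select/insert route, assuming the empty table already exists in db2 as created above (the table name and batch size are illustrative):

import pandas as pd

# Sketch only: stream rows in batches and append them to the copy in db2.
for chunk in pd.read_sql("SELECT * FROM [dbo].[test_table]", db1, chunksize=500):
    chunk.to_sql('test_table', db2, schema='dbo', index=False, if_exists='append')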
You can do something like this:
tabs = pd.read_sql("SELECT table_name FROM INFORMATION_SCHEMA.TABLES", db1)
for tab in tabs['table_name']:
    pd.read_sql("select * from {}".format(tab), db1).to_sql(tab, db2, index=False)
But it might be awfully slow. Use SQL Server tools to do this job.
Consider using the sp_addlinkedserver procedure to link one SQL Server to another. After that you can execute:
SELECT * INTO server_name...table_name FROM table_name
for each table in the db1 database.
PS: this might be done in Python + SQLAlchemy as well; see the sketch below...
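For instance, a rough sketch of that idea (LINKED_SRV is a placeholder for the name registered via sp_addlinkedserver, and the four-part name elides the database and schema just like the query above):

# Sketch only: issue one SELECT ... INTO per table over the linked server.
tabs = pd.read_sql("SELECT table_name FROM INFORMATION_SCHEMA.TABLES "
                   "WHERE table_type = 'BASE TABLE'", db1)
with db2.begin() as conn:  # run the copies in one transaction on the target
    for tab in tabs['table_name']:
        conn.execute("SELECT * INTO [{0}] FROM [LINKED_SRV]...[{0}]".format(tab))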
I have a Python script that uses the Psycopg adapter; I am parsing a JSON array and inserting it into my PostgreSQL database.
for item in data["SchoolJSONData"]:
    mId = item.get("Id")
    mNumofRooms = item.get("NumofRooms")
    mFloors = item.get("Floors")
    con = None
    con = psycopg2.connect("dbname='database' user='dbuser'")
    cur = con.cursor()
    cur.execute('INSERT INTO Schools(Id, NumofRooms, Floors) VALUES(%s, %s, %s)', (mId, mNumofRooms, mFloors))
    con.commit()
Every time I run the script again, I get the following:
psycopg2.IntegrityError: duplicate key value violates unique constraint "schools_pkey"
How can I run the insert script so that it will ignore existing entries in the database?
EDIT: Thanks for the replies all... I am trying to NOT overwrite any data, only ADD (if the PK is not already in the table), and ignore any errors. In my case I will only be adding new entries, never updating data.
There is no single way to solve this problem, and it has little to do with Python: it is a valid exception raised by the database (not just Postgres; all databases will do the same).
But you can try/except this exception and continue smoothly.
OR
you can run something like SELECT count(*) FROM Schools WHERE Id = %s first, to check that the row does not already exist.
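Since you say you only ever add new rows and never update, a cleaner sketch is PostgreSQL's native conflict handling, assuming PostgreSQL 9.5+ and that Id is the column behind schools_pkey:

# Sketch only: skip rows whose primary key already exists.
cur.execute(
    'INSERT INTO Schools(Id, NumofRooms, Floors) VALUES (%s, %s, %s) '
    'ON CONFLICT (Id) DO NOTHING',
    (mId, mNumofRooms, mFloors),
)
con.commit()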