SQLite multiple files - Python

Sorry about this unprofessional question, but I'm fairly new to SQLite. Is there any way I can open two files in the same Python command? I open one with db = sqlite3.connect('./cogs/database/users.sqlite'), but it doesn't let me do the same thing in the same command to open another file. For example, I want to:
open db = sqlite3.connect('./cogs/database/users.sqlite') and read something from it, and if so,
open db = sqlite3.connect('./cogs/database/anotherfile.sqlite') and insert into it,
but it always accepts the first file only and ignores the second file.

Assign db1 so it connects to users.sqlite, and db2 so it connects to anotherfile.sqlite. Then you can, for example, SELECT from one and INSERT into the other, with a temporary variable bridging the two.
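A minimal sketch of that idea, assuming a users table in users.sqlite and a backup table in anotherfile.sqlite (both table names and columns are made up here):

import sqlite3

# One connection per file; they are completely independent.
db1 = sqlite3.connect('./cogs/database/users.sqlite')
db2 = sqlite3.connect('./cogs/database/anotherfile.sqlite')

# Read from the first database (hypothetical table/columns)...
rows = db1.execute("SELECT id, name FROM users").fetchall()

# ...and write the same rows into the second one.
db2.executemany("INSERT INTO backup (id, name) VALUES (?, ?)", rows)
db2.commit()

db1.close()
db2.close()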

SQLite databases are single-file based, so no - sqlite3.connect builds a connection object to a single database file.
Even if you build two connection objects, you can't execute queries across them.
If you really need the data from two files at a time, you need to merge that data into one database - or don't use SQLite.

You can execute queries across two SQLite files, but you will need to execute an ATTACH command on the first connection cursor.
conn = sqlite3.connect("users.sqlite")
cur = conn.cursor()
cmd = "ATTACH DATABASE 'anotherfile.sqlite' AS otra"
try:
cur.execute(cmd)
query = """
SELECT
t1.Id, t1.Name, t2.Address
FROM personnel t1
LEFT JOIN otra.location t2
ON t2.PersonId = t1.Id
WHERE t1.Status = 'current'
ORDER BY t1.Name;
"""
cur.execute(query)
rows = cur.fetchall()
except sqlite3.Error as err:
do_something_with(err)
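Once the file is attached, a single statement can also write across the two databases, and the attachment can be dropped when you're done. A short follow-up sketch (the otra.archive table is hypothetical):

# Copy rows from the main database into the attached one in a single statement.
cur.execute("INSERT INTO otra.archive SELECT * FROM personnel WHERE Status != 'current'")
conn.commit()

# Drop the attachment when finished.
cur.execute("DETACH DATABASE otra")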

Related

Is my Postgresql COPY transaction still running?

I have a large (18GB) CSV file I am trying to load into a database with a Python script. My approach is broken down like so:
Load the file into a temporary table; this way the file gets loaded without failing due to any duplicate or primary key issues.
Copy the data from that table into a new table with indexes and conflict handling for potential dupes.
My Python code and the SQL command are as follows:
import sqlalchemy
from sqlalchemy import event

engine = sqlalchemy.create_engine('postgres://adatabasestring')
event.listen(engine, 'connect', init_search_path)  # init_search_path is defined elsewhere in the script
connection = engine.raw_connection()
cursor = connection.cursor()
file_path = "/path/to/file.csv"
print("COPYING file to temp table")
keyword_adding_sql = """
CREATE TEMP TABLE tmp_table
(LIKE working_table INCLUDING DEFAULTS)
ON COMMIT DROP;
COPY tmp_table FROM '{file_path}' DELIMITER ',' CSV HEADER;
INSERT INTO working_table
SELECT *
FROM tmp_table
ON CONFLICT DO NOTHING;
"""
cursor.execute(keyword_adding_sql.format(file_path=file_path))
connection.commit()
The script was run from my local machine, but the file it was copying and the database itself are on the remote server. I ran the script, left it running, and came back to an error saying the server closed the connection unexpectedly. I was worried the transaction had been cancelled or the like, but I queried pg_stat_activity and the query itself is still listed as active.
Will the query ever finish, or is it hung? I checked the working_table size and it looks like all the data is there as expected.
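For reference, the pg_stat_activity check mentioned above can be run from a second connection while the load is in progress; a rough sketch reusing the same engine (the column selection is just illustrative):

# Peek at server-side activity from a separate connection.
check_conn = engine.raw_connection()
check_cur = check_conn.cursor()
check_cur.execute(
    "SELECT pid, state, wait_event_type, query_start "
    "FROM pg_stat_activity WHERE state = 'active';"
)
for row in check_cur.fetchall():
    print(row)
check_conn.close()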

How can I select part of sqlite database using python

I have a very big database and I want to send part of that database (1/1000) to someone I am collaborating with to perform test runs. How can I (a) select 1/1000 of the total rows (or something similar) and (b) save the selection as a new .db file.
This is my current code, but I am stuck.
import sqlite3
import json
from pprint import pprint
conn = sqlite3.connect('C:/data/responses.db')
c = conn.cursor()
c.execute("SELECT * FROM responses;")
Create another database with a similar table structure to the original db. Sample records from the original database and insert them into the new database:
import sqlite3

conn = sqlite3.connect("responses.db")
sample_conn = sqlite3.connect("responses_sample.db")
c = conn.cursor()
c_sample = sample_conn.cursor()

rows = c.execute("select no, nm from responses")
sample_rows = [r for i, r in enumerate(rows) if i % 1000 == 0]  # keep 1 row in 1000

# create a sample table with a similar structure
c_sample.execute("create table responses(no int, nm varchar(100))")

# a parameterized insert avoids quoting/escaping problems
c_sample.executemany("insert into responses (no, nm) values (?, ?)", sample_rows)

sample_conn.commit()
sample_conn.close()
conn.close()
The simplest way to do this would be:
Copy the database file in your filesystem the same as you would any other file (e.g. Ctrl+C then Ctrl+V in Windows, to make responses-partial.db or something).
Then open this new copy in an SQLite editor such as http://sqlitebrowser.org/ and run a delete query to remove however many rows you want. Then you might want to run Compact Database from the File menu.
Close the SQLite editor and confirm the file size is smaller.
Email the copy.
Unless you need to create a repeatable system, I wouldn't bother with doing this in Python. But you could perform similar steps in Python (copy the file, open it, run the delete query, etc.) if you need to; see the sketch below.
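If you do want it scripted, a minimal sketch of those same steps, assuming the table is called responses and that keeping the lowest rowids is an acceptable sample (adjust the DELETE to your schema):

import shutil
import sqlite3

# 1. Copy the database file like any other file.
shutil.copyfile("responses.db", "responses-partial.db")

# 2. Open the copy and keep roughly 1/1000 of the rows (lowest rowids here).
conn = sqlite3.connect("responses-partial.db")
(keep,) = conn.execute("SELECT count(*) / 1000 FROM responses").fetchone()
conn.execute(
    "DELETE FROM responses WHERE rowid NOT IN "
    "(SELECT rowid FROM responses ORDER BY rowid LIMIT ?)",
    (keep,),
)
conn.commit()

# 3. Compact the file so it actually shrinks on disk.
conn.execute("VACUUM")
conn.close()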
The easiest way to do this is to:
make a copy of the database file;
delete 999/1000ths of the data, either by keeping the first few rows:
DELETE FROM responses WHERE SomeID > 1000;
or, if you want really random samples:
DELETE FROM responses
WHERE rowid NOT IN (SELECT rowid
                    FROM responses
                    ORDER BY random()
                    LIMIT (SELECT count(*)/1000 FROM responses));
run VACUUM to reduce the file size.

SQLite Database and Python

I have been given an SQLite file to examine using Python. I have imported the SQLite module and attempted to connect to the database, but I'm not having any luck. I am wondering if I have to actually open the file with "r" as well as connecting to it, i.e. f = open("History.sqlite","r+")? Please see below:
import sqlite3
conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()
results = curs.execute ("Select * From History.sqlite;")
I keep getting this message when I go to run results:
Operational Error: no such table: History.sqlite
An SQLite file is a single data file that can contain one or more tables of data. You appear to be trying to SELECT from the filename instead of the name of one of the tables inside the file.
To learn what tables are in your database you can use any of these techniques:
Download and use the command line tool sqlite3.
Download any one of a number of GUI tools for looking at SQLite files.
Write a SELECT statement against the special table sqlite_master to list the tables (a short sketch follows).
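A minimal sketch of that last option, assuming only that history.sqlite is a valid SQLite file:

import sqlite3

conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()

# sqlite_master has one row per table, index, view, and trigger in the file.
curs.execute("SELECT name FROM sqlite_master WHERE type = 'table';")
for (table_name,) in curs.fetchall():
    print(table_name)

conn.close()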

Database not consistent when accessed from different source files

I have a program in Python composed of 4 source files. One of them is the main file, which imports the other 3. As I work with a small SQLite database, I am creating tables in one of the "secondary" source files, but when I access the database again from the main source file, the tables just populated before are empty.
Can I save the tables' content in a more consistent way? I am quite surprised by what is happening.
So in the main file I typed:
conn = sqlite3.connect("bayes.db")
cur = conn.cursor()
cur.execute("select count(*) from TableA")
print cur.fetchone()
The result is 0 (rows).
Just before, in another source file, I do the same thing and get a count of 8 for TableA.
You must call the commit function in order to save your changes in the database. You can see the full documentation here: http://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.commit
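A minimal sketch of the fix, to be applied wherever the inserts happen in the secondary source file (the column names here are made up):

import sqlite3

conn = sqlite3.connect("bayes.db")
cur = conn.cursor()
cur.execute("INSERT INTO TableA (word, freq) VALUES (?, ?)", ("spam", 3))
conn.commit()  # without this, other connections never see the new rows
conn.close()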

Insert multiple tab-delimited text files into MySQL with Python?

I am trying to create a program that takes a number of tab-delimited text files and works through them one at a time, entering the data they hold into a MySQL database. There are several text files, like movies.txt, which looks like this:
1 Avatar
3 Iron Man
3 Star Trek
and actors.txt, which looks the same, etc. Each text file has upwards of one hundred entries, each with an id and a corresponding value as seen above. I have found a number of code examples on this site and others, but I can't quite get my head around how to implement them in this situation.
So far my code looks something like this ...
import MySQLdb
database_connection = MySQLdb.connect(host='localhost', user='root', passwd='')
cursor = database_connection.cursor()
cursor.execute('CREATE DATABASE library')
cursor.execute('USE library')
cursor.execute('''CREATE TABLE popularity (
                      PersonNumber INT,
                      Category VARCHAR(25),
                      Value VARCHAR(60)
                  )
               ''')
def data_entry(categories):
Every time I try to get the other code I have found working with this, I just get lost completely. Hoping someone can help me out by either showing me what I need to do or pointing me in the direction of some more information.
Examples of the code I have been trying to adapt to my situation are:
import MySQLdb, csv, sys

conn = MySQLdb.connect(host="localhost", user="usr", passwd="pass", db="databasename")
c = conn.cursor()

# delimiter='\t' because the files are tab-delimited
csv_data = csv.reader(open("a.txt"), delimiter="\t")
for row in csv_data:
    print row
    c.execute("INSERT INTO a (first, last) VALUES (%s, %s)", row)

conn.commit()
c.close()
conn.close()
and:
Python File Read + Write
MySQL can read TSV files directly using the mysqlimport utility or by executing the LOAD DATA INFILE SQL command. This will be faster than processing the file in Python and inserting it, but you may want to learn how to do both. Good luck!
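A rough sketch of the LOAD DATA INFILE route from Python, assuming the popularity table from the question and a server/client configured to allow LOCAL infile (the file name and the Category value are illustrative):

import MySQLdb

# local_infile=1 lets the client send the file to the server.
conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='library', local_infile=1)
cursor = conn.cursor()

# Load one tab-delimited file straight into the table; the constant
# 'movies' fills the Category column for every row in this file.
cursor.execute("""
    LOAD DATA LOCAL INFILE 'movies.txt'
    INTO TABLE popularity
    FIELDS TERMINATED BY '\\t'
    LINES TERMINATED BY '\\n'
    (PersonNumber, Value)
    SET Category = 'movies'
""")
conn.commit()
conn.close()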
