Python and sqlite3 - importing and exporting databases

I'm trying to write a script to import a database file. I wrote the script to export the file like so:
import sqlite3

con = sqlite3.connect('../sqlite.db')
with open('../dump.sql', 'w') as f:
    for line in con.iterdump():
        f.write('%s\n' % line)
Now I want to be able to import that database. I have tried:
import sqlite3
con = sqlite3.connect('../sqlite.db')
f = open('../dump.sql','r')
str = f.read()
con.execute(str)
but execute() won't let me run more than one statement. Is there a way to get it to run an SQL script directly?

sql = f.read() # watch out for built-in `str`
cur.executescript(sql)
Documentation.

Try using
con.executescript(str)
Documentation:
Connection.executescript(sql_script)
This is a nonstandard shortcut that creates an intermediate cursor object by calling the cursor method, then calls the cursor's executescript method with the parameters given.
Or create the cursor first
import sqlite3

con = sqlite3.connect('../sqlite.db')
f = open('../dump.sql', 'r')
sql = f.read()  # avoid naming this `str`, which shadows the built-in
cur = con.cursor()
cur.executescript(sql)  # execute() raises on multiple statements; executescript() handles a whole script
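Putting the two halves together, a minimal round-trip sketch (the ../restored.db target is an assumption, chosen so the dump isn't replayed into a database that already contains the tables):
import sqlite3

# export: write every statement needed to rebuild the database
src = sqlite3.connect('../sqlite.db')
with open('../dump.sql', 'w') as f:
    for line in src.iterdump():
        f.write('%s\n' % line)
src.close()

# import: replay the whole script into a fresh database in one call
dst = sqlite3.connect('../restored.db')  # assumed fresh target database
with open('../dump.sql') as f:
    dst.executescript(f.read())
dst.close()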

Related

How to open a BLOB file from MySQL using Python without saving?

Code:
import mysql.connector
import sys

def write_file(data, filename):
    with open(filename, 'wb') as f:
        f.write(data)

sampleNum = 0
db_config = mysql.connector.connect(user='root', password='test',
                                    host='localhost',
                                    database='technical')

# query blob data from the authors table
cursor = db_config.cursor()
try:
    sampleNum = sampleNum + 1
    query = "SELECT fileAttachment FROM document_control WHERE id=%s"
    cursor.execute(query, (sampleNum,))
    file = cursor.fetchone()[0]
    write_file(file, 'User' + str(sampleNum) + '.docx')
except AttributeError as e:
    print(e)
finally:
    cursor.close()
What it does
The above code gets the file stored in MySQL as a BLOB and saves it as a .docx file into a folder.
Question
But instead of saving it, I want to view it and then delete it. Am I able to simply open the BLOB in Word without saving it?
If so, how can it be done?
In general, passing binary data like a BLOB entity as a file-like object can be done with the built-in module io, for example:
import io
f = io.BytesIO(data)
# f now can be used anywhere a file-object is expected
But your question actually comes down more to MS Word's ability to open files that aren't saved anywhere on disk. I don't think it can do that. Best practice would probably be to generate a temporary file using tempfile, so that you can at least expect the system to clean it up eventually:
import tempfile

with tempfile.NamedTemporaryFile(suffix='.docx', delete=False) as f:
    f.write(data)
print(f.name)
Edit:
In your code in particular, you could try the following to store the data in a temporary file and automatically open it in MS Word:
import tempfile, subprocess

WINWORD_PATH = r'C:\Program Files (x86)\Microsoft Office\Office14\winword.exe'

def open_as_temp_docx(data):
    with tempfile.NamedTemporaryFile(suffix='.docx', delete=False) as f:
        f.write(data)
    subprocess.Popen([WINWORD_PATH, f.name])
cursor = db_config.cursor()
try:
    sampleNum = sampleNum + 1
    query = "SELECT fileAttachment FROM document_control WHERE id=%s"
    cursor.execute(query, (sampleNum,))
    open_as_temp_docx(cursor.fetchone()[0])
finally:
    cursor.close()
I don't have a Windows machine with MS Word at hand, so I can't test this. The path to winword.exe on your machine may vary, so make sure it is correct.
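If hard-coding the path to winword.exe is a concern, an alternative sketch (my suggestion, not the original approach) uses os.startfile(), which is Windows-only and opens the file with whatever application is registered for the .docx extension:
import os, tempfile

def open_as_temp_docx(data):
    # delete=False keeps the file on disk after the handle closes,
    # so the associated application can open it
    with tempfile.NamedTemporaryFile(suffix='.docx', delete=False) as f:
        f.write(data)
    os.startfile(f.name)  # Windows-only; uses the .docx file association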
Edit:
If it is important to delete the file as soon as MS Word closes, the following should work:
import tempfile, subprocess, os

WINWORD_PATH = r'C:\Program Files (x86)\Microsoft Office\Office14\winword.exe'

def open_as_temp_docx(data):
    with tempfile.NamedTemporaryFile(suffix='.docx', delete=False) as f:
        f.write(data)
    subprocess.Popen([WINWORD_PATH, f.name]).wait()  # block until Word exits
    if os.path.exists(f.name):
        os.unlink(f.name)

Reading sql query from a json file

I want to fetch the SQL query from a text file and run it in a Python program. This is my code:
csvfilelist = os.listdir(inputPath)
mycursor = mydb.cursor()
for csvfilename in csvfilelist:
    with open(inputPath + csvfilename, 'r') as csvFile:
        reader = csv.reader(csvFile)
        for row in reader:
            '''r = "INSERT INTO Terminate.RAW VALUES('%s','%s','%s','%s','%s')" % (row[0],row[1],row[2],row[3],row[4],row[5])'''
            try:
                result = mycursor.execute(r)
                mydb.commit()
            except mysql.connector.Error as err:
                print(err)
    csvFile.close()
Say you have an INI file containing the query
[main]
query=INSERT INTO Terminate.RAW VALUES('%s','%s','%s','%s','%s')
you may load it with
import configparser

config = configparser.ConfigParser()
config.read('myfile.ini')
query = config['main']['query']
and later fill it in with
r = query % (row[0], row[1], row[2], row[3], row[4], row[5])
As pointed out in the comments, interpolating values with "%" is not a good solution; you should bind your variables when executing the query. With mysql-connector the placeholders are unquoted %s markers and the values go in a second argument:
query = "INSERT INTO Terminate.RAW VALUES(%s,%s,%s,%s,%s,%s)"
mycursor.execute(query, (row[0], row[1], row[2], row[3], row[4], row[5]))
Edit: sorry, I just read that your file is JSON, not INI. You wrote that in the title, not in the post. If so, you should use the json module instead of the configparser module.
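For example, a minimal sketch with a hypothetical queries.json holding the statement under a "query" key:
import json

# hypothetical file contents:
# {"query": "INSERT INTO Terminate.RAW VALUES(%s,%s,%s,%s,%s,%s)"}
with open('queries.json') as f:
    query = json.load(f)['query']

# bind the CSV row values rather than interpolating them with %
mycursor.execute(query, (row[0], row[1], row[2], row[3], row[4], row[5]))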

how to render data from postgresql to csv in python flask app?

I am new to Python and trying to write some code. I am trying to run a select query but I am not able to render the data to a CSV file.
this is the psql query :
# \copy (
# SELECT
# sr.imei,
# sensors.label,sr.created_at,
# sr.received_at,
# sr.type_id,
#
but how do I write it in Python to render it to a CSV file?
thanking you,
Vikas
sql = "COPY (SELECT * FROM sensor_readings WHERE reading=blahblahblah) TO STDOUT WITH CSV DELIMITER ';'"
with open("/tmp/sensor_readings.csv", "w") as file:
cur.copy_expert(sql, file)
I think you just need to change the sql for your use, and it should work.
Install psycopg2 via pip install psycopg2, then you need something like this:
import csv
import psycopg2

query = """
SELECT
    sr.imei,
    sensors.label, sr.created_at,
    sr.received_at,
    sr.type_id,
    sr.data
FROM sensor_readings AS sr
LEFT JOIN sensors ON sr.imei = sensors.imei
WHERE sr.imei NOT LIKE 'test%' AND sr.created_at > '2019-02-01'
ORDER BY sr.received_at DESC
"""

conn = psycopg2.connect(database="routing_template", user="postgres", host="localhost", password="xxxx")
cur = conn.cursor()
cur.execute(query)

with open('result.csv', 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for row in cur.fetchall():
        writer.writerow(row)

cur.close()
conn.close()
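Since the question mentions a Flask app: to serve the result as a CSV download rather than writing it to disk, a hedged sketch builds the CSV in memory and returns it as a response (the route name and app wiring are assumptions; query is the same string as above):
import csv
import io
import psycopg2
from flask import Flask, Response

app = Flask(__name__)

@app.route('/sensor-readings.csv')
def sensor_readings_csv():
    conn = psycopg2.connect(database="routing_template", user="postgres",
                            host="localhost", password="xxxx")
    cur = conn.cursor()
    cur.execute(query)  # the same query string as above

    # build the CSV in memory instead of writing result.csv to disk
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in cur.fetchall():
        writer.writerow(row)
    cur.close()
    conn.close()

    return Response(buf.getvalue(), mimetype='text/csv',
                    headers={'Content-Disposition': 'attachment; filename=result.csv'})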

Dump a MySQL table to a CSV file and save it in a given location using Python script

I used the following Python script to dump a MySQL table to a CSV file. But it was saved in the same folder in which the Python script is saved. I want to save it in another folder. How can I do it? Thank you.
print 'Writing database to csv file'

import MySQLdb
import csv
import time
import datetime
import os

currentDate = datetime.datetime.now().date()
user = ''
passwd = ''
host = ''
db = ''
table = ''

con = MySQLdb.connect(user=user, passwd=passwd, host=host, db=db)
cursor = con.cursor()
query = "SELECT * FROM %s;" % table
cursor.execute(query)

with open('Data on %s.csv' % currentDate, 'w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)

print 'Done'
Change this:
with open('/full/path/tofile/Data on %s.csv' % currentDate, 'w') as f:
This solves your problem X. But you have a problem Y, namely: "How do I efficiently dump CSV data from MySQL without having to write a lot of code?"
The answer to problem Y is SELECT ... INTO OUTFILE.
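For illustration, a hedged sketch of that approach within the same script (assumptions: the output path is written by the MySQL server itself, so it must be writable by the server, the account needs the FILE privilege, and secure_file_priv may restrict where the file can go):
# the server writes the CSV directly; no Python loop over rows is needed
query = r"""
SELECT * FROM %s
INTO OUTFILE '/var/lib/mysql-files/data.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
""" % table
cursor.execute(query)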

Internet History Script For Google Chrome

I'm not looking for a "best" or most efficient script to do this. But I was wondering if there exists a script to pull Internet History for a day's time from, say, Google Chrome and log it to a txt file. I'd prefer if it were in Python or MATLAB.
If you guys have a different method using one of these languages utilizing locally stored browser history data from Google Chrome, I'd be all ears for that too.
I'd be super-thankful if anyone could help with this!
From my understanding, this seems easy to do. I don't know if it's exactly what you want.
Internet history from Chrome is stored at a specific path. Take Windows 7 for example: it's stored at C:\Users\[username]\AppData\Local\Google\Chrome\User Data\Default\History
In Python:
# note the raw string: without it, sequences like \U in the path are treated as escapes
f = open(r'C:\Users\[username]\AppData\Local\Google\Chrome\User Data\Default\History', 'rb')
data = f.read()
f.close()
f = open('your_expected_file_path', 'w')
f.write(repr(data))
f.close()
Building on what m170897017 said:
That file is an sqlite3 database, so taking repr() of its contents won't do anything meaningful.
You need to open the sqlite database and run SQL against it to get the data out. In Python, use the sqlite3 module from the stdlib to do this.
Here's a related SuperUser question that shows some SQL for getting URLs and timestamps: https://superuser.com/a/694283
To dodge sqlite3/sqlite, I used the Google Chrome extension "Export History" to export everything into a CSV file, and subsequently loaded that CSV file into cells within MATLAB.
Export History
My code turned out to be:
file_o = ['history.csv'];
fid = fopen(file_o, 'rt');
fmt = [repmat('%s', 1, 6) '%*[^\n]'];
C = textscan(fid,fmt,'Delimiter',',','CollectOutput',true);
C_unpacked = C{:};
C_urls = C_unpacked(1:4199, 5);
Here's another one:
import csv, sqlite3, os
from datetime import datetime, timedelta

# raw string so the backslashes in the path aren't treated as escapes
connection = sqlite3.connect(os.getenv("APPDATA") + r"\..\Local\Google\Chrome\User Data\Default\History")
connection.text_factory = str
cur = connection.cursor()

output_file = open('chrome_history.csv', 'wb')
csv_writer = csv.writer(output_file)
headers = ('URL', 'Title', 'Visit Count', 'Date (GMT)')
csv_writer.writerow(headers)

# Chrome stores timestamps as microseconds since 1601-01-01
epoch = datetime(1601, 1, 1)
for row in (cur.execute('select url, title, visit_count, last_visit_time from urls')):
    row = list(row)
    url_time = epoch + timedelta(microseconds=row[3])
    row[3] = url_time
    csv_writer.writerow(row)
This isn't exactly what you are looking for; however, by using this you can manipulate the database tables to your liking.
import os
import sqlite3

def Find_path():
    User_profile = os.environ.get("USERPROFILE")
    # usually where the Chrome history file is located; change it if you need to
    History_path = User_profile + r"\AppData\Local\Google\Chrome\User Data\Default\History"
    return History_path

def Main():
    data_base = Find_path()
    con = sqlite3.connect(data_base)  # connect to the database
    c = con.cursor()
    c.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")  # change this to your preferred query
    print(c.fetchall())

if __name__ == '__main__':
    Main()
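The original question asked for a day's worth of history written to a text file; a hedged sketch combining the pieces above (the copy step is an assumption, since Chrome locks the live History database while it is running):
import os
import shutil
import sqlite3
import tempfile
from datetime import datetime, timedelta

def dump_last_day(history_path, out_path='history_last_day.txt'):
    # work on a copy: Chrome keeps the live History database locked
    tmp_copy = os.path.join(tempfile.gettempdir(), 'History_copy')
    shutil.copy2(history_path, tmp_copy)

    # Chrome timestamps are microseconds since 1601-01-01
    epoch = datetime(1601, 1, 1)
    cutoff = int((datetime.utcnow() - timedelta(days=1) - epoch).total_seconds() * 1e6)

    con = sqlite3.connect(tmp_copy)
    cur = con.cursor()
    cur.execute('SELECT url, title, last_visit_time FROM urls '
                'WHERE last_visit_time > ? ORDER BY last_visit_time', (cutoff,))
    with open(out_path, 'w') as f:
        for url, title, ts in cur.fetchall():
            f.write('%s\t%s\t%s\n' % (epoch + timedelta(microseconds=ts), title, url))
    con.close()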
