Can't find table dbstat - python

c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from dbstat')
or
c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from main.dbstat')
I'm using an SQLite3 database, and I'm trying to get statistics about the database from the dbstat table. This line works fine on Linux but not on Windows. In both cases I made sure that I'm using the same SQLite3 version and the same Python 3 version. I would love to know why this doesn't work on Windows.
Error:
c.execute('select sum(unused), sum(pgsize), sum(payload), count(*) from dbstat') sqlite3.OperationalError: no such table: dbstat

@Shawn ... OK, I figured out what was going on. Python on Windows ships a different sqlite3.dll than Python on Linux. The one on Windows was built without the SQLITE_ENABLE_DBSTAT_VTAB option. To make it work, you can compile sqlite3.dll yourself from the source code as @Shawn said, or you can download the precompiled DLL from the SQLite website, which has the option enabled, and add it to the DLLs folder in the Python install directory.
You can check the SQLite compile-time options by calling
PRAGMA compile_options;
If you want to check the sqlite3.dll bundled with Python, run this Python script:
import sqlite3
conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute('PRAGMA compile_options;')
available_pragmas = c.fetchall()
print(available_pragmas)
conn.close()
The Python 3.7 output will look something like this:
[('COMPILER=msvc-1916',), ('ENABLE_FTS4',), ('ENABLE_FTS5',), ('THREADSAFE=1',)]
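If you just want a quick yes/no check, here is a minimal sketch; compile options describe the sqlite3 library itself rather than a particular database file, so an in-memory connection is enough:
import sqlite3

# Compile options are a property of the loaded sqlite3 library, not of any file
conn = sqlite3.connect(':memory:')
options = [row[0] for row in conn.execute('PRAGMA compile_options;')]
conn.close()

# False here means the bundled sqlite3.dll was built without SQLITE_ENABLE_DBSTAT_VTAB
print('ENABLE_DBSTAT_VTAB' in options)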

Related


Is there a SQLite equivalent to COPY from PostgreSQL?

I have local tab-delimited raw data files "...\publisher.txt" and "...\field.txt" that I would like to load into a local SQLite database. The corresponding tables are already defined in the local database. I am accessing the database through the python-sql library in an IPython notebook. Is there a simple way to load these text files into the database?
The CLI function readfile() doesn't seem to work in a Python context:
INSERT INTO Pub(k,p) VALUES('pubFile.txt',readfile('pubFile.txt'));
Throws error:
(sqlite3.OperationalError) no such function: readfile
[SQL: INSERT INTO Pub(k,p) VALUES('pubFile.txt',readfile('pubFile.txt'));]
(Background on this error at: http://sqlalche.me/e/e3q8)
No, SQLite no longer has such a command. The feature was removed and replaced by the SQLite CLI's .import command.
See the official documentation:
The COPY command is available in SQLite version 2.8 and earlier. The COPY command has been removed from SQLite version 3.0 due to complications in trying to support it in a mixed UTF-8/16 environment. In version 3.0, the command-line shell contains a new command .import that can be used as a substitute for COPY.
The COPY command is an extension used to load large amounts of data into a table. It is modeled after a similar command found in PostgreSQL. In fact, the SQLite COPY command is specifically designed to be able to read the output of the PostgreSQL dump utility pg_dump so that data can be easily transferred from PostgreSQL into SQLite.
A sample command to load a text file into an SQLite database via the CLI looks like this:
sqlite3 test.db ".import test.txt test_table_name"
You may read the input file into a string and then insert it:
sql = "INSERT INTO Pub (k, p) VALUES ('pubFile.txt', ?)"
with open ("pubFile.txt", "r") as myfile:
data = '\n'.join(myfile.readlines())
cur = conn.cursor()
cur.execute(sql, (data,))
conn.commit()
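If the goal is to load the tab-delimited rows into the already-defined tables rather than storing the whole file as a single value, here is a minimal sketch using the standard csv module; the database path, table name, and two-column INSERT are placeholders you would adjust to the real schema:
import csv
import sqlite3

conn = sqlite3.connect('local.db')  # placeholder path to the local database
cur = conn.cursor()

with open('publisher.txt', 'r', newline='') as f:
    reader = csv.reader(f, delimiter='\t')  # tab-delimited input
    rows = [tuple(row) for row in reader]

# 'Publisher' and the two parameter markers assume a two-column table; match your real columns
cur.executemany('INSERT INTO Publisher VALUES (?, ?)', rows)
conn.commit()
conn.close()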

Can't connect to SQL Server DB from Pyodbc

I am having a difficult time trying to connect to a SQL Server database on Linux using pyodbc. I have an odbc.ini file entry created. I started with this:
import pyodbc
conn = pyodbc.connect('DSN=DSN;Database=DB;UID=UID;PWD=PWD')
cursor = conn.cursor()
cursor.execute('SELECT count(*) FROM dbo.tableA')
for row in cursor.fetchall():
    print(row)
which throws this error:
RuntimeError: Unable to set SQL_ATTR_CONNECTION_POOLING attribute.
I googled that error and added this line after reading some recommendations:
pyodbc.pooling = False
So script changed to this:
import pyodbc
pyodbc.pooling = False
conn = pyodbc.connect('DSN=DSN;Database=DB;UID=UID;PWD=PWD')
cursor = conn.cursor()
cursor.execute('SELECT count(*) FROM dbo.tableA')
for row in cursor.fetchall():
    print(row)
Which resulted in this:
pyodbc.InterfaceError: ('IM003', '[IM003] 䑛瑡䑡物捥嵴佛䉄⁃楬嵢匠数楣楦摥搠楲敶\u2072潣汵\u2064潮⁴敢氠慯敤d\uffff\uffff㢸ꔻ罱\x00\ue5b8鮫罱\x00㳰ꔻ罱\x00\uffff\uffff罱\x00\x00\x00\x00\x00鳭ꕞ罱\x00塰ꕉ罱 (0) (SQLDriverConnect)')
At the suggestion of a coworker I added these 2 lines AFTER the pyodbc.connect line:
conn.setdecoding(pyodbc.SQL_CHAR, encoding='latin1', to=str)
conn.setencoding(str, encoding='latin1')
I tried that with both latin1 and utf-8. Neither works; it still throws the same interface error with the Chinese characters.
Any ideas?
I had a similar issue with the same description, RuntimeError: Unable to set SQL_ATTR_CONNECTION_POOLING attribute. I had no clue what was happening or why. After a lot of debugging I was able to figure out why.
The simple answer is:
Reinstall the unixODBC driver and/or the SQL drivers.
The reason why:
When you install the ODBC driver first and then the SQL-related drivers, the installation can sometimes override the symlinks on a Unix system. You can find more info on this in the official pyodbc GitHub issue #847.
You can simply uninstall and then do:
conda install unixodbc
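Once the drivers are reinstalled, a minimal retry sketch, reusing the DSN and credential placeholders from the question; note that pyodbc.pooling has to be set before the first connect() call or it has no effect:
import pyodbc

pyodbc.pooling = False  # must run before the first connection is opened

conn = pyodbc.connect('DSN=DSN;Database=DB;UID=UID;PWD=PWD')
cursor = conn.cursor()
cursor.execute('SELECT count(*) FROM dbo.tableA')
for row in cursor.fetchall():
    print(row)
conn.close()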

SQLite Database and Python

I have been given an SQLite file to examine using Python. I have imported the sqlite3 module and attempted to connect to the database, but I'm not having any luck. I am wondering if I have to actually open the file as "r" as well as connecting to it? Please see below, i.e. f = open("History.sqlite","r+")
import sqlite3
conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()
results = curs.execute ("Select * From History.sqlite;")
I keep getting this message when I go to run results:
OperationalError: no such table: History.sqlite
An SQLite file is a single data file that can contain one or more tables of data. You appear to be trying to SELECT from the filename instead of the name of one of the tables inside the file.
To learn what tables are in your database you can use any of these techniques:
Download and use the command line tool sqlite3.
Download any one of a number of GUI tools for looking at SQLite files.
Write a SELECT statement against the special table sqlite_master to list the tables, as sketched below.
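For that last option, here is a minimal sketch against the file from the question; the table names it prints are whatever the file actually contains:
import sqlite3

conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()

# sqlite_master has one row per table, index, view and trigger stored in the file
for (name,) in curs.execute("SELECT name FROM sqlite_master WHERE type = 'table';"):
    print(name)

conn.close()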

how to deal with .mdb access files with python

Can someone point me in the right direction on how to open a .mdb file in Python? I normally like including some code to start off a discussion, but I don't know where to start. I work with MySQL a fair bit using Python. I was wondering if there is a way to work with .mdb files in a similar way?
Below is some code I wrote for another SO question.
It requires the 3rd-party pyodbc module.
This very simple example will connect to a table and export the results to a file.
Feel free to expand upon your question with any more specific needs you might have.
import csv, pyodbc
# set up some constants
MDB = 'c:/path/to/my.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
PWD = 'pw'
# connect to db
con = pyodbc.connect('DRIVER={};DBQ={};PWD={}'.format(DRV,MDB,PWD))
cur = con.cursor()
# run a query and get the results
SQL = 'SELECT * FROM mytable;' # your query goes here
rows = cur.execute(SQL).fetchall()
cur.close()
con.close()
# you could change the mode from 'w' to 'a' (append) for any subsequent queries
with open('mytable.csv', 'w', newline='') as fou:  # newline='' avoids extra blank lines on Windows
    csv_writer = csv.writer(fou)  # default field delimiter is ","
    csv_writer.writerows(rows)
There's the meza library by Reuben Cummings which can read Microsoft Access databases through mdbtools.
Installation
# The mdbtools package for Python deals with MongoDB, not MS Access.
# So install the package through `apt` if you're on Debian/Ubuntu
$ sudo apt install mdbtools
$ pip install meza
Usage
>>> from meza import io
>>> records = io.read('database.mdb') # only file path, no file objects
>>> print(next(records))
Table1
Table2
…
This looks similar to a previous question:
What do I need to read Microsoft Access databases using Python?
http://code.activestate.com/recipes/528868-extraction-and-manipulation-class-for-microsoft-ac/
Answer there should be useful.
For a solution that works on any platform that can run Java, consider using Jython or JayDeBeApi along with the UCanAccess JDBC driver. For details, see the related question
Read an Access database in Python on non-Windows platform (Linux or Mac)
In addition to bernie's response, I would add that it is possible to recover the schema of the database. The code below lists the tables (b[2] contains the name of the table).
con = pyodbc.connect('DRIVER={};DBQ={};PWD={}'.format(DRV,MDB,PWD))
cur = con.cursor()
tables = list(cur.tables())
print('tables')
for b in tables:
    print(b)
The code below lists all the columns from all the tables:
colDesc = list(cur.columns())
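To group those descriptions per table, a small follow-up sketch; it reuses colDesc from the line above and relies on pyodbc exposing table_name and column_name on each row of the SQLColumns result set:
from collections import defaultdict

columns_by_table = defaultdict(list)
for col in colDesc:
    columns_by_table[col.table_name].append(col.column_name)

for table, cols in columns_by_table.items():
    print(table, cols)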
This code will convert all the tables to CSV.
Happy Coding
import pandas_access as mdb  # assumption: `mdb` here is the pandas_access package (pip install pandas_access)

for tbl in mdb.list_tables("file_name.MDB"):
    df = mdb.read_table("file_name.MDB", tbl)
    df.to_csv(tbl + '.csv')
