Can someone point me in the right direction on how to open a .mdb file in Python? I normally like including some code to start off a discussion, but I don't know where to start here. I work with MySQL from Python a fair bit, and I was wondering if there is a way to work with .mdb files in a similar way?
Below is some code I wrote for another SO question.
It requires the 3rd-party pyodbc module.
This very simple example will connect to a table and export the results to a file.
Feel free to expand upon your question with any more specific needs you might have.
import csv
import pyodbc

# set up some constants
MDB = 'c:/path/to/my.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
PWD = 'pw'

# connect to db
con = pyodbc.connect('DRIVER={};DBQ={};PWD={}'.format(DRV, MDB, PWD))
cur = con.cursor()

# run a query and get the results
SQL = 'SELECT * FROM mytable;'  # your query goes here
rows = cur.execute(SQL).fetchall()
cur.close()
con.close()

# you could change the mode from 'w' to 'a' (append) for any subsequent queries;
# newline='' keeps the csv module from inserting blank lines on Windows
with open('mytable.csv', 'w', newline='') as fou:
    csv_writer = csv.writer(fou)  # default field delimiter is ","
    csv_writer.writerows(rows)
There's the meza library by Reuben Cummings, which can read Microsoft Access databases through mdbtools.
Installation
# Note: the mdbtools package on PyPI deals with MongoDB, not MS Access,
# so install mdbtools through your system package manager instead
# (apt on Debian/Ubuntu):
$ sudo apt install mdbtools
$ pip install meza
Usage
>>> from meza import io
>>> records = io.read('database.mdb') # only file path, no file objects
>>> print(next(records))
Table1
Table2
…
This looks similar to a previous question:
What do I need to read Microsoft Access databases using Python?
http://code.activestate.com/recipes/528868-extraction-and-manipulation-class-for-microsoft-ac/
The answer there should be useful.
For a solution that works on any platform that can run Java, consider using Jython or JayDeBeApi along with the UCanAccess JDBC driver. For details, see the related question
Read an Access database in Python on non-Windows platform (Linux or Mac)
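For reference, a minimal sketch of the JayDeBeApi + UCanAccess route; the jar locations and version numbers below are assumptions for illustration, so substitute the paths where UCanAccess and its bundled dependencies actually live on your machine:

import jaydebeapi

# paths and versions are placeholders; point these at your UCanAccess install
ucanaccess_jars = [
    "/opt/ucanaccess/ucanaccess-5.0.1.jar",
    "/opt/ucanaccess/lib/commons-lang3-3.8.1.jar",
    "/opt/ucanaccess/lib/commons-logging-1.2.jar",
    "/opt/ucanaccess/lib/hsqldb-2.5.0.jar",
    "/opt/ucanaccess/lib/jackcess-3.0.1.jar",
]

conn = jaydebeapi.connect(
    "net.ucanaccess.jdbc.UcanaccessDriver",
    "jdbc:ucanaccess:///path/to/my.mdb",   # placeholder database path
    jars=ucanaccess_jars,
)
cur = conn.cursor()
cur.execute("SELECT * FROM mytable")       # placeholder table name
print(cur.fetchall())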
In addition to bernie's response, I would add that it is possible to recover the schema of the database. The code below lists the tables (row[2] contains the table name).
con = pyodbc.connect('DRIVER={};DBQ={};PWD={}'.format(DRV, MDB, PWD))
cur = con.cursor()

# cursor.tables() yields one description row per table; the third field is the table name
tables = list(cur.tables())
print('tables')
for row in tables:
    print(row)
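Note that cur.tables() also returns Access system tables (the MSys* ones); pyodbc accepts a tableType filter if you only want user tables. A small sketch:

for row in cur.tables(tableType='TABLE'):
    print(row.table_name)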
The code below lists all the columns from all the tables:
colDesc = list(cur.columns())
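Each row yielded by cur.columns() exposes table_name, column_name, and type_name attributes in pyodbc, so a short sketch like this prints the full layout:

for col in colDesc:
    print(col.table_name, col.column_name, col.type_name)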
This code will convert all the tables to CSV; it uses the pandas_access package (imported as mdb), which reads Access tables into pandas DataFrames via mdbtools.
Happy coding!
import pandas_access as mdb

for tbl in mdb.list_tables("file_name.MDB"):
    df = mdb.read_table("file_name.MDB", tbl)
    df.to_csv(tbl + '.csv')
Related
I want to connect to an Oracle database from Python, run a SELECT statement, and export whatever result I get as a CSV file to an SFTP location.
I know we can connect Oracle with Python using the cx_Oracle package.
But my concern is getting the data from Oracle into a CSV and exporting it.
Also note that my data is fairly large.
Can anyone help me find a solution that is fast as well?
Thanks in advance.
Start with something like this, and change to meet your requirements:
import cx_Oracle
import csv
import sys, os

# on macOS, point cx_Oracle at the Instant Client libraries
if sys.platform.startswith("darwin"):
    cx_Oracle.init_oracle_client(lib_dir=os.environ.get("HOME") + "/Downloads/instantclient_19_8")

connection = cx_Oracle.connect(user='cj', password='cj', dsn='localhost/orclpdb1')

cursor = connection.cursor()
cursor.arraysize = 5000

with open("testpy.csv", "w", encoding='utf-8') as outputfile:
    writer = csv.writer(outputfile, lineterminator="\n")
    results = cursor.execute('select * from all_objects where rownum < 10000000 order by object_id')
    writer.writerows(results)
In particular, you will want to tune the arraysize value.
See Tuning cx_Oracle.
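For the SFTP leg of the question, here is a minimal sketch using the third-party paramiko package; the host name, credentials, and remote path are placeholders, not values from the question:

import paramiko

# placeholder host and credentials
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")

sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("testpy.csv", "/upload/testpy.csv")  # local file -> remote path
sftp.close()
transport.close()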
I recently started using a MacBook because my laptop was changed at work, and right after that I started having problems with some of the code I use to upload a DataFrame to a PostgreSQL database.
import psycopg2
from io import StringIO

def create_connection(user, password):
    return psycopg2.connect(
        host='HOST',
        database='DBNAME',
        user=user,
        password=password)

conn = create_connection(user, password)
table = "data_analytics.tbl_summary_wingmans_rt"

buffer = StringIO()
df.to_csv(buffer, header=False, index=False)
buffer.seek(0)

cursor = conn.cursor()
cursor.copy_from(buffer, table, sep=",", null="")
conn.commit()
cursor.close()
As you can see, the code is quite simple, and even before the change of equipment it ran without major problems on Windows. But as soon as I run the same code on the Mac, it throws the following error:
Error: relation "data_analytics.tbl_summary_wingmans_rt" does not exist
In several posts I saw that it could be a double-quoting issue, but I have already tried the following variants and still do not have a positive result:
"data_analytics."tbl_summary_wingmans_rt""
""data_analytics"."tbl_summary_wingmans_rt""
'data_analytics."tbl_summary_wingmans_rt"'
The behaviour of copy_from changed in psycopg2 2.9 to properly quote the table name, which means that you can no longer supply a schema-qualified table name that way; you have to use copy_expert instead.
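A minimal sketch of the copy_expert equivalent, assuming the same buffer and table as in the question:

cursor = conn.cursor()
cursor.copy_expert(
    "COPY data_analytics.tbl_summary_wingmans_rt FROM STDIN WITH (FORMAT CSV, NULL '')",
    buffer,
)
conn.commit()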
You now have to separate the schema from the table name before it reaches the Postgres parser;
when you send "data_analytics.tbl_summary_wingmans_rt" it is quoted as a single identifier, which Postgres cannot resolve.
Use '"data_analytics"."tbl_summary_wingmans_rt"' so the name is parsed as "schema"."table", which PostgreSQL can resolve.
I am trying to open a .sqlite3 file in Python, but no information is returned. I also tried an r (raw) string for the path and still get nothing for the tables. I would like to know what tables are in this file.
I used the following code for python:
import sqlite3
from sqlite3 import Error

def create_connection(db_file):
    """ create a database connection to the SQLite database
        specified by the db_file
    :param db_file: database file
    :return: Connection object or None
    """
    try:
        conn = sqlite3.connect(db_file)
        return conn
    except Error as e:
        print(e)
    return None

database = "D:\\...\\assignee.sqlite3"

conn = create_connection(database)
cur = conn.cursor()
rows = cur.fetchall()
but rows are empty!
This is where I got the assignee.sqlite3 from:
https://github.com/funginstitute/downloads
I also tried RStudio, below is the code and results:
> con <- dbConnect(drv=RSQLite::SQLite(), dbname="D:/.../assignee")
> tables <- dbListTables(con)
But tables comes back empty as well.
First, make sure you provided the correct path to the SQLite db in your connection string,
e.g. conn = sqlite3.connect("C:\\users\\guest\\desktop\\example.db")
Also make sure you are using the same SQLite library in the unit tests and the production code.
Check the types of SQLite connection strings and determine which one your db belongs to:

Basic
Data Source=c:\mydb.db;Version=3;
(Version 2 is not supported by this class library.)

In-memory database
An SQLite database is normally stored on disk, but it can also be stored in memory. Read more about SQLite in-memory databases.
Data Source=:memory:;Version=3;New=True;

Using UTF-16
Data Source=c:\mydb.db;Version=3;UseUTF16Encoding=True;

With password
Data Source=c:\mydb.db;Version=3;Password=myPassword;

So make sure you wrote the proper connection string for your SQLite db.
If you still cannot see it, check whether the disk containing /tmp is full; otherwise, it might be an encrypted database, or locked and in use by some other application. You can confirm that with one of the many GUI tools for SQLite databases; navigate directly to where your db lives and the tool will give you an indication of the problem.
Download Windows version
Download Mac version
Download Linux version
Good luck!
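For what it's worth, a cursor returns nothing until a query has been executed on it, which is why the rows in the question came back empty. A minimal stdlib-only sketch for listing the tables in the file (the path is a placeholder):

import sqlite3

conn = sqlite3.connect(r"D:\path\to\assignee.sqlite3")  # placeholder path
cur = conn.cursor()

# sqlite_master is the built-in catalog that records every table in the file
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
print(cur.fetchall())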
In a project, I need to extract data from a Visual FoxPro database, which is stored in dbf files. I have a data directory with 539 files I need to take into account; each file represents a database table. I've been doing some testing and my code goes like this:
import pyodbc

connection = pyodbc.connect("Driver={Microsoft Visual FoxPro Driver};SourceType=DBF;SourceDB=P:\\Data;Exclusive=No;Collate=Machine;NULL=No;DELETED=Yes")
tables = connection.cursor().tables()
for table in tables:
    print(table)
This prints only 15 tables, with no obvious pattern, and always the same 15 tables. I thought this was because the rest of the tables were empty, but I checked and some of the tables (dbf files) on the list are empty too. Then I thought it was a permission issue, but all the files have the same permission structure, so I don't know what's happening here.
Any light??
EDIT:
It is not truncating the output; the tables it lists are not the first 15 or anything like that.
I DID IT!!!!
There were several problems with what I was doing, so here is what I did to solve it (after implementing it the first time with Ethan Furman's solution).
The first thing was a driver problem: it turns out that the Windows DBF drivers are 32-bit programs running on a 64-bit operating system, and I had installed Python-amd64. That was the first problem, so I installed a 32-bit Python.
The second issue was a library/file issue: according to this, dbf files in VFP > 7 are different, so my pyodbc library won't read them correctly. I tried some OLE-DB libraries with no success and decided to do it from scratch.
Googling for a while took me to this post, which finally gave me a light on this.
Basically, what I did was the following:
import win32com.client

conn = win32com.client.Dispatch('ADODB.Connection')
db = 'C:\\Profit\\profit_a\\ARMM'
dsn = 'Provider=VFPOLEDB.1;Data Source=%s' % db
conn.Open(dsn)

cmd = win32com.client.Dispatch('ADODB.Command')
cmd.ActiveConnection = conn
cmd.CommandText = "Select * from factura, reng_fac where factura.fact_num = reng_fac.fact_num AND factura.fact_num = 6099;"

rs, total = cmd.Execute()  # This returns a tuple: (<RecordSet>, number_of_records)
while total:
    for x in range(rs.Fields.Count):
        print('%s --> %s' % (rs.Fields.item(x).Name, rs.Fields.item(x).Value))
    rs.MoveNext()  # advance to the next record of the result set
    total = total - 1
And it gave me 20 records, which I checked with DBFCommander, and they were OK.
First, you need to install the pywin32 extensions (32-bit) and the Visual FoxPro OLE-DB Provider (only available for 32-bit), in my case for VFP 9.0.
Also, it's good to read the ADO documentation at the w3c website.
This worked for me. Thank you very much to those who replied.
I would use my own dbf package and the code would go something like this:
import dbf
from glob import glob

for dbf_file in glob(r'p:\data\*.dbf'):
    with dbf.Table(dbf_file) as table:
        for record in table:
            do_something_with(record)
A table is list-like, and iteration through it returns records. A record is list-, dict-, and obj-like, and iteration returns the values; besides iteration through the record, individual fields can be accessed either by offset (record[0] for the first field), by field-name using dict-like access (record['some_field']), or by field-name using obj.attr-like access (record.some_field).
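As a small illustration of those three access styles (some_field is a placeholder field name):

with dbf.Table(dbf_file) as table:
    record = table[0]
    print(record[0])             # by offset
    print(record['some_field'])  # dict-like access
    print(record.some_field)     # obj.attr-like access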
If you just wanted to dump the contents of each dbf file into a csv file you could do:
for dbf_file in glob(r'p:\data\*.dbf'):
    with dbf.Table(dbf_file) as table:
        dbf.export(table, dbf_file)
I know this doesn't directly answer your question, but it might still help. I've had lots of issues using ODBC with VFP databases, and I've found it's often much easier to treat the VFP tables as free tables when possible.
Using Yusdi Santoso's dbf.py and glob, here's some code to open each table in a directory and run through each record.
import glob
import os

import dbf

os.chdir("P:\\data")
for file in glob.glob("*.dbf"):
    table = dbf.readDbf(file)
    for row in table:
        # do stuff with each row here
        pass