dBase library - trying to delete a record - Python

I'm using Python 3.6 and the dbf library (https://pypi.python.org/pypi/dbf), as well as the example files dbase_30.dbf and dbase_30.fpt from https://github.com/infused/dbf/tree/master/spec/fixtures. Executing this code results in an error:
import dbf
dbase_30 = dbf.Table('dbase_30.dbf')
dbase_30.open()
print("Table size: {}".format(dbase_30.__len__()))
dbase_30[0].delete_record()
Am I doing something wrong here?

The infused link is for a Ruby package, so that won't be much help.
dbase_30 above is a table;
dbase_30[0] is a record in the table
the command to delete records is a module-level function called delete
So, if you want to delete the very first record:
dbf.delete(dbase_30[0])
This only marks the record as deleted; it doesn't actually remove it. To physically remove all deleted records:
dbase_30.pack()
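Putting both steps together, a minimal sketch (note: depending on the dbf version, open() may need an explicit dbf.READ_WRITE mode before the table can be modified):

import dbf

table = dbf.Table('dbase_30.dbf')
table.open()                  # newer versions: table.open(dbf.READ_WRITE)
dbf.delete(table[0])          # mark the first record as deleted
table.pack()                  # physically remove all deleted records
table.close()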

Related

Save KDB+/q table on a Mac

I'm new to q and I'm trying to save a file on my Mac. Currently using Jupyter Notebook if that makes a difference.
A quick table:
t:([] c1:`a`b`c; c2:1.1 2.2 3.3)
I first check my current location by using \cd and I get: "/Users/Gorlomi/Documents/q"
but when I try
`:/Users/Gorlomi/Documents/q set t
I get:
evaluation error:
type
[1] (.q.set)
[0] `:/Users/Gorlomi/Documents/q set t
^
I'm following examples from "Q for Mortals" from the kx website:
https://code.kx.com/q4m3/1_Q_Shock_and_Awe/#11-starting-q
To find the example quickly, use Cmd (or Ctrl)+F and search for "t set t".
Thank you in advance.
There are two answers to this question, depending on whether you want to save your file as a flat table, or a splayed table.
If you want to save your table as a flat table, you need to give a file name for your table. Currently, you're just giving it the directory that you want to save it in. So, for instance, the following should work for you:
`:/Users/Gorlomi/Documents/q/t set t
If instead you want to save your table as a splayed table, you will need to pass set a directory (ideally, one not already in use by the file system). To do this, pass a file path with a trailing forward slash. So the following should work for you:
`:/Users/Gorlomi/Documents/q/t/ set t

PostgreSQL COPY of a CSV file breaking before completion. Works with psql \copy but not COPY

I'm currently working with what I think is a pretty large file that I need to ingest into a PostgreSQL database. Here are a few example rows of the .csv that the COPY command reads to insert into the database. sample_id and otu_id are foreign keys that refer to primary keys in a sample table and an otu table.
sample_id,otu_id,count
163,2901,0.0
164,2901,0.0
165,2901,0.0
Which is ingested into the table with the following code using SQLAlchemy:
self._engine.execute(
    text('''COPY otu.sample_otu from :csv CSV header''').execution_options(autocommit=True),
    csv=fname)
After copying to the table from the .csv file, I query the database for the samples shown, and it gives me a result for sample_id=163, otu_id=2901, but it doesn't give me the rows after that. If I'm correct, the COPY command stops copying after the first error it encounters, so my guess is that there's a problem with the sample of id 164 and otu id 2901.
I've tried the following:
There is a valid entry in the otu table for 2901, and likewise in the sample table for id 164, so I don't think it's a missing-key error.
I have also searched the file for duplicate foreign-key combinations and can't seem to find any.
I've tried writing only every second entry into the .csv file that is copied from, in case the problem was the size of the .csv, but it gave me the same issue, just cutting off at a different point: when copying only entries with even otu_ids, the subsequent query for otu_ids > 2890 breaks at sample id 152, otu id 2900.
I tried using psql's \copy command to manually copy from the .csv file:
\copy sample_otu FROM 'bpaotu-ijpgihw6' WITH DELIMITER ',' CSV HEADER;
This seems to work perfectly fine. The query shows otu_ids past the otu id 2901.
I'm just very confused as to why it breaks there, since the .csv rows before and after look identical and the corresponding primary-key values exist in the foreign tables.
For anyone that comes across this:
The problem was a simple scope error. I missed an indent on the file-writing with block in the Python import script, so it was attempting to COPY from the .csv file before the file had finished being written. The COPY statement wouldn't throw errors because it treated the file as finished when it was actually still being written to.
I moved the indentation so the COPY is run after the file writing block has closed and it now works.
It also explains why the psql \copy command worked and the scripted COPY didn't: the manual copy ran only after the file had already been completely written.
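For illustration, a minimal sketch of the bug (the file name, rows, and the run_copy helper are hypothetical stand-ins for the script's real code):

import csv

fname = 'sample_otu.csv'                      # hypothetical path
rows = [(163, 2901, 0.0), (164, 2901, 0.0)]   # sample rows from above

def run_copy(path):
    # hypothetical wrapper around the SQLAlchemy COPY call shown above
    ...

# Buggy version: COPY is issued inside the `with` block, so the file is
# still open and may be only partially flushed to disk.
with open(fname, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['sample_id', 'otu_id', 'count'])
    writer.writerows(rows)
    run_copy(fname)  # bug: runs before the file is closed

# Fixed version: dedent the COPY so the `with` block closes the file first.
with open(fname, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['sample_id', 'otu_id', 'count'])
    writer.writerows(rows)
run_copy(fname)  # the .csv on disk is now complete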

Where does 'default' come from?

I've just started using TinyDB to store my data in a JSON file, which makes it easy for me to search for any content in the file. I copied code from https://pypi.python.org/pypi/tinydb and changed the names to fit the project I am doing. However, I do not understand where this 'default' and '1' come from.
Also, the table-creation examples are all shown at the interactive prompt rather than written as Python 3 scripts, so does anyone know which websites provide help on creating tables using TinyDB in Python 3? I've searched everywhere.
Can someone enlighten me, please?
from tinydb import TinyDB, Query
db = TinyDB('/home/pi/Desktop/csv/smartkey1.json')
table = db.table('pillar')
table.insert({'active': True})
table.all()
[{'active': True}]
Output:
{"_default": {}, "pillar": {"1": {"active": true}}}
The _default key shows the content of the default table; in your case it is empty: {}.
In the pillar table, the number 1 is the record's unique identifier, its Element ID.
Not sure if I understood your last question correctly, but instead of typing lines at the command line, save those lines in a file with a .py extension and run it with python filename.py from your command line.
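As a sketch, here is the same example as a standalone script (the path is taken from the question; insert() returns the new record's ID, which is where the "1" in the file comes from):

from tinydb import TinyDB

db = TinyDB('/home/pi/Desktop/csv/smartkey1.json')
table = db.table('pillar')
new_id = table.insert({'active': True})  # insert() returns the record's ID
print(new_id)        # -> 1, the "1" that appears in the JSON file
print(table.all())   # -> [{'active': True}]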

Reading DBF files with pyodbc

In a project, I need to extract data from a Visual FoxPro database, which is stored in dbf files. I have a data directory with 539 files I need to take into account; each file represents a database table. I've been doing some testing, and my code goes like this:
import pyodbc
connection = pyodbc.connect("Driver={Microsoft Visual FoxPro Driver};SourceType=DBF;SourceDB=P:\\Data;Exclusive=No;Collate=Machine;NULL=No;DELETED=Yes")
tables = connection.cursor().tables()
for _ in tables:
    print _
This prints only 15 tables, always the same 15, with no obvious pattern. I thought this was because the rest of the tables were empty, but I checked and some of the tables (dbf files) on the list are empty too. Then I thought it was a permission issue, but all the files have the same permission structure, so I don't know what's happening here.
Any light?
EDIT:
It is not truncating the output; the tables it lists are not the first 15 or anything like that.
I DID IT!!!!
There were several problems with what I was doing, so here is what I did to solve it (after first implementing it with Ethan Furman's solution).
The first was a driver problem: it turns out the Windows DBF drivers are 32-bit programs running on a 64-bit operating system, and I had installed a 64-bit (amd64) Python, so the first fix was installing a 32-bit Python.
The second was a library/file issue: according to this, dbf files in VFP > 7 are different, so my pyodbc library won't read them correctly. I tried some OLE-DB libraries with no success and decided to do it from scratch.
Googling for a while took me to this post, which finally shed some light on this.
Basically, what I did was the following:
import win32com.client
conn = win32com.client.Dispatch('ADODB.Connection')
db = 'C:\\Profit\\profit_a\\ARMM'
dsn = 'Provider=VFPOLEDB.1;Data Source=%s' % db
conn.Open(dsn)
cmd = win32com.client.Dispatch('ADODB.Command')
cmd.ActiveConnection = conn
cmd.CommandText = "Select * from factura, reng_fac where factura.fact_num = reng_fac.fact_num AND factura.fact_num = 6099;"
rs, total = cmd.Execute() # This returns a tuple: (<RecordSet>, number_of_records)
while total:
    for x in xrange(rs.Fields.Count):
        print '%s --> %s' % (rs.Fields.item(x).Name, rs.Fields.item(x).Value)
    rs.MoveNext()  # advance to the next record
    total = total - 1
And it gave me 20 records, which I checked with DBFCommander, and they were OK.
First, you need to install the pywin32 extensions (32-bit) and the Visual FoxPro OLE-DB Provider (only available for 32-bit), in my case for VFP 9.0.
It's also worth reading the ADO documentation on the w3c website.
This worked for me. Thank you very much to those who replied
I would use my own dbf package and the code would go something like this:
import dbf
from glob import glob
for dbf_file in glob(r'p:\data\*.dbf'):
    with dbf.Table(dbf_file) as table:
        for record in table:
            do_something_with(record)
A table is list-like, and iterating through it returns records. A record is list-, dict-, and obj-like, and iterating through it returns its values. Besides iteration, individual fields can be accessed by offset (record[0] for the first field), by field name using dict-like access (record['some_field']), or by field name using obj.attr-like access (record.some_field).
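A quick sketch of those access styles (the field name some_field is just a placeholder; substitute a real field from your tables):

import dbf
from glob import glob

for dbf_file in glob(r'p:\data\*.dbf'):
    with dbf.Table(dbf_file) as table:
        for record in table:
            first = record[0]                 # first field, by offset
            # by_name = record['some_field']  # dict-like access
            # by_attr = record.some_field     # obj.attr-like access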
If you just wanted to dump the contents of each dbf file into a csv file you could do:
for dbf_file in glob(r'p:\data\*.dbf'):
    with dbf.Table(dbf_file) as table:
        dbf.export(table, dbf_file)
I know this doesn't directly answer your question, but it might still help. I've had lots of issues using ODBC with VFP databases, and I've found it's often much easier to treat the VFP tables as free tables when possible.
Using Yusdi Santoso's dbf.py and glob, here's some code to open each table in a directory and run through each record.
import glob
import os
import dbf
os.chdir("P:\\data")
for file in glob.glob("*.dbf"):
    table = dbf.readDbf(file)
    for row in table:
        pass  # do stuff with each row

pysqlite, save database, and open it later

Newbie to SQL and sqlite.
I'm trying to save a database, then copy the file.db to another folder and open it. So far I have created the database and copied and pasted the file.db to another folder, but when I try to access the database, the output says that it is empty.
So far I have
from pysqlite2 import dbapi2 as sqlite
conn = sqlite.connect('db1Thu_04_Aug_2011_14_20_15.db')
c = conn.cursor()
print c.fetchall()
and the output is
[]
You need something like
c.execute("SELECT * FROM mytable")
for row in c:
    pass  # process row
I will echo Mat and point out that this is not valid syntax. More than that, you do not include any SELECT request (or other SQL command) in your example. If you actually do not have a SELECT statement in your code and you run fetchall on a newly created cursor, you can expect to get an empty list, which seems to be what you have.
Finally, do make sure that you are opening the file from the right directory. If you tell sqlite to open a nonexistent file, it will happily create a new, empty one for you.
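Putting those points together, a minimal sketch (the table name mytable is a placeholder; the stdlib sqlite3 module is used here, which is what pysqlite became):

import os
import sqlite3

db_path = 'db1Thu_04_Aug_2011_14_20_15.db'
# Guard against sqlite silently creating a new, empty database.
if not os.path.exists(db_path):
    raise FileNotFoundError(db_path)

conn = sqlite3.connect(db_path)
c = conn.cursor()
c.execute("SELECT * FROM mytable")  # a SELECT must run before fetchall()
print(c.fetchall())
conn.close()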
