How can I update a record in a dbf file with the dbf module (https://pypi.python.org/pypi/dbf)?
Here is what I tried:
table = dbf.Table(filename, codepage="cp1252")
table[0].name = "changed"
But I get:
File "<input>", line 1, in <module>
File "/venv/lib/python3.4/site-packages/dbf/ver_33.py", line 2508, in __setattr__
raise DbfError("unable to modify fields individually except in `with` or `Process()`")
dbf.ver_33.DbfError: unable to modify fields individually except in `with` or `Process()`
I managed to read and append, but not to modify existing data. How can I do that?
Writing data to records can be accomplished in a few different ways:
using the record as a context manager:
with table[0] as rec:
    rec.name = "changed"
using the dbf.Process function to deal with several records in a loop:
for rec in dbf.Process(my_table):
    rec.name = rec.name.title()
using the write function in the dbf module:
dbf.write(some_record, name='My New Name')
The API was designed this way to enhance both safety and performance.
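The restriction behind that DbfError can be understood as a gate on attribute assignment. Purely as an illustration (this is a toy class, not dbf's actual implementation), a record that refuses field assignment outside a `with` block behaves the same way:

```python
# Toy record that only allows field assignment inside a `with` block,
# mimicking the gating that dbf enforces (illustration only).
class GatedRecord:
    def __init__(self, **fields):
        object.__setattr__(self, "_open", False)
        object.__setattr__(self, "_fields", dict(fields))

    def __enter__(self):
        object.__setattr__(self, "_open", True)
        return self

    def __exit__(self, *exc):
        # A real library would flush the changes to disk here.
        object.__setattr__(self, "_open", False)
        return False

    def __getattr__(self, name):
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if not self._open:
            raise RuntimeError(
                "unable to modify fields individually except in `with`")
        self._fields[name] = value

rec = GatedRecord(name="original")
try:
    rec.name = "changed"      # outside `with`: refused
except RuntimeError as e:
    print(e)
with rec:
    rec.name = "changed"      # inside `with`: allowed
print(rec.name)               # -> changed
```

Gating assignments to the `with` block lets the library batch up the field changes and commit them when the block exits, which is presumably where the safety and performance gains come from.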
OK, so I didn't understand the "with or Process" part at first, but here is how I managed to get my way:
dbf.write(table[0], name="changed")
I found it there; this fork had more documentation: https://bitbucket.org/ltvolks/python-dbase
This should be safer: Search in DBF and update record
I am trying to create a program that modifies several different dbf files, but since each file has different field names, I am unsure whether the python dbf library has an option to specify the field with a string.
Modifying the data with a hard-coded field name works:
dbf.write(record, CUSTOM2="Hello")
But when I tried passing a dict instead, it gives an error:
dbf.write(record, {'CUSTOM2': "Hello"})
TypeError: write() takes 1 positional argument but 2 were given
There are a couple of ways to update records in dbf files:
use the record as a context manager:
target_field = 'custom2'
with record:
    record[target_field] = "Hello"
or use the write function with unpacked keywords:
target_field = 'custom2'
update = {target_field: "Hello"}
dbf.write(record, **update)
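The TypeError in the question comes from Python's calling convention rather than from dbf itself: a function that collects `**kwargs` accepts keyword arguments, not a positional dict. A minimal stand-in for such a function (the real `dbf.write` of course does more) shows both the failure and the `**` fix:

```python
# Stand-in with the same calling convention as dbf.write (illustration only):
# one positional record, then field updates as keyword arguments.
def write(record, **kwargs):
    record.update(kwargs)

record = {}
field = 'CUSTOM2'               # field name held in a variable

try:
    write(record, {field: "Hello"})    # a dict is a second positional argument
except TypeError as e:
    print(e)                           # write() takes 1 positional argument but 2 were given

write(record, **{field: "Hello"})      # unpacked: arrives as CUSTOM2="Hello"
print(record)                          # -> {'CUSTOM2': 'Hello'}
```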
I have exported an ArcGIS Desktop 10.7 table into a dbf file.
Now I want to do some GIS calculations in standalone Python.
Therefore I have started a PyCharm project referencing the ArcGIS Python interpreter, and so I am able to import arcpy in my main.py.
The problem is: I don't want to pip install other modules, but I don't know how to read the dbf table correctly with arcpy.
# encoding=utf-8
import arcpy

path = r"D:\test.dbf"
sc = arcpy.SearchCursor(path)  # Does not work: IOError exception str() failed
tv = arcpy.mapping.TableView(path)  # Does not work either: StandaloneObject invalid data source or table
The dbf file is correct, it can be read into ArcGIS.
Can someone please give me an idea, how to read the file standalone with arcpy?
Using pandas
The Python installation that ships with ArcMap already includes some third-party modules, pandas among them. You can load the data into a pandas.DataFrame and work with that format. Pandas is well documented, and there are plenty of already-answered questions about it all over the web. It also makes groupby operations and other table manipulations very easy.
import types

import arcpy
import pandas as pd

def read_arcpy_table(self, table, fields='*', null_value=None):
    """
    Transform a table from ArcMap into a pandas.DataFrame object.
    table : path to the table
    fields : fields to load - '*' loads all fields
    null_value : value used to replace nulls
    """
    fields_type = {f.name: f.type for f in arcpy.ListFields(table)}
    if fields == '*':
        fields = fields_type.keys()
    fields = [f.name for f in arcpy.ListFields(table) if f.name in fields]
    fields = [f for f in fields if fields_type[f] != 'Geometry']  # Remove the Geometry field of a FeatureClass to avoid a bug
    # Transform into a pd.DataFrame
    np_array = arcpy.da.FeatureClassToNumPyArray(in_table=table,
                                                 field_names=fields,
                                                 skip_nulls=False,
                                                 null_value=null_value)
    df = self.DataFrame(np_array)  # `self` is the pandas module, bound below
    return df

# Add the function to the loaded pandas module
pd.read_arcpy_table = types.MethodType(read_arcpy_table, pd)

df = pd.read_arcpy_table(table='path_to_your_table')
# Do whatever calculations need to be done
Using cursors
You can also use arcpy cursors and dicts for simple calculations.
There are simple examples of how to use cursors correctly on this page:
https://desktop.arcgis.com/fr/arcmap/10.3/analyze/arcpy-data-access/searchcursor-class.htm
My bad,
after reading the cursor approach above, I figured out that the
sc = arcpy.SearchCursor(path)  # Does not work: IOError exception str() failed
approach was correct all along; at around 3 AM I was a little exhausted and missed the typo in the path that caused the error. Nevertheless, a more descriptive error message, e.g. "IOError: could not open file" rather than "IOError: exception str() failed", would have pointed me, an ArcGIS newbie, at my mistake.
I'm using Python 3.6 and the dbf library (https://pypi.python.org/pypi/dbf), as well as the example files dbase_30.dbf and dbase_30.fpt from https://github.com/infused/dbf/tree/master/spec/fixtures. Executing this code results in an error:
import dbf

dbase_30 = dbf.Table('dbase_30.dbf')
dbase_30.open()
print("Table size: {}".format(len(dbase_30)))
dbase_30[0].delete_record()
Am I doing something wrong here?
The infused link is for a Ruby package, so it won't be much help here.
dbase_30 above is a table;
dbase_30[0] is a record in that table;
the command to delete records is a module-level function called delete.
So, if you want to delete the very first record:
dbf.delete(dbase_30[0])
This only marks the record as deleted; it doesn't actually remove it. To physically remove all deleted records:
dbase_30.pack()
I use python dbfread to read a 20 MB dbf file, but it performs very slowly.
from dbfread import DBF

for record in DBF('people.dbf'):
    for key, value in record.items():
        try:
            value = str(value)
        except UnicodeEncodeError:
            value = value.encode('utf-8')
        print key
        print value
If I use load=True to load the file first, lookups are faster, but it takes a lot of memory.
table = DBF('people.dbf', load=True)
I found a DBF viewer application called "dbfview" that opens the same dbf file and shows the data almost immediately. Does anyone have good advice? Is it that Python doesn't have good support for reading dbf files, or is there a way to write code that performs like that?
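What load=True trades off is the usual streaming-versus-preloading choice; a stdlib-only sketch (no dbfread involved) of the two access patterns:

```python
# Streaming vs. preloading: the tradeoff behind dbfread's load=True.
def records(n):
    """Stand-in for reading records off disk one at a time."""
    for i in range(n):
        yield {"id": i}

stream = records(3)           # lazy: nothing has been read yet
loaded = list(records(3))     # eager: every record sits in memory at once

print(loaded[2])              # random access only works on the loaded copy
print(sum(1 for _ in stream)) # a stream is consumed in a single pass
```

Repeated searches favor the preloaded form, which is why load=True is faster at the cost of holding the whole file in memory.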
I am trying to simplify this so it will pull any data and lay it out without conflicts, but I get errors if a date field contains colons, or if some of my product fields contain backslashes or code-like characters. Is there a way to strip these, or to contain them in a string, when they occur? Here is my simple process using MySQLdb...
c = db.cursor()
exstring = "SELECT id,model,upc,date,cost FROM products"
CellRange("A5:I600").clear()  # clean up existing data
c.execute(exstring)
sh = c.fetchall()
for i, pos in enumerate(sh):
    Cell(5 + i, 1).horizontal = pos  # starts at the 5th row
Most of the errors are for datetime values like this:
2013-06-01 05:15:02
I get a list of 27/basic_io.py errors, as well as "global name 'logging' is not defined":
File "27/basic_io.py", line 352, in __setattr__
File "27/basic_io.py", line 238, in _set_horizontal
File "27/basic_io.py", line 607, in __setattr__
File "27/basic_io.py", line 352, in __setattr__
File "27/basic_io.py", line 167, in _set_value
NameError: global name 'logging' is not defined
But if I select only basic data, it all works fine. I would like to get to the point where a SELECT * will pull back any table structure without issues. If that's not possible, then I need a way to filter individual columns.
DataNitro added improved support for datetime types in the latest release (https://datanitro.com/pro/auth/login). Try it out and see if it solves the problem - you should be able to run the original script without using date formatting now.
Source: I'm one of the DataNitro developers.
I'm not sure what the 27/basic_io.py file is, but it sounds like the problem is that it can't cope with objects from Python's datetime module.
Try SELECTing the date column as a string instead with...
exstring = 'SELECT id,model,upc,DATE_FORMAT(date, "%Y-%m-%d"),cost FROM products'
You can vary the returned format by using different format specifiers in DATE_FORMAT().
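Alternatively, if you'd rather keep SELECT * on the SQL side, the datetime values can be converted on the Python side before the rows reach the spreadsheet. A stdlib-only sketch (the sample row here is made up for illustration):

```python
from datetime import datetime

def stringify_row(row):
    """Return the row with any datetime turned into a plain string;
    other values pass through unchanged."""
    return tuple(
        v.strftime("%Y-%m-%d %H:%M:%S") if isinstance(v, datetime) else v
        for v in row
    )

row = (1, "model-x", datetime(2013, 6, 1, 5, 15, 2), 9.99)
print(stringify_row(row))   # (1, 'model-x', '2013-06-01 05:15:02', 9.99)
```

Applying this to each `pos` in the fetchall loop would hand the spreadsheet layer only plain strings and numbers.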