I have a problem with the .to_json() method in pandas. I have a DataFrame that contains a Python object, strings, and integers, but when I convert this DataFrame into a JSON document, it does not include all attributes of the Python object. Here is my code:
for i in range(len(self.pre_company)):
    id_ = self.pre_company.index.values[i]
    code = self.pre_company.iloc[i][0]
    name = self.pre_company.iloc[i][1]
    group = self.pre_company.iloc[i][2]
    try:
        cdf = self.country.loc[self.pre_company.iloc[i][5]]
        country = Country(self.pre_company.iloc[i][5], cdf[0], cdf[1], cdf[2], cdf[3])
    except KeyError:
        country = None
    town = self.pre_company.iloc[i][4]
    company = {'Country': country, 'Code': code, 'Name': name, 'Group': group, 'Town': town}
    new_row = pd.DataFrame(company, index=[id_])
    if i == 0:
        self.company = new_row
    else:
        self.company = pd.concat([self.company, new_row])
    del new_row
del self.pre_company
data2 = self.company.to_json(orient="records")
with open('test2.json', 'w') as out:
    for i in data2:
        out.write(str(i))
And for the first line, I get this result:
{"Country":{"a2code":"FR","ccTLD":".fr","officialName":"The French Republic"},"Code":"blah","Name":"blahblah","Group":"GROUP","Town":"town"}
Do you have any idea why the function is not working completely?
PS: I use a CSV input file to get my data into DataFrames.
I found the answer to my problem: I just had to add a to_dict() method to my Python objects, which writes the object out as a dict (the way I wanted). Then I added this class at the end of my Python object file:
class CompanyEncoder(json.JSONEncoder):
    def default(self, obj):
        """
        Encodes the object.
        :param obj: the object to encode
        :return: the encoded object
        """
        if isinstance(obj, Company):
            return obj.to_dict()
        return json.JSONEncoder.default(self, obj)
And to convert my object to a dict I used these lines (note that I need to create the object before doing this):
company = Company(id_, code, name, group, country, town)
self.company.append(company.to_dict())
with open(path, 'w') as out:
    out.write(CompanyEncoder().encode(self.company))
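A minimal, self-contained sketch of this pattern (with a simplified, hypothetical Company class standing in for the real one, which has more fields):

```python
import json

# Hypothetical, simplified Company for illustration; the real class has more fields.
class Company:
    def __init__(self, code, name):
        self.code = code
        self.name = name

    def to_dict(self):
        # Write the object out as a plain dict, exactly as it should appear in JSON.
        return {'Code': self.code, 'Name': self.name}

class CompanyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Fall back to a custom dict for Company, to the base encoder otherwise.
        if isinstance(obj, Company):
            return obj.to_dict()
        return json.JSONEncoder.default(self, obj)

companies = [Company('blah', 'blahblah')]
print(CompanyEncoder().encode(companies))
# -> [{"Code": "blah", "Name": "blahblah"}]
```

The same encoder also works through `json.dumps(companies, cls=CompanyEncoder)`.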
I am trying to use the setFilter method in a custom class that extends QSqlTableModel. I used the same method before in other classes and it worked well, but this time, whatever filter I apply, I get 0 results. Even when I do:
self.setFilter("")
I get 0 results. However, in other classes I used this line to reset existing filters, meaning it is supposed to return all the rows in the table.
Note: If I don't use any filters, I can retrieve all the objects properly.
Here is my code:
I retrieve the data from a csv file:
file = open(fileName, 'r', encoding="utf-8")
csvreader = csv.reader(file)
rows = []
for row in csvreader:
    rows.append(row)
file.close()
return rows
Creation of the table:
createTableQ = QSqlQuery()
createTableQ.exec_(
    """
    CREATE TABLE cities (
        id INTEGER PRIMARY KEY UNIQUE NOT NULL,
        name VARCHAR(100) NOT NULL,
        countryId INTEGER,
        countryCode VARCHAR(100) NOT NULL,
        countryName VARCHAR(100) NOT NULL
    )
    """
)
createTableQ.finish()
Then I apply some preprocessing to the data and insert it into the SQLite file:
cities = convertToValue(cities)
for city in cities:
    insertDataQ = QSqlQuery()
    if not insertDataQ.exec_(city):
        print(insertDataQ.lastError().text())
    insertDataQ.finish()
The convertToValue method takes the data and formats it as a SQL INSERT query. That function is not the problem; I use it in another class where the filter works.
The methods above run only once anyway, to create the SQLite file.
My class (note: the setCityData method is for the SQLite file creation I explained above; it runs only once, if the file does not exist):
class CityListModel(QSqlTableModel):
    def __init__(self, parent=None):
        super(CityListModel, self).__init__(parent)
        setCityData()
        self.setTable(table_name)
        self.setSort(1, Qt.AscendingOrder)
        self.setEditStrategy(QSqlTableModel.OnFieldChange)
        self.recipient = ""
        self.select()
        while self.canFetchMore():
            self.fetchMore()
    def data(self, index, role):
        if role < Qt.UserRole:
            return QSqlTableModel.data(self, index, role)
        sql_record = self.record(index.row())
        return sql_record.value(role - Qt.UserRole)
    def roleNames(self):
        """Converts dict to hash because that's the result expected
        by QSqlTableModel."""
        names = {}
        names[hash(Qt.UserRole + 0)] = "id".encode()
        names[hash(Qt.UserRole + 1)] = "name".encode()
        names[hash(Qt.UserRole + 2)] = "countryId".encode()
        names[hash(Qt.UserRole + 3)] = "countryCode".encode()
        names[hash(Qt.UserRole + 4)] = "countryName".encode()
        return names
My filter method which returns 0 results with any filter text:
    @Slot(str)
    def applyFilter(self, filterCountryName):
        # self.setFilter("countryName = 'Turkey'")
        self.setFilter("")
Edit: I call the function from the QML side the same way I do for my other custom classes. Where I call it is irrelevant, but basically this will be the end product:
CustomSelector {
    id: countryList
    Layout.fillHeight: true
    Layout.fillWidth: true
    isLocked: false
    model: countryListModel
    shownItemCount: 4
    selectedItem.onTextChanged: {
        cityListModel.applyFilter(countryList.selectedItem.text)
    }
}
While trying to put together a reproducible example, I realized the connection to my SQLite database gets broken somewhere before the filter is set. Therefore, setFilter can never succeed.
In Python Flask, if I want to update data in a Postgres database table, the PUT method below works. In the code, I use if conditions to check for and update the values of three columns: fname, lname, and address.
My question is: if I want to update more columns, say 30 to 40, should I write an individual if statement for each one, or is there a more optimized way to update many columns?
class TodoId(Resource):
    def put(self, email):
        data = parser.parse_args()
        todo = Model.query.get(email)
        if 'fname' in data.keys():
            todo.fname = data.get('fname')
        if 'lname' in data.keys():
            todo.lname = data.get('lname')
        if 'address' in data.keys():
            todo.address = data.get('address')
        db.session.commit()
You can write a generic update method:
class TodoId(Resource):
    def put(self, email):
        data = parser.parse_args()
        todo = Model.query.get(email)
        for key, value in data.items():
            if hasattr(todo, key) and value is not None:
                setattr(todo, key, value)
        db.session.commit()
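One caveat with the fully generic loop: it will write any attribute the client happens to send. If that is a concern, an explicit whitelist keeps the update compact but controlled. A sketch with a plain object standing in for the model (`Todo`, `UPDATABLE_FIELDS`, and `apply_update` are names invented here for illustration):

```python
# Plain object standing in for the SQLAlchemy model in this sketch.
class Todo:
    def __init__(self):
        self.fname = None
        self.lname = None
        self.address = None

# Hypothetical whitelist of the columns a client may update.
UPDATABLE_FIELDS = {'fname', 'lname', 'address'}

def apply_update(todo, data):
    # Only touch whitelisted keys, and skip absent (None) values,
    # mirroring the `hasattr`/`is not None` checks above.
    for key, value in data.items():
        if key in UPDATABLE_FIELDS and value is not None:
            setattr(todo, key, value)

todo = Todo()
apply_update(todo, {'fname': 'Jesse', 'role': 'admin', 'lname': None})
print(todo.fname)  # -> Jesse
# 'role' was ignored (not whitelisted); 'lname' stayed None (value was None)
```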
You can use the update method on the query and pass the data object to it. Note that update is available on a Query, not on a model instance, so filter first instead of using get:
class TodoId(Resource):
    def put(self, email):
        data = parser.parse_args()
        Model.query.filter_by(email=email).update(data)
        db.session.commit()
I am attempting to fetch user data from a Postgres database, and I am able to do so successfully. However, when it comes to serialization, I receive the following:
TypeError: Object of type datetime is not JSON serializable
Here is my code:
execute query function:
def execute_query(self, query: str, to_dict: bool = False) -> psycopg2.extensions.cursor:
    if to_dict:
        self._cur = self._con.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
    else:
        self._cur = self._con.cursor()
    self._cur.execute(query)
    self._con.commit()
    return self._cur
caller:
def add_user():
    db = DatabaseConnector()
    db.connect()
    # add new user
    db.execute_query("INSERT INTO user (uname, email, fname, lname, passwordhash, create_date) VALUES ('usera', 'usera@gmail.com', 'Jesse', 'Pinkman', 'myhash', NOW());")
    # fetch new user
    try:
        user = db.execute_query("SELECT * FROM user WHERE uname = 'usera'", to_dict=True)
        fetch = user.fetchone()
    except Exception as e:
        print(e)
    finally:
        db.close()
    print(json.dumps(fetch))  # <---- exception raised
According to other related Stack Overflow posts, I could do the following to transform datetime objects into strings. However, this serializes the entire row into one string rather than converting just the individual datetime values:
class DateEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, date):
            return str(obj)
        return json.JSONEncoder.default(self, obj)

user = json.dumps(fetch, cls=DateEncoder)
I have also tried using the following:
json.dumps(user, default=str)
However, this returns a str rather than a dict.
My working suggestion is the following, where I pull out the date and convert it to a string. However, is there a better way to do this?
sample = {'uname': fetch['uname'], 'email': fetch['email'], 'fname': fetch['fname'], 'lname': fetch['lname'], 'create_date': str(fetch['create_date'])}
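A slightly more general version of that suggestion, so every column doesn't have to be listed by hand: copy the row with a dict comprehension, converting only date/datetime values to ISO-8601 strings. A sketch with a hand-built row dict standing in for the RealDictCursor result:

```python
import json
from datetime import date, datetime

# Hand-built stand-in for the row a RealDictCursor would return.
fetch = {
    'uname': 'usera',
    'email': 'usera@gmail.com',
    'create_date': datetime(2023, 1, 2, 3, 4, 5),
}

# Copy the row, converting only date/datetime values; everything else passes through.
sample = {k: (v.isoformat() if isinstance(v, (date, datetime)) else v)
          for k, v in fetch.items()}

print(json.dumps(sample))
# -> {"uname": "usera", "email": "usera@gmail.com", "create_date": "2023-01-02T03:04:05"}
```

Since `datetime` is a subclass of `date`, checking `isinstance(v, date)` alone would also cover both.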
I wrote the following code to get all the column names and types in SQLAlchemy. However, in the case of an ENUM type, I get Varchar instead.
I want to know a way to get the enum type for all the enum columns, as well as all the possible values of each enum.
EntityClass = eval(entity_name)
entity_dict = {}
entity_dict['attributes'] = []
for column in EntityClass.__table__.columns:
    entity_dict['attributes'].append({
        'key': str(column.key),
        'type': str(column.type).split('(')[0].split('[')[0]
    })
print(entity_dict)
I am using the following method to create a new Enum in sqlalchemy:
class CourseType(enum.Enum):
    Certified = "Certified"
    Non_Certified = "Non_Certified"

    @staticmethod
    def list():
        return list(map(lambda c: c.value, CourseType))
And this is how I am using the enum in a table:
type = Column(Enum(CourseType))
Any suggestions?
Okay, I got the solution. This is how I solved the issue:
EntityClass = eval(entity_name)
entity_dict = {}
entity_dict['attributes'] = []
for column in EntityClass.__table__.columns:
    if hasattr(column.type, 'enums'):
        column_type = 'ENUM'
        possible_values = column.type.enums
    else:
        column_type = str(column.type).split('(')[0].split('[')[0]
        possible_values = 'NotApplicable'
    entity_dict['attributes'].append({
        'key': str(column.key),
        'type': column_type,
        'possible_values': possible_values
    })
return entity_dict
The string form of column.type was just VARCHAR. So I checked whether the column.type object has an attribute called enums; if it does, I mark the column_type as ENUM and take the possible values for the column from column.type.enums.
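The duck-typing check can be illustrated without a database. In the sketch below, EnumStandIn and VarcharStandIn are invented stand-ins for SQLAlchemy's column types (the real Enum type exposes its labels through an enums attribute, which is what the hasattr check relies on):

```python
import enum

class CourseType(enum.Enum):
    Certified = "Certified"
    Non_Certified = "Non_Certified"

# Invented stand-ins for SQLAlchemy column types, for illustration only.
class EnumStandIn:
    def __init__(self, enum_cls):
        # SQLAlchemy's Enum type keeps its labels in an `enums` attribute.
        self.enums = [member.value for member in enum_cls]

class VarcharStandIn:
    def __str__(self):
        return 'VARCHAR(100)'

def describe(column_type):
    # Same duck-typing check as in the solution above.
    if hasattr(column_type, 'enums'):
        return 'ENUM', column_type.enums
    return str(column_type).split('(')[0], 'NotApplicable'

print(describe(EnumStandIn(CourseType)))  # -> ('ENUM', ['Certified', 'Non_Certified'])
print(describe(VarcharStandIn()))         # -> ('VARCHAR', 'NotApplicable')
```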
I want to return the result of a query as JSON. I'm using the following route to return one model instance as a JSON object.
@mod.route('/autocomplete/<term>', methods=['GET'])
def autocomplete(term):
    country = Country.query.filter(Country.name_pt.ilike('%' + term + '%')).first()
    country_dict = country.__dict__
    country_dict.pop('_sa_instance_state', None)
    return jsonify(json_list=country_dict)
This code works well if I use the first() method. However, I need to use all() to get all the results. When I do that, I get the following error.
country_dict = country.__dict__
AttributeError: 'list' object has no attribute '__dict__'
What should I be doing to return the entire list of results as JSON?
You need to do that "jsonify preparation step" for each item in the list, since .all() returns a list of model instances, not just one instance like .first(). Work on a copy of each __dict__ so you don't mess with SQLAlchemy's internal representation of the instances.
@mod.route('/autocomplete/<term>', methods=['GET'])
def autocomplete(term):
    countries = []
    for country in Country.query.filter(Country.name_pt.ilike('%' + term + '%')):
        country_dict = country.__dict__.copy()
        country_dict.pop('_sa_instance_state', None)
        countries.append(country_dict)
    return jsonify(json_list=countries)
It is probably better just to return the data about each country explicitly, rather than trying to magically jsonify the instance.
@mod.route('/autocomplete/<term>', methods=['GET'])
def autocomplete(term):
    countries = []
    for country in Country.query.filter(Country.name_pt.ilike('%' + term + '%')):
        countries.append({
            'id': country.id,
            'name': country.name_pt,
        })
    return jsonify(countries=countries)
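The dict-building part of that loop is independent of Flask and SQLAlchemy, so it can be checked on its own. A sketch with SimpleNamespace objects standing in for the Country model instances:

```python
from types import SimpleNamespace

# Stand-ins for Country model instances, for illustration only.
results = [
    SimpleNamespace(id=1, name_pt='Brasil'),
    SimpleNamespace(id=2, name_pt='Portugal'),
]

# Same dict-per-row shape the route builds before handing the list to jsonify().
countries = [{'id': c.id, 'name': c.name_pt} for c in results]

print(countries)
# -> [{'id': 1, 'name': 'Brasil'}, {'id': 2, 'name': 'Portugal'}]
```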