I have written a small Python script that uses SQLAlchemy to read all records from the database. Here is part of the code:
Base = declarative_base()
Session = sessionmaker(bind=engine)
cess = Session()

class Test(Base):
    __tablename__ = 'test'
    my_id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, id, name):
        self.my_id = id
        self.name = name

    def __repr__(self):
        return "<User('%d','%s')>" % (self.my_id, self.name)

query = cess.query(Test.my_id, Test.name).order_by(Test.my_id).all()
Now I want to convert the query result to a JSON string. How can I do this? Using json.dumps(query) throws an exception.
Kind Regards
json.dumps will convert an object according to its conversion table.
Since you have rows of type Test, these cannot be serialized directly. Probably the quickest approach is to convert each returned row to a Python dict and then pass the list of dicts to json.dumps.
This answer describes how you might go about converting a table row to a dict.
Or, perhaps the _asdict() method on each row object can be used directly:
query = cess.query(Test.my_id, Test.name).order_by(Test.my_id).all()
json.dumps([ row._asdict() for row in query ])
An alternative might be to access the __dict__ attribute directly on each row, although you should check the output to ensure that there are no internal state variables in row.__dict__.
query = cess.query(Test.my_id, Test.name).order_by(Test.my_id).all()
json.dumps([ row.__dict__ for row in query ])
How I did it:
fe = SomeClass.query.get(1)
fe_dict = fe.__dict__
del fe_dict['_sa_instance_state']
return flask.jsonify(fe_dict)
Basically: given the object you've retrieved, grab the dict of the class instance, remove the SQLAlchemy internal-state entry that can't be JSON-serialized, and convert to JSON. I'm using Flask here, but I think json.dumps() would work the same way.
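The same delete-the-internal-state idea can be packaged as a small to_dict helper on the model so it works for lists of rows too. This is only a sketch against an in-memory SQLite database with a made-up Test model, not the asker's actual engine or schema:

```python
import json

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Test(Base):
    __tablename__ = "test"
    my_id = Column(Integer, primary_key=True)
    name = Column(String)

    def to_dict(self):
        # Copy the instance __dict__ and drop SQLAlchemy's internal state.
        d = dict(self.__dict__)
        d.pop("_sa_instance_state", None)
        return d

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Test(my_id=1, name="alice"))
session.commit()

rows = session.query(Test).order_by(Test.my_id).all()
print(json.dumps([row.to_dict() for row in rows], sort_keys=True))
```

Because to_dict returns a plain dict, the result works with json.dumps and flask.jsonify alike.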
I am using flask-restful. This is the resource class I want to insert with:
class OrderHistoryResource(Resource):
    model = OrderHistoryModel
    schema = OrderHistorySchema
    order = OrderModel
    product = ProductModel

    def post(self):
        value = req.get_json()
        data = cls.schema(many=True).load(value)
        data.insert()
In my model:
def insert(self):
    db.session.add(self)
    db.session.commit()
My schema:
from config.ma import ma
from model.orderhistory import OrderHistoryModel
class OrderHistorySchema(ma.ModelSchema):
    class Meta:
        model = OrderHistoryModel
        include_fk = True
Example data I want to insert:
[
  {
    "quantity": 99,
    "flaskSaleStatus": true,
    "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
    "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
  },
  {
    "quantity": 89,
    "flaskSaleStatus": true,
    "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
    "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
  }
]
This is what I got after the insert method started:
TypeError: insert() takes exactly 2 arguments (0 given)
Or is there another way to do this?
Edited: realised that marshmallow-sqlalchemy's load returns model instances directly.
Your list is loaded into OrderHistoryModel instances.
You can use add_all to add those objects to the session in one go, then commit (see the docs).
Should be something like:
db.session.add_all(data)
db.session.commit()
See this post for a brief discussion of why add_all is best when you have complex ORM relationships.
Also, you don't necessarily need all your models/schemas as class variables; it's fine to have them imported (or just present in the same file, as long as they're declared before the resource class).
You are calling insert on a list, because data is a list of OrderHistoryModel instances.
Also, the post method doesn't need to be a classmethod, and you probably had an error there as well.
Since data is a list of model instances, you can use the db.session.add_all method to add them to the session in bulk:
def post(self):
    value = req.get_json()
    data = self.schema(many=True).load(value)
    db.session.add_all(data)
    db.session.commit()
I'm using SQLAlchemy and have entered data into the database:
class Directions(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    key = db.Column(db.String(16), index=True, unique=False)
Now, I'm trying to search for a given key:
Directions.query.filter(Directions.key=={some string})
But I get:
<flask_sqlalchemy.BaseQuery object at 0x103df57b8>
How do I uncover the actual result?
Try using this (note that filter_by takes keyword arguments, so it's key=..., not key == ...):
direction = Directions.query.filter_by(key=<some string>).first()
print(direction)
The filter method returns a BaseQuery object, which you can chain multiple filters on. Use first() or all() to get the results of the current query.
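To illustrate the lazy behaviour, here is a minimal self-contained sketch. It uses plain SQLAlchemy with an in-memory SQLite database rather than Flask-SQLAlchemy, and the Directions rows are made up:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Directions(Base):
    __tablename__ = "directions"
    id = Column(Integer, primary_key=True)
    key = Column(String(16), index=True)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Directions(key="north"), Directions(key="south")])
session.commit()

# filter() only builds a query object; nothing hits the database until you
# iterate it or call a terminal method such as .all(), .first(), or .one().
query = session.query(Directions).filter(Directions.key == "north")
match = query.first()
print(match.key)
```

Printing `query` itself shows the generated SQL, which is why the asker saw a query object instead of rows.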
You have to open a session and use the query method of the session object. For example:
engine = create_engine(<db url>)
Session = sessionmaker(bind=engine)

with Session() as sess:
    results = sess.query(Directions).filter(Directions.key == <some string>).all()
The code leading up to the Session call is described in the Session documentation and might change for your application. You can read more about query objects in the docs as well.
So I have a set of JSON files and would like to import them into my SQLite database using SQLAlchemy.
The way that I am thinking is:
Declare a class in Python with all the variable names:
class Designs(Base):
    __tablename__ = 'designs'
    __table_args__ = {'sqlite_autoincrement': True}
    design_name = Column(String(80), nullable=False, primary_key=True)
    user_name = Column(String(80), nullable=False, primary_key=True)
    rev_tag = Column(String(80), nullable=False, primary_key=True)
    # ...many more variables...
Read the JSON (using the Python json package) and store it record by record:
import json

data = json.load(open('xxx.json'))
for key, value in data.items():
    # store it in the SQL database
    ...
But if my JSON file is very big, declaring all the variables in the class seems troublesome and hard to maintain, as I plan to grow the JSON further.
I'm wondering if there is a better way to do it.
SQLAlchemy offers a classical (imperative) mapping interface in addition to the declarative interface you have shown. Using it, you can add columns programmatically.
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.orm import mapper

metadata = MetaData()

# This tuple of columns could be generated programmatically
columns = (
    Column('design_name', String(80), primary_key=True),
    Column('user_name', String(80), nullable=False),
    Column('rev_tag', String(80), nullable=False),
    ...
)

designs = Table('designs', metadata, *columns)

class Designs(object):
    def __init__(self, json_data):
        for key, value in json_data.items():
            setattr(self, key, value)

mapper(Designs, designs)
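Here is a hedged end-to-end sketch of the same idea that generates the columns from a sample JSON document's keys. It uses registry().map_imperatively(), the SQLAlchemy 1.4+ spelling of the classic mapper() call; the sample data and the choice of the first sorted key as primary key are assumptions for the demo:

```python
import json

from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.orm import registry, sessionmaker

sample = json.loads('{"design_name": "d1", "user_name": "u1", "rev_tag": "r1"}')

metadata = MetaData()
# Build one String column per JSON key; the first (sorted) key acts as
# the primary key here, purely for illustration.
columns = [
    Column(name, String(80), primary_key=(i == 0))
    for i, name in enumerate(sorted(sample))
]
designs = Table("designs", metadata, *columns)

class Designs:
    def __init__(self, json_data):
        for key, value in json_data.items():
            setattr(self, key, value)

# map_imperatively replaces the legacy mapper() call in SQLAlchemy 1.4+.
mapper_registry = registry()
mapper_registry.map_imperatively(Designs, designs)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Designs(sample))
session.commit()
```

New keys in the JSON then only require regenerating the column list, not editing a declarative class by hand.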
I am trying to get a subset of a table from my database. The database is a MySQL database.
Python code:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, VARCHAR, DATETIME, INT, TEXT, TIMESTAMP
from datetime import datetime
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class TrackablesTable(Base):
    __tablename__ = 'Trackables'
    trackableId = Column(INT, primary_key=True)  # autogenerate
    productID = Column(TEXT)
    createdOn = Column(TIMESTAMP)  # autogenerate
    urlTitle = Column(TEXT)
    humanTitle = Column(TEXT)
    userId = Column(VARCHAR(45))

    def __repr__(self):
        return "<MyTable(%s)>" % (self.asin)

    @staticmethod
    def getTrackableByProductId(productID, session):
        trackable = session.query(TrackablesTable).filter_by(productID=productID)
        return trackable
Note the method at the bottom. I was expecting this method to get me all the rows in the "Trackables" table whose "productID" column has the value of the productID variable. Instead, it seems to be returning a malformed query.
The query it returns is below:
SELECT "Trackables"."trackableId" AS "Trackables_trackableId", "Trackables"."productID" AS "Trackables_productID", "Trackables"."createdOn" AS "Trackables_createdOn", "Trackables"."urlTitle" AS "Trackables_urlTitle", "Trackables"."humanTitle" AS "Trackables_humanTitle", "Trackables"."userId" AS "Trackables_userId"
FROM "Trackables"
WHERE "Trackables"."productID" = :productID_1
MySQL Workbench tells me the query is malformed. Further, the value of productID in the query (":productID_1") is not the actual value of the variable referenced in the code.
You need to execute the query, not just return it. The query remains a query object until a method such as all(), first(), or scalar() is called on it, or it is iterated over.
Your method should look like this:
@staticmethod
def getTrackableByProductId(productID, session):
    q = session.query(TrackablesTable).filter_by(productID=productID)
    return q.first()
When you print out the query, SQLAlchemy shows it with bind-parameter placeholders rather than actual values. The final statement is built by the DBAPI driver (such as python-mysql), outside of SQLAlchemy's control.
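If you want to see the statement with the bound value inlined while debugging, SQLAlchemy can compile the query with literal binds. A sketch with a cut-down Trackables model (SQLite in memory, not the asker's MySQL schema; the product ID is made up):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Trackables(Base):
    __tablename__ = "Trackables"
    trackableId = Column(Integer, primary_key=True)
    productID = Column(String(45))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

q = session.query(Trackables).filter_by(productID="PROD_9")
# Render the SQL with the bound value inlined. Debugging only: never build
# production SQL by string substitution.
sql = str(q.statement.compile(compile_kwargs={"literal_binds": True}))
print(sql)
```

The printed statement now contains 'PROD_9' instead of the :productID_1 placeholder, which is what Workbench expects when you paste a query in by hand.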
Side note: your code (both the use of staticmethod and the naming conventions) looks like a Java class translated to Python. Consider reading PEP 8.
I have the following class with associated attributes:
class Company(object):
    def __init__(self):
        self.ticker = None         # string
        self.company = None        # string
        self.creator = None        # string
        self.link = None           # string
        self.prices = []           # list of tuples (could be many hundreds of entries long)
        self.creation_date = None  # date entry
I populate individual companies and then store a list of them in the class Companies:
class Companies(object):
    def __init__(self):
        self.companies = []

    def __len__(self):
        return len(self.companies)

    def __getitem__(self, key):
        return self.companies[key]

    def __repr__(self):
        return 'Company list of length %i' % (self.__len__())

    def add(self, company):
        self.companies.append(company)
I want to be able to easily perform queries such as Companies.find(creator="someguy") and have the class return a list of all companies created by someguy. I also want to be able to run a query such as Companies.find(creation_date > x) and return a list of all entries created after a certain date.
I have a little experience doing similar work with Django's built-in ORM and found it pretty convenient. However, this project isn't using Django, and I don't know whether other, smaller packages provide this functionality. I'd like to keep the interfacing with the SQL server to a minimum because I don't have much experience with SQL.
Here are my questions:
To do the above, do I need to use an external database program? Or, does a package exist that will do the above, and also allow me to easily save the data (pickling or other)? I feel, perhaps unjustifiably so, that things start becoming messier and more complicated when you start incorporating SQL.
If a database is not necessary for the above, how do you evaluate when you have enough data where you will want to incorporate an external database?
What packages, e.g. Django, exist to minimize the SQL legwork?
All this being said, what do you recommend I do?
SQLAlchemy is a full-featured Python ORM (Object Relational Mapper). It can be found at http://www.sqlalchemy.org/.
Example usage to define a new object:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password
To create a new User:
>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> ed_user.name
'ed'
>>> ed_user.password
'edspassword'
To write to the database (after creating a new session):
ed_user = User('ed', 'Ed Jones', 'edspassword')
session.add(ed_user)
session.commit()
And to query:
our_user = session.query(User).filter_by(name='ed').first()
>>> our_user
<User('ed','Ed Jones', 'edspassword')>
More details can be found at http://docs.sqlalchemy.org/en/rel_0_8/orm/tutorial.html (taken from docs).
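The fragments above can be stitched into one runnable sketch (an in-memory SQLite database is assumed, since the question never names one):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)

# Create the schema and a session, then round-trip one row.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(User("ed", "Ed Jones", "edspassword"))
session.commit()

our_user = session.query(User).filter_by(name="ed").first()
print(our_user)
```

Adapting this to the Companies use case would mean one mapped Company class with ticker, creator, prices, and creation_date columns, and queries like session.query(Company).filter_by(creator="someguy") or filter(Company.creation_date > x) in place of the hand-rolled find method.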