I'm trying to create a pretty advanced query within Django and I'm having problems doing so. I can use the basic:
for obj in Invoice.objects.filter():
but if I try to move this into a raw PostgreSQL query I get an error telling me that the relation does not exist. Am I doing something wrong? I am following "Performing raw SQL queries" in the Django documentation but I keep getting the same error.
Full code:
def csv_report(request):
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename="somefilename.csv"'
    writer = csv.writer(response, csv.excel)
    response.write(u'\ufeff'.encode('utf8'))
    writer.writerow([
        smart_str(u"ID"),
        smart_str(u"value"),
        smart_str(u"workitem content type"),
        smart_str(u"created date"),
        smart_str(u"workitem.id"),
        smart_str(u"workitem"),
        smart_str(u"workitem_content_type"),
    ])
    for obj in Invoice.objects.raw('SELECT * from twm_Invoice'):
        writer.writerow([
            smart_str(obj.pk),
            smart_str(obj.value),
            smart_str(obj.workitem_content_type),
            smart_str(obj.created_date),
            smart_str(obj.workitem_id),
            smart_str(obj.workitem),
            smart_str(obj.workitem_content_type),
        ])
    return response
I have tried it with the app name in front of the table name and without it; neither seems to work.
Thanks J
Try running your raw SQL directly in the database. My guess is that your table name is not correct; Django's default table names are lowercase, in the form appname_modelname.
BTW, I hope you have a very good reason for using raw SQL queries and not the awesome ORM ;)
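If the table name is the culprit, one quick way to check (a small sketch, not part of the original answer) is to ask Django for the model's actual table name and build the raw query from that:

# Django keeps the real table name on the model's _meta options;
# by default it is "<app_label>_<modelname>" in all lowercase, e.g. "twm_invoice".
table = Invoice._meta.db_table
print(table)  # verify against what actually exists in PostgreSQL

for obj in Invoice.objects.raw('SELECT * FROM {0}'.format(table)):
    print(obj.pk, obj.value)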
I recently found out about SQLAlchemy in Python. I'd like to use it for data science rather than web applications.
I've been reading about it and I like that you can translate SQL queries into Python.
The main thing I'm confused about is this:
Since I'm reading data from an already well-established schema, I wish I didn't have to create the corresponding models myself.
I am able to get around that by reading the metadata for the tables and then just querying the tables and columns.
The problem is that when I want to join to other tables, this metadata reflection takes too long each time, so I'm wondering whether it makes sense to cache it in a pickled object, or whether there's another built-in method for that.
Edit: included code.
I noticed that the waiting time was due to an error in the loading function rather than to how the engine is used. I'm still leaving the code in case people comment something useful. Cheers.
The code I'm using is the following:
import os
import pickle as pkl
import sqlalchemy as alq

def reflect_engine(engine, update):
    store = f'cache/meta_{engine.logging_name}.pkl'
    if update or not os.path.isfile(store):
        meta = alq.MetaData()
        meta.reflect(bind=engine)
        with open(store, "wb") as opened:
            pkl.dump(meta, opened)
    else:
        # The cache was written in binary mode, so read it back in binary too.
        with open(store, "rb") as opened:
            meta = pkl.load(opened)
    return meta
def begin_session(engine):
    session = alq.orm.sessionmaker(bind=engine)
    return session()
Then I use the metadata object to get my queries...
def get_some_cars(engine, metadata):
    session = begin_session(engine)

    Cars = metadata.tables['Cars']
    Makes = metadata.tables['CarManufacturers']

    cars_cols = [getattr(Cars.c, each_one) for each_one in [
        'car_id',
        'car_selling_status',
        'car_purchased_date',
        'car_purchase_price_car']] + [
        Makes.c.car_manufacturer_name]

    statuses = {
        'selling':  ['AVAILABLE', 'RESERVED'],
        'physical': ['ATOURLOCATION']}

    inventory_conditions = alq.and_(
        Cars.c.purchase_channel == "Inspection",
        Cars.c.car_selling_status.in_(statuses['selling']),
        Cars.c.car_physical_status.in_(statuses['physical']),)

    the_query = (session.query(*cars_cols).
                 join(Makes, Cars.c.car_manufacturer_id == Makes.c.car_manufacturer_id).
                 filter(inventory_conditions).
                 statement)

    the_inventory = pd.read_sql(the_query, engine)
    return the_inventory
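For what it's worth, a built-in way to cut the reflection time without pickling (a sketch, assuming the two table names used above) is to reflect only the tables you actually need rather than the whole schema:

import sqlalchemy as alq

def reflect_needed_tables(engine):
    # MetaData.reflect accepts an "only" argument, so SQLAlchemy only
    # inspects the listed tables instead of every table in the schema.
    meta = alq.MetaData()
    meta.reflect(bind=engine, only=['Cars', 'CarManufacturers'])
    return meta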
I am currently working on querying MongoDB objects in Python Django and had no trouble creating queries when filtering on the other attributes I need.
However, I need to modify my queries to filter specifically on ObjectIds, returning one or no objects.
From my JavaScript I am passing JSON data to my Django views.py. Here's how it currently looks:
def update(request):
    # AJAX data
    line = json.loads(request.body)

    _id = line['_id']
    print("OBJECT_ID: %s" % (_id))

    another_id = line['another_id']
    print("ANOTHER_ID: %s" % (another_id))
*Don't confuse this with another_id: there are objects that share the same another_id, and unfortunately that has to remain so. That's why I can't query on it for the update, since it would update all the duplicates. This is the reason I need the ObjectId.
For checking, here's what it prints out:
{u'$oid': u'582fc95bb7abe7943f1a45b2'}
ANOTHER_ID: LTJ1277
Therefore I added the query to views.py like this:
try:
    Line.objects(_id=_id).update(set__geometry=geometry, set__properties=properties)
    print("Edited: " + another_id)
except:
    print("Unedited.")
But it didn't return any object.
So I was wondering whether the query itself can't recognize the $oid in the JSON body as "_id" : ObjectId("582fc95bb7abe7943f1a45b2")?
*Edit:
from bson.objectid import ObjectId
where I edited my views.py with:
_id = line['_id']
print("VALUES: %s" % (_id.get('$oid')))
try:
    Line.objects(_id=ObjectId(_id.get('$oid'))).update(set__geometry=geometry, set__properties=properties)
Output:
VALUES: 582fc95bb7abe7943f1a498c
No luck. Still not querying/not found.
According to this Using MongoDB with Django reference site:
Notice that to access the unique object ID, you use "id" rather than "_id".
I tried revising the code from:
Line.objects(_id=ObjectId(_id.get('$oid'))).update(set__geometry=geometry, set__properties=properties)
to
Line.objects(id=ObjectId(_id.get('$oid'))).update(set__geometry=geometry, set__properties=properties)
...And it now works fine. Keeping this question for others who might need this.
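For completeness, here is a rough sketch of the working view; the geometry and properties values are assumed to come from the same JSON body, which the question doesn't show:

import json
from bson.objectid import ObjectId

def update(request):
    line = json.loads(request.body)
    _id = line['_id']  # arrives as {u'$oid': u'...'} from the JavaScript side
    try:
        # MongoEngine exposes the document's ObjectId as "id", not "_id".
        Line.objects(id=ObjectId(_id.get('$oid'))).update(
            set__geometry=line.get('geometry'),
            set__properties=line.get('properties'))
        print("Edited: " + line['another_id'])
    except Exception:
        print("Unedited.")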
In my app I am including an XML parsing function. I am trying to read the data from an XML file and save it to a MySQL database.
I coded this with help from Google, but the data is not being saved in the database as required. I can run the app without any error.
Please see my code below.
views.py
def goodsdetails(request):
    path = "{0}shop.xml".format(settings.PROJECT_ROOT)
    xmlDoc = open(path, 'r')
    xmlDocData = xmlDoc.read()
    xmlDocTree = etree.XML(xmlDocData)

    for items in xmlDocTree.iter('item'):
        item_id = items[0].text
        customername = items[1].text
        itemname = items[2].text
        location = items[3].text
        rate = items[4].text
        shop = Shop.objects.create(item_id=item_id, customername=customername,
                                   itemname=itemname, location=location, rate=rate)
        shop.save()

    shops = Shop.objects.all()
    context = {'shops': shops}
    return render(request, 'index.html', context)
I am using the above logic to save the data into the database from the XML file. I am not getting any error, but it is not saving into the database.
Answers are most welcome.
*Update:* I updated the code, and the XML data actually does get saved in the DB, but while displaying the same page I am getting the following traceback:
IntegrityError at /
(1062, "Duplicate entry '101' for key 'PRIMARY'")
Thanks
Shop.objects.get loads data from the database. You want to create data, which you do by just calling Shop(item_id=item_id,customername=customername, itemname=itemname,location=location,rate=rate) and then shop.save().
If you want to update the data, you need to do something like this:
shop = Shop.objects.get(item_id=item_id)
shop.customername = customername
...etc...
shop.save()
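Alternatively, since the IntegrityError comes from creating a row whose primary key already exists, here is a sketch of the loop body using Django's update_or_create (assuming item_id is the primary key, as the traceback suggests), which makes the view safe to run more than once:

shop, created = Shop.objects.update_or_create(
    item_id=item_id,
    defaults={
        'customername': customername,
        'itemname': itemname,
        'location': location,
        'rate': rate,
    })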
I have the following model in Google App Engine:
class CustomUsers(db.Model):
    cuid = db.StringProperty()
    name = db.StringProperty()
    email = db.StringProperty()
    bday = db.StringProperty()
Now, given a cuid, I just want to check whether that user is present in the High Replication Datastore.
In my views I write the following, but eventually get a 500 error:
usr_prev = CustomUsers.all().filter('cuid =', request.POST['id'])
if not usr_prev:
    logging.info("this user does not exist")
else:
    logging.info("user exists")
But this gives me an error. How should I do it?
(Do not consider any importing issues.)
"500 errors" don't exist in isolation. They actually report useful information, either on the debug page or in the logs. I'd guess that your logs show that request.POST has no element id, but that is just a guess.
I noticed that you have assigned a query object to usr_prev. You need to execute the query; I think the Query.count(1) method should do it.
However, this may not be related to the 500 error.
You can do something like this to check (using the CustomUsers model and cuid property from the question):
query = CustomUsers.all()
cnt = query.filter('cuid =', user_id).count()
logging.info(cnt)
return int(cnt)
Once this is done: it seems you are using Django with App Engine, and per the error you are not returning a string response. You might want to return
return HttpResponse("1") instead of return HttpResponse(1)
I have this issue with App Engine's datastore. In the interactive console, I always get no entities when I ask whether a URL already exists in my DB. When I execute the following code...
from google.appengine.ext import db

class shortURL(db.Model):
    url = db.TextProperty()
    date = db.DateTimeProperty(auto_now_add=True)

q = shortURL.all()
#q.filter("url =", "httphello")
print q.count()
for item in q:
    print item.url
I get this response, which is fine:
6
httphello
www.boston.com
http://www.boston.com
httphello
www.boston.com
http://www.boston.com
But when I uncomment the line q.filter("url =", "httphello"), I get no entities at all (a response of 0). I know it's something ultra simple, but I just can't see it! Help.
TextProperty values are not indexed, and cannot be used in filters or sort orders.
You might want to try with StringProperty if you don't need more than 500 characters.
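A sketch of what that change would look like, using the same data as in the question:

from google.appengine.ext import db

class shortURL(db.Model):
    # StringProperty is indexed (values up to 500 characters), so it can be filtered on.
    url = db.StringProperty()
    date = db.DateTimeProperty(auto_now_add=True)

q = shortURL.all()
q.filter("url =", "httphello")
print q.count()
# Note: entities saved while url was still a TextProperty may need to be
# re-put so the index picks them up.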
I think .fetch() is missing. You can do a fetch before you do some manipulation on a model.
Also, I don't think you need db.TextProperty() for this; you can use db.StringProperty().