I am a beginner SQLite user and have run into some trouble; hoping to find someone who could help.
I am trying to read some data out of a database, put it into a variable in Python, and print it out onto an HTML page.
The table inside the database is called "Status" and it contains two columns, "stamp" and "messages". "stamp" is an INT containing a time stamp, and "messages" contains TEXT.
@cherrypy.expose
def comment(self, ID = None):
    con = lite.connect('static/database/Status.db')
    output = ""
    with con:
        cur = con.cursor()
        cur.execute("SELECT * FROM Status WHERE stamp = ?", (ID,))
        temp = cur.fetchone()
        output = temp[0]
    comments = self.readComments(ID)
    page = get_file(staticfolder + "/html/commentPage.html")
    page = page.replace("$Status", output)
The error I am getting reads:
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/usr/lib/pymodules/python2.7/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "proj1base.py", line 184, in comment
page = page.replace("$Status", output)
TypeError: expected a character buffer object
I was wondering if anyone could help me clarify what a character buffer object is, and how I can use one to make my code work?
Replace "character buffer" by "string" for starters. (There are more types exposing the "buffer protocol" in Python, but don't bother with them for now.) Most likely, output ended up not being a string. Log its type in the line before the error.
I have written the Python function below (a snippet of the full code) to run in AWS Lambda. Its purpose is to take a GeoJSON from an S3 bucket and parse it accordingly.
Once parsed, it is placed back into JSON format (data) and should then be inserted into the specified database using
bulk_item['uuid'] = str(uuid.uuid4())
bulk_item['name'] = feature_name
bulk_item['type'] = feature_type
bulk_item['info'] = obj
bulk_item['created'] = epoch_time
bulk_item['scope'] = 2
data = json.dumps(bulk_item)
print(data)
self.database.upsert_record(self.organisation, json_doc=data)
except Exception as e:
    print(f'Exception: {e.__class__.__name__}({e})')
The db_access file that the above relates to is another Python script. The function upsert_record is as below:
def upsert_record(self, organisation,
                  json_doc={}):
My code works perfectly until I try to upsert into the database. Once execution reaches this line, it throws the error
Traceback (most recent call last):
File "/var/task/s3_asset_handler.py", line 187, in process_incoming_file
self.database.upsert_record(self.organisation, json_doc=data)
File "/opt/python/database_access.py", line 1218, in upsert_record
new_uuid = json_doc['uuid']
TypeError: string indices must be integers
I can't seem to figure out the issue at all
You are trying to get an element from a JSON object, but you are passing a string.
The line
data = json.dumps(bulk_item)
creates a string representing the object.
Try using bulk_item on its own.
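A minimal sketch of that change, assuming upsert_record expects a dict it can index with string keys (as new_uuid = json_doc['uuid'] suggests):
# pass the dict itself instead of its JSON string form
self.database.upsert_record(self.organisation, json_doc=bulk_item)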
I have this code to query an InfluxDB database, but it is not working at all. Here is the Python code.
import os
from influxdb import InfluxDBClient
username = u'{}'.format(os.environ['INFLUXDB_USERNAME'])
password = u'{}'.format(os.environ['INFLUXDB_PASSWORD'])
client = InfluxDBClient(host='127.0.0.1', port=8086, database='data',
                        username=username, password=password)
result = client.query("SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01';")
I got the following error, but it's still unclear how to fix the code above. If I query InfluxDB directly from bash with SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01'; it works perfectly fine.
Press ENTER or type command to continue
Traceback (most recent call last):
File "graph_influxdb.py", line 11, in <module>
result = client.query("SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01';")
File "/home/ubuntu/.local/lib/python3.5/site-packages/influxdb/client.py", line 394, in query
expected_response_code=expected_response_code
File "/home/ubuntu/.local/lib/python3.5/site-packages/influxdb/client.py", line 271, in request
raise InfluxDBClientError(response.content, response.status_code)
influxdb.exceptions.InfluxDBClientError: 400: {"error":"error parsing query: found DCIX_OB, expected identifier at line 1, char 31"}
How can I fix it?
result = client.query("SELECT P_askbid_midprice1 FROM DCIX_OB WHERE time > '2018-01-01'")
This should work: in InfluxQL, single quotes denote string literals, so a measurement name like DCIX_OB should be left unquoted or wrapped in double quotes.
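An equivalent form with double-quoted identifiers, in case the measurement or field name ever needs quoting (this variant is an assumption, not from the original answer):
result = client.query('SELECT "P_askbid_midprice1" FROM "DCIX_OB" WHERE time > \'2018-01-01\'')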
You might be able to use Pinform, which is an ORM/OSTM (Object time series mapping) for InfluxDB.
It can help with designing the schema and building normal or aggregation queries.
cli.get_fields_as_series(OHLC,
                         field_aggregations={'close': [AggregationMode.MEAN]},
                         tags={'symbol': 'AAPL'},
                         time_range=(start_datetime, end_datetime),
                         group_by_time_interval='10d')
Disclaimer: I am the author of this library
I'm attempting to write a program that utilizes urllib2 to parse HTML, and then utilizes PyRSS2Gen to create the RSS feed, in XML.
I keep getting the error
Traceback (most recent call last):
File "pythonproject.py", line 46, in <module>
get_rss()
File "pythonproject.py", line 43, in get_rss
rss.write_xml(open("cssnews.rss.xml", "w"))
File "build/lib/PyRSS2Gen.py", line 34, in write_xml
self.publish(handler)
File "build/lib/PyRSS2Gen.py", line 380, in publish
item.publish(handler)
File "build/lib/PyRSS2Gen.py", line 427, in publish
_opt_element(handler, "title", self.title)
File "build/lib/PyRSS2Gen.py", line 58, in _opt_element
_element(handler, name, obj)
File "build/lib/PyRSS2Gen.py", line 53, in _element
obj.publish(handler)
AttributeError: 'builtin_function_or_method' object has no attribute 'publish'
upon trying to run it.
From what I could find, other users came across this issue when trying to create a new tag for the XML, but I am trying to use the default tags that come with PyRSS2Gen. Inspecting the PyRSS2Gen.py file shows the write_xml() method I am using, so is the error in how I am assigning values to the RSS items by popping them from a list?
def get_rss():
    sys.path.append('build/lib')
    from PyRSS2Gen import RSS2, RSSItem
    rss = RSS2(
        title = 'Python RSS Creator',
        link = 'technews.acm.org',
        description = 'Creates RSS out of HTML',
        items = [],
    )
    for x in range(0, len(rssTitles)):
        rss.items.append(RSSItem(
            title = rssTitles.pop,
            link = rssLinks.pop,
            description = rssDesc.pop,
        ))
    rss.write_xml(open("cssnews.rss.xml", "w"))

# 5 - Call function
get_rss()
I ended up just writing out to a file, like so:
news = open("news.rss.xml", "w")
news.write("<?xml version=\"1.0\" ?>")
news.write("\n")
news.write("<rss xmlns:atom=\"http://www.w3.org/2005/Atom\" version=\"2.0\">")
news.write("\n")
news.write("<channel>")
news.write("\n")
etc.
PyRSS2Gen relies on its inputs either being strings or having a publish method that does all the necessary conversion.
In this case, you forgot to call the pop method on rssTitles (and the other lists), so you are passing the bound method itself rather than a string. Adding () after all the pop mentions should give you a usable program.
Note that similar errors can also crop up when there are other non-string items around (e.g. byte strings); the AttributeError line gives you a hint as to the object that went into the RSS item, and the traceback indicates where in the RSS item that is (the title, in this case).
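A sketch of the corrected loop; note that list.pop() with no argument removes items from the end, so pop(0) is used here to keep the feed items in their original order:
for x in range(len(rssTitles)):
    rss.items.append(RSSItem(
        title = rssTitles.pop(0),
        link = rssLinks.pop(0),
        description = rssDesc.pop(0),
    ))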
I keep getting a FieldMissingError on a field ('severa_id') which I am sure exists. I've checked ver_33.py, which shows that the exception is raised if the field is not in self._meta.fields.
table._meta.fields shows the field being there:
print(table._meta.fields)
>>>
['proj_code', 'severa_id', 'rec_id', 'ext_key']
>>>
This is the code I'm trying:
table = dbf.Table(path_to_dbf)
table.open()
for row in dbf.Process(table):
    for project in projects:
        if str(row.proj_code)[0:4] == project["ProjectNumber"]:
            row.write_record(severa_id=project["GUID"])
I've also tried these methods of setting the field:
row.severa_id = project["ProjectNumber"]
#row.write()
row.write_record()
Lastly, I've also tried setting each of the other fields (with a random string) which results in the same error.
EDIT: I am using the dbf module (https://pypi.python.org/pypi/dbf/0.96.005)
EDIT: The original traceback:
Traceback (most recent call last):
File "<string>", line 420, in run_nodebug
File "C:\Users\Alexander\Documents\update_projecten\update_dbf.py", line 58, in <module>
row.write()
File "C:\Users\Alexander\Envs\vfp\lib\site-packages\dbf\ver_33.py", line 2451, in __getattr__
raise FieldMissingError(name)
dbf.ver_33.FieldMissingError: 'write: no such field in table'
EDIT: The final version of the script that worked. Note that it does not use Process and passes both the row and the field to be written to dbf.write.
table = dbf.Table(path_to_dbf)
table.open()
for row in table:
    for project in projects:
        if str(row.proj_code)[0:4] == project["ProjectNumber"]:
            dbf.write(row, severa_id=project["GUID"])
write_record no longer exists as a row method, so the error you are seeing is probably stating that write_record is not a field.
Instead, try passing the row to the module-level function:
dbf.write(row, severa_id=project['GUID'])
I'm having some trouble with SalesForce; I've never used it before, so I'm not entirely sure what is going wrong here. I am using the simple_salesforce Python module. I have successfully pulled data from SalesForce standard objects, but this custom object is giving me trouble. My query is
result = sf.query("Select Name from Call_Records__c")
which produces this error:
Traceback (most recent call last):
File "simple.py", line 15, in <module>
result = sf.query("Select Name from Call_Records__c")
File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 276, in query
_exception_handler(result)
File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 634, in _exception_handler
raise exc_cls(result.url, result.status_code, name, response_content)
simple_salesforce.api.SalesforceMalformedRequest: Malformed request https://sandbox.company.com/services/data/v29.0/query/?q=Select+Name+from+Call_Records__c. Response content: [{u'errorCode': u'INVALID_TYPE', u'message': u"\nSelect Name from Call_Records__c\n ^\nERROR at Row:1:Column:18\nsObject type 'Call_Records__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name.
Please reference your WSDL or the describe call for the appropriate names."}]
I've tried it with and without the __c for both the table name and the field name, still can't figure this out. Anything blatantly wrong?
Make sure the API name of your custom object really is Call_Records__c (it could also be CallRecords__c, depending on how it was created):
result = sf.query("Select Name from Call_Records__c")
Or
result = sf.query("Select Name from CallRecords__c")
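As the error message suggests, a describe call lists the object names your org actually exposes. A minimal sketch with simple_salesforce, assuming sf is the connected Salesforce instance from the question:
# print the API names of the custom objects visible to this connection
for obj in sf.describe()['sobjects']:
    if obj['custom']:
        print(obj['name'])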
Try using:
result = sf.query("Select Name from CallRecords__c")