Python - string indices must be integers

I apologize if this is a basic fix, and for the length of the post, but I am new to Python. I've also included a large chunk of the script for context.
I am using a script to pull scanning data from JSON into a MySQL DB. The script was working fine until an update was released.
Now when I run the script I receive the error:
for result in resultc['response']['results']:
TypeError: string indices must be integers
Before this update I knew the data types for each value, but this has changed and I cannot pinpoint where. Is there a way to convert each value to be recognized as a string?
# Send the cumulative JSON and then populate the table
cumresponse, content = SendRequest(url, headers, cumdata)
resultc = json.loads(content)
off = 0
print "\nFilling cumvulndata table with vulnerabilities from the cumulative database. Please wait..."
for result in resultc['response']['results']:
    off += 1
    print off, result
    cursor.execute ("""INSERT INTO cumvulndata(
                    offset,pluginName,repositoryID,
                    severity,pluginID,hasBeenMitigated,
                    dnsName,macAddress,familyID,recastRisk,
                    firstSeen,ip,acceptRisk,lastSeen,netbiosName,
                    port,pluginText,protocol) VALUES
                    (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,(FROM_UNIXTIME(%s)),%s,%s,(FROM_UNIXTIME(%s)),%s,%s,%s,%s)""",
                    (off, result["pluginName"], result["repositoryID"], result["severity"],
                     result["pluginID"], result["hasBeenMitigated"], result["dnsName"],
                     result["macAddress"], result["familyID"], result["recastRisk"],
                     result["firstSeen"], result["ip"], result["acceptRisk"], result["lastSeen"],
                     result["netbiosName"], result["port"], result["pluginText"], result["protocol"]))

Put this before the for loop to work out which object is the string (I guess it's probably the second one)
print type(resultc)
print type(resultc['response'])
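If one of those prints <type 'str'>, the update most likely changed the response payload (for example, an error message returned as a plain string instead of the usual dict). A minimal guard along those lines, a sketch only, reusing the names from the code above:
response = resultc.get('response')
if isinstance(response, basestring):
    # the whole response is a string (e.g. an error message), not a dict
    print "Unexpected string response:", response
elif isinstance(response.get('results'), basestring):
    print "Unexpected string in 'results':", response['results']
else:
    for result in response['results']:
        print result["pluginName"]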

Related

Dynamically Create Query String for Django > MySQL from JSON

I am new to Django and am having some issues writing to a MySQL DB. What I am trying to do is loop through nested JSON data and dynamically create a string to be used with a save() method.
I've looped through my nested JSON data and successfully created a string that contains the data I want to save in a single row to the MySQL table "mysqltable":
q = "station_id='thisid',stall_id='thisstaull',source='source',target='test'"
I then try to save this to the table in MySQL:
b = mysqltable(q)
b.save()
But I am getting the error:
TypeError: int() argument must be a string or a number, not 'mysqltable'
What I think is happening is that it doesn't like the fact that I have created a string to use in b = mysqltable(q). When I just write out the statement as below, it works fine:
q = mysqltable(station_id='thisid',stall_id='thisstaull',source='source',target='test')
q.save()
But I am not sure how to take that string and make it available to use with b.save(). Any help would be greatly appreciated!
Instead of building a string, build a dictionary and pass it directly to mysqltable:
mysqltable(**dictWithData)
Of course you could parse the string back into a dictionary, but that is wasted work...
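For example, a minimal sketch of that approach (the field values are taken from the question; how the dictionary gets filled from the nested JSON is assumed):
rowData = {
    'station_id': 'thisid',
    'stall_id': 'thisstaull',
    'source': 'source',
    'target': 'test',
}
b = mysqltable(**rowData)   # equivalent to mysqltable(station_id='thisid', ...)
b.save()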

Python: Iterating through q.collections.table

I used Dan Nugent's Python library (http://www.timestored.com/kdb-guides/python-api) and successfully got the table object, i.e., after submitting a query to the database. However, how can I iterate through every single row of the object?
Let's say the table object is "results". I tried:
for i in results.keys():
    print results[i]
However, it raises a ValueError. Any ideas?
From section 4 ("Example of sending / receiving data and tables") of that guide, you seem to be getting a list of dicts (results is not a dictionary). Try printing them as:
for result in results:
    print result

Python MySQL insert and retrieve a list in Blob

I'm trying to insert a list of elements into a MySQL database (into a Blob column). This is an example of my code:
myList = [1345,22,3,4,5]
myListString = str(myList)
myQuery = 'INSERT INTO table (blobData) VALUES (%s)'
cursor.execute(myQuery, (myListString,))
Everything works fine and I have my list stored in my database. But when I want to retrieve my list, it is now a string, and I have no idea how to get a real list of integers back instead of a string.
For example, if I now do:
myQuery = 'SELECT blobData FROM db.table'
cursor.execute(myQuery)
myRetrievedList = cursor.fetchall()
print myRetrievedList[0]
I'll get:
[
instead of :
1345
Is there any way to transform my string [1345,22,3,4,5] into a list ?
You have to pick a data format for your list, common solutions in order of my preference are:
json -- fast, readable, allows nested data, very useful if your table is ever used by any other system. checks if blob is valid format. use json.dumps() and json.loads() to convert to and from string/blob representation
repr() -- fast, readable, works across Python versions. unsafe if someone gets into your db. use repr() and eval() to get data to and from string/blob format
pickle -- fast, unreadable, does not work across multiple architectures (afaik). does not check if blob is truncated. use cPickle.dumps(..., protocol=(cPickle.HIGHEST_PROTOCOL)) and cPickle.loads(...) to convert your data.
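For example, a minimal sketch of the JSON option, reusing the table and column names from the question (the MySQL connection and cursor are assumed to exist already):
import json

myList = [1345, 22, 3, 4, 5]

# store: serialize the list to a JSON string before inserting
cursor.execute('INSERT INTO table (blobData) VALUES (%s)', (json.dumps(myList),))

# retrieve: parse the JSON string back into a real Python list
cursor.execute('SELECT blobData FROM db.table')
row = cursor.fetchone()
restoredList = json.loads(row[0])
print restoredList[0]   # prints 1345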
As per the comments on this answer, the OP has a list of lists being stored in the blob field. In that case, JSON seems the better way to go.
import json
...
...
myRetrievedList = cursor.fetchall()
jsonOfBlob = json.loads(myRetrievedList[0][0])  # parse the blob column of the first row
integerListOfLists = []
for oneList in jsonOfBlob:
    listOfInts = [int(x) for x in oneList]
    integerListOfLists.append(listOfInts)
return integerListOfLists #or print, or whatever

python SPARQL query RESULTS BINDINGS: IF statement on BINDINGs value?

Using Python with SPARQLWrapper, JSON, urllib2 & cgi. Had trouble passing a working SPARQL query with some NULL values to Python, so I populated the blanks with a literal and will try to filter at the output. I have this results section example:
for result in results["results"]["bindings"]:
    project = result["project"]["value"].encode('utf-8')
    filename = result["filename"]["value"].encode('utf-8')
    url = result["url"]["value"].encode('utf-8')
...and I print the %s. Is there a way to filter a value, i.e., IF VALUE NE "string" then PRINT? Or is there another workaround? I'm at the tail-end of a small project, I know I need a better wrapper, I just need to get these results filtered before I can move on. T very much IA...
I'm one of the developers of the SPARQLWrapper library, and this question has already been answered on the mailing list.
Regarding optional values in the original query, the result set simply comes back with no bindings for those variables. The problem is that we would need to parse the query to populate such missing entries, and we want to avoid that parsing; therefore you need to check for them yourself to avoid runtime KeyError problems.
Usually I use code like:
for result in results["results"]["bindings"]:
    party = result["party"]["value"] if ("party" in result) else None
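Combining that check with the filtering asked about above, a small sketch (PLACEHOLDER stands for whatever literal was used to fill the blanks; the variable names follow the question):
PLACEHOLDER = "NULL"   # assumed: the literal used to fill the blank values

for result in results["results"]["bindings"]:
    project = result["project"]["value"].encode('utf-8') if ("project" in result) else None
    # only print values that are present and are not the placeholder
    if project is not None and project != PLACEHOLDER:
        print project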

Python: using pyodbc and replacing row field values

I'm trying to figure out if it's possible to replace record values in a Microsoft Access (either .accdb or .mdb) database using pyodbc. I've pored over the documentation and noted where it says that "Row Values Can Be Replaced", but I have not been able to make it work.
More specifically, I'm attempting to replace a row value from a python variable. I've tried:
setting the connection autocommit to "True"
making sure that it's not a data type issue
Here is a snippet of the code where I execute a SQL query and use fetchone() to grab just one record (I know this query returns only one record). I then grab the existing value for a field (the field position integer is stored in the z variable) and get the new value I want to write to that field from an existing Python dictionary created earlier in the script.
pSQL = "SELECT * FROM %s WHERE %s = '%s'" % (reviewTBL, newID, basinID)
cursor.execute(pSQL)
record = cursor.fetchone()
if record:
    oldVal = record[z]
    val = codeCrosswalk[oldVal]
    record[z] = val
I've tried everything I can think of but cannot get it to work. Am I just misunderstanding the help documentation?
The script runs successfully, but the newly assigned value never seems to commit. I even tried putting print str(record[z]) after the record[z] = val line to see if the field has the new value, and the new value prints as if it worked...but when I check the table after the script has finished, the old values are still in the field.
I'd much appreciate any insight into this... I was hoping this would work the way it does with VBA in MS Access, where you can use an ADO Recordset to loop through the records in a table and assign values to a field from a variable.
thanks,
Tom
The "Row values can be replaced" from the pyodbc documentation refers to the fact that you can modify the values on the returned row objects, for example to perform some cleanup or conversion before you start using them. It does not mean that these changes will automatically be persisted in the database. You will have to use sql UPDATE statements for that.
