python/sqlite3 query with column names to JSON

I want to return the output of a SQL query, including the column names, as JSON, so that I can build a table on the client side, but I have not found a solution for this.
My code:
json_data = json.dumps(c.fetchall())
return json_data
The output should look like this:
{
"name" : "Toyota1",
"product" : "Prius",
"color" : [
"white pearl",
"Red Methalic",
"Silver Methalic"
],
"type" : "Gen-3"
}
Does anyone know a solution?

Your code only returns the values. To also get the column names, you can query the table sqlite_master, which holds the SQL string that was used to create each table.
c.execute("SELECT sql FROM sqlite_master WHERE "
          "tbl_name='your_table_name' AND type='table'")
create_table_string = c.fetchall()[0][0]
This will give you a string from which you can parse the column names:
"CREATE TABLE table_name (columnA text, columnB integer)"
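A sketch of an alternative that avoids parsing SQL at all: after a SELECT, sqlite3 fills cursor.description with one 7-tuple per column, whose first element is the column name, so you can zip the names with each row into dicts before dumping to JSON (the table and data below are invented for the example):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE cars (name text, product text, type text)")
c.execute("INSERT INTO cars VALUES ('Toyota1', 'Prius', 'Gen-3')")

c.execute("SELECT * FROM cars")
# cursor.description holds one 7-tuple per result column;
# index 0 of each tuple is the column name.
columns = [col[0] for col in c.description]
rows = [dict(zip(columns, row)) for row in c.fetchall()]
json_data = json.dumps(rows)
print(json_data)  # [{"name": "Toyota1", "product": "Prius", "type": "Gen-3"}]
```

Setting conn.row_factory = sqlite3.Row before querying gives the same name-to-value mapping per row, if you prefer that over zipping manually.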

Related

Updating Jsontype column in table database

I am trying to update a JSON column in the User table with a Python script.
I have a list of UIDs (stored in the uid_list variable); I would like to update the rows matching these UIDs in the database.
json_data = Column(JSONType) is the column that needs to be updated: the name and surname keys.
The data that stores in this column: {"view_data": {"active": false, "text": "", "link": "http://google.com/"}, "name": "John", "surname": "Black", "email": "john#gmail.com"}
def update_json_column_in_table_in_db_by_list_of_uid():
    uid_list = ['25a00f0e-58a5-4356-8b91-b18ea2eed71d', '68ccc759-97ae-48a2-bc42-5c2f1fa7a0ba', '9e2ee469-f777-4622-bca1-68d924caed0f']
    name = 'empty'
    surname = 'empty2'
    User.query.filter(User.uid.in_(uid_list)).update({User.json_data: name + surname})
You need to do two things:
use .where() on an update() statement instead of .filter()
use func.jsonb_set (PostgreSQL, path '{name}') or func.json_set (MySQL/SQLite, path '$.name') so that you update a single key instead of overwriting the whole column
from sqlalchemy import func, update
stmt = update(User).values(json_data=func.json_set(User.json_data, '$.name', name)).where(User.uid.in_(uid_list))
session.execute(stmt)
session.commit()

How can you query an item in a list field in DynamoDB using Python?

I have a table that contains an item with the following attributes:
{
"country": "USA",
"names": [
"josh",
"freddy"
],
"phoneNumber": "123",
"userID": 0
}
I'm trying to query an item in DynamoDB by looking for a name using Python. So I would write in my code that the item I need has "freddy" in the field "names".
I saw many forums mentioning "contains" but none that show an example...
My current code is the following:
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users_table')
data = table.query(
    FilterExpression='names = :name',
    ExpressionAttributeValues={
        ':name': 'freddy'
    }
)
I obviously cannot use that because "names" is a list and not a string field.
How can I look for "freddy" in names?
Since the names field isn't part of the primary key, you can't use query. The only way to look up an item by names is to use scan.
import boto3
from boto3.dynamodb.conditions import Key, Attr
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users_table')
data = table.scan(
    FilterExpression=Attr('names').contains('freddy')
)
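One caveat worth adding (not from the original answer, but documented DynamoDB behaviour): each scan call returns at most 1 MB of data, so on larger tables you must follow LastEvaluatedKey to fetch the remaining pages. A minimal helper, shown here against a stand-in table object so the loop is visible; with a real boto3 Table you would pass FilterExpression=Attr('names').contains('freddy') in kwargs:

```python
def scan_all(table, **kwargs):
    """Scan a DynamoDB table, following LastEvaluatedKey pagination."""
    items = []
    resp = table.scan(**kwargs)
    items.extend(resp["Items"])
    # Each response is capped at 1 MB; keep scanning while the
    # response carries a LastEvaluatedKey.
    while "LastEvaluatedKey" in resp:
        resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"], **kwargs)
        items.extend(resp["Items"])
    return items

# Stand-in table that pages its results, to demonstrate the loop.
class FakeTable:
    pages = [
        {"Items": [{"userID": 0}], "LastEvaluatedKey": {"userID": 0}},
        {"Items": [{"userID": 7}]},
    ]
    def __init__(self):
        self.calls = 0
    def scan(self, **kwargs):
        page = self.pages[self.calls]
        self.calls += 1
        return page

print(scan_all(FakeTable()))  # [{'userID': 0}, {'userID': 7}]
```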

Parsing json into Insert statements with Python

I have a file which contains several JSON records. I have to parse this file and load each of the JSONs into a particular SQL Server table. However, the table might not exist in the database, in which case I have to create it first before loading. So I have to parse the JSON file, figure out the fields/columns, and create the table. Then I will have to deserialize the JSONs into records and insert them into the created table.
The caveat is that some fields in the JSON are optional, i.e. a field might be absent from one JSON record but present in another. Below is an example file with 3 records:
{ id : 1001,
name : "John",
age : 30
} ,
{ id : 1002,
name : "Peter",
age : 25
},
{ id : 1003,
name : "Kevin",
age : 35,
salary : 5000
},
Notice that the field salary appears only in the 3rd record. The results should be:
CREATE TABLE tab ( id int, name varchar(100), age int, salary int );
INSERT INTO tab (id, name, age, salary) values (1001, 'John', 30, NULL)
INSERT INTO tab (id, name, age, salary) values (1002, 'Peter', 25, NULL)
INSERT INTO tab (id, name, age, salary) values (1003, 'Kevin', 35, 5000)
Can anyone please help me with some pointers as I am new to Python. Thanks.
You could try this:
import json

TABLE_NAME = "tab"
sqlstatement = ''
with open('data.json', 'r') as f:
    jsondata = json.loads(f.read())
for record in jsondata:
    keylist = "("
    valuelist = "("
    firstPair = True
    for key, value in record.items():
        if not firstPair:
            keylist += ", "
            valuelist += ", "
        firstPair = False
        keylist += key
        if isinstance(value, str):
            valuelist += "'" + value + "'"
        else:
            valuelist += str(value)
    keylist += ")"
    valuelist += ")"
    sqlstatement += "INSERT INTO " + TABLE_NAME + " " + keylist + " VALUES " + valuelist + "\n"
print(sqlstatement)
However, for this to work you'll need to change your JSON file to correct the syntax, like this:
[{
"id" : 1001,
"name" : "John",
"age" : 30
} ,
{
"id" : 1002,
"name" : "Peter",
"age" : 25
},
{
"id" : 1003,
"name" : "Kevin",
"age" : 35,
"salary" : 5000
}]
Running this gives the following output:
INSERT INTO tab (id, name, age) VALUES (1001, 'John', 30)
INSERT INTO tab (id, name, age) VALUES (1002, 'Peter', 25)
INSERT INTO tab (id, name, age, salary) VALUES (1003, 'Kevin', 35, 5000)
Note that you don't need to specify NULLs. If you don't specify a column in the insert statement, it should automatically insert NULL into any columns you left out.
In Python, you can do something like this using sqlite3 and json, both from the standard library.
import json
import sqlite3
# The string representing the json.
# You will probably want to read this string in from
# a file rather than hardcoding it.
s = """[
{
"id": 1001,
"name": "John",
"age" : 30
},
{
"id" : 1002,
"name" : "Peter",
"age" : 25
},
{
"id" : 1003,
"name" : "Kevin",
"age" : 35,
"salary" : 5000
}
]"""
# Parse the JSON string into a Python list of dicts.
data = json.loads(s)

# Open the file containing the SQL database.
with sqlite3.connect("filename.db") as conn:
    # Create the table if it doesn't exist.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS tab(
            id int,
            name varchar(100),
            age int,
            salary int
        );"""
    )
    # Insert each entry from the json into the table.
    keys = ["id", "name", "age", "salary"]
    for entry in data:
        # .get() makes each key default to None (i.e. NULL)
        # if the key doesn't exist in the json entry.
        values = [entry.get(key) for key in keys]
        # Let sqlite3 substitute each '?' with the corresponding
        # value in 'values'. DO NOT build the string manually;
        # the sqlite3 library handles unsafe strings this way.
        cmd = "INSERT INTO tab VALUES (?, ?, ?, ?);"
        conn.execute(cmd, values)
    conn.commit()
This will create a file named 'filename.db' in the current directory with the entries inserted.
To test the table:
# Testing the table.
with sqlite3.connect("filename.db") as conn:
    cmd = "SELECT * FROM tab WHERE salary IS NOT NULL;"
    cur = conn.execute(cmd)
    res = cur.fetchall()
    for r in res:
        print(r)

Query a multi-level JSON object stored in MySQL

I have a JSON column in a MySQL table that contains a multi-level JSON object. I can access the values at the first level using the JSON_EXTRACT function, but I can't find how to get past the first level.
Here's my MySQL table:
CREATE TABLE ref_data_table (
`id` INTEGER(11) AUTO_INCREMENT NOT NULL,
`symbol` VARCHAR(12) NOT NULL,
`metadata` JSON NOT NULL,
PRIMARY KEY (`id`)
);
Here's my Python script:
import json
import mysql.connector
con = mysql.connector.connect(**config)
cur = con.cursor()
symbol = 'VXX'
metadata = {
'tick_size': 0.01,
'data_sources': {
'provider1': 'p1',
'provider2': 'p2',
'provider3': 'p3'
},
'currency': 'USD'
}
sql = \
    """
    INSERT INTO ref_data_table (symbol, metadata)
    VALUES (%s, %s);
    """
cur.execute(sql, (symbol, json.dumps(metadata)))
con.commit()
The data is properly inserted into the MySQL table and the following statement in MySQL works:
SELECT symbol, JSON_EXTRACT(metadata, '$.data_sources')
FROM ref_data_table
WHERE symbol = 'VXX';
How can I request the value of 'provider3' in 'data_sources'?
Many thanks!
Try this path: '$.data_sources.provider3'
SELECT symbol, JSON_EXTRACT(metadata, '$.data_sources.provider3')
FROM ref_data_table
WHERE symbol = 'VXX';
The JSON_EXTRACT function in MySQL supports this: '$' references the JSON root, and each period descends one level of nesting. In this JSON example
{
"key": {
"value": "nested_value"
}
}
you could use JSON_EXTRACT(json_field, '$.key.value') to get "nested_value".
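One detail worth knowing (not from the original answer, but documented MySQL behaviour): JSON_EXTRACT returns a JSON value, so the result here is the quoted string "p3". Wrapping it in JSON_UNQUOTE, or using the ->> shorthand available since MySQL 5.7.13, yields the plain string p3:

```sql
-- metadata->>'$.path' is shorthand for
-- JSON_UNQUOTE(JSON_EXTRACT(metadata, '$.path'))
SELECT symbol,
       metadata->>'$.data_sources.provider3' AS provider3
FROM ref_data_table
WHERE symbol = 'VXX';
```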

Traditional SQL vs MongoDB/CouchDB for simple python app

Say I have a traditional SQL structure like so:
create table tags (id int PRIMARY KEY, tag varchar(100));
create table files (id int PRIMARY KEY, filename varchar(500));
create table tagged_files (tag_id int, file_id int);
I add some tags:
insert into tags (tag) values ('places');
insert into tags (tag) values ('locations');
And some files:
insert into files (filename) values ('/tmp/somefile');
insert into files (filename) values ('/tmp/someotherfile');
and then tag these files:
insert into tagged_files (tag_id, file_id) values (1,1);
insert into tagged_files (tag_id, file_id) values (1,2);
Then I can find all the files tagged with the first tag like so:
select * from files, tagged_files where id = file_id and tag_id = 1
But how do I do the same thing using NoSQL solutions like MongoDB and CouchDB? And which NoSQL project is best suited for something like this?
For MongoDB, chances are you'd simply save the files with their tags embedded:
db.Files.save({ "fn" : "/tmp/somefile", "ts" : [ { "t" : "places" }, { "t" : "locations" }] });
db.Files.save({ "fn" : "/tmp/someotherfile", "ts" : [ { "t" : "locations" }] });
An alternative is to save the tags separately (ObjectIds are 12 bytes, iirc):
db.Tags.save({ "t" : "places" });
db.Tags.save({ "t" : "locations" });
db.Files.save({ "fn" : "/tmp/somefile", "t" : [ { "i" : ObjectId("IdOfPlaces") }, { "i" : ObjectId("IdOfLocations") }] });
In both cases, getting a file is:
db.Files.find({ "_id" : ObjectId("4c19e79e244f000000007e0d") });
db.Files.find({ "fn" : "/tmp/somefile" });
Or a couple files based on tags:
db.Files.find({ ts : { t : "locations" } })
db.Files.find({ t : { i : ObjectId("4c19e79e244f000000007e0d") } })
These examples are from the mongo console. If you store tags by Id, you'd obviously have to look the tags up by their Ids once you've got the file(s).
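From Python you would issue the same queries through PyMongo; with dot notation, db.Files.find({"ts.t": "locations"}) matches any document whose ts array has an element with t equal to the tag. As a dependency-free sketch of what that dot-notation match does (document shapes copied from the save() calls above):

```python
# Documents shaped like the db.Files.save() calls above.
files = [
    {"fn": "/tmp/somefile", "ts": [{"t": "places"}, {"t": "locations"}]},
    {"fn": "/tmp/someotherfile", "ts": [{"t": "locations"}]},
]

def find_by_tag(docs, tag):
    """Mimic MongoDB's {"ts.t": tag} dot-notation match:
    keep a doc if any element of its 'ts' array has t == tag."""
    return [d for d in docs if any(e.get("t") == tag for e in d.get("ts", []))]

print([d["fn"] for d in find_by_tag(files, "locations")])
# ['/tmp/somefile', '/tmp/someotherfile']
```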
