Select into subdict from multiple tables - python

I have a database structure like this:
CREATE TABLE town (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    population INTEGER NOT NULL
);

CREATE TABLE person (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    age INTEGER NOT NULL,
    hometown_id INTEGER REFERENCES town(id)
);
And I want to get the following result when selecting:
{
    "name": "<person.name>",
    "age": "<person.age>",
    "hometown": {
        "name": "<town.name>",
        "population": "<town.population>"
    }
}
I'm already using psycopg2.extras.DictCursor, so I think I need to play with SQL's SELECT AS.
Here's an example of what I tried, with no result; I've tried many similar variants with minor adjustments, all of them raising different errors:
SELECT
    person(name, age),
    town(name, population) AS town,
FROM person
JOIN town ON town.id = person.hometown_id
Any way to do this, or should I just select all columns individually and build the dict inside of Python?
Postgres version info:
psql (9.4.6, server 9.5.2)
WARNING: psql major version 9.4, server major version 9.5.
Some psql features might not work.

Something like this?
t=# with t as (
select to_json(town),* from town
)
select json_build_object('name',p.name,'age',age,'hometown',to_json) "NameItAsYou Wish"
from person p
join t on t.id=p.hometown_id
;
NameItAsYou Wish
--------------------------------------------------------------------------------
{"name" : "a", "age" : 23, "hometown" : {"id":1,"name":"tn","population":100}}
(1 row)
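For reference, here is a rough sketch of how that query could be run from Python with psycopg2 — the connection parameters are placeholders, and the query is a slight variation on the one above (joining town directly instead of going through the CTE):

import psycopg2

# Hypothetical connection parameters -- adjust to your environment.
conn = psycopg2.connect(dbname="mydb", user="me")
cur = conn.cursor()

# Let Postgres build the nested JSON, as in the answer above.
cur.execute("""
    SELECT json_build_object(
               'name', p.name,
               'age', p.age,
               'hometown', to_json(t)
           )
    FROM person p
    JOIN town t ON t.id = p.hometown_id
""")

for (person_json,) in cur:
    # psycopg2 deserializes json columns into Python dicts automatically
    print(person_json["hometown"]["name"])

cur.close()
conn.close()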

Related

How to effectively synchronize freshly fetched data with data stored in database?

Let's start with initialization of the database:
import sqlite3

entries = [
    {"name": "Persuasive", "location": "Bolivia"},
    {"name": "Crazy", "location": "Guyana"},
    {"name": "Humble", "location": "Mexico"},
    {"name": "Lucky", "location": "Uruguay"},
    {"name": "Jolly", "location": "Alaska"},
    {"name": "Mute", "location": "Uruguay"},
    {"name": "Happy", "location": "Chile"}
]

conn = sqlite3.connect('entries.db')
conn.execute('''DROP TABLE IF EXISTS ENTRIES''')
conn.execute('''CREATE TABLE ENTRIES
    (ID INTEGER PRIMARY KEY NOT NULL,
     NAME TEXT NOT NULL,
     LOCATION TEXT NOT NULL,
     ACTIVE NUMERIC NULL);''')
# Named placeholders require every key to be present, so IDs are assigned here.
conn.executemany("""INSERT INTO ENTRIES (ID, NAME, LOCATION) VALUES (:id, :name, :location)""",
                 [dict(e, id=i) for i, e in enumerate(entries, start=1)])
conn.commit()
That was an initial run, just to populate the database with some data.
Then, every time the application runs, new data gets fetched from somewhere:
findings = [
    {"name": "Brave", "location": "Bolivia"},     # new
    {"name": "Crazy", "location": "Guyana"},
    {"name": "Humble", "location": "Mexico"},
    {"name": "Shy", "location": "Suriname"},      # new
    {"name": "Cautious", "location": "Brazil"},   # new
    {"name": "Mute", "location": "Uruguay"},
    {"name": "Happy", "location": "Chile"}
]
In this case, we have 3 new items in the list. I expect all items already in the database to remain there and the 3 new items to be appended to the db; all items from the list above should get the ACTIVE flag set to True, and the remaining ones should get the flag set to False. Let's prepare a dump from the database:
conn = sqlite3.connect('entries.db')
cursor = conn.execute("SELECT * FROM ENTRIES ORDER BY ID")
db_entries = []
for row in cursor:
    entry = {"id": row[0], "name": row[1], "location": row[2], "active": row[3]}
    db_entries.append(entry)
OK, now we can compare what's in new findings, and what was there already in the database:
import random

for f in findings:
    n = next((d for d in db_entries if d["name"] == f["name"] and d["location"] == f["location"]), None)
    if n is None:
        id = int(random.random() * 10)
        conn.execute('''INSERT INTO ENTRIES(ID, NAME, LOCATION, ACTIVE) VALUES (?, ?, ?, ?)''',
                     (id, f["name"], f["location"], 1))
        conn.commit()
    else:
        active = next((d for d in db_entries if d['id'] == n['id']), None)
        active.update({"act": "yes"})
        conn.execute("UPDATE ENTRIES set ACTIVE = 1 where ID = ?", (n["id"],))
        conn.commit()
(I know you're probably upset with the random ID generator, but it's for prototyping purposes only.)
As you saw, the db_entries items that also appear in findings were marked with a flag ({"act": "yes"}). Those get processed now; in addition, the items that are no longer active get marked with a different flag and are then updated for deactivation:
for d in db_entries:
    if "act" in d:
        conn.execute("UPDATE ENTRIES set ACTIVE = 1 where ID = ?", (d["id"],))
        conn.commit()
    else:
        if d["active"] == 1:
            d.update({"deact": "yes"})

for d in db_entries:
    if "deact" in d:
        conn.execute("UPDATE ENTRIES set ACTIVE = 0 where ID = ?", (d["id"],))
        conn.commit()
conn.close()
And this is it: items fetched on the fly were compared with those in the database and synchronized.
I have a feeling that this approach saves some data transfer between application and database, as it only updates items that require updating, but on the other hand it feels like the whole process could be rebuilt and made more effective.
What would you improve in this process?
Wouldn't a simpler approach be to just insert all new data, handling duplicates by appending ON CONFLICT DO UPDATE SET?
You wouldn't even necessarily need the ID field, but you would need a unique key on NAME and LOCATION to identify duplicates. Then the following query would identify the duplicate and not insert it, but just update the NAME field with the same value again (so basically the same result as ignoring the row):
INSERT INTO ENTRIES (NAME, LOCATION)
VALUES ('Crazy', 'Guyana')
ON CONFLICT(NAME,LOCATION) DO UPDATE SET NAME = 'Crazy';
then you can simply execute:
conn.execute('''INSERT INTO ENTRIES(NAME, LOCATION) VALUES (?, ?) ON CONFLICT(NAME,LOCATION) DO UPDATE SET NAME=?''',
(f["name"], f["location"], f["name"]))
This would simplify your "insert only new entries" process. I reckon you could also combine this in such a way that the update does not touch the NAME field but instead implements your ACTIVE logic.
Also note that SQLite supports UPSERT since version 3.24.0.
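As a rough sketch of how the whole sync could then look — assuming a unique key on (NAME, LOCATION), SQLite 3.24.0 or newer, and an ID column that auto-assigns itself — something like this (an illustration, not a tested drop-in replacement):

import sqlite3

conn = sqlite3.connect('entries.db')

# Hypothetical one-time schema change: ON CONFLICT needs a unique key to detect duplicates.
conn.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_name_location ON ENTRIES(NAME, LOCATION)")

# Mark everything inactive, then upsert the current findings as active.
conn.execute("UPDATE ENTRIES SET ACTIVE = 0")
conn.executemany(
    """INSERT INTO ENTRIES (NAME, LOCATION, ACTIVE) VALUES (:name, :location, 1)
       ON CONFLICT(NAME, LOCATION) DO UPDATE SET ACTIVE = 1""",
    findings)
conn.commit()
conn.close()

This trades the per-row comparison in Python for two statements, and the database decides what is new and what is a duplicate.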

How to set and print the value of json object in postgresql?

What I have done till now is:
DO
$do$
DECLARE
    namz TEXT;
    jsonObject json =
        '{
            "Name": "Kshitiz Kala",
            "Education": "B.Tech",
            "Skills": ["J2EE", "JDBC", "Html"]
        }';
BEGIN
    SELECT jsonObject->'Name' INTO namz;
    SELECT namz;
END
$do$
I am not finding any success here.
The actual problem is that I am passing a JSON object to a stored procedure which will store the data in three different tables: 1) a user table containing user_id, user_name, user_edu; 2) a skill table containing skill_id, skill_name; 3) a user_skill table containing id, user_id, usr_skill_id.
This is the json object I am passing from Django application
{"Name": "Kshitiz Kala", "Education": "B.Tech", "Skills": ["J2EE", "JDBC", "Html"]}
Your json code is fine, just check:
SELECT '{"Name": "Kshitiz Kala",
"Education": "B.Tech",
"Skills": ["J2EE", "JDBC", "Html"]
}'::json->'Name'
or in plpgsql:
DO
$do$
DECLARE
    namz TEXT;
    jsonObject json =
        '{
            "Name": "Kshitiz Kala",
            "Education": "B.Tech",
            "Skills": ["J2EE", "JDBC", "Html"]
        }';
BEGIN
    SELECT jsonObject->'Name' INTO namz;
    RAISE 'JSON value Name is %', namz;
END
$do$
The problem is somewhere else. E.g. the last line in that block won't do anything (a bare SELECT namz has no destination in plpgsql).
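As for the actual goal of splitting the JSON into the three tables described in the question, a rough sketch could look like the function below. The table and column names are taken from the question, user_id and skill_id are assumed to be serial columns, and json_array_elements_text is used to unnest the Skills array — treat it as an illustration, not a tested procedure:

CREATE OR REPLACE FUNCTION store_user(p_data json) RETURNS void AS
$func$
DECLARE
    v_user_id  integer;
    v_skill    text;
    v_skill_id integer;
BEGIN
    -- user table: user_id, user_name, user_edu (column names from the question)
    INSERT INTO "user" (user_name, user_edu)
    VALUES (p_data->>'Name', p_data->>'Education')
    RETURNING user_id INTO v_user_id;

    -- unnest the Skills array and link each skill to the user
    -- (no deduplication of existing skills is attempted here)
    FOR v_skill IN SELECT json_array_elements_text(p_data->'Skills')
    LOOP
        INSERT INTO skill (skill_name) VALUES (v_skill)
        RETURNING skill_id INTO v_skill_id;

        INSERT INTO user_skill (user_id, usr_skill_id) VALUES (v_user_id, v_skill_id);
    END LOOP;
END
$func$ LANGUAGE plpgsql;

-- called from the application as, e.g.:
-- SELECT store_user('{"Name": "Kshitiz Kala", "Education": "B.Tech", "Skills": ["J2EE", "JDBC", "Html"]}');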

Query a multi-level JSON object stored in MySQL

I have a JSON column in a MySQL table that contains a multi-level JSON object. I can access the values at the first level using the JSON_EXTRACT function, but I can't find how to go deeper than the first level.
Here's my MySQL table:
CREATE TABLE ref_data_table (
    `id` INTEGER(11) AUTO_INCREMENT NOT NULL,
    `symbol` VARCHAR(12) NOT NULL,
    `metadata` JSON NOT NULL,
    PRIMARY KEY (`id`)
);
Here's my Python script:
import json
import mysql.connector

# config holds the usual connection parameters (host, user, password, database)
con = mysql.connector.connect(**config)
cur = con.cursor()

symbol = 'VXX'
metadata = {
    'tick_size': 0.01,
    'data_sources': {
        'provider1': 'p1',
        'provider2': 'p2',
        'provider3': 'p3'
    },
    'currency': 'USD'
}

# no quotes around the placeholders -- the connector handles quoting itself
sql = """
    INSERT INTO ref_data_table (symbol, metadata)
    VALUES (%s, %s);
    """
cur.execute(sql, (symbol, json.dumps(metadata)))
con.commit()
The data is properly inserted into the MySQL table and the following statement in MySQL works:
SELECT symbol, JSON_EXTRACT(metadata, '$.data_sources')
FROM ref_data_table
WHERE symbol = 'VXX';
How can I request the value of 'provider3' in 'data_sources'?
Many thanks!
Try this path: '$.data_sources.provider3'
SELECT symbol, JSON_EXTRACT(metadata, '$.data_sources.provider3')
FROM ref_data_table
WHERE symbol = 'VXX';
The JSON_EXTRACT function in MySQL supports that: the '$' references the JSON root, whereas periods reference levels of nesting. In this JSON example
{
    "key": {
        "value": "nested_value"
    }
}
you could use JSON_EXTRACT(json_field, '$.key.value') to get "nested_value".
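To pull the nested value back from the Python side, a small sketch continuing the script from the question might look like this (reusing its cur; JSON_UNQUOTE strips the surrounding double quotes that JSON_EXTRACT leaves on string values):

sql = """
    SELECT symbol, JSON_UNQUOTE(JSON_EXTRACT(metadata, %s))
    FROM ref_data_table
    WHERE symbol = %s;
    """
cur.execute(sql, ('$.data_sources.provider3', 'VXX'))
for symbol, provider3 in cur:
    print(symbol, provider3)  # VXX p3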

How to index a document to query multiple arbitrary fields (DynamoDB, boto)?

I have the following in Dynamo:
{
    username: "Joe",
    account_type: "standard",
    favorite_food: "Veggies",
    favorite_sport: "Hockey",
    favorite_person: "Steve",
    date_created: <utc-milliseconds>,
    record_type: "Person"
}
My table was created as follows:
Table('persons', schema=[HashKey('record_type')], global_indexes=[GlobalAllIndex('IndexOne', parts=[HashKey('favorite_food')])])
I want to be able to perform queries where I can query for:
favorite_food = "Meat", favorite_sport="Hockey"
or just
favorite_food = "Meat", date_created > <some date in utc-milliseconds>
or even just:
favorite_food = "Meat"
or
account_type: "premium", favorite_food: "Veggies"
How many indexes should I be creating (global secondary, local secondary, etc.)? I would like to be able to perform a "query" as opposed to a scan if possible, but I cannot be sure what a user would query. I just want a list of all the "usernames" that match the query.

python/sqlite3 query with column name to JSON

I want to return a SQL query output with the column names as JSON, to create a table on the client side. But I have not found a solution for this. My code:
json_data = json.dumps(c.fetchall())
return json_data
I'd like output like this:
{
    "name": "Toyota1",
    "product": "Prius",
    "color": [
        "white pearl",
        "Red Methalic",
        "Silver Methalic"
    ],
    "type": "Gen-3"
}
Does anyone know a solution?
Your code only returns the values. To also get the column names you need to query a table called 'sqlite_master', which has the sql string that was used to create the table.
c.execute("SELECT sql FROM sqlite_master WHERE "
          "tbl_name='your_table_name' AND type = 'table'")
create_table_string = c.fetchall()[0][0]
This will give you a string from which you can parse the column names:
"CREATE TABLE table_name (columnA text, columnB integer)"
