Updating a JSONType column in a database table - Python

I'm trying to update a JSON column in the User table with a Python script.
I have a list of UIDs (stored in the uid_list variable), and I would like to update the rows matching those UIDs in the database.
json_data = Column(JSONType) is the column that needs to be updated, specifically its name and surname keys.
The data stored in this column: {"view_data": {"active": false, "text": "", "link": "http://google.com/"}, "name": "John", "surname": "Black", "email": "john@gmail.com"}
def update_json_column_in_table_in_db_by_list_of_uid():
    uid_list = ['25a00f0e-58a5-4356-8b91-b18ea2eed71d', '68ccc759-97ae-48a2-bc42-5c2f1fa7a0ba', '9e2ee469-f777-4622-bca1-68d924caed0f']
    name = 'empty'
    surname = 'empty2'
    User.query.filter(User.uid.in_(uid_list)).update({User.json_data: name + surname})

You need to do two things:
use update() with .where() instead of Query.filter()
use func.jsonb_set (PostgreSQL) or func.json_set (SQLite; note its paths look like '$.name') so that only the targeted key is rewritten instead of the whole column
from sqlalchemy import update, func

stmt = update(User).values(json_data=func.jsonb_set(User.json_data, '{name}', name)).where(User.uid.in_(uid_list))
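Two follow-up details the snippet above leaves out (the db.session handle and the JSONB casts are my assumptions, since neither appears in the question): the statement still has to be executed, and jsonb_set expects its new value to be JSON rather than a bare Python string. A minimal sketch updating both keys:
import json
from sqlalchemy import update, func, cast
from sqlalchemy.dialects.postgresql import JSONB

# Nest two jsonb_set calls so name and surname are rewritten in one UPDATE.
# json.dumps() turns each Python string into a JSON literal, and the casts
# make the types explicit for PostgreSQL.
stmt = (
    update(User)
    .values(json_data=func.jsonb_set(
        func.jsonb_set(cast(User.json_data, JSONB),
                       '{name}', cast(json.dumps(name), JSONB)),
        '{surname}', cast(json.dumps(surname), JSONB)))
    .where(User.uid.in_(uid_list))
)
db.session.execute(stmt)
db.session.commit()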

How to effectively synchronize freshly fetched data with data stored in database?

Let's start with initialization of the database:
import sqlite3

# Each entry needs an explicit id: the INSERT below binds :id,
# and the ID column is declared NOT NULL.
entries = [
    {"id": 1, "name": "Persuasive", "location": "Bolivia"},
    {"id": 2, "name": "Crazy", "location": "Guyana"},
    {"id": 3, "name": "Humble", "location": "Mexico"},
    {"id": 4, "name": "Lucky", "location": "Uruguay"},
    {"id": 5, "name": "Jolly", "location": "Alaska"},
    {"id": 6, "name": "Mute", "location": "Uruguay"},
    {"id": 7, "name": "Happy", "location": "Chile"}
]
conn = sqlite3.connect('entries.db')
conn.execute('''DROP TABLE IF EXISTS ENTRIES''')
conn.execute('''CREATE TABLE ENTRIES
        (ID INT PRIMARY KEY NOT NULL,
         NAME TEXT NOT NULL,
         LOCATION TEXT NOT NULL,
         ACTIVE NUMERIC NULL);''')
conn.executemany("""INSERT INTO ENTRIES (ID, NAME, LOCATION) VALUES (:id, :name, :location)""", entries)
conn.commit()
That was an initial run, just to populate the database with some data.
Then, every time the application runs, new data gets fetched from somewhere:
findings = [
    {"name": "Brave", "location": "Bolivia"},      # new
    {"name": "Crazy", "location": "Guyana"},
    {"name": "Humble", "location": "Mexico"},
    {"name": "Shy", "location": "Suriname"},       # new
    {"name": "Cautious", "location": "Brazil"},    # new
    {"name": "Mute", "location": "Uruguay"},
    {"name": "Happy", "location": "Chile"}
]
In this case, we have 3 new items in the list. I expect all items already in the database to remain there, with the 3 new items appended to the db. All items in the list above should get the active flag set to True; the remaining ones should get it set to False. Let's prepare a dump from the database:
conn = sqlite3.connect('entries.db')
cursor = conn.execute("SELECT * FROM ENTRIES ORDER BY ID")
db_entries = []
for row in cursor:
    entry = {"id": row[0], "name": row[1], "location": row[2], "active": row[3]}
    db_entries.append(entry)
OK, now we can compare what's in the new findings with what was already in the database:
import random

for f in findings:
    n = next((d for d in db_entries if d["name"] == f["name"] and d["location"] == f["location"]), None)
    if n is None:
        id = int(random.random() * 10)
        conn.execute('''INSERT INTO ENTRIES(ID, NAME, LOCATION, ACTIVE) VALUES (?, ?, ?, ?)''',
                     (id, f["name"], f["location"], 1))
        conn.commit()
    else:
        active = next((d for d in db_entries if d['id'] == n['id']), None)
        active.update({"act": "yes"})
        conn.execute("UPDATE ENTRIES set ACTIVE = 1 where ID = ?", (n["id"],))
        conn.commit()
(I know you're probably upset with the random ID generator, but it's for prototyping purposes only.)
As you saw, the db_entries items that the findings have in common were flagged with {"act": "yes"}. They get processed now; besides that, the items that are no longer active get flagged differently and then queried for deactivation:
for d in db_entries:
    if "act" in d:
        conn.execute("UPDATE ENTRIES set ACTIVE = 1 where ID = ?", (d["id"],))
        conn.commit()
    else:
        if d["active"] == 1:
            d.update({"deact": "yes"})

for d in db_entries:
    if "deact" in d:
        conn.execute("UPDATE ENTRIES set ACTIVE = 0 where ID = ?", (d["id"],))
        conn.commit()
conn.close()
And this is it: items fetched on the fly were compared with those in the database and synchronized.
I have a feeling that this approach saves some data transfer between the application and the database, since it only updates items that require updating, but on the other hand it feels like the whole process could be rebuilt and made more efficient.
What would you improve in this process?
Wouldn't a simpler approach be to just insert all the new data and handle duplicates with ON CONFLICT ... DO UPDATE SET?
You wouldn't even necessarily need the ID field, but you would need a unique key on NAME and LOCATION to identify duplicates. The following query then identifies the duplicate and doesn't insert it, but just updates the NAME field with the same value again (basically the same result as ignoring the row):
INSERT INTO ENTRIES (NAME, LOCATION)
VALUES ('Crazy', 'Guyana')
ON CONFLICT(NAME,LOCATION) DO UPDATE SET NAME = 'Crazy';
Then you can simply execute:
conn.execute('''INSERT INTO ENTRIES(NAME, LOCATION) VALUES (?, ?)
                ON CONFLICT(NAME, LOCATION) DO UPDATE SET NAME = ?''',
             (f["name"], f["location"], f["name"]))
This would simplify your "insert only new entries" process. I reckon you could also combine this in such a way that the update you perform does not touch the NAME field but instead adds your ACTIVE logic here.
Also, SQLite supports UPSERT since version 3.24.0.
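To make that combination concrete, here is a minimal sketch of the whole synchronization built on the upsert. It assumes the table is redefined without the ID column (as suggested above), with a UNIQUE(NAME, LOCATION) constraint, and SQLite >= 3.24.0; none of that is in the original code.
import sqlite3

conn = sqlite3.connect('entries.db')
conn.execute('''DROP TABLE IF EXISTS ENTRIES''')
conn.execute('''CREATE TABLE ENTRIES
        (NAME TEXT NOT NULL,
         LOCATION TEXT NOT NULL,
         ACTIVE NUMERIC NULL,
         UNIQUE(NAME, LOCATION));''')

# Assume everything is inactive, then insert-or-reactivate whatever the
# current findings contain: two statements replace the compare-and-flag loops.
conn.execute("UPDATE ENTRIES SET ACTIVE = 0")
conn.executemany(
    """INSERT INTO ENTRIES (NAME, LOCATION, ACTIVE) VALUES (:name, :location, 1)
       ON CONFLICT(NAME, LOCATION) DO UPDATE SET ACTIVE = 1""",
    findings)
conn.commit()
conn.close()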

How can you query an item in a list field in DynamoDB using Python?

I have a table that contains an item with the following attributes:
{
"country": "USA",
"names": [
"josh",
"freddy"
],
"phoneNumber": "123",
"userID": 0
}
I'm trying to query an item in DynamoDB by looking for a name, using Python. So I would specify in my code that the item I need has "freddy" in the "names" field.
I saw many forums mentioning "contains", but none that showed an example...
My current code is the following:
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users_table')
data = table.query(
    FilterExpression='names = :name',
    ExpressionAttributeValues={
        ":name": "freddy"
    }
)
I obviously cannot use that because "names" is a list and not a string field.
How can I look for "freddy" in names?
Since the names field isn't part of the primary key, you can't use query. The only way to look for an item by names is to use scan.
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users_table')
data = table.scan(
    FilterExpression=Attr('names').contains('freddy')
)
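One caveat worth adding (not part of the original answer): a single scan call reads at most 1 MB, so on a table of any size you would follow the LastEvaluatedKey pagination token to collect every match:
items = data['Items']
# Keep scanning until DynamoDB stops returning a pagination token.
while 'LastEvaluatedKey' in data:
    data = table.scan(
        FilterExpression=Attr('names').contains('freddy'),
        ExclusiveStartKey=data['LastEvaluatedKey']
    )
    items.extend(data['Items'])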

How do I convert my tuples into a format acceptable for JSON in Python?

I currently have this method in my Python code:
@app.route('/getData', methods=['GET'])
def get_Data():
    c.execute("SELECT abstract,category,date,url from Data")
    data = c.fetchall()
    resp = jsonify(data)
    resp.status_code = 200
    return resp
The output I get from this is:
[
    [
        "2020-04-23 15:32:13",
        "Space",
        "https://www.bisnow.com/new-jersey",
        "temp"
    ],
    [
        "2020-04-23 15:32:13",
        "Space",
        "https://www.bisnow.com/events/new-york",
        "temp"
    ]
]
However, I want the output to look like this:
[
    {
        "abstract": "test",
        "category": "journal",
        "date": "12-02-2020",
        "link": "www.google.com"
    },
    {
        "abstract": "test",
        "category": "journal",
        "date": "12-02-2020",
        "link": "www.google.com"
    }
]
How do I convert my output into an expected format?
As @jonrsharpe indicates, you simply cannot expect the tuples coming from this database query to turn into dictionaries in the JSON output. Your data variable does not contain the information necessary to construct the response you desire.
It will depend on your database, but my recommendation would be to find a way to retrieve dicts from your database query instead of tuples, in which case the rest of your code should work with minimal changes. For instance, for sqlite, you could define your cursor c like this:
import sqlite3

connection = sqlite3.connect('dbname.db')  # database connection details here...
connection.row_factory = sqlite3.Row
c = connection.cursor()
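One caveat (my addition, not in the original answer): Flask's jsonify does not serialize sqlite3.Row objects directly, so you would still convert each row to a plain dict; Row already behaves like a mapping, so this is a one-liner. Aliasing url AS link in the SQL also gives you the exact key names from the desired output:
c.execute("SELECT abstract, category, date, url AS link FROM Data")
data = [dict(row) for row in c.fetchall()]  # each Row becomes {"abstract": ..., "link": ...}
resp = jsonify(data)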
Now, if your database for some reason cannot support a dictionary cursor, you need to roll your own dictionary after retrieving the database query results. For your example, something like this:
fieldnames = ('abstract', 'category', 'date', 'link')
numfields = len(fieldnames)
data = []
for row in c.fetchall():
    dictrow = {}  # build a fresh dict for every row
    for idx in range(numfields):  # range(0, numfields - 1) would skip the last field
        dictrow[fieldnames[idx]] = row[idx]
    data.append(dictrow)
I iterate over a list of field labels, which do not have to match your database columns but do have to be in the same order, and create a dict by pairing each label with the datum from the db tuple in the same position. This passage would replace the single line data = c.fetchall() in the OP.
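For what it's worth, pairing labels with positions is exactly what zip does, so the loop above can be condensed to one line with the same behavior:
fieldnames = ('abstract', 'category', 'date', 'link')
data = [dict(zip(fieldnames, row)) for row in c.fetchall()]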

How to index a document when I want to query multiple arbitrary fields (DynamoDB, boto)?

I have the following in Dynamo:
{
username: "Joe",
account_type: "standard",
favorite_food: "Veggies",
favorite_sport: "Hockey",
favorite_person: "Steve",
date_created: <utc-milliseconds>,
record_type: "Person"
}
My table was created as follows:
Table('persons', schema=[HashKey('record_type')], global_indexes=[GlobalAllIndex('IndexOne', parts=[HashKey('favorite_food')])])
I want to be able to perform queries where I can query for:
favorite_food = "Meat", favorite_sport="Hockey"
or just
favorite_food = "Meat", date_created > <some date in utc-milliseconds>
or even just:
favorite_food = "Meat"
or
account_type: "premium", favorite_food: "Veggies"
How many indexes should I be creating (global, local secondary, etc.)? I would like to perform a "query" rather than a scan if possible, but I cannot be sure what a user will query for. I just want a list of all the "usernames" that match the query.
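For concreteness, here is how the first combination could run against the existing IndexOne, sketched with the boto3 resource API rather than the legacy boto Table API used above (so treat the wiring as illustrative): the index's hash key carries the equality condition, and any non-key attribute can only be applied as a post-query filter.
import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('persons')

# favorite_food is IndexOne's hash key, so it can drive a real query;
# favorite_sport is not indexed and is filtered after the index lookup.
resp = table.query(
    IndexName='IndexOne',
    KeyConditionExpression=Key('favorite_food').eq('Meat'),
    FilterExpression=Attr('favorite_sport').eq('Hockey'),
    ProjectionExpression='username'
)
usernames = [item['username'] for item in resp['Items']]
A fully general "query any combination" requirement, though, usually ends up as either one GSI per frequently-queried attribute or a scan with filters.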

How can I query data filtered by a JSON column in SQLAlchemy?

I'm writing an app using Flask and Flask-SQLAlchemy.
I have a models.py file as follows:
from sqlalchemy.dialects.postgresql import JSON

class Custom(db.Model):
    __tablename__ = 'custom'

    id = db.Column(db.Integer, primary_key=True)  # a mapped model needs a primary key
    data = db.Column(JSON)
The data field's value would be like this:
[
    {"type": "a string", "value": "value string"},
    {"type": "another", "value": "val"},
    ...
]
Now I want to query all Custom objects whose data field contains an object like {"type": "anything", "value": "what I want"} somewhere in its list.
According to the documentation, it can be done using cast:
from sqlalchemy.types import Unicode
Custom.query.filter(Custom.data['value'].astext.cast(Unicode) == "what I want")
Assuming that your table is named "custom" and your JSON field is named "data", the following SQL statement will get your results where the value subfield is equal to "what I want":
from sqlalchemy import text

sql = text("select * from custom where data->>'value' = 'what I want'")
result = db.engine.execute(sql)
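A caveat on both snippets above (my addition): data->>'value' only matches when data is a single JSON object, while the sample value in this question is a JSON array of objects. For the array case, one option is PostgreSQL's JSONB containment operator @>, exposed in SQLAlchemy as .contains() on a JSONB expression; a minimal sketch, assuming the column can be cast to JSONB:
from sqlalchemy import cast
from sqlalchemy.dialects.postgresql import JSONB

# JSONB containment: does the array contain an element matching this
# partial object? Renders as: data @> '[{"value": "what I want"}]'
matches = Custom.query.filter(
    cast(Custom.data, JSONB).contains([{"value": "what I want"}])
).all()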
