How to update all object columns in SqlAlchemy? - python

I have a table of Users (more than 15 columns) and sometimes I need to completely update all the user attributes. For example, I want to replace
user_in_db = session.query(Users).filter_by(user_twitter_id=user.user_twitter_id).first()
with some other object.
I have found the following solution:
session.query(User).filter_by(id=123).update({"name": user.name})
but I feel that writing out all 15+ attributes is error-prone and there should be a simpler solution.

You can write:
session.query(User).filter_by(id=123).update({column: getattr(user, column) for column in User.__table__.columns.keys()})
This will iterate over the columns of the User model (table) and it'll dynamically create a dictionary with the necessary keys and values.
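If you do this in more than one place, you can wrap the same idea in a small helper. A minimal sketch (the helper name, the skip set, and the assumption that id is the primary key are mine, not part of the original answer):
def update_from_object(session, model, pk_value, source_obj, skip=("id",)):
    # build {column_name: value} from the source object, leaving the primary key alone
    values = {
        col: getattr(source_obj, col)
        for col in model.__table__.columns.keys()
        if col not in skip
    }
    session.query(model).filter_by(id=pk_value).update(values)

# usage: update_from_object(session, User, 123, user); session.commit()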

Related

How can I insert in Python multiple rows of a Dictionary with a WHERE condition?

I need a little help. My problem is that I don't understand how to connect a WHERE condition in SQLite with the insertion of multiple elements from a dictionary.
My goal is to compare the Location Column from the Dictionary with the Country column from the existing table.
But I can't find a way to approach this and implement a WHERE condition.
my code:
def add_countries_to_table(self, countryList):
    self.cursor.execute('''
        INSERT OR IGNORE INTO country (Country)
        VALUES (:Location)''', countryList)
    self.db.saveChanges()
Thanks for any help.
Simple. You don't. An insert is an insert, there is no WHERE clause in it.
You can define your Country to be UNIQUE and use an 'UPSERT' (UPDATE or INSERT) to ignore already existing values in the dictionary.
So INSERT INTO country (Country) VALUES (:Location) ON CONFLICT(Country) DO NOTHING would be something close to the command you're looking for.
Alternatively you can try using a Set instead of a Dictionary to prevent duplicate values beforehand and let your program/script deal with duplicates instead of the DB.
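Putting the pieces together, a minimal sqlite3 sketch of that upsert (table and column names follow the question; the sample rows are invented, and ON CONFLICT requires the column to be declared UNIQUE and SQLite 3.24 or newer):
import sqlite3

conn = sqlite3.connect("example.db")
cur = conn.cursor()
# Country must be UNIQUE for ON CONFLICT(Country) to apply
cur.execute("CREATE TABLE IF NOT EXISTS country (Country TEXT UNIQUE)")

rows = [{"Location": "Germany"}, {"Location": "France"}, {"Location": "Germany"}]
cur.executemany(
    "INSERT INTO country (Country) VALUES (:Location) ON CONFLICT(Country) DO NOTHING",
    rows,
)
conn.commit()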
You need to use a placeholder. I presume you tried that with (:Location). There are several placeholder styles; normally the placeholder would be ? or %s.
I presume you want to work with the values of the dict.
If you want to insert multiple rows you need to pass a list of tuples to executemany.
def add_countries_to_table(self, countryList):
    # build a list of single-value tuples for executemany
    ctry_tl = []
    for row in countryList['Location']:
        ctry_tl.append((row,))
    self.cursor.executemany('''
        INSERT OR IGNORE INTO country (Country)
        VALUES (?)''', ctry_tl)
    self.db.commit()

Pyspark join conditions using dictionary values for keys

I'm working on a script that tests the contents of some newly generated tables against production tables. The newly generated tables may or may not have the same column names and may have multiple columns that have to be used in join conditions. I'm attempting to write out a function with the needed keys being passed using a dictionary.
something like this:
def check_subset_rel(self, remote_df, local_df, keys):
    join_conditions = []
    for key in keys:
        join_conditions.append(local_df.key['local_key'] == remote_df.key['remote_key'])
    missing_subset_df = local_df.join(remote_df, join_conditions, 'leftanti')
pyspark/python doesn't like the dictionary usage in local_df.key['local_key'] and remote_df.key['remote_key']. I get a "'DataFrame' object has no attribute 'key'" error. I'm pretty sure that it's expecting the actual name of the column instead of a variable, but I'm not sure how to make that conversion between value and column name.
Does anyone know how I could go about this?
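One direction that matches the asker's hunch: look the columns up by their string names with bracket indexing instead of attribute access. A minimal sketch, assuming keys is a list of dicts with 'local_key' and 'remote_key' entries:
def check_subset_rel(self, remote_df, local_df, keys):
    join_conditions = []
    for key in keys:
        # df["column_name"] resolves a column from its string name,
        # which attribute access (df.key) cannot do here
        join_conditions.append(local_df[key['local_key']] == remote_df[key['remote_key']])
    return local_df.join(remote_df, join_conditions, 'leftanti')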

Inserting python list into SQLite cell [duplicate]

I have a list/array of strings:
l = ['jack','jill','bob']
Now I need to create a table in sqlite3 for python using which I can insert this array into a column called "Names". I do not want multiple rows with each name in each row. I want a single row which contains the array exactly as shown above and I want to be able to retrieve it in exactly the same format. How can I insert an array as an element in a db? What am I supposed to declare as the data type of the array while creating the db itself? Like:
c.execute("CREATE TABLE names(id text, names ??)")
How do I insert values too? Like:
c.execute("INSERT INTO names VALUES(?,?)",(id,l))
EDIT: I am being so foolish. I just realized that I can have multiple entries for the id and use a query to extract all relevant names. Thanks anyway!
You can store an array in a single string field if you somehow generate a string representation of it, e.g. using the pickle module. Then, when you read the line, you can unpickle it. Pickle converts many different complex objects (but not all) into a string from which the object can be restored. But that is most likely not what you want to do: you won't be able to do anything with the data in the table except select the lines and then unpickle the array. You won't be able to search.
If you want to store anything of varying length (or fixed length, but many instances of similar things), you would not want to put it in a column or multiple columns. Think vertically, not horizontally: don't think about columns, think about rows. For storing a vector with any number of components, a table is a good tool.
It is a little difficult to explain from the little detail you give, but you should think about creating a second table and putting all the names there, one set per row of your first table. You'd need some key in your first table that you can use in your second table, too:
c.execute("CREATE TABLE first_table(int id, varchar(255) text, additional fields)")
c.execute("CREATE TABLE names_table(int id, int num, varchar(255) name)")
With this you can still store whatever information you have except the names in first_table and store the array of names in names_table; just use the same id as in first_table and num to store the index positions inside the array. You can then later get back the array by doing something like
SELECT name FROM names_table
WHERE id=?
ORDER BY num
to read the array of names for any of your rows in first_table.
That's a pretty normal way to store arrays in a DB.
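A self-contained sqlite3 sketch of that two-table layout (the ids and sample values are invented for illustration):
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE first_table(id INTEGER, text VARCHAR(255))")
c.execute("CREATE TABLE names_table(id INTEGER, num INTEGER, name VARCHAR(255))")

names = ['jack', 'jill', 'bob']
c.execute("INSERT INTO first_table VALUES (?, ?)", (1, 'some row'))
# one row per name; num keeps the original order of the array
c.executemany("INSERT INTO names_table VALUES (?, ?, ?)",
              [(1, num, name) for num, name in enumerate(names)])

c.execute("SELECT name FROM names_table WHERE id=? ORDER BY num", (1,))
print([row[0] for row in c.fetchall()])  # ['jack', 'jill', 'bob']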
This is not the way to go. You should consider creating another table for the names, with a foreign key back to the original table.
You could pickle/marshal/json your array and store it as binary/varchar/jsonfield in your database.
Something like:
import json
names = ['jack','jill','bill']
snames = json.dumps(names)
c.execute("INSERT INTO nametable (names) VALUES (?)", (snames,))

What model should a SQLalchemy database column be to contain an array of data?

So I am trying to set up a database whose rows will be modified frequently. Every hour, for instance, I want to add a number to a particular part of my database. So if self.checkmarks is stored in the database as 3, what is the best way to update that entry by appending another number, so that self.checkmarks now equals 3, 2? I tried establishing the column as db.Array but got an attribute error:
AttributeError: 'SQLAlchemy' object has no attribute 'Array'
I have found how to update a database, but I do not know the best way to update by adding to a list rather than replacing. My approach was as follows, but I don't think append will work because the column cannot be an array:
ven = data.query.filter_by(venid=ven['id']).first()
ven.totalcheckins = ven.totalcheckins.append(ven['stats']['checkinsCount'])
db.session.commit()
Many thanks in advance
If you really want to have a python list as a Column in SQLAlchemy you will want to have a look at the PickleType:
array = db.Column(db.PickleType(mutable=True))
Please note that you will have to use the mutable=True parameter to be able to edit the column. SQLAlchemy will detect changes automatically and they will be saved as soon as you commit them.
If you want the pickle to be human-readable you can combine it with json or other converters that suffice your purposes.
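On newer SQLAlchemy releases the mutable=True flag is no longer available; the usual substitute is to wrap the type in MutableList so in-place appends are tracked. A minimal sketch, assuming plain SQLAlchemy rather than the Flask-SQLAlchemy db object from the question (the model and column names are invented):
from sqlalchemy import Column, Integer, PickleType, create_engine
from sqlalchemy.ext.mutable import MutableList
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Venue(Base):
    __tablename__ = 'venue'
    id = Column(Integer, primary_key=True)
    # MutableList tracks in-place changes (append etc.) so they are committed
    totalcheckins = Column(MutableList.as_mutable(PickleType), default=list)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    ven = Venue(totalcheckins=[3])
    session.add(ven)
    session.commit()
    ven.totalcheckins.append(2)  # append in place, don't reassign
    session.commit()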

group by in django

How can I create a simple GROUP BY query in the trunk version of Django?
I need something like
SELECT name
FROM mytable
GROUP BY name
Actually what I want to do is simply get all entries with distinct names.
If you need all the distinct names, just do this:
Foo.objects.values('name').distinct()
And you'll get a list of dictionaries, each one with a name key. If you need other data, just add more attribute names as parameters to the .values() call. Of course, if you add in attributes that may vary between rows with the same name, you'll break the .distinct().
This won't help if you want to get complete model objects back. But getting distinct names and getting full data are inherently incompatible goals anyway; how do you know which row with a given name you want returned in its entirety? If you want to calculate some sort of aggregate data for all the rows with a given name, aggregation support was recently added to Django trunk and can take care of that for you.
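For the aggregation case the answer mentions, a short sketch of the usual pattern (Foo follows the earlier example; num_rows is an invented label):
from django.db.models import Count

# one result per distinct name, plus how many rows share that name
Foo.objects.values('name').annotate(num_rows=Count('id'))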
Add .distinct() to your queryset:
Entries.objects.filter(something='xxx').distinct()
This will not work because every row has a unique id, so every record is distinct.
To solve my problem I used
foo = Foo.objects.all()
foo.query.group_by = ['name']
but this is not an official API.
