With this query I select many rows from the table x_giolam:
cr.execute("select name, giolam from x_giolam where name=%s", (ma_luong,))
I want to create a for loop that sums the 'giolam' values of all the rows selected by this query.
You should do that in the query, not in a for loop:
SELECT name, SUM(giolam) as giolam_sum FROM x_giolam WHERE name=%s GROUP BY name
Or, since you're already filtering by name=%s, you don't need the GROUP BY:
SELECT SUM(giolam) as giolam_sum FROM x_giolam WHERE name=%s
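In Python that might look like this (a minimal sketch, assuming the cr cursor and the ma_luong variable from the question):
# A minimal sketch, reusing the cursor and variable from the question.
cr.execute("SELECT SUM(giolam) AS giolam_sum FROM x_giolam WHERE name=%s", (ma_luong,))
row = cr.fetchone()
total = row[0] if row[0] is not None else 0  # SUM returns NULL when no rows match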
As Michael noted, you should calculate the sum in the query. If you are dead set on looping through each matching row, you can do something like this:
cursor.execute("select name, giolam from x_giolam where name=%s", (ma_luong,))
rows = cursor.fetchall()
total = 0
for row in rows:
    total += row.giolam  # pyodbc rows allow attribute access by column name
print(total)
Adapted from the pyodbc reference.
Here is a picture for a better understanding:
[1]: https://i.stack.imgur.com/S6tpl.png
def consult(self):
    book = self.cuadro_blanco_cliente.get_children()
    for elementos in book:
        self.cuadro_blanco_cliente.delete(elementos)
    query = "SELECT Nro, codigo, nombre, nfc, telefono, celular, direccion FROM clientes"
    rows = self.run_query(query)
    for row in rows:
        self.cuadro_blanco_cliente.insert('', 0, text=row[1], values=row)
The problem isn't in the id field; it's in the way you add the rows to the display. You are traversing the result set from id 1 to n but always inserting at the beginning, which makes it look like the ids run from n to 1.
Try adding this at the end of your query clause:
"... ORDER BY id DESC"
This way you insert the last element first, then insert each earlier row before it, and so on, ensuring the displayed rows end up ordered by id.
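Applied to the query from the question, that might look like this (a sketch; it assumes the primary key column is named id; use Nro instead if that is your key):
# A sketch: the same consult() body with ORDER BY appended to the query.
query = ("SELECT Nro, codigo, nombre, nfc, telefono, celular, direccion "
         "FROM clientes ORDER BY id DESC")  # 'id' is an assumed key column
rows = self.run_query(query)
for row in rows:
    self.cuadro_blanco_cliente.insert('', 0, text=row[1], values=row)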
I added some lines to the code and fixed the problem; the numbering now begins from 1:
for row in rows:
    id = row[0]
    self.cuadro_blanco_cliente.insert("", END, id, text=id, values=row)
I have the following Python code to update a database where the first column is "id" INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE:
con = lite.connect('test_score.db')
with con:
    cur = con.cursor()
    cur.execute("INSERT INTO scores VALUES (NULL,?,?,?)", (first, last, score))
    item = cur.fetchone()
con.commit()
cur.close()
con.close()
I get a table "scores" with the following data:
1,Adam,Smith,68
2,John,Snow,76
3,Jim,Green,88
Two different users (userA and userB) copy test_score.db and the code to their own computers and use them separately.
I get back two test_score.db files, but now with different content:
user A test_score.db:
1,Adam,Smith,68
2,John,Snow,76
3,Jim,Green,88
4,Jim,Green,91
5,Tom,Hanks,15
user B test_score.db:
1,Adam,Smith,68
2,John,Snow,76
3,Jim,Green,88
4,Chris,Prat,99
5,Tom,Hanks,09
6,Tom,Hanks,15
I was trying to use
insert into AuditRecords select * from toMerge.AuditRecords;
to combine the two databases into one, but it failed because the first column is a unique id. The two databases now contain the same ids, with different or the same data, so the merge fails.
I would like to find the unique rows in both databases (all values different, ignoring the id) and merge the results into one full database.
Result should be something like this:
1,Adam,Smith,68
2,John,Snow,76
3,Jim,Green,88
4,Jim,Green,91
5,Tom,Hanks,15
6,Chris,Prat,99
7,Tom,Hanks,09
I could extract each value one by one and compare, but I want to avoid that, as I might have longer rows with more columns in the future.
Sorry if this is an obvious and easy question; I'm still learning. I tried to find the answer but failed; please point me to it if it already exists somewhere else. Thank you very much for your help.
You need to define how to resolve duplicated rows. Will you consider the max score? The min? The first one?
Assuming the table AuditRecords has all the rows of both user A and user B, you can use GROUP BY to deduplicate rows and an aggregation function to resolve the score:
insert into AuditRecords
select
    id,
    first_name,
    last_name,
    max(score) as score
from toMerge.AuditRecords
group by
    id,
    first_name,
    last_name;
For this requirement you should have defined a UNIQUE constraint for the combination of the columns first, last and score:
CREATE TABLE AuditRecords(
id INTEGER PRIMARY KEY AUTOINCREMENT,
first TEXT,
last TEXT,
score INTEGER,
UNIQUE(first, last, score)
);
Now you can use INSERT OR IGNORE to merge the tables:
INSERT OR IGNORE INTO AuditRecords(first, last, score)
SELECT first, last, score
FROM toMerge.AuditRecords;
Note that you must explicitly define the list of columns that will receive the values; id is missing from this list because its value is autoincremented on each insertion.
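In Python the whole merge can be driven from one connection by attaching the second file (a minimal sketch; the two file names are assumed stand-ins for your two copies):
import sqlite3

# A minimal sketch; the file names are assumptions for the two copies.
con = sqlite3.connect('test_score_a.db')
con.execute("ATTACH DATABASE 'test_score_b.db' AS toMerge")
with con:  # the with-block commits on success
    con.execute("""INSERT OR IGNORE INTO AuditRecords(first, last, score)
                   SELECT first, last, score FROM toMerge.AuditRecords""")
con.close()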
Another way to do it without defining the UNIQUE constraint is to use EXCEPT:
INSERT INTO AuditRecords(first, last, score)
SELECT first, last, score FROM toMerge.AuditRecords
EXCEPT
SELECT first, last, score FROM AuditRecords
I have a python program in which I want to read the odd rows from one table and insert them into another table. How can I achieve this?
For example, the first table has 5 rows in total, and I want to insert the first, third, and fifth rows into another table.
Note that the table may contain millions of rows, so performance is very important.
I found a few methods here. Here are two of them, transcribed to psycopg2.
If you have a sequential primary key, you can just use mod on it:
database_cursor.execute('SELECT * FROM table WHERE mod(primary_key_column, 2) = 1')
Otherwise, you can use a subquery to get the row number and use mod:
database_cursor.execute('''SELECT col1, col2, col3
                           FROM (SELECT row_number() OVER () AS rnum, col1, col2, col3
                                 FROM table) AS numbered
                           WHERE mod(rnum, 2) = 1''')
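Since the goal is to copy those rows into another table, either variant can be wrapped in an INSERT ... SELECT so the rows never travel through Python (a sketch; target_table, source_table, and the column names are placeholders, and database_connection is an assumed connection object):
# A sketch; table and column names are placeholders.
database_cursor.execute('''INSERT INTO target_table (col1, col2, col3)
                           SELECT col1, col2, col3
                           FROM (SELECT row_number() OVER () AS rnum, col1, col2, col3
                                 FROM source_table) AS numbered
                           WHERE mod(rnum, 2) = 1''')
database_connection.commit()  # assumed connection object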
If you have an id-type column that is guaranteed to increment by 1 on every insert (much like an auto-increment index), you could mod that column to select the rows. However, this breaks once you start deleting rows from the table you are selecting from.
A more complicated solution is to use PostgreSQL's row_number() function. The following assumes you have an id column that can be used to sort the rows in the desired order:
select r.*
from (select *, row_number() over (order by id) as rnum
      from <tablename>
     ) r
where r.rnum % 2 = 1
Note: regardless of how you do it, this will never be truly efficient, as you necessarily have to do a full table scan, and selecting all columns of a table with millions of records via a full table scan is going to be slow.
I have an SQLite3 database with a table called MYTABLE like this:
My objective is to update the values of the COUNT column by adding the new value to the existing value.
There are two inputs that I will receive:
First, a list of IDs whose COUNT column needs updating, for example: ['1','3','2','5']
Second, the count value to be added to every ID in the list above.
So far, the best I can come up with is:
# my input1, the list of IDs that need updating
id_list = ['1', '2', '5', '3']
# my input2, the value to be added to the existing count value
new_count = 3
# empty list to store the original count values before updating
original_counts = []
# iterate through input1 and retrieve the original count values
for item in id_list:
    cursor = conn.execute("SELECT COUNT from MYTABLE where ID=?", [item])
    for row in cursor:
        original_counts.append(row[0])
# iterate through input1 and update the count values
for i in range(len(id_list)):
    conn.execute("UPDATE MYTABLE set COUNT = ? where ID=?",
                 [original_counts[i] + new_count, id_list[i]])
Is there a better, more elegant, and more efficient way to achieve what I want?
UPDATE 1:
I tried this, based on N Reed's answer (not exactly the same), and it worked!
for item in id_list:
    conn.execute("UPDATE MYTABLE set COUNT = COUNT + ? where ID=?", [new_count, item])
The takeaway for me is that we can update a value in SQLite3 based on its current value (which I didn't know).
You want to create a query that looks like this:
UPDATE MYTABLE set COUNT = COUNT + 5 where ID in (1, 2, 3, 4)
I don't know Python that well, but you probably want something like:
placeholders = ",".join("?" * len(id_list))
conn.execute("UPDATE MYTABLE set COUNT = COUNT + ? where ID in (%s)" % placeholders,
             [new_count] + id_list)
Keep in mind there is a limit to the number of items you can have in the ID list with SQLite (I think it is something like 1000).
Also be very careful about SQL injection when you are creating queries this way. You will probably want to make sure that all the items in id_list have already been escaped somewhere else.
I also recommend against having a column called 'count', as it is a keyword in SQL.
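If the list can grow past that limit, one workaround is to update in chunks (a sketch, reusing conn, id_list, and new_count from the question; the chunk size is an arbitrary assumption):
# A sketch: update in chunks to stay under SQLite's bound-variable limit.
CHUNK = 500  # assumed chunk size, safely below the default limit
for start in range(0, len(id_list), CHUNK):
    chunk = id_list[start:start + CHUNK]
    placeholders = ",".join("?" * len(chunk))
    conn.execute("UPDATE MYTABLE set COUNT = COUNT + ? where ID in (%s)" % placeholders,
                 [new_count] + chunk)
conn.commit()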
I have an insert-only table in MySQL named word. Once the number of rows exceeds 1000000, I would like to delete the first 100000 rows of the table.
I am using MySQLdb in Python, so I have a global variable:
wordcount = cursor.execute("select * from word")
which returns the number of rows in the table to the Python environment. I then increment wordcount by 1 every time I insert a new row. Then I check whether the number of rows is greater than 1000000; if it is, I want to delete the first 100000 rows:
if wordcount > 1000000:
    cursor.execute("delete from word limit 100000")
I got this idea from this thread:
Delete first X lines of a database
However, this SQL ends up deleting my ENTIRE table. What am I missing here?
Thank you.
I don't think that's the right way of getting the number of rows. You need to change your statement to do a count(*) and then use MySQLdb's cursor.fetchone() to get a tuple of the results, where the first position (as in wordcount = cursor.fetchone()[0]) will hold the correct row count.
Your delete statement looks right. Maybe you have explicit transactions, in which case you'd have to call commit() on your db object after the delete.
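For example (a minimal sketch, reusing the cursor from the question):
# A minimal sketch, reusing the cursor from the question.
cursor.execute("SELECT COUNT(*) FROM word")
wordcount = cursor.fetchone()[0]  # first element of the one-row result tuple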
If your table "word" has an ID field (an auto_increment key field), you can write a stored procedure that deletes the first 100000 rows. The key part of the stored procedure is:
drop temporary table if exists tt_ids;
create temporary table tt_ids (id int not null);
insert into tt_ids  -- taking the first 100000 rows
    select id from word
    order by id
    limit 100000;
delete w
    from word w
    join tt_ids ids on w.id = ids.id;
drop temporary table if exists tt_ids;
You may also build an index on the ID field of tt_ids to speed up the query.
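From Python with MySQLdb you would then invoke the procedure with callproc (a sketch; the procedure name delete_first_rows and the connection variable db are assumptions):
# A sketch; the procedure name and connection variable are assumptions.
cursor.callproc('delete_first_rows')
db.commit()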