I have created a database in SQLite with a table that looks like this:
employeeID  Name      email                Age
1           Darshan   darshan#example.com  24
2           Anja      anja#example.com     22
3           Neeta     neeta#gmail.com      28
4           Michelle  m#gmail.com          32
So to query by searching for an ID I use this code:
ID = 1
c.execute("SELECT * FROM employee WHERE employeeID = :employeeID", {"employeeID": ID})
print(c.fetchall())
conn.close()
which returns this, which is exactly what I wanted:
1 Darshan darshan#example.com 24
but is it possible to return only the email or just the Age from this?
You can specify the returned columns in the SQL statement like this for the Age column:
SELECT Age FROM employee WHERE ...
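For instance, a minimal sketch (assuming the same employee table and cursor c from the question) that fetches only the email for a given ID:
ID = 1
# select only the email column for the matching employee
c.execute("SELECT email FROM employee WHERE employeeID = :employeeID", {"employeeID": ID})
print(c.fetchone())  # e.g. ('darshan#example.com',)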
I have two dataframes.
One is music.
name       Date        Edition  Song_ID  Singer_ID
LA         01.05.2009  1        1        1
Second     13.07.2009  1        2        2
Mexico     13.07.2009  1        3        1
Let's go   13.09.2009  1        4        3
Hello      18.09.2009  1        5        (4,5)
And another dataframe called singer
Singer     nationality  Singer_ID
JT Watson  USA          1
Rafinha    Brazil       2
Juan Casa  Spain        3
Kidi       USA          4
Dede       USA          5
Now I would like to create a database called musicten from these two dataframes using sqlite3.
What I have done so far:
import sqlite3
conn = sqlite3.connect('musicten.db')
c = conn.cursor()
c.execute('''
CREATE TABLE IF NOT EXISTS singer
([Singer_ID] INTEGER PRIMARY KEY, [Singer] TEXT, [nationality] TEXT)
''')
c.execute('''
CREATE TABLE IF NOT EXISTS music
([SONG_ID] INTEGER PRIMARY KEY, [SINGER_ID] INTEGER SECONDARY KEY, [name] TEXT, [Date] DATE, [EDITION] INTEGER)
''')
conn.commit()
import sqlite3
conn = sqlite3.connect('musicten.db')
c = conn.cursor()
c.execute('''
INSERT INTO singer (Singer_ID, Singer,nationality)
VALUES
(1,'JT Watson',' USA'),
(2,'Rafinha','Brazil'),
(3,'Juan Casa','Spain'),
(4,'Kidi','USA'),
(5,'Dede','USA')
''')
c.execute('''
INSERT INTO music (Song_ID,Singer_ID, name, Date,Edition)
VALUES
(1,1,'LA',01/05/2009,1),
(2,2,'Second',13/07/2009,1),
(3,1,'Mexico',13/07/2009,1),
(4,3,'Let's go',13/09/2009,1),
(5,tuple([4,5]),'Hello',18/09/2009,1)
''')
conn.commit()
But this code does not seem to work for inserting the values.
So my goal is to INSERT the values into the tables, so that the database ends up with two populated tables.
First, do not import sqlite3 a second time. Also, you still have an open connection, so there is no need to reconnect.
Two issues with the SQL:
'Let''s go' (a single quote inside a string literal must be doubled/escaped)
tuple([4,5]) is Python, not SQL => store it as the text '(4,5)'
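Putting those fixes together, the second INSERT could look roughly like this (a sketch: it stores the dates as text in the same DD.MM.YYYY form as the dataframe, and follows the '(4,5)' suggestion even though Singer_ID is declared INTEGER, which SQLite's flexible typing tolerates):
c.execute('''
INSERT INTO music (Song_ID, Singer_ID, name, Date, Edition)
VALUES
(1, 1, 'LA', '01.05.2009', 1),
(2, 2, 'Second', '13.07.2009', 1),
(3, 1, 'Mexico', '13.07.2009', 1),
(4, 3, 'Let''s go', '13.09.2009', 1),
(5, '(4,5)', 'Hello', '18.09.2009', 1)
''')
conn.commit()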
I have 2 tables in PostgreSQL:-
"student" table
student_id name score
1 Adam 10
2 Brian 9
"student_log" table:-
log_id student_id score
1 1 10
2 2 9
I have a Python script which fetches a DataFrame with the columns "name" and "score" and then uses it to populate the student table.
I want to update the student and student_log table whenever the "score" changes for a student. Also, if there is a new student name in the dataframe, I want to add another row for it in the student table as well as maintain its record in the "student_log" table. Can anyone suggest how it can be done?
Let us consider the new fetched DataFrame looks like this:-
name score
Adam 7
Lee 5
Then the Expected Result is:-
"student" table
student_id name score
1 Adam 7
2 Brian 9
3 Lee 5
"student_log" table:-
log_id student_id score
1 1 10
2 2 9
3 1 7
4 3 5
I finally found a good answer. I used a trigger, a function, and a CTE.
I created a function to log changes, along with a trigger to handle the updates. Here is the code.
CREATE OR REPLACE FUNCTION log_last_changes()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
DECLARE
    serial_num integer;
BEGIN
    IF NEW.name <> OLD.name OR NEW.score <> OLD.score THEN
        SELECT SETVAL('log_id_seq', (SELECT MAX(id) FROM log)) INTO serial_num;
        INSERT INTO log(student_id, score)
        VALUES (NEW.id, NEW.score)
        ON CONFLICT DO NOTHING;
    END IF;
    RETURN NEW;
END;
$$;
CREATE TRIGGER log_student
AFTER UPDATE
ON student
FOR EACH ROW
EXECUTE PROCEDURE log_last_changes();
The CTE expression is as follows:
WITH new_values(id, name, score) AS (
    VALUES
        (1, 'Adam', 7),
        (2, 'Brian', 9),
        (3, 'Lee', 5)
),
upsert AS (
    UPDATE student s
    SET name = nv.name,
        score = nv.score
    FROM new_values nv, student s2
    WHERE s.id = nv.id AND s.id = s2.id
    RETURNING s.*
)
INSERT INTO student
SELECT id, name, score
FROM new_values
WHERE NOT EXISTS (
    SELECT 1 FROM upsert up WHERE up.id = new_values.id
);
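To drive this from the Python script that fetches the dataframe, the same upsert idea can be parameterized and run through a driver such as psycopg2 (a sketch under that assumption; the connection string and the dataframe name df are placeholders, and brand-new students rely on student_id being a serial/identity column):
import psycopg2

conn = psycopg2.connect("dbname=school user=postgres")  # hypothetical connection string
cur = conn.cursor()

# df is the freshly fetched dataframe with columns "name" and "score"
for _, row in df.iterrows():
    cur.execute("""
        WITH upsert AS (
            UPDATE student
            SET score = %(score)s
            WHERE name = %(name)s
            RETURNING student_id
        )
        INSERT INTO student (name, score)
        SELECT %(name)s, %(score)s
        WHERE NOT EXISTS (SELECT 1 FROM upsert)
    """, {"name": row["name"], "score": row["score"]})

conn.commit()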
I guess you are trying to diff two dataframes.
Here is an example:
import pandas as pd

# old student dataframe
old_pd: pd.DataFrame
# new student dataframe
new_pd: pd.DataFrame

# join old scores onto the new frame by student name
joined_pd = new_pd.join(old_pd.set_index('name'), on='name', lsuffix='_new', rsuffix='_old')
# rows whose score changed (or that did not exist before)
diff_pd = joined_pd[joined_pd['score_new'] != joined_pd['score_old']]
# then insert all diff_pd rows into the student_log table, and update the student table
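As a rough follow-up sketch of that last step (assuming a psycopg2 connection conn to the PostgreSQL database; it only covers students that already exist in the student table, so brand-new names would first need their own INSERT into student):
cur = conn.cursor()
for _, row in diff_pd.iterrows():
    # update the current score in student ...
    cur.execute("UPDATE student SET score = %s WHERE name = %s",
                (row['score_new'], row['name']))
    # ... and append a history row to student_log
    cur.execute(
        "INSERT INTO student_log (student_id, score) "
        "SELECT student_id, %s FROM student WHERE name = %s",
        (row['score_new'], row['name']))
conn.commit()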
I think this question is a duplicate, but I couldn't understand the other answers...
Original table looks like this.
NAME AGE SMOKE
John 25 None
Alice 23 None
Ken 26 None
I want to update the SMOKE column in these rows,
but I need to use the output of a function coded in Python, check_smoke(). If I pass a name to check_smoke(), it returns "Smoke" or "Not Smoke".
So final table would look like below:
NAME AGE SMOKE
John 25 Smoke
Alice 23 Not Smoke
Ken 26 Not Smoke
I'm using sqlite3 and python3.
How can I do it? Thank you for help!
You could use one cursor to select the rows and another one to update them.
Assuming that the name of the table is smk (replace it by your actual name) and that con is an established connection to the database, you could do:
curs = con.cursor()
curs2 = con.cursor()
batch = 64  # size of a batch of records

curs.execute("SELECT DISTINCT name FROM smk")
while True:
    names = curs.fetchmany(batch)  # extract a bunch of rows
    if len(names) == 0:
        break
    curs2.executemany('UPDATE smk SET smoke=? WHERE name=?',  # and update them
                      [(check_smoke(name[0]), name[0]) for name in names])
con.commit()
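Using a second cursor with fetchmany() keeps the SELECT result set intact while the updates run in batches. For completeness, a minimal sketch of the setup this snippet assumes (the database file name and the check_smoke() body here are just placeholders):
import sqlite3

con = sqlite3.connect('people.db')  # hypothetical database file

def check_smoke(name):
    # placeholder for the real Python logic the question refers to
    return "Smoke" if name == "John" else "Not Smoke"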
I am quite new to SQLite3 as well as Python; I am a complete beginner in SQLite and am learning as I go for my project. I am working on a project where I have one database with about 20 tables inside of it. One table is for user input and the other tables are pre-loaded with values. How can I compare and match the values in a pre-loaded table against the user table? For example:
Users Table:
Barcode: Item:
1234 milk
4321 cheese
5678 butter
8765 water
9876 sugar
Pre-Loaded Table:
Barcode: Availability:
1234 1
5678 1
9876 1
1111 1
Now, I want to be able to compare each row in the Pre-Loaded Table to each row in the Users Table. They both have the Barcode column in common to be able to compare. As a result, during the query process, it should check each row:
1234 - milk - 1 (those columns are equal )
5678 - butter - 1 ( those columns are equal)
9876 - sugar - 1 (those columns are equal)
1100 - - 1 ( this barcode does not exist in the Users Table)
So when a barcode, in this case 1100, doesn't exist in the Users Table, the code should print: "You don't have all the items for the Pre-Loaded Table." How can I get the code to do this?
So far I have this (this code does work, by the way):
import sqlite3 as sq
connect = sq.connect('Food_Data.db')
con = connect.cursor()
sql = ("SELECT Users_Food.Barcode, Users_Food.Item, Recipe1.Ham_Swiss_Omelet FROM Users_Food INNER JOIN Recipe1 ON Users_Food.Barcode = Recipe1.Barcode WHERE Recipe1.Ham_Swiss_Omelet = '1'")
con.execute(sql)
data = con.fetchall()
print("You can make: Ham Swiss Omelet")
formatted_row = '{:<10} {:<9} {:>9} '
print(formatted_row.format("Barcode", "Ingredients", "Availability"))
for row in data:
    print(formatted_row.format(*row))
    # print(row[:])

# connect.commit()
It prints:
You can make: Ham Swiss Omelet
Barcode Ingredients Availability
9130849874 butter 1
2870896881 eggs 1
5501066727 water 1
1765023029 salt 1
9118188735 pepper 1
4087256674 ham 1
3009527296 cheese 1
The SQLite code:
sql = ("SELECT Users_Food.Barcode, Users_Food.Item, Recipe1.Ham_Swiss_Omelet FROM Users_Food INNER JOIN Recipe1 ON Users_Food.Barcode = Recipe1.Barcode WHERE Recipe1.Ham_Swiss_Omelet = '1'")
It combines the two tables on the common Barcode column, along with the corresponding food names and availability. However, if one of the barcode values is not present in the Pre-Loaded table, how can I go about detecting that it is missing while still displaying what the two tables have in common? It is like checking whether the tables are identical.
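One common way to surface the missing rows (a sketch, not from the original post, assuming the same Users_Food and Recipe1 tables and the cursor con above) is to LEFT JOIN from the recipe side, so barcodes with no matching user item come back as NULL:
sql = ("SELECT Recipe1.Barcode, Users_Food.Item, Recipe1.Ham_Swiss_Omelet "
       "FROM Recipe1 LEFT JOIN Users_Food ON Users_Food.Barcode = Recipe1.Barcode "
       "WHERE Recipe1.Ham_Swiss_Omelet = '1'")
con.execute(sql)
rows = con.fetchall()

# Item is None for recipe barcodes that are missing from Users_Food
missing = [row for row in rows if row[1] is None]
if missing:
    print("You don't have all the items for the Pre-Loaded Table.")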
How can I transform this table:
ID ITEM_CODE
--------------------
1 1AB
1 22S
1 1AB
2 67R
2 225
3 YYF
3 1AB
3 UUS
3 F67
3 F67
3 225
......
..to a list of lists, each list being a distinct ID containing its allocated item_codes?
in the form: [[1AB,22S,1AB],[67R,225],[YYF,1AB,UUS,F67,F67,225]]
Using this query:
SELECT ID, ITEM_CODE
FROM table1
ORDER BY ID;
and calling cursor.fetchall() in Python does not return it as a list of lists grouped by ID.
Thank you
You will probably have less post-processing in Python using this query:
SELECT GROUP_CONCAT(ITEM_CODE)
FROM table1
GROUP BY ID
ORDER BY ID;
That will directly produce that result:
1AB,22S,1AB
67R,225
YYF,1AB,UUS,F67,F67,225
After that, cursor.fetchall() will directly return more or less what you expected, I think.
EDIT:
result = [row[0].split(',') for row in cursor.fetchall()]
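Putting it together, a minimal end-to-end sketch (assuming an sqlite3 database containing the table1 above; the file name is a placeholder):
import sqlite3

conn = sqlite3.connect('items.db')  # hypothetical database file
cursor = conn.cursor()

cursor.execute("SELECT GROUP_CONCAT(ITEM_CODE) FROM table1 GROUP BY ID ORDER BY ID")

# each row is a 1-tuple like ('1AB,22S,1AB',); split it back into a list
result = [row[0].split(',') for row in cursor.fetchall()]
print(result)  # [['1AB', '22S', '1AB'], ['67R', '225'], ['YYF', '1AB', 'UUS', 'F67', 'F67', '225']]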