I have a database table with 3 columns (A, B, C). I want to add some rows to the table, and for that I am going to take input from the user via a 'textentrydialog' like this: https://pastebin.com/0JYm5x6e. The problem is that I want to add multiple rows to the table for multiple values of 'A' while the values of B and C stay the same. For example:
B = Ram
C = Aam
A = s,t,k
So the values in table should insert in this way:
(s,Ram,Aam)
(t,Ram,Aam)
(k,Ram,Aam)
Can someone please help with how I can insert these?
Here is a proposal that produces the output you have shown from the input you have shown.
Note that I assume you insist on this way of input, which implies using a single table.
If you can accept different input, I recommend using two tables:
one with (id, A, C), one with (id, B), and then querying with a join using(id).
An MCVE for this is at the end of the answer. It contains some additional test cases, which I made up to demonstrate that it does not only give the shown output for the shown input, trying to cover the obvious use cases.
Query:
select A, group_concat(B), C
from toy
group by A,C;
Output:
Mar|t,u|Aam
Ram|s,t,k|Aam
Ram|k,s,m|Maa
MCVE:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE toy (A varchar(10), B varchar(10), C varchar(10));
INSERT INTO toy VALUES('Ram','s','Aam');
INSERT INTO toy VALUES('Ram','t','Aam');
INSERT INTO toy VALUES('Ram','k','Aam');
INSERT INTO toy VALUES('Mar','t','Aam');
INSERT INTO toy VALUES('Mar','u','Aam');
INSERT INTO toy VALUES('Ram','k','Maa');
INSERT INTO toy VALUES('Ram','s','Maa');
INSERT INTO toy VALUES('Ram','m','Maa');
COMMIT;
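For the insert side the question actually asks about, here is a minimal Python sketch (assuming sqlite3 and a single table; the variable names, the comma as the split character, and the (A, B, C) column order from the question are assumptions):

import sqlite3

conn = sqlite3.connect("toy.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS toy (A varchar(10), B varchar(10), C varchar(10))")

# Values as they might come from the text entry dialog (hypothetical names).
a_input = "s,t,k"
b_value = "Ram"
c_value = "Aam"

# Build one row per comma-separated item in A, repeating B and C.
rows = [(a.strip(), b_value, c_value) for a in a_input.split(",")]
cur.executemany("INSERT INTO toy VALUES (?, ?, ?)", rows)
conn.commit()

This yields exactly the (s,Ram,Aam), (t,Ram,Aam), (k,Ram,Aam) rows from the question.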
I'm making a database that takes user input and stores it. However, I want each row in the database (db) to be unique to each user and to contain separate input (that the users put in).
This is the code I have so far:
import sqlite3

user_id = random_number_genned_here
keyword = input_from_user
sqlite_file = 'keywords.sqlite'
conn = sqlite3.connect(sqlite_file)
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS keyword(userid TEXT UNIQUE, keyword TEXT)""")
try:
    c.execute("""INSERT INTO keyword (userid, keyword) VALUES (?, ?)""", (user_id, keyword))
except sqlite3.IntegrityError:
    pass  # where I need help
So basically I need to do 2 things.
First thing: I need to see if a userid is already in the database. The try/except does that. If it isn't in the database, then I need to create a row in the database for that userid and add the keyword into the keyword column.
If a userid is already in the database, then I need to add the keyword to the column.
Second thing: if the keyword column already has some text in it, then I need a column to store the new keyword in.
I have bits and pieces of it but I don't know how to put them together.
To add a column to a table you can use the ALTER TABLE SQL:
ALTER TABLE keyword ADD COLUMN your_column_definition
(see SQL As Understood By SQLite: ALTER TABLE).
You would have to generate the SQL programmatically.
However, it would be simpler to look at the design of the keyword table. Why have the userid as UNIQUE when there are multiple data items to be stored per userid? I'd suggest that matters would be simplified if you used a composite UNIQUE constraint, that is, made userid and keyword combined UNIQUE.
e.g. perhaps use :-
CREATE TABLE IF NOT EXISTS keyword(userid TEXT, keyword TEXT, UNIQUE(userid,keyword));
Perhaps consider the following demo :-
DROP TABLE IF EXISTS keyword;
CREATE TABLE IF NOT EXISTS keyword(userid TEXT, keyword TEXT, UNIQUE(userid,keyword));
INSERT OR IGNORE INTO keyword VALUES
('User001','TEST'),('User001','NOT A TEST'),('User001','KEYWORD A'),('User001','TEST'),
('User002','TEST'),('User002','KEYWORD A'),('User002','KEYWORD B')
;
-- Ooops (not really as duplicates just get ignored)
INSERT OR IGNORE INTO keyword VALUES
('User001','TEST'),('User001','NOT A TEST'),('User001','KEYWORD A'),('User001','TEST'),
('User002','TEST'),('User002','KEYWORD A'),('User002','KEYWORD B')
;
SELECT * FROM keyword;
SELECT * FROM keyword WHERE userid = 'User001';
When run, the message log shows :-
DROP TABLE IF EXISTS keyword
> OK
> Time: 0.439s
CREATE TABLE IF NOT EXISTS keyword(userid TEXT, keyword TEXT, UNIQUE(userid,keyword))
> OK
> Time: 0.108s
INSERT OR IGNORE INTO keyword VALUES
('User001','TEST'),('User001','NOT A TEST'),('User001','KEYWORD A'),('User001','TEST'),
('User002','TEST'),('User002','KEYWORD A'),('User002','KEYWORD B')
> Affected rows: 6
> Time: 0.095s
-- Ooops (not really as duplicates just get ignored)
INSERT OR IGNORE INTO keyword VALUES
('User001','TEST'),('User001','NOT A TEST'),('User001','KEYWORD A'),('User001','TEST'),
('User002','TEST'),('User002','KEYWORD A'),('User002','KEYWORD B')
> Affected rows: 0
> Time: 0s
SELECT * FROM keyword
> OK
> Time: 0s
SELECT * FROM keyword WHERE userid = 'User001'
> OK
> Time: 0s
Note that the second insert inserts 0 rows, as they are all duplicates.
The queries then produce the six distinct rows and, for the second query, the three rows belonging to 'User001'.
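Applied back to the Python in the question, a minimal sketch using the composite-UNIQUE table (INSERT OR IGNORE replaces the try/except; names mirror the question's code, the sample calls are made up):

import sqlite3

conn = sqlite3.connect("keywords.sqlite")
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS keyword
             (userid TEXT, keyword TEXT, UNIQUE(userid, keyword))""")

def add_keyword(user_id, keyword):
    # OR IGNORE silently skips duplicate (userid, keyword) pairs,
    # so no try/except is needed to detect an existing row.
    c.execute("INSERT OR IGNORE INTO keyword (userid, keyword) VALUES (?, ?)",
              (user_id, keyword))
    conn.commit()

add_keyword("User001", "TEST")
add_keyword("User001", "TEST")  # second call is ignored as a duplicate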
I have a Python program in which I want to read the odd rows from one table and insert them into another table. How can I achieve this?
For example, the first table has 5 rows in total, and I want to insert the first, third, and fifth rows into another table.
Note that the table may contain millions of rows, so performance is very important.
I found a few methods here. Here are two of them transcribed to psycopg2.
If you have a sequential primary key, you can just use mod on it:
database_cursor.execute('SELECT * FROM table WHERE mod(primary_key_column, 2) = 1')
Otherwise, you can use a subquery to get the row number and use mod:
database_cursor.execute('''SELECT col1, col2, col3
FROM (SELECT row_number() OVER () AS rnum, col1, col2, col3
FROM table) sub
WHERE mod(rnum, 2) = 1''')
If you have an id-type column that is guaranteed to increment by 1 upon every insert (kinda like an auto-increment index), you could always mod that to select the row. However, this would break when you begin to delete rows from the table you are selecting from.
A more complicated solution would be to use postgresql's row_number() function. The following assumes you have an id column that can be used to sort the rows in the desired order:
select r.*
from (select *, row_number() over(order by id) as rnum
from <tablename>
) r
where r.rnum % 2 = 1
Note: regardless of how you do it, this will never be truly efficient, as you necessarily have to do a full table scan, and selecting all columns of a table with millions of records via a full table scan is going to be slow.
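Since the stated goal is to insert those rows into another table, either variant can also run entirely server-side with INSERT ... SELECT, which at least avoids fetching millions of rows into Python. A sketch under that assumption (the connection string, target_table, source_table, and column names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

# Copy the 1st, 3rd, 5th, ... rows into target_table entirely server-side,
# without pulling any data into Python.
cur.execute('''INSERT INTO target_table (col1, col2, col3)
               SELECT col1, col2, col3
               FROM (SELECT row_number() OVER (ORDER BY id) AS rnum,
                            col1, col2, col3
                     FROM source_table) sub
               WHERE mod(rnum, 2) = 1''')
conn.commit()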
I am facing a performance problem in my code. I am making a db connection, running a select query, and then inserting into a table. Around 500 rows are populated by one select query. Before inserting, I am running the select query around 8-9 times first and then inserting everything using cursor.executemany. But it is taking 2 minutes to insert, which is not good. Any ideas?
def insert1(id, state, cursor):
    cursor.execute("select * from qwert where asd_id = %s", [id])
    if somecondition:
        adding.append(rd[i])
    cursor.executemany(indata, adding)
where rd[i] is an array used to build the records and indata is an insert statement
# prog starts here
cursor.execute("select * from assd")
for row in cursor.fetchall():
    if row[1] == 'aq':
        insert1(row[1], row[2], cursor)
    if row[1] == 'qw':
        insert2(row[1], row[2], cursor)
I don't really understand why you're doing this.
It seems that you want to insert a subset of rows from "assd" into one table, and another subset into another table?
Why not just do it with two SQL statements, structured like this:
insert into tab1 select * from assd where asd_id = 42 and cond1 = 'set';
insert into tab2 select * from assd where asd_id = 42 and cond2 = 'set';
That'd dramatically reduce your number of roundtrips to the database and your client-server traffic. It'd also be an order of magnitude faster.
Of course, I'd also strongly recommend that you specify your column names in both the insert and select parts of the code.
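As a sketch, using the cursor and connection already present in the question's code (the column names and the type_col filter column are placeholders, since the real schema isn't shown):

# Two set-based statements replace the whole per-row Python loop;
# the filtering that was done in Python moves into the WHERE clause.
cursor.execute("""insert into tab1 (col_a, col_b)
                  select col_a, col_b from assd where type_col = 'aq'""")
cursor.execute("""insert into tab2 (col_a, col_b)
                  select col_a, col_b from assd where type_col = 'qw'""")
connection.commit()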
I have to parse a very complex dump (whatever it is). I have done the parsing with Python. Since the parsed data is huge, I have to feed it into a database (SQL). I have also done this. Now I have to compare the data that is in SQL.
Actually, I have to compare the data of the 1st dump with the data of the 2nd dump. Both dumps have the same fields (attributes), but the values of their fields may be different, so I have to detect this change. For this, I have to do the comparison, but I don't have an idea of how to do all this using Python as my front end.
If you don't have MINUS or EXCEPT, there is also this, which will show all non-matching rows using a UNION/GROUP BY trick:
SELECT MAX(tbl), data1, data2
FROM (
SELECT 'foo1' AS tbl, foo1.data1, foo1.data2 FROM foo1
UNION ALL
SELECT 'foo2' AS tbl, foo2.data1, foo2.data2 FROM foo2
) AS X
GROUP BY data1, data2
HAVING COUNT(*) = 1
ORDER BY data1, data2
I have a general-purpose table-compare SP which can also do more complex table compares with left, right, and inner joins, a monetary threshold (or threshold percentage), and subset criteria.
Why not do the 'detect change' in SQL? Something like:
select foo.data1, foo.data2 from foo where foo.id = 'dump1'
minus
select foo.data1, foo.data2 from foo where foo.id = 'dump2'
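Driven from Python, either version is just one statement to execute and fetch. Here is a sketch using sqlite3, which supports EXCEPT (the foo table with its id discriminator column is taken from the example above; the file name is made up):

import sqlite3

conn = sqlite3.connect("dumps.db")  # hypothetical file holding both dumps
cur = conn.cursor()

# Rows present in dump1 but missing (or changed) in dump2.
# Swap the two SELECTs, or UNION both directions, for the full diff.
cur.execute("""SELECT data1, data2 FROM foo WHERE id = 'dump1'
               EXCEPT
               SELECT data1, data2 FROM foo WHERE id = 'dump2'""")
for row in cur.fetchall():
    print(row)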
I am running the following SQL code in Python:
SELECT
FIN AS 'LIN',
CUSIP,
Borrower_Name,
Alias,
DS_Maturity,
Spread,
Facility,
Facility_Size,
Log_date
FROM
[Main].[FacilityInformation]
WHERE
CUSIP IN ('00485GAC2', 'N1603LAD9')
OR (YEAR(DS_Maturity) in (2019,2024)
AND ((Borrower_Name LIKE 'Acosta Inc , Bright Bidco BV%'
OR Alias LIKE 'Acosta 9/14 (18.61) Non-Extended Cov-Lite, Lumileds 2/18 Cov-Lite%')))
It works perfectly when I have 3 or 4 borrower names or cusips or aliases, but I am trying to run this with dozens of possible values. I thought that following the same logic as IN ('{}') with LIKE '{}%' would work, but it doesn't. So I want efficient code, not something like:
SELECT * FROM table WHERE
column LIKE 'text1%'
OR column LIKE 'text2%'
.
.
.
OR column LIKE 'textn%'
This is fine if you know exactly how many values you have to introduce each time, but it is hard to do this 30 times or more, so it will be pretty bad for a large number of borrower names or cusips. It is not efficient. I hope it is clear what I am trying to ask.
You can join using VALUES and LIKE:
-- Test data
DECLARE #s TABLE (mytext NVARCHAR(20))
INSERT INTO #s VALUES ('abc'), ('def'), ('ghi')
-- Select
SELECT
s.mytext
FROM #s s
INNER JOIN (VALUES ('a%'),('%h%')) n(wildcard) ON s.mytext LIKE n.wildcard
Or you can do it by using a table:
DECLARE #s TABLE (mytext NVARCHAR(20))
DECLARE #t TABLE (wildcard NVARCHAR(20))
INSERT INTO #s VALUES ('abc'), ('def'), ('ghi')
INSERT INTO #t VALUES ('a%'), ('%h%')
SELECT s.mytext FROM #s s
INNER JOIN #t t ON s.mytext LIKE t.wildcard
Both give the result:
mytext
------
abc
ghi
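From Python, the VALUES join can be built with one parameter placeholder per pattern, so it scales to dozens of prefixes. A sketch assuming pyodbc and the table from the question (the connection details and the prefix list are made up):

import pyodbc

conn = pyodbc.connect("DSN=mydsn")  # hypothetical connection details
cur = conn.cursor()

prefixes = ["Acosta Inc%", "Bright Bidco BV%"]  # dozens work the same way

# One "(?)" placeholder per pattern, joined into the VALUES clause;
# the pattern strings themselves travel as bound parameters.
values = ",".join("(?)" for _ in prefixes)
sql = ("SELECT f.FIN AS LIN, f.CUSIP, f.Borrower_Name "
       "FROM [Main].[FacilityInformation] f "
       "INNER JOIN (VALUES " + values + ") n(wildcard) "
       "ON f.Borrower_Name LIKE n.wildcard")
cur.execute(sql, prefixes)
for row in cur.fetchall():
    print(row)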