Updating value in columns based on a column from another table - python

I am using pandasql to update or replace values in columns using an ID from another table.
I have two tables; the first is the one whose values I am trying to update or replace.
Table A
id   start_destination   end_destination
=========================================
1    3431                3010
2    3521                3431
3    3010                3521
Table B
destination_id   destination_name
=========================================
3010             NameA
3431             NameB
3521             NameC
I am trying to write an SQL query to create the following output
id   start_destination   end_destination
=========================================
1    NameB               NameA
2    NameC               NameB
3    NameA               NameC
I tried
update TableA
set start_destination = TableB.destination_name
from TableB
where TableB.destination_id = TableA.start_destination
But I was getting an error
(sqlite3.OperationalError) near "from": syntax error
In the real dataset, Table A has more than 10 columns, all of which I need to keep.
Also, if a start_destination or end_destination cannot be matched to a destination_id, the result should be null.

You may try joining table B to table A, twice:
SELECT
    a.id,
    b1.destination_name AS start_destination,
    b2.destination_name AS end_destination
FROM TableA a
LEFT JOIN TableB b1
    ON b1.destination_id = a.start_destination
LEFT JOIN TableB b2
    ON b2.destination_id = a.end_destination
ORDER BY
    a.id;
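
If you would rather stay in pandas instead of pandasql, here is a minimal sketch of the same idea (the DataFrame names df_a and df_b are mine, not from the question); unmatched ids become NaN, which covers the null requirement:

import pandas as pd

# Hypothetical DataFrames mirroring Table A and Table B
df_a = pd.DataFrame({
    "id": [1, 2, 3],
    "start_destination": [3431, 3521, 3010],
    "end_destination": [3010, 3431, 3521],
})
df_b = pd.DataFrame({
    "destination_id": [3010, 3431, 3521],
    "destination_name": ["NameA", "NameB", "NameC"],
})

# Build a destination_id -> destination_name lookup once, then apply it to
# both columns; ids with no match in Table B map to NaN (null).
name_map = df_b.set_index("destination_id")["destination_name"]
df_a["start_destination"] = df_a["start_destination"].map(name_map)
df_a["end_destination"] = df_a["end_destination"].map(name_map)

print(df_a)

Any extra columns in the real Table A are untouched, since only the two destination columns are reassigned.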

Related

split column into multi dynamically in python or sql

I'm trying to split the details column into multiple columns using T-SQL or Python.
the table is like this:
ID    Details
15    Hotel:Campsite;Message:Reservation inquiries
150   Page:45-discount-y;PageLink:https://xx.xx.net/SS/45-discount-y/|
13    NULL
There are a lot of keys (columns) inside Details, so I want a dynamic way to split it into multiple columns using Python or T-SQL.
The desired output:
ID    Details                                                            Hotel      Message                 Page            PageLink
15    Hotel:Campsite;Message:Reservation inquiries                       Campsite   Reservation inquiries   NULL            NULL
150   Page:45-discount-y;PageLink:https://xx.xx.net/SS/45-discount-y     NULL       NULL                    45-discount-y   https://xx.xx.net/SS/45-discount-y/|
13    NULL                                                               NULL       NULL                    NULL            NULL
First: split the Details data on ';' with string_split.
Second: split each resulting part on ':' with string_split, using replace to protect the ':' inside the PageLink URL.
Finally: use pivot to turn the key/value pairs into columns.
DECLARE @cols AS NVARCHAR(MAX), @scols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)
SET @query = '
;with cte as (
select Id,Details,valuesd,[1],replace( [2],''https//'',''https://'') as [2] from (
select * from (
select Id,Details,value as valuesd
from T
cross apply(
select *
from string_split(Details,'';'')
)d
)t
cross apply (
select RowN=Row_Number() over (Order by (SELECT NULL)), value
from string_split(replace( t.valuesd,''https://'',''https//''), '':'')
) d
) src
pivot (max(value) for src.RowN in([1],[2])) p
)
SELECT T.id,T.Details,Max([Hotel]) as [Hotel],Max([Message]) as Message,Max([Page]) as Page,Max([PageLink]) as PageLink from
(
select Id,Details, valuesd,[1],[2]
from cte
) x
pivot
(
max( [2]) for [1] in ([Hotel],[Message],[Page],[PageLink])
) p
right join T on p.id=T.id
group by T.id,T.Details
'
execute(@query)
You can insert the sample data with the following code:
create table T(id int,Details nvarchar(max))
insert into T
select 15,'Hotel:Campsite;Message:Reservation inquiries' union all
select 150,'Page:45-discount-y;PageLink:https://xx.xx.net/SS/45-discount-y/|' union all
select 13, null
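
If the Python side of the question is acceptable, here is a rough pandas sketch of the same dynamic split (the DataFrame name df and the parse_details helper are my own, not from the answer); splitting each pair on the first ':' only means the URL colon needs no replace trick:

import pandas as pd

# Sample data mirroring the question
df = pd.DataFrame({
    "ID": [15, 150, 13],
    "Details": [
        "Hotel:Campsite;Message:Reservation inquiries",
        "Page:45-discount-y;PageLink:https://xx.xx.net/SS/45-discount-y/|",
        None,
    ],
})

def parse_details(details):
    # Split 'key:value;key:value' into a dict, splitting each pair on the
    # first ':' only so URLs such as https://... keep their colon.
    if pd.isna(details):
        return {}
    pairs = (item.split(":", 1) for item in details.split(";") if ":" in item)
    return {key.strip(): value.strip() for key, value in pairs}

# Expand the parsed dicts into columns; whatever keys exist in Details
# become columns dynamically, and missing keys show up as NaN (NULL).
expanded = df["Details"].apply(parse_details).apply(pd.Series)
result = pd.concat([df, expanded], axis=1)
print(result)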

SQL updating a table, using information from another table, that has just been entered [duplicate]

I have two tables.
Here is my first table:
ID SUBST_ID CREATED_ID
1 031938 TEST123
2 930111 COOL123
3 000391 THIS109
4 039301 BRO1011
5 123456 COOL938
... ... ...
This is my second table,
ID SERIAL_ID BRANCH_ID
1 039301 NULL
2 000391 NULL
3 123456 NULL
... ... ...
I need to somehow update all rows within my second table using data from my first table.
It would need to do this all in one update query.
Where SUBST_ID and SERIAL_ID match, it needs to grab the CREATED_ID from the first table and put it into the second table's BRANCH_ID column.
So the second table would become the following,
ID SERIAL_ID BRANCH_ID
1 039301 BRO1011
2 000391 THIS109
3 123456 COOL938
... ... ...
Thank you for your help and guidance.
UPDATE TABLE2
JOIN TABLE1
ON TABLE2.SERIAL_ID = TABLE1.SUBST_ID
SET TABLE2.BRANCH_ID = TABLE1.CREATED_ID;
In addition to Tom's answer, if you need to repeat the operation frequently and want to save time, you can do:
UPDATE TABLE1
JOIN TABLE2
ON TABLE1.SUBST_ID = TABLE2.SERIAL_ID
SET TABLE2.BRANCH_ID = TABLE1.CREATED_ID
WHERE TABLE2.BRANCH_ID IS NULL
UPDATE TABLE2
JOIN TABLE1
ON TABLE1.SUBST_ID = TABLE2.SERIAL_ID
SET TABLE2.BRANCH_ID = TABLE1.CREATED_ID
WHERE TABLE2.BRANCH_ID IS NULL or TABLE2.BRANCH_ID='';
I think this should work
UPDATE secondTable
JOIN firstTable ON secondTable.SERIAL_ID = firstTable.SUBST_ID
SET BRANCH_ID = CREATED_ID
It is very simple to update using an INNER JOIN query in SQL. You can do it without using a FROM clause. Here is an example:
UPDATE customer_table c
INNER JOIN
employee_table e
ON (c.city_id = e.city_id)
SET c.active = "Yes"
WHERE c.city = "New york";
Using INNER JOIN:
UPDATE TABLE1
INNER JOIN TABLE2 ON TABLE1.SUBST_ID = TABLE2.SERIAL_ID
SET TABLE2.BRANCH_ID = TABLE1.CREATED_ID;
Another alternative solution is below. Here I am using a WHERE clause instead of a JOIN:
UPDATE
    TABLE1,
    TABLE2
SET
    TABLE2.BRANCH_ID = TABLE1.CREATED_ID
WHERE
    TABLE1.SUBST_ID = TABLE2.SERIAL_ID;
You can use this too:
update TABLE2 set BRANCH_ID = (select CREATED_ID from TABLE1 where TABLE1.SUBST_ID = TABLE2.SERIAL_ID)
but in my experience this approach is quite slow, so I do not recommend it!

Alternative to WITH RECURSIVE CLAUSE

Snowflake DB does not support the recursive WITH clause. I need help on how to achieve the query below, which works well in Teradata.
If anyone can also help me achieve this using Python, that would be great.
WITH RECURSIVE RECURTEMP(ID,KCODE,LVL)
AS(SELECT ID, MIN(KCODE) AS KCODE,1
FROM TABLE_A
GROUP BY 1
UNION ALL
SELECT b.ID, trim(a.KCODE)|| ';'||trim(b.KCODE), LVL+1
FROM TABLE_A a
INNER JOIN RECURTEMP b ON a.ID = b.ID AND a.KCODE > b.KCODE
)
SELECT * FROM RECURTEMP
Result: https://imgur.com/a/ppSRXeT
CREATE TABLE MYTABLE (
ID VARCHAR2(50),
KCODE VARCHAR2(50)
);
INSERT INTO MYTABLE VALUES ('ABCD','K10');
INSERT INTO MYTABLE VALUES ('ABCD','K53');
INSERT INTO MYTABLE VALUES ('ABCD','K55');
INSERT INTO MYTABLE VALUES ('ABCD','K56');
COMMIT;
OUTPUT as below
ID KCODE LEVEL
--------------------------------------
ABCD K10 1
ABCD K53;K10 2
ABCD K55;K10 2
ABCD K56;K10 2
ABCD K55;K53;K10 3
ABCD K56;K53;K10 3
ABCD K56;K55;K10 3
ABCD K56;K55;K53;K10 4
Recursive WITH is now supported in Snowflake.
Your query
WITH RECURSIVE RECURTEMP(ID,KCODE,LVL) AS(
SELECT
ID,
MIN(KCODE) AS KCODE,
1
FROM
TABLE_A
GROUP BY
1
UNION ALL
SELECT
b.ID,
trim(a.KCODE) || ';' || trim(b.KCODE) AS KCODE,
LVL+1
FROM
TABLE_A a
INNER JOIN RECURTEMP b ON (a.ID = b.ID AND a.KCODE > b.KCODE)
)
SELECT * FROM RECURTEMP
Link to the documentation: https://docs.snowflake.net/manuals/user-guide/queries-cte.html#overview-of-recursive-cte-syntax
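
For the Python part of the question, here is a rough sketch that reproduces the sample output outside the database (the grouping and combinations logic is my own reading of the recursive query, not from the answer): each recursive step prepends a strictly larger KCODE, so every result is a subset of the non-minimum codes joined in descending order in front of the minimum code.

from itertools import combinations

# Hypothetical rows mirroring MYTABLE (ID, KCODE)
rows = [("ABCD", "K10"), ("ABCD", "K53"), ("ABCD", "K55"), ("ABCD", "K56")]

# Group KCODEs per ID
codes_by_id = {}
for id_, kcode in rows:
    codes_by_id.setdefault(id_, []).append(kcode.strip())

results = []  # (ID, KCODE, LVL)
for id_, codes in codes_by_id.items():
    anchor = min(codes)                      # anchor row: MIN(KCODE), level 1
    others = sorted(c for c in codes if c != anchor)
    results.append((id_, anchor, 1))
    for size in range(1, len(others) + 1):
        for combo in combinations(others, size):
            kcode = ";".join(sorted(combo, reverse=True) + [anchor])
            results.append((id_, kcode, size + 1))

for row in sorted(results, key=lambda r: (r[0], r[2], r[1])):
    print(*row)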

mysql millions of rows data import from one table to another table

I have a table with two million rows. Each row has a body column that stores data in JSON format. For example:
table_a:
id user_id body
1 1 {'tel': '13678031283', 'email': 'test#gmail.com', 'name': 'test'....}
2 2 {'tel' : '1567827126', 'age': '16'....}
......
I have another table, named table_b:
table_b:
id user_id tel email name
1 1 13678019 test#qq.com test1
2 2 15627378 test1#qq.com test2
.....
table_a has 2 million rows; I want to import all of its data into table_b, and each row of table_a needs to be processed.
I want to deal with it like this:
for row in table_a_rows:
    result = process(row)
    insert result to table_b
    .....
But I don't think this is a good idea. Is there a better way to do it?
You can select the data you need from table_a directly with JSON_EXTRACT. For example, getting the email would be something like this:
mysql> SELECT JSON_EXTRACT(body, '$.email') from table_a;
So you could replace directly into table_b all the data you have in table_a:
mysql> REPLACE INTO table_b (user_id, tel, email, name)
       SELECT user_id,
              JSON_EXTRACT(body, '$.tel'),
              JSON_EXTRACT(body, '$.email'),
              JSON_EXTRACT(body, '$.name')
       FROM table_a;
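
If you still want to drive this from Python, here is a minimal sketch using pymysql (the connection details are placeholders, and any MySQL driver would do); the point is that the transformation runs entirely on the server as one set-based statement instead of looping over two million rows in Python:

import pymysql

# Placeholder connection details
connection = pymysql.connect(host="localhost", user="user",
                             password="password", database="mydb")

sql = """
    REPLACE INTO table_b (user_id, tel, email, name)
    SELECT user_id,
           JSON_EXTRACT(body, '$.tel'),
           JSON_EXTRACT(body, '$.email'),
           JSON_EXTRACT(body, '$.name')
    FROM table_a
"""

try:
    with connection.cursor() as cursor:
        # One statement; no per-row round trips from Python
        cursor.execute(sql)
    connection.commit()
finally:
    connection.close()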

Loop Through First Column (Attribute) of Each Table from SQLite Database with Python

I'm writing a utility to help analyze SQLite database consistencies using Python. Manually I've found some inconsistencies so I thought it would be helpful if I could do these in bulk to save time. I set out to try this in Python and I'm having trouble.
Let's say I connect, create a cursor, and run a query or two and end up with a result set that I want to iterate through. I want to be able to do something like this (list of tables each with an ID as the primary key):
# python/pseudocode of what I'm looking for
for table in tables:
    for pid in pids:
        query = 'SELECT %s FROM %s' % (pid, table)
        result = connection.execute(query)
        for r in result:
            print r
And that would yield a list of IDs from each table in the table list. I'm not sure if I'm even close.
The problem here is that some tables have a primary key called ID while others use TABLE_ID, etc. If they were all named ID, I could select the IDs from each table, but they're not. This is why I was hoping to find a query that would let me select only the first column, or the primary key, of each table.
To get the columns of a table, execute PRAGMA table_info as a query.
The result's pk column shows which column(s) are part of the primary key:
> CREATE TABLE t(
> other1 INTEGER,
> pk1 INTEGER,
> other2 BLOB,
> pk2 TEXT,
> other3 FLUFFY BUNNIES,
> PRIMARY KEY (pk1, pk2)
> );
> PRAGMA table_info(t);
cid name type notnull dflt_value pk
--- ------ -------------- ------- ---------- --
0 other1 INTEGER 0 0
1 pk1 INTEGER 0 1
2 other2 BLOB 0 0
3 pk2 TEXT 0 2
4 other3 FLUFFY BUNNIES 0 0
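
Putting that together with the pseudocode from the question, here is a minimal Python sketch (the file name example.db is a placeholder): it lists the tables from sqlite_master, asks PRAGMA table_info for the primary-key column of each, and falls back to the first column when no key is declared.

import sqlite3

connection = sqlite3.connect("example.db")

# User tables, skipping SQLite's internal tables
tables = [row[0] for row in connection.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]

for table in tables:
    # PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk);
    # pk > 0 marks primary-key columns (1, 2, ... for composite keys).
    columns = connection.execute("PRAGMA table_info(%s)" % table).fetchall()
    pk_columns = [name for _, name, _, _, _, pk in columns if pk > 0]
    id_column = pk_columns[0] if pk_columns else columns[0][1]

    for (value,) in connection.execute("SELECT %s FROM %s" % (id_column, table)):
        print(table, value)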
