Python sqlite3: generate a unique identifier

I am trying to add values to a 'pending application' table. This is what I have so far:

appdata = (ID, UNIQUE_IDENTIFIER, time.strftime("%d/%m/%Y"),
           self.amount_input.get(), self.why_input.get())
self.c.execute('INSERT INTO Pending VALUES (?,?,?,?,?)', appdata)
self.conn.commit()
I need to set a value for 'UNIQUE_IDENTIFIER', which is the primary key in a SQLite database.
How can I generate a unique number for this value?
CREATE TABLE Pending (
    ID          STRING REFERENCES StaffTable (ID),
    PendingID   STRING PRIMARY KEY,
    RequestDate STRING,
    Amount      TEXT,
    Reason      TEXT
);

There are two ways to do that.

1. In Python you can use the uuid module. For example:

>>> import uuid
>>> str(uuid.uuid4()).replace('-', '')
'5f202bf198e24242b6a11a569fd7f028'

Note: there is a (vanishingly small) chance of generating the same string twice, so check whether a row with that primary key already exists before saving. Each call to uuid.uuid4() returns a new random UUID.
For example:

import uuid

new_id = str(uuid.uuid4()).replace('-', '')
cursor.execute("SELECT * FROM Pending WHERE PendingID = ?", (new_id,))
data = cursor.fetchall()
if len(data) == 0:
    # No row with this ID exists, so save the new object
    ...
else:
    # Collision: generate a new ID and try again
    ...
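Wrapped in a small helper that retries until it finds an unused ID, a minimal sketch (assuming an open sqlite3 cursor) could look like this:

import uuid

def new_pending_id(cursor):
    """Generate candidate IDs until one is not already in the table."""
    while True:
        candidate = str(uuid.uuid4()).replace('-', '')
        cursor.execute("SELECT 1 FROM Pending WHERE PendingID = ?", (candidate,))
        if cursor.fetchone() is None:
            return candidate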
2. In SQLite, make a composite primary key. Per the SQLite documentation:

CREATE TABLE Pending (
    column1,
    column2,
    column3,
    PRIMARY KEY (column1, column2)
);

Uniqueness of the (column1, column2) pair is then enforced by the primary key itself; you can also declare it explicitly through UNIQUE(column1, column2).
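As an illustration of that constraint in action (a minimal sketch using an in-memory database):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE Pending (
                    column1, column2, column3,
                    PRIMARY KEY (column1, column2))''')
conn.execute("INSERT INTO Pending VALUES ('a', 'b', 1)")
try:
    conn.execute("INSERT INTO Pending VALUES ('a', 'b', 2)")  # same key pair
except sqlite3.IntegrityError as e:
    print(e)  # e.g. UNIQUE constraint failed: Pending.column1, Pending.column2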

Related

Insert unique auto-increment ID on python to sql?

I'd like to insert an Order_ID to make each row unique, using Python and pyodbc with SQL Server. Currently, my code is:

name = input("Your name")

def connectiontoSQL(order_id, name):
    query = f'''\
insert into order (Order_ID, Name)
values('{order_id}','{name}')'''
    return execute_query_commit(conn, query)
If my table in the SQL database is empty, I'd like it to increment the Order_ID by 1 every time I execute. How should I code order_id in Python so that it automatically creates the first Order_ID as OD001 and, on the next execution, OD002?
You can create an INT identity column as your primary key and add a computed column that holds the order number you display in your application.
create table Orders
(
    [OrderId] [int] IDENTITY(0,1) NOT NULL,
    [OrderNumber] as 'OD' + right('00000' + cast(OrderId as varchar(6)), 6),
    [OrderDate] date,
    PRIMARY KEY CLUSTERED ([OrderId] ASC)
)
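To tie this back to the Python side, here is a sketch of a parameterized insert with pyodbc (the connection string is a placeholder), which also avoids interpolating values into the SQL string the way the original code does:

import pyodbc

# Placeholder connection string; adjust driver/server/database for your setup.
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes')
cur = conn.cursor()

# The identity column fills itself in and OrderNumber is computed from it,
# so only the remaining columns need values.
cur.execute("INSERT INTO Orders (OrderDate) VALUES (?)", ('2024-01-01',))
conn.commit()

# Read back the generated order number, e.g. 'OD000000' for the first row.
cur.execute("SELECT TOP 1 OrderNumber FROM Orders ORDER BY OrderId DESC")
print(cur.fetchone()[0])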

Insert list as singular value in postgresql

I use a SQLAlchemy engine to insert data into a PostgreSQL table. I want to insert a list into one row, as if the list were a single string holding many values.
query = text('INSERT INTO table (list_id, list_name) VALUES ({}, {}) RETURNING id'.format(my_list, 'list_name'))
result_id = self.engine.execute(query)
When I try to execute my code I receive this error:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "["
LINE 1: ...INTO table (list_id, list_name) VALUES (['str1... ^
[SQL: INSERT INTO table (list_id, list_name) VALUES (['str1', 'str1', 'str1'], 'list_name') RETURNING id]

I tried representing my list as str(my_list), but the result was the same. I also tried str(['str1', 'str1', 'str1']).replace('[', '{').replace(']', '}').
My table definition:

CREATE TABLE api_services (
    id SERIAL PRIMARY KEY,
    list_id VARCHAR,
    list_name VARCHAR(255) NOT NULL
);
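One way to avoid the syntax error (a sketch, assuming SQLAlchemy 2.0-style execution and the table above): bind the values as parameters instead of formatting the Python list into the SQL string, and join the list into a single string yourself, since list_id is a VARCHAR column and psycopg2 would otherwise adapt a Python list to a PostgreSQL ARRAY.

from sqlalchemy import create_engine, text

# Placeholder connection URL; adjust for your setup.
engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')

my_list = ['str1', 'str1', 'str1']

stmt = text('INSERT INTO api_services (list_id, list_name) '
            'VALUES (:list_id, :list_name) RETURNING id')

with engine.begin() as conn:
    result_id = conn.execute(
        stmt, {'list_id': ','.join(my_list), 'list_name': 'some name'}
    ).scalar()
print(result_id)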

On conflict, change insert values

Given the schema
CREATE TABLE `test` (
    `name` VARCHAR(255) NOT NULL,
    `text` TEXT NOT NULL,
    PRIMARY KEY(`name`)
)
I would like to insert new data in such a way that if a given name exists, the name I am trying to insert is changed. I've checked the SQLite docs, and all I could find is INSERT OR REPLACE, which would change the text of the existing name instead of creating a new element.
The only solution I can think of is
def merge_or_edit(curr, *data_tuples):
    SELECT = """SELECT COUNT(1) FROM `test` WHERE `name`=?"""
    INSERT = """INSERT INTO `test` (`name`, `text`) VALUES (?, ?)"""
    to_insert = []
    for t in data_tuples:
        while curr.execute(SELECT, (t[0],)).fetchone()[0] == 1:
            t = (t[0] + "_", t[1])
        to_insert.append(t)
    curr.executemany(INSERT, to_insert)
But this solution is extremely slow for large data sets (and will crash if renaming pushes a name past 255 characters).
What I would like to know is whether this functionality is even possible using raw SQLite code.
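For what it's worth, the per-row SELECT can be avoided by loading the existing names once and resolving collisions in Python; a sketch, assuming the table fits in memory:

def merge_or_edit_fast(curr, *data_tuples):
    # One query instead of one per row; the set also catches collisions
    # between rows inside the same batch, which the original loop misses.
    existing = {row[0] for row in curr.execute("SELECT name FROM test")}
    to_insert = []
    for name, text_ in data_tuples:
        while name in existing:
            name += "_"
        existing.add(name)
        to_insert.append((name, text_))
    curr.executemany("INSERT INTO test (name, text) VALUES (?, ?)", to_insert)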

Adding dict object to postgresql

So I am using psycopg2 on Python 3.5 to insert some data into a PostgreSQL database. What I would like is two columns that are strings, with the last column just a dict object. I don't need to search the dict, just to pull it out of the database and use it.
so for instance:
uuid = "testName"
otherString = ""
dict = {'id':'122','name':'test','number':'444-444-4444'}
# add code here to store two strings and dict to postgresql
cur.execute('''SELECT dict FROM table where uuid = %s''', 'testName')
newDict = cur.fetchone()
print(newDict['number'])
Is this possible, and if so how would I go about doing this?
If your PostgreSQL version is sufficiently new (9.4+), your psycopg2 version is >= 2.5.4, all the keys are strings, and the values can be represented as JSON, it would be best to store this in a JSONB column. Then, should the need arise, the column would be searchable too. Just create the table simply as
CREATE TABLE thetable (
    uuid TEXT,
    dict JSONB
);
(... and naturally add indexes, primary keys etc as needed...)
When sending the dictionary to PostgreSQL you just need to wrap it with the Json adapter; when receiving from PostgreSQL, the JSONB value is automatically converted into a dictionary. Inserting thus becomes
from psycopg2.extras import Json, DictCursor

cur = conn.cursor(cursor_factory=DictCursor)
cur.execute('INSERT INTO thetable (uuid, dict) VALUES (%s, %s)',
            ['testName', Json({'id': '122', 'name': 'test', 'number': '444-444-4444'})])
and selecting would be as simple as

cur.execute('SELECT dict FROM thetable WHERE uuid = %s', ['testName'])
row = cur.fetchone()
print(row['dict'])            # it's now a dictionary with all the keys restored
print(row['dict']['number'])  # the value of the 'number' key
With JSONB, PostgreSQL can store the values more efficiently than by just dumping the dictionary as text. Additionally, it becomes possible to query the data, for example to select only some of the fields from the JSONB column:
>>> cur.execute("SELECT dict->>'id', dict->>'number' FROM thetable")
>>> cur.fetchone()
['122', '444-444-4444']
or you could use the fields in queries if needed:

>>> cur.execute("SELECT uuid FROM thetable WHERE dict->>'number' = %s",
...             ['444-444-4444'])
>>> cur.fetchall()
[['testName']]
Alternatively, you can serialize the data to JSON before storing it:

import json

data = json.dumps({'id': '122', 'name': 'test', 'number': '444-444-4444'})

Then, when retrieving it, you deserialize:

cur.execute('SELECT dict FROM ....')
res = cur.fetchone()
dict = json.loads(res['dict'])
print(dict['number'])
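A full round trip under that approach might look like the following sketch, assuming dict is a plain TEXT column and the default psycopg2 cursor (which returns tuples, so the value is accessed by position):

import json

data = {'id': '122', 'name': 'test', 'number': '444-444-4444'}

# Store: serialize the dict to a JSON string.
cur.execute('INSERT INTO thetable (uuid, dict) VALUES (%s, %s)',
            ('testName', json.dumps(data)))

# Retrieve: deserialize the string back into a dict.
cur.execute('SELECT dict FROM thetable WHERE uuid = %s', ('testName',))
res = cur.fetchone()
restored = json.loads(res[0])  # res is a tuple with the default cursor
print(restored['number'])      # 444-444-4444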

MySQL and Python is generating a duplicate entry error I cannot resolve

I'm new to MySQL but have a solid foundation in Python. After researching this extensively over the last two days, including reading many Stack Overflow questions and answers, I still haven't been able to resolve the issue, so any help specific to this problem would be appreciated. UPDATE: the error is posted below.
I am trying to create a database which retrieves daily price data from Yahoo and inputs the data into the corresponding table.
The MySQL tables and database were created using MySQL Workbench 6.1. I'm using the Python 2.7 Anaconda distribution on Windows 8.1 64-bit.
Here is the MySQL table:
-- -----------------------------------------------------
-- Table `securities_master_00`.`daily_price`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `securities_master_00`.`daily_price` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `data_vendor_id` INT NOT NULL,
  `symbol_id` INT NOT NULL,
  `price_date` DATETIME NOT NULL,
  `created_date` DATETIME NOT NULL,
  `last_updated_date` DATETIME NOT NULL,
  `open_price` DECIMAL(19,4) NULL DEFAULT NULL,
  `high_price` DECIMAL(19,4) NULL DEFAULT NULL,
  `low_price` DECIMAL(19,4) NULL DEFAULT NULL,
  `close_price` DECIMAL(19,4) NULL DEFAULT NULL,
  `adj_close_price` DECIMAL(19,4) NULL DEFAULT NULL,
  `volume` BIGINT NULL DEFAULT NULL,
  INDEX `index_data_vendor_id_idx` (`data_vendor_id` ASC),
  PRIMARY KEY (`id`),
  INDEX `index_symbol_id_idx` (`symbol_id` ASC),
  CONSTRAINT `index_data_vendor_id`
    FOREIGN KEY (`data_vendor_id`)
    REFERENCES `securities_master_00`.`data_vendor` (`id`)
    ON DELETE NO ACTION
    ON UPDATE CASCADE,
  CONSTRAINT `index_symbol_id`
    FOREIGN KEY (`symbol_id`)
    REFERENCES `securities_master_00`.`symbol` (`id`)
    ON DELETE NO ACTION
    ON UPDATE CASCADE)
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8;
Here is the corresponding Python code that generates the error:
def insert_daily_data_into_db(data_vendor_id, symbol_id, daily_data):
    '''
    Takes a list of tuples of daily data and adds it to the MySQL database.
    Appends the vendor ID and symbol ID to the data.

    daily_data: List of tuples of the OHLC data (with adj_close and volume)
    '''
    # Create the time now
    now = datetime.datetime.utcnow()

    # Amend the data to include the vendor ID and symbol ID
    daily_data = [(data_vendor_id, symbol_id, d[0], now, now,
                   d[1], d[2], d[3], d[4], d[5], d[6]) for d in daily_data]

    # Create the insert strings
    column_str = '''data_vendor_id, symbol_id, price_date, created_date,
                    last_updated_date, open_price, high_price, low_price,
                    close_price, volume, adj_close_price'''
    insert_str = ('%s, ' * 11)[:-2]
    final_str = 'INSERT INTO daily_price (%s) VALUES (%s)' % (column_str, insert_str)

    # Using the MySQL connection, carry out an INSERT INTO for every symbol
    with con:
        cur = con.cursor()
        cur.executemany(final_str, daily_data)


if __name__ == '__main__':
    # Loop over the tickers and insert the daily historical data into the database
    tickers = obtain_list_of_db_tickers()
    for t in tickers:
        print 'Adding data for %s' % t[1]
        yf_data = get_daily_historic_data_yahoo(t[1])
        # I believe the error is here, relating to the data vendor ID,
        # but am unclear on the method to solve the problem
        insert_daily_data_into_db('1', t[0], yf_data)
ERROR CODE:
Adding data for ABT
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Owner\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
execfile(filename, namespace)
File "C:/Users/Owner/Documents/Python Scripts/price_retrieval_mine.py", line 99, in <module>
insert_daily_data_into_db('1', t[0], yf_data)
File "C:/Users/Owner/Documents/Python Scripts/price_retrieval_mine.py", line 91, in insert_daily_data_into_db
cur.executemany(final_str, daily_data)
File "C:\Users\Owner\Anaconda\lib\site-packages\MySQLdb\cursors.py", line 262, in executemany
r = self._query('\n'.join([query[:p], ',\n'.join(q), query[e:]]))
File "C:\Users\Owner\Anaconda\lib\site-packages\MySQLdb\cursors.py", line 354, in _query
rowcount = self._do_query(q)
File "C:\Users\Owner\Anaconda\lib\site-packages\MySQLdb\cursors.py", line 318, in _do_query
db.query(q)
_mysql_exceptions.IntegrityError: (1062, "Duplicate entry '1' for key 'data_vendor_id_UNIQUE'")
UPDATED: output of SHOW INDEXES FROM securities_master_00.daily_price:

Table        Non_unique  Key_name                  Seq_in_index  Column_name     Collation  Cardinality  Sub_part  Packed  Null  Index_type
daily_price  0           PRIMARY                   1             id              A          2            NULL      NULL          BTREE
daily_price  0           data_vendor_id_UNIQUE     1             data_vendor_id  A          2            NULL      NULL          BTREE
daily_price  0           symbol_id_UNIQUE          1             symbol_id       A          2            NULL      NULL          BTREE
daily_price  1           index_data_vendor_id_idx  1             data_vendor_id  A          2            NULL      NULL          BTREE
daily_price  1           index_symbol_id_idx       1             symbol_id       A          2            NULL      NULL          BTREE
As the SHOW INDEXES statement result indicates, there are five indexes on your table, though only three are declared in your CREATE TABLE statement. The two extra indexes are UNIQUE indexes on your foreign key columns, which is a problem because you need to have a many-to-one relationship between the daily_price table and either of the data_vendor and symbol tables. This reflects the fact that many prices will be generated by the same vendor and, over some period of time, for the same symbols.
You need to DROP both of these extra indexes - or alternatively, DROP the daily_price table and recreate it using the table definition that you posted in this question - in order to stop throwing an IntegrityError when you try to insert rows into the table.
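As a sketch of that fix, assuming the index names reported by SHOW INDEXES above and the same MySQLdb connection con used in your script:

# Drop the two stray UNIQUE indexes so repeated vendor/symbol IDs are allowed.
with con:
    cur = con.cursor()
    cur.execute("""ALTER TABLE daily_price
                       DROP INDEX data_vendor_id_UNIQUE,
                       DROP INDEX symbol_id_UNIQUE""")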
Put more plainly, the data_vendor_id_UNIQUE index on the table prevents you from ever having two rows in that table with the same data_vendor_id. Since every row you insert with data from Yahoo has data_vendor_id = 1, according to the last line of your Python code (presumably this corresponds with Yahoo's entry in the data_vendor table), the second row you try to insert violates the unique constraint of that index and produces the error you see here.
It would be a good idea for you to try to figure out where these extra indexes came from, especially if you're working with someone else on this project or using someone else's code. It's possible there are less obvious problems hiding behind this error.
Finally, it will be well worth your time to learn about indexes, how they work, and when to use them, if you plan to do any serious work with MySQL. You should try to become familiar with statements like SHOW INDEXES and especially EXPLAIN when it comes to query execution plans, so that you can diagnose errors like this one quickly and easily.
