I'm trying to store a dictionary in a SQLite database (Python), then pull it from the DB and use it like any other dict in Python, but it doesn't seem to work.
I've tried using json.dumps() and still no success.
This is my table:
cursor.execute("""
CREATE TABLE IF NOT EXISTS users(
email TEXT PRIMARY KEY NOT NULL,
password TEXT NOT NULL,
ip_dict VARIANT NOT NULL);
""")
and I would like to insert a dict to the "ip_dict" column, for instance:
ip_dict = {"pc1": "192.168.1.2", "router": "192.168.1.1"}
I tried doing:
ip_dict = json.dumps(ip_dict)
cursor.execute(f"""
INSERT INTO users(email,password,ip_dict)
VALUES("example@gmail.com", "123456789", {ip_dict})
""")
but no success.
Thanks
First, I would not overwrite the dictionary with its string (i.e. JSON) representation; I would use a separate variable just to keep things straight. Since you now have a string representation and you want to store that value in a TEXT column, nothing further needs to be done. The actual problem is the f-string: interpolating the JSON text directly into the SQL places it there unquoted, so the dictionary's braces end up as invalid SQL syntax. Pass the value as a parameter instead:
ip_dict_json = json.dumps(ip_dict)
cursor.execute("""
INSERT INTO users(email,password,ip_dict)
VALUES('example@gmail.com', '123456789', ?)
""", (ip_dict_json,))
Note that I am using single quotes for string literals, as this is the more common and portable convention, and that the string value of the dictionary is passed as a parameter to a prepared statement.
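To get a dict back when you read the row, run the stored TEXT through json.loads(). A minimal self-contained sketch, using an in-memory database and the table layout from the question (with ip_dict declared as TEXT):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS users(
email TEXT PRIMARY KEY NOT NULL,
password TEXT NOT NULL,
ip_dict TEXT NOT NULL);
""")

ip_dict = {"pc1": "192.168.1.2", "router": "192.168.1.1"}
cursor.execute("INSERT INTO users VALUES (?, ?, ?)",
               ("example@gmail.com", "123456789", json.dumps(ip_dict)))

row = cursor.execute("SELECT ip_dict FROM users WHERE email = ?",
                     ("example@gmail.com",)).fetchone()
restored = json.loads(row[0])  # back to a regular dict
print(restored["router"])  # 192.168.1.1
```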
Update
If you want to get more sophisticated, you could declare the column with a custom type such as dictionary and then register an adapter and converter for this type (note that the connection must be opened with detect_types=sqlite3.PARSE_DECLTYPES so that the converter is actually applied):
cursor.execute("""
CREATE TABLE IF NOT EXISTS users(
email TEXT PRIMARY KEY NOT NULL,
password TEXT NOT NULL,
ip_dict dictionary NOT NULL);
""")
import json
import sqlite3

# requires connecting with detect_types=sqlite3.PARSE_DECLTYPES
sqlite3.register_adapter(dict, lambda d: json.dumps(d).encode('utf8'))
sqlite3.register_converter("dictionary", lambda d: json.loads(d.decode('utf8')))
cursor.execute("""
INSERT INTO users(email,password,ip_dict)
VALUES('example@gmail.com', '123456789', ?)
""", (ip_dict,))  # passing a dictionary, not a string
cursor.execute('SELECT * FROM users')
rows = cursor.fetchall()
for row in rows:
    print(row[2])  # this is a dictionary, not a string
Related
import sqlite3
conn = sqlite3.connect('carlist.db')
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS carlist(
ID INTEGER PRIMARY KEY AUTOINCREMENT,
Brand text NOT NULL,
Model text NOT NULL,
"License Plate" text NOT NULL,
Year INTEGER)""")
I'm trying to make an auto-incrementing ID with a character prefix when I add new data.
Example IDs: AFK000001, AFK000002, AFK000003, etc. This should be an automatic field.
If you are using SQLite 3.31.0+ you can define a generated column:
aID TEXT GENERATED ALWAYS AS ('AFK' || SUBSTR('000000' || ID, -6))
This column as it is defined will not be stored in the table, but will be generated every time you query the table.
If you want it to be stored then you can add the keyword STORED at the end of its definition.
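As a sketch of how this behaves (in-memory database, simplified columns; requires a Python build bundling SQLite 3.31.0+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS carlist(
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    Brand TEXT NOT NULL,
    aID TEXT GENERATED ALWAYS AS ('AFK' || SUBSTR('000000' || ID, -6)))""")
c.execute("INSERT INTO carlist(Brand) VALUES ('Toyota')")
c.execute("INSERT INTO carlist(Brand) VALUES ('Honda')")
for row in c.execute("SELECT aID, Brand FROM carlist ORDER BY ID"):
    print(row)  # ('AFK000001', 'Toyota') then ('AFK000002', 'Honda')
```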
AUTOINCREMENT works only with integer columns, and there is no way to get this prefix automatically. However, you can write a small script to achieve it:
Store the last ID
Get the integer substring
Increment its value and prepend the character substring
Store it in the table
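The steps above can be sketched in Python. The three-letter prefix and six-digit width are assumptions based on the example IDs in the question:

```python
def next_id(last_id, prefix_len=3, width=6):
    """Split an ID like 'AFK000001' into prefix and number,
    increment the number, and zero-pad it back together."""
    prefix = last_id[:prefix_len]
    number = int(last_id[prefix_len:]) + 1
    return f"{prefix}{number:0{width}d}"

print(next_id("AFK000001"))  # AFK000002
print(next_id("AFK000099"))  # AFK000100
```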
In a Python script, I'm generating a scrypt hash using a salt made up of data from os.urandom and I would like to save these in a MySQL table. If I attempt to use the standard method I've seen used for efficiently storing hashes in a database, using a CHAR column, I get "Incorrect string value:" errors for both the hash and the salt. The only data type I've been able to find that allows the random data is blob, but since blobs are stored outside the table they have obvious efficiency problems.
What is the proper way to do this? Should I do something to the data prior to INSERTing it into the db to massage it into being accepted by CHAR? Is there another MySQL datatype that would be more appropriate for this?
Edit:
Someone asked for code, so, when I do this:
salt = os.urandom(255)
hash = scrypt.hash(password,salt,1<<15,8,1,255)
cursor.execute("INSERT INTO users (email,hash,salt) values (%s,%s,%s)", [email,hash,salt])
MySQL gives me the "Incorrect string value" errors when I attempt to insert these values.
Edit 2:
As per Joran's request, here is the schema that doesn't like this:
CREATE TABLE `users` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(254) NOT NULL DEFAULT '',
`hash` char(255) NOT NULL,
`salt` char(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Your hash is a binary value that will most likely contain "unprintable characters" if it is interpreted as a string. To store arbitrary binary data, use the BINARY or VARBINARY data type.
If you have to use a string datatype, you can base64-encode the data to turn it into an ASCII string.
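For example, a base64 round trip keeps the raw bytes intact while producing ASCII that fits a CHAR/VARCHAR column (the 16-byte salt size here is illustrative, not taken from the question):

```python
import base64
import os

salt = os.urandom(16)                             # raw binary data
encoded = base64.b64encode(salt).decode('ascii')  # ASCII-safe for a CHAR column
# ... store `encoded` in the database, read it back later ...
decoded = base64.b64decode(encoded)
assert decoded == salt  # the round trip is lossless
```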
I created a table with 4 columns:
cur.execute("""CREATE TABLE videoinfo (
id INT UNSIGNED PRIMARY KEY AUTO INCREMENT,
date DATETIME NOT NULL,
src_ip CHAR(32),
hash CHAR(150));
""")
I have a .txt file which has three columns of data inside. I want to use the LOAD DATA LOCAL INFILE command to insert the data, but the problem is that the table I created has four columns, the first one being the id. Can MySQL automatically insert the data starting from the second column, or is an extra command needed?
Many thanks!
AUTO INCREMENT isn't valid syntax. If you check MySQL's documentation for the CREATE TABLE statement, you'll see the proper keyword is AUTO_INCREMENT.
Additionally, date is a keyword, so you'll need to quote it with backticks, as mentioned on the MySQL identifier documentation page. The documentation also lists all keywords, which must be quoted to use them as identifiers. To be safe, you could simply quote all identifiers.
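Putting both fixes together, the corrected statement might look like this (a sketch; adjust column types to your data):

```sql
CREATE TABLE videoinfo (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    `date` DATETIME NOT NULL,
    src_ip CHAR(32),
    hash CHAR(150)
);
```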
To insert data only into some columns, you can explicitly specify columns. For LOAD DATA INFILE:
LOAD DATA INFILE 'file_name'
INTO TABLE videoinfo
(`date`, src_ip, hash)
For the INSERT statement:
INSERT INTO videoinfo (`date`, src_ip, hash)
VALUES (...);
This, too, is revealed in the MySQL manual. Notice a pattern?
I'm using the MySQLdb package for interacting with MySQL. I'm having trouble getting the proper type conversions.
I am using a 16-byte binary uuid as a primary key for the table and have a mediumblob holding zlib compressed json information.
I'm using the following schema:
CREATE TABLE repositories (
added_id int auto_increment not null,
id binary(16) not null,
data mediumblob not null,
create_date int not null,
update_date int not null,
PRIMARY KEY (added_id),
UNIQUE(id)
) DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ENGINE=InnoDB;
Then I create a new row in the table using the following code:
data = zlib.compress(json.dumps({'hello': 'how are you :D'}))
row_id = uuid.uuid4().hex
added_id = cursor.execute(
    'INSERT INTO repositories (id, data, create_date, update_date) '
    'VALUES (%s, %s, %s, %s)',
    (binascii.a2b_hex(row_id), data, time.time(), time.time())
)
Then to retrieve data I use a similar query:
query = cursor.execute(
    'SELECT added_id, id, data, create_date, update_date '
    'FROM repositories WHERE id = %s',
    (binascii.a2b_hex(row_id),)
)
Then the query returns an empty result.
Any help would be appreciated. Also, as an aside, is it better to store unix epoch dates as integers or TIMESTAMP?
NOTE: I am not having problems inserting the data, just trying to retrieve it from the database. The row exists when I check via mysqlclient.
Thanks a lot!
One tip: you should be able to call uuid.uuid4().bytes to get the raw
bytes. As for timestamps, if you want to perform time/date manipulation
in SQL it's often easier to deal with real TIMESTAMP types.
I created a test table to try to reproduce what you're seeing:
CREATE TABLE xyz (
added_id INT AUTO_INCREMENT NOT NULL,
id BINARY(16) NOT NULL,
PRIMARY KEY (added_id),
UNIQUE (id)
) DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ENGINE=InnoDB;
My script is able to insert and query for the rows using the binary field as a
key without problem. Perhaps you are incorrectly fetching / iterating over the
results returned by the cursor?
import binascii
import MySQLdb
import uuid
conn = MySQLdb.connect(host='localhost')
key = uuid.uuid4()
print 'inserting', repr(key.bytes)
r = conn.cursor()
r.execute('INSERT INTO xyz (id) VALUES (%s)', (key.bytes,))
conn.commit()
print 'selecting', repr(key.bytes)
r.execute('SELECT added_id, id FROM xyz WHERE id = %s', (key.bytes,))
for row in r.fetchall():
    print row[0], binascii.b2a_hex(row[1])
Output:
% python qu.py
inserting '\x96\xc5\xa4\xc3Z+L\xf0\x86\x1e\x05\xebt\xf7\\\xd5'
selecting '\x96\xc5\xa4\xc3Z+L\xf0\x86\x1e\x05\xebt\xf7\\\xd5'
1 96c5a4c35a2b4cf0861e05eb74f75cd5
% python qu.py
inserting '\xac\xc9,jn\xb2O@\xbb\xa27h\xcd<B\xda'
selecting '\xac\xc9,jn\xb2O@\xbb\xa27h\xcd<B\xda'
2 acc92c6a6eb24f40bba23768cd3c42da
To supplement existing answers, there's also an issue with the following warning when dealing with binary strings in queries:
Warning: (1300, "Invalid utf8 character string: 'ABCDEF'")
It is reproduced by the following:
cursor.execute('''
CREATE TABLE `table`(
    `bin_field` BINARY(16) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
''')
bin_value = uuid.uuid4().bytes
cursor.execute('INSERT INTO `table`(bin_field) VALUES(%s)', (bin_value,))
Whenever MySQL sees that a string literal in a query isn't valid against the current character_set_connection, it will emit this warning. There are several solutions:
Explicitly set _binary charset literal
INSERT INTO `table`(bin_field) VALUES(_binary %s)
Manually construct queries with hexadecimal literals
INSERT INTO `table`(bin_field) VALUES(x'abcdef')
Change connection charset if you're only working with binary strings
For more details see MySQL Bug 79317.
Update
As @charlax pointed out, there's a binary_prefix flag which can be passed to the connection's initialiser to automatically prepend the _binary prefix when interpolating arguments. It's supported by recent versions of both mysqlclient and pymysql.
I'm using PyGreSQL to access my DB. In the use case I'm currently working on, I am trying to insert a record into a table and return the last row ID, i.e. the value that the DB created for my ID field:
create table job_runners (
id SERIAL PRIMARY KEY,
hostname varchar(100) not null,
is_available boolean default FALSE
);
sql = "insert into job_runners (hostname) values ('localhost')"
When I used db.insert(), which made the most sense, I received an AttributeError. And when I tried db.query(sql), I got nothing but an OID.
Q: Using PyGreSQL what is the best way to insert records and return the value of the ID field without doing any additional reads or queries?
INSERT INTO job_runners
(hostname,is_available) VALUES ('localhost',true)
RETURNING id
That said, I have no idea about PyGreSQL, but from what you've already written, I guess it's db.query() that you want to use here.
The documentation in PyGreSQL says that if you call dbconn.query() with an insert/update statement, it will return the OID. It goes on to say something about lists of OIDs when there are multiple rows involved.
First of all, I found that the OID features did not work. I suppose knowing the version numbers of the libs and tools would have helped; however, I was not trying to return the OID anyway.
Finally, by appending "returning id", as suggested by @hacker, PyGreSQL simply did the right thing and returned a record set with the ID in the resulting dictionary (see code below).
sql = "insert into job_runners (hostname) values ('localhost') returning id"
rv = dbconn.query(sql)
id = rv.dictresult()[0]['id']
Assuming you have a cursor object cur:
cur.execute("INSERT INTO job_runners (hostname) VALUES (%(hostname)s) RETURNING id",
{'hostname': 'localhost'})
id = cur.fetchone()[0]
This ensures PyGreSQL correctly escapes the input string, preventing SQL injection.