When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetchall() to get a description of the table, store it in memory, and then match the column names from cursor.description to that overall table description?
No, it's not the only way. You can also fetch one row, iterate over it, and inspect the Python type of each column value. Unless a value is None (in which case the SQL field is NULL), this gives you a fairly precise indication of what the database column type was.
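A minimal sketch of that approach (the table and column names here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b REAL, c TEXT)")
con.execute("INSERT INTO t VALUES (1, 2.5, 'x')")

cur = con.execute("SELECT * FROM t")
row = cur.fetchone()

# cursor.description gives the column names; the fetched values
# give the Python types, which mirror the SQLite storage classes.
names = [d[0] for d in cur.description]
types = {name: type(value) for name, value in zip(names, row)}
# types == {'a': int, 'b': float, 'c': str}
```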
sqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place each, and neither is accessible to the Python application, so there is no "direct" way that you may have been looking for.
I haven't tried this in Python, but you could try something like
SELECT *
FROM sqlite_master
WHERE type = 'table';
The sql column of the result contains the DDL CREATE statement used to create each table. By parsing the DDL you can get the column type info, such as it is. Remember that SQLite is rather vague and unrestrictive when it comes to column data types.
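From Python this could look roughly like the following (the table name demo is made up; real parsing of the DDL is left out):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (id INTEGER, name TEXT)")

# sqlite_master stores the original CREATE statement verbatim in
# its sql column; you would parse the column types out of this text.
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'demo'"
).fetchone()[0]
# ddl == "CREATE TABLE demo (id INTEGER, name TEXT)"
```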
I have a list/array of strings:
l = ['jack','jill','bob']
Now I need to create a table in sqlite3 for Python in which I can insert this array into a column called "Names". I do not want multiple rows with each name in each row. I want a single row which contains the array exactly as shown above, and I want to be able to retrieve it in exactly the same format. How can I insert an array as an element in a db? What am I supposed to declare as the data type of the array while creating the db itself? Like:
c.execute("CREATE TABLE names(id text, names ??)")
How do I insert values too? Like:
c.execute("INSERT INTO names VALUES(?,?)",(id,l))
EDIT: I am being so foolish. I just realized that I can have multiple entries for the id and use a query to extract all relevant names. Thanks anyway!
You can store an array in a single string field if you somehow generate a string representation of it, e.g. using the pickle module. Then, when you read the row, you can unpickle it. Pickle converts many different complex objects (but not all) into a string from which the object can be restored. But that is most likely not what you want to do: you won't be able to do anything with the data in the table except select the rows and then unpickle the array. You won't be able to search.
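A minimal sketch of that pickle round trip (the id/names schema is assumed from the question; note the caveats above about searchability):

```python
import pickle
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE names (id TEXT, names BLOB)")

l = ['jack', 'jill', 'bob']
# Serialize the whole list into one opaque blob for a single row.
con.execute("INSERT INTO names VALUES (?, ?)", ('a1', pickle.dumps(l)))

blob = con.execute("SELECT names FROM names WHERE id = 'a1'").fetchone()[0]
restored = pickle.loads(blob)
# restored == ['jack', 'jill', 'bob']
```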
If you want to store anything of varying length (or of fixed length, but many instances of similar things), you should not put it into a column or multiple columns. Think vertically, not horizontally: don't think in columns, think in rows. For storing a vector with any number of components, a table is a good tool.
It is a little difficult to explain from the little detail you give, but you should think about creating a second table and putting all the names there for every row of your first table. You'd need some key in your first table, that you can use for your second table, too:
c.execute("CREATE TABLE first_table(id INTEGER, text VARCHAR(255), additional fields)")
c.execute("CREATE TABLE names_table(id INTEGER, num INTEGER, name VARCHAR(255))")
With this you can still store whatever information you have, except the names, in first_table, and store the array of names in names_table; just use the same id as in first_table, and use num to store the index positions inside the array. You can then later get the array back by doing something like
SELECT name FROM names_table
WHERE id=?
ORDER BY num
to read the array of names for any of your rows in first_table.
That's a pretty normal way to store arrays in a DB.
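Putting the pieces above together, a sketch of the two-table approach might look like this (table and column names follow the examples above; the info column is an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE first_table (id INTEGER PRIMARY KEY, info TEXT)")
con.execute("CREATE TABLE names_table (id INTEGER, num INTEGER, name TEXT)")

names = ['jack', 'jill', 'bob']
con.execute("INSERT INTO first_table VALUES (1, 'some info')")
# One row per name; num records each name's index position in the array.
con.executemany(
    "INSERT INTO names_table VALUES (?, ?, ?)",
    [(1, i, n) for i, n in enumerate(names)],
)

# Reassemble the array for a given first_table row, in order.
rows = con.execute(
    "SELECT name FROM names_table WHERE id = ? ORDER BY num", (1,)
).fetchall()
restored = [r[0] for r in rows]
# restored == ['jack', 'jill', 'bob']
```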
This is not the way to go. You should consider creating another table for the names, with a foreign key referencing the first table.
You could pickle/marshal/json your array and store it as binary/varchar/jsonfield in your database.
Something like:
import json
names = ['jack','jill','bill']
snames = json.dumps(names)
c.execute("INSERT INTO nametable VALUES (?)", (snames,))
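Reading it back is the reverse; a sketch of the full JSON round trip, assuming a one-column table named nametable:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nametable (names TEXT)")

names = ['jack', 'jill', 'bill']
# Store the list as a single JSON string in one row.
con.execute("INSERT INTO nametable VALUES (?)", (json.dumps(names),))

stored = con.execute("SELECT names FROM nametable").fetchone()[0]
restored = json.loads(stored)
# restored == ['jack', 'jill', 'bill']
```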
I am aware that 'datetime' is not a valid data type for SQLite, but I'm not quite sure what to replace it with. Take for example the following statements, would I store it as text like the others? If so, how would I then manipulate that later on, as a date?
drop table if exists entries;
create table entries (
id integer primary key autoincrement,
title text not null,
'text' text not null,
date_created
);
I am aware that 'datetime' is not a valid data type for SQLite
SQLite's flexibility results in virtually any column type being valid e.g.
CREATE TABLE weidrcolumntypes (column1 rumplestiltskin, column2 weirdtestcolumn, column3 etc)
is valid and will create a table with 3 columns (4 with the rowid column).
SQLite's flexibility also allows any value to be stored in any column (an exception being the rowid, for tables that are not defined using WITHOUT ROWID, where the value must be an INTEGER).
A more comprehensive answer (tagged for Android but the principles still apply) is here.
This may also be of interest.
So in brief any column type would be able to handle/store datetime.
Take for example the following statements, would I store it as text
like the others?
As per the above, even the code you have as it is would be usable. You may wish to specify a column type of TEXT or INTEGER.
If so, how would I then manipulate that later on, as a date?
As for storing date and time, Integer, Long, or String could be used, the latter having the complementary Date And Time Functions.
As such, you could do much of the manipulation within your queries which would be language independent.
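For instance, storing the timestamp as TEXT and manipulating it in the query with SQLite's built-in date/time functions (a cut-down version of the entries table from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (title TEXT, date_created TEXT)")
con.execute("INSERT INTO entries VALUES ('hello', '2023-05-01 12:30:00')")

# strftime works directly on the stored TEXT value, so the
# date manipulation stays inside SQL, independent of the host language.
year = con.execute(
    "SELECT strftime('%Y', date_created) FROM entries"
).fetchone()[0]
# year == '2023'
```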
I'm building up a table in Python's SQLite module which will consist of 18 columns at least. These columns are named by times (for example "08-00"), all stored in a list called 'time_range'. I want to avoid writing out all 18 column names by hand in the SQL statement, since they already exist inside the mentioned list and it would make the code quite ugly. However, this:
marks = '?,'*18
self.c.execute('''CREATE TABLE finishedJobs (%s)''' % marks, tuple(time_range))
did not work. As it seems, Python/SQLite does not accept parameters at this place. Is there any smart workaround for my purpose or do I really have to name every single column in a CREATE TABLE statement by hand?
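SQL parameters can indeed only stand in for values, not identifiers, so one workaround is to build the column list in Python, quoting each name. A sketch, with a shortened stand-in for time_range and an assumed TEXT column type:

```python
import sqlite3

time_range = ['08-00', '09-00', '10-00']  # in reality, 18 entries

# Quote each identifier, since names like "08-00" are not bare words.
columns = ', '.join('"%s" TEXT' % name for name in time_range)

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE finishedJobs (%s)' % columns)

# Verify the created columns via table_info (second field is the name).
names = [row[1] for row in con.execute('PRAGMA table_info(finishedJobs)')]
# names == ['08-00', '09-00', '10-00']
```

Note this only stays safe because time_range is a trusted, program-internal list; never interpolate user input into SQL this way.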
Using the sqlite module for Python, I have declared a column of data in my database to be of type real, but due to errors in the source data, some rows are of type text. This is fine, but when I pull the data, I want sqlite to throw an error if the row contains data of type text when the declared column type is real. How do I do this?
I have tried opening the database connection with
con = sqlite3.connect('mydatabase.db',detect_types=sqlite3.PARSE_DECLTYPES)
however, calls to fetchone() do not throw any errors.
What would be the best way of detecting a discrepancy between the row datatype and the datatype declared for the column?
I have also considered cleaning up data before inserting into the database, but these anomalous text rows contain special codes that may have more relevance in the future.
When you use detect_types=sqlite3.PARSE_DECLTYPES, you actually need to register a converter (or adapter for inserting) for the declared type; the only converters registered by default are for DATE and TIMESTAMP.
If you want an error if a column declared as real is not numeric, it would be as simple as:
sqlite3.register_converter('real', float)
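A sketch of that in action (table and column names are made up): the converter receives the stored value as bytes, so float raises ValueError when a text row in the real column is fetched.

```python
import sqlite3

# Register float as the converter for columns declared "real".
sqlite3.register_converter('real', float)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE t (x real)")
con.execute("INSERT INTO t VALUES ('oops')")  # bad text in a real column

try:
    con.execute("SELECT x FROM t").fetchone()
    caught = False
except ValueError:
    # float(b'oops') fails, surfacing the type discrepancy at fetch time.
    caught = True
```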
I carelessly made a typo in an sqlite3 datatype when writing schema.sql:
create table xxx(
id integer primary key autoincrement not null,
num integet
)
As you can see, integet should be integer, but it still works, which means I can insert data into it. Why? And how can I correct it to integer without breaking the database?
As the documentation for the Python sqlite3 module and the database itself both explain, sqlite3 is untyped. You can insert string values into an INTEGER column, or integer values into a VARCHAR column.
However, sqlite has a feature called "column affinity", where it will try to treat parameters numerically if the column type is INTEGER, etc. And the Python module tries to map each of the sqlite types to Python types, so if sqlite gives it an INTEGER column, Python will map it to an integral type.
As the respective documentation explains:
If the declared type contains the string "INT" then it is assigned INTEGER affinity.
INTEGER int or long, depending on size
The comparison is case-insensitive. Anything that does not match any of the rules will be treated as NUMERIC, which Python will treat as str/bytes/buffer (depending on your version).
So, to sqlite itself, integet matches the rules for INTEGER, so, you can store integral values and retrieve them as integers. (Of course you can also still store 'abc', and you'll get it back as a string.) And Python will see this as an INTEGER column, and therefore read values as int when possible.
As for the last part of your question, sqlite3 does not provide any way to change the type of a column after it's been created. In fact, the ALTER TABLE command only allows adding new columns or constraints, or renaming tables.
If you need to modify a live database, what you need to do is create a new table with the right columns, copy everything over, delete the old table, and rename the new table, like this:
CREATE TABLE yyy(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, num INTEGER)
INSERT INTO yyy SELECT * FROM xxx
DROP TABLE xxx
ALTER TABLE yyy RENAME TO xxx
Any string values you inserted into the old table will be copied over, and every subsequent time you fetch them, because the column type is INTEGER, Python will try to convert them to int if possible, instead of leaving them as bytes or str.
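The four statements above could be run from Python roughly like this, wrapped in one transaction so a failure rolls everything back (the sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Recreate the mistyped table from the question and add a row.
con.execute(
    "CREATE TABLE xxx (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "
    "num integet)"
)
con.execute("INSERT INTO xxx (num) VALUES (42)")

with con:  # commits on success, rolls back on error
    con.execute(
        "CREATE TABLE yyy (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "
        "num INTEGER)"
    )
    con.execute("INSERT INTO yyy SELECT * FROM xxx")
    con.execute("DROP TABLE xxx")
    con.execute("ALTER TABLE yyy RENAME TO xxx")

row = con.execute("SELECT num FROM xxx").fetchone()
# row == (42,)
```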
SQLite is forgiving to the point of silliness. You can create a column of type "wibble". This CREATE TABLE statement works in SQLite.
create table wibble (wibble wibble);
And you can insert into it, too.
insert into wibble values ('wibble');