I am migrating from Cassandra on an AWS machine to Astra Cassandra and I have run into a problem:
I cannot insert new data into Astra Cassandra when one column is around 2 million characters and 1.77 MB (and I have bigger data to insert, around 20 million characters). Does anyone know how to address this problem?
I am inserting it via a Python app (cassandra-driver==3.17.0) and this is the error stack I get:
start.sh[5625]: [2022-07-12 15:14:39,336]
INFO in db_ops: error = Error from server: code=1500
[Replica(s) failed to execute write]
message="Operation failed - received 0 responses and 2 failures: UNKNOWN from 0.0.0.125:7000, UNKNOWN from 0.0.0.181:7000"
info={'consistency': 'LOCAL_QUORUM', 'required_responses': 2, 'received_responses': 0, 'failures': 2}
If I insert only half of those characters, it works.
New Astra Cassandra table description (from the CQL Console):
token#cqlsh> describe mykeyspace.series;
CREATE TABLE mykeyspace.series (
type text,
name text,
as_of timestamp,
data text,
hash text,
PRIMARY KEY ((type, name, as_of))
) WITH additional_write_policy = '99PERCENTILE'
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.UnifiedCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair = 'BLOCKING'
AND speculative_retry = '99PERCENTILE';
Old Cassandra table description:
ansible#cqlsh> describe mykeyspace.series;
CREATE TABLE mykeyspace.series (
type text,
name text,
as_of timestamp,
data text,
hash text,
PRIMARY KEY ((type, name, as_of))
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
Data sample:
{"type": "OP", "name": "book", "as_of": "2022-03-17", "data": [{"year": 2022, "month": 3, "day": 17, "hour": 0, "quarter": 1, "week": 11, "wk_year": 2022, "is_peak": 0, "value": 1.28056854009628e-08}, .... ], "hash": "84421b8d934b06488e1ac464bd46e83ccd2beea5eb2f9f2c52428b706a9b2a10"}
where this JSON contains about 27,000 entries inside the data array, each like:
{"year": 2022, "month": 3, "day": 17, "hour": 0, "quarter": 1, "week": 11, "wk_year": 2022, "is_peak": 0, "value": 1.28056854009628e-08}
Python part of the code:
def insert_to_table(self, table_name, **kwargs):
    try:
        ...
        elif table_name == "series":
            self.session.execute(
                self.session.prepare("INSERT INTO series (type, name, as_of, data, hash) VALUES (?, ?, ?, ?, ?)"),
                (
                    kwargs["type"],
                    kwargs["name"],
                    kwargs["as_of"],
                    kwargs["data"],
                    kwargs["hash"],
                ),
            )
            return True
    except Exception as error:
        current_app.logger.error('src/db/db_ops.py insert_to_table() table_name = %s error = %s', table_name, error)
        return False
Big thanks!
You are hitting the configured limit for the maximum mutation size. On Cassandra this defaults to 16 MB, while on Astra DB at the moment it is 4 MB (it's possible that it will be raised, but performing inserts with very large cell sizes is still strongly discouraged).
A more agile approach to storing this data would be to revise your data model and split the big row with the huge string into several rows, each containing a single item of the 27,000 or so entries. With proper use of partitioning, you would still be able to retrieve the whole contents with a single query (paginated between the database and the driver for your convenience, which helps avoid the timeouts that may arise when reading such large individual rows).
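As a rough sketch of that idea (the table name series_points and the helper below are illustrative assumptions, not a prescribed model), the entries could be stored one per row under the same partition key:
import json

# Hypothetical split data model: one small row per entry, same partition key as before.
CREATE_SERIES_POINTS = """
CREATE TABLE IF NOT EXISTS mykeyspace.series_points (
    type text,
    name text,
    as_of timestamp,
    idx int,
    entry text,
    PRIMARY KEY ((type, name, as_of), idx)
)
"""

def insert_series_points(session, type_, name, as_of, entries):
    # Each entry becomes its own row; the whole series can still be fetched
    # with a single paginated SELECT on (type, name, as_of).
    stmt = session.prepare(
        "INSERT INTO series_points (type, name, as_of, idx, entry) "
        "VALUES (?, ?, ?, ?, ?)"
    )
    for i, entry in enumerate(entries):
        session.execute(stmt, (type_, name, as_of, i, json.dumps(entry)))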
Incidentally, I suggest you create the prepared statement only once, outside of the insert_to_table function (caching it somewhere). In the insert function you would then simply call self.session.execute(already_prepared_statement, (value1, value2, ...)), which would noticeably improve your performance.
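A minimal sketch of that suggestion, assuming a wrapper class that owns the session (class and attribute names are illustrative):
class SeriesRepository:
    INSERT_SERIES_CQL = "INSERT INTO series (type, name, as_of, data, hash) VALUES (?, ?, ?, ?, ?)"

    def __init__(self, session):
        self.session = session
        # Prepare once and reuse the PreparedStatement for every insert.
        self._insert_series_stmt = session.prepare(self.INSERT_SERIES_CQL)

    def insert_series(self, type_, name, as_of, data, hash_):
        self.session.execute(self._insert_series_stmt, (type_, name, as_of, data, hash_))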
A last point: I believe the drivers are able to connect to Astra DB only starting from version 3.24.0, so I'm not sure how you are using version 3.17; I don't think version 3.17 knows of the cloud argument to the Cluster constructor. In any case, I suggest you upgrade the driver to the latest version (currently 3.25.0).
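For reference, connecting a recent driver version to Astra DB looks roughly like this (a sketch; the secure connect bundle path and credentials below are placeholders):
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: substitute your own secure connect bundle and token credentials.
cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"},
    auth_provider=PlainTextAuthProvider("clientId", "clientSecret"),
)
session = cluster.connect("mykeyspace")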
There's something not quite right with the details you posted in your question.
In the schema you posted, the data column is of type text:
data text,
But the sample data you posted looks like you are inserting key/value pairs, which oddly seem to be formatted like a CQL collection type.
If it were really a string, it would be formatted as:
... "data": "key1: value1, key2: value2, ...", ...
Review your data and code then try again. Cheers!
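If the data is meant to stay in a text column, the structure would have to be serialized to a string before the insert, for example with json.dumps; a minimal sketch, assuming the prepared statement and kwargs from the question's code:
import json

# Serialize the Python structure to a JSON string so it fits the `text` column.
data_as_text = json.dumps(kwargs["data"])
self.session.execute(
    prepared_insert,  # the already-prepared INSERT from the question
    (kwargs["type"], kwargs["name"], kwargs["as_of"], data_as_text, kwargs["hash"]),
)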
Finally, the short-term solution that made the data migration from an in-house Cassandra DB to Astra Cassandra (DataStax) possible was to use compression (zlib).
So the "data" field of each entry was compressed in Python with zlib and then stored in Cassandra, to reduce the size of the entries.
def insert_to_table(self, table_name, **kwargs):
    try:
        if table_name == 'series_as_of':
            ...
        elif table_name == 'series':
            list_of_bytes = bytes(json.dumps(kwargs["data"]), 'latin1')
            compressed_data = zlib.compress(list_of_bytes)
            a = str(compressed_data, 'latin1')
            self.session.execute(
                self.session.prepare(INSERT_SERIES_QUERY),
                (
                    kwargs["type"],
                    kwargs["name"],
                    kwargs["as_of"],
                    a,
                    kwargs["hash"],
                ),
            )
        ....
Then, when reading the entries, a decompression step is needed:
def get_series(self, query_type, **kwargs):
    try:
        if query_type == "data":
            execution = self.session.execute_async(
                self.session.prepare(GET_SERIES_DATA_QUERY),
                (kwargs["type"], kwargs["name"], kwargs["as_of"]),
                timeout=30,
            )
            decompressed = None
            a = None
            for row in execution.result():
                my_bytes = row[0].encode('latin1')
                decompressed = zlib.decompress(my_bytes)
                a = str(decompressed, encoding="latin1")
            return a
        ...
The data looks like this now in Astra Cassandra:
token#cqlsh> select * from series where type = 'OP' and name = 'ZTP_PFM_H_LUX_PHY' and as_of = '2022-09-30';
type | name | as_of | data | hash
------+-------------------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------
OP | ZTP_PFM_H_LUX_PHY | 2022-09-30 00:00:00.000000+0000 | x\x9c-\x8dÉ\nÂ#\x10D\x7fEúl`z\x16Mü\x80\x90\x83â\x1c\x14\F\x86îYÈÅ\x05\x92\x8b\x88ÿnÂx*ªÞ\x83\x82\x8f\x83ñýJ\x0e6\x0b\x07{ë`9å\x83îÿår°Þ¶;ßùíñämw.\x02\rþ\x99\x8b!\x85\x94\x95h*%\n\x8a4ÒL®·¹õ4ôÅÓÙ¨\x10\të \x99H\x04¡\x8cf6¹!\x95\x02'\x93"Jb\x1dë\x84ÈT¯U\x90\x19\x11W8\x1dp£\x8d\x83/ü\x00Ó<0\x99 | 4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
(1 rows)
The long-term solution is to rearrange the data as @Stefano Lottini said.
I need to write automated Python code to create a database table whose column names are the keys from the JSON file, and whose column data are the values of those respective keys.
My json looks like this:
{
    "Table_1": [
        {
            "Name": "B"
        },
        {
            "BGE3": [
                "Itm2",
                "Itm1",
                "Glass"
            ]
        },
        {
            "Trans": []
        },
        {
            "Art": [
                "SYS"
            ]
        }]}
My table name should be: Table_1.
So my column names should look like: Name | BGE3 | Trans | Art.
And the data should be their respective values.
Creation of the table and columns has to be dynamic, because I need to run this code on multiple JSON files.
So far I have managed to connect to the PostgreSQL database using Python.
So please help me with a solution. Thank you.
Postgres version 13.
Existing code:
cur.execute("CREATE TABLE Table_1(Name varchar, BGE3 varchar, Trans varchar, Art varchar)")
for d in data: cur.execute("INSERT into B_Json_3(Name, BGE3, Trans , Art) VALUES (%s, %s, %s, %s,)", d)
Where data is a list of arrays i made which can only be executed for this json. I need a function that will execute any json i want that can have 100 elements of list in the values of any key.
The table creation portion, using the Python json module to convert the JSON to a Python dict and the psycopg2.sql module to dynamically build the CREATE TABLE:
import json
import psycopg2
from psycopg2 import sql
tbl_json = """{
"Table_1": [
{
"Name": "B"
},
{
"BGE3": [
"Itm2",
"Itm1",
"Glass"
]
},
{
"Trans": []
},
{
"Art": [
"SYS"
]
}]}
"""
# Transform JSON string into Python dict. Use json.load if pulling from file.
# Pull out table name and column names from dict.
tbl_dict = json.loads(tbl_json)
tbl_name = list(tbl_dict)[0]
tbl_name
'Table_1'
col_names = [list(col_dict)[0] for col_dict in tbl_dict[tbl_name]]
# Result of above.
col_names
['Name', 'BGE3', 'Trans', 'Art']
# Create list of types and then combine column names and column types into
# psycopg2 sql composed object. Warning: sql.SQL() does no escaping so potential
# injection risk.
type_list = ["varchar", "varchar", "varchar", "varchar"]  # one type per column
col_type = []
for i in zip(map(sql.Identifier, col_names), map(sql.SQL,type_list)):
col_type.append(i[0] + i[1])
# The result of above.
col_type
[Composed([Identifier('Name'), SQL('varchar')]),
 Composed([Identifier('BGE3'), SQL('varchar')]),
 Composed([Identifier('Trans'), SQL('varchar')]),
 Composed([Identifier('Art'), SQL('varchar')])]
# Build psycopg2 sql string using above.
sql_str = sql.SQL("CREATE table {} ({})").format(sql.Identifier(tbl_name), sql.SQL(',').join(col_type) )
con = psycopg2.connect("dbname=test host=localhost user=aklaver")
cur = con.cursor()
# Shows the CREATE statement that will be executed.
print(sql_str.as_string(con))
CREATE table "Table_1" ("Name"varchar,"BGE3"varchar,"Trans"varchar,"Art"varchar)
# Execute statement and commit.
cur.execute(sql_str)
con.commit()
# In psql client the result of the execute:
\d "Table_1"
Table "public.Table_1"
Column | Type | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
Name | character varying | | |
BGE3 | character varying | | |
Trans | character varying | | |
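A minimal sketch for the insert portion as well, reusing tbl_dict, tbl_name and col_names from above; joining list values into a single comma-separated string is an assumption about the desired representation:
# Build the row: one value per column, joining lists into comma-separated text.
row_values = []
for col_dict in tbl_dict[tbl_name]:
    value = list(col_dict.values())[0]
    row_values.append(', '.join(value) if isinstance(value, list) else value)

# Dynamic INSERT composed with psycopg2.sql, using placeholders for the values.
insert_str = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
    sql.Identifier(tbl_name),
    sql.SQL(', ').join(map(sql.Identifier, col_names)),
    sql.SQL(', ').join(sql.Placeholder() * len(col_names)),
)
cur.execute(insert_str, row_values)
con.commit()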
I'm still not able to insert my data into SQL. I have searched but failed to find a solution. I read a JSON file and insert its data into a SQL db. The JSON file contains a list of multiple dictionaries of key/value pairs. Below are the columns configured for the db table1:
+---------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------------------+--------------+------+-----+---------+----------------+
| id | int(10) | NO | PRI | NULL | auto_increment |
| log_number | int(10) | YES | | NULL | |
| timestamp | datetime | YES | | NULL | |
| staff_id | varchar(10) | YES | | NULL | |
| staff_name | varchar(30) | YES | | NULL | |
| temp_staff_id | varchar(10) | YES | | NULL | |
| temp_staff_name | varchar(30) | YES | | NULL | |
| staff_department | varchar(20) | YES | | NULL | |
| service_id | varchar(20) | YES | | NULL | |
| service_number | varchar(20) | YES | | NULL | |
| state | varchar(20) | YES | | NULL | |
| city | varchar(20) | YES | | NULL | |
| description | varchar(50) | YES | | NULL | |
| additional_remarks | varchar(100) | YES | | NULL | |
+---------------------------+--------------+------+-----+---------+----------------+
Not all dictionaries have all the db columns listed. Some don't have 'state' and 'city', some don't have 'additional_remarks', and some have 'temp_staff_id' and 'temp_staff_name'. Overall, the columns in the db cover the different kinds of key/value data in the JSON file that need to be inserted.
Below is the sample data that needs to be inserted into the db table1:
{
    "response": {
        "client_log": {
            "data": [
                {
                    "log_number": "1",
                    "timestamp": "2020-01-11 04:23:31",
                    "staff_id": "A123",
                    "staff_name": "Krill",
                    "staff_department": "Sales",
                    "service_id": "11111",
                    "service_number": "BU-11111",
                    "description": "This is description1"
                },
                {
                    "log_number": "2",
                    "timestamp": "2020-01-11 07:03:39",
                    "staff_id": "A456",
                    "staff_name": "James",
                    "staff_department": "Graphic",
                    "service_id": "22222",
                    "service_number": "XU-22211",
                    "State": "AAA",
                    "City": "AAA"
                },
                {
                    "log_number": "3",
                    "timestamp": "2020-01-27 14:29:11",
                    "temp_staff_id": "A571",
                    "temp_staff_name": "Mary",
                    "staff_department": "Engr",
                    "service_id": "89000",
                    "service_number": "SP-89001",
                    "State": "BBB",
                    "City": "BBB"
                },
                {
                    "log_number": "4",
                    "timestamp": "2020-01-27 15:08:20",
                    "staff_id": "A765",
                    "staff_name": "Alex",
                    "staff_department": "Sales",
                    "service_id": "09880",
                    "service_number": "XU-09880",
                    "description": "This is description3333"
                }
            ],
            "query": "4"
        }
    }
}
I'm using the script below to insert the data. I insert all the keys available in table1, but I got an error.
with open(myfile, 'r') as f:
    mydata = json.load(f)

# Insert the values into table1 of the db.
sql = "INSERT INTO `table1` (`log_number`, `timestamp`, `staff_id`, `staff_name`, `temp_staff_id`, `temp_staff_name`, `staff_department`, `service_id`, `service_number`, `state`, `city`, `description`, `additional_remarks`) VALUES ( %(log_number)s, %(timestamp)s, %(staff_id)s, %(staff_name)s, %(temp_staff_id)s, %(temp_staff_name)s, %(staff_department)s, %(service_id)s, %(service_number)s, %(state)s, %(city)s, %(description)s, %(additional_remarks)s )"
cursor.executemany( sql, mydata['response']['client_log']['data'])
Error...
1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '%(temp_staff_id)s, %(temp_staff_name)s, 'Sales', '11111', 'BU-11111', %(state)s,' at line 1
If all the keys of each dictionary were the same, existed, and matched the db columns exactly, I could insert the data without any error.
Since the sets of dictionary keys in the JSON file are not the same and some keys don't exist, how can I do the insertion? For example, if there is no state, city or additional_remarks, just insert NULL. I'm not sure how to do it when some keys differ from the others or don't exist. I hope you can help and advise me on a solution. Thank you.
I have tried Mr Rashid's solution and it returns the error below:
<class 'dict'> ====> mydata is a dictionary type
{'response': {'client_log': {'data': [{'log_number': '1', 'timestamp': '2020-01-11 04:23:31', 'staff_id': 'A123', 'staff_name': 'Krill', 'staff_department': 'Sales', 'service_id': '11111', 'service_number': 'BU-11111', 'description': 'This is description1'}, {'log_number': '2', 'timestamp': '2020-01-11 07:03:39', 'staff_id': 'A456', 'staff_name': 'James', 'staff_department': 'Graphic', 'service_id': '22222', 'service_number': 'XU-22211', 'State': 'AAA', 'City': 'AAA'}, {'log_number': '3', 'timestamp': '2020-01-27 14:29:11', 'temp_staff_id': 'A571', 'temp_staff_name': 'Mary', 'staff_department': 'Engr', 'service_id': '89000', 'service_number': 'SP-89001', 'State': 'BBB', 'City': 'BBB'}, {'log_number': '4', 'timestamp': '2020-01-27 15:08:20', 'staff_id': 'A765', 'staff_name': 'Alex', 'staff_department': 'Sales', 'service_id': '09880', 'service_number': 'XU-09880', 'description': 'This is description3333'}], 'total_hero_query': '13'}, 'response_time': '0.723494', 'transaction_id': '909122', 'transaction_status': 'OK', 'transaction_time': 'Fri Feb 28 15:27:51 2020'}}
'str' object has no attribute 'keys'
I edited the loop as below:
for row in json_in['response']['client_log']['data']:
The error related to the keys is gone, but now there is an error related to the SQL:
1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'description1, 2020-01-11 04:23:31, Krill, Sales, BU-11111, 1, 11111),(A123, This' at line 1
This error may be due to the differing or missing keys in the dictionary data, as per my earlier description of the problem.
I really need help, as to date I am still not able to resolve the issue. Please help me. Thank you.
So instead of hardcoding the SQL to include every column, you need to dynamically build the SQL statements with only the columns/values that you DO have.
This can be done with a simple loop.
Also, instead of immediately executing each insert command, I put them into a list so that you can then thread all the uploads to run at the same time. I did not write the threading portion of the code, so you will need to do that. The second loop would need to be replaced with thread code, or if you aren't going to thread, you can just execute each statement instead of appending it to the sql_inserts variable.
Edit: added JSON loading.
I don't know exactly how your data comes into the code, but raw JSON data shows up as a string; you have to parse it into a dictionary. Luckily there is a handy Python module that does exactly that, so we can just import the json module and load the data with it.
Also note that I had a copy-and-paste error with a variable name: make sure any of the code that said keys now says sql_keys.
import json

# This parses the raw JSON data into a Python dictionary so that Python can process it.
json_in = json.loads(mydata)

sql_template = 'INSERT INTO `table1` ({}) VALUES ({})'
sql_inserts = []

for row in json_in['response']['client_log']['data']:
    sql_keys = list(row.keys())
    sql_values = [row[key] for key in sql_keys]
    # Name only the columns present in this row and use %s placeholders so the
    # driver quotes the values correctly.
    statement = sql_template.format(
        ', '.join('`{}`'.format(key) for key in sql_keys),
        ', '.join(['%s'] * len(sql_keys)),
    )
    sql_inserts.append((statement, sql_values))

# Replace this loop with threading code if you want the uploads to run in parallel.
for statement, values in sql_inserts:
    cursor.execute(statement, values)
This is a very clean and easy-to-read way to do this. If you really cared about how many lines of code there are, like in a code-golf situation, you could make this shorter with a list comprehension and/or possibly a lambda.
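Alternatively, a minimal sketch that keeps the asker's original executemany with named %(key)s placeholders: normalize each row so every expected key exists, defaulting to None (which is stored as NULL):
# Columns expected by table1; missing keys default to None -> NULL in MySQL.
expected_keys = [
    'log_number', 'timestamp', 'staff_id', 'staff_name', 'temp_staff_id',
    'temp_staff_name', 'staff_department', 'service_id', 'service_number',
    'state', 'city', 'description', 'additional_remarks',
]

rows = mydata['response']['client_log']['data']
# The JSON sometimes uses 'State'/'City' while the table uses lowercase names,
# so also try the capitalized key (an assumption based on the sample data).
normalized = [
    {key: row.get(key, row.get(key.capitalize())) for key in expected_keys}
    for row in rows
]
cursor.executemany(sql, normalized)  # `sql` is the asker's original INSERT statement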
Python: 2.7.12
pyodbc: 4.0.24
OS: Ubuntu 16.4
DB: MySQL 8
driver: MySQL 8
Expected behaviour: the result set should have the actual numbers in columns with datatype int
Actual behaviour: all of the columns with int data type contain 0's (if a parameterized query is used)
Here are the queries:
1.
cursor.execute("SELECT * FROM TABLE where id =7")
Result set:
[(7, 1, None, 1, u'An', u'Zed', None, u'Ms', datetime.datetime(2016, 12, 20, 0, 0), u'F', u'Not To Be Disclosed', None, None, u'SPRING', None, u'4000', datetime.datetime(2009, 5, 20, 18, 55), datetime.datetime(2019, 1, 4, 14, 25, 58, 763000), 0, None, None, None, bytearray(b'\x00\x00\x00\x00\x01(n\xba'))]
2.
cursor.execute("SELECT * FROM patients where patient_id=?", [7])`
or
cursor.execute("SELECT * FROM patients where patient_id=?", ['7'])
or
cursor.execute("SELECT * FROM patients where patient_id IN ", [7])
Result set:
[(0, 0, None, 0, u'An', u'Zed', None, u'Ms', datetime.datetime(2016, 12, 20, 0, 0), u'F', u'Not To Be Disclosed', None, None, u'SPRING', None, u'4000', datetime.datetime(2009, 5, 20, 18, 55), datetime.datetime(2019, 1, 4, 14, 25, 58, 763000), 0, None, None, None, bytearray(b'\x00\x00\x00\x00\x01(n\xba'))]
The rest of the result set is okay, except that the columns with int data type all have 0's when a parameterized query is used.
It seems like it should have worked without issues. Can I get some help here?
Edit: Here's the schema of the table:
CREATE TABLE `patient
`lastname` varchar(30) DEFAULT NULL,
`known_as` varchar(30) DEFAULT NULL,
`title` varchar(50) DEFAULT NULL,
`dob` datetime DEFAULT NULL,
`sex` char(1) DEFAULT NULL,
`address1` varchar(30) DEFAULT NULL,
`address2` varchar(30) DEFAULT NULL,
`address3` varchar(30) DEFAULT NULL,
`city` varchar(30) DEFAULT NULL,
`state` varchar(16) DEFAULT NULL,
`postcode` char(4) DEFAULT NULL,
`datecreated` datetime NOT NULL,
`dateupdated` datetime(6) DEFAULT NULL,
`isrep` tinyint(1) DEFAULT NULL,
`photo` longblob,
`foreign_images_imported` tinyint(1) DEFAULT NULL,
`ismerged` tinyint(1) DEFAULT NULL,
`rowversion` varbinary(8) DEFAULT NULL,
PRIMARY KEY (`patient_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
You have encountered this bug in MySQL Connector/ODBC.
EDIT: The bug has now been fixed.
The following (Python 3) test code verifies that MySQL Connector/ODBC returns zero (incorrect), while mysqlclient returns the correct value:
import MySQLdb # from mysqlclient
import pyodbc
host = 'localhost'
user = 'root'
passwd = 'whatever'
db = 'mydb'
port = 3307
charset = 'utf8mb4'
use_odbc = False # or True
print(f'{"" if use_odbc else "not "}using ODBC ...')
if use_odbc:
connection_string = (
f'DRIVER=MySQL ODBC 8.0 ANSI Driver;'
f'SERVER={host};UID={user};PWD={passwd};DATABASE={db};PORT={port};'
f'charset={charset};'
)
cnxn = pyodbc.connect(connection_string)
print(f'{cnxn.getinfo(pyodbc.SQL_DRIVER_NAME)}, version {cnxn.getinfo(pyodbc.SQL_DRIVER_VER)}')
else:
cnxn = MySQLdb.connect(
host=host, user=user, passwd=passwd, db=db, port=port, charset=charset
)
int_value = 123
crsr = cnxn.cursor()
crsr.execute("CREATE TEMPORARY TABLE foo (id varchar(10) PRIMARY KEY, intcol int, othercol longblob)")
crsr.execute(f"INSERT INTO foo (id, intcol) VALUES ('Alfa', {int_value})")
sql = f"SELECT intcol, othercol FROM foo WHERE id = {'?' if use_odbc else '%s'}"
crsr.execute(sql, ('Alfa',))
result = crsr.fetchone()[0]
print(f'{"pass" if result == int_value else "FAIL"} -- expected: {repr(int_value)} ; actual: {repr(result)}')
Console output with use_odbc = True:
using ODBC ...
myodbc8a.dll, version 08.00.0018
FAIL -- expected: 123 ; actual: 0
Console output with use_odbc = False:
not using ODBC ...
pass -- expected: 123 ; actual: 123
FWIW, I just posted a question where I was seeing this in version 3.1.14 of the ODBC connector but NOT in version 3.1.10.
This is my code. It reads a bunch of files with SQL commands in identical formats (i.e. comments prefaced with -, along with some blank lines, hence the if condition in my loop).
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv, sqlite3, os

conn = sqlite3.connect('db_all.db')
c = conn.cursor()
files = os.listdir('C:\\Users\\ghb\\Desktop\\database_schema')
for file in files:
    string = ''
    with open(file, 'rb') as read:
        for line in read.readlines():
            if line[0] != '-' and len(line) != 0:
                string = string + line.rstrip()  # error also occurs if I skip .rstrip
    print string  # for debugging purposes
    c.executescript(string)
    string = ''
conn.close()
Error:
Traceback (most recent call last):
File "C:/Users/ghb/Desktop/database_schema/database_schema.py", line 16, in <module>
c.executescript(string)
sqlite3.OperationalError: near "SET": syntax error
To avoid clutter, here is the output of the string variable (without the INSERT INTO data, which is confidential):
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
CREATE TABLE IF NOT EXISTS `career` (
`carKey` int(11) NOT NULL AUTO_INCREMENT,
`persID` bigint(10) unsigned zerofill NOT NULL,
`persKey` int(6) unsigned NOT NULL,
`wcKey` int(2) unsigned NOT NULL,
`wtKey` int(2) unsigned DEFAULT NULL,
`pwtKey` int(2) unsigned DEFAULT NULL,
`dptId` bigint(10) unsigned NOT NULL,
`dptNr` int(4) unsigned NOT NULL,
`dptalias` varchar(10) COLLATE utf8_icelandic_ci NOT NULL,
`class` enum('A','B') COLLATE utf8_icelandic_ci NOT NULL,
`getfilm` enum('yes','no') COLLATE utf8_icelandic_ci NOT NULL DEFAULT 'yes',
`finished` enum('true','false') COLLATE utf8_icelandic_ci NOT NULL DEFAULT 'false',
`startDate` date NOT NULL,
`endDate` date DEFAULT NULL,
`regDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`user` tinyint(4) NOT NULL,
`status` set('LÉST','BRFL','BRFD','BRNN') COLLATE utf8_icelandic_ci DEFAULT NULL,
`descr` text COLLATE utf8_icelandic_ci,
PRIMARY KEY (`carKey`),
KEY `pwtKey` (`pwtKey`),
KEY `wtKey` (`wtKey`),
KEY `dptId` (`dptId`),
KEY `user` (`user`),
KEY `persID` (`persID`),
KEY `persKey` (`persKey`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_icelandic_ci AUTO_INCREMENT=2686 ;
Input sample:
INSERT INTO `career` (`carKey`, `persID`, `persKey`, `wcKey`, `wtKey`, `pwtKey`, `dptId`, `dptNr`, `dptalias`, `class`, `getfilm`, `finished`, `startDate`, `endDate`, `regDate`, `user`,
(5, 34536, 346, 22, 44, 34, 3454356, 33, 'asdasd', 'ASDASD', 'ASDSD', 'true', '1991-02-04', '2010-05-02', '2009-05-02 00:01:02', 1, NULL, 'HH:
'),
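The dump above is MySQL dialect, and SQLite does not understand statements such as SET SQL_MODE, which matches the error near "SET". A minimal sketch of filtering those statements out before executescript is shown below; other MySQL-isms in the dump (the ENGINE=... clause, inline KEY definitions, COLLATE clauses, enum/set types) would still need manual translation, so this is only a starting point:
def strip_mysql_only(script):
    # Drop statements SQLite cannot parse, e.g. "SET SQL_MODE=...;".
    statements = script.split(';')
    kept = [s for s in statements if not s.strip().upper().startswith('SET ')]
    return ';'.join(kept)

# Usage sketch: clean the accumulated SQL before handing it to sqlite3.
c.executescript(strip_mysql_only(string))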