Assign appointment times in sync and async manner in MySQL - Python

I am working on a token management system. Tokens are issued in both a sync and an async manner. Sync tokens are issued based on the current time and the previous token's end time; it's basically a queue. For async tokens, the user can provide the time at which he/she wants an appointment; the database is then checked and, if that time frame is available, the token is issued.
My sync tokens are working perfectly fine. The problem is that I am not sure how to keep the database consistent across sync and async tokens.
For example, if one user asks for an async token at 15:00:00 and another user asks for 14:00:00, while the current sync tokens' latest end time is 12:00:00, how should that data be inserted into the table?
Note: there can be only one active token at a particular time, either sync or async. Each transaction is assigned a maximum of 5 minutes. The timings are calculated in the Python script. The bank_open time is 10:00:00 and the close_time is 18:00:00.
Here's the MySQL table:
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
| id | type | token | date | start_time | end_time | no_of_trans |
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
| 36 | sync | 4059cdea-6e9d-4943-8d4c-f5df99d12ff5 | 2020-4-22 | 15:53:15.347864 | 15:58:15.347864 | 1 |
| 37 | sync | 5b420dec-9fc2-415f-b2f2-3649a3c6fa26 | 2020-4-22 | 15:58:15.347864 | 16:18:15.347864 | 4 |
| 38 | sync | 72501a99-c081-48ee-a05d-959499eeb944 | 2020-4-22 | 16:18:15.347864 | 16:33:15.347864 | 3 |
| 39 | sync | 3d08050e-115f-4bb1-a98a-5991e87c4ead | 2020-4-22 | 16:33:15.347864 | 16:43:15.347864 | 2 |
| 40 | sync | c19e664b-0df0-4149-8b13-98d7450ea72f | 2020-4-22 | 16:43:15.347864 | 16:58:15.347864 | 3 |
| 41 | async | 56a6f5bf-0f1a-4694-8c69-c67f7ed39147 | 2020-4-22 | 16:59:00.123456 | 17:04:00.123456 | 1 |
| 42 | async | bac9b649-4462-46cb-a290-df63cb4b7158 | 2020-4-22 | 17:00:00.123456 | 17:05:00.123456 | 1 |
| 43 | async | e9b12cf7-c03f-4d2a-a4a4-f19b0d7d50f3 | 2020-4-22 | 17:47:00.123456 | 17:52:00.123456 | 1 |
| 44 | async | e9b12cf7-c03f-4d2a-a4a4-f19b0d7d50f3 | 2020-4-22 | 17:47:00.123456 | 17:52:00.123456 | 1 |
| 45 | async | a04f1347-ec1d-469d-911a-7783658e1e71 | 2020-4-22 | 17:50:00.123456 | 17:55:00.123456 | 1 |
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
And this is the schema:
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| type | varchar(255) | YES | | NULL | |
| token | varchar(255) | YES | | NULL | |
| date | varchar(255) | YES | | NULL | |
| start_time | varchar(255) | YES | | NULL | |
| end_time | varchar(255) | YES | | NULL | |
| no_of_trans | int(11) | YES | | NULL | |
+-------------+--------------+------+-----+---------+----------------+
Currently, I'm checking these conditions in the Python script:
for i in range(len(async_end_list)):
    # print(async_start_list[i], '....', async_end_list[i])
    if user_time >= async_start_list[i] and user_time <= async_end_list[i]:
        print("Wrong input")
        break
    if user_time > async_start_list[i] and user_time > async_end_list[i]:
        if (end_time < bank_close and user_time > bank_open
                and user_time < bank_close and user_time > latest_sync_end_time):
            if total_trans <= max_trans:
                mycursor.execute(
                    """INSERT INTO tokens (type, token, date, start_time, end_time, no_of_trans)
                       VALUES (%s, %s, %s, %s, %s, %s)""",
                    ('async', rand_token, today, user_time, end_time, no_of_trans))
                mydb.commit()
                print(mycursor.rowcount, "record inserted.")
                break
            else:
                print("Maximum transactions for day has exceeded!!")
                break
        else:
            print("Record cannot be inserted.")
            break
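One possible approach, shown below as a sketch rather than the poster's actual code, is to let the database do the overlap check: count any existing token on the same date whose interval intersects the requested slot, and insert only when the count is zero. The names user_start and user_end are placeholders for the requested slot boundaries computed in the Python script; because the time columns are stored as zero-padded HH:MM:SS strings, string comparison orders them correctly within a single day.

# Sketch of a SQL-side overlap check before inserting an async token.
# Assumes user_start/user_end are formatted like the stored varchar times.
mycursor.execute(
    """SELECT COUNT(*) FROM tokens
       WHERE date = %s
         AND start_time < %s  -- an existing token starts before the requested slot ends
         AND end_time > %s    -- ... and ends after the requested slot starts
    """,
    (today, str(user_end), str(user_start)))
(overlapping,) = mycursor.fetchone()

if overlapping == 0 and str(user_start) >= str(bank_open) and str(user_end) <= str(bank_close):
    mycursor.execute(
        """INSERT INTO tokens (type, token, date, start_time, end_time, no_of_trans)
           VALUES (%s, %s, %s, %s, %s, %s)""",
        ('async', rand_token, today, str(user_start), str(user_end), no_of_trans))
    mydb.commit()
else:
    print("Requested slot is not available.")

Because the check covers both sync and async rows, it also enforces the "only one active token at a time" rule from the question.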

Related

Dividing a dataframe into several dataframes according to date column

I have a dataframe which contains a date column called 'Testdate', and I have a period between two specific dates, such as 20110501~20120731.
I want to divide that dataframe into multiple dataframes according to the year-month of 'Testdate'.
For example, if 'Testdate' is between 20110501-20110531 then df1, if it falls within the next month then df2, and so on.
For example, a whole dataframe looks like this...
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 1 | 20110528 | 50 |
| 2 | 20110601 | 75 |
| 3 | 20110504 | 100 |
| 4 | 20110719 | 82 |
| 5 | 20111120 | 42 |
| 6 | 20111103 | 95 |
| 7 | 20120520 | 42 |
| 8 | 20120503 | 95 |
But, I want to divide it like this...
[DF1]: name should be 201105
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 1 | 20110528 | 50 |
| 3 | 20110504 | 100 |
[DF2]: name should be 201106
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 2 | 20110601 | 75 |
[DF3]
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 4 | 20110719 | 82 |
[DF4]
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 5 | 20111120 | 42 |
| 6 | 20111103 | 95 |
[DF5]
| StudentID | Testdate | Record |
| -------- | -------- | ------ |
| 7 | 20120520 | 42 |
| 8 | 20120503 | 95 |
I found some code for dividing a dataframe according to the quarter, but I could not find any code for my task.
How can I deal with it? Many thanks for your help.
Create a grouper by slicing yyyymm from Testdate, then group the dataframe and store each group inside a dict comprehension:
s = df['Testdate'].astype(str).str[:6]
dfs = {f'df_{k}': g for k, g in df.groupby(s)}
# dfs['df_201105']
StudentID Testdate Record
0 1 20110528 50
2 3 20110504 100
# dfs['df_201106']
StudentID Testdate Record
1 2 20110601 75
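For reference, here is a self-contained version of the same approach, rebuilding the sample data from the question so it can be run directly:

import pandas as pd

df = pd.DataFrame({
    'StudentID': [1, 2, 3, 4, 5, 6, 7, 8],
    'Testdate': [20110528, 20110601, 20110504, 20110719,
                 20111120, 20111103, 20120520, 20120503],
    'Record': [50, 75, 100, 82, 42, 95, 42, 95],
})

s = df['Testdate'].astype(str).str[:6]            # yyyymm key, e.g. '201105'
dfs = {f'df_{k}': g for k, g in df.groupby(s)}    # one dataframe per year-month

print(sorted(dfs))       # ['df_201105', 'df_201106', 'df_201107', 'df_201111', 'df_201205']
print(dfs['df_201111'])  # rows for November 2011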

How to get the columns with null values in GridDB?

I have the GridDB Python client running on my Ubuntu computer. I would like to get the columns having null values using a GridDB query. I know it’s possible to get the rows with null values, but I want columns this time.
Take, for example, the timeseries table below:
| timestamp           | value1 | value2 | value3 | output |
|---------------------|--------|--------|--------|--------|
| 2021-06-24 12:00:22 | 1.3819 | 2.4214 |        | 0      |
| 2021-06-25 11:55:23 | 4.8726 | 6.2324 | 9.3424 | 1      |
| 2021-06-26 05:40:53 | 6.1313 |        | 5.4648 | 0      |
| 2021-06-27 08:24:19 | 6.7543 |        | 9.7967 | 0      |
| 2021-06-28 13:34:51 | 3.5713 | 1.4452 |        | 1      |
The solution should basically return value2 and value3 columns. Thanks in advance!
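No GridDB-specific API is assumed here, but once the query result has been pulled into a pandas DataFrame (one column per field), the columns containing at least one null can be found in one line. A minimal sketch using the sample data above:

import numpy as np
import pandas as pd

# Stand-in for the rows fetched from the GridDB container.
df = pd.DataFrame({
    'value1': [1.3819, 4.8726, 6.1313, 6.7543, 3.5713],
    'value2': [2.4214, 6.2324, np.nan, np.nan, 1.4452],
    'value3': [np.nan, 9.3424, 5.4648, 9.7967, np.nan],
    'output': [0, 1, 0, 0, 1],
})

null_columns = df.columns[df.isnull().any()].tolist()
print(null_columns)  # ['value2', 'value3']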

MySQL foreign keys batch queries using SQLAlchemy

What is the best way to insert data into two MySQL tables from dataframes using SQLAlchemy with a foreign key constraint? I'm inserting daily data across 2 tables and I want the ID for a specific date to be the same across tables.
Table 1
| ID | Day | Info1 |
| -- | ------------ | ------ |
| 0 | 2022-01-01 | apple |
| 1 | 2022-01-03 | banana |
| 2 | 2022-01-02 | mango |
Table 2
| ID | Day | Info2 |
| -- | ------------ | ------ |
| 0 | 2022-01-01 | green |
| 1 | 2022-01-03 | yellow |
| 2 | 2022-01-02 | orange |
Thanks in Advance
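One way to keep the IDs aligned is to insert the parent table first and the child table second inside a single transaction. The sketch below rests on assumptions, since the real schema isn't shown: table1.ID is the primary key, table2.ID is the foreign key referencing it, and both dataframes already carry the same ID column; the connection string and table names are placeholders.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")  # placeholder URL

df1 = pd.DataFrame({"ID": [0, 1, 2],
                    "Day": ["2022-01-01", "2022-01-03", "2022-01-02"],
                    "Info1": ["apple", "banana", "mango"]})
df2 = pd.DataFrame({"ID": [0, 1, 2],
                    "Day": ["2022-01-01", "2022-01-03", "2022-01-02"],
                    "Info2": ["green", "yellow", "orange"]})

# Parent rows first, child rows second, in one transaction: the foreign key never
# sees a dangling reference, and either both inserts land or neither does.
with engine.begin() as conn:
    df1.to_sql("table1", conn, if_exists="append", index=False)
    df2.to_sql("table2", conn, if_exists="append", index=False)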

Cursor insert returns out-of-memory errors

I am a MySQL newbie. I am using MySQL Connector/Python to insert over 350,000 rows into a table on a MySQL 8 database. My Python code looks like this:
cursor = cnx.cursor(buffered=True)
stmt = "INSERT INTO ......"
cursor.executemany(stmt, data)
cnx.commit()
cursor.close()
which returns the following errors:
[ERROR] [MY-010934] [Server] Out of memory; check if mysqld or some
other process uses all available memory; if not, you may have to use
'ulimit' to allow mysqld to use more memory or you can add more swap
space
If I reduce the number of inserted rows, for example to 200,000 rows, the error disappears. I think there must be some size limit in the MySQL settings, but I don't know which one to change. I manually increased innodb_buffer_pool_size to 500 MB as many answers suggested, but the error continues. What should I do? I printed out my size-related system variables; they are listed below.
| binlog_cache_size | 32768 |
| binlog_row_event_max_size | 8192 |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_dependency_history_size | 25000 |
| bulk_insert_buffer_size | 8388608 |
| delayed_queue_size | 1000 |
| histogram_generation_max_mem_size | 20000000 |
| host_cache_size | 279 |
| innodb_buffer_pool_chunk_size | 134217728 |
| innodb_buffer_pool_size | 134217728 |
| innodb_change_buffer_max_size | 25 |
| innodb_doublewrite_batch_size | 0 |
| innodb_ft_cache_size | 8000000 |
| innodb_ft_max_token_size | 84 |
| innodb_ft_min_token_size | 3 |
| innodb_ft_total_cache_size | 640000000 |
| innodb_log_buffer_size | 16777216 |
| innodb_log_file_size | 50331648 |
| innodb_log_write_ahead_size | 8192 |
| innodb_max_undo_log_size | 1073741824 |
| innodb_online_alter_log_max_size | 134217728 |
| innodb_page_size | 16384 |
| innodb_purge_batch_size | 300 |
| innodb_sort_buffer_size | 1048576 |
| innodb_sync_array_size | 1 |
| join_buffer_size | 262144 |
| key_buffer_size | 8388608 |
| key_cache_block_size | 1024 |
| large_page_size | 0 |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| max_heap_table_size | 16777216 |
| max_join_size | 18446744073709551615 |
| max_relay_log_size | 0 |
| myisam_data_pointer_size | 6 |
| myisam_max_sort_file_size | 9223372036853727232 |
| myisam_mmap_size | 18446744073709551615 |
| myisam_sort_buffer_size | 8388608 |
| ngram_token_size | 2 |
| optimizer_trace_max_mem_size | 1048576 |
| parser_max_mem_size | 18446744073709551615 |
| performance_schema_accounts_size | -1 |
| performance_schema_digests_size | 10000 |
| performance_schema_error_size | 4890 |
| performance_schema_events_stages_history_long_size | 10000 |
| performance_schema_events_stages_history_size | 10 |
| performance_schema_events_statements_history_long_size | 10000 |
| performance_schema_events_statements_history_size | 10 |
| performance_schema_events_transactions_history_long_size | 10000 |
| performance_schema_events_transactions_history_size | 10 |
| performance_schema_events_waits_history_long_size | 10000 |
| performance_schema_events_waits_history_size | 10 |
| performance_schema_hosts_size | -1 |
| performance_schema_session_connect_attrs_size | 512 |
| performance_schema_setup_actors_size | -1 |
| performance_schema_setup_objects_size | -1 |
| performance_schema_users_size | -1 |
| preload_buffer_size | 32768 |
| profiling_history_size | 15 |
| query_alloc_block_size | 8192 |
| query_prealloc_size | 8192 |
| range_alloc_block_size | 4096 |
| range_optimizer_max_mem_size | 8388608 |
| read_buffer_size | 131072 |
| read_rnd_buffer_size | 262144 |
| rpl_read_size | 8192 |
| select_into_buffer_size | 131072 |
| slave_pending_jobs_size_max | 134217728 |
| sort_buffer_size | 262144 |
| thread_cache_size | 9 |
| tmp_table_size | 16777216 |
| transaction_alloc_block_size | 8192 |
| transaction_prealloc_size | 4096 |
Reducing innodb_buffer_pool_size rather than increasing it could solve the memory over-use.
My machine has 1.7 GB of memory, and MySQL uses about 23% of it on startup.
With innodb_buffer_pool_size = 100M, after inserting 400,000 rows of data using MySQL Connector/Python, MySQL's memory usage climbs to 30% and doesn't go down even after the program ends.
However, with innodb_buffer_pool_size = 50M, memory still climbs to about 30% while the Python program is running, but soon goes down after the program ends.
This is a purely experimental finding; maybe some veteran could explain the reasoning behind it.
As @BoarGules suggested, this solution proved to be a savior for my MySQL out-of-memory problem:
https://stackoverflow.com/a/7137270/7977550
MySQL 5.7 with 6 GB of RAM free, using mysql.connector; the values passed to chunks(values) is a list of 2,799,617 items.
def chunks(data, rows=10000):
    """Divides the data into 10000 rows each"""
    for i in range(0, len(data), rows):
        yield data[i:i + rows]

query = "INSERT INTO table VALUES (%s);"
div_values = chunks(values)  # divide into 10000 rows each
for values in div_values:
    cursor.executemany(query, list(zip(values)))
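For completeness, here is a self-contained version of the same chunking idea with mysql.connector; the table name, column, credentials, and the 10,000-row chunk size are placeholders, not the poster's actual setup. Committing after each chunk keeps every individual transaction, and the server-side memory it needs, small.

import mysql.connector

cnx = mysql.connector.connect(user="user", password="secret",
                              host="localhost", database="mydb")
cursor = cnx.cursor()

query = "INSERT INTO my_table (col1) VALUES (%s)"
data = [(i,) for i in range(350000)]       # stand-in for the real 350,000+ rows

for start in range(0, len(data), 10000):
    cursor.executemany(query, data[start:start + 10000])
    cnx.commit()                           # one commit per chunk of 10,000 rows

cursor.close()
cnx.close()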

Merge Django's auth_user with an existing user table

Currently I have a legacy app which refers to a user table with all the custom fields. Since there is a good amount of legacy code referring to that table, I cannot simply rename it to auth_user. So what I'm trying to do is somehow merge (I don't know if that's the right term) auth_user and user.
Below is user table:
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| user_id | int(10) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| address | varchar(100) | NO | | NULL | |
| phone_no | varchar(15) | NO | | NULL | |
| city | varchar(100) | NO | | NULL | |
| state | varchar(100) | NO | | NULL | |
| pin_no | int(10) | NO | | NULL | |
| type | varchar(100) | NO | | NULL | |
| email | varchar(100) | NO | | NULL | |
| password | varchar(100) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| role | varchar(40) | NO | | NULL | |
| creation_date | int(100) | NO | | NULL | |
| edit_date | int(100) | NO | | NULL | |
| country | varchar(255) | NO | | NULL | |
| district | varchar(255) | NO | | NULL | |
| ip | varchar(255) | NO | | NULL | |
| added_by | int(11) | NO | | NULL | |
| is_phone_verified | binary(1) | NO | | 0 | |
| remember_token | varchar(100) | YES | | NULL | |
| disclaimer_agreed | int(11) | YES | | 0 | |
| mobile_login | tinyint(4) | NO | | 0 | |
+-------------------+--------------+------+-----+---------+----------------+
and django's auth_user table:
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| password | varchar(128) | NO | | NULL | |
| last_login | datetime(6) | YES | | NULL | |
| is_superuser | tinyint(1) | NO | | NULL | |
| username | varchar(150) | NO | UNI | NULL | |
| first_name | varchar(30) | NO | | NULL | |
| last_name | varchar(30) | NO | | NULL | |
| email | varchar(254) | NO | | NULL | |
| is_staff | tinyint(1) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| date_joined | datetime(6) | NO | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
What I want is a single table that Django will refer to for contrib.auth-related functionality and that, at the same time, does not break my legacy code. Maybe something like this:
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| user_id | int(10) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| address | varchar(100) | NO | | NULL | |
| phone_no | varchar(15) | NO | | NULL | |
| city | varchar(100) | NO | | NULL | |
| state | varchar(100) | NO | | NULL | |
| pin_no | int(10) | NO | | NULL | |
| type | varchar(100) | NO | | NULL | |
| email | varchar(100) | NO | | NULL | |
| password | varchar(100) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| role | varchar(40) | NO | | NULL | |
| creation_date | int(100) | NO | | NULL | |
| edit_date | int(100) | NO | | NULL | |
| country | varchar(255) | NO | | NULL | |
| district | varchar(255) | NO | | NULL | |
| ip | varchar(255) | NO | | NULL | |
| added_by | int(11) | NO | | NULL | |
| is_phone_verified | binary(1) | NO | | 0 | |
| remember_token | varchar(100) | YES | | NULL | |
| disclaimer_agreed | int(11) | YES | | 0 | |
| mobile_login | tinyint(4) | NO | | 0 | |
| last_login | datetime(6) | YES | | NULL | |
| is_superuser | tinyint(1) | NO | | NULL | |
| username | varchar(150) | NO | UNI | NULL | |
| first_name | varchar(30) | NO | | NULL | |
| last_name | varchar(30) | NO | | NULL | |
| is_staff | tinyint(1) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| date_joined | datetime(6) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+----------------+
The motive here is to take advantage of Django's built-in authentication and permission system without breaking legacy code. I think there must be a solution for this, as this is not the first time someone has ported a legacy application to Django.
I would also like to mention this link: How to Extend Django User Model. But I'm not sure which approach is best to adopt, or whether I should be doing something completely different.
Django explicitly supports custom user models. Create a model for your existing table, make it inherit from AbstractBaseUser, and set the AUTH_USER_MODEL setting to point to your new model. See the comprehensive docs.
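A minimal sketch of what that can look like for the legacy table above; the manager, the choice of email as USERNAME_FIELD (which requires it to be unique), and managed = False are assumptions, not details taken from the question.

from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.db import models


class LegacyUserManager(BaseUserManager):
    def create_user(self, email, password=None, **extra_fields):
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user


class LegacyUser(AbstractBaseUser):
    user_id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=100)
    email = models.CharField(max_length=100, unique=True)  # assumed unique, needed for USERNAME_FIELD
    is_active = models.BooleanField(default=True)
    # ... map the remaining legacy columns here ...

    last_login = None  # the legacy table has no last_login column, so drop the inherited field

    USERNAME_FIELD = 'email'
    objects = LegacyUserManager()

    class Meta:
        db_table = 'user'   # keep pointing at the existing legacy table
        managed = False     # the legacy app keeps owning the schema

# settings.py
# AUTH_USER_MODEL = 'yourapp.LegacyUser'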
We can read in more detail from a Django ticket how to migrate from a built-in User model to a custom User model.
Quoting from Carsten Fuchs
Assumptions
Your project doesn't have a custom user model yet.
All existing users must be kept.
There are no pending migrations and all existing migrations are applied.
It is acceptable that all previous migrations are lost and can no longer be unapplied, even if you use version control to check out old commits that still have the migration files. This is the relevant downside of this approach.
Preparations
Make sure that any third-party apps that refer to the Django User model only use the generic referencing methods.
Make sure that your own reusable apps (apps that are intended to be used by others) use the generic reference methods.
I suggest not doing the same with your project apps: the switch to a custom user model is only done once per project and never again. It is easier (and in my opinion also clearer) to change from django.contrib.auth.models import User to something else (as detailed below) than to replace it with generic references that are not needed in project code.
Make sure that you have a backup of your code and database!
Update the code
You can create the new user model in any existing app or a newly created one. My preference is to create a new app:
./manage.py startapp Accounts
I chose the name "Accounts", but any other name works as well.
Aymeric: "Create a custom user model identical to auth.User, call it User (so many-to-many tables keep the same name) and set db_table='auth_user' (so it uses the same table)." In Accounts/models.py:
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    class Meta:
        db_table = 'auth_user'
In settings.py, add the app to INSTALLED_APPS and update the AUTH_USER_MODEL setting:
INSTALLED_APPS = (
    # ...
    'Accounts',
)

AUTH_USER_MODEL = 'Accounts.User'
In your project code, replace all imports of the Django user model:
from django.contrib.auth.models import User with the new, custom one:
from Accounts.models import User
Delete all old migrations. (Beforehand, see if comment 14 is relevant to you!) For example, in the project root:
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
Create new migrations from scratch:
./manage.py makemigrations
Make any changes to your admin.py files as required. (I cannot give any solid information here, but this is not crucial for the result, so the details can still be reviewed later.)
Make sure that your test suite completes successfully! (A fresh test database must be used; it cannot be kept from previous runs.)
At this point, the changes to the code are complete. This is a good time for a commit.
Note that we're done, except that the new migration files mismatch the contents of the django_migrations table.
(It may even be possible to serve your project at this point: it's easy to back off before the database is actually changed. Only do this if you understand that you cannot even touch the migrations system as long as the steps below are not completed!)
Update the database
Truncate the django_migrations table. MySQL 8 example:
TRUNCATE TABLE django_migrations;
This is possibly different for other databases or versions of MySQL earlier than 8.
Fake-apply the new set of migrations
./manage.py migrate --fake
Check the ContentTypes as described in comment 12.
Conclusion
The upgrade to the custom user model is now complete. You can make changes to this model and generate and apply migrations for it as with any other models.
As a first step, you may wish to unset db_table and generate and apply the resulting migrations.
In my opinion, the startproject management command should anticipate the introduction of a custom user model.
