I always get this error, even though I put a default value on avg_winning_trade:
django.db.utils.IntegrityError: null value in column "avg_winning_trade" of relation "trading_summary" violates not-null constraint
DETAIL: Failing row contains (43897354-d89b-4014-a607-a0e6ee423b52, 1, 2023-02-05 11:09:56.727199+00, null, 1, 19, 1, 0.00, null, -1.01, -1.01, 0, 0, 0.00, 0%, 0.00).
The second null value is avg_winning_trade, which causes the error here; I don't know why it is null.
This is my model:
class Summary(ActivatorModel, Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True, blank=True)
    starting_balance = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    total_number_of_trades = models.PositiveIntegerField(default=0)
    total_number_of_winning_trades = models.PositiveIntegerField(default=0)
    total_number_of_losing_trades = models.PositiveIntegerField(default=0)
    total_number_of_be_trade = models.PositiveIntegerField(default=0)
    largest_winning_trade = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    largest_losing_trade = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    avg_winning_trade = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    avg_losing_trade = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    total_trade_costs = models.DecimalField(default=0, max_digits=10, decimal_places=2)
    trade_win_rate = models.CharField(max_length=10)
This is the table
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------------------------------+--------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
id | uuid | | not null | | plain | | |
status | integer | | not null | | plain | | |
activate_date | timestamp with time zone | | | | plain | | |
deactivate_date | timestamp with time zone | | | | plain | | |
total_number_of_trades | integer | | not null | | plain | | |
user_id | integer | | | | plain | | |
total_number_of_winning_trades | integer | | not null | | plain | | |
avg_losing_trade | numeric(10,2) | | not null | | main | | |
avg_winning_trade | numeric(10,2) | | not null | | main | | |
largest_losing_trade | numeric(10,2) | | not null | | main | | |
largest_winning_trade | numeric(10,2) | | not null | | main | | |
total_number_of_be_trade | integer | | not null | | plain | | |
total_number_of_losing_trades | integer | | not null | | plain | | |
total_trade_costs | numeric(10,2) | | not null | | main | | |
trade_win_rate | character varying(10) | | not null | | extended | | |
starting_balance | numeric(10,2) | | not null | | main | | |
The avg_winning_trade field value should not be null.
My bad, I think I figured it out myself. avg_winning_trade has a default value of 0, but in my testing I was sending a null value to it, since there were no winning trades.
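Since the fix is to avoid sending None when there are no winning trades, a small guard computed before assignment keeps the NOT NULL column safe. This is only a sketch; safe_avg and its arguments are hypothetical names, not part of the original model:

```python
from decimal import Decimal

def safe_avg(total_pnl, win_count):
    """Average winning trade, falling back to 0 when there are no wins.

    Hypothetical helper: compute the value before assigning it to the
    DecimalField so that None never reaches the NOT NULL column.
    """
    if not win_count:
        return Decimal("0.00")
    return (Decimal(total_pnl) / win_count).quantize(Decimal("0.01"))
```

In the test setup, assigning `summary.avg_winning_trade = safe_avg(total, wins)` would then store 0.00 instead of null when there are no winning trades.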
I am exploring ways, in Bash as well as in Python, to print values onto ASCII art to improve readability.
The main difficulty is updating the values without changing the layout of the art.
The ASCII art looks something like this:
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
| ${CPU_W}W | | | | |
| ${CPU_Freq}MHz| | | |Avail Mem${size}G|
| | |${GPU_W}W | | Used Mem${size}G|
| | |${GPU}Mhz | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
So far, I have been able to prevent the format from changing in Bash, but this approach does not let me substitute the values:
cat << "EOF"
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
| ${CPU_W}W | | | | |
| ${CPU_Freq}MHz| | | |Avail Mem${size}G|
| | |${GPU_W}W | | Used Mem${size}G|
| | |${GPU}Mhz | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
EOF
Without the quoted heredoc (cat << "EOF" ...ascii art.... EOF), I can update the values, but the format keeps changing.
Is there any way to keep the format the same even as the values change? Thanks in advance.
I'm not exactly sure what you're trying to achieve... is it something like this?
f = """
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
|{cpuW:>15}| | | | |
|{cpuF:>15}| | | |Avail Mem{ramA:>8}|
| | |{gpuW:>10}| | Used Mem{ramU:>8}|
| | |{gpuF:>10}| | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
"""
def main():
    cpuW = 12.2
    cpuF = 2354
    gpuW = 15.2
    gpuF = 789
    ramA = 16.1
    ramU = 12.2
    d = {
        'cpuW': f'{cpuW:.1f} W ',
        'cpuF': f'{cpuF:d} MHz ',
        'gpuW': f'{gpuW:.1f} W ',
        'gpuF': f'{gpuF:d} MHz ',
        'ramA': f' {ramA:.1f} GB',
        'ramU': f' {ramU:.1f} GB',
    }
    out = f.format_map(d)
    print(out)

if __name__ == '__main__':
    main()
Output is:
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
| 12.2 W | | | | |
| 2354 MHz | | | |Avail Mem 16.1 GB|
| | | 15.2 W | | Used Mem 12.2 GB|
| | | 789 MHz | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
In Python, you can use the format() built-in and the str.ljust method.
format(value[, format_spec])
Convert a value to a “formatted” representation, as controlled by format_spec. The interpretation of format_spec will depend on the type of the value argument; however, there is a standard formatting syntax that is used by most built-in types: Format Specification Mini-Language.
It depends on what you want the notation to be (e.g. how many zeros, etc.).
Here is an example:
>>> characters = 10
>>> format(32,".2E").zfill(characters)
'003.20E+01'
>>> #first number is the minimum number of characters
>>> format(32,"{0}.2E".format(characters))
' 3.20E+01'
Or, with f-strings:
>>> f"{32:.2E}"
'3.20E+01'
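The answer also mentions str.ljust, which pads a string on the right to a fixed width (str.rjust pads on the left); a quick illustration:

```python
label = "CPU"
# ljust/rjust pad with spaces up to the given width;
# strings already longer than the width are left unchanged.
print("[" + label.ljust(8) + "]")  # [CPU     ]
print("[" + label.rjust(8) + "]")  # [     CPU]
```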
An example of how to use it in ASCII art:
>>> def create_ascii_art(CPU_W,CPU_freq,GPU_W,
... GPU,Free_Mem,Used_Mem):
... CPU_W,CPU_freq,GPU_W,GPU,Free_Mem,Used_Mem = map(float,(CPU_W,CPU_freq,GPU_W,GPU,Free_Mem,Used_Mem))
... return f'''
...========================================================
...| | | | | |
...| | | | | |
...| CPU | | GPU | | HDD |
...| | | | | |
...| | | | | |
...|{CPU_W:12.2E}W | | | | |
...|{CPU_freq:12.2E}MHz| | | |Avail Mem{Free_Mem:7.3}G|
...| | |{GPU_W:8.2E}W | | Used Mem{Used_Mem:7.3}G|
...| | |{GPU:6.0E}Mhz | | |
...| | | | | |
...| | | | | |
...| | | | | |
...| | | | | |
...========================================================'''
>>> print(create_ascii_art(1,1,1,1,1,1))
Output:
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
| 1.00E+00W | | | | |
| 1.00E+00MHz| | | |Avail Mem 1.0G|
| | |1.00E+00W | | Used Mem 1.0G|
| | | 1E+00Mhz | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
The "format" does not change; the display width of the strings in the here document depends on the values of the variables. You'll want to make sure the values are padded to the expected width (or truncated, if necessary).
A slightly tortured way to accomplish this is to pad the actual values:
#!/bin/bash
# (assume CPU_W, CPU_Freq, GPU_W, GPU, and size were assigned earlier)
printf -v CPU_W '%8s' "$CPU_W"
printf -v CPU_Freq '%11s' "$CPU_Freq"
printf -v GPU_W '%8s' "$GPU_W"
printf -v GPU '%6s' "$GPU"
printf -v size '%7s' "$size"
cat <<EOF
========================================================
| | | | | |
| | | | | |
| CPU | | GPU | | HDD |
| | | | | |
| | | | | |
| ${CPU_W}W | | | | |
| ${CPU_Freq}MHz| | | |Avail Mem${size}G|
| | |${GPU_W}W | | Used Mem${size}G|
| | |${GPU}Mhz | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
========================================================
EOF
Probably a better solution altogether is to use printf directly to format the output. (Personally, I would perceive a notable reduction in my blood pressure if you removed at least half of the repetitive "ASCII art", which would somewhat tidy up the necessary code, too.)
We can't know what values these variables contain; if they are integers, consider %d (or %i) instead of %s for the printf format code, or correspondingly %f (or perhaps %g) for floating-point values.
If you need truncation, try %8.8s instead of %8s, etc.
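To make the padding and truncation concrete, here is a small demonstration of the difference between a plain field width and a width with a precision (the values are made up):

```shell
#!/bin/bash
# %8s pads to at least 8 columns; %8.8s additionally clips longer strings.
val="123456789012"
printf -v padded '%8s' "abc"
printf -v clipped '%8.8s' "$val"
echo "[$padded]"   # [     abc]
echo "[$clipped]"  # [12345678]
```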
An altogether nicer approach is to use a library for producing this format. I can't recommend any particular one for Bash; for Python, look at tabulate.
I want to update table B from the data in table A.
class TableA(models.Model):
    a_id = models.BigAutoField(primary_key=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE)  # on_delete is required in Django 2+
    quantity = models.IntegerField()
    buy_price = models.FloatField(default=0)

class TableB(models.Model):
    """Historical data: day-wise report."""
    b_id = models.BigAutoField(primary_key=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    current_amount = models.FloatField(default=0)
    invested_amount = models.FloatField(default=0)
    timestamp = models.DateField()
Table A
| a_id | user | quantity | buy_price |
| -------- | -------- | ------- | --------- |
| 1 | 12 | 240 | 10.2 |
| 2 | 13 | 120 | 2.3 |
Table B (current_price = 14.1)
| b_id | user | current_amount | invested_amount | timestamp |
| -------- | -------- | ------- | --------------- | ---------------|
| 1 | 12 | 3384 | 2448 | 20-04-2022 |
| 2 | 13 | 1692 | 276 | 20-04-2022 |
I have 20 million users (rows) in Table A, and now I want to update Table B (creating new instances for each day) with their gold prices.
How can I do it better in the Django ORM?
Can I do it in a single query? Please suggest an approach.
current_price = 14.1
users = TableA.objects.all()
for user in users:
    TableB.objects.filter(user=user.user).update(
        current_amount=user.quantity * current_price)
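A single UPDATE cannot create the new per-day rows, so for 20 million rows the usual approach is to build the TableB instances in chunks and insert each chunk with bulk_create (which itself accepts a batch_size argument). The chunking helper below is a plain-Python sketch; batched and the chunk size are my own names, not Django API:

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

# In a Django management command this would look roughly like:
#   for chunk in batched(tableb_instances, 10_000):
#       TableB.objects.bulk_create(chunk)
```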
I am working on a token management system. Tokens are issued in both a sync and an async manner. Sync tokens are issued based on the current time and the previous token's end time; it's basically a queue. With async tokens, the user can provide the time at which they want an appointment; the database is checked, and if that time frame is available, the token is issued.
My sync tokens are working perfectly fine. The problem is that I am not sure how to synchronize the database across sync and async tokens.
For example, if one user asks for an async token at 15:00:00 and another asks for 14:00:00, while the latest sync token's end time is 12:00:00, how should that data be inserted into the table?
Note: there can be only one active token at a particular time, either sync or async. Each transaction is allotted a maximum of 5 minutes. The timings are calculated in the Python script. The bank_open time is 10:00:00 and the close_time is 18:00:00.
Here's the MySQL table:
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
| id | type | token | date | start_time | end_time | no_of_trans |
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
| 36 | sync | 4059cdea-6e9d-4943-8d4c-f5df99d12ff5 | 2020-4-22 | 15:53:15.347864 | 15:58:15.347864 | 1 |
| 37 | sync | 5b420dec-9fc2-415f-b2f2-3649a3c6fa26 | 2020-4-22 | 15:58:15.347864 | 16:18:15.347864 | 4 |
| 38 | sync | 72501a99-c081-48ee-a05d-959499eeb944 | 2020-4-22 | 16:18:15.347864 | 16:33:15.347864 | 3 |
| 39 | sync | 3d08050e-115f-4bb1-a98a-5991e87c4ead | 2020-4-22 | 16:33:15.347864 | 16:43:15.347864 | 2 |
| 40 | sync | c19e664b-0df0-4149-8b13-98d7450ea72f | 2020-4-22 | 16:43:15.347864 | 16:58:15.347864 | 3 |
| 41 | async | 56a6f5bf-0f1a-4694-8c69-c67f7ed39147 | 2020-4-22 | 16:59:00.123456 | 17:04:00.123456 | 1 |
| 42 | async | bac9b649-4462-46cb-a290-df63cb4b7158 | 2020-4-22 | 17:00:00.123456 | 17:05:00.123456 | 1 |
| 43 | async | e9b12cf7-c03f-4d2a-a4a4-f19b0d7d50f3 | 2020-4-22 | 17:47:00.123456 | 17:52:00.123456 | 1 |
| 44 | async | e9b12cf7-c03f-4d2a-a4a4-f19b0d7d50f3 | 2020-4-22 | 17:47:00.123456 | 17:52:00.123456 | 1 |
| 45 | async | a04f1347-ec1d-469d-911a-7783658e1e71 | 2020-4-22 | 17:50:00.123456 | 17:55:00.123456 | 1 |
+----+-------+--------------------------------------+-----------+-----------------+-----------------+-------------+
And this is the schema:
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| type | varchar(255) | YES | | NULL | |
| token | varchar(255) | YES | | NULL | |
| date | varchar(255) | YES | | NULL | |
| start_time | varchar(255) | YES | | NULL | |
| end_time | varchar(255) | YES | | NULL | |
| no_of_trans | int(11) | YES | | NULL | |
+-------------+--------------+------+-----+---------+----------------+
Currently, I'm checking these conditions in the Python script:
for i in range(len(async_end_list)):
    # print(async_start_list[i], '....', async_end_list[i])
    if user_time >= async_start_list[i] and user_time <= async_end_list[i]:
        print("Wrong input")
        break
    if user_time > async_start_list[i] and user_time > async_end_list[i]:
        if (end_time < bank_close and user_time > bank_open
                and user_time < bank_close and user_time > latest_sync_end_time):
            if total_trans <= max_trans:
                mycursor.execute(
                    """INSERT INTO tokens (type, token, date, start_time, end_time, no_of_trans)
                       VALUES (%s, %s, %s, %s, %s, %s)""",
                    ('async', rand_token, today, user_time, end_time, no_of_trans))
                mydb.commit()
                print(mycursor.rowcount, "record inserted.")
                break
            else:
                print("Maximum transactions for day has exceeded!!")
                break
        else:
            print("Record cannot be inserted.")
            break
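The chained comparisons in that loop amount to a standard interval-overlap test plus a banking-hours check; it can be factored out like this (function and parameter names are mine, not from the question):

```python
from datetime import time

def overlaps(start_a, end_a, start_b, end_b):
    """True if half-open intervals [start_a, end_a) and [start_b, end_b) intersect."""
    return start_a < end_b and start_b < end_a

def slot_is_free(start, end, existing, bank_open=time(10, 0), bank_close=time(18, 0)):
    """A requested slot is free if it lies inside banking hours and
    overlaps no existing token's (start, end) pair."""
    if start < bank_open or end > bank_close:
        return False
    return not any(overlaps(start, end, s, e) for s, e in existing)
```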
Say I have the following table:
+----+---------+--------+---------+---------+---------+---------+-----------+-----------+-----------+----------+-----------+------------+------------+---------+---+
| 1 | 0.72694 | 1.4742 | 0.32396 | 0.98535 | 1 | 0.83592 | 0.0046566 | 0.0039465 | 0.04779 | 0.12795 | 0.016108 | 0.0052323 | 0.00027477 | 1.1756 | 1 |
| 2 | 0.74173 | 1.5257 | 0.36116 | 0.98152 | 0.99825 | 0.79867 | 0.0052423 | 0.0050016 | 0.02416 | 0.090476 | 0.0081195 | 0.002708 | 7.48E-05 | 0.69659 | 1 |
| 3 | 0.76722 | 1.5725 | 0.38998 | 0.97755 | 1 | 0.80812 | 0.0074573 | 0.010121 | 0.011897 | 0.057445 | 0.0032891 | 0.00092068 | 3.79E-05 | 0.44348 | 1 |
| 4 | 0.73797 | 1.4597 | 0.35376 | 0.97566 | 1 | 0.81697 | 0.0068768 | 0.0086068 | 0.01595 | 0.065491 | 0.0042707 | 0.0011544 | 6.63E-05 | 0.58785 | 1 |
| 5 | 0.82301 | 1.7707 | 0.44462 | 0.97698 | 1 | 0.75493 | 0.007428 | 0.010042 | 0.0079379 | 0.045339 | 0.0020514 | 0.00055986 | 2.35E-05 | 0.34214 | 1 |
| 7 | 0.82063 | 1.7529 | 0.44458 | 0.97964 | 0.99649 | 0.7677 | 0.0059279 | 0.0063954 | 0.018375 | 0.080587 | 0.0064523 | 0.0022713 | 4.15E-05 | 0.53904 | 1 |
| 8 | 0.77982 | 1.6215 | 0.39222 | 0.98512 | 0.99825 | 0.80816 | 0.0050987 | 0.0047314 | 0.024875 | 0.089686 | 0.0079794 | 0.0024664 | 0.00014676 | 0.66975 | 1 |
| 9 | 0.83089 | 1.8199 | 0.45693 | 0.9824 | 1 | 0.77106 | 0.0060055 | 0.006564 | 0.0072447 | 0.040616 | 0.0016469 | 0.00038812 | 3.29E-05 | 0.33696 | 1 |
| 11 | 0.7459 | 1.4927 | 0.34116 | 0.98296 | 1 | 0.83088 | 0.0055665 | 0.0056395 | 0.0057679 | 0.036511 | 0.0013313 | 0.00030872 | 3.18E-05 | 0.25026 | 1 |
| 12 | 0.79606 | 1.6934 | 0.43387 | 0.98181 | 1 | 0.76985 | 0.0077992 | 0.011071 | 0.013677 | 0.057832 | 0.0033334 | 0.00081648 | 0.00013855 | 0.49751 | 1 |
+----+---------+--------+---------+---------+---------+---------+-----------+-----------+-----------+----------+-----------+------------+------------+---------+---+
I have two sets of row indices:
set1 = [1,3,5,8,9]
set2 = [2,4,7,10,10]
Note: here, the first row has index value 1. The two lists will always have the same length.
What I am looking for is a fast and pythonic way to get the difference of column values for corresponding row indices, that is: the differences 1-2, 3-4, 5-7, 8-10, 9-10.
For this example, my resultant dataframe is the following:
+---+---------+--------+---------+---------+---------+---------+-----------+-----------+-----------+----------+-----------+------------+------------+---------+---+
| 1 | 0.01479 | 0.0515 | 0.0372 | 0.00383 | 0.00175 | 0.03725 | 0.0005857 | 0.0010551 | 0.02363 | 0.037474 | 0.0079885 | 0.0025243 | 0.00019997 | 0.47901 | 0 |
| 1 | 0.02925 | 0.1128 | 0.03622 | 0.00189 | 0 | 0.00885 | 0.0005805 | 0.0015142 | 0.004053 | 0.008046 | 0.0009816 | 0.00023372 | 0.0000284 | 0.14437 | 0 |
| 3 | 0.04319 | 0.1492 | 0.0524 | 0.00814 | 0.00175 | 0.05323 | 0.0023293 | 0.0053106 | 0.0169371 | 0.044347 | 0.005928 | 0.00190654 | 0.00012326 | 0.32761 | 0 |
| 3 | 0.03483 | 0.1265 | 0.02306 | 0.00059 | 0 | 0.00121 | 0.0017937 | 0.004507 | 0.0064323 | 0.017216 | 0.0016865 | 0.00042836 | 0.00010565 | 0.16055 | 0 |
| 1 | 0.05016 | 0.2007 | 0.09271 | 0.00115 | 0 | 0.06103 | 0.0022327 | 0.0054315 | 0.0079091 | 0.021321 | 0.0020021 | 0.00050776 | 0.00010675 | 0.24725 | 0 |
+---+---------+--------+---------+---------+---------+---------+-----------+-----------+-----------+----------+-----------+------------+------------+---------+---+
My resultant difference values are absolute here.
I can't apply diff(), since the row indices may not be consecutive.
I am currently achieving this by looping through the sets.
Is there a pandas trick to do this?
Use loc-based indexing:
df.loc[set1].values - df.loc[set2].values
Ensure that len(set1) is equal to len(set2). Also, keep in mind setX is a counter-intuitive name for list objects.
You can select rows by reindexing and then subtract:
df = df.reindex(set1) - df.reindex(set2).values
loc or iloc will raise a future warning, since passing list-likes to .loc or [] with any missing label will raise KeyError in the future.
In short, try the following:
df.iloc[::2].values - df.iloc[1::2].values
PS: alternatively, if (as in your question) the indices follow no simple rule:
df.iloc[set1].values - df.iloc[set2].values
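Putting the loc-based answer together on a toy frame (the numbers here are made up and smaller than the question's table):

```python
import numpy as np
import pandas as pd

# 1-based row labels, mirroring the question's indexing.
df = pd.DataFrame({"a": [10.0, 7.0, 5.0, 1.0],
                   "b": [4.0, 2.0, 9.0, 3.0]},
                  index=[1, 2, 3, 4])

set1 = [1, 3]
set2 = [2, 4]

# .values strips the index, so subtraction is positional, not label-aligned.
diff = np.abs(df.loc[set1].values - df.loc[set2].values)
result = pd.DataFrame(diff, columns=df.columns)
print(result)
```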
Currently I have a legacy app that refers to a user table with all the custom fields. Since a good amount of legacy code refers to that table, I cannot simply rename it to auth_user. So what I'm trying to do is somehow merge (I don't know if that's the right term) auth_user and user.
Below is user table:
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| user_id | int(10) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| address | varchar(100) | NO | | NULL | |
| phone_no | varchar(15) | NO | | NULL | |
| city | varchar(100) | NO | | NULL | |
| state | varchar(100) | NO | | NULL | |
| pin_no | int(10) | NO | | NULL | |
| type | varchar(100) | NO | | NULL | |
| email | varchar(100) | NO | | NULL | |
| password | varchar(100) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| role | varchar(40) | NO | | NULL | |
| creation_date | int(100) | NO | | NULL | |
| edit_date | int(100) | NO | | NULL | |
| country | varchar(255) | NO | | NULL | |
| district | varchar(255) | NO | | NULL | |
| ip | varchar(255) | NO | | NULL | |
| added_by | int(11) | NO | | NULL | |
| is_phone_verified | binary(1) | NO | | 0 | |
| remember_token | varchar(100) | YES | | NULL | |
| disclaimer_agreed | int(11) | YES | | 0 | |
| mobile_login | tinyint(4) | NO | | 0 | |
+-------------------+--------------+------+-----+---------+----------------+
and django's auth_user table:
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| password | varchar(128) | NO | | NULL | |
| last_login | datetime(6) | YES | | NULL | |
| is_superuser | tinyint(1) | NO | | NULL | |
| username | varchar(150) | NO | UNI | NULL | |
| first_name | varchar(30) | NO | | NULL | |
| last_name | varchar(30) | NO | | NULL | |
| email | varchar(254) | NO | | NULL | |
| is_staff | tinyint(1) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| date_joined | datetime(6) | NO | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
What I would want is a single table, that django will refer to in using contrib.auth related stuff and also at the same time does not deprecate my legacy code. maybe something like this:
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| user_id | int(10) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| address | varchar(100) | NO | | NULL | |
| phone_no | varchar(15) | NO | | NULL | |
| city | varchar(100) | NO | | NULL | |
| state | varchar(100) | NO | | NULL | |
| pin_no | int(10) | NO | | NULL | |
| type | varchar(100) | NO | | NULL | |
| email | varchar(100) | NO | | NULL | |
| password | varchar(100) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| role | varchar(40) | NO | | NULL | |
| creation_date | int(100) | NO | | NULL | |
| edit_date | int(100) | NO | | NULL | |
| country | varchar(255) | NO | | NULL | |
| district | varchar(255) | NO | | NULL | |
| ip | varchar(255) | NO | | NULL | |
| added_by | int(11) | NO | | NULL | |
| is_phone_verified | binary(1) | NO | | 0 | |
| remember_token | varchar(100) | YES | | NULL | |
| disclaimer_agreed | int(11) | YES | | 0 | |
| mobile_login | tinyint(4) | NO | | 0 | |
| last_login | datetime(6) | YES | | NULL | |
| is_superuser | tinyint(1) | NO | | NULL | |
| username | varchar(150) | NO | UNI | NULL | |
| first_name | varchar(30) | NO | | NULL | |
| last_name | varchar(30) | NO | | NULL | |
| is_staff | tinyint(1) | NO | | NULL | |
| is_active | tinyint(1) | NO | | NULL | |
| date_joined | datetime(6) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+----------------+
The motive here is to take advantage of Django's built-in authentication and permission system without breaking legacy code. I think there must be a solution for this, as this is not the first time someone has ported a legacy application to Django.
I would also like to mention this link: How to Extend Django User Model. But I'm not sure which approach is best to adopt, or whether I should be doing something completely different.
Django explicitly supports custom user models. Create a model for your existing table, make it inherit from AbstractBaseUser, and set the AUTH_USER_MODEL setting to point to your new model. See the comprehensive docs.
A Django ticket describes in more detail how to migrate from the built-in User model to a custom User model.
Quoting from Carsten Fuchs
Assumptions
Your project doesn't have a custom user model yet.
All existing users must be kept.
There are no pending migrations and all existing migrations are applied.
It is acceptable that all previous migrations are lost and can no longer be unapplied, even if you use version control to check out old commits that still have the migration files. This is the relevant downside of this approach.
Preparations
Make sure that any third party apps that refer to the Django User model only use the generic referencing methods.
Make sure that your own reusable apps (apps that are intended to be used by others) use the generic reference methods.
I suggest not doing the same with your project apps: the switch to a custom user model is only done once per project and never again. It is easier (and in my opinion also clearer) to change from django.contrib.auth.models import User to something else (as detailed below) than to replace it with generic references that are not needed in project code.
Make sure that you have a backup of your code and database!
Update the code
You can create the new user model in any existing app or a newly created one. My preference is to create a new app:
./manage.py startapp Accounts
I chose the name "Accounts", but any other name works as well.
Aymeric: „Create a custom user model identical to auth.User, call it User (so many-to-many tables keep the same name) and set db_table='auth_user' (so it uses the same table).“ In Accounts/models.py:
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    class Meta:
        db_table = 'auth_user'
In settings.py, add the app to INSTALLED_APPS and update the AUTH_USER_MODEL setting:
INSTALLED_APPS = (
    # ...
    'Accounts',
)

AUTH_USER_MODEL = 'Accounts.User'
In your project code, replace all imports of the Django user model:
from django.contrib.auth.models import User with the new, custom one:
from Accounts.models import User
Delete all old migrations. (Beforehand, see if comment 14 is relevant to you!) For example, in the project root:
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
Create new migrations from scratch:
./manage.py makemigrations
Make any changes to your admin.py files as required. (I cannot give any solid information here, but this is not crucial for the result, so the details can still be reviewed later.)
Make sure that your test suite completes successfully! (A fresh test database must be used; it cannot be kept from previous runs.)
At this point, the changes to the code are complete. This is a good time for a commit.
Note that we're done – except that the new migration files mismatch the contents of the django_migrations table.
(It may even be possible to serve your project at this point: it's easy to back off before the database is actually changed. Only do this if you understand that you cannot even touch the migrations system as long as the steps below are not completed!)
Update the database
Truncate the django_migrations table. MySQL 8 example:
TRUNCATE TABLE django_migrations;
This is possibly different for other databases or versions of MySQL < 8.
Fake-apply the new set of migrations
./manage.py migrate --fake
Check the ContentTypes as described at comment 12
Conclusion
The upgrade to the custom user model is now complete. You can make changes to this model and generate and apply migrations for it as with any other model.
As a first step, you may wish to unset db_table and generate and apply the resulting migrations.
In my opinion, the startproject management command should anticipate the introduction of a custom user model.