I have a Quote table that generates a price based on lots of parameters. The price is generated from data in several other tables: Charges, Coupon, Promotions, etc. The obvious way to deal with this is with ForeignKeys. Here's the table (all _id fields are foreign keys):
Quote
#user input fields
charge_id
promotion_id
coupon_id
tariff_id
Everything looks good: each Quote record has very detailed data that tells you where the price comes from. The problem is that the data in the tables it depends on isn't guaranteed to stay the same; it might get deleted or changed. Say a Quote has a foreign key to a tariff, and that tariff later gets changed. The records associated with the quote no longer tell the same story. How do you deal with something like this? I'd really appreciate it if you could recommend some theory related to this.
If you don't want your quote values to change when the related objects change, your best bet is to add the relevant fields from each of the foreign-key models to the Quote model itself.
Then, while calculating a quote, fetch the data from those (now no longer) related objects and save a copy of it in the Quote table.
Any later changes to the other tables will then no longer affect existing quotes.
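A minimal sketch of that idea (the Tariff/Charge models and their rate/amount fields are hypothetical, not taken from your code):

from django.db import models

class Quote(models.Model):
    # Keep the foreign keys for traceability...
    tariff = models.ForeignKey('Tariff', null=True, on_delete=models.SET_NULL)
    charge = models.ForeignKey('Charge', null=True, on_delete=models.SET_NULL)

    # ...but also snapshot the values that were used at quote time.
    tariff_rate = models.DecimalField(max_digits=10, decimal_places=2)
    charge_amount = models.DecimalField(max_digits=10, decimal_places=2)
    price = models.DecimalField(max_digits=10, decimal_places=2)

    def save(self, *args, **kwargs):
        if self._state.adding:  # only snapshot when the quote is first created
            self.tariff_rate = self.tariff.rate
            self.charge_amount = self.charge.amount
            self.price = self.tariff_rate + self.charge_amount
        super().save(*args, **kwargs)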
Another option would be to use a library like django-simple-history, which keeps track of every change to your models over time.
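A minimal sketch of that approach, assuming django-simple-history is installed and added to INSTALLED_APPS (the Tariff fields and the created_at timestamp are hypothetical):

from django.db import models
from simple_history.models import HistoricalRecords

class Tariff(models.Model):
    name = models.CharField(max_length=100)
    rate = models.DecimalField(max_digits=10, decimal_places=2)
    history = HistoricalRecords()  # records every change to Tariff rows

# Later you can ask what the tariffs looked like at quote time, e.g.:
# Tariff.history.as_of(quote.created_at)  # assuming Quote has a created_at field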
Related
I've got a large dataset to work with to create a storage system that monitors movement in a store. There are over 300 products in that store, and the main structure of all the tables is the same; the only difference is the data inside. There's one large table called StoringTF, and I want to create a lot of tables called Product_1, Product_2, Product_3, etc.
The main large dataset (table) looks like this:
CREATE TABLE StoringTF (
Store_code INTEGER,
Store TEXT,
Product_Date TEXT,
Permission INTEGER,
Product_Code INTEGER,
Product_Name TEXT,
Incoming INTEGER,
Unit_Buying_Price INTEGER,
Total_Buying_Price INTEGER,
Outgoing INTEGER,
Unit_Sell_Price INTEGER,
Total_Sell_Price INTEGER,
Description TEXT)
I want the user to input a code in an Entry widget called PCode. It looks like this:
PCode = Entry(root, width=40)
PCode.grid(row=0, column=0)
Then a function compares the input with all the codes in the main table and picks out the table that has the same Product_Code.
So the sequence is: all the product tables for all the Product_Code values in the main table are created, each holding every row from the main table with that Product_Code.
Then, when the program is opened, the user inputs a Product_Code,
and the program picks the table that has the same code and shows it to the user.
Thanks a lot. I know it's hard, but I really need your help and I'm certain you can help me.
The product table should look like
CREATE TABLE Product_x (Product_Code INTEGER,
Product_Name TEXT, -- taken from the rows in the main table that have the same product code
Entry_Date TEXT,
Permission_Number INTEGER,
Incoming INTEGER,
Outgoing INTEGER,
Description TEXT,
Total_Quantity_In_Store INTEGER, -- the main table's Incoming - Outgoing
Total_Value_In_Store INTEGER -- the main table's Total_Buying_Price - Total_Sell_Price
)
Thank you for your help, and I hope you can figure it out because I'm really struggling with it.
From your comment:
I think I'd select some columns from the main table, but I don't know how I'd update only some columns with selected columns from the main table where Product_Code = PCode.get() (which is the entry box). Is that possible?
Yes, it is definitely possible to present only certain rows and columns of data to the user.
However, there are many patterns (i.e. programming techniques) that you could follow for presenting data to the user, but every common, best-practice technique always separates the backend data (i.e. database) from the user interface. It is not necessary to limit presentation of data to one entire table at a time. In most cases the data should never be presented and/or exposed to the user exactly as it appears in a table. Of course sometimes the data is simple and direct enough to do that, but most applications re-format and group data in different views for proper presentation. (Here the term view is meant as a very general, abstract term for representing data in alternative ways from how it is stored. I mention specific sqlite views below.)
The entire philosophy behind modern databases is efficient, well-designed storage that can be queried to return just the data that is appropriate for each application. Much of this capability is based on the host language's data models, but sqlite directly supports features to help with this. For instance, a view can be defined to select only certain columns and rows at a time (i.e. choose certain Product_Code values). An sqlite view is just an SQL query that is saved and can have certain properties and actions defined for it. By default, an sqlite view is read-only, but triggers can be defined to allow updates to the underlying tables via the view.
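A small sketch of that idea using Python's built-in sqlite3 module (the database filename, view name, and trigger name are made up, and the trigger only covers the Description column as an illustration):

import sqlite3

conn = sqlite3.connect("store.db")  # hypothetical database file
conn.executescript("""
    -- A read-only 'window' onto StoringTF with just the columns of interest.
    CREATE VIEW IF NOT EXISTS Product_View AS
    SELECT Product_Code, Product_Name, Product_Date, Permission,
           Incoming, Outgoing, Description
    FROM StoringTF;

    -- Optional: allow updating Description through the view.
    CREATE TRIGGER IF NOT EXISTS Product_View_upd
    INSTEAD OF UPDATE OF Description ON Product_View
    BEGIN
        UPDATE StoringTF
        SET Description = NEW.Description
        WHERE Product_Code = NEW.Product_Code
          AND Product_Date = NEW.Product_Date;
    END;
""")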
From my earlier comment: You should research data normalization. That is the key principle for designing relational databases. For instance, you should avoid duplicate data columns like Product_Name. That column should only be in the StoringTF. Calculated columns are also usually redundant and unnecessary--don't store the Total_Value_In_Store column, rather calculate it when needed by query and/or view. Having duplicate columns invites mismatched data or at least unnecessary care to make sure all columns are synced when one is updated. Instead you can just query joined tables to get related values.
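For example, instead of storing Total_Quantity_In_Store and Total_Value_In_Store, a query can compute them from StoringTF for whichever product the user asks for (a sketch only; it assumes PCode is the Tkinter Entry from the question and that this code runs where PCode is in scope):

import sqlite3

conn = sqlite3.connect("store.db")  # hypothetical database file
rows = conn.execute("""
    SELECT Product_Code,
           Product_Name,
           Product_Date,
           Incoming - Outgoing                   AS Total_Quantity_In_Store,
           Total_Buying_Price - Total_Sell_Price AS Total_Value_In_Store
    FROM StoringTF
    WHERE Product_Code = ?
""", (int(PCode.get()),)).fetchall()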
Honestly, these concepts can require much study before they are implemented properly. By all means go forward with developing a solution that fits your needs, but a Stack Overflow answer is no place for the full tutorial that I suspect you might need. Really, your question seems to be about overall design, and I think my answer can get you started on the right track. Anything more specific and you'll need to ask other questions later on.
Can we do a loosely coupled data access layer design in python?
Let's say I have an Oracle table with a column named ACTIVITY_ID whose datatype is NUMBER(10). If this column is a foreign key in many other tables, can I create something like an ACTID class (like a Java object) to hold this column's data, and use it across the code whenever I want to manipulate or hold ACTIVITY_ID values, so that I can maintain consistency of my business-object columns? Is there any such possibility in Python?
Try Django
As I understand it, Python does not natively provide this kind of object-relational functionality; there are many different libraries and frameworks that add it. I recommend taking a look at Django. With Django, you create a class for each database table, and Django hides a LOT of the details, including support for multiple database engines such as MySQL and PostgreSQL. Django handles foreign-key relationships very well. Every table normally has a primary key, by default an auto-incremented id field. If you add a field like activity = models.ForeignKey(Activity), then you have a foreign key column activity_id in one table referencing the primary key column id in the Activity table. The admin page will take care of cascading deletion of records if you delete an Activity record, and in general things "just work" the way you might expect them to.
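A minimal sketch of what that looks like (the Task model and its fields are made up for illustration):

from django.db import models

class Activity(models.Model):
    # Django adds an auto-incrementing `id` primary key automatically.
    name = models.CharField(max_length=100)

class Task(models.Model):
    # Stored as an `activity_id` column referencing Activity.id.
    activity = models.ForeignKey(Activity, on_delete=models.CASCADE)
    note = models.CharField(max_length=200)

# Usage: task.activity is an Activity object; task.activity_id is the raw integer key.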
I have two models:
tripbook(id,country, state, place,detail,rating,experience,date)
album(id,location, rating, experience, date)
How can I show both the tripbook and album entries in one place, ordered by creation date?
For this I made one more table to store some details from both tables:
broadcast(id, post_type, social_status, creation date, tripbook_id, album_id)
But this is not working for me, as one of the two foreign keys (tripbook_id or album_id) will always be blank.
If any other database structure would help, please suggest it.
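For reference, the structure described above would look roughly like this in Django (a sketch only; Tripbook and Album stand in for the tripbook and album models from the question, and both foreign keys are nullable since only one of them is ever set per row):

from django.db import models

class Broadcast(models.Model):
    post_type = models.CharField(max_length=20)  # e.g. "tripbook" or "album"
    social_status = models.CharField(max_length=50)
    creation_date = models.DateTimeField()
    # Only one of these is filled in per row, so both must allow NULL.
    tripbook = models.ForeignKey('Tripbook', null=True, blank=True, on_delete=models.CASCADE)
    album = models.ForeignKey('Album', null=True, blank=True, on_delete=models.CASCADE)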
Background
I am looking for a way to dump the results of MySQL queries made with Python & Peewee to an excel file, including database column headers. I'd like the exported content to be laid out in a near-identical order to the columns in the database. Furthermore, I'd like a way for this to work across multiple similar databases that may have slightly differing fields. To clarify, one database may have a user table containing "User, PasswordHash, DOB, [...]", while another has "User, PasswordHash, Name, DOB, [...]".
The Problem
My primary problem is getting the column headers out in an ordered fashion. All attempts thus far have resulted in unordered results, all of which are less than elegant.
Second, my methodology thus far has resulted in code which I'd (personally) hate to maintain, which I know is a bad sign.
Work so far
At present, I have used Peewee's pwiz.py script to generate the models for each of the preexisting database tables in the target databases, then went and entered all primary and foreign keys. The relations are setup, and some brief tests showed they're associating properly.
Code: I've managed to get the column headers out using something similar to:
for i, column in enumerate(User._meta.get_field_names()):
    ws.cell(row=0, column=i).value = column
As mentioned, this is unordered. Also, doing it this way forces me to do something along the lines of
getattr(some_object, title)
to dynamically populate the fields accordingly.
Thoughts and Possible Solutions
Manually write out the order I want in an array, and use that to loop through and populate the data. The pro of this is very strict/granular control; the con is that I'd need to specify the order for every database.
Create (whether manually or via a method) a hash of all possibly encountered fields with an associated weight, then write a method that sorts _meta.get_field_names() by weight (see the sketch below). The con of this is that the columns may not end up 100% in the right order, such as Name coming before DOB in one DB but after it in another.
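A rough sketch of that second idea (the weight map is entirely made up, and it assumes the same _meta.get_field_names() call as in your snippet; unknown fields sort to the end alphabetically):

# Hypothetical weights: lower numbers sort earlier.
FIELD_WEIGHTS = {'id': 0, 'user': 10, 'passwordhash': 20, 'name': 30, 'dob': 40}

def ordered_field_names(model):
    names = model._meta.get_field_names()
    return sorted(names, key=lambda name: (FIELD_WEIGHTS.get(name.lower(), 999), name))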
Feel free to tell me I'm doing it all wrong or to suggest completely different ways of doing this; I'm all ears. I'm very much new to Python and Peewee (and ORMs in general, actually). I could switch back to Perl and do the database querying via DBI with little to no hassle, but its libraries for Excel would cause me just as many problems, and I'd like to take this as an opportunity to expand my knowledge.
There is a method on the model meta you can use:
for field in User._meta.get_sorted_fields():
    print(field.name)
This will print the field names in the order they are declared on the model.
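A minimal sketch of putting that together with openpyxl for the export (this assumes get_sorted_fields() yields field objects with a .name attribute, as in the snippet above, that the User model comes from the question, and that the filename is just an example):

import openpyxl

wb = openpyxl.Workbook()
ws = wb.active

fields = list(User._meta.get_sorted_fields())

# Header row in declaration order (openpyxl rows/columns are 1-based).
for col, field in enumerate(fields, start=1):
    ws.cell(row=1, column=col, value=field.name)

# One spreadsheet row per record, populated dynamically with getattr().
for row, user in enumerate(User.select(), start=2):
    for col, field in enumerate(fields, start=1):
        ws.cell(row=row, column=col, value=getattr(user, field.name))

wb.save("users.xlsx")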
I'm new to Django and currently defining an existing database scheme as a Django model.
I have a central repository to store values like:
(some_qualifier_id_1, some_qualifier_id_2, measure_id, value)
The value is, database-wise, an integer. It can, however, refer to categorical data, in which case I want to link it to another table that gives me additional information, such as the string that should be displayed instead of the number and ordering information.
Can I tell Django to create a link to a table using value as a foreign key sometimes?
Update:
Using the int to get the category: yes, that's what I do. However, the point of the model layer, as far as I understand it, is being able to tell Django how the tables relate. Just using an int to look things up would mean hacking it together without telling Django what I'm doing, which will probably mean I have to generate SQL manually instead of using the model layer at some point. Use case: order category values by some ordering field in the Categories table.
If the measure_id value is semantically a foreign key (that is, each value must be an index into the measure table), then declare it as such, and everything will come together on its own.
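A sketch of that, assuming a Measure model holds the categorical values (the model and field names here are illustrative, not taken from your schema):

from django.db import models

class Measure(models.Model):
    name = models.CharField(max_length=100)
    ordering = models.IntegerField()

class Value(models.Model):  # the central repository row
    some_qualifier_id_1 = models.IntegerField()
    some_qualifier_id_2 = models.IntegerField()
    # Stored as an integer measure_id column, but Django now knows the relation,
    # so you can e.g. Value.objects.order_by('measure__ordering').
    measure = models.ForeignKey(Measure, on_delete=models.PROTECT)
    value = models.IntegerField()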
If the measure_id value is not a foreign key, you can annotate your result set on demand using:
MyModel.objects.filter(**your_filters).extra(select={
'measure_name':
'SELECT measure.name FROM measure WHERE mymodeltable.measure_id = measure.id'
})
Then your retrieved objects will have a 'measure_name' attribute with the joined column.