Compare two rows in a one2many table in OpenERP6.1 - python

How can we compare two rows in a one2many table in OpenERP 6.1?
I have a main table, say 'XX', and a one2many table, say 'YY', corresponding to it.
The 'YY' table has three columns. Every time I create a record in this table, I want to check whether the values in the three columns are identical to an existing row.
For example, if I click the create button and enter a first row with the values
'happy', 'new', 'year',
then the next time the same values are entered, a message should be shown
saying that these values must not be repeated.

You can use one of two methods:
1. Use _sql_constraints with unique() on your columns, as in the account.invoice object:
_sql_constraints = [
    ('number_uniq', 'unique(number, company_id, journal_id, type)', 'Invoice Number must be unique per Company!'),
]
2. Override the create/write methods of the yy object and write an onchange function for your fields.
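A minimal sketch of the first method applied to the 'YY' object (OpenERP 6.1 osv style); the field names 'col1', 'col2', 'col3' and the parent field 'xx_id' are placeholders for your actual columns:
from osv import osv, fields

class yy(osv.osv):
    _name = 'yy'
    _columns = {
        'xx_id': fields.many2one('xx', 'Parent'),
        'col1': fields.char('Column 1', size=64),
        'col2': fields.char('Column 2', size=64),
        'col3': fields.char('Column 3', size=64),
    }
    # PostgreSQL rejects any second row with the same three values.
    # Add 'xx_id' to the unique() list if duplicates should only be
    # blocked within the same parent 'XX' record.
    _sql_constraints = [
        ('values_uniq', 'unique(col1, col2, col3)',
         'These values should not be repeated!'),
    ]

yy()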

Why can't you use _constraints? The only drawback is that you get the warning only when you save the record.
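For reference, a rough sketch of what that _constraints approach would look like inside the same yy class (field names are again placeholders); unlike the SQL constraint, this check runs in Python when the record is saved:
def _check_duplicate(self, cr, uid, ids, context=None):
    for line in self.browse(cr, uid, ids, context=context):
        # Look for any other row holding the same three values.
        domain = [('id', '!=', line.id),
                  ('col1', '=', line.col1),
                  ('col2', '=', line.col2),
                  ('col3', '=', line.col3)]
        if self.search(cr, uid, domain, context=context):
            return False
    return True

_constraints = [
    (_check_duplicate, 'These values should not be repeated.',
     ['col1', 'col2', 'col3']),
]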

Related

Assign data between n persons in python

I want to assign the data in equal proportions between various people in Python, automatically. The names should appear in the first column, 'Name', automatically.
Please be specific about what you mean by 'automatically'. If I understand the result you want correctly, it should be something like this:
df['Name'] = ["Person 1", "Person 2", "Person 3", ...]
The 'Name' column must have the same length as the other columns.
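A minimal sketch of one way to do the assignment automatically, assuming the data is already in a DataFrame df and that n is the number of people (both are placeholders):
import pandas as pd

df = pd.DataFrame({"data": range(9)})  # stand-in for your actual data
n = 3  # number of people to split the rows between

# Round-robin assignment: each person ends up with roughly len(df) / n rows.
df["Name"] = [f"Person {i % n + 1}" for i in range(len(df))]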

Pandas dataframe- How to count the number of distinct rows for a given ID

I have this dataframe and I want to add a column with the total number of distinct SalesOrderID values for a given CustomerID.
So, with what I am trying to do, there would be a new column with the value 3 for all of these rows.
How can I do it?
I am trying it this way but I get an error:
data['TotalOrders'] = data.groupby([['CustomerID','SalesOrderID']]).size().reset_index(name='count')
Try using transform:
data['TotalOrders'] = data.groupby('CustomerID')['SalesOrderID'].transform('nunique')
This gives you one value for each row in the group, so it lines up with the original dataframe. (thanks #Rodalm)
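A small worked example (with made-up IDs) showing what transform('nunique') produces:
import pandas as pd

data = pd.DataFrame({
    "CustomerID": [1, 1, 1, 2],
    "SalesOrderID": [101, 102, 103, 201],
})
# Count distinct orders per customer and broadcast the count back to every row.
data["TotalOrders"] = data.groupby("CustomerID")["SalesOrderID"].transform("nunique")
print(data)
# Every row of customer 1 gets TotalOrders = 3; the row of customer 2 gets 1.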

Grist: lookup value from another table

Attempting to learn Grist, and I don't know Python... but willing to learn. Just trying to draw the lines between Python and formulas.
I have a table named "Items" with fields "ProductID", "collection" & "buyer".
There is another table named "Sales" with fields "Sku" (same as ProductID), "Qty", "Cost", "Sales" and "Date".
I would like to create another table that consolidates the data in one place (since not everything in Sales may be in Items, and Sales has a ton of duplicates due to the date each transaction occurred).
Something like: "Sku" "Buyer" "collection" "Qty" "Cost" "Sales" "margin" (formula to calculate).
"Sales" would need to be the root table, and reference the "Items" table for more information.
If my data were smaller, in Excel I would:
copy the SKUs, paste them in a new tab, remove duplicates, and run a SUMIFS,
e.g. if the formula is in cell B1 and the SKU is in A1:
=Sumifs(sales!$Qty, sales!$sku, A1)
Then I would run an INDEX/MATCH against Items in C1, for example:
=index(items!$Buyer, match(a1, Items!$ProductID, 0), 1)
(Very late, but answering in case it helps others.)
It sounds like the resulting table should have one record per Sku (aka ProductId). You can do it in two ways: as another view of the Items table or as a summary of the Sales table grouped by Sku.
In either case, you can pull in the sum of Qty or Cost as needed.
In the case of a summary table, such sums get included automatically (more on that at https://support.getgrist.com/summary-tables/#summary-formulas).
If you base it on the Items table, you can look up the relevant Sales records by adding a formula column (e.g. named sales) with the formula:
Sales.lookupRecords(Sku=$ProductID)
Then you can easily add sums of Qty and Cost as columns with formulas like SUM($sales.Qty) or SUM($sales.Cost).
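For the margin column the question mentions, the formula is again plain Python over the looked-up records; a sketch assuming the Sales table's revenue column is literally named Sales (adjust to your schema):
SUM($sales.Sales) - SUM($sales.Cost)
Each of these formula columns lives on the Items-based (or summary) table, so every Sku row ends up with its consolidated Qty, Cost, Sales and margin.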

Take values from one column and create new column from it

I have a big database with one column called "Measurements" and one column called "Data". The "Measurements" column holds the measurement type (for example height, weight and different index values) and "Data" holds the value for that measurement.
I would like to reorganize this database so that each unique measurement type gets its own column, e.g. a "weight" column, a "height" column, etc., filled with the corresponding values from the "Data" column.
Until now I have used this approach, which creates many little dataframes with the relevant data:
df_NDVI=df[(df['Measurement'] == 'NDVI') & (df['Data']!='Corrupt')]
df_VPP_kg=df[(df['Measurement'] == 'WEIGHT')]
But as you can see, it is not efficient and it creates many dataframes instead of one with those columns.
My end goal: take each unique value from the "Measurements" column and create a new column for it with the correct data from the "Data" column.
Try this:
df["obs"]=df.groupby("Measurements")["Measurements"].cumcount()
df.pivot(index="obs", columns="Measurements", values="Data")
So you will get 1 column for each unique value from Measurements, and Data will be order below by order of observation.
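A toy example (made-up measurement names and values) showing the reshape end to end:
import pandas as pd

df = pd.DataFrame({
    "Measurements": ["WEIGHT", "HEIGHT", "WEIGHT", "HEIGHT"],
    "Data": [70, 180, 82, 175],
})

# Number each occurrence of a measurement so pivot gets a unique row index,
# then spread each measurement type into its own column.
df["obs"] = df.groupby("Measurements")["Measurements"].cumcount()
wide = df.pivot(index="obs", columns="Measurements", values="Data")
print(wide)
# Measurements  HEIGHT  WEIGHT
# obs
# 0                180      70
# 1                175      82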

django query - how to get latest row during distinct

This is my db RATING table, where I want the last row among duplicate entries:
I did this:
bewertung = Rating.objects.filter(von_location=1).distinct('von_location')
But I am getting the first row, where bewertung=4. I want the last row for this von_location, i.e. bewertung=3. What is the best way to get it?
It looks as if you want the record with the largest id (among the records with the right value for von_location). That's the first record retrieved when you sort them in descending order by the id column:
Rating.objects.filter(von_location=1).order_by('-id')[0]
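If you only need that single newest row, .latest() is equivalent to the slice above; and if you want the newest row for every von_location at once, PostgreSQL lets you combine order_by with distinct on a field (a sketch; the ordering must start with the distinct field):
# Same single-location query with latest():
Rating.objects.filter(von_location=1).latest('id')

# Newest row per von_location (PostgreSQL only):
Rating.objects.order_by('von_location', '-id').distinct('von_location')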
