Extracting row based on row number - python

I have a data frame with 100 rows. Based on the user's input I want to extract that particular row.
I have tried using if row == df.index but it gives this error: "The truth value of an array with more than one element is ambiguous".

If the user is inputting the row number, i.e. the integer location of the row, then you can use df.iloc[i].
If the user is inputting the index label of the row, then you can use df.loc[i].
If it is something else, please update your question with more information.
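For example, a minimal sketch assuming the user types an integer position (the frame and variable names here are only illustrative):

import pandas as pd

df = pd.DataFrame({'a': range(100), 'b': range(100, 200)})   # sample frame with 100 rows

row_number = int(input("Enter a row number (0-99): "))       # integer position entered by the user
row = df.iloc[row_number]    # select the row by position
# or, if the input is an index label rather than a position:
# row = df.loc[row_number]
print(row)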

Related

How to get the whole row with column names while looking for a minimum value?

I have a data frame with four columns.
I can find the minimum value using this code:
df_temp = df_A2C.loc[df_A2C['TO_ID'] == 7]
mini_value = df_temp['DURATION_H'].min()
print("minimum value in column 'TO_ID' is:", mini_value)
Output:
minimum value in column 'TO_ID' is: 0.434833333333333
Now, I am trying to get the whole row with all column names while looking for a minimum value using TO_ID. Something like this.
How can we get the whole row with all column names while looking for a minimum value?
If you had posted the data as code or text, I would have been able to share the result.
Assumption: you're searching for the minimum value for a specific TO_ID.
# as per your code, filter out by to_id
# sort the result on duration and take the top value
df_A2C.loc[ (df_A2C['TO_ID'] == 7)].sort_values('DURATION_H').head(1)
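An alternative sketch, assuming you only want the single row holding the minimum, is idxmin, which returns the index label of that row so you can pull the whole row back with loc:

df_temp = df_A2C.loc[df_A2C['TO_ID'] == 7]
# idxmin gives the index label of the smallest DURATION_H; loc then returns the full row
df_temp.loc[df_temp['DURATION_H'].idxmin()]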

Regarding counting of duplicate values in data

How to get all the duplicate values of one specific column in a dataframe?
I want to check values in only one column, but the output I am getting is the whole table/data.
I want to count the number of times each value is repeated.
Use df['column'].value_counts().
The above answer is correct, and the result can further be converted to a dict:
df["col1"].value_counts().to_dict()

How to extract values based on column header in excel?

I have an Excel file containing values. I need the highlighted values in a single column and want to delete the rest, but because of a mismatch between the rows and the column headers I am not able to extract them. Once you see the Excel file you will understand which values I need. This is just a sample of my data.
In column A2:A17 the dates are continuous but a few dates repeat, while in row D1:K1 the dates do not repeat, so the values for the same date appear just below one another.
How can I get these values into one column?
Is there a way to highlight the values of the same date occurring in both the row and the column? The sample data was highlighted manually; I have a huge dataset that cannot be highlighted manually.
From the colour code I could also pick out the required values.
The file I am attaching is here:
https://docs.google.com/spreadsheets/d/1-xBMKRP1_toA_Ky8mKxCKAFi4uQ8YWJq/edit?usp=sharing&ouid=110042758694954349181&rtpof=true&sd=true
Please visit the link and help me to find the solution.
Thank you
I'm not clear what those values in columns D to K are.
If only the shaded ones matter and they can be derived from the Latitude and Longitude for each row separately:
Insert a column titled "Row", say in A, and populate it 1,2,3...
I think you also want a column E which is whatever the calculation you currently have in D-K. Is this "Distance"?
Then create a Pivot Table on rows A to E and you can do anything you are likely to need: https://support.microsoft.com/en-us/office/create-a-pivottable-to-analyze-worksheet-data-a9a84538-bfe9-40a9-a8e9-f99134456576
Put Dates as Column Labels, Row numbers as Row Labels, and Sum of "Distance" as Values.
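If you would rather do the same reshaping in Python, here is a rough pandas sketch; the file name and the column names Row, Date and Distance are assumptions based on the layout described above:

import pandas as pd

df = pd.read_excel('sample.xlsx')   # hypothetical file/column names, adjust to your sheet
pivot = df.pivot_table(index='Row', columns='Date', values='Distance', aggfunc='sum')
print(pivot)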

Pandas dataframe- How to count the number of distinct rows for a given ID

I have this dataframe and I want to add a column to it with the total number of distinct SalesOrderID values for a given CustomerID.
So, with what I am trying to do, there would be a new column with the value 3 for all these rows.
How can I do it?
I am trying this way but I get an error
data['TotalOrders'] = data.groupby([['CustomerID','SalesOrderID']]).size().reset_index(name='count')
Try using transform:
data['TotalOrders'] = data.groupby('CustomerID')['SalesOrderID'].transform('nunique')
This will give you one value for each row in the group. (thanks @Rodalm)
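For instance, on a toy frame (the CustomerID and SalesOrderID values below are invented just to show the behaviour):

import pandas as pd

data = pd.DataFrame({'CustomerID':   [1, 1, 1, 2],
                     'SalesOrderID': [10, 10, 11, 12]})
data['TotalOrders'] = data.groupby('CustomerID')['SalesOrderID'].transform('nunique')
# CustomerID 1 gets 2 (distinct orders 10 and 11), CustomerID 2 gets 1
print(data)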

Rejecting zero values when creating a list of minimum values. (Python Field Calc)

I'm trying to create a list of minimum values from four columns of values. Below is the statement I have used.
min ([!Depth!, !Depth_1!, !Depth_12!, !Depth_1_13!])
The problem I'm having is that some of the fields under these columns contain zeros. I need it to return the next lowest value from the columns that is greater than zero.
I have an attribute table for a shapefile from an ArcGIS document. It has 10 columns: ID, Shape, Buffer ID (x4), and Depth (x4).
I need to add an additional column to this data which holds the minimum value from the 4 depth columns. Many of the cells in these columns are equal to zero. I need the new column to take the minimum value from the four depth columns but ignore the zero values and take the next lowest value.
A screen shot of what I am working from:
Create a function that does it for you. I added a pic so you can follow the steps. Just change the input names to your column names.
def my_min(d1, d2, d3, d4):
    # Collect the four depth values and ignore any zeros before taking the minimum
    lst = [d1, d2, d3, d4]
    return min([x for x in lst if x != 0])
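You can then call it from the expression box using the same field tokens as in your original statement:

my_min(!Depth!, !Depth_1!, !Depth_12!, !Depth_1_13!)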
