I have a spreadsheet that comes to me with a column containing FQDNs of computers. Filtering on this column is difficult because the names are all unique, so I ended up adding a new column next to the FQDN column and entering a less specific value based on each name. An example of this would be:
dc01spmkt.domain.com
new column value = "MARKETING"
All of the hosts have a three-letter designation, so people can filter on the new column with the more generic titles.
My question is: Is there a way that I can script this so that when the raw sheet arrives I can run the script and it will look at the values in the old column to populate the new one? So if it finds 'mkt' in the hostname field it writes MARKETING, or if it finds 'sls' it writes SALES?
If I understand you correctly, you should be able to do this with an IF / ISNUMBER / SEARCH formula as follows:
=IF(ISNUMBER(SEARCH("mkt",A1))=TRUE,"Marketing",IF(ISNUMBER(SEARCH("sls",A1))=TRUE,"Sales",""))
which would yield you the following:
asdfamkt Marketing
sls Sales
aj;sldkjfa
a;sldkfja
mkt Marketing
sls Sales
What this is doing is using SEARCH, which returns the position where the text you are searching for begins in the field. Then you use ISNUMBER to return TRUE or FALSE depending on whether SEARCH returned a number, meaning it found the three letters in question. Then you use IF to say that if ISNUMBER is TRUE, you want to call it "Marketing" or whatever.
You can keep nesting IF arguments for as many three-letter designations as you need.
Hope this helped!
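If you would rather script it against the raw sheet, as you mention, here is a minimal pandas sketch of the same idea. The file name and the column names ("FQDN", "Department") are assumptions, so adjust them to match your workbook:
import pandas as pd

# assumed mapping of three-letter designations to generic titles
DEPTS = {"mkt": "MARKETING", "sls": "SALES"}

def classify(fqdn):
    """Return the first matching department for a hostname, else blank."""
    name = str(fqdn).lower()
    for code, title in DEPTS.items():
        if code in name:
            return title
    return ""

# assumed file and column names - adjust to your sheet
df = pd.read_excel("raw_sheet.xlsx")            # column "FQDN" holds the hostnames
df["Department"] = df["FQDN"].apply(classify)
df.to_excel("raw_sheet_classified.xlsx", index=False)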
I have 2 datasets. One contains a column of company names, and the other contains a column of news headlines. The aim is to find all the news whose headline contains one of the companies from the other dataset. Basically the two datasets look like this, and I want to select the news with specific company names.
I have tried to use a for loop to achieve this, but it takes too much time, and I think pandas or some other library can do it in an easier way.
I am a beginner in Python.
If I understand correctly, you have 2 datasets with different columns. First, you need to loop through the dataset that contains the company names and search for each one in the headlines; you can use str.find("search") to find matches between the two datasets.
Also, if every query is stored in CSV format, you can use the split() function to get only the column you want to use.
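As a rough illustration of that idea in plain Python (the file names and column positions are assumptions about your data):
# keep every headline that mentions one of the company names
with open("companies.csv") as f:
    companies = [line.split(",")[0].strip() for line in f if line.strip()]

matches = []
with open("news.csv") as f:
    for line in f:
        headline = line.split(",")[0].strip()   # assume the headline is the first column
        for company in companies:
            if headline.find(company) != -1:    # str.find returns -1 when not found
                matches.append((company, headline))
                break

print(matches)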
Supposing that you have saved your company names in a pd.Series called company and headlines and texts in a pd.DataFrame called df, this will be what you are looking for:
# it will add a column called "company" to your initial df
for org in company:
    # flag every headline that contains this company name (case-sensitive, literal match)
    df.loc[df['headline'].str.contains(org, regex=False, na=False), 'company'] = org
You should pay attention to lower and upper case letters, as this will only find the corresponding company if the exact same word appears in the headline.
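A quick self-contained example of how that loop behaves (the company names and headlines here are made up for illustration):
import pandas as pd

company = pd.Series(["Acme", "Globex"])
df = pd.DataFrame({"headline": ["Acme posts record profit",
                                "Weather warning issued",
                                "Globex expands to Canada"]})

for org in company:
    df.loc[df["headline"].str.contains(org, regex=False, na=False), "company"] = org

print(df)
# rows whose headline mentions no company keep NaN in the "company" column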
So I've been working on data classification as part of a research project, but since there are thousands of different values, I thought it best to use Python to simplify the process rather than going through each record and classifying it manually.
So basically, I have a dataframe in which one column is entitled "description" and another is entitled "codes". Each row in the "description" column contains a survey response about activities. The descriptions are all different but may contain some keywords. I have a list of some 40 codes to classify each row based on its text. I was thinking of manually creating some columns in the csv file and, in each column, typing a keyword corresponding to each of the codes. Then a loop (or function with a loop) goes through each row of the dataframe and, if a substring corresponding to any of the keywords is found, updates the "codes" column with the code corresponding to that keyword.
My Dilemma
For example:
Suppose the list of codes is "Dance", "Nap", "Run", and "Fight", stored in a separate dataframe column. This dataframe, along with the manually entered keyword columns, is shown below (there can be more than two keyword columns; I just used two for illustration purposes).
This dataframe is named "classes".
category    Keyword1    Keyword2
Dance       dance       danc
Nap         sleep       slept
Run         run         quick
Fight       kick        unch
The other dataframe is as follows with the "codes" column initially blank.
This dataframe is named "data".
description         codes
Iwasdancingthen
She Slept
He landed a kick
We are family
The function or loop will search through the "description" column above and check whether any of the keywords appear in a given row. If they do, the corresponding code is applied (as shown in the resulting dataframe below). If not, the row in the "codes" column is left blank. The loop should run as many times as there are Keyword columns; in this case it will run twice, since there are two keyword columns.
description         codes
Iwasdancingthen     Dance
She Slept           Nap
He landed a kick    Fight
We are family
FYI: The keywords don't actually have to be complete words. I'd like to use partial words too as you see above.
Also, it should be noted that the loop or function I want should be case-insensitive and should match keywords even when they are embedded in longer, run-together strings (as in "Iwasdancingthen").
I hope you understand what I'm trying to do.
What I tried:
At first, I tried using a dictionary and manipulate it somehow. I used the advice here:
search keywords in dataframe cell
However, this didn't work too well, as I had many "NaN" values pop up and it became too complicated, so I tried a different route using lists. The code I used was based on another user's advice:
How to conditionally update DataFrame column in Pandas
Here's what I did:
# Create lists from the classes dataframe
Keyword1list = classes["Keyword1"].values.tolist()
Category = classes["category"].values.tolist()
I then used the following loop for classification
for i in range(len(Keyword1list)):
    data.loc[data["description"] == Keyword1list[i], "codes"] = Category[i]
However, the resulting output still gives me "NaN" everywhere. Also, I don't know how to loop over every keyword column (in this case, the two columns "Keyword1" and "Keyword2").
I'd really appreciate it if anyone could help me with a function or loop that works. Thanks in advance!
Edit: It was pointed out to me that some descriptions might contain multiple keywords. I forgot to mention that the codes in the "classes" dataframe are ordered by rank so that the ones that appear first on the dataframe should take priority; for example, if both "dance" and "nap" are in a description, the code listed higher in the "classes" dataframe (i.e. dance) should be selected and inputted into the "codes" column. I hope there's a way to do that.
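For what it's worth, here is a minimal sketch of the matching logic described above: case-insensitive substring matching, any number of Keyword columns, and earlier rows of "classes" taking priority. The sample data mirrors the tables above:
import pandas as pd

classes = pd.DataFrame({
    "category": ["Dance", "Nap", "Run", "Fight"],
    "Keyword1": ["dance", "sleep", "run", "kick"],
    "Keyword2": ["danc", "slept", "quick", "unch"],
})
data = pd.DataFrame({
    "description": ["Iwasdancingthen", "She Slept", "He landed a kick", "We are family"],
    "codes": "",
})

keyword_cols = [c for c in classes.columns if c.startswith("Keyword")]

def classify(description):
    text = str(description).lower()
    # rows of "classes" are checked top to bottom, so higher-ranked codes win
    for _, row in classes.iterrows():
        if any(str(row[col]).lower() in text for col in keyword_cols):
            return row["category"]
    return ""

data["codes"] = data["description"].apply(classify)
print(data)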
EDIT: Using the advanced filter in Excel (under the Data tab), I have been able to create a list of unique company names, and am now able to SUMIF based on the cell containing the company's name!
Disclaimer: Any Python solutions would be greatly appreciated as well, pandas specifically!
I have 60,000 rows of data, containing information about grants awarded to companies.
I am planning on creating a python dictionary to store each unique company name, with their total grant $ given (agreemen_2), and location coordinates. Then, I want to display this using Dash (Plotly) on a live MapBox map of Canada.
First things first: how do I calculate and store the total value that was awarded to each company?
I have seen SUMIF in other solutions, but am unsure how to output this to a new column, if that makes sense.
One potential solution I thought of was to create a new column of unique company names, and next to it SUMIF all the appropriate cells in column D.
PYTHON STUFF SO FAR
So with the code below, I take a much messier-looking spreadsheet, drop duplicates, sort by company name, and create a new pandas dataframe with the relevant data columns:
corp_df is the cleaned-up new dataframe that I want to work with,
and recipien_4 is the company's unique ID number; as you can see, it repeats with each grant awarded. Folia Biotech in the screenshot shows a duplicate grant, as proven by a column I did not include in the screenshot. There are quite a few duplicates, as seen in the screenshot.
import pandas as pd

in_file = '2019-20 Grants and Contributions.csv'

# create dataframe
df = pd.read_csv(in_file)

# sort by company name
df.sort_values("recipien_2", inplace=True)

# remove duplicate grant agreements
df.drop_duplicates(subset='agreemen_1', keep='first', inplace=True)

# full name, id, grant $, longitude, latitude
corp_df = df[['recipien_2', 'recipien_4', 'agreemen_2', 'longitude', 'latitude']]

# create an empty dict with only 1 copy of all corporation names, all values of 0
corp_dict = {}
for name in corp_df['recipien_2']:
    if name not in corp_dict:
        corp_dict[name] = 0
Any tips or tricks would be greatly appreciated; .itertuples() didn't seem like a good solution, as I am unsure how to filter and compare data, or whether datatypes are preserved. But feel free to prove me wrong haha.
I thought perhaps there was a better way to tackle this problem, straight in Excel vs. iterating through rows of a pandas dataframe. This is a pretty open question, so thank you for any help or direction you think is best!
I can see that you are using pandas to read the csv file, so you can use the groupby method.
You can create a new dataframe by grouping on the company name like this:
dfnew = df.groupby('recipien_2')['agreemen_2'].sum()
Then dfnew holds the total grant value for each company.
Documentation Pandas Group by:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
The use of groupby followed by a sum may be the best for you:
corp_df = df.groupby(by=['recipien_2', 'longitude', 'latitude'])['agreemen_2'].sum()
# if you want to turn the index back into columns you can add this as well:
corp_df = corp_df.reset_index()
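Building on that, a rough sketch of how you could then collect the totals and coordinates per company into the dictionary you mention, assuming corp_df has the columns you listed (treat the layout as an assumption about your data):
# group once, summing the grant value per company while keeping its coordinates
totals = corp_df.groupby(['recipien_2', 'longitude', 'latitude'],
                         as_index=False)['agreemen_2'].sum()

# company name -> (total grant $, longitude, latitude)
corp_dict = {
    row.recipien_2: (row.agreemen_2, row.longitude, row.latitude)
    for row in totals.itertuples(index=False)
}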
So, I have data consisting of people's names, and I want to assign a unique numeric ID to each of them based on their first name. The catch is that I want to give the same numeric ID to every person with the same first name. For example, if there are two people named John, they will both get the same numeric ID. Note that I want to assign the IDs dynamically, because new people are added to the data constantly, so every time new data arrives I need to check whether I already have an ID for that name or whether I have to generate a new one. I want to do this in Excel with a formula or macro.
Also, if anyone knows how to do this in Python (generating the same numeric ID for the same string), that would help too. I tried Python's uuid module but didn't find a proper solution.
ID Name
1 John
2 Michelle
1 John
3 Hasan
2 Michelle
As you can see, 'John' gets the same numeric ID, 1, every time it appears, and so does 'Michelle'.
This UDF is a bit shaky but will work depending on how many names you have and the spread of the names ...
Public Function GenerateId(ByVal strText As String) As Long
    Dim i As Long
    Dim strChar As String
    ' sum the ASCII codes of the upper-cased characters in the name
    For i = 1 To Len(strText)
        strChar = UCase(Mid(strText, i, 1))
        GenerateId = GenerateId + Asc(strChar)
    Next
End Function
... there is a chance it will double up but it's not easy to predict. You'd have to run all names through and check all outcomes.
Also, I know it's not a sequential ID approach starting from 1 but you didn't specify that so I used some creative licence. :-)
Also, this will ensure that a name retains its ID if the data is sorted differently; not sure if that's a requirement, but it's a consideration.
Worth a potential shot anyway.
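For the Python half of the question, here is a minimal dict-based sketch using pandas (the column name "Name" is an assumption). Persisting the name_ids dict between runs and reloading it is what keeps the IDs stable as new data is added:
import pandas as pd

df = pd.DataFrame({"Name": ["John", "Michelle", "John", "Hasan", "Michelle"]})

# existing name -> ID mapping; in real use, load this from a previous run
name_ids = {}

def get_id(name, lookup=name_ids):
    """Return the existing ID for a name, or assign the next free one."""
    if name not in lookup:
        lookup[name] = len(lookup) + 1
    return lookup[name]

df["ID"] = df["Name"].map(get_id)
print(df)
# John -> 1, Michelle -> 2, Hasan -> 3; repeated names reuse their ID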
I have a csv file:
salary = pd.read_csv('./datasets/salary.csv')
Is it possible to have an output like this?
Apologies for sharing only the concept, as you did not provide any code in the question. Consider adding example code if you are unable to follow the concept.
This will require creating a new column, "Label", in the dataframe for each matching "Salary" value. For example, check the table in the link below:
Click to see a sample table to achieve desired columns
The "Label" column can be filled using if-else statements. Use string functions, or a comparison such as == "string in the Salary column", to write the conditions. Additionally, use a for loop if the dataframe has multiple entries for each type of salary. Second, create the three new columns of interest, i.e. "per annum", "p.a. + Super", and "p.d.". Now use if-else statements again on the "Label" column to enter values row-wise in each column of interest, based on the condition.
This should let you achieve the desired entries.
Hope it helps.
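As a concrete illustration of that idea, here is a minimal pandas sketch; the column name "salary" and the example strings are assumptions about what the csv contains, so adapt the patterns to your actual data:
import pandas as pd

# assumed shape of the data: one free-text salary column
salary = pd.DataFrame({"salary": ["$80,000 per annum",
                                  "$95,000 p.a. + Super",
                                  "$450 p.d."]})

def label(text):
    """Classify a salary string into one of the three categories."""
    text = str(text).lower()
    if "+ super" in text:
        return "p.a. + Super"
    if "per annum" in text or "p.a." in text:
        return "per annum"
    if "p.d." in text:
        return "p.d."
    return ""

salary["Label"] = salary["salary"].apply(label)

# one indicator column per label, filled row-wise from "Label"
for col in ["per annum", "p.a. + Super", "p.d."]:
    salary[col] = (salary["Label"] == col).astype(int)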