So I was trying to split a list of values into a DataFrame in Python.
Here is a sample of my list:
ini_string1 = "Time=2014-11-07 00:00:00,strangeness=0.0001,p-value=0.19,deviation=0.78,D_Range=low"
templist = []
for i in range(5):
    templist.append(ini_string1)
Now I was trying to create a DataFrame with Time, Strangeness, P-Values, Deviation, D_Range as variables.
I was able to get a DataFrame when I have only a single value of ini_string, but could not make it work when I have a list of values.
Below is sample code I tried with a single ini_string value:
import pandas as pd

lst_dict = []
cols = ['Time', 'Strangeness', 'P-Values', 'Deviation', 'Is_Deviation']

# Initialising string
for i in range(5):
    ini_string1 = "Time=2014-11-07 00:00:00,strangeness=0.0001,p-value=0.19,deviation=0.78,D_Range=low"
    tempstr = ini_string1
    # split on commas, then split each piece on '=' to build a key/value dict
    res = dict(item.split("=") for item in tempstr.split(","))
    lst_dict.append({'Time': res['Time'],
                     'Strangeness': res['strangeness'],
                     'P-Values': res['p-value'],
                     'Deviation': res['deviation'],
                     'Is_Deviation': res['D_Range']})

print(lst_dict)
strdf = pd.DataFrame(lst_dict, columns=cols)
I could not figure out the implementation for a list of values.
The code below will do the job.
from collections import defaultdict
import pandas as pd

ini_string1 = "Time=2014-11-07 00:00:00,strangeness=0.0001,p-value=0.19,deviation=0.78,D_Range='low'"
ini_string2 = "Time=2015-12-07 00:00:00,strangeness=0.0005,p-value=0.31,deviation=0.01,D_Range='high'"
ini_strings = [ini_string1, ini_string2]

dd = defaultdict(list)
for ini_str in ini_strings:
    for key_val in ini_str.split(','):
        k, v = key_val.split('=')
        dd[k].append(v)

df = pd.DataFrame(dd)
Read more about defaultdict - How does collections.defaultdict work?
Python has other interesting data structures - https://docs.python.org/2/library/collections.html
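For a quick illustration of why defaultdict(list) fits here (a minimal sketch, not from the original answer): a missing key is created on first access with an empty list, so values can be appended without checking for the key first.

from collections import defaultdict

dd = defaultdict(list)
dd['Time'].append('2014-11-07 00:00:00')  # 'Time' did not exist; it starts as []
dd['Time'].append('2015-12-07 00:00:00')
print(dd['Time'])  # ['2014-11-07 00:00:00', '2015-12-07 00:00:00']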
I have data in a CSV file; here is a sample:
firstnb,secondnb,distance
901,19011,459.73618164837535
901,19017,492.5540450352788
901,19018,458.489289271722
903,13019,167.46632044684435
903,13020,353.16001204909657
The desired output:
901,19011,19017,19018
903,13019,13020
As you can see in the output, I want to take the firstnb column (901/903)
and put beside each one its secondnb values. I believe you can understand from the desired output better than from my explanation :D
What I tried so far is the following:
import pandas as pd
import csv

df = pd.read_csv('test.csv')

with open('neighborList.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    secondStation = []
    for row in range(len(df)):
        firstStation = df['firstnb'][row]
        for x in range(len(df)):
            if firstStation == df['firstnb'][x]:
                secondStation.append(df['secondnb'][x])
        # line = firstStation, secondStation
        # writer.writerow(line)
        print(firstStation, secondStation)
        secondStation = []
My code outputs this:
901 [19011, 19017, 19018]
901 [19011, 19017, 19018]
901 [19011, 19017, 19018]
903 [13019, 13020]
903 [13019, 13020]
Pandas has a built-in method to do this, called groupby:
df = pd.read_csv(YOUR_CSV_FILE)
df_grouped = list(df.groupby(df['firstnb']))  # group by first column

# chain keys and values into merged lists
for key, values in df_grouped:
    print([key] + values['secondnb'].tolist())
Here I just print the sublists; you can save them into a new CSV in any format you'd like (strings, ints, etc.).
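For example, a minimal sketch that writes each merged list as one CSV row, reusing the 'neighborList.csv' filename from the question and assuming df was read as above:

import csv

with open('neighborList.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for key, values in df.groupby('firstnb'):
        writer.writerow([key] + values['secondnb'].tolist())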
First, I grouped the data by firstnb, creating a list of the values in secondnb using the aggregate function.
df[['firstnb','secondnb']].groupby('firstnb').aggregate(func=list).to_dict()
By turning this into a dict, we get:
{'secondnb': {901: [19011, 19017, 19018], 903: [13019, 13020]}}
I'm not entirely clear on what the final output should be (plain strings, lists, …), but from here on, it's easy to produce whatever you'd like.
For example, a list of lists:
intermediate = df[['firstnb','secondnb']].groupby('firstnb').aggregate(func=list).to_dict()
[[k] + v for k,v in intermediate['secondnb'].items()]
Result:
[[901, 19011, 19017, 19018], [903, 13019, 13020]]
def toList(a):
    res = []
    for r in a:
        res.append(r)
    return res

df.groupby('firstnb').agg(toList)
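Since toList just copies its argument into a new list, passing the built-in list as the aggregation function should behave the same way:

df.groupby('firstnb').agg(list)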
Hi, I wrote some code that builds a default dictionary:
import re
import pandas as pd

def makedata(filename):
    with open(filename, "r") as file:
        for x in features:
            previous = []
            count = 0
            for line in file:
                var_name = x
                regexp = re.compile(var_name + r'.*?([0-9.-]+)')
                match = regexp.search(line)
                if match and (match.group(1)) != previous:
                    previous = match.group(1)
                    count += 1
                    if count > wlength:
                        count = 1
                    target = str(str(count) + x)
                    dict.setdefault(target, []).append(match.group(1))
            file.seek(0)
    df = pd.DataFrame.from_dict(dict)
The dictionary looks good, but when I try to convert it to a DataFrame, the result is empty. I can't figure it out.
dict:
{'1meanSignalLenght': ['0.5305184', '0.48961428', '0.47203177', '0.5177274'], '1amplCor': ['0.8780955002105448', '0.8634431017504487', '0.9381169983046714', '0.9407036427333355'], '1metr10.angle1': ['0.6439386643584522', '0.6555194964997434', '0.9512436169922103', '0.23789348400794422'], '1syncVar': ['0.1344131181025432', '0.08194580887223515', '0.15922251165913678', '0.28795644612520327'], '1linVelMagn': ['0.07062673289287498', '0.08792496681784517', '0.12603999663935528', '0.14791253129369603'], '1metr6.velSum': ['0.17850601560734558', '0.15855169971072014', '0.21396496345720045', '0.2739525279330513']}
df:
Empty DataFrame
Columns: []
Index: []
{}
I think part of your issue is that you are using the built-in name 'dict' as if it were a variable.
Make a dictionary in your function and call it something other than 'dict'. Have your function return that dictionary, then build the DataFrame from that return value. Right now, you are creating a data frame from an empty dictionary object.
df = pd.DataFrame(my_dict)  # where my_dict is your renamed dictionary
This should make a DataFrame from the dictionary.
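A minimal sketch of the suggested fix (illustrative only; the keys and values are copied from the question's printout, and the regex parsing is omitted):

import pandas as pd

def makedata():
    results = {}  # renamed so it no longer shadows the built-in `dict`
    results.setdefault('1meanSignalLenght', []).append('0.5305184')
    results.setdefault('1amplCor', []).append('0.8780955002105448')
    return results  # return the dictionary instead of relying on a global

data = makedata()
df = pd.DataFrame.from_dict(data)
print(df)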
You can either pass a list of dicts, simply using pd.DataFrame(list_of_dicts) (use pd.DataFrame([d]) if your variable d is a single dict rather than a list), or a dict of lists using pd.DataFrame.from_dict(d). In the latter case, d should be something like d = {"a": [1, 2, 3], "b": ["a", "b", "c"], "c": ...}.
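For illustration, a quick sketch of both constructors side by side:

import pandas as pd

list_of_dicts = [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
df1 = pd.DataFrame(list_of_dicts)             # one row per dict

dict_of_lists = {'a': [1, 2, 3], 'b': ['x', 'y', 'z']}
df2 = pd.DataFrame.from_dict(dict_of_lists)   # one column per key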
see: Pandas Dataframe from dict with empty list value
I got a dataset in Python and its structure is like:
Tree Species      number of trunks
----------------------------------
Acer rubrum       1
Quercus bicolor   1
Quercus bicolor   1
aabbccdd          0
and I have a question: can I implement a function similar to
Select sum(number of trunks)
from trees.data['Number of Trunks']
where x = trees.data["Tree Species"]
group by trees.data["Tree Species"]
in Python? x is an array containing five elements:
x = array(['Acer rubrum', 'Acer saccharum', 'Acer saccharinum',
'Quercus rubra', 'Quercus bicolor'], dtype='<U16')
What I want to do is map each element in x to trees.data["Tree Species"] and calculate the sum of the number of trunks; it should return an array of
array = (sum_num(Acer rubrum), sum_num(Acer saccharum), sum_num(Acer saccharinum),
         sum_num(Quercus rubra), sum_num(Quercus bicolor))
You may want to look at pandas. That will allow you to do something like
df.groupby('Tree Species')['Number of Trunks'].sum()
Please note that here df is whatever variable name you read your data frame into. I would recommend you look at pandas and lambda functions too.
You can do something like this:
import pandas as pd
df = pd.DataFrame()
tree_species = ["Acer rubrum", "Quercus bicolor", "Quercus bicolor", "aabbccdd"]
no_of_trunks = [1,1,1,0]
df["Tree Species"] = tree_species
df["Number of Trunks"] = no_of_trunks
df.groupby('Tree Species').sum() #This will create a pandas dataframe
df.groupby('Tree Species')['Number of Trunks'].sum() #This will create a pandas series.
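If you need the sums in the order of the x array from the question, one option (a sketch, not from the original answers) is to reindex the grouped Series; species missing from the data come back as 0 via fill_value:

import numpy as np

sums = df.groupby('Tree Species')['Number of Trunks'].sum()
x = np.array(['Acer rubrum', 'Quercus bicolor'])   # subset matching this sample data
result = sums.reindex(x, fill_value=0).to_numpy()  # array([1, 2])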
You can do the same thing by just using dictionaries too:
tree_species = ["Acer rubrum", "Quercus bicolor", "Quercus bicolor", "aabbccdd"]
no_of_trunks = [1, 1, 1, 0]

d = {}
for key, trunk in zip(tree_species, no_of_trunks):
    if key not in d:
        d[key] = 0
    d[key] += trunk

print(d)
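The same tally can be written more compactly with collections.Counter (a variant, not from the original answer):

from collections import Counter

d = Counter()
for key, trunk in zip(tree_species, no_of_trunks):
    d[key] += trunk  # missing keys default to 0

print(dict(d))  # {'Acer rubrum': 1, 'Quercus bicolor': 2, 'aabbccdd': 0}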
I have a few categorical columns (description) in my DataFrame df_churn which I'd like to convert to numerical values. And of course I'd like to create a lookup table, because I will need to convert them back eventually.
The problem is that every column has a different number of categories, so appending to df_categories is not easy and I can't think of any simple way to do so.
Here is what I have so far. It stops after the first column because of the differing lengths.
cat_clmn = ['CLI_REGION', 'CLI_PROVINCE', 'CLI_ORIGIN', 'cli_origin2', 'cli_origin3', 'ONE_PRD_TYPE_1']
df_categories = pd.DataFrame()

def categorizer(_clmn):
    for clmn in cat_clmn:
        dict_cat = {key: value for value, key in enumerate(df_churn[clmn].unique())}
        df_categories[clmn] = dict_cat.values()
        df_categories[clmn + '_key'] = dict_cat.keys()
        df_churn[clmn + '_CAT'] = df_churn[clmn].map(dict_cat)

categorizer(cat_clmn)
Here is a temporary solution, but I am sure it can be done in a better way.
df_CLI_REGION = pd.DataFrame()
df_CLI_PROVINCE = pd.DataFrame()
df_CLI_ORIGIN = pd.DataFrame()
df_cli_origin2 = pd.DataFrame()
df_cli_origin3 = pd.DataFrame()
df_ONE_PRD_TYPE_1 = pd.DataFrame()

cat_clmn = ['CLI_REGION', 'CLI_PROVINCE', 'CLI_ORIGIN', 'cli_origin2', 'cli_origin3', 'ONE_PRD_TYPE_1']
df_lst = [df_CLI_REGION, df_CLI_PROVINCE, df_CLI_ORIGIN, df_cli_origin2, df_cli_origin3, df_ONE_PRD_TYPE_1]

def categorizer(_clmn):
    for clmn, df in zip(cat_clmn, df_lst):
        d = {key: value for value, key in enumerate(df_churn[clmn].unique())}
        df[clmn] = d.values()
        df[clmn + '_key'] = d.keys()
        df_churn[clmn + '_CAT'] = df_churn[clmn].map(d)

categorizer(cat_clmn)
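One tidier variant (a sketch under assumptions of my own, not from the thread): keep the per-column lookup tables in a dict keyed by column name instead of six separate DataFrame variables:

import pandas as pd

# hypothetical sample standing in for df_churn
df_churn = pd.DataFrame({'CLI_REGION': ['north', 'south', 'north'],
                         'CLI_PROVINCE': ['aa', 'bb', 'aa']})
cat_clmn = ['CLI_REGION', 'CLI_PROVINCE']

lookups = {}  # column name -> lookup DataFrame
for clmn in cat_clmn:
    d = {key: value for value, key in enumerate(df_churn[clmn].unique())}
    lookups[clmn] = pd.DataFrame({clmn + '_CAT': list(d.values()),
                                  clmn + '_key': list(d.keys())})
    df_churn[clmn + '_CAT'] = df_churn[clmn].map(d)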
I am currently working to make a dictionary with a tuple of names as keys and a float as the value, of the form {(nameA, nameB): datavalue, (nameB, nameC): datavalue, ...}.
The value data is from a matrix I have made into a pandas DataFrame with the names as both the index and column labels. I have created an ordered list of the keys for my final dictionary, called keys, with the function createDictionaryKeys(). The issue I have is that not all the names from this list appear in my data matrix. I want to include only the names that do appear in the data matrix in my final dictionary.
How can I do this search while avoiding the slow linear for loop? I also created a dictionary that has the name as key and a value of 1 if it should be included and 0 otherwise. It has the form {nameA: 1, nameB: 0, ...} and is called allow_dict. I was hoping to use this to do some sort of hash search.
def createDictionary(keynamefile, seperator, datamatrix, matrixsep):
    import pandas as pd
    keys = createDictionaryKeys(keynamefile, seperator)
    final_dict = {}
    data_df = pd.read_csv(open(datamatrix), sep=matrixsep)
    pd.set_option("display.max_rows", len(data_df))
    df_indices = list(data_df.index.values)
    df_cols = list(data_df.columns.values)[1:]
    for i in df_indices:
        data_df = data_df.rename(index={i: df_cols[i]})
    data_df = data_df.drop("Unnamed: 0", 1)
    allow_dict = descriminatePromoters(HARDCODEDFILENAME, SEP, THRESHOLD)
    #print ( item for item in df_cols if allow_dict[item] == 0 ).next()
    present = [x for x in keys if x[0] in df_cols and x[1] in df_cols]
    for i in present:
        final_dict[i] = final_df.loc[i[0], i[1]]
    return final_dict
Testing existence in Python sets is O(1), so simply:
present = [ x for x in keys if x[0] in set(df_cols) and x[1] in set(df_cols)]
...should give you some speedup (building the set once, outside the comprehension, avoids reconstructing it for every element). Since you're iterating through in O(n) anyway (and have to, to construct your final_dict), something like:
colset = set(df_cols)
final_dict = {k: final_df.loc[k[0], k[1]]
              for k in keys
              if (k[0] in colset) and (k[1] in colset)}
Would be nice, I would think.