Creating new column with list of strings - python

I have a given dataset: https://www.kaggle.com/abcsds/pokemon
I need to create a new column, based on the existing column that holds the Pokemon's name (string type), that contains a list of strings instead of a single string.
I need to use a function. Here is my code:
import pandas as pd
import numpy as np
df = pd.read_csv('pokemon.csv')
def transform_faves(df):
    df = df.assign(name_as_list=df.name)  # new column
    list_of_a_single_column = df['name'].tolist()
    df['name_as_list'] = list_of_a_single_column
    print(type(list_of_a_single_column))
    return df
df = transform_faves(df)
The problem is that the new column still contains strings rather than lists of strings. Why doesn't this conversion work?
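The reason, briefly: tolist() produces a Python list of strings, and assigning that list back to a column makes pandas treat each string as one scalar cell, so the column is unchanged. To get an actual list object in every cell, each value has to be wrapped, e.g. with apply. A minimal sketch, assuming the lowercase column name 'name' used in the question:
import pandas as pd

df = pd.read_csv('pokemon.csv')

def transform_faves(df):
    # wrap each name in a one-element list so every cell holds a list of strings
    df = df.assign(name_as_list=df['name'].apply(lambda x: [x]))
    return df

df = transform_faves(df)
print(type(df['name_as_list'].iloc[0]))  # <class 'list'>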

Related

How to define a function that takes a pandas dataframe as its parameter and returns the number of columns

Say I have multiple pandas dataframes and I want to check their number of columns with a function I define. How? I already tried on my own, but when I run the following code it returns a TypeError:
import pandas as pd
def load_csv(filename):
    filename = pd.read_csv(filename)
    return filename

def columns_count(f):
    f = load_csv(f)
    columns = f.shape[1]
    return columns
Code to loop through the dataframes and count the number of columns:
def count_num_cols(df):
    return len(df.columns)  # or df.shape[1]

list_of_paths = ["C://Users//file.txt", ]
for a_path in list_of_paths:
    df = pd.read_csv(a_path)
    print(count_num_cols(df))  # pass the dataframe itself, not df.shape[1]
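As a usage sketch, with hypothetical file paths, the counts can also be gathered into a dict in one pass:
# hypothetical paths; collect {path: column count} for several files
paths = ["data1.csv", "data2.csv"]
counts = {p: count_num_cols(pd.read_csv(p)) for p in paths}
print(counts)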

Add new columns and new column names in python

I have a CSV file in the following format:
Date,Time,Open,High,Low,Close,Volume
09/22/2003,00:00,1024.5,1025.25,1015.75,1022.0,720382.0
09/23/2003,00:00,1022.0,1035.5,1019.25,1022.0,22441.0
10/22/2003,00:00,1035.0,1036.75,1024.25,1024.5,663229.0
I would like to add 20 new columns to this file, the value of each new column is synthetically created by simply randomizing a set of numbers.
It would be something like this:
import pandas as pd
from random import randrange

df = pd.read_csv('dataset.csv')
print(len(df))
input()
for i in range(len(df)):
    # Data that already exists
    date = df.values[i][0]
    time = df.values[i][1]
    open_value = df.values[i][2]
    high_value = df.values[i][3]
    low_value = df.values[i][4]
    close_value = df.values[i][5]
    volume = df.values[i][6]
    # This is the new data: 20 synthetic predictions
    # (originally written out as prediction_1 = randrange(3) ... prediction_20 = randrange(3))
    predictions = [randrange(3) for _ in range(20)]
    # How to concatenate these data row by row in a matrix?
    # How to add new column names and save the file?
I would like to concatenate them (old+synthetic data) and, after that, I would like to add 20 new columns named 'synthetic1', 'synthetic2', ..., 'synthetic20', to the existing column names and then save the resulting new dataset in a new text file.
I could do that easily with NumPy, but here we don't have purely numeric data, so I don't know how to do that (or whether it is possible). Is it possible with pandas or another library?
Here's a way you can do it:
import numpy as np
import pandas as pd

# n_row should match the number of rows in the existing df
n_row = len(df)
n_col = 20
f = pd.DataFrame(np.random.randint(100, size=(n_row, n_col)),
                 columns=['synthetic' + str(x) for x in range(1, n_col + 1)])
df = pd.concat([df, f], axis=1)  # axis=1 appends the new columns side by side
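To then save the result to a new text file, as the question asks, to_csv handles mixed string/numeric columns; a short sketch (the output filename is just an example):
# write the original plus synthetic columns to a new file
df.to_csv('dataset_with_synthetic.csv', index=False)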

Pandas: need to remove the row that contains a string. BUT my condition is not working

from chainer import datasets
from chainer.datasets import tuple_dataset
import numpy as np
import matplotlib.pyplot as plt
import chainer
import pandas as pd
import math
I have a CSV file containing 40,300 rows of data.
df =pd.read_csv("Myfile.csv", header = None)
In this section I remove the unneeded rows and columns:
columns = [0, 1]
rows = [0, 1, 2]
df.drop(columns, axis=1, inplace=True)  # drop the first two columns, which the code does not need
df.drop(rows, axis=0, inplace=True)     # drop the first three rows, which the code does not need
Here I want to remove any row that contains a certain string, BUT it is not working:
df[~df.E.str.contains("Intf Shut")]  # this part is not working for me
df.to_csv('summary.csv', index = False, header = False)
df.head()
There are two problems. First, the filter does not modify df in place; you have to reassign the result back to df:
df = df[~df.E.str.contains("Intf Shut")]
Second, because the file was read with header=None, the columns are labeled with integers, so there is no df.E attribute; refer to the column by its integer label instead (here the third column, label 2). You can first define a variable to_drop holding the exact strings to remove:
to_drop = ['My text 1', 'My text 2']
df = df[~df[2].isin(to_drop)]
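Note that isin matches whole cell values. If the goal is the original substring match, a hedged sketch, assuming the text sits in the column labeled 2 and some cells may be NaN or non-strings:
# substring match rather than exact match; na=False counts NaN/non-string cells as "no match"
df = df[~df[2].str.contains("Intf Shut", na=False)]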

How to update a pandas dataframe with an array of values, indexes, and columns?

I have a large dataframe and would like to update specific values at known row and column indices. I would like to do this without an explicit for loop.
For example:
import string
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(10, 10), index = range(10), columns = list(string.ascii_lowercase)[:10])
I have arbitrary arrays of indexes, columns, and values that I would like to use to update df. For example:
update_values = [0,-2,-3]
update_index = [3,5,7]
update_columns = ["d","g","i"]
I can loop over the arrays to update the original dataframe:
for i, j, v in zip(update_index, update_columns, update_values):
    df.loc[i, j] = v
but would like to use a technique not involving an explicit for loop.
Use the underlying numpy values
indexes = map(df.columns.get_loc, update_columns)
df.values[update_index, list(indexes)] = update_values
You can also try loc, which selects by index and column labels: df.loc[[index_labels], [column_labels]]. Be aware, though, that passing two lists selects the full cross product, so
df.loc[[3,5,7], ["d","g","i"]] = [0,-2,-3]
writes those three values into every row of the 3x3 block, i.e. all nine cells, not just the three individual (row, column) pairs. For pointwise updates, use the NumPy-based approach above.
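One more caveat, as a hedged sketch: in recent pandas versions df.values / df.to_numpy() may return a copy (notably under copy-on-write), so writes through it can silently be lost. Working on an explicit copy and writing it back avoids that:
# translate column labels into positional indices
col_idx = [df.columns.get_loc(c) for c in update_columns]
arr = df.to_numpy(copy=True)   # explicit copy; writes to .values may not reach the frame
arr[update_index, col_idx] = update_values
df.iloc[:, :] = arr            # write the updated values back into the frame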

Changing the dtype for specific columns in a pandas dataframe

I have a pandas dataframe which I have created from data stored in an xml file:
Initially the xml file is opened and parsed:
xmlData = etree.parse(filename)
trendData = xmlData.findall("//TrendData")
I created a dictionary which lists all the data names (used as column names) as keys and gives the position of the data in the xml file:
Parameters = {"TreatmentUnit":("Worklist/AdminData/AdminValues/TreatmentUnit"),
"Modality":("Worklist/AdminData/AdminValues/Modality"),
"Energy":("Worklist/AdminData/AdminValues/Energy"),
"FieldSize":("Worklist/AdminData/AdminValues/Fieldsize"),
"SDD":("Worklist/AdminData/AdminValues/SDD"),
"Gantry":("Worklist/AdminData/AdminValues/Gantry"),
"Wedge":("Worklist/AdminData/AdminValues/Wedge"),
"MU":("Worklist/AdminData/AdminValues/MU"),
"My":("Worklist/AdminData/AdminValues/My"),
"AnalyzeParametersCAXMin":("Worklist/AdminData/AnalyzeParams/CAX/Min"),
"AnalyzeParametersCAXMax":("Worklist/AdminData/AnalyzeParams/CAX/Max"),
"AnalyzeParametersCAXTarget":("Worklist/AdminData/AnalyzeParams/CAX/Target"),
"AnalyzeParametersCAXNorm":("Worklist/AdminData/AnalyzeParams/CAX/Norm"),
....}
This is just a small part of the dictionary; the actual one lists over 80 parameters.
The dictionary keys are then sorted:
sortedKeys = list(sorted(Parameters.keys()))
A header is created for the pandas dataframe:
dateList=[]
dateList.append('date')
headers = dateList+sortedKeys
I then create an empty pandas dataframe with as many rows as there are records in trendData, with the column headers set to 'headers', and loop through the file filling the dataframe:
df = pd.DataFrame(index=np.arange(0, len(trendData)), columns=headers)
for a, b in enumerate(trendData):
    result = {}
    result["date"] = dateutil.parser.parse(b.attrib['date'])
    for j in Parameters:
        result[j] = b.findtext(Parameters[j])
    df.loc[a] = result
df = df.set_index('date')
This seems to work fine, but the problem is that the dtype of each column is set to 'object', whereas most should be integers. It's possible to use:
df.convert_objects(convert_numeric=True)
and it works fine, but it is now deprecated.
I can also use, for example, :
df.AnalyzeParametersBQFMax = pd.to_numeric(df.AnalyzeParametersBQFMax)
to convert individual columns. But is there a way of using pd.to_numeric with a list of column names? I can create a list of the columns that should be integers using the following:
int64list = []
for q in sortedKeys:
    if q.startswith("AnalyzeParameters"):
        int64list.append(q)
but I can't find a way of passing this list to the function.
You can explicitly replace columns in a DataFrame with the same column just with another dtype.
Try this:
import pandas as pd
data = pd.DataFrame({'date':[2000, 2001, 2002, 2003], 'type':['A', 'B', 'A', 'C']})
data['date'] = data['date'].astype('int64')
Calling data.dtypes should now return the following:
date int64
type object
dtype: object
For multiple columns, use a for loop to run through the int64list you mentioned in your question.
For multiple columns you can also do it this way:
cols = df.filter(like='AnalyzeParameters').columns.tolist()
df[cols] = df[cols].astype(np.int64)
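To answer the pd.to_numeric part directly: pd.to_numeric accepts only one-dimensional input, but apply runs it column by column, so the int64list from the question can be converted in one statement (a sketch, assuming those columns hold clean numeric strings):
# apply pd.to_numeric to every column in int64list
df[int64list] = df[int64list].apply(pd.to_numeric)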
