I have a txt file which I read into a pandas DataFrame. The problem is that inside this file my text data is recorded with the delimiter '\t'. I need to split the information in one column into several columns, but it does not work because of this delimiter.
I found this post on Stack Overflow, but it deals with just one string and I don't understand how to apply it once I have a whole DataFrame: Split string at delimiter '\' in python
After reading my txt file into a df, it looks something like this:
df
column1\tcolumn2\tcolumn3
0.1\t0.2\t0.3
0.4\t0.5\t0.6
0.7\t0.8\t0.9
Basically what I am doing now is the following:
df = pd.read_fwf('my_file.txt', skiprows=8)  # I use skiprows because the first 8 lines are irrelevant text
df['column1\tcolumn2\tcolumn3'] = "r'" + df['column1\tcolumn2\tcolumn3'] + "'"  # I try to make it a raw string as the post suggested, but it does not really work
df['column1\tcolumn2\tcolumn3'].str.split('\\', expand=True)
and what I get is just the following (displayed as plain text inside the DataFrame):
r'0.1\t0.2\t0.3'
r'0.4\t0.5\t0.6'
r'0.7\t0.8\t0.9'
I am not very good with regular expressions and this seems a bit hard. How can I tackle this problem?
It looks like your file is tab-delimited, because of the "\t". This may work:
pd.read_csv('file.txt', sep='\t', skiprows=8)
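If it turns out the file stores a literal backslash-t (two characters) between values rather than real tabs, a regex separator handles that case too. A minimal sketch under that assumption:

import pandas as pd

# hypothetical: only needed if the delimiter is the two characters '\' and 't'
df = pd.read_csv('my_file.txt', sep=r'\\t', engine='python', skiprows=8)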
Related
I have a semicolon-delimited pandas DataFrame in which all columns have dtype object. Within some of the cells, the string value can contain a double quote ("), a comma (,), or both (e.g. TES"T_ING,_VALUE). I am then querying the DataFrame with df.query, based on some condition, to get a subset of it, but the rows that match the pattern in the example are omitted completely, while the remaining rows are returned just fine. Another requirement is that I need to match every " within the text with a closing quote, but applying a lambda to replace " with "" is not working properly either. I have tried several methods; they are listed below.
Problem 1:
pd.read_csv("file.csv", delimiter=';')
pd.read_csv("file.csv", delmiter=';', thousands=',')
pd.read_csv("file.csv", delimiter=";", escapechar='"')
pd.read_csv("file.csv", delimiter=";", encoding='utf-8')
All of the above fail to load the data in question.
Problem 2:
Input: TES"T_ING,_VALUE to TES""T_ING,_VALUE
I have tried:
df.apply(lambda s: s.str.replace('"', '""'))
which doesn't do anything.
What exactly is going on? I haven't been able to find any questions tackling this particular type of issue anywhere.
Appreciate your help in advance.
EDIT: Sorry, I didn't provide mockup data earlier due to sensitivity, but here is some fake data that illustrates the issue. The following is a sample of the CSV structure:
Column1;Column2;Column3;Column4;Column5\n
TES"T_ING,_VALUE;Col2Value;Col3Value;Col4Value;Col5Value\n
Col1value;TES"T_ING,_VALUE2;Col3Value;Col4Value;Col5Value\n
I have tried utilizing quoting=csv.QUOTE_ALL/QUOTE_NONNUMERIC and quotechar='"' when loading the df, but the result ends up being:
Column1;Column2;Column3;Column4;Column5\n
"TES"T_ING,_VALUE;Col2Value;Col3Value;Col4Value;Col5Value";;;;\n
"Col1value;TES"T_ING,_VALUE2;Col3Value;Col4Value;Col5Value";;;;\n
So it interprets the whole row as the value of column 1 rather than actually splitting on the ; and applying the quoting to column 1 only. Truthfully, I could iterate through each row in the df, do a split, and load the remaining values into their respective columns, but the CSV is quite large, so this operation would take some time. The subset of data the user queries is supposed to be returned from an endpoint (this part is already working).
The problem was solved using pd.apply with a custom function to process each record:
df = pd.read_csv("csv_file.csv", delimiter=';', escapechar='\\')
def mapper(record):
if ';' in record['col1']:
content = record['col1'].split(';')
if len(content) == num_columns:
if '"' in content[0]:
content[0] = content[0].replace('"', '""')
record['col1'] = content[0]
# repeat for remaining columns
processed = df.apply(lambda x: mapper(x), axis=1)
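For what it's worth, an alternative that avoids the row mapper, assuming the embedded quotes never wrap whole fields: disable quote handling at read time and double the quotes afterwards (a sketch, not the exact code used above):

import csv
import pandas as pd

# QUOTE_NONE makes the parser treat '"' as ordinary data,
# so TES"T_ING,_VALUE splits on ';' like any other row
df = pd.read_csv("csv_file.csv", delimiter=';', quoting=csv.QUOTE_NONE)

# double every quote in the string columns afterwards
for col in df.columns:
    if df[col].dtype == object:
        df[col] = df[col].str.replace('"', '""')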
I'm trying to import CSV-style data from software designed in Europe into a df for analysis.
The data uses two characters to delimit the values in the files: 'DC4' and 'SI' ("Shift In", I believe). I'm currently concatenating the files and splitting them on the 'DC4' character using read_csv into a df. Then I use a regex replace to turn all the 'SI' characters into ';' in the df. Next I skip every other line to remove the identifiers I don't need. If I open the data at this point, everything is split by 'DC4' and all 'SI' are converted to ';'.
What would you suggest to further split the df by the ';' character now? I've tried splitting with Series.str but got type errors. I've exported to csv and re-imported it using ';' as the delimiter, but for some reason it doesn't split the existing columns that were already split by the first import. I also get parser errors on some rows far down the df, so I think there are dirty rows (this is just information I've found; if it's not helpful, please ignore it). I can ignore those lines without affecting the data I need.
The df is around 60-70 columns and usually fewer than 75K rows when I pull a full report. I'm using PyCharm and Python 3.8. Thank you all for any help on this, I very much appreciate it. Here is my code so far:
path = 'file directory location'  # placeholder

df = pd.concat([pd.read_csv(f, sep='\x14', comment=" ", na_values='Nothing', header=None, index_col=False)
                for f in glob.glob(path + ".file extension")], ignore_index=True)
df = df.replace('\x0f', ';', regex=True)  # turn the SI control characters into ';'
df = df.iloc[::2]
df.to_csv(r'new_file_location', index=False, encoding='utf-8-sig')
So you have a CSV (technically not a CSV I guess) that's separated by two different values (DC4 and SI) and you want to read it into a dataframe?
You can do so directly with pandas: the read_csv function allows you to specify regex delimiters, so you could use "\x0f|\x14" to split on either SI or DC4 as separator: pd.read_csv(path, sep="\x0f|\x14")
An example with readable characters:
The csv contains:
col1,col2;col3
val1,val2,val3
val4;val5;val6
Which can be read as follows:
import pandas as pd
df = pd.read_csv(path, sep=",|;")
which results in df being:
col1 col2 col3
0 val1 val2 val3
1 val4 val5 val6
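Applied back to the original files, the same regex separator can replace the post-hoc SI-to-';' substitution entirely. A sketch under the assumption that DC4 is \x14 and SI is \x0f:

import glob
import pandas as pd

# split on either SI (\x0f) or DC4 (\x14) at read time
df = pd.concat(
    [pd.read_csv(f, sep='\x0f|\x14', engine='python',
                 na_values='Nothing', header=None, index_col=False)
     for f in glob.glob(path + ".file extension")],
    ignore_index=True,
)
df = df.iloc[::2]  # drop the identifier lines, as in the original code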
I am trying to read a data file with a header. The data file is attached and I am using the following code:
import pandas as pd
data = pd.read_csv('TestData.out', sep=' ', skiprows=1, header=None)
The issue is that I have 20 columns in my data file, while I am getting 32 columns in the variable data. How can I resolve this issue? I am very new to Python and I am learning.
Data_File
Your text file has two consecutive spaces in front of any value that does not have a minus sign. With sep=' ', pandas sees this as two delimiters with nothing (NaN) in between.
This will fix it:
data = pd.read_csv('TestData.out', sep='\s+', skiprows=1, header=None)
In this case the sep is interpreted as a regex, which looks for "one or more spaces" as the delimiter and returns columns 0 through 19.
Your data file has inconsistent space delimiting, so you just have to skip the extra space after the delimiter. This simple code works:
data = pd.read_csv('TestData.out', sep=' ', skiprows=1, skipinitialspace=True)
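To see the difference between the two separators, here is a self-contained toy reproduction (fabricated two-column data, not the attached file):

import io
import pandas as pd

sample = " 1.0 -2.0\n-3.0  4.0\n"  # two spaces appear before non-negative values

bad = pd.read_csv(io.StringIO(sample), sep=' ', header=None)      # phantom NaN column
good = pd.read_csv(io.StringIO(sample), sep=r'\s+', header=None)  # two clean columns

print(bad.shape)   # (2, 3)
print(good.shape)  # (2, 2)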
I'm trying to create a very human-readable script that will be multi-indexed. It looks like this:
A
    one : some data
    two : some other data
B
    one : foo
    three : bar
I'd like to use pandas' read_csv to automatically read this in as a multi-indexed file, with both \t and : used as delimiters, so that I can easily slice by section (i.e., A and B). I understand that header=[0,1] and perhaps tupleize_cols may be used to this end, but I can't get that far, since it doesn't seem to want to read both the tabs and colons properly. If I use sep='[\t:]', it consumes the leading tabs. If I don't use the regexp and read with sep='\t', it gets the tabs right but doesn't handle the colons. Is this possible using read_csv? I could do it line by line, but there must be an easier way :)
This is the output I had in mind. I added labels to the indices and column, which could hopefully be applied when reading it in:
                 value
index_1 index_2
A       one      some data
        two      some other data
B       one      foo
        three    bar
EDIT: I used part of Ben.T's answer to get what I needed. I'm not in love with my solution, since I'm writing to a temp file, but it does work:
import re
import pandas as pd

with open('temp.csv', 'w') as outfile:
    for line in open(reader.filename, 'r'):
        if line[0] != '\t' or not line.strip():
            index1 = line.split('\n')[0]
        else:
            outfile.write(index1 + ':' + re.sub('[\t]+', '', line))

pd.read_csv('temp.csv', sep=':', header=None,
            names=['index_1', 'index_2', 'Value']).set_index(['index_1', 'index_2'])
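For what it's worth, the temp file can be skipped by collecting the rows in memory first. A sketch under the same assumptions (reader.filename stands in for the real path; headers are unindented, data lines are tab-indented 'key : value' pairs):

import pandas as pd

rows = []
with open(reader.filename) as infile:
    for line in infile:
        if not line.strip():
            continue                    # skip blank lines
        if line[0] != '\t':
            index1 = line.rstrip('\n')  # section header: 'A' or 'B'
        else:
            key, _, value = line.strip().partition(':')
            rows.append((index1, key.strip(), value.strip()))

df = (pd.DataFrame(rows, columns=['index_1', 'index_2', 'value'])
        .set_index(['index_1', 'index_2']))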
You can use two delimiters in read_csv such as:
pd.read_csv( path_file, sep=':|\t', engine='python')
Note the engine='python' to prevent a warning.
EDIT: with your input format it seems difficult, but with input like:
A one : some data
A two : some other data
B one : foo
B three : bar
with a \t as delimiter after A or B, then you get a multiindex by:
pd.read_csv(path_file, sep=':|\t', header=None, engine='python',
            names=['index_1', 'index_2', 'Value']).set_index(['index_1', 'index_2'])
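Once the MultiIndex is in place, slicing by section works the way the question intended. A small follow-up sketch (the parsed labels can keep stray whitespace around the ':' tokens, hence the strip):

df = pd.read_csv(path_file, sep=':|\t', header=None, engine='python',
                 names=['index_1', 'index_2', 'Value'])
# trim the whitespace left around the split tokens before indexing
df['index_2'] = df['index_2'].str.strip()
df['Value'] = df['Value'].str.strip()
df = df.set_index(['index_1', 'index_2'])

df.loc['A']  # all rows of section A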
I'm having a tough time correctly loading a csv file into a pandas dataframe. The file is a csv saved from MS Excel, where the rows look like this:
Montservis, s.r.o.;"2 012";"-14.98";"-34.68";"- 11.7";"0.02";"0.09";"0.16";"284.88";"10.32";"
I am using
filep="file_name.csv"
raw_data = pd.read_csv(filep, engine="python", index_col=False, header=None, delimiter=";")
(I have tried several combinations and alternatives of read_csv arguments, but without any success; I have also tried read_table.)
What I want to see in my dataframe is each semicolon-separated value in a separate column (I understand that read_csv works this way(?)).
Unfortunately, I always end up with the whole row placed in the first column of the dataframe. So basically after loading I have many rows, but only one column (two if I also count the index).
I have placed sample here:
datafile
Any ideas welcome.
Add quoting=3; 3 stands for csv.QUOTE_NONE (see the csv module documentation).
raw_data = pd.read_csv(filep, engine="python", index_col=False, header=None, delimiter=";", quoting=3)
This will give a [7 rows x 23 columns] dataframe.
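A quick self-contained check of that fix, using the sample row from the question:

import io
import pandas as pd

row = ('Montservis, s.r.o.;"2 012";"-14.98";"-34.68";"- 11.7";'
       '"0.02";"0.09";"0.16";"284.88";"10.32";"\n')
df = pd.read_csv(io.StringIO(row), engine="python", index_col=False,
                 header=None, delimiter=";", quoting=3)
print(df.shape)  # (1, 11) -- one field per semicolon-separated value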
The problem is the enclosing quote characters; they can be sidestepped by escaping the delimiter with a \ character, which makes the python engine treat the separator as a regex:
raw_data = pd.read_csv(filep, engine="python", index_col=False, header=None, delimiter='\;')