Combine CSV files with identical columns and unescape HTML code - Python

Dear all, I often need to concatenate CSV files with identical headers (i.e. put them into one big file). Usually I just use pandas, but I now need to operate in an environment where I am not at liberty to install any library. The csv and html libs do exist.
I also need to unescape all remaining HTML entities like &amp; for the ampersand symbol which are still present within the data. I do know in which columns these can come up.
I thought about doing it like this, and the concat part of my code seems to work fine:
import csv
import html

for file in files:  # files is a list of csv files.
    with open(file, "rt", encoding="utf-8-sig") as source, \
         open(outfilePath, "at", newline='', encoding='utf-8-sig') as result:
        d_reader = csv.DictReader(source, delimiter=";")
        # Set header based on first file in the file list:
        if file == files[0]:
            Common_header = d_reader.fieldnames
        # Define DictWriter object
        wtr = csv.DictWriter(result, fieldnames=Common_header, lineterminator='\n', delimiter=";")
        # Write header only once to the empty file
        if result.tell() == 0:
            wtr.writeheader()
        # If I remove this block I get my concatenated single csv file as a result.
        # However, the html tags/encoded symbols are still present.
        for row in d_reader:
            print(html.unescape(row['ColA']))  # This prints the unescaped values in the column correctly
            # If I keep these two lines, I get an empty file with just the header as a result of the concatenation
            row['ColA'] = html.unescape(row['ColA'])
            row['ColB'] = html.unescape(row['ColB'])
        wtr.writerows(d_reader)
I would have thought that simply supplying the encoding='utf-8-sig' part to the result file would be sufficient to get rid of the HTML symbols, but that does not work. If you could give me a hint about what I am doing wrong in the usage of the html.unescape function in my code, that would be nice.
Thank you in advance
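For anyone hitting the same wall: the likely culprit is that the inner for row in d_reader: loop consumes the reader, so wtr.writerows(d_reader) is handed an already-exhausted iterator and writes nothing. The encoding argument only controls byte decoding; it never touches HTML entities. A minimal sketch of a fix, assuming the files, outfilePath, ColA and ColB names from the question, writes each unescaped row as it is read:
import csv
import html

with open(outfilePath, "at", newline='', encoding='utf-8-sig') as result:
    wtr = None
    for file in files:
        with open(file, "rt", encoding="utf-8-sig") as source:
            d_reader = csv.DictReader(source, delimiter=";")
            if wtr is None:
                # Reuse the first file's header for every file
                wtr = csv.DictWriter(result, fieldnames=d_reader.fieldnames,
                                     lineterminator='\n', delimiter=";")
                if result.tell() == 0:
                    wtr.writeheader()
            for row in d_reader:
                # Unescape the affected columns, then write immediately
                row['ColA'] = html.unescape(row['ColA'])
                row['ColB'] = html.unescape(row['ColB'])
                wtr.writerow(row)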

Related

Preserve input file format with pandas

I need to process hundreds of fairly large CSV files. Each file contains 4 header lines followed by 864000 lines of data and weighs more than 200 MB. Column types are most of the time recognized as object because missing values are indicated as "NAN" (with quotes). I want to perform a couple of operations on these data and export them to a new file in a format similar to the input file. To do so, I wrote the following code:
import pandas as pd

df = pd.read_csv(in_file, skiprows=[0, 2, 3])
# Get file header
with open(in_file, 'r') as fi:
    header = [next(fi) for x in range(4)]
# Write header to destination file
with open(out_file, 'w') as fo:
    for i_line in header:
        fo.write(i_line)
# Do some data transformation here
df = foobar(df)
# Append data to destination file
df.to_csv(out_file, header=False, index=False, mode='a')
I struggle to preserve exactly the input format. For instance, I have dates in the input files formatted as "2019-08-28 00:00:00.2" while they are written in the output files as 2019-08-28 00:00:00.2, i.e. without the quotation marks.
Same for "NAN" values that are rewritten without their quotes. Pandas wants to clean everything out.
I tried other variants that worked, but because of the file size, running time was unreasonable.
Include the quoting parameter in to_csv, i.e. quoting=csv.QUOTE_NONNUMERIC or quoting=2,
so your to_csv statement will be as follows:
df.to_csv(out_file, header=False, index=False, mode='a', quoting=2)
Note: you need to import csv if you want to use csv.QUOTE_NONNUMERIC.
More details about the parameters can be found in the documentation: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html
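To illustrate what that option does (a small sketch with made-up data, not the asker's file): every non-numeric field, including the timestamps and the "NAN" strings, comes out quoted, while genuinely numeric columns stay bare.
import csv
import io

import pandas as pd

# Hypothetical frame mimicking the question's columns
df = pd.DataFrame({"ts": ["2019-08-28 00:00:00.2"], "val": [1.5], "flag": ["NAN"]})
buf = io.StringIO()
df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC)
print(buf.getvalue())
# "ts","val","flag"
# "2019-08-28 00:00:00.2",1.5,"NAN"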

Grab values from a separate csv file and replace the values of columns in a pipe-delimited file

Trying to whip this out in Python. Long story short, I have a csv file that contains column data I need to inject into another file that is pipe-delimited. My understanding is that Python can't replace values in place, so I have to re-write the whole file with the new values.
data file(csv):
value1,value2,iwantthisvalue3
source file(txt, | delimited)
value1|value2|iwanttoreplacethisvalue3|value4|value5|etc
fixed file(txt, | delimited)
samevalue1|samevalue2|replacedvalue3|value4|value5|etc
I can't figure out how to accomplish this. This is my latest attempt (broken code):
import re
import csv

result = []
row = []
with open("C:\data\generatedfixed.csv", "r") as data_file:
    for line in data_file:
        fields = line.split(',')
        result.append(fields[2])
with open("C:\data\data.txt", "r") as source_file, with open("C:\data\data_fixed.txt", "w") as fixed_file:
    for line in source_file:
        fields = line.split('|')
        n = 0
        for value in result:
            fields[2] = result[n]
            n = n + 1
        row.append(line)
for value in row
    fixed_file.write(row)
I would highly suggest you use the pandas package here; it makes handling tabular data very easy and would help you a lot in this case. Once you have installed pandas, import it with:
import pandas as pd
To read the files simply use:
data_file = pd.read_csv(r"C:\data\generatedfixed.csv")
source_file = pd.read_csv(r"C:\data\data.txt", delimiter="|")
and after that manipulating these two files is easy, I'm not exactly sure how many values or which ones you want to replace, but if the length of both "iwantthisvalue3" and "iwanttoreplacethisvalue3" is the same then this should do the trick:
source_file['iwanttoreplacethisvalue3'] = data_file['iwantthisvalue3']
now all you need to do is save the dataframe (the table that we just updated) into a file, since you want to save it to a .txt file with "|" as the delimiter this is the line to do that (however you can customize how to save it in a lot of ways):
source_file.to_csv(r"C:\data\data_fixed.txt", sep='|', index=False)
Let me know if everything works and this helped you. I would also encourage you to read up (or watch some videos) on pandas if you're planning to work with tabular data; it is an awesome library with great documentation and functionality.
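A small addendum in case the two files use different column names or have no header row at all: a positional variant (a sketch under those assumptions, reusing the same hypothetical paths) addresses the third column by index instead of by name.
import pandas as pd

# header=None: treat the first line as data, not column names
data_file = pd.read_csv(r"C:\data\generatedfixed.csv", header=None)
source_file = pd.read_csv(r"C:\data\data.txt", delimiter="|", header=None)

# Replace the third column by position; .values sidesteps index alignment
source_file.iloc[:, 2] = data_file.iloc[:, 2].values

source_file.to_csv(r"C:\data\data_fixed.txt", sep="|", index=False, header=False)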

How to search for a combination of keywords in a text-file, extract lines above and below, and then export to Excel using pandas

I am trying to extract 5 lines before and after a specific combination of keywords from several SEC 10-K filings and then export that data into Excel so that I can then further process it manually.
Unfortunately I have to rely on the .txt format filings rather than the .html or .xblr ones because the latter are not always available. I already downloaded and partially cleaned the .txt files to remove unneeded tags.
In short, my goal is to tell Python to loop through the downloaded .txt files (e.g. all those in the same folder, or simply by providing a reference .txt list with all the file names), open each one, look for the word "cumulative effect" (ideally combined with other keywords, see code below), extract 5 lines before and after it, and then export the output to an Excel file with the filename in column A and the extracted paragraph in column B.
Using this code I managed to extract 5 lines above and below the keyword "cumulative effect" for one .txt file (which you can find here, for reference).
However I am still struggling with automating/looping the whole process and exporting the extracted text to Excel using pandas.
import collections
import itertools
import sys
from pandas import DataFrame

filing = '0000950123-94-002010_1.txt'
#with open(filing, 'r') as f:
with open(filing, 'r', encoding='utf-8', errors='replace') as f:
    before = collections.deque(maxlen=5)
    for line in f:
        if ('cumulative effect' in line or 'Cumulative effect' in line) and ('accounting change' in line or 'adoption' in line or 'adopted' in line or 'charge' in line):
            sys.stdout.writelines(before)
            sys.stdout.write(line)
            sys.stdout.writelines(itertools.islice(f, 5))
            break
        before.append(line)

findings = {'Filing': [filing],
            'Extracted_paragraph': [line]}
df = DataFrame(findings, columns=['Filing', 'Extracted_paragraph'])
export_excel = df.to_excel(r'/Users/myname/PYTHON/output.xlsx', index=None, header=True)
print(df)
Using this code I obtain the paragraph I need, but I only managed to export to Excel the single line in which the keyword is contained, not the entire text.
This is the Python output and this is the exported text in Excel.
How do I go about creating the loop and properly exporting the entire paragraph of interest into excel?
Thanks a lot in advance!!
I believe your basic error was in
'Extracted_paragraph': [line]
which should have been
'Extracted_paragraph': [before]
So with some simplifying changes, the main section of your code should look like this:
with open(filing, 'r', encoding='utf-8', errors='replace') as f:
    before = collections.deque(maxlen=5)
    for line in f:
        if ('cumulative effect' in line or 'Cumulative effect' in line) and ('accounting change' in line or 'adoption' in line or 'adopted' in line or 'charge' in line):
            break
        before.append(line)

before = ''.join(before)
findings = {'Filing': [filing],
            'Extracted_paragraph': [before]}
df = DataFrame(findings, columns=['Filing', 'Extracted_paragraph'])
And then continue from there to export to Excel, etc.
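To address the looping part of the question as well, here is a sketch (one possible layout, assuming the filings sit in the working directory and that the paragraph should again include the matched line plus five lines on each side) that collects one row per file and writes a single Excel sheet at the end:
import collections
import glob
import itertools

import pandas as pd

keywords_a = ('cumulative effect', 'Cumulative effect')
keywords_b = ('accounting change', 'adoption', 'adopted', 'charge')

rows = []
for filing in glob.glob('*.txt'):  # assumption: filings are in the cwd
    paragraph = ''
    with open(filing, 'r', encoding='utf-8', errors='replace') as f:
        before = collections.deque(maxlen=5)
        for line in f:
            if any(k in line for k in keywords_a) and any(k in line for k in keywords_b):
                # 5 lines before + the matched line + 5 lines after
                paragraph = ''.join(before) + line + ''.join(itertools.islice(f, 5))
                break
            before.append(line)
    rows.append({'Filing': filing, 'Extracted_paragraph': paragraph})

df = pd.DataFrame(rows, columns=['Filing', 'Extracted_paragraph'])
df.to_excel(r'/Users/myname/PYTHON/output.xlsx', index=None, header=True)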

Python: How to create a new dataframe starting from the first row where a specific value is found

I am reading csv files into python using:
df = pd.read_csv(r"C:\csvfile.csv")
But the file has some summary data, and the raw data start if a value "valx" is found. If "valx" is not found then the file is useless. I would like to create new dataframes that start where "valx" is found. I have been trying for a while with no success. Any help on how to achieve this is greatly appreciated.
Unfortunately, pandas only accepts skiprows for rows to skip at the beginning. You might want to parse the file before creating the dataframe.
As an example:
import csv

# newline='' belongs on open(), not on csv.reader
with open(r"C:\csvfile.csv", "r", newline='') as f:
    lines = csv.reader(f)
    if any('valx' in row for row in lines):
        data = list(lines)  # materialize the remaining rows before the file closes
Using the standard library csv module, you can read the file and check whether valx is in it; if it is found, the remaining content is captured in the data variable.
From there you can use the data variable to create your dataframe.
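One hedged way to do that last step (a sketch; the exact header handling depends on the file): locate the line where "valx" first appears, then let pandas skip everything above it.
import csv

import pandas as pd

path = r"C:\csvfile.csv"

# Find the 0-based index of the first row containing a "valx" field
start = None
with open(path, "r", newline='') as f:
    for i, row in enumerate(csv.reader(f)):
        if 'valx' in row:
            start = i
            break

if start is not None:
    # The "valx" line becomes the header row; use skiprows=start + 1
    # instead if the real data begin on the following line.
    df = pd.read_csv(path, skiprows=start)
else:
    df = None  # "valx" absent: the file is useless, per the question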

Converting an uploaded csv to a Python list

I have a two-column csv which I have uploaded via an HTML page to be operated on by a Python CGI script. Looking at the file on the server side, it looks to be a long string, i.e. a file called test.csv with the contents
col1, col2
x,y
has become
('upfile', 'test.csv', 'col1,col2\t\r\nx,y')
Col1 contains the data I want to operate on (i.e. x) and col2 contains its identifier (y). Is there a better way of doing the uploading, or do I need to manually extract the fields I want? This seems potentially very error-prone.
Thanks
If you're using the cgi module in python, you should be able to do something like:
import cgi
import csv

form = cgi.FieldStorage()
thefile = form['upfile']
reader = csv.reader(thefile.file)
header = next(reader)  # list of column names
for row in reader:
    # row is a list of fields
    process_row(row)
See, for example, cgi programming or the python cgi module docs.
Can't you use the csv module to parse this? It's certainly better than rolling your own.
Something along the lines of
import csv
import cgi

form = cgi.FieldStorage()
thefile = form['upfile']
reader = csv.reader(thefile.file, delimiter=',')
for row in reader:
    for field in row:
        doThing()
EDIT: Corrected my answer based on the ars answer posted below.
Looks like your file is being modified by the HTML upload. Is there anything stopping you from just FTPing in and dropping the csv file where you need it?
Once the CSV file is more proper, here is a quick function that will put it into a 2D array (a list of lists):
import csv

def genTableFrCsv(incsv):
    table = []
    # 'r' with newline='' replaces the old Python 2 'rb' idiom
    with open(incsv, 'r', newline='') as fin:
        for row in csv.reader(fin):
            table.append(row)
    return table
From here you can then operate on the whole list in memory rather than pulling bit by bit from the file as in Vitor's solution.
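For example (assuming the cleaned-up file is called test.csv), pulling the first column out of the in-memory table looks like this:
table = genTableFrCsv('test.csv')
col1_values = [row[0] for row in table[1:]]  # skip the header row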
The easy solution is rows = [row.split('\t') for row in csv_string.split('\r\n')]. It's only error-prone if you have users from different platforms submitting data. They might submit commas or tabs, and their line breaks could be \n, \r\n, \r, or ^M. The easiest solution is to use regular expressions. Bookmark this page if you don't know regular expressions:
http://regexlib.com/CheatSheet.aspx
And here's the solution:
import re

csv_string = 'col1,col2\t\r\nx,y'  # obviously your csv opening code goes here
# Split on any line-break style, then split each line on comma or tab
rows = [re.split(r'[,\t]', line.strip()) for line in re.split(r'\r\n|\r|\n', csv_string) if line.strip()]
rows = rows[1:]  # remove header
rows is now a list of lists covering all of the rows.
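Alternatively (a sketch, assuming commas are the real delimiter once the upload noise is stripped), the csv module can parse the in-memory string directly via io.StringIO, and it copes with quoted fields that both split() and a quick regex would mangle:
import csv
import io

csv_string = 'col1,col2\r\nx,y'  # hypothetical cleaned-up upload
rows = list(csv.reader(io.StringIO(csv_string)))
header, data = rows[0], rows[1:]  # ['col1', 'col2'], [['x', 'y']]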
