I am appending a new row to an existing CSV file, but for some unknown reason the new row is being added onto the end of the last row instead of on a new line.
So it's showing in the CSV as:
11-07-2016,38361,9076,14487,292,741614-07-2016,38417,9767,15832,301,7416
When it should be showing as:
11-07-2016,38361,9076,14487,292,7416
14-07-2016,38417,9767,15832,301,7416
My code is:
import time
import csv
today = (time.strftime("%d-%m-%Y"))
newRow = """%s,%s,%s,%s,%s,%s""" % (today, yes, ok, war, leg, noag)
fd = open('data.csv','a')
fd.write(newRow)
fd.close()
Any idea why this is happening? Thanks a lot.
It looks like the file doesn't have a newline at the end. Try adding one before appending the new line:
newRow = "\n%s,%s,%s,%s,%s,%s\n" % (today, yes, ok, war, leg, noag)
with open("data.csv", "a") as f:
    f.write(newRow)
Right now you are not using the csv module, just a regular write as for a plain text file.
To treat the file as a CSV, build newRow as a list of values rather than a pre-joined string, and change:
fd.write(newRow)
to:
csv_writer = csv.writer(fd)
csv_writer.writerow([today, yes, ok, war, leg, noag])
Note that writerow expects a sequence of field values; passing the joined string would write every single character as its own column.
If you want to edit the file as a plain text file you should add the newline character manually, so:
fd.write('\n' + newRow)
will work (in that case the import csv is redundant).
import time
import csv
today = time.strftime("%d-%m-%Y")
newRow = [today, yes, ok, war, leg, noag]
with open('data.csv', 'a', newline='') as fd:
    writer = csv.writer(fd)
    writer.writerow(newRow)
It's writerow instead of write; it automatically adds the line ending, and newline='' prevents extra blank lines between rows (newRow is a list here so that each value becomes its own field).
Related
I am probably making a stupid mistake, but I can't find where it is. I want to count the number of lines in my CSV file. I wrote this, and obviously it isn't working: I get row_count = 0 when it should be 400. Cheers.
f = open(adresse,"r")
reader = csv.reader(f,delimiter = ",")
data = [l for l in reader]
row_count = sum(1 for row in reader)
print row_count
with open(adresse, "r") as f:
    reader = csv.reader(f, delimiter=",")
    data = list(reader)
    row_count = len(data)
You are trying to read the file twice, but the file pointer has already reached the end of the file after the data list is built, so the second pass over the reader finds nothing.
First you have to open the file with open:
input_file = open("nameOfFile.csv","r+")
Then use csv.reader to read the CSV:
reader_file = csv.reader(input_file)
Finally, you can get the number of rows with len:
value = len(list(reader_file))
The total code is this:
input_file = open("nameOfFile.csv","r+")
reader_file = csv.reader(input_file)
value = len(list(reader_file))
Remember that if you want to reuse the CSV file you have to call input_file.seek(0), because turning reader_file into a list reads the whole file and leaves the file pointer at the end.
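A minimal sketch of that rewind-and-reuse pattern (nameOfFile.csv is the placeholder name from above):
import csv

input_file = open("nameOfFile.csv", "r")
reader_file = csv.reader(input_file)

value = len(list(reader_file))  # this consumes the whole file
input_file.seek(0)              # rewind so the reader can be iterated again

for row in reader_file:         # works again after the seek
    print(row)

input_file.close()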
If you are working with Python 3 and have the pandas library installed, you can go with:
import pandas as pd
results = pd.read_csv('f.csv')
print(len(results))
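Note that read_csv treats the first line of the file as a header by default, so len(results) counts only the data rows. A small sketch for the case where the file has no header row (an assumption about your data, not something stated in the question):
import pandas as pd

# header=None tells pandas not to treat the first line as column names,
# so every line in f.csv is counted as a data row
results = pd.read_csv('f.csv', header=None)
print(len(results))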
I would consider using a generator. It does the job and keeps you safe from a MemoryError, since the rows are never all held in memory at once.
def generator_count_file_rows(input_file):
    for row in open(input_file, 'r'):
        yield row
And then
count = 0
for row in generator_count_file_rows('very_large_set.csv'):
    count += 1
The important stuff is hidden in the comments section of the solution that is marked correct.
Re-sharing Erdős-Bacon's solution here for better visibility.
Why? Because it saves a lot of memory without having to create a list, so I think it is better to do it this way:
import csv

def read_raw_csv(file_name):
    with open(file_name, 'r') as file:
        csvreader = csv.reader(file)
        # count number of rows
        entry_count = sum(1 for row in csvreader)
        print(entry_count - 1)  # -1 is for discarding the header row
# with built-in libraries
from csv import reader

opened_file = open('f.csv')
read_file = reader(opened_file)
apps_data = list(read_file)
rowcount = len(apps_data)  # which includes the header row
print("Total rows including header: " + str(rowcount))
Simply open the CSV file in Notepad++. It shows the total row count in a jiffy. :)
Or
in the cmd prompt, provide the file path and key in the command:
find /c /v "some meaningless string" Filename.csv
Update: my file.txt.zp is tab delimited and looks kind of like this:
file.txt.zp
I want to split the first column by : _ /
original post:
I have a very large zipped tab-delimited file.
I want to open it, scan it one row at a time, split some of the columns, and write it to a new file.
I kept getting various errors (every time I fix one, another pops up).
This is my code:
import csv
import re
import gzip
f = gzip.open('file.txt.gz')
original = f.readlines()
f.close()
original_l = csv.reader(original)
for row in original_l:
    file_l = re.split('_|:|/', row)
    with open('newfile.gz', 'w', newline='') as final:
        finalfile = csv.writer(final, delimiter=' ')
        finalfile.writerow(file_l)
Thanks!
For this code I got the error:
for row in original_l:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
So based on what I found here, I added this after f.close():
original = original.decode('utf8')
and then got the error:
original = original.decode('utf8')
AttributeError: 'list' object has no attribute 'decode'
Update 2
This code should produce the output that you're after.
import csv
import gzip
import re
with gzip.open('file.txt.gz', mode='rt') as f, \
        open('newfile.gz', 'w') as final:
    writer = csv.writer(final, delimiter=' ')
    reader = csv.reader(f, delimiter='\t')
    _ = next(reader)  # skip header row
    for row in reader:
        writer.writerow(re.split(r'_|:|/', row[0]))
Update
Open the gzip file in text mode because str objects are required by the CSV module in Python 3.
f = gzip.open('file.txt.gz', 'rt')
Also specify the delimiter when creating the csv.reader.
original_l = csv.reader(original, delimiter='\t')
This will get you past the first hurdle.
Now you need to explain what the data is, which columns you wish to extract, and what the output should look like.
Original answer follows...
One obvious problem is that the output file is constantly being overwritten by the next row of input. This is because the output file is opened in (over)write mode ('w') once per row.
It would be better to open the output file once outside of the loop.
Also, the CSV file delimiter is not specified when creating the reader. You said that the file is tab delimited so specify that:
original_l = csv.reader(original, delimiter='\t')
On the other hand, your code attempts to split each row using other delimiters; however, the rows coming from the csv.reader are represented as a list, not the string that the re.split() call requires.
Another problem is that the output file is not zipped as the name suggests.
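If you do want the output compressed as well, here is a minimal sketch (reusing the structure of the Update 2 code and keeping the newfile.gz name from the question) that writes through gzip too:
import csv
import gzip
import re

with gzip.open('file.txt.gz', mode='rt') as f, \
        gzip.open('newfile.gz', mode='wt') as final:  # output is now really gzip-compressed
    writer = csv.writer(final, delimiter=' ')
    reader = csv.reader(f, delimiter='\t')
    _ = next(reader)  # skip header row
    for row in reader:
        writer.writerow(re.split(r'_|:|/', row[0]))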
I wrote this program to read a column from an excel file then write it into a txt file:
import xlrd, sys
text_file = open("Output.txt", "w")
isotope = xlrd.open_workbook(sys.argv[1])
first_sheet=isotope.sheet_by_index(0)
x= []
for rownum in range(first_sheet.nrows):
    x.append(first_sheet.cell(rownum, 1))
for item in x:
    text_file.write("%s\n" % item)
text_file.close()
It reads the column correctly but writes it like so:
number:517.0
number:531.0
number:517.0
number:520.0
number:513.0
number:514.0
number:522.0
Can I read it in a way that it just writes the value and not "number:"? I could just cut out the first 7 characters of every line, but that seems kind of inefficient.
Thanks for the help!
Also, if you want a way to read entire values of a row in one shot:
You can take first_sheet and do:
first_sheet.row_values(index_of_row)
This will return a list with all the values in that row.
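For the original loop, a minimal sketch of how that looks in practice (cell(...).value and row_values are both part of xlrd's API; the column index 1 and the Output.txt name are taken from the question):
import sys
import xlrd

isotope = xlrd.open_workbook(sys.argv[1])
first_sheet = isotope.sheet_by_index(0)

with open("Output.txt", "w") as text_file:
    for rownum in range(first_sheet.nrows):
        # .value is the bare cell contents (e.g. 517.0); writing the Cell
        # object itself is what produces the "number:517.0" repr
        text_file.write("%s\n" % first_sheet.cell(rownum, 1).value)

# reading a whole row at once:
# first_sheet.row_values(0)  ->  list of plain values for row 0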
I have the following output from a CSV file:
word1|word2|word3|word4|word5|word6|01:12|word8
word1|word2|word3|word4|word5|word6|03:12|word8
word1|word2|word3|word4|word5|word6|01:12|word8
What I need to do is change the time string to look like this: 00:01:12.
My idea is to extract list item [7] and add "00:" as a string to the front.
import csv
with open('temp', 'r') as f:
    reader = csv.reader(f, delimiter="|")
    for row in reader:
        fixed_time = (str("00:") + row[7])
        begin = row[:6]
        end = row[:8]
        print begin + fixed_time + end
I get the error message:
TypeError: can only concatenate list (not "str") to list.
I also had a look at this post:
how to change [1,2,3,4] to '1234' using python
I need to know if my approach to the solution is the right way; maybe I need to use split or something else for this.
Thanks for any help.
The line that's throwing the exception is
print begin + fixed_time + end
because begin and end are both lists and fixed_time is a string. Whenever you take a slice of a list (that's the row[:6] and row[:8] parts), a list is returned. If you just want to print it out, you can do
print begin, fixed_time, end
and you won't get an error.
Corrected code:
I'm opening a new file for writing (I'm calling it 'final', but you can call it whatever you want), and I'm just writing everything to it with the one modification. It's easiest to just change the one element of the list that has the line (row[6] here), and use '|'.join to write a pipe character between each column.
import csv
with open('temp', 'r') as f, open('final', 'w') as fw:
    reader = csv.reader(f, delimiter="|")
    for row in reader:
        # just change the element in the row to have the extra zeros
        row[6] = '00:' + row[6]
        # write the row back out, separated by | characters, and a new line
        fw.write('|'.join(row) + '\n')
You can use a regex for that:
>>> txt = """\
... word1|word2|word3|word4|word5|word6|01:12|word8
... word1|word2|word3|word4|word5|word6|03:12|word8
... word1|word2|word3|word4|word5|word6|01:12|word8"""
>>> import re
>>> print(re.sub(r'\|(\d\d:\d\d)\|', r'|00:\1|', txt))
word1|word2|word3|word4|word5|word6|00:01:12|word8
word1|word2|word3|word4|word5|word6|00:03:12|word8
word1|word2|word3|word4|word5|word6|00:01:12|word8
I have some code that is meant to convert CSV files into tab delimited files. My problem is that I cannot figure out how to write the correct values in the correct order. Here is my code:
for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(export_dir + os.path.basename(file), 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write(item['name']+'\t'+item['order_num']...)
        tab_file.write('\n'+item['amt_due']+'\t'+item['due_date']...)
Now, since both my write statements are in the for row in data loop, my headers are being written multiple times over. If I outdent the first write statement, I'll have an obvious formatting error. If I move the second write statement above the first and then outdent, my data will be out of order. What can I do to make sure that the first write statement gets written once as a header, and the second gets written for each line in the CSV file? How do I extract the first 'write' statement outside of the loop without breaking the dictionary? Thanks!
The csv module contains methods for writing as well as reading, making this pretty trivial:
import csv
with open("test.csv") as file, open("test_tab.csv", "w") as out:
reader = csv.reader(file)
writer = csv.writer(out, dialect=csv.excel_tab)
for row in reader:
writer.writerow(row)
No need to do it all yourself. Note my use of the with statement, which should always be used when working with files in Python.
Edit: Naturally, if you want to select specific values, you can do that easily enough. You appear to be making your own dictionary to select the values - again, the csv module provides DictReader to do that for you:
import csv
with open("test.csv") as file, open("test_tab.csv", "w") as out:
reader = csv.DictReader(file)
writer = csv.writer(out, dialect=csv.excel_tab)
for row in reader:
writer.writerow([row["name"], row["order_num"], ...])
As kirelagin points out in the comments, writer.writerows() could also be used, here with a generator expression:
writer.writerows([row["name"], row["order_num"], ...] for row in reader)
Extract the code that writes the headers out of the main loop, so that it gets written exactly once, at the beginning.
Also, consider using the CSV module for writing CSV files (not just for reading), don't reinvent the wheel!
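A minimal sketch of that structure (the column names and the test.csv / test_tab.csv file names are placeholders borrowed from the examples above, not from your data):
import csv

columns = ["name", "order_num"]          # whichever columns you actually want

with open("test.csv") as infile, open("test_tab.csv", "w", newline="") as out:
    reader = csv.DictReader(infile)
    writer = csv.writer(out, dialect=csv.excel_tab)
    writer.writerow(columns)             # header row written exactly once, outside the loop
    for row in reader:
        writer.writerow([row[c] for c in columns])   # one output line per input row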
OK, so I figured it out, but it's not the most elegant solution. Basically, I just ran the first loop, wrote to the file, then ran it a second time and appended the results. See my code below. I would love any input on a better way to accomplish what I've done here. Thanks!
for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(export_dir + os.path.basename(file), 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write(item['name']+'\t'+item['order_num']...)
    tab_file.close()
for file in import_dir:
    data = csv.reader(open(file))
    fields = data.next()
    new_file = export_dir + os.path.basename(file)
    tab_file = open(export_dir + os.path.basename(file), 'a+')
    for row in data:
        items = zip(fields, row)
        item = {}
        for (name, value) in items:
            item[name] = value.strip()
        tab_file.write('\n'+item['amt_due']+'\t'+item['due_date']...)
    tab_file.close()
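One way to avoid reading every file twice (a sketch only, keeping your import_dir / export_dir variables, leaving out the columns elided by ... in your version, and producing the same two-section output as the code above) is to parse the rows into a list once and then loop over that list twice:
import csv
import os

for file in import_dir:
    with open(file) as f:
        data = csv.reader(f)
        fields = data.next()
        # parse every row once and keep the resulting dicts in memory
        rows = [dict((name, value.strip()) for name, value in zip(fields, row))
                for row in data]

    with open(export_dir + os.path.basename(file), 'a+') as tab_file:
        for item in rows:
            tab_file.write(item['name'] + '\t' + item['order_num'])
        for item in rows:
            tab_file.write('\n' + item['amt_due'] + '\t' + item['due_date'])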