Python: Writing new line to csv in for loop
I have the following problem: in a for loop, I want to append each newly arriving element of a list as a new line in a CSV file. But something is wrong with my code. When I open the file, there is always a blank line between the rows, and the list brackets appear in the output as well.
My code extract:

```python
allParameterCombi.append([fRC_Network1.jc, fRC_Network1.radius, fRC_Network1.frcSize,
                          fRC_Network1.percent[0], fRC_Network1.percent[1],
                          fRC_Network1.distType, fRC_Network1.ID])
with open('parameterCombinatons.csv', 'a') as csv_file:
    writer = csv.writer(csv_file, delimiter=";")
    writer.writerow([allParameterCombi[count3-1]])
    count3 = count3 + 1
```
What it looks like when opened:
You have to open your file in binary mode ("ab") before passing it to csv.writer, like this:

```python
with open('parameterCombinatons.csv', 'ab') as csv_file:  # Python 3: open('parameterCombinatons.csv', 'a', newline='')
```

The extra brackets come from wrapping the row in another list: writerow expects a sequence of fields, so pass allParameterCombi[count3-1] directly rather than [allParameterCombi[count3-1]].
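Putting both fixes together, a minimal Python 3 sketch might look like this (the row values below are invented placeholders standing in for allParameterCombi from the question):

```python
import csv

# Hypothetical rows standing in for allParameterCombi in the question
all_parameter_combis = [
    [0.5, 10, 200, 0.1, 0.9, "uniform", 1],
    [0.7, 12, 300, 0.2, 0.8, "normal", 2],
]

# newline='' prevents the extra blank line between rows on Windows (Python 3);
# in Python 2 the equivalent fix is opening the file in 'ab' mode.
with open('parameterCombinatons.csv', 'a', newline='') as csv_file:
    writer = csv.writer(csv_file, delimiter=";")
    for combi in all_parameter_combis:
        writer.writerow(combi)  # pass the list itself, not [combi]
```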
Related
Writing multiple lists to CSV only outputs 1 line
I have several lists of various lengths that I am trying to export to CSV so they can be loaded again the next time the program is launched. Every time I try the following, it only outputs a single line of data to the CSV:

```python
export = [solutions, fedata, bbcom, fenxt, ten_99, ten_99links]
with open('test.csv', 'w') as f:
    writer = csv.writer(f)
    # writer.writerow([<header row>])  # uncomment this and put a header if you want a header row
    for x in zip(*export):
        writer.writerow(x)
```

Some of the lists currently only have one item in them, but I am basically trying to make the CSV act as a database for this program, as we will be adding more to the lists as it expands. Any help is appreciated; I am really banging my head against the wall here.
Do you want every item to be on a new line, or every list on a new line? If you want an empty line between the rows, then you can remove newline=''. Try this:

```python
with open('test.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(export)
```
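If column-wise output was the goal, the single output line happens because zip stops at the shortest list; itertools.zip_longest pads the shorter ones instead. A sketch with invented lists standing in for the question's variables:

```python
import csv
from itertools import zip_longest

# Hypothetical lists of unequal length (stand-ins for solutions, fedata, ...)
solutions = ['a', 'b', 'c']
fedata = [1]
bbcom = ['x', 'y']

export = [solutions, fedata, bbcom]

with open('test.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # zip_longest keeps yielding rows until the longest list is exhausted,
    # filling missing cells with '' instead of truncating after the shortest.
    writer.writerows(zip_longest(*export, fillvalue=''))
```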
Python changing Comma Delimitation CSV
NEWBIE USING PYTHON (2.7.9) - When I export a gzipped file to a CSV using:

```python
myData = gzip.open('file.gz.DONE', 'rb')
myFile = open('output.csv', 'wb')
with myFile:
    writer = csv.writer(myFile)
    writer.writerows(myData)
    print("Writing complete")
```

it prints to the CSV with a comma delimiter after every character, e.g.:

```
S,V,R,","2,1,4,0,",",2,0,1,6,1,1,3,8,0,4,",",5,0,5,0,1,3,4,2,0,6,4,7,3,6,4,",",",",2,0,0,0,5,6,5,9,2,9,6,7,4,",",2,0,0,7,2,4,5,2,3,5,",",0,0,0,2,","
I,V,E,",",",",",",E,N,",",N,/,A,",",0,4,2,1,4,4,9,3,7,0,",":,I,R,_,",",N,/,A,",",U,N,A,N,S,W,",",",",",",",","
S,V,R,",",4,7,3,3,5,5,",",2,0,5,7,",",5,0,5,0,1,4,5,0,1,6,4,8,6,3,7,",",",",2,0,0,0,5,5,3,9,2,9,2,8,0,",",2,0,4,4,1,0,8,3,7,8,",",0,0,0,2,","
I,V,E,",",",",",",E,N,",",N,/,A,",",0,4,4,7,3,3,5,4,5,5,",",,:,I,R,_,",",N,/,A,",",U,N,A,N,S,W,",",",",",",",","
```

How do I get rid of the commas so that it is exported with the correct fields? e.g.:

```
SVR,2144370,20161804,50501342364,,565929674,2007245235,0002,1,PPDAP,PPLUS,DEACTIVE,,,EN,N/A,214370,:IR_,N/A,,,,,
SVR,473455,208082557,14501648637,,2000553929280,2044108378,0002,1,3G,CODAP,INACTIVE,,,EN,N/A,35455,:IR_,N/A,,,,,
```
You are only opening the gzip file. I think you are expecting the opened file to act automatically like an iterator, which it does. However, each line is a text string. writerows expects an iterable in which each item is an array of values to write with comma separation. Thus, given an iterator whose items are strings, and given that a string is an array of characters, you get the result you found. Since you didn't mention what the gzip data lines really contain, I can't guess how to parse the lines into an array of reasonable chunks. But assuming a function called split_line appropriate to that data, you could do:

```python
with gzip.open('file.gz.DONE', 'rb') as gzip_f:
    data = [split_line(l) for l in gzip_f]

with open('output.csv', 'wb') as myFile:
    writer = csv.writer(myFile)
    writer.writerows(data)
    print("Writing complete")
```

Of course, at this point doing it row by row and putting the with statements together makes sense. See https://docs.python.org/2/library/csv.html
I think it's simply because gzip.open() gives you a file-like object, but csv.writer's writerows() needs a list of lists of strings to do its work. But I don't understand why you want to use the csv module. It looks like you only want to extract the content of the gzip file and save it uncompressed in an output file. You could do that like this:

```python
import gzip

input_file_name = 'file.gz.DONE'
output_file_name = 'output.csv'

with gzip.open(input_file_name, 'rt') as input_file:
    with open(output_file_name, 'wt') as output_file:
        for line in input_file:
            output_file.write(line)

print("Writing complete")
```

If you want to use the csv module because you're not sure your input data is properly formatted (and you want an error message right away), you could then do:

```python
import gzip
import csv

input_file_name = 'file.gz.DONE'
output_file_name = 'output.csv'

with gzip.open(input_file_name, 'rt', newline='') as input_file:
    reader_csv = csv.reader(input_file)
    with open(output_file_name, 'wt', newline='') as output_file:
        writer_csv = csv.writer(output_file)
        writer_csv.writerows(reader_csv)

print("Writing complete")
```

Is that what you were trying to do? It's difficult to guess because we don't have the input file. If it's not what you want, could you clarify what you want?
Since I now have the information that the gzipped file is itself comma-separated values, it simplifies thus:

```python
with gzip.open('file.gz.DONE', 'rb') as gzip_f, open('output.csv', 'wb') as myFile:
    myFile.write(gzip_f.read())
```

In other words, it is just a roundabout gunzip to another file.
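For large archives, reading the whole file with read() holds everything in memory at once; shutil.copyfileobj streams it in chunks instead. A minimal self-contained sketch (the file names follow the question, but the sample data is invented for the demo):

```python
import gzip
import shutil

# Create a small gzip file so the example runs on its own
# (in the question this would be the existing 'file.gz.DONE').
with gzip.open('file.gz.DONE', 'wb') as f:
    f.write(b'SVR,2144370,0002\nSVR,473455,0002\n')

# Stream-decompress in chunks instead of loading everything with read()
with gzip.open('file.gz.DONE', 'rb') as gzip_f, open('output.csv', 'wb') as out_f:
    shutil.copyfileobj(gzip_f, out_f)
```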
Read csv lines and save each as a separate txt file, named after the line
I have a problem with some simple code. I have a CSV file with one column and hundreds of rows. I would like code that reads each line of the CSV and saves it as a separate txt file. Importantly, the txt files should be named after the line that was read. Example: 1. Adam 2. Doroty 3. Pablo will give me adam.txt, doroty.txt and pablo.txt files. Please help.
This should do what you need on Python 3.6+:

```python
with open('file.csv') as f:                              # Open file with hundreds of rows
    for name in f.read().split('\n'):                    # Get list of all names
        if name.strip():                                 # Skip blank lines
            with open(f'{name.strip()}.txt', 'w') as s:  # Create a file per name
                pass
```
Alternatively you can use the built-in csv module to avoid any complications with parsing CSV files:

```python
import csv

with open('names.csv') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        file_name = '{0}.txt'.format(row['first_name'])
        with open(file_name, 'w') as f:
            pass
```

Note that DictReader treats the first row as a header, so this assumes the column is named first_name.
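A compact variant of the same idea using pathlib, assuming (as above) a one-column file with one name per line; writing the name into each file and lower-casing the file names are choices made here for illustration, not something the question pins down:

```python
from pathlib import Path

# Build a sample one-column CSV so the sketch runs on its own
Path('names.csv').write_text('Adam\nDoroty\nPablo\n')

for name in Path('names.csv').read_text().splitlines():
    name = name.strip()
    if name:  # skip blank lines
        # Name each output file after the line, lower-cased as in the question's example
        Path(f'{name.lower()}.txt').write_text(name + '\n')
```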
Loop that will iterate a certain number of times through a CSV in Python
I have a large CSV file (~250,000 rows) and, before fully parsing and sorting it, I was trying to display only a part of it by writing it to a text file.

```python
csvfile = open(file_path, "rb")
rows = csvfile.readlines()
text_file = open("output.txt", "w")
row_num = 0
while row_num < 20:
    text_file.write(", ".join(row[row_num]))
    row_num += 1
text_file.close()
```

I want to iterate through the CSV file and write only a small section of it to a text file, so I can look at how it does this and see if it would be of any use to me. Currently the text file ends up empty. One way I thought of doing this would be to iterate through the file with a for loop that exits after a certain number of iterations, but I could be wrong and I'm not sure how to do this. Any ideas?
There's nothing specifically wrong with what you're doing, but it isn't particularly Pythonic. In particular, reading the whole file into memory with readlines() at the start seems pointless if you're only using 20 lines. Instead you could use a for loop with enumerate and break when necessary:

```python
csvfile = open(file_path, "rb")
text_file = open("output.txt", "w")
for i, row in enumerate(csvfile):
    text_file.write(row)
    if i >= 20:
        break
text_file.close()
```

You could further improve this by using with blocks to open the files, rather than closing them explicitly. For example:

```python
with open(file_path, "rb") as csvfile:
    # your code here involving csvfile
# now the csvfile is closed!
```

Also note that Python might not be the best tool for this: you could do it directly from Bash, for example, with just head -n20 csvfile.csv > output.txt.
A simple solution would be to just do:

```python
#!/usr/bin/python
# -*- encoding: utf-8 -*-

file_path = './test.csv'

with open(file_path, 'rb') as csvfile:
    with open('output.txt', 'wb') as textfile:
        for i, row in enumerate(csvfile):
            textfile.write(row)
            if i >= 20:
                break
```

Explanation:

```python
with open(file_path, 'rb') as csvfile:
    with open('output.txt', 'wb') as textfile:
```

Instead of using open and close, it is recommended to use these lines. Just write the code you want executed while your file is open at a new level of indentation. 'rb' and 'wb' are the modes for opening a file for 'reading' and 'writing', respectively, in 'binary mode'.

```python
for i, row in enumerate(csvfile):
```

This line reads your CSV file line by line, and the tuple (i, row) gives you both the content of each row and its index. That's one of the awesome built-in functions from Python: check out here for more about it. Hope this helps!

EDIT: Note that Python has a csv package that can do this without enumerate:

```python
# -*- encoding: utf-8 -*-
import csv

file_path = './test.csv'

with open(file_path, 'rb') as csvfile:
    reader = csv.reader(csvfile)
    with open('output.txt', 'wb') as textfile:
        writer = csv.writer(textfile)
        i = 0
        while i < 20:
            row = next(reader)
            writer.writerow(row)
            i += 1
```

All we need is its reader and writer. They have the functions next (which reads one line) and writerow (which writes one). Note that here the variable row is not a string but a list of strings, because the reader does the splitting itself. It might be faster than the previous solution. Also, this has the major advantage of allowing you to look anywhere you want in the file, not necessarily from the beginning (just change the bounds for i).
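A common idiom for "first N lines" that neither answer mentions is itertools.islice, which stops reading the source after N lines. A self-contained sketch (the test.csv contents here are invented; the 20-line cutoff follows the question):

```python
from itertools import islice

# Build a small sample CSV so the sketch runs on its own
with open('test.csv', 'w') as f:
    for n in range(100):
        f.write(f'row{n},value{n}\n')

# Copy only the first 20 lines without reading the rest of the file
with open('test.csv') as src, open('output.txt', 'w') as dst:
    dst.writelines(islice(src, 20))
```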
Is there a way to only read the header from a csv file in python?
I was wondering if there was a way to only read the header info from a csv file without loading the whole file. I'm dealing with large csv files and would rather not have to load the whole thing. Thanks!
```python
with open(filename) as in_file:
    csv_reader = csv.reader(in_file)
    header = next(csv_reader)
```

This works because csv.reader() returns a lazy iterator, not a list. It only reads data when next() is called on it (i.e. by using next(csv_reader), or by using it in a for loop, like for row in csv_reader).
You wouldn't normally read the whole file anyway. Just create the CSV reader and call next() on it once:

```python
import csv

f = open('myfile.csv')
reader = csv.reader(f)
header = next(reader)  # reader.next() in Python 2
```
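If you go through csv.DictReader instead, its fieldnames attribute reads just the header row on first access. A self-contained sketch (the myfile.csv contents and column names are invented for the demo):

```python
import csv

# Make a sample file so the sketch runs on its own
with open('myfile.csv', 'w', newline='') as f:
    f.write('name,age,city\n')
    f.write('Alice,30,Oslo\n')

with open('myfile.csv', newline='') as f:
    reader = csv.DictReader(f)
    # Accessing fieldnames reads only the header line, not the rest of the file
    header = reader.fieldnames

print(header)
```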