List to csv without commas in Python

I have the following problem.
I would like to save a list into a csv file (in the first column).
See the example here:
import csv

mylist = ["Hallo", "der Pixer", "Glas", "Telefon", "Der Kühlschrank brach kaputt."]

def list_na_csv(file, mylist):
    with open(file, "w", newline="") as csv_file:
        csv_writer = csv.writer(csv_file)
        csv_writer.writerows(mylist)

list_na_csv("example.csv", mylist)
My output in Excel looks like this (each word is spread across one row, one character per cell):
The desired output is this (each word in the first column, one word per row):
You can see that I have two issues: firstly, each character is followed by a comma. Secondly, I don't know how to use an encoding, for example UTF-8 or cp1250. How can I fix this, please?
I tried to search for similar questions, but nothing worked for me. Thank you.

You have two problems here.
writerows expects a list of rows, or said differently, a list of iterables. As a string is iterable, you write each word on a different row, one character per field. If you want one row with one word per field, you should use writerow:
csv_writer.writerow(mylist)
by default, the csv module uses the comma as the delimiter (this is the most common one). But Excel is a pain in the ass with it: it expects the delimiter to be the one of the locale, which is the semicolon (;) in many West European countries, including Germany. If you want to use your file easily with Excel you should change the delimiter:
csv_writer = csv.writer(csv_file, delimiter=';')
After your edit, you want all the data in the first column, one element per row. This is a kind of degenerate csv file, because it only has one value per record and no separator. If the fields can never contain a semicolon nor a newline, you could just write a plain text file:
...
    with open(file, "w", newline="") as csv_file:
        for row in mylist:
            print(row, file=csv_file)
...
If you want to be safe and prevent future problems if you later want to process more corner cases values, you could still use the csv module and write one element per row by including it in another iterable:
...
    with open(file, "w", newline="") as csv_file:
        csv_writer = csv.writer(csv_file, delimiter=';')
        csv_writer.writerows([elt] for elt in mylist)
...
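Putting the pieces together (one element per row, a semicolon delimiter for a German-locale Excel, and an explicit encoding, since the question also asks about UTF-8/cp1250), a minimal sketch could be:

```python
import csv

mylist = ["Hallo", "der Pixer", "Glas", "Telefon", "Der Kühlschrank brach kaputt."]

# newline="" avoids spurious blank lines on Windows; encoding goes straight to open()
with open("example.csv", "w", newline="", encoding="utf-8") as csv_file:
    csv_writer = csv.writer(csv_file, delimiter=";")
    # wrap each string in a one-element list: one field per row
    csv_writer.writerows([word] for word in mylist)
```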

l = ["Hallo", "der Pixer", "Glas", "Telefon", "Der Kühlschrank brach kaputt."]

with open("file.csv", "w") as msg:
    msg.write(",".join(l))
For less trivial examples:
l = ["Hallo", "der, Pixer", "Glas", "Telefon", "Der Kühlschrank, brach kaputt."]

with open("file.csv", "w") as msg:
    msg.write(",".join(['"' + x + '"' for x in l]))
Here you basically wrap every list element in quotes, to guard against the intra-field comma problem.
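If a field can also contain a double quote itself, this hand-rolled quoting breaks; the csv module doubles embedded quotes automatically, so the same one-row output can be produced safely with quoting=csv.QUOTE_ALL (the list here is a made-up variant with an embedded quote):

```python
import csv

l = ["Hallo", "der, Pixer", 'ein "Glas"', "Telefon"]

with open("file.csv", "w", newline="") as msg:
    writer = csv.writer(msg, quoting=csv.QUOTE_ALL)
    # every field is quoted; the embedded " in 'ein "Glas"' is doubled to ""
    writer.writerow(l)
```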

Try this, it will work 100%:

import csv

mylist = ["Hallo", "der Pixer", "Glas", "Telefon", "Der Kühlschrank brach kaputt."]

def list_na_csv(file, mylist):
    # newline="" avoids blank lines between rows on Windows
    with open(file, "w", newline="") as csv_file:
        csv_writer = csv.writer(csv_file)
        csv_writer.writerow(mylist)

list_na_csv("example.csv", mylist)

If you want to write the entire list of strings to a single row, use csv_writer.writerow(mylist) as mentioned in the comments.
If you want to write each string to a new row, as I believe your reference to writing them in the first column implies, you'll have to format your data as the class expects: "A row must be an iterable of strings or numbers for Writer objects". On this data that would look something like:
csv_writer.writerows((entry,) for entry in mylist)
There, I'm using a generator expression to wrap each word in a tuple, thus making it an iterable of strings. Without something like that, your strings are themselves iterables and lead to it delimiting between each character as you've seen.
Using csv to write a single entry per line is almost pointless, but it does have the advantage that it will escape your delimiter if it appears in the data.
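That escaping is easy to see with an in-memory buffer (io.StringIO standing in for a file):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter=";")
# the field containing the delimiter gets quoted, the other does not
writer.writerow(["a;b", "c"])
print(repr(buf.getvalue()))  # '"a;b";c\r\n'
```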
To specify an encoding, the docs say:
Since open() is used to open a CSV file for reading, the file will by default be decoded into unicode using the system default encoding (see locale.getpreferredencoding()). To decode a file using a different encoding, use the encoding argument of open:

    import csv
    with open('some.csv', newline='', encoding='utf-8') as f:
        reader = csv.reader(f)
        for row in reader:
            print(row)
The same applies to writing in something other than the system default encoding: specify the encoding argument when opening the output file.
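Applied to the question's data, writing in cp1250 (one of the encodings the asker mentions) is just the encoding argument on the output file; a small sketch, with a hypothetical file name:

```python
import csv

mylist = ["Hallo", "der Pixer", "Glas", "Telefon", "Der Kühlschrank brach kaputt."]

# hypothetical output name; cp1250 covers the German umlauts in the data
with open("example_cp1250.csv", "w", newline="", encoding="cp1250") as f:
    writer = csv.writer(f)
    writer.writerows((entry,) for entry in mylist)
```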

try split("\n")
example:

counter = 0
amazinglist = ["hello", "hi"]
for x in amazinglist:
    ok = amazinglist[counter].split("\n")
    writer.writerow(ok)
    counter += 1

Related

csv.writer exports horizontally

I have a list filled with values. I want to export the list to a .csv file.
I want the values to be on top of each other, yet right now they get exported next to each other.
e.g. I want it to be like:
1
2
3
not
1 2 3
How do I have to change my code?
with open('C:(...)file.csv', "w", newline='\n') as csfile:
    wr = csv.writer(csfile, delimiter=';')
    wr.writerow([item[0] for item in sntlst])
It is important to receive item[0] of my list.
Change the delimiter argument to '\n':
with open('C:(...)file.csv', "w", newline='\n') as csfile:
    wr = csv.writer(csfile, delimiter='\n')
    wr.writerow([item[0] for item in sntlst])
The newline='\n' isn't even necessary, so you can do without it:
with open('C:(...)file.csv', "w") as csfile:
    wr = csv.writer(csfile, delimiter='\n')
    wr.writerow([item[0] for item in sntlst])
The answer you have accepted (by @Ann Zen) may work, but that's something of a fluke: technically it abuses the delimiter keyword argument with respect to the sample shown in the documentation for opening files for a csv.writer, and because of that it might not work properly on all platforms.
So, that said, in my opinion the code shown below would be a proper and better way to do things. It makes each item in your list a "row" of one item in the CSV file that's being created.
import csv

sntlst = [1, 2, 3]
output_file_path = 'list_file.csv'

with open(output_file_path, "w", newline='') as csvfile:
    wr = csv.writer(csvfile)
    wr.writerows([item] for item in sntlst)

How to remove delimiters when reading csv file in Python?

Just trying to learn python and trying to help a friend with taking a column from a .csv file to print it with a label-maker. The first problem I came across is this:
I will use this example file: test.csv
1111,2222,3333,4444
aaaa,bbbb,cccc,dddd
aaaa,bbbb,cccc,dddd
I run it through:
import csv

with open('test.csv', 'r') as csv_File:
    csv_reader = csv.reader(csv_File)
    with open('test2.csv', 'w') as new_file:
        csv_writer = csv.writer(new_file)
        for line in csv_reader:
            csv_writer.writerow(line[1])
and get the output:
2,2,2,2
b,b,b,b
b,b,b,b
I want the output:
2222
bbbb
bbbb
what am I doing wrong?
writerow is expecting a whole list to write as a row, just as you got a whole list from the reader. To output one field only you should wrap it in a list:
csv_writer.writerow([line[1]])
But note it would be simpler to just write the data directly, since you don't need any of the functionality that the CSV writer gives you:
with open('test2.csv', 'w') as new_file:
    for line in csv_reader:
        new_file.write(line[1] + '\n')
writerow takes an iterable of data for one row. You provide it a single string, which gets interpreted as an iterable, and each character gets written as a separate column.
Fix:
csv_writer.writerow([line[1]])  # put the string into a list so you provide a single-item row

Read CSV with comma as linebreak

I have a file saved as .csv
"400":0.1,"401":0.2,"402":0.3
Ultimately I want to save the data in a proper format in a csv file for further processing. The problem is that there are no line breaks in the file.
pathname = r"C:\pathtofile\file.csv"

with open(pathname, newline='') as file:
    reader = file.read().replace(',', '\n')
    print(reader)

with open(r"C:\pathtofile\filenew.csv", 'w') as new_file:
    csv_writer = csv.writer(new_file)
    csv_writer.writerow(reader)
The printed reader output looks exactly how I want it (or at least it's a format I can process further).
"400":0.1
"401":0.2
"402":0.3
And now I want to save that to a new csv file. However the output looks like
"""",4,0,0,"""",:,0,.,1,"
","""",4,0,1,"""",:,0,.,2,"
","""",4,0,2,"""",:,0,.,3
I'm sure it would be intelligent to convert the format to
400,0.1
401,0.2
402,0.3
at this stage instead of doing later with another script.
The main problem is that my current code
with open(pathname, newline='') as file:
    reader = file.read().replace(',', '\n')
    reader = csv.reader(reader, delimiter=':')
    x = []
    y = []
    print(reader)
    for row in reader:
        x.append(float(row[0]))
        y.append(float(row[1]))
    print(x)
    print(y)
works fine for the type of csv files I currently have, but doesn't work for these mentioned above:
y.append( float(row[1]) )
IndexError: list index out of range
So I'm trying to find a way to work with them too. I think I'm missing something obvious as I imagine that it can't be too hard to properly define the linebreak character and delimiter of a file.
with open(pathname, newline=',') as file:
yields
ValueError: illegal newline value: ,
The right way with csv module, without replacing and casting to float:
import csv

with open('file.csv', 'r') as f, open('filenew.csv', 'w', newline='') as out:
    reader = csv.reader(f)
    writer = csv.writer(out, quotechar=None)
    for r in reader:
        for i in r:
            writer.writerow(i.split(':'))
The resulting filenew.csv contents (according to your "intelligent" condition):
400,0.1
401,0.2
402,0.3
Nuances:
csv.reader and csv.writer objects treat the comma , as the default delimiter (so there is no need for file.read().replace(',', '\n'))
quotechar=None is specified for the csv.writer object to eliminate double quotes around the values being saved
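The whole pipeline can be checked in memory, with io.StringIO standing in for both files:

```python
import csv
import io

src = io.StringIO('"400":0.1,"401":0.2,"402":0.3')
out = io.StringIO()

reader = csv.reader(src)
writer = csv.writer(out, quotechar=None)  # quotechar=None implies QUOTE_NONE
for r in reader:
    for i in r:
        writer.writerow(i.split(':'))

print(out.getvalue())
```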
You need to split the values to form a list to represent a row. Presently the code is splitting the string into individual characters to represent the row.
pathname = r"C:\pathtofile\file.csv"

with open(pathname) as old_file:
    with open(r"C:\pathtofile\filenew.csv", 'w', newline='') as new_file:
        csv_writer = csv.writer(new_file, delimiter=',')
        text_rows = old_file.read().split(",")
        for row in text_rows:
            items = row.split(":")
            # strip the quotes around the key before converting to int
            csv_writer.writerow([int(items[0].strip('"')), items[1]])
If you look at the documentation for writerow, it says:
Write the row parameter to the writer’s file
object, formatted according to the current dialect.
But, you are writing an entire string in your code
csv_writer.writerow(reader)
because reader is a string at this point.
Now, the format you want to use in your CSV file is not clearly mentioned in the question. But as you said, if you can do some preprocessing to create a list of lists and pass each sublist to writerow(), you should be able to produce the required file format.
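A sketch of that preprocessing, building a list of lists from the single-line input (an in-memory string stands in for the question's file here):

```python
import csv
import io

# stand-in for the contents of the question's file
text = '"400":0.1,"401":0.2,"402":0.3'

# each "key":value pair becomes one [key, value] sublist
rows = [pair.split(':') for pair in text.split(',')]
rows = [[k.strip('"'), v] for k, v in rows]  # drop the quotes around the keys

out = io.StringIO()
csv.writer(out).writerows(rows)
print(out.getvalue())
```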

How to split tsv file into smaller tsv file based on row values

I have a tsv file in.txt which I would like to split into a smaller tsv file called out.txt.
I would like to import only the rows of in.txt which contain a string value My String Value in column 6 into out.txt.
import csv

# r is textmode
# rb is binary mode
# binary mode is faster
with open('in.txt', 'rb') as tsvIn, open('out.txt', 'w') as tsvOut:
    tsvIn = csv.reader(tsvIn, delimiter='\t')
    tsvOut = csv.writer(tsvOut)
    for row in tsvIn:
        if "My String Value" in row:
            tsvOut.writerows(row)
My output looks like this.
D,r,a,m,a
1,9,6,1,-,0,4,-,1,3
H,y,u,n, ,M,o,k, ,Y,o,o
B,e,o,m,-,s,e,o,n, ,L,e,e
M,u,-,r,y,o,n,g, ,C,h,o,i,",", ,J,i,n, ,K,y,u, ,K,i,m,",", ,J,e,o,n,g,-,s,u,k, ,M,o,o,n,",", ,A,e,-,j,a, ,S,e,o
A, ,p,u,b,l,i,c, ,a,c,c,o,u,n,t,a,n,t,',s, ,s,a,l,a,r,y, ,i,s, ,f,a,r, ,t,o,o, ,s,m,a,l,l, ,f,o,r, ,h,i,m, ,t,o, ,e,v,e,n, ,g,e,t, ,a, ,c,a,v,i,t,y, ,f,i,x,e,d,",", ,l,e,t, ,a,l,o,n,e, ,s,u,p,p,o,r,t, ,h,i,s, ,f,a,m,i,l,y,., ,H,o,w,e,v,e,r,",", ,h,e, ,m,u,s,t, ,s,o,m,e,h,o,w, ,p,r,o,v,i,d,e, ,f,o,r, ,h,i,s, ,s,e,n,i,l,e,",", ,s,h,e,l,l,-,s,h,o,c,k,e,d, ,m,o,t,h,e,r,",", ,h,i,s, ,.,.,.
K,o,r,e,a,n,",", ,E,n,g,l,i,s,h
S,o,u,t,h, ,K,o,r,e,a
It should look like this with tab separated values
Drama Hyn Mok Yoo A public accountant's salary is far to small for him...etc
There are a few things wrong with your code. Let's look at it line by line...
import csv
Import module csv. Ok.
with open('in.txt','rb') as tsvIn, open('out.txt', 'w') as tsvOut:
With auto-closed binary file read handle tsvIn from in.txt, and text write handle tsvOut from out.txt, do... (Note: you probably want to use mode wb instead of mode w; see this post)
tsvIn = csv.reader(tsvIn, delimiter='\t')
Let tsvIn be the result of the call of function reader in module csv with arguments tsvIn and delimiter='\t'. Ok.
tsvOut = csv.writer(tsvOut)
Let tsvOut be the result of the call of function writer in module csv with argument tsvOut. You proably want to add another argument, delimiter='\t', too.
for row in tsvIn:
For each element in tsvIn as row, do...
if "My String Value" in row:
If string "My String Value" is present in row. You mentioned that you wanted to show only those rows whose sixth element was equal to the string, thus you should use something like this instead...
if len(row) >= 6 and row[5] == "My String Value":
This means: If the length of row is at least 6, and the sixth element of row is equal to "My String Value", do...
tsvOut.writerows(row)
Call method writerows of object tsvOut with argument row. Remember that in Python, a string is just a sequence of characters, and a character is a single-element string; thus, a character is a sequence. Then, we have that row is, according to the docs, a list of strings, each representing a column of the row. Thus, a row is a list of strings. Then, we have the writerows method, which expects a list of rows, that is, a list of lists of strings, that is, a list of lists of sequences of characters. It happens that writerows interprets each of row's elements as a row, when it's actually a string, and each element of that string as a field (as characters are strings!). All of this means that you'll get a messy, character-by-character output. You should try this instead...
tsvOut.writerow(row)
Method writerow expects a single row as an argument, not a list of rows, thus this will yield the expected result.
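Putting those corrections together, a minimal corrected version might look like this (the sample rows are made up for illustration):

```python
import csv

# made-up sample: only the second line has "My String Value" in column 6
with open('in.txt', 'w', newline='') as f:
    f.write("a\tb\tc\td\te\tother\n")
    f.write("f\tg\th\ti\tj\tMy String Value\n")

with open('in.txt', newline='') as tsvIn, open('out.txt', 'w', newline='') as tsvOut:
    reader = csv.reader(tsvIn, delimiter='\t')
    writer = csv.writer(tsvOut, delimiter='\t')
    for row in reader:
        # keep only rows whose sixth column matches exactly
        if len(row) >= 6 and row[5] == "My String Value":
            writer.writerow(row)
```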
try this:

import csv

with open('in.txt', 'r') as tsvIn, open('out.txt', 'w', newline='') as tsvOut:
    reader = csv.reader(tsvIn, delimiter='\t')
    writer = csv.writer(tsvOut, delimiter='\t')
    [writer.writerow(row) for row in reader if "My String Value" in row]

Python: How do I delete periods occurring alone in a CSV file?

I have a bunch of CSV files. In some of them, missing data are represented by empty cells, but in others there is a period. I want to loop over all my files, open them, delete any periods that occur alone, and then save and close the file.
I've read a bunch of other questions about doing whole-word-only searches using re.sub(). That is what I want to do (delete . when it occurs alone but not the . in 3.5), but I can't get the syntax right for a whole-word-only search where the whole word is a special character ('.'). Also, I'm worried those answers might be a little different in the case where a whole word can be distinguished by tab and newlines too. That is, does /b work in my CSV file case?
UPDATE: Here is a function I wound up writing after seeing the help below. Maybe it will be useful to someone else.
import csv, re

def clean(infile, outfile, chars):
    '''
    Open a file, remove all specified special characters used to represent
    missing data, and save.

    infile:  An input file path
    outfile: An output file path
    chars:   A list of strings representing missing values to get rid of
    '''
    in_temp = open(infile)
    out_temp = open(outfile, 'wb')
    csvin = csv.reader(in_temp)
    csvout = csv.writer(out_temp)
    for row in csvin:
        row = re.split('\t', row[0])
        for colno, col in enumerate(row):
            for char in chars:
                if col.strip() == char:
                    row[colno] = ''
        csvout.writerow(row)
    in_temp.close()
    out_temp.close()
Something like this should do the trick... This data wouldn't happen to be coming out of SAS would it - IIRC, that quite often used '.' as missing for numeric values.
import csv

with open('input.csv') as fin, open('output.csv', 'wb') as fout:
    csvin = csv.reader(fin)
    csvout = csv.writer(fout)
    for row in csvin:
        for colno, col in enumerate(row):
            if col.strip() == '.':
                row[colno] = ''
        csvout.writerow(row)
Why not just use the csv module?
#!/usr/bin/env python
import csv

with open(somefile) as infile:
    r = csv.reader(infile)
    rows = []
    for row in r:
        rows.append(['' if f == "." else f for f in row])

with open(newfile, 'w', newline='') as outfile:
    w = csv.writer(outfile)
    w.writerows(rows)
The safest way would be to use the CSV module to process the file, then identify any fields that only contain ., delete those and write the new CSV file back to disk.
A brittle workaround would be to search and replace a dot that is not surrounded by alphanumerics: \B\.\B is the regex that would find those dots. But that might also find other dots like the middle dot in "...".
So, to find a dot that is surrounded by commas, you could search for (?<=,)\.(?=,).
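As a concrete illustration of that lookaround on a made-up line (note it still misses a lone dot at the very start or end of the line):

```python
import re

line = "3.5,.,hello,.,1.2"
# remove a dot only when it sits alone between two commas
cleaned = re.sub(r'(?<=,)\.(?=,)', '', line)
print(cleaned)  # 3.5,,hello,,1.2
```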
