I have three files, 1.txt, 2.txt, and 3.txt, and I am trying to concatenate their contents into one output file in Python. Can anyone explain why the code below only writes the content of 1.txt and not 2.txt or 3.txt? I'm sure it's something really simple, but I can't seem to figure out the problem.
import glob
import shutil

for my_file in glob.iglob('/Users/me/Desktop/*.txt'):
    with open('concat_file.txt', "w") as concat_file:
        shutil.copyfileobj(open(my_file, "r"), concat_file)
Thanks for the help!
You constantly overwrite the same file: opening it with mode "w" truncates it on every loop iteration, so only the last file copied survives.
Either open it in append mode:
with open('concat_file.txt', "a")
or
with open('concat_file.txt', "w") as concat_file:
for my_file in glob.iglob('/Users/me/Desktop/*.txt'):
shutil.copyfileobj(open(my_file, "r"), concat_file)
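For the first option, a minimal sketch of the append variant (assuming concat_file.txt does not itself sit in the globbed folder, otherwise it would be copied into itself):

import glob
import shutil

# Append mode: the output file is not truncated on each iteration.
# Note that re-running the script keeps appending, so you may want to
# delete or truncate concat_file.txt first.
for my_file in glob.iglob('/Users/me/Desktop/*.txt'):
    with open('concat_file.txt', 'a') as concat_file, open(my_file, 'r') as src:
        shutil.copyfileobj(src, concat_file)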
I believe that what's wrong with your code is that in every loop iteration, you are essentially adding files to themselves.
If you manually unroll the loop you will see what I mean:
# my_file = '1.txt'
concat_file = open(my_file)
shutil.copyfileobj(open(my_file, 'r'), concat_file)
# ...
I'd suggest deciding beforehand which file you want all the files to be copied to, maybe like this:
import glob
import shutil

output_file = open('output.txt', 'w')
for my_file in glob.iglob('/Users/me/Desktop/*.txt'):
    shutil.copyfileobj(open(my_file, 'r'), output_file)
output_file.close()
I have a list of .txt files in one folder, with names like: "image1.txt", "image2.txt", "image3.txt", etc.
I need to perform some operations for each file.
I was trying this:

import glob

for each_file in glob.glob("C:\...\image\d+\.txt"):
    print(each_file)  # or whatever

But it doesn't seem to work. How can I solve this?
I think you are looking for something like this:
import os

for file in os.listdir('parent_folder'):
    with open(os.path.join('parent_folder', file), 'r') as f:
        data = f.read()
        # operation on data

# Alternatively:
for i in range(10):
    with open(f'image{i}.txt', 'r') as f:
        data = f.read()
        # operation on data
The with statement takes care of everything to do with the file, so you don't need to worry about the file after it goes out of scope.
If you want to read and also write to the file in the same operation, use open(file, 'r+') and then do the following:
with open(f'image{i}.txt', 'r+') as f:
    data = f.read()
    # operation on data
    f.seek(0)
    f.write(data)
    f.truncate()
See this answer that I wrote.
Path objects have a read_text method; as long as the contents can be decoded, it will read them, so you shouldn't have a problem with text files. Also, since you are using Windows paths, make sure to put an r before the string, like this: r"C:\...\image\d+\.txt", or change the direction of the slashes. A quick example:
from pathlib import Path

for f in Path(r"C:\...\image\d+").rglob('**/*.txt'):
    print(f.read_text())
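If you prefer forward slashes, and since the files are named image1.txt, image2.txt, and so on, a glob character class can stand in for the regex-style \d+ from the question. A small sketch (the folder path here is just a hypothetical placeholder):

from pathlib import Path

# Hypothetical folder name; adjust to wherever your image*.txt files live.
folder = Path("C:/path/to/images")

# glob patterns are not regexes, but a character class gets close to \d+:
# image[0-9]*.txt matches image1.txt, image23.txt, and so on.
for f in folder.glob("image[0-9]*.txt"):
    print(f.read_text())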
I am trying to open a file using a path instead of a file name. I used glob.glob to search the path for an input file, and now I'm stuck on opening it. Any help would be appreciated.
import glob
a = (glob.glob("*/file.txt"))
with open (a, 'r') as f:
I am trying to read file.txt and I am getting an error on line 3 (the with open line).
Error: TypeError: expected str, bytes or os.PathLike object, not list
glob.glob returns a list of file paths. You will need to access one of the paths in the list, or iterate over them.
import glob

a = glob.glob("*/file.txt")
with open(a[0], 'r') as f:
    text = f.read()
glob.glob() returns a list. You need to loop through it, opening each file.
import glob

for filename in glob.glob("*/file.txt"):
    with open(filename, "r") as f:
        ...
I have a noob python question... so bear with me.
Can I open multiple files before closing the previous one?
So... can I run
import os

files = []
for file in os.listdir(os.curdir):
    files.append(open(file, 'w'))
Then edit each file as I want and finish with
for file in files:
    file.close()
Thanks in advance
Seems legit and works fine.
Doing operations would be hard for you this way: the list files doesn't contain the filenames, so you would not know which file is which.
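That said, if you do want to keep track of which open file belongs to which name, one option (just a sketch of the idea) is a dict keyed by filename instead of a plain list:

import os

# Map each filename to its open file object, so files can be looked up by name.
# Like the code in the question, this assumes the current directory contains
# only regular files (a subdirectory would make open(..., 'w') fail).
files = {}
for name in os.listdir(os.curdir):
    files[name] = open(name, 'w')

# ... write to files['some_name.txt'] as needed ...

for f in files.values():
    f.close()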
It is perfectly fine to open each file using open and later close all of them. However, you will want to make sure all of your files are closed properly.
Normally, you would do this for one file:
with open(filename, 'w') as f:
    do_something_with_the_file(f)
# the file is closed here, regardless of what happens,
# i.e. even in case of an exception
You could do the same with multiple files:
with open(filename1, 'w') as f1, open(filename2, 'w') as f2:
    do_something_with_the_files(f1, f2)
# both files are closed here
Now, if you have N files, you could write your own context manager, but that would probably be overkill. Instead, I would suggest:
open_files = []
try:
    for filename in list_of_filenames:
        open_files.append(open(filename, 'w'))
    # do something with the files here
finally:
    for file in open_files:
        file.close()
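As a side note, on Python 3.3+ the standard library's contextlib.ExitStack packages up exactly this try/finally pattern; a short sketch:

from contextlib import ExitStack

with ExitStack() as stack:
    # enter_context registers each file with the stack, so every file is
    # closed when the with block exits, even if an exception occurs mid-way.
    open_files = [stack.enter_context(open(filename, 'w'))
                  for filename in list_of_filenames]
    # do something with the files here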
BTW, your own code deletes the contents of all files in the current directory. I am not sure you wanted that:
for file in os.listdir(os.curdir):
    files.append(open(file, 'w'))  # open(file, 'w') empties the file!!!
Maybe you wanted open(file, 'r') or open(file, 'a') instead? (see https://docs.python.org/2/library/functions.html#open)
Your solution will certainly work, but the recommended way is to use a context manager so that the files are handled seamlessly. For example:
for filename in os.listdir(os.curdir):
    with open(filename, 'w') as f:
        # do some actions on the file
        ...
The with statement will take care of closing the file for you.
I need to save 6 lists in a CSV file and load them back when the program opens. Help me please.
This is one of my lists; all of the lists get updated. These are some sample details:
list_of_DVDsuppliers = [["a", "m", 15], ["w", "p", 34]]
I tried this code but it won't work.
import csv
list_of_DVDsuppliers=[["a","m",15],["w","p",34]]
myfile = open("pppp.csv", 'wb')
wr = csv.writer(myfile, quoting=csv.QUOTE_NONE)
wr.writerow(list_of_DVDsuppliers)
myfile.close
How do I save to and read back from the CSV file?
I still need help with this.
You want
wr.writerows(list_of_DVDsuppliers)
with an s, because you're writing more than one row.
As well,
myfile.close
doesn't do anything: you want
myfile.close()
to actually call it, not merely mention the name.
Better would be to use a with block:
with open("pppp.csv", "wb") as myfile:
wr = csv.writer(myfile, quoting=csv.QUOTE_NONE)
wr.writerows(list_of_DVDsuppliers)
because then you never have to remember to close the file, it's done automatically.
This produces a file looking like
a,m,15
w,p,34
which I'm guessing is what you want (you might want the list flattened and written as one row instead.)
[PS: I'm assuming Python 2 here. Otherwise it should be open("pppp.csv", "w", newline='').]
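For the "load back when the program opens" half of the question, csv.reader is the mirror image. A sketch based on your sample data (the int() call assumes the third column should come back as a number, since csv hands you strings):

import csv

list_of_DVDsuppliers = []
# Python 2 style to match the above; on Python 3 use open("pppp.csv", "r", newline='')
with open("pppp.csv", "rb") as myfile:
    rd = csv.reader(myfile)
    for row in rd:
        # each row comes back as a list of strings, e.g. ['a', 'm', '15']
        list_of_DVDsuppliers.append([row[0], row[1], int(row[2])])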
I am trying to automate a process where, in a specific folder, there are multiple text files following the same data format/structure. In the text files, the data is separated by a comma. I want to be able to output all of these text files into one cumulative CSV file. This is what I currently have, and I seem to be stuck because of my lack of Python knowledge.
from collections import defaultdict
import glob

def get_site_files():
    sites = defaultdict(list)
    for fname in glob.glob('*.txt'):
        csv_out = csv.writer(open('out.csv', 'w'), delimiter=',')
        f = open('myfile.txt')
        for line in f:
            vals = line.split(',')
            csv_out.writerow()
        f.close()
EDIT: to bring up the comments: I want to make sure that all of the text files are read, not just myfile.txt.
Also, if I could combine them all into one large .txt file and then turn that into a CSV, that would be great too; I'm just not sure of the exact way to do this.
Just a little bit of reordering of your code.
import csv
import glob

def get_site_files():
    with open('out.csv', 'w') as out_file:
        csv_out = csv.writer(out_file, delimiter=',')
        for fname in glob.glob('*.txt'):
            with open(fname) as f:
                for line in f:
                    vals = line.split(',')
                    csv_out.writerow(vals)
get_site_files()
But since they are all in the same format you can just concatenate them:
import glob

with open('out.csv', 'w') as fout:
    for fname in glob.glob('*.txt'):
        with open(fname, 'r') as fin:
            fout.write(fin.read())
You could also try a different way:
I have used os.listdir() before; it gives you a list of all the files in your directory. In combination with os.path.join you can handle all *.csv files in a certain directory.
Some additional information can be found in the reference documentation: os and os.path.
So I would just loop through all the files in the directory (looking for the ones ending in ".csv"), and for each of them store each line in a list as a string, split the strings on the column delimiter, change "," to "." in the resulting pieces, and concatenate the strings again. Afterwards, write each line of the list to the output file you wish to use.
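A rough sketch of that idea; the folder name, output name, and the ';' column delimiter are assumptions here (the comma-to-dot step only makes sense if your files use ',' as a decimal mark), so adjust them to your data:

import os

input_folder = 'data'                 # hypothetical input directory
output_path = 'combined_output.csv'   # hypothetical output file

with open(output_path, 'w') as out_file:
    for name in os.listdir(input_folder):
        if name.endswith('.csv'):
            with open(os.path.join(input_folder, name)) as in_file:
                for line in in_file:
                    # split on the column delimiter, change ',' to '.' in each
                    # column, then join the columns back together for the output
                    cols = [col.replace(',', '.') for col in line.rstrip('\n').split(';')]
                    out_file.write(','.join(cols) + '\n')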
I highly recommend the Python standard library documentation to newbies for an overview of everything Python can do ;)
Hope that helps ;)
I adapted the code above to go the other direction: it converts all the CSV files in a folder into one text file, appending them one after another. Works great.
import glob
import csv

def get_site_files():
    with open('out.txt', 'w') as out_file:
        csv_out = csv.writer(out_file, delimiter=',')
        for fname in glob.glob('*.csv'):
            with open(fname) as f:
                for line in f:
                    vals = line.split(',')
                    csv_out.writerow(vals)