I have a zip file whose entries include a path. When I unzip the file using Python into my target folder, it recreates the whole path inside the target folder.
Target: d:\unzip_files
zip file has a path and file name of: \NIS\TEST\Files\tnt.png
What happens: d:\unzip_files\NIS\TEST\Files\tnt.png
Is there a way to have it just unzip the tnt.png file into d:\unzip_files? Or will I have to walk the list, move the file, and then delete all of the empty folders?
import os, sys, zipfile
zippath = r"D:\zip_files\test.zip"
zipdir = r"D:\unzip_files"
zfile = zipfile.ZipFile(zippath, "r")
for name in zfile.namelist():
    zfile.extract(name, zipdir)
zfile.close()
So, this is what worked:
import os, sys, zipfile
zippath = r"D:\zip_files\test.zip"
zipdir = r"D:\unzip_files"
zfile = zipfile.ZipFile(zippath, "r")
for name in zfile.namelist():
    fname = os.path.join(zipdir, os.path.basename(name))
    fout = open(fname, "wb")
    fout.write(zfile.read(name))
    fout.close()
Thanks for the help.
How about reading each file as binary data and dumping it directly? You still need to handle cases where a file with the same name already exists.
for name in zfile.namelist():
    fname = os.path.join(zipdir, os.path.basename(name))
    fout = open(fname, 'wb')
    fout.write(zfile.read(name))
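For what it's worth, here is a minimal sketch of the same flattening approach using context managers: it skips directory entries and leaves pre-existing files alone. The paths are the ones from the question, and ZipInfo.is_dir() assumes Python 3.6+.

import os
import zipfile

zippath = r"D:\zip_files\test.zip"
zipdir = r"D:\unzip_files"

with zipfile.ZipFile(zippath, "r") as zfile:
    for info in zfile.infolist():
        if info.is_dir():
            # skip directory entries so no folders get created
            continue
        fname = os.path.join(zipdir, os.path.basename(info.filename))
        if os.path.exists(fname):
            # a file with this name already exists; skip it (or rename/overwrite as you prefer)
            continue
        with zfile.open(info) as src, open(fname, "wb") as dst:
            dst.write(src.read())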
I would like to read all the contents from all the text files in a directory. I have 4 text files in the "path" directory, and my code is:
for filename in os.listdir(path):
    filepath = os.path.join(path, filename)
    with open(filepath, mode='r') as f:
        content = f.read()
        thelist = content.splitlines()
        f.close()
    print(filepath)
    print(content)
    print()
When I run the code, I can only read the contents of one text file.
I would be thankful for any advice or suggestions, or for pointers to any other informative questions on Stack Overflow about this.
If you need to filter the file names by suffix, i.e. file extension, you can use either the string method endswith or the glob module from the standard library: https://docs.python.org/3/library/glob.html
Here is an example that saves each file's content as a string in a list.
import os
path = '.' # or your path
files_content = []
for filename in filter(lambda p: p.endswith("txt"), os.listdir(path)):
    filepath = os.path.join(path, filename)
    with open(filepath, mode='r') as f:
        files_content += [f.read()]
Here is an example using glob:
import glob
for filename in glob.glob('*txt'):
    print(filename)
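If you also want the contents and not just the names, the same glob filter can drive the reading loop; a minimal sketch reusing the path and files_content names from above:

import glob
import os

path = '.'  # or your path
files_content = []

# let glob do both the filtering and the path joining
for filepath in glob.glob(os.path.join(path, '*.txt')):
    with open(filepath, mode='r') as f:
        files_content.append(f.read())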
This should list your files, and you can read them one by one. All the lines of the files are stored in the all_lines list. If you wish to store the content too, you can keep appending it as well:
from pathlib import Path
from os import listdir
from os.path import isfile, join
path = "path_to_dir"
only_files = [f for f in listdir(path) if isfile(join(path, f))]
all_lines = []
for file_name in only_files:
    file_path = Path(path) / file_name
    with open(file_path, 'r') as f:
        file_content = f.read()
        all_lines.append(file_content.splitlines())
        print(file_content)
# use all_lines
Note: when using with you do not need to call close() explicitly
Reference: How do I list all files of a directory?
Basically, if you want to read all the files, you need to save their contents somehow. In your example, you are overwriting thelist with content.splitlines(), which discards everything already in it.
Instead, you should define thelist outside of the loop and call thelist.append(content.splitlines()) each time, which adds the content to the list on each iteration.
Then you can iterate over thelist later and get the data out.
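Applied to the loop from the question, that fix looks roughly like this (a sketch keeping the original variable names):

import os

path = 'path'   # the directory from the question
thelist = []    # define the list outside the loop

for filename in os.listdir(path):
    filepath = os.path.join(path, filename)
    with open(filepath, mode='r') as f:
        content = f.read()
    thelist.append(content.splitlines())   # append instead of overwriting
    print(filepath)
    print(content)
    print()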
I want to find a string, e.g. "Version1", in the files of a folder that contains multiple ".c" and ".h" files, and replace it with "Version2.2.1" using a Python script.
Anyone know how this can be done?
Here's a solution using os, glob and ntpath. The results are saved in a directory called "output". You need to put this in the directory where you have the .c and .h files and run it.
Create a separate directory called output and put the edited files there:
import glob
import ntpath
import os
output_dir = "output"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
for f in glob.glob("*.[ch]"):
    with open(f, 'r') as inputfile:
        with open('%s/%s' % (output_dir, ntpath.basename(f)), 'w') as outputfile:
            for line in inputfile:
                outputfile.write(line.replace('Version1', 'Version2.2.1'))
Replace strings in place:
IMPORTANT! Please make sure to back up your files before running this:
import glob
for f in glob.glob("*.[ch]"):
with open(f, "r") as inputfile:
newText = inputfile.read().replace('Version1', 'Version2.2.1')
with open(f, "w") as outputfile:
outputfile.write(newText)
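If you want a safety net without copying files around by hand, one option (a sketch, not part of the answer above) is to write a .bak copy next to each file before replacing in place:

import glob
import shutil

for f in glob.glob("*.[ch]"):
    shutil.copy2(f, f + ".bak")   # keep a backup copy before touching the file
    with open(f, "r") as inputfile:
        newText = inputfile.read().replace('Version1', 'Version2.2.1')
    with open(f, "w") as outputfile:
        outputfile.write(newText)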
I need to open a file from a different directory without using its path while staying in the current directory.
When I execute the below code:
for file in os.listdir(sub_dir):
    f = open(file, "r")
    lines = f.readlines()
    for line in lines:
        line.replace("dst=", ", ")
        line.replace("proto=", ", ")
        line.replace("dpt=", ", ")
I get the error message FileNotFoundError: [Errno 2] No such file or directory: because the file is in a subdirectory.
Question: Is there an os command I can use that will locate and open the file in sub_dir?
Thanks! -let me know if this is a repeat, I searched and couldn't find one but may have missed it.
os.listdir() lists only the filenames, without a path. Prepend sub_dir to them again:
for filename in os.listdir(sub_dir):
    f = open(os.path.join(sub_dir, filename), "r")
If all you are doing is looping over the lines of the file, just loop over the file object itself; using with also makes sure that the file is closed for you when you are done. Last but not least, str.replace() returns the new string value rather than changing the string in place, so you need to store that return value:
for filename in os.listdir(sub_dir):
    with open(os.path.join(sub_dir, filename), "r") as f:
        for line in f:
            line = line.replace("dst=", ", ")
            line = line.replace("proto=", ", ")
            line = line.replace("dpt=", ", ")
You must give the full path if those files are not in the current directory:
f = open( os.path.join(sub_dir, file) )
I would not use file as a variable name; use filename instead, since file is a built-in name used to create file objects in Python 2.
Code to copy files using shutil
import shutil
import os
source_dir = "D:\\StackOverFlow\\datasets"
dest_dir = "D:\\StackOverFlow\\test_datasets"
files = os.listdir("D:\\StackOverFlow\\datasets")
if not os.path.exists(dest_dir):
    os.makedirs(dest_dir)

for filename in files:
    if filename.endswith(".txt"):
        shutil.copy(os.path.join(source_dir, filename), dest_dir)

print(os.listdir(dest_dir))
I would like to run a function over all files in one folder and create new files out of them. I have put the code for one file below. I would appreciate it if you could kindly help.
def newfield2(infile,outfile):
output = ["%s\t%s" %(item.strip(),2) for item in infile]
outfile.write("\n".join(output))
outfile.close()
return outfile
infile = open("E:/SAGA/data/2006last/325125401.all","r")
outfile = open("E:/SAGA/data/2006last/325125401_edit.all","r")
I would like to change all the files in the 'E:/SAGA/data/2006last/' folder and create new files with edit extension.
Use os.listdir() to list all files in a directory. The function returns just the filenames, not the full path. The os.path module gives you the tools to construct filenames as needed:
import os
folder = 'E:/SAGA/data/2006last'
for filename in os.listdir(folder):
    infilename = os.path.join(folder, filename)
    if not os.path.isfile(infilename): continue

    base, extension = os.path.splitext(filename)
    infile = open(infilename, 'r')
    outfile = open(os.path.join(folder, '{}_edit{}'.format(base, extension)), 'w')
    newfield2(infile, outfile)
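For reference, here is a variant of that loop using with blocks so both files are closed automatically; a sketch that reuses the newfield2 function from the question:

import os

folder = 'E:/SAGA/data/2006last'

for filename in os.listdir(folder):
    infilename = os.path.join(folder, filename)
    if not os.path.isfile(infilename):
        continue

    base, extension = os.path.splitext(filename)
    outfilename = os.path.join(folder, '{}_edit{}'.format(base, extension))
    # both files are closed when the with block ends
    with open(infilename, 'r') as infile, open(outfilename, 'w') as outfile:
        newfield2(infile, outfile)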
import os
def apply_to_all_files(path):
    for sub_path in os.listdir(path):
        next_path = os.path.join(path, sub_path)
        if os.path.isfile(next_path):
            infile = open(next_path, "r")
            outfile = open(next_path + '.out', "w")
            newfield2(infile, outfile)
My problem is as follows. I want to list all the file names in my directory and its subdirectories and have that output printed in a txt file. Now this is the code I have so far:
import os
for path, subdirs, files in os.walk('\Users\user\Desktop\Test_Py'):
    for filename in files:
        f = os.path.join(path, filename)
        a = open("output.txt", "w")
        a.write(str(f))
This lists the names of the files in the folders (there are 6) but each new file overwrites the old so there is only one file name in the output.txt file at any given time. How do I change this code so that it writes all of the file names in the output.txt file?
Don't open the file inside your for loop; open it before the loop, like this:
import os
a = open("output.txt", "w")
for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
    for filename in files:
        f = os.path.join(path, filename)
        a.write(str(f) + os.linesep)
Or using a context manager (which is better practice):
import os
with open("output.txt", "w") as a:
    for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
        for filename in files:
            f = os.path.join(path, filename)
            a.write(str(f) + os.linesep)
You are opening the file in write mode. You need append mode. See the manual for details.
change
a = open("output.txt", "w")
to
a = open("output.txt", "a")
You can use the code below to write only the file names from a folder:
import os
a = open("output.txt", "w")
for path, subdirs, files in os.walk(r'C:\temp'):
    for filename in files:
        a.write(filename + os.linesep)
If you want to avoid extra blank lines in the text file, include newline='' in the call to open() within the context manager. Then you won't have to reformat the text file later.
Code to write names of all files in a folder/directory:
import os

file_path = 'path_containing_files'

with open("Filenames.txt", mode='w', newline='') as fp:
    for file in os.listdir(file_path):
        f = os.path.join(file_path, file)
        fp.write(str(f) + os.linesep)
If you want to write the file names of a particular file type, e.g. XML, you can add an if condition:
file_path = 'path_containing_files'
with open("Filenames.txt", mode='w', newline='') as fp:
    for file in os.listdir(file_path):
        if file.endswith('.xml'):
            f = os.path.join(file_path, file)
            fp.write(str(f) + os.linesep)
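For completeness, the same idea with pathlib, which also descends into subdirectories the way the os.walk answers above do (a sketch; the root path is the one from the question):

from pathlib import Path

root = Path(r"C:\Users\user\Desktop\Test_Py")

with open("output.txt", mode='w', newline='') as fp:
    for p in root.rglob("*"):       # recurse into subdirectories
        if p.is_file():
            fp.write(str(p) + "\n")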