Improve performance of writing a file - Python 3.4

I am not well versed in Python. Based on my knowledge and some browsing, I wrote the script below. It looks for all files in the C:\temp\dats folder and writes their contents to the C:\temp\datsOutput\output.txt file. For some reason my code runs terribly slowly; can anyone advise me on how to improve its performance?
import os

a = open(r"C:\temp\datsOutput\output.txt", "w")
path = r'C:\temp\dats'
for filename in os.listdir(path):
    fullPath = path + "\\" + filename
    with open(fullPath, "r") as ins:
        for line in ins:
            a.write(line)

Two speedups. First, copy each whole file at once instead of line by line. Second, treat the files as binary (add a "b" after the "r" or "w" when opening a file).
Combined, this runs about 10x faster.
The final code looks like this:
import os

a = open(r"C:\temp\datsOutput\output.txt", "wb")
path = r'C:\temp\dats'
for filename in os.listdir(path):
    fullPath = path + "\\" + filename
    with open(fullPath, "rb") as ins:
        a.write(ins.read())
a.close()  # close the output file so buffered data is flushed to disk
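If some of the input files are too large to read comfortably into memory in one go, shutil.copyfileobj copies between open file objects in fixed-size chunks. A minimal sketch of the same loop using it, assuming the same C:\temp paths as above:
import os
import shutil

path = r'C:\temp\dats'
with open(r"C:\temp\datsOutput\output.txt", "wb") as out:
    for filename in os.listdir(path):
        with open(os.path.join(path, filename), "rb") as ins:
            # copy in 1 MiB chunks instead of loading the whole file
            shutil.copyfileobj(ins, out, 1024 * 1024)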

Related

How to make Python read multiple txt files

I need help with this Python script. I am a noob to Python, but I really need this to work.
I have over 200 .txt files, all located in different folders and subfolders, and in each txt file I have 2 codes that need to be edited.
This is what I have now, and it works great, but I have to rename the script to match the file name:
f1 = open('(1).txt', 'r')
f2 = open('(1A).txt', 'w')
for line in f1:
    f2.write(line.replace('"aplid": 2147483648', '"aplid": -2147483648'))
f1.close()
f2.close()
My goal is for the script to read any *.txt file in any folder:
import glob
import os

for f in glob.glob("*.txt"):
    f1 = open('f', 'r')
    f2 = open('f', 'w')
    for line in f1:
        f2.write(line.replace('"aplid": 2147483648,', '"aplid": -2147483648,'))
        f2.write(line.replace('"orlid": 2147483648,', '"orlid": -2147483648,'))
    f1.close()
    f2.close()
I made a batch script for this project and it works great, but it's super slow; it would be faster for me to edit the Python script for each txt file, lol.
I don't know if Python can read folders and subfolders like batch dir /b /s /a-d.
I'm sorry for bothering you all, but I searched online and couldn't find anything that helps, and most of what I did find I don't understand.
I keep reading that using a path can help, but this script is going to be placed on several computers, so hard-coding a path is not best.
You can use os.walk to traverse through a directory tree:
import os

for root, dirs, files in os.walk('.'):
    for name in files:
        if os.path.splitext(name)[1] != '.txt':
            continue
        nmin = os.path.join(root, name)
        with open(nmin, "r") as fin:
            data = fin.read()
        data = data.replace('"aplid": 2147483648,', '"aplid": -2147483648,') \
                   .replace('"orlid": 2147483648,', '"orlid": -2147483648,')
        with open(nmin, "w") as fout:
            fout.write(data)
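For comparison, pathlib (Python 3.5+) can express the same recursive walk more compactly: Path.rglob('*.txt') matches .txt files in the current directory and every subfolder. A minimal sketch of the same replacement using it:
from pathlib import Path

for txt in Path('.').rglob('*.txt'):
    # read the whole file, apply both replacements, write it back in place
    data = txt.read_text()
    data = data.replace('"aplid": 2147483648,', '"aplid": -2147483648,') \
               .replace('"orlid": 2147483648,', '"orlid": -2147483648,')
    txt.write_text(data)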

Edit multiple text files, and save as new files

My first post on StackOverflow, so please be nice. In other words, a super beginner to Python.
So I want to read multiple files from a folder, divide the text, and save the output as a new file. I have figured out this part of the code, but it only works on one file at a time. I have tried googling but can't figure out how to run this code on multiple text files in a folder and save each result as "output" plus a number. Is this doable?
with open("file_path") as fReader:
corpus = fReader.read()
loc = corpus.find("\n\n")
print(corpus[:loc], file=open("output.txt","a"))
Possibly work with a list, like:
from pathlib import Path

source_dir = Path("./")  # path to the directory
files = list(x for x in source_dir.iterdir() if x.is_file())
for i in range(len(files)):
    file = Path(files[i])
    outfile = "output_" + str(i) + file.suffix
    with open(file) as fReader, open(outfile, "w") as fOut:
        corpus = fReader.read()
        loc = corpus.find("\n\n")
        fOut.write(corpus[:loc])
Welcome to the site. Yes, what you are asking is completely doable, and you are on the right track. You will need to do a little research/practice with the os module, which is highly useful when working with files. The two functions you will want to research a bit are:
os.path.join()
os.listdir()
I would suggest you put two folders alongside your Python file, one called data and the other called output to catch the results. Start by seeing if you can list all the files in your data directory, then keep building up that loop. Something like this should list all the files:
# folder file lister/test writer
import os

source_folder_name = 'data'    # the folder to be read, in the SAME directory as this file
output_folder_name = 'output'  # will be used later...

files = os.listdir(source_folder_name)

# get this working first
for f in files:
    print(f)

# make output folder names and just write a 1-liner into each file...
for f in files:
    output_filename = f.split('.')[0]  # the part before the period
    output_filename += '_output.csv'
    output_path = os.path.join(output_folder_name, output_filename)
    with open(output_path, 'w') as writer:
        writer.write('some data')
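Putting the two pieces together, the same loop can read each data file, take the text before the first blank line (the corpus.find("\n\n") step from the question), and write it to a numbered output file. A minimal sketch, assuming the data and output folders exist next to the script:
import os

source_folder_name = 'data'
output_folder_name = 'output'

for i, f in enumerate(os.listdir(source_folder_name)):
    with open(os.path.join(source_folder_name, f)) as fReader:
        corpus = fReader.read()
    loc = corpus.find("\n\n")  # first blank line, as in the question
    out_path = os.path.join(output_folder_name, "output_" + str(i) + ".txt")
    with open(out_path, "w") as fOut:
        fOut.write(corpus[:loc])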

Python: Input and output filename matching

I'm trying to come up with a way for the files that I'm writing to have the same filenames as the files that I'm reading. The code currently reads the images and does some processing; my output will extract the data from that process into a csv file. I want both filenames to be the same. I've come across fname for matching, but that's for existing files.
So if your input file name is in_file = "myfile.jpg", do this:
my_outfile = "".join(in_file.split('.')[:-1]) + '.csv'
This splits in_file into a list of parts separated by '.', puts them back together minus the last part, and appends '.csv'.
Your my_outfile will be myfile.csv.
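The standard-library way to do the same split is os.path.splitext, which peels the extension off in one call:
import os

in_file = "myfile.jpg"  # example input name
root, ext = os.path.splitext(in_file)  # ("myfile", ".jpg")
my_outfile = root + ".csv"             # "myfile.csv"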
Well, in Python it's possible to do that, but the original file might be corrupted if we use the same exact file name: writing BibleKJV.pdf back to the path BibleKJV.pdf will clobber the first file. Take a look at this script to verify that I'm on the right track (if I'm totally off, disregard my answer):
import os
from PyPDF2 import PdfFileReader, PdfFileWriter

path = "C:/Users/Catrell Washington/Pride"
input_file_name = os.path.join(path, "BibleKJV.pdf")
input_file = PdfFileReader(open(input_file_name, "rb"))

output_PDF = PdfFileWriter()
total_pages = input_file.getNumPages()
for page_num in range(1, total_pages):
    output_PDF.addPage(input_file.getPage(page_num))

output_file_name = os.path.join(path, "BibleKJV.pdf")
output_file = open(output_file_name, "wb")
output_PDF.write(output_file)
output_file.close()
When I ran the above script, I lost all data from the original "BibleKJV.pdf" at that path, which suggests that if the file name and extension (.pdf, .cs, .docx, etc.) are the same, the original data will be clobbered unless the changes are very minimal.
If this doesn't help, please edit your question with a script showing what you're trying to achieve.
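If the goal really is to reuse the input name, a common workaround is to write to a temporary file first and swap it into place only after the write has finished; os.replace does the final rename atomically on the same filesystem. A minimal sketch of that pattern (the .tmp suffix and helper name are just illustrative choices):
import os

def safe_overwrite(path, new_bytes):
    """Write new_bytes to path without destroying it on a failed write."""
    tmp_path = path + ".tmp"  # hypothetical temp name next to the target
    with open(tmp_path, "wb") as f:
        f.write(new_bytes)
    os.replace(tmp_path, path)  # atomic rename over the original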

Python script not combining csv files

I am trying to combine over 100,000 CSV files (all in the same format) in a folder using the script below. Each CSV file is on average 3-6 KB in size. When I run this script, it opens exactly 47 .csv files and combines them. When I re-run it, it combines only those same .csv files, not all of them. I don't understand why it is doing that?
import os
import glob

os.chdir("D:\Users\Bop\csv")

want_header = True
out_filename = "combined.files.csv"

if os.path.exists(out_filename):
    os.remove(out_filename)

read_files = glob.glob("*.csv")
with open(out_filename, "w") as outfile:
    for filename in read_files:
        with open(filename) as infile:
            if want_header:
                outfile.write('{},Filename\n'.format(next(infile).strip()))
                want_header = False
            else:
                next(infile)
            for line in infile:
                outfile.write('{},{}\n'.format(line.strip(), filename))
Firstly, check the length of read_files:
read_files = glob.glob("*.csv")
print(len(read_files))
Note that glob isn't necessarily recursive, as described in this SO question.
Otherwise your code looks fine. You may want to consider using the csv library, but note that you need to adjust the field size limit for really large files.
Are you sure all your filenames end with .csv? If every file in this directory contains what you need, then open all of them without filtering:
glob.glob('*')
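If the missing files turn out to live in subfolders, glob can also be made recursive on Python 3.5+ with the ** pattern; a minimal sketch:
import glob

# recursive=True makes ** match files in subdirectories too (Python 3.5+)
read_files = glob.glob('**/*.csv', recursive=True)
print(len(read_files))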

Executing the script one time in each folder

I need to make a script that executes a script one time in each folder of a directory.
Script in question:
f = open('OrderEXAMPLE.txt', 'r')
data = f.readlines()
mystr = ",".join([line.strip() for line in data])
with open('CSV.csv', 'w') as f2:
    f2.write(mystr)
This script changes a list of customer data into CSV form.
Each order form has its own folder, so my initial thought was to put the same script into each folder and, from there, write another script that executes each copy simultaneously.
Folder structure is like so:
Order_forms
--Order_123
-----Order_form
--Order_124
-----Order_form
Amateur at python, so advice is needed and appreciated.
Just walk the directory structure with one script. This will write a separate CSV for each file, named <original_filename>_CSV.csv. Without more clarity on the desired output, or knowing what the data looks like, I can't help much more, but you should be able to tweak this for whatever you need.
import os

parent_folder = 'Order_forms'
for root, dirs, files in os.walk(parent_folder):
    for f in files:
        with open(os.path.join(root, f), 'r') as f1:
            data = f1.readlines()
        mystr = ",".join([line.strip() for line in data])
        with open(os.path.join(root, '{}_CSV.csv'.format(f)), 'w') as f2:
            f2.write(mystr)
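If you instead want exactly one CSV.csv per order folder, matching the original script's output name, a minimal variation writes one output next to each order form (assuming each folder holds a single form):
import os

parent_folder = 'Order_forms'
for root, dirs, files in os.walk(parent_folder):
    for f in files:
        with open(os.path.join(root, f), 'r') as f1:
            mystr = ",".join(line.strip() for line in f1)
        # one CSV.csv per folder, as in the original per-folder script
        with open(os.path.join(root, 'CSV.csv'), 'w') as f2:
            f2.write(mystr)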
