I've tried a few different methods of doing this, but each one either has errors or only works in specific cases.
randomData = ("Some Random stuff")
with open("outputFile.txt", "a") as file:
    file.write(randomData)
exit()
What I'm trying to do is write to the "outputFile.txt" file and then, on the next run, output to a different file such as "outputFileTwo.txt".
If you need a different filename at every start, then you have to save information about the current filename in another file (i.e. config.txt). It can be a number which you then use in the filename (file1.txt, file2.txt, etc.).
At every start, read the number from config.txt, increase it, use it in the filename, and write the number back to config.txt.
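For example, a minimal sketch of that counter approach, assuming the counter lives in config.txt next to the script and the output files follow an output<N>.txt pattern (both names are just examples):
import os

counter_path = "config.txt"

# Read the previous number, defaulting to 0 on the first run.
if os.path.exists(counter_path):
    with open(counter_path) as cfg:
        number = int(cfg.read().strip() or 0)
else:
    number = 0

number += 1
filename = "output%d.txt" % number

# Remember the new number for the next run.
with open(counter_path, "w") as cfg:
    cfg.write(str(number))

with open(filename, "a") as out:
    out.write("Some Random stuff")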
Or you can use the datetime module to put the current date and time in the filename.
https://docs.python.org/3.5/library/datetime.html
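A rough sketch of the datetime idea; the format string and the "output_" prefix are just example choices:
import datetime

# Example only: build a filename from the current date and time.
stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
filename = "output_" + stamp + ".txt"

with open(filename, "a") as out:
    out.write("Some Random stuff")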
There is also a module for generating temporary (random and unique) filenames.
https://docs.python.org/3.5/library/tempfile.html
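For instance, tempfile.mkstemp() hands back a unique filename plus an open file descriptor; this is only a sketch, and the suffix, prefix, and directory are example choices:
import os
import tempfile

# mkstemp() creates the file and guarantees the name is unique.
fd, filename = tempfile.mkstemp(suffix=".txt", prefix="output_", dir=".")
with os.fdopen(fd, "w") as out:
    out.write("Some Random stuff")
print("wrote to", filename)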
If you're writing the same data to multiple files, you can do something like this:
data = "some data"
files = ['file1.txt', 'file2.txt', 'file3.txt']
for file in files:
    with open(file, "a") as f:
        f.write(data)
Based on your comment concerning a new file name for each run:
import time
randomData = ("Some Random stuff")
t,s = str(time.time()).split('.')
filename = t+".txt"
print ("writing to", filename)
with open(filename, "a") as file:
    file.write(randomData)
def Delete_con():
    contact_to_delete = input("choose name to delete from contact")
    to_Delete = list(contact_to_delete)
    with open("phonebook1.txt", "r+") as file:
        content = file.read()
        for line in content:
            if not any(line in line for line in to_Delete):
                content.write(line)
I get no errors, but the line is not deleted. This function asks the user what name he or she wants to delete from the text file.
This should help.
def Delete_con():
    contact_to_delete = input("choose name to delete from contact")
    contact_to_delete = contact_to_delete.lower()  # Convert input to lower case
    with open("phonebook1.txt", "r") as file:
        content = file.readlines()  # Read lines from text
    content = [line for line in content if contact_to_delete not in line.lower()]  # Check if user input is in line
    with open("phonebook1.txt", "w") as file:  # Write back content to text
        file.writelines(content)
Assuming that:
you want the user to supply just the name, and not the full 'name:number' pair
your phonebook stores one name:number pair per line
I'd do something like this:
import os
from tempfile import NamedTemporaryFile

def delete_contact():
    contact_name = input('Choose name to delete: ')
    # You probably want to pass path in as an argument
    path = 'phonebook1.txt'
    base_dir = os.path.dirname(path)
    with open(path) as phonebook, \
            NamedTemporaryFile(mode='w+', dir=base_dir, delete=False) as tmp:
        for line in phonebook:
            # rsplit instead of split supports names containing ':'
            # if numbers can also contain ':' you need something smarter
            name, number = line.rsplit(':', 1)
            if name != contact_name:
                tmp.write(line)
    os.replace(tmp.name, path)
Using a tempfile like this means that if something goes wrong while processing the file, you aren't left with a half-written phonebook; you'll still have the original file unchanged. You're also not reading the entire file into memory with this approach.
os.replace() is Python 3.3+ only; if you're using something older, you can use os.rename() as long as you're not on Windows.
Here's the tempfile documentation. In this case, you can think of NamedTemporaryFile(mode='w+', dir=base_dir, delete=False) as something like open('tmpfile.txt', mode='w+'). NamedTemporaryFile saves you from having to find a unique name for your tempfile (so that you don't overwrite an existing file). The dir argument creates the tempfile in the same directory as phonebook1.txt which is a good idea because os.replace() can fail when operating across two different filesystems.
Using Python 2.7.
I am using the code below to send all print output to a file called output.log. How can I have this go to a different file each time the script runs? In bash we could declare a variable holding the date or something and make it part of the file name, so how can I achieve the same with Python?
So my question is:
every time I run the script below, the file should have a naming convention of output_date/time.log
Also, how can I delete files older than X days that have a file naming convention of output_*.log?
import sys
f = open('output.log', 'w')
sys.stdout = f
print "test"
f.close()
With some personal preference of formatting, this is generally what I do:
import time
moment=time.strftime("%Y-%b-%d__%H_%M_%S",time.localtime())
f = open('output'+moment+'.log', 'w')
As far as automated deleting goes, do you want the old logs deleted on each run of the test?
os.remove(fileStringName)
works; you just have to do the arithmetic and string conversion. I would use os.listdir(pathToDirWithOutputLogs), iterate through the file names, do the math on them, and call os.remove() on the old ones.
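For instance, a sketch of that cleanup based on each file's modification time; the directory and the 7-day cutoff are placeholders:
import os
import time

log_dir = "."          # directory that holds the output_*.log files (placeholder)
max_age_days = 7       # example cutoff
cutoff = time.time() - max_age_days * 24 * 60 * 60

for name in os.listdir(log_dir):
    path = os.path.join(log_dir, name)
    if not (name.startswith("output_") and name.endswith(".log")):
        continue
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)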
To get date/time:
from time import gmtime, strftime
outputFileName = "output_#.log"
outputFileName = outputFileName.replace("#", strftime("%Y-%m-%d_%H:%M:%S", gmtime()))
For numerical incrementing:
outputFileName = "output #.log"
outputVersion = 1
while os.path.isfile(outputFileName.replace("#", str(outputVersion))):
outputVersion += 1
outputFileName = outputFileName.replace("#", str(outputVersion))
To delete files older than a certain date, you can iterate through all the files in the directory with os.listdir(), and delete them with os.remove(). You can compare the file names after parsing them.
lastTime = "2015-08-03_19:04:41"
for fn in filter(os.path.isfile, os.listdir('.')):
    strtime = fn[fn.find("_"):fn.rfind(".")]
    if strtime < lastTime:
        os.remove(fn)
I modified the code based on the comments from experts in this thread. Now the script reads and writes all the individual files: it iterates over them, highlights the matches, and writes the output. The current issue is that, after highlighting the last instance of the search term, the script drops all of the content that remains after that last match in the output of each file.
Here is the modified code:
import os
import sys
import re

source = raw_input("Enter the source files path:")
listfiles = os.listdir(source)
for f in listfiles:
    filepath = source + '\\' + f
    infile = open(filepath, 'r+')
    source_content = infile.read()
    color = ('red')
    regex = re.compile(r"(\b be \b)|(\b by \b)|(\b user \b)|(\bmay\b)|(\bmight\b)|(\bwill\b)|(\b's\b)|(\bdon't\b)|(\bdoesn't\b)|(\bwon't\b)|(\bsupport\b)|(\bcan't\b)|(\bkill\b)|(\betc\b)|(\b NA \b)|(\bfollow\b)|(\bhang\b)|(\bbelow\b)", re.I)
    i = 0; output = ""
    for m in regex.finditer(source_content):
        output += "".join([source_content[i:m.start()],
                           "<strong><span style='color:%s'>" % color[0:],
                           source_content[m.start():m.end()],
                           "</span></strong>"])
        i = m.end()
    outfile = open(filepath, 'w+')
    outfile.seek(0)
    outfile.write(output)
    print "\nProcess Completed!\n"
    infile.close()
    outfile.close()
raw_input()
The error message tells you what the error is:
No such file or directory: 'sample1.html'
Make sure the file exists, or use a try statement to give it a default behavior.
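A small sketch of that try approach; the fallback of using empty content is just an example:
try:
    infile = open('sample1.html', 'r')
except IOError:
    # File is missing: fall back to empty content (example behavior only)
    print("sample1.html not found, using empty content instead")
    source_content = ""
else:
    source_content = infile.read()
    infile.close()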
The reason you get that error is that the Python script doesn't know where the files you want to open are located.
You have to provide the file path to open them, as I have done below. I have simply concatenated the source path + '\\' + filename and saved the result in a variable named filepath. Now simply use this variable in open().
import os
import sys

source = raw_input("Enter the source files path:")
listfiles = os.listdir(source)
for f in listfiles:
    filepath = source + '\\' + f  # This is the file path
    infile = open(filepath, 'r')
Also, there are a couple of other problems with your code. If you want to open the file for both reading and writing, then you have to use r+ mode. Moreover, on Windows, if you open a file using r+ mode then you may have to call file.seek() before file.write() to avoid another issue. You can read the reason for using file.seek() here.
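A rough sketch of that r+ pattern, building on the filepath variable from the snippet above; the transformation is a placeholder:
with open(filepath, 'r+') as f:
    source_content = f.read()         # the file pointer is now at the end
    output = source_content.upper()   # placeholder transformation
    f.seek(0)                         # go back to the start before writing
    f.write(output)
    f.truncate()                      # drop leftover bytes if output is shorter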
I am having trouble reading in files through Python 2.7. I prompt the user to load a file, and once the filename is given, the program will simply load and print the lines in the file.
filename = raw_input("Filename to load: ")
print load_records(students, filename)
def load_records(students, filename):
    # loads student records from a file
    records = []
    in_file = open(filename, "r")
    for line in in_file:
        print line
However, even if the entire file path is specified, the program throws a 'ValueError: mixing iterations and read methods would lose data.'
I need to unzip a .ZIP archive. I already know how to unzip it, but it is a huge file and takes some time to extract. How would I print the percentage complete for the extraction? I would like something like this:
Extracting File
1% Complete
2% Complete
etc, etc
Here is an example that you can start with; it's not optimized:
import zipfile
zf = zipfile.ZipFile('test.zip')
uncompress_size = sum((file.file_size for file in zf.infolist()))
extracted_size = 0
for file in zf.infolist():
    extracted_size += file.file_size
    print "%s %%" % (extracted_size * 100/uncompress_size)
    zf.extract(file)
To make it look nicer, print it like this instead (the trailing comma and \r keep the output on a single line):
print "%s %%\r" % (extracted_size * 100/uncompress_size),
You can just monitor the progress of each file being extracted with tqdm():
from zipfile import ZipFile
from tqdm import tqdm
# Open your .zip file
with ZipFile(file=path) as zip_file:
    # Loop over each file
    for file in tqdm(iterable=zip_file.namelist(), total=len(zip_file.namelist())):
        # Extract each file to another directory
        # If you want to extract to current working directory, don't specify path
        zip_file.extract(member=file, path=directory)
In Python 2.6, the ZipFile object has an open() method which can open a named file inside the zip as a file object; you can use that to read the data in chunks.
import zipfile
import os
def read_in_chunks(zf, name):
    chunk_size = 4096
    f = zf.open(name)
    data_list = []
    total_read = 0
    while 1:
        data = f.read(chunk_size)
        total_read += len(data)
        print "read", total_read
        if not data:
            break
        data_list.append(data)
    return "".join(data_list)

zip_file_path = r"C:\Users\anurag\Projects\untitled-3.zip"
zf = zipfile.ZipFile(zip_file_path, "r")
for name in zf.namelist():
    data = read_in_chunks(zf, name)
Edit: To get the total size you can do something like this
total_size = sum((file.file_size for file in zf.infolist()))
So now you can print both the total progress and the progress per file. For example, if the zip holds only one big file, the other methods (which just count file sizes as whole files are extracted) won't show any intermediate progress at all.
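As an illustration, a rough sketch of an overall percentage while reading in chunks, reusing zip_file_path from the snippet above and assuming a non-empty archive:
import zipfile

zf = zipfile.ZipFile(zip_file_path, "r")
total_size = sum(info.file_size for info in zf.infolist())
done = 0
for info in zf.infolist():
    f = zf.open(info.filename)
    while True:
        chunk = f.read(4096)
        if not chunk:
            break
        done += len(chunk)
        # single percentage across the whole archive
        print("%d%% of the archive read" % (done * 100 / total_size))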
ZipFile.infolist() will give you a list of ZipInfo objects for the contents of the zip file. From there you can either total up the number of bytes of all the files in the archive and then count how many you've extracted so far, or you can go by the total number of files.
I don't believe you can track the progress of extracting a single file. The zipfile extract function has no callback for progress.
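For completeness, a sketch of the simpler "go by the number of files" idea; 'test.zip' is just the example archive name used earlier in the thread:
import zipfile

zf = zipfile.ZipFile('test.zip')
members = zf.infolist()
for index, member in enumerate(members, 1):
    zf.extract(member)
    # coarse progress: completed files out of total files
    print("%d / %d files extracted" % (index, len(members)))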