How to add a list into a .txt file - Python

Hi, I was supposed to add a customer name to a customer.txt file:
[1, "Amin Milani Fard", "Columbia College", 778]
[2, "Ali", "Douiglas College", 77238]
def addingcustomer(file_name, new_name):
    f = open(file_name, 'w')
    for line in f:
        if new_name == line:
            return ("already existed")
        elif new_name != line:
            f.write(str(new_name) + "\n")
            return ("succesfully added")
It gives me this error:
Traceback (most recent call last):
File "C:\Users\Yuvinng\Desktop\Customer assignment 1\Customer assignment 2", line 77, in <module>
function(c)
File "C:\Users\Yuvinng\Desktop\Customer assignment 1\Customer assignment 2", line 26, in function
print (addingcustomer("customerlist.txt",x))
File "C:\Users\Yuvinng\Desktop\Customer assignment 1\Customer assignment 2", line 60, in addingcustomer
for line in f:
io.UnsupportedOperation: not readable

You probably want to close the file too, after you are done.
f.close()

You are opening the file with w, meaning you ask for write-only permissions. Then you try to loop through all lines in the file, which is clearly a read operation. IIRC you should open the file with r+, w+ or a+, depending on what behaviour you want (read here for a description). Furthermore, as mentioned by mh512, it is generally a good idea to close your file with f.close() when you're done with it.
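For illustration, a minimal sketch (not the full solution) of opening the file from your traceback in a mode that can both read and append, with an explicit close:
f = open("customerlist.txt", "a+")  # append mode that can also read; creates the file if it is missing
f.seek(0)                           # 'a+' starts at the end of the file, so rewind before reading
existing_lines = f.readlines()      # reading now works, no io.UnsupportedOperation
# ... decide what to write based on existing_lines ...
f.close()                           # close the file when you are done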
However you might also want to rethink your algorithm.
for line in f:
    if new_name == line:
        return ("already existed")
    elif new_name != line:
        f.write(str(new_name) + "\n")
        return ("succesfully added")
For every line it processes, this will either return "already existed" if the line equals the new name, or write the new name to the file and return. Therefore this loop always returns after the first line. Furthermore, even if the name already exists later in the file, it will still be written again as long as it isn't on the first line. Since this is homework I won't give you the complete solution, but as a hint: you might want to loop through all lines before you decide what to do (see the sketch below).
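A rough sketch of that hinted structure, deliberately left incomplete (the placeholders mark where your own logic goes):
names = [line.strip() for line in f]  # read every existing line first
if str(new_name) in names:
    ...  # report that the name already exists
else:
    ...  # append the new name exactly once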

Here are a couple of things that might have caused the error:
You started by: "I was supposed to add a customer name to a customer.txt file", but the stack trace you posted says that the file you are trying to read is "customerlist.txt".
You are using the open function with 'w', which gives write-only access. Try using 'r+' or 'a+' instead, so the file can be read as well.

Probably not a direct answer to your specific question, but it solves your problem as a side effect.
I assume that every line in your file represents a Python list of the form
line = [id, fullname, establishment, some_integer]
and you (or someone else) stored it just by writing it as a line into the file named file_name. That's not Pythonic at all. You should use a standard format like CSV (Comma-Separated Values) for the file, which Python supports in its standard library with the csv module. As delimiter you can choose a comma, a semicolon, or whatever you want. Let's assume you chose a semicolon as delimiter. Then the file would look like this:
id; name; establishment; some_other_id
"1"; "Amin Milani Fard"; "Columbia College"; "778"
"2", "Ali"; "Douiglas College"; "77238"
etc.
Assuming a list
mylist = ["1", "Amin Milani Fard", "Columbia College", "778"]
you would need a CSV writer to write this list:
import csv
wFile = open(file_name, 'w')
csvWriter = csv.writer(wFile, delimiter=';')
csvWriter.writerow(mylist)  # write list to csv file
wFile.close()
If you want to read the list again, you do:
rFile = open(file_name, 'r')
csvReader = csv.reader(rFile, delimiter=';')
for row in csvReader:  # step through rows. Each row is a list.
    print(','.join(row))
rFile.close()
I don't think this is a big effort for you, but I suggest the following if you want to clean up the files and your code a little bit: turn the file into a CSV file by reading all the lines into a list, turning every line into a valid Python list, and then writing those lists with csvWriter to your file again, which will then be a valid CSV file. Then use csvReader and csvWriter to add lines to your file if they do not exist.
In your case (assuming the visible formatting is consistent) I would do:
import csv

old_name = 'customer.txt'
new_name = 'customer.csv'
rFile = open(old_name, 'r')
wFile = open(new_name, 'w')
csvWriter = csv.writer(wFile, delimiter=';')
for line in rFile:
    line = line.strip()[1:-1]  # strip braces "[" and "]" and newline "\n"
    mylist = line.split(', ')  # split the line by ', '
    csvWriter.writerow(mylist)
rFile.close()
wFile.close()
You will have a CSV file afterwards. Now you can use the csv readers and writers as described.
Edit:
Perhaps the following code snippet helps you to understand what I meant above. CSV readers and writers are really not that complicated. See:
import csv

# Creating a customer file with some entries to start with.
wFile = open('test.csv', 'w')  # 'w' means create a fresh file
csvWriter = csv.writer(wFile, quoting=csv.QUOTE_MINIMAL)
csvWriter.writerow(['1', 'old', '2', 'customer'])
csvWriter.writerow(['1', 'an other old', '2', 'customer'])
wFile.close()  # Don't forget to close the file

new_customers = [  # List of new customers to add if they do not exist.
    ['1', 'old', '2', 'customer'],  # will not be added.
    ['4', 'new', '2', 'customer']   # will be added.
]

# First we want to eliminate existing customers from the list `new_customers`.
rFile = open('test.csv', 'r')  # 'r' means read file
csvReader = csv.reader(rFile)
print(new_customers)
for row in csvReader:
    print(row)
    if row in new_customers:
        new_customers.remove(row)  # remove customer if it already exists
rFile.close()  # Don't forget to close the file

if new_customers:  # there are new customers left in new_customers
    wFile = open('test.csv', 'a')  # 'a' means append to existing file
    csvWriter = csv.writer(wFile, quoting=csv.QUOTE_MINIMAL)
    for customer in new_customers:
        csvWriter.writerow(customer)  # add them to the file.
    wFile.close()

Related

Python program for files

I am trying to make a program that:
will save in a file named "class.txt" the full names of students of a class (one per line)
Then I have to insert the names:
Karamitsos Giorgos
Patapoukas Dimitris
and then close the file.
Read the file and print only the first names.
Following is my attempt:
f = open('class.txt','w')
l = ['karamitsos giorgos \n', 'patapoukas dimitris \n']
f.writelines(l)
f.close()
f1 = open('class.txt','r')
l2 = str(f1.readlines())
l3 = list(l2.split(" "))
print(l3,'\n')
print(l3[1], l3[4])
f1.close()
A few observations:
The output file should probably be opened using mode 'a', which appends new data to the end instead of erasing any old file's contents.
By using a context manager to open the file, e.g. with open('class.txt', 'a') as f:, the file will automatically be closed when the with block is exited.
Method readlines will read the entire file into memory in one shot. What if the file has megabytes of data? You should instead iteratively read the file line by line.
You can use print to add additional names to the file. By default a newline will be appended to the end of whatever you are printing.
with open('class.txt', 'a') as f:
    for name in ('karamitsos giorgos', 'patapoukas dimitris'):
        print(name, file=f)

with open('class.txt', 'r') as f:
    for name in f:
        name_parts = name.strip().split(' ')
        print(name_parts[0])
However, you can avoid opening and closing the file twice by opening it once for both appending and reading:
with open('class.txt', 'a+') as f:
    for name in ('karamitsos giorgos', 'patapoukas dimitris'):
        print(name, file=f)
    # Seek to beginning of the file:
    f.seek(0, 0)
    for name in f:
        name_parts = name.strip().split(' ')
        print(name_parts[0])

Failure while trying to append in a try statement

So I have a small program that reads a file and creates it if it doesn't exist.
But it fails when you try to read the contents of the second and third file and append them to the first.
I marked in the code exactly where it fails.
It always jumps to the except part; I didn't include the except block here because it seemed unnecessary.
with open('lista1.txt', 'r') as file_1:
    reader_0 = file_1.readlines()  # reads a list of search terms; the first search term of this list is "gt-710"

for search in reader_0:
    # creates the txt string component of the file to be created, this is the first one
    file_0 = search.replace("\n", "") + ".txt"
    file_1 = str(file_0.strip())
    # creates the txt string component of the file to be created, this is the second one
    files_2 = search.replace("\n", "") + "2.txt"
    file_2 = str(files_2.strip())
    # creates the txt string component of the file to be created, this is the third one
    files_3 = search.replace("\n", "") + "3.txt"
    file_3 = str(files_3.strip())
    try:  # if the file named the same as the search term exists, read its contents
        file = open(file_1, "r")
        file2 = open(file_2, "r")
        file3 = open(file_3, "r")
        file_contents = file.readlines()
        file_contents2 = file2.readlines()
        file_contents3 = file3.readlines()
        file = open(file_1, "a")  # appends the contents of file 3 and file 2 to file 1
        print("im about here")
        file.write(file_contents2)  # fails exactly here, I don't know why
        file.write(file_contents3)
        file2 = open(file_2, "w+")
        file2.write(file_contents)
        file3 = open(file_3, "w+")
        file3.write(file_contents2)
The reason it fails at the point you mention is that you are trying to write a list into the file (not a string). file2.readlines() returns a list of strings, each being its own line. To fix this, change all the readlines() calls to filexxx.read(), which returns the whole file contents as a single string.
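A small sketch of that change, reusing the variable names from the question:
file_contents2 = file2.read()  # read() returns the whole file as one string
file.write(file_contents2)     # write() accepts a string
# alternatively, keep readlines() and use writelines(), which accepts a list of strings:
# file.writelines(file2.readlines())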
I also recommend making the changes the other answer states to make your code more readable/robust.
You start reading from file_1 with file = open(file_1, 'r'), and then open it again in append mode without closing the first I/O operation, causing a failure when you attempt to write to the file while it's open in read mode.
Change your file reading/writing to utilize the less error-prone with open syntax, as follows:
with open(file_1, 'r') as file_handle:
    file_contents = file_handle.read()
with open(file_2, 'r') as file_handle:
    file_contents2 = file_handle.read()
with open(file_3, 'r') as file_handle:
    file_contents3 = file_handle.read()
with open(file_1, 'a') as file_handle:
    file_handle.write(file_contents2)
    # etc.
This syntax makes it very evident when a file is no longer open, and in what mode it was opened.

How do I split each line into two strings and print without the comma?

I'm trying to have the output be without commas, and to separate each line into two strings and print them.
My code so far yields:
173,70
134,63
122,61
140,68
201,75
222,78
183,71
144,69
But I'd like it to print out without the comma, with the values on each line separated as strings.
if __name__ == '__main__':
    # Complete main section of code
    file_name = "data.txt"
    # Open the file for reading here
    my_file = open('data.txt')
    lines = my_file.read()
    with open('data.txt') as f:
        for line in f:
            lines.split()
            lines.replace(',', ' ')
    print(lines)
In your sample code, lines contains the full content of the file as a str.
my_file = open('data.txt')
lines = my_file.read()
You then later re-open the file to iterate the lines:
with open('data.txt') as f:
    for line in f:
        lines.split()
        lines.replace(',', ' ')
Note, however, str.split and str.replace do not modify the existing value, as strs in Python are immutable. Also note you are operating on lines there, rather than the for-loop variable line.
Instead, you'll need to assign the result of those functions to new values, or give them as arguments (e.g., to print). So you'll want to open the file, iterate over the lines and print each value with the "," replaced with a " ":
with open("data.txt") as f:
for line in f:
print(line.replace(",", " "))
Or, since you are operating on the whole file anyway:
with open("data.txt") as f:
print(f.read().replace(",", " "))
Or, as your file appears to be CSV content, you may wish to use the csv module from the standard library instead:
import csv

with open("data.txt", newline="") as csvfile:
    for row in csv.reader(csvfile):
        print(*row)
with open('data.txt', 'r') as f:
    for line in f:
        for value in line.split(','):
            print(value)
While Python offers several ways to open files, this is the preferred one for working with files, because the file is read lazily (especially preferable for large files), and after exiting the with scope (indentation block) the file IO is closed automatically by the system.
Here we open the file in read mode. Files follow the iterator protocol, so we can iterate over them like lists. Each line is a true line in the file and is a string.
After getting the line in the line variable, we split (see str.split()) the line into two tokens, one before the comma and the other after the comma. split returns a newly constructed list of strings. If you need to omit some unwanted characters, you can use the str.strip() method; usually strip and split are combined.
elegant and efficient file reading - method 1
with open("data.txt", 'r') as io:
for line in io:
sl=io.split(',') # now sl is a list of strings.
print("{} {}".format(sl[0],sl[1])) #now we use the format, for printing the results on the screen.
non elegant, but efficient file reading - method 2
fp = open("data.txt", 'r')
# the walrus operator ':=' needs Python 3.8+; when line becomes an empty string, EOF has been reached.
while (line := fp.readline()) != '':
    sl = line.strip().split(',')
    print("{} {}".format(sl[0], sl[1]))
fp.close()

Different results between print and write

I have the function below, which reads a previously constructed file into a defaultdict. The file it reads is a CSV and contains file sizes and the paths of files.
If more than one file matches the file size of another, then that file is run through a hashing function.
The issue I have is that print is giving me the expected output, whereas writing the output to a file is not.
def loadfiles():
    '''Loads files and identifies potential duplicates'''
    files = defaultdict(list)  # uses defaultdict
    with open(tmpfile) as csvfile:  # reads the file into a dictionary
        reader = csv.DictReader(csvfile)
        for row in reader:
            files[row['size']].append(row['file'])
    for key, value in files.items():
        if len([item for item in value if item]) > 1:
            with open(reportname, 'w') as fr:
                writer = csv.writer(fr, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
                writer.writerow(['size', 'filename', 'hash'])
                for value in value:
                    writer.writerow([key, value, str(md5Checksum(value))])
                    print(key, value, str(md5Checksum(value)))
The output to a file is this:
size,filename,hash
43842270,/home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/webwolf-8.0.0.M25.jar,b325dc62d33e2ada19aea07cbcfb237f
43842270,/home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/bkwolf.jar,b325dc62d33e2ada19aea07cbcfb237f
Where as the output to screen from print is this:
128555 /home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/SN0aaa(1).pdf def426a8dee8f226e40df826fcde9904
128555 /home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/SN0aaa(1) (another copy).pdf def426a8dee8f226e40df826fcde9904
128555 /home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/SN0aaa.pdf def426a8dee8f226e40df826fcde9904
128555 /home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/SN0aaa(1) (copy).pdf def426a8dee8f226e40df826fcde9904
43842270 /home/bob/scripts/inprogress_python_scripts/file_dup/testingscript/webwolf-8.0.0.M25.jar b325dc62d33e2ada19aea07cbcfb237f
43842270 /home/b/scripts/inprogress_python_scripts/file_dup/testingscript/bkwolf.jar b325dc62d33e2ada19aea07cbcfb237f
Any ideas / guidance please as to what's wrong?
Using "w" opens the file in write mode, overwriting anything that already exists in the file. Use "a" for append instead.
This will lead to the problem that you will have your header (size,filename,hash) multiple times in there - consider writing this in the very first line and not in a loop.
See, for example: https://www.w3schools.com/python/python_file_write.asp
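As a sketch of that restructuring, reusing the names from the question (open the report once, write the header once, then write rows inside the loops):
def loadfiles():
    '''Loads files and identifies potential duplicates'''
    files = defaultdict(list)
    with open(tmpfile) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            files[row['size']].append(row['file'])
    with open(reportname, 'w', newline='') as fr:  # opened once, before the loop, so nothing gets overwritten
        writer = csv.writer(fr, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        writer.writerow(['size', 'filename', 'hash'])  # header written exactly once
        for key, value in files.items():
            if len([item for item in value if item]) > 1:
                for item in value:
                    writer.writerow([key, item, str(md5Checksum(item))])
                    print(key, item, str(md5Checksum(item)))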

Removing last row in csv

I'm trying to remove the last row in a csv, but I'm getting an error: _csv.Error: string with NUL byte
This is what I have so far:
dcsv = open('PnL.csv', 'a+r+b')
cWriter = csv.writer(dcsv, delimiter=' ')
cReader = csv.reader(dcsv)
for row in cReader:
    cWriter.writerow(row[:-1])
I can't figure out why I keep getting errors.
I would just read in the whole file with readlines(), pop off the last row, and then write that with the csv module:
import csv

f = open("summary.csv", "r+")
lines = f.readlines()
lines = lines[:-1]  # drop the last row
f.seek(0)
f.truncate()  # rewrite the file in place
cWriter = csv.writer(f, delimiter=',')
for line in lines:
    cWriter.writerow(line.strip().split(','))  # parse the raw line back into fields
f.close()
This should work
f = open('Pnl.csv', "r+")
lines = f.readlines()
f.close()
lines.pop()  # drop the last line
f = open('Pnl.csv', "w+")
f.writelines(lines)
f.close()
I'm not sure what you're doing with the 'a+r+b' file mode and reading and writing to the same file, so won't provide a complete code snippet, but here's a simple method to skip any lines that contains a NUL byte in them in a file you're reading, whether it's the last, first, or one in the middle being read.
The trick is to realize that the docs say the csvfile argument to a csv.reader() "can be any object which supports the iterator protocol and returns a string each time its next() method is called." This means that you can replace the file argument in the call with a simple filter iterator function defined this way:
def filter_nul_byte_lines(a_file):
    for line in a_file:
        if '\x00' not in line:
            yield line
and use it in a way similar to this:
dcsv = open('Pnl.csv', 'r')  # text mode so the csv module gets strings
cReader = csv.reader(filter_nul_byte_lines(dcsv))
for row in cReader:
    print(row)
This will cause any lines with a NUL byte in them to be ignored while reading the file. Also this technique works on-the-fly as each line is read, so it does not require reading the entire file into memory at once or preprocessing it ahead of time.
