Delete a row from a file using csv reader and lists in Python

There are similar questions on SO, but none that deal with the specifics I require.
I have the following code that seeks to delete a row in a file, based on specified user input. The methodology is to:
1. Read the file into a list.
2. Delete the relevant row in the list (ideally while reading in the list?).
3. Overwrite the file.
It's steps 2 and 3 that I would like some guidance on, as well as comments on the best solution (for beginners, for teaching/learning purposes) for carrying out this sort of simple delete/edit in Python with csv reader.
Code:
""" ==============TASK
1. Search for any given username
2. Delete the whole row for that particular user
e.g.
Enter username: marvR
>>The record for marvR has been deleted from file.
"""
import csv

#1. This code snippet asks the user for a username and deletes the user's record from file.
updatedlist=[]
with open("fakefacebook.txt",newline="") as f:
    reader=csv.reader(f)
    username=input("Enter the username of the user you wish to remove from file:")
    for row in reader:  #for every row in the file
        if username not in updatedlist:
            updatedlist=row  #add each row, line by line, into a list called 'updatedlist'
            print(updatedlist)
#delete the row for the user from the list?
#overwrite the current file with the updated list?
File contents:
username,password,email,no_of_likes
marvR,pass123,marv#gmail.com,400
smithC,open123,cart#gmail.com,200
blogsJ,2bg123,blog#gmail.com,99
Update
Based on an answer below, I now have this, but when it overwrites the file it doesn't write the list back correctly, and I'm not sure why.
import csv

def main():
    #1. This code snippet asks the user for a username and deletes the user's record from file.
    updatedlist=[]
    with open("fakefacebook.txt",newline="") as f:
        reader=csv.reader(f)
        username=input("Enter the username of the user you wish to remove from file:")
        for row in reader:  #for every row in the file
            if row[0]!=username:  #as long as the username is not in the row
                updatedlist=row  #add each row, line by line, into a list called 'updatedlist'
                print(updatedlist)
        updatefile(updatedlist)

def updatefile(updatedlist):
    with open("fakefacebook.txt","w",newline="") as f:
        Writer=csv.writer(f)
        Writer.writerow(updatedlist)
        print("File has been updated")

main()
It appears to print updatedlist correctly (as a list) in that it removes the username that is entered, but on writing this to the file, it only writes ONE user's record to the file.
Any thoughts so I can accept a final answer?

if username not in updatedlist:
To me, this should be:
if row[0] != username:
Then in a second loop you write updatedlist into your csv file.
I would personally write everything to another file while reading, then at the end delete the old file and replace it with the new one, which makes it one loop only.
Edit:
Replace updatedlist=row with updatedlist.append(row): the first overwrites updatedlist with a single row, while the second adds one more row to it.
writerow writes one row, and you give it a list of rows.
Use writerows instead and your writing function will work.
You nearly made it all by yourself, which was my objective.
Some other answers already give you better (faster, cleaner ...) ways, so I won't.
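For reference, a sketch of your updated code with those two changes applied (same file name and column layout assumed) would look like this:

import csv

def main():
    updatedlist = []
    with open("fakefacebook.txt", newline="") as f:
        reader = csv.reader(f)
        username = input("Enter the username of the user you wish to remove from file:")
        for row in reader:
            if row[0] != username:        # keep every row whose username does not match
                updatedlist.append(row)   # append the row instead of overwriting the list
    updatefile(updatedlist)

def updatefile(updatedlist):
    with open("fakefacebook.txt", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerows(updatedlist)     # writerows writes the whole list of rows
    print("File has been updated")

main()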

I recommend this approach:
with open("fakefacebook.txt", 'r+') as f:
lines = f.readlines()
f.seek(0)
username = input("Enter the username of the user you wish to remove from file: ")
for line in lines:
if not username in line.split(',')[0]: # e.g. is username == 'marvR', the line containing 'marvR' will not be written
f.write(line)
f.truncate()
All lines from the file are read into lines. Then I go back to the beginning of the file with f.seek(0). At this point the user is asked for a username, which is then used to check each line before writing back to the file. If the line contains the username specified, it will not be written, thus 'deleting' it. Finally we remove any excess with f.truncate(). I hope this helps; if you have any questions, don't hesitate to ask!

I tried to stick to your code (EDIT: not elegant, but as near as possible to the OP's code):
""" ==============TASK
1. Search for any given username
2. Delete the whole row for that particular user
e.g.
Enter username: marvR
>>The record for marvR has been deleted from file.
"""
import csv
#1. This code snippet asks the user for a username and deletes the user's record from file.
updatedlist=[]
with open("fakefacebook.txt",newline="") as f:
reader=csv.reader(f)
username=input("Enter the username of the user you wish to remove from file:")
content = []
for row in reader: #for every row in the file
content.append(row)
# transpose list
content = list(map(list, zip(*content)))
print(content)
index = [i for i,x in enumerate(content[0]) if x == username]
for sublist in content:
sublist.pop(index[0])
print(content)
# transpose list
content = list(map(list, zip(*content)))
#write back
thefile = open('fakefacebook.txt', 'w')
for item in content:
thefile.write("%s\n" % item)
But I would suggest using numpy or pandas.
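If you do go the pandas route, a minimal sketch (assuming pandas is installed and the file keeps its header row) could be:

import pandas as pd

username = input("Enter the username of the user you wish to remove from file:")
df = pd.read_csv("fakefacebook.txt")        # the header row becomes the column names
df = df[df["username"] != username]         # keep every row except the matching user
df.to_csv("fakefacebook.txt", index=False)  # write back without the index column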

Something like this should do you, using the csv module. Since you have structured tabular data with defined columns, you should use a DictReader and DictWriter to read from and write to your file:
import csv

with open('fakefacebook.txt', 'r+') as f:
    username = input("Enter the username of the user you wish "
                     "to remove from file:")
    columns = ['username', 'password', 'email', 'no_of_likes']
    reader = csv.DictReader(f, columns)
    filtered_output = [line for line in reader if line['username'] != username]
    f.seek(0)
    writer = csv.DictWriter(f, columns)
    writer.writerows(filtered_output)
    f.truncate()
This opens the input file, filters out any entries where the username is equal to the one to be deleted, and writes the remaining entries back to the same file, overwriting what's already there.

And for another answer: write to a new file and then rename it!
import csv
import os

def main():
    username = input("Enter the username of the user you wish to remove from file:")
    # check it's not 'username'!
    #1. This code snippet asks the user for a username and deletes the user's record from file.
    with open("fakefacebook.txt", newline="") as f_in, \
         open("fakefacebook.txt.new", "w", newline="") as f_out:
        reader = csv.reader(f_in)
        writer = csv.writer(f_out)
        for row in reader:  #for every row in the file
            if row[0] != username:  # as long as the username is not in the row
                writer.writerow(row)
    # rename new file
    os.rename("fakefacebook.txt.new", "fakefacebook.txt")
main()
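One aside that is not part of the original answer: on Windows, os.rename raises an error if the destination file already exists, so for a portable version you may prefer os.replace, which overwrites atomically:

os.replace("fakefacebook.txt.new", "fakefacebook.txt")  # also overwrites an existing file on Windows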

Related

How do I add another list to a JSON file in Python?

My JSON file:
{
    "ali": {"name": "ali", "age": 23, "email": "his email"},
    "joe": {"name": "joe", "age": 55, "email": "his email"}
}
And my code:
import json

name=input("name:")
age=input("age: ")
email=input("email:")
list={}
list[name]={"name":name,"age":age,"email":email}
data=json.dumps(list)
with open('info.json','a') as f:
    f.write(data)
I need a method to append another one (another name) to the JSON file.
Any ideas?
To update an existing json file you need to read the entire file, make adjustments and write the whole lot back again:
import json

with open('info.json', 'r') as f:
    data = json.load(f)

name = input("name:")
age = input("age: ")
email = input("email:")
data[name] = {"name": name, "age": age, "email": email}

with open('info.json', 'w') as f:
    json.dump(data, f)
By the way, there are no lists involved here, just nested dictionaries.
Also, if the user enters a duplicate name, then this code will overwrite the one in the file with updated data. This may, or may not, be what you want.
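If you would rather refuse a duplicate name than overwrite it, a small guard is enough; here is a sketch reusing the same file and field names:

import json

with open('info.json', 'r') as f:
    data = json.load(f)

name = input("name:")
if name in data:
    print("That name already exists; not overwriting.")
else:
    data[name] = {"name": name, "age": input("age: "), "email": input("email:")}
    with open('info.json', 'w') as f:
        json.dump(data, f)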

Searching for only the first value in an array in a csv file

So I am creating an account login system which searches a database for a username (and its relevant password) and, if found, logs the user on.
This is what the csv file currently looks like:
['dom', 'enter password']
This is all written in one field (viewed in an Excel spreadsheet), and was written to the file when the user registered. However, when I try to log on, the program will only log me on if that whole string is entered, whereas I would like it to log on when dom is entered.
Here is the code which reads the csv file to see if the username/password is found on the file:
def Get_Details():
    user_namev2 = user_name.get().lower()  #Make it so the entry box goes red if the password is incorrect, and red if the username is incorrect/not found
    user_passwordv2 = user_password.get().lower()
    with open('Accounts.csv', 'r') as Account_file:
        reader = csv.reader(Account_file)
        for row in reader:
            for field in row:
                if field == user_namev2:
                    print("In file")
Here is how the username and password get written to the csv file upon registering an account.
if re.match(user_password2v2, user_passwordv2):
    print("Passwords do match")
    user_info = []
    user_info.append(user_namev2)
    user_info.append(user_passwordv2)
    with open('Accounts.csv', 'a') as Account_file:
        writer = csv.writer(Account_file)
        writer.writerow([user_info])
    Bottom()
Any ideas on how I can search the csv file so that only a certain part of the string is searched and matched with user_namev2?
Assuming user_namev2 is whatever is in the dom column and user_passwordv2 is whatever is in the enter password column, I would use a regex to match a part of the username in the dom column.
import csv
import re

with open('Accounts.csv', 'r') as Account_file:
    reader = csv.reader(Account_file)
    for row in reader:
        if re.findall(user_namev2[0:5], row[0]):  #slice a part of the user name
            print("In file")
Note: I'm also choosing to take the first element of every row with row[0] since that is the dom column.

Appending to a csv file in Python

Hi, I have a csv file with names and surnames and empty username and password columns.
How can I use Python's csv module to write to columns 3 and 4 in each row, just appending to them, not overwriting anything?
The csv module doesn't do that; you'd have to write it out to a separate file and then overwrite the old file with the new one, or read the whole file into memory and then write over it.
I'd recommend the first option:
from csv import writer as csvwriter, reader as csvreader
from os import rename  # add ', remove' on Windows

with open(infilename) as infile:
    csvr = csvreader(infile)
    with open(outfilename, 'wb') as outfile:
        csvw = csvwriter(outfile)
        for row in csvr:
            # do whatever to get the username / password
            # for this row here
            row.append(username)
            row.append(password)
            csvw.writerow(row)
            # or 'csvw.writerow(row + [username, password])' if you want one line

# only on Windows
# remove(infilename)
rename(outfilename, infilename)
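For completeness, the second option (reading the whole file into memory and then writing over it) might look like this sketch, using the same placeholder names as above:

from csv import writer as csvwriter, reader as csvreader

# read everything into memory first
with open(infilename) as infile:
    rows = list(csvreader(infile))

# then overwrite the same file with the extra columns appended
with open(infilename, 'wb') as outfile:
    csvw = csvwriter(outfile)
    for row in rows:
        # do whatever to get the username / password for this row here
        csvw.writerow(row + [username, password])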

process large text file in python

I have a very large file (3.8G) that is an extract of users from a system at my school. I need to reprocess that file so that it just contains their ID and email address, comma separated.
I have very little experience with this and would like to use it as a learning exercise for Python.
The file has entries that look like this:
dn: uid=123456789012345,ou=Students,o=system.edu,o=system
LoginId: 0099886
mail: fflintstone#system.edu
dn: uid=543210987654321,ou=Students,o=system.edu,o=system
LoginId: 0083156
mail: brubble#system.edu
I am trying to get a file that looks like:
0099886,fflintstone#system.edu
0083156,brubble#system.edu
Any tips or code?
That actually looks like an LDIF file to me. The python-ldap library has a pure-Python LDIF handling library that could help if your file possesses some of the nasty gotchas possible in LDIF, e.g. Base64-encoded values, entry folding, etc.
You could use it like so:
import csv
import ldif

class ParseRecords(ldif.LDIFParser):
    def __init__(self, input_file, csv_writer):
        ldif.LDIFParser.__init__(self, input_file)
        self.csv_writer = csv_writer

    def handle(self, dn, entry):
        self.csv_writer.writerow([entry['LoginId'], entry['mail']])

with open('/path/to/large_file') as input, open('output_file', 'wb') as output:
    csv_writer = csv.writer(output)
    csv_writer.writerow(['LoginId', 'Mail'])
    ParseRecords(input, csv_writer).parse()
Edit
So to extract from a live LDAP directory, using the python-ldap library you would want to do something like this:
import csv
import ldap

con = ldap.initialize('ldap://server.fqdn.system.edu')
# if your LDAP directory requires authentication
# con.bind_s(username, password)
try:
    with open('output_file', 'wb') as output:
        csv_writer = csv.writer(output)
        csv_writer.writerow(['LoginId', 'Mail'])
        for dn, attrs in con.search_s('ou=Students,o=system.edu,o=system', ldap.SCOPE_SUBTREE, attrlist=['LoginId', 'mail']):
            csv_writer.writerow([attrs['LoginId'], attrs['mail']])
finally:
    # even if you don't have credentials, it's usually good to unbind
    con.unbind_s()
It's probably worthwhile reading through the documentation for the ldap module, especially the example.
Note that in the example above, I completely skipped supplying a filter, which you would probably want to do in production. A filter in LDAP is similar to the WHERE clause in a SQL statement; it restricts what objects are returned. Microsoft actually has a good guide on LDAP filters. The canonical reference for LDAP filters is RFC 4515.
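For example, restricting the search above to entries that actually have a mail attribute could look like this; the filter string itself is only an illustration and assumes a typical person schema:

results = con.search_s(
    'ou=Students,o=system.edu,o=system',
    ldap.SCOPE_SUBTREE,
    filterstr='(&(objectClass=person)(mail=*))',  # hypothetical filter: person entries that have a mail value
    attrlist=['LoginId', 'mail'],
)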
Similarly, if there are potentially several thousand entries even after applying an appropriate filter, you may need to look into the LDAP paging control, though using that would, again, make the example more complex. Hopefully that's enough to get you started, but if anything comes up, feel free to ask or open a new question.
Good luck.
Assuming that the structure of each entry will always be the same, just do something like this:
import csv

# Open the file
f = open("/path/to/large.file", "r")

# Create an output file
output_file = open("/desired/path/to/final/file", "w")

# Use the CSV module to make use of existing functionality.
final_file = csv.writer(output_file)

# Write the header row - can be skipped if headers not needed.
final_file.writerow(["LoginID", "EmailAddress"])

# Set up our temporary cache for a user
current_user = []

# Iterate over the large file
# Note that we are avoiding loading the entire file into memory
for line in f:
    if line.startswith("LoginId"):
        current_user.append(line[9:].strip())
    # If more information is desired, simply add it to the conditions here
    # (additional elif's should do)
    # and add it to the current user.
    elif line.startswith("mail"):
        current_user.append(line[6:].strip())
        # Once you know you have reached the end of a user entry
        # write the row to the final file
        # and clear your temporary list.
        final_file.writerow(current_user)
        current_user = []
    # Skip lines that aren't interesting.
    else:
        continue

f.close()
output_file.close()
Again assuming your file is well-formed:
with open(inputfilename) as inputfile, open(outputfilename, 'w') as outputfile:
    mail = loginid = ''
    for line in inputfile:
        line = line.split(':')
        if line[0] not in ('LoginId', 'mail'):
            continue
        if line[0] == 'LoginId':
            loginid = line[1].strip()
        if line[0] == 'mail':
            mail = line[1].strip()
        if mail and loginid:
            outputfile.write(loginid + ',' + mail + '\n')
            mail = loginid = ''
Essentially equivalent to the other methods.
To open the file you'll want to use something like the with keyword to ensure it closes properly even if something goes wrong:
with open(<your_file>, "r") as f:
    # Do stuff
As for actually parsing out that information, I'd recommend building a dictionary of ID email pairs. You'll also need a variable for the uid and the email.
data = {}
uid = 0
email = ""
To actually parse through the file (the stuff run while your file is open) you can do something like this:
for line in f:
    if "uid=" in line:
        # Parse the user id out by grabbing the substring between the first = and ,
        uid = line[line.find("=")+1:line.find(",")]
    elif "mail:" in line:
        # Parse the email out by grabbing everything from the : to the end (removing the newline character)
        email = line[line.find(": ")+2:-1]
        # Given the formatting you've provided, this comes second so we can make an entry into the dict here
        data[uid] = email
Using the CSV writer (remember to import csv at the beginning of the file) we can output like this:
writer = csv.writer(<output file object>)
writer.writerow(["User", "Email"])
for id, mail in data.iteritems():
    writer.writerow([id, mail])
Another option is to open the writer before the file, write the header, then read the lines from the file at the same time as writing to the CSV. This avoids dumping the information into memory, which might be highly desirable. So putting it all together we get
writer = csv.writer(<output file object>)
writer.writerow(["User", "Email"])
with open(<your_file>, "r") as f:
    for line in f:
        if "uid=" in line:
            # Parse the user id out by grabbing the substring between the first = and ,
            uid = line[line.find("=")+1:line.find(",")]
        elif "mail:" in line:
            # Parse the email out by grabbing everything from the : to the end (removing the newline character)
            email = line[line.find(": ")+2:-1]
            # Given the formatting you've provided, this comes second so we can write the row out here
            writer.writerow([uid, email])

Can't read appended data using pickle.load() method

I have written two scripts Write.py and Read.py.
Write.py opens friends.txt in append mode, takes input for name, email and phone no, and then dumps the dictionary into the file using the pickle.dump() method; everything works fine in this script.
Read.py opens friends.txt in read mode, loads the contents into a dictionary using the pickle.load() method and displays the contents of the dictionary.
The main problem is in the Read.py script: it just shows the old data and never shows the appended data.
Write.py
#!/usr/bin/python
import pickle

ans = "y"
friends = {}
file = open("friends.txt", "a")
while ans == "y":
    name = raw_input("Enter name : ")
    email = raw_input("Enter email : ")
    phone = raw_input("Enter Phone no : ")
    friends[name] = {"Name": name, "Email": email, "Phone": phone}
    ans = raw_input("Do you want to add another record (y/n) ? :")
pickle.dump(friends, file)
file.close()
Read.py
#!/usr/bin/python
import pickle

file = open("friends.txt", "r")
friend = pickle.load(file)
file.close()
for person in friend:
    print friend[person]["Name"], "\t", friend[person]["Email"], "\t", friend[person]["Phone"]
What could be the problem? The code looks fine. Can someone point me in the right direction?
Thanks.
You have to load from the file several times. Each write ignores the others, so it appends a self-contained block of pickled data, independent of the blocks already in the file. When you read the file afterwards, pickle.load reads only one block at a time. So you could try:
import pickle

friend = {}
with open('friends.txt') as f:
    while 1:
        try:
            friend.update(pickle.load(f))
        except EOFError:
            break  # no more data in the file

for person in friend.values():
    print '{Name}\t{Email}\t{Phone}'.format(**person)
You have to call pickle.load once for each time you called pickle.dump. Your write routine does not add an entry to the dictionary; it adds another dictionary. You would have to call pickle.load until the entire file is read, but this would give you several dictionaries that you would have to merge. The easier way would be just to store the values in CSV format. This is as simple as:
with open("friends.txt", "a") as file:
file.write("{0},{1},{2}\n".format(name, email, phone))
To load the values into a dictionary you would do:
with open("friends.txt", "a") as file:
friends = dict((name, (name, email, phone)) for line in file for name, email, phone in line.split(","))
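And if you want Read.py to keep its dict-of-dicts layout, a sketch of the same load (assuming the comma-separated format written above) would be:

friends = {}
with open("friends.txt", "r") as file:
    for line in file:
        name, email, phone = line.strip().split(",")
        friends[name] = {"Name": name, "Email": email, "Phone": phone}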
