Hi, I have a csv file with names and surnames and empty username and password columns.
How can I use Python's csv module to write to columns 3 and 4 in each row, just appending to them without overwriting anything?
The csv module doesn't do in-place edits; you'd have to write the rows out to a separate file and then replace the old file with the new one, or read the whole file into memory and then write over it.
I'd recommend the first option:
from csv import writer as csvwriter, reader as csvreader
from os import rename  # add ', remove' on Windows

with open(infilename, newline='') as infile:
    csvr = csvreader(infile)
    with open(outfilename, 'w', newline='') as outfile:
        csvw = csvwriter(outfile)
        for row in csvr:
            # do whatever to get the username / password
            # for this row here
            row.append(username)
            row.append(password)
            csvw.writerow(row)
            # or 'csvw.writerow(row + [username, password])' if you want one line

# only on Windows
# remove(infilename)
rename(outfilename, infilename)
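If the file comfortably fits in memory, the second option (read everything in, then rewrite the same file) is even simpler. A minimal sketch of that approach, assuming the same infilename and a get_credentials(row) helper that you would write yourself:

import csv

with open(infilename, newline='') as infile:
    rows = list(csv.reader(infile))

for row in rows:
    username, password = get_credentials(row)  # hypothetical helper returning the new values
    row += [username, password]

with open(infilename, 'w', newline='') as outfile:
    csv.writer(outfile).writerows(rows)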
I want to open a variable number of csv files and then I would like to iterate over the csv files opened and upload 1 row of each file at a time to my sql database.
For example, loop through each file uploading the first row of each file to the database, then loop again through each file uploading the second row of each file to the database.
However, I'm stuck on getting the opened csv readers collected into a single object ready for uploading.
The error happens at 'csv_data[i] = csv.reader...'
Each file is for a different table, so I cannot append them.
import csv
import sys

i = 0
for argv in sys.argv[1:]:
    csv_file = open(argv, newline='', encoding='utf-8-sig')
    csv_data[i] = csv.reader(csv_file, dialect='excel', delimiter=',', quotechar='|')
    csv_file.close()
    i += 1
After this code, I would need something to loop through each file uploading a certain row number.
Zip the readers together and iterate through them:
import csv
import sys

file_handles = [open(name, newline='', encoding='utf-8-sig') for name in sys.argv[1:]]
readers = (csv.reader(fh, dialect='excel', delimiter=',', quotechar='|') for fh in file_handles)

# zip here
for line_group in zip(*readers):
    # line_group is a tuple holding row i of each file
    ...

# don't forget to close your files
for file_handle in file_handles:
    try:
        file_handle.close()
    except OSError:
        print("Issue closing one of the files")
I have written this code to create a yml file with a UUID as its name. I need to read a csv file and dynamically create a .yml file for each entry in the .csv.
import uuid
import yaml

with open('sample.yaml', 'r') as stream:
    d = yaml.load(stream, Loader=yaml.FullLoader)

d['id'] = 'id'
print(d)
d['id'] = id
f = open('Test.yaml', 'w+')
print(d)
yaml.dump(d, f, default_flow_style=False)
If you want to create a file for each row in the csv, all you need to do is read in the csv file and move through the lines one by one. One way to do that:
import csv

with open("file_name.csv") as f:
    reader = csv.reader(f)
    for row in reader:
        # open new yaml file
        # write row to file
        # close the file
        ...
Is this what you had in mind?
You will want to create unique file names (otherwise you'll keep writing over the same file with every new row). You can also do more complex writing to the file if you use str.split(',') on each row in order to format the yml files.
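To tie that back to the UUID-named yml files, here is a minimal sketch, assuming the first csv row is a header whose values become the keys in each yml file (the file and column names are placeholders):

import csv
import uuid
import yaml

with open("file_name.csv", newline='') as f:
    reader = csv.reader(f)
    header = next(reader)                  # assumed header row, e.g. name,surname
    for row in reader:
        data = dict(zip(header, row))      # one mapping per csv entry
        out_name = f"{uuid.uuid4()}.yaml"  # unique file name per row
        with open(out_name, 'w') as out:
            yaml.dump(data, out, default_flow_style=False)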
I'm having trouble storing data minus the header into a new file. I don't understand Python enough to debug.
Ultimately, I'd like to extract the data from each file and store it in one main csv file, rather than opening each file individually and copying and pasting everything into the main csv file by hand.
My code is as follows:
import csv, os

# os.makedirs() will create a folder with the name given in quotes
os.makedirs('HeaderRemoved', exist_ok=True)

# Loop through every file in the current working directory.
for csvFilename in os.listdir('directory'):
    if not csvFilename.endswith('.csv'):
        continue  # skips non-csv files
    print('Removing header from ' + csvFilename + '...')

    ### Read the CSV file in (skipping first row) ###
    csvRows = []
    csvFileObj = open(csvFilename)
    readerObj = csv.reader(csvFileObj)
    for row in readerObj:
        if readerObj.line_num == 1:
            continue  # skips first row
        csvRows.append(row)
    print(csvRows)  # -----> check to see if it has anything stored in the array
    csvFileObj.close()

    # TODO: Write out the CSV file
    csvFileObj = open(os.path.join('HeaderRemoved', 'directory/mainfile.csv'), 'w',
                      newline='')
    csvWriter = csv.writer(csvFileObj)
    for row in csvRows:
        csvWriter.writerow(row)
    csvFileObj.close()
The csv files that are being "scanned" or "read" have text and numbers. I do not know if this might be preventing the script from properly "reading" and storing the data into the csvRow array.
The problem comes from reusing the same variable when you loop over your file names. See the documentation for listdir; it returns a list of filenames. So your newfile isn't really pointing to the file any more, but to a string filename from the directory.
https://docs.python.org/3/library/os.html#os.listdir
with open(scancsvFile, 'w') as newfile:
    array = []
    # for row in scancsvFile
    for newfile in os.listdir('directory'):  # <---- you're reassigning the variable newfile here
        if newfile.line_num == 1:
            continue
        array.append(lines)
    newfile.close()
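A corrected sketch along those lines, assuming every csv in the 'directory' folder should end up (minus its header) in one combined file; reading from 'directory' and the combined file name are my assumptions:

import csv
import os

os.makedirs('HeaderRemoved', exist_ok=True)

with open(os.path.join('HeaderRemoved', 'mainfile.csv'), 'w', newline='') as outfile:
    csvWriter = csv.writer(outfile)
    for csvFilename in os.listdir('directory'):
        if not csvFilename.endswith('.csv'):
            continue  # skip non-csv files
        with open(os.path.join('directory', csvFilename), newline='') as infile:
            readerObj = csv.reader(infile)
            next(readerObj, None)  # skip the header row
            for row in readerObj:
                csvWriter.writerow(row)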
I am trying to add extra columns to a csv file after processing an input csv file, but I am getting an extra blank line after each line in the output.
What's missing or wrong in my code below?
import csv

with open('test.csv', 'r') as infile:
    with open('test_out.csv', 'w') as outfile:
        reader = csv.reader(infile, delimiter=',')
        writer = csv.writer(outfile, delimiter=',')
        for row in reader:
            colad = row[5].rstrip('0123456789./ ')
            if colad == row[5]:
                col2ad = row[11]
            else:
                col2ad = row[5].split(' ')[-1]
            writer.writerow([row[0], colad, col2ad] + row[1:])
I am processing a huge csv file, so I would like to get rid of those extra lines.
I had the same problem on Windows (your OS as well, I presume?). The csv writer terminates each row with \r\n itself, and on Windows the text-mode file translates that \n again, so each line ends up with \r\r\n (a double newline).
On Python 2 you fix it by opening the output file in binary mode:
with open('test_out.csv', 'wb') as outfile:
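On Python 3, binary mode won't work with csv.writer; per the csv module docs the equivalent fix is to keep text mode but pass newline='' so the writer's \r\n isn't translated again:

with open('test_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile, delimiter=',')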
For other answers:
Python's CSV writer produces wrong line terminator
CSV in Python adding an extra carriage return
I am using Blair's Python script which modifies a CSV file to add the filename as the last column (script appended below). However, instead of adding the file name alone, I also get the Path and File name in the last column.
I run the below script in windows 7 cmd with the following command:
python C:\data\set1\subseta\add_filename.py C:\data\set1\subseta\20100815.csv
The resulting ID field is populated with C:\data\set1\subseta\20100815.csv, although all I need is 20100815.csv.
I'm new to python so any suggestion is appreciated!
import csv
import sys

def process_file(filename):
    # Read the contents of the file into a list of lines.
    f = open(filename, 'r')
    contents = f.readlines()
    f.close()

    # Use a CSV reader to parse the contents.
    reader = csv.reader(contents)

    # Open the output and create a CSV writer for it.
    f = open(filename, 'wb')
    writer = csv.writer(f)

    # Process the header.
    header = reader.next()
    header.append('ID')
    writer.writerow(header)

    # Process each row of the body.
    for row in reader:
        row.append(filename)
        writer.writerow(row)

    # Close the file and we're done.
    f.close()

# Run the function on all command-line arguments. Note that this does no
# checking for things such as file existence or permissions.
map(process_file, sys.argv[1:])
Use os.path.basename(filename). See http://docs.python.org/library/os.path.html for more details.
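Applied to the script above, the only change needed is where the ID column is appended (a one-line tweak; everything else in Blair's script stays the same):

import os.path

# ...inside process_file(), when writing out the body rows:
for row in reader:
    row.append(os.path.basename(filename))  # '20100815.csv' rather than the full path
    writer.writerow(row)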