The following piece of code creates a CSV file, but every other line is blank. How can I prevent these linebreaks from happening?
import datetime
import time
import csv

i = 0
while i < 10:
    TempProbe = "78.12"
    CurrentTime = time.strftime("%x")
    CurrentDate = time.strftime("%I:%M:%S")
    stringAll = TempProbe + "," + CurrentTime + "," + CurrentDate
    print(stringAll)
    file = open("outFile.csv", "a")
    csvWriter = csv.writer( file )
    csvWriter.writerow( [TempProbe, CurrentTime,CurrentDate] )
    file.close()
    i = i + 1
    time.sleep(1)
This is probably because the default line terminator is '\r\n'. You can correct this by passing lineterminator='\n' to your csv.writer object, like so:
csvWriter = csv.writer(file, lineterminator='\n')
P.S. Move this line (along with the file open and close) out of your while loop so you aren't destroying and recreating the writer object on every iteration.
You need to set the lineterminator for your csvWriter, as in the code below.
csvWriter = csv.writer(file, lineterminator='\n')
For an explanation why, see: CSV file written with Python has blank lines between each row
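For completeness, on Python 3 the documented way to avoid the extra blank rows is to open the file with newline='' and let the csv module manage line endings itself. A minimal sketch reusing the question's variables (note that "%x" is actually a date format and "%I:%M:%S" a time format, so the names in the question look swapped):
import csv
import time

# Python 3: newline='' lets the csv module control line endings,
# which prevents the blank lines that appear on Windows.
with open("outFile.csv", "a", newline="") as f:
    csvWriter = csv.writer(f)
    for _ in range(10):
        TempProbe = "78.12"
        CurrentTime = time.strftime("%x")        # locale date, e.g. 04/10/06
        CurrentDate = time.strftime("%I:%M:%S")  # 12-hour time
        csvWriter.writerow([TempProbe, CurrentTime, CurrentDate])
        time.sleep(1)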
You can simply use the open function to write a csv file:
file = open("outFile.csv", "a")
file.write(stringAll+'\n')
Also, you should move the file open and close calls out of the loop.
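If you go that route, a sketch of the loop with the file opened once outside it (reusing the question's variables):
import time

with open("outFile.csv", "a") as f:
    for _ in range(10):
        TempProbe = "78.12"
        CurrentTime = time.strftime("%x")
        CurrentDate = time.strftime("%I:%M:%S")
        stringAll = TempProbe + "," + CurrentTime + "," + CurrentDate
        # write() outputs exactly the text given plus the single '\n' we add,
        # so no blank lines appear between rows
        f.write(stringAll + '\n')
        time.sleep(1)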
Use 'ab' instead of 'a'; opening the file in binary append mode should fix the problem:
file = open("outFile.csv", "ab")
csvWriter = csv.writer(file, lineterminator='\n' )
Or use this; you don't need to open/close the file on every write:
file = open("outFile.csv", "ab")
csvWriter = csv.writer(file, lineterminator='\n' )
i = 0
while i < 10:
TempProbe = "78.12"
CurrentTime = time.strftime("%x")
CurrentDate = time.strftime("%I:%M:%S")
stringAll = TempProbe + "," + CurrentTime + "," + CurrentDate
print(stringAll)
csvWriter.writerow( [TempProbe, CurrentTime,CurrentDate] )
i = i + 1
time.sleep(1)
file.close()
def generate_daily_totals(input_filename, output_filename):
    """result in the creation of a file blahout.txt containing the two lines"""
    with open(input_filename, 'r') as reader, open(output_filename, 'w') as writer: #updated
        for line in reader: #updated
            pieces = line.split(',')
            date = pieces[0]
            rainfall = pieces[1:] #each data in a line
            total_rainfall = 0
            for data in rainfall:
                pure_data = data.rstrip()
                total_rainfall = total_rainfall + float(pure_data)
            writer.write(date + "=" + '{:.2f}'.format(total_rainfall) + '\n') #updated
            #print(date, "=", '{:.2f}'.format(total_rainfall)) #two decimal point format

generate_daily_totals('data60.txt', 'totals60.txt')

checker = open('totals60.txt')
print(checker.read())
checker.close()
The original program ran fine when it only read the file, but I was asked to convert it so that it writes the results to a file instead. I am confused because the write method only accepts strings, so does that mean only the print section can be replaced by the write method? This is the first time I am trying to use the write method. Thanks!
EDIT: the code above has been updated based on blhsing's answer, which helped a lot! But it still does not run correctly: the for loop gets skipped for some reason. Suggestions would be appreciated!
expected output:
2006-04-10 = 1399.46
2006-04-11 = 2822.36
2006-04-12 = 2803.81
2006-04-13 = 1622.71
2006-04-14 = 3119.60
2006-04-15 = 2256.14
2006-04-16 = 3120.05
2006-04-20 = 1488.00
You should open both the input file for reading, and the output file for writing, so change:
with open(input_filename, 'w') as writer:
    for line in writer: # error not readable
to:
with open(input_filename, 'r') as reader, open(output_filename, 'w') as writer:
    for line in reader:
Also, unlike the print function, the write method of a file object does not automatically add a trailing newline character to the output, so you would have to add it on your own.
Change:
writer.write(date + "=" + '{:.2f}'.format(total_rainfall))
to:
writer.write(date + "=" + '{:.2f}'.format(total_rainfall) + '\n')
Or you can use print with the output file object passed as the file argument:
print(date, "=", '{:.2f}'.format(total_rainfall), file=writer)
When reading a CSV into a list and then trying to write back directly to that same CSV with some modifications, I am finding that the script skips to the next file prematurely, at the exact same point every time.
Changing the output to a secondary file (e.g. filea.csv is read and the result is written to fileaCorrected.csv) works.
import time
import datetime
import csv
import sys
import glob
import pdb

pattern1 = '%m/%d/%Y %H:%M'
pattern2 = '%m/%d/%Y %H:%M:%S'
pattern3 = '%Y-%m-%d %H:%M:%S'

folderLocation = input("which Folder should be scanned and trimmed EX C:\program files\data:")
endDate = input("what is the last timestamp that should be in the file(s): ")
trimDate = int(time.mktime(time.strptime(endDate, pattern1)))
fileList = sorted(glob.glob(folderLocation+'/*.csv'))

for FileName in fileList:
    removedLines = 0
    FilesComplete = 0
    f = open(FileName)
    csv_f = csv.reader(f)
    #pdb.set_trace()
    header = next(f)
    newFileName = (FileName[:-4]) + " endateremoved.csv"
    with open(FileName, 'w') as csvfile:
        filewriter = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
        csvfile.write(header)
        for row in csv_f:
            #(FileName," -- ",row[0])
            date_time = row[0]
            epoch = int(time.mktime(time.strptime(date_time, pattern3)))
            if epoch < trimDate:
                lineWriter = ""
                for item in row[:-1]:
                    lineWriter += item + ","
                lineWriter += row[-1]
                #print(lineWriter)
                csvfile.write(lineWriter + "\n")
            else:
                #print("removed line %s" % row)
                removedLines += 1
                break
    FilesComplete += 1
    print(str(FilesComplete) + " Files Completed")
    print("%d files had removed lines" % removedLines)
I feel as though I am making a minor mistake in the script that is causing the file to end prematurely.
example
input
car,a
boat,b
plane,c
output
car,a
boat,b
pla
I could create a workaround that deletes the old files and renames the new ones, but that seems janky. Any thoughts on this are appreciated.
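For what it's worth, the premature cutoff is consistent with open(FileName, 'w') truncating the file on disk while csv_f is still reading from the original handle's buffer. The rename workaround mentioned above is the usual pattern: write to a temporary file and then replace the original. A minimal, hypothetical sketch (trim_csv and keep_row are made-up names, not from the question):
import csv
import os
import tempfile

def trim_csv(file_name, keep_row):
    """Rewrite file_name in place, keeping only rows for which keep_row(row) is True."""
    # Create the temp file in the same directory so os.replace stays on one filesystem.
    directory = os.path.dirname(os.path.abspath(file_name))
    with open(file_name, newline='') as src, \
         tempfile.NamedTemporaryFile('w', dir=directory, delete=False, newline='') as tmp:
        reader = csv.reader(src)
        writer = csv.writer(tmp)
        writer.writerow(next(reader))      # copy the header through unchanged
        for row in reader:
            if keep_row(row):
                writer.writerow(row)
        temp_name = tmp.name
    os.replace(temp_name, file_name)       # atomically swap in the trimmed file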
I can print the results to the terminal, but I am unable to write them to a CSV file.
Full file: https://1drv.ms/u/s!AizscpxS0QM4hJo5SnYOHAcjng-jww
import json

datapath = '1.json'
data = json.load(open(datapath))

with open('1.json') as file:
    data = json.load(file)

for element in data['RoleDetailList']:
    if 'RoleName' in element.keys():
        s = element['RoleName']
        #print s
        with open('roleassign.csv', 'wt') as file:
            file.write('Role,Policy\n')
            for policy in element['AttachedManagedPolicies']:
                c = s + ',' + policy['PolicyName']
                #print c
                file.write(c + '\n')
In the CSV file I get only the headers; when I uncomment print c I can see the lines printed to the terminal (output).
Some of the lines from the output:
ADFS-amtest-ro,pol-amtest-ro
adfs-host-role,pol-amtest-ro
aws-elasticbeanstalk-ec2-role,AWSElasticBeanstalkWebTier
Please try the code below:
with open('output.json') as file:
    data = json.load(file)

with open('roleassign.csv', 'wt') as file:
    file.write('Role,Policy\n')
    for element in data['RoleDetailList']:
        if 'RoleName' in element.keys():
            s = element['RoleName']
            for policy in element['AttachedManagedPolicies']:
                c = s + ',' + policy['PolicyName']
                file.write(c + '\n')
Your file writer was being opened inside the loop, and every iteration was overwriting the file with only the headers. I simply moved it outside the loop.
You should use csv.writer from the built-in csv module. In your example:
with open('roleassign.csv', 'w') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    writer.writerow(['Role', 'Policy'])
    for policy in element['AttachedManagedPolicies']:
        c = [s, policy['PolicyName']]
        writer.writerow(c)
Additionally, to incorporate Rehan's answer, the loop that updates your s variable should be inside the with block rather than outside it. The code below should work for you:
with open('roleassign.csv', 'w') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    writer.writerow(['Role', 'Policy'])
    for element in data['RoleDetailList']:
        if 'RoleName' in element.keys():
            s = element['RoleName']
            for policy in element['AttachedManagedPolicies']:
                c = [s, policy['PolicyName']]
                writer.writerow(c)
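Note that the snippet above assumes csv has been imported and data has been loaded as in the question; to run it on its own you would need something like:
import csv
import json

with open('1.json') as f:   # the input file from the question
    data = json.load(f)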
I am able to get this to print the output of my MySQL command (which I have removed for security), however I keep getting an error when I try to write this tab-delimited output to a CSV. Any help for a Python rookie would be appreciated.
#!/usr/bin/python

import sys, csv
import MySQLdb
import os
import mysql.connector
import subprocess
import string

if __name__ == '__main__':
    du = sys.argv[1]
    csv_home = '/home/oatey/bundle_' + du + '.csv'
    input = sys.stdin
    output = sys.stdout

    # read and rewrite to file with argument
    new = open("/home/oatey/valid.sql2", "w")
    with open("/home/oatey/bundle.sql") as write_query:
        #read_file = write_query.read()
        for line in write_query:
            lr = line.replace('{$$}', du)
            print lr
            new.write(lr)
    new.close()
    write_query.close()

    with open("/home/oatey/valid.sql2") as w:
        mysql_output = subprocess.check_output(MYSQL_COMMAND, stdin=w)
        #print mysql_output
        b = open("/home/oatey/" + du + ".txt", "r+")
        #",".join("%s" % i for i in mysql_output
        b.write(mysql_output)
        print mysql_output
        b.close()

    # read tab-delimited file
    with open("/home/oatey/" + du + ".txt", 'rb') as data:
        cr = data.readlines()
        contents = [line for line in cr]

    with open("/home/oatey/" + du + ".csv", "wb") as wd:
        cw = csv.writer(wd, quotechar='', quoting=csv.QUOTE_NONE)
        wd.write(contents)
I bet the error you are getting is:
TypeError: must be string or buffer, not list
contents is a list; you cannot write a list via write(). Quoting the docs:
file.write(str)
Write a string to the file.
Instead, use csvwriter.writerows():
with open("/home/oatey/" + du + ".csv", "wb") as wd:
cw = csv.writer(wd, quotechar='', quoting=csv.QUOTE_NONE)
cw.writerows(contents)
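One caveat, assuming the .txt file really is tab-delimited as described: each element of contents is still a whole line string, and writerows() iterates a string character by character, so you probably want to split each line into fields first, for example:
with open("/home/oatey/" + du + ".csv", "wb") as wd:
    cw = csv.writer(wd, quotechar='', quoting=csv.QUOTE_NONE)
    # split each tab-delimited line into fields so writerows() gets real rows
    cw.writerows(line.rstrip("\n").split("\t") for line in contents)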
The problem I am having at this point (being new to Python) is writing strings to a text file. The issue I'm experiencing is that either the strings have no line breaks between them or there is a line break after every character. Code to follow:
import string, io

FileName = input("Arb file name (.txt): ")
MyFile = open(FileName, 'r')
TempFile = open('TempFile.txt', 'w', encoding='UTF-8')
for m_line in MyFile:
    m_line = m_line.strip()
    m_line = m_line.split(": ", 1)
    if len(m_line) > 1:
        del m_line[0]
    #print(m_line)
    MyString = str(m_line)
    MyString = MyString.strip("'[]")
    TempFile.write(MyString)
MyFile.close()
TempFile.close()
My input looks like this:
1 Jargon
2 Python
3 Yada Yada
4 Stuck
My output when I do this is:
JargonPythonYada YadaStuck
I then modify the source code to this:
import string, io

FileName = input("Arb File Name (.txt): ")
MyFile = open(FileName, 'r')
TempFile = open('TempFile.txt', 'w', encoding='UTF-8')
for m_line in MyFile:
    m_line = m_line.strip()
    m_line = m_line.split(": ", 1)
    if len(m_line) > 1:
        del m_line[0]
    #print(m_line)
    MyString = str(m_line)
    MyString = MyString.strip("'[]")
    #print(MyString)
    TempFile.write('\n'.join(MyString))
MyFile.close()
TempFile.close()
Same input and my output looks like this:
J
a
r
g
o
nP
y
t
h
o
nY
a
d
a
Y
a
d
aS
t
u
c
k
Ideally, I would like each of the words to appear on a separate line, without the numbers in front of them.
Thanks,
MarleyH
You have to write the '\n' after each line, since you're stripping the original '\n'.
Your idea of using '\n'.join() doesn't work because join inserts the '\n' between every character of the string. You need a single '\n' after each name instead.
import string, io

FileName = input("Arb file name (.txt): ")
with open(FileName, 'r') as MyFile:
    with open('TempFile.txt', 'w', encoding='UTF-8') as TempFile:
        for line in MyFile:
            line = line.strip().split(": ", 1)
            TempFile.write(line[1] + '\n')
fileName = input("Arb file name (.txt): ")
tempName = 'TempFile.txt'

with open(fileName) as inf, open(tempName, 'w', encoding='UTF-8') as outf:
    for line in inf:
        line = line.strip().split(": ", 1)[-1]
        #print(line)
        outf.write(line + '\n')
Problems:
the result of str.split() is a list (this is why, when you cast it to str, you get ['my item']).
write does not add a newline; if you want one, you have to add it explicitly.
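A quick interactive illustration of both points (assuming the input lines actually contain the ': ' separator that split(": ", 1) expects):
>>> line = "1: Jargon\n"
>>> parts = line.strip().split(": ", 1)
>>> parts                      # split() returns a list, not a string
['1', 'Jargon']
>>> str(parts)                 # str() of a list keeps the brackets and quotes
"['1', 'Jargon']"
>>> parts[-1] + '\n'           # take the last piece and add the newline yourself
'Jargon\n'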