I am creating a file and I want to write every write_line to my output.
With this code I get a new file, but it contains only the last line of the log, not all of the lines. I think I need a for loop before writing the log so that everything gets written, but I am very new to Python and need help.
I am getting name / familyname / id from a SOAP response. I want to print the responses, which come in lines; right now I only see the last line, not all of them.
timestamp = str(datetime.datetime.now())[:19]
file = open(CreateFile, 'w')
write_line = str(name).strip() + ';' + familyname.strip() + ';' + str(id).strip() + ';' + timestamp
file.writelines(write_line + '\n')
def CreateFile():  # ****************** creating output log file *****
    today = str(datetime.date.today()).split('-')
    NowTime = str(datetime.datetime.now())[11:19]
    Nowtime_split = NowTime.split(':')
    timestamp = Nowtime_split[0] + Nowtime_split[1] + Nowtime_split[2]
    daystamp = today[0] + today[1] + today[2]
    filename = 'log' + '_' + daystamp + '_' + timestamp + '.csv'
    destfile = r'C:\Desktop' + str(filename)
    file = open(destfile, 'w')
    file.close()
    return destfile

CreateFile = CreateFile()
Here is a small test case:
import datetime

timestamp = str(datetime.datetime.now())[:19]
file = open('1.txt', 'w')
for i in range(10):
    write_line = 'try' + str(i)
    file.writelines(write_line + '\n')
file.close()
I'm not really sure what you want, but I think the problem is that you open the file in write mode ('w'), which replaces the previous contents every time, so only the last line survives. What you can do is replace write mode with append mode ('a'):
timestamp = str(datetime.datetime.now())[:19]
with open(CreateFile, 'a') as file:
    write_line = str(name).strip() + ';' + familyname.strip() + ';' + str(id).strip() + ';' + timestamp
    file.write(write_line + '\n')
I also suggest using with open(...) so that the file is closed automatically and you avoid future errors:
lines = ['line1', 'line2', ...]  # the list of lines you want to add to the current timestamped file
with open('current_timestampfile.txt', 'w') as f:
    f.writelines("%s\n" % l for l in lines)
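Putting it together for the original question, here is a minimal sketch, assuming the SOAP responses have already been parsed into a list of (name, familyname, id) tuples (that part is not shown in the question) and that the log path comes from the CreateFile() function above:
import datetime

# hypothetical list of parsed SOAP responses; the real parsing is not shown in the question
responses = [('John', 'Doe', 1), ('Jane', 'Roe', 2)]

destfile = 'log.csv'  # or the path returned by CreateFile() from the question
with open(destfile, 'a') as log:
    for name, familyname, ident in responses:
        timestamp = str(datetime.datetime.now())[:19]
        write_line = str(name).strip() + ';' + familyname.strip() + ';' + str(ident).strip() + ';' + timestamp
        log.write(write_line + '\n')
Because the file is opened once and the loop runs inside the with block, every response ends up in the file instead of only the last one.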
I have a program that runs twice a day; I'm rewriting it as part of a new update and I'm running into a problem. The program writes about 2500 different text files each time it runs.
Each file is a csv and has five columns:
date, data, data, data, data
I want to delete the last row of the data and write the new information in its place, but only if the program has already run that day.
with open('file.csv', 'r+') as file:
    info = [line.split(',') for line in file]
    for row in info:
        if str(today) in row:
            # need help here
This is the old version that I'm redoing; it will no longer work with the new program.
with open('file.csv', 'a+') as file:
    file.write(str(today) + ',' +
               str(data) + ',' +
               str(data) + ',' +
               str(data) + ',' +
               str(data) + ',' + '\n')
Maybe this will help you
with open('file.csv', 'r+') as file:
    info = [line.rstrip('\n').split(',') for line in file]

for idx, row in enumerate(info):
    if str(today) in row:
        # replace this row with the new data for today
        info[idx] = [str(today), str(data), str(data), str(data), str(data)]

with open('file.csv', 'w+') as file:
    for row in info:
        file.write(','.join(row) + '\n')
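One thing this does not cover is the first run of the day, when there is no row for today yet. A minimal append-or-replace sketch, assuming today and the four data values from the question are gathered into a list (the helper name write_today is just illustrative):
def write_today(path, today, data):
    # Replace today's row if it exists, otherwise append a new one.
    new_row = [str(today)] + [str(d) for d in data]
    with open(path, 'r') as f:
        rows = [line.rstrip('\n').split(',') for line in f]

    for idx, row in enumerate(rows):
        if str(today) in row:
            rows[idx] = new_row
            break
    else:
        rows.append(new_row)  # no row for today yet, so add one

    with open(path, 'w') as f:
        for row in rows:
            f.write(','.join(row) + '\n')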
I added my code below. The output is always an empty file.
Fasta_1 = ">ABC123\n ATCGTACGATCGATCGATCGCTAGACGTATCG"
Fasta_2 = ">DEF456\n actgatcgacgatcgatcgatcgacgact"
Fasta_3 = ">JIH789\n ACTGAC-ACTGT--ACTGTA----CATGTG"
output = open("sequence.fasta", "w")
output.write = (Fasta_1 + '\n' + Fasta_2 + '\n' + Fasta_3)
output.close()
Your problem is that output.write is not a property you can assign to; it is a function, so it should be called this way:
Fasta_1 = ">ABC123\n ATCGTACGATCGATCGATCGCTAGACGTATCG"
Fasta_2 = ">DEF456\n actgatcgacgatcgatcgatcgacgact"
Fasta_3 = ">JIH789\n ACTGAC-ACTGT--ACTGTA----CATGTG"
output = open("sequence.fasta", "w")
output.write(Fasta_1 + '\n' + Fasta_2 + '\n' + Fasta_3) # < !!!!
output.close()
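As a side note, a with block closes the file for you even if an exception occurs; this is just a sketch of the same write, not part of the fix:
Fasta_1 = ">ABC123\n ATCGTACGATCGATCGATCGCTAGACGTATCG"
Fasta_2 = ">DEF456\n actgatcgacgatcgatcgatcgacgact"
Fasta_3 = ">JIH789\n ACTGAC-ACTGT--ACTGTA----CATGTG"

with open("sequence.fasta", "w") as output:
    output.write(Fasta_1 + '\n' + Fasta_2 + '\n' + Fasta_3)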
Trouble with a really annoying homework assignment. I have a csv file with lots of comma-delimited fields per row. I need to take the last two fields from every row and write them into a new txt file. The problem is that some of the latter fields contain sentences; those with commas are wrapped in double quotes, and those without commas aren't. For example:
180,easy
240min,"Quite easy, but number 3, wtf?"
300,much easier than the last assignment
I did this and it worked just fine, but the double quotes disappear. The assignment is to copy the fields to the txt file, use a semicolon as the delimiter, and remove possible line breaks. The text must remain exactly the same. We have an automatic checking system, so it's no use arguing about whether this makes sense.
import csv

file = open('myfile.csv', 'r')
output = open('mytxt.txt', 'w')
csvr = csv.reader(file)
headline = next(csvr)
for line in csvr:
    lgt = len(line)
    time = line[lgt - 2].replace('\n', '')
    feedb = line[lgt - 1].replace('\n', '')
    if time != '' and feedb != '':
        output.write(time + ';' + feedb + '\n')
output.close()
file.close()
Is there some easy solution for this? Can I use the csv module at all? No one seems to have exactly the same problem.
Thank you all in advance.
Try this,
import csv

file = open('myfile.csv', 'r')
output = open('mytxt.txt', 'w')
csvr = csv.reader(file)
headline = next(csvr)
for line in csvr:
    lgt = len(line)
    time = line[lgt - 2].replace('\n', '')
    feedb = line[lgt - 1].replace('\n', '')
    if time != '' and feedb != '':
        if ',' in feedb:
            output.write(time + ';"' + feedb + '"\n')
        else:
            output.write(time + ';' + feedb + '\n')
output.close()
file.close()
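The reason the quotes have to be added back by hand is that csv.reader treats them as quoting syntax and strips them while parsing. A quick demonstration using an in-memory string (io.StringIO stands in for your file, just to illustrate):
import csv
import io

sample = '240min,"Quite easy, but number 3, wtf?"\n'
row = next(csv.reader(io.StringIO(sample)))
print(row)  # ['240min', 'Quite easy, but number 3, wtf?'] -- the quotes are gone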
I had to do it the ugly way; the file was too irregular. I talked with some colleagues on the same course, and apparently the idea was NOT to use the csv module here, but to practise basic file handling in Python.
file = open('myfile.csv', 'r')
output = open('mytxt.txt', 'w')
headline = file.readline()
feedb_lst = []
count = 0
for line in file:
    if line.startswith('1'):            # found out all lines should start with an ID number,
        data_lst = line.split(',', 16)  # which always starts with '1'
        lgt = len(data_lst)
        time = data_lst[lgt - 2]
        feedb = data_lst[lgt - 1].rstrip()
        feedback = [time, feedb]
        feedb_lst.append(feedback)
        count += 1
    else:
        feedb_lst[count - 1][1] = feedb_lst[count - 1][1] + line.rstrip()
i = 1
for item in feedb_lst:
    if item[0] != '' and item[1] != '':
        if i == len(feedb_lst):
            output.write(item[0] + ';' + item[1])
        else:
            output.write(item[0] + ';' + item[1] + '\n')
    i += 1
output.close()
file.close()
Thank you for your help!
I am trying to loop over the lines in a file and create multiple directories. My script only works for the first line of the list in the file. Here is my script; I have attached an image of the list as well. The same applies to both list_bottom.dat and list_top.dat.
import os

f = open("list_top.dat", "r")
g = open("list_bottom.dat", "r")

for lines in f:
    m_top = lines.split()[0]
    m_bot = lines.split()[0]
    os.mkdir(m_top)
    os.chdir(m_top)
    for lines in g:
        print(lines)
        m_bot = lines.split()[0]
        print(m_bot)
        os.mkdir(m_top + "_" + m_bot)
        os.chdir(m_top + "_" + m_bot)
        for angle in range(5):
            os.mkdir(m_top + "_" + "m_bot" + "_angle_" + str(angle))
            os.chdir(m_top + "_" + "m_bot" + "_angle_" + str(angle))
            os.chdir("../")
        os.chdir("../")
    os.chdir("../")
os.chdir("../")
You are trying to read from a file pointer, not from its content. You should do this instead:
with open("file.txt") as f:
lines = f.readlines()
for line in lines:
do_stuff()
(For readability I'm not posting this as a comment, but treat it as one.)
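Applied to the question, the key point is that g is exhausted after the first pass of the outer loop, so the inner loop only runs once. A minimal sketch that reads both files into lists first and builds full paths with os.makedirs instead of juggling os.chdir (it keeps the question's file names; the exact directory layout is an assumption):
import os

with open("list_top.dat") as f:
    tops = [line.split()[0] for line in f if line.strip()]
with open("list_bottom.dat") as g:
    bottoms = [line.split()[0] for line in g if line.strip()]

for m_top in tops:
    for m_bot in bottoms:
        for angle in range(5):
            # e.g. m_top/m_top_m_bot/m_top_m_bot_angle_0
            path = os.path.join(m_top,
                                m_top + "_" + m_bot,
                                m_top + "_" + m_bot + "_angle_" + str(angle))
            os.makedirs(path, exist_ok=True)  # creates all intermediate directories too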
I have a folder full of .mpt files, each of them having the same data format.
I need to delete the first 57 lines from all files and append these files into one csv - output.csv.
I have that section already:
import glob
import os

dir_name = 'path name'
lines_to_ignore = 57
input_file_format = '*.mpt'
output_file_name = "output.csv"

def convert():
    files = glob.glob(os.path.join(dir_name, input_file_format))
    with open(os.path.join(dir_name, output_file_name), 'w') as out_file:
        for f in files:
            with open(f, 'r') as in_file:
                content = in_file.readlines()
                content = content[lines_to_ignore:]
                for i in content:
                    out_file.write(i)
    print("working")

convert()
print("done")
This part works OK.
How do I add the filename of each .mpt file as the last column of output.csv?
Thank you!
This is a quick 'n dirty solution.
In this loop the variable i is just a string (a line from a CSV file):
for i in content:
    out_file.write(i)
So you just need to strip off the end-of-line character(s) (either "\n" or "\r\n") and append a comma followed by the filename.
If you're using Unix, try:
for i in content:
    i = i.rstrip("\n") + "," + f + "\n"
    out_file.write(i)
This assumes that the field separator is a comma. Another option is:
for i in content:
    i = i.rstrip() + "," + f
    print(i, file=out_file)
This will strip all whitespace from the end of i.
Add quotes if you need to quote the filename:
i = i.rstrip(...) + ',"' + f + '"'
The relevant part:
with open(f, 'r') as in_file:
    content = in_file.readlines()
    content = content[lines_to_ignore:]
    for i in content:
        new_line = ",".join([i.rstrip(), f]) + "\n"  # <-- this is new
        out_file.write(new_line)                     # <-- this is new
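If only the file's name rather than its full path should end up in the last column (the question doesn't say which is wanted), a small variation using os.path.basename, dropped into the same loop with the same f, lines_to_ignore and out_file as above:
import os

with open(f, 'r') as in_file:
    content = in_file.readlines()[lines_to_ignore:]
    for i in content:
        # keep just 'something.mpt' instead of the full path stored in f
        new_line = ",".join([i.rstrip(), os.path.basename(f)]) + "\n"
        out_file.write(new_line)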