I have a script that pulls data and writes it into a TXT file; then, in the same code, I have a for loop that changes the format by replacing single quotes with double quotes and concatenates the result with some text in another new file.
with open('myfile.txt', 'w') as f:
    print(response['animals']['mammals'], file=f)

fout = open("mynewfile.txt", "wt")
f = open('myfile.txt', 'r')
for line in f:
    x = str(line).replace("'", '"')
    fout.write(f"mammals = {x}")
f.close()
fout.close()
The result is basically that everything in myfile.txt with single quotes, e.g. ['dog', 'cat'], is edited and written to mynewfile.txt as mammals = ["dog", "cat"], which is cool. But I also want to manually add some other text to mynewfile.txt, and every time I need to update that data and run the script, the text I entered manually is deleted because of the for loop.
Is there a way to overwrite just that line without touching the rest of the lines in the file?
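One possible approach (a sketch, not from the original post): read mynewfile.txt into memory, replace only the line that starts with mammals =, and write everything back, so any manually added lines survive. The filenames and the mammals = prefix come from the question; everything else is an assumption about the file layout.

# Sketch: rewrite only the "mammals = ..." line, keeping manually added lines intact.
# Assumes mynewfile.txt already exists and contains at most one "mammals = " line.
with open('myfile.txt', 'r') as f:
    x = f.read().strip().replace("'", '"')
new_line = f"mammals = {x}\n"

with open('mynewfile.txt', 'r') as fout:
    lines = fout.readlines()

# Replace the existing "mammals = " line, or append one if it is not there yet.
for i, line in enumerate(lines):
    if line.startswith("mammals = "):
        lines[i] = new_line
        break
else:
    lines.append(new_line)

with open('mynewfile.txt', 'w') as fout:
    fout.writelines(lines)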
I am looking for a simple way to split lines in Python from a .txt file and then just read out the names and compare them to another file.
I had code that split the lines successfully, but I couldn't find a way to read out just the names; unfortunately, the code that split them successfully was lost.
This is what the .txt file looks like:
Id;Name;Job;
1;James;IT;
2;Adam;Director;
3;Clare;Assisiant;
Here is an example of the code I currently have (it doesn't output anything):
my_file = open("HP_liki.txt","r")
flag = index = 0
x1=""
for line in my_file:
line.strip().split('\n')
index+=1
content = my_file.read()
list=[]
lines_to_read = [index-1]
for position, line1 in enumerate(x1):
if position in lines_to_read:
list=line1
x1=list.split(";")
print(x1[1])
I need a solution that doesn't import pandas or csv.
The first part of your code confuses me as to your purpose.
for line in my_file:
    line.strip().split('\n')
    index+=1
content = my_file.read()
Your for loop iterates through the file and strips each line. Then it splits on a newline, which cannot exist at that point: the strip has already removed the trailing newline, and iterating by lines never puts a newline in the middle of a line.
In addition, once you've stripped the line you ignore the result, increment index, and move on to the next line. As a result, all this loop accomplishes is to count the lines in the file.
The line after the loop reads from a file whose cursor is already at the end, so it simply returns an empty string.
If you want the names from the file, then use the built-in file read to iterate through the file, split each line, and extract the second field:
name_list = [line.split(';')[1]
             for line in open("HP_liki.txt", "r")]
name_list also includes the header "Name", which you can easily delete.
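For instance, a minimal way to drop that header entry (assuming the name_list built above) is to slice it off:

names = name_list[1:]  # drop the leading "Name" header entry
print(names)           # ['James', 'Adam', 'Clare']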
Does that handle your problem?
Without using any external library, you can use simple file I/O and then generalize according to your needs.
readfile.py
file = open('datafile.txt', 'r')
for line in file:
    line_split = line.split(';')
    if line_split[0].isdigit():
        print(line_split[1])
file.close()
datafile.txt
Id;Name;Job;
1;James;IT;
2;Adam;Director;
3;Clare;Assisiant;
If you run this, you'll get the output:
James
Adam
Clare
You can change the if condition according to your needs.
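For example, a hypothetical variation of the same loop that only prints people whose Job field is 'IT' (the value 'IT' is just taken from the sample data):

file = open('datafile.txt', 'r')
for line in file:
    line_split = line.split(';')
    # only print names from rows whose Job field is "IT"
    if line_split[0].isdigit() and line_split[2] == 'IT':
        print(line_split[1])
file.close()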
I have my dataf.txt file:
Id;Name;Job;
1;James;IT;
2;Adam;Director;
3;Clare;Assisiant;
I have written this to extract information:
with open('dataf.txt', 'r') as fl:
    data = fl.readlines()

a = [i.replace('\n', '').split(';')[:-1] for i in data]
print(a[1:])
Outputs:
[['1', 'James', 'IT'], ['2', 'Adam', 'Director'], ['3', 'Clare', 'Assisiant']]
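If you only want the names themselves (which is what the question asks for), one way, building on the list a above, might be:

names = [row[1] for row in a[1:]]  # second field of every row, skipping the header
print(names)                       # ['James', 'Adam', 'Clare']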
I'm trying to get the output without commas, separating each line into two strings and printing them.
My code so far yields:
173,70
134,63
122,61
140,68
201,75
222,78
183,71
144,69
But I'd like it to print the values on each line as separate strings, without the comma.
if __name__ == '__main__':
    # Complete main section of code
    file_name = "data.txt"
    # Open the file for reading here
    my_file = open('data.txt')
    lines = my_file.read()
    with open('data.txt') as f:
        for line in f:
            lines.split()
            lines.replace(',', ' ')
    print(lines)
In your sample code, lines contains the full content of the file as a str:
my_file = open('data.txt')
lines = my_file.read()
You then later re-open the file to iterate the lines:
with open('data.txt') as f:
    for line in f:
        lines.split()
        lines.replace(',', ' ')
Note, however, that str.split and str.replace do not modify the existing value, as strs in Python are immutable. Also note that you are operating on lines there, rather than on the for-loop variable line.
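A quick illustration of that immutability point (my own example, not from the question):

s = "173,70"
s.replace(",", " ")  # returns a new string, which is thrown away here
print(s)             # still prints 173,70 -- s was not modified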
Instead, you'll need to assign the result of those functions to new values, or give them as arguments (e.g., to print). So you'll want to open the file, iterate over the lines, and print each value with the "," replaced with a " ":
with open("data.txt") as f:
for line in f:
print(line.replace(",", " "))
Or, since you are operating on the whole file anyway:
with open("data.txt") as f:
print(f.read().replace(",", " "))
Or, as your file appears to be CSV content, you may wish to use the csv module from the standard library instead:
import csv

with open("data.txt", newline="") as csvfile:
    for row in csv.reader(csvfile):
        print(*row)
with open('data.txt', 'r') as f:
    for line in f:
        for value in line.split(','):
            print(value)
While Python offers several ways to open files, this is the preferred one for working with files, because we read the file lazily (which matters especially for large files), and after exiting the with scope (indentation block) the file is closed automatically by the system.
Here we open the file in read mode. File objects follow the iterator protocol, so we can iterate over them like lists. Each line is a real line from the file, and it is a string.
After getting the line in the line variable, we split it (see str.split()) into two tokens: the part before the comma and the part after it. split returns a newly constructed list of strings. If you need to remove unwanted characters (such as the trailing newline), you can use the str.strip() method; strip and split are usually combined.
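For example, combining strip and split as just described (a small sketch on the same data.txt):

with open('data.txt', 'r') as f:
    for line in f:
        # strip removes the trailing newline, split breaks the line at the comma
        first, second = line.strip().split(',')
        print(first, second)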
elegant and efficient file reading - method 1
with open("data.txt", 'r') as io:
for line in io:
sl=io.split(',') # now sl is a list of strings.
print("{} {}".format(sl[0],sl[1])) #now we use the format, for printing the results on the screen.
non elegant, but efficient file reading - method 2
fp = open("data.txt", 'r')
line = None
while (line=fp.readline()) != '': #when line become empty string, EOF have been reached. the end of file!
sl=line.split(',')
print("{} {}".format(sl[0],sl[1]))
Noob here. I need to read in a file, using the read (rather than readlines()) method (which provides the input to several functions), and identify all of the lines in that file (i.e. to print or to append to a list).
I've tried join, split, appending to lists, all with little to show.
# Code I'm stuck with:
with open("text.txt", 'r') as file:
    a = file.read()

# Stuff that doesn't work
for line in a:
    # can't manipulate when using the below, but prints fine
    # print(line, end = '')
    temp = (line, end = '')

for line in a:
    temp = ''
    while not ' ':
        temp += line

new = []
for i in a:
    i = i.strip()
I tend to get either everything in one long string, or individual chars like 'I', ' ', 't', 'e', 'n', 'd', ' ', 't', 'o' .... I'm just looking to get each line up to the newline char \n, or basically what readlines() would give me, despite the file being stored in memory using read().
with open('text.txt') as file:
    for line in file:
        pass  # do whatever you want with the line
The file object is iterable over the lines in the file - for a text file.
All you need to do is split the contents after reading, and you get a list of the lines.
with open("text.txt", 'r') as file:
a = file.read()
a.split('\n')
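One caveat worth adding: if the file ends with a newline, split('\n') leaves a trailing empty string in the list; str.splitlines() avoids that. For example:

text = "first line\nsecond line\n"
print(text.split('\n'))   # ['first line', 'second line', '']
print(text.splitlines())  # ['first line', 'second line']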
With the above help, and using read rather than readlines, I was able to separate out individual lines from a file as follows:
with open("fewwords.txt", "r") as file:
a = file.read()
empty_list = []
# break a, which is read in as 1 really big string, into lines, then remove newline char
a = a.split('\n')
for i in range(len(a)):
initial_list.append(a[i])
I am trying to write a Python script to convert rows in a file to JSON output, where each line contains a JSON blob.
My code so far is:
with open( "/Users/me/tmp/events.txt" ) as f:
content = f.readlines()
# strip to remove newlines
lines = [x.strip() for x in content]
i = 1
for line in lines:
filename = "input" + str(i) + ".json"
i += 1
f = open(filename, "w")
f.write(line)
f.close()
However, I am running into an issue where if I have an entry in the file that is quoted, for example:
client:"mac"
This will be output as:
"client:""mac"""
Using a second strip on writing to file will give:
client:""mac
But I want to see:
client:"mac"
Is there any way to force Python to read text in the format ' "something" ' without appending extra quotes around it?
Instead of creating an auxiliary list to strip the newline from content, just open the input and output files at the same time. Write to the output file as you iterate through the lines of the input, stripping whatever you deem necessary. Try something like this:
with open('events.txt', 'r') as infile, open('input1.json', 'w') as outfile:
    for line in infile:
        line = line.strip('"')
        outfile.write(line)
I have a test.txt file that contains:
yellow.blue.purple.green
red.blue.red.purple
And I'd like to have in output.txt just the second and the third part of each line, like this:
blue.purple
blue.red
Here is my Python code:
with open('test.txt', 'r') as file1, open('output.txt', 'w') as file2:
    for line in file1:
        file2.write(line.partition('.')[2] + '\n')
but the result is:
blue.purple.green
blue.red.purple
How is it possible to take only the second and third parts of each line?
Thanks
You may want
with open('test.txt', 'r') as file1, open('output.txt', 'w') as file2:
    for line in file1:
        file2.write(".".join(line.split('.')[1:3]) + '\n')
When you apply split('.') to a line, e.g. yellow.blue.purple.green, you get a list of values:
["yellow", "blue", "purple", "green"]
By slicing [1:3], you get the second and third items.
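For example, putting the slice and the join together on the first sample line:

parts = "yellow.blue.purple.green".split('.')
print(parts[1:3])            # ['blue', 'purple']
print(".".join(parts[1:3]))  # blue.purple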
First I created a .txt file with the same data that you entered in your original .txt file, using the 'w' mode to write the data, as you already know. I also created an empty list that we will use to store the data and later write to the output.txt file.
output_write = []

with open('test.txt', 'w') as file_object:
    file_object.write('yellow.' + 'blue.' + 'purple.' + 'green')
    file_object.write('\nred.' + 'blue.' + 'red.' + 'purple')
Next I opened the text file that I created and used 'r' mode to read the data from the file. In order to get the output that you wanted, I read each line of the file in a for loop, and for each line I split it into a list of items, splitting on the period between the items ('.'). You want the second and third items in each line, so I create a variable called new_read that stores the items at index 1 and 2. After that we have the two pieces of data that you want to write to your output file; I store them in a variable called output_data. Lastly I append the output data to the empty list that we created earlier.
with open('test.txt', 'r') as file_object:
    for line in file_object:
        read = line.split('.')
        new_read = read[1:3]
        output_data = new_read[0] + '.' + new_read[1]
        output_write.append(output_data)
Lastly we can write this data to a file called 'output.txt' as you noted earlier.
with open('output.txt', 'w') as file_object:
    file_object.write(output_write[0])
    file_object.write('\n' + output_write[1])

print(output_write[0])
print(output_write[1])
Lastly I print the data just to check the output:
blue.purple
blue.red