\ at the end of each line of a txt - python

I want to write the pathnames of images into a .txt file in Python. I create a new line for each path, yet an unwanted \ appears at the end of each image path. I tried to strip it, but nothing happens. How can I remove it?
for num, name in enumerate(dirNames):
    os.mkdir(name)
    os.chdir(name)
    with open(f'{os.path.basename(name)}.txt', 'w+', encoding='utf-8') as f:
        for tile in png_cropedShips[num]:
            path = os.path.join('tiles', os.path.basename(name), os.path.basename(tile), '\n')
            print(path)
            f.write(path)
        for line in f:
            line.rstrip('\\')
The output:
tiles\T34TBL20211001T094041\T34TBL_20211001T094041_c38_r9_ndwi.png\
tiles\T34TBL20211001T094041\T34TBL_20211001T094041_c39_r0_ndwi.png\
tiles\T34TBL20211001T094041\T34TBL_20211001T094041_c39_r12_ndwi.png\
tiles\T34TBL20211001T094041\T34TBL_20211001T094041_c39_r13_ndwi.png\

path = "a\\string\\"
path = path.removesuffix("\\")
print(path)
will print:
a\string
This should do it. Note that a literal backslash must be written twice (\\) in a string literal, because a single backslash starts an escape sequence. (str.removesuffix is available from Python 3.9.)

The correct output was achieved by writing the newline character separately from the image paths:
for num, name in enumerate(dirNames):
    os.mkdir(name)
    os.chdir(name)
    with open(f'{os.path.basename(name)}.txt', 'w+', encoding='utf-8') as f:
        for tile in png_cropedShips[num]:
            path = os.path.join('tiles', os.path.basename(name), os.path.basename(tile))
            f.write(path)
            f.write('\n')
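For reference, the stray backslash came from passing '\n' to os.path.join, which treats it as one more path component and inserts the OS separator in front of it. A minimal sketch (assuming Windows, where os.sep is a backslash; directory and file names are illustrative):

import os

# '\n' passed as a path component makes join insert a separator before it.
bad = os.path.join('tiles', 'T34TBL', 'tile.png', '\n')
print(repr(bad))          # on Windows: 'tiles\\T34TBL\\tile.png\\\n'

# Building the path first and writing the newline separately avoids it.
good = os.path.join('tiles', 'T34TBL', 'tile.png')
print(repr(good + '\n'))  # 'tiles\\T34TBL\\tile.png\n'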

Related

Why doesn’t the concatenated variable 'new_path' work when the whole string 'full_path' does

I'm trying to figure out whether images listed by name in a text file exist in a specific image folder.
The text file looks like this (about 1k lines):
4916849840_43930a0979_o.png
331595384_492146dbf3_k.png
13517809023_f49228b3ec_o.png
So I wrote this code:
import os
txt_path = '/Users/owner/Desktop/360_dataset_3k/train/train_3k_1.txt'
images_path = '/Users/owner/Desktop/360_dataset_3k/360_512_3k_1/'
full_path = '/Users/owner/Desktop/360_dataset_3k/360_512_3k_1/27107804622_984dbb2181_o.png'
f = open(txt_path, "r")
lines = f.readlines()
for line in lines:
    new_path = os.path.join(images_path, line)
    print(new_path)
    if os.path.exists(new_path):
        print("True.\n")
    else:
        print("False.\n")
However, the output is an endless stream of False.
BUT, when I replaced new_path here
if os.path.exists(new_path):
    print("True.\n")
with the variable full_path, which is a copy of one of the printed new_path values:
if os.path.exists(full_path):
    print("True.\n")
the output is True!!!
And after checking in a rather tedious way, it turns out the images do exist in the image folder.
Can somebody tell me what is going on, please? THANKS SO MUCH.
When you call the readlines() method on a file object, each line keeps its newline character ('\n') at the end. You need to strip each line before joining the path.
Code:
f = open(txt_path, "r")
lines = f.readlines()
for line in lines:
    line = line.strip()  # remove surrounding whitespace, including the trailing '\n'
    new_path = os.path.join(images_path, line)
    print(new_path)
    if os.path.exists(new_path):
        print("True.\n")
    else:
        print("False.\n")
Also, close the file object with f.close() when the task is done.
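For what it's worth, a minimal variant (a sketch, not from the original answer) that iterates the file directly and uses a with block so the file is closed automatically; the paths are the ones from the question:

import os

txt_path = '/Users/owner/Desktop/360_dataset_3k/train/train_3k_1.txt'
images_path = '/Users/owner/Desktop/360_dataset_3k/360_512_3k_1/'

with open(txt_path, "r") as f:
    for line in f:  # no need for readlines(); a file iterates line by line
        new_path = os.path.join(images_path, line.strip())
        print(new_path, os.path.exists(new_path))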

How to read a quoted string from a file and write it without quotes?

I am trying to write a Python script to convert rows in a file to JSON output, where each line contains a JSON blob.
My code so far is:
with open( "/Users/me/tmp/events.txt" ) as f:
content = f.readlines()
# strip to remove newlines
lines = [x.strip() for x in content]
i = 1
for line in lines:
filename = "input" + str(i) + ".json"
i += 1
f = open(filename, "w")
f.write(line)
f.close()
However, I am running into an issue where if I have an entry in the file that is quoted, for example:
client:"mac"
This will be output as:
"client:""mac"""
Adding a second strip when writing to the file gives:
client:""mac
But I want to see:
client:"mac"
Is there any way to force Python to read text in the format ' "something" ' without appending extra quotes around it?
Instead of creating an auxiliary list to strip the newlines from content, just open the input and output files at the same time. Write to the output file as you iterate through the lines of the input, stripping whatever you deem necessary. Try something like this:
with open('events.txt', 'r') as infile, open('input1.json', 'w') as outfile:
    for line in infile:
        line = line.strip().strip('"')  # drop the newline first, then any outer quotes
        outfile.write(line + '\n')
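If the goal is still one output file per input line, as in the question, a sketch combining the question's naming scheme with the stripping above might look like this (assuming the input format shown in the question):

with open("/Users/me/tmp/events.txt") as infile:
    for i, line in enumerate(infile, start=1):
        line = line.strip().strip('"')  # newline/whitespace first, then outer quotes
        with open("input" + str(i) + ".json", "w") as outfile:
            outfile.write(line)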

import filenames iteratively from a different file

I have a large number of entries in a file. Let me call it file A.
File A:
('aaa.dat', 'aaa.dat', 'aaa.dat')
('aaa.dat', 'aaa.dat', 'bbb.dat')
('aaa.dat', 'aaa.dat', 'ccc.dat')
I want to use these entries, line by line, in a program that would iteratively pick an entry from file A and concatenate the files in this way:
filenames = ['aaa.dat', 'aaa.dat', 'ccc.dat']  ###entry number 3
with open('out.dat', 'w') as outfile:  ###the name has to be aaa-aaa-ccc.dat
    for fname in filenames:
        with open(fname) as infile:
            outfile.write(infile.read().strip())
All I need to do is to substitute the filenames iteratively and create an output in a "aaa-aaa-aaa.dat" format. I would appreciate any help-- feeling a bit lost!
Many thanks!!!
You can retrieve and modify the file names in the following way:
import re
pattern = re.compile(r'\W')
with open('fnames.txt', 'r') as infile:
    for line in infile:
        line = (re.sub(pattern, ' ', line)).split()
        # Old filenames - to concatenate contents
        content = [x + '.dat' for x in line[::2]]
        # New filename
        new_name = ('-').join(line[::2]) + '.dat'
        # Write the concatenated content to the new
        # file (first read the content all at once)
        with open(new_name, 'w') as outfile:
            for con in content:
                with open(con, 'r') as old:
                    new_content = old.read()
                    outfile.write(new_content)
This program reads your input file, here named fnames.txt with the exact structure from your post, line by line. For each line it splits the entries using a precompiled regex (precompiling the regex is suitable here and should make things faster). This assumes that your filenames are alphanumeric, since the regex substitutes every character other than letters, digits, and underscore with a space.
It retrieves the 'aaa' and 'dat' parts as a list of strings for each line and forms the new name by joining every second entry, starting from index 0, and adding a .dat extension. It joins with '-' as in the post.
It then retrieves the individual file names from which it will extract the content, collecting them into a list content by selecting every second entry from line.
Finally, it reads each of the files in content and writes them to the common file new_name. It reads each of them all at once, which may be a problem if these files are big; in general there may be more efficient ways of doing all this. Also, if you are planning to do more things with the content from the old files before writing, consider moving the old-file-specific operations into a separate function for readability and easier debugging.
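If memory use with big files is a concern, one possible tweak (a sketch, not from the original answer) is to stream each old file into the new one in chunks with shutil.copyfileobj instead of reading it whole; content and new_name below stand in for the values built in the loop above:

import shutil

content = ['aaa.dat', 'aaa.dat', 'ccc.dat']  # as built in the loop above
new_name = 'aaa-aaa-ccc.dat'

with open(new_name, 'w') as outfile:
    for con in content:
        with open(con, 'r') as old:
            shutil.copyfileobj(old, outfile)  # copies in chunks, not all at once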
Something like this:
with open(fname) as infile, open('out.dat', 'w') as outfile:
    for line in infile:
        line = line.strip()
        if line:  # not empty
            filenames = eval(line)  # read tuple (ast.literal_eval would be safer)
            filenames = [f[:-4] for f in filenames]  # remove extension
            filename = '-'.join(filenames) + '.dat'  # make filename
            outfile.write(filename + '\n')  # write
If your problem is just calculating the new filenames, how about using os.path.splitext?
'-'.join([
    f[0] for f in [os.path.splitext(path) for path in filenames]
]) + '.dat'
which can probably be better understood if you see it like this:
import os

clean_fnames = []
filenames = ['aaa.dat', 'aaa.dat', 'ccc.dat']
for fname in filenames:
    name, extension = os.path.splitext(fname)
    clean_fnames.append(name)

name_without_ext = '-'.join(clean_fnames)
name_with_ext = name_without_ext + '.dat'
print(name_with_ext)
HOWEVER: if your issue is that you cannot get the filenames into a list by reading the file line by line, keep in mind that when you read files, you get text (strings), NOT Python structures. You need to rebuild a list from text like "('aaa.dat', 'aaa.dat', 'aaa.dat')\n".
You could take a look at ast.literal_eval or try to rebuild it yourself. The code below outputs a lot of messages to show what's happening:
import pprint

collected_fnames = []
with open('./fileA.txt') as f:
    for line in f:
        print("Read this (literal) line: %s" % repr(line))
        line_without_whitespaces_on_the_sides = line.strip()
        if not line_without_whitespaces_on_the_sides:
            print("line is empty... skipping")
            continue
        else:
            line_without_parenthesis = (
                line_without_whitespaces_on_the_sides
                .lstrip('(')
                .rstrip(')')
            )
            print("Cleaned parenthesis: %s" % line_without_parenthesis)
            chunks = line_without_parenthesis.split(', ')
            print("Collected %s chunks in a %s: %s" % (len(chunks), type(chunks), chunks))
            chunks_without_quotations = [chunk.replace("'", "") for chunk in chunks]
            print("Now we don't have quotations: %s" % chunks_without_quotations)
            collected_fnames.append(chunks_without_quotations)

print("collected %s lines with filenames:\n%s" %
      (len(collected_fnames), pprint.pformat(collected_fnames)))
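For comparison, a minimal sketch of the ast.literal_eval route mentioned above (assuming the same fileA.txt layout):

import ast

collected_fnames = []
with open('./fileA.txt') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        # literal_eval safely parses the tuple literal, unlike eval
        collected_fnames.append(list(ast.literal_eval(line)))
print(collected_fnames)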

Python Split not working properly

I have the following code to read the lines in a file and split them on a specified delimiter. After splitting, I have to write some specific fields into another file.
Sample Data:
Week49_A_60002000;Mar;FY14;Actual;Working;E_1000;PC_000000;4287.63
Code:
import os
import codecs

sfilename = "WEEK_RPT_1108" + os.extsep + "dat"
sfilepath = "Club" + "/" + sfilename
sbackupname = "Club" + "/" + sfilename + os.extsep + "bak"

try:
    os.unlink(sbackupname)
except OSError:
    pass
os.rename(sfilepath, sbackupname)

try:
    inputfile = codecs.open(sbackupname, "r", "utf-16-le")
    outputfile = codecs.open(sfilepath, "w", "utf-16-le")
    sdelimdatfile = ";"
    for line in inputfile:
        record = line.split(';')
        outputfile.write(record[1])
except IOError, err:
    pass
I can see that index 0 of the list contains the whole line instead of the first field:
record[0] = Week49_A_60002000;Mar;FY14;Actual;Working;E_1000;PC_000000;4287.63
while printing record[1] raises 'list index out of range'.
I need help, as I'm new to Python.
Thanks!
After your comment saying that print line outputs u'\u6557\u6b65\u3934\u415f\u365f\u3030\u3230\u3030\u3b30\u614d\u3b72\u5946\u3431\u413b\u7463\u6175\u3b6c\u6f57\u6b72\u6e69\u3b67\u5f45\u3031\u3030\u503b\u5f43\u3030\u3030\u3030\u343b\u3832\u2e37\u3336', I can explain what happens and how to fix it.
What happens:
You have a normal 8-bit-characters file, and the line you show is even plain ASCII, but you try to decode it as if it were UTF-16 little endian. So you wrongly combine every two bytes into a single 16-bit Unicode character! If your system had been able to display them correctly, and if you had printed line directly instead of repr(line), you would have got 敗步㤴䅟㙟〰㈰〰㬰慍㭲奆㐱䄻瑣慵㭬潗歲湩㭧彅〱〰倻彃〰〰〰㐻㠲⸷㌶. Of course, none of those Unicode characters is the semicolon (;, i.e. \x3b or \u003b), so the line cannot be split on it.
But as you encode it back before writing, record[0] puts the whole original line into the new file, which led you to believe, erroneously, that the problem was in the split function.
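A quick way to see the effect (a sketch in Python 3 syntax for illustration; the question's code is Python 2):

# The first 8 ASCII bytes of the line, decoded as UTF-16-LE: each byte
# pair fuses into one CJK-looking code point, matching the output above.
raw = b'Week49_A'
print(raw.decode('utf-16-le'))  # -> 敗步㤴䅟  (no ';' survives, so split() finds nothing)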
How to fix:
Just open the file normally, or use the correct encoding if it contains non-ASCII characters. As you are using Python 2, I would just do:
try:
    inputfile = open(sbackupname, "r")
    outputfile = open(sfilepath, "w")
    sdelimdatfile = ";"
    for line in inputfile:
        record = line.split(sdelimdatfile)
        outputfile.write(record[1])
except IOError, err:
    pass
If you really need to use the codecs module, for example if the file contains UTF-8 or Latin-1 characters, you can replace the open calls with:
encoding = "utf8" # or "latin1" or whatever the actual encoding is...
inputfile = codecs.open(sbackupname, "r", encoding)
outputfile = codecs.open(sfilepath, "w", encoding)
Then, to guard against a missing index [1], either skip the line with continue if len(record) < 2, or just don't write to the file (as here):
for line in inputfile:
    record = line.split(';')
    if len(record) >= 2:
        outputfile.write(record[1])

CSV file processing in Python

I work with spatial data that is output to text files with the following format:
COMPANY NAME
P.O. BOX 999999
ZIP CODE , CITY
+99 999 9999
23 April 2013 09:27:55
PROJECT: Link Ref
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Design DTM is 30MB 2.5X2.5
Stripping applied to design is 0.000
Point Number Easting Northing R.L. Design R.L. Difference Tol Name
3224808 422092.700 6096059.380 2.520 -19.066 -21.586 --
3224809 422092.200 6096059.030 2.510 -19.065 -21.575 --
<Remainder of lines>
3273093 422698.920 6096372.550 1.240 -20.057 -21.297 --
Average height difference is -21.390
RMS is 21.596
0.00 % above tolerance
98.37 % below tolerance
End of Report
As shown, the files have a header and a footer. The data is delimited by spaces, but not by an equal number between the columns.
What I need, is comma delimited files with Easting, Northing and Difference.
I'd like to avoid modifying several hundred large files by hand, so I'm writing a small script to process the files. This is what I have so far:
#! /usr/bin/env python
import csv, glob, os
from itertools import islice

list_of_files = glob.glob('C:/test/*.txt')
for filename in list_of_files:
    (short_filename, extension) = os.path.splitext(filename)
    print short_filename
    file_out_name = short_filename + '_ed' + extension
    with open(filename, 'rb') as source:
        reader = csv.reader(source)
        for row in islice(reader, 10, None):
            file_out = open(file_out_name, 'wb')
            writer = csv.writer(file_out)
            writer.writerows(reader)
    print 'Created file: ' + file_out_name
    file_out.close()
print 'All done!'
Questions:
How can I make the line starting with 'Point Number' become the header in the output file? I'm trying to put DictReader in place of the reader/writer bit but can't get it to work.
Writing the output file with delimiter ',' does work, but writes a comma in place of each space, giving far too many empty columns in my output file. How do I circumvent this?
How do I remove the footer?
I can see a problem with your code: you are creating a new writer for each row, so you will end up with only the last one.
Your code could be something like this, without the need for CSV readers or writers, as the data is simple enough to be parsed as plain text (problems would arise if you had text columns with escaped characters and so on).
def process_file(source, dest):
    header_found = False
    for line in source:
        line = line.strip()
        if not header_found:
            # ignore everything until we find this text
            header_found = line.startswith('Point Number')
        elif not line:
            return  # we are done when we find an empty line, I guess
        else:
            # write the needed columns
            columns = line.split()
            dest.write(','.join(columns[i] for i in (1, 2, 5)))
for filename in list_of_files:
    short_filename, extension = os.path.splitext(filename)
    file_out_name = short_filename + '_ed' + extension
    with open(filename, 'r') as source:
        with open(file_out_name, 'w') as dest:
            process_file(source, dest)
This worked:
#! /usr/bin/env python
import glob, os

list_of_files = glob.glob('C:/test/*.txt')

def process_file(source, dest):
    header_found = False
    for line in source:
        line = line.strip()
        if not header_found:
            # ignore everything until we find this text
            header_found = line.startswith('Stripping applied')  # otherwise, header is lost
        elif not line:
            return  # we are done when we find an empty line
        else:
            # write the needed columns
            columns = line.split()
            dest.writelines(','.join(columns[i] for i in (1, 2, 5)) + "\n")  # adding the newline character was necessary

for filename in list_of_files:
    short_filename, extension = os.path.splitext(filename)
    file_out_name = short_filename + '_ed' + ".csv"
    with open(filename, 'r') as source:
        with open(file_out_name, 'wb') as dest:
            process_file(source, dest)
To answer your first and last questions: it is simply about ignoring the corresponding lines, i.e. not writing them to the output. This corresponds to the 'if not header_found' and 'elif not line:' blocks of fortran's proposal.
The second point is that there is no dedicated delimiter in your file: you have one or more spaces between fields, which makes it hard to parse with the csv module. Using split() will tokenize each line and return the list of non-blank fields, so it returns only useful values.
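To make the split() point concrete, a tiny sketch using one of the sample rows from the question:

row = "3224808      422092.700   6096059.380      2.520        -19.066      -21.586     --"
columns = row.split()  # split() with no argument collapses runs of whitespace
print(','.join(columns[i] for i in (1, 2, 5)))  # -> 422092.700,6096059.380,-21.586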
