How to iterate in reverse from the middle of a text file? - python

The Problem:
I am writing a program whose ultimate objective is to extract several specific lines from text versions of .json files. I want to automate the manual process of copying and pasting tens or hundreds of lines that all share the same keyword, but which are a few lines removed from that keyword.
Proposed solution:
The Python program iterates through the .txt file looking for a specific keyword.
Once it finds that word, it stops and iterates backwards from that line until it finds a second keyword.
When the second keyword is found, the program writes the entire line that keyword is on to a new file, and then resumes iterating forward through the file from the initial keyword's line.
Illustration:
<fields>
    <fullName>NAME KEYWORD</fullName> ## line I want to iterate backwards to so I can write it to another file ##
    <label>example_label</label>
    <length>131072</length>
    <trackHistory>false</trackHistory> ## line with the keyword that stops the iterating process ##
    <type>example_type</type>
</fields>
Once the line with "NAME KEYWORD" is written to a new file, the program continues on to the next section, which will have many of the same fields but a different "NAME KEYWORD", etc.
Attempted solutions:
I have been looking for clear information online about how to iterate through a text file in reverse from a given point. I found one site (kite.com) that illustrates how to use readlines() and reversed(), but those operations are performed on the document as a whole, as opposed to a distinct portion of it.
I also reviewed Python's own documentation, but the suggestions there do not appear to offer the functionality I'm looking for here. (Unless I've misunderstood.)
TL;DR
Does anyone have an idea about whether there is an existing module, function or practice which would allow Python to iterate backwards from the middle of a text file?

As others mentioned in the comments, it would be better to work with the original JSON or use an XML parser. But if these aren't possible (maybe the file is too big to load into memory at once), I think you can do it without having to read in reverse.
saved_line = None
for line in oldfile:
    if 'NAME KEYWORD' in line:
        # remember the most recent candidate line
        saved_line = line
    elif '<trackHistory>false</trackHistory>' in line and saved_line:
        # the stop keyword confirms the saved candidate; write it out
        newfile.write(saved_line)
saved_line will always contain the same line that you would have found if you iterated backwards after finding the <trackHistory>false</trackHistory> line.
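If the file really is well-formed XML, here is a minimal sketch of the parser route mentioned above; the file names are hypothetical and the tag names are taken from the illustration:

import xml.etree.ElementTree as ET

tree = ET.parse('oldfile.txt')  # hypothetical input path; must be well-formed XML with a single root
with open('newfile.txt', 'w') as newfile:
    for fields in tree.getroot().iter('fields'):
        # same pairing as the loop above: a fullName confirmed by a trackHistory of false
        if fields.findtext('trackHistory') == 'false':
            newfile.write(fields.findtext('fullName') + '\n')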

Related

Extra characters and order changes in terminal output? Is this a terminal problem (CoCalc)?

Why am I seeing extra ] characters in the output of a list construction that should be just a list of lists? Is this a terminal problem (using CoCalc's terminal)?
In particular, the output should have just two levels of lists: the global list and each of the sublists inside it.
But when I read through the output of data in a Python interpreter in CoCalc's terminal, I see this kind of thing:
Notice the extra ] as if there were inner lists that should not exist. Also notice the numbering, which seems to be out of order, even though in the data it is ordered.
What's happening here?
To reconstruct the problem:
Download the dorothea_valid.data file from here:
https://archive.ics.uci.edu/ml/machine-learning-databases/dorothea/DOROTHEA/
Then create a project in CoCalc (https://cocalc.com/). Upload dorothea_valid.data to that project.
Start a Linux terminal in CoCalc, and make sure you know the path/working directory so that you can find dorothea_valid.data from Python. In the Linux terminal start the Python interpreter by writing python.
Paste the following function, meant for reading a file of integer sequences separated by "\n", into the interpreter:
def read_datafile(fname):
    data = list()
    with open(fname, 'r') as file:
        for line in file:
            data.append([int(i) for i in line.split()])
    return data

# and then call print(read_datafile(fname)) to get the output.
Then call read_datafile() on dorothea_valid.data and print the resulting object as suggested in the comment above. The screen-captured lines appear when scrolling right to the bottom, but the problem may be visible in other parts of the output as well.
EDIT:
It's now 10/08/2022 and I'm unable to see the problem. Maybe it has been fixed in CoCalc.
You are creating inner lists: you're using one list comprehension per line of the file, so it makes one list of integers per line. If you want it all as one flat list, use extend rather than append:
for line in file:
    data.extend(int(i) for i in line.split())
Notice I'm using a generator expression here rather than a list comprehension. A list comprehension would be a waste because it creates the whole list in memory only to be read through once and then discarded.
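A quick standalone illustration of the difference, not tied to the data file:

line = "1 2 3"
nested = []
nested.append([int(i) for i in line.split()])  # nested becomes [[1, 2, 3]]: one inner list per call
flat = []
flat.extend(int(i) for i in line.split())      # flat becomes [1, 2, 3]: items are spliced in directly
print(nested, flat)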

How to 'flatten' lines from text file if they meet certain criteria using Python?

To start, I am a complete newcomer to Python and to programming in anything other than web languages.
So, I have developed a script using Python as an interface between a piece of software called Spendmap and an online app called Freeagent. This script works perfectly: it imports and parses the text file and pushes it through the API to the web app.
What I am struggling with is that Spendmap exports multiple lines per order, whereas Freeagent wants one line per order. So I need to add the cost values from any orders spread across multiple lines and then 'flatten' those lines into one so the order can be sent through the API. The 'key' field is the 'PO' field: if the script sees any matching PO numbers, I want it to flatten them as described above.
This is a 'dummy' example of the text file produced by Spendmap:
5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP
COMMENT,002091
301067,2013-09-06,2013-09-11,P000002,1133919,42.000,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000002,1133919,359.400,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
The above has been formatted for easier reading; normally it is just one line after the next with no text formatting.
The 'key' or PO field is the fourth comma-separated field (e.g. P000002), and the sixth field is the cost to be totalled. So if this example were passed through the script, I'd expect the first row to be left alone, the second and third rows' costs to be added (as they're both from the same PO number), and the fourth line to be left alone.
Expected result:
5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP
COMMENT,002091
301067,2013-09-06,2013-09-11,P000002,1133919,401.400,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
Any help with this would be greatly appreciated and if you need any further details just say.
Thanks in advance for looking!
I won't give you the full solution. But you should:
Write and test a regular expression that breaks the line down into its parts, or use the csv library (see the sketch after these steps).
Parse the numbers out so they're decimal numbers rather than strings.
Collect the lines up by PO number. Perhaps you could use a dict that maps PO numbers to lists of orders?
When all the input is finished, iterate over that dict and add up all the orders stored in each list.
Make a string-formatting function that outputs the line in the expected format.
Maybe feed the output back into the input to test that you get the same result. The second time around there should be no changes, if I've understood the problem.
Good luck!
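A minimal sketch of those steps using Python's csv module. The file names are hypothetical, and the column positions (PO in the fourth field, cost in the sixth) are assumptions read off the sample data:

import csv

totals = {}      # PO -> summed cost
first_row = {}   # PO -> the first full row seen for that PO
order = []       # POs in first-seen order

with open('spendmap_export.txt', newline='') as f:   # hypothetical input file
    for row in csv.reader(f):
        po = row[3]                                  # assumed PO column
        if po not in totals:
            order.append(po)
            first_row[po] = row
        totals[po] = totals.get(po, 0.0) + float(row[5])  # assumed cost column

with open('flattened.txt', 'w', newline='') as f:    # hypothetical output file
    writer = csv.writer(f)
    for po in order:
        row = first_row[po]
        row[5] = '%.3f' % totals[po]                 # write the summed cost back into the row
        writer.writerow(row)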
I would use a dictionary to compile the lines, using get(key,0.0) to sum values if they exist already, or start with zero if not:
InputData = """5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP COMMENT,002091
301067,2013-09-06,2013-09-11,P000002,1133919,42.000,20,2013-10-31,103,xxxxxx,AP COMMENT,002143
301067,2013-09-06,2013-09-11,P000002,1133919,359.400,20,2013-10-31,103,xxxxxx,AP COMMENT,002143
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP COMMENT,002143"""

OutD = {}
ValueD = {}
for Line in InputData.split('\n'):
    # commas in comments won't matter because we are joining afterwards anyway
    Fields = Line.split(',')
    PO = Fields[3]
    Value = float(Fields[5])
    # set up the output string with a placeholder for .format()
    OutD[PO] = ",".join(Fields[:5] + ["{0:.3f}"] + Fields[6:])
    # add the value to the old value, or to zero if it is not found
    ValueD[PO] = ValueD.get(PO, 0.0) + Value

# the output is unsorted by default, but you could sort or preserve original order
for POKey in ValueD:
    print OutD[POKey].format(ValueD[POKey])
P.S. Yes, I know capitals are for classes, but this makes it easier to tell which variables I have defined...

Writelines (python) clears the text file it is supposed to write to, and writes nothing

What I'm doing is pretty basic, but for some reason it isn't actually writing anything into the text file I need.
The first thing I've done is gotten input from the user and assigned it to assoc. This works fine, as I can print out assoc whenever I please, and that appears to work completely fine.
Next, I open a different file depending on whether assoc is equal to 0, 1, or 2. I read all the lines and assign the list of read lines to the variable beta, then I grab the length of beta and assign it to prodlen, add one to prodlen and assign that new value to localid, and close the file object. The only reason I'm including this is that I fear I've missed something crucial and simple.
if assoc == 0:
    fob = open('pathto/textfile1.txt', 'r')
if assoc == 1:
    fob = open('pathto/textfile2.txt', 'r')
if assoc == 2:
    fob = open('pathto/textfile3.txt', 'r')
beta = fob.readlines()
prodlen = len(beta)
localid = prodlen + 1
fob.close()
After I get the user input, open the file, list its contents, and read its length, I then use the user's input again to open the file with write permission. (I've only included one of the possible if statements, because the others are identical except for which file they write to and what VALUE, a string, is.) I append to the list beta a \n to get a line break, followed by the string represented here by VALUE, with localid added onto the end in string form.
if assoc == 0:
    fob = open('pathto/textfile1.txt', 'w')
    beta.append("\nVALUE" + str(localid))
    print(beta)
    fob.writelines(beta)
My true problem, though, is in the last two lines. When I print the list beta, it includes the new value that I've appended. But when I try to write the list to the file, it clears any data that was in the file and doesn't write anything! I'm a Python noob, so please keep the solution simple if possible. I assume the solution is relatively simple and I'm probably just overlooking something.
Use the 'a' mode instead of 'w' in your open call: 'w' truncates (overwrites) the file, while 'a' appends. A short sketch follows the links below.
http://docs.python.org/2/library/functions.html#open
python open built-in function: difference between modes a, a+, w, w+, and r+?
is a useful explanation of different modes.
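A minimal sketch of the fix, assuming only the new value needs to be written (with 'a' you should not re-write the lines the file already contains). Note the explicit close(), which flushes the buffered data to disk:

if assoc == 0:
    fob = open('pathto/textfile1.txt', 'a')  # 'a' appends instead of truncating
    fob.write("\nVALUE" + str(localid))      # write only the new entry, not all of beta
    fob.close()                              # close() flushes the buffered write to disk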

Python: write to file, cannot understand behavior

I don't understand why I cannot write to a file in my Python program. I have a list of strings, measurements, and I just want to write them to a file. Instead of all the strings, it writes only one string. I cannot understand why.
This is my piece of code:
fmeasur = open(fmeasur_name, 'w')
line1st = 'rev number, alg time\n'
fmeasur.write(line1st)
for i in xrange(len(measurements)):
    fmeasur.write(measurements[i])
    print measurements[i]
fmeasur.close()
I can see all of these strings printed, but in the file there is only one. What could be the problem?
The only plausible explanation that I have is that you execute the above code multiple times, each time with a single entry in measurements (or at least the last time you execute the code, len(measurements) is 1).
Since you're overwriting the file instead of appending to it, only the last set of measurements would be present in the file, but all of them would appear on the screen.
Edit: Or do you mean that the data is there, but there are no newlines between the measurements? The easiest way to fix that is by using print >>fmeasur, measurements[i] instead of fmeasur.write(...).
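Equivalently, a sketch that keeps write() and appends the newline explicitly (assuming the entries don't already end with one):

for m in measurements:
    fmeasur.write(m + '\n')  # one measurement per line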

How do I parse a listing of files to get just the filenames in Python?

So let's say I'm using Python's ftplib to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) inside a list? See the link above for example output.
Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names.
The best answer:
You may want to use ftp.nlst() instead of ftp.retrlines(). It will give you exactly what you want.
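A minimal usage sketch; the host name is hypothetical:

from ftplib import FTP

ftp = FTP('ftp.example.com')  # hypothetical host
ftp.login()                   # anonymous login
filenames = ftp.nlst()        # just the names of the files in the current directory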
If you can't, read the following:
Generators for sysadmin processes
In his now-famous presentation, Generator Tricks for Systems Programmers: An Introduction, David M. Beazley gives a lot of recipes to answer this kind of data problem with quick and reusable code.
E.g.:
# empty list that will receive all the log entries
log = []
# we pass log.append as the callback that retrlines calls for each line,
# instead of letting it print each line; we do that only because we
# cannot use something better than retrlines
ftp.retrlines('LIST', callback=log.append)
# we use rsplit because it is more efficient in our case with a big listing
files = (line.rsplit(None, 1)[1] for line in log)
# get your file list
files_list = list(files)
Why don't we generate the list immediately?
Well, because doing it this way offers you much more flexibility: you can apply any intermediate generator to filter the files before turning them into files_list. It's just like a pipe: add a line and you add a processing step, without overhead (since these are generators). And if you get rid of retrlines, it still works, and it's even better because you never store the whole list even once.
EDIT: well, I read the comment on the other answer, and it says that this won't work if there is any space in the name.
Cool, this will illustrate why this method is handy. If you want to change something in the process, you just change a line. Swap:
files = (line.rsplit(None, 1)[1] for line in log)
and
# split the line, take every field from index 8 onward, then join them back
files = (' '.join(line.split()[8:]) for line in log)
OK, this may not be obvious here, but for huge batch-processing scripts, it's nice :-)
And a slightly less optimal method, by the way, if you're stuck using retrlines() for some reason, is to pass a function as the second argument to retrlines(); it'll be called for each item in the list. So something like this (assuming you have an FTP object named ftp) would work as well:
filenames = []
ftp.retrlines('LIST', lambda line: filenames.append(line.split()[-1]))
The list 'filenames' will then be a list of the file names.
Is there any reason why ftplib.FTP.nlst() won't work for you? I just checked and it returns only names of the files in a given directory.
Since every filename in the output starts at the same column, all you have to do is get the position of the dot (the current-directory entry) on the first line:
drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 .
Then slice the filename out of the other lines using the position of that dot as the starting index.
Since the dot is the last character on the line, you can use the length of the line minus 1 as the index. So the final code is something like this:
lines = []
ftp.retrlines('LIST', lines.append)  # retrlines returns only the status string, so collect the listing via a callback
filename_index = len(lines[0]) - 1   # the dot is the last character of the first line
files = []
for line in lines:
    files.append(line[filename_index:])
If the FTP server supports the MLSD command, then please see section “single directory case” from that answer.
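For what it's worth, Python 3's ftplib exposes MLSD directly as FTP.mlsd(); a minimal sketch, assuming the server supplies the standard type fact:

# mlsd() yields (name, facts) pairs; keep plain files, skipping directories and the . / .. entries
filenames = [name for name, facts in ftp.mlsd() if facts.get('type') == 'file']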
Use an instance (say ftpd) of the FTPDirectory class, call its .getdata method with a connected ftplib.FTP instance in the correct folder, and then you can:
directory_filenames= [ftpfile.name for ftpfile in ftpd.files]
I believe it should work for you.
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
NOTES -
Here I am making an assumption that you want the data in the program (as a list), not on the console.
each_file_detail is each line that is being produced by the program.
' '.join(each_file_detail.split())
replaces multiple spaces with a single space.
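A quick standalone demonstration of that idiom; the listing line is made up:

line = 'drwxrwsr-x   5 ftp-usr  pdmaint   1536 Mar 20 09:48 logs'
print(' '.join(line.split()))               # 'drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 logs'
print(' '.join(line.split()).split()[-1])   # 'logs'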
