I need to parse a part of my output file that looks like this (Image is also attached for clarity)
y,mz) = (-.504D-04,-.543D-04,-.538D-03)
The expected output is:
The code I have so far looks like this:
class NACParser(ParseSection):
    name = "coupling"
which is good, but there are some issues:
It only prints the values from the very last line; I think this is because similar lines later in the file overwrite the earlier entries.
This code only works for this specific molecule, and I want something that works for any molecule. What I mean is: in this example I have a molecule with 15 atoms, where the first atom is c (carbon), the 5th is h (hydrogen), and the 11th is s (sulfur), but the total number of atoms (currently 15) and the atom names can be different when I have a different molecule.
So I am wondering how I can write general code that works for any molecule. Any help?
This will do literally what you asked; maybe you can use it as a basis. I just gather all the atom IDs when I find a line with "ATOM", and create the dict entries when I find a line with "d/d". I would show the output, but I just typed in faked data because I didn't want to retype all of that.
import re
from pprint import pprint

# Matches an atom index followed by a one- or two-letter element symbol, e.g. "1 c".
header = r"(\d+ [a-z]{1,2})"

atoms = []
gather = {}

for line in open('x.txt'):
    if len(line) < 5:
        continue
    if 'ATOM' in line:
        # Collect the atom IDs ("1c", "2c", ...) from the header line.
        atoms = re.findall(header, line)
        atoms = [s.replace(' ', '') for s in atoms]
        continue
    if '/d' in line:
        # A derivative row: the first token names the row (dE/dx etc.),
        # the rest are per-atom values in the same order as `atoms`.
        parts = line.split()
        row = parts[0].replace('/', '')
        for at, val in zip(atoms, parts[1:]):
            gather[at + '_' + row] = val

pprint(gather)
Here's the output from your test data. I hope you realize that the cut-and-paste data doesn't match the image: the image uses d/dx, but the cut-and-paste uses dE/dx. I have assumed you want the "E" in the dict key too, but that's easy to fix if you don't.
{'10c_dEdx': '0.8337613D-02',
'10c_dEdy': '-0.8171767D-01',
'10c_dEdz': '-0.4316928D-02',
'11s_dEdx': '0.3138990D-01',
'11s_dEdy': '0.3893252D-01',
'11s_dEdz': '0.2767787D-02',
'12h_dEdx': '0.8416159D-02',
'12h_dEdy': '0.3335059D-02',
'12h_dEdz': '0.1357569D-01',
'13h_dEdx': '0.1128067D-02',
'13h_dEdy': '-0.1457401D-01',
'13h_dEdz': '-0.7834375D-03',
'14h_dEdx': '0.8941240D-02',
'14h_dEdy': '0.4869915D-02',
'14h_dEdz': '-0.1273530D-01',
'15h_dEdx': '0.4292434D-03',
'15h_dEdy': '-0.1418384D-01',
'15h_dEdz': '-0.7764904D-03',
'1c_dEdx': '-0.1150239D-01',
'1c_dEdy': '0.4798462D-02',
'1c_dEdz': '0.6015413D-05',
'2c_dEdx': '0.2259669D-01',
'2c_dEdy': '0.5902019D-01',
'2c_dEdz': '0.3707704D-02',
'3c_dEdx': '-0.3153006D-02',
'3c_dEdy': '-0.4060517D-01',
'3c_dEdz': '-0.2306249D-02',
'4n_dEdx': '-0.2718508D-01',
'4n_dEdy': '0.3404657D-01',
'4n_dEdz': '0.1334956D-02',
'5h_dEdx': '-0.1064344D-01',
'5h_dEdy': '-0.1054522D-01',
'5h_dEdz': '-0.8032586D-03',
'6c_dEdx': '0.3017851D-01',
'6c_dEdy': '-0.2805275D-01',
'6c_dEdz': '-0.9413310D-03',
'7s_dEdx': '-0.2253417D-01',
'7s_dEdy': '0.1196856D-01',
'7s_dEdz': '0.2069422D-03',
'8n_dEdx': '-0.3195785D-01',
'8n_dEdy': '0.1888257D-01',
'8n_dEdz': '0.3914382D-03',
'9h_dEdx': '-0.4441489D-02',
'9h_dEdy': '0.1382483D-01',
'9h_dEdz': '0.6724659D-03'}
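One more wrinkle worth handling when you generalize: the values use Fortran-style D exponents, which Python's float() doesn't accept. A minimal sketch of a converter (to_float is just an illustrative name):
def to_float(s):
    # Fortran prints double-precision exponents as "D" (e.g. 0.8337613D-02);
    # float() expects "E", so swap the exponent letter first.
    return float(s.replace('D', 'E').replace('d', 'e'))

print(to_float('-0.4316928D-02'))  # -0.004316928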
I'm trying to get all the substrings under a "customLabel" tag, for example "Month" inside of ...,"customLabel":"Month"},"schema":"metric...
Unusual issue: this is a 1,071,552-character ndjson file consisting of a single line ("for line in file:" is pointless since there's only one).
The best I found was this:
How to find a substring of text with a known starting point but unknown ending point in python
but if I use it, the result obviously doesn't stop (at Month) and keeps returning the whole remainder of the file, the same as if I use partition()[2].
Just know that Month is only an example; customLabel has about 300 variants, and they are not listed anywhere (I'm actually doing this to list them...).
To give some details, here's my script so far:
with open("file.ndjson","rt", encoding='utf-8') as ndjson:
filedata = ndjson.read()
x="customLabel"
count=filedata.count(x)
for i in range (count):
if filedata.find(x)>0:
print("Found "+str(i+1))
So right now it properly tells me how many occurrences of customLabel there are; instead, I'd like to get the substring that comes after customLabel":" (Month in the example) and put them all in a list, to locate them much more easily and to enable the use of replace() for translations later on.
I'd guess regexes are the solution, but I'm pretty new to them, so I'll post this question in the meantime while I learn about them...
If you want to search for all (even nested) customLabel values like this:
{"customLabel":"Month" , "otherJson" : {"customLabel" : 23525235}}
you can use RegEx patterns with the re module
import re

label_values = []
# Match "customLabel" : <value>, where the value is digits or a quoted word.
regex_pattern = r"\"customLabel\"[ ]?:[ ]?([0-9a-zA-Z\"]+)"

with open("file.ndjson", "rt", encoding="utf-8") as ndjson:
    for line in ndjson:
        values = re.findall(regex_pattern, line)
        label_values.extend(values)

print(label_values)  # ['"Month"', '23525235']

# If you don't want the items to have quotations
label_values = [i.replace('"', "") for i in label_values]
print(label_values)  # ['Month', '23525235']
Note: If you're only dealing with ndjson files and not nested searching, then it'd be better to use the json module to parse the lines and then simply get the value of your specific key, which is customLabel.
import json

label = "customLabel"
label_values = []

with open("file.ndjson", "rt", encoding="utf-8") as ndjson:
    for line in ndjson:
        line_json = json.loads(line)
        if line_json.get(label) is not None:
            label_values.append(line_json.get(label))

print(label_values)  # ['Month']
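Since the question says the whole file is one giant line, a third option is to parse that line once and walk the parsed structure recursively. A sketch, assuming the line parses as a single JSON document (find_labels is a hypothetical helper, not a library function):
import json

def find_labels(node, key="customLabel", found=None):
    # Recursively collect every value stored under `key` in nested dicts/lists.
    if found is None:
        found = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                found.append(v)
            find_labels(v, key, found)
    elif isinstance(node, list):
        for item in node:
            find_labels(item, key, found)
    return found

with open("file.ndjson", "rt", encoding="utf-8") as ndjson:
    data = json.loads(ndjson.read())
print(find_labels(data))  # e.g. ['Month', ...]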
I am trying to extend the replace function. Instead of doing the replacements on individual lines or with individual commands, I would like to take the replacements from a central text file.
This is the source:
import os
import feedparser
import pandas as pd
pd.set_option('max_colwidth', -1)
RSS_URL = "https://techcrunch.com/startups/feed/"
feed = feedparser.parse(RSS_URL)
entries = pd.DataFrame(feed.entries)
entries = entries[['title']]
entries = entries.to_string(index=False, header=False)
entries = entries.replace(' ', '\n')
entries = os.linesep.join([s for s in entries.splitlines() if s])
print(entries)
I want to be able to replace words in the RSS feed using a central "replacements" file. The source file should have two columns: old word, new word, just like the replace function replace('old', 'new').
Output/Print Example:
truck
rental
marketplace
D’Amelio
family
launches
to
invest
up
to
$25M
...
In most cases I want to delete words that are unnecessary for me, e.g. replace('to', ''). But I also want to be able to change special names, e.g. replace("D'Amelio", "DAmelio"). The goal is to reduce the number of words and build up a kind of keyword radar.
Is this possible? I can't find any help by Googling, but it could well be that I don't know the right terms or can't formulate the question.
with open('<filepath>', 'r') as r:
    # if you remove the ' marks from around your words, you can remove the [1:-1] part of the code below
    words_to_replace = [word.strip()[1:-1] for word in r.read().split(',')]

def replace_words(original_text, words_to_replace):
    for word in words_to_replace:
        original_text = original_text.replace(word, '')
    return original_text
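A usage sketch, assuming entries is the feed text built in the question's script and words_to_replace was loaded as shown above:
# Remove every listed word from the feed text in one call.
cleaned = replace_words(entries, words_to_replace)
print(cleaned)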
I was unable to understand your question fully, but as far as I understand it, you have strings like cat, dog, etc., and you have a file containing the data you want to replace them with. If that is your requirement, I have given a solution below; try running it and see whether it satisfies your requirement.
If that's not what you meant, please comment below.
TXT file (don't use '' around the strings in the text file):
papa, papi
dog, dogo
cat, kitten
Python file:
your_string = input("Type a string here: ")  # string you want to replace

with open('textfile.txt', "r") as file1:  # open your file
    lines = file1.readlines()

for line in lines:  # take the lines of the file one by one
    string1 = line.split()  # split the line into a list like ['cat,', 'kitten']
    if your_string == string1[0][:-1]:  # compare your string with the first word, minus the trailing comma
        your_string = your_string.replace(your_string, string1[1])  # if it matches (e.g. input was cat), replace it with kitten
        print(your_string)
    else:
        pass
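If you want to apply every old/new pair from the two-column file to a whole text at once (closer to the question's RSS use case), here's a sketch along the same lines; load_pairs and apply_pairs are illustrative names:
# Read "old, new" pairs from the two-column file and apply them all to one text.
def load_pairs(path):
    pairs = []
    with open(path) as f:
        for line in f:
            parts = [p.strip() for p in line.split(',')]
            if len(parts) == 2:
                pairs.append((parts[0], parts[1]))
    return pairs

def apply_pairs(text, pairs):
    for old, new in pairs:
        text = text.replace(old, new)
    return text

print(apply_pairs("papa and dog", load_pairs('textfile.txt')))  # papi and dogo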
If this answered your question, please upvote, as it took time to make and test the Python file.
Currently, I am trying to create a CSV file containing the subtitles of NBC's "Friends" and their corresponding starting times. So basically I am trying to turn an srt file into a csv file in Python.
For those of you that are unfamiliar with srt-files, they look like this:
1
00:00:47,881 --> 00:00:49,757
[CAR HORNS HONKING]
2
00:00:49,966 --> 00:00:52,760
There's nothing to tell.
It's just some guy I work with.
3
00:00:52,969 --> 00:00:55,137
Come on.
You're going out with a guy.
…
Now I have used readlines() to turn it into a list like this:
['\ufeff1\n', '00:00:47,881 --> 00:00:49,757\n', '[CAR HORNS HONKING]\n',
'\n', '2\n', '00:00:49,966 --> 00:00:52,760\n',
"There's nothing to tell.\n", "It's just some guy I work with.\n",
'\n', '3\n', '00:00:52,969 --> 00:00:55,137\n', 'Come on.\n',
"You're going out with a guy.\n", ...]
Is there a way to create a dict or dataframe from this list (or from the file it is based on) that contains the starting time (the end time is not needed) and the lines that belong to it? I've been struggling because, while sometimes just one line corresponds to a starting time, other times there are two. (There are at most two lines per starting time in this file; however, a solution that works even when more lines are present would be preferable.)
Lines like the first one ("[CAR HORNS HONKING]"), or others that simply say e.g. "CHANDLER:", and their starting times would ideally not be included, but that's not all that important right now.
Any help is very much appreciated!
I think this code covers your problem. The main idea is to use a regular expression to locate the starting time of each subtitle and extract its value and the corresponding lines. The code is not in its most polished form, but I think the main idea is well expressed. I hope it helps.
import re

with open('sub.srt', 'r') as h:
    sub = h.readlines()

re_pattern = r'[0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3} -->'
regex = re.compile(re_pattern)

# Get start times
start_times = list(filter(regex.search, sub))
start_times = [time.split(' ')[0] for time in start_times]

# Get lines
lines = [[]]
for sentence in sub:
    if re.match(re_pattern, sentence):
        lines[-1].pop()  # drop the sequence number that precedes each timestamp
        lines.append([])
    else:
        lines[-1].append(sentence)
lines = lines[1:]

# Merge results
subs = {start_time: line for start_time, line in zip(start_times, lines)}
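To get from the subs dict to the CSV file the question asks for, a minimal sketch (the column names are just illustrative):
import csv

# One row per starting time; multi-line subtitles joined with a space.
with open('subs.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['start_time', 'subtitle'])
    for start, text_lines in subs.items():
        writer.writerow([start, ' '.join(s.strip() for s in text_lines if s.strip())])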
I have an abstract which I've split into sentences in Python. I want to write to 2 tables. One has the following columns: abstract ID (the file number that I extracted from my document), sentence ID (automatically generated), and each sentence of the abstract on its own row.
I would want a table that looks like this
abstractID SentenceID Sentence
a9001755 0000001 Myxococcus xanthus development is regulated by(1st sentence)
a9001755 0000002 The C signal appears to be the polypeptide product (2nd sentence)
and another table, NSFClasses, with the columns abstractID and nsfOrg.
How do I write the sentences (each on its own row) to the table and assign the sentence IDs as shown above?
This is my code:
import glob
import re
import json

org = "NSF Org"
fileNo = "File"
AbstractString = "Abstract"
abstractFlag = False
abstractContent = []
path = 'awardsFile/awd_1990_00/*.txt'
files = glob.glob(path)

for name in files:
    fileA = open(name, 'r')
    for line in fileA:
        if line.find(fileNo) != -1:
            file = line[14:]
        if line.find(org) != -1:
            nsfOrg = line[14:].split()
    print file
    print nsfOrg
    fileA = open(name, 'r')
    content = fileA.read().split(':')
    abstract = content[len(content) - 1]
    abstract = abstract.replace('\n', '')
    abstract = abstract.split()
    abstract = ' '.join(abstract)
    sentences = abstract.split('.')
    print sentences
    key = str(len(sentences))
    print "Sentences--- "
As others have pointed out, it's very difficult to follow your code. I think the code below will do what you want, based on your expected output and what we can see. I could be way off, though, since we can't see the file you are working with. I'm especially troubled by one part of your code that I can't see enough of to refactor, but which feels obviously wrong. It's marked below.
import glob

for filename in glob.glob('awardsFile/awd_1990_00/*.txt'):
    fh = open(filename, 'r')
    abstract = fh.read().split(':')[-1]
    fh.seek(0)  # reset file pointer
    # See comments below
    for line in fh:
        if line.find('File') != -1:
            absID = line[14:].strip()
            print absID
        if line.find('NSF Org') != -1:
            print line[14:].split()
    # End see comments
    fh.close()
    concat_abstract = ' '.join(abstract.replace('\n', '').split())
    for s_id, sentence in enumerate(concat_abstract.split('.')):
        # Adjust numeric width arguments to prettify the table
        print absID.ljust(15),
        print '{:06d}'.format(s_id).ljust(15),
        print sentence
In the marked section, you are searching for the last occurrence of the strings 'File' and 'NSF Org' in the file (whether you mean to or not, because the loop keeps overwriting your variables for as long as they occur), then doing something with the 15th character onward of that line. Without seeing the file, it is impossible to say exactly how to do it, but I can tell you there is a better way. It probably involves searching through the whole file as one string (or at least the first part of it, if this is in its header) rather than looping over it, as sketched below.
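A sketch of that whole-file idea, reusing filename from the loop above and assuming header lines shaped roughly like "File : a9001755" (the exact layout is a guess based on the line[14:] slicing):
import re

with open(filename) as fh:
    header = fh.read()

# Take the first occurrence of each field instead of the last.
file_match = re.search(r'File\s*:\s*(\S+)', header)
org_match = re.search(r'NSF Org\s*:\s*(.+)', header)
if file_match:
    absID = file_match.group(1)
if org_match:
    nsfOrg = org_match.group(1).split()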
Also, notice how I condensed your code. You store a lot of things in variables that you aren't using at all, and collecting a lot of cruft that spreads the state around. To understand what line N does, I have to keep glancing ahead at line N+5 and back over lines N-34 to N-17 to inspect variables. This creates a lot of action at a distance, which for reasons cited is best to avoid. In the smaller version, you can see how I substituted in string literals in places where they are only used once and called print statements immediately instead of storing the results for later. The results are usually more concise and easily understood.
So I matched (with the help of kind contributors on Stack Overflow) the item number in:
User Number 1 will probably like movie ID: RecommendedItem[item:557, value:7.32173]the most!
Now I'm trying to extract the corresponding name from another text file using the item number. Its contents look like:
557::Voyage to the Bottom of the Sea (1961)::Adventure|Sci-Fi
For some reason I'm just getting 'None' in the terminal; no matches found.
import re

myfile = open('result.txt', 'r')
myfile2 = open('movies.txt', 'r')
content = myfile2.read()

for line in myfile:
    m = re.search(r'(?<=RecommendedItem\[item:)(\d+)', line)
    n = re.search(r'(?<=^' + m.group(0) + r'\:\:)(\w+)', content)
    print n
I'm not sure if I can use a variable in a lookbehind assertion...
Really appreciate all the help I'm getting here!
EDIT: Turns out the only problem was the unneeded caret in the second regular expression.
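For reference, a sketch of the corrected search: the lookbehind still works with a variable here because m.group(0) is a plain digit string, so the pattern has a fixed width by the time it is compiled.
# Same search as above, minus the caret inside the lookbehind.
n = re.search(r'(?<=' + m.group(0) + r'::)(\w+)', content)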
Here, once you've found the number, you use an "old-style" string format (you could equally use .format if you so desired) to put it into the regular expression. I thought it'd be nice to access the values via a dictionary, hence the named matches; you could do it without this, though. To get a list of genres, just .split("|") the string under suggestionDict["Genres"].
import re

num = 557
suggestion = "557::Voyage to the Bottom of the Sea (1961)::Adventure|Sci-Fi"
# Note the hyphen in the Genres class, so "Sci-Fi" is matched in full.
suggestionDict = re.search(r'%d::(?P<Title>[a-zA-Z0-9 ]+)\s\((?P<Date>\d+)\)::(?P<Genres>[a-zA-Z0-9|-]+)' % num, suggestion).groupdict()

# printing to show whether it works or not
print('\n'.join(["%s:%s" % (k, d) for k, d in suggestionDict.items()]))

# clearer example of how to use
print("\nCLEAR EXAMPLE:")
print(suggestionDict["Title"])
Producing
Title:Voyage to the Bottom of the Sea
Genres:Adventure|Sci-Fi
Date:1961

CLEAR EXAMPLE:
Voyage to the Bottom of the Sea
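And, as mentioned above, splitting the genre string into a list is just:
# Split the genres on "|" as suggested above.
genres = suggestionDict["Genres"].split("|")
print(genres)  # ['Adventure', 'Sci-Fi']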