Trying to format text when pulling from webpage HTML - python

I've created a basic counter for words in a song, but am having trouble formatting the album title and artist name from a given page on this lyrics website. Here's an example of what I am focused on:
I want to format it in this way:
Album Title: [Album Title] (Release_year)
Artist: [Artist Name]
I'm running into two problems:
The album title isn't enclosed in its own tag, so if I call the h1 tag I get the album name, release year, and artist name all together. How do I call them separately, or how do I break them up when calling them?
The album name has two blank lines and two blank spaces included in the string. How do I get rid of them? The release year prints right next to the album title, which is exactly what I'm looking for, but I can't get the album title to format properly.
This is what I currently have:
song_artist = soup.find("a",{"class":"artist"}).get_text()
album_title = soup.find("h1",{"class":"album_name"}).get_text()
print "Album Title: " + str(album_title)
print "Song Artist: " + str(song_artist.title())
which produces:
Thank you!!

album_title = soup.find("h1",{"class":"album_name"}).find(text=True).strip()
album_year = soup.find("span",{"class":"release_year"}).get_text().strip()
print 'Album Title: {} {}'.format(album_title, album_year)
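For completeness, a minimal sketch that prints both lines in the desired format, assuming the same soup object and class names as above (the artist line reuses the asker's .title() call):
album_title = soup.find("h1", {"class": "album_name"}).find(text=True).strip()
album_year = soup.find("span", {"class": "release_year"}).get_text().strip()
song_artist = soup.find("a", {"class": "artist"}).get_text().strip()

# print() with a single pre-formatted string works under both Python 2 and 3
print('Album Title: {} {}'.format(album_title, album_year))
print('Artist: {}'.format(song_artist.title()))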

Related

How to extract certain paragraph from text file

def extract_book_info(self):
    books_info = []
    for file in os.listdir(self.book_folder_path):
        title = "None"
        author = "None"
        release_date = "None"
        last_update_date = "None"
        language = "None"
        producer = "None"
        with open(self.book_folder_path + file, 'r', encoding = 'utf-8') as content:
            book_info = content.readlines()
            for lines in book_info:
                if lines.startswith('Title'):
                    title = lines.strip().split(': ')
                elif lines.startswith('Author'):
                    try:
                        author = lines.strip().split(': ')
                    except IndexError:
                        author = 'Empty'
                elif lines.startswith('Release date'):
                    release_date = lines.strip().split(': ')
                elif lines.startswith('Last updated'):
                    last_update_date = lines.strip().split(': ')
                elif lines.startswith('Produce by'):
                    producer = lines.strip().split(': ')
                elif lines.startswith('Language'):
                    language = lines.strip().split(': ')
                elif lines.startswith('***'):
                    pass
        books_info.append(Book(title, author, release_date, last_update_date, producer, language, self.book_folder_path))

    with open(self.book_info_path, 'w', encoding="utf-8") as book_file:
        for book_info in books_info:
            book_file.write(book_info.__str__() + "\n")
I was using this code to try to extract the book title, author, release_date, last_update_date, language, producer, and book_path.
This is the output I get:
['Title', 'The Adventures of Sherlock Holmes'];;;['Author', 'Arthur Conan Doyle'];;;None;;;None;;;None;;;['Language', 'English'];;;data/books_data/;;;
This is the output I want. What method should I use to achieve the following output?
The Adventures of Sherlock Holmes;;;Arthur Conan Doyle;;;November29,2002;;;May20,2019;;;English;;;
This is the example of input:
Title: The Adventures of Sherlock Holmes
Author: Arthur Conan Doyle
Release Date: November 29, 2002 [eBook #1661]
[Most recently updated: May 20, 2019]
Language: English
Character set encoding: UTF-8
Produced by: an anonymous Project Gutenberg volunteer and Jose Menendez
*** START OF THE PROJECT GUTENBERG EBOOK THE ADVENTURES OF SHERLOCK HOLMES ***
cover
str.split gives you a list as a result, but you're assigning that whole list to what should be a single value.
'Title: Sherlock Holmes'.split(':') # => ['Title', 'Sherlock Holmes']
From what I can gather, you want to access the second element of the split every time. You can do so like this:
...
for lines in book_info:
    if lines.startswith('Author'):
        _, author = lines.strip().split(':')
    elif ...
Be careful: if there is no second element in the split result, this unpacking raises a ValueError (indexing the result with [1] would raise the IndexError that the try around the author field in your code was presumably meant to guard against).
Also, avoid calling __str__ directly. That's what the str() function calls for you anyway. Use that instead.
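For illustration, a minimal sketch of the asker's loop rewritten that way (the field names and the surrounding file handling are assumed to stay as in the original question):
# Sketch only: keep just the value part of each "Key: Value" line.
for line in book_info:
    if line.startswith('Title'):
        _, title = line.strip().split(': ', 1)
    elif line.startswith('Author'):
        try:
            _, author = line.strip().split(': ', 1)
        except ValueError:  # the line had no ": " to split on
            author = 'Empty'
    elif line.startswith('Language'):
        _, language = line.strip().split(': ', 1)
    # ... the same pattern applies to the remaining fields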

Selecting Custom Field from RSS feed

So I'm trying to select the information within a custom field from an RSS file. So far I'm making use of feedparser to access the more common fields in the file. I was wondering if anyone would know how to achieve this, given that there are numerous custom fields. The specific custom field I'm trying to select is the venue address.
<item>
<title>First 5 Forever babies, books and rhymes</title>
<description>Thursday, June 9, 2022, 9:30&nbsp;&ndash;&nbsp;10am <br/><br/><img src="https://www.trumba.com/i/DgBXwGE0qzQ0Bl7V7ynz3DTV.jpg" title="First 5 Forever babies, books and rhymes"
<link>https://www.brisbane.qld.gov.au/trumba?trumbaEmbed=view%3devent%26eventid%3d159467519</link>
<x-trumba:ealink>https://eventactions.com/eventactions/brisbane-city-council#/actions/030ma0wxb5wympxw4nud1vrr72</x-trumba:ealink>
<category>2022/06/09 (Thu)</category>
<guid isPermaLink="false">http://uid.trumba.com/event/159467519</guid>
<x-trumba:masterid isPermaLink="false">http://uid.trumba.com/master/159467517</x-trumba:masterid>
<xCal:summary>First 5 Forever babies, books and rhymes</xCal:summary>
<xCal:location>Nundah Library</xCal:location>
<xCal:dtstart>2022-06-08T23:30:00Z</xCal:dtstart>
<x-trumba:localstart tzAbbr="EAST" tzCode="260">2022-06-09T09:30:00</x-trumba:localstart>
<x-trumba:formatteddatetime>Thursday, June 9, 2022, 9:30 - 10am</x-trumba:formatteddatetime>
<xCal:dtend>2022-06-09T00:00:00Z</xCal:dtend>
<x-trumba:localend tzAbbr="EAST" tzCode="260">2022-06-09T10:00:00</x-trumba:localend>
<x-microsoft:cdo-alldayevent>false</x-microsoft:cdo-alldayevent>
<x-trumba:customfield name="Event Type" id="21" type="number">Library events</x-trumba:customfield>
<x-trumba:customfield name="Venue" id="22542" type="text">Nundah Library</x-trumba:customfield>
<x-trumba:customfield name="Venue address" id="22505" type="text">Nundah Library, 1 Bage Street (via Primrose Lane), Nundah</x-trumba :customfield>
<x-trumba:customfield name="Parent event" id="42212" type="text">First 5 Forever children's literacy sessions</x-trumba:customfield>
<x-trumba:customfield name="Age range" id="21858" type="text">Infants and toddlers</x-trumba:customfield>
<x-trumba:customfield name="Cost" id="22177" type="text">Free</x-trumba:customfield>
<x-trumba:customfield name="Event type" id="21859" type="text">Free</x-trumba:customfield>
<x-trumba:customfield name="Library event types" id="22496" type="text">Babies, books & rhymes,Children's literacy</x-trumba:customfield>
<x-trumba:customfield name="Event image" id="40" type="uri" imageWidth="1290" imageHeight="775">https://www.trumba.com/i/DgBXwGE0qzQ0Bl7V7ynz3DTV.jpg</x-trumba:customfield>
<x-trumba:customfield name="Age" id="23562" type="text">0-1 year olds</x-trumba:customfield>
<x-trumba:categorycalendar>Brisbane's calendar|Library events</x-trumba:categorycalendar>
Examples of the code I have used previously to retrieve information can be seen below:
blog_feed = feedparser.parse(url)
posts = blog_feed.entries

for post in posts:
    # collecting the title for each individual item in the RSS file
    title = post.title
    # selecting the entire item as "word"
    word = posts[counter]
    counter = counter + 1
    # we know that the date of the event is always stored after the code (category)
    date = word.category
After attempting to use BS4 I can successfully retrieve the address, but I am still unsure whether it is possible to use this method inside a loop to find the address of each item in the RSS file and then append the address to a main list when another condition is true.
with open("brisbane-city-council.rss") as fp:
soup = BeautifulSoup(fp, "html.parser")
addrress = soup.find("x-trumba:customfield", id="22505")
print(addrress)
Below is the for loop I am using.
for post in posts:
    # collecting the title for each individual item in the RSS file
    title = post.title
    # selecting the entire item as "word"
    word = posts[counter]
    counter = counter + 1
    # we know that the date of the event is always stored after the code (category)
    date = word.category
    # pulling down the link as it is unique for each event
    link = word.link
    # formatting date for ease of use and to allow functionality to be completed
    date = date.split(' (')
    date = date[0]
    date = datetime.datetime.strptime(date, "%Y/%m/%d").date()
    if date > start_date and date < end_date:
        post_list.append(title)
        description = post.summary
        h = html2text.HTML2Text()
        h.ignore_links = True
        description = h.handle(description)
        description_list.append(description)
        link_list.append(link)
    else:
        continue
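As a sketch of the looping idea the asker is unsure about (the file name, the tag name, and the id value are taken from the question; the rest is an assumption about how it could be wired up, not a tested solution):
from bs4 import BeautifulSoup

with open("brisbane-city-council.rss") as fp:
    soup = BeautifulSoup(fp, "html.parser")

# Walk every <item> and pull its "Venue address" custom field (id 22505 in the sample above).
address_list = []
for item in soup.find_all("item"):
    field = item.find("x-trumba:customfield", id="22505")
    if field is not None:  # some items may not have a venue address
        address_list.append(field.get_text().strip())

print(address_list)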

How to do search by option to search from files? [closed]

I'm a beginner trying to build a simple library management system using Python. Users can search a book from a list of many books stored in a text file. Here is an example of what is in the text file:
Author: J.K Rowling
Title: Harry Potter and the Deathly Hollow
Keywords: xxxx
Published by: xxxx
Published year: xxxx
Author: Stephen King
Title: xxxx
Keywords: xxxx
Published by: xxxx
Published year: xxxx
Author: J.K Rowling
Title: Harry Potter and the Half Blood Prince
Keywords: xxxx
Published by: xxxx
Published year: xxxx
This is where it gets difficult for me. There is a Search by Author option for users to search for books. What I want is that when a user searches for an author (e.g. J.K Rowling), the program outputs all of the related components (Author, Title, Keywords, Published by, Published year) for every matching book (in this case, the two J.K Rowling books). This is the last piece of the program, and it's the part I'm having the most difficulty with. Please help me, and thank you all in advance.
Is it possible for you to implement the text file in the form of a JSON file instead? It could be a better alternative since you could easily access all the values depending on the key you have chosen and search through those as well.
{
    "Harry Potter and the Deathly Hollow": {
        "Author": "J.K Rowling",
        "Keywords": "xxxx",
        "Published by": "xxxx",
        "Published year": "xxxx"
    },
    "Example 2": {
        "Author": "Stephen King",
        "Keywords": "xxxx",
        "Published by": "xxxx",
        "Published year": "xxxx"
    }
}
You can iterate through the lines of the text file like this:
with open(r"path\to\text_file.txt", "r") as books:
lines = books.readlines()
for index in range(len(lines)):
line = lines[index]
Now, get the author of each book by splitting the line on the ":" character and testing whether the first part == "Author". Then take the second part of the split string and strip it of "\n" (newline) and " " characters, so that no extra spaces or anything else will mess up the comparison on either side. I would also recommend lowercasing both the author name and the search query so that capitalisation doesn't matter. Test whether this is equal to the search query:
if line.split(":")[0] == "Author" and\
line.split(":")[1].strip("\n ").lower() == search_query.lower():
Then, inside this if block, print out all the required information about this book.
Completed code:
search_query = "J.K Rowling"
with open(r"books.txt", "r") as books:
lines = books.readlines()
for index in range(len(lines)):
line = lines[index]
if line.split(":")[0] == "Author" and line.split(":")[1].strip("\n ").lower() == search_query.lower():
print(*lines[index + 1: index + 5])
Generally, a lot of programming problems can be broken down into a three-step process:
Read the input into an internal data structure
Do processing as required
Write the output
This problem seems like quite a good fit for that pattern:
In the first part, read the text file into an in-memory list of either dictionaries or objects (depending on what's expected by your course)
In the second part, search the in-memory list according to the search criteria; this will result in a shorter list containing the results
In the third part, print out the results neatly
It would be reasonable to put these into three separate functions and to attack each of them separately.
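For illustration, a minimal sketch of that three-function structure using a plain list of dictionaries (it assumes each record begins with an "Author:" line, as in the sample file; the names and layout here are an example, not a prescribed solution):
def read_books(path):
    # Read the text file into a list of dicts, one dict per book.
    books, current = [], {}
    with open(path, "r") as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            key, value = key.strip(), value.strip()
            if key == "Author" and current:  # an Author line starts a new record
                books.append(current)
                current = {}
            current[key] = value
    if current:
        books.append(current)
    return books

def search_by_author(books, author):
    # Keep only the books whose Author matches the query, ignoring case.
    return [b for b in books if b.get("Author", "").lower() == author.lower()]

def print_books(books):
    for b in books:
        for key, value in b.items():
            print("{}: {}".format(key, value))
        print()

print_books(search_by_author(read_books("books.txt"), "J.K Rowling"))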
# Read the book details from the file, e.g. books.txt
with open("books.txt", "r") as fd:
    lines = fd.read()

# Split the text on "Author". The word "Author" itself is removed by the split,
# so add it back to each piece. The full result ends up in the bookdetails list.
bookdetails = ["Author" + line for line in lines.split("Author")[1:]]

# Author name to search for
authorName = "J.K Rowling"

# Search the bookdetails list for the given author name. Splitting each match
# on newlines gives an array of details.
result = [book.splitlines() for book in bookdetails if "Author: " + authorName in book]
print(result)
If you will always receive the file in this format and you want to transform it into a dictionary:
def read_author(file):
    data = dict()
    with open(file, "r") as f:
        li = f.read().split("\n")
        for e in li:
            if ":" in e:
                data[e.split(":")[0]] = e.split(":")[1]
    return data['Author']
Note: The text file sometimes has empty lines so I check if the line contains the colon (:) before transforming it into a dict.
Then if you want a more generic method you can pass the KEY of the element you want:
def read_info(file, key):
    data = dict()
    with open(file, "r") as f:
        li = f.read().split("\n")
        for e in li:
            if ":" in e:
                data[e.split(":")[0]] = e.split(":")[1]
    return data[key]
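For example, a hypothetical call (assuming the question's sample data is saved as book.txt; note the values keep the leading space after the colon unless you strip them):
print(read_info("book.txt", "Title"))
print(read_info("book.txt", "Published year"))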
Separating out the reading, as in the following, makes the code more modular:
class BookInfo:
    def __init__(self, file) -> None:
        self.file = file
        self.data = None

    def __read_file(self):
        if self.data is None:
            with open(self.file, "r") as f:
                li = f.read().split("\n")
                self.data = dict()
                for e in li:
                    if ":" in e:
                        self.data[e.split(":")[0]] = e.split(":")[1]

    def read_author(self):
        self.__read_file()
        return self.data['Author']
Then create objects for each book:
info = BookInfo("book.txt")
print(info.read_author())

Extract data from text file using Python (or any language)

I have a text file that looks like:
First Name Bob
Last name Smith
Phone 555-555-5555
Email bob@bob.com
Date of Birth 11/02/1986
Preferred Method of Contact Text Message
Desired Appointment Date 04/29
Desired Appointment Time 10am
City Pittsburgh
Location State
IP Address x.x.x.x
User-Agent (Browser/OS) Apple Safari 14.0.3 / OS X
Referrer http://www.example.com
First Name john
Last name Smith
Phone 555-555-4444
Email john@gmail.com
Date of Birth 03/02/1955
Preferred Method of Contact Text Message
Desired Appointment Date 05/22
Desired Appointment Time 9am
City Pittsburgh
Location State
IP Address x.x.x.x
User-Agent (Browser/OS) Apple Safari 14.0.3 / OS X
Referrer http://www.example.com
.... and so on
I need to extract each entry to a csv file, so the data should look like: first name, last name, phone, email, etc. I don't even know where to start on something like this.
First of all, you'll need to open the text file in read mode.
I'd suggest using a context manager, like so:
with open('path/to/your/file.txt', 'r') as file:
    for line in file.readlines():
        # do something with the line (it is a string)
As for managing the info, you could build some intermediate structure, for example a dictionary or a list of dictionaries, and then translate that into a CSV file with the csv module.
You could, for example, split the file whenever there is a blank line, maybe like this:
with open('Downloads/test.txt', 'r') as f:
    my_list = list()  # this will be the final list
    entry = dict()    # this contains each user info as a dict
    for line in f.readlines():
        if line.strip() == "":  # if line is empty start a new dict
            my_list.append(entry)  # and append the old one to the list
            entry = dict()
        else:  # otherwise split the line and create new dict
            line_items = line.split(r' ')
            print(line_items)
            entry[line_items[0]] = line_items[1]

print(my_list)
This code won't work as-is, because your text is not formatted in a consistent way: you need to find a way to make the split between "title" and "content" (like "First Name" and "Bob") consistent. I suggest looking at regex, or fixing the txt file by making the spacing more consistent.
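As a sketch of the regex idea (the field names are taken from the sample data; this is an illustration added here, not the answer's own code):
import re

# Build one pattern that matches any known field name at the start of a line
# and captures the rest of the line as the value.
FIELDS = ("First Name", "Last name", "Phone", "Email", "Date of Birth",
          "Preferred Method of Contact", "Desired Appointment Date",
          "Desired Appointment Time", "City", "Location", "IP Address",
          "User-Agent (Browser/OS)", "Referrer")
pattern = re.compile(r"^({})\s+(.*)$".format("|".join(re.escape(f) for f in FIELDS)))

match = pattern.match("First Name Bob")
if match:
    print(match.group(1), "->", match.group(2))  # First Name -> Bob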
Assuming the data resides in a variable a:
a="""
First Name Bob
Last name Smith
Phone 555-555-5555
Email bob@bob.com
Date of Birth 11/02/1986
Preferred Method of Contact Text Message
Desired Appointment Date 04/29
Desired Appointment Time 10am
City Pittsburgh
Location State
IP Address x.x.x.x
User-Agent (Browser/OS) Apple Safari 14.0.3 / OS X
Referrer http://www.example.com
First Name john
Last name Smith
Phone 555-555-4444
Email john@gmail.com
Date of Birth 03/02/1955
Preferred Method of Contact Text Message
Desired Appointment Date 05/22
Desired Appointment Time 9am
City Pittsburgh
Location State
IP Address x.x.x.x
User-Agent (Browser/OS) Apple Safari 14.0.3 / OS X
Referrer http://www.example.com
"""
line_sep = "\n" # CHANGE ME ACCORDING TO DATA
fields = ["First Name", "Last name", "Phone",
"Email", "Date of Birth", "Preferred Method of Contact",
"Desired Appointment Date", "Desired Appointment Time",
"City", "Location", "IP Address", "User-Agent","Referrer"]
records = a.split(line_sep * 2)
all_records = []
for record in records:
splitted_record = record.split(line_sep)
one_record = {}
csv_record = []
for f in fields:
found = False
for one_field in splitted_record:
if one_field.startswith(f):
data = one_field[len(f):].strip()
one_record[f] = data
csv_record.append(data)
found = True
if not found:
csv_record.append("")
all_records.append(";".join(csv_record))
one_record will hold the record as a dictionary, and csv_record will hold it as a list of fields (ordered as in the fields variable).
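To finish it off, a possible sketch of writing those joined records to a file (the output file name is an assumption for illustration, not part of the original answer):
# Sketch only: one header row built from fields, then one line per record,
# matching the ";" separator used above.
with open("records.csv", "w") as out:
    out.write(";".join(fields) + "\n")
    for record in all_records:
        out.write(record + "\n")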
Edited to add: ignore this answer, the code from Koko Jumbo looks infinitely more sensible and actually gives you a CSV file at the end of it! It was a fun exercise though :)
Just to expand on fcagnola's code a bit.
If it's a quick and dirty one-off, and you know the data will be consistently presented, the following should work to create a list of dictionaries with the correct key/value pairing. Each line is processed by splitting it and then using the line number (reset to 0 with each new dict) to index into an array of values that mark where the boundary between key and value falls.
For example, "First Name Bob" becomes ["First", "Name", "Bob"]. The function is told that linenumber = 0, so it checks entries[linenumber] and gets the value 2, which it uses to join the key name (items 0 and 1) and then join the data (items 2 onwards). The end result is ["First Name", "Bob"], which is then added to the dictionary.
class Extract:
    def extractEntry(self, linedata, lineindex):
        # Hardcoded list! The quick and dirty part.
        # This is specific to the example data provided. The entries
        # represent the index to be used when splitting the string
        # between the key and the data
        entries = (2, 2, 1, 1, 3, 4, 3, 3, 1, 1, 2, 2, 1)
        return self.createNewEntry(linedata, entries[lineindex])

    def createNewEntry(self, linedata, dataindex):
        list_data = linedata.split()
        key = " ".join(list_data[:dataindex])
        data = " ".join(list_data[dataindex:])
        return [key, data]

with open('test.txt', 'r') as f:
    my_list = list()  # this will be the final list
    entry = dict()    # this contains each user info as a dict
    extr = Extract()  # class for splitting the entries into key/value
    x = 0
    for line in f.readlines():
        if line.strip() == "":  # if line is empty start a new dict
            my_list.append(entry)  # and append the old one to the list
            entry = dict()
            x = 0
        else:  # otherwise split the line and create new dict
            extracted_data = extr.extractEntry(line, x)
            entry[extracted_data[0]] = extracted_data[1]
            x += 1
    my_list.append(entry)

print(my_list)

Split and save text string on scrapy

I need to remove a substring from a string; the exact source text is:
Article published on: Tutorial
I want delete "Article published on:" And leave only
Tutorial
, so i can save this
i try with:
category = items[1]
category.split('Article published on:','')
and with
for p in articles:
    bodytext = p.xpath('.//text()').extract()
    joined_text = ''
    # loop in categories
    for each_text in text:
        stripped_text = each_text.strip()
        if stripped_text:
            # all the categories together
            joined_text += ' ' + stripped_text
    joined_text = joined_text.split('Article published on:','')
    items.append(joined_text)

if not is_phrase:
    title = items[0]
    category = items[1]
    print('title = ', title)
    print('category = ', category)
and this doesn't work. What am I missing?
This is the error I get with this code:
TypeError: 'str' object cannot be interpreted as an integer
You probably just forgot to assign the result:
category = category.replace('Article published on:', '')
Also it seems that you meant to use replace instead of split. The latter also works though:
category = category.split(':')[1]
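A small follow-up sketch of both options (stripping the leftover leading space is an extra detail added here, not part of the original answer):
category = 'Article published on: Tutorial'

# Option 1: replace the prefix, then strip the leading whitespace
print(category.replace('Article published on:', '').strip())  # Tutorial

# Option 2: split on the colon and take the second piece
print(category.split(':')[1].strip())  # Tutorial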
