I'm web scraping from a local archive of techcrunch.com. I'm using regex to sort through and grab every heading for each article; however, my output only ever shows the last occurrence.
def extractNews():
    selection = listbox.curselection()
    if selection == (0,):
        # Read the webpage:
        response = urlopen("file:///E:/University/IFB104/InternetArchive/Archives/Sun,%20October%201st,%202017.html")
        html = response.read()
        match = findall((r'<h2 class="post-title"><a href="(.*?)".*>(.*)</a></h2>'), str(html)) # use [-2] for position after )
        if match:
            for link, title in match:
                variable = "%s" % (title)
                print(variable)
and the current output is
Heetch raises $12 million to reboot its ridesharing service
which is the last heading of the entire webpage (i.e. the last occurrence on the page). Each article block uses the same markup for the heading:
<h2 class="post-title"><a href="...">Heetch raises $12 million to reboot its ridesharing service</a></h2>
I cannot see why it keeps resulting in this last match. I have run it through websites such as https://regex101.com/, and it tells me that I only have one match, which is not the one being output by my program. Any help would be greatly appreciated.
EDIT: If anyone is aware of a way to display each matched result SEPARATELY between different <h1></h1> tags when writing to a .html file, it would mean a lot :) I am not sure if this is right, but I think you use [-#] for the position/match being referred to?
The regex is fine, but your problem is in this loop:
if match:
    for link, title in match:
        variable = "%s" % (title)
Your variable is overwritten on each iteration. That's why you only see its value from the last iteration of the loop.
You could do something along these lines:
if match:
    variableList = []
    for link, title in match:
        variable = "%s" % (title)
        variableList.append(variable)
    print(variableList)
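Equivalently, since you only need the titles, a list comprehension keeps every match instead of overwriting a single variable (just a sketch of the same idea):
if match:
    titles = [title for link, title in match]   # one entry per matched heading
    print(titles)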
Also, generally, I would recommend against using regex to parse html (as per the famous answer).
If you haven't already familiarised yourself with BeautifulSoup, you should. Here is a non-regex solution using BeautifulSoup to dig out all h2 post-titles from your html page.
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
soup.findAll('h2', {'class':'post-title'})
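To address your EDIT about writing each result between its own <h1></h1> tags, here is a minimal sketch using the same BeautifulSoup lookup (the output filename extracted_news.html is just a placeholder; adjust it to whatever your assignment expects):

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("file:///E:/University/IFB104/InternetArchive/Archives/Sun,%20October%201st,%202017.html").read()
soup = BeautifulSoup(html, "html.parser")

with open("extracted_news.html", "w", encoding="utf-8") as out:   # placeholder output file
    for h2 in soup.findAll('h2', {'class': 'post-title'}):
        # each matched heading goes into its own <h1> element, one per line
        out.write("<h1>%s</h1>\n" % h2.get_text(strip=True))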
Related
For "extra credit" in a beginners class in Python that I am taking I wanted to extract data out of a URL using regex. I know that there are other ways I could probably do this, but my regex desperately needs work so...
Given a URL to start at, find the xth occurrence of a href on the page, and use that link to go down a level. Rinse and repeat until I have found the required link on the page at the requested depth on the site.
I am using Python 3.7 and Beautiful Soup 4.
At the beginning of the program, after all of the house-keeping is done, I have:
starting_url = 'http://blah_blah_blah_by_Joe.html'
extracted_name = re.findall('(?<=by_)([a-zA-Z0-9]+)[^.html]*', starting_url)
selected_names.append(extracted_name)
# Just for testing purposes
print(selected_names)   # output: [['Joe']]
Hmm, a bit odd, I didn't expect a nested list, but I know how to flatten a list, so OK. Let's go on.
I work my way through a couple of loops, opening each url for the next level down by using:
html = urllib.request.urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, 'html.parser')
tags = soup('a')
Continue processing and, in the loop where the program should have found the href I want:
# Testing to check I have found the correct href
print(desired_link)
# <a href="http://blah_blah_blah_by_Mary.html">blah blah</a>
type(desired_link)
# bs4.element.Tag
Correct link, but a "type" new to me and not something I can use re.findall on. So more research and I have found:
for link in soup.find_all('a'):
    tags = link.get('href')
    type(tags)   # str
    print(tags)

# output:
# http://blah_blah_blah_by_George.html
# http://blah_blah_blah_by_Bill.html
# http://blah_blah_blah_by_Mary.html
# etc.
Right type, but when I look at what was printed, I think what I am looking at is maybe just one long string? And I need a way to assign just the third href in that output to a variable that I can use in re.findall('regex expression', desired_link).
Time to ask for help, I think.
And, while we are at it, any ideas about why I get the nested list the first time I used re.findall with the regex?
Please let me know how to improve this question so it is clearer what I've done and what I'm looking for (I KNOW you guys will, without me even asking).
You've printed every link on the page, but each time through the loop tags contains only one of them (you can print len(tags) to verify this easily; it will be the length of a single URL each time, not of one long string).
Also I suggest replacing [a-zA-Z0-9]+ with \w+ - it will catch letters, numbers and underscores and is much cleaner.
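If the goal is "the xth href on the page", one approach is to collect the hrefs into a list first and index it afterwards; a rough sketch (the index 2 and the example URLs are made up to match your description):

import re
import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen(url, context=ctx).read()      # url and ctx as in your existing code
soup = BeautifulSoup(html, 'html.parser')

hrefs = [link.get('href') for link in soup.find_all('a')]   # every href, in page order
desired_link = hrefs[2]                                     # e.g. the third href (0-based)
extracted_name = re.findall(r'(?<=by_)(\w+)(?=\.html)', desired_link)
print(extracted_name)                                       # ['Mary'] for the example URLs

As for the nested list: re.findall itself returns a list, so appending that whole list to selected_names gives you a list inside a list. Append extracted_name[0] (or use extend) if you want it flat.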
I'm trying to scrape data from a table in a website. I can pull the data in, in the form of source code. But in my program, I get the error: TypeError: replace_with() takes exactly 2 arguments (3 given)
import urllib2
import bs4
import re

page_content = ""
for i in range(1,11):
    page = urllib2.urlopen("http://b3.caspio.com/dp.asp?appSession=627360156923294&RecordID=&PageID=2&PrevPageID=2&CPIpage="+str(i)+"&CPISortType=&CPIorderBy=")
    page_content += page.read()

soup = bs4.BeautifulSoup(page_content)
tables = soup.find_all('tr')

file = open('crime_data.csv', 'w+')
for i in tables:
    i = i.replace_with('</td>' , (',')) # this is where I get the error
    i = re.sub(r'<.?td[^>]*>','',i)
    file.write(i + '\n')
Why is it giving me that error?
Also, in essence, I'm trying take the data from the table and put it into a csv file. Any and all help would be greatly appreciated!
That replace_with function does not do what it appears you want it to. The docs state that PageElement.replace_with() removes a tag or string from the tree, and replaces it with the tag or string of your choice.
From your code it looks more like you want to replace the whole end tag </td> with a , in an effort to get some sort of comma-separated data.
Perhaps you should instead just use the get_text method on your <td> elements, and format them from there:
for i in tables:
    file.write(i.get_text(',').strip() + '\n')
file.close() ####### <----- VERY IMPORTANT TO CLOSE FILES
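For instance, with a made-up row, get_text(',') joins the cell texts with commas:

import bs4

row = bs4.BeautifulSoup('<tr><td>2013</td><td>Theft</td><td>1204</td></tr>', 'html.parser')
print(row.get_text(','))   # 2013,Theft,1204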
Note
I tested your code out and you are not really scraping what you are after. I played around with it and came up with this:
import urllib2
import bs4

def scrape_crimes(html, write_headers):
    soup = bs4.BeautifulSoup(html) # make the soup
    table = soup.find_all('table', class_=('cbResultSetTable',)) # search for the exact table you want, there are multiple nested tables on the pages you are scraping
    if len(table) > 0: # if the table is found
        table = table[0] # set the table to the first result
    else:
        return # no table found, no use scraping
    with open('crime_data.csv', 'a') as f: # opens file to append content
        trs = table.find_all('tr') # get all the rows in the table
        if write_headers: # if we request that headers are written
            for th in trs[0].find_all('th'): # write each header followed by a comma
                f.write(th.get_text(strip=True).encode('utf-8')+',') # ensure data is writable by calling encode
            f.write('\n') # write a newline
        for tr in trs: # for each table row in the table
            tds = tr.find_all('td') # get all the td elements
            if len(tds) > 0: # if there are td elements (not true for header rows)
                for td in tds: # for each td element
                    f.write(td.get_text(strip=True).encode('utf-8')+',') # add the data followed by a comma
                f.write('\n') # finish the row off with a newline

open('crime_data.csv','w').close() # clear the file before running
for i in range(1,11):
    page = urllib2.urlopen("http://b3.caspio.com/dp.asp?appSession=627360156923294&RecordID=&PageID=2&PrevPageID=2&CPIpage="+str(i)+"&CPISortType=&CPIorderBy=")
    scrape_crimes(page.read(), i==1) # the work is moved into a function; the second argument is only true the first time
    # this ensures that you will get headers only at the top of your output file
I removed the use of the re library because, in general, regex and HTML do not play nicely together; the short explanation being: HTML is not a regular language.
I also switched from using the coding pattern:
file = open('file_name','w')
# do stuff
file.close()
to this preferred pattern:
with open('file_name','w') as f:
# do stuff
In the first example it is common to forget to close the file, which you did in fact forget in your provided code. The second pattern handles the close for you, so no worries there. Also, it is not good practice to name your variables after built-in Python names such as file.
I changed your script's approach from combining all the pages' HTML into one big string to scraping each page one by one, because accumulating everything first is not a good idea: you could run into memory issues if you were doing this with large pages. Instead, it is usually better to handle the data in chunks.
The next thing I did was look at the HTML of the page you were scraping. You were pulling all <tr> elements, but had you closely inspected the page, you would have seen that the table you are after is actually contained inside a <tr>, which is why you were getting one big nasty block of text as a "result". Using bs4's optional class_ argument to specify the class to look for on the table element leads to the data you are after.
The next thing I noticed was that the table headers would get pulled for every page, sprinkling your results with this redundant information. You would only want to pull this info the first time, so I added some logic for that.
I switched to using the .get_text method instead of the regex/replace_with combo you had, for the reasons explained above. The get_text method returns unicode, however, so I added the call to .encode('utf-8') to ensure the data can be written. I also specified the strip=True argument to get rid of any pesky whitespace characters in the data. The reasoning behind this: you load the whole bs4 library, so why not use it? The good people who wrote that library spent a lot of time taking care of parsing the text so you don't have to waste time doing it.
Hope this was helpful! Happy scraping!
I am currently working on a project in which I want to allow regex search in/on a huge set of HTML files.
After first pinpointing the files of my interest I now want to highlight the found keyword!
Using BeautifulSoup I can determine the node in which my keyword is found. One thing I can do is change the color of the whole parent.
However, I would also like to add my own <span> tags around just the keyword(s) I found.
Determining the position and such is no big deal using the find() functions provided by BeautifulSoup. But adding my own tags around plain text seems to be impossible?
# match = keyword found by another regex
# node = the node I found using the soup.find(text=myRE)
node.parent.setString(node.replace(match, "<myspan>"+match+"</myspan>"))
This way I only add mere text and not a proper Tag, since the document is not re-parsed afterwards, and re-parsing is exactly what I hope to avoid!
I hope my problem became a little clear :)
Here's a simple example showing one way to do it:
import re
from bs4 import BeautifulSoup as Soup
html = '''
<html><body><p>This is a paragraph</p></body></html>
'''
(1) store the text and empty the tag
soup = Soup(html)
text = soup.p.string
soup.p.clear()
print soup
(2) get start and end positions of the words to be boldened (apologies for my English)
match = re.search(r'\ba\b', text)
start, end = match.start(), match.end()
(3) split the text and add the first part
soup.p.append(text[:start])
print soup
(4) create a tag, add the relevant text to it and append it to the parent
b = soup.new_tag('b')
b.append(text[start:end])
soup.p.append(b)
print soup
(5) append the rest of the text
soup.p.append(text[end:])
print soup
here is the output from above:
<html><body><p></p></body></html>
<html><body><p>This is </p></body></html>
<html><body><p>This is <b>a</b></p></body></html>
<html><body><p>This is <b>a</b> paragraph</p></body></html>
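Applied to your <span> highlighting case, the same steps look roughly like this; the keyword regex and the class name highlight are placeholders for whatever your search actually uses:

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>This is a paragraph</p>', 'html.parser')
node = soup.find(text=re.compile(r'\bparagraph\b'))        # NavigableString containing the keyword
match = re.search(r'\bparagraph\b', node)

parent = node.parent
node.extract()                                             # take the original text node out of the tree
parent.append(soup.new_string(node[:match.start()]))       # text before the keyword
span = soup.new_tag('span', **{'class': 'highlight'})      # placeholder class name
span.string = node[match.start():match.end()]
parent.append(span)
parent.append(soup.new_string(node[match.end():]))         # text after the keyword

print(soup)   # <p>This is a <span class="highlight">paragraph</span></p>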
If you add the text...
my_tag = node.parent.setString(node.replace(match, "<myspan>"+match+"</myspan>"))
...and pass it through BeautifulSoup once more
new_soup = BeautifulSoup(my_tag)
it should be classified as a BS tag object and available for parsing.
You could apply these changes to the original mass of text and run it through as a whole, to avoid repetition.
EDIT:
From the docs:
# Here is a more complex example that replaces one tag with another:
from BeautifulSoup import BeautifulSoup, Tag
soup = BeautifulSoup("<b>Argh!<a>Foo</a></b><i>Blah!</i>")
tag = Tag(soup, "newTag", [("id", 1)])
tag.insert(0, "Hooray!")
soup.a.replaceWith(tag)
print soup
# <b>Argh!<newTag id="1">Hooray!</newTag></b><i>Blah!</i>
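If you are on bs4 rather than the old BeautifulSoup 3, a rough equivalent of that docs snippet would be (same idea, only the API spellings change):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<b>Argh!<a>Foo</a></b><i>Blah!</i>", "html.parser")
tag = soup.new_tag("newTag", id="1")   # new_tag() replaces the old Tag(soup, ...) constructor
tag.insert(0, "Hooray!")
soup.a.replace_with(tag)               # replace_with() is the bs4 spelling of replaceWith()
print(soup)                            # the <a>Foo</a> is swapped for <newTag id="1">Hooray!</newTag>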
I'm writing a webscraper and I have a table full of links to .pdf files that I want to download, save, and later analyze. I was using beautiful soup and I had soup find all the links. They are normally beautiful soup tag objects, but I've turned them into strings. The string is actually a bunch of junk with the link text buried in the middle. I want to cut out that junk and just leave the link. Then I will turn these into a list and have python download them later. (My plan is for python to keep a list of the pdf link names to keep track of what it's downloaded and then it can name the files according to those link names or a portion thereof).
But the .pdfs come in variable name-lengths, e.g.:
I_am_the_first_file.pdf
And_I_am_the_seond_file.pdf
and as they exist in the table, they have a bunch of junk text:
a href = ://blah/blah/blah/I_am_the_first_file.pdf[plus other annotation stuff that gets into my string accidentally]
a href = ://blah/blah/blah/And_I_am_the_seond_file.pdf[plus other annotation stuff that gets into my string accidentally]
So I want to cut ("slice") the front part and the last part off of the string and just leave the string that points to my url (so what follows is the desired output for my program):
://blah/blah/blah/I_am_the_first_file.pdf
://blah/blah/blah/And_I_am_the_seond_file.pdf
As you can see, though, the second file has more characters in the string than the first. So I can't do:
string[9:40]
or whatever because that would work for the first file but not for the second.
So I'm trying to come up with a variable for the end of the string slice, like so:
string[9:x]
wherein x is the location in the string where '.pdf' ends (and my thought was to use the string.index('.pdf') function to do this).
But this fails, because I get an error when trying to use a variable to do this:
("TypeError: 'int' object is unsubscriptable")
Probably there's an easy answer and a better way to do this other than messing with strings, but you guys are way smarter than me and I figured you'd know straight off.
Here's my full code so far:
import urllib, urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)
table_with_my_pdf_links = soup.find('table', id='searchResults')
# "search results" is just what the table I was looking for happened to be called.
for pdf_link in table_with_my_pdf_links.findAll('a'):
    # this says find all the links and loop over them
    pdf_link_string = str(pdf_link)
    # turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)
    if 'pdf' in pdf_link_string:
        # some links in the table are .html and I don't want those, I just want the pdfs.
        end_of_link = pdf_link_string.index('.pdf')
        # I want to know where the .pdf file extension ends because that's the end of the link, so I'll slice backward from there
        just_the_link = end_of_link[9:end_of_link]
        # here, the first 9 characters are junk "a href = yadda yadda yadda". So I'm setting a variable that starts just after that junk and goes to the .pdf (I realize that I will actually have to do .pdf + 3 or something to actually get to the end of the string, but this makes it easier for now).
        print just_the_link
        # I debug by print statement because I'm an amateur
The line (second from the bottom) that reads:
just_the_link = end_of_link[9:end_of_link]
returns an error (TypeError: 'int' object is unsubscriptable)
also, the ":" should be hyper text transfer protocol colon, but it won't let me post that b/c newbs can't post more than 2 links so I took them out.
just_the_link = end_of_link[9:end_of_link]
This is your problem, just like the error message says. end_of_link is an integer -- the index of ".pdf" in pdf_link_string, which you calculated in the preceding line. So naturally you can't slice it. You want to slice pdf_link_string.
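In other words, slice the string using that integer as the end point, something like this (the 9 is the offset from your own code, and the + 4 just keeps the ".pdf" extension inside the slice):

end_of_link = pdf_link_string.index('.pdf')
just_the_link = pdf_link_string[9:end_of_link + 4]   # slice the string, not the int; +4 keeps ".pdf"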
Sounds like a job for regular expressions:
import urllib, urllib2, re
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)
table_with_my_pdf_links = soup.find('table', id='searchResults')
# "search results" is just what the table I was looking for happened to be called.
for pdf_link in table_with_my_pdf_links.findAll('a'):
    # this says find all the links and loop over them
    pdf_link_string = str(pdf_link)
    # turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)
    if 'pdf' in pdf_link_string:
        pdfURLPattern = re.compile(r"://(\w+/)+\S+\.pdf")
        pdfURLMatch = pdfURLPattern.search(pdf_link_string)
        # If there is no match then search() returns None; otherwise group(0) returns the whole URL of interest.
        if pdfURLMatch:
            print pdfURLMatch.group(0)
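For example, run against one of the (anonymised) strings from the question, the pattern pulls out just the URL part:

import re

pdfURLPattern = re.compile(r"://(\w+/)+\S+\.pdf")
line = 'a href = ://blah/blah/blah/I_am_the_first_file.pdf[plus other annotation stuff]'
match = pdfURLPattern.search(line)
if match:
    print(match.group(0))   # ://blah/blah/blah/I_am_the_first_file.pdf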
I am using Beautiful Soup to extract 'content' from web pages. I know some people have asked this question before and they were all pointed to Beautiful Soup and that's how I got started with it.
I was able to successfully get most of the content but I am running into some challenges with tags that are part of the content. (I am starting off with a basic strategy of: if there are more than x-chars in a node then it is content). Let's take the html code below as an example:
<div id="abc">
some long text goes <a href="...">here</a> and hopefully it
will get picked up by the parser as content
</div>
results = soup.findAll(text=lambda(x): len(x) > 20)
When I use the above code to get at the long text, it breaks at the <a> tags (the identified text will start from 'and hopefully..'). So I tried to replace the tag with plain text as follows:
anchors = soup.findAll('a')
for a in anchors:
    a.replaceWith('plain text')
The above does not work because Beautiful Soup inserts the string as a NavigableString and that causes the same problem when I use findAll with the len(x) > 20. I can use regular expressions to parse the html as plain text first, clear out all the unwanted tags and then call Beautiful Soup. But I would like to avoid processing the same content twice -- I am trying to parse these pages so I can show a snippet of content for a given link (very much like Facebook Share) -- and if everything is done with Beautiful Soup, I presume it will be faster.
So my question: is there a way to 'clear tags' and replace them with 'plain text' using Beautiful Soup. If not, what will be best way to do so?
Thanks for your suggestions!
Update: Alex's code worked very well for the sample example. I also tried various edge cases and they all worked fine (with the modification below). So I gave it a shot on a real-life website and I ran into issues that puzzle me.
import urllib
from BeautifulSoup import BeautifulSoup

page = urllib.urlopen('http://www.engadget.com/2010/01/12/kingston-ssdnow-v-dips-to-30gb-size-lower-price/')
soup = BeautifulSoup(page)

anchors = soup.findAll('a')
i = 0
for a in anchors:
    print str(i) + ":" + str(a)

for a in anchors:
    if (a.string is None): a.string = ''
    if (a.previousSibling is None and a.nextSibling is None):
        a.previousSibling = a.string
    elif (a.previousSibling is None and a.nextSibling is not None):
        a.nextSibling.replaceWith(a.string + a.nextSibling)
    elif (a.previousSibling is not None and a.nextSibling is None):
        a.previousSibling.replaceWith(a.previousSibling + a.string)
    else:
        a.previousSibling.replaceWith(a.previousSibling + a.string + a.nextSibling)
        a.nextSibling.extract()
    i = i+1
When I run the above code, I get the following error:
0:<a href="http://www.switched.com/category/ces-2010">Stay up to date with Switched's CES 2010 coverage</a>

Traceback (most recent call last):
  File "parselink.py", line 44, in <module>
    a.previousSibling.replaceWith(a.previousSibling + a.string + a.nextSibling)
TypeError: unsupported operand type(s) for +: 'Tag' and 'NavigableString'
When I look at the HTML code, 'Stay up to date..' does not have any previous sibling (I did not know how previousSibling worked until I saw Alex's code, and based on my testing it looks like it looks for 'text' before the tag). So, if there is no previous sibling, I am surprised that it is not going through the if logic of a.previousSibling is None and a.nextSibling is None.
Could you please let me know what I am doing wrong?
-ecognium
An approach that works for your specific example is:
from BeautifulSoup import BeautifulSoup
ht = '''
<div id="abc">
some long text goes <a href="...">here</a> and hopefully it
will get picked up by the parser as content
</div>
'''
soup = BeautifulSoup(ht)
anchors = soup.findAll('a')
for a in anchors:
    a.previousSibling.replaceWith(a.previousSibling + a.string)
results = soup.findAll(text=lambda(x): len(x) > 20)
print results
which emits
$ python bs.py
[u'\n some long text goes here ', u' and hopefully it \n will get picked up by the parser as content\n']
Of course, you'll probably need to take a bit more care, i.e., what if there's no a.string, or if a.previousSibling is None? You'll need suitable if statements to take care of such corner cases. But I hope this general idea can help you. (In fact you may want to also merge the next sibling if it's a string. I'm not sure how that plays with your len(x) > 20 heuristic, but say, for example, that you have two 9-character strings with an <a> containing a 5-character string in the middle; perhaps you'd want to pick up the lot as a "23-character string"? I can't tell, because I don't understand the motivation for your heuristic.)
I imagine that besides <a> tags you'll also want to remove others, such as <b> or <strong>, maybe <p> and/or <br>, etc...? I guess this, too, depends on what the actual idea behind your heuristics is!
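A sketch of that "bit more care", merging each neighbour only when it is actually a string (the example HTML and the href are placeholders; this is one way to handle the corner cases, not a drop-in fix):

from BeautifulSoup import BeautifulSoup, NavigableString

def flatten_anchor(a):
    # replace an <a> tag with its text, merging it with string siblings where possible
    text = a.string or u''
    prev, nxt = a.previousSibling, a.nextSibling
    if isinstance(prev, NavigableString):
        text = prev + text
        prev.extract()
    if isinstance(nxt, NavigableString):
        text = text + nxt
        nxt.extract()
    a.replaceWith(text)

soup = BeautifulSoup('<div id="abc"> some long text goes <a href="...">here</a> and hopefully it will get picked up </div>')
for a in soup.findAll('a'):
    flatten_anchor(a)
print soup
# <div id="abc"> some long text goes here and hopefully it will get picked up </div>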
When I tried to flatten tags in the document this way, so that a tag's entire content would be pulled up into its parent node in place (I wanted to reduce the content of a p tag with all sub-paragraphs, lists, div and span elements, etc. inside, but get rid of the style and font tags and some horrible word-to-html generator remnants), I found it rather complicated to do with BeautifulSoup itself, since extract() also removes the content and replaceWith() unfortunately doesn't accept None as an argument. After some wild recursion experiments, I finally decided to use regular expressions either before or after processing the document with BeautifulSoup, with the following method:
import re

def flatten_tags(s, tags):
    pattern = re.compile(r"<(( )*|/?)(%s)(([^<>]*=\\\".*\\\")*|[^<>]*)/?>" % (isinstance(tags, basestring) and tags or "|".join(tags)))
    return pattern.sub("", s)
The tags argument is either a single tag or a list of tags to be flattened.
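For example (the input string here is made up):

html = '<p>Some <font face="Arial">ugly</font> <span style="color:red">generated</span> markup</p>'
print flatten_tags(html, ['font', 'span'])
# <p>Some ugly generated markup</p>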