Get all URLs on a page in Python

I'm working on something that requires me to get all the URLs on a page. It seems to work on most websites I've tested, for example microsoft.com, but it only returns three from google.com. Here is the relevant source code:
import urllib
import time
import re
fwcURL = "http://www.microsoft.com" #URL to read
mylines = urllib.urlopen(fwcURL).readlines()
print "Found URLs:"
time.sleep(1) #Pause execution for a bit
for item in mylines:
    if "http://" in item.lower(): # For http
        print item[item.index("http://"):].split("'")[0].split('"')[0] # Remove ' and " from the end, for example in href=
    if "https://" in item.lower(): # For https
        print item[item.index("https://"):].split("'")[0].split('"')[0] # Ditto
If my code can be improved, or if there is a better way to do this, please respond. Thanks in advance!
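One likely reason for the google.com result, incidentally: the code above finds at most one URL per line, and Google serves its HTML on very few long lines. A rough sketch of a string-based improvement using re.findall, which catches every match on a line (Python 3 here, so urllib.request replaces the question's urllib; an HTML parser is still the better fix, as the answers below explain):
import re
import urllib.request

html = urllib.request.urlopen("http://www.microsoft.com").read().decode("utf-8", "replace")
for url in re.findall(r'https?://[^\s\'"<>]+', html, re.IGNORECASE):
    print(url) # every URL on the page, not just the first one per line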

Try using Mechanize, BeautifulSoup, or lxml.
With BeautifulSoup, you can get at all the HTML/XML content very easily.
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("some_url")
soup = BeautifulSoup(page.read())
links = soup.findAll("a")
for link in links:
    print link.get("href") # .get avoids a KeyError on anchors without an href
BeautifulSoup is very easy to learn and understand.

First off, HTML is not a regular language, and no amount of simple string manipulation like that is going to work on all pages. You need a real HTML parser. I'd recommend lxml. Then it's just a matter of recursing through the tree and finding the elements you want.
Second, some pages may be dynamic, so you won't find all of the contents in the html source. Google makes heavy use of javascript and AJAX (notice how it displays results without reloading the page).
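A minimal sketch of that tree-walking approach with lxml (iterlinks() walks the parsed tree for you; microsoft.com here is just a stand-in URL):
import lxml.html

doc = lxml.html.parse('http://www.microsoft.com').getroot()
doc.make_links_absolute() # resolve relative hrefs against the base URL
for element, attribute, link, pos in doc.iterlinks():
    if attribute == 'href': # keep anchor/link targets, skip src= and friends
        print(link)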

I would use lxml and do:
import lxml.html
page = lxml.html.parse('http://www.microsoft.com').getroot()
anchors = page.findall('.//a') # './/a' searches the whole tree, not just the root's direct children
It's worth noting that if links are dynamically generated (via JS or similar), then you won't get those short of automating a browser in some fashion.
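For completeness, a rough sketch of that browser-automation route (Selenium is my assumption here; any driver-based tool works similarly), which sees links that only exist after the JavaScript has run:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox() # assumes geckodriver is on PATH
driver.get('http://www.google.com')
for a in driver.find_elements(By.TAG_NAME, 'a'):
    print(a.get_attribute('href')) # includes anchors injected by JavaScript
driver.quit()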

Related

Trying to use BeautifulSoup to learn Python

I am trying to use BeautifulSoup to scrape basically any website just to learn, and when trying, I never end up getting all instances of each parameter set. Below is my code; please let me know what I'm doing wrong:
import requests
url = "https://www.newegg.com/core-i7-8th-gen-intel-core-i7-8700k/p/N82E16819117827?Item=N82E16819117827"
result = requests.get(url)
doc = BeautifulSoup(result.text, "html.parser")
price = doc.find_all(string="$")
print(price)
#### WHY DOES BEAUTIFULSOUP NOT RETURN ALL INSTANCES!?!?!?
Per the URL provided in the question, I could see that the price (with the $ symbol) is available under the price-current class name.
So I used find_all() to get all the prices.
Use the code below:
import requests
from bs4 import BeautifulSoup
url = "https://www.newegg.com/core-i7-8th-gen-intel-core-i7-8700k/p/N82E16819117827?Item=N82E16819117827"
result = requests.get(url)
doc = BeautifulSoup(result.text, "html.parser")
price = doc.find_all(attrs={'class': "price-current"})
for p in price:
    print(p.text)
Output:
$399.99
$400.33
$403.09
$412.00
I'm not sure what you mean by "all instances of each parameter set." One reason that your code block may not be working, however, is that you forgot to import the BeautifulSoup library.
from bs4 import BeautifulSoup
Also, it's not best practice to scrape live sites. I would highly recommend toscrape.com instead; it's a great resource for newbies, and I still use it to this day to hone and expand my scraping skills.
Lastly, BeautifulSoup works best when you have a decent grasp of HTML and CSS, especially the selector syntax. If you don't know those two, you will struggle a little bit no matter how much Python you know. The BeautifulSoup documentation can give you some insight on how to navigate the HTML and CSS if you are not well versed in those.
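To illustrate the selector point, here is a small self-contained sketch using BeautifulSoup's select() with a CSS selector (the inline HTML is made up; the class name mirrors the price-current class used in the answer above):
from bs4 import BeautifulSoup

html = '<ul><li class="price-current">$399.99</li><li class="price-current">$400.33</li></ul>'
soup = BeautifulSoup(html, "html.parser")
for tag in soup.select("li.price-current"): # CSS selector: <li> tags with that class
    print(tag.text)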

Python: Reading a webpage and extracting text from that page

I'm writing in Python to try and get exchange rates from the website:
xe.com/currency/converter (I can't post another link, sorry - I'm at limit)
I want to be able to get rates from this file, for example, for the conversion between GBP and USD:
Therefore, I would fetch the URL "http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD", get the printed value "1.56371 USD" (the rate at the time I was writing this message), and assign that value as a float to a variable, like rate_usd.
At the moment, I was thinking about using the BeautifulSoup module and urllib.request module, and request the url ("http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD") and search through it using BeautifulSoup. At the moment, I'm at this stage in the coding:
import urllib.request
from bs4 import BeautifulSoup
def rates_fetcher(url):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html)
    # code to search through soup and fetch the converted value
    # e.g. 1.56371
    # How would I extract this value?
    # I have inspected the page element and found the value I want to be in the class:
    # <td width="47%" align="left" class="rightCol">1.56371
    # I'm thinking about searching through the class: class="rightCol"
    # and extracting the value that way, but how?
url1 = "http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD"
rates_fetcher(url1)
Any help would be much appreciated, and thank you whoever took the time to read this.
p.s. Sorry in advance if I have made any typos, I'm kinda' in a hurry :s
It sounds like you've got the right idea.
import urllib.request
from bs4 import BeautifulSoup

def rates_fetcher(url):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html)
    return [item.text for item in soup.find_all(class_='rightCol')]
That should do it... This will return a list of the text inside any tag with the class 'rightCol'.
If you haven't read through the Beautiful Soup documentation, you really ought to. It's straightforward and very useful.
Try pyquery. It's a lot better than Soup.
PS: For urllib, try Requests: HTTP for Humans.
PS2: Actually, I use Node and jQuery/jQuery-like tools for HTML scraping these days.
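A minimal sketch of the pyquery suggestion (assuming pyquery and requests are installed; the td.rightCol selector comes from the asker's own inspection above):
import requests
from pyquery import PyQuery

html = requests.get('http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD').text
d = PyQuery(html)
for cell in d('td.rightCol').items(): # jQuery-style CSS selector
    print(cell.text())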

How to copy all the text from a URL (like [Ctrl+A][Ctrl+C] in a web browser) in Python?

I know there is an easy way to copy all the source of a URL, but that's not my task. I need to save exactly the visible text (just as a web browser user would copy it) to a *.txt file.
Is it unavoidable to parse the HTML source for this, or is there a better way?
I think it is impossible without parsing at all. You could use HTMLParser (http://docs.python.org/2/library/htmlparser.html) and keep only the character data, but you will most likely get many more elements than you want.
Getting exactly the same result as [Ctrl+C] would be very difficult without full parsing, because of things like style="display: none;" which hide text; matching that behaviour would effectively require parsing the HTML, JavaScript, and CSS of both the document and its resource files.
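A rough sketch of that HTMLParser idea (written for Python 3, where the module lives in html.parser; the sample HTML is made up):
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

parser = TextExtractor()
parser.feed('<p>Hello <b>world</b></p><script>var x = 1;</script>')
print(' '.join(parser.chunks)) # note: the script body comes through as data too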
Parsing is required. I don't know if there's a library method. A simple regex:
import re
text = re.sub(r"<[^>]+>", " ", html)
This needs many improvements, but it's a starting point.
With python, the BeautifulSoup module is great for parsing HTML, and well worth a look. To get the text from a webpage, it's just a case of:
#!/usr/bin/env python
import urllib2
from bs4 import BeautifulSoup
url = 'http://python.org'
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html)
# you can refine this even further if needed... ie. soup.body.div.get_text()
text = soup.body.get_text()
print text

Web scraping data using Python?

I just started learning web scraping using Python. However, I've already run into some problems.
My goal is to web scrape the names of the different tuna species from fishbase.org (http://www.fishbase.org/ComNames/CommonNameSearchList.php?CommonName=salmon)
The problem: I'm unable to extract all of the species names.
This is what I have so far:
import urllib2
from bs4 import BeautifulSoup
fish_url = 'http://www.fishbase.org/ComNames/CommonNameSearchList.php?CommonName=Tuna'
page = urllib2.urlopen(fish_url)
soup = BeautifulSoup(html_doc)
spans = soup.find_all(
From here, I don't know how I would go about extracting the species names. I've thought of using regex (i.e. soup.find_all("a", text=re.compile("\d+\s+\d+"))) to capture the text inside the tags...
Any input will be highly appreciated!
You might as well take advantage of the fact that all the scientific names (and only scientific names) are in <i/> tags:
scientific_names = [it.text for it in soup.table.find_all('i')]
Using BS and RegEx are two different approaches to parsing a webpage. The former exists so you don't have to bother so much with the latter.
You should read up on what BS actually does, it seems like you're underestimating its utility.
What jozek suggests is the correct approach, but I couldn't get his snippet to work (though that may be because I am not running the BeautifulSoup 4 beta). What worked for me was:
import urllib2
from BeautifulSoup import BeautifulSoup
fish_url = 'http://www.fishbase.org/ComNames/CommonNameSearchList.php?CommonName=Tuna'
page = urllib2.urlopen(fish_url)
soup = BeautifulSoup(page)
scientific_names = [it.text for it in soup.table.findAll('i')]
print scientific_names
Looking at the web page, I'm not sure exactly what information you want to extract. However, note that you can easily get the text in a tag using the text attribute:
>>> from bs4 import BeautifulSoup
>>> html = '<a>some text</a>'
>>> soup = BeautifulSoup(html)
>>> [tag.text for tag in soup.find_all('a')]
[u'some text']
Thanks everyone...I was able to solve the problem I was having with this code:
import urllib2
from bs4 import BeautifulSoup
fish_url = 'http://www.fishbase.org/ComNames/CommonNameSearchList.php?CommonName=Salmon'
page = urllib2.urlopen(fish_url)
html_doc = page.read()
soup = BeautifulSoup(html_doc)
scientific_names = [it.text for it in soup.table.find_all('i')]
for item in scientific_names:
    print item
If you want a long-term solution, try Scrapy. It is quite simple and does a lot of the work for you. It is very customizable and extensible. You will extract the URLs you need using XPath, which is more pleasant and reliable. Scrapy still allows you to use re if you need to.
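A minimal Scrapy spider sketch for this task (the XPath targets the <i> tags already identified in the answers above; the spider and field names are my own):
import scrapy

class FishSpider(scrapy.Spider):
    name = 'fishbase'
    start_urls = ['http://www.fishbase.org/ComNames/CommonNameSearchList.php?CommonName=Salmon']

    def parse(self, response):
        # scientific names are in <i> tags inside the results table
        for name in response.xpath('//table//i/text()').getall():
            yield {'scientific_name': name}
Run it with scrapy runspider fish_spider.py -o names.json and Scrapy handles the fetching, throttling, and output for you.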

Trouble parsing HTML using BeautifulSoup

I'm trying to use BeautifulSoup to parse some HTML in Python. Specifically, I'm trying to create two arrays of soup objects: one for the dates of postings on a website, and one for the postings themselves. However, when I use findAll on the div class that matches the postings, only the initial tag is returned, not the text inside the tag. On the other hand, my code works just fine for the dates. What is going on??
# store all texts of posts
texts = soup.findAll("div", {"class":"quote"})
# store all dates of posts
dates = soup.findAll("div", {"class":"datetab"})
The first line above returns only
<div class="quote">
which is not what I want. The second line returns
<div class="datetab">Feb<span>2</span></div>
which IS what I want (pre-refining).
I have no idea what I'm doing wrong. Here is the website I'm trying to parse. This is for homework, and I'm really really desperate.
Which version of BeautifulSoup are you using? Version 3.1.0 performs significantly worse with real-world HTML (read: invalid HTML) than 3.0.8. This code works with 3.0.8:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://harvardfml.com/")
soup = BeautifulSoup(page)
for incident in soup.findAll('span', { "class" : "quote" }):
    print incident.contents
That site is powered by Tumblr. Tumblr has an API.
There's a Python client for the Tumblr API that you can use to read posts.
from tumblr import Api
api = Api('harvardfml.com')
freq = {}
posts = api.read()
for post in posts:
    # do something with each post here
As for your failing findAll, it is hard to see what is wrong without the actual source code of your program.
