I am exploring web scraping with Python. I have the link of a Google search results page. I used urllib to fetch that page, and from the parsed page I am extracting all the anchor tags with the help of the Beautiful Soup library. So now I have lots of links, and among those I want to pick only the links that match my required pattern.
For example, here is one of the many links that got parsed. I want to narrow the results down to links that look like this:
/url?q=http://avadl.uploadt.com/DL4/Film/&sa=U&ved=0ahUKEwiYwOKe1r7hAhWUf30KHcHUBkMQFggUMAA&usg=AOvVaw39cIJ0T8_CAQMY8EkSWZJl
And from each such pick I need to extract only this part:
http://avadl.uploadt.com/DL4/Film/
I tried both of these:
possible_websites.append(re.findall('/url?q=(\S+)',links))
possible_websites.append(re.findall('/url?q=(\S+^&)',links))
Here's my code
soup = BeautifulSoup(webpage, 'html.parser')
tags = soup('a')
possible_websites=[]
for tag in tags:
    links = tag.get('href', None)
    possible_websites.append(re.findall('/url?q=(\S+)', links))
I want to use a regular expression to extract the required part of the text. I am using the Beautiful Soup module to extract the HTML data. In short, this is mostly a regular expression problem.
It’s not regex, but I’d use urllib:
from urllib.parse import parse_qs, urlparse
url = urlparse('/url?q=http://avadl.uploadt.com/DL4/Film/&sa=U&ved=0ahUKEwiYwOKe1r7hAhWUf30KHcHUBkMQFggUMAA&usg=AOvVaw39cIJ0T8_CAQMY8EkSWZJl')
qs = parse_qs(url.query)
print(qs['q'][0])
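Applied to the loop from the question, that might look something like this (a minimal sketch, assuming webpage holds the fetched HTML as in the question):

from urllib.parse import parse_qs, urlparse
from bs4 import BeautifulSoup

soup = BeautifulSoup(webpage, 'html.parser')
possible_websites = []
for tag in soup('a'):
    href = tag.get('href', '')
    if href.startswith('/url?'):
        qs = parse_qs(urlparse(href).query)
        if 'q' in qs:
            # keep only the target URL, e.g. http://avadl.uploadt.com/DL4/Film/
            possible_websites.append(qs['q'][0])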
If you really need a regex, use q=(.*/)&; otherwise go with Ry-'s answer, i.e.:
import re
u = "/url?q=http://avadl.uploadt.com/DL4/Film/&sa=U&ved=0ahUKEwiYwOKe1r7hAhWUf30KHcHUBkMQFggUMAA&usg=AOvVaw39cIJ0T8_CAQMY8EkSWZJl"
m = re.findall("q=(.*/)&", u)
if m:
    print(m[0])
    # http://avadl.uploadt.com/DL4/Film/
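Note that the patterns in the question fail partly because ? is a regex metacharacter, so /url?q= never matches the literal ?. Escaping it and stopping at the first & also works (a sketch, not part of the original answers):

import re

u = "/url?q=http://avadl.uploadt.com/DL4/Film/&sa=U&ved=0ahUKEwiYwOKe1r7hAhWUf30KHcHUBkMQFggUMAA&usg=AOvVaw39cIJ0T8_CAQMY8EkSWZJl"
# \? matches a literal '?', and [^&]+ stops capturing at the first '&'
print(re.findall(r'/url\?q=([^&]+)', u))  # ['http://avadl.uploadt.com/DL4/Film/']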
My current project requires obtaining the summaries of some Wikipedia pages. This is really easy to do, but I want to make a general script for it. More specifically, I also want to obtain the summaries of the hyperlinks. For example, I want to get the summary of this page: https://en.wikipedia.org/wiki/Creative_industries (this is easy). Moreover, I would also like to get the summaries of the hyperlinks in Section: Definitions -> 'Advertising', 'Marketing', 'Architecture', ..., 'visual arts'. My problem is that some of these hyperlinks have different page names. For example, the previously mentioned page has the hyperlink 'Software' (number 6), but I want the summary of the page it points to, which is 'Software Engineering'.
Can someone help me with that? I can find the summaries when the page name matches the hyperlink text, but that is not always the case. So basically I am looking for a way to apply (page.links) to only one area of the page.
Thank you in advance
Try using Beautiful Soup; this will print all the links with a given prefix:
from bs4 import BeautifulSoup
import requests, re
''' Don't forget to install the 'lxml' parser package '''
url = "your link"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data,'lxml')
tags = soup.find_all('a')
''' This will print every available link '''
for tag in tags:
    print(tag.get('href'))

''' This will print only links with the given prefix '''
for link in soup.find_all('a', attrs={'href': re.compile("^{{your prefix here}}")}):
    print(link.get('href'))
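For the Wikipedia pages mentioned in the question, a plausible prefix would be /wiki/ (a hypothetical example; substitute whatever prefix you need in the placeholder above):

''' e.g. keep only in-wiki links such as href="/wiki/Advertising" '''
for link in soup.find_all('a', attrs={'href': re.compile("^/wiki/")}):
    print(link.get('href'))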
I want to web scrape the links and their respective texts from a table. I plan to use regex to accomplish this.
So let's say that in this page I have multiple anchor tags with link texts text_i. I want to get all the text_i's into one list and then get all the href's into a separate list.
I have:
web = requests.get(url)
web_text = web.text
texts = re.findall(r'<table .*><a .*>(.*)</a></table>', web_text)
The regex finds all the anchor tags, of whatever class, inside an HTML table of whatever class, and returns the texts, correct? This is taking an extraordinarily long time. Is this the correct way to do it?
Also, how do I go about getting the href url's now?
I suggest you use Beautiful Soup to parse the HTML text of the table.
Adapted from Beautiful Soup's documentation, you could do, for example:
from bs4 import BeautifulSoup
soup = BeautifulSoup(web_text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
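Since the question asks for the link texts and the hrefs as two separate lists, restricted to tables, here is a minimal sketch along the same lines (assuming web_text holds the page HTML, as in the question):

from bs4 import BeautifulSoup

soup = BeautifulSoup(web_text, 'html.parser')
texts, hrefs = [], []
for table in soup.find_all('table'):
    for a in table.find_all('a', href=True):
        texts.append(a.get_text())  # the anchor's visible text
        hrefs.append(a['href'])     # the anchor's link target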
I'm trying to extract specific links on a page full of links. The links I need contain the word "apartment" in them.
But whatever I try, I get way more data extracted than only the links I need.
<a href="https://www.website.com/en/ad/apartment/abcd123" title target="IWEB_MAIN">
If anyone could help me out on this, it'd be much appreciated!
Also, if you have a good source that could inform me better about this, it would be double appreciated!
You can use the regular expression module re:
import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(Pagesource, 'html.parser')
alltags = soup.find_all("a", attrs={"href": re.compile("apartment")})
for item in alltags:
    print(item['href'])  # grab the href value
Or you can use a CSS selector:
soup = BeautifulSoup(Pagesource, 'html.parser')
alltags = soup.select("a[href*='apartment']")
for item in alltags:
    print(item['href'])
You can find the details in the official Beautiful Soup documentation.
Edited:
You need to consider the parent div first, then find the anchor tag.
import requests
from bs4 import BeautifulSoup
res=requests.get("https://www.immoweb.be/en/search/apartment/for-sale/leuven/3000")
soup = BeautifulSoup(res.text, 'html.parser')
for item in soup.select("div[data-type='resultgallery-resultitem'] > a[href*='apartment']"):
    print(item['href'])
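If some of those hrefs turn out to be relative (an assumption; the answer above doesn't say), they can be resolved against the site root with urljoin, and absolute hrefs pass through unchanged:

from urllib.parse import urljoin

base = "https://www.immoweb.be"
for item in soup.select("div[data-type='resultgallery-resultitem'] > a[href*='apartment']"):
    print(urljoin(base, item['href']))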
I have the following code, whose purpose is to parse specific information from each of multiple pages. The URL of each page follows a common structure, so I use that structure to collect all the links at once for further parsing.
import urllib
import urlparse
import re
from bs4 import BeautifulSoup
Links = ["http://www.newyorksocialdiary.com/party-pictures?page=" + str(i) for i in range(2,27)]
This gives me a list of links. I then read each page in and make the soups:
Rs = [urllib.urlopen(Link).read() for Link in Links]
soups = [BeautifulSoup(R) for R in Rs]
While these make the soups I want, I cannot achieve the final goal of parsing out the structure I need. For instance:

<a href="/party-pictures/2007/something-for-everyone">Something for Everyone</a>
I am specifically interested in obtaining things like this: '/party-pictures/2007/something-for-everyone'. However, the code below cannot serve this purpose.
As = [soup.find_all('a', attr = {"href"}) for soup in soups]
Could someone tell me where this went wrong? I highly appreciate your assistance. Thank you.
I am specifically interested in obtaining things like this: '/party-pictures/2007/something-for-everyone'.
The next step would be going for regular expressions!!
You don't necessarily need to use regular expressions, and, from what I understand, you can filter out the desired links with BeautifulSoup:
[[a["href"] for a in soup.select('a[href*=party-pictures]')]
for soup in soups]
This, for example, would give you the list of links having party-pictures inside the href. *= means "contains"; select() is a CSS selector search.
You can also use find_all() and apply the regular expression filter, for example:
pattern = re.compile(r"/party-pictures/2007/")
[[a["href"] for a in soup.find_all('a', href=pattern)]
for soup in soups]
This should work:
As = [soup.find_all(href=True) for soup in soups]
This should give you all tags that have an href attribute.
If you only want hrefs on tags with name 'a', then the following would work:
As = [soup.find_all('a',href=True) for soup in soups]
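To get from those tags to the href strings the question is after, one more comprehension on top of the line above would do:

hrefs = [[a['href'] for a in tags] for tags in As]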
Given an HTML link like

<a href="urltxt" class="someclass">texttxt</a>
how can I isolate the url and the text?
Updates
I'm using Beautiful Soup, and am unable to figure out how to do that.
I did
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
links = soup.findAll('a')
for link in links:
    print "link content:", link.content, " and attr:", link.attrs
I get:
link content: None and attr: [(u'href', u'_redirectGeneric.asp?genericURL=/root /support.asp')] ...
...
Why am I missing the content?
edit: elaborated on 'stuck' as advised :)
Use Beautiful Soup. Doing it yourself is harder than it looks; you'll be better off using a tried-and-tested module.
EDIT:
I think you want:
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url).read())
By the way, it's a bad idea to try opening the URL there, as if it goes wrong it could get ugly.
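For example, a guarded fetch might look like this (a sketch in the same Python 2 / BeautifulSoup 3 style as the rest of this answer, assuming url is already defined):

import urllib
from BeautifulSoup import BeautifulSoup

try:
    source = urllib.urlopen(url).read()
except IOError:
    # urllib surfaces network errors and bad URLs as IOError
    soup = None
else:
    soup = BeautifulSoup(source)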
EDIT 2:
This should show you all the links in a page:
import urlparse, urllib
from BeautifulSoup import BeautifulSoup

url = "http://www.example.com/index.html"
source = urllib.urlopen(url).read()
soup = BeautifulSoup(source)

for item in soup.findAll('a'):
    try:
        # item['href'] raises KeyError when the tag has no href
        link = urlparse.urlparse(item['href'].lower())
    except KeyError:
        # Not a valid link
        pass
    else:
        print link
Here's a code example, showing getting the attributes and contents of the links:
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
for link in soup.findAll('a'):
    print link.attrs, link.contents
Looks like you have two issues there:
link.contents, not link.content
attrs is a dictionary, not a string. It holds key value pairs for each attribute in an HTML element. link.attrs['href'] will get you what you appear to be looking for, but you'd want to wrap that in a check in case you come across an a tag without an href attribute.
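A minimal sketch of that check, in the same old-style BeautifulSoup as the code above (link.get returns None instead of raising when the attribute is absent):

for link in soup.findAll('a'):
    href = link.get('href')
    if href is not None:
        print "href:", href, " content:", link.contents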
Though I suppose the others might be correct in pointing you to using Beautiful Soup, they might not, and using an external library might be massively over-the-top for your purposes. Here's a regex which will do what you ask.
/<a\s+[^>]*?href="([^"]*)".*?>(.*?)<\/a>/
Here's what it matches:
'<a href="url">text</a>'
// Parts: "url", "text"
'<a href="url">text<span>something</span></a>'
// Parts: "url", "text<span>something</span>"
If you wanted to get just the text (e.g. "textsomething" in the second example above), I'd just run another regex over it to strip anything between angle brackets.
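That stripping pass might look like this (a sketch; the answer itself gives no code for it):

import re

inner = 'text<span>something</span>'
# drop everything between angle brackets, keeping only the text
print(re.sub(r'<[^>]*>', '', inner))  # textsomething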