I want to append href from this class:
<a class="_2UzuFa" href="/awg-all-weather-gear-solid-men-polo-neck-black-grey-t-shirt/p/itm19ae710c69708?pid=TSHGFKPZNGYMP2FC&lid=LSTTSHGFKPZNGYMP2FCZPKPX3&marketplace=FLIPKART&store=clo%2Fash%2Fank%2Fedy&srno=b_1_38&otracker=browse&fm=organic&iid=en_7%2Fz2ZgorbMeTmb%2F05oING%2BjZoEV8lwngUWQpEDanwo443TzRZ2XfvI9qIOekIcXbWiZZReg3l4w%2Fa03968TVxw%3D%3D&ppt=None&ppn=None&ssid=3o5k6hnkq80000001660826655971"J5 -o7Q4n"></a>
My code:

for item in class:
    containt = soup.find('href')
    print(containt)

It's not working.
Do not use reserved keywords like class as variable names, and to extract the href value from a tag, use .get('href').
Example
from bs4 import BeautifulSoup
html='''<a class="_2UzuFa" href="/awg-all-weather-gear-solid-men-polo-neck-black-grey-t-shirt/p/itm19ae710c69708?pid=TSHGFKPZNGYMP2FC&lid=LSTTSHGFKPZNGYMP2FCZPKPX3&marketplace=FLIPKART&store=clo%2Fash%2Fank%2Fedy&srno=b_1_38&otracker=browse&fm=organic&iid=en_7%2Fz2ZgorbMeTmb%2F05oING%2BjZoEV8lwngUWQpEDanwo443TzRZ2XfvI9qIOekIcXbWiZZReg3l4w%2Fa03968TVxw%3D%3D&ppt=None&ppn=None&ssid=3o5k6hnkq80000001660826655971"J5 -o7Q4n"></a>'''
soup = BeautifulSoup(html, 'html.parser')
for a in soup.select('a'):
    print(a.get('href'))
Output
/awg-all-weather-gear-solid-men-polo-neck-black-grey-t-shirt/p/itm19ae710c69708?pid=TSHGFKPZNGYMP2FC&lid=LSTTSHGFKPZNGYMP2FCZPKPX3&marketplace=FLIPKART&store=clo%2Fash%2Fank%2Fedy&srno=b_1_38&otracker=browse&fm=organic&iid=en_7%2Fz2ZgorbMeTmb%2F05oING%2BjZoEV8lwngUWQpEDanwo443TzRZ2XfvI9qIOekIcXbWiZZReg3l4w%2Fa03968TVxw%3D%3D&ppt=None&ppn=None&ssid=3o5k6hnkq80000001660826655971
Example based on Flipkart
from bs4 import BeautifulSoup
import requests
url='https://www.flipkart.com/mens-tshirts/awg-all-weather-gear~brand/pr?sid=clo,ash,ank,edy&marketplace=FLIPKART&otracker=product_breadCrumbs_AWG+All+Weather+Gear+Men%27s+T-shirts'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for e in soup.select('a._2UzuFa'):
    print('https://www.flipkart.com' + e.get('href'))
Related
import requests
from bs4 import BeautifulSoup
result = requests.get('http://textfiles.com/stories/').text
soup = BeautifulSoup(result, 'lxml')
stories = soup.find_all('tr')
print(stories)
The find method works, but find_all doesn't, and I'm not sure why. Maybe it is because the rows don't have a class?
The correct code is:
import requests
from bs4 import BeautifulSoup
result = requests.get('http://textfiles.com/stories/')
soup = BeautifulSoup(result.content, 'html5lib')
stories = soup.find_all('tr')
You can access each 'tr' by index, for example stories[0]; the 0 can be replaced with any valid index into the list.
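For example, a minimal sketch of pulling the link out of each row (assuming each story row wraps an <a> tag, which you should verify against the actual markup):

for row in stories:
    link = row.find('a')  # first anchor inside the row, if any
    if link is not None:
        print(link.get('href'), link.get_text(strip=True))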
You can also use pandas, e.g.:
import pandas
import requests
from bs4 import BeautifulSoup
result = requests.get('http://textfiles.com/stories/')
soup = BeautifulSoup(result.content, 'html5lib')
df = pandas.read_html(soup.prettify())
print(df)
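Note that pandas.read_html returns a list of DataFrames, one per table it finds on the page, so you would normally index into the result:

print(df[0])  # the first table on the page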
I am using BeautifulSoup to extract the list items under the class "secondary-nav-main-links" from the https://www.champlain.edu/current-students web page. I thought my working code below would print each "li" element on a single line, but the closing "/li" tag ends up on its own line. I included screen captures of the current output and the intended output. Any ideas? Thanks!
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen('https://www.champlain.edu/current-students')
bs = BeautifulSoup(html.read(), 'html.parser')
soup = bs.find(class_='secondary-nav secondary-nav-sm has-callouts')
for div in soup.find_all('li'):
    print(div)
Current output: (screen capture 1)
Intended output: (screen capture 2)
You can remove the newline characters with str.replace, and you can unescape HTML entities such as &amp; with html.unescape:

str(div).replace('\n', '')

To turn &amp; back into &, wrap the argument of the print statement with html.unescape:

import html
html.unescape(str(div))
So your code becomes:

from urllib.request import urlopen
from bs4 import BeautifulSoup
import html

# use a name other than html for the response so the html module is not shadowed
page = urlopen('https://www.champlain.edu/current-students')
bs = BeautifulSoup(page.read(), 'html.parser')
soup = bs.find(class_='secondary-nav secondary-nav-sm has-callouts')
for div in soup.find_all('li'):
    print(html.unescape(str(div).replace('\n', '')))
I need to fetch Art and Biography from the href links below:
<a class="gr-hyperlink" href="/genres/art">Art</a>,
<a class="gr-hyperlink" href="/genres/biography">Biography</a>,
This is my code:

import numpy as np
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

def getHTMLContent(link):
    html = urlopen(link)
    soup = BeautifulSoup(html, 'html.parser')
    return soup

content = getHTMLContent('https://abc')
hyperLinks = content.find_all('a', class_="gr-hyperlink")
hyperLinks
Running find_all on a BeautifulSoup object returns an iterable ResultSet.
Each item in the ResultSet is a BeautifulSoup Tag.
Use BeautifulSoup's get_text method to extract each Tag's text:
content = [link.get_text() for link in hyperLinks]
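Put together as a minimal, self-contained sketch using just the two anchors from the question (the real page will of course contain more markup):

from bs4 import BeautifulSoup

html = '''<a class="gr-hyperlink" href="/genres/art">Art</a>,
<a class="gr-hyperlink" href="/genres/biography">Biography</a>'''

soup = BeautifulSoup(html, 'html.parser')
hyperLinks = soup.find_all('a', class_="gr-hyperlink")
print([link.get_text() for link in hyperLinks])  # ['Art', 'Biography']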
Given the following code:
# import the module
import bs4 as bs
import urllib.request
import re
masterURL = 'http://www.metrolyrics.com/top100.html'
sauce = urllib.request.urlopen(masterURL).read()
soup = bs.BeautifulSoup(sauce,'lxml')
for div in soup.findAll('ul', {'class': 'song-list'}):
    for span in div:
        for link in span:
            for a in link:
                print(a)
I can parse multiple divs and get a result like the one in the attached screenshot. My question is: instead of getting the full contents of the div, how can I return only the highlighted portion, i.e. the URL of the href?
Try this. You need to target the right class to fetch the URLs attached to it.
from bs4 import BeautifulSoup
import urllib.request
masterURL = 'http://www.metrolyrics.com/top100.html'
sauce = urllib.request.urlopen(masterURL).read()
soup = BeautifulSoup(sauce,'lxml')
for div in soup.find_all(class_='subtitle'):
    print(div.get("href"))
Output:
http://www.metrolyrics.com/charles-goose-lyrics.html
http://www.metrolyrics.com/param-singh-lyrics.html
http://www.metrolyrics.com/westlife-lyrics.html
http://www.metrolyrics.com/luis-fonsi-lyrics.html
http://www.metrolyrics.com/grease-lyrics.html
http://www.metrolyrics.com/shanti-dope-lyrics.html
and so on...
if 'href' in a.attrs:
    a.attrs['href']

This will give you what you need.
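For context, a sketch of how that check might slot into the original loop, assuming the song-list markup from the question is still what the site serves (it may have changed):

import bs4 as bs
import urllib.request

masterURL = 'http://www.metrolyrics.com/top100.html'
sauce = urllib.request.urlopen(masterURL).read()
soup = bs.BeautifulSoup(sauce, 'lxml')

# collect every anchor inside each song list and keep only those that carry an href
for ul in soup.find_all('ul', {'class': 'song-list'}):
    for a in ul.find_all('a'):
        if 'href' in a.attrs:
            print(a.attrs['href'])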
I have a number of Facebook groups for which I would like to get the member count. An example would be this group: https://www.facebook.com/groups/347805588637627/
Using Inspect Element on the page, I can see the count is stored like so:
<span id="count_text">9,413 members</span>
I am trying to get "9,413 members" out of the page. I have tried using BeautifulSoup but cannot work it out.
Thanks
Edit:
from bs4 import BeautifulSoup
import requests
url = "https://www.facebook.com/groups/347805588637627/"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, "html.parser")
span = soup.find("span", id="count_text")
print(span.text)
In case there is more than one span tag in the page:
from bs4 import BeautifulSoup
soup = BeautifulSoup(your_html_input, 'html.parser')
span = soup.find("span", id="count_text")
span.text
You can use the text attribute of the parsed span:
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup('<span id="count_text">9,413 members</span>', 'html.parser')
>>> soup.span
<span id="count_text">9,413 members</span>
>>> soup.span.text
'9,413 members'
If you have more than one span tag, you can try this:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
tags = soup('span')
for tag in tags:
    print(tag.contents[0])
Facebook renders this content with JavaScript (and actively discourages bots), so requests alone won't see it. You need to use Selenium to extract the data in Python.
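A rough sketch of that Selenium approach, assuming the group page is reachable without logging in and that the counter still uses the count_text id from the question (both assumptions may well fail in practice):

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()  # requires a matching chromedriver on your PATH
driver.get("https://www.facebook.com/groups/347805588637627/")

# hand the JavaScript-rendered markup to BeautifulSoup, then close the browser
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

span = soup.find("span", id="count_text")
if span is not None:
    print(span.text)  # e.g. '9,413 members'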