Trimming the output in Python

I wrote a script that fetches the current time from https://time.is/ and prints it, using BeautifulSoup and urllib.request.
But I want to trim the output. I'm getting this as output and I want to remove the tag markup:
<div id="twd">07:29:26</div>
Program File:
import urllib.request
from bs4 import BeautifulSoup
url = 'https://time.is/'
hdr = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)' }
req = urllib.request.Request(url, headers=hdr)
res = urllib.request.urlopen(req)
soup = BeautifulSoup(res, 'html.parser')
string = soup.find(id='twd')
print(string)
How can I get just the text?

You can get the text from the element with its .text attribute:
string.text
Test Code:
import urllib.request
from bs4 import BeautifulSoup
url = 'https://time.is/'
hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'}
req = urllib.request.Request(url, headers=hdr)
res = urllib.request.urlopen(req)
soup = BeautifulSoup(res, 'html.parser')
string = soup.find(id='twd')
print(string.text)
Results:
07:06:11PM
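As an offline illustration of the same idea (parsing a local snippet like the one in the question instead of fetching https://time.is/, so it runs without a network), .text keeps surrounding whitespace while .get_text(strip=True) trims it:

```python
from bs4 import BeautifulSoup

# Parse a small local snippet shaped like the question's output.
html = '<div id="twd"> 07:29:26 </div>'
soup = BeautifulSoup(html, 'html.parser')
tag = soup.find(id='twd')

raw = tag.text                    # keeps surrounding whitespace: ' 07:29:26 '
clean = tag.get_text(strip=True)  # trimmed: '07:29:26'
print(clean)
```

get_text(strip=True) is handy when the page wraps the value in extra whitespace or nested tags.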

Related

BeautifulSoup findAll not returning results

I want to get the product names and prices from this page. I did pretty much the same thing for the price as for the product name, but I'm not getting anything.
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as bSoup

header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:77.0) Gecko/20100101 Firefox/77.0'}
url = "https://www.walmart.ca/search?q=lettuce"
req = Request(url=url, headers=header)
client = urlopen(req)
pageHtml = client.read()
client.close()
pageSoup = bSoup(pageHtml, 'html.parser')

products = pageSoup.findAll("div", {"class": "css-155zfob e175iya63"})
print(len(products))  # prints 15, as expected
for product in products:
    pass

prices = pageSoup.findAll("div", {"class": "css-8frhg8 e175iya65"})
print(len(prices))  # prints 0 and I don't know why
for price in prices:
    pass
The page https://www.walmart.ca/search?q=lettuce does not return the content you expect:
curl -s -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:77.0) Gecko/20100101 Firefox/77.0' 'https://www.walmart.ca/search?q=lettuce' | grep 'css-8frhg8'
You probably saw that class in a browser, where the content was partially rendered at run time via JavaScript. This means you need a library that can emulate a browser with JavaScript support, such as Selenium or Playwright.
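You can reproduce the symptom offline (using the class names from the question in a synthetic snippet, so no network is needed): the server-rendered HTML contains the product element but not the price element, which only exists after JavaScript runs in the browser.

```python
from bs4 import BeautifulSoup

# Simulated server response: the product <div> is present, the price
# <div> is absent because the real site injects it with JavaScript.
server_html = '<div class="css-155zfob e175iya63">Romaine lettuce</div>'
soup = BeautifulSoup(server_html, 'html.parser')

products = soup.find_all('div', {'class': 'css-155zfob e175iya63'})
prices = soup.find_all('div', {'class': 'css-8frhg8 e175iya65'})
print(len(products), len(prices))  # 1 0
```

When the counts diverge like this between browser dev tools and your script, JavaScript rendering is the usual suspect.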

Beautiful soup text returns blank

I'm trying to scrape a website, but it returns blank. Can you help, please? What am I missing?
import requests
from bs4 import BeautifulSoup
URL = 'https://ks.wjx.top/jq/50921280.aspx'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.text)
To get a response, add a User-Agent header to requests.get(); otherwise the website thinks you're a bot and blocks the request.
import requests
from bs4 import BeautifulSoup
URL = "https://ks.wjx.top/jq/50921280.aspx"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"
}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, "html.parser")
print(soup.prettify())

beautifulsoup not returning all html

import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.amazon.com/s?k=iphone+5s&ref=nb_sb_noss')
c = r.content
soup = BeautifulSoup(c, 'html.parser')
all = soup.find_all("span", {"class": "a-size-medium a-color-base a-text-normal"})
print(all)
This is my simple Python script trying to scrape an Amazon page, but not all of the HTML is returned in the soup variable, therefore I get nothing when trying to find a specific series of tags and extract them.
Try the code below; it should do the trick for you.
You actually forgot to add headers to your request.
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
url = 'https://www.amazon.com/s?k=iphone+5s&ref=nb_sb_noss'
response = requests.get(url, headers=headers)
# print(response.text)  # uncomment to inspect the raw HTML
soup = BeautifulSoup(response.content, features="lxml")
my_all = soup.find_all("span", {"class": "a-size-medium a-color-base a-text-normal"})
print(my_all)
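One detail worth knowing when matching several classes at once (shown here offline with the class names from the answer, so it runs without a network): passing the full class string to find_all matches the class attribute exactly as written, while a CSS selector via select() matches the classes regardless of order.

```python
from bs4 import BeautifulSoup

html = ('<span class="a-size-medium a-color-base a-text-normal">'
        'Apple iPhone 5s</span>')
soup = BeautifulSoup(html, 'html.parser')

# Exact match of the whole class attribute string.
by_attr = soup.find_all('span', {'class': 'a-size-medium a-color-base a-text-normal'})
# CSS selector: matches the three classes in any order.
by_css = soup.select('span.a-size-medium.a-color-base.a-text-normal')
print(len(by_attr), len(by_css))  # 1 1
```

The CSS-selector form is more robust if the site ever reorders its class names.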

BeautifulSoup Find periodically returns None

I am trying to get a value from a class. Sometimes find returns the value I need, but at other times it doesn't work.
Code:
import requests
from bs4 import BeautifulSoup
url = 'https://beru.ru/catalog/molotyi-kofe/76321/list'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
item_count = (soup.find('div', class_='_2StYqKhlBr')).text.split()[4]
print(item_count)
The reason you get the value only sometimes is that the website is protected by a CAPTCHA.
When the request is blocked by the CAPTCHA, you are redirected to a URL like this:
https://beru.ru/showcaptcha?retpath=https://beru.ru/catalog/molotyi-kofe/76321/list?ncrnd=4561_aa1b86c2ca77ae2b0831c4d95b9d85a4&t=0/1575204790/b39289ef083d539e2a4630548592a778&s=7e77bfda14c97f6fad34a8a654d9cd16
You can verify this by parsing the response content:
import requests
from bs4 import BeautifulSoup

r = requests.get('https://beru.ru/catalog/molotyi-kofe/76321/list')
soup = BeautifulSoup(r.text, 'html.parser')

for item in soup.findAll('div', attrs={'class': '_2StYqKhlBr _1wAXjGKtqe'}):
    print(item)

for item in soup.findAll('div', attrs={'class': 'captcha__image'}):
    for captcha in item.findAll('img'):
        print(captcha.get('src'))
And you will get the CAPTCHA image link:
https://beru.ru/captchaimg?aHR0cHM6Ly9leHQuY2FwdGNoYS55YW5kZXgubmV0L2ltYWdlP2tleT0wMEFMQldoTnlaVGh3T21WRmN4NWFJRUdYeWp2TVZrUCZzZXJ2aWNlPW1hcmtldGJsdWU,_0/1575206667/b49556a86deeece9765a88f635c7bef2_df12d7a36f0e2d36bd9c9d94d8d9e3d7
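The same detection can be sketched offline (the div.captcha__image structure is taken from the answer above; the image URL here is a placeholder): if the parsed response contains that div, the request was blocked and the challenge image URL can be extracted from it.

```python
from bs4 import BeautifulSoup

# Simulated CAPTCHA interstitial, shaped like the blocked response.
captcha_html = ('<div class="captcha__image">'
                '<img src="https://beru.ru/captchaimg?demo"/></div>')
soup = BeautifulSoup(captcha_html, 'html.parser')

captcha = soup.find('div', class_='captcha__image')
blocked = captcha is not None
image_url = captcha.img['src'] if blocked else None
print(blocked, image_url)
```

A check like this lets a scraper detect the block and retry or back off instead of silently reading None.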

Beautiful Soup in Python cannot find id despite the id existing

The soup.find method returns None instead of the product title, despite productTitle existing on the page.
It works on amazon.it but not on amazon.com
import requests
from bs4 import BeautifulSoup
url = r'https://www.amazon.com/SanDisk-128GB-Extreme-microSD-Adapter/dp/B07FCMKK5X/ref=sr_1_1?fst=as:off&pf_rd_i=16225007011&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=74069509-93ef-4a3c-8dca-a9e3fa773a64&pf_rd_r=HWWSV1CX6VJBC57MRVP6&pf_rd_s=merchandised-search-4&pf_rd_t=101&qid=1564513802&rnid=16225007011&s=computers-intl-ship&sr=1-1'
headers = {'User-Agent' : r'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36' }
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
fullprice = soup.find(id='productTitle')
print(fullprice)
Seems you just need a User-Agent header:
import requests
from bs4 import BeautifulSoup as bs
headers = {'user-agent': 'Mozilla/5.0'}
r = requests.get('https://www.amazon.com/SanDisk-128GB-Extreme-microSD-Adapter/dp/B07FCMKK5X/ref=sr_1_1?fst=as:off&pf_rd_i=16225007011&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=74069509-93ef-4a3c-8dca-a9e3fa773a64&pf_rd_r=HWWSV1CX6VJBC57MRVP6&pf_rd_s=merchandised-search-4&pf_rd_t=101&qid=1564513802&rnid=16225007011&s=computers-intl-ship&sr=1-1', headers = headers)
soup = bs(r.content, 'html.parser')
print(soup.select_one('[name="description"]')['content'])
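Since the original error comes from find() returning None, a defensive pattern helps (shown offline with a synthetic snippet shaped like the Amazon title element, so it runs without a network): guard before calling .text so a blocked or bot-served page doesn't raise AttributeError.

```python
from bs4 import BeautifulSoup

# Local snippet mimicking the productTitle element from the question.
html = '<span id="productTitle">  SanDisk 128GB Extreme microSD  </span>'
soup = BeautifulSoup(html, 'html.parser')

title_tag = soup.find(id='productTitle')
# If the page was a bot-check interstitial, title_tag would be None;
# the guard avoids AttributeError and yields None instead.
title = title_tag.get_text(strip=True) if title_tag else None
print(title)  # SanDisk 128GB Extreme microSD
```

Logging when the guard yields None makes intermittent blocking much easier to diagnose.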
