Warning when building a web crawler in Python using BeautifulSoup - python

I am trying to build a simple web crawler that prints the URL of every Legion product displayed on amazon.in when the search keyword is 'legion'. I am using the following code:
import requests
from bs4 import BeautifulSoup

def legion_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.amazon.in/s?k=legion&qid=1588862016&swrs=82DF79C1243AF6D61651CCAA9F883EC4&ref=sr_pg_' + str(page)
        source_code = requests.get(url)
        plain_txt = source_code.text
        soup = BeautifulSoup(plain_txt)
        for link in soup.findAll('a', {'class': 'a-size-medium a-color-base a-text-normal'}):
            href = link.get('href')
            print(href)
        page += 1

legion_spider(1)
and the output I am getting is this:
C:\Users\lenovo\AppData\Local\Programs\Python\Python38-32\python.exe "E:/Python Practice/web_crawler.py"
E:/Python Practice/web_crawler.py:10: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file E:/Python Practice/web_crawler.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plain_txt)
Process finished with exit code 0

You are missing the parser! Follow this part of the BeautifulSoup documentation:
BeautifulSoup(markup, <parser>)
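Applied to the code in the question, that means naming the parser explicitly when constructing the soup; either form below silences the warning:

soup = BeautifulSoup(plain_txt, 'html.parser')
# or, using the keyword argument named in the warning text
soup = BeautifulSoup(plain_txt, features='html.parser')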

Related

Cannot get the data via my scripts but the data is available when I "inspect"

When I inspect "https://dse.bigexam.hk/en/ssp?p=1&band=1&order=name&asc=1" I can find the data I want. For example, the total-pages text "Showing schools 1 to 10 of 143." is there. However, my script returns no data. Can anyone help? Thanks.
from bs4 import BeautifulSoup
import requests

def makeSoup(url):
    response = requests.get(url)
    return BeautifulSoup(response.text, 'lxml')

url = "https://dse.bigexam.hk/en/ssp?p=1&band=1&order=name&asc=1"
soup = makeSoup(url)
pages = soup.find('div', attrs={'class': 'col-sm'})
print(pages)
That's because the content is loaded via Ajax/JavaScript. The requests library doesn't handle that; you'll need something that can execute these scripts and give you the rendered DOM.
You can try Selenium.
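A minimal sketch with Selenium, assuming Chrome with a matching driver is available and that the col-sm class from the question still wraps the pagination text:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

driver = webdriver.Chrome()  # assumes Chrome and a chromedriver are installed
try:
    driver.get('https://dse.bigexam.hk/en/ssp?p=1&band=1&order=name&asc=1')
    # wait until the Ajax-rendered pagination block is present in the DOM
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'div.col-sm')))
    # hand the rendered page to BeautifulSoup exactly as before
    soup = BeautifulSoup(driver.page_source, 'lxml')
    print(soup.find('div', attrs={'class': 'col-sm'}))
finally:
    driver.quit()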

Why doesn't my web scraper work? Python 3 - requests, BeautifulSoup

I have been following this Python tutorial for a while, and I made a web scraper similar to the one in the video.
Language: Python
import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = link.get('href')
            title = link.string
            print(href)
        page += 1

spider(1)
And this is the output that the program gives:
PS D:\development> & C:/Users/hirusha/AppData/Local/Programs/Python/Python38/python.exe "d:/development/Python/TheNewBoston/Python/one/web scrawler.py"
PS D:\development>
What can I do?
Before this, I had a warning. The code was:
soup = BeautifulSoup(plain_text)
I changed this to
soup = BeautifulSoup(plain_text, 'html.parser')
and the warning was gone. The warning I got was:
d:/development/Python/TheNewBoston/Python/one/web scrawler.py:10: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file d:/development/Python/TheNewBoston/Python/one/web scrawler.py. To get rid of this warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plain_text)
Any help is appreciated. Thank you!
There are no results because the class you are targeting is not present until the webpage is rendered, which doesn't happen with requests.
The data is dynamically retrieved from a script tag. You can regex out the JavaScript object holding the data and parse it with json to get that info.
The warning you show was due to a parser not being specified originally, which you have since rectified.
import re
import json
import requests
import pandas as pd

r = requests.get('https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=1')
# pull the window.runParams JavaScript object out of the page source
data = json.loads(re.search(r'window\.runParams = (\{".*?\});', r.text, re.S).group(1))
# build (title, url) rows from the items list
df = pd.DataFrame([(item['title'], 'https:' + item['productDetailUrl']) for item in data['items']])
print(df)

How to sift through specific items from a webpage using conditional statement

I've made a scraper in Python. It is running smoothly. Now I would like to discard or accept specific links from that page, as in links containing "mobiles", but even after writing a conditional statement I can't do so. I hope to get some help rectifying my mistake.
import requests
from bs4 import BeautifulSoup

def SpecificItem():
    url = 'https://www.flipkart.com/'
    Process = requests.get(url)
    soup = BeautifulSoup(Process.text, "lxml")
    for link in soup.findAll('div', class_='')[0].findAll('a'):
        if "mobiles" not in link:
            print(link.get('href'))

SpecificItem()
On the other hand, if I do the same thing using the lxml library with XPath, it works:
import requests
from lxml import html

def SpecificItem():
    url = 'https://www.flipkart.com/'
    Process = requests.get(url)
    tree = html.fromstring(Process.text)
    links = tree.xpath('//div[@class=""]//a/@href')
    for link in links:
        if "mobiles" not in link:
            print(link)

SpecificItem()
So, at this point I think that with the BeautifulSoup library the code needs to be somewhat different to serve the purpose.
The root of your problem is that your if condition works differently between BeautifulSoup and lxml. With BeautifulSoup, if "mobiles" not in link: is not checking the href field: testing x in tag checks membership in the tag's direct children (tag.contents), not in its attributes. Explicitly using the href field does the trick:
import requests
from bs4 import BeautifulSoup

def SpecificItem():
    url = 'https://www.flipkart.com/'
    Process = requests.get(url)
    soup = BeautifulSoup(Process.text, "lxml")
    for link in soup.findAll('div', class_='')[0].findAll('a'):
        href = link.get('href')
        if "mobiles" not in href:
            print(href)

SpecificItem()
That prints out a bunch of links and none of them include "mobiles".
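For reference, a quick way to see the difference in behavior, using a throwaway tag just for illustration:

from bs4 import BeautifulSoup

link = BeautifulSoup('<a href="/mobiles/phones">Phones</a>', 'html.parser').a
print("mobiles" in link)              # False: tests against link.contents
print("mobiles" in link.get('href'))  # True: plain substring test on the URL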

Grabbing instagram feed using Python

I'm trying to get all Instagram posts by a specific user in Python. Below is my code:
import requests
from bs4 import BeautifulSoup

def get_images(user):
    url = "https://www.instagram.com/" + str(user)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('instagramuser')
However, I'm getting the error:
UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 14 of the file C:/Users/Bedri/PycharmProjects/untitled1/main.py. To get rid of this warning, change code that looks like this:
BeautifulSoup([your markup])
to this:
BeautifulSoup([your markup], "html.parser")
markup_type=markup_type))
So my question, what am I doing wrong?
You should pass a parser to BeautifulSoup; it's not an error, just a warning.
soup = BeautifulSoup(plain_text, "html.parser")
I would recommend using lxml instead of html.parser:
soup = BeautifulSoup(plain_text, 'lxml')
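Note that lxml is a third-party package: if it is not installed, BeautifulSoup(plain_text, 'lxml') raises bs4.FeatureNotFound, so run pip install lxml first.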
Instead of requests.get, use urlopen. Here's the code, with the fetch and parse in one line:
from urllib import request
from bs4 import BeautifulSoup

def get_images(user):
    soup = BeautifulSoup(request.urlopen("https://www.instagram.com/" + str(user)), 'lxml')
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('user')

Page scraper parser error?

Hello everyone, new Python user here. I am getting a strange warning when building a very basic page scraper.
I am using BeautifulSoup 4, and when I execute my code I get this warning:
"UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 13 of the file C:/Users/***/PycharmProjects/untitled1/s.py. To get rid of this warning, change code that looks like this:"
BeautifulSoup([your markup])
to this:
BeautifulSoup([your markup], "html.parser")
markup_type=markup_type))
If anyone can help me fix this I would greatly appreciate it!
The code follows:
import requests
from bs4 import BeautifulSoup

def trade_spider():
    url = 'http://buckysroom.org/trade/search.php?page='  # could add + str(page) to the URL so that it pages through results
    source_code = requests.get(url)  # requests the data from the site
    plain_text = source_code.text  # imports all of the data gathered
    soup = BeautifulSoup(plain_text)  # holds all of the data and allows you to sort through it
    for link in soup.find_all('a', {'class': 'item-name'}):
        href = link.get('href')
        print(href)

trade_spider()
You could try to change the following line to:
soup = BeautifulSoup(plain_text, "html.parser")
or whatever other parser you need to use...
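For instance, any of these will silence the warning; which one to pick depends on what you have installed (the latter two are third-party packages):

soup = BeautifulSoup(plain_text, 'html.parser')  # stdlib, no extra install
soup = BeautifulSoup(plain_text, 'lxml')         # fast; pip install lxml
soup = BeautifulSoup(plain_text, 'html5lib')     # most lenient; pip install html5lib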
