Why doesn't my web scraper work? (Python 3, requests, BeautifulSoup) - python

I have been following this Python tutorial for a while, and I made a web crawler similar to the one in the video.
Language: Python
import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = link.get('href')
            title = link.string
            print(href)
        page += 1

spider(1)
And this is the output that the program gives:
PS D:\development> & C:/Users/hirusha/AppData/Local/Programs/Python/Python38/python.exe "d:/development/Python/TheNewBoston/Python/one/web scrawler.py"
PS D:\development>
What can I do?
Before this, I had an error. The code was:
    soup = BeautifulSoup(plain_text)
I changed it to:
    soup = BeautifulSoup(plain_text, 'html.parser')
and the error went away.
The warning I got was:
d:/development/Python/TheNewBoston/Python/one/web scrawler.py:10: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file d:/development/Python/TheNewBoston/Python/one/web scrawler.py. To get rid of this warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plain_text)
Any help is appreciated. Thank you!

There are no results because the class you are targeting is not present until the webpage is rendered, which doesn't happen with requests.
The data is dynamically retrieved from a script tag. You can regex out the JavaScript object holding the data and parse it with json to get that info.
The warning you show was due to a parser not being specified originally, which you have already rectified.
import re, json, requests
import pandas as pd

r = requests.get('https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=1')
# Extract the JavaScript object assigned to window.runParams and parse it as JSON
data = json.loads(re.search(r'window\.runParams = (\{".*?\});', r.text, re.S).group(1))
# Build (title, url) rows; productDetailUrl is protocol-relative, so prefix 'https:'
df = pd.DataFrame([(item['title'], 'https:' + item['productDetailUrl']) for item in data['items']])
print(df)
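As a small readability tweak (not in the original answer), you can label the DataFrame columns when building it:

```python
import pandas as pd

# Stand-in rows for the (title, url) tuples the scraper builds
rows = [('Item A', 'https://example.com/a'), ('Item B', 'https://example.com/b')]
df = pd.DataFrame(rows, columns=['title', 'url'])
print(df)
```

That way later code can refer to df['title'] and df['url'] instead of positional column 0 and 1.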

Related

Python web crawler: GuessedAtParserWarning

I am trying to make a web crawler using Python (3.8). I think I'm mostly done, but I'm getting this error. Can anybody help me? Thanks in advance.
Python code :
import requests
from bs4 import BeautifulSoup

def aliexpress_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)
        plaintext = sourcecode.text
        soup = BeautifulSoup(plaintext)
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title = link.string
            print(href)
            print(title)
        page += 1

aliexpress_spider(1)
Error message:
GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 11 of the file C:/Users/moham/PycharmProjects/moh/test.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plaintext)
import requests
from bs4 import BeautifulSoup

def aliexpress_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)
        soup = BeautifulSoup(sourcecode.text, "html.parser")
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title = link.string
            print(href)
            print(title)
        print(soup.title)
        page += 1

aliexpress_spider(1)

Warning when building a web crawler in Python using BeautifulSoup

I am trying to build a simple web crawler that gives the URL of every Legion product displayed on amazon.in when the search key is 'legion'. I am using the following code:
import requests
from bs4 import BeautifulSoup

def legion_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.amazon.in/s?k=legion&qid=1588862016&swrs=82DF79C1243AF6D61651CCAA9F883EC4&ref=sr_pg_' + str(page)
        source_code = requests.get(url)
        plain_txt = source_code.text
        soup = BeautifulSoup(plain_txt)
        for link in soup.findAll('a', {'class': 'a-size-medium a-color-base a-text-normal'}):
            href = link.get('href')
            print(href)
        page += 1

legion_spider(1)
and the output I am getting is this:
C:\Users\lenovo\AppData\Local\Programs\Python\Python38-32\python.exe "E:/Python Practice/web_crawler.py"
E:/Python Practice/web_crawler.py:10: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file E:/Python Practice/web_crawler.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plain_txt)
Process finished with exit code 0
You are missing the parser! Follow this part of the BeautifulSoup documentation:
BeautifulSoup(markup, <parser>)
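For example, on a small stand-in snippet (the markup below is made up, not fetched from Amazon), naming the parser explicitly silences the warning:

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for a fetched search-results page
html = '<a class="a-size-medium a-color-base a-text-normal" href="/dp/B01">Legion 5</a>'
soup = BeautifulSoup(html, 'html.parser')  # explicit parser: no GuessedAtParserWarning
link = soup.find('a', {'class': 'a-size-medium a-color-base a-text-normal'})
print(link.get('href'))  # /dp/B01
```

The same one-argument change applies to the line soup = BeautifulSoup(plain_txt) in your spider.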

Grabbing instagram feed using Python

I'm trying to get all Instagram posts by a specific user in Python. Below is my code:
import requests
from bs4 import BeautifulSoup

def get_images(user):
    url = "https://www.instagram.com/" + str(user)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('instagramuser')
However, I'm getting the error:
UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 14 of the file C:/Users/Bedri/PycharmProjects/untitled1/main.py. To get rid of this warning, change code that looks like this:
BeautifulSoup([your markup])
to this: BeautifulSoup([your markup], "html.parser") markup_type=markup_type))
So my question, what am I doing wrong?
You should pass a parser to BeautifulSoup; it's not an error, just a warning.
    soup = BeautifulSoup(plain_text, "html.parser")
I would recommend using lxml instead of html.parser:
    soup = BeautifulSoup(plain_text, 'lxml')
You can also use urlopen instead of requests.get. Here's the code with the fetch and parse in one line:
from urllib import request
from bs4 import BeautifulSoup

def get_images(user):
    soup = BeautifulSoup(request.urlopen("https://www.instagram.com/" + str(user)), 'lxml')
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('user')

Getting specific data after crawling websites in Python

This is my first Python project, which I pretty much wrote by following YouTube videos. Although not well versed, I think I have the basics of coding down.
# importing the module that allows connecting to the internet
import requests
# this allows getting data by crawling webpages
from bs4 import BeautifulSoup

# creating a loop to change the url every time it is executed
def creator_spider(max_pages):
    page = 0
    while page < max_pages:
        url = 'https://www.patreon.com/sitemap/campaigns/' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': ''}):
            href = "https://www.patreon.com" + link.get('href')
            # title = link.string
            print(href)
            # print(title)
            get_single_item_data(href)
        page = page + 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    print(soup)
    for item_name in soup.findAll('h6'):
        print(item_name.string)
From each page I crawl, I want the code to get this highlighted information: http://imgur.com/a/e59S9
whose source code is: http://imgur.com/a/8qv7k
What I reckon is that I should change the attributes of soup.findAll() in the get_single_item_data() function, but all my attempts have been futile. Any help on this is very much appreciated.
from bs4 docs
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
It’s very useful to search for a tag that has a certain CSS class, but the name of the CSS attribute, “class”, is a reserved word in Python. Using class as a keyword argument will give you a syntax error. As of Beautiful Soup 4.1.2, you can search by CSS class using the keyword argument class_:
soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, ...]
However, after a closer look at the code you mentioned in the pic, this approach will not get what you want. In the source I see data-react-id: the DOM is built by ReactJS, and requests.get(url) will not execute JS on your end. Disable JS in your browser to see what requests.get(url) actually returns.
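One way to check this without opening a browser is a small helper (my sketch, not part of the answer) that reports whether a tag exists in the raw, un-rendered HTML:

```python
from bs4 import BeautifulSoup

def has_static_tag(html, name):
    """True if <name> appears in the raw HTML, i.e. without any JS rendering."""
    return BeautifulSoup(html, 'html.parser').find(name) is not None

# Server-rendered page: the h6 is present in the raw HTML
print(has_static_tag('<div><h6>Creator name</h6></div>', 'h6'))  # True
# React-style shell: the h6 would only appear after JS runs
print(has_static_tag('<div id="react-root"></div>', 'h6'))       # False
```

Run it on requests.get(item_url).text; if it prints False, the data you see in the browser is injected by JavaScript, and you need a different approach (an API endpoint or a browser-automation tool).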
Best regards

Page scraper parser error?

Hello everyone, new Python user here. I am having a strange error when building a very basic page scraper.
I am using BeautifulSoup4 to assist me, and when I execute my code I get this error:
"UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 13 of the file C:/Users/***/PycharmProjects/untitled1/s.py. To get rid of this warning, change code that looks like this:"
BeautifulSoup([your markup])
to this:
BeautifulSoup([your markup], "html.parser")
markup_type=markup_type))
If anyone has any help to fix this I would greatly appreciate it!
Code Follows
import requests
from bs4 import BeautifulSoup

def trade_spider():
    url = 'http://buckysroom.org/trade/search.php?page='  # could append str(page) to the url so that it would update
    source_code = requests.get(url)  # requests the data from the site
    plain_text = source_code.text  # all of the data gathered
    soup = BeautifulSoup(plain_text)  # holds the data and allows sorting through it
    for link in soup.find_all('a', {'class': 'item-name'}):
        href = link.get('href')
        print(href)

trade_spider()
You could try changing the following line to:
soup = BeautifulSoup(plain_text, "html.parser")
or whatever other parser you need to use...
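Applied to the code above, one tidy way (my restructuring, not the original answer; the buckysroom.org URL may no longer resolve) is to split parsing from fetching, so the parser argument is easy to see and to test:

```python
import requests
from bs4 import BeautifulSoup

def extract_links(html):
    # Explicit parser name: no UserWarning, consistent behaviour across machines
    soup = BeautifulSoup(html, 'html.parser')
    return [a.get('href') for a in soup.find_all('a', {'class': 'item-name'})]

def trade_spider():
    url = 'http://buckysroom.org/trade/search.php?page='
    for href in extract_links(requests.get(url).text):
        print(href)
```

With the parsing isolated in extract_links, you can feed it a saved HTML snippet to check your selector before hitting the live site.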
