While trying to implement a simple web crawler in Colab, I wrote the following code and got the syntax error below. Please advise me on how to resolve the issue so it runs:
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page=1
    while page <= max_pages:
        url = 'https://www.ebay.com/sch/i.html?_from=R40&_nkw=2%22+Butterfly+Valve&_sacat=0&_pgn='+ str(page)
        source_code= requests.get(url)
        plain_text=source_code.text
        soup = BeautifulSoup(plain_text)
            for link in soup.findALL('a', {'class':'s-item__title s-item__title--has-tags'})
                href = link.get('href')
                print(href)
        page+=1

trade_spider(1)
Error:
File "<ipython-input-4-5d567ac26fb5>", line 11
for link in soup.findALL('a', {'class':'s-item__title s-item__title--has-tags'})
^
IndentationError: unexpected indent
There are a few things wrong with this code, but the error itself is simple: the for loop has an extra level of indentation and is missing the colon at the end. Remove one indent from the start of that line and add a colon. (The method is also spelled findAll, not findALL.) Here is the corrected code:
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.ebay.com/sch/i.html?_from=R40&_nkw=2%22+Butterfly+Valve&_sacat=0&_pgn=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text)
        for link in soup.findAll('a', {'class': 's-item__title s-item__title--has-tags'}):
            href = link.get('href')
            print(href)
        page += 1

trade_spider(1)
Edit: after I ran this code, I got this warning:
main.py:10: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html5lib"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file main.py. To get rid of this warning, pass the additional argument 'features="html5lib"' to the BeautifulSoup constructor.
So here's the corrected code, passing the parser explicitly and using find_all, the preferred bs4 spelling:
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.ebay.com/sch/i.html?_from=R40&_nkw=2%22+Butterfly+Valve&_sacat=0&_pgn=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, features="html5lib")
        for link in soup.find_all('a', {'class': 's-item__title s-item__title--has-tags'}):
            href = link.get('href')
            print(href)
        page += 1

trade_spider(1)
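One note for Colab: html5lib is a separate package from beautifulsoup4, so this assumes it is available in your runtime. If it isn't, install it first in a cell:

!pip install html5lib

Alternatively, pass "html.parser" instead, which uses the parser built into the standard library and needs no extra install.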
Related
I have been following this Python tutorial for a while, and I made a web crawler similar to the one in the video.
import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = link.get('href')
            title = link.string
            print(href)
        page += 1

spider(1)
And this is the output that the program gives:
PS D:\development> & C:/Users/hirusha/AppData/Local/Programs/Python/Python38/python.exe "d:/development/Python/TheNewBoston/Python/one/web scrawler.py"
PS D:\development>
What can I do?
Before this, I had a warning. The code was:
soup = BeautifulSoup(plain_text)
I changed this to:
soup = BeautifulSoup(plain_text, 'html.parser')
and the warning went away. The warning I got was:
d:/development/Python/TheNewBoston/Python/one/web scrawler.py:10: GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 10 of the file d:/development/Python/TheNewBoston/Python/one/web scrawler.py. To get rid of this warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plain_text)
Any help is appreciated, thank you!
There are no results as the class you are targeting is not present until the webpage is rendered, which doesn't happen with requests.
The data is dynamically retrieved from a script tag. You can regex out the JavaScript object holding the data and parse it with json to get that info.
The warning you show was due to a parser not being specified originally, which you rectified.
import re, json, requests
import pandas as pd

r = requests.get('https://www.aliexpress.com/category/7/computer-office.html?trafficChannel=main&catName=computer-office&CatId=7&ltype=wholesale&SortType=default&g=n&page=1')
# the product data is embedded in a script tag as window.runParams;
# extract the JSON object literal and parse it
data = json.loads(re.search(r'window\.runParams = (\{".*?\});', r.text, re.S).group(1))
# build a table of (title, url) pairs
df = pd.DataFrame([(item['title'], 'https:' + item['productDetailUrl']) for item in data['items']])
print(df)
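One caveat: if the page layout changes or AliExpress serves an interstitial page, re.search returns None and .group(1) raises an AttributeError, so guarding the match is a sensible addition (the same parsing step, sketched with the guard):

match = re.search(r'window\.runParams = (\{".*?\});', r.text, re.S)
if match:
    data = json.loads(match.group(1))
else:
    print('runParams not found; the page layout may have changed')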
I am trying to make a web crawler using Python (3.8). I think I'm mostly done, but I'm getting this error. Can anybody help me? Thanks in advance.
Python code:
import requests
from bs4 import BeautifulSoup

def aliexpress_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)
        plaintext = sourcecode.text
        soup = BeautifulSoup(plaintext)
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title = link.string
            print(href)
            print(title)
        page += 1

aliexpress_spider(1)
Error message:
GuessedAtParserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 11 of the file C:/Users/moham/PycharmProjects/moh/test.py. To get rid of this warning, pass the additional argument 'features="html.parser"' to the BeautifulSoup constructor.
soup = BeautifulSoup(plaintext)
Pass the parser explicitly, as the warning suggests:
import requests
from bs4 import BeautifulSoup

def aliexpress_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.aliexpress.com/af/ps4.html?trafficChannel=af&d=y&CatId=0&SearchText=ps4&ltype=affiliate&SortType=default&page=" + str(page)
        sourcecode = requests.get(url)
        soup = BeautifulSoup(sourcecode.text, "html.parser")
        for link in soup.findAll('a', {'class': 'item-title'}):
            href = "https://www.aliexpress.com" + link.get("href")
            title = link.string
            print(href)
            print(title)
        print(soup.title)
        page += 1

aliexpress_spider(1)
I'm trying to get all Instagram posts by a specific user in Python. Below is my code:
import requests
from bs4 import BeautifulSoup

def get_images(user):
    url = "https://www.instagram.com/" + str(user)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('instagramuser')
However, I'm getting the error:
UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 14 of the file C:/Users/Bedri/PycharmProjects/untitled1/main.py. To get rid of this warning, change code that looks like this:
BeautifulSoup([your markup])
to this: BeautifulSoup([your markup], "html.parser") markup_type=markup_type))
So my question is: what am I doing wrong?
You should pass a parser to BeautifulSoup; it's not an error, just a warning.
soup = BeautifulSoup(plain_text, "html.parser")
I would recommend using lxml instead of html.parser:
soup = BeautifulSoup(plain_text, 'lxml')
Also, instead of requests.get, use urlopen; BeautifulSoup accepts an open file-like object directly, so you can fetch and parse in one line:
from urllib import request
from bs4 import BeautifulSoup

def get_images(user):
    soup = BeautifulSoup(request.urlopen("https://www.instagram.com/" + str(user)), 'lxml')
    for image in soup.findAll('img'):
        href = image.get('src')
        print(href)

get_images('user')
This is my first Python project, which I pretty much wrote by following YouTube videos. Although not well versed, I think I have the basics of coding down.
# import the module that lets us make requests over the internet
import requests
# BeautifulSoup lets us pull data out of the pages we crawl
from bs4 import BeautifulSoup

# loop that changes the url every time it is executed
def creator_spider(max_pages):
    page = 0
    while page < max_pages:
        url = 'https://www.patreon.com/sitemap/campaigns/' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': ''}):
            href = "https://www.patreon.com" + link.get('href')
            #title = link.string
            print(href)
            #print(title)
            get_single_item_data(href)
        page = page + 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    print(soup)
    for item_name in soup.findAll('h6'):
        print(item_name.string)
From each page I crawl, I want the code to get this highlighted information: http://imgur.com/a/e59S9,
whose source code is here: http://imgur.com/a/8qv7k.
What I reckon is that I should change the attributes of soup.findAll() in the get_single_item_data() function, but all my attempts have been futile. Any help on this is very much appreciated.
From the bs4 docs:
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
It’s very useful to search for a tag that has a certain CSS class, but the name of the CSS attribute, “class”, is a reserved word in Python. Using class as a keyword argument will give you a syntax error. As of Beautiful Soup 4.1.2, you can search by CSS class using the keyword argument class_:
soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
However, after a closer look at the code in the picture you mentioned, this approach will not get what you want. In the page source I see data-react-id attributes: the DOM is built by ReactJS, and requests.get(url) will not execute JS on your end. Disable JS in your browser to see what requests.get(url) actually returns.
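You can also check this from Python. A minimal sketch (assuming the sitemap URL from the question): count the h6 tags in the raw HTML that requests receives. If React creates them client-side, the count will be zero even though the rendered page shows them.

import requests
from bs4 import BeautifulSoup

# fetch the raw HTML; requests does not execute JavaScript
r = requests.get('https://www.patreon.com/sitemap/campaigns/0')
soup = BeautifulSoup(r.text, 'html.parser')

# h6 elements created client-side by React will not appear here
print(len(soup.findAll('h6')))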
Best regards
New to Python, thought I'd try to make a web crawler as a first project. I found Beautiful Soup as the solution. All is well, except that the ONE page I want to crawl yields no results :(
Here is the code:
import requests
from bs4 import BeautifulSoup
from mechanize import Browser

def crawl_list(max_pages):
    mech = Browser()
    place = 1
    while place <= max_pages:
        url = "http://www.crummy.com/software/BeautifulSoup/bs4/doc/"
        page = mech.open(url)
        html = page.read()
        soup = BeautifulSoup(html)
        for link in soup.findAll('a'):
            href = link.get('href')
            print(href)
        place += 1

crawl_list(1)
This code works wonders, and I get the whole list of links. But as soon as I put http://diseasesdatabase.com/disease_index_a.asp as the value of 'url', no dice. Perhaps it has to do with the .asp? Can someone please solve this mystery?
I'm getting this as an error message:
mechanize._response.httperror_seek_wrapper: HTTP Error 410: Gone
Thanks in advance.
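One thing worth trying, offered as a guess rather than a confirmed diagnosis: some servers send error responses such as 410 to clients that don't look like a browser, and mechanize also fetches and obeys robots.txt by default. Both behaviors can be adjusted; a minimal sketch:

from mechanize import Browser
from bs4 import BeautifulSoup

mech = Browser()
# skip robots.txt handling (on by default in mechanize)
mech.set_handle_robots(False)
# present a browser-like User-Agent instead of mechanize's default
mech.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)')]

page = mech.open('http://diseasesdatabase.com/disease_index_a.asp')
soup = BeautifulSoup(page.read(), 'html.parser')
for link in soup.findAll('a'):
    print(link.get('href'))

If the server still returns 410, the page may simply have been removed.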