POST request and headers in selenium - python

I'm trying to add functionality to a headless web browser. I know there are easier ways, but I stumbled across seleniumrequests and it sparked my interest. I was wondering whether there is a way to add request headers as well as POST data as a payload. I've done some searching around and haven't had much luck. The following prints the HTML of the first website and takes screenshots for verification, and then my program just hangs on the POST request; it doesn't terminate or raise an exception or anything. Where am I going wrong?
Thanks!
#!/usr/bin/env python
from seleniumrequests import PhantomJS
from selenium import webdriver
#Setting user-agent
webdriver.DesiredCapabilities.PHANTOMJS['phantomjs.page.customHeaders.User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/37.0.2062.120 Chrome/37.0.2062.120 Safari/537.36'
browser = PhantomJS()
browser.get('http://www.google.com')
print browser.page_source
browser.save_screenshot('preSearch.png')
searchReq='https://www.google.com/complete/search?'
data={"q":"this is my search term"}
resp = browser.request('POST', str(searchReq), data=data)
print resp
browser.save_screenshot('postSearch.png')
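For comparison, a minimal sketch of the same POST made with the plain requests library (assuming the endpoint accepts form-encoded data; the explicit timeout turns a hang into an exception, which can help tell whether the endpoint or the PhantomJS bridge is at fault):
import requests

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/37.0.2062.120 Chrome/37.0.2062.120 Safari/537.36'}
data = {"q": "this is my search term"}

# timeout=10 makes a hang raise requests.exceptions.Timeout instead of blocking forever
resp = requests.post('https://www.google.com/complete/search?', data=data, headers=headers, timeout=10)
print(resp.status_code)
print(resp.text)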

Related

Retrieving an image gives 403 error while it works with browser

Hi, I'm trying to build a manga downloader app, and for that I'm scraping several sites. However, I have a problem once I get the image URL:
I can see the image in my browser (Chrome) and I can also download it, but I can't do the same using any popular scripting library.
Here is what I've tried:
String imgSrc = "https://cdn.mangaeden.com/mangasimg/aa/aa75d306397d1d11d07d66746dae78a36dc78672ae9e97a08cb7abb4.jpg";
Connection.Response resultImageResponse = Jsoup.connect(imgSrc)
        .userAgent("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21")
        .referrer("none")
        .execute();
// output here
OutputStreamWriter out = new OutputStreamWriter(new FileOutputStream(new java.io.File(String.valueOf(imgPath))));
out.write(resultImageResponse.body()); // resultImageResponse.body() is where the image's contents are.
out.close();
I've also tried this:
URL imgUrl = new URL(imgSrc);
Files.copy(imgUrl.openStream(), imgPath);
Lastly, since I was sure the link works, I tried to download the image using Python, but in this case I also get a 403 error:
import requests

url = "https://cdn.mangaeden.com/mangasimg/d0/d08f07d762acda8a1f004677ab2414b9766a616e20bd92de4e2e44f1.jpg"
res = requests.get(url)
Googling, I found Unable to get image url in Mangaeden API Angular 6, which seems really close to my problem; however, I don't understand whether I'm setting the referrer incorrectly or whether it doesn't work at all.
Do you have any tips?
Thank you!
How to fix?
Add some "headers" to your request to show that you might be a "browser", this will give you a 200 as response and you can save the file.
Note This will also work for postman, just overwrite the hidden user agent and you will get the image as response
Example (python)
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36'
}
url = "https://cdn.mangaeden.com/mangasimg/d0/d08f07d762acda8a1f004677ab2414b9766a616e20bd92de4e2e44f1.jpg"
res = requests.get(url, headers=headers)

with open("image.jpg", 'wb') as f:
    f.write(res.content)
Someone wrote the following answer but later deleted it, so I'm copying it here in case it's useful:
AFAIK, you can't download anything else apart from HTML documents using jsoup.
If you open up Developer Tools in your browser, you can get the exact request the browser has made. With Chrome, it's something like this. The minimal cURL request would in your case be:
curl 'https://cdn.mangaeden.com/mangasimg/aa/aa75d306397d1d11d07d66746dae78a36dc78672ae9e97a08cb7abb4.jpg' \
  -H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21' \
  --output image.jpg
You can refer to HedgeHog's answer for a sample Python solution; here's how to achieve the same in Java using the new HTTP Client:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.file.Paths;

public class ImageDownload {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://cdn.mangaeden.com/mangasimg/aa/aa75d306397d1d11d07d66746dae78a36dc78672ae9e97a08cb7abb4.jpg"))
                .header("user-agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21")
                .build();
        client.send(request, BodyHandlers.ofFile(Paths.get("image.jpg")));
    }
}
I adopted this solution in my Java code.
Also, one last bit: if the image downloads but you can't open it, the request probably returned a 503 error code; in this case you will just have to perform the request again. You can recognize broken images because the image reader will say something like
Not a JPEG file: starts with 0x3c 0x68
which is <h, i.e. an HTML error page saved instead of the image.
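To illustrate that last point, a minimal Python sketch of the retry-and-verify idea (the retry count and the JPEG magic-byte check are my own assumptions, not part of the original answers):
import requests

url = "https://cdn.mangaeden.com/mangasimg/d0/d08f07d762acda8a1f004677ab2414b9766a616e20bd92de4e2e44f1.jpg"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36'}

def download_image(url, headers, retries=3):
    # Retry on failures and verify the payload is really a JPEG:
    # JPEG files start with the bytes FF D8, while 0x3c is '<' (HTML).
    for _ in range(retries):
        res = requests.get(url, headers=headers)
        if res.status_code == 200 and res.content[:2] == b'\xff\xd8':
            return res.content
    raise RuntimeError("could not fetch a valid JPEG")

with open("image.jpg", "wb") as f:
    f.write(download_image(url, headers))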

How to bypass Mod_Security while scraping

I tried running this Python script using the BeautifulSoup and requests modules:
from bs4 import BeautifulSoup as bs
import requests

url = 'https://udemyfreecourses.org/'
headers = {'UserAgent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'}
soup = bs(requests.get(url, headers=headers).text, 'lxml')
But when I run this line:
print(soup.get_text())
it doesn't scrape the text data; instead, it returns this output:
Not Acceptable!Not Acceptable!An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.
I even used headers when requesting the webpage so it would look like a normal browser, but I'm still getting this message, which prevents me from accessing the real webpage.
Note: the webpage works perfectly in the browser directly, but it doesn't show much info when I try to scrape it.
Is there any way, other than the headers I used, to make a valid request to the website and bypass this security called Mod_Security?
Any help would be much appreciated. Thanks.
EDIT: the dash in "User-Agent" is essential.
Following this answer https://stackoverflow.com/a/61968635/8106583:
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:55.0) Gecko/20100101 Firefox/55.0',
}
Your User-Agent is the problem. This User-Agent works for me.
Also: your IP might be blocked by now :D
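Putting the fix together, a minimal sketch of the corrected request against the question's URL (any modern browser User-Agent string should do, as long as the header name contains the dash):
from bs4 import BeautifulSoup as bs
import requests

url = 'https://udemyfreecourses.org/'
# The header key is 'User-Agent' (with the dash), not 'UserAgent'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:55.0) Gecko/20100101 Firefox/55.0'}

soup = bs(requests.get(url, headers=headers).text, 'lxml')
print(soup.get_text())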

Why does Jupyter give me a ModSecurity error when I try to run Beautiful Soup?

I am currently trying to build a web scraping program to pull data from a real estate website using Beautiful Soup. I haven't gotten very far, but the code is as follows:
import requests
from bs4 import BeautifulSoup

r = requests.get("http://pyclass.com/real-estate/rock-springs-wy/LCWYROCKSPRINGS/")
c = r.content
soup = BeautifulSoup(c, "html.parser")
print(soup)
When I try to print the data to at least see if the program is working, I get an error message saying "Not Acceptable!An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security." How do I get the server to stop blocking my IP address? I've read about similar issues with other programs and tried clearing cookies, trying different browsers, etc., and nothing has fixed it.
This is happening because the webpage thinks you're a bot (and it's correct), so your request gets blocked.
To "bypass" this issue, try adding a user-agent to the headers parameter in the requests.get() method.
import requests
from bs4 import BeautifulSoup
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
}
url = "http://pyclass.com/real-estate/rock-springs-wy/LCWYROCKSPRINGS/"
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
print(soup.prettify())

How to use 'requests'?

I'm a Korean who just started learning Python.
First, I apologize for my English.
I learned how to use BeautifulSoup on YouTube, and on certain sites crawling was successful.
However, I found that crawling did not go well on certain sites, and learned through searching that I had to set a user-agent.
So I used requests to write code that sets the user-agent. Then I used the same code as before to read a particular class from the HTML, but it did not work.
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
url = 'https://store.leagueoflegends.co.kr/skins'
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')

for skin in soup.select(".item-name"):
    print(skin)
Here's my code. I have no idea what the problem is.
Please help me.
Your problem is that requests does not render JavaScript; it only gives you the "initial" source code of the page. What you should use instead is a package called Selenium. It lets you control your browser (Chrome, Firefox, etc.) from Python, so the website won't be able to tell the difference and you won't need to mess with headers and user-agents. There are plenty of videos on YouTube on how to use it; a rough sketch is below.
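A minimal sketch of that approach, assuming chromedriver is installed and that a fixed sleep is enough for the page's JavaScript to render:
from selenium import webdriver
from bs4 import BeautifulSoup
import time

driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get('https://store.leagueoflegends.co.kr/skins')
time.sleep(5)  # crude wait for the JavaScript to finish rendering; tune as needed

soup = BeautifulSoup(driver.page_source, 'html.parser')
for skin in soup.select(".item-name"):
    print(skin.get_text(strip=True))

driver.quit()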

python web crawler cannot get full page

I tried to run the following Python code:
import requests

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36'}
url = "https://search.bilibili.com/all?keyword=Steins;Gate0"
try:
    r = requests.get(url=url, headers=headers)
    r.encoding = 'utf-8'
    if r.status_code == 200:
        print(r.text)
except:
    print("This is the selection of Steins Gate")
I am a beginner at web crawling. This is a crawler written with requests in Python, but I cannot get the full page, and I think the problem is asynchronous page loading (or perhaps the website has some other strategy).
So the question is: how do I get the full page?
What you're dealing with is a well-known problem that is somewhat straightforward but complex to execute, because the content you want doesn't exist on the page without some sort of browser interaction.
Some recommendations:
Investigate headless browsers like headless Chrome and their use cases
Investigate Selenium, how to use it with Python and headless browsers (see the sketch after this list)
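For example, a minimal headless-Chrome sketch along those lines (assuming chromedriver is installed; the CSS selector used in the wait is a guess and would need to match the site's real markup):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("--headless")          # no visible browser window
driver = webdriver.Chrome(options=options)  # assumes chromedriver is on your PATH

driver.get("https://search.bilibili.com/all?keyword=Steins;Gate0")
# Wait until the asynchronously loaded results appear; "li.video-item" is a
# hypothetical selector, substitute whatever the real result items use.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "li.video-item"))
)
print(driver.page_source)  # now contains the JavaScript-rendered content
driver.quit()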
