Fake a cookie to scrape a site in python

The site that I'm trying to scrape uses JS to create a cookie. What I was thinking was that I could create the cookie in Python myself and then use it to scrape the site. However, I don't know of any way to do that. Does anybody have any ideas?

Please see Python httplib2 - Handling Cookies in HTTP Form Posts for an example of adding a cookie to a request.
I often need to automate tasks in web-based applications. I like to do this at the protocol level by simulating a real user's interactions via HTTP. Python comes with two built-in modules for this: urllib (higher-level web interface) and httplib (lower-level HTTP interface).

If you want to do more involved browser emulation (including setting cookies), take a look at mechanize. Its simulation capabilities are almost complete (no JavaScript support, unfortunately); I've used it to build several scrapers with much success.
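As a sketch of the question's idea (create the cookie yourself, then send it with every request), here is what it could look like with the third-party requests library. The cookie name, value, and URLs below are placeholders; you would need to inspect the real cookie in your browser's dev tools and reproduce whatever the site's JavaScript actually computes:

```python
import requests

# Hand-craft the cookie the site's JS would normally set.
# "js_token" / "abc123" are placeholder values for illustration.
session = requests.Session()
session.cookies.set("js_token", "abc123", domain="example.com")

# Every request made through this session now carries the cookie.
request = requests.Request("GET", "https://example.com/protected")
prepared = session.prepare_request(request)
print(prepared.headers["Cookie"])  # js_token=abc123
# To actually fetch the page you would call session.get(...) instead.
```

The session's cookie jar takes care of attaching the cookie to every matching request, so you only set it once.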

Related

How to surf the web without cookies from code

I was trying to scrape some links from the web via a Google search.
Let's say my query is [games site:pastebin.com].
I tried this in both Python and Dart, but the result I got was that I needed to log in, and I don't want to use cookies.
So, is there any way to get the result of https://www.google.com/search?q=site%3Apastebin.com+games from code without cookies?
The Code I Tried:
Python 3.9.5
import requests
req = requests.get("https://www.google.com/search?q=games+site%3Apastebin.com")
That fully depends on the website you are trying to access. Some pages won't let you use certain features without cookies at all; some will. For what you are trying to achieve, I'd recommend using a search API that doesn't require cookies, since cookies are normally meant for regular users.
As far as I know, Google doesn't like it when you scrape their content using scripts.
As mentioned before, you can look for alternative search engines that don't require cookies.
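For what it's worth, the cookie-free request itself is easy to build; whether Google serves useful results to it is another matter. A minimal sketch with requests (no session, no cookie jar), using `prepare()` just to show the final URL before anything is sent:

```python
import requests

# A cookie-free search request. Google may still block or redirect
# script traffic, which is why a dedicated search API is more reliable.
req = requests.Request(
    "GET",
    "https://www.google.com/search",
    params={"q": "games site:pastebin.com"},
    headers={"User-Agent": "Mozilla/5.0"},  # example browser-like UA string
)
prepared = req.prepare()
print(prepared.url)  # the params are URL-encoded for us
# To actually send it: requests.Session().send(prepared)
```

Note that `requests.get(url, params=...)` does the same encoding in one step; the prepared-request form just makes it visible.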

What information do I need when scraping a website that requires logging in?

I want to access my business' database on some site and scrape it using Python (I'm using Requests and BS4, I can go further if needed). But I couldn't.
Could someone provide info and simple resources on how to scrape such sites?
I'm not talking about providing usernames and passwords. The site requires much more than this.
How do I know what info I am required to provide for my script besides the username and password (e.g., how do I know that I must provide, say, an auth token)?
How do I deal with the site when there are no HTTP URLs, but hrefs in the form of javascript:__doPostBack?
And in this regard, how do I get from the login page to the page I want (the one behind the aforementioned javascript:__doPostBack)?
Are the libraries I'm using enough, or do you recommend using (and, in my case, learning) something else?
Your help is greatly appreciated.
You didn't mention what you use for scraping, but since it sounds like a lot of the interaction on this site is based on client-side code, I'd suggest using a real browser to do the scraping, interacting with the site not through low-level HTTP requests but through client-side interaction (such as typing into elements or clicking buttons). This way, you don't need to worry about what form data to send or how to get the URLs of links yourself.
One recommended way of doing this is to use BeautifulSoup with Selenium / WebDriver. There are multiple resources on how to do this, for example: How can I parse a website using Selenium and Beautifulsoup in python?
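To make the suggestion concrete, here is a minimal sketch of the combination. The login URL, element IDs, and selectors are hypothetical (shown as comments, since they need a real browser and the real site); the BeautifulSoup half runs on a stand-in for `driver.page_source`:

```python
from bs4 import BeautifulSoup

# With a WebDriver installed, the browser half would look roughly like:
#   from selenium import webdriver
#   driver = webdriver.Chrome()
#   driver.get("https://example.com/login")              # hypothetical URL
#   driver.find_element("id", "username").send_keys("me")
#   driver.find_element("id", "password").send_keys("secret")
#   driver.find_element("id", "login-button").click()
#   html = driver.page_source   # the HTML *after* the client-side code ran
#   driver.quit()

# Stand-in for driver.page_source, so the parsing half is runnable:
html = '<div id="report"><a href="/item/1">Item 1</a><a href="/item/2">Item 2</a></div>'

soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.select("#report a")]
print(links)  # ['/item/1', '/item/2']
```

The key point is that Selenium hands BeautifulSoup the page as the browser sees it, javascript:__doPostBack navigation included, because clicking the link in the driver executes that JavaScript for you.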

Python: interrogate database over http

I want to do automatic searches on a database (in this example www.scopus.com) with a simple Python script, and I need somewhere to start. For example, I would like to run a search, get a list of links, open the links, and extract information from the opened pages. Where do I start?
Technically speaking, scopus.com is not "a database"; it's a web site that lets you search and consult a database. If you want to programmatically access their service, the obvious way is to use their API, which mostly involves sending HTTP requests and parsing the HTTP responses. You can do this with the standard library's modules, but you'll certainly save a lot of time using python-requests instead. And you'll certainly want to get some understanding of the HTTP protocol first...
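As a sketch of what "sending HTTP requests and parsing the responses" looks like in practice: the endpoint, header name, and JSON shape below are made up for illustration, not the real Scopus API (check Elsevier's developer documentation for that):

```python
import json

# The request itself would be something like:
#   import requests
#   resp = requests.get("https://api.example.com/search",   # hypothetical endpoint
#                       params={"query": "deep learning"},
#                       headers={"X-API-Key": "YOUR_KEY"},  # hypothetical auth header
#                       timeout=10)
#   payload = resp.json()

# Canned response body, so the parsing step is runnable offline:
payload = json.loads('{"results": [{"title": "Paper A", "doi": "10.1000/abc"}]}')

links = ["https://doi.org/" + r["doi"] for r in payload["results"]]
print(links)  # ['https://doi.org/10.1000/abc']
```

From there, "open the links and extract information" is just more of the same: request each link and parse what comes back.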

python open web page and get source code

We have developed a web-based application, with user login etc., and we developed a Python application that has to get some data from this page.
Is there any way for Python to communicate with the system's default browser?
Our main goal is to open a webpage with the system browser and get the HTML source code from it. We tried Python's webbrowser module and opened the web page successfully, but could not get the source code. We also tried urllib2, but in that case I think we would have to use the system default browser's cookies etc., and I don't want to do this, for security reasons.
https://pypi.python.org/pypi/selenium
You can try Selenium; it was made for testing, but nothing prevents you from using it for other purposes.
If your web site is navigable without JavaScript, then you could try Mechanize or zope.testbrowser. These tools offer a higher-level API than urllib2, letting you do things like follow links on pages and fill out HTML forms.
This can be helpful for navigating a site that uses cookie-based authentication with HTML forms for login, for example.
Have a look at the nltk module; it has some utilities for looking at web pages and getting text. There's also BeautifulSoup, which is a bit more elaborate. I'm currently using both to scrape web pages for a learning algorithm; they're pretty widely used modules, so you can find lots of hints out there :)

Different Twitter HTML structure for browsers and python web opener

I'm working on a script which downloads some data from Twitter profiles. I found out that the HTML structure is different in a web browser than for a Python "robot": when I open the page through Python's urllib2 and BeautifulSoup, I get different tag IDs and classes. Is there a way to get the same content as in a web browser?
I need it for resolving short URLs, because in a web browser the resolved URLs are stored in the link's title attribute.
Most websites adapt their response according to the User-Agent header on the request. If none is set, it is obvious that this is not a browser, but some sort of script. You'll probably want to set a User-Agent header that is somewhat similar to a "real" browser.
Lots of methods to do this are described here: Changing user agent on urllib2.urlopen and here: Fetch a Wikipedia article with Python
On an unrelated note, you might want to use Requests, which is a much better API than the standard urllib2.
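With the modern standard library (urllib.request, the Python 3 successor to urllib2), setting the header looks like this; the profile URL and the UA string are just example values:

```python
import urllib.request

# Attach a browser-like User-Agent so the server doesn't serve the
# stripped-down "robot" version of the page.
req = urllib.request.Request(
    "https://twitter.com/some_profile",  # hypothetical profile URL
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"},
)
print(req.get_header("User-agent"))
# html = urllib.request.urlopen(req).read()   # actually fetch the page
```

With requests, the equivalent is simply `requests.get(url, headers={"User-Agent": "..."})`.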
Don't screen-scrape for Twitter profile information. Use the API; your whole program will be much more robust. It's probably against their ToS to change your user agent and mess with stuff, too.
