Browser Automation? [closed] - python

I want to create a little program with the following features.
Use a proxy in the format proxy:port:username:password
Choose a proxy sequentially from a list
Open http://example.com
Fill in details, choosing data from data.txt (CSV)
Export cookie, username, password, email address --> cookie.txt
Delete cookies
Log into the associated email account and confirm the account by visiting the link sent to that email address.
Then cycle through step 1 again.
I read several similar questions on Stack Overflow.
I planned to use Selenium for this program, but a comment here, How to save and restore all cookies with Selenium RC?, says:
the get_cookie method doesn't provide the path, domain, and expiry date for each cookie, so it isn't possible to fully restore those parameters with create_cookie. Any other ideas?
And I won't be able to manipulate cookies using the method described here: http://hub.tutsplus.com/tutorials/how-to-build-a-python-bot-that-can-play-web-games--active-11117
I want to know the easiest way to tackle this problem. I plan to run a single-threaded application.

I don't know Selenium, but why not use mechanize and requests? Both are awesome.
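For the proxy and cookie-export steps, here is a minimal sketch with requests, assuming a proxy list in the proxy:port:username:password format from the question (the proxy line itself is made up; requests keeps the full cookie metadata, including domain, path, and expiry, on the session):

import requests

# A made-up example line from a proxy list in proxy:port:username:password format
line = "203.0.113.5:8080:myuser:mypass"
host, port, user, password = line.strip().split(":")

# requests takes proxies as http://user:pass@host:port
proxy_url = "http://%s:%s@%s:%s" % (user, password, host, port)
proxies = {"http": proxy_url, "https": proxy_url}

session = requests.Session()
session.get("http://example.com", proxies=proxies)

# Each cookie on the session keeps its domain, path, and expiry,
# so it can be written to cookie.txt and restored later
for cookie in session.cookies:
    print(cookie.name, cookie.value, cookie.domain, cookie.path, cookie.expires)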

Related

What is the best way to get the information from this website with Scrapy? [closed]

I am trying to scrape this website with Scrapy, and so far I have had to visit each link and extract the information from each one. I would like to know if there is an API for the site that I could use instead (I don't know how to find it).
I would also like to know how I can obtain the latitude and longitude. Currently a map is shown, but I do not know how to get the actual numbers.
I appreciate any suggestions.
The website may be loading the data dynamically using JavaScript. Open your browser's dev tools, look at the Network tab, and watch for any XHR calls which may be accessing an API. Then you can scrape from that directly.
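Once you spot such a call, you can replay it with requests; in this sketch the endpoint, parameters, and field names are all placeholders, since the real ones come from the Network tab:

import requests

# Placeholder endpoint copied from an XHR call in the Network tab
api_url = "https://example.com/api/listings"

response = requests.get(api_url, params={"page": 1})
data = response.json()

# Sites that draw a map usually feed it coordinates through the same API;
# look for fields like "lat"/"lng" in each record (names vary per site)
for item in data.get("results", []):
    print(item.get("title"), item.get("lat"), item.get("lng"))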

How to write in a text box with Python or HTML [closed]

So, I wanted to make something to automatically sign me into my Twitter account. How would I use Python, or HTML, to go to twitter.com/login, automatically put in my username and password, and then submit it?
You don't have to go through browser automation to access data on Twitter. What you really need is the Twitter API. There are multiple Python clients, like tweepy or python-twitter.
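For instance, a minimal tweepy sketch (the credentials are placeholders obtained from the Twitter developer portal):

import tweepy

# Placeholder credentials from the Twitter developer portal
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Read your timeline through the API instead of scraping the site
for tweet in api.home_timeline(count=5):
    print(tweet.text)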
What you want to do is called web automation. In Python, I recommend Selenium. It's pretty much all you need. The documentation is quite good and you can find examples online.
Note: use a passwd variable for your password! Keep it safe 👊
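A minimal Selenium sketch of the login flow; Twitter's form fields change often, so the name attributes below are assumptions to be checked against the live page:

import getpass
from selenium import webdriver
from selenium.webdriver.common.by import By

passwd = getpass.getpass("Password: ")  # keep the password out of the source

driver = webdriver.Firefox()
driver.get("https://twitter.com/login")

# Placeholder selectors; inspect the live login form for the real names
driver.find_element(By.NAME, "session[username_or_email]").send_keys("your_username")
password_box = driver.find_element(By.NAME, "session[password]")
password_box.send_keys(passwd)
password_box.submit()  # submits the enclosing login form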

Scraping PHP from popup [closed]

Is there a way to scrape data from a popup? I'd like to import data from the site tennisinsight.com.
For example, http://tennisinsight.com/match-preview/?matchid=191551201
This is a sample data extraction link. When clicking "Overview" there is a button labelled "Match Stats"; I'd like to be able to import that data from many links listed in a text or CSV file.
What's the best way to accomplish this? Is Scrapy able to do this? Is there software able to do this?
You want to open the network analyzer in your browser (e.g. in Web Developer in Firefox) to see what requests are sent when you click the "Match Stats" button, in order to replicate them using Python.
When I do it, a POST request is sent to http://tennisinsight.com/wp-admin/admin-ajax.php with action and matchID parameters.
You presumably already know the match ID (see the URL you posted above), so you just need to set up a POST request for each matchID you have.
import requests

r = requests.post('http://tennisinsight.com/wp-admin/admin-ajax.php',
                  data={'action': 'showMatchStats', 'matchID': '191551201'})
print(r.text)  # this is your content of interest
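To handle the "many links in a text or CSV file" part, the snippet above extends naturally; match_ids.txt here is a hypothetical file with one match ID per line:

# Hypothetical input file with one match ID per line
with open('match_ids.txt') as f:
    for match_id in (line.strip() for line in f if line.strip()):
        r = requests.post('http://tennisinsight.com/wp-admin/admin-ajax.php',
                          data={'action': 'showMatchStats', 'matchID': match_id})
        with open('stats_%s.html' % match_id, 'w') as out:
            out.write(r.text)  # save each match's stats for later parsing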

Should I create a new file to save data with Python? [closed]

It may seem like a stupid question, but I really can't find information about this on Google.
I am trying to develop a server-client application in Python, and I am searching for the correct way to save data on a computer.
When the client clicks the "Register" button, I want his computer to save the information so that he can log in automatically the next time he opens the program.
Should I create a new file on the computer, save the data to it, and then load and read it again later? I really don't know if this is the correct way.
There are different approaches to this problem. You could save the credentials/token/... to the local disk, but keep in mind that in some cases this might be considered a security risk. If you do so, you should probably store it under the user's home folder, at least to keep it from other (non-admin/root) users.
You could also store it encrypted with a "master password" (like Firefox does if you enable it).
Or you could connect to a third-party authentication server and store your information there. It all depends on the use case you are implementing, as well as the complexity required.
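A minimal sketch of the local-disk approach, assuming a JSON token file under the user's home folder (the file name and token format are made up):

import json
from pathlib import Path

TOKEN_FILE = Path.home() / ".myapp_token.json"  # hypothetical file name

def save_token(username, token):
    TOKEN_FILE.write_text(json.dumps({"username": username, "token": token}))
    TOKEN_FILE.chmod(0o600)  # readable only by the owning user

def load_token():
    # Returns the saved login, or None so the app shows the register screen
    if TOKEN_FILE.exists():
        return json.loads(TOKEN_FILE.read_text())
    return None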

Can I use urllib2 to play videos? [closed]

Can I use urllib2 to open a webpage which contains a video (like a Vimeo page), and will this visit be counted as a view?
In general, yes. A request done with urllib2 will be a normal HTTP request and as such will be recognized as a normal “visit” for the server you are connecting to. Depending on what additional headers you set, you can even make yourself look like a common browser, so they won’t be able to filter you out either.
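For example (Python 2's urllib2; the video URL is a placeholder, and on Python 3 you would use urllib.request instead):

import urllib2

# Placeholder URL; the User-Agent header makes the request look like a common browser
request = urllib2.Request(
    "https://vimeo.com/123456789",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:40.0) Gecko/20100101 Firefox/40.0"},
)
html = urllib2.urlopen(request).read()  # counts as a normal page visit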
As far as video counts go however, I’m pretty sure that simply visiting the site—without executing any code on it, and without actually playing the video—will not increase the view counter. In addition, these sites employ some systems to prevent abuse of the counter too. So if you have the hope to be able to spoof real views and increment the view counter by repeatedly visiting the page, then you will be out of luck.
As for actually playing—if you are interested in the content instead of the view counter—then yes, you can use Python to get access to the video. Of course Python won’t be able to play it, but you can download it instead. There are scripts like this one that already do this for you too.
