How to automatically log in using Python

I want to use Python to automatically open a website, log in to it, and maybe press some buttons or something like that (for example, joining an online class automatically).
I also don't want to use Selenium or anything like that (I don't want to use web drivers).
And if you are going to suggest requests, please tell me how I should do it.
Thanks a lot for your help.

You cannot use requests, BeautifulSoup, or urllib for clicking purposes; those libraries only send HTTP requests and parse the responses, they don't drive a browser.
I suggest you use Selenium. It is easy to use, and here is the documentation:
https://www.selenium.dev/documentation/en/
You can download ChromeDriver from here (if you are using Chrome):
https://chromedriver.chromium.org/downloads
If you want a detailed explanation of how to install the driver and start writing some scripts, I found a YouTube video which explains it all:
https://www.youtube.com/watch?v=8iAqUVvytJk&ab_channel=TheAmericanDeveloper
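That said, for a plain form-based login with no JavaScript involved, requests on its own can often do it. A minimal sketch, assuming a hypothetical site whose login form POSTs username and password fields to a /login endpoint (check the real field names and URL in your browser's developer tools):
import requests

# Hypothetical URL and field names -- inspect the real login form to find yours
LOGIN_URL = 'https://example.com/login'
payload = {
    'username': 'my_user',
    'password': 'my_pass',
}

with requests.Session() as session:      # a Session keeps cookies between requests
    resp = session.post(LOGIN_URL, data=payload)
    resp.raise_for_status()
    # The session now carries the auth cookie, so further pages are fetched as the logged-in user
    page = session.get('https://example.com/dashboard')
    print(page.status_code)
This only works when the login is a simple form submission; sites that log in through JavaScript usually need a real browser.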

Related

How to play a Youtube video in python

What modules should I use to make a Python script where Python gets the name of a video, opens up a Chrome window, opens YouTube and plays the first result? I've tried the YouTube Data API but it's not for that kind of thing.
You can use the subprocess module to call a browser and open the URL you request. With this minimal code you can ask for a website and open it in Firefox, but you can use it as bootstrap code for your application using Chrome.
import subprocess
url = input('What site you would to go: ')
subprocess.Popen(['firefox',url])
Using this template, you can just change the code to ask for a YouTube link; if you have Firefox installed you can run it as-is.
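Alternatively, a rough sketch with only the standard library's webbrowser module, which opens YouTube's search results for the entered name in the default browser (it doesn't auto-play the first hit):
import urllib.parse
import webbrowser

video_name = input('Which video do you want to watch? ')
# Build a normal YouTube search-results URL and open it in the default browser
query = urllib.parse.quote_plus(video_name)
webbrowser.open('https://www.youtube.com/results?search_query=' + query)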
from selenium import webdriver
# You could also use Chrome, I just default to Firefox
driver = webdriver.Firefox()
driver.get('URL within some quotes')
# And if you want full-screen, finish it off with
driver.maximize_window()
And boom, that's a really quick way to open a site. You need to pip install selenium, of course, but this is my preferred way. Just wanted to offer an additional option.

How to get url from any browser like chrome, opera, etc to variable in Python automatically?

I'm a beginner in Python, so what is a good way to get a URL from any browser like Chrome, Opera, etc. into a variable in Python? Thanks.
To fetch a URL you can do something like this:
import urllib2
response = urllib2.urlopen('http://domainname.com/')
html = response.read()
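Note that urllib2 is Python 2 only; on Python 3 the same fetch would use urllib.request, roughly:
import urllib.request

response = urllib.request.urlopen('http://domainname.com/')
html = response.read()   # bytes; call .decode() if you need a str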
This can work: just grab the URL and stick it in a variable.
variable_name = "https://www.url.com"
To answer this, you need to understand what context you are trying to grab the URL in for your Python app.
Are you making a desktop app that runs in the background and tracks browser HTTP requests for different URLs? Windows (assuming you use it) already has something that does this. Open a command prompt and type in ipconfig /displaydns > dnslist.txt -- voilà! :)
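For that first case, a rough Python sketch (assuming Windows, with ipconfig on the PATH) that captures the same DNS dump into a variable instead of a file:
import subprocess

# Equivalent of `ipconfig /displaydns`, captured as text
result = subprocess.run(['ipconfig', '/displaydns'],
                        capture_output=True, text=True, check=True)
dns_dump = result.stdout
print(dns_dump[:500])    # peek at the first part of the cache dump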
Are you trying to run Python code inside your web browser to do this? Browsers don't normally support Python, but if you wanted you could write a plugin and ask your users to install it. You can read more about that here: Chrome extension in python? In this case I would strongly recommend you just use JavaScript, however. The plugin just converts your Python into JS anyway, since browsers only execute JavaScript by default...
Are you trying to make a server-side app? In this case you would always know what domain is being called, because it would be the domain of your own website! Your web server would always know the exact URL that was requested in this case.
Which is it for you?

How do I input information into a website with python?

I have this python code, which accesses a website using the module webbrowser:
import webbrowser
webbrowser.open('kahoot.it')
How could I input information into a text box on this website?
I suggest you use Selenium for this.
Here is an example code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
# driver = webdriver.Firefox() # Use this if you prefer Firefox.
driver = webdriver.Chrome()
driver.get('http://www.google.com/')
search_input = driver.find_elements_by_css_selector('input.gLFyf.gsfi')[0]
search_input.send_keys('some search string' + Keys.RETURN)
You can use Selenium better if you know HTML and CSS well. Knowing Javascript/JQuery may help too.
You need the specific webdriver to run it properly:
GeckoDriver (Firefox)
Chrome
There are other webdrivers available, but one of the previous should be enough for you.
On Windows, you should have the executable in the same folder as your code. On Ubuntu, you should copy the webdriver file to /usr/local/bin/
You can use Selenium not only to input information but also for a lot of other tasks.
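Note that newer Selenium releases (4.x) removed the find_elements_by_* helpers, so the snippet above needs the By locator style on current versions. A rough equivalent, reusing the same (fragile) Google CSS classes from the example:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('http://www.google.com/')
# Same selector as the original example; Google's class names change often
search_input = driver.find_elements(By.CSS_SELECTOR, 'input.gLFyf.gsfi')[0]
search_input.send_keys('some search string' + Keys.RETURN)
Recent Selenium versions (4.6+) can also download a matching driver automatically, so the manual driver setup described above may not be necessary.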
I don't think that's doable with the webbrowser module; I suggest you take a look at Selenium:
How to use Selenium with Python?
Depending on how complex (interactive, reliant on scripts, ...) your activity is, you can use requests or, as others have suggested, selenium.
Requests allows you to send and get basic data from websites; you would probably use this when automatically submitting an order form, querying an API, checking if a page has been updated, ...
Selenium gives you programmatic control of a "normal" browser; this seems better for your specific use case.
The webbrowser module is actually only (more or less) able to open a browser. You can use this if you want to open a link from inside your application.

Using Mechanize for python, need to be able to right click

My script logs in to my account and navigates the links it needs to, but I need to download an image. This seems easy enough to do using urlretrieve. The problem is that the src attribute for the image contains a link which points to the page that initiates a download prompt, so my only foreseeable option is to right-click and select 'Save As'. I'm using Mechanize, and from what I can tell Mechanize doesn't have this functionality. My question is: should I switch to something like Selenium?
Mechanize, last I checked, was pretty poorly maintained and documented. Selenium has a much more active community.
That being said: why do you need mechanize to do this? Why not just use urllib?
I would try to watch Chrome's network tab, and try to imitate the final request to get the image. If it turned out to be too difficult, then I would use selenium as you suggested.
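As a rough sketch of imitating that final request, assuming you have found the image's real URL in the network tab and are authenticated through a requests session (the URL and login step are placeholders):
import requests

IMAGE_URL = 'https://example.com/path/to/real-image.jpg'   # the URL seen in the network tab

with requests.Session() as session:
    # ... log in first with session.post(...) so the session carries your cookies ...
    resp = session.get(IMAGE_URL)
    resp.raise_for_status()
    with open('image.jpg', 'wb') as f:
        f.write(resp.content)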

Advanced screen-scraping using curl

I need to create a script that will log into an authenticated page and download a pdf.
However, the PDF that I need to download is not at a URL; it is generated upon clicking a specific input button on the page. When I check the HTML source, it only gives me the URL of the button graphic, some obscure name for the button input, and action=".".
In addition, both the URL where the button lives and the form input name are obscured, for example:
url = /WebObjects/MyStore.woa/wo/5.2.0.5.7.3
input name = 0.0.5.7.1.1.11.19.1.13.13.1.1
How would I log into the page, 'click' that button, and download the pdf file within a script?
Maybe the Mechanize module can help.
I think the URL may be generated using JavaScript when the button is clicked. So, to run JavaScript code from a Python script, take a look at SpiderMonkey.
Try mechanize or twill. HttpFox or Firebug can help you build your queries. Remember you can also pickle cookies from the browser and use them later with Python libraries. If the code is generated by JavaScript, it could be possible to 'reverse engineer' it. If not, you can run a JavaScript interpreter, or use Selenium or Windmill to script a real browser.
You could observe what requests are made when you click the button (using Firebug in Firefox or Developer Tools in Chrome). You may then be able to request the PDF directly.
It's difficult to help without seeing the page in question.
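As a sketch of what requesting it directly could look like with requests, assuming the form simply POSTs the obscure input name back to the WebObjects URL from the question (the host, login step, and field value are placeholders you would copy from the observed request):
import requests

BASE = 'https://example.com'                               # placeholder host
FORM_URL = BASE + '/WebObjects/MyStore.woa/wo/5.2.0.5.7.3'

with requests.Session() as session:
    # ... log in first with session.post(...) so the session is authenticated ...
    data = {'0.0.5.7.1.1.11.19.1.13.13.1.1': ''}           # the button's input name from the page
    resp = session.post(FORM_URL, data=data)
    resp.raise_for_status()
    with open('document.pdf', 'wb') as f:
        f.write(resp.content)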
As Acorn said, you should try monitoring the actual requests and see if you can spot a pattern.
If not, then your best bet is to automate a fully-featured browser that can run JavaScript, so you'll mimic exactly what a regular user would do. Have a look at this page on the Python Wiki for ideas; check the section Python Wrappers around Web "Libraries" and Browser Technology.
