I need to get the URLs of all the subpages within one Google Site in editor mode. I have a subpage for each Form (1 to 6 Upper) of every class at school, and I want to automate future changes with Python. I need to be able to access each page and upload photos to the subpages under it, but for that I first have to reach the subpage itself.
Basically, the web structure goes like this:
EVERYTHING -> CLASSES -> SUBJECTS
I have tried using Selenium for automation, but that didn't work out: I cannot log in with Google once the browser enters automation mode, because it detects that Selenium is active. I also tried a program that simulates mouse movement to actually click on the subpages, but it was far too complex, and after several unsuccessful attempts I gave up.
I need ideas on how to access each subpage and retrieve its URL. I would appreciate it if someone could help, because I am really stuck and cannot hope to update the entire site manually on a weekly basis.
If someone could show me code that performs this task, I would appreciate it more than words can express. No matter what, thanks very much!
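One way to sidestep the automated-login problem entirely: the *published* (viewer) version of a Google Site needs no login, and its navigation menu links to every subpage, so you can crawl it with the standard library alone. A minimal sketch; the sample HTML and the `/view/myschool` URLs below are hypothetical stand-ins for your site:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def collect_links(html, base_url):
    parser = LinkCollector(base_url)
    parser.feed(html)
    return parser.links

# Hypothetical snippet of a published Google Sites nav menu:
sample = ('<nav><a href="/view/myschool/form-1">Form 1</a>'
          '<a href="/view/myschool/form-2">Form 2</a></nav>')
print(collect_links(sample, "https://sites.google.com"))
# -> ['https://sites.google.com/view/myschool/form-1',
#     'https://sites.google.com/view/myschool/form-2']
```

To run it against the real site you'd fetch the page with `urllib.request.urlopen(...)`, feed the decoded HTML to `collect_links`, and keep only links that start with your site's prefix. Uploading photos would still need the Google APIs or the editor, but at least the URL inventory is automated.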
Using Selenium to try and automate a bit of data entry with Salesforce. I have gotten my script to load a webpage, allow me to login, and click an "edit" button.
My next step is to enter data into a field. However, I keep getting an error about the field not being found. I've tried to identify it by XPATH, NAME, and ID and continue to get the error. For reference, my script works with a simple webpage like Google. I have a feeling that clicking the edit button in Salesforce opens either another window or frame (sorry if I'm using the wrong terminology). Things I've tried:
Looking for other frames (can't seem to find any in the HTML)
Having my script wait until the element is present (doesn't seem to work)
Any other options? Thank you!
Salesforce's Lightning Experience (the new white-blue UI) is built with web components that hide their internal implementation details. You'd need to read up a bit about the "shadow DOM"; the page is not a "happy soup" of HTML and JS all chucked into the top page's HTML. This means CSS is scoped to that one component, and there's no risk of it spilling over, or of overwriting another page area's JS function if you both declare a function with the same name, but it also means it's much harder to get at an element's internals.
You'll have to read up on how Selenium deals with the shadow DOM. Some companies claim they have working Lightning UI automated tests; I've heard good things about Provar, but haven't used it myself.
For custom UI components a Salesforce developer has the option to use "light DOM"; for standard UI you'll struggle a bit. If you're looking for some automation without fighting Lightning Experience (especially since, with 3 releases a year, Salesforce sometimes changes the structure of the generated HTML, breaking old tests), you could consider switching over to the Classic UI for the test. It'll be more accessible to Selenium; it won't be exactly what the user does, but server-side errors like required fields and validation rules should fire all the same.
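Once you do locate the host component, getting past its shadow root in Selenium looks roughly like this. A hedged sketch, not Salesforce-specific: `driver` is assumed to be your existing WebDriver, and the `lightning-input`/`input` selectors are hypothetical placeholders for whatever the inspector shows on your page:

```python
# JavaScript fallback that also works on older Selenium versions:
GET_SHADOW_ROOT = "return arguments[0].shadowRoot"

def find_in_shadow(driver, host, css):
    """Return the element matching `css` inside `host`'s shadow root."""
    shadow = driver.execute_script(GET_SHADOW_ROOT, host)
    return shadow.find_element("css selector", css)

# With Selenium 4 you can skip the JS and use the shadow_root property:
#   host = driver.find_element("css selector", "lightning-input")
#   field = host.shadow_root.find_element("css selector", "input")
#   field.send_keys("some value")
```

Nested components mean nested shadow roots, so you may have to apply this once per level down to the actual `<input>`.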
I have an idea for a program that I would like to try to write. Basically, I'm looking to check whether I'm accessing a specific website; if I am, the program immediately closes that site.
I want to block a few websites from myself for times when I need to focus on school or work. Say I try to check my Facebook, I want it to close itself no matter how many times I try.
Does anyone know a way to check if a specific website is being opened?
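A simpler alternative to watching the browser and killing pages: point the blocked domains at localhost in the operating system's hosts file, so the browser can't reach them at all no matter how many times you try. Below is a sketch of the text manipulation only, under the assumption that you handle the file path (`/etc/hosts` on Linux/macOS, `C:\Windows\System32\drivers\etc\hosts` on Windows) and the admin rights yourself:

```python
REDIRECT_IP = "127.0.0.1"

def add_blocks(hosts_text, domains):
    """Return hosts-file text with each domain pointed at localhost.
    Domains that are already present are left alone."""
    lines = hosts_text.rstrip("\n").split("\n") if hosts_text.strip() else []
    existing = set()
    for line in lines:
        if line.strip().startswith("#"):
            continue  # skip comment lines
        existing.update(line.split()[1:])  # hostnames after the IP
    for domain in domains:
        if domain not in existing:
            lines.append(f"{REDIRECT_IP} {domain}")
    return "\n".join(lines) + "\n"

# Hypothetical usage (needs admin rights to actually write the file):
#   path = "/etc/hosts"
#   text = open(path).read()
#   open(path, "w").write(add_blocks(text, ["facebook.com", "www.facebook.com"]))
```

Remember to block both the bare domain and the `www.` variant, and to undo the edit (or keep a backup of the file) when focus time is over.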
I'd like to ask somebody with experience with headless browsers and Python whether it's possible to extract the box showing the distance from the closest lightning strike on the webpage below. Until now I was using Python's bs4, but since everything here is driven by jQuery, simply downloading the page doesn't work. I found PhantomJS, but I wasn't able to extract it with that either, so I'm not sure it's possible. Thanks for any hints.
https://lxapp.weatherbug.net/v2/lxapp_impl.html?lat=49.13688&lon=16.56522&v=1.2.0
This isn't really a Linux question; it's a StackOverflow question, so I won't go into too much detail.
The thing you want to do can be easily done with Selenium. Selenium has both a headless mode and a "heady" mode (where you can watch it open your browser and click on things). The DOM query API is a bit less extensive than bs4's, but it does have nice visual query (location on screen) functions. So you would write a Python script that initializes Selenium, goes to your website, and interacts with it. You may need to do some image recognition on screenshots at some point; it may be as simple as searching for a certain query image on the screen, or something much more complicated.
You'd have to go through the Selenium tutorials first to see how it works, which would take you 1-2 days. Then figure out what Selenium stuff you can use to do what you want, that depends on luck and whether what you want happens to be easy or hard for that particular website.
Instead of using Selenium, though, I recommend trying to reverse engineer the API. For example, the page you linked to hits https://cmn-lx.pulse.weatherbug.net/data/lightning/v1/spark with parameters like:
_
callback
isGpsLocation
location
locationtype
safetyMessage
shortMessage
units
verbose
authid
timestamp
hash
You can figure out by trial and error which ones you need and what to put in them. You can capture requests from your browser and then read them yourself. Then construct appropriate requests from a Python program and hit their API. It would save you from having to deal with a Web UI designed for humans.
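To illustrate, here is how you might assemble such a request with only the standard library. The parameter values below are placeholders, not the real required values; which parameters actually matter, and how `authid`/`hash` are generated, has to come from capturing your browser's real requests:

```python
import time
from urllib.parse import urlencode

# Endpoint observed in the browser's network tab:
BASE = "https://cmn-lx.pulse.weatherbug.net/data/lightning/v1/spark"

# Placeholder values only -- inspect a captured request to see what
# really goes in each parameter (especially authid/hash, which look
# like some kind of signing scheme).
params = {
    "location": "49.13688,16.56522",
    "units": "metric",
    "verbose": "true",
    "timestamp": int(time.time() * 1000),
}

url = BASE + "?" + urlencode(params)
print(url)
# With the right parameters you'd then fetch it, e.g.:
#   import urllib.request
#   body = urllib.request.urlopen(url).read().decode()
```

If the response comes back as JSONP (note the `callback` parameter), strip the `callback(...)` wrapper before handing the rest to `json.loads`.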
I am trying to get started with webcrawling.
My main struggle is that I need a visual interface linked to Python that allows me to see what is happening as I crawl the webpage. The main idea is that after I load the URL, I have to press an X to be redirected to a new page from which I want to extract some data. However, using an inspector, I am having a hard time finding the actual redirecting link.
link:https://shop.axs.co.uk/Lw%2fYCwAAAAA6dpvSAAAAAABB%2fv%2f%2f%2fwD%2f%2f%2f%2f%2fBXRoZW8yAP%2f%2f%2f%2f%2f%2f%2f%2f%2f%2f
PS: The main reason is that I want to buy some concert tickets to go see a band my dad loves, but tickets are currently sold out. Sometimes people resell theirs, and I want to detect when tickets become available on the second page and give myself a notification, so that in the visual interface I am using I can proceed to buy them.
I know I am asking for a lot, but I really want to get me and my dad to the concert.
Thank you in advance kind stranger.
To begin with, you need to use Selenium, because interacting with JavaScript requires something more advanced than a plain scraper.
Here's a simple tutorial:
https://realpython.com/modern-web-automation-with-python-and-selenium/
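Whichever tool ends up driving the browser, the watch-and-notify loop around it is plain Python. A sketch where `tickets_available` is a stub you would replace with a Selenium check of that second page (and `notify` with whatever alert you prefer):

```python
import time

def watch(check, notify, interval=60, max_polls=None):
    """Call `check()` every `interval` seconds; fire `notify()` once it
    returns True. `max_polls` caps the loop (None = run forever)."""
    polls = 0
    while max_polls is None or polls < max_polls:
        if check():
            notify()
            return True
        polls += 1
        time.sleep(interval)
    return False

def tickets_available():
    # Stub: replace with a Selenium check that loads the AXS page,
    # clicks through the X, and looks for a resale-listing element.
    return False

# Hypothetical usage, polling every 5 minutes:
#   watch(tickets_available, lambda: print("Tickets up! Go buy them."),
#         interval=300)
```

Keep the interval generous; hammering a ticketing site too often is a good way to get your IP blocked.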
I am looking for resources or a guide so that I can write Python code to fill my ~2k online forms automatically. Sorry, I don't have a script to share; many resources show Python code that goes to a form's URL and fills it in, but in my case it is a pop-up form, so it doesn't really have a URL of its own.
Please be kind, I am new to Python.
Is there a way to imitate clicks in the browser window and fill in new values in the form?
You can imitate clicks in the browser using Selenium: https://realpython.com/modern-web-automation-with-python-and-selenium/. There are plenty of tutorials on how to do that.
Other tools would be:
https://www.cypress.io/
http://wwwsearch.sourceforge.net/mechanize/
If you don't want to write code, there's a browser extension: https://www.seleniumhq.org/projects/ide/
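To give a flavor of how this usually looks with Selenium: keep the ~2k records in a CSV, map each column to the pop-up's field selectors, and loop. Everything below is a hedged sketch; the selectors and column names are hypothetical, and `driver` is assumed to be a Selenium WebDriver you have already pointed at the page with the pop-up open:

```python
import csv
import io

# Hypothetical mapping from CSV column names to the pop-up's CSS
# selectors -- find the real selectors with the browser's inspector.
FIELD_MAP = {
    "name": "input#applicant-name",
    "email": "input#applicant-email",
}

def load_records(csv_text):
    """Read the form records from CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def fill_form(driver, record, field_map=FIELD_MAP):
    """Type each record value into its mapped field via Selenium."""
    for column, selector in field_map.items():
        field = driver.find_element("css selector", selector)
        field.clear()
        field.send_keys(record[column])

# Hypothetical usage:
#   for record in load_records(open("forms.csv").read()):
#       ... open the pop-up with driver, then: fill_form(driver, record)
#       ... click the submit button, wait, repeat.
```

Because the pop-up has no URL of its own, the per-record step is clicking whatever button opens it (`driver.find_element(...).click()`) and waiting for the fields to appear before calling `fill_form`.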