I am trying to automate a native Android app using Robot Framework + Appium with AppiumLibrary, and I was able to successfully open the application. From there my struggle begins: I am not able to find any element on the screen through UI Automator Viewer, since the app I am testing uses a web-view context and it shows up as a single frame (no elements inside it are identified). I spoke to the dev team and they gave me some static HTML pages where I could see some element IDs for the app. I used those IDs, but whenever I run the test it throws an error saying the element doesn't match. The same app works with a Java + Appium TestNG framework. The only difference I can see between the two is that with the Java + Appium framework the complete HTML code is returned when we call the page-source method on the Android driver object, but in Robot it returns the same XML that is displayed in UI Automator Viewer (this XML doesn't contain any HTML source with element IDs, and Robot is searching for the IDs in this XML, hence it fails). I am totally confused and got stuck here. Can someone help me with this issue?
Switching to the webview context resolved this issue.
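For anyone hitting the same wall, here is a minimal sketch of that fix with the plain Appium Python client. The exact webview context name (e.g. `WEBVIEW_com.example.app`) differs per app, and the driver is passed in already connected:

```python
def switch_to_webview(driver):
    """Switch an Appium driver from the native context to the first
    WEBVIEW_* context, so that page_source returns the page's HTML
    instead of the UI Automator XML tree."""
    for context in driver.contexts:  # e.g. ['NATIVE_APP', 'WEBVIEW_com.example.app']
        if context.startswith("WEBVIEW"):
            driver.switch_to.context(context)
            return context
    raise RuntimeError("no WEBVIEW context found; is the web view loaded yet?")
```

In Robot Framework, AppiumLibrary exposes the same operations as the `Get Contexts` and `Switch To Context` keywords; once switched, the IDs from the HTML pages your dev team gave you should start matching.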
Normally, when we write test scripts, whether in Robot Framework or the Behat framework, we manually find the locator of each element we care about on each web page by using the Developer Tools of the browser we use. We can write the scripts because we know the flow, or the steps, of an individual testing scenario. However, I want to find an automated way to extract information about the relationships among the web pages of a website, without any manual input, and generate a script from it.
The purpose of this question is to figure out a way to automatically detect the relationship between the pages of a website, in order to derive test steps and, further, to build an automated test-script generator (producing Robot Framework or Behat scripts) for testing a website built with the Laravel framework, which typically contains many interrelated pages.
Do you have any ideas about this? Please leave your comments below.
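Not a full answer, but the page-to-page relationships can be extracted mechanically by crawling the site and recording which pages link to which; that link graph is the raw material for generating one test step per navigation edge. A stdlib-only sketch (the page contents here are illustrative; a real crawler would fetch them over HTTP):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href targets of <a> tags from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def page_graph(pages):
    """Build {page: [pages it links to]} from {page: html}.
    Only links pointing at known pages are kept, so external
    links don't create dangling edges."""
    graph = {}
    for url, html in pages.items():
        parser = LinkCollector()
        parser.feed(html)
        graph[url] = [link for link in parser.links if link in pages]
    return graph
```

Each edge in the resulting graph could then be emitted as a navigation step (e.g. a Robot Framework `Click Link` followed by a location check), though generating meaningful assertions for each page would still need some manual input.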
I am trying to create a simple Python script that will go through my Facebook activity log and delete some content, to make my clean-up much easier. My previous experience with similar tasks makes it pretty simple to go through the DOM and automate interaction.
However, the actual unlike or delete buttons don't appear in the HTML source of the downloaded page, I assume because they're created dynamically by React. I just need to know how to interact with these elements.
How do you target a React generated element that isn't in the DOM with a python crawler?
I'm trying to scrape search results from a number of websites. The problem is that not all of these sites return their search results as plain HTML text; a lot of it is dynamically generated with JS, AJAX, etc. However, I can see exactly what I need by looking at the page with the Firefox inspector, since the scripts have all run and modified the HTML.
My question is: is there a way for me to download a webpage AFTER allowing the scripts to run, or at least to get them to run locally? That way, I'd get the final HTML.
For reference, I'm using python.
Possible duplicate. In that case the question is about PHP and JS.
Sure, but you have to provide an environment for the scripts (JS) to run in, and often to return a value to the target server. That's not easy for server-side languages, so today we mostly use the browser-driving or browser-imitating tools mentioned there.
I've found the Python analog of the v8js PHP plugin for you: PyV8.
PyV8 is a Python wrapper around Google's V8 engine; it acts as a bridge between Python and JavaScript objects and supports hosting the V8 engine in a Python script.
If properly configured, your scraper:
gets the site's JS,
evaluates that JS through the given plugin,
gets access to the target HTML for further parsing.
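If embedding a JS engine turns out to be too much work, the browser-driving tools mentioned above achieve the same result with less plumbing: a real (or headless) browser runs the scripts, and you read back the finished DOM. A sketch with the driver passed in, so that any Selenium-compatible driver fits (the URL and driver setup are placeholders):

```python
def rendered_html(driver, url):
    """Load a page in a browser and return the DOM *after* the
    page's scripts have run, rather than the raw server response."""
    driver.get(url)
    return driver.page_source

# With Selenium this would be used roughly as:
#   from selenium import webdriver
#   driver = webdriver.Firefox()
#   html = rendered_html(driver, "http://example.com/search?q=term")
```

The returned string is what the Firefox inspector shows you, so the usual parsers (BeautifulSoup, lxml, etc.) can take it from there.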
I have a program written in Python for text simplification, and I need it to run in a browser as a plugin. When you click the plugin, it should take the web page's text as input, pass it to my text-simplification program, and display the program's output on another web page.
The text-simplification program takes input text and produces a simplified version of it, so I'm planning to create a plugin that uses this program and shows the simplified version of the text on the web page.
It would be of great help if anyone could help me out with this.
You would need to use an NPAPI plugin in a Chrome extension:
http://code.google.com/chrome/extensions/npapi.html
Then you use Content Scripts to get the web page text and pass it to the Background Page via Messaging. Your NPAPI plugin will then call Python (however you like, since it's all in C++), and from the Background Page you send the text to the plugin.
Concerning your NPAPI plugin, you can take a look at how it is done in pyplugin, or gather ideas from here to create it.
Now the serious question: why can't you do this all in JavaScript?
If you want an easier way than trying to figure out plugins, make it run as a web service somewhere (Google App Engine is good for Python, and free), then use a bookmarklet to send pages from the browser. As an added bonus, it works with any browser, not just Chrome.
More explanation:
Rather than running on your own computer, you make your program run on a computer at Google (or somewhere else) and access it over the web. See Google's introduction to App Engine. Then, if you want it in your browser, you make a "bookmarklet": a little bit of JavaScript that grabs the web page you're currently on (either the code or the URL, depending on what you're trying to do) and sends it to your program over the web. You can add this to your browser's bookmark bar as a button you can click. There's some more info on this site.
Is it possible for my Python web app to provide an option for the user to automatically send jobs to a locally connected printer? Or will the user always have to use the browser to manually print everything?
If your Python web app is running inside a browser on the client machine, I don't see any way other than manual printing for the user.
Some workarounds you might want to investigate:
if your web app is installed on the client machine, you will be able to connect directly to the printer, as you have access to the underlying OS.
you could potentially create a browser plugin that does this for them, but I have no clue how that works technically.
what is it that you want to print? You could generate a PDF that contains everything the user needs to print, in one go.
You can serve the user's browser a webpage that includes the necessary JavaScript code to perform the printing if the user clicks to request it, as shown for example here (a pretty dated article, but the key idea of using JavaScript to call window.print has not changed, and the article has some useful suggestions, e.g. on making a printer-friendly page; you can find plenty of other articles mentioning window.print with a web search, if you wish).
Calling window.print (from the JavaScript part of the page that your Python server-side code serves) will actually, in every browser/OS combination I know of, bring up a print dialog, so the user gets system-appropriate options (picking a printer if they have several, maybe saving as a PDF instead of doing an actual print if their system supports that, etc.).
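On the Python side there is nothing printer-specific to do: the server just has to emit a page containing that JavaScript. A toy sketch (the page content and button label are made up):

```python
def printable_page(body_html):
    """Return an HTML page with a button that opens the browser's
    print dialog via window.print() when clicked. The actual
    printing is handled entirely by the browser and OS."""
    return """<!DOCTYPE html>
<html>
  <body>
    %s
    <button onclick="window.print()">Print this page</button>
  </body>
</html>""" % body_html
```

Any Python web framework (or even a CGI script) can return this string as a normal `text/html` response; the print dialog only appears client-side, when the user clicks the button.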