Automating Tasks through a VNC Connection within a Browser - python

First off, what I'm trying to accomplish is to have a base machine (a VPS) run automated tasks through Firefox using Python.
The goal is for Firefox to run the given tasks in the browser itself, then connect to a second VPS (through the browser) over a VNC connection and issue tasks to that VPS as well (this is the part I'm having trouble with), all while using as little memory as possible for maximum efficiency.
To give an example: if you've used DigitalOcean, you can view a VPS's screen or terminal within the browser.
To be clear, the VPS running the base process uses Linux, while the VPS the program connects to (through the browser) runs Windows.
My problem is that after running through all of the scripted tasks with Selenium in Python (with Firefox), once I open the VPS in the browser I can't figure out how to access it properly or issue jobs to it.
I've thought about using (x, y) coordinates for mouse clicks, though I can't say this would work reliably (I tested it with iMacros, but not yet with Selenium).
In a nutshell: I run base tasks in Firefox to start, then connect to a VPS, and finally issue more tasks from Firefox to that Windows VPS.
Any suggestions on how to make this process simpler, more efficient, or more reliable?

There is a class in Java called Robot which can handle almost all keyboard operations.
There is a similar facility in Python: gtk.gdk.Display.
See:
Is there a Python equivalent to Java's AWT Robot class?
Take a screenshot via a python script. [Linux]
OR
Python ctypes keybd_event simulate ctrl+alt+delete
Demo Java code:
try {
    Robot robot = new Robot();
    robot.keyPress(KeyEvent.VK_CONTROL);
    robot.keyPress(KeyEvent.VK_ALT);
    robot.keyPress(KeyEvent.VK_DELETE);
    robot.keyRelease(KeyEvent.VK_CONTROL);
    robot.keyRelease(KeyEvent.VK_ALT);
    robot.keyRelease(KeyEvent.VK_DELETE);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
}
Hope it will help you :)
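For the Python side, here is a hedged sketch of the same idea on a Linux host, shelling out to xdotool (an assumption: xdotool must be installed and an X session running; the helper names below are mine, and nothing actually presses keys unless you uncomment the last line):

```python
import subprocess

def key_chord_cmd(keys):
    # Build an xdotool invocation that sends a key chord
    # (e.g. Ctrl+Alt+Delete) to the currently focused window,
    # such as the browser tab showing the VNC canvas.
    return ["xdotool", "key", "+".join(keys)]

def send_key_chord(keys):
    # Actually press the chord; requires xdotool and a running X session.
    subprocess.run(key_chord_cmd(keys), check=True)

# Example (commented out because it needs a display):
# send_key_chord(["ctrl", "alt", "Delete"])
```

Because the VNC session is just pixels inside the browser, key and mouse events have to go through the host's input layer like this rather than through Selenium's DOM-level API.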

Related

Make Python script run automatically + SQL database

I have done a lot of research on the forum but I didn't find a clear answer to my question.
I have created a Python script which extracts data from a website and puts it in a SQL database (with SQLite).
Now I would like to make it run 24 hours a day on a server so that it collects data continuously.
Although I know how to code, I don't have broad knowledge of IT in general.
Would you by chance know how to solve my problem?
Edit: I was thinking more about putting my scripts on a cloud server and running them there, instead of running them directly from my PC.
Thank you very much.
Lcs
I can think of two options:
1) Use a simple "while True" loop with a sleep in Python, so the script runs constantly.
import time

while True:
    # Code
    time.sleep(5)  # goes to sleep for 5 seconds
You can also do this in a shell script (infinite loop + sleep) and then run the program whenever you want.
2) You can use cron jobs to run the Python script. You basically have the script that will scrape the website, and it is launched every X seconds/minutes/hours. You can read more about how to use cron jobs on Google.
Depending on what system you are using (Unix or Windows), you can use either:
1. Task Scheduler on Windows
2. crontab on Unix
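As a concrete illustration, a hypothetical crontab entry (the interpreter and script paths are placeholders) that runs the scraper every 10 minutes and appends its output to a log:

```shell
# Install with: crontab -e
# min hour day month weekday  command
*/10 *   *   *     *          /usr/bin/python3 /home/lcs/scraper.py >> /home/lcs/scraper.log 2>&1
```

Compared to the while/sleep loop, cron restarts the script on every run, so a crash in one run does not stop future data collection.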

Multiple Selenium instances on same machine?

What is the best way to run multiple selenium instances in parallel if the tests are for:
same browser type
same machine
I've read this: https://code.google.com/p/selenium/wiki/ScalingWebDriver and it seems there is a systemic problem with running multiple Selenium instances on the same machine. But I'd like to ask the community if there is a way that I'm not seeing.
I have a working Selenium instance running e2e tests. However, I would now like to run 5 of those Selenium instances in parallel using the same browser type.
I've looked into Selenium Grid 2 and I'm not sure it fits my use case. The main point of Selenium Grid 2 seems to be distributing tests according to browser version / operating system. But in my case, every test is for the same browser type and version.
Running the standalone test works great!
Running multiple Firefox processes:
But as I try to scale by spawning multiple Firefox processes, I get errors that mostly involve HTTP and request failures, including BadClient, StatusError, and Exception: Request cannot be sent:
webdriver.Firefox()
Using Grid
I've dug into the webdriver.Firefox() code and it looks like, behind the scenes, it connects locally:
class WebDriver(RemoteWebDriver):
    def __init__(self, firefox_profile=None, firefox_binary=None,
                 timeout=30, capabilities=None, proxy=None):
        ...
        RemoteWebDriver.__init__(self,
            command_executor=ExtensionConnection("127.0.0.1", self.profile,
                                                 self.binary, timeout))
The RemoteWebDriver instance seems to just connect to localhost on a free port that it finds. Which seems like the same command used by Grid when registering a node:
java -jar selenium-server-standalone-2.44.0.jar -role node -hub http://localhost:4444/grid/register
Does Grid have any relevance for running parallel Selenium instances on the same machine? Or is it primarily a load balancer for instances running on various different machines?
Is it possible to have reliable, non-flaky Selenium instances run in parallel on the same machine? I get HTTP flakiness when I run them in parallel (lots of "request cannot be sent", "Bad Status Error", or the browser closing before info can be read from the socket).
What's the point of Selenium Grid 2? Does it just act as a load balancer for parallel test runs on multiple machines? If I ran Grid locally with the hub and node on the same machine (all for FF), would it effectively be the same as running multiple webdriver.Firefox() processes?
Or is there some more magic behind the scenes?
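For what it's worth, the setup described above (N independent drivers, one per OS process) can be sketched like this; the hub URL and the test body are placeholders, and the driver creation is left commented so the skeleton stands alone:

```python
# Hypothetical sketch: run 5 e2e jobs in parallel, one driver per process.
from concurrent.futures import ProcessPoolExecutor

def run_test(case_id):
    # In a real run, create one driver per process here, e.g.:
    #   from selenium import webdriver
    #   driver = webdriver.Remote(
    #       command_executor="http://localhost:4444/wd/hub",
    #       desired_capabilities={"browserName": "firefox"})
    #   ... exercise the app, then driver.quit()
    return "case %d passed" % case_id

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=5) as pool:
        # Each worker gets its own process, and hence its own browser,
        # which avoids drivers sharing state inside one interpreter.
        print(list(pool.map(run_test, range(5))))
```

Isolating each driver in its own process is one way to sidestep some of the in-process flakiness, though it does not by itself fix hub/node resource contention.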

shell command from python script

I need your help :D
I have a web page; on this page I check some items and pass their values as variables to a Python script.
The problem is:
I need to write a Python script that puts these variables into my predefined shell commands and runs them.
It is one gnuplot command and one other shell command.
I've never done anything in Python - can you give me some advice?
Thx
I can't fully address your question due to lack of information about the web framework you are using, but here is some advice and guidance that you may find useful. I had a similar problem that required me to run a shell program with arguments derived from user requests (I was using the Django framework, in Python).
Now there are several factors that you have to consider:
How long each job will take
What load you are expecting (are there going to be lots of jobs)
Whether there will be any side effects from your shell command
Here is why each of these is important:
How long each job will take.
Depending on your framework and browser, there is a limit on how long a connection to the server is kept alive. In other words, you have to make sure that the time for the server to respond to a user request does not exceed the connection timeout set by the server or the browser. If it takes too long, you will get a server connection timeout, i.e. an error response because there was no reply from the server side.
What load you are expecting.
You have probably figured out that a huge job will consume a lot of resources. Also, multiple requests at the same time will take a heavy toll on your server. For instance, if you do proceed with using subprocess for your jobs, it will be important to note whether the job is blocking or non-blocking.
Side effects.
It is important to understand the side effects of your shell process. For instance, if your shell process involves writing and generating lots of temp files, you will have to consider the permissions that your script has. It is a complex task.
So how can this be resolved?
subprocess, which ships with base Python, will allow you to run shell commands from Python. If you want more sophisticated tools, check out the fabric library. For passing arguments, check out optparse and sys.argv.
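As a minimal sketch of that first step (the gnuplot invocation in the comment is a hypothetical file name; the harmless echo call just demonstrates the pattern):

```python
import subprocess

def run_command(cmd_args):
    # Pass arguments as a list so user input is never interpolated
    # into a shell string (avoids shell injection).
    result = subprocess.run(cmd_args, capture_output=True, text=True)
    return result.returncode, result.stdout

# A real call might look like:
#   run_command(["gnuplot", "plot_script.gp"])   # hypothetical script name
code, out = run_command(["echo", "hello"])
```

Checking the returned exit code lets the web layer report failures back to the user instead of silently swallowing them.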
If you expect a huge workload or a long processing time, consider setting up a queue system for your jobs. A popular framework like celery is a good example. You may look at gevent and asyncio (Python 3) as well. Generally, instead of returning a response on the fly, you can return a job id or a URL that the user can come back to later to check on the job.
Point to note!
Permissions and security are vital! The last thing you want is for people to execute shell commands that are detrimental to your system.
You can also increase connection timeout depending on the framework that you are using.
I hope you will find this useful
Cheers,
Biobirdman

Confused about DBus

Ok, so, I might be missing the plot a bit here, but would really like some help. I am quite new to development etc. and have now come to a point where I need to implement DBus (or some other inter-program communication). I am finding the concept a bit hard to understand though.
My implementation will be to use an HTML website to change certain variables to be used in another program, thus allowing for the program to be dynamically changed in its working. I am doing this on a raspberry PI using Raspbian. I am running a webserver to host my website, and this is where the confusion comes in.
As far as I understand, DBus runs a service which allows you to call methods from a program in another program. So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program? To complicate things a bit more, I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service. Would it be better to use JavaScript?
For me, the most logical solution would be to run a single DBUS service that somehow imports method from different programs and can be queried by others who want to run those methods. Is that possible?
Help would be appreciated!
Thank you in advance!
So does this mean that my website needs to run a DBUS service to
allow me to call methods from it into my program?
A dbus background process (a daemon) would run on your web server, yes.
In fact dbus provides two daemons. One is a system daemon which permits objects to receive system information (e.g. printer availability), and the second is a general user application-to-application IPC daemon. It is the second daemon that you would use for different applications to communicate.
I am coding in Python, so I am not sure if I can run a Python script
on my website that would allow me to run a DBUS service.
There is no problem using Python; dbus has bindings for many languages (e.g. Java, Perl, Ruby, C++, Python). dbus objects can be mapped to Python objects.
the most logical solution would be to run a single DBUS service that
somehow imports method from different programs and can be queried by
others who want to run those methods. Is that possible?
Correct - dbus provides a mechanism by which a client process can create a dbus object (or objects) that exposes that process's services to other dbus-aware processes.
This sounds like you should write an isolated D-Bus service to act as a datastore, and communicate synchronously with it in your scripts to write and read values. You can use shelve to persist the values between service invocations.
In the tutorial, the "Making method calls" section covers synchronous calls, and "Exporting objects" covers writing most of the D-Bus service.

Python script will not run in Task Scheduler for "Run whether user is logged on or not"

I have written a python script and wanted to have it run at a set period everyday with the use of Task Scheduler. I have had no problems with Task Scheduler for running programs while logged off, before creating this task.
If I select "Run only when user is logged on" my script runs as expected with the desired result and no error code (0x0).
If I select "Run whether user is logged on or not" with "Run with highest privileges" and then leave it overnight or log off to test it, it does not do anything and has an error code of 0x1.
I have the action to "Start a program" with the Details as follows:
Program/script: C:\Python27\python2.7.exe
Add arguments: "C:\Users\me\Desktop\test.py"
I think it has to do with permissions to use python while logged off but I can't figure this one out. Wondering if anyone has suggestions or experience on this.
This is on Windows 7 (fyi)
Thanks,
JP
I think I have found the solution to this problem. My script is used to create a powerpoint slide deck and needs to open MS PPT.
I stumbled upon a post from another forum with a link to MS's policy on this. It basically boils down to the following:
"Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behaviour and/or deadlock when Office is run in this environment.
Automating PowerPoint from a scheduled task falls under the unsupported scenario when scheduled task is run with the option "Run whether user logged on or not". But, using it with "Run only when the user is logged on" option falls under the supported category."
From here
I would try it with the script placed outside your Users directory.
I have experience supporting PowerPoint automation under the Task Scheduler by way of a C++ app called p3icli (available on SourceForge). This is the approach I successfully used:
1) Add a command-line (-T) switch that indicates p3icli will run under Task Scheduler.
2) The command-line switch forces p3icli to start an instance of powerpnt.exe using CreateProcess() and then wait X milliseconds for that instance to stabilize.
3) After X milliseconds elapse, p3icli connects to the running PPT instance created in step 2 and processes automation commands.
I would guess that a similar approach can be used with Python.
Task Scheduler compatibility is easily the most troublesome feature I ever added to p3icli. For example, manipulating multiple presentations by changing the active window simply does not work. And as I'm sure you've discovered, debugging problems is no fun at all.
NB: Your python solution must include code that forces PowerPoint to unconditionally close when your python script is complete (modulo a python crash). Otherwise, orphaned instances of PowerPoint will appear in Task Manager.
Click the link for some thoughts on the Task Scheduler from a p3icli point of view.
