I have a Python script with one test in it.
The test calls functions from other classes in the project, which generate different data each time to be sent through the POST request in the test.
I want to run this test using the k6 tool, for 5 users for example.
Is it possible, or does k6 support only JavaScript?
As of the current version (0.42.0) it doesn't seem to be possible. If your machine has an SSH server running, you can use the xk6-ssh extension to connect to the machine where your Python script lives and run it there.
Alternatively, you can write your own k6 extension that executes local commands.
However, I don't think either is a viable approach, because you're unlikely to get good results and metrics that way.
It may be worth considering migrating to the Locust tool, which is Python-based, so you can write a locustfile around your Python script's logic and it will be executed with the given number of virtual users.
Check out What Is Locust Load Testing? for more information if needed.
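For illustration, a minimal locustfile sketch; the endpoint path and the payload generator are hypothetical stand-ins for your project's data-generating classes:

    # locustfile.py -- minimal sketch, not your actual test logic.
    from locust import HttpUser, task, between
    import random

    def make_payload():
        # Hypothetical stand-in for the project classes that
        # generate different data on every call.
        return {"value": random.randint(1, 100)}

    class ApiUser(HttpUser):
        wait_time = between(1, 2)  # pause 1-2 s between tasks

        @task
        def post_data(self):
            self.client.post("/endpoint", json=make_payload())

You can then run it headless with 5 virtual users: locust -f locustfile.py --headless -u 5 -r 1 -H http://localhost:8000 (-u is the user count, -r the spawn rate, -H the target host).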
I am working on a project that allows users to upload a Python script to an API and run it on a schedule. Currently, I'm trying to figure out a way to limit the functionality of the script so that it cannot access local files, mess with the Flask server running the API, etc. Do you have any ideas on how I can achieve this? Is there any way to make it so only specific libraries are available for importing?
Running other people's scripts on your server is a serious security issue. If you are trying to offer a Python interpreter from your web application, you can try something like judge0 (on GitHub). It is free if you deploy it yourself, and it runs scripts safely inside containers.
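As a hedged sketch of what calling a self-hosted judge0 instance looks like (the host/port and the Python language_id here are assumptions; verify both against your deployment):

    # Sketch: submit code to a self-hosted judge0 instance for
    # sandboxed execution. Host, port, and language_id are assumptions.
    import requests

    resp = requests.post(
        "http://localhost:2358/submissions?base64_encoded=false&wait=true",
        json={
            "source_code": "print('hello from the sandbox')",
            "language_id": 71,  # judge0's ID for Python 3 -- check your instance
        },
    )
    print(resp.json().get("stdout"))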
The simplest way is to ensure the user running the script is not root, but an account created specifically for this task (e.g. part of a group that can only read, not write or execute). This means at minimum you should ensure all files have the appropriate mode. Then you can use a pipe or subprocess to run the script as that user, as sketched below.
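A minimal sketch of that approach, assuming a dedicated unprivileged account (the username, path, and timeout are hypothetical):

    # Run the uploaded script as an unprivileged user, with a timeout.
    # "sandbox" is a hypothetical locked-down account; sudo must be
    # configured to let the API's user switch to it.
    import subprocess

    result = subprocess.run(
        ["sudo", "-u", "sandbox", "python3", "/uploads/user_script.py"],
        capture_output=True,
        text=True,
        timeout=30,  # kill runaway scripts
    )
    print(result.stdout)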
Alternatively, you could use a runtime that's not "local", like a VM or a compute service (AWS Lambda, etc.). The latter would be simplest, and there are lots of vendors who offer compute services with a programmatic API.
I'm building a website with React and Firebase that utilizes an algorithm I wrote in Python. The database and authentication for the project are both handled by Firebase, so I would like to keep the cloud functions in that same ecosystem if possible.
Right now, I'm using the python-shell npm package to send and receive data between Node.js and my Python script.
I have local unit testing set up so I can test the https.onCall functions locally without needing to deploy and test from the client.
When I am testing locally, everything works perfectly.
However, when I push the functions to the cloud and trigger the function from the client, the logs in the Firebase console show that the Python script is missing dependencies.
What is the best way to ensure that the script has all the dependencies available to it up on the server?
I have tried:
- Copying the actual dependency folders from my library/.../site-packages and putting them in the same directory under the /functions folder with the Python script. This almost works; I just run into an issue with numpy: "No module named 'numpy.core._multiarray_umath'" is printed to the logs in Firebase.
I apologize if this is an obvious answer. I'm new to Python, and the solutions I've found online seem way too elaborate or involve hosting the Python code in another ecosystem (like AWS or Heroku). I am especially hesitant to go to all that work because it runs fine locally. If I can just find a way to send the dependencies up with the script, I'm good to go.
Please let me know if you need any more information.
"the logs in the Firebase console show that the Python script is missing dependencies."
That's because the Node.js runtime targeted by the Firebase CLI doesn't have everything you need to run Python programs.
If you need to run a function that's primarily written in Python, you should not use the Firebase CLI; instead, use the Google Cloud tools to target the Python runtime, which should do everything you want. Yes, it might be extra work for you to learn new tools, and you will not be able to use the Firebase CLI, but it is the right way to run Python in Cloud Functions.
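For illustration, a minimal HTTP function for the Python runtime; the function name and payload shape are placeholders, and the key point is that dependencies like numpy go in a requirements.txt next to main.py, so Google Cloud installs them at deploy time instead of you uploading site-packages by hand:

    # main.py -- minimal sketch of an HTTP Cloud Function on the
    # Python runtime. numpy is listed in requirements.txt, so it is
    # installed server-side when you deploy.
    import numpy as np

    def run_algorithm(request):
        # request is a flask.Request; the JSON shape is hypothetical.
        data = request.get_json(silent=True) or {}
        values = data.get("values", [1, 2, 3])
        return {"mean": float(np.mean(values))}

You would then deploy with something like gcloud functions deploy run_algorithm --runtime python310 --trigger-http.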
I'm new to Python (and relatively new to programming in general), and I have created a small Python script that scrapes some data off a site once a week and stores it in a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it up on my server (a VPS with WiredTree running CentOS 5), but I have no idea where to start.
I tried Googling for it, but apparently I'm using the wrong terms, as "deploying" seems to mean creating an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know...
EDIT: More clarity
You should look into cron for this, which will allow you to schedule the execution of your Python script.
If you aren't sure how to make your Python script executable, add a shebang to the top of the script, and then add execute permissions to the script using chmod.
1. Copy the script to the server.
2. Test the script manually on the server.
3. Set up cron ("crontab -e") with a schedule that will trigger the script soon, so you can verify it runs.
4. Once you've debugged any issues, set cron to the appropriate schedule.
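Putting the shebang, chmod, and cron pieces together (the paths and schedule here are hypothetical):

    # 1. First line of the script itself (e.g. /home/user/scraper.py):
    #!/usr/bin/env python3

    # 2. Make the script executable:
    chmod +x /home/user/scraper.py

    # 3. In "crontab -e", run it once a week (Mondays at 09:00),
    #    logging output for debugging:
    0 9 * * 1 /home/user/scraper.py >> /home/user/scraper.log 2>&1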
Sounds like a job for Cron?
Cron is a scheduler that provides a way to run certain scripts (apps, etc.) at certain times.
Here is a short tutorial that explains how to set up cron.
See this for more general cron information.
Edit:
Also, since you are using CentOS: if you end up having issues with your script later on... it could partly be caused by SELinux. There are ways to disable SELinux on your server (if you have enough access permissions), but there are also arguments against disabling SELinux.
I have a Rails server which will need to run a Python script in the background. I know that I can run it the way I run terminal commands in Ruby, but what is the performance like? Is it better to use a Python framework instead of Rails? Are there better ways (optimization-wise) to run Python scripts on a Rails server?
If you mean that it needs to run periodically, just set it up as a cron job, no special performance characteristics to worry about there.
If you mean that it needs to run when pages are requested from your Ruby website, then simply running the script each time won't perform well as it needs to fire up the Python interpreter over and over again.
If the Python script is large but is only called from a relatively small number of page requests, you might be able to get away with this, sometimes it's not worth the time to optimise a slow operation that isn't called often.
If the bulk of your website is based around the functionality of the Python script, then yes, you are probably better off switching to a Python web framework and loading it as a module.
If the Python script isn't very big, then you are probably better off rewriting it in Ruby.
The worst-case scenario is that the script is big and used often, but doesn't make up enough of your website to justify switching to Python. In that case, I'd consider wrapping the Python in a daemon that Ruby can talk to in the background.
You will incur the cost of starting the Python interpreter each time you run the script from Ruby. The cost would be the same in a Python framework, unless you could load the Python script as a library instead.
You could set up a daemon in Rails to execute the Python script.
http://railscasts.com/episodes/129-custom-daemon - Tutorial for setting up daemons in rails
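To illustrate the daemon idea, here is a minimal long-running Python worker that a Ruby process could drive over stdin/stdout (the line-per-request JSON protocol and the file name are hypothetical):

    # worker.py -- sketch of a long-running worker. The interpreter
    # starts once; each request is one JSON line on stdin and each
    # response one JSON line on stdout, so Ruby can drive it with
    # IO.popen("python3 worker.py", "r+").
    import sys
    import json

    def handle(request):
        # Hypothetical stand-in for the real script's logic.
        return {"echo": request}

    for line in sys.stdin:
        request = json.loads(line)
        print(json.dumps(handle(request)), flush=True)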
Is there a testing framework (preferably Python-based) that executes tests, monitors their progress (failed/passed/timeout), and controls VMware? Thanks
I am trying to do some automated functional testing in VMware using AutoIt scripts; the VMs are controlled by a small Python script on the host machine (it deploys test files into the VMs, executes them, and collects the result data). But now it looks like a lot of work if I want this script to manage and execute a whole series of test cases.
Thanks a lot!
Cheers,
Zhe
There are lots of continuous integration tools that may do what you want.
One implemented in Python that may fit your needs is Buildbot - it can manage running builds and tests across multiple machines and consolidate the results.
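For a flavor of what that looks like, here is a fragment of a Buildbot master.cfg (Buildbot configuration is ordinary Python; the step commands and timeout are hypothetical placeholders for your deploy/run/collect scripts):

    # Fragment of a Buildbot master.cfg -- hypothetical build steps.
    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    # Each stage of the VM test run becomes a step whose
    # pass/fail/timeout status Buildbot records and reports.
    factory.addStep(steps.ShellCommand(command=["python", "deploy_tests.py"]))
    factory.addStep(steps.ShellCommand(command=["python", "run_tests.py"],
                                       timeout=1200))
    factory.addStep(steps.ShellCommand(command=["python", "collect_results.py"]))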