Locust integration with product SDK is not showing stats - python

I'm not finding much in the documentation about load testing an SDK with Locust.
from locust import HttpUser, task
import random
import string

from cnvrgv2 import Cnvrg
from cnvrgv2.modules.users.users_client import UsersClient


class CnvrgUser(HttpUser):
    def on_start(self):
        print("This is on start method")
        # Start an instance of the SDK
        self.cnvrg: Cnvrg = Cnvrg(domain='***', email='***', password='***',
                                  organization='performance')
        # Overwrite the internal _session attribute with the Locust session
        self.cnvrg._session = self.client

    @task
    def create_project(self):
        """User creates project"""
        print("Inside project create")
        i = 0
        while i < 10:
            project_name = 'project_name' + \
                "".join(random.choices(string.ascii_letters, k=6))
            # import pdb; pdb.set_trace()
            response = self.cnvrg.projects.create(project_name)
            i += 1
            # self.arch.assets.create(behaviours=["Builtin", "RecordEvidence", "Attachments"], attrs={"foo": "bar"})
As mentioned in the docs, the SDK keeps a Session whose value is {}. What is required inside this requests.Session object in my SDK?

The main requirement of this approach is that you are able to replace the SDK's internally used requests.Session object with Locust's HttpSession (self.client).
First of all, does your Cnvrg object even store its session in _session? (we might need to improve the docs to clarify this requirement)
If it does, can you replace it after initializing (as is done in the example in the docs), or is it too late (because your SDK has already established session cookies etc.)? If it is too late, you'd need a more advanced approach, such as overriding the request method of the existing Session object.
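To illustrate that more advanced approach, here is a minimal, hypothetical sketch: it keeps the SDK's own session (with whatever cookies it has already established) and monkey-patches its request method to delegate to Locust's HttpSession, so the SDK's calls are recorded in the statistics. The _session attribute name and the projects.create call are taken from the question and are assumptions about the SDK, not confirmed API.
from locust import HttpUser, task
from cnvrgv2 import Cnvrg


class CnvrgUserPatched(HttpUser):
    def on_start(self):
        self.cnvrg = Cnvrg(domain='***', email='***', password='***',
                           organization='performance')
        # Assumption: the SDK stores its requests.Session in `_session`.
        sdk_session = getattr(self.cnvrg, "_session", None)
        if sdk_session is not None:
            # Route the existing session's HTTP calls through Locust's
            # instrumented client instead of replacing the whole object.
            sdk_session.request = self.client.request

    @task
    def create_project(self):
        # Every HTTP request the SDK makes now shows up in Locust's stats.
        self.cnvrg.projects.create("locust-demo-project")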

Related

Create activities for a user with Stream-Framework

I'm trying to set up stream-framework (the one here, not the newer getstream). I've set up the Redis server and the environment properly; the issue I'm facing is in creating the activities for a user.
I've been trying to create activities, following the documentation to add an activity but it gives me an error message as follows:
...
File "/Users/.../stream_framework/activity.py", line 110, in serialization_id
if self.object_id >= 10 ** 10 or self.verb.id >= 10 ** 3:
AttributeError: 'int' object has no attribute 'id'
Here is the code
from stream_framework.activity import Activity
from stream_framework.feeds.redis import RedisFeed


class PinFeed(RedisFeed):
    key_format = 'feed:normal:%(user_id)s'


class UserPinFeed(PinFeed):
    key_format = 'feed:user:%(user_id)s'


feed = UserPinFeed(13)
print(feed)

activity = Activity(
    actor=13,  # Thierry's user id
    verb=1,    # The id associated with the Pin verb
    object=1,  # The id of the newly created Pin object
)

feed.add(activity)  # Error at this line
I think there is something missing in the documentation or maybe I'm doing something wrong. I'll be very grateful if anyone helps me get the stream framework working properly.
The documentation is inconsistent. The verb you pass to the activity should be (an instance of?*) a subclass of stream_framework.verbs.base.Verb. Check out this documentation page on custom verbs and the tests for this class.
The following should fix the error you posted:
from stream_framework.activity import Activity
from stream_framework.feeds.redis import RedisFeed
from stream_framework.verbs import register
from stream_framework.verbs.base import Verb


class PinFeed(RedisFeed):
    key_format = 'feed:normal:%(user_id)s'


class UserPinFeed(PinFeed):
    key_format = 'feed:user:%(user_id)s'


class Pin(Verb):
    id = 5
    infinitive = 'pin'
    past_tense = 'pinned'


register(Pin)

feed = UserPinFeed(13)

activity = Activity(
    actor=13,
    verb=Pin,
    object=1,
)

feed.add(activity)
I quickly looked over the code for Activity and it looks like passing ints for actor and object should work. However, it is possible that these parameters are also outdated in the documentation.
* The tests pass in classes as verb. However, the Verb base class has the methods serialize and __str__ that can only be meaningfully invoked if you have an object of this class. So I'm still unsure which is required here. It seems like in the current state, the framework never calls these methods, so classes still work, but I feel like the author originally intended to pass instances.
With the help of the great answer by @He3lixxx, I was able to solve it partially. As the package is no longer maintained, it installs the latest Redis client for Python, which was creating too many issues; installing redis-2.10.5 when using stream-framework-1.3.7 should fix that.
I would also like to add a complete guide to properly add an activity to a user feed.
Key points:
If you are not using the feed manager, make sure to first insert the activity with the feed.insert_activity(activity) method before you add it to the user feed.
If getting the feed with feed[:] throws an error like the one below:
File "/Users/.../stream_framework/activity.py", line 44, in get_hydrated
activity = activities[int(self.serialization_id)]
KeyError: 16223026351730000000001005L
then you need to clear the data for that user using its key format. In my case the key is feed:user:13 for user 13, so delete it with DEL feed:user:13. If that doesn't fix the issue, you can run FLUSHALL, which will delete everything from Redis.
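For reference, here is a minimal sketch of clearing that key programmatically with the redis-py client instead of redis-cli (assuming a Redis instance on localhost with the default port and database):
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)
r.delete("feed:user:13")  # drop the stale feed key for user 13
# r.flushall()            # last resort: wipes everything in Redis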
Sample code:
from stream_framework.activity import Activity
from stream_framework.feeds.redis import RedisFeed
from stream_framework.verbs import register
from stream_framework.verbs.base import Verb


class PinFeed(RedisFeed):
    key_format = 'feed:normal:%(user_id)s'


class UserPinFeed(PinFeed):
    key_format = 'feed:user:%(user_id)s'


class Pin(Verb):
    id = 5
    infinitive = 'pin'
    past_tense = 'pinned'


register(Pin)

feed = UserPinFeed(13)
print(feed[:])

activity = Activity(
    actor=13,
    verb=Pin,
    object=1)

feed.insert_activity(activity)
activity_id = feed.add(activity)
print(activity_id)
print(feed[:])

With Python Kubernetes client, how to replicate `kubectl create -f` generally?

My Bash script using kubectl create/apply -f ... to deploy lots of Kubernetes resources has grown too large for Bash. I'm converting it to Python using the PyPI kubernetes package.
Is there a generic way to create resources given the YAML manifest? Otherwise, the only way I can see to do it would be to create and maintain a mapping from Kind to API method create_namespaced_<kind>. That seems tedious and error prone to me.
Update: I'm deploying many (10-20) resources to many (10+) GKE clusters.
Update in the year 2020, for anyone still interested in this (since the docs for the python library are mostly empty).
At the end of 2018 this pull request was merged,
so it's now possible to do:
from kubernetes import client, config
from kubernetes import utils
config.load_kube_config()
api = client.ApiClient()
file_path = ... # A path to a deployment file
namespace = 'default'
utils.create_from_yaml(api, file_path, namespace=namespace)
EDIT: from a request in a comment, a snippet for skipping the python error if the deployment already exists
from kubernetes import client, config
from kubernetes import utils

config.load_kube_config()
api = client.ApiClient()


def skip_if_already_exists(e):
    import json
    # found in https://github.com/kubernetes-client/python/blob/master/kubernetes/utils/create_from_yaml.py#L165
    info = json.loads(e.api_exceptions[0].body)
    if info.get('reason').lower() == 'alreadyexists':
        pass
    else:
        raise e


file_path = ...  # A path to a deployment file
namespace = 'default'

try:
    utils.create_from_yaml(api, file_path, namespace=namespace)
except utils.FailToCreateError as e:
    skip_if_already_exists(e)
I have written the following piece of code to create k8s resources from a json/yaml file:
import re

import yaml
from kubernetes import client


def create_from_yaml(yaml_file):
    """
    :param yaml_file: path to a yaml/json manifest containing a single object
    :return: the API client used to create the object
    """
    with open(yaml_file) as f:
        yaml_object = yaml.safe_load(f)
    # Work out which API group/version the object belongs to
    group, _, version = yaml_object["apiVersion"].partition("/")
    if version == "":
        version = group
        group = "core"
    group = "".join(group.rsplit(".k8s.io", 1))
    func_to_call = "{0}{1}Api".format(group.capitalize(), version.capitalize())
    k8s_api = getattr(client, func_to_call)()
    # Convert the kind (e.g. StatefulSet) to snake_case (stateful_set)
    kind = yaml_object["kind"]
    kind = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', kind)
    kind = re.sub('([a-z0-9])([A-Z])', r'\1_\2', kind).lower()
    if "namespace" in yaml_object["metadata"]:
        namespace = yaml_object["metadata"]["namespace"]
    else:
        namespace = "default"
    try:
        if hasattr(k8s_api, "create_namespaced_{0}".format(kind)):
            resp = getattr(k8s_api, "create_namespaced_{0}".format(kind))(
                body=yaml_object, namespace=namespace)
        else:
            resp = getattr(k8s_api, "create_{0}".format(kind))(
                body=yaml_object)
    except Exception as e:
        raise e
    print("{0} created. status='{1}'".format(kind, str(resp.status)))
    return k8s_api
In the above function, if you provide any object's yaml/json file, it will automatically pick up the API group and object type and create the object (StatefulSet, Deployment, Service, etc.).
PS: The above code doesn't handle multiple Kubernetes resources in one file, so you should have only one object per yaml file; see the sketch below for a possible workaround.
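If you do need several resources in one manifest, a possible workaround is to split the file with yaml.safe_load_all and hand each document to a variant of the function above; create_from_dict below is hypothetical, i.e. create_from_yaml refactored to accept an already-parsed dict instead of a file path.
import yaml


def create_all_from_yaml(yaml_file):
    """Create every object in a multi-document ('---' separated) manifest."""
    with open(yaml_file) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue  # skip empty documents
            create_from_dict(doc)  # hypothetical variant of create_from_yaml taking a dict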
I see what you are looking for. This is possible with other k8s clients available in other languages. Here is an example in java. Unfortunately the python client library does not support that functionality yet. I opened a new feature request requesting the same and you can either choose to track it or contribute yourself :). Here is the link for the issue on GitHub.
The other way to still do what you are trying to do is to use the java/golang client and put your code in a docker container.

Mock flask.request in python nosetests

I'm writing test cases for code that is called via a route under Flask. I don't want to test the code by setting up a test app and calling a URL that hits the route, I want to call the function directly. To make this work I need to mock flask.request and I can't seem to manage it. Google / stackoverflow searches lead to a lot of answers that show how to set up a test application which again is not what I want to do.
The code would look something like this.
somefile.py
-----------
from flask import request


def method_called_from_route():
    data = request.values
    # do something with data here

test_somefile.py
----------------
import unittest
from unittest.mock import patch

import somefile


class SomefileTestCase(unittest.TestCase):
    @patch('somefile.request')
    def test_method_called_from_route(self, mock_request):
        # want to mock the request.values here
        pass
I'm having two issues.
(1) Patching the request as I've sketched out above does not work. I get an error similar to "AttributeError: 'Blueprint' object has no attribute 'somefile'"
(2) I don't know how to exactly mock the request object if I could patch it. It doesn't really have a return_value since it isn't a function.
Again I can't find any examples on how to do this so I felt a new question was acceptable.
Try this
test_somefile.py
import unittest
import somefile
import mock


class SomefileTestCase(unittest.TestCase):
    def test_method_called_from_route(self):
        m = mock.MagicMock()
        m.values = "MyData"
        with mock.patch("somefile.request", m):
            somefile.method_called_from_route()


unittest.main()

somefile.py
from flask import request


def method_called_from_route():
    data = request.values
    assert(data == "MyData")
This is going to mock the entire request object.
If you want to mock only request.values while keeping all others intact, this would not work.
A few years after the question was asked, but this is how I solved it with Python 3.9 (other proposed solutions stopped working with Python 3.8, see here). I'm using pytest and pytest-mock, but the idea should be the same across testing frameworks, as long as you are using the native unittest.mock.patch in some capacity (pytest-mock essentially just wraps these methods in an easier-to-use api). Unfortunately, it does require that you set up a test app; however, you do not need to go through the process of using test_client, and can just invoke the function directly.
This can be easily handled by using the Application Factory design pattern and injecting the application config. Then, just use the created app's .test_request_context as a context manager to mock out the request object: everything called within the context gets access to the request object. Here's an example below.
import pytest

from app import create_app


@pytest.fixture
def request_context():
    """create the app and return the request context as a fixture
    so that this process does not need to be repeated in each test
    """
    app = create_app('module.with.TestingConfig')
    return app.test_request_context


def test_something_that_requires_global_request_object(mocker, request_context):
    """do the test thing"""
    with request_context():
        # mocker.patch is just pytest-mock's way of using unittest.mock.patch
        mock_request = mocker.patch('path.to.where.request.is.used')

        # make your mocks and stubs
        mock_request.headers = {'content-type': 'application/json'}
        mock_request.get_json.return_value = {'some': 'json'}

        # now you can do whatever you need, using mock_request, and you do not
        # need to remain within the request_context context manager
        run_the_function()
        mock_request.get_json.assert_called_once()
        assert 1 == 1
        # etc.
pytest is great because it allows you to easily setup fixtures for your tests as described above, but you could do essentially the same thing with UnitTest's setUp instance methods. Happy to provide an example for the Application Factory design pattern, or more context, if necessary!
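For completeness, here is a minimal sketch of the create_app factory assumed in the fixture above; the config path 'module.with.TestingConfig' is just a placeholder and the blueprint registration is illustrative, not part of the original answer.
from flask import Flask


def create_app(config_object='module.with.TestingConfig'):
    """Application factory: build and configure a Flask app instance."""
    app = Flask(__name__)
    app.config.from_object(config_object)

    # Register blueprints / extensions here, e.g.:
    # from app.views import bp
    # app.register_blueprint(bp)

    return app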
With the help of Gabrielbertouinataa and this article: https://medium.com/@vladbezden/how-to-mock-flask-request-object-in-python-fdbc249de504:
code:
import flask


def print_request_data():
    print(flask.request.data)

test:
flask_app = flask.Flask('test_flask_app')
with flask_app.test_request_context() as mock_context:
    mock_context.request.data = "request_data"
    mock_context.request.path = "request_path"
    print_request_data()
Here is an example of how I dealt with it:
test_common.py module
import pytest
import flask

import common


def test_user_name(mocker):
    # GIVEN: user is provided in the request.headers
    given_user_name = "Some_User"
    request_mock = mocker.patch.object(flask, "request")
    request_mock.headers.get.return_value = given_user_name

    # WHEN: request.header.get method is called
    result = common.user_name()

    # THEN: user name should be returned
    request_mock.headers.get.assert_called_once_with("USERNAME", "Invalid User")
    assert result == given_user_name

common.py module
import flask


def user_name():
    return flask.request.headers.get("USERNAME", "Invalid User")
What you're trying to do is counterproductive. Following RFC 2616, a request is:
A request message from a client to a server includes, within the first line of that message, the method to be applied to the resource, the identifier of the resource, and the protocol version in use.
To mock the Flask request you would need to rebuild its structure, which you certainly do not want to do!
The best approach should be use something like Flask-Testing or use some recipes like this, and then, test your method.
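For comparison, the route-level approach this answer recommends takes only a few lines with Flask's built-in test client; a minimal sketch (the myapp module, the /do-something route and the payload are made-up names, not from the question's somefile.py):
import unittest

from myapp import app  # hypothetical module exposing the Flask app


class RouteTestCase(unittest.TestCase):
    def test_route(self):
        client = app.test_client()
        # The test client builds a real request, so request.values is populated normally.
        resp = client.post('/do-something', data={'key': 'value'})
        self.assertEqual(resp.status_code, 200)


if __name__ == '__main__':
    unittest.main()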

Understanding Class inheritance to DRY up some code

I am using the cloudant python library to connect to my cloudant account.
Here is the code I have so far:
import cloudant


class WorkflowsCloudant(cloudant.Account):
    def __init__(self):
        super(WorkflowsCloudant, self).__init__(
            settings.COUCH_DB_ACCOUNT_NAME,
            auth=(settings.COUCH_PUBLIC_KEY, settings.COUCH_PRIVATE_KEY))


@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    account = WorkflowsCloudant()
    db = account.database(settings.COUCH_DB_NAME)
    doc = db.document(workflow_id)
    resp = doc.get().json()
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)
This Flask controller will have CRUD operations inside of it, but with the current implementation, I will have to set the account and db variables in each method before performing operations on the document I want to view/manipulate. How can I clean up (or DRY up) my code so that I only have to call to my main WorkflowsCloudant class?
I don't know cloudant, so I may be totally off base, but I believe this answers your question:
Delete the account, db, and doc lines from get_single_workflow.
Add the following lines to __init__ (which will then need to accept workflow_id):
db = self.database(settings.COUCH_DB_NAME)
self.doc = db.document(workflow_id)
Change the resp line in get_single_workflow to:
resp = WorkflowsCloudant(workflow_id).doc.get().json()
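Putting those three steps together, here is a minimal sketch of what the class and view could look like; note that __init__ then has to accept workflow_id, and settings, blueprint and error_helpers are the question's own objects, assumed to exist as before.
import cloudant
from flask import jsonify


class WorkflowsCloudant(cloudant.Account):
    def __init__(self, workflow_id):
        super(WorkflowsCloudant, self).__init__(
            settings.COUCH_DB_ACCOUNT_NAME,
            auth=(settings.COUCH_PUBLIC_KEY, settings.COUCH_PRIVATE_KEY))
        db = self.database(settings.COUCH_DB_NAME)
        self.doc = db.document(workflow_id)


@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    resp = WorkflowsCloudant(workflow_id).doc.get().json()
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)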

Google Analytics and Python

I'm brand new at Python and I'm trying to write an extension to an app that imports GA information and parses it into MySQL. There is a shamefully sparse amount of information on the topic. The Google Docs only seem to have examples in JS and Java...
...I have gotten to the point where my user can authenticate into GA using AuthSub. That code is here:
import gdata.service
import gdata.analytics
from django import http
from django import shortcuts
from django.shortcuts import render_to_response


def authorize(request):
    next = 'http://localhost:8000/authconfirm'
    scope = 'https://www.google.com/analytics/feeds'
    secure = False  # set secure=True to request secure AuthSub tokens
    session = False
    auth_sub_url = gdata.service.GenerateAuthSubRequestUrl(next, scope, secure=secure, session=session)
    return http.HttpResponseRedirect(auth_sub_url)
So, the next step is getting at the data. I have found this library (beware, the UI is offensive): http://gdata-python-client.googlecode.com/svn/trunk/pydocs/gdata.analytics.html
However, I have found it difficult to navigate. It seems like I should be using gdata.analytics.AnalyticsDataEntry.getDataEntry(), but I'm not sure what it is asking me to pass it.
I would love a push in the right direction. I feel I've exhausted google looking for a working example.
Thank you!!
EDIT: I have gotten farther, but my problem still isn't solved. The below method returns data (I believe).... the error I get is: "'str' object has no attribute '_BecomeChildElement'" I believe I am returning a feed? However, I don't know how to drill into it. Is there a way for me to inspect this object?
def auth_confirm(request):
    gdata_service = gdata.service.GDataService('iSample_acctSample_v1.0')
    feedUri = 'https://www.google.com/analytics/feeds/accounts/default?max-results=50'
    # request feed
    feed = gdata.analytics.AnalyticsDataFeed(feedUri)
    print str(feed)
Maybe this post can help out. It seems like there are no Analytics-specific bindings yet, so you are working with the generic gdata.
I've been using GA for a little over a year now, and since about April 2009 I have used the python bindings supplied in a package called python-googleanalytics by Clint Ecker et al. So far, it works quite well.
Here's where to get it: http://github.com/clintecker/python-googleanalytics.
Install it the usual way.
To use it: First, so that you don't have to manually pass in your login credentials each time you access the API, put them in a config file like so:
[Credentials]
google_account_email = youraccount@gmail.com
google_account_password = yourpassword
Name this file '.pythongoogleanalytics' and put it in your home directory.
And from an interactive prompt type:
from googleanalytics import Connection
import datetime
connection = Connection() # pass in id & pw as strings **if** not in config file
account = connection.get_account(<*your GA profile ID goes here*>)
start_date = datetime.date(2009, 12, 1)
end_date = datetime.date(2009, 12, 13)
# account object does the work, specify what data you want w/
# 'metrics' & 'dimensions'; see 'USAGE.md' file for examples
account.get_data(start_date=start_date, end_date=end_date, metrics=['visits'])
The get_account method returns an account object (bound to the variable 'account' above); the subsequent get_data call on it returns a python list that contains your data.
You need 3 files within the app: client_secrets.json, analytics.dat and google_auth.py.
Create a module Query.py within the app:
class Query(object):
    def __init__(self, startdate, enddate, filter, metrics):
        self.startdate = startdate.strftime('%Y-%m-%d')
        self.enddate = enddate.strftime('%Y-%m-%d')
        self.filter = "ga:medium=" + filter
        self.metrics = metrics

Example models.py (has the following function):
import google_auth

service = google_auth.initialize_service()


def total_visit(self):
    object = AnalyticsData.objects.get(utm_source=self.utm_source)
    trial = Query(object.date.startdate, object.date.enddate, object.utm_source, "ga:sessions")
    result = service.data().ga().get(ids='ga:<your-profile-id>', start_date=trial.startdate, end_date=trial.enddate, filters=trial.filter, metrics=trial.metrics).execute()
    total_visit = result.get('rows')
    <yr save command, ColumnName.object.create(data=total_visit) goes here>
