I am trying to use the Strava API in a Flask project. I followed the instructions in a Stack Overflow answer and installed swagger_client with
swagger-codegen generate -i https://developers.strava.com/swagger/swagger.json -l python -o ./StravaPythonClient
as per their instructions. However, when I run the app I still get an error on import swagger_client:
ModuleNotFoundError: No module named 'swagger_client'
My code is here:
import swagger_client
from swagger_client.rest import ApiException
from pprint import pprint

# Configure OAuth2 access token for authorization: strava_oauth
swagger_client.configuration.access_token = 'fe931c21b503a46b61b1000000000000000000000'
# create an instance of the API class
api_instance = swagger_client.StreamsApi()
id = 2284367626  # Long | The identifier of the activity.
#keys = # array[String] | Desired stream types.
keyByType = true  # Boolean | Must be true. (default to true)
try:
    # Get Activity Streams
    api_response = api_instance.getActivityStreams(id, keys, keyByType)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling StreamsApi->getActivityStreams: %s\n" % e)
I'm not sure what packages I should be installing to get this working now.
First install swagger-codegen and check that it's working. This example is for Linux; it's easier on macOS, where you can use Homebrew.
wget https://repo1.maven.org/maven2/io/swagger/swagger-codegen-cli/2.4.13/swagger-codegen-cli-2.4.13.jar -O swagger-codegen-cli.jar
java -jar swagger-codegen-cli.jar help
After that, go into your project and generate the swagger client. The command below specifies Python output and stores it in a folder within the project called generated:
java -jar swagger-codegen-cli.jar generate -i https://developers.strava.com/swagger/swagger.json -l python -o generated
Go into the generated folder and install the requirements
cd generated && python setup.py install --user && cd ..
Change your import statements to refer to the generated folder.
from generated import swagger_client
from generated.swagger_client.rest import ApiException
from pprint import pprint

# Configure OAuth2 access token for authorization: strava_oauth
swagger_client.Configuration.access_token = 'fe931c21b503a46b61b1000000000000000000000'
# create an instance of the API class
api_instance = swagger_client.StreamsApi()
id = 2284367626  # Long | The identifier of the activity.
keys = ['time', 'distance']  # array[String] | Desired stream types (example values)
keyByType = True  # Boolean | Must be true. (default to true)
try:
    # Get Activity Streams
    api_response = api_instance.getActivityStreams(id, keys, keyByType)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling StreamsApi->getActivityStreams: %s\n" % e)
Now you can run the file. PS: when you set the access token, Configuration needs to be written with an upper-case C. Also note that keys must actually be defined (it is commented out in the generated example) and keyByType must be the Python boolean True, not true.
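If the import still fails after this, it helps to check whether Python can actually locate the generated package from where your Flask app runs. A small stdlib-only sketch (the module names are just examples):

```python
import importlib.util

def is_importable(name):
    """Return True if Python can locate a top-level module with this name."""
    return importlib.util.find_spec(name) is not None

# A stdlib module is always found; swagger_client is found only once the
# generated client has been installed or its folder is on sys.path.
print(is_importable("json"))  # True
print(is_importable("swagger_client"))
```

If the second line prints False, the generated client isn't visible to the interpreter your app uses, which usually means it was installed for a different Python or the install step was skipped.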
I've succeeded in performing a quick tap using Culebra-Client by following code from this Stack Overflow answer, but I still don't understand how to perform a multi-touch gesture.
A quick example using the newly introduced API in 2.0.47 to handle multi-touch. It uses MultiTouch Tester to show the touches, but you can use any app.
#! /usr/bin/env python3
from pprint import pprint

from culebratester_client import PerformTwoPointerGestureBody, Point
from culebratester_client.rest import ApiException
from com.dtmilano.android.viewclient import ViewClient

helper = ViewClient(*ViewClient.connectToDeviceOrExit(), useuiautomatorhelper=True).uiAutomatorHelper

# multitouch tester
id = 'com.the511plus.MultiTouchTester:id/touchStr'
oid = helper.ui_device.find_object(ui_selector=f'res#{id}').oid

api_instance = helper.api_instance
try:
    body = PerformTwoPointerGestureBody(
        start_point1=Point(300, 100),
        start_point2=Point(900, 100),
        end_point1=Point(300, 1600),
        end_point2=Point(900, 1600),
        steps=500,
    )
    api_response = api_instance.ui_object_oid_perform_two_pointer_gesture_post(oid, body=body)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling DefaultApi->ui_object_oid_perform_two_pointer_gesture_post: %s\n" % e)
Once you run it, the two pointers will be recognized by the app.
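Conceptually, the steps parameter controls how many intermediate positions each pointer moves through between its start and end point. A stdlib-only sketch of that linear interpolation (this is an illustration of the idea, not part of the CulebraTester API):

```python
def interpolate(start, end, steps):
    """Yield (x, y) positions from start to end, inclusive, in `steps` increments."""
    sx, sy = start
    ex, ey = end
    for i in range(steps + 1):
        t = i / steps
        yield (sx + (ex - sx) * t, sy + (ey - sy) * t)

# Pointer 1 of the gesture above: (300, 100) -> (300, 1600), shown here with 5 steps
path = list(interpolate((300, 100), (300, 1600), 5))
print(path[0])   # (300.0, 100.0)
print(path[-1])  # (300.0, 1600.0)
```

A larger steps value (like the 500 in the gesture body) simply produces a smoother, slower swipe.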
I basically want to run this command: argo submit -n argo workflows/workflow.yaml -f params.json through the official Python SDK.
This example covers how to submit a workflow manifest, but I don't know where to add the input parameter file.
import os
from pprint import pprint
import yaml
from pathlib import Path
import argo_workflows
from argo_workflows.api import workflow_service_api
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow_create_request import \
IoArgoprojWorkflowV1alpha1WorkflowCreateRequest
configuration = argo_workflows.Configuration(host="https://localhost:2746")
configuration.verify_ssl = False
with open("workflows/workflow.yaml", "r") as f:
    manifest = yaml.safe_load(f)
api_client = argo_workflows.ApiClient(configuration)
api_instance = workflow_service_api.WorkflowServiceApi(api_client)
api_response = api_instance.create_workflow(
namespace="argo",
body=IoArgoprojWorkflowV1alpha1WorkflowCreateRequest(workflow=manifest, _check_type=False),
_check_return_type=False)
pprint(api_response)
Where to pass in the params.json file?
I found this snippet in the docs of WorkflowServiceApi.md (which was apparently too big to render as markdown):
import argo_workflows
from argo_workflows.api import workflow_service_api
from argo_workflows.model.grpc_gateway_runtime_error import GrpcGatewayRuntimeError
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow_submit_request import IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow import IoArgoprojWorkflowV1alpha1Workflow
# The next two imports are missing from the docs snippet; the module paths
# follow the generated client's naming convention.
from argo_workflows.model.io_argoproj_workflow_v1alpha1_submit_opts import IoArgoprojWorkflowV1alpha1SubmitOpts
from argo_workflows.model.owner_reference import OwnerReference
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:2746
# See configuration.py for a list of all supported configuration parameters.
configuration = argo_workflows.Configuration(
    host="http://localhost:2746"
)

# Enter a context with an instance of the API client
with argo_workflows.ApiClient() as api_client:
    # Create an instance of the API class
    api_instance = workflow_service_api.WorkflowServiceApi(api_client)
    namespace = "namespace_example"  # str |
    body = IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest(
        namespace="namespace_example",
        resource_kind="resource_kind_example",
        resource_name="resource_name_example",
        submit_options=IoArgoprojWorkflowV1alpha1SubmitOpts(
            annotations="annotations_example",
            dry_run=True,
            entry_point="entry_point_example",
            generate_name="generate_name_example",
            labels="labels_example",
            name="name_example",
            owner_reference=OwnerReference(
                api_version="api_version_example",
                block_owner_deletion=True,
                controller=True,
                kind="kind_example",
                name="name_example",
                uid="uid_example",
            ),
            parameter_file="parameter_file_example",
            parameters=[
                "parameters_example",
            ],
            pod_priority_class_name="pod_priority_class_name_example",
            priority=1,
            server_dry_run=True,
            service_account="service_account_example",
        ),
    )  # IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest |

    # example passing only required values which don't have defaults set
    try:
        api_response = api_instance.submit_workflow(namespace, body)
        pprint(api_response)
    except argo_workflows.ApiException as e:
        print("Exception when calling WorkflowServiceApi->submit_workflow: %s\n" % e)
Have you tried using an IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest? It looks like it has submit_options of type IoArgoprojWorkflowV1alpha1SubmitOpts, which has a parameter_file param.
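If you want to stay with create_workflow instead, another option (my assumption, mirroring what `argo submit -p name=value` does) is to read params.json yourself and pass the values as `name=value` strings via the parameters field of SubmitOpts. The conversion itself is plain stdlib:

```python
import json

def params_to_args(raw_json):
    """Convert a params.json mapping into Argo-style 'name=value' strings."""
    params = json.loads(raw_json)
    return [f"{key}={value}" for key, value in params.items()]

# e.g. the contents of params.json
raw = '{"message": "hello", "count": 3}'
print(params_to_args(raw))  # ['message=hello', 'count=3']
```

The resulting list would go into `IoArgoprojWorkflowV1alpha1SubmitOpts(parameters=...)`; whether you prefer this or parameter_file is mostly a question of where the file lives relative to the API server.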
I've used startup scripts on Google Cloud Compute Instances:
setsid python home/junaid_athar/pull.py
And I can run the following script on the VM without issue when logged in at the root directory:
setsid python3 home/junaid_athar/btfx.py
However, when I add setsid python3 home/junaid_athar/btfx.py to the startup script, it throws an error saying:
ImportError: cannot import name 'opentype'
The same script runs fine when I'm logged in, but not when I run it as a startup script. Why, and how do I resolve it?
Update: I'm pretty new to programming, and hack away. Here's the script:
import logging
import time
import sys
import json

from btfxwss import BtfxWss
from google.cloud import bigquery

log = logging.getLogger(__name__)
fh = logging.FileHandler('/home/junaid_athar/test.log')
fh.setLevel(logging.CRITICAL)
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.CRITICAL)
log.addHandler(sh)
log.addHandler(fh)
logging.basicConfig(level=logging.DEBUG, handlers=[fh, sh])

def stream_data(dataset_id, table_id, json_data):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    table_ref = dataset_ref.table(table_id)
    data = json.loads(json_data)
    # Get the table from the API so that the schema is available.
    table = bigquery_client.get_table(table_ref)
    rows = [data]
    errors = bigquery_client.create_rows(table, rows)

wss = BtfxWss()
wss.start()

while not wss.conn.connected.is_set():
    time.sleep(2)

# Subscribe to some channels
wss.subscribe_to_trades('BTCUSD')

# Do something else
t = time.time()
while time.time() - t < 5:
    pass

# Accessing data stored in BtfxWss:
trades_q = wss.trades('BTCUSD')  # returns a Queue object for the pair.

while True:
    while not trades_q.empty():
        item = trades_q.get()
        if item[0][0] == 'te':
            json_data = {'SEQ': item[0][0], 'ID': item[0][1][0], 'TIMESTAMP': int(str(item[0][1][1])[:10]), 'PRICE': item[0][1][3], 'AMOUNT': item[0][1][2], 'UNIQUE_TS': item[0][1][1], 'SOURCE': 'bitfinex'}
            stream_data('gdax', 'btfxwss', json.dumps(json_data))

# Unsubscribing from channels:
wss.unsubscribe_from_trades('BTCUSD')

# Shutting down the client:
wss.stop()
I'm running it on a standard 1-CPU, 3.75 GB memory machine (Debian GNU/Linux 9 (stretch)).
I THINK the problem is the install directory of python3 and its modules, and the difference between how startup scripts are run vs. being logged into the machine. How do I troubleshoot that?
Figured out what was going on: startup scripts are run as root. I added -u username to the start of the startup script, and it ran as though I were SSH'd into the server. All is good, thanks all for your help!
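For anyone hitting something similar: environment mismatches like this become visible if you log the interpreter's identity and module search path from inside the script, then compare the output from a login shell against the startup-script run. A stdlib-only sketch:

```python
import sys

def environment_report():
    """Collect interpreter details that typically differ between a login
    shell and a root startup script."""
    return {
        "executable": sys.executable,       # which python binary is actually running
        "version": sys.version.split()[0],  # e.g. '3.9.2'
        "search_paths": list(sys.path),     # where modules are looked up
    }

report = environment_report()
print(report["executable"])
print(report["version"])
for path in report["search_paths"]:
    print(path)
```

If the two runs show different executables or a missing `~/.local/lib/...` entry in search_paths, the startup script is using a different Python (or missing user-installed packages), which explains errors like the `opentype` ImportError above.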
So, in the Python SDK for speaker recognition using Microsoft Cognitive Services, in CreateProfile.py I set my subscription key under the variable subscritionKey (note: the value assigned in this example isn't my actual key). But when I pass it into one of the parameters of the function create_profile, I get the error:
Exception: Error creating profile: {"error":{"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."}}
Is there a way I can pass my subscritionKey without having to input it through the terminal each time?
import IdentificationServiceHttpClientHelper
import sys

subscritionKey = "j23h4i32h4iu3324iu234h233b43"

def create_profile(subscription_key, locale):
    """Creates a profile on the server.

    Arguments:
    subscription_key -- the subscription key string
    locale -- the locale string
    """
    helper = IdentificationServiceHttpClientHelper.IdentificationServiceHttpClientHelper(
        subscription_key)
    creation_response = helper.create_profile(locale)
    print('Profile ID = {0}'.format(creation_response.get_profile_id()))

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print('Usage: python CreateProfile.py <subscription_key>')
        print('\t<subscription_key> is the subscription key for the service')
        #sys.exit('Error: Incorrect Usage.')
    create_profile(subscritionKey, 'en-us')
My guess is that I'm getting issues because I'm passing it as a string :/
Your question is basically how to consume the SDK, right?
The following code works for me: In a file called main.py that is one level out of the folder Identification:
import sys
sys.path.append('./Identification')
from CreateProfile import create_profile
subscriptionKey = "<YOUR-KEY>"
create_profile(subscriptionKey, "en-us")
Running python main.py (with Python 3), that code returns
Profile ID = cf04bf79-xxxx-xxxxx-xxxx
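To avoid hard-coding the key in source at all (a common alternative, not specific to this SDK), you can read it from an environment variable; the variable name SPEAKER_API_KEY below is my own choice:

```python
import os

def get_subscription_key():
    """Read the key from the environment, failing loudly if it's missing."""
    key = os.environ.get("SPEAKER_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("Set the SPEAKER_API_KEY environment variable first")
    return key

# For demonstration only; normally you'd export this in your shell profile.
os.environ["SPEAKER_API_KEY"] = "j23h4i32h4iu3324iu234h233b43"
print(get_subscription_key())
```

Then main.py becomes `create_profile(get_subscription_key(), "en-us")`, and the key never needs to live in the repository or be typed at the terminal.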
I am getting the following import error on Mac:
ImportError: No module named Conf_Reader
Here are the first few lines of my Python code:
import dotenv
import os
import testrail
import Conf_Reader
#setup the testrail client and connect to the testrail instance
def get_testrail_client():
testrail_file = os.path.join(os.path.dirname(__file__),'testrail.env')
testrail_url = Conf_Reader.get_value(testrail_file,'TESTRAIL_URL')
client = testrail.APIClient(testrail_url)
..
..
..
So far I've tried pip and haven't been able to find any source for installing it.
I have the same problem on Mac.
To avoid pulling in another dependency, you can skip using env files and pass the values as variables instead:
# create a credential.py
TESTRAIL_URL = 'https://testrail.com/testrail'
TESTRAIL_USER = 'xxxxx'
TESTRAIL_PASSWORD = 'xxxxx'

# in your update_testrail.py
from credential import TESTRAIL_URL, TESTRAIL_USER, TESTRAIL_PASSWORD

testrail_url = TESTRAIL_URL
client = testrail.APIClient(testrail_url)
# Get and set the TestRail user and password
client.user = TESTRAIL_USER
client.password = TESTRAIL_PASSWORD
They should have linked to https://bangladroid.wordpress.com/2016/08/20/create-separate-credential-files-for-selenium-python/
where it explains that you make your own Conf_Reader.py file as below:
"""
A simple conf reader.
For now, we just use dotenv and return a key.
"""
import dotenv,os
def get_value(conf,key):
# type: (object, object) -> object
"Return the value in conf for a given key"
value = None
try:
dotenv.load_dotenv(conf)
value = os.environ[key]
except Exception,e:
print 'Exception in get_value'
print 'file: ',conf
print 'key: ',key
return value
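If you'd rather not depend on dotenv at all, a minimal stdlib-only reader for simple KEY=VALUE env files could look like this (a hypothetical replacement I'm sketching, not the blog's code):

```python
def read_env_value(contents, key):
    """Look up `key` in simple KEY=VALUE lines, ignoring blanks and # comments."""
    for line in contents.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        if name.strip() == key:
            return value.strip().strip("'\"")
    return None

sample = """
# testrail.env
TESTRAIL_URL='https://testrail.com/testrail'
"""
print(read_env_value(sample, "TESTRAIL_URL"))  # https://testrail.com/testrail
```

This handles only the plain KEY=VALUE case; dotenv supports more (export prefixes, multi-line values, variable expansion), so it's worth keeping if your env files use those features.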