Currently within the app, requesting an UberX instantly gives you an exact quote price, but in the Python API I couldn't find it; I can only find the range of the cost. Where is the exact quote?
Try the "POST /v1.2/requests/estimate" endpoint.
Example Request
curl -X POST \
-H 'Authorization: Bearer <TOKEN>' \
-H 'Accept-Language: en_US' \
-H 'Content-Type: application/json' \
-d '{
"start_latitude": 37.7752278,
"start_longitude": -122.4197513,
"end_latitude": 37.7773228,
"end_longitude": -122.4272052
}' "https://api.uber.com/v1.2/requests/estimate"
I suggest you pass "product_id" as well, to get the price for the specific product you need. If none is provided, the request defaults to the cheapest product for the given location.
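For instance, the same estimate call with an explicit product_id, using the requests library (a sketch; <TOKEN> and <PRODUCT_ID> are placeholders for your own values):
import requests

resp = requests.post(
    "https://api.uber.com/v1.2/requests/estimate",
    headers={
        "Authorization": "Bearer <TOKEN>",
        "Accept-Language": "en_US",
    },
    json={
        "product_id": "<PRODUCT_ID>",  # placeholder: the product to quote
        "start_latitude": 37.7752278,
        "start_longitude": -122.4197513,
        "end_latitude": 37.7773228,
        "end_longitude": -122.4272052,
    },
)
print(resp.json())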
You will get a response like:
{
"fare": {
"value": 5.73,
"fare_id": "d30e732b8bba22c9cdc10513ee86380087cb4a6f89e37ad21ba2a39f3a1ba960",
"expires_at": 1476953293,
"display": "$5.73",
"currency_code": "USD",
"breakdown": [
{
"type": "promotion",
"value": -2.00,
"name": "Promotion"
},
{
"type": "base_fare",
"notice": "Fares are slightly higher due to increased demand",
"value": 7.73,
"name": "Base Fare"
}
]
},
"trip": {
"distance_unit": "mile",
"duration_estimate": 540,
"distance_estimate": 2.39
},
"pickup_estimate": 2
}
Related to the Python SDK, please check: https://developer.uber.com/docs/riders/ride-requests/tutorials/api/python. You need to authenticate your user, then get the product you want to use, then get the upfront fare (if the product supports it: the upfront_fare_enabled field is set to true). After that you can book a ride. The code for how to do it is in the doc link as well:
# Get products for a location
response = client.get_products(37.77, -122.41)
products = response.json.get('products')
product_id = products[0].get('product_id')
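# If the first product doesn't support upfront fares, you could instead pick
# one that does (a sketch; upfront_fare_enabled comes from the product list
# response mentioned above)
product_id = next(
    (p['product_id'] for p in products if p.get('upfront_fare_enabled')),
    product_id
)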
# Get upfront fare and start/end locations
estimate = client.estimate_ride(
product_id=product_id,
start_latitude=37.77,
start_longitude=-122.41,
end_latitude=37.79,
end_longitude=-122.41,
seat_count=2
)
fare = estimate.json.get('fare')
# Request a ride with upfront fare and start/end locations
response = client.request_ride(
product_id=product_id,
start_latitude=37.77,
start_longitude=-122.41,
end_latitude=37.79,
end_longitude=-122.41,
seat_count=2,
fare_id=fare['fare_id']
)
request = response.json
request_id = request.get('request_id')
# Request ride details using `request_id`
response = client.get_ride_details(request_id)
ride = response.json
# Cancel a ride
response = client.cancel_ride(request_id)
ride = response.json
I am trying to create a way to actively monitor sales/listings. I have created a POST API call that queries the website, and it is currently successfully pulling down JSON data containing the item that sold, the price, the time, the seller, and the buyer.
Here's an example of the data returned in the JSON.
Item: 1
Price: 50$
Sold Date: 10/28/2021 10:00AM
Seller: John
Buyer: Frank
Say this script runs every 5 minutes and prints out the last sale. At 10:05, it sees Item 1 sold, so it prints out the data. At 10:10, no new items have sold, so it prints out Item 1 again. I am looking for a way to only print when Item 2 sells (i.e., when the JSON data has been updated), but I am having trouble figuring out the best way to handle this logic in Python.
Would you just use date/time minus the last 5 minutes? Or is there a better way?
The code is simple:
import json
import requests

asset_url = 'https://www.sample.com/api/'
seller_payload = json.dumps({
"name": "find",
"arguments": [
{
"database": "prod",
"data": "SALES",
"query": {
"seller": {
"$in": [
"sellerid"
]
},
},
"sort": {
"epoch": {
"$numberInt": "-1"
}
},
"limit": {
"$numberInt": "1"
}
}
],
"service": "db"
})
seller_response = requests.request("POST", asset_url, headers=profile_headers, data=seller_payload).json()
asset_id = seller_response[0]['asset']
seller = seller_response[0]['seller']
price = seller_response[0]['price']
print(asset_id)
print(seller)
print(price)
If the script runs every 5 minutes, store each transaction in a file. Something like this:
from pickle import dump, load

transactions = []
try:
    # try to read previously stored transactions from file
    with open("transactions.pkl", "rb") as f:
        transactions = load(f)
except (FileNotFoundError, EOFError):
    pass  # first run: no history yet

# Check if the last transaction is the same as it was five minutes ago.
latest = seller_response[0]
fields = ("asset", "seller", "soldDate", "buyer", "item")
if not transactions or any(transactions[-1][k] != latest[k] for k in fields):
    print(latest)
    transactions.append(latest)
    with open("transactions.pkl", "wb") as f:
        dump(transactions, f)
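If you only care about the most recent sale, a lighter-weight alternative is to persist just the timestamp of the last sale you printed. A minimal sketch, assuming each sale record carries the epoch field that the query payload above sorts on:
import json

STATE_FILE = "last_epoch.json"

def is_new_sale(sale):
    # Return True only if this sale is newer than the last one printed.
    try:
        with open(STATE_FILE) as f:
            last_epoch = json.load(f)["epoch"]
    except (FileNotFoundError, KeyError, ValueError):
        last_epoch = -1  # nothing recorded yet
    if sale["epoch"] > last_epoch:
        with open(STATE_FILE, "w") as f:
            json.dump({"epoch": sale["epoch"]}, f)
        return True
    return False

if is_new_sale(seller_response[0]):
    print(seller_response[0])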
I successfully set up the Qualtrics API for exporting contacts, working from the example script on their website. My problem is that the API only exports 100 contacts at a time. It seems that if I use the URL from the nextPage field in my initial JSON export to make another API call, I can get another 100, but that isn't ideal. I sometimes have lists with over 10,000 people and cannot do this manually.
I am a Python noob and would like to know how it's possible to use the nextPage URL to receive more than 100 responses at a time.
The API call looks like this:
# List Contacts in Mailing List
import requests
# Setting user Parameters
apiToken = "YOUR API TOKEN"
dataCenter = "YOUR DATACENTER"
directoryId = "POOL_123456"
mailingListId = "CG_123456"
baseUrl = "https://{0}.qualtrics.com/API/v3/directories/{1}/mailinglists/{2}/contacts".format(dataCenter, directoryId, mailingListId)
headers = {
"x-api-token": apiToken,
}
response = requests.get(baseUrl, headers=headers)
print(response.text)
And I receive results similar to this, with only 100 contacts:
{
"meta": {
"httpStatus": "200 - OK",
"requestId": "7de14d38-f5ed-49d0-9ff0-773e12b896b8"
},
"result": {
"elements": [
{
"contactId": "CID_123456",
"email": "js#example.com",
"extRef": "1234567",
"firstName": "James",
"language": "en",
"lastName": "Smith",
"phone": "8005552000",
"unsubscribed": false
},
{
"contactId": "CID_3456789",
"email": "person#example.com",
"extRef": "12345678",
"firstName": "John",
"language": "en",
"lastName": "Smith",
"phone": "8005551212",
"unsubscribed": true
}
],
"nextPage": null
}
}
Does anyone have an idea how I can loop over the nextPage information to get an entire list of contacts, no matter how many sets of 100 it contains? I have lists with tens, hundreds, and thousands of contacts and would like this to work for all of them.
I appreciate all input! Thanks!
Use a while loop and rename baseUrl to nextPage:
nextPage = "https://{0}.qualtrics.com/API/v3/directories/{1}/mailinglists/{2}/contacts".format(dataCenter, directoryId, mailingListId)
while nextPage is not None:
response = requests.get(nextPage, headers=headers)
json_data = json.loads(response.text)
#process json_data
nextPage = json_data['result']['nextPage']
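To collect every contact across all pages into a single list, a minimal variation (assuming the elements and nextPage fields shown in the sample response above):
contacts = []
nextPage = "https://{0}.qualtrics.com/API/v3/directories/{1}/mailinglists/{2}/contacts".format(dataCenter, directoryId, mailingListId)
while nextPage is not None:
    response = requests.get(nextPage, headers=headers)
    result = response.json()['result']
    contacts.extend(result['elements'])
    nextPage = result['nextPage']
print("Fetched {0} contacts".format(len(contacts)))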
I'm trying to hit my geocoding server's REST API:
https://locator.stanford.edu/arcgis/rest/services/geocode/USA_StreetAddress/GeocodeServer (ArcGIS Server 10.6.1)
...using the POST method (which, by the way, could use an example or two; there only seems to be a very brief note on when to use POST, not how: https://developers.arcgis.com/rest/geocode/api-reference/geocoding-geocode-addresses.htm#ESRI_SECTION1_351DE4FD98FE44958C8194EC5A7BEF7D).
I'm trying to use requests.post(), and I think I've managed to get the token accepted, etc., but I keep getting a 400 error.
Based on previous experience, this means something about the formatting of the data is bad, but I've cut and pasted this test pair directly from the Esri support site.
# import the requests library
import requests
# Multiple address records
addresses={
"records": [
{
"attributes": {
"OBJECTID": 1,
"Street": "380 New York St.",
"City": "Redlands",
"Region": "CA",
"ZIP": "92373"
}
},
{
"attributes": {
"OBJECTID": 2,
"Street": "1 World Way",
"City": "Los Angeles",
"Region": "CA",
"ZIP": "90045"
}
}
]
}
# Parameters
# Geocoder endpoint
URL = 'https://locator.stanford.edu/arcgis/rest/services/geocode/USA_StreetAddress/GeocodeServer/geocodeAddresses?'
# token from locator.stanford.edu/arcgis/tokens
mytoken = <GeneratedToken>
# output spatial reference id
outsrid = 4326
# output format
format = 'pjson'
# params data to be sent to api
params ={'outSR':outsrid,'f':format,'token':mytoken}
# Use POST to batch geocode
r = requests.post(url=URL, data=addresses, params=params)
print(r.json())
print(r.text)
Here's what I consistently get:
{'error': {'code': 400, 'message': 'Unable to complete operation.', 'details': []}}
I had to play around with this for longer than I'd like to admit, but the trick (I guess) is to use the correct request header and convert the raw addresses to a JSON string using json.dumps().
import requests
import json

url = 'http://sampleserver6.arcgisonline.com/arcgis/rest/services/Locators/SanDiego/GeocodeServer/geocodeAddresses'
# geocodeAddresses expects form-encoded parameters, not a raw JSON body
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
# the 'addresses' parameter itself must be a JSON string
addresses = json.dumps({'records': [{'attributes': {'OBJECTID': 1, 'SingleLine': '2920 Zoo Dr'}}]})
r = requests.post(url, headers=headers, data={'addresses': addresses, 'f': 'json'})
print(r.text)
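Applied to the original Stanford locator request, that would look something like this (a sketch, reusing the addresses, outsrid, and mytoken variables from the question):
payload = {
    'addresses': json.dumps(addresses),  # the records dict, serialized to a JSON string
    'outSR': outsrid,
    'f': 'json',
    'token': mytoken
}
# requests form-encodes `data` dicts by default, which is what the endpoint expects
r = requests.post(URL, data=payload)
print(r.json())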
For some inexplicable reason, Google provides no Stackdriver API for App Engine, so I'm stuck implementing one. No worries, I thought: I have already worked with the API builder to talk to BigQuery, so I built up a client and started trying to send events:
credentials = SignedJwtAssertionCredentials(STACKDRIVER_AUTH_GOOGLE_CLIENT_EMAIL,
STACKDRIVER_AUTH_GOOGLE_PRIVATE_KEY,
scope='https://www.googleapis.com/auth/trace.append')
http = httplib2.Http()
credentials.refresh(http) #Working around an oauth2client bug
credentials = credentials.authorize(http)
service = build('cloudtrace', 'v1', http=http)
batch = service.new_batch_http_request()
batch.add(service.projects().patchTraces(
body=traces_json,
projectId=STACKDRIVER_AUTH_GOOGLE_PROJECT_ID))
print batch.execute()
I left out the definition of traces_json because no matter what I send, the service always responds with an error. If traces_json = '{}':
{u'error': {u'code': 400,
u'errors': [{u'domain': u'global',
u'message': u'Invalid value at \'traces\' (type.googleapis.com/google.devtools.cloudtrace.v1.Traces), "{}"',
u'reason': u'badRequest'}],
u'message': u'Invalid value at \'traces\' (type.googleapis.com/google.devtools.cloudtrace.v1.Traces), "{}"',
u'status': u'INVALID_ARGUMENT'}}
But even if I use a body crafted from the Google documentation, I still get the same error.
I'm running a packet sniffer on the machine where I'm attempting this, and only very rarely see it actually communicating with googleapis.com.
So the question is, really, what am I missing that will get me sending events to stackdriver?
UPDATE
Here's the most recent iteration of what I'd been working with, though using the google doc example verbatim (with the exception of changing the project id) produces the same result.
{
"traces": [
{
"projectId": "projectname",
"traceId": "1234123412341234aaaabb3412347890",
"spans": [
{
"kind": "RPC_SERVER",
"name": "trace_name",
"labels": {"label1": "value1", "label2": "value2"},
"spanId": "spanId1",
"startTime": "2016-06-01T05:01:23.045123456Z",
"endTime": "2016-06-01T05:01:23.945123456Z",
},
],
},
],
}
And the error message that comes with it:
{u'error': {u'code': 400,
u'errors': [{u'domain': u'global',
u'message': u'Invalid value at \'traces\' (type.googleapis.com/google.devtools.cloudtrace.v1.Traces), "MY ENTIRE JSON IS REPEATED HERE"',
u'reason': u'badRequest'}],
u'message': u'Invalid value at \'traces\' (type.googleapis.com/google.devtools.cloudtrace.v1.Traces), "MY ENTIRE JSON IS REPEATED HERE"',
u'status': u'INVALID_ARGUMENT'}}
SECOND UPDATE
Doing this in the explorer produces approximately the same result. I had to switch to a numeric span_id because, despite the docs' statement that it only has to be a unique string, I get errors requiring what appears to be a 64-bit integer any time I provide anything else.
PATCH https://cloudtrace.googleapis.com/v1/projects/[number or name]/traces?key={YOUR_API_KEY}
{
"traces": [
{
"projectId": "[number or name]",
"traceId": "1234123412341234aaaabb3412347891",
"spans": [
{
"kind": "RPC_SERVER",
"name": "trace_name",
"labels": {
"label1": "value1"
},
"startTime": "2016-06-01T05:01:23.045123456Z",
"endTime": "2016-06-01T05:01:25.045123456Z"
},
{
"spanId": "0"
}
]
}
]
}
Response:
{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"status": "INVALID_ARGUMENT"
}
}
The issue is in the format of your data. You cannot send empty messages either. The best way to explore how to use the API is the StackDriver Trace API explorer, where you will find the exact data structure to send:
https://cloud.google.com/trace/api/reference/rest/v1/projects/patchTraces#traces
Pay special attention to the format of traceId. It needs to be a 32-character hex string, like this: 7d9d1a6e2d1f3f27484992f33d97e5cb
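For instance, a valid traceId can be generated like this (a sketch):
import uuid

trace_id = uuid.uuid4().hex  # 32-character lowercase hex string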
Here is a working Python example on GitHub showing how to use the 3 methods of the StackDriver Trace API: https://github.com/qike/cloud-trace-samples-python
The relevant code is copied below:
def list_traces(stub, project_id):
"""Lists traces in the given project."""
trace_id = None
req = trace_pb2.ListTracesRequest(project_id=project_id)
try:
resp = stub.ListTraces(req, TIMEOUT)
for t in resp.traces:
trace_id = t.trace_id
print("Trace is: {}".format(t.trace_id))
except NetworkError, e:
logging.warning('Failed to list traces: {}'.format(e))
sys.exit(1)
return trace_id
def patch_traces(stub, project_id):
req = trace_pb2.PatchTracesRequest(project_id=project_id)
trace_id = str(uuid.uuid1()).replace('-', '')
now = time.time()
trace = req.traces.traces.add()
trace.project_id = project_id
trace.trace_id = trace_id
span1 = trace.spans.add()
span1.span_id = 1
span1.name = "/span1.{}".format(trace_id)
span1.start_time.seconds = int(now)-10
span1.end_time.seconds = int(now)
span2 = trace.spans.add()
span2.span_id = 2
span2.name = "/span2"
span2.start_time.seconds = int(now)-8
span2.end_time.seconds = int(now)-5
try:
resp = stub.PatchTraces(req, TIMEOUT)
print("Trace added successfully.\n"
"To view list of traces, go to: "
"http://console.cloud.google.com/traces/traces?project={}&tr=2\n"
"To view this trace added, go to: "
"http://console.cloud.google.com/traces/details/{}?project={}"
.format(project_id, trace_id, project_id))
except NetworkError, e:
logging.warning('Failed to patch traces: {}'.format(e))
sys.exit(1)
def get_trace(stub, project_id, trace_id):
req = trace_pb2.GetTraceRequest(project_id=project_id,
trace_id=trace_id)
try:
resp = stub.GetTrace(req, TIMEOUT)
print("Trace retrieved: {}".format(resp))
except NetworkError, e:
logging.warning('Failed to get trace: {}'.format(e))
sys.exit(1)
UPDATE to answer the error received from the API explorer
Regarding the errors you got from the API explorer: they were due to using 0 as the span_id. It should be a 64-bit int other than 0.
I also noticed that the span_id you set is in a different span object than the one you intended. Make sure you didn't click a "+" sign by mistake and add a new span object.
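A sketch for generating a valid non-zero span_id:
import random

# any non-zero 64-bit unsigned integer works; the API accepts it as a string
span_id = str(random.getrandbits(64) or 1)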
Below is a successful patch request I sent to my project through API explorer:
{
"traces": [
{
"projectId": "<project ID>", // I used string ID, not numeric number
"traceId": "1234123412341234aaaabb3412347891",
"spans": [
{
"spanId": "1",
"name": "foo",
"startTime": "2016-06-01T05:01:23.045123456Z",
"endTime": "2016-06-01T05:01:25.045123456Z"
}
]
}
]
}
Response
200
I have a working Flask API and now I want to implement search queries. My understanding is that the filter is specified by the client and the Flask API takes care of applying it.
Flask==0.10.1
Flask-HTTPAuth==2.7.0
Flask-Limiter==0.9.1
Flask-Login==0.3.2
Flask-Mail==0.9.1
Flask-Principal==0.4.0
Flask-Restless==0.17.0
I have followed the documentation and performed my search query, but I still retrieve the same results:
http://flask-restless.readthedocs.org/en/latest/searchformat.html
No filter:
curl -u aaa:bbb -H "Content-Type: application/json" http://0.0.0.0:8080/api/1.0/job/
{
"jobs": [
{
"description": "ESXi job completed",
"reference": "07FC78BCC0",
"status": 1
},
{
"description": "Server discovery failed. Please verify HTTPS/SSH parameters",
"reference": "A6EE28F4C0",
"status": -1
}]
}
Search query based on:
http://flask-restless.readthedocs.org/en/latest/searchformat.html
curl -u aaa:bbb -G -H "Content-Type: application/json" -d '{"filters": [{"name": "description", "op": "like", "val": "%ESXi%"}]}' http://0.0.0.0:8080/api/1.0/job/
Or
curl -u aaa:bbb -G -H "Content-Type: application/json" -d '{"filters": [{"name": "status", "op": "eq", "val":0}]}' http://0.0.0.0:8080/api/1.0/job/
And the same results are shown.
This is my Flask endpoint:
def get_jobs():
"""
:return:
"""
try:
log.info(request.remote_addr + ' ' + request.__repr__())
jobs = Model.Job.query.order_by(desc(Model.Job.job_start)).limit(settings.items_per_page).all()
# =========================================================
# Get JOBS
# =========================================================
values = ['description', 'status', 'reference']
response = [{value: getattr(d, value) for value in values} for d in jobs]
return jsonify(jobs=response)
except Exception, excpt:
log.exception(excpt.__repr__())
response = json.dumps('Internal Server Error. Please try again later')
resp = Response(response, status=500, mimetype='application/json')
return resp
My Model
class Job(db.Model, AutoSerialize, Serializer):
"""
"""
__tablename__ = 'job'
__public__ = ('status','description','reference','job_start','job_end')
id = Column(Integer, primary_key=True, server_default=text("nextval('job_id_seq'::regclass)"))
description = Column(String(200))
reference = Column(String(50))
job_start = Column(DateTime)
job_end = Column(DateTime)
fk_server = Column(ForeignKey(u'server.id'))
owner_id = Column(ForeignKey(u'auth_user.id'))
is_cluster = Column(Boolean)
host_information = Column(String(1024))
status = Column(Integer, nullable=False)
owner = relationship(u'AuthUser')
server = relationship(u'Server')
def serialize(self):
"""
:return:
"""
d = Serializer.serialize(self)
return d
Do I need to change anything?
Maybe having __public__ as a Job attribute is interfering with the way the filtering works. There's a warning in the Flask-Restless documentation about this.
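Also worth noting: filters in the search format are only interpreted by endpoints that Flask-Restless itself creates; the hand-written get_jobs() view above never reads them, so it will always return the same results. A minimal Flask-Restless-managed endpoint would look something like this (a sketch, assuming your existing app and db objects):
import flask_restless

manager = flask_restless.APIManager(app, flask_sqlalchemy_db=db)
# exposes GET /api/1.0/job with support for the search query format
manager.create_api(Job, methods=['GET'], url_prefix='/api/1.0')
Note that Flask-Restless expects the search JSON in a q query parameter (e.g. ...?q={"filters": [...]}), not as a raw request body.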