I'm building a standalone proximity search tool in Python 2.7 and Tkinter (the intent is to distribute it using py2exe and NSIS) that takes a center-point address, queries the database, and returns all addresses in the database within a certain range.
This is my first time venturing into the Google API, and I am extremely confused about how to use it to retrieve this data.
I followed the code here: http://www.libertypages.com/clarktech/?p=315
and receive nothing but a 610 error code.
I tried this URL instead, based on a question here on Stack Overflow, and receive an Access Denied error: http://maps.googleapis.com/maps/api/geocode/xml?
I've set up the project in the API Console, enabled the Maps service, added both a browser and a server API key, tried them both, and failed.
I've spent all morning poring through the API documentation, and I can find NOTHING that tells me what URL to specify for a simple API information request for Google Maps API v3.
This is the actual code of my function (a slightly modified version of what I linked above, with some debugging output mixed in). When I ran it with http://maps.google.com/maps/geo? I received 610,0,0,0:
def get_location(self, query):
    params = {}
    params['key'] = "key from console"  # the actual key, of course, is not provided here
    params['sensor'] = "false"
    params['address'] = query
    params = urllib.urlencode(params)
    print "http://maps.googleapis.com/maps/api/geocode/json?%s" % params
    try:
        f = urllib.urlopen("http://maps.googleapis.com/maps/api/geocode/json?%s" % params)
Everything runs without crashing; I just wind up with "610" and "Fail" as the lat and long for all the addresses in my database. :/
I've tried a server app API key and a browser app API key. An OAuth client ID isn't an option, because I don't want my users to have to grant access.
I'm REALLY hoping someone will just say "go read this document, you moron" and I can shuffle off with my tail between my legs.
UPDATE: I found this: https://developers.google.com/maps/documentation/geocoding/ and implemented the changes it suggests. No change, except that it now takes longer to give me the "ACCESS DENIED" response.
The API key was causing it to fail. I took it out of the query-parameter dictionary, and it simply works as an anonymous request.
params = {}
params['sensor'] = "false"
params['address'] = query
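For completeness, here is a minimal sketch of how the working anonymous request gets assembled (the address is just an example; urlencode lives in urllib on Python 2.7 and in urllib.parse on Python 3):

```python
try:
    from urllib import urlencode        # Python 2.7, as used in the code above
except ImportError:
    from urllib.parse import urlencode  # Python 3

# Example address; any query works the same way
params = {'sensor': 'false',
          'address': '1600 Amphitheatre Parkway, Mountain View, CA'}
url = "http://maps.googleapis.com/maps/api/geocode/json?%s" % urlencode(params)
```

Fetching that URL returns a JSON document; the coordinates are under results[0]['geometry']['location'].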
I have a Cosmos DB instance running the Gremlin API, and I am using the Python SDK on Python 3.7.
Currently I am trying to build a graph that models papers with topics and authors, by connecting each object to the others. In my code I scrape the web for some papers, then model them like so:
insert_paper_query = '''g.addV('paper').property('id', '{}').property('title', '{}').property('abstract', '{}')
.property('time_processed', '{}').property('url', '{}').property('pk', '{}')'''.format(
    item['id'], item['title'], item['abstract'], time_processed, item['url'], item['id'])
callback = paper_graph_client.submitAsync(insert_paper_query)
print_status_attributes(callback.result())
if callback.result() is not None:
    logging.info("Paper stored in Gremlin graph successfully: {}".format(new_paper))
    # Assess the author and topic vertices, then create edges between them
    logging.info('Processing author vertices for paper {}'.format(item['id']))
    for author in item['authors']:
        add_author_query = "g.addV('author').property('name', '{}')".format(author)
        callback = paper_graph_client.submitAsync(add_author_query)
        add_author_edge_query = "g.V('{}').addE('authoredBy').to(g.V('{}'))".format(item['id'], author)
        callback = paper_graph_client.submitAsync(add_author_edge_query)
    logging.info('Processing topic vertices for paper {}'.format(item['id']))
The values that are being inserted are taken from the dictionary that models a paper, for example:
{
    "id": "2161355c-8ac1-4b96-abde-e0c221b4c3b9",
    "title": "Some boring title",
    "abstract": "Some boring abstract",
    "time_processed": 1647210602,
    "url": "https://paperarchive.com",
    "authors": [
        "author1",
        "author2",
        "author3"
    ]
}
In theory this should work. I've tested these queries on the Azure Portal with some dummy data, and they work fine: the vertices and edges are added and connected as expected. Yet when I run my code, nothing is added to the database. But there aren't any errors either; the code keeps running as if nothing happened. Even the callback object doesn't report anything amiss. I've double-checked the connection and the queries, and everything is perfectly in order.
I'm really struggling to find out why none of these queries work in Python when they are correct and do work on the portal. Am I just daft and missing something obvious, or is there something else wrong here?
I discovered the root cause of this.
Errors were being thrown, but for some reason they would not appear in the normal logging stack trace.
The root cause of this issue was the fact that I did not explicitly state the serializer when initialising my client object. The serializer defaulted to GraphSON v3, which is not currently supported by Cosmos DB.
Changing this to use V2 solved this problem:
def authenticate_gremlin_client():
    paper_graph_client = client.Client(
        'wss://<my database>.gremlin.cosmos.azure.com:443/', 'g',
        username="/dbs/publications/colls/papers",
        password="<my password>",
        message_serializer=serializer.GraphSONSerializersV2d0())
    return paper_graph_client
I'll preface this by saying I'm fairly new to BigQuery. I'm running into an issue when trying to schedule a query using the Python SDK. I used the example on the documentation page and modified it a bit but I'm running into errors.
Note that my query does use scripting to set some variables, and it's using a MERGE statement to update one of my tables. I'm not sure if that makes a huge difference.
def create_scheduled_query(dataset_id, project, name, schedule, service_account, query):
    parent = transfer_client.common_project_path(project)
    transfer_config = bigquery_datatransfer.TransferConfig(
        destination_dataset_id=dataset_id,
        display_name=name,
        data_source_id="scheduled_query",
        params={
            "query": query
        },
        schedule=schedule,
    )
    transfer_config = transfer_client.create_transfer_config(
        bigquery_datatransfer.CreateTransferConfigRequest(
            parent=parent,
            transfer_config=transfer_config,
            service_account_name=service_account,
        )
    )
    print("Created scheduled query '{}'".format(transfer_config.name))
I was able to successfully create a query with the function above. However, the query errors out with the following message:
Error code 9 : Dataset specified in the query ('') is not consistent with Destination dataset '{my_dataset_name}'.
I've tried passing in "" as the dataset_id parameter, but I get the following error from the Python SDK:
google.api_core.exceptions.InvalidArgument: 400 Cannot create a transfer with parent projects/{my_project_name} without location info when destination dataset is not specified.
Interestingly enough I was able to successfully create this scheduled query in the GUI; the same query executed without issue.
I saw that the GUI showed the scheduled query's "Resource name" referenced a transferConfig, so I used the following command to see what that transferConfig looked like, to see if I could apply the same parameters using my Python script:
bq show --format=prettyjson --transfer_config {my_transfer_config}
Which gave me the following output:
{
    "dataSourceId": "scheduled_query",
    "datasetRegion": "us",
    "destinationDatasetId": "",
    "displayName": "test_scheduled_query",
    "emailPreferences": {},
    "name": "{REDACTED_TRANSFER_CONFIG_ID}",
    "nextRunTime": "2021-06-18T00:35:00Z",
    "params": {
        "query": ....
So it looks like the GUI was able to use "" for destinationDatasetId, but for whatever reason the Python SDK won't let me use that value.
Any help would be appreciated, since I prefer to avoid the GUI whenever possible.
UPDATE:
This does appear to be related to the scripting I used in my query. I removed the scripts from the query and it's working. I'm going to leave this open because I feel this should be possible using the SDK, since the query with scripting works in the console without issue.
This same thing also threw me for a loop, but I managed to figure out what was wrong. The problem is with the
parent = transfer_client.common_project_path(project)
line given in the example. By default, this returns something of the form projects/{project_id}. However, the CreateTransferConfigRequest documentation says of the parent parameter:
The BigQuery project id where the transfer configuration should be created. Must be in the format projects/{project_id}/locations/{location_id} or projects/{project_id}. If specified location and location of the destination bigquery dataset do not match - the request will fail.
Sure enough, if you use the projects/{project_id}/locations/{location_id} format instead, it resolves the error and allows you to pass a null destination_dataset_id.
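Put differently, the only change needed is the shape of the parent string. A sketch of the two formats (the project id and location below are placeholders):

```python
project_id = "my-project"  # placeholder
location = "us"            # must match the destination dataset's region

# What common_project_path(project_id) produces; fails when no
# destination dataset is specified:
parent_without_location = "projects/{}".format(project_id)

# What common_location_path(project_id, location) produces; accepts a
# null destination_dataset_id:
parent_with_location = "projects/{}/locations/{}".format(project_id, location)
```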
I had the exact same issue. The fix is as below.
The below method returns projects/{project_id}:
parent = transfer_client.common_project_path(project_id)
Instead, use the below method, which returns projects/{project}/locations/{location}:
parent = transfer_client.common_location_path(project_id , "EU")
With the above change, I am able to schedule a script in BQ.
I am fetching a subscription's Secure Score using the Microsoft Azure Security Center (ASC) Management Client Library. All operations in the library state that
You should not instantiate directly this class, but create a Client instance that will create it for you and attach it as attribute.
Therefore, I am creating a SecurityCenter client with the following specification:
SecurityCenter(credentials, subscription_id, asc_location, base_url=None)
However, it seems the only way to get the asc_location information properly is to use the SecurityCenter client to fetch it. The spec says the same as the quote above: You should not instantiate.... So I am stuck: I can't create the client because I need the ASC location, and I need the client to get the ASC locations.
The documentation mentions
The location where ASC stores the data of the subscription; can be retrieved from Get locations
Googling and searching through the Python SDK docs for this "Get locations" gives me nothing (other than the REST API). Have I missed something? Are we supposed to hard-code the location, as in this SO post or this GitHub issue from the SDK repository?
As the official API reference for list locations indicates:
The location of the responsible ASC of the specific subscription (home
region). For each subscription there is only one responsible location.
It will not change, so you can hardcode this value if you already know the asc_location of your subscription.
But each subscription may have a different asc_location value (my two Azure subscriptions have different asc_location values).
So if you have a lot of Azure subscriptions, you can query asc_location via the REST API (as far as I know, this is the only way to do it) and then use the SDK to get the Secure Score. Try the code below:
from azure.mgmt.security import SecurityCenter
from azure.identity import ClientSecretCredential
import requests

TENANT_ID = ''
CLIENT = ''
KEY = ''
subscription_id = ''

getLocationsURL = "https://management.azure.com/subscriptions/" + subscription_id + "/providers/Microsoft.Security/locations?api-version=2015-06-01-preview"

credentials = ClientSecretCredential(
    client_id=CLIENT,
    client_secret=KEY,
    tenant_id=TENANT_ID
)

# Request the asc_location for the subscription
azure_access_token = credentials.get_token('https://management.azure.com/.default')
r = requests.get(getLocationsURL, headers={"Authorization": "Bearer " + azure_access_token.token}).json()
location = r['value'][0]['name']
print("location:" + location)

client = SecurityCenter(credentials, subscription_id, asc_location=location)
for score in client.secure_scores.list():
    print(score)
I recently came across this problem.
Based on my observation, I can use any location under my subscription to initiate the SecurityCenter client. Then client.locations.list() gives me exactly one ASC location.
# Any location from SubscriptionClient.subscriptions.list_locations will do
location = 'eastasia'
client = SecurityCenter(
    credential, my_subscription_id,
    asc_location=location
)
data = client.locations.list().next().as_dict()
pprint(f"Asc location: {data}")
In my case, it's always westcentralus, regardless of my input being eastasia.
Note that you'll get an exception if you use get instead of list:
data = client.locations.get().as_dict()
pprint(f"Asc location: {data}")
# azure.core.exceptions.ResourceNotFoundError: (ResourceNotFound) Could not find location 'eastasia'
So what I did was a bit awkward:
1. Create a SecurityCenter client using any location under my subscription.
2. Call client.locations.list() to get the ASC location.
3. Use the retrieved ASC location to create the SecurityCenter client again.
I ran into this recently too, and initially did something based on #stanley-gong's answer. But it felt a bit awkward, and I checked to see how the Azure CLI does it. I noticed that they hardcode a value for asc_location:
def _cf_security(cli_ctx, **_):
    from azure.cli.core.commands.client_factory import get_mgmt_service_client
    from azure.mgmt.security import SecurityCenter
    return get_mgmt_service_client(cli_ctx, SecurityCenter, asc_location="centralus")
And the PR implementing that provides some more context:
we have a task to remove the asc_location from the initialization of the clients. currently we hide the asc_location usage from the user.
centralus is just an arbitrary value and is our most common region.
So... maybe the dance of double-initializing a client or pulling a subscription's home region isn't buying us anything?
I am currently learning AWS, Lambda, and Python all at once.
It had been going well until now: I am trying to get an id from the browser.
https://[apinumber].execute-api.ap-northeast-1.amazonaws.com/prod?id=1
I have put the right settings (I think) in AWS:
I have added this setting under Resources -> Method Request -> URL Query String Parameters.
What would be the best way to get this id?
I have tried many ways but haven't found a solution yet.
I have been stuck on this for the last few days.
# always start with the lambda_handler
def lambda_handler(event, context):
    # get page id
    page_id = event['id']
    if page_id:
        return page_id
    else:
        return 'this page id is empty'
Help would be highly appreciated.
Found the answer.
I needed to add a mapping template under Integration Request -> Mapping Templates (lower down on the page):
{
    "id": "$input.params('id')"
}
The function is now working.
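For what it's worth, an alternative to the mapping template is to enable Lambda proxy integration, in which case API Gateway passes query strings under event['queryStringParameters'] instead of top-level keys. A sketch, exercised against a hand-made stand-in for the event API Gateway would send:

```python
def lambda_handler(event, context):
    # With proxy integration, query parameters arrive in a nested dict
    params = event.get('queryStringParameters') or {}
    page_id = params.get('id')
    if page_id:
        return {'statusCode': 200, 'body': page_id}
    return {'statusCode': 400, 'body': 'this page id is empty'}

# Simulated proxy event for ...?id=1
result = lambda_handler({'queryStringParameters': {'id': '1'}}, None)
```

Note that with proxy integration the return value must be a dict with statusCode and body, as above.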
I'm trying to create an OrientDB graph database using PyOrient, and I can't find enough documentation to allow me to get Functions working. I've been able to create a function using record_create into the ofunction cluster, but although it doesn't crash, it doesn't appear to work either.
Here's my code:
#!/usr/bin/python
import pyorient
ousername="user"
opassword="pass"
client = pyorient.OrientDB("localhost", 2424)
session_id = client.connect( ousername, opassword )
db_name="database"
client.db_create( db_name, pyorient.DB_TYPE_GRAPH, pyorient.STORAGE_TYPE_PLOCAL )
# Set up the schema of the database
client.command( "create class URL extends V" )
client.command( "CREATE PROPERTY URL.url STRING")
client.command( "CREATE PROPERTY URL.id INTEGER")
client.command( "CREATE SEQUENCE urlseq")
client.command( "CREATE INDEX urls ON URL (url) UNIQUE")
# Get the id numbers of all the clusters
info=client.db_reload()
clusters={}
for c in info:
    clusters[c.name] = c.id
print(clusters)
# Construct a test function
# All this should do is create a new URL vertex. Eventually it will check for uniqueness of url, etc.
code="INSERT INTO URL SET id = sequence('urlseq').next(), url='?'"
addURL_func = { '#OFunction': { 'name': 'addURL', 'code':'orient.getGraph().command("sql","%s",[urlparam]);' % code, 'language':'javascript', 'parameters':'urlparam', 'idempotent':False } }
client.record_create( clusters['ofunction'], addURL_func )
# Assume allURLs contains the list of URLs I want to store
for url in allURLs:
    client.command("select addURL('%s')" % url)
vs = client.command("select * from URL")
for v in vs:
    print(v.url)
Doing all the select addURL bits runs happily, but select * from URL simply times out, presumably because (as I've discovered by examining the database in Studio) there are still no URL vertices. Although why that should time out rather than returning an empty list or giving a useful error message, I'm not sure.
What am I doing wrong, and is there an easier way to create Functions through PyOrient?
I don't want to just write the Functions in Studio, because I am prototyping and want them written from the Python code rather than being lost every time I drop the mangled experimental graph!
I've mainly been using the OrientDB wiki page to find out about OrientDB functions, and the PyOrient github page as almost my only source of documentation for that.
Edit: I've been able to create a working Function in SQL (see my own answer below) but I still can't create a working Javascript Function which creates a vertex. My current best attempt is:
code2="""var g=orient.getGraph();g.command('sql','CREATE VERTEX URL SET id = sequence(\\"urlseq\\").next(), url = \\"'+urlparam+'\\"',[urlparam]);"""
myFunction2 = 'CREATE FUNCTION addURL2 "' + code2 + '" parameters [urlparam] idempotent false language javascript'
client.command(myFunction2)
which runs without crashing when called from PyOrient but doesn't actually create any vertices. Yet if I call it from Studio, it works!?! I have no idea what's going on.
OK, after a lot of hacking and Googling, I've got it working:
code="CREATE VERTEX URL SET id = sequence('urlseq').next(), url = :urlparam;"
myFunction = 'CREATE FUNCTION addURL "' + code + '" parameters [urlparam] idempotent false language sql'
client.command(myFunction)
The key here seems to be the use of a colon before parameter names in OrientDB's version of SQL. I couldn't find any reference to this anywhere in the OrientDB docs, but someone online had discovered it somehow.
I'm answering my own question in the hope that this will help others struggling with ODB's poor documentation!
You could try something like:
code="var g=orient.getGraph();\ng.command(\\'sql\\',\\'%s\\',[urlparam]);"
myFunction = "CREATE FUNCTION addURL '" + code + "' parameters [urlparam] idempotent false language javascript"
client.command(myFunction)
UPDATE
I used this code (version 2.2.5) and it worked for me
code="var g=orient.getGraph().command(\\'sql\\',\\'%s\\',[urlparam]);"
myFunction = "CREATE FUNCTION addURL '" + code + "' parameters [urlparam] idempotent false language javascript"
client.command(myFunction)
Hope it helps