I'm having some trouble getting Elasticsearch integrated with an existing application, but it should be a fairly straightforward issue. I'm able to create and destroy indices, but for some reason I'm having trouble getting data into Elasticsearch and querying for it.
I'm using the pyes library and honestly finding the documentation to be less than helpful on this front. This is my current code:
def initialize_transcripts(database, mapping):
    database.indices.create_index("transcript-index")

def index_course(database, sjson_directory, course_name, mapping):
    database.put_mapping(course_name, {'properties': mapping}, "transcript-index")
    all_transcripts = grab_transcripts(sjson_directory)
    video_counter = 0
    for transcript_tuple in all_transcripts:
        data_map = {"searchable_text": transcript_tuple[0], "uuid": transcript_tuple[1]}
        database.index(data_map, "transcript-index", course_name, video_counter)
        video_counter += 1
    database.indices.refresh("transcript-index")

def search_course(database, query, course_name):
    search_query = TermQuery("searchable_text", query)
    return database.search(query=search_query)
I'm first creating the database and initializing the index, then trying to add data and search it with the other two methods. I'm currently getting the following error:
raise ElasticSearchException(response.body, response.status, response.body)
pyes.exceptions.ElasticSearchException: No handler found for uri [/transcript-index/test-course] and method [PUT]
I'm not quite sure how to approach it, and the only reference I could find to this error suggested creating your index beforehand, which I believe I am already doing. Has anyone run into this error before? Alternatively, do you know of any good places to look that I might not be aware of?
Any help is appreciated.
For some reason, passing the ID to index(), despite the fact that it is shown in the getting-started documentation (http://pyes.readthedocs.org/en/latest/manual/usage.html), doesn't work and in fact causes this error.
Once I removed the video_counter argument from the call to index, this worked perfectly.
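A likely explanation (my assumption, not something I verified in the pyes source): the first ID the loop passes is video_counter = 0, which is falsy, so a client that builds the document URL with a truthiness check silently drops it, producing a PUT to /transcript-index/test-course with no document ID. A stdlib-only sketch of that pitfall:

```python
def build_index_url(index, doc_type, doc_id=None):
    """Build an Elasticsearch-style document URL the way a naive client
    might, dropping the id when it is falsy -- which swallows id 0."""
    parts = [index, doc_type]
    if doc_id:  # bug: 0 is falsy, so a legitimate id of 0 is dropped
        parts.append(str(doc_id))
    return "/" + "/".join(parts)

# id 0 vanishes from the URL, so the request becomes a PUT with no id
print(build_index_url("transcript-index", "test-course", 0))
# → /transcript-index/test-course

# a truthy id is kept as expected
print(build_index_url("transcript-index", "test-course", 1))
# → /transcript-index/test-course/1
```

Starting the counter at 1, or testing `if doc_id is not None:`, would avoid the problem; omitting the ID entirely, as above, also works because the server then assigns one.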
I am learning Python 3 and have a fairly simple task to complete, but I am struggling with how to glue it all together. I need to query an API and return the full list of applications, which I can do; I store this and need to use it again to gather more data for each application from a different API call.
applistfull = requests.get(url, authmethod)
if applistfull.ok:
    data = applistfull.json()
    for app in data["_embedded"]["applications"]:
        print(app["profile"]["name"], app["guid"])
        summaryguid = app["guid"]
else:
    print(applistfull.status_code)
I now have summaryguid, and I need to query a different API that returns a value which could occur many times for each application; in this case, the compiler used to build the code.
I can statically put a GUID in the URL and return the correct information, but I haven't yet figured out how to do the below for every application above and build a master list:
summary = requests.get(f"url{summaryguid}moreurl", authmethod)
if summary.ok:
    fulldata = summary.json()
    for appsummary in fulldata["static-analysis"]["modules"]["module"]:
        print(appsummary["compiler"])
I would prefer that no one just type out the right answer; drop a few hints and let me continue to work through it logically, so I learn how to deal with what I assume is a common issue in the future. My thought right now is that I need to move my second if up into my initial block and continue the logic in that space, but I am stuck there.
You are on the right track! Here is the hint: the second API request can be nested inside the loop that iterates through the list of applications in the first API call. By doing so, you can get the information you require by making the second API call for each application.
import requests

applistfull = requests.get("url", authmethod)
if applistfull.ok:
    data = applistfull.json()
    for app in data["_embedded"]["applications"]:
        print(app["profile"]["name"], app["guid"])
        summaryguid = app["guid"]
        summary = requests.get(f"url/{summaryguid}/moreurl", authmethod)
        fulldata = summary.json()
        for appsummary in fulldata["static-analysis"]["modules"]["module"]:
            print(app["profile"]["name"], appsummary["compiler"])
else:
    print(applistfull.status_code)
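To build the master list the question asks about (rather than just printing), the same nested structure can accumulate results into a dict keyed by GUID. A stdlib-only sketch with stubbed JSON payloads standing in for the two API responses (the field names follow the snippets above; the stub data itself is made up):

```python
# stubbed response bodies standing in for the two API calls
app_list = {"_embedded": {"applications": [
    {"profile": {"name": "app-a"}, "guid": "guid-1"},
    {"profile": {"name": "app-b"}, "guid": "guid-2"},
]}}
summaries = {
    "guid-1": {"static-analysis": {"modules": {"module": [
        {"compiler": "JAVAC_8"}]}}},
    "guid-2": {"static-analysis": {"modules": {"module": [
        {"compiler": "GCC"}, {"compiler": "CLANG"}]}}},
}

master = {}
for app in app_list["_embedded"]["applications"]:
    guid = app["guid"]
    # stands in for the per-application requests.get(...).json() call
    fulldata = summaries[guid]
    compilers = [m["compiler"]
                 for m in fulldata["static-analysis"]["modules"]["module"]]
    master[guid] = {"name": app["profile"]["name"], "compilers": compilers}

print(master["guid-2"]["compilers"])  # → ['GCC', 'CLANG']
```

Because an application can map to several compilers, storing a list per GUID keeps every occurrence instead of overwriting a single variable on each loop iteration.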
I had the same problem as here (see link below); briefly: unable to create an .exe of a Python script that uses APScheduler.
Pyinstaller 3.3.1 & 3.4.0-dev build with apscheduler
So I did as suggested:
from apscheduler.triggers import interval

scheduler.add_job(Run, 'interval', interval.IntervalTrigger(minutes=time_int),
                  args=(input_file, output_dir, time_int),
                  id=theID, replace_existing=True)
And indeed importing interval.IntervalTrigger and passing it as an argument to add_job solved this particular error.
However, now I am encountering:
TypeError: add_job() got multiple values for argument 'args'
I tested it and can confirm it occurs because of the way the trigger is now passed. I also tried defining trigger = interval.IntervalTrigger(minutes=time_int) separately and then just passing trigger, and the same thing happens.
If I suppress the error with try/except, I see that the job is not added to the SQL database at all (I am using SQLAlchemy as a job store). Initially I thought it was because I am adding several jobs in a for loop, but it happens with a single job add as well.
Does anyone know of some other workaround for the initial problem, or have any idea why this error might occur? I can't find anything online either :(
Things always work better in the morning.
For anyone else who encounters this: the error comes from passing both 'interval' and interval.IntervalTrigger() as arguments. You don't need both, so the code should be:
scheduler.add_job(Run, interval.IntervalTrigger(minutes=time_int),
                  args=(input_file, output_dir, time_int),
                  id=theID, replace_existing=True)
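The TypeError itself is ordinary Python argument binding, not anything APScheduler-specific: with both 'interval' and a trigger object passed positionally, the trigger object lands in the args parameter slot, which then collides with the args= keyword. A stdlib-only sketch with a simplified stand-in for add_job (the signature here is illustrative, not APScheduler's actual one):

```python
def add_job(func, trigger=None, args=None, id=None, replace_existing=False):
    """Simplified stand-in for scheduler.add_job (illustrative signature)."""
    return func, trigger, args

def run(*a):
    pass

trigger_obj = object()  # stands in for interval.IntervalTrigger(minutes=...)

# Passing both the 'interval' alias and a trigger object: the trigger object
# fills the `args` slot positionally, then args=... supplies it a second time.
try:
    add_job(run, 'interval', trigger_obj, args=(1, 2), id='x')
except TypeError as e:
    print(e)  # ...got multiple values for argument 'args'

# Passing only the trigger object binds everything as intended.
func, trig, args = add_job(run, trigger_obj, args=(1, 2), id='x')
assert trig is trigger_obj and args == (1, 2)
```

This is why dropping the 'interval' string (as in the corrected call above) makes the error disappear.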
I am trying to download a Bing Ads report using the Python SDK, but I keep getting an error saying "Type not found: 'Aggregation'" after submitting a report request. I've tried all four options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
The authentication process prior to the request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)

reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # set to True to overwrite an existing file
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)

output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems the program fails while executing a call within reporting_service_manager.py. Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the _ServiceCall.init function within service_client.py, which returns the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light?
Thanks
Please be sure to set Aggregation on the report request, e.g.:
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
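Concretely, assuming report_request is the object returned by get_report_request above, the fix is a single attribute assignment before submitting. A minimal sketch in which a SimpleNamespace stands in for the SDK's report request object (the real object comes from the suds factory, not from types):

```python
from types import SimpleNamespace

# stand-in for the object returned by get_report_request(...)
report_request = SimpleNamespace(Aggregation=None)

# set the aggregation before the request is submitted;
# use None for report types that do not support aggregation
report_request.Aggregation = 'Daily'

print(report_request.Aggregation)  # → Daily
```

Leaving the field unset on the real SOAP object is what produces the "Type not found: 'Aggregation'" serialization error.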
Does this help?
This may be a bit late, two months after the fact, but maybe it will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue, and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time, so I tried suds-community, and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: if you want to use the Bing Ads API, don't use suds-py3; use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).
CDK is great, but I'm really struggling with how to deploy a step function that puts an item into a DynamoDB table, using the Python CDK. I eventually want to provide the item via EventBridge.
When I get errors, I'm not sure whether they are real errors or not.
I've tried this:
event_table = dynamodb.Table(self, 'events',
    partition_key=dynamodb.Attribute(name='id', type=dynamodb.AttributeType.STRING),
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
    removal_policy=RemovalPolicy.DESTROY)

map = {'id': {'S.$': '$.id'}}
dynamo_put_task = CallDynamoDB.put_item(item=map, table_name=event_table.table_name)
task = Task(self, 'invoke-notification-service', task=dynamo_put_task)
state_machine = StateMachine(self, 'save-event', definition=task)

Rule(self, 'rule_name',
    targets=[SfnStateMachine(state_machine)],
    event_bus=event_bus,
    event_pattern=EventPattern(account=[Stack.of(self).account]))
I really don't know how to use a DynamoAttributeValueMap, as it is just a shell in Python, which is why I'm not using one.
And even for this to run I've had to remove the following line from DynamoPutItemProps:
if isinstance(item, dict): item = DynamoAttributeValueMap(**item)
otherwise I get an "unknown keyword argument id" error when I try to run this. That kind of makes sense, as it's trying to unpack a map into a constructor that takes no parameters (just self).
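That reasoning can be checked in plain Python: unpacking a dict with ** into a constructor whose __init__ accepts only self raises exactly this kind of TypeError. A minimal stand-in (NoArgMap is a made-up class here, not the real DynamoAttributeValueMap):

```python
class NoArgMap:
    """Stand-in for a generated shell class whose __init__ takes only self."""
    def __init__(self):
        pass

# mirrors `DynamoAttributeValueMap(**item)` when item = {'id': {'S.$': '$.id'}}
try:
    NoArgMap(**{'id': {'S.$': '$.id'}})
except TypeError as e:
    print(e)  # ...got an unexpected keyword argument 'id'
```

So any dict passed as item is guaranteed to blow up on that line, which supports the suspicion that the dict-to-map conversion is broken.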
A simple example of creating a step function with cdk to write to dynamodb would be very helpful.
I really don't know how to instantiate that map.
The code that I've got in the example above at least generates the CloudFormation template, but the item is empty: it is just {}.
When I try to do something like this, because I think that's what I'm supposed to do...
map = DynamoAttributeValueMap()
map._values = {'id': DynamoAttributeValue().with_s('$.id')}
to try to force values into that map, I get this error:
TypeError: Don't know how to convert object to JSON:
DynamoAttributeValueMap(id=)
So then from this, I assume that this code should work:
dynamo_put_task = CallDynamoDB.put_item(
    item={'id': DynamoAttributeValue().with_s('1234')},
    table_name=event_table.table_name,
    return_values=DynamoReturnValues.ALL_NEW)
This generates this template:
Type: AWS::StepFunctions::StateMachine
Properties:
  DefinitionString:
    Fn::Join:
      - ""
      - - '{"StartAt":"invoke-notification-service","States":{"invoke-notification-service":{"End":true,"Parameters":{"Item":{},"TableName":"'
        - Ref: events26E65764
        - '","ReturnValues":"ALL_NEW"},"Type":"Task","Resource":"arn:'
        - Ref: AWS::Partition
        - :states:::dynamodb:putItem"}}}
  RoleArn:
    Fn::GetAtt:
      - saveeventRole24A1910F
      - Arn
Notice that the Item is blank.
Of course, this is with the
if isinstance(item, dict): item = DynamoAttributeValueMap(**item)
line removed from DynamoPutItemProps.
So is there a bug? A plain worked example would be useful.
I do suspect there is a bug, as nothing works unless I remove that line of code.
I'm trying to create an API that gets all the information for a single node from a Chef server.
def get_nodeInfo(self, name):
The above is the method's header, so I pass the node name here. I have tried a lot of different approaches found on the internet, but I keep getting a "ChefServerNotFound: object not found" error. Does anyone have any advice for me on this?
result_set = chef.Search('node', q="name:test*")
for result in result_set:
    node = chef.Node(result["name"])
    print node
I used the above code.
Thank you in advance
First off, please don't comment on old non-bugs to try to get attention. It pisses off maintainers and is not productive. Second, you don't need to use search at all to get a single node's data; just do Node(name) and it will load.