I love using the boto API for Amazon Web Services, but right now I can't find where the error is.
I'm using AWS to check domain availability, and I have created a Python script that includes the class at this link:
https://www.codatlas.com/github.com/boto/boto/develop/boto/route53/domains/layer1.py?line=67
I call the method check_domain_availability(), passing a domain name:
Route53DomainsConnection.check_domain_availability('example.com',None)
but the method returns this error:
AttributeError: 'str' object has no attribute 'make_request'
I have tried passing the parameters in several ways, but with no result.
Where am I going wrong? Thanks in advance.
P.S.: I use Debian wheezy and Python 3.2.
More on status of subdomains
I have found a method to get the status of a record just created with Route53.
This is the code:
from boto.route53.record import ResourceRecordSets  # conn is an existing boto Route53 connection

changes = ResourceRecordSets(conn, "ZONEID")
change = changes.add_change("STRING FOR ADD NEW SUBDOMAIN")
change.add_value(MY_IP)
status = changes.commit()
If I print the status variable, it contains the response of the commit, including the status:
{u'ChangeResourceRecordSetsResponse': {u'ChangeInfo': {u'Status': u'PENDING', ...
Now I would like to move on to another operation only once the status of the subdomain is "INSYNC", but I'm not able to dynamically access that string to check the status.
Can I use a while loop? Can I use a sleep command? Can anyone help me resolve this problem? Thanks
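For the status-polling part, a minimal sketch, assuming conn is the boto Route53 connection from the snippet above and that get_change returns the same parsed-dict structure as commit() ('INSYNC' is the status Route53 reports once a change has propagated):

import time

# The Id in the commit() response looks like '/change/C1234...'; keep only the ID part.
change_id = status['ChangeResourceRecordSetsResponse']['ChangeInfo']['Id'].split('/')[-1]

# Poll with a while loop and a sleep until the change is in sync.
while True:
    change = conn.get_change(change_id)
    if change['GetChangeResponse']['ChangeInfo']['Status'] == 'INSYNC':
        break
    time.sleep(10)  # wait before polling again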
You don't show your code, which makes it harder to debug, but this line:
Route53DomainsConnection.check_domain_availability('example.com',None)
looks suspicious. It looks like you are trying to access the check_domain_availability method on the class rather than on an instance of the class. I just did the following and it worked for me:
In [1]: import boto.route53.domains
In [2]: c = boto.route53.domains.connect_to_region('us-east-1')
In [3]: c.check_domain_availability('foobar.com')
Out[3]: {u'Availability': u'UNAVAILABLE'}
I'm working with the Azure Data Factory Python SDK to create a Trigger as documented here. Unfortunately, it's not working and I'm getting a very cryptic error message.
I'm using the following code (as per the example):
tr_name = 'mytrigger'
scheduler_recurrence = ScheduleTriggerRecurrence(frequency='Minute', interval='15',start_time='2017-12-12T04:00:00Z', end_time='2017-12-12T05:00:00Z', time_zone='UTC')
pipeline_parameters = {'inputPath':'adftutorial/input', 'outputPath':'adftutorial/output'}
pipelines_to_run = []
pipeline_reference = PipelineReference(reference_name='copyPipeline')
pipelines_to_run.append(TriggerPipelineReference(pipeline_reference=pipeline_reference, parameters=pipeline_parameters))
tr_properties = ScheduleTrigger(description='My scheduler trigger', pipelines = pipelines_to_run, recurrence=scheduler_recurrence)
adf_client.triggers.create_or_update(rg_name, df_name, tr_name, tr_properties)
The error I'm getting is:
azure.core.exceptions.HttpResponseError: (InvalidPropertyValue) Invalid value for property 'Properties'
Code: InvalidPropertyValue
Message: Invalid value for property 'Properties'
Target: mytrigger
which doesn't tell me much at all :-(. Has anyone seen this before? I'm struggling to figure this out. I've looked for an attribute Properties, but there doesn't seem to be one. Is there a good way to debug what's going on here?
To resolve this azure.core.exceptions.HttpResponseError: (InvalidPropertyValue) Invalid value for property 'Properties' error:
Instead of:
tr_properties = ScheduleTrigger(description='My scheduler trigger', pipelines = pipelines_to_run, recurrence=scheduler_recurrence)
You can try this:
tr_properties = TriggerResource(properties=ScheduleTrigger(description='My scheduler trigger', pipelines=pipelines_to_run, recurrence=scheduler_recurrence))
You can refer to "Bad body serialization creating a Schedule Trigger in Data Factory" and "Azure data factory trigger creation using python SDK".
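Putting it together, a minimal end-to-end sketch with that fix applied (assuming the azure-mgmt-datafactory models used in the question; adf_client, rg_name, and df_name come from the question's setup):

from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource)

recurrence = ScheduleTriggerRecurrence(frequency='Minute', interval=15,
                                       start_time='2017-12-12T04:00:00Z',
                                       end_time='2017-12-12T05:00:00Z',
                                       time_zone='UTC')
pipeline_ref = TriggerPipelineReference(
    pipeline_reference=PipelineReference(reference_name='copyPipeline'),
    parameters={'inputPath': 'adftutorial/input', 'outputPath': 'adftutorial/output'})

# The ScheduleTrigger must be wrapped in a TriggerResource before being sent.
trigger = TriggerResource(properties=ScheduleTrigger(
    description='My scheduler trigger', pipelines=[pipeline_ref], recurrence=recurrence))
adf_client.triggers.create_or_update(rg_name, df_name, 'mytrigger', trigger)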
I'm trying to download a Bing Ads report using the Python SDK, but I keep getting an error that says "Type not found: 'Aggregation'" after submitting a report request. I've tried all 4 options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
The authentication process prior to the request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # Set this value to True if you want to overwrite the same file.
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)
output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems that the program fails when trying to execute a command within "reporting_service_manager.py". Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the "_SeviceCall.init" function within "service_client.py", which returns the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light?
Thanks
Please be sure to set Aggregation, e.g., as shown here:
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
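For reference, a short sketch of where this field is set, assuming reporting_service is the v13 ReportingService ServiceClient used in the example script linked in the question:

report_request = reporting_service.factory.create('CampaignPerformanceReportRequest')
report_request.Aggregation = 'Daily'  # the field behind "Type not found: 'Aggregation'"
report_request.Format = 'Csv'
# ... set Scope, Time, Columns, etc. as in the linked example ...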
Does this help?
This may be a bit late, two months after the fact, but maybe this will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue, and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time, so I tried suds-community and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: if you want to use the bingads API, don't use suds-py3; use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).
I have a Python 2 script which uses the boto3 library.
Basically, I have a list of instance ids and I need to iterate over it changing the type of each instance from c4.xlarge to t2.micro.
In order to accomplish that task, I'm calling the modify_instance_attribute method.
I don't know why, but my script hangs with the following error message:
EBS-optimized instances are not supported for your requested configuration.
Here is my general scenario:
Say I have a piece of code like this one below:
import boto3

def change_instance_type(instance_id):
    client = boto3.client('ec2')
    response = client.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={
            'Value': 't2.micro'
        }
    )
So, if I execute it like this:
change_instance_type('id-929102')
everything works with no problem at all.
However, strangely enough, if I execute it in a for loop like the following:
instances_list = ['id-929102']
for instance_id in instances_list:
    change_instance_type(instance_id)
I get the error message above (i.e., EBS-optimized instances are not supported for your requested configuration) and my script dies.
Any idea why this happens?
When I look at the list of EBS-optimized instance types, I don't see that t2.micro is supported:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
I think you would need to set EbsOptimized=False as well.
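A sketch of that fix, reusing the change_instance_type function from the question (note that modify_instance_attribute accepts only one attribute per call, hence the two calls):

import boto3

def change_instance_type(instance_id):
    client = boto3.client('ec2')
    # t2 instances do not support EBS optimization, so disable it first.
    client.modify_instance_attribute(
        InstanceId=instance_id,
        EbsOptimized={'Value': False}
    )
    # Then switch the instance type.
    client.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={'Value': 't2.micro'}
    )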
Hi, I am using the pyVmomi API to perform vMotions against a cluster when DRS is set to manual mode. I am going through a vCenter, querying a cluster, getting recommendations, and using those to perform the vMotions. The code is something like this:
content = getVCContent(thisHost, {'user': username, 'pwd': decoded_password}, logger)
allClusterObj = content.viewManager.CreateContainerView(content.rootFolder, [pyVmomi.vim.ClusterComputeResource], True)
allCluster = allClusterObj.view
for thisDrsRecommendation in thisCluster.drsRecommendation:
    print thisDrsRecommendation.reason
    for thisMigration in thisDrsRecommendation.migrationList:
        print ' vm:', thisMigration.vm.name
        while True:
            relocate_vm_to_host(thisMigration.vm.name, thisMigration.destination.name, allClusterObj.view)

# FUNCTION definition
def relocate_vm_to_host(vm, host, allCluster):
    for thisCluster in allCluster:
        for thisHost in thisCluster.host:
            if thisHost.name == host:
                for thisVm in thisHost.vm:
                    print 'Relocating vm:%s to host:%s on cluster:%s' % (thisVm.name, thisHost.name, thisCluster.name)
                    task = thisVm.RelocateVM(priority='defaultpriority')
I am getting an error saying the attribute doesn't exist.
AttributeError: 'vim.VirtualMachine' object has no attribute 'RelocateVM'
But the pyvmomi documentation here https://github.com/vmware/pyvmomi/blob/master/docs/vim/VirtualMachine.rst
has a detailed explanation for the method:
RelocateVM(spec, priority)
Does anyone know why the method is missing? I also tried checking the available methods of the object; it has RelocateVM_Task instead of RelocateVM (for which I couldn't find documentation). When I used that, I got this error:
TypeError: For "spec" expected type vim.vm.RelocateSpec, but got str
I checked the documentation for vim.vm.RelocateSpec and I am calling it in a function, but it still throws an error:
def relocate_vm(VmToRelocate, destination_host, content):
    allvmObj = content.viewManager.CreateContainerView(content.rootFolder, [pyVmomi.vim.VirtualMachine], True)
    allvms = allvmObj.view
    for vm in allvms:
        if vm.name == VmToRelocate:
            print 'vm:%s to relocate %s' % (vm.name, VmToRelocate)
            task = vm.RelocateVM_Task(spec=destination_host)
Any help is appreciated.
Thanks
Looks like a mistake in the documentation. The method is called Relocate (not RelocateVM).
Note, by the way, that in your first sample you're not passing the destination host to the call to Relocate, so something is definitely missing there.
You can see some samples at https://gist.github.com/rgerganov/12fdd2ded8d80f36230f or at https://github.com/sijis/pyvmomi-examples/blob/master/migrate-vm.py.
Lastly, one way to realize you're using the wrong name is to call Python's dir function on a VirtualMachine object. This will list all the properties of the object, so you can see which methods it has:
>>> vm = vim.VirtualMachine('vm-1234', None)
>>> dir(vm)
['AcquireMksTicket', [...] 'Relocate', 'RelocateVM_Task', [...] ]
(abbreviated output)
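To fix the TypeError from the second attempt, a minimal sketch of building a proper RelocateSpec (assuming dest_host is a vim.HostSystem object you have already looked up, as in the loops above, rather than a host name string; migrate_vm is a hypothetical helper name):

from pyVmomi import vim

def migrate_vm(vm, dest_host):
    # RelocateVM_Task expects a vim.vm.RelocateSpec, not a string.
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host  # a vim.HostSystem object
    return vm.RelocateVM_Task(spec=spec, priority='defaultPriority')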
I am trying to create a custom filter for OpenStack, using their FilterScheduler component. The documentation for the FilterScheduler is here: http://docs.openstack.org/developer/nova/devref/filter_scheduler.html#
Now, there is not much in the way of documentation for creating your own custom filter. In fact, the complete documentation is:
If you want to create your own filter you just need to inherit from BaseHostFilter and implement one method: host_passes. This method should return True if host passes the filter. It takes host_state (describes host) and filter_properties dictionary as the parameters.
As an example, nova.conf could contain the following scheduler-related settings:
--scheduler_driver=nova.scheduler.FilterScheduler
--scheduler_available_filters=nova.scheduler.filters.standard_filters
--scheduler_available_filters=myfilter.MyFilter
--scheduler_default_filters=RamFilter,ComputeFilter,MyFilter
I have created a custom "test_filter.py" -- it is very similar to "all_hosts_filter.py", which is the simplest standard filter.
Here it is in its entirety:
from nova.scheduler import filters
from nova.openstack.common import log as logging

LOG = logging.getLogger(__name__)

class TestFilter(filters.BaseHostFilter):
    """NOOP host filter. Returns all hosts."""

    def host_passes(self, host_state, filter_properties):
        LOG.debug("COMING FROM: nova/scheduler/filters/test_filter.py")
        return True
But when I put this file, "test_filter.py", in the nova/scheduler/filters folder and restart OpenStack, I get the following exception:
CRITICAL nova [-] Class test_filter could not be found: 'module' object has no attribute 'test_filter'
It appears that OpenStack is registering and trying to import my new filter, but some error is occurring. For reference, this is what the relevant sections of my /etc/nova/nova.conf file look like:
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=nova.scheduler.filters.test_filter.TestFilter
scheduler_default_filters=TestFilter,RamFilter,ComputeFilter
======
UPDATE: 15th April, 2000 hours BST.
An update to this question: still struggling. After discussing the problem with boris-42 on the OpenStack IRC channel, we have investigated a bit more:
Openstack-scheduler is run as a service from /usr/bin/nova-scheduler.
It then has an error:
"Inner Exception: 'module' object has no attribute 'test_filter' from (pid=32696) import_class /usr/lib/python2.7/dist-packages/nova/utils.py:78"
This suggests it is using the /usr/lib/python2.7/dist-packages/nova/ folder for the source files of the installation.
Putting my custom "test_filter.py" into /usr/lib/python2.7/dist-packages/nova/scheduler/filters causes the error above.
However, on closer inspection it appears that all other files in the /usr/lib/python2.7/dist-packages/nova/scheduler/filters folder are actually links to the files in /usr/share/pyshared/nova/scheduler/filters
So I put my "test_filter.py" in /usr/share/pyshared/nova/scheduler/filters -- and then created a symbolic link in the original folder.
This results in exactly the same error. As long as the file is either present or a link to it exists in the folder /usr/lib/python2.7/dist-packages/nova/scheduler/filters, the error occurs.
The nova.conf file has been updated as follows:
scheduler_available_filters=nova.scheduler.filters.TestFilter
scheduler_default_filters=TestFilter
I don't think you have to put your file in /usr/lib/python2.7/dist-packages/nova/scheduler/filters. You can put it anywhere; just make sure its path is in PYTHONPATH.
As mentioned in the example:
If you want to create your own filter you just need to inherit from BaseHostFilter and implement one method: host_passes. This method should return True if host passes the filter. It takes host_state (describes host) and filter_properties dictionary as the parameters.
As an example, nova.conf could contain the following scheduler-related settings:
......
--scheduler_available_filters=myfilter.MyFilter
.......
You have to specify myfilter.MyFilter without the nova.scheduler.filters prefix.
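A minimal sketch of what that standalone module could look like (a hypothetical myfilter.py, placed anywhere on the scheduler's PYTHONPATH):

# myfilter.py -- any directory on the nova-scheduler PYTHONPATH will do.
from nova.scheduler import filters

class MyFilter(filters.BaseHostFilter):
    """Example filter that accepts every host."""

    def host_passes(self, host_state, filter_properties):
        return True

The matching nova.conf entries would then be:
scheduler_available_filters=myfilter.MyFilter
scheduler_default_filters=MyFilter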