Azure Data Factory Python SDK Create Trigger

I'm working with the Azure Data Factory Python SDK to create a Trigger as documented here. Unfortunately, it's not working and I'm getting a very cryptic error message.
I'm using the following code (as per the example):
tr_name = 'mytrigger'
scheduler_recurrence = ScheduleTriggerRecurrence(frequency='Minute', interval='15',start_time='2017-12-12T04:00:00Z', end_time='2017-12-12T05:00:00Z', time_zone='UTC')
pipeline_parameters = {'inputPath':'adftutorial/input', 'outputPath':'adftutorial/output'}
pipelines_to_run = []
pipeline_reference = PipelineReference(reference_name='copyPipeline')
pipelines_to_run.append(TriggerPipelineReference(pipeline_reference=pipeline_reference, parameters=pipeline_parameters))
tr_properties = ScheduleTrigger(description='My scheduler trigger', pipelines = pipelines_to_run, recurrence=scheduler_recurrence)
adf_client.triggers.create_or_update(rg_name, df_name, tr_name, tr_properties)
The error I'm getting is:
azure.core.exceptions.HttpResponseError: (InvalidPropertyValue) Invalid value for property 'Properties'
Code: InvalidPropertyValue
Message: Invalid value for property 'Properties'
Target: mytrigger
which doesn't tell me much at all :-(. Has anyone seen this before? I'm struggling to figure this out. I've looked for an attribute Properties, but there doesn't seem to be one. Is there a good way to debug what's going on here?

To resolve the azure.core.exceptions.HttpResponseError: (InvalidPropertyValue) Invalid value for property 'Properties' error, wrap the trigger in a TriggerResource: create_or_update expects a resource object whose properties attribute carries the trigger definition, not a bare ScheduleTrigger.
Instead of:
tr_properties = ScheduleTrigger(description='My scheduler trigger', pipelines=pipelines_to_run, recurrence=scheduler_recurrence)
try this:
tr_properties = TriggerResource(properties=ScheduleTrigger(description='My scheduler trigger', pipelines=pipelines_to_run, recurrence=scheduler_recurrence))
For more background, see Bad body serialization creating a Schedule Trigger in Data Factory and Azure data factory trigger creation using python SDK.
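Putting it together, a minimal sketch of the corrected snippet, assuming the azure-mgmt-datafactory models and that adf_client, rg_name and df_name are already set up as in the question (begin_start is the track-2 method name; older SDK versions call it start):
from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource,
)

recurrence = ScheduleTriggerRecurrence(
    frequency='Minute', interval=15,  # the model types interval as an int
    start_time='2017-12-12T04:00:00Z', end_time='2017-12-12T05:00:00Z',
    time_zone='UTC',
)
pipeline_ref = TriggerPipelineReference(
    pipeline_reference=PipelineReference(reference_name='copyPipeline'),
    parameters={'inputPath': 'adftutorial/input', 'outputPath': 'adftutorial/output'},
)
# The trigger definition must be wrapped in a TriggerResource via `properties`.
trigger = TriggerResource(
    properties=ScheduleTrigger(
        description='My scheduler trigger',
        pipelines=[pipeline_ref],
        recurrence=recurrence,
    )
)
adf_client.triggers.create_or_update(rg_name, df_name, 'mytrigger', trigger)
# A newly created trigger is stopped; start it so the schedule takes effect.
adf_client.triggers.begin_start(rg_name, df_name, 'mytrigger').result()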

Related

Unable to pass custom values to AWS Lambda function

I am trying to pass custom input in JSON format to my Lambda function (Python 3.7 runtime) from a rule set in CloudWatch.
However, I am facing difficulty accessing elements of the input correctly.
Here's what the CW rule looks like.
Here is what the lambda function is doing.
from datetime import date

import pandas as pd
import sqlalchemy  # package for accessing SQL databases via Python
import psycopg2  # PostgreSQL driver used by the sqlalchemy engine

def lambda_handler(event, context):
    today = date.today()
    engine = sqlalchemy.create_engine("postgresql://some_user:userpassword@som_host/some_db")
    con = engine.connect()
    dest_table = "dest_table"
    print(event)
    s = {'upload_date': today, 'data': 'Daily Ingestion Data', 'status': event["data"]}  # Error points here
    ingestion = pd.DataFrame(data=[s])
    ingestion.to_sql(dest_table, con, schema="some_schema", if_exists="append", index=False, method="multi")
When I test the function with the default test event, the print(event) statement prints the default test values ("key1": "value1"), yet the ingestion.to_sql() call still works: the payload from the rule's input, "Testing Input Data", is inserted into the database successfully.
However, the Lambda function still reports an error at event["data"], a KeyError, while running.
1) Am I accessing the constant JSON input the right way?
2) If not, why is the data still being ingested as intended despite the error on that line of code?
3) The data is ingested when the function is triggered per the schedule expression, but testing the event shows a KeyError. Is it the test event, which differs from the actual input, that is causing this error?
There is a lot of documentation on how to pass input, but I could not find anything that shows how to access that input inside the function. I have been stuck at this point for a while, and it frustrates me that this is not documented anywhere.
I would really appreciate it if someone could give me some clarity on this process.
Thanks in advance
Edit:
Image of the monitoring logs:
[ERROR] KeyError: 'data' Traceback (most recent call last): File "/var/task/test.py"
I am writing this answer based on the comments.
The syntax as originally written is valid, and the data is accessed correctly. The function was constantly hitting its timeout threshold, so the fix was to adjust the timeout, along with a small change to the iteration.
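To make the mismatch visible, here is a minimal sketch of the access pattern, assuming the CloudWatch rule's "Constant (JSON text)" input is {"data": "Testing Input Data"}. The constant JSON is delivered as the event itself, so event["data"] is the right access; the console's default test event has no "data" key, which is why tests raise a KeyError while scheduled runs succeed.
import json

def lambda_handler(event, context):
    # The rule's constant JSON arrives as `event` itself.
    print(json.dumps(event))
    payload = event.get("data")  # returns None instead of raising a KeyError
    if payload is None:
        raise ValueError("expected a 'data' key in the event, got: %r" % event)
    return payload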

Error when changing instance type in a python for loop

I have a Python 2 script which uses the boto3 library.
Basically, I have a list of instance IDs, and I need to iterate over it, changing the type of each instance from c4.xlarge to t2.micro.
To accomplish that task, I'm calling the modify_instance_attribute method.
I don't know why, but my script fails with the following error message:
EBS-optimized instances are not supported for your requested configuration.
Here is my general scenario:
Say I have a piece of code like this one below:
import boto3

def change_instance_type(instance_id):
    client = boto3.client('ec2')
    response = client.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={
            'Value': 't2.micro'
        }
    )
So, if I execute it like this:
change_instance_type('id-929102')
everything works with no problem at all.
However, strangely enough, if I execute it in a for loop like the following:
instances_list = ['id-929102']
for instance_id in instances_list:
    change_instance_type(instance_id)
I get the error message above (i.e., EBS-optimized instances are not supported for your requested configuration) and my script dies.
Any idea why this happens?
When I look at EBS-optimized instances, I don't see that t2.micro is supported:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
I think you would need to set EbsOptimized to false as well.
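A minimal sketch of that idea, assuming boto3 and that each instance is already stopped (an EBS-backed instance's type can only be changed while it is stopped, and ModifyInstanceAttribute changes one attribute per call):
import boto3

def change_instance_type(instance_id):
    client = boto3.client('ec2')
    # t2.micro does not support EBS optimization, so clear the flag first.
    client.modify_instance_attribute(
        InstanceId=instance_id,
        EbsOptimized={'Value': False},
    )
    client.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={'Value': 't2.micro'},
    )

for instance_id in ['id-929102']:
    change_instance_type(instance_id)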

pyvmomi: error when calling RelocateVM

Hi, I am using the pyVmomi API to perform vMotions against a cluster when DRS is set to manual mode. I go through a vCenter, query a cluster, get its recommendations, and use those to perform the vMotions. The code is something like this:
content = getVCContent(thisHost, {'user': username, 'pwd': decoded_password}, logger)
allClusterObj = content.viewManager.CreateContainerView(content.rootFolder, [pyVmomi.vim.ClusterComputeResource], True)
allCluster = allClusterObj.view
for thisDrsRecommendation in thisCluster.drsRecommendation:
    print thisDrsRecommendation.reason
    for thisMigration in thisDrsRecommendation.migrationList:
        print ' vm:', thisMigration.vm.name
        while True:
            relocate_vm_to_host(thisMigration.vm.name, thisMigration.destination.name, allClusterObj.view)

#FUNCTION definition
def relocate_vm_to_host(vm, host, allCluster):
    for thisCluster in allCluster:
        for thisHost in thisCluster.host:
            if thisHost.name == host:
                for thisVm in thisHost.vm:
                    print 'Relocating vm:%s to host:%s on cluster:%s' % (thisVm.name, thisHost.name, thisCluster.name)
                    task = thisVm.RelocateVM(priority='defaultpriority')
I am getting an error saying the attribute doesn't exist.
AttributeError: 'vim.VirtualMachine' object has no attribute 'RelocateVM'
But the pyvmomi documentaion here https://github.com/vmware/pyvmomi/blob/master/docs/vim/VirtualMachine.rst
has a detailed explanation for the method
RelocateVM(spec, priority):
Does anyone know why the method is missing? I also checked the available methods on the object: it has RelocateVM_Task instead of RelocateVM (for which I couldn't find documentation). When I used that, I got this error:
TypeError: For "spec" expected type vim.vm.RelocateSpec, but got str
I checked the documentation for vim.vm.RelocateSpec and I am calling it in a function, but it still throws an error:
def relocate_vm(VmToRelocate, destination_host, content):
    allvmObj = content.viewManager.CreateContainerView(content.rootFolder, [pyVmomi.vim.VirtualMachine], True)
    allvms = allvmObj.view
    for vm in allvms:
        if vm.name == VmToRelocate:
            print 'vm:%s to relocate %s' % (vm.name, VmToRelocate)
            task = vm.RelocateVM_Task(spec=destination_host)
Any help is appreciated.
Thanks
Looks like a mistake in the documentation. The method is called Relocate (and not RelocateVM).
Note, BTW, that in your first sample you're not passing the destination host to the call to Relocate, so something is definitely missing there.
You can see some samples at https://gist.github.com/rgerganov/12fdd2ded8d80f36230f or at https://github.com/sijis/pyvmomi-examples/blob/master/migrate-vm.py.
Lastly, one way to realize you're using the wrong name is to call Python's built-in dir() on a VirtualMachine object. It lists all attributes of the object, so you can see which methods it has:
>>> vm = vim.VirtualMachine('vm-1234', None)
>>> dir(vm)
['AcquireMksTicket', [...] 'Relocate', 'RelocateVM_Task', [...] ]
(abbreviated output)
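To get past the TypeError in the second sample, spec must be a vim.vm.RelocateSpec that names the destination host, not a host-name string. A minimal sketch under that assumption, where vm and dest_host are already-resolved vim.VirtualMachine and vim.HostSystem objects:
from pyVmomi import vim

def relocate(vm, dest_host):
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host
    # Keep the VM in a resource pool that is valid on the destination host's cluster.
    spec.pool = dest_host.parent.resourcePool
    # Returns a vim.Task that can be polled for completion.
    return vm.RelocateVM_Task(spec=spec, priority=vim.VirtualMachine.MovePriority.defaultPriority)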

Error when setting a DBus property

I'm trying to set a property of a DBus interface with Qt 5.7.0.
This is the doc:
http://git.kernel.org/cgit/network/ofono/ofono.git/tree/doc/modem-api.txt
and I want to set the Powered property to true.
Here is my code:
QDBusInterface *iface = new QDBusInterface("org.ofono", "/hfp/org/bluez/hci0/dev_xx_xx_xx_xx_xx_xx", "org.ofono.Modem", QDBusConnection::systemBus());
iface->call("SetProperty", "Powered", QVariant(true));
and this is the error:
QDBusError("org.freedesktop.DBus.Error.UnknownMethod", "Method "SetProperty" with signature "sb" on interface "org.ofono.Modem" doesn't exist")
The path comes from the GetModem method and it's correct.
Also, I tried the same with Python:
modem = dbus.Interface(bus.get_object('org.ofono', '/hfp/org/bluez/hci0/dev_xx_xx_xx_xx_xx_xx'), 'org.ofono.Modem')
modem.SetProperty('Powered', dbus.Boolean(1))
and it works fine! So the problem is definitely related to my Qt5 code.
How can I better interpret the error message? Is my signature wrong, or is the SetProperty method not found at all?

Check Domain Availability using Boto Route53

I love using the Boto API for Amazon Web Services, but now I can't find where the error is.
I'm using AWS to check domain availability, and I have created a Python script that includes the class at this link:
https://www.codatlas.com/github.com/boto/boto/develop/boto/route53/domains/layer1.py?line=67
I call the check_domain_availability() method, passing the domain name:
Route53DomainsConnection.check_domain_availability('example.com',None)
but the method returns this error:
AttributeError: 'str' object has no attribute 'make_request'
I have tried passing the parameters in many ways, but with no result.
Where am I wrong? Thanks in advance.
P.S.: I use Debian Wheezy and Python 3.2.
More on the status of subdomains
I have found a method to get the status of a record just created with Route53.
This is the code:
from boto.route53.record import ResourceRecordSets

changes = ResourceRecordSets(conn, "ZONEID")
change = changes.add_change("STRING FOR ADD NEW SUBDOMAIN")
change.add_value(MY_IP)
status = changes.commit()
If I print the status variable, it contains the commit response, including the status:
{u'ChangeResourceRecordSetsResponse': {u'ChangeInfo': {u'Status': u'PENDING', ...
Now I would like to switch to another operation only once the status of the subdomain is "SYNC", but I am not able to dynamically access that string to check the status.
Can I use a while loop? Can I use a sleep command? Can anyone help me resolve my problem? Thanks
You don't show your code, which makes it harder to debug, but this line:
Route53DomainsConnection.check_domain_availability('example.com',None)
looks suspicious. It looks like you are trying to call check_domain_availability on the class rather than on an instance of the class. I just did the following and it worked for me:
In [1]: import boto.route53.domains
In [2]: c = boto.route53.domains.connect_to_region('us-east-1')
In [3]: c.check_domain_availability('foobar.com')
Out[3]: {u'Availability': u'UNAVAILABLE'}
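For the follow-up about the record status: yes, a while loop with a sleep is the usual pattern. A minimal sketch, assuming the boto (v2) conn and changes objects from the question, that get_change returns a dict shaped like the commit() response, and that the synced state is reported as INSYNC:
import time

status = changes.commit()
change_info = status['ChangeResourceRecordSetsResponse']['ChangeInfo']
change_id = change_info['Id'].split('/')[-1]  # 'Id' looks like '/change/XXXX'

while True:
    change = conn.get_change(change_id)
    state = change['GetChangeResponse']['ChangeInfo']['Status']
    if state == 'INSYNC':  # stays 'PENDING' until the change propagates
        break
    time.sleep(10)  # wait between polls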
