I have been familiar with the Python API for a while, and there is an annoying thing I can't solve.
In short, I want to get all the security rules in my environment.
That part works; what bothers me is that I can't get the "description" associated with them at all.
My Python code:
from keystoneauth1 import session
from novaclient import client
import json
from requests import get
...AUTH....
sg_list = nova.security_groups.list()
print(sg_list)
Output:
[<SecurityGroup description=192.168.140.0/24, id=123213xxxc2e6156243, name=asdasdasd, rules=[{'from_port': 1, 'group': {}, 'ip_protocol': 'tcp', 'to_port': 65535, 'parent_group_id': '615789e4-d4e214213136156243', 'ip_range': {'cidr': '192.168.140.0/24'},....
Is there a solution for this?
Thanks!
The output is a list, so you can do this on the first element:
str(sg_list[0]).split("description=", 1)[1].split(",", 1)[0]
and you will get the description only. (The items are SecurityGroup objects, not strings, so convert them with str() before splitting.)
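A cleaner route, assuming the novaclient resource objects expose their fields as attributes (which the repr above suggests), is to skip string parsing entirely; a minimal sketch:
# Sketch: read the description attribute directly off each SecurityGroup.
for sg in nova.security_groups.list():
    print(sg.name, sg.description)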
I am on Windows, using Python 3.8.6rc1, protobuf version 3.13.0 and google-cloud-vision version 2.0.0.
My code is:
from google.protobuf.json_format import MessageToDict
from google.cloud import vision
client = vision.ImageAnnotatorClient()
response = client.annotate_image({
'image': {'source': {'image_uri': 'https://images.unsplash.com/photo-1508138221679-760a23a2285b?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=800&q=60'}},
})
MessageToDict(response)
It fails at MessageToDict(response) with an AttributeError: "DESCRIPTOR". It seems like the response is not a valid protobuf object. Can someone help me? Thank you.
This does not really answer my question, but I found that one way to solve it and access the protobuf object is to use response._pb, so the code becomes:
response = client.annotate_image({
'image': {'source': {'image_uri': 'https://images.unsplash.com/photo-1508138221679-760a23a2285b?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=800&q=60'}},
})
MessageToDict(response._pb)
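Once the conversion succeeds, the dict can be read like any other JSON-ish structure; a minimal sketch, assuming the response from the snippet above contains label results (MessageToDict emits camelCase keys by default):
result = MessageToDict(response._pb)
# Each label annotation carries a description and a confidence score.
for label in result.get('labelAnnotations', []):
    print(label['description'], label['score'])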
Look at step 3:
Step 1: Import this lib
from google.protobuf.json_format import MessageToDict
Step 2: Send request
keyword_ideas = keyword_plan_idea_service.generate_keyword_ideas(
request=request
)
Step 3: Convert the response to JSON [look here: add "._pb"]
keyword_ideas_json = MessageToDict(keyword_ideas._pb)  # add ._pb at the end of the object
Step 4: Do whatever you want with that JSON
print(keyword_ideas_json)
GitHub issue for this same problem: here
Maybe have a look at this post:
json_string = type(response).to_json(response)
# Alternatively
import proto
json_string = proto.Message.to_json(response)
From the GitHub issue @FriedrichSal posted, you can see that proto does the job, and this is still valid in 2022 (the library name is proto-plus):
All message types are now defined using proto-plus, which uses different methods for serialization and deserialization.
import proto
objects = client.object_localization(image=image)
json_obs = proto.Message.to_json(objects)
dict_obs = proto.Message.to_dict(objects)
The MessageToJson(objects._pb) still works, but maybe someone prefers not to depend on a "hidden" property.
I have some code to connect to my company's instance of the JIRA server via Jira's Python client. When I do a count of all the projects available on the server, I see close to 4300 projects, but when I loop through the projects to get a list of all the issues in each one, I cannot get more than 149 projects. There are about 100 issues in each project, so I'd expect a total of 4300 projects and 430,000 issues.
But I'm getting 149 projects, and the loop exits with ~27,000 issues. Has anyone faced this problem? Here's the code I'm using:
import jira.client
from jira.client import JIRA
import re
import pandas as pd

options = {"server": "https://horizon.xxxxxx.com/issues", 'verify': False}
jira = JIRA(options, basic_auth=('user', 'pwd'))
projects = jira.projects()
allissues = []
allprojects = []
block_size = 50
for projectA in projects:
    block_num = 0
    while True:
        start_idx = block_num * block_size
        issues = jira.search_issues(f'project = {projectA.key}', start_idx, block_size)
        if len(issues) == 0:
            break
        block_num += 1
        for issue in issues:
            allissues.append(issue)
            allprojects.append(projectA.key)
I don't currently have the setup to validate my code, but the flow should look like this.
First import all your necessary libraries and get all the projects, then:
projectsList = []
issuesList = []
block_size = 50  # matches the search_issues default page size
for proj in projects:
    block_num = 0
    while True:
        start_idx = block_num * block_size
        issues = jira.search_issues(f'project = {proj.key}', start_idx)
        if len(issues) == 0:
            break
        block_num += 1
        issuesList.extend(issues)
    projectsList.append(proj)
A few pointers:
1. Block size is not needed as an argument to search_issues, because its default value is already 50.
2. You don't have to append the project name every time you loop over an issue.
If there are still problems with the results, provide additional information such as error messages or debug logs (add some print statements); that will make it easier to help you.
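For example, a minimal pagination sketch that relies on the default maxResults of 50 (variable names are illustrative and assume the jira client and lists from above):
start = 0
while True:
    # startAt advances by however many issues the last page returned.
    issues = jira.search_issues(f'project = "{proj.key}"', startAt=start)
    if not issues:
        break
    issuesList.extend(issues)
    start += len(issues)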
Adding quotes around the project key, as below, solved it:
import jira.client
from jira.client import JIRA
import re
import pandas as pd

options = {"server": "https://horizon.xxxxxxx.com/issues2", 'verify': False}
jira = JIRA(options, basic_auth=('user', 'pwd'))
issues2 = pd.DataFrame()
projects = jira.projects()
block_size = 50
for projectA in projects:
    block_num = 0
    while True:
        start_idx = block_num * block_size
        issues = jira.search_issues(f'project = "{projectA.key}"', start_idx, block_size)
        if len(issues) == 0:
            break
        block_num += 1
        allissues = []
        allissues.extend(issues)
        for allissue in allissues:
            d = {
                'projectname': projectA.key,
                'key': allissue.key,
                'assignee': allissue.fields.assignee,
                'creator': allissue.fields.creator,
                'reporter': allissue.fields.reporter,
                'created': allissue.fields.created,
                'components': allissue.fields.components,
                'description': allissue.fields.description,
                'summary': allissue.fields.summary,
                'fixVersion': allissue.fields.fixVersions,
                'subtask': allissue.fields.issuetype.name,
                'priority': allissue.fields.priority.name,
                'resolution': allissue.fields.resolution,
                'resolution.date': allissue.fields.resolutiondate,
                'status.name': allissue.fields.status.name,
                'statusdescription': allissue.fields.status.description,
                'updated': allissue.fields.updated,
                'versions': allissue.fields.versions,
                'watches': allissue.fields.watches,
                'storypoints': allissue.fields.customfield_10002
            }
            issues2 = issues2.append(d, ignore_index=True)

issues2.head()
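Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0. On newer pandas, a sketch of the same idea collects the row dicts in a plain list and builds the frame once (the dict is abbreviated here):
rows = []
for allissue in allissues:
    # Same field mapping as the d dict above, shortened for the sketch.
    rows.append({'projectname': projectA.key, 'key': allissue.key})
issues2 = pd.DataFrame(rows)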
Does your user have Browse Permission on all the projects?
Are there Issue Security Schemes not allowing you to see some issues?
As part of migrating to Python 3, I need to migrate from logservice to the StackDriver Logging API. I have google-cloud-logging installed, and I can successfully fetch GAE application logs with eg:
>>> from google.cloud.logging_v2 import LoggingServiceV2Client
>>> entries = LoggingServiceV2Client().list_log_entries(('projects/projectname',),
...     filter_='resource.type="gae_app" AND protoPayload."@type"="type.googleapis.com/google.appengine.logging.v1.RequestLog"')
>>> print(next(iter(entries)))
proto_payload {
type_url: "type.googleapis.com/google.appengine.logging.v1.RequestLog"
value: "\n\ts~brid-gy\022\0018\032R5d..."
}
This gets me a LogEntry with text application logs in the proto_payload.value field. How do I deserialize that field? I've found lots of related mentions in the docs, but nothing pointing me to a google.appengine.logging.v1.RequestLog protobuf generated class anywhere that I can use, if that's even the right idea. Has anyone done this?
Woo! Finally got this working. I had to generate and use the Python bindings for the google.appengine.logging.v1.RequestLog protocol buffer myself, by hand. Here's how.
First, I cloned these two repos at head:
https://github.com/googleapis/googleapis.git
https://github.com/protocolbuffers/protobuf.git
Then, I generated request_log_pb2.py from request_log.proto by running:
protoc -I googleapis/ -I protobuf/src/ --python_out . googleapis/google/appengine/logging/v1/request_log.proto
Finally, I pip installed googleapis-common-protos and protobuf. I was then able to deserialize proto_payload with:
from google.cloud.logging_v2 import LoggingServiceV2Client
client = LoggingServiceV2Client(...)
log = next(iter(client.list_log_entries(('projects/brid-gy',),
filter_='logName="projects/brid-gy/logs/appengine.googleapis.com%2Frequest_log"')))
import request_log_pb2
pb = request_log_pb2.RequestLog.FromString(log.proto_payload.value)
print(pb)
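The text application log lines themselves live in the repeated line field of the RequestLog; a minimal sketch, assuming the field names defined in request_log.proto:
# Each LogLine carries a timestamp, a severity, and the logged message.
for line in pb.line:
    print(line.time, line.severity, line.log_message)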
You can use the LogEntry.to_api_repr() function to get a JSON version of the LogEntry.
>>> from google.cloud.logging import Client
>>> entries = Client().list_entries(filter_="severity:DEBUG")
>>> entry = next(iter(entries))
>>> entry.to_api_repr()
{'logName': 'projects/PROJECT_NAME/logs/cloudfunctions.googleapis.com%2Fcloud-functions',
 'resource': {'type': 'cloud_function', 'labels': {'region': 'us-central1', 'function_name': 'test', 'project_id': 'PROJECT_NAME'}},
 'labels': {'execution_id': '1zqolde6afmx'},
 'insertId': '000000-f629ab40-aeca-4802-a678-d513e605608e',
 'severity': 'DEBUG',
 'timestamp': '2019-10-24T21:55:14.135056Z',
 'trace': 'projects/PROJECT_NAME/traces/9c5201c3061d91c2b624abb950838b40',
 'textPayload': 'Function execution started'}
Do you really want to use the v2 API?
If not, use from google.cloud import logging and
set os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true' (or a similar env setting).
That will effectively return JSON in payload instead of payload_pb.
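A minimal sketch of that approach (the filter string is just an example):
import os
# Must be set before the logging client is created.
os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true'

from google.cloud import logging

client = logging.Client()
for entry in client.list_entries(filter_='resource.type="gae_app"'):
    print(entry.payload)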
I'm trying to transcode some videos, but something is wrong with the way I am connecting.
Here's my code:
transcode = layer1.ElasticTranscoderConnection()
transcode.DefaultRegionEndpoint = 'elastictranscoder.us-west-2.amazonaws.com'
transcode.DefaultRegionName = 'us-west-2'
transcode.create_job(pipelineId, transInput, transOutput)
Here's the exception:
{u'message': u'The specified pipeline was not found: account=xxxxxx, pipelineId=xxxxxx.'}
To connect to a specific region in boto, you can use:
import boto.elastictranscoder
transcode = boto.elastictranscoder.connect_to_region('us-west-2')
transcode.create_job(...)
I just started using boto the other day, but the previous answer didn't work for me; I don't know if the API changed or what (it seems a little weird if it did, but anyway). This is how I did it.
#!/usr/bin/env python
# Boto
import boto
# Debug
boto.set_stream_logger('boto')
# Pipeline Id
pipeline_id = 'lotsofcharacters-393824'
# The input object
input_object = {
    'Key': 'foo.webm',
    'Container': 'webm',
    'AspectRatio': 'auto',
    'FrameRate': 'auto',
    'Resolution': 'auto',
    'Interlaced': 'auto'
}
# The object (or objects) that will be created by the transcoding job;
# note that this is a list of dictionaries.
output_objects = [
    {
        'Key': 'bar.mp4',
        'PresetId': '1351620000001-000010',
        'Rotate': 'auto',
        'ThumbnailPattern': '',
    }
]
# Phone home
# - Har har.
et = boto.connect_elastictranscoder()
# Create the job
# - If successful, this will execute immediately.
et.create_job(pipeline_id, input_name=input_object, outputs=output_objects)
Obviously, this is a contrived example that just runs from a standalone Python script; it assumes you have a .boto file somewhere with your credentials in it.
Another thing to note is the PresetIds; you can find these in the AWS Management Console for Elastic Transcoder, under Presets. Finally, the values that can be put in the dictionaries are lifted verbatim from the following link; as far as I can tell, they are just interpolated into a REST call (case-sensitive, obviously).
AWS Create Job API
I’m implementing a service in Python that interacts with Magento through SOAP v2. So far, I’m able to get the product list doing something like this:
import suds
from suds.client import Client
wsdl_file = 'http://server/api/v2_soap?wsdl=1'
user = 'user'
password = 'password'
client = Client(wsdl_file) # load the wsdl file
session = client.service.login(user, password) # login and create a session
client.service.catalogProductList(session)
However, I’m not able to create a product, as I don’t really know what data I should send and how to send it. I know the method I have to use is catalogProductCreate, but the PHP examples shown here don’t really help me.
Any input would be appreciated.
Thank you in advance.
You are nearly there. I think your issue is translating the PHP array of arguments into Python key/value pairs. If that's the case, this will help you a bit. You also need to set the attribute set using catalogProductAttributeSetList before you create the product:
attributeSets = client.service.catalogProductAttributeSetList(session)
# print(attributeSets)
# Not very sure how to get the current element from the attributeSets array;
# hope the below code will work.
attributeSet = attributeSets[0]
product_details = [{'name': 'Your Product Name'}, {'description': 'Product description'}, {'short_description': 'Product short description'}, {'weight': '10'}, {'status': '1'}, {'url_key': 'product-url-key'}, {'url_path': 'product-url-path'}, {'visibility': 4}, {'price': 100}, {'tax_class_id': 1}, {'categories': [2]}, {'websites': [1]}]
client.service.catalogProductCreate(session, 'simple', attributeSet.set_id, 'your_product_sku', product_details)
suds.TypeNotFound: Type not found: 'productData'
That happens because the productData format is not right. You may want to change from
product_details = [{'name': 'Your Product Name'}, {'description': 'Product description'}, {'short_description': 'Product short description'}, {'weight': '10'}, {'status': '1'}, {'url_key': 'product-url-key'}, {'url_path': 'product-url-path'}, {'visibility': 4}, {'price': 100}, {'tax_class_id': 1}, {'categories': [2]}, {'websites': [1]}]
to
product_details = {'name': 'Your Product Name', 'description': 'Product description', 'short_description': 'Product short description', 'weight': '10', 'status': '1', 'url_key': 'product-url-key', 'url_path': 'product-url-path', 'visibility': 4, 'price': 100, 'tax_class_id': 1, 'categories': [2], 'websites': [1]}
It works for me.
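Putting it together, a minimal sketch of the full create call; the SKU and the attribute-set lookup are illustrative, and client and session come from the login code in the question:
# Sketch: create a simple product over the Magento SOAP v2 API.
attribute_sets = client.service.catalogProductAttributeSetList(session)
attribute_set_id = attribute_sets[0].set_id

product_details = {
    'name': 'Your Product Name',
    'description': 'Product description',
    'price': 100,
    'status': '1',
    'visibility': 4,
    'tax_class_id': 1,
    'websites': [1],
}
new_product_id = client.service.catalogProductCreate(
    session, 'simple', attribute_set_id, 'your_product_sku', product_details)
print(new_product_id)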