GET request using a Python script and the SonarQube web API - python

I've recently started using sonar.cloud.whatever.com. On the portfolio page I see, by default, the analysis results for the master branch of any given project, so if I want to collect information for several branches of a project I have to select those branches one by one.
I don't know of any simpler way to do this that Sonar provides out of the box,
so I started a Python script using the web service API exposed by Sonar.
To begin with (just to check the result), I collect all the issues using /api/issues/search:
import json
import sys
import os
import requests


def usage():
    print("hello")


def collectIssues():
    r = requests.get('https://sonar.cloud.sec.NotToMention.com/api/issues/search?componentKeys=project_key&statuses=OPEN')
    print("status code ", r.status_code)
    print(r.url)
    # print(r.headers)
    if r.status_code != 200:
        exit(0)
    data = r.json()
    print(r.headers, "\n\n")
    print(data)
    print(data['issues'])


def main(args):
    collectIssues()


if __name__ == '__main__':
    main(sys.argv)
    exit(0)
If I copy the URL into a browser I get the expected result with a total of 1000 issues, but this script returns total 0 and issues = [].
(Just to note: project_name and NotToMention are not the real values; I replaced them here for security reasons.)
The result of this script is:
status_code : 200
https://sonar.cloud.NotTOMention.com/api/issues/search?componentKeys=project_name&statuses=OPEN
JSON RESULT :
{'total': 0, 'p': 1, 'ps': 100, 'paging': {'pageIndex': 1, 'pageSize': 100, 'total': 0}, 'effortTotal': 0, 'debtTotal': 0, 'issues': [], 'components': [], 'facets': []}
Thanks for any advice.
Best regards
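One likely difference between the browser and the script is authentication: the browser request uses your logged-in session, while the plain requests.get call is anonymous, so a private project can legitimately come back with zero issues. A minimal sketch of an authenticated call, assuming a SonarQube user token; the token value, the branch parameter and the branch name below are placeholders/assumptions, not from the original post:

import requests

SONAR_URL = 'https://sonar.cloud.sec.NotToMention.com'
TOKEN = 'my-user-token'  # hypothetical: generated under My Account > Security

def collect_issues(project_key, branch=None):
    params = {'componentKeys': project_key, 'statuses': 'OPEN', 'ps': 500}
    if branch:
        # 'branch' is the parameter branch analysis uses for issues/search;
        # check your server's /web_api documentation to confirm it is available.
        params['branch'] = branch
    # SonarQube accepts the token as the basic-auth username with an empty password.
    r = requests.get(SONAR_URL + '/api/issues/search', params=params, auth=(TOKEN, ''))
    r.raise_for_status()
    return r.json()['issues']

print(len(collect_issues('project_key', branch='develop')))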

Related

python websocket fetch returns empty object

I'm trying to use the pybit package (https://github.com/verata-veritatis/pybit) on a crypto exchange, but when I try to fetch data from the websocket, all I get is an empty object as a response.
import pybit

endpoint_public = 'wss://stream.bybit.com/realtime_public'

subs = [
    'orderBookL2_25.BTCUSD',
    'instrument_info.100ms.BTCUSD',
    'last_price.BTCUSD'
]

ws_unauth = WebSocket(endpoint_public, subscriptions=subs)
ws_unauth.fetch('last_price.BTCUSD')
The output is this:
{}
EDIT: 2022.09.19
It seems they changed the code in the module and the examples in the documentation are now different. They don't use fetch() anymore; instead they assign subscriptions to handler functions, and the websocket runs its own (hidden) loop that fetches the data and executes the assigned function.
I found three problems:
First: the code works for me if I use the endpoint realtime instead of realtime_public - I found it somewhere in the ByBit API documentation (not in the documentation for the Python module).
Second: there is no 'last_price.BTCUSD' in the documentation - it generates errors when I try it with the endpoint realtime, and the other subscriptions don't work with realtime_public.
Third: the first fetch may not return a result, so it may be necessary to sleep() for a short time before the first fetch. Normally the code should run in a loop and get data every few (milli)seconds, and then this is not a problem. You can also use an if to run code only when you actually got data.
import pybit
import time

endpoint_public = 'wss://stream.bybit.com/realtime'

subs = [
    'orderBookL2_25.BTCUSD',
    'instrument_info.100ms.BTCUSD',
    # 'last_price.BTCUSD'
]

ws_unauth = pybit.WebSocket(endpoint_public, subscriptions=subs)

time.sleep(1)

#print(ws_unauth.fetch('last_price.BTCUSD'))    # doesn't work with `realtime_public`; generates an error with `realtime`
print(ws_unauth.fetch('orderBookL2_25.BTCUSD'))  # doesn't work with `realtime_public`; works with `realtime`
Result:
[
    {'price': '40702.50', 'symbol': 'BTCUSD', 'id': 407025000, 'side': 'Buy', 'size': 350009},
    {'price': '40703.00', 'symbol': 'BTCUSD', 'id': 407030000, 'side': 'Buy', 'size': 10069},
    {'price': '40705.00', 'symbol': 'BTCUSD', 'id': 407050000, 'side': 'Buy', 'size': 28},
    # ...
]
BTW:
The ByBit API documentation also shows examples for Public Topics.
They use:
- realtime instead of realtime_public,
- a loop to fetch data periodically,
- if data: to skip empty responses.
from pybit import WebSocket

subs = [
    "orderBookL2_25.BTCUSD"
]

ws = WebSocket(
    "wss://stream-testnet.bybit.com/realtime",
    subscriptions=subs
)

while True:
    data = ws.fetch(subs[0])
    if data:
        print(data)
The ByBit API documentation also shows examples for Private Topics.
They likewise use:
- realtime instead of realtime_public (plus api_key and api_secret),
- a loop to fetch data periodically,
- if data: to skip empty responses.
For testing they use stream-testnet, but real code should use stream. A rough sketch of that private-topic setup follows below.
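A minimal sketch of the private-topic variant described above, using the same old pybit WebSocket interface; the topic name 'position' and the key values are assumptions, not taken from the documentation excerpt:

from pybit import WebSocket
import time

API_KEY = 'your-api-key'        # hypothetical credentials
API_SECRET = 'your-api-secret'

subs = [
    'position'   # assumed private topic name; check the ByBit docs for the exact names
]

ws_auth = WebSocket(
    'wss://stream-testnet.bybit.com/realtime',  # use stream.bybit.com in real code
    subscriptions=subs,
    api_key=API_KEY,
    api_secret=API_SECRET
)

while True:
    data = ws_auth.fetch(subs[0])
    if data:
        print(data)
    time.sleep(0.5)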

Openstack Python API novaclient - SecurityGroup Rule list with description

I have been familiar with the Python API for a while, and there is an annoying thing I can't solve.
In short, I want to get all the security group rules in my environment.
That part works; what bothers me is that I can't get the "description" associated with them at all.
My Python code:
from keystoneauth1 import session
from novaclient import client
import json
from requests import get
...AUTH....
sg_list = nova.security_groups.list()
print(sg_list)
....OUTPUT:
[<SecurityGroup description=192.168.140.0/24, id=123213xxxc2e6156243, name=asdasdasd, rules=[{'from_port': 1, 'group': {}, 'ip_protocol': 'tcp', 'to_port': 65535, 'parent_group_id': '615789e4-d4e214213136156243', 'ip_range': {'cidr': '192.168.140.0/24'},....
Is there a solution for this?
Thanks !
The output is a list, so on the first element you can do (converting the object to a string first):
str(sg_list[0]).split("description=", 1)[1].split(",", 1)[0]
and you will get the description only.
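Splitting the repr string is fragile; the output in the question suggests the fields are also exposed as attributes on the SecurityGroup objects, so a hedged alternative (assuming the novaclient objects behave as their repr suggests) is to read them directly:

# Read the group-level fields directly instead of parsing the repr.
# Note: per-rule descriptions may simply not be exposed through this legacy nova API.
for sg in nova.security_groups.list():
    print(sg.name, sg.description)
    for rule in sg.rules:
        print('  ', rule)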

How to deserialize App Engine application logs from StackDriver Logging API?

As part of migrating to Python 3, I need to migrate from logservice to the StackDriver Logging API. I have google-cloud-logging installed, and I can successfully fetch GAE application logs with e.g.:
>>> from google.cloud.logging_v2 import LoggingServiceV2Client
>>> entries = LoggingServiceV2Client().list_log_entries(('projects/projectname',),
...     filter_='resource.type="gae_app" AND protoPayload.#type="type.googleapis.com/google.appengine.logging.v1.RequestLog"')
>>> print(next(iter(entries)))
proto_payload {
  type_url: "type.googleapis.com/google.appengine.logging.v1.RequestLog"
  value: "\n\ts~brid-gy\022\0018\032R5d..."
}
This gets me a LogEntry with text application logs in the proto_payload.value field. How do I deserialize that field? I've found lots of related mentions in the docs, but nothing pointing me to a google.appengine.logging.v1.RequestLog protobuf generated class anywhere that I can use, if that's even the right idea. Has anyone done this?
Woo! Finally got this working. I had to generate and use the Python bindings for the google.appengine.logging.v1.RequestLog protocol buffer myself, by hand. Here's how.
First, I cloned these two repos at head:
https://github.com/googleapis/googleapis.git
https://github.com/protocolbuffers/protobuf.git
Then, I generated request_log_pb2.py from request_log.proto by running:
protoc -I googleapis/ -I protobuf/src/ --python_out . googleapis/google/appengine/logging/v1/request_log.proto
Finally, I pip installed googleapis-common-protos and protobuf. I was then able to deserialize proto_payload with:
from google.cloud.logging_v2 import LoggingServiceV2Client
client = LoggingServiceV2Client(...)
log = next(iter(client.list_log_entries(
    ('projects/brid-gy',),
    filter_='logName="projects/brid-gy/logs/appengine.googleapis.com%2Frequest_log"')))
import request_log_pb2
pb = request_log_pb2.RequestLog.FromString(log.proto_payload.value)
print(pb)
You can use the LogEntry.to_api_repr() function to get a JSON version of the LogEntry.
>>> from google.cloud.logging import Client
>>> entries = Client().list_entries(filter_="severity:DEBUG")
>>> entry = next(iter(entries))
>>> entry.to_api_repr()
{'logName': 'projects/PROJECT_NAME/logs/cloudfunctions.googleapis.com%2Fcloud-functions',
 'resource': {'type': 'cloud_function',
              'labels': {'region': 'us-central1',
                         'function_name': 'test',
                         'project_id': 'PROJECT_NAME'}},
 'labels': {'execution_id': '1zqolde6afmx'},
 'insertId': '000000-f629ab40-aeca-4802-a678-d513e605608e',
 'severity': 'DEBUG',
 'timestamp': '2019-10-24T21:55:14.135056Z',
 'trace': 'projects/PROJECT_NAME/traces/9c5201c3061d91c2b624abb950838b40',
 'textPayload': 'Function execution started'}
Do you really want to use the v2 API?
If not, use from google.cloud import logging and
set os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true' (or the equivalent environment setting).
That will effectively return JSON in payload instead of payload_pb.
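A minimal sketch of that approach, assuming the environment variable is set before the client is created (the filter string is only an illustration):

import os

# Disable gRPC before creating the client so the REST/JSON transport is used
# and payloads come back as plain dicts instead of protobuf messages.
os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true'

from google.cloud import logging

client = logging.Client()
for entry in client.list_entries(filter_='resource.type="gae_app"'):  # example filter
    print(entry.payload)  # JSON-style payload rather than a protobuf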

Pepper Linux log date format looks meaningless

I have a problem with reading the date value from the Pepper robot log. I use Python to fetch the log messages from a remote robot. See the example code:
import qi


def onMessage(mess):
    print mess  # mess is a dictionary with all known LogMessage information.


def main():
    app = qi.Application(url="tcp://10.0.11.252:9559")
    app.start()
    logmanager = app.session.service("LogManager")
    listener = logmanager.getListener()
    listener.onLogMessage.connect(onMessage)
    app.run()


if __name__ == "__main__":
    main()
This is what one log message looks like:
{
    'category': 'ALMemory',
    'level': 5,
    'source': ':notify:0',
    'location': '7b5400e2-18b1-48e4-1127-g4e6544d0621b:3107',
    'date': 11112334305291,
    'message': 'notifying module: _WholeBodyLooker for datachange for key: ALTracker/ObjectLookAt',
    'id': 5599547,
    'systemDate': 1533208857670649344,
}
The problem is that I don't know what the date value means. I couldn't find any documentation for it. When I try to convert 11112334305291 to a date (treating it as milliseconds since the epoch), the result is not meaningful: Sunday, February 19, 2322 11:31:45.291 PM.
Does anyone have any idea what this value might mean?
Most likely nanoseconds since the robot was switched on (so in your case, about three hours) - see the qi clock API in the documentation.
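A quick sanity check of that interpretation (plain arithmetic, not from the original answer):

date_ns = 11112334305291   # 'date' field from the log message above
uptime_s = date_ns / 1e9   # nanoseconds -> seconds
print(uptime_s / 3600)     # ~3.09, i.e. roughly three hours of uptime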

How do you connect to AWS Elastic Transcoder?

I'm trying to transcode some videos, but something is wrong with the way I am connecting.
Here's my code:
from boto.elastictranscoder import layer1  # boto 2.x

transcode = layer1.ElasticTranscoderConnection()
transcode.DefaultRegionEndpoint = 'elastictranscoder.us-west-2.amazonaws.com'
transcode.DefaultRegionName = 'us-west-2'
transcode.create_job(pipelineId, transInput, transOutput)
Here's the exception:
{u'message': u'The specified pipeline was not found: account=xxxxxx, pipelineId=xxxxxx.'}
To connect to a specific region in boto, you can use:
import boto.elastictranscoder
transcode = boto.elastictranscoder.connect_to_region('us-west-2')
transcode.create_job(...)
I just started using boto the other day, but the previous answer didn't work for me - I don't know if the API changed or what (it seems a little weird if it did, but anyway). This is how I did it.
#!/usr/bin/env python

# Boto
import boto

# Debug
boto.set_stream_logger('boto')

# Pipeline Id
pipeline_id = 'lotsofcharacters-393824'

# The input object
input_object = {
    'Key': 'foo.webm',
    'Container': 'webm',
    'AspectRatio': 'auto',
    'FrameRate': 'auto',
    'Resolution': 'auto',
    'Interlaced': 'auto'
}

# The object (or objects) that will be created by the transcoding job;
# note that this is a list of dictionaries.
output_objects = [
    {
        'Key': 'bar.mp4',
        'PresetId': '1351620000001-000010',
        'Rotate': 'auto',
        'ThumbnailPattern': '',
    }
]

# Phone home
# - Har har.
et = boto.connect_elastictranscoder()

# Create the job
# - If successful, this will execute immediately.
et.create_job(pipeline_id, input_name=input_object, outputs=output_objects)
Obviously, this is a contrived example that just runs as a standalone Python script; it assumes you have a .boto file somewhere with your credentials in it.
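For reference, a minimal ~/.boto file looks roughly like this (the key values are placeholders, not real credentials):

[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY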
Another thing to note is the PresetIds; you can find these in the AWS Management Console for Elastic Transcoder, under Presets. Finally, the values that can go into the dictionaries are lifted verbatim from the following link - as far as I can tell, they are just interpolated into a REST call (case-sensitive, obviously).
AWS Create Job API
