In the Google Cloud Console, under Security Command Center > Findings, you can click an item to view its details. There is a section that lists "Attributes" and "Source Properties", and I would like to get some of those values. The code below is taken from this page (https://cloud.google.com/security-command-center/docs/how-to-api-list-findings) and modified to get what I need:
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
organization_id = "<my organization id>"
org_name = "organizations/{org_id}".format(org_id=organization_id)
all_sources = "{org_name}/sources/-".format(org_name=org_name)  # "-" is the all-sources wildcard
finding_result_iterator = client.list_findings(request={"parent": all_sources, "filter": 'severity="HIGH"'})
for i, finding_result in enumerate(finding_result_iterator):
    sourceId = finding_result.finding.resource_name
    title = finding_result.finding.category
    alertTime = finding_result.finding.event_time
    serviceName = finding_result.resource.type_
    description = ""
    additionalInfo = ""
I would like to get the "explanation" and "recommendation" values from Source Properties, but I don't know where to find them. The reference page shows the output for each finding_result in the loop, and the Console displays these properties, but I haven't worked out how to retrieve them, and searching the web hasn't turned up an answer. I'm hoping someone here has one.
So, I was being a bit impatient with my question, both here and with Google Support. When I tightened up the filters for my call, I found records that do indeed have the two values I was looking for. For those who are interested, I've included some junky test code below.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
organization_id = "<my org id>"
org_name = "organizations/{org_id}".format(org_id=organization_id)
all_sources = "{org_name}/sources/-".format(org_name=org_name)
finding_result_iterator = client.list_findings(request={"parent": all_sources, "filter": 'severity="HIGH" AND state="ACTIVE" AND category!="Persistence: IAM Anomalous Grant" AND category!="MFA_NOT_ENFORCED"'})
for i, finding_result in enumerate(finding_result_iterator):
    sourceId = finding_result.finding.resource_name
    projectId = finding_result.resource.project_display_name
    title = finding_result.finding.category
    alertTime = finding_result.finding.event_time
    serviceName = finding_result.resource.type_
    description = ""
    additionalInfo = ""
    externalUri = ""
    if hasattr(finding_result.finding, "external_uri"):
        externalUri = finding_result.finding.external_uri
    sourceProps = finding_result.finding.source_properties
    for item in sourceProps:
        if item == "Explanation":
            description = str(sourceProps[item])
        if item == "Recommendation":
            additionalInfo = str(sourceProps[item])
    print("TITLE: " + title)
    print("  PROJECT ID: " + projectId)
    print("  DESCRIPTION: " + description)
    print("  SOURCE ID: " + sourceId)
    print("  ALERT TIME: {}".format(alertTime))
    print("  SERVICE NAME: " + serviceName)
    # keep the recommendation and the optional URI on one line
    print("  ADDITIONAL INFO: Recommendation: " + additionalInfo, end="")
    if len(externalUri) > 0:
        print(", External URI: " + externalUri, end="")
    print()
    if i < 1:
        break
So while the question was a bit of a waste, the code might help someone else trying to work with the Security Command Center API.
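Since `source_properties` behaves like a mapping, the membership loop above can be collapsed into dict-style `.get()` calls with defaults. A minimal sketch, with a plain dict standing in for the real proto map (the property values below are invented for illustration):

```python
# Stand-in for finding.source_properties, which behaves like a mapping of
# property name -> value. These example values are invented, not real findings.
source_props = {
    "Explanation": "The firewall rule allows traffic from 0.0.0.0/0.",
    "Recommendation": "Restrict the source IP range of the rule.",
}

# .get() with a default replaces the manual "for item in sourceProps" loop
description = str(source_props.get("Explanation", ""))
additionalInfo = str(source_props.get("Recommendation", ""))
print(description)
print(additionalInfo)
```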
I don't know why this code does not work.
When I print the lists "videos" and "search_results" to see what is happening, they are both empty. This is why the if statement is never reached, as you can see in the screenshot.
# Query youtube results based on the songs entered in the text file
def find_video_urls(self, songs):
    videos = list()
    for song in songs:
        self.update_text_widget("\nSong - " + song + " - Querying youtube results ...")
        query_string = urllib.parse.urlencode({"search_query": song})
        with urllib.request.urlopen("http://www.youtube.com/results?" + query_string) as html_content:
            # retrieve all videos that met the song name criteria
            search_results = re.findall(r'href=\"\/watch\?v=(.{11})', html_content.read().decode())
        # only take the top result
        if len(search_results) != 0:
            videos.append("https://www.youtube.com/watch?v=" + search_results[0])
            self.update_text_widget("\nSong - " + song + " - Found top result!")
    return videos
(Screenshot: GUI output showing the empty results.)
Hi Vigilante, I will share code showing how to do it with requests. Maybe you can implement it in your own code.
import re
import requests

def find_video_urls(songs):
    videos = list()
    for song in songs:
        with requests.session() as ses:
            r = ses.get('http://www.youtube.com/results', params={"search_query": song})
            search_results = re.findall(rb'(/watch\?v=.{11})\"', r.content, re.MULTILINE | re.IGNORECASE | re.DOTALL)
            print(search_results)
            # only take the top result
            if len(search_results) != 0:
                videos.append(b"".join([b'https://www.youtube.com', search_results[0]]))
    return videos

print(find_video_urls(['Eminem - Lose Yourself']))
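The regex itself is easy to sanity-check offline against a canned snippet (the HTML below is invented; note that real results pages are now largely JavaScript-rendered, which is one reason these lists can come back empty):

```python
import re

# Invented sample of what an old-style results page contained
html = b'<a href="/watch?v=_Yhyp-_hX2s">Lose Yourself</a>'

# Same pattern as the answer: a /watch path plus an 11-character video id
matches = re.findall(rb'(/watch\?v=.{11})\"', html)
print(matches)
```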
I am wondering if anyone is able to read the articles within tickets of an OTRS system via PyOTRS? I am able to connect and get tickets fine; I just cannot find out how to get the content of the tickets. I have been up and down the PyOTRS documentation, but I am stuck. Does anyone have anything they can share with regards to reading articles?
The pre-requisites for PyOTRS are mentioned here: https://pypi.org/project/PyOTRS/.
Once these are complete, the following steps can be taken to retrieve OTRS ticket data:
A connection is initiated by creating a client.
An OTRS ticket search is conducted.
Ticket data, including dynamic fields and articles, is retrieved from the get_ticket.to_dct() response.
from pyotrs import Article, Client, Ticket, DynamicField, Attachment

# Initializing (config is assumed to be loaded elsewhere)
URL = config["url"]
USERNAME = config["username"]
PASSWORD = config["password"]
TICKET_LINK = config["ticketLink"]

# Create session
def createSession():
    client = Client(URL, USERNAME, PASSWORD, https_verify=False)
    client.session_create()
    return client

# Retrieve tickets based on condition
def ticketData(client):
    # Specify ticket search condition
    data = client.ticket_search(Queues=['queue1Name', 'queue2Name'], States=['open'])
    print("Number of tickets retrieved: " + str(len(data)))
    # Iterating over all search results
    if data and data[0] != '':
        for ticket_id in data:
            # Get ticket details
            get_ticket = client.ticket_get_by_id(ticket_id, articles=1, attachments=1)
            print(get_ticket)
            q1 = ("Ticket id: " + str(get_ticket.field_get("TicketID"))
                  + "\nTicket number: " + str(get_ticket.field_get("TicketNumber"))
                  + "\nTicket Creation Time: " + str(get_ticket.field_get("Created"))
                  + "\n\nTicket title: " + get_ticket.field_get("Title"))
            print(q1)
            # Based on the to_dct() response we can access dynamic field (list) and article values
            print(get_ticket.to_dct())
            # Accessing dynamic field values
            dynamicField3 = get_ticket.to_dct()["Ticket"]["DynamicField"][3]["Value"]
            dynamicField12 = get_ticket.to_dct()["Ticket"]["DynamicField"][12]["Value"]
            # Accessing articles
            article = get_ticket.to_dct()["Ticket"]["Article"]
            print(len(article))
            # Iterating through all articles of the ticket (tickets can have multiple articles)
            for a1 in range(0, len(article)):
                # Article subject
                q2 = "Article " + str(a1+1) + ": " + article[a1]["Subject"] + "\n"
                print(q2)
                # Article body (already a str in Python 3; no need to encode)
                x = article[a1]["Body"]
                q3 = "Body " + str(a1+1) + ": " + x + "\n"
                print(q3)
            # Ticket link for reference
            q4 = "Ticket link: " + TICKET_LINK + str(ticket_id) + "\n"
            print(q4, end="\n\n")

def main():
    client = createSession()
    ticketData(client)
    print("done")

main()
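The article access above boils down to walking the nested dict that to_dct() returns. A sketch with a hard-coded stand-in for that response (structure as in the answer above; the subjects and bodies are invented):

```python
# Stand-in for get_ticket.to_dct(); keys follow the answer above,
# with invented subjects and bodies for illustration.
ticket_dct = {
    "Ticket": {
        "TicketID": 42,
        "Article": [
            {"Subject": "Printer broken", "Body": "It shows error E5."},
            {"Subject": "Re: Printer broken", "Body": "Please restart it."},
        ],
    }
}

# Grab the article list once instead of calling to_dct() repeatedly
articles = ticket_dct["Ticket"]["Article"]
for i, art in enumerate(articles, start=1):
    print("Article {}: {}".format(i, art["Subject"]))
    print("Body {}: {}".format(i, art["Body"]))
```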
How to make text clickable?
class ComplainceServer():
    def __init__(self, jira_server, username, password, encoding='utf-8'):
        if jira_server is None:
            error('No server provided.')
        #print(jira_server)
        self.jira_server = jira_server
        self.username = username
        self.password = password
        self.encoding = encoding

    def checkComplaince(self, appid, toAddress):
        query = "/rest/api/2/search?jql=issuetype = \"Application Security\" AND \"Prod Due Date\" < now()"
        request = self._createRequest()
        response = request.get(query, contentType='application/json')
        # Parse result
        if response.status == 200 and action == "warn":
            data = Json.loads(response.response)
            print "#### Issues found"
            issues = {}
            msg = "WARNING: The below tickets are non-complaint in fortify, please fix them or raise exception.\n"
            issue1 = data['issues'][0]['key']
            for item in data['issues']:
                issue = item['key']
                issues[issue] = item['fields']['summary']
                print u"* {0} - {1}".format(self._link(issue), item['fields']['summary'])
                print "\n"
                data = u" {0} - {1}".format(self._link(issue), item['fields']['summary'])
                msg += '\n' + data
            SOCKET_TIMEOUT = 30000  # 30s
            email = SimpleEmail()
            email.setHostName('smtp.com')
            email.setSmtpPort(25)
            email.setSocketConnectionTimeout(SOCKET_TIMEOUT)
            email.setSocketTimeout(SOCKET_TIMEOUT)
            email.setFrom('R#group.com')
            for toAddress in toAddress.split(','):
                email.addTo(toAddress)
            email.setSubject('complaince report')
            email.addHeader('X-Priority', '1')
            email.setMsg(str(msg))
            email.send()

    def _createRequest(self):
        return HttpRequest(self.jira_server, self.username, self.password)

    def _link(self, issue):
        return '[{0}]({1}/browse/{0})'.format(issue, self.jira_server['url'])
This is the calling function. appid and toAddress will be passed in from a different UI.
from Complaince import ComplainceServer
jira = ComplainceServer(jiraServer, username, password)
issues = jira.checkComplaince(appid, toAddress)
I want the issue id to be an embedded link.
Currently the email sends as below:
MT-4353(https://check.com/login/browse/MT-4353) - Site Sc: DM isg_cq5
but I want [MT-4353] to be a hyperlink to the URL https://check.com/login/browse/MT-4353.
Firstly, you need to encode your email as HTML. I'm not familiar with the library you are using, so I cannot give an example of that part.
I have replaced a snippet of your code with HTML syntax just to illustrate the point: you are meant to use HTML markup to get clickable links in an email.
msg = "<p>WARNING: The below tickets are non-compliant in fortify, please fix them or raise exception.</p>"
issue1 = data['issues'][0]['key']
for item in data['issues']:
    issue = item['key']
    issues[issue] = item['fields']['summary']
    data = u"<a href='{0}'>{1}</a>".format(self._link(issue), item['fields']['summary'])
    msg += '<br />' + data
In future, please word your questions carefully, as your title does not indicate what you actually mean. You also have spelling mistakes: "compliant".
Oh, I missed the point that self._link(issue) does not return the correct link. It returns MT-4353(https://check.com/login/browse/MT-4353), so you are going to need to extract the link part between the brackets. I suggest a regular expression.
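For completeness, extracting the two parts with a regular expression might look like this. This is only a sketch: split_link is a hypothetical helper, and it assumes the markdown-style string that _link builds (with or without the square brackets):

```python
import re

def split_link(rendered):
    # Hypothetical helper: split "[MT-4353](https://host/browse/MT-4353)"
    # (or "MT-4353(https://...)") into (issue_key, url).
    m = re.match(r'^\[?([^\]\(]+)\]?\((https?://[^)]+)\)$', rendered)
    if m is None:
        return rendered, None
    return m.group(1), m.group(2)

issue, url = split_link("[MT-4353](https://check.com/login/browse/MT-4353)")
# Build the HTML anchor with the URL in href and the key as the visible text
anchor = "<a href='{0}'>{1}</a>".format(url, issue)
print(anchor)
```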
I'm writing code for a Discord bot which searches different game hosting sites. It searches for an image and a description in the HTML of the page using Robobrowser.
I had no issues before. I just added a case for the Google Play Store, however, and now it tells me "Task was destroyed but it is pending!" when it tries to get those items through a Play Store link.
I don't know why this is happening, nor how to fix it. I looked up all the other "Task was destroyed..." cases, but none were similar to mine.
Here is my code:
I've tried threading it and awaiting it. Robobrowser cannot be awaited, so that didn't work. Threading also didn't work because I need the functions to return a string. I know it's possible to return something while using a different thread, but it was overly complex for what I'm trying to fix.
def get_embed_caption(url):
    print("Getting caption")
    desc = None
    if url != "No Link":
        try:
            browser.open(url)
            desc = "something"
        except:
            print("Caption ERROR with url")
            desc = None
        if desc is not None:
            if "itch.io" in url and " " not in url:
                parse = browser.parsed
                parse = str(parse)
                pos2 = parse.find("og:description")
                pos1 = parse.rfind('content=', 0, pos2)
                desc_type = parse[pos1+8:pos1+9]
                pos2 = parse.rfind(desc_type, 0, pos2-2)
                pos1 = parse.find(desc_type, pos1)
                desc = parse[pos1+1:pos2]
                if len(desc) > 1000:
                    desc = desc[:1000]
                if "/><" in desc:
                    pos = parse.find("formatted_description user_formatted")
                    pos = parse.find("<p>", pos)
                    desc = parse[pos+3:parse.find('</p>', pos)]
            elif "steam" in url and " " not in url:
                parse = browser.parsed
                parse = str(parse)
                pos = parse.find("game_description_snippet")
                pos = parse.find('"', pos)
                pos = parse.find('>', pos)
                desc = parse[pos+1:parse.find('<', pos+1)]
            elif "play.google" in url and " " not in url:
                parse = browser.parsed
                parse = str(parse)
                pos = parse.find('aria-label="Description"')
                print(parse[pos:pos+20])
                pos = parse.rfind("content", 0, pos)
                print(parse[pos:pos+20])
                pos = parse.find('"', pos)
                print(parse[pos:pos+20])
                desc = parse[pos+1:parse.find('"', pos+1)]
            else:
                print("No caption")
                desc = None
            if desc is not None:
                desc = desc.replace("<p>", "")
                desc = desc.replace("</p>", "")
                desc = desc.replace("<em>", "`")
                desc = desc.replace("</em>", "`")
                desc = desc.replace("<br>", "")
                desc = desc.replace("<br/>", "")
    return desc
Task was destroyed but it is pending!
task: <Task pending coro=<Client._run_event() running at C:\Users\Gman\AppData\Local\Programs\Python\Python36\lib\site-packages\discord\client.py:307> wait_for=<Future pending cb=[BaseSelectorEventLoop._sock_connect_done(696)(), <TaskWakeupMethWrapper object at 0x0000000005DEAA98>()]>>
It seems to run through the process just fine, but right when it finishes, it crashes.
The Google Play Store page has a lot of gibberish HTML, probably to make web scraping difficult on purpose. That led to the parse taking more than 10 seconds. I still do not know what caused the task to destroy itself, though.
The fix was to use Python's play-scraper library, which was 20x faster, taking less than half a second to gather the info.
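If a slow blocking parse has to stay, one common pattern in asyncio-based bots is to push it onto a worker thread with run_in_executor, which (unlike a bare Thread) hands the return value back to the coroutine. A minimal sketch, with a stand-in function instead of the real Robobrowser scrape:

```python
import asyncio

def blocking_scrape(url):
    # Stand-in for the slow Robobrowser parse; runs in a worker thread.
    return "description for " + url

async def get_caption(url):
    loop = asyncio.get_running_loop()
    # Unlike a bare Thread, run_in_executor returns the function's result,
    # and awaiting it keeps the event loop free while the thread works.
    return await loop.run_in_executor(None, blocking_scrape, url)

result = asyncio.run(get_caption("https://example.com"))
print(result)
```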
I am still a beginner and have just started with Python.
I am trying to get the tier and rank of a player with the Riot Games API (EUW only) via JSON, but I get an exception:
print (responseJSON2[ID][0]['tier'])
TypeError: list indices must be integers or slices, not str
I don't know what I have to change; maybe someone can help me :)
Code:
import requests

def requestSummonerData(summonerName, APIKey):
    URL = "https://euw1.api.riotgames.com/lol/summoner/v3/summoners/by-name/" + summonerName + "?api_key=" + APIKey
    print(URL)
    response = requests.get(URL)
    return response.json()

def requestRankedData(ID, APIKey):
    URL = "https://euw1.api.riotgames.com/lol/league/v3/positions/by-summoner/" + ID + "?api_key=" + APIKey
    print(URL)
    response = requests.get(URL)
    return response.json()

def main():
    summonerName = str(input('Type your Summoner Name here: '))
    APIKey = str(input('Copy and paste your API Key here: '))
    responseJSON = requestSummonerData(summonerName, APIKey)
    print(responseJSON)
    ID = responseJSON['id']
    ID = str(ID)
    print(ID)
    responseJSON2 = requestRankedData(ID, APIKey)
    print(responseJSON2[ID][0]['tier'])
    print(responseJSON2[ID][0]['entries'][0]['rank'])
    print(responseJSON2[ID][0]['entries'][0]['leaguePoints'])

if __name__ == "__main__":
    main()
responseJSON2 is a list. A list has indexes (0, 1, 2, ...).
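You can reproduce the error in isolation: indexing a list with a string raises exactly the TypeError from the question:

```python
ranked_data = [{"tier": "GOLD"}]   # a list, like responseJSON2

print(ranked_data[0]["tier"])      # an integer index works fine

try:
    ranked_data["0"]               # a string index reproduces the error
except TypeError as exc:
    print(exc)                     # "list indices must be integers or slices, not str"
```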
You need to use an int for your list. ID = str(ID) is wrong; you need an int there. Try:
ID = int(ID)
And you can convert it back to a string inside the request:
def requestRankedData(ID, APIKey):
    URL = "https://euw1.api.riotgames.com/lol/league/v3/positions/by-summoner/" + str(ID) + "?api_key=" + APIKey
    print(URL)
    response = requests.get(URL)
    return response.json()
You need to find the index matching your ID in your response:
responseJSON2 = requestRankedData(ID, APIKey)
ID_idx = responseJSON2.index(str(ID))
print(responseJSON2[ID_idx][0]['tier'])
print(responseJSON2[ID_idx][0]['entries'][0]['rank'])
print(responseJSON2[ID_idx][0]['entries'][0]['leaguePoints'])
Here is my code:
from riotwatcher import LolWatcher

region = str(input("Your region : "))  # If you only need EUW, just do region = "euw1"
summonerName = str(input("Your summoner name : "))  # Asking for the user's summoner name
watcher = LolWatcher(api_key="your_api_key")
summoner = watcher.summoner.by_name(region=region, summoner_name=summonerName)  # Getting account information; you can print(summoner) to see what it gives
rank = watcher.league.by_summoner(region=region, encrypted_summoner_id=summoner["id"])  # User ranks, using the id from summoner
tier = rank[0]["tier"]
ranklol = rank[0]["rank"]
lp = rank[0]["leaguePoints"]
print(f"{tier} {ranklol} {lp} LP")
It should be OK. I don't know why you are building the URLs by hand; use the API wrapper's features, it's much easier. Hope I helped you.