I am trying to retrieve findings from Google Security Center using the Python API. I have installed the Python libraries, set up a service account, and generated a key, but when I try to list findings (or call any other client function) I get the following error:
Traceback (most recent call last):
File "./find.py", line 12, in <module>
finding_result_iterator = client.list_findings(all_sources)
File "/usr/local/lib/python3.6/site-packages/google/cloud/securitycenter_v1/gapic/security_center_client.py", line 1532, in list_findings
self.transport.list_findings,
AttributeError: 'str' object has no attribute 'list_findings'
I am using the code example from here:
https://cloud.google.com/security-command-center/docs/how-to-api-list-findings
I am using Python 3.6, passing the JSON key file into the client constructor, and I have my organization ID. Any idea why I can't get any of the client functions to work?
I am passing the API Key in as the argument for authentication like this
client = securitycenter.SecurityCenterClient("gcp-sc.json")
If you have a file called gcp-sc.json with the Google credential data, either
set the environment variable GOOGLE_APPLICATION_CREDENTIALS to point to that path and then initialize the client with no arguments (SecurityCenterClient()); it will pick the file up automatically,
or, if you need to name the file explicitly, SecurityCenterClient.from_service_account_json('gcp-sc.json') ought to do the trick.
You can also pass in a custom credentials object (see the docs) as SecurityCenterClient(credentials=...)
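The first two options can be sketched as follows (a minimal sketch; the client constructions are shown commented out because they require the google-cloud-securitycenter package and a valid key file):

```python
import os

# Option 1: point the standard env var at the key file, then construct the
# client with no arguments; the auth library picks the file up automatically.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "gcp-sc.json"
# client = securitycenter.SecurityCenterClient()

# Option 2: name the key file explicitly via the classmethod:
# client = securitycenter.SecurityCenterClient.from_service_account_json("gcp-sc.json")
```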
I'm writing a simple email filter to process incoming Outlook messages on Windows 10, and I want to code it in Python using the win32com library, under Anaconda. I also want to avoid using magic numbers for the "Inbox" as I see in other examples, and would rather use the constants that should be defined under win32com.client.constants. But I'm running into surprisingly simple errors:
So, I concocted the following simple code, loosely based upon https://stackoverflow.com/a/65800130/257924 :
import sys
import win32com.client

try:
    outlookApp = win32com.client.Dispatch("Outlook.Application")
except:
    print("ERROR: Unable to load Outlook")
    sys.exit(1)

outlook = outlookApp.GetNamespace("MAPI")
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))
sys.exit(0)
Running that under an Anaconda-based installer (Anaconda3 2022.10 (Python 3.9.13 64-bit)) on Windows 10 errors out with:
(base) c:\Temp>python testing.py
Traceback (most recent call last):
File "c:\Temp\testing.py", line 11, in <module>
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
File "C:\Users\brentg\Anaconda3\lib\site-packages\win32com\client\__init__.py", line 231, in __getattr__
raise AttributeError(a)
AttributeError: olFolderContacts
Further debugging indicates that the __dicts__ attribute is referenced by the __init__.py shown in the error message above. See the excerpt of that class below. For some reason, that __dicts__ is an empty list:
class Constants:
    """A container for generated COM constants."""

    def __init__(self):
        self.__dicts__ = []  # A list of dictionaries

    def __getattr__(self, a):
        for d in self.__dicts__:
            if a in d:
                return d[a]
        raise AttributeError(a)

# And create an instance.
constants = Constants()
What is required to have win32com properly initialize that constants object?
The timestamps on the __init__.py file show 10/10/2021, in case that is relevant.
The short answer is to change:
outlookApp = win32com.client.Dispatch("Outlook.Application")
to
outlookApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
The longer answer is that win32com can work with COM interfaces in one of two ways: late- and early-binding.
With late-binding, your code knows nothing about the Dispatch interface, i.e. it doesn't know which methods, properties or constants are available. When you call a method on the Dispatch interface, win32com doesn't know whether that method exists or what parameters it takes: it just sends what it is given and hopes for the best!
With early-binding, your code relies on previously-captured information about the Dispatch interface, taken from its Type Library. This information is used to create local Python wrappers for the interface which know all the methods and their parameters. At the same time it populates the Constants dictionary with any constants/enums contained in the Type Library.
win32com has a catch-all win32com.client.Dispatch() function which will try to use early-binding if the local wrapper files are present, otherwise will fall back to using late-binding. My problem with the package is that the caller doesn't always know what they are getting, as in the OP's case.
The alternative win32com.client.gencache.EnsureDispatch() function enforces early-binding and ensures any constants are available. If the local wrapper files are not available, they will be created (you might find them under %LOCALAPPDATA%\Temp\gen_py\xx\CLSID where xx is the Python version number, and CLSID is the GUID for the Type Library). Once these wrappers have been created, the generic win32com.client.Dispatch() will use these files from then on.
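To see why this fixes the AttributeError, here is a minimal re-creation of the Constants container from the excerpt in the question. Early-binding effectively appends the Type Library's enum values to __dicts__; the value 10 used below is Outlook's OlDefaultFolders value for olFolderContacts, shown here purely for illustration:

```python
# Simplified re-creation of win32com's Constants container, to illustrate
# why __dicts__ must be populated before attribute lookups can succeed.
class Constants:
    def __init__(self):
        self.__dicts__ = []  # a list of dictionaries

    def __getattr__(self, a):
        for d in self.__dicts__:
            if a in d:
                return d[a]
        raise AttributeError(a)

constants = Constants()

# Late-binding leaves __dicts__ empty, so any lookup raises AttributeError:
try:
    constants.olFolderContacts
except AttributeError:
    pass  # this is the OP's error

# Early-binding (gencache) appends the Type Library's enum dictionary;
# 10 is Outlook's OlDefaultFolders value for olFolderContacts:
constants.__dicts__.append({"olFolderContacts": 10})
folder_id = constants.olFolderContacts  # now resolves
```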
In my Azure Function, I have specified an Environment Variable/App Setting for a database connection string. I can use the Environment Variable when I run the Function locally on my Azure Data Science Virtual Machine using VS Code and Python.
However, when I deploy the Function to Azure, I get a KeyError, meaning that it cannot find the Environment Variable for the connection string. See the error:
Exception while executing function: Functions.matchmodel Result: Failure
Exception: KeyError: 'CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING'
Stack: File "/azure-functions-host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py", line 315, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
File "/azure-functions-host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py", line 434, in __run_sync_func
    return func(**params)
File "/home/site/wwwroot/matchmodel/__init__.py", line 116, in main
File "/home/site/wwwroot/matchmodel/production/dataload.py", line 28, in query_dev_database
    setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
File "/usr/local/lib/python3.7/os.py", line 679, in __getitem__
    raise KeyError(key) from None
I have tried the following solutions:
Added "CONNECTIONSTRINGS" to specify the Environment Variable in the Python script (which made it work locally)
setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
Used logging.info(os.environ) to output my Environment Variables in the console. My connection string is listed.
Added the connection string as Application Setting in the Azure Function portal.
Added the Connection String as Connection Strings in the Azure Function portal.
Does anyone have any other solutions that I can try?
Actually you almost have the right connection string; you are just using the wrong prefix. For more detailed information you can refer to this doc: Configure connection strings.
Which prefix to use depends on which type you choose. In my test I used a custom type, so I should use os.environ['CUSTOMCONNSTR_testconnectionstring'] to get the value.
From the doc you could find there are following types:
SQL Server: SQLCONNSTR_
MySQL: MYSQLCONNSTR_
SQL Database: SQLAZURECONNSTR_
Custom: CUSTOMCONNSTR_
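The mechanism can be simulated locally (the variable name below is just the test example from this answer; in a real deployment Azure sets the prefixed variable for you at runtime):

```python
import os

# Simulate what Azure App Service does at runtime: a Connection String of
# type "Custom" named "testconnectionstring" is exposed with this prefix.
os.environ["CUSTOMCONNSTR_testconnectionstring"] = "Server=example;Database=db;"

# The app must read it back with the prefixed name, not the raw name:
setting = os.environ["CUSTOMCONNSTR_testconnectionstring"]
```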
I was struggling with this and found out this solution:
import os
setting = os.getenv("mysetting")
In Functions, application settings, such as service connection strings, are exposed as environment variables during execution. You can access these settings by declaring import os and then using
setting = os.environ["mysetting"]
As Alex said, try removing CONNECTIONSTRINGS: from the name of the environment variable. In the Azure portal, just add mysetting as the key name in Application settings.
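One practical difference between the two lookups above: os.environ[...] raises KeyError for a missing name (as in the original traceback), while os.getenv returns None or a supplied default:

```python
import os

# os.environ[...] raises KeyError when the name is absent:
try:
    os.environ["mysetting_that_is_missing"]
    looked_up = "found"
except KeyError:
    looked_up = None

# os.getenv returns None (or a default) instead of raising:
also_missing = os.getenv("mysetting_that_is_missing", "fallback")
```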
I figured out the issue with help from George and selected George's answer as the correct answer.
I changed the code to os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"], but I had also been deploying the package from VS Code to Azure via the following Azure Command Line Interface (Azure CLI) command. This command overwrites the Azure App Settings with the contents of local.settings.json:
func azure functionapp publish <MY_FUNCTION_NAME> --publish-local-settings -i --overwrite-settings -y
I think that was overwriting the database type I had specified in the Azure App Settings (SQL Server), so when I had previously tried os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"] it didn't work: my local.settings.json, which did not specify a database type, kept clobbering my Azure settings.
I finally got it to work by deploying using the VS Code Extension called Azure Functions (Azure Functions: Deploy to Function App). This retains the settings that I had created in Azure App Services.
I am trying to learn the MailChimp API in Python 3, but I cannot get it started.
from mailchimp3 import MailChimp
client = MailChimp('MY-USERNAME’,‘MY-API')
(obviously I swapped out my username and api key for this example)
Traceback (most recent call last):
File "/Users/jb/Documents/test2.py", line 3, in <module>
client = MailChimp('MY-USERNAME’,‘MY-API')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mailchimp3/__init__.py", line 96, in __init__
super(MailChimp, self).__init__(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'mc_secret'
I'm very new to Python and APIs in general, and typically I can find someone else who has had the same error, but all my searches come up blank. I looked in the MailChimp module and I can see that it is supposed to take my API key as the mc_secret argument, so I'm not sure why I keep getting this error. I did just create my MailChimp account today, so perhaps MailChimp takes a while to activate the key or something?
well....I feel sort of stupid.
I just retyped it (instead of copying and pasting it from the documentation) and it worked. I should have noticed that the ',' between the arguments was green in IDLE, indicating something was wrong with the text (encoding or something?). Now it works. Lesson learned: don't just copy and paste from documentation.
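For anyone hitting the same thing: the pasted line contained typographic ("curly") quotes rather than ASCII quotes, which Python does not accept as string delimiters. A quick way to spot them is to scan for non-ASCII characters:

```python
# The line as pasted from rendered docs, with curly quotes (U+2019 and
# U+2018) around the second argument; held in a plain string so it can
# be inspected rather than executed.
pasted = "client = MailChimp('MY-USERNAME’,‘MY-API')"

# Any character outside ASCII is suspect in code copied from a web page:
suspects = [c for c in pasted if ord(c) > 127]
```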
I'm following the instructions from the Google Developers guide in order to create a custom message option. I used their example but received an error:
Traceback (most recent call last):
File "test_my_opt.py", line 2, in <module>
value = my_proto_file_pb2.MyMessage.DESCRIPTOR.GetOptions().Extensions[my_proto_file_pb2.my_option]
File "(...)\google\protobuf\internal\python_message.py", line 1167, in __getitem__
_VerifyExtensionHandle(self._extended_message, extension_handle)
File "(...)\google\protobuf\internal\python_message.py", line 170, in _VerifyExtensionHandle
message.DESCRIPTOR.full_name))
KeyError: 'Extension "my_option" extends message type "google.protobuf.MessageOptions", but this message is of type "google.protobuf.MessageOptions".'
I simply used following code:
import my_proto_file_pb2
value = my_proto_file_pb2.MyMessage.DESCRIPTOR.GetOptions().Extensions[my_proto_file_pb2.my_option]
And this proto file:
import "beans-protobuf/proto/src/descriptor.proto";

extend google.protobuf.MessageOptions {
  optional string my_option = 51234;
}

message MyMessage {
  option (my_option) = "Hello world!";
}
Everything is just like in the guide... so how should I access this option without the error?
import "beans-protobuf/proto/src/descriptor.proto";
I think this is the problem. The correct import statement for descriptor.proto is:
import "google/protobuf/descriptor.proto";
The path string is important because you need to be extending the original definitions of the descriptor types, not some copy of them. google/protobuf/descriptor.proto becomes the module google.protobuf.descriptor_pb2 in Python, and the Protobuf library expects that any custom options are extensions to the types in there. But you are actually extending beans-protobuf/proto/src/descriptor.proto, which becomes beans_protobuf.proto.src.descriptor_pb2 in Python, which is a completely different module! Hence, the protobuf library gets confused and doesn't think these extensions are applicable to protobuf descriptors.
I think if you just change the import statement, everything should work. When protobuf is correctly installed, google/protobuf/descriptor.proto should always work as an import -- there's no need to provide your own copy of the file.
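The path-to-module mapping described above can be sketched as follows (a simplification of what protoc actually does: drop the .proto suffix, turn "/" into "." and "-" into "_", and append "_pb2"):

```python
# Simplified sketch of how protoc derives the generated Python module name
# from a .proto import path.
def proto_to_module(path):
    return path[: -len(".proto")].replace("-", "_").replace("/", ".") + "_pb2"

original = proto_to_module("google/protobuf/descriptor.proto")
copy = proto_to_module("beans-protobuf/proto/src/descriptor.proto")
# original -> "google.protobuf.descriptor_pb2"
# copy     -> "beans_protobuf.proto.src.descriptor_pb2"  (a different module!)
```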
I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has a different connect method for each AWS service? For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument: an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes one argument: an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in its __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest is that, with my keys defined in a .boto file in my home directory, and running python interpreter from the command line, I can make a direct import and call to SupportConnection() (no arguments) and it works. It clearly is picking up my keys from the .boto file and consuming them but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS API's other than S3 (the bulk of material that exists via google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support, boto.ec2, boto.s3, or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.