MISP Threat Indicators to Azure Sentinel - Python script issue

I am trying to push threat indicators from my MISP instance to Azure Sentinel using Microsoft's sample on GitHub: https://github.com/microsoftgraph/security-api-solutions/tree/master/Samples/MISP
I performed the steps as per the documentation, but running python3 script.py gives me the following error:
Traceback (most recent call last):
File "script.py", line 100, in <module>
main()
File "script.py", line 96, in main
request_manager.handle_indicator(request_body)
File "/var/azure/sentinel/security-api-solutions/Samples/MISP/RequestManager.py", line 197, in handle_indicator
self._post_to_graph()
File "/var/azure/sentinel/security-api-solutions/Samples/MISP/RequestManager.py", line 184, in _post_to_graph
self._log_post(response)
File "/var/azure/sentinel/security-api-solutions/Samples/MISP/RequestManager.py", line 98, in _log_post
if len(response['value']) > 0:
KeyError: 'value'
The failure is inside a built-in method of RequestManager.py that posts the indicators to the Graph API.
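The KeyError means the Graph API response contained no 'value' field, which usually indicates an error payload (for example an authentication or throttling error) rather than a result list. A minimal defensive sketch of the logging step, assuming response is the parsed JSON dict (this is not the repository's code):

def log_post_response(response):
    # Successful bulk submissions return a JSON body with a 'value' list;
    # error payloads carry an 'error' object instead.
    if 'value' in response:
        for indicator in response['value']:
            print(indicator)
    else:
        print('Graph API returned an error:', response.get('error', response))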

I don't know the answer to your Python question, but have you tried using the Threat Intelligence Platform connector directly against your app? It is in public preview right now.

Related

"Private key is missing or invalid. It should be service " when making BigQuery call with credential

I'm following this tutorial on data prediction using Cloud ML Engine with scikit-learn for GCP AI Platform. I tried to make an API call to BigQuery with:
def query_to_dataframe(query):
    import pandas as pd
    import pkgutil
    privatekey = pkgutil.get_data('trainer', 'privatekey.json')
    print(privatekey[:200])
    return pd.read_gbq(query,
                       project_id=PROJECT,
                       dialect='standard',
                       private_key=privatekey)
but got the following error:
Traceback (most recent call last):
[...]
TypeError: a bytes-like object is required, not 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.local/lib/python3.7/site-packages/trainer/task.py", line 66, in <module>
arguments['numTrees']
File "/root/.local/lib/python3.7/site-packages/trainer/model.py", line 119, in train_and_evaluate
train_df, eval_df = create_dataframes(frac)
File "/root/.local/lib/python3.7/site-packages/trainer/model.py", line 95, in create_dataframes
train_df = query_to_dataframe(train_query)
File "/root/.local/lib/python3.7/site-packages/trainer/model.py", line 82, in query_to_dataframe
private_key=privatekey)
File "/usr/local/lib/python3.7/dist-packages/pandas/io/gbq.py", line 149, in read_gbq
credentials=credentials, verbose=verbose, private_key=private_key)
File "/root/.local/lib/python3.7/site-packages/pandas_gbq/gbq.py", line 846, in read_gbq
dialect=dialect, auth_local_webserver=auth_local_webserver)
File "/root/.local/lib/python3.7/site-packages/pandas_gbq/gbq.py", line 184, in __init__
self.credentials = self.get_credentials()
File "/root/.local/lib/python3.7/site-packages/pandas_gbq/gbq.py", line 193, in get_credentials
return self.get_service_account_credentials()
File "/root/.local/lib/python3.7/site-packages/pandas_gbq/gbq.py", line 413, in get_service_account_credentials
"Private key is missing or invalid. It should be service "
pandas_gbq.gbq.InvalidPrivateKeyFormat: Private key is missing or invalid. It should be service account private key JSON (file path or string contents) with at least two keys: 'client_email' and 'private_key'. Can be obtained from: https://console.developers.google.com/permissions/serviceaccounts
When the package runs in a local environment, the private key loads fine, but when submitted as an ml-engine training job, the error occurs. Note that the private key fails to load only when I use GCP RUNTIME_VERSION="1.15" and PYTHON_VERSION="3.7"; it loads with no problem when I use PYTHON_VERSION="2.7".
In case it's useful, the structure of my package is:
/babyweight
    - setup.py
    - trainer
        - __init__.py
        - model.py
        - privatekey.json
        - task.py
I'm not sure if the problem is due to a bug in Python, or where I placed privatekey.json.
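One hedged observation: pkgutil.get_data() returns bytes on Python 3, while on Python 2 str and bytes are the same type, which matches the version-dependent behaviour described above. Decoding before passing the key is a sketch worth trying, not a confirmed fix:

import pkgutil

# get_data() returns bytes under Python 3; the failing code path above
# appears to expect a str.
privatekey = pkgutil.get_data('trainer', 'privatekey.json').decode('utf-8')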
I was able to solve the problem after I changed read_gbq's argument for the BigQuery access key from private_key to credentials, as recommended by @rmesteves and as shown here. I then set the value to the absolute path of privatekey.json, as shown here. Now the job runs without error.
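A minimal sketch of the resulting call, assuming pandas-gbq accepts a google-auth credentials object; the project ID and key path are placeholders:

import pandas as pd
from google.oauth2 import service_account

PROJECT = 'my-gcp-project'  # placeholder

def query_to_dataframe(query):
    # Load the service-account key by absolute path instead of via pkgutil.
    credentials = service_account.Credentials.from_service_account_file(
        '/path/to/trainer/privatekey.json')  # placeholder path
    return pd.read_gbq(query,
                       project_id=PROJECT,
                       dialect='standard',
                       credentials=credentials)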
Note: I only encountered this problem with Python 3+, not with Python 2.7. I'm not sure why; it could be due to the implementation of read_gbq.

Can't Cloud Dataflow use the "google.cloud.datastore" package?

I want to write to Datastore with a transaction from Cloud Dataflow, so I wrote the following:
def exe_dataflow():
    ....
    from google.cloud import datastore

    # called from the pipeline
    def ds_test(content):
        datastore_client = datastore.Client()
        kind = 'test_out'
        name = 'change'
        task_key = datastore_client.key(kind, name)
        for _ in range(3):
            with datastore_client.transaction():
                current_value = datastore_client.get(task_key)
                current_value['v'] += content['v']
                datastore_client.put(current_value)

    # pipeline
    ....
    | 'datastore test' >> beam.Map(ds_test)
But an error occurred, and the log showed the following message:
(7b75e0ef2db229da): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 167, in execute
op.start()
...(SNIP)...
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 767, in _import_module
return getattr(__import__(module, None, None, [obj]), obj)
AttributeError: 'module' object has no attribute 'datastore'
Can't Cloud Dataflow use the "google.cloud.datastore" package?
Added 2018/2/28:
I added --requirements_file to MyOptions:
options = MyOptions(flags=["--requirements_file", "./requirements.txt"])
and created requirements.txt containing:
google-cloud-datastore==1.5.0
But another error occurred:
(366397598dcf7f02): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 167, in execute
op.start()
...(SNIP)...
File "my_dataflow.py", line 66, in to_entity
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore/__init__.py", line 60, in <module>
from google.cloud.datastore.batch import Batch
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore/batch.py", line 24, in <module>
from google.cloud.datastore import helpers
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore/helpers.py", line 29, in <module>
from google.cloud.datastore_v1.proto import datastore_pb2
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore_v1/__init__.py", line 17, in <module>
from google.cloud.datastore_v1 import types
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore_v1/types.py", line 21, in <module>
from google.cloud.datastore_v1.proto import datastore_pb2
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore_v1/proto/datastore_pb2.py", line 17, in <module>
from google.cloud.datastore_v1.proto import entity_pb2 as google_dot_cloud_dot_datastore__v1_dot_proto_dot_entity__pb2
File "/usr/local/lib/python2.7/dist-packages/google/cloud/datastore_v1/proto/entity_pb2.py", line 28, in <module>
dependencies=[google_dot_api_dot_annotations__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,google_dot_type_dot_latlng__pb2.DESCRIPTOR,])
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/descriptor.py", line 824, in __new__
return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "google/cloud/datastore_v1/proto/entity.proto":
google.datastore.v1.PartitionId.project_id: "google.datastore.v1.PartitionId.project_id" is already defined in file "google/cloud/proto/datastore/v1/entity.proto".
...(SNIP)...
google.datastore.v1.Entity.properties: "google.datastore.v1.Entity.PropertiesEntry" seems to be defined in "google/cloud/proto/datastore/v1/entity.proto", which is not imported by "google/cloud/datastore_v1/proto/entity.proto". To use it here, please add the necessary import.
The recommended way to interact with Cloud Datastore from a Cloud Dataflow Pipeline is to use the Datastore I/O API, which is available through the Dataflow SDK and provides some methods to read and write data to a Cloud Datastore database.
You can find detailed documentation for the Datastore I/O package for Dataflow SDK 2.x for Python in this other link. The datastore.v1.datastoreio module is the specific module that you want to use. There is plenty of information in the links I am sharing, but in short, it is a connector to Datastore that uses PTransform to read / write / delete a PCollection from Datastore using the classes ReadFromDatastore() / WriteToDatastore() / DeleteFromDatastore() respectively.
You should try using it instead of implementing the calls yourself (see the sketch below); I suspect this may be the reason for the error you are seeing, since a Datastore implementation already exists in the Dataflow SDK:
"google.datastore.v1.PartitionId.project_id" is already defined in file "google/cloud/proto/datastore/v1/entity.proto".
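A minimal sketch of that approach, assuming the Dataflow SDK 2.x for Python module layout; the project ID, kind, and update transform are placeholders, and the entities flowing through are protobufs:

import apache_beam as beam
from apache_beam.io.gcp.datastore.v1.datastoreio import ReadFromDatastore, WriteToDatastore
from google.cloud.proto.datastore.v1 import query_pb2

PROJECT = 'my-project'  # placeholder

def update_entity(entity):
    # Modify the entity protobuf here; returned entities are written back.
    return entity

query = query_pb2.Query()
query.kind.add().name = 'test_out'

with beam.Pipeline() as p:
    (p
     | 'read' >> ReadFromDatastore(PROJECT, query)
     | 'update' >> beam.Map(update_entity)
     | 'write' >> WriteToDatastore(PROJECT))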
UPDATE:
It looks like those three classes collect several mutations and execute them in a single transaction. You can check that in the code describing the classes.
If the aim is to retrieve (get()) and then update (put()) a Datastore entity, you can probably work with the write_mutations() function, which is described in the documentation, and perform the operations you are interested in over a full batch of mutations.

jirashell throws a "KeyError: u'consumer_key'" and does not start

I am new to jira-python and have run into an issue using jirashell.
I installed jira-python following the documentation here: http://jira.readthedocs.org/en/latest/
When I try to start jirashell using:
ubuntu@ip-10-0-0-12:~$ jirashell -s https://jira.atlassian.com
I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/jirashell", line 11, in <module> sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/jira/jirashell.py", line 248, in main
jira = JIRA(options=options, basic_auth=basic_auth, oauth=oauth)
File "/usr/local/lib/python2.7/dist-packages/jira/client.py", line 202, in __init__self._create_oauth_session(oauth)
File "/usr/local/lib/python2.7/dist-packages/jira/client.py", line 1871, in _create_oauth_session
oauth['consumer_key'],
KeyError: u'consumer_key'
I have also tried to connect to a server using basic auth, but that returns the same error. Using curl works fine, but I wanted to see the objects that get returned and to get help as I develop my python-jira integration.
Thank you for any insight.
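For reference, client.py indexes oauth['consumer_key'] directly, so a partially populated oauth dict raises exactly this KeyError. A sketch of a fully populated dict per the jira-python documentation; all values are placeholders:

from jira import JIRA

oauth = {
    'access_token': '...',          # obtained from the OAuth dance
    'access_token_secret': '...',
    'consumer_key': '...',          # must match the JIRA application link
    'key_cert': '...',              # contents of the RSA private key file
}
jira = JIRA(options={'server': 'https://jira.atlassian.com'}, oauth=oauth)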

HTTP header error using the Python SDK for Azure

I am starting with the Microsoft Azure SDK for Python (https://github.com/Azure/azure-sdk-for-python), but I am having problems.
I am using Scientific Linux, and I installed the SDK for Python 3.4 with the following step (inside the SDK directory):
python setup.py install
After that I created a simple script just to test the connection:
from azure.storage import BlobService
blob_service = BlobService(account_name='thename', account_key='Mxxxxxxx3w==')
blob_service.create_container('testcontainer')
for i in blob_service.list_containers():
    print(i.name)
following this documentation:
http://blogs.msdn.com/b/tconte/archive/2013/04/17/how-to-interact-with-windows-azure-blob-storage-from-linux-using-python.aspx
http://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/#large-blobs
but it is not working; I always receive the same error:
python3 test.py
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 143, in _perform_request
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 132, in _perform_request_worker
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/http/httpclient.py", line 247, in perform_request
azure.http.HTTPError: The value for one of the HTTP headers is not in the correct format.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 21, in <module>
blob_service.create_container('testcontainer')
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/blobservice.py", line 192, in create_container
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/__init__.py", line 905, in _dont_fail_on_exist
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/blobservice.py", line 189, in create_container
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 150, in _perform_request
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/__init__.py", line 889, in _storage_error_handler
File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/__init__.py", line 929, in _general_error_handler
azure.WindowsAzureError: Unknown error (The value for one of the HTTP headers is not in the correct format.)
<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidHeaderValue</Code><Message>The value for one of the HTTP headers is not in the correct format.
RequestId:b37c5584-0001-002b-24b8-c2c245000000
Time:2014-11-19T14:54:38.9378626Z</Message><HeaderName>x-ms-version</HeaderName><HeaderValue>2012-02-12</HeaderValue></Error>
Thanks in advance and best regards.
I have this exact same issue. I believe it's a library bug, but the authors haven't had their say yet.
It looks like the response states the version, but it is actually telling you which header is wrong. Its value should be "2014-02-14"; you can apply the fix shown in https://github.com/Azure/azure-sdk-for-python/pull/289 .
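Until an upgrade is possible, a heavily hedged workaround sketch, assuming the 0.9.x layout where azure/storage/__init__.py defines an X_MS_VERSION constant that is read when each request's headers are built:

import azure.storage

# Override the API version the SDK stamps into the x-ms-version header;
# '2014-02-14' is the value used by the fix in PR 289.
azure.storage.X_MS_VERSION = '2014-02-14'

from azure.storage import BlobService
blob_service = BlobService(account_name='thename', account_key='...')
blob_service.create_container('testcontainer')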
Hopefully this will be fixed and nobody will ever read this answer. Cheers!

Problems using PyQt's Resource System

I am trying to use PyQt's Resource System, but it appears I have no clue what I am doing! I already have the application created, along with its GUI; I am just trying to import some images to use with the program.
I used Qt Designer to create the resource file and compiled it using pyrcc4.exe. But when I attempt to import the resource file I get this error:
Traceback (most recent call last):
File "C:\Projects\main.py", line 14, in <module>
import main_rc
File "C:\Projects\main_rc.py", line 482, in <module>
qInitResources()
File "C:\Projects\main_rc.py", line 477, in qInitResources
QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)
TypeError: argument 2 of qRegisterResourceData() has an invalid type
What am I doing wrong?
pyrcc generates Python 2.x code by default.
Try regenerating your resource file using pyrcc with the -py3 flag.
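For example (the .qrc file name here is hypothetical; the output name matches the main_rc module imported in the traceback):
pyrcc4 -py3 main.qrc -o main_rc.py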
