read-only user in solr not working as expected - python

I would like to create two users in Solr: an admin and a dev.
The dev should not be able to edit the Solr metadata. This user should not be able to use solr.add or solr.delete; they should only be able to use solr.search on our metadata Solr core (in Python, via pysolr).
However, the user can always use solr.add and solr.delete, no matter what permissions I set. Here is my security.json:
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "my_admin": "<creds>",
      "my_dev": "<creds>"},
    "forwardCredentials": false,
    "": {"v": 0}},
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": {
      "my_admin": ["admin"],
      "my_dev": ["dev"]},
    "permissions": [
      {
        "name": "security-edit",
        "role": ["admin"],
        "index": 1},
      {
        "name": "read",
        "role": ["dev"],
        "index": 2}],
    "": {"v": 0}}
}
I also tried zk-read, metrics-read, security-read, and collection-admin-read instead of read, always with the same result: the user my_dev can still use solr.add and solr.delete.

So, thanks to @MatsLindh, I got it. I changed security.json as follows, and now it works as expected!
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "my_admin": "<...>",
      "my_dev": "<...>",
      "my_user": "<...>"},
    "forwardCredentials": false,
    "": {"v": 0}},
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": {
      "my_admin": ["admin"],
      "my_dev": ["dev"],
      "my_user": ["user"]},
    "permissions": [
      {"name": "update", "role": ["admin", "dev"]},
      {"name": "read", "role": ["admin", "dev", "user"]},
      {"name": "security-edit", "role": ["admin"]},
      {"name": "all", "role": ["admin"]}
    ]
  }
}
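For anyone checking the result from Python: a minimal pysolr sketch, assuming a pysolr version that passes the auth tuple through to requests. The URL, core name, and passwords are placeholders. With the permissions above, my_user can search but is rejected on updates, while my_dev can still add and delete:

import pysolr

SOLR_URL = "http://localhost:8983/solr/metadata"  # placeholder core URL

user = pysolr.Solr(SOLR_URL, auth=("my_user", "<creds>"), timeout=10)
dev = pysolr.Solr(SOLR_URL, auth=("my_dev", "<creds>"), timeout=10)

# "read" covers search, so this works for my_user.
print(len(user.search("*:*")))

# "update" is restricted to admin and dev, so this raises pysolr.SolrError.
try:
    user.add([{"id": "doc1"}])
except pysolr.SolrError as exc:
    print("update rejected as expected:", exc)

# my_dev holds the "update" permission and can still add and delete.
dev.add([{"id": "doc1"}])
dev.delete(id="doc1")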

Related

Check if entry exists in Firebase Realtime Database (Python)

I am currently trying to retrieve login information from my Realtime DB in Firebase.
As an example, I want to detect whether the provided username already exists in the DB.
Here is the code snippet I'm attempting to work with:
ref.child("Users").order_by_child("Username").equal_to("Hello").get()
However, it does not work, and instead displays this message:
firebase_admin.exceptions.InvalidArgumentError: Index not defined, add ".indexOn": "Username", for path "/Users", to the rules
Is there something wrong with my code snippet, or should I use a different approach?
As the error suggests, you need to add ".indexOn": "Username" in your security rules as shown below:
{
  "rules": {
    "Users": {
      ".indexOn": "Username",
      // ... rules
      "$uid": {
        // ... rules
      }
    }
  }
}
Check out the documentation for more information.
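Once the index is in place, the original snippet should work. For completeness, a minimal sketch of the existence check with firebase_admin; the service-account file and database URL are placeholders:

import firebase_admin
from firebase_admin import credentials, db

# Placeholders: substitute your own service account file and database URL.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://<project>.firebaseio.com"})

ref = db.reference("/")
snapshot = ref.child("Users").order_by_child("Username").equal_to("Hello").get()

# get() returns an empty mapping when no child matches the query.
print("username exists" if snapshot else "username is free")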

Unique index in mongodb using db.command

I am using MongoDB with PyMongo, and I would like to separate the schema definition from the rest of the application. I have a file user_schema.json with the schema for the user collection:
{
  "collMod": "user",
  "validator": {
    "$jsonSchema": {
      "bsonType": "object",
      "required": ["name"],
      "properties": {
        "name": {
          "bsonType": "string"
        }
      }
    }
  }
}
Then in db.py:
import json
from collections import OrderedDict

# db is assumed to be a PyMongo Database handle created elsewhere.
with open("user_schema.json", "r") as coll:
    data = OrderedDict(json.loads(coll.read()))  # Read the JSON schema.

name = data["collMod"]      # Get the name of the collection.
db.create_collection(name)  # Create the collection.
db.command(data)            # Add validation to the collection.
Is there any way to add a unique index to the name field in the user collection without changing db.py (only by changing user_schema.json)? I know I can use this:
db.user.create_index("name", unique=True)
however, then I have information about the collection in two places. I would like to have all of the collection's configuration in the user_schema.json file. I need something like this:
{
  "collMod": "user",
  "validator": {
    "$jsonSchema": {
      ...
    }
  },
  "index": {
    "name": {
      "unique": true
    }
  }
}
No, you won't be able to do that without changing db.py.
The contents of user_schema.json are an object that can be passed to db.command to run the collMod command.
In order to create an index, you need to run the createIndexes command, or one of the helpers that calls it.
It is not possible to complete both of these actions with a single command.
A simple modification would be to store an array of commands to run in user_schema.json, and have db.py iterate the array and run each command.
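A minimal sketch of that idea; the file layout, database name, and index spec are illustrative, not part of the original answer:

import json
from pymongo import MongoClient

# user_schema.json now holds an ordered list of commands, for example:
# [
#   {"create": "user"},
#   {"collMod": "user", "validator": {"$jsonSchema": {"bsonType": "object"}}},
#   {"createIndexes": "user",
#    "indexes": [{"key": {"name": 1}, "name": "name_unique", "unique": true}]}
# ]

db = MongoClient()["mydb"]  # connection details are illustrative

with open("user_schema.json") as f:
    commands = json.load(f)

for command in commands:
    db.command(command)  # Each entry is one complete database command.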

How to add permission can_get_all_acc_detail to an account of existing blockchain network on hyperledger-iroha?

I want to add an account which has some information readable by all users. According to the documentation, the user needs to have the permission can_get_all_acc_detail. So I'm trying to add it by creating a new role:
tx = self.iroha.transaction([
    self.iroha.command('CreateRole', role_name='info',
                       permissions=[primitive_pb2.can_get_all_acc_detail])
])
tx = IrohaCrypto.sign_transaction(tx, account_private_key)
net.send_tx(tx)
Unfortunately, after sending the transaction I see the status:
status_name:ENOUGH_SIGNATURES_COLLECTED, status_code:9, error_code:0(OK)
But then it takes 5 minutes until it times out.
I've noticed that the transaction JSON embeds permissions in a different way than the genesis block does:
payload {
  reduced_payload {
    commands {
      create_role {
        role_name: "info_account"
        permissions: can_get_all_acc_detail
      }
    }
    creator_account_id: "admin@example"
    created_time: 1589408498074
    quorum: 1
  }
}
signatures {
  public_key: "92f9f9e10ce34905636faff41404913802dfce9cd8c00e7879e8a72085309f4f"
  signature: "568b69348aa0e9360ea1293efd895233cb5a211409067776a36e6647b973280d2d0d97a9146144b9894faeca572d240988976f0ed224c858664e76416a138901"
}
By comparison, in genesis.block it is:
{
  "createRole": {
    "roleName": "money_creator",
    "permissions": [
      "can_add_asset_qty",
      "can_create_asset",
      "can_receive",
      "can_transfer"
    ]
  }
},
I'm using Iroha version 1.1.3 (but also tested on 1.1.1); the Python Iroha SDK version is 0.0.5.5.
Does the account you used to execute the CreateRole command have the can_create_role permission?
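If you want to verify that from Python, one option is the GetRolePermissions query. A rough sketch, assuming the iroha package used above; the account ID, peer address, role name, and key variable are placeholders, and the exact keyword argument may differ between SDK versions:

from iroha import Iroha, IrohaCrypto, IrohaGrpc

iroha = Iroha('admin@example')      # placeholder creator account
net = IrohaGrpc('127.0.0.1:50051')  # placeholder peer address

# Ask which permissions a given role grants; the role name is illustrative.
query = iroha.query('GetRolePermissions', role_id='admin_role')
IrohaCrypto.sign_query(query, admin_private_key)  # placeholder key
print(net.send_query(query))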

Translating Elasticsearch request from Kibana into elasticsearch-dsl

Recently I migrated from AWS Elasticsearch Service (which used Elasticsearch 1.5.2) to Elastic Cloud (currently using Elasticsearch 5.1.2). Glad I did it, but with that change comes a newer version of Elasticsearch and newer APIs, and I'm struggling to get my head around the new way of requesting things. Formerly, I could more or less copy/paste from Kibana's "Elasticsearch Request Body", adjust a few things, run elasticsearch.Elasticsearch.search(), and get what I expected.
Here's my Elasticsearch Request Body from Kibana (for brevity, removed some of the extraneous stuff that Kibana usually inserts):
{
  "size": 500,
  "sort": [
    {
      "Time.ISO8601": {
        "order": "desc",
        "unmapped_type": "boolean"
      }
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "Message\\ ID: 2003",
            "analyze_wildcard": true
          }
        },
        {
          "range": {
            "Time.ISO8601": {
              "gte": 1484355455678,
              "lte": 1484359055678,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {}
}
Now I want to use elasticsearch-dsl to do it, since that seems to be the recommended method (instead of using elasticsearch-py). How would I translate the above into elasticsearch-dsl?
Here's what I have so far:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q

client = Elasticsearch(
    hosts=['HASH.REGION.aws.found.io/elasticsearch'],
    use_ssl=True,
    port=443,
    http_auth=('USER', 'PASS')
)

s = Search(using=client, index="emp*")
s = s.query("query_string", query="Message\ ID:2003", analyze_wildcards=True)
s = s.query("range", **{"Time.ISO8601": {"gte": 1484355455678, "lte": 1484359055678, "format": "epoch_millis"}})
s = s.sort("Time.ISO8601")
response = s.execute()

for hit in response:
    print('%s %s' % (hit['Time']['ISO8601'], hit['Message ID']))
My code as written above is not giving me what I expect: I'm getting results that include things that don't match "Message\ ID:2003", and it's also giving me hits outside the requested Time.ISO8601 range.
I'm totally new to elasticsearch-dsl and ES 5.1.2's way of doing things, so I know I've got lots to learn. What am I doing wrong? Thanks in advance for the help!
I don't have Elasticsearch running right now, but the query looks like what you wanted (you can always see the query produced by looking at s.to_dict()), with the exception of escaping the \ sign. In the original query it was escaped, yet in Python the result might differ due to different escaping rules.
I would strongly advise not having spaces in your field names, and also using a more structured query than query_string:
s = Search(using=client, index="emp*")
s = s.filter("term", message_id=2003)
s = s.query("range", Time__ISO8601={"gte": 1484355455678, "lte": 1484359055678, "format": "epoch_millis"})
s = s.sort("Time.ISO8601")
Note that I also changed query() to filter() for a slight speedup, and used __ instead of . in the field-name keyword argument; elasticsearch-dsl automatically expands the __ to a dot.
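As mentioned above, s.to_dict() shows the exact request body without executing it, which is handy for comparing against the Kibana version. A quick sketch using the field names from this question:

import json
from elasticsearch_dsl import Search

s = Search(index="emp*")
s = s.filter("term", message_id=2003)
s = s.query("range", Time__ISO8601={"gte": 1484355455678,
                                    "lte": 1484359055678,
                                    "format": "epoch_millis"})
s = s.sort("Time.ISO8601")

# Print the generated query body for a side-by-side comparison with Kibana.
print(json.dumps(s.to_dict(), indent=2))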
Hope this helps...

JIRA attachment with REST API

I want to upload an attachment while creating a JIRA ticket. I tried the format below, but it's not working; I get a syntax error for the file entry. Do you have any idea how to express this in a JSON file?
data = {
    "fields": {
        "project": {
            "key": project
        },
        "attachments": [
            "file": "C:\data.txt"
        ],
        "summary": task_summary,
        "issuetype": {
            "name": "Task"
        }
    }
}
You can only set issue fields during creation: https://docs.atlassian.com/jira/REST/latest/#d2e2786
You can add attachments to already existing issues by posting the content itself: https://docs.atlassian.com/jira/REST/latest/#d2e2035
So you have to do it in two steps.
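A minimal sketch of the two-step flow with the requests library; the base URL, credentials, and field values are placeholders:

import requests

JIRA = "https://jira.example.com"  # placeholder base URL
auth = ("user", "password")        # placeholder credentials

# Step 1: create the issue with fields only; attachments are not fields.
issue = requests.post(
    JIRA + "/rest/api/2/issue",
    json={"fields": {"project": {"key": "DEV"},
                     "summary": "task summary",
                     "issuetype": {"name": "Task"}}},
    auth=auth,
).json()

# Step 2: POST the file to the new issue's attachments endpoint.
with open("C:\\data.txt", "rb") as f:
    requests.post(
        JIRA + "/rest/api/2/issue/" + issue["key"] + "/attachments",
        headers={"X-Atlassian-Token": "no-check"},  # required by JIRA
        files={"file": f},
        auth=auth,
    )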
It works now; I was not using rest/api/2/issue/DEV-290/attachments.
