I can see in the Couchbase admin console that the following Python code is putting documents into the bucket:
def write_to_bucket(received_cluster, current_board, always_never, position):
    board_as_a_string = ''.join(['s', 'i', 'e'])
    cb = received_cluster.open_bucket('boardwascreated')
    cb.upsert(board_as_a_string,
              {'BoardAsString': board_as_a_string})
But then, no matter what I do, I can't query the data from Python. I try things like:
def database_query(receiving_cluster):
    cb = receiving_cluster.open_bucket('boardwascreated')
    for row in cb.n1ql_query('SELECT * FROM boardwascreated'):
        print(row)
I am trying every possible thing from https://docs.couchbase.com/python-sdk/2.5/n1ql-queries-with-sdk.html, but every attempt results in the following error:
No index available on keyspace boardwascreated that matches your query.
Use CREATE INDEX or CREATE PRIMARY INDEX to create an index,
or check that your expected index is online.
To run N1QL queries on a bucket, you need an index on that bucket. The simplest way to get one is to create a primary index.
Open your admin console and go to the Query Workbench: you should see a "Query" tab on the left. Then run this statement to create the primary index:
CREATE PRIMARY INDEX ON boardwascreated
You will also need to supply username/password credentials in order to access the bucket. Initially, you can use whatever Administrator/admin_password combination you have created. I'm not sure off-hand how to supply that using the Python SDK; check the docs.
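That said, a minimal sketch of how the 2.x Python SDK typically takes credentials (the cluster address and the Administrator account below are assumptions, not values from your setup):

from couchbase.cluster import Cluster, PasswordAuthenticator

# Authenticate once at the cluster level before opening any bucket
cluster = Cluster('couchbase://localhost')  # assumed address
cluster.authenticate(PasswordAuthenticator('Administrator', 'admin_password'))

# Buckets opened from the authenticated cluster can then run N1QL queries
cb = cluster.open_bucket('boardwascreated')
for row in cb.n1ql_query('SELECT * FROM boardwascreated'):
    print(row)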
Later, you should go to the Security tab and create a specialized user for whatever application you are building and give that user whatever query permissions they need.
Related
I log in to Active Directory, and then I want to list my own group memberships with the Python ldap3 library.
from ldap3 import Server, Connection, ALL, NTLM

server = Server('server.company.local', get_info=ALL)
conn = Connection(server, user="company\\user", password="password", authentication=NTLM, auto_bind=True)
print(conn.extend.standard.who_am_i())
This code only shows the user name (like the whoami cmd command), but I want to list my groups (like the whoami /groups command).
Unfortunately, I don't have the rights to run arbitrary searches on the domain controller, which is (perhaps) why the following code returns an empty result:
conn.search("dc=name,dc=company,dc=local",
            "(&(sAMAccountName={}))".format("company\\myusername"),
            attributes=['memberOf'])
How can I list my own group memberships, like whoami /groups does?
Active Directory generally allows all authenticated users to read a lot of attributes, including memberOf. Check the number of records returned by your search; I expect it is finding zero. sAMAccountName values do not generally contain the "company\" component; they are just "myusername".
The problem was the search base in my search: I replaced "dc=name,dc=company,dc=local" with "dc=company,dc=local" and it works fine.
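Putting both fixes together (domain search base, bare sAMAccountName), a minimal sketch; the server name, domain components, and account name are placeholders for your own values:

from ldap3 import Server, Connection, ALL, NTLM

server = Server('server.company.local', get_info=ALL)
conn = Connection(server, user="company\\myusername", password="password",
                  authentication=NTLM, auto_bind=True)

# Search from the domain root with the bare account name (no "company\" prefix)
conn.search("dc=company,dc=local",
            "(sAMAccountName=myusername)",
            attributes=['memberOf'])

# Each memberOf value is the DN of a group the account belongs to
for entry in conn.entries:
    for group_dn in entry.memberOf:
        print(group_dn)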
I know that, using the Google client library (dataset.AccessEntry), we can grant roles on a specific dataset to a requested user (Reference). But I want to know how to remove that access when a role changes (e.g., from Reader to Writer/Owner).
I want to do this deletion automatically: the role, dataset name, and email come from the UI as input, and the Python code should update the roles on the requested dataset. I appreciate your help.
I was able to delete the entry from dataset.AccessEntry by using the remove() method, which removes the first matching element (passed as an argument) from a list in Python. You need to specify the PROJECT and DATASET_NAME, and the role, entity_type, and entity_id of the entry you wish to remove.
from google.cloud import bigquery

PROJECT = '<PROJECT_NAME>'
bq = bigquery.Client(project=PROJECT)
dsinfo = bq.get_dataset("<DATASET_NAME>")

# Specify the entry that will lose access to the dataset
entry = bigquery.AccessEntry(
    role="<ROLE>",
    entity_type="<ENTITY_TYPE>",
    entity_id="<EMAIL>",
)

if entry in dsinfo.access_entries:
    entries = list(dsinfo.access_entries)
    entries.remove(entry)
    dsinfo.access_entries = entries
    dsinfo = bq.update_dataset(dsinfo, ["access_entries"])
else:
    print("Entry wasn't found in dsinfo.access_entries")

print(dsinfo.access_entries)
You can find the official documentation for google.cloud.bigquery.dataset.AccessEntry here.
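Since the question is about changing a role rather than only revoking it, here is a short follow-up sketch: after removing the old entry, append an entry with the new role and update the dataset again (the role and email below are hypothetical values):

# Grant the new role once the old entry has been removed
new_entry = bigquery.AccessEntry(
    role="WRITER",
    entity_type="userByEmail",
    entity_id="user@example.com",
)
entries = list(dsinfo.access_entries)
entries.append(new_entry)
dsinfo.access_entries = entries
dsinfo = bq.update_dataset(dsinfo, ["access_entries"])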
I'm definitely not an expert with SQL or sqlalchemy; I've used them quite a bit, but only for basic CRUD work. Now I'm using sqlalchemy to reset the index sequence on a number of tables:
table_sequences = {
    "pump_calendar_event": 'pump_calendar_event_index_seq',
    "pump_calendar_rule": 'pump_calendar_rule_index_seq',
}

for table, seq_name in table_sequences.items():
    max_index_query = f'SELECT MAX(index)+1 FROM public."{table}";'
    max_index_results = connection.execute(max_index_query).fetchall()[0][0]
    if max_index_results is not None:
        update_sequence_query = f'ALTER SEQUENCE "{seq_name}" RESTART WITH {max_index_results};'
        connection.execute(update_sequence_query)
        print(update_sequence_query)
The max-index query executes correctly and returns the highest index + 1, but connection.execute(update_sequence_query) does not take effect. If it's failing, it's failing silently, because I don't get an error. I know the generated query is correct: when I copy and paste it into the query tool in pgAdmin, it resets the sequence. In short, it works when I run it manually but not when I run it with connection.execute(update_sequence_query).
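One likely cause of this symptom (depending on your SQLAlchemy version and how connection was obtained) is that the ALTER SEQUENCE runs in an implicit transaction that is rolled back instead of committed. A minimal sketch under that assumption, wrapping the loop in an explicit transaction:

from sqlalchemy import text

# An explicit transaction block commits on successful exit,
# so the ALTER SEQUENCE cannot be lost to an implicit rollback
with connection.begin():
    for table, seq_name in table_sequences.items():
        max_index = connection.execute(
            text(f'SELECT MAX(index)+1 FROM public."{table}"')
        ).scalar()
        if max_index is not None:
            connection.execute(
                text(f'ALTER SEQUENCE "{seq_name}" RESTART WITH {max_index}')
            )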
How can I access the last DB index?
For example, in my DB I create records with automatically generated names, like news-post1, news-post2, etc. To build the name for a new record, I need to access the latest DB index.
In my case, I need to generate the names of images as above. I already know how to access the file extension, but not the DB index:
from os import path

def generate_image_name(obj, file_data):
    img_extension = path.splitext(file_data)[1]
    img_name = "news-img" + <?db.Index?> + img_extension
I found exactly what I was searching for. For example, at the moment I have 2 news records:
>>> News.query.all()[-1]
<News 2>
>>> News.query.all()[-1].id
2
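Note that News.query.all()[-1] loads every record just to read the last id. A more direct sketch, assuming Flask-SQLAlchemy and an integer id primary key on the News model (img_extension as computed in generate_image_name above):

# Fetch only the newest record instead of the whole table
last = News.query.order_by(News.id.desc()).first()
next_index = (last.id + 1) if last else 1
img_name = "news-img" + str(next_index) + img_extension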
I am trying to write the new password to the Cassandra DB after a reset. This is the query I have written, where both the username and password fields are dynamic. Is this right?
def update_db(uname, pwd):
    query = session.prepare('update user.userdetails set "password"=%s where "username" = ? ALLOW FILTERING', pwd)
    session.execute(query, (uname,))

update_db(username, new_pwd)
I am calling this through an API, but it doesn't seem to update.
Alex is absolutely correct in that you need to provide the complete PRIMARY KEY for any write operation. Remove ALLOW FILTERING and your query should work as long as your primary key definition is: PRIMARY KEY (username).
Additionally, it's best practice to parameterize your entire prepared statement, instead of relying on string formatting for password.
query = session.prepare('update user.userdetails set "password"=? where "username"=?')
session.execute(query, [pwd, uname])
Note: If at any point you find yourself needing the ALLOW FILTERING directive, you're doing it wrong.
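For completeness, a minimal end-to-end sketch of the fixed version, assuming a locally reachable node and that username is the complete primary key of user.userdetails:

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])  # assumed contact point
session = cluster.connect()

# Prepare once; bind both values at execution time
query = session.prepare('UPDATE user.userdetails SET "password"=? WHERE "username"=?')

def update_db(uname, pwd):
    session.execute(query, [pwd, uname])

update_db('some_user', 'new_password')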
For updating a record you need to provide the primary key(s) completely. It will not work with ALLOW FILTERING: you first need to fetch all the primary keys that you want to update, and then issue individual update commands. See the documentation for a more detailed description of the UPDATE command.
If you really want to specify a default value for some column, why not simply handle it with something like .get('column', 'default-value')?
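A quick sketch of that idea, assuming the incoming values arrive as a plain dict (the payload name is hypothetical):

payload = {'username': 'some_user'}  # no password supplied
pwd = payload.get('password', 'default-value')  # falls back to the default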