I want to execute raw mongo shell commands like db.getUsers() through the pymongo SDK. For example, I have a JS file that contains only db.getUsers(). My Python program needs to establish a connection and then execute whatever mongo command is in that JS file. I tried db.command and db.runCommand but couldn't get it to work. Please assist.
db.getUsers() is a shell helper for the usersInfo command (see https://docs.mongodb.com/manual/reference/method/db.getUsers/); it wraps the usersInfo: 1 command.
You can run the usersInfo command using db.command() in pymongo with something like
from pymongo import MongoClient

db = MongoClient()['admin']  # run the command against the admin database
command_result = db.command({'usersInfo': 1})
print(command_result)
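If you only need the usernames, the usersInfo result is a document containing a users array that you can iterate over:

# the command result is a dict with a 'users' list, one entry per user
for user in command_result['users']:
    print(user['user'])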
I use hdbcli in Python to connect to a HANA DB.
SQL command execution works via a cursor on my connection:
import os
from hdbcli import dbapi

conn = dbapi.connect(
    address=os.environ['HOST'],
    port=dbconnectport,
    user=dbusername,
    password=dbpasswd,
    databasename=dbconnectdbname
)
...
cursor = conn.cursor()
and execution looks like:
cursor.execute("Select USER_NAME from \"SYS\".\"USERS\" WHERE USER_NAME=\'%s\'" % varcrdbust)
For single queries this works fine. But how can I execute an SQL script containing a lot of special characters?
Via the bash shell I can do this, for example, in this way:
Create file on os level:
tee >> $PRIVFILE << EOF
WITH
/*
[NAME]
- HANA_Security_CopyPrivilegesAndRoles_CommandGenerator_2.00.000+
[DESCRIPTION]
- Generates SQL commands that can be used to grant roles and privileges assigned to one user to another user or role
SQL script text here.........
And then execute this via hdbsql with the argument -I <pathname/filename>.
Is there any alternative in Python, ideally without creating a file at the OS level?
Thanks, David
All SAP HANA clients allow the execution of only a single command at a time.
The monitoring script that you chose as an example is in fact just one single command: a relatively large SELECT statement.
So, for every command you want to have executed, you will need to send a separate .execute().
If you want to process a larger "script" file with several commands, you will need to look for a "command separator" character (like ; in HANA Studio or hdbsql) and build the individual commands from the strings between those separators, as in the sketch below.
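A minimal sketch of that approach with hdbcli, reusing the conn from the question (my_script.sql is a placeholder filename, and the naive split breaks if a string literal itself contains a semicolon):

def execute_script(conn, script_text):
    """Naively split an SQL script on ';' and run each statement separately."""
    cursor = conn.cursor()
    for statement in script_text.split(';'):
        statement = statement.strip()
        if statement:  # skip empty fragments between separators
            cursor.execute(statement)
    cursor.close()

with open('my_script.sql') as f:
    execute_script(conn, f.read())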
I am trying to create new nodes using the CloudClient in the SaltStack Python API. Nodes are created successfully, but I don't see any logging happening. Below is the code I am using.
from salt.cloud import CloudClient
cloud_client = CloudClient()
kwargs = {'parallel': True}
cloud_client.map_run(path="mymap.map", **kwargs)
Is there a way to run the same code in debug mode to see the output on the console from this Python script, if logging cannot be done?
Logging parameters in my cloud config:
log_level: all
log_level_logfile: all
log_file: /var/logs/salt.log
When I run it with salt-cloud on the CLI it works, using the command below:
salt-cloud -m mymap.map -P
I was able to make it work by adding the code below:
from salt.log.setup import setup_console_logger
setup_console_logger(log_level='debug')
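For reference, the complete script with console logging enabled before the map run would look like this:

from salt.cloud import CloudClient
from salt.log.setup import setup_console_logger

# turn on debug output to the console before kicking off the map run
setup_console_logger(log_level='debug')

cloud_client = CloudClient()
cloud_client.map_run(path="mymap.map", parallel=True)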
So I have a Python script that connects to the client servers and then gets some data that I need.
It works this way: the bash script on the client side needs input like the one below, and called like this it works:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and pass it on to the remotely called bash script, and that's where my problem is. Below is the method I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko, which you should tag or mention in your question.
The basic problem I think you're having is that you need to include those arguments inside the command string, i.e.
client.exec_command('/apps./tempo.sh %s' % ' '.join(sys.argv[1:]))
otherwise they get applied to the other arguments of exec_command. (Note that str(sys.argv) would insert the list's repr, not space-separated arguments.) I also think your original example is not quite accurate in how it works.
Just out of interest, have you looked at Fabric (http://www.fabfile.org)? It has lots of very handy functions like run, which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the SSH login etc., so it can be much more secure than trying to build web services or other things yourself.
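For example, a minimal sketch using Fabric 1.x's run helper (the host and task name here are placeholders):

from fabric.api import env, run

env.hosts = ['user@yourserver']  # placeholder host

def tempo():
    # run the remote script and return its output
    return run('/apps./tempo.sh 2016 10 01 02 03')

You would then invoke this with fab tempo from the directory containing the fabfile.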
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that could cause very bad problems for you. It would be better to have 'options' on the command which are programmed in, limiting the user's input drastically, or at least a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
I recommend using paramiko for the ssh connection.
import paramiko

ssh_client = paramiko.SSHClient()
# automatically accept unknown host keys (convenient, but less secure)
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:

import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Send ssh command(s), execute them, and return the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()
    return buff[:-len(prompt)]  # strip the trailing prompt from the output
Example usage: exec_command('pwd')
And the result would even be returned to you via ssh
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
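Note that exec_command returns the remote command's stdin, stdout and stderr streams, so you can also read the script's output back, e.g.:

stdin, stdout, stderr = client.exec_command('{} {}'.format(command, args))
print(stdout.read().decode())  # output of the remote script
print(stderr.read().decode())  # any error messages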
I'm trying to enable data compression in MongoDB 3.0 using the wiredTiger engine. The compression works fine at the server level where I can provide a global compression algorithm for all the collections in the mongo server config file like this:
storage:
engine: wiredTiger
wiredTiger:
collectionConfig:
blockCompressor: zlib
I want to enable this compression at the collection level, which can be done by running the command below in the mongodb shell:
db.createCollection( "test", {storageEngine:{wiredTiger:{configString:'block_compressor=zlib'}}} );
How can I do this using the pymongo driver?
from pymongo import MongoClient
client = MongoClient("localhost:27017")
db = client.mydb
Given it works via the Mongo shell, pass the same parameters via pymongo:
db.create_collection('test',
                     storageEngine={'wiredTiger': {'configString': 'block_compressor=zlib'}})
From the official docs we see that:
create_collection(name, codec_options=None, read_preference=None,
write_concern=None, read_concern=None, **kwargs)
...
**kwargs (optional): additional keyword arguments will be passed as options for the create collection command
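To double-check that the option took effect, you can read the collection's options back after creating it:

# should include the storageEngine settings passed at creation time
print(db['test'].options())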
I would like to access Meteor's MongoDB from a Python client, while Meteor is running.
I can't start a mongod because Meteor's database is locked.
How do I access the database from another client?
The meteor command provides a clean way. To get the URL for the running mongod:
meteor mongo -U
which you can parse from Python, as sketched below.
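A minimal sketch of that from Python (this assumes the meteor command is on your PATH and the script is run from the app's project directory while the app is running):

import subprocess
from pymongo import MongoClient

# ask meteor for the URL of the mongod it started,
# e.g. mongodb://127.0.0.1:3002/meteor
mongo_url = subprocess.check_output(['meteor', 'mongo', '-U']).decode().strip()

client = MongoClient(mongo_url)
db = client.get_default_database()  # the database named in the URL ('meteor')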
Meteor starts the mongod for you on port 3002 when you run the meteor command, and stores the mongo data file in .meteor/local/db
Output from ps aux | grep 'mongod' shows the mongod command that meteor uses:
/usr/local/meteor/mongodb/bin/mongod --bind_ip 127.0.0.1 --smallfiles --port 3002 --dbpath /path/to/your/project/.meteor/local/db
So just connect your mongo client accordingly. In python:
>>> import pymongo
>>> con = pymongo.MongoClient(host='127.0.0.1', port=3002)
>>> con.database_names()
[u'meteor', u'local']
UPDATE: unfortunately making changes directly in mongo in this way won't reflect live in the app, but the changes will be reflected on a full page (re)load.
Use the Meteor deployment instructions
The command will look like this:
PORT=3000 MONGO_URL=mongodb://localhost:27017/myapp node bundle/main.js
You can also find it from within server side code using:
process.env.MONGO_URL
Even if you don't set this environment variable when running, it gets set to the default. This seems to be how it is found internally (packages/mongo/remote_collection_driver.js)
The one given by meteor mongo -U seems to reconstruct the default domain/IP and DB name, but uses the port stored in the file.
You can put this anywhere in the server folder, and read it from the command line.
console.log('db url: ' + process.env.MONGO_URL);
I set up a webpage to display it, to double-check in the Selenium tests that we are using the test database and not overwriting live data.
And here is a shell script to get the Mongo URI and Mongo database:
#!/bin/bash -eux
read -s -p "Enter Password: " password
cmd=$(meteor mongo --url myapp.meteor.com << ENDPASS
$password
ENDPASS
)
mongo_uri=$(echo $cmd | cut -f2 -d" ")
mongo_db=$(echo $mongo_uri | cut -d/ -f 4)
#my_client_command_with MONGODB_URI=$mongo_uri MONGO_DB=$mongo_db
Regarding the 10-second delay on updates: tail the MongoDB oplog! There's more information on how to do it here:
http://meteorhacks.com/lets-scale-meteor.html
Make sure you install smart collections and use those (instantiate your collections using Meteor.SmartCollection instead of Meteor.Collection) and you will find the updates are essentially immediate.
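A rough sketch of tailing the oplog from Python with a tailable cursor (this assumes the mongod runs with an oplog, i.e. as a replica set, and 'mycollection' is a placeholder name):

import time
import pymongo

client = pymongo.MongoClient('127.0.0.1', 3002)
oplog = client.local['oplog.rs']  # the oplog lives in the 'local' database

# follow new entries for one namespace as they are written
cursor = oplog.find({'ns': 'meteor.mycollection'},
                    cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
while cursor.alive:
    for entry in cursor:
        print(entry['op'], entry.get('o'))  # operation type and its payload
    time.sleep(1)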