"Operation not Permitted" for Redis - python

I am developing on a Mac which already has Redis installed. By default it doesn't have a redis.conf, so the default settings were used when I ran $ redis-server
# Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
I am trying to use redis-py and have the following
import redis
r = redis.Redis('localhost')
r.set('foo','bar')
r.get('foo')
but got the following error
redis.exceptions.ResponseError: operation not permitted
I also tried $ redis-cli ping in the terminal, but then I get the following error
(error) ERR operation not permitted
I suppose since there is no redis.conf, the default settings don't have a password, right? Anyway, I also tried to create a redis.conf
$ echo "requirepass foobared" >> redis.conf
$ redis-server redis.conf
then on another window
$ redis-cli
$ redis 127.0.0.1:6379> AUTH foobared
(error) ERR invalid password
I also modified the second line of the Python script to
r = redis.StrictRedis(host='localhost', port=6379, db=0, password='foobared')
but then I got
redis.exceptions.ResponseError: invalid password
What could I be doing wrong? Thanks

Without any redis.conf file, Redis uses the default built-in configuration, so the default database file (dump.rdb) and the Append Only File are created in the directory the server was started from. Maybe the user running Redis does not have write permission on that directory.
So either give that user write permission, or define another working directory using a config file.
You'll find a default config file for Redis 2.6 here.
You must modify this line in the file:
# The working directory.
dir ./
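As a quick way to see what the running server is actually using, redis-py can read and change that setting at runtime; a minimal sketch, assuming you can connect, and with /tmp/redis-data standing in for any directory the server's user can write to:
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
print(r.config_get('dir'))               # directory Redis writes dump.rdb / the AOF into
r.config_set('dir', '/tmp/redis-data')   # point it at a directory the server's user can write to
r.set('foo', 'bar')                      # should now succeed instead of "operation not permitted"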

Related

Pypiserver logging in API

I'm trying to create a private Python index with PyPiServer using the API.
According to the documentation I can specify the verbosity and the log file in the pypiserver app setup.
This is what I have:
import pypiserver
from pypiserver import bottle
app = pypiserver.app(root='./packages', password_file='.htpasswd', verbosity=4, log_file='F:\repo\logfile.txt')
bottle.run(app=app, host='itdevws09', port=8181, server='auto')
However, when I start it using python mypyserver.py, the index starts up and works normally, but no log file is created. If I create one manually, the log file isn't actually written to.
If I start pypiserver from the command line with:
pypi-server -p 8080 -P .htpasswd -vvvv --log-file F:/repo/logfile.txt ./packages
The log file is created and written to as normal.
I have tried putting the log-file and verbosity arguments in the bottle.run() call, but that doesn't work either. How can I get the logging to work?
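One workaround sketch, not from the original post and assuming the log_file argument is only wired up by the command-line entry point, is to configure Python's standard logging yourself before creating the app:
import logging
import pypiserver
from pypiserver import bottle

# send log records to a file ourselves; path reused from the question
logging.basicConfig(
    filename=r'F:\repo\logfile.txt',
    level=logging.DEBUG,  # roughly what -vvvv gives on the command line
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
)

app = pypiserver.app(root='./packages', password_file='.htpasswd')
bottle.run(app=app, host='itdevws09', port=8181, server='auto')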

Can't connect MongoDb on AWS EC2 using python

I have installed Mongodb 3.0 using this tutorial -
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It has installed fine. I have also given 'ec2-user' permissions to all the data and log folders, i.e. var/lib/mongo and var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongodb server always fails to start with the command
sudo service mongod start
it just says failed, nothing else.
While if I run the command
mongod --dbpath var/lib/mongo
it starts the mongodb server correctly (though I have set the same dbpath in the .conf file as well).
What is it I am doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults - port 27017, database path of /data/db, etc. - that is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now: you have run the process with your normal config (pointing at your usual dbpath and log) as the root user. That means there are now going to be a number of files in that normal MongoDB folder with the user:group of root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
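On Amazon Linux (as in the question) the package user is mongod rather than mongodb, so the equivalent permission fix would look something like this, assuming the default paths from the tutorial's mongod.conf:
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
sudo service mongod start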

Cannot get environmental variables in Python sometimes?

So I have a Django/Python 3.4.3 setup with nginx, gunicorn and postgres on Ubuntu Server 14.04. The server is blank and was set up following this guide. I have set a few environment variables in /etc/environment as follows and rebooted:
DJANGO_DB_NAME="db"
DJANGO_DB_USER="username"
DJANGO_DB_PASSWORD="password"
DJANGO_SECRET_KEY="9g2&ionu!4u#%#2f&(r0dpp_yplyukxde^*1+evf7ko#_yn6%h"
So from Django's settings.py file I try to access it in a variety of ways, but ran into unexpected behavior:
'NAME': os.getenv('DJANGO_DB_NAME') # this works correctly
'NAME': os.environ.get('DJANGO_DB_NAME') # this works correctly
'NAME': os.environ['DJANGO_DB_NAME'] # this does NOT work and yields 'key' does not exist
None of these works; each returns an empty value instead of the key value:
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY')
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
Django Error:
File "/webapps/venv/lib/python3.4/site-packages/django/conf/__init__.py", line 120, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
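For reference, the three lookups behave differently when a variable is genuinely missing from the process environment (a small sketch, not from the original post):
import os

print(os.getenv('DJANGO_SECRET_KEY'))           # None if the variable is unset
print(os.environ.get('DJANGO_SECRET_KEY', ''))  # '' (the supplied default) if unset
print(os.environ['DJANGO_SECRET_KEY'])          # raises KeyError if unset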
From within Ubuntu, when I access the environment variables from the command line, I always get the correct result back:
root@ubuntu-512mb-sfo1-01:/webapps# echo $DJANGO_SECRET_KEY
9g2&ionu!4u#%#2f&(r0dpp_yplyukxde^*1+evf7ko
root@ubuntu-512mb-sfo1-01:/webapps# echo $DJANGO_DB_USER
username
Yet, when I do this from the command line, it works!
root@ubuntu-512mb-sfo1-01:/webapps# python3 -c "import os; print(os.environ['DJANGO_SECRET_KEY'])"
9g2&ionu!4u#%#2f&(r0dpp_yplyukxde^*1+evf7ko
Now I am really confused. Does any expert know what is going on and how to solve this?
Update 1: Per comments by m.wasowski, gunicorn is running as root, and running manage.py runserver as root works just fine. Gunicorn only complains when I run 'service gunicorn start'. The security issues of running as root and of storing the key in the environment are temporary until I just get it working first.
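Worth noting: /etc/environment is applied by PAM at login, so a process started with 'service gunicorn start' will not inherit those variables unless the job sets them itself. An upstart job on 14.04 could do that with env stanzas; the file path and job name below are assumptions, the values are the ones from above:
# /etc/init/gunicorn.conf  (assumed upstart job)
env DJANGO_DB_NAME="db"
env DJANGO_DB_USER="username"
env DJANGO_DB_PASSWORD="password"
env DJANGO_SECRET_KEY="9g2&ionu!4u#%#2f&(r0dpp_yplyukxde^*1+evf7ko#_yn6%h"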

yowsup-celery: How to run yowsup-celery in daemon mode, passing the WhatsApp config file as an argument

I am using:
yowsup-celery: https://github.com/jlmadurga/yowsup-celery
to try to integrate WhatsApp into my system.
I have successfully been able to store messages and now want to run celery in daemon mode rather than in the terminal.
To run it normally we use:
celery multi start -P gevent -c 2 -l info --yowconfig:conf_wasap
To run daemon mode we use:
sudo /etc/init.d/celeryd start
How can I pass the config file as an argument here, or is there a way to remove the dependency on passing it as an argument and instead read the file inside the script?
Since yowsup-celery 0.2.0 it is possible to pass the config file path through configuration instead of as an argument:
YOWSUPCONFIG = "path/to/credentials/file"
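So for the daemonized worker you would drop the --yowconfig argument and put that setting in the configuration module the worker loads; a sketch, where the module name is an assumption:
# celeryconfig.py  (assumed name of the Celery configuration module the worker loads)
YOWSUPCONFIG = "path/to/credentials/file"
The options previously given on the command line (-P gevent -c 2 -l info) can then go into CELERYD_OPTS in /etc/default/celeryd, which the generic celeryd init script reads.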

Connecting to EC2 using keypair (.pem file) via Fabric

Does anyone have a Fabric recipe that shows how to connect to EC2 using the pem file?
I tried writing it in the manner described in this question:
Python Fabric run command returns "binascii.Error: Incorrect padding"
But I'm running into an encoding issue when I execute the run() function.
To use the pem file I generally add the pem to the ssh agent, then simply refer to the username and host:
ssh-add ~/.ssh/ec2key.pem
fab -H ubuntu@ec2-host deploy
or specify the env information (without the key) like the example you linked to:
env.user = 'ubuntu'
env.hosts = [
'ec2-host'
]
and run as normal:
fab deploy
Without addressing your encoding issue, you might put your EC2 stuff into an ssh config file:
~/.ssh/config
or, if global:
/etc/ssh_config
There you can specify your host, IP address, user, identity file, etc., so it's a simple matter of:
ssh myhost
Example:
Host myhost
User ubuntu
HostName 174.129.254.215
IdentityFile ~/.ssh/mykey.pem
For more details: man ssh_config
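If you go the ssh config route, Fabric 1.x can be told to honor that file as well (use_ssh_config is available from Fabric 1.4 on):
from fabric.api import env
env.use_ssh_config = True  # make Fabric read ~/.ssh/config for host, user and IdentityFile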
Another thing you can do is set key_filename in Fabric's env: https://stackoverflow.com/a/5327496/1729558
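Putting that together as a minimal fabfile sketch (host, key path and the deploy task body are placeholders reused from above):
import os
from fabric.api import env, run

env.user = 'ubuntu'
env.hosts = ['ec2-host']
env.key_filename = os.path.expanduser('~/.ssh/ec2key.pem')  # the .pem file, instead of ssh-add

def deploy():
    run('uname -a')  # placeholder task; replace with the real deploy steps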
