I have a MongoDB 2.2.2 setup on an Ubuntu 12.04 machine and I need to modify the bind_ip list while the database is running, without restarting mongod. Is there a way to do so?
Is it possible to do this from pymongo?
p.s. I've actually tried
mongod --config /etc/mongodb.conf --bind_ip 127.0.0.1 31.**
with the bind_ip list supplied, but it says
Wed Dec 19 17:02:05 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /var/lib/mongodb/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
and I'm not sure that this isn't just restarting the database anyway.
Apparently you can do this with iptables(8) rules. Try the following (with 192.0.2.1 being the IP address you want to receive connections on):
iptables -A INPUT -p tcp ! -d 192.0.2.1 --dport 27017 -m state --state NEW -j REJECT
If you already have iptables rules then you may need a different command. Check the output of iptables -L INPUT.
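As far as I know, bind_ip is a startup-only option (certainly in 2.2), so pymongo itself can't change it at runtime. Since you asked about Python, though, here is a minimal sketch of applying the same firewall rule from a script; the address and port are the assumptions from above, and it must run as root:
import subprocess

def restrict_mongod(allowed_ip="192.0.2.1", port=27017):
    # Reject new connections to mongod that are not addressed to allowed_ip.
    # Mirrors the iptables rule above; requires root privileges.
    subprocess.check_call([
        "iptables", "-A", "INPUT", "-p", "tcp",
        "!", "-d", allowed_ip, "--dport", str(port),
        "-m", "state", "--state", "NEW", "-j", "REJECT",
    ])

restrict_mongod()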
Good morning,
I am currently having a problem that I cannot find the answer to on StackOverflow or via Google, and that I have not yet solved.
I am trying to use rsync as a sudo user on a target device.
The issue:
I do not own the target device, so I cannot change ssh/sudo perms.
I do not have credentials to the root user
I do have credentials for a sudo user
The transaction must be completed programmatically (minimal user input)
What I've tried:
rsync -a --rsync-path "sudo rsync" USER#HOST:/root/FILE ./
Issue: "A terminal is required to read password"
OK, so let's try passing it through stdin:
rsync -a --rsync-path "echo 'PASSWORD' | sudo -S rsync" USER#HOST:/root/FILE ./
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
Issue: rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
rsync: connection unexpectedly closed (4 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-52.200.1/rsync/io.c(453) [receiver=2.6.9]
Do you have any other ideas about what I could try?
I am aware that echoing the password is not best practice; however, I do not have many other options, given that the server I am connecting to has no key exchange set up for the root user and I cannot change the SUDOPASS settings.
In the end this is all getting plugged into a Python script, so if there is a better Pythonic means of using rsync as a sudoer, please let me know.
If your remote sudo is configured so that once you have given the password, you do not need to give it again for a while, then you can try this:
rsync -a --rsync-path "echo 'PASSWORD' | sudo -S date >&/dev/null; sudo rsync" \
USER#HOST:/root/FILE ./
To debug what command is being run on the remote, add --debug=CMD2.
If your remote does not understand the bash syntax >&/dev/null, use the longer >/dev/null 2>/dev/null.
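Since this is all getting plugged into a Python script anyway, here is a minimal sketch of the same trick via subprocess; PASSWORD, USER, HOST and the file path are the placeholders from the question, and I've used the portable redirection form in case the remote shell isn't bash:
import subprocess

# Prime sudo's timestamp first, then run the real rsync (same idea as above).
rsync_path = "echo 'PASSWORD' | sudo -S date >/dev/null 2>/dev/null; sudo rsync"
subprocess.run(
    ["rsync", "-a", "--rsync-path", rsync_path, "USER@HOST:/root/FILE", "./"],
    check=True,  # raise CalledProcessError if rsync fails
)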
For my Distributed Systems class I have to implement the Berkeley algorithm, and I chose to do it in Python with sockets. The master is supposed to run on the host and the slaves in Docker containers.
The closest I got to connecting from the host (as the master) to a container (as a slave) was exposing the port with the -p 9000:9000 flag when running the container: the host connects successfully to the container but doesn't receive or send anything (and the same goes for the container). From that I concluded that the Python socket inside the process is simply not receiving packets from the port. I have already tried the --net=host flag, but then the host simply can't find the container. One bit of progress was instantiating two Docker containers and pinging one from the other using the hostname provided in /etc/hosts, but that is not what I really want.
I have the whole code on GitHub if you need the source. The code is commented in English, but the documentation is in Portuguese.
Summarising: all I want is to open a socket with Python inside a Docker container and be able to reach it from the host machine. What kind of network configuration do I need to be able to do that?
EDIT: More info
The following bash script instantiates three Docker containers, then executes a command in each one to clone my repo, cd into it and into a test folder containing a script that launches a slave, and finally starts the master on the host:
docker run -it -d -p 127.0.0.1:9000:9000/tcp --name slave1 python bash
docker run -it -d -p 127.0.0.1:9001:9001/tcp --name slave2 python bash
docker run -it -d -p 127.0.0.1:9002:9002/tcp --name slave3 python bash
docker exec -t -d slave1 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_1.sh'
sleep 1
docker exec -t -d slave2 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_2.sh'
sleep 1
docker exec -t -d slave3 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_3.sh'
sleep 1
bash test/master.sh
Each instance is then started by another bash script. To instantiate a slave I use:
python ../main.py -s 127.0.0.1:9000 175 logs/slave_log_1.txt
The -s flag tells main.py that this is a slave, 127.0.0.1:9000 is the IP and port this slave is going to listen on (and that the master is going to connect to), and the rest are just configuration values (this example is for the first slave).
And to instantiate the master I use:
python ./main.py -m 127.0.0.1:8080 185 15 test/slaves.txt test/logs/master_log.txt
Just like for the slave, -m tells main.py that this is a master, 127.0.0.1:8080 is the IP and port the master uses, and the rest are just configuration values.
When you run a server-type process inside a Docker container, it needs to be configured to listen on the special "all interfaces" address 0.0.0.0. Each container has its own notion of localhost (127.0.0.1): if you bind a process to 127.0.0.1, it can only be reached from its own localhost, which is different from every other container's localhost and from the host's localhost.
In the server command you show, you'd run something like
python ../main.py -s 0.0.0.0:9000 175 logs/slave_log_1.txt
(Strongly consider writing a Dockerfile to describe how to build and start your image. Starting a bunch of empty containers, running git clone in each, and then manually launching processes is a lot of manual work that is lost as soon as you docker rm the container.)
I looked through your code and I see you creating the server socket, binding it to a port, and listening, but I could not find where you call socket.accept().
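For reference, a minimal sketch of the slave's listening side, assuming port 9000 as in the example above: binding to 0.0.0.0 is what makes the socket reachable through Docker's published port, and without accept() no data will ever flow.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))  # not 127.0.0.1: that is container-local only
server.listen(1)
conn, addr = server.accept()    # accept() actually completes the connection
print("master connected from", addr)
conn.close()
server.close()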
Platform: LINUX.
I am a beginner with MongoDB and pymongo. After installing pymongo, here is a simple test I tried in IPython:
import pymongo
client = pymongo.MongoClient()
# Also tried to specify the local host and port number
db = client['myDB']
collections = db['temptables']
collections.insert({'a':'1'})
At this point it chokes, and in the end spits out an "Error 111: connection refused" error. So I tried invoking MongoDB straight from the terminal and I still got the error below [see the far end]. I searched a bit and tried:
removing the lock (sudo rm /var/lib/mongodb/mongod.lock). Turns out there was no lock in the first place.
sudo mongod --repair
I even saw a suggestion to comment out the host and port number from the config file. Tried that too, didn't work.
None of the above worked.
This is the error I see when I try to invoke mongodb from command line.
2017-08-17T15:25:30.265-0700 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2017-08-17T15:25:30.265-0700 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:237:13
#(connect):1:6
exception: connect failed
Help, please.
Your mongo server isn't running.
You can confirm this by executing sudo ps -ef | grep mongod
If you have mongo installed and in your path, you can execute:
cd && mkdir -p ~/temp_mongo_db && mongod --dbpath=./temp_mongo_db
This will launch mongo and place all database files in your home directory under 'temp_mongo_db'.
Finally, in a new terminal window, execute sudo ps -ef | grep mongod again. You'll now see mongod running.
If you want to run mongo in production, you should configure it to be managed by systemd or some other init system.
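Once mongod is up, a quick way to verify reachability from pymongo is to ping with a short server-selection timeout, so a dead server fails fast instead of hanging. A sketch; adjust host and port to your setup:
import pymongo
from pymongo.errors import ServerSelectionTimeoutError

client = pymongo.MongoClient("mongodb://127.0.0.1:27017",
                             serverSelectionTimeoutMS=2000)
try:
    client.admin.command("ping")  # raises if no server is reachable
    print("mongod is up")
except ServerSelectionTimeoutError as exc:
    print("mongod is not reachable:", exc)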
I have a remote server, say 1.2.3.4, which is running a Docker container that serves SSHD mapped to port 49222 on the Docker host. To connect to it manually I would do:
workstation$ ssh 1.2.3.4 -t "ssh root@localhost -p 49222"
and arrive at the Docker container's SSH prompt root@f383b4f71eeb:~#.
If I run a Fabric task which triggers run('ssh root@localhost -p 49222'), then I am instead asked for the root password. However, it does not accept the root password, which I know to be correct, so I suspect the password prompt is originating from the host and not the Docker container.
I defined the following task in my fabfile.py:
@task
def ssh():
    env.forward_agent = True
    run('ssh root@localhost -p 49222')
What eventually worked was running the inner ssh inside a settings() context with agent forwarding enabled:
with settings(output_prefix=False, forward_agent=True):
    run('ssh root@localhost -p 49222')
And in the remote server's sshd_config I needed to set:
AllowAgentForwarding yes
In addition, output_prefix=False is useful to remove the [hostname] run: prefix that Fabric adds to the start of every line, which gets fairly annoying on every line of a remote shell.
I have a server that has to handle lots of TCP requests from GPRS modules, and I think it would be handy to set up something to protect this server from repeated requests from certain IPs.
I want to build something (in Python) that checks how many times a certain IP tries to connect, and if this exceeds a given number of tries, blocks that IP for a given amount of time (or forever).
I am wondering whether there are libraries for this, or how I should tackle the problem in my own code.
Don't tackle this from your code - this is what a firewall is designed to do.
Using iptables it's trivial:
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --update --seconds 600 --hitcount 2 -j DROP
The above means "drop anything that makes more than 2 connection attempts in 10 minutes at port $PORT"
If you decide you do want to handle this in code, you don't need a separate library (although using one will probably be more efficient); you can add something like the following to your connection handler:
from collections import defaultdict, deque
from datetime import datetime, timedelta

TIMELIMIT = timedelta(minutes=10)   # window to look at (illustrative value)
MAX_CONNECTIONS_PER_TIMELIMIT = 5   # connections allowed per window (illustrative)

floodlog = defaultdict(deque)

def checkForFlood(clientIP):
    """Check how many times clientIP has connected within TIMELIMIT,
    and block it if that exceeds MAX_CONNECTIONS_PER_TIMELIMIT."""
    now = datetime.now()
    clientFloodLog = floodlog[clientIP]
    clientFloodLog.append(now)
    if len(clientFloodLog) > MAX_CONNECTIONS_PER_TIMELIMIT:
        earliestLoggedConnection = clientFloodLog.popleft()
        if now - earliestLoggedConnection < TIMELIMIT:
            blockIP(clientIP)
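For example, you could call it from a plain accept loop like this (a sketch: the port is an example, and blockIP() is still your own hook, e.g. one that appends an iptables rule as shown above):
import socket

PORT = 9000  # example port; use your service's port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", PORT))
server.listen(5)
while True:
    conn, (clientIP, clientPort) = server.accept()
    checkForFlood(clientIP)  # may call blockIP(clientIP) as defined above
    conn.close()             # hand off to your protocol logic instead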
As Burhan Khalid said, you don't want to handle this in your own code. It's not very performant, and that's what firewalls are made for.
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --update --seconds 600 --hitcount 2 -j DROP
This example is very useful, but not very flexible: the problem is that you're also limiting good/trusted connections.
You need to be more flexible. On a Linux-based OS you can use fail2ban, a very handy tool for protecting your services against brute-force attacks using dynamic iptables rules. On Debian/Ubuntu you can install it with apt-get; on CentOS you need to use a third-party repository.
Log every connection into a logfile:
[Jun 3 03:52:23] server [pid]: Connect from 1.2.3.4
[Jun 3 03:52:23] server [pid]: Failed password for $USER from 1.2.3.4 port $DST
[Jun 3 03:52:23] server [pid]: Connect from 2.3.4.5
[Jun 3 03:52:23] server [pid]: Successful login from 2.3.4.5
Now monitor this file with fail2ban and define a regex to differentiate between successful and failed logins. Tell fail2ban how long it should block the IP for you, and whether you would like an email notification.
The documentation is very good, so have a look at how to configure fail2ban to monitor your logfile: fail2ban docu
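As a rough sketch (the jail name, paths, and thresholds are assumptions to adapt), a jail plus filter matching the log format above might look like:
# /etc/fail2ban/jail.local
[myservice]
enabled  = true
filter   = myservice
logpath  = /var/log/myservice.log
maxretry = 3
bantime  = 600

# /etc/fail2ban/filter.d/myservice.conf
[Definition]
failregex = Failed password for \S+ from <HOST>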
You don't have to watch only for failed logins. You can also watch for port scans. And the biggest win: don't secure only your application; also protect your SSH, HTTP, etc. logins from being brute-forced! ;)
For a pure Python solution, I think you could reuse something I developed for the same problem, but from the client's point of view: avoiding issuing more than 'x tries per second' to a service provider.
The code is available on GitHub: you can probably reuse most of it, but you'll need to replace the time.sleep call with your blacklisting mechanism.