docker-py SDK unable to connect to remote server - Python

I'm currently trying to use the docker-py SDK to connect to my remote Ubuntu server so I can manage my Docker containers via Python.
I am running into a few issues when attempting to do this.
When doing the following:
docker.APIClient(base_url="ssh://user@ip")
I am getting the error:
paramiko.ssh_exception.PasswordRequiredException: private key file is encrypted
I can resolve this issue by adding the kwarg use_ssh_client=True, but then I am forced to input a password, which limits the potential for automation.
docker.APIClient(base_url="ssh://user@ip", use_ssh_client=True)
When using the above code, I have also tried to enter my SSH key password into the base_url, such as:
docker.APIClient(base_url="ssh://user:pass@ip", use_ssh_client=True)
However, this then greets me with the following error:
docker.errors.DockerException: Invalid bind address format: ssh://root:pass@ip
I have run out of ideas and am confused as to how I am supposed to get around this.
Many thanks in advance...

It's possible to make a connection as Mr. Piere answered here, even though that question is about docker.client.DockerClient, which uses docker.api.client.APIClient under the hood.
You are trying to establish a connection using password authentication, which is why you are prompted for a password.
I guess you need to configure key-based SSH login, as described in Docker's docs.
Steps to fix:
configure key-based SSH login on the remote server and fill in ~/.ssh/config on your local machine
connect from the local terminal using the ssh command to ensure a connection is established without a password prompt: ssh user@ip
connect using the library: client = docker.APIClient(base_url="ssh://user@ip", use_ssh_client=True) (see the sketch below)
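A minimal sketch of that flow, assuming a key already authorized on the remote server and a hypothetical Host alias docker-host in ~/.ssh/config:

# ~/.ssh/config on the local machine (alias, address, and key path are placeholders):
#
#   Host docker-host
#       HostName 203.0.113.10
#       User user
#       IdentityFile ~/.ssh/id_ed25519
#
# Once `ssh docker-host` logs in without prompting for a password:
import docker

client = docker.APIClient(base_url="ssh://docker-host", use_ssh_client=True)
print(client.version())  # round-trips to the remote Docker daemon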

I had a similar problem. Your problem is that your key is encrypted. The Docker client doesn't have a passphrase option by default. I wrote some code based on this post. It works for me :)
import os

from docker import APIClient
from docker.transport import SSHHTTPAdapter


class MySSHHTTPAdapter(SSHHTTPAdapter):
    def _connect(self):
        if self.ssh_client:
            # Supply the key file and passphrase that the stock adapter
            # has no option for, read here from the environment.
            self.ssh_params["key_filename"] = os.environ.get("SSH_KEY_FILENAME")
            self.ssh_params["passphrase"] = os.environ.get("SSH_PASSPHRASE")
            self.ssh_client.connect(**self.ssh_params)


client = APIClient('ssh://ip:22', use_ssh_client=True, version='1.41')
ssh_adapter = MySSHHTTPAdapter('ssh://user@ip:22')
client.mount('http+docker://ssh', ssh_adapter)
print(client.version())
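The adapter reads the key path and passphrase from the environment, so both must be set before the script runs. For a quick local test you could set them from Python first; the values below are placeholders, and in real use they belong in the shell environment or a secret store, not in source:

import os

os.environ["SSH_KEY_FILENAME"] = "/home/me/.ssh/id_ed25519"  # placeholder path
os.environ["SSH_PASSPHRASE"] = "my-key-passphrase"           # placeholder passphrase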

Related

AWS Lambda to RDS PostgreSQL

Hello fellow AWS contributors, I'm currently working on a project to set up an example of connecting a Lambda function to our PostgreSQL database hosted on RDS. I tested my Python + SQL code locally (in VS Code and DBeaver) and it works perfectly fine with only basic credentials (host, dbname, username, password). However, when I paste the code into a Lambda function, it gives me all sorts of errors. I followed this template and modified my code to retrieve the credentials from Secrets Manager instead.
I'm currently using boto3, psycopg2, and Secrets Manager to get credentials and connect to the database.
List of errors I'm getting:
server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request
could not connect to server: Connection timed out. Is the server running on host “db endpoint” and accepting TCP/IP connections on port 5432?
FATAL: no pg_hba.conf entry for host “ip:xxx”, user "userXXX", database "dbXXX", SSL off
Things I tried:
RDS and Lambda are in the same VPC, same subnet, same security group.
IP address is included in the inbound rule
Lambda function is set to run up to 15 min, and it always stops before it even hits 15 min
I tried both the database endpoint and the database proxy endpoint; neither works.
It doesn't really make sense to me that when I run the code locally, I only need to provide the host, dbname, username, and password, and I'm able to run all the queries and functions I want. But when I put the code in a Lambda function, it requires all of this: Secrets Manager, VPC security groups, SSL, a proxy, TCP/IP rules, etc. Can someone explain why there is a requirement difference between running it locally and on Lambda?
Finally, does anyone know what could be wrong in my setup? I'm happy to provide any related information; any general direction to look into would be really helpful. Thanks!
Following the directions at the link below to build a specific psycopg2 package and also verifying the VPC subnets and security groups were configured correctly solved this issue for me.
I built a package for PostgreSQL 10.20 using psycopg2 v2.9.3 for Python 3.7.10 running on an Amazon Linux 2 AMI instance. The only change to the directions I had to make was to put the psycopg2 directory inside a python directory (i.e. "python/psycopg2/") before zipping it -- the import psycopg2 statement in the Lambda function failed until I did that.
https://kalyanv.com/2019/06/10/using-postgresql-with-python-on-aws-lambda.html
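For reference, the layer layout described above looks like this before zipping (contents indicative):

python/
    psycopg2/
        __init__.py
        ... (rest of the compiled package)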
This is the VPC scenario I'm using. The Lambda function is executing inside the Public Subnet and associated Security Group. Inbound rules for the Private Subnet Security Group only allow TCP connections on port 5432 from the Public Subnet Security Group.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html#USER_VPC.Scenario1
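Putting the pieces together, a minimal handler sketch under those assumptions; the secret name, region, and JSON key names are placeholders, and the secret is assumed to hold host, dbname, username, and password:

import json

import boto3
import psycopg2

def lambda_handler(event, context):
    # Fetch DB credentials from Secrets Manager (secret name is hypothetical).
    secrets = boto3.client("secretsmanager", region_name="us-east-1")
    secret = json.loads(
        secrets.get_secret_value(SecretId="my-rds-secret")["SecretString"]
    )

    # Fail fast instead of hanging until the Lambda timeout when the
    # VPC/security-group path to RDS is wrong.
    conn = psycopg2.connect(
        host=secret["host"],
        dbname=secret["dbname"],
        user=secret["username"],
        password=secret["password"],
        port=5432,
        connect_timeout=5,
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        return {"version": cur.fetchone()[0]}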

How to find out which IP and port the JIRA() method is accessing?

I am using the following script to get issues from Jira.
from jira import JIRA
options = {'server': 'https://it.company.com/'}
jira = JIRA(options, basic_auth=('user', 'password'), max_retries=1)
issues = jira.search_issues('project="Web"', startAt=0, maxResults=50)
I want to replace https://it.company.com/ with https://ip:port.
I used ping to get the IP.
I used nmap to check ports, but no matter what https://ip:port input I use, I can't get a connection. I also tried these ports.
How can I find out which IP and Port is JIRA() accessing?
The https protocol uses port 443. Refer to Wikipedia for details.
However, accessing a server via https://server_name/ is different from accessing it via https://server_ip_address/. This is because during TLS negotiation, server_name is passed to the server via TLS SNI (Server Name Indication). This way, multiple virtual websites may be hosted at the same server_ip_address. See Wikipedia for details.
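A short way to see both halves of this from Python: resolve the IP(s) behind the name, then note that the TLS handshake still needs the hostname for SNI. The hostname below is a placeholder:

import socket
import ssl

host = "it.company.com"  # placeholder hostname

# Resolve the IP(s) the name points at; this is what ping shows you.
print(socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP))

# The TLS handshake sends the hostname (SNI) even though the TCP
# connection itself is made to an IP address.
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version(), tls.getpeercert()["subject"])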
If the script works and you just want to know what the connection looks like, I recommend letting it run and executing netstat -ano in the background.
If the script doesn't work and you just want to know where it tries to connect, I recommend installing Wireshark.
Edit: In any case you (most likely) won't be able to replace it with ip:port, because servers treat HTTP requests to an IP differently from requests to a name.
Ask the Jira admin to tell you. It is configured in conf/server.xml like any Tomcat app, or there may be a reverse proxy such as nginx configured in front of Jira.

Access and Write data securely between server and client on a public-use application

I'm working on coding an application in Python 3 for users to send and retrieve data from other users of the application. The process would be a client inputting an encoded string and using a server as a middleman to send the data to another client. I'm well versed in what would be used for the client application, but this server knowledge is new to me. I have a VPS server up and running, and I researched and found that the module pysftp would be good for transferring data back and forth. However, I'm concerned about the security of the server when using the application. This module requires the authentication details of the server when making a connection, and I don't think having my server's host, username, and password in the application code is very safe. What would be the safe way to go about this?
Thanks,
Gunner
You might want to use pre-generated authentication keys. If you are familiar with the process of using the ssh-keygen tool to create SSH key pairs, it's the same thing. You just generate the key pair, place the private key on the client machine, and put the public key on the target server. Then you can use pysftp like this:
import pysftp

with pysftp.Connection('hostname', username='me', private_key='/path/to/keyfile') as sftp:
    pass  # <do some stuff>
The authentication is handled using the key pair and no password is required. This isn't to say that your security issue is solved: the private key is still a sensitive credential that needs to be treated like a password. The advantage is that you don't have a plaintext password stored in a file anywhere, and you are using a well-established and secure process to manage authentication. The private key should be set with permissions 0600 to prevent anyone but the owner from accessing it.
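To keep the host and key path out of the source as well, one option is to read them from the environment; a minimal sketch, with hypothetical variable names:

import os

import pysftp

# Hypothetical environment variables; set them outside the code base.
host = os.environ["SFTP_HOST"]
user = os.environ["SFTP_USER"]
keyfile = os.environ["SFTP_KEYFILE"]

with pysftp.Connection(host, username=user, private_key=keyfile) as sftp:
    sftp.put("outgoing.dat", "incoming.dat")  # upload a local file to the server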

Getting error: "localhost:27017: [Errno 111] Connection refused" while connecting to mongoDB on Heroku but works fine on my computer

I'm using PyMongo. Everything works fine and I can connect to MongoDB on my computer, but when I put the scripts on GitHub and run them through Heroku for my Discord bot, I keep getting this error:
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused
I don't know why this happens when it works fine on my computer; I did put pymongo in requirements.txt. Below is how I connect to MongoDB (with PyMongo):
import pymongo
from pymongo import MongoClient, ReturnDocument
dbclient = MongoClient('mongodb://localhost:27017/')
# On Heroku I get error:"localhost:27017: [Errno 111] Connection refused"
On your local machine, you can set the specific port to use (e.g. 27017).
Does Heroku choose the port for you instead?
Heroku isn't a host where you can run arbitrary things on the local machine. You'll have to connect to a non-local MongoDB host instead of localhost. One easy way to do that is to select an appropriate add-on and add it to your app.
For example, you might choose to use the free starter version of mLab MongoDB. You can provision this add-on by running
heroku addons:create mongolab:sandbox
This addon will set an environment variable MONGODB_URI for you, which you can use to connect:
import os

from pymongo import MongoClient

# Use the default argument if you don't want to have to set MONGODB_URI on
# your dev machine.
mongodb_uri = os.getenv('MONGODB_URI', default='mongodb://localhost:27017/')
dbclient = MongoClient(mongodb_uri)
I decided not to use localhost because I couldn't understand it. I'm now using a URL given by MongoDB with my username and password, which you can create by going to https://www.mongodb.com/cloud and creating a project, a cluster, and collections; the URL should then be given to you. The URL should look something like this: mongodb+srv://<username>:<password>@cluster-apc2i.mongodb.net/test?retryWrites=true&w=majority. Add that URL to your script like this:
client = MongoClient("mongodb+srv://<username>:<password>@cluster-apc2i.mongodb.net/test?retryWrites=true&w=majority")
Also make sure to add 0.0.0.0/0 to your allowed IPs; that entry means all IP addresses are allowed to access the cluster. If you don't add it, you may get errors such as timeouts.
The URL may be given to you after you create a new user from the Database Access panel on the left.
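One caveat: if the username or password contains URI-special characters such as @ or :, the connection string will break; PyMongo's documentation recommends percent-escaping them with quote_plus. A small sketch with placeholder credentials:

from urllib.parse import quote_plus

from pymongo import MongoClient

user = quote_plus("myuser")          # placeholder username
password = quote_plus("p@ss:word")   # placeholder password with special chars

uri = (
    f"mongodb+srv://{user}:{password}"
    "@cluster-apc2i.mongodb.net/test?retryWrites=true&w=majority"
)
client = MongoClient(uri)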
You should delete the mongod.lock file from /var/lib/mongodb. After that you can restart the service.
You can also try changing part of the client code to:
client = MongoClient('localhost', 27017)

How to connect to the GCM server using Python?

I am using Ubuntu 14.04. I tried to send a push notification to a mobile phone. I followed https://www.digitalocean.com/community/tutorials/how-to-create-a-server-to-send-push-notifications-with-gcm-to-android-devices-using-python and it works on my local PC.
I am trying the same code on my web server, but I am not able to send the push notification. I get an error like "gcm.gcm.GCMAuthenticationException: There was an error authenticating the sender account". My web server is also Ubuntu 14.04, so please can anyone help me?
gcm.py
from gcm import *

gcm = GCM("as........k")  # API key (truncated in the original post)
data = {"message from": "123", "messageto": "1234", "message": "Hi",
        "time": "10.00AM", "langid": "1"}
reg_id = 'AP...JBA'  # device registration id (truncated in the original post)
gcm.plaintext_request(registration_id=reg_id, data=data)
I added my server IP to the whitelist, but I am still getting the same error.
You need to add your IP to the white-listed IP list.
The article you linked mentions it:
gcm: add your API KEY from the Google API project; make sure your server's IP address is in the allowed IPs
When you create your access key you specify which servers can be used there, so you will need to edit the allowed server list by adding your server's IP.
Make sure the Authorization key is defined in your request.
Ensure that outbound ports 5228, 5229, and 5230 are open.
For further errors, look at Google's page.
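If you want to check the Authorization key independently of the gcm library, a raw request against the (legacy) GCM HTTP endpoint looks roughly like this; the API key and registration ID are placeholders:

import requests

API_KEY = "as........k"  # placeholder server API key
REG_ID = "AP...JBA"      # placeholder device registration id

resp = requests.post(
    "https://gcm-http.googleapis.com/gcm/send",
    headers={
        # A 401 here means a bad key or a server IP not on the whitelist.
        "Authorization": f"key={API_KEY}",
        "Content-Type": "application/json",
    },
    json={"to": REG_ID, "data": {"message": "Hi"}},
)
print(resp.status_code, resp.text)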
I had the same problem and solved it by clearing the whitelist, saving it, and re-inserting my server's IP into the whitelist.
It seemed to work, but that's not true. It's just random: sometimes it works, sometimes it returns the error mentioned.
