Building off this post, this post, and this tutorial, I'm attempting to write a Python script that connects to my MySQL Docker container. I'd like to use the pymysql library and its pymysql.connect() function for ease of use. FYI: the host machine is Ubuntu 16.04.7 and the Docker version is 20.10.7.
Okay, here's the docker-compose.yml section that spins up my MySQL container:
MySQL_DB:
  container_name: MyMYSQL
  image: 667ee8fb158e
  ports:
    - "52000:3306"
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: password123
  command: mysqld --general-log=1 --general-log-file=/var/lib/mysql/general-log.log
  volumes:
    - ./logs/mysql.log:/var/lib/mysql/general-log.log
I can't remember where I got this template, but the container is up and running just fine. Note that I'm publishing the container's TCP port; the other SO posts mentioned that this is required for remote connections.
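One thing worth flagging in the file above: command appears twice, and a YAML mapping can't hold duplicate keys, so depending on the Compose version either only the last command takes effect (silently dropping the mysql_native_password flag) or the file is rejected outright. If both sets of flags are wanted, a merged single command, sketched here, would look like:

command: mysqld --default-authentication-plugin=mysql_native_password --general-log=1 --general-log-file=/var/lib/mysql/general-log.log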
Okay, here's the script I'm using:
# From:
# https://www.geeksforgeeks.org/connect-to-mysql-using-pymysql-in-python/
import pymysql

def mysqlconnect():
    # Connect to the MySQL database
    conn = pymysql.connect(host='172.20.0.2', user='me123',
                           password="password123", db='DB01', port=3306)
    # Close the connection
    conn.close()

# Driver code
if __name__ == "__main__":
    mysqlconnect()
Docker Compose assigned the container the IP address 172.20.0.2, and I can ping it from the host machine (and from within the container).
Running the code generates this error:
me123#ubuntu01/home/me123$ sudo /usr/bin/python3 ./pythonScript.py
Traceback (most recent call last):
File "./pythonScript.py", line 27, in <module>
mysqlconnect()
File "./pythonScript.py", line 14, in mysqlconnect
conn = pymysql.connect(host='172.20.0.2',user='me123',password="password123",db='DB01',port=3306)
File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 90, in Connect
return Connection(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 699, in __init__
self.connect()
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 936, in connect
self._request_authentication()
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1165, in _request_authentication
auth_packet = self._process_auth(plugin_name, auth_packet)
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1227, in _process_auth
raise err.OperationalError(2059, "Authentication plugin '%s' not configured" % plugin_name)
pymysql.err.OperationalError: (2059, "Authentication plugin 'b'caching_sha2_password'' not configured")
me123#ubuntu01/home/me123$
"Authentication plugin '%s' not configured" strongly suggests that when I run the script, my container is denying the connection. Sadly, there is nothing in the log to explain why this is. Google searches on pymysql.connect() pull up information on how to configure this command, but little to troubleshoot it. Does anyone see what I'm doing wrong?
I'm new to GitLab. I am building my first pipeline to deploy the contents of my GitLab project to an FTP server with TLS encryption. I've written a Python script using ftplib that uploads the files to the FTP server; it works perfectly when I run it on my local Windows machine, uploading the full contents of the project to a folder on the server. Now I'm trying to get it to work on GitLab by calling the script from the project's .gitlab-ci.yml file. Both the script and the yml file are in the top level of my GitLab project. The setup is extremely simple for the moment:
image: python:latest

deploy:
  stage: deploy
  script:
    - python ftpupload.py
  only:
    - main
However, the upload always times out with the following error message:
File "/usr/local/lib/python3.9/ftplib.py", line 156, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout,
File "/usr/local/lib/python3.9/socket.py", line 843, in create_connection
raise err
File "/usr/local/lib/python3.9/socket.py", line 831, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
Cleaning up file based variables
ERROR: Job failed: exit code 1
Here's the basic setup for establishing the connection in the Python script that works fine locally but fails on GitLab:
import ftplib
import ssl

class ReusedSslSocket(ssl.SSLSocket):
    def unwrap(self):
        pass

class MyFTP_TLS(ftplib.FTP_TLS):
    """Explicit FTPS, with shared TLS session"""
    def ntransfercmd(self, cmd, rest=None):
        conn, size = ftplib.FTP.ntransfercmd(self, cmd, rest)
        if self._prot_p:
            conn = self.context.wrap_socket(conn,
                                            server_hostname=self.host,
                                            session=self.sock.session)  # reuses TLS session
            conn.__class__ = ReusedSslSocket  # we should not close the reused SSL socket when file transfers finish
        return conn, size

session = MyFTP_TLS(server, username, password, timeout=None)
session.prot_p()
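The uploads themselves then go through ftplib's standard storbinary call, along these lines (simplified here; the real script walks the whole project tree):

# Sketch: upload a single file over the PROT P data channel
with open("index.html", "rb") as f:
    session.storbinary("STOR index.html", f)
session.quit()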
I know there are other tools like lftp and git-ftp that I could use in GitLab CI, but I've built a lot of custom functionality into the Python script and would like to use it. How can I successfully deploy the script within GitLab CI? Thanks in advance for your help!
This requires that the GitLab Runner (which executes the pipeline) is able to make an FTPS connection to your FTP server.
Shared runners are likely locked down to connect only to the GitLab server (to close off an attack vector).
To work around this, install your own runner and register it with your GitLab instance.
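Registration is done with the gitlab-runner CLI on the machine that will run the jobs; a minimal sketch, where the URL, token, and description are placeholders:

# Sketch: register a self-hosted runner that uses the Docker executor
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "python:latest" \
  --description "ftp-deploy-runner"

Jobs for the project then run on that machine, which can be given network access to your FTP server.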
I can't migrate on Heroku. I'm using Django and MySQL.
I don't know what's wrong with it.
Is there something wrong in setting_mysql.py?
I got an error like this:
(base) mypc#mypc website % heroku run python manage.py migrate
Running python manage.py migrate on ⬢ myapp... up, run.3308 (Free)
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/base/base.py", line 195, in connect
self.connection = self.get_new_connection(conn_params)
File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/mysql/base.py", line 227, in get_new_connection
return Database.connect(**conn_params)
File "/app/.heroku/python/lib/python3.9/site-packages/MySQLdb/__init__.py", line 130, in Connect
return Connection(*args, **kwargs)
File "/app/.heroku/python/lib/python3.9/site-packages/MySQLdb/connections.py", line 185, in __init__
super().__init__(*args, **kwargs2)
MySQLdb._exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
You have probably provided the wrong database access credentials, because if I'm not mistaken, you can't connect to the database through a local socket on Heroku.
Try official solution:
On Heroku, sensitive credentials are stored in the environment as config vars. This includes database connection information (named DATABASE_URL), which is traditionally hardcoded in Django applications.
The django-heroku package automatically configures your Django application to work on Heroku. It is compatible with Django 2.0 applications.
It provides many niceties, including the reading of DATABASE_URL, logging configuration, a Heroku CI-compatible TestRunner, and automatic configuration of 'staticfiles' to "just work".
Installing django-heroku:
pip install django-heroku
Be sure to add django-heroku to your requirements.txt file as well.
Add the following import statement to the top of settings.py:
import django_heroku
Then add the following to the bottom of settings.py to activate django-heroku:
django_heroku.settings(locals())
Deploy, and you should be good to go!
I'm using this solution and it works for me.
Another option is to copy the database access data from the database panel and put it into settings.py manually.
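A minimal sketch of that manual approach, where every value is a placeholder for whatever your database panel shows:

# settings.py -- sketch only; all values below are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'your_db_name',
        'USER': 'your_db_user',
        'PASSWORD': 'your_db_password',
        'HOST': 'your-db-host.example.com',  # a TCP host, not a local socket
        'PORT': '3306',
    }
}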
I am trying to run tests using Bitbucket Pipelines, but unfortunately I cannot connect to PostgreSQL.
I have tried adding rm -rf /tmp/.s.PGSQL.5432/ to my bitbucket-pipelines.yml, but nothing changed when running my tests.
This is the error that I get:
+ python manage.py test
/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py:265: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
RuntimeWarning
nosetests --with-coverage --cover-package=accounts,rest_v1, property --verbosity=1
Creating test database for alias 'default'...
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py", line 174, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
bitbucket-pipelines.yml
image: python:3.6.2

pipelines:
  default:
    - step:
        script:
          - pip install -r requirements.txt
          - python manage.py test
  branches:
    develop:
      - step:
          caches:
            - node
          script:
            - pip install -r requirements.txt
            - python manage.py test
setting.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'db_name',
        'USER': 'user_name',
        'PASSWORD': 'db_password',
    }
}
I expect Bitbucket Pipelines to run the tests without issues, in particular without DB connection issues.
You will want to use the services key in your step, along with a matching service entry under definitions, similar to the following:
pipelines:
  default:
    - step:
        image: node
        script:
          - npm install
          - npm test
        services:
          - postgres

definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: pipelines
        POSTGRES_USER: test_user
        POSTGRES_PASSWORD: test_user_password
Source:
https://community.atlassian.com/t5/Bitbucket-questions/How-do-I-use-Postgres-in-Bitbucket-Pipelines/qaq-p/461910
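With the service in place, your Django settings need to point at it; Bitbucket Pipelines exposes service containers on localhost. A sketch that mirrors the POSTGRES_* values above:

# settings.py -- sketch; credentials mirror the service definition above
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'pipelines',
        'USER': 'test_user',
        'PASSWORD': 'test_user_password',
        'HOST': '127.0.0.1',  # service containers are reachable on localhost
        'PORT': '5432',
    }
}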
I'm doing a lab in malware analysis.
The task is to investigate the CVE-2015-7547 glibc vulnerability.
Google has already published proof-of-concept code, which contains a client in C and a fake DNS server in Python. When I try to run the server, it throws an exception:
turbolab#sandbox:~/Desktop$ sudo python CVE-2015-7547-poc.py
Traceback (most recent call last):
File "CVE-2015-7547-poc.py", line 176, in <module>
tcp_thread()
File "CVE-2015-7547-poc.py", line 101, in tcp_thread
sock_tcp.bind((IP, 53))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
IP was set to 127.0.0.1.
How can I run the server and connect the client to it?
You could run netstat -lpn to list all listening sockets together with the PIDs of the processes that own them (-l listening only, -p show PID/program name, -n don't resolve names).
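For example, to see what is already bound to port 53 (assuming the net-tools package that provides netstat is installed):

sudo netstat -lpn | grep ':53 '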
To test for this vulnerability:

1. Clone the POC code: git clone https://github.com/fjserna/CVE-2015-7547.git
2. Set your DNS server to localhost (127.0.0.1) by editing /etc/resolv.conf
3. Run the POC DNS server: sudo python CVE-2015-7547-poc.py
4. Compile the client: make
5. Run the client: ./CVE-2015-7547-client

CVE-2015-7547-client segfaults when you are vulnerable.
CVE-2015-7547-client reports "CVE-2015-7547-client: getaddrinfo: Name or service not known" when not vulnerable.
See this Ubuntu Security Notice for more information, as well as the original Google blog post.
I have MongoDB running on a remote server. I can SSH to the remote server and connect to MongoDB from the shell on the remote machine. However, I have to connect to that MongoDB instance from my Python script.
I'm unable to connect to MongoDB directly from the shell on my local Linux machine using the command:
mongo <remote_ip>:27017
or through pymongo using:
connection = pymongo.Connection("<remote_ip>", 27017)
I get the error below when using pymongo:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.11-py2.6-linux-i686.egg/pymongo/connection.py", line 370, in __init__
self.__find_master()
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.11-py2.6-linux-i686.egg/pymongo/connection.py", line 605, in __find_master
raise AutoReconnect("could not find master/primary")
AutoReconnect: could not find master/primary
What could be causing this problem? Does it mean mongo is running on a port other than 27017, and if so, how can I find out which port it is running on?
Please help. Thank you.
You can use netstat -a -p on the machine running mongodb to see what port it's attached to. (netstat -a lists all connections and -p provides the name of the program owning the connection.) Also make sure the remote computer is allowing external connections on that port (the connections aren't being blocked by a firewall) and that mongodb is accepting remote connections.
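For example, run something like this on the remote server (a sketch; assumes the net-tools package):

# See which address and port mongod is listening on
sudo netstat -lpn | grep mongod

If the output shows 127.0.0.1:27017, mongod is only accepting local connections; its listen address is controlled by the bind_ip setting in the mongod config file (commonly /etc/mongod.conf or /etc/mongodb.conf, depending on the install).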