Deploy to FTP via Python script with GitLab CI

I'm new to GitLab. I am building my first pipeline to deploy the contents of my GitLab project to an FTP server with TLS encryption. I've written a Python script using ftplib to upload the files to the FTP server that works perfectly when I run it on my local Windows machine. The script uploads the full contents of the project to a folder on the FTP server. Now I'm trying to get it to work on GitLab by calling the script in the project's .gitlab-ci.yml file. Both the script and the yml file are in the top level of my GitLab project. The setup is extremely simple for the moment:
image: python:latest

deploy:
  stage: deploy
  script:
    - python ftpupload.py
  only:
    - main
However, the upload always times out with the following error message:
File "/usr/local/lib/python3.9/ftplib.py", line 156, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout,
File "/usr/local/lib/python3.9/socket.py", line 843, in create_connection
raise err
File "/usr/local/lib/python3.9/socket.py", line 831, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
Cleaning up file based variables
ERROR: Job failed: exit code 1
Here's the basic setup for establishing the connection in the Python script that works fine locally but fails on GitLab:
import ftplib
import ssl

class ReusedSslSocket(ssl.SSLSocket):
    def unwrap(self):
        pass

class MyFTP_TLS(ftplib.FTP_TLS):
    """Explicit FTPS, with shared TLS session"""
    def ntransfercmd(self, cmd, rest=None):
        conn, size = ftplib.FTP.ntransfercmd(self, cmd, rest)
        if self._prot_p:
            conn = self.context.wrap_socket(conn,
                                            server_hostname=self.host,
                                            session=self.sock.session)  # reuses the TLS session
            conn.__class__ = ReusedSslSocket  # don't close the reused SSL socket when transfers finish
        return conn, size

session = MyFTP_TLS(server, username, password, timeout=None)
session.prot_p()
I know there are other tools like lftp and git-ftp that I could use in GitLab CI, but I've built a lot of custom functionality into the Python script and would like to use it. How can I successfully deploy the script within GitLab CI? Thanks in advance for your help!

This requires that the GitLab Runner (which executes the pipeline) is able to open an FTPS connection to your FTP server.
Shared runners are likely locked down to connect only to the GitLab server itself (to reduce the attack surface).
To work around this, install your own runner and register it with your GitLab instance.
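For example, after registering a self-hosted runner you can route the deploy job to it with a tag. A minimal sketch, assuming the runner was registered with the (hypothetical) tag ftp-deploy:

deploy:
  stage: deploy
  tags:
    - ftp-deploy   # hypothetical tag chosen during gitlab-runner register
  script:
    - python ftpupload.py
  only:
    - main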

Related

Python: Use pymysql.connect() to Connect to MySQL Docker Container

Trying to build off this post and this post and this tutorial, I'm attempting to write a Python script that can connect to my MySQL Docker container. I'd like to use the pymysql library and then the pymysql.connect() command for ease of use. FYI, the host machine is Ubuntu 16.04.7, Docker version is 20.10.7.
Okay: Here's the docker-compose.yml section spinning up my MySQL container:
MySQL_DB:
  container_name: MyMYSQL
  image: 667ee8fb158e
  ports:
    - "52000:3306"
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: password123
  command: mysqld --general-log=1 --general-log-file=/var/lib/mysql/general-log.log
  volumes:
    - ./logs/mysql.log:/var/lib/mysql/general-log.log
I can't remember where I got this template, but the container is up and running just fine. Note that I'm exposing the container's TCP ports; all other SO posts mentioned that was required for remote connections.
Okay, here's the script I'm using:
# From:
# https://www.geeksforgeeks.org/connect-to-mysql-using-pymysql-in-python/
import pymysql

def mysqlconnect():
    # Connect to the MySQL database
    conn = pymysql.connect(host='172.20.0.2', user='me123', password="password123", db='DB01', port=3306)
    # Close the connection
    conn.close()

# Driver code
if __name__ == "__main__":
    mysqlconnect()
Docker Compose assigned the container the IP address 172.20.0.2, and I can ping it from the host machine (and from within the container).
Running the code generates this error:
me123#ubuntu01/home/me123$ sudo /usr/bin/python3 ./pythonScript.py
Traceback (most recent call last):
File "./pythonScript.py", line 27, in <module>
mysqlconnect()
File "./pythonScript.py", line 14, in mysqlconnect
conn = pymysql.connect(host='172.20.0.2',user='me123',password="password123",db='DB01',port=3306)
File "/usr/lib/python3/dist-packages/pymysql/__init__.py", line 90, in Connect
return Connection(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 699, in __init__
self.connect()
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 936, in connect
self._request_authentication()
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1165, in _request_authentication
auth_packet = self._process_auth(plugin_name, auth_packet)
File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1227, in _process_auth
raise err.OperationalError(2059, "Authentication plugin '%s' not configured" % plugin_name)
pymysql.err.OperationalError: (2059, "Authentication plugin 'b'caching_sha2_password'' not configured")
me123#ubuntu01/home/me123$
"Authentication plugin '%s' not configured" strongly suggests that when I run the script, my container is denying the connection. Sadly, there is nothing in the log to explain why this is. Google searches on pymysql.connect() pull up information on how to configure this command, but little to troubleshoot it. Does anyone see what I'm doing wrong?

Python Cassandra driver: connect to Docker container on server - cassandra.UnresolvableContactPoints: {}

I am running Cassandra in a Docker container on a custom server.
I start cassandra docker like this:
docker run --name cassandra -p 9042:9042 -d cassandra:latest
When I try to connect to the server via the Python Cassandra driver from DataStax like this:
from cassandra.cqlengine import connection
connection.setup(["http://myserver.myname.com"], "cqlengine", protocol_version=3)
This exception is thrown:
File "C:\LONG\PATH\TO\venv\lib\site-packages\cassandra\cqlengine\connection.py", line 106, in setup
self.cluster = Cluster(self.hosts, **self.cluster_options)
File "cassandra\cluster.py", line 1181, in cassandra.cluster.Cluster.__init__
cassandra.UnresolvableContactPoints: {}
python-BaseException
After hours of digging through Docker network permissions, I found the simple solution, so maybe this will help you too.
The simple solution is removing "http://" from the server url and changing my code from
connection.setup(["http://myserver.myname.com"], "cqlengine", protocol_version=3)
To
connection.setup(["myserver.myname.com"], "cqlengine", protocol_version=3)
I thought it was a Docker networking issue, and it took me many hours to pin it down to this simple mistake.

Cannot connect to aiohttp server serving over HTTPS

Background: I'm writing a web server using aiohttp with a websocket endpoint at /connect. The app was originally served via HTTP (and clients would connect to ws://host/connect). This worked locally using localhost, but when I deployed to Heroku, the app was served via HTTPS and it didn't allow clients to connect to an insecure websocket. Therefore, I tried to change my server so that it would use HTTPS locally. Now the client can't even complete the TLS handshake with the server. Here is my setup:
server.py
from aiohttp import web
import ssl
app = web.Application()
app.router.add_get('/', handle)
app.router.add_get('/connect', wshandler)
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.load_default_certs()
web.run_app(app, port=8443, ssl_context=ssl_context)
# web.run_app(app, port=8443) # original
When I run the server and try to navigate to https://localhost:8443/ (using Chrome 80), I get the following traceback:
Traceback (most recent call last):
File "/Users/peterwang/anaconda3/lib/python3.7/asyncio/sslproto.py", line 625, in _on_handshake_complete
raise handshake_exc
File "/Users/peterwang/anaconda3/lib/python3.7/asyncio/sslproto.py", line 189, in feed_ssldata
self._sslobj.do_handshake()
File "/Users/peterwang/anaconda3/lib/python3.7/ssl.py", line 763, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:1056)
I looked at ssl_context.get_ciphers() and found that it does include the cipher suites Chrome 80 uses, including the TLS 1.3 ones. I also used Wireshark to trace the communication between the client and my server. I see the TLS Client Hello, which advertises TLS 1.0 through TLS 1.3 and a multitude of ciphers that overlap with ssl_context.get_ciphers(). There is no response from the server.
Does anyone have any advice? (I am using Python 3.7, OpenSSL 1.1.1d, and aiohttp 3.6.2)
An SSL server has to be configured with a certificate matching the server's domain and the associated private key, typically using load_cert_chain. Your server is not configured with a server certificate and key, so it cannot offer any cipher suites that require one, which means it cannot offer any of the cipher suites the client typically expects. There are no shared ciphers, hence this error.
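A minimal sketch of that fix (cert.pem and key.pem are placeholder paths; for local testing a self-signed pair can be generated with openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem):

import ssl
from aiohttp import web

app = web.Application()
# ... register the / and /connect handlers as in the question ...

ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
# Without a certificate and key the server has nothing to offer during the
# handshake, which is what produces NO_SHARED_CIPHER.
ssl_context.load_cert_chain("cert.pem", "key.pem")  # placeholder paths

web.run_app(app, port=8443, ssl_context=ssl_context)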

Python DNS server address already in use

I'm doing a lab in malware analysis.
The task is to investigate the CVE-2015-7547 glibc vulnerability.
Google has already published proof-of-concept code. The PoC contains a client in C and a fake DNS server in Python. When I try to run the server, it throws an exception:
turbolab#sandbox:~/Desktop$ sudo python CVE-2015-7547-poc.py
Traceback (most recent call last):
File "CVE-2015-7547-poc.py", line 176, in <module>
tcp_thread()
File "CVE-2015-7547-poc.py", line 101, in tcp_thread
sock_tcp.bind((IP, 53))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
IP was set to 127.0.0.1.
How do I run the server and connect the client to it?
You could run netstat -lpn to list all listening sockets with their PIDs (-n means do not resolve names); whatever already holds port 53 (often a local resolver such as dnsmasq or systemd-resolved) needs to be stopped first.
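If you prefer to check from Python, here is a minimal sketch (not part of the PoC) that probes whether port 53 on 127.0.0.1 is free before starting the server:

import socket

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.bind(("127.0.0.1", 53))  # binding below port 1024 needs root, like the PoC itself
    print("port 53 is free")
except socket.error as exc:
    print("port 53 is already taken: %s" % exc)
finally:
    probe.close()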
To test for this vulnerability:

1. Clone the POC code: git clone https://github.com/fjserna/CVE-2015-7547.git
2. Set your DNS server to localhost (127.0.0.1) by editing /etc/resolv.conf
3. Run the POC DNS server: sudo python CVE-2015-7547-poc.py
4. Compile the client: make
5. Run the client: ./CVE-2015-7547-client

CVE-2015-7547-client segfaults when you are vulnerable.
CVE-2015-7547-client reports CVE-2015-7547-client: getaddrinfo: Name or service not known when not vulnerable.
See this Ubuntu Security Notice for more information, as well as the original Google blog post.

Unable to connect to mongodb running on a remote machine

I have MongoDB running on a remote server. I can SSH to the remote server and connect to MongoDB from the shell on the remote machine. However, I have to connect to that MongoDB instance from my Python script.
I'm unable to connect to MongoDB directly from the shell on my local machine running Linux using the command:
mongo <remote_ip>:27017
or through pymongo using
connection = pymongo.Connection("<remote_ip>", 27017)
I get the below error when using pymongo:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.11-py2.6-linux-i686.egg/pymongo/connection.py", line 370, in __init__
self.__find_master()
File "/usr/local/lib/python2.6/dist-packages/pymongo-1.11-py2.6-linux-i686.egg/pymongo/connection.py", line 605, in __find_master
raise AutoReconnect("could not find master/primary")
AutoReconnect: could not find master/primary
What could be causing this problem? Does it mean mongo is running on a port other than 27017, and if so, how can I find out which port it is running on?
Please help. Thank you.
You can use netstat -a -p on the machine running mongodb to see what port it's attached to (netstat -a lists all connections and -p shows the name of the program owning each one). Also make sure the remote computer allows external connections on that port (i.e. they aren't being blocked by a firewall) and that mongodb is accepting remote connections in the first place; by default mongod may bind only to 127.0.0.1, which is controlled by the bind_ip setting.
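To separate a network problem from a mongod configuration problem, here is a quick sketch that just tests raw TCP reachability of the default port (<remote_ip> is a placeholder, as in the question):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect(("<remote_ip>", 27017))
    print("TCP connect succeeded; something is listening on 27017")
except socket.error as exc:
    print("TCP connect failed (firewall or bind_ip?): %s" % exc)
finally:
    sock.close()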
