I want to write a simple Python application and put it in a Docker container with a Dockerfile. My Dockerfile is:
FROM ubuntu:saucy
# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app
# Set the default command to execute
CMD ["python", "main.py"]
In my Python application I only want to connect to the database. main.py looks something like this:
import MySQLdb as db
connection = db.connect(
    host='localhost',
    port=3306,
    user='root',
    passwd='password',
)
When I build the Docker image with:
docker build -t myapp .
and run it with:
docker run -i myapp
I get this error:
_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
What is the problem?
The problem is that you've never started the database: you need to explicitly start services in most Docker images. But if you want to run two processes in Docker (the DB and your Python program), things get a little more complex. You either have to use a process manager like supervisor, or be a bit cleverer in your start-up script.
To see what I mean, create the following script, and call it cmd.sh:
#!/bin/bash
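# Start MySQL in the background, then run the app in the foreground.
# (mysqld may need a moment before it accepts connections; a short
# sleep before starting main.py can help.)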
mysqld &
python main.py
Add it to the Dockerfile:
FROM ubuntu:saucy
# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app
# Set the default command to execute
COPY cmd.sh /cmd.sh
RUN chmod +x /cmd.sh
CMD ["/cmd.sh"]
Now build and try again. (Apologies if this doesn't work, it's off the top of my head and I haven't tested it).
Note that this is not a good solution: mysql will not have signals proxied to it, so it probably won't shut down properly when the container stops. You could fix this by using a process manager like supervisor, but the easiest and best solution is to use separate containers. You can find stock containers for mysql and also for python, which would save you a lot of trouble. To do this:
Take the mysql installation stuff out of the Dockerfile
Change localhost in your Python code to mysql or whatever you want to call your MySQL container.
Start a MySQL container with something like docker run -d --name mysql mysql
Start your container linked to the MySQL container, e.g.: docker run --link mysql:mysql myapp (the --link option must come before the image name), as sketched below
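For example, main.py would then become something like this (a sketch: it assumes the link alias mysql and that the MySQL container was started with a root password matching the one below):
import MySQLdb as db

# 'mysql' is the link alias from --link mysql:mysql; Docker adds it to
# the container's /etc/hosts, so it resolves to the database container
connection = db.connect(
    host='mysql',
    port=3306,
    user='root',
    passwd='password',
)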
I'm trying to make a simple MS SQL Server call from Python using Docker. The SQL connection can be established if I run the Python script directly, but it won't work from Docker.
My code is below
Dockerfile
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install freetds-dev -y \
&& apt-get install freetds-bin -y \
&& apt-get install tdsodbc -y \
&& apt-get install --reinstall build-essential -y
RUN echo "[FreeTDS]\n\
Description = FreeTDS Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so" >> /etc/odbcinst.ini
#Pip command without proxy setting
RUN pip install -r requirements.txt
COPY src/ .
CMD ["python", "./producer.py"]
producer.py
import pyodbc
connP = pyodbc.connect('driver={FreeTDS};'
                       'server={MYSERV01\SQLEXPRESS};'
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')
requirements.txt
kafka-python
pyodbc==4.0.28
Error message
I referred to this article and followed it. I searched online for resolutions and tried several steps, but nothing helped. I'm pretty new to Docker and have no experience with Python, so any help would be really appreciated. Thanks in advance!
In your pyodbc.connect call, try giving the server as '0.0.0.0' instead of any other value.
If you want to debug it from inside the container, then comment out the last CMD line of your Dockerfile.
Build your Docker image
docker build -f Dockerfile -t achu-docker-container .
Run your Docker Container
docker run -it achu-docker-container /bin/bash
This will place you inside the container, much like SSH-ing into a different machine.
Go to your WORKDIR (the shell may already start there):
cd /code
python ./producer.py
What do you get from the above? (If you install an editor with apt-get install vim, you can interactively edit the producer.py file and fix your problem from inside the running container.)
Then you can move your changes to your source Dockerfile and build a new image and container with it.
I was trying to connect to a local SQL Server database. I referred to many articles and figured out that the following works:
the server should be host.docker.internal,<port_no> -- this was the catch. With a dedicated database, where SQL Server runs on a different machine from the Docker host, the server name can be given directly; but when the container and the database are on the same machine, host.docker.internal is what works. Check the port number under the TCP/IP properties (IPAll) in SQL Server Configuration Manager.
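For example, the connection in producer.py would then become something like this (a sketch: 1433 is only an assumed port, so substitute the one from your configuration, and the exact server/port syntax can vary between ODBC drivers):
import pyodbc

# host.docker.internal resolves to the Docker host machine from inside
# a Docker Desktop container; 1433 is an assumed port number
connP = pyodbc.connect('driver={FreeTDS};'
                       'server=host.docker.internal,1433;'
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')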
I am trying to follow the Flask/React tutorial here, on a plain Windows machine.
On Windows 10, without considering Docker, I have the tutorial working.
On Windows 10 under a Docker setup (Ubuntu-based containers and docker-compose), I do not:
The React server works under Docker.
The Flask server won't successfully build.
The Dockerfile for the Flask server is:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
RUN pip3 install flask
#RUN pip3 install venv
RUN mkdir -p /app
WORKDIR /app
COPY . /app
#RUN python3 -m venv venv
RUN cd api/venv/Scripts
RUN flask run --no-debugger
This fails at the very last line:
The command '/bin/sh -c flask run --no-debugger' returned a non-zero code: 1
Note that I find myself in the unenviable position of trying to use/teach myself all of Docker, venv, React, and Flask at the same time. The venv commands are commented out because I'm not even sure venv makes sense in a Docker container (but what would I know?) and also because the pip3 install venv command halts with a non-zero code: 2.
Any advice is welcome.
There are two obvious issues in the Dockerfile you show.
Each RUN command runs in a clean environment starting from the last known state of the image. Settings like the current directory (and also environment variable values) are not preserved when a RUN command exits. So RUN cd ... starts the RUN command from the old directory, changes to the new directory, and then doesn't remember that; the following RUN command starts again from the old directory. You need the WORKDIR directive to actually change directories.
The RUN commands also run during the build phase. They won't publish network ports or have access to databases; in a multi-container Compose setup they can't connect to other containers. You probably want to run the Flask app as the main container CMD.
So you can update your Dockerfile to look like:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
# WORKDIR creates the directory as well
WORKDIR /app
# requirements.txt includes "flask"
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
COPY . ./
# Use WORKDIR, not `RUN cd ...`
WORKDIR /app/api/venv/Scripts
# Use CMD, not `RUN ...`
CMD flask run --no-debugger
It is in fact common to just not use a virtual environment in Docker; the Docker image is isolated from any other Python installation and so it's safe to use the "system" Python package tree. (I am a little suspicious of the venv directory in there, since virtual environments can't be transplanted into other setups very well.)
Note that I find myself in the unenviable position of trying to use/teach myself all of Docker, venv, react, and flask at the same time.
Put Docker away for another day. It's not necessary, especially during the development phase of your application. If you read through SO questions, there are a lot of people trying to contort Docker into acting just like a local development environment, which it's really not designed for. There's nothing wrong with locally installing the tools you need to do your job, especially when they're very routine tools like Python and Node.
I believe Flask can't find your app when you run your container (especially as docker build attempts to run it). If you only want the container to run your app, use CMD in the Dockerfile; then, when you run the image, it will start your Flask app first thing.
I have two Docker containers running on an Ubuntu 16.04 machine. One container runs a MySQL server, and the other holds a dockerized Python script set to run as a cron job every minute that loads data into MySQL. How can I connect the two so the Python script loads data into the MySQL container? I have an error showing up:
Here are my relevant commands:
The MySQL container runs without issue:
docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=yourPassword --name icarus -d mysql_docker_image
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
927e50ca0c7d mysql_docker_image "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3306->3306/tcp, 33060/tcp icarus
The second container holds cron and the Python script:
#run the container without issue
sudo docker run -t -i -d docker-cron
#exec into it to check logs
sudo docker exec -i -t container_id /bin/bash
#check logs
root@b149b5e7306d:/# cat /var/log/cron.log
Error:
I have the following error showing up, which I believe has to do with a wrong host address:
Caught this error: OperationalError('(pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'localhost\' ([Errno 99] Cannot assign requested address)")',)
Python Script:
from traffic.data import opensky
from sqlalchemy import create_engine
#from sqlalchemy_utils import database_exists, create_database
import sqlalchemy
import sys
import gc
#connection and host information
host = 'localhost'
db='icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db) # create engine connection
version= sys.version_info[0]
#functions to upload data
def upload(df, table_name):
    df.to_sql(table_name, con=engine, index=False, if_exists='append')
    engine.dispose()
    print('SUCCESSFULLY LOADED DATA INTO STAGING...')
#pull data drom api
sv = opensky.api_states()
final_df = sv.data
#quick column clean up
print(final_df.head())
final_df=final_df.rename(columns = {'timestamp':'time_stamp'})
#insert data to staging
try:
    upload(final_df, 'flights_stg')
except Exception as error:
    print('Caught this error: ' + repr(error))
del(final_df)
gc.collect()
I'm assuming the error is the use of 'localhost' as my address? How would I go about resolving something like this?
More information:
MySQL Dockerfile:
FROM mysql
COPY init.sql /docker-entrypoint-initdb.d
Python Dockerfile:
FROM ubuntu:latest
WORKDIR /usr/src/app
#apt-get install -y build-essential -y python python-dev python-pip python-virtualenv libmysqlclient-dev curl&& \
RUN \
apt-get update && \
apt-get install -y build-essential -y git -y python3.6 python3-pip libproj-dev proj-data proj-bin libgeos++-dev libmysqlclient-dev python-mysqldb curl&& \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt ./
RUN pip3 install --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt
RUN pip3 install --upgrade setuptools
RUN pip3 install git+https://github.com/xoolive/traffic
COPY . .
# Install cron
RUN apt-get update
RUN apt-get install -y cron
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/simple-cron
# Add shell script and grant execution rights
ADD script.sh /script.sh
RUN chmod +x /script.sh
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/simple-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
Can you share your Dockerfile or compose file for the MySQL container? Yes, the problem is related to using localhost as the host. You must use the Docker service name as the host; within a Docker network the service name works as DNS. For example, if your docker-compose.yml looks like:
services:
  mydb:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: root
You must then use mydb instead of localhost.
It looks like a user-defined bridge network was the way to go here, as recommended. I solved the issue with:
docker network create my-net
docker create --name mysql \
--network my-net \
--publish 3306:3306 \
-e MYSQL_ROOT_PASSWORD=password \
mysql:latest
docker create --name docker-cron \
--network my-net \
docker-cron:latest
docker start each of them, and using the --name value as the host worked perfectly; see the sketch below.
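With both containers attached to my-net, the only change needed in the Python script is the host value (a sketch: 'mysql' is the container name given via --name above):
from sqlalchemy import create_engine

# 'mysql' is resolved by Docker's embedded DNS on the my-net network
host = 'mysql'
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db)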
I have written a Dockerfile for my Python application.
The requirements are:
Install and start the MySQL server.
Run the application in screen in detached mode.
Below is my Dockerfile:
FROM ubuntu:16.04
# Update OS
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip screen npm vim net-tools
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY src /usr/src/app/src
COPY ./src/nsd.ini /etc/
RUN pwd
RUN cd /usr/src/app
RUN service mysql start
RUN /bin/bash -c "chmod +x src/run_demo_app.sh && src/run_demo_app.sh"
Below is the content of the bash script:
$ cat src/run_demo_app.sh
screen -dm bash -c "sleep 10; python -m src.app";
The problem is that MySQL doesn't start; I need to start it manually from inside the container.
Also, the screen session becomes dead and the application does not start. Running the script manually works fine.
This is just an understanding gap, nothing else. Note the following issues in your Dockerfile:
Never use service command
RUN service mysql start
Docker doesn't use an init system, so never use a service command inside a container: the service is started during the build step's shell session and is gone by the time the image is run.
Don't put everything in the same container
You should not put everything in the same container: MySQL should run in its own container and Python in its own.
Use official images
You don't need to reinvent the wheel; use official images as much as possible. You should be using the official mysql and python images in your case.
Use docker-compose when multiple services are needed
Since you require multiple services, use docker-compose (see the sketch at the end of this answer).
No need to use screen in Docker
Screen is used when you want your process to keep running even if your SSH session disconnects, so it is not needed in Docker. If you run your docker run or docker-compose up command with an additional -d flag, your container will automatically be launched in the background.
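For example, a minimal docker-compose.yml along these lines might look like the following (a sketch: the service names and the password are placeholders, and the app service assumes a Dockerfile for the Python code in the same directory):
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
  app:
    build: .
    depends_on:
      - db
The application would then connect to the host db instead of localhost, and docker-compose up -d starts both services in the background.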
I am using Docker to start a MySQL service in a container. After the container starts, I want to insert some data into the database automatically via a Python script. This is my Dockerfile:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
ADD . /app
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip
RUN pip3 install --user -r requirements.txt
RUN python3 init.py
The last line runs the script to add some data to the database, but at that point the MySQL service has not started yet, so it fails when running docker build. How do I accomplish this? Thanks in advance.
According to the docs, the MySQL entrypoint will automatically execute any .sh, .sql, and .sql.gz files found in /docker-entrypoint-initdb.d. So, create a shell script that executes your Python script for you. If you call this file 01-my-script.sh, your Dockerfile will look like this:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
RUN apt-get update && apt-get install -y \
python3 \
python3-pip
# Copy requirements in first and install them (so the cache won't be invalidated)
COPY ./requirements.txt ./requirements.txt
RUN pip3 install --user -r requirements.txt
# Copy SQL Fixture
COPY ./01-my-script.sh /docker-entrypoint-initdb.d/01-my-script.sh
RUN chmod +x /docker-entrypoint-initdb.d/01-my-script.sh
# Copy the rest of your project
COPY . .
And your script will only contain:
#!/bin/sh
python3 /app/init.py
Now, when you bring up your container, your script will execute. Monitor the execution of the running container with docker logs -f <container_name> to make sure your script is running.
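For reference, /app/init.py might look something like this (a sketch: it assumes pymysql is listed in requirements.txt, and it connects over the local socket because the temporary server started during initialization may not accept TCP connections; the database and table names are hypothetical):
import pymysql

# Connect through the local socket; the root password matches
# MYSQL_ROOT_PASSWORD in the Dockerfile
conn = pymysql.connect(
    user='root',
    password='123456',
    unix_socket='/var/run/mysqld/mysqld.sock',
)
with conn.cursor() as cur:
    cur.execute('CREATE DATABASE IF NOT EXISTS mydb')
    cur.execute('CREATE TABLE IF NOT EXISTS mydb.items '
                '(id INT PRIMARY KEY, name VARCHAR(50))')
    cur.execute("INSERT INTO mydb.items VALUES (1, 'first row')")
conn.commit()
conn.close()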