I have created a standard Django application with startproject, startapp, etc., and I want to deploy it on Heroku. When I was using Gunicorn I solved the directory issue like so:
web: gunicorn --pythonpath enigma enigma.wsgi
with the --pythonpath option. But now I am using Django Channels, so the server is Daphne. Is there an equivalent option? I have tried everything, but for the life of me I can't get the project to start: I always run into issues with the settings file, apps not being loaded, or an assortment of other cwd-related errors.
Following the Heroku Django Channels tutorial, I have tried:
daphne enigma.asgi:channel_layer --port 8888
This led to a variety of ModuleNotFoundErrors for asgi and settings.
I also tried
daphne enigma.enigma.asgi:channel_layer --port 8888
This led to a ModuleNotFoundError for enigma.settings.
I also tried
cd enigma && daphne enigma.asgi:channel_layer --port 8888
Which led to Django apps not ready errors.
I also tried moving the Procfile and Pipfiles into the project directory and deploying that subdirectory, but once again I got "apps not ready" errors.
I have now started temporarily using
cd enigma && python manage.py runserver 0.0.0.0:$PORT
But I know that you're not supposed to do this in production.
Try this:
Procfile
web: daphne enigma.asgi:application --port $PORT --bind 0.0.0.0 -v2
chatworker: python manage.py runworker --settings=enigma.settings -v2
settings.py
if DEBUG:
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
            },
        },
    }
else:
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
            },
        },
    }
asgi.py
import os

import django
from channels.routing import get_default_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'enigma.settings')
django.setup()
application = get_default_application()
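Note that get_default_application() resolves the ASGI_APPLICATION setting, so this setup assumes a routing module along these lines. This is a sketch for Channels 2; the ChatConsumer import and the ws/chat/ path are illustrative assumptions, not from the original project:

```python
# enigma/routing.py -- sketch; ChatConsumer and the URL pattern are assumptions
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from chat.consumers import ChatConsumer  # hypothetical consumer

application = ProtocolTypeRouter({
    # "http" is handled by regular Django views when omitted here
    "websocket": AuthMiddlewareStack(
        URLRouter([
            path("ws/chat/", ChatConsumer),
        ])
    ),
})
```

with ASGI_APPLICATION = "enigma.routing.application" in settings.py so the lookup succeeds.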
Python is version 3.7.6 and Django is version 3.0.8. I am trying to deploy on AWS, but adding container_commands gives an error. Here are my container_commands and DATABASES settings:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_compilemessages:
    command: "django-admin.py compilemessages"
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: config.wsgi:application
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: config.settings
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("RDS_HOST"),
        "NAME": os.environ.get("RDS_NAME"),
        "USER": os.environ.get("RDS_USER"),
        "PASSWORD": os.environ.get("RDS_PASSWORD"),
        "PORT": "5432",
    }
}
The problem seems to be caused by django-admin.py migrate. When I enter the django-admin.py migrate command, I get:
django.core.exceptions.ImproperlyConfigured: Requested setting
DATABASES, but settings are not configured. You must either define
the environment variable DJANGO_SETTINGS_MODULE or call
settings.configure() before accessing settings.
I get this message and I don't know how to fix it.
My GitHub: https://github.com/dopza86/air_bnb_clone
Any help is appreciated, thank you.
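The traceback above says DJANGO_SETTINGS_MODULE is not defined when django-admin.py runs. One hedged workaround, assuming the settings module really is config.settings as in the option_settings above, is to set the variable inline in each command rather than relying on the environment being visible to container_commands:

```yaml
# Sketch only: inline DJANGO_SETTINGS_MODULE so django-admin.py can find settings
container_commands:
  01_migrate:
    command: "DJANGO_SETTINGS_MODULE=config.settings django-admin.py migrate"
    leader_only: true
  02_compilemessages:
    command: "DJANGO_SETTINGS_MODULE=config.settings django-admin.py compilemessages"
```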
I'm having trouble with my server. This is a multitenant project; the copy on the Windows server is working fine, but the one on Ubuntu is giving me a "No module named 'memcache'" error, although it is installed. I know it is installed because I ran python3 manage.py runserver 0.0.0.0:8001 and the site worked fine when I accessed it from my browser. Gunicorn is pointing properly to my virtualenv and there are no errors in the logs when I restart the service. I'm quite desperate now.
My configuration:
CACHE_HOST = os.getenv('cache_host', '127.0.0.1')

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': f'{CACHE_HOST}:11211',
    },
    'estadisticos': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': f'{CACHE_HOST}:11211',
    }
}
So, is your memcached host running on localhost? If not, perhaps Gunicorn is started as a service and doesn't have the right value for the env var cache_host.
In any case, I'd suggest you add some prints at the end of your settings file,
plus one print to see whether you're using the same Python for Gunicorn and for your command line:
import sys # if not already imported in settings
print("my python is ", sys.executable)
print("CACHES", CACHES, file=sys.stderr)
or if you can't see stdout:
with open("/tmp/mylog.log", "a") as fout:  # note the "a" (append) mode
    print("my python is ", sys.executable, file=fout)
    print("CACHES", CACHES, file=fout)
Check that python3 manage.py runserver 0.0.0.0:8001 produces the same trace as running Gunicorn (delete the file /tmp/mylog.log between the two runs).
If the outputs are identical, but memcached is working for one and not the other, then you'd have to check whether the Django settings are overridden somewhere else.
I am trying to enable Channels v2 for a Django app deployed on Heroku.
The WSGI web dyno works perfectly, but the second web dyno for ASGI/Channels never gets the requests, so when I try to open a WebSocket connection I get a 404 response.
Here is the Procfile:
web: gunicorn app_xxx.wsgi --log-file -
web2: daphne app_xxx.routing:application --port $PORT --bind 0.0.0.0 -v2
I have also tried with Uvicorn like:
web: gunicorn app_xxx.wsgi --log-file -
web2: gunicorn app_xxx.asgi:application -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker
It seems like everything is in place; I just need to find a way to expose the wss endpoint.
In order to make Channels work on Heroku, you should first add a Redis add-on, then make sure the CHANNEL_LAYERS variable in your settings.py points to this Redis host. Below you can see an example:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [config('CHANNEL_LAYERS_HOST')],
        },
    },
}
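If config('CHANNEL_LAYERS_HOST') returns Heroku's REDIS_URL, note that channels_redis accepts either a full redis:// URL string or a (host, port) tuple in "hosts". A small sketch of turning the URL into an explicit tuple, in case you need that form (the default URL is an assumption for local development):

```python
# Sketch: parse a Heroku-style REDIS_URL into a (host, port) tuple
import os
from urllib.parse import urlparse

redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379")
parsed = urlparse(redis_url)
redis_host = (parsed.hostname, parsed.port or 6379)
print(redis_host)  # -> ('localhost', 6379) when REDIS_URL is unset
```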
I've just recently begun trying to wrap my head around Docker and have managed to get a development machine up and running. What I'm now trying to do is use the Visual Studio Code debugger with my Python application (specifically Django).
I've tried following the limited documentation of the Python extension for VS Code, which explains the parameters for remote debugging.
Dockerfile
FROM python:3.5.2
RUN apt-get update \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /code
EXPOSE 8000
WORKDIR /code
COPY requirements.txt /code
RUN /bin/bash --login -c "pip install -r requirements.txt"
ADD . /code
CMD []
docker-compose.yml
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    command: bash -c "./wait-for-it.sh db:5432 && python manage.py migrate && python manage.py runserver 0.0.0.0:8000 --noreload"
    depends_on:
      - db
launch.json
{
    "name": "Attach (Remote Debug)",
    "type": "python",
    "request": "attach",
    "localRoot": "${workspaceRoot}",
    "remoteRoot": "/code",
    "port": 8000,
    "secret": "debug_secret",
    "host": "localhost"
}
I've also added the line ptvsd.enable_attach("debug_secret", address = ('0.0.0.0', 8000)) to one of the project files
The Issue
Whenever I start the debugger nothing happens; it looks like VS Code is waiting for a breakpoint to be hit, but it never is.
Any ideas?
EDIT: Minor update
I have tried using different ports for the debugger as well as exposing the new ports in docker-compose.yml, without any success. It looks like the attach is successful, because the debugger doesn't crash, but no breakpoint is triggered. I'm really stuck on this one.
Solution
See answer from theBarkman.
I'll add that I was unable to use a secret to get this working. I did the following:
manage.py
import ptvsd

# Note: the port must be an int, not a string
ptvsd.enable_attach(secret=None, address=('0.0.0.0', 3000))
launch.json
{
    "name": "Attach Vagrant",
    "type": "python",
    "request": "attach",
    "localRoot": "${workspaceRoot}",
    "remoteRoot": "/code",
    "port": 3000,
    "secret": "",
    "host": "localhost"
}
I've had the most success remote debugging dockerized Django projects by throwing the ptvsd code into my manage.py file and turning off Django's live code reload.
Since Django essentially spins up two servers when you runserver (one for live code reloading and one for the actual app server), ptvsd seems to get confused about which server it should watch. I could sort of get it to work by waiting for attachment, wrapping the enable_attach call in try/except, or breaking into the debugger, but breakpoints would never work and I could only debug a single file at a time.
If you use the Django flag --noreload when spinning up the server, you can put the ptvsd code inside manage.py without all the waiting and breaking-into-the-debugger workarounds, and enjoy a much more robust debugging experience.
manage.py:
import ptvsd
ptvsd.enable_attach(secret='mah_secret', address=('0.0.0.0', 3000))
Run the server:
python manage.py runserver 0.0.0.0:8000 --noreload
Hope this helps!
I was trying to do something very similar to you and came across this issue/comment:
https://github.com/DonJayamanne/pythonVSCode/issues/252#issuecomment-245566383
It describes that, in order to use breakpoints, you need to call the ptvsd.break_into_debugger() function.
As an example:
import ptvsd
ptvsd.enable_attach(secret='my_secret',address = ('0.0.0.0', 3000))
ptvsd.wait_for_attach()
ptvsd.break_into_debugger()
As soon as I added this in my python script, my breakpoints worked. Hopefully it's of some use.
Edit Jan 24, 2017
In my Dockerfile I installed ptvsd:
FROM kaixhin/theano
RUN pip install ptvsd
WORKDIR /src
EXPOSE 3000
ENTRYPOINT ["python","src/app.py"]
COPY . /src
It looks like you're installing dependencies via your requirements.txt file; is ptvsd in your requirements.txt?
A couple of troubleshooting tips:
1) Make sure your debug port is open. Run this from your host:
nc -zv test.example.com 30302
2) Make sure your web server does not reload your app automatically; that will break the debugger connection. Put a print or log statement in code that runs at startup to make sure your app is not being loaded twice. The line below is for socketio running on Flask, but Django and other web servers have something similar.
socketio.run(app, host="0.0.0.0", port=5000, debug=True, use_reloader=False)
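For Django specifically, the analogous switch is runserver's --noreload flag, which keeps the app in a single process so the debugger connection survives (port and bind address here are the common defaults, adjust as needed):

```shell
# Disable Django's autoreloader so the debugger attaches to the real server process
python manage.py runserver 0.0.0.0:8000 --noreload
```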
I'm trying to use Channels for a Django app. I have installed all the required dependencies (I think), and I have listed 'channels' in INSTALLED_APPS in myapp/settings.py. However, when I run Daphne (daphne chat.asgi:channel_layer --port 8888) there is no error message on the console, but when I then run python manage.py runworker I get an error that says "channels.asgi.InvalidChannelLayerError: No BACKEND is configured for default". I'm a novice with Django; I have this asgi.py:
import os
import channels.asgi
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "chat.settings")
channel_layer = channels.asgi.get_channel_layer()
But in my myapp/settings.py I have specified a BACKEND for default. Can you please suggest a solution to this error? Here is my configuration (asgi_redis was current for my Django 1.10); I'm trying to run myapp on my local machine:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
        },
        "ROUTING": "myproject.myapp.routing.channel_routing",
    },
}
Add this to the top of your settings.py
import asgi_redis
Also, make sure that the asgi_redis package is installed:
pip install asgi_redis