Can't see the logs for New Relic on Python service

I'm trying to add the logs from my Python service, which runs in a Docker container.
I followed this tutorial: https://docs.newrelic.com/docs/apm/agents/python-agent/installation/standard-python-agent-install/
So I added this to my Dockerfile: RUN newrelic-admin generate-config defcf97e23c0621d66d085a35e56f93fac788774 newrelic.ini
and changed my entrypoint to run my Python app like this: NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program python run.py
When I look at the logs on my pod, I can see that the New Relic agent is being used:
2022-09-29 14:09:43,457 (6/MainThread) newrelic.core.agent INFO - New Relic Python Agent (8.2.0.181)
But when I look in New Relic, under the new service I created for it in Services - APM, I can't see it for some reason.
Am I looking in the wrong place, or am I missing something?
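For reference, here is a minimal sketch of how the Dockerfile wiring described above usually fits together; the base image, requirements.txt, and WORKDIR are assumptions, while generate-config, newrelic.ini, and run.py come from the question (license key replaced with a placeholder):

FROM python:3.10-slim
WORKDIR /app
COPY . .
# install app dependencies plus the New Relic agent package
RUN pip install -r requirements.txt newrelic
# generate newrelic.ini with your license key
RUN newrelic-admin generate-config <your-license-key> newrelic.ini
# point the agent at the generated config and wrap the app with the admin script
ENV NEW_RELIC_CONFIG_FILE=/app/newrelic.ini
ENTRYPOINT ["newrelic-admin", "run-program", "python", "run.py"]

One thing worth double-checking (an assumption, not something visible in the log line above) is that app_name in newrelic.ini matches the entity name you expect under Services - APM, since that value determines where the agent's data shows up.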

Related

Can I use a different file structure in Azure App Services?

I have a Python Flask API in Azure App Services. When deploying to it from VS Code, I get a "successful deployment" message, but the app still shows the default initial web page provided by Microsoft as a template for new App Services.
The current file structure looks something like this:
├───README.md
├───docs
├───data
└───src
    ├───main.py
    └───other_files.py
I changed the file structure to look like the following:
├───README.md
├───docs
├───data
├───app.py
└───src
    └───other_files.py
After deploying it like this, the application was able to start normally instead of displaying the boilerplate webpage from Microsoft. What do I need to do to have the app.py inside the src directory?
Can it actually be done?
I was able to solve it on my own by providing a custom startup command in the configuration of the App Service. It can be done from the Azure portal, in the "Configuration" section, or with the Azure CLI.
az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>"
And the custom command was:
gunicorn --bind=0.0.0.0 --timeout 600 --chdir src main:app
More details of this here: https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#customize-startup-command

Heroku: Python app works locally but doesn't work on remote after git push

I'm trying to deploy a Python app that uses Redis, and I'm currently facing a problem.
I have two git branches, one for production and one for dev, and two different Heroku apps (python-app, dev-python-app). My git remotes are:
git remote
heroku
heroku-test
I use the following commands to deploy to heroku-test to check that the app works correctly before promoting to prod:
git branch dev-python-app
git add .
git commit -m "commit msg"
git push heroku-test dev-python-app:master
It says that everything is OK ("remote: Verifying deploy... done.") but the app won't start.
If I check the logs with heroku logs --tail -a dev-python-app, I get:
2022-02-09T10:36:29.000000+00:00 app[api]: Build started by user *****
2022-02-09T10:36:53.638128+00:00 app[api]: Deploy 267f3889 by user ****
2022-02-09T10:36:53.638128+00:00 app[api]: Release v21 created by user ****
2022-02-09T10:37:02.000000+00:00 app[api]: Build succeeded
The strange thing is that if I run a one-off dyno using heroku run bash -a dev-python-app and then start the Python app with python3 main.py, it works perfectly.
Moreover, it may be useful to know that I tried these same steps before introducing Redis support to the app and it worked perfectly, so could Redis be the problem?
What do you think? Thank you
I fixed it.
The problem was that the worker process needed to be scaled:
heroku ps:scale worker=1 -a dev-python-app
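For anyone who hits the same thing: heroku ps:scale worker=1 only works if the Procfile declares a worker process type. A minimal sketch of such a Procfile at the repo root, assuming main.py is the entry point as in the question:

worker: python3 main.py

After pushing, heroku ps -a dev-python-app should list the worker dyno, and scaling it to 1 as above starts the app.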

Datadog Python log collection from self-hosted GitHub runner

I'm trying to collect logs from cron jobs running on our self-hosted GitHub runners, but so far I can only see the actual github-runner host logs.
I've created a self-hosted GitHub runner in AWS running on Ubuntu with a standard config.
We've also installed the Datadog Agent v7 with their install script and basic configuration, and added log collection from files using these instructions.
Our config for log collection is below.
curl https://s3.amazonaws.com/dd-agent/scripts/install_script.sh -o ddinstall.sh
export DD_API_KEY=${datadog_api_key}
export DD_SITE=${datadog_site}
export DD_AGENT_MAJOR_VERSION=7
bash ./ddinstall.sh
# Configure logging for GitHub runner
tee /etc/datadog-agent/conf.d/runner-logs.yaml << EOF
logs:
  - type: file
    path: /home/ubuntu/actions-runner/_diag/Worker_*.log
    service: github
    source: github-worker
  - type: file
    path: /home/ubuntu/actions-runner/_diag/Runner_*.log
    service: github
    source: github-runner
EOF
chown dd-agent:dd-agent /etc/datadog-agent/conf.d/runner-logs.yaml
# Enable log collection
echo 'logs_enabled: true' >> /etc/datadog-agent/datadog.yaml
systemctl restart datadog-agent
After these steps, I can see logs from our GitHub runner servers. However, on those runners we have several Python cron jobs running in Docker containers, logging to stdout. I can see those logs in the GitHub runner UI, but they're not available in Datadog, and those are the logs I'd really like to capture, so that I can extract metrics from them.
Do the Docker containers for the Python scripts need some special Datadog setup as well? Do they need to log to a file that the Datadog Agent registers as a log file, as in the setup above?
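Not a definitive answer, but a sketch of what is usually needed on top of the file-tailing setup above so the host Agent also collects container stdout/stderr; the service/source values and image name below are hypothetical:

# datadog.yaml: collect logs from all local Docker containers as well
logs_enabled: true
logs_config:
  container_collect_all: true

# give the dd-agent user access to the Docker socket, then restart the Agent
usermod -a -G docker dd-agent
systemctl restart datadog-agent

# optionally label the cron-job containers so their logs get a meaningful source/service
docker run -l com.datadoghq.ad.logs='[{"source": "python", "service": "my-cron-job"}]' my-cron-image

If that assumption holds, the containers can keep logging to stdout rather than writing to files for the Agent to tail.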

ModuleNotFoundError: No module named 'django' when deploying on Azure

I'm trying to deploy a Django web app to Microsoft Azure. It is deployed correctly by the pipeline on Azure DevOps, but I get the error message ModuleNotFoundError: No module named 'django' in the Azure portal and cannot reach my app via the URL.
The app also works properly locally.
Here is the whole error message: https://pastebin.com/mGHSS8kQ
How can I solve this error?
I understand you have tried the steps suggested in the SO thread Eyap shared, and a few of the things here already cover that. Kindly review these settings.
You can use this command instead - source /antenv3.6/bin/activate.
As a side note, antenv will be available only after a deployment has been initiated. Kindly check the "/" path from SSH and you should see a folder with a name starting with antenv.
Browse to .python_packages/lib/python3.6/site-packages/ or .python_packages/lib/site-packages/ and kindly verify that the path exists.
Review the application logs as well (in the /home/LogFiles folder) from Kudu: https://<yourwebapp-name>.scm.azurewebsites.net/api/logs/docker
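As a quick verification sketch from the App Service SSH console (the antenv path comes from the step above; the Python version in the paths is an assumption and may differ):

source /antenv3.6/bin/activate
python -c "import django; print(django.get_version())"
ls .python_packages/lib/python3.6/site-packages/ | grep -i django

If the import fails or the grep returns nothing, Django was not installed into the environment the app actually runs from.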
The App Service deployment engine automatically activates a virtual environment and runs
pip install -r requirements.txt
The requirements.txt file must be in the project root for dependencies to be installed.
For Django apps, App Service looks for a file named wsgi.py within your app code, and then runs Gunicorn using the following command, where <module_name> is the name of the folder that contains wsgi.py:
gunicorn --bind=0.0.0.0 --timeout 600 <module_name>.wsgi
If you want more specific control over the startup command, use a custom startup command, replace <module_name> with the name of the folder that contains wsgi.py, and add a --chdir argument if that module is not in the project root.
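For example, a custom startup command along these lines (the folder and module names are placeholders, not values from this post):

gunicorn --bind=0.0.0.0 --timeout 600 --chdir <project-folder> <module_name>.wsgi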
For additional details, please check out these documents:
Configure a Linux Python app for Azure App Service
Quickstart: Create a Python app in Azure App Service on Linux

Django running on an ECS task does not work. "Connection refused" or "No data response" when requesting the webapp

I have some problems running Django on an ECS task.
I want to have a Django webapp running on an ECS task and accessible to the world.
Here are the symptoms:
When I run an ECS task using Django's python manage.py runserver 0.0.0.0:8000 as the entry point for my container, I get a connection refused response.
When I run the task with Gunicorn using gunicorn --bind 0.0.0.0:8000 my-project.wsgi, I get a no data response.
I don't see any logs in CloudWatch, and I can't find any server logs when I SSH into the ECS instance.
Here are some of my settings related to that kind of issue:
I have set my ECS instance security group's inbound rules to All TCP | TCP | 0 - 65535 | 0.0.0.0/0 to be sure it's not a firewall problem, and I can confirm that because I can run a Ruby on Rails server on the same ECS instance perfectly.
In my container task definition I set one port mapping to 80:8000 and another to 8000:8000.
In my settings.py, I have set ALLOWED_HOSTS = ["*"] and DEBUG = False.
Locally, my server runs perfectly on the same Docker image when doing docker run -it -p 8000:8000 my-image gunicorn --bind=0.0.0.0:8000 wsgi, or the same with manage.py runserver.
Here is my Dockerfile for a Gunicorn web server.
FROM python:3.6
WORKDIR /usr/src/my-django-project
COPY my-django-project .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn","--bind","0.0.0.0:8000","wsgi"]
# CMD ["python","manage.py", "runserver", "0.0.0.0:8000"]
Any help would be greatly appreciated!
To help you debug:
What is the status of the task when you are trying to access your webapp?
Figure out which instance the task is running on and try docker ps on that ECS instance to find the running container.
If you can see the container running on the instance, try accessing your webapp directly on the server with a command like curl http://localhost:8000 or wget.
If your container is not running, try docker ps -a to see which one has just stopped, and check its logs with docker logs -f.
With this approach you take all the AWS firewall settings out of the picture, so you can see whether your container is configured correctly. I think it will help you track down the issue more easily.
Once you have confirmed that the container is running fine and you can reach it on localhost, you can work on the security group inbound/outbound rules (a condensed sketch of these commands follows below).
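Roughly, that debugging sequence on the ECS container instance would look like this (the container ID is hypothetical):

# is the Django/gunicorn container actually running?
docker ps
# if it is, hit the app locally, bypassing the load balancer and security groups
curl -v http://localhost:8000
# if it is not, find the container that just exited and read why it stopped
docker ps -a
docker logs -f <container-id>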
