Uploading large files to AWS Elastic Beanstalk / nginx - python

I have an application that uploads a file which works fine on Heroku and on local developer machines - but on AWS EB, the upload is interrupted and doesn't complete.
I've set the nginx directives as follows:
.ebextensions/00_project.config
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 2000M;
      client_body_buffer_size 2000M;
option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: server:application
.platform/nginx/conf.d/proxy.conf
client_max_body_size 2000M;
client_body_buffer_size 2000M;
.platform/nginx/00_myconf.config
container_commands:
  01_reload_nginx:
    command: "service nginx reload"
This is running Python 3.8 and uses Dash, a Plotly/React framework: https://dash.plotly.com/introduction
As mentioned, it only fails on AWS, so it appears to be a system config issue. To confirm this, I uploaded a large file to a public bucket and had the app read from the bucket instead of the upload, and it worked fine.
I've exhausted all options, so any help is much appreciated!

After prompting from kgiannakakis, I checked the logs and found a "web: MemoryError" line in the web.stdout.log file.
So I upgraded the EB instance to something a bit bigger and that fixed the issue... so essentially I was running out of memory and needed to pay AWS more money to get more memory.
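For reference, the bigger instance type can also be pinned in configuration instead of the console. A minimal sketch, assuming the classic launch-configuration namespace and t3.large as the larger type (both assumptions, not part of the original fix):
.ebextensions/01_instance.config
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t3.large
As a side note, client_body_buffer_size controls how much of the request body nginx keeps in memory before spooling to a temporary file, so a 2000M value can keep huge uploads in RAM; the MemoryError here, though, came from the Python process itself.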

Related

MLflow artifact logging and retrieval on a remote server

I am trying to set up an MLflow tracking server on a remote machine as a systemd service.
I have an SFTP server running and have created an SSH key pair.
Everything seems to work fine except the artifact logging. MLflow does not seem to have permission to list the artifacts saved in the mlruns directory.
I create an experiment and log artifacts in this way:
import mlflow

uri = 'http://192.XXX:8000'
mlflow.set_tracking_uri(uri)
mlflow.create_experiment('test', artifact_location='sftp://192.XXX:_path_to_mlruns_folder_')
experiment = mlflow.get_experiment_by_name('test')
with mlflow.start_run(experiment_id=experiment.experiment_id, run_name=run_name) as run:
    mlflow.log_param(_parameter_name_, _parameter_value_)
    mlflow.log_artifact(_an_artifact_, _artifact_folder_name_)
I can see the metrics in the UI and the artifacts in the correct destination folder on the remote machine. However, in the UI I receive this message when trying to see the artifacts:
Unable to list artifacts stored
under sftp://192.XXX:path_to_mlruns_folder/run_id/artifacts
for the current run. Please contact your tracking server administrator
to notify them of this error, which can happen when the tracking
server lacks permission to list artifacts under the current run's root
artifact directory.
I cannot figure out why, as the mlruns folder has drwxrwxrwx permissions and all the subfolders have drwxrwxr-x. What am I missing?
UPDATE
Looking at it with fresh eyes, it seems weird that it tries to list files through sftp://192.XXX:; it should just look in the folder _path_to_mlruns_folder_/_run_id_/artifacts. However, I still do not know how to get around that.
The problem seems to be that by default the systemd service is run by root.
Specifying a user and creating an SSH key pair for that user to access the same remote machine worked.
[Unit]
Description=MLflow server
After=network.target

[Service]
Restart=on-failure
RestartSec=20
User=_user_
Group=_group_
ExecStart=/bin/bash -c 'PATH=_yourpath_/anaconda3/envs/mlflow_server/bin/:$PATH exec mlflow server --backend-store-uri postgresql://mlflow:mlflow@localhost/mlflow --default-artifact-root sftp://_user_@192.168.1.245:_yourotherpath_/MLFLOW_SERVER/mlruns -h 0.0.0.0 -p 8000'

[Install]
WantedBy=multi-user.target
_user_ and _group_ should be the same as those listed by ls -la in the mlruns directory.
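For completeness, a rough sketch of preparing that user's access (host and placeholders as above; the pysftp requirement is my assumption about MLflow's SFTP artifact store, so check it against your MLflow version):
# run as the service user so the key lands in its ~/.ssh
sudo -u _user_ ssh-keygen -t rsa
sudo -u _user_ ssh-copy-id _user_@192.168.1.245
# confirm non-interactive SFTP access works for that user
sudo -u _user_ sftp _user_@192.168.1.245
# the SFTP artifact store needs pysftp in the server's environment (assumption)
pip install pysftp
After editing the unit file, reload systemd and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart _service_name_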

How do I make my nginx load all static files from django docker app over HTTPS?

I have nginx installed on the server and my Django application running inside a Docker container. While my app loads fine over HTTP, it doesn't load any static files (CSS) over HTTPS. What changes should I make in the nginx conf or the Docker app to solve this?
Since my application is in Python (a Django app), I collected all the static files using "python manage.py collectstatic -c" and then mounted the directory containing them (/apps/test_app/static) on the host server running both Docker and nginx.
The command I used to mount it was
docker run --net=host -v /static:/apps/test_app/static -d -p 8000:8000 image_id
Here /static is the directory on the host server, whereas /apps/test_app/static is the directory inside the container.
Once this was done, I added the following lines to the nginx.conf or ssl.conf file:
location /static/ {
    # root /var/www/app/static/;
    alias /static/;
    autoindex off;
}
Followed by an nginx restart, and that solved the problem.
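If the HTTPS page still references http:// assets after this, the usual companion change on the Django side is to make sure the static settings match the collected directory and that Django knows TLS is terminated at nginx. A minimal sketch with assumed values (not taken from the answer; adjust paths and headers to your deployment):
# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/apps/test_app/static'
# only needed if Django builds absolute URLs behind the TLS-terminating proxy;
# requires nginx to send: proxy_set_header X-Forwarded-Proto $scheme;
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')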

Elastic Beanstalk Static Folder 403 Error

I'm stuck trying to solve a "403 Forbidden" error in my Elastic Beanstalk Flask application. I have set up my python.config file as below:
option_settings:
  aws:elasticbeanstalk:container:python:staticfiles:
    "/static/": "/static/"
    "/templates/": "/templates/"
commands:
  01_set_file_permissions:
    command: "chmod 777 /opt/python/current/app/static/"
The static folder was initially giving a 404 but the static files section of python.config fixed that. My problem is I can't get the file permissions to be recognised on the server. It always returns a 403 error. Please help.
I worked out the fix. The option_settings section was incorrect in that the path to the folder was not given as a relative path. Also, the /static/ folder was already set via a hard-coded value in the AWS console at:
AWS > Elastic Beanstalk > > Configuration > Software Configuration > Virtual Path. I needed to change the value of /static/ to static/ there.
Finally the commands section was not required at all. The fixed python.config file (that works) is as follows:
option_settings:
  aws:elasticbeanstalk:container:python:staticfiles:
    "/templates/": "templates/"

Git server over http using nginx in windows

I'm trying to set up a private git server over HTTP(S) using nginx on Windows, but without any success yet.
I want it to be like GitHub, but local and with very simple functionality. For example, when you go to localhost/path/to/your/repo.git, the site will only display the source tree listing, nothing else; you have to run all git commands from the git console on your machine.
I've seen many posts about doing something like this, for example:
how to serve git through http via nginx with user password
https://gist.github.com/massar/9399764
...
Actually, I need to know: do I really need nginx? I mean, can I make it work with just Python server code?
I'm really confused about what I have to do and what I'm missing, because it's my first time dealing with something like this and I don't know nginx very well.
Can anyone help me get it done?
NOTE: I don't want to use other tools like GitLab; I want to code it from scratch.
EDIT
I read about git-http-backend in the git docs. I think now I should configure nginx to work with git-http-backend.exe (because I'm on Windows).
Here is my nginx conf file:
location / {
    root "C:/Users/One/Desktop/work";
    fastcgi_param SCRIPT_FILENAME "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe";
    fastcgi_param PATH_INFO $uri;
    fastcgi_param GIT_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT "C:/Users/One/Desktop/work";
    include fastcgi_params;
}
I have a project.git bare repository in C:/Users/One/Desktop/work, but git always returns "not found" when I try to clone it.
git clone http://localhost/project.git test
Cloning into 'test'...
fatal: repository 'http://localhost/project.git/' not found
And this is the nginx log file:
127.0.0.1 - - [24/Jun/2015:14:04:10 +0300] "GET /project.git/info/refs?service=git-upload-pack HTTP/1.1" 404 168 "-" "git/1.9.5.msysgit.0"
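For context, one likely gap in the config above (my reading of it, not a confirmed fix): the location block sets fastcgi_param values but never passes the request to a FastCGI process, so nginx just looks for the path under root and returns 404. nginx cannot launch git-http-backend.exe itself; it needs a FastCGI wrapper listening on a socket it can fastcgi_pass to. A rough sketch, assuming such a wrapper is already running on 127.0.0.1:9000:
location / {
    root "C:/Users/One/Desktop/work";
    # hand the request to the FastCGI wrapper running git-http-backend.exe (assumed address)
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe";
    fastcgi_param PATH_INFO $uri;
    # git-http-backend reads GIT_HTTP_EXPORT_ALL, not GIT_EXPORT_ALL
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT "C:/Users/One/Desktop/work";
    include fastcgi_params;
}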

Can I use the uwsgi protocol to call http?

Here's a data flow:
http <--> nginx <--> uWSGI <--> python webapp
I guess there's an HTTP-to-uwsgi translation in nginx, and uwsgi-to-HTTP in uWSGI.
What if I want to directly call uWSGI to test an API in a webapp?
Actually, I'm using Pyramid. I just configure [uwsgi] in the .ini and run uWSGI, but I want to test whether uWSGI serves the webapp normally; the uWSGI socket is not directly reachable over HTTP.
Try using uwsgi_curl
$ pip install uwsgi-tools
$ uwsgi_curl 10.0.0.1:3030 /path
or, if you need to make more requests, try uwsgi_proxy from the same package
$ uwsgi_proxy 10.0.0.1:3030
Proxying remote uWSGI server 10.0.0.1:3030 "" to local HTTP server 127.0.0.1:3030...
so you can browse it locally at http://127.0.0.1:3030/.
If your application only allows a certain Host header, you can specify the host name as well:
$ uwsgi_curl 10.0.0.1:3030 host.name/path
$ uwsgi_proxy 10.0.0.1:3030 -n host.name
If the application has static files, you can redirect such requests to your front server using the -s argument. You can also specify a different local port if needed.
From your question I'm assuming you want to run your WSGI-compliant app directly with uWSGI and open an HTTP socket. You can do so by configuring your uwsgi.ini (or whatever the file is named) with
http=127.0.0.1:8080
uWSGI will now open an HTTP socket that listens on port 8080 for incoming connections from localhost (see the documentation: http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html).
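A minimal uwsgi.ini along those lines might look like this (the module and entry-point names are placeholders, mirroring the command-line example below):
[uwsgi]
http = 127.0.0.1:8080
module = yourapp:wsgi_entry_point
master = true
processes = 2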
Alternatively you can directly start your process from the command-line with the http-parameter:
$ uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point
If you use unix sockets to configure uWSGI, nginx is able to communicate with that socket via the uwsgi protocol (http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).
Keep in mind that if you usually serve static content (CSS, JavaScript, images) through nginx, you will need to set that up too when you run uWSGI directly. But if you only want to test a REST API, this should work for you.
First, consider these questions:
On which port is uWSGI running?
Is uWSGI running on your or on a remote machine?
If it's running on a remote machine, is the port accessible from your computer? (iptables rules might forbid external access)
If you made sure you have access, you can just call http://hostname:port/path/to/uWSGI for direct API access.
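A quick reachability check before hitting the API (host, port, and path are placeholders):
$ nc -zv hostname port
$ curl http://hostname:port/path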
I know this is an old question, but I just needed this and found that this docker+nginx solution works best for me:
cat > /tmp/nginx.conf << EOF
events {}
http {
  server {
    listen 8000;
    location / {
      include uwsgi_params;
      uwsgi_pass 127.0.0.1:3031;
    }
  }
}
EOF
docker run -it --network=host --rm --name uswgi-proxy -v /tmp/nginx.conf:/etc/nginx/nginx.conf:ro nginx
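With that throwaway nginx container running, plain HTTP requests to port 8000 are translated to the uwsgi protocol on 127.0.0.1:3031 (the addresses used in the config above):
$ curl http://127.0.0.1:8000/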
