Can't run a Flask server [duplicate] - python

This question already has answers here:
How to run a flask application?
(6 answers)
Closed yesterday.
I'm trying to follow this online tutorial for a Flask/Tensorflow/React application, but I'm having some trouble at the end now trying to run the Flask server.
Flask version: 2.2.3
Python version: 3.10.0
I've searched online for solutions, but nothing I've tried has worked. Here are the ways I've tried to run the application:
Not sure if this could be helpful in coming to a solution, but in case it is, here's my app.py file:
import os
import numpy as np
from flask import Flask, request
from flask_cors import CORS
from keras.models import load_model
from PIL import Image, ImageOps
app = Flask(__name__) # new
CORS(app) # new
@app.route('/upload', methods=['POST'])
def upload():
    # Disable scientific notation for clarity
    np.set_printoptions(suppress=True)
    # Load the model
    model = load_model("keras_Model.h5", compile=False)
    # Load the labels
    class_names = open("labels.txt", "r").readlines()
    # Create the array of the right shape to feed into the keras model
    # The 'length' or number of images you can put into the array is
    # determined by the first position in the shape tuple, in this case 1
    data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
    # Replace this with the path to your image
    image = Image.open("<IMAGE_PATH>").convert("RGB")
    # resizing the image to be at least 224x224 and then cropping from the center
    size = (224, 224)
    image = ImageOps.fit(image, size, Image.Resampling.LANCZOS)
    # turn the image into a numpy array
    image_array = np.asarray(image)
    # Normalize the image
    normalized_image_array = (image_array.astype(np.float32) / 127.5) - 1
    # Load the image into the array
    data[0] = normalized_image_array
    # Predicts the model
    prediction = model.predict(data)
    index = np.argmax(prediction)
    class_name = class_names[index]
    confidence_score = prediction[0][index]
    # Print prediction and confidence score
    print("Class:", class_name[2:], end="")
    print("Confidence Score:", confidence_score)
Does anyone know what I'm doing wrong here? Is there maybe something obvious I'm missing that's causing the problem? If there's any other info I can add that may be helpful, please let me know. Thanks.
Edit:
I've added the execution section at the end of my code:
if __name__ == '__main__':
    app.run(host="127.0.0.1", port=5000)
And now 'python -m flask run' does attempt to run the app, so the original question is answered. There does seem to be a subsequent problem now, though: it constantly returns a ModuleNotFoundError for tensorflow. I installed TensorFlow using 'pip3 install tensorflow' and the install reports success, but the app always fails with the module-not-found error. TensorFlow doesn't appear in the 'pip freeze' package list, so I'm now looking into how/why that is.
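In case it's useful, here's the quick check I've been running to see whether pip and the interpreter that runs Flask point at the same environment (just a diagnostic sketch, not part of the tutorial):

import sys

# Print which interpreter is running; if this path differs from the one pip3
# installs into, that would explain the ModuleNotFoundError despite a
# successful install.
print("Interpreter:", sys.executable)

try:
    import tensorflow as tf
    print("TensorFlow version:", tf.__version__)
except ModuleNotFoundError:
    print("TensorFlow is not importable from this interpreter.")

Installing with 'python -m pip install tensorflow' (so pip runs under the same interpreter) is the next thing I'm trying.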
Edit2: My question was flagged as already answered here, though I'm struggling to see how, as that post makes no mention at all of why 'flask run' (or any other way of trying to run a Flask app) might not work, or what to do when it doesn't, which is what this question is about. That post simply discusses the 'correct' way to run a Flask app, not how to get one running at all when it won't start.

If you don't want to use the flask command, you need to add code to run the development server instead.
if __name__ == "__main__":
app.run()

You're not using the flask command, but you forgot to add code to run the development server instead.
if __name__ == '__main__':
    app.run()

Related

DeepPavlov FAQ Bot is returning a 'collections.OrderedDict' object is not callable error

I'm trying to use Colab to build an FAQ bot with DeepPavlov. I modified a tutorial notebook that DeepPavlov has on their site; the only thing I changed is using my own sample dataset, yet I get the 'collections.OrderedDict' object is not callable error when calling:
answer=model_config(["help"])
answer
The full code for this (separated into cells) is:
!pip install -q deeppavlov
from deeppavlov import configs
from deeppavlov.core.common.file import read_json
from deeppavlov.core.commands.infer import build_model
from deeppavlov import configs, train_model
model_config = read_json(configs.faq.tfidf_logreg_en_faq)
model_config["dataset_reader"]["data_path"] = None
model_config["dataset_reader"]["data_url"] = "https://docs.google.com/spreadsheets/d/e/2PACX-1vSUFqHL9u_KkSCfw03bYCIuzfCzfOwXlvsQeTb8tMVaSDYohcHbfL8jNtV24AZiSLNnJJQs58dIsO8A/pub?gid=788315638&single=true&output=csv"
model_config
answer=model_config(["help"])
answer
Does anyone know the fix to help my bot run with the sample dataset URL I provided in my code? I'm new to bots, DeepPavlov, and Colab, so I'm facing a steep learning curve here.
Your code is missing the model training part: you are trying to call the config object instead of actually training a model and using it for prediction on your data.
However, this is not the only problem here. First, you should change the data_path value to a string, otherwise you will face problems here (you can try it yourself to check). Second, while trying to run your code with my corrections I faced a CSV-parsing error: please check your csv file again and make sure to get rid of empty rows in it. After you do that, this code should work correctly.
model_config = read_json(configs.faq.tfidf_logreg_en_faq)
model_config["dataset_reader"]["data_path"] = ''
model_config["dataset_reader"]["data_url"] = "your-dataset-link"
faq = train_model(model_config)
answer = faq(["help"])
answer

Worker timeout when preloading Pytorch model in Flask app on Render.com

In my app.py I have a function that uses a pretrained Pytorch model to generate keywords
@app.route('/get_keywords')
def get_keywords():
    generated_keywords = ml_controller.generate_keywords()
    return jsonify(keywords=generated_keywords)
and in ml_controller.py I have
def generate_keywords():
    model = load_keywords_model()
    output = model.generate()
    return output
This is working fine. Calls to /get_keywords correctly return the generated keywords. However this solution is quite slow since the model gets loaded on each call. Hence I tried to load the model just once by moving it outside my function:
model = load_keywords_model()

def generate_keywords():
    output = model.generate()
    return output
But now all calls to /get_keywords time out when I deploy my app to Render.com. (Locally it's working.) Strangely the problem is not that the model does not get loaded. When I write
model = load_keywords_model()
testOutput = model.generate()
print(testOutput)

def generate_keywords():
    output = model.generate()
    return output
a bunch of keywords are generated when I boot gunicorn. Also, all other endpoints that don't call ml_controller.generate_keywords() work without problems.
For testing purposes I also added a dummy function to ml_controller.py that I can call without problems
def dummy_string():
    return "dummy string"
Based on answers to similar problems I found, I'm starting Gunicorn with
gunicorn app:app --timeout 740 --preload --log-level debug
and in app.py I'm using
if __name__ == '__main__':
    app.run(debug=False, threaded=False)
However, the problem still persists.
The problem is that there's a bug that occurs for PyTorch models when Gunicorn is started with the --preload flag.
Render.com silently adds this flag and doesn't show it in the settings, which is why it took me days to figure this out. You can see all the settings Render.com adds by calling printenv in the console.
To resolve the issue, add a new environment variable
GUNICORN_CMD_ARGS: '--access-logfile - --bind=0.0.0.0:10000'
which overwrites Render.com's standard settings
GUNICORN_CMD_ARGS: '--preload --access-logfile - --bind=0.0.0.0:10000'
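If overriding the Gunicorn flags isn't possible, another option (just a sketch, assuming load_keywords_model is the same function already defined in ml_controller.py) is to load the model lazily on the first request, so the --preload flag no longer triggers the load:

# ml_controller.py (sketch): load the model once, on first use, inside the
# worker process instead of at import time.
_model = None

def _get_model():
    global _model
    if _model is None:
        _model = load_keywords_model()
    return _model

def generate_keywords():
    output = _get_model().generate()
    return output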

RasaNLUHttpInterpreter: takes from 1 to 4 positional arguments but 5 were given

I am using the RasaNLUHttpInterpreter as stated here to start my server. I pass the class all 4 required parameters (model name, token, server name, and project name). However, I always get an error saying that I am apparently passing 5 arguments (which I am not).
The error started occurring after I updated my Rasa Core and NLU to the latest versions. However, as far as I can tell from the docs, I am using the method correctly. Does anyone have an idea what I am doing wrong, or what's happening here?
Here is my run-server.py where I use the RasaNLUHttpInterpreter:
import os
from os import environ as env
from gevent.pywsgi import WSGIServer
from server import create_app
from rasa_core import utils
from rasa_core.interpreter import RasaNLUHttpInterpreter
utils.configure_colored_logging("DEBUG")
user_input_dir = "/app/nlu/" + env["RASA_NLU_PROJECT_NAME"] + "/user_input"
if not os.path.exists(user_input_dir):
    os.makedirs(user_input_dir)

nlu_interpreter = RasaNLUHttpInterpreter(
    'model_20190702-103405', None, 'http://rasa-nlu:5000', 'test_project')

app = create_app(
    model_directory=env["RASA_CORE_MODEL_PATH"],
    cors_origins="*",
    loglevel="DEBUG",
    logfile="./logs/rasa_core.log",
    interpreter=nlu_interpreter)
http_server = WSGIServer(('0.0.0.0', 5005), app)
http_server.serve_forever()
I am using:
rasa_nlu~=0.15.1
rasa_core==0.14.5
As already mentioned here, I have analyzed the problem in detail.
First of all, the method calls and the given link belong to a Rasa version that is deprecated. After updating to the latest Rasa version, which splits up core and NLU, the project was refactored to fit the corresponding documentation.
After rebuilding the bot with the exact same setup, no errors were thrown and the bot worked as expected.
We came to the conclusion that this must have been a problem particular to threxx's workstation.
If someone else reaches this point, they are welcome to post here so that we can help.

Sagemaker invoke endpoint returned value type?

I have a classifier working with XGBoost in SageMaker, but despite the training set only having 1s and 0s in the first column (CSV file; the first column is assumed to be the target in SageMaker XGBoost), the algorithm returns a decimal.
The first 3 records return 1.08, 0.34, and 0.91. I'd assume probabilities, but 1.08? If these are rounded to 0 or 1 then they're all correct, but why is it returning non-class values?
Furthermore, the class only contains a predict method; is a predict-probability method not possible without using your own model?
The code calling this is:
from flask import Flask
from flask import request
import boto3
from sagemaker.predictor import csv_serializer
import sagemaker
app = Flask(__name__)
#app.route("/")
def hello():
numbers = request.args.get('numbers')
#session
boto_session = boto3.Session(profile_name="profilename",
region_name='regionname')
#sagemaker session
sagemaker_session = sagemaker.Session(boto_session=boto_session)
#endpoint
predictor = sagemaker.predictor.RealTimePredictor(endpoint="modelname",
sagemaker_session=sagemaker_session)
predictor.content_type="text/csv"
predictor.serializer=csv_serializer
predictor.deserializer=None
#result
result=predictor.predict(numbers)
result=result.decode("utf-8")
return f'Output: {result}'
if __name__ == "__main__":
app.run(debug=True, port=5000)
The flask section works fine, I can retrieve predictions at 127.0.0.1:5000.
SageMaker version 1.3.0. Version 1.9.0 does not work: it requires fcntl, which is Mac/Linux only (see this on their repo); apparently it's fixed on PyPI, but I've tried and the version doesn't change or fix the issue, so I'm stuck on 1.3.0 until they resolve it. Version 1.3.0 does not have a predict_proba method.
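For now I'm just thresholding the returned values myself, roughly like this (a sketch based on the code above, where result is the decoded CSV string returned by predictor.predict):

# Interpret the endpoint's raw output as continuous scores and threshold at 0.5
# to recover 0/1 class labels; with the values above (1.08, 0.34, 0.91) this
# gives 1, 0, 1.
scores = [float(value) for value in result.split(',') if value.strip()]
labels = [1 if score >= 0.5 else 0 for score in scores]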

Install issues with 'lr_utils' in python

I am trying to complete some homework in a DeepLearning.ai course assignment.
When I do the assignment on the Coursera platform everything works fine; however, when I try the same imports on my local machine it gives me an error:
ModuleNotFoundError: No module named 'lr_utils'
I have tried resolving the issue by installing lr_utils but to no avail.
There is no mention of this module online, and now I've started to wonder whether it's proprietary to deeplearning.ai.
Or can we resolve this issue in some other way?
You will be able to find the lr_utils.py and all the other .py files (and thus the code inside them) required by the assignments:
Go to the first assignment (i.e. Python Basics with numpy), which you can always access whether you are a paid user or not,
and then click on the 'Open' button in the menu bar above.
Then you can include the code of the modules directly in your code.
As per the answer above, lr_utils is part of the deep learning course and is a utility to download the data sets. It should readily work with the paid version of the course, but in case you 'lost' access to it, I noticed this GitHub project has the lr_utils.py as well as some data sets:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments
Note:
The Chinese website links did not work when I looked at them; maybe the server storing the files expired. I did see that this GitHub project had some datasets as well as the lr_utils file, though.
EDIT: The link no longer seems to work. Maybe this one will do?
https://github.com/knazeri/coursera/blob/master/deep-learning/1-neural-networks-and-deep-learning/2-logistic-regression-as-a-neural-network/lr_utils.py
Download the datasets from the answer above.
And use this code (it's better than the above since it closes the files after usage):
import numpy as np
import h5py

def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
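A quick sanity check after saving the function (assuming the datasets folder sits next to your script):

# Load the data and confirm the array shapes look reasonable.
train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes = load_dataset()
print(train_set_x_orig.shape, train_set_y_orig.shape)
print(test_set_x_orig.shape, test_set_y_orig.shape)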
"lr_utils" is not official library or something like that.
Purpose of "lr_utils" is to fetch the dataset that is required for course.
option (didn't work for me): go to this page and there is a python code for downloading dataset and creating "lr_utils"
I had a problem with fetching data from provided url (but at least you can try to run it, maybe it will work)
option (worked for me): in the comments (at the same page 1) there are links for manually downloading dataset and "lr_utils.py", so here they are:
link for dataset download
link for lr_utils.py script download
Remember to extract dataset when you download it and you have to put dataset folder and "lr_utils.py" in the same folder as your python script that is using it (script with this line "import lr_utils").
The way I fixed this problem was by:
1. Clicking File -> Open -> you will see the lr_utils.py file (it does not matter whether you have the paid or free version of the course).
2. Opening the lr_utils.py file in Jupyter Notebooks and clicking File -> Download (store it in your own folder), then rerunning the module imports. It works like magic.
I did the same process for the datasets folder.
You can download the train and test datasets directly here: https://github.com/berkayalan/Deep-Learning/tree/master/datasets
And you need to add this code to the beginning:
import numpy as np
import h5py
import os
def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
I faced a similar problem and followed these steps:
1. Import the following libraries:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
2. Download train_catvnoncat.h5 and test_catvnoncat.h5 from either of the links below:
https://github.com/berkayalan/Neural-Networks-and-Deep-Learning/tree/master/datasets
or
https://github.com/JudasDie/deeplearning.ai/tree/master/Improving%20Deep%20Neural%20Networks/Week1/Regularization/datasets
3. Create a folder named datasets and paste these two files into it.
[Note: the datasets folder and your source code file should be in the same directory.]
4. Run the following code:
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
5. Load the data:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Check the datasets:
print(len(train_set_x_orig))
print(len(test_set_x_orig))
Your dataset is ready; you can check the length of the train_set_x_orig and train_set_y variables. For mine, they were 209 and 50.
I was able to download the dataset directly from the Coursera page.
Once you open the Coursera notebook, go to File -> Open and a file browser will be displayed.
There the notebooks and datasets are listed; you can go to the datasets folder and download the required data for the assignment. The lr_utils.py file is also available for download.
Below is the code; just save it in a file named "lr_utils.py" and then you can use it.
import numpy as np
import h5py

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
If your code file cannot find the newly created lr_utils.py file, just add this code:
import sys
sys.path.append("full path of the directory where you saved the lr_utils.py file")
Here is the way to get the dataset, as per @ThinkBonobo:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments/datasets
Write a lr_utils.py file, as in the answer above by @StationaryTraveller, and put it into any directory on sys.path.
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        ....
BUT make sure that you delete the 'datasets/' prefix, because now the name of your data file is just train_catvnoncat.h5.
Restart the kernel and good luck.
I would add to the other answers that you can save the lr_utils script to disk and import it as a module using the importlib.util functions, in the following way.
The code below came from a general thread about importing functions from external files into the current session:
How to import a module given the full path?
### Source the load_dataset() function from a file
import importlib.util

# Specify a name (I think it can be whatever) and the path to the lr_utils.py script locally on your PC:
util_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_utils.py")
# Make a module
load_utils = importlib.util.module_from_spec(util_script)
# Execute it on the fly
util_script.loader.exec_module(load_utils)
# Load your function
load_utils.load_dataset()
# Then you can use your load_dataset() coming from the above specified 'module' called load_utils
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_utils.load_dataset()
# This could be a general way of calling different user-specified modules, so I did the same for the rest of the
# neural network functions and put them into a separate file to keep my script clean.
# Just remember that Python treats it like a module, so you need to prefix the function name with the 'module' name, e.g.:
# d = nnet_utils.model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=1000, learning_rate=0.005, print_cost=True)
nnet_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_nnet.py")
nnet_utils = importlib.util.module_from_spec(nnet_script)
nnet_script.loader.exec_module(nnet_utils)
That has been the most convenient way for me to source functions/methods from different files in Python so far.
I come from an R background, where you can call the one-line function source() to bring an external script's contents into your current session.
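If you want something even closer to R's source(), the same pattern can be wrapped in a small helper (a sketch, not part of the course code; the paths in the usage comment follow the ones used above):

import importlib.util

def source(path, name="sourced_module"):
    # Load a .py file from an arbitrary path and return it as a module object,
    # similar in spirit to R's source().
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Example usage:
# lr_utils = source("D:/analytics/Deep_Learning_AI/functions/lr_utils.py", "lr_utils")
# train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()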
The above answers didn't help me, and some of the links had expired.
So, lr_utils is not a pip library but a file in the same notebook workspace on the Coursera website.
You can click on "Open", and it'll open the explorer where you can download everything that you would want to run in another environment.
(I used this on a browser.)
This is how I solved mine: I copied the lr_utils file and pasted it into my notebook, then downloaded the dataset by zipping the files and extracting them with the following code. Note: run the code on the Coursera notebook and select only the zipped file in the directory to download.
!pip install zipfile36
import zipfile

zf = zipfile.ZipFile('datasets/train_catvnoncat_h5.zip', mode='w')
try:
    zf.write('datasets/train_catvnoncat.h5')
    zf.write('datasets/test_catvnoncat.h5')
finally:
    zf.close()
