So my team at work put together a simple Python service with a Dockerfile that performs some small tasks like pip-installing dependencies, creating a user, rewiring some directories, etc.
Everything worked fine locally -- we were able to build and run the Docker image on my laptop (a MacBook Pro running High Sierra, if that's relevant).
We attempted to build the project on OpenShift and kept getting a "Generic Build error" that told us to check the logs.
The log line it was failing on was:
RUN pip3 install pandas flask-restful xgboost scikit-learn nltk scipy
and there was no related error listed with it; the build literally just stopped at that point.
Specifically, it was failing when it got to installing xgboost. We removed xgboost from that line and the entire Dockerfile ran fine and the build completed successfully, so we know it was that one install causing the issue. Has anyone else encountered this and know why it happens?
We are pretty certain it wasn't any kind of memory or storage issue as the container had plenty of room for how small the service was.
Note: We were eventually able to use a coworker's template image to get our project up with the needed dependencies; I'm just curious why this was happening, in case we run into a similar issue in the future.
Not sure if this is an option for you, but when I need to build from a Dockerfile, I usually use a hosted service like Quay.io or DockerHub.
I have a Python script that runs fine in the desktop version of Power BI but not in the service. I use Anaconda, which I'm pretty sure comes prepackaged with all the packages needed for Power BI, like numpy, pandas, etc., and I also have a personal on-premises gateway installed. I am sure the problem is with the Python installation, as the refresh worked fine on the service on another computer I used previously. The exact error is "Failed to update data source credentials: ADO.NET: Python installation not found".
Googling brought me to this link, where the exact same issue occurred. However, I don't understand what steps that user took to fix it. Where can I find this "base python installation" if I uninstall Anaconda? I ran where.exe python in cmd, which returned the following paths:
C:\Users\Name\Anaconda3\python.exe
C:\Users\Name\AppData\Local\Microsoft\WindowsApps\python.exe
Is the second one some default Python installation on my machine? I don't remember installing it. I am thinking the gateway might detect it if I uninstall Anaconda and reinstall the gateway. Would the home directory just be the WindowsApps folder? Currently I have it set to C:\Users\Name\Anaconda3.
Edit: Upon further searching, it seems the WindowsApps\python.exe is just a stub that links to the Microsoft Store installation, so I don't think this is it. Do I need to install standalone Python without Anaconda, then?
It's my first post, so do tell me if you require more specifics.
Some details may not be too relevant here, but I want to be as detailed as possible with the timeline:
I had been using Jupyter for my .ipynb files for quite some time until I discovered TensorFlow. At first things were going OK after installing the module, but ever since I tried to use TensorFlow to detect and utilise my GPU, everything went south. I tried things like downloading some NVIDIA stuff that my laptop does in fact support, and eventually got TensorFlow to detect my GPU. But the moment I tried to train a CNN model, however simple the layers were, my kernel would crash. Eventually I used Kaggle/Colab as a temporary solution, but now I want to fix it.
After trying to fix the TensorFlow issue, or at least revert to when TensorFlow ran just fine on my CPU only, to no avail, I eventually decided to do a hard reset and deleted Python/Anaconda entirely from my computer.
After installing Anaconda again, I booted up Jupyter and saw a python3 ipykernel that was most likely preinstalled with Anaconda, and I could run a simple hello world just fine. However, I realised that after pip installing TensorFlow, my 'old' TensorFlow settings were still there: it could still detect my GPU, and so the kernel crashed yet again.
So I thought, why not just make a completely new environment so I can install a 100% fresh version of TensorFlow. Then I realised that Jupyter couldn't detect the new environment I made (I don't know if it's because of ipykernel, but I did do a pip install ipykernel in the correct environment and it's still not detected).
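(From what I understand, pip installing ipykernel alone may not be enough; the environment apparently also has to be registered as a kernel spec with something along the lines of

python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

where myenv is just a placeholder for the environment name. I'm not certain whether that's the missing step in my case, though.)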
My next solution was to try VS Code. It managed to detect the new environment, but when running a print('hello world') I was told: 'The kernel failed to start as a dll could not be loaded. View Jupyter log for further details.'
I'm really lost as to what to do now. All I want at this point is to use TensorFlow (CPU or GPU, I really don't care anymore) in either VS Code or Jupyter. As long as my files are .py, I should be able to run them in any environment just fine (though I didn't test the TensorFlow module in .py files, because I don't see the point in training a model from a .py file).
I use Windows 10, if that helps.
I'm sorry if I gave unnecessary details. I would appreciate advice on anything I'm doing wrong or misunderstanding, as well as possible solutions, and please do try to dumb it down for me with appropriate explanations if possible. Thanks. I can also be contacted on a voice call in Discord if you think typing it all out is too much of a hassle.
I am trying to get TensorFlow installed on a Raspberry Pi 4 running Manjaro. It is for the open-source BNN library Larq, which recommended Manjaro as an OS because it is 64-bit, as opposed to Raspbian. I have tried to install using yay from the Arch User Repository but got a couple of different errors, like "tensorflow/workspace.bzl: patch does not apply" and a failure to download. Any suggestions? I am very new to Manjaro.
As a side note, I am not particularly attached to Manjaro. If anyone has experience using Larq and the Larq Compute Engine on an RPi4 with a different OS, any insight there would be helpful as well.
Thank you!
I cannot help you with Manjaro; however, I used Ubuntu 20.04 (64-bit) on my RPi4. I suppose you need the RPi4 to deploy and run your BNNs. If I am correct, I can give you the following advice.
Please note that the RPi4 is needed only to run LCE models (*.tflite). To this end, you don't need to install TensorFlow on your RPi4.
For everything else (see below) you can use a regular Linux box.
To check whether everything is fine with your runtime environment (i.e. the RPi4), you can use your main Larq+LCE installation to convert one of your models into an LCE model and test it with the benchmark tool available here. For the RPi4+Ubuntu you should use lce_benchmark_model_aarch64.
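For the conversion step, here is a minimal sketch of what I do, assuming a trained Larq/Keras model already sits in a variable called model (the convert_keras_model helper comes from the larq-compute-engine pip package):

import larq_compute_engine as lce

# Convert the trained Larq/Keras model into an LCE-compatible flatbuffer.
# `model` is assumed to be a tf.keras model built with Larq quantized layers.
flatbuffer_bytes = lce.convert_keras_model(model)

with open("my_bnn.tflite", "wb") as f:
    f.write(flatbuffer_bytes)

# On the RPi4, the resulting file can then be fed to the benchmark tool, e.g.:
#   ./lce_benchmark_model_aarch64 --graph=my_bnn.tflite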
If you need to compile your own BNN-based applications for your RPi4, you can follow the build guide on the LCE website. I did it once, a long time ago: I used the LCE Docker image to get a working environment, and then, from inside the container, I followed the "Cross-compiling with Make" version of the ARM guide.
I hope this helps.
I use conda update --all to update my packages. Recently, I encountered an error with an Anaconda build, posted at "Error while trying to update and use scipy module in Anaconda". It seems the issue has now been fixed. Is there any way I can test all modules one by one by importing them and then unloading them? I ask because I have noticed that when an import doesn't work, I spend a lot of time figuring out the dependency, and then the package, that is causing it. For instance, a few minutes ago I found that PyCharm 2018.2.4 breaks with the latest version of matplotlib (3.0.0). Hence, it might be helpful to run some kind of test script after running conda update --all to ensure that all packages are indeed working, i.e. importable.
I did some research on this topic and found three sources.
First, Anaconda offers run_test.py (source: https://conda.io/docs/user-guide/tasks/build-packages/recipe.html). However, being new to the world of Python, I am unsure how to run a script in the Anaconda terminal.
Second, I found https://conda.io/docs/user-guide/install/test-installation.html. However, this just tells me the version of each package. I am not interested in the version; I need to know whether all packages import properly.
Finally, I found that there is a way to run a test script for all packages at https://anaconda-installer.readthedocs.io/en/latest/testing.html. However, I am unsure how to run make in the Anaconda terminal. I used make a long time ago when I worked with gcc in a Unix environment, but being new to Python, I am unsure how to handle this.
I'd appreciate any thoughts, or any test script, that could help verify two things:
a) whether all packages have been installed;
b) whether all packages are indeed importable. If a package import fails, the script should terminate with a readable error message highlighting the package whose import failed.
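To make the request concrete, below is a rough sketch of the kind of script I have in mind. It is only a sketch: conda package names don't always match import names (scikit-learn imports as sklearn, for example), and some conda packages (such as openssl) aren't Python modules at all, so the name map and skip list here are incomplete placeholders.

import importlib
import json
import subprocess
import sys

# Incomplete mapping for packages whose import name differs from the conda name.
NAME_MAP = {"scikit-learn": "sklearn", "pillow": "PIL", "pyyaml": "yaml"}
# Non-Python packages that cannot be import-tested at all (placeholder list).
SKIP = {"openssl", "ca-certificates", "vs2015_runtime"}

# Ask conda for the packages installed in the active environment.
out = subprocess.check_output(["conda", "list", "--json"])
names = [pkg["name"] for pkg in json.loads(out)]

failures = []
for name in names:
    if name in SKIP:
        continue
    module = NAME_MAP.get(name, name)
    try:
        importlib.import_module(module)
    except Exception as exc:  # broken imports may raise more than ImportError
        failures.append((name, str(exc)))

for name, msg in failures:
    print("FAILED to import %s: %s" % (name, msg), file=sys.stderr)
sys.exit(1 if failures else 0)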
I've been working recently on deploying a machine learning model as a web service. I used Azure Machine Learning Studio to create my own workspace ID and authorization token. Then I trained a LogisticRegressionCV model from sklearn.linear_model locally on my machine (using Python 2.7.13), and with the code snippet below I wanted to publish my model as a web service:
import numpy as np
from azureml import services

@services.publish('workspaceID', 'authorization_token')
@services.types(var_1=float, var_2=float)
@services.returns(int)
def predicting(var_1, var_2):
    # model is the LogisticRegressionCV classifier trained locally beforehand
    input = np.array([var_1, var_2]).reshape(1, -1)
    return model.predict_proba(input)[0][1]
where the input variable holds the data to be scored and the model variable contains the trained classifier. After defining the above function, I want to make a prediction on a sample input vector:
predicting.service(1.21, 1.34)
However following error occurs:
RuntimeError: Error 0085: The following error occurred during script
evaluation, please view the output log for more information:
And the most important message in the log is:
AttributeError: 'module' object has no attribute 'LogisticRegressionCV'
The error is strange to me because when I was using plain sklearn.linear_model.LogisticRegression everything was fine: I was able to make predictions by sending POST requests to the created endpoint, so I guess sklearn worked correctly.
After changing to LogisticRegressionCV it does not.
Therefore I wanted to update sklearn in my workspace.
Do you have any ideas how to do that? Or, a more general question: how do I install any Python module on Azure Machine Learning Studio so that I can use the predict functions of any model I developed locally?
Thanks
For anyone who came across this question like I did, hoping to install modules in AzureML notebooks: the current environments sit on Conda on the compute, so it's now as simple as executing
!conda env list
# conda environments:
#
base * /anaconda
azureml_py36 /anaconda/envs/azureml_py36
!conda install -n azureml_py36 -y <packages>
from within the notebook environment, or doing pretty much the same without the ! in the terminal environment.
For installing a Python module on Azure ML Studio, there is a section "Technical Notes" in the official document "Execute Python Script" that introduces it.
The general steps are as follows:
Create a Python project via virtualenv and activate it.
Install all the packages you want via pip in that virtual Python environment.
Package all files and directories under the path Lib\site-packages of your project as a zip file (see the sketch after these steps).
Upload the zip package into your Azure ML workspace as a dataset.
Follow the official document to import the Python module in your Execute Python Script.
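As a rough illustration of the packaging step, assuming the virtualenv from the first step lives at C:\myenv (a hypothetical path), the site-packages folder can be zipped with a couple of lines of Python:

import shutil

# Zip everything under the virtualenv's Lib\site-packages into my_packages.zip.
# "C:/myenv" is a hypothetical path to the virtualenv created earlier.
shutil.make_archive("my_packages", "zip", "C:/myenv/Lib/site-packages")

The resulting my_packages.zip is what gets uploaded to the workspace as a dataset.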
For more details, you can refer to the similar SO thread "Updating pandas to version 0.19 in Azure ML Studio", which even introduces how to update the version of Python packages installed by Azure.
Hope it helps.
I struggled with the same issue (error 0085).
I was able to resolve it by using the Azure ML code example available in their library:
Deployment of AzureML Web Services from Python Notebooks, which can be found at https://gallery.cortanaintelligence.com/Notebook/Deployment-of-AzureML-Web-Services-from-Python-Notebooks-4
I won't copy the whole code here, but I used it exactly as is and it worked with the Boston dataset. Then I used it with my own dataset, and I no longer got error 0085. I haven't tracked down the original error yet, but it's most likely due to some misbehaving character or indentation. Hope this helps.