I have code like this:

Unix = 'linux'
Mac = 'darwin'

if sys.platform == Unix:
    ...  # do this
elif sys.platform == Mac:
    ...  # do this
I have this check on sys.platform because AWS is Unix-based, so if sys.platform is 'darwin' (Mac), then I am running locally.
I'm running into trouble when I try to dockerize this application, because the Docker image is Linux-based: the first branch of the if/else runs even though I'm building the Docker container locally.
Is it possible to set sys.platform in a Dockerfile?
edit:
Problem Statement:
I am trying to dockerize an AWS Lambda function. To do so, I need to test the Lambda function locally.
My Lambda function layout looks like this:

app/
    lambda_function1/
        database.py
        helper.py
        functions/
            lambda_function1.py
The main purpose of this lambda function is to read data
from the production database, and then predict some value
based on the data.
database.py

import helper
...

class DB:
    def __init__(self):
        self.secrets = helper.get_secrets()
        self.db_name = self.secrets.get('DB', '')
        self.db_host = self.secrets.get('Host', '')
        self.db_password = self.secrets.get('Password', '')
        ...
helper.py

import sys
import boto3
....

def get_secrets():
    secrets = {}
    if sys.platform == constants.MAC_PLATFORM:
        secrets = local_secrets()
        return secrets
    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager',
                            region_name='us-west-2')
    secrets = get_aws_secrets()
As you can see, if sys.platform is 'darwin', then the secrets will be local secrets. If sys.platform is 'linux', then the secrets will be pulled from AWS.
For some reason, I am unable to connect to the database with the AWS secrets in my local Docker build; I get a TCP/IP error. I think this is due to some configuration that AWS has and my local environment doesn't, so I would like to start by working against the local database in Docker and use the get_local_secrets method to obtain secrets.
Any ideas?
I'd use the environment variable suggestion and run locally by setting some values at runtime:
import os

if os.environ.get('LOCAL_TEST', 'false').lower().strip() == 'true':
    secrets = local_secrets()
else:
    secrets = get_aws_secrets()  # use aws secrets
And run your container like:
docker run -e LOCAL_TEST=true your_image
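A sketch of how the question's get_secrets helper might look with that check in place of the platform check (LOCAL_TEST, local_secrets and get_aws_secrets are the names used above; the exact wiring is an assumption):

import os

def get_secrets():
    # Assumption: the container is started with `docker run -e LOCAL_TEST=true ...`
    # whenever local secrets should be used, regardless of platform.
    if os.environ.get('LOCAL_TEST', 'false').lower().strip() == 'true':
        return local_secrets()
    # Otherwise fall back to AWS Secrets Manager, as in the original helper.
    return get_aws_secrets()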
I started using Prefect recently, and I noticed I can add decorators to some methods and then submit them to Prefect, which runs my script remotely on an agent server. I'm wondering how it is possible for a decorator to somehow serialize a method for remote execution.
Example Prefect python script
import sys

import prefect
from prefect import flow, task, get_run_logger

from utilities import AN_IMPORTED_MESSAGE


@task
def log_task(name):
    logger = get_run_logger()
    logger.info("Hello %s!", name)
    logger.info("Prefect Version = %s 🚀", prefect.__version__)
    logger.debug(AN_IMPORTED_MESSAGE)


@flow()
def log_flow(name: str):
    log_task(name)


if __name__ == "__main__":
    name = sys.argv[1]
    log_flow(name)
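As a rough illustration of the general mechanism (a minimal sketch, not Prefect's actual implementation): a decorator can replace a function with an object that records the callable, so that calling it produces a description of the work which can be serialized and shipped to another process.

import pickle

class Task:
    # Wraps a function so that "calling" it captures the call instead of running it.
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return {"task": self.fn.__name__, "args": args, "kwargs": kwargs}

def task(fn):
    return Task(fn)

@task
def log_task(name):
    print("Hello %s!" % name)

captured = log_task("world")      # {'task': 'log_task', 'args': ('world',), 'kwargs': {}}
payload = pickle.dumps(captured)  # a payload an agent process could receive and execute

Prefect's own machinery is considerably more involved than this, but a decorator intercepting the call is the same general idea.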
I am new to Flask, so please do not mind if the problem sounds trivial.
I have a Flask app (not written by me) which works fine from the local machine, and also from remote machines when I am directly connected to the network.
But when I connect to the app over VPN, it doesn't work. I am able to SSH into that machine as well as access other servers running on it. It is a physical machine, not a VM.
app = Flask(__name__)

def loadAppVariables():
    mc = pylibmc.Client(["127.0.0.1"], binary=True,
                        behaviors={"tcp_nodelay": True,
                                   "ketama": True})
    app.mc = mc

def initApp():
    app.fNet = {some object }
    mc = pylibmc.Client(["127.0.0.1"], binary=True,
                        behaviors={"tcp_nodelay": True,
                                   "ketama": True})
    app.mc = mc

@app.route('/classify', methods=['POST'])
def classify():
    # We will save the file to disk for possible data collection.
    imagefile = request.files['imagefile']
    processImageFile(imagefile)

@app.route('/')
def index():
    return render_template('cindex.html', has_result=False)

@app.before_request
def before_request():
    loadAppVariables()

@app.teardown_request
def teardown_request(exception):
    storeAppVariables()

if __name__ == '__main__':
    initApp()
    app.run(debug=False, host='0.0.0.0')
I am running the latest Flask version and Python 2.7. Can anyone please suggest what may be wrong here?
It seems that you want to access a locally served Flask app from another network.
Binding to the 0.0.0.0 IP lets you connect to Flask from different machines, but only within the same network range; if your IP isn't in the same range, this fails.
If you want to access your web page from the internet, you should consider deploying your web app.
So I have just started using the Kubernetes API server, and I tried this example:
from kubernetes import client, config

def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

if __name__ == '__main__':
    main()
This worked, but it returned the pods on my local minikube. I want to get the pods from the Kubernetes server here:
http://192.168.237.115:8080
How do I do that?
When I do kubectl config view, I get this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/piyush/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/piyush/.minikube/apiserver.crt
    client-key: /home/piyush/.minikube/apiserver.key
I know this is for the local cluster I set up. I want to know how to modify this to make API requests to the Kubernetes server at http://192.168.237.115:8080.
You can actually create a simple API wrapper. This way you can pass in different YAML configuration files, which I imagine may have different hosts.
import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config

class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        with open(self.configuration_yaml, 'r') as f:
            if self._configuration_yaml is None:
                self._configuration_yaml = yaml.safe_load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()

# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
Also another neat reference in libcloud. https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py.
Good luck! Hope this helps! :)
I have two solutions for you:

1. [preferred] Configure your kubectl file (i.e. ~/.kube/config). Once kubectl works with your cluster, the Python client should automatically work with load_kube_config. See here for configuring kubectl: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/

2. You can configure the Python client directly. For a complete list of configuration options, look at: https://github.com/kubernetes-client/python-base/blob/8704ce39c241f3f184d01833dcbaf1d1fb14a2dc/configuration.py#L48
You may need to set some of those configuration values for your client to connect to your cluster. For example, if you don't have any certificates or SSL enabled:
from kubernetes import client, configuration

def main():
    configuration.host = "http://192.168.237.115:8080"
    configuration.api_key_prefix['authorization'] = "Bearer"
    configuration.api_key['authorization'] = "YOUR_TOKEN"

    v1 = client.CoreV1Api()
    ...
You may need to set other configuration values such as username, api_key, etc. That's why I think it would be easier if you follow the first solution.
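Note that newer versions of the Python client expect these settings on a Configuration instance passed to an ApiClient, rather than as module-level attributes; a sketch along those lines (the host and token are taken from the question, and the need for a bearer token is an assumption about your cluster):

from kubernetes import client

configuration = client.Configuration()
configuration.host = "http://192.168.237.115:8080"
# Only needed if the API server requires a bearer token:
configuration.api_key_prefix['authorization'] = 'Bearer'
configuration.api_key['authorization'] = 'YOUR_TOKEN'

v1 = client.CoreV1Api(client.ApiClient(configuration))
print(v1.list_pod_for_all_namespaces(watch=False))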
config.load_kube_config() takes context as a parameter. If passed None (the default) then the current context will be used. Your current context is probably your minikube.
See here:
https://github.com/kubernetes-incubator/client-python/blob/436351b027df2673869ee00e0ff5589e6b3e2b7d/kubernetes/config/kube_config.py#L283
config.load_kube_config(context='some context')
If you are not familiar with Kubernetes contexts:
Kubernetes stores your configuration under ~/.kube/config (the default location). In it you will find a context definition for every cluster you may have access to, and a field called current-context defines your current context.
You can issue the following commands:
kubectl config current-context to see the current context
kubectl config view to view all the configuration
Can you show me the file ~/.kube/config?
If you update the API server in it, the Python kubernetes module will automatically pick up the new API server you nominated.
- cluster:
    certificate-authority: [Update real ca.crt here]
    server: http://192.168.237.115:8080
There are other changes in ~/.kube/config as well; you'd better get the config from the remote Kubernetes server directly.
After successfully configuring it for the remote Kubernetes API server, you should be able to run kubectl and get the deployments, daemons, etc.
Then you should be fine running the Python kubernetes SDK as well.
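For example (a sketch, assuming the updated kubeconfig and its credentials are valid), the same load_kube_config call should now talk to the remote server, e.g. to list deployments:

from kubernetes import client, config

config.load_kube_config()  # now reads the updated ~/.kube/config
apps = client.AppsV1Api()
for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
    print(dep.metadata.namespace, dep.metadata.name)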
I created some custom classifiers locally and then tried to deploy on Bluemix an app that classifies an image based on the classifiers I made.
When I try to deploy it, it fails to start.
import os
import json
from os.path import join, dirname
from os import environ
from watson_developer_cloud import VisualRecognitionV3
import time

start_time = time.time()

visual_recognition = VisualRecognitionV3(VisualRecognitionV3.latest_version,
                                         api_key='*************')

with open(join(dirname(__file__), './test170.jpg'), 'rb') as image_file:
    print(json.dumps(visual_recognition.classify(images_file=image_file,
                                                 threshold=0,
                                                 classifier_ids=['Angle_971786581']),
                     indent=2))

print("--- %s seconds ---" % (time.time() - start_time))
Even if I try to deploy a simple print, it fails to deploy, but the starter app I get from Bluemix, or a Flask tutorial I found online (https://www.ibm.com/blogs/bluemix/2015/03/simple-hello-world-python-app-using-flask/), deploys just fine.
I'm very new to web programming and cloud services, so I'm totally lost.
Thank you.
Bluemix is expecting your python application to serve on a port. If your application isn't serving some kind of response on the port, it assumes the application failed to start.
import os
from flask import Flask

app = Flask(__name__)

# On Bluemix, get the port number from the environment variable PORT.
# When running this app on the local machine, default the port to 8080.
port = int(os.getenv('PORT', 8080))

@app.route('/')
def hello_world():
    return 'Hello World! I am running on port ' + str(port)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
It looks like you're writing your code to just execute once and stop. Instead, make it do the work when someone hits your URL, as shown in the hello_world() function above.
Think about what you want to happen when someone goes to YOUR_APP_NAME.mybluemix.net
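For example (a sketch only, reusing the visual_recognition client and classifier ID from the question; the route name and upload handling are assumptions), the classification could run when someone POSTs an image to your app instead of once at startup:

import json
import os
from flask import Flask, request

app = Flask(__name__)
port = int(os.getenv('PORT', 8080))

# visual_recognition is assumed to be created exactly as in the question above.

@app.route('/classify', methods=['POST'])
def classify():
    image_file = request.files['imagefile']
    result = visual_recognition.classify(images_file=image_file,
                                         threshold=0,
                                         classifier_ids=['Angle_971786581'])
    return json.dumps(result, indent=2)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)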
If you do not want your application to be a WEB application, but instead just execute once (a background worker application), then use the --no-route option at the end of your cf push command. Then look at the logs using cf logs appname --recent to see the output of your application.
https://console.ng.bluemix.net/docs/manageapps/depapps.html#deployingapps
The main problem was the watson-developer-cloud module, which gave me an error that it could not be found.
I downgraded to Python version 2.7.12, installing it for all users.
I modified runtime.txt and requirements.txt (requirements.txt was possibly not needed).
I staged with Diego, using --no-route and the set-health-check APP_NAME none command.
Those fixed the problem, but I still get an exit status 0.
When you deploy an app on Bluemix, you should have a requirements.txt which includes the packages your app uses.
So check your requirements.txt; maybe you are missing
watson_developer_cloud
The requirements.txt should then look like this:
Flask==0.10.1
watson_developer_cloud
Fabric has a hosts setting to specify which computers to SSH into.
Amazon Web Services has more of a dynamic inventory that can be queried in Python using tools like boto.
Is there a way to combine these two services? Ideally, I wanted something as simple as Ansible's approach of an inventory file, using an external file like ec2.py.
More specifically, is there a prebaked solution for this use case? Ideally, I would like to run something straightforward like this:
from fabric.api import env, run, task
import ec2

env.roledefs = ec2.Inventory()

@task
def command():
    run("lsb_release -a")
And run it like so, assuming env.roledefs['nginx'] exists:
$ fab -R nginx command
You can use Fabric and boto together.
First you need to export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_EC2_REGION (the environment variables used in the code below) in your console. The Fabric file must be named fabfile.py, not ec2.py or anything else.
import boto, urllib2
from boto.ec2 import connect_to_region
from fabric.api import env, run, cd, settings, sudo, task
from fabric.api import parallel
import os
import sys

REGION = os.environ.get("AWS_EC2_REGION")

env.user = "ec2-user"
env.key_filename = ["/home/user/uswest.pem"]

@task
def command():
    run("lsb_release -a")

def _create_connection(region):
    print "Connecting to ", region
    conn = connect_to_region(
        region_name=region,
        aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
    )
    print "Connection with AWS established"
    return conn
Finally, this program can be executed using the command below:
$ fab command
From http://docs.python-guide.org/en/latest/scenarios/admin/
You can see that if you set
env.hosts = ['my_server1', 'my_server2']
You'll then be able to target those hosts.
With boto, if you just have a function that does
ec2_connection.get_only_instances(filter={'tag':< whatever>})
and returns a list of their dns names, you'll be able to then set
env.hosts = [< list of dns names from ec2>]
Piece of cake!
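Putting those two pieces together, a sketch of what such a fabfile could look like (the tag name, key file, and user below are placeholders/assumptions; connect_to_region and get_only_instances are the boto 2 calls referenced above):

import os

from boto.ec2 import connect_to_region
from fabric.api import env, run, task

def _ec2_hosts(tag_name, tag_value):
    # Ask EC2 for running instances matching a tag and return their public DNS names.
    conn = connect_to_region(
        os.environ.get("AWS_EC2_REGION", "us-west-2"),
        aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
    )
    instances = conn.get_only_instances(
        filters={"tag:%s" % tag_name: tag_value, "instance-state-name": "running"}
    )
    return [i.public_dns_name for i in instances if i.public_dns_name]

env.user = "ec2-user"
env.key_filename = ["/home/user/uswest.pem"]   # placeholder key file
env.hosts = _ec2_hosts("role", "nginx")        # placeholder tag

@task
def command():
    run("lsb_release -a")

fab command would then run against whatever hosts the tag query returned.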