jenkins python api install plugin - python

I'm using the python-jenkins library to install plugins on Jenkins.
The install_plugin function returns before the plugin is fully installed, and I reboot the Jenkins server immediately after that function call.
What should I do to prevent this issue?

I'm facing a similar issue. The code snippet is below.
import jenkins
server = jenkins.Jenkins('http://192.168.99.100:8080', username='guest', password='guest')
user = server.get_whoami()
version = server.get_version()
print('Hello %s from Jenkins %s' % (user['fullName'], version))
#Get installed plugins
#plugins = server.get_plugins(3)
#print(plugins)
#Download a plugin - Convert to pipeline
info = server.install_plugin('convert-to-pipeline',True) #Convert To Pipeline
print(info)
Also, the output is an empty string from __init__.py instead of True.
The Jenkins log has a broken-pipe error.
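One way to avoid the premature restart is to poll the installed-plugin list until the new plugin shows up. A minimal sketch, assuming the plugin appears in get_plugins_info() once installation completes (the plugin name and timeout are illustrative):
import time
import jenkins

server = jenkins.Jenkins('http://192.168.99.100:8080', username='guest', password='guest')

# install_plugin() only queues the download; poll until the plugin
# actually appears in the installed list before restarting Jenkins.
server.install_plugin('convert-to-pipeline', include_dependencies=True)

deadline = time.time() + 300  # give Jenkins up to 5 minutes
while time.time() < deadline:
    installed = [p['shortName'] for p in server.get_plugins_info()]
    if 'convert-to-pipeline' in installed:
        break
    time.sleep(5)
else:
    raise RuntimeError('plugin did not finish installing in time')

# Only now is it reasonably safe to restart the Jenkins server.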

Related

How to use Python to connect to an Oracle database with Apache Beam?

import apache_beam as beam
from apache_beam.io.jdbc import ReadFromJdbc

with beam.Pipeline() as p:
    result = (
        p
        | 'Read from jdbc' >> ReadFromJdbc(
            fetch_size=None,
            table_name='table_name',
            driver_class_name='oracle.jdbc.driver.OracleDriver',
            jdbc_url='jdbc:oracle:thin:@localhost:1521:orcl',
            username='xxx',
            password='xxx',
            query='select * from table_name'
        )
        | beam.Map(print)
    )
When I run the above code, the following error occurs:
ERROR:apache_beam.utils.subprocess_server:Starting job service with ['java', '-jar', 'C:\\Users\\YFater/.apache_beam/cache/jars\\beam-sdks-java-extensions-schemaio-expansion-service-2.29.0.jar', '51933']
ERROR:apache_beam.utils.subprocess_server:Error bringing up service
Apache Beam needs to use a Java expansion service in order to use JDBC from Python.
You get an error because Beam cannot launch the expansion service.
To fix this, install the Java runtime on the computer where you run Apache Beam, and make sure java is on your PATH.
If the problem persists after installing Java (or if you already have it installed), the JAR files Beam downloaded are probably bad (maybe the download stopped, or the file was truncated because the disk filled up). In that case, just remove the contents of the $HOME/.apache_beam/cache/jars directory and re-run the Beam pipeline.
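For example, on a Unix-like machine (the cache path is the one named above):
$ java -version   # verify a Java runtime is on the PATH
$ rm -rf "$HOME/.apache_beam/cache/jars"   # force Beam to re-download the jars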
Add the classpath parameter in ReadFromJdbc.
Example:
classpath=['~/.apache_beam/cache/jars/ojdbc8.jar'],
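A sketch of the pipeline from the question with this parameter added; the jar path is the example above, and note that the classpath parameter is only available in newer Beam releases:
import apache_beam as beam
from apache_beam.io.jdbc import ReadFromJdbc

with beam.Pipeline() as p:
    (
        p
        | 'Read from jdbc' >> ReadFromJdbc(
            table_name='table_name',
            driver_class_name='oracle.jdbc.driver.OracleDriver',
            jdbc_url='jdbc:oracle:thin:@localhost:1521:orcl',
            username='xxx',
            password='xxx',
            query='select * from table_name',
            # Path to the Oracle JDBC driver jar; adjust to where the
            # jar actually lives on your machine.
            classpath=['~/.apache_beam/cache/jars/ojdbc8.jar'],
        )
        | beam.Map(print)
    )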

High availability HDFS client python

The HDFSCLI docs say that the client can be configured to connect to multiple hosts by adding URLs separated with a semicolon ; (https://hdfscli.readthedocs.io/en/latest/quickstart.html#configuration).
I use the Kerberos client, and this is my code:
from hdfs.ext.kerberos import KerberosClient
hdfs_client = KerberosClient('http://host01:50070;http://host02:50070')
And when I try to makedirs, for example, I get the following error:
requests.exceptions.InvalidURL: Failed to parse: http://host01:50070;http://host02:50070/webhdfs/v1/path/to/create
Apparently the version of hdfs I had installed was old; the code didn't work with version 2.0.8, but it did work with version 2.5.7.
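For reference, a minimal sketch of the working setup, assuming hdfs>=2.5.7 (where the semicolon-separated URL is parsed correctly):
from hdfs.ext.kerberos import KerberosClient

# Semicolon-separated NameNode URLs; the client fails over between hosts.
hdfs_client = KerberosClient('http://host01:50070;http://host02:50070')
hdfs_client.makedirs('/path/to/create')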

How to create an environment/application on AWS Beanstalk for a Python 2.7.14 program using the AWS EB console?

I am completely new to AWS, and to macOS too. However, I am trying to create simple Python apps in AWS Beanstalk. I got their default demo app (the one that comes as an option while creating an environment/application) working. I followed this tutorial, and the code works locally (Python 2.7.14).
But when I upload this application.py and requirements.txt via a zip, the compiled application shows health 'ok' but 'internal server error' when I load the application URL.
I don't know how to debug, or even what to debug, as the code is quite straightforward; it is most likely an incompatible-environment issue.
So I'm looking everywhere for how to make my AWS EB environment Python 2.7 instead of the Python 3.4 it is giving me, for both the preconfigured Docker Python and the preconfigured Python platforms.
I am confused. How do I make my AWS EB environment/application a Python 2.7.14, the version I use locally and which works well?
For reference, here is my code:
from flask import Flask

# print a nice greeting.
def say_hello(username="World"):
    return '<p>Hello %s!</p>\n' % username

# some bits of text for the page.
header_text = '''
    <html>\n<head> <title>EB Flask Test</title> </head>\n<body>'''
instructions = '''
    <p><em>Hint</em>: This is a RESTful web service! Append a username
    to the URL (for example: <code>/Thelonious</code>) to say hello to
    someone specific.</p>\n'''
home_link = '<p><a href="/">Back</a></p>\n'
footer_text = '</body>\n</html>'

# EB looks for an 'application' callable by default.
application = Flask(__name__)

# add a rule for the index page.
application.add_url_rule('/', 'index', (lambda: header_text +
    say_hello() + instructions + footer_text))

# add a rule when the page is accessed with a name appended to the site
# URL.
application.add_url_rule('/<username>', 'hello', (lambda username:
    header_text + say_hello(username) + home_link + footer_text))

# run the app.
if __name__ == "__main__":
    # Setting debug to True enables debug output. This line should be
    # removed before deploying a production app.
    application.debug = True
    application.run()
requirements.txt
click==6.7
Flask==1.0.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
Werkzeug==0.14.1
I'm running into path issues while installing the AWS EB CLI, so please restrict your answers to using the console. Thank you.
If you need a different version of the platform for your application, you need to specify it using a mechanism Elastic Beanstalk provides, i.e., a config.yml file. Create a config.yml file with the environment/platform requirements and place it in the .elasticbeanstalk folder at the base of your application.
You can see the config.yml setup and format here
And there are many supported platforms.
"How do I make my AWS EB environment/application a Python 2.7.14?"
Based on the above document, you need to specify the following in the config.yml file:
global:
  default_platform: 64bit Amazon Linux 2018.03 v2.7.1 running Python 2.7
Different config files help to customize the EB config/environment. More here
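For example, a minimal .elasticbeanstalk/config.yml sketch; the application name and region are placeholders, and the platform string is the one given above:
global:
  application_name: my-flask-app   # placeholder
  default_region: us-east-1        # placeholder
  default_platform: 64bit Amazon Linux 2018.03 v2.7.1 running Python 2.7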

Apache Beam Dataflow runner throwing setup error

We are building a data pipeline using the Beam Python SDK and trying to run it on Dataflow, but we are getting the error below:
A setup error was detected in beamapp-xxxxyyyy-0322102737-03220329-8a74-harness-lm6v. Please refer to the worker-startup log for detailed information.
But we could not find detailed worker-startup logs.
We tried increasing the memory size, worker count, etc., but are still getting the same error.
Here is the command we use,
python run.py \
--project=xyz \
--runner=DataflowRunner \
--staging_location=gs://xyz/staging \
--temp_location=gs://xyz/temp \
--requirements_file=requirements.txt \
--worker_machine_type n1-standard-8 \
--num_workers 2
pipeline snippet,
data = pipeline | "load data" >> beam.io.Read(
    beam.io.BigQuerySource(query="SELECT * FROM abc_table LIMIT 100")
)
data | "filter data" >> beam.Filter(lambda x: x.get('column_name') == value)
The above pipeline just loads the data from BigQuery and filters it on some column value. It works like a charm with DirectRunner but fails on Dataflow.
Are we making any obvious setup mistake? Is anyone else getting the same error? We could use some help resolving this issue.
Update:
Our pipeline code is spread across multiple files, so we created a Python package. We solved the setup error by passing the --setup_file argument instead of --requirements_file.
We resolved this setup error by sending a different set of arguments to Dataflow. Our code is spread across multiple files, so we had to create a package for it. If we use --requirements_file, the job will start but eventually fail, because the workers can't find the package. The Beam Python SDK sometimes does not throw an explicit error message in these cases; instead, it retries the job and fails. To get your code running as a package, pass the --setup_file argument, with the dependencies listed in the setup file. Make sure the package created by the python setup.py sdist command includes all the files required by your pipeline code.
If you have a privately hosted Python package dependency, pass --extra_package with the path to the package.tar.gz file. A better way is to store it in a GCS bucket and pass that path here.
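For example, the launch command from the question adapted to this approach; the bucket, paths, and package name are placeholders:
python run.py \
  --project=xyz \
  --runner=DataflowRunner \
  --staging_location=gs://xyz/staging \
  --temp_location=gs://xyz/temp \
  --setup_file=./setup.py \
  --extra_package=gs://xyz/packages/mypackage-0.1.0.tar.gz \
  --worker_machine_type=n1-standard-8 \
  --num_workers=2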
I have written an example project to get started with Apache Beam Python SDK on Dataflow - https://github.com/RajeshHegde/apache-beam-example
Read about it here - https://medium.com/@rajeshhegde/data-pipeline-using-apache-beam-python-sdk-on-dataflow-6bb8550bf366
I'm building a prediction pipeline using Apache Beam/Dataflow. I need to include the model files inside the dependencies available to the remote workers. The Dataflow job failed with the same error log:
Error message from worker: A setup error was detected in beamapp-xxx-xxxxxxxxxx-xxxxxxxx-xxxx-harness-xxxx. Please refer to the worker-startup log for detailed information.
However, this error message didn't give any details about the worker-startup log. Finally, I found a way to get at the worker log and solve the problem.
Dataflow creates Compute Engine instances to run jobs and saves logs on them, so we can access a VM to see its logs. We can connect to the VM used by our Dataflow job from the GCP console via SSH. Then we can check the boot-json.log file located in /var/log/dataflow/taskrunner/harness:
$ cd /var/log/dataflow/taskrunner/harness
$ cat boot-json.log
Here we should pay attention: when running in batch mode, the VMs created by Dataflow are ephemeral and are shut down when the job fails. If a VM is shut down, we can't access it anymore. But a failing work item is retried 4 times, so we normally have enough time to open boot-json.log and see what is going on.
Finally, here is my Python setup solution; it may help someone else:
main.py
...
model_path = os.path.dirname(os.path.abspath(__file__)) + '/models/net.pd'
# pipeline code
...
MANIFEST.in
include models/*.*
setup.py (complete example)
REQUIRED_PACKAGES = [...]

setuptools.setup(
    ...
    include_package_data=True,
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    package_data={"models": ["models/*"]},
    ...
)
Run Dataflow pipelines
$ python main.py --setup_file=/absolute/path/to/setup.py ...

Python web application with OpenCV in Heroku

I am building a web application that uses OpenCV in its back end. I have built the application on Ubuntu (and I tried it on Windows, too) and it works fine. Currently, I am trying to configure OpenCV to work on Heroku. As OpenCV cannot be installed using pip, I read about using Heroku buildpacks, which provide customization of the server environment.
The following is my attempt to test two of the OpenCV buildpacks:
I built a simple web server with Flask that tries to import OpenCV:
# hello.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    text = ''
    try:
        import cv2
        text = 'success'
    except ImportError:
        text = 'fail'
    return text + ' to load openCV'

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(host='0.0.0.0', port=port)
The above code should return either success or fail in loading OpenCV.
Then I configured Heroku to use (heroku multi buildpack) by running the following command:
heroku buildpacks:set https://github.com/ddollar/heroku-buildpack-multi
In the .buildpacks file (required by the multi buildpack) I put the https://github.com/heroku/heroku-buildpack-python and https://github.com/slobdell/heroku-buildpack-python-opencv-scipy buildpacks.
The first one is for compiling a Python application and for installing other modules (e.g., Flask) through pip. The second buildpack is the one that is supposed to load OpenCV.
After all that, the whole application did not work!
I got an (Application Error) page in Heroku.
I tried another buildpack (https://github.com/diogojc/heroku-buildpack-python-opencv-scipy) but got the same result.
My questions are:
What is wrong with the steps I did?
How should I call (or use) OpenCV within my application on Heroku? Should I use an import statement or some other commands?
I could install it by doing the following:
cd /path/to/your/dir && git init
heroku create MYAPP (start from scratch)
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git --app MYAPP
create .buildpacks as follows:
https://github.com/heroku/heroku-buildpack-python
https://github.com/diogojc/heroku-buildpack-python-opencv-scipy#cedar14
git add . && git commit -m 'MESSAGE' && git push heroku master
For anyone seeing this today and having the same issue: switch opencv-python in your requirements.txt to opencv-python-headless. The headless build does not depend on the problematic GUI library files, which sidesteps the problem.
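For example, in requirements.txt (the version pin is illustrative, not required):
# replaces opencv-python; the headless wheel drops the GUI dependencies
opencv-python-headless==4.5.5.64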
The following steps should solve the OpenCV problem you are facing:
Add the heroku-buildpack-apt buildpack by pasting https://github.com/heroku/heroku-buildpack-apt into Add Buildpack in the dashboard.
(Screenshot: adding the buildpack through Dashboard -> Settings -> Add BuildPacks.)
Then add an Aptfile to the base directory of your GitHub repo, with one library on each line (see the example GitHub link):
libsm6
libxrender1
libfontconfig1
libice6
Now build and deploy and you are ready to go!
