The HdfsCLI docs say the client can be configured to connect to multiple hosts by joining the URLs with a semicolon (;) (https://hdfscli.readthedocs.io/en/latest/quickstart.html#configuration).
I use the Kerberos client, and this is my code:
from hdfs.ext.kerberos import KerberosClient

hdfs_client = KerberosClient('http://host01:50070;http://host02:50070')
And when I try to make a directory, for example, I get the following error:
requests.exceptions.InvalidURL: Failed to parse: http://host01:50070;http://host02:50070/webhdfs/v1/path/to/create
Apparently the version of hdfs I had installed was old; the code did not work with version 2.0.8, but it did work with version 2.5.7.
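For reference, a minimal sketch of the working setup on the newer library version (the makedirs call and target path are illustrative assumptions, not from the original post):

# Requires hdfs >= 2.5.7 (e.g. pip install --upgrade hdfs)
from hdfs.ext.kerberos import KerberosClient

# Semicolon-separated URLs let the client fail over between the two namenodes
hdfs_client = KerberosClient('http://host01:50070;http://host02:50070')
hdfs_client.makedirs('/path/to/create')  # hypothetical target path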
import apache_beam as beam
from apache_beam.io.jdbc import ReadFromJdbc

with beam.Pipeline() as p:
    result = (
        p
        | 'Read from jdbc' >> ReadFromJdbc(
            fetch_size=None,
            table_name='table_name',
            driver_class_name='oracle.jdbc.driver.OracleDriver',
            jdbc_url='jdbc:oracle:thin:@localhost:1521:orcl',
            username='xxx',
            password='xxx',
            query='select * from table_name',
        )
        | beam.Map(print)
    )
When I run the above code, the following error occurs:
ERROR:apache_beam.utils.subprocess_server:Starting job service with ['java', '-jar', 'C:\\Users\\YFater/.apache_beam/cache/jars\\beam-sdks-java-extensions-schemaio-expansion-service-2.29.0.jar', '51933']
ERROR:apache_beam.utils.subprocess_server:Error bringing up service
Apache Beam needs to use a Java expansion service in order to use JDBC from Python.
You get this error because Beam cannot launch the expansion service.
To fix this, install a Java runtime on the computer where you run Apache Beam, and make sure java is on your PATH.
If the problem persists after installing Java (or if you already have it installed), the JAR files Beam downloaded are probably corrupted (perhaps the download was interrupted, or the file was truncated because the disk filled up). In that case, just remove the contents of the $HOME/.apache_beam/cache/jars directory and re-run the Beam pipeline.
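For example, on Linux or macOS (the directory comes from the answer above; on Windows the cache lives under the user profile, as the error log shows):

# Clear Beam's JAR cache so the pipeline re-downloads the expansion service
rm -rf ~/.apache_beam/cache/jars/*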
Add the classpath parameter to ReadFromJdbc.
Example:
classpath=['~/.apache_beam/cache/jars/ojdbc8.jar'],
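For context, a sketch of where the parameter goes (connection details are the illustrative values from the question, and the JAR path assumes the Oracle driver has already been downloaded there):

rows = p | 'Read from jdbc' >> ReadFromJdbc(
    table_name='table_name',
    driver_class_name='oracle.jdbc.driver.OracleDriver',
    jdbc_url='jdbc:oracle:thin:@localhost:1521:orcl',
    username='xxx',
    password='xxx',
    # Ship the Oracle JDBC driver to the expansion service explicitly
    classpath=['~/.apache_beam/cache/jars/ojdbc8.jar'],
)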
I'm having trouble connecting to my database BLUDB in IBM Db2 on Cloud using SQLAlchemy. Here is the code I've always used and it's always worked fine:
%sql ibm_db_sa://user:pswd@some-host.services.dal.bluemix.net:50000/BLUDB
But now I get this error:
(ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError:
Exception('[IBM][CLI Driver] SQL1042C An unexpected system error
occurred. SQLSTATE=58004\r SQLCODE=-1042') (Background on this error
at: http://sqlalche.me/e/13/f405) Connection info needed in SQLAlchemy
format, example: postgresql://username:password@hostname/dbname or an
existing connection: dict_keys([])
These packages are loaded as always:
import ibm_db
import ibm_db_sa
import sqlalchemy
from sqlalchemy.engine import create_engine
I looked at IBM's Python Db2 documentation and the SQLAlchemy error message, but couldn't get anywhere.
I am working in JupyterLab locally. I recently reinstalled Python and JupyterLab; that's the only thing that has changed locally.
I am able to successfully run the notebooks in the cloud at Kaggle and Cognitive Class. I am also able to connect to and query SQLite3 via Python without an issue using my local notebook.
All the IBM modules and version numbers are the same before and after the reinstallation; I used requirements.txt for the reinstallation.
Here are the last two entries in db2diag.log:
2020-11-05-14.06.47.081000-300 I13371F372 LEVEL: Warning
PID : 17500 TID : 7808 PROC : python.exe
INSTANCE: NODE : 000
HOSTNAME: DESKTOP-6FFFO2E
EDUID : 7808
FUNCTION: DB2 UDB, bsu security, sqlexLogPluginMessage, probe:20
DATA #1 : String with size, 43 bytes
loadAuthidMapper: GetModuleHandle rc = 126
2020-11-05-14.13.49.282000-300 I13745F373 LEVEL: Warning
PID : 3060 TID : 12756 PROC : python.exe
INSTANCE: NODE : 000
HOSTNAME: DESKTOP-6FFFO2E
EDUID : 12756
FUNCTION: DB2 UDB, bsu security, sqlexLogPluginMessage, probe:20
DATA #1 : String with size, 43 bytes
loadAuthidMapper: GetModuleHandle rc = 126
I think the root of this comes down to the new version of Python and pip caching.
What version did you move from, and what version are you on now? Is this a Python 2 to Python 3 change? When changing versions, you would normally need to clean-install all components with pip, but pip uses a cache, even for components that may need to be compiled, and there is a good chance that the Db2 components are being compiled.
So what you will need to do is re-install the dependencies with:
pip install --no-cache-dir
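For example, assuming the environment was captured in requirements.txt as described above:

# Re-download and rebuild everything, bypassing pip's cache
pip install --no-cache-dir --force-reinstall -r requirements.txt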
I'm using the python-jenkins library to install plugins on Jenkins.
The install_plugin function returns before the plugin is fully installed, and I reboot the Jenkins server immediately after that function call.
What should I do to prevent this issue?
I am facing a similar issue. The code snippet is below.
import jenkins

server = jenkins.Jenkins('http://192.168.99.100:8080', username='guest', password='guest')
user = server.get_whoami()
version = server.get_version()
print('Hello %s from Jenkins %s' % (user['fullName'], version))

# Get installed plugins
#plugins = server.get_plugins(3)
#print(plugins)

# Download a plugin - Convert to Pipeline
info = server.install_plugin('convert-to-pipeline', True)  # Convert To Pipeline
print(info)
Also, the output (returned from python-jenkins' __init__.py) is an empty string instead of True.
The Jenkins log has a broken-pipe error.
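No definitive fix appears in this thread, but one workaround sketch (my assumption, not from the original posts) is to poll the installed-plugin list and only restart Jenkins once the plugin shows up:

import time
import jenkins

server = jenkins.Jenkins('http://192.168.99.100:8080', username='guest', password='guest')
server.install_plugin('convert-to-pipeline', include_dependencies=True)

# Poll until the plugin appears among the installed plugins (or we give up),
# instead of restarting Jenkins immediately after install_plugin returns
for _ in range(30):
    installed = {p['shortName'] for p in server.get_plugins_info()}
    if 'convert-to-pipeline' in installed:
        break
    time.sleep(10)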
I recently installed Apache Zeppelin 0.6.2 on macOS Sierra. I am able to run the Spark and Python examples, but when I try to run R code using either %r or %spark.r I get an error. I have already set SPARK_HOME and SCALA_HOME in .bash_profile. Attaching the error log:
INFO [2016-10-31 19:48:10,806] ({pool-2-thread-5} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1477923480756 finished by scheduler org.apache.zeppelin.spark.SparkRInterpreter314730576
INFO [2016-10-31 19:48:10,804] ({pool-1-thread-5} ZeppelinR.java[createRScript]:366) - File /var/folders/_b/2cr99z410sddt8km9p9b9fs80000gn/T/zeppelin_sparkr-6402261059466053567.R created
ERROR [2016-10-31 19:48:20,836] ({pool-1-thread-5} TThreadPoolServer.java[run]:296) - Error occurred during processing of message.
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not responding
I just figured out that setting SPARK_HOME in .bash_profile is not enough; one also needs to update zeppelin-env.sh and set SPARK_HOME there as well.
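For reference, a sketch of the relevant line in conf/zeppelin-env.sh (the path is an illustrative assumption; use your own Spark install location):

# conf/zeppelin-env.sh
export SPARK_HOME=/usr/local/spark   # example path, adjust to your installation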
Recently, when I was installing OpenStack on 3 VMs on CentOS 7 using an answer file, I had the following error:
10.7.35.174_osclient.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 10.7.35.174_osclient.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list python-iso8601' returned 1: Error: No matching Packages to list
You will find full trace in log /var/tmp/packstack/20160318-124834-91QzZC/manifests/10.7.35.174_osclient.pp.log
Please check log file /var/tmp/packstack/20160318-124834-91QzZC/openstack-setup.log for more information
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 10.7.35.174. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.7.35.174/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
I have already manually installed that module, but the same problem occurs anyway.
That command only succeeds when run as:
/usr/bin/yum -d 0 -e 0 -y list python2-iso8601
Is there any way to pass that package name through to the installer?
Do you have any ideas how to solve it?
I found that the Kilo version works fine.