I'm not able to connect to an FTP server; I get the error below:
[vmware@localhost ~]$ python try_ftp.py
Traceback (most recent call last):
File "try_ftp.py", line 5, in <module>
f = ftplib.FTP('ftp.python.org')
File "/usr/lib/python2.6/ftplib.py", line 116, in __init__
self.connect(host)
File "/usr/lib/python2.6/ftplib.py", line 131, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout)
File "/usr/lib/python2.6/socket.py", line 567, in create_connection
raise error, msg
socket.error: [Errno 101] Network is unreachable
My code is very simple:
import ftplib
f = ftplib.FTP('ftp.python.org')
f.login('anonymous', 'sausaxen@xyz.com')
f.dir()
f.retrlines('RETR motd')
f.quit()
I checked my proxy settings; they are set to "System proxy settings".
Please suggest what I should do.
Thanks,
Sam
[torxed@archie ~]$ telnet ftp.python.org 21
Trying 82.94.164.162...
Connection failed: Connection refused
Trying 2001:888:2000:d::a2...
telnet: Unable to connect to remote host: Network is unreachable
It's not so much the hostname that's bad (you mentioned ping works) as the default port 21 that refuses connections. Or they're not running a standard FTP server on that host at all; instead they serve the files over HTTP: https://www.python.org/ftp/python/
Try against ftp.acc.umu.se instead.
[torxed@archie ~]$ python
Python 3.3.5 (default, Mar 10 2014, 03:21:31)
[GCC 4.8.2 20140206 (prerelease)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ftplib
>>> f = ftplib.FTP('ftp.acc.umu.se')
>>>
The address ftp.python.org seems bad. Try pinging the "ftp.python.org" address.
EDIT:
f = ftplib.FTP('ftp.python.org') gives the error message, but ping works.
If you need to pass through a proxy, check that you have ftp_proxy set as an environment variable. Normally, what I do is set the proxy explicitly.
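For example, here is a minimal sketch of going through a proxy with urllib instead, since ftplib itself has no proxy support; the proxy address is a placeholder for your environment:
import os
import urllib.request
# Hypothetical proxy address; adjust to your network.
os.environ['ftp_proxy'] = 'http://proxy.example.com:8080'
# urllib reads ftp_proxy from the environment and routes the
# request through the HTTP proxy.
with urllib.request.urlopen('ftp://ftp.acc.umu.se/') as resp:
    print(resp.read().decode())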
Also, as an alternative, try using httplib or requests, as in the sketch below.
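A minimal sketch with requests, fetching the release listing that python.org serves over HTTPS (requests must be installed separately):
import requests
# python.org serves its release files over HTTPS rather than FTP,
# so a plain GET works where ftplib cannot.
resp = requests.get('https://www.python.org/ftp/python/')
resp.raise_for_status()
print(resp.text[:500])  # first part of the directory listing HTML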
I have been trying to get PySpark up and running for a college project for about two weeks now, following every tutorial I could find online, and I am getting the error below.
Error:
@Mac-mini ~ % pyspark
Python 3.9.7 (default, Sep 3 2021, 12:37:55)
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1095)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.SparkSubmitArguments.$anonfun$loadEnvironmentArguments$3(SparkSubmitArguments.scala:157)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.deploy.SparkSubmitArguments.loadEnvironmentArguments(SparkSubmitArguments.scala:157)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:115)
at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$3.<init>(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:85)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @24912924
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 13 more
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pyspark/python/pyspark/shell.py", line 35, in <module>
SparkContext._ensure_initialized() # type: ignore
File "/usr/local/lib/python3.9/site-packages/pyspark/context.py", line 331, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/usr/local/lib/python3.9/site-packages/pyspark/java_gateway.py", line 108, in launch_gateway
raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
Change JAVA_HOME to a different Java version.
I had exactly the same error message with JAVA_HOME set to an openjdk-16.0.1 path; after I changed it to an adoptopenjdk-12.jdk path, the PySpark Exception: Java gateway process exited before sending its port number was gone. A sketch of setting this from Python is below.
Thanks @werner for the suggestion.
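A minimal sketch, assuming a compatible JDK is installed (the path below is a placeholder; adjust it to your machine). PySpark's launcher picks up JAVA_HOME from the environment when it spawns the JVM gateway, so it must be set before any SparkSession is created:
import os
# Placeholder JDK path; set before importing/starting Spark.
os.environ['JAVA_HOME'] = '/Library/Java/JavaVirtualMachines/adoptopenjdk-12.jdk/Contents/Home'
from pyspark.sql import SparkSession
spark = SparkSession.builder.master('local[*]').getOrCreate()
print(spark.version)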
PyDev fails at the built-in breakpoint() in Python code with the following error.
warning: Debugger speedups using cython not found. Run '"/Users/Work/opt/anaconda3/envs/ch_dev37/bin/python" "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/setup_cython.py" build_ext --inplace' to build.
Could not connect to 127.0.0.1: 5678
Traceback (most recent call last):
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/_pydevd_bundle/pydevd_comm.py", line 462, in start_client
s.connect((host, port))
ConnectionRefusedError: [Errno 61] Connection refused
Traceback (most recent call last):
File "/Volumes/GoogleDrive/My Drive/free_energy_data/write_energies.py", line 111, in <module>
write_gases, write_adsorbates, verbose)
File "/Users/Work/opt/dev/gits/CatHub/cathub/catmap_interface.py", line 31, in write_energies
breakpoint()
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/pydev_sitecustomize/sitecustomize.py", line 74, in custom_sitecustomize_breakpointhook
pydevd.settrace(*args, **kwargs)
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/pydevd.py", line 2623, in settrace
notify_stdin=notify_stdin,
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/pydevd.py", line 2688, in _locked_settrace
py_db.connect(host, port) # Note: connect can raise error.
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/pydevd.py", line 1262, in connect
s = start_client(host, port)
File "/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc/_pydevd_bundle/pydevd_comm.py", line 462, in start_client
s.connect((host, port))
ConnectionRefusedError: [Errno 61] Connection refused
I am running the Python code using the 'Run' option (the Play button in the attached image) in the GUI.
This behavior is surprising because, as the release notes say, the breakpoint() builtin has been supported since an earlier release (v6.5.0) of PyDev. So far I have used import pdb; pdb.set_trace() for a breakpoint, and it still functions well. The breakpoint() builtin also works without any problem outside PyDev, in a terminal.
Additional Details:
Python Version: 3.7.8
Eclipse Version: 2020-09 (4.17.0)
PyDev Version: 8.0.1
Device: MacBook Pro
OS: macOS Big Sur 11.0.1
In this case, breakpoint() tries to connect to the IDE using the Remote Debugger, and the ConnectionRefusedError: [Errno 61] Connection refused means that you haven't started the remote debugger on the client side.
So, to fix this, please start the debug server on the client side prior to the run (then on launch it will run without the debugger attached until breakpoint() is reached).
See https://www.pydev.org/manual_adv_remote_debugger.html for info on how to start the remote debugger on the client side.
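For reference, a minimal sketch of attaching to the remote debugger explicitly (the pysrc path comes from the traceback above; port 5678 is the PyDev default, and the debug server must already be listening in Eclipse):
import sys
# pydevd ships with PyDev under the plugin's pysrc directory.
sys.path.append('/Applications/Eclipse.app/Contents/Eclipse/plugins/org.python.pydev.core_8.0.1.202011071328/pysrc')
import pydevd
pydevd.settrace('127.0.0.1', port=5678)  # execution suspends here once connected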
I created an Ubuntu VM on Azure. In its inbound networking filters, I added ports 22 (for SSH) and 6379 (for Redis). Following this, I SSHed into an instance from my bash shell, then downloaded, built, and installed Redis from source. The resulting redis.conf file is located in /tmp/redis-stable, so I edited it to comment out the bind 127.0.0.1 rule.
Then I started redis-server redis.conf from the /tmp/redis-stable directory, and it started normally. After that, I SSHed into another instance of the VM, started redis-cli, and set some keys; retrieving them worked correctly.
Now in Python I am running this command:
r = redis.Redis(host='same_which_I_use_for_SSHing', port=6379, password='pwd')
It connects immediately (which looks weird). But then, when I try a simple command like r.get("foo"), I get this error:
>>> r.get("foo")
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 511, in _connect
socket.SOCK_STREAM):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/client.py", line 667, in execute_command
connection.send_command(*args)
File "/lib/python3.5/site-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/lib/python3.5/site-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 8 connecting to username@ip_address. nodename nor servname provided, or not known.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/lib/python3.5/site-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 511, in _connect
socket.SOCK_STREAM):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lib/python3.5/site-packages/redis/client.py", line 976, in get
return self.execute_command('GET', name)
File "/lib/python3.5/site-packages/redis/client.py", line 673, in execute_command
connection.send_command(*args)
File "/lib/python3.5/site-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/lib/python3.5/site-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/lib/python3.5/site-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 8 connecting to username@ip_address. nodename nor servname provided, or not known.
Any idea how to fix this? FYI, the username@ip_address in the error message is the same one I use for SSH from bash and for connecting to Redis from Python, respectively:
ssh username@ip_address
r = redis.Redis(host='username@ip_address', port=6379, password='pwd')
I also tried adding bind 0.0.0.0 in the redis.conf file after commenting out the bind 127.0.0.1 line. Same result.
Update: I tried setting protected-mode to no in the config file, followed by running sudo ufw allow 6379 in the VM. Still the same result. But now I get a weird error when I run redis-server redis.conf: I don't get the typical Redis startup banner (the ASCII-art cube); instead I get different output (screenshot omitted).
After this, if I enter redis-cli and issue a simple command like set foo 1, I get this error message:
127.0.0.1:6379> set foo 1
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
Even shutdown fails after this. I have to run grep to find the process id of redis-server and kill it manually, after which redis-server has to be run again to start it normally. But of course, I still cannot connect from my remote Mac.
I created an Ubuntu VM on Azure, built and configured the Redis server from source code, and added a port rule to my VM's NSG on the Azure portal; then I connected to the Redis server successfully via the same code with the Python redis package.
Here are my steps:
1. Create an Ubuntu 18.04 VM on the Azure portal.
2. Connect to the Ubuntu VM via SSH to install the build packages (including build-essential, gcc, and make), download and decompress the tar.gz file of the Redis source code from redis.io, and build and test the Redis server with make and make test.
3. Configure redis.conf via vim, changing the bind configuration from 127.0.0.1 to 0.0.0.0.
4. Add a new port rule for TCP port 6379 to the NSG for the Network Interface shown in the Settings -> Networking tab of my VM on the Azure portal (screenshots omitted). Note: there are two NSGs in my Networking tab; the second one is attached to my Network Interface, which can be accessed via the internet.
5. Install redis via pip install redis on my local machine.
6. Open a terminal, type python, and try to connect to the Redis server hosted on my Azure Ubuntu VM; I get the value of the foo key successfully.
>>> import redis
>>> r = redis.Redis(host='xxx.xx.xx.xxx')
>>> r.get('foo')
b'bar'
There are two key issues I found in my testing.
For redis.conf, the bind host must be 0.0.0.0. If it is not, the Redis server runs in protected mode and refuses the query.
For the NSG port rules, make sure the new port rule is added to the NSG attached to the current network interface, not to the default subnet.
Also note that the Python redis package connects lazily: it only opens the connection to the Redis server when the first command is issued, which is why the constructor appears to succeed immediately. See the sketch below.
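A minimal sketch that forces the connection up front (the host is a placeholder for the VM's public IP; socket_connect_timeout just makes failures fast):
import redis
# Constructing the client does not touch the network;
# ping() forces the first round trip and surfaces connection errors early.
r = redis.Redis(host='xxx.xx.xx.xxx', port=6379, socket_connect_timeout=5)
r.ping()  # raises redis.exceptions.ConnectionError if unreachable
print(r.get('foo'))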
Hope it helps.
My goal is to have a basic FTP program written in Python. First, I need to gain more knowledge. My question is: how can I connect to an Ubuntu Server (hosted via VirtualBox) using Python?
I have tried following the page on the official Python website, but I get an error saying socket.error: [Errno 61] Connection refused when using this:
from ftplib import FTP
ftp = FTP('jordan@10.0.0.12')
This is the output I get when using ftp = FTP('10.0.0.12'):
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ftplib.py", line 120, in __init__
    self.connect(host)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ftplib.py", line 135, in connect
    self.sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 575, in create_connection
    raise err
socket.error: [Errno 61] Connection refused
I can use an FTP program such as Transmit (with the same port, over SFTP) on the same machine, and it works fine.
Python's ftplib does not support SFTP, so using pysftp would work.
An alternative is paramiko; a minimal sketch is below.
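A minimal sketch with paramiko, assuming the server at 10.0.0.12 accepts password authentication (the username and password are placeholders):
import paramiko
ssh = paramiko.SSHClient()
# Accept unknown host keys; fine for a local VM, not for production.
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.0.0.12', port=22, username='jordan', password='secret')
sftp = ssh.open_sftp()
print(sftp.listdir('.'))  # list the remote home directory
sftp.close()
ssh.close()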
I am using the pre-built 'spark-2.0.1-bin-hadoop2.7', and when I try to start pyspark, I get the following message.
Any ideas what could be wrong? I tried using python3 and setting SPARK_LOCAL_IP to 127.0.0.1, but I get the same error.
~ -> cd /Applications/spark-2.0.1-bin-hadoop2.7/bin/
/Applications/spark-2.0.1-bin-hadoop2.7/bin -> pyspark
Python 2.7.12 (default, Oct 11 2016, 05:24:00)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/12/19 14:50:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/19 14:50:47 WARN Utils: Your hostname, XXXXXX.com resolves to a loopback address: 127.0.0.1; using XX.XX.XX.XXX instead (on interface en0)
16/12/19 14:50:47 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Traceback (most recent call last):
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/shell.py", line 43, in <module>
spark = SparkSession.builder\
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/sql/session.py", line 169, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/context.py", line 294, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/context.py", line 115, in __init__
conf, jsc, profiler_cls)
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/context.py", line 174, in _do_init
self._accumulatorServer = accumulators._start_update_server()
File "/Applications/spark-2.0.1-bin-hadoop2.7/python/pyspark/accumulators.py", line 259, in _start_update_server
server = AccumulatorServer(("localhost", 0), _UpdateRequestHandler)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 417, in __init__
self.server_bind()
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 431, in server_bind
self.socket.bind(self.server_address)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
Thanks
Found it. Somehow my host mapping was messing it up; changing it to point to localhost worked:
/etc/hosts
#127.0.0.1 XXXXXX.com
127.0.0.1 localhost
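A quick diagnostic sketch: if the first call below raises socket.gaierror, the localhost entry in /etc/hosts is missing or shadowed, which is exactly what breaks PySpark's accumulator server bind:
import socket
# Should resolve to 127.0.0.1 (and/or ::1) on a healthy machine.
print(socket.getaddrinfo('localhost', 0))
print(socket.gethostbyname(socket.gethostname()))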
In cases where you cannot clean up /etc/hosts (for example, when it's being tampered with by some VPN solution), here is a workaround:
from pyspark.sql import SparkSession

def patch_pyspark_accumulators():
    # Rewrite pyspark.accumulators._start_update_server so that the
    # accumulator server binds to 127.0.0.1 instead of "localhost",
    # sidestepping the broken hostname resolution.
    from inspect import getsource
    import pyspark.accumulators as pa
    exec(getsource(pa._start_update_server).replace("localhost", "127.0.0.1"), pa.__dict__)

patch_pyspark_accumulators()
spark = SparkSession.builder.getOrCreate()