Error while reading h5 file in pandas in read mode - python

After storing the data frame in an HDF5 file, I tried to access the file from another Jupyter notebook, but loading it in read mode with pandas raised an error.
I read the file this way:
data_frames = pd.HDFStore('data_frames.h5', mode='r')
Error:
HDF5ExtError Traceback (most recent call last)
~/.conda/envs/be_project/lib/python3.6/site-packages/pandas/io/pytables.py in open(self, mode, **kwargs)
586 try:
--> 587 self._handle = tables.open_file(self._path, self._mode, **kwargs)
588 except (IOError) as e: # pragma: no cover
~/.conda/envs/be_project/lib/python3.6/site-packages/tables/file.py in open_file(filename, mode, title, root_uep, filters, **kwargs)
319 # Finally, create the File instance, and return it
--> 320 return File(filename, mode, title, root_uep, filters, **kwargs)
321
~/.conda/envs/be_project/lib/python3.6/site-packages/tables/file.py in __init__(self, filename, mode, title, root_uep, filters, **kwargs)
783 # Now, it is time to initialize the File extension
--> 784 self._g_new(filename, mode, **params)
785
tables/hdf5extension.pyx in tables.hdf5extension.File._g_new (tables/hdf5extension.c:5940)()
HDF5ExtError: HDF5 error back trace
File "H5F.c", line 604, in H5Fopen
unable to open file
File "H5Fint.c", line 1087, in H5F_open
unable to read superblock
File "H5Fsuper.c", line 277, in H5F_super_read
file signature not found
End of HDF5 error back trace
Unable to open/create file 'data_frames.h5'
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-2-aa601ba65d4f> in <module>()
----> 1 data_frames = pd.HDFStore('data_frames.h5', mode='r')
~/.conda/envs/be_project/lib/python3.6/site-packages/pandas/io/pytables.py in __init__(self, path, mode, complevel, complib, fletcher32, **kwargs)
446 self._fletcher32 = fletcher32
447 self._filters = None
--> 448 self.open(mode=mode, **kwargs)
449
450 #property
~/.conda/envs/be_project/lib/python3.6/site-packages/pandas/io/pytables.py in open(self, mode, **kwargs)
617 # is not part of IOError, make it one
618 if self._mode == 'r' and 'Unable to open/create file' in str(e):
--> 619 raise IOError(str(e))
620 raise
621
OSError: HDF5 error back trace
File "H5F.c", line 604, in H5Fopen
unable to open file
File "H5Fint.c", line 1087, in H5F_open
unable to read superblock
File "H5Fsuper.c", line 277, in H5F_super_read
file signature not found
End of HDF5 error back trace
Unable to open/create file 'data_frames.h5'
Please help if possible.
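Not part of the original post, but a quick way to confirm what "file signature not found" means: the first 8 bytes of the file do not match the HDF5 magic number, i.e. the file is empty, truncated, or was never actually written as HDF5. A minimal check (file names here are made up for illustration):

```python
# The HDF5 format begins with this fixed 8-byte signature.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Return True if the file starts with the HDF5 signature."""
    with open(path, "rb") as f:
        return f.read(8) == HDF5_MAGIC

# Demonstrate with a deliberately bogus file:
with open("not_really.h5", "wb") as f:
    f.write(b"plain text, not HDF5")
print(looks_like_hdf5("not_really.h5"))  # False -> same situation as the traceback above
```

If this returns False for your `data_frames.h5`, the write step failed or was interrupted (e.g. the store was never flushed/closed), and the fix is to regenerate the file rather than to change how it is opened.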

Related

Use multiprocess for boto3 s3 upload_fileobj causes SSLError

In AWS Lambda with the python3.9 runtime and boto3 1.20.32, I run the following code:
s3_client = boto3.client(service_name="s3")
s3_bucket = "bucket"
s3_other_bucket = "other_bucket"

def multiprocess_s3upload(tar_index: dict):
    def _upload(filename, bytes_range):
        src_key = ...
        # get single raw file in tar with bytes range
        s3_obj = s3_client.get_object(
            Bucket=s3_bucket,
            Key=src_key,
            Range=f"bytes={bytes_range}"
        )
        # upload raw file
        # error occurs here !!!!!
        s3_client.upload_fileobj(
            s3_obj["Body"],
            s3_other_bucket,
            filename
        )

    def _wait(procs):
        for p in procs:
            p.join()

    processes = []
    proc_limit = 256  # limit concurrent processes to avoid "too many open files" error
    for filename, bytes_range in tar_index.items():
        # filename = "hello.txt"
        # bytes_range = "1024-2048"
        proc = Process(
            target=_upload,
            args=(filename, bytes_range)
        )
        proc.start()
        processes.append(proc)
        if len(processes) == proc_limit:
            _wait(processes)
            processes = []
    _wait(processes)
This program extracts partial raw files from a tar file in an S3 bucket, then uploads each raw file to another S3 bucket. There may be thousands of raw files in a tar file, so I use multiprocessing to speed up the S3 upload.
I got an SSLError in a subprocess while processing the same tar file, randomly. I tried a different tar file and got the same result. Only the last subprocess threw the exception; the remaining ones worked fine.
Process Process-2:
Traceback (most recent call last):
File "/var/runtime/urllib3/response.py", line 441, in _error_catcher
yield
File "/var/runtime/urllib3/response.py", line 522, in read
data = self._fp.read(amt) if not fp_closed else b""
File "/var/lang/lib/python3.9/http/client.py", line 463, in read
n = self.readinto(b)
File "/var/lang/lib/python3.9/http/client.py", line 507, in readinto
n = self.fp.readinto(b)
File "/var/lang/lib/python3.9/socket.py", line 704, in readinto
return self._sock.recv_into(b)
File "/var/lang/lib/python3.9/ssl.py", line 1242, in recv_into
return self.read(nbytes, buffer)
File "/var/lang/lib/python3.9/ssl.py", line 1100, in read
return self._sslobj.read(len, buffer)
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:2633)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self._target(*self._args, **self._kwargs)
File "/var/task/main.py", line 144, in _upload
s3_client.upload_fileobj(
File "/var/runtime/boto3/s3/inject.py", line 540, in upload_fileobj
return future.result()
File "/var/runtime/s3transfer/futures.py", line 103, in result
return self._coordinator.result()
File "/var/runtime/s3transfer/futures.py", line 266, in result
raise self._exception
File "/var/runtime/s3transfer/tasks.py", line 269, in _main
self._submit(transfer_future=transfer_future, **kwargs)
File "/var/runtime/s3transfer/upload.py", line 588, in _submit
if not upload_input_manager.requires_multipart_upload(
File "/var/runtime/s3transfer/upload.py", line 404, in requires_multipart_upload
self._initial_data = self._read(fileobj, threshold, False)
File "/var/runtime/s3transfer/upload.py", line 463, in _read
return fileobj.read(amount)
File "/var/runtime/botocore/response.py", line 82, in read
chunk = self._raw_stream.read(amt)
File "/var/runtime/urllib3/response.py", line 544, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/var/lang/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/var/runtime/urllib3/response.py", line 452, in _error_catcher
raise SSLError(e)
urllib3.exceptions.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:2633)
According to this similar ten-year-old question, Multi-threaded S3 download doesn't terminate, the root cause might be that boto3's S3 upload uses a non-thread-safe library for sending HTTP requests. But that solution doesn't work for me.
I found a boto3 issue matching my question, but there the problem disappeared without any change on the author's part:
Actually, the problem has recently disappeared on its own, without any (!) change on my part. As I thought, the problem was created and fixed by Amazon. I'm only afraid what if it will be a thing again...
Does anyone know how to fix this?
According to the boto3 documentation about multiprocessing (doc),
Resource instances are not thread safe and should not be shared across threads or processes. These special classes contain additional meta data that cannot be shared. It's recommended to create a new Resource for each thread or process:
My modified code,
def multiprocess_s3upload(tar_index: dict):
    def _upload(filename, bytes_range):
        src_key = ...
        # get single raw file in tar with bytes range
        s3_client = boto3.client(service_name="s3")  # <<<< one client per process
        s3_obj = s3_client.get_object(
            Bucket=s3_bucket,
            Key=src_key,
            Range=f"bytes={bytes_range}"
        )
        # upload raw file
        s3_client.upload_fileobj(
            s3_obj["Body"],
            s3_other_bucket,
            filename
        )

    def _wait(procs):
        ...
    ...
With this change, the SSLError no longer seems to occur.
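An equivalent way to guarantee one client per worker, sketched here with a stdlib stand-in so it runs without AWS credentials (`DummyClient` is my placeholder; in real code it would be `boto3.client("s3")`), is to create the client once in a `Pool` initializer instead of once per task:

```python
import os
from multiprocessing import Pool

class DummyClient:
    """Stand-in for boto3.client("s3"); records which process built it."""
    def __init__(self):
        self.pid = os.getpid()

_client = None

def _init_worker():
    # Runs once in each worker process, so each worker gets its own client.
    global _client
    _client = DummyClient()   # real code: _client = boto3.client("s3")

def upload(task):
    # Every task handled by this worker reuses the worker-local client.
    return (task, _client.pid)

def run(n_tasks=4, n_workers=2):
    with Pool(processes=n_workers, initializer=_init_worker) as pool:
        return pool.map(upload, range(n_tasks))

if __name__ == "__main__":
    print(run())
```

Compared with building a client inside every `_upload` call, the initializer pattern amortizes client construction across all tasks a worker handles, while still never sharing a client across process boundaries.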

Segmentation fault in Python with multiprocessing calls to Dialogflow

I'm struggling to understand why the code below leads to a segmentation fault.
It appears to be non-deterministic, occurring about 30% of the time, and I noticed it doesn't happen if I replace processes with threads. (But I need processes in my particular application.)
The code below is an attempt to isolate the error from a large project (which was super hard to do), so please note there may be parts that seem illogical or incomplete, and I also added some lines to increase the chance of triggering the error (like repeating operations multiple times).
In order to run the code, please use your own Dialogflow credentials and set the variables PROJECT_ID, AGENT_NAME and DIALOGFLOW_PROJECT_ID accordingly.
I'm using Python 3.8 with the packages google-api-core==2.0.1, google-auth==2.2.2, google-cloud-dialogflow==2.7.1.
import faulthandler
import os
import shutil
from multiprocessing import Process
from time import sleep

import google.cloud.dialogflow as dialogflow

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = 'credentials.json'  # USE YOUR OWN
SESSION_ID = '0'
PROJECT_ID = '####'  # USE YOUR OWN
AGENT_NAME = '####'  # USE YOUR OWN
DIALOGFLOW_PROJECT_ID = '###'  # USE YOUR OWN

session_client = dialogflow.SessionsClient()
session = session_client.session_path(DIALOGFLOW_PROJECT_ID, SESSION_ID)
faulthandler.enable(all_threads=True)

os.makedirs('dir_foo', exist_ok=True)
shutil.make_archive(f'agent_foo', 'zip', f'dir_foo')  # Make an empty zip file (mock agent)

def create_agent(language):
    print('Creating agent', AGENT_NAME)
    parent = "projects/" + PROJECT_ID
    agents_client = dialogflow.AgentsClient()
    agent = dialogflow.Agent(
        parent=parent, display_name=AGENT_NAME, default_language_code=language, time_zone="GMT"
    )
    print('Creating new agent')
    agents_client.set_agent(request={"agent": agent})
    return agents_client

def upload_bot(language):
    for i in range(5):
        agent = create_agent(language)
        encoded_string = open(f"agent_foo.zip", "rb").read()
        request = {
            "parent": f'projects/{PROJECT_ID}',
            "agent_content": encoded_string
        }
        print('Uploading...')
        for i in range(5):
            agent.restore_agent(request)
        print('Successfully updated bot!')

def execute():
    try:
        upload_bot('en')
    except Exception as e:
        print(e)

processes_list = list()
for i in range(16):
    t = Process(target=execute)
    t.start()
    processes_list.append(t)
    sleep(1)

for c in range(len(processes_list)):
    processes_list[c].join()
print('DONE')
Here's the error message:
Fatal Python error: Segmentation fault
Thread 0x00007f78a0d91740 (most recent call first):
File "#/.envs/project-name-3.8/lib/python3.8/site-packages/grpc/_channel.py", line 933 in _blocking
File "#/.envs/project-name-3.8/lib/python3.8/site-packages/grpc/_channel.py", line 944 in __call__
File "#/.envs/project-name-3.8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 66 in error_remapped_callable
File "#/.envs/project-name-3.8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 142 in __call__
File "#/.envs/project-name-3.8/lib/python3.8/site-packages/google/cloud/dialogflow_v2/services/agents/client.py", line 511 in set_agent
File "#/bm/project-name/DEBUG_segfault.py", line 35 in create_agent
File "#/bm/project-name/DEBUG_segfault.py", line 42 in upload_bot
File "#/bm/project-name/DEBUG_segfault.py", line 57 in execute
File "/usr/lib/python3.8/multiprocessing/process.py", line 108 in run
File "/usr/lib/python3.8/multiprocessing/process.py", line 313 in _bootstrap
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 75 in _launch
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 19 in __init__
File "/usr/lib/python3.8/multiprocessing/context.py", line 276 in _Popen
File "/usr/lib/python3.8/multiprocessing/context.py", line 224 in _Popen
File "/usr/lib/python3.8/multiprocessing/process.py", line 121 in start
File "#/bm/project-name/DEBUG_segfault.py", line 65 in <module>
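For what it's worth, a commonly suggested workaround for fork-related gRPC crashes (my assumption, not something stated in the post) is to use the "spawn" start method, so each child begins with a fresh interpreter, and to create every Dialogflow client inside the child rather than at module level before forking. A runnable skeleton of that structure, with a placeholder body so it works without credentials:

```python
import multiprocessing as mp

def execute():
    # In the real code, dialogflow.SessionsClient() / AgentsClient() would be
    # built HERE, inside the child process, never in the parent beforehand.
    # Placeholder body keeps this sketch runnable without credentials.
    return "ok"

def run_workers(n=4):
    ctx = mp.get_context("spawn")          # fresh interpreter per child
    procs = [ctx.Process(target=execute) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return [p.exitcode for p in procs]     # 0 means the child exited cleanly

if __name__ == "__main__":
    print(run_workers())
```

The note at the top of the question's traceback (the crash inside `grpc/_channel.py`) is consistent with gRPC's background threads and sockets not surviving `fork()`, which is why "spawn" plus child-local clients is worth trying.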

How to fix "google.api_core.exceptions.ServiceUnavailable: 503 Connect Failed" when connected to Google cloud API

I want to use the Google Vision API, and I get the error google.api_core.exceptions.ServiceUnavailable: 503 Connect Failed.
I get the 503 error when connecting to the Google Vision API, even though I have followed the instructions step by step. The link is as follows:
https://cloud.google.com/vision/docs/quickstart-client-libraries
I downloaded my key file (JSON format) and made sure the API is enabled. I set the environment variable and followed the other instructions, putting os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r"/Users/a529157492/Desktop/application_default_credentials 2.json" directly in the code.
However, I still get the error.
import io
import os
from google.cloud import vision
from google.cloud.vision import types
from google.oauth2 import service_account

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \
    r"/Users/a529157492/Desktop/application_default_credentials 2.json"

client = vision.ImageAnnotatorClient()
file_name = os.path.abspath(r'/Users/a529157492/Desktop/demo-img.jpg')

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)
response = client.label_detection(image=image)
labels = response.label_annotations

print('Labels:')
for label in labels:
    print(label.description)
However, I get the following errors:
Traceback (most recent call last):
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/grpc/_channel.py", line 532, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Connect Failed"
    debug_error_string = "{"created":"#1569469218.801077000","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2636,"referenced_errors":[{"created":"#1569469218.801045000","description":"Pick Cancelled","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":241,"referenced_errors":[{"created":"#1569469218.800545000","description":"Connect Failed","file":"src/core/ext/filters/client_channel/subchannel.cc","file_line":663,"grpc_status":14,"referenced_errors":[{"created":"#1569469218.800511000","description":"Failed to connect to remote host: FD shutdown","file":"src/core/lib/iomgr/ev_poll_posix.cc","file_line":532,"grpc_status":14,"os_error":"Timeout occurred","referenced_errors":[{"created":"#1569469218.800460000","description":"connect() timed out","file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":114}],"target_address":"ipv4:172.217.27.138:443"}]}]}]}"

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/a529157492/PycharmProjects/untitled7/pictureanalysis.py", line 24, in <module>
    response = client.label_detection(image=image)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/cloud/vision_helpers/decorators.py", line 101, in inner
    response = self.annotate_image(request, retry=retry, timeout=timeout)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/cloud/vision_helpers/__init__.py", line 72, in annotate_image
    r = self.batch_annotate_images([request], retry=retry, timeout=timeout)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/cloud/vision_v1/gapic/image_annotator_client.py", line 274, in batch_annotate_images
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/Users/a529157492/PycharmProjects/untitled7/venv/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 Connect Failed
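Not from the original post, but the inner error ("connect() timed out" against `ipv4:172.217.27.138:443`) points at a network-path problem rather than a credentials problem. Two quick checks worth running (a diagnostic sketch, my own addition): whether proxy variables are set in the environment, and whether a plain TCP connection to the API endpoint succeeds at all.

```python
import os
import socket

def check_endpoint(host="vision.googleapis.com", port=443, timeout=5):
    """Return True if a plain TCP connection to the endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as e:
        print(f"cannot reach {host}:{port}: {e}")
        return False

# Proxy variables that commonly interfere with (or are required by) gRPC:
for var in ("https_proxy", "HTTPS_PROXY", "grpc_proxy", "no_proxy"):
    print(var, "=", os.environ.get(var))
```

If `check_endpoint()` fails while the same host loads in a browser, the browser is likely going through a proxy that the Python process does not know about.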

python3 URLError unknown url type http

I am trying to use urllib.request.urlretrieve along with the multiprocessing module to download some files and do some processing on them. However, each time I try to run my program, it gives me the error:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.4/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.4/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "./thumb.py", line 13, in download_and_convert
filename, headers = urlretrieve(url)
File "/usr/lib/python3.4/urllib/request.py", line 186, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 463, in open
response = self._open(req, data)
File "/usr/lib/python3.4/urllib/request.py", line 486, in _open
'unknown_open', req)
File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 1252, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: http>
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./thumb.py", line 27, in <module>
pool.map(download_and_convert, enumerate(csvr))
File "/usr/lib/python3.4/multiprocessing/pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib/python3.4/multiprocessing/pool.py", line 599, in get
raise self._value
urllib.error.URLError: <urlopen error unknown url type: http>
The url that it seems to choke on is http://phytoimages.siu.edu/users/vitt/10_27_06_2/Equisetumarvense.JPG. Here is my code:
#!/usr/bin/env python3
from subprocess import Popen
from sys import argv, stdin
import csv
from multiprocessing import Pool
from urllib.request import urlretrieve

def download_and_convert(args):
    num, url_list = args
    url = url_list[0]
    try:
        filename, headers = urlretrieve(url)
    except:
        print(url)
        raise
    Popen(["convert", filename, "-resize", "250x250",
           str(num) + '.' + url.split('.')[-1]])

if __name__ == "__main__":
    csvr = csv.reader(open(argv[1]))
    if len(argv) > 2:
        nprocs = int(argv[2])  # Pool expects an int, not a string
    else:
        nprocs = None
    pool = Pool(processes=nprocs)
    pool.map(download_and_convert, enumerate(csvr))
I have no idea why this error is occurring. Could it be because I am using multiprocessing? If anyone could help, it would be much appreciated.
Edit: This is the first URL it tries to process, and the error doesn't change if I change it.
Check this code snippet
>>> import urllib.parse
>>> urllib.parse.quote(':')
'%3A'
As you can see, urllib treats the ':' character as something to be percent-encoded, which coincidentally is where your program hangs up on you.
Try urllib.parse.urlencode(); that should put you on the right track.
After some help from the comments I found the solution. It turned out the problem was the csv module tripping over the byte order mark (BOM). I was able to fix it by opening the file with encoding='utf-8-sig', as suggested here.
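The failure mode can be reproduced in a few lines (file name and contents below are made up for illustration): a CSV saved with a UTF-8 byte order mark attaches an invisible '\ufeff' to the first field, so the "http" scheme is no longer recognized by urllib.

```python
import csv
import os
import tempfile

# Build a CSV whose first bytes are the UTF-8 BOM, as some editors produce.
path = os.path.join(tempfile.mkdtemp(), "urls.csv")
with open(path, "wb") as f:
    f.write(b"\xef\xbb\xbfhttp://example.com/a.jpg,label\n")

# Plain utf-8 keeps the BOM as part of the first cell...
with open(path, encoding="utf-8") as f:
    bad_url = next(csv.reader(f))[0]
print(repr(bad_url))   # '\ufeffhttp://example.com/a.jpg'

# ...while utf-8-sig strips it, which is the fix described above.
with open(path, encoding="utf-8-sig") as f:
    good_url = next(csv.reader(f))[0]
print(repr(good_url))  # 'http://example.com/a.jpg'
```

This also explains why the error message looks so strange: the scheme urllib reports as "unknown" prints as plain `http`, because the leading BOM character is invisible.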

Python always returning Network is unreachable because of old ipv6 configuration

I currently get a Network is unreachable error for any request I make with Python, whether I use the urllib library or the requests library.
After some more research, it's likely being caused by an incorrectly set up IPv6 tunnel, which seems to still be active:
$ ip -6 addr show
$ ip -6 route
default dev wlan0 metric 1
Some context: I'm running Arch Linux and updated the system today, although there didn't seem to be any Python-related updates. I'm running this in a virtualenv, but other virtualenvs, and the Python outside any virtualenv, have the same problem.
I'm using a VPN, but I get the same error without it.
I also tried restarting the PC, which normally helps with any problem, but it didn't help either.
I have a feeling it may be Arch Linux related, but I'm not sure.
This is what I tried before to set up an IPv6 tunnel:
sudo modprobe ipv6
sudo ip tunnel del sit1
sudo ip tunnel add sit1 mode sit remote 59.66.4.50 local $ipv4
sudo ifconfig sit1 down
sudo ifconfig sit1 up
sudo ifconfig sit1 add 2001:da8:200:900e:0:5efe:$ipv4/64
sudo ip route add ::/0 via 2001:da8:200:900e::1 metric 1
Also used this command:
ip -6 addr add 2001:0db8:0:f101::1/64 dev eth0
Update 3: after removing the IPv6 line from my /etc/sysctl.conf, some URLs started working:
>>> urllib2.urlopen('http://www.google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
>>> urllib2.urlopen('http://baidu.com')
<addinfourl at 27000560 whose fp = <socket._fileobject object at 0x7f4d1fed5e50>>
This is the error log from IPython.
In [1]: import urllib2
In [2]: urllib2.urlopen('http://google.com')
---------------------------------------------------------------------------
URLError Traceback (most recent call last)
/home/samos/<ipython console> in <module>()
/usr/lib/python2.7/urllib2.py in urlopen(url, data, timeout)
124 if _opener is None:
125 _opener = build_opener()
--> 126 return _opener.open(url, data, timeout)
127
128 def install_opener(opener):
/usr/lib/python2.7/urllib2.py in open(self, fullurl, data, timeout)
398 req = meth(req)
399
--> 400 response = self._open(req, data)
401
402 # post-process response
/usr/lib/python2.7/urllib2.py in _open(self, req, data)
416 protocol = req.get_type()
417 result = self._call_chain(self.handle_open, protocol, protocol +
--> 418 '_open', req)
419 if result:
420 return result
/usr/lib/python2.7/urllib2.py in _call_chain(self, chain, kind, meth_name, *args)
376 func = getattr(handler, meth_name)
377
--> 378 result = func(*args)
379 if result is not None:
380 return result
/usr/lib/python2.7/urllib2.py in http_open(self, req)
1205
1206 def http_open(self, req):
-> 1207 return self.do_open(httplib.HTTPConnection, req)
1208
1209 http_request = AbstractHTTPHandler.do_request_
/usr/lib/python2.7/urllib2.py in do_open(self, http_class, req)
1175 except socket.error, err: # XXX what error?
1176 h.close()
-> 1177 raise URLError(err)
1178 else:
1179 try:
URLError: <urlopen error [Errno 101] Network is unreachable>
I can access google.com normally from a web browser, and I'm pretty sure the network is reachable.
Are you sure you are not using an HTTP proxy server to get to the internet?
Try changing the network settings in your browser to no-proxy and check whether it still connects to the internet.
If you are using a proxy (assume the proxy address is http://yourproxy.com), then try this to check whether it solves the issue:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'yourproxy.com'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
urllib2.urlopen('http://www.google.com')
The big thing was that when I put back my backup of the /etc/hosts file, everything started working again. I had edited it to bypass the Great Firewall by manually setting the IPv6 addresses of Facebook etc., so Python was still trying those IPv6 addresses...
Lesson learned: don't work too much and don't do things in a hurry. Always try to understand what you're doing and write down exactly what you did, so you have a way to fall back.
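A quick way to see this kind of problem directly (my own diagnostic sketch): `socket.getaddrinfo` consults /etc/hosts before DNS, so a stale manual entry, such as an old IPv6 address pinned for a site, shows up in the list of addresses Python will actually try to connect to.

```python
import socket

def resolved_addresses(host, port=80):
    """Return the distinct addresses Python would try for this host."""
    return sorted({info[4][0] for info in socket.getaddrinfo(host, port)})

# 'localhost' keeps this example self-contained; in the situation above,
# the interesting call would be resolved_addresses("google.com"), which
# would have shown the stale IPv6 address from /etc/hosts.
print(resolved_addresses("localhost"))
```

If the output contains an IPv6 address you pinned by hand long ago, that address is what urllib is failing to reach, regardless of what your browser (which may use a proxy or different resolver settings) manages to do.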
Removing the following line from /etc/sysctl.conf seemed to help a little bit:
net.ipv6.conf.all.forwarding = 1
It didn't help in the end; I was still getting Network is unreachable for google.com, although it's accessible in my browser:
>>> urllib2.urlopen('http://baidu.com')
<addinfourl at 27000560 whose fp = <socket._fileobject object at 0x7f4d1fed5e50>>
>>> urllib2.urlopen('http://www.google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
