Python: ipaddress.AddressValueError: At least 3 parts expected

The is_private attribute of ipaddress.IPv4Address / IPv4Network can be used to check whether an IP address is reserved for private use.
In IPython:
In [52]: IPv4Address(u'169.254.255.1').is_private
Out[52]: False
Yet if I try the exact same thing in a function:
import ipaddress

def isPrivateIp(ip):
    unicoded = unicode(ip)
    if ipaddress.IPv4Network(unicoded).is_private or ipaddress.IPv6Network(unicoded).is_private:
        return True
    else:
        return False

print isPrivateIp(r'169.254.255.1')
I get:
File "isPrivateIP.py", line 13, in <module>
print isPrivateIp(ur'169.254.255.1')
File "isPrivateIP.py", line 7, in isPrivateIp
if ipaddress.IPv4Network(unicoded).is_private or ipaddress.IPv6Network(unicoded).is_private:
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipaddress.py", line 2119, in __init__
self.network_address = IPv6Address(self._ip_int_from_string(addr[0]))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipaddress.py", line 1584, in _ip_int_from_string
raise AddressValueError(msg)
ipaddress.AddressValueError: At least 3 parts expected in u'169.254.255.1'
Why is this the case?
Note: In python 2, ip addresses must be passed to ipaddress functions as unicode objects, hence calling unicode() on the string input ip.

ipaddress.IPv6Network() expects a different input format than ipaddress.IPv4Network(): an IPv6 address must have at least three colon-separated parts, which is exactly what the error message complains about. Because "or" only short-circuits when its left operand is truthy, and IPv4Network(u'169.254.255.1').is_private is False here, the IPv6Network(unicoded) call is still evaluated, and it rejects the dotted-quad string. If you remove "or ipaddress.IPv6Network(unicoded).is_private" from your code, it works fine.
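A version-agnostic check can sidestep the problem entirely: ipaddress.ip_address() detects IPv4 vs IPv6 itself, so a dotted-quad string never reaches the IPv6 parser. A minimal sketch (the function name is my own):

```python
import ipaddress

def is_private_ip(ip):
    # ip_address() returns an IPv4Address or IPv6Address as appropriate,
    # so there is no need to try both network classes and risk an
    # AddressValueError from the wrong address family.
    try:
        return ipaddress.ip_address(u'%s' % ip).is_private
    except ValueError:
        # not a valid IP address at all
        return False

print(is_private_ip('10.0.0.1'))  # True
print(is_private_ip('8.8.8.8'))   # False
```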


Override a function in nltk - Error in ContextIndex class

I am using the text.similar('example') function from the nltk.Text module, which prints the words that are similar to a given word, based on the corpus.
However, I want to store those words in a list, but the function itself returns None.
# text is a variable of nltk.Text module
simList = text.similar("physics")

>>> a = text.similar("physics")
the and a in science this which it that energy his of but chemistry is
space mathematics theory as mechanics
>>> a
>>> a
# a contains no value
So should I modify the source function itself? I don't think that is good practice. How can I override that function so that it returns the value?
Edit: Referring to this thread, I tried using the ContextIndex class, but I am getting the following error.
File "test.py", line 39, in <module>
  text = nltk.text.ContextIndex(word.lower() for word in words)
File "/home/kenden/den/codes/nlpenv/local/lib/python2.7/site-packages/nltk/text.py", line 56, in __init__
  for i, w in enumerate(tokens))
File "/home/kenden/den/codes/nlpenv/local/lib/python2.7/site-packages/nltk/probability.py", line 1752, in __init__
  for (cond, sample) in cond_samples:
File "/home/kenden/den/codes/nlpenv/local/lib/python2.7/site-packages/nltk/text.py", line 56, in <genexpr>
  for i, w in enumerate(tokens))
File "/home/kenden/den/codes/nlpenv/local/lib/python2.7/site-packages/nltk/text.py", line 43, in _default_context
  right = (tokens[i+1].lower() if i != len(tokens) - 1 else '*END*')
TypeError: object of type 'generator' has no len()
This is my line 39 of test.py
text = nltk.text.ContextIndex(word.lower() for word in words)
How can I solve this?
You are getting the error because the ContextIndex constructor tries to take the len() of its tokens argument, but you pass it a generator, hence the error. To avoid the problem, pass a real list, e.g. a list comprehension:
text = nltk.text.ContextIndex([word.lower() for word in words])
(If what you are ultimately after is the list of similar words itself, ContextIndex.similar_words('physics') returns a list instead of printing.)
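The failure is easy to reproduce without nltk: the sketch below mimics the len()/indexing access pattern of nltk's _default_context (it is not nltk's actual code), which is what a generator cannot support:

```python
def default_context(tokens, i):
    # mimics nltk.text.ContextIndex._default_context: it looks one token
    # to the left and right, so it needs len() and random access
    left = tokens[i - 1].lower() if i != 0 else '*START*'
    right = tokens[i + 1].lower() if i != len(tokens) - 1 else '*END*'
    return (left, right)

words = ["Physics", "is", "fun"]
gen = (w.lower() for w in words)

# default_context(gen, 0) would raise
#   TypeError: object of type 'generator' has no len()
# so materialize the generator into a list first:
tokens = list(gen)
print([default_context(tokens, i) for i in range(len(tokens))])
# → [('*START*', 'is'), ('physics', 'fun'), ('is', '*END*')]
```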

How can I make a file created in a function readable by other functions?

I need the contents of a file made by one function to be readable by other functions. The closest I've come is importing a function within another function, but that only causes the imported function to run. The following code is what I'm using. According to the tutorials I've read, Python will either open a file if it exists or create one if it doesn't. What's happening is that in "def spacer" the file "loader.py" is duplicated with no content.
def load():  # all this is input with a couple of filters
    first = input("1st lot#: ")
    last = input("last lot#: ")
    for a in range(first, last+1):
        x = raw_input("?:")
        while x == (""):
            print " Error",
            x = raw_input("?")
        while int(x) > 35:
            print "Error",
            x = raw_input("?")
        num = x  # python thinks this is a tuple
        num = str(num)
        f = open("loader.py", "a")  # this is the file I want to share
        f.write(num)
        f.close()
    f = open("loader.py", "r")  # just shows that the file is being
    print f.read()              # appended
    f.close()
    print "Finished loading"

def spacer():
    count = 0
    f = open("loader.py", "r")  # this is what I thought would open the
                                # file but just opens a new 1 with the
                                # same name
    length = len(f.read())
    print type(f.read(count))
    print f.read(count)
    print f.read(count+1)
    for a in range(1, length+1):
        print f.read(count)
        vector1 = int(f.read(count))
        vector2 = int(f.read(count+1))
        if vector1 == vector2:
            space = 0
        if vector1 < vector2:
            space = vector2 - vector1
        else:
            space = (35 - vector1) + vector2
        count =+ 1
        b = open("store_space.py", "w")
        b.write(space)
        b.close()

load()
spacer()
This is what I get:
1st lot#: 1
last lot#: 1
?:2
25342423555619333523452624356232184517181933235991010111348287989469658293435253195472514148238543246547722232633834632
Finished loading   # end of "def load"; it shows the file is being appended
<type 'str'>       # from "def spacer" -- I realized Python was creating
                   # another file named "loader.py" with nothing in it;
                   # you can see this in the error messages below
Traceback (most recent call last):
File "C:/Python27/ex1", line 56, in <module>
spacer()
File "C:/Python27/ex1", line 41, in spacer
vector1= int(f.read(count))
ValueError: invalid literal for int() with base 10: ''
The file probably has content, but you're not reading it properly. You have:
count=0
#...
vector1= int(f.read(count))
You told Python to read 0 bytes, so it returns an empty string. Then it tries to convert the empty string to an int, and this fails as the error says, because an empty string is not a valid representation of an integer value. Note also that read() consumes the file position: after length = len(f.read()) the file is already at end-of-file, so even a positive size would return '' until you rewind with f.seek(0).
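A minimal sketch of the read(size) semantics, using a throwaway file name of my own:

```python
# create a small file to read back (illustrative name, not "loader.py")
with open("loader_demo.txt", "w") as f:
    f.write("253424")

f = open("loader_demo.txt", "r")
zero = f.read(0)   # '' -- zero bytes requested, empty string back
one = f.read(1)    # '2' -- the next single character
f.seek(0)          # rewind: read() advances the file position
data = f.read()    # '253424' -- the whole file from the start
f.close()
print(repr(zero), repr(one), repr(data))
```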

Is there a way to implement **kwargs behavior when calling a Python script from the command line

Say I have a function as follows:
def foo(**kwargs):
    print kwargs
When I call the function like this, I get a handy little dict of all the kwargs:
>>> foo(a = 5, b = 7)
{'a': 5, 'b': 7}
I want to do this directly to scripts I call from command line. So entering this:
python script.py a = 5 b = 7
Would create a similar dict to the example above. Can this be done?
Here's what I have so far:
import sys
kwargs_raw = sys.argv[1:]
kwargs = {key: val for key, val in zip(kwargs_raw[::3], kwargs_raw[2::3])}
print kwargs
And here's what this produces:
Y:\...\Python>python test.py a = 5 b = 7
{'a': '5', 'b': '7'}
So you may be wondering why this isn't good enough:
- It's very rigidly structured, and thus won't work if a or b are anything other than strings, ints, or floats.
- I have no way of determining whether the user intended 5 to be an int, string, or float.
I've seen ast.literal_eval() mentioned around here before, but I couldn't figure out how to get it to work. Both my attempts failed:
>>> ast.literal_eval("a = 5")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Y:\admin\Anaconda\lib\ast.py", line 49, in literal_eval
node_or_string = parse(node_or_string, mode='eval')
File "Y:\admin\Anaconda\lib\ast.py", line 37, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
a = 5
and
>>> ast.literal_eval("{a:5,b:7}")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Y:\admin\Anaconda\lib\ast.py", line 80, in literal_eval
return _convert(node_or_string)
File "Y:\admin\Anaconda\lib\ast.py", line 63, in _convert
in zip(node.keys, node.values))
File "Y:\admin\Anaconda\lib\ast.py", line 62, in <genexpr>
return dict((_convert(k), _convert(v)) for k, v
File "Y:\admin\Anaconda\lib\ast.py", line 79, in _convert
raise ValueError('malformed string')
ValueError: malformed string
If it matters, I'm using Python 2.7.6 32-bit on Windows 7 64-bit. Thanks in advance
It seems what you're really looking for is a way to parse command-line arguments. Take a look at the argparse module: http://docs.python.org/2/library/argparse.html#module-argparse
(For the record, your ast.literal_eval() attempts failed for different reasons: "a = 5" is a statement, not an expression, so it cannot be parsed in eval mode; and in "{a:5,b:7}" the keys a and b are bare names rather than literals. ast.literal_eval("{'a':5,'b':7}") would succeed.)
Alternately, if you really want to give your arguments in dictionary-ish form, just use the json module. Note that JSON requires double quotes around keys and strings, so the shell quoting matters:
import json, sys
# Run your program as:
#   python my_prog.py "{\"foo\": 1, \"bar\": 2}"
# (on a Unix shell: python my_prog.py '{"foo": 1, "bar": 2}')
data = json.loads(sys.argv[1])
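If argparse feels too heavy, a key=value convention keeps the **kwargs flavor while letting ast.literal_eval() infer each value's type. A sketch (parse_kwargs is my own helper, not a library function):

```python
import ast
import sys

def parse_kwargs(argv):
    """Parse ['a=5', 'b=7.5', 'c=hello'] into a typed dict."""
    kwargs = {}
    for arg in argv:
        key, _, raw = arg.partition('=')
        try:
            # literal_eval safely evaluates ints, floats, strings,
            # lists, dicts, etc. -- without the dangers of eval()
            kwargs[key] = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            # not a Python literal: keep it as a plain string
            kwargs[key] = raw
    return kwargs

print(parse_kwargs(['a=5', 'b=7.5', 'c=hello']))
# → {'a': 5, 'b': 7.5, 'c': 'hello'}
```

Invoked as `python script.py a=5 b=7.5 c=hello`, you would pass sys.argv[1:] to parse_kwargs.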

Mongoengine - using icontains with all

I have seen this question but it does not answer my question, or even pose it very well.
I think that this is best explained with an example:
class Blah(Document):
    someList = ListField(StringField())

Blah.drop_collection()
Blah(someList=['lop', 'glob', 'hat']).save()
Blah(someList=['hello', 'kitty']).save()

# One of these should match the first entry
print(Blah.objects(someList__icontains__all=['Lo']).count())
print(Blah.objects(someList__all__icontains=['Lo']).count())
I assumed that this would print either 1, 0 or 0, 1 (or miraculously 1, 1) but instead it gives
0
Traceback (most recent call last):
File "metst.py", line 14, in <module>
print(Blah.objects(someList__all__icontains=['lO']).count())
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 1034, in count
return self._cursor.count(with_limit_and_skip=True)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 608, in _cursor
self._cursor_obj = self._collection.find(self._query,
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 390, in _query
self._mongo_query = self._query_obj.to_query(self._document)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 213, in to_query
query = query.accept(QueryCompilerVisitor(document))
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 278, in accept
return visitor.visit_query(self)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 170, in visit_query
return QuerySet._transform_query(self.document, **query.query)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/queryset.py", line 755, in _transform_query
value = field.prepare_query_value(op, value)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/fields.py", line 594, in prepare_query_value
return self.field.prepare_query_value(op, value)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/site-packages/mongoengine/fields.py", line 95, in prepare_query_value
value = re.escape(value)
File "/home/blah/.pythonbrew/pythons/Python-3.1.4/lib/python3.1/re.py", line 246, in escape
return bytes(s)
TypeError: 'str' object cannot be interpreted as an integer
Neither query works!
Does MongoEngine support some way to search using icontains and all? Or some way to get around this?
Note: I want to use MongoEngine, not PyMongo.
Edit: The same issue exists with Python 2.7.3.
The only way to do this, as of now (version 0.8.0), is by using a __raw__ query, possibly combined with re.compile(), like so:
import re
input_list = ['Lo']
converted_list = [re.compile(q, re.I) for q in input_list]
print(Blah.objects(__raw__={"someList": {"$all": converted_list}}).count())
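Outside the database, you can see why this works: an unanchored pattern compiled with re.I matches anywhere in a string, case-insensitively, which is the icontains behavior that MongoDB's $all then applies per list element:

```python
import re

pattern = re.compile('Lo', re.I)  # same conversion as in the answer above
# an unanchored, case-insensitive regex behaves like icontains
print(bool(pattern.search('lop')))  # True: matches the 'lo' in 'lop'
print(bool(pattern.search('hat')))  # False: no 'lo' anywhere
```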
There is currently no way in mongoengine to combine all and icontains: the only operator that can be chained with other operators is not. This is only subtly mentioned in the docs, which say:
not – negate a standard check, may be used before other operators (e.g. Q(age__not__mod=5))
(emphasis mine)
But they do not say that you cannot do this with the other operators, which is actually the case.
You can confirm this behavior by looking at the source:
version 0.8.0+ (in module - mongoengine/queryset/transform.py - lines 42-48):
if parts[-1] in MATCH_OPERATORS:
    op = parts.pop()

negate = False
if parts[-1] == 'not':
    parts.pop()
    negate = True
In older versions the above lines can be seen in mongoengine/queryset.py within the _transform_query method.

Bug in python tokenize?

Why would this simplest of code

if 1 \
and 0:
    pass

choke on a tokenize/untokenize cycle?
import tokenize
import cStringIO

def tok_untok(src):
    f = cStringIO.StringIO(src)
    return tokenize.untokenize(tokenize.generate_tokens(f.readline))

src = '''if 1 \\
and 0:
    pass
'''
print tok_untok(src)
It throws:
AssertionError:
File "/mnt/home/anushri/untitled-1.py", line 13, in <module>
print tok_untok(src)
File "/mnt/home/anushri/untitled-1.py", line 6, in tok_untok
tokenize.untokenize(tokenize.generate_tokens(f.readline))
File "/usr/lib/python2.6/tokenize.py", line 262, in untokenize
return ut.untokenize(iterable)
File "/usr/lib/python2.6/tokenize.py", line 198, in untokenize
self.add_whitespace(start)
File "/usr/lib/python2.6/tokenize.py", line 187, in add_whitespace
assert row <= self.prev_row
Is there a workaround that doesn't require modifying the src being tokenized? (It seems the backslash continuation is the culprit.)
Another example where it fails: if there is no newline at the end, e.g. src = 'if 1:pass' fails with the same error.
Workaround:
But it seems using untokenize in a different way works:
def tok_untok(src):
    f = cStringIO.StringIO(src)
    tokens = [t[:2] for t in tokenize.generate_tokens(f.readline)]
    return tokenize.untokenize(tokens)
i.e. do not pass back the whole token tuple, but only t[:2], even though the Python docs say the extra elements are ignored:
Converts tokens back into Python source code. The iterable must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored.
Yes, it's a known bug and there is interest in a cleaner patch than the one attached to that issue. Perfect time to contribute to a better Python ;)
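For what it's worth, the two-element workaround carries over unchanged to Python 3 (where StringIO lives in io); a quick sketch of the round trip:

```python
import io
import tokenize

def tok_untok(src):
    # passing only (type, string) pairs puts untokenize into its
    # "compat" mode, which sidesteps the whitespace bookkeeping
    # that trips the assertion
    tokens = [t[:2] for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    return tokenize.untokenize(tokens)

src = 'if 1 \\\nand 0:\n    pass\n'
out = tok_untok(src)
# the output is re-spaced but still valid Python
compile(out, '<untok>', 'exec')
print(out)
```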
