I'm creating some Python scripts for automated testing of some Couchbase operations.
Something unexpected happens while executing this code:
for i in range(0, BUCKETS_AMOUNT):  # BUCKETS_AMOUNT = 4
    bucket_name = '%s%s' % (BUCKET_NAME_PREFIX, i)  # BUCKET_NAME_PREFIX = 'test_bck_'
    print('Creating bucket: %s' % bucket_name)
    admin.bucket_create(bucket_name, ram_quota=512, replicas=1)
    print('Opening bucket: %s' % bucket_name)
    bucket = cluster.open_bucket(bucket_name)
    print('Bucket: %s' % bucket)
    inserted_data[bucket_name] = _fill_bucket(bucket)
The output is:
<Key='/pools/default/buckets/test_bck_1', RC=0x3B[HTTP Operation failed. Inspect status code for details], HTTP Request failed. Examine 'objextra' for full result, Results=1, C Source=(src/http.c,144), OBJ=HttpResult<rc=0x0, value=b'Requested resource not found.\r\n', http_status=404, url=/pools/default/buckets/test_bck_1, tracing_context=0, tracing_output=None>, Tracing Output={"/pools/default/buckets/test_bck_1": null}>
Creating bucket: test_bck_0
Opening bucket: test_bck_0
E
======================================================================
ERROR: test_backup (__main__.TestBackup)
----------------------------------------------------------------------
Traceback (most recent call last):
File "couchbase_backup_test.py", line 29, in test_backup
expected = create_and_fill_test_buckets(self.cluster, self.admin)
File "/u01/app/couchbase/bucket_data_util.py", line 41, in create_and_fill_test_buckets
bucket = cluster.open_bucket(bucket_name)
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/cluster.py", line 144, in open_bucket
rv = self.bucket_class(str(connstr), **kwargs)
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/bucket.py", line 273, in __init__
self._do_ctor_connect()
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/bucket.py", line 282, in _do_ctor_connect
self._connect()
couchbase.exceptions._ProtocolError_0x16 (generated, catch ProtocolError): <RC=0x16[Data received on socket was not in the expected format], There was a problem while trying to send/receive your request over the network. This may be a result of a bad network or a misconfigured client or server, C Source=(src/bucket.c,1066)>
----------------------------------------------------------------------
In this example, bucket test_bck_0 is created and filled, but the client seems to be trying to open test_bck_1 before it has even been created.
When I execute this code remotely, everything works perfectly, but I need to run it locally on the actual node.
There is a slight version difference, but I have no way to align the versions.
Couchbase server version: 5.1
It works remotely from:
OS: Windows 7 x64
Python: 3.4.4
couchbase: 2.3.5
Does not work from:
OS: Red Hat Enterprise 7.5
Python: 3.6.3
couchbase: 2.4.0
The problem is that creating a bucket is an asynchronous action, so there needs to be a delay between issuing the create-bucket request and opening the bucket.
Adding something like this between creating and opening the bucket will help:
import time
time.sleep(5)
You probably aren't seeing this happen when running your script against the remote cluster because it's likely a dedicated cluster with more resources (CPU/RAM), plus the network latency adds a little extra delay.
You can also use couchbase-cli bucket-create to create buckets; the CLI exposes many more operations than the SDK API does.
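A fixed sleep works, but it can still race on a slow node. A sturdier alternative is to retry the open until it succeeds. Here is a minimal sketch; the helper name open_bucket_with_retry, the attempt count, and the delay are my own choices, not part of the SDK:

import time

def open_bucket_with_retry(cluster, bucket_name, attempts=10, delay=1.0):
    # Keep retrying until the server has finished creating the bucket.
    for _ in range(attempts):
        try:
            return cluster.open_bucket(bucket_name)
        except Exception:  # deliberately broad; the SDK raises several exception types here
            time.sleep(delay)
    raise RuntimeError('Bucket %s never became ready' % bucket_name)

With this in place, bucket = open_bucket_with_retry(cluster, bucket_name) drops in where cluster.open_bucket(bucket_name) is called now.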
Using the library py3-validate-email-1.0.5 (more here) to check whether an email address is valid, including the SMTP check, I wasn't able to make it through the check_smtp step, because I get the error shown below.
Python script:
from validate_email import validate_email
from validate_email import validate_email_or_fail
from csv import DictReader

# iterate over each line by column name
with open('email-list.csv', 'r') as read_obj:
    csv_dict_reader = DictReader(read_obj, delimiter=';')
    for row in csv_dict_reader:
        i = 1
        while i < 21:
            header_name = 'Email' + str(i)
            if validate_email_or_fail(
                    email_address=row[header_name],
                    check_format=True,
                    check_blacklist=True,
                    check_dns=True,
                    dns_timeout=10,
                    check_smtp=True,
                    smtp_timeout=5,
                    smtp_helo_host='emailsrv.domain.com',
                    smtp_from_address='email@domain.com',
                    smtp_skip_tls=False,
                    smtp_tls_context=None,
                    smtp_debug=False):
                print('Email ' + row[header_name] + ' is valid.')
            else:
                print('Email ' + row[header_name] + ' is invalid.')
            i += 1
Error:
Traceback (most recent call last):
File "//./main.py", line 13, in <module>
if validate_email_or_fail(
File "/usr/local/lib/python3.9/site-packages/validate_email/validate_email.py", line 59, in validate_email_or_fail
return smtp_check(
File "/usr/local/lib/python3.9/site-packages/validate_email/smtp_check.py", line 229, in smtp_check
return smtp_checker.check(hosts=mx_records)
File "/usr/local/lib/python3.9/site-packages/validate_email/smtp_check.py", line 197, in check
raise SMTPTemporaryError(error_messages=self.__temporary_errors)
validate_email.exceptions.SMTPTemporaryError: Temporary error in email address verification:
mx.server.com: 451 timed out (in reply to 'connect')
I figured there was (probably) a problem with my DNS settings, so I dockerized the script and ran it on AWS EC2. I used an Elastic IP, attached it to the EC2 instance where the Docker container is running, and also set up reverse DNS for the domain emailsrv.domain.com with this Elastic IP. I tried to run the script again; no change.
Then I added an MX record pointing to emailsrv.domain.com, but still no change. The DNS records are set up properly, because I have checked them with multiple DNS tools.
Since the library doesn't require my actual email login details, I wonder what the problem can be. Just to be sure: the email address used in the script doesn't exist, since I obviously don't have an SMTP server set up on that instance.
Any ideas?
The reason behind this was a closed port on the AWS EC2 instance. Opening the port in the security group is not enough; you must send a request to AWS so they remove the restriction on port 25.
When they did that, it worked flawlessly.
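For anyone debugging the same symptom, it helps to separate "port 25 is blocked" from "DNS is wrong" by attempting the SMTP handshake directly. A minimal sketch, assuming Python 3; mx.server.com is a placeholder for whatever MX host your own lookup returns:

import smtplib
import socket

MX_HOST = 'mx.server.com'  # placeholder: use an MX host from your own DNS lookup

try:
    # If the EC2 outbound-25 throttle is in place, this connect times out.
    with smtplib.SMTP(MX_HOST, 25, timeout=10) as smtp:
        code, banner = smtp.noop()
        print('Port 25 reachable, server replied: %s %s' % (code, banner))
except (socket.timeout, OSError) as exc:
    print('Port 25 appears blocked or filtered: %s' % exc)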
I am having a problem running the following program (sparql_test.py) from a Linux machine. Virtuoso server is installed on the same Linux machine, where I have neither sudo permission nor browser access. However, I can execute SPARQL queries from the isql prompt (SQL>) successfully.
Program: sparql_test.py
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("select ?s where { ?s a <http://ehrofip.com/data/Admissions>.} limit 10")
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
for res in result["results"]["bindings"]:
    print(res)
I got the following error:
[suresh#deodar complex2vec]$ python sparql_test.py
Traceback (most recent call last):
File "sparql1.py", line 14, in "<module>"
result = sparql.query().convert()
File "/home/suresh/.local/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 687, in query
return QueryResult(self._query())
File "/home/suresh/.local/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 667, in _query
raise e
urllib2.HTTPError: HTTP Error 502: Bad Gateway
However, the above program runs smoothly on my own laptop. What might be the problem? Is this a connection issue?
Thank you
Best,
Suresh
I do not believe this error is raised by Virtuoso. I believe it is raised by SPARQLWrapper.
It looks like there's something between the outside world (which includes the Linux machine itself) and the Virtuoso listener on port 8890. The "Bad Gateway" suggests there may be two things in the way: a reverse proxy and a firewall.
Port 8890 (set as [HttpServer]:Listen in the INI file) must be open to communications, direct or proxied, for SPARQL access to work.
iSQL talks to port 1111 (set as [Parameters]:Listen in the INI file), which apparently doesn't have a similar block/proxy.
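One way to confirm where the 502 comes from is to bypass SPARQLWrapper and hit the endpoint with a plain HTTP request. A minimal sketch (Python 3; the trivial ASK {} query is just a placeholder):

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# ASK {} URL-encoded, requesting JSON results from the local listener.
url = ('http://localhost:8890/sparql'
       '?query=ASK%20%7B%7D'
       '&format=application%2Fsparql-results%2Bjson')
try:
    with urlopen(url, timeout=10) as resp:
        print(resp.status, resp.read()[:200])
except (HTTPError, URLError) as exc:
    # A 502 here as well would mean the problem sits in front of Virtuoso,
    # not inside SPARQLWrapper.
    print('Request failed:', exc)

If this also returns 502, inspect whatever proxy or firewall sits in front of port 8890 rather than the Python code.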
I have successfully used python-ldap to connect to a Windows 2012 R2 server over LDAPS in the past. The procedure I used was as follows:
Python code:
import ldap
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
ldap.set_option(ldap.OPT_DEBUG_LEVEL, 255)
ip = '<redacted>'
url = "%s://%s:%d" % ('ldaps', ip, 636)
ld = ldap.initialize(url)
ld.protocol_version = 3
ld.set_option(ldap.OPT_REFERRALS, ldap.OPT_OFF)
user = '<redacted>'
passwd = '<redacted>'
ld.simple_bind_s('<redacted>\%s' % user, passwd)
And on the Windows server, I used Server Manager to add an 'AD CS' role and created a root certificate. I do not care about verifying the certificate, just about using some encryption. After creating the root certificate, LDAPS was enabled on the server, and this code ran without error.
Now I have followed the exact same procedure on Windows Server 2016, and the results are not so nice. I have managed to get a few different errors from the same script, usually either 'A TLS packet with unexpected length was received.' or 'Error in the push function.'. I have searched for a few hours but have not been able to find a solution. Does anyone know if extra configuration steps are needed on the Windows server, or if something about my script is incorrect?
The client I am testing with uses Python 2.7 and Ubuntu 14.04; pip2.7 has updated the python-ldap library to the latest version. Here is an example of a failed script run:
ldap_create
ldap_url_parse_ext(ldaps://<redacted>:636)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP <redacted>:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying <redacted>:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
TLS: can't connect: Error in the push function..
ldap_err2string
Traceback (most recent call last):
File "test_ldap.py", line 13, in <module>
ld.simple_bind_s('<redacted>\%s' % user, passwd)
File "/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 228, in simple_bind_s
msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
File "/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 222, in simple_bind
return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
File "/usr/local/lib/python2.7/dist-packages/ldap/ldapobject.py", line 108, in _ldap_call
result = func(*args,**kwargs)
ldap.SERVER_DOWN: {'info': 'Error in the push function.', 'errno': 104, 'desc': "Can't contact LDAP server"}
So it seems this is one of those Windows things I just do not understand. After coming into work the next day, the same code above just started working. It seems that Windows Server may require many hours before LDAPS becomes available to connect to.
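In the meantime, a handshake probe that is independent of python-ldap can tell you the moment the listener actually starts speaking TLS. A rough sketch using Python 3's ssl module, with certificate checks disabled to match OPT_X_TLS_NEVER in the script above:

import socket
import ssl

SERVER = '<redacted>'  # the same AD host as in the script above

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # no verification, as in the python-ldap script

with socket.create_connection((SERVER, 636), timeout=10) as sock:
    with ctx.wrap_socket(sock) as tls:
        # If this prints, the LDAPS listener is up and completing handshakes.
        print('TLS established:', tls.version(), tls.cipher())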
I have coded a wireless probe sniffer using Python + scapy, and I want to use this script on OpenWrt routers.
Every time it captures a probe request from a nearby device, the information (MAC, power and probe) is sent to a web service.
My problem is the high CPU consumption. The script runs quite well on my laptop (it takes between 50% and 70% of the CPU), but when I run it on an OpenWrt router (400 MHz CPU, 16 MB RAM) it takes 99%.
Losing packets under high load is a well-known scapy problem (I have run the script on my laptop and on the router at the same time, and the router did not catch all the packets available in the air).
I already made some optimizations to the code, but I think there's more room for improvement.
This is the script:
#!/usr/bin/python
from scapy.all import *
import time
import thread
import requests
from datetime import datetime

PROBE_REQUEST_TYPE = 0
PROBE_REQUEST_SUBTYPE = 4
buf = {'arrival': 0, 'source': 0, 'dest': 0, 'pwr': 0, 'probe': 0}
uuid = '1A2B3'

def PacketHandler(pkt):
    global buf
    if pkt.haslayer(Dot11):
        if pkt.type == PROBE_REQUEST_TYPE and pkt.subtype == PROBE_REQUEST_SUBTYPE:
            arrival = int(time.mktime(time.localtime()))
            try:
                extra = pkt.notdecoded
            except:
                extra = None
            if extra != None:
                signal_strength = -(256 - ord(extra[-4:-3]))
            else:
                signal_strength = -100
            source = pkt.addr2
            dest = pkt.addr3
            pwr = signal_strength
            probe = pkt.getlayer(Dot11).info
            if buf['source'] != source and buf['probe'] != probe:
                print 'launch %r %r %r' % (source, dest, probe)
                buf = {'arrival': arrival, 'source': source, 'dest': dest, 'pwr': pwr, 'probe': probe}
                try:
                    thread.start_new_thread(exporter, (arrival, source, dest, pwr, probe))
                except:
                    print 'Error launching the thread %r' % source

def exporter(arrival, source, dest, pwr, probe):
    global uuid
    urlg = 'http://webservice.com/?arrival=' + str(arrival) + '&source=' + str(source) + '&dest=' + str(dest) + '&pwr=' + str(pwr) + '&probe=' + str(probe) + '&uuid=' + uuid
    try:
        r = requests.get(urlg)
        print r.status_code
        print r.content
    except:
        print 'ERROR in Thread:::::: %r' % source

def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, store=0)

if __name__ == "__main__":
    main()
[UPDATE]
After a lot of reading and deep searching (it seems not many people have found a full solution to this or a similar issue), I have found that you can filter directly from the sniff function, so I've added a filter to catch just probe requests.
def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, filter='link[26] = 0x40', store=0)
On my laptop it runs really smoothly, using between 1% and 3% of the CPU and catching most of the available packets in the air.
But when I run it on the router, the script throws an error and crashes:
Traceback (most recent call last):
File "snrV2.py", line 66, in <module>
main()
File "snrV2.py", line 63, in main
sniff(iface=sys.argv[1],prn=PacketHandler, filter='link[26] = 0x40',store=0)
File "/usr/lib/python2.7/site-packages/scapy/sendrecv.py", line 550, in sniff
s = L2socket(type=ETH_P_ALL, *arg, **karg)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 460, in __init__
attach_filter(self.ins, filter)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 132, in attach_filter
s.setsockopt(SOL_SOCKET, SO_ATTACH_FILTER, bpfh)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 99] Protocol not available
I have tried using BPF filter syntax (the same used in tcpdump, http://biot.com/capstats/bpf.html), which scapy is supposed to accept as well, but I get a filter syntax error.
Sniff function:
def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, filter='type mgt subtype probe-req', store=0)
Error:
Traceback (most recent call last):
File "snrV2.py", line 66, in <module>
main()
File "snrV2.py", line 63, in main
sniff(iface=sys.argv[1],prn=PacketHandler, filter='type mgt subtype probe-req', store=0)
File "/usr/lib/python2.7/site-packages/scapy/sendrecv.py", line 550, in sniff
s = L2socket(type=ETH_P_ALL, *arg, **karg)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 460, in __init__
attach_filter(self.ins, filter)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 120, in attach_filter
raise Scapy_Exception("Filter parse error")
NameError: global name 'Scapy_Exception' is not defined
On the router I have installed the latest versions of scapy and tcpdump.
Now I really don't know what to do.
I encountered a similar error (socket.error: [Errno 99] Protocol not available) when I tried to use sniff() with a filter on my NETGEAR WNDR4300.
After a lot of searching on Google, I found that the reason is that the Linux kernel of my router does not enable CONFIG_PACKET. It is mentioned in the Scapy installation guide, as follows:
Make sure your kernel has Packet sockets selected (CONFIG_PACKET)
If your kernel is < 2.6, make sure that Socket filtering is selected (CONFIG_FILTER)
If you set CONFIG_PACKET=y when you compile the kernel, BPF will be enabled for the underlying socket.
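If you want to verify this on the router before rebuilding anything, and assuming your kernel exposes its build configuration at /proc/config.gz (not all OpenWrt images do), a quick check could look like this:

import gzip

# Print the CONFIG_PACKET-related lines from the running kernel's config.
with gzip.open('/proc/config.gz', 'rb') as cfg:
    for line in cfg:
        if line.startswith(b'CONFIG_PACKET'):
            print(line.decode().strip())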
Trying to use Google App Engine's remote_api so that we can do line-by-line debugging through the IDE.
The remote API works great at first: the application successfully retrieves information from the database. The error occurs when webapp responds to the client browser.
The Code:
It is very similar to the example given in app engine's documentation:
from model import My_Entity
from google.appengine.ext.remote_api import remote_api_stub

# Test database calls
def get(w_self):
    remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, 'myapp.appspot.com')
    t_entity = My_Entity.get_by_key_name('the_key')
    w_self.response.set_status(200)
    # The error occurs AFTER this code executes, when webapp actually responds to the browser
Error Traceback:
The error seems to be related to the blobstore.
Is the remote API initialized too late in the code, after webapp has already done something with the blobstore through the localhost server? In that case the remote API might be redirecting blobstore requests to the remote server instead of to the localhost debug server where webapp expects them to be.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2795, in _HandleRequest
login_url)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3622, in CreateImplicitMatcher
get_blob_storage)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 420, in CreateUploadDispatcher
return UploadDispatcher()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 307, in __init__
get_blob_storage())
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 79, in GetBlobStorage
return apiproxy_stub_map.apiproxy.GetStub('blobstore').storage
AttributeError: 'RemoteStub' object has no attribute 'storage'
Should the remote API be initialized somewhere else in the code?
Or does this problem have to do with something else?
Thanks so much!
To get this working, you can use the testbed to start the stubs that are missing:
from google.appengine.ext import testbed

ADDRESS = ....
remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, ADDRESS)

# First, create an instance of the Testbed class.
myTestBed = testbed.Testbed()
# Then activate the testbed, which prepares the service stubs for use.
myTestBed.activate()
# Next, declare which service stubs you want to use.
myTestBed.init_blobstore_stub()
myTestBed.init_logservice_stub()
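When the request has finished (for example in a teardown hook), the testbed can be released again so later code gets the original stubs back; this assumes the same myTestBed object as above:

# Restore the original stubs once you are done with the testbed.
myTestBed.deactivate()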