btcli query
Enter wallet name (default): my-wallet-name
Enter hotkey name (default): my-hotkey
Enter uids to query (All): 18
Note that my-wallet-name and my-hotkey were actually correct names: my wallet with one of my hotkeys. And I decided to query UID 18.
But btcli is returning an error with no specific message:
AttributeError: 'Dendrite' object has no attribute 'forward_text'
Exception ignored in: <function Dendrite.__del__ at 0x7f5655e3adc0>
Traceback (most recent call last):
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_dendrite/dendrite_impl.py", line 107, in __del__
bittensor.logging.success('Dendrite Deleted', sufix = '')
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 341, in success
cls()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 73, in __new__
config = logging.config()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 127, in config
parser = argparse.ArgumentParser()
File "/usr/lib/python3.8/argparse.py", line 1672, in __init__
prog = _os.path.basename(_sys.argv[0])
TypeError: 'NoneType' object is not subscriptable
What does this mean?
How can I query a UID correctly?
I have tried to look for UIDs to query, but the tool does not give me any.
I was expecting a semantic error, or a way to look up a UID I can query, but not a TypeError.
It appears that command is broken and should be removed.
I opened an issue for you here: https://github.com/opentensor/bittensor/issues/1085
You can use the Python API instead, like this:
import bittensor
UID: int = 18
subtensor = bittensor.subtensor( network="nakamoto" )
forward_text = "testing out querying the network"
wallet = bittensor.wallet( name = "my-wallet-name", hotkey = "my-hotkey" )
dend = bittensor.dendrite( wallet = wallet )
neuron = subtensor.neuron_for_uid( UID )
endpoint = bittensor.endpoint.from_neuron( neuron )
response_codes, times, query_responses = dend.generate(endpoint, forward_text, num_to_generate=64)
response_code_text = response_codes[0]
query_response = query_responses[0]
I am trying to retrieve emails received after 12/1/2020 but get a ValueError. I tried adding .replace(microseconds=0) to str(message.ReceivedTime) but am still getting this error.
error
Traceback (most recent call last):
File "C:/Users/SLID/PycharmProjects/PPC_COASplitter/PPC_Ack.py", line 178, in <module>
get_url = readEmail()
File "C:/Users/zSLID/PycharmProjects/PPC_COASplitter/PPC_Ack.py", line 144, in readEmail
if str(message.ReceivedTime) >= '2020-12-1 00:00:00' and 'P1 Cust ID 111111 File ID' in message.Subject:
File "C:\Program Files (x86)\Python37-32\lib\site-packages\win32com\client\dynamic.py", line 516, in __getattr__
ret = self._oleobj_.Invoke(retEntry.dispid,0,invoke_type,1)
ValueError: microsecond must be in 0..999999
code
def readEmail():
    outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
    folder = outlook.Folders.Item('SharedMailbox, PPC-Investigation')
    inbox = folder.Folders.Item('Inbox')
    messages = inbox.Items
    messages.Sort("[ReceivedTime]", True)
    for message in messages:
        if str(message.ReceivedTime) >= '2020-12-1 00:00:00' and 'P1 Cust ID 111111 File ID' in message.Subject:
            print("")
            print('Received from: ', message.Sender)
            print("To: ", message.To)
            print("Subject: ", message.Subject)
            print("Received: ", message.ReceivedTime)
            print("Message: ", message.Body)
            get_url = re.findall(r'(https?://[^\s]+)', message.Body)[2].strip('>')
            return get_url
Firstly, you are comparing a DateTime value with a string. Convert the string to a date-time value.
Secondly, never loop through all items in a folder; use Items.Restrict instead (it returns a restricted Items collection):
messages = messages.Restrict("[Received] >= '1/12/2020 0:00am' ")
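For example, a small helper can build that filter string from a real datetime instead of hand-writing the literal. This is just a sketch; the exact date format Outlook accepts is locale-dependent, and the '%m/%d/%Y %I:%M%p' shape here simply mirrors the '1/12/2020 0:00am' style above:

```python
from datetime import datetime

def restrict_filter(cutoff):
    # Build the Restrict filter string from a datetime; the
    # '%m/%d/%Y %I:%M%p' shape mirrors the '1/12/2020 0:00am'
    # literal above (the format Outlook accepts is locale-dependent).
    return "[ReceivedTime] >= '{}'".format(cutoff.strftime("%m/%d/%Y %I:%M%p"))

# e.g. messages = messages.Restrict(restrict_filter(datetime(2020, 12, 1)))
```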
I'm trying to update a virtual switch security setting via Python but it fails with this message:
Traceback (most recent call last):
File "VPC_CRS-Compliance-Remediation-ESXi.py", line 787, in <module>
main()
File "VPC_CRS-Compliance-Remediation-ESXi.py", line 771, in main
current_setting(hosts, "check")
File "VPC_CRS-Compliance-Remediation-ESXi.py", line 653, in current_setting
host.configManager.networkSystem.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=switch.spec)
File "/root/pydev/py36-venv/lib64/python3.6/site-packages/pyVmomi/VmomiSupport.py", line 706, in <lambda>
self.f(*(self.args + (obj,) + args), **kwargs)
File "/root/pydev/py36-venv/lib64/python3.6/site-packages/pyVmomi/VmomiSupport.py", line 512, in _InvokeMethod
return self._stub.InvokeMethod(self, info, args)
File "/root/pydev/py36-venv/lib64/python3.6/site-packages/pyVmomi/SoapAdapter.py", line 1374, in InvokeMethod
raise obj # pylint: disable-msg=E0702
pyVmomi.VmomiSupport.InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: ',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = <unset>
I'm trying to set it like this:
for host in hosts:
    if host.runtime.connectionState == 'connected':
        #set_port_grp_security(host)
        security_policy = vim.host.NetworkPolicy.SecurityPolicy()
        security_policy.allowPromiscuous = False
        security_policy.macChanges = False
        security_policy.forgedTransmits = True
        network_policy = vim.host.NetworkPolicy(security=security_policy)
        switch = vim.host.VirtualSwitch.Config()
        switch.spec = vim.host.VirtualSwitch.Specification()
        switch.spec.policy = network_policy
        a = host.config.network.vswitch
        for i in a:
            print(i.spec.policy.security.allowPromiscuous)
            print(i.name)
            host.configManager.networkSystem.UpdateVirtualSwitch(vswitchName=i.name, spec=switch.spec)
I can't find any example anywhere of how this should be updated correctly; everybody is either creating or removing stuff, nobody is updating :D Any idea how to do it?
This did it for me (it's called in a function; the parameter is a host object from vim.HostSystem):
switches = host.config.network.vswitch
# attempt to set security policy
for vswitch in switches:
    host_id = host.configManager.networkSystem
    specs = vswitch.spec
    specs.policy.security.allowPromiscuous = var_allowPromiscuous
    specs.policy.security.macChanges = var_macChanges
    specs.policy.security.forgedTransmits = var_forgedTransmits
    host_id.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=specs)
Took me forever to figure out; hope it helps somebody.
I'm trying to build a simple Python script to count how many notes each user has entered in a Highrise CRM system, in the last 365 days. I have created a script that works for a tiny data set (a Highrise system with only 10 notes), but it times out on larger data sets (I assume because my script is horribly inefficient due to my lack of Python skills).
I am working on this, using Nitrous.io for the environment, using Python 3.3.
I'm using the Highton wrapper for the Highrise API calls. I haven't figured out how to read the API key in from a file successfully (tips here would be useful), but I can get it to work by typing the API key and username in directly; my big focus is getting the script to run on a production-size Highrise environment.
Can anyone offer recommendations on how to do this more elegantly/correctly?
My Python script is:
# Using https://github.com/seibert-media/Highton to integrate with Highrise CRM
# Change to Python 3.3 with this command: source py3env/bin/activate
# Purpose: Count activity by Highrise CRM user in the last 365 days
from highton import Highton
from datetime import date, datetime, timedelta
#initialize Highrise instance
#keyfile = open('highrisekeys.txt', 'r')
#highrise_key = keyfile.readline()
#highrise_user = keyfile.readline()
#print('api key = ', api_key, 'user = ', api_user)
high = Highton(
    api_key = 'THIS_IS_A_SECRET',
    user = 'SECRET'
)
users = high.get_users()
#print('users is type: ', type(users))
#for user in users:
#    print('Users: ', user.name)
people = high.get_people()
#print('people is type: ', type(people))
notes = []
tmp_notes = []
for person in people:
    #print('Person: ', person.first_name, person.last_name)
    #person_highrise_id = person.highrise_id
    #print(person.last_name)
    tmp_notes = high.get_person_notes(person.highrise_id)
    if (type(tmp_notes) is list):
        notes.extend(high.get_person_notes(person.highrise_id))  # No quotes for person_highrise_id in ()'s
        #print('Notes is type ', type(notes), ' for ', person.first_name, ' ', person.last_name)
#print('total number of notes is ', len(notes))
for user in users:
    #print(user.name, ' has ', notes.author_id.count(user.highrise_id), ' activities')
    counter = 0
    for note in notes:
        if (note.author_id == user.highrise_id) and (note.created_at > datetime.utcnow() + timedelta(days = -365)):
            counter += 1
    print(user.name, ' has performed ', counter, ' activities')
The error message I got was:
Traceback (most recent call last):
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
    body=body, headers=headers)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
    self._validate_conn(conn)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 761, in _validate_conn
    conn.connect()
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connection.py", line 204, in connect
    conn = self._new_conn()
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connection.py", line 134, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/util/connection.py", line 64, in create_connection
    for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/adapters.py", line 370, in send
    timeout=timeout
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 597, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/util/retry.py", line 245, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/packages/six.py", line 309, in reraise
    raise value.with_traceback(tb)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
    body=body, headers=headers)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
    self._validate_conn(conn)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 761, in _validate_conn
    conn.connect()
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connection.py", line 204, in connect
    conn = self._new_conn()
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/connection.py", line 134, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/packages/urllib3/util/connection.py", line 64, in create_connection
    for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', gaierror(-2, 'Name or service not known'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "highrise-analysis.py", line 35, in <module>
    tmp_notes = high.get_person_notes(person.highrise_id)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/highton/highton.py", line 436, in get_person_notes
    return self._get_notes(subject_id, 'people')
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/highton/highton.py", line 433, in _get_notes
    highrise_type, subject_id)), Note)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/highton/highton.py", line 115, in _get_data
    content = self._get_request(endpoint, params).content
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/highton/highton.py", line 44, in _get_request
    params=params,
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "/home/action/workspace/highrise-analysis/py3env/lib/python3.3/site-packages/requests/adapters.py", line 415, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', gaierror(-2, 'Name or service not known'))
Problem Solved: The Highrise API is rate limited to 500 requests per 10 second period from the same IP address for the same account, which I was exceeding when extracting the data. To resolve this, I added a time.sleep(.5) command to pause between each note data-pull per person, to avoid crossing that rate limit threshold.
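The pause can also be factored into a small helper; this is just a sketch of the idea, with the 0.5-second delay assumed from the 500-requests-per-10-seconds limit (about 2 requests per second):

```python
import time

def throttled(iterable, delay=0.5):
    # Yield each item, sleeping between items so the caller's API
    # requests stay under the rate limit (~2 requests/second here).
    for item in iterable:
        yield item
        time.sleep(delay)

# e.g. for person in throttled(people):
#          tmp_notes = high.get_person_notes(person.highrise_id)
```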
In addition, I broke the code into 2 separate functions:
1. Extract the users, people, and notes data and store them as local files with pickle, so I didn't need to pull the data each time I wanted to do some analysis
2. Perform analysis on the extracted pickle files
I also needed to add a try / except KeyError conditional, as some notes were created by Highrise users who are no longer active (people who left the company)
Here's the revised code:
# Using https://github.com/seibert-media/Highton to integrate with Highrise CRM
# Change to Python 3.3 with this command: source py3env/bin/activate
# Purpose: Count activity by Highrise CRM user in the last 365 days
from highton import Highton
from datetime import date, datetime, timedelta
import time
import pickle
# ===================================================================
def Create_Notes_Backup(highrise_key, highrise_user, notesfile, userfile, peoplefile, trailing_days = 365):
    # Function to create new Notes backup file of Highrise instance (this can take a while)
    print('Entered Create_Notes_Backup function')
    high = Highton(api_key = highrise_key, user = highrise_user)  # Connect to API
    print('Connected to Highrise')
    users = high.get_users()
    print('Pulled ', len(users), ' users')
    people = high.get_people()
    print('Pulled ', len(people), ' people')
    notes = []
    tmp_notes = []
    print('Started creating notes array')
    for person in people:
        tmp_notes = high.get_person_notes(person.highrise_id)
        time.sleep(.5)  # Pause per API limits https://github.com/basecamp/highrise-api
        if (type(tmp_notes) is list):
            print('Pulled ', len(tmp_notes), ' notes for ', person.first_name, ' ', person.last_name)
            if tmp_notes[0].created_at > datetime.utcnow() + timedelta(days = -trailing_days):
                notes.extend(high.get_person_notes(person.highrise_id))  # No quotes for person_highrise_id in ()'s
    print('Finished creating notes array')
    # Final Step: Export lists into pickle files
    with open(notesfile, 'wb') as f:
        pickle.dump(notes, f)
    with open(userfile, 'wb') as g:
        pickle.dump(users, g)
    with open(peoplefile, 'wb') as h:
        pickle.dump(people, h)
    print('Exported lists to *.bak files')
# ===================================================================
def Analyze_Notes_Backup(notesfile, userfile, peoplefile, trailing_days = 365):
    # Function to analyze notes backup:
    # 1. Count number of activities in last trailing_days days
    # 2. Identify date of last note update
    print('Entered Analyze_Notes_Backup function')
    notes = []
    users = []
    people = []
    # Load the lists
    with open(notesfile, 'rb') as a:
        notes = pickle.load(a)
    with open(userfile, 'rb') as b:
        users = pickle.load(b)
    with open(peoplefile, 'rb') as c:
        people = pickle.load(c)
    # Start counting
    user_activity_count = {}
    last_user_update = {}
    for user in users:
        user_activity_count[user.highrise_id] = 0
        last_user_update[user.highrise_id] = date(1901, 1, 1)
    print('Started counting user activity by note')
    for note in notes:
        if note.created_at > datetime.utcnow() + timedelta(days = -trailing_days):
            #print('Note created ', note.created_at, ' by ', note.author_id, ' regarding ', note.body)
            try:
                user_activity_count[note.author_id] += 1
            except KeyError:
                print('User no longer exists')
            try:
                if (note.created_at.date() > last_user_update[note.author_id]):
                    last_user_update[note.author_id] = note.created_at.date()
            except KeyError:
                print('...')
    print('Finished counting user activity by note')
    print('=======================================')
    f = open('highrise-analysis-output.txt', 'w')
    f.write('Report run on ')
    f.write(str(date.today()))
    f.write('\n Highrise People Count: ')
    f.write(str(len(people)))
    f.write('\n ============================ \n')
    for user in users:
        print(user.name, ' has performed ', user_activity_count[user.highrise_id], ' activities')
        f.write(str.join(' ', (user.name, ', ', str(user_activity_count[user.highrise_id]))))
        if last_user_update[user.highrise_id] == date(1901, 1, 1):
            print(user.name, ' has not updated Highrise in the last 365 days')
            f.write(', NO_UPDATES\n')
        else:
            print(user.name, ' last updated Highrise ', last_user_update[user.highrise_id])
            f.write(str.join(' ', (', ', str(last_user_update[user.highrise_id]), '\n')))
    all_done = time.time()
    f.close()
# ===================================================================
if __name__ == "__main__":
    trailing_days = 365  # Number of days back to monitor
    # Production Environment Analysis
    Create_Notes_Backup(MY_API_KEY, MY_HIGHRISE_USERID, 'highrise-production-notes.bak', 'highrise-production-users.bak', 'highrise-production-people.bak', trailing_days = 365)  # Production Environment
    Analyze_Notes_Backup('highrise-production-notes.bak', 'highrise-production-users.bak', 'highrise-production-people.bak', trailing_days = 365)
Mike,
What you are doing is going through all the users, and for each one going through all of the notes. Once you have the user, there should be a way to query for just the notes that belong to that user. You can probably include the date range in the query and just do a .count to see how many records match.
If you can't search notes by user, then go through the notes once and store the user ID and the count of that user's notes that match your criteria in a dictionary. Then you can match up the user IDs with the users table.
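The dictionary approach can be sketched like this; the (author_id, created_at) tuples are hypothetical stand-ins for Highton's note objects:

```python
from collections import Counter
from datetime import datetime

# Hypothetical (author_id, created_at) pairs standing in for real notes.
notes = [
    (101, datetime(2021, 5, 1)),
    (101, datetime(2019, 1, 1)),   # outside the window, not counted
    (202, datetime(2021, 6, 1)),
]
cutoff = datetime(2021, 1, 1)
# One pass over the notes: count matching notes per author_id.
counts = Counter(author for author, created in notes if created > cutoff)
```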
Good luck
I have successfully built a Python search app with pysolr. So far I have used two fields: id and title. Now I want to push two different versions of the title: the original, and the title after removing the stopwords. Any ideas? The following code works:
def BuildSolrIndex(solr, trandata):
    tmp = []
    for i, dat in enumerate(trandata):
        if all(d is not None and len(d) > 0 for d in dat):
            d = {}
            d["id"] = dat[0]
            d["title"] = dat[1]
            tmp.append(d)
    solr.add(tmp)
    solr.optimize()
    return solr
but this one does not:
def BuildSolrIndex(solr, trandata):
    tmp = []
    for i, dat in enumerate(trandata):
        if all(d is not None and len(d) > 0 for d in dat):
            d = {}
            d["id"] = dat[0]
            d["title_org"] = dat[1]
            d["title_new"] = CleanUpTitle(dat[1])
            tmp.append(d)
    solr.add(tmp)
    solr.optimize()
    return solr
Any ideas?
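(For reference, the CleanUpTitle helper isn't shown above; a minimal sketch of such a helper, assuming simple whitespace tokens and an illustrative stopword list, might look like this:)

```python
# Illustrative stopword list; a real app would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "on"}

def CleanUpTitle(title):
    # Drop stopwords case-insensitively, keeping the original word order.
    return " ".join(w for w in title.split() if w.lower() not in STOPWORDS)

print(CleanUpTitle("The History of Python"))  # -> "History Python"
```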
EDIT:
below is the exception:
Traceback (most recent call last):
...
solr = BuildSolrIndex(solr, trandata)
File "...", line 56, in BuildSolrIndex
solr.add(tmp)
File "build/bdist.linux-x86_64/egg/pysolr.py", line 779, in add
File "build/bdist.linux-x86_64/egg/pysolr.py", line 387, in _update
File "build/bdist.linux-x86_64/egg/pysolr.py", line 321, in _send_request
pysolr.SolrError: [Reason: None]
<response><lst name="responseHeader"><int name="status">400</int><int name="QTime">8</int></lst><lst name="error"><str name="msg">ERROR: [doc=...] unknown field 'title_new'</str><int name="code">400</int></lst></response>
This looks like an issue with your Solr schema.xml, as the exception indicates that "title_new" is not recognized as a valid field. This answer may be of assistance to you: https://stackoverflow.com/a/14400137/1675729
Check to make sure your schema.xml contains a "title_new" field, and that you've restarted the Solr services if necessary. If this doesn't solve your problem, come on back!
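If editing schema.xml isn't convenient, one alternative is to name the new key with a dynamic-field suffix so no schema change is needed. This is a sketch that assumes your schema defines the usual `*_s` string dynamicField, as Solr's example schemas do:

```python
def build_doc(doc_id, title, cleaned_title):
    # 'title_new_s' matches a '*_s' dynamicField (assumption: your
    # schema.xml defines one, as Solr's example schemas do), so the
    # field does not need to be declared explicitly before solr.add().
    return {"id": doc_id, "title_org": title, "title_new_s": cleaned_title}
```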